[Request]: Improve filesystem performance #882

@vsarunas

Description

Feature or enhancement request details

While trying to find a way to reproduce #614, I noticed that filesystem performance is roughly 3× slower than in a QEMU VM.

For example, create a large container:

container run --cpus 8 --memory 16G -ti ubuntu:latest /bin/bash

Can run a simple test:

apt-get update; apt-get install -y curl fio
curl -sL https://yabs.sh | bash -s -- -i -n -g

M1 Max:

fio Disk Speed Tests (Mixed R/W 50/50) (Partition /dev/vdb):
---------------------------------
Block Size | 4k            (IOPS) | 64k           (IOPS)
  ------   | ---            ----  | ----           ----
Read       | 112.95 MB/s  (28.2k) | 328.21 MB/s   (5.1k)
Write      | 112.88 MB/s  (28.2k) | 337.97 MB/s   (5.2k)
Total      | 225.84 MB/s  (56.4k) | 666.18 MB/s  (10.4k)
           |                      |
Block Size | 512k          (IOPS) | 1m            (IOPS)
  ------   | ---            ----  | ----           ----
Read       | 256.00 MB/s    (500) | 432.09 MB/s    (421)
Write      | 277.89 MB/s    (542) | 482.09 MB/s    (470)
Total      | 533.89 MB/s   (1.0k) | 914.18 MB/s    (891)

YABS completed in 14 sec

Multipass VM running Ubuntu with ext4:

fio Disk Speed Tests (Mixed R/W 50/50) (Partition /dev/sda1):
---------------------------------
Block Size | 4k            (IOPS) | 64k           (IOPS)
  ------   | ---            ----  | ----           ----
Read       | 134.06 MB/s  (33.5k) | 1.75 GB/s    (27.3k)
Write      | 133.97 MB/s  (33.4k) | 1.80 GB/s    (28.1k)
Total      | 268.04 MB/s  (67.0k) | 3.55 GB/s    (55.5k)
           |                      |
Block Size | 512k          (IOPS) | 1m            (IOPS)
  ------   | ---            ----  | ----           ----
Read       | 4.74 GB/s     (9.2k) | 6.52 GB/s     (6.3k)
Write      | 5.14 GB/s    (10.0k) | 7.27 GB/s     (7.1k)
Total      | 9.89 GB/s    (19.3k) | 13.79 GB/s   (13.4k)

Docker:

fio Disk Speed Tests (Mixed R/W 50/50) (Partition overlay):
---------------------------------
Block Size | 4k            (IOPS) | 64k           (IOPS)
  ------   | ---            ----  | ----           ----
Read       | 416.23 MB/s (104.0k) | 5.32 GB/s    (83.2k)
Write      | 415.96 MB/s (103.9k) | 5.48 GB/s    (85.6k)
Total      | 832.20 MB/s (208.0k) | 10.81 GB/s  (168.9k)
           |                      |
Block Size | 512k          (IOPS) | 1m            (IOPS)
  ------   | ---            ----  | ----           ----
Read       | 11.97 GB/s   (23.3k) | 14.16 GB/s   (13.8k)
Write      | 12.99 GB/s   (25.3k) | 15.79 GB/s   (15.4k)
Total      | 24.96 GB/s   (48.7k) | 29.95 GB/s   (29.2k)
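For reference, the slowdown implied by the Mixed R/W totals above can be computed directly. A quick sketch, with the numbers transcribed from the three tables and normalized against `container` (GB/s figures converted to MB/s):

```python
# Mixed R/W totals in MB/s, transcribed from the yabs tables above.
totals = {
    "container": {"4k": 225.84, "64k": 666.18,  "512k": 533.89,  "1m": 914.18},
    "multipass": {"4k": 268.04, "64k": 3550.0,  "512k": 9890.0,  "1m": 13790.0},
    "docker":    {"4k": 832.20, "64k": 10810.0, "512k": 24960.0, "1m": 29950.0},
}

# Print the slowdown ratio for each block size, relative to container.
for bs in ("4k", "64k", "512k", "1m"):
    base = totals["container"][bs]
    row = ", ".join(f"{name} {totals[name][bs] / base:.1f}x"
                    for name in ("multipass", "docker"))
    print(f"{bs}: {row}")
```

The gap is small at 4k but grows sharply at larger block sizes, which is consistent with the ~3× (or worse) slowdown reported above.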

These tests are short but representative. Over longer durations, for example, the container can sustain close to 1000 MiB/s with occasional drops to 300 MiB/s. Native block performance is ~3000 MB/s.

A long-duration test can be launched via:

fio --name=rand_rw_512k --ioengine=libaio --rw=randrw --rwmixread=50 --bs=512k --iodepth=64 --numjobs=2 --size=10G --runtime=3600 --time_based --gtod_reduce=1 --direct=1 --filename=./test.fio --group_reporting
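When comparing long runs, fio's machine-readable output is easier to diff than the human-readable summary. A small sketch, assuming `fio --output-format=json` output where per-job `bw` is reported in KiB/s (the sample report here is synthetic, not real fio output):

```python
import json

def summarize_fio(report: str) -> dict:
    """Summarize per-job read/write bandwidth from `fio --output-format=json`.

    fio reports `bw` in KiB/s; convert to MB/s (10^6 bytes) to match yabs.
    """
    data = json.loads(report)
    out = {}
    for job in data["jobs"]:
        out[job["jobname"]] = {
            direction: job[direction]["bw"] * 1024 / 1e6  # KiB/s -> MB/s
            for direction in ("read", "write")
        }
    return out

# Minimal synthetic report for illustration only.
sample = json.dumps({"jobs": [
    {"jobname": "rand_rw_512k",
     "read":  {"bw": 512000},   # 512000 KiB/s
     "write": {"bw": 540000}}]})
print(summarize_fio(sample))
```

Adding `--output-format=json` to the fio command above and feeding the result through something like this makes it easy to track the sustained-throughput drops over an hour-long run.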

Instruments has mostly shown pwritev calls, largely from an IO thread:

[Instruments screenshot]
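For context on what that trace shows: pwritev(2) is a positioned, vectored write that submits several buffers at one file offset in a single syscall, with no separate seek. A minimal illustration of the pattern (using Python's `os.pwritev` wrapper; this is just a sketch of the syscall, not the virtualization stack's actual IO path):

```python
import os
import tempfile

# pwritev: write multiple buffers at a given offset in one syscall.
fd, path = tempfile.mkstemp()
try:
    buffers = [b"A" * 4096, b"B" * 4096]     # two 4 KiB buffers
    written = os.pwritev(fd, buffers, 8192)  # both written at offset 8192
    print(written)                           # total bytes written
finally:
    os.close(fd)
    os.unlink(path)
```

A hot loop of small pwritev calls from a single IO thread would be consistent with the syscall overhead dominating at small block sizes.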

Code of Conduct

  • I agree to follow this project's Code of Conduct
