Hi! In this post we will explore FIO’s RTEMS port and see how it can be used to benchmark RTEMS filesystems and drivers.
First, let’s have a quick look at all the RTEMS filesystems:
RTEMS FILESYSTEMS
RTEMS mainly supports two types of filesystems: network and physical. Benchmarking support for network filesystems isn’t available yet; however, nearly every physical filesystem can be benchmarked and compared.

Heap-based filesystems are those which use malloc() for file allocation. In other words, they reside entirely in heap memory. They are mainly used to provide basic directory/file management even when there is no dedicated physical storage for files, and they also facilitate mounting other filesystems. For benchmarking IMFS/M-IMFS no external device is needed, unlike block-based filesystems, which need either a RAM-disk or a flash device to work with. Further detailed information on these filesystems is available here: https://devel.rtems.org/wiki/Developer/FileSystems
Now, let’s move on to the benchmarking section (to which this post is dedicated):
Step 1# Preparation of the benchmarking tool
As of today (02/08/2018), FIO’s RTEMS port isn’t yet merged into fio’s official repository (here’s the corresponding thread: https://www.spinics.net/lists/fio/msg07157.html). I’ll update this section once it gets merged. Till then, we can use my repository https://github.com/madaari/fio/tree/paper . Please stick to the ‘paper‘ branch, as it will be much more stable than master. For benchmarking purposes, it’s always recommended to use only one version of the tool across all results.
Preparation of RTEMS toolchain
Toolchain, required for cross-compiling fio for desired architecture (like ARM in this example) can be generated by using RTEMS Source Builder(RSB).
Please note that fio’s build for RTEMS has been tested with RSB version 5 (commit id: 25f4db09c85a52fb1640a29f9bdc2de8c2768988), and it may not work with older versions.
Moreover, to enable POSIX support (required by fio), build the BSP using RTEMS v5 with the --enable-posix option. After that, if needed (e.g. for using the SD card driver on the BeagleBone Black), one may also need to build rtems-libbsd for the desired BSP.
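As a rough sketch, the RSB and BSP build steps look something like this. The install prefix and build-directory names here are just examples from my setup; the build-set name `5/rtems-arm` and the `--enable-posix` flag are the parts that matter:

```shell
$ # Build the ARM toolchain with RSB (prefix is an example path)
$ cd rtems-source-builder/rtems
$ ../source-builder/sb-set-builder --prefix=/home/uka_in/development/sandbox/5 5/rtems-arm

$ # Configure and build the BSP with POSIX enabled
$ cd rtems
$ ./bootstrap
$ mkdir b-beagleboneblack && cd b-beagleboneblack
$ ../configure --target=arm-rtems5 --enable-rtemsbsp=beagleboneblack --enable-posix
$ make && make install
```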
Cross-Compiling FIO
Fetch the fio repository and then configure it as shown below:
The variable to be passed for the toolchain path is TOOL_PATH_PREFIX, which in this case would be:
$ export TOOL_PATH_PREFIX=/home/uka_in/development/sandbox/5
After setting up the variable, the next step would be to configure and build fio as:
$ make clean
$ ./configure --cc=$TOOL_PATH_PREFIX/bin/arm-rtems5-gcc --disable-optimizations --extra-cflags=-O3
$ make fio CROSS_COMPILE=$TOOL_PATH_PREFIX/bin/arm-rtems5- V=1
By now, you will have the fio binary ready, which can be loaded onto the target using a bootloader like u-boot.
Note:- While using u-boot to load the fio binary, use the following configuration:
boot=fatload mmc 0 0x81000000 $app ; fatload mmc 0 0x88000000 ${DTB_INSTALL_NAME} ; bootm 0x81000000 - 0x88000000
where $app is the name of the image file to be loaded. Just make sure there is enough space between the load and execution addresses. Otherwise, I would recommend using this script after making the above-mentioned changes (mainly changing the load address from 0x80800000 to 0x81000000). The main reason for this is to avoid the following error during uncompression:
## Booting kernel from Legacy Image at 80800000 ...
   Image Name:   RTEMS
   Created:      2018-07-15 12:05:41 UTC
   Image Type:   ARM Linux Kernel Image (gzip compressed)
   Data Size:    1930907 Bytes = 1.8 MiB
   Load Address: 80000000
   Entry Point:  80000000
   Verifying Checksum ... OK
## Flattened Device Tree blob at 88000000
   Booting using the fdt blob at 0x88000000
   Uncompressing Kernel Image ... Error: inflate() returned -5
Image too large: increase CONFIG_SYS_BOOTM_LEN
Must RESET board to recover
This happens because during gunzip (to the load address, i.e. 0x80000000), u-boot overwrites the gzipped kernel image (at 0x80800000), hence the error during uncompression. It’s resolved via a small change in uEnv.txt: `boot=fatload mmc 0 0x81000000 rtems-app.img ; fatload mmc 0 0x88000000 am335x-boneblack.dtb ; bootm 0x81000000 - 0x88000000`
Step 2# Setting up the Benchmarking Environment
The user might want to customize the rtems configuration for simulating different environments (to notice their effect on benchmarking stats) or might want to set up a filesystem on any mounted device like flash, RAM-disk. This sub-section will begin with a small description of rtems-init.c file and will then describe the changes required for benchmarking different file systems and device drivers.
Just like for any other RTEMS application, rtems-init.c is the main configuration file, with the Init() function, device drivers, and libblock/cache settings. Making a change in this file will not affect the benchmarking tool (fio) in any way, but it will alter the benchmarking environment.
Evaluating the effect of cache size
For example, to evaluate the effect of cache size on the stats, change the libblock and cache settings via the following lines:
#define CONFIGURE_BDBUF_BUFFER_MAX_SIZE (1024)
#define CONFIGURE_BDBUF_MAX_READ_AHEAD_BLOCKS 0
#define CONFIGURE_BDBUF_CACHE_MEMORY_SIZE (10 * 1024)
Note:- When using DOSFS, don’t set the cache size below 8 KiB; that might have an adverse effect on the rtems-shell application (in my case, it didn’t even load up). Also, heap-based filesystems like IMFS/M-IMFS are virtually unaffected by any change in cache settings, for a pretty straightforward reason: heap-based filesystems just don’t use the libblock API. Further information on these options can be found in the RTEMS documentation.
Setting up RAM-disk
For benchmarking on a RAM-disk, the following configuration is required:
/* The RAM-disk path can be /dev/rda and the mount path /mnt.
   This configuration sets up a RAM-disk with RFS as the filesystem. */
Init()
{
  ...
  rv = rtems_rfs_format(RAMDISK_PATH, &rfs_config);
  assert(rv == 0);
  rv = mount_and_make_target_path(
    RAMDISK_PATH,
    MOUNT_PATH,
    RTEMS_FILESYSTEM_TYPE_RFS,
    RTEMS_FILESYSTEM_READ_WRITE,
    NULL
  );
  assert(rv == 0);
  ...
}

rtems_ramdisk_config rtems_ramdisk_configuration[] = {
  { .block_size = 512, .block_num = 131072 * 2 }
};

size_t rtems_ramdisk_configuration_size =
  RTEMS_ARRAY_SIZE(rtems_ramdisk_configuration);

#define CONFIGURE_APPLICATION_EXTRA_DRIVERS RAMDISK_DRIVER_TABLE_ENTRY
Setting the RAM-disk block size to anything other than 512 B might not work; in that case, rtems_rfs_format won’t exit successfully. Also, note that for IMFS you don’t need to set up the RAM-disk, since heap-based filesystems allocate from RAM anyway! Please also choose the RAM-disk size carefully. For example, in my case the BeagleBone Black has 512 MB of RAM, of which the RTEMS BBB BSP uses only 216 MB, so it’s wise to use only 128 MB (roughly 50% of the available RAM) for the RAM-disk. For benchmarking, it’s always better to use larger files for consistent bandwidth stats, so a very small RAM-disk is also not viable.
As a general observation, the cache has a negative effect on RAM-disk bandwidth!!
Setting up the SD card
For benchmarking the SD card driver, one can use the media server (config given below) to mount the card. Please note that the SD card driver acts as a bottleneck, and thus evaluating RTEMS filesystems on an SD card isn’t really a good idea.
#include <rtems/media.h>

static rtems_status_code media_listener(
  rtems_media_event event,
  rtems_media_state state,
  const char *src,
  const char *dest,
  void *arg
)
{
  if (dest != NULL) {
    printf(", dest = %s", dest);
  }
  if (arg != NULL) {
    printf(", arg = %p\n", arg);
  }
  return RTEMS_SUCCESSFUL;
}

static void early_initialization(void)
{
  rtems_status_code sc;

  sc = rtems_bdbuf_init();
  assert(sc == RTEMS_SUCCESSFUL);

  sc = rtems_media_initialize();
  assert(sc == RTEMS_SUCCESSFUL);

  sc = rtems_media_listener_add(media_listener, NULL);
  assert(sc == RTEMS_SUCCESSFUL);

  sc = rtems_media_server_initialize(
    200,
    32 * 1024,
    RTEMS_DEFAULT_MODES,
    RTEMS_DEFAULT_ATTRIBUTES
  );
  assert(sc == RTEMS_SUCCESSFUL);
}
Benchmarking IMFS
IMFS, being the default filesystem, is pretty easy to benchmark. However, there are some IMFS settings which might need to be taken care of:
Init()
{
  ...
  rv = mount(NULL, "/mnt", "imfs", RTEMS_FILESYSTEM_READ_WRITE, NULL);
  assert(rv == 0);
  ...
}

#define CONFIGURE_USE_IMFS_AS_BASE_FILESYSTEM
#define CONFIGURE_FILESYSTEM_IMFS
#define CONFIGURE_IMFS_MEMFILE_BYTES_PER_BLOCK 512
Also, please don’t try to mount a RAM-disk using IMFS; it would fail: https://lists.rtems.org/pipermail/users/2018-July/032466.html
The IMFS block size determines the maximum size of a file that can be created; a 512 B block size allows the largest possible file size. However, for reasons unknown (at least to me), benchmarking with a file size greater than 80-82 MB raises an IO error. Maybe on other boards or with other RAM sizes this limit changes.
Benchmarking RFS
For benchmarking RFS, the first step is to set up a RAM-disk with rtems_rfs_format and the RTEMS_FILESYSTEM_TYPE_RFS setting. The next step is to add the RFS configuration, like:
#include <rtems/rtems-rfs-format.h>

static const rtems_rfs_format_config rfs_config[] = {
  { .block_size = 1024 }
};

/* Pass rfs_config as a parameter to rtems_rfs_format(). */
Init()
{
  ...
  rv = rtems_rfs_format(RAMDISK_PATH, &rfs_config[0]);
  ...
}

#define CONFIGURE_FILESYSTEM_RFS
Setting an RFS block size other than 1024 might raise an error as well!
Benchmarking DOSFS
For DOSFS, msdos_format() and RTEMS_FILESYSTEM_TYPE_DOSFS (while mounting the RAM-disk) can be used, like:
#include <rtems/dosfs.h>

Init()
{
  ...
  /* NULL selects the default format parameters */
  rv = msdos_format(RAMDISK_PATH, NULL);
  assert(rv == 0);
  ...
}

#define CONFIGURE_FILESYSTEM_DOSFS
The default block size in case of DOSFS is 512 B, and I couldn’t find a way to change it: https://lists.rtems.org/pipermail/users/2018-July/032475.html
Possible pitfalls while benchmarking on DOSFS:
- Please make sure that the files created during benchmarking do not lie in the mount root directory, i.e. ‘/mnt’; instead use a directory like ‘/mnt/1’. This is because there’s an upper limit on the number of files that can be created in the root directory of a FAT filesystem.
- Make sure the cache size when using DOSFS is more than 8 KiB.
General observation: DOSFS, being much older, is more optimized than RFS.
General settings
Working with multiple files
When benchmarking on many files concurrently, don’t forget to set the following parameter:
#define CONFIGURE_LIBIO_MAXIMUM_FILE_DESCRIPTORS 320
where 320 is the maximum number of file descriptors that can be open concurrently.
Verification
Many times, you may want to verify your settings, either of the filesystem or of the block size. In that case, it’s very handy to have the following shell commands:
#define CONFIGURE_SHELL_COMMAND_BLKSTATS
#define CONFIGURE_SHELL_COMMAND_CPUINFO
#define CONFIGURE_SHELL_COMMAND_MKRFS
#define CONFIGURE_SHELL_MOUNT_RFS
#define CONFIGURE_SHELL_MOUNT_MSDOS
#define CONFIGURE_SHELL_MOUNT_DOSFS
#define CONFIGURE_SHELL_COMMAND_MSDOSFMT
#define CONFIGURE_SHELL_COMMAND_MOUNT
#define CONFIGURE_SHELL_COMMAND_UNMOUNT
// for the editor
#define CONFIGURE_SHELL_COMMAND_EDIT
Step 3# Selecting the Job configuration file
After setting up the environment and the tool, it’s time to select the job configuration file for FIO.
Job configuration files are used to set the IO type, directory, IO size, block size, and numerous other settings. Please have a look at the fio documentation for a detailed explanation of the job file parameters. Here I will cover only a few of them, which I have actually used.
Selecting the IOengine
Following are the different ioengines that can be used with fio’s RTEMS port:
- sync – Uses basic read() and write() system calls for IO, with lseek() used to position the IO. fsync() and fdatasync() are used to sync the file in case of buffered IO.
- psync – Uses pread() and pwrite() system calls for IO.
- vsync – Uses vectored read and write operations (readv() and writev()).
- ftruncate – Uses ftruncate() to set the file size and then write() for IO.
- filecreate – Just creates empty files – used to evaluate the latency of creating a file.
The ioengine determines how the IO takes place: for example, in the case of `sync`, read() and write() calls are used to do the IO.
Selecting IO direction
IO direction refers to whether you want to read from or write to a file, and whether in a random or sequential way. Following are the different IO types available to choose from:
- read – Only read a file, sequentially
- write – Only write a file, sequentially
- rw – Do both sequential read and write operations
- randread – Read a file in a random manner
- randwrite – Write a file in a random manner
- randrw – Random reads and writes on a file
The split between read and write operations in mixed IO types is 50-50% by default; however, it can be changed.
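For instance, fio’s rwmixread parameter sets the read share of a mixed workload. A hypothetical job file with a 70/30 split (the job name and paths are just illustrative):

```ini
[global]
ioengine=sync
size=80M
directory=/mnt
thread=1

[mixed-70-30]
rw=randrw
rwmixread=70
bs=4k
```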
Other parameters
- size :- The size of the total IO. If there is only one file, then size equals the file size.
- nrfiles :- Number of files over which the IO is uniformly distributed.
- thread :- Use threads for the IO jobs. Set thread=1 on RTEMS, since process-based jobs (fork) aren’t available.
- direct :- Used to bypass the cache while doing IO. Note that this option won’t work on RTEMS.
- directory :- Directory to use while creating files.
- ss_ramp :- Used to warm up the benchmarking tool before the IO actually starts. It should be used when benchmarking on smaller files.
- bs :- IO block size.
Here is a sample configuration file:
[global]
ioengine=sync
size=80M
rw=write
directory=/mnt
thread=1
ss_ramp=5

[imfs-write-cfg1-4k]
bs=4k
The configuration file can be entered in the RTEMS shell directly by using the file editor, and can then be supplied to fio as a parameter, like: $ fio config_file_name
Step 4# Interpreting the results
Following is a sample output in the case of RTEMS:
fat-randrw-sync-16k: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=sync, iodepth=1
fio-3.6-149-g82eb4-dirty
Starting 1 thread
fat-randrw-sync-16k: Laying out IO file (1 file / 100MiB)
Jobs: 1 (f=1): [m(1)][90.9%][r=5204KiB/s,w=5220KiB/s][r=325,w=326 IOPS][eta 00m:01s]
fat-randrw-sync-16k: (groupid=0, jobs=1): err= 0: pid=1: Fri Jan 1 00:00:30 1988
  read: IOPS=314, BW=5025KiB/s (5146kB/s)(48.9MiB/9966msec)
    clat (usec): min=128, max=4330, avg=1521.40, stdev=1010.44
     lat (usec): min=129, max=4331, avg=1522.66, stdev=1010.44
    clat percentiles (usec):
     |  1.00th=[  139],  5.00th=[  200], 10.00th=[  314], 20.00th=[  562],
     | 30.00th=[  783], 40.00th=[ 1057], 50.00th=[ 1336], 60.00th=[ 1647],
     | 70.00th=[ 2008], 80.00th=[ 2474], 90.00th=[ 3032], 95.00th=[ 3425],
     | 99.00th=[ 3916], 99.50th=[ 4113], 99.90th=[ 4293], 99.95th=[ 4293],
     | 99.99th=[ 4359]
   bw (  KiB/s): min= 4332, max= 5587, per=98.07%, avg=4928.16, stdev=363.67, samples=19
   iops        : min=  270, max=  349, avg=307.58, stdev=22.89, samples=19
  write: IOPS=328, BW=5250KiB/s (5376kB/s)(51.1MiB/9966msec)
    clat (usec): min=88, max=10844, avg=1545.92, stdev=1109.49
     lat (usec): min=90, max=10846, avg=1548.11, stdev=1109.48
    clat percentiles (usec):
     |  1.00th=[  109],  5.00th=[  182], 10.00th=[  293], 20.00th=[  537],
     | 30.00th=[  783], 40.00th=[ 1045], 50.00th=[ 1352], 60.00th=[ 1696],
     | 70.00th=[ 2040], 80.00th=[ 2507], 90.00th=[ 3097], 95.00th=[ 3458],
     | 99.00th=[ 4113], 99.50th=[ 4228], 99.90th=[10028], 99.95th=[10290],
     | 99.99th=[10814]
   bw (  KiB/s): min= 4752, max= 5630, per=97.82%, avg=5134.53, stdev=245.69, samples=19
   iops        : min=  297, max=  351, avg=320.53, stdev=15.18, samples=19
  lat (usec)   : 100=0.20%, 250=7.53%, 500=10.31%, 750=10.48%, 1000=9.70%
  lat (msec)   : 2=31.00%, 4=29.53%, 10=1.17%, 20=0.06%
  cpu          : usr=100.00%, sys=100.00%, ctx=18446744073709551615, majf=18446744073709551615, minf=18446744073709551615
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=3130,3270,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
   READ: bw=5025KiB/s (5146kB/s), 5025KiB/s-5025KiB/s (5146kB/s-5146kB/s), io=48.9MiB (51.3MB), run=9966-9966msec
  WRITE: bw=5250KiB/s (5376kB/s), 5250KiB/s-5250KiB/s (5376kB/s-5376kB/s), io=51.1MiB (53.6MB), run=9966-9966msec
There are numerous parameters, like bandwidth, latency distribution, IOPS, clat, and issued rwts, through which one can get an overview of how the device performs under different conditions. A complete description of these parameters can be found in the fio documentation. Please note that the ctx, majf, and minf fields are misconfigured for RTEMS.
In case of any query, or for reporting any bugs in fio’s RTEMS port:
Please send an e-mail to me (<dev.madaari@gmail.com>) with <users@rtems.org> and <fio@vger.kernel.org> on CC. Also, attach the complete fio output, the job configuration file, and a small description of your RTEMS environment settings (if possible, attach the complete rtems-init.c file).
ToDo:- I’ve got some really cool benchmarking statistics for RTEMS filesystems. But as of now, I can’t publish them, since we have an EwiLi paper in progress. Once it’s out, I can further extend this post with the results and their interpretations. However, the raw data can always be found in my GitHub notes here: https://gist.github.com/madaari
Thanks!