I had a task to compare different protocols (and their different implementations), such as SD, SPI, and SDIO (in FreeBSD’s MMCCAM implementation), by accessing an SD card. For an unbiased comparison, I had to eliminate the card’s file system type from the equation.
In this blog post, I’ll share my findings on the file systems used for these cards (SD and MMC) and how they affect performance.
So, let’s start with a very quick introduction to some basic concepts:
- MMC: MultiMediaCard is a memory card unveiled in 1997 by SanDisk and Siemens, based on NAND flash memory. eMMC is a regular MMC in a BGA package.
- SD Card: Secure Digital Card was introduced in 1999; it is based on MMC but adds extra features such as security.
Broadly, an MMC is made up of 3 parts:
- MMC interface – Responsible for handling communication
- FTL (Flash Translation Layer) – a small controller running firmware. Its main task is to translate logical sector addresses into physical NAND addresses. It also handles bad block management, wear leveling, and garbage collection.
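The logical-to-physical remapping at the heart of the FTL can be sketched in a few lines of Python. This is a toy model, not any real controller’s firmware; the class name and page counts are made up for illustration:

```python
class SimpleFTL:
    """Toy FTL: maps logical sectors to physical NAND pages."""

    def __init__(self, num_physical_pages):
        self.mapping = {}                          # logical sector -> physical page
        self.free_pages = list(range(num_physical_pages))
        self.flash = {}                            # physical page -> data

    def write(self, logical_sector, data):
        # NAND pages cannot be overwritten in place, so every write
        # lands on a fresh page; the old page becomes invalid.
        new_page = self.free_pages.pop(0)
        self.flash[new_page] = data
        self.mapping[logical_sector] = new_page

    def read(self, logical_sector):
        return self.flash[self.mapping[logical_sector]]

ftl = SimpleFTL(num_physical_pages=8)
ftl.write(0, b"hello")
ftl.write(0, b"world")     # rewriting sector 0 moves it to a new physical page
print(ftl.read(0))         # b'world'
```

The host only ever sees sector 0; the FTL silently moved its data from physical page 0 to page 1, which is exactly the hook used for wear leveling and garbage collection.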
- Storage Area – an array of SLC (single-level cell), MLC, or TLC NAND chips. NOR flash is the older technology: it offers high read performance at the cost of smaller capacities, can typically be programmed a byte at a time, and requires seconds for an erase operation. NAND flash offers higher capacities with significantly faster write and erase performance (a NAND device can erase in milliseconds), but it requires a much more complicated input/output (I/O) interface and must be programmed in multi-byte bursts (typically 512 bytes).
BLOCK VS MEMORY TECHNOLOGY DEVICES (MTD)
Block and MTD are two storage abstraction types. In the block model, the entire memory is partitioned into blocks, which are grouped into sectors. It is assumed that reading/writing a block is faster than reading/writing a cell, and that it is fine to write to the same block repeatedly; thus, every read/write operation occurs in blocks. This model is well suited to disk drives.
In the case of flash memory, each block has to be erased before it can be rewritten. The erasing circuitry is quite complex and large, so block sizes are increased to reduce the amount of erasing circuitry needed. One must also take into account the limit on a block’s read/write cycles. There are two ways to deploy flash physically:
- Using FTL:- A controller handles wear leveling by remapping logical blocks to physical blocks, so that less-worn blocks are used more often and wear is leveled across the memory. This means any normal file system can be used, but the remapping system is difficult to implement correctly; in particular, handling power failure correctly is very hard. This is the approach used by SSDs, USB sticks, SD cards, etc.
- Using MTD:- A new device type is created for the flash storage: the raw flash is exposed through a thin physical layer, and the file system itself takes responsibility for wear management. Such devices are called MTD devices. They are widely used in tablets, mobile phones, embedded systems, etc.
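The erase-before-write constraint that drives both approaches can be modelled with a toy erase block. The page count here is invented for illustration; real NAND blocks hold far more pages:

```python
class NandBlock:
    """Toy NAND erase block: a page can be programmed only once;
    rewriting requires erasing the whole block first."""
    PAGES = 4

    def __init__(self):
        self.pages = [None] * self.PAGES   # None means erased

    def program(self, page, data):
        if self.pages[page] is not None:
            raise RuntimeError("page not erased; erase the whole block first")
        self.pages[page] = data

    def erase(self):
        # Erase works only at block granularity, never per page.
        self.pages = [None] * self.PAGES

blk = NandBlock()
blk.program(0, b"a")
try:
    blk.program(0, b"b")     # in-place overwrite is rejected
except RuntimeError:
    blk.erase()              # the entire block must be erased first
blk.program(0, b"b")
```

Whether that erase-and-rewrite dance is hidden by an FTL or handled by the file system itself is precisely the FTL-vs-MTD split above.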
Major Types of File Systems
- JOURNALING FS:- A journaling fs keeps track of all modifications in a journal held in a dedicated area. The journal allows restoring a corrupted fs: in case of corruption, the incomplete modification is rolled back, restoring the previous consistent state. Examples: EXTx, XFS, Reiser4.
- B-TREE/COW FS:- A B+ tree is a data structure that generalizes binary search trees. Copy-on-Write (CoW) is a mechanism that performs modifications on a copy of the actual data; the copy replaces the original only if the transaction succeeds, and in case of corruption the copy is simply discarded. Examples: ZFS, BTRFS, NILFS2.
- Log FS:- A log-structured fs writes data and its metadata sequentially to the storage as a log. Recovery from corruption starts from the last consistent entry in the log. Examples: F2FS, NILFS2, JFFS2, etc.
Expectations from a flash file system
We generally expect the following 3 properties in a flash file system:
Garbage Collection
It’s the process of reclaiming invalid blocks (blocks holding completely or partially invalid data). It involves moving any still-valid data within such a block to a new block and then erasing the invalid block. It happens in the background or when the file system requires space.
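The reclaiming process just described can be sketched as a single pass over a toy block map (the data layout here is invented for illustration):

```python
def garbage_collect(blocks):
    """Reclaim blocks holding any invalid pages: copy the still-valid
    pages out, then erase the block. `blocks` maps a block id to a
    list of (valid, data) pages."""
    rescued = []                        # valid data moved to fresh blocks
    for blk, pages in blocks.items():
        if not all(valid for valid, _ in pages):
            rescued += [d for valid, d in pages if valid]
            blocks[blk] = []            # block erased, ready for reuse
    return rescued

blocks = {
    0: [(True, b"keep"), (False, b"stale")],   # partially invalid
    1: [(True, b"ok")],                        # fully valid: untouched
}
moved = garbage_collect(blocks)
print(moved, blocks[0])    # [b'keep'] []
```

Block 0 gives up its one valid page and is erased; block 1 is left alone, since erasing fully valid blocks would only add wear.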
Managing Bad Blocks
Bad blocks can appear due to excessive use of a block beyond its write cycles, or they may already be present from manufacturing. Bad blocks are identified by invalid ECC and are then recorded in a Bad Block Table.
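The detect-and-retire step can be sketched with a deliberately naive checksum standing in for ECC. Real NAND uses Hamming or BCH codes that can also correct errors, not just detect them; everything named here is illustrative:

```python
def parity(data):
    """Toy 'ECC': XOR parity of all bytes (stand-in for Hamming/BCH)."""
    p = 0
    for b in data:
        p ^= b
    return p

def scan_block(block_id, data, stored_ecc, bad_block_table):
    """Retire a block whose stored ECC no longer matches its contents."""
    if parity(data) != stored_ecc:
        bad_block_table.add(block_id)   # moved to the Bad Block Table
        return False
    return True

bad = set()
scan_block(0, b"ok", parity(b"ok"), bad)   # healthy block, ECC matches
scan_block(1, b"ox", parity(b"ok"), bad)   # bit flip -> ECC mismatch
print(bad)    # {1}
```

Once a block is in the table, the FTL (or MTD file system) simply never allocates it again.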
Wear Leveling
Due to the finite number of write cycles of NAND/NOR memory, the wear level of each block should be evened out to maximize the life of the device. Wear leveling is done via static wear-leveling algorithms and dynamic wear-leveling algorithms.
Dynamic wear-leveling algorithms simply remap logical addresses to physical addresses. Static wear-leveling algorithms target an even more severe problem: the finite number of read cycles possible between erase cycles. They periodically move very old data to new blocks.
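The selection policies behind the two algorithms reduce to picking the least-worn block from the right candidate pool. A minimal sketch, with erase counts invented for the example:

```python
def pick_free_block(erase_counts, free_blocks):
    """Dynamic wear leveling: direct the next write to the free block
    with the fewest erases, so wear spreads evenly."""
    return min(free_blocks, key=lambda b: erase_counts[b])

def pick_cold_block(erase_counts, cold_blocks):
    """Static wear leveling: pick the least-worn block holding old,
    rarely-rewritten data, so its contents can be relocated and the
    block returned to active use."""
    return min(cold_blocks, key=lambda b: erase_counts[b])

erase_counts = {0: 500, 1: 12, 2: 300, 3: 7}
print(pick_free_block(erase_counts, [0, 2]))   # 2 (least-worn free block)
print(pick_cold_block(erase_counts, [1, 3]))   # 3 (least-worn cold block)
```

The dynamic policy only ever touches blocks already being rewritten; the static policy is what stops blocks full of never-changing data from sitting at a low erase count forever while the rest of the device wears out.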