Just found: an interesting way to check how your NAND driver (and filesystem kernel module) behave during massive write operations. I added extra printk() calls to the NAND driver kernel module's write() path and collected the data while filling all available filesystem space.
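For reference, the instrumentation itself can be a single printk() in the driver's write callback. A minimal sketch, assuming a Linux MTD-style driver; the function names here are hypothetical, and the real hook point is whatever implements your driver's write path (e.g. the mtd->_write callback):

```c
/* Hypothetical wrapper around the driver's real write routine;
 * do_real_write() stands in for the original implementation. */
#include <linux/kernel.h>
#include <linux/mtd/mtd.h>

static int my_nand_write(struct mtd_info *mtd, loff_t to, size_t len,
                         size_t *retlen, const u_char *buf)
{
	/* one greppable line per write: the offset is what gets plotted */
	printk(KERN_DEBUG "nand_write: off=%llu len=%zu\n",
	       (unsigned long long)to, len);

	return do_real_write(mtd, to, len, retlen, buf);
}
```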
The resulting dmesg logs were preprocessed and the offset parameter of each write call was plotted with gnuplot (horizontally: write number; vertically: offset):
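The preprocessing boils down to grepping the offsets out of dmesg into a two-column file. A sketch assuming the log format from the printk above (the grep pattern and file names are just examples):

```gnuplot
# extract (write number, offset) pairs first, e.g.:
#   dmesg | grep 'nand_write:' | awk -F'off=' '{print NR, $2+0}' > offsets.dat
set xlabel "write number"
set ylabel "offset"
plot "offsets.dat" using 1:2 with dots notitle
```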
What we can see from the graph:
- some writes land at offset 0, which is typically dangerous as it might overwrite the bootloader area; I put them in this graph intentionally by reflashing the bootloader manually
- wear levelling becomes less efficient as free space runs out (right part of the graph)
- the catalog location (blocks holding filesystem metadata) is moved dynamically across the whole space to avoid repeated writes to the same location (a NAND-specific concern); the location changed ~6 times during the whole operation, so relocation probably happens after every ~2000 write operations
- two different partitions are visible (one at ~60M, one at ~10M)