RDFS is a custom file system. It uses a 512-byte encryption key on file data and directories. It uses variable-sized "clusters" (extents). Extents start small and grow with the file size. The increment factor of roughly 15% is chosen so that 124 32-bit pointers cover a 4 GB file. This way, every file needs only one sector for its allocation chain. The wasted space will be 5-10% on a typical file system. Here are the extent sizes: 1,1,1,1,1,1,1,2,2,2,2,3,3,4,4,5,5,6,6,7,8,9,10 .... 85408. Their sum is 2^23 sectors, which gives a maximum file size of 4 GB. If support for larger files is required, a different increment factor can be selected; this way, file sizes up to 2 TB could be supported, although more space would be wasted. For RDOS, the 4 GB limit is natural, since RDOS currently only supports file sizes up to 4 GB. The first four 32-bit entries in this sector store the directory entry sector (32 bits), the file size (32 bits) and the last modification time (64 bits).
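The arithmetic behind "one allocation sector per file" can be checked with a small sketch. It assumes the extents grow geometrically, rounded to whole sectors with a minimum of one, and searches for the growth factor that makes 124 extents sum to 2^23 sectors (4 GB at 512 bytes per sector). Under this particular rounding assumption the search lands nearer 12% than the quoted ~15% — the original sequence evidently uses a slightly different rounding scheme — but the principle is identical. All names here are illustrative, not part of RDFS:

```python
# Sketch: find the per-extent growth factor r such that 124 extents,
# each max(1, round(r^k)) sectors, sum to 2^23 sectors (= 4 GB at
# 512 B/sector).  Assumption: simple rounded geometric growth; the
# exact rounding rule in the original sequence may differ.

SECTOR = 512
EXTENTS = 124
TARGET_SECTORS = 2 ** 23          # 2^23 * 512 B = 2^32 B = 4 GB

def extent_sizes(r, n=EXTENTS):
    """Extent k holds max(1, round(r^k)) sectors."""
    return [max(1, round(r ** k)) for k in range(n)]

def find_factor(target=TARGET_SECTORS, lo=1.0, hi=2.0):
    """Binary-search the growth factor whose extents sum to target."""
    for _ in range(60):
        mid = (lo + hi) / 2
        if sum(extent_sizes(mid)) < target:
            lo = mid
        else:
            hi = mid
    return hi

if __name__ == "__main__":
    r = find_factor()
    sizes = extent_sizes(r)
    print(f"growth factor ~{(r - 1) * 100:.1f}% per extent")
    print(f"max file size: {sum(sizes) * SECTOR / 2**30:.2f} GB")
```

Raising the target (e.g. to 2 TB worth of sectors) with the same 124 slots yields a larger factor and coarser extents, which is exactly the extra waste the text mentions.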
To aid in implementing automatic defragmentation, each RDFS partition contains an array of 32-bit pointers, one per sector. Each entry points to the control sector of the file that owns the sector; 0 is used for free sectors, and 1 marks a control sector itself.
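That array is effectively a reverse map: given any data sector, a defragmenter can find which file's control sector owns it and patch that file's chain after moving the sector. A minimal sketch, with hypothetical names (only the 0 and 1 sentinel values come from the text):

```python
FREE, CONTROL = 0, 1   # sentinel entry values from the text

class SectorMap:
    """One 32-bit entry per sector: 0 = free, 1 = this sector is a
    control sector, otherwise the sector number of the owning file's
    control sector."""

    def __init__(self, num_sectors):
        self.entries = [FREE] * num_sectors

    def mark_control(self, sector):
        self.entries[sector] = CONTROL

    def assign(self, data_sector, control_sector):
        self.entries[data_sector] = control_sector

    def owner(self, data_sector):
        """Control sector owning data_sector, or None if it is free
        or is itself a control sector."""
        e = self.entries[data_sector]
        return None if e in (FREE, CONTROL) else e
```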
It should support long file names, but not in
the kludgy way M$ FAT does it. I haven't decided how to lay out
directories yet. There won't be a global file allocation table
like in FAT. Each file / directory will have its own local sector
chain, located near the file data itself. There will also be a
bitmap of all free sectors on disk.
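A free-sector bitmap of the kind mentioned above could look like the following sketch. The layout (one bit per sector, 1 = free) is a hypothetical choice; the on-disk format is left undecided in the text:

```python
# Sketch of a free-sector bitmap: one bit per sector, 1 = free.
# The actual RDFS on-disk format is not specified.

class FreeBitmap:
    def __init__(self, num_sectors):
        # Start with every sector marked free.
        self.bits = bytearray([0xFF] * ((num_sectors + 7) // 8))
        self.num_sectors = num_sectors

    def is_free(self, sector):
        return bool(self.bits[sector // 8] & (1 << (sector % 8)))

    def allocate(self, sector):
        self.bits[sector // 8] &= ~(1 << (sector % 8))

    def free(self, sector):
        self.bits[sector // 8] |= 1 << (sector % 8)

    def find_free(self, start=0):
        """First free sector at or after start, or None."""
        for s in range(start, self.num_sectors):
            if self.is_free(s):
                return s
        return None
```

Starting the search near the file's existing sectors (the `start` parameter) would support the "local sector chain, located near the file data" goal.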
Encryption will be performed at the system level. The idea behind it is to create a file system that would be impossible to read, even with Norton's disk editor and similar tools. It doesn't affect performance very much: you only perform one additional operation while moving data between application and file buffers. Encryption might be optional.
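The cipher itself is left unspecified. As a purely illustrative stand-in, a repeating 512-byte key could be applied during the buffer copy; note that a plain XOR like this is trivially breakable and only shows *where* the extra operation sits in the data path, not what a real cipher would be:

```python
# Illustrative stand-in only: XOR with a repeating 512-byte key,
# applied while copying between application and file buffers.
# NOT a real cipher -- it just marks where the operation would go.

SECTOR = 512

def xor_with_key(buf: bytes, key: bytes) -> bytes:
    """XOR is self-inverse, so the same call encrypts and decrypts."""
    assert len(key) == SECTOR
    return bytes(b ^ key[i % SECTOR] for i, b in enumerate(buf))
```

Because the transform runs only on data already being copied, its cost is one extra arithmetic operation per byte, which matches the "doesn't affect performance very much" claim.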
RDFS should obtain robustness by sequencing filesystem
structure (metadata) updates in such a way that no matter where
it's interrupted, the filesystem would still be OK (possibly
allocation blocks could be lost, but that can easily be fixed at
mount time). This would force all those updates to be
synchronous, which would be slow. My idea is to provide
"sequential asynchronous" calls instead of fully
synchronous calls. Sequential asynchronous calls would always be
guaranteed to be served in the order they were executed. They
would not normally block, unless a previous similar request to
the same sector were pending. The disk-cache would queue the
requests in a FIFO list. This list would always be served before
the normal request list, but as the head moves, normal requests
lying in between can also be served. If a new sequential
asynchronous request is done on an already queued request, the caller
would block (unless it's an asynchronous write). When no
sequential asynchronous requests are present, the normal request
list would be served with C-SCAN algorithm.
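The queueing policy above can be sketched as follows. This is a hypothetical simplification (blocking semantics and the cache itself are omitted): a FIFO list of sequential asynchronous requests is served ahead of the normal list, normal requests lying between the head and the next FIFO target are picked up on the way, and with the FIFO empty the normal list is served C-SCAN, sweeping upward and wrapping to the lowest pending sector:

```python
# Sketch of the two-queue policy described above (hypothetical names,
# blocking semantics omitted).  Sequential asynchronous requests are
# strict FIFO; normal requests in between are served on the way;
# otherwise normal requests get C-SCAN ordering.

from collections import deque

class DiskScheduler:
    def __init__(self, head=0):
        self.head = head
        self.seq_fifo = deque()   # metadata requests, strict FIFO
        self.normal = []          # ordinary requests, C-SCAN order

    def submit_sequential(self, sector):
        self.seq_fifo.append(sector)

    def submit_normal(self, sector):
        self.normal.append(sector)

    def next_request(self):
        """Pick the next sector to service and move the head there."""
        if self.seq_fifo:
            target = self.seq_fifo[0]
            # Normal requests lying between head and the FIFO target
            # are served as the head moves toward it.
            lo, hi = sorted((self.head, target))
            between = [s for s in self.normal if lo <= s <= hi]
            if between:
                pick = min(between, key=lambda s: abs(s - self.head))
                self.normal.remove(pick)
            else:
                pick = self.seq_fifo.popleft()
        elif self.normal:
            # C-SCAN: sweep upward only; wrap to the lowest request.
            ahead = [s for s in self.normal if s >= self.head]
            pick = min(ahead) if ahead else min(self.normal)
            self.normal.remove(pick)
        else:
            return None
        self.head = pick
        return pick
```

For example, with the head at 0, one sequential request for sector 100 and normal requests for 50, 150 and 80, the service order comes out 50, 80 (picked up on the way), 100 (the FIFO entry), then 150 by C-SCAN.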