Btrfs

This is the case for FAT32 file systems, for example. The '''F'''ile '''A'''llocation '''T'''able sits at a fixed place on the disk. When the FAT changes (because a file grew and needs more blocks), the new FAT must be written to the same place as before, along with the new data. If the disk is ejected before (or while) this data is written, the file system is corrupted. And the FAT changes a lot.
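The failure mode can be sketched with a toy model (plain Python, purely illustrative — this is not real FAT code): an allocation table that is overwritten entry by entry and interrupted partway through.

```python
# Toy model of in-place overwriting (illustrative only, not real FAT code).
# Each entry holds the number of the next cluster of a file,
# -1 for "end of chain", or 0 for "free".

def overwrite_in_place(table, new_table, interrupted_at=None):
    """Overwrite `table` entry by entry; an interruption partway
    through leaves a mix of old and new entries behind."""
    for i, value in enumerate(new_table):
        if i == interrupted_at:
            return table  # disk ejected / power lost mid-write
        table[i] = value
    return table

old_fat = [1, 2, -1, 0]   # consistent: file uses clusters 0 -> 1 -> 2, cluster 3 free
new_fat = [1, 2, 3, -1]   # file grew: chain is now 0 -> 1 -> 2 -> 3

result = overwrite_in_place(list(old_fat), new_fat, interrupted_at=3)
print(result)  # [1, 2, 3, 0] -- the chain now points into a "free" cluster
```

The interrupted result matches neither the old nor the new table: the chain points to cluster 3, but cluster 3 is still marked free.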


The danger of corruption is especially high while metadata (such as file names, permissions, and disk-space usage) is being written.


=== write to a metadata-log (Ext4) ===
Newer file systems like Ext4 have a solution to this. Instead of writing metadata "in place", metadata is appended to an "endless" log, so it can never be corrupted by an overwrite. This is practical because metadata makes up only a very small part of the data in a file system.


An additional mechanism is needed to make this safe; it is often called "barriers". There also have to be checksums that reveal when a part of the log is corrupted.
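A minimal sketch of the log idea (hypothetical Python, not Ext4's actual on-disk journal format): every entry is appended together with a checksum, and replay stops at the first entry whose checksum fails, so a torn write at the tail never damages older entries.

```python
# Hypothetical sketch of a checksummed metadata log (not Ext4's real format).
import json
import zlib

log = []  # the "endless" metadata log (append-only)

def append_entry(metadata):
    """Append a metadata update together with its checksum."""
    payload = json.dumps(metadata, sort_keys=True)
    log.append({"payload": payload, "crc": zlib.crc32(payload.encode())})

def replay():
    """Return only the entries that were written completely."""
    good = []
    for entry in log:
        if zlib.crc32(entry["payload"].encode()) != entry["crc"]:
            break  # torn write: discard this entry and everything after it
        good.append(json.loads(entry["payload"]))
    return good

append_entry({"name": "a.txt", "size": 10})
append_entry({"name": "a.txt", "size": 2048})
log[-1]["payload"] = log[-1]["payload"][:-3]  # simulate a torn write at the tail
print(replay())  # only the first, intact entry survives
```

Because entries are only ever appended, the interrupted write can damage nothing but the newest entry, which the checksum then filters out.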


This protects the file system itself, but not the files in it: a file may still be overwritten in place, and then the old contents are lost while the new ones may not have been written completely.
* management of space is complex
* there are two sorts of pages
* there has to be a clean-up process that makes the space of deleted files reusable, so that the disk does not run out of free pages
* unnecessary writes must be avoided, because they would also make the clean-up very expensive


==== advantages ====
* nearly any corruption can be detected thanks to the checksums
** when the power is lost, or the disk is disconnected, all old data is safe. Why?
** every bit of "old" data from before the power loss or the disconnection is still present, because it is NOT overwritten
** only the newly written data may be partly damaged
** the metadata may also be partly damaged
** when mounting the volume, analysing the checksums and metadata makes it possible to find the point in the file system where everything was still good
** btrfs automatically rolls back to this point; then it can mount the file system writable
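The rollback can be sketched like this (a hypothetical model — real btrfs commits checksummed tree generations, but the on-disk details differ): walk back from the newest generation to the newest one whose checksum still verifies, and mount from there.

```python
# Hypothetical model of rolling back to the last good generation
# (not btrfs's real on-disk structures).
import zlib

def checksum(data: bytes) -> int:
    return zlib.crc32(data)

# (generation number, tree data, stored checksum) -- newest last.
# Old generations are intact because copy-on-write never overwrote them.
generations = [
    (1, b"tree-v1", checksum(b"tree-v1")),
    (2, b"tree-v2", checksum(b"tree-v2")),
    (3, b"tree-v3", 0xDEADBEEF),  # torn write during the last commit
]

def mountable_generation(gens):
    """Newest generation whose data still matches its stored checksum."""
    for gen, data, stored in reversed(gens):
        if checksum(data) == stored:
            return gen  # roll back to here and mount read-write
    return None

print(mountable_generation(generations))  # 2
```

Because the older generations were never overwritten, finding one with a valid checksum is enough to mount a fully consistent file system; only the interrupted newest commit is discarded.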

