@Carey,

> Theo, does that then imply that setting the writeback time to the
> journal commit time (5 seconds) would also largely eliminate the
> unpopular behavior?

You'd need to set it substantially smaller than the journal commit time
(half the commit time or smaller), since the two timers are not
correlated. Furthermore, the VM subsystem doesn't write dirty pages as
soon as the expiration time goes off; it stages the writes over several
writeback windows, to avoid overloading the hard drive with background
writes (which are intended to be asynchronous).

So the answer is yes, you could probably do it by adjusting timers, but
you'd probably need to raise the journal commit time as well as
decrease the dirty_writeback and dirty_expire timers (the first sketch
below shows one way to set them).

> How much of the benefit of delayed allocation do we lose by waiting a
> couple seconds rather than minutes or tens of seconds? Any large
> write could easily be happening over a longer period than any
> reasonable writeback time, and so those cases should already be
> allocating their eventual size immediately (think torrents or a long
> running file copy).

Well, yes, but that means modifying more application code (which most
people on this thread seem to think is a hopeless cause :-P). Also,
it's only in the latest glibc in CVS that there is access to the
fallocate() system call. Current glibc has posix_fallocate(), but the
problem with posix_fallocate() is that it tries to simulate fallocate()
on filesystems (such as ext3) that don't support it by writing zeros
into the file. So posix_fallocate() is a bit of a performance disaster
on filesystems that don't support fallocate(). If you use the
fallocate() system call directly, it will return an error (EOPNOTSUPP)
if the file system doesn't support it, which is what you want in this
case (see the second sketch below).

The reality is that almost none of the applications which are writing
big files are using fallocate() today. They should, especially
BitTorrent clients, but most of them do not. Likewise, many
applications aren't calling fsync() even though POSIX demands it when
there is a requirement that the file be written onto stable storage
(the third sketch below shows the usual pattern).

--
Ext4 data loss
https://bugs.launchpad.net/bugs/317781
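First, a minimal sketch of the timer adjustment described above, using the standard /proc/sys/vm knobs. The specific values (2-second expiry, 1-second flusher wakeup, 15-second commit interval) are illustrative assumptions, not tested recommendations.

```c
/* Sketch: shrink the VM dirty timers so that expired pages get staged
 * out well inside one (lengthened) journal commit window. Both procfs
 * files take centiseconds; the journal commit interval is raised
 * separately, e.g. "mount -o remount,commit=15 /". Needs root. */
#include <stdio.h>
#include <stdlib.h>

static void write_sysctl(const char *path, const char *value)
{
    FILE *f = fopen(path, "w");
    if (!f) {
        perror(path);
        exit(EXIT_FAILURE);
    }
    fprintf(f, "%s\n", value);
    fclose(f);
}

int main(void)
{
    write_sysctl("/proc/sys/vm/dirty_expire_centisecs", "200");    /* 2 s */
    write_sysctl("/proc/sys/vm/dirty_writeback_centisecs", "100"); /* 1 s */
    return 0;
}
```

Because the flusher only works through part of the expired pages on each wakeup, the expiry needs to sit well under half the commit interval, per the point about uncorrelated timers above.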
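Second, a sketch of calling fallocate() directly instead of posix_fallocate(), so that a filesystem without preallocation support reports EOPNOTSUPP rather than triggering the zero-writing fallback. It assumes a glibc new enough to expose the fallocate() wrapper (otherwise syscall(SYS_fallocate, ...) would be needed), and the preallocate() helper is a made-up name for illustration.

```c
/* Sketch: preallocate with fallocate(2) directly and treat EOPNOTSUPP
 * (a filesystem without support, e.g. ext3) as "skip it", instead of
 * letting posix_fallocate() write zeros through the whole file. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <errno.h>

int preallocate(int fd, off_t len)
{
    if (fallocate(fd, 0, 0, len) == 0)
        return 0;       /* blocks reserved; later writes won't fragment */
    if (errno == EOPNOTSUPP)
        return 0;       /* no fs support: proceed without preallocation */
    return -1;          /* genuine error, e.g. ENOSPC */
}
```

A BitTorrent client, for example, could call this once with the torrent's final size right after creating the file, since it already knows the eventual length up front.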
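Finally, a sketch of the fsync() usage the last paragraph refers to: write the new contents to a temporary file, fsync it, then rename over the old file, so a crash leaves either the complete old version or the complete new one, never a zero-length file. The save_file() helper and its arguments are illustrative.

```c
/* Sketch: replace a file safely. fsync() before rename() forces the
 * new contents to stable storage before the old file is unlinked. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int save_file(const char *path, const char *tmp,
              const void *data, size_t len)
{
    int fd = open(tmp, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0)
        return -1;
    if (write(fd, data, len) != (ssize_t)len || fsync(fd) != 0) {
        close(fd);
        unlink(tmp);
        return -1;
    }
    if (close(fd) != 0) {
        unlink(tmp);
        return -1;
    }
    return rename(tmp, path);   /* atomic replacement */
}
```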