
ecryptfs team mailing list archive

[Bug 317781] Re: Ext4 data loss

 

@Kai,

>But you can imagine what happens to fs performance if 
>every application does fsyncs after every write or before
>every close. Performance would suffer badly.

Note that the "fsync causes performance problems" meme got started
precisely because of ext3's "data=ordered" mode, which causes all
dirty blocks to be pushed out to disk on an fsync().  Using ext4
with delayed allocation, or ext3 with data=writeback, fsync() is
actually quite fast.  Again, all modern file systems will be doing
delayed allocation, one way or another.  Even the much-vaunted ZFS does
delayed allocation.  So really, the right answer is to use fsync() or
fdatasync(), as appropriate.

@Pablomme,

The reason why the write operation is delayed is because of performance.
It's always going to give you performance benefits to delay writes for
as long as possible.  For example, if you end up deleting the file
before it ever gets staged out for writing, then you might not need to
write it at all.   There's a reason why all modern filesystems use
delayed allocation as a technique.  Now, maybe for desktop applications,
we need to have modes that sacrifice performance for better reliability
given unreliable device drivers, and applications that aren't explicitly
requesting fsync() when they really should.

It comes as an absolute shock to me, for example, that a system which
requires a hard reset whenever you exit "World of Goo" would ever be
considered acceptable; I'd use an ATI or Intel chipset before I would
accept that kind of reliability.  Maybe that's an ivory-tower attitude;
I dunno.  But for a system which is that unreliable, disabling delayed
allocation does make sense.  Maybe we can do something such as what
r6144 has suggested, where we have a data=alloc-on-commit mode.   There
will still be a performance hit associated with such a mode, since (like
ext3's data=ordered mode) it effectively means an implied fsync() for
every single inode involved with the transaction commit --- which will
hurt; there's no way around it.

-- 
Ext4 data loss
https://bugs.launchpad.net/bugs/317781

Status in “ecryptfs-utils” source package in Ubuntu: Invalid
Status in “linux” source package in Ubuntu: Confirmed
Status in ecryptfs-utils in Ubuntu Jaunty: Invalid
Status in linux in Ubuntu Jaunty: Confirmed

Bug description:
I recently installed Kubuntu Jaunty on a new drive, using Ext4 for all my data.

The first time I had this problem was a few days ago, when after a power loss ktimetracker's config file was replaced by a 0-byte version. No idea if anything else was affected; I just noticed ktimetracker right away.

Today, I was experimenting with some BIOS settings that made the system crash right after loading the desktop. After a clean reboot, pretty much every file written to by any application during the previous boot was 0 bytes.
For example, Plasma and some of the KDE core config files were reset. Also, some of my MySQL databases were killed...

My ext4 partitions all use the default settings with no performance tweaks: barriers on, extents on, ordered data mode.

I used Ext3 for 2 years and I never had any problems after power losses or system crashes.

Jaunty has all the recent updates except for the kernel, which I don't upgrade because of bug #315006.

ProblemType: Bug
Architecture: amd64
DistroRelease: Ubuntu 9.04
NonfreeKernelModules: nvidia
Package: linux-image-2.6.28-4-generic 2.6.28-4.6
ProcCmdLine: root=UUID=81942248-db70-46ef-97df-836006aad399 ro rootfstype=ext4 vga=791 all_generic_ide elevator=anticipatory
ProcEnviron:
 LANGUAGE=
 LANG=en_US.UTF-8
 SHELL=/bin/bash
ProcVersionSignature: Ubuntu 2.6.28-4.6-generic
SourcePackage: linux