u1db-discuss team mailing list archive
Message #00070
Some view functions to ponder
I thought it might be helpful to point out some places where CouchDB's
design has worked extremely well for me.
I have a love/hate relationship with CouchDB views. I'm not a fan of
the obscene IPC overhead their implementation has, nor am I a fan of
writing view functions in something as crippled as JavaScript. My
grief with JavaScript centers around it not having a real dictionary
type, and don't even get me started on how clumsy it is to do the
equivalent of Python's isinstance().
But all the same, CouchDB's procedural views have been life-changing
for me. Between procedural views and document-oriented data, I've
gotten a lot better at data design for hard problems. And Dmedia is the hardest
problem I've ever tackled, by a long shot.
Each file in Dmedia has a corresponding doc in CouchDB. This is an
example doc with the essential schema that drives all the automation
behaviors (the "make file management go away" aspect of Dmedia):
{
    "_id": "WEFB4GYQBWOEGCUA5AZEXOKWHCB4IUSYZ2TGJBJUPFWTWCNL",
    "type": "dmedia/file",
    "time": 1338560151.467609,
    "atime": 1342088785,
    "bytes": 26566829,
    "origin": "user",
    "stored": {
        "3KX6WX6YAUQIQ7M2Y3BMPSAI": {
            "copies": 1,
            "mtime": 1338564292,
            "verified": 1338571742
        },
        "PSRJGX2O2N77XLBEBI2TNDNU": {
            "copies": 1,
            "mtime": 1338560150,
            "verified": 1338564292
        },
        "YE4SYZAVTFEBDI2JZQBUTYQY": {
            "copies": 1,
            "mtime": 1338560150
        }
    }
}
This particular file is stored in 3 different "stores" (let's say hard
drives, but a store could also be something like UbuntuOne or S3).
Each of these stores has a durability confidence of 1 copy, and the
total durability for this file is 3 copies. And the copy in the last
store hasn't yet been verified (full content-hash verification).
The "stored" dictionary was the hardest thing to get right. Hopefully
it seems like the obvious design choice now, but for me anyway, it
took a full-time year of blood, sweat, and tears to come up with it.
It has a power and a clarity that I don't think you could match with
SQL/tables/columns. It's especially nice to work with from Python, for
example:
What is the total durability?
>>> sum(v['copies'] for v in doc['stored'].values())
Delete from a particular store:
>>> del doc['stored'][store_id]
It's not quite as elegant to work with from JavaScript, but that's
also just a reflection of how well designed the Python `dict` and
`list` types are, and how awesome Python iteration is.
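To make that concrete, here's a small self-contained Python sketch of the kinds of operations the "stored" schema makes easy, using the example doc from above (the variable names are mine, just for illustration):

```python
# The example doc from above, trimmed to the fields we need here.
doc = {
    "type": "dmedia/file",
    "atime": 1342088785,
    "stored": {
        "3KX6WX6YAUQIQ7M2Y3BMPSAI": {"copies": 1, "mtime": 1338564292, "verified": 1338571742},
        "PSRJGX2O2N77XLBEBI2TNDNU": {"copies": 1, "mtime": 1338560150, "verified": 1338564292},
        "YE4SYZAVTFEBDI2JZQBUTYQY": {"copies": 1, "mtime": 1338560150},
    },
}

# Total durability: sum the per-store copy counts.
durability = sum(v['copies'] for v in doc['stored'].values())

# Which stores hold a copy that has never been verified?
# (No 'verified' timestamp yet, like the last store in the example.)
unverified = [k for (k, v) in doc['stored'].items() if 'verified' not in v]

# Delete this file's record from a particular store.
del doc['stored']['YE4SYZAVTFEBDI2JZQBUTYQY']
```

After this runs, `durability` is 3 and `unverified` contains just the one store ID whose copy lacks a "verified" timestamp.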
But the (still) amazing thing to me is that it only takes 4 relatively
simple view functions to drive all the complex Dmedia file automation
behavior. Dmedia does 4 critical housekeeping tasks:
1) At a high frequency, it checks that a file is actually still in a
particular store, and that the file has the expected mtime and the
correct file size. Dmedia always works from the assumption that it
shouldn't particularly trust the metadata, so it constantly re-samples
reality. And if it's been too long since a particular store has been
"connected" to Dmedia (think external USB HDD), Dmedia will
automatically update all those files to have "copies": 0, meaning the
files might be there, but we're not going to count on them.
2) At a lower frequency it does full content-hash verification,
starting with the files that have gone the longest without
verification (starting first with files that have *never* been
verified). This is expensive so we can't do it as often, but this is
the only way to know without any doubt that the exact bytes are stored
with perfect integrity.
3) It monitors the database for files with insufficient durability,
and tries to fix the problem when it finds them, starting with the
files with the lowest durability. Dmedia treats irreplaceable
user-created files specially. So we're talking about files with
"origin": "user", and Dmedia tries to maintain a durability of at
least 3 copies for these files.
4) It monitors hard drives for low free space, and when space is
needed, it figures out which files can be safely reclaimed, and
deletes them, starting with the least likely to be used (the oldest
atime). This one always sounds scary, but for pro video especially,
your hard drives are constantly filling up, and you often need to free
space to make room for a new working set of files. This is crazy
error-prone for the user to do manually (not to mention a stressful
waste of time). It's far lower risk for Dmedia to do this automatically.
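As a rough illustration of task 1's downgrade step, here's a minimal Python sketch (the function name and the in-memory doc list are mine, not Dmedia's actual code) of marking every copy in a long-disconnected store as unreliable:

```python
def downgrade_store(docs, store_id):
    """Set "copies" to 0 for store_id in every file doc that lists it.

    This mirrors the described behavior: the files might still be
    there, but we stop counting on them until reality is re-sampled.
    """
    for doc in docs:
        stored = doc.get('stored', {})
        if store_id in stored:
            stored[store_id]['copies'] = 0

# Hypothetical docs with shortened store IDs, just for the demo:
docs = [
    {'_id': 'A', 'stored': {'STORE1': {'copies': 1}, 'STORE2': {'copies': 1}}},
    {'_id': 'B', 'stored': {'STORE2': {'copies': 1}}},
]
downgrade_store(docs, 'STORE1')
# Doc 'A' now has zero counted copies in STORE1; STORE2 is untouched.
```

In real life the docs would come from a CouchDB view query (like file_stored below) rather than a Python list, and the updates would be written back in a bulk save.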
And here are the 4 view functions, respectively, that drive these 4 tasks:
file_stored = """
function(doc) {
if (doc.type == 'dmedia/file') {
var key;
for (key in doc.stored) {
emit(key, null);
}
}
}
"""
file_verified = """
function(doc) {
if (doc.type == 'dmedia/file') {
var key;
for (key in doc.stored) {
emit([key, doc.stored[key].verified], null);
}
}
}
"""
file_fragile = """
function(doc) {
if (doc.type == 'dmedia/file' && doc.origin == 'user') {
var copies = 0;
var key;
for (key in doc.stored) {
copies += doc.stored[key].copies;
}
if (copies < 3) {
emit(copies, null);
}
}
}
"""
file_reclaimable = """
function(doc) {
if (doc.type == 'dmedia/file' && doc.origin == 'user') {
var copies = 0;
var key;
for (key in doc.stored) {
copies += doc.stored[key].copies;
}
if (copies >= 3) {
for (key in doc.stored) {
if (copies - doc.stored[key].copies >= 3) {
emit([key, doc.atime], null);
}
}
}
}
}
"""
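To see why file_reclaimable is the subtle one: a copy in a store is only reclaimable if the copies remaining in the *other* stores still meet the 3-copy threshold. Here's my Python translation of its map logic (a sketch, not Dmedia's actual code) that you can run against a sample doc:

```python
def reclaimable_keys(doc, threshold=3):
    """Mimic the file_reclaimable map function in Python.

    Returns (store_id, atime) for each store whose copy could be
    deleted while keeping total durability >= threshold.
    """
    if doc.get('type') != 'dmedia/file' or doc.get('origin') != 'user':
        return []
    stored = doc['stored']
    copies = sum(v['copies'] for v in stored.values())
    if copies < threshold:
        return []
    return [
        (key, doc['atime'])
        for (key, v) in stored.items()
        if copies - v['copies'] >= threshold
    ]

# Hypothetical doc with shortened store IDs: 4 total copies.
doc = {
    'type': 'dmedia/file',
    'origin': 'user',
    'atime': 1342088785,
    'stored': {
        'AAAA': {'copies': 2},
        'BBBB': {'copies': 1},
        'CCCC': {'copies': 1},
    },
}
print(sorted(reclaimable_keys(doc)))
# -> [('BBBB', 1342088785), ('CCCC', 1342088785)]
```

Note that the copy in 'AAAA' is *not* reclaimable even though total durability is 4: deleting it would leave only 2 copies, below the threshold. The single-copy stores each leave exactly 3 behind, so they are emitted, sorted by atime for the "least recently used first" reclaim order.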
If you want to explore these deeper and think about why they work, I
recommend looking at the unit test for file_reclaimable, as it has the
most complex and subtle behavior:
http://bazaar.launchpad.net/~dmedia/dmedia/trunk/view/head:/dmedia/tests/test_views.py#L336
Now I'm not saying that U1DB needs to be designed around the needs of
Dmedia and Novacut. I'm just trying to share what I've learned in the
brave new world of document-oriented data design during the past two
years. My personal bias says Dmedia and Novacut are pretty darn
interesting problems, but I'm fully aware that what they need might
not be what's needed to build a thriving Ubuntu app ecosystem for all
the *other* types of apps that, let's be honest, are far more numerous
and add up to a lot more revenue potential.
Long term, I think Novacut will migrate to something besides CouchDB.
U1DB might be the right fit (with a local HTTP layer and if we can
build real-time, continuous sync atop your data model). Or we might
write our own, very tailored to our particular problems. But right now
there are bigger fish to fry, plus I still feel I have a lot more to
learn from CouchDB and to learn about our problem domain before I can
decide what to do next.
Lastly, I want to thank everyone for all your work on desktopcouch.
You were on the right tack, and I think you understood the strengths
of CouchDB better than CouchDB seems to at times. I don't think the
developer community at large ever grasped just how visionary
desktopcouch was.
Cheers,
Jason