zeitgeist team mailing list archive: Message #04278
[Merge] lp:~j-bitron/zeitgeist/bb-memory into lp:zeitgeist
Seif Lotfy has proposed merging lp:~j-bitron/zeitgeist/bb-memory into lp:zeitgeist.
Requested reviews:
Zeitgeist Framework Team (zeitgeist)
For more details, see:
https://code.launchpad.net/~j-bitron/zeitgeist/bb-memory/+merge/79774
--
The attached diff has been truncated due to its size.
https://code.launchpad.net/~j-bitron/zeitgeist/bb-memory/+merge/79774
Your team Zeitgeist Framework Team is requested to review the proposed merge of lp:~j-bitron/zeitgeist/bb-memory into lp:zeitgeist.
=== added file '.bzrignore'
--- .bzrignore 1970-01-01 00:00:00 +0000
+++ .bzrignore 2011-10-19 08:09:50 +0000
@@ -0,0 +1,50 @@
+codeblocks.cbp
+codeblocks.layout
+INSTALL
+Makefile
+Makefile.in
+aclocal.m4
+autom4te.cache
+compile
+config.guess
+config.h
+config.h.in
+config.log
+config.status
+config.sub
+configure
+depcomp
+install-sh
+stamp-h1
+mkinstalldirs
+po/POTFILES
+po/stamp-it
+src/.deps
+missing
+ltmain.sh
+libtool
+intltool-update.in
+intltool-merge.in
+intltool-extract.in
+po/Makefile.in.in
+src/.libs
+src/*.c
+src/*.stamp
+src/bluebird
+test/direct/where-clause-test
+extra/ontology/*.py
+query-operators-test
+src/ontology.vala
+src/ontology-uris.vala
+extra/org.gnome.zeitgeist.service
+extensions/.deps
+extensions/.libs
+extensions/*.c
+extensions/*.stamp
+extensions/*.la
+extensions/*.lo
+test/direct/marshalling
+test/dbus/__pycache__
+test/direct/table-lookup-test
+src/zeitgeist-engine.vapi
+src/zeitgeist-engine.h
=== renamed file '.bzrignore' => '.bzrignore.moved'
=== added file 'AUTHORS'
--- AUTHORS 1970-01-01 00:00:00 +0000
+++ AUTHORS 2011-10-19 08:09:50 +0000
@@ -0,0 +1,4 @@
+Seif Lotfy <seif@xxxxxxxxx>
+Siegfried-Angel Gevatter Pujals <siegfried@xxxxxxxxxxxx>
+Manish Sinha <manishsinha@xxxxxxxxxx>
+Michal Hruby <michal.mhr@xxxxxxxxx>
=== renamed file 'AUTHORS' => 'AUTHORS.moved'
=== added file 'COPYING'
--- COPYING 1970-01-01 00:00:00 +0000
+++ COPYING 2011-10-19 08:09:50 +0000
@@ -0,0 +1,339 @@
+ GNU GENERAL PUBLIC LICENSE
+ Version 2, June 1991
+
+ Copyright (C) 1989, 1991 Free Software Foundation, Inc.,
+ 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ Everyone is permitted to copy and distribute verbatim copies
+ of this license document, but changing it is not allowed.
+
+ Preamble
+
+ The licenses for most software are designed to take away your
+freedom to share and change it. By contrast, the GNU General Public
+License is intended to guarantee your freedom to share and change free
+software--to make sure the software is free for all its users. This
+General Public License applies to most of the Free Software
+Foundation's software and to any other program whose authors commit to
+using it. (Some other Free Software Foundation software is covered by
+the GNU Lesser General Public License instead.) You can apply it to
+your programs, too.
+
+ When we speak of free software, we are referring to freedom, not
+price. Our General Public Licenses are designed to make sure that you
+have the freedom to distribute copies of free software (and charge for
+this service if you wish), that you receive source code or can get it
+if you want it, that you can change the software or use pieces of it
+in new free programs; and that you know you can do these things.
+
+ To protect your rights, we need to make restrictions that forbid
+anyone to deny you these rights or to ask you to surrender the rights.
+These restrictions translate to certain responsibilities for you if you
+distribute copies of the software, or if you modify it.
+
+ For example, if you distribute copies of such a program, whether
+gratis or for a fee, you must give the recipients all the rights that
+you have. You must make sure that they, too, receive or can get the
+source code. And you must show them these terms so they know their
+rights.
+
+ We protect your rights with two steps: (1) copyright the software, and
+(2) offer you this license which gives you legal permission to copy,
+distribute and/or modify the software.
+
+ Also, for each author's protection and ours, we want to make certain
+that everyone understands that there is no warranty for this free
+software. If the software is modified by someone else and passed on, we
+want its recipients to know that what they have is not the original, so
+that any problems introduced by others will not reflect on the original
+authors' reputations.
+
+ Finally, any free program is threatened constantly by software
+patents. We wish to avoid the danger that redistributors of a free
+program will individually obtain patent licenses, in effect making the
+program proprietary. To prevent this, we have made it clear that any
+patent must be licensed for everyone's free use or not licensed at all.
+
+ The precise terms and conditions for copying, distribution and
+modification follow.
+
+ GNU GENERAL PUBLIC LICENSE
+ TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
+
+ 0. This License applies to any program or other work which contains
+a notice placed by the copyright holder saying it may be distributed
+under the terms of this General Public License. The "Program", below,
+refers to any such program or work, and a "work based on the Program"
+means either the Program or any derivative work under copyright law:
+that is to say, a work containing the Program or a portion of it,
+either verbatim or with modifications and/or translated into another
+language. (Hereinafter, translation is included without limitation in
+the term "modification".) Each licensee is addressed as "you".
+
+Activities other than copying, distribution and modification are not
+covered by this License; they are outside its scope. The act of
+running the Program is not restricted, and the output from the Program
+is covered only if its contents constitute a work based on the
+Program (independent of having been made by running the Program).
+Whether that is true depends on what the Program does.
+
+ 1. You may copy and distribute verbatim copies of the Program's
+source code as you receive it, in any medium, provided that you
+conspicuously and appropriately publish on each copy an appropriate
+copyright notice and disclaimer of warranty; keep intact all the
+notices that refer to this License and to the absence of any warranty;
+and give any other recipients of the Program a copy of this License
+along with the Program.
+
+You may charge a fee for the physical act of transferring a copy, and
+you may at your option offer warranty protection in exchange for a fee.
+
+ 2. You may modify your copy or copies of the Program or any portion
+of it, thus forming a work based on the Program, and copy and
+distribute such modifications or work under the terms of Section 1
+above, provided that you also meet all of these conditions:
+
+ a) You must cause the modified files to carry prominent notices
+ stating that you changed the files and the date of any change.
+
+ b) You must cause any work that you distribute or publish, that in
+ whole or in part contains or is derived from the Program or any
+ part thereof, to be licensed as a whole at no charge to all third
+ parties under the terms of this License.
+
+ c) If the modified program normally reads commands interactively
+ when run, you must cause it, when started running for such
+ interactive use in the most ordinary way, to print or display an
+ announcement including an appropriate copyright notice and a
+ notice that there is no warranty (or else, saying that you provide
+ a warranty) and that users may redistribute the program under
+ these conditions, and telling the user how to view a copy of this
+ License. (Exception: if the Program itself is interactive but
+ does not normally print such an announcement, your work based on
+ the Program is not required to print an announcement.)
+
+These requirements apply to the modified work as a whole. If
+identifiable sections of that work are not derived from the Program,
+and can be reasonably considered independent and separate works in
+themselves, then this License, and its terms, do not apply to those
+sections when you distribute them as separate works. But when you
+distribute the same sections as part of a whole which is a work based
+on the Program, the distribution of the whole must be on the terms of
+this License, whose permissions for other licensees extend to the
+entire whole, and thus to each and every part regardless of who wrote it.
+
+Thus, it is not the intent of this section to claim rights or contest
+your rights to work written entirely by you; rather, the intent is to
+exercise the right to control the distribution of derivative or
+collective works based on the Program.
+
+In addition, mere aggregation of another work not based on the Program
+with the Program (or with a work based on the Program) on a volume of
+a storage or distribution medium does not bring the other work under
+the scope of this License.
+
+ 3. You may copy and distribute the Program (or a work based on it,
+under Section 2) in object code or executable form under the terms of
+Sections 1 and 2 above provided that you also do one of the following:
+
+ a) Accompany it with the complete corresponding machine-readable
+ source code, which must be distributed under the terms of Sections
+ 1 and 2 above on a medium customarily used for software interchange; or,
+
+ b) Accompany it with a written offer, valid for at least three
+ years, to give any third party, for a charge no more than your
+ cost of physically performing source distribution, a complete
+ machine-readable copy of the corresponding source code, to be
+ distributed under the terms of Sections 1 and 2 above on a medium
+ customarily used for software interchange; or,
+
+ c) Accompany it with the information you received as to the offer
+ to distribute corresponding source code. (This alternative is
+ allowed only for noncommercial distribution and only if you
+ received the program in object code or executable form with such
+ an offer, in accord with Subsection b above.)
+
+The source code for a work means the preferred form of the work for
+making modifications to it. For an executable work, complete source
+code means all the source code for all modules it contains, plus any
+associated interface definition files, plus the scripts used to
+control compilation and installation of the executable. However, as a
+special exception, the source code distributed need not include
+anything that is normally distributed (in either source or binary
+form) with the major components (compiler, kernel, and so on) of the
+operating system on which the executable runs, unless that component
+itself accompanies the executable.
+
+If distribution of executable or object code is made by offering
+access to copy from a designated place, then offering equivalent
+access to copy the source code from the same place counts as
+distribution of the source code, even though third parties are not
+compelled to copy the source along with the object code.
+
+ 4. You may not copy, modify, sublicense, or distribute the Program
+except as expressly provided under this License. Any attempt
+otherwise to copy, modify, sublicense or distribute the Program is
+void, and will automatically terminate your rights under this License.
+However, parties who have received copies, or rights, from you under
+this License will not have their licenses terminated so long as such
+parties remain in full compliance.
+
+ 5. You are not required to accept this License, since you have not
+signed it. However, nothing else grants you permission to modify or
+distribute the Program or its derivative works. These actions are
+prohibited by law if you do not accept this License. Therefore, by
+modifying or distributing the Program (or any work based on the
+Program), you indicate your acceptance of this License to do so, and
+all its terms and conditions for copying, distributing or modifying
+the Program or works based on it.
+
+ 6. Each time you redistribute the Program (or any work based on the
+Program), the recipient automatically receives a license from the
+original licensor to copy, distribute or modify the Program subject to
+these terms and conditions. You may not impose any further
+restrictions on the recipients' exercise of the rights granted herein.
+You are not responsible for enforcing compliance by third parties to
+this License.
+
+ 7. If, as a consequence of a court judgment or allegation of patent
+infringement or for any other reason (not limited to patent issues),
+conditions are imposed on you (whether by court order, agreement or
+otherwise) that contradict the conditions of this License, they do not
+excuse you from the conditions of this License. If you cannot
+distribute so as to satisfy simultaneously your obligations under this
+License and any other pertinent obligations, then as a consequence you
+may not distribute the Program at all. For example, if a patent
+license would not permit royalty-free redistribution of the Program by
+all those who receive copies directly or indirectly through you, then
+the only way you could satisfy both it and this License would be to
+refrain entirely from distribution of the Program.
+
+If any portion of this section is held invalid or unenforceable under
+any particular circumstance, the balance of the section is intended to
+apply and the section as a whole is intended to apply in other
+circumstances.
+
+It is not the purpose of this section to induce you to infringe any
+patents or other property right claims or to contest validity of any
+such claims; this section has the sole purpose of protecting the
+integrity of the free software distribution system, which is
+implemented by public license practices. Many people have made
+generous contributions to the wide range of software distributed
+through that system in reliance on consistent application of that
+system; it is up to the author/donor to decide if he or she is willing
+to distribute software through any other system and a licensee cannot
+impose that choice.
+
+This section is intended to make thoroughly clear what is believed to
+be a consequence of the rest of this License.
+
+ 8. If the distribution and/or use of the Program is restricted in
+certain countries either by patents or by copyrighted interfaces, the
+original copyright holder who places the Program under this License
+may add an explicit geographical distribution limitation excluding
+those countries, so that distribution is permitted only in or among
+countries not thus excluded. In such case, this License incorporates
+the limitation as if written in the body of this License.
+
+ 9. The Free Software Foundation may publish revised and/or new versions
+of the General Public License from time to time. Such new versions will
+be similar in spirit to the present version, but may differ in detail to
+address new problems or concerns.
+
+Each version is given a distinguishing version number. If the Program
+specifies a version number of this License which applies to it and "any
+later version", you have the option of following the terms and conditions
+either of that version or of any later version published by the Free
+Software Foundation. If the Program does not specify a version number of
+this License, you may choose any version ever published by the Free Software
+Foundation.
+
+ 10. If you wish to incorporate parts of the Program into other free
+programs whose distribution conditions are different, write to the author
+to ask for permission. For software which is copyrighted by the Free
+Software Foundation, write to the Free Software Foundation; we sometimes
+make exceptions for this. Our decision will be guided by the two goals
+of preserving the free status of all derivatives of our free software and
+of promoting the sharing and reuse of software generally.
+
+ NO WARRANTY
+
+ 11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY
+FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN
+OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES
+PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED
+OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
+MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS
+TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE
+PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING,
+REPAIR OR CORRECTION.
+
+ 12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
+WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR
+REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES,
+INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING
+OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED
+TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY
+YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER
+PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGES.
+
+ END OF TERMS AND CONDITIONS
+
+ How to Apply These Terms to Your New Programs
+
+ If you develop a new program, and you want it to be of the greatest
+possible use to the public, the best way to achieve this is to make it
+free software which everyone can redistribute and change under these terms.
+
+ To do so, attach the following notices to the program. It is safest
+to attach them to the start of each source file to most effectively
+convey the exclusion of warranty; and each file should have at least
+the "copyright" line and a pointer to where the full notice is found.
+
+ <one line to give the program's name and a brief idea of what it does.>
+ Copyright (C) <year> <name of author>
+
+ This program is free software; you can redistribute it and/or modify
+ it under the terms of the GNU General Public License as published by
+ the Free Software Foundation; either version 2 of the License, or
+ (at your option) any later version.
+
+ This program is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU General Public License for more details.
+
+ You should have received a copy of the GNU General Public License along
+ with this program; if not, write to the Free Software Foundation, Inc.,
+ 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
+
+Also add information on how to contact you by electronic and paper mail.
+
+If the program is interactive, make it output a short notice like this
+when it starts in an interactive mode:
+
+ Gnomovision version 69, Copyright (C) year name of author
+ Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
+ This is free software, and you are welcome to redistribute it
+ under certain conditions; type `show c' for details.
+
+The hypothetical commands `show w' and `show c' should show the appropriate
+parts of the General Public License. Of course, the commands you use may
+be called something other than `show w' and `show c'; they could even be
+mouse-clicks or menu items--whatever suits your program.
+
+You should also get your employer (if you work as a programmer) or your
+school, if any, to sign a "copyright disclaimer" for the program, if
+necessary. Here is a sample; alter the names:
+
+ Yoyodyne, Inc., hereby disclaims all copyright interest in the program
+ `Gnomovision' (which makes passes at compilers) written by James Hacker.
+
+ <signature of Ty Coon>, 1 April 1989
+ Ty Coon, President of Vice
+
+This General Public License does not permit incorporating your program into
+proprietary programs. If your program is a subroutine library, you may
+consider it more useful to permit linking proprietary applications with the
+library. If this is what you want to do, use the GNU Lesser General
+Public License instead of this License.
=== renamed file 'COPYING' => 'COPYING.moved'
=== added file 'ChangeLog'
=== added file 'MAINTAINERS'
--- MAINTAINERS 1970-01-01 00:00:00 +0000
+++ MAINTAINERS 2011-10-19 08:09:50 +0000
@@ -0,0 +1,5 @@
+Seif Lotfy <seif@xxxxxxxxx>
+Siegfried-Angel Gevatter Pujals <siegfried@xxxxxxxxxxxx>
+Manish Sinha <manishsinha@xxxxxxxxxx>
+Michal Hruby <michal.mhr@xxxxxxxxx>
+
=== renamed file 'MAINTAINERS' => 'MAINTAINERS.moved'
=== added file 'Makefile.am'
--- Makefile.am 1970-01-01 00:00:00 +0000
+++ Makefile.am 2011-10-19 08:09:50 +0000
@@ -0,0 +1,47 @@
+NULL =
+
+SUBDIRS = \
+ src \
+ extensions \
+ extra \
+ test \
+ po \
+ $(NULL)
+
+bluebirddocdir = ${prefix}/doc/bluebird
+bluebirddoc_DATA = \
+ ChangeLog \
+ README \
+ COPYING \
+ AUTHORS \
+ INSTALL \
+ NEWS \
+ $(NULL)
+
+DISTCHECK_CONFIGURE_FLAGS = --with-session-bus-services-dir="\$(datadir)"/dbus-1/services
+
+EXTRA_DIST = \
+ $(bluebirddoc_DATA) \
+ intltool-extract.in \
+ intltool-merge.in \
+ intltool-update.in \
+ $(NULL)
+
+DISTCLEANFILES = \
+ intltool-extract \
+ intltool-merge \
+ intltool-update \
+ po/.intltool-merge-cache \
+ $(NULL)
+
+run: all
+ ./src/bluebird
+
+debug: all
+ gdb ./src/bluebird
+
+test: all
+ ./test/dbus/run-all-tests.py
+
+test-direct: all
+ cd ./test/direct/ && make run;
=== renamed file 'Makefile.am' => 'Makefile.am.moved'
=== added file 'NEWS'
=== renamed file 'NEWS' => 'NEWS.moved'
=== added file 'README'
=== renamed file 'README' => 'README.moved'
=== added file 'autogen.sh'
--- autogen.sh 1970-01-01 00:00:00 +0000
+++ autogen.sh 2011-10-19 08:09:50 +0000
@@ -0,0 +1,9 @@
+#!/bin/sh
+# Run this to generate all the initial makefiles, etc.
+
+srcdir=`dirname $0`
+test -z "$srcdir" && srcdir=.
+
+PKG_NAME="bluebird"
+
+. gnome-autogen.sh
=== renamed file 'autogen.sh' => 'autogen.sh.moved'
=== added file 'config.vapi'
--- config.vapi 1970-01-01 00:00:00 +0000
+++ config.vapi 2011-10-19 08:09:50 +0000
@@ -0,0 +1,7 @@
+[CCode (cprefix = "", lower_case_cprefix = "", cheader_filename = "config.h")]
+namespace Config
+{
+ public const string GETTEXT_PACKAGE;
+ public const string VERSION;
+ public const string DATADIR;
+}
=== added file 'configure.ac'
--- configure.ac 1970-01-01 00:00:00 +0000
+++ configure.ac 2011-10-19 08:09:50 +0000
@@ -0,0 +1,84 @@
+AC_INIT([bluebird], [0.8.99], [dev@xxxxxxxxxxxxxxxxxxxxxxxxxxx], [bluebird])
+AC_CONFIG_SRCDIR([Makefile.am])
+AC_CONFIG_HEADERS(config.h)
+AM_INIT_AUTOMAKE([dist-bzip2])
+AM_MAINTAINER_MODE
+
+AC_PROG_CC
+AM_PROG_CC_C_O
+AC_DISABLE_STATIC
+AC_PROG_LIBTOOL
+
+AM_PROG_VALAC([0.14.0])
+
+AM_SILENT_RULES([yes])
+
+AH_TEMPLATE([GETTEXT_PACKAGE], [Package name for gettext])
+GETTEXT_PACKAGE=bluebird
+AC_DEFINE_UNQUOTED(GETTEXT_PACKAGE, "$GETTEXT_PACKAGE")
+AC_SUBST(GETTEXT_PACKAGE)
+AM_GLIB_GNU_GETTEXT
+IT_PROG_INTLTOOL([0.35.0])
+
+AC_SUBST(CFLAGS)
+AC_SUBST(CPPFLAGS)
+AC_SUBST(LDFLAGS)
+
+GLIB_REQUIRED=2.26.0
+SQLITE_REQUIRED=3.7
+
+BLUEBIRD_REQUIRED="glib-2.0 >= $GLIB_REQUIRED
+ gobject-2.0 >= $GLIB_REQUIRED
+ gio-unix-2.0 >= $GLIB_REQUIRED
+ sqlite3 >= $SQLITE_REQUIRED"
+
+PKG_CHECK_MODULES(BLUEBIRD, [$BLUEBIRD_REQUIRED])
+AC_SUBST(BLUEBIRD_CFLAGS)
+AC_SUBST(BLUEBIRD_LIBS)
+
+#################################################
+# DBus service
+#################################################
+
+AC_ARG_WITH([session_bus_services_dir],
+ AC_HELP_STRING([--with-session-bus-services-dir], [Path to DBus services directory]))
+
+if test "x$with_session_bus_services_dir" = "x" ; then
+ PKG_CHECK_MODULES(DBUS_MODULE, "dbus-1")
+ services_dir="`$PKG_CONFIG --variable session_bus_services_dir dbus-1`"
+else
+ services_dir="$with_session_bus_services_dir"
+fi
+
+DBUS_SERVICES_DIR="$services_dir"
+AC_SUBST(DBUS_SERVICES_DIR)
+
+AC_CONFIG_FILES([
+ Makefile
+ src/Makefile
+ extensions/Makefile
+ extensions/fts-python/Makefile
+ extra/Makefile
+ extra/ontology/Makefile
+ test/Makefile
+ test/dbus/Makefile
+ test/direct/Makefile
+ po/Makefile.in
+])
+
+# check for rapper
+AC_CHECK_PROG(HAVE_RAPPER, rapper, yes, no)
+if test "x$HAVE_RAPPER" = "xno"; then
+ AC_MSG_ERROR("You need the tool `rapper' from the `raptor-utils' package in order to compile Zeitgeist")
+fi
+
+# check for python-rdflib
+AC_MSG_CHECKING([for python-rdflib])
+echo "import rdflib" | python - 2>/dev/null
+if test $? -ne 0 ; then
+ AC_MSG_FAILURE([failed. Please install the python-rdflib package.])
+else
+ AC_MSG_RESULT([yes])
+fi
+
+AC_OUTPUT
=== renamed file 'configure.ac' => 'configure.ac.moved'
=== added directory 'extensions'
=== added file 'extensions/Makefile.am'
--- extensions/Makefile.am 1970-01-01 00:00:00 +0000
+++ extensions/Makefile.am 2011-10-19 08:09:50 +0000
@@ -0,0 +1,62 @@
+SUBDIRS = fts-python
+
+NULL =
+
+#extensionsdir = $(libdir)/zeitgeist/extensions
+noinst_LTLIBRARIES = ds-registry.la blacklist.la storage-monitor.la fts.la
+
+AM_CPPFLAGS = \
+ $(BLUEBIRD_CFLAGS) \
+ -include $(CONFIG_HEADER) \
+ -I $(top_srcdir)/src \
+ -w \
+ $(NULL)
+
+VALAFLAGS = \
+ --target-glib=2.26 \
+ --pkg gio-2.0 \
+ --pkg sqlite3 \
+ --pkg gmodule-2.0 \
+ $(top_srcdir)/src/zeitgeist-engine.vapi \
+ $(NULL)
+
+ds_registry_la_SOURCES = \
+ ds-registry.vala \
+ $(NULL)
+
+ds_registry_la_LDFLAGS = -module -avoid-version
+
+ds_registry_la_LIBADD = \
+ $(BLUEBIRD_LIBS) \
+ $(NULL)
+
+blacklist_la_SOURCES = \
+ blacklist.vala \
+ $(NULL)
+
+blacklist_la_LDFLAGS = -module -avoid-version
+
+blacklist_la_LIBADD = \
+ $(BLUEBIRD_LIBS) \
+ $(NULL)
+
+storage_monitor_la_SOURCES = \
+ storage-monitor.vala \
+ $(NULL)
+
+storage_monitor_la_LDFLAGS = -module -avoid-version
+
+storage_monitor_la_LIBADD = \
+ $(BLUEBIRD_LIBS) \
+ $(NULL)
+
+
+fts_la_SOURCES = \
+ fts.vala \
+ $(NULL)
+
+fts_la_LDFLAGS = -module -avoid-version
+
+fts_la_LIBADD = \
+ $(BLUEBIRD_LIBS) \
+ $(NULL)
=== added file 'extensions/blacklist.vala'
--- extensions/blacklist.vala 1970-01-01 00:00:00 +0000
+++ extensions/blacklist.vala 2011-10-19 08:09:50 +0000
@@ -0,0 +1,205 @@
+/* blacklist.vala
+ *
+ * Copyright © 2011 Collabora Ltd.
+ * By Siegfried-Angel Gevatter Pujals <siegfried@xxxxxxxxxxxx>
+ * Copyright © 2011 Michal Hruby <michal.mhr@xxxxxxxxx>
+ *
+ * Based upon a Python implementation (2009-2011) by:
+ * Mikkel Kamstrup Erlandsen <mikkel.kamstrup@xxxxxxxxx>
+ * Manish Sinha <manishsinha@xxxxxxxxxx>
+ *
+ * This program is free software: you can redistribute it and/or modify
+ * it under the terms of the GNU Lesser General Public License as published by
+ * the Free Software Foundation, either version 2.1 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ *
+ */
+
+namespace Zeitgeist
+{
+ [DBus (name = "org.gnome.zeitgeist.Blacklist")]
+ public interface RemoteBlacklist: Object
+ {
+ public abstract void add_template (string template_id,
+ [DBus (signature = "(asaasay)")] Variant event_template)
+ throws Error;
+ [DBus (signature = "a{s(asaasay)}")]
+ public abstract Variant get_templates () throws Error;
+ public abstract void remove_template (string template_id)
+ throws Error;
+
+ public signal void template_added (string template_id,
+ [DBus (signature = "s(asaasay)")] Variant event_template);
+ public signal void template_removed (string template_id,
+            [DBus (signature = "s(asaasay)")] Variant event_template);
+ }
+
+ namespace BlacklistTemplates
+ {
+ private const string SIG_BLACKLIST = "a{s("+Utils.SIG_EVENT+")}";
+
+ private static HashTable<string, Event> from_variant (
+ Variant templates_variant)
+ {
+ var blacklist = new HashTable<string, Event> (str_hash, str_equal);
+
+ warn_if_fail (
+ templates_variant.get_type_string () == SIG_BLACKLIST);
+ foreach (Variant template_variant in templates_variant)
+ {
+ VariantIter iter = template_variant.iterator ();
+ string template_id = iter.next_value ().get_string ();
+ // FIXME: throw exception upon error instead of aborting
+ Event template = new Event.from_variant (iter.next_value ());
+ blacklist.insert (template_id, template);
+ }
+
+ return blacklist;
+ }
+
+ public static Variant to_variant (HashTable<string, Event> blacklist)
+ {
+ var vb = new VariantBuilder (new VariantType (SIG_BLACKLIST));
+ {
+ var iter = HashTableIter<string, Event> (blacklist);
+ string template_id;
+ Event event_template;
+ while (iter.next (out template_id, out event_template))
+ {
+ vb.open (new VariantType ("{s("+Utils.SIG_EVENT+")}"));
+ vb.add ("s", template_id);
+ vb.add_value (event_template.to_variant ());
+ vb.close ();
+ }
+ }
+ return vb.end ();
+ }
+ }
+
+ class Blacklist: Extension, RemoteBlacklist
+ {
+ private HashTable<string, Event> blacklist;
+ private uint registration_id;
+
+ Blacklist ()
+ {
+ Object ();
+ }
+
+ construct
+ {
+ // Restore previous blacklist from database, or create an empty one
+ Variant? templates = retrieve_config ("blacklist",
+ BlacklistTemplates.SIG_BLACKLIST);
+ if (templates != null)
+ blacklist = BlacklistTemplates.from_variant (templates);
+ else
+ blacklist = new HashTable<string, Event> (str_hash, str_equal);
+
+ // This will be called after bus is acquired, so it shouldn't block
+ try
+ {
+ var connection = Bus.get_sync (BusType.SESSION, null);
+ registration_id = connection.register_object<RemoteBlacklist> (
+ "/org/gnome/zeitgeist/blacklist", this);
+ }
+ catch (Error err)
+ {
+ warning ("%s", err.message);
+ }
+ }
+
+ public override void unload ()
+ {
+ try
+ {
+ var connection = Bus.get_sync (BusType.SESSION, null);
+ if (registration_id != 0)
+ {
+ connection.unregister_object (registration_id);
+ registration_id = 0;
+ }
+ }
+ catch (Error err)
+ {
+ warning ("%s", err.message);
+ }
+
+ debug ("%s, this.ref_count = %u", Log.METHOD, this.ref_count);
+ }
+
+ private void flush ()
+ {
+ Variant v = BlacklistTemplates.to_variant (blacklist);
+ store_config ("blacklist", v);
+ }
+
+ public override void pre_insert_events (GenericArray<Event?> events,
+ BusName? sender)
+ {
+ for (int i = 0; i < events.length; i++)
+ {
+ foreach (var tmpl in blacklist.get_values ())
+ {
+ if (events[i].matches_template (tmpl))
+ {
+ events[i] = null;
+ break;
+ }
+ }
+ }
+ }
+
+ public void add_template (string template_id, Variant event_template)
+ throws EngineError
+ {
+ Event template = new Event.from_variant (event_template);
+ blacklist.insert (template_id, template);
+ debug ("Added blacklist template: %s", template_id);
+ template_added (template_id, event_template);
+ flush ();
+ }
+
+ public void remove_template (string template_id)
+ {
+ Event event_template = blacklist.lookup (template_id);
+ if (blacklist.remove (template_id))
+ {
+ debug ("Removed blacklist template: %s", template_id);
+ template_removed (template_id, event_template.to_variant ());
+ flush ();
+ }
+ else
+ {
+ debug ("Blacklist template \"%s\" not found.", template_id);
+ }
+ }
+
+ public Variant get_templates ()
+ {
+ return BlacklistTemplates.to_variant (blacklist);
+ }
+
+ }
+
+ [ModuleInit]
+#if BUILTIN_EXTENSIONS
+ public static Type blacklist_init (TypeModule module)
+ {
+#else
+ public static Type extension_register (TypeModule module)
+ {
+#endif
+ return typeof (Blacklist);
+ }
+}
+
+// vim:expandtab:ts=4:sw=4
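
The Blacklist extension above exports its API on the session bus. A minimal client sketch in Python, assuming the engine owns the bus name org.gnome.zeitgeist.Engine (not shown in this diff) and the default Vala GDBus method naming (add_template becomes AddTemplate, and so on); the object path and interface name are taken from the construct block and the [DBus] annotation:

#!/usr/bin/env python
# Hypothetical client for the org.gnome.zeitgeist.Blacklist interface.
# The bus name and the illustrative template below are assumptions.
import dbus

bus = dbus.SessionBus()
blacklist = dbus.Interface(
    bus.get_object("org.gnome.zeitgeist.Engine",
                   "/org/gnome/zeitgeist/blacklist"),
    "org.gnome.zeitgeist.Blacklist")

# An event template in the (asaasay) layout: event metadata
# [id, timestamp, interpretation, manifestation, actor], subjects, payload.
template = (["", "", "", "", "application://gedit.desktop"], [], [])
blacklist.AddTemplate("block-gedit", template)

# GetTemplates returns a dict mapping template ids to event templates.
for template_id, event_template in blacklist.GetTemplates().items():
    print template_id

blacklist.RemoveTemplate("block-gedit")
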
=== added file 'extensions/ds-registry.vala'
--- extensions/ds-registry.vala 1970-01-01 00:00:00 +0000
+++ extensions/ds-registry.vala 2011-10-19 08:09:50 +0000
@@ -0,0 +1,440 @@
+/* ds-registry.vala
+ *
+ * Copyright © 2011 Michal Hruby <michal.mhr@xxxxxxxxx>
+ * Copyright © 2011 Collabora Ltd.
+ * By Siegfried-Angel Gevatter Pujals <siegfried@xxxxxxxxxxxx>
+ *
+ * Based upon a Python implementation (2009-2010) by:
+ * Siegfried-Angel Gevatter Pujals <siegfried@xxxxxxxxxxxx>
+ * Mikkel Kamstrup Erlandsen <mikkel.kamstrup@xxxxxxxxx>
+ *
+ * This program is free software: you can redistribute it and/or modify
+ * it under the terms of the GNU Lesser General Public License as published by
+ * the Free Software Foundation, either version 2.1 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ *
+ */
+
+namespace Zeitgeist
+{
+ [DBus (name = "org.gnome.zeitgeist.DataSourceRegistry")]
+ public interface RemoteRegistry: Object
+ {
+ [DBus (signature = "a(sssa(asaasay)bxb)")]
+ public abstract Variant get_data_sources () throws Error;
+ public abstract bool register_data_source (string unique_id,
+ string name, string description,
+ [DBus (signature = "a(asaasay)")] Variant event_templates, BusName? sender)
+ throws Error;
+ public abstract void set_data_source_enabled (string unique_id,
+ bool enabled) throws Error;
+ [DBus (signature = "(sssa(asaasay)bxb)")]
+ public abstract Variant get_data_source_from_id (string id) throws Error;
+
+ public signal void data_source_disconnected (
+ [DBus (signature = "(sssa(asaasay)bxb)")] Variant data_source);
+ public signal void data_source_enabled (string unique_id,
+ bool enabled);
+ public signal void data_source_registered (
+ [DBus (signature = "(sssa(asaasay)bxb)")] Variant data_source);
+ }
+
+ class DataSource: Object
+ {
+ public string unique_id { get; set; }
+ public string name { get; set; }
+ public string description { get; set; }
+
+ public GenericArray<Event>? event_templates { get; set; }
+
+ public bool enabled { get; set; }
+ public bool running { get; set; }
+ public int64 timestamp { get; set; }
+
+ public DataSource ()
+ {
+ Object ();
+ }
+
+ public DataSource.full (string unique_id, string name,
+ string description, GenericArray<Event> templates)
+ {
+ Object (unique_id: unique_id, name: name, description: description,
+ event_templates: templates);
+ }
+
+ public DataSource.from_variant (Variant variant,
+ bool reset_running=false)
+ {
+ warn_if_fail (
+ variant.get_type_string () == "(sssa("+Utils.SIG_EVENT+")bxb)"
+ || variant.get_type_string () == "sssa("+Utils.SIG_EVENT+")");
+ var iter = variant.iterator ();
+
+ assert (iter.n_children () >= 4);
+ unique_id = iter.next_value ().get_string ();
+ name = iter.next_value ().get_string ();
+ description = iter.next_value ().get_string ();
+ event_templates = Events.from_variant (iter.next_value ());
+
+ if (iter.n_children () > 4)
+ {
+ running = iter.next_value ().get_boolean ();
+ if (reset_running)
+ running = false;
+ timestamp = iter.next_value ().get_int64 ();
+ enabled = iter.next_value ().get_boolean ();
+ }
+ }
+
+ public Variant to_variant ()
+ {
+ var vb = new VariantBuilder (new VariantType (
+ "(sssa("+Utils.SIG_EVENT+")bxb)"));
+
+ vb.add ("s", unique_id);
+ vb.add ("s", name);
+ vb.add ("s", description);
+ if (event_templates != null && event_templates.length > 0)
+ {
+ vb.add_value (Events.to_variant (event_templates));
+ }
+ else
+ {
+ vb.open (new VariantType ("a("+Utils.SIG_EVENT+")"));
+ vb.close ();
+ }
+
+ vb.add ("b", running);
+ vb.add ("x", timestamp);
+ vb.add ("b", enabled);
+
+ return vb.end ();
+ }
+ }
+
+ namespace DataSources
+ {
+ private const string SIG_DATASOURCES =
+ "a(sssa("+Utils.SIG_EVENT+")bxb)";
+
+ private static HashTable<string, DataSource> from_variant (
+ Variant sources_variant, bool reset_running=false)
+ {
+ var registry = new HashTable<string, DataSource> (
+ str_hash, str_equal);
+
+ warn_if_fail (
+ sources_variant.get_type_string() == SIG_DATASOURCES);
+ foreach (Variant ds_variant in sources_variant)
+ {
+ DataSource ds = new DataSource.from_variant (ds_variant,
+ reset_running);
+ registry.insert (ds.unique_id, ds);
+ }
+
+ return registry;
+ }
+
+ public static Variant to_variant (
+ HashTable<string, DataSource> sources)
+ {
+ var vb = new VariantBuilder (new VariantType (SIG_DATASOURCES));
+
+ List<unowned DataSource> data_sources = sources.get_values ();
+ data_sources.sort ((a, b) =>
+ {
+ return strcmp (a.unique_id, b.unique_id);
+ });
+
+ foreach (unowned DataSource ds in data_sources)
+ {
+ vb.add_value (ds.to_variant ());
+ }
+
+ return vb.end ();
+ }
+ }
+
+ class DataSourceRegistry: Extension, RemoteRegistry
+ {
+ private HashTable<string, DataSource> sources;
+ private HashTable<string, GenericArray<BusName>> running;
+ private uint registration_id;
+ private bool dirty;
+
+ private static const uint DISK_WRITE_TIMEOUT = 5 * 60; // 5 minutes
+
+ DataSourceRegistry ()
+ {
+ Object ();
+ }
+
+ construct
+ {
+ running = new HashTable<string, GenericArray<BusName?>>(
+ str_hash, str_equal);
+
+ Variant? registry = retrieve_config ("registry",
+ DataSources.SIG_DATASOURCES);
+ if (registry != null)
+ sources = DataSources.from_variant (registry, true);
+ else
+ sources = new HashTable<string, DataSource> (
+ str_hash, str_equal);
+
+ // this will be called after bus is acquired, so it shouldn't block
+ try
+ {
+ var connection = Bus.get_sync (BusType.SESSION, null);
+ registration_id = connection.register_object<RemoteRegistry> (
+ "/org/gnome/zeitgeist/data_source_registry", this);
+
+ connection.signal_subscribe ("org.freedesktop.DBus",
+ "org.freedesktop.DBus", "NameOwnerChanged",
+ "/org/freedesktop/DBus", null, 0,
+ name_owner_changed);
+ }
+ catch (Error err)
+ {
+ warning ("%s", err.message);
+ }
+
+ // Changes are saved to the DB every few seconds and at unload.
+ Timeout.add_seconds (DISK_WRITE_TIMEOUT, flush, Priority.LOW);
+ }
+
+ public override void unload ()
+ {
+ try
+ {
+ var connection = Bus.get_sync (BusType.SESSION, null);
+ if (registration_id != 0)
+ {
+ connection.unregister_object (registration_id);
+ registration_id = 0;
+ }
+ }
+ catch (Error err)
+ {
+ warning ("%s", err.message);
+ }
+
+ flush ();
+ debug ("%s, this.ref_count = %u", Log.METHOD, this.ref_count);
+ }
+
+ public Variant get_data_sources ()
+ {
+ return DataSources.to_variant (sources);
+ }
+
+ private bool is_sender_known (BusName sender,
+ GenericArray<BusName> sender_array)
+ {
+ for (int i = 0; i < sender_array.length; i++)
+ {
+ if (sender == sender_array[i])
+ return true;
+ }
+ return false;
+ }
+
+ public bool register_data_source (string unique_id, string name,
+ string description, Variant event_templates, BusName? sender)
+ {
+ debug ("%s: %s, %s, %s", Log.METHOD, unique_id, name, description);
+ if (sender == null)
+ {
+ warning ("%s: sender == null, ignoring request", Log.METHOD);
+ return false;
+ }
+
+
+ var sender_array = running.lookup (unique_id);
+ if (sender_array == null)
+ {
+ running.insert (unique_id, new GenericArray<BusName?>());
+ running.lookup (unique_id).add (sender);
+ }
+        else if (!is_sender_known (sender, sender_array))
+ {
+ running.lookup (unique_id).add (sender);
+ }
+
+ unowned DataSource? ds = sources.lookup (unique_id);
+ if (ds != null)
+ {
+ var templates = Events.from_variant (event_templates);
+ ds.name = name;
+ ds.description = description;
+ ds.event_templates = templates;
+ ds.timestamp = Timestamp.now ();
+ ds.running = true;
+ dirty = true;
+
+ data_source_registered (ds.to_variant ());
+
+ return ds.enabled;
+ }
+ else
+ {
+ var templates = Events.from_variant (event_templates);
+ DataSource new_ds = new DataSource.full (unique_id, name,
+ description, templates);
+ new_ds.enabled = true;
+ new_ds.running = true;
+ new_ds.timestamp = Timestamp.now ();
+ sources.insert (unique_id, new_ds);
+ dirty = true;
+
+ data_source_registered (new_ds.to_variant ());
+
+ return new_ds.enabled;
+ }
+
+ }
+
+ public void set_data_source_enabled (string unique_id, bool enabled)
+ {
+ debug ("%s: %s, %d", Log.METHOD, unique_id, (int) enabled);
+ unowned DataSource? ds = sources.lookup (unique_id);
+ if (ds != null)
+ {
+ if (ds.enabled != enabled)
+ {
+ ds.enabled = enabled;
+ dirty = true;
+ data_source_enabled (unique_id, enabled);
+ }
+ }
+ else
+ {
+ warning ("DataSource \"%s\" isn't registered!", unique_id);
+ }
+ }
+
+ public Variant get_data_source_from_id (string unique_id) throws Error
+ {
+ unowned DataSource? ds = sources.lookup (unique_id);
+ if (ds != null)
+ {
+ return ds.to_variant ();
+ }
+
+ throw new EngineError.INVALID_KEY (
+ "Datasource with unique ID: %s not found".printf (unique_id));
+ }
+
+ public override void pre_insert_events (GenericArray<Event?> events,
+ BusName? sender)
+ {
+ foreach (string unique_id in running.get_keys())
+ {
+ GenericArray<BusName?> bus_names = running.lookup (unique_id);
+ if (is_sender_known (sender, bus_names))
+ {
+ var data_source = sources.lookup (unique_id);
+
+ data_source.timestamp = Timestamp.now ();
+ dirty = true;
+
+ if (!data_source.enabled)
+ {
+ for (int i = 0; i < events.length; i++)
+ events[i] = null;
+ }
+ }
+ }
+ }
+
+ /*
+ * Cleanup disconnected clients and mark data-sources as not running
+ * when no client remains.
+ **/
+ private void name_owner_changed (DBusConnection conn, string sender,
+ string path, string interface_name, string signal_name,
+ Variant parameters)
+ {
+ var name = parameters.get_child_value (0).dup_string ();
+ //var old_owner = parameters.get_child_value (1).dup_string ();
+ var new_owner = parameters.get_child_value (2).dup_string ();
+ if (new_owner != "") return;
+
+ // Are there data-sources with this bus name?
+ var disconnected_ds = new GenericArray<DataSource> ();
+ {
+ var iter = HashTableIter<string, GenericArray<BusName?>> (
+ running);
+ unowned string uid;
+ unowned GenericArray<BusName> name_arr;
+ while (iter.next (out uid, out name_arr))
+ {
+ for (int i = 0; i < name_arr.length; i++)
+ {
+ if (name_arr[i] == name)
+ {
+ disconnected_ds.add (sources.lookup (uid));
+ name_arr.remove_index_fast (i--);
+ }
+ }
+ }
+ }
+
+ if (disconnected_ds.length == 0) return;
+
+ for (int i = 0; i < disconnected_ds.length; i++)
+ {
+ var ds = disconnected_ds[i];
+ unowned string uid = ds.unique_id;
+ debug ("Client disconnected: %s [%s]", ds.name, uid);
+
+ // FIXME: Update here or change semantics to "last insert"?
+ ds.timestamp = Timestamp.now ();
+ dirty = true;
+
+ if (running.lookup (uid).length == 0)
+ {
+ debug ("No remaining client running: %s [%s]",
+ ds.name, uid);
+ running.remove (uid);
+ ds.running = false;
+
+ data_source_disconnected (ds.to_variant ());
+ }
+ }
+ }
+
+ private bool flush ()
+ {
+ if (dirty)
+ {
+ Variant v = DataSources.to_variant (sources);
+ store_config ("registry", v);
+ dirty = false;
+ }
+ return true;
+ }
+ }
+
+ [ModuleInit]
+#if BUILTIN_EXTENSIONS
+ public static Type data_source_registry_init (TypeModule module)
+ {
+#else
+ public static Type extension_register (TypeModule module)
+ {
+#endif
+ return typeof (DataSourceRegistry);
+ }
+}
+
+// vim:expandtab:ts=4:sw=4
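
A data source (for example a logger plugin) talks to the DataSourceRegistry above through the same kind of D-Bus call. A minimal sketch, again assuming the org.gnome.zeitgeist.Engine bus name; the object path, interface and method semantics come from ds-registry.vala:

#!/usr/bin/env python
# Hypothetical data-source registration against the registry extension.
import dbus

bus = dbus.SessionBus()
registry = dbus.Interface(
    bus.get_object("org.gnome.zeitgeist.Engine",
                   "/org/gnome/zeitgeist/data_source_registry"),
    "org.gnome.zeitgeist.DataSourceRegistry")

# RegisterDataSource returns the data source's "enabled" flag; a well
# behaved logger stops pushing events once this comes back False.
enabled = registry.RegisterDataSource(
    "com.example.logger",       # unique_id (illustrative)
    "Example Logger",           # name
    "Logs example events",      # description
    [])                         # a(asaasay): no event templates
print "enabled:", enabled
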
=== added directory 'extensions/fts-python'
=== added file 'extensions/fts-python/Makefile.am'
--- extensions/fts-python/Makefile.am 1970-01-01 00:00:00 +0000
+++ extensions/fts-python/Makefile.am 2011-10-19 08:09:50 +0000
@@ -0,0 +1,20 @@
+NULL =
+
+ftsdir = $(pkgdatadir)/fts-python
+dist_fts_SCRIPTS = \
+ datamodel.py \
+ constants.py \
+ fts.py \
+ lrucache.py \
+ sql.py \
+ $(NULL)
+
+servicedir = $(DBUS_SERVICES_DIR)
+service_DATA = org.gnome.zeitgeist.fts.service
+
+org.gnome.zeitgeist.fts.service: org.gnome.zeitgeist.fts.service.in
+ $(AM_V_GEN)sed -e s!\@pkgdatadir\@!$(pkgdatadir)! < $< > $@
+org.gnome.zeitgeist.fts.service: Makefile
+
+EXTRA_DIST = org.gnome.zeitgeist.fts.service.in
+CLEANFILES = org.gnome.zeitgeist.fts.service
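
The sed rule above only substitutes @pkgdatadir@ into the D-Bus service template so the session bus can spawn the indexer on demand. A sketch of the same substitution in Python; the template body shown here is an assumption based on the FTS_DBUS_BUS_NAME constant in fts.py, since the .service.in file itself is not part of this excerpt:

# Illustration of the @pkgdatadir@ substitution performed by the Makefile rule.
pkgdatadir = "/usr/share/zeitgeist"   # normally filled in by automake

template = """[D-BUS Service]
Name=org.gnome.zeitgeist.SimpleIndexer
Exec=@pkgdatadir@/fts-python/fts.py
"""

with open("org.gnome.zeitgeist.fts.service", "w") as f:
    f.write(template.replace("@pkgdatadir@", pkgdatadir))
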
=== added file 'extensions/fts-python/constants.py'
--- extensions/fts-python/constants.py 1970-01-01 00:00:00 +0000
+++ extensions/fts-python/constants.py 2011-10-19 08:09:50 +0000
@@ -0,0 +1,71 @@
+# -.- coding: utf-8 -.-
+
+# Zeitgeist
+#
+# Copyright © 2009 Markus Korn <thekorn@xxxxxx>
+# Copyright © 2009-2010 Siegfried-Angel Gevatter Pujals <rainct@xxxxxxxxxx>
+# Copyright © 2009 Mikkel Kamstrup Erlandsen <mikkel.kamstrup@xxxxxxxxx>
+#
+# This program is free software: you can redistribute it and/or modify
+# it under the terms of the GNU Lesser General Public License as published by
+# the Free Software Foundation, either version 2.1 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU Lesser General Public License for more details.
+#
+# You should have received a copy of the GNU Lesser General Public License
+# along with this program. If not, see <http://www.gnu.org/licenses/>.
+
+import os
+import logging
+from xdg import BaseDirectory
+
+from zeitgeist.client import ZeitgeistDBusInterface
+
+__all__ = [
+ "log",
+ "get_engine",
+ "constants"
+]
+
+log = logging.getLogger("zeitgeist.engine")
+
+_engine = None
+def get_engine():
+ """ Get the running engine instance or create a new one. """
+ global _engine
+ if _engine is None or _engine.is_closed():
+ import main # _zeitgeist.engine.main
+ _engine = main.ZeitgeistEngine()
+ return _engine
+
+class _Constants:
+ # Directories
+ DATA_PATH = os.environ.get("ZEITGEIST_DATA_PATH",
+ BaseDirectory.save_data_path("bluebird"))
+ DATABASE_FILE = os.environ.get("ZEITGEIST_DATABASE_PATH",
+ os.path.join(DATA_PATH, "activity.sqlite"))
+ DATABASE_FILE_BACKUP = os.environ.get("ZEITGEIST_DATABASE_BACKUP_PATH",
+ os.path.join(DATA_PATH, "activity.sqlite.bck"))
+ DEFAULT_LOG_PATH = os.path.join(BaseDirectory.xdg_cache_home,
+ "zeitgeist", "daemon.log")
+
+ # D-Bus
+ DBUS_INTERFACE = ZeitgeistDBusInterface.INTERFACE_NAME
+ SIG_EVENT = "asaasay"
+
+ # Required version of DB schema
+ CORE_SCHEMA="core"
+ CORE_SCHEMA_VERSION = 4
+
+ USER_EXTENSION_PATH = os.path.join(DATA_PATH, "extensions")
+
+ # configure runtime cache for events
+ # default size is 2000
+ CACHE_SIZE = int(os.environ.get("ZEITGEIST_CACHE_SIZE", 2000))
+ log.debug("Cache size = %i" %CACHE_SIZE)
+
+constants = _Constants()
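
Because _Constants resolves its paths and cache size from the environment at import time, a test harness can point the whole FTS extension at a scratch area simply by exporting the ZEITGEIST_* variables before the module is imported. A minimal sketch (assumes the zeitgeist Python package is importable, since constants.py pulls in zeitgeist.client):

import os
import tempfile

# Overrides must be in place before "from constants import constants" runs.
scratch = tempfile.mkdtemp(prefix="zeitgeist-test-")
os.environ["ZEITGEIST_DATA_PATH"] = scratch
os.environ["ZEITGEIST_DATABASE_PATH"] = os.path.join(scratch, "activity.sqlite")
os.environ["ZEITGEIST_CACHE_SIZE"] = "100"   # default is 2000

from constants import constants
assert constants.DATA_PATH == scratch
assert constants.CACHE_SIZE == 100
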
=== added file 'extensions/fts-python/datamodel.py'
--- extensions/fts-python/datamodel.py 1970-01-01 00:00:00 +0000
+++ extensions/fts-python/datamodel.py 2011-10-19 08:09:50 +0000
@@ -0,0 +1,83 @@
+# -.- coding: utf-8 -.-
+
+# Zeitgeist
+#
+# Copyright © 2009 Mikkel Kamstrup Erlandsen <mikkel.kamstrup@xxxxxxxxx>
+# Copyright © 2009 Markus Korn <thekorn@xxxxxx>
+# Copyright © 2009 Seif Lotfy <seif@xxxxxxxxx>
+# Copyright © 2009-2010 Siegfried-Angel Gevatter Pujals <rainct@xxxxxxxxxx>
+#
+# This program is free software: you can redistribute it and/or modify
+# it under the terms of the GNU Lesser General Public License as published by
+# the Free Software Foundation, either version 2.1 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU Lesser General Public License for more details.
+#
+# You should have received a copy of the GNU Lesser General Public License
+# along with this program. If not, see <http://www.gnu.org/licenses/>.
+
+from zeitgeist.datamodel import Event as OrigEvent, Subject as OrigSubject, \
+ DataSource as OrigDataSource
+
+class Event(OrigEvent):
+
+ @staticmethod
+ def _to_unicode(obj):
+ """
+        Return a unicode representation of the given object.
+ If obj is None, return an empty string.
+ """
+ return unicode(obj) if obj is not None else u""
+
+ @staticmethod
+ def _make_dbus_sendable(obj):
+ """
+ Ensure that all fields in the event struct are non-None
+ """
+ for n, value in enumerate(obj[0]):
+ obj[0][n] = obj._to_unicode(value)
+ for subject in obj[1]:
+ for n, value in enumerate(subject):
+ subject[n] = obj._to_unicode(value)
+ # The payload require special handling, since it is binary data
+ # If there is indeed data here, we must not unicode encode it!
+ if obj[2] is None:
+ obj[2] = u""
+ elif isinstance(obj[2], unicode):
+ obj[2] = str(obj[2])
+ return obj
+
+ @staticmethod
+ def get_plain(ev):
+ """
+ Ensure that an Event instance is a Plain Old Python Object (popo),
+ without DBus wrappings etc.
+ """
+ popo = []
+ popo.append(map(unicode, ev[0]))
+ popo.append([map(unicode, subj) for subj in ev[1]])
+ # We need the check here so that if D-Bus gives us an empty
+ # byte array we don't serialize the text "dbus.Array(...)".
+ popo.append(str(ev[2]) if ev[2] else u'')
+ return popo
+
+class Subject(OrigSubject):
+ pass
+
+class DataSource(OrigDataSource):
+
+ @staticmethod
+ def get_plain(datasource):
+ for plaintype, props in {
+ unicode: (DataSource.Name, DataSource.Description),
+ lambda x: map(Event.get_plain, x): (DataSource.EventTemplates,),
+ bool: (DataSource.Running, DataSource.Enabled),
+ int: (DataSource.LastSeen,),
+ }.iteritems():
+ for prop in props:
+ datasource[prop] = plaintype(datasource[prop])
+ return tuple(datasource)
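
Event.get_plain() only relies on indexing, so a hand-built [event_metadata, subjects, payload] list in the same layout D-Bus delivers is enough to see what it produces; with a real reply the fields would be dbus.String and dbus.Array wrappers instead of plain strings. A small sketch (the literal values are illustrative only):

from datamodel import Event

wire_event = [
    ["1", "1319011200000", "", "", "application://gedit.desktop"],
    [["file:///tmp/notes.txt", "", "", "", "text/plain", "notes.txt", ""]],
    "",                        # payload: empty byte array
]

plain = Event.get_plain(wire_event)
print plain                    # plain unicode lists, safe to pickle or log
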
=== added file 'extensions/fts-python/fts.py'
--- extensions/fts-python/fts.py 1970-01-01 00:00:00 +0000
+++ extensions/fts-python/fts.py 2011-10-19 08:09:50 +0000
@@ -0,0 +1,1273 @@
+#!/usr/bin/env python
+# -.- coding: utf-8 -.-
+
+# Zeitgeist
+#
+# Copyright © 2009 Mikkel Kamstrup Erlandsen <mikkel.kamstrup@xxxxxxxxx>
+# Copyright © 2010 Canonical Ltd
+#
+# This program is free software: you can redistribute it and/or modify
+# it under the terms of the GNU Lesser General Public License as published by
+# the Free Software Foundation, either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU Lesser General Public License for more details.
+#
+# You should have received a copy of the GNU Lesser General Public License
+# along with this program. If not, see <http://www.gnu.org/licenses/>.
+#
+
+#
+# TODO
+#
+# - Delete events hook
+# - ? Filter on StorageState
+# - Throttle IO and CPU where possible
+
+import os, sys
+import time
+import pickle
+import dbus
+import sqlite3
+import dbus.service
+from xdg import BaseDirectory
+from xdg.DesktopEntry import DesktopEntry, xdg_data_dirs
+import logging
+import subprocess
+from xml.dom import minidom
+import xapian
+import os
+from Queue import Queue, Empty
+import threading
+from urllib import quote as url_escape, unquote as url_unescape
+import gobject, gio
+from cStringIO import StringIO
+
+from collections import defaultdict
+from array import array
+from zeitgeist.datamodel import Event as OrigEvent, StorageState, TimeRange, \
+ ResultType, get_timestamp_for_now, Interpretation, Symbol, NEGATION_OPERATOR, WILDCARD
+from datamodel import Event, Subject
+from constants import constants
+from zeitgeist.client import ZeitgeistClient, ZeitgeistDBusInterface
+from sql import get_default_cursor, unset_cursor, TableLookup, WhereClause
+from lrucache import LRUCache
+
+ZG_CLIENT = ZeitgeistClient()
+
+logging.basicConfig(level=logging.DEBUG)
+log = logging.getLogger("zeitgeist.fts")
+
+INDEX_FILE = os.path.join(constants.DATA_PATH, "bb.fts.index")
+INDEX_VERSION = "1"
+INDEX_LOCK = threading.Lock()
+FTS_DBUS_BUS_NAME = "org.gnome.zeitgeist.SimpleIndexer"
+FTS_DBUS_OBJECT_PATH = "/org/gnome/zeitgeist/index/activity"
+FTS_DBUS_INTERFACE = "org.gnome.zeitgeist.Index"
+
+FILTER_PREFIX_EVENT_INTERPRETATION = "ZGEI"
+FILTER_PREFIX_EVENT_MANIFESTATION = "ZGEM"
+FILTER_PREFIX_ACTOR = "ZGA"
+FILTER_PREFIX_SUBJECT_URI = "ZGSU"
+FILTER_PREFIX_SUBJECT_INTERPRETATION = "ZGSI"
+FILTER_PREFIX_SUBJECT_MANIFESTATION = "ZGSM"
+FILTER_PREFIX_SUBJECT_ORIGIN = "ZGSO"
+FILTER_PREFIX_SUBJECT_MIMETYPE = "ZGST"
+FILTER_PREFIX_SUBJECT_STORAGE = "ZGSS"
+FILTER_PREFIX_XDG_CATEGORY = "AC"
+
+VALUE_EVENT_ID = 0
+VALUE_TIMESTAMP = 1
+
+MAX_CACHE_BATCH_SIZE = constants.CACHE_SIZE/2
+
+# When sorting by of the COALESCING_RESULT_TYPES result types,
+# we need to fetch some extra events from the Xapian index because
+# the final result set will be coalesced on some property of the event
+COALESCING_RESULT_TYPES = [ \
+ ResultType.MostRecentSubjects,
+ ResultType.LeastRecentSubjects,
+ ResultType.MostPopularSubjects,
+ ResultType.LeastPopularSubjects,
+ ResultType.MostRecentActor,
+ ResultType.LeastRecentActor,
+ ResultType.MostPopularActor,
+ ResultType.LeastPopularActor,
+]
+
+MAX_TERM_LENGTH = 245
+
+
+class NegationNotSupported(ValueError):
+ pass
+
+class WildcardNotSupported(ValueError):
+ pass
+
+def parse_negation(kind, field, value, parse_negation=True):
+    """checks if value starts with the negation operator;
+    if it does but the field does not support negation,
+    a ValueError is raised.
+ This function returns a (value_without_negation, negation)-tuple
+ """
+ negation = False
+ if parse_negation and value.startswith(NEGATION_OPERATOR):
+ negation = True
+ value = value[len(NEGATION_OPERATOR):]
+ if negation and field not in kind.SUPPORTS_NEGATION:
+ raise NegationNotSupported("This field does not support negation")
+ return value, negation
+
+def parse_wildcard(kind, field, value):
+    """checks if value ends with a wildcard,
+ if value ends with a wildcard but the field does not support wildcards
+ a ValueError is raised.
+ This function returns a (value_without_wildcard, wildcard)-tuple
+ """
+ wildcard = False
+ if value.endswith(WILDCARD):
+ wildcard = True
+ value = value[:-len(WILDCARD)]
+ if wildcard and field not in kind.SUPPORTS_WILDCARDS:
+ raise WildcardNotSupported("This field does not support wildcards")
+ return value, wildcard
+
+def parse_operators(kind, field, value):
+ """runs both (parse_negation and parse_wildcard) parser functions
+ on query values, and handles the special case of Subject.Text correctly.
+ returns a (value_without_negation_and_wildcard, negation, wildcard)-tuple
+ """
+ try:
+ value, negation = parse_negation(kind, field, value)
+ except ValueError:
+ if kind is Subject and field == Subject.Text:
+ # we do not support negation of the text field,
+ # the text field starts with the NEGATION_OPERATOR
+ # so we handle this string as the content instead
+ # of an operator
+ negation = False
+ else:
+ raise
+ value, wildcard = parse_wildcard(kind, field, value)
+ return value, negation, wildcard
+
+
+def synchronized(lock):
+ """ Synchronization decorator. """
+ def wrap(f):
+ def newFunction(*args, **kw):
+ lock.acquire()
+ try:
+ return f(*args, **kw)
+ finally:
+ lock.release()
+ return newFunction
+ return wrap
+
+class Deletion:
+ """
+ A marker class that marks an event id for deletion
+ """
+ def __init__ (self, event_id):
+ self.event_id = event_id
+
+class Reindex:
+ """
+ Marker class that tells the worker thread to rebuild the entire index.
+    At construction time all events are pulled out of the zg_engine
+    argument and stored for later processing in the worker thread.
+    This avoids concurrent access to the ZG sqlite db from the worker thread.
+ """
+ def __init__ (self, zg_engine):
+ all_events = zg_engine._find_events(1, TimeRange.always(),
+ [], StorageState.Any,
+ sys.maxint,
+ ResultType.MostRecentEvents)
+ self.all_events = all_events
+
+class SearchEngineExtension (dbus.service.Object):
+ """
+ Full text indexing and searching extension for Zeitgeist
+ """
+ PUBLIC_METHODS = []
+
+ def __init__ (self):
+ bus_name = dbus.service.BusName(FTS_DBUS_BUS_NAME, bus=dbus.SessionBus())
+ dbus.service.Object.__init__(self, bus_name, FTS_DBUS_OBJECT_PATH)
+ self._indexer = Indexer()
+
+ ZG_CLIENT.install_monitor((0, 2**63 - 1), [],
+ self.pre_insert_event, self.post_delete_events)
+
+ def pre_insert_event(self, timerange, events):
+ for event in events:
+ self._indexer.index_event (event)
+
+ def post_delete_events (self, ids):
+ for _id in ids:
+ self._indexer.delete_event (_id)
+
+ @dbus.service.method(FTS_DBUS_INTERFACE,
+ in_signature="s(xx)a("+constants.SIG_EVENT+")uuu",
+ out_signature="a("+constants.SIG_EVENT+")u")
+ def Search(self, query_string, time_range, filter_templates, offset, count, result_type):
+ """
+ DBus method to perform a full text search against the contents of the
+ Zeitgeist log. Returns an array of events.
+ """
+ time_range = TimeRange(time_range[0], time_range[1])
+ filter_templates = map(Event, filter_templates)
+ events, hit_count = self._indexer.search(query_string, time_range,
+ filter_templates,
+ offset, count, result_type)
+ return self._make_events_sendable (events), hit_count
+
+ @dbus.service.method(FTS_DBUS_INTERFACE,
+ in_signature="",
+ out_signature="")
+ def ForceReindex(self):
+ """
+ DBus method to force a reindex of the entire Zeitgeist log.
+ This method is only intended for debugging purposes and is not
+ considered blessed public API.
+ """
+ log.debug ("Received ForceReindex request over DBus.")
+ self._indexer._queue.put (Reindex (self._indexer))
+
+ def _make_events_sendable(self, events):
+ return [NULL_EVENT if event is None else Event._make_dbus_sendable(event) for event in events]
+
+def mangle_uri (uri):
+ """
+ Converts a URI into an index- and query-friendly string. The problem
+ is that Xapian doesn't handle CAPITAL letters or most non-alphanumeric
+ symbols in a boolean term when it does prefix matching. The mangled
+ URIs returned from this function are suitable for boolean prefix searches.
+
+ IMPORTANT: This is a 1-way function! You can not convert back.
+ """
+ result = ""
+ for c in uri.lower():
+ if c in (": /"):
+ result += "_"
+ else:
+ result += c
+ return result
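+
+# Example outputs (illustrative inputs; the values follow from the loop above):
+#
+#   >>> mangle_uri("file:///home/User/Report.pdf")
+#   'file____home_user_report.pdf'
+#   >>> mangle_uri("http://Example.org/some page")
+#   'http___example.org_some_page'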
+
+def cap_string (s, nbytes=MAX_TERM_LENGTH):
+ """
+ If s has more than nbytes bytes (not characters) then cap it off
+ after nbytes bytes in a way that still produces a valid utf-8 string.
+
+ Assumes that s is a utf-8 string.
+
+ This function is useful for working with Xapian terms because Xapian has
+ a max term length of 245 (which is not very well documented, but see
+ http://xapian.org/docs/omega/termprefixes.html).
+ """
+ # Check if we can fast-path this string
+ if (len(s.encode("utf-8")) <= nbytes):
+ return s
+
+ # We use a StringIO here to avoid mem thrashing via naive
+ # string concatenation. See e.g. http://www.skymind.com/~ocrow/python_string/
+ buf = StringIO()
+ for char in s:
+ encoded = char.encode("utf-8")
+ if buf.tell() + len(encoded) > nbytes:
+ return buf.getvalue().decode("utf-8")
+ buf.write(encoded)
+
+ return unicode(buf.getvalue().decode("utf-8"))
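+
+# Byte-capping sketch (illustrative values; note the cap is in bytes, so a
+# trailing multi-byte character that would overflow the budget is dropped):
+#
+#   >>> cap_string(u"abcd\xe9", 5)   # u"abcdé" is 6 bytes as utf-8
+#   u'abcd'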
+
+
+def expand_type (type_prefix, uri):
+ """
+ Return a string with a Xapian query matching all child types of 'uri'
+ inside the Xapian prefix 'type_prefix'.
+ """
+ is_negation = uri.startswith(NEGATION_OPERATOR)
+ uri = uri[1:] if is_negation else uri
+ children = Symbol.find_child_uris_extended(uri)
+ children = [ "%s:%s" % (type_prefix, child) for child in children ]
+
+ result = " OR ".join(children)
+ return result if not is_negation else "NOT (%s)" % result
+
+class Indexer:
+ """
+ Abstraction of the FT indexer and search engine
+ """
+
+ QUERY_PARSER_FLAGS = xapian.QueryParser.FLAG_PHRASE | \
+ xapian.QueryParser.FLAG_BOOLEAN | \
+ xapian.QueryParser.FLAG_PURE_NOT | \
+ xapian.QueryParser.FLAG_LOVEHATE | \
+ xapian.QueryParser.FLAG_WILDCARD
+
+ def __init__ (self):
+
+ self._cursor = cursor = get_default_cursor()
+ os.environ["XAPIAN_CJK_NGRAM"] = "1"
+ self._interpretation = TableLookup(cursor, "interpretation")
+ self._manifestation = TableLookup(cursor, "manifestation")
+ self._mimetype = TableLookup(cursor, "mimetype")
+ self._actor = TableLookup(cursor, "actor")
+ self._event_cache = LRUCache(constants.CACHE_SIZE)
+
+ log.debug("Opening full text index: %s" % INDEX_FILE)
+ try:
+ self._index = xapian.WritableDatabase(INDEX_FILE, xapian.DB_CREATE_OR_OPEN)
+ except xapian.DatabaseError, e:
+ log.warn("Full text index corrupted: '%s'. Rebuilding index." % e)
+ self._index = xapian.WritableDatabase(INDEX_FILE, xapian.DB_CREATE_OR_OVERWRITE)
+ self._tokenizer = indexer = xapian.TermGenerator()
+ self._query_parser = xapian.QueryParser()
+ self._query_parser.set_database (self._index)
+ self._query_parser.add_prefix("name", "N")
+ self._query_parser.add_prefix("title", "N")
+ self._query_parser.add_prefix("site", "S")
+ self._query_parser.add_prefix("app", "A")
+ self._query_parser.add_boolean_prefix("zgei", FILTER_PREFIX_EVENT_INTERPRETATION)
+ self._query_parser.add_boolean_prefix("zgem", FILTER_PREFIX_EVENT_MANIFESTATION)
+ self._query_parser.add_boolean_prefix("zga", FILTER_PREFIX_ACTOR)
+ self._query_parser.add_prefix("zgsu", FILTER_PREFIX_SUBJECT_URI)
+ self._query_parser.add_boolean_prefix("zgsi", FILTER_PREFIX_SUBJECT_INTERPRETATION)
+ self._query_parser.add_boolean_prefix("zgsm", FILTER_PREFIX_SUBJECT_MANIFESTATION)
+ self._query_parser.add_prefix("zgso", FILTER_PREFIX_SUBJECT_ORIGIN)
+ self._query_parser.add_boolean_prefix("zgst", FILTER_PREFIX_SUBJECT_MIMETYPE)
+ self._query_parser.add_boolean_prefix("zgss", FILTER_PREFIX_SUBJECT_STORAGE)
+ self._query_parser.add_prefix("category", FILTER_PREFIX_XDG_CATEGORY)
+ self._query_parser.add_valuerangeprocessor(
+ xapian.NumberValueRangeProcessor(VALUE_EVENT_ID, "id", True))
+ self._query_parser.add_valuerangeprocessor(
+ xapian.NumberValueRangeProcessor(VALUE_TIMESTAMP, "ms", False))
+ self._query_parser.set_default_op(xapian.Query.OP_AND)
+ self._enquire = xapian.Enquire(self._index)
+
+ self._desktops = {}
+
+ gobject.threads_init()
+ self._may_run = True
+ self._queue = Queue(0)
+ self._worker = threading.Thread(target=self._worker_thread,
+ name="IndexWorker")
+ self._worker.daemon = True
+
+ # We need to defer the index checking until after ZG has completed
+ # full setup. Hence the idle handler.
+ # We also don't start the worker until after we've checked the index
+ gobject.idle_add (self._check_index_and_start_worker)
+
+ @synchronized (INDEX_LOCK)
+ def _check_index_and_start_worker (self):
+ """
+ Check whether we need a rebuild of the index.
+ If the index version is outdated or the index is empty, a full
+ rebuild is queued.
+
+ This method should be called from the main thread and only once.
+ It starts the worker thread as a side effect.
+
+ We are clearing the queue, because there may be a race when an
+ event insertion / deletion is already queued and our index
+ is corrupted. Creating a new queue instance should be safe,
+ because we're running in main thread as are the index_event
+ and delete_event methods, and the worker thread wasn't yet
+ started.
+ """
+ if self._index.get_metadata("fts_index_version") != INDEX_VERSION:
+ log.info("Index must be upgraded. Doing full rebuild")
+ self._queue = Queue(0)
+ self._queue.put(Reindex(self))
+ elif self._index.get_doccount() == 0:
+ # If the index is empty we trigger a rebuild
+ # We must delay reindexing until after the engine is done setting up
+ log.info("Empty index detected. Doing full rebuild")
+ self._queue = Queue(0)
+ self._queue.put(Reindex(self))
+
+ # Now that we've checked the index from the main thread we can start the worker
+ self._worker.start()
+
+ def index_event (self, event):
+ """
+ This method schedules an event for indexing. It returns immediately and
+ defers the actual work to a bottom half thread. This means that it
+ will not block the main loop of the Zeitgeist daemon while indexing
+ (which may be a heavy operation).
+ """
+ self._queue.put (event)
+ return event
+
+ def delete_event (self, event_id):
+ """
+ Remove an event from the index given its event id
+ """
+ self._queue.put (Deletion(event_id))
+ return
+
+ @synchronized (INDEX_LOCK)
+ def search (self, query_string, time_range=None, filters=None, offset=0, maxhits=10, result_type=100):
+ """
+ Do a full text search over the indexed corpus. The `result_type`
+ parameter may be a zeitgeist.datamodel.ResultType or 100. In case it is
+ 100 the textual relevancy of the search engine will be used to sort the
+ results. Result type 100 is the fastest (and default) mode.
+
+ The filters argument should be a list of event templates.
+ """
+ # Expand event template filters if necessary
+ if filters:
+ query_string = "(%s) AND (%s)" % (query_string, self._compile_event_filter_query (filters))
+
+ # Expand time range value query
+ if time_range and not time_range.is_always():
+ query_string = "(%s) AND (%s)" % (query_string, self._compile_time_range_filter_query (time_range))
+
+ # If the result type coalesces the events we need to fetch some extra
+ # events from the index to have a chance of actually holding 'maxhits'
+ # unique events
+ if result_type in COALESCING_RESULT_TYPES:
+ raw_maxhits = maxhits * 3
+ else:
+ raw_maxhits = maxhits
+
+ # When not sorting by relevance, we fetch the results from Xapian sorted
+ # by timestamp. That minimizes the skew we get from otherwise doing a
+ # relevancy-ranked Xapian query and then re-sorting with Zeitgeist. The
+ # "skew" is that low-relevancy results may still have the highest timestamp.
+ if result_type == 100:
+ self._enquire.set_sort_by_relevance()
+ else:
+ self._enquire.set_sort_by_value(VALUE_TIMESTAMP, True)
+
+ # Allow wildcards
+ query_start = time.time()
+ query = self._query_parser.parse_query (query_string,
+ self.QUERY_PARSER_FLAGS)
+ self._enquire.set_query (query)
+ hits = self._enquire.get_mset (offset, raw_maxhits)
+ hit_count = hits.get_matches_estimated()
+ log.debug("Search '%s' gave %s hits in %sms" %
+ (query_string, hits.get_matches_estimated(), (time.time() - query_start)*1000))
+
+ if result_type == 100:
+ event_ids = []
+ for m in hits:
+ event_id = int(xapian.sortable_unserialise(
+ m.document.get_value(VALUE_EVENT_ID)))
+ event_ids.append (event_id)
+ if event_ids:
+ return self.get_events(event_ids), hit_count
+ else:
+ return [], 0
+ else:
+ templates = []
+ for m in hits:
+ event_id = int(xapian.sortable_unserialise(
+ m.document.get_value(VALUE_EVENT_ID)))
+ ev = Event()
+ ev[0][Event.Id] = str(event_id)
+ templates.append(ev)
+ if templates:
+ return self._find_events(1, TimeRange.always(),
+ templates,
+ StorageState.Any,
+ maxhits,
+ result_type), hit_count
+ else:
+ return [], 0
+
+ def _worker_thread (self):
+ is_dirty = False
+ while self._may_run:
+ # FIXME: Throttle IO and CPU
+ try:
+ # If we are dirty wait a while before we flush,
+ # or if we are clean wait indefinitely to avoid
+ # needless wakeups
+ if is_dirty:
+ event = self._queue.get(True, 0.5)
+ else:
+ event = self._queue.get(True)
+
+ if isinstance (event, Deletion):
+ self._delete_event_real (event.event_id)
+ elif isinstance (event, Reindex):
+ self._reindex (event.all_events)
+ else:
+ self._index_event_real (event)
+
+ is_dirty = True
+ except Empty:
+ if is_dirty:
+ # Write changes to disk
+ log.debug("Committing FTS index")
+ self._index.flush()
+ is_dirty = False
+ else:
+ log.debug("No changes to index. Sleeping")
+
+ @synchronized (INDEX_LOCK)
+ def _reindex (self, event_list):
+ """
+ Index everything in the ZG log. The argument must be a list
+ of events. Typically extracted by a Reindex instance.
+ Only call from worker thread as it writes to the db and Xapian
+ is *not* thread safe (only single-writer-multiple-reader).
+ """
+ self._index.close ()
+ self._index = xapian.WritableDatabase(INDEX_FILE, xapian.DB_CREATE_OR_OVERWRITE)
+ self._query_parser.set_database (self._index)
+ self._enquire = xapian.Enquire(self._index)
+ # Register that this index was built with CJK enabled
+ self._index.set_metadata("fts_index_version", INDEX_VERSION)
+ log.info("Preparing to rebuild index with %s events" % len(event_list))
+ for e in event_list : self._queue.put(e)
+
+ @synchronized (INDEX_LOCK)
+ def _delete_event_real (self, event_id):
+ """
+ Look up the doc id given an event id and remove the xapian.Document
+ for that doc id.
+ Note: This is slow, but there's not much we can do about it
+ """
+ try:
+ _id = xapian.sortable_serialise(float(event_id))
+ query = xapian.Query(xapian.Query.OP_VALUE_RANGE,
+ VALUE_EVENT_ID, _id, _id)
+
+ self._enquire.set_query (query)
+ hits = self._enquire.get_mset (0, 10)
+
+ total = hits.get_matches_estimated()
+ if total > 1:
+ log.warning ("More than one event found with id '%s'" % event_id)
+ elif total <= 0:
+ log.debug ("No event for id '%s'" % event_id)
+ return
+
+ for m in hits:
+ log.debug("Deleting event '%s' with docid '%s'" %
+ (event_id, m.docid))
+ self._index.delete_document(m.docid)
+ except Exception, e:
+ log.error("Failed to delete event '%s': %s" % (event_id, e))
+
+ def _split_uri (self, uri):
+ """
+ Returns a (scheme, host, path) triple extracted from `uri`
+ """
+ i = uri.find(":")
+ if i == -1 :
+ scheme = ""
+ host = ""
+ path = uri
+ else:
+ scheme = uri[:i]
+ host = ""
+ path = ""
+
+ if uri[i+1] == "/" and uri[i+2] == "/":
+ j = uri.find("/", i+3)
+ if j == -1 :
+ host = uri[i+3:]
+ else:
+ host = uri[i+3:j]
+ path = uri[j:]
+ else:
+ host = uri[i+1:]
+
+ # Strip out URI query part
+ i = path.find("?")
+ if i != -1:
+ path = path[:i]
+
+ return scheme, host, path
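+
+ # A few traced examples of what this returns (illustrative URIs):
+ #
+ #   "http://example.org/some/page?q=1" -> ("http", "example.org", "/some/page")
+ #   "file:///tmp/report.pdf"           -> ("file", "", "/tmp/report.pdf")
+ #   "/tmp/report.pdf"                  -> ("", "", "/tmp/report.pdf")
+ #   "mailto:jane.doe@example.org"      -> ("mailto", "jane.doe@example.org", "")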
+
+ def _get_desktop_entry (self, app_id):
+ """
+ Return the xdg.DesktopEntry.DesktopEntry for `app_id`, or None in case
+ no desktop file is found for the given desktop id.
+ """
+ if app_id in self._desktops:
+ return self._desktops[app_id]
+
+ for datadir in xdg_data_dirs:
+ path = os.path.join(datadir, "applications", app_id)
+ if os.path.exists(path):
+ try:
+ desktop = DesktopEntry(path)
+ self._desktops[app_id] = desktop
+ return desktop
+ except Exception, e:
+ log.warning("Unable to load %s: %s" % (path, e))
+ return None
+
+ return None
+
+ def _index_actor (self, actor):
+ """
+ Takes an actor as a path to a .desktop file or app:// uri
+ and indexes the contents of the corresponding .desktop file
+ into the document currently set for self._tokenizer.
+ """
+ if not actor : return
+
+ # Get the path of the .desktop file and convert it to
+ # an app id (eg. 'gedit.desktop')
+ scheme, host, path = self._split_uri(url_unescape (actor))
+ if not path:
+ path = host
+
+ if not path :
+ log.debug("Unable to determine application id for %s" % actor)
+ return
+
+ if path.startswith("/") :
+ path = os.path.basename(path)
+
+ desktop = self._get_desktop_entry(path)
+ if desktop:
+ if not desktop.getNoDisplay():
+ self._tokenizer.index_text(desktop.getName(), 5)
+ self._tokenizer.index_text(desktop.getName(), 5, "A")
+ self._tokenizer.index_text(desktop.getGenericName(), 5)
+ self._tokenizer.index_text(desktop.getGenericName(), 5, "A")
+ self._tokenizer.index_text(desktop.getComment(), 2)
+ self._tokenizer.index_text(desktop.getComment(), 2, "A")
+
+ doc = self._tokenizer.get_document()
+ for cat in desktop.getCategories():
+ doc.add_boolean_term(FILTER_PREFIX_XDG_CATEGORY+cat.lower())
+ else:
+ log.debug("Unable to look up app info for %s" % actor)
+
+
+ def _index_uri (self, uri):
+ """
+ Index `uri` into the document currently set on self._tokenizer
+ """
+ # File URIs and paths are indexed in one way, and all others,
+ # usually web URIs, are indexed in another way, because the latter may
+ # contain a domain name etc. that we want to rank differently
+ scheme, host, path = self._split_uri (url_unescape (uri))
+ if scheme == "file" or not scheme:
+ path, name = os.path.split(path)
+ self._tokenizer.index_text(name, 5)
+ self._tokenizer.index_text(name, 5, "N")
+
+ # Index parent names with descending weight
+ weight = 5
+ while path and name:
+ weight = weight / 1.5
+ path, name = os.path.split(path)
+ self._tokenizer.index_text(name, int(weight))
+
+ elif scheme == "mailto":
+ tokens = host.split("@")
+ name = tokens[0]
+ self._tokenizer.index_text(name, 6)
+ if len(tokens) > 1:
+ self._tokenizer.index_text(" ".join[1:], 1)
+ else:
+ # We're cautious about indexing the path components of
+ # non-file URIs as some websites use *extremely* long
+ # and useless URLs
+ path, name = os.path.split(path)
+ if len(name) > 30 : name = name[:30]
+ if len(path) > 30 : path = path[:30]
+ if name:
+ self._tokenizer.index_text(name, 5)
+ self._tokenizer.index_text(name, 5, "N")
+ if path:
+ self._tokenizer.index_text(path, 1)
+ self._tokenizer.index_text(path, 1, "N")
+ if host:
+ self._tokenizer.index_text(host, 2)
+ self._tokenizer.index_text(host, 2, "N")
+ self._tokenizer.index_text(host, 2, "S")
+
+ def _index_text (self, text):
+ """
+ Index `text` as raw text data for the document currently
+ set on self._tokenizer. The text is assumed to be a primary
+ description of the subject, such as the basename of a file.
+
+ Primary use is for subject.text
+ """
+ self._tokenizer.index_text(text, 5)
+
+ def _index_contents (self, uri):
+ # xmlindexer doesn't extract words for URIs, only for file paths
+
+ # FIXME: IONICE and NICE on xmlindexer
+
+ path = uri.replace("file://", "")
+ xmlindexer = subprocess.Popen(['xmlindexer', path],
+ stdout=subprocess.PIPE)
+ xml = xmlindexer.communicate()[0].strip()
+ xmlindexer.wait()
+
+ dom = minidom.parseString(xml)
+ text_nodes = dom.getElementsByTagName("text")
+ lines = []
+ if text_nodes:
+ for line in text_nodes[0].childNodes:
+ lines.append(line.data)
+
+ if lines:
+ self._tokenizer.index_text (" ".join(lines))
+
+
+ def _add_doc_filters (self, event, doc):
+ """Adds the filtering rules to the doc. Filtering rules will
+ not affect the relevancy ranking of the event/doc"""
+ if event.interpretation:
+ doc.add_boolean_term (cap_string(FILTER_PREFIX_EVENT_INTERPRETATION+event.interpretation))
+ if event.manifestation:
+ doc.add_boolean_term (cap_string(FILTER_PREFIX_EVENT_MANIFESTATION+event.manifestation))
+ if event.actor:
+ doc.add_boolean_term (cap_string(FILTER_PREFIX_ACTOR+mangle_uri(event.actor)))
+
+ for su in event.subjects:
+ if su.uri:
+ doc.add_boolean_term (cap_string(FILTER_PREFIX_SUBJECT_URI+mangle_uri(su.uri)))
+ if su.interpretation:
+ doc.add_boolean_term (cap_string(FILTER_PREFIX_SUBJECT_INTERPRETATION+su.interpretation))
+ if su.manifestation:
+ doc.add_boolean_term (cap_string(FILTER_PREFIX_SUBJECT_MANIFESTATION+su.manifestation))
+ if su.origin:
+ doc.add_boolean_term (cap_string(FILTER_PREFIX_SUBJECT_ORIGIN+mangle_uri(su.origin)))
+ if su.mimetype:
+ doc.add_boolean_term (cap_string(FILTER_PREFIX_SUBJECT_MIMETYPE+su.mimetype))
+ if su.storage:
+ doc.add_boolean_term (cap_string(FILTER_PREFIX_SUBJECT_STORAGE+su.storage))
+
+ @synchronized (INDEX_LOCK)
+ def _index_event_real (self, event):
+ if not isinstance (event, OrigEvent):
+ log.error("Not an Event, found: %s" % type(event))
+ if not event.id:
+ log.warning("Not indexing event. Event has no id")
+ return
+
+ try:
+ doc = xapian.Document()
+ doc.add_value (VALUE_EVENT_ID,
+ xapian.sortable_serialise(float(event.id)))
+ doc.add_value (VALUE_TIMESTAMP,
+ xapian.sortable_serialise(float(event.timestamp)))
+ self._tokenizer.set_document (doc)
+
+ self._index_actor (event.actor)
+
+ for subject in event.subjects:
+ if not subject.uri : continue
+
+ # By spec URIs can have arbitrary length. In reality that's just silly.
+ # The general online "rule" is to keep URLs less than 2k so we just
+ # choose to enforce that
+ if len(subject.uri) > 2000:
+ log.info ("URI too long (%s). Discarding: %s..."% (len(subject.uri), subject.uri[:30]))
+ return
+ log.debug("Indexing '%s'" % subject.uri)
+
+ self._index_uri (subject.uri)
+ self._index_text (subject.text)
+
+ # If the subject URI is an actor, we index the .desktop also
+ if subject.uri.startswith ("application://"):
+ self._index_actor (subject.uri)
+
+ # File contents indexing disabled for now...
+ #self._index_contents (subject.uri)
+
+ # FIXME: Possibly index payloads when we have apriori knowledge
+
+ self._add_doc_filters (event, doc)
+ self._index.add_document (doc)
+
+ except Exception, e:
+ log.error("Error indexing event: %s" % e)
+
+ def _compile_event_filter_query (self, events):
+ """Takes a list of event templates and compiles a filter query
+ based on their interpretations, manifestations, and actors,
+ for both events and subjects.
+
+ All fields within the same event will be ANDed and each template
+ will be ORed with the others. Like elsewhere in Zeitgeist the
+ type tree of the interpretations and manifestations will be expanded
+ to match all child symbols as well
+ """
+ query = []
+ for event in events:
+ if not isinstance(event, Event):
+ raise TypeError("Expected Event. Found %s" % type(event))
+
+ tmpl = []
+ if event.interpretation :
+ tmpl.append(expand_type("zgei", event.interpretation))
+ if event.manifestation :
+ tmpl.append(expand_type("zgem", event.manifestation))
+ if event.actor : tmpl.append("zga:%s" % mangle_uri(event.actor))
+ for su in event.subjects:
+ if su.uri :
+ tmpl.append("zgsu:%s" % mangle_uri(su.uri))
+ if su.interpretation :
+ tmpl.append(expand_type("zgsi", su.interpretation))
+ if su.manifestation :
+ tmpl.append(expand_type("zgsm", su.manifestation))
+ if su.origin :
+ tmpl.append("zgso:%s" % mangle_uri(su.origin))
+ if su.mimetype :
+ tmpl.append("zgst:%s" % su.mimetype)
+ if su.storage :
+ tmpl.append("zgss:%s" % su.storage)
+
+ tmpl = "(" + ") AND (".join(tmpl) + ")"
+ query.append(tmpl)
+
+ return " OR ".join(query)
+
+ def _compile_time_range_filter_query (self, time_range):
+ """Takes a TimeRange and compiles a range query for it"""
+
+ if not isinstance(time_range, TimeRange):
+ raise TypeError("Expected TimeRange, but found %s" % type(time_range))
+
+ return "%s..%sms" % (time_range.begin, time_range.end)
+
+ def _get_event_from_row(self, row):
+ event = Event()
+ event[0][Event.Id] = row["id"] # Id property is read-only in the public API
+ event.timestamp = row["timestamp"]
+ for field in ("interpretation", "manifestation", "actor"):
+ # Try to get the event attribute from the row using the field's id.
+ # If the attribute does not exist we abort the attribute fetching and
+ # return None instead of crashing.
+ try:
+ setattr(event, field, getattr(self, "_" + field).value(row[field]))
+ except KeyError, e:
+ log.error("Event %i broken: Table %s has no id %i" \
+ %(row["id"], field, row[field]))
+ return None
+ event.origin = row["event_origin_uri"] or ""
+ event.payload = row["payload"] or "" # default payload: empty string
+ return event
+
+ def _get_subject_from_row(self, row):
+ subject = Subject()
+ for field in ("uri", "text", "storage"):
+ setattr(subject, field, row["subj_" + field])
+ subject.origin = row["subj_origin_uri"]
+ if row["subj_current_uri"]:
+ subject.current_uri = row["subj_current_uri"]
+ for field in ("interpretation", "manifestation", "mimetype"):
+ # Try to get the subject attribute from the row using the field's id.
+ # If the attribute does not exist we abort the attribute fetching and
+ # return None instead of crashing.
+ try:
+ setattr(subject, field,
+ getattr(self, "_" + field).value(row["subj_" + field]))
+ except KeyError, e:
+ log.error("Event %i broken: Table %s has no id %i" \
+ %(row["id"], field, row["subj_" + field]))
+ return None
+ return subject
+
+ def get_events(self, ids, sender=None):
+ """
+ Look up a list of events.
+ """
+
+ t = time.time()
+
+ if not ids:
+ return []
+
+ # Split ids into cached and uncached
+ uncached_ids = array("i")
+ cached_ids = array("i")
+
+ # If the batch of ids is larger than MAX_CACHE_BATCH_SIZE, bypass the cache
+ use_cache = True
+ if len(ids) > MAX_CACHE_BATCH_SIZE:
+ use_cache = False
+ if not use_cache:
+ uncached_ids = ids
+ else:
+ for id in ids:
+ if id in self._event_cache:
+ cached_ids.append(id)
+ else:
+ uncached_ids.append(id)
+
+ id_hash = defaultdict(lambda: array("i"))
+ for n, id in enumerate(ids):
+ # the same id can be at multiple places (LP: #673916)
+ # cache all of them
+ id_hash[id].append(n)
+
+ # If we are not able to get an event by the given id
+ # append None instead of raising an Error. The client
+ # might simply have requested an event that has been
+ # deleted
+ events = {}
+ sorted_events = [None]*len(ids)
+
+ for id in cached_ids:
+ event = self._event_cache[id]
+ if event:
+ for n in id_hash[event.id]:
+ # insert the event into all necessary spots (LP: #673916)
+ sorted_events[n] = event
+
+ # Get uncached events
+ rows = self._cursor.execute("""
+ SELECT * FROM event_view
+ WHERE id IN (%s)
+ """ % ",".join("%d" % _id for _id in uncached_ids))
+
+ time_get_uncached = time.time() - t
+ t = time.time()
+
+ t_get_event = 0
+ t_get_subject = 0
+ t_apply_get_hooks = 0
+
+ row_counter = 0
+ for row in rows:
+ row_counter += 1
+ # Assumption: all rows belonging to the same event (one row per
+ # subject) appear in consecutive order.
+ t_get_event -= time.time()
+ event = self._get_event_from_row(row)
+ t_get_event += time.time()
+
+ if event:
+ # Check for existing event.id in event to attach
+ # other subjects to it
+ if event.id not in events:
+ events[event.id] = event
+ else:
+ event = events[event.id]
+
+ t_get_subject -= time.time()
+ subject = self._get_subject_from_row(row)
+ t_get_subject += time.time()
+ # Check if the subject has a proper value. If not, something went
+ # wrong while trying to fetch the subject from the row, so instead
+ # of failing and raising an error we silently skip the event.
+ if subject:
+ event.append_subject(subject)
+ if use_cache and not event.payload:
+ self._event_cache[event.id] = event
+ if event is not None:
+ for n in id_hash[event.id]:
+ # insert the event into all necessary spots (LP: #673916)
+ sorted_events[n] = event
+ # Avoid caching events with payloads to keep the cache size (in MB)
+ # at a decent level
+
+
+ log.debug("Got %d raw events in %fs" % (row_counter, time_get_uncached))
+ log.debug("Got %d events in %fs" % (len(sorted_events), time.time()-t))
+ log.debug(" Where time spent in _get_event_from_row in %fs" % (t_get_event))
+ log.debug(" Where time spent in _get_subject_from_row in %fs" % (t_get_subject))
+ log.debug(" Where time spent in apply_get_hooks in %fs" % (t_apply_get_hooks))
+ return sorted_events
+
+ def _find_events(self, return_mode, time_range, event_templates,
+ storage_state, max_events, order, sender=None):
+ """
+ Accepts 'event_templates' as either a real list of Events or as
+ a list of tuples (event_data, subject_data) as we do in the
+ DBus API.
+
+ Return modes:
+ - 0: IDs.
+ - 1: Events.
+ """
+ t = time.time()
+
+ where = self._build_sql_event_filter(time_range, event_templates,
+ storage_state)
+
+ if not where.may_have_results():
+ return []
+
+ if return_mode == 0:
+ sql = "SELECT DISTINCT id FROM event_view"
+ elif return_mode == 1:
+ sql = "SELECT id FROM event_view"
+ else:
+ raise NotImplementedError, "Unsupported return_mode."
+
+ wheresql = " WHERE %s" % where.sql if where else ""
+
+ def group_and_sort(field, wheresql, time_asc=False, count_asc=None,
+ aggregation_type='max'):
+
+ args = {
+ 'field': field,
+ 'aggregation_type': aggregation_type,
+ 'where_sql': wheresql,
+ 'time_sorting': 'ASC' if time_asc else 'DESC',
+ 'aggregation_sql': '',
+ 'order_sql': '',
+ }
+
+ if count_asc is not None:
+ args['aggregation_sql'] = ', COUNT(%s) AS num_events' % \
+ field
+ args['order_sql'] = 'num_events %s,' % \
+ ('ASC' if count_asc else 'DESC')
+
+ return """
+ NATURAL JOIN (
+ SELECT %(field)s,
+ %(aggregation_type)s(timestamp) AS timestamp
+ %(aggregation_sql)s
+ FROM event_view %(where_sql)s
+ GROUP BY %(field)s)
+ GROUP BY %(field)s
+ ORDER BY %(order_sql)s timestamp %(time_sorting)s
+ """ % args
+
+ if order == ResultType.MostRecentEvents:
+ sql += wheresql + " ORDER BY timestamp DESC"
+ elif order == ResultType.LeastRecentEvents:
+ sql += wheresql + " ORDER BY timestamp ASC"
+ elif order == ResultType.MostRecentEventOrigin:
+ sql += group_and_sort("origin", wheresql, time_asc=False)
+ elif order == ResultType.LeastRecentEventOrigin:
+ sql += group_and_sort("origin", wheresql, time_asc=True)
+ elif order == ResultType.MostPopularEventOrigin:
+ sql += group_and_sort("origin", wheresql, time_asc=False,
+ count_asc=False)
+ elif order == ResultType.LeastPopularEventOrigin:
+ sql += group_and_sort("origin", wheresql, time_asc=True,
+ count_asc=True)
+ elif order == ResultType.MostRecentSubjects:
+ # Remember, event.subj_id identifies the subject URI
+ sql += group_and_sort("subj_id", wheresql, time_asc=False)
+ elif order == ResultType.LeastRecentSubjects:
+ sql += group_and_sort("subj_id", wheresql, time_asc=True)
+ elif order == ResultType.MostPopularSubjects:
+ sql += group_and_sort("subj_id", wheresql, time_asc=False,
+ count_asc=False)
+ elif order == ResultType.LeastPopularSubjects:
+ sql += group_and_sort("subj_id", wheresql, time_asc=True,
+ count_asc=True)
+ elif order == ResultType.MostRecentCurrentUri:
+ sql += group_and_sort("subj_id_current", wheresql, time_asc=False)
+ elif order == ResultType.LeastRecentCurrentUri:
+ sql += group_and_sort("subj_id_current", wheresql, time_asc=True)
+ elif order == ResultType.MostPopularCurrentUri:
+ sql += group_and_sort("subj_id_current", wheresql, time_asc=False,
+ count_asc=False)
+ elif order == ResultType.LeastPopularCurrentUri:
+ sql += group_and_sort("subj_id_current", wheresql, time_asc=True,
+ count_asc=True)
+ elif order == ResultType.MostRecentActor:
+ sql += group_and_sort("actor", wheresql, time_asc=False)
+ elif order == ResultType.LeastRecentActor:
+ sql += group_and_sort("actor", wheresql, time_asc=True)
+ elif order == ResultType.MostPopularActor:
+ sql += group_and_sort("actor", wheresql, time_asc=False,
+ count_asc=False)
+ elif order == ResultType.LeastPopularActor:
+ sql += group_and_sort("actor", wheresql, time_asc=True,
+ count_asc=True)
+ elif order == ResultType.OldestActor:
+ sql += group_and_sort("actor", wheresql, time_asc=True,
+ aggregation_type="min")
+ elif order == ResultType.MostRecentOrigin:
+ sql += group_and_sort("subj_origin", wheresql, time_asc=False)
+ elif order == ResultType.LeastRecentOrigin:
+ sql += group_and_sort("subj_origin", wheresql, time_asc=True)
+ elif order == ResultType.MostPopularOrigin:
+ sql += group_and_sort("subj_origin", wheresql, time_asc=False,
+ count_asc=False)
+ elif order == ResultType.LeastPopularOrigin:
+ sql += group_and_sort("subj_origin", wheresql, time_asc=True,
+ count_asc=True)
+ elif order == ResultType.MostRecentSubjectInterpretation:
+ sql += group_and_sort("subj_interpretation", wheresql,
+ time_asc=False)
+ elif order == ResultType.LeastRecentSubjectInterpretation:
+ sql += group_and_sort("subj_interpretation", wheresql,
+ time_asc=True)
+ elif order == ResultType.MostPopularSubjectInterpretation:
+ sql += group_and_sort("subj_interpretation", wheresql,
+ time_asc=False, count_asc=False)
+ elif order == ResultType.LeastPopularSubjectInterpretation:
+ sql += group_and_sort("subj_interpretation", wheresql,
+ time_asc=True, count_asc=True)
+ elif order == ResultType.MostRecentMimeType:
+ sql += group_and_sort("subj_mimetype", wheresql, time_asc=False)
+ elif order == ResultType.LeastRecentMimeType:
+ sql += group_and_sort("subj_mimetype", wheresql, time_asc=True)
+ elif order == ResultType.MostPopularMimeType:
+ sql += group_and_sort("subj_mimetype", wheresql, time_asc=False,
+ count_asc=False)
+ elif order == ResultType.LeastPopularMimeType:
+ sql += group_and_sort("subj_mimetype", wheresql, time_asc=True,
+ count_asc=True)
+
+ if max_events > 0:
+ sql += " LIMIT %d" % max_events
+ result = array("i", self._cursor.execute(sql, where.arguments).fetch(0))
+
+ if return_mode == 0:
+ log.debug("Found %d event IDs in %fs" % (len(result), time.time()- t))
+ elif return_mode == 1:
+ log.debug("Found %d events in %fs" % (len(result), time.time()- t))
+ result = self.get_events(ids=result, sender=sender)
+ else:
+ raise Exception("%d" % return_mode)
+
+ return result
+
+ @staticmethod
+ def _build_templates(templates):
+ for event_template in templates:
+ event_data = event_template[0]
+ for subject in (event_template[1] or (Subject(),)):
+ yield Event((event_data, [], None)), Subject(subject)
+
+ def _build_sql_from_event_templates(self, templates):
+
+ where_or = WhereClause(WhereClause.OR)
+
+ for template in templates:
+ event_template = Event((template[0], [], None))
+ if template[1]:
+ subject_templates = [Subject(data) for data in template[1]]
+ else:
+ subject_templates = None
+
+ subwhere = WhereClause(WhereClause.AND)
+
+ if event_template.id:
+ subwhere.add("id = ?", event_template.id)
+
+ try:
+ value, negation, wildcard = parse_operators(Event, Event.Interpretation, event_template.interpretation)
+ # Expand event interpretation children
+ event_interp_where = WhereClause(WhereClause.OR, negation)
+ for child_interp in (Symbol.find_child_uris_extended(value)):
+ if child_interp:
+ event_interp_where.add_text_condition("interpretation",
+ child_interp, like=wildcard, cache=self._interpretation)
+ if event_interp_where:
+ subwhere.extend(event_interp_where)
+
+ value, negation, wildcard = parse_operators(Event, Event.Manifestation, event_template.manifestation)
+ # Expand event manifestation children
+ event_manif_where = WhereClause(WhereClause.OR, negation)
+ for child_manif in (Symbol.find_child_uris_extended(value)):
+ if child_manif:
+ event_manif_where.add_text_condition("manifestation",
+ child_manif, like=wildcard, cache=self._manifestation)
+ if event_manif_where:
+ subwhere.extend(event_manif_where)
+
+ value, negation, wildcard = parse_operators(Event, Event.Actor, event_template.actor)
+ if value:
+ subwhere.add_text_condition("actor", value, wildcard, negation, cache=self._actor)
+
+ value, negation, wildcard = parse_operators(Event, Event.Origin, event_template.origin)
+ if value:
+ subwhere.add_text_condition("origin", value, wildcard, negation)
+
+ if subject_templates is not None:
+ for subject_template in subject_templates:
+ value, negation, wildcard = parse_operators(Subject, Subject.Interpretation, subject_template.interpretation)
+ # Expand subject interpretation children
+ su_interp_where = WhereClause(WhereClause.OR, negation)
+ for child_interp in (Symbol.find_child_uris_extended(value)):
+ if child_interp:
+ su_interp_where.add_text_condition("subj_interpretation",
+ child_interp, like=wildcard, cache=self._interpretation)
+ if su_interp_where:
+ subwhere.extend(su_interp_where)
+
+ value, negation, wildcard = parse_operators(Subject, Subject.Manifestation, subject_template.manifestation)
+ # Expand subject manifestation children
+ su_manif_where = WhereClause(WhereClause.OR, negation)
+ for child_manif in (Symbol.find_child_uris_extended(value)):
+ if child_manif:
+ su_manif_where.add_text_condition("subj_manifestation",
+ child_manif, like=wildcard, cache=self._manifestation)
+ if su_manif_where:
+ subwhere.extend(su_manif_where)
+
+ # FIXME: Expand mime children as well.
+ # Right now we only do exact matching for mimetypes
+ # thekorn: this will be fixed when wildcards are supported
+ value, negation, wildcard = parse_operators(Subject, Subject.Mimetype, subject_template.mimetype)
+ if value:
+ subwhere.add_text_condition("subj_mimetype",
+ value, wildcard, negation, cache=self._mimetype)
+
+ for key in ("uri", "origin", "text"):
+ value = getattr(subject_template, key)
+ if value:
+ value, negation, wildcard = parse_operators(Subject, getattr(Subject, key.title()), value)
+ subwhere.add_text_condition("subj_%s" % key, value, wildcard, negation)
+
+ if subject_template.current_uri:
+ value, negation, wildcard = parse_operators(Subject,
+ Subject.CurrentUri, subject_template.current_uri)
+ subwhere.add_text_condition("subj_current_uri", value, wildcard, negation)
+
+ if subject_template.storage:
+ subwhere.add_text_condition("subj_storage", subject_template.storage)
+
+ except KeyError, e:
+ # Value not in DB
+ log.debug("Unknown entity in query: %s" % e)
+ where_or.register_no_result()
+ continue
+ where_or.extend(subwhere)
+ return where_or
+
+ def _build_sql_event_filter(self, time_range, templates, storage_state):
+
+ where = WhereClause(WhereClause.AND)
+
+ # thekorn: we are using the unary operator here to tell sqlite not to use
+ # the index on the timestamp column in the first place. This `fix` for
+ # (LP: #672965) is based on some benchmarks, which suggest a performance
+ # win, but we might not be aware of all the implications.
+ # (see http://www.sqlite.org/optoverview.html section 6.0)
+ min_time, max_time = time_range
+ if min_time != 0:
+ where.add("+timestamp >= ?", min_time)
+ if max_time != sys.maxint:
+ where.add("+timestamp <= ?", max_time)
+
+ if storage_state in (StorageState.Available, StorageState.NotAvailable):
+ where.add("(subj_storage_state = ? OR subj_storage_state IS NULL)",
+ storage_state)
+ elif storage_state != StorageState.Any:
+ raise ValueError, "Unknown storage state '%d'" % storage_state
+
+ where.extend(self._build_sql_from_event_templates(templates))
+
+ return where
+
+if __name__ == "__main__":
+ mainloop = gobject.MainLoop(is_running=True)
+ search_engine = SearchEngineExtension()
+ ZG_CLIENT._iface.connect_exit(lambda: mainloop.quit ())
+ mainloop.run()
+
=== added file 'extensions/fts-python/lrucache.py'
--- extensions/fts-python/lrucache.py 1970-01-01 00:00:00 +0000
+++ extensions/fts-python/lrucache.py 2011-10-19 08:09:50 +0000
@@ -0,0 +1,125 @@
+# -.- coding: utf-8 -.-
+
+# lrucache.py
+#
+# Copyright © 2009 Mikkel Kamstrup Erlandsen <mikkel.kamstrup@xxxxxxxxx>
+# Copyright © 2009 Markus Korn <thekorn@xxxxxx>
+# Copyright © 2011 Seif Lotfy <seif@xxxxxxxxx>
+#
+# This program is free software: you can redistribute it and/or modify
+# it under the terms of the GNU Lesser General Public License as published by
+# the Free Software Foundation, either version 2.1 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU Lesser General Public License for more details.
+#
+# You should have received a copy of the GNU Lesser General Public License
+# along with this program. If not, see <http://www.gnu.org/licenses/>.
+
+class LRUCache:
+ """
+ A simple LRUCache implementation backed by a linked list and a dict.
+ It can be accessed and updated just like a dict. To check if an element
+ exists in the cache the following type of statements can be used:
+ if "foo" in cache
+ """
+
+ class _Item:
+ """
+ A container for each item in LRUCache which knows about the
+ item's position and relations
+ """
+ def __init__(self, item_key, item_value):
+ self.value = item_value
+ self.key = item_key
+ self.next = None
+ self.prev = None
+
+ def __init__(self, max_size):
+ """
+ The size of the cache (in number of cached items) is guaranteed to
+ never exceed 'max_size'.
+ """
+ self._max_size = max_size
+ self.clear()
+
+
+ def clear(self):
+ self._list_end = None # The newest item
+ self._list_start = None # Oldest item
+ self._map = {}
+
+ def __len__(self):
+ return len(self._map)
+
+ def __contains__(self, key):
+ return key in self._map
+
+ def __delitem__(self, key):
+ item = self._map[key]
+ if item.prev:
+ item.prev.next = item.next
+ else:
+ # we are deleting the first item, so we need a new first one
+ self._list_start = item.next
+ if item.next:
+ item.next.prev = item.prev
+ else:
+ # we are deleting the last item, get a new last one
+ self._list_end = item.prev
+ del self._map[key], item
+
+ def __setitem__(self, key, value):
+ if key in self._map:
+ item = self._map[key]
+ item.value = value
+ self._move_item_to_end(item)
+ else:
+ new = LRUCache._Item(key, value)
+ self._append_to_list(new)
+
+ if len(self._map) > self._max_size :
+ # Remove eldest entry from list
+ self.remove_eldest_item()
+
+ def __getitem__(self, key):
+ item = self._map[key]
+ self._move_item_to_end(item)
+ return item.value
+
+ def __iter__(self):
+ """
+ Iteration is in order from eldest to newest,
+ and returns (key,value) tuples
+ """
+ iter = self._list_start
+ while iter != None:
+ yield (iter.key, iter.value)
+ iter = iter.next
+
+ def _move_item_to_end(self, item):
+ del self[item.key]
+ self._append_to_list(item)
+
+ def _append_to_list(self, item):
+ self._map[item.key] = item
+ if not self._list_start:
+ self._list_start = item
+ if self._list_end:
+ self._list_end.next = item
+ item.prev = self._list_end
+ item.next = None
+ self._list_end = item
+
+ def remove_eldest_item(self):
+ if self._list_start == self._list_end:
+ # Only one item (or none) left; reset the map as well so the
+ # cache and the linked list stay consistent
+ self.clear()
+ return
+ old = self._list_start
+ old.next.prev = None
+ self._list_start = old.next
+ del self[old.key], old
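+
+# Usage sketch (illustrative keys and values):
+#
+#   cache = LRUCache(2)
+#   cache["a"] = 1
+#   cache["b"] = 2
+#   cache["a"]        # touching "a" makes "b" the eldest entry
+#   cache["c"] = 3    # exceeds max_size, so "b" gets evicted
+#   assert "b" not in cache and "a" in cache and "c" in cache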
=== added file 'extensions/fts-python/org.gnome.zeitgeist.fts.service.in'
--- extensions/fts-python/org.gnome.zeitgeist.fts.service.in 1970-01-01 00:00:00 +0000
+++ extensions/fts-python/org.gnome.zeitgeist.fts.service.in 2011-10-19 08:09:50 +0000
@@ -0,0 +1,3 @@
+[D-BUS Service]
+Name=org.gnome.zeitgeist.SimpleIndexer
+Exec=@pkgdatadir@/fts-python/fts.py
=== added file 'extensions/fts-python/sql.py'
--- extensions/fts-python/sql.py 1970-01-01 00:00:00 +0000
+++ extensions/fts-python/sql.py 2011-10-19 08:09:50 +0000
@@ -0,0 +1,686 @@
+# -.- coding: utf-8 -.-
+
+# Zeitgeist
+#
+# Copyright © 2009-2010 Siegfried-Angel Gevatter Pujals <rainct@xxxxxxxxxx>
+# Copyright © 2009 Mikkel Kamstrup Erlandsen <mikkel.kamstrup@xxxxxxxxx>
+# Copyright © 2009-2011 Markus Korn <thekorn@xxxxxxx>
+# Copyright © 2009 Seif Lotfy <seif@xxxxxxxxx>
+# Copyright © 2011 J.P. Lacerda <jpaflacerda@xxxxxxxxx>
+# Copyright © 2011 Collabora Ltd.
+# By Siegfried-Angel Gevatter Pujals <rainct@xxxxxxxxxx>
+#
+# This program is free software: you can redistribute it and/or modify
+# it under the terms of the GNU Lesser General Public License as published by
+# the Free Software Foundation, either version 2.1 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU Lesser General Public License for more details.
+#
+# You should have received a copy of the GNU Lesser General Public License
+# along with this program. If not, see <http://www.gnu.org/licenses/>.
+
+import sqlite3
+import logging
+import time
+import os
+import shutil
+
+from constants import constants
+
+log = logging.getLogger("siis.zeitgeist.sql")
+
+TABLE_MAP = {
+ "origin": "uri",
+ "subj_mimetype": "mimetype",
+ "subj_origin": "uri",
+ "subj_uri": "uri",
+ "subj_current_uri": "uri",
+}
+
+def explain_query(cursor, statement, arguments=()):
+ plan = ""
+ for r in cursor.execute("EXPLAIN QUERY PLAN "+statement, arguments).fetchall():
+ plan += str(list(r)) + "\n"
+ log.debug("Got query:\nQUERY:\n%s (%s)\nPLAN:\n%s" % (statement, arguments, plan))
+
+class UnicodeCursor(sqlite3.Cursor):
+
+ debug_explain = os.getenv("ZEITGEIST_DEBUG_QUERY_PLANS")
+
+ @staticmethod
+ def fix_unicode(obj):
+ if isinstance(obj, (int, long)):
+ # thekorn: as long as we are using the unary operator for timestamp
+ # related queries we have to make sure that integers are not
+ # converted to strings, same applies for long numbers.
+ return obj
+ if isinstance(obj, str):
+ obj = obj.decode("UTF-8")
+ # seif: Python’s default encoding is ASCII, so whenever a character with
+ # a byte value > 127 is in the input data, you’ll get a UnicodeDecodeError
+ # because that character can’t be handled by the ASCII encoding.
+ try:
+ obj = unicode(obj)
+ except UnicodeDecodeError, ex:
+ pass
+ return obj
+
+ def execute(self, statement, parameters=()):
+ parameters = [self.fix_unicode(p) for p in parameters]
+ if UnicodeCursor.debug_explain:
+ explain_query(super(UnicodeCursor, self), statement, parameters)
+ return super(UnicodeCursor, self).execute(statement, parameters)
+
+ def fetch(self, index=None):
+ if index is not None:
+ for row in self:
+ yield row[index]
+ else:
+ for row in self:
+ yield row
+
+def _get_schema_version (cursor, schema_name):
+ """
+ Returns the schema version for schema_name or returns 0 in case
+ the schema doesn't exist.
+ """
+ try:
+ schema_version_result = cursor.execute("""
+ SELECT version FROM schema_version WHERE schema=?
+ """, (schema_name,))
+ result = schema_version_result.fetchone()
+ return result[0] if result else 0
+ except sqlite3.OperationalError, e:
+ # The schema isn't there...
+ log.debug ("Schema '%s' not found: %s" % (schema_name, e))
+ return 0
+
+def _set_schema_version (cursor, schema_name, version):
+ """
+ Sets the version of `schema_name` to `version`
+ """
+ cursor.execute("""
+ CREATE TABLE IF NOT EXISTS schema_version
+ (schema VARCHAR PRIMARY KEY ON CONFLICT REPLACE, version INT)
+ """)
+
+ # The 'ON CONFLICT REPLACE' on the PK converts INSERT to UPDATE
+ # when appropriate
+ cursor.execute("""
+ INSERT INTO schema_version VALUES (?, ?)
+ """, (schema_name, version))
+ cursor.connection.commit()
+
+def _do_schema_upgrade (cursor, schema_name, old_version, new_version):
+ """
+ Try to upgrade schema `schema_name` from version `old_version` to
+ `new_version`. This is done by executing a series of upgrade modules
+ named '_zeitgeist.engine.upgrades.$schema_name_$(i)_$(i+1)' and calling
+ the run(cursor) method of those modules until new_version is reached
+ """
+ _do_schema_backup()
+ _set_schema_version(cursor, schema_name, -1)
+ for i in xrange(old_version, new_version):
+ # Fire off the right upgrade module
+ log.info("Upgrading database '%s' from version %s to %s. "
+ "This may take a while" % (schema_name, i, i+1))
+ upgrader_name = "%s_%s_%s" % (schema_name, i, i+1)
+ module = __import__ ("_zeitgeist.engine.upgrades.%s" % upgrader_name)
+ eval("module.engine.upgrades.%s.run(cursor)" % upgrader_name)
+
+ # Update the schema version
+ _set_schema_version(cursor, schema_name, new_version)
+
+ log.info("Upgrade succesful")
+
+def _check_core_schema_upgrade (cursor):
+ """
+ Checks whether the schema is good or, if it is outdated, triggers any
+ necessary upgrade scripts. This method will also attempt to restore a
+ database backup in case a previous upgrade was cancelled midway.
+
+ It returns a boolean indicating whether the schema was good and the
+ database cursor (which will have changed if the database was restored).
+ """
+ # See if we have the right schema version, and try an upgrade if needed
+ core_schema_version = _get_schema_version(cursor, constants.CORE_SCHEMA)
+ if core_schema_version >= constants.CORE_SCHEMA_VERSION:
+ return True, cursor
+ else:
+ try:
+ if core_schema_version <= -1:
+ cursor.connection.commit()
+ cursor.connection.close()
+ _do_schema_restore()
+ cursor = _connect_to_db(constants.DATABASE_FILE)
+ core_schema_version = _get_schema_version(cursor,
+ constants.CORE_SCHEMA)
+ log.exception("Database corrupted at upgrade -- "
+ "upgrading from version %s" % core_schema_version)
+
+ _do_schema_upgrade (cursor,
+ constants.CORE_SCHEMA,
+ core_schema_version,
+ constants.CORE_SCHEMA_VERSION)
+
+ # Don't return here. The upgrade process might depend on the
+ # tables, indexes, and views being set up (to avoid code dup)
+ log.info("Running post upgrade setup")
+ return False, cursor
+ except sqlite3.OperationalError:
+ # Something went wrong while applying the upgrade -- this is
+ # probably due to a non existing table (this occurs when
+ # applying core_3_4, for example). We just need to fall through
+ # the rest of create_db to fix this...
+ log.exception("Database corrupted -- proceeding")
+ return False, cursor
+ except Exception, e:
+ log.exception(
+ "Failed to upgrade database '%s' from version %s to %s: %s" % \
+ (constants.CORE_SCHEMA, core_schema_version,
+ constants.CORE_SCHEMA_VERSION, e))
+ raise SystemExit(27)
+
+def _do_schema_backup ():
+ shutil.copyfile(constants.DATABASE_FILE, constants.DATABASE_FILE_BACKUP)
+
+def _do_schema_restore ():
+ shutil.move(constants.DATABASE_FILE_BACKUP, constants.DATABASE_FILE)
+
+def _connect_to_db(file_path):
+ conn = sqlite3.connect(file_path)
+ conn.row_factory = sqlite3.Row
+ cursor = conn.cursor(UnicodeCursor)
+ return cursor
+
+def create_db(file_path):
+ """Create the database and return a default cursor for it"""
+ start = time.time()
+ log.info("Using database: %s" % file_path)
+ new_database = not os.path.exists(file_path)
+ cursor = _connect_to_db(file_path)
+
+ # Seif: as result of the optimization story (LP: #639737) we are setting
+ # journal_mode to WAL if possible, this change is irreversible but
+ # gains us a big speedup, for more information see http://www.sqlite.org/wal.html
+ # FIXME: Set journal_mode to WAL once a team decision has been taken.
+ # cursor.execute("PRAGMA journal_mode = WAL")
+ # cursor.execute("PRAGMA journal_mode = DELETE")
+ # Seif: as another result of the performance tweaks discussed in (LP: #639737)
+ # we decided to set locking_mode to EXCLUSIVE; from now on only one
+ # connection to the database is allowed. To revert this setting, set locking_mode to NORMAL.
+
+ # thekorn: as part of the workaround for (LP: #598666) we need to
+ # create the '_fix_cache' TEMP table on every start,
+ # this table gets purged once the engine gets closed.
+ # When a cached value gets deleted we automatically store the name
+ # of the cache and the value's id to this table. It's then up to
+ # the python code to delete items from the cache based on the content
+ # of this table.
+ cursor.execute("CREATE TEMP TABLE _fix_cache (table_name VARCHAR, id INTEGER)")
+
+ # Always assume that temporary memory backed DBs have good schemas
+ if constants.DATABASE_FILE != ":memory:" and not new_database:
+ do_upgrade, cursor = _check_core_schema_upgrade(cursor)
+ if do_upgrade:
+ _time = (time.time() - start)*1000
+ log.debug("Core schema is good. DB loaded in %sms" % _time)
+ return cursor
+
+ # the following sql statements are only executed if a new database
+ # is created or an update of the core schema was done
+ log.debug("Updating sql schema")
+ # uri
+ cursor.execute("""
+ CREATE TABLE IF NOT EXISTS uri
+ (id INTEGER PRIMARY KEY, value VARCHAR UNIQUE)
+ """)
+ cursor.execute("""
+ CREATE UNIQUE INDEX IF NOT EXISTS uri_value ON uri(value)
+ """)
+
+ # interpretation
+ cursor.execute("""
+ CREATE TABLE IF NOT EXISTS interpretation
+ (id INTEGER PRIMARY KEY, value VARCHAR UNIQUE)
+ """)
+ cursor.execute("""
+ CREATE UNIQUE INDEX IF NOT EXISTS interpretation_value
+ ON interpretation(value)
+ """)
+
+ # manifestation
+ cursor.execute("""
+ CREATE TABLE IF NOT EXISTS manifestation
+ (id INTEGER PRIMARY KEY, value VARCHAR UNIQUE)
+ """)
+ cursor.execute("""
+ CREATE UNIQUE INDEX IF NOT EXISTS manifestation_value
+ ON manifestation(value)""")
+
+ # mimetype
+ cursor.execute("""
+ CREATE TABLE IF NOT EXISTS mimetype
+ (id INTEGER PRIMARY KEY, value VARCHAR UNIQUE)
+ """)
+ cursor.execute("""
+ CREATE UNIQUE INDEX IF NOT EXISTS mimetype_value
+ ON mimetype(value)""")
+
+ # actor
+ cursor.execute("""
+ CREATE TABLE IF NOT EXISTS actor
+ (id INTEGER PRIMARY KEY, value VARCHAR UNIQUE)
+ """)
+ cursor.execute("""
+ CREATE UNIQUE INDEX IF NOT EXISTS actor_value
+ ON actor(value)""")
+
+ # text
+ cursor.execute("""
+ CREATE TABLE IF NOT EXISTS text
+ (id INTEGER PRIMARY KEY, value VARCHAR UNIQUE)
+ """)
+ cursor.execute("""
+ CREATE UNIQUE INDEX IF NOT EXISTS text_value
+ ON text(value)""")
+
+ # payload, there's no value index for payload,
+ # they can only be fetched by id
+ cursor.execute("""
+ CREATE TABLE IF NOT EXISTS payload
+ (id INTEGER PRIMARY KEY, value BLOB)
+ """)
+
+ # storage, represented by a StatefulEntityTable
+ cursor.execute("""
+ CREATE TABLE IF NOT EXISTS storage
+ (id INTEGER PRIMARY KEY,
+ value VARCHAR UNIQUE,
+ state INTEGER,
+ icon VARCHAR,
+ display_name VARCHAR)
+ """)
+ cursor.execute("""
+ CREATE UNIQUE INDEX IF NOT EXISTS storage_value
+ ON storage(value)""")
+
+ # event - the primary table for log statements
+ # - Note that event.id is NOT unique, we can have multiple subjects per ID
+ # - Timestamps are integers.
+ # - (event-)origin and subj_id_current are added to the end of the table
+ cursor.execute("""
+ CREATE TABLE IF NOT EXISTS event (
+ id INTEGER,
+ timestamp INTEGER,
+ interpretation INTEGER,
+ manifestation INTEGER,
+ actor INTEGER,
+ payload INTEGER,
+ subj_id INTEGER,
+ subj_interpretation INTEGER,
+ subj_manifestation INTEGER,
+ subj_origin INTEGER,
+ subj_mimetype INTEGER,
+ subj_text INTEGER,
+ subj_storage INTEGER,
+ origin INTEGER,
+ subj_id_current INTEGER,
+ CONSTRAINT interpretation_fk FOREIGN KEY(interpretation)
+ REFERENCES interpretation(id) ON DELETE CASCADE,
+ CONSTRAINT manifestation_fk FOREIGN KEY(manifestation)
+ REFERENCES manifestation(id) ON DELETE CASCADE,
+ CONSTRAINT actor_fk FOREIGN KEY(actor)
+ REFERENCES actor(id) ON DELETE CASCADE,
+ CONSTRAINT origin_fk FOREIGN KEY(origin)
+ REFERENCES uri(id) ON DELETE CASCADE,
+ CONSTRAINT payload_fk FOREIGN KEY(payload)
+ REFERENCES payload(id) ON DELETE CASCADE,
+ CONSTRAINT subj_id_fk FOREIGN KEY(subj_id)
+ REFERENCES uri(id) ON DELETE CASCADE,
+ CONSTRAINT subj_id_current_fk FOREIGN KEY(subj_id_current)
+ REFERENCES uri(id) ON DELETE CASCADE,
+ CONSTRAINT subj_interpretation_fk FOREIGN KEY(subj_interpretation)
+ REFERENCES interpretation(id) ON DELETE CASCADE,
+ CONSTRAINT subj_manifestation_fk FOREIGN KEY(subj_manifestation)
+ REFERENCES manifestation(id) ON DELETE CASCADE,
+ CONSTRAINT subj_origin_fk FOREIGN KEY(subj_origin)
+ REFERENCES uri(id) ON DELETE CASCADE,
+ CONSTRAINT subj_mimetype_fk FOREIGN KEY(subj_mimetype)
+ REFERENCES mimetype(id) ON DELETE CASCADE,
+ CONSTRAINT subj_text_fk FOREIGN KEY(subj_text)
+ REFERENCES text(id) ON DELETE CASCADE,
+ CONSTRAINT subj_storage_fk FOREIGN KEY(subj_storage)
+ REFERENCES storage(id) ON DELETE CASCADE,
+ CONSTRAINT unique_event UNIQUE (timestamp, interpretation, manifestation, actor, subj_id)
+ )
+ """)
+ cursor.execute("""
+ CREATE INDEX IF NOT EXISTS event_id
+ ON event(id)""")
+ cursor.execute("""
+ CREATE INDEX IF NOT EXISTS event_timestamp
+ ON event(timestamp)""")
+ cursor.execute("""
+ CREATE INDEX IF NOT EXISTS event_interpretation
+ ON event(interpretation)""")
+ cursor.execute("""
+ CREATE INDEX IF NOT EXISTS event_manifestation
+ ON event(manifestation)""")
+ cursor.execute("""
+ CREATE INDEX IF NOT EXISTS event_actor
+ ON event(actor)""")
+ cursor.execute("""
+ CREATE INDEX IF NOT EXISTS event_origin
+ ON event(origin)""")
+ cursor.execute("""
+ CREATE INDEX IF NOT EXISTS event_subj_id
+ ON event(subj_id)""")
+ cursor.execute("""
+ CREATE INDEX IF NOT EXISTS event_subj_id_current
+ ON event(subj_id_current)""")
+ cursor.execute("""
+ CREATE INDEX IF NOT EXISTS event_subj_interpretation
+ ON event(subj_interpretation)""")
+ cursor.execute("""
+ CREATE INDEX IF NOT EXISTS event_subj_manifestation
+ ON event(subj_manifestation)""")
+ cursor.execute("""
+ CREATE INDEX IF NOT EXISTS event_subj_origin
+ ON event(subj_origin)""")
+ cursor.execute("""
+ CREATE INDEX IF NOT EXISTS event_subj_mimetype
+ ON event(subj_mimetype)""")
+ cursor.execute("""
+ CREATE INDEX IF NOT EXISTS event_subj_text
+ ON event(subj_text)""")
+ cursor.execute("""
+ CREATE INDEX IF NOT EXISTS event_subj_storage
+ ON event(subj_storage)""")
+
+ # Foreign key constraints don't work in SQLite. Yay!
+ for table, columns in (
+ ('interpretation', ('interpretation', 'subj_interpretation')),
+ ('manifestation', ('manifestation', 'subj_manifestation')),
+ ('actor', ('actor',)),
+ ('payload', ('payload',)),
+ ('mimetype', ('subj_mimetype',)),
+ ('text', ('subj_text',)),
+ ('storage', ('subj_storage',)),
+ ):
+ for column in columns:
+ cursor.execute("""
+ CREATE TRIGGER IF NOT EXISTS fkdc_event_%(column)s
+ BEFORE DELETE ON event
+ WHEN ((SELECT COUNT(*) FROM event WHERE %(column)s=OLD.%(column)s) < 2)
+ BEGIN
+ DELETE FROM %(table)s WHERE id=OLD.%(column)s;
+ END;
+ """ % {'column': column, 'table': table})
+
+ # ... special cases
+ for num, column in enumerate(('subj_id', 'subj_origin',
+ 'subj_id_current', 'origin')):
+ cursor.execute("""
+ CREATE TRIGGER IF NOT EXISTS fkdc_event_uri_%(num)d
+ BEFORE DELETE ON event
+ WHEN ((
+ SELECT COUNT(*)
+ FROM event
+ WHERE
+ origin=OLD.%(column)s
+ OR subj_id=OLD.%(column)s
+ OR subj_id_current=OLD.%(column)s
+ OR subj_origin=OLD.%(column)s
+ ) < 2)
+ BEGIN
+ DELETE FROM uri WHERE id=OLD.%(column)s;
+ END;
+ """ % {'num': num+1, 'column': column})
+
+ cursor.execute("DROP VIEW IF EXISTS event_view")
+ cursor.execute("""
+ CREATE VIEW IF NOT EXISTS event_view AS
+ SELECT event.id,
+ event.timestamp,
+ event.interpretation,
+ event.manifestation,
+ event.actor,
+ (SELECT value FROM payload WHERE payload.id=event.payload)
+ AS payload,
+ (SELECT value FROM uri WHERE uri.id=event.subj_id)
+ AS subj_uri,
+ event.subj_id, -- #this directly points to an id in the uri table
+ event.subj_interpretation,
+ event.subj_manifestation,
+ event.subj_origin,
+ (SELECT value FROM uri WHERE uri.id=event.subj_origin)
+ AS subj_origin_uri,
+ event.subj_mimetype,
+ (SELECT value FROM text WHERE text.id = event.subj_text)
+ AS subj_text,
+ (SELECT value FROM storage
+ WHERE storage.id=event.subj_storage) AS subj_storage,
+ (SELECT state FROM storage
+ WHERE storage.id=event.subj_storage) AS subj_storage_state,
+ event.origin,
+ (SELECT value FROM uri WHERE uri.id=event.origin)
+ AS event_origin_uri,
+ (SELECT value FROM uri WHERE uri.id=event.subj_id_current)
+ AS subj_current_uri,
+ event.subj_id_current
+ FROM event
+ """)
+
+ # All good. Set the schema version, so we don't have to do all this
+ # sql the next time around
+ _set_schema_version (cursor, constants.CORE_SCHEMA, constants.CORE_SCHEMA_VERSION)
+ _time = (time.time() - start)*1000
+ log.info("DB set up in %sms" % _time)
+ cursor.connection.commit()
+
+ return cursor
+
+_cursor = None
+def get_default_cursor():
+ global _cursor
+ if not _cursor:
+ dbfile = constants.DATABASE_FILE
+ _cursor = create_db(dbfile)
+ return _cursor
+def unset_cursor():
+ global _cursor
+ _cursor = None
+
+class TableLookup(dict):
+
+    # We are not using an LRUCache as presumably there won't be thousands
+    # of manifestations/interpretations/mimetypes/actors on most
+    # installations, so we can spare ourselves the overhead of tracking
+    # their usage.
+
+ def __init__(self, cursor, table):
+
+ self._cursor = cursor
+ self._table = table
+
+ for row in cursor.execute("SELECT id, value FROM %s" % table):
+ self[row["value"]] = row["id"]
+
+ self._inv_dict = dict((value, key) for key, value in self.iteritems())
+
+ cursor.execute("""
+ CREATE TEMP TRIGGER update_cache_%(table)s
+ BEFORE DELETE ON %(table)s
+ BEGIN
+ INSERT INTO _fix_cache VALUES ("%(table)s", OLD.id);
+ END;
+ """ % {"table": table})
+
+ def __getitem__(self, name):
+ # Use this for inserting new properties into the database
+ if name in self:
+ return super(TableLookup, self).__getitem__(name)
+ try:
+ self._cursor.execute(
+ "INSERT INTO %s (value) VALUES (?)" % self._table, (name,))
+ id = self._cursor.lastrowid
+ except sqlite3.IntegrityError:
+ # This shouldn't happen, but just in case
+ # FIXME: Maybe we should remove it?
+ id = self._cursor.execute("SELECT id FROM %s WHERE value=?"
+ % self._table, (name,)).fetchone()[0]
+ # If we are here it's a newly inserted value, insert it into cache
+ self[name] = id
+ self._inv_dict[id] = name
+ return id
+
+ def value(self, id):
+        # When we fetch an event, it was either already in the database
+        # when Zeitgeist started or it was inserted later (using
+        # Zeitgeist), so we always have the data in memory already.
+ return self._inv_dict[id]
+
+ def id(self, name):
+        # Use this when fetching values which are supposed to be in the
+        # database already. E.g., in find_eventids.
+ return super(TableLookup, self).__getitem__(name)
+
+ def remove_id(self, id):
+ value = self.value(id)
+ del self._inv_dict[id]
+ del self[value]
+
+def get_right_boundary(text):
+    """ Returns the right boundary for a prefix search: the smallest string greater than any string starting with `text` """
+ if not text:
+        # if the search prefix is empty we query for the whole range
+        # of unicode chars
+ return unichr(0x10ffff)
+ if isinstance(text, str):
+ # we need to make sure the text is decoded as 'utf-8' unicode
+ text = unicode(text, "UTF-8")
+ charpoint = ord(text[-1])
+ if charpoint == 0x10ffff:
+ # if the last character is the biggest possible char we need to
+ # look at the second last
+ return get_right_boundary(text[:-1])
+ return text[:-1] + unichr(charpoint+1)
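+# Illustrative behaviour of get_right_boundary (example values only):
+#   get_right_boundary(u"abc") -> u"abd", so every string with the prefix
+#   "abc" falls in the range (value >= "abc" AND value < "abd").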
+
+class WhereClause:
+ """
+    This class provides a convenient representation of a SQL `WHERE' clause,
+ composed of a set of conditions joined together.
+
+ The relation between conditions can be either of type *AND* or *OR*, but
+ not both. To create more complex clauses, use several :class:`WhereClause`
+    instances and join them together using :meth:`extend`.
+
+    Instances of this class can then be used to obtain a piece of SQL code and
+    a list of arguments, for use with the sqlite3 module, by accessing the
+    appropriate properties:
+ >>> where.sql, where.arguments
+ """
+
+ AND = " AND "
+ OR = " OR "
+ NOT = "NOT "
+
+ @staticmethod
+ def optimize_glob(column, table, prefix):
+        """Returns an optimized version of a prefix GLOB query, as described
+        in http://www.sqlite.org/optoverview.html, `4.0 The LIKE optimization`.
+ """
+ if isinstance(prefix, str):
+ # we need to make sure the text is decoded as 'utf-8' unicode
+ prefix = unicode(prefix, "UTF-8")
+ if not prefix:
+ # empty prefix means 'select all', no way to optimize this
+ sql = "SELECT %s FROM %s" %(column, table)
+ return sql, ()
+ elif all([i == unichr(0x10ffff) for i in prefix]):
+ sql = "SELECT %s FROM %s WHERE value >= ?" %(column, table)
+ return sql, (prefix,)
+ else:
+ sql = "SELECT %s FROM %s WHERE (value >= ? AND value < ?)" %(column, table)
+ return sql, (prefix, get_right_boundary(prefix))
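+        # Illustrative output (example values): optimize_glob("id", "mimetype",
+        # "application/") yields
+        #   ("SELECT id FROM mimetype WHERE (value >= ? AND value < ?)",
+        #    ("application/", "application0"))
+        # i.e. a half-open range matching exactly the values with that prefix.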
+
+ def __init__(self, relation, negation=False):
+ self._conditions = []
+ self.arguments = []
+ self._relation = relation
+ self._no_result_member = False
+ self._negation = negation
+
+ def __len__(self):
+ return len(self._conditions)
+
+ def add(self, condition, arguments=None):
+ if not condition:
+ return
+ self._conditions.append(condition)
+ if arguments is not None:
+ if not hasattr(arguments, "__iter__"):
+ self.arguments.append(arguments)
+ else:
+ self.arguments.extend(arguments)
+
+ def add_text_condition(self, column, value, like=False, negation=False, cache=None):
+ if like:
+ assert column in ("origin", "subj_uri", "subj_current_uri",
+ "subj_origin", "actor", "subj_mimetype"), \
+ "prefix search on the %r column is not supported by zeitgeist" % column
+ if column == "subj_uri":
+ # subj_id directly points to the id of an uri entry
+ view_column = "subj_id"
+ elif column == "subj_current_uri":
+ view_column = "subj_id_current"
+ else:
+ view_column = column
+ optimized_glob, value = self.optimize_glob("id", TABLE_MAP.get(column, column), value)
+ sql = "%s %sIN (%s)" %(view_column, self.NOT if negation else "", optimized_glob)
+ if negation:
+ sql += " OR %s IS NULL" % view_column
+ else:
+ if column == "origin":
+ column ="event_origin_uri"
+ elif column == "subj_origin":
+ column = "subj_origin_uri"
+ sql = "%s %s= ?" %(column, "!" if negation else "")
+ if cache is not None:
+ value = cache[value]
+ self.add(sql, value)
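+        # Illustrative expansion (assuming TABLE_MAP maps "subj_uri" to the
+        # "uri" table): add_text_condition("subj_uri", "file:///home/",
+        # like=True) adds the condition
+        #   subj_id IN (SELECT id FROM uri WHERE (value >= ? AND value < ?))
+        # with the arguments ("file:///home/", "file:///home0").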
+
+ def extend(self, where):
+ self.add(where.sql, where.arguments)
+ if not where.may_have_results():
+ if self._relation == self.AND:
+ self.clear()
+ self.register_no_result()
+
+ @property
+ def sql(self):
+ if self: # Do not return "()" if there are no conditions
+ negation = self.NOT if self._negation else ""
+ return "%s(%s)" %(negation, self._relation.join(self._conditions))
+
+ def register_no_result(self):
+ self._no_result_member = True
+
+ def may_have_results(self):
+ """
+ Return False if we know from our cached data that the query
+ will give no results.
+ """
+ return len(self._conditions) > 0 or not self._no_result_member
+
+ def clear(self):
+ """
+ Reset this WhereClause to the state of a newly created one.
+ """
+ self._conditions = []
+ self.arguments = []
+ self._no_result_member = False
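+
+# Illustrative usage of WhereClause (names are from this module, the values
+# are made up):
+#   clause = WhereClause(WhereClause.AND)
+#   clause.add("timestamp >= ?", 1000)
+#   clause.add("interpretation = ?", 5)
+#   clause.sql        # -> "(timestamp >= ? AND interpretation = ?)"
+#   clause.arguments  # -> [1000, 5]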
=== added file 'extensions/fts.vala'
--- extensions/fts.vala 1970-01-01 00:00:00 +0000
+++ extensions/fts.vala 2011-10-19 08:09:50 +0000
@@ -0,0 +1,154 @@
+/* fts.vala
+ *
+ * Copyright © 2011 Seif Lotfy <seif@xxxxxxxxx>
+ * Copyright © 2011 Canonical Ltd.
+ * By Michal Hruby <michal.hruby@xxxxxxxxxxxxx>
+ *
+ * This program is free software: you can redistribute it and/or modify
+ * it under the terms of the GNU Lesser General Public License as published by
+ * the Free Software Foundation, either version 2.1 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ *
+ */
+
+namespace Zeitgeist
+{
+ [DBus (name = "org.gnome.zeitgeist.Index")]
+ public interface RemoteSearchEngine: Object
+ {
+ [DBus (signature = "a(asaasay)u")]
+ public abstract async Variant search (
+ string query_string,
+ [DBus (signature = "(xx)")] Variant time_range,
+ [DBus (signature = "a(asaasay)")] Variant filter_templates,
+ uint offset, uint count, uint result_type,
+ [DBus (signature = "a(asaasay)")] out Variant events) throws Error;
+ }
+
+ /* Because of a Vala bug we have to define the proxy interface outside of
+ * [ModuleInit] source */
+ /*
+ [DBus (name = "org.gnome.zeitgeist.SimpleIndexer")]
+ public interface RemoteSimpleIndexer : Object
+ {
+ [DBus (signature = "a(asaasay)u")]
+ public abstract async Variant search (
+ string query_string,
+ [DBus (signature = "(xx)")] Variant time_range,
+ [DBus (signature = "a(asaasay)")] Variant filter_templates,
+ uint offset, uint count, uint result_type) throws Error;
+ }
+ */
+
+ class SearchEngine: Extension, RemoteSearchEngine
+ {
+
+ private RemoteSimpleIndexer siin;
+ private uint registration_id;
+
+ SearchEngine ()
+ {
+ Object ();
+ }
+
+ construct
+ {
+ try
+ {
+ var connection = Bus.get_sync (BusType.SESSION, null);
+ registration_id = connection.register_object<RemoteSearchEngine> (
+ "/org/gnome/zeitgeist/index/activity", this);
+
+ // FIXME: shouldn't we delay this to next idle callback?
+ // Get SimpleIndexer
+ Bus.watch_name_on_connection (connection,
+ "org.gnome.zeitgeist.SimpleIndexer",
+ BusNameWatcherFlags.AUTO_START,
+ (conn) =>
+ {
+ if (siin != null) return;
+ conn.get_proxy.begin<RemoteSimpleIndexer> (
+ "org.gnome.zeitgeist.SimpleIndexer",
+ "/org/gnome/zeitgeist/index/activity",
+ 0, null, this.proxy_acquired);
+ },
+ () => {});
+ }
+ catch (Error err)
+ {
+ warning ("%s", err.message);
+ }
+ }
+
+ private void proxy_acquired (Object? obj, AsyncResult res)
+ {
+ var conn = obj as DBusConnection;
+ try
+ {
+ siin = conn.get_proxy.end<RemoteSimpleIndexer> (res);
+ }
+ catch (IOError err)
+ {
+ warning ("%s", err.message);
+ }
+ }
+
+ /* This whole method is one huge workaround for an issue with Vala
+ * enclosing all out/return parameters in a TUPLE variant */
+ public async Variant search (string query_string, Variant time_range,
+ Variant filter_templates, uint offset, uint count, uint result_type,
+ out Variant events) throws Error
+ {
+ debug ("Performing search for %s", query_string);
+ if (siin == null || !(siin is DBusProxy))
+ {
+ // FIXME: queue until we have the proxy
+ throw new EngineError.DATABASE_ERROR (
+ "Not connected to SimpleIndexer");
+ }
+ var timer = new Timer ();
+ DBusProxy proxy = (DBusProxy) siin;
+ var b = new VariantBuilder (new VariantType ("(s(xx)a(asaasay)uuu)"));
+ b.add ("s", query_string);
+ b.add_value (time_range);
+ b.add_value (filter_templates);
+ b.add ("u", offset);
+ b.add ("u", count);
+ b.add ("u", result_type);
+ var result = yield proxy.call ("Search", b.end (), 0, -1, null);
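+            // The reply is a tuple: child 0 holds the events (a(asaasay)),
+            // which we hand back via the out parameter; child 1 holds the
+            // remaining return value, which we return below.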
+ events = result.get_child_value (0);
+ /* FIXME: this somehow doesn't work :(
+ * but it's fixable in a similar way as this method's signature
+ * is done */
+ /*
+ var result = yield siin.search (query_string, time_range,
+ filter_templates, offset, count, result_type);
+ */
+ debug ("Got %u results from indexer (in %f seconds)",
+ (uint) events.n_children (), timer.elapsed ());
+ return result.get_child_value (1);
+ }
+
+ }
+
+ [ModuleInit]
+#if BUILTIN_EXTENSIONS
+ public static Type fts_init (TypeModule module)
+ {
+#else
+ public static Type extension_register (TypeModule module)
+ {
+#endif
+ return typeof (SearchEngine);
+ }
+}
+
+// vim:expandtab:ts=4:sw=4
=== added file 'extensions/histogram.vala'
--- extensions/histogram.vala 1970-01-01 00:00:00 +0000
+++ extensions/histogram.vala 2011-10-19 08:09:50 +0000
@@ -0,0 +1,124 @@
+/* histogram.vala
+ *
+ * Copyright © 2011 Michal Hruby <michal.mhr@xxxxxxxxx>
+ *
+ * Based upon a Python implementation (2010-2011) by:
+ * Siegfried-Angel Gevatter Pujals <siegfried@xxxxxxxxxxxx>
+ *
+ * This program is free software: you can redistribute it and/or modify
+ * it under the terms of the GNU Lesser General Public License as published by
+ * the Free Software Foundation, either version 2.1 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ *
+ */
+
+namespace Zeitgeist
+{
+ [DBus (name = "org.gnome.zeitgeist.Histogram")]
+ public interface RemoteHistogram: Object
+ {
+ [DBus (signature = "a(xu)")]
+ public abstract Variant get_histogram_data () throws Error;
+ }
+
+ class Histogram: Extension, RemoteHistogram
+ {
+
+ private uint registration_id = 0;
+
+ construct
+ {
+ // This will be called after bus is acquired, so it shouldn't block
+ try
+ {
+ var connection = Bus.get_sync (BusType.SESSION, null);
+ registration_id = connection.register_object<RemoteHistogram> (
+ "/org/gnome/zeitgeist/journal/activity", this);
+ }
+ catch (Error err)
+ {
+ warning ("%s", err.message);
+ }
+ }
+
+ public override void unload ()
+ {
+ try
+ {
+ var connection = Bus.get_sync (BusType.SESSION, null);
+ if (registration_id != 0)
+ {
+ connection.unregister_object (registration_id);
+ registration_id = 0;
+ }
+ }
+ catch (Error err)
+ {
+ warning ("%s", err.message);
+ }
+
+ debug ("%s, this.ref_count = %u", Log.METHOD, this.ref_count);
+ }
+
+ public Variant get_histogram_data () throws Error
+ {
+ var builder = new VariantBuilder (new VariantType ("a(xu)"));
+
+ string sql = """
+ SELECT strftime('%s', datetime(timestamp/1000, 'unixepoch'),
+ 'start of day') AS daystamp,
+ COUNT(*)
+ FROM event
+ GROUP BY daystamp
+ ORDER BY daystamp DESC
+ """;
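+            // timestamp is stored in milliseconds, so timestamp/1000 converts
+            // it to Unix seconds; 'start of day' truncates to midnight (UTC)
+            // and '%s' renders that as a Unix timestamp again. Each result
+            // row is therefore (start-of-day timestamp, events on that day),
+            // e.g. (1318982400, 42) -- illustrative values.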
+
+ Sqlite.Statement stmt;
+ var database = engine.database;
+ unowned Sqlite.Database db = database.database;
+
+ int rc = db.prepare_v2 (sql, -1, out stmt);
+ database.assert_query_success (rc, "SQL error");
+
+ while ((rc = stmt.step ()) == Sqlite.ROW)
+ {
+ int64 t = stmt.column_int64 (0);
+ uint32 count = stmt.column_int (1);
+
+ builder.add ("(xu)", t, count);
+ }
+
+ if (rc != Sqlite.DONE)
+ {
+ string error_message = "Error in get_histogram_data: " +
+ "%d, %s".printf (rc, db.errmsg ());
+ warning ("%s", error_message);
+ throw new EngineError.DATABASE_ERROR (error_message);
+ }
+
+ return builder.end ();
+ }
+
+ }
+
+ [ModuleInit]
+#if BUILTIN_EXTENSIONS
+ public static Type histogram_init (TypeModule module)
+ {
+#else
+ public static Type extension_register (TypeModule module)
+ {
+#endif
+ return typeof (Histogram);
+ }
+}
+
+// vim:expandtab:ts=4:sw=4
=== added file 'extensions/storage-monitor.vala'
--- extensions/storage-monitor.vala 1970-01-01 00:00:00 +0000
+++ extensions/storage-monitor.vala 2011-10-19 08:09:50 +0000
@@ -0,0 +1,344 @@
+/* storage-monitor.vala
+ *
+ * Copyright © 2011 Collabora Ltd.
+ * By Siegfried-Angel Gevatter Pujals <siegfried@xxxxxxxxxxxx>
+ *
+ * Based upon a Python implementation:
+ * Copyright © 2009 Mikkel Kamstrup Erlandsen <mikkel.kamstrup@xxxxxxxxx>
+ * Copyright © 2011 Canonical Ltd.
+ *
+ * This program is free software: you can redistribute it and/or modify
+ * it under the terms of the GNU Lesser General Public License as published by
+ * the Free Software Foundation, either version 2.1 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ *
+ */
+
+using Zeitgeist;
+
+namespace Zeitgeist
+{
+ [DBus (name = "org.gnome.zeitgeist.StorageMonitor")]
+ public interface RemoteStorageMonitor: Object
+ {
+ [DBus (signature = "a(sa{sv})")]
+ public abstract Variant get_storages () throws Error;
+
+ public signal void storage_available (string storage_id,
+ [DBus (signature = "a{sv}")] Variant storage_description);
+ public signal void storage_unavailable (string storage_id);
+ }
+
+ namespace StorageMedia
+ {
+ private Variant to_variant (string medium_name, bool available,
+ string icon, string display_name)
+ {
+ var vb = new VariantBuilder (new VariantType ("(sa{sv})"));
+
+ vb.add ("s", medium_name);
+ vb.open (new VariantType ("a{sv}"));
+ {
+ vb.open (new VariantType ("{sv}"));
+ vb.add ("s", "available");
+ vb.add ("v", new Variant ("b", available));
+ vb.close ();
+ vb.open (new VariantType ("{sv}"));
+ vb.add ("s", "icon");
+ vb.add ("v", new Variant ("s", icon));
+ vb.close ();
+ vb.open (new VariantType ("{sv}"));
+ vb.add ("s", "display-name");
+ vb.add ("v", new Variant ("s", display_name));
+ vb.close ();
+ }
+ vb.close ();
+
+ return vb.end ();
+ }
+ }
+
+ /*
+ * The Storage Monitor monitors the availability of network interfaces
+ * and storage devices (USB drives, data/audio/video CD/DVDs, etc) and
+ * updates the Zeitgeist database with this information so clients can
+ * efficiently query based on the storage identifier and availability
+ * of the storage medium the event subjects reside on.
+ *
+     * Subjects can have the following types of storage identifiers:
+ * - for local resources, the fixed identifier `local`;
+ * - for network URIs, the fixed identifier `net`;
+ * - for resources on storage devices, the UUID of the partition
+ * they reside in;
+ * - otherwise, the fixed identifier `unknown`.
+ *
+     * Subjects with storage `local` or `unknown` are always considered
+ * available; for network resources, the monitor will use either ConnMan
+ * or NetworkManager (whichever is available).
+ *
+ * For subjects being inserted without a storage id set, this extension
+ * will attempt to determine it and update the subject on the fly.
+ */
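+    /*
+     * Illustrative example (the values below are made up): a mounted USB
+     * stick could show up in get_storages () as the tuple
+     *   ("1234-ABCD", {"available": true, "icon": "drive-removable-media",
+     *                  "display-name": "My USB Stick"})
+     * where "1234-ABCD" is the partition UUID used as the storage identifier.
+     */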
+ class StorageMonitor: Extension, RemoteStorageMonitor
+ {
+ private Zeitgeist.SQLite.ZeitgeistDatabase database;
+ private unowned Sqlite.Database db;
+ private uint registration_id;
+
+ private Sqlite.Statement get_storages_stmt;
+ private Sqlite.Statement store_storage_medium_stmt;
+ private Sqlite.Statement insert_unavailable_medium_stmt;
+ private Sqlite.Statement update_medium_state_stmt;
+
+ StorageMonitor ()
+ {
+ Object ();
+ }
+
+ construct
+ {
+ try
+ {
+ prepare_queries ();
+ }
+ catch (EngineError e)
+ {
+ warning ("Storage Monitor couldn't communicate with DB - bye");
+ return;
+ }
+
+ // This will be called after bus is acquired, so it shouldn't block
+ try
+ {
+ var connection = Bus.get_sync (BusType.SESSION, null);
+ registration_id = connection.register_object<RemoteStorageMonitor> (
+ "/org/gnome/zeitgeist/storagemonitor", this);
+ }
+ catch (Error err)
+ {
+ warning ("%s", err.message);
+ }
+
+ VolumeMonitor monitor = VolumeMonitor.get ();
+ monitor.volume_added.connect (on_volume_added);
+ monitor.volume_removed.connect (on_volume_removed);
+ foreach (Volume volume in monitor.get_volumes ())
+ {
+ add_storage_medium (get_volume_id (volume),
+ volume.get_icon ().to_string (), volume.get_name ());
+ }
+
+ // FIXME: ConnMan / NetworkManager D-Bus stuff...
+ }
+
+ public override void unload ()
+ {
+ // FIXME: move all this D-Bus stuff to some shared
+ // {request,release}_iface functions
+ try
+ {
+ var connection = Bus.get_sync (BusType.SESSION, null);
+ if (registration_id != 0)
+ {
+ connection.unregister_object (registration_id);
+ registration_id = 0;
+ }
+ }
+ catch (Error err)
+ {
+ warning ("%s", err.message);
+ }
+
+ debug ("%s, this.ref_count = %u", Log.METHOD, this.ref_count);
+ }
+
+ private void prepare_queries () throws EngineError
+ {
+ database = engine.database;
+ db = database.database;
+
+ int rc;
+ string sql;
+
+ // Prepare query to retrieve all storage medium information
+ sql = """
+ SELECT value, state, icon, display_name
+ FROM storage
+ """;
+ rc = db.prepare_v2 (sql, -1, out get_storages_stmt);
+ database.assert_query_success (rc, "Storage retrieval query error");
+
+ sql = """
+ INSERT OR REPLACE INTO storage (
+ value, state, icon, display_name
+ ) VALUES (
+ ?, ?, ?, ?
+ )""";
+ rc = db.prepare_v2 (sql, -1, out store_storage_medium_stmt);
+ database.assert_query_success (rc, "Storage insertion query error");
+
+ sql = """
+ INSERT INTO storage (
+ state, value
+ ) VALUES (
+ ?, ?
+ )""";
+ rc = db.prepare_v2 (sql, -1, out insert_unavailable_medium_stmt);
+ database.assert_query_success (rc,
+ "insert_unavailable_medium_stmt error");
+
+ sql = """
+ UPDATE storage
+ SET state=?
+ WHERE value=?
+ """;
+ rc = db.prepare_v2 (sql, -1, out update_medium_state_stmt);
+ database.assert_query_success (rc,
+ "update_medium_state_stmt error");
+ }
+
+ public override void pre_insert_events (GenericArray<Event?> events,
+ BusName? sender)
+ {
+ for (int i = 0; i < events.length; ++i)
+ {
+ for (int j = 0; j < events[i].subjects.length; ++j)
+ {
+ Subject subject = events[i].subjects[j];
+ if (subject.storage == "")
+ subject.storage = find_storage_for_uri (subject.uri);
+ }
+ }
+ }
+
+ /*
+ * Find the name of the storage medium the given URI resides on.
+ */
+ private string find_storage_for_uri (string uri)
+ {
+ // FIXME
+ return "unknown";
+ }
+
+ private void on_volume_added (VolumeMonitor monitor, Volume volume)
+ {
+ debug ("volume added");
+ Icon icon = volume.get_icon ();
+ string icon_name = "";
+ // FIXME: why volume.get_icon ().to_string () above but not here?
+ if (icon is ThemedIcon && ((ThemedIcon) icon).get_names ().length > 0)
+ icon_name = ((ThemedIcon) icon).get_names ()[0];
+ add_storage_medium (get_volume_id (volume), icon_name,
+ volume.get_name ());
+ }
+
+ private void on_volume_removed (VolumeMonitor monitor, Volume volume)
+ {
+ debug ("Volume removed");
+ remove_storage_medium (get_volume_id (volume));
+ }
+
+ /*
+ * Return a string identifier for a GIO Volume. This id is constructed
+         * as a `best effort`, since we cannot always uniquely identify
+         * volumes; audio and data CDs in particular are problematic.
+ */
+ private string get_volume_id (Volume volume)
+ {
+ string volume_id;
+
+ volume_id = volume.get_uuid ();
+ if (volume_id != null)
+ return volume_id;
+
+ volume_id = volume.get_identifier ("uuid");
+ if (volume_id != null)
+ return volume_id;
+
+ volume_id = volume.get_identifier ("label");
+ if (volume_id != null)
+ return volume_id;
+
+ return "unknown";
+ }
+
+ public void add_storage_medium (string medium_name, string icon,
+ string display_name)
+ {
+ debug ("VOLUME ADDED: %s".printf(medium_name));
+ store_storage_medium_stmt.reset ();
+ store_storage_medium_stmt.bind_text (1, medium_name);
+ store_storage_medium_stmt.bind_int (2, 1);
+ store_storage_medium_stmt.bind_text (3, icon);
+ store_storage_medium_stmt.bind_text (4, display_name);
+
+ int rc = store_storage_medium_stmt.step ();
+ database.assert_query_success (rc, "add_storage_medium",
+ Sqlite.DONE);
+
+ storage_available (medium_name, StorageMedia.to_variant (
+ medium_name, true, icon, display_name));
+ }
+
+ public void remove_storage_medium (string medium_name)
+ {
+ debug ("VOLUME REMOVED: %s".printf(medium_name));
+ insert_unavailable_medium_stmt.reset ();
+ insert_unavailable_medium_stmt.bind_int (1, 0);
+ insert_unavailable_medium_stmt.bind_text (2, medium_name);
+ if (insert_unavailable_medium_stmt.step () != Sqlite.DONE)
+ {
+ update_medium_state_stmt.reset ();
+ update_medium_state_stmt.bind_int (1, 0);
+ update_medium_state_stmt.bind_text (2, medium_name);
+ int rc = update_medium_state_stmt.step ();
+ database.assert_query_success (rc, "remove_storage_medium",
+ Sqlite.DONE);
+ }
+ storage_unavailable (medium_name);
+ }
+
+ public Variant get_storages () throws EngineError
+ {
+ var vb = new VariantBuilder (new VariantType ("a(sa{sv})"));
+
+ int rc;
+ get_storages_stmt.reset ();
+ while ((rc = get_storages_stmt.step ()) == Sqlite.ROW)
+ {
+ // name, available?, icon, display name
+ Variant medium = StorageMedia.to_variant (
+ get_storages_stmt.column_text (0),
+ get_storages_stmt.column_int (1) == 1,
+ get_storages_stmt.column_text (2) ?? "",
+ get_storages_stmt.column_text (3) ?? "");
+ vb.add_value (medium);
+ }
+ database.assert_query_success (rc, "get_storages", Sqlite.DONE);
+
+ return vb.end ();
+ }
+
+ }
+
+ [ModuleInit]
+#if BUILTIN_EXTENSIONS
+ public static Type storage_monitor_init (TypeModule module)
+ {
+#else
+ public static Type extension_register (TypeModule module)
+ {
+#endif
+ return typeof (StorageMonitor);
+ }
+}
+
+// vim:expandtab:ts=4:sw=4
=== added directory 'extra'
=== renamed directory 'extra' => 'extra.moved'
=== added file 'extra/Makefile.am'
--- extra/Makefile.am 1970-01-01 00:00:00 +0000
+++ extra/Makefile.am 2011-10-19 08:09:50 +0000
@@ -0,0 +1,23 @@
+SUBDIRS = ontology
+
+servicedir = $(DBUS_SERVICES_DIR)
+service_DATA = org.gnome.zeitgeist.service
+
+org.gnome.zeitgeist.service: org.gnome.zeitgeist.service.in
+ $(AM_V_GEN)sed -e s!\@prefix\@!$(prefix)! < $< > $@
+org.gnome.zeitgeist.service: Makefile
+
+CLEANFILES = \
+ org.gnome.zeitgeist.service \
+ PythonSerializer.pyc \
+ $(NULL)
+EXTRA_DIST = \
+ org.gnome.zeitgeist.service.in \
+ zeitgeist-daemon.bash_completion \
+ ontology2code \
+ $(NULL)
+
+all-local: org.gnome.zeitgeist.service
+
+clean:
+ rm -rf *.pyc *.~[0-9]~
=== added directory 'extra/ontology'
=== added file 'extra/ontology/Makefile.am'
--- extra/ontology/Makefile.am 1970-01-01 00:00:00 +0000
+++ extra/ontology/Makefile.am 2011-10-19 08:09:50 +0000
@@ -0,0 +1,25 @@
+ontology_trig_DATA = \
+ zg.trig \
+ nie.trig \
+ nco.trig \
+ nfo.trig \
+ ncal.trig \
+ nao.trig \
+ nmo.trig \
+ nmm.trig
+
+ontology_py_DATA = \
+ zeitgeist.py
+
+ontology_trigdir = $(datadir)/zeitgeist/ontology
+ontology_pydir = $(datadir)/zeitgeist/ontology
+
+zeitgeist.py: $(ontology_trig_DATA)
+ @echo -e "#\n# Auto-generated from all .trig files ($^). Do not edit.\n#" > $@
+ $(AM_V_GEN)$(top_srcdir)/extra/ontology2code --dump-python >> $@
+
+CLEANFILES = \
+ $(ontology_py_DATA)
+
+EXTRA_DIST = \
+ $(ontology_trig_DATA)
=== added file 'extra/ontology/nao.trig'
--- extra/ontology/nao.trig 1970-01-01 00:00:00 +0000
+++ extra/ontology/nao.trig 2011-10-19 08:09:50 +0000
@@ -0,0 +1,358 @@
+#
+# Copyright (c) 2007 NEPOMUK Consortium
+# All rights reserved, licensed under either CC-BY or BSD.
+#
+# You are free:
+# * to Share - to copy, distribute and transmit the work
+# * to Remix - to adapt the work
+# Under the following conditions:
+# * Attribution - You must attribute the work in the manner specified by the author
+# or licensor (but not in any way that suggests that they endorse you or your use
+# of the work).
+#
+# Redistribution and use in source and binary forms, with or without modification,
+# are permitted provided that the following conditions are met:
+# * Redistributions of source code must retain the above copyright notice, this
+# list of conditions and the following disclaimer.
+# * Redistributions in binary form must reproduce the above copyright notice, this
+# list of conditions and the following disclaimer in the documentation and/or
+# other materials provided with the distribution.
+# * Neither the names of the authors nor the names of contributors may
+# be used to endorse or promote products derived from this ontology without
+# specific prior written permission.
+#
+# THIS ONTOLOGY IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
+# IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
+# OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
+# IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
+# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
+# NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
+# THIS ONTOLOGY, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+#
+
+@prefix dc: <http://purl.org/dc/elements/1.1/> .
+@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
+@prefix nao: <http://www.semanticdesktop.org/ontologies/2007/08/15/nao#> .
+@prefix nrl: <http://www.semanticdesktop.org/ontologies/2007/08/15/nrl#> .
+@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
+@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
+@prefix dcterms: <http://purl.org/dc/terms/> .
+
+<http://www.semanticdesktop.org/ontologies/2007/08/15/nao> {
+ <http://www.semanticdesktop.org/ontologies/2007/08/15/nao#hasDefaultNamespaceAbbreviation>
+ a rdf:Property ;
+ rdfs:comment "Defines the default static namespace abbreviation for a graph" ;
+ rdfs:domain nrl:Data ;
+ rdfs:label "has default namespace abbreviation" ;
+ rdfs:range rdfs:Literal ;
+ rdfs:subPropertyOf <http://www.semanticdesktop.org/ontologies/2007/08/15/nao#annotation> ;
+ nrl:maxCardinality "1"^^xsd:nonNegativeInteger .
+
+ <http://www.semanticdesktop.org/ontologies/2007/08/15/nao#Symbol>
+ a rdfs:Class ;
+ rdfs:comment "Represents a symbol" ;
+ rdfs:label "symbol" ;
+ rdfs:subClassOf rdfs:Resource .
+
+ <http://www.semanticdesktop.org/ontologies/2007/08/15/nao#FreeDesktopIcon>
+ a rdfs:Class ;
+ rdfs:comment "Represents a desktop icon as defined in the FreeDesktop Icon Naming Standard" ;
+ rdfs:label "freedesktopicon" ;
+ rdfs:subClassOf nao:Symbol .
+
+ <http://www.semanticdesktop.org/ontologies/2007/08/15/nao#iconName>
+ a rdf:Property ;
+ rdfs:comment "Defines a name for a FreeDesktop Icon as defined in the FreeDesktop Icon Naming Standard" ;
+ rdfs:domain nao:FreeDesktopIcon ;
+ rdfs:label "iconname" ;
+ rdfs:range rdfs:Literal ;
+ rdfs:subPropertyOf <http://www.semanticdesktop.org/ontologies/2007/08/15/nao#identifier> ;
+ nrl:minCardinality "1"^^xsd:nonNegativeInteger .
+
+ <http://www.semanticdesktop.org/ontologies/2007/08/15/nao#score>
+ a rdf:Property ;
+ rdfs:comment "An authoritative score for an item valued between 0 and 1" ;
+ rdfs:domain rdfs:Resource ;
+ rdfs:label "score" ;
+ rdfs:range xsd:float ;
+ rdfs:subPropertyOf <http://www.semanticdesktop.org/ontologies/2007/08/15/nao#annotation> .
+
+ <http://www.semanticdesktop.org/ontologies/2007/08/15/nao#scoreParameter>
+ a rdf:Property ;
+ rdfs:comment "A marker property to mark selected properties which are input to a mathematical algorithm to generate scores for resources. Properties are marked by being defined as subproperties of this property" ;
+ rdfs:domain rdfs:Resource ;
+ rdfs:label "scoreparameter" ;
+ rdfs:range xsd:float ;
+ rdfs:subPropertyOf <http://www.semanticdesktop.org/ontologies/2007/08/15/nao#annotation> .
+
+
+ <http://www.semanticdesktop.org/ontologies/2007/08/15/nao#isTopicOf>
+ a rdf:Property ;
+ rdfs:comment "Defines a relationship between two resources, where the subject is a topic of the object" ;
+ rdfs:label "is topic of" ;
+ rdfs:range rdfs:Resource ;
+ rdfs:subPropertyOf <http://www.semanticdesktop.org/ontologies/2007/08/15/nao#isRelated> ;
+ nrl:inverseProperty nao:hasTopic .
+
+ <http://www.semanticdesktop.org/ontologies/2007/08/15/nao#hasSubResource>
+ a rdf:Property, nrl:SymmetricProperty ;
+ rdfs:comment "Defines a relationship between a resource and one or more sub resources" ;
+ rdfs:label "has Subresource" ;
+ rdfs:range rdfs:Resource ;
+ rdfs:subPropertyOf <http://www.semanticdesktop.org/ontologies/2007/08/15/nao#isRelated> ;
+ nrl:inverseProperty nao:hasSuperResource .
+
+ <http://www.semanticdesktop.org/ontologies/2007/08/15/nao#hasSuperResource>
+ a rdf:Property, nrl:SymmetricProperty ;
+ rdfs:comment "Defines a relationship between a resource and one or more super resources" ;
+ rdfs:label "has Superresource" ;
+ rdfs:range rdfs:Resource ;
+ rdfs:subPropertyOf <http://www.semanticdesktop.org/ontologies/2007/08/15/nao#isRelated> ;
+ nrl:inverseProperty nao:hasSubResource .
+
+ <http://www.semanticdesktop.org/ontologies/2007/08/15/nao#isTagFor>
+ a rdf:Property ;
+ rdfs:comment "States which resources a tag is associated with" ;
+ rdfs:domain <http://www.semanticdesktop.org/ontologies/2007/08/15/nao#Tag> ;
+ rdfs:label "is tag for" ;
+ rdfs:range rdfs:Resource ;
+ rdfs:subPropertyOf <http://www.semanticdesktop.org/ontologies/2007/08/15/nao#annotation> ;
+ nrl:inverseProperty nao:hasTag .
+
+ <http://www.semanticdesktop.org/ontologies/2007/08/15/nao#version>
+ a rdf:Property ;
+ rdfs:comment "Specifies the version of a graph, in numeric format" ;
+ rdfs:domain nrl:Data ;
+ rdfs:label "version" ;
+ rdfs:range xsd:float ;
+ rdfs:subPropertyOf <http://www.semanticdesktop.org/ontologies/2007/08/15/nao#annotation> ;
+ nrl:maxCardinality "1"^^xsd:nonNegativeInteger .
+
+ <http://www.semanticdesktop.org/ontologies/2007/08/15/nao#altLabel>
+ a rdf:Property ;
+ rdfs:comment "An alternative label alongside the preferred label for a resource" ;
+ rdfs:label "alternative label" ;
+ rdfs:range rdfs:Literal ;
+ rdfs:subPropertyOf rdfs:label .
+
+ <http://www.semanticdesktop.org/ontologies/2007/08/15/nao#hasSymbol>
+ a rdf:Property ;
+ rdfs:comment "Annotation for a resource in the form of a symbol representation" ;
+ rdfs:domain rdfs:Resource ;
+ rdfs:label "has symbol" ;
+ rdfs:range <http://www.semanticdesktop.org/ontologies/2007/08/15/nao#Symbol> .
+
+ <http://www.semanticdesktop.org/ontologies/2007/08/15/nao#prefSymbol>
+ a rdf:Property ;
+ rdfs:comment "A unique preferred symbol representation for a resource" ;
+ rdfs:domain rdfs:Resource ;
+ rdfs:label "preferred symbol" ;
+ rdfs:subPropertyOf <http://www.semanticdesktop.org/ontologies/2007/08/15/nao#hasSymbol> ;
+ rdfs:range <http://www.semanticdesktop.org/ontologies/2007/08/15/nao#Symbol> .
+
+ <http://www.semanticdesktop.org/ontologies/2007/08/15/nao#altSymbol>
+ a rdf:Property ;
+ rdfs:comment "An alternative symbol representation for a resource" ;
+ rdfs:domain rdfs:Resource ;
+ rdfs:label "alternative symbol" ;
+ rdfs:subPropertyOf <http://www.semanticdesktop.org/ontologies/2007/08/15/nao#hasSymbol> ;
+ rdfs:range <http://www.semanticdesktop.org/ontologies/2007/08/15/nao#Symbol> .
+
+ <http://www.semanticdesktop.org/ontologies/2007/08/15/nao#hasTopic>
+ a rdf:Property ;
+ rdfs:comment "Defines a relationship between two resources, where the object is a topic of the subject" ;
+ rdfs:label "has topic" ;
+ rdfs:range rdfs:Resource ;
+ rdfs:subPropertyOf <http://www.semanticdesktop.org/ontologies/2007/08/15/nao#isRelated> ;
+ nrl:inverseProperty nao:isTopicOf .
+
+ <http://www.semanticdesktop.org/ontologies/2007/08/15/nao#serializationLanguage>
+ a rdf:Property ;
+ rdfs:comment "States the serialization language for a named graph that is represented within a document" ;
+ rdfs:domain nrl:DocumentGraph ;
+ rdfs:label "serialization language" ;
+ rdfs:range rdfs:Literal ;
+ nrl:maxCardinality "1"^^xsd:nonNegativeInteger .
+
+ <http://www.semanticdesktop.org/ontologies/2007/08/15/nao#creator>
+ a rdf:Property ;
+ rdfs:comment "Refers to the single or group of individuals that created the resource" ;
+ rdfs:domain rdfs:Resource ;
+ rdfs:label "creator" ;
+ rdfs:range <http://www.semanticdesktop.org/ontologies/2007/08/15/nao#Party> ;
+ rdfs:subPropertyOf <http://www.semanticdesktop.org/ontologies/2007/08/15/nao#annotation> , dc:creator .
+
+ <http://www.semanticdesktop.org/ontologies/2007/08/15/nao#annotation>
+ a rdf:Property ;
+ rdfs:comment "Generic annotation for a resource" ;
+ rdfs:domain rdfs:Resource ;
+ rdfs:label "annotation" .
+
+ <http://www.semanticdesktop.org/ontologies/2007/08/15/nao#rating>
+ a rdf:Property ;
+ rdfs:subPropertyOf <http://www.semanticdesktop.org/ontologies/2007/08/15/nao#annotation> ;
+ rdfs:comment "Annotation for a resource in the form of an unrestricted rating" ;
+ rdfs:domain rdfs:Resource ;
+ rdfs:label "rating" .
+
+ <http://www.semanticdesktop.org/ontologies/2007/08/15/nao#numericRating>
+ a rdf:Property ;
+ rdfs:subPropertyOf <http://www.semanticdesktop.org/ontologies/2007/08/15/nao#rating> ,
+ <http://www.semanticdesktop.org/ontologies/2007/08/15/nao#scoreParameter> ;
+ rdfs:comment " Annotation for a resource in the form of a numeric rating (float value), allowed values are between 1 and 10 whereas 0 is interpreted as not set" ;
+ rdfs:domain rdfs:Resource ;
+ rdfs:label "numeric rating" ;
+ rdfs:range xsd:integer ;
+ nrl:maxCardinality "1"^^xsd:nonNegativeInteger .
+
+ <http://www.semanticdesktop.org/ontologies/2007/08/15/nao#Tag>
+ a rdfs:Class ;
+ rdfs:comment "Represents a generic tag" ;
+ rdfs:label "tag" ;
+ rdfs:subClassOf rdfs:Resource .
+
+ <http://www.semanticdesktop.org/ontologies/2007/08/15/nao#contributor>
+ a rdf:Property ;
+ rdfs:comment "Refers to a single or a group of individuals that contributed to a resource" ;
+ rdfs:domain rdfs:Resource ;
+ rdfs:label "contributor" ;
+ rdfs:range <http://www.semanticdesktop.org/ontologies/2007/08/15/nao#Party> ;
+ rdfs:subPropertyOf <http://www.semanticdesktop.org/ontologies/2007/08/15/nao#annotation> , dc:contributor .
+
+ <http://www.semanticdesktop.org/ontologies/2007/08/15/nao#hasDefaultNamespace>
+ a rdf:Property ;
+ rdfs:comment "Defines the default static namespace for a graph" ;
+ rdfs:domain nrl:Data ;
+ rdfs:label "has default namespace" ;
+ rdfs:range rdfs:Literal ;
+ rdfs:subPropertyOf <http://www.semanticdesktop.org/ontologies/2007/08/15/nao#annotation> ;
+ nrl:maxCardinality "1"^^xsd:nonNegativeInteger .
+
+ <http://www.semanticdesktop.org/ontologies/2007/08/15/nao#modified>
+ a rdf:Property ;
+ rdfs:comment "States the modification time for a resource" ;
+ rdfs:domain rdfs:Resource ;
+ rdfs:label "modified at" ;
+ rdfs:range xsd:dateTime ;
+ rdfs:subPropertyOf dcterms:modified , <http://www.semanticdesktop.org/ontologies/2007/08/15/nao#annotation> .
+
+ <http://www.semanticdesktop.org/ontologies/2007/08/15/nao#created>
+ a rdf:Property ;
+ rdfs:comment "States the creation, or first modification time for a resource" ;
+ rdfs:domain rdfs:Resource ;
+ rdfs:label "created at" ;
+ rdfs:range xsd:dateTime ;
+ rdfs:subPropertyOf <http://www.semanticdesktop.org/ontologies/2007/08/15/nao#modified> ;
+ nrl:maxCardinality "1"^^xsd:nonNegativeInteger .
+
+ <http://www.semanticdesktop.org/ontologies/2007/08/15/nao#lastModified>
+ a rdf:Property ;
+ rdfs:comment "States the last modification time for a resource" ;
+ rdfs:domain rdfs:Resource ;
+ rdfs:label "last modified at" ;
+ rdfs:range xsd:dateTime ;
+ rdfs:subPropertyOf <http://www.semanticdesktop.org/ontologies/2007/08/15/nao#modified> ;
+ nrl:maxCardinality "1"^^xsd:nonNegativeInteger .
+
+ <http://www.semanticdesktop.org/ontologies/2007/08/15/nao#identifier>
+ a rdf:Property ;
+ rdfs:comment "Defines a generic identifier for a resource" ;
+ rdfs:domain rdfs:Resource ;
+ rdfs:label "identifier" .
+
+ <http://www.semanticdesktop.org/ontologies/2007/08/15/nao#status>
+ a rdf:Property ;
+ rdfs:comment "Specifies the status of a graph, stable, unstable or testing" ;
+ rdfs:domain nrl:Data ;
+ rdfs:label "status" ;
+ rdfs:range rdfs:Literal ;
+ rdfs:subPropertyOf <http://www.semanticdesktop.org/ontologies/2007/08/15/nao#annotation> ;
+ nrl:maxCardinality "1"^^xsd:nonNegativeInteger .
+
+ <http://www.semanticdesktop.org/ontologies/2007/08/15/nao#description>
+ a rdf:Property ;
+ rdfs:comment "A non-technical textual annotation for a resource" ;
+ rdfs:label "description" ;
+ rdfs:range rdfs:Literal ;
+ rdfs:subPropertyOf rdfs:comment , <http://www.semanticdesktop.org/ontologies/2007/08/15/nao#annotation> .
+
+
+ <http://www.semanticdesktop.org/ontologies/2007/08/15/nao#prefLabel>
+ a rdf:Property ;
+ rdfs:comment "A preferred label for a resource" ;
+ rdfs:label "preferred label" ;
+ rdfs:range rdfs:Literal ;
+ rdfs:subPropertyOf rdfs:label .
+
+ <http://www.semanticdesktop.org/ontologies/2007/08/15/nao#pluralPrefLabel>
+ a rdf:Property ;
+ rdfs:comment "The plural form of the preferred label for a resource" ;
+ rdfs:label "preferred label plural form" ;
+ rdfs:range rdfs:Literal ;
+ rdfs:subPropertyOf rdfs:label .
+
+ <http://www.semanticdesktop.org/ontologies/2007/08/15/nao#engineeringTool>
+ a rdf:Property ;
+ rdfs:comment "Specifies the engineering tool used to generate the graph" ;
+ rdfs:domain nrl:Data ;
+ rdfs:label "engineering tool" ;
+ rdfs:range rdfs:Literal ;
+ rdfs:subPropertyOf <http://www.semanticdesktop.org/ontologies/2007/08/15/nao#annotation> .
+
+ <http://www.semanticdesktop.org/ontologies/2007/08/15/nao#hasTag>
+ a rdf:Property ;
+ rdfs:comment "Defines an existing tag for a resource" ;
+ rdfs:domain rdfs:Resource ;
+ rdfs:label "has tag" ;
+ rdfs:range <http://www.semanticdesktop.org/ontologies/2007/08/15/nao#Tag> ;
+ rdfs:subPropertyOf <http://www.semanticdesktop.org/ontologies/2007/08/15/nao#annotation> ;
+ nrl:inverseProperty nao:isTagFor .
+
+ <http://www.semanticdesktop.org/ontologies/2007/08/15/nao#isRelated>
+ a rdf:Property , nrl:SymmetricProperty ;
+ rdfs:comment "Defines an annotation for a resource in the form of a relationship between the subject resource and another resource" ;
+ rdfs:label "is related to" ;
+ rdfs:range rdfs:Resource ;
+ rdfs:subPropertyOf <http://www.semanticdesktop.org/ontologies/2007/08/15/nao#annotation> .
+
+ <http://www.semanticdesktop.org/ontologies/2007/08/15/nao#personalIdentifier>
+ a rdf:Property , nrl:InverseFunctionalProperty ;
+ rdfs:comment "Defines a personal string identifier for a resource" ;
+ rdfs:label "personal identifier" ;
+ rdfs:range rdfs:Literal ;
+ rdfs:subPropertyOf <http://www.semanticdesktop.org/ontologies/2007/08/15/nao#identifier> .
+
+ <http://www.semanticdesktop.org/ontologies/2007/08/15/nao#Party>
+ a rdfs:Class ;
+ rdfs:comment "Represents a single or a group of individuals" ;
+ rdfs:label "party" ;
+ rdfs:subClassOf rdfs:Resource .
+
+ nao:deprecated
+ a rdf:Property ;
+ rdfs:comment "If this property is assigned, the subject class, property, or resource, is deprecated and should not be used in production systems any longer. It may be removed without further notice." ;
+ rdfs:label "deprecated" ;
+ rdfs:domain rdfs:Resource ;
+ rdfs:range rdfs:Resource .
+}
+
+<http://www.semanticdesktop.org/ontologies/2007/08/15/nao/metadata> {
+ <http://www.semanticdesktop.org/ontologies/2007/08/15/nao/metadata>
+ a nrl:GraphMetadata ;
+ nrl:coreGraphMetadataFor <http://www.semanticdesktop.org/ontologies/2007/08/15/nao> .
+
+
+ <http://www.semanticdesktop.org/ontologies/2007/08/15/nao>
+ a nrl:Ontology , nrl:DocumentGraph ;
+ nao:hasDefaultNamespace "http://www.semanticdesktop.org/ontologies/2007/08/15/nao#" ;
+ nao:hasDefaultNamespaceAbbreviation "nao" ;
+ nao:lastModified "2009-07-20T14:59:09.500Z" ;
+ nao:serializationLanguage "TriG" ;
+ nao:status "Unstable" ;
+ nrl:updatable "0" ;
+ nao:version "3" .
+}
+
=== added file 'extra/ontology/ncal.trig'
--- extra/ontology/ncal.trig 1970-01-01 00:00:00 +0000
+++ extra/ontology/ncal.trig 2011-10-19 08:09:50 +0000
@@ -0,0 +1,1400 @@
+#
+# Copyright (c) 2007 NEPOMUK Consortium
+# All rights reserved, licensed under either CC-BY or BSD.
+#
+# You are free:
+# * to Share - to copy, distribute and transmit the work
+# * to Remix - to adapt the work
+# Under the following conditions:
+# * Attribution - You must attribute the work in the manner specified by the author
+# or licensor (but not in any way that suggests that they endorse you or your use
+# of the work).
+#
+# Redistribution and use in source and binary forms, with or without modification,
+# are permitted provided that the following conditions are met:
+# * Redistributions of source code must retain the above copyright notice, this
+# list of conditions and the following disclaimer.
+# * Redistributions in binary form must reproduce the above copyright notice, this
+# list of conditions and the following disclaimer in the documentation and/or
+# other materials provided with the distribution.
+# * Neither the names of the authors nor the names of contributors may
+# be used to endorse or promote products derived from this ontology without
+# specific prior written permission.
+#
+# THIS ONTOLOGY IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
+# IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
+# OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
+# IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
+# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
+# NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
+# THIS ONTOLOGY, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+#
+
+@prefix nid3: <http://www.semanticdesktop.org/ontologies/2007/05/10/nid3#> .
+@prefix nrl: <http://www.semanticdesktop.org/ontologies/2007/08/15/nrl#> .
+@prefix nfo: <http://www.semanticdesktop.org/ontologies/2007/03/22/nfo#> .
+@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
+@prefix tmo: <http://www.semanticdesktop.org/ontologies/2008/05/20/tmo#> .
+@prefix protege: <http://protege.stanford.edu/system#> .
+@prefix nmo: <http://www.semanticdesktop.org/ontologies/2007/03/22/nmo#> .
+@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
+@prefix nexif: <http://www.semanticdesktop.org/ontologies/2007/05/10/nexif#> .
+@prefix ncal: <http://www.semanticdesktop.org/ontologies/2007/04/02/ncal#> .
+@prefix pimo: <http://www.semanticdesktop.org/ontologies/2007/11/01/pimo#> .
+@prefix dcterms: <http://purl.org/dc/terms/> .
+@prefix nao: <http://www.semanticdesktop.org/ontologies/2007/08/15/nao#> .
+@prefix geo: <http://www.w3.org/2003/01/geo/wgs84_pos#> .
+@prefix dc: <http://purl.org/dc/elements/1.1/> .
+@prefix nie: <http://www.semanticdesktop.org/ontologies/2007/01/19/nie#> .
+@prefix nco: <http://www.semanticdesktop.org/ontologies/2007/03/22/nco#> .
+@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
+
+ncal: {ncal:sunday
+ a ncal:Weekday ;
+ rdfs:label "sunday" .
+
+ ncal:ncalTimezone
+ a rdf:Property ;
+ rdfs:comment "The timezone instance that should be used to interpret an NcalDateTime. The purpose of this property is similar to the TZID parameter specified in RFC 2445 sec. 4.2.19" ;
+ rdfs:domain ncal:NcalDateTime ;
+ rdfs:label "ncalTimezone" ;
+ rdfs:range ncal:Timezone .
+
+ ncal:delegatedFrom
+ a rdf:Property ;
+ rdfs:comment "To specify the calendar users that have delegated their participation to the calendar user specified by the property. Inspired by RFC 2445 sec. 4.2.4. Originally the value type for this property was CAL-ADDRESS. This has been expressed as nco:Contact to promote integration between NCAL and NCO." ;
+ rdfs:domain ncal:Attendee ;
+ rdfs:label "delegatedFrom" ;
+ rdfs:range nco:Contact .
+
+ ncal:TimezoneObservance
+ a rdfs:Class ;
+ rdfs:label "TimezoneObservance" ;
+ rdfs:subClassOf ncal:UnionOfTimezoneObservanceEventJournalTimezoneTodo , ncal:UnionOfTimezoneObservanceEventFreebusyTimezoneTodo , ncal:UnionOfTimezoneObservanceEventFreebusyJournalTimezoneTodo .
+
+ ncal:base64Encoding
+ a ncal:AttachmentEncoding ;
+ rdfs:label "base64Encoding" .
+
+ ncal:related
+ a rdf:Property ;
+ rdfs:comment "To specify the relationship of the alarm trigger with respect to the start or end of the calendar component. Inspired by RFC 2445 4.2.14. The RFC has specified two possible values for this property ('START' and 'END') they have been expressed as instances of the TriggerRelation class." ;
+ rdfs:domain ncal:Trigger ;
+ rdfs:label "related" ;
+ rdfs:range ncal:TriggerRelation .
+
+ ncal:Event
+ a rdfs:Class ;
+ rdfs:comment "Provide a grouping of component properties that describe an event." ;
+ rdfs:label "Event" ;
+ rdfs:subClassOf ncal:UnionOfTimezoneObservanceEventFreebusyTimezoneTodo , ncal:UnionOfAlarmEventFreebusyJournalTodo , ncal:UnionOfEventFreebusy , ncal:UnionOfAlarmEventJournalTodo , ncal:UnionOfEventFreebusyJournalTodo , nie:InformationElement , ncal:UnionOfEventJournalTodo , ncal:UnionOfTimezoneObservanceEventJournalTimezoneTodo , ncal:UnionOfTimezoneObservanceEventFreebusyJournalTimezoneTodo , ncal:UnionOfEventJournalTimezoneTodo , ncal:UnionOfAlarmEventFreebusyTodo , ncal:UnionOfEventTodo , ncal:UnionOfAlarmEventTodo .
+
+ ncal:created
+ a rdf:Property ;
+ rdfs:comment "This property specifies the date and time that the calendar information was created by the calendar user agent in the calendar store. Note: This is analogous to the creation date and time for a file in the file system. Inspired by RFC 2445 sec. 4.8.7.1. Note that this property is a subproperty of nie:created. The domain of nie:created is nie:DataObject. It is not a superclass of UnionOf_Vevent_Vjournal_Vtodo, but since that union is conceived as an 'abstract' class, and in real-life all resources referenced by this property will also be DataObjects, than this shouldn't cause too much of a problem. Note that RFC allows ONLY UTC time values for this property." ;
+ rdfs:domain ncal:UnionOfEventJournalTodo ;
+ rdfs:label "created" ;
+ rdfs:range xsd:dateTime ;
+ rdfs:subPropertyOf nie:created .
+
+ ncal:comment
+ a rdf:Property ;
+ rdfs:comment "Non-processing information intended to provide a comment to the calendar user. Inspired by RFC 2445 sec. 4.8.1.4 with the following reservations: the LANGUAGE parameter has been discarded. Please use xml:lang literals to express language. For the ALTREP parameter use the commentAltRep property." ;
+ rdfs:domain ncal:UnionOfTimezoneObservanceEventFreebusyJournalTimezoneTodo ;
+ rdfs:label "comment" ;
+ rdfs:range xsd:string .
+
+ ncal:summaryAltRep
+ a rdf:Property ;
+ rdfs:comment """Alternate representation of the comment. Introduced to cover
+the ALTREP parameter of the SUMMARY property. See
+documentation of ncal:summary for details.""" ;
+ rdfs:domain ncal:UnionOfAlarmEventJournalTodo ;
+ rdfs:label "summaryAltRep" ;
+ rdfs:range rdfs:Resource .
+
+ ncal:completedParticipationStatus
+ a ncal:ParticipationStatus ;
+ rdfs:label "completedParticipationStatus" .
+
+ ncal:component
+ a rdf:Property ;
+ rdfs:comment "Links the Vcalendar instance with the calendar components. This property has no direct equivalent in the RFC specification. It has been introduced to express the containmnent relations." ;
+ rdfs:domain ncal:Calendar ;
+ rdfs:label "component" ;
+ rdfs:range ncal:CalendarDataObject ;
+ rdfs:subPropertyOf nie:hasPart .
+
+ ncal:RecurrenceIdentifierRange
+ a rdfs:Class ;
+ rdfs:comment "Recurrence Identifier Range. This class has been created to provide means to express the limited set of values for the ncal:range property. See documentation for ncal:range for details." ;
+ rdfs:label "RecurrenceIdentifierRange" ;
+ rdfs:subClassOf rdfs:Resource .
+
+ ncal:cutype
+ a rdf:Property ;
+ rdfs:comment "To specify the type of calendar user specified by the property. Inspired by RFC 2445 sec. 4.2.3. This parameter has a limited vocabulary. The terms that may serve as values for this property have been expressed as instances of CalendarUserType class. The user may use instances provided with this ontology or create his own." ;
+ rdfs:domain ncal:Attendee ;
+ rdfs:label "cutype" ;
+ rdfs:range ncal:CalendarUserType .
+
+ ncal:needsActionStatus
+ a ncal:TodoStatus ;
+ rdfs:label "needsActionStatus" .
+
+ ncal:wkst
+ a rdf:Property ;
+ rdfs:comment "The day that's counted as the start of the week. It is used to disambiguate the byweekno rule. Defined in RFC 2445 sec. 4.3.10" ;
+ rdfs:domain ncal:RecurrenceRule ;
+ rdfs:label "wkst" ;
+ rdfs:range ncal:Weekday .
+
+ ncal:tzurl
+ a rdf:Property ;
+ rdfs:comment "The TZURL provides a means for a VTIMEZONE component to point to a network location that can be used to retrieve an up-to- date version of itself. Inspired by RFC 2445 sec. 4.8.3.5. Originally the range of this property had been specified as URI." ;
+ rdfs:domain ncal:Timezone ;
+ rdfs:label "tzurl" ;
+ rdfs:range rdfs:Resource .
+
+ ncal:acceptedParticipationStatus
+ a ncal:ParticipationStatus ;
+ rdfs:label "acceptedParticipationStatus" .
+
+ ncal:UnionOfEventTodo
+ a rdfs:Class ;
+ rdfs:label "UnionOfEventTodo" ;
+ rdfs:subClassOf ncal:UnionParentClass .
+
+ ncal:displayAction
+ a ncal:AlarmAction ;
+ rdfs:label "displayAction" .
+
+ ncal:wednesday
+ a ncal:Weekday ;
+ rdfs:label "wednesday" .
+
+ ncal:uid
+ a rdf:Property ;
+ rdfs:comment "This property defines the persistent, globally unique identifier for the calendar component. Inspired by the RFC 2445 sec 4.8.4.7" ;
+ rdfs:domain ncal:UnionOfEventFreebusyJournalTodo ;
+ rdfs:label "uid" ;
+ rdfs:range xsd:string ;
+ rdfs:subPropertyOf nie:identifier .
+
+ ncal:standard
+ a rdf:Property ;
+ rdfs:comment "Links the timezone with the standard timezone observance. This property has no direct equivalent in the RFC 2445. It has been inspired by the structure of the Vtimezone component defined in sec.4.6.5" ;
+ rdfs:domain ncal:Timezone ;
+ rdfs:label "standard" ;
+ rdfs:range ncal:TimezoneObservance .
+
+ ncal:resourceUserType
+ a ncal:CalendarUserType ;
+ rdfs:label "resourceUserType" .
+
+ ncal:cancelledEventStatus
+ a ncal:EventStatus ;
+ rdfs:label "cancelledEventStatus" .
+
+ ncal:TimeTransparency
+ a rdfs:Class ;
+ rdfs:comment """Time transparency. Introduced to provide a way to express
+the limited vocabulary for the values of ncal:transp property.
+See documentation of ncal:transp for details.""" ;
+ rdfs:label "TimeTransparency" ;
+ rdfs:subClassOf rdfs:Resource .
+
+ ncal:cancelledTodoStatus
+ a ncal:TodoStatus ;
+ rdfs:label "cancelledTodoStatus" .
+
+ ncal:NcalTimeEntity
+ a rdfs:Class ;
+ rdfs:comment "A time entity. Conceived as a common superclass for NcalDateTime and NcalPeriod. According to RFC 2445 both DateTime and Period can be interpreted in different timezones. The first case is explored in many properties. The second case is theoretically possible in ncal:rdate property. Therefore the timezone properties have been defined at this level." ;
+ rdfs:label "NcalTimeEntity" ;
+ rdfs:subClassOf rdfs:Resource .
+
+ ncal:attachmentUri
+ a rdf:Property ;
+ rdfs:comment "The uri of the attachment. Created to express the actual value of the ATTACH property defined in RFC 2445 sec. 4.8.1.1. This property expresses the default URI datatype of that property. see ncal:attachmentContents for the BINARY datatype." ;
+ rdfs:domain ncal:Attachment ;
+ rdfs:label "attachmentUri" ;
+ rdfs:range rdfs:Resource .
+
+ ncal:CalendarScale
+ a rdfs:Class ;
+ rdfs:comment "A calendar scale. This class has been introduced to provide the limited vocabulary for the ncal:calscale property." ;
+ rdfs:label "CalendarScale" ;
+ rdfs:subClassOf rdfs:Resource .
+
+ ncal:member
+ a rdf:Property ;
+ rdfs:comment "To specify the group or list membership of the calendar user specified by the property. Inspired by RFC 2445 sec. 4.2.11. Originally this parameter had a value type of CAL-ADDRESS. This has been expressed as nco:Contact to promote integration between NCAL and NCO" ;
+ rdfs:domain ncal:Attendee ;
+ rdfs:label "member" ;
+ rdfs:range nco:Contact .
+
+ ncal:individualUserType
+ a ncal:CalendarUserType ;
+ rdfs:label "individualUserType" .
+
+ ncal:completedStatus
+ a ncal:TodoStatus ;
+ rdfs:label "completedStatus" .
+
+ ncal:periodDuration
+ a rdf:Property ;
+ rdfs:comment "Duration of a period of time. Inspired by the second part of a structured value of the PERIOD datatype specified in RFC 2445 sec. 4.3.9. Note that a single NcalPeriod instance shouldn't have the periodEnd and periodDuration properties specified simultaneously." ;
+ rdfs:domain ncal:NcalPeriod ;
+ rdfs:label "periodDuration" ;
+ rdfs:range xsd:duration .
+
+ ncal:dtstamp
+ a rdf:Property ;
+ rdfs:comment "The property indicates the date/time that the instance of the iCalendar object was created. Inspired by RFC 2445 sec. 4.8.7.1. Note that the RFC allows ONLY UTC values for this property." ;
+ rdfs:domain ncal:UnionOfEventFreebusyJournalTodo ;
+ rdfs:label "dtstamp" ;
+ rdfs:range xsd:dateTime .
+
+ ncal:UnionOfAlarmEventJournalTodo
+ a rdfs:Class ;
+ rdfs:label "UnionOfAlarmEventJournalTodo" ;
+ rdfs:subClassOf ncal:UnionParentClass .
+
+ ncal:BydayRulePart
+ a rdfs:Class ;
+ rdfs:comment "Expresses the compound value of a byday part of a recurrence rule. It stores the weekday and the integer modifier. Inspired by RFC 2445 sec. 4.3.10" ;
+ rdfs:label "BydayRulePart" ;
+ rdfs:subClassOf rdfs:Resource .
+
+ ncal:Trigger
+ a rdfs:Class ;
+ rdfs:comment "An alarm trigger. This class has been created to serve as the range of ncal:trigger property. See the documentation for ncal:trigger for more details." ;
+ rdfs:label "Trigger" ;
+ rdfs:subClassOf rdfs:Resource .
+
+ ncal:procedureAction
+ a ncal:AlarmAction ;
+ rdfs:label "procedureAction" .
+
+ ncal:class
+ a rdf:Property ;
+ rdfs:comment "Defines the access classification for a calendar component. Inspired by RFC 2445 sec. 4.8.1.3 with the following reservations: this property has limited vocabulary. Possible values are: PUBLIC, PRIVATE and CONFIDENTIAL. The default is PUBLIC. Those values are expressed as instances of the AccessClassification class. The user may create his/her own if necessary." ;
+ rdfs:domain ncal:UnionOfEventJournalTodo ;
+ rdfs:label "class" ;
+ rdfs:range ncal:AccessClassification .
+
+ ncal:_8bitEncoding
+ a ncal:AttachmentEncoding ;
+ rdfs:label "_8bitEncoding" .
+
+ ncal:delegatedParticipationStatus
+ a ncal:ParticipationStatus ;
+ rdfs:label "delegatedParticipationStatus" .
+
+ ncal:transp
+ a rdf:Property ;
+ rdfs:comment "Defines whether an event is transparent or not to busy time searches. Inspired by RFC 2445 sec.4.8.2.7. Values for this property can be chosen from a limited vocabulary. To express this a TimeTransparency class has been introduced." ;
+ rdfs:domain ncal:Event ;
+ rdfs:label "transp" ;
+ rdfs:range ncal:TimeTransparency .
+
+ ncal:date
+ a rdf:Property ;
+ rdfs:comment "Date an instance of NcalDateTime refers to. It was conceived to express values in DATE datatype specified in RFC 2445 4.3.4" ;
+ rdfs:domain ncal:NcalDateTime ;
+ rdfs:label "date" ;
+ rdfs:range xsd:date .
+
+ ncal:busyTentativeFreebusyType
+ a ncal:FreebusyType ;
+ rdfs:label "busyTentativeFreebusyType" .
+
+ ncal:emailAction
+ a ncal:AlarmAction ;
+ rdfs:label "emailAction" .
+
+ ncal:needsActionParticipationStatus
+ a ncal:ParticipationStatus ;
+ rdfs:label "needsActionParticipationStatus" .
+
+ ncal:dtend
+ a rdf:Property ;
+ rdfs:comment "This property specifies the date and time that a calendar component ends. Inspired by RFC 2445 sec. 4.8.2.2" ;
+ rdfs:domain ncal:UnionOfEventFreebusy ;
+ rdfs:label "dtend" ;
+ rdfs:range ncal:NcalDateTime .
+
+ ncal:privateClassification
+ a ncal:AccessClassification ;
+ rdfs:label "privateClassification" .
+
+ ncal:dateTime
+ a rdf:Property ;
+ rdfs:comment "Representation of a date an instance of NcalDateTime actually refers to. Its purpose is to express values in the DATE-TIME datatype, as defined in RFC 2445 sec. 4.3.5" ;
+ rdfs:domain ncal:NcalDateTime ;
+ rdfs:label "dateTime" ;
+ rdfs:range xsd:dateTime .
+
+ ncal:prodid
+ a rdf:Property ;
+ rdfs:comment "This property specifies the identifier for the product that created the iCalendar object. Defined in RFC 2445 sec. 4.7.3" ;
+ rdfs:domain ncal:Calendar ;
+ rdfs:label "prodid" ;
+ rdfs:range xsd:string ;
+ rdfs:subPropertyOf nie:generator .
+
+ ncal:bymonthday
+ a rdf:Property ;
+ rdfs:comment "Day of the month when the event should recur. Defined in RFC 2445 sec. 4.3.10" ;
+ rdfs:domain ncal:RecurrenceRule ;
+ rdfs:label "bymonthday" ;
+ rdfs:range xsd:integer .
+
+ ncal:ncalRelation
+ a rdf:Property ;
+ rdfs:comment "A common superproperty for all types of ncal relations. It is not to be used directly." ;
+ rdfs:domain ncal:UnionOfEventJournalTodo ;
+ rdfs:label "ncalRelation" ;
+ rdfs:range xsd:string ;
+ rdfs:subPropertyOf dc:relation .
+
+ ncal:EventStatus
+ a rdfs:Class ;
+ rdfs:comment """A status of an event. This class has been introduced to express
+the limited set of values for the ncal:status property. The user may
+use the instances provided with this ontology or create his/her own.
+See the documentation for ncal:eventStatus for details.""" ;
+ rdfs:label "EventStatus" ;
+ rdfs:subClassOf rdfs:Resource .
+
+ ncal:summary
+ a rdf:Property ;
+ rdfs:comment "Defines a short summary or subject for the calendar component. Inspired by RFC 2445 sec 4.8.1.12 with the following reservations: the LANGUAGE parameter has been discarded. Please use xml:lang literals to express language. For the ALTREP parameter use the summaryAltRep property." ;
+ rdfs:domain ncal:UnionOfAlarmEventJournalTodo ;
+ rdfs:label "summary" ;
+ rdfs:range xsd:string .
+
+ ncal:RecurrenceFrequency
+ a rdfs:Class ;
+ rdfs:comment "Frequency of a recurrence rule. This class has been introduced to express a limited set of allowed values for the ncal:freq property. See the documentation of ncal:freq for details." ;
+ rdfs:label "RecurrenceFrequency" ;
+ rdfs:subClassOf rdfs:Resource .
+
+ ncal:Organizer
+ a rdfs:Class ;
+ rdfs:comment "An organizer of an event. This class has been introduced to serve as a range of ncal:organizer property. See documentation of ncal:organizer for details." ;
+ rdfs:label "Organizer" ;
+ rdfs:subClassOf ncal:AttendeeOrOrganizer .
+
+ ncal:fbtype
+ a rdf:Property ;
+ rdfs:comment "To specify the free or busy time type. Inspired by RFC 2445 sec. 4.2.9. The RFC specified a limited vocabulary for the values of this property. The terms of this vocabulary have been expressed as instances of the FreebusyType class. The user can use instances provided with this ontology or create his own." ;
+ rdfs:domain ncal:FreebusyPeriod ;
+ rdfs:label "fbtype" ;
+ rdfs:range ncal:FreebusyType .
+
+ ncal:Attachment
+ a rdfs:Class ;
+ rdfs:comment "An object attached to a calendar entity. This class has been introduced to serve as a structured value of the ncal:attach property. See the documentation of ncal:attach for details." ;
+ rdfs:label "Attachment" ;
+ rdfs:subClassOf rdfs:Resource , nfo:Attachment .
+
+ ncal:tzname
+ a rdf:Property ;
+ rdfs:comment "Specifies the customary designation for a timezone description. Inspired by RFC 2445 sec. 4.8.3.2. The LANGUAGE parameter has been discarded. Please use xml:lang literals to express languages. The original specification for the domain of this property stated that it must appear within the timezone component. In this ontology the TimezoneObservance class has been introduced to clarify this specification." ;
+ rdfs:domain ncal:TimezoneObservance ;
+ rdfs:label "tzname" ;
+ rdfs:range xsd:string .
+
+ ncal:busyUnavailableFreebusyType
+ a ncal:FreebusyType ;
+ rdfs:label "busyUnavailableFreebusyType" .
+
+ ncal:CalendarDataObject
+ a rdfs:Class ;
+ rdfs:comment "A DataObject found in a calendar. It is usually interpreted as one of the calendar entity types (e.g. Event, Journal, Todo etc.)" ;
+ rdfs:label "CalendarDataObject" ;
+ rdfs:subClassOf nie:DataObject .
+
+ ncal:chairRole
+ a ncal:AttendeeRole ;
+ rdfs:label "chairRole" .
+
+ ncal:nonParticipantRole
+ a ncal:AttendeeRole ;
+ rdfs:label "nonParticipantRole" .
+
+ ncal:descriptionAltRep
+ a rdf:Property ;
+ rdfs:comment """Alternate representation of the calendar entity description. Introduced to cover
+the ALTREP parameter of the DESCRIPTION property. See
+documentation of ncal:description for details.""" ;
+ rdfs:domain ncal:UnionOfAlarmEventJournalTodo ;
+ rdfs:label "descriptionAltRep" ;
+ rdfs:range rdfs:Resource .
+
+ ncal:commentAltRep
+ a rdf:Property ;
+ rdfs:comment """Alternate representation of the comment. Introduced to cover
+the ALTREP parameter of the COMMENT property. See
+documentation of ncal:comment for details.""" ;
+ rdfs:domain ncal:UnionOfTimezoneObservanceEventFreebusyJournalTimezoneTodo ;
+ rdfs:label "commentAltRep" ;
+ rdfs:range rdfs:Resource .
+
+ ncal:UnionOfTimezoneObservanceEventFreebusyJournalTimezoneTodo
+ a rdfs:Class ;
+ rdfs:label "UnionOfTimezoneObservanceEventFreebusyJournalTimezoneTodo" ;
+ rdfs:subClassOf ncal:UnionParentClass .
+
+ ncal:contact
+ a rdf:Property ;
+ rdfs:comment "The property is used to represent contact information or alternately a reference to contact information associated with the calendar component. Inspired by RFC 2445 sec. 4.8.4.2 with the following reservations: the LANGUAGE parameter has been discarded. Please use xml:lang literals to express language. For the ALTREP parameter use the contactAltRep property. The RFC doesn't define any format for the string." ;
+ rdfs:domain ncal:UnionOfEventFreebusyJournalTodo ;
+ rdfs:label "contact" ;
+ rdfs:range xsd:string .
+
+ ncal:sequence
+ a rdf:Property ;
+ rdfs:comment "This property defines the revision sequence number of the calendar component within a sequence of revisions. Inspired by RFC 2445 sec. 4.8.7.4" ;
+ rdfs:domain ncal:UnionOfEventJournalTodo ;
+ rdfs:label "sequence" ;
+ rdfs:range xsd:integer .
+
+ ncal:repeat
+ a rdf:Property ;
+ rdfs:comment "This property defines the number of times the alarm should be repeated after the initial trigger. Inspired by RFC 2445 sec. 4.8.6.2" ;
+ rdfs:domain ncal:Alarm ;
+ rdfs:label "repeat" ;
+ rdfs:range xsd:integer .
+
+ ncal:freeFreebusyType
+ a ncal:FreebusyType ;
+ rdfs:label "freeFreebusyType" .
+
+ ncal:yearly
+ a ncal:RecurrenceFrequency ;
+ rdfs:label "yearly" .
+
+ ncal:involvedContact
+ a rdf:Property ;
+ rdfs:comment "A contact of the Attendee or the organizer involved in an event or other calendar entity. This property has been introduced to express the actual value of the ATTENDEE and ORGANIZER properties. The contact will also represent the CN parameter of those properties. See documentation of ncal:attendee or ncal:organizer for more details." ;
+ rdfs:domain ncal:AttendeeOrOrganizer ;
+ rdfs:label "involvedContact" ;
+ rdfs:range nco:Contact .
+
+ ncal:opaqueTransparency
+ a ncal:TimeTransparency ;
+ rdfs:label "opaqueTransparency" .
+
+ ncal:inProcessStatus
+ a ncal:TodoStatus ;
+ rdfs:label "inProcessStatus" .
+
+ ncal:startTriggerRelation
+ a ncal:TriggerRelation ;
+ rdfs:label "startTriggerRelation" .
+
+ ncal:AlarmAction
+ a rdfs:Class ;
+ rdfs:comment "Action to be performed on alarm. This class has been introduced to express the limited set of values of the ncal:action property. Please refer to the documentation of ncal:action for details." ;
+ rdfs:label "AlarmAction" ;
+ rdfs:subClassOf rdfs:Resource .
+
+ ncal:UnionParentClass
+ a rdfs:Class ;
+ rdfs:label "UnionParentClass" ;
+ rdfs:subClassOf rdfs:Resource .
+
+ ncal:Alarm
+ a rdfs:Class ;
+ rdfs:comment "Provide a grouping of component properties that define an alarm." ;
+ rdfs:label "Alarm" ;
+ rdfs:subClassOf ncal:UnionOfAlarmEventFreebusyJournalTodo , ncal:UnionOfAlarmEventJournalTodo , ncal:UnionOfAlarmEventFreebusyTodo , ncal:UnionOfAlarmEventTodo , nie:InformationElement .
+
+ ncal:unknownUserType
+ a ncal:CalendarUserType ;
+ rdfs:label "unknownUserType" .
+
+ ncal:rsvp
+ a rdf:Property ;
+ rdfs:comment "To specify whether there is an expectation of a favor of a reply from the calendar user specified by the property value. Inspired by RFC 2445 sec. 4.2.17" ;
+ rdfs:domain ncal:Attendee ;
+ rdfs:label "rsvp" ;
+ rdfs:range xsd:boolean .
+
+ ncal:RecurrenceRule
+ a rdfs:Class ;
+ rdfs:label "RecurrenceRule" ;
+ rdfs:subClassOf rdfs:Resource .
+
+ ncal:UnionOfEventJournalTimezoneTodo
+ a rdfs:Class ;
+ rdfs:label "UnionOfEventJournalTimezoneTodo" ;
+ rdfs:subClassOf ncal:UnionParentClass .
+
+ ncal:TodoStatus
+ a rdfs:Class ;
+ rdfs:comment """A status of a calendar entity. This class has been introduced to express
+the limited set of values for the ncal:status property. The user may
+use the instances provided with this ontology or create his/her own.
+See the documentation for ncal:todoStatus for details.""" ;
+ rdfs:label "TodoStatus" ;
+ rdfs:subClassOf rdfs:Resource .
+
+ ncal:monday
+ a ncal:Weekday ;
+ rdfs:label "monday" .
+
+ ncal:geo
+ a rdf:Property ;
+ rdfs:comment "This property specifies information related to the global position for the activity specified by a calendar component. Inspired by RFC 2445 sec. 4.8.1.6" ;
+ rdfs:domain ncal:UnionOfEventTodo ;
+ rdfs:label "geo" ;
+ rdfs:range geo:Point .
+
+ ncal:UnionOfAlarmEventFreebusyTodo
+ a rdfs:Class ;
+ rdfs:label "UnionOfAlarmEventFreebusyTodo" ;
+ rdfs:subClassOf ncal:UnionParentClass .
+
+ ncal:hourly
+ a ncal:RecurrenceFrequency ;
+ rdfs:label "hourly" .
+
+ ncal:fmttype
+ a rdf:Property ;
+ rdfs:comment "To specify the content type of a referenced object. Inspired by RFC 2445 sec. 4.2.8. The value of this property should be an IANA-registered content type (e.g. application/binary)" ;
+ rdfs:domain ncal:Attachment ;
+ rdfs:label "fmttype" ;
+ rdfs:range xsd:string .
+
+ ncal:byyearday
+ a rdf:Property ;
+ rdfs:comment "Day of the year the event should occur. Defined in RFC 2445 sec. 4.3.10" ;
+ rdfs:domain ncal:RecurrenceRule ;
+ rdfs:label "byyearday" ;
+ rdfs:range xsd:integer .
+
+ ncal:dtstart
+ a rdf:Property ;
+ rdfs:comment "This property specifies when the calendar component begins. Inspired by RFC 2445 sec. 4.8.2.4" ;
+ rdfs:domain ncal:UnionOfTimezoneObservanceEventFreebusyTimezoneTodo ;
+ rdfs:label "dtstart" ;
+ rdfs:range ncal:NcalDateTime .
+
+ ncal:description
+ a rdf:Property ;
+ rdfs:comment "A more complete description of the calendar component than that provided by the ncal:summary property. Inspired by RFC 2445 sec. 4.8.1.5 with the following reservations: the LANGUAGE parameter has been discarded. Please use xml:lang literals to express language. For the ALTREP parameter use the descriptionAltRep property." ;
+ rdfs:domain ncal:UnionOfAlarmEventJournalTodo ;
+ rdfs:label "description" ;
+ rdfs:range xsd:string ;
+ rdfs:subPropertyOf nie:description .
+
+ ncal:thursday
+ a ncal:Weekday ;
+ rdfs:label "thursday" .
+
+ ncal:daily
+ a ncal:RecurrenceFrequency ;
+ rdfs:label "daily" .
+
+ ncal:Weekday
+ a rdfs:Class ;
+ rdfs:comment "Day of the week. This class has been created to provide the limited vocabulary for ncal:byday property. See the documentation for ncal:byday for details." ;
+ rdfs:label "Weekday" ;
+ rdfs:subClassOf rdfs:Resource .
+
+ ncal:partstat
+ a rdf:Property ;
+ rdfs:comment "To specify the participation status for the calendar user specified by the property. Inspired by RFC 2445 sec. 4.2.12. Originally this parameter had three sets of allowed values; which set applied to a particular case depended on the type of calendar entity this parameter occurred in (event, todo, journal entry). This would be awkward to model in RDF, so a single ParticipationStatus class has been introduced. Terms of the values vocabulary are expressed as instances of this class. Users are advised to pay attention to which instances they use." ;
+ rdfs:domain ncal:Attendee ;
+ rdfs:label "partstat" ;
+ rdfs:range ncal:ParticipationStatus .
+
+ ncal:relatedToSibling
+ a rdf:Property ;
+ rdfs:comment "The property is used to represent a relationship or reference between one calendar component and another. Inspired by RFC 2445 sec. 4.8.4.5. Originally this property had a RELTYPE parameter. It has been decided that it is more natural to introduce three different properties to express the values of that parameter. This property expresses the RELATED-TO property with RELTYPE=SIBLING parameter." ;
+ rdfs:domain ncal:UnionOfEventJournalTodo ;
+ rdfs:label "relatedToSibling" ;
+ rdfs:range xsd:string ;
+ rdfs:subPropertyOf ncal:ncalRelation .
+
+ ncal:count
+ a rdf:Property ;
+ rdfs:comment "How many times should an event be repeated. Defined in RFC 2445 sec. 4.3.10" ;
+ rdfs:domain ncal:RecurrenceRule ;
+ rdfs:label "count" ;
+ rdfs:range xsd:integer .
+
+ ncal:duration
+ a rdf:Property ;
+ rdfs:comment "The property specifies a positive duration of time. Inspired by RFC 2445 sec. 4.8.2.5" ;
+ rdfs:domain ncal:UnionOfAlarmEventFreebusyTodo ;
+ rdfs:label "duration" ;
+ rdfs:range xsd:duration .
+
+ ncal:bymonth
+ a rdf:Property ;
+ rdfs:comment "Number of the month of the recurrence. Valid values are integers from 1 (January) to 12 (December). Defined in RFC 2445 sec. 4.3.10" ;
+ rdfs:domain ncal:RecurrenceRule ;
+ rdfs:label "bymonth" ;
+ rdfs:range xsd:integer .
+
+ ncal:inProcessParticipationStatus
+ a ncal:ParticipationStatus ;
+ rdfs:label "inProcessParticipationStatus" .
+
+ ncal:weekly
+ a ncal:RecurrenceFrequency ;
+ rdfs:label "weekly" .
+
+ ncal:sentBy
+ a rdf:Property ;
+ rdfs:comment "To specify the calendar user that is acting on behalf of the calendar user specified by the property. Inspired by RFC 2445 sec. 4.2.18. The original data type of this property was a mailto: URI. This has been changed to nco:Contact to promote integration between NCO and NCAL." ;
+ rdfs:domain ncal:AttendeeOrOrganizer ;
+ rdfs:label "sentBy" ;
+ rdfs:range nco:Contact .
+
+ ncal:attachmentContent
+ a rdf:Property ;
+ rdfs:comment "The content of the attachment. Created to express the actual value of the ATTACH property defined in RFC 2445 sec. 4.8.1.1. This property expresses the BINARY datatype of that property. See ncal:attachmentUri for the URI datatype." ;
+ rdfs:domain ncal:Attachment ;
+ rdfs:label "attachmentContent" ;
+ rdfs:range xsd:string .
+
+ ncal:attach
+ a rdf:Property ;
+ rdfs:comment "The property provides the capability to associate a document object with a calendar component. Defined in the RFC 2445 sec. 4.8.1.1" ;
+ rdfs:domain ncal:UnionOfAlarmEventJournalTodo ;
+ rdfs:label "attach" ;
+ rdfs:range ncal:Attachment ;
+ rdfs:subPropertyOf nie:hasPart .
+
+ ncal:RecurrenceIdentifier
+ a rdfs:Class ;
+ rdfs:comment "Recurrence Identifier. Introduced to provide a structure for the value of ncal:recurrenceId property. See the documentation of ncal:recurrenceId for details." ;
+ rdfs:label "RecurrenceIdentifier" ;
+ rdfs:subClassOf rdfs:Resource .
+
+ ncal:percentComplete
+ a rdf:Property ;
+ rdfs:comment "This property is used by an assignee or delegatee of a to-do to convey the percent completion of a to-do to the Organizer. Inspired by RFC 2445 sec. 4.8.1.8" ;
+ rdfs:domain ncal:Todo ;
+ rdfs:label "percentComplete" ;
+ rdfs:range xsd:integer .
+
+ ncal:gregorianCalendarScale
+ a ncal:CalendarScale ;
+ rdfs:label "gregorianCalendarScale" .
+
+ ncal:delegatedTo
+ a rdf:Property ;
+ rdfs:comment "To specify the calendar users to whom the calendar user specified by the property has delegated participation. Inspired by RFC 2445 sec. 4.2.5. Originally the value type for this parameter was CAL-ADDRESS. This has been expressed as nco:Contact to promote integration between NCAL and NCO." ;
+ rdfs:domain ncal:Attendee ;
+ rdfs:label "delegatedTo" ;
+ rdfs:range nco:Contact .
+
+ ncal:action
+ a rdf:Property ;
+ rdfs:comment "This property defines the action to be invoked when an alarm is triggered. Inspired by RFC 2445 sec 4.8.6.1. Originally this property had a limited set of values. They are expressed as instances of the AlarmAction class." ;
+ rdfs:domain ncal:Alarm ;
+ rdfs:label "action" ;
+ rdfs:range ncal:AlarmAction .
+
+ ncal:AttachmentEncoding
+ a rdfs:Class ;
+ rdfs:comment "Attachment encoding. This class has been introduced to express the limited vocabulary of values for the ncal:encoding property. See the documentation of ncal:encoding for details." ;
+ rdfs:label "AttachmentEncoding" ;
+ rdfs:subClassOf rdfs:Resource .
+
+ ncal:lastModified
+ a rdf:Property ;
+ rdfs:comment "The property specifies the date and time that the information associated with the calendar component was last revised in the calendar store. Note: This is analogous to the modification date and time for a file in the file system. Inspired by RFC 2445 sec. 4.8.7.3. Note that the RFC allows ONLY UTC time values for this property." ;
+ rdfs:domain ncal:UnionOfEventJournalTimezoneTodo ;
+ rdfs:label "lastModified" ;
+ rdfs:range xsd:dateTime .
+
+ ncal:trigger
+ a rdf:Property ;
+ rdfs:comment "This property specifies when an alarm will trigger. Inspired by RFC 2445 sec. 4.8.6.3. Originally the value of this property could accept two types: duration and date-time. To express this fact a Trigger class has been introduced. It also has a related property to account for the RELATED parameter." ;
+ rdfs:domain ncal:UnionOfAlarmEventTodo ;
+ rdfs:label "trigger" ;
+ rdfs:range ncal:Trigger .
+
+ ncal:until
+ a rdf:Property ;
+ rdfs:comment "The UNTIL rule part defines a date-time value which bounds the recurrence rule in an inclusive manner. If the value specified by UNTIL is synchronized with the specified recurrence, this date or date-time becomes the last instance of the recurrence. If specified as a date-time value, then it MUST be specified in an UTC time format. If not present, and the COUNT rule part is also not present, the RRULE is considered to repeat forever." ;
+ rdfs:domain ncal:RecurrenceRule ;
+ rdfs:label "until" ;
+ rdfs:range xsd:dateTime .
+
+ ncal:relatedToChild
+ a rdf:Property ;
+ rdfs:comment "The property is used to represent a relationship or reference between one calendar component and another. Inspired by RFC 2445 sec. 4.8.4.5. Originally this property had a RELTYPE parameter. It has been decided to introduce three different properties to express the values of that parameter. This property expresses the RELATED-TO property with RELTYPE=CHILD parameter." ;
+ rdfs:domain ncal:UnionOfEventJournalTodo ;
+ rdfs:label "relatedToChild" ;
+ rdfs:range xsd:string ;
+ rdfs:subPropertyOf ncal:ncalRelation .
+
+ ncal:url
+ a rdf:Property ;
+ rdfs:comment "This property defines a Uniform Resource Locator (URL) associated with the iCalendar object. Inspired by the RFC 2445 sec. 4.8.4.6. Original range had been specified as URI." ;
+ rdfs:domain ncal:UnionOfEventFreebusyJournalTodo ;
+ rdfs:label "url" ;
+ rdfs:range rdfs:Resource .
+
+ ncal:AccessClassification
+ a rdfs:Class ;
+ rdfs:comment """Access classification of a calendar component. Introduced to express
+the set of values for the ncal:class property. The user may use instances
+provided with this ontology or create his/her own with desired semantics.
+See the documentation of ncal:class for details.""" ;
+ rdfs:label "AccessClassification" ;
+ rdfs:subClassOf rdfs:Resource .
+
+ ncal:Freebusy
+ a rdfs:Class ;
+ rdfs:comment "Provide a grouping of component properties that describe either a request for free/busy time, describe a response to a request for free/busy time or describe a published set of busy time." ;
+ rdfs:label "Freebusy" ;
+ rdfs:subClassOf ncal:UnionOfTimezoneObservanceEventFreebusyTimezoneTodo , ncal:UnionOfAlarmEventFreebusyJournalTodo , ncal:UnionOfTimezoneObservanceEventFreebusyJournalTimezoneTodo , ncal:UnionOfEventFreebusy , ncal:UnionOfAlarmEventFreebusyTodo , ncal:UnionOfEventFreebusyJournalTodo , nie:InformationElement .
+
+ ncal:completed
+ a rdf:Property ;
+ rdfs:comment "This property defines the date and time that a to-do was actually completed. Inspired by RFC 2445 sec. 4.8.2.1. Note that the RFC allows ONLY UTC time values for this property." ;
+ rdfs:domain ncal:Todo ;
+ rdfs:label "completed" ;
+ rdfs:range xsd:dateTime .
+
+ ncal:JournalStatus
+ a rdfs:Class ;
+ rdfs:comment """A status of a journal entry. This class has been introduced to express
+the limited set of values for the ncal:status property. The user may
+use the instances provided with this ontology or create his/her own.
+See the documentation for ncal:journalStatus for details.""" ;
+ rdfs:label "JournalStatus" ;
+ rdfs:subClassOf rdfs:Resource .
+
+ ncal:UnionOfEventFreebusy
+ a rdfs:Class ;
+ rdfs:label "UnionOfEventFreebusy" ;
+ rdfs:subClassOf ncal:UnionParentClass .
+
+ ncal:priority
+ a rdf:Property ;
+ rdfs:comment "The property defines the relative priority for a calendar component. Inspired by RFC 2445 sec. 4.8.1.9" ;
+ rdfs:domain ncal:UnionOfEventTodo ;
+ rdfs:label "priority" ;
+ rdfs:range xsd:integer .
+
+ ncal:relatedToParent
+ a rdf:Property ;
+ rdfs:comment "The property is used to represent a relationship or reference between one calendar component and another. Inspired by RFC 2445 sec. 4.8.4.5. Originally this property had a RELTYPE parameter. It has been decided that it is more natural to introduce three different properties to express the values of that parameter. This property expresses the RELATED-TO property with no RELTYPE parameter (the default value is PARENT), or with explicit RELTYPE=PARENT parameter." ;
+ rdfs:domain ncal:UnionOfEventJournalTodo ;
+ rdfs:label "relatedToParent" ;
+ rdfs:range xsd:string ;
+ rdfs:subPropertyOf ncal:ncalRelation .
+
+ ncal:resources
+ a rdf:Property ;
+ rdfs:comment "Defines the equipment or resources anticipated for an activity specified by a calendar entity. Inspired by RFC 2445 sec. 4.8.1.10 with the following reservations: the LANGUAGE parameter has been discarded. Please use xml:lang literals to express language. For the ALTREP parameter use the resourcesAltRep property. This property specifies multiple resources. The order is not important. It is recommended to introduce a separate triple for each resource." ;
+ rdfs:domain ncal:UnionOfEventTodo ;
+ rdfs:label "resources" ;
+ rdfs:range xsd:string .
+
+ ncal:bysecond
+ a rdf:Property ;
+ rdfs:comment "Second of a recurrence. Defined in RFC 2445 sec. 4.3.10" ;
+ rdfs:domain ncal:RecurrenceRule ;
+ rdfs:label "bysecond" ;
+ rdfs:range xsd:integer .
+
+ ncal:Journal
+ a rdfs:Class ;
+ rdfs:comment "Provide a grouping of component properties that describe a journal entry." ;
+ rdfs:label "Journal" ;
+ rdfs:subClassOf ncal:UnionOfTimezoneObservanceEventJournalTimezoneTodo , ncal:UnionOfAlarmEventFreebusyJournalTodo , ncal:UnionOfEventJournalTimezoneTodo , ncal:UnionOfTimezoneObservanceEventFreebusyJournalTimezoneTodo , ncal:UnionOfAlarmEventJournalTodo , ncal:UnionOfEventFreebusyJournalTodo , nie:InformationElement , ncal:UnionOfEventJournalTodo .
+
+ ncal:Attendee
+ a rdfs:Class ;
+ rdfs:comment "An attendee of an event. This class has been introduced to serve as the range for ncal:attendee property. See documentation of ncal:attendee for details." ;
+ rdfs:label "Attendee" ;
+ rdfs:subClassOf ncal:AttendeeOrOrganizer .
+
+ ncal:saturday
+ a ncal:Weekday ;
+ rdfs:label "saturday" .
+
+ ncal:statusDescription
+ a rdf:Property ;
+ rdfs:comment "Longer return status description. Inspired by the second part of the structured value of the REQUEST-STATUS property defined in RFC 2445 sec. 4.8.8.2" ;
+ rdfs:domain ncal:RequestStatus ;
+ rdfs:label "statusDescription" ;
+ rdfs:range xsd:string .
+
+ ncal:tentativeStatus
+ a ncal:EventStatus ;
+ rdfs:label "tentativeStatus" .
+
+ ncal:location
+ a rdf:Property ;
+ rdfs:comment "Defines the intended venue for the activity defined by a calendar component. Inspired by RFC 2445 sec 4.8.1.7 with the following reservations: the LANGUAGE parameter has been discarded. Please use xml:lang literals to express language. For the ALTREP parameter use the locationAltRep property." ;
+ rdfs:domain ncal:UnionOfEventTodo ;
+ rdfs:label "location" ;
+ rdfs:range xsd:string .
+
+ ncal:eventStatus
+ a rdf:Property ;
+ rdfs:comment "Defines the overall status or confirmation for an Event. Based on the STATUS property defined in RFC 2445 sec. 4.8.1.11." ;
+ rdfs:domain ncal:Event ;
+ rdfs:label "status" ;
+ rdfs:range ncal:EventStatus .
+
+ ncal:AttendeeOrOrganizer
+ a rdfs:Class ;
+ rdfs:comment "A common superclass for ncal:Attendee and ncal:Organizer." ;
+ rdfs:label "AttendeeOrOrganizer" ;
+ rdfs:subClassOf rdfs:Resource .
+
+ ncal:FreebusyPeriod
+ a rdfs:Class ;
+ rdfs:comment "An aggregate of a period and a freebusy type. This class has been introduced to serve as a range of the ncal:freebusy property. See documentation for ncal:freebusy for details. Note that the specification of freebusy property states that the period is to be expressed using UTC time, so the timezone properties should NOT be used for instances of this class." ;
+ rdfs:label "FreebusyPeriod" ;
+ rdfs:subClassOf ncal:NcalPeriod .
+
+ ncal:bydayModifier
+ a rdf:Property ;
+ rdfs:comment "An integer modifier for the BYDAY rule part. Each BYDAY value can also be preceded by a positive (+n) or negative (-n) integer. If present, this indicates the nth occurrence of the specific day within the MONTHLY or YEARLY RRULE. For example, within a MONTHLY rule, +1MO (or simply 1MO) represents the first Monday within the month, whereas -1MO represents the last Monday of the month. If an integer modifier is not present, it means all days of this type within the specified frequency. For example, within a MONTHLY rule, MO represents all Mondays within the month. Inspired by RFC 2445 sec. 4.3.10" ;
+ rdfs:domain ncal:BydayRulePart ;
+ rdfs:label "bydayModifier" ;
+ rdfs:range xsd:integer .
+
+ ncal:UnionOfAlarmEventFreebusyJournalTodo
+ a rdfs:Class ;
+ rdfs:label "UnionOfAlarmEventFreebusyJournalTodo" ;
+ rdfs:subClassOf ncal:UnionParentClass .
+
+ ncal:minutely
+ a ncal:RecurrenceFrequency ;
+ rdfs:label "minutely" .
+
+ ncal:method
+ a rdf:Property ;
+ rdfs:comment "This property defines the iCalendar object method associated with the calendar object. Defined in RFC 2445 sec. 4.7.2" ;
+ rdfs:domain ncal:Calendar ;
+ rdfs:label "method" ;
+ rdfs:range xsd:string .
+
+ ncal:tuesday
+ a ncal:Weekday ;
+ rdfs:label "tuesday" .
+
+ ncal:UnionOfEventJournalTodo
+ a rdfs:Class ;
+ rdfs:label "UnionOfEventJournalTodo" ;
+ rdfs:subClassOf ncal:UnionParentClass .
+
+ ncal:triggerDateTime
+ a rdf:Property ;
+ rdfs:comment "The exact date and time of the trigger. This property has been created to express the VALUE=DATE, and VALUE=DATE-TIME parameters of the TRIGGER property. See the documentation for ncal:trigger for more details" ;
+ rdfs:domain ncal:Trigger ;
+ rdfs:label "triggerDateTime" ;
+ rdfs:range xsd:dateTime .
+
+ ncal:endTriggerRelation
+ a ncal:TriggerRelation ;
+ rdfs:label "endTriggerRelation" .
+
+ ncal:dir
+ a rdf:Property ;
+ rdfs:comment "Specifies a reference to a directory entry associated with the calendar user specified by the property. Inspired by RFC 2445 sec. 4.2.6. Originally the data type of the value of this parameter was URI (usually an LDAP URI). This has been expressed as rdfs:Resource." ;
+ rdfs:domain ncal:AttendeeOrOrganizer ;
+ rdfs:label "dir" ;
+ rdfs:range rdfs:Resource .
+
+ ncal:interval
+ a rdf:Property ;
+ rdfs:comment "The INTERVAL rule part contains a positive integer representing how often the recurrence rule repeats. The default value is \"1\", meaning every second for a SECONDLY rule, or every minute for a MINUTELY rule, every hour for an HOURLY rule, every day for a DAILY rule, every week for a WEEKLY rule, every month for a MONTHLY rule and every year for a YEARLY rule. Defined in RFC 2445 sec. 4.3.10" ;
+ rdfs:domain ncal:RecurrenceRule ;
+ rdfs:label "interval" ;
+ rdfs:range xsd:integer .
+
+ ncal:requestStatus
+ a rdf:Property ;
+ rdfs:comment "This property defines the status code returned for a scheduling request. Inspired by RFC 2445 sec. 4.8.8.2. Original value of this property was a four-element structure. The RequestStatus class has been introduced to express it. In RFC 2445 this property could have the LANGUAGE parameter. This has been discarded in this ontology. Use xml:lang literals to express it if necessary." ;
+ rdfs:domain ncal:UnionOfEventFreebusyJournalTodo ;
+ rdfs:label "requestStatus" ;
+ rdfs:range ncal:RequestStatus .
+
+ ncal:byday
+ a rdf:Property ;
+ rdfs:comment "Weekdays the recurrence should occur. Defined in RFC 2445 sec. 4.3.10" ;
+ rdfs:domain ncal:RecurrenceRule ;
+ rdfs:label "byday" ;
+ rdfs:range ncal:BydayRulePart .
+
+ ncal:friday
+ a ncal:Weekday ;
+ rdfs:label "friday" .
+
+ ncal:recurrenceId
+ a rdf:Property ;
+ rdfs:comment "This property is used in conjunction with the \"UID\" and \"SEQUENCE\" property to identify a specific instance of a recurring \"VEVENT\", \"VTODO\" or \"VJOURNAL\" calendar component. The property value is the effective value of the \"DTSTART\" property of the recurrence instance. Inspired by the RFC 2445 sec. 4.8.4.4" ;
+ rdfs:domain ncal:UnionOfEventJournalTimezoneTodo ;
+ rdfs:label "recurrenceId" ;
+ rdfs:range ncal:RecurrenceIdentifier .
+
+ ncal:thisAndPriorRange
+ a ncal:RecurrenceIdentifierRange ;
+ rdfs:label "thisAndPriorRange" .
+
+ ncal:periodBegin
+ a rdf:Property ;
+ rdfs:comment "Beginning of a period. Inspired by the first part of a structured value of the PERIOD datatype specified in RFC 2445 sec. 4.3.9" ;
+ rdfs:domain ncal:NcalPeriod ;
+ rdfs:label "periodBegin" ;
+ rdfs:range xsd:dateTime .
+
+ ncal:byweekno
+ a rdf:Property ;
+ rdfs:comment "The number of the week an event should recur. Defined in RFC 2445 sec. 4.3.10" ;
+ rdfs:domain ncal:RecurrenceRule ;
+ rdfs:label "byweekno" ;
+ rdfs:range xsd:integer .
+
+ ncal:tzoffsetfrom
+ a rdf:Property ;
+ rdfs:comment "This property specifies the offset which is in use prior to this time zone observance. Inspired by RFC 2445 sec. 4.8.3.3. The original domain was underspecified. It said that this property must appear within a Timezone component. In this ontology a TimezoneObservance class has been introduced to clarify this specification. The original range was UTC-OFFSET. There is no equivalent among the XSD datatypes so plain string was chosen." ;
+ rdfs:domain ncal:TimezoneObservance ;
+ rdfs:label "tzoffsetfrom" ;
+ rdfs:range xsd:string .
+
+ ncal:triggerDuration
+ a rdf:Property ;
+ rdfs:comment "The duration of a trigger. This property has been created to express the VALUE=DURATION parameter of the TRIGGER property. See documentation for ncal:trigger for more details." ;
+ rdfs:domain ncal:Trigger ;
+ rdfs:label "triggerDuration" ;
+ rdfs:range xsd:duration .
+
+ ncal:FreebusyType
+ a rdfs:Class ;
+ rdfs:comment "Type of a Freebusy indication. This class has been introduced to serve as a limited set of values for the ncal:fbtype property. See the documentation of ncal:fbtype for details." ;
+ rdfs:label "FreebusyType" ;
+ rdfs:subClassOf rdfs:Resource .
+
+ ncal:NcalDateTime
+ a rdfs:Class ;
+ rdfs:label "NcalDateTime" ;
+ rdfs:subClassOf ncal:NcalTimeEntity .
+
+ ncal:organizer
+ a rdf:Property ;
+ rdfs:comment "The property defines the organizer for a calendar component. Inspired by RFC 2445 sec. 4.8.4.3. Originally this property accepted many parameters. The Organizer class has been introduced to express them all. Note that NCAL is aligned with NCO. The actual value (of the CAL-ADDRESS type) is expressed as an instance of nco:Contact. Remember that the CN parameter has been removed from NCAL. Instead that value should be expressed using nco:fullname property of the above mentioned nco:Contact instance." ;
+ rdfs:domain ncal:UnionOfEventFreebusyJournalTodo ;
+ rdfs:label "organizer" ;
+ rdfs:range ncal:Organizer .
+
+ ncal:calscale
+ a rdf:Property ;
+ rdfs:comment "This property defines the calendar scale used for the calendar information specified in the iCalendar object. Defined in RFC 2445 sec. 4.7.1" ;
+ rdfs:domain ncal:Calendar ;
+ rdfs:label "calscale" ;
+ rdfs:range ncal:CalendarScale .
+
+ ncal:periodEnd
+ a rdf:Property ;
+ rdfs:comment "End of a period of time. Inspired by the second part of a structured value of a PERIOD datatype specified in RFC 2445 sec. 4.3.9. Note that a single NcalPeriod instance shouldn't have the periodEnd and periodDuration properties specified simultaneously." ;
+ rdfs:domain ncal:NcalPeriod ;
+ rdfs:label "periodEnd" ;
+ rdfs:range xsd:dateTime .
+
+ ncal:ParticipationStatus
+ a rdfs:Class ;
+ rdfs:comment "Participation Status. This class has been introduced to express the limited vocabulary of values for the ncal:partstat property. See the documentation of ncal:partstat for details." ;
+ rdfs:label "ParticipationStatus" ;
+ rdfs:subClassOf rdfs:Resource .
+
+ ncal:draftStatus
+ a ncal:JournalStatus ;
+ rdfs:label "draftStatus" .
+
+ ncal:attendee
+ a rdf:Property ;
+ rdfs:comment "The property defines an \"Attendee\" within a calendar component. Inspired by RFC 2445 sec. 4.8.4.1. Originally this property accepted many parameters. The Attendee class has been introduced to express them all. Note that NCAL is aligned with NCO. The actual value (of the CAL-ADDRESS type) is expressed as an instance of nco:Contact. Remember that the CN parameter has been removed from NCAL. Instead that value should be expressed using nco:fullname property of the above mentioned nco:Contact instance. The RFC stated that whenever this property is attached to a Valarm instance, the Attendee cannot have any parameters apart from involvedContact." ;
+ rdfs:domain ncal:UnionOfAlarmEventFreebusyJournalTodo ;
+ rdfs:label "attendee" ;
+ rdfs:range ncal:Attendee .
+
+ ncal:bydayWeekday
+ a rdf:Property ;
+ rdfs:comment "Connects a BydayRulePart with a weekday." ;
+ rdfs:domain ncal:BydayRulePart ;
+ rdfs:label "bydayWeekday" ;
+ rdfs:range ncal:Weekday .
+
+ ncal:RequestStatus
+ a rdfs:Class ;
+ rdfs:comment "Request Status. A class that was introduced to provide a structure for the value of ncal:requestStatus property. See documentation for ncal:requestStatus for details." ;
+ rdfs:label "RequestStatus" ;
+ rdfs:subClassOf rdfs:Resource .
+
+ ncal:rrule
+ a rdf:Property ;
+ rdfs:comment "This property defines a rule or repeating pattern for recurring events, to-dos, or time zone definitions. Inspired by RFC 2445 sec. 4.8.5.4" ;
+ rdfs:domain ncal:UnionOfTimezoneObservanceEventJournalTimezoneTodo ;
+ rdfs:label "rrule" ;
+ rdfs:range ncal:RecurrenceRule .
+
+ ncal:tzoffsetto
+ a rdf:Property ;
+ rdfs:comment "This property specifies the offset which is in use in this time zone observance. Inspired by RFC 2445 sec. 4.8.3.4. The original domain was underspecified. It said that this property must appear within a Timezone component. In this ontology a TimezoneObservance class has been introduced to clarify this specification. The original range was UTC-OFFSET. There is no equivalent among the XSD datatypes so plain string was chosen." ;
+ rdfs:domain ncal:TimezoneObservance ;
+ rdfs:label "tzoffsetto" ;
+ rdfs:range xsd:string .
+
+ ncal:exdate
+ a rdf:Property ;
+ rdfs:comment "This property defines the list of date/time exceptions for a recurring calendar component. Inspired by RFC 2445 sec. 4.8.5.1" ;
+ rdfs:domain ncal:UnionOfEventJournalTimezoneTodo ;
+ rdfs:label "exdate" ;
+ rdfs:range ncal:NcalDateTime .
+
+ ncal:secondly
+ a ncal:RecurrenceFrequency ;
+ rdfs:label "secondly" .
+
+ ncal:role
+ a rdf:Property ;
+ rdfs:comment "To specify the participation role for the calendar user specified by the property. Inspired by the RFC 2445 sec. 4.2.16. Originally this property had a limited vocabulary for values. The terms of that vocabulary have been expressed as instances of the AttendeeRole class." ;
+ rdfs:domain ncal:Attendee ;
+ rdfs:label "role" ;
+ rdfs:range ncal:AttendeeRole .
+
+ ncal:UnionOfTimezoneObservanceEventJournalTimezoneTodo
+ a rdfs:Class ;
+ rdfs:label "UnionOfTimezoneObservanceEventJournalTimezoneTodo" ;
+ rdfs:subClassOf ncal:UnionParentClass .
+
+ ncal:tentativeParticipationStatus
+ a ncal:ParticipationStatus ;
+ rdfs:label "tentativeParticipationStatus" .
+
+ ncal:optParticipantRole
+ a ncal:AttendeeRole ;
+ rdfs:label "optParticipantRole" .
+
+ ncal:Todo
+ a rdfs:Class ;
+ rdfs:comment "Provide a grouping of calendar properties that describe a to-do." ;
+ rdfs:label "Todo" ;
+ rdfs:subClassOf ncal:UnionOfTimezoneObservanceEventJournalTimezoneTodo , ncal:UnionOfTimezoneObservanceEventFreebusyTimezoneTodo , ncal:UnionOfAlarmEventFreebusyJournalTodo , ncal:UnionOfEventJournalTimezoneTodo , ncal:UnionOfTimezoneObservanceEventFreebusyJournalTimezoneTodo , ncal:UnionOfAlarmEventJournalTodo , ncal:UnionOfAlarmEventFreebusyTodo , ncal:UnionOfEventTodo , ncal:UnionOfEventFreebusyJournalTodo , nie:InformationElement , ncal:UnionOfAlarmEventTodo , ncal:UnionOfEventJournalTodo .
+
+ ncal:busyFreebusyType
+ a ncal:FreebusyType ;
+ rdfs:label "busyFreebusyType" .
+
+ ncal:declinedParticipationStatus
+ a ncal:ParticipationStatus ;
+ rdfs:label "declinedParticipationStatus" .
+
+ ncal:freebusy
+ a rdf:Property ;
+ rdfs:comment "The property defines one or more free or busy time intervals. Inspired by RFC 2445 sec. 4.8.2.6. Note that the periods specified by this property can only be expressed with UTC times. Originally this property could have many comma-separated values. Please use a separate triple for each value." ;
+ rdfs:domain ncal:Freebusy ;
+ rdfs:label "freebusy" ;
+ rdfs:range ncal:FreebusyPeriod .
+
+ ncal:version
+ a rdf:Property ;
+ rdfs:comment "This property specifies the identifier corresponding to the highest version number or the minimum and maximum range of the iCalendar specification that is required in order to interpret the iCalendar object. Defined in RFC 2445 sec. 4.7.4" ;
+ rdfs:domain ncal:Calendar ;
+ rdfs:label "version" ;
+ rdfs:range xsd:string ;
+ rdfs:subPropertyOf nie:generatorOption .
+
+ ncal:range
+ a rdf:Property ;
+ rdfs:comment "To specify the effective range of recurrence instances from the instance specified by the recurrence identifier specified by the property. It is intended to express the RANGE parameter specified in RFC 2445 sec. 4.2.13. The set of possible values for this property is limited. See also the documentation for ncal:recurrenceId for more details." ;
+ rdfs:domain ncal:RecurrenceIdentifier ;
+ rdfs:label "range" ;
+ rdfs:range ncal:RecurrenceIdentifierRange .
+
+ ncal:monthly
+ a ncal:RecurrenceFrequency ;
+ rdfs:label "monthly" .
+
+ ncal:Timezone
+ a rdfs:Class ;
+ rdfs:comment "Provide a grouping of component properties that defines a time zone." ;
+ rdfs:label "Timezone" ;
+ rdfs:subClassOf ncal:UnionOfTimezoneObservanceEventJournalTimezoneTodo , ncal:UnionOfTimezoneObservanceEventFreebusyTimezoneTodo , ncal:UnionOfTimezoneObservanceEventFreebusyJournalTimezoneTodo , ncal:UnionOfEventJournalTimezoneTodo , nie:InformationElement .
+
+ ncal:roomUserType
+ a ncal:CalendarUserType ;
+ rdfs:label "roomUserType" .
+
+ ncal:NcalPeriod
+ a rdfs:Class ;
+ rdfs:comment "A period of time. Inspired by the PERIOD datatype specified in RFC 2445 sec. 4.3.9" ;
+ rdfs:label "NcalPeriod" ;
+ rdfs:subClassOf ncal:NcalTimeEntity .
+
+ ncal:Calendar
+ a rdfs:Class ;
+ rdfs:comment "A calendar. Inspirations for this class can be traced to the VCALENDAR component defined in RFC 2445 sec. 4.4, but it may just as well be used to represent any kind of Calendar." ;
+ rdfs:label "Calendar" ;
+ rdfs:subClassOf nie:InformationElement .
+
+ ncal:todoStatus
+ a rdf:Property ;
+ rdfs:comment "Defines the overall status or confirmation for a todo. Based on the STATUS property defined in RFC 2445 sec. 4.8.1.11." ;
+ rdfs:domain ncal:Todo ;
+ rdfs:label "status" ;
+ rdfs:range ncal:TodoStatus .
+
+ ncal:journalStatus
+ a rdf:Property ;
+ rdfs:comment "Defines the overall status or confirmation for a journal entry. Based on the STATUS property defined in RFC 2445 sec. 4.8.1.11." ;
+ rdfs:domain ncal:Journal ;
+ rdfs:label "status" ;
+ rdfs:range ncal:JournalStatus .
+
+ ncal:publicClassification
+ a ncal:AccessClassification ;
+ rdfs:label "publicClassification" .
+
+ ncal:bysetpos
+ a rdf:Property ;
+ rdfs:comment "The BYSETPOS rule part specifies values which correspond to the nth occurrence within the set of events specified by the rule. Valid values are 1 to 366 or -366 to -1. It MUST only be used in conjunction with another BYxxx rule part. For example \"the last work day of the month\" could be represented as: RRULE: FREQ=MONTHLY; BYDAY=MO, TU, WE, TH, FR; BYSETPOS=-1. Each BYSETPOS value can include a positive (+n) or negative (-n) integer. If present, this indicates the nth occurrence of the specific occurrence within the set of events specified by the rule. Defined in RFC 2445 sec. 4.3.10" ;
+ rdfs:domain ncal:RecurrenceRule ;
+ rdfs:label "bysetpos" ;
+ rdfs:range xsd:integer .
+
+ ncal:due
+ a rdf:Property ;
+ rdfs:comment "This property defines the date and time that a to-do is expected to be completed. Inspired by RFC 2445 sec. 4.8.2.3" ;
+ rdfs:domain ncal:Todo ;
+ rdfs:label "due" ;
+ rdfs:range ncal:NcalDateTime .
+
+ ncal:contactAltRep
+ a rdf:Property ;
+ rdfs:comment """Alternate representation of the contact property. Introduced to cover
+the ALTREP parameter of the CONTACT property. See
+documentation of ncal:contact for details.""" ;
+ rdfs:domain ncal:UnionOfEventFreebusyJournalTodo ;
+ rdfs:label "contactAltRep" ;
+ rdfs:range rdfs:Resource .
+
+ ncal:audioAction
+ a ncal:AlarmAction ;
+ rdfs:label "audioAction" .
+
+ ncal:UnionOfAlarmEventTodo
+ a rdfs:Class ;
+ rdfs:label "UnionOfAlarmEventTodo" ;
+ rdfs:subClassOf ncal:UnionParentClass .
+
+ ncal:rdate
+ a rdf:Property ;
+ rdfs:comment "This property defines the list of date/times for a recurrence set. Inspired by RFC 2445 sec. 4.8.5.3. Note that the RFC allows DATE, DATE-TIME and PERIOD values for this property. That's why the range has been set to NcalTimeEntity." ;
+ rdfs:domain ncal:UnionOfTimezoneObservanceEventJournalTimezoneTodo ;
+ rdfs:label "rdate" ;
+ rdfs:range ncal:NcalTimeEntity .
+
+ ncal:recurrenceIdDateTime
+ a rdf:Property ;
+ rdfs:comment "The date and time of a recurrence identifier. Provided to express the actual value of the ncal:recurrenceId property. See documentation for ncal:recurrenceId for details." ;
+ rdfs:domain ncal:RecurrenceIdentifier ;
+ rdfs:label "recurrenceIdDateTime" ;
+ rdfs:range ncal:NcalDateTime .
+
+ ncal:reqParticipantRole
+ a ncal:AttendeeRole ;
+ rdfs:label "reqParticipantRole" .
+
+ ncal:returnStatus
+ a rdf:Property ;
+ rdfs:comment """Short return status. Inspired by the first element of the structured value of the REQUEST-STATUS property described in RFC 2445 sec. 4.8.8.2.
+
+The short return status is a PERIOD character (US-ASCII decimal 46) separated 3-tuple of integers. For example, \"3.1.1\". The successive levels of integers provide for a successive level of status code granularity.
+
+The following are initial classes for the return status code. Individual iCalendar object methods will define specific return status codes for these classes. In addition, other classes for the return status code may be defined using the registration process defined later in this memo.
+
+1.xx - Preliminary success. This class of status code indicates that the request has been initially processed but that completion is pending.
+
+2.xx - Successful. This class of status code indicates that the request was completed successfully. However, the exact status code can indicate that a fallback has been taken.
+
+3.xx - Client Error. This class of status code indicates that the request was not successful. The error is the result of either a syntax or a semantic error in the client formatted request. Request should not be retried until the condition in the request is corrected.
+
+4.xx - Scheduling Error. This class of status code indicates that the request was not successful. Some sort of error occurred within the calendaring and scheduling service, not directly related to the request itself.""" ;
+ rdfs:domain ncal:RequestStatus ;
+ rdfs:label "returnStatus" ;
+ rdfs:range xsd:string .
+
+ ncal:daylight
+ a rdf:Property ;
+ rdfs:comment "Links a timezone with its daylight observance. This property has no direct equivalent in RFC 2445. It has been inspired by the structure of the Vtimezone component defined in sec. 4.6.5" ;
+ rdfs:domain ncal:Timezone ;
+ rdfs:label "daylight" ;
+ rdfs:range ncal:TimezoneObservance .
+
+ ncal:locationAltRep
+ a rdf:Property ;
+ rdfs:comment """Alternate representation of the event or todo location.
+Introduced to cover the ALTREP parameter of the LOCATION
+property. See documentation of ncal:location for details.""" ;
+ rdfs:domain ncal:UnionOfEventTodo ;
+ rdfs:label "locationAltRep" ;
+ rdfs:range rdfs:Resource .
+
+ ncal:exrule
+ a rdf:Property ;
+ rdfs:comment "This property defines a rule or repeating pattern for an exception to a recurrence set. Inspired by RFC 2445 sec. 4.8.5.2." ;
+ rdfs:domain ncal:UnionOfEventJournalTodo ;
+ rdfs:label "exrule" ;
+ rdfs:range ncal:RecurrenceRule .
+
+ ncal:hasAlarm
+ a rdf:Property ;
+ rdfs:comment "Links an event or a todo with a DataObject that can be interpreted as an alarm. This property has no direct equivalent in the RFC 2445. It has been provided to express this relation." ;
+ rdfs:domain ncal:UnionOfEventTodo ;
+ rdfs:label "hasAlarm" ;
+ rdfs:range ncal:CalendarDataObject ;
+ rdfs:subPropertyOf nie:hasPart .
+
+ ncal:TriggerRelation
+ a rdfs:Class ;
+ rdfs:comment "The relation between the trigger and its parent calendar component. This class has been introduced to express the limited vocabulary for the ncal:related property. See the documentation for ncal:related for more details." ;
+ rdfs:label "TriggerRelation" ;
+ rdfs:subClassOf rdfs:Resource .
+
+ ncal:confirmedStatus
+ a ncal:EventStatus ;
+ rdfs:label "confirmedStatus" .
+
+ ncal:confidentialClassification
+ a ncal:AccessClassification ;
+ rdfs:label "confidentialClassification" .
+
+ ncal:byhour
+ a rdf:Property ;
+ rdfs:comment "Hour of recurrence. Defined in RFC 2445 sec. 4.3.10" ;
+ rdfs:domain ncal:RecurrenceRule ;
+ rdfs:label "byhour" ;
+ rdfs:range xsd:integer .
+
+ ncal:byminute
+ a rdf:Property ;
+ rdfs:comment "Minute of recurrence. Defined in RFC 2445 sec. 4.3.10" ;
+ rdfs:domain ncal:RecurrenceRule ;
+ rdfs:label "byminute" ;
+ rdfs:range xsd:integer .
+
+ ncal:resourcesAltRep
+ a rdf:Property ;
+ rdfs:comment "Alternate representation of the resources needed for an event or todo. Introduced to cover the ALTREP parameter of the resources property. See documentation for ncal:resources for details." ;
+ rdfs:domain ncal:UnionOfEventTodo ;
+ rdfs:label "resourcesAltRep" ;
+ rdfs:range rdfs:Resource .
+
+ ncal:AttendeeRole
+ a rdfs:Class ;
+ rdfs:comment "A role the attendee is going to play during an event. This class has been introduced to express the limited vocabulary for the values of ncal:role property. Please refer to the documentation of ncal:role for details." ;
+ rdfs:label "AttendeeRole" ;
+ rdfs:subClassOf rdfs:Resource .
+
+ ncal:groupUserType
+ a ncal:CalendarUserType ;
+ rdfs:label "groupUserType" .
+
+ ncal:UnionOfTimezoneObservanceEventFreebusyTimezoneTodo
+ a rdfs:Class ;
+ rdfs:label "UnionOfTimezoneObservanceEventFreebusyTimezoneTodo" ;
+ rdfs:subClassOf ncal:UnionParentClass .
+
+ ncal:encoding
+ a rdf:Property ;
+ rdfs:comment "To specify an alternate inline encoding for the property value. Inspired by RFC 2445 sec. 4.2.7. Originally this property had a limited vocabulary ('8BIT' and 'BASE64'). The terms of this vocabulary have been expressed as instances of the AttachmentEncoding class" ;
+ rdfs:domain ncal:Attachment ;
+ rdfs:label "encoding" ;
+ rdfs:range ncal:AttachmentEncoding .
+
+ ncal:requestStatusData
+ a rdf:Property ;
+ rdfs:comment "Additional data associated with a request status. Inspired by the third part of the structured value for the REQUEST-STATUS property defined in RFC 2445 sec. 4.8.8.2 (\"Textual exception data. For example, the offending property name and value or complete property line\")" ;
+ rdfs:domain ncal:RequestStatus ;
+ rdfs:label "requestStatusData" ;
+ rdfs:range xsd:string .
+
+ ncal:finalStatus
+ a ncal:JournalStatus ;
+ rdfs:label "finalStatus" .
+
+ ncal:CalendarUserType
+ a rdfs:Class ;
+ rdfs:comment "A calendar user type. This class has been introduced to express the limited vocabulary for the ncal:cutype property. See documentation of ncal:cutype for details." ;
+ rdfs:label "CalendarUserType" ;
+ rdfs:subClassOf rdfs:Resource .
+
+ ncal:UnionOfEventFreebusyJournalTodo
+ a rdfs:Class ;
+ rdfs:label "UnionOfEventFreebusyJournalTodo" ;
+ rdfs:subClassOf ncal:UnionParentClass .
+
+ ncal:tzid
+ a rdf:Property ;
+ rdfs:comment "This property specifies the text value that uniquely identifies the \"VTIMEZONE\" calendar component. Inspired by RFC 2445 sec 4.8.3.1" ;
+ rdfs:domain ncal:Timezone ;
+ rdfs:label "tzid" ;
+ rdfs:range xsd:string .
+
+ ncal:thisAndFutureRange
+ a ncal:RecurrenceIdentifierRange ;
+ rdfs:label "thisAndFutureRange" .
+
+ ncal:categories
+ a rdf:Property ;
+ rdfs:comment "Categories for a calendar component. Inspired by RFC 2445 sec 4.8.1.2 with the following reservations: The LANGUAGE parameter has been discarded. Please use xml:lang literals to express multiple languages. This property can specify multiple comma-separated categories. The order of categories doesn't matter. Please use a separate triple for each category." ;
+ rdfs:domain ncal:UnionOfEventJournalTodo ;
+ rdfs:label "categories" ;
+ rdfs:range xsd:string .
+
+ ncal:transparentTransparency
+ a ncal:TimeTransparency ;
+ rdfs:label "transparentTransparency" .
+
+ ncal:cancelledJournalStatus
+ a ncal:JournalStatus ;
+ rdfs:label "cancelledJournalStatus" .
+
+ ncal:freq
+ a rdf:Property ;
+ rdfs:comment "Frequency of a recurrence rule. Defined in RFC 2445 sec. 4.3.10" ;
+ rdfs:domain ncal:RecurrenceRule ;
+ rdfs:label "freq" ;
+ rdfs:range ncal:RecurrenceFrequency .
+}
+
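+# ---------------------------------------------------------------------------
+# Illustrative sketch only -- not part of the upstream NEPOMUK NCAL source.
+# Assuming that ncal:Event participates in the union domains used above, it
+# shows roughly how the terms defined in this graph could combine to describe
+# a weekly recurring event. The ex: prefix and the resource names (ex:standup,
+# ex:start, ex:rule, ex:byMonday) are hypothetical placeholders.
+#
+# ex:standup
+#     a ncal:Event ;
+#     ncal:summary "Weekly stand-up" ;
+#     ncal:class ncal:publicClassification ;
+#     ncal:dtstart ex:start ;
+#     ncal:rrule ex:rule .
+#
+# ex:start
+#     a ncal:NcalDateTime ;
+#     ncal:dateTime "2011-10-24T09:00:00Z"^^xsd:dateTime .
+#
+# ex:rule
+#     a ncal:RecurrenceRule ;
+#     ncal:freq ncal:weekly ;
+#     ncal:interval 1 ;
+#     ncal:byday ex:byMonday .
+#
+# ex:byMonday
+#     a ncal:BydayRulePart ;
+#     ncal:bydayWeekday ncal:monday .
+# ---------------------------------------------------------------------------
+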
+<http://www.semanticdesktop.org/ontologies/2007/04/02/ncal_metadata#> {<http://www.semanticdesktop.org/ontologies/2007/04/02/ncal_metadata#>
+ a nrl:GraphMetadata ;
+ nrl:coreGraphMetadataFor
+ ncal: .
+
+ ncal:
+ a nrl:Ontology ;
+ nao:creator <http://www.dfki.uni-kl.de/~mylka> ;
+ nao:hasDefaultNamespace
+ "http://www.semanticdesktop.org/ontologies/2007/04/02/ncal#" ;
+ nao:hasDefaultNamespaceAbbreviation
+ "ncal" ;
+ nao:lastModified "2008-10-05T19:45:48.109Z" ;
+ nao:status "Unstable" ;
+ nao:updatable "0 " ;
+ nao:version "Revision-8" .
+}
+
=== added file 'extra/ontology/nco.trig'
--- extra/ontology/nco.trig 1970-01-01 00:00:00 +0000
+++ extra/ontology/nco.trig 2011-10-19 08:09:50 +0000
@@ -0,0 +1,682 @@
+#
+# Copyright (c) 2007 NEPOMUK Consortium
+# Copyright (c) 2009 Sebastian Trueg <trueg@xxxxxxx>
+# All rights reserved, licensed under either CC-BY or BSD.
+#
+# You are free:
+# * to Share - to copy, distribute and transmit the work
+# * to Remix - to adapt the work
+# Under the following conditions:
+# * Attribution - You must attribute the work in the manner specified by the author
+# or licensor (but not in any way that suggests that they endorse you or your use
+# of the work).
+#
+# Redistribution and use in source and binary forms, with or without modification,
+# are permitted provided that the following conditions are met:
+# * Redistributions of source code must retain the above copyright notice, this
+# list of conditions and the following disclaimer.
+# * Redistributions in binary form must reproduce the above copyright notice, this
+# list of conditions and the following disclaimer in the documentation and/or
+# other materials provided with the distribution.
+# * Neither the names of the authors nor the names of contributors may
+# be used to endorse or promote products derived from this ontology without
+# specific prior written permission.
+#
+# THIS ONTOLOGY IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
+# IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
+# OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
+# IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
+# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
+# NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
+# THIS ONTOLOGY, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+#
+
+@prefix exif: <http://www.kanzaki.com/ns/exif#> .
+@prefix nid3: <http://www.semanticdesktop.org/ontologies/2007/05/10/nid3#> .
+@prefix nrl: <http://www.semanticdesktop.org/ontologies/2007/08/15/nrl#> .
+@prefix nfo: <http://www.semanticdesktop.org/ontologies/2007/03/22/nfo#> .
+@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
+@prefix tmo: <http://www.semanticdesktop.org/ontologies/2008/05/20/tmo#> .
+@prefix protege: <http://protege.stanford.edu/system#> .
+@prefix nmo: <http://www.semanticdesktop.org/ontologies/2007/03/22/nmo#> .
+@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
+@prefix nexif: <http://www.semanticdesktop.org/ontologies/2007/05/10/nexif#> .
+@prefix ncal: <http://www.semanticdesktop.org/ontologies/2007/04/02/ncal#> .
+@prefix pimo: <http://www.semanticdesktop.org/ontologies/2007/11/01/pimo#> .
+@prefix dcterms: <http://purl.org/dc/terms/> .
+@prefix nao: <http://www.semanticdesktop.org/ontologies/2007/08/15/nao#> .
+@prefix geo: <http://www.w3.org/2003/01/geo/wgs84_pos#> .
+@prefix dc: <http://purl.org/dc/elements/1.1/> .
+@prefix nie: <http://www.semanticdesktop.org/ontologies/2007/01/19/nie#> .
+@prefix nco: <http://www.semanticdesktop.org/ontologies/2007/03/22/nco#> .
+@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
+
+nco: {nco:region
+ a rdf:Property ;
+ rdfs:comment "Region. Inspired by the fifth part of the value of the 'ADR' property as defined in RFC 2426, sec. 3.2.1" ;
+ rdfs:domain nco:PostalAddress ;
+ rdfs:label "region" ;
+ rdfs:range xsd:string ;
+ nrl:maxCardinality 1 .
+
+ nco:key
+ a rdf:Property ;
+ rdfs:comment "An encryption key attached to a contact. Inspired by the KEY property defined in RFC 2426 sec. 3.7.2" ;
+ rdfs:domain nco:Contact ;
+ rdfs:label "key" ;
+ rdfs:range nie:DataObject ;
+ rdfs:subPropertyOf nie:hasPart .
+
+ nco:nameHonorificSuffix
+ a rdf:Property ;
+ rdfs:comment "A suffix for the name of the object represented by this Contact. See documentation for the 'nameFamily' property for details." ;
+ rdfs:domain nco:PersonContact ;
+ rdfs:label "nameHonorificSuffix" ;
+ rdfs:range xsd:string .
+
+ nco:url
+ a rdf:Property ;
+ rdfs:comment "A uniform resource locator associated with the given role of a Contact. Inspired by the 'URL' property defined in RFC 2426 Sec. 3.6.8." ;
+ rdfs:domain nco:Role ;
+ rdfs:label "url" ;
+ rdfs:range rdfs:Resource .
+
+ nco:VoicePhoneNumber
+ a rdfs:Class ;
+ rdfs:comment "A telephone number with voice communication capabilities. Class inspired by the TYPE=voice parameter of the TEL property defined in RFC 2426 sec. 3.3.1" ;
+ rdfs:label "VoicePhoneNumber" ;
+ rdfs:subClassOf nco:PhoneNumber .
+
+ nco:nameFamily
+ a rdf:Property ;
+ rdfs:comment "The family name of an Object represented by this Contact. These applies to people that have more than one given name. The 'first' one is considered 'the' given name (see nameGiven) property. All additional ones are considered 'additional' names. The name inherited from parents is the 'family name'. e.g. For Dr. John Phil Paul Stevenson Jr. M.D. A.C.P. we have contact with: honorificPrefix: 'Dr.', nameGiven: 'John', nameAdditional: 'Phil', nameAdditional: 'Paul', nameFamily: 'Stevenson', honorificSuffix: 'Jr.', honorificSuffix: 'M.D.', honorificSuffix: 'A.C.P.'. These properties form an equivalent of the compound 'N' property as defined in RFC 2426 Sec. 3.1.2" ;
+ rdfs:domain nco:PersonContact ;
+ rdfs:label "nameFamily" ;
+ rdfs:range xsd:string .
+
+ nco:VideoTelephoneNumber
+ a rdfs:Class ;
+ rdfs:comment "A Video telephone number. A class inspired by the TYPE=video parameter of the TEL property defined in RFC 2426 sec. 3.3.1" ;
+ rdfs:label "VideoTelephoneNumber" ;
+ rdfs:subClassOf nco:VoicePhoneNumber .
+
+ nco:contactUID
+ a rdf:Property ;
+ rdfs:comment "A value that represents a globally unique identifier corresponding to the individual or resource associated with the Contact. An equivalent of the 'UID' property defined in RFC 2426 Sec. 3.6.7" ;
+ rdfs:domain nco:Contact ;
+ rdfs:label "contactUID" ;
+ rdfs:range xsd:string ;
+ rdfs:subPropertyOf nie:identifier .
+
+ nco:publisher
+ a rdf:Property ;
+ rdfs:comment "An entity responsible for making the InformationElement available." ;
+ rdfs:domain nie:InformationElement ;
+ rdfs:label "publisher" ;
+ rdfs:range nco:Contact ;
+ rdfs:subPropertyOf dc:publisher .
+
+ nco:country
+ a rdf:Property ;
+ rdfs:comment "A part of an address specyfing the country. Inspired by the seventh part of the value of the 'ADR' property as defined in RFC 2426, sec. 3.2.1" ;
+ rdfs:domain nco:PostalAddress ;
+ rdfs:label "country" ;
+ rdfs:range xsd:string ;
+ nrl:maxCardinality 1 .
+
+ nco:nameHonorificPrefix
+ a rdf:Property ;
+ rdfs:comment "A prefix for the name of the object represented by this Contact. See documentation for the 'nameFamily' property for details." ;
+ rdfs:domain nco:PersonContact ;
+ rdfs:label "nameHonorificPrefix" ;
+ rdfs:range xsd:string .
+
+ nco:extendedAddress
+ a rdf:Property ;
+ rdfs:comment "An extended part of an address. This field might be used to express parts of an address that aren't include in the name of the Contact but also aren't part of the actual location. Usually the streed address and following fields are enough for a postal letter to arrive. Examples may include ('University of California Campus building 45', 'Sears Tower 34th floor' etc.) Inspired by the second part of the value of the 'ADR' property as defined in RFC 2426, sec. 3.2.1" ;
+ rdfs:domain nco:PostalAddress ;
+ rdfs:label "extendedAddress" ;
+ rdfs:range xsd:string .
+
+ nco:IMAccount
+ a rdfs:Class ;
+ rdfs:comment "An account in an Instant Messaging system." ;
+ rdfs:label "IMAccount" ;
+ rdfs:subClassOf nco:ContactMedium .
+
+ nco:hasIMAccount
+ a rdf:Property ;
+ rdfs:comment "Indicates that an Instant Messaging account owned by an entity represented by this contact." ;
+ rdfs:domain nco:Role ;
+ rdfs:label "hasIMAccount" ;
+ rdfs:range nco:IMAccount ;
+ rdfs:subPropertyOf nco:hasContactMedium .
+
+ nco:IsdnNumber
+ a rdfs:Class ;
+ rdfs:comment "An ISDN phone number. Inspired by the (TYPE=isdn) parameter of the TEL property as defined in RFC 2426 sec 3.3.1." ;
+ rdfs:label "IsdnNumber" ;
+ rdfs:subClassOf nco:VoicePhoneNumber .
+
+ nco:creator
+ a rdf:Property ;
+ rdfs:comment "Creator of a data object, an entity primarily responsible for the creation of the content of the data object." ;
+ rdfs:domain nie:InformationElement ;
+ rdfs:label "creator" ;
+ rdfs:range nco:Contact ;
+ rdfs:subPropertyOf dc:creator , nco:contributor , nao:creator .
+
+ nco:hasLocation
+ a rdf:Property ;
+ rdfs:comment "Geographical location of the contact. Inspired by the 'GEO' property specified in RFC 2426 Sec. 3.4.2" ;
+ rdfs:domain nco:Contact ;
+ rdfs:label "hasLocation" ;
+ rdfs:range geo:Point .
+
+ nco:phoneNumber
+ a rdf:Property ;
+ rdfs:domain nco:PhoneNumber ;
+ rdfs:label "phoneNumber" ;
+ rdfs:range xsd:string .
+
+ nco:nickname
+ a rdf:Property ;
+ rdfs:comment "A nickname of the Object represented by this Contact. This is an equivalen of the 'NICKNAME' property as defined in RFC 2426 Sec. 3.1.3." ;
+ rdfs:domain nco:Contact ;
+ rdfs:label "nickname" ;
+ rdfs:range xsd:string .
+
+ nco:imStatus
+ a rdf:Property ;
+ rdfs:comment "Current status of the given IM account. Values for this property may include 'Online', 'Offline', 'Do not disturb' etc. The exact choice of them is unspecified." ;
+ rdfs:domain nco:IMAccount ;
+ rdfs:label "imStatus" ;
+ rdfs:range xsd:string ;
+ nrl:maxCardinality 1 .
+
+ nco:containsContact
+ a rdf:Property ;
+ rdfs:comment """A property used to group contacts into contact groups. This
+ property was NOT defined in the VCARD standard. See documentation for the
+ 'ContactList' class for details""" ;
+ rdfs:domain nco:ContactList ;
+ rdfs:label "containsContact" ;
+ rdfs:range nco:ContactListDataObject ;
+ rdfs:subPropertyOf nie:hasPart .
+
+ nco:department
+ a rdf:Property ;
+ rdfs:comment "Department. The organizational unit within the organization." ;
+ rdfs:domain nco:Affiliation ;
+ rdfs:label "department" ;
+ rdfs:range xsd:string .
+
+ nco:imID
+ a rdf:Property ;
+ rdfs:comment "Identifier of the IM account. Examples of such identifier might include ICQ UINs, Jabber IDs, Skype names etc." ;
+ rdfs:domain nco:IMAccount ;
+ rdfs:label "imID" ;
+ rdfs:range xsd:string ;
+ rdfs:subPropertyOf nao:identifier .
+
+ nco:addressLocation
+ a rdf:Property ;
+ rdfs:comment "The geographical location of a postal address." ;
+ rdfs:domain nco:PostalAddress ;
+ rdfs:label "addressLocation" ;
+ rdfs:range geo:Point .
+
+ nco:note
+ a rdf:Property ;
+ rdfs:comment "A note about the object represented by this Contact. An equivalent for the 'NOTE' property defined in RFC 2426 Sec. 3.6.2" ;
+ rdfs:domain nco:Contact ;
+ rdfs:label "note" ;
+ rdfs:range xsd:string ;
+ rdfs:subPropertyOf nie:description .
+
+ nco:representative
+ a rdf:Property ;
+ rdfs:comment "An object that represent an object represented by this Contact. Usually this property is used to link a Contact to an organization, to a contact to the representative of this organization the user directly interacts with. An equivalent for the 'AGENT' property defined in RFC 2426 Sec. 3.5.4" ;
+ rdfs:domain nco:Contact ;
+ rdfs:label "representative" ;
+ rdfs:range nco:Contact .
+
+ nco:nameAdditional
+ a rdf:Property ;
+ rdfs:comment "Additional given name of an object represented by this contact. See documentation for 'nameFamily' property for details." ;
+ rdfs:domain nco:PersonContact ;
+ rdfs:label "nameAdditional" ;
+ rdfs:range xsd:string .
+
+ nco:nameGiven
+ a rdf:Property ;
+ rdfs:comment "The given name for the object represented by this Contact. See documentation for 'nameFamily' property for details." ;
+ rdfs:domain nco:PersonContact ;
+ rdfs:label "nameGiven" ;
+ rdfs:range xsd:string .
+
+ nco:PcsNumber
+ a rdfs:Class ;
+ rdfs:comment "Personal Communication Services Number. A class inspired by the TYPE=pcs parameter of the TEL property defined in RFC 2426 sec. 3.3.1" ;
+ rdfs:label "PcsNumber" ;
+ rdfs:subClassOf nco:VoicePhoneNumber .
+
+ nco:ContactList
+ a rdfs:Class ;
+ rdfs:comment "A contact list, this class represents an addressbook or a contact list of an IM application. Contacts inside a contact list can belong to contact groups." ;
+ rdfs:label "ContactList" ;
+ rdfs:subClassOf nie:InformationElement .
+
+ nco:fullname
+ a rdf:Property ;
+ rdfs:comment "To specify the formatted text corresponding to the name of the object the Contact represents. An equivalent of the FN property as defined in RFC 2426 Sec. 3.1.1." ;
+ rdfs:domain nco:Contact ;
+ rdfs:label "fullname" ;
+ rdfs:range xsd:string ;
+ nrl:maxCardinality "1" ;
+ rdfs:subPropertyOf nie:title .
+
+ nco:ContactGroup
+ a rdfs:Class ;
+ rdfs:comment "A group of Contacts. Could be used to express a group in an addressbook or on a contact list of an IM application. One contact can belong to many groups." ;
+ rdfs:label "ContactGroup" ;
+ rdfs:subClassOf nie:InformationElement .
+
+ nco:BbsNumber
+ a rdfs:Class ;
+ rdfs:comment "A Bulletin Board System (BBS) phone number. Inspired by the (TYPE=bbsl) parameter of the TEL property as defined in RFC 2426 sec 3.3.1." ;
+ rdfs:label "BbsNumber" ;
+ rdfs:subClassOf nco:ModemNumber .
+
+ nco:Affiliation
+ a rdfs:Class ;
+ rdfs:comment "Aggregates three properties defined in RFC2426. Originally all three were attached directly to a person. One person could have only one title and one role within one organization. This class is intended to lift this limitation." ;
+ rdfs:label "Affiliation" ;
+ rdfs:subClassOf nco:Role .
+
+ nco:streetAddress
+ a rdf:Property ;
+ rdfs:comment "The streed address. Inspired by the third part of the value of the 'ADR' property as defined in RFC 2426, sec. 3.2.1" ;
+ rdfs:domain nco:PostalAddress ;
+ rdfs:label "streetAddress" ;
+ rdfs:range xsd:string .
+
+ nco:OrganizationContact
+ a rdfs:Class ;
+ rdfs:comment "A Contact that denotes on Organization." ;
+ rdfs:label "OrganizationContact" ;
+ rdfs:subClassOf nco:Contact .
+
+ nco:PhoneNumber
+ a rdfs:Class ;
+ rdfs:comment "A telephone number." ;
+ rdfs:label "PhoneNumber" ;
+ rdfs:subClassOf nco:ContactMedium .
+
+ nco:Contact
+ a rdfs:Class ;
+ rdfs:comment "A Contact. A piece of data that can provide means to identify or communicate with an entity." ;
+ rdfs:label "Contact" ;
+ rdfs:subClassOf nco:Role , nie:InformationElement , nao:Party .
+
+ nco:ModemNumber
+ a rdfs:Class ;
+ rdfs:comment "A modem phone number. Inspired by the (TYPE=modem) parameter of the TEL property as defined in RFC 2426 sec 3.3.1." ;
+ rdfs:label "ModemNumber" ;
+ rdfs:subClassOf nco:PhoneNumber .
+
+ nco:Role
+ a rdfs:Class ;
+ rdfs:comment "A role played by a contact. Contacts that denote people, can have many roles (e.g. see the hasAffiliation property and Affiliation class). Contacts that denote Organizations or other Agents usually have one role. Each role can introduce additional contact media." ;
+ rdfs:label "Role" ;
+ rdfs:subClassOf rdfs:Resource .
+
+ nco:PagerNumber
+ a rdfs:Class ;
+ rdfs:comment "A pager phone number. Inspired by the (TYPE=pager) parameter of the TEL property as defined in RFC 2426 sec 3.3.1." ;
+ rdfs:label "PagerNumber" ;
+ rdfs:subClassOf nco:MessagingNumber .
+
+ nco:hasPhoneNumber
+ a rdf:Property ;
+ rdfs:comment "A number for telephony communication with the object represented by this Contact. An equivalent of the 'TEL' property defined in RFC 2426 Sec. 3.3.1" ;
+ rdfs:domain nco:Role ;
+ rdfs:label "hasPhoneNumber" ;
+ rdfs:range nco:PhoneNumber ;
+ rdfs:subPropertyOf nco:hasContactMedium .
+
+ nco:photo
+ a rdf:Property ;
+ rdfs:comment "Photograph attached to a Contact. The DataObject refered to by this property is usually interpreted as an nfo:Image. Inspired by the PHOTO property defined in RFC 2426 sec. 3.1.4" ;
+ rdfs:domain nco:Contact ;
+ rdfs:label "photo" ;
+ rdfs:range nie:DataObject ;
+ rdfs:subPropertyOf nie:hasPart .
+
+ nco:contributor
+ a rdf:Property ;
+ rdfs:comment "An entity responsible for making contributions to the content of the InformationElement." ;
+ rdfs:domain nie:InformationElement ;
+ rdfs:label "contributor" ;
+ rdfs:range nco:Contact ;
+ rdfs:subPropertyOf dc:contributor , nao:contributor .
+
+ nco:logo
+ a rdf:Property ;
+ rdfs:comment "Logo of a company. Inspired by the LOGO property defined in RFC 2426 sec. 3.5.3" ;
+ rdfs:domain nco:OrganizationContact ;
+ rdfs:label "logo" ;
+ rdfs:range nie:DataObject ;
+ rdfs:subPropertyOf nie:hasPart .
+
+ nco:websiteUrl
+ a rdf:Property ;
+ rdfs:comment "A url of a website." ;
+ rdfs:domain nco:Role ;
+ rdfs:label "websiteUrl" ;
+ rdfs:range rdfs:Resource ;
+ rdfs:subPropertyOf nco:url .
+
+ nco:ContactMedium
+ a rdfs:Class ;
+ rdfs:comment "A superclass for all contact media - ways to contact an entity represented by a Contact instance. Some of the subclasses of this class (the various kinds of telephone numbers and postal addresses) have been inspired by the values of the TYPE parameter of ADR and TEL properties defined in RFC 2426 sec. 3.2.1. and 3.3.1 respectively. Each value is represented by an appropriate subclass with two major exceptions TYPE=home and TYPE=work. They are to be expressed by the roles these contact media are attached to i.e. contact media with TYPE=home parameter are to be attached to the default role (nco:Contact or nco:PersonContact), whereas media with TYPE=work parameter should be attached to nco:Affiliation or nco:OrganizationContact." ;
+ rdfs:label "ContactMedium" ;
+ rdfs:subClassOf rdfs:Resource .
+
+ nco:Gender
+ a rdfs:Class ;
+ rdfs:comment "Gender. Instances of this class may include male and female." ;
+ rdfs:label "Gender" ;
+ rdfs:subClassOf rdfs:Resource .
+
+ nco:male
+ a nco:Gender ;
+ rdfs:comment "A Male" ;
+ rdfs:label "male" .
+
+ nco:birthDate
+ a rdf:Property ;
+ rdfs:comment "Birth date of the object represented by this Contact. An equivalent of the 'BDAY' property as defined in RFC 2426 Sec. 3.1.5." ;
+ rdfs:domain nco:Contact ;
+ rdfs:label "birthDate" ;
+ rdfs:range xsd:date ;
+ rdfs:subPropertyOf dc:date ;
+ nrl:maxCardinality 1 .
+
+ nco:hasEmailAddress
+ a rdf:Property ;
+ rdfs:comment "An address for electronic mail communication with the object specified by this contact. An equivalent of the 'EMAIL' property as defined in RFC 2426 Sec. 3.3.1." ;
+ rdfs:domain nco:Role ;
+ rdfs:label "hasEmailAddress" ;
+ rdfs:range nco:EmailAddress ;
+ rdfs:subPropertyOf nco:hasContactMedium .
+
+ nco:postalcode
+ a rdf:Property ;
+ rdfs:comment "Postal Code. Inspired by the sixth part of the value of the 'ADR' property as defined in RFC 2426, sec. 3.2.1" ;
+ rdfs:domain nco:PostalAddress ;
+ rdfs:label "postalcode" ;
+ rdfs:range xsd:string ;
+ nrl:maxCardinality 1 .
+
+ nco:MessagingNumber
+ a rdfs:Class ;
+ rdfs:comment "A number that can accept textual messages." ;
+ rdfs:label "MessagingNumber" ;
+ rdfs:subClassOf nco:PhoneNumber .
+
+ nco:org
+ a rdf:Property ;
+ rdfs:comment "Name of an organization or a unit within an organization the object represented by a Contact is associated with. An equivalent of the 'ORG' property defined in RFC 2426 Sec. 3.5.5" ;
+ rdfs:domain nco:Affiliation ;
+ rdfs:label "org" ;
+ rdfs:range nco:OrganizationContact .
+
+ nco:PersonContact
+ a rdfs:Class ;
+ rdfs:comment "A Contact that denotes a Person. A person can have multiple Affiliations." ;
+ rdfs:label "PersonContact" ;
+ rdfs:subClassOf nco:Contact .
+
+ nco:ParcelDeliveryAddress
+ a rdfs:Class ;
+ rdfs:comment "Parcel Delivery Addresse. Class inspired by TYPE=parcel parameter of the ADR property defined in RFC 2426 sec. 3.2.1" ;
+ rdfs:label "ParcelDeliveryAddress" ;
+ rdfs:subClassOf nco:PostalAddress .
+
+ nco:title
+ a rdf:Property ;
+ rdfs:comment "The official title the object represented by this contact in an organization. E.g. 'CEO', 'Director, Research and Development', 'Junior Software Developer/Analyst' etc. An equivalent of the 'TITLE' property defined in RFC 2426 Sec. 3.5.1" ;
+ rdfs:domain nco:Affiliation ;
+ rdfs:label "title" ;
+ rdfs:range xsd:string .
+
+ nco:AudioIMAccount
+ a rdfs:Class ;
+ rdfs:comment "An account in an InstantMessaging system capable of real-time audio conversations." ;
+ rdfs:label "AudioIMAccount" ;
+ rdfs:subClassOf nco:IMAccount .
+
+ nco:voiceMail
+ a rdf:Property ;
+ rdfs:comment "Indicates if the given number accepts voice mail. (e.g. there is an answering machine). Inspired by TYPE=msg parameter of the TEL property defined in RFC 2426 sec. 3.3.1" ;
+ rdfs:domain nco:VoicePhoneNumber ;
+ rdfs:label "voiceMail" ;
+ rdfs:range xsd:boolean .
+
+ nco:PostalAddress
+ a rdfs:Class ;
+ rdfs:comment "A postal address. A class aggregating the various parts of a value for the 'ADR' property as defined in RFC 2426 Sec. 3.2.1." ;
+ rdfs:label "PostalAddress" ;
+ rdfs:subClassOf nco:ContactMedium .
+
+ nco:belongsToGroup
+ a rdf:Property ;
+ rdfs:comment "Links a Contact with a ContactGroup it belongs to." ;
+ rdfs:domain nco:Contact ;
+ rdfs:label "addressLocation" ;
+ rdfs:range nco:ContactGroup .
+
+ nco:hasContactMedium
+ a rdf:Property ;
+ rdfs:comment "A superProperty for all properties linking a Contact to an instance of a contact medium." ;
+ rdfs:domain nco:Role ;
+ rdfs:label "hasContactMedium" ;
+ rdfs:range nco:ContactMedium .
+
+ nco:contactGroupName
+ a rdf:Property ;
+ rdfs:comment """The name of the contact group. This property was NOT defined
+ in the VCARD standard. See documentation of the 'ContactGroup' class for
+ details""" ;
+ rdfs:domain nco:ContactGroup ;
+ rdfs:label "contactGroupName" ;
+ rdfs:range xsd:string ;
+ rdfs:subPropertyOf dc:title ;
+ nrl:maxCardinality 1 .
+
+ nco:FaxNumber
+ a rdfs:Class ;
+ rdfs:comment "A fax number. Inspired by the (TYPE=fax) parameter of the TEL property as defined in RFC 2426 sec 3.3.1." ;
+ rdfs:label "FaxNumber" ;
+ rdfs:subClassOf nco:PhoneNumber .
+
+ nco:contactMediumComment
+ a rdf:Property ;
+ rdfs:comment "A comment about the contact medium." ;
+ rdfs:domain nco:ContactMedium ;
+ rdfs:label "contactMediumComment" ;
+ rdfs:range xsd:string .
+
+ nco:foafUrl
+ a rdf:Property ;
+ rdfs:comment "The URL of the FOAF file." ;
+ rdfs:domain nco:Role ;
+ rdfs:label "foafUrl" ;
+ rdfs:range rdfs:Resource ;
+ rdfs:subPropertyOf nco:url .
+
+ nco:CarPhoneNumber
+ a rdfs:Class ;
+ rdfs:comment "A car phone number. Inspired by the (TYPE=car) parameter of the TEL property as defined in RFC 2426 sec 3.3.1." ;
+ rdfs:label "CarPhoneNumber" ;
+ rdfs:subClassOf nco:VoicePhoneNumber .
+
+ nco:ContactListDataObject
+ a rdfs:Class ;
+ rdfs:comment "An entity occuring on a contact list (usually interpreted as an nco:Contact)" ;
+ rdfs:label "ContactListDataObject" ;
+ rdfs:subClassOf nie:DataObject .
+
+ nco:emailAddress
+ a rdf:Property ;
+ rdfs:domain nco:EmailAddress ;
+ rdfs:label "emailAddress" ;
+ rdfs:range xsd:string ;
+ nrl:maxCardinality 1 .
+
+ nco:InternationalDeliveryAddress
+ a rdfs:Class ;
+ rdfs:comment "International Delivery Addresse. Class inspired by TYPE=intl parameter of the ADR property defined in RFC 2426 sec. 3.2.1" ;
+ rdfs:label "InternationalDeliveryAddress" ;
+ rdfs:subClassOf nco:PostalAddress .
+
+ nco:locality
+ a rdf:Property ;
+ rdfs:comment "Locality or City. Inspired by the fourth part of the value of the 'ADR' property as defined in RFC 2426, sec. 3.2.1" ;
+ rdfs:domain nco:PostalAddress ;
+ rdfs:label "locality" ;
+ rdfs:range xsd:string ;
+ nrl:maxCardinality 1 .
+
+ nco:VideoIMAccount
+ a rdfs:Class ;
+ rdfs:comment "An account in an instant messaging system capable of video conversations." ;
+ rdfs:label "VideoIMAccount" ;
+ rdfs:subClassOf nco:AudioIMAccount .
+
+ nco:sound
+ a rdf:Property ;
+ rdfs:comment "Sound clip attached to a Contact. The DataObject refered to by this property is usually interpreted as an nfo:Audio. Inspired by the SOUND property defined in RFC 2425 sec. 3.6.6." ;
+ rdfs:domain nco:Contact ;
+ rdfs:label "sound" ;
+ rdfs:range nie:DataObject ;
+ rdfs:subPropertyOf nie:hasPart .
+
+ nco:EmailAddress
+ a rdfs:Class ;
+ rdfs:comment "An email address. The recommended best practice is to use mailto: uris for instances of this class." ;
+ rdfs:label "EmailAddress" ;
+ rdfs:subClassOf nco:ContactMedium .
+
+ nco:imNickname
+ a rdf:Property ;
+ rdfs:comment "A nickname attached to a particular IM Account." ;
+ rdfs:domain nco:IMAccount ;
+ rdfs:label "imNickname" ;
+ rdfs:range xsd:string .
+
+ nco:hobby
+ a rdf:Property ;
+ rdfs:comment "A hobby associated with a PersonContact. This property can be used to express hobbies and interests." ;
+ rdfs:domain nco:PersonContact ;
+ rdfs:label "hobby" ;
+ rdfs:range xsd:string .
+
+ nco:blogUrl
+ a rdf:Property ;
+ rdfs:comment "A Blog url." ;
+ rdfs:domain nco:Role ;
+ rdfs:label "blogUrl" ;
+ rdfs:range rdfs:Resource ;
+ rdfs:subPropertyOf nco:url .
+
+ nco:CellPhoneNumber
+ a rdfs:Class ;
+ rdfs:comment "A cellular phone number. Inspired by the (TYPE=cell) parameter of the TEL property as defined in RFC 2426 sec 3.3.1. Usually a cellular phone can accept voice calls as well as textual messages (SMS), therefore this class has two superclasses." ;
+ rdfs:label "CellPhoneNumber" ;
+ rdfs:subClassOf nco:MessagingNumber , nco:VoicePhoneNumber .
+
+ nco:role
+ a rdf:Property ;
+ rdfs:comment "Role an object represented by this contact represents in the organization. This might include 'Programmer', 'Manager', 'Sales Representative'. Be careful to avoid confusion with the title property. An equivalent of the 'ROLE' property as defined in RFC 2426. Sec. 3.5.2. Note the difference between nco:Role class and nco:role property." ;
+ rdfs:domain nco:Affiliation ;
+ rdfs:label "role" ;
+ rdfs:range xsd:string .
+
+ nco:DomesticDeliveryAddress
+ a rdfs:Class ;
+ rdfs:comment "Domestic Delivery Addresse. Class inspired by TYPE=dom parameter of the ADR property defined in RFC 2426 sec. 3.2.1" ;
+ rdfs:label "DomesticDeliveryAddress" ;
+ rdfs:subClassOf nco:PostalAddress .
+
+ nco:female
+ a nco:Gender ;
+ rdfs:comment "A Female" ;
+ rdfs:label "female" .
+
+ nco:hasPostalAddress
+ a rdf:Property ;
+ rdfs:comment "The default Address for a Contact. An equivalent of the 'ADR' property as defined in RFC 2426 Sec. 3.2.1." ;
+ rdfs:domain nco:Role ;
+ rdfs:label "hasPostalAddress" ;
+ rdfs:range nco:PostalAddress ;
+ rdfs:subPropertyOf nco:hasContactMedium .
+
+ nco:imAccountType
+ a rdf:Property ;
+ rdfs:comment "Type of the IM account. This may be the name of the service that provides the IM functionality. Examples might include Jabber, ICQ, MSN etc" ;
+ rdfs:domain nco:IMAccount ;
+ rdfs:label "imAccountType" ;
+ rdfs:range xsd:string .
+
+ nco:pobox
+ a rdf:Property ;
+ rdfs:comment "Post office box. This is the first part of the value of the 'ADR' property as defined in RFC 2426, sec. 3.2.1" ;
+ rdfs:domain nco:PostalAddress ;
+ rdfs:label "pobox" ;
+ rdfs:range xsd:string .
+
+ nco:hasAffiliation
+ a rdf:Property ;
+ rdfs:comment "Links a PersonContact with an Affiliation." ;
+ rdfs:domain nco:PersonContact ;
+ rdfs:label "hasAffiliation" ;
+ rdfs:range nco:Affiliation .
+
+ nco:gender
+ a rdf:Property ;
+ rdfs:comment "Gender of the given contact." ;
+ rdfs:domain nco:PersonContact ;
+ rdfs:label "gender" ;
+ rdfs:range nco:Gender ;
+ nrl:maxCardinality 1 .
+
+ nco:imStatusMessage
+ a rdf:Property ;
+ rdfs:comment "A feature common in most IM systems. A message left by the user for all his/her contacts to see." ;
+ rdfs:domain nco:IMAccount ;
+ rdfs:label "imStatusMessage" ;
+ rdfs:range xsd:string .
+}
+
+<http://www.semanticdesktop.org/ontologies/2007/03/22/nco_metadata#> {nco: a nrl:Ontology ;
+ nao:creator <http://www.dfki.uni-kl.de/~mylka> ;
+ nao:hasDefaultNamespace
+ "http://www.semanticdesktop.org/ontologies/2007/03/22/nco#" ;
+ nao:hasDefaultNamespaceAbbreviation
+ "nco" ;
+ nao:lastModified "2009-11-27T11:45:58Z" ;
+ nao:status "Unstable" ;
+ nao:updatable "0 " ;
+ nao:version "Revision-9" .
+
+ <http://www.semanticdesktop.org/ontologies/2007/03/22/nco_metadata#>
+ a nrl:GraphMetadata ;
+ nrl:coreGraphMetadataFor
+ nco: .
+}
+
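For orientation, here is a minimal, purely illustrative Turtle sketch (not part of the attached diff; every URI and literal value below is made up) showing how the NCO terms defined above can be combined to describe a person with a private e-mail address and a work affiliation:

@prefix nco: <http://www.semanticdesktop.org/ontologies/2007/03/22/nco#> .

# Hypothetical person contact with a private e-mail address.
<urn:example:contact:jane> a nco:PersonContact ;
    nco:nameGiven "Jane" ;
    nco:nameFamily "Doe" ;
    nco:fullname "Jane Doe" ;
    nco:hasEmailAddress <mailto:jane@example.org> ;
    nco:hasAffiliation <urn:example:affiliation:jane-acme> .

<mailto:jane@example.org> a nco:EmailAddress ;
    nco:emailAddress "jane@example.org" .

# Work-related contact media are attached to the Affiliation (TYPE=work),
# not to the contact itself, as described in the nco:ContactMedium comment.
<urn:example:affiliation:jane-acme> a nco:Affiliation ;
    nco:title "Software Engineer" ;
    nco:org <urn:example:org:acme> ;
    nco:hasPhoneNumber <urn:example:phone:acme-desk> .

<urn:example:org:acme> a nco:OrganizationContact ;
    nco:fullname "ACME Corp." .

<urn:example:phone:acme-desk> a nco:VoicePhoneNumber ;
    nco:phoneNumber "+1-555-0100" .
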
=== added file 'extra/ontology/nfo.trig'
--- extra/ontology/nfo.trig 1970-01-01 00:00:00 +0000
+++ extra/ontology/nfo.trig 2011-10-19 08:09:50 +0000
@@ -0,0 +1,823 @@
+#
+# Copyright (c) 2007 NEPOMUK Consortium
+# All rights reserved, licensed under either CC-BY or BSD.
+#
+# You are free:
+# * to Share - to copy, distribute and transmit the work
+# * to Remix - to adapt the work
+# Under the following conditions:
+# * Attribution - You must attribute the work in the manner specified by the author
+# or licensor (but not in any way that suggests that they endorse you or your use
+# of the work).
+#
+# Redistribution and use in source and binary forms, with or without modification,
+# are permitted provided that the following conditions are met:
+# * Redistributions of source code must retain the above copyright notice, this
+# list of conditions and the following disclaimer.
+# * Redistributions in binary form must reproduce the above copyright notice, this
+# list of conditions and the following disclaimer in the documentation and/or
+# other materials provided with the distribution.
+# * Neither the names of the authors nor the names of contributors may
+# be used to endorse or promote products derived from this ontology without
+# specific prior written permission.
+#
+# THIS ONTOLOGY IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
+# IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
+# OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
+# IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
+# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
+# NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
+# THIS ONTOLOGY, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+#
+
+@prefix dc: <http://purl.org/dc/elements/1.1/> .
+@prefix exif: <http://www.kanzaki.com/ns/exif#> .
+@prefix geo: <http://www.w3.org/2003/01/geo/wgs84_pos#> .
+@prefix protege: <http://protege.stanford.edu/system#> .
+@prefix nao: <http://www.semanticdesktop.org/ontologies/2007/08/15/nao#> .
+@prefix nfo: <http://www.semanticdesktop.org/ontologies/2007/03/22/nfo#> .
+@prefix nie: <http://www.semanticdesktop.org/ontologies/2007/01/19/nie#> .
+@prefix ncal: <http://www.semanticdesktop.org/ontologies/2007/04/02/ncal#> .
+@prefix nco: <http://www.semanticdesktop.org/ontologies/2007/03/22/nco#> .
+@prefix dcterms: <http://purl.org/dc/terms/> .
+@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
+@prefix pimo: <http://www.semanticdesktop.org/ontologies/2007/11/01/pimo#> .
+@prefix nmo: <http://www.semanticdesktop.org/ontologies/2007/03/22/nmo#> .
+@prefix nrl: <http://www.semanticdesktop.org/ontologies/2007/08/15/nrl#> .
+@prefix tmo: <http://www.semanticdesktop.org/ontologies/2008/05/20/tmo#> .
+@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
+@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
+@prefix nid3: <http://www.semanticdesktop.org/ontologies/2007/05/10/nid3#> .
+@prefix nexif: <http://www.semanticdesktop.org/ontologies/2007/05/10/nexif#> .
+
+nfo: {nfo:horizontalResolution
+ a rdf:Property ;
+ rdfs:comment "Horizontal resolution of an image (if printed). Expressed in DPI." ;
+ rdfs:domain nfo:Image ;
+ rdfs:label "horizontalResolution" ;
+ rdfs:range xsd:integer .
+
+ nfo:sampleRate
+ a rdf:Property ;
+ rdfs:comment "The amount of audio samples per second." ;
+ rdfs:domain nfo:Audio ;
+ rdfs:label "sampleRate" ;
+ rdfs:range xsd:float ;
+ rdfs:subPropertyOf nfo:rate .
+
+ nfo:HardDiskPartition
+ a rdfs:Class ;
+ rdfs:comment "A partition on a hard disk" ;
+ rdfs:label "HardDiskPartition" ;
+ rdfs:subClassOf nie:DataObject .
+
+ nfo:fileName
+ a rdf:Property ;
+ rdfs:comment "Name of the file, together with the extension" ;
+ rdfs:domain nfo:FileDataObject ;
+ rdfs:label "fileName" ;
+ nrl:maxCardinality "1" ;
+ rdfs:range xsd:string .
+
+ nfo:MediaStream
+ a rdfs:Class ;
+ rdfs:comment "A stream of multimedia content, usually contained within a media container such as a movie (containing both audio and video) or a DVD (possibly containing many streams of audio and video). Most common interpretations for such a DataObject include Audio and Video." ;
+ rdfs:label "MediaStream" ;
+ rdfs:subClassOf nie:DataObject .
+
+ nfo:Presentation
+ a rdfs:Class ;
+ rdfs:comment "A Presentation made by some presentation software (Corel Presentations, OpenOffice Impress, MS Powerpoint etc.)" ;
+ rdfs:label "Presentation" ;
+ rdfs:subClassOf nfo:Document .
+
+ nfo:Audio
+ a rdfs:Class ;
+ rdfs:comment "A file containing audio content" ;
+ rdfs:label "Audio" ;
+ rdfs:subClassOf nfo:Media .
+
+ nfo:hashAlgorithm
+ a rdf:Property ;
+ rdfs:comment "Name of the algorithm used to compute the hash value. Examples might include CRC32, MD5, SHA, TTH etc." ;
+ rdfs:domain nfo:FileHash ;
+ rdfs:label "hashAlgorithm" ;
+ rdfs:range xsd:string .
+
+ nfo:commentCharacterCount
+ a rdf:Property ;
+ rdfs:comment "The amount of character in comments i.e. characters ignored by the compiler/interpreter." ;
+ rdfs:domain nfo:SourceCode ;
+ rdfs:label "commentCharacterCount" ;
+ rdfs:range xsd:integer .
+
+ nfo:PlainTextDocument
+ a rdfs:Class ;
+ rdfs:comment "A file containing plain text (ASCII, Unicode or other encodings). Examples may include TXT, HTML, XML, program source code etc." ;
+ rdfs:label "PlainTextDocument" ;
+ rdfs:subClassOf nfo:TextDocument .
+
+ nfo:foundry
+ a rdf:Property ;
+ rdfs:comment "The foundry, the organization that created the font." ;
+ rdfs:domain nfo:Font ;
+ rdfs:label "foundry" ;
+ rdfs:range nco:Contact ;
+ rdfs:subPropertyOf nco:creator .
+
+ nfo:losslessCompressionType
+ a nfo:CompressionType ;
+ rdfs:label "losslessCompressionType" .
+
+ nfo:sideChannels
+ a rdf:Property ;
+ rdfs:comment "Number of side channels" ;
+ rdfs:label "sideChannels" ;
+ rdfs:range xsd:integer ;
+ rdfs:subPropertyOf nfo:channels .
+
+ nfo:interlaceMode
+ a rdf:Property ;
+ rdfs:comment "True if the image is interlaced, false if not." ;
+ rdfs:domain nfo:Visual ;
+ rdfs:label "interlaceMode" ;
+ rdfs:range xsd:boolean .
+
+ nfo:width
+ a rdf:Property ;
+ rdfs:comment "Visual content width in pixels." ;
+ rdfs:domain nfo:Visual ;
+ rdfs:label "width" ;
+ rdfs:range xsd:integer .
+
+ nfo:frameCount
+ a rdf:Property ;
+ rdfs:comment "The amount of frames in a video sequence." ;
+ rdfs:domain nfo:Video ;
+ rdfs:label "frameCount" ;
+ rdfs:range xsd:integer ;
+ rdfs:subPropertyOf nfo:count .
+
+ nfo:MediaFileListEntry
+ a rdfs:Class ;
+ rdfs:comment "A single node in the list of media files contained within an MediaList instance. This class is intended to provide a type all those links have. In valid NRL untyped resources cannot be linked. There are no properties defined for this class but the application may expect rdf:first and rdf:last links. The former points to the DataObject instance, interpreted as Media the latter points at another MediaFileListEntr. At the end of the list there is a link to rdf:nil." ;
+ rdfs:label "MediaFileListEntry" ;
+ rdfs:subClassOf rdfs:Resource .
+
+ nfo:Filesystem
+ a rdfs:Class ;
+ rdfs:comment "A filesystem. Examples of filesystems include hard disk partitions, removable media, but also images thereof stored in files." ;
+ rdfs:label "Filesystem" ;
+ rdfs:subClassOf nfo:DataContainer .
+
+ nfo:definesFunction
+ a rdf:Property ;
+ rdfs:comment "A name of a function/method defined in the given source code file." ;
+ rdfs:domain nfo:SourceCode ;
+ rdfs:label "definesFunction" ;
+ rdfs:range xsd:string .
+
+ nfo:Archive
+ a rdfs:Class ;
+ rdfs:comment "A compressed file. May contain other files or folder inside. " ;
+ rdfs:label "Archive" ;
+ rdfs:subClassOf nfo:DataContainer .
+
+ nfo:permissions
+ a rdf:Property ;
+ rdfs:comment "A string containing the permissions of a file. A feature common in many UNIX-like operating systems." ;
+ rdfs:domain nfo:FileDataObject ;
+ rdfs:label "permissions" ;
+ rdfs:range xsd:string .
+
+ nfo:lineCount
+ a rdf:Property ;
+ rdfs:comment "The amount of lines in a text document" ;
+ rdfs:domain nfo:TextDocument ;
+ rdfs:label "lineCount" ;
+ rdfs:range xsd:integer .
+
+ nfo:SoftwareItem
+ a rdfs:Class ;
+ rdfs:comment "A DataObject representing a piece of software. Examples of interpretations of a SoftwareItem include an Application and an OperatingSystem." ;
+ rdfs:label "SoftwareItem" ;
+ rdfs:subClassOf nie:DataObject .
+
+ nfo:SourceCode
+ a rdfs:Class ;
+ rdfs:comment "Code in a compilable or interpreted programming language." ;
+ rdfs:label "SourceCode" ;
+ rdfs:subClassOf nfo:PlainTextDocument .
+
+ nfo:wordCount
+ a rdf:Property ;
+ rdfs:comment "The amount of words in a text document." ;
+ rdfs:domain nfo:TextDocument ;
+ rdfs:label "wordCount" ;
+ rdfs:range xsd:integer .
+
+ nfo:bookmarks
+ a rdf:Property ;
+ rdfs:comment "The address of the linked object. Usually a web URI." ;
+ rdfs:domain nfo:Bookmark ;
+ rdfs:label "link" ;
+ rdfs:range nie:DataObject ;
+ rdfs:subPropertyOf nie:links .
+
+ nfo:RemotePortAddress
+ a rdfs:Class ;
+ rdfs:comment "An address specifying a remote host and port. Such an address can be interpreted in many ways (examples of such interpretations include mailboxes, websites, remote calendars or filesystems), depending on an interpretation, various kinds of data may be extracted from such an address." ;
+ rdfs:label "RemotePortAddress" ;
+ rdfs:subClassOf nie:DataObject .
+
+ nfo:Attachment
+ a rdfs:Class ;
+ rdfs:comment "A file attached to another data object. Many data formats allow for attachments: emails, vcards, ical events, id3 and exif..." ;
+ rdfs:label "Attachment" ;
+ rdfs:subClassOf nfo:EmbeddedFileDataObject .
+
+ nfo:DataContainer
+ a rdfs:Class ;
+ rdfs:comment "A superclass for all entities, whose primary purpose is to serve as containers for other data object. They usually don't have any \"meaning\" by themselves. Examples include folders, archives and optical disc images." ;
+ rdfs:label "DataContainer" ;
+ rdfs:subClassOf nie:InformationElement .
+
+ nfo:TextDocument
+ a rdfs:Class ;
+ rdfs:comment "A text document" ;
+ rdfs:label "TextDocument" ;
+ rdfs:subClassOf nfo:Document .
+
+ nfo:characterCount
+ a rdf:Property ;
+ rdfs:comment "The amount of characters in the document." ;
+ rdfs:domain nfo:TextDocument ;
+ rdfs:label "characterCount" ;
+ rdfs:range xsd:integer .
+
+ nfo:fileLastAccessed
+ a rdf:Property ;
+ rdfs:comment "Time when the file was last accessed." ;
+ rdfs:domain nfo:FileDataObject ;
+ rdfs:label "fileLastAccessed" ;
+ rdfs:range xsd:dateTime ;
+ rdfs:subPropertyOf dc:date .
+
+ nfo:supercedes
+ a rdf:Property ;
+ rdfs:comment "States that a piece of software supercedes another piece of software." ;
+ rdfs:domain nfo:Software ;
+ rdfs:label "supercedes" ;
+ rdfs:range nfo:Software .
+
+ nfo:programmingLanguage
+ a rdf:Property ;
+ rdfs:comment "Indicates the name of the programming language this source code file is written in. Examples might include 'C', 'C++', 'Java' etc." ;
+ rdfs:domain nfo:SourceCode ;
+ rdfs:label "programmingLanguage" ;
+ rdfs:range xsd:string .
+
+ nfo:PaginatedTextDocument
+ a rdfs:Class ;
+ rdfs:comment "A file containing a text document, that is unambiguously divided into pages. Examples might include PDF, DOC, PS, DVI etc." ;
+ rdfs:label "PaginatedTextDocument" ;
+ rdfs:subClassOf nfo:TextDocument .
+
+ nfo:Application
+ a rdfs:Class ;
+ rdfs:comment "An application" ;
+ rdfs:label "Application" ;
+ rdfs:subClassOf nfo:Software .
+
+ nfo:sampleCount
+ a rdf:Property ;
+ rdfs:comment "The amount of samples in an audio clip." ;
+ rdfs:domain nfo:Audio ;
+ rdfs:label "sampleCount" ;
+ rdfs:range xsd:integer ;
+ rdfs:subPropertyOf nfo:count .
+
+ nfo:Image
+ a rdfs:Class ;
+ rdfs:comment "A file containing an image." ;
+ rdfs:label "Image" ;
+ rdfs:subClassOf nfo:Visual .
+
+ nfo:height
+ a rdf:Property ;
+ rdfs:comment "Visual content height in pixels." ;
+ rdfs:domain nfo:Visual ;
+ rdfs:label "height" ;
+ rdfs:range xsd:integer .
+
+ nfo:frontChannels
+ a rdf:Property ;
+ rdfs:comment "Number of front channels." ;
+ rdfs:label "frontChannels" ;
+ rdfs:range xsd:integer ;
+ rdfs:subPropertyOf nfo:channels .
+
+ nfo:FilesystemImage
+ a rdfs:Class ;
+ rdfs:comment "An image of a filesystem. Instances of this class may include CD images, DVD images or hard disk partition images created by various pieces of software (e.g. Norton Ghost)" ;
+ rdfs:label "FilesystemImage" ;
+ rdfs:subClassOf nfo:Filesystem .
+
+ nfo:CompressionType
+ a rdfs:Class ;
+ rdfs:comment "Type of compression. Instances of this class represent the limited set of values allowed for the nfo:compressionType property." ;
+ rdfs:label "CompressionType" ;
+ rdfs:subClassOf rdfs:Resource .
+
+ nfo:ArchiveItem
+ a rdfs:Class ;
+ rdfs:comment "A file entity inside an archive." ;
+ rdfs:label "ArchiveItem" ;
+ rdfs:subClassOf nfo:EmbeddedFileDataObject .
+
+ nfo:rearChannels
+ a rdf:Property ;
+ rdfs:comment "Number of rear channels." ;
+ rdfs:label "rearChannels" ;
+ rdfs:range xsd:integer ;
+ rdfs:subPropertyOf nfo:channels .
+
+ nfo:bitsPerSample
+ a rdf:Property ;
+ rdfs:comment "Amount of bits in each audio sample." ;
+ rdfs:domain nfo:Audio ;
+ rdfs:label "bitsPerSample" ;
+ rdfs:range xsd:integer ;
+ rdfs:subPropertyOf nfo:bitDepth .
+
+ nfo:HtmlDocument
+ a rdfs:Class ;
+ rdfs:comment "A HTML document, may contain links to other files." ;
+ rdfs:label "HtmlDocument" ;
+ rdfs:subClassOf nfo:PlainTextDocument .
+
+ nfo:Bookmark
+ a rdfs:Class ;
+ rdfs:comment "A bookmark of a webbrowser. Use nie:title for the name/label, nie:contentCreated to represent the date when the user added the bookmark, and nie:contentLastModified for modifications. nfo:bookmarks to store the link." ;
+ rdfs:label "Bookmark" ;
+ rdfs:subClassOf nie:InformationElement .
+
+ nfo:FileHash
+ a rdfs:Class ;
+ rdfs:comment "A fingerprint of the file, generated by some hashing function." ;
+ rdfs:label "FileHash" ;
+ rdfs:subClassOf rdfs:Resource .
+
+ nfo:duration
+ a rdf:Property ;
+ rdfs:comment "Duration of a media piece." ;
+ rdfs:domain nfo:Media ;
+ rdfs:label "duration" ;
+ rdfs:range xsd:duration .
+
+ nfo:lfeChannels
+ a rdf:Property ;
+ rdfs:comment "Number of Low Frequency Expansion (subwoofer) channels." ;
+ rdfs:label "lfeChannels" ;
+ rdfs:range xsd:integer ;
+ rdfs:subPropertyOf nfo:channels .
+
+ nfo:Video
+ a rdfs:Class ;
+ rdfs:comment "A video file." ;
+ rdfs:label "Video" ;
+ rdfs:subClassOf nfo:Visual .
+
+ nfo:hasMediaStream
+ a rdf:Property ;
+ rdfs:comment "Connects a media container with a single media stream contained within." ;
+ rdfs:domain nfo:Media ;
+ rdfs:label "hasMediaStream" ;
+ rdfs:range nie:DataObject ;
+ rdfs:subPropertyOf nie:hasPart .
+
+ nfo:Spreadsheet
+ a rdfs:Class ;
+ rdfs:comment "A spreadsheet, created by a spreadsheet application. Examples might include Gnumeric, OpenOffice Calc or MS Excel." ;
+ rdfs:label "Spreadsheet" ;
+ rdfs:subClassOf nfo:Document .
+
+ nfo:isPasswordProtected
+ a rdf:Property ;
+ rdfs:comment "States if a given resource is password-protected." ;
+ rdfs:domain nfo:ArchiveItem ;
+ rdfs:label "isPasswordProtected" ;
+ rdfs:range xsd:boolean .
+
+ nfo:hashValue
+ a rdf:Property ;
+ rdfs:comment "The actual value of the hash." ;
+ rdfs:domain nfo:FileHash ;
+ rdfs:label "hashValue" ;
+ rdfs:range xsd:string .
+
+ nfo:Document
+ a rdfs:Class ;
+ rdfs:comment "A generic document. A common superclass for all documents on the desktop." ;
+ rdfs:label "Document" ;
+ rdfs:subClassOf nie:InformationElement .
+
+ nfo:FileDataObject
+ a rdfs:Class ;
+ rdfs:comment "A resource containing a finite sequence of bytes with arbitrary information, that is available to a computer program and is usually based on some kind of durable storage. A file is durable in the sense that it remains available for programs to use after the current program has finished." ;
+ rdfs:label "File" ;
+ rdfs:subClassOf nie:DataObject .
+
+ nfo:encryptedStatus
+ a nfo:EncryptionStatus ;
+ rdfs:label "EncryptedStatus" .
+
+ nfo:Visual
+ a rdfs:Class ;
+ rdfs:comment "File containing visual content." ;
+ rdfs:label "Visual" ;
+ rdfs:subClassOf nfo:Media .
+
+ nfo:uncompressedSize
+ a rdf:Property ;
+ rdfs:comment "Uncompressed size of the content of a compressed file." ;
+ rdfs:domain nfo:Archive ;
+ rdfs:label "uncompressedSize" ;
+ rdfs:range xsd:integer .
+
+ nfo:deletionDate
+ a rdf:Property ;
+ rdfs:comment "The date and time of the deletion." ;
+ rdfs:domain nfo:DeletedResource ;
+ rdfs:label "deletionDate" ;
+ rdfs:range xsd:dateTime .
+
+ nfo:MindMap
+ a rdfs:Class ;
+ rdfs:comment "A MindMap, created by a mind-mapping utility. Examples might include FreeMind or mind mapper." ;
+ rdfs:label "MindMap" ;
+ rdfs:subClassOf nfo:Document .
+
+ nfo:SoftwareService
+ a rdfs:Class ;
+ rdfs:comment "A service published by a piece of software, either by an operating system or an application. Examples of such services may include calendar, addresbook and mailbox managed by a PIM application. This category is introduced to distinguish between data available directly from the applications (Via some Interprocess Communication Mechanisms) and data available from files on a disk. In either case both DataObjects would receive a similar interpretation (e.g. a Mailbox) and wouldn't differ on the content level." ;
+ rdfs:label "SoftwareService" ;
+ rdfs:subClassOf nie:DataObject .
+
+ nfo:decryptedStatus
+ a nfo:EncryptionStatus ;
+ rdfs:label "DecryptedStatus" .
+
+ nfo:originalLocation
+ a rdf:Property ;
+ rdfs:comment "The original location of the deleted resource." ;
+ rdfs:domain nfo:DeletedResource ;
+ rdfs:label "originalLocation" ;
+ rdfs:range xsd:string .
+
+ nfo:Website
+ a rdfs:Class ;
+ rdfs:comment "A website, usually a container for remote resources, that may be interpreted as HTMLDocuments, images or other types of content." ;
+ rdfs:label "Website" ;
+ rdfs:subClassOf nie:InformationElement .
+
+ nfo:VectorImage
+ a rdfs:Class ;
+ rdfs:label "VectorImage" ;
+ rdfs:subClassOf nfo:Image .
+
+ nfo:Cursor
+ a rdfs:Class ;
+ rdfs:comment "A Cursor." ;
+ rdfs:label "Cursor" ;
+ rdfs:subClassOf nfo:RasterImage .
+
+ nfo:Media
+ a rdfs:Class ;
+ rdfs:comment "A piece of media content. This class may be used to express complex media containers with many streams of various media content (both aural and visual)." ;
+ rdfs:label "Media" ;
+ rdfs:subClassOf nie:InformationElement .
+
+ nfo:hasMediaFileListEntry
+ a rdf:Property ;
+ rdfs:comment "This property is intended to point to an RDF list of MediaFiles." ;
+ rdfs:domain nfo:MediaList ;
+ rdfs:label "hasMediaFileListEntry" ;
+ rdfs:range nfo:MediaFileListEntry .
+
+ nfo:BookmarkFolder
+ a rdfs:Class ;
+ rdfs:comment "A folder with bookmarks of a webbrowser. Use nfo:containsBookmark to relate Bookmarks. Folders can contain subfolders, use containsBookmarkFolder to relate them." ;
+ rdfs:label "Bookmark Folder" ;
+ rdfs:subClassOf nie:InformationElement .
+
+ nfo:channels
+ a rdf:Property ;
+ rdfs:comment "Number of channels. This property is to be used directly if no detailed information is necessary. Otherwise use more detailed subproperties." ;
+ rdfs:domain nfo:Audio ;
+ rdfs:label "channels" ;
+ rdfs:range xsd:integer .
+
+ nfo:colorDepth
+ a rdf:Property ;
+ rdfs:comment "Amount of bits used to express the color of each pixel." ;
+ rdfs:domain nfo:Visual ;
+ rdfs:label "colorDepth" ;
+ rdfs:range xsd:integer ;
+ rdfs:subPropertyOf nfo:bitDepth .
+
+ nfo:Font
+ a rdfs:Class ;
+ rdfs:comment "A font." ;
+ rdfs:label "Font" ;
+ rdfs:subClassOf nie:InformationElement .
+
+ nfo:averageBitrate
+ a rdf:Property ;
+ rdfs:comment "The average overall bitrate of a media container. (i.e. the size of the piece of media in bits, divided by it's duration expressed in seconds)." ;
+ rdfs:domain nfo:Media ;
+ rdfs:label "averageBitrate" ;
+ rdfs:range xsd:float ;
+ rdfs:subPropertyOf nfo:rate .
+
+ nfo:Icon
+ a rdfs:Class ;
+ rdfs:comment "An Icon (regardless of whether it's a raster or a vector icon. A resource representing an icon could have two types (Icon and Raster, or Icon and Vector) if required." ;
+ rdfs:label "Icon" ;
+ rdfs:subClassOf nfo:Image .
+
+ nfo:fileOwner
+ a rdf:Property ;
+ rdfs:comment "The owner of the file as defined by the file system access rights feature." ;
+ rdfs:domain nfo:FileDataObject ;
+ rdfs:label "fileOwner" ;
+ rdfs:range nco:Contact .
+
+ nfo:aspectRatio
+ a rdf:Property ;
+ rdfs:comment "Visual content aspect ratio. (Width divided by Height)" ;
+ rdfs:domain nfo:Visual ;
+ rdfs:label "aspectRatio" ;
+ rdfs:range xsd:float .
+
+ nfo:bitDepth
+ a rdf:Property ;
+ rdfs:comment "A common superproperty for all properties signifying the amount of bits for an atomic unit of data. Examples of subproperties may include bitsPerSample and bitsPerPixel" ;
+ rdfs:domain nfo:Media ;
+ rdfs:label "bitDepth" ;
+ rdfs:range rdfs:Literal .
+
+ nfo:containsBookmarkFolder
+ a rdf:Property ;
+ rdfs:comment "The folder contains a bookmark folder." ;
+ rdfs:domain nfo:BookmarkFolder ;
+ rdfs:label "contains folder" ;
+ rdfs:range nfo:BookmarkFolder ;
+ rdfs:subPropertyOf nie:hasLogicalPart .
+
+ nfo:belongsToContainer
+ a rdf:Property ;
+ rdfs:comment "Models the containment relations between Files and Folders (or CompressedFiles)." ;
+ rdfs:domain nie:DataObject ;
+ rdfs:label "belongsToContainer" ;
+ rdfs:range nfo:DataContainer ;
+ rdfs:subPropertyOf nie:isPartOf .
+
+ nfo:verticalResolution
+ a rdf:Property ;
+ rdfs:comment "Vertical resolution of an Image (if printed). Expressed in DPI" ;
+ rdfs:domain nfo:Image ;
+ rdfs:label "verticalResolution" ;
+ rdfs:range xsd:integer .
+
+ nfo:fileUrl
+ a rdf:Property ;
+ nao:deprecated true ;
+ rdfs:comment "URL of the file. It points at the location of the file. In cases where creating a simple file:// or http:// URL for a file is difficult (e.g. for files inside compressed archives) the applications are encouraged to use conventions defined by Apache Commons VFS Project at http://jakarta.apache.org/ commons/ vfs/ filesystems.html." ;
+ rdfs:domain nfo:FileDataObject ;
+ rdfs:label "fileUrl" ;
+ rdfs:range rdfs:Resource ;
+ rdfs:subPropertyOf nie:url .
+
+ nfo:count
+ a rdf:Property ;
+ rdfs:comment "A common superproperty for all properties signifying the amount of atomic media data units. Examples of subproperties may include sampleCount and frameCount." ;
+ rdfs:domain nfo:Media ;
+ rdfs:label "count" ;
+ rdfs:range xsd:integer .
+
+ nfo:frameRate
+ a rdf:Property ;
+ rdfs:comment "Amount of video frames per second." ;
+ rdfs:domain nfo:Video ;
+ rdfs:label "frameRate" ;
+ rdfs:range xsd:float ;
+ rdfs:subPropertyOf nfo:rate .
+
+ nfo:fontFamily
+ a rdf:Property ;
+ rdfs:comment "The name of the font family." ;
+ rdfs:domain nfo:Font ;
+ rdfs:label "fontFamily" ;
+ rdfs:range xsd:string .
+
+ nfo:EmbeddedFileDataObject
+ a rdfs:Class ;
+ rdfs:comment "A file embedded in another data object. There are many ways in which a file may be embedded in another one. Use this class directly only in cases if none of the subclasses gives a better description of your case." ;
+ rdfs:label "EmbeddedFileDataObject" ;
+ rdfs:subClassOf nfo:FileDataObject .
+
+ nfo:fileCreated
+ a rdf:Property ;
+ rdfs:comment "File creation date" ;
+ rdfs:domain nfo:FileDataObject ;
+ rdfs:label "fileCreated" ;
+ rdfs:range xsd:dateTime ;
+ rdfs:subPropertyOf nie:created .
+
+ nfo:bitrateType
+ a rdf:Property ;
+ rdfs:comment "The type of the bitrate. Examples may include CBR and VBR." ;
+ rdfs:domain nfo:Media ;
+ rdfs:label "bitrateType" ;
+ rdfs:range xsd:string .
+
+ nfo:encoding
+ a rdf:Property ;
+ rdfs:comment "The encoding used for the Embedded File. Examples might include BASE64 or UUEncode" ;
+ rdfs:domain nfo:EmbeddedFileDataObject ;
+ rdfs:label "encoding" ;
+ rdfs:range xsd:string .
+
+ nfo:Folder
+ a rdfs:Class ;
+ rdfs:comment "A folder/directory. Examples of folders include folders on a filesystem and message folders in a mailbox." ;
+ rdfs:label "Folder" ;
+ rdfs:subClassOf nfo:DataContainer .
+
+ nfo:hasHash
+ a rdf:Property ;
+ rdfs:comment "Links the file with it's hash value." ;
+ rdfs:domain nfo:FileDataObject ;
+ rdfs:label "hasHash" ;
+ rdfs:range nfo:FileHash .
+
+ nfo:codec
+ a rdf:Property ;
+ rdfs:comment "The name of the codec necessary to decode a piece of media." ;
+ rdfs:domain nfo:Media ;
+ rdfs:label "codec" ;
+ rdfs:range rdfs:Literal .
+
+ nfo:fileLastModified
+ a rdf:Property ;
+ nao:deprecated true;
+ rdfs:comment "last modification date" ;
+ rdfs:domain nfo:FileDataObject ;
+ rdfs:label "fileLastModified" ;
+ rdfs:range xsd:dateTime ;
+ rdfs:subPropertyOf nie:lastModified .
+
+ nfo:compressionType
+ a rdf:Property ;
+ rdfs:comment "The type of the compression. Values include, 'lossy' and 'lossless'." ;
+ rdfs:domain nfo:Media ;
+ rdfs:label "compressionType" ;
+ rdfs:range nfo:CompressionType .
+
+ nfo:pageCount
+ a rdf:Property ;
+ rdfs:comment "Number of pages." ;
+ rdfs:domain nfo:PaginatedTextDocument ;
+ rdfs:label "pageCount" ;
+ rdfs:range xsd:integer .
+
+ nfo:RasterImage
+ a rdfs:Class ;
+ rdfs:comment "A raster image." ;
+ rdfs:label "RasterImage" ;
+ rdfs:subClassOf nfo:Image .
+
+ nfo:definesGlobalVariable
+ a rdf:Property ;
+ rdfs:comment "Name of a global variable defined within the source code file." ;
+ rdfs:domain nfo:SourceCode ;
+ rdfs:label "definesGlobalVariable" ;
+ rdfs:range xsd:string .
+
+ nfo:DeletedResource
+ a rdfs:Class ;
+ rdfs:comment "A file entity that has been deleted from the original source. Usually such entities are stored within various kinds of 'Trash' or 'Recycle Bin' folders." ;
+ rdfs:label "DeletedResource" ;
+ rdfs:subClassOf nfo:FileDataObject .
+
+ nfo:Trash
+ a rdfs:Class ;
+ rdfs:comment "Represents a container for deleted files, a feature common in modern operating systems." ;
+ rdfs:label "Trash" ;
+ rdfs:subClassOf nfo:DataContainer .
+
+ nfo:conflicts
+ a rdf:Property ;
+ rdfs:comment "States that a piece of software is in conflict with another piece of software." ;
+ rdfs:domain nfo:Software ;
+ rdfs:label "conflicts" ;
+ rdfs:range nfo:Software .
+
+ nfo:encryptionStatus
+ a rdf:Property ;
+ rdfs:comment "The status of the encryption of the InformationElement." ;
+ rdfs:domain nie:InformationElement ;
+ rdfs:label "encryptionStatus" ;
+ rdfs:range nfo:EncryptionStatus .
+
+ nfo:containsBookmark
+ a rdf:Property ;
+ rdfs:comment "The folder contains a bookmark." ;
+ rdfs:domain nfo:BookmarkFolder ;
+ rdfs:label "contains bookmark" ;
+ rdfs:range nfo:Bookmark ;
+ rdfs:subPropertyOf nie:hasLogicalPart .
+
+ nfo:Executable
+ a rdfs:Class ;
+ rdfs:comment "An executable file." ;
+ rdfs:label "Executable" ;
+ rdfs:subClassOf nie:InformationElement .
+
+ nfo:definesClass
+ a rdf:Property ;
+ rdfs:comment "Name of a class defined in the source code file." ;
+ rdfs:domain nfo:SourceCode ;
+ rdfs:label "definesClass" ;
+ rdfs:range xsd:string .
+
+ nfo:Software
+ a rdfs:Class ;
+ rdfs:comment "A piece of software. Examples may include applications and the operating system. This interpretation most commonly applies to SoftwareItems." ;
+ rdfs:label "Software" ;
+ rdfs:subClassOf nie:InformationElement .
+
+ nfo:lossyCompressionType
+ a nfo:CompressionType ;
+ rdfs:label "lossyCompressionType" .
+
+ nfo:EncryptionStatus
+ a rdfs:Class ;
+ rdfs:comment "The status of the encryption of an InformationElement. nfo:encryptedStatus means that the InformationElement has been encrypted and couldn't be decrypted by the extraction software, thus no content is available. nfo:decryptedStatus means that decryption was successfull and the content is available." ;
+ rdfs:label "EncryptionStatus" .
+
+ nfo:OperatingSystem
+ a rdfs:Class ;
+ rdfs:comment "An OperatingSystem" ;
+ rdfs:label "OperatingSystem" ;
+ rdfs:subClassOf nfo:Software .
+
+ nfo:rate
+ a rdf:Property ;
+ rdfs:comment "A common superproperty for all properties specifying the media rate. Examples of subproperties may include frameRate for video and sampleRate for audio. This property is expressed in units per second." ;
+ rdfs:domain nfo:Media ;
+ rdfs:label "rate" ;
+ rdfs:range xsd:float .
+
+ nfo:MediaList
+ a rdfs:Class ;
+ rdfs:comment "A file containing a list of media files.e.g. a playlist" ;
+ rdfs:label "MediaList" ;
+ rdfs:subClassOf nie:InformationElement .
+
+ nfo:fileSize
+ a rdf:Property ;
+ rdfs:comment "The size of the file in bytes. For compressed files it means the size of the packed file, not of the contents. For folders it means the aggregated size of all contained files and folders " ;
+ rdfs:domain nfo:FileDataObject ;
+ rdfs:label "fileSize" ;
+ rdfs:range xsd:integer ;
+ rdfs:subPropertyOf nie:byteSize .
+
+ nfo:RemoteDataObject
+ a rdfs:Class ;
+ rdfs:comment "A file data object stored at a remote location. Don't confuse this class with a RemotePortAddress. This one applies to a particular resource, RemotePortAddress applies to an address, that can have various interpretations." ;
+ rdfs:label "RemoteDataObject" ;
+ rdfs:subClassOf nfo:FileDataObject .
+}
+
+<http://www.semanticdesktop.org/ontologies/2007/03/22/nfo_metadata#> {nfo: a nrl:Ontology ;
+ nao:creator <http://www.dfki.uni-kl.de/~mylka> ;
+ nao:hasDefaultNamespace
+ "http://www.semanticdesktop.org/ontologies/2007/03/22/nfo#" ;
+ nao:hasDefaultNamespaceAbbreviation
+ "nfo" ;
+ nao:lastModified "2009-07-20T14:59:09.500Z" ;
+ nao:status "Unstable" ;
+ nao:updatable "0 " ;
+ nao:version "Revision-9" .
+
+ <http://www.semanticdesktop.org/ontologies/2007/03/22/nfo_metadata#>
+ a nrl:GraphMetadata ;
+ nrl:coreGraphMetadataFor
+ nfo: .
+}
+
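Likewise, a minimal illustrative Turtle sketch (again not part of the diff; the file URL and all numbers are invented) of how the NFO terms above might describe a photo on disk:

@prefix nfo: <http://www.semanticdesktop.org/ontologies/2007/03/22/nfo#> .
@prefix nie: <http://www.semanticdesktop.org/ontologies/2007/01/19/nie#> .

# Hypothetical photo, typed both as the stored file (a nie:DataObject via
# nfo:FileDataObject) and as its raster-image interpretation (a
# nie:InformationElement via nfo:RasterImage), so that both the file-level
# and the visual-level properties defined above apply to it.
<file:///home/jane/photos/holiday.png> a nfo:FileDataObject , nfo:RasterImage ;
    nfo:fileName "holiday.png" ;
    nfo:fileSize 2048576 ;
    nfo:width 1920 ;
    nfo:height 1080 ;
    nfo:horizontalResolution 300 ;
    nfo:verticalResolution 300 ;
    nie:mimeType "image/png" .
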
=== added file 'extra/ontology/nie.trig'
--- extra/ontology/nie.trig 1970-01-01 00:00:00 +0000
+++ extra/ontology/nie.trig 2011-10-19 08:09:50 +0000
@@ -0,0 +1,394 @@
+#
+# Copyright (c) 2007 NEPOMUK Consortium
+# All rights reserved, licensed under either CC-BY or BSD.
+#
+# You are free:
+# * to Share - to copy, distribute and transmit the work
+# * to Remix - to adapt the work
+# Under the following conditions:
+# * Attribution - You must attribute the work in the manner specified by the author
+# or licensor (but not in any way that suggests that they endorse you or your use
+# of the work).
+#
+# Redistribution and use in source and binary forms, with or without modification,
+# are permitted provided that the following conditions are met:
+# * Redistributions of source code must retain the above copyright notice, this
+# list of conditions and the following disclaimer.
+# * Redistributions in binary form must reproduce the above copyright notice, this
+# list of conditions and the following disclaimer in the documentation and/or
+# other materials provided with the distribution.
+# * Neither the names of the authors nor the names of contributors may
+# be used to endorse or promote products derived from this ontology without
+# specific prior written permission.
+#
+# THIS ONTOLOGY IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
+# IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
+# OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
+# IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
+# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
+# NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
+# THIS ONTOLOGY, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+#
+
+@prefix dc: <http://purl.org/dc/elements/1.1/> .
+@prefix exif: <http://www.kanzaki.com/ns/exif#> .
+@prefix geo: <http://www.w3.org/2003/01/geo/wgs84_pos#> .
+@prefix protege: <http://protege.stanford.edu/system#> .
+@prefix nao: <http://www.semanticdesktop.org/ontologies/2007/08/15/nao#> .
+@prefix nfo: <http://www.semanticdesktop.org/ontologies/2007/03/22/nfo#> .
+@prefix nie: <http://www.semanticdesktop.org/ontologies/2007/01/19/nie#> .
+@prefix ncal: <http://www.semanticdesktop.org/ontologies/2007/04/02/ncal#> .
+@prefix nco: <http://www.semanticdesktop.org/ontologies/2007/03/22/nco#> .
+@prefix dcterms: <http://purl.org/dc/terms/> .
+@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
+@prefix pimo: <http://www.semanticdesktop.org/ontologies/2007/11/01/pimo#> .
+@prefix nmo: <http://www.semanticdesktop.org/ontologies/2007/03/22/nmo#> .
+@prefix nrl: <http://www.semanticdesktop.org/ontologies/2007/08/15/nrl#> .
+@prefix tmo: <http://www.semanticdesktop.org/ontologies/2008/05/20/tmo#> .
+@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
+@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
+@prefix nid3: <http://www.semanticdesktop.org/ontologies/2007/05/10/nid3#> .
+@prefix nexif: <http://www.semanticdesktop.org/ontologies/2007/05/10/nexif#> .
+
+nie: {nie:characterSet
+ a rdf:Property ;
+ rdfs:comment "Characterset in which the content of the InformationElement was created. Example: ISO-8859-1, UTF-8. One of the registered character sets at http://www.iana.org/assignments/character-sets. This characterSet is used to interpret any textual parts of the content. If more than one characterSet is used within one data object, use more specific properties." ;
+ rdfs:domain nie:InformationElement ;
+ rdfs:label "characterSet" ;
+ rdfs:range xsd:string .
+
+ nie:rootElementOf
+ a rdf:Property ;
+ rdfs:comment "DataObjects extracted from a single data source are organized into a containment tree. This property links the root of that tree with the datasource it has been extracted from" ;
+ rdfs:domain nie:InformationElement ;
+ rdfs:label "rootElementOf" ;
+ rdfs:range nie:DataSource .
+
+ nie:informationElementDate
+ a rdf:Property ;
+ rdfs:comment "A point or period of time associated with an event in the lifecycle of an Information Element. A common superproperty for all date-related properties of InformationElements in the NIE Framework." ;
+ rdfs:domain nie:InformationElement ;
+ rdfs:label "informationElementDate" ;
+ rdfs:range xsd:dateTime ;
+ rdfs:subPropertyOf dc:date .
+
+ nie:legal
+ a rdf:Property ;
+ rdfs:comment "A common superproperty for all properties that point at legal information about an Information Element" ;
+ rdfs:domain nie:InformationElement ;
+ rdfs:label "legal" ;
+ rdfs:range xsd:string ;
+ rdfs:subPropertyOf dc:rights .
+
+ nie:isStoredAs
+ a rdf:Property ;
+ rdfs:comment "Links the information element with the DataObject it is stored in." ;
+ rdfs:domain nie:InformationElement ;
+ rdfs:label "isStoredAs" ;
+ rdfs:range nie:DataObject ;
+ nrl:inverseProperty nie:interpretedAs .
+
+ nie:language
+ a rdf:Property ;
+ rdfs:comment "Language the InformationElement is expressed in. This property applies to the data object in its entirety. If the data object is divisible into parts expressed in multiple languages - more specific properties should be used. Users are encouraged to use the two-letter code specified in the RFC 3066" ;
+ rdfs:domain nie:InformationElement ;
+ rdfs:label "language" ;
+ rdfs:range xsd:string ;
+ rdfs:subPropertyOf dc:language .
+
+ nie:copyright
+ a rdf:Property ;
+ rdfs:comment "Content copyright" ;
+ rdfs:label "copyright" ;
+ rdfs:range xsd:string ;
+ rdfs:subPropertyOf nie:legal , dcterms:accessRights .
+
+ nie:created
+ a rdf:Property ;
+ rdfs:comment "Date of creation of the DataObject. Note that this date refers to the creation of the DataObject itself (i.e. the physical representation). Compare with nie:contentCreated." ;
+ rdfs:domain nie:DataObject ;
+ rdfs:label "created" ;
+ rdfs:range xsd:dateTime ;
+ rdfs:subPropertyOf dcterms:created, nao:created .
+
+ nie:lastModified
+ a rdf:Property ;
+ rdfs:comment "Last modification date of the DataObject. Note that this date refers to the modification of the DataObject itself (i.e. the physical representation). Compare with nie:contentLastModified." ;
+ rdfs:domain nie:DataObject ;
+ rdfs:label "lastModified" ;
+ rdfs:range xsd:dateTime ;
+ rdfs:subPropertyOf dc:date, nao:lastModified .
+
+ nie:mimeType
+ a rdf:Property ;
+ rdfs:comment "The mime type of the resource, if available. Example: \"text/plain\". See http://www.iana.org/assignments/media-types/. This property applies to data objects that can be described with one mime type. In cases where the object as a whole has one mime type, while it's parts have other mime types, or there is no mime type that can be applied to the object as a whole, but some parts of the content have mime types - use more specific properties." ;
+ rdfs:domain nie:InformationElement ;
+ rdfs:label "mimeType" ;
+ rdfs:range xsd:string .
+
+ nie:version
+ a rdf:Property ;
+ rdfs:comment "The current version of the given data object. Exact semantics is unspecified at this level. Use more specific subproperties if needed." ;
+ rdfs:domain nie:InformationElement ;
+ rdfs:label "version" ;
+ rdfs:range xsd:string ;
+ rdfs:subPropertyOf dcterms:hasVersion .
+
+ nie:interpretedAs
+ a rdf:Property ;
+ rdfs:comment "Links the DataObject with the InformationElement it is interpreted as." ;
+ rdfs:domain nie:DataObject ;
+ rdfs:label "interpretedAs" ;
+ rdfs:range nie:InformationElement ;
+ nrl:inverseProperty nie:isStoredAs .
+
+ nie:links
+ a rdf:Property ;
+ rdfs:comment "A linking relation. A piece of content links/mentions a piece of data" ;
+ rdfs:domain nie:InformationElement ;
+ rdfs:label "links" ;
+ rdfs:range nie:DataObject ;
+ rdfs:subPropertyOf nie:relatedTo .
+
+ nie:InformationElement
+ a rdfs:Class ;
+ rdfs:comment "A unit of content the user works with. This is a superclass for all interpretations of a DataObject." ;
+ rdfs:label "InformationElement" ;
+ rdfs:subClassOf rdfs:Resource .
+
+ nie:DataSource
+ a rdfs:Class ;
+ rdfs:comment "A superclass for all entities from which DataObjects can be extracted. Each entity represents a native application or some other system that manages information that may be of interest to the user of the Semantic Desktop. Subclasses may include FileSystems, Mailboxes, Calendars, websites etc. The exact choice of subclasses and their properties is considered application-specific. Each data extraction application is supposed to provide it's own DataSource ontology. Such an ontology should contain supported data source types coupled with properties necessary for the application to gain access to the data sources. (paths, urls, passwords etc...)" ;
+ rdfs:label "DataSource" ;
+ rdfs:subClassOf rdfs:Resource .
+
+ nie:generator
+ a rdf:Property ;
+ rdfs:comment "Software used to \"generate\" the contents. E.g. a word processor name." ;
+ rdfs:domain nie:InformationElement ;
+ rdfs:label "generator" ;
+ rdfs:range xsd:string .
+
+ nie:isPartOf
+ a rdf:Property ;
+ rdfs:comment "Generic property used to express containment relationships between DataObjects. NIE extensions are encouraged to provide more specific subproperties of this one. It is advisable for actual instances of DataObjects to use those specific subproperties. Note to the developers: Please be aware of the distinction between containment relation and provenance. The isPartOf relation models physical containment, a nie:DataObject (e.g. an nfo:Attachment) is a 'physical' part of an nie:InformationElement (a nmo:Message). Also, please note the difference between physical containment (isPartOf) and logical containment (isLogicalPartOf) the former has more strict meaning. They may occur independently of each other." ;
+ rdfs:domain nie:DataObject ;
+ rdfs:label "isPartOf" ;
+ rdfs:range nie:InformationElement ;
+ rdfs:subPropertyOf dcterms:isPartOf ;
+ nrl:inverseProperty nie:hasPart .
+
+ nie:disclaimer
+ a rdf:Property ;
+ rdfs:comment "A disclaimer" ;
+ rdfs:label "disclaimer" ;
+ rdfs:range xsd:string ;
+ rdfs:subPropertyOf nie:legal .
+
+ nie:generatorOption
+ a rdf:Property ;
+ rdfs:comment "A common superproperty for all settings used by the generating software. This may include compression settings, algorithms, autosave, interlaced/non-interlaced etc. Note that this property has no range specified and therefore should not be used directly. Always use more specific properties." ;
+ rdfs:domain nie:InformationElement ;
+ rdfs:label "generatorOption" .
+
+ nie:description
+ a rdf:Property ;
+ rdfs:comment "A textual description of the resource. This property may be used for any metadata fields that provide some meta-information or comment about a resource in the form of a passage of text. This property is not to be confused with nie:plainTextContent. Use more specific subproperties wherever possible." ;
+ rdfs:domain nie:InformationElement ;
+ rdfs:label "description" ;
+ rdfs:range xsd:string ;
+ rdfs:subPropertyOf dc:description, nao:description .
+
+ nie:contentCreated
+ a rdf:Property ;
+ rdfs:comment "The date of the content creation. This may not necessarily be equal to the date when the DataObject (i.e. the physical representation) itself was created. Compare with nie:created property." ;
+ rdfs:domain nie:InformationElement ;
+ rdfs:label "contentCreated" ;
+ rdfs:range xsd:dateTime ;
+ rdfs:subPropertyOf nie:informationElementDate, nao:created ;
+ nrl:maxCardinality "1" .
+
+ nie:title
+ a rdf:Property ;
+ rdfs:comment "Name given to an InformationElement" ;
+ rdfs:domain nie:InformationElement ;
+ rdfs:label "title" ;
+ rdfs:range xsd:string ;
+ rdfs:subPropertyOf dc:title, nao:prefLabel .
+
+ nie:lastRefreshed
+ a rdf:Property ;
+ rdfs:comment "Date when information about this data object was retrieved (for the first time) or last refreshed from the data source. This property is important for metadata extraction applications that don't receive any notifications of changes in the data source and have to poll it regularly. This may lead to information becoming out of date. In these cases this property may be used to determine the age of data, which is an important element of it's dependability. " ;
+ rdfs:domain nie:DataObject ;
+ rdfs:label "lastRefreshed" ;
+ rdfs:range xsd:dateTime ;
+ rdfs:subPropertyOf dc:date ;
+ nrl:maxCardinality "1" .
+
+ nie:dataSource
+ a rdf:Property ;
+ rdfs:comment "Marks the provenance of a DataObject, what source does a data object come from." ;
+ rdfs:domain nie:DataObject ;
+ rdfs:label "dataSource" ;
+ rdfs:range nie:DataSource ;
+ rdfs:subPropertyOf dc:source ;
+ nrl:minCardinality "1" .
+
+ nie:DataObject
+ a rdfs:Class ;
+ rdfs:comment "A unit of data that is created, annotated and processed on the user desktop. It represents a native structure the user works with. The usage of the term 'native' is important. It means that a DataObject can be directly mapped to a data structure maintained by a native application. This may be a file, a set of files or a part of a file. The granularity depends on the user. This class is not intended to be instantiated by itself. Use more specific subclasses." ;
+ rdfs:label "DataObject" ;
+ rdfs:subClassOf rdfs:Resource .
+
+ nie:depends
+ a rdf:Property ;
+ rdfs:comment "Dependency relation. A piece of content depends on another piece of data in order to be properly understood/used/interpreted." ;
+ rdfs:label "depends" ;
+ rdfs:range nie:DataObject ;
+ rdfs:subPropertyOf nie:relatedTo .
+
+ nie:contentLastModified
+ a rdf:Property ;
+ rdfs:comment "The date of the last modification of the original content (not its corresponding DataObject or local copy). Compare with nie:lastModified." ;
+ rdfs:domain nie:InformationElement ;
+ rdfs:label "contentLastModified" ;
+ rdfs:range xsd:dateTime ;
+ rdfs:subPropertyOf nie:informationElementDate, nao:lastModified ;
+ nrl:maxCardinality "1" .
+
+ nie:keyword
+ a rdf:Property ;
+ rdfs:comment "Adapted DublinCore: The topic of the content of the resource, as keyword. No sentences here. Recommended best practice is to select a value from a controlled vocabulary or formal classification scheme. " ;
+ rdfs:domain nie:InformationElement ;
+ rdfs:label "keyword" ;
+ rdfs:range xsd:string .
+
+ nie:isLogicalPartOf
+ a rdf:Property ;
+ rdfs:comment "Generic property used to express 'logical' containment relationships between DataObjects. NIE extensions are encouraged to provide more specific subproperties of this one. It is advisable for actual instances of InformationElement to use those specific subproperties. Note the difference between 'physical' containment (isPartOf) and logical containment (isLogicalPartOf)" ;
+ rdfs:domain nie:InformationElement ;
+ rdfs:label "isLogicalPartOf" ;
+ rdfs:range nie:InformationElement ;
+ rdfs:subPropertyOf dcterms:isPartOf ;
+ nrl:inverseProperty nie:hasLogicalPart .
+
+ nie:identifier
+ a rdf:Property ;
+ rdfs:comment "An unambiguous reference to the InformationElement within a given context. Recommended best practice is to identify the resource by means of a string conforming to a formal identification system." ;
+ rdfs:domain nie:InformationElement ;
+ rdfs:label "identifier" ;
+ rdfs:range xsd:string ;
+ rdfs:subPropertyOf nao:identifier , dc:identifier .
+
+ nie:plainTextContent
+ a rdf:Property ;
+ rdfs:comment "Plain-text representation of the content of a InformationElement with all markup removed. The main purpose of this property is full-text indexing and search. Its exact content is considered application-specific. The user can make no assumptions about what is and what is not contained within. Applications should use more specific properties wherever possible." ;
+ rdfs:domain nie:InformationElement ;
+ rdfs:label "plainTextContent" ;
+ rdfs:range xsd:string .
+
+ nie:comment
+ a rdf:Property ;
+ rdfs:comment "A user comment about an InformationElement." ;
+ rdfs:domain nie:InformationElement ;
+ rdfs:label "comment" ;
+ rdfs:range xsd:string .
+
+ nie:relatedTo
+ a rdf:Property ;
+ rdfs:comment "A common superproperty for all relations between a piece of content and other pieces of data (which may be interpreted as other pieces of content)." ;
+ rdfs:domain nie:InformationElement ;
+ rdfs:label "relatedTo" ;
+ rdfs:range nie:DataObject ;
+ rdfs:subPropertyOf dc:relation .
+
+ nie:contentSize
+ a rdf:Property ;
+ rdfs:comment "The size of the content. This property can be used whenever the size of the content of an InformationElement differs from the size of the DataObject. (e.g. because of compression, encoding, encryption or any other representation issues). The contentSize in expressed in bytes." ;
+ rdfs:domain nie:InformationElement ;
+ rdfs:label "contentSize" ;
+ rdfs:range xsd:integer .
+
+ nie:license
+ a rdf:Property ;
+ rdfs:comment "Terms and intellectual property rights licensing conditions." ;
+ rdfs:label "license" ;
+ rdfs:range xsd:string ;
+ rdfs:subPropertyOf dcterms:license , nie:legal .
+
+ nie:subject
+ a rdf:Property ;
+ rdfs:comment "An overall topic of the content of a InformationElement" ;
+ rdfs:domain nie:InformationElement ;
+ rdfs:label "subject" ;
+ rdfs:range xsd:string ;
+ rdfs:subPropertyOf dc:subject .
+
+ nie:coreGraph
+ a rdf:Property ;
+ rdfs:comment "Connects the data object with the graph that contains information about it." ;
+ rdfs:domain nie:DataObject ;
+ rdfs:label "coreGraph" ;
+ rdfs:range nrl:InstanceBase .
+
+ nie:hasPart
+ a rdf:Property ;
+ rdfs:comment "Generic property used to express 'physical' containment relationships between DataObjects. NIE extensions are encouraged to provide more specific subproperties of this one. It is advisable for actual instances of DataObjects to use those specific subproperties. Note to the developers: Please be aware of the distinction between containment relation and provenance. The hasPart relation models physical containment, an InformationElement (a nmo:Message) can have a 'physical' part (an nfo:Attachment). Also, please note the difference between physical containment (hasPart) and logical containment (hasLogicalPart) the former has more strict meaning. They may occur independently of each other." ;
+ rdfs:domain nie:InformationElement ;
+ rdfs:label "hasPart" ;
+ rdfs:range nie:DataObject ;
+ rdfs:subPropertyOf nie:relatedTo , dcterms:hasPart ;
+ nrl:inverseProperty nie:isPartOf .
+
+ nie:licenseType
+ a rdf:Property ;
+ rdfs:comment "The type of the license. Possible values for this field may include \"GPL\", \"BSD\", \"Creative Commons\" etc." ;
+ rdfs:label "licenseType" ;
+ rdfs:range xsd:string ;
+ rdfs:subPropertyOf nie:legal .
+
+ nie:byteSize
+ a rdf:Property ;
+ rdfs:comment "The overall size of the data object in bytes. That means the space taken by the DataObject in its container, and not the size of the content that is of interest to the user. For cases where the content size is different (e.g. in compressed files the content is larger, in messages the content excludes headings and is smaller) use more specific properties, not necessarily subproperties of this one." ;
+ rdfs:domain nie:DataObject ;
+ rdfs:label "byteSize" ;
+ rdfs:range xsd:integer ;
+ nrl:maxCardinality "1" .
+
+ nie:hasLogicalPart
+ a rdf:Property ;
+ rdfs:comment "Generic property used to express 'logical' containment relationships between InformationElements. NIE extensions are encouraged to provide more specific subproperties of this one. It is advisable for actual instances of InformationElement to use those specific subproperties. Note the difference between 'physical' containment (hasPart) and logical containment (hasLogicalPart)" ;
+ rdfs:domain nie:InformationElement ;
+ rdfs:label "hasLogicalPart" ;
+ rdfs:range nie:InformationElement ;
+ rdfs:subPropertyOf dcterms:hasPart ;
+ nrl:inverseProperty nie:isLogicalPartOf .
+
+ nie:url
+ a rdf:Property ;
+ rdfs:comment "URL of a DataObject. It points to the location of the object. A typial usage is FileDataObject. In cases where creating a simple file:// or http:// URL for a file is difficult (e.g. for files inside compressed archives) the applications are encouraged to use conventions defined by Apache Commons VFS Project at http://jakarta.apache.org/ commons/ vfs/ filesystems.html." ;
+ rdfs:domain nie:DataObject ;
+ rdfs:label "url" ;
+ rdfs:range rdfs:Resource .
+}
+
+<http://www.semanticdesktop.org/ontologies/2007/01/19/nie_metadata#> {nie: a nrl:Ontology ;
+ nao:creator <http://www.dfki.uni-kl.de/~mylka> ;
+ nao:hasDefaultNamespace
+ "http://www.semanticdesktop.org/ontologies/2007/01/19/nie#" ;
+ nao:hasDefaultNamespaceAbbreviation
+ "nie" ;
+ nao:lastModified "2009-11-12T07:45:58Z" ;
+ nao:status "Unstable" ;
+ nao:updatable "0 " ;
+ nao:version "Revision-9" .
+
+ <http://www.semanticdesktop.org/ontologies/2007/01/19/nie_metadata#>
+ a nrl:GraphMetadata ;
+ nrl:coreGraphMetadataFor
+ nie: .
+}
+
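The nie: graph above declares the two core classes everything else builds on (nie:InformationElement and nie:DataObject, linked by the inverse pair nie:interpretedAs / nie:isStoredAs) plus the generic properties shared by the more specific ontologies that follow. As a rough sketch of how these declarations can be inspected programmatically - not part of this merge, and assuming a recent rdflib build that parses TriG directly (the ontology2code script further down shells out to rapper instead, for older rdflib versions):

import rdflib
from rdflib import RDFS
from rdflib.namespace import Namespace

NIE = Namespace("http://www.semanticdesktop.org/ontologies/2007/01/19/nie#")

# Load the named graphs from the trig file (path assumed to follow the layout
# of the other .trig files in this branch) and list every property that is
# declared with nie:InformationElement as its rdfs:domain.
g = rdflib.ConjunctiveGraph()
g.parse("extra/ontology/nie.trig", format="trig")

for prop in sorted(g.subjects(RDFS.domain, NIE.InformationElement)):
    print("%s (label: %s)" % (prop, g.value(prop, RDFS.label)))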
=== added file 'extra/ontology/nmm.trig'
--- extra/ontology/nmm.trig 1970-01-01 00:00:00 +0000
+++ extra/ontology/nmm.trig 2011-10-19 08:09:50 +0000
@@ -0,0 +1,315 @@
+#
+# Copyright (c) 2009-2010 Evgeny Egorochkin <phreedom.stdin@xxxxxxxxx>
+# Copyright (c) 2010 Sebastian Trueg <trueg@xxxxxxx>
+# Copyright (c) 2010 Andrew Lake <jamboarder@xxxxxxxxx>
+# All rights reserved, licensed under either CC-BY or BSD.
+#
+# You are free:
+# * to Share - to copy, distribute and transmit the work
+# * to Remix - to adapt the work
+# Under the following conditions:
+# * Attribution - You must attribute the work in the manner specified by the author
+# or licensor (but not in any way that suggests that they endorse you or your use
+# of the work).
+#
+# Redistribution and use in source and binary forms, with or without modification,
+# are permitted provided that the following conditions are met:
+# * Redistributions of source code must retain the above copyright notice, this
+# list of conditions and the following disclaimer.
+# * Redistributions in binary form must reproduce the above copyright notice, this
+# list of conditions and the following disclaimer in the documentation and/or
+# other materials provided with the distribution.
+# * Neither the names of the authors nor the names of contributors may
+# be used to endorse or promote products derived from this ontology without
+# specific prior written permission.
+#
+# THIS ONTOLOGY IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
+# IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
+# OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
+# IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
+# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
+# NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
+# THIS ONTOLOGY, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+#
+
+@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
+@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
+@prefix nrl: <http://www.semanticdesktop.org/ontologies/2007/08/15/nrl#> .
+@prefix nao: <http://www.semanticdesktop.org/ontologies/2007/08/15/nao#> .
+@prefix nco: <http://www.semanticdesktop.org/ontologies/2007/03/22/nco#> .
+@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
+@prefix nie: <http://www.semanticdesktop.org/ontologies/2007/01/19/nie#> .
+@prefix nfo: <http://www.semanticdesktop.org/ontologies/2007/03/22/nfo#> .
+@prefix nmm: <http://www.semanticdesktop.org/ontologies/2009/02/19/nmm#> .
+
+nmm: {
+
+ nmm:MusicPiece
+ a rdfs:Class ;
+ rdfs:subClassOf nfo:Media ;
+ rdfs:label "music" ;
+ rdfs:comment "Used to assign music-specific properties such a BPM to video and audio" .
+
+ nmm:musicAlbum
+ a rdf:Property ;
+ rdfs:subPropertyOf nie:isLogicalPartOf ;
+ rdfs:label "album" ;
+ rdfs:comment "Album the music belongs to" ;
+ rdfs:domain nmm:MusicPiece ;
+ rdfs:range nmm:MusicAlbum ;
+ nrl:maxCardinality 1 .
+
+ nmm:beatsPerMinute
+ a rdf:Property ;
+ rdfs:label "Beats per minute" ;
+ rdfs:comment "beats per minute" ;
+ rdfs:domain nmm:MusicPiece ;
+ rdfs:range xsd:integer .
+
+ nmm:performer
+ a rdf:Property ;
+ rdfs:subPropertyOf nco:contributor ;
+ rdfs:label "performer" ;
+ rdfs:comment "Performer" ;
+ rdfs:domain nmm:MusicPiece ;
+ rdfs:range nco:Contact .
+
+ nmm:composer
+ a rdf:Property ;
+ rdfs:subPropertyOf nco:contributor ;
+ rdfs:label "composer" ;
+ rdfs:comment "Composer" ;
+ rdfs:domain nmm:MusicPiece ;
+ rdfs:range nco:Contact .
+
+ nmm:lyricist
+ a rdf:Property ;
+ rdfs:subPropertyOf nco:contributor ;
+ rdfs:label "lyricist";
+ rdfs:comment "Lyricist";
+ rdfs:domain nmm:MusicPiece ;
+ rdfs:range nco:Contact .
+
+ nmm:trackNumber
+ a rdf:Property ;
+ rdfs:label "track number" ;
+ rdfs:comment "Track number of the music in its album" ;
+ rdfs:domain nmm:MusicPiece ;
+ rdfs:range xsd:integer .
+
+ nmm:musicBrainzTrackID
+ a rdf:Property ;
+ rdfs:subPropertyOf nie:identifier ;
+ rdfs:label "musicbrainz track ID" ;
+ rdfs:comment "MusicBrainz track ID" ;
+ rdfs:domain nmm:MusicPiece ;
+ rdfs:range xsd:integer .
+
+ nmm:trackGain
+ a rdf:Property ;
+ rdfs:label "track gain" ;
+ rdfs:comment "ReplayGain track gain" ;
+ rdfs:domain nmm:MusicPiece .
+
+ nmm:trackPeakGain
+ a rdf:Property ;
+ rdfs:label "track peak gain" ;
+ rdfs:comment "ReplayGain track peak gain" ;
+ rdfs:domain nmm:MusicPiece .
+
+
+ nmm:MusicAlbum
+ a rdfs:Class ;
+ rdfs:subClassOf nfo:MediaList ;
+ rdfs:label "music album" ;
+ rdfs:comment "The music album as provided by the publisher. Not to be confused with media lists or collections." .
+
+ nmm:musicCDIdentifier
+ a rdf:Property ;
+ rdfs:subPropertyOf nie:identifier ;
+ rdfs:label "music CD identifier" ;
+ rdfs:comment "Music CD identifier to for databases like FreeDB.org. This property is intended for music that comes from a CD, so that the CD can be identified in external databases." ;
+ rdfs:domain nmm:MusicAlbum ;
+ rdfs:range xsd:string .
+
+ nmm:internationalStandardRecordingCode
+ a rdf:Property ;
+ rdfs:subPropertyOf nie:identifier ;
+ rdfs:label "international standard recording code" ;
+ rdfs:comment "ISRC ID. Format: 'CC-XXX-YY-NNNNN'" ;
+ rdfs:domain nmm:MusicAlbum ;
+ rdfs:range xsd:string .
+
+ nmm:musicBrainzAlbumID
+ a rdf:Property ;
+ rdfs:subPropertyOf nie:identifier ;
+ rdfs:label "musicbrainz album ID" ;
+ rdfs:comment "MusicBrainz album ID" ;
+ rdfs:domain nmm:MusicAlbum ;
+ rdfs:range xsd:string .
+
+ nmm:albumGain
+ a rdf:Property ;
+ rdfs:label "album gain" ;
+ rdfs:comment "ReplayGain album(audiophile) gain" ;
+ rdfs:domain nmm:MusicAlbum .
+
+ nmm:albumPeakGain
+ a rdf:Property ;
+ rdfs:label "album peak gain" ;
+ rdfs:comment "ReplayGain album(audiophile) peak gain" ;
+ rdfs:domain nmm:MusicAlbum .
+
+ nmm:genre
+ a rdf:Property ;
+ rdfs:label "genre" ;
+ rdfs:comment "Genre" ;
+ rdfs:domain nfo:Media ;
+ rdfs:range xsd:string .
+
+ nmm:artwork
+ a rdf:Property ;
+ rdfs:label "artwork" ;
+ rdfs:comment "Associated Artwork" ;
+ rdfs:domain nfo:Media ;
+ rdfs:range nfo:Image .
+
+ nmm:Movie
+ a rdfs:Class ;
+ rdfs:subClassOf nfo:Video ;
+ rdfs:label "movie" ;
+ rdfs:comment "A Movie" .
+
+ nmm:TVShow
+ a rdfs:Class ;
+ rdfs:subClassOf nfo:Video ;
+ rdfs:label "tv show" ;
+ rdfs:comment "A TV Show" .
+
+ nmm:TVSeries
+ a rdfs:Class ;
+ rdfs:subClassOf nie:InformationElement ;
+ rdfs:label "tv series" ;
+ rdfs:comment "A TV Series has multiple seasons and episodes" .
+
+ nmm:series
+ a rdf:Property ;
+ rdfs:label "series" ;
+ rdfs:comment "series" ;
+ nrl:maxCardinality 1 ;
+ rdfs:domain nmm:TVShow ;
+ rdfs:range nmm:TVSeries ;
+ nrl:inverseProperty nmm:hasEpisode .
+
+ nmm:hasEpisode
+ a rdf:Property ;
+ rdfs:label "has episode" ;
+ rdfs:comment "A TVSeries has many episodes" ;
+ rdfs:domain nmm:TVSeries ;
+ rdfs:range nmm:TVShow ;
+ nrl:inverseProperty nmm:series .
+
+ nmm:season
+ a rdf:Property ;
+ rdfs:label "Season" ;
+ nrl:maxCardinality 1 ;
+ rdfs:domain nmm:TVShow ;
+ rdfs:range xsd:integer .
+
+ nmm:episodeNumber
+ a rdf:Property ;
+ rdfs:label "Episode number" ;
+ nrl:maxCardinality 1 ;
+ rdfs:domain nmm:TVShow ;
+ rdfs:range xsd:integer .
+
+ nmm:synopsis a rdf:Property ;
+ rdfs:label "synopsis" ;
+ rdfs:comment "Long form description of video content (plot, premise, etc.)" ;
+ nrl:maxCardinality 1 ;
+ rdfs:domain nfo:Video ;
+ rdfs:range xsd:string ;
+ rdfs:subPropertyOf nie:description .
+
+ nmm:audienceRating a rdf:Property ;
+ rdfs:label "audience rating" ;
+ rdfs:comment "Rating used to identify appropriate audience for video (MPAA rating, BBFC, FSK, TV content rating, etc.)" ;
+ rdfs:domain nfo:Video ;
+ rdfs:range xsd:string ;
+ rdfs:subPropertyOf nao:rating .
+
+ nmm:writer
+ a rdf:Property ;
+ rdfs:subPropertyOf nco:contributor ;
+ rdfs:label "writer" ;
+ rdfs:comment "Writer" ;
+ rdfs:domain nfo:Video ;
+ rdfs:range nco:Contact .
+
+ nmm:director
+ a rdf:Property ;
+ rdfs:subPropertyOf nco:contributor ;
+ rdfs:label "director" ;
+ rdfs:comment "Director" ;
+ rdfs:domain nfo:Video ;
+ rdfs:range nco:Contact .
+
+ nmm:producer
+ a rdf:Property ;
+ rdfs:subPropertyOf nco:contributor ;
+ rdfs:label "producer" ;
+ rdfs:comment "Producer" ;
+ rdfs:domain nfo:Video ;
+ rdfs:range nco:Contact .
+
+ nmm:actor
+ a rdf:Property ;
+ rdfs:subPropertyOf nco:contributor ;
+ rdfs:label "actor" ;
+ rdfs:comment "Actor" ;
+ rdfs:domain nfo:Video ;
+ rdfs:range nco:Contact .
+
+ nmm:cinematographer
+ a rdf:Property ;
+ rdfs:subPropertyOf nco:contributor ;
+ rdfs:label "cinematographer" ;
+ rdfs:domain nfo:Video ;
+ rdfs:range nco:Contact .
+
+ nmm:assistantDirector
+ a rdf:Property ;
+ rdfs:subPropertyOf nco:contributor ;
+ rdfs:label "assistant director" ;
+ rdfs:domain nfo:Video ;
+ rdfs:range nco:Contact .
+
+ nmm:releaseDate
+ a rdf:Property ;
+ rdfs:subPropertyOf nie:informationElementDate ;
+ rdfs:label "release date" ;
+ rdfs:comment "The date the media was released." ;
+ rdfs:domain nfo:Media ;
+ rdfs:range xsd:dateTime .
+}
+
+<http://www.semanticdesktop.org/ontologies/2009/02/19/nmm/metadata>
+{
+ nmm:
+ a nrl:DocumentGraph , nrl:Ontology ;
+ nao:hasDefaultNamespace "http://www.semanticdesktop.org/ontologies/2009/02/19/nmm#" ;
+ nao:hasDefaultNamespaceAbbreviation "nmm" ;
+ nao:lastModified "2010-02-15T08:34:29" ;
+ nao:serializationLanguage "TriG" ;
+ nao:status "Testing" ;
+ nrl:updatable "0" ;
+ nao:version "2" .
+
+ <http://www.semanticdesktop.org/ontologies/2009/02/19/nmm/metadata>
+ a nrl:GraphMetadata , nrl:DocumentGraph ;
+ nao:serializationLanguage "TriG" ;
+ nrl:coreGraphMetadataFor nmm: .
+}
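The nmm: graph extends nfo:Media and nfo:Video with music and video metadata. A minimal sketch, not part of this merge and using invented URIs, of how a track could be described with the classes and properties declared above (assuming a recent rdflib):

from rdflib import Graph, Literal, Namespace, URIRef, RDF

NMM = Namespace("http://www.semanticdesktop.org/ontologies/2009/02/19/nmm#")
NIE = Namespace("http://www.semanticdesktop.org/ontologies/2007/01/19/nie#")

g = Graph()
track = URIRef("file:///home/user/Music/example.ogg")  # hypothetical file
album = URIRef("urn:example:album:1")                   # hypothetical album node

g.add((track, RDF.type, NMM.MusicPiece))
g.add((track, NIE.title, Literal("Example Song")))
g.add((track, NMM.trackNumber, Literal(3)))   # plain int literal maps to xsd:integer
g.add((track, NMM.musicAlbum, album))
g.add((album, RDF.type, NMM.MusicAlbum))

# Serialization output type depends on the rdflib version (str vs. bytes).
print(g.serialize(format="turtle"))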
=== added file 'extra/ontology/nmo.trig'
--- extra/ontology/nmo.trig 1970-01-01 00:00:00 +0000
+++ extra/ontology/nmo.trig 2011-10-19 08:09:50 +0000
@@ -0,0 +1,298 @@
+#
+# Copyright (c) 2007 NEPOMUK Consortium
+# Copyright (c) 2009 Sebastian Trueg <trueg@xxxxxxx>
+# All rights reserved, licensed under either CC-BY or BSD.
+#
+# You are free:
+# * to Share - to copy, distribute and transmit the work
+# * to Remix - to adapt the work
+# Under the following conditions:
+# * Attribution - You must attribute the work in the manner specified by the author
+# or licensor (but not in any way that suggests that they endorse you or your use
+# of the work).
+#
+# Redistribution and use in source and binary forms, with or without modification,
+# are permitted provided that the following conditions are met:
+# * Redistributions of source code must retain the above copyright notice, this
+# list of conditions and the following disclaimer.
+# * Redistributions in binary form must reproduce the above copyright notice, this
+# list of conditions and the following disclaimer in the documentation and/or
+# other materials provided with the distribution.
+# * Neither the names of the authors nor the names of contributors may
+# be used to endorse or promote products derived from this ontology without
+# specific prior written permission.
+#
+# THIS ONTOLOGY IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
+# IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
+# OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
+# IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
+# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
+# NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
+# THIS ONTOLOGY, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+#
+
+@prefix exif: <http://www.kanzaki.com/ns/exif#> .
+@prefix nid3: <http://www.semanticdesktop.org/ontologies/2007/05/10/nid3#> .
+@prefix nrl: <http://www.semanticdesktop.org/ontologies/2007/08/15/nrl#> .
+@prefix nfo: <http://www.semanticdesktop.org/ontologies/2007/03/22/nfo#> .
+@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
+@prefix tmo: <http://www.semanticdesktop.org/ontologies/2008/05/20/tmo#> .
+@prefix protege: <http://protege.stanford.edu/system#> .
+@prefix nmo: <http://www.semanticdesktop.org/ontologies/2007/03/22/nmo#> .
+@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
+@prefix nexif: <http://www.semanticdesktop.org/ontologies/2007/05/10/nexif#> .
+@prefix ncal: <http://www.semanticdesktop.org/ontologies/2007/04/02/ncal#> .
+@prefix pimo: <http://www.semanticdesktop.org/ontologies/2007/11/01/pimo#> .
+@prefix dcterms: <http://purl.org/dc/terms/> .
+@prefix nao: <http://www.semanticdesktop.org/ontologies/2007/08/15/nao#> .
+@prefix geo: <http://www.w3.org/2003/01/geo/wgs84_pos#> .
+@prefix dc: <http://purl.org/dc/elements/1.1/> .
+@prefix nie: <http://www.semanticdesktop.org/ontologies/2007/01/19/nie#> .
+@prefix nco: <http://www.semanticdesktop.org/ontologies/2007/03/22/nco#> .
+@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
+
+nmo: {nmo:IMMessage
+ a rdfs:Class ;
+ rdfs:comment "A message sent with Instant Messaging software." ;
+ rdfs:label "IMMessage" ;
+ rdfs:subClassOf nmo:Message .
+
+ nmo:Email
+ a rdfs:Class ;
+ rdfs:comment "An email." ;
+ rdfs:label "Email" ;
+ rdfs:subClassOf nmo:Message .
+
+ nmo:messageSubject
+ a rdf:Property ;
+ rdfs:comment "The subject of a message" ;
+ rdfs:domain nmo:Message ;
+ rdfs:label "messageSubject" ;
+ rdfs:range xsd:string ;
+ rdfs:subPropertyOf nie:subject ;
+ nrl:maxCardinality 1 .
+
+ nmo:MessageHeader
+ a rdfs:Class ;
+ rdfs:comment "An arbitrary message header." ;
+ rdfs:label "MessageHeader" ;
+ rdfs:subClassOf rdfs:Resource .
+
+ nmo:references
+ a rdf:Property ;
+ rdfs:comment "Signifies that a message references another message. This property is a generic one. See RFC 2822 Sec. 3.6.4" ;
+ rdfs:domain nmo:Message ;
+ rdfs:label "references" ;
+ rdfs:range nmo:Message .
+
+ nmo:to
+ a rdf:Property ;
+ rdfs:comment "The primary intended recipient of an email." ;
+ rdfs:domain nmo:Email ;
+ rdfs:label "to" ;
+ rdfs:range nco:Contact ;
+ rdfs:subPropertyOf nmo:primaryRecipient .
+
+ nmo:cc
+ a rdf:Property ;
+ rdfs:comment "A Contact that is to receive a cc of the email. A cc (carbon copy) is a copy of an email message whose recipient appears on the recipient list, so that all other recipients are aware of it." ;
+ rdfs:domain nmo:Email ;
+ rdfs:label "cc" ;
+ rdfs:range nco:Contact ;
+ rdfs:subPropertyOf nmo:secondaryRecipient .
+
+ nmo:from
+ a rdf:Property ;
+ rdfs:comment "The sender of the message" ;
+ rdfs:domain nmo:Message ;
+ rdfs:label "from" ;
+ rdfs:range nco:Contact .
+
+ nmo:isRead
+ a rdf:Property ;
+ rdfs:comment "A flag that states the fact that a MailboxDataObject has been read." ;
+ rdfs:domain nmo:MailboxDataObject ;
+ rdfs:label "isRead" ;
+ rdfs:range xsd:boolean .
+
+ nmo:Mailbox
+ a rdfs:Class ;
+ rdfs:comment "A mailbox - container for MailboxDataObjects." ;
+ rdfs:label "Mailbox" ;
+ rdfs:subClassOf nie:InformationElement .
+
+ nmo:MailboxDataObject
+ a rdfs:Class ;
+ rdfs:comment "An entity encountered in a mailbox. Most common interpretations for such an entity include Message or Folder" ;
+ rdfs:label "MailboxDataObject" ;
+ rdfs:subClassOf nie:DataObject .
+
+ nmo:messageHeader
+ a rdf:Property ;
+ rdfs:comment "Links the message wiith an arbitrary message header." ;
+ rdfs:domain nmo:Message ;
+ rdfs:label "messageHeader" ;
+ rdfs:range nmo:MessageHeader ;
+ nrl:maxCardinality "1" .
+
+ nmo:primaryRecipient
+ a rdf:Property ;
+ rdfs:comment "The primary intended recipient of a message." ;
+ rdfs:domain nmo:Message ;
+ rdfs:label "primaryRecipient" ;
+ rdfs:range nco:Contact ;
+ rdfs:subPropertyOf nmo:recipient .
+
+ nmo:inReplyTo
+ a rdf:Property ;
+ rdfs:comment "Signifies that a message is a reply to another message. This feature is commonly used to link messages into conversations. Note that it is more specific than nmo:references. See RFC 2822 sec. 3.6.4" ;
+ rdfs:domain nmo:Message ;
+ rdfs:label "inReplyTo" ;
+ rdfs:range nmo:Message ;
+ rdfs:subPropertyOf nmo:references .
+
+ nmo:messageId
+ a rdf:Property ;
+ rdfs:comment "An identifier of a message. This property has been inspired by the message-id property defined in RFC 2822, Sec. 3.6.4. It should be used for all kinds of identifiers used by various messaging applications to connect multiple messages into conversations. For email messageids, values are according to RFC2822/sec 3.6.4 and the literal value in RDF must include the brackets." ;
+ rdfs:domain nmo:Message ;
+ rdfs:label "messageId" ;
+ rdfs:range xsd:string ;
+ rdfs:subPropertyOf nie:identifier .
+
+ nmo:receivedDate
+ a rdf:Property ;
+ rdfs:comment "Date when this message was received." ;
+ rdfs:domain nmo:Message ;
+ rdfs:label "receivedDate" ;
+ rdfs:range xsd:dateTime ;
+ rdfs:subPropertyOf dc:date ;
+ nrl:maxCardinality 1 .
+
+ nmo:MimeEntity
+ a rdfs:Class ;
+ rdfs:comment "A MIME entity, as defined in RFC2045, Section 2.4." ;
+ rdfs:label "MimeEntity" ;
+ rdfs:subClassOf nie:InformationElement .
+
+ nmo:replyTo
+ a rdf:Property ;
+ rdfs:comment "An address where the reply should be sent." ;
+ rdfs:domain nmo:Message ;
+ rdfs:label "replyTo" ;
+ rdfs:range nco:Contact .
+
+ nmo:recipient
+ a rdf:Property ;
+ rdfs:comment "A common superproperty for all properties that link a message with its recipients. Please don't use this property directly." ;
+ rdfs:domain nmo:Message ;
+ rdfs:label "recipient" ;
+ rdfs:range nco:Contact .
+
+ nmo:bcc
+ a rdf:Property ;
+ rdfs:comment "A Contact that is to receive a bcc of the email. A Bcc (blind carbon copy) is a copy of an email message sent to a recipient whose email address does not appear in the message." ;
+ rdfs:domain nmo:Email ;
+ rdfs:label "bcc" ;
+ rdfs:range nco:Contact ;
+ rdfs:subPropertyOf nmo:secondaryRecipient .
+
+ nmo:secondaryRecipient
+ a rdf:Property ;
+ rdfs:comment "A superproperty for all \"additional\" recipients of a message." ;
+ rdfs:domain nmo:Message ;
+ rdfs:label "secondaryRecipient" ;
+ rdfs:range nco:Contact ;
+ rdfs:subPropertyOf nmo:recipient .
+
+ nmo:contentMimeType
+ a rdf:Property ;
+ rdfs:comment """Key used to store the MIME type of the content of an object when it is different from the object's main MIME type. This value can be used, for example, to model an e-mail message whose mime type is\"message/rfc822\", but whose content has type \"text/html\". If not specified, the MIME type of the
+content defaults to the value specified by the 'mimeType' property.""" ;
+ rdfs:domain nmo:Email ;
+ rdfs:label "contentMimeType" ;
+ rdfs:range xsd:string ;
+ rdfs:subPropertyOf nie:mimeType .
+
+ nmo:plainTextMessageContent
+ a rdf:Property ;
+ rdfs:comment "Plain text representation of the body of the message. For multipart messages, all parts are concatenated into the value of this property. Attachments, whose mimeTypes are different from text/plain or message/rfc822 are considered separate DataObjects and are therefore not included in the value of this property." ;
+ rdfs:domain nmo:Message ;
+ rdfs:label "plainTextMessageContent" ;
+ rdfs:range xsd:string ;
+ rdfs:subPropertyOf nie:plainTextContent .
+
+ nmo:Message
+ a rdfs:Class ;
+ rdfs:comment "A message. Could be an email, instant messanging message, SMS message etc." ;
+ rdfs:label "Message" ;
+ rdfs:subClassOf nie:InformationElement .
+
+ nmo:htmlMessageContent
+ a rdf:Property ;
+ rdfs:comment "HTML representation of the body of the message. For multipart messages, all parts are concatenated into the value of this property. Attachments, whose mimeTypes are different from text/plain or message/rfc822 are considered separate DataObjects and are therefore not included in the value of this property." ;
+ rdfs:domain nmo:Message ;
+ rdfs:label "htmlMessageContent" ;
+ rdfs:range xsd:string .
+
+ nmo:sentDate
+ a rdf:Property ;
+ rdfs:comment "Date when this message was sent." ;
+ rdfs:domain nmo:Message ;
+ rdfs:label "sentDate" ;
+ rdfs:range xsd:dateTime ;
+ rdfs:subPropertyOf dc:date , nie:contentCreated ;
+ nrl:maxCardinality "1" .
+
+ nmo:sender
+ a rdf:Property ;
+ rdfs:comment "The person or agent submitting the message to the network, if other from the one given with the nmo:from property. Defined in RFC 822 sec. 4.4.2" ;
+ rdfs:domain nmo:Message ;
+ rdfs:label "sender" ;
+ rdfs:range nco:Contact ;
+ rdfs:subPropertyOf nmo:recipient .
+
+ nmo:headerName
+ a rdf:Property ;
+ rdfs:comment "Name of the message header." ;
+ rdfs:domain nmo:MessageHeader ;
+ rdfs:label "headerName" ;
+ rdfs:range xsd:string ;
+ nrl:maxCardinality "1" .
+
+ nmo:headerValue
+ a rdf:Property ;
+ rdfs:comment "Value of the message header." ;
+ rdfs:domain nmo:MessageHeader ;
+ rdfs:label "headerValue" ;
+ rdfs:range xsd:string ;
+ nrl:maxCardinality "1" .
+
+ nmo:hasAttachment
+ a rdf:Property ;
+ rdfs:comment "Links a message with files that were sent as attachments." ;
+ rdfs:domain nmo:Message ;
+ rdfs:label "hasAttachment" ;
+ rdfs:range nfo:Attachment ;
+ rdfs:subPropertyOf nie:hasPart .
+}
+
+<http://www.semanticdesktop.org/ontologies/2007/03/22/nmo_metadata#> {<http://www.semanticdesktop.org/ontologies/2007/03/22/nmo_metadata#>
+ a nrl:GraphMetadata ;
+ nrl:coreGraphMetadataFor
+ nmo: .
+
+ nmo: a nrl:Ontology ;
+ nao:creator <http://www.dfki.uni-kl.de/~mylka> ;
+ nao:hasDefaultNamespace
+ "http://www.semanticdesktop.org/ontologies/2007/03/22/nmo#" ;
+ nao:hasDefaultNamespaceAbbreviation
+ "nmo" ;
+ nao:lastModified "2008-11-27T11:45:56.656Z" ;
+ nao:status "Unstable" ;
+ nao:updatable "0 " ;
+ nao:version "Revision-9" .
+}
+
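The nmo: graph models messages (email, IM, SMS) on top of nie:, with nco:Contact standing in for senders and recipients. A minimal sketch, not part of this merge and with invented URIs, of an nmo:Email instance as described by the properties above (assuming a recent rdflib):

from rdflib import Graph, Literal, Namespace, URIRef, RDF
from rdflib.namespace import XSD

NMO = Namespace("http://www.semanticdesktop.org/ontologies/2007/03/22/nmo#")
NCO = Namespace("http://www.semanticdesktop.org/ontologies/2007/03/22/nco#")

g = Graph()
mail = URIRef("urn:example:mail:42")          # hypothetical message URI
alice = URIRef("urn:example:contact:alice")   # hypothetical contact URI
bob = URIRef("urn:example:contact:bob")       # hypothetical contact URI

g.add((alice, RDF.type, NCO.Contact))
g.add((bob, RDF.type, NCO.Contact))
g.add((mail, RDF.type, NMO.Email))
g.add((mail, NMO.messageSubject, Literal("Meeting notes")))
g.add((mail, NMO['from'], alice))   # item access: 'from' is a Python keyword
g.add((mail, NMO.to, bob))
g.add((mail, NMO.sentDate, Literal("2011-10-19T08:09:50Z", datatype=XSD.dateTime)))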
=== added file 'extra/ontology/zg.trig'
--- extra/ontology/zg.trig 1970-01-01 00:00:00 +0000
+++ extra/ontology/zg.trig 2011-10-19 08:09:50 +0000
@@ -0,0 +1,161 @@
+# README:
+# * The Zeitgeist ontology does not spec out what a "subject" is because we
+# use Nepomuk to describe subjects. With the convention that
+# interpretation=InformationElement and manifestation=DataObject
+# FIXME: NFO might not specify an origin for a rdfs:Resource ??
+
+@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>.
+@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#>.
+@prefix xsd: <http://www.w3.org/2001/XMLSchema#>.
+@prefix nrl: <http://www.semanticdesktop.org/ontologies/2007/08/15/nrl#>.
+@prefix nao: <http://www.semanticdesktop.org/ontologies/2007/08/15/nao#>.
+@prefix nie: <http://www.semanticdesktop.org/ontologies/2007/01/19/nie#>.
+@prefix zg: <http://www.zeitgeist-project.com/ontologies/2010/01/27/zg#>.
+
+# interpretations
+
+zg:EventInterpretation
+ a rdfs:Class ;
+ rdfs:comment "Base class for event interpretations. Please do no instantiate directly, but use one of the sub classes. The interpretation of an event describes 'what happened' - fx. 'something was created' or 'something was accessed'" ;
+ rdfs:subClassOf nie:InformationElement .
+
+zg:CreateEvent
+ a rdfs:Class ;
+ rdfs:comment "Event type triggered when an item is created" ;
+ rdfs:subClassOf zg:EventInterpretation .
+
+zg:AccessEvent
+ a rdfs:Class ;
+ rdfs:comment "Event triggered by opening, accessing, or starting a resource. Most zg:AccessEvents will have an accompanying zg:LeaveEvent, but this need not always be the case" ;
+ rdfs:subClassOf zg:EventInterpretation .
+
+zg:LeaveEvent
+ a rdfs:Class ;
+ rdfs:comment "Event triggered by closing, leaving, or stopping a resource. Most zg:LeaveEvents will be following a zg:Access event, but this need not always be the case" ;
+ rdfs:subClassOf zg:EventInterpretation .
+
+zg:ModifyEvent
+ a rdfs:Class ;
+ rdfs:comment "Event triggered by modifying an existing resources. Fx. when editing and saving a file on disk or correcting a typo in the name of a contact" ;
+ rdfs:subClassOf zg:EventInterpretation .
+
+zg:DeleteEvent
+ a rdfs:Class ;
+ rdfs:comment "Event triggered because a resource has been deleted or otherwise made permanently unavailable. Fx. when deleting a file. FIXME: How about when moving to trash?" ;
+ rdfs:subClassOf zg:EventInterpretation .
+
+zg:ReceiveEvent
+ a rdfs:Class ;
+ rdfs:comment "Event triggered when something is received from an external party. The event manifestation must be set according to the world view of the receiving party. Most often the item that is being received will be some sort of message - an email, instant message, or broadcasted media such as micro blogging" ;
+ rdfs:subClassOf zg:EventInterpretation .
+
+zg:SendEvent
+ a rdfs:Class ;
+ rdfs:comment "Event triggered when something is send to an external party. The event manifestation must be set according to the world view of the sending party. Most often the item that is being send will be some sort of message - an email, instant message, or broadcasted media such as micro blogging" ;
+ rdfs:subClassOf zg:EventInterpretation .
+
+zg:AcceptEvent
+ a rdfs:Class ;
+ rdfs:comment "Event triggered when the user accepts a request of some sort. Examples could be answering a phone call, accepting a file transfer, or accepting a friendship request over an IM protocol. See also DenyEvent for when the user denies a similar request" ;
+ rdfs:subClassOf zg:EventInterpretation .
+
+zg:DenyEvent
+ a rdfs:Class ;
+ rdfs:comment "Event triggered when the user denies a request of some sort. Examples could be rejecting a phone call, rejecting a file transfer, or denying a friendship request over an IM protocol. See also AcceptEvent for the converse event type" ;
+ rdfs:subClassOf zg:EventInterpretation .
+
+zg:ExpireEvent
+ a rdfs:Class ;
+ rdfs:comment "Event triggered when something expires or times out. These types of events are normally not triggered by the user, but by the operating system or some external party. Examples are a recurring calendar item or task deadline that expires or a when the user fails to respond to an external request such as a phone call" ;
+ rdfs:subClassOf zg:EventInterpretation .
+
+zg:MoveEvent
+ a rdfs:Class ;
+ rdfs:comment "Event triggered when a resource has been moved from a location to another. Fx. moving a file from a folder to another.";
+ rdfs:subClassOf zg:EventInterpretation.
+
+
+# manifestations
+
+zg:EventManifestation
+ a rdfs:Class ;
+ rdfs:comment "Base class for event manifestation types. Please do no instantiate directly, but use one of the sub classes. The manifestation of an event describes 'how it happened'. Fx. 'the user did this' or 'the system notified the user'" ;
+ rdfs:subClassOf nie:DataObject .
+
+zg:UserActivity
+ a rdfs:Class ;
+ rdfs:comment "An event that was actively performed by the user. For example saving or opening a file by clicking on it in the file manager" ;
+ rdfs:subClassOf zg:EventManifestation .
+
+zg:HeuristicActivity
+ a rdfs:Class ;
+ rdfs:comment "An event that is caused indirectly from user activity or deducted via analysis of other events. Fx. if an algorithm divides a user workflow into disjoint 'projects' based on temporal analysis it could insert heuristic events when the user changed project" ;
+ rdfs:subClassOf zg:EventManifestation .
+
+zg:ScheduledActivity
+ a rdfs:Class ;
+ rdfs:comment "An event that was directly triggered by some user initiated sequence of actions. For example a music player automatically changing to the next song in a playlist" ;
+ rdfs:subClassOf zg:EventManifestation .
+
+zg:WorldActivity
+ a rdfs:Class ;
+ rdfs:comment "An event that was performed by an entity, usually human or organization, other than the user. An example could be logging the activities of other people in a team" ;
+ rdfs:subClassOf zg:EventManifestation .
+
+zg:SystemNotification
+ a rdfs:Class ;
+ rdfs:comment "An event send to the user by the operating system. Examples could include when the user inserts a USB stick or when the system warns that the hard disk is full" ;
+ rdfs:subClassOf zg:EventManifestation .
+
+# event datastructure
+
+zg:Event
+ a rdfs:Class ;
+ rdfs:comment "Something that happened at a point in time. Events are categorized by two primary factors 'what happened' - called the interpretation and 'how did it happen' - called the manifestation. Aside from a timestamp, events can also carry a reference to the entity responsible for instantiating it - called the actor. Normally the event actor is an application, but it need not be. Events happen to zero or more subjects. The subjects are described using the Nepomuk ontologies." ;
+ rdfs:subClassOf rdfs:Resource .
+
+zg:eventId
+ a rdf:Property ;
+ rdfs:comment "A unique integer id assigned to an event by the logging framework when the event is first logged" ;
+ rdfs:domain zg:Event ;
+ rdfs:range xsd:nonNegativeInteger ;
+ rdfs:label "id" ;
+ nrl:maxCardinality "1"^^xsd:nonNegativeInteger .
+
+zg:timestamp
+ a rdf:Property ;
+ rdfs:comment "Timestamp in milliseconds since the Unix Epoch" ;
+ rdfs:domain zg:Event ;
+ rdfs:range xsd:long ;
+ rdfs:label "timestamp" ;
+ nrl:maxCardinality "1"^^xsd:nonNegativeInteger .
+
+zg:hasActor
+ a rdf:Property ;
+ rdfs:comment "The application or entity responsible for emitting the event. For applications the format of this field is the base filename of the corresponding .desktop file with an app:// URI scheme. For example /usr/share/applications/firefox.desktop is encoded as app://firefox.desktop" ;
+ rdfs:domain zg:Event ;
+ rdfs:range rdfs:Resource ;
+ rdfs:label "actor" ;
+ nrl:maxCardinality "1"^^xsd:nonNegativeInteger .
+
+zg:hasEventInterpretation
+ a rdf:Property ;
+ rdfs:domain zg:Event ;
+ rdfs:range zg:EventInterpretation ;
+ nrl:maxCardinality "1"^^xsd:nonNegativeInteger ;
+ rdfs:label "interpretation" ;
+ rdfs:subPropertyOf nie:interpretedAs .
+
+zg:hasEventManifestation
+ a rdf:Property ;
+ rdfs:domain zg:Event ;
+ rdfs:range zg:EventManifestation ;
+ nrl:maxCardinality "1"^^xsd:nonNegativeInteger ;
+ rdfs:label "manifestation" ;
+ rdfs:subPropertyOf rdfs:type .
+
+zg:hasSubject
+ a rdf:Property ;
+ rdfs:domain zg:Event ;
+ rdfs:range rdfs:Resource ;
+ rdfs:label "subject" .
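zg.trig defines the event side of the store: a zg:Event carries a timestamp, an actor, an interpretation ('what happened'), a manifestation ('how it happened') and zero or more subjects described with the Nepomuk ontologies above. A minimal sketch, not part of this merge and with invented event and subject URIs, of one logged event following those declarations (the app://firefox.desktop actor encoding is the one documented on zg:hasActor; assumes a recent rdflib):

import time

from rdflib import Graph, Literal, Namespace, URIRef, RDF

ZG = Namespace("http://www.zeitgeist-project.com/ontologies/2010/01/27/zg#")

g = Graph()
event = URIRef("urn:example:event:1")              # hypothetical event URI
subject = URIRef("file:///home/user/report.odt")   # hypothetical subject URI

g.add((event, RDF.type, ZG.Event))
g.add((event, ZG.timestamp, Literal(int(time.time() * 1000))))  # ms since the Unix Epoch
g.add((event, ZG.hasActor, URIRef("app://firefox.desktop")))
g.add((event, ZG.hasEventInterpretation, ZG.AccessEvent))   # 'what happened'
g.add((event, ZG.hasEventManifestation, ZG.UserActivity))   # 'how it happened'
g.add((event, ZG.hasSubject, subject))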
=== added file 'extra/ontology2code'
--- extra/ontology2code 1970-01-01 00:00:00 +0000
+++ extra/ontology2code 2011-10-19 08:09:50 +0000
@@ -0,0 +1,421 @@
+#! /usr/bin/python
+# -.- coding: utf-8 -.-
+
+# Zeitgeist
+#
+# Copyright © 2009-2010 Markus Korn <thekorn@xxxxxx>
+# Copyright © 2010 Mikkel Kamstrup Erlandsen <mikkel.kamstrup@xxxxxxxxx>
+# Copyright © 2010 Canonical Ltd.
+# By Mikkel Kamstrup Erlandsen <mikkel.kamstrup@xxxxxxxxx>
+# Copyright © 2011 Collabora Ltd.
+# By Siegfried-Angel Gevatter Pujals <siegfried@xxxxxxxxxxxx>
+# By Seif Lotfy <seif@xxxxxxxxx>
+#
+# This program is free software: you can redistribute it and/or modify
+# it under the terms of the GNU Lesser General Public License as published by
+# the Free Software Foundation, either version 2.1 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU Lesser General Public License for more details.
+#
+# You should have received a copy of the GNU Lesser General Public License
+# along with this program. If not, see <http://www.gnu.org/licenses/>.
+
+import os
+import re
+import sys
+import glob
+import codecs
+import commands
+import StringIO
+import collections
+
+import rdflib
+from rdflib import RDF, RDFS
+from rdflib.plugin import register
+try:
+ # rdflib2
+ from rdflib.syntax.serializers import Serializer
+ from rdflib import StringInputSource
+ from rdflib.Namespace import Namespace
+except ImportError:
+ # rdflib3 (LP: #626224)
+ from rdflib.serializer import Serializer
+ from rdflib.parser import StringInputSource
+ from rdflib.namespace import Namespace
+
+NIENS = Namespace("http://www.semanticdesktop.org/ontologies/2007/01/19/nie#")
+
+class SymbolCollection(dict):
+
+ closed = False
+
+ _by_namespace = None
+
+ def __init__(self):
+ self._by_namespace = collections.defaultdict(lambda: [])
+
+ def register(self, uri, parents, display_name, doc):
+ assert not self.closed
+ symbol = Symbol(self, uri, parents, display_name, doc)
+ self[uri] = symbol
+
+ def post_process(self):
+ self.closed = True
+ for symbol in self.itervalues():
+ for (i, parent) in enumerate(symbol.parents):
+ symbol.parents[i] = self[parent]
+ self._by_namespace[symbol.namespace].append(symbol)
+
+ def iter_by_namespace(self):
+ return self._by_namespace.iteritems()
+
+ def debug_print(self):
+ for symbol in self.itervalues():
+ symbol.debug_print()
+ print
+
+class Symbol:
+
+ name = None
+ namespace = None
+ uri = None
+ parents = None
+ display_name = None
+ doc = None
+
+ _collection = None
+ _children = None
+ _all_children = None
+
+ def __init__(self, collection, uri, parents, display_name, doc):
+ self._collection = collection
+ self.uri = str(uri)
+ self.namespace, self.name = self.uri[self.uri.rfind('/')+1:].split('#')
+ self.name = Utils.camel2upper(self.name)
+ self.namespace = self.namespace.upper()
+ self.parents = [str(parent) for parent in parents]
+ self.display_name = str(display_name) if display_name is not None \
+ else self.name
+ self.doc = str(doc)
+
+ @property
+ def children(self):
+ """ Return all direct children of this Symbol. """
+ if self._children is None:
+ childs = set()
+ for symbol in self._collection.itervalues():
+ if self in symbol.parents:
+ childs.add(symbol)
+ self._children = childs
+ return self._children
+
+ @property
+ def all_children(self):
+ """ Return all children of this Symbol, recursively. """
+ if self._all_children is None:
+ all_children = set()
+ for symbol in self.children:
+ all_children.update([symbol])
+ all_children.update(symbol.all_children)
+ self._all_children = all_children
+ return self._all_children
+
+ def debug_print(self):
+ print "Name: %s" % self.name
+ print " URI: %s" % self.uri
+ print " Display Name: %s" % self.display_name
+ print " Parents: %s" % ', '.join([str(p) for p in self.parents])
+ doc = self.doc if len(self.doc) <= 50 else "%s..." % self.doc[:47]
+ print " Description: %s" % doc
+
+ def __str__(self):
+ return self.name
+
+ def __doc__(self):
+ return self.doc
+
+ def __cmp__(self, other):
+ return cmp(self.namespace, other.namespace) or \
+ cmp(self.name, other.name)
+
+ def __hash__(self):
+ return self.uri.__hash__()
+
+class Utils:
+
+ @staticmethod
+ def escape_chars(text, quotes='"', strip=True):
+ assert len(quotes) == 1
+ text = text.replace('%s' % quotes, '\\%s' % quotes)
+ if strip:
+ text = text.strip()
+ return text
+
+ @staticmethod
+ def camel2upper(name):
+ """
+ Convert CamelCase to CAMEL_CASE
+ """
+ result = ""
+ for i in range(len(name) - 1) :
+ if name[i].islower() and name[i+1].isupper():
+ result += name[i].upper() + "_"
+ else:
+ result += name[i].upper()
+ result += name[-1].upper()
+ return result
+
+ @staticmethod
+ def replace_items(item_set, item_map):
+ if not item_set:
+ return
+ for item, value in item_map.iteritems():
+ try:
+ item_set.remove(item)
+ except KeyError:
+ # item is not in set
+ continue
+ else:
+ # item was in set, replace it with value
+ item_set.add(value)
+
+ @staticmethod
+ def indent(text, indentation):
+ return re.sub(r'(?m)^(.+)$', r'%s\1' % indentation, text)
+
+class OntologyParser:
+
+ symbols = None
+
+ def __init__(self, directory):
+ rdfxml = self._load_rdfxml_from_trig_directory(directory)
+ self.symbols = self._parse(rdfxml)
+
+ @staticmethod
+ def _load_rdfxml_from_trig_directory(directory):
+ if not os.path.isdir(directory):
+ raise SystemExit, 'Directory doesn\'t exist: %s' % directory
+ files = ' '.join(glob.glob(os.path.join(directory, '*.trig')))
+ return commands.getoutput(
+ "cat %s | rapper -i trig -o rdfxml -I ZeitgeistNamespace - " \
+ "2>/dev/null" % files)
+
+ def _parse(self, rdfxml_stream):
+ """
+ Parse an RDFXML stream into a SymbolCollection.
+ """
+ ontology = rdflib.ConjunctiveGraph()
+ ontology.parse(StringInputSource(rdfxml_stream))
+
+ def _get_all_classes(*super_classes):
+ for cls in super_classes:
+ for subclass in ontology.subjects(RDFS.subClassOf, cls):
+ yield subclass
+ for x in _get_all_classes(subclass):
+ yield x
+
+ parent_classes = [NIENS['InformationElement'], NIENS['DataObject']]
+ symbol_classes = set(_get_all_classes(*parent_classes))
+ all_symbols = symbol_classes.union(parent_classes)
+
+ symbols = SymbolCollection()
+ for symbol in sorted(all_symbols):
+ # URI
+ uri = str(symbol)
+
+ # Description
+ comments = list(ontology.objects(symbol, RDFS.comment))
+ doc = comments[0] if comments else ''
+
+ # Display name
+ labels = list(ontology.objects(symbol, RDFS.label))
+ display_name = (labels[0]) if labels else None
+
+ # Parents
+ parents = set(ontology.objects(symbol, RDFS.subClassOf)
+ ).intersection(all_symbols)
+
+ if symbol in symbol_classes:
+ assert parents
+
+ # And we have a new Symbol!
+ symbols.register(uri, parents, display_name, doc)
+
+ symbols.post_process()
+ return symbols
+
+class GenericSerializer:
+
+ parser = None
+ symbols = None
+
+ def __init__(self, parser):
+ self.parser = parser
+ self.symbols = parser.symbols
+
+class PythonSerializer(GenericSerializer):
+
+ def dump(self):
+ for symbol in sorted(self.symbols.itervalues()):
+ parents = set((symbol.uri for symbol in symbol.parents))
+ Utils.replace_items(parents, {
+ str(NIENS['InformationElement']): 'Interpretation',
+ str(NIENS['DataObject']): 'Manifestation' })
+ print "Symbol('%s', parent=%r, uri='%s', display_name='%s', " \
+ "doc='%s', auto_resolve=False)" % (symbol.name, parents,
+ symbol.uri, Utils.escape_chars(symbol.display_name, '\''),
+ Utils.escape_chars(symbol.doc, '\''))
+
+class ValaSerializer(GenericSerializer):
+
+ @staticmethod
+ def symbol_link(symbol):
+ return '%s_%s' % (symbol.namespace, symbol.name)
+
+ @classmethod
+ def build_doc(cls, symbol, doc_prefix=""):
+ """
+ Build a C-style docstring for gtk-doc processing.
+ """
+ uri_link = '<ulink url="%s">%s</ulink>' % (symbol.uri,
+ symbol.uri.replace('#', '#'))
+ doc = symbol.doc
+
+ # List children
+ children = ['#' + cls.symbol_link(child) for child in symbol.children]
+ if children:
+ doc += '\n\n Children: %s' % ', '.join(children)
+ else:
+ doc += '\n\n Children: None'
+
+ # List parents
+ parents = ['#' + cls.symbol_link(parent) for parent in symbol.parents]
+ if parents and not parents in (['#INTERPRETATION'], ['#MANIFESTATION']):
+ doc += '\n\n Parents: %s' % ', '.join(parents)
+ else:
+ doc += '\n\n Parents: None'
+
+ # Convert docstring to gtk-doc style C comment
+ doc = doc.replace('\n', '\n *')
+ doc = '/**\n * %s:\n *\n * %s%s\n * \n * %s\n */' % (
+ symbol.name, doc_prefix, uri_link, doc)
+ return doc
+
+ def dump_uris(self, dest):
+ dest.write('namespace Zeitgeist\n{\n')
+ for namespace, symbols in sorted(self.symbols.iter_by_namespace()):
+ dest.write('\n namespace %s\n {\n\n' % namespace)
+ for symbol in sorted(symbols):
+ # FIXME: (event/subject) interpretation/manifestation ??
+ doc = self.build_doc(symbol,
+ doc_prefix='Macro defining the interpretation type ')
+ dest.write(' %s\n' % doc.replace('\n', '\n '
+ ).strip())
+ dest.write(' public const string %s = "%s";\n\n' % (
+ symbol.name, symbol.uri))
+ dest.write(' }\n')
+ dest.write('}\n')
+
+ def dump_symbols(self, dest):
+ dest.write('string uri, display_name, description;\n')
+ dest.write('string[] parents, children, all_children;\n\n')
+ for namespace, symbols in sorted(self.symbols.iter_by_namespace()):
+ for symbol in sorted(symbols):
+ parent_uris = ', '.join('%s.%s' % (s.namespace, s.name) for
+ s in symbol.parents)
+ children_uris = ', '.join('%s.%s' % (s.namespace, s.name)
+ for s in symbol.children)
+ all_children_uris = ', '.join('%s.%s' % (s.namespace,
+ s.name) for s in symbol.all_children)
+ dest.write('uri = Zeitgeist.%s.%s;\n' % (symbol.namespace,
+ symbol.name))
+ dest.write('description = "%s";\n' % Utils.escape_chars(
+ symbol.doc, '"'))
+ dest.write('display_name = "%s";\n' % Utils.escape_chars(
+ symbol.display_name, '"'))
+ dest.write('parents = { %s };\n' % parent_uris)
+ dest.write('children = { %s };\n' % children_uris)
+ dest.write('all_children = { %s };\n' % all_children_uris)
+ dest.write('Symbol.Info.register (uri, display_name, description, ' \
+ 'parents, children, all_children);\n\n')
+
+class OntologyCodeGenerator:
+
+ _INSERTION_MARK = '// *insert-auto-generated-code*'
+
+ _selfpath = None
+ _basepath = None
+ _parser = None
+ _python_serializer = None
+ _vala_serializer = None
+
+ def __init__(self):
+ self._selfpath = os.path.dirname(os.path.abspath(__file__))
+ self._basepath = os.path.join(self._selfpath, '..')
+ self._parser = OntologyParser(os.path.join(self._selfpath, 'ontology'))
+ self._python_serializer = PythonSerializer(self._parser)
+ self._vala_serializer = ValaSerializer(self._parser)
+
+ def generate_python(self):
+ self._python_serializer.dump()
+
+ def generate_vala(self):
+ self._write_file('src/ontology-uris.vala.in', 'src/ontology-uris.vala',
+ self._vala_serializer.dump_uris, 'vala')
+ self._write_file('src/ontology.vala.in', 'src/ontology.vala',
+ self._vala_serializer.dump_symbols, 'vala')
+
+ def _write_file(self, tplfilename, outfilename, content_generator, _type):
+ print >>sys.stderr, "Generating %s..." % outfilename
+
+ # Read template file
+ tplfilename = os.path.join(self._basepath, tplfilename)
+ template = open(tplfilename).read()
+
+ # Generate output
+ content = StringIO.StringIO()
+ content_generator(content)
+ content = content.getvalue().strip('\n')
+
+ # Write header
+ output = StringIO.StringIO()
+ self._write_header(output, _type)
+
+ # Write template, insert the generated output into the correct
+ # position (marked by "// *insert-auto-generated-code*").
+ insertion_pos = template.find(self._INSERTION_MARK)
+ indentation = insertion_pos - template.rfind('\n', 0, insertion_pos) - 1
+ start_pos = template.rfind('\n', 0, insertion_pos) + 1
+ continue_pos = insertion_pos
+ output.write(template[:start_pos])
+ output.write(Utils.indent(content, ' ' * indentation))
+ output.write(template[continue_pos+len(self._INSERTION_MARK):])
+
+ # Write everything to the result file
+ outpath = os.path.join(self._basepath, outfilename)
+ open(outpath, 'w').write(output.getvalue())
+
+ def _write_header(self, dest, _type):
+ if _type == 'vala':
+ dest.write('// This file has been auto-generated by the ' \
+ 'ontology2code script.\n')
+ dest.write('// Do not modify it directly.\n\n')
+ else:
+ raise NotImplementedError
+
+ def _generate_vala_uris(self, dest):
+ pass
+
+if __name__ == "__main__":
+ if len(sys.argv) != 2 or sys.argv[1] not in ('--vala', '--dump-python'):
+ raise SystemExit, 'Usage: %s [--vala|--dump-python]' % \
+ sys.argv[0]
+ generator = OntologyCodeGenerator()
+ if sys.argv[1] == '--vala':
+ generator.generate_vala()
+ elif sys.argv[1] == '--dump-python':
+ generator.generate_python()
=== added file 'extra/org.gnome.zeitgeist.service.in'
--- extra/org.gnome.zeitgeist.service.in 1970-01-01 00:00:00 +0000
+++ extra/org.gnome.zeitgeist.service.in 2011-10-19 08:09:50 +0000
@@ -0,0 +1,3 @@
+[D-BUS Service]
+Name=org.gnome.zeitgeist.Engine
+Exec=@prefix@/bin/bluebird
=== added file 'extra/zeitgeist-daemon.bash_completion'
--- extra/zeitgeist-daemon.bash_completion 1970-01-01 00:00:00 +0000
+++ extra/zeitgeist-daemon.bash_completion 2011-10-19 08:09:50 +0000
@@ -0,0 +1,14 @@
+# -*- shell-script -*-
+#
+# Bash tab auto-completion rules for the zeitgeist-daemon command.
+# Put this file in /etc/bash_completion.d/ and bash will automatically load it.
+#
+# By Siegfried-Angel Gevatter Pujals <siegfried@xxxxxxxxxxxx>
+
+have zeitgeist-daemon &&
+_zeitgeist_daemon()
+{
+ local cur=${COMP_WORDS[COMP_CWORD]}
+ COMPREPLY=($(compgen -W "`zeitgeist-daemon --shell-completion`" -- $cur))
+}
+[ "${have:-}" ] && complete -F _zeitgeist_daemon -o default zeitgeist-daemon
=== added directory 'po'
=== renamed directory 'po' => 'po.moved'
=== added file 'po/LINGUAS'
--- po/LINGUAS 1970-01-01 00:00:00 +0000
+++ po/LINGUAS 2011-10-19 08:09:50 +0000
@@ -0,0 +1,2 @@
+# please keep this list sorted alphabetically
+#
=== added file 'po/POTFILES.in'
--- po/POTFILES.in 1970-01-01 00:00:00 +0000
+++ po/POTFILES.in 2011-10-19 08:09:50 +0000
@@ -0,0 +1,3 @@
+[encoding: UTF-8]
+# List of source files which contain translatable strings.
+src/zeitgeist-daemon.vala
=== added file 'po/POTFILES.skip'
--- po/POTFILES.skip 1970-01-01 00:00:00 +0000
+++ po/POTFILES.skip 2011-10-19 08:09:50 +0000
@@ -0,0 +1,1 @@
+src/main.c
=== added directory 'src'
=== added file 'src/Makefile.am'
--- src/Makefile.am 1970-01-01 00:00:00 +0000
+++ src/Makefile.am 2011-10-19 08:09:50 +0000
@@ -0,0 +1,102 @@
+NULL =
+
+bin_PROGRAMS = bluebird
+
+AM_CPPFLAGS = \
+ $(BLUEBIRD_CFLAGS) \
+ -include $(CONFIG_HEADER) \
+ -w \
+ $(NULL)
+
+VALAFLAGS = \
+ --target-glib=2.26 \
+ -D BUILTIN_EXTENSIONS \
+ --pkg gio-2.0 \
+ --pkg sqlite3 \
+ --pkg posix \
+ --pkg gmodule-2.0 \
+ $(top_srcdir)/config.vapi \
+ $(NULL)
+
+# Make sure every extension has only one vala file!
+extensions_VALASOURCES = \
+ ext-data-source-registry.vala \
+ ext-blacklist.vala \
+ ext-histogram.vala \
+ ext-storage-monitor.vala \
+ ext-fts.vala \
+ $(NULL)
+
+bluebird_VALASOURCES = \
+ zeitgeist-daemon.vala \
+ datamodel.vala \
+ engine.vala \
+ remote.vala \
+ extension.vala \
+ extension-collection.vala \
+ extension-store.vala \
+ notify.vala \
+ sql.vala \
+ utils.vala \
+ errors.vala \
+ table-lookup.vala \
+ sql-schema.vala \
+ where-clause.vala \
+ ontology.vala \
+ ontology-uris.vala \
+ $(NULL)
+
+bluebird_SOURCES = \
+ zeitgeist-engine_vala.stamp \
+ $(bluebird_VALASOURCES:.vala=.c) \
+ $(extensions_VALASOURCES:.vala=.c) \
+ $(NULL)
+
+bluebird_LDADD = \
+ $(BLUEBIRD_LIBS) \
+ $(NULL)
+
+bluebird_LDFLAGS = -export-dynamic -no-undefined
+
+BUILT_SOURCES = zeitgeist-engine_vala.stamp extensions_vala.stamp
+
+zeitgeist-engine_vala.stamp: $(bluebird_VALASOURCES)
+ $(VALA_V)$(VALAC) $(VALAFLAGS) -C -H zeitgeist-engine.h --library zeitgeist-engine $^
+ @touch "$@"
+
+extensions_vala.stamp: zeitgeist-engine_vala.stamp $(extensions_VALASOURCES)
+ $(AM_V_GEN)$(foreach ext_src,$(filter %.vala,$^),\
+ $(VALAC) $(VALAFLAGS) $(EXT_FLAGS) -C zeitgeist-engine.vapi $(ext_src) || exit 1;)
+ @touch "$@"
+
+# FIXME: can we make this depend on $(ontology_trig_DATA)?
+ontology_vala.stamp: ontology.vala.in ontology-uris.vala.in
+ $(AM_V_GEN)$(top_srcdir)/extra/ontology2code --vala
+ @touch "$@"
+
+ontology.vala ontology-uris.vala: ontology_vala.stamp
+
+EXTRA_DIST = \
+ $(bluebird_VALASOURCES) \
+ $(extensions_VALASOURCES) \
+ ontology_vala.stamp \
+ ontology.vala.in \
+ ontology-uris.vala.in \
+ zeitgeist-engine.h \
+ zeitgeist-engine.vapi \
+ zeitgeist-engine_vala.stamp \
+ extensions_vala.stamp \
+ $(NULL)
+
+DISTCLEANFILES = \
+ ontology.vala \
+ ontology-uris.vala \
+ $(NULL)
+
+distclean-local:
+ rm -f *.c *.o *.stamp *.~[0-9]~
+
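+# Silent-rules support for valac: print a short "  VALAC" line instead of
+# the full command unless "make V=1" is used.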
+VALA_V = $(VALA_V_$(V))
+VALA_V_ = $(VALA_V_$(AM_DEFAULT_VERBOSITY))
+VALA_V_0 = @echo " VALAC " $^;
+
=== added file 'src/datamodel.vala'
--- src/datamodel.vala 1970-01-01 00:00:00 +0000
+++ src/datamodel.vala 2011-10-19 08:09:50 +0000
@@ -0,0 +1,604 @@
+/* datamodel.vala
+ *
+ * Copyright © 2011 Collabora Ltd.
+ * By Seif Lotfy <seif@xxxxxxxxx>
+ * By Siegfried-Angel Gevatter Pujals <siegfried@xxxxxxxxxxxx>
+ * Copyright © 2011 Manish Sinha <manishsinha@xxxxxxxxxx>
+ *
+ * This program is free software: you can redistribute it and/or modify
+ * it under the terms of the GNU Lesser General Public License as published by
+ * the Free Software Foundation, either version 2.1 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ *
+ */
+
+namespace Zeitgeist
+{
+ namespace Timestamp
+ {
+ public static int64 now ()
+ {
+ return from_timeval (TimeVal ());
+ }
+
+ public static int64 from_timeval (TimeVal tv)
+ {
+ int64 result;
+ result = ((int64) tv.tv_sec) * 1000;
+ result += ((int64) tv.tv_usec) / 1000;
+
+ return result;
+ }
+ }
+
+ [CCode (type_signature = "(xx)")]
+ public class TimeRange: Object
+ {
+ public int64 start { get; private set; }
+ public int64 end { get; private set; }
+
+ public TimeRange (int64 start_msec, int64 end_msec)
+ {
+ start = start_msec;
+ end = end_msec;
+ }
+
+ public TimeRange.anytime ()
+ {
+ this (0, int64.MAX);
+ }
+
+ public TimeRange.to_now ()
+ {
+ this (0, Timestamp.now ());
+ }
+
+ public TimeRange.from_now ()
+ {
+ this (Timestamp.now (), int64.MAX);
+ }
+
+ public TimeRange.from_variant (Variant variant)
+ {
+ assert (variant.get_type_string () == "(xx)");
+
+ int64 start_msec = 0;
+ int64 end_msec = 0;
+
+ variant.get ("(xx)", &start_msec, &end_msec);
+
+ this (start_msec, end_msec);
+ }
+
+ public Variant to_variant ()
+ {
+ return new Variant ("(xx)", start, end);
+ }
+
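+ // Illustrative example (comment added for clarity): intersecting
+ // [0, 10] with [5, 20] yields [5, 10], while intersecting [0, 4]
+ // with [5, 20] yields null.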
+ public TimeRange? intersect (TimeRange time_range)
+ {
+ var result = new TimeRange(0,0);
+ if (start < time_range.start)
+ if (end < time_range.start)
+ return null;
+ else
+ result.start = time_range.start;
+ else
+ if (start > time_range.end)
+ return null;
+ else
+ result.start = start;
+
+ if (end < time_range.end)
+ if (end < time_range.start)
+ return null;
+ else
+ result.end = end;
+ else
+ if (start > time_range.end)
+ return null;
+ else
+ result.end = time_range.end;
+ return result;
+ }
+ }
+
+ public enum ResultType
+ {
+ MOST_RECENT_EVENTS = 0, // All events with the most
+ // recent events first
+ LEAST_RECENT_EVENTS = 1, // All events with the oldest
+ // ones first
+ MOST_RECENT_SUBJECTS = 2, // One event for each subject
+ // only, ordered with the
+ // most recent events first
+ LEAST_RECENT_SUBJECTS = 3, // One event for each subject
+ // only, ordered with oldest
+ // events first
+ MOST_POPULAR_SUBJECTS = 4, // One event for each subject
+ // only, ordered by the
+ // popularity of the subject
+ LEAST_POPULAR_SUBJECTS = 5, // One event for each subject
+ // only, ordered ascendingly
+ // by popularity of the
+ // subject
+ MOST_POPULAR_ACTOR = 6, // The last event of each
+ // different actor ordered
+ // by the popularity of the
+ // actor
+ LEAST_POPULAR_ACTOR = 7, // The last event of each
+ // different actor, ordered
+ // ascendingly by the
+ // popularity of the actor
+ MOST_RECENT_ACTOR = 8, // The actor that has been used
+ // most recently
+ LEAST_RECENT_ACTOR = 9, // The actor that has been used
+ // least recently
+ MOST_RECENT_ORIGIN = 10, // The last event of each
+ // different subject origin.
+ LEAST_RECENT_ORIGIN = 11, // The last event of each
+ // different subject origin,
+ // ordered by least
+ // recently used first
+ MOST_POPULAR_ORIGIN = 12, // The last event of each
+ // different subject origin,
+ // ordered by the
+ // popularity of the origins
+ LEAST_POPULAR_ORIGIN = 13, // The last event of each
+ // different subject origin,
+ // ordered ascendingly by
+ // the popularity of the
+ // origin
+ OLDEST_ACTOR = 14, // The first event of each
+ // different actor
+ MOST_RECENT_SUBJECT_INTERPRETATION = 15, // One event for each subject
+ // interpretation only,
+ // ordered with the most
+ // recent events first
+ LEAST_RECENT_SUBJECT_INTERPRETATION = 16, // One event for each subject
+ // interpretation only,
+ // ordered with the least
+ // recent events first
+ MOST_POPULAR_SUBJECT_INTERPRETATION = 17, // One event for each subject
+ // interpretation only,
+ // ordered by the popularity
+ // of the subject
+ // interpretation
+ LEAST_POPULAR_SUBJECT_INTERPRETATION = 18, // One event for each subject
+ // interpretation only,
+ // ordered ascendingly by
+ // popularity of the subject
+ // interpretation
+ MOST_RECENT_MIMETYPE = 19, // One event for each mimetype
+ // only ordered with the
+ // most recent events first
+ LEAST_RECENT_MIMETYPE = 20, // One event for each mimetype
+ // only ordered with the
+ // least recent events first
+ MOST_POPULAR_MIMETYPE = 21, // One event for each mimetype
+ // only ordered by the
+ // popularity of the mimetype
+ LEAST_POPULAR_MIMETYPE = 22, // One event for each mimetype
+ // only ordered ascendingly
+ // by popularity of the
+ // mimetype
+ MOST_RECENT_CURRENT_URI = 23, // One event for each subject
+ // only by current_uri
+ // instead of uri ordered
+ // with the most recent
+ // events first
+ LEAST_RECENT_CURRENT_URI = 24, // One event for each subject
+ // only by current_uri
+ // instead of uri ordered
+ // with oldest events first
+ MOST_POPULAR_CURRENT_URI = 25, // One event for each subject
+ // only by current_uri
+ // instead of uri ordered
+ // by the popularity of the
+ // subject
+ LEAST_POPULAR_CURRENT_URI = 26, // One event for each subject
+ // only by current_uri
+ // instead of uri
+ // ordered ascendingly by
+ // popularity of the subject
+ MOST_RECENT_EVENT_ORIGIN = 27, // The last event of each
+ // different origin
+ LEAST_RECENT_EVENT_ORIGIN = 28, // The last event of each
+ // different origin, ordered
+ // by least recently used
+ // first
+ MOST_POPULAR_EVENT_ORIGIN = 29, // The last event of each
+ // different origin ordered
+ // by the popularity of the
+ // origins
+ LEAST_POPULAR_EVENT_ORIGIN = 30, // The last event of each
+ // different origin, ordered
+ // ascendingly by the
+ // popularity of the origin
+ }
+
+ /*
+ * An enumeration class used to define how query results should
+ * be returned from the Zeitgeist engine.
+ */
+ public enum RelevantResultType
+ {
+ RECENT = 0, // All uris with the most recent uri first
+ RELATED = 1, // All uris with the most related one first
+ }
+
+ /**
+ * Enumeration class defining the possible values for the storage
+ * state of an event subject.
+ *
+ * The StorageState enumeration can be used to control whether or
+ * not matched events must have their subjects available to the user.
+ * E.g. not including deleted files, files on unplugged USB drives,
+ * files available only when a network is available, etc.
+ */
+ public enum StorageState
+ {
+ NOT_AVAILABLE = 0, // The storage medium of the events
+ // subjects must not be available to the user
+ AVAILABLE = 1, // The storage medium of all event subjects
+ // must be immediately available to the user
+ ANY = 2 // The event subjects may or may not be available
+ }
+
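+ // A note on template matching (comment added for clarity): a template
+ // value may be prefixed with "!" to negate the match and suffixed with
+ // "*" to request a prefix match, e.g. (illustrative values)
+ // "!application://firefox.desktop" or "application://*"; this mirrors
+ // Engine.parse_negation and Engine.parse_wildcard used below.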
+ private bool check_field_match (string property,
+ string template_property, bool is_symbol = false,
+ bool can_wildcard = false)
+ {
+ var matches = false;
+ var parsed = template_property;
+ var is_negated = Engine.parse_negation (ref parsed);
+
+ if (parsed == "")
+ {
+ return true;
+ }
+ else if (parsed == property)
+ {
+ matches = true;
+ }
+ else if (is_symbol &&
+ Symbol.get_all_parents (property).find_custom (parsed, strcmp) != null)
+ {
+ matches = true;
+ }
+ else if (can_wildcard && Engine.parse_wildcard (ref parsed))
+ {
+ if (property.has_prefix (parsed)) matches = true;
+ }
+
+ debug ("Checking matches for %s", parsed);
+ return (is_negated) ? !matches : matches;
+ }
+
+ public class Event : Object
+ {
+ public uint32 id { get; set; }
+ public int64 timestamp { get; set; }
+ public string interpretation { get; set; }
+ public string manifestation { get; set; }
+ public string actor { get; set; }
+ public string origin { get; set; }
+
+ public GenericArray<Subject> subjects { get; set; }
+ public ByteArray? payload { get; set; }
+
+ construct
+ {
+ subjects = new GenericArray<Subject> ();
+ }
+
+ public int num_subjects ()
+ {
+ return subjects.length;
+ }
+
+ public void add_subject (Subject subject)
+ {
+ subjects.add (subject);
+ }
+
+ public Event.from_variant (Variant event_variant) {
+ assert (event_variant.get_type_string () == "(" +
+ Utils.SIG_EVENT + ")");
+
+ VariantIter iter = event_variant.iterator ();
+
+ assert (iter.n_children () >= 3);
+ VariantIter event_array = iter.next_value ().iterator ();
+ VariantIter subjects_array = iter.next_value ().iterator ();
+ Variant payload_variant = iter.next_value ();
+
+ var event_props = event_array.n_children ();
+ assert (event_props >= 5);
+ id = (uint32) uint64.parse (event_array.next_value ().get_string ());
+ var str_timestamp = event_array.next_value ().get_string ();
+ if (str_timestamp == "")
+ timestamp = Timestamp.now ();
+ else
+ timestamp = int64.parse (str_timestamp);
+ interpretation = event_array.next_value ().get_string ();
+ manifestation = event_array.next_value ().get_string ();
+ actor = event_array.next_value ().get_string ();
+ // let's keep this compatible with older clients
+ if (event_props >= 6)
+ origin = event_array.next_value ().get_string ();
+ else
+ origin = "";
+
+ for (int i = 0; i < subjects_array.n_children (); ++i) {
+ Variant subject_variant = subjects_array.next_value ();
+ subjects.add (new Subject.from_variant (subject_variant));
+ }
+
+ // Parse payload...
+ uint payload_length = (uint) payload_variant.n_children ();
+ if (payload_length > 0)
+ {
+ debug ("there was payload with length: %u", payload_length);
+ payload = new ByteArray.sized (payload_length);
+ unowned uint8[] data = (uint8[]?) payload_variant.get_data ();
+ data.length = (int) payload_length;
+ payload.append (data);
+ }
+ }
+
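+ // The variant built below has three parts: an "as" array with the
+ // event metadata (id, timestamp, interpretation, manifestation,
+ // actor, origin), an "aas" array with one string array per subject,
+ // and an "ay" byte array for the payload -- together presumably
+ // forming Utils.SIG_EVENT.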
+ public Variant to_variant ()
+ {
+ var vb = new VariantBuilder (new VariantType ("("+Utils.SIG_EVENT+")"));
+
+ vb.open (new VariantType ("as"));
+ vb.add ("s", id == 0 ? "" : id.to_string ());
+ vb.add ("s", timestamp.to_string ());
+ vb.add ("s", interpretation != null ? interpretation : "");
+ vb.add ("s", manifestation != null ? manifestation : "");
+ vb.add ("s", actor != null ? actor : "");
+ vb.add ("s", origin ?? "");
+ vb.close ();
+
+ vb.open (new VariantType ("aas"));
+ for (int i = 0; i < subjects.length; ++i) {
+ vb.add_value (subjects[i].to_variant ());
+ }
+ vb.close ();
+
+ if (payload != null)
+ {
+ Variant payload_variant = Variant.new_from_data<ByteArray> (
+ new VariantType ("ay"), payload.data, false, payload);
+ // FIXME: somehow adding the payload_variant is not working
+ vb.add_value (payload_variant);
+ }
+ else
+ {
+ vb.open (new VariantType ("ay"));
+ vb.close ();
+ }
+
+ return vb.end ();
+ }
+
+ public void debug_print ()
+ {
+ stdout.printf ("id: %d\t" +
+ "timestamp: %" + int64.FORMAT + "\n" +
+ "actor: %s\n" +
+ "interpretation: %s\n" +
+ "manifestation: %s\n" +
+ "origin: %s\n" +
+ "num subjects: %d\n",
+ id, timestamp, actor, interpretation,
+ manifestation, origin, subjects.length);
+ for (int i = 0; i < subjects.length; i++)
+ {
+ var s = subjects[i];
+ stdout.printf (" Subject #%d:\n" +
+ " uri: %s\n" +
+ " interpretation: %s\n" +
+ " manifestation: %s\n" +
+ " mimetype: %s\n" +
+ " origin: %s\n" +
+ " text: %s\n" +
+ " current_uri: %s\n" +
+ " storage: %s\n",
+ i, s.uri, s.interpretation, s.manifestation,
+ s.mimetype, s.origin, s.text, s.current_uri,
+ s.storage);
+ }
+ }
+
+
+
+ public bool matches_template (Event template_event)
+ {
+ /**
+ Return true if this event matches *template_event*. Unset
+ fields in the template are interpreted as wildcards.
+ Interpretations and manifestations also match if they are
+ children of the types specified in `template_event`. If the
+ template has more than one subject, this event matches if at
+ least one of its subjects matches any single one of the
+ subjects on the template.
+ */
+
+ //Check if interpretation is child of template_event or same
+ debug("Checking if event %u matches template_event %u\n",
+ this.id, template_event.id);
+ if (!check_field_match (this.interpretation, template_event.interpretation, true))
+ return false;
+ //Check if manifestation is child of template_event or same
+ if (!check_field_match (this.manifestation, template_event.manifestation, true))
+ return false;
+ //Check if actor is equal to template_event actor
+ if (!check_field_match (this.actor, template_event.actor, false, true))
+ return false;
+ //Check if origin is equal to template_event origin
+ if (!check_field_match (this.origin, template_event.origin, false, true))
+ return false;
+
+ if (template_event.subjects.length == 0)
+ return true;
+
+ for (int i = 0; i < this.subjects.length; i++)
+ for (int j = 0; j < template_event.subjects.length; j++)
+ if (this.subjects[i].matches_template (template_event.subjects[j]))
+ return true;
+
+ return false;
+ }
+
+ }
+
+ namespace Events
+ {
+
+ public static GenericArray<Event> from_variant (Variant vevents)
+ {
+ GenericArray<Event> events = new GenericArray<Event> ();
+
+ assert (vevents.get_type_string () == "a("+Utils.SIG_EVENT+")");
+
+ foreach (Variant event in vevents)
+ {
+ events.add (new Event.from_variant (event));
+ }
+
+ return events;
+ }
+
+ public static Variant to_variant (GenericArray<Event?> events)
+ {
+ var vb = new VariantBuilder(new VariantType("a("+Utils.SIG_EVENT+")"));
+
+ for (int i = 0; i < events.length; ++i)
+ {
+ if (events[i] != null)
+ {
+ vb.add_value (events[i].to_variant ());
+ }
+ else
+ {
+ vb.add_value (get_null_event_variant ());
+ }
+ }
+
+ return vb.end ();
+ }
+
+ public static Variant get_null_event_variant ()
+ {
+ var vb = new VariantBuilder (new VariantType ("("+Utils.SIG_EVENT+")"));
+ vb.open (new VariantType ("as"));
+ vb.close ();
+ vb.open (new VariantType ("aas"));
+ vb.close ();
+ vb.open (new VariantType ("ay"));
+ vb.close ();
+ return vb.end ();
+ }
+
+ }
+
+ public class Subject : Object
+ {
+
+ public string uri { get; set; }
+ public string interpretation { get; set; }
+ public string manifestation { get; set; }
+ public string mimetype { get; set; }
+ public string origin { get; set; }
+ public string text { get; set; }
+ public string storage { get; set; }
+ public string current_uri { get; set; }
+
+ public Subject.from_variant (Variant subject_variant)
+ {
+ VariantIter iter = subject_variant.iterator();
+
+ var subject_props = iter.n_children ();
+ assert (subject_props >= 7);
+ uri = iter.next_value().get_string ();
+ interpretation = iter.next_value().get_string ();
+ manifestation = iter.next_value().get_string ();
+ origin = iter.next_value().get_string ();
+ mimetype = iter.next_value().get_string ();
+ text = iter.next_value().get_string ();
+ storage = iter.next_value().get_string ();
+ // let's keep this compatible with older clients
+ if (subject_props >= 8)
+ current_uri = iter.next_value().get_string ();
+ else
+ current_uri = "";
+ }
+
+ public Variant to_variant ()
+ {
+ /* The FAST version */
+ char* ptr_arr[8];
+ ptr_arr[0] = uri != null ? uri : "";
+ ptr_arr[1] = interpretation != null ? interpretation : "";
+ ptr_arr[2] = manifestation != null ? manifestation : "";
+ ptr_arr[3] = origin != null ? origin : "";
+ ptr_arr[4] = mimetype != null ? mimetype : "";
+ ptr_arr[5] = text != null ? text : "";
+ ptr_arr[6] = storage != null ? storage : "";
+ ptr_arr[7] = current_uri != null ? current_uri : "";
+ return new Variant.strv ((string[]) ptr_arr);
+ /* The NICE version */
+ /*
+ var vb = new VariantBuilder (new VariantType ("as"));
+ vb.add ("s", uri ?? "");
+ vb.add ("s", interpretation ?? "");
+ vb.add ("s", manifestation ?? "");
+ vb.add ("s", origin ?? "");
+ vb.add ("s", mimetype ?? "");
+ vb.add ("s", text ?? "");
+ vb.add ("s", storage ?? "");
+ vb.add ("s", current_uri ?? "");
+
+ return vb.end ();
+ */
+ }
+
+ public bool matches_template (Subject template_subject)
+ {
+ /**
+ Return true if this Subject matches *template_subject*. Empty
+ fields in the template are treated as wildcards.
+ Interpretations and manifestations also match if they are
+ children of the types specified in `template_subject`.
+ */
+ if (!check_field_match (this.uri, template_subject.uri, false, true))
+ return false;
+ if (!check_field_match (this.current_uri, template_subject.current_uri, false, true))
+ return false;
+ if (!check_field_match (this.interpretation, template_subject.interpretation, true))
+ return false;
+ if (!check_field_match (this.manifestation, template_subject.manifestation, true))
+ return false;
+ if (!check_field_match (this.origin, template_subject.origin, false, true))
+ return false;
+ if (!check_field_match (this.mimetype, template_subject.mimetype, false, true))
+ return false;
+
+ return true;
+ }
+
+ }
+
+}
+
+// vim:expandtab:ts=4:sw=4
=== added file 'src/engine.vala'
--- src/engine.vala 1970-01-01 00:00:00 +0000
+++ src/engine.vala 2011-10-19 08:09:50 +0000
@@ -0,0 +1,1155 @@
+/* engine.vala
+ *
+ * Copyright © 2011 Collabora Ltd.
+ * By Siegfried-Angel Gevatter Pujals <siegfried@xxxxxxxxxxxx>
+ * By Seif Lotfy <seif@xxxxxxxxx>
+ *
+ * Based upon a Python implementation (2009-2011) by:
+ * Markus Korn <thekorn@xxxxxxx>
+ * Mikkel Kamstrup Erlandsen <mikkel.kamstrup@xxxxxxxxx>
+ * Seif Lotfy <seif@xxxxxxxxx>
+ * Siegfried-Angel Gevatter Pujals <siegfried@xxxxxxxxxxxx>
+ *
+ * This program is free software: you can redistribute it and/or modify
+ * it under the terms of the GNU Lesser General Public License as published by
+ * the Free Software Foundation, either version 2.1 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ *
+ */
+
+using Zeitgeist;
+using Zeitgeist.SQLite;
+
+namespace Zeitgeist
+{ // FIXME: increase indentation once we're ok with breaking 'bzr diff'
+
+public class Engine : Object
+{
+
+ public Zeitgeist.SQLite.ZeitgeistDatabase database { get; private set; }
+ public ExtensionStore extension_store;
+ private ExtensionCollection extension_collection;
+ private unowned Sqlite.Database db;
+
+ protected TableLookup interpretations_table;
+ protected TableLookup manifestations_table;
+ protected TableLookup mimetypes_table;
+ protected TableLookup actors_table;
+
+ private uint32 last_id;
+
+ public Engine () throws EngineError
+ {
+ database = new Zeitgeist.SQLite.ZeitgeistDatabase ();
+ database.set_deletion_callback (delete_from_cache);
+ db = database.database;
+ last_id = database.get_last_id ();
+
+ interpretations_table = new TableLookup (database, "interpretation");
+ manifestations_table = new TableLookup (database, "manifestation");
+ mimetypes_table = new TableLookup (database, "mimetype");
+ actors_table = new TableLookup (database, "actor");
+
+ extension_store = new ExtensionStore (this);
+ extension_collection = new ExtensionCollection (this);
+ }
+
+ public string[] get_extension_names ()
+ {
+ return extension_collection.get_extension_names ();
+ }
+
+ public Variant get_events(uint32[] event_ids,
+ BusName? sender=null) throws EngineError
+ {
+ // TODO: Consider whether we still want the cache. This should be
+ // decided once everything is working, since it adds unneeded
+ // complexity. We'd also want to benchmark it again first; we may
+ // have better options for enhancing SQLite performance now, and
+ // event processing will be faster now that the code is C.
+
+ Sqlite.Statement stmt;
+ int rc;
+
+ if (event_ids.length == 0)
+ return new Variant.array (null, null);
+ string sql = """
+ SELECT * FROM event_view
+ WHERE id = ?
+ """;
+
+ rc = db.prepare_v2 (sql, -1, out stmt);
+ database.assert_query_success (rc, "SQL error");
+
+ var events = new VariantBuilder (new VariantType ("a("+Utils.SIG_EVENT+")"));
+
+ foreach (var event_id in event_ids)
+ {
+ stmt.bind_int64 (1, event_id);
+
+ Event? event = null;
+
+ while ((rc = stmt.step ()) == Sqlite.ROW)
+ {
+ if (event == null)
+ {
+ event = new Event ();
+ event.id = event_id;
+ event.timestamp = stmt.column_int64 (EventViewRows.TIMESTAMP);
+ event.interpretation = interpretations_table.get_value (
+ stmt.column_int (EventViewRows.INTERPRETATION));
+ event.manifestation = manifestations_table.get_value (
+ stmt.column_int (EventViewRows.MANIFESTATION));
+ event.actor = actors_table.get_value (
+ stmt.column_int (EventViewRows.ACTOR));
+ event.origin = stmt.column_text (
+ EventViewRows.EVENT_ORIGIN_URI);
+
+ // Load payload
+ unowned uint8[] data = (uint8[])
+ stmt.column_blob(EventViewRows.PAYLOAD);
+ data.length = stmt.column_bytes(EventViewRows.PAYLOAD);
+ if (data != null)
+ {
+ event.payload = new ByteArray();
+ event.payload.append(data);
+ }
+ }
+
+ Subject subject = new Subject ();
+ subject.uri = stmt.column_text (EventViewRows.SUBJECT_URI);
+ subject.text = stmt.column_text (EventViewRows.SUBJECT_TEXT);
+ subject.storage = stmt.column_text (EventViewRows.SUBJECT_STORAGE);
+ subject.origin = stmt.column_text (EventViewRows.SUBJECT_ORIGIN_URI);
+ subject.current_uri = stmt.column_text (
+ EventViewRows.SUBJECT_CURRENT_URI);
+ subject.interpretation = interpretations_table.get_value (
+ stmt.column_int (EventViewRows.SUBJECT_INTERPRETATION));
+ subject.manifestation = manifestations_table.get_value (
+ stmt.column_int (EventViewRows.SUBJECT_MANIFESTATION));
+ subject.mimetype = mimetypes_table.get_value (
+ stmt.column_int (EventViewRows.SUBJECT_MIMETYPE));
+
+ event.add_subject(subject);
+ }
+ if (rc != Sqlite.DONE)
+ {
+ throw new EngineError.DATABASE_ERROR ("Error: %d, %s\n",
+ rc, db.errmsg ());
+ }
+
+ // statement may get reused in next iteration, make sure it's reset
+ rc = stmt.reset ();
+ if (rc != Sqlite.OK)
+ {
+ throw new EngineError.DATABASE_ERROR ("Error: %d, %s\n",
+ rc, db.errmsg ());
+ }
+
+ if (event != null)
+ {
+ events.add_value (event.to_variant ());
+ }
+ else
+ {
+ events.add_value (Events.get_null_event_variant ());
+ }
+ }
+
+ Variant v = events.end ();
+
+ extension_collection.call_post_get_events (v, sender);
+
+ return v;
+ }
+
+ public uint32[] find_event_ids (TimeRange time_range,
+ GenericArray<Event> event_templates,
+ uint storage_state, uint max_events, uint result_type,
+ BusName? sender=null) throws EngineError
+ {
+
+ WhereClause where = new WhereClause (WhereClause.Type.AND);
+
+ /**
+ * We are using the unary operator here to tell SQLite not to use
+ * the index on the timestamp column in the first place. This is a
+ * "fix" for (LP: #672965) based on some benchmarks, which suggest
+ * a performance win, but we might be overlooking some implications.
+ * (See http://www.sqlite.org/optoverview.html, section 6.0).
+ * -- Markus Korn, 29/11/2010
+ */
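+ // For example (illustrative value): with time_range.start set to
+ // 1320000000000 the condition added below is
+ // "+timestamp >= 1320000000000"; the leading '+' is what keeps
+ // SQLite from using the timestamp index for it.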
+ if (time_range.start != 0)
+ where.add (("+timestamp >= %" + int64.FORMAT).printf(
+ time_range.start));
+ if (time_range.end != 0)
+ where.add (("+timestamp <= %" + int64.FORMAT).printf(
+ time_range.end));
+
+ if (storage_state == StorageState.AVAILABLE ||
+ storage_state == StorageState.NOT_AVAILABLE)
+ {
+ where.add ("(subj_storage_state=? OR subj_storage_state IS NULL)",
+ storage_state.to_string ());
+ }
+ else if (storage_state != StorageState.ANY)
+ {
+ throw new EngineError.INVALID_ARGUMENT(
+ "Unknown storage state '%u'".printf(storage_state));
+ }
+
+ WhereClause tpl_conditions = get_where_clause_from_event_templates (
+ event_templates);
+ where.extend (tpl_conditions);
+ //if (!where.may_have_results ())
+ // return new uint32[0];
+
+ string sql = "SELECT DISTINCT id FROM event_view ";
+ string where_sql = "";
+ if (!where.is_empty ())
+ {
+ where_sql = "WHERE " + where.get_sql_conditions ();
+ }
+
+ switch (result_type)
+ {
+ case ResultType.MOST_RECENT_EVENTS:
+ sql += where_sql + " ORDER BY timestamp DESC";
+ break;
+ case ResultType.LEAST_RECENT_EVENTS:
+ sql += where_sql + " ORDER BY timestamp ASC";
+ break;
+ case ResultType.MOST_RECENT_EVENT_ORIGIN:
+ sql += group_and_sort ("origin", where_sql, false);
+ break;
+ case ResultType.LEAST_RECENT_EVENT_ORIGIN:
+ sql += group_and_sort ("origin", where_sql, true);
+ break;
+ case ResultType.MOST_POPULAR_EVENT_ORIGIN:
+ sql += group_and_sort ("origin", where_sql, false, false);
+ break;
+ case ResultType.LEAST_POPULAR_EVENT_ORIGIN:
+ sql += group_and_sort ("origin", where_sql, true, true);
+ break;
+ case ResultType.MOST_RECENT_SUBJECTS:
+ sql += group_and_sort ("subj_id", where_sql, false);
+ break;
+ case ResultType.LEAST_RECENT_SUBJECTS:
+ sql += group_and_sort ("subj_id", where_sql, true);
+ break;
+ case ResultType.MOST_POPULAR_SUBJECTS:
+ sql += group_and_sort ("subj_id", where_sql, false, false);
+ break;
+ case ResultType.LEAST_POPULAR_SUBJECTS:
+ sql += group_and_sort ("subj_id", where_sql, true, true);
+ break;
+ case ResultType.MOST_RECENT_CURRENT_URI:
+ sql += group_and_sort ("subj_id_current", where_sql, false);
+ break;
+ case ResultType.LEAST_RECENT_CURRENT_URI:
+ sql += group_and_sort ("subj_id_current", where_sql, true);
+ break;
+ case ResultType.MOST_POPULAR_CURRENT_URI:
+ sql += group_and_sort ("subj_id_current", where_sql,
+ false, false);
+ break;
+ case ResultType.LEAST_POPULAR_CURRENT_URI:
+ sql += group_and_sort ("subj_id_current", where_sql,
+ true, true);
+ break;
+ case ResultType.MOST_RECENT_ACTOR:
+ sql += group_and_sort ("actor", where_sql, false);
+ break;
+ case ResultType.LEAST_RECENT_ACTOR:
+ sql += group_and_sort ("actor", where_sql, true);
+ break;
+ case ResultType.MOST_POPULAR_ACTOR:
+ sql += group_and_sort ("actor", where_sql, false, false);
+ break;
+ case ResultType.LEAST_POPULAR_ACTOR:
+ sql += group_and_sort ("actor", where_sql, true, true);
+ break;
+ case ResultType.OLDEST_ACTOR:
+ sql += group_and_sort ("actor", where_sql, true, null, "min");
+ break;
+ case ResultType.MOST_RECENT_ORIGIN:
+ sql += group_and_sort ("subj_origin", where_sql, false);
+ break;
+ case ResultType.LEAST_RECENT_ORIGIN:
+ sql += group_and_sort ("subj_origin", where_sql, true);
+ break;
+ case ResultType.MOST_POPULAR_ORIGIN:
+ sql += group_and_sort ("subj_origin", where_sql, false, false);
+ break;
+ case ResultType.LEAST_POPULAR_ORIGIN:
+ sql += group_and_sort ("subj_origin", where_sql, true, true);
+ break;
+ case ResultType.MOST_RECENT_SUBJECT_INTERPRETATION:
+ sql += group_and_sort ("subj_interpretation", where_sql, false);
+ break;
+ case ResultType.LEAST_RECENT_SUBJECT_INTERPRETATION:
+ sql += group_and_sort ("subj_interpretation", where_sql, true);
+ break;
+ case ResultType.MOST_POPULAR_SUBJECT_INTERPRETATION:
+ sql += group_and_sort ("subj_interpretation", where_sql,
+ false, false);
+ break;
+ case ResultType.LEAST_POPULAR_SUBJECT_INTERPRETATION:
+ sql += group_and_sort ("subj_interpretation", where_sql,
+ true, true);
+ break;
+ case ResultType.MOST_RECENT_MIMETYPE:
+ sql += group_and_sort ("subj_mimetype", where_sql, false);
+ break;
+ case ResultType.LEAST_RECENT_MIMETYPE:
+ sql += group_and_sort ("subj_mimetype", where_sql, true);
+ break;
+ case ResultType.MOST_POPULAR_MIMETYPE:
+ sql += group_and_sort ("subj_mimetype", where_sql,
+ false, false);
+ break;
+ case ResultType.LEAST_POPULAR_MIMETYPE:
+ sql += group_and_sort ("subj_mimetype", where_sql,
+ true, true);
+ break;
+ default:
+ string error_message = "Invalid ResultType.";
+ warning (error_message);
+ throw new EngineError.INVALID_ARGUMENT (error_message);
+ }
+
+ if (max_events > 0)
+ sql += " LIMIT %u".printf (max_events);
+
+ int rc;
+ Sqlite.Statement stmt;
+
+ rc = db.prepare_v2 (sql, -1, out stmt);
+ database.assert_query_success(rc, "SQL error");
+
+ var arguments = where.get_bind_arguments ();
+ for (int i = 0; i < arguments.length; ++i)
+ stmt.bind_text (i + 1, arguments[i]);
+
+ uint32[] event_ids = {};
+
+ while ((rc = stmt.step()) == Sqlite.ROW)
+ {
+ var id = (uint32) uint64.parse(
+ stmt.column_text (EventViewRows.ID));
+ event_ids += id;
+ }
+ if (rc != Sqlite.DONE)
+ {
+ string error_message = "Error in find_event_ids: %d, %s".printf (
+ rc, db.errmsg ());
+ warning (error_message);
+ throw new EngineError.DATABASE_ERROR (error_message);
+ }
+
+ return event_ids;
+ }
+
+ public Variant find_events (TimeRange time_range,
+ GenericArray<Event> event_templates,
+ uint storage_state, uint max_events, uint result_type,
+ BusName? sender=null) throws EngineError
+ {
+ return get_events (find_event_ids (time_range, event_templates,
+ storage_state, max_events, result_type));
+ }
+
+ private struct RelatedUri {
+ public uint32 id;
+ public int64 timestamp;
+ public string uri;
+ public int32 counter;
+ }
+
+ public string[] find_related_uris (TimeRange time_range,
+ GenericArray<Event> event_templates,
+ GenericArray<Event> result_event_templates,
+ uint storage_state, uint max_results, uint result_type,
+ BusName? sender=null) throws EngineError
+ {
+ /**
+ * Return a list of subject URIs commonly used together with events
+ * matching the given template, considering data from within the
+ * indicated time range.
+ * Only URIs for subjects matching the indicated `result_event_templates`
+ * and `result_storage_state` are returned.
+ */
+ if (result_type == ResultType.MOST_RECENT_EVENTS ||
+ result_type == ResultType.LEAST_RECENT_EVENTS)
+ {
+
+ // Pick out the ids of the events matching event_templates; these
+ // serve as the roots of the co-occurrence graphs built below.
+ uint32[] ids = find_event_ids (time_range, event_templates,
+ storage_state, 0, ResultType.LEAST_RECENT_EVENTS);
+
+ if (event_templates.length > 0 && ids.length == 0)
+ {
+ throw new EngineError.INVALID_ARGUMENT (
+ "No results found for the event_templates");
+ }
+
+ // Pick out the result_ids for the filtered results we would like
+ // to take into account; these ids are taken from the events that
+ // match the result_event_templates. If no result_event_templates
+ // are given, all results are allowed.
+ uint32[] result_ids;
+ result_ids = find_event_ids (time_range, result_event_templates,
+ storage_state, 0, ResultType.LEAST_RECENT_EVENTS);
+
+ // From here on we create several graphs with a maximum depth of 2
+ // and push all the nodes (events) together into one pot.
+
+ uint32[] pot = new uint32[ids.length + result_ids.length];
+
+ for (uint32 i=0; i < ids.length; i++)
+ pot[i] = ids[i];
+ for (uint32 i=0; i < result_ids.length; i++)
+ pot[ids.length + i] = result_ids[i];
+
+ Sqlite.Statement stmt;
+
+ var sql_event_ids = database.get_sql_string_from_event_ids (pot);
+ string sql = """
+ SELECT id, timestamp, subj_uri FROM event_view
+ WHERE id IN (%s) ORDER BY timestamp ASC
+ """.printf (sql_event_ids);
+
+ int rc = db.prepare_v2 (sql, -1, out stmt);
+
+ database.assert_query_success(rc, "SQL error");
+
+ // FIXME: fix this ugly code
+ var temp_related_uris = new GenericArray<RelatedUri?>();
+
+ while ((rc = stmt.step()) == Sqlite.ROW)
+ {
+ RelatedUri ruri = RelatedUri(){
+ id = (uint32) uint64.parse(stmt.column_text (0)),
+ timestamp = stmt.column_int64 (1),
+ uri = stmt.column_text (2),
+ counter = 0
+ };
+ temp_related_uris.add (ruri);
+ }
+
+ // RelatedUri[] related_uris = new RelatedUri[temp_related_uris.length];
+ // for (int i=0; i<related_uris.length; i++)
+ // related_uris[i] = temp_related_uris[i];
+
+ if (rc != Sqlite.DONE)
+ {
+ string error_message =
+ "Error in find_related_uris: %d, %s".printf (
+ rc, db.errmsg ());
+ warning (error_message);
+ throw new EngineError.DATABASE_ERROR (error_message);
+ }
+
+ var uri_counter = new HashTable<string, RelatedUri?>(
+ str_hash, str_equal);
+
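+ // Slide a window over the (up to five) events preceding each event;
+ // whenever that window contains one of the root events (ids), bump
+ // the usage counter and last-seen timestamp of every URI in the
+ // window. (Comment added for clarity.)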
+ for (int i = 0; i < temp_related_uris.length; i++)
+ {
+ var window = new GenericArray<unowned RelatedUri?>();
+
+ bool count_in_window = false;
+ for (int j = int.max (0, i - 5);
+ j < int.min (i, temp_related_uris.length);
+ j++)
+ {
+ window.add(temp_related_uris[j]);
+ if (temp_related_uris[j].id in ids)
+ count_in_window = true;
+ }
+
+ if (count_in_window)
+ {
+ for (int j = 0; j < window.length; j++)
+ {
+ if (uri_counter.lookup (window[j].uri) == null)
+ {
+ RelatedUri ruri = RelatedUri ()
+ {
+ id = window[j].id,
+ timestamp = window[j].timestamp,
+ uri = window[j].uri,
+ counter = 0
+ };
+ uri_counter.insert (window[j].uri, ruri);
+ }
+ uri_counter.lookup (window[j].uri).counter++;
+ if (uri_counter.lookup (window[j].uri).timestamp
+ < window[j].timestamp)
+ {
+ uri_counter.lookup (window[j].uri).timestamp =
+ window[j].timestamp;
+ }
+ }
+ }
+ }
+
+
+ // We have the big hashtable with the structs; now we sort them by
+ // most used, limit the result and then sort again.
+ List<RelatedUri?> temp_ruris = new List<RelatedUri?>();
+ List<RelatedUri?> values = new List<RelatedUri?>();
+
+ foreach (var uri in uri_counter.get_values())
+ values.append(uri);
+
+ values.sort ((a, b) => a.counter - b.counter);
+ values.sort ((a, b) => {
+ int64 delta = a.timestamp - b.timestamp;
+ if (delta < 0) return 1;
+ else if (delta > 0) return -1;
+ else return 0;
+ });
+
+ foreach (RelatedUri ruri in values)
+ {
+ if (temp_ruris.length() < max_results)
+ temp_ruris.append(ruri);
+ else
+ break;
+ }
+
+ // Sort by recency
+ if (result_type == 1)
+ temp_ruris.sort ((a, b) => {
+ int64 delta = a.timestamp - b.timestamp;
+ if (delta < 0) return 1;
+ else if (delta > 0) return -1;
+ else return 0;});
+
+ string[] results = new string[temp_ruris.length()];
+
+ int i = 0;
+ foreach (var uri in temp_ruris)
+ {
+ results[i] = uri.uri;
+ stdout.printf("%i %lld %s\n", uri.counter,
+ uri.timestamp,
+ uri.uri);
+ i++;
+ }
+
+ return results;
+ }
+ else
+ {
+ throw new EngineError.DATABASE_ERROR ("Unsupported ResultType.");
+ }
+ }
+
+ public uint32[] insert_events (GenericArray<Event> events,
+ BusName? sender=null) throws EngineError
+ {
+ extension_collection.call_pre_insert_events (events, sender);
+ uint32[] event_ids = new uint32[events.length];
+ database.begin_transaction ();
+ for (int i = 0; i < events.length; ++i)
+ {
+ if (events[i] != null)
+ event_ids[i] = insert_event (events[i], sender);
+ }
+ database.end_transaction ();
+ extension_collection.call_post_insert_events (events, sender);
+ return event_ids;
+ }
+
+ public uint32 insert_event (Event event,
+ BusName? sender=null) throws EngineError
+ requires (event.id == 0)
+ requires (event.num_subjects () > 0)
+ {
+ event.id = ++last_id;
+
+ // Make sure all the URIs, texts and storage are inserted
+ {
+ var uris = new GenericArray<string> ();
+ var texts = new GenericArray<string> ();
+ var storages = new GenericArray<string> ();
+
+ if (event.origin != "")
+ uris.add (event.origin);
+
+ for (int i = 0; i < event.num_subjects(); ++i)
+ {
+ unowned Subject subject = event.subjects[i];
+ uris.add (subject.uri);
+
+ if (subject.current_uri == "" || subject.current_uri == null)
+ subject.current_uri = subject.uri;
+
+ if (event.interpretation == ZG.MOVE_EVENT
+ && subject.uri == subject.current_uri)
+ {
+ throw new EngineError.INVALID_ARGUMENT (
+ "Redundant event: event.interpretation indicates " +
+ "the uri has been moved yet the subject.uri and " +
+ "subject.current_uri are identical");
+ }
+ else if (event.interpretation != ZG.MOVE_EVENT
+ && subject.uri != subject.current_uri)
+ {
+ throw new EngineError.INVALID_ARGUMENT (
+ "Illegal event: unless event.interpretation is " +
+ "'MOVE_EVENT' then subject.uri and " +
+ "subject.current_uri have to be the same");
+ }
+
+ uris.add (subject.current_uri);
+
+ if (subject.origin != "")
+ uris.add (subject.origin);
+ if (subject.text != "")
+ texts.add (subject.text);
+ if (subject.storage != "")
+ storages.add (subject.storage);
+ }
+
+ try
+ {
+ if (uris.length > 0)
+ database.insert_or_ignore_into_table ("uri", uris);
+ if (texts.length > 0)
+ database.insert_or_ignore_into_table ("text", texts);
+ if (storages.length > 0)
+ database.insert_or_ignore_into_table ("storage", storages);
+ }
+ catch (EngineError e)
+ {
+ warning ("Can't insert data for event: " + e.message);
+ return 0;
+ }
+ }
+
+ var payload_id = store_payload (event);
+
+ // FIXME: Should we add something just like TableLookup but with LRU
+ // for those? Or is embedding the query faster? Needs testing!
+
+ int rc;
+ unowned Sqlite.Statement insert_stmt = database.event_insertion_stmt;
+
+ // We need to call reset here (even if we do so again in the subjects
+ // loop) since calling .bind_* after a .step() invocation is illegal.
+ insert_stmt.reset ();
+
+ insert_stmt.bind_int64 (1, event.id);
+ insert_stmt.bind_int64 (2, event.timestamp);
+ insert_stmt.bind_int64 (3,
+ interpretations_table.get_id (event.interpretation));
+ insert_stmt.bind_int64 (4,
+ manifestations_table.get_id (event.manifestation));
+ insert_stmt.bind_int64 (5, actors_table.get_id (event.actor));
+ insert_stmt.bind_text (6, event.origin);
+ insert_stmt.bind_int64 (7, payload_id);
+
+ for (int i = 0; i < event.num_subjects(); ++i)
+ {
+ insert_stmt.reset();
+
+ unowned Subject subject = event.subjects[i];
+
+ insert_stmt.bind_text (8, subject.uri);
+ insert_stmt.bind_text (9, subject.current_uri);
+ insert_stmt.bind_int64 (10,
+ interpretations_table.get_id (subject.interpretation));
+ insert_stmt.bind_int64 (11,
+ manifestations_table.get_id (subject.manifestation));
+ insert_stmt.bind_text (12, subject.origin);
+ insert_stmt.bind_int64 (13,
+ mimetypes_table.get_id (subject.mimetype));
+ insert_stmt.bind_text (14, subject.text);
+ // FIXME: Consider a storages_table table. Too dangerous?
+ insert_stmt.bind_text (15, subject.storage);
+
+ if ((rc = insert_stmt.step()) != Sqlite.DONE) {
+ if (rc != Sqlite.CONSTRAINT)
+ {
+ warning ("SQL error: %d, %s\n", rc, db.errmsg ());
+ return 0;
+ }
+ // This event was already registered.
+ // Rollback last_id and return the ID of the original event
+ --last_id;
+
+ unowned Sqlite.Statement retrieval_stmt =
+ database.id_retrieval_stmt;
+
+ retrieval_stmt.reset ();
+
+ retrieval_stmt.bind_int64 (1, event.timestamp);
+ retrieval_stmt.bind_int64 (2,
+ interpretations_table.get_id (event.interpretation));
+ retrieval_stmt.bind_int64 (3,
+ manifestations_table.get_id (event.manifestation));
+ retrieval_stmt.bind_int64 (4, actors_table.get_id (event.actor));
+
+ if ((rc = retrieval_stmt.step ()) != Sqlite.ROW) {
+ warning ("SQL error: %d, %s\n", rc, db.errmsg ());
+ return 0;
+ }
+
+ return retrieval_stmt.column_int (0);
+ }
+ }
+
+ if (event.interpretation == ZG.MOVE_EVENT)
+ {
+ handle_move_event (event);
+ }
+
+ return event.id;
+ }
+
+ public TimeRange? delete_events (uint32[] event_ids, BusName? sender)
+ throws EngineError
+ requires (event_ids.length > 0)
+ {
+ event_ids = extension_collection.call_pre_delete_events (
+ event_ids, sender);
+
+ TimeRange? time_range = database.get_time_range_for_event_ids (
+ event_ids);
+
+ string sql_event_ids = database.get_sql_string_from_event_ids (
+ event_ids);
+
+ if (time_range == null)
+ {
+ warning ("Tried to delete non-existing event(s): %s".printf (
+ sql_event_ids));
+ return null;
+ }
+
+ int rc = db.exec ("DELETE FROM event WHERE id IN (%s)".printf(
+ sql_event_ids), null, null);
+ database.assert_query_success (rc, "SQL Error");
+ message ("Deleted %d (out of %d) events.".printf (
+ db.changes(), event_ids.length));
+
+ extension_collection.call_post_delete_events (event_ids, sender);
+
+ return time_range;
+ }
+
+ /**
+ * Clear all resources Engine is using (close database connection,
+ * unload extensions, etc.).
+ *
+ * After executing this method on an Engine instance, no other function
+ * of said instance may be called.
+ */
+ public void close ()
+ {
+ // We delete the ExtensionCollection here so that it unloads
+ // all extensions and they get a chance to access the database
+ // (including through ExtensionStore) before it's closed.
+ extension_collection = null;
+ database.close ();
+ }
+
+ // Used by find_event_ids
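+ // Illustrative example: group_and_sort ("actor", where_sql, false)
+ // appends a NATURAL JOIN against a subquery keeping the newest (max)
+ // timestamp per actor and orders the result by timestamp DESC; a
+ // non-null count_asc additionally sorts by the per-group event count.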
+ private string group_and_sort (string field, string where_sql,
+ bool time_asc=false, bool? count_asc=null,
+ string aggregation_type="max")
+ {
+ string time_sorting = (time_asc) ? "ASC" : "DESC";
+ string aggregation_sql = "";
+ string order_sql = "";
+
+ if (count_asc != null)
+ {
+ aggregation_sql = ", COUNT(%s) AS num_events".printf (field);
+ order_sql = "num_events %s,".printf ((count_asc) ? "ASC" : "DESC");
+ }
+
+ return """
+ NATURAL JOIN (
+ SELECT %s,
+ %s(timestamp) AS timestamp
+ %s
+ FROM event_view %s
+ GROUP BY %s)
+ GROUP BY %s
+ ORDER BY %s timestamp %s
+ """.printf (
+ field,
+ aggregation_type,
+ aggregation_sql,
+ where_sql,
+ field,
+ field,
+ order_sql, time_sorting);
+ }
+
+ // Used by find_event_ids
+ private WhereClause get_where_clause_from_event_templates (
+ GenericArray<Event> templates) throws EngineError
+ {
+ WhereClause where = new WhereClause (WhereClause.Type.OR);
+ for (int i = 0; i < templates.length; ++i)
+ {
+ Event event_template = templates[i];
+ where.extend (
+ get_where_clause_from_event_template (event_template));
+ }
+ return where;
+ }
+
+ // Used by get_where_clause_from_event_templates
+ private WhereClause get_where_clause_from_event_template (Event template)
+ throws EngineError
+ {
+ WhereClause where = new WhereClause (WhereClause.Type.AND);
+
+ // Event ID
+ if (template.id != 0)
+ where.add ("id=?", template.id.to_string());
+
+ // Interpretation
+ if (template.interpretation != "")
+ {
+ assert_no_wildcard ("interpretation", template.interpretation);
+ WhereClause subwhere = get_where_clause_for_symbol (
+ "interpretation", template.interpretation,
+ interpretations_table);
+ if (!subwhere.is_empty ())
+ where.extend (subwhere);
+ }
+
+ // Manifestation
+ if (template.manifestation != "")
+ {
+ assert_no_wildcard ("manifestation", template.interpretation);
+ WhereClause subwhere = get_where_clause_for_symbol (
+ "manifestation", template.manifestation,
+ manifestations_table);
+ if (!subwhere.is_empty ())
+ where.extend (subwhere);
+ }
+
+ // Actor
+ if (template.actor != "")
+ {
+ string val = template.actor;
+ bool like = parse_wildcard (ref val);
+ bool negated = parse_negation (ref val);
+
+ if (like)
+ where.add_wildcard_condition ("actor", val, negated);
+ else
+ where.add_match_condition ("actor",
+ actors_table.get_id (val), negated);
+ }
+
+ // Origin
+ if (template.origin != "")
+ {
+ string val = template.origin;
+ bool like = parse_wildcard (ref val);
+ bool negated = parse_negation (ref val);
+
+ if (like)
+ where.add_wildcard_condition ("origin", val, negated);
+ else
+ where.add_text_condition_subquery ("origin", val, negated);
+ }
+
+ // Subject templates within the same event template are AND'd
+ // See LP bug #592599.
+ for (int i = 0; i < template.num_subjects(); ++i)
+ {
+ Subject subject_template = template.subjects[i];
+
+ // Subject interpretation
+ if (subject_template.interpretation != "")
+ {
+ assert_no_wildcard ("subject interpretation",
+ subject_template.interpretation);
+ WhereClause subwhere = get_where_clause_for_symbol (
+ "subj_interpretation", subject_template.interpretation,
+ interpretations_table);
+ if (!subwhere.is_empty ())
+ where.extend (subwhere);
+ }
+
+ // Subject manifestation
+ if (subject_template.manifestation != "")
+ {
+ assert_no_wildcard ("subject manifestation",
+ subject_template.manifestation);
+ WhereClause subwhere = get_where_clause_for_symbol (
+ "subj_manifestation", subject_template.manifestation,
+ manifestations_table);
+ if (!subwhere.is_empty ())
+ where.extend (subwhere);
+ }
+
+ // Mime-Type
+ if (subject_template.mimetype != "")
+ {
+ string val = subject_template.mimetype;
+ bool like = parse_wildcard (ref val);
+ bool negated = parse_negation (ref val);
+
+ if (like)
+ where.add_wildcard_condition (
+ "subj_mimetype", val, negated);
+ else
+ where.add_match_condition ("subj_mimetype",
+ mimetypes_table.get_id (val), negated);
+ }
+
+ // URI
+ if (subject_template.uri != "")
+ {
+ string val = subject_template.uri;
+ bool like = parse_wildcard (ref val);
+ bool negated = parse_negation (ref val);
+
+ if (like)
+ where.add_wildcard_condition ("subj_id", val, negated);
+ else
+ where.add_text_condition_subquery ("subj_id", val, negated);
+ }
+
+ // Origin
+ if (subject_template.origin != "")
+ {
+ string val = subject_template.origin;
+ bool like = parse_wildcard (ref val);
+ bool negated = parse_negation (ref val);
+
+ if (like)
+ where.add_wildcard_condition (
+ "subj_origin", val, negated);
+ else
+ where.add_text_condition_subquery (
+ "subj_origin", val, negated);
+ }
+
+ // Text
+ if (subject_template.text != "")
+ {
+ // Negation and prefix search isn't supported for
+ // subject texts, but "!" and "*" are valid as
+ // plain text characters.
+ where.add_text_condition_subquery ("subj_text_id",
+ subject_template.text, false);
+ }
+
+ // Current URI
+ if (subject_template.current_uri != "")
+ {
+ string val = subject_template.current_uri;
+ bool like = parse_wildcard (ref val);
+ bool negated = parse_negation (ref val);
+
+ if (like)
+ where.add_wildcard_condition (
+ "subj_id_current", val, negated);
+ else
+ where.add_text_condition_subquery (
+ "subj_id_current", val, negated);
+ }
+
+ // Subject storage
+ if (subject_template.storage != "")
+ {
+ string val = subject_template.storage;
+ assert_no_negation ("subject storage", val);
+ assert_no_wildcard ("subject storage", val);
+ where.add_text_condition_subquery ("subj_storage_id", val);
+ }
+ }
+
+ return where;
+ }
+
+ // Used by get_where_clause_from_event_templates
+ /**
+ * Check if the value starts with the negation operator. If it does,
+ * remove the operator from the value and return true. Otherwise,
+ * return false.
+ */
+ public static bool parse_negation (ref string val)
+ {
+ if (!val.has_prefix ("!"))
+ return false;
+ val = val.substring (1);
+ return true;
+ }
+
+ // Used by get_where_clause_from_event_templates
+ /**
+ * If the value starts with the negation operator, throw an
+ * error.
+ */
+ protected void assert_no_negation (string field, string val)
+ throws EngineError
+ {
+ if (!val.has_prefix ("!"))
+ return;
+ string error_message =
+ "Field '%s' doesn't support negation".printf (field);
+ warning (error_message);
+ throw new EngineError.INVALID_ARGUMENT (error_message);
+ }
+
+ // Used by get_where_clause_from_event_templates
+ /**
+ * Check if the value ends with the wildcard character. If it does,
+ * remove the wildcard character from the value and return true.
+ * Otherwise, return false.
+ */
+ public static bool parse_wildcard (ref string val)
+ {
+ if (!val.has_suffix ("*"))
+ return false;
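+ // Strip the trailing '*' in place by overwriting it with a NUL byte.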
+ unowned uint8[] val_data = val.data;
+ val_data[val_data.length-1] = '\0';
+ return true;
+ }
+
+ // Used by get_where_clause_from_event_templates
+ /**
+ * If the value ends with the wildcard character, throw an error.
+ */
+ protected void assert_no_wildcard (string field, string val)
+ throws EngineError
+ {
+ if (!val.has_suffix ("*"))
+ return;
+ string error_message =
+ "Field '%s' doesn't support prefix search".printf (field);
+ warning (error_message);
+ throw new EngineError.INVALID_ARGUMENT (error_message);
+ }
+
+ protected WhereClause get_where_clause_for_symbol (string table_name,
+ string symbol, TableLookup lookup_table) throws EngineError
+ {
+ string _symbol = symbol;
+ bool negated = parse_negation (ref _symbol);
+ List<unowned string> symbols = Symbol.get_all_children (_symbol);
+ symbols.prepend (_symbol);
+
+ WhereClause subwhere = new WhereClause(
+ WhereClause.Type.OR, negated);
+
+ /*
+ FIXME: what is this?
+ foreach (unowned string uri in symbols)
+ {
+ subwhere.add_match_condition (table_name,
+ lookup_table.get_id (uri));
+ }
+ */
+ if (symbols.length () == 1)
+ {
+ subwhere.add_match_condition (table_name,
+ lookup_table.get_id (_symbol));
+ }
+ else
+ {
+ var sb = new StringBuilder ();
+ foreach (string uri in symbols)
+ {
+ sb.append_printf ("%d,", lookup_table.get_id (uri));
+ }
+ sb.truncate (sb.len - 1);
+
+ string sql = "%s IN (%s)".printf(table_name, sb.str);
+ subwhere.add(sql);
+ }
+
+ return subwhere;
+ }
+
+ private void handle_move_event (Event event)
+ {
+ for (int i = 0; i < event.subjects.length; i++)
+ {
+ Subject subject = event.subjects[i];
+ int rc;
+ unowned Sqlite.Statement move_stmt = database.move_handling_stmt;
+ move_stmt.reset();
+ move_stmt.bind_text (1, subject.current_uri);
+ move_stmt.bind_text (2, subject.uri);
+ move_stmt.bind_text (3, event.interpretation);
+ move_stmt.bind_int64 (4, event.timestamp);
+ if ((rc = move_stmt.step()) != Sqlite.DONE) {
+ if (rc != Sqlite.CONSTRAINT)
+ {
+ warning ("SQL error: %d, %s\n", rc, db.errmsg ());
+ }
+ }
+ }
+ }
+
+ private int64 store_payload (Event event)
+ {
+ /**
+ * TODO: Right now payloads are not unique and every event has its
+ * own one. We could optimize this to store those which are repeated
+ * for different events only once, especially considering that
+ * events cannot be modified once they've been inserted.
+ */
+ if (event.payload != null)
+ {
+ int rc;
+ unowned Sqlite.Statement payload_insertion_stmt =
+ database.payload_insertion_stmt;
+ payload_insertion_stmt.reset ();
+ payload_insertion_stmt.bind_blob (1, event.payload.data,
+ event.payload.data.length);
+ if ((rc = payload_insertion_stmt.step ()) != Sqlite.DONE)
+ if (rc != Sqlite.CONSTRAINT)
+ warning ("SQL error: %d, %s\n", rc, db.errmsg ());
+
+ return database.database.last_insert_rowid ();
+ }
+ return 0;
+ }
+
+ private void delete_from_cache (string table, int64 rowid)
+ {
+ TableLookup table_lookup;
+
+ if (table == "interpretation")
+ table_lookup = interpretations_table;
+ else if (table == "manifestation")
+ table_lookup = manifestations_table;
+ else if (table == "mimetype")
+ table_lookup = mimetypes_table;
+ else if (table == "actor")
+ table_lookup = actors_table;
+ else
+ return;
+
+ table_lookup.remove((int) rowid);
+ }
+
+}
+
+}
+
+// vim:expandtab:ts=4:sw=4
=== added file 'src/errors.vala'
--- src/errors.vala 1970-01-01 00:00:00 +0000
+++ src/errors.vala 2011-10-19 08:09:50 +0000
@@ -0,0 +1,51 @@
+/* errors.vala
+ *
+ * Copyright © 2011 Collabora Ltd.
+ * By Siegfried-Angel Gevatter Pujals <siegfried@xxxxxxxxxxxx>
+ *
+ * This program is free software: you can redistribute it and/or modify
+ * it under the terms of the GNU Lesser General Public License as published by
+ * the Free Software Foundation, either version 2.1 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ *
+ */
+
+namespace Zeitgeist
+{
+ [DBus (name = "org.gnome.zeitgeist.EngineError")]
+ public errordomain EngineError
+ {
+ DATABASE_ERROR,
+ INVALID_ARGUMENT,
+ INVALID_KEY,
+ }
+
+ // Vala doesn't include the proper headers on its own; this works around it
+ private static void vala_bug_workaround ()
+ {
+ try
+ {
+ Bus.get_sync (BusType.SESSION, null);
+ }
+ catch (Error err)
+ {
+ i_know_its_unused ();
+ }
+ }
+
+ // This mutual reference exists only to silence Vala's unused-method warning
+ private static void i_know_its_unused ()
+ {
+ vala_bug_workaround ();
+ }
+}
+
+// vim:expandtab:ts=4:sw=4
=== added symlink 'src/ext-blacklist.vala'
=== target is u'../extensions/blacklist.vala'
=== added symlink 'src/ext-data-source-registry.vala'
=== target is u'../extensions/ds-registry.vala'
=== added symlink 'src/ext-fts.vala'
=== target is u'../extensions/fts.vala'
=== added symlink 'src/ext-histogram.vala'
=== target is u'../extensions/histogram.vala'
=== added symlink 'src/ext-storage-monitor.vala'
=== target is u'../extensions/storage-monitor.vala'
=== added file 'src/extension-collection.vala'
--- src/extension-collection.vala 1970-01-01 00:00:00 +0000
+++ src/extension-collection.vala 2011-10-19 08:09:50 +0000
@@ -0,0 +1,179 @@
+/* extension-collection.vala
+ *
+ * Copyright © 2011 Michal Hruby <michal.mhr@xxxxxxxxx>
+ *
+ * This program is free software: you can redistribute it and/or modify
+ * it under the terms of the GNU Lesser General Public License as published by
+ * the Free Software Foundation, either version 2.1 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ *
+ */
+
+namespace Zeitgeist
+{
+ public class ExtensionCollection : Object
+ {
+ private GenericArray<Extension> extensions;
+
+ public unowned Engine engine { get; construct; }
+
+ public ExtensionCollection (Engine engine)
+ {
+ Object (engine: engine);
+ }
+
+ ~ExtensionCollection ()
+ {
+ extensions.foreach ((ext) => { ext.unload (); });
+ }
+
+ construct
+ {
+ Extension? extension;
+ extensions = new GenericArray<Extension> ();
+
+ // load the builtin extensions first
+#if BUILTIN_EXTENSIONS
+ RegisterExtensionFunc[] builtins =
+ {
+ data_source_registry_init,
+ blacklist_init,
+ histogram_init,
+ storage_monitor_init,
+ fts_init
+ };
+
+ foreach (var func in builtins)
+ {
+ ExtensionLoader builtin = new BuiltinExtension (func);
+ extension = builtin.create_instance (engine);
+ if (extension != null) extensions.add (extension);
+ }
+#endif
+
+ // TODO: load extensions from system & user directories, and make
+ // sure the order is correct
+ unowned string ext_dir1 = Utils.get_local_extensions_path ();
+ if (!FileUtils.test (ext_dir1, FileTest.IS_DIR | FileTest.EXISTS))
+ return;
+ Dir? user_ext_dir;
+ try
+ {
+ user_ext_dir = Dir.open (ext_dir1);
+ }
+ catch (Error e)
+ {
+ warning (
+ "Couldn't open local extensions directory: %s", e.message);
+ }
+ if (user_ext_dir != null)
+ {
+ unowned string? file_name = user_ext_dir.read_name ();
+ while (file_name != null)
+ {
+ if (file_name.has_suffix (".so"))
+ {
+ string path = Path.build_filename (ext_dir1, file_name);
+ debug ("Loading extension: \"%s\"", path);
+ var loader = new ModuleLoader (path);
+ // FIXME: check if disabled
+ extension = loader.create_instance (engine);
+ if (extension != null) extensions.add (extension);
+ }
+ else
+ {
+ debug ("Ignored file \"%s/%s\"", ext_dir1, file_name);
+ }
+ file_name = user_ext_dir.read_name ();
+ }
+ }
+ }
+
+ public string[] get_extension_names ()
+ {
+ string[] result = {};
+ for (int i = 0; i < extensions.length; i++)
+ {
+ unowned string ext_name = extensions[i].get_type ().name ();
+ if (ext_name.has_prefix ("Zeitgeist"))
+ result += ext_name.substring (9);
+ else
+ result += ext_name;
+ }
+
+ return result;
+ }
+
+ public void call_pre_insert_events (GenericArray<Event?> events,
+ BusName? sender)
+ {
+ int num_events = events.length;
+ for (int i = 0; i < extensions.length; ++i)
+ {
+ extensions[i].pre_insert_events (events, sender);
+ }
+ assert (num_events == events.length);
+ }
+
+ public void call_post_insert_events (GenericArray<Event?> events,
+ BusName? sender)
+ {
+ int num_events = events.length;
+ for (int i = 0; i < extensions.length; ++i)
+ {
+ extensions[i].post_insert_events (events, sender);
+ }
+ assert (num_events == events.length);
+ }
+
+ public void call_post_get_events (Variant events,
+ BusName? sender)
+ {
+ // GVariant is immutable, so there's no need to verify that extensions don't modify it
+ for (int i = 0; i < extensions.length; ++i)
+ {
+ extensions[i].post_get_events (events, sender);
+ }
+ }
+
+ public unowned uint32[] call_pre_delete_events (uint32[] event_ids,
+ BusName? sender)
+ {
+ for (int i = 0; i < extensions.length; ++i)
+ {
+ uint32[]? filtered_ids = extensions[i].pre_delete_events (
+ event_ids, sender);
+ if (filtered_ids != null)
+ event_ids = filtered_ids;
+ }
+ return event_ids;
+ }
+
+ public void call_post_delete_events (uint32[] event_ids,
+ BusName? sender)
+ {
+ for (int i = 0; i < extensions.length; ++i)
+ {
+ extensions[i].post_delete_events (event_ids, sender);
+ }
+ }
+ }
+
+#if BUILTIN_EXTENSIONS
+ private extern static Type data_source_registry_init (TypeModule mod);
+ private extern static Type blacklist_init (TypeModule mod);
+ private extern static Type histogram_init (TypeModule mod);
+ private extern static Type storage_monitor_init (TypeModule mod);
+ private extern static Type fts_init (TypeModule mod);
+#endif
+
+}
+// vim:expandtab:ts=4:sw=4
=== added file 'src/extension-store.vala'
--- src/extension-store.vala 1970-01-01 00:00:00 +0000
+++ src/extension-store.vala 2011-10-19 08:09:50 +0000
@@ -0,0 +1,124 @@
+/* extension-store.vala
+ *
+ * Copyright © 2011 Collabora Ltd.
+ * By Siegfried-Angel Gevatter Pujals <siegfried@xxxxxxxxxxxx>
+ *
+ * This program is free software: you can redistribute it and/or modify
+ * it under the terms of the GNU Lesser General Public License as published by
+ * the Free Software Foundation, either version 2.1 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ *
+ */
+
+using Zeitgeist;
+
+namespace Zeitgeist
+{
+ public class ExtensionStore : Object
+ {
+
+ private Zeitgeist.SQLite.ZeitgeistDatabase database;
+ private unowned Sqlite.Database db;
+ private Sqlite.Statement storage_stmt;
+ private Sqlite.Statement retrieval_stmt;
+
+ public ExtensionStore (Zeitgeist.Engine engine) {
+ database = engine.database;
+ db = database.database;
+ try
+ {
+ prepare_queries ();
+ }
+ catch (Error err)
+ {
+ warning ("%s", err.message);
+ }
+ }
+
+ private void prepare_queries () throws EngineError
+ {
+ int rc;
+ string sql;
+
+ // Prepare storage query
+ sql = """
+ INSERT OR REPLACE INTO extensions_conf (
+ extension, key, value
+ ) VALUES (
+ ?, ?, ?
+ )""";
+ rc = database.database.prepare_v2 (sql, -1, out storage_stmt);
+ database.assert_query_success (rc, "Storage query error");
+
+ // Prepare retrieval query
+ sql = """
+ SELECT value
+ FROM extensions_conf
+ WHERE extension=? AND key=?
+ """;
+ rc = database.database.prepare_v2 (sql, -1, out retrieval_stmt);
+ database.assert_query_success (rc, "Retrieval query error");
+ }
+
+ /**
+ * Store the given Variant under the given (extension, key)
+ * identifier, replacing any previous value.
+ */
+ public void store (string extension, string key, Variant data)
+ {
+ int rc;
+ storage_stmt.reset ();
+ storage_stmt.bind_text (1, extension);
+ storage_stmt.bind_text (2, key);
+ storage_stmt.bind_blob (3, data.get_data (), (int) data.get_size ());
+
+ if ((rc = storage_stmt.step ()) != Sqlite.DONE)
+ warning ("SQL error: %d, %s", rc, db.errmsg ());
+ }
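+ // Illustrative call (extension and key names invented):
+ //
+ //     store ("ZeitgeistExampleExtension", "counter",
+ //         new Variant.uint32 (7));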
+
+ /**
+ * Retrieve a previously stored value.
+ */
+ public Variant? retrieve(string extension, string key, VariantType format)
+ {
+ retrieval_stmt.reset ();
+ retrieval_stmt.bind_text (1, extension);
+ retrieval_stmt.bind_text (2, key);
+
+ int rc = retrieval_stmt.step ();
+ if (rc != Sqlite.ROW)
+ {
+ if (rc != Sqlite.DONE)
+ warning ("SQL error: %d, %s", rc, db.errmsg ());
+ return null;
+ }
+
+ unowned uchar[] blob;
+ blob = (uchar[]) retrieval_stmt.column_blob (0);
+ blob.length = retrieval_stmt.column_bytes (0);
+
+ Variant? data = null;
+ if (blob != null)
+ {
+ ByteArray byte_array = new ByteArray.sized (blob.length);
+ byte_array.append (blob);
+
+ data = Variant.new_from_data<ByteArray> (format,
+ byte_array.data, false, byte_array);
+ }
+
+ retrieval_stmt.reset ();
+ return data;
+ }
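+ // Illustrative call, reading back the value stored in the example above
+ // (names invented):
+ //
+ //     Variant? counter = retrieve ("ZeitgeistExampleExtension",
+ //         "counter", new VariantType ("u"));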
+
+ }
+}
+// vim:expandtab:ts=4:sw=4
=== added file 'src/extension.vala'
--- src/extension.vala 1970-01-01 00:00:00 +0000
+++ src/extension.vala 2011-10-19 08:09:50 +0000
@@ -0,0 +1,268 @@
+/* extension.vala
+ *
+ * Copyright © 2011 Manish Sinha <manishsinha@xxxxxxxxxx>
+ * Copyright © 2011 Michal Hruby <michal.mhr@xxxxxxxxx>
+ * Copyright © 2011 Collabora Ltd.
+ * By Siegfried-Angel Gevatter Pujals <siegfried@xxxxxxxxxxxx>
+ *
+ * This program is free software: you can redistribute it and/or modify
+ * it under the terms of the GNU Lesser General Public License as published by
+ * the Free Software Foundation, either version 2.1 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ *
+ */
+
+namespace Zeitgeist
+{
+ /**
+ * Base class for all extensions
+ *
+ * The constructor of an Extension object takes the Engine object
+ * it extends as the only argument.
+ *
+ * Extensions have a set of hooks with which they can control how
+ * events are inserted and retrieved from the log.
+ *
+ * Additionally, extensions may create their own D-Bus interface
+ * over which they can expose additional methods.
+ */
+ public abstract class Extension : Object
+ {
+ public unowned Engine engine { get; construct set; }
+
+ /**
+ * This method gets called before Zeitgeist stops.
+ *
+ * Execution of this method isn't guaranteed, and it shouldn't do
+ * anything slow.
+ */
+ public virtual void unload ()
+ {
+ }
+
+ /**
+ * Hook applied to all events before they are inserted into the
+ * log. The events are progressively passed through all
+ * extensions before the final result is inserted.
+ *
+ * To block an event completely, simply replace it with NULL.
+ * The event may also be modified or completely substituted for
+ * another event.
+ *
+ * @param events: A GenericArray of Event instances
+ * @param sender: The D-Bus bus name of the client or NULL
+ */
+ public virtual void pre_insert_events (GenericArray<Event?> events,
+ BusName? sender)
+ {
+ }
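+ // A subclass could, for instance, block unwanted events like this
+ // (should_block is a hypothetical helper):
+ //
+ //     public override void pre_insert_events (
+ //         GenericArray<Event?> events, BusName? sender)
+ //     {
+ //         for (int i = 0; i < events.length; i++)
+ //             if (should_block (events[i]))
+ //                 events[i] = null;
+ //     }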
+
+ /**
+ * Hook applied to all events after they are inserted into the log.
+ *
+ * The inserted events will have been updated to include their new
+ * ID.
+ *
+ * @param events: A GenericArray of Event instances
+ * @param sender: The D-Bus bus name of the client or NULL
+ */
+ public virtual void post_insert_events (GenericArray<Event?> events,
+ BusName? sender)
+ {
+ }
+
+ /**
+ * Hook applied to all events before they are returned to a client
+ * (as a result of a GetEvents or FindEvents call).
+ *
+ * The events are progressively passed through all extensions
+ * before the final result is returned to the client.
+ *
+ * To prevent an event from being returned, replace it with NULL.
+ * The event may also be changed in place or fully substituted for
+ * another event.
+ *
+ * @param events: A GVariant with event instances (signature a(asaasay))
+ * @param sender: The D-Bus bus name of the client or NULL
+ */
+ public virtual void post_get_events (Variant events,
+ BusName? sender)
+ {
+ }
+
+ /**
+ * Hook applied before events are deleted from the log.
+ *
+ * @param ids: A list with the IDs of the events whose deletion
+ * is being requested
+ * @param sender: The unique DBus name for the client triggering
+ * the delete, or NULL
+ * @return: The filtered list of event IDs which should be deleted,
+ * or NULL to specify no change
+ */
+ public virtual uint32[]? pre_delete_events (uint32[] ids,
+ BusName? sender)
+ {
+ return null;
+ }
+
+ /**
+ * Hook applied after events have been deleted from the log.
+ *
+ * @param ids: A list with the IDs of the events that have been deleted
+ * @param sender: The unique DBus name for the client triggering the delete
+ */
+ public virtual void post_delete_events (uint32[] ids, BusName? sender)
+ {
+ }
+
+ /**
+ * Store `data' under the given (extension-unique) key, overwriting any
+ * previous value.
+ */
+ protected void store_config (string key, Variant data)
+ {
+ engine.extension_store.store (get_type ().name (), key, data);
+ }
+
+ /**
+ * Retrieve data this extension previously stored under the given key,
+ * or null if there is no such data.
+ *
+ * @param key: key under which the data is stored
+ * @param format: type string for the resulting Variant
+ */
+ protected Variant? retrieve_config (string key, string format)
+ {
+ VariantType type = new VariantType(format);
+ return engine.extension_store.retrieve (
+ get_type ().name (), key, type);
+ }
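+ // A minimal extension sketch (hypothetical, for illustration only),
+ // combining a hook with the config helpers above:
+ //
+ //     public class ExampleExtension : Extension
+ //     {
+ //         public override void post_insert_events (
+ //             GenericArray<Event?> events, BusName? sender)
+ //         {
+ //             store_config ("last_batch_size",
+ //                 new Variant.int32 (events.length));
+ //         }
+ //     }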
+ }
+
+ [CCode (has_target = false)]
+ public delegate Type RegisterExtensionFunc (TypeModule module);
+
+ public abstract class ExtensionLoader: TypeModule
+ {
+ public Type extension_type { get; protected set; }
+
+ public virtual Extension? create_instance (Engine engine)
+ {
+ if (this.use ())
+ {
+ if (extension_type == Type.INVALID) return null;
+ Extension? instance = Object.@new (extension_type,
+ "engine", engine) as Extension;
+ debug ("Loaded extension: %s", extension_type.name ());
+ this.unuse ();
+ return instance;
+ }
+
+ return null;
+ }
+ }
+
+ public class ModuleLoader: ExtensionLoader
+ {
+ public string module_path { get; construct; }
+
+ private Module? module = null;
+
+ public ModuleLoader (string module_path)
+ {
+ Object (module_path: module_path);
+ }
+
+ protected override bool load ()
+ {
+ module = Module.open (module_path, ModuleFlags.BIND_LOCAL);
+ if (module == null)
+ {
+ warning ("%s", Module.error ());
+ return false;
+ }
+
+ void* func_ptr;
+ if (module.symbol ("zeitgeist_extension_register", out func_ptr))
+ {
+ RegisterExtensionFunc func = (RegisterExtensionFunc) func_ptr;
+ extension_type = func (this);
+
+ if (extension_type.is_a (typeof (Extension)) == false)
+ {
+ extension_type = Type.INVALID;
+ warning ("Type implemented in \"%s\" does not subclass " +
+ "Zeitgeist.Extension!", module_path);
+ return false;
+ }
+
+ // According to the docs, an initialized TypeModule is not supposed
+ // to be unreferenced, so take an extra reference here
+ this.ref ();
+ }
+ else
+ {
+ warning ("%s", Module.error ());
+ return false;
+ }
+
+ return true;
+ }
+
+ protected override void unload ()
+ {
+ module = null;
+ }
+ }
+
+ public class BuiltinExtension: ExtensionLoader
+ {
+ private RegisterExtensionFunc reg_func;
+
+ public BuiltinExtension (RegisterExtensionFunc func)
+ {
+ Object ();
+ reg_func = func;
+ }
+
+ protected override bool load ()
+ {
+ if (extension_type == Type.INVALID)
+ {
+ extension_type = reg_func (this);
+
+ if (extension_type.is_a (typeof (Extension)) == false)
+ {
+ warning ("Type \"%s\" implemented by [%p] does not " +
+ "subclass Zeitgeist.Extension!",
+ extension_type.name (), this.reg_func);
+ extension_type = Type.INVALID;
+ return false;
+ }
+
+ // According to the docs, an initialized TypeModule is not supposed
+ // to be unreferenced, so take an extra reference here
+ this.ref ();
+ }
+
+ return true;
+ }
+
+ protected override void unload ()
+ {
+ }
+
+ }
+
+}
+// vim:expandtab:ts=4:sw=4
=== added file 'src/notify.vala'
--- src/notify.vala 1970-01-01 00:00:00 +0000
+++ src/notify.vala 2011-10-19 08:09:50 +0000
@@ -0,0 +1,219 @@
+/* notify.vala
+ *
+ * Copyright © 2011 Collabora Ltd.
+ * By Siegfried-Angel Gevatter Pujals <siegfried@xxxxxxxxxxxx>
+ * By Seif Lotfy <seif@xxxxxxxxx>
+ * Copyright © 2011 Michal Hruby <michal.mhr@xxxxxxxxx>
+ *
+ * This program is free software: you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation, either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ *
+ */
+
+namespace Zeitgeist
+{
+
+ public class MonitorManager : Object
+ {
+
+ private HashTable<string, Monitor> monitors;
+ private HashTable<string, GenericArray<string>> connections;
+
+ construct
+ {
+ monitors = new HashTable<string, Monitor> (str_hash, str_equal);
+ connections = new HashTable<string, GenericArray<string>>
+ (str_hash, str_equal);
+
+ // FIXME: it'd be nice if this supported arg2
+ try
+ {
+ var connection = Bus.get_sync (BusType.SESSION);
+ connection.signal_subscribe ("org.freedesktop.DBus",
+ "org.freedesktop.DBus", "NameOwnerChanged",
+ "/org/freedesktop/DBus", null, 0,
+ (conn, sender, path, ifc_name, sig_name, parameters) =>
+ {
+ // name, old_owner, new_owner
+ var arg0 = parameters.get_child_value (0).dup_string ();
+ var arg1 = parameters.get_child_value (1).dup_string ();
+ var arg2 = parameters.get_child_value (2).dup_string ();
+
+ if (arg2 != "") return;
+
+ foreach (var owner in connections.get_keys())
+ {
+ if (arg0 == owner)
+ {
+ var paths = connections.lookup (arg0);
+ debug("Client disconnected %s", owner);
+ for (int i = 0; i < paths.length; i++)
+ remove_monitor ((BusName)arg0, paths[i]);
+ connections.remove(arg0);
+ }
+ }
+ });
+ }
+ catch (IOError err)
+ {
+ warning ("Cannot subscribe to NameOwnerChanged signal! %s",
+ err.message);
+ }
+ }
+
+ private class Monitor
+ {
+
+ private GenericArray<Event> event_templates;
+ private TimeRange time_range;
+ private RemoteMonitor? proxy_object = null;
+
+ public Monitor (BusName peer, string object_path,
+ TimeRange tr, GenericArray<Event> templates)
+ {
+ Bus.get_proxy<RemoteMonitor> (BusType.SESSION, peer,
+ object_path, DBusProxyFlags.DO_NOT_LOAD_PROPERTIES |
+ DBusProxyFlags.DO_NOT_CONNECT_SIGNALS,
+ null, (obj, res) =>
+ {
+ try
+ {
+ proxy_object = Bus.get_proxy.end (res);
+ }
+ catch (IOError err)
+ {
+ warning ("%s", err.message);
+ }
+ });
+ time_range = tr;
+ event_templates = templates;
+ }
+
+ private bool matches (Event event)
+ {
+ if (event_templates.length == 0)
+ return true;
+ for (var i = 0; i < event_templates.length; i++)
+ {
+ if (event.matches_template (event_templates[i]))
+ {
+ return true;
+ }
+ }
+ return false;
+ }
+
+ // FIXME: we need to queue the notification if proxy_object == null
+ public void notify_insert (TimeRange time_range, GenericArray<Event> events)
+ requires (proxy_object != null)
+ {
+ var intersection_timerange = time_range.intersect(this.time_range);
+ if (intersection_timerange != null)
+ {
+ var matching_events = new GenericArray<Event>();
+ for (int i=0; i<events.length; i++)
+ {
+ if (matches(events[i])
+ && events[i].timestamp >= intersection_timerange.start
+ && events[i].timestamp <= intersection_timerange.end)
+ {
+ matching_events.add(events[i]);
+ }
+ }
+ if (matching_events.length > 0)
+ {
+ DBusProxy p = (DBusProxy) proxy_object;
+ debug ("Notifying %s about %d insertions",
+ p.get_name (), matching_events.length);
+
+ proxy_object.notify_insert (intersection_timerange.to_variant (),
+ Events.to_variant (matching_events));
+ }
+ }
+ }
+
+ public void notify_delete (TimeRange time_range, uint32[] event_ids)
+ requires (proxy_object != null)
+ {
+ var intersection_timerange = time_range.intersect(this.time_range);
+ if (intersection_timerange != null)
+ {
+ proxy_object.notify_delete (intersection_timerange.to_variant (),
+ event_ids);
+ }
+ }
+ }
+
+ public void install_monitor (BusName peer, string object_path,
+ TimeRange time_range, GenericArray<Event> templates)
+ {
+ var hash = "%s#%s".printf (peer, object_path);
+ if (monitors.lookup (hash) == null)
+ {
+ var monitor = new Monitor (peer, object_path, time_range,
+ templates);
+ monitors.insert (hash, monitor);
+ if (connections.lookup (peer) == null)
+ connections.insert (peer, new GenericArray<string> ());
+ connections.lookup (peer).add (object_path);
+
+ debug ("Installed new monitor for %s", peer);
+ }
+ else
+ {
+ warning ("There's already a monitor installed for %s", hash);
+ }
+ }
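+ // Monitors are keyed as "<bus name>#<object path>"; e.g. a client ":1.42"
+ // (invented) exporting "/org/gnome/zeitgeist/monitor/special" would be
+ // stored under ":1.42#/org/gnome/zeitgeist/monitor/special", the same
+ // composite key remove_monitor looks up below.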
+
+ public void remove_monitor (BusName peer, string object_path)
+ {
+ debug("Removing monitor %s%s", peer, object_path);
+ var hash = "%s#%s".printf (peer, object_path);
+
+ if (monitors.lookup (hash) != null)
+ monitors.remove (hash);
+ else
+ warning ("There's no monitor installed for %s", hash);
+
+ if (connections.lookup (peer) != null)
+ {
+ var paths = connections.lookup (peer);
+ for (int i = 0; i < paths.length; i++)
+ {
+ if (paths[i] == object_path)
+ {
+ paths.remove_index_fast (i);
+ break;
+ }
+ }
+ }
+
+ }
+
+ public void notify_insert (TimeRange time_range,
+ GenericArray<Event> events)
+ {
+ foreach (unowned Monitor mon in monitors.get_values ())
+ mon.notify_insert(time_range, events);
+ }
+
+ public void notify_delete (TimeRange time_range, uint32[] event_ids)
+ {
+ foreach (unowned Monitor mon in monitors.get_values ())
+ mon.notify_delete(time_range, event_ids);
+ }
+ }
+
+}
+
+// vim:expandtab:ts=4:sw=4
=== added file 'src/ontology-uris.vala.in'
--- src/ontology-uris.vala.in 1970-01-01 00:00:00 +0000
+++ src/ontology-uris.vala.in 2011-10-19 08:09:50 +0000
@@ -0,0 +1,22 @@
+/* ontology-uris.vala
+ *
+ * Copyright © 2009-2011 The Zeitgeist Team
+ *
+ * This program is free software: you can redistribute it and/or modify
+ * it under the terms of the GNU Lesser General Public License as published by
+ * the Free Software Foundation, either version 2.1 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ *
+ */
+
+// *insert-auto-generated-code*
+
+// vim:expandtab:ts=4:sw=4
=== added file 'src/ontology.vala.in'
--- src/ontology.vala.in 1970-01-01 00:00:00 +0000
+++ src/ontology.vala.in 2011-10-19 08:09:50 +0000
@@ -0,0 +1,172 @@
+/* ontology.vala
+ *
+ * Copyright © 2011 Collabora Ltd.
+ * By Seif Lotfy <seif@xxxxxxxxx>
+ * By Siegfried-Angel Gevatter Pujals <siegfried@xxxxxxxxxxxx>
+ * Copyright © 2011 Michal Hruby <michal.mhr@xxxxxxxxx>
+ *
+ * This program is free software: you can redistribute it and/or modify
+ * it under the terms of the GNU Lesser General Public License as published by
+ * the Free Software Foundation, either version 2.1 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ *
+ */
+
+namespace Zeitgeist
+{
+
+ namespace Symbol
+ {
+ private static HashTable<string, Info> all_symbols = null;
+ private static bool initialized = false;
+
+ public static unowned string get_display_name (string symbol_uri)
+ {
+ initialize_symbols ();
+
+ var symbol = all_symbols.lookup (symbol_uri);
+ if (symbol == null) return symbol_uri;
+
+ return symbol.display_name;
+ }
+
+ public static unowned string get_description(string symbol_uri)
+ {
+ initialize_symbols ();
+
+ var symbol = all_symbols.lookup (symbol_uri);
+ if (symbol == null) return "";
+
+ return symbol.description;
+ }
+
+ public static List<unowned string> get_all_parents(string symbol_uri)
+ {
+ initialize_symbols ();
+
+ var results = new List<string> ();
+ var symbol = all_symbols.lookup (symbol_uri);
+ if (symbol == null) return results;
+
+ foreach (unowned string uri in symbol.parents)
+ {
+ results.append (uri);
+ // Recursively add the remaining ancestors, skipping duplicates
+ foreach (string parent_uri in get_all_parents (uri))
+ if (results.index (parent_uri) == -1)
+ results.append (parent_uri);
+ }
+
+ return results;
+ }
+
+ public static List<unowned string> get_all_children (string symbol_uri)
+ {
+ initialize_symbols ();
+
+ var results = new List<string> ();
+ var symbol = all_symbols.lookup (symbol_uri);
+ if (symbol == null) return results;
+
+ foreach (unowned string uri in symbol.all_children)
+ results.append (uri);
+
+ return results;
+ }
+
+ public static List<unowned string> get_children (string symbol_uri)
+ {
+ initialize_symbols ();
+ var results = new List<string> ();
+ var symbol = all_symbols.lookup (symbol_uri);
+ if (symbol == null) return results;
+
+ foreach (unowned string uri in symbol.children)
+ results.append(uri);
+
+ return results;
+ }
+
+ public static List<unowned string> get_parents (string symbol_uri)
+ {
+ initialize_symbols ();
+
+ var results = new List<string>();
+ var symbol = all_symbols.lookup (symbol_uri);
+ if (symbol == null) return results;
+
+ foreach (unowned string uri in symbol.parents)
+ results.append (uri);
+
+ return results;
+ }
+
+ public static bool is_a (string symbol_uri, string parent_uri)
+ {
+ initialize_symbols ();
+
+ foreach (unowned string uri in get_all_parents (symbol_uri))
+ if (parent_uri == uri)
+ return true;
+ return false;
+ }
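+ // For example, is_a (audio_uri, media_uri) returns true when media_uri
+ // appears among the ancestors of audio_uri in the generated symbol data
+ // (the variable names here are purely illustrative).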
+
+ private static void initialize_symbols ()
+ {
+ if (initialized) return;
+ initialized = true;
+ // *insert-auto-generated-code*
+ }
+
+ }
+
+ private class Symbol.Info
+ {
+ public List<string> parents;
+ public List<string> children;
+ public List<string> all_children;
+ public string uri;
+ public string display_name;
+ public string description;
+
+ private Info (string uri, string display_name, string description,
+ string[] parents, string[] children, string[] all_children)
+ {
+ this.uri = uri;
+ this.display_name = display_name;
+ this.description = description;
+ this.parents = new List<string> ();
+ for (int i = 0; i < parents.length; i++)
+ this.parents.append (parents[i]);
+ this.children = new List<string> ();
+ for (int i = 0; i < children.length; i++)
+ this.children.append (children[i]);
+ this.all_children = new List<string> ();
+ for (int i = 0; i < all_children.length; i++)
+ this.all_children.append (all_children[i]);
+ }
+
+ internal static void register (string uri, string display_name,
+ string description, string[] parents, string[] children,
+ string[] all_children)
+ {
+ if (all_symbols == null)
+ all_symbols = new HashTable<string, Info> (str_hash, str_equal);
+ Info symbol = new Info (uri, display_name, description,
+ parents, children, all_children);
+ all_symbols.insert (uri, symbol);
+ }
+
+ }
+
+}
+
+// vim:expandtab:ts=4:sw=4
=== added file 'src/remote.vala'
--- src/remote.vala 1970-01-01 00:00:00 +0000
+++ src/remote.vala 2011-10-19 08:09:50 +0000
@@ -0,0 +1,127 @@
+/* remote.vala
+ *
+ * Copyright © 2011 Collabora Ltd.
+ * By Siegfried-Angel Gevatter Pujals <siegfried@xxxxxxxxxxxx>
+ * Copyright © 2011 Michal Hruby <michal.mhr@xxxxxxxxx>
+ *
+ * This program is free software: you can redistribute it and/or modify
+ * it under the terms of the GNU Lesser General Public License as published by
+ * the Free Software Foundation, either version 2.1 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ *
+ */
+
+namespace Zeitgeist
+{
+ public struct VersionStruct
+ {
+ int major;
+ int minor;
+ int micro;
+ }
+
+ [DBus (name = "org.gnome.zeitgeist.Log")]
+ public interface RemoteLog : Object
+ {
+
+ [DBus (signature = "(xx)")]
+ public abstract Variant delete_events (
+ uint32[] event_ids,
+ BusName sender
+ ) throws Error;
+
+ public abstract uint32[] find_event_ids (
+ [DBus (signature = "(xx)")] Variant time_range,
+ [DBus (signature = "a(asaasay)")] Variant event_templates,
+ uint storage_state, uint num_events, uint result_type,
+ BusName sender
+ ) throws Error;
+
+ [DBus (signature = "a(asaasay)")]
+ public abstract Variant find_events (
+ [DBus (signature = "(xx)")] Variant time_range,
+ [DBus (signature = "a(asaasay)")] Variant event_templates,
+ uint storage_state, uint num_events, uint result_type,
+ BusName sender
+ ) throws Error;
+
+ public abstract string[] find_related_uris (
+ [DBus (signature = "(xx)")] Variant time_range,
+ [DBus (signature = "a(asaasay)")] Variant event_templates,
+ [DBus (signature = "a(asaasay)")] Variant result_event_templates,
+ uint storage_state, uint num_events, uint result_type,
+ BusName sender
+ ) throws Error;
+
+ [DBus (signature = "a(asaasay)")]
+ public abstract Variant get_events (
+ uint32[] event_ids,
+ BusName sender
+ ) throws Error;
+
+ public abstract uint32[] insert_events (
+ [DBus (signature = "a(asaasay)")] Variant events,
+ BusName sender
+ ) throws Error;
+
+ public abstract void install_monitor (
+ ObjectPath monitor_path,
+ [DBus (signature = "(xx)")] Variant time_range,
+ [DBus (signature = "a(asaasay)")] Variant event_templates,
+ BusName owner
+ ) throws Error;
+
+ public abstract void remove_monitor (
+ ObjectPath monitor_path,
+ BusName owner
+ ) throws Error;
+
+ public abstract void quit () throws Error;
+
+ [DBus (name = "extensions")]
+ public abstract string[] extensions { owned get; }
+
+ [DBus (name = "version")]
+ public abstract VersionStruct version { owned get; }
+
+ }
+
+ [DBus (name = "org.gnome.zeitgeist.Monitor")]
+ public interface RemoteMonitor : Object
+ {
+
+ public async abstract void notify_insert (
+ [DBus (signature = "(xx)")] Variant time_range,
+ [DBus (signature = "a(asaasay)")] Variant events
+ ) throws IOError;
+
+ public async abstract void notify_delete (
+ [DBus (signature = "(xx)")] Variant time_range,
+ uint32[] event_ids
+ ) throws IOError;
+
+ }
+
+ /* FIXME: Remove this! Only here because of a bug in Vala (see ext-fts) */
+ [DBus (name = "org.gnome.zeitgeist.Index")]
+ public interface RemoteSimpleIndexer : Object
+ {
+ [DBus (signature = "a(asaasay)u")]
+ public abstract async Variant search (
+ string query_string,
+ [DBus (signature = "(xx)")] Variant time_range,
+ [DBus (signature = "a(asaasay)")] Variant filter_templates,
+ uint offset, uint count, uint result_type) throws Error;
+ }
+
+}
+
+// vim:expandtab:ts=4:sw=4
=== added file 'src/sql-schema.vala'
--- src/sql-schema.vala 1970-01-01 00:00:00 +0000
+++ src/sql-schema.vala 2011-10-19 08:09:50 +0000
@@ -0,0 +1,373 @@
+/* sql-schema.vala
+ *
+ * Copyright © 2011 Collabora Ltd.
+ * By Siegfried-Angel Gevatter Pujals <siegfried@xxxxxxxxxxxx>
+ *
+ * Based upon a Python implementation (2009-2011) by:
+ * Markus Korn <thekorn@xxxxxxx>
+ * Mikkel Kamstrup Erlandsen <mikkel.kamstrup@xxxxxxxxx>
+ * Seif Lotfy <seif@xxxxxxxxx>
+ * Siegfried-Angel Gevatter Pujals <siegfried@xxxxxxxxxxxx>
+ *
+ * This program is free software: you can redistribute it and/or modify
+ * it under the terms of the GNU Lesser General Public License as published by
+ * the Free Software Foundation, either version 2.1 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ *
+ */
+
+using Zeitgeist;
+
+namespace Zeitgeist.SQLite
+{
+
+ public class DatabaseSchema : Object
+ {
+
+ public static void ensure_schema (Sqlite.Database database)
+ throws EngineError
+ {
+ //if (Constants.DATABASE_FILE_PATH != ":memory:" && !new_db)
+ // assume temporary memory backed DBs are good
+ // check_core_schema_upgrade
+
+ create_schema (database);
+ }
+
+ public static void create_schema (Sqlite.Database database)
+ throws EngineError
+ {
+ exec_query (database, "PRAGMA journal_mode = WAL");
+
+ // URI
+ exec_query (database, """
+ CREATE TABLE IF NOT EXISTS uri (
+ id INTEGER PRIMARY KEY,
+ value VARCHAR UNIQUE
+ )
+ """);
+ exec_query (database, """
+ CREATE UNIQUE INDEX IF NOT EXISTS uri_value ON uri(value)
+ """);
+
+ // Interpretation
+ exec_query (database, """
+ CREATE TABLE IF NOT EXISTS interpretation (
+ id INTEGER PRIMARY KEY,
+ value VARCHAR UNIQUE
+ )
+ """);
+ exec_query (database, """
+ CREATE UNIQUE INDEX IF NOT EXISTS interpretation_value
+ ON interpretation(value)
+ """);
+
+ // Manifestation
+ exec_query (database, """
+ CREATE TABLE IF NOT EXISTS manifestation (
+ id INTEGER PRIMARY KEY,
+ value VARCHAR UNIQUE
+ )
+ """);
+ exec_query (database, """
+ CREATE UNIQUE INDEX IF NOT EXISTS manifestation_value
+ ON manifestation(value)
+ """);
+
+ // Mime-Type
+ exec_query (database, """
+ CREATE TABLE IF NOT EXISTS mimetype (
+ id INTEGER PRIMARY KEY,
+ value VARCHAR UNIQUE
+ )
+ """);
+ exec_query (database, """
+ CREATE UNIQUE INDEX IF NOT EXISTS mimetype_value
+ ON mimetype(value)
+ """);
+
+ // Actor
+ exec_query (database, """
+ CREATE TABLE IF NOT EXISTS actor (
+ id INTEGER PRIMARY KEY,
+ value VARCHAR UNIQUE
+ )
+ """);
+ exec_query (database, """
+ CREATE UNIQUE INDEX IF NOT EXISTS actor_value
+ ON actor(value)
+ """);
+
+ // Text
+ exec_query (database, """
+ CREATE TABLE IF NOT EXISTS text (
+ id INTEGER PRIMARY KEY,
+ value VARCHAR UNIQUE
+ )
+ """);
+ exec_query (database, """
+ CREATE UNIQUE INDEX IF NOT EXISTS text_value
+ ON text(value)
+ """);
+
+ // Payload
+ // (There's no value index for payloads; they can only be fetched
+ // by ID.)
+ exec_query (database, """
+ CREATE TABLE IF NOT EXISTS payload
+ (id INTEGER PRIMARY KEY, value BLOB)
+ """);
+
+ // Storage
+ exec_query (database, """
+ CREATE TABLE IF NOT EXISTS storage (
+ id INTEGER PRIMARY KEY,
+ value VARCHAR UNIQUE,
+ state INTEGER,
+ icon VARCHAR,
+ display_name VARCHAR
+ )
+ """);
+ exec_query (database, """
+ CREATE UNIQUE INDEX IF NOT EXISTS storage_value
+ ON storage(value)
+ """);
+
+ // Event
+ // This is the primary table for log statements. Note that:
+ // - event.id is NOT unique, each subject has a separate row;
+ // - timestamps are integers;
+ // - (event_)origin and subj_id_current are at the end of the
+ // table, for backwards-compatibility reasons.
+ exec_query (database, """
+ CREATE TABLE IF NOT EXISTS event (
+ id INTEGER,
+ timestamp INTEGER,
+ interpretation INTEGER,
+ manifestation INTEGER,
+ actor INTEGER,
+ payload INTEGER,
+ subj_id INTEGER,
+ subj_interpretation INTEGER,
+ subj_manifestation INTEGER,
+ subj_origin INTEGER,
+ subj_mimetype INTEGER,
+ subj_text INTEGER,
+ subj_storage INTEGER,
+ origin INTEGER,
+ subj_id_current INTEGER,
+ CONSTRAINT interpretation_fk
+ FOREIGN KEY(interpretation)
+ REFERENCES interpretation(id)
+ ON DELETE CASCADE,
+ CONSTRAINT manifestation_fk
+ FOREIGN KEY(manifestation)
+ REFERENCES manifestation(id)
+ ON DELETE CASCADE,
+ CONSTRAINT actor_fk
+ FOREIGN KEY(actor)
+ REFERENCES actor(id)
+ ON DELETE CASCADE,
+ CONSTRAINT origin_fk
+ FOREIGN KEY(origin)
+ REFERENCES uri(id)
+ ON DELETE CASCADE,
+ CONSTRAINT payload_fk
+ FOREIGN KEY(payload)
+ REFERENCES payload(id)
+ ON DELETE CASCADE,
+ CONSTRAINT subj_id_fk
+ FOREIGN KEY(subj_id)
+ REFERENCES uri(id)
+ ON DELETE CASCADE,
+ CONSTRAINT subj_id_current_fk
+ FOREIGN KEY(subj_id_current)
+ REFERENCES uri(id)
+ ON DELETE CASCADE,
+ CONSTRAINT subj_interpretation_fk
+ FOREIGN KEY(subj_interpretation)
+ REFERENCES interpretation(id)
+ ON DELETE CASCADE,
+ CONSTRAINT subj_manifestation_fk
+ FOREIGN KEY(subj_manifestation)
+ REFERENCES manifestation(id)
+ ON DELETE CASCADE,
+ CONSTRAINT subj_origin_fk
+ FOREIGN KEY(subj_origin)
+ REFERENCES uri(id)
+ ON DELETE CASCADE,
+ CONSTRAINT subj_mimetype_fk
+ FOREIGN KEY(subj_mimetype)
+ REFERENCES mimetype(id)
+ ON DELETE CASCADE,
+ CONSTRAINT subj_text_fk
+ FOREIGN KEY(subj_text)
+ REFERENCES text(id)
+ ON DELETE CASCADE,