divmod-dev team mailing list archive
Message #00633
[Merge] lp:~exarkun/divmod.org/remove-axiom-1325288 into lp:divmod.org
Jean-Paul Calderone has proposed merging lp:~exarkun/divmod.org/remove-axiom-1325288 into lp:divmod.org.
Requested reviews:
Divmod-dev (divmod-dev)
Related bugs:
Bug #1325288 in Divmod: "move Axiom to github"
https://bugs.launchpad.net/divmod.org/+bug/1325288
For more details, see:
https://code.launchpad.net/~exarkun/divmod.org/remove-axiom-1325288/+merge/224944
https://github.com/twisted/axiom
--
The attached diff has been truncated due to its size.
https://code.launchpad.net/~exarkun/divmod.org/remove-axiom-1325288/+merge/224944
Your team Divmod-dev is requested to review the proposed merge of lp:~exarkun/divmod.org/remove-axiom-1325288 into lp:divmod.org.
=== removed directory 'Axiom'
=== removed file 'Axiom/.coveragerc'
--- Axiom/.coveragerc 2014-01-22 19:09:17 +0000
+++ Axiom/.coveragerc 1970-01-01 00:00:00 +0000
@@ -1,9 +0,0 @@
-[run]
-branch = True
-source =
- axiom
-
-[report]
-exclude_lines =
- pragma: no cover
-show_missing = True
=== removed file 'Axiom/LICENSE'
--- Axiom/LICENSE 2005-12-10 22:31:51 +0000
+++ Axiom/LICENSE 1970-01-01 00:00:00 +0000
@@ -1,20 +0,0 @@
-Copyright (c) 2005 Divmod Inc.
-
-Permission is hereby granted, free of charge, to any person obtaining
-a copy of this software and associated documentation files (the
-"Software"), to deal in the Software without restriction, including
-without limitation the rights to use, copy, modify, merge, publish,
-distribute, sublicense, and/or sell copies of the Software, and to
-permit persons to whom the Software is furnished to do so, subject to
-the following conditions:
-
-The above copyright notice and this permission notice shall be
-included in all copies or substantial portions of the Software.
-
-THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
-EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
-MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
-NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
-LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
-OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
-WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
\ No newline at end of file
=== removed file 'Axiom/MANIFEST.in'
--- Axiom/MANIFEST.in 2014-01-15 22:31:13 +0000
+++ Axiom/MANIFEST.in 1970-01-01 00:00:00 +0000
@@ -1,5 +0,0 @@
-include LICENSE
-include NAME.txt
-recursive-include axiom/test/historic *.tbz2
-include axiom/batch.tac
-graft axiom/examples
=== removed file 'Axiom/NAME.txt'
--- Axiom/NAME.txt 2005-08-27 23:09:07 +0000
+++ Axiom/NAME.txt 1970-01-01 00:00:00 +0000
@@ -1,15 +0,0 @@
-
-See: http://mathworld.wolfram.com/Axiom.html
-
-An axiom is a statement taken as true without proof or supporting arguments.
-
-Divmod Axiom is so named because it is a database, and a database is where you
-put assertions about the world. In particular a database is where you put
-values which you do not wish to re-calculate; the data that your computation is
-based upon. In this way axiom items are similar to axioms, since (for example)
-Euclidean geometry can be derived from the set of axioms known as "Euclid's
-postulates", but those axioms need to be stored independently; they cannot be
-derived from anything.
-
-Plus it has an X in it, which sounds neat.
-
=== removed file 'Axiom/NEWS.txt'
--- Axiom/NEWS.txt 2014-03-22 19:16:37 +0000
+++ Axiom/NEWS.txt 1970-01-01 00:00:00 +0000
@@ -1,313 +0,0 @@
-0.7.1 (2014-03-22):
- Major:
-
- - Fix some packaging issues that led to some important files being missing
- from the 0.7.0 release.
-
- Minor:
-
- - Uses of the deprecated unsignedID and isWinNT Twisted APIs have been
- removed.
-
-
-0.7.0 (2014-01-11):
- Major:
-
- Only Python 2.6 and 2.7 are supported now; 2.4 and 2.5 are deprecated.
- setup.py now uses setuptools, and declares its dependencies. This
- means you no longer need to manually install dependencies.
- - setup.py no longer requires Epsilon for egg_info, making it easier
- to install Axiom using pip.
- - Significant improvements to PyPy support. PyPy is now a supported
- platform, with CI support.
- - Axiom now uses the stdlib sqlite3 if pysqlite2 is not available.
- Since all supported versions have this, installing pysqlite2 is
- now no longer necessary, and is only an (optional) performance
- improvement on CPython. This is a huge improvement for PyPy, where
- the stdlib version is reportedly much faster.
-
- Minor:
-
- - Passing a string to SubStore.createNew now raises an exception
- instead of silently almost certainly doing the wrong thing.
- - Setting an integer value that is too negative will now raise an
- exception.
- - __conform__ (interface adaptation) now also works for items that
- are not in a store.
- - Starting the store service now automatically activates the
- scheduler service as well.
- - Batch processing can now be triggered by adding remote work.
- - Startup performance for stores with many legacy type declarations
- is improved.
- - Several benchmarks were added.
- - Many internal cleanups.
-
-0.6.0 (2009-11-25):
- - Speed up creation, insertion, and various other operations on Item by
- optimizing Item.getSchema.
- - Improve error reporting from the batch upgrade system.
- - Speed up setting attributes on Item instances.
- - Remove the batch process manhole service.
- - Improve the reliability of some unit tests.
- - Fix `axiomatic --reactor ...`.
- Remove invalid SQL normalization code which would occasionally corrupt
- certain obscure but valid SQL statements.
- - Add an in-memory `IScheduler` powerup for stores and substores.
-
-0.5.31 (2008-12-09):
- - An IStatEvent is now logged when a store is opened.
- - Different schema versions of the same item type now no longer
- compare equal, which fixes some breakage in the upgrade system,
- among other things.
- - Significantly reduce the runtime cost of opening a store by
- reducing the amount of work spent to verify schema correctness.
-
-0.5.30 (2008-10-15):
- - Fixed a _SubSchedulerParentHook bug where a transient run failure
- would cause future event scheduling in the relevant substore to
- fail/traceback.
-
-0.5.29 (2008-10-02):
- - Added 'requiresFromSite' to axiom.dependency, expressing a
- requirement on the site store for the successful installation of
- an item in a user store.
- - Made errors from duplicate item type definition friendlier.
-
-0.5.28 (2008-08-12):
- - Upgraders can now safely upgrade reference attributes.
- - The batch process is no longer started unless it's needed.
- - Removed use of private Decimal APIs that changed in Python 2.5.2.
-
- - "axiomatic start" changed to use the public interface to twistd's
- behaviour instead of relying on internal details.
- - Store now uses FilePaths to refer to its database or files directory.
- - Automatic powerup discovery is now a feature of powerups rather
- than of axiom.dependency.
- - Stores now record the released versions of code used to open them.
- - "axiomatic upgrade" added, a command for completely upgrading a store.
- - Removed no-longer-working APSW support code.
-
-0.5.27 (2007-11-27):
- - Substores and file storage for in-memory stores are now supported.
-
-0.5.26 (2007-09-05):
- - A bug where exceptions were raised when tables were created concurrently is
- now fixed.
-
-0.5.25 (2007-08-01):
- - Added the beginnings of a query introspection API.
-
-0.5.24 (2007-07-06):
- - Added a 'postCopy' argument to
- upgrade.registerAttributeCopyingUpgrader, a callable run
- with the new item after upgrading.
-
-0.5.23 (2007-06-06):
- - Fixed a bug where user store insertion/extraction failed if a
- SubScheduler was installed but no TimedEvents existed.
-
-0.5.22 (2007-05-24):
- - Fixed docstrings in axiom.dependency.
- - Scheduler and SubScheduler now declared to implement IScheduler.
-
-0.5.21 (2007-04-27):
- - Multi-version upgraders are now supported: an upgrader function
- can upgrade items more than a single version at a time.
- - Multi-item-class queries now supported: Store.query takes a tuple
- as its first argument, similar to a comma-separated column clause
- for a SELECT statement in SQL.
- - Empty textlists are now properly distinguished from a textlist
- containing a single empty string.
- - Handling of items scheduled to run with axiom.scheduler being
- deleted before they run has been fixed.
-
-0.5.20 (2007-02-23):
- - AxiomaticCommand is no longer itself an axiom plugin.
- - axiom.test.historic.stubloader.StubbedTest now has an
- 'openLegacyStore' method, for opening the unupgraded store
- multiple times.
- - The default argument to Store.getItemByID is now respected in the
- case where an attempt is made to load an item which was created
- and deleted within a single transaction.
-
-0.5.19 (2007-01-11):
- - A new method, axiom.store.ItemQuery.paginate, has been added, which splits
- a query's result-gathering work into multiple "pages" so that we can deal
- with extremely large result sets.
- - A dependency management system for Items has been added in
- axiom.dependency. InstallableMixin has been removed;
- axiom.dependency.installOn is now used to install Items and connect powerups.
- Items can declare their dependence on another item by declaring attributes
- with axiom.dependency.dependsOn. When items are installed, their dependencies
- will be created and installed as well. Installation is no longer tracked by
- 'installedOn' attributes but by _DependencyConnector items.
- - A bug preventing 'axiomatic userbase list' from working on a fresh
- mantissa database has been fixed.
-
-0.5.18 (2006-12-08):
- - Change ItemQuery.deleteFromStore so that it will call deleteFromStore on an
- Item subclass if it has overridden that method.
-
-0.5.17 (2006-11-20):
- - Added fullyQualifiedName to IColumn, _StoreIDComparer, and _PlaceholderColumn.
- - Added support for distinct Item queries and for counting distinct attribute
- queries.
- - Exceptions raised by Axiom upgrade methods are logged instead of silently
- swallowing them.
-
-0.5.16 (2006-11-17):
- - Updated axiomatic to work with Twisted trunk.
-
-0.5.15 (2006-10-31):
-
- - Raise a more informative exception when accessing Item references pointing
- to nonexistent items.
- - Enforce prevention of deletion of items referred to by references set to
- reference.DISALLOW.
- - Tables in the FROM clause of SQL generated by queries are now ordered by the
- order of the Item subclasses in the comparisons used to generate them.
- - A new IComparison implementation has been added to allow application-level
- code to explicitly specify the order of types in the join.
-
-0.5.14 (2006-10-17):
- - Added a 'batchInsert' method to Store, allowing insertion of items without
- loading them into memory.
- - Change ItemQuery.deleteFromStore to delete items without loading them if
- possible.
-
-0.5.13 (2006-10-05):
- - Changed userbase.getLoginMethods to return LoginMethods rather than
- (localpart, domain) tuples.
-
-0.5.12 (2006-09-29):
- - Fixed a scheduler bug that would cause tasks scheduled in a substore to be
- removed from the scheduler.
-
-0.5.11 (2006-09-20):
- - dependency.dependsOn now takes similar arguments to attributes.reference.
-
-0.5.10 (2006-09-12):
- - The axiomatic commands "insert-user" and "extract-user" now interact with
- the scheduler properly.
-
-0.5.9 (2006-08-30):
- - A new dependency-management system has been added, in axiom.dependency.
-
-0.5.8 (2006-08-17):
- - The upgrader added in the previous release has been fixed.
-
-0.5.7 (2006-08-14):
- - item.Item has a new method, stored, which will be called the first time an
- item is added to a store, in the same transaction as it is added.
- - A new class, item.Placeholder, has been added to assist in self-join
- queries.
-
-0.5.6 (2006-07-18):
- - userbase.LoginSystem now raises a new exception type when login is attempted
- using a username with no domain part.
-
-0.5.5 (2006-07-08):
- - SubStoreStartupService was removed; user stores' services are no longer
- incorrectly started when the Mantissa administrative powerup is installed.
- - IPowerupIndirector was added, allowing for installation of SubStore items
- as powerups on other items.
-
-0.5.4 (2006-07-05):
- - Items with attributes.path attributes can now be upgraded.
- - axiom.scheduler has been improved to make clock-related tests easier to write.
- - Improved test coverage and various bugfixes.
-
-0.5.3 (2006-06-27):
- - A bug causing the table name cache to grow too large was fixed.
-
-0.5.2 (2006-06-26):
- - Type names are now determined on a per-store basis, rather than cached
- globally on the Item.
-
-0.5.1 (2006-06-16):
- - axiom.slotmachine._structlike removed in favor of the implementation in
- Epsilon, epsilon.structlike.record.
- - The batch process has been adjusted to do more work per iteration.
-
-0.5.0 (2006-06-12):
- Highlights:
- - Fixed several bugs, including several potential data-corruption issues.
- All users are recommended to upgrade, but back up your data and test your
- upgrade first!
- - There is now a 'money' attribute type which uses fixed-precision math in
- the database specifically designed for dealing with the types of issues
- associated with database-persistent financial data.
- - Some simple relational constraints (the equivalent of ON DELETE CASCADE)
- have been implemented using the 'whenDeleted' keyword argument.
- - Indexes which are created in your code will now automatically be added to
- opened databases without requiring an upgrader or a change to your Item's
- schemaVersion.
- - You can now use 'declareLegacyItem' to declare legacy schemas to record the
- schema of older versions of your software -- this enables upgrading of more
- than one step per release of your application code.
- - You can now create multi-column indexes using attributes.compoundIndex.
- ---
- - Made Item.typeName and Item.schemaVersion optional in most cases.
- - Added axiom.batch for reliably operating on large groups of items.
- - Removed all usages of util.wait from tests
- - added 'queryutil.contains' utility query method, for testing when a value
- is between two attributes.
- Added 'negate' argument to oneOf, allowing for issuing SQL 'NOT IN' queries.
- - Improved reliability of the scheduler. Errors are now logged in a
- structured manner.
- - Added helper classes for writing axiomatic plug-in commands; see
- documentation for axiomatic.scripts.axiomatic.AxiomaticCommand and
- AxiomaticSubCommand.
- - AttributeQuery now provides .min() and .max() methods which return the
- obvious thing.
- Transactions are managed more conservatively; BEGIN IMMEDIATE
- TRANSACTION is used at the beginning of each transact() call, to guarantee
- that concurrent access is safe, if sometimes slightly slower.
- - SQL generation has been deferred to query time, which means that there is a
- more complete API for manipulating Query objects.
- - repr() of various objects has been improved for easier debugging.
- - Axiom now emits various log events which you can observe if you wish to
- analyze query statistics in real-time. These events don't go to the text log by
- default: Mantissa, for example, uses them to display a pie chart of the
- most expensive queries on a running system.
-
-0.4.0 (2005-12-20):
- - Fixed sum() in the case of a table with no rows.
- - LoginAccount no longer contains authentication information, but may be
- referred to by one or more LoginMethods, which do.
- - Added an attribute type for floats: ieee754_double.
- - Enhanced functionality in axiom.sequence.List.
- - Added support for SQL DISTINCT queries.
- - On the command line, axiomatic will attempt to automatically discover
- the correct database to use, if one is not specified.
- - PID and logfiles are now kept in a subdirectory of the database
- directory.
- - The "start" axiomatic subcommand now works on Windows.
- - Two new axiomatic subcommands have been added related to running servers
- from Axiom database: "stop" and "status".
- - Two new axiomatic subcommands have been added related to user
- management: "extract-user" and "insert-user" for removing users from and
- adding users to an existing credentials database, along with all of
- their data.
- - Axiom queries can now be sorted by a tuple of columns.
-
-0.3.0 (2005-11-02):
- - Removed Axiom/axiom/examples/axiom.tac
- - Added 'axiomatic start'
- - added 'hyper', a 'super' capable of working with Item mixins
- - added check to make sure Unicode strings won't be misleadingly persisted as
- bytes(), like so:
- >>> str(buffer(u'hello'))
- 'h\x00\x00\x00e\x00\x00\x00l\x00\x00\x00l\x00\x00\x00o\x00\x00\x00'
- - formalized and improved query result to be an object with its own interface
- rather than a generator
- - correctly call activate() on items after they have been upgraded
-
-0.2.0 (2005-10-27):
- - Removed accidental Mantissa dependency
- - Automatic upgrade service added
- - Lots of new docstrings
- - Query utility module added, with a function for finding overlapping
- ranges
- - Added formal interface for the `where' argument to Store.query()
- - Added 'oneOf' attribute
=== removed file 'Axiom/README.txt'
--- Axiom/README.txt 2006-06-14 11:54:41 +0000
+++ Axiom/README.txt 1970-01-01 00:00:00 +0000
@@ -1,23 +0,0 @@
-
-Divmod Axiom
-============
-
-Divmod Axiom is an object database, or alternatively, an object-relational
-mapper, implemented on top of Python.
-
- Note: Axiom currently supports only SQLite and does NOT have any features
- for dealing with concurrency. We do plan to add some later, and perhaps
- also support other databases in the future.
-
-Its primary goal is to provide an object-oriented layer with what we consider
-to be the key aspects of OO, i.e. polymorphism and message dispatch, without
-hindering the power of an RDBMS.
-
-Axiom is a live database, not only an SQL generation tool: it includes an
-implementation of a scheduler service, external file references, automatic
-upgraders, robust failure handling, and Twisted integration.
-
-Axiom is tightly integrated with Twisted, and can store, start, and stop
-Twisted services directly from the database using the included 'axiomatic'
-command-line tool.
-
=== removed directory 'Axiom/axiom'
=== removed file 'Axiom/axiom/__init__.py'
--- Axiom/axiom/__init__.py 2014-01-15 10:48:55 +0000
+++ Axiom/axiom/__init__.py 1970-01-01 00:00:00 +0000
@@ -1,8 +0,0 @@
-# -*- test-case-name: axiom.test -*-
-from axiom._version import __version__
-from twisted.python import versions
-
-def asTwistedVersion(packageName, versionString):
- return versions.Version(packageName, *map(int, versionString.split(".")))
-
-version = asTwistedVersion("axiom", __version__)
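The removed `__init__.py` above converts a dotted version string into a Twisted Version object via `asTwistedVersion`. A minimal standalone sketch of the same parsing step, using a plain tuple instead of `twisted.python.versions.Version` (the helper name here is illustrative, not Axiom's):

```python
def as_version_tuple(version_string):
    # Split "0.7.1" into integer components, mirroring the
    # map(int, versionString.split(".")) call in asTwistedVersion.
    return tuple(int(part) for part in version_string.split("."))

print(as_version_tuple("0.7.1"))  # → (0, 7, 1)
```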
=== removed file 'Axiom/axiom/_fincache.py'
--- Axiom/axiom/_fincache.py 2013-08-02 19:10:21 +0000
+++ Axiom/axiom/_fincache.py 1970-01-01 00:00:00 +0000
@@ -1,167 +0,0 @@
-from weakref import ref
-from traceback import print_exc
-
-from twisted.python import log
-
-from axiom import iaxiom
-
-class CacheFault(KeyError):
- """
- An item has fallen out of cache, but the weakref callback has not yet run.
- """
-
-
-
-class CacheInconsistency(RuntimeError):
- """
- A key being cached is already present in the cache.
- """
-
-
-
-def logErrorNoMatterWhat():
- try:
- log.msg("Exception in finalizer cannot be propagated")
- log.err()
- except:
- try:
- emergLog = file("WEAKREF_EMERGENCY_ERROR.log", 'a')
- print_exc(file=emergLog)
- emergLog.flush()
- emergLog.close()
- except:
- # Nothing can be done. We can't get an emergency log file to write
- # to. Don't bother.
- return
-
-
-
-def createCacheRemoveCallback(cacheRef, key, finalizer):
- """
- Construct a callable to be used as a weakref callback for cache entries.
-
- The callable will invoke the provided finalizer, as well as removing the
- cache entry if the cache still exists and contains an entry for the given
- key.
-
- @type cacheRef: L{weakref.ref} to L{FinalizingCache}
- @param cacheRef: A weakref to the cache in which the corresponding cache
- item was stored.
-
- @param key: The key for which this value is cached.
-
- @type finalizer: callable taking 0 arguments
- @param finalizer: A user-provided callable that will be called when the
- weakref callback runs.
- """
- def remove(self):
- # Weakref callbacks cannot raise exceptions or DOOM ensues
- try:
- finalizer()
- except:
- logErrorNoMatterWhat()
- try:
- self = cacheRef()
- if self is not None:
- try:
- del self.data[key]
- except KeyError:
- # FinalizingCache.get may have already removed the cache
- # item from the dictionary; see the comment in that method
- # for an explanation of why.
- pass
- except:
- logErrorNoMatterWhat()
- return remove
-
-
-
-class FinalizingCache:
- """
- A cache that stores values by weakref.
-
- A finalizer is invoked when the weakref to a cached value is broken.
-
- @type data: L{dict}
- @ivar data: The cached values.
- """
- def __init__(self):
- self.data = {}
-
-
- def cache(self, key, value):
- """
- Add an entry to the cache.
-
- A weakref to the value is stored, rather than a direct reference. The
- value must have a C{__finalizer__} method that returns a callable which
- will be invoked when the weakref is broken.
-
- @param key: The key identifying the cache entry.
-
- @param value: The value for the cache entry.
- """
- fin = value.__finalizer__()
- try:
- # It's okay if there's already a cache entry for this key as long
- # as the weakref has already been broken. See the comment in
- # get() for an explanation of why this might happen.
- if self.data[key]() is not None:
- raise CacheInconsistency(
- "Duplicate cache key: %r %r %r" % (
- key, value, self.data[key]))
- except KeyError:
- pass
- self.data[key] = ref(value, createCacheRemoveCallback(
- ref(self), key, fin))
- return value
-
-
- def uncache(self, key, value):
- """
- Remove a key from the cache.
-
- As a sanity check, if the specified key is present in the cache, it
- must have the given value.
-
- @param key: The key to remove.
-
- @param value: The expected value for the key.
- """
- try:
- assert self.get(key) is value
- del self.data[key]
- except KeyError:
- # If the entry has already been removed from the cache, this will
- # result in KeyError which we ignore. If the entry is still in the
- # cache, but the weakref has been broken, this will result in
- # CacheFault (a KeyError subclass) which we also ignore. See the
- # comment in get() for an explanation of why this might happen.
- pass
-
-
- def get(self, key):
- """
- Get an entry from the cache by key.
-
- @raise KeyError: if the given key is not present in the cache.
-
- @raise CacheFault: (a L{KeyError} subclass) if the given key is present
- in the cache, but the value it points to is gone.
- """
- o = self.data[key]()
- if o is None:
- # On CPython, the weakref callback will always(?) run before any
- # other code has a chance to observe that the weakref is broken;
- # and since the callback removes the item from the dict, this
- # branch of code should never run. However, on PyPy (and possibly
- # other Python implementations), the weakref callback does not run
- # immediately, thus we may be able to observe this intermediate
- # state. Should this occur, we remove the dict item ourselves,
- # and raise CacheFault (which is a KeyError subclass).
- del self.data[key]
- raise CacheFault(
- "FinalizingCache has %r but its value is no more." % (key,))
- log.msg(interface=iaxiom.IStatEvent, stat_cache_hits=1, key=key)
- return o
-
=== removed file 'Axiom/axiom/_pysqlite2.py'
--- Axiom/axiom/_pysqlite2.py 2010-04-03 12:38:34 +0000
+++ Axiom/axiom/_pysqlite2.py 1970-01-01 00:00:00 +0000
@@ -1,162 +0,0 @@
-# -*- test-case-name: axiom.test.test_pysqlite2 -*-
-
-"""
-PySQLite2 Connection and Cursor wrappers.
-
-These provide a uniform interface on top of PySQLite2 for Axiom, particularly
-including error handling behavior and exception types.
-"""
-
-import time, sys
-
-try:
- # Prefer the third-party module, as it is easier to update, and so may
- # be newer or otherwise better.
- from pysqlite2 import dbapi2
-except ImportError:
- # But fall back to the stdlib module if we're on Python 2.6 or newer,
- # because it should work too. Don't do this for Python 2.5 because
- # there are critical, data-destroying bugs in that version.
- if sys.version_info >= (2, 6):
- import sqlite3 as dbapi2
- else:
- raise
-
-from twisted.python import log
-
-from axiom import errors, iaxiom
-
-class Connection(object):
- def __init__(self, connection, timeout=None):
- self._connection = connection
- self._timeout = timeout
-
-
- def fromDatabaseName(cls, dbFilename, timeout=None, isolationLevel=None):
- return cls(dbapi2.connect(dbFilename, timeout=0,
- isolation_level=isolationLevel))
- fromDatabaseName = classmethod(fromDatabaseName)
-
-
- def cursor(self):
- return Cursor(self, self._timeout)
-
-
- def identifySQLError(self, sql, args, e):
- """
- Identify an appropriate SQL error object for the given message for the
- supported versions of sqlite.
-
- @return: an SQLError
- """
- message = e.args[0]
- if message.startswith("table") and message.endswith("already exists"):
- return errors.TableAlreadyExists(sql, args, e)
- return errors.SQLError(sql, args, e)
-
-
-
-class Cursor(object):
- def __init__(self, connection, timeout):
- self._connection = connection
- self._cursor = connection._connection.cursor()
- self.timeout = timeout
-
-
- def __iter__(self):
- return iter(self._cursor)
-
-
- def time(self):
- """
- Return the current wallclock time as a float representing seconds
- from a fixed but arbitrary point.
- """
- return time.time()
-
-
- def sleep(self, seconds):
- """
- Block for the given number of seconds.
-
- @type seconds: C{float}
- """
- time.sleep(seconds)
-
-
- def execute(self, sql, args=()):
- try:
- try:
- blockedTime = 0.0
- t = self.time()
- try:
- # SQLite3 uses something like exponential backoff when
- # trying to acquire a database lock. This means that even
- # for very long timeouts, it may only attempt to acquire
- # the lock a handful of times. Another process which is
- # executing frequent, short-lived transactions may acquire
- # and release the lock many times between any two attempts
- # by this one to acquire it. If this process gets unlucky
- # just a few times, this execute may fail to acquire the
- # lock within the specified timeout.
-
- # Since attempting to acquire the lock is a fairly cheap
- # operation, we take another route. SQLite3 is always told
- # to use a timeout of 0 - ie, acquire it on the first try
- # or fail instantly. We will keep doing this, ten times a
- # second, until the actual timeout expires.
-
- # What would be really fantastic is a notification
- # mechanism for information about the state of the lock
- # changing. Of course this is clearly insane; no one has ever
- # managed to invent a tool for communicating one bit of
- # information between multiple processes.
- while 1:
- try:
- return self._cursor.execute(sql, args)
- except dbapi2.OperationalError, e:
- if e.args[0] == 'database is locked':
- now = self.time()
- if self.timeout is not None:
- if (now - t) > self.timeout:
- raise errors.TimeoutError(sql, self.timeout, e)
- self.sleep(0.1)
- blockedTime = self.time() - t
- else:
- raise
- finally:
- txntime = self.time() - t
- if txntime - blockedTime > 2.0:
- log.msg('Extremely long execute: %s' % (txntime - blockedTime,))
- log.msg(sql)
- # import traceback; traceback.print_stack()
- log.msg(interface=iaxiom.IStatEvent,
- stat_cursor_execute_time=txntime,
- stat_cursor_blocked_time=blockedTime)
- except dbapi2.OperationalError, e:
- if e.args[0] == 'database schema has changed':
- return self._cursor.execute(sql, args)
- raise
- except (dbapi2.ProgrammingError,
- dbapi2.InterfaceError,
- dbapi2.OperationalError), e:
- raise self._connection.identifySQLError(sql, args, e)
-
-
- def lastRowID(self):
- return self._cursor.lastrowid
-
-
- def close(self):
- self._cursor.close()
-
-
-# Export some names from the underlying module.
-sqlite_version_info = dbapi2.sqlite_version_info
-OperationalError = dbapi2.OperationalError
-
-__all__ = [
- 'OperationalError',
- 'Connection',
- 'sqlite_version_info',
- ]
=== removed file 'Axiom/axiom/_schema.py'
--- Axiom/axiom/_schema.py 2006-03-30 01:22:40 +0000
+++ Axiom/axiom/_schema.py 1970-01-01 00:00:00 +0000
@@ -1,71 +0,0 @@
-
-# DELETE_OBJECT = 'DELETE FROM axiom_objects WHERE oid = ?'
-CREATE_OBJECT = 'INSERT INTO *DATABASE*.axiom_objects (type_id) VALUES (?)'
-CREATE_TYPE = 'INSERT INTO *DATABASE*.axiom_types (typename, module, version) VALUES (?, ?, ?)'
-
-
-BASE_SCHEMA = ["""
-CREATE TABLE *DATABASE*.axiom_objects (
- type_id INTEGER NOT NULL
- CONSTRAINT fk_type_id REFERENCES axiom_types(oid)
-)
-""",
-
-"""
-CREATE INDEX *DATABASE*.axiom_objects_type_idx
- ON axiom_objects(type_id);
-""",
-
-"""
-CREATE TABLE *DATABASE*.axiom_types (
- typename VARCHAR,
- module VARCHAR,
- version INTEGER
-)
-""",
-
-"""
-CREATE TABLE *DATABASE*.axiom_attributes (
- type_id INTEGER,
- row_offset INTEGER,
- indexed BOOLEAN,
- sqltype VARCHAR,
- allow_none BOOLEAN,
- pythontype VARCHAR,
- attribute VARCHAR,
- docstring TEXT
-)
-"""]
-
-TYPEOF_QUERY = """
-SELECT *DATABASE*.axiom_types.typename, *DATABASE*.axiom_types.module, *DATABASE*.axiom_types.version
- FROM *DATABASE*.axiom_types, *DATABASE*.axiom_objects
- WHERE *DATABASE*.axiom_objects.oid = ?
- AND *DATABASE*.axiom_types.oid = *DATABASE*.axiom_objects.type_id
-"""
-
-HAS_SCHEMA_FEATURE = ("SELECT COUNT(oid) FROM *DATABASE*.sqlite_master "
- "WHERE type = ? AND name = ?")
-
-IDENTIFYING_SCHEMA = ('SELECT indexed, sqltype, allow_none, attribute '
- 'FROM *DATABASE*.axiom_attributes WHERE type_id = ? '
- 'ORDER BY row_offset')
-
-ADD_SCHEMA_ATTRIBUTE = (
- 'INSERT INTO *DATABASE*.axiom_attributes '
- '(type_id, row_offset, indexed, sqltype, allow_none, attribute, docstring, pythontype) '
- 'VALUES (?, ?, ?, ?, ?, ?, ?, ?)')
-
-ALL_TYPES = 'SELECT oid, module, typename, version FROM *DATABASE*.axiom_types'
-
-GET_GREATER_VERSIONS_OF_TYPE = ('SELECT version FROM *DATABASE*.axiom_types '
- 'WHERE typename = ? AND version > ?')
-
-SCHEMA_FOR_TYPE = ('SELECT indexed, pythontype, attribute, docstring '
- 'FROM *DATABASE*.axiom_attributes '
- 'WHERE type_id = ?')
-
-CHANGE_TYPE = 'UPDATE *DATABASE*.axiom_objects SET type_id = ? WHERE oid = ?'
-
-APP_VACUUM = 'DELETE FROM *DATABASE*.axiom_objects WHERE (type_id == -1) AND (oid != (SELECT MAX(oid) from *DATABASE*.axiom_objects))'
-
=== removed file 'Axiom/axiom/_version.py'
--- Axiom/axiom/_version.py 2014-03-22 19:16:37 +0000
+++ Axiom/axiom/_version.py 1970-01-01 00:00:00 +0000
@@ -1,1 +0,0 @@
-__version__ = "0.7.1"
=== removed file 'Axiom/axiom/attributes.py'
--- Axiom/axiom/attributes.py 2010-07-14 21:44:42 +0000
+++ Axiom/axiom/attributes.py 1970-01-01 00:00:00 +0000
@@ -1,1326 +0,0 @@
-# -*- test-case-name: axiom.test.test_attributes,axiom.test.test_reference -*-
-
-import os
-
-from decimal import Decimal
-
-from epsilon import hotfix
-hotfix.require('twisted', 'filepath_copyTo')
-
-from zope.interface import implements
-
-from twisted.python import filepath
-from twisted.python.components import registerAdapter
-
-from epsilon.extime import Time
-
-from axiom.slotmachine import Attribute as inmemory
-
-from axiom.errors import NoCrossStoreReferences, BrokenReference
-
-from axiom.iaxiom import IComparison, IOrdering, IColumn, IQuery
-
-_NEEDS_FETCH = object() # token indicating that a value was not found
-
-__metaclass__ = type
-
-
-class _ComparisonOperatorMuxer:
- """
- Collapse comparison operations into calls to a single method with varying
- arguments.
- """
- def compare(self, other, op):
- """
- Override this in a subclass.
- """
- raise NotImplementedError()
-
-
- def __eq__(self, other):
- return self.compare(other, '=')
-
-
- def __ne__(self, other):
- return self.compare(other, '!=')
-
-
- def __gt__(self, other):
- return self.compare(other, '>')
-
-
- def __lt__(self, other):
- return self.compare(other, '<')
-
-
- def __ge__(self, other):
- return self.compare(other, '>=')
-
-
- def __le__(self, other):
- return self.compare(other, '<=')
-
-
-def compare(left, right, op):
- # interim: maybe we want objects later? right now strings should be fine
- if IColumn.providedBy(right):
- return TwoAttributeComparison(left, op, right)
- elif right is None:
- if op == '=':
- negate = False
- elif op == '!=':
- negate = True
- else:
- raise TypeError(
- "None/NULL does not work with %s comparison" % (op,))
- return NullComparison(left, negate)
- else:
- # convert to constant usable in the database
- return AttributeValueComparison(left, op, right)
-
-
-
-class _MatchingOperationMuxer:
- """
- Collapse string matching operations into calls to a single method with
- varying arguments.
- """
- def _like(self, negate, firstOther, *others):
- others = (firstOther,) + others
- likeParts = []
-
- allValues = True
- for other in others:
- if IColumn.providedBy(other):
- likeParts.append(LikeColumn(other))
- allValues = False
- elif other is None:
- # LIKE NULL is a silly condition, but it's allowed.
- likeParts.append(LikeNull())
- allValues = False
- else:
- likeParts.append(LikeValue(other))
-
- if allValues:
- likeParts = [LikeValue(''.join(others))]
-
- return LikeComparison(self, negate, likeParts)
-
-
- def like(self, *others):
- return self._like(False, *others)
-
-
- def notLike(self, *others):
- return self._like(True, *others)
-
-
- def startswith(self, other):
- return self._like(False, other, '%')
-
-
- def endswith(self, other):
- return self._like(False, '%', other)
-
-
-
-_ASC = 'ASC'
-_DESC = 'DESC'
-
-class _OrderingMixin:
- """
- Provide the C{ascending} and C{descending} attributes to specify sort
- direction.
- """
- def _asc(self):
- return SimpleOrdering(self, _ASC)
-
- def _desc(self):
- return SimpleOrdering(self, _DESC)
-
- desc = descending = property(_desc)
- asc = ascending = property(_asc)
-
-
-
-class _ContainableMixin:
- def oneOf(self, seq, negate=False):
- """
- Choose items whose attributes are in a fixed set.
-
- X.oneOf([1, 2, 3])
-
- Implemented with the SQL 'in' statement.
- """
- return SequenceComparison(self, seq, negate)
-
-
- def notOneOf(self, seq):
- return self.oneOf(seq, negate=True)
-
-
-
-class Comparable(_ContainableMixin, _ComparisonOperatorMuxer,
- _MatchingOperationMuxer, _OrderingMixin):
- """
- Helper for a thing that can be compared like an SQLAttribute (or is in fact
- an SQLAttribute). Requires that 'self' have 'type' (Item-subclass) and
- 'columnName' (str) attributes, as well as an 'infilter' method in the
- spirit of SQLAttribute, documented below.
- """
-
- # XXX TODO: improve error reporting
-
- def compare(self, other, sqlop):
- return compare(self, other, sqlop)
-
-
-
-class SimpleOrdering:
- """
- Currently this class is mostly internal. More documentation will follow as
- its interface is finalized.
- """
- implements(IOrdering)
-
- # maybe this will be a useful public API, for the query something
- # something.
-
- isDescending = property(lambda self: self.direction == _DESC)
- isAscending = property(lambda self: self.direction == _ASC)
-
- def __init__(self, attribute, direction=''):
- self.attribute = attribute
- self.direction = direction
-
-
- def orderColumns(self):
- return [(self.attribute, self.direction)]
-
-
- def __repr__(self):
- return repr(self.attribute) + self.direction
-
-
- def __add__(self, other):
- if isinstance(other, SimpleOrdering):
- return CompoundOrdering([self, other])
- elif isinstance(other, (list, tuple)):
- return CompoundOrdering([self] + list(other))
- else:
- return NotImplemented
-
-
- def __radd__(self, other):
- if isinstance(other, SimpleOrdering):
- return CompoundOrdering([other, self])
- elif isinstance(other, (list, tuple)):
- return CompoundOrdering(list(other) + [self])
- else:
- return NotImplemented
-
-
-class CompoundOrdering:
- """
- List of SimpleOrdering instances.
- """
- implements(IOrdering)
-
- def __init__(self, seq):
- self.simpleOrderings = list(seq)
-
-
- def __repr__(self):
- return self.__class__.__name__ + '(' + repr(self.simpleOrderings) + ')'
-
-
- def __add__(self, other):
- """
- Just thinking about what might be useful from the perspective of
- introspecting on query objects... don't document this *too* thoroughly
- yet.
- """
- if isinstance(other, CompoundOrdering):
- return CompoundOrdering(self.simpleOrderings + other.simpleOrderings)
- elif isinstance(other, SimpleOrdering):
- return CompoundOrdering(self.simpleOrderings + [other])
- elif isinstance(other, (list, tuple)):
- return CompoundOrdering(self.simpleOrderings + list(other))
- else:
- return NotImplemented
-
-
- def __radd__(self, other):
- """
- Just thinking about what might be useful from the perspective of
- introspecting on query objects... don't document this *too* thoroughly
- yet.
- """
- if isinstance(other, CompoundOrdering):
- return CompoundOrdering(other.simpleOrderings + self.simpleOrderings)
- elif isinstance(other, SimpleOrdering):
- return CompoundOrdering([other] + self.simpleOrderings)
- elif isinstance(other, (list, tuple)):
- return CompoundOrdering(list(other) + self.simpleOrderings)
- else:
- return NotImplemented
-
-
- def orderColumns(self):
- x = []
- for o in self.simpleOrderings:
- x.extend(o.orderColumns())
- return x
-
-
-
-class UnspecifiedOrdering:
- implements(IOrdering)
-
- def __init__(self, null):
- pass
-
- def __add__(self, other):
- return IOrdering(other, NotImplemented)
-
- __radd__ = __add__
-
-
- def orderColumns(self):
- return []
-
-
-registerAdapter(CompoundOrdering, list, IOrdering)
-registerAdapter(CompoundOrdering, tuple, IOrdering)
-registerAdapter(UnspecifiedOrdering, type(None), IOrdering)
-registerAdapter(SimpleOrdering, Comparable, IOrdering)
-
-def compoundIndex(*columns):
- for column in columns:
- column.compoundIndexes.append(columns)
-
-class SQLAttribute(inmemory, Comparable):
- """
- Abstract superclass of all attributes.
-
- _Not_ an attribute itself.
-
- @ivar indexed: A C{bool} indicating whether this attribute will be indexed
- in the database.
-
- @ivar default: The value used for this attribute, if no value is specified.
- """
- implements(IColumn)
-
- sqltype = None
-
- def __init__(self, doc='', indexed=False, default=None, allowNone=True, defaultFactory=None):
- inmemory.__init__(self, doc)
- self.indexed = indexed
- self.compoundIndexes = []
- self.allowNone = allowNone
- self.default = default
- self.defaultFactory = defaultFactory
- if default is not None and defaultFactory is not None:
- raise ValueError("You may specify only one of default "
- "or defaultFactory, not both")
-
- def computeDefault(self):
- if self.defaultFactory is not None:
- return self.defaultFactory()
- return self.default
-
-
- def reprFor(self, oself):
- return repr(self.__get__(oself))
-
-
- def getShortColumnName(self, store):
- return store.getShortColumnName(self)
-
-
- def getColumnName(self, store):
- return store.getColumnName(self)
-
-
- def prepareInsert(self, oself, store):
- """
- Override this method to do something to an item to prepare for its
- insertion into a database.
- """
-
- def coercer(self, value):
- """
- must return a value equivalent to the data being passed in for it to be
- considered valid for a value of this attribute. for example, 'int' or
- 'str'.
- """
-
- raise NotImplementedError()
-
-
- def infilter(self, pyval, oself, store):
- """
- used to convert a Python value to something that lives in the database;
- so called because it is called when objects go in to the database. It
- takes a Python value and returns an SQL value.
- """
- raise NotImplementedError()
-
- def outfilter(self, dbval, oself):
- """
- used to convert an SQL value to something that lives in memory; so
- called because it is called when objects come out of the database. It
- takes an SQL value and returns a Python value.
- """
- return dbval
-
- # requiredSlots must be called before it's run
-
- prefix = "_axiom_memory_"
- dbprefix = "_axiom_store_"
-
- def requiredSlots(self, modname, classname, attrname):
- self.modname = modname
- self.classname = classname
- self.attrname = attrname
- self.underlying = self.prefix + attrname
- self.dbunderlying = self.dbprefix + attrname
- yield self.underlying
- yield self.dbunderlying
-
-
- def fullyQualifiedName(self):
- return '.'.join([self.modname,
- self.classname,
- self.attrname])
-
- def __repr__(self):
- return '<%s %s>' % ( self.__class__.__name__, self.fullyQualifiedName())
-
- def type():
- def get(self):
- if self._type is None:
- from twisted.python.reflect import namedAny
- self._type = namedAny(self.modname+'.'+self.classname)
- return self._type
- return get,
- _type = None
- type = property(*type())
-
- def __get__(self, oself, cls=None):
- if cls is not None and oself is None:
- if self._type is not None:
- assert self._type == cls
- else:
- self._type = cls
- return self
-
- pyval = getattr(oself, self.underlying, _NEEDS_FETCH)
- if pyval is _NEEDS_FETCH:
- dbval = getattr(oself, self.dbunderlying, _NEEDS_FETCH)
- if dbval is _NEEDS_FETCH:
- # here is what *is* happening here:
-
- # SQL attributes are always loaded when an Item is created by
- # loading from the database, either via a query, a getItemByID
- # or an attribute access. If an attribute is left un-set, that
- # means that the item it is on was just created, and we fill in
- # the default value.
-
- # Here is what *should be*, but *is not* happening here:
-
- # this condition ought to indicate that a value may exist in
- # the database, but it is not currently available in memory.
- # It would then query the database immediately, loading all
- # SQL-resident attributes related to this item to minimize the
- # number of queries run (e.g. rather than one per attribute)
-
- # this is a more desirable condition because it means that you
- # can create items "for free", so doing, for example,
- # self.bar.storeID is a much cheaper operation than doing
- # self.bar.baz. This particular idiom is frequently used in
- # queries and so speeding it up to avoid having to do a
- # database hit unless you actually need an item's attributes
- # would be worthwhile.
-
- return self.default
- pyval = self.outfilter(dbval, oself)
- # An upgrader may have changed the value of this attribute. If so,
- # return the new value, not the old one.
- if dbval != getattr(oself, self.dbunderlying):
- return self.__get__(oself, cls)
- # cache python value
- setattr(oself, self.underlying, pyval)
- return pyval
-
- def loaded(self, oself, dbval):
- """
- This method is invoked when the item is loaded from the database, and
- when a transaction is reverted which restores this attribute's value.
-
- @param oself: an instance of an item which has this attribute.
-
- @param dbval: the underlying database value which was retrieved.
- """
- setattr(oself, self.dbunderlying, dbval)
- delattr(oself, self.underlying) # member_descriptors don't raise
- # attribute errors; what gives? good
- # for us, I guess.
-
- def _convertPyval(self, oself, pyval):
- """
- Convert a Python value to a value suitable for inserting into the
- database.
-
- @param oself: The object on which this descriptor is an attribute.
- @param pyval: The value to be converted.
- @return: A value legal for this column in the database.
- """
- # convert to dbval later, I guess?
- if pyval is None and not self.allowNone:
- raise TypeError("attribute [%s.%s = %s()] must not be None" % (
- self.classname, self.attrname, self.__class__.__name__))
-
- return self.infilter(pyval, oself, oself.store)
-
- def __set__(self, oself, pyval):
- st = oself.store
-
- dbval = self._convertPyval(oself, pyval)
- oself.__dirty__[self.attrname] = self, dbval
- oself.touch()
- setattr(oself, self.underlying, pyval)
- setattr(oself, self.dbunderlying, dbval)
- if st is not None and st.autocommit:
- st._rejectChanges += 1
- try:
- oself.checkpoint()
- finally:
- st._rejectChanges -= 1
-
-
-class TwoAttributeComparison:
- implements(IComparison)
- def __init__(self, leftAttribute, operationString, rightAttribute):
- self.leftAttribute = leftAttribute
- self.operationString = operationString
- self.rightAttribute = rightAttribute
-
- def getQuery(self, store):
- sql = ('(%s %s %s)' % (self.leftAttribute.getColumnName(store),
- self.operationString,
- self.rightAttribute.getColumnName(store)) )
- return sql
-
- def getInvolvedTables(self):
- tables = [self.leftAttribute.type]
- if self.leftAttribute.type is not self.rightAttribute.type:
- tables.append(self.rightAttribute.type)
- return tables
-
-
- def getArgs(self, store):
- return []
-
-
- def __repr__(self):
- return ' '.join((self.leftAttribute.fullyQualifiedName(),
- self.operationString,
- self.rightAttribute.fullyQualifiedName()))
-
-
-class AttributeValueComparison:
- implements(IComparison)
- def __init__(self, attribute, operationString, value):
- self.attribute = attribute
- self.operationString = operationString
- self.value = value
-
- def getQuery(self, store):
- return ('(%s %s ?)' % (self.attribute.getColumnName(store),
- self.operationString))
-
- def getArgs(self, store):
- return [self.attribute.infilter(self.value, None, store)]
-
- def getInvolvedTables(self):
- return [self.attribute.type]
-
- def __repr__(self):
- return ' '.join((self.attribute.fullyQualifiedName(),
- self.operationString,
- repr(self.value)))
-
-class NullComparison:
- implements(IComparison)
- def __init__(self, attribute, negate=False):
- self.attribute = attribute
- self.negate = negate
-
- def getQuery(self, store):
- if self.negate:
- op = 'NOT'
- else:
- op = 'IS'
- return ('(%s %s NULL)' % (self.attribute.getColumnName(store),
- op))
-
- def getArgs(self, store):
- return []
-
- def getInvolvedTables(self):
- return [self.attribute.type]
-
-class LikeFragment:
- def getLikeArgs(self):
- return []
-
- def getLikeQuery(self, st):
- raise NotImplementedError()
-
- def getLikeTables(self):
- return []
-
-class LikeNull(LikeFragment):
- def getLikeQuery(self, st):
- return "NULL"
-
-class LikeValue(LikeFragment):
- def __init__(self, value):
- self.value = value
-
- def getLikeQuery(self, st):
- return "?"
-
- def getLikeArgs(self):
- return [self.value]
-
-class LikeColumn(LikeFragment):
- def __init__(self, attribute):
- self.attribute = attribute
-
- def getLikeQuery(self, st):
- return self.attribute.getColumnName(st)
-
- def getLikeTables(self):
- return [self.attribute.type]
-
-
-class LikeComparison:
- implements(IComparison)
- # Not AggregateComparison or AttributeValueComparison because there is a
- # different, optimized syntax for 'or'. WTF is wrong with you, SQL??
-
- def __init__(self, attribute, negate, likeParts):
- self.negate = negate
- self.attribute = attribute
- self.likeParts = likeParts
-
- def getInvolvedTables(self):
- tables = [self.attribute.type]
- for lf in self.likeParts:
- tables.extend([
- t for t in lf.getLikeTables() if t not in tables])
- return tables
-
- def getQuery(self, store):
- if self.negate:
- op = 'NOT LIKE'
- else:
- op = 'LIKE'
- sqlParts = [lf.getLikeQuery(store) for lf in self.likeParts]
- sql = '(%s %s (%s))' % (self.attribute.getColumnName(store),
- op, ' || '.join(sqlParts))
- return sql
-
- def getArgs(self, store):
- l = []
- for lf in self.likeParts:
- for pyval in lf.getLikeArgs():
- l.append(
- self.attribute.infilter(
- pyval, None, store))
- return l
-
-
-
-class AggregateComparison:
- """
- Abstract base class for compound comparisons that aggregate other
- comparisons - currently only used for AND and OR comparisons.
- """
-
- implements(IComparison)
- operator = None
-
- def __init__(self, *conditions):
- self.conditions = conditions
- if self.operator is None:
- raise NotImplementedError, ('%s cannot be used; you want AND or OR.'
- % self.__class__.__name__)
- if not conditions:
- raise ValueError, ('%s condition requires at least one argument'
- % self.operator)
-
- def getQuery(self, store):
- oper = ' %s ' % self.operator
- return '(%s)' % oper.join(
- [condition.getQuery(store) for condition in self.conditions])
-
- def getArgs(self, store):
- args = []
- for cond in self.conditions:
- args += cond.getArgs(store)
- return args
-
- def getInvolvedTables(self):
- tables = []
- for cond in self.conditions:
- tables.extend([
- t for t in cond.getInvolvedTables() if t not in tables])
- return tables
-
- def __repr__(self):
- return '%s(%s)' % (self.__class__.__name__,
- ', '.join(map(repr, self.conditions)))
-
-
-
-class SequenceComparison:
- implements(IComparison)
-
- def __init__(self, attribute, container, negate):
- self.attribute = attribute
- self.container = container
- self.negate = negate
-
- if IColumn.providedBy(container):
- self.containerClause = self._columnContainer
- self.getArgs = self._columnArgs
- elif IQuery.providedBy(container):
- self.containerClause = self._queryContainer
- self.getArgs = self._queryArgs
- else:
- self.containerClause = self._sequenceContainer
- self.getArgs = self._sequenceArgs
-
-
- def _columnContainer(self, store):
- """
- Return the fully qualified name of the column being examined so as
- to push all of the containment testing into the database.
- """
- return self.container.getColumnName(store)
-
-
- def _columnArgs(self, store):
- """
- The IColumn form of this has no arguments, just a column name
- specified in the SQL, specified by _columnContainer.
- """
- return []
-
-
- _subselectSQL = None
- _subselectArgs = None
- def _queryContainer(self, store):
- """
- Generate and cache the subselect SQL and its arguments. Return the
- subselect SQL.
- """
- if self._subselectSQL is None:
- sql, args = self.container._sqlAndArgs('SELECT',
- self.container._queryTarget)
- self._subselectSQL, self._subselectArgs = sql, args
- return self._subselectSQL
-
-
- def _queryArgs(self, store):
- """
- Make sure subselect arguments have been generated and then return
- them.
- """
- self._queryContainer(store)
- return self._subselectArgs
-
-
- _sequence = None
- def _sequenceContainer(self, store):
- """
- Smash whatever we got into a list and save the result in case we are
- executed multiple times. This keeps us from tripping up over
- generators and the like.
- """
- if self._sequence is None:
- self._sequence = list(self.container)
- self._clause = ', '.join(['?'] * len(self._sequence))
- return self._clause
-
-
- def _sequenceArgs(self, store):
- """
- Filter each element of the data using the attribute type being
- tested for containment and hand back the resulting list.
- """
- self._sequenceContainer(store) # Force _sequence to be valid
- return [self.attribute.infilter(pyval, None, store) for pyval in self._sequence]
-
-
- # IComparison - getArgs is assigned as an instance attribute
- def getQuery(self, store):
- return '%s %sIN (%s)' % (
- self.attribute.getColumnName(store),
- self.negate and 'NOT ' or '',
- self.containerClause(store))
-
-
- def getInvolvedTables(self):
- return [self.attribute.type]
-
-
-
-class AND(AggregateComparison):
- """
- Combine 2 L{IComparison}s such that this is true when both are true.
- """
- operator = 'AND'
-
-class OR(AggregateComparison):
- """
- Combine 2 L{IComparison}s such that this is true when either is true.
- """
- operator = 'OR'
-
-
-class TableOrderComparisonWrapper(object):
- """
- Wrap any other L{IComparison} and override its L{getInvolvedTables} method
- to specify the same tables but in an explicitly specified order.
- """
- implements(IComparison)
-
- tables = None
- comparison = None
-
- def __init__(self, tables, comparison):
- assert set(tables) == set(comparison.getInvolvedTables())
-
- self.tables = tables
- self.comparison = comparison
-
-
- def getInvolvedTables(self):
- return self.tables
-
-
- def getQuery(self, store):
- return self.comparison.getQuery(store)
-
-
- def getArgs(self, store):
- return self.comparison.getArgs(store)
-
-
-
-class boolean(SQLAttribute):
- sqltype = 'BOOLEAN'
-
- def infilter(self, pyval, oself, store):
- if pyval is None:
- return None
- if pyval is True:
- return 1
- elif pyval is False:
- return 0
- else:
- raise TypeError("attribute [%s.%s = boolean()] must be True or False; not %r" %
- (self.classname, self.attrname, type(pyval).__name__,))
-
- def outfilter(self, dbval, oself):
- if dbval == 1:
- return True
- elif dbval == 0:
- return False
- elif self.allowNone and dbval is None:
- return None
- else:
- raise ValueError(
- "attribute [%s.%s = boolean()] "
- "must have a database value of 1 or 0; not %r" %
- (self.classname, self.attrname, dbval))
-
-
-
-LARGEST_POSITIVE = (2 ** 63)-1
-LARGEST_NEGATIVE = -(2 ** 63)
-
-class ConstraintError(TypeError):
- """A type constraint was violated.
- """
-
- def __init__(self,
- attributeObj,
- requiredTypes,
- providedValue):
- self.attributeObj = attributeObj
- self.requiredTypes = requiredTypes
- self.providedValue = providedValue
- TypeError.__init__(self,
- "attribute [%s.%s = %s()] must be "
- "(%s); not %r" %
- (attributeObj.classname,
- attributeObj.attrname,
- attributeObj.__class__.__name__,
- requiredTypes,
- type(providedValue).__name__))
-
-
-
-def requireType(attributeObj, value, typerepr, *types):
- if not isinstance(value, types):
- raise ConstraintError(attributeObj,
- typerepr,
- value)
-
-
-
-inttyperepr = "integer between %r and %r" % (LARGEST_NEGATIVE, LARGEST_POSITIVE)
-
-class integer(SQLAttribute):
- sqltype = 'INTEGER'
- def infilter(self, pyval, oself, store):
- if pyval is None:
- return None
- requireType(self, pyval, inttyperepr, int, long)
- if not LARGEST_NEGATIVE <= pyval <= LARGEST_POSITIVE:
- raise ConstraintError(
- self, inttyperepr, pyval)
- return pyval
-
-
-
-class bytes(SQLAttribute):
- """
- Attribute representing a sequence of bytes; this is represented in memory
- as a Python 'str'.
- """
-
- sqltype = 'BLOB'
-
- def infilter(self, pyval, oself, store):
- if pyval is None:
- return None
- if isinstance(pyval, unicode):
- raise ConstraintError(self, "str or other byte buffer", pyval)
- return buffer(pyval)
-
- def outfilter(self, dbval, oself):
- if dbval is None:
- return None
- return str(dbval)
-
-class InvalidPathError(ValueError):
- """
- A path that could not be used with the database was attempted to be used
- with the database.
- """
-
-class text(SQLAttribute):
- """
- Attribute representing a sequence of characters; this is represented in
- memory as a Python 'unicode'.
- """
-
- def __init__(self, caseSensitive=False, **kw):
- SQLAttribute.__init__(self, **kw)
- if caseSensitive:
- self.sqltype = 'TEXT'
- else:
- self.sqltype = 'TEXT COLLATE NOCASE'
- self.caseSensitive = caseSensitive
-
- def infilter(self, pyval, oself, store):
- if pyval is None:
- return None
- if not isinstance(pyval, unicode) or u'\0' in pyval:
- raise ConstraintError(
- self, "unicode string without NULL bytes", pyval)
- return pyval
-
- def outfilter(self, dbval, oself):
- return dbval
-
-
-
-class textlist(text):
- delimiter = u'\u001f'
-
- # Once upon a time, textlist encoded the list in such a way that caused []
- # to be indistinguishable from [u'']. This value is now used as a
- # placeholder at the head of the list, to avoid this problem in a way that
- # is almost completely backwards-compatible with older databases.
- guard = u'\u0002'
-
- def outfilter(self, dbval, oself):
- unicodeString = super(textlist, self).outfilter(dbval, oself)
- if unicodeString is None:
- return None
- val = unicodeString.split(self.delimiter)
- if val[:1] == [self.guard]:
- del val[:1]
- return val
-
- def infilter(self, pyval, oself, store):
- if pyval is None:
- return None
- for innerVal in pyval:
- assert self.delimiter not in innerVal and self.guard not in innerVal
- result = self.delimiter.join([self.guard] + list(pyval))
- return super(textlist, self).infilter(result, oself, store)
-
-class path(text):
- """
- Attribute representing a pathname in the filesystem. If 'relative=True',
- the default, the representative pathname object must be somewhere inside
- the store, and will migrate with the store.
-
- I expect L{twisted.python.filepath.FilePath} or compatible objects as my
- values.
- """
-
- def __init__(self, relative=True, **kw):
- text.__init__(self, **kw)
- self.relative = relative
-
- def prepareInsert(self, oself, store):
- """
- Prepare for insertion into the database by making the dbunderlying
- attribute of the item a relative pathname with respect to the store
- rather than an absolute pathname.
- """
- if self.relative:
- fspath = self.__get__(oself)
- oself.__dirty__[self.attrname] = self, self.infilter(fspath, oself, store)
-
- def infilter(self, pyval, oself, store):
- if pyval is None:
- return None
- mypath = unicode(pyval.path)
- if store is None:
- store = oself.store
- if store is None:
- return None
- if self.relative:
- # XXX add some more filepath APIs to make this kind of checking easier.
- storepath = os.path.normpath(store.filesdir.path)
- mysegs = mypath.split(os.sep)
- storesegs = storepath.split(os.sep)
- if len(mysegs) <= len(storesegs) or mysegs[:len(storesegs)] != storesegs:
- raise InvalidPathError('%s not in %s' % (mypath, storepath))
- # In the database we use '/' to separate paths for portability.
- # This database could have relative paths created on Windows, then
- # be moved to Linux for deployment, and what *was* the native
- # os.sep (backslash) will not be friendly to Linux's filesystem.
- # However, this is only for relative paths, since absolute or UNC
- # pathnames on a Windows system are inherently unportable and it's
- # not reasonable to calculate relative paths outside the store.
- p = '/'.join(mysegs[len(storesegs):])
- else:
- p = mypath # we already know it's absolute, it came from a
- # filepath.
- return super(path, self).infilter(p, oself, store)
-
- def outfilter(self, dbval, oself):
- if dbval is None:
- return None
- if self.relative:
- fp = oself.store.filesdir
- for segment in dbval.split('/'):
- fp = fp.child(segment)
- else:
- fp = filepath.FilePath(dbval)
- return fp
-
-
-MICRO = 1000000.
-
-class timestamp(integer):
- """
- An in-database representation of date and time.
-
- To make formatting as easy as possible, this is represented in Python as an
- instance of L{epsilon.extime.Time}; see its documentation for more details.
- """
- def infilter(self, pyval, oself, store):
- if pyval is None:
- return None
- return integer.infilter(self,
- int(pyval.asPOSIXTimestamp() * MICRO), oself,
- store)
-
- def outfilter(self, dbval, oself):
- if dbval is None:
- return None
- return Time.fromPOSIXTimestamp(dbval / MICRO)
-
-_cascadingDeletes = {}
-_disallows = {}
-
-class reference(integer):
- NULLIFY = object()
- DISALLOW = object()
- CASCADE = object()
-
- def __init__(self, doc='', indexed=True, allowNone=True, reftype=None,
- whenDeleted=NULLIFY):
- integer.__init__(self, doc, indexed, None, allowNone)
- assert whenDeleted in (reference.NULLIFY,
- reference.CASCADE,
- reference.DISALLOW),(
- "whenDeleted must be one of: "
- "reference.NULLIFY, reference.CASCADE, reference.DISALLOW")
- self.reftype = reftype
- self.whenDeleted = whenDeleted
- if whenDeleted is reference.CASCADE:
- # Note; this list is technically in a slightly inconsistent state
- # as things are being built.
- _cascadingDeletes.setdefault(reftype, []).append(self)
- if whenDeleted is reference.DISALLOW:
- _disallows.setdefault(reftype, []).append(self)
-
- def reprFor(self, oself):
- obj = getattr(oself, self.underlying, None)
- if obj is not None:
- if obj.storeID is not None:
- return 'reference(%d)' % (obj.storeID,)
- else:
- return 'reference(unstored@%d)' % (id(obj),)
- sid = getattr(oself, self.dbunderlying, None)
- if sid is None:
- return 'None'
- return 'reference(%d)' % (sid,)
-
-
- def __get__(self, oself, cls=None):
- """
- Override L{integer.__get__} to verify that the value to be returned is
- currently a valid item in the same store, and to make sure that legacy
- items are upgraded if they happen to have been cached.
- """
- rv = super(reference, self).__get__(oself, cls)
- if rv is self:
- # If it's an attr lookup on the class, just do that.
- return self
- if rv is None:
- return rv
- if not rv._currentlyValidAsReferentFor(oself.store):
- # Make sure it's currently valid, i.e. it's not going to be deleted
- # this transaction or it hasn't been deleted.
-
- # XXX TODO: drop cached in-memory referent if it's been deleted /
- # no longer valid.
- assert self.whenDeleted is reference.NULLIFY, (
- "not sure what to do if not...")
- return None
- if rv.__legacy__:
- delattr(oself, self.underlying)
- return super(reference, self).__get__(oself, cls)
- return rv
-
- def prepareInsert(self, oself, store):
- oitem = super(reference, self).__get__(oself) # bypass NULLIFY
- if oitem is not None and oitem.store is not store:
- raise NoCrossStoreReferences(
- "Trying to insert item: %r into store: %r, "
- "but it has a reference to other item: .%s=%r "
- "in another store: %r" % (
- oself, store,
- self.attrname, oitem,
- oitem.store))
-
- def infilter(self, pyval, oself, store):
- if pyval is None:
- return None
- if oself is None:
- return pyval.storeID
- if oself.store is None:
- return pyval.storeID
- if oself.store != pyval.store:
- raise NoCrossStoreReferences(
- "You can't establish references to items in other stores.")
-
- return integer.infilter(self, pyval.storeID, oself, store)
-
- def outfilter(self, dbval, oself):
- if dbval is None:
- return None
-
- referee = oself.store.getItemByID(dbval, default=None, autoUpgrade=not oself.__legacy__)
- if referee is None and self.whenDeleted is not reference.NULLIFY:
-
- # If referee merely changed to another valid referent,
- # SQLAttribute.__get__ will notice that what we returned is
- # inconsistent and try again. However, it doesn't know about the
- # BrokenReference that is raised if the old referee is no longer a
- # valid referent. Check to see if the dbunderlying is still the
- # same as the dbval passed in. If it's different, we should try to
- # load the value again. Only if it is unchanged will we raise the
- # BrokenReference. It would be better if all of this
- # change-detection logic were in one place, but I can't figure out
- # how to do that. -exarkun
- if dbval != getattr(oself, self.dbunderlying):
- return self.__get__(oself, None)
-
- raise BrokenReference('Reference to storeID %r is broken' % (dbval,))
- return referee
-
-class ieee754_double(SQLAttribute):
- """
- From the SQLite documentation::
-
- Each value stored in an SQLite database (or manipulated by the
- database engine) has one of the following storage classes: (...)
- REAL. The value is a floating point value, stored as an 8-byte IEEE
- floating point number.
-
- This attribute type implements IEEE754 double-precision binary
- floating-point storage. Some people call this 'float', and think it is
- somehow related to numbers. This assumption can be misleading when working
- with certain types of data.
-
- This attribute has an unwieldy name on purpose. You should be aware
- of the caveats related to binary floating point math before using this
- type. It is particularly ill-advised to use it to store values
- representing large amounts of currency as rounding errors may be
- significant enough to introduce accounting discrepancies.
-
- Certain edge-cases are not handled properly. For example, INF and NAN are
- considered by SQLite to be equal to everything, rather than the Python
- interpretation where INF is equal only to itself and greater than
- everything, and NAN is equal to nothing, not even itself.
- """
-
- sqltype = 'REAL'
-
- def infilter(self, pyval, oself, store):
- if pyval is None:
- return None
- requireType(self, pyval, 'float', float)
- return pyval
-
- def outfilter(self, dbval, oself):
- return dbval
-
-
-
-class AbstractFixedPointDecimal(integer):
- """
- Attribute representing a number with a specified number of decimal
- places.
-
- This is stored in SQLite as a binary integer multiplied by M{10**N}
- where C{N} is the number of decimal places required by Python.
-    Therefore, in-database multiplication and division will not work, nor
-    will queries which compare against integers or against fixed-point
-    decimals with a different number of decimal places. Also, you cannot
-    store, or sum to, fixed-point decimals greater than M{(2**63)/(10**N)}.
-
- While L{ieee754_double} is handy for representing various floating-point
- numbers, such as scientific measurements, this class (and the associated
- Python decimal class) is more appropriate for arithmetic on sums of money.
-
-    For more information, see Python's U{Decimal
-    class<http://www.python.org/doc/current/lib/module-decimal.html>} and
-    U{computerized decimal math in
-    general<http://www2.hursley.ibm.com/decimal/decarith.html>}.
-
- This is currently a private helper superclass because we cannot store
- additional metadata about column types; maybe we should fix that.
-
-    @cvar decimalPlaces: the number of decimal places of precision preserved
-        by storage and retrieval for this class. *Digits beyond this number
-        will be silently truncated from values passed into the database*, so
-        be sure to select a value appropriate to your application!
- """
-
- def __init__(self, **kw):
- integer.__init__(self, **kw)
-
-
- def infilter(self, pyval, oself, store):
- if pyval is None:
- return None
- if isinstance(pyval, (int, long)):
- pyval = Decimal(pyval)
- if isinstance(pyval, Decimal):
- # Python < 2.5.2 compatibility:
- # Use to_integral instead of to_integral_value.
- dbval = int((pyval * 10**self.decimalPlaces).to_integral())
- return super(AbstractFixedPointDecimal, self).infilter(
- dbval, oself, store)
- else:
- raise TypeError(
- "attribute [%s.%s = AbstractFixedPointDecimal(...)] must be "
- "Decimal instance; not %r" % (
- self.classname, self.attrname, type(pyval).__name__))
-
-
- def outfilter(self, dbval, oself):
- if dbval is None:
- return None
- return Decimal(dbval) / 10**self.decimalPlaces
-
-
- def compare(self, other, sqlop):
- if isinstance(other, Comparable):
- if isinstance(other, AbstractFixedPointDecimal):
- if other.decimalPlaces == self.decimalPlaces:
- # fall through to default behavior at bottom
- pass
- else:
- raise TypeError(
- "Can't compare Decimals of varying precisions: "
- "(%s.%s %s %s.%s)" % (
- self.classname, self.attrname,
- sqlop,
- other.classname, other.attrname
- ))
- else:
- raise TypeError(
- "Can't compare Decimals to other things: "
- "(%s.%s %s %s.%s)" % (
- self.classname, self.attrname,
- sqlop,
- other.classname, other.attrname
- ))
- return super(AbstractFixedPointDecimal, self).compare(other, sqlop)
-
-class point1decimal(AbstractFixedPointDecimal):
- decimalPlaces = 1
-class point2decimal(AbstractFixedPointDecimal):
- decimalPlaces = 2
-class point3decimal(AbstractFixedPointDecimal):
- decimalPlaces = 3
-class point4decimal(AbstractFixedPointDecimal):
- decimalPlaces = 4
-class point5decimal(AbstractFixedPointDecimal):
- decimalPlaces = 5
-class point6decimal(AbstractFixedPointDecimal):
- decimalPlaces = 6
-class point7decimal(AbstractFixedPointDecimal):
- decimalPlaces = 7
-class point8decimal(AbstractFixedPointDecimal):
- decimalPlaces = 8
-class point9decimal(AbstractFixedPointDecimal):
- decimalPlaces = 9
-class point10decimal(AbstractFixedPointDecimal):
- decimalPlaces = 10
-
-class money(point4decimal):
- """
- I am a 4-point precision fixed-point decimal number column type; suggested
- for representing a quantity of money.
-
- (This does not, however, include features such as currency.)
- """
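The scaled-integer storage scheme used by AbstractFixedPointDecimal and its subclasses can be sketched in plain Python (the helper names `to_db`/`from_db` are hypothetical; they mirror what `infilter` and `outfilter` do, with `decimalPlaces = 4` as in point4decimal/money):

```python
from decimal import Decimal

DECIMAL_PLACES = 4  # as in point4decimal / money

def to_db(pyval, places=DECIMAL_PLACES):
    # Scale the Decimal up and keep only the integral part; this is
    # the integer actually stored in the SQLite column (cf. infilter).
    return int((pyval * 10 ** places).to_integral_value())

def from_db(dbval, places=DECIMAL_PLACES):
    # Reverse the scaling on the way out (cf. outfilter).
    return Decimal(dbval) / 10 ** places

stored = to_db(Decimal("19.99"))
print(stored)           # 199900
print(from_db(stored))  # 19.99
```

Because every column with the same `decimalPlaces` shares one scale factor, integer comparison and summation in SQL remain exact, which is why mixing scales in a query is disallowed above.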
=== removed file 'Axiom/axiom/batch.py'
--- Axiom/axiom/batch.py 2012-07-05 13:37:40 +0000
+++ Axiom/axiom/batch.py 1970-01-01 00:00:00 +0000
@@ -1,1230 +0,0 @@
-# -*- test-case-name: axiom.test.test_batch -*-
-
-"""
-Utilities for performing repetitive tasks over potentially large sets
-of data over an extended period of time.
-"""
-
-import weakref, datetime, os, sys
-
-from zope.interface import implements
-
-from twisted.python import reflect, failure, log, procutils, util, runtime
-from twisted.internet import task, defer, reactor, error, protocol
-from twisted.application import service
-
-from epsilon import extime, process, cooperator, modal, juice
-
-from axiom import iaxiom, errors as eaxiom, item, attributes
-from axiom.scheduler import Scheduler, SubScheduler
-from axiom.upgrade import registerUpgrader, registerDeletionUpgrader
-from axiom.dependency import installOn
-
-VERBOSE = False
-
-_processors = weakref.WeakValueDictionary()
-
-
-class _NoWorkUnits(Exception):
- """
- Raised by a _ReliableListener's step() method to indicate it
- didn't do anything.
- """
-
-
-
-class _ProcessingFailure(Exception):
- """
- Raised when processItem raises any exception. This is never raised
- directly, but instances of the three subclasses are.
- """
- def __init__(self, reliableListener, workUnit, failure):
- Exception.__init__(self)
- self.reliableListener = reliableListener
- self.workUnit = workUnit
- self.failure = failure
-
- # Get rid of all references this failure is holding so that it doesn't
- # cause any crazy object leaks. See also the comment in
- # BatchProcessingService.step's except suite.
- self.failure.cleanFailure()
-
-
- def mark(self):
- """
- Mark the unit of work as failed in the database and update the listener
- so as to skip it next time.
- """
- self.reliableListener.lastRun = extime.Time()
- BatchProcessingError(
- store=self.reliableListener.store,
- processor=self.reliableListener.processor,
- listener=self.reliableListener.listener,
- item=self.workUnit,
- error=self.failure.getErrorMessage())
-
-
-
-class _ForwardProcessingFailure(_ProcessingFailure):
- """
- An error occurred in a reliable listener while processing items forward
- from the mark.
- """
-
- def mark(self):
- _ProcessingFailure.mark(self)
- self.reliableListener.forwardMark = self.workUnit.storeID
-
-
-
-class _BackwardProcessingFailure(_ProcessingFailure):
- """
- An error occurred in a reliable listener while processing items backwards
- from the mark.
- """
- def mark(self):
- _ProcessingFailure.mark(self)
- self.reliableListener.backwardMark = self.workUnit.storeID
-
-
-
-class _TrackedProcessingFailure(_ProcessingFailure):
- """
- An error occurred in a reliable listener while processing items specially
- added to the batch run.
- """
-
-
-
-class BatchProcessingError(item.Item):
- processor = attributes.reference(doc="""
- The batch processor which owns this failure.
- """)
-
- listener = attributes.reference(doc="""
- The listener which caused this error.
- """)
-
- item = attributes.reference(doc="""
- The item which actually failed to be processed.
- """)
-
- error = attributes.bytes(doc="""
- The error message which was associated with this failure.
- """)
-
-
-
-class _ReliableTracker(item.Item):
- """
- A tracking item for an out-of-sequence item which a reliable listener
- should be given to process.
-
- These are created when L{_ReliableListener.addItem} is called and the
- specified item is in the range of items which have already been processed.
- """
-
- processor = attributes.reference(doc="""
- The batch processor which owns this tracker.
- """)
-
- listener = attributes.reference(doc="""
- The listener which is responsible for this tracker's item.
- """)
-
- item = attributes.reference(doc="""
- The item which this is tracking.
- """)
-
-
-
-class _ReliableListener(item.Item):
- processor = attributes.reference(doc="""
- The batch processor which owns this listener.
- """)
-
- listener = attributes.reference(doc="""
- The item which is actually the listener.
- """)
-
- backwardMark = attributes.integer(doc="""
- Store ID of the first Item after the next Item to be processed in
- the backwards direction. Usually, the Store ID of the Item
- previously processed in the backwards direction.
- """)
-
- forwardMark = attributes.integer(doc="""
- Store ID of the first Item before the next Item to be processed in
- the forwards direction. Usually, the Store ID of the Item
- previously processed in the forwards direction.
- """)
-
- lastRun = attributes.timestamp(doc="""
- Time indicating the last chance given to this listener to do some
- work.
- """)
-
- style = attributes.integer(doc="""
- Either L{iaxiom.LOCAL} or L{iaxiom.REMOTE}. Indicates where the
- batch processing should occur, in the main process or a
- subprocess.
- """)
-
- def __repr__(self):
- return '<ReliableListener %s %r #%r>' % ({iaxiom.REMOTE: 'remote',
- iaxiom.LOCAL: 'local'}[self.style],
- self.listener,
- self.storeID)
-
-
- def addItem(self, item):
- assert type(item) is self.processor.workUnitType, \
- "Adding work unit of type %r to listener for type %r" % (
- type(item), self.processor.workUnitType)
- if item.storeID >= self.backwardMark and item.storeID <= self.forwardMark:
- _ReliableTracker(store=self.store,
- listener=self,
- item=item)
-
-
- def _forwardWork(self, workUnitType):
- if VERBOSE:
- log.msg("%r looking forward from %r" % (self, self.forwardMark,))
- return self.store.query(
- workUnitType,
- workUnitType.storeID > self.forwardMark,
- sort=workUnitType.storeID.ascending,
- limit=2)
-
-
- def _backwardWork(self, workUnitType):
- if VERBOSE:
- log.msg("%r looking backward from %r" % (self, self.backwardMark,))
- if self.backwardMark == 0:
- return []
- return self.store.query(
- workUnitType,
- workUnitType.storeID < self.backwardMark,
- sort=workUnitType.storeID.descending,
- limit=2)
-
-
- def _extraWork(self):
- return self.store.query(_ReliableTracker,
- _ReliableTracker.listener == self,
- limit=2)
-
-
- def _doOneWork(self, workUnit, failureType):
- if VERBOSE:
- log.msg("Processing a unit of work: %r" % (workUnit,))
- try:
- self.listener.processItem(workUnit)
- except:
- f = failure.Failure()
- if VERBOSE:
- log.msg("Processing failed: %s" % (f.getErrorMessage(),))
- log.err(f)
- raise failureType(self, workUnit, f)
-
-
- def step(self):
- first = True
- for workTracker in self._extraWork():
- if first:
- first = False
- else:
- return True
- item = workTracker.item
- workTracker.deleteFromStore()
- self._doOneWork(item, _TrackedProcessingFailure)
-
- for workUnit in self._forwardWork(self.processor.workUnitType):
- if first:
- first = False
- else:
- return True
- self.forwardMark = workUnit.storeID
- self._doOneWork(workUnit, _ForwardProcessingFailure)
-
- for workUnit in self._backwardWork(self.processor.workUnitType):
- if first:
- first = False
- else:
- return True
- self.backwardMark = workUnit.storeID
- self._doOneWork(workUnit, _BackwardProcessingFailure)
-
- if first:
- raise _NoWorkUnits()
- if VERBOSE:
- log.msg("%r.step() returning False" % (self,))
- return False
-
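The two-mark scan performed by `step()` above can be sketched without Axiom (the class and method names here are hypothetical): new items are consumed forward from `forwardMark`, while pre-existing items are drained backward from `backwardMark`:

```python
# Minimal sketch of _ReliableListener's two-mark traversal over
# integer store IDs, using plain lists instead of store queries.
class MarkScanner:
    def __init__(self, item_ids, forward_mark, backward_mark):
        self.items = sorted(item_ids)
        self.forward_mark = forward_mark
        self.backward_mark = backward_mark

    def step(self):
        """Process one item; return its id, or None when caught up."""
        forward = [i for i in self.items if i > self.forward_mark]
        if forward:
            self.forward_mark = forward[0]
            return forward[0]
        backward = [i for i in reversed(self.items) if i < self.backward_mark]
        if backward:
            self.backward_mark = backward[0]
            return backward[0]
        return None

# Listener added when items 1-3 already existed; 7 and 8 arrive later.
scanner = MarkScanner([1, 2, 3, 7, 8], forward_mark=3, backward_mark=3)
order = []
while (item := scanner.step()) is not None:
    order.append(item)
print(order)  # [7, 8, 2, 1]
```

New work is always handled before the backlog, matching the ordering of the three loops in `step()`.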
-
-
-class _BatchProcessorMixin:
-
- def step(self, style=iaxiom.LOCAL, skip=()):
- now = extime.Time()
- first = True
-
- for listener in self.store.query(_ReliableListener,
- attributes.AND(_ReliableListener.processor == self,
- _ReliableListener.style == style,
- _ReliableListener.listener.notOneOf(skip)),
- sort=_ReliableListener.lastRun.ascending):
- if not first:
- if VERBOSE:
- log.msg("Found more work to do, returning True from %r.step()" % (self,))
- return True
- listener.lastRun = now
- try:
- if listener.step():
- if VERBOSE:
- log.msg("%r.step() reported more work to do, returning True from %r.step()" % (listener, self))
- return True
- except _NoWorkUnits:
- if VERBOSE:
- log.msg("%r.step() reported no work units" % (listener,))
- else:
- first = False
- if VERBOSE:
- log.msg("No listeners left with work, returning False from %r.step()" % (self,))
- return False
-
-
- def run(self):
- """
- Try to run one unit of work through one listener. If there are more
- listeners or more work, reschedule this item to be run again in
- C{self.busyInterval} milliseconds, otherwise unschedule it.
-
- @rtype: L{extime.Time} or C{None}
-        @return: The next time at which to run this item, used by the scheduler
-        for automatic rescheduling, or None if there is no more work to do.
- """
- now = extime.Time()
- if self.step():
- self.scheduled = now + datetime.timedelta(milliseconds=self.busyInterval)
- else:
- self.scheduled = None
- return self.scheduled
-
-
- def timedEventErrorHandler(self, timedEvent, failureObj):
- failureObj.trap(_ProcessingFailure)
- log.msg("Batch processing failure")
- log.err(failureObj.value.failure)
- failureObj.value.mark()
- return extime.Time() + datetime.timedelta(milliseconds=self.busyInterval)
-
-
- def addReliableListener(self, listener, style=iaxiom.LOCAL):
- """
- Add the given Item to the set which will be notified of Items
- available for processing.
-
- Note: Each Item is processed synchronously. Adding too many
- listeners to a single batch processor will cause the L{step}
- method to block while it sends notification to each listener.
-
- @param listener: An Item instance which provides a
- C{processItem} method.
-
- @return: An Item representing L{listener}'s persistent tracking state.
- """
- existing = self.store.findUnique(_ReliableListener,
- attributes.AND(_ReliableListener.processor == self,
- _ReliableListener.listener == listener),
- default=None)
- if existing is not None:
- return existing
-
- for work in self.store.query(self.workUnitType,
- sort=self.workUnitType.storeID.descending,
- limit=1):
- forwardMark = work.storeID
- backwardMark = work.storeID + 1
- break
- else:
- forwardMark = 0
- backwardMark = 0
-
- if self.scheduled is None:
- self.scheduled = extime.Time()
- iaxiom.IScheduler(self.store).schedule(self, self.scheduled)
-
- return _ReliableListener(store=self.store,
- processor=self,
- listener=listener,
- forwardMark=forwardMark,
- backwardMark=backwardMark,
- style=style)
-
-
- def removeReliableListener(self, listener):
- """
- Remove a previously added listener.
- """
- self.store.query(_ReliableListener,
- attributes.AND(_ReliableListener.processor == self,
- _ReliableListener.listener == listener)).deleteFromStore()
- self.store.query(BatchProcessingError,
- attributes.AND(BatchProcessingError.processor == self,
- BatchProcessingError.listener == listener)).deleteFromStore()
-
-
- def getReliableListeners(self):
- """
- Return an iterable of the listeners which have been added to
- this batch processor.
- """
- for rellist in self.store.query(_ReliableListener, _ReliableListener.processor == self):
- yield rellist.listener
-
-
- def getFailedItems(self):
- """
- Return an iterable of two-tuples of listeners which raised an
- exception from C{processItem} and the item which was passed as
- the argument to that method.
- """
- for failed in self.store.query(BatchProcessingError, BatchProcessingError.processor == self):
- yield (failed.listener, failed.item)
-
-
- def itemAdded(self):
- """
- Called to indicate that a new item of the type monitored by this batch
- processor is being added to the database.
-
- If this processor is not already scheduled to run, this will schedule
- it. It will also start the batch process if it is not yet running and
- there are any registered remote listeners.
- """
- localCount = self.store.query(
- _ReliableListener,
- attributes.AND(_ReliableListener.processor == self,
- _ReliableListener.style == iaxiom.LOCAL),
- limit=1).count()
-
- remoteCount = self.store.query(
- _ReliableListener,
- attributes.AND(_ReliableListener.processor == self,
- _ReliableListener.style == iaxiom.REMOTE),
- limit=1).count()
-
- if localCount and self.scheduled is None:
- self.scheduled = extime.Time()
- iaxiom.IScheduler(self.store).schedule(self, self.scheduled)
- if remoteCount:
- batchService = iaxiom.IBatchService(self.store, None)
- if batchService is not None:
- batchService.start()
-
-
-
-def upgradeProcessor1to2(oldProcessor):
- """
- Batch processors stopped polling at version 2, so they no longer needed the
- idleInterval attribute. They also gained a scheduled attribute which
- tracks their interaction with the scheduler. Since they stopped polling,
- we also set them up as a timed event here to make sure that they don't
- silently disappear, never to be seen again: running them with the scheduler
- gives them a chance to figure out what's up and set up whatever other state
- they need to continue to run.
-
- Since this introduces a new dependency of all batch processors on a powerup
- for the IScheduler, install a Scheduler or a SubScheduler if one is not
- already present.
- """
- newProcessor = oldProcessor.upgradeVersion(
- oldProcessor.typeName, 1, 2,
- busyInterval=oldProcessor.busyInterval)
- newProcessor.scheduled = extime.Time()
-
- s = newProcessor.store
- sch = iaxiom.IScheduler(s, None)
- if sch is None:
- if s.parent is None:
- # Only site stores have no parents.
- sch = Scheduler(store=s)
- else:
- # Substores get subschedulers.
- sch = SubScheduler(store=s)
- installOn(sch, s)
-
- # And set it up to run.
- sch.schedule(newProcessor, newProcessor.scheduled)
- return newProcessor
-
-def processor(forType):
- """
- Create an Axiom Item type which is suitable to use as a batch processor for
- the given Axiom Item type.
-
-    Processors created this way depend on a L{iaxiom.IScheduler} powerup on
-    the store on which they are installed.
-
- @type forType: L{item.MetaItem}
- @param forType: The Axiom Item type for which to create a batch processor
- type.
-
- @rtype: L{item.MetaItem}
-
- @return: An Axiom Item type suitable for use as a batch processor. If such
- a type previously existed, it will be returned. Otherwise, a new type is
- created.
- """
- MILLI = 1000
- if forType not in _processors:
- def __init__(self, *a, **kw):
- item.Item.__init__(self, *a, **kw)
- self.store.powerUp(self, iaxiom.IBatchProcessor)
-
- attrs = {
- '__name__': 'Batch_' + forType.__name__,
-
- '__module__': forType.__module__,
-
- '__init__': __init__,
-
- '__repr__': lambda self: '<Batch of %s #%d>' % (reflect.qual(self.workUnitType), self.storeID),
-
- 'schemaVersion': 2,
-
- 'workUnitType': forType,
-
- 'scheduled': attributes.timestamp(doc="""
- The next time at which this processor is scheduled to run.
- """, default=None),
-
-            # Run up to ten times per second while there is work to do.
-            'busyInterval': attributes.integer(doc="""
-            The interval, in milliseconds, to wait between runs while there
-            is more work to do.
-            """, default=MILLI / 10),
- }
- _processors[forType] = item.MetaItem(
- attrs['__name__'],
- (item.Item, _BatchProcessorMixin),
- attrs)
-
- registerUpgrader(
- upgradeProcessor1to2,
- _processors[forType].typeName,
- 1, 2)
-
- return _processors[forType]
-
-
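The memoized class-factory pattern used by `processor()` can be sketched with plain classes instead of Axiom Items (all names here are hypothetical):

```python
import weakref

# One generated class per input type, cached weakly so unused
# processor classes can be garbage collected, as in _processors above.
_cache = weakref.WeakValueDictionary()

def make_processor(for_type):
    if for_type not in _cache:
        attrs = {
            '__module__': for_type.__module__,
            'workUnitType': for_type,
            '__repr__': lambda self: '<Batch of %s>' % (
                self.workUnitType.__name__,),
        }
        # type() plays the role of item.MetaItem here.
        _cache[for_type] = type('Batch_' + for_type.__name__,
                                (object,), attrs)
    return _cache[for_type]

class Message(object):
    pass

proc_cls = make_processor(Message)
print(proc_cls.__name__)                    # Batch_Message
print(make_processor(Message) is proc_cls)  # True
```

The cache guarantees that repeated calls for the same work-unit type return the identical class, which matters because Axiom identifies item types by name.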
-
-class ProcessUnavailable(Exception):
- """Indicates the process is not available to perform tasks.
-
- This is a transient error. Calling code should handle it by
- arranging to do the work they planned on doing at a later time.
- """
-
-
-
-class Shutdown(juice.Command):
- """
- Abandon, belay, cancel, cease, close, conclude, cut it out, desist,
- determine, discontinue, drop it, end, finish, finish up, give over, go
- amiss, go astray, go wrong, halt, have done with, hold, knock it off, lay
- off, leave off, miscarry, perorate, quit, refrain, relinquish, renounce,
- resolve, scrap, scratch, scrub, stay, stop, terminate, wind up.
- """
- commandName = "Shutdown"
- responseType = juice.QuitBox
-
-
-def _childProcTerminated(self, err):
- self.mode = 'stopped'
- err = ProcessUnavailable(err)
- for d in self.waitingForProcess:
- d.errback(err)
- del self.waitingForProcess
-
-
-class ProcessController(object):
- """
- Stateful class which tracks a Juice connection to a child process.
-
- Communication occurs over stdin and stdout of the child process. The
-    process is launched and restarted as necessary. Failures due to the child
-    process terminating, either unilaterally or by request, are represented
-    by a transient exception class, L{ProcessUnavailable}.
-
- Mode is one of::
-
- - 'stopped' (no process running or starting)
- - 'starting' (process begun but not ready for requests)
- - 'ready' (process ready for requests)
- - 'stopping' (process being torn down)
- - 'waiting_ready' (process beginning but will be shut down
- as soon as it starts up)
-
- Transitions are as follows::
-
- getProcess:
- stopped -> starting:
- launch process
- create/save in waitingForStartup/return Deferred
- starting -> starting:
- create/save/return Deferred
- ready -> ready:
- return saved process
- stopping:
- return failing Deferred indicating transient failure
- waiting_ready:
- return failing Deferred indicating transient failure
-
- stopProcess:
- stopped -> stopped:
- return succeeding Deferred
- starting -> waiting_ready:
- create Deferred, add transient failure errback handler, return
- ready -> stopping:
- call shutdown on process
- return Deferred which fires when shutdown is done
-
- childProcessCreated:
- starting -> ready:
- callback saved Deferreds
- clear saved Deferreds
- waiting_ready:
- errback saved Deferred indicating transient failure
- return _shutdownIndexerProcess()
-
- childProcessTerminated:
- starting -> stopped:
- errback saved Deferreds indicating transient failure
- waiting_ready -> stopped:
- errback saved Deferreds indicating transient failure
- ready -> stopped:
- drop reference to process object
- stopping -> stopped:
- Callback saved shutdown deferred
-
- @ivar process: A reference to the process object. Set in every non-stopped
- mode.
-
- @ivar juice: A reference to the juice protocol. Set in all modes.
-
- @ivar connector: A reference to the process protocol. Set in every
- non-stopped mode.
-
- @ivar onProcessStartup: None or a no-argument callable which will
- be invoked whenever the connection is first established to a newly
- spawned child process.
-
- @ivar onProcessTermination: None or a no-argument callable which
- will be invoked whenever a Juice connection is lost, except in the
- case where process shutdown was explicitly requested via
- stopProcess().
- """
-
- __metaclass__ = modal.ModalType
-
- initialMode = 'stopped'
- modeAttribute = 'mode'
-
- # A reference to the Twisted process object which corresponds to
- # the child process we have spawned. Set to a non-None value in
- # every state except stopped.
- process = None
-
- # A reference to the process protocol object via which we
- # communicate with the process's stdin and stdout. Set to a
- # non-None value in every state except stopped.
- connector = None
-
- def __init__(self, name, juice, tacPath,
- onProcessStartup=None,
- onProcessTermination=None,
- logPath=None,
- pidPath=None):
- self.name = name
- self.juice = juice
- self.tacPath = tacPath
- self.onProcessStartup = onProcessStartup
- self.onProcessTermination = onProcessTermination
- if logPath is None:
- logPath = name + '.log'
- if pidPath is None:
- pidPath = name + '.pid'
- self.logPath = logPath
- self.pidPath = pidPath
-
- def _startProcess(self):
- executable = sys.executable
- env = os.environ
-
- twistdBinaries = procutils.which("twistd2.4") + procutils.which("twistd")
- if not twistdBinaries:
- return defer.fail(RuntimeError("Couldn't find twistd to start subprocess"))
- twistd = twistdBinaries[0]
-
- setsid = procutils.which("setsid")
-
- self.connector = JuiceConnector(self.juice, self)
-
- args = [
- sys.executable,
- twistd,
- '--logfile=%s' % (self.logPath,)]
-
- if not runtime.platform.isWindows():
- args.append('--pidfile=%s' % (self.pidPath,))
-
- args.extend(['-noy',
- self.tacPath])
-
- if setsid:
- args = ['setsid'] + args
- executable = setsid[0]
-
- self.process = process.spawnProcess(
- self.connector, executable, tuple(args), env=env)
-
- class stopped(modal.mode):
- def getProcess(self):
- self.mode = 'starting'
- self.waitingForProcess = []
-
- self._startProcess()
-
- # Mode has changed, this will call some other
- # implementation of getProcess.
- return self.getProcess()
-
- def stopProcess(self):
- return defer.succeed(None)
-
- class starting(modal.mode):
- def getProcess(self):
- d = defer.Deferred()
- self.waitingForProcess.append(d)
- return d
-
- def stopProcess(self):
- def eb(err):
- err.trap(ProcessUnavailable)
-
- d = defer.Deferred().addErrback(eb)
- self.waitingForProcess.append(d)
-
- self.mode = 'waiting_ready'
- return d
-
- def childProcessCreated(self):
- self.mode = 'ready'
-
- if self.onProcessStartup is not None:
- self.onProcessStartup()
-
- for d in self.waitingForProcess:
- d.callback(self.juice)
- del self.waitingForProcess
-
- def childProcessTerminated(self, reason):
- _childProcTerminated(self, reason)
- if self.onProcessTermination is not None:
- self.onProcessTermination()
-
-
- class ready(modal.mode):
- def getProcess(self):
- return defer.succeed(self.juice)
-
- def stopProcess(self):
- self.mode = 'stopping'
- self.onShutdown = defer.Deferred()
- Shutdown().do(self.juice)
- return self.onShutdown
-
- def childProcessTerminated(self, reason):
- self.mode = 'stopped'
- self.process = self.connector = None
- if self.onProcessTermination is not None:
- self.onProcessTermination()
-
-
- class stopping(modal.mode):
- def getProcess(self):
- return defer.fail(ProcessUnavailable("Shutting down"))
-
- def stopProcess(self):
- return self.onShutdown
-
- def childProcessTerminated(self, reason):
- self.mode = 'stopped'
- self.process = self.connector = None
- self.onShutdown.callback(None)
-
-
- class waiting_ready(modal.mode):
- def getProcess(self):
- return defer.fail(ProcessUnavailable("Shutting down"))
-
- def childProcessCreated(self):
- # This will put us into the stopped state - no big deal,
- # we are going into the ready state as soon as it returns.
- _childProcTerminated(self, RuntimeError("Shutting down"))
-
- # Dip into the ready mode for ever so brief an instant so
- # that we can shut ourselves down.
- self.mode = 'ready'
- return self.stopProcess()
-
- def childProcessTerminated(self, reason):
- _childProcTerminated(self, reason)
- if self.onProcessTermination is not None:
- self.onProcessTermination()
-
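ProcessController's per-mode method dispatch relies on epsilon's `modal.ModalType`; a minimal re-implementation of the idea (not epsilon's actual code, names hypothetical) using `__getattr__` might look like:

```python
# The object's current `mode` attribute selects which nested
# namespace handles each call, mimicking modal.ModalType dispatch.
class Modal:
    mode = 'stopped'

    class stopped:
        @staticmethod
        def start(self):
            self.mode = 'ready'
            return 'started'
        @staticmethod
        def stop(self):
            return 'already stopped'

    class ready:
        @staticmethod
        def start(self):
            return 'already running'
        @staticmethod
        def stop(self):
            self.mode = 'stopped'
            return 'stopped'

    def __getattr__(self, name):
        # Look the method up on the nested class named by self.mode.
        mode_cls = type(self).__dict__[self.mode]
        method = getattr(mode_cls, name)
        return lambda *a, **kw: method(self, *a, **kw)

m = Modal()
print(m.start())  # started
print(m.start())  # already running
print(m.stop())   # stopped
```

Each `class <mode>(modal.mode)` block in ProcessController plays the role of one of these nested namespaces, so assigning `self.mode` changes which implementation of `getProcess` and friends is invoked next.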
-
-
-class JuiceConnector(protocol.ProcessProtocol):
-
- def __init__(self, proto, controller):
- self.juice = proto
- self.controller = controller
-
- def connectionMade(self):
- log.msg("Subprocess started.")
- self.juice.makeConnection(self)
- self.controller.childProcessCreated()
-
- # Transport
- disconnecting = False
-
- def write(self, data):
- self.transport.write(data)
-
- def writeSequence(self, data):
- self.transport.writeSequence(data)
-
- def loseConnection(self):
- self.transport.loseConnection()
-
- def getPeer(self):
- return ('omfg what are you talking about',)
-
- def getHost(self):
- return ('seriously it is a process this makes no sense',)
-
- def inConnectionLost(self):
- log.msg("Standard in closed")
- protocol.ProcessProtocol.inConnectionLost(self)
-
- def outConnectionLost(self):
- log.msg("Standard out closed")
- protocol.ProcessProtocol.outConnectionLost(self)
-
- def errConnectionLost(self):
- log.msg("Standard err closed")
- protocol.ProcessProtocol.errConnectionLost(self)
-
- def outReceived(self, data):
- self.juice.dataReceived(data)
-
- def errReceived(self, data):
- log.msg("Received stderr from subprocess: " + repr(data))
-
- def processEnded(self, status):
- log.msg("Process ended")
- self.juice.connectionLost(status)
- self.controller.childProcessTerminated(status)
-
-
-
-class JuiceChild(juice.Juice):
- """
- Protocol class which runs in the child process
-
- This just defines one behavior on top of a regular juice protocol: the
- shutdown command, which drops the connection and stops the reactor.
- """
- shutdown = False
-
- def connectionLost(self, reason):
- juice.Juice.connectionLost(self, reason)
- if self.shutdown:
- reactor.stop()
-
- def command_SHUTDOWN(self):
- log.msg("Shutdown message received, goodbye.")
- self.shutdown = True
- return {}
- command_SHUTDOWN.command = Shutdown
-
-
-
-class SetStore(juice.Command):
- """
- Specify the location of the site store.
- """
- commandName = 'Set-Store'
- arguments = [('storepath', juice.Path())]
-
-
-class SuspendProcessor(juice.Command):
- """
- Prevent a particular reliable listener from receiving any notifications
- until a L{ResumeProcessor} command is sent or the batch process is
- restarted.
- """
- commandName = 'Suspend-Processor'
- arguments = [('storepath', juice.Path()),
- ('storeid', juice.Integer())]
-
-
-
-class ResumeProcessor(juice.Command):
- """
- Cause a particular reliable listener to begin receiving notifications
- again.
- """
- commandName = 'Resume-Processor'
- arguments = [('storepath', juice.Path()),
- ('storeid', juice.Integer())]
-
-
-
-class CallItemMethod(juice.Command):
- """
- Invoke a particular method of a particular item.
- """
- commandName = 'Call-Item-Method'
- arguments = [('storepath', juice.Path()),
- ('storeid', juice.Integer()),
- ('method', juice.String())]
-
-
-class BatchProcessingControllerService(service.Service):
- """
- Controls starting, stopping, and passing messages to the system process in
- charge of remote batch processing.
-
- @ivar batchController: A reference to the L{ProcessController} for
- interacting with the batch process, if one exists. Otherwise C{None}.
- """
- implements(iaxiom.IBatchService)
-
- batchController = None
-
- def __init__(self, store):
- self.store = store
- self.setName("Batch Processing Controller")
-
-
- def startService(self):
- service.Service.startService(self)
- tacPath = util.sibpath(__file__, "batch.tac")
- proto = BatchProcessingProtocol()
- rundir = self.store.dbdir.child("run")
- logdir = rundir.child("logs")
- for d in rundir, logdir:
- try:
- d.createDirectory()
- except OSError:
- pass
- self.batchController = ProcessController(
- "batch", proto, tacPath,
- self._setStore, self._restartProcess,
- logdir.child("batch.log").path,
- rundir.child("batch.pid").path)
-
-
- def _setStore(self):
- return SetStore(storepath=self.store.dbdir).do(self.batchController.juice)
-
-
- def _restartProcess(self):
- reactor.callLater(1.0, self.batchController.getProcess)
-
-
- def stopService(self):
- service.Service.stopService(self)
- d = self.batchController.stopProcess()
- d.addErrback(lambda err: err.trap(error.ProcessDone))
- return d
-
-
- def call(self, itemMethod):
- """
- Invoke the given bound item method in the batch process.
-
- Return a Deferred which fires when the method has been invoked.
- """
- item = itemMethod.im_self
- method = itemMethod.im_func.func_name
- return self.batchController.getProcess().addCallback(
- CallItemMethod(storepath=item.store.dbdir,
- storeid=item.storeID,
- method=method).do)
-
-
- def start(self):
- if self.batchController is not None:
- self.batchController.getProcess()
-
-
- def suspend(self, storepath, storeID):
- return self.batchController.getProcess().addCallback(
- SuspendProcessor(storepath=storepath, storeid=storeID).do)
-
-
- def resume(self, storepath, storeID):
- return self.batchController.getProcess().addCallback(
- ResumeProcessor(storepath=storepath, storeid=storeID).do)
-
-
-
-class _SubStoreBatchChannel(object):
- """
- SubStore adapter for passing messages to the batch processing system
- process.
-
- SubStores are adaptable to L{iaxiom.IBatchService} via this adapter.
- """
- implements(iaxiom.IBatchService)
-
- def __init__(self, substore):
- self.storepath = substore.dbdir
- self.service = iaxiom.IBatchService(substore.parent)
-
-
- def call(self, itemMethod):
- return self.service.call(itemMethod)
-
-
- def start(self):
- self.service.start()
-
-
- def suspend(self, storeID):
- return self.service.suspend(self.storepath, storeID)
-
-
- def resume(self, storeID):
- return self.service.resume(self.storepath, storeID)
-
-
-
-def storeBatchServiceSpecialCase(st, pups):
- """
- Adapt a L{Store} to L{IBatchService}.
-
- If C{st} is a substore, return a simple wrapper that delegates to the site
- store's L{IBatchService} powerup. Return C{None} if C{st} has no
- L{BatchProcessingControllerService}.
- """
- if st.parent is not None:
- try:
- return _SubStoreBatchChannel(st)
- except TypeError:
- return None
- storeService = service.IService(st)
- try:
- return storeService.getServiceNamed("Batch Processing Controller")
- except KeyError:
- return None
-
-
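The site-versus-substore dispatch in `storeBatchServiceSpecialCase` can be sketched with plain classes (all names hypothetical): substores delegate upward to the site store, which returns None when the named service is absent:

```python
# Minimal sketch of the IBatchService lookup: substores forward to
# their parent; site stores consult a named-service registry.
class SiteStore:
    parent = None
    def __init__(self):
        self.services = {}

class SubStore:
    def __init__(self, parent):
        self.parent = parent

def get_batch_service(store):
    if getattr(store, 'parent', None) is not None:
        # Substore: delegate to the site store's service.
        return get_batch_service(store.parent)
    # Site store: look up by name, None if never started.
    return store.services.get("Batch Processing Controller")

site = SiteStore()
site.services["Batch Processing Controller"] = "the-service"
sub = SubStore(site)
print(get_batch_service(sub))          # the-service
print(get_batch_service(SiteStore()))  # None
```

In the real code the delegation step is `_SubStoreBatchChannel`, which also remembers the substore's path so suspend/resume commands can name the right database.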
-
-class BatchProcessingProtocol(JuiceChild):
- siteStore = None
-
- def __init__(self, service=None, issueGreeting=False):
- juice.Juice.__init__(self, issueGreeting)
- self.storepaths = []
- if service is not None:
- service.cooperator = cooperator.Cooperator()
- self.service = service
-
-
- def connectionLost(self, reason):
-        # In the child process we are a server, and we don't want to keep
-        # running after we can't talk to the client anymore.
- if self.isServer:
- reactor.stop()
-
-
- def command_SET_STORE(self, storepath):
- from axiom import store
-
- assert self.siteStore is None
-
- self.siteStore = store.Store(storepath, debug=False)
- self.subStores = {}
- self.pollCall = task.LoopingCall(self._pollSubStores)
- self.pollCall.start(10.0)
-
- return {}
-
- command_SET_STORE.command = SetStore
-
-
- def command_SUSPEND_PROCESSOR(self, storepath, storeid):
- return self.subStores[storepath.path].suspend(storeid).addCallback(lambda ign: {})
- command_SUSPEND_PROCESSOR.command = SuspendProcessor
-
-
- def command_RESUME_PROCESSOR(self, storepath, storeid):
- return self.subStores[storepath.path].resume(storeid).addCallback(lambda ign: {})
- command_RESUME_PROCESSOR.command = ResumeProcessor
-
-
- def command_CALL_ITEM_METHOD(self, storepath, storeid, method):
- return self.subStores[storepath.path].call(storeid, method).addCallback(lambda ign: {})
- command_CALL_ITEM_METHOD.command = CallItemMethod
-
-
- def _pollSubStores(self):
- from axiom import store, substore
-
- # Any service which has encountered an error will have logged it and
- # then stopped. Prune those here, so that they are noticed as missing
- # below and re-added.
- for path, svc in self.subStores.items():
- if not svc.running:
- del self.subStores[path]
-
- try:
- paths = set([p.path for p in self.siteStore.query(substore.SubStore).getColumn("storepath")])
- except eaxiom.SQLError, e:
- # Generally, database is locked.
- log.msg("SubStore query failed with SQLError: %r" % (e,))
- except:
-            # Unexpected failure.
- log.msg("SubStore query failed with bad error:")
- log.err()
- else:
- for removed in set(self.subStores) - paths:
- self.subStores[removed].disownServiceParent()
- del self.subStores[removed]
- if VERBOSE:
- log.msg("Removed SubStore " + removed)
- for added in paths - set(self.subStores):
- try:
- s = store.Store(added, debug=False)
- except eaxiom.SQLError, e:
- # Generally, database is locked.
- log.msg("Opening sub-Store failed with SQLError: %r" % (e,))
- except:
- log.msg("Opening sub-Store failed with bad error:")
- log.err()
- else:
- self.subStores[added] = BatchProcessingService(s, style=iaxiom.REMOTE)
- self.subStores[added].setServiceParent(self.service)
- if VERBOSE:
- log.msg("Added SubStore " + added)
-
-
-
-class BatchProcessingService(service.Service):
- """
- Steps over the L{iaxiom.IBatchProcessor} powerups for a single L{axiom.store.Store}.
- """
- def __init__(self, store, style=iaxiom.LOCAL):
- self.store = store
- self.style = style
- self.suspended = []
-
-
- def suspend(self, storeID):
- item = self.store.getItemByID(storeID)
- self.suspended.append(item)
- return item.suspend()
-
-
- def resume(self, storeID):
- item = self.store.getItemByID(storeID)
- self.suspended.remove(item)
- return item.resume()
-
-
- def call(self, storeID, methodName):
- return defer.maybeDeferred(getattr(self.store.getItemByID(storeID), methodName))
-
-
- def items(self):
- return self.store.powerupsFor(iaxiom.IBatchProcessor)
-
-
- def processWhileRunning(self):
- """
- Run tasks until stopService is called.
- """
- work = self.step()
- for result, more in work:
- yield result
- if not self.running:
- break
- if more:
- delay = 0.1
- else:
- delay = 10.0
- yield task.deferLater(reactor, delay, lambda: None)
-
-
- def step(self):
- while True:
- items = list(self.items())
-
- if VERBOSE:
- log.msg("Found %d processors for %s" % (len(items), self.store))
-
- ran = False
- more = False
- while items:
- ran = True
- item = items.pop()
- if VERBOSE:
- log.msg("Stepping processor %r (suspended is %r)" % (item, self.suspended))
- try:
- itemHasMore = item.store.transact(item.step, style=self.style, skip=self.suspended)
- except _ProcessingFailure, e:
- log.msg("%r failed while processing %r:" % (e.reliableListener, e.workUnit))
- log.err(e.failure)
- e.mark()
-
-                    # If user code in or below item.step creates a Failure on
-                    # any future iteration of this loop, it will get a
-                    # reference to this exception instance, since it's in
-                    # locals and Failures extract and save locals. Clear it
-                    # so that doesn't happen. See also the definition of
-                    # _ProcessingFailure.__init__.
- e = None
- else:
- if itemHasMore:
- more = True
- yield None, bool(more or items)
- if not ran:
- yield None, more
-
-
- def startService(self):
- service.Service.startService(self)
- self.parent.cooperator.coiterate(self.processWhileRunning())
-
-
- def stopService(self):
- service.Service.stopService(self)
- self.store.close()
-
-
-
-class BatchManholePowerup(item.Item):
- """
- Previously, an L{IConchUser} powerup. This class is only still defined for
- schema compatibility. Any instances of it will be deleted by an upgrader.
- See #1001.
- """
- schemaVersion = 2
- unused = attributes.integer(
- doc="Satisfy Axiom requirement for at least one attribute")
-
-registerDeletionUpgrader(BatchManholePowerup, 1, 2)
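The step/processWhileRunning pair above implements a cooperative polling loop: each iteration reports whether any processor still has work, and the caller sleeps 0.1 seconds when busy or 10 seconds when idle. The scheduling pattern can be sketched in plain Python, with no Axiom or Twisted (all names below are hypothetical):

```python
# Minimal sketch of BatchProcessingService's busy/idle delay pattern
# (hypothetical stand-in; no Axiom or Twisted dependency).

def process_while_running(processors, is_running):
    """Yield (result, delay) pairs: a short delay while any processor
    reports more work, a long delay once everything is idle."""
    while is_running():
        more = False
        for proc in list(processors):
            if proc.step():   # step() returns True if work remains
                more = True
        yield None, 0.1 if more else 10.0

class CountdownProcessor:
    """Toy processor that has work for a fixed number of steps."""
    def __init__(self, n):
        self.n = n
    def step(self):
        if self.n:
            self.n -= 1
        return self.n > 0
```

Driving such a generator from a cooperative scheduler (as BatchProcessingService does via coiterate) keeps database polling cheap while idle without delaying new work for long.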
=== removed file 'Axiom/axiom/batch.tac'
--- Axiom/axiom/batch.tac 2006-04-27 14:22:50 +0000
+++ Axiom/axiom/batch.tac 1970-01-01 00:00:00 +0000
@@ -1,20 +0,0 @@
-# -*- test-case-name: axiom.test.test_batch -*-
-
-"""
-Application configuration for the batch sub-process.
-
-This process reads commands and sends responses via stdio using the JUICE
-protocol. When it's not doing that, it queries various databases for work to
-do, and then does it. The databases which it queries can be controlled by
-sending it messages.
-"""
-
-from twisted.application import service
-from twisted.internet import stdio
-
-from axiom import batch
-
-application = service.Application("Batch Processing App")
-svc = service.MultiService()
-svc.setServiceParent(application)
-stdio.StandardIO(batch.BatchProcessingProtocol(svc, True))
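The .tac file above wires up a protocol that reads commands and writes responses over stdio. Setting aside the real JUICE framing, the command-dispatch idea reduces to roughly this (the command names and handlers here are made up for illustration):

```python
# Toy illustration of a stdio command loop like the batch sub-process;
# the real process speaks the JUICE protocol, not this ad-hoc format.

def dispatch(line, handlers):
    """Parse 'COMMAND arg1 arg2 ...' and invoke the matching handler."""
    parts = line.split()
    handler = handlers.get(parts[0])
    if handler is None:
        return "ERROR unknown-command"
    return handler(*parts[1:])

# Hypothetical handlers mirroring SET_STORE / SUSPEND_PROCESSOR above.
handlers = {
    "SET_STORE": lambda path: "OK " + path,
    "SUSPEND_PROCESSOR": lambda path, sid: "OK suspended " + sid,
}
```

The real protocol additionally carries responses asynchronously and keeps per-store state, but the command-to-handler mapping is the same shape.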
=== removed directory 'Axiom/axiom/benchmarks'
=== removed file 'Axiom/axiom/benchmarks/benchmark_batchitemcreation.py'
--- Axiom/axiom/benchmarks/benchmark_batchitemcreation.py 2006-10-11 21:52:50 +0000
+++ Axiom/axiom/benchmarks/benchmark_batchitemcreation.py 1970-01-01 00:00:00 +0000
@@ -1,25 +0,0 @@
-
-"""
-Benchmark batch creation of a large number of simple Items in a transaction.
-"""
-
-from epsilon.scripts import benchmark
-
-from axiom.store import Store
-from axiom.item import Item
-from axiom.attributes import integer, text
-
-class AB(Item):
- a = integer()
- b = text()
-
-def main():
- s = Store("TEMPORARY.axiom")
- benchmark.start()
- rows = [(x, unicode(x)) for x in xrange(10000)]
- s.transact(lambda: s.batchInsert(AB, (AB.a, AB.b), rows))
- benchmark.stop()
-
-
-if __name__ == '__main__':
- main()
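The benchmark relies on Store.batchInsert issuing all 10000 inserts inside a single transaction. The equivalent effect in the stdlib sqlite3 module, which Axiom sits on top of, is executemany under one transaction; a rough stand-alone sketch:

```python
# Stdlib-only analogue of the batchInsert benchmark: many rows, one
# transaction. Table and column names are made up for illustration.
import sqlite3
import time

def timed_batch_insert(rows):
    """Insert rows in a single transaction; return (count, seconds)."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE ab (a INTEGER, b TEXT)")
    start = time.perf_counter()
    with con:  # one transaction for the whole batch
        con.executemany("INSERT INTO ab VALUES (?, ?)", rows)
    elapsed = time.perf_counter() - start
    count = con.execute("SELECT COUNT(*) FROM ab").fetchone()[0]
    con.close()
    return count, elapsed
```

Batching inserts this way avoids paying the per-transaction fsync cost once per row, which is the main effect the benchmark measures.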
=== removed file 'Axiom/axiom/benchmarks/benchmark_batchitemdeletion.py'
--- Axiom/axiom/benchmarks/benchmark_batchitemdeletion.py 2006-10-11 21:52:50 +0000
+++ Axiom/axiom/benchmarks/benchmark_batchitemdeletion.py 1970-01-01 00:00:00 +0000
@@ -1,27 +0,0 @@
-
-"""
-Benchmark batch deletion of a large number of simple Items in a transaction.
-"""
-
-from epsilon.scripts import benchmark
-
-from axiom.store import Store
-from axiom.item import Item
-from axiom.attributes import integer, text
-
-class AB(Item):
- a = integer()
- b = text()
-
-def main():
- s = Store("TEMPORARY.axiom")
- rows = [(x, unicode(x)) for x in xrange(10000)]
- s.transact(lambda: s.batchInsert(AB, (AB.a, AB.b), rows))
-
- benchmark.start()
- s.transact(s.query(AB).deleteFromStore)
- benchmark.stop()
-
-
-if __name__ == '__main__':
- main()
=== removed file 'Axiom/axiom/benchmarks/benchmark_itemcreation.py'
--- Axiom/axiom/benchmarks/benchmark_itemcreation.py 2006-06-01 15:53:37 +0000
+++ Axiom/axiom/benchmarks/benchmark_itemcreation.py 1970-01-01 00:00:00 +0000
@@ -1,28 +0,0 @@
-
-"""
-Benchmark creation of a large number of simple Items in a transaction.
-"""
-
-from epsilon.scripts import benchmark
-
-from axiom.store import Store
-from axiom.item import Item
-from axiom.attributes import integer, text
-
-class AB(Item):
- a = integer()
- b = text()
-
-def main():
- s = Store("TEMPORARY.axiom")
- def txn():
- for x in range(10000):
- AB(a=x, b=unicode(x), store=s)
-
- benchmark.start()
- s.transact(txn)
- benchmark.stop()
-
-
-if __name__ == '__main__':
- main()
=== removed file 'Axiom/axiom/benchmarks/benchmark_itemdeletion.py'
--- Axiom/axiom/benchmarks/benchmark_itemdeletion.py 2006-10-11 21:52:50 +0000
+++ Axiom/axiom/benchmarks/benchmark_itemdeletion.py 1970-01-01 00:00:00 +0000
@@ -1,29 +0,0 @@
-
-"""
-Benchmark deletion of a large number of simple Items in a transaction.
-"""
-
-from epsilon.scripts import benchmark
-
-from axiom.store import Store
-from axiom.item import Item
-from axiom.attributes import integer, text
-
-class AB(Item):
- a = integer()
- b = text()
-
-def main():
- s = Store("TEMPORARY.axiom")
- rows = [(x, unicode(x)) for x in xrange(10000)]
- s.transact(lambda: s.batchInsert(AB, (AB.a, AB.b), rows))
- def deleteStuff():
- for it in s.query(AB):
- it.deleteFromStore()
- benchmark.start()
- s.transact(deleteStuff)
- benchmark.stop()
-
-
-if __name__ == '__main__':
- main()
=== removed file 'Axiom/axiom/benchmarks/benchmark_tagnames.py'
--- Axiom/axiom/benchmarks/benchmark_tagnames.py 2006-06-01 15:53:37 +0000
+++ Axiom/axiom/benchmarks/benchmark_tagnames.py 1970-01-01 00:00:00 +0000
@@ -1,43 +0,0 @@
-
-"""
-Benchmark the tagNames method of L{axiom.tags.Catalog}
-"""
-
-import time, sys
-
-from epsilon.scripts import benchmark
-
-from axiom import store, item, attributes, tags
-
-N_TAGS = 20
-N_COPIES = 5000
-N_LOOPS = 1000
-
-class TaggedObject(item.Item):
- name = attributes.text()
-
-
-
-def main():
- s = store.Store("tags.axiom")
- c = tags.Catalog(store=s)
- o = TaggedObject(store=s)
-
- def tagObjects(tag, copies):
- for x in xrange(copies):
- c.tag(o, tag)
- for i in xrange(N_TAGS):
- s.transact(tagObjects, unicode(i), N_COPIES)
-
- def getTags():
- for i in xrange(N_LOOPS):
- list(c.tagNames())
-
- benchmark.start()
- s.transact(getTags)
- benchmark.stop()
-
-
-
-if __name__ == '__main__':
- main()
=== removed file 'Axiom/axiom/benchmarks/benchmark_tagsof.py'
--- Axiom/axiom/benchmarks/benchmark_tagsof.py 2006-06-01 15:53:37 +0000
+++ Axiom/axiom/benchmarks/benchmark_tagsof.py 1970-01-01 00:00:00 +0000
@@ -1,48 +0,0 @@
-
-"""
-Benchmark the tagsOf method of L{axiom.tags.Catalog}
-"""
-
-import time, sys
-
-from epsilon.scripts import benchmark
-
-from axiom import store, item, attributes, tags
-
-N = 30
-
-class TaggedObject(item.Item):
- name = attributes.text()
-
-
-
-def main():
- s = store.Store("tags.axiom")
- c = tags.Catalog(store=s)
-
- objects = []
- def createObjects():
- for x in xrange(N):
- objects.append(TaggedObject(store=s))
- s.transact(createObjects)
-
- def tagObjects():
- for o in objects:
- for x in xrange(N):
- c.tag(o, unicode(x))
- s.transact(tagObjects)
-
- def getTags():
- for i in xrange(N):
- for o in objects:
- for t in c.tagsOf(o):
- pass
-
- benchmark.start()
- s.transact(getTags)
- benchmark.stop()
-
-
-
-if __name__ == '__main__':
- main()
=== removed file 'Axiom/axiom/benchmarks/testbase.py'
--- Axiom/axiom/benchmarks/testbase.py 2010-04-03 12:38:34 +0000
+++ Axiom/axiom/benchmarks/testbase.py 1970-01-01 00:00:00 +0000
@@ -1,5 +0,0 @@
-
-from axiom._pysqlite2 import Connection
-
-con = Connection.fromDatabaseName("test.sqlite")
-cur = con.cursor()
=== removed file 'Axiom/axiom/benchmarks/testindex.py'
--- Axiom/axiom/benchmarks/testindex.py 2005-10-28 22:06:23 +0000
+++ Axiom/axiom/benchmarks/testindex.py 1970-01-01 00:00:00 +0000
@@ -1,5 +0,0 @@
-
-from testbase import cur
-
-cur.execute('create index foo_bar_idx on foo(bar)')
-cur.commit()
=== removed file 'Axiom/axiom/benchmarks/testinit.py'
--- Axiom/axiom/benchmarks/testinit.py 2005-07-28 22:09:16 +0000
+++ Axiom/axiom/benchmarks/testinit.py 1970-01-01 00:00:00 +0000
@@ -1,10 +0,0 @@
-
-from testbase import con, cur
-
-cur.execute("create table foo (bar int, baz varchar)")
-
-for x in range(500):
- cur.execute("insert into foo values (?, ?)",
- (x, "string-value-of-"+str(x)))
-
-con.commit()
=== removed file 'Axiom/axiom/benchmarks/testreader.py'
--- Axiom/axiom/benchmarks/testreader.py 2005-10-28 22:06:23 +0000
+++ Axiom/axiom/benchmarks/testreader.py 1970-01-01 00:00:00 +0000
@@ -1,10 +0,0 @@
-
-import itertools
-import time
-
-from testbase import cur
-
-for num in itertools.count():
- cur.execute("select * from foo")
- foovals = cur.fetchall()
- print num, 'I fetched', len(foovals), 'values.', time.ctime()
=== removed file 'Axiom/axiom/benchmarks/testwriter.py'
--- Axiom/axiom/benchmarks/testwriter.py 2005-07-28 22:09:16 +0000
+++ Axiom/axiom/benchmarks/testwriter.py 1970-01-01 00:00:00 +0000
@@ -1,13 +0,0 @@
-
-import time
-import itertools
-
-from testbase import con, cur
-BATCH = 500
-for num in itertools.count():
- for x in range(BATCH):
- n = (num * BATCH) + x
- cur.execute("insert into foo values (?, ?)",
- (n, "string-value-of-"+str(n)))
- con.commit()
- print num, 'write pass complete', time.ctime()
=== removed file 'Axiom/axiom/dependency.py'
--- Axiom/axiom/dependency.py 2008-08-13 02:55:58 +0000
+++ Axiom/axiom/dependency.py 1970-01-01 00:00:00 +0000
@@ -1,289 +0,0 @@
-# Copyright 2008 Divmod, Inc. See LICENSE file for details.
-# -*- test-case-name: axiom.test.test_dependency -*-
-"""
-A dependency management system for items.
-"""
-
-import sys, itertools
-
-from zope.interface.advice import addClassAdvisor
-
-from epsilon.structlike import record
-
-from axiom.item import Item
-from axiom.attributes import reference, boolean, AND
-from axiom.errors import ItemNotFound, DependencyError, UnsatisfiedRequirement
-
-#There is probably a cleaner way to do this.
-_globalDependencyMap = {}
-
-def dependentsOf(cls):
- deps = _globalDependencyMap.get(cls, None)
- if deps is None:
- return []
- else:
- return [d[0] for d in deps]
-
-## Class-advice mechanism modeled on zope.interface's.
-
-def dependsOn(itemType, itemCustomizer=None, doc='',
- indexed=True, whenDeleted=reference.NULLIFY):
- """
- This function behaves like L{axiom.attributes.reference} but with
- an extra behaviour: when this item is installed (via
- L{axiom.dependency.installOn} on a target item, the
- type named here will be instantiated and installed on the target
- as well.
-
- For example::
-
- class Foo(Item):
- counter = integer()
- thingIDependOn = dependsOn(Baz, lambda baz: baz.setup())
-
- @param itemType: The Item class to instantiate and install.
- @param itemCustomizer: A callable that accepts the item installed
- as a dependency as its first argument. It will be called only if
- an item is created to satisfy this dependency.
-
- @return: An L{axiom.attributes.reference} instance.
- """
-
- frame = sys._getframe(1)
- locals = frame.f_locals
-
- # Try to make sure we were called from a class def.
- if (locals is frame.f_globals) or ('__module__' not in locals):
- raise TypeError("dependsOn can be used only from a class definition.")
- ref = reference(reftype=itemType, doc=doc, indexed=indexed, allowNone=True,
- whenDeleted=whenDeleted)
- if "__dependsOn_advice_data__" not in locals:
- addClassAdvisor(_dependsOn_advice)
- locals.setdefault('__dependsOn_advice_data__', []).append(
- (itemType, itemCustomizer, ref))
- return ref
-
-def _dependsOn_advice(cls):
- if cls in _globalDependencyMap:
- print "Double advising of %s. dependency map from first time: %s" % (
- cls, _globalDependencyMap[cls])
- #bail if we end up here twice, somehow
- return cls
- for itemType, itemCustomizer, ref in cls.__dict__[
- '__dependsOn_advice_data__']:
- classDependsOn(cls, itemType, itemCustomizer, ref)
- del cls.__dependsOn_advice_data__
- return cls
-
-def classDependsOn(cls, itemType, itemCustomizer, ref):
- _globalDependencyMap.setdefault(cls, []).append(
- (itemType, itemCustomizer, ref))
-
-class _DependencyConnector(Item):
- """
- I am a connector between installed items and their targets.
- """
- installee = reference(doc="The item installed.")
- target = reference(doc="The item installed upon.")
-    explicitlyInstalled = boolean(doc="Whether this item was installed "
-                                  "explicitly (and thus whether or not it "
-                                  "should be automatically uninstalled when "
-                                  "nothing depends on it)")
-
-
-def installOn(self, target):
- """
- Install this object on the target along with any powerup
- interfaces it declares. Also track that the object now depends on
- the target, and the object was explicitly installed (and therefore
- should not be uninstalled by subsequent uninstallation operations
- unless it is explicitly removed).
- """
- _installOn(self, target, True)
-
-
-def _installOn(self, target, __explicitlyInstalled=False):
- depBlob = _globalDependencyMap.get(self.__class__, [])
- dependencies, itemCustomizers, refs = (map(list, zip(*depBlob))
- or ([], [], []))
- #See if any of our dependencies have been installed already
- for dc in self.store.query(_DependencyConnector,
- _DependencyConnector.target == target):
- if dc.installee.__class__ in dependencies:
- i = dependencies.index(dc.installee.__class__)
- refs[i].__set__(self, dc.installee)
- del dependencies[i], itemCustomizers[i], refs[i]
- if (dc.installee.__class__ == self.__class__
- and self.__class__ in set(
- itertools.chain([blob[0][0] for blob in
- _globalDependencyMap.values()]))):
- #Somebody got here before we did... let's punt
- raise DependencyError("An instance of %r is already "
- "installed on %r." % (self.__class__,
- target))
- #The rest we'll install
- for i, cls in enumerate(dependencies):
- it = cls(store=self.store)
- if itemCustomizers[i] is not None:
- itemCustomizers[i](it)
- _installOn(it, target, False)
- refs[i].__set__(self, it)
- #And now the connector for our own dependency.
-
- dc = self.store.findUnique(
- _DependencyConnector,
- AND(_DependencyConnector.target==target,
- _DependencyConnector.installee==self,
- _DependencyConnector.explicitlyInstalled==__explicitlyInstalled),
- None)
-    assert dc is None, "Dependency connector already exists."
- _DependencyConnector(store=self.store, target=target,
- installee=self,
- explicitlyInstalled=__explicitlyInstalled)
-
- target.powerUp(self)
-
- callback = getattr(self, "installed", None)
- if callback is not None:
- callback()
-
-def uninstallFrom(self, target):
- """
-    Remove this object from the target, along with any dependencies
-    that it automatically installed which were not explicitly
-    "pinned" by calling "install". Raise an exception if anything
-    still depends on this object.
- """
-
- #did this class powerup on any interfaces? powerdown if so.
- target.powerDown(self)
-
-
- for dc in self.store.query(_DependencyConnector,
- _DependencyConnector.target==target):
- if dc.installee is self:
- dc.deleteFromStore()
-
- for item in installedUniqueRequirements(self, target):
- uninstallFrom(item, target)
-
- callback = getattr(self, "uninstalled", None)
- if callback is not None:
- callback()
-
-def installedOn(self):
- """
- If this item is installed on another item, return the install
- target. Otherwise return None.
- """
- try:
- return self.store.findUnique(_DependencyConnector,
- _DependencyConnector.installee == self
- ).target
- except ItemNotFound:
- return None
-
-
-def installedDependents(self, target):
- """
- Return an iterable of things installed on the target that
- require this item.
- """
- for dc in self.store.query(_DependencyConnector,
- _DependencyConnector.target == target):
- depends = dependentsOf(dc.installee.__class__)
- if self.__class__ in depends:
- yield dc.installee
-
-def installedUniqueRequirements(self, target):
- """
- Return an iterable of things installed on the target that this item
- requires and are not required by anything else.
- """
-
- myDepends = dependentsOf(self.__class__)
- #XXX optimize?
- for dc in self.store.query(_DependencyConnector,
- _DependencyConnector.target==target):
- if dc.installee is self:
- #we're checking all the others not ourself
- continue
- depends = dependentsOf(dc.installee.__class__)
- if self.__class__ in depends:
- raise DependencyError(
- "%r cannot be uninstalled from %r, "
- "%r still depends on it" % (self, target, dc.installee))
-
- for cls in myDepends[:]:
- #If one of my dependencies is required by somebody
- #else, leave it alone
- if cls in depends:
- myDepends.remove(cls)
-
- for dc in self.store.query(_DependencyConnector,
- _DependencyConnector.target==target):
- if (dc.installee.__class__ in myDepends
- and not dc.explicitlyInstalled):
- yield dc.installee
-
-def installedRequirements(self, target):
- """
- Return an iterable of things installed on the target that this
- item requires.
- """
- myDepends = dependentsOf(self.__class__)
- for dc in self.store.query(_DependencyConnector,
- _DependencyConnector.target == target):
- if dc.installee.__class__ in myDepends:
- yield dc.installee
-
-
-
-def onlyInstallPowerups(self, target):
- """
- Deprecated - L{Item.powerUp} now has this functionality.
- """
- target.powerUp(self)
-
-
-
-class requiresFromSite(
- record('powerupInterface defaultFactory siteDefaultFactory',
- defaultFactory=None,
- siteDefaultFactory=None)):
- """
- A read-only descriptor that will return the site store's powerup for a
- given item.
-
- @ivar powerupInterface: an L{Interface} describing the powerup that the
- site store should be adapted to.
-
- @ivar defaultFactory: a 1-argument callable that takes the site store and
- returns a value for this descriptor. This is invoked in cases where the
- site store does not provide a default factory of its own, and this
- descriptor is retrieved from an item in a store with a parent.
-
- @ivar siteDefaultFactory: a 1-argument callable that takes the site store
- and returns a value for this descriptor. This is invoked in cases where
- this descriptor is retrieved from an item in a store without a parent.
- """
-
- def _invokeFactory(self, defaultFactory, siteStore):
- if defaultFactory is None:
- raise UnsatisfiedRequirement()
- return defaultFactory(siteStore)
-
-
- def __get__(self, oself, type=None):
- """
- Retrieve the value of this dependency from the site store.
- """
- siteStore = oself.store.parent
- if siteStore is not None:
- pi = self.powerupInterface(siteStore, None)
- if pi is None:
- pi = self._invokeFactory(self.defaultFactory, siteStore)
- else:
- pi = self._invokeFactory(self.siteDefaultFactory, oself.store)
- return pi
-
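installOn in the dependency.py just removed consults a per-class dependency map, reuses dependencies already installed on the target, creates the missing ones, and refuses to install the same class twice. A minimal dict-based sketch of that reuse-or-create logic (hypothetical types; no Axiom involved):

```python
# Minimal sketch of _installOn's reuse-or-create behavior from the
# removed dependency.py. All names here are hypothetical.

DEPENDENCY_MAP = {}   # cls -> list of dependency classes

def depends_on(cls, dep_cls):
    """Record that instances of cls require an instance of dep_cls."""
    DEPENDENCY_MAP.setdefault(cls, []).append(dep_cls)

def install_on(obj, target, installed):
    """Install obj on target, reusing any dependency already present.

    `installed` maps target -> {cls: instance}, playing the role of
    the _DependencyConnector query in the real code."""
    here = installed.setdefault(target, {})
    for dep_cls in DEPENDENCY_MAP.get(type(obj), []):
        if dep_cls not in here:        # create only the missing deps
            install_on(dep_cls(), target, installed)
    if type(obj) in here:
        raise ValueError("already installed")
    here[type(obj)] = obj
    return obj
```

The real implementation additionally sets the dependsOn reference attributes, records explicit versus automatic installation, and powers the item up on the target, but the bookkeeping core is this reuse-or-create walk.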
=== removed file 'Axiom/axiom/errors.py'
--- Axiom/axiom/errors.py 2009-01-02 14:21:43 +0000
+++ Axiom/axiom/errors.py 1970-01-01 00:00:00 +0000
@@ -1,193 +0,0 @@
-# -*- test-case-name: axiom.test -*-
-
-from twisted.cred.error import UnauthorizedLogin
-
-
-class TimeoutError(Exception):
- """
- A low-level SQL operation timed out.
-
- @ivar statement: The SQL statement which timed out.
- @ivar timeout: The timeout, in seconds, which was exceeded.
- @ivar underlying: The backend exception which signaled this, or None.
- """
- def __init__(self, statement, timeout, underlying):
- Exception.__init__(self, statement, timeout, underlying)
- self.statement = statement
- self.timeout = timeout
- self.underlying = underlying
-
-
-
-class BadCredentials(UnauthorizedLogin):
- pass
-
-
-
-class NoSuchUser(UnauthorizedLogin):
- pass
-
-
-
-class MissingDomainPart(NoSuchUser):
- """
- Raised when a login is attempted with a username which consists of only
- a local part. For example, "testuser" instead of "testuser@xxxxxxxxxxx".
- """
-
-
-class DuplicateUser(Exception):
- pass
-
-
-
-class CannotOpenStore(RuntimeError):
- """
- There is a problem such that the store cannot be opened.
- """
-
-
-
-class NoUpgradePathAvailable(CannotOpenStore):
- """
- No upgrade path is available, so the store cannot be opened.
- """
-
-
-
-class NoCrossStoreReferences(AttributeError):
- """
- References are not allowed between items within different Stores.
- """
-
-
-
-class SQLError(RuntimeError):
- """
- Axiom internally generated some bad SQL.
- """
- def __init__(self, sql, args, underlying):
- RuntimeError.__init__(self, sql, args, underlying)
- self.sql, self.args, self.underlying = self.args
-
- def __str__(self):
- return "<SQLError: %r(%r) caused %s: %s>" % (
- self.sql, self.args,
- self.underlying.__class__, self.underlying)
-
-
-class TableAlreadyExists(SQLError):
- """
-    Axiom internally created a table at the same time as another process did.
- (User code should not need to catch this exception.)
- """
-
-
-
-class UnknownItemType(Exception):
- """
- Can't load an item: it's of a type that I don't see anywhere in Python.
- """
-
-
-
-class SQLWarning(Warning):
- """
- Axiom internally generated some CREATE TABLE SQL that ... probably wasn't bad
- """
-
-
-
-class TableCreationConcurrencyError(RuntimeError):
- """
-    Whoa, this is really bad. If you can trigger this, please tell us how.
- """
-
-
-
-class DuplicateUniqueItem(KeyError):
- """
- Found 2 or more of an item which is supposed to be unique.
- """
-
-
-
-class ItemNotFound(KeyError):
- """
- Did not find even 1 of an item which was supposed to exist.
- """
-
-
-
-class ItemClassesOnly(TypeError):
- """
- An object was passed to a method that wasn't a subclass of Item.
- """
-
-
-class ChangeRejected(Exception):
- """
- Raised when an attempt is made to change the database at a time when
- database changes are disallowed for reasons of consistency.
-
- This is raised when an application-level callback (for example, committed)
- attempts to change database state.
- """
-
-class DependencyError(Exception):
- """
- Raised when an item can't be installed or uninstalled.
- """
-
-class DeletionDisallowed(ValueError):
- """
- Raised when an attempt is made to delete an item that is referred to by
- reference attributes with whenDeleted == DISALLOW.
- """
-
-class DataIntegrityError(RuntimeError):
- """
- Data integrity seems to have been lost.
- """
-
-
-
-class BrokenReference(DataIntegrityError):
- """
- A reference to a nonexistent item was detected when this should be
- impossible.
- """
-
-
-
-class UpgraderRecursion(RuntimeError):
- """
- Upgraders are not allowed to recurse.
- """
-
-
-
-class ItemUpgradeError(RuntimeError):
- """
- Attempting to upgrade an Item resulted in an error.
-
- @ivar originalFailure: The failure that caused the item upgrade to fail
- @ivar storeID: Store ID of the item that failed to upgrade
- @ivar oldType: The type of the item being upgraded
- @ivar newType: The type the item should've been upgraded to
- """
- def __init__(self, originalFailure, storeID, oldType, newType):
- RuntimeError.__init__(self, originalFailure, storeID, oldType, newType)
- self.originalFailure = originalFailure
- self.storeID = storeID
- self.oldType = oldType
- self.newType = newType
-
-
-
-class UnsatisfiedRequirement(AttributeError):
- """
- A requirement described by a L{axiom.dependency.requiresFromSite} was not
- satisfied by the database, and could not be satisfied automatically at
- runtime by a default factory.
- """
=== removed directory 'Axiom/axiom/examples'
=== removed file 'Axiom/axiom/examples/bucket.py'
--- Axiom/axiom/examples/bucket.py 2005-07-28 22:09:16 +0000
+++ Axiom/axiom/examples/bucket.py 1970-01-01 00:00:00 +0000
@@ -1,52 +0,0 @@
-from axiom import item, attributes
-
-class Bucket(item.Item):
- typeName = 'bucket'
- schemaVersion = 1
-
- name = attributes.text()
-
- def getstuff(self):
- for food in self.store.query(FoodItem,
- FoodItem.bucket == self,
- sort=FoodItem.deliciousness.descending):
- food.extra.what()
-
-
-class FoodItem(item.Item):
- typeName = 'food'
- schemaVersion = 1
-
- bucket = attributes.reference()
- extra = attributes.reference()
- deliciousness = attributes.integer(indexed=True)
-
-class Chicken(item.Item):
- typeName = 'chicken'
- schemaVersion = 1
-
- epistemologicalBasisForCrossingTheRoad = attributes.text()
- def what(self):
- print 'chicken!'
-
-class Biscuit(item.Item):
- typeName = 'biscuit'
- schemaVersion = 1
-
- fluffiness = attributes.integer()
- def what(self):
- print 'biscuits!'
-
-
-from axiom.store import Store
-
-s = Store()
-
-u = Bucket(name=u'whatever', store=s)
-c = Chicken(epistemologicalBasisForCrossingTheRoad=u'extropian', store=s)
-b = Biscuit(fluffiness=100, store=s)
-
-FoodItem(store=s, deliciousness=3, extra=c, bucket=u)
-FoodItem(store=s, deliciousness=4, extra=b, bucket=u)
-
-u.getstuff()
=== removed file 'Axiom/axiom/examples/library.py'
--- Axiom/axiom/examples/library.py 2005-09-10 21:18:46 +0000
+++ Axiom/axiom/examples/library.py 1970-01-01 00:00:00 +0000
@@ -1,114 +0,0 @@
-
-import random
-
-from axiom.item import Item
-from axiom.attributes import text, timestamp, reference, integer, AND, OR
-from axiom.store import Store
-from epsilon import extime
-
-_d = extime.Time.fromISO8601TimeAndDate
-
-_books = [
- (u'Heart of Darkness', u'Joseph Conrad', u'0486264645', 80, _d('1990-07-01T00:00:00.000001')),
- (u'The Dark Tower, Book 7', u'Stephen King', u'1880418622', 864, _d('2004-11-21T00:00:00.000001')),
- (u'Guns, Germs, and Steel: The Fates of Human Societies', u'Jared Diamond', u'0393317552', 480, _d('1999-04-01T00:00:00.000001')),
- (u'The Lions of al-Rassan', u'Guy Gavriel Kay', u'0060733497', 528, _d('2005-06-28T00:00:00.000001')),
- ]
-
-_borrowers = [u'Anne', u'Bob', u'Carol', u'Dave']
-
-
-class Borrower(Item):
- typeName = 'borrower'
- schemaVersion = 1
- name = text(indexed=True)
-
-class Book(Item):
- typeName = 'book'
- schemaVersion = 1
-
- title = text()
- author = text()
- isbn = text()
- pages = integer()
- datePublished = timestamp()
-
- lentTo = reference()
- library = reference()
-
-class LendingLibrary(Item):
- typeName = 'lending_library'
- schemaVersion = 1
-
- name = text()
-
- def books(self):
- return self.store.query(Book,
- Book.library == self)
-
- def getBorrower(self, name):
- for b in self.store.query(Borrower,
- Borrower.name == name):
- return b
- b = Borrower(name=name,
- store=self.store)
- return b
-
- def initialize(self):
- for title, author, isbn, pages, published in _books:
- b = Book(
- title=title,
- author=author,
- isbn=isbn,
- pages=pages,
- datePublished=published,
- library=self,
- store=self.store)
-
-
- def displayBooks(self):
- for book in self.books():
- print book.title,
- if book.lentTo is not None:
- print 'lent to', '['+book.lentTo.name+']'
- else:
- print 'in library'
-
- def shuffleLending(self):
- for book in self.books():
- if book.lentTo is not None:
- print book.lentTo.name, 'returned', book.title
- book.lentTo = None
- for book in self.books():
- if random.choice([True, False]):
- borrower = random.choice(_borrowers)
- print 'Lending', book.title, 'to', borrower
- book.lentTo = self.getBorrower(borrower)
-
-def main(s):
- for ll in s.query(LendingLibrary):
- print 'found existing library'
- break
- else:
- print 'creating new library'
- ll = LendingLibrary(store=s)
- ll.initialize()
- ll.displayBooks()
- print '***'
- ll.shuffleLending()
- print '---'
- ll.displayBooks()
- print '***'
- ll.shuffleLending()
- print '---'
-
- print s.count(Book, AND (Book.author == u'Stephen King',
- Book.title == u'The Lions of al-Rassan'))
- print s.count(Book, OR (Book.author == u'Stephen King',
- Book.title == u'The Lions of al-Rassan'))
-
-
-if __name__ == '__main__':
- s = Store('testdb')
- s.transact(main, s)
- s.close()
=== removed file 'Axiom/axiom/iaxiom.py'
--- Axiom/axiom/iaxiom.py 2010-07-18 17:44:38 +0000
+++ Axiom/axiom/iaxiom.py 1970-01-01 00:00:00 +0000
@@ -1,363 +0,0 @@
-
-from zope.interface import Interface, Attribute
-
-
-class IStatEvent(Interface):
- """
- Marker for a log message that is useful as a statistic.
-
- Log messages with 'interface' set to this class will be made available to
- external observers. This is useful for tracking the rate of events such as
- page views.
- """
-
-
-class IAtomicFile(Interface):
- def __init__(tempname, destdir):
- """Create a new atomic file.
-
- The file will exist temporarily at C{tempname} and be relocated to
- C{destdir} when it is closed.
- """
-
- def tell():
- """Return the current offset into the file, in bytes.
- """
-
- def write(bytes):
- """Write some bytes to this file.
- """
-
- def close(callback):
- """Close this file. Move it to its final location.
-
- @param callback: A no-argument callable which will be invoked
- when this file is ready to be moved to its final location. It
- must return the segment of the path relative to per-user
- storage of the owner of this file. Alternatively, a string with
- the same semantics as the callable's return value may be passed
- instead of a callable.
-
- @rtype: C{axiom.store.StoreRelativePath}
- @return: A Deferred which fires with the full path to the file
- when it has been closed, or which fails if there is some error
- closing the file.
- """
-
- def abort():
- """Give up on this file. Discard its contents.
- """
-
-
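The IAtomicFile contract above (write to a temporary name, relocate on close, discard on abort) can be sketched in plain Python without Axiom. This is an illustrative stand-in, not Axiom's implementation: the class name, the use of `os.replace` for the atomic rename, and the simplified synchronous `close()` (no Deferred, no per-user path callback) are all assumptions for the sketch.

```python
import os
import tempfile

class AtomicFile(object):
    """Minimal sketch of the IAtomicFile idea: writes go to a temporary
    file in the destination directory, and the data only appears at the
    destination path when close() succeeds."""

    def __init__(self, destpath):
        self.destpath = destpath
        destdir = os.path.dirname(os.path.abspath(destpath))
        # Create the temp file in the same directory so the final
        # rename is atomic (same filesystem).
        fd, self.tempname = tempfile.mkstemp(dir=destdir)
        self._fp = os.fdopen(fd, 'wb')

    def tell(self):
        # Current offset into the file, in bytes.
        return self._fp.tell()

    def write(self, data):
        self._fp.write(data)

    def close(self):
        # Flush, then atomically move into the final location.
        self._fp.close()
        os.replace(self.tempname, self.destpath)

    def abort(self):
        # Give up: discard the contents, never touching the destination.
        self._fp.close()
        os.remove(self.tempname)
```

With this shape, a reader crashing mid-`write` never observes a half-written file at `destpath`, which is the property the interface is after.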
-class IAxiomaticCommand(Interface):
- """
- Subcommand for 'axiomatic' and 'tell-axiom' command line programs.
-
- Should subclass twisted.python.usage.Options and provide a command to run.
-
- '.parent' attribute will be set to an object with a getStore method.
- """
-
- name = Attribute("""
- """)
-
- description = Attribute("""
- """)
-
-
-
-class IBeneficiary(Interface):
- """
- Interface to adapt to when looking for an appropriate application-level
- object to install powerups on.
- """
-
- def powerUp(implementor, interface):
- """ Install a powerup on this object. There is not necessarily any inverse
- powerupsFor on a beneficiary, although there may be; installations may
- be forwarded to a different implementation object, or deferred.
- """
-
-class IPowerupIndirector(Interface):
- """
- Implement this interface if you want to change what is returned from
- powerupsFor for a particular interface.
- """
-
- def indirect(interface):
- """
- When an item which implements IPowerupIndirector is returned from a
- powerupsFor query, this method will be called on it to give it the
- opportunity to return something other than itself from powerupsFor.
-
- @param interface: the interface passed to powerupsFor
- @type interface: L{zope.interface.Interface}
- """
-
-
-
-class IScheduler(Interface):
- """
- An interface for scheduling tasks. Quite often the store will be adaptable
- to this (in any Mantissa application, for example), so it is reasonable to
- assume that it is if your application needs to schedule timed events or
- queue tasks.
- """
- def schedule(runnable, when):
- """
- @param runnable: any Item with a 'run' method.
-
- @param when: a Time instance describing when the runnable's run()
- method will be called. See extime.Time's documentation for more
- details.
- """
-
-
-
-class IQuery(Interface):
- """
- An object that represents a query that can be performed against a database.
- """
-
- limit = Attribute(
- """
- An integer representing the maximum number of rows to be returned from
- this query, or None, if the query is unlimited.
- """)
-
- store = Attribute(
- """
- The Axiom store that this query will return results from.
- """)
-
- def __iter__():
- """
- Retrieve an iterator for the results of this query.
-
- The query is performed whenever this is called.
- """
-
-
- def count():
- """
- Return the number of results in this query.
-
- NOTE: In most cases, this will have to load all of the rows in this
- query. It is therefore very slow and should generally be avoided.
- Call with caution!
- """
-
-
- def cloneQuery(limit):
- """
- Create a similar-but-not-identical copy of this query with certain
- attributes changed.
-
- (Currently this only supports the manipulation of the "limit"
- parameter, but it is the intent that with a richer query-introspection
- interface, this signature could be expanded to support many different
- attributes.)
-
- @param limit: an integer, representing the maximum number of rows that
- this query should return.
-
- @return: an L{IQuery} provider with the new limit.
- """
-
-
-
-class IColumn(Interface):
- """
- An object that represents a column in the database.
- """
- def getShortColumnName(store):
- """
- @rtype: C{str}
- @return: Just the name of this column.
- """
-
-
- def getColumnName(store):
- """
- @rtype: C{str}
-
- @return: The fully qualified name of this object as a column within the
- database, eg, C{"main_database.some_table.[this_column]"}.
- """
-
- def fullyQualifiedName():
- """
- @rtype: C{str}
-
- @return: The fully qualified name of this object as an attribute in
- Python code, eg, C{myproject.mymodule.MyClass.myAttribute}. If this
- attribute is represented by an actual Python code object, it will be a
- dot-separated sequence of Python identifiers; otherwise, it will
- contain invalid identifier characters other than '.'.
- """
-
- def __get__(row):
- """
- @param row: an item that has this column.
- @type row: L{axiom.item.Item}
-
- @return: The value of the column described by this object, for the given
- row.
-
- @rtype: depends on the underlying type of the column.
- """
-
-
-
-class IOrdering(Interface):
- """
- An object suitable for passing to the 'sort' argument of a query method.
- """
- def orderColumns():
- """
- Return a list of two-tuples of IColumn providers and either C{'ASC'} or
- C{'DESC'} defining this ordering.
- """
-
-
-
-class IComparison(Interface):
- """
- An object that represents an in-database comparison. A predicate that may
- apply to certain items in a store. Passed as an argument to
- attributes.AND, .OR, and Store.query(...)
- """
- def getInvolvedTables():
- """
- Return a sequence of L{Item} subclasses which are referenced by this
- comparison. A class may appear at most once.
- """
-
-
- def getQuery(store):
- """
- Return an SQL string using ?-style bind parameter syntax.
- """
-
-
- def getArgs(store):
- """
- Return a sequence of arguments suitable for use to satisfy the bind
- parameters in the result of L{getQuery}.
- """
-
-
-
-class IReliableListener(Interface):
- """
- Receives notification of the existence of Items of a particular type.
-
- L{IReliableListener} providers are given to
- L{IBatchProcessor.addReliableListener} and will then have L{processItem}
- called with items handled by that processor.
- """
-
- def processItem(item):
- """
- Callback notifying this listener of the existence of the given item.
- """
-
- def suspend():
- """
- Invoked when notification for this listener is being temporarily
- suspended.
-
- This should clean up any ephemeral resources held by this listener and
- generally prepare to not do anything for a while.
- """
-
- def resume():
- """
- Invoked when notification for this listener is being resumed.
-
- Any actions taken by L{suspend} may be reversed by this method.
- """
-
-
-LOCAL, REMOTE = range(2)
-class IBatchProcessor(Interface):
- def addReliableListener(listener, style=LOCAL):
- """
- Add the given Item to the set which will be notified of Items
- available for processing.
-
- Note: Each Item is processed synchronously. Adding too many
- listeners to a single batch processor will cause the L{step}
- method to block while it sends notification to each listener.
-
- @type listener: L{IReliableListener}
- @param listener: The item to which listened-for items will be passed
- for processing.
- """
-
-
- def removeReliableListener(listener):
- """
- Remove a previously added listener.
- """
-
-
- def getReliableListeners():
- """
- Return an iterable of the listeners which have been added to
- this batch processor.
- """
-
-
-
-class IBatchService(Interface):
- """
- Object which allows minimal communication with L{IReliableListener}
- providers which are running remotely (that is, with the L{REMOTE} style).
- """
- def start():
- """
- Start the remote batch process if it has not yet been started, otherwise
- do nothing.
- """
-
-
- def suspend(storeID):
- """
- @type storeID: C{int}
- @param storeID: The storeID of the listener to suspend.
-
- @rtype: L{twisted.internet.defer.Deferred}
- @return: A Deferred which fires when the listener has been suspended.
- """
-
-
- def resume(storeID):
- """
- @type storeID: C{int}
- @param storeID: The storeID of the listener to resume.
-
- @rtype: L{twisted.internet.defer.Deferred}
- @return: A Deferred which fires when the listener has been resumed.
- """
-
-
-
-class IVersion(Interface):
- """
- Object with version information for a package that creates Axiom
- items, most likely a L{twisted.python.versions.Version}. Used to
- track which versions of a package have been used to load a store.
- """
- package = Attribute("""
- Name of a Python package.
- """)
- major = Attribute("""
- Major version number.
- """)
- minor = Attribute("""
- Minor version number.
- """)
- micro = Attribute("""
- Micro version number.
- """)
=== removed file 'Axiom/axiom/item.py'
--- Axiom/axiom/item.py 2014-04-08 22:47:38 +0000
+++ Axiom/axiom/item.py 1970-01-01 00:00:00 +0000
@@ -1,1137 +0,0 @@
-# -*- test-case-name: axiom.test -*-
-
-__metaclass__ = type
-
-import gc
-from zope.interface import implements, Interface
-
-from inspect import getabsfile
-from weakref import WeakValueDictionary
-
-from twisted.python import log
-from twisted.python.reflect import qual, namedAny
-from twisted.python.util import mergeFunctionMetadata
-from twisted.application.service import IService, IServiceCollection, MultiService
-
-from axiom import slotmachine, _schema, iaxiom
-from axiom.errors import ChangeRejected, DeletionDisallowed
-from axiom.iaxiom import IColumn, IPowerupIndirector
-
-from axiom.attributes import (
- SQLAttribute, _ComparisonOperatorMuxer, _MatchingOperationMuxer,
- _OrderingMixin, _ContainableMixin, Comparable, compare, inmemory,
- reference, text, integer, AND, _cascadingDeletes, _disallows)
-
-_typeNameToMostRecentClass = WeakValueDictionary()
-
-def normalize(qualName):
- """
- Turn a fully-qualified Python name into a string usable as part of a
- table name.
- """
- return qualName.lower().replace('.', '_')
-
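For reference, `normalize` maps a fully-qualified dotted Python name to a fragment safe for use in a SQL table name. A standalone copy of the same one-liner, with a hypothetical class name as the example input:

```python
def normalize(qualName):
    # Same logic as above: lowercase the name, replace dots with
    # underscores so the result is usable inside a table name.
    return qualName.lower().replace('.', '_')

# A (hypothetical) fully-qualified class name becomes a table fragment:
# normalize('myapp.items.LendingLibrary') -> 'myapp_items_lendinglibrary'
```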
-class NoInheritance(RuntimeError):
- """
- Inheritance is as-yet unsupported by XAtop.
- """
-
-class NotInStore(RuntimeError):
- """
- Raised when an operation which requires an item to be in a store is
- attempted on an item which is not in one.
- """
-
-class CantInstantiateItem(RuntimeError):
- """You can't instantiate Item directly. Make a subclass.
- """
-
-class MetaItem(slotmachine.SchemaMetaMachine):
- """Simple metaclass for Item that adds Item (and its subclasses) to
- _typeNameToMostRecentClass mapping.
- """
-
- def __new__(meta, name, bases, dictionary):
- T = slotmachine.SchemaMetaMachine.__new__(meta, name, bases, dictionary)
- if T.__name__ == 'Item' and T.__module__ == __name__:
- return T
- T.__already_inherited__ += 1
- if T.__already_inherited__ >= 2:
- raise NoInheritance("already inherited from item once: "
- "in-database inheritance not yet supported")
- if T.typeName is None:
- T.typeName = normalize(qual(T))
- if T.schemaVersion is None:
- T.schemaVersion = 1
- if not T.__legacy__ and T.typeName in _typeNameToMostRecentClass:
- # Let's try not to gc.collect() every time.
- gc.collect()
- if T.typeName in _typeNameToMostRecentClass:
- if T.__legacy__:
- return T
- otherT = _typeNameToMostRecentClass[T.typeName]
-
- if (otherT.__name__ == T.__name__
- and getabsfile(T) == getabsfile(otherT)
- and T.__module__ != otherT.__module__):
-
- if len(T.__module__) < len(otherT.__module__):
- relmod = T.__module__
- else:
- relmod = otherT.__module__
-
- raise RuntimeError(
- "Use absolute imports; relative import"
- " detected for type %r (imported from %r)" % (
- T.typeName, relmod))
-
- raise RuntimeError("2 definitions of axiom typename %r: %r %r" % (
- T.typeName, T, _typeNameToMostRecentClass[T.typeName]))
- _typeNameToMostRecentClass[T.typeName] = T
- return T
-
-
- def __cmp__(self, other):
- """
- Ensure stable sorting between Item classes. This provides determinism
- in SQL generation, which is beneficial for debugging and performance
- purposes.
- """
- if isinstance(other, MetaItem):
- return cmp((self.typeName, self.schemaVersion),
- (other.typeName, other.schemaVersion))
- return NotImplemented
-
-
-def noop():
- pass
-
-class _StoreIDComparer(Comparable):
- """
- See Comparable's docstring for an explanation of the requirements of this implementation.
- """
- implements(IColumn)
-
- def __init__(self, type):
- self.type = type
-
- def __repr__(self):
- return '<storeID ' + qual(self.type) + '.storeID>'
-
- def fullyQualifiedName(self):
- # XXX: this is an example of silly redundancy, this really ought to be
- # refactored to work like any other attribute (including being
- # explicitly covered in the schema, which has other good qualities like
- # allowing tables to be VACUUM'd without destroying oid stability and
- # every storeID reference ever. --glyph
- return qual(self.type)+'.storeID'
-
- # attributes required by ColumnComparer
- def infilter(self, pyval, oself, store):
- return pyval
-
- def outfilter(self, dbval, oself):
- return dbval
-
- def getShortColumnName(self, store):
- return store.getShortColumnName(self)
-
- def getColumnName(self, store):
- return store.getColumnName(self)
-
- def __get__(self, item, type=None):
- if item is None:
- return self
- else:
- return getattr(item, 'storeID')
-
-
-class _SpecialStoreIDAttribute(slotmachine.SetOnce):
- """
- Because storeID is special (it's unique, it determines a row's cache
- identity, it's immutable, etc) we don't use a regular SQLAttribute to
- represent it - but it still needs to be compared with other SQL attributes,
- as it is in fact represented by the 'oid' database column.
-
- I implement set-once semantics to enforce immutability, but delegate
- comparison operations to _StoreIDComparer.
- """
- def __get__(self, oself, type=None):
- if type is not None and oself is None:
- if type._storeIDComparer is None:
- # Reuse the same instance so that the store can use it
- # as a key for various caching, like any other attributes.
- type._storeIDComparer = _StoreIDComparer(type)
- return type._storeIDComparer
- return super(_SpecialStoreIDAttribute, self).__get__(oself, type)
-
-
-def serviceSpecialCase(item, pups):
- if item._axiom_service is not None:
- return item._axiom_service
- svc = MultiService()
- for subsvc in pups:
- subsvc.setServiceParent(svc)
- item._axiom_service = svc
- return svc
-
-
-
-
-class Empowered(object):
- """
- An object which can have powerups.
-
- @type store: L{axiom.store.Store}
- @ivar store: Persistence object to which powerups can be added for later
- retrieval.
-
- @type aggregateInterfaces: C{dict}
- @ivar aggregateInterfaces: Mapping from interface classes to callables
- which will be used to produce corresponding powerups. The callables
- will be invoked with two arguments, the L{Empowered} for which powerups
- are being loaded and with a list of powerups found in C{store}. The
- return value is the powerup. These are used only by the callable
- interface adaption API, not C{powerupsFor}.
- """
- aggregateInterfaces = {
- IService: serviceSpecialCase,
- IServiceCollection: serviceSpecialCase}
-
- def inMemoryPowerUp(self, powerup, interface):
- """
- Install an arbitrary object as a powerup on an item or store.
-
- Powerups installed using this method will only exist as long as this
- object remains in memory. They will also take precedence over powerups
- installed with L{powerUp}.
-
- @param interface: a zope interface
- """
- self._inMemoryPowerups[interface] = powerup
-
-
- def powerUp(self, powerup, interface=None, priority=0):
- """
- Installs a powerup (e.g. plugin) on an item or store.
-
- Powerups will be returned in an iterator when queried for using the
- 'powerupsFor' method. Normally they will be returned in order of
- installation [this may change in future versions, so please don't
- depend on it]. Higher priorities are returned first. If you have
- something that should run before "normal" powerups, pass
- POWERUP_BEFORE; if you have something that should run after, pass
- POWERUP_AFTER. We suggest not depending too heavily on order of
- execution of your powerups, but if finer-grained control is necessary
- you may pass any integer. Normal (unspecified) priority is zero.
-
- Powerups will only be installed once on a given item. If you install a
- powerup for a given interface with priority 1, then again with priority
- 30, the powerup will be adjusted to priority 30 but future calls to
- powerupsFor will still only return that powerup once.
-
-
- If no interface or priority are specified, and the class of the
- powerup has a "powerupInterfaces" attribute (containing
- either a sequence of interfaces, or a sequence of
- (interface, priority) tuples), this object will be powered up
- with the powerup object on those interfaces.
-
- If no interface or priority are specified and the powerup has
- a "__getPowerupInterfaces__" method, it will be called with
- an iterable of (interface, priority) tuples, collected from the
- "powerupInterfaces" attribute described above. The iterable of
- (interface, priority) tuples it returns will then be
- installed.
-
-
- @param powerup: an Item that implements C{interface} (if specified)
- @param interface: a zope interface, or None
-
- @param priority: An int; preferably either POWERUP_BEFORE,
- POWERUP_AFTER, or unspecified.
-
- @raise TypeError: if C{interface} is IPowerupIndirector. You may not
- install a powerup for IPowerupIndirector because that would be
- nonsensical.
- """
- if interface is None:
- for iface, priority in powerup._getPowerupInterfaces():
- self.powerUp(powerup, iface, priority)
-
- elif interface is IPowerupIndirector:
- raise TypeError(
- "You cannot install a powerup for IPowerupIndirector: %r"
- % (powerup,))
- else:
- forc = self.store.findOrCreate(_PowerupConnector,
- item=self,
- interface=unicode(qual(interface)),
- powerup=powerup)
- forc.priority = priority
-
-
- def powerDown(self, powerup, interface=None):
- """
- Remove a powerup.
-
- If no interface is specified, and the type of the object being
- installed has a "powerupInterfaces" attribute (containing
- either a sequence of interfaces, or a sequence of (interface,
- priority) tuples), the target will be powered down with this
- object on those interfaces.
-
- If this object has a "__getPowerupInterfaces__" method, it
- will be called with an iterable of (interface, priority)
- tuples. The iterable of (interface, priority) tuples it
- returns will then be uninstalled.
-
- (Note particularly that if powerups are added to or removed from the
- collection described above between calls to powerUp and powerDown,
- more or fewer powerups may be removed than were installed.)
- """
- if interface is None:
- for interface, priority in powerup._getPowerupInterfaces():
- self.powerDown(powerup, interface)
- else:
- for cable in self.store.query(_PowerupConnector,
- AND(_PowerupConnector.item == self,
- _PowerupConnector.interface == unicode(qual(interface)),
- _PowerupConnector.powerup == powerup)):
- cable.deleteFromStore()
- return
- raise ValueError("Not powered up for %r with %r" % (interface,
- powerup))
-
-
- def __conform__(self, interface):
- """
- For 'normal' interfaces, returns the first powerup found when doing
- self.powerupsFor(interface).
-
- Certain interfaces are special - IService from twisted.application
- being the main special case - and will be aggregated according to
- special rules. The full list of such interfaces is present in the
- 'aggregateInterfaces' class attribute.
- """
- if interface is IPowerupIndirector:
- # This would cause an infinite loop, since powerupsFor will try to
- # adapt every powerup to IPowerupIndirector, calling this method.
- return
-
- pups = self.powerupsFor(interface)
- aggregator = self.aggregateInterfaces.get(interface, None)
- if aggregator is not None:
- return aggregator(self, pups)
-
- for pup in pups:
- return pup # return first one, or None if no powerups
-
-
- def powerupsFor(self, interface):
- """
- Returns powerups installed using C{powerUp}, in order of descending
- priority.
-
- Powerups found to have been deleted, either during the course of this
- powerupsFor iteration, during an upgrader, or previously, will not be
- returned.
- """
- inMemoryPowerup = self._inMemoryPowerups.get(interface, None)
- if inMemoryPowerup is not None:
- yield inMemoryPowerup
- if self.store is None:
- return
- name = unicode(qual(interface), 'ascii')
- for cable in self.store.query(
- _PowerupConnector,
- AND(_PowerupConnector.interface == name,
- _PowerupConnector.item == self),
- sort=_PowerupConnector.priority.descending):
- pup = cable.powerup
- if pup is None:
- # this powerup was probably deleted during an upgrader.
- cable.deleteFromStore()
- else:
- indirector = IPowerupIndirector(pup, None)
- if indirector is not None:
- yield indirector.indirect(interface)
- else:
- yield pup
-
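The lookup order `powerupsFor` implements — the in-memory powerup for an interface first, then stored powerups in descending priority, with re-powering only adjusting priority rather than duplicating — can be sketched with plain data structures. Axiom backs this with `_PowerupConnector` rows in the store; the class below and its string "interfaces" are purely illustrative.

```python
class TinyEmpowered(object):
    """Sketch of powerupsFor's ordering rules, with no database:
    an in-memory powerup shadows stored ones, stored powerups come
    back in descending priority order, and re-installing the same
    powerup only adjusts its priority."""

    def __init__(self):
        self._inMemory = {}   # interface -> powerup
        self._stored = {}     # interface -> {powerup: priority}

    def powerUp(self, powerup, interface, priority=0):
        # A dict keyed by powerup gives the "installed once, priority
        # adjusted on re-install" behaviour described above.
        self._stored.setdefault(interface, {})[powerup] = priority

    def inMemoryPowerUp(self, powerup, interface):
        self._inMemory[interface] = powerup

    def powerupsFor(self, interface):
        if interface in self._inMemory:
            yield self._inMemory[interface]
        byPriority = self._stored.get(interface, {})
        for powerup in sorted(byPriority, key=byPriority.get, reverse=True):
            yield powerup
```

The real method additionally prunes connectors whose powerup was deleted and routes through IPowerupIndirector; both are omitted here.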
- def interfacesFor(self, powerup):
- """
- Return an iterator of the interfaces for which the given powerup is
- installed on this object.
-
- This is not implemented for in-memory powerups. It will probably fail
- in an unpredictable, implementation-dependent way if used on one.
- """
- pc = _PowerupConnector
- for iface in self.store.query(pc,
- AND(pc.item == self,
- pc.powerup == powerup)).getColumn('interface'):
- yield namedAny(iface)
-
-
- def _getPowerupInterfaces(self):
- """
- Collect powerup interfaces this object declares that it can be
- installed on.
- """
- powerupInterfaces = getattr(self.__class__, "powerupInterfaces", ())
- pifs = []
- for x in powerupInterfaces:
- if isinstance(x, type(Interface)):
- #just an interface
- pifs.append((x, 0))
- else:
- #an interface and a priority
- pifs.append(x)
-
- m = getattr(self, "__getPowerupInterfaces__", None)
- if m is not None:
- pifs = m(pifs)
- try:
- pifs = [(i, p) for (i, p) in pifs]
- except ValueError:
- raise ValueError("return value from %r.__getPowerupInterfaces__"
- " not an iterable of 2-tuples" % (self,))
- return pifs
-
-
-def transacted(func):
- """
- Return a callable which will invoke C{func} in a transaction using the
- C{store} attribute of the first parameter passed to it. Typically this is
- used to create Item methods which are automatically run in a transaction.
-
- The attributes of the returned callable will resemble those of C{func} as
- closely as L{twisted.python.util.mergeFunctionMetadata} can make them.
- """
- def transactionified(item, *a, **kw):
- return item.store.transact(func, item, *a, **kw)
- return mergeFunctionMetadata(func, transactionified)
-
-
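`transacted` above is a plain decorator that routes a method call through `store.transact`. With a stand-in store whose `transact` simply runs the function, the wrapping behaves as follows — `FakeStore` and `Counter` are invented for the sketch, and the Twisted metadata-merging helper is dropped:

```python
def transacted(func):
    # Same shape as the decorator above, minus mergeFunctionMetadata.
    def transactionified(item, *a, **kw):
        return item.store.transact(func, item, *a, **kw)
    return transactionified

class FakeStore(object):
    """Stand-in store: transact just runs the function and counts
    how many transactions were started."""
    def __init__(self):
        self.transactions = 0

    def transact(self, func, *a, **kw):
        self.transactions += 1
        return func(*a, **kw)

class Counter(object):
    def __init__(self, store):
        self.store = store
        self.value = 0

    @transacted
    def increment(self):
        # Runs inside store.transact because of the decorator.
        self.value += 1
        return self.value
```

Each call to `increment()` opens one transaction on the item's store; a real Axiom store would also roll the changes back if `func` raised.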
-
-
-def dependentItems(store, tableClass, comparisonFactory):
- """
- Collect all the items that should be deleted when an item or items
- of a particular item type are deleted.
-
- @param tableClass: An L{Item} subclass.
-
- @param comparisonFactory: A one-argument callable taking an attribute
- and returning an L{iaxiom.IComparison} describing the items to
- collect.
-
- @return: An iterable of items to delete.
- """
- for cascadingAttr in (_cascadingDeletes.get(tableClass, []) +
- _cascadingDeletes.get(None, [])):
- for cascadedItem in store.query(cascadingAttr.type,
- comparisonFactory(cascadingAttr)):
- yield cascadedItem
-
-
-
-def allowDeletion(store, tableClass, comparisonFactory):
- """
- Returns a C{bool} indicating whether deletion of an item or items of a
- particular item type should be allowed to proceed.
-
- @param tableClass: An L{Item} subclass.
-
- @param comparisonFactory: A one-argument callable taking an attribute
- and returning an L{iaxiom.IComparison} describing the items to
- collect.
-
- @return: A C{bool} indicating whether deletion should be allowed.
- """
- for cascadingAttr in (_disallows.get(tableClass, []) +
- _disallows.get(None, [])):
- for cascadedItem in store.query(cascadingAttr.type,
- comparisonFactory(cascadingAttr),
- limit=1):
- return False
-
- return True
-
-
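Both `dependentItems` and `allowDeletion` walk a per-type registry (`_cascadingDeletes` / `_disallows`, populated by reference attributes in `axiom.attributes`), always including wildcard entries registered under `None`. That registry pattern can be sketched on its own; the function names and string attribute stand-ins below are illustrative, not Axiom's real internals:

```python
# Sketch of the registry pattern used by dependentItems/allowDeletion:
# attributes register themselves against the item type they point at,
# and deletion walks per-type entries plus wildcard (None) entries.
_cascades = {}

def registerCascade(targetType, attr):
    # Record that deleting items of targetType affects attr's items.
    _cascades.setdefault(targetType, []).append(attr)

def cascadesFor(targetType):
    # Entries for this type, then entries registered for every type.
    return _cascades.get(targetType, []) + _cascades.get(None, [])
```

The real code then queries the store once per registered attribute; `allowDeletion` additionally passes `limit=1`, since a single match is enough to veto the delete.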
-
-class Item(Empowered, slotmachine._Strict):
- # Python-Special Attributes
- __metaclass__ = MetaItem
-
- # Axiom-Special Attributes
- __dirty__ = inmemory()
- __legacy__ = False
-
- __already_inherited__ = 0
-
- # Private attributes.
- __store = inmemory() # underlying reference to the store.
-
- __everInserted = inmemory() # has this object ever been inserted into the
- # database?
-
- __justCreated = inmemory() # was this object just created, i.e. is there
- # no committed database representation of it
- # yet
-
- __deleting = inmemory() # has this been marked for deletion at
- # checkpoint
-
- __deletingObject = inmemory() # being marked for deletion at checkpoint,
- # are we also deleting the central object row
- # (True: as in an actual delete) or are we
- # simply deleting the data row (False: as in
- # part of an upgrade)
-
- storeID = _SpecialStoreIDAttribute(default=None)
- _storeIDComparer = None
- _axiom_service = inmemory()
-
- # A mapping from interfaces to in-memory powerups.
- _inMemoryPowerups = inmemory()
-
- def _currentlyValidAsReferentFor(self, store):
- """
- Is this object currently valid as a reference? Objects which will be
- deleted in this transaction, or objects which are not in the same store
- are not valid. See attributes.reference.__get__.
- """
- if store is None:
- # If your store is None, you can refer to whoever you want. I'm in
- # a store but it doesn't matter that you're not.
- return True
- if self.store is not store:
- return False
- if self.__deletingObject:
- return False
- return True
-
-
- def _schemaPrepareInsert(self, store):
- """
- Prepare each attribute in my schema for insertion into a given store,
- either by upgrade or by creation. This makes sure all references point
- to this store and all relative paths point to this store's files
- directory.
- """
- for name, atr in self.getSchema():
- atr.prepareInsert(self, store)
-
-
- def store():
- def get(self):
- return self.__store
- def set(self, store):
- if self.__store is not None:
- raise AttributeError(
- "Store already set - can't move between stores")
-
- if store._rejectChanges:
- raise ChangeRejected()
-
- self._schemaPrepareInsert(store)
- self.__store = store
- oid = self.storeID = self.store.executeSchemaSQL(
- _schema.CREATE_OBJECT, [self.store.getTypeID(type(self))])
- if not self.__legacy__:
- store.objectCache.cache(oid, self)
- if store.autocommit:
- log.msg(interface=iaxiom.IStatEvent,
- name='database', stat_autocommits=1)
-
- self.checkpoint()
- else:
- self.touch()
- self.activate()
- self.stored()
- return get, set, """
-
- A reference to a Store; when set for the first time, inserts this object
- into that store. Cannot be set twice; once inserted, objects are
- 'stuck' to a particular store and must be copied by creating a new
- Item.
-
- """
-
- store = property(*store())
-
-
- def __repr__(self):
- """
- Return a nice string representation of the Item which contains some
- information about each of its attributes.
- """
- attrs = ", ".join("{n}={v}".format(n=name, v=attr.reprFor(self))
- for name, attr in sorted(self.getSchema()))
- template = b"{s.__name__}({attrs}, storeID={s.storeID})@{id:#x}"
- return template.format(s=self, attrs=attrs, id=id(self))
-
-
- def __subinit__(self, **kw):
- """
- Initializer called regardless of whether this object was created by
- instantiation or loading from the database.
- """
- self._axiom_service = None
- self._inMemoryPowerups = {}
- self.__dirty__ = {}
- to__store = kw.pop('__store', None)
- to__everInserted = kw.pop('__everInserted', False)
- to__justUpgraded = kw.pop('__justUpgraded', False)
- self.__store = to__store
- self.__everInserted = to__everInserted
- self.__deletingObject = False
- self.__deleting = False
- tostore = kw.pop('store',None)
-
- if not self.__everInserted:
- for (name, attr) in self.getSchema():
- if name not in kw:
- kw[name] = attr.computeDefault()
-
- for k, v in kw.iteritems():
- setattr(self, k, v)
-
- if tostore is not None:
- if to__justUpgraded:
-
- # we can't just set the store, because that allocates an ID.
- # we do still need to do all the attribute prep, make sure
- # references refer to this store, paths are adjusted to point
- # to this store's static offset, etc.
-
- self._schemaPrepareInsert(tostore)
- self.__store = tostore
-
- # However, setting the store would normally cache this item as
- # well, so we need to cache it here - unless this is actually a
- # dummy class which isn't real! In that case don't.
- if not self.__legacy__:
- tostore.objectCache.cache(self.storeID, self)
- if tostore.autocommit:
- self.checkpoint()
- else:
- self.store = tostore
-
-
- def __init__(self, **kw):
- """
- Create a new Item. This is called on an item *only* when it is being created
- for the first time, not when it is loaded from the database. The
- 'activate()' hook is called every time an item is loaded from the
- database, as well as the first time that an item is inserted into the
- store. This will be inside __init__ if you pass a 'store' keyword
- argument to an Item's constructor.
-
- This takes an arbitrary set of keyword arguments, which will be set as
- attributes on the created item. Subclasses of Item must honor this
- signature.
- """
- if type(self) is Item:
- raise CantInstantiateItem()
- self.__justCreated = True
- self.__subinit__(**kw)
-
-
- def __finalizer__(self):
- return noop
-
-
- def existingInStore(cls, store, storeID, attrs):
- """Create and return a new instance from a row from the store."""
- self = cls.__new__(cls)
-
- self.__justCreated = False
- self.__subinit__(__store=store,
- storeID=storeID,
- __everInserted=True)
-
- schema = self.getSchema()
- assert len(schema) == len(attrs), "invalid number of attributes"
- for data, (name, attr) in zip(attrs, schema):
- attr.loaded(self, data)
- self.activate()
- return self
- existingInStore = classmethod(existingInStore)
-
- def activate(self):
- """The object was loaded from the store.
- """
-
- def getSchema(cls):
- """
- return all persistent class attributes
- """
- schema = []
- for name, atr in cls.__attributes__:
- atr = atr.__get__(None, cls)
- if isinstance(atr, SQLAttribute):
- schema.append((name, atr))
- cls.getSchema = staticmethod(lambda schema=schema: schema)
- return schema
- getSchema = classmethod(getSchema)
-
-
- def persistentValues(self):
- """
- Return a dictionary of all attributes which will be/have been/are being
- stored in the database.
- """
- return dict((k, getattr(self, k)) for (k, attr) in self.getSchema())
-
-
- def touch(self):
- # xxx what
- if self.store is None:
- return
- self.store.changed(self)
-
- def revert(self):
- if self.__justCreated:
- # The SQL revert has already been taken care of.
- if not self.__legacy__:
- self.store.objectCache.uncache(self.storeID, self)
- return
- self.__dirty__.clear()
- dbattrs = self.store.querySQL(
- self._baseSelectSQL(self.store),
- [self.storeID])[0]
-
- for data, (name, atr) in zip(dbattrs, self.getSchema()):
- atr.loaded(self, data)
-
- self.__deleting = False
- self.__deletingObject = False
-
-
- def deleted(self):
- """User-definable callback that is invoked when an object is well and truly
- gone from the database; the transaction which deleted it has been
- committed.
- """
-
-
- def stored(self):
- """
- User-definable callback that is invoked when an object is placed into a
- Store for the very first time.
-
- If an Item is created with a store, this will be invoked I{after}
- C{activate}.
- """
-
-
- def committed(self):
- """
- Called after the database is brought into a consistent state with this
- object.
- """
- if self.__deleting:
- self.deleted()
- if not self.__legacy__:
- self.store.objectCache.uncache(self.storeID, self)
- self.__store = None
- self.__justCreated = False
-
-
- def checkpoint(self):
- """
- Update the database to reflect in-memory changes made to this item; for
- example, to make it show up in store.query() calls where it is now
- valid, but was not the last time it was persisted to the database.
-
- This is called automatically when in 'autocommit mode' (i.e. not in a
- transaction) and at the end of each transaction for every object that
- has been changed.
- """
-
- if self.store is None:
- raise NotInStore("You can't checkpoint %r: not in a store" % (self,))
-
-
- if self.__deleting:
- if not self.__everInserted:
- # don't issue duplicate SQL and crap; we were created, then
- # destroyed immediately.
- return
- self.store.executeSQL(self._baseDeleteSQL(self.store), [self.storeID])
- # re-using OIDs plays havoc with the cache, and with other things
- # as well. We need to make sure that we leave a placeholder row at
- # the end of the table.
- if self.__deletingObject:
- # Mark this object as dead.
- self.store.executeSchemaSQL(_schema.CHANGE_TYPE,
- [-1, self.storeID])
-
- # Can't do this any more:
- # self.store.executeSchemaSQL(_schema.DELETE_OBJECT, [self.storeID])
-
- # TODO: need to measure the performance impact of this, then do
- # it to make sure things are in fact deleted:
- # self.store.executeSchemaSQL(_schema.APP_VACUUM)
-
- else:
- assert self.__legacy__
-
- # we're done...
- if self.store.autocommit:
- self.committed()
- return
-
- if self.__everInserted:
- # case 1: we've been inserted before, either previously in this
- # transaction or we were loaded from the db
- if not self.__dirty__:
- # we might have been checkpointed twice within the same
- # transaction; just don't do anything.
- return
- self.store.executeSQL(*self._updateSQL())
- else:
- # case 2: we are in the middle of creating the object, we've never
- # been inserted into the db before
- schemaAttrs = self.getSchema()
-
- insertArgs = [self.storeID]
- for (ignoredName, attrObj) in schemaAttrs:
- attrObjDuplicate, attributeValue = self.__dirty__[attrObj.attrname]
- # assert attrObjDuplicate is attrObj
- insertArgs.append(attributeValue)
-
- # XXX this isn't atomic, gross.
- self.store.executeSQL(self._baseInsertSQL(self.store), insertArgs)
- self.__everInserted = True
- # In case 1, we're dirty but we did an update, synchronizing the
- # database, in case 2, we haven't been created but we issue an insert.
- # In either case, the code in attributes.py sets the attribute *as well
- # as* populating __dirty__, so we clear out dirty and we keep the same
- # value, knowing it's the same as what's in the db.
- self.__dirty__.clear()
- if self.store.autocommit:
- self.committed()
-
- def upgradeVersion(self, typename, oldversion, newversion, **kw):
- # right now there is only ever one acceptable series of arguments here
- # but it is useful to pass them anyway to make sure the code is
- # functioning as expected
- assert typename == self.typeName, '%r != %r' % (typename, self.typeName)
- assert oldversion == self.schemaVersion
- key = typename, newversion
- T = None
- if key in _legacyTypes:
- T = _legacyTypes[key]
- elif typename in _typeNameToMostRecentClass:
- mostRecent = _typeNameToMostRecentClass[typename]
- if mostRecent.schemaVersion == newversion:
- T = mostRecent
- if T is None:
- raise RuntimeError("don't know about type/version pair %s:%d" % (
- typename, newversion))
- newTypeID = self.store.getTypeID(T) # call first to make sure the table
- # exists for doInsert below
-
- new = T(store=self.store,
- __justUpgraded=True,
- storeID=self.storeID,
- **kw)
-
- new.touch()
- new.activate()
-
- self.store.executeSchemaSQL(_schema.CHANGE_TYPE,
- [newTypeID, self.storeID])
- self.deleteFromStore(False)
- return new
-
- def deleteFromStore(self, deleteObject=True):
- # go grab dependent stuff
- if deleteObject:
- if not allowDeletion(self.store, self.__class__,
- lambda attr: attr == self):
- raise DeletionDisallowed(
- 'Cannot delete item; '
- 'has referents with whenDeleted == reference.DISALLOW')
-
- for dependent in dependentItems(self.store, self.__class__,
- lambda attr: attr == self):
- dependent.deleteFromStore()
-
- self.touch()
-
- self.__deleting = True
- self.__deletingObject = deleteObject
-
- if self.store.autocommit:
- self.checkpoint()
-
-
- # You may specify schemaVersion and typeName in subclasses
- schemaVersion = None
- typeName = None
-
- ###### SQL generation ######
-
- def _baseSelectSQL(cls, st):
- if cls not in st.typeToSelectSQLCache:
- st.typeToSelectSQLCache[cls] = ' '.join(['SELECT * FROM',
- st.getTableName(cls),
- 'WHERE',
- st.getShortColumnName(cls.storeID),
- '= ?'
- ])
- return st.typeToSelectSQLCache[cls]
-
- _baseSelectSQL = classmethod(_baseSelectSQL)
-
- def _baseInsertSQL(cls, st):
- if cls not in st.typeToInsertSQLCache:
- attrs = list(cls.getSchema())
- qs = ', '.join((['?']*(len(attrs)+1)))
- st.typeToInsertSQLCache[cls] = (
- 'INSERT INTO '+
- st.getTableName(cls) + ' (' + ', '.join(
- [ st.getShortColumnName(cls.storeID) ] +
- [ st.getShortColumnName(a[1]) for a in attrs]) +
- ') VALUES (' + qs + ')')
- return st.typeToInsertSQLCache[cls]
-
- _baseInsertSQL = classmethod(_baseInsertSQL)
-
- def _baseDeleteSQL(cls, st):
- if cls not in st.typeToDeleteSQLCache:
- st.typeToDeleteSQLCache[cls] = ' '.join(['DELETE FROM',
- st.getTableName(cls),
- 'WHERE',
- st.getShortColumnName(cls.storeID),
- '= ? '
- ])
- return st.typeToDeleteSQLCache[cls]
-
- _baseDeleteSQL = classmethod(_baseDeleteSQL)
-
- def _updateSQL(self):
- # XXX no point in caching for every possible combination of attribute
- # values - probably. check out how prepared statements are used in
- # python sometime.
- dirty = self.__dirty__.items()
- if not dirty:
- raise RuntimeError("Non-dirty item trying to generate SQL.")
- dirty.sort()
- dirtyColumns = []
- dirtyValues = []
- for dirtyAttrName, (dirtyAttribute, dirtyValue) in dirty:
- dirtyColumns.append(self.store.getShortColumnName(dirtyAttribute))
- dirtyValues.append(dirtyValue)
- stmt = ' '.join([
- 'UPDATE', self.store.getTableName(self.__class__), 'SET',
- ', '.join(['%s = ?'] * len(dirty)) %
- tuple(dirtyColumns),
- 'WHERE ', self.store.getShortColumnName(type(self).storeID), ' = ?'])
- dirtyValues.append(self.storeID)
- return stmt, dirtyValues
-
-
- def getTableName(cls, store):
- """
- Retrieve a string naming the database table associated with this item
- class.
- """
- return store.getTableName(cls)
- getTableName = classmethod(getTableName)
-
-
- def getTableAlias(cls, store, currentAliases):
- return None
- getTableAlias = classmethod(getTableAlias)
-
-
-
-class _PlaceholderColumn(_ContainableMixin, _ComparisonOperatorMuxer,
- _MatchingOperationMuxer, _OrderingMixin):
- """
- Wrapper for columns from a L{Placeholder} which provides a fully qualified
- name built with a table alias name instead of the underlying column's real
- table name.
- """
- implements(IColumn)
-
- def __init__(self, placeholder, column):
- self.type = placeholder
- self.column = column
-
- def __repr__(self):
- return '<Placeholder %r>' % (self.column,)
-
-
- def __get__(self, inst):
- return self.column.__get__(inst)
-
- def fullyQualifiedName(self):
- return self.column.fullyQualifiedName() + '.<placeholder:%s>' % (
- self.type._placeholderCount,)
-
-
- def compare(self, other, op):
- return compare(self, other, op)
-
-
- def getShortColumnName(self, store):
- return self.column.getShortColumnName(store)
-
-
- def getColumnName(self, store):
- assert self.type._placeholderTableAlias is not None, (
- "Placeholder.getTableAlias() must be called "
- "before Placeholder.attribute.getColumnName()")
-
- return '%s.%s' % (self.type._placeholderTableAlias,
- self.column.getShortColumnName(store))
-
- def infilter(self, pyval, oself, store):
- return self.column.infilter(pyval, oself, store)
-
-
- def outfilter(self, dbval, oself):
- return self.column.outfilter(dbval, oself)
-
-
-
-_placeholderCount = 0
-
-class Placeholder(object):
- """
- Wrap an existing L{Item} type to provide a different name for it.
-
- This can be used to join a table against itself which is useful for
- flattening normalized data. For example, given a schema defined like
- this::
-
- class Tag(Item):
- taggedObject = reference()
- tagName = text()
-
-
- class SomethingElse(Item):
- ...
-
-
- It might be useful to construct a query for instances of SomethingElse
- which have been tagged both with C{"foo"} and C{"bar"}::
-
- t1 = Placeholder(Tag)
- t2 = Placeholder(Tag)
- store.query(SomethingElse, AND(t1.taggedObject == SomethingElse.storeID,
- t1.tagName == u"foo",
- t2.taggedObject == SomethingElse.storeID,
- t2.tagName == u"bar"))
- """
- _placeholderTableAlias = None
-
- def __init__(self, itemClass):
- global _placeholderCount
-
- self._placeholderItemClass = itemClass
- self._placeholderCount = _placeholderCount + 1
- _placeholderCount += 1
-
- self.existingInStore = self._placeholderItemClass.existingInStore
-
-
- def __cmp__(self, other):
- """
- Provide a deterministic sort order between Placeholder instances.
- Those instantiated first will compare as less than those instantiated
- later.
- """
- if isinstance(other, Placeholder):
- return cmp(self._placeholderCount, other._placeholderCount)
- return NotImplemented
-
-
- def __getattr__(self, name):
- if name == 'storeID' or name in dict(self._placeholderItemClass.getSchema()):
- return _PlaceholderColumn(self, getattr(self._placeholderItemClass, name))
- raise AttributeError(name)
-
-
- def getSchema(self):
- # In a MultipleItemQuery, the same table can appear more than
- # once in the "SELECT ..." part of the query, determined by
- # getSchema(). In this case, the correct placeholder names
- # need to be used.
- schema = []
- for (name, atr) in self._placeholderItemClass.getSchema():
- schema.append((
- name,
- _PlaceholderColumn(
- self, getattr(self._placeholderItemClass, name))))
- return schema
-
-
- def getTableName(self, store):
- return self._placeholderItemClass.getTableName(store)
-
-
- def getTableAlias(self, store, currentAliases):
- if self._placeholderTableAlias is None:
- self._placeholderTableAlias = 'placeholder_' + str(len(currentAliases))
- return self._placeholderTableAlias
-
-
-
-_legacyTypes = {} # map (typeName, schemaVersion) to dummy class
-
-def declareLegacyItem(typeName, schemaVersion, attributes, dummyBases=()):
- """
- Generate a dummy subclass of Item that will have the given attributes,
- and the base Item methods, but no methods of its own. This is for use
- with upgrading.
-
- @param typeName: a string, the Axiom TypeName to have attributes for.
- @param schemaVersion: an int, the (old) version of the schema this is a proxy
- for.
- @param attributes: a dict mapping {columnName: attr instance} describing
- the schema of C{typeName} at C{schemaVersion}.
-
- @param dummyBases: a sequence of 4-tuples of (baseTypeName,
- baseSchemaVersion, baseAttributes, baseBases) representing the dummy bases
- of this legacy class.
- """
- if (typeName, schemaVersion) in _legacyTypes:
- return _legacyTypes[typeName, schemaVersion]
- if dummyBases:
- realBases = [declareLegacyItem(*A) for A in dummyBases]
- else:
- realBases = (Item,)
- attributes = attributes.copy()
- attributes['__module__'] = 'item_dummy'
- attributes['__legacy__'] = True
- attributes['typeName'] = typeName
- attributes['schemaVersion'] = schemaVersion
- result = type(str('DummyItem<%s,%d>' % (typeName, schemaVersion)),
- realBases,
- attributes)
- assert result is not None, 'wtf, %r' % (type,)
- _legacyTypes[(typeName, schemaVersion)] = result
- return result
-
-
-
-class _PowerupConnector(Item):
- """
- I am a connector between the store and a powerup.
- """
- typeName = 'axiom_powerup_connector'
-
- powerup = reference()
- item = reference()
- interface = text()
- priority = integer()
-
-
-POWERUP_BEFORE = 1 # Priority for 'high' priority powerups.
-POWERUP_AFTER = -1 # Priority for 'low' priority powerups.
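For readers following the removed `_updateSQL` logic above: the dirty-attribute bookkeeping reduces to building one parameterized UPDATE from a mapping of changed columns, with the storeID bound last. A minimal Python 3 sketch; the function name and the `store_id_column` default are illustrative, not part of Axiom:

```python
def make_update_sql(table, dirty, store_id, store_id_column="oid"):
    """Build a parameterized UPDATE statement from a {column: value}
    mapping of dirty attributes, sorted for deterministic output,
    mirroring the shape of the removed Item._updateSQL method."""
    if not dirty:
        raise RuntimeError("Non-dirty item trying to generate SQL.")
    columns = sorted(dirty)
    values = [dirty[c] for c in columns]
    assignments = ", ".join("%s = ?" % (c,) for c in columns)
    stmt = "UPDATE %s SET %s WHERE %s = ?" % (
        table, assignments, store_id_column)
    # The storeID parameter goes last, matching the trailing '?' above.
    return stmt, values + [store_id]
```

Like the original, this leaves value binding to the database driver rather than interpolating values into the statement text.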
=== removed file 'Axiom/axiom/listversions.py'
--- Axiom/axiom/listversions.py 2008-08-22 14:47:07 +0000
+++ Axiom/axiom/listversions.py 1970-01-01 00:00:00 +0000
@@ -1,142 +0,0 @@
-# -*- test-case-name: axiom.test.test_listversions -*-
-
-from zope.interface import classProvides
-from twisted import plugin
-from twisted.python import usage, versions
-from axiom import iaxiom, item, attributes, plugins
-from axiom.scripts import axiomatic
-from epsilon.extime import Time
-
-
-class ListVersions(usage.Options, axiomatic.AxiomaticSubCommandMixin):
- """
- Command for listing the version history of a store.
- """
-
- classProvides(plugin.IPlugin, iaxiom.IAxiomaticCommand)
- name = "list-version"
- description = "Display software package version history."
-
- def postOptions(self):
- for line in listVersionHistory(self.parent.getStore()):
- print line
-
-
-
-class SystemVersion(item.Item):
- """
- Represents a set of software package versions which, taken together,
- comprise a "system version" of the software that can have affected
- the contents of a Store.
-
- By recording the changes of these versions in the store itself we can
- better reconstruct its history later.
- """
-
- creation = attributes.timestamp(
- doc="When this system version set was recorded.",
- allowNone=False)
-
-
- def __repr__(self):
- return '<SystemVersion %s>' % (self.creation,)
-
-
- def longWindedRepr(self):
- """
- @return: A string representation of this SystemVersion suitable for
- display to the user.
- """
- return '\n\t'.join(
- [repr(self)] + [repr(sv) for sv in self.store.query(
- SoftwareVersion,
- SoftwareVersion.systemVersion == self)])
-
-
-
-class SoftwareVersion(item.Item):
- """
- An Item subclass to map L{twisted.python.versions.Version} objects.
- """
-
- systemVersion = attributes.reference(
- doc="The system version this package version was observed in.",
- allowNone=False)
-
- package = attributes.text(doc="The software package.",
- allowNone=False)
- version = attributes.text(doc="The version string of the software.",
- allowNone=False)
- major = attributes.integer(doc='Major version number.',
- allowNone=False)
- minor = attributes.integer(doc='Minor version number.',
- allowNone=False)
- micro = attributes.integer(doc='Micro version number.',
- allowNone=False)
-
-
- def asVersion(self):
- """
- Convert the version data in this item to a
- L{twisted.python.versions.Version}.
- """
- return versions.Version(self.package, self.major, self.minor, self.micro)
-
-
- def __repr__(self):
- return '<SoftwareVersion %s: %s>' % (self.package, self.version)
-
-
-
-def makeSoftwareVersion(store, version, systemVersion):
- """
- Return the SoftwareVersion object from store corresponding to the
- version object, creating it if it doesn't already exist.
- """
- return store.findOrCreate(SoftwareVersion,
- systemVersion=systemVersion,
- package=unicode(version.package),
- version=unicode(version.short()),
- major=version.major,
- minor=version.minor,
- micro=version.micro)
-
-
-
-def listVersionHistory(store):
- """
- List the software package version history of store.
- """
- q = store.query(SystemVersion, sort=SystemVersion.creation.descending)
- return [sv.longWindedRepr() for sv in q]
-
-
-def getSystemVersions(getPlugins=plugin.getPlugins):
- """
- Collect all the version plugins and extract their L{Version} objects.
- """
- return list(getPlugins(iaxiom.IVersion, plugins))
-
-
-def checkSystemVersion(s, versions=None):
- """
- Check if the current version is different from the previously recorded
- version. If it is, or if there is no previously recorded version,
- create a version matching the current config.
- """
-
- if versions is None:
- versions = getSystemVersions()
-
- currentVersionMap = dict([(v.package, v) for v in versions])
- mostRecentSystemVersion = s.findFirst(SystemVersion,
- sort=SystemVersion.creation.descending)
- mostRecentVersionMap = dict([(v.package, v.asVersion()) for v in
- s.query(SoftwareVersion,
- (SoftwareVersion.systemVersion ==
- mostRecentSystemVersion))])
-
- if mostRecentVersionMap != currentVersionMap:
- currentSystemVersion = SystemVersion(store=s, creation=Time())
- for v in currentVersionMap.itervalues():
- makeSoftwareVersion(s, v, currentSystemVersion)
=== removed directory 'Axiom/axiom/plugins'
=== removed file 'Axiom/axiom/plugins/__init__.py'
--- Axiom/axiom/plugins/__init__.py 2008-08-07 14:03:07 +0000
+++ Axiom/axiom/plugins/__init__.py 1970-01-01 00:00:00 +0000
@@ -1,12 +0,0 @@
-# Copyright (c) 2008 Divmod. See LICENSE for details.
-
-"""
-Package for plugins for interfaces in Axiom.
-"""
-
-from epsilon.hotfix import require
-require('twisted', 'plugin_package_paths')
-
-from twisted.plugin import pluginPackagePaths
-__path__.extend(pluginPackagePaths(__name__))
-__all__ = []
=== removed file 'Axiom/axiom/plugins/axiom_plugins.py'
--- Axiom/axiom/plugins/axiom_plugins.py 2013-08-14 16:07:45 +0000
+++ Axiom/axiom/plugins/axiom_plugins.py 1970-01-01 00:00:00 +0000
@@ -1,299 +0,0 @@
-# Copyright (c) 2008 Divmod. See LICENSE for details.
-
-"""
-Plugins provided by Axiom for Axiom.
-"""
-
-import getpass
-import code, os, traceback, sys
-try:
- import readline
-except ImportError:
- readline = None
-
-from zope.interface import directlyProvides
-
-from twisted.python import usage, filepath, log
-from twisted.python.reflect import qual
-from twisted.plugin import IPlugin
-
-from epsilon.hotfix import require
-require('twisted', 'filepath_copyTo')
-
-import axiom
-from axiom import store, attributes, userbase, dependency, errors
-from axiom.substore import SubStore
-from axiom.scripts import axiomatic
-from axiom.listversions import ListVersions
-from axiom import version
-from axiom.iaxiom import IVersion
-
-directlyProvides(version, IPlugin, IVersion)
-
-#placate pyflakes
-ListVersions
-
-
-
-class Upgrade(axiomatic.AxiomaticCommand):
- name = 'upgrade'
- description = 'Synchronously upgrade an Axiom store and substores'
-
- optParameters = [
- ('count', 'n', '100', 'Number of upgrades to perform per transaction')]
-
- errorMessageFormat = 'Error upgrading item (with typeName=%s and storeID=%d) from version %d to %d.'
-
- def upgradeEverything(self, store):
- """
- Upgrade all the items in C{store}.
- """
- for dummy in store._upgradeManager.upgradeBatch(self.count):
- pass
-
-
- def upgradeStore(self, store):
- """
- Recursively upgrade C{store}.
- """
- self.upgradeEverything(store)
-
- for substore in store.query(SubStore):
- self.upgradeStore(substore.open())
-
- def perform(self, store, count):
- """
- Upgrade C{store} performing C{count} upgrades per transaction.
-
- Also, catch any exceptions and print out something useful.
- """
- self.count = count
-
- try:
- self.upgradeStore(store)
- print 'Upgrade complete'
- except errors.ItemUpgradeError, e:
- print 'Upgrader error:'
- e.originalFailure.printTraceback(file=sys.stdout)
- print self.errorMessageFormat % (
- e.oldType.typeName, e.storeID, e.oldType.schemaVersion,
- e.newType.schemaVersion)
-
-
- def postOptions(self):
- try:
- count = int(self['count'])
- except ValueError:
- raise usage.UsageError('count must be an integer')
-
- siteStore = self.parent.getStore()
- self.perform(siteStore, count)
-
-
-
-class AxiomConsole(code.InteractiveConsole):
- def runcode(self, code):
- """
- Override L{code.InteractiveConsole.runcode} to run the code in a
- transaction unless the local C{autocommit} is currently set to a true
- value.
- """
- if not self.locals.get('autocommit', None):
- return self.locals['db'].transact(code.InteractiveConsole.runcode, self, code)
- return code.InteractiveConsole.runcode(self, code)
-
-
-
-class Browse(axiomatic.AxiomaticCommand):
- synopsis = "[options]"
-
- name = 'browse'
- description = 'Interact with an Axiom store.'
-
- optParameters = [
- ('history-file', 'h', '~/.axiomatic-browser-history',
- 'Name of the file to which to save input history.'),
- ]
-
- optFlags = [
- ('debug', 'b', 'Open Store in debug mode.'),
- ]
-
- def postOptions(self):
- interp = code.InteractiveConsole(self.namespace(), '<axiom browser>')
- historyFile = os.path.expanduser(self['history-file'])
- if readline is not None and os.path.exists(historyFile):
- readline.read_history_file(historyFile)
- try:
- interp.interact("%s. Autocommit is off." % (str(axiom.version),))
- finally:
- if readline is not None:
- readline.write_history_file(historyFile)
-
-
- def namespace(self):
- """
- Return a dictionary representing the namespace which should be
- available to the user.
- """
- self._ns = {
- 'db': self.store,
- 'store': store,
- 'autocommit': False,
- }
- return self._ns
-
-
-
-class UserbaseMixin:
- def installOn(self, other):
- # XXX check installation on other, not store
- for ls in self.store.query(userbase.LoginSystem):
- raise usage.UsageError("UserBase already installed")
- else:
- ls = userbase.LoginSystem(store=self.store)
- dependency.installOn(ls, other)
- return ls
-
-
-
-class Install(axiomatic.AxiomaticSubCommand, UserbaseMixin):
- def postOptions(self):
- self.installOn(self.store)
-
-
-
-class Create(axiomatic.AxiomaticSubCommand, UserbaseMixin):
- synopsis = "<username> <domain> [password]"
-
- def parseArgs(self, username, domain, password=None):
- self['username'] = self.decodeCommandLine(username)
- self['domain'] = self.decodeCommandLine(domain)
- self['password'] = password
-
-
- def postOptions(self):
- msg = 'Enter new AXIOM password: '
- while not self['password']:
- password = getpass.getpass(msg)
- second = getpass.getpass('Repeat to verify: ')
- if password == second:
- self['password'] = password
- else:
- msg = 'Passwords do not match. Enter new AXIOM password: '
- self.addAccount(
- self.store, self['username'], self['domain'], self['password'])
-
-
- def addAccount(self, siteStore, username, domain, password):
- """
- Create a new account in the given store.
-
- @param siteStore: A site Store to which login credentials will be
- added.
- @param username: Local part of the username for the credentials to add.
- @param domain: Domain part of the username for the credentials to add.
- @param password: Password for the credentials to add.
- @rtype: L{LoginAccount}
- @return: The added account.
- """
- for ls in siteStore.query(userbase.LoginSystem):
- break
- else:
- ls = self.installOn(siteStore)
- try:
- acc = ls.addAccount(username, domain, password)
- except userbase.DuplicateUser:
- raise usage.UsageError("An account by that name already exists.")
- return acc
-
-
-
-class Disable(axiomatic.AxiomaticSubCommand):
- synopsis = "<username> <domain>"
-
- def parseArgs(self, username, domain):
- self['username'] = self.decodeCommandLine(username)
- self['domain'] = self.decodeCommandLine(domain)
-
- def postOptions(self):
- for acc in self.store.query(userbase.LoginAccount,
- attributes.AND(userbase.LoginAccount.username == self['username'],
- userbase.LoginAccount.domain == self['domain'])):
- if acc.disabled:
- raise usage.UsageError("That account is already disabled.")
- else:
- acc.disabled = True
- break
- else:
- raise usage.UsageError("No account by that name exists.")
-
-
-
-class List(axiomatic.AxiomaticSubCommand):
- def postOptions(self):
- acc = None
- for acc in self.store.query(userbase.LoginMethod):
- if acc.domain is None:
- print acc.localpart,
- else:
- print acc.localpart + '@' + acc.domain,
- if acc.account.disabled:
- print '[DISABLED]'
- else:
- print
- if acc is None:
- print 'No accounts'
-
-
-
-class UserBaseCommand(axiomatic.AxiomaticCommand):
- name = 'userbase'
- description = 'LoginSystem introspection and manipulation.'
-
- subCommands = [
- ('install', None, Install, "Install UserBase on an Axiom database"),
- ('create', None, Create, "Create a new user"),
- ('disable', None, Disable, "Disable an existing user"),
- ('list', None, List, "List users in an Axiom database"),
- ]
-
- def getStore(self):
- return self.parent.getStore()
-
-
-
-class Extract(axiomatic.AxiomaticCommand):
- name = 'extract-user'
- description = 'Remove an account from the login system, moving its associated database to the filesystem.'
- optParameters = [
- ('address', 'a', None, 'localpart@domain-format identifier of the user store to extract.'),
- ('destination', 'd', None, 'Directory into which to extract the user store.')]
-
- def extractSubStore(self, localpart, domain, destinationPath):
- siteStore = self.parent.getStore()
- la = siteStore.findFirst(
- userbase.LoginMethod,
- attributes.AND(userbase.LoginMethod.localpart == localpart,
- userbase.LoginMethod.domain == domain)).account
- userbase.extractUserStore(la, destinationPath)
-
-
- def postOptions(self):
- localpart, domain = self.decodeCommandLine(self['address']).split('@', 1)
- destinationPath = filepath.FilePath(
- self.decodeCommandLine(self['destination'])).child(localpart + '@' + domain + '.axiom')
- self.extractSubStore(localpart, domain, destinationPath)
-
-
-
-class Insert(axiomatic.AxiomaticCommand):
- name = 'insert-user'
- description = 'Insert a user store, such as one extracted with "extract-user", into a site store and login system.'
- optParameters = [
- ('userstore', 'u', None, 'Path to user store to be inserted.')
- ]
-
- def postOptions(self):
- userbase.insertUserStore(self.parent.getStore(),
- filepath.FilePath(self.decodeCommandLine(self['userstore'])))
=== removed file 'Axiom/axiom/queryutil.py'
--- Axiom/axiom/queryutil.py 2009-07-06 11:21:31 +0000
+++ Axiom/axiom/queryutil.py 1970-01-01 00:00:00 +0000
@@ -1,154 +0,0 @@
-# -*- test-case-name: axiom.test.test_queryutil -*-
-
-import operator
-
-from axiom.attributes import AND, OR
-
-def contains(startAttribute,
- endAttribute,
- value):
- """
- Return an L{axiom.iaxiom.IComparison} (an object that can be
- passed as the 'comparison' argument to Store.query/.sum/.count)
- which will constrain a query against 2 attributes for ranges which
- contain the given argument. The range is half-open.
- """
- return AND(
- startAttribute <= value,
- value < endAttribute)
-
-
-def overlapping(startAttribute, # X
- endAttribute, # Y
- startValue, # A
- endValue, # B
- ):
- """
- Return an L{axiom.iaxiom.IComparison} (an object that can be passed as the
- 'comparison' argument to Store.query/.sum/.count) which will constrain a
- query against 2 attributes for ranges which overlap with the given
- arguments.
-
- For a database with Items of class O which represent values in this
- configuration::
-
- X Y
- (a) (b)
- |-------------------|
- (c) (d)
- |--------| (e) (f)
- |--------|
-
- (g) (h)
- |---| (i) (j)
- |------|
-
- (k) (l)
- |-------------------------------------|
-
- (a) (l)
- |-----------------------------|
- (c) (b)
- |------------------------|
-
- (c) (a)
- |----|
- (b) (l)
- |---------|
-
- The query::
-
- myStore.query(
- O,
- overlapping(O.X, O.Y,
- a, b))
-
- Will return a generator of Items of class O which represent segments a-b,
- c-d, e-f, k-l, a-l, c-b, c-a and b-l, but NOT segments g-h or i-j.
-
- (NOTE: If you want to pass attributes of different classes for
- startAttribute and endAttribute, read the implementation of this method to
- discover the additional join clauses required. This may be eliminated some
- day so for now, consider this method undefined over multiple classes.)
-
- In the database where this query is run, for an item N, all values of
- N.startAttribute must be less than N.endAttribute.
-
- startValue must be less than or equal to endValue.
- """
- assert startValue <= endValue
-
- return OR(
- AND(startAttribute >= startValue,
- startAttribute <= endValue),
- AND(endAttribute >= startValue,
- endAttribute <= endValue),
- AND(startAttribute <= startValue,
- endAttribute >= endValue)
- )
-
-def _tupleCompare(tuple1, ineq, tuple2,
- eq=lambda a,b: (a==b),
- ander=AND,
- orer=OR):
- """
- Compare two 'in-database tuples'. Useful when sorting by a compound key
- and slicing into the middle of that query.
- """
-
- orholder = []
- for limit in range(len(tuple1)):
- eqconstraint = [
- eq(elem1, elem2) for elem1, elem2 in zip(tuple1, tuple2)[:limit]]
- ineqconstraint = ineq(tuple1[limit], tuple2[limit])
- orholder.append(ander(*(eqconstraint + [ineqconstraint])))
- return orer(*orholder)
-
-def _tupleLessThan(tuple1, tuple2):
- return _tupleCompare(tuple1, operator.lt, tuple2)
-
-def _tupleGreaterThan(tuple1, tuple2):
- return _tupleCompare(tuple1, operator.gt, tuple2)
-
-class AttributeTuple(object):
- def __init__(self, *attributes):
- self.attributes = attributes
-
- def __iter__(self):
- return iter(self.attributes)
-
- def __eq__(self, other):
- if not isinstance(other, (AttributeTuple, tuple, list)):
- return NotImplemented
- return AND(*[
- myAttr == otherAttr
- for (myAttr, otherAttr)
- in zip(self, other)])
-
- def __ne__(self, other):
- if not isinstance(other, (AttributeTuple, tuple, list)):
- return NotImplemented
- return OR(*[
- myAttr != otherAttr
- for (myAttr, otherAttr)
- in zip(self, other)])
-
- def __gt__(self, other):
- if not isinstance(other, (AttributeTuple, tuple, list)):
- return NotImplemented
- return _tupleGreaterThan(tuple(iter(self)), other)
-
- def __lt__(self, other):
- if not isinstance(other, (AttributeTuple, tuple, list)):
- return NotImplemented
- return _tupleLessThan(tuple(iter(self)), other)
-
- def __ge__(self, other):
- if not isinstance(other, (AttributeTuple, tuple, list)):
- return NotImplemented
- return OR(self > other, self == other)
-
- def __le__(self, other):
- if not isinstance(other, (AttributeTuple, tuple, list)):
- return NotImplemented
- return OR(self < other, self == other)
=== removed file 'Axiom/axiom/scheduler.py'
--- Axiom/axiom/scheduler.py 2010-07-17 17:41:30 +0000
+++ Axiom/axiom/scheduler.py 1970-01-01 00:00:00 +0000
@@ -1,557 +0,0 @@
-# -*- test-case-name: axiom.test.test_scheduler -*-
-
-"""
-Timed event scheduling for Axiom databases.
-
-With this module, applications can schedule an L{Item} to have its C{run} method
-called at a particular point in the future. This call will happen even if the
-process which initially schedules it exits and the database is later re-opened
-by another process (of course, if the scheduled time comes and goes while no
-process is using the database, then the call will be delayed until some process
-opens the database and starts its services).
-
-This module contains two implementations of the L{axiom.iaxiom.IScheduler}
-interface, one for site stores and one for sub-stores. Items can only be
- scheduled using an L{IScheduler} implementation from the store containing the
-item. This means a typical way to schedule an item to be run is::
-
- IScheduler(item.store).schedule(item, when)
-
-The scheduler service can also be retrieved from the site store's service
-collection by name::
-
- IServiceCollection(siteStore).getServiceNamed(SITE_SCHEDULER)
-"""
-
-import warnings
-
-from zope.interface import implements
-
-from twisted.internet import reactor
-
-from twisted.application.service import IService, Service
-from twisted.python import log, failure
-
-from epsilon.extime import Time
-
-from axiom.iaxiom import IScheduler
-from axiom.item import Item, declareLegacyItem
-from axiom.attributes import AND, timestamp, reference, integer, inmemory, bytes
-from axiom.dependency import uninstallFrom
-from axiom.upgrade import registerUpgrader
-from axiom.substore import SubStore
-
-VERBOSE = False
-
-SITE_SCHEDULER = u"Site Scheduler"
-
-
-class TimedEventFailureLog(Item):
- typeName = 'timed_event_failure_log'
- schemaVersion = 1
-
- desiredTime = timestamp()
- actualTime = timestamp()
-
- runnable = reference()
- traceback = bytes()
-
-
-class TimedEvent(Item):
- typeName = 'timed_event'
- schemaVersion = 1
-
- time = timestamp(indexed=True)
- runnable = reference()
-
- running = inmemory(doc='True if this event is currently running.')
-
- def activate(self):
- self.running = False
-
-
- def _rescheduleFromRun(self, newTime):
- """
- Schedule this event to be run at the indicated time, or if the
- indicated time is None, delete this event.
- """
- if newTime is None:
- self.deleteFromStore()
- else:
- self.time = newTime
-
-
- def invokeRunnable(self):
- """
- Run my runnable, and reschedule or delete myself based on its result.
- Must be run in a transaction.
- """
- runnable = self.runnable
- if runnable is None:
- self.deleteFromStore()
- else:
- try:
- self.running = True
- newTime = runnable.run()
- finally:
- self.running = False
- self._rescheduleFromRun(newTime)
-
-
- def handleError(self, now, failureObj):
- """ An error occurred running my runnable. Check my runnable for an
- error-handling method called 'timedEventErrorHandler' that will take
-        the given failure as an argument, and execute that if available;
- otherwise, create a TimedEventFailureLog with information about what
- happened to this event.
-
- Must be run in a transaction.
- """
- errorHandler = getattr(self.runnable, 'timedEventErrorHandler', None)
- if errorHandler is not None:
- self._rescheduleFromRun(errorHandler(self, failureObj))
- else:
- self._defaultErrorHandler(now, failureObj)
-
-
- def _defaultErrorHandler(self, now, failureObj):
- TimedEventFailureLog(store=self.store,
- desiredTime=self.time,
- actualTime=now,
- runnable=self.runnable,
- traceback=failureObj.getTraceback())
- self.deleteFromStore()
-
-
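[Reviewer note: the `run()`/reschedule contract implemented by `TimedEvent.invokeRunnable` above can be summarized stand-alone. This is an illustrative sketch, not Axiom code; all names here (`FakeEvent`, `EveryFiveSeconds`, `OneShot`) are invented for the example. A runnable's `run()` returns either the next time to fire, or `None` to indicate the event is one-shot and should be removed.]

```python
class FakeEvent:
    """Toy model of a TimedEvent-style row: a time plus a runnable."""
    def __init__(self, runnable, time):
        self.runnable = runnable
        self.time = time
        self.deleted = False

    def invoke(self):
        new_time = self.runnable.run()
        if new_time is None:
            self.deleted = True   # one-shot: drop the event
        else:
            self.time = new_time  # recurring: reschedule at the returned time

class EveryFiveSeconds:
    """A recurring runnable: returns the next firing time."""
    def __init__(self, now):
        self.now = now
    def run(self):
        self.now += 5
        return self.now

class OneShot:
    """A one-shot runnable: returns None to be unscheduled."""
    def run(self):
        return None
```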
-
-class _WackyControlFlow(Exception):
- def __init__(self, eventObject, failureObject):
- Exception.__init__(self, "User code failed during timed event")
- self.eventObject = eventObject
- self.failureObject = failureObject
-
-
-MAX_WORK_PER_TICK = 10
-
-class SchedulerMixin:
- def _oneTick(self, now):
- theEvent = self._getNextEvent(now)
- if theEvent is None:
- return False
- try:
- theEvent.invokeRunnable()
- except:
- raise _WackyControlFlow(theEvent, failure.Failure())
- self.lastEventAt = now
- return True
-
-
- def _getNextEvent(self, now):
- # o/` gonna party like it's 1984 o/`
- theEventL = list(self.store.query(TimedEvent,
- TimedEvent.time <= now,
- sort=TimedEvent.time.ascending,
- limit=1))
- if theEventL:
- return theEventL[0]
-
-
- def tick(self):
- now = self.now()
- self.nextEventAt = None
- workBeingDone = True
- workUnitsPerformed = 0
- errors = 0
- while workBeingDone and workUnitsPerformed < MAX_WORK_PER_TICK:
- try:
- workBeingDone = self.store.transact(self._oneTick, now)
- except _WackyControlFlow, wcf:
- self.store.transact(wcf.eventObject.handleError, now, wcf.failureObject)
- log.err(wcf.failureObject)
- errors += 1
- workBeingDone = True
- if workBeingDone:
- workUnitsPerformed += 1
- x = list(self.store.query(TimedEvent, sort=TimedEvent.time.ascending, limit=1))
- if x:
- self._transientSchedule(x[0].time, now)
- if errors or VERBOSE:
- log.msg("The scheduler ran %(eventCount)s events%(errors)s." % dict(
- eventCount=workUnitsPerformed,
- errors=(errors and (" (with %d errors)" % (errors,))) or ''))
-
-
- def schedule(self, runnable, when):
- TimedEvent(store=self.store, time=when, runnable=runnable)
- self._transientSchedule(when, self.now())
-
-
- def reschedule(self, runnable, fromWhen, toWhen):
- for evt in self.store.query(TimedEvent,
- AND(TimedEvent.time == fromWhen,
- TimedEvent.runnable == runnable)):
- evt.time = toWhen
- self._transientSchedule(toWhen, self.now())
- break
- else:
- raise ValueError("%r is not scheduled to run at %r" % (runnable, fromWhen))
-
-
- def unscheduleFirst(self, runnable):
- """
-        Remove the given item from the schedule.
-
- If runnable is scheduled to run multiple times, only the temporally first
- is removed.
- """
- for evt in self.store.query(TimedEvent, TimedEvent.runnable == runnable, sort=TimedEvent.time.ascending):
- evt.deleteFromStore()
- break
-
-
- def unscheduleAll(self, runnable):
- for evt in self.store.query(TimedEvent, TimedEvent.runnable == runnable):
- evt.deleteFromStore()
-
-
- def scheduledTimes(self, runnable):
- """
- Return an iterable of the times at which the given item is scheduled to
- run.
- """
- events = self.store.query(
- TimedEvent, TimedEvent.runnable == runnable)
- return (event.time for event in events if not event.running)
-
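[Reviewer note: the `tick` loop above bounds how much work one pass may do via `MAX_WORK_PER_TICK`, so a backlog of due events cannot starve the reactor. A minimal stand-alone model of that batching rule, with invented names and no transactions:]

```python
MAX_WORK_PER_TICK = 10

def tick(due_events, run_event):
    """Run up to MAX_WORK_PER_TICK events from the front of a sorted queue.

    Returns how many events were actually performed; leftovers stay queued
    for the next tick.
    """
    performed = 0
    while due_events and performed < MAX_WORK_PER_TICK:
        run_event(due_events.pop(0))
        performed += 1
    return performed
```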
-_EPSILON = 1e-20 # A very small amount of time.
-
-
-
-class _SiteScheduler(object, Service, SchedulerMixin):
- """
- Adapter from a site store to L{IScheduler}.
- """
- implements(IScheduler)
-
- timer = None
- callLater = reactor.callLater
- now = Time
-
- def __init__(self, store):
- self.store = store
- self.setName(SITE_SCHEDULER)
-
-
- def startService(self):
- """
- Start calling persistent timed events whose time has come.
- """
- super(_SiteScheduler, self).startService()
- self._transientSchedule(self.now(), self.now())
-
-
- def stopService(self):
- """
- Stop calling persistent timed events.
- """
- super(_SiteScheduler, self).stopService()
- if self.timer is not None:
- self.timer.cancel()
- self.timer = None
-
-
- def tick(self):
- self.timer = None
- return super(_SiteScheduler, self).tick()
-
-
- def _transientSchedule(self, when, now):
- """
- If the service is currently running, schedule a tick to happen no
- later than C{when}.
-
- @param when: The time at which to tick.
- @type when: L{epsilon.extime.Time}
-
- @param now: The current time.
- @type now: L{epsilon.extime.Time}
- """
- if not self.running:
- return
- if self.timer is not None:
- if self.timer.getTime() < when.asPOSIXTimestamp():
- return
- self.timer.cancel()
- delay = when.asPOSIXTimestamp() - now.asPOSIXTimestamp()
-
- # reactor.callLater allows only positive delay values. The scheduler
- # may want to have scheduled things in the past and that's OK, since we
- # are dealing with Time() instances it's impossible to predict what
- # they are relative to the current time from user code anyway.
- delay = max(_EPSILON, delay)
- self.timer = self.callLater(delay, self.tick)
- self.nextEventAt = when
-
-
-
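[Reviewer note: the `_EPSILON` clamp in `_transientSchedule` above exists because `reactor.callLater`-style APIs require a positive delay, while persisted events may be past due. The rule in isolation, as a hedged sketch (`clamped_delay` is an invented name):]

```python
_EPSILON = 1e-20  # a very small positive delay

def clamped_delay(when_ts, now_ts):
    """Seconds to wait before ticking; past-due events fire 'immediately'
    via a tiny positive epsilon rather than a forbidden negative delay."""
    return max(_EPSILON, when_ts - now_ts)
```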
-class _UserScheduler(object, Service, SchedulerMixin):
- """
- Adapter from a non-site store to L{IScheduler}.
- """
- implements(IScheduler)
-
- def __init__(self, store):
- self.store = store
-
-
- def now(self):
- """
- Report the current time, as reported by the parent's scheduler.
- """
- return IScheduler(self.store.parent).now()
-
-
- def _transientSchedule(self, when, now):
- """
- If this service's store is attached to its parent, ask the parent to
- schedule this substore to tick at the given time.
-
- @param when: The time at which to tick.
- @type when: L{epsilon.extime.Time}
-
- @param now: Present for signature compatibility with
- L{_SiteScheduler._transientSchedule}, but ignored otherwise.
- """
- if self.store.parent is not None:
- subStore = self.store.parent.getItemByID(self.store.idInParent)
- hook = self.store.parent.findOrCreate(
- _SubSchedulerParentHook,
- subStore=subStore)
- hook._schedule(when)
-
-
- def migrateDown(self):
- """
- Remove the components in the site store for this SubScheduler.
- """
- subStore = self.store.parent.getItemByID(self.store.idInParent)
- ssph = self.store.parent.findUnique(
- _SubSchedulerParentHook,
- _SubSchedulerParentHook.subStore == subStore,
- default=None)
- if ssph is not None:
- te = self.store.parent.findUnique(TimedEvent,
- TimedEvent.runnable == ssph,
- default=None)
- if te is not None:
- te.deleteFromStore()
- ssph.deleteFromStore()
-
-
- def migrateUp(self):
- """
- Recreate the hooks in the site store to trigger this SubScheduler.
- """
- te = self.store.findFirst(TimedEvent, sort=TimedEvent.time.descending)
- if te is not None:
- self._transientSchedule(te.time, None)
-
-
-
-class _SchedulerCompatMixin(object):
- """
- Backwards compatibility helper for L{Scheduler} and L{SubScheduler}.
-
- This mixin provides all the attributes from L{IScheduler}, but provides
- them by adapting the L{Store} the item is in to L{IScheduler} and
- getting them from the resulting object. Primarily in support of test
- code, it also supports rebinding those attributes by rebinding them on
- the L{IScheduler} powerup.
-
- @see: L{IScheduler}
- """
- implements(IScheduler)
-
- def forwardToReal(name):
- def get(self):
- return getattr(IScheduler(self.store), name)
- def set(self, value):
- setattr(IScheduler(self.store), name, value)
- return property(get, set)
-
- now = forwardToReal("now")
- tick = forwardToReal("tick")
- schedule = forwardToReal("schedule")
- reschedule = forwardToReal("reschedule")
- unschedule = forwardToReal("unschedule")
- unscheduleAll = forwardToReal("unscheduleAll")
- scheduledTimes = forwardToReal("scheduledTimes")
-
-
- def activate(self):
- """
- Whenever L{Scheduler} or L{SubScheduler} is created, either newly or
- when loaded from a database, emit a deprecation warning referring
- people to L{IScheduler}.
- """
- # This is unfortunate. Perhaps it is the best thing which works (it is
- # the first I found). -exarkun
- if '_axiom_memory_dummy' in vars(self):
- stacklevel = 7
- else:
- stacklevel = 5
- warnings.warn(
- self.__class__.__name__ + " is deprecated since Axiom 0.5.32. "
- "Just adapt stores to IScheduler.",
- category=PendingDeprecationWarning,
- stacklevel=stacklevel)
-
-
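[Reviewer note: the `forwardToReal` property factory in `_SchedulerCompatMixin` above is a generic delegation pattern worth seeing stand-alone. In this illustrative sketch the adaptation step (`IScheduler(self.store)`) is replaced by a plain `real` attribute; `Real` and `CompatWrapper` are invented names.]

```python
def forward_to_real(name):
    """Build a property that forwards get/set to self.real's attribute."""
    def get(self):
        return getattr(self.real, name)
    def set(self, value):
        setattr(self.real, name, value)
    return property(get, set)

class Real:
    now = "real-now"

class CompatWrapper:
    """Deprecated facade: every access lands on the real object."""
    def __init__(self, real):
        self.real = real
    now = forward_to_real("now")
```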
-
-class Scheduler(Item, _SchedulerCompatMixin):
- """
- Track and execute persistent timed events for a I{site} store.
-
- This is deprecated and present only for backwards compatibility. Adapt
- the store to L{IScheduler} instead.
- """
- implements(IService)
-
- typeName = 'axiom_scheduler'
- schemaVersion = 2
-
- dummy = integer()
-
- def activate(self):
- _SchedulerCompatMixin.activate(self)
-
-
- def setServiceParent(self, parent):
- """
- L{Scheduler} is no longer an L{IService}, but still provides this
- method as a no-op in case an instance which was still an L{IService}
- powerup is loaded (in which case it will be used like a service
- once).
- """
-
-
-
-declareLegacyItem(
- Scheduler.typeName, 1,
- dict(eventsRun=integer(default=0),
- lastEventAt=timestamp(),
- nextEventAt=timestamp()))
-
-
-def scheduler1to2(old):
- new = old.upgradeVersion(Scheduler.typeName, 1, 2)
- new.store.powerDown(new, IService)
- new.store.powerDown(new, IScheduler)
- return new
-
-registerUpgrader(scheduler1to2, Scheduler.typeName, 1, 2)
-
-
-class _SubSchedulerParentHook(Item):
- schemaVersion = 4
- typeName = 'axiom_subscheduler_parent_hook'
-
- subStore = reference(
- doc="""
- The L{SubStore} for which this scheduling hook exists.
- """, reftype=SubStore)
-
- def run(self):
- """
- Tick our C{subStore}'s L{SubScheduler}.
- """
- IScheduler(self.subStore).tick()
-
-
- def _schedule(self, when):
- """
- Ensure that this hook is scheduled to run at or before C{when}.
- """
- sched = IScheduler(self.store)
- for scheduledAt in sched.scheduledTimes(self):
- if when < scheduledAt:
- sched.reschedule(self, scheduledAt, when)
- break
- else:
- sched.schedule(self, when)
-
-
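[Reviewer note: `_SubSchedulerParentHook._schedule` above uses a `for`/`else` to either pull an existing later firing forward to `when`, or schedule a fresh one if no firing was moved. The same control flow over a plain list of times, purely as an illustration (`ensure_scheduled` is an invented name):]

```python
def ensure_scheduled(schedule, when):
    """Mirror _schedule's for/else: reschedule the first firing later than
    `when` down to `when`; if none was moved, add a new firing at `when`."""
    for i, at in enumerate(schedule):
        if when < at:
            schedule[i] = when  # pull the later firing forward
            break
    else:
        schedule.append(when)   # no firing was rescheduled; add one
    return schedule
```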
-def upgradeParentHook1to2(oldHook):
- """
- Add the scheduler attribute to the given L{_SubSchedulerParentHook}.
- """
- newHook = oldHook.upgradeVersion(
- oldHook.typeName, 1, 2,
- loginAccount=oldHook.loginAccount,
- scheduledAt=oldHook.scheduledAt,
- scheduler=oldHook.store.findFirst(Scheduler))
- return newHook
-
-registerUpgrader(upgradeParentHook1to2, _SubSchedulerParentHook.typeName, 1, 2)
-
-declareLegacyItem(
- _SubSchedulerParentHook.typeName, 2,
- dict(loginAccount=reference(),
- scheduledAt=timestamp(default=None),
- scheduler=reference()))
-
-def upgradeParentHook2to3(old):
- """
- Copy the C{loginAccount} attribute, but drop the others.
- """
- return old.upgradeVersion(
- old.typeName, 2, 3,
- loginAccount=old.loginAccount)
-
-registerUpgrader(upgradeParentHook2to3, _SubSchedulerParentHook.typeName, 2, 3)
-
-declareLegacyItem(
- _SubSchedulerParentHook.typeName, 3,
- dict(loginAccount=reference(),
- scheduler=reference()))
-
-def upgradeParentHook3to4(old):
- """
- Copy C{loginAccount} to C{subStore} and remove the installation marker.
- """
- new = old.upgradeVersion(
- old.typeName, 3, 4, subStore=old.loginAccount)
- uninstallFrom(new, new.store)
- return new
-
-
-registerUpgrader(upgradeParentHook3to4, _SubSchedulerParentHook.typeName, 3, 4)
-
-
-class SubScheduler(Item, _SchedulerCompatMixin):
- """
- Track and execute persistent timed events for a substore.
-
- This is deprecated and present only for backwards compatibility. Adapt
- the store to L{IScheduler} instead.
- """
- schemaVersion = 2
- typeName = 'axiom_subscheduler'
-
- dummy = integer()
-
- def activate(self):
- _SchedulerCompatMixin.activate(self)
-
-
-def subscheduler1to2(old):
- new = old.upgradeVersion(SubScheduler.typeName, 1, 2)
- try:
- new.store.powerDown(new, IScheduler)
- except ValueError:
- # Someone might have created a SubScheduler but failed to power it
- # up. Fine.
- pass
- return new
-
-registerUpgrader(subscheduler1to2, SubScheduler.typeName, 1, 2)
=== removed directory 'Axiom/axiom/scripts'
=== removed file 'Axiom/axiom/scripts/__init__.py'
=== removed file 'Axiom/axiom/scripts/axiomatic.py'
--- Axiom/axiom/scripts/axiomatic.py 2014-01-22 15:22:31 +0000
+++ Axiom/axiom/scripts/axiomatic.py 1970-01-01 00:00:00 +0000
@@ -1,197 +0,0 @@
-# -*- test-case-name: axiomatic.test.test_axiomatic -*-
-from zope.interface import alsoProvides, noLongerProvides
-
-import os
-import sys
-import glob
-import errno
-import signal
-
-from twisted.plugin import IPlugin, getPlugins
-from twisted.python import usage
-from twisted.python.runtime import platform
-from twisted.scripts import twistd
-
-from axiom.iaxiom import IAxiomaticCommand
-
-class AxiomaticSubCommandMixin(object):
- store = property(lambda self: self.parent.getStore())
-
- def decodeCommandLine(self, cmdline):
- """Turn a byte string from the command line into a unicode string.
- """
- codec = getattr(sys.stdin, 'encoding', None) or sys.getdefaultencoding()
- return unicode(cmdline, codec)
-
-
-
-class _AxiomaticCommandMeta(type):
- """
- Metaclass for L{AxiomaticCommand}.
-
- This serves to make subclasses provide L{IPlugin} and L{IAxiomaticCommand}.
- """
- def __new__(cls, name, bases, attrs):
- newcls = type.__new__(cls, name, bases, attrs)
- alsoProvides(newcls, IPlugin, IAxiomaticCommand)
- return newcls
-
-
-
-class AxiomaticSubCommand(usage.Options, AxiomaticSubCommandMixin):
- """
- L{twisted.python.usage.Options} subclass for Axiomatic sub commands.
- """
-
-
-
-class AxiomaticCommand(usage.Options, AxiomaticSubCommandMixin):
- """
- L{twisted.python.usage.Options} subclass for Axiomatic plugin commands.
-
- Subclass this to have your class automatically provide the necessary
- interfaces to be picked up by axiomatic.
- """
- __metaclass__ = _AxiomaticCommandMeta
-
-noLongerProvides(AxiomaticCommand, IPlugin)
-noLongerProvides(AxiomaticCommand, IAxiomaticCommand)
-
-
-
-class PIDMixin:
-
- def _sendSignal(self, signal):
- if platform.isWindows():
- raise usage.UsageError("You can't send signals on Windows (XXX TODO)")
- dbdir = self.parent.getStoreDirectory()
- serverpid = int(file(os.path.join(dbdir, 'run', 'axiomatic.pid')).read())
- os.kill(serverpid, signal)
- return serverpid
-
- def signalServer(self, signal):
- try:
- return self._sendSignal(signal)
- except (OSError, IOError), e:
- if e.errno in (errno.ENOENT, errno.ESRCH):
- raise usage.UsageError('There is no server running from the Axiom database %r.' % (self.parent.getStoreDirectory(),))
- else:
- raise
-
-
-class Stop(usage.Options, PIDMixin):
- def postOptions(self):
- self.signalServer(signal.SIGINT)
-
-
-
-class Status(usage.Options, PIDMixin):
- def postOptions(self):
- dbdir = self.parent.getStoreDirectory()
- serverpid = self.signalServer(0)
- print 'A server is running from the Axiom database %r, PID %d.' % (dbdir, serverpid)
-
-
-
-class Start(twistd.ServerOptions):
- run = staticmethod(twistd.run)
-
- def subCommands():
- raise AttributeError()
- subCommands = property(subCommands)
-
-
- def getArguments(self, store, args):
- run = store.dbdir.child("run")
- logs = run.child("logs")
- if "--logfile" not in args and "-l" not in args and "--nodaemon" not in args and "-n" not in args:
- if not logs.exists():
- logs.makedirs()
- args.extend(["--logfile", logs.child("axiomatic.log").path])
- if not platform.isWindows() and "--pidfile" not in args:
- args.extend(["--pidfile", run.child("axiomatic.pid").path])
- args.extend(["axiomatic-start", "--dbdir", store.dbdir.path])
- return args
-
-
- def parseOptions(self, args):
- if "--help" in args:
- self.opt_help()
- else:
- # If a reactor is being selected, it must be done before the store
- # is opened, since that may execute arbitrary application code
- # which may in turn install the default reactor.
- if "--reactor" in args:
- reactorIndex = args.index("--reactor")
- shortName = args[reactorIndex + 1]
- del args[reactorIndex:reactorIndex + 2]
- self.opt_reactor(shortName)
- sys.argv[1:] = self.getArguments(self.parent.getStore(), args)
- self.run()
-
-
-
-class Options(usage.Options):
- def subCommands():
- def get(self):
- yield ('start', None, Start, 'Launch the given Axiom database')
- if not platform.isWindows():
- yield ('stop', None, Stop, 'Stop the server running from the given Axiom database')
- yield ('status', None, Status, 'Report whether a server is running from the given Axiom database')
-
- from axiom import plugins
- for plg in getPlugins(IAxiomaticCommand, plugins):
- try:
- yield (plg.name, None, plg, plg.description)
- except AttributeError:
- raise RuntimeError("Maldefined plugin: %r" % (plg,))
- return get,
- subCommands = property(*subCommands())
-
- optParameters = [
- ('dbdir', 'd', None, 'Path containing axiom database to configure/create'),
- ]
-
- optFlags = [
- ('debug', 'b', 'Enable Axiom-level debug logging')]
-
- store = None
-
- def usedb(self, potentialdb):
- yn = raw_input("Use database %r? (Y/n) " % (potentialdb,))
- if yn.lower() in ('y', 'yes', ''):
- self['dbdir'] = potentialdb
- else:
- raise usage.UsageError('Select another database with the -d option, then.')
-
- def getStoreDirectory(self):
- if self['dbdir'] is None:
- possibilities = glob.glob('*.axiom')
- if len(possibilities) > 1:
- raise usage.UsageError(
- "Multiple databases found here, please select one with "
- "the -d option: %s" % (' '.join(possibilities),))
- elif len(possibilities) == 1:
- self.usedb(possibilities[0])
- else:
- self.usedb(self.subCommand + '.axiom')
- return self['dbdir']
-
- def getStore(self):
- from axiom.store import Store
- if self.store is None:
- self.store = Store(self.getStoreDirectory(), debug=self['debug'])
- return self.store
-
-
- def postOptions(self):
- if self.store is not None:
- self.store.close()
-
-
-def main(argv=None):
- o = Options()
- try:
- o.parseOptions(argv)
- except usage.UsageError, e:
- raise SystemExit(str(e))
=== removed file 'Axiom/axiom/scripts/pysql.py'
--- Axiom/axiom/scripts/pysql.py 2010-04-03 12:38:34 +0000
+++ Axiom/axiom/scripts/pysql.py 1970-01-01 00:00:00 +0000
@@ -1,19 +0,0 @@
-
-import sys
-import readline # Imported for its side-effects
-import traceback
-from pprint import pprint
-
-from axiom._pysqlite2 import Connection
-
-con = Connection.fromDatabaseName(sys.argv[1])
-cur = con.cursor()
-
-while True:
- try:
- cur.execute(raw_input("SQL> "))
- results = list(cur)
- if results:
- pprint(results)
- except:
- traceback.print_exc()
=== removed file 'Axiom/axiom/sequence.py'
--- Axiom/axiom/sequence.py 2006-04-14 20:18:55 +0000
+++ Axiom/axiom/sequence.py 1970-01-01 00:00:00 +0000
@@ -1,175 +0,0 @@
-# -*- test-case-name: axiom.test.test_sequence -*-
-
-from axiom.item import Item
-from axiom.attributes import reference, integer, AND
-
-class _ListItem(Item):
- typeName = 'list_item'
- schemaVersion = 1
-
- _index = integer()
- _value = reference()
- _container = reference()
-
-class List(Item):
- typeName = 'list'
- schemaVersion = 1
-
- length = integer(default=0)
-
- def __init__(self, *args, **kw):
- super(List, self).__init__(**kw)
- if args:
- self.extend(args[0])
-
- def _queryListItems(self):
- return self.store.query(_ListItem, _ListItem._container == self)
-
- def _getListItem(self, index):
- return list(self.store.query(_ListItem,
- AND(_ListItem._container == self,
- _ListItem._index == index)))[0]
-
- def _delListItem(self, index, resetIndexes=True):
- for li in self.store.query(_ListItem,
- AND(_ListItem._container == self,
- _ListItem._index == index)):
- li.deleteFromStore(deleteObject=True)
- break
-
- def _fixIndex(self, index, truncate=False):
- """
- @param truncate: If true, negative indices which go past the
- beginning of the list will be evaluated as zero.
- For example::
-
- >>> L = List([1,2,3,4,5])
- >>> len(L)
- 5
- >>> L._fixIndex(-9, truncate=True)
- 0
- """
- assert not isinstance(index, slice), 'slices are not supported (yet)'
- if index < 0:
- index += self.length
- if index < 0:
- if not truncate:
- raise IndexError('stored List index out of range')
- else:
- index = 0
- return index
-
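[Reviewer note: `_fixIndex` above implements Python-style negative indexing for the stored list, with an opt-in clamp. The same rule as a stand-alone function (invented name `fix_index`), mirroring the original's behavior of leaving over-length positive indices alone:]

```python
def fix_index(index, length, truncate=False):
    """Normalize a possibly-negative index against `length`.

    With truncate=True, indices past the front clamp to 0; otherwise they
    raise IndexError, matching the docstring example above.
    """
    if index < 0:
        index += length
        if index < 0:
            if not truncate:
                raise IndexError('stored List index out of range')
            index = 0
    return index
```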
- def __getitem__(self, index):
- index = self._fixIndex(index)
- return self._getListItem(index)._value
-
- def __setitem__(self, index, value):
- index = self._fixIndex(index)
- self._getListItem(index)._value = value
-
- def __add__(self, other):
- return list(self) + list(other)
- def __radd__(self, other):
- return list(other) + list(self)
-
- def __mul__(self, other):
- return list(self) * other
- def __rmul__(self, other):
- return other * list(self)
-
- def index(self, other, start=0, maximum=None):
- if maximum is None:
- maximum = len(self)
- for pos in range(start, maximum):
- if pos >= len(self):
- break
- if self[pos] == other:
- return pos
- raise ValueError, 'List.index(x): %r not in List' % other
-
- def __len__(self):
- return self.length
-
- def __delitem__(self, index):
- assert not isinstance(index, slice), 'slices are not supported (yet)'
- self._getListItem(index).deleteFromStore()
- if index < self.length - 1:
- for item in self.store.query(_ListItem, AND(
- _ListItem._container == self, _ListItem._index > index)):
- item._index -= 1
- self.length -= 1
-
- def __contains__(self, value):
- return bool(self.count(value))
-
- def append(self, value):
- """
- @type value: L{axiom.item.Item}
- @param value: Must be stored in the same L{Store<axiom.store.Store>}
- as this L{List} instance.
- """
- # XXX: Should List.append(unstoredItem) automatically store the item?
- self.insert(self.length, value)
-
- def extend(self, other):
- for item in iter(other):
- self.append(item)
-
- def insert(self, index, value):
- index = self._fixIndex(index, truncate=True)
- # If we do List(length=5).insert(50, x), we don't want
- # x's _ListItem._index to actually be 50.
- index = min(index, self.length)
- # This uses list() in case our contents change halfway through.
- # But does that _really_ work?
- for li in list(self.store.query(_ListItem,
- AND(_ListItem._container == self,
- _ListItem._index >= index))):
- # XXX: The performance of this operation probably sucks
- # compared to what it would be with an UPDATE.
- li._index += 1
- _ListItem(store=self.store,
- _value=value,
- _container=self,
- _index=index)
- self.length += 1
-
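[Reviewer note: `List.insert` above keeps one integer `_index` per `_ListItem` row and shifts every row at or after the insertion slot up by one, exactly as an in-memory list would. A plain-list model of that index-shifting step, with `(index, value)` pairs standing in for `_ListItem` rows (all names invented):]

```python
def insert_row(rows, index, value, length):
    """Insert `value` at `index` into `rows`; returns the new length.

    Mirrors List.insert: clamp the index into range, bump the index of
    every row at or after the slot, then place the new row.
    """
    index = max(0, min(index, length))
    for row in rows:
        if row[0] >= index:
            row[0] += 1          # shift later rows up by one
    rows.append([index, value])
    return length + 1
```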
- def pop(self, index=None):
- if index is None:
- index = self.length - 1
- index = self._fixIndex(index)
- x = self[index]
- del self[index]
- return x
-
- def remove(self, value):
- del self[self.index(value)]
-
- def reverse(self):
- # XXX: Also needs to be an atomic action.
- length = 0
- for li in list(self.store.query(_ListItem,
- _ListItem._container == self,
- sort=_ListItem._index.desc)):
- li._index = length
- length += 1
- self.length = length
-
- def sort(self, *args):
- # We want to sort by value, not sort by _ListItem. We could
- # accomplish this by having _ListItem.__cmp__ do something
- # with self._value, but that seemed wrong. This was easier.
- values = [li._value for li in self._queryListItems()]
- values.sort(*args)
- index = 0
- for li in self._queryListItems():
- # XXX: Well, can it?
- assert index < len(values), \
- '_ListItems were added during a sort (can this happen?)'
- li._index = index
- li._value = values[index]
- index += 1
-
- def count(self, value):
- return self.store.count(_ListItem, AND(
- _ListItem._container == self, _ListItem._value == value))
=== removed file 'Axiom/axiom/slotmachine.py'
--- Axiom/axiom/slotmachine.py 2009-05-14 13:33:33 +0000
+++ Axiom/axiom/slotmachine.py 1970-01-01 00:00:00 +0000
@@ -1,181 +0,0 @@
-# -*- test-case-name: axiom.test.test_slotmachine -*-
-
-hyper = super
-
-_NOSLOT = object()
-
-class Allowed(object):
- """
- An attribute that's allowed to be set.
- """
- def __init__(self, name, default=_NOSLOT):
- self.name = name
- self.default = default
-
- def __get__(self, oself, otype=None):
- if otype is not None and oself is None:
- return self
- if self.name in oself.__dict__:
- return oself.__dict__[self.name]
- if self.default is not _NOSLOT:
- return self.default
- raise AttributeError("%r object did not have attribute %r" %(oself.__class__.__name__, self.name))
-
- def __delete__(self, oself):
- if self.name not in oself.__dict__:
- # Returning rather than raising here because that's what
- # member_descriptor does, and Axiom relies upon that behavior.
- ## raise AttributeError('%r object has no attribute %r' %
- ## (oself.__class__.__name__, self.name))
- return
- del oself.__dict__[self.name]
-
- def __set__(self, oself, value):
- oself.__dict__[self.name] = value
-
-class _SlotMetaMachine(type):
- def __new__(meta, name, bases, dictionary):
- dictionary['__name__'] = name
- slots = list(meta.determineSchema(dictionary))
- for slot in slots:
- for base in bases:
- defval = getattr(base, slot, _NOSLOT)
- if defval is not _NOSLOT:
- break
- dictionary[slot] = Allowed(slot, defval)
- nt = type.__new__(meta, name, bases, dictionary)
- return nt
-
- def determineSchema(meta, dictionary):
- return dictionary.get("slots", [])
-
- determineSchema = classmethod(determineSchema)
-
-
-class DescriptorWithDefault(object):
- def __init__(self, default, original):
- self.original = original
- self.default = default
-
- def __get__(self, oself, type=None):
- if type is not None:
- if oself is None:
- return self.default
- return getattr(oself, self.original, self.default)
-
- def __set__(self, oself, value):
- setattr(oself, self.original, value)
-
- def __delete__(self, oself):
- delattr(oself, self.original)
-
-
-class Attribute(object):
- def __init__(self, doc=''):
- self.doc = doc
-
- def requiredSlots(self, modname, classname, attrname):
- self.name = attrname
- yield attrname
-
- def __get__(self, oself, type=None):
- assert oself is None, "%s: should be masked" % (self.name,)
- return self
-
-_RAISE = object()
-class SetOnce(Attribute):
-
- def __init__(self, doc='', default=_RAISE):
- Attribute.__init__(self)
- if default is _RAISE:
- self.default = ()
- else:
- self.default = (default,)
-
- def requiredSlots(self, modname, classname, attrname):
- self.name = attrname
- t = self.trueattr = ('_' + self.name)
- yield t
-
- def __set__(self, iself, value):
- if not hasattr(iself, self.trueattr):
- setattr(iself, self.trueattr, value)
- else:
- raise AttributeError('%s.%s may only be set once' % (
- type(iself).__name__, self.name))
-
- def __get__(self, iself, type=None):
- if type is not None and iself is None:
- return self
- return getattr(iself, self.trueattr, *self.default)
-
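[Reviewer note: the `SetOnce` descriptor above predates `__set_name__`, hence the explicit `requiredSlots` plumbing. A modern Python 3 rendering of the same idea, as a sketch rather than a drop-in replacement (the `Config`/`dbdir` usage is invented):]

```python
class SetOnce:
    """Data descriptor: the first assignment sticks, later ones raise."""
    def __set_name__(self, owner, name):
        self.name = name
        self.trueattr = '_' + name

    def __set__(self, obj, value):
        if hasattr(obj, self.trueattr):
            raise AttributeError('%s.%s may only be set once' % (
                type(obj).__name__, self.name))
        setattr(obj, self.trueattr, value)

    def __get__(self, obj, objtype=None):
        if obj is None:
            return self
        return getattr(obj, self.trueattr)

class Config:
    dbdir = SetOnce()
```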
-class SchemaMetaMachine(_SlotMetaMachine):
-
- def determineSchema(meta, dictionary):
- attrs = dictionary['__attributes__'] = []
- name = dictionary['__name__']
- moduleName = dictionary['__module__']
- dictitems = dictionary.items()
- dictitems.sort()
- for k, v in dictitems:
- if isinstance(v, Attribute):
- attrs.append((k, v))
- for slot in v.requiredSlots(moduleName, name, k):
- if slot == k:
- del dictionary[k]
- yield slot
-
- determineSchema = classmethod(determineSchema)
-
-
-class _Strict(object):
- """
- I disallow all attributes from being set that do not have an explicit
- data descriptor.
- """
-
- def __setattr__(self, name, value):
- """
- Like PyObject_GenericSetAttr, but call descriptors only.
- """
- try:
- allowed = type(self).__dict__['_Strict__setattr__allowed']
- except KeyError:
- allowed = type(self)._Strict__setattr__allowed = {}
- for cls in type(self).__mro__:
- for attrName, slot in cls.__dict__.iteritems():
- if attrName in allowed:
- # It was found earlier in the mro, overriding
- # whatever this is. Ignore it and move on.
- continue
- setter = getattr(slot, '__set__', _NOSLOT)
- if setter is not _NOSLOT:
- # It is a data descriptor, so remember the setter
- # for it in the cache.
- allowed[attrName] = setter
- else:
- # It is something else, so remember None for it in
- # the cache to indicate it cannot have its value
- # set.
- allowed[attrName] = None
-
- try:
- setter = allowed[name]
- except KeyError:
- pass
- else:
- if setter is not None:
- setter(self, value)
- return
-
- # It wasn't found in the setter cache or it was found to be None,
- # indicating a non-data descriptor which cannot be set.
- raise AttributeError(
- "%r can't set attribute %r" % (self.__class__.__name__, name))
-
-
-class SchemaMachine(_Strict):
- __metaclass__ = SchemaMetaMachine
-
-class SlotMachine(_Strict):
- __metaclass__ = _SlotMetaMachine
=== removed file 'Axiom/axiom/store.py'
--- Axiom/axiom/store.py 2014-01-22 19:37:14 +0000
+++ Axiom/axiom/store.py 1970-01-01 00:00:00 +0000
@@ -1,2376 +0,0 @@
-# Copyright 2008 Divmod, Inc. See LICENSE for details
-# -*- test-case-name: axiom.test -*-
-
-"""
-This module holds the Axiom Store class and related classes, such as queries.
-"""
-from epsilon import hotfix
-hotfix.require('twisted', 'filepath_copyTo')
-
-import time, os, itertools, warnings, sys, operator, weakref
-
-from zope.interface import implements
-
-from twisted.python import log
-from twisted.python.failure import Failure
-from twisted.python import filepath
-from twisted.internet import defer
-from twisted.python.reflect import namedAny
-from twisted.application.service import IService, IServiceCollection
-
-from epsilon.pending import PendingEvent
-from epsilon.cooperator import SchedulingService
-
-from axiom import _schema, attributes, upgrade, _fincache, iaxiom, errors
-from axiom import item
-from axiom._pysqlite2 import Connection
-
-from axiom.item import \
- _typeNameToMostRecentClass, declareLegacyItem, \
- _legacyTypes, Empowered, serviceSpecialCase, _StoreIDComparer
-
-IN_MEMORY_DATABASE = ':memory:'
-
-# The special storeID used to mark the store itself as the target of a
-# reference.
-STORE_SELF_ID = -1
-
-tempCounter = itertools.count()
-
-# A mapping from MetaItem instances to precomputed structures describing the
-# indexes necessary for those MetaItems. Avoiding recomputing this speeds up
-# opening stores significantly.
-_requiredTableIndexes = weakref.WeakKeyDictionary()
-
-# A mapping from MetaItem instances to precomputed structures describing the
-# known in-memory schema for those MetaItems. Avoiding recomputing this speeds
-# up opening stores significantly.
-_inMemorySchemaCache = weakref.WeakKeyDictionary()
-
-
-
-class NoEmptyItems(Exception):
- """You must define some attributes on every item.
- """
-
-def _mkdirIfNotExists(dirname):
- if os.path.isdir(dirname):
- return False
- os.makedirs(dirname)
- return True
-
-class AtomicFile(file):
- """I am a file which is moved from temporary to permanent storage when it
- is closed.
-
- After I'm closed, I will have a 'finalpath' property saying where I went.
- """
-
- implements(iaxiom.IAtomicFile)
-
- def __init__(self, tempname, destpath):
- """
- Create an AtomicFile. (Note: AtomicFiles can only be opened in
- write-binary mode.)
-
- @param tempname: The filename to open for temporary storage.
-
- @param destpath: The filename to move this file to when .close() is
- called.
- """
- self._destpath = destpath
- file.__init__(self, tempname, 'w+b')
-
- def close(self):
- """
- Close this file and commit it to its permanent location.
-
- @return: a Deferred which fires when the file has been moved (and
- backed up to tertiary storage, if necessary).
- """
- now = time.time()
- try:
- file.close(self)
- _mkdirIfNotExists(self._destpath.dirname())
- self.finalpath = self._destpath
- os.rename(self.name, self.finalpath.path)
- os.utime(self.finalpath.path, (now, now))
- except:
- return defer.fail()
- return defer.succeed(self.finalpath)
-
-
- def abort(self):
- os.unlink(self.name)
-
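[Reviewer note: `AtomicFile` above implements write-to-temp-then-rename so readers never see a half-written file at the final path. A small stand-alone illustration of the same pattern, without the Deferred plumbing; `atomic_write` is an invented name, and the rename is atomic on POSIX only within one filesystem:]

```python
import os
import tempfile

def atomic_write(destpath, data):
    """Write `data` to `destpath` so it appears all-at-once or not at all."""
    dirname = os.path.dirname(destpath) or '.'
    os.makedirs(dirname, exist_ok=True)
    # Write into a temp file in the same directory, then rename into place.
    fd, tempname = tempfile.mkstemp(dir=dirname)
    try:
        with os.fdopen(fd, 'wb') as f:
            f.write(data)
        os.rename(tempname, destpath)
    except BaseException:
        os.unlink(tempname)  # abort: discard the partial temp file
        raise
    return destpath
```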
-
-_noItem = object() # tag for optional argument to getItemByID
- # default
-
-
-
-def storeServiceSpecialCase(st, pups):
- """
- Adapt a store to L{IServiceCollection}.
-
- @param st: The L{Store} to adapt.
- @param pups: A list of L{IServiceCollection} powerups on C{st}.
-
- @return: An L{IServiceCollection} which has all of C{pups} as children.
- """
- if st.parent is not None:
- # If for some bizarre reason we're starting a substore's service, let's
- # just assume that its parent is running its upgraders, rather than
- # risk starting the upgrader run twice. (XXX: it *IS* possible to
- # figure out whether we need to or not, I just doubt this will ever
- # even happen in practice -- fix here if it does)
- return serviceSpecialCase(st, pups)
- if st._axiom_service is not None:
- # not new, don't add twice.
- return st._axiom_service
-
- collection = serviceSpecialCase(st, pups)
-
- st._upgradeService.setServiceParent(collection)
-
- if st.dbdir is not None:
- from axiom import batch
- batcher = batch.BatchProcessingControllerService(st)
- batcher.setServiceParent(collection)
-
- scheduler = iaxiom.IScheduler(st)
- # If it's an old database, we might get a SubScheduler instance. It has no
- # setServiceParent method.
- setServiceParent = getattr(scheduler, 'setServiceParent', None)
- if setServiceParent is not None:
- setServiceParent(collection)
-
- return collection
-
-
-def _typeIsTotallyUnknown(typename, version):
- return ((typename not in _typeNameToMostRecentClass)
- and ((typename, version) not in _legacyTypes))
-
-
-
-class BaseQuery:
- """
- This is the abstract base implementation of query logic shared between item
- and attribute queries.
-
- Note: as this is an abstract class, it doesn't *actually* implement IQuery,
- but all its subclasses must, so it is declared to. Don't instantiate it
- directly.
- """
- # XXX: need a better convention for this sort of
- # abstract-but-provide-most-of-a-base-implementation thing. -glyph
-
- # How about not putting the implements(iaxiom.IQuery) here, but on
- # subclasses instead? -exarkun
-
- implements(iaxiom.IQuery)
-
- def __init__(self, store, tableClass,
- comparison=None, limit=None,
- offset=None, sort=None):
- """
- Create a generic object-oriented interface to SQL, used to implement
- Store.query.
-
- @param store: the store that this query is within.
-
- @param tableClass: a subclass of L{Item}.
-
- @param comparison: an implementor of L{iaxiom.IComparison}
-
- @param limit: an L{int} that limits the number of results that will be
- queried for, or None to indicate that all results should be returned.
-
- @param offset: an L{int} that specifies the offset within the query
- results to begin iterating from, or None to indicate that we should
- start at 0.
-
- @param sort: A sort order object. Obtained by doing
- C{YourItemClass.yourAttribute.ascending} or C{.descending}.
- """
-
- self.store = store
- self.tableClass = tableClass
- self.comparison = comparison
- self.limit = limit
- self.offset = offset
- self.sort = iaxiom.IOrdering(sort)
- tables = self._involvedTables()
- self._computeFromClause(tables)
-
-
- _cloneAttributes = 'store tableClass comparison limit offset sort'.split()
-
- # IQuery
- def cloneQuery(self, limit=_noItem):
- clonekw = {}
- for attr in self._cloneAttributes:
- clonekw[attr] = getattr(self, attr)
- if limit is not _noItem:
- clonekw['limit'] = limit
- return self.__class__(**clonekw)
-
-
- def __repr__(self):
- return self.__class__.__name__ + '(' + ', '.join([
- repr(self.store),
- repr(self.tableClass),
- repr(self.comparison),
- repr(self.limit),
- repr(self.offset),
- repr(self.sort)]) + ')'
-
-
- def explain(self):
- """
- A debugging API, exposing SQLite's I{EXPLAIN} statement.
-
- While this is not a private method, you also probably don't have any
- use for it unless you understand U{SQLite
- opcodes<http://www.sqlite.org/opcode.html>} very well.
-
- Once you do, it can be handy to call this interactively to get a sense
- of the complexity of a query.
-
- @return: a list, the first element of which is a L{str} (the SQL
- statement which will be run), and the remainder of which is 3-tuples
- resulting from the I{EXPLAIN} of that statement.
- """
- return ([self._sqlAndArgs('SELECT', self._queryTarget)[0]] +
- self._runQuery('EXPLAIN SELECT', self._queryTarget))
-
-
- def _involvedTables(self):
- """
- Return a list of tables involved in this query,
- first checking that no required tables (those in
- the query target) have been omitted from the comparison.
- """
- # SQL and arguments
- if self.comparison is not None:
- tables = self.comparison.getInvolvedTables()
- self.args = self.comparison.getArgs(self.store)
- else:
- tables = [self.tableClass]
- self.args = []
-
- if self.tableClass not in tables:
- raise ValueError(
- "Comparison omits required reference to result type")
-
- return tables
-
- def _computeFromClause(self, tables):
- """
- Generate the SQL string which follows the "FROM" string and before the
- "WHERE" string in the final SQL statement.
- """
- tableAliases = []
- self.fromClauseParts = []
- for table in tables:
- # The indirect calls to store.getTableName() will create the tables
- # if needed. (XXX That's bad, actually. They should get created
- # some other way if necessary. -exarkun)
- tableName = table.getTableName(self.store)
- tableAlias = table.getTableAlias(self.store, tuple(tableAliases))
- if tableAlias is None:
- self.fromClauseParts.append(tableName)
- else:
- tableAliases.append(tableAlias)
- self.fromClauseParts.append('%s AS %s' % (tableName,
- tableAlias))
-
- self.sortClauseParts = []
- for attr, direction in self.sort.orderColumns():
- assert direction in ('ASC', 'DESC'), "%r not in ASC,DESC" % (direction,)
- if attr.type not in tables:
- raise ValueError(
- "Ordering references type excluded from comparison")
- self.sortClauseParts.append(
- '%s %s' % (attr.getColumnName(self.store), direction))
-
-
- def _sqlAndArgs(self, verb, subject):
- limitClause = []
- if self.limit is not None:
- # XXX LIMIT and OFFSET used to be using ?, but they started
- # generating syntax errors in places where generating the whole SQL
- # statement does not. this smells like a bug in sqlite's parser to
- # me, but I don't know my SQL syntax standards well enough to be
- # sure -glyph
- if not isinstance(self.limit, (int, long)):
- raise TypeError("limit must be an integer: %r" % (self.limit,))
- limitClause.append('LIMIT')
- limitClause.append(str(self.limit))
- if self.offset is not None:
- if not isinstance(self.offset, (int, long)):
- raise TypeError("offset must be an integer: %r" % (self.offset,))
- limitClause.append('OFFSET')
- limitClause.append(str(self.offset))
- else:
- assert self.offset is None, 'Offset specified without limit'
-
- sqlParts = [verb, subject]
- if self.fromClauseParts:
- sqlParts.extend(['FROM', ', '.join(self.fromClauseParts)])
- if self.comparison is not None:
- sqlParts.extend(['WHERE', self.comparison.getQuery(self.store)])
- if self.sortClauseParts:
- sqlParts.extend(['ORDER BY', ', '.join(self.sortClauseParts)])
- if limitClause:
- sqlParts.append(' '.join(limitClause))
- sqlstr = ' '.join(sqlParts)
- return (sqlstr, self.args)
-
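The clause-assembly logic of C{_sqlAndArgs} can be illustrated with a standalone sketch (hypothetical helper names, modern Python; the real method above additionally validates limit/offset types and pulls its clauses from the query object):

```python
def sql_and_args(verb, subject, from_parts=(), where=None,
                 order=(), limit=None, offset=None, args=()):
    # Assemble clauses in SQL's required order: FROM, WHERE, ORDER BY,
    # then LIMIT/OFFSET interpolated as literal integers (mirroring the
    # workaround for the '?'-placeholder issue noted above).
    parts = [verb, subject]
    if from_parts:
        parts.extend(["FROM", ", ".join(from_parts)])
    if where is not None:
        parts.extend(["WHERE", where])
    if order:
        parts.extend(["ORDER BY", ", ".join(order)])
    if limit is not None:
        parts.append("LIMIT %d" % (limit,))
        if offset is not None:
            parts.append("OFFSET %d" % (offset,))
    return " ".join(parts), list(args)
```

For example, `sql_and_args("SELECT", "oid, value", ["item_foo"], "value > ?", ["value ASC"], limit=5, args=[10])` yields `('SELECT oid, value FROM item_foo WHERE value > ? ORDER BY value ASC LIMIT 5', [10])`.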
-
- def _runQuery(self, verb, subject):
- # XXX ideally this should be creating an SQL cursor and iterating
- # through that so we don't have to load the whole query into memory,
- # but right now Store's interface to SQL is all through one cursor.
- # I'm not sure how to do this and preserve the chokepoint so that we
- # can do, e.g. transaction fallbacks.
- t = time.time()
- if not self.store.autocommit:
- self.store.checkpoint()
- sqlstr, sqlargs = self._sqlAndArgs(verb, subject)
- sqlResults = self.store.querySQL(sqlstr, sqlargs)
- cs = self.locateCallSite()
- log.msg(interface=iaxiom.IStatEvent,
- querySite=cs, queryTime=time.time() - t, querySQL=sqlstr)
- return sqlResults
-
- def locateCallSite(self):
- i = 3
- frame = sys._getframe(i)
- while frame.f_code.co_filename == __file__:
- #let's not get stuck in findOrCreate, etc
- i += 1
- frame = sys._getframe(i)
- return (frame.f_code.co_filename, frame.f_lineno)
-
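The frame-walking trick in C{locateCallSite} generalizes to any "attribute this event to the nearest frame outside my own module" logging helper. A minimal sketch, with hypothetical names, in modern Python:

```python
import sys

def locate_call_site(skip_filename):
    """Return (filename, lineno) of the nearest caller outside skip_filename."""
    i = 1
    frame = sys._getframe(i)
    # Walk outward past frames defined in the module being skipped, so
    # internal wrappers (findOrCreate and friends) are not reported as
    # the origin of the query.
    while frame.f_code.co_filename == skip_filename:
        i += 1
        frame = sys._getframe(i)
    return frame.f_code.co_filename, frame.f_lineno
```

Note that `sys._getframe` is a CPython implementation detail, which is presumably why the original keeps this confined to a debugging/statistics path.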
-
- def _selectStuff(self, verb='SELECT'):
- """
- Return a generator which yields the massaged results of this query with
- a particular SQL verb.
-
- For an attribute query, massaged results are of the type of that
- attribute. For an item query, they are items of the type the query is
- supposed to return.
-
- @param verb: a str containing the SQL verb to execute. This really
- must be some variant of 'SELECT', the only two currently implemented
- being 'SELECT' and 'SELECT DISTINCT'.
- """
- sqlResults = self._runQuery(verb, self._queryTarget)
- for row in sqlResults:
- yield self._massageData(row)
-
-
- def _massageData(self, row):
- """
- Subclasses must override this method to 'massage' the data received
- from the database, converting it from data direct from the database
- into Python objects of the appropriate form.
-
- @param row: a tuple of some kind, representing an element of data
- returned from a call to sqlite.
- """
- raise NotImplementedError()
-
-
- def distinct(self):
- """
- Call this method if you want to avoid repeated results from a query.
-
- You can call this on either an attribute or item query. For example,
- on an attribute query::
-
- X(store=s, value=1, name=u'foo')
- X(store=s, value=1, name=u'bar')
- X(store=s, value=2, name=u'baz')
- X(store=s, value=3, name=u'qux')
- list(s.query(X).getColumn('value'))
- => [1, 1, 2, 3]
- list(s.query(X).getColumn('value').distinct())
- => [1, 2, 3]
-
- You can also use distinct queries to eliminate duplicate results from
- joining two Item types together in a query, like so::
-
- x = X(store=s, value=1, name=u'hello')
- Y(store=s, other=x, ident=u'a')
- Y(store=s, other=x, ident=u'b')
- Y(store=s, other=x, ident=u'b+')
- list(s.query(X, AND(Y.other == X.storeID,
- Y.ident.startswith(u'b'))))
- => [X(name=u'hello', value=1, storeID=1)@...,
- X(name=u'hello', value=1, storeID=1)@...]
- list(s.query(X, AND(Y.other == X.storeID,
- Y.ident.startswith(u'b'))).distinct())
- => [X(name=u'hello', value=1, storeID=1)@...]
-
- @return: an L{iaxiom.IQuery} provider whose values are distinct.
- """
- return _DistinctQuery(self)
-
-
- def __iter__(self):
- """
- Iterate the results of this query.
- """
- return self._selectStuff('SELECT')
-
-
- _selfiter = None
- def next(self):
- """
- This method is deprecated, a holdover from when queries were iterators,
- rather than iterables.
-
- @return: one element of massaged data.
- """
- if self._selfiter is None:
- warnings.warn(
- "Calling 'next' directly on a query is deprecated. "
- "Perhaps you want to use iter(query).next(), or something "
- "more expressive like store.findFirst or store.findOrCreate?",
- DeprecationWarning, stacklevel=2)
- self._selfiter = self.__iter__()
- return self._selfiter.next()
-
-
-
-class _FakeItemForFilter:
- __legacy__ = False
- def __init__(self, store):
- self.store = store
-
-
-def _isColumnUnique(col):
- """
- Determine if an IColumn provider is unique.
-
- @param col: an L{IColumn} provider
- @return: True if the IColumn provider is unique, False otherwise.
- """
- return isinstance(col, _StoreIDComparer)
-
-class ItemQuery(BaseQuery):
- """
- This class is a query whose results will be Item instances. This is the
- type always returned from L{Store.query}.
- """
-
- def __init__(self, *a, **k):
- """
- Create an ItemQuery. This is typically done via L{Store.query}.
- """
- BaseQuery.__init__(self, *a, **k)
- self._queryTarget = (
- self.tableClass.storeID.getColumnName(self.store) + ', ' + (
- ', '.join(
- [attrobj.getColumnName(self.store)
- for name, attrobj in self.tableClass.getSchema()
- ])))
-
-
- def paginate(self, pagesize=20):
- """
- Split up the work of gathering a result set into multiple smaller
- 'pages', allowing very large queries to be iterated without blocking
- for long periods of time.
-
- While simply iterating C{paginate()} is very similar to iterating a
- query directly, using this method allows the work of obtaining the
- results to be performed on demand, over a series of different transactions.
-
- @param pagesize: the number of results gathered in each chunk of work.
- (This is mostly for testing paginate's implementation.)
- @type pagesize: L{int}
-
- @return: an iterable which yields all the results of this query.
- """
-
- sort = self.sort
- oc = list(sort.orderColumns())
- if not oc:
- # You can't have an unsorted pagination.
- sort = self.tableClass.storeID.ascending
- oc = list(sort.orderColumns())
- if len(oc) != 1:
- raise RuntimeError("%d-column sorts not supported yet with paginate" %(len(oc),))
- sortColumn = oc[0][0]
- if oc[0][1] == 'ASC':
- sortOp = operator.gt
- else:
- sortOp = operator.lt
- if _isColumnUnique(sortColumn):
- # This is the easy case. There is never a tie to be broken, so we
- # can just remember our last value and yield from there. Right now
- # this only happens when the column is a storeID, but hopefully in
- # the future we will have more of this.
- tiebreaker = None
- else:
- tiebreaker = self.tableClass.storeID
-
- tied = lambda a, b: (sortColumn.__get__(a) ==
- sortColumn.__get__(b))
- def _AND(a, b):
- if a is None:
- return b
- return attributes.AND(a, b)
-
- results = list(self.store.query(self.tableClass, self.comparison,
- sort=sort, limit=pagesize + 1))
- while results:
- if len(results) == 1:
- # XXX TODO: reject 0 pagesize. If the length of the result set
- # is 1, there's no next result to test for a tie with, so we
- # must be at the end, and we should just yield the result and finish.
- yield results[0]
- return
- for resultidx in range(len(results) - 1):
- # check for a tie.
- result = results[resultidx]
- nextResult = results[resultidx + 1]
- if tied(result, nextResult):
- # Yield any ties first, in the appropriate order.
- lastTieBreaker = tiebreaker.__get__(result)
- # Note that this query is _NOT_ limited: currently large ties
- # will generate arbitrarily large amounts of work.
- trq = self.store.query(
- self.tableClass,
- _AND(self.comparison,
- sortColumn == sortColumn.__get__(result)))
- tiedResults = list(trq)
- tiedResults.sort(key=lambda rslt: (sortColumn.__get__(result),
- tiebreaker.__get__(result)))
- for result in tiedResults:
- yield result
- # re-start the query here ('result' is set to the
- # appropriate value by the inner loop)
- break
- else:
- yield result
-
- lastSortValue = sortColumn.__get__(result) # hooray namespace pollution
- results = list(self.store.query(
- self.tableClass,
- _AND(self.comparison,
- sortOp(sortColumn,
- sortColumn.__get__(result))),
- sort=sort,
- limit=pagesize + 1))
-
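The strategy C{paginate} implements — fetch C{pagesize + 1} rows, then restart the query strictly after the last sort value seen — is keyset pagination. On a unique sort column (the "easy case" in the code above, where no tiebreaker is needed) it reduces to this sketch against plain sqlite3; the `items` table and its columns are hypothetical:

```python
import sqlite3

def paginate(conn, pagesize=20):
    """Yield every row of 'items' in id order, one small query at a time."""
    last_id = -1
    while True:
        # Resume strictly after the last key seen. Unlike OFFSET-based
        # paging, each page costs the same no matter how deep we are,
        # and no transaction needs to span the whole iteration.
        rows = conn.execute(
            "SELECT id, value FROM items WHERE id > ? ORDER BY id LIMIT ?",
            (last_id, pagesize)).fetchall()
        if not rows:
            return
        for row in rows:
            yield row
        last_id = rows[-1][0]
```

The non-unique case in the method above is harder precisely because two rows can carry the same sort value, which is why it falls back to `storeID` as a tiebreaker.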
- def _massageData(self, row):
- """
- Convert a row into an Item instance by loading cached items or
- creating new ones based on query results.
-
- @param row: an n-tuple, where n is the number of columns specified by
- my item type.
-
- @return: an instance of the type specified by this query.
- """
- result = self.store._loadedItem(self.tableClass, row[0], row[1:])
- assert result.store is not None, "result %r has funky store" % (result,)
- return result
-
-
- def getColumn(self, attributeName, raw=False):
- """
- Get an L{iaxiom.IQuery} whose results will be values of a single
- attribute rather than an Item.
-
- @param attributeName: a L{str}, the name of a Python attribute, that
- describes a column on the Item subclass that this query was specified
- for.
-
- @return: an L{AttributeQuery} for the column described by the attribute
- named L{attributeName} on the item class that this query's results will
- be instances of.
- """
- # XXX: 'raw' is undocumented because I think it's completely unused,
- # and it's definitely untested. It should probably be removed when
- # someone has the time. -glyph
-
- # Quotient POP3 server uses it. Not that it shouldn't be removed.
- # ;) -exarkun
- attr = getattr(self.tableClass, attributeName)
- return AttributeQuery(self.store,
- self.tableClass,
- self.comparison,
- self.limit,
- self.offset,
- self.sort,
- attr,
- raw)
-
-
- def count(self):
- rslt = self._runQuery(
- 'SELECT',
- 'COUNT(' + self.tableClass.storeID.getColumnName(self.store)
- + ')')
- assert len(rslt) == 1, 'more than one result: %r' % (rslt,)
- return rslt[0][0] or 0
-
-
- def deleteFromStore(self):
- """
- Delete all the Items which are found by this query.
- """
- #We can do this the fast way or the slow way.
-
- # If there's a 'deleted' callback on the Item type or 'deleteFromStore'
- # is overridden, we have to do it the slow way.
- deletedOverridden = (
- self.tableClass.deleted.im_func is not item.Item.deleted.im_func)
- deleteFromStoreOverridden = (
- self.tableClass.deleteFromStore.im_func is not
- item.Item.deleteFromStore.im_func)
-
- if deletedOverridden or deleteFromStoreOverridden:
- for it in self:
- it.deleteFromStore()
- else:
-
- # Find other item types whose instances need to be deleted
- # when items of the type in this query are deleted, and
- # remove them from the store.
- def itemsToDelete(attr):
- return attr.oneOf(self.getColumn("storeID"))
-
- if not item.allowDeletion(self.store, self.tableClass, itemsToDelete):
- raise errors.DeletionDisallowed(
- 'Cannot delete item; '
- 'has referents with whenDeleted == reference.DISALLOW')
-
- for it in item.dependentItems(self.store,
- self.tableClass, itemsToDelete):
- it.deleteFromStore()
-
- # actually run the DELETE for the items in this query.
- self._runQuery('DELETE', "")
-
-class MultipleItemQuery(BaseQuery):
- """
- A query that returns tuples of Items from a join.
- """
-
- def __init__(self, *a, **k):
- """
- Create a MultipleItemQuery. This is typically done via L{Store.query}.
- """
- BaseQuery.__init__(self, *a, **k)
-
- # Just in case it's some other kind of iterable.
- self.tableClass = tuple(self.tableClass)
-
- if len(self.tableClass) == 0:
- raise ValueError("Multiple item queries must have "
- "at least one table class")
-
- targets = []
-
- # Later when we massage data out, we need to slice the row.
- # This records the slice lengths.
- self.schemaLengths = []
-
- # self.tableClass is a tuple of Item classes.
- for tableClass in self.tableClass:
-
- schema = tableClass.getSchema()
-
- # The extra 1 is oid
- self.schemaLengths.append(len(schema) + 1)
-
- targets.append(
- tableClass.storeID.getColumnName(self.store) + ', ' + (
- ', '.join(
- [attrobj.getColumnName(self.store)
- for name, attrobj in schema
- ])))
-
- self._queryTarget = ', '.join(targets)
-
- def _involvedTables(self):
- """
- Return a list of tables involved in this query,
- first checking that no required tables (those in
- the query target) have been omitted from the comparison.
- """
- # SQL and arguments
- if self.comparison is not None:
- tables = self.comparison.getInvolvedTables()
- self.args = self.comparison.getArgs(self.store)
- else:
- tables = list(self.tableClass)
- self.args = []
-
- for tableClass in self.tableClass:
- if tableClass not in tables:
- raise ValueError(
- "Comparison omits required reference to result type %s"
- % tableClass.typeName)
-
- return tables
-
- def _massageData(self, row):
-
- """
- Convert a row into a tuple of Item instances, by slicing it
- according to the number of columns for each instance, and then
- proceeding as for ItemQuery._massageData.
-
- @param row: an n-tuple, where n is the total number of columns
- specified by all the item types in this query.
-
- @return: a tuple of instances of the types specified by this query.
- """
- offset = 0
- resultBits = []
-
- for i, tableClass in enumerate(self.tableClass):
- numAttrs = self.schemaLengths[i]
-
- result = self.store._loadedItem(self.tableClass[i],
- row[offset],
- row[offset+1:offset+numAttrs])
- assert result.store is not None, "result %r has funky store" % (result,)
- resultBits.append(result)
-
- offset += numAttrs
-
- return tuple(resultBits)
-
- def count(self):
- """
- Count the number of result tuples this query would produce.
-
- @return: an L{int} representing the number of results.
- """
- if not self.store.autocommit:
- self.store.checkpoint()
- target = ', '.join([
- tableClass.storeID.getColumnName(self.store)
- for tableClass in self.tableClass ])
- sql, args = self._sqlAndArgs('SELECT', target)
- sql = 'SELECT COUNT(*) FROM (' + sql + ')'
- result = self.store.querySQL(sql, args)
- assert len(result) == 1, 'more than one result: %r' % (result,)
- return result[0][0] or 0
-
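The C{SELECT COUNT(*) FROM (...)} wrapping used here is the standard way to count multi-column or DISTINCT result sets in SQLite, which has no multi-column C{COUNT(DISTINCT a, b)}. A standalone sketch over a hypothetical table:

```python
import sqlite3

def count_subselect(conn, inner_sql, args=()):
    # Wrap the row-producing query in a subselect and count its rows;
    # this works uniformly for DISTINCT and multi-column selections.
    wrapped = "SELECT COUNT(*) FROM (" + inner_sql + ")"
    return conn.execute(wrapped, args).fetchone()[0]
```

Only the row count crosses the subquery boundary, so the inner query's column list can be whatever the join produces.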
- def distinct(self):
- """
- @return: an L{iaxiom.IQuery} provider whose values are distinct.
- """
- return _MultipleItemDistinctQuery(self)
-
-class _DistinctQuery(object):
- """
- A query for results excluding duplicates.
-
- Results from this query depend on the query it was initialized with.
- """
- implements(iaxiom.IQuery)
-
- def __init__(self, query):
- """
- Create a distinct query, based on another query.
-
- @param query: an instance of a L{BaseQuery} subclass. Note: an IQuery
- provider is not sufficient, this class relies on implementation details
- of L{BaseQuery}.
- """
- self.query = query
- self.store = query.store
- self.limit = query.limit
-
-
- def cloneQuery(self, limit=_noItem):
- """
- Clone the original query which this distinct query wraps, and return a new
- wrapper around that clone.
- """
- newq = self.query.cloneQuery(limit=limit)
- return self.__class__(newq)
-
-
- def __iter__(self):
- """
- Iterate the distinct results of the wrapped query.
-
- @return: a generator which yields distinct values from its delegate
- query, whether they are items or attributes.
- """
- return self.query._selectStuff('SELECT DISTINCT')
-
-
- def count(self):
- """
- Count the number of distinct results of the wrapped query.
-
- @return: an L{int} representing the number of distinct results.
- """
- if not self.query.store.autocommit:
- self.query.store.checkpoint()
- sql, args = self.query._sqlAndArgs(
- 'SELECT DISTINCT',
- self.query.tableClass.storeID.getColumnName(self.query.store))
- sql = 'SELECT COUNT(*) FROM (' + sql + ')'
- result = self.query.store.querySQL(sql, args)
- assert len(result) == 1, 'more than one result: %r' % (result,)
- return result[0][0] or 0
-
-
-class _MultipleItemDistinctQuery(_DistinctQuery):
- """
- Distinct query based on a MultipleItemQuery.
- """
-
- def count(self):
- """
- Count the number of distinct results of the wrapped query.
-
- @return: an L{int} representing the number of distinct results.
- """
- if not self.query.store.autocommit:
- self.query.store.checkpoint()
- target = ', '.join([
- tableClass.storeID.getColumnName(self.query.store)
- for tableClass in self.query.tableClass ])
- sql, args = self.query._sqlAndArgs(
- 'SELECT DISTINCT',
- target)
- sql = 'SELECT COUNT(*) FROM (' + sql + ')'
- result = self.query.store.querySQL(sql, args)
- assert len(result) == 1, 'more than one result: %r' % (result,)
- return result[0][0] or 0
-
-
-_noDefault = object()
-
-class AttributeQuery(BaseQuery):
- """
- A query for the value of a single attribute from an item class, so as to
- load only a single value rather than instantiating an entire item when
- the value is all that is needed.
- """
- def __init__(self,
- store,
- tableClass,
- comparison=None, limit=None,
- offset=None, sort=None,
- attribute=None,
- raw=False):
- BaseQuery.__init__(self, store, tableClass,
- comparison, limit,
- offset, sort)
- self.attribute = attribute
- self.raw = raw
- self._queryTarget = attribute.getColumnName(self.store)
-
-
- _cloneAttributes = BaseQuery._cloneAttributes + 'attribute raw'.split()
-
-
- def _massageData(self, row):
- """
- Convert a raw database row to the type described by an attribute. For
- example, convert a database integer into an L{extime.Time} instance for
- an L{attributes.timestamp} attribute.
-
- @param row: a 1-tuple, containing the in-database value from my
- attribute.
-
- @return: a value of the type described by my attribute.
- """
- if self.raw:
- return row[0]
- return self.attribute.outfilter(row[0], _FakeItemForFilter(self.store))
-
-
- def count(self):
- """
- @return: the number of non-None values of this attribute specified by this query.
- """
- rslt = self._runQuery('SELECT', 'COUNT(%s)' % (self._queryTarget,)) or [(0,)]
- assert len(rslt) == 1, 'more than one result: %r' % (rslt,)
- return rslt[0][0]
-
-
-
- def sum(self):
- """
- Return the sum of all the values returned by this query. If there
- are no results, the sum is 0 (run through this attribute's outfilter).
-
- Note: for non-numeric column types the result of this method will be
- nonsensical.
-
- @return: a number or None.
- """
- res = self._runQuery('SELECT', 'SUM(%s)' % (self._queryTarget,)) or [(0,)]
- assert len(res) == 1, "more than one result: %r" % (res,)
- dbval = res[0][0] or 0
- return self.attribute.outfilter(dbval, _FakeItemForFilter(self.store))
-
-
- def average(self):
- """
- Return the average value (as defined by the AVG implementation in the
- database) of the values specified by this query.
-
- Note: for non-numeric column types the result of this method will be
- nonsensical.
-
- @return: a L{float} representing the 'average' value of this column.
- """
- rslt = self._runQuery('SELECT', 'AVG(%s)' % (self._queryTarget,)) or [(0,)]
- assert len(rslt) == 1, 'more than one result: %r' % (rslt,)
- return rslt[0][0]
-
-
- def max(self, default=_noDefault):
- return self._functionOnTarget('MAX', default)
-
-
- def min(self, default=_noDefault):
- return self._functionOnTarget('MIN', default)
-
-
- def _functionOnTarget(self, which, default):
- rslt = self._runQuery('SELECT', '%s(%s)' %
- (which, self._queryTarget,)) or [(None,)]
- assert len(rslt) == 1, 'more than one result: %r' % (rslt,)
- dbval = rslt[0][0]
- if dbval is None:
- if default is _noDefault:
- raise ValueError, '%s() on table with no items'%(which)
- else:
- return default
- return self.attribute.outfilter(dbval, _FakeItemForFilter(self.store))
-
-
-def _storeBatchServiceSpecialCase(*args, **kwargs):
- """
- Trivial wrapper around L{batch.storeBatchServiceSpecialCase} to delay the
- import of axiom.batch, which imports the reactor, which we do not want as a
- side-effect of importing L{axiom.store} (as this would preclude selecting a
- reactor after importing this module; see #2864).
- """
- from axiom import batch
- return batch.storeBatchServiceSpecialCase(*args, **kwargs)
-
-
-def _schedulerServiceSpecialCase(empowered, pups):
- """
- This function creates (or returns a previously created) L{IScheduler}
- powerup.
-
- If L{IScheduler} powerups were found on C{empowered}, the first of those
- is given priority. Otherwise, a site L{Store} or a user L{Store} will
- have any pre-existing L{IScheduler} powerup associated with them (on the
- hackish cache attribute C{_schedulerService}) returned, or a new one
- created if none exists already.
- """
- from axiom.scheduler import _SiteScheduler, _UserScheduler
-
- # Give precedence to anything found in the store
- for pup in pups:
- return pup
- # If the empowered is a store, construct a scheduler for it.
- if isinstance(empowered, Store):
- if getattr(empowered, '_schedulerService', None) is None:
- if empowered.parent is None:
- sched = _SiteScheduler(empowered)
- else:
- sched = _UserScheduler(empowered)
- empowered._schedulerService = sched
- return empowered._schedulerService
- return None
-
-
-class Store(Empowered):
- """
- I am a database that Axiom Items can be stored in.
-
- Store an item in me by setting its 'store' attribute to be me.
-
- I can be created one of two ways::
-
- Store() # Create an in-memory database
-
- Store("/path/to/file.axiom") # create an on-disk database in the
- # directory /path/to/file.axiom
-
- @ivar typeToTableNameCache: a dictionary mapping Item subclass type objects
- to the fully-qualified sqlite table name where items of that type are
- stored. This cache is generated from the saved schema metadata when this
- store is opened and updated when schema changes from other store objects
- (such as in other processes) are detected.
-
- @cvar __legacy__: an L{Item} may refer to a L{Store} via a L{reference},
- and this attribute tells the item reference system that the store itself is
- not an old version of an item; i.e. it does not need to have its upgraders
- invoked.
-
- @cvar storeID: an L{Item} may refer to a L{Store} via a L{reference}, and
- this attribute tells the item reference system that the L{Store} has a
- special ID to use (which is never allocated to any item).
- """
-
- aggregateInterfaces = {
- IService: storeServiceSpecialCase,
- IServiceCollection: storeServiceSpecialCase,
- iaxiom.IBatchService: _storeBatchServiceSpecialCase,
- iaxiom.IScheduler: _schedulerServiceSpecialCase}
-
- implements(iaxiom.IBeneficiary)
-
- transaction = None # set of objects changed in the current transaction
- touched = None # set of objects changed since the last checkpoint
-
- databaseName = 'main' # can differ if database is attached to another
- # database.
-
- dbdir = None # FilePath to the Axiom database directory, or None for
- # in-memory Stores.
- filesdir = None # FilePath to the filesystem-storage subdirectory of the
- # database directory, or None for in-memory Stores.
-
- store = property(lambda self: self) # I have a 'store' attribute because I
- # am 'stored' within myself; this is
- # also for references to use.
-
-
- # Counter indicating things are going on which disallows changes to the
- # database. Callbacks dispatched to application code while this is
- # non-zero will reject database changes with a ChangeRejected exception.
- _rejectChanges = 0
-
- # The following method and attributes are the ad-hoc interface required as
- # targets of attributes.reference attributes. (In other words, the store
- # is a little bit like a fake item.) These should probably eventually be
- # on an interface somewhere, and be better named.
-
- def _currentlyValidAsReferentFor(self, store):
- """
- Check to see if this store is currently valid as a target of a
- reference from an item in the given L{Store}. This is true iff the
- given L{Store} is this L{Store}.
-
- @param store: the store that the referring item is present in.
-
- @type store: L{Store}
- """
- if store is self:
- return True
- else:
- return False
-
- __legacy__ = False
-
- storeID = STORE_SELF_ID
-
-
- def __init__(self, dbdir=None, filesdir=None, debug=False, parent=None, idInParent=None):
- """
- Create a store.
-
- @param dbdir: A L{FilePath} to (or name of) an existing Axiom directory, or
- directory that does not exist yet which will be created as this Store
- is instantiated. If unspecified, this database will be kept in memory.
-
- @param filesdir: A L{FilePath} to (or name of) a directory to keep files in for in-memory
- stores. An exception will be raised if both this attribute and C{dbdir}
- are specified.
-
- @param debug: set to True if this Store should print out every SQL
- statement it sends to SQLite.
-
- @param parent: (internal) If this is opened using an
- L{axiom.substore.Substore}, a reference to its parent.
-
- @param idInParent: (internal) If this is opened using an
- L{axiom.substore.Substore}, the storeID of the item within its parent
- which opened it.
-
- @raises: C{ValueError} if both C{dbdir} and C{filesdir} are specified.
- """
- if parent is not None or idInParent is not None:
- assert parent is not None
- assert idInParent is not None
- self.parent = parent
- self.idInParent = idInParent
- self.debug = debug
- self.autocommit = True
- self.queryTimes = []
- self.execTimes = []
-
- self._inMemoryPowerups = {}
-
- self._attachedChildren = {} # database name => child store object
-
- self.statementCache = {} # non-normalized => normalized qmark SQL
- # statements
-
- self.activeTables = {} # tables which have had items added/removed
- # this run
-
- self.objectCache = _fincache.FinalizingCache()
-
- self.tableQueries = {} # map typename: query string w/ storeID
- # parameter. a typename is a persistent
- # database handle for what we'll call a 'FQPN',
- # i.e. arg to namedAny.
-
- self.typenameAndVersionToID = {} # map database-persistent typename and
- # version to an oid in the types table
-
- self.typeToInsertSQLCache = {}
- self.typeToSelectSQLCache = {}
- self.typeToDeleteSQLCache = {}
-
- self.typeToTableNameCache = {}
- self.attrToColumnNameCache = {}
-
- self._upgradeManager = upgrade._StoreUpgrade(self)
-
- self._axiom_service = None
-
-
- if self.parent is None:
- self._upgradeService = SchedulingService()
- else:
- # Substores should hook into their parent, since they shouldn't
- # expect to have their own substore service started.
- self._upgradeService = self.parent._upgradeService
-
-
- # OK! Everything that can be set up without touching the filesystem
- # has been done. Let's get ready to open the actual database...
-
- _initialOpenFailure = None
- if dbdir is None:
- self._initdb(IN_MEMORY_DATABASE)
- self._initSchema()
- self._memorySubstores = []
- if filesdir is not None:
- if not isinstance(filesdir, filepath.FilePath):
- filesdir = filepath.FilePath(filesdir)
- self.filesdir = filesdir
- if not self.filesdir.isdir():
- self.filesdir.makedirs()
- self.filesdir.child("temp").createDirectory()
- else:
- if filesdir is not None:
- raise ValueError("Only one of dbdir and filesdir"
- " may be specified")
- if not isinstance(dbdir, filepath.FilePath):
- dbdir = filepath.FilePath(dbdir)
- # required subdirs: files, temp, run
- # datafile: db.sqlite
- self.dbdir = dbdir
- self.filesdir = self.dbdir.child('files')
-
- if not dbdir.isdir():
- tempdbdir = dbdir.temporarySibling()
- tempdbdir.makedirs() # maaaaaaaybe this is a bad idea, we
- # probably shouldn't be doing this
- # automatically.
- for child in ('files', 'temp', 'run'):
- tempdbdir.child(child).createDirectory()
- self._initdb(tempdbdir.child('db.sqlite').path)
- self._initSchema()
- self.close(_report=False)
- try:
- tempdbdir.moveTo(dbdir)
- except:
- _initialOpenFailure = Failure()
-
- try:
- self._initdb(dbdir.child('db.sqlite').path)
- except:
- if _initialOpenFailure is not None:
- log.msg("Failed to initialize axiom database."
- " Possible cause of error: ")
- log.err(_initialOpenFailure)
- raise
-
- self.transact(self._startup)
-
- # _startup may have found some things which we must now upgrade.
- if self._upgradeManager.upgradesPending:
- # Automatically upgrade when possible.
- self._upgradeComplete = PendingEvent()
- d = self._upgradeService.addIterator(self._upgradeManager.upgradeEverything())
- def logUpgradeFailure(aFailure):
- if aFailure.check(errors.ItemUpgradeError):
- log.err(aFailure.value.originalFailure, 'Item upgrade error')
- log.err(aFailure, "upgrading %r failed" % (self,))
- return aFailure
- d.addErrback(logUpgradeFailure)
- def finishHim(resultOrFailure):
- self._upgradeComplete.callback(resultOrFailure)
- self._upgradeComplete = None
- d.addBoth(finishHim)
- else:
- self._upgradeComplete = None
-
- log.msg(
- interface=iaxiom.IStatEvent,
- store_opened=self.dbdir is not None and self.dbdir.path or '')
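The constructor above creates a brand-new store by populating a temporary sibling directory and then renaming it into place, so a crash mid-setup never leaves a half-initialized store behind. A minimal sketch of that pattern, with a hypothetical `create_store_dir` helper standing in for the `temporarySibling()`/`moveTo()` dance:

```python
import os
import tempfile

def create_store_dir(dbdir):
    """Populate a temporary sibling directory, then rename it into
    place.  If we crash partway through, only the temp directory is
    left behind; the target either exists fully formed or not at all."""
    if os.path.isdir(dbdir):
        return dbdir
    parent = os.path.dirname(os.path.abspath(dbdir))
    tmp = tempfile.mkdtemp(dir=parent)  # temporary sibling of dbdir
    for child in ("files", "temp", "run"):
        os.mkdir(os.path.join(tmp, child))
    # ... a real store would also initialize db.sqlite inside tmp here ...
    os.rename(tmp, dbdir)  # atomic on POSIX within one filesystem
    return dbdir
```

The rename is the commit point: readers racing against setup see either no directory or a complete one, never a partial one.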
-
- _childCounter = 0
-
- def _attachChild(self, child):
- "attach a child database, returning an identifier for it"
- self._childCounter += 1
- databaseName = 'child_db_%d' % (self._childCounter,)
- self._attachedChildren[databaseName] = child
- # ATTACH DATABASE statements can't use bind parameters, blech.
- self.executeSQL("ATTACH DATABASE '%s' AS %s" % (
- child.dbdir.child('db.sqlite').path,
- databaseName,))
- return databaseName
-
- attachedToParent = False
-
- def attachToParent(self):
- assert self.parent is not None, 'must have a parent to attach'
- assert self.transaction is None, "can't attach within a transaction"
-
- self.close()
-
- self.attachedToParent = True
- self.databaseName = self.parent._attachChild(self)
- self.connection = self.parent.connection
- self.cursor = self.parent.cursor
-
-# def detachFromParent(self):
-# pass
-
-
- def _initSchema(self):
- # No point in even attempting to transactionalize this:
- # every single statement is a CREATE TABLE or a CREATE
- # INDEX and those commit transactions silently anyway.
- for stmt in _schema.BASE_SCHEMA:
- self.executeSchemaSQL(stmt)
-
-
- def _startup(self):
- """
- Called during __init__. Check consistency of schema in database with
- classes in memory. Load all Python modules for stored items, and load
- version information for upgrader service to run later.
- """
- typesToCheck = []
-
- for oid, module, typename, version in self.querySchemaSQL(_schema.ALL_TYPES):
- if self.debug:
- print
- print 'SCHEMA:', oid, module, typename, version
- if typename not in _typeNameToMostRecentClass:
- try:
- namedAny(module)
- except ValueError, err:
- raise ImportError('cannot find module ' + module, str(err))
- self.typenameAndVersionToID[typename, version] = oid
-
- # Can't call this until typenameAndVersionToID is populated, since this
- # depends on building a reverse map of that.
- persistedSchema = self._loadTypeSchema()
-
- # Now that we have persistedSchema, loop over everything again and
- # prepare old types.
- for (typename, version), typeID in self.typenameAndVersionToID.iteritems():
- cls = _typeNameToMostRecentClass.get(typename)
-
- if cls is not None:
- if version != cls.schemaVersion:
- typesToCheck.append(
- self._prepareOldVersionOf(
- typename, version, persistedSchema))
- else:
- typesToCheck.append(cls)
-
- for cls in typesToCheck:
- self._checkTypeSchemaConsistency(cls, persistedSchema)
-
- # Schema is consistent! Now, if I forgot to create any indexes last
- # time I saw this table, do it now...
- extantIndexes = self._loadExistingIndexes()
- for cls in typesToCheck:
- self._createIndexesFor(cls, extantIndexes)
-
- self._upgradeManager.checkUpgradePaths()
-
-
- def _loadExistingIndexes(self):
- """
- Return a C{set} of the SQL indexes which already exist in the
- underlying database. It is important to load all of this information
- at once (as opposed to using many CREATE INDEX IF NOT EXISTS statements
- or many CREATE INDEX statements and handling the errors) to minimize
- the cost of opening a store. Loading all the indexes at once is much
- faster than any approach that does work once per required index.
- """
- # Totally SQLite-specific: look up what indexes exist already in
- # sqlite_master so we can skip trying to create them (which can be
- # really slow).
- return set(
- name
- for (name,) in self.querySchemaSQL(
- "SELECT name FROM *DATABASE*.sqlite_master "
- "WHERE type = 'index'"))
-
-
- def _initdb(self, dbfname):
- self.connection = Connection.fromDatabaseName(dbfname)
- self.cursor = self.connection.cursor()
-
-
- def __repr__(self):
- dbdir = self.dbdir
- if self.dbdir is None:
- dbdir = '(in memory)'
-
- return "<Store {dbdir}@{id:#x}".format(dbdir=dbdir, id=id(self))
-
-
- def findOrCreate(self, userItemClass, __ifnew=None, **attrs):
- """
- Usage::
-
- s.findOrCreate(userItemClass [, function] [, x=1, y=2, ...])
-
- Example::
-
- class YourItemType(Item):
- a = integer()
- b = text()
- c = integer()
-
- def f(x):
- print x, \"-- it's new!\"
- s.findOrCreate(YourItemType, f, a=1, b=u'2')
-
- Search for an item with columns in the database that match the passed
- set of keyword arguments, returning the first match if one is found,
- creating one with the given attributes if not. Takes an optional
- positional argument function to call on the new item if it is new.
- """
- andargs = []
- for k, v in attrs.iteritems():
- col = getattr(userItemClass, k)
- andargs.append(col == v)
-
- if len(andargs) == 0:
- cond = []
- elif len(andargs) == 1:
- cond = [andargs[0]]
- else:
- cond = [attributes.AND(*andargs)]
-
- for result in self.query(userItemClass, *cond):
- return result
- newItem = userItemClass(store=self, **attrs)
- if __ifnew is not None:
- __ifnew(newItem)
- return newItem
-
- def newFilePath(self, *path):
- p = self.filesdir
- for subdir in path:
- p = p.child(subdir)
- return p
-
- def newTemporaryFilePath(self, *path):
- p = self.dbdir.child('temp')
- for subdir in path:
- p = p.child(subdir)
- return p
-
- def newFile(self, *path):
- """
- Open a new file somewhere in this Store's file area.
-
- @param path: a sequence of path segments.
-
- @return: an L{AtomicFile}.
- """
- assert len(path) > 0, "newFile requires a nonzero number of segments"
- if self.dbdir is None:
- if self.filesdir is None:
- raise RuntimeError("This in-memory store has no file directory")
- else:
- tmpbase = self.filesdir
- else:
- tmpbase = self.dbdir
- tmpname = tmpbase.child('temp').child(str(tempCounter.next()) + ".tmp")
- return AtomicFile(tmpname.path, self.newFilePath(*path))
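`newFile` hands back an L{AtomicFile}: writes go to a temp name and only land at the final path on a clean close. A rough sketch of that idea, assuming nothing about AtomicFile's actual API (`AtomicWrite`, `abort`, and the constructor signature here are invented for illustration):

```python
import os
import tempfile

class AtomicWrite(object):
    """Write to a temporary file; publish it at the destination path
    only on a clean close, so readers never observe a partial file."""
    def __init__(self, destination):
        self.destination = destination
        fd, self.tempname = tempfile.mkstemp(
            dir=os.path.dirname(os.path.abspath(destination)))
        self.fileobj = os.fdopen(fd, "wb")

    def write(self, data):
        self.fileobj.write(data)

    def close(self):
        self.fileobj.close()
        os.rename(self.tempname, self.destination)  # atomic publish

    def abort(self):
        """Discard everything written; the destination is untouched."""
        self.fileobj.close()
        os.unlink(self.tempname)
```

The temp file lives in the same directory as the destination so the final `os.rename` stays within one filesystem and remains atomic.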
-
- def newDirectory(self, *path):
- p = self.filesdir
- for subdir in path:
- p = p.child(subdir)
- return p
-
-
- def _loadTypeSchema(self):
- """
- Load all of the stored schema information for all types known by this
- store. It's important to load everything all at once (rather than
- loading the schema for each type separately as it is needed) to keep
- store opening fast. A single query with many results is much faster
- than many queries with a few results each.
-
- @return: A dict with two-tuples of item type name and schema version as
- keys and lists of five-tuples of attribute schema information for
- that type as values. The elements of the five-tuple are::
-
- - a string giving the name of the Python attribute
- - a string giving the SQL type
- - a boolean indicating whether the attribute is indexed
- - the Python attribute type object (eg, axiom.attributes.integer)
- - a string giving documentation for the attribute
- """
-
- # Oops, need an index going the other way. This only happens once per
- # store open, and it's based on data queried from the store, so there
- # doesn't seem to be any broader way to cache and re-use the result.
- # However, if we keyed the resulting dict on the database typeID rather
- # than (typeName, schemaVersion), we wouldn't need the information this
- # dict gives us. That would mean changing the callers of this function
- # to use typeID instead of that tuple, which may be possible. Probably
- # only represents a very tiny possible speedup.
- typeIDToNameAndVersion = {}
- for key, value in self.typenameAndVersionToID.iteritems():
- typeIDToNameAndVersion[value] = key
-
- # Indexing attribute, ordering by it, and getting rid of row_offset
- # from the schema and the sorted() here doesn't seem to be any faster
- # than doing this.
- persistedSchema = sorted(self.querySchemaSQL(
- "SELECT attribute, type_id, sqltype, indexed, "
- "pythontype, docstring FROM *DATABASE*.axiom_attributes "))
-
- # This is trivially (but measurably!) faster than getattr(attributes,
- # pythontype).
- getAttribute = attributes.__dict__.__getitem__
-
- result = {}
- for (attribute, typeID, sqltype, indexed, pythontype,
- docstring) in persistedSchema:
- key = typeIDToNameAndVersion[typeID]
- if key not in result:
- result[key] = []
- result[key].append((
- attribute, sqltype, indexed,
- getAttribute(pythontype), docstring))
- return result
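The grouping step at the end of `_loadTypeSchema` is the standard "one big query, bucket the rows in Python" pattern. A small self-contained sketch with hypothetical rows, flattened as `(attribute, type_id, sqltype, indexed)`:

```python
# Rows as a single flattened query might return them (invented data):
rows = [
    ("name", 1, "TEXT", 0),
    ("age", 1, "INTEGER", 1),
    ("title", 2, "TEXT", 0),
]

# Bucket each row under its type's key, building a dict of lists --
# the same shape _loadTypeSchema returns, minus the attribute objects.
schema = {}
for attribute, type_id, sqltype, indexed in rows:
    schema.setdefault(type_id, []).append((attribute, sqltype, indexed))
```

One query returning many rows plus an in-memory pass like this is far cheaper than issuing one query per type.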
-
-
- def _checkTypeSchemaConsistency(self, actualType, onDiskSchema):
- """
- Called for all known types at database startup: make sure that what we
- know (in memory) about this type agrees with what is stored about this
- type in the database.
-
- @param actualType: A L{MetaItem} instance which is associated with a
- table in this store. The schema it defines in memory will be
- checked against the schema known in the database to ensure they
- agree.
-
- @param onDiskSchema: A mapping from L{MetaItem} instances (such as
- C{actualType}) to the schema known in the database and associated
- with C{actualType}.
-
- @raise RuntimeError: if the schema defined by C{actualType} does not
- match the database-present schema given in C{onDiskSchema} or if
- C{onDiskSchema} contains a newer version of the schema associated
- with C{actualType} than C{actualType} represents.
- """
- # make sure that both the runtime and the database both know about this
- # type; if they don't both know, we can't check that their views are
- # consistent
- try:
- inMemorySchema = _inMemorySchemaCache[actualType]
- except KeyError:
- inMemorySchema = _inMemorySchemaCache[actualType] = [
- (storedAttribute.attrname, storedAttribute.sqltype)
- for (name, storedAttribute) in actualType.getSchema()]
-
- key = (actualType.typeName, actualType.schemaVersion)
- persistedSchema = [(storedAttribute[0], storedAttribute[1])
- for storedAttribute in onDiskSchema[key]]
- if inMemorySchema != persistedSchema:
- raise RuntimeError(
- "Schema mismatch on already-loaded %r <%r> object version %d: %r != %r" %
- (actualType, actualType.typeName, actualType.schemaVersion,
- onDiskSchema, inMemorySchema))
-
- if actualType.__legacy__:
- return
-
- if (key[0], key[1] + 1) in onDiskSchema:
- raise RuntimeError(
- "Memory version of %r is %d; database has newer" % (
- actualType.typeName, key[1]))
-
-
- # finally find old versions of the data and prepare to upgrade it.
- def _prepareOldVersionOf(self, typename, version, persistedSchema):
- """
- Note that this database contains old versions of a particular type.
- Create the appropriate dummy item subclass and queue the type to be
- upgraded.
-
- @param typename: The I{typeName} associated with the schema for which
- to create a dummy item class.
-
- @param version: The I{schemaVersion} of the old version of the schema
- for which to create a dummy item class.
-
- @param persistedSchema: A mapping giving information about all schemas
- stored in the database, used to create the attributes of the dummy
- item class.
- """
- appropriateSchema = persistedSchema[typename, version]
- # create actual attribute objects
- dummyAttributes = {}
- for (attribute, sqlType, indexed, pythontype,
- docstring) in appropriateSchema:
- atr = pythontype(indexed=indexed, doc=docstring)
- dummyAttributes[attribute] = atr
- dummyBases = []
- oldType = declareLegacyItem(
- typename, version, dummyAttributes, dummyBases)
- self._upgradeManager.queueTypeUpgrade(oldType)
- return oldType
-
-
- def whenFullyUpgraded(self):
- """
- Return a Deferred which fires when this Store has been fully upgraded.
- """
- if self._upgradeComplete is not None:
- return self._upgradeComplete.deferred()
- else:
- return defer.succeed(None)
-
- def getOldVersionOf(self, typename, version):
- return _legacyTypes[typename, version]
-
-
-
- # grab the schema for that version
- # look up upgraders which push it forward
-
- def findUnique(self, tableClass, comparison=None, default=_noItem):
- """
- Find an Item in the database which should be unique. If it is found,
- return it. If it is not found, return 'default' if it was passed,
- otherwise raise L{errors.ItemNotFound}. If more than one item is
- found, raise L{errors.DuplicateUniqueItem}.
-
- @param comparison: implementor of L{iaxiom.IComparison}.
-
- @param default: value to use if the item is not found.
- """
- results = list(self.query(tableClass, comparison, limit=2))
- lr = len(results)
-
- if lr == 0:
- if default is _noItem:
- raise errors.ItemNotFound(comparison)
- else:
- return default
- elif lr == 2:
- raise errors.DuplicateUniqueItem(comparison, results)
- elif lr == 1:
- return results[0]
- else:
- raise AssertionError("limit=2 database query returned 3+ results: ",
- comparison, results)
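The `limit=2` trick above distinguishes zero, one, and many matches while fetching at most two rows. A sketch of the same logic detached from the database layer (the function name and exceptions here are stand-ins, not Axiom's API):

```python
import itertools

_NO_ITEM = object()  # sentinel so None can be a legitimate default

def find_unique(results, default=_NO_ITEM):
    """Distinguish zero / one / many matches while consuming at most
    two results -- the same trick findUnique plays with limit=2."""
    found = list(itertools.islice(iter(results), 2))
    if not found:
        if default is _NO_ITEM:
            raise LookupError("item not found")
        return default
    if len(found) > 1:
        raise ValueError("more than one matching item")
    return found[0]
```

Because only two rows are ever pulled, uniqueness is checked without counting or materializing the full result set.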
-
-
- def findFirst(self, tableClass, comparison=None,
- offset=None, sort=None, default=None):
- """
- Usage::
-
- s.findFirst(tableClass [, query arguments except 'limit'])
-
- Example::
-
- class YourItemType(Item):
- a = integer()
- b = text()
- c = integer()
- ...
- it = s.findFirst(YourItemType,
- AND(YourItemType.a == 1,
- YourItemType.b == u'2'),
- sort=YourItemType.c.descending)
-
- Search for an item with columns in the database that match the passed
- comparison, offset and sort, returning the first match if one is found,
- or the passed default (None if none is passed) if one is not found.
- """
-
- limit = 1
- for item in self.query(tableClass, comparison, limit, offset, sort):
- return item
- return default
-
- def query(self, tableClass, comparison=None,
- limit=None, offset=None, sort=None):
- """
- Return a generator of instances of C{tableClass},
- or tuples of instances if C{tableClass} is a
- tuple of classes.
-
- Examples::
-
- fastCars = s.query(Vehicle,
- axiom.attributes.AND(
- Vehicle.wheels == 4,
- Vehicle.maxKPH > 200),
- limit=100,
- sort=Vehicle.maxKPH.descending)
-
- quotesByClient = s.query( (Client, Quote),
- axiom.attributes.AND(
- Client.active == True,
- Quote.client == Client.storeID,
- Quote.created >= someDate),
- limit=10,
- sort=(Client.name.ascending,
- Quote.created.descending))
-
- @param tableClass: a subclass of Item to look for instances of,
- or a tuple of subclasses.
-
- @param comparison: a provider of L{IComparison}, or None, to match
- all items available in the store. If tableClass is a tuple, then
- the comparison must refer to all Item subclasses in that tuple,
- and specify the relationships between them.
-
- @param limit: an int to limit the total length of the results, or None
- for all available results.
-
- @param offset: an int to specify a starting point within the available
- results, or None to start at 0.
-
- @param sort: an L{ISort}, something that comes from an SQLAttribute's
- 'ascending' or 'descending' attribute.
-
- @return: an L{ItemQuery} object, which is an iterable of Items or
- tuples of Items, according to tableClass.
- """
- if isinstance(tableClass, tuple):
- queryClass = MultipleItemQuery
- else:
- queryClass = ItemQuery
-
- return queryClass(self, tableClass, comparison, limit, offset, sort)
-
- def sum(self, summableAttribute, *a, **k):
- args = (self, summableAttribute.type) + a
- return AttributeQuery(attribute=summableAttribute,
- *args, **k).sum()
- def count(self, *a, **k):
- return self.query(*a, **k).count()
-
- def batchInsert(self, itemType, itemAttributes, dataRows):
- """
- Create multiple items in the store without loading
- corresponding Python objects into memory.
-
- The items' C{stored} callback will not be called.
-
- Example::
-
- myData = [(37, u"Fred", u"Wichita"),
- (28, u"Jim", u"Fresno"),
- (43, u"Betty", u"Dubuque")]
- myStore.batchInsert(FooItem,
- [FooItem.age, FooItem.name, FooItem.city],
- myData)
-
- @param itemType: an Item subclass to create instances of.
-
- @param itemAttributes: an iterable of attributes on the Item subclass.
-
- @param dataRows: an iterable of iterables, each the same
- length as C{itemAttributes} and containing data corresponding
- to each attribute in it.
-
- @return: None.
- """
- class FakeItem:
- pass
- _NEEDS_DEFAULT = object() # token for lookup failure
- fakeOSelf = FakeItem()
- fakeOSelf.store = self
- sql = itemType._baseInsertSQL(self)
- indices = {}
- schema = [attr for (name, attr) in itemType.getSchema()]
- for i, attr in enumerate(itemAttributes):
- indices[attr] = i
- for row in dataRows:
- oid = self.store.executeSchemaSQL(
- _schema.CREATE_OBJECT, [self.store.getTypeID(itemType)])
- insertArgs = [oid]
- for attr in schema:
- i = indices.get(attr, _NEEDS_DEFAULT)
- if i is _NEEDS_DEFAULT:
- pyval = attr.default
- else:
- pyval = row[i]
- dbval = attr._convertPyval(fakeOSelf, pyval)
- insertArgs.append(dbval)
- self.executeSQL(sql, insertArgs)
-
- def _loadedItem(self, itemClass, storeID, attrs):
- try:
- result = self.objectCache.get(storeID)
- # XXX do checks on consistency between attrs and DB object, maybe?
- except KeyError:
- result = itemClass.existingInStore(self, storeID, attrs)
- if not result.__legacy__:
- self.objectCache.cache(storeID, result)
- return result
-
-
- def changed(self, item):
- """
- An item in this store was changed. Add it to the current transaction's
- list of changed items, if a transaction is currently underway, or raise
- an exception if this L{Store} is currently in a state which does not
- allow changes.
- """
- if self._rejectChanges:
- raise errors.ChangeRejected()
- if self.transaction is not None:
- self.transaction.add(item)
- self.touched.add(item)
-
-
- def checkpoint(self):
- self._rejectChanges += 1
- try:
- for item in self.touched:
- # XXX: it should be possible here, using various clever hacks, to
- # automatically optimize functionally identical statements into
- # executemany.
- item.checkpoint()
- self.touched.clear()
- finally:
- self._rejectChanges -= 1
-
- executedThisTransaction = None
- tablesCreatedThisTransaction = None
-
- def transact(self, f, *a, **k):
- """
- Execute C{f(*a, **k)} in the context of a database transaction.
-
- Any changes made to this L{Store} by C{f} will be committed when C{f}
- returns. If C{f} raises an exception, those changes will be reverted
- instead.
-
- If a transaction is already in progress (in this thread - ie, if a
- frame executing L{Store.transact} is already on the call stack), this
- will B{not} start a nested transaction. Changes will not be committed
- until the existing transaction completes, and an exception raised by
- C{f} will not revert changes made by C{f}. You probably don't want to
- ever call this if another transaction is in progress.
-
- @return: Whatever C{f(*a, **kw)} returns.
- @raise: Whatever C{f(*a, **kw)} raises, or a database exception.
- """
- if self.transaction is not None:
- return f(*a, **k)
- if self.attachedToParent:
- return self.parent.transact(f, *a, **k)
- try:
- self._begin()
- try:
- result = f(*a, **k)
- self.checkpoint()
- except:
- exc = Failure()
- try:
- self.revert()
- except:
- log.err(exc)
- raise
- raise
- else:
- self._commit()
- return result
- finally:
- self._cleanupTxnState()
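The commit-on-success / rollback-and-reraise shape of `transact` can be shown with the stdlib `sqlite3` module. This is a sketch of the pattern only, not Store's API; note `isolation_level=None`, which stops the `sqlite3` module from issuing its own implicit BEGINs so the explicit statements take effect:

```python
import sqlite3

def transact(conn, f, *a, **kw):
    """Run f inside an explicit transaction: commit if it succeeds,
    roll back and re-raise if it fails."""
    conn.execute("BEGIN IMMEDIATE TRANSACTION")
    try:
        result = f(*a, **kw)
    except BaseException:
        conn.execute("ROLLBACK")
        raise
    conn.execute("COMMIT")
    return result

conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE t (x INTEGER)")
transact(conn, conn.execute, "INSERT INTO t VALUES (1)")
try:
    transact(conn, conn.execute, "INSERT INTO nonexistent VALUES (2)")
except sqlite3.OperationalError:
    pass  # the failed transaction was rolled back
```

`BEGIN IMMEDIATE` grabs the write lock up front, so a read-then-write transaction cannot be surprised by another writer partway through.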
-
- # The following three methods are necessary...
- # - in PySQLite: because PySQLite has some buggy transaction handling which
- # makes it impossible to issue explicit BEGIN statements - which we
- # _need_ to do to provide guarantees for read/write transactions.
-
- def _begin(self):
- if self.debug:
- print '<'*10, 'BEGIN', '>'*10
- self.cursor.execute("BEGIN IMMEDIATE TRANSACTION")
- self._setupTxnState()
-
- def _setupTxnState(self):
- self.executedThisTransaction = []
- self.tablesCreatedThisTransaction = []
- if self.attachedToParent:
- self.transaction = self.parent.transaction
- self.touched = self.parent.touched
- else:
- self.transaction = set()
- self.touched = set()
- self.autocommit = False
- for sub in self._attachedChildren.values():
- sub._setupTxnState()
-
- def _commit(self):
- if self.debug:
- print '*'*10, 'COMMIT', '*'*10
- # self.connection.commit()
- self.cursor.execute("COMMIT")
- log.msg(interface=iaxiom.IStatEvent, stat_commits=1)
- self._postCommitHook()
-
-
- def _postCommitHook(self):
- self._rejectChanges += 1
- try:
- for committed in self.transaction:
- committed.committed()
- finally:
- self._rejectChanges -= 1
-
-
- def _rollback(self):
- if self.debug:
- print '>'*10, 'ROLLBACK', '<'*10
- # self.connection.rollback()
- self.cursor.execute("ROLLBACK")
- log.msg(interface=iaxiom.IStatEvent, stat_rollbacks=1)
-
-
- def revert(self):
- self._rollback()
- self._inMemoryRollback()
-
-
- def _inMemoryRollback(self):
- self._rejectChanges += 1
- try:
- for item in self.transaction:
- item.revert()
- finally:
- self._rejectChanges -= 1
- self.transaction.clear()
- for tableClass in self.tablesCreatedThisTransaction:
- del self.typenameAndVersionToID[tableClass.typeName,
- tableClass.schemaVersion]
- # Clear all cache related to this table
- for cache in (self.typeToInsertSQLCache,
- self.typeToDeleteSQLCache,
- self.typeToSelectSQLCache,
- self.typeToTableNameCache):
- if tableClass in cache:
- del cache[tableClass]
- if tableClass.storeID in self.attrToColumnNameCache:
- del self.attrToColumnNameCache[tableClass.storeID]
- for name, attr in tableClass.getSchema():
- if attr in self.attrToColumnNameCache:
- del self.attrToColumnNameCache[attr]
-
- for sub in self._attachedChildren.values():
- sub._inMemoryRollback()
-
-
- def _cleanupTxnState(self):
- self.autocommit = True
- self.transaction = None
- self.touched = None
- self.executedThisTransaction = None
- self.tablesCreatedThisTransaction = []
- for sub in self._attachedChildren.values():
- sub._cleanupTxnState()
-
- def close(self, _report=True):
- self.cursor.close()
- self.cursor = self.connection = None
- if self.debug and _report:
- if not self.queryTimes:
- print 'no queries'
- else:
- print 'query:', self.avgms(self.queryTimes)
- if not self.execTimes:
- print 'no execs'
- else:
- print 'exec:', self.avgms(self.execTimes)
-
- def avgms(self, l):
- return 'count: %d avg: %dus' % (len(l),
- int( (sum(l)/len(l)) * 1000000.),)
-
- def _indexNameOf(self, tableClass, attrname):
- """
- Return the unqualified (ie, no database name) name of the given
- attribute of the given table.
-
- @type tableClass: L{MetaItem}
- @param tableClass: The Python class associated with a table in the
- database.
-
- @param attrname: A sequence of the names of the columns of the
- indicated table which will be included in the named index.
-
- @return: A C{str} giving the name of the index which will index the
- given attributes of the given table.
- """
- return "axiomidx_%s_v%d_%s" % (tableClass.typeName,
- tableClass.schemaVersion,
- '_'.join(attrname))
-
-
- def _tableNameFor(self, typename, version):
- return "%s.item_%s_v%d" % (self.databaseName, typename, version)
-
-
- def getTableName(self, tableClass):
- """
- Retrieve the fully qualified name of the table holding items
- of a particular class in this store. If the table does not
- exist in the database, it will be created as a side-effect.
-
- @param tableClass: an Item subclass
-
- @raises axiom.errors.ItemClassesOnly: if an object other than a
- subclass of Item is passed.
-
- @return: a string
- """
- if not (isinstance(tableClass, type) and issubclass(tableClass, item.Item)):
- raise errors.ItemClassesOnly("Only subclasses of Item have table names.")
-
- if tableClass not in self.typeToTableNameCache:
- self.typeToTableNameCache[tableClass] = self._tableNameFor(tableClass.typeName, tableClass.schemaVersion)
- # make sure the table exists
- self.getTypeID(tableClass)
- return self.typeToTableNameCache[tableClass]
-
-
- def getShortColumnName(self, attribute):
- """
- Retrieve the column name for a particular attribute in this
- store. The attribute must be bound to an Item subclass (its
- type must be valid). If the underlying table does not exist in
- the database, it will be created as a side-effect.
-
- @param attribute: an attribute bound to an Item subclass
-
- @return: a string
-
- XXX: The current implementation does not really match the
- description, which is actually more restrictive. But it will
- be true soon, so I guess it is ok for now. The reason is
- that this method is used during table creation.
- """
- if isinstance(attribute, _StoreIDComparer):
- return 'oid'
- return '[' + attribute.attrname + ']'
-
-
- def getColumnName(self, attribute):
- """
- Retrieve the fully qualified column name for a particular
- attribute in this store. The attribute must be bound to an
- Item subclass (its type must be valid). If the underlying
- table does not exist in the database, it will be created as a
- side-effect.
-
- @param attribute: an attribute bound to an Item subclass
-
- @return: a string
- """
- if attribute not in self.attrToColumnNameCache:
- self.attrToColumnNameCache[attribute] = '.'.join(
- (self.getTableName(attribute.type),
- self.getShortColumnName(attribute)))
- return self.attrToColumnNameCache[attribute]
-
-
- def getTypeID(self, tableClass):
- """
- Retrieve the typeID associated with a particular table in the
- in-database schema for this Store. A typeID is an opaque integer
- representing the Item subclass, and the associated table in this
- Store's SQLite database.
-
- @param tableClass: a subclass of Item
-
- @return: an integer
- """
- key = (tableClass.typeName,
- tableClass.schemaVersion)
- if key in self.typenameAndVersionToID:
- return self.typenameAndVersionToID[key]
- return self.transact(self._maybeCreateTable, tableClass, key)
-
-
- def _maybeCreateTable(self, tableClass, key):
- """
- A type ID has been requested for an Item subclass whose table was not
- present when this Store was opened. Attempt to create the table, and
- if that fails because another Store object (perhaps in another process)
- has created the table, re-read the schema. When that's done, return
- the typeID.
-
- This method is internal to the implementation of getTypeID. It must be
- run in a transaction.
-
- @param tableClass: an Item subclass
- @param key: a 2-tuple of the tableClass's typeName and schemaVersion
-
- @return: a typeID for the table; a new one if no table exists, or the
- existing one if the table was created by another Store object
- referencing this database.
- """
- sqlstr = []
- sqlarg = []
-
- # needs to be calculated including version
- tableName = self._tableNameFor(tableClass.typeName,
- tableClass.schemaVersion)
-
- sqlstr.append("CREATE TABLE %s (" % tableName)
-
- for nam, atr in tableClass.getSchema():
- # it's a stored attribute
- sqlarg.append("\n%s %s" %
- (atr.getShortColumnName(self), atr.sqltype))
-
- if len(sqlarg) == 0:
- # XXX should be raised way earlier, in the class definition or something
- raise NoEmptyItems("%r did not define any attributes" % (tableClass,))
-
- sqlstr.append(', '.join(sqlarg))
- sqlstr.append(')')
-
- try:
- self.createSQL(''.join(sqlstr))
- except errors.TableAlreadyExists:
- # Although we don't have a memory of this table from the last time
- # we called "_startup()", another process has updated the schema
- # since then.
- self._startup()
- return self.typenameAndVersionToID[key]
-
-
- typeID = self.executeSchemaSQL(_schema.CREATE_TYPE,
- [tableClass.typeName,
- tableClass.__module__,
- tableClass.schemaVersion])
-
- self.typenameAndVersionToID[key] = typeID
-
- if self.tablesCreatedThisTransaction is not None:
- self.tablesCreatedThisTransaction.append(tableClass)
-
- # We can pass () for extantIndexes here because since the table didn't
- # exist for tableClass, none of its indexes could have either.
- # Whatever checks _createIndexesFor will make would give the same
- # result against the actual set of existing indexes as they will
- # against ().
- self._createIndexesFor(tableClass, ())
-
- for n, (name, storedAttribute) in enumerate(tableClass.getSchema()):
- self.executeSchemaSQL(
- _schema.ADD_SCHEMA_ATTRIBUTE,
- [typeID, n, storedAttribute.indexed, storedAttribute.sqltype,
- storedAttribute.allowNone, storedAttribute.attrname,
- storedAttribute.doc, storedAttribute.__class__.__name__])
- # XXX probably need something better for pythontype eventually,
- # when we figure out a good way to do user-defined attributes or we
- # start parameterizing references.
-
- return typeID
-
-
- def _createIndexesFor(self, tableClass, extantIndexes):
- """
- Create any indexes which don't exist and are required by the schema
- defined by C{tableClass}.
-
- @param tableClass: A L{MetaItem} instance which may define a schema
- which includes indexes.
-
- @param extantIndexes: A container (anything which can be the right-hand
- argument to the C{in} operator) which contains the unqualified
- names of all indexes which already exist in the underlying database
- and do not need to be created.
- """
- try:
- indexes = _requiredTableIndexes[tableClass]
- except KeyError:
- indexes = set()
- for nam, atr in tableClass.getSchema():
- if atr.indexed:
- indexes.add(((atr.getShortColumnName(self),), (atr.attrname,)))
- for compound in atr.compoundIndexes:
- indexes.add((tuple(inatr.getShortColumnName(self) for inatr in compound),
- tuple(inatr.attrname for inatr in compound)))
- _requiredTableIndexes[tableClass] = indexes
-
- # _ZOMFG_ SQL is such a piece of _shit_: you can't fully qualify the
- # table name in CREATE INDEX statements because the _INDEX_ is fully
- # qualified!
-
- indexColumnPrefix = '.'.join(self.getTableName(tableClass).split(".")[1:])
-
- for (indexColumns, indexAttrs) in indexes:
- nameOfIndex = self._indexNameOf(tableClass, indexAttrs)
- if nameOfIndex in extantIndexes:
- continue
- csql = 'CREATE INDEX %s.%s ON %s(%s)' % (
- self.databaseName, nameOfIndex, indexColumnPrefix,
- ', '.join(indexColumns))
- self.createSQL(csql)
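The qualification quirk the comment complains about is easy to verify with stdlib `sqlite3`: in a `CREATE INDEX` statement the attached database's name must qualify the *index*, and the table name is left bare (names below are made up to match the store's conventions):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("ATTACH DATABASE ':memory:' AS child_db_1")
conn.execute("CREATE TABLE child_db_1.item_foo_v1 (a INTEGER)")
# The database qualifier goes on the INDEX name; writing
# 'ON child_db_1.item_foo_v1' instead is a syntax error in SQLite.
conn.execute(
    "CREATE INDEX child_db_1.axiomidx_foo_v1_a ON item_foo_v1(a)")
```

SQLite resolves the unqualified table name within the same database as the index, which is why `_createIndexesFor` strips the qualifier off the table name before building the statement.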
-
-
- def getTableQuery(self, typename, version):
- if (typename, version) not in self.tableQueries:
- query = 'SELECT * FROM %s WHERE oid = ?' % (
- self._tableNameFor(typename, version), )
- self.tableQueries[typename, version] = query
- return self.tableQueries[typename, version]
-
-
- def getItemByID(self, storeID, default=_noItem, autoUpgrade=True):
- """
- Retrieve an item by its storeID, and return it.
-
- Note: most of the failure modes of this method are catastrophic and
- should not be handled by application code. The only one that
- application programmers should be concerned with is KeyError. They are
- listed for educational purposes.
-
- @param storeID: an L{int} which identifies an item in this store.
-
- @param default: if passed, return this value rather than raising in the
- case where no Item is found.
-
- @raise TypeError: if storeID is not an integer.
-
- @raise UnknownItemType: if the storeID refers to an item row in the
- database, but the corresponding type information is not available to
- Python.
-
- @raise RuntimeError: if the found item's class version is higher than
- the current application is aware of. (In other words, if you have
- upgraded a database to a new schema and then attempt to open it with a
- previous version of the code.)
-
- @raise KeyError: if no item corresponded to the given storeID.
-
- @return: an Item, or the given default, if it was passed and no row
- corresponding to the given storeID can be located in the database.
- """
-
- if not isinstance(storeID, (int, long)):
- raise TypeError("storeID *must* be an int or long, not %r" % (
- type(storeID).__name__,))
- if storeID == STORE_SELF_ID:
- return self
- try:
- return self.objectCache.get(storeID)
- except KeyError:
- pass
- log.msg(interface=iaxiom.IStatEvent, stat_cache_misses=1, key=storeID)
- results = self.querySchemaSQL(_schema.TYPEOF_QUERY, [storeID])
- assert (len(results) in [1, 0]),\
- "Database panic: more than one result for TYPEOF!"
- if results:
- typename, module, version = results[0]
- # for the moment we're going to assume no inheritance
- attrs = self.querySQL(self.getTableQuery(typename, version),
- [storeID])
- if len(attrs) != 1:
- if default is _noItem:
- raise errors.ItemNotFound("No results for known-to-be-good object")
- return default
- attrs = attrs[0]
- useMostRecent = False
- moreRecentAvailable = False
-
- # The schema may have changed since the last time I saw the
- # database. Let's look to see if this is suspiciously broken...
-
- if _typeIsTotallyUnknown(typename, version):
- # Another process may have created it - let's re-up the schema
- # and see what we get.
- self._startup()
-
- # OK, all the modules have been loaded now, everything
- # verified.
- if _typeIsTotallyUnknown(typename, version):
-
- # If there is STILL no inkling of it anywhere, we are
- # almost certainly boned. Let's tell the user in a
- # structured way, at least.
- raise errors.UnknownItemType(
- "cannot load unknown schema/version pair: %r %r - id: %r" %
- (typename, version, storeID))
-
- if typename in _typeNameToMostRecentClass:
- moreRecentAvailable = True
- mostRecent = _typeNameToMostRecentClass[typename]
-
- if mostRecent.schemaVersion < version:
- raise RuntimeError("%s:%d - was found in the database and most recent %s is %d" %
- (typename, version, typename, mostRecent.schemaVersion))
- if mostRecent.schemaVersion == version:
- useMostRecent = True
- if useMostRecent:
- T = mostRecent
- else:
- T = self.getOldVersionOf(typename, version)
- x = T.existingInStore(self, storeID, attrs)
- if moreRecentAvailable and (not useMostRecent) and autoUpgrade:
- # upgradeVersion will do caching as necessary, we don't have to
- # cache here. (It must, so that app code can safely call
- # upgradeVersion and get a consistent object out of it.)
- x = self.transact(self._upgradeManager.upgradeItem, x)
- elif not x.__legacy__:
- # We loaded the most recent version of an object
- self.objectCache.cache(storeID, x)
- return x
- if default is _noItem:
- raise KeyError(storeID)
- return default
-
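The `getItemByID` docstring above describes a sentinel-default lookup: a private `_noItem` marker distinguishes "no default supplied" from `default=None`, and an object cache is consulted before the database. A minimal standalone sketch of that pattern (this `Store` and its `rows` dict are illustrative stand-ins, not Axiom's real classes):

```python
_noItem = object()  # sentinel: lets callers legitimately pass default=None

class Store:
    def __init__(self):
        self.cache = {}  # storeID -> loaded item
        self.rows = {}   # storeID -> raw row; stand-in for the SQLite table

    def getItemByID(self, storeID, default=_noItem):
        if not isinstance(storeID, int):
            raise TypeError("storeID must be an int, not %r"
                            % (type(storeID).__name__,))
        # Cheap path first: the in-memory cache.
        try:
            return self.cache[storeID]
        except KeyError:
            pass
        # Fall back to the backing table, caching on a hit.
        if storeID in self.rows:
            item = self.rows[storeID]
            self.cache[storeID] = item
            return item
        # Only raise if the caller did not supply a default.
        if default is _noItem:
            raise KeyError(storeID)
        return default
```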
-
- def querySchemaSQL(self, sql, args=()):
- sql = sql.replace("*DATABASE*", self.databaseName)
- return self.querySQL(sql, args)
-
-
- def querySQL(self, sql, args=()):
- """For use with SELECT (or SELECT-like PRAGMA) statements.
- """
- if self.debug:
- result = timeinto(self.queryTimes, self._queryandfetch, sql, args)
- else:
- result = self._queryandfetch(sql, args)
- return result
-
-
- def _queryandfetch(self, sql, args):
- if self.debug:
- print '**', sql, '--', ', '.join(map(str, args))
- self.cursor.execute(sql, args)
- before = time.time()
- result = list(self.cursor)
- after = time.time()
- if after - before > 2.0:
- log.msg('Extremely long list(cursor): %s' % (after - before,))
- log.msg(sql)
- # import traceback; traceback.print_stack()
- if self.debug:
- print ' lastrow:', self.cursor.lastRowID()
- print ' result:', result
- return result
-
-
- def createSQL(self, sql, args=()):
- """
- For use with auto-committing statements such as CREATE TABLE or CREATE
- INDEX.
- """
- before = time.time()
- self._execSQL(sql, args)
- after = time.time()
- if after - before > 2.0:
- log.msg('Extremely long CREATE: %s' % (after - before,))
- log.msg(sql)
- # import traceback; traceback.print_stack()
-
-
- def _execSQL(self, sql, args):
- if self.debug:
- rows = timeinto(self.execTimes, self._queryandfetch, sql, args)
- else:
- rows = self._queryandfetch(sql, args)
- assert not rows
- return sql
-
-
- def executeSchemaSQL(self, sql, args=()):
- sql = sql.replace("*DATABASE*", self.databaseName)
- return self.executeSQL(sql, args)
-
-
- def executeSQL(self, sql, args=()):
- """
- For use with UPDATE or INSERT statements.
- """
- sql = self._execSQL(sql, args)
- result = self.cursor.lastRowID()
- if self.executedThisTransaction is not None:
- self.executedThisTransaction.append((result, sql, args))
- return result
-
-# This isn't actually useful any more. It turns out that the pysqlite
-# documentation is confusingly worded; it's perfectly possible to create tables
-# within transactions, but PySQLite's automatic transaction management (which
-# we turn off) breaks that. However, a function very much like it will be
-# useful for doing nested transactions without support from the database
-# itself, so I'm keeping it here commented out as an example.
-
-# def _reexecute(self):
-# assert self.executedThisTransaction is not None
-# self._begin()
-# for resultLastTime, sql, args in self.executedThisTransaction:
-# self._execSQL(sql, args)
-# resultThisTime = self.cursor.lastRowID()
-# if resultLastTime != resultThisTime:
-# raise errors.TableCreationConcurrencyError(
-# "Expected to get %s as a result "
-# "of %r:%r, got %s" % (
-# resultLastTime,
-# sql, args,
-# resultThisTime))
-
-
-def timeinto(l, f, *a, **k):
- then = time.time()
- try:
- return f(*a, **k)
- finally:
- now = time.time()
- elapsed = now - then
- l.append(elapsed)
-
-queryTimes = []
-execTimes = []
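The removed module's `timeinto` helper records a call's wall-clock duration even when the call raises, by appending in a `finally` block. The Python 2 original ports directly; a standalone Python 3 rendering of the same pattern:

```python
import time

def timeinto(accumulator, f, *args, **kwargs):
    """Call f(*args, **kwargs), appending the call's wall-clock
    duration to accumulator even when f raises."""
    start = time.time()
    try:
        return f(*args, **kwargs)
    finally:
        accumulator.append(time.time() - start)

# Durations accumulate per call site, as with queryTimes/execTimes above.
queryTimes = []
total = timeinto(queryTimes, sum, [1, 2, 3])
```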
=== removed file 'Axiom/axiom/substore.py'
--- Axiom/axiom/substore.py 2013-07-07 12:35:29 +0000
+++ Axiom/axiom/substore.py 1970-01-01 00:00:00 +0000
@@ -1,122 +0,0 @@
-# -*- test-case-name: axiom.test.test_substore -*-
-
-from zope.interface import implements
-
-from twisted.application import service
-
-from axiom.iaxiom import IPowerupIndirector
-
-from axiom.store import Store
-from axiom.item import Item
-from axiom.attributes import path, inmemory, reference
-
-from axiom.upgrade import registerUpgrader
-
-class SubStore(Item):
-
- schemaVersion = 1
- typeName = 'substore'
-
- storepath = path()
- substore = inmemory()
-
- implements(IPowerupIndirector)
-
- def createNew(cls, store, pathSegments):
- """
- Create a new SubStore, allocating a new file space for it.
- """
- if isinstance(pathSegments, basestring):
- raise ValueError(
- 'Received %r instead of a sequence' % (pathSegments,))
- if store.dbdir is None:
- self = cls(store=store, storepath=None)
- else:
- storepath = store.newDirectory(*pathSegments)
- self = cls(store=store, storepath=storepath)
- self.open()
- self.close()
- return self
-
- createNew = classmethod(createNew)
-
-
- def close(self):
- self.substore.close()
- del self.substore._openSubStore
- del self.substore
-
-
- def open(self, debug=False):
- if hasattr(self, 'substore'):
- return self.substore
- else:
- s = self.substore = self.createStore(debug)
- s._openSubStore = self # don't fall out of cache as long as the
- # store is alive!
- return s
-
-
- def createStore(self, debug):
- """
- Create the actual Store this Substore represents.
- """
- if self.storepath is None:
- self.store._memorySubstores.append(self) # don't fall out of cache
- if self.store.filesdir is None:
- filesdir = None
- else:
- filesdir = (self.store.filesdir.child("_substore_files")
- .child(str(self.storeID))
- .path)
- return Store(parent=self.store,
- filesdir=filesdir,
- idInParent=self.storeID,
- debug=debug)
- else:
- return Store(self.storepath.path,
- parent=self.store,
- idInParent=self.storeID,
- debug=debug)
-
-
- def __conform__(self, interface):
- """
- I adapt my store object to whatever interface I am adapted to. This
- allows for avatar adaptation in L{axiom.userbase} to work properly
- without having to know explicitly that all 'avatars' objects are
- SubStore instances, since it is valid to have non-SubStore avatars,
- which are simply adaptable to the cred interfaces they represent.
- """
- ifa = interface(self.open(debug=self.store.debug), None)
- return ifa
-
-
- def indirect(self, interface):
- """
- Like __conform__, I adapt my store to whatever interface I am asked to
- produce a powerup for. This allows for app stores to be installed as
- powerups for their site stores directly, rather than having an
- additional item type for each interface that we might wish to adapt to.
- """
- return interface(self)
-
-
-
-class SubStoreStartupService(Item, service.Service):
- """
- This class no longer exists. It is here simply to trigger an upgrade which
- deletes it. Ignore it, please.
- """
- installedOn = reference()
- parent = inmemory()
- running = inmemory()
- name = inmemory()
-
- schemaVersion = 2
-
-def eliminateSubStoreStartupService(subservice):
- subservice.deleteFromStore()
- return None
-
-registerUpgrader(eliminateSubStoreStartupService, SubStoreStartupService.typeName, 1, 2)
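The upgrader registered above deletes the obsolete item by returning nothing from it. A hypothetical in-memory registry sketching that mechanism — `registerUpgrader` and `upgradeItem` here are simplified stand-ins for `axiom.upgrade`, not its real signatures:

```python
# Maps (typeName, fromVersion) to (upgrader function, toVersion).
_upgraders = {}

def registerUpgrader(func, typeName, fromVersion, toVersion):
    _upgraders[(typeName, fromVersion)] = (func, toVersion)

def upgradeItem(typeName, version, item):
    # Chain registered upgraders until no further step applies. A
    # deletion upgrader maps the old item to None, as the
    # SubStoreStartupService upgrader above effectively does.
    while (typeName, version) in _upgraders:
        func, version = _upgraders[(typeName, version)]
        item = func(item)
    return item

registerUpgrader(lambda old: None, 'substore-startup-service', 1, 2)
```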
=== removed file 'Axiom/axiom/tags.py'
--- Axiom/axiom/tags.py 2006-06-01 15:53:37 +0000
+++ Axiom/axiom/tags.py 1970-01-01 00:00:00 +0000
@@ -1,125 +0,0 @@
-
-from epsilon.extime import Time
-
-from axiom.item import Item
-from axiom.attributes import text, reference, integer, AND, timestamp
-
-class Tag(Item):
- typeName = 'tag'
- schemaVersion = 1
-
- name = text(doc="""
- The short string which is being applied as a tag to an Item.
- """)
-
- created = timestamp(doc="""
- When this tag was applied to the Item to which it applies.
- """)
-
- object = reference(doc="""
- The Item to which this tag applies.
- """)
-
- catalog = reference(doc="""
- The L{Catalog} item in which this tag was created.
- """)
-
- tagger = reference(doc="""
- An optional reference to the Item which is responsible for this tag's
- existence.
- """)
-
-
-
-class _TagName(Item):
- """
- Helper class to make Catalog.tagNames very fast. One of these is created
- for each distinct tag name that is created. _TagName Items are never
- deleted from the database.
- """
- typeName = 'tagname'
-
- name = text(doc="""
- The short string which uniquely represents this tag.
- """, indexed=True)
-
- catalog = reference(doc="""
- The L{Catalog} item in which this tag exists.
- """)
-
-
-
-class Catalog(Item):
-
- typeName = 'tag_catalog'
- schemaVersion = 2
-
- tagCount = integer(default=0)
-
- def tag(self, obj, tagName, tagger=None):
- """
- """
- # check to see if that tag exists. Put the object attribute first,
- # since each object should only have a handful of tags and the object
- # reference is indexed. As long as this is the case, it doesn't matter
- # whether the name or catalog attributes are indexed because selecting
- # from a small set of results is fast even without an index.
- if self.store.findFirst(Tag,
- AND(Tag.object == obj,
- Tag.name == tagName,
- Tag.catalog == self)):
- return
-
- # if the tag doesn't exist, maybe we need to create a new tagname object
- self.store.findOrCreate(_TagName, name=tagName, catalog=self)
-
- # Increment only if we are creating a new tag
- self.tagCount += 1
- Tag(store=self.store, object=obj,
- name=tagName, catalog=self,
- created=Time(), tagger=tagger)
-
-
- def tagNames(self):
- """
- Return an iterator of unicode strings - the unique tag names which have
- been applied to objects in this catalog.
- """
- return self.store.query(_TagName, _TagName.catalog == self).getColumn("name")
-
-
- def tagsOf(self, obj):
- """
- Return an iterator of unicode strings - the tag names which apply to
- the given object.
- """
- return self.store.query(
- Tag,
- AND(Tag.catalog == self,
- Tag.object == obj)).getColumn("name")
-
-
- def objectsIn(self, tagName):
- return self.store.query(
- Tag,
- AND(Tag.catalog == self,
- Tag.name == tagName)).getColumn("object")
-
-
-
-def upgradeCatalog1to2(oldCatalog):
- """
- Create _TagName instances which version 2 of Catalog automatically creates
- for use in determining the tagNames result, but which version 1 of Catalog
- did not create.
- """
- newCatalog = oldCatalog.upgradeVersion('tag_catalog', 1, 2,
- tagCount=oldCatalog.tagCount)
- tags = newCatalog.store.query(Tag, Tag.catalog == newCatalog)
- tagNames = tags.getColumn("name").distinct()
- for t in tagNames:
- _TagName(store=newCatalog.store, catalog=newCatalog, name=t)
- return newCatalog
-
-from axiom.upgrade import registerUpgrader
-registerUpgrader(upgradeCatalog1to2, 'tag_catalog', 1, 2)
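`Catalog.tag` above follows a check-then-create discipline: duplicate (object, name) pairs are skipped, distinct names get a `_TagName` row, and `tagCount` grows only for genuinely new tags. An in-memory sketch of that logic — this `Catalog` is a plain-Python stand-in, not the Axiom Item above:

```python
class Catalog:
    def __init__(self):
        self._tags = set()       # (object id, tag name) pairs
        self._tagNames = set()   # distinct tag names, like _TagName rows
        self.tagCount = 0

    def tag(self, obj, tagName):
        key = (id(obj), tagName)
        if key in self._tags:
            return               # tag already applied; no count bump
        self._tagNames.add(tagName)
        self.tagCount += 1
        self._tags.add(key)

    def tagNames(self):
        return set(self._tagNames)

    def tagsOf(self, obj):
        return {name for (oid, name) in self._tags if oid == id(obj)}
```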
=== removed directory 'Axiom/axiom/test'
=== removed file 'Axiom/axiom/test/__init__.py'
=== removed file 'Axiom/axiom/test/brokenapp.py'
--- Axiom/axiom/test/brokenapp.py 2006-11-17 20:38:14 +0000
+++ Axiom/axiom/test/brokenapp.py 1970-01-01 00:00:00 +0000
@@ -1,59 +0,0 @@
-# -*- test-case-name: axiom.test.test_upgrading -*-
-
-
-from axiom.item import Item
-from axiom.attributes import text, integer, reference, inmemory
-
-from axiom.upgrade import registerUpgrader
-
-class UpgradersAreBrokenHere(Exception):
- """
- The upgraders in this module are broken. They raise this exception.
- """
-
-class ActivateHelper:
- activated = 0
- def activate(self):
- self.activated += 1
-
-class Adventurer(ActivateHelper, Item):
- typeName = 'test_app_player'
- schemaVersion = 2
-
- name = text()
- activated = inmemory()
-
-
-class Sword(ActivateHelper, Item):
- typeName = 'test_app_sword'
- schemaVersion = 2
-
- name = text()
- damagePerHit = integer()
- owner = reference()
- activated = inmemory()
-
-
-def upgradePlayerAndSword(oldplayer):
- newplayer = oldplayer.upgradeVersion('test_app_player', 1, 2)
- newplayer.name = oldplayer.name
-
- oldsword = oldplayer.sword
-
- newsword = oldsword.upgradeVersion('test_app_sword', 1, 2)
- newsword.name = oldsword.name
- newsword.damagePerHit = oldsword.hurtfulness * 2
- newsword.owner = newplayer
-
- return newplayer, newsword
-
-def player1to2(oldplayer):
- raise UpgradersAreBrokenHere()
-
-def sword1to2(oldsword):
- raise UpgradersAreBrokenHere()
-
-
-registerUpgrader(sword1to2, 'test_app_sword', 1, 2)
-registerUpgrader(player1to2, 'test_app_player', 1, 2)
-
=== removed file 'Axiom/axiom/test/cursortest.py'
--- Axiom/axiom/test/cursortest.py 2008-05-21 16:14:04 +0000
+++ Axiom/axiom/test/cursortest.py 1970-01-01 00:00:00 +0000
@@ -1,164 +0,0 @@
-# -*- test-case-name: axiom.test.test_pysqlite2 -*-
-
-"""
-Test code for any cursor implementation which is to work with Axiom.
-
-This probably isn't complete.
-"""
-
-from axiom.errors import TimeoutError, TableAlreadyExists, SQLError
-
-class StubCursor(object):
- """
- Stand in for an actual database-backed cursor. Used by tests to assert the
- right calls are made to execute and to make sure errors from execute are
- handled correctly.
-
- @ivar statements: A list of SQL strings which have been executed.
- @ivar connection: A reference to the L{StubConnection} which created this
- cursor.
- """
- def __init__(self, connection):
- self.connection = connection
- self.statements = []
-
-
- def execute(self, statement, args=()):
- """
- Capture some SQL for later inspection.
- """
- self.statements.append(statement)
-
-
-
-class StubConnection(object):
- """
- Stand in for an actual database-backed connection. Used by tests to create
- L{StubCursors} to easily test behavior of code which interacts with cursors.
-
- @ivar cursors: A list of all cursors ever created with this connection.
- """
- def __init__(self):
- self.cursors = []
-
-
- def cursor(self):
- """
- Create and return a new L{StubCursor}.
- """
- self.cursors.append(StubCursor(self))
- return self.cursors[-1]
-
-
- def timeout(self):
- """
- Induce behavior indicative of a database-level transient failure which
- might lead to a timeout.
- """
- raise NotImplementedError
-
-
-
-class ConnectionTestCaseMixin:
-
- # The number of seconds we will allow for timeouts in this test suite.
- TIMEOUT = 5.0
-
- # The amount of time beyond the specified timeout we will allow Axiom to
- # waste sleeping. This number shouldn't be changed very often, if ever.
- # We're testing a particular performance feature which we should be able to
- # rely on.
- ALLOWED_SLOP = 0.2
-
-
- def createAxiomConnection(self):
- raise NotImplementedError("Cannot create Axiom Connection instance.")
-
-
- def createStubConnection(self):
- raise NotImplementedError("Cannot create Axiom Connection instance.")
-
-
- def createRealConnection(self):
- """
- Create a memory-backed database connection for integration testing.
- """
- raise NotImplementedError("Real connection creation not implemented.")
-
-
- def test_identifyTableCreationError(self):
- """
- When the same table is created twice, we should get a TableAlreadyExists
- exception.
- """
- con = self.createRealConnection()
- cur = con.cursor()
- CREATE_TABLE = "create table foo (bar integer)"
- cur.execute(CREATE_TABLE)
- e = self.assertRaises(TableAlreadyExists, cur.execute, CREATE_TABLE)
-
-
- def test_identifyGenericError(self):
- """
- When invalid SQL is issued, we should get a SQLError exception.
- """
- con = self.createRealConnection()
- cur = con.cursor()
- INVALID_STATEMENT = "not an SQL string"
- e = self.assertRaises(SQLError, cur.execute, INVALID_STATEMENT)
-
-
- def test_cursor(self):
- """
- Test that the cursor method can actually create a cursor object.
- """
- stubConnection = self.createStubConnection()
- axiomConnection = self.createAxiomConnection(stubConnection)
- axiomCursor = axiomConnection.cursor()
-
- self.assertEquals(len(stubConnection.cursors), 1)
- statement = "SELECT foo FROM bar"
- axiomCursor.execute(statement)
- self.assertEquals(len(stubConnection.cursors[0].statements), 1)
- self.assertEquals(stubConnection.cursors[0].statements[0], statement)
-
-
- def test_timeoutExceeded(self):
- """
- Test that the timeout we pass to the Connection is respected.
- """
- clock = [0]
- def time():
- return clock[0]
- def sleep(n):
- clock[0] += n
-
- stubConnection = self.createStubConnection()
- axiomConnection = self.createAxiomConnection(stubConnection, timeout=self.TIMEOUT)
- axiomCursor = axiomConnection.cursor()
-
- axiomCursor.time = time
- axiomCursor.sleep = sleep
-
- def execute(statement, args=()):
- if time() < self.TIMEOUT * 2:
- return stubConnection.timeout()
- return object()
-
- stubConnection.cursors[0].execute = execute
-
- statement = 'SELECT foo FROM bar'
- timeoutException = self.assertRaises(
- TimeoutError,
- axiomCursor.execute, statement)
-
- self.failUnless(
- self.TIMEOUT <= time() <= self.TIMEOUT + self.ALLOWED_SLOP,
- "Wallclock duration of execute() call out of bounds.")
-
- self.assertEquals(timeoutException.statement, statement)
- self.assertEquals(timeoutException.timeout, self.TIMEOUT)
- self.failUnless(isinstance(
- timeoutException.underlying,
- self.expectedUnderlyingExceptionClass))
-
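`test_timeoutExceeded` above works because the cursor reads `time` and `sleep` from instance attributes, so a test can substitute a virtual clock and exercise a five-second timeout instantly. A self-contained sketch of that technique — `RetryUntilTimeout` is a hypothetical class, not Axiom's cursor:

```python
import time

class RetryUntilTimeout:
    """Retry an operation until it succeeds or `timeout` seconds elapse.
    `time` and `sleep` are attributes precisely so tests can inject
    fakes, as test_timeoutExceeded does with its list-based clock."""
    def __init__(self, timeout):
        self.timeout = timeout
        self.time = time.time
        self.sleep = time.sleep

    def run(self, operation):
        start = self.time()
        while True:
            try:
                return operation()
            except IOError:
                # Re-raise once the (possibly virtual) deadline passes.
                if self.time() - start >= self.timeout:
                    raise
                self.sleep(0.1)

# A deterministic test clock: "sleeping" advances virtual time instantly.
clock = [0.0]
retrier = RetryUntilTimeout(5.0)
retrier.time = lambda: clock[0]
retrier.sleep = lambda n: clock.__setitem__(0, clock[0] + n)
```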
=== removed file 'Axiom/axiom/test/deleteswordapp.py'
--- Axiom/axiom/test/deleteswordapp.py 2008-08-07 14:03:07 +0000
+++ Axiom/axiom/test/deleteswordapp.py 1970-01-01 00:00:00 +0000
@@ -1,17 +0,0 @@
-# -*- test-case-name: axiom.test.test_upgrading -*-
-
-"""
-New version of L{axiom.test.oldapp} which upgrades swords by deleting them.
-"""
-
-from axiom.item import Item
-from axiom.attributes import text
-from axiom.upgrade import registerDeletionUpgrader
-
-class Sword(Item):
- typeName = 'test_app_sword'
- schemaVersion = 2
-
- name = text()
-
-registerDeletionUpgrader(Sword, 1, 2)
=== removed directory 'Axiom/axiom/test/historic'
=== removed file 'Axiom/axiom/test/historic/__init__.py'
--- Axiom/axiom/test/historic/__init__.py 2006-03-21 22:35:27 +0000
+++ Axiom/axiom/test/historic/__init__.py 1970-01-01 00:00:00 +0000
@@ -1,1 +0,0 @@
-# -*- test-case-name: axiom.test.historic -*-
=== removed file 'Axiom/axiom/test/historic/account1to2.axiom.tbz2'
Binary files Axiom/axiom/test/historic/account1to2.axiom.tbz2 2005-11-15 15:26:28 +0000 and Axiom/axiom/test/historic/account1to2.axiom.tbz2 1970-01-01 00:00:00 +0000 differ
=== removed file 'Axiom/axiom/test/historic/catalog1to2.axiom.tbz2'
Binary files Axiom/axiom/test/historic/catalog1to2.axiom.tbz2 2006-06-01 15:53:37 +0000 and Axiom/axiom/test/historic/catalog1to2.axiom.tbz2 1970-01-01 00:00:00 +0000 differ
=== removed file 'Axiom/axiom/test/historic/loginMethod1to2.axiom.tbz2'
Binary files Axiom/axiom/test/historic/loginMethod1to2.axiom.tbz2 2005-12-27 20:40:16 +0000 and Axiom/axiom/test/historic/loginMethod1to2.axiom.tbz2 1970-01-01 00:00:00 +0000 differ
=== removed file 'Axiom/axiom/test/historic/manhole1to2.axiom.tbz2'
Binary files Axiom/axiom/test/historic/manhole1to2.axiom.tbz2 2009-06-01 16:46:29 +0000 and Axiom/axiom/test/historic/manhole1to2.axiom.tbz2 1970-01-01 00:00:00 +0000 differ
=== removed file 'Axiom/axiom/test/historic/parentHook2to3.axiom.tbz2'
Binary files Axiom/axiom/test/historic/parentHook2to3.axiom.tbz2 2009-07-07 20:35:43 +0000 and Axiom/axiom/test/historic/parentHook2to3.axiom.tbz2 1970-01-01 00:00:00 +0000 differ
=== removed file 'Axiom/axiom/test/historic/parentHook3to4.axiom.tbz2'
Binary files Axiom/axiom/test/historic/parentHook3to4.axiom.tbz2 2009-07-07 20:35:43 +0000 and Axiom/axiom/test/historic/parentHook3to4.axiom.tbz2 1970-01-01 00:00:00 +0000 differ
=== removed file 'Axiom/axiom/test/historic/processor1to2.axiom.tbz2'
Binary files Axiom/axiom/test/historic/processor1to2.axiom.tbz2 2006-07-25 16:39:57 +0000 and Axiom/axiom/test/historic/processor1to2.axiom.tbz2 1970-01-01 00:00:00 +0000 differ
=== removed file 'Axiom/axiom/test/historic/scheduler1to2.axiom.tbz2'
Binary files Axiom/axiom/test/historic/scheduler1to2.axiom.tbz2 2009-07-07 20:35:43 +0000 and Axiom/axiom/test/historic/scheduler1to2.axiom.tbz2 1970-01-01 00:00:00 +0000 differ
=== removed file 'Axiom/axiom/test/historic/stub_account1to2.py'
--- Axiom/axiom/test/historic/stub_account1to2.py 2006-03-21 22:35:27 +0000
+++ Axiom/axiom/test/historic/stub_account1to2.py 1970-01-01 00:00:00 +0000
@@ -1,17 +0,0 @@
-
-from axiom.userbase import LoginSystem
-from axiom.test.test_userbase import GarbageProtocolHandler
-
-def createDatabase(s):
- ls = LoginSystem(store=s)
- ls.installOn(s)
- acc = ls.addAccount(u'test', u'example.com', 'asdf')
- ss = acc.avatars.open()
- gph = GarbageProtocolHandler(store=ss, garbage=7)
- gph.installOn(ss)
- # ls.addAccount(u'test2', u'example.com', 'ghjk')
-
-from axiom.test.historic.stubloader import saveStub
-
-if __name__ == '__main__':
- saveStub(createDatabase)
=== removed file 'Axiom/axiom/test/historic/stub_catalog1to2.py'
--- Axiom/axiom/test/historic/stub_catalog1to2.py 2006-07-25 16:39:57 +0000
+++ Axiom/axiom/test/historic/stub_catalog1to2.py 1970-01-01 00:00:00 +0000
@@ -1,28 +0,0 @@
-# -*- test-case-name: axiom.test.historic.test_catalog1to2 -*-
-
-
-from axiom.item import Item
-from axiom.attributes import text
-from axiom.tags import Catalog
-from axiom.test.historic.stubloader import saveStub
-
-class Dummy(Item):
- attribute = text(doc="dummy attribute")
-
-
-
-def createDatabase(s):
- """
- Populate the given Store with a Catalog and some Tags.
- """
- c = Catalog(store=s)
- c.tag(c, u"internal")
- c.tag(s, u"internal")
- i = Dummy(store=s)
- c.tag(i, u"external")
- c.tag(i, u"green")
-
-
-
-if __name__ == '__main__':
- saveStub(createDatabase, 6917)
=== removed file 'Axiom/axiom/test/historic/stub_loginMethod1to2.py'
--- Axiom/axiom/test/historic/stub_loginMethod1to2.py 2006-03-21 22:35:27 +0000
+++ Axiom/axiom/test/historic/stub_loginMethod1to2.py 1970-01-01 00:00:00 +0000
@@ -1,17 +0,0 @@
-
-from axiom.userbase import LoginSystem
-from axiom.test.test_userbase import GarbageProtocolHandler
-from axiom.test.historic.test_loginMethod1to2 import CREDENTIALS, GARBAGE_LEVEL
-
-def createDatabase(s):
- ls = LoginSystem(store=s)
- ls.installOn(s)
- acc = ls.addAccount(*CREDENTIALS)
- ss = acc.avatars.open()
- gph = GarbageProtocolHandler(store=ss, garbage=GARBAGE_LEVEL)
- gph.installOn(ss)
-
-from axiom.test.historic.stubloader import saveStub
-
-if __name__ == '__main__':
- saveStub(createDatabase)
=== removed file 'Axiom/axiom/test/historic/stub_manhole1to2.py'
--- Axiom/axiom/test/historic/stub_manhole1to2.py 2009-06-01 16:46:29 +0000
+++ Axiom/axiom/test/historic/stub_manhole1to2.py 1970-01-01 00:00:00 +0000
@@ -1,12 +0,0 @@
-# -*- test-case-name: axiom.test.historic.test_manhole1to2 -*-
-# Copyright 2008 Divmod, Inc. See LICENSE for details.
-
-from axiom.dependency import installOn
-from axiom.batch import BatchManholePowerup
-from axiom.test.historic.stubloader import saveStub
-
-def createDatabase(store):
- installOn(BatchManholePowerup(store=store), store)
-
-if __name__ == '__main__':
- saveStub(createDatabase, 16829)
=== removed file 'Axiom/axiom/test/historic/stub_parentHook2to3.py'
--- Axiom/axiom/test/historic/stub_parentHook2to3.py 2009-07-07 20:35:43 +0000
+++ Axiom/axiom/test/historic/stub_parentHook2to3.py 1970-01-01 00:00:00 +0000
@@ -1,23 +0,0 @@
-# -*- test-case-name: axiom.test.historic.test_parentHook2to3 -*-
-"""
-Generate a test stub for upgrading L{_SubSchedulerParentHook} from version 2 to
-3, which removes the C{scheduledAt} attribute.
-"""
-
-from axiom.test.historic.stubloader import saveStub
-
-from axiom.dependency import installOn
-from axiom.scheduler import Scheduler, _SubSchedulerParentHook
-from axiom.substore import SubStore
-
-def createDatabase(store):
- scheduler = Scheduler(store=store)
- installOn(scheduler, store)
- installOn(
- _SubSchedulerParentHook(
- store=store, loginAccount=SubStore(store=store)),
- store)
-
-
-if __name__ == '__main__':
- saveStub(createDatabase, 16800)
=== removed file 'Axiom/axiom/test/historic/stub_parentHook3to4.py'
--- Axiom/axiom/test/historic/stub_parentHook3to4.py 2009-07-07 20:35:43 +0000
+++ Axiom/axiom/test/historic/stub_parentHook3to4.py 1970-01-01 00:00:00 +0000
@@ -1,24 +0,0 @@
-# -*- test-case-name: axiom.test.historic.test_parentHook3to4 -*-
-
-"""
-Generate a test stub for upgrading L{_SubSchedulerParentHook} from version 3 to
-4, which removes the C{scheduler} attribute.
-"""
-
-from axiom.test.historic.stubloader import saveStub
-
-from axiom.dependency import installOn
-from axiom.scheduler import Scheduler, _SubSchedulerParentHook
-from axiom.substore import SubStore
-
-def createDatabase(store):
- scheduler = Scheduler(store=store)
- installOn(scheduler, store)
- installOn(
- _SubSchedulerParentHook(
- store=store, loginAccount=SubStore(store=store)),
- store)
-
-
-if __name__ == '__main__':
- saveStub(createDatabase, 17606)
=== removed file 'Axiom/axiom/test/historic/stub_processor1to2.py'
--- Axiom/axiom/test/historic/stub_processor1to2.py 2006-07-25 16:39:57 +0000
+++ Axiom/axiom/test/historic/stub_processor1to2.py 1970-01-01 00:00:00 +0000
@@ -1,29 +0,0 @@
-# -*- test-case-name: axiom.test.historic.test_processor1to2 -*-
-
-from axiom.item import Item
-from axiom.attributes import text
-from axiom.batch import processor
-
-from axiom.test.historic.stubloader import saveStub
-
-
-class Dummy(Item):
- __module__ = 'axiom.test.historic.stub_processor1to2'
- typeName = 'axiom_test_historic_stub_processor1to2_dummy'
-
- attribute = text()
-
-
-DummyProcessor = processor(Dummy)
-
-
-def createDatabase(s):
- """
- Put a processor of some sort into a Store.
- """
- t = DummyProcessor(store=s)
- print t.typeName
-
-
-if __name__ == '__main__':
- saveStub(createDatabase, 7973)
=== removed file 'Axiom/axiom/test/historic/stub_scheduler1to2.py'
--- Axiom/axiom/test/historic/stub_scheduler1to2.py 2009-07-07 20:35:43 +0000
+++ Axiom/axiom/test/historic/stub_scheduler1to2.py 1970-01-01 00:00:00 +0000
@@ -1,19 +0,0 @@
-# test-case-name: axiom.test.historic.test_scheduler1to2
-
-"""
-Database creator for the test for the upgrade of Scheduler from version 1 to
-version 2.
-"""
-
-from axiom.test.historic.stubloader import saveStub
-
-from axiom.scheduler import Scheduler
-from axiom.dependency import installOn
-
-
-def createDatabase(store):
- installOn(Scheduler(store=store), store)
-
-
-if __name__ == '__main__':
- saveStub(createDatabase, 17606)
=== removed file 'Axiom/axiom/test/historic/stub_subStoreStartupService1to2.py'
--- Axiom/axiom/test/historic/stub_subStoreStartupService1to2.py 2006-07-08 04:10:31 +0000
+++ Axiom/axiom/test/historic/stub_subStoreStartupService1to2.py 1970-01-01 00:00:00 +0000
@@ -1,51 +0,0 @@
-# -*- test-case-name: axiom.test.historic.test_subStoreStartupService1to2 -*-
-
-from zope.interface import implements
-from twisted.application.service import IService
-
-from axiom.item import Item
-from axiom.attributes import boolean
-from axiom.substore import SubStore, SubStoreStartupService
-
-from axiom.test.historic.stubloader import saveStub
-
-class DummyService(Item):
- """
- Service which does nothing but mark itself as run, if it's ever run. After
- the upgrader it should not be run.
- """
- typeName = 'substore_service_upgrade_stub_service'
- everStarted = boolean(default=False)
- implements(IService)
-
- name = property(lambda self: "sucky-service")
- running = property(lambda self: False)
-
- def setName(self, name):
- pass
- def setServiceParent(self, parent):
- pass
- def disownServiceParent(self):
- pass
- def startService(self):
- self.everStarted = True
- def stopService(self):
- pass
- def privilegedStartService(self):
- pass
-
-
-def createDatabase(s):
- """
- Create a store which contains a substore-service-starter item powered up
- for IService, and a substore, which contains a service that should not be
- started after the upgrader runs.
- """
- ssi = SubStore.createNew(s, ["sub", "test"])
- ss = ssi.open()
- ds = DummyService(store=ss)
- ss.powerUp(ds, IService)
- ssss = SubStoreStartupService(store=s).installOn(s)
-
-if __name__ == '__main__':
- saveStub(createDatabase, 7615)
=== removed file 'Axiom/axiom/test/historic/stub_subscheduler1to2.py'
--- Axiom/axiom/test/historic/stub_subscheduler1to2.py 2010-07-16 16:55:26 +0000
+++ Axiom/axiom/test/historic/stub_subscheduler1to2.py 1970-01-01 00:00:00 +0000
@@ -1,21 +0,0 @@
-# test-case-name: axiom.test.historic.test_subscheduler1to2
-
-"""
-Database creator for the test for the upgrade of SubScheduler from version 1 to
-version 2.
-"""
-
-from axiom.test.historic.stubloader import saveStub
-
-from axiom.substore import SubStore
-from axiom.scheduler import SubScheduler
-from axiom.dependency import installOn
-
-
-def createDatabase(store):
- sub = SubStore.createNew(store, ["substore"]).open()
- installOn(SubScheduler(store=sub), sub)
-
-
-if __name__ == '__main__':
- saveStub(createDatabase, 17606)
=== removed file 'Axiom/axiom/test/historic/stub_textlist.py'
--- Axiom/axiom/test/historic/stub_textlist.py 2009-06-01 16:46:29 +0000
+++ Axiom/axiom/test/historic/stub_textlist.py 1970-01-01 00:00:00 +0000
@@ -1,25 +0,0 @@
-# -*- test-case-name: axiom.test.historic.test_textlist -*-
-
-
-from axiom.item import Item
-from axiom.attributes import textlist
-from axiom.test.historic.stubloader import saveStub
-
-class Dummy(Item):
- typeName = 'axiom_textlist_dummy'
- schemaVersion = 1
-
- attribute = textlist(doc="a textlist")
-
-
-
-def createDatabase(s):
- """
- Populate the given Store with some Dummy items.
- """
- Dummy(store=s, attribute=[u'foo', u'bar'])
-
-
-
-if __name__ == '__main__':
- saveStub(createDatabase, 11858)
=== removed file 'Axiom/axiom/test/historic/stubloader.py'
--- Axiom/axiom/test/historic/stubloader.py 2007-02-21 18:54:38 +0000
+++ Axiom/axiom/test/historic/stubloader.py 1970-01-01 00:00:00 +0000
@@ -1,74 +0,0 @@
-
-import os
-import sys
-import shutil
-import tarfile
-import inspect
-
-from twisted.trial import unittest
-from twisted.application.service import IService
-
-from axiom.store import Store
-
-def saveStub(funcobj, revision):
- """
- Create a stub database and populate it using the given function.
-
- @param funcobj: A one-argument callable which will be invoked with an Axiom
- Store instance and should add to it the old state which will be used to
- test an upgrade.
-
- @param revision: An SVN revision of trunk at which it was possible for
- funcobj to create the necessary state.
- """
- # You may notice certain files don't pass the second argument. They don't
- # work any more. Please feel free to update them with the revision number
- # they were created at.
- filename = inspect.getfile(funcobj)
- dbfn = os.path.join(
- os.path.dirname(filename),
- os.path.basename(filename).split("stub_")[1].split('.py')[0]+'.axiom')
-
- s = Store(dbfn)
- s.transact(funcobj, s)
-
- s.close()
- tarball = tarfile.open(dbfn+'.tbz2', 'w:bz2')
- tarball.add(os.path.basename(dbfn))
- tarball.close()
- shutil.rmtree(dbfn)
-
-
-
-class StubbedTest(unittest.TestCase):
-
- def openLegacyStore(self):
- """
- Extract the Store tarball associated with this test, open it, and return
- it.
- """
- temp = self.mktemp()
- f = sys.modules[self.__module__].__file__
- dfn = os.path.join(
- os.path.dirname(f),
- os.path.basename(f).split("test_")[1].split('.py')[0]+'.axiom')
- arcname = dfn + '.tbz2'
- tarball = tarfile.open(arcname, 'r:bz2')
- for member in tarball.getnames():
- tarball.extract(member, temp)
- return Store(os.path.join(temp, os.path.basename(dfn)))
-
-
- def setUp(self):
- """
- Prepare to test a stub by opening and then fully upgrading the legacy
- store.
- """
- self.store = self.openLegacyStore()
- self.service = IService(self.store)
- self.service.startService()
- return self.store.whenFullyUpgraded()
-
-
- def tearDown(self):
- return self.service.stopService()
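The `saveStub`/`openLegacyStore` pair in the removed `stubloader.py` packs a freshly created store directory into a `.tbz2` fixture and later extracts it into a scratch directory for upgrade testing. A minimal sketch of that round trip using only the standard library (no Axiom; the directory and file names here are made up for illustration):

```python
import os
import shutil
import tarfile
import tempfile

def pack_fixture(dbdir):
    """Archive a database directory as <dbdir>.tbz2, as saveStub does."""
    tarball = tarfile.open(dbdir + '.tbz2', 'w:bz2')
    # Add by basename so the archive extracts relative to any scratch dir.
    tarball.add(dbdir, arcname=os.path.basename(dbdir))
    tarball.close()
    shutil.rmtree(dbdir)
    return dbdir + '.tbz2'

def unpack_fixture(archive, workdir):
    """Extract a packed fixture into a scratch dir, as openLegacyStore does."""
    tarball = tarfile.open(archive, 'r:bz2')
    tarball.extractall(workdir)
    tarball.close()
    return os.path.join(workdir,
                        os.path.basename(archive)[:-len('.tbz2')])

# Round-trip a fake store directory.
tmp = tempfile.mkdtemp()
dbdir = os.path.join(tmp, 'textlist.axiom')
os.makedirs(dbdir)
with open(os.path.join(dbdir, 'db.sqlite'), 'w') as f:
    f.write('fixture contents')
archive = pack_fixture(dbdir)
restored = unpack_fixture(archive, os.path.join(tmp, 'scratch'))
with open(os.path.join(restored, 'db.sqlite')) as f:
    print(f.read())   # fixture contents
```

The key design point is archiving by basename, which is why `saveStub` tars `os.path.basename(dbfn)`: the fixture can then be unpacked under any temporary directory at test time.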
=== removed file 'Axiom/axiom/test/historic/subStoreStartupService1to2.axiom.tbz2'
Binary files Axiom/axiom/test/historic/subStoreStartupService1to2.axiom.tbz2 2006-07-08 04:10:31 +0000 and Axiom/axiom/test/historic/subStoreStartupService1to2.axiom.tbz2 1970-01-01 00:00:00 +0000 differ
=== removed file 'Axiom/axiom/test/historic/subscheduler1to2.axiom.tbz2'
Binary files Axiom/axiom/test/historic/subscheduler1to2.axiom.tbz2 2010-07-16 16:55:26 +0000 and Axiom/axiom/test/historic/subscheduler1to2.axiom.tbz2 1970-01-01 00:00:00 +0000 differ
=== removed file 'Axiom/axiom/test/historic/test_account1to2.py'
--- Axiom/axiom/test/historic/test_account1to2.py 2006-03-21 23:58:20 +0000
+++ Axiom/axiom/test/historic/test_account1to2.py 1970-01-01 00:00:00 +0000
@@ -1,25 +0,0 @@
-
-from twisted.cred.portal import Portal, IRealm
-
-from twisted.cred.checkers import ICredentialsChecker
-from twisted.cred.credentials import UsernamePassword
-
-from axiom.test.test_userbase import IGarbage
-
-from axiom.test.historic import stubloader
-
-SECRET = 'asdf'
-SECRET2 = 'ghjk'
-
-class AccountUpgradeTest(stubloader.StubbedTest):
- def testUpgrade(self):
- p = Portal(IRealm(self.store),
- [ICredentialsChecker(self.store)])
-
- def loggedIn((ifc, av, lgo)):
- assert av.garbage == 7
- # Bug in cooperator? this triggers an exception.
- # return svc.stopService()
- d = p.login(
- UsernamePassword('test@xxxxxxxxxxx', SECRET), None, IGarbage)
- return d.addCallback(loggedIn)
=== removed file 'Axiom/axiom/test/historic/test_catalog1to2.py'
--- Axiom/axiom/test/historic/test_catalog1to2.py 2006-06-01 15:53:37 +0000
+++ Axiom/axiom/test/historic/test_catalog1to2.py 1970-01-01 00:00:00 +0000
@@ -1,14 +0,0 @@
-
-from axiom.tags import Catalog
-from axiom.test.historic.stubloader import StubbedTest
-
-class CatalogUpgradeTest(StubbedTest):
- def testCatalogTagNames(self):
- """
- Test that the tagNames method of L{axiom.tags.Catalog} returns all the
- correct tag names.
- """
- c = self.store.findUnique(Catalog)
- self.assertEquals(
- sorted(c.tagNames()),
- [u"external", u"green", u"internal"])
=== removed file 'Axiom/axiom/test/historic/test_loginMethod1to2.py'
--- Axiom/axiom/test/historic/test_loginMethod1to2.py 2006-03-21 23:58:20 +0000
+++ Axiom/axiom/test/historic/test_loginMethod1to2.py 1970-01-01 00:00:00 +0000
@@ -1,25 +0,0 @@
-
-from twisted.cred.portal import Portal, IRealm
-
-from twisted.cred.checkers import ICredentialsChecker
-from twisted.cred.credentials import UsernamePassword
-
-from axiom.test.test_userbase import IGarbage
-
-from axiom.test.historic import stubloader
-
-CREDENTIALS = (u'test', u'example.com', 'secret')
-GARBAGE_LEVEL = 26
-
-class LoginMethodUpgradeTest(stubloader.StubbedTest):
- def testUpgrade(self):
- p = Portal(IRealm(self.store),
- [ICredentialsChecker(self.store)])
-
- def loggedIn((interface, avatarAspect, logout)):
- # If we can log in, the upgrade evidently preserved the credentials.
- self.assertEquals(avatarAspect.garbage, GARBAGE_LEVEL)
-
- creds = UsernamePassword('@'.join(CREDENTIALS[:-1]), CREDENTIALS[-1])
- d = p.login(creds, None, IGarbage)
- return d.addCallback(loggedIn)
=== removed file 'Axiom/axiom/test/historic/test_manhole1to2.py'
--- Axiom/axiom/test/historic/test_manhole1to2.py 2009-06-01 16:46:29 +0000
+++ Axiom/axiom/test/historic/test_manhole1to2.py 1970-01-01 00:00:00 +0000
@@ -1,16 +0,0 @@
-# Copyright 2008 Divmod, Inc. See LICENSE for details.
-
-"""
-Test for the deletion of L{BatchManholePowerup}.
-"""
-
-from axiom.batch import BatchManholePowerup
-from axiom.test.historic.stubloader import StubbedTest
-
-
-class BatchManholePowerupTests(StubbedTest):
- def test_deletion(self):
- """
- The upgrade to schema version 2 deletes L{BatchManholePowerup}.
- """
- self.assertEqual(self.store.query(BatchManholePowerup).count(), 0)
=== removed file 'Axiom/axiom/test/historic/test_parentHook2to3.py'
--- Axiom/axiom/test/historic/test_parentHook2to3.py 2009-07-07 20:35:43 +0000
+++ Axiom/axiom/test/historic/test_parentHook2to3.py 1970-01-01 00:00:00 +0000
@@ -1,44 +0,0 @@
-
-"""
-Test upgrading L{_SubSchedulerParentHook} from version 2 to 3.
-"""
-
-from axiom.test.historic.stubloader import StubbedTest
-
-from axiom.scheduler import _SubSchedulerParentHook
-from axiom.substore import SubStore
-from axiom.dependency import _DependencyConnector
-
-
-class SubSchedulerParentHookUpgradeTests(StubbedTest):
- """
- Test upgrading L{_SubSchedulerParentHook} from version 2 to 3.
- """
- def setUp(self):
- d = StubbedTest.setUp(self)
- def cbSetUp(ignored):
- self.hook = self.store.findUnique(_SubSchedulerParentHook)
- d.addCallback(cbSetUp)
- return d
-
-
- def test_attributesCopied(self):
- """
- The only attribute of L{_SubSchedulerParentHook} which still exists at
- the current version, version 4, C{subStore}, ought to have been
- copied over.
- """
- self.assertIdentical(
- self.hook.subStore, self.store.findUnique(SubStore))
-
-
- def test_uninstalled(self):
- """
- The record of the installation of L{_SubSchedulerParentHook} on the
- store is deleted in the upgrade to schema version 4.
- """
- self.assertEquals(
- list(self.store.query(
- _DependencyConnector,
- _DependencyConnector.installee == self.hook)),
- [])
=== removed file 'Axiom/axiom/test/historic/test_parentHook3to4.py'
--- Axiom/axiom/test/historic/test_parentHook3to4.py 2009-07-07 20:35:43 +0000
+++ Axiom/axiom/test/historic/test_parentHook3to4.py 1970-01-01 00:00:00 +0000
@@ -1,11 +0,0 @@
-
-"""
-Test upgrading L{_SubSchedulerParentHook} from version 3 to 4.
-"""
-
-# It's the same as upgrading from 2 to 3.
-from axiom.test.historic.test_parentHook2to3 import SubSchedulerParentHookUpgradeTests
-
-
-class SubSchedulerParentHookUpgradeTests(SubSchedulerParentHookUpgradeTests):
- pass
=== removed file 'Axiom/axiom/test/historic/test_processor1to2.py'
--- Axiom/axiom/test/historic/test_processor1to2.py 2009-06-13 08:35:38 +0000
+++ Axiom/axiom/test/historic/test_processor1to2.py 1970-01-01 00:00:00 +0000
@@ -1,40 +0,0 @@
-
-from twisted.internet.defer import Deferred
-
-from axiom.test.historic.stubloader import StubbedTest
-from axiom.test.historic.stub_processor1to2 import DummyProcessor
-
-class ProcessorUpgradeTest(StubbedTest):
- def setUp(self):
- # Ick, we need to catch the run event of DummyProcessor, and I can't
- # think of another way to do it.
- self.dummyRun = DummyProcessor.run.im_func
- self.calledBack = Deferred()
- def dummyRun(calledOn):
- self.calledBack.callback(calledOn)
- DummyProcessor.run = dummyRun
-
- return StubbedTest.setUp(self)
-
-
- def tearDown(self):
- # Okay this is a pretty irrelevant method on a pretty irrelevant class,
- # but we'll fix it anyway.
- DummyProcessor.run = self.dummyRun
-
- return StubbedTest.tearDown(self)
-
-
- def test_pollingRemoval(self):
- """
- Test that processors lose their idleInterval but keep the rest of
- their attributes, and that they get scheduled by the upgrader so they
- can figure out what state they should be in.
- """
- proc = self.store.findUnique(DummyProcessor)
- self.assertEquals(proc.busyInterval, 100)
- self.failIfEqual(proc.scheduled, None)
- def assertion(result):
- self.assertEquals(result, proc)
- return self.calledBack.addCallback(assertion)
-
=== removed file 'Axiom/axiom/test/historic/test_scheduler1to2.py'
--- Axiom/axiom/test/historic/test_scheduler1to2.py 2009-07-07 20:35:43 +0000
+++ Axiom/axiom/test/historic/test_scheduler1to2.py 1970-01-01 00:00:00 +0000
@@ -1,21 +0,0 @@
-
-"""
-Tests for the upgrade of Scheduler from version 1 to version 2, in which it was
-largely supplanted by L{_SiteScheduler}.
-"""
-
-from axiom.iaxiom import IScheduler
-from axiom.scheduler import Scheduler, _SiteScheduler
-from axiom.test.historic.stubloader import StubbedTest
-
-
-class SchedulerUpgradeTests(StubbedTest):
- def test_powerdown(self):
- """
- The L{Scheduler} created by the stub is powered down by the upgrade and
- adapting the L{Store} to L{IScheduler} succeeds with an instance of
- L{_SiteScheduler}.
- """
- scheduler = self.store.findUnique(Scheduler)
- self.assertEquals(list(self.store.interfacesFor(scheduler)), [])
- self.assertIsInstance(IScheduler(self.store), _SiteScheduler)
=== removed file 'Axiom/axiom/test/historic/test_subStoreStartupService1to2.py'
--- Axiom/axiom/test/historic/test_subStoreStartupService1to2.py 2006-07-08 04:10:31 +0000
+++ Axiom/axiom/test/historic/test_subStoreStartupService1to2.py 1970-01-01 00:00:00 +0000
@@ -1,22 +0,0 @@
-
-from axiom.substore import SubStore
-from axiom.test.historic.stubloader import StubbedTest
-
-from axiom.test.historic.stub_subStoreStartupService1to2 import DummyService
-
-class UpgradeTest(StubbedTest):
- def testSubStoreServiceStarterStoppedStartingSubStoreServices(self):
- """
- Verify that the sub-store service starter is removed and substore services
- will not be started.
-
- Also, say that nine times fast.
- """
- ss = self.store.findUnique(SubStore)
- thePowerup = ss.open().findUnique(DummyService)
-
- # The upgrade stub-loading framework actually takes care of invoking
- # the parent store startService, so we don't have to do that here;
- # let's just make sure that the substore's service wasn't started as
- # part of the upgrade.
- self.failIf(thePowerup.everStarted)
=== removed file 'Axiom/axiom/test/historic/test_subscheduler1to2.py'
--- Axiom/axiom/test/historic/test_subscheduler1to2.py 2010-07-16 16:55:26 +0000
+++ Axiom/axiom/test/historic/test_subscheduler1to2.py 1970-01-01 00:00:00 +0000
@@ -1,28 +0,0 @@
-
-"""
-Tests for the upgrade of SubScheduler from version 1 to version 2, in which it
-was largely supplanted by L{_UserScheduler}.
-"""
-
-from axiom.iaxiom import IScheduler
-from axiom.substore import SubStore
-from axiom.scheduler import SubScheduler, _UserScheduler
-from axiom.test.historic.stubloader import StubbedTest
-
-
-class SubSchedulerUpgradeTests(StubbedTest):
- def test_powerdown(self):
- """
- The L{SubScheduler} created by the stub is powered down by the upgrade
- and adapting the L{Store} to L{IScheduler} succeeds with a
- L{_UserScheduler}.
- """
- sub = self.store.findFirst(SubStore).open()
- upgraded = sub.whenFullyUpgraded()
- def subUpgraded(ignored):
- scheduler = sub.findUnique(SubScheduler)
- self.assertEquals(list(sub.interfacesFor(scheduler)), [])
-
- self.assertIsInstance(IScheduler(sub), _UserScheduler)
- upgraded.addCallback(subUpgraded)
- return upgraded
=== removed file 'Axiom/axiom/test/historic/test_textlist.py'
--- Axiom/axiom/test/historic/test_textlist.py 2007-04-03 21:58:55 +0000
+++ Axiom/axiom/test/historic/test_textlist.py 1970-01-01 00:00:00 +0000
@@ -1,11 +0,0 @@
-
-from axiom.test.historic.stubloader import StubbedTest
-from axiom.test.historic.stub_textlist import Dummy
-
-class TextlistTransitionTest(StubbedTest):
- def test_transition(self):
- """
- Test that the textlist survives the encoding transition intact.
- """
- d = self.store.findUnique(Dummy)
- self.assertEquals(d.attribute, [u'foo', u'bar'])
=== removed file 'Axiom/axiom/test/historic/textlist.axiom.tbz2'
Binary files Axiom/axiom/test/historic/textlist.axiom.tbz2 2007-04-03 21:58:55 +0000 and Axiom/axiom/test/historic/textlist.axiom.tbz2 1970-01-01 00:00:00 +0000 differ
=== removed file 'Axiom/axiom/test/itemtest.py'
--- Axiom/axiom/test/itemtest.py 2005-10-28 22:06:23 +0000
+++ Axiom/axiom/test/itemtest.py 1970-01-01 00:00:00 +0000
@@ -1,8 +0,0 @@
-
-from axiom import item, attributes
-
-class PlainItem(item.Item):
- typeName = 'axiom_test_plain_item'
- schemaVersion = 1
-
- plain = attributes.text()
=== removed file 'Axiom/axiom/test/itemtestmain.py'
--- Axiom/axiom/test/itemtestmain.py 2008-05-06 13:33:30 +0000
+++ Axiom/axiom/test/itemtestmain.py 1970-01-01 00:00:00 +0000
@@ -1,15 +0,0 @@
-import sys
-
-from axiom import store
-from twisted.python import filepath
-
-def main(storePath, itemID):
-
- assert 'axiom.test.itemtest' not in sys.modules, "Test is invalid."
-
- st = store.Store(filepath.FilePath(storePath))
- item = st.getItemByID(itemID)
- print item.plain
-
-if __name__ == '__main__':
- main(storePath=sys.argv[1], itemID=int(sys.argv[2]))
=== removed file 'Axiom/axiom/test/morenewapp.py'
--- Axiom/axiom/test/morenewapp.py 2006-03-30 01:22:40 +0000
+++ Axiom/axiom/test/morenewapp.py 1970-01-01 00:00:00 +0000
@@ -1,104 +0,0 @@
-# -*- test-case-name: axiom.test.test_upgrading.SchemaUpgradeTest.testUpgradeWithMissingVersion -*-
-
-
-from axiom.item import Item
-from axiom.attributes import text, integer, reference, inmemory
-
-from axiom.upgrade import registerUpgrader
-
-class ActivateHelper:
- activated = 0
- def activate(self):
- self.activated += 1
-
-class Adventurer(ActivateHelper, Item):
- typeName = 'test_app_player'
- schemaVersion = 2
-
- name = text()
- activated = inmemory()
-
-class InventoryEntry(ActivateHelper, Item):
- typeName = 'test_app_inv'
- schemaVersion = 1
-
- owner = reference()
- owned = reference()
-
- activated = inmemory()
-
-class Sword(ActivateHelper, Item):
- typeName = 'test_app_sword'
- schemaVersion = 3
-
- name = text()
- damagePerHit = integer()
- activated = inmemory()
-
- def owner():
- def get(self):
- return self.store.findUnique(InventoryEntry,
- InventoryEntry.owned == self).owner
- return get,
-
- owner = property(*owner())
-
-
-def sword2to3(oldsword):
- newsword = oldsword.upgradeVersion('test_app_sword', 2, 3)
- n = oldsword.store.getOldVersionOf('test_app_sword', 2)
- itrbl = oldsword.store.query(n)
- newsword.name = oldsword.name
- newsword.damagePerHit = oldsword.damagePerHit
- invent = InventoryEntry(store=newsword.store,
- owner=oldsword.owner,
- owned=newsword)
- return newsword
-
-
-registerUpgrader(sword2to3, 'test_app_sword', 2, 3)
-
-
-####### DOUBLE-LEGACY UPGRADE SPECTACULAR !! ###########
-
-# declare legacy class.
-
-from axiom.item import declareLegacyItem
-
-declareLegacyItem(typeName = 'test_app_sword',
- schemaVersion = 2,
-
- attributes = dict(name=text(),
- damagePerHit=integer(),
- owner=reference(),
- activated=inmemory()))
-
-
-def upgradePlayerAndSword(oldplayer):
- newplayer = oldplayer.upgradeVersion('test_app_player', 1, 2)
- newplayer.name = oldplayer.name
-
- oldsword = oldplayer.sword
-
- newsword = oldsword.upgradeVersion('test_app_sword', 1, 2,
- name=oldsword.name,
- damagePerHit=oldsword.hurtfulness * 2,
- owner=newplayer)
-
- return newplayer, newsword
-
-def player1to2(oldplayer):
- newplayer, newsword = upgradePlayerAndSword(oldplayer)
- return newplayer
-
-def sword1to2(oldsword):
- oldPlayerType = oldsword.store.getOldVersionOf('test_app_player', 1)
- oldplayer = list(oldsword.store.query(oldPlayerType,
- oldPlayerType.sword == oldsword))[0]
- newplayer, newsword = upgradePlayerAndSword(oldplayer)
- return newsword
-
-
-registerUpgrader(sword1to2, 'test_app_sword', 1, 2)
-registerUpgrader(player1to2, 'test_app_player', 1, 2)
-
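The removed `morenewapp.py` above registers one upgrader per `(typeName, oldVersion, newVersion)` step and relies on the store to chain them until items reach the current schema version. A minimal sketch of that registry pattern, with plain dicts standing in for Axiom items (the sword fields mirror the test schema but the helper names are hypothetical):

```python
# Registry keyed by (typeName, oldVersion, newVersion), as in axiom.upgrade.
_upgraders = {}

def registerUpgrader(func, typeName, old, new):
    _upgraders[(typeName, old, new)] = func

def upgradeToCurrent(item, typeName, version, current):
    # Apply registered upgraders one schema step at a time.
    while version < current:
        item = _upgraders[(typeName, version, version + 1)](item)
        version += 1
    return item

def sword1to2(old):
    # Version 2 renamed 'hurtfulness' to 'damagePerHit', doubling it.
    return {'name': old['name'], 'damagePerHit': old['hurtfulness'] * 2}

def sword2to3(old):
    # Version 3 moved ownership into a separate item; fields carry over.
    return dict(old)

registerUpgrader(sword1to2, 'test_app_sword', 1, 2)
registerUpgrader(sword2to3, 'test_app_sword', 2, 3)

new = upgradeToCurrent({'name': 'Anduril', 'hurtfulness': 4},
                       'test_app_sword', 1, 3)
print(new)   # {'name': 'Anduril', 'damagePerHit': 8}
```

The real machinery differs in that `upgradeVersion` rewrites the row in place and `declareLegacyItem` keeps old schemas queryable; `onestepapp.py` below shows the variant where a single upgrader jumps from version 1 straight to 3.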
=== removed file 'Axiom/axiom/test/newapp.py'
--- Axiom/axiom/test/newapp.py 2005-11-02 00:12:25 +0000
+++ Axiom/axiom/test/newapp.py 1970-01-01 00:00:00 +0000
@@ -1,59 +0,0 @@
-# -*- test-case-name: axiom.test.test_upgrading -*-
-
-
-from axiom.item import Item
-from axiom.attributes import text, integer, reference, inmemory
-
-from axiom.upgrade import registerUpgrader
-
-class ActivateHelper:
- activated = 0
- def activate(self):
- self.activated += 1
-
-class Adventurer(ActivateHelper, Item):
- typeName = 'test_app_player'
- schemaVersion = 2
-
- name = text()
- activated = inmemory()
-
-
-class Sword(ActivateHelper, Item):
- typeName = 'test_app_sword'
- schemaVersion = 2
-
- name = text()
- damagePerHit = integer()
- owner = reference()
- activated = inmemory()
-
-
-def upgradePlayerAndSword(oldplayer):
- newplayer = oldplayer.upgradeVersion('test_app_player', 1, 2)
- newplayer.name = oldplayer.name
-
- oldsword = oldplayer.sword
-
- newsword = oldsword.upgradeVersion('test_app_sword', 1, 2)
- newsword.name = oldsword.name
- newsword.damagePerHit = oldsword.hurtfulness * 2
- newsword.owner = newplayer
-
- return newplayer, newsword
-
-def player1to2(oldplayer):
- newplayer, newsword = upgradePlayerAndSword(oldplayer)
- return newplayer
-
-def sword1to2(oldsword):
- oldPlayerType = oldsword.store.getOldVersionOf('test_app_player', 1)
- oldplayer = list(oldsword.store.query(oldPlayerType,
- oldPlayerType.sword == oldsword))[0]
- newplayer, newsword = upgradePlayerAndSword(oldplayer)
- return newsword
-
-
-registerUpgrader(sword1to2, 'test_app_sword', 1, 2)
-registerUpgrader(player1to2, 'test_app_player', 1, 2)
-
=== removed file 'Axiom/axiom/test/newcirc.py'
--- Axiom/axiom/test/newcirc.py 2006-07-08 04:10:31 +0000
+++ Axiom/axiom/test/newcirc.py 1970-01-01 00:00:00 +0000
@@ -1,32 +0,0 @@
-# -*- test-case-name: axiom.test.test_upgrading.DeletionTest.testCircular -*-
-from axiom.item import Item
-
-from axiom.attributes import reference, integer
-
-class A(Item):
- typeName = 'test_circular_a'
- b = reference()
-
-class B(Item):
- typeName = 'test_circular_b'
- a = reference()
- n = integer()
-
- schemaVersion = 2
-
-from axiom.upgrade import registerUpgrader
-
-def b1to2(oldb):
- # This upgrader isn't doing anything that actually makes sense; in a
- # realistic upgrader, you'd probably be changing A around, perhaps deleting
- # it to destroy old adjunct items and creating a new A. The point is,
- # s.findUnique(A).b should give back the 'b' that you are upgrading whether
- # it is run before or after the upgrade.
- oldb.a.deleteFromStore()
- newb = oldb.upgradeVersion('test_circular_b', 1, 2)
- newb.n = oldb.n
- newb.a = A(store=newb.store,
- b=newb)
- return newb
-
-registerUpgrader(b1to2, 'test_circular_b', 1, 2)
=== removed file 'Axiom/axiom/test/newobsolete.py'
--- Axiom/axiom/test/newobsolete.py 2006-07-08 04:10:31 +0000
+++ Axiom/axiom/test/newobsolete.py 1970-01-01 00:00:00 +0000
@@ -1,23 +0,0 @@
-# -*- test-case-name: axiom.test.test_upgrading.DeletionTest.testPowerups -*-
-
-from axiom.item import Item
-from axiom.attributes import integer
-
-class Obsolete(Item):
- """
- This is a stub placeholder so that axiomInvalidateModule will invalidate
- the appropriate typeName; it's probably bad practice to declare recent
- versions of deleted portions of the schema, but that's not what this is
- testing.
- """
- typeName = 'test_upgrading_obsolete'
- nothing = integer()
- schemaVersion = 2
-
-from axiom.upgrade import registerUpgrader
-
-def obsolete1toNone(oldObsolete):
- oldObsolete.deleteFromStore()
- return None
-
-registerUpgrader(obsolete1toNone, 'test_upgrading_obsolete', 1, 2)
=== removed file 'Axiom/axiom/test/newpath.py'
--- Axiom/axiom/test/newpath.py 2006-07-03 17:35:07 +0000
+++ Axiom/axiom/test/newpath.py 1970-01-01 00:00:00 +0000
@@ -1,14 +0,0 @@
-# -*- test-case-name: axiom.test.test_upgrading.PathUpgrade.testUpgradePath -*-
-
-from axiom.attributes import path
-
-from axiom.item import Item
-
-from axiom.upgrade import registerAttributeCopyingUpgrader
-
-class Path(Item):
- schemaVersion = 2
- typeName = 'test_upgrade_path'
- thePath = path()
-
-registerAttributeCopyingUpgrader(Path, 1, 2)
=== removed file 'Axiom/axiom/test/oldapp.py'
--- Axiom/axiom/test/oldapp.py 2005-07-28 22:09:16 +0000
+++ Axiom/axiom/test/oldapp.py 1970-01-01 00:00:00 +0000
@@ -1,18 +0,0 @@
-
-
-from axiom.item import Item
-from axiom.attributes import text, integer, reference
-
-class Player(Item):
- typeName = 'test_app_player'
- schemaVersion = 1
-
- name = text()
- sword = reference()
-
-class Sword(Item):
- typeName = 'test_app_sword'
- schemaVersion = 1
-
- name = text()
- hurtfulness = integer()
=== removed file 'Axiom/axiom/test/oldcirc.py'
--- Axiom/axiom/test/oldcirc.py 2006-07-08 04:10:31 +0000
+++ Axiom/axiom/test/oldcirc.py 1970-01-01 00:00:00 +0000
@@ -1,14 +0,0 @@
-# -*- test-case-name: axiom.test.test_upgrading.DeletionTest.testCircular -*-
-from axiom.item import Item
-
-from axiom.attributes import reference, integer
-
-class A(Item):
- typeName = 'test_circular_a'
- b = reference()
-
-class B(Item):
- typeName = 'test_circular_b'
- a = reference()
- n = integer()
-
=== removed file 'Axiom/axiom/test/oldobsolete.py'
--- Axiom/axiom/test/oldobsolete.py 2006-07-08 04:10:31 +0000
+++ Axiom/axiom/test/oldobsolete.py 1970-01-01 00:00:00 +0000
@@ -1,11 +0,0 @@
-# -*- test-case-name: axiom.test.test_upgrading.DeletionTest.testPowerups -*-
-
-from axiom.item import Item
-from axiom.attributes import integer
-
-class Obsolete(Item):
- """
- This is an obsolete class that will be destroyed in the upcoming version.
- """
- typeName = 'test_upgrading_obsolete'
- nothing = integer()
=== removed file 'Axiom/axiom/test/oldpath.py'
--- Axiom/axiom/test/oldpath.py 2006-07-03 17:35:07 +0000
+++ Axiom/axiom/test/oldpath.py 1970-01-01 00:00:00 +0000
@@ -1,10 +0,0 @@
-# -*- test-case-name: axiom.test.test_upgrading.PathUpgrade.testUpgradePath -*-
-
-from axiom.attributes import path
-
-from axiom.item import Item
-
-class Path(Item):
- schemaVersion = 1
- typeName = 'test_upgrade_path'
- thePath = path()
=== removed file 'Axiom/axiom/test/onestepapp.py'
--- Axiom/axiom/test/onestepapp.py 2007-03-19 20:59:00 +0000
+++ Axiom/axiom/test/onestepapp.py 1970-01-01 00:00:00 +0000
@@ -1,104 +0,0 @@
-# -*- test-case-name: axiom.test.test_upgrading.SwordUpgradeTest.test_upgradeSkipVersion -*-
-
-"""
-This is a newer version of the module found in oldapp.py, used in the
-upgrading tests. It upgrades from oldapp in one step, rather than requiring an
-intermediary step as morenewapp.py does.
-"""
-
-from axiom.item import Item
-from axiom.attributes import text, integer, reference, inmemory
-
-from axiom.upgrade import registerUpgrader
-
-class ActivateHelper:
- activated = 0
- def activate(self):
- self.activated += 1
-
-class Adventurer(ActivateHelper, Item):
- typeName = 'test_app_player'
- schemaVersion = 2
-
- name = text()
- activated = inmemory()
-
-class InventoryEntry(ActivateHelper, Item):
- typeName = 'test_app_inv'
- schemaVersion = 1
-
- owner = reference()
- owned = reference()
-
- activated = inmemory()
-
-class Sword(ActivateHelper, Item):
- typeName = 'test_app_sword'
- schemaVersion = 3
-
- name = text()
- damagePerHit = integer()
- activated = inmemory()
-
- def owner():
- def get(self):
- return self.store.findUnique(InventoryEntry,
- InventoryEntry.owned == self).owner
- return get,
-
- owner = property(*owner())
-
-
-def sword2to3(oldsword):
- raise RuntimeError("The database does not contain any swords of version 2,"
- " so you should be able to skip this version.")
-
-
-registerUpgrader(sword2to3, 'test_app_sword', 2, 3)
-
-
-####### DOUBLE-LEGACY UPGRADE SPECTACULAR !! ###########
-
-# declare legacy class.
-
-from axiom.item import declareLegacyItem
-
-declareLegacyItem(typeName = 'test_app_sword',
- schemaVersion = 2,
- attributes = dict(name=text(),
- damagePerHit=integer(),
- owner=reference(),
- activated=inmemory()))
-
-
-def upgradePlayerAndSword(oldplayer):
- newplayer = oldplayer.upgradeVersion('test_app_player', 1, 2)
- newplayer.name = oldplayer.name
-
- oldsword = oldplayer.sword
-
- newsword = oldsword.upgradeVersion('test_app_sword', 1, 3,
- name=oldsword.name,
- damagePerHit=oldsword.hurtfulness * 2)
- invent = InventoryEntry(store=newsword.store,
- owner=newplayer,
- owned=newsword)
- return newplayer, newsword
-
-
-def player1to2(oldplayer):
- newplayer, newsword = upgradePlayerAndSword(oldplayer)
- return newplayer
-
-
-def sword1to3(oldsword):
- oldPlayerType = oldsword.store.getOldVersionOf('test_app_player', 1)
- oldplayer = list(oldsword.store.query(oldPlayerType,
- oldPlayerType.sword == oldsword))[0]
- newplayer, newsword = upgradePlayerAndSword(oldplayer)
- return newsword
-
-
-registerUpgrader(sword1to3, 'test_app_sword', 1, 3)
-registerUpgrader(player1to2, 'test_app_player', 1, 2)
-
=== removed file 'Axiom/axiom/test/openthenload.py'
--- Axiom/axiom/test/openthenload.py 2008-05-06 13:33:30 +0000
+++ Axiom/axiom/test/openthenload.py 1970-01-01 00:00:00 +0000
@@ -1,28 +0,0 @@
-# -*- test-case-name: axiom.test.test_xatop.ProcessConcurrencyTestCase -*-
-
-# this file is in support of the named test case
-
-import sys
-
-from axiom.store import Store
-from twisted.python import filepath
-
-# Open the store so that we get the bad version of the schema
-s = Store(filepath.FilePath(sys.argv[1]))
-
-# Alert our parent that we did that
-sys.stdout.write("1")
-sys.stdout.flush()
-
-# Grab the storeID we are supposed to be reading
-sids = sys.stdin.readline()
-sid = int(sids)
-
-# load the item we were told to - this should force a schema reload
-s.getItemByID(sid)
-
-# let our parent process know that we loaded it successfully
-sys.stdout.write("2")
-sys.stdout.flush()
-
-# then terminate cleanly
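The removed `openthenload.py` implements the child half of a two-process handshake: print `"1"` once the store is open, read a storeID from stdin, print `"2"` after loading the item. A hedged sketch of the parent side, with an inline stub child standing in for the real Axiom-based script:

```python
import subprocess
import sys
import textwrap

# Stub child reproducing openthenload.py's protocol without Axiom.
child_source = textwrap.dedent("""
    import sys
    sys.stdout.write("1")          # store opened
    sys.stdout.flush()
    sid = int(sys.stdin.readline())
    sys.stdout.write("2")          # item loaded
    sys.stdout.flush()
""")

proc = subprocess.Popen([sys.executable, '-c', child_source],
                        stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                        text=True)
assert proc.stdout.read(1) == "1"   # wait until the child's store is open
proc.stdin.write("17\n")            # hand over the storeID (17 is made up)
proc.stdin.flush()
assert proc.stdout.read(1) == "2"   # child loaded the item successfully
proc.wait()
```

Single-byte writes with explicit flushes are what make the lockstep ordering reliable; the test case this file supported used that ordering to provoke a schema reload in the child at a controlled moment.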
=== removed file 'Axiom/axiom/test/path_postcopy.py'
--- Axiom/axiom/test/path_postcopy.py 2007-07-03 20:07:11 +0000
+++ Axiom/axiom/test/path_postcopy.py 1970-01-01 00:00:00 +0000
@@ -1,24 +0,0 @@
-# -*- test-case-name: axiom.test.test_upgrading.PathUpgrade.test_postCopy -*-
-
-from axiom.attributes import path
-
-from axiom.item import Item
-
-from axiom.upgrade import registerAttributeCopyingUpgrader
-
-class Path(Item):
- """
- Trivial Item class for testing upgrading.
- """
- schemaVersion = 2
- typeName = 'test_upgrade_path'
- thePath = path()
-
-def fixPath(it):
- """
- An example postcopy function, for fixing up an item after its attributes
- have been copied.
- """
- it.thePath = it.thePath.child("foo")
-
-registerAttributeCopyingUpgrader(Path, 1, 2, postCopy=fixPath)
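`path_postcopy.py` above demonstrates the copy-then-fix-up shape of `registerAttributeCopyingUpgrader(..., postCopy=...)`: every shared attribute is copied verbatim, then the new item is handed to a hook. A minimal sketch of that pattern with dicts in place of Axiom items (the helper name and string paths are illustrative, not the real API):

```python
def attributeCopyingUpgrader(shared_attrs, postCopy=None):
    """Build an upgrader that copies attributes, then runs a fix-up hook."""
    def upgrade(old):
        new = {name: old[name] for name in shared_attrs}
        if postCopy is not None:
            postCopy(new)   # mutate the freshly copied item in place
        return new
    return upgrade

def fixPath(item):
    # Stand-in for FilePath.child("foo"): append a path segment post-copy.
    item['thePath'] = item['thePath'] + '/foo'

upgrade = attributeCopyingUpgrader(['thePath'], postCopy=fixPath)
print(upgrade({'thePath': '/var/db'}))   # {'thePath': '/var/db/foo'}
```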
=== removed file 'Axiom/axiom/test/reactorimporthelper.py'
--- Axiom/axiom/test/reactorimporthelper.py 2009-06-22 20:54:01 +0000
+++ Axiom/axiom/test/reactorimporthelper.py 1970-01-01 00:00:00 +0000
@@ -1,17 +0,0 @@
-# Copyright 2009 Divmod, Inc. See LICENSE file for details
-
-"""
-Helper for axiomatic reactor-selection unit tests.
-"""
-
-# The main point of this file: import the default reactor.
-from twisted.internet import reactor
-
-# Define an Item, too, so that it can go into a Store and trigger an import of
-# this module at schema-check (ie, store opening) time.
-from axiom.item import Item
-from axiom.attributes import integer
-
-class SomeItem(Item):
- attribute = integer()
-
=== removed file 'Axiom/axiom/test/test_attributes.py'
--- Axiom/axiom/test/test_attributes.py 2010-07-14 21:44:42 +0000
+++ Axiom/axiom/test/test_attributes.py 1970-01-01 00:00:00 +0000
@@ -1,458 +0,0 @@
-# -*- test-case-name: axiom.test.test_attributes -*-
-
-import random
-from decimal import Decimal
-
-from epsilon.extime import Time
-
-from twisted.trial.unittest import TestCase
-from twisted.python.reflect import qual
-
-from axiom.store import Store
-from axiom.item import Item, normalize, Placeholder
-
-from axiom.attributes import (
- Comparable, SQLAttribute, integer, timestamp, textlist, ConstraintError,
- ieee754_double, point1decimal, money)
-
-class Number(Item):
- typeName = 'test_number'
- schemaVersion = 1
-
- value = ieee754_double()
-
-
-class IEEE754DoubleTest(TestCase):
-
- def testRoundTrip(self):
- s = Store()
- Number(store=s, value=7.1)
- n = s.findFirst(Number)
- self.assertEquals(n.value, 7.1)
-
- def testFPSumsAreBrokenSoDontUseThem(self):
- s = Store()
- for x in range(10):
- Number(store=s,
- value=0.1)
- self.assertNotEquals(s.query(Number).getColumn("value").sum(),
- 1.0)
-
- # This isn't really a unit test. It's documentation.
- self.assertEquals(s.query(Number).getColumn("value").sum(),
- 0.99999999999999989)
-
-
-
-class _Integer(Item):
- """
- Dummy item with an integer attribute.
- """
- value = integer()
-
-
-
-class IntegerTests(TestCase):
- """
- Tests for L{integer} attributes.
- """
- def setUp(self):
- self.store = Store()
-
-
- def test_roundtrip(self):
- """
- A Python int roundtrips through an integer attribute.
- """
- i = _Integer(store=self.store, value=42)
- self.assertEquals(i.value, 42)
-
-
- def test_roundtripLong(self):
- """
- A Python long roundtrips through an integer attribute.
- """
- i = _Integer(store=self.store, value=42L)
- self.assertEquals(i.value, 42)
-
-
- def test_magnitudeBound(self):
- """
- Storing a value larger than what SQLite supports raises an exception.
- """
- i = _Integer()
- self.assertRaises(ConstraintError, _Integer, value=9999999999999999999)
- self.assertRaises(ConstraintError, _Integer, value=9223372036854775808)
- _Integer(value=9223372036854775807)
- _Integer(value=-9223372036854775808)
- self.assertRaises(ConstraintError, _Integer, value=-9223372036854775809)
- self.assertRaises(ConstraintError, _Integer, value=-9999999999999999999)
-
-
-
-class DecimalDoodad(Item):
- integral = point1decimal(default=0, allowNone=False)
- otherMoney = money(allowNone=True)
- extraintegral = integer()
- money = money(default=0)
-
-class FixedPointDecimalTest(TestCase):
- def testSum(self):
- s = Store()
- for x in range(10):
- DecimalDoodad(store=s,
- money=Decimal("0.10"))
- self.assertEquals(s.query(DecimalDoodad).getColumn("money").sum(),
- 1)
-
- def testRoundTrip(self):
- s = Store()
- DecimalDoodad(store=s, integral=19947,
- money=Decimal("4.3"),
- otherMoney=Decimal("-17.94"))
- self.assertEquals(s.findFirst(DecimalDoodad).integral, 19947)
- self.assertEquals(s.findFirst(DecimalDoodad).money, Decimal("4.3"))
- self.assertEquals(s.findFirst(DecimalDoodad).otherMoney, Decimal("-17.9400"))
-
- def testComparisons(self):
- s = Store()
- DecimalDoodad(store=s,
- money=Decimal("19947.000000"),
- otherMoney=19947)
- self.assertEquals(
- s.query(DecimalDoodad,
- DecimalDoodad.money == DecimalDoodad.otherMoney).count(),
- 1)
- self.assertEquals(
- s.query(DecimalDoodad,
- DecimalDoodad.money != DecimalDoodad.otherMoney).count(),
- 0)
- self.assertEquals(
- s.query(DecimalDoodad,
- DecimalDoodad.money == 19947).count(),
- 1)
- self.assertEquals(
- s.query(DecimalDoodad,
- DecimalDoodad.money == Decimal("19947")).count(),
- 1)
-
-
- def testDisallowedComparisons(self):
- # These tests should go away; it's (mostly) possible to support
- # comparison of different precisions:
-
- # sqlite> select 1/3;
- # 0
- # sqlite> select 3/1;
- # 3
- # sqlite> select 3/2;
- # 1
-
-
- s = Store()
- DecimalDoodad(store=s,
- integral=1,
- money=1)
-
- self.assertRaises(TypeError,
- lambda : s.query(
- DecimalDoodad,
- DecimalDoodad.integral == DecimalDoodad.money))
-
- self.assertRaises(TypeError,
- lambda : s.query(
- DecimalDoodad,
- DecimalDoodad.integral == DecimalDoodad.extraintegral))
-
-
-class SpecialStoreIDAttributeTest(TestCase):
-
- def testStringStoreIDsDontWork(self):
- s = Store()
- sid = Number(store=s, value=1.0).storeID
- self.assertRaises(TypeError, s.getItemByID, str(sid))
- self.assertRaises(TypeError, s.getItemByID, float(sid))
- self.assertRaises(TypeError, s.getItemByID, unicode(sid))
-
-class SortedItem(Item):
- typeName = 'test_sorted_thing'
- schemaVersion = 1
-
- goingUp = integer()
- goingDown = integer()
- theSame = integer()
-
-class SortingTest(TestCase):
-
- def testCompoundSort(self):
- s = Store()
- L = []
- r10 = range(10)
- random.shuffle(r10)
- L.append(SortedItem(store=s,
- goingUp=0,
- goingDown=1000,
- theSame=8))
- for x in r10:
- L.append(SortedItem(store=s,
- goingUp=10+x,
- goingDown=10-x,
- theSame=7))
-
- for colnms in [['goingUp'],
- ['goingUp', 'storeID'],
- ['goingUp', 'theSame'],
- ['theSame', 'goingUp'],
- ['theSame', 'storeID']]:
- LN = L[:]
- LN.sort(key=lambda si: tuple([getattr(si, colnm) for colnm in colnms]))
-
- ascsort = [getattr(SortedItem, colnm).ascending for colnm in colnms]
- descsort = [getattr(SortedItem, colnm).descending for colnm in colnms]
-
- self.assertEquals(LN, list(s.query(SortedItem,
- sort=ascsort)))
- LN.reverse()
- self.assertEquals(LN, list(s.query(SortedItem,
- sort=descsort)))
-
-
-class FunkyItem(Item):
- name = unicode()
-
-class BadAttributeTest(TestCase):
-
- def test_badAttribute(self):
- """
- L{Item} should not allow setting undeclared attributes.
- """
- s = Store()
- err = self.failUnlessRaises(AttributeError,
- FunkyItem, store=s, name=u"foo")
- self.assertEquals(str(err), "'FunkyItem' can't set attribute 'name'")
-
-
-class WhiteboxComparableTest(TestCase):
- def test_likeRejectsIllegalOperations(self):
- """
- Test that invoking the underlying method which provides the interface
- to the LIKE operator raises a TypeError if it is invoked with too few
- arguments.
- """
- self.assertRaises(TypeError, Comparable()._like, 'XYZ')
-
-someRandomDate = Time.fromISO8601TimeAndDate("1980-05-29")
-
-class DatedThing(Item):
- date = timestamp(default=someRandomDate)
-
-class CreationDatedThing(Item):
- creationDate = timestamp(defaultFactory=lambda : Time())
-
-class StructuredDefaultTestCase(TestCase):
- def testTimestampDefault(self):
- s = Store()
- sid = DatedThing(store=s).storeID
- self.assertEquals(s.getItemByID(sid).date,
- someRandomDate)
-
- def testTimestampNow(self):
- s = Store()
- sid = CreationDatedThing(store=s).storeID
- self.failUnless(
- (Time().asDatetime() - s.getItemByID(sid).creationDate.asDatetime()).seconds <
- 10)
-
-
-
-class TaggedListyThing(Item):
- strlist = textlist()
-
-
-
-class StringListTestCase(TestCase):
- def tryRoundtrip(self, value):
- """
- Attempt to roundtrip a value through a database store and load, to
- ensure the representation is not lossy.
- """
- s = Store()
- tlt = TaggedListyThing(store=s, strlist=value)
- self.assertEquals(tlt.strlist, value)
-
- # Force it out of the cache, so it gets reloaded from the store
- del tlt
- tlt = s.findUnique(TaggedListyThing)
- self.assertEquals(tlt.strlist, value)
-
-
- def test_simpleListOfStrings(self):
- """
- Test that a simple list can be stored and retrieved successfully.
- """
- SOME_VALUE = [u'abc', u'def, ghi', u'jkl']
- self.tryRoundtrip(SOME_VALUE)
-
-
- def test_emptyList(self):
- """
- Test that an empty list can be stored and retrieved successfully.
- """
- self.tryRoundtrip([])
-
-
- def test_oldRepresentation(self):
- """
- Test that the new code can still correctly load the old representation
- which could not handle an empty list.
- """
-
- oldCases = [
- (u'foo', [u'foo']),
- (u'', [u'']),
- (u'\x1f', [u'', u'']),
- (u'foo\x1fbar', [u'foo', u'bar']),
- ]
-
- for dbval, pyval in oldCases:
- self.assertEqual(TaggedListyThing.strlist.outfilter(dbval, None), pyval)
-
-
-
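The legacy wire format that C{test_oldRepresentation} above guards against can be sketched as follows (helper names here are illustrative, not Axiom's API): values were joined with the ASCII unit separator, U+001F, which is why the old representation could not express an empty list.

```python
# Sketch of the legacy textlist encoding (old_encode/old_decode are
# hypothetical names, not Axiom's API): strings joined with U+001F.
SEP = u"\x1f"

def old_encode(strings):
    return SEP.join(strings)

def old_decode(dbval):
    # "".split(SEP) yields [""], never [] -- the empty list is
    # unrepresentable, which is the limitation the newer format fixes.
    return dbval.split(SEP)

assert old_decode(u"foo\x1fbar") == [u"foo", u"bar"]
assert old_decode(u"\x1f") == [u"", u""]
assert old_decode(u"") == [u""]   # ambiguous: not the empty list
```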
-class SQLAttributeDummyClass(Item):
- """
- Dummy class which L{SQLAttributeTestCase} will poke at to assert various
- behaviors.
- """
- dummyAttribute = SQLAttribute()
-
-
-
-class FullImplementationDummyClass(Item):
- """
- Dummy class which L{SQLAttributeTestCase} will poke at to assert various
- behaviors - SQLAttribute is really an abstract base class, so this uses a
- concrete attribute (integer) for its assertions.
- """
- dummyAttribute = integer()
-
-
-class SQLAttributeTestCase(TestCase):
- """
- Tests for behaviors of the L{axiom.attributes.SQLAttribute} class.
- """
-
- def test_attributeName(self):
- """
- Test that an L{SQLAttribute} knows its own local name.
- """
- self.assertEquals(
- SQLAttributeDummyClass.dummyAttribute.attrname,
- 'dummyAttribute')
-
-
- def test_fullyQualifiedName(self):
- """
- Test that the L{SQLAttribute.fullyQualifiedName} method correctly
- returns the fully qualified Python name of the attribute: that is, the
- fully qualified Python name of the type it is defined on (plus a dot)
- plus the name of the attribute.
- """
- self.assertEquals(
- SQLAttributeDummyClass.dummyAttribute.fullyQualifiedName(),
- 'axiom.test.test_attributes.SQLAttributeDummyClass.dummyAttribute')
-
-
- def test_fullyQualifiedStoreID(self):
- """
- Test that the L{IColumn} implementation on the storeID emits the
- correct fullyQualifiedName as well. This is necessary because storeID
- is unfortunately implemented differently than other columns, due to its
- presence on Item.
- """
- self.assertEquals(
- SQLAttributeDummyClass.storeID.fullyQualifiedName(),
- 'axiom.test.test_attributes.SQLAttributeDummyClass.storeID')
-
-
- def test_fullyQualifiedPlaceholder(self):
- """
- Verify that the L{IColumn.fullyQualifiedName} implementation on
- placeholder attributes returns a usable string, but one which is
- recognizable as an invalid Python identifier.
- """
- ph = Placeholder(SQLAttributeDummyClass)
- self.assertEquals(
- 'axiom.test.test_attributes.SQLAttributeDummyClass'
- '.dummyAttribute.<placeholder:%d>' % (ph._placeholderCount,),
- ph.dummyAttribute.fullyQualifiedName())
-
-
- def test_accessor(self):
- """
- Test that the __get__ of SQLAttribute does the obvious thing, and returns
- its value when given an instance.
- """
- dummy = FullImplementationDummyClass(dummyAttribute=1234)
- self.assertEquals(
- FullImplementationDummyClass.dummyAttribute.__get__(dummy), 1234)
- self.assertEquals(dummy.dummyAttribute, 1234)
-
-
- def test_storeIDAccessor(self):
- """
- Test that the __get__ of the IColumn implementation for storeID works
- the same as that for normal attributes.
- """
- s = Store()
- dummy = FullImplementationDummyClass(store=s)
- self.assertIdentical(s.getItemByID(dummy.storeID), dummy)
-
- def test_placeholderAccessor(self):
- """
- Test that the __get__ of an attribute accessed through a L{Placeholder}
- does the obvious thing, and returns the underlying attribute's value
- when given an instance.
- """
- dummy = FullImplementationDummyClass(dummyAttribute=1234)
- self.assertEquals(
- Placeholder(FullImplementationDummyClass
- ).dummyAttribute.__get__(dummy), 1234)
- self.assertEquals(dummy.dummyAttribute, 1234)
-
-
- def test_typeAttribute(self):
- """
- Test that the C{type} attribute of an L{SQLAttribute} references the
- class on which the attribute is defined.
- """
- self.assertIdentical(
- SQLAttributeDummyClass,
- SQLAttributeDummyClass.dummyAttribute.type)
-
-
- def test_getShortColumnName(self):
- """
- Test that L{Store.getShortColumnName} returns something pretty close to
- the name of the attribute.
-
- XXX Testing this really well would require being able to parse a good
- chunk of SQL. I don't know how to do that yet. -exarkun
- """
- s = Store()
- self.assertIn(
- 'dummyAttribute',
- s.getShortColumnName(SQLAttributeDummyClass.dummyAttribute))
-
-
- def test_getColumnName(self):
- """
- Test that L{Store.getColumnName} returns something made up of the
- attribute's type's typeName and the attribute's name.
- """
- s = Store()
- self.assertIn(
- 'dummyAttribute',
- s.getColumnName(SQLAttributeDummyClass.dummyAttribute))
- self.assertIn(
- normalize(qual(SQLAttributeDummyClass)),
- s.getColumnName(SQLAttributeDummyClass.dummyAttribute))
=== removed file 'Axiom/axiom/test/test_axiomatic.py'
--- Axiom/axiom/test/test_axiomatic.py 2014-01-15 19:19:18 +0000
+++ Axiom/axiom/test/test_axiomatic.py 1970-01-01 00:00:00 +0000
@@ -1,410 +0,0 @@
-# Copyright 2006-2009 Divmod, Inc. See LICENSE file for details
-
-"""
-Tests for L{axiom.scripts.axiomatic}.
-"""
-
-import sys, os, signal, StringIO
-
-from zope.interface import implements
-
-from twisted.python.log import msg
-from twisted.python.filepath import FilePath
-from twisted.python.procutils import which
-from twisted.python.runtime import platform
-from twisted.trial.unittest import SkipTest, TestCase
-from twisted.plugin import IPlugin
-from twisted.internet import reactor
-from twisted.internet.task import deferLater
-from twisted.internet.protocol import ProcessProtocol
-from twisted.internet.defer import Deferred
-from twisted.internet.error import ProcessTerminated
-from twisted.application.service import IService, IServiceCollection
-
-from axiom.store import Store
-from axiom.item import Item
-from axiom.attributes import boolean
-from axiom.scripts import axiomatic
-from axiom.listversions import SystemVersion
-from axiom.iaxiom import IAxiomaticCommand
-from twisted.plugins.axiom_plugins import AxiomaticStart
-
-from axiom.test.reactorimporthelper import SomeItem
-
-
-class RecorderService(Item):
- """
- Minimal L{IService} implementation which remembers if it was ever started.
- This is used by tests to make sure services get started when they should
- be.
- """
- implements(IService)
-
- started = boolean(
- doc="""
- A flag which is initially false and set to true once C{startService} is
- called.
- """, default=False)
-
- name = "recorder"
-
- def setServiceParent(self, parent):
- """
- Do the standard Axiom thing to make sure this service becomes a child
- of the top-level store service.
- """
- IServiceCollection(parent).addService(self)
-
-
- def startService(self):
- """
- Remember that this method was called.
- """
- self.started = True
-
-
- def stopService(self):
- """
- Ignore this event.
- """
-
-
-
-class StartTests(TestCase):
- """
- Test the axiomatic start sub-command.
- """
- def setUp(self):
- """
- Work around Twisted #3178 by tricking trial into thinking something
- asynchronous is happening.
- """
- return deferLater(reactor, 0, lambda: None)
-
-
- def _getRunDir(self, dbdir):
- return dbdir.child("run")
-
-
- def _getLogDir(self, dbdir):
- return self._getRunDir(dbdir).child("logs")
-
-
- def test_getArguments(self):
- """
- L{Start.getArguments} adds a I{--pidfile} argument if one is not
- present, a I{--logfile} argument if one is not present and
- daemonization is enabled, and a I{--dbdir} argument pointing at the
- store it is passed.
- """
- dbdir = FilePath(self.mktemp())
- store = Store(dbdir)
- run = self._getRunDir(dbdir)
- logs = self._getLogDir(dbdir)
- start = axiomatic.Start()
-
- logfileArg = ["--logfile", logs.child("axiomatic.log").path]
-
- # twistd on Windows doesn't support PID files, so on Windows,
- # getArguments should *not* add --pidfile.
- if platform.isWindows():
- pidfileArg = []
- else:
- pidfileArg = ["--pidfile", run.child("axiomatic.pid").path]
- restArg = ["axiomatic-start", "--dbdir", dbdir.path]
-
- self.assertEqual(
- start.getArguments(store, []),
- logfileArg + pidfileArg + restArg)
- self.assertEqual(
- start.getArguments(store, ["--logfile", "foo"]),
- ["--logfile", "foo"] + pidfileArg + restArg)
- self.assertEqual(
- start.getArguments(store, ["-l", "foo"]),
- ["-l", "foo"] + pidfileArg + restArg)
- self.assertEqual(
- start.getArguments(store, ["--nodaemon"]),
- ["--nodaemon"] + pidfileArg + restArg)
- self.assertEqual(
- start.getArguments(store, ["-n"]),
- ["-n"] + pidfileArg + restArg)
- self.assertEqual(
- start.getArguments(store, ["--pidfile", "foo"]),
- ["--pidfile", "foo"] + logfileArg + restArg)
-
-
- def test_logDirectoryCreated(self):
- """
- If L{Start.getArguments} adds a I{--logfile} argument, it creates the
- necessary directory.
- """
- dbdir = FilePath(self.mktemp())
- store = Store(dbdir)
- start = axiomatic.Start()
- start.getArguments(store, ["-l", "foo"])
- self.assertFalse(self._getLogDir(dbdir).exists())
- start.getArguments(store, [])
- self.assertTrue(self._getLogDir(dbdir).exists())
-
-
- def test_parseOptions(self):
- """
- L{Start.parseOptions} adds axiomatic-suitable defaults for any
- unspecified parameters and then calls L{twistd.run} with the modified
- argument list.
- """
- argv = []
- def fakeRun():
- argv.extend(sys.argv)
- options = axiomatic.Options()
- options['dbdir'] = dbdir = self.mktemp()
- start = axiomatic.Start()
- start.parent = options
- start.run = fakeRun
- original = sys.argv[:]
- try:
- start.parseOptions(["-l", "foo", "--pidfile", "bar"])
- finally:
- sys.argv[:] = original
- self.assertEqual(
- argv,
- [sys.argv[0],
- "-l", "foo", "--pidfile", "bar",
- "axiomatic-start", "--dbdir", os.path.abspath(dbdir)])
-
-
- def test_parseOptionsHelp(self):
- """
- L{Start.parseOptions} writes usage information to stdout if C{"--help"}
- is in the argument list it is passed and L{twistd.run} is not called.
- """
- start = axiomatic.Start()
- start.run = None
- original = sys.stdout
- sys.stdout = stdout = StringIO.StringIO()
- try:
- self.assertRaises(SystemExit, start.parseOptions, ["--help"])
- finally:
- sys.stdout = original
-
- # Some random options that should be present. This is a bad test
- # because we don't control what C{opt_help} actually does and we don't
- # even really care as long as it's the same as what I{twistd --help}
- # does. We could try running them both and comparing, but then we'd
- # still want to do some sanity check against one of them in case we end
- # up getting the twistd version incorrectly somehow... -exarkun
- self.assertIn("--reactor", stdout.getvalue())
- if not platform.isWindows():
- # This isn't an option on Windows, so it shouldn't be there.
- self.assertIn("--uid", stdout.getvalue())
-
- # Also, we don't want to see twistd plugins here.
- self.assertNotIn("axiomatic-start", stdout.getvalue())
-
-
-
- def test_checkSystemVersion(self):
- """
- The L{IService} returned by L{AxiomaticStart.makeService} calls
- L{checkSystemVersion} with its store when it is started.
-
- This is done for I{axiomatic start} rather than somewhere in the
- implementation of L{Store} so that it happens only once per server
- startup. The overhead of doing it whenever a store is opened is
- non-trivial.
- """
- dbdir = self.mktemp()
- store = Store(dbdir)
- service = AxiomaticStart.makeService({'dbdir': dbdir, 'debug': False})
- self.assertEqual(store.query(SystemVersion).count(), 0)
- service.startService()
- self.assertEqual(store.query(SystemVersion).count(), 1)
- return service.stopService()
-
-
- def test_axiomOptions(self):
- """
- L{AxiomaticStart.options} takes database location and debug setting
- parameters.
- """
- options = AxiomaticStart.options()
- options.parseOptions([])
- self.assertEqual(options['dbdir'], None)
- self.assertFalse(options['debug'])
- options.parseOptions(["--dbdir", "foo", "--debug"])
- self.assertEqual(options['dbdir'], 'foo')
- self.assertTrue(options['debug'])
-
-
- def test_makeService(self):
- """
- L{AxiomaticStart.makeService} returns the L{IService} powerup of the
- L{Store} at the directory in the options object it is passed.
- """
- dbdir = FilePath(self.mktemp())
- store = Store(dbdir)
- recorder = RecorderService(store=store)
- self.assertFalse(recorder.started)
- store.powerUp(recorder, IService)
- store.close()
-
- service = AxiomaticStart.makeService({"dbdir": dbdir, "debug": False})
- service.startService()
- service.stopService()
-
- store = Store(dbdir)
- self.assertTrue(store.getItemByID(recorder.storeID).started)
-
-
- def test_reactorSelection(self):
- """
- L{AxiomaticStart} optionally takes the name of a reactor and
- installs it instead of the default reactor.
- """
- # Since this process is already hopelessly distant from the state in
- # which I{axiomatic start} operates, it would make no sense to try a
- # functional test of this behavior in this process. Since the
- # behavior being tested involves lots of subtle interactions between
- # lots of different pieces of code (the reactor might get installed
- # at the end of a ten-deep chain of imports going through as many
- # different projects), it also makes no sense to try to make this a
- # unit test. So, start a child process and try to use the alternate
- # reactor functionality there.
-
- here = FilePath(__file__)
- # Try to find it relative to the source of this test.
- bin = here.parent().parent().parent().child("bin")
- axiomatic = bin.child("axiomatic")
- if axiomatic.exists():
- # Great, use that one.
- axiomatic = axiomatic.path
- else:
- # Try to find it on the path, instead.
- axiomatics = which("axiomatic")
- if axiomatics:
- # Great, it was on the path.
- axiomatic = axiomatics[0]
- else:
- # Nope, not there, give up.
- raise SkipTest(
- "Could not find axiomatic script on path or at %s" % (
- axiomatic.path,))
-
- # Create a store for the child process to use and put an item in it.
- # This will force an import of the module that defines that item's
- # class when the child process starts. The module imports the default
- # reactor at the top-level, making this the worst-case for the reactor
- # selection code.
- storePath = self.mktemp()
- store = Store(storePath)
- SomeItem(store=store)
- store.close()
-
- # Install the select reactor because it is available on all platforms,
- # and because it is still an error to try to install the select reactor
- # even if the already installed reactor was the select reactor.
- argv = [
- sys.executable,
- axiomatic, "-d", storePath,
- "start", "--reactor", "select", "-n"]
- expected = [
- "reactor class: twisted.internet.selectreactor.SelectReactor.",
- "reactor class: <class 'twisted.internet.selectreactor.SelectReactor'>"]
- proto, complete = AxiomaticStartProcessProtocol.protocolAndDeferred(expected)
-
- environ = os.environ.copy()
- reactor.spawnProcess(proto, sys.executable, argv, env=environ)
- return complete
-
-
-
-class AxiomaticStartProcessProtocol(ProcessProtocol):
- """
- L{AxiomaticStartProcessProtocol} watches an I{axiomatic start} process
- and fires a L{Deferred} when it sees either successful reactor
- installation or process termination.
-
- @ivar _success: A flag which is C{False} until the expected text is found
- in the child's stdout and C{True} thereafter.
-
- @ivar _output: A C{str} giving all of the stdout from the child received
- thus far.
- """
- _success = False
- _output = ""
-
-
- def protocolAndDeferred(cls, expected):
- """
- Create and return an L{AxiomaticStartProcessProtocol} and a
- L{Deferred}. The L{Deferred} will fire when the protocol receives
- any of the given strings on standard out or when the process ends,
- whichever comes first.
- """
- proto = cls()
- proto._complete = Deferred()
- proto._expected = expected
- return proto, proto._complete
- protocolAndDeferred = classmethod(protocolAndDeferred)
-
-
- def errReceived(self, bytes):
- """
- Report the given unexpected stderr data.
- """
- msg("Received stderr from axiomatic: %r" % (bytes,))
-
-
- def outReceived(self, bytes):
- """
- Add the given bytes to the output buffer and check to see if the
- reactor has been installed successfully, firing the completion
- L{Deferred} if so.
- """
- msg("Received stdout from axiomatic: %r" % (bytes,))
- self._output += bytes
- if not self._success:
- for line in self._output.splitlines():
- for expectedLine in self._expected:
- if expectedLine in line:
- msg("Received expected output")
- self._success = True
- self.transport.signalProcess("TERM")
-
-
- def processEnded(self, reason):
- """
- Check that the process exited in the way expected and that the required
- text has been found in its output and fire the result L{Deferred} with
- either a value or a failure.
- """
- self._complete, result = None, self._complete
- if self._success:
- if platform.isWindows() or (
- # Windows can't tell that we SIGTERM'd it, so sorry.
- reason.check(ProcessTerminated) and
- reason.value.signal == signal.SIGTERM):
- result.callback(None)
- return
- # Something went wrong.
- result.errback(reason)
-
-
-
-class TestMisc(TestCase):
- """
- Test things not directly involving running axiomatic commands.
- """
- def test_axiomaticCommandProvides(self):
- """
- Test that AxiomaticCommand itself does not provide IAxiomaticCommand or
- IPlugin, but subclasses do.
- """
- self.failIf(IAxiomaticCommand.providedBy(axiomatic.AxiomaticCommand), 'IAxiomaticCommand provided')
- self.failIf(IPlugin.providedBy(axiomatic.AxiomaticCommand), 'IPlugin provided')
-
- class _TestSubClass(axiomatic.AxiomaticCommand):
- pass
-
- self.failUnless(IAxiomaticCommand.providedBy(_TestSubClass), 'IAxiomaticCommand not provided')
- self.failUnless(IPlugin.providedBy(_TestSubClass), 'IPlugin not provided')
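The defaulting behaviour checked by C{test_getArguments} above, where a user-supplied I{--logfile} or I{--pidfile} suppresses the axiomatic default, follows a common pattern. A minimal sketch with hypothetical names, not Axiom's actual implementation:

```python
# Hypothetical sketch of the option-defaulting pattern exercised by
# test_getArguments: append a default option pair only when none of its
# spellings already appears in the user's arguments.
def with_defaults(args, defaults):
    """
    args     -- user-supplied argv fragment, e.g. ["-l", "foo"]
    defaults -- (spellings, default_pair) tuples, e.g.
                ({"--logfile", "-l"}, ["--logfile", "run/logs/axiomatic.log"])
    """
    out = list(args)
    given = set(args)
    for spellings, pair in defaults:
        if not spellings & given:
            out.extend(pair)
    return out

DEFAULTS = [
    ({"--logfile", "-l"}, ["--logfile", "run/logs/axiomatic.log"]),
    ({"--pidfile"}, ["--pidfile", "run/axiomatic.pid"]),
]

print(with_defaults(["-l", "foo"], DEFAULTS))
# -> ['-l', 'foo', '--pidfile', 'run/axiomatic.pid']
```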
=== removed file 'Axiom/axiom/test/test_batch.py'
--- Axiom/axiom/test/test_batch.py 2013-07-07 12:43:27 +0000
+++ Axiom/axiom/test/test_batch.py 1970-01-01 00:00:00 +0000
@@ -1,733 +0,0 @@
-
-from twisted.trial import unittest
-from twisted.python import failure, filepath
-from twisted.application import service
-
-from axiom import iaxiom, store, item, attributes, batch, substore
-
-class TestWorkUnit(item.Item):
- information = attributes.integer()
-
- def __repr__(self):
- return '<TestWorkUnit %d>' % (self.information,)
-
-
-class ExtraUnit(item.Item):
- unformashun = attributes.text()
-
-
-
-class WorkListener(item.Item):
- comply = attributes.integer(doc="""
- This exists solely to satisfy the requirement that Items have at
- least one persistent attribute.
- """)
-
- listener = attributes.inmemory(doc="""
- A callable which will be invoked by processItem. This will be
- provided by the test method and will assert that the appropriate
- items are received, in the appropriate order.
- """)
-
- def processItem(self, item):
- self.listener(item)
-
-
-
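The tests below drive the processor through its C{step} method with a bounded drain loop. A minimal sketch of that contract (illustrative only, not Axiom's implementation): each call to C{step} handles one pending unit and returns a truth value indicating whether more work remains.

```python
# Illustrative sketch of the step() contract these tests rely on (not
# Axiom's implementation): process one unit per call, return True only
# while more work remains, so callers can drain with a bounded loop
# instead of risking an infinite one.
class StepProcessor(object):
    def __init__(self, items, listener):
        self.pending = list(items)
        self.listener = listener

    def step(self):
        if not self.pending:
            return False
        self.listener(self.pending.pop(0))
        return bool(self.pending)

seen = []
proc = StepProcessor([0, 1, 2], seen.append)
for _ in range(100):          # the same bounded-drain idiom the tests use
    if not proc.step():
        break
else:
    raise AssertionError("Processing loop took too long")
print(seen)
# -> [0, 1, 2]
```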
-class BatchTestCase(unittest.TestCase):
- def setUp(self):
- self.procType = batch.processor(TestWorkUnit)
- self.store = store.Store()
- self.scheduler = iaxiom.IScheduler(self.store)
-
-
- def testItemTypeCreation(self):
- """
- Test that processors for a different Item types can be
- created, that they are valid Item types themselves, and that
- repeated calls return the same object when appropriate.
- """
- procB = batch.processor(TestWorkUnit)
- self.assertIdentical(self.procType, procB)
-
- procC = batch.processor(ExtraUnit)
- self.failIfIdentical(procB, procC)
- self.failIfEqual(procB.typeName, procC.typeName)
-
-
- def testInstantiation(self):
- """
- Test that a batch processor can be instantiated and added to a
- database, and that it can be retrieved in the usual ways.
- """
- proc = self.procType(store=self.store)
- self.assertIdentical(self.store.findUnique(self.procType), proc)
-
-
- def testListenerlessProcessor(self):
- """
- Test that a batch processor can be stepped even if it has no
- listeners, and that it correctly reports it has no work to do.
- """
- proc = self.procType(store=self.store)
- self.failIf(proc.step(), "expected no more work to be reported, some was")
-
- TestWorkUnit(store=self.store, information=0)
- self.failIf(proc.step(), "expected no more work to be reported, some was")
-
-
- def testListeners(self):
- """
- Test that items can register or unregister their interest in a
- processor's batch of items.
- """
- proc = self.procType(store=self.store)
- listenerA = WorkListener(store=self.store)
- listenerB = WorkListener(store=self.store)
-
- self.assertEquals(list(proc.getReliableListeners()), [])
-
- proc.addReliableListener(listenerA)
- self.assertEquals(list(proc.getReliableListeners()), [listenerA])
-
- proc.addReliableListener(listenerB)
- expected = [listenerA, listenerB]
- listeners = list(proc.getReliableListeners())
- self.assertEquals(sorted(expected), sorted(listeners))
-
- proc.removeReliableListener(listenerA)
- self.assertEquals(list(proc.getReliableListeners()), [listenerB])
-
- proc.removeReliableListener(listenerB)
- self.assertEquals(list(proc.getReliableListeners()), [])
-
-
- def testBasicProgress(self):
- """
- Test that when a processor is created and given a chance to
- run, it completes some work.
- """
- processedItems = []
- def listener(item):
- processedItems.append(item.information)
-
- proc = self.procType(store=self.store)
- listener = WorkListener(store=self.store, listener=listener)
-
- proc.addReliableListener(listener)
-
- self.assertEquals(processedItems, [])
-
- self.failIf(proc.step(), "expected no work to be reported, some was")
-
- self.assertEquals(processedItems, [])
-
- for i in range(3):
- TestWorkUnit(store=self.store, information=i)
- ExtraUnit(store=self.store, unformashun=unicode(-i))
-
- self.failUnless(proc.step(), "expected more work to be reported, none was")
- self.assertEquals(processedItems, [0])
-
- self.failUnless(proc.step(), "expected more work to be reported, none was")
- self.assertEquals(processedItems, [0, 1])
-
- self.failIf(proc.step(), "expected no more work to be reported, some was")
- self.assertEquals(processedItems, [0, 1, 2])
-
- self.failIf(proc.step(), "expected no more work to be reported, some was")
- self.assertEquals(processedItems, [0, 1, 2])
-
-
- def testProgressAgainstExisting(self):
- """
- Test that when a processor is created when work units exist
- already, it works backwards to notify its listener of all
- those existing work units. Also test that work units created
- after the processor are handled.
- """
- processedItems = []
- def listener(item):
- processedItems.append(item.information)
-
- proc = self.procType(store=self.store)
- listener = WorkListener(store=self.store, listener=listener)
-
- for i in range(3):
- TestWorkUnit(store=self.store, information=i)
-
- proc.addReliableListener(listener)
-
- self.assertEquals(processedItems, [])
-
- self.failUnless(proc.step(), "expected more work to be reported, none was")
- self.assertEquals(processedItems, [2])
-
- self.failUnless(proc.step(), "expected more work to be reported, none was")
- self.assertEquals(processedItems, [2, 1])
-
- self.failIf(proc.step(), "expected no more work to be reported, some was")
- self.assertEquals(processedItems, [2, 1, 0])
-
- self.failIf(proc.step(), "expected no more work to be reported, some was")
- self.assertEquals(processedItems, [2, 1, 0])
-
- for i in range(3, 6):
- TestWorkUnit(store=self.store, information=i)
-
- self.failUnless(proc.step(), "expected more work to be reported, none was")
- self.assertEquals(processedItems, [2, 1, 0, 3])
-
- self.failUnless(proc.step(), "expected more work to be reported, none was")
- self.assertEquals(processedItems, [2, 1, 0, 3, 4])
-
- self.failIf(proc.step(), "expected no more work to be reported, some was")
- self.assertEquals(processedItems, [2, 1, 0, 3, 4, 5])
-
- self.failIf(proc.step(), "expected no more work to be reported, some was")
- self.assertEquals(processedItems, [2, 1, 0, 3, 4, 5])
-
-
- def testBrokenListener(self):
- """
- Test that if a listener's processItem method raises an
- exception, processing continues beyond that item and that an
- error marker is created for that item.
- """
-
- errmsg = "This reliable listener is not very reliable!"
- processedItems = []
- def listener(item):
- if item.information == 1:
- raise RuntimeError(errmsg)
- processedItems.append(item.information)
-
- proc = self.procType(store=self.store)
- listener = WorkListener(store=self.store, listener=listener)
-
- proc.addReliableListener(listener)
-
- # Make some work, step the processor, and fake the error handling
- # behavior the Scheduler actually provides.
- for i in range(3):
- TestWorkUnit(store=self.store, information=i)
- try:
- proc.step()
- except batch._ProcessingFailure:
- proc.timedEventErrorHandler(
- (u"Oh crap, I do not have a TimedEvent, "
- "I sure hope that never becomes a problem."),
- failure.Failure())
-
- self.assertEquals(processedItems, [0, 2])
-
- errors = list(proc.getFailedItems())
- self.assertEquals(len(errors), 1)
- self.assertEquals(errors[0][0], listener)
- self.assertEquals(errors[0][1].information, 1)
-
- loggedErrors = self.flushLoggedErrors(RuntimeError)
- self.assertEquals(len(loggedErrors), 1)
- self.assertEquals(loggedErrors[0].getErrorMessage(), errmsg)
-
-
- def testMultipleListeners(self):
- """
- Test that a single batch processor with multiple listeners
- added at different times delivers each item to each listener.
- """
- processedItemsA = []
- def listenerA(item):
- processedItemsA.append(item.information)
-
- processedItemsB = []
- def listenerB(item):
- processedItemsB.append(item.information)
-
- proc = self.procType(store=self.store)
-
- for i in range(2):
- TestWorkUnit(store=self.store, information=i)
-
- firstListener = WorkListener(store=self.store, listener=listenerA)
- proc.addReliableListener(firstListener)
-
- for i in range(2, 4):
- TestWorkUnit(store=self.store, information=i)
-
- secondListener = WorkListener(store=self.store, listener=listenerB)
- proc.addReliableListener(secondListener)
-
- for i in range(4, 6):
- TestWorkUnit(store=self.store, information=i)
-
- for i in range(100):
- if not proc.step():
- break
- else:
- self.fail("Processing loop took too long")
-
- self.assertEquals(
- processedItemsA, [2, 3, 4, 5, 1, 0])
-
- self.assertEquals(
- processedItemsB, [4, 5, 3, 2, 1, 0])
-
-
- def testRepeatedAddListener(self):
- """
- Test that adding the same listener repeatedly has the same
- effect as adding it once.
- """
- proc = self.procType(store=self.store)
- listener = WorkListener(store=self.store)
- proc.addReliableListener(listener)
- proc.addReliableListener(listener)
- self.assertEquals(list(proc.getReliableListeners()), [listener])
-
-
- def testSuperfluousItemAddition(self):
- """
- Test the addItem method for work which would have been done already,
- and so for which addItem should therefore be a no-op.
- """
- processedItems = []
- def listener(item):
- processedItems.append(item.information)
-
- proc = self.procType(store=self.store)
- listener = WorkListener(store=self.store, listener=listener)
-
- # Create a couple items so there will be backwards work to do.
- one = TestWorkUnit(store=self.store, information=0)
- two = TestWorkUnit(store=self.store, information=1)
-
- rellist = proc.addReliableListener(listener)
-
- # Create a couple more items so there will be some forwards work to do.
- three = TestWorkUnit(store=self.store, information=2)
- four = TestWorkUnit(store=self.store, information=3)
-
- # There are only two regions at this point - work behind and work
- # ahead; no work has been done yet, so there's no region in between.
- # Add items behind and ahead of the point; these should not result in
- # any explicit tracking items, since they would have been processed in
- # due course anyway.
- rellist.addItem(two)
- rellist.addItem(three)
-
- for i in range(100):
- if not proc.step():
- break
- else:
- self.fail("Processing loop took too long")
-
- self.assertEquals(processedItems, [2, 3, 1, 0])
-
-
- def testReprocessItemAddition(self):
- """
- Test the addItem method for work which is within the bounds of work
- already done, and so which would not have been processed without the
- addItem call.
- """
- processedItems = []
- def listener(item):
- processedItems.append(item.information)
-
- proc = self.procType(store=self.store)
- listener = WorkListener(store=self.store, listener=listener)
- rellist = proc.addReliableListener(listener)
-
- one = TestWorkUnit(store=self.store, information=0)
- two = TestWorkUnit(store=self.store, information=1)
- three = TestWorkUnit(store=self.store, information=2)
-
- for i in range(100):
- if not proc.step():
- break
- else:
- self.fail("Processing loop took too long")
-
- self.assertEquals(processedItems, range(3))
-
- # Now that we have processed some items, re-add one of those items to
- # be re-processed and make sure it actually does get passed to the
- # listener again.
- processedItems = []
-
- rellist.addItem(two)
-
- for i in xrange(100):
- if not proc.step():
- break
- else:
- self.fail("Processing loop took too long")
-
- self.assertEquals(processedItems, [1])
-
-
- def test_processorStartsUnscheduled(self):
- """
- Test that when a processor is first created, it is not scheduled to
- perform any work.
- """
- proc = self.procType(store=self.store)
- self.assertIdentical(proc.scheduled, None)
- self.assertEquals(
- list(self.scheduler.scheduledTimes(proc)),
- [])
-
-
- def test_itemAddedIgnoredWithoutListeners(self):
- """
- Test that if C{itemAdded} is called while the processor is idle but
- there are no listeners, the processor does not schedule itself to do
- any work.
- """
- proc = self.procType(store=self.store)
- proc.itemAdded()
- self.assertEqual(proc.scheduled, None)
- self.assertEquals(
- list(self.scheduler.scheduledTimes(proc)),
- [])
-
-
- def test_itemAddedSchedulesProcessor(self):
- """
- Test that if C{itemAdded} is called while the processor is idle and
-        there are listeners, the processor does schedule itself to do some
- work.
- """
- proc = self.procType(store=self.store)
- listener = WorkListener(store=self.store)
- proc.addReliableListener(listener)
-
-        # Get rid of the scheduler state that the addReliableListener call
-        # just created.
- proc.scheduled = None
- self.scheduler.unscheduleAll(proc)
-
- proc.itemAdded()
- self.failIfEqual(proc.scheduled, None)
- self.assertEquals(
- list(self.scheduler.scheduledTimes(proc)),
- [proc.scheduled])
-
-
- def test_addReliableListenerSchedulesProcessor(self):
- """
- Test that if C{addReliableListener} is called while the processor is
- idle, the processor schedules itself to do some work.
- """
- proc = self.procType(store=self.store)
- listener = WorkListener(store=self.store)
- proc.addReliableListener(listener)
- self.failIfEqual(proc.scheduled, None)
- self.assertEquals(
- list(self.scheduler.scheduledTimes(proc)),
- [proc.scheduled])
-
-
- def test_itemAddedWhileScheduled(self):
- """
- Test that if C{itemAdded} is called when the processor is already
- scheduled to run, the processor remains scheduled to run at the same
- time.
- """
- proc = self.procType(store=self.store)
- listener = WorkListener(store=self.store)
- proc.addReliableListener(listener)
- when = proc.scheduled
- proc.itemAdded()
- self.assertEquals(proc.scheduled, when)
- self.assertEquals(
- list(self.scheduler.scheduledTimes(proc)),
- [proc.scheduled])
-
-
- def test_addReliableListenerWhileScheduled(self):
- """
- Test that if C{addReliableListener} is called when the processor is
- already scheduled to run, the processor remains scheduled to run at the
- same time.
- """
- proc = self.procType(store=self.store)
- listenerA = WorkListener(store=self.store)
- proc.addReliableListener(listenerA)
- when = proc.scheduled
- listenerB = WorkListener(store=self.store)
- proc.addReliableListener(listenerB)
- self.assertEquals(proc.scheduled, when)
- self.assertEquals(
- list(self.scheduler.scheduledTimes(proc)),
- [proc.scheduled])
-
-
- def test_processorIdlesWhenCaughtUp(self):
- """
- Test that the C{run} method of the processor returns C{None} when it
- has done all the work it needs to do, thus unscheduling the processor.
- """
- proc = self.procType(store=self.store)
- self.assertIdentical(proc.run(), None)
-
-
-
-class BatchCallTestItem(item.Item):
- called = attributes.boolean(default=False)
-
- def callIt(self):
- self.called = True
-
-
-
-class BrokenException(Exception):
- """
- Exception always raised by L{BrokenReliableListener.processItem}.
- """
-
-
-
-class BatchWorkItem(item.Item):
- """
- Item class which will be delivered as work units for testing error handling
- around reliable listeners.
- """
- value = attributes.text(default=u"unprocessed")
-
-
-
-BatchWorkSource = batch.processor(BatchWorkItem)
-
-
-
-class BrokenReliableListener(item.Item):
- """
- A listener for batch work which always raises an exception from its
- processItem method. Used to test that errors from processItem are properly
- handled.
- """
-
- anAttribute = attributes.integer()
-
- def processItem(self, item):
- raise BrokenException("Broken Reliable Listener is working as expected.")
-
-
-
-class WorkingReliableListener(item.Item):
- """
- A listener for batch work which actually works. Used to test that even if
- a broken reliable listener is around, working ones continue to receive new
- items to process.
- """
-
- anAttribute = attributes.integer()
-
- def processItem(self, item):
- item.value = u"processed"
-
-
-
-class RemoteTestCase(unittest.TestCase):
- def test_noBatchService(self):
- """
- A L{Store} with no database directory cannot be adapted to
- L{iaxiom.IBatchService}.
- """
- st = store.Store()
- self.assertRaises(TypeError, iaxiom.IBatchService, st)
- self.assertIdentical(
- iaxiom.IBatchService(st, None), None)
-
-
- def test_subStoreNoBatchService(self):
- """
- A user L{Store} attached to a site L{Store} with no database directory
- cannot be adapted to L{iaxiom.IBatchService}.
- """
- st = store.Store(filesdir=self.mktemp())
- ss = substore.SubStore.createNew(st, ['substore']).open()
- self.assertRaises(TypeError, iaxiom.IBatchService, ss)
- self.assertIdentical(
- iaxiom.IBatchService(ss, None), None)
-
-
- def testBatchService(self):
- """
- Make sure SubStores can be adapted to L{iaxiom.IBatchService}.
- """
- dbdir = filepath.FilePath(self.mktemp())
- s = store.Store(dbdir)
- ss = substore.SubStore.createNew(s, ['substore'])
- bs = iaxiom.IBatchService(ss)
- self.failUnless(iaxiom.IBatchService.providedBy(bs))
-
-
- def testProcessLifetime(self):
- """
- Test that the batch system process can be started and stopped.
- """
- dbdir = filepath.FilePath(self.mktemp())
- s = store.Store(dbdir)
- svc = batch.BatchProcessingControllerService(s)
- svc.startService()
- return svc.stopService()
-
-
- def testCalling(self):
- """
- Test invoking a method on an item in the batch process.
- """
- dbdir = filepath.FilePath(self.mktemp())
- s = store.Store(dbdir)
- ss = substore.SubStore.createNew(s, ['substore'])
- service.IService(s).startService()
- d = iaxiom.IBatchService(ss).call(
- BatchCallTestItem(store=ss.open()).callIt)
- ss.close()
- def called(ign):
- self.assertTrue(
- ss.open().findUnique(BatchCallTestItem).called,
- "Was not called")
- ss.close()
- return service.IService(s).stopService()
- return d.addCallback(called)
-
-
- def testProcessingServiceStepsOverErrors(self):
- """
- If any processor raises an unexpected exception, the work unit which
- was being processed should be marked as having had an error and
- processing should move on to the next item. Make sure that this
- actually happens when L{BatchProcessingService} is handling those
- errors.
- """
- BATCH_WORK_UNITS = 3
-
- dbdir = filepath.FilePath(self.mktemp())
- st = store.Store(dbdir)
- source = BatchWorkSource(store=st)
- for i in range(BATCH_WORK_UNITS):
- BatchWorkItem(store=st)
-
- source.addReliableListener(BrokenReliableListener(store=st), iaxiom.REMOTE)
- source.addReliableListener(WorkingReliableListener(store=st), iaxiom.REMOTE)
-
- svc = batch.BatchProcessingService(st, iaxiom.REMOTE)
-
- task = svc.step()
-
-        # Loop 6 (BATCH_WORK_UNITS * 2) times: three items times two
-        # listeners; it should take no more than six iterations to
-        # completely process all work.
- for i in xrange(BATCH_WORK_UNITS * 2):
- task.next()
-
-
- self.assertEquals(
- len(self.flushLoggedErrors(BrokenException)),
- BATCH_WORK_UNITS)
-
- self.assertEquals(
- st.query(BatchWorkItem, BatchWorkItem.value == u"processed").count(),
- BATCH_WORK_UNITS)
-
-
- def test_itemAddedStartsBatchProcess(self):
- """
- If there are remote-style listeners for an item source, C{itemAdded}
- starts the batch process.
-
- This is not completely correct. There may be items to process remotely
- when the main process starts up, before any new items are added. This
- is simpler to implement, but it shouldn't be taken as a reason not to
- implement the actually correct solution.
- """
- st = store.Store(self.mktemp())
- svc = service.IService(st)
- svc.startService()
- self.addCleanup(svc.stopService)
-
- batchService = iaxiom.IBatchService(st)
-
- procType = batch.processor(TestWorkUnit)
- proc = procType(store=st)
- listener = WorkListener(store=st)
- proc.addReliableListener(listener, style=iaxiom.REMOTE)
-
- # Sanity check: addReliableListener should eventually also trigger a
- # batch process start if necessary. But we don't want to test that case
- # here, so make sure it's not happening.
- self.assertEquals(batchService.batchController.mode, 'stopped')
-
- # Now trigger it to start.
- proc.itemAdded()
-
- # It probably won't be ready by now, but who knows.
- self.assertIn(batchService.batchController.mode, ('starting', 'ready'))
-
-
- def test_itemAddedBeforeStarted(self):
- """
- If C{itemAdded} is called before the batch service is started, the batch
- process is not started.
- """
- st = store.Store(self.mktemp())
-
- procType = batch.processor(TestWorkUnit)
- proc = procType(store=st)
- listener = WorkListener(store=st)
- proc.addReliableListener(listener, style=iaxiom.REMOTE)
-
- proc.itemAdded()
-
- # When the service later starts, the batch service needn't start its
- # process. Not that this would be bad. Feel free to reverse this
- # behavior if you really want.
- svc = service.IService(st)
- svc.startService()
- self.addCleanup(svc.stopService)
-
- batchService = iaxiom.IBatchService(st)
- self.assertEquals(batchService.batchController.mode, 'stopped')
-
-
- def test_itemAddedWithoutBatchService(self):
- """
- If the store has no batch service, C{itemAdded} doesn't start the batch
- process and also doesn't raise an exception.
- """
- # An in-memory store can't have a batch service.
- st = store.Store()
- svc = service.IService(st)
- svc.startService()
- self.addCleanup(svc.stopService)
-
- procType = batch.processor(TestWorkUnit)
- proc = procType(store=st)
- listener = WorkListener(store=st)
- proc.addReliableListener(listener, style=iaxiom.REMOTE)
-
- proc.itemAdded()
-
- # And still there should be no batch service at all.
- self.assertIdentical(iaxiom.IBatchService(st, None), None)
-
-
- def test_subStoreBatchServiceStart(self):
- """
- The substore implementation of L{IBatchService.start} starts the batch
- process.
- """
- st = store.Store(self.mktemp())
- svc = service.IService(st)
- svc.startService()
- self.addCleanup(svc.stopService)
-
- ss = substore.SubStore.createNew(st, ['substore']).open()
- iaxiom.IBatchService(ss).start()
-
- batchService = iaxiom.IBatchService(st)
- self.assertIn(batchService.batchController.mode, ('starting', 'ready'))
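[Editorial note: the removed batch tests above verify that a broken reliable listener does not halt processing of other work units. A minimal standalone sketch of that property follows; `MiniProcessor` and its helpers are hypothetical names, not Axiom's `BatchProcessingService` API.]

```python
# Hypothetical sketch (not Axiom's API): a processor that delivers work
# items to listeners one step() at a time, logging a listener's error and
# moving on, in the spirit of testProcessingServiceStepsOverErrors.

class MiniProcessor:
    def __init__(self, items, listeners):
        # Pair every (item, listener) combination into pending work units.
        self.pending = [(i, l) for l in listeners for i in items]
        self.errors = []

    def step(self):
        """Process one work unit; return True while work remains."""
        if not self.pending:
            return False
        item, listener = self.pending.pop(0)
        try:
            listener(item)
        except Exception as e:
            # A broken listener must not halt processing of other units.
            self.errors.append((item, e))
        return bool(self.pending)


processed = []

def working(item):
    processed.append(item)

def broken(item):
    raise RuntimeError("broken listener")

proc = MiniProcessor([0, 1, 2], [broken, working])
while proc.step():
    pass
# All three items still reach the working listener despite the broken one,
# and each failure is recorded rather than raised.
```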
=== removed file 'Axiom/axiom/test/test_count.py'
--- Axiom/axiom/test/test_count.py 2005-10-28 22:06:23 +0000
+++ Axiom/axiom/test/test_count.py 1970-01-01 00:00:00 +0000
@@ -1,69 +0,0 @@
-from twisted.trial.unittest import TestCase
-
-from axiom.store import Store
-from axiom.item import Item
-from axiom.attributes import integer, AND, OR
-
-class ThingsWithIntegers(Item):
- schemaVersion = 1
- typeName = 'axiom_test_thing_with_integers'
-
- a = integer()
- b = integer()
-
-
-class NotARealThing(Item):
- schemaVersion = 1
- typeName = 'axiom_test_never_created_item'
-
- irrelevantAttribute = integer()
-
- def __init__(self, **kw):
- raise NotImplementedError("You cannot create things that are not real!")
-
-
-class TestCountQuery(TestCase):
-
- RANGE = 10
- MIDDLE = 5
-
-
- def assertCountEqualsQuery(self, item, cond = None):
- self.assertEquals(self.store.count(item, cond),
- len(list(self.store.query(item, cond))),
-                          'count and len(list(query)) not equal: %r, %r' % (item, cond))
-
- def setUp(self):
- self.store = Store()
- def populate():
- for i in xrange(self.RANGE):
- for j in xrange(self.RANGE):
- ThingsWithIntegers(store = self.store, a = i, b = j)
- self.store.transact(populate)
-
- def testBasicCount(self):
- self.assertCountEqualsQuery(ThingsWithIntegers)
-
- def testSimpleConditionCount(self):
- self.assertCountEqualsQuery(ThingsWithIntegers,
- ThingsWithIntegers.a > self.MIDDLE)
-
- def testTwoFieldsConditionCount(self):
- self.assertCountEqualsQuery(ThingsWithIntegers,
- ThingsWithIntegers.a == ThingsWithIntegers.b)
-
- def testANDConditionCount(self):
- self.assertCountEqualsQuery(ThingsWithIntegers,
- AND(ThingsWithIntegers.a > self.MIDDLE, ThingsWithIntegers.b < self.MIDDLE))
-
- def testORConditionCount(self):
- self.assertCountEqualsQuery(ThingsWithIntegers,
- OR(ThingsWithIntegers.a > self.MIDDLE, ThingsWithIntegers.b < self.MIDDLE))
-
- def testEmptyResult(self):
- self.assertCountEqualsQuery(ThingsWithIntegers,
- ThingsWithIntegers.a == self.RANGE + 3)
-
- def testNonExistentTable(self):
- self.assertCountEqualsQuery(NotARealThing,
- NotARealThing.irrelevantAttribute == self.RANGE + 3)
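[Editorial note: the removed test_count.py checks a single invariant — `Store.count(item, cond)` must agree with `len(list(Store.query(item, cond)))` for every condition. A standalone sketch of that invariant, using plain sqlite3 rather than Axiom's store, since Axiom stores are themselves SQLite-backed:]

```python
# Sketch of the count-equals-query invariant over the same conditions the
# removed tests used (basic, simple, two-field, AND, OR, empty result).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE thing (a INTEGER, b INTEGER)")
conn.executemany(
    "INSERT INTO thing VALUES (?, ?)",
    [(i, j) for i in range(10) for j in range(10)])

def count(cond):
    # COUNT(*) pushed down to the database, as Store.count does.
    return conn.execute("SELECT COUNT(*) FROM thing WHERE " + cond).fetchone()[0]

def query(cond):
    # Materialize the rows, as list(Store.query(...)) does.
    return conn.execute("SELECT * FROM thing WHERE " + cond).fetchall()

for cond in ["1", "a > 5", "a = b", "a > 5 AND b < 5", "a > 5 OR b < 5", "a = 13"]:
    assert count(cond) == len(query(cond)), cond
```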
=== removed file 'Axiom/axiom/test/test_crossstore.py'
--- Axiom/axiom/test/test_crossstore.py 2008-05-06 13:33:30 +0000
+++ Axiom/axiom/test/test_crossstore.py 1970-01-01 00:00:00 +0000
@@ -1,68 +0,0 @@
-
-from axiom.store import Store
-from axiom.substore import SubStore
-from axiom.item import Item
-
-from axiom.attributes import integer
-
-from twisted.trial.unittest import TestCase
-from twisted.python import filepath
-
-class ExplosiveItem(Item):
-
- nothing = integer()
-
- def yourHeadAsplode(self):
- 1 / 0
-
-
-class CrossStoreTest(TestCase):
-
- def setUp(self):
- self.spath = filepath.FilePath(self.mktemp() + ".axiom")
- self.store = Store(self.spath)
- self.substoreitem = SubStore.createNew(self.store,
- ["sub.axiom"])
-
- self.substore = self.substoreitem.open()
- # Not available yet.
- self.substore.attachToParent()
-
-
-class TestCrossStoreTransactions(CrossStoreTest):
-
- def testCrossStoreTransactionality(self):
- def createTwoSubStoreThings():
- ExplosiveItem(store=self.store)
- ei = ExplosiveItem(store=self.substore)
- ei.yourHeadAsplode()
-
- self.failUnlessRaises(ZeroDivisionError,
- self.store.transact,
- createTwoSubStoreThings)
-
- self.failUnlessEqual(
- self.store.query(ExplosiveItem).count(),
- 0)
-
- self.failUnlessEqual(
- self.substore.query(ExplosiveItem).count(),
- 0)
-
-class TestCrossStoreInsert(CrossStoreTest):
-
- def testCrossStoreInsert(self):
- def populate(s, n):
- for i in xrange(n):
- ExplosiveItem(store=s)
-
- self.store.transact(populate, self.store, 2)
- self.store.transact(populate, self.substore, 3)
-
- self.failUnlessEqual(
- self.store.query(ExplosiveItem).count(),
- 2)
-
- self.failUnlessEqual(
- self.substore.query(ExplosiveItem).count(),
- 3)
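[Editorial note: the removed test_crossstore.py verifies that a transaction spanning a parent store and an attached substore either commits to both or rolls back from both. A hypothetical sketch of that rollback property with snapshot-based stores (not Axiom's attachToParent mechanism):]

```python
# Sketch: a transact() that spans multiple stores, restoring every store's
# snapshot when the transaction body raises, so a failure leaves no partial
# writes in either the parent or the substore.

class MiniStore:
    def __init__(self):
        self.items = []

def transact(stores, func, *args):
    # Snapshot every store; restore all snapshots if func raises.
    snapshots = [list(s.items) for s in stores]
    try:
        return func(*args)
    except Exception:
        for store, snap in zip(stores, snapshots):
            store.items[:] = snap
        raise

parent, sub = MiniStore(), MiniStore()

def explode():
    parent.items.append("x")
    sub.items.append("y")
    1 / 0  # simulate yourHeadAsplode mid-transaction

try:
    transact([parent, sub], explode)
except ZeroDivisionError:
    pass
# Neither store retains the items created inside the failed transaction.
```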
=== removed file 'Axiom/axiom/test/test_dependency.py'
--- Axiom/axiom/test/test_dependency.py 2008-08-13 02:55:58 +0000
+++ Axiom/axiom/test/test_dependency.py 1970-01-01 00:00:00 +0000
@@ -1,598 +0,0 @@
-# Copyright 2008 Divmod, Inc. See LICENSE file for details.
-
-from zope.interface import Interface, implements
-
-from twisted.trial import unittest
-
-from axiom import dependency
-from axiom.store import Store
-from axiom.substore import SubStore
-from axiom.item import Item
-from axiom.errors import UnsatisfiedRequirement
-from axiom.attributes import text, integer, reference, inmemory
-
-
-class IElectricityGrid(Interface):
- """
- An interface representing something likely to be present in the site store.
- As opposed to the other examples below, present in a hypothetical kitchen,
- it is something managed for lots of different people.
- """
-
- def draw(watts):
- """
- Draw some number of watts from this power grid.
-
- @return: a constant, one of L{REAL_POWER} or L{FAKE_POWER}.
- """
-
-
-FAKE_POWER = 'fake power'
-REAL_POWER = 'real power'
-
-class NullGrid(object):
- """
- This is a null electricity grid. It is provided as a default grid in the
- case where a site store is not present.
- """
- implements(IElectricityGrid)
-
- def __init__(self, siteStore):
- """
- Create a null grid with a reference to the site store.
- """
- self.siteStore = siteStore
-
-
- def draw(self, watts):
- """
- Draw some watts from the null power grid. For simplicity of examples
- below, this works. Not in real life, though. In a more realistic
- example, this might do something temporary to work around the site
- misconfiguration, and warn an administrator that someone was getting
- power out of thin air. Or, depending on the application, we might
- raise an exception to prevent this operation from succeeding.
- """
- return FAKE_POWER
-
-
-class RealGrid(Item):
- """
- A power grid for the power utility; this is an item which should be
- installed on a site store.
- """
- implements(IElectricityGrid)
-
- powerupInterfaces = (IElectricityGrid,)
-
- totalWattage = integer(default=10000000,
- doc="""
- Total wattage of the entire electricity grid. (This
- is currently a dummy attribute.)
- """)
-
- def draw(self, watts):
- """
- Draw some real power from the real power grid. This is the way that
- the site should probably be working.
- """
- return REAL_POWER
-
-
-
-def noGrid(siteStore):
- """
- No power grid was available. Raise an exception.
- """
- raise RuntimeError("No power grid available.")
-
-
-
-class IronLung(Item):
- """
- This item is super serious business! It has to draw real power from the
- real power grid; it won't be satisfied with fake power; too risky for its
- life-critical operation. So it doesn't specify a placeholder default grid.
-
- @ivar grid: a read-only reference to an L{IElectricityGrid} provider,
- resolved via the site store this L{IronLung} is in.
- """
-
- wattsPerPump = integer(default=100, allowNone=False,
- doc="""
- The number of watts to draw from L{self.grid} when
- L{IronLung.pump} is called.
- """)
-
- grid = dependency.requiresFromSite(IElectricityGrid)
-
- def pump(self):
- """
- Attempting to pump the iron lung by talking to the power grid.
- """
- return self.grid.draw(self.wattsPerPump)
-
-
-
-class SpecifiedBadDefaults(Item):
- """
- Depends on a power grid, but specifies defaults for that dependency that
- should never be invoked. This item can't retrieve a grid.
-
- @ivar grid: Retrieving this attribute should never work. It should raise
- L{RuntimeError}.
- """
- dummy = integer(doc="""
- Dummy attribute required by axiom for Item class definition.
- """)
-
- grid = dependency.requiresFromSite(IElectricityGrid, noGrid, noGrid)
-
- def pump(self):
- """
- Attempting to pump the iron lung by talking to the power grid.
- """
- return self.grid.draw(100)
-
-
-class Kitchen(Item):
- name = text()
-
-class PowerStrip(Item):
- """
- A simulated collection of power points. This is where L{IAppliance}
- providers get their power from.
-
- @ivar grid: A read-only reference to an L{IElectricityGrid} provider. This
- may be a powerup provided by the site store or a L{NullGrid} if no powerup
- is installed.
- """
- voltage = integer()
- grid = dependency.requiresFromSite(IElectricityGrid, NullGrid, NullGrid)
-
- def setForUSElectricity(self):
- if not self.voltage:
- self.voltage = 110
- else:
- raise RuntimeError("Oops! power strip already set up")
-
- def draw(self, watts):
- """
- Draw the given amount of power from this strip's electricity grid.
-
- @param watts: The number of watts to draw.
-
- @type watts: L{int}
- """
- return self.grid.draw(watts)
-
-
-class PowerPlant(Item):
- """
- This is an item which supplies the grid with power. It lives in the site
- store.
-
- @ivar grid: a read-only reference to an L{IElectricityGrid} powerup on the
- site store, or a L{NullGrid} if none is installed. If this item is present
- in a user store, retrieving this will raise a L{RuntimeError}.
- """
-
- wattage = integer(default=1000, allowNone=False,
- doc="""
- The amount of power the grid will be supplied with.
- Currently a dummy attribute required by axiom for item
- definition.
- """)
- grid = dependency.requiresFromSite(IElectricityGrid, noGrid, NullGrid)
-
-
-
-class IAppliance(Interface):
- pass
-
-class IBreadConsumer(Interface):
- pass
-
-class Breadbox(Item):
- slices = integer(default=100)
-
- def dispenseBread(self, amt):
- self.slices -= amt
-
-class Toaster(Item):
- implements(IBreadConsumer)
- powerupInterfaces = (IAppliance, IBreadConsumer)
-
- powerStrip = dependency.dependsOn(PowerStrip,
- lambda ps: ps.setForUSElectricity(),
- doc="the power source for this toaster")
- description = text()
- breadFactory = dependency.dependsOn(
- Breadbox,
- doc="the thing we get bread input from",
- whenDeleted=reference.CASCADE)
-
- callback = inmemory()
-
- def activate(self):
- self.callback = None
-
- def installed(self):
- if self.callback is not None:
- self.callback("installed")
-
- def uninstalled(self):
- if self.callback is not None:
- self.callback("uninstalled")
-
- def toast(self):
- self.powerStrip.draw(100)
- self.breadFactory.dispenseBread(2)
-
-def powerstripSetup(ps):
- ps.setForUSElectricity()
-class Blender(Item):
- powerStrip = dependency.dependsOn(PowerStrip,
- powerstripSetup)
- description = text()
-
- def __getPowerupInterfaces__(self, powerups):
- yield (IAppliance, 0)
-
-class IceCrusher(Item):
- blender = dependency.dependsOn(Blender)
-
-class Blender2(Item):
- powerStrip = reference()
-
-class DependencyTest(unittest.TestCase):
- def setUp(self):
- self.store = Store()
-
- def test_dependsOn(self):
- """
- Ensure that classes with dependsOn attributes set up the dependency map
- properly.
- """
- foo = Blender(store=self.store)
- depBlob = dependency._globalDependencyMap.get(Blender, None)[0]
- self.assertEqual(depBlob[0], PowerStrip)
- self.assertEqual(depBlob[1], powerstripSetup)
- self.assertEqual(depBlob[2], Blender.__dict__['powerStrip'])
-
- def test_classDependsOn(self):
- """
- Ensure that classDependsOn sets up the dependency map properly.
- """
- dependency.classDependsOn(Blender2, PowerStrip, powerstripSetup,
- Blender2.__dict__['powerStrip'])
- depBlob = dependency._globalDependencyMap.get(Blender2, None)[0]
- self.assertEqual(depBlob[0], PowerStrip)
- self.assertEqual(depBlob[1], powerstripSetup)
- self.assertEqual(depBlob[2], Blender2.__dict__['powerStrip'])
-
- def test_basicInstall(self):
- """
- If a Toaster gets installed in a Kitchen, make sure that the
- required dependencies get instantiated and installed too.
- """
- foo = Kitchen(store=self.store)
- e = Toaster(store=self.store)
- self.assertEquals(e.powerStrip, None)
- dependency.installOn(e, foo)
- e.toast()
- ps = self.store.findUnique(PowerStrip, default=None)
- bb = self.store.findUnique(Breadbox, default=None)
- self.failIfIdentical(ps, None)
- self.failIfIdentical(bb, None)
- self.assertEquals(e.powerStrip, ps)
- self.assertEquals(ps.voltage, 110)
- self.assertEquals(e.breadFactory, bb)
- self.assertEquals(set(dependency.installedRequirements(e, foo)),
- set([ps, bb]))
- self.assertEquals(list(dependency.installedDependents(ps, foo)), [e])
-
- def test_basicUninstall(self):
- """
- Ensure that uninstallation removes the adapter from the former
- install target and all orphaned dependencies.
- """
- foo = Kitchen(store=self.store)
- e = Toaster(store=self.store)
- dependency.installOn(e, foo)
- dependency.uninstallFrom(e, foo)
- self.assertEqual(dependency.installedOn(e), None)
- self.assertEqual(dependency.installedOn(e.powerStrip), None)
-
- def test_wrongUninstall(self):
- """
- Ensure that attempting to uninstall an item that something
- else depends on fails.
- """
- foo = Kitchen(store=self.store)
- e = Toaster(store=self.store)
- dependency.installOn(e, foo)
-
- ps = self.store.findUnique(PowerStrip)
- self.failUnlessRaises(dependency.DependencyError,
- dependency.uninstallFrom, ps, foo)
-
- def test_properOrphaning(self):
- """
- If two installed items both depend on a third, it should be
- removed as soon as both installed items are removed, but no
- sooner.
- """
-
- foo = Kitchen(store=self.store)
- e = Toaster(store=self.store)
- dependency.installOn(e, foo)
- ps = self.store.findUnique(PowerStrip)
- bb = self.store.findUnique(Breadbox)
- f = Blender(store=self.store)
- dependency.installOn(f, foo)
-
- self.assertEquals(list(self.store.query(PowerStrip)), [ps])
- #XXX does ordering matter?
- self.assertEquals(set(dependency.installedDependents(ps, foo)),
- set([e, f]))
- self.assertEquals(set(dependency.installedRequirements(e, foo)),
- set([bb, ps]))
- self.assertEquals(list(dependency.installedRequirements(f, foo)),
- [ps])
-
- dependency.uninstallFrom(e, foo)
- self.assertEquals(dependency.installedOn(ps), foo)
-
- dependency.uninstallFrom(f, foo)
- self.assertEquals(dependency.installedOn(ps), None)
-
- def test_installedUniqueRequirements(self):
- """
- Ensure that installedUniqueRequirements lists only powerups depended on
- by exactly one installed powerup.
- """
- foo = Kitchen(store=self.store)
- e = Toaster(store=self.store)
- dependency.installOn(e, foo)
- ps = self.store.findUnique(PowerStrip)
- bb = self.store.findUnique(Breadbox)
- f = Blender(store=self.store)
- dependency.installOn(f, foo)
-
- self.assertEquals(list(dependency.installedUniqueRequirements(e, foo)),
- [bb])
-
- def test_customizerCalledOnce(self):
- """
- The item customizer defined for a dependsOn attribute should
- only be called if an item is created implicitly to satisfy the
- dependency.
- """
- foo = Kitchen(store=self.store)
- ps = PowerStrip(store=self.store)
- dependency.installOn(ps, foo)
- ps.voltage = 115
- e = Toaster(store=self.store)
- dependency.installOn(e, foo)
- self.assertEqual(ps.voltage, 115)
-
- def test_explicitInstall(self):
- """
- If an item is explicitly installed, it should not be
- implicitly uninstalled. Also, dependsOn attributes should be
- filled in properly even if a dependent item is not installed
- automatically.
- """
- foo = Kitchen(store=self.store)
- ps = PowerStrip(store=self.store)
- dependency.installOn(ps, foo)
- e = Toaster(store=self.store)
- dependency.installOn(e, foo)
- self.assertEqual(e.powerStrip, ps)
- dependency.uninstallFrom(e, foo)
- self.assertEquals(dependency.installedOn(ps), foo)
-
- def test_doubleInstall(self):
- """
- Make sure that installing two instances of a class on the same
- target fails, if something depends on that class, and succeeds
- otherwise.
- """
- foo = Kitchen(store=self.store)
- e = Toaster(store=self.store)
- dependency.installOn(e, foo)
- ps = PowerStrip(store=self.store)
- self.failUnlessRaises(dependency.DependencyError,
- dependency.installOn, ps, foo)
- e2 = Toaster(store=self.store)
- dependency.installOn(e2, foo)
-
-
- def test_recursiveInstall(self):
- """
- Installing an item should install all of its dependencies, and
-        all of their dependencies, and so forth.
- """
- foo = Kitchen(store=self.store)
- ic = IceCrusher(store=self.store)
- dependency.installOn(ic, foo)
- blender = self.store.findUnique(Blender)
- ps = self.store.findUnique(PowerStrip)
-
- self.assertEquals(dependency.installedOn(blender), foo)
- self.assertEquals(dependency.installedOn(ps), foo)
- self.assertEquals(list(dependency.installedRequirements(ic, foo)),
- [blender])
-
- def test_recursiveUninstall(self):
- """
- Removal of items should recursively remove orphaned
- dependencies.
- """
- foo = Kitchen(store=self.store)
- ic = IceCrusher(store=self.store)
- dependency.installOn(ic, foo)
- blender = self.store.findUnique(Blender)
- ps = self.store.findUnique(PowerStrip)
-
- dependency.uninstallFrom(ic, foo)
-
- self.failIf(dependency.installedOn(blender))
- self.failIf(dependency.installedOn(ps))
- self.failIf(dependency.installedOn(ic))
-
- def test_wrongDependsOn(self):
- """
- dependsOn should raise an error if used outside a class definition.
- """
- self.assertRaises(TypeError, dependency.dependsOn, Toaster)
-
- def test_referenceArgsPassthrough(self):
- """
- dependsOn should accept (most of) attributes.reference's args.
- """
-
- self.failUnless("power source" in Toaster.powerStrip.doc)
- self.assertEquals(Toaster.breadFactory.whenDeleted, reference.CASCADE)
-
- def test_powerupInterfaces(self):
- """
- Make sure interfaces are powered up and down properly.
- """
-
- foo = Kitchen(store=self.store)
- e = Toaster(store=self.store)
- f = Blender(store=self.store)
- dependency.installOn(e, foo)
- dependency.installOn(f, foo)
- self.assertEquals(IAppliance(foo), e)
- self.assertEquals(IBreadConsumer(foo), e)
- dependency.uninstallFrom(e, foo)
- self.assertEquals(IAppliance(foo), f)
- dependency.uninstallFrom(f, foo)
- self.assertRaises(TypeError, IAppliance, foo)
-
-
- def test_callbacks(self):
- """
- 'installed' and 'uninstalled' callbacks should fire on
- install/uninstall.
- """
- foo = Kitchen(store=self.store)
- e = Toaster(store=self.store)
- self.installCallbackCalled = False
- e.callback = lambda _: setattr(self, 'installCallbackCalled', True)
- dependency.installOn(e, foo)
- self.failUnless(self.installCallbackCalled)
- self.uninstallCallbackCalled = False
- e.callback = lambda _: setattr(self, 'uninstallCallbackCalled', True)
- dependency.uninstallFrom(e, foo)
- self.failUnless(self.uninstallCallbackCalled)
-
-
- def test_onlyInstallPowerups(self):
- """
- Make sure onlyInstallPowerups doesn't load dependencies or prohibit
- multiple calls.
- """
- foo = Kitchen(store=self.store)
- e = Toaster(store=self.store)
- f = Toaster(store=self.store)
- dependency.onlyInstallPowerups(e, foo)
- dependency.onlyInstallPowerups(f, foo)
- self.assertEquals(list(foo.powerupsFor(IBreadConsumer)), [e, f])
- self.assertEquals(list(self.store.query(
- dependency._DependencyConnector)), [])
-
-
-class RequireFromSiteTests(unittest.TestCase):
- """
- L{axiom.dependency.requiresFromSite} should allow items in either a user or
- site store to depend on powerups in the site store.
- """
-
- def setUp(self):
- """
- Create a L{Store} to be used as the site store for these tests.
- """
- self.store = Store()
-
-
- def test_requiresFromSite(self):
- """
- The value of a L{axiom.dependency.requiresFromSite} descriptor ought to
- be the powerup on the site for the instance it describes.
- """
- dependency.installOn(RealGrid(store=self.store), self.store)
- substore = SubStore.createNew(self.store, ['sub']).open()
- self.assertEquals(PowerStrip(store=substore).draw(1), REAL_POWER)
-
-
- def test_requiresFromSiteDefault(self):
- """
- The value of a L{axiom.dependency.requiresFromSite} descriptor on an
- item in a user store ought to be the result of invoking its default
- factory parameter.
- """
- substore = SubStore.createNew(self.store, ['sub']).open()
- ps = PowerStrip(store=substore)
- self.assertEquals(ps.draw(1), FAKE_POWER)
- self.assertEquals(ps.grid.siteStore, self.store)
-
-
- def test_requiresFromSiteInSiteStore(self):
- """
- L{axiom.dependency.requiresFromSite} should use the
- C{siteDefaultFactory} rather than the C{defaultFactory} to satisfy the
- dependency for items stored in a site store. It should use this
- default whether or not any item which could satisfy the requirement is
- installed on the site store.
-
- This behavior is important because some powerup interfaces are provided
- for site and user stores with radically different behaviors; for
- example, the substore implementation of L{IScheduler} depends on the
- site implementation of L{IScheduler}; if a user's substore were opened
- accidentally as a site store (i.e. with no parent) then the failure of
- the scheduler API should be obvious and immediate so that it can
- compensate; it should not result in an infinite recursion as the
- scheduler is looking for its parent.
-
- Items which wish to be stored in a site store and also depend on items
- in the site store can specifically adapt to the appropriate interface
- in the C{siteDefaultFactory} supplied to
- L{dependency.requiresFromSite}.
- """
- plant = PowerPlant(store=self.store)
- self.assertEquals(plant.grid.siteStore, self.store)
- self.assertEquals(plant.grid.draw(100), FAKE_POWER)
- dependency.installOn(RealGrid(store=self.store), self.store)
- self.assertEquals(plant.grid.siteStore, self.store)
- self.assertEquals(plant.grid.draw(100), FAKE_POWER)
-
-
- def test_requiresFromSiteNoDefault(self):
- """
- The default function shouldn't be needed or invoked if its value isn't
- going to be used.
- """
- dependency.installOn(RealGrid(store=self.store), self.store)
- substore = SubStore.createNew(self.store, ['sub']).open()
- self.assertEquals(SpecifiedBadDefaults(store=substore).pump(),
- REAL_POWER)
-
-
- def test_requiresFromSiteUnspecifiedException(self):
- """
- If a default factory function isn't supplied, an
- L{UnsatisfiedRequirement}, which should be a subtype of
- L{AttributeError}, should be raised when the descriptor is retrieved.
- """
- lung = IronLung(store=self.store)
- siteLung = IronLung(
- store=SubStore.createNew(self.store, ['sub']).open())
- self.assertRaises(UnsatisfiedRequirement, lambda : lung.grid)
- self.assertRaises(UnsatisfiedRequirement, lambda : siteLung.grid)
- default = object()
- self.assertIdentical(getattr(lung, 'grid', default), default)
-
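For readers skimming this removal, the C{requiresFromSite} behaviour pinned down by the tests above can be sketched in plain Python. This is a rough, framework-free approximation; C{Site}, C{PowerStrip}, C{IronLung}, and the string-keyed powerup registry are illustrative stand-ins, not Axiom's real API:

```python
class UnsatisfiedRequirement(AttributeError):
    """No powerup is installed and no default factory was supplied."""

_MISSING = object()

class requires_from_site:
    """Descriptor: look a dependency up on obj.site, else call a default factory."""
    def __init__(self, interface, default_factory=_MISSING):
        self.interface = interface
        self.default_factory = default_factory

    def __set_name__(self, owner, name):
        self.name = name

    def __get__(self, obj, objtype=None):
        if obj is None:
            return self
        powerup = obj.site.powerups.get(self.interface)
        if powerup is not None:
            return powerup
        if self.default_factory is _MISSING:
            raise UnsatisfiedRequirement(self.name)
        return self.default_factory(obj.site)

class Site:
    def __init__(self):
        self.powerups = {}

class PowerStrip:
    grid = requires_from_site('IGrid', default_factory=lambda site: 'fake-grid')
    def __init__(self, site):
        self.site = site

class IronLung:
    grid = requires_from_site('IGrid')  # no default: attribute access raises
    def __init__(self, site):
        self.site = site

site = Site()
assert PowerStrip(site).grid == 'fake-grid'   # default factory used
site.powerups['IGrid'] = 'real-grid'
assert PowerStrip(site).grid == 'real-grid'   # installed powerup wins
# Because UnsatisfiedRequirement subclasses AttributeError, getattr defaults work:
assert getattr(IronLung(Site()), 'grid', 'none') == 'none'
```

The AttributeError subclassing is the point of the last test above: a missing site dependency degrades into an ordinary missing attribute rather than an opaque error.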
=== removed file 'Axiom/axiom/test/test_files.py'
--- Axiom/axiom/test/test_files.py 2011-08-19 02:50:43 +0000
+++ Axiom/axiom/test/test_files.py 1970-01-01 00:00:00 +0000
@@ -1,89 +0,0 @@
-
-import os
-
-from twisted.trial import unittest
-from twisted.python import filepath
-
-from axiom.store import Store
-
-from axiom.item import Item
-from axiom.attributes import path
-
-class PathTesterItem(Item):
- schemaVersion = 1
- typeName = 'test_path_thing'
-
- relpath = path()
- abspath = path(relative=False)
-
-
-
-class InStoreFilesTest(unittest.TestCase):
- """
- Tests for files managed by the store.
- """
- def _testFile(self, s):
- """
- Shared part of file creation tests.
- """
- f = s.newFile('test', 'whatever.txt')
- f.write('crap')
- def cb(fpath):
- self.assertEquals(fpath.open().read(), 'crap')
-
- return f.close().addCallback(cb)
-
-
- def test_createFile(self):
- """
- Ensure that file creation works for on-disk stores.
- """
- s = Store(filepath.FilePath(self.mktemp()))
- return self._testFile(s)
-
-
- def test_createFileInMemory(self):
- """
- Ensure that file creation works for in-memory stores as well.
- """
- s = Store(filesdir=filepath.FilePath(self.mktemp()))
- return self._testFile(s)
-
- def test_createFileInMemoryAtString(self):
- """
- The 'filesdir' parameter should accept a string as well, for now.
- """
- s = Store(filesdir=self.mktemp())
- return self._testFile(s)
-
-
- def test_noFiledir(self):
- """
- File creation should raise an error if the store has no file directory.
- """
- s = Store()
- self.assertRaises(RuntimeError, s.newFile, "test", "whatever.txt")
-
-class PathAttributesTest(unittest.TestCase):
- def testRelocatingPaths(self):
- spath = self.mktemp()
- npath = self.mktemp()
- s = Store(spath)
- rel = s.newFile("test", "123")
- TEST_STR = "test 123"
-
- def cb(fpath):
- fpath.setContent(TEST_STR)
-
- PathTesterItem(store=s,
- relpath=fpath)
-
- s.close()
- os.rename(spath, npath)
- s2 = Store(npath)
- pti = list(s2.query(PathTesterItem))[0]
-
- self.assertEquals(pti.relpath.getContent(),
- TEST_STR)
-
- return rel.close().addCallback(cb)
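The relocation test above works because the store persists only store-relative paths and joins them against the store's base directory at access time. A minimal sketch of that idea (hypothetical names, not Axiom's implementation):

```python
import os, shutil, tempfile

class RelativePathItem:
    """Persists only a path relative to the store directory."""
    def __init__(self, store_dir, relpath):
        self.store_dir = store_dir   # rebound when the store is reopened
        self.relpath = relpath       # this is what actually gets persisted
    def resolve(self):
        return os.path.join(self.store_dir, self.relpath)

parent = tempfile.mkdtemp()
old = os.path.join(parent, "store")
os.makedirs(os.path.join(old, "files"))
with open(os.path.join(old, "files", "123"), "w") as f:
    f.write("test 123")

item = RelativePathItem(old, os.path.join("files", "123"))
with open(item.resolve()) as f:
    assert f.read() == "test 123"

new = os.path.join(parent, "moved")
os.rename(old, new)              # relocate the whole store directory
item.store_dir = new             # "reopening" rebinds the base directory
with open(item.resolve()) as f:
    assert f.read() == "test 123"  # the relative path still resolves
shutil.rmtree(parent)
```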
=== removed file 'Axiom/axiom/test/test_inheritance.py'
--- Axiom/axiom/test/test_inheritance.py 2005-07-28 22:09:16 +0000
+++ Axiom/axiom/test/test_inheritance.py 1970-01-01 00:00:00 +0000
@@ -1,28 +0,0 @@
-
-# This module is really a placeholder: inheritance between database classes is
-# unsupported in XAtop right now. We are just making sure that it is
-# aggressively unsupported.
-
-from twisted.trial import unittest
-
-from axiom.item import Item, NoInheritance
-from axiom.attributes import integer
-
-class InheritanceUnsupported(unittest.TestCase):
-
- def testNoInheritance(self):
- class XA(Item):
- schemaVersion = 1
- typeName = 'inheritance_test_xa'
- a = integer()
-
- try:
- class XB(XA):
- schemaVersion = 1
- typeName = 'inheritance_test_xb'
- b = integer()
- except NoInheritance:
- pass
- else:
- self.fail("Expected NoInheritance but none occurred")
-
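The "aggressively unsupported" inheritance this test pins down is straightforward to reproduce with a metaclass that rejects subclassing of any concrete item class. A toy version (not Axiom's actual metaclass, which also handles schema bookkeeping):

```python
class NoInheritance(RuntimeError):
    """Raised when a database item class is subclassed."""

class ItemMeta(type):
    def __new__(mcls, name, bases, namespace):
        for base in bases:
            # Only the abstract Item base may be subclassed.
            if isinstance(base, ItemMeta) and base.__name__ != 'Item':
                raise NoInheritance(
                    "%s may not subclass item class %s" % (name, base.__name__))
        return super().__new__(mcls, name, bases, namespace)

class Item(metaclass=ItemMeta):
    pass

class XA(Item):
    a = 1

try:
    class XB(XA):   # subclassing a concrete item is rejected
        b = 2
except NoInheritance:
    blocked = True
else:
    blocked = False
assert blocked
```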
=== removed file 'Axiom/axiom/test/test_item.py'
--- Axiom/axiom/test/test_item.py 2011-08-20 01:48:56 +0000
+++ Axiom/axiom/test/test_item.py 1970-01-01 00:00:00 +0000
@@ -1,586 +0,0 @@
-
-import sys, os
-
-from twisted.trial import unittest
-from twisted.trial.unittest import TestCase
-from twisted.internet import error, protocol, defer, reactor
-from twisted.protocols import policies
-from twisted.python import log, filepath
-
-from axiom import store, item
-from axiom.store import Store
-from axiom.item import Item, declareLegacyItem
-from axiom.errors import ChangeRejected
-from axiom.test import itemtest, itemtestmain
-from axiom.attributes import integer, text, inmemory
-
-class ProcessFailed(Exception):
- pass
-
-class ProcessOutputCollector(protocol.ProcessProtocol, policies.TimeoutMixin):
- TIMEOUT = 60
-
- def __init__(self, onCompletion):
- self.output = []
- self.error = []
- self.onCompletion = onCompletion
- self.onCompletion.addCallback(self.processOutput)
-
- def processOutput(self, output):
- return output
-
- def timeoutConnection(self):
- self.transport.signalProcess('KILL')
-
- def connectionMade(self):
- self.setTimeout(self.TIMEOUT)
-
- def outReceived(self, bytes):
- self.resetTimeout()
- self.output.append(bytes)
-
- def errReceived(self, bytes):
- self.resetTimeout()
- self.error.append(bytes)
-
- def processEnded(self, reason):
- self.setTimeout(None)
- if reason.check(error.ProcessTerminated):
- self.onCompletion.errback(ProcessFailed(self, reason))
- elif self.error:
- self.onCompletion.errback(ProcessFailed(self, None))
- else:
- self.onCompletion.callback(self.output)
-
-
-
-class NoAttrsItem(item.Item):
- typeName = 'noattrsitem'
- schemaVersion = 1
-
-
-
-class TransactedMethodItem(item.Item):
- """
- Helper class for testing the L{axiom.item.transacted} decorator.
- """
- value = text()
- calledWith = inmemory()
-
- def method(self, a, b, c):
- self.value = u"changed"
- self.calledWith = [a, b, c]
- raise Exception("TransactedMethodItem.method test exception")
- method.attribute = 'value'
- method = item.transacted(method)
-
-
-
-class StoredNoticingItem(item.Item):
- """
- Test item which just remembers whether or not its C{stored} method has been
- called.
- """
- storedCount = integer(doc="""
- The number of times C{stored} has been called on this item.
- """, default=0)
-
- activatedCount = integer(doc="""
- The number of times C{stored} has been called on this item after C{activate}.
- """, default=0)
-
- activated = inmemory(doc="""
- A value set in the C{activate} callback and nowhere else. Used to
- determine the ordering of C{activate} and C{stored} calls.
- """)
-
- def activate(self):
- self.activated = True
-
-
- def stored(self):
- self.storedCount += 1
- self.activatedCount += getattr(self, 'activated', 0)
-
-
-class ItemWithDefault(item.Item):
- """
- Item with an attribute having a default value.
- """
- value = integer(default=10)
-
-
-
-class ItemTestCase(unittest.TestCase):
- """
- Tests for L{Item}.
- """
-
- def test_repr(self):
- """
- L{Item.__repr__} should return a C{str} giving the name of the
- subclass and the names and values of all the item's attributes.
- """
- reprString = repr(ItemWithDefault(value=123))
- self.assertIn('value=123', reprString)
- self.assertIn('storeID=None', reprString)
- self.assertIn('ItemWithDefault', reprString)
-
- store = Store()
- item = ItemWithDefault(store=store, value=321)
- reprString = repr(item)
- self.assertIn('value=321', reprString)
- self.assertIn('storeID=%d' % (item.storeID,), reprString)
- self.assertIn('ItemWithDefault', reprString)
-
-
- def test_partiallyInitializedRepr(self):
- """
- L{Item.__repr__} should return a C{str} giving some information,
- even if called before L{Item.__init__} has run completely.
- """
- item = ItemWithDefault.__new__(ItemWithDefault)
- reprString = repr(item)
- self.assertIn('ItemWithDefault', reprString)
-
-
- def test_itemClassOrdering(self):
- """
- Test that L{Item} subclasses (not instances) sort by the Item's
- typeName.
- """
- A = TransactedMethodItem
- B = NoAttrsItem
-
- self.failUnless(A < B)
- self.failUnless(B >= A)
- self.failIf(A >= B)
- self.failIf(B <= A)
- self.failUnless(A != B)
- self.failUnless(B != A)
- self.failIf(A == B)
- self.failIf(B == A)
-
-
- def test_legacyItemComparison(self):
- """
- Legacy items with different versions must not compare equal.
- """
- legacy1 = declareLegacyItem('test_type', 1, {})
- legacy2 = declareLegacyItem('test_type', 2, {})
- self.assertNotEqual(legacy1, legacy2)
- self.assertEqual(legacy1, legacy1)
- self.assertEqual(legacy2, legacy2)
-
-
- def testCreateItem(self):
- st = store.Store()
- self.assertRaises(item.CantInstantiateItem, item.Item, store=st)
-
-
- def testCreateItemWithDefault(self):
- """
- Test that attributes with default values can be set to None properly.
- """
- st = store.Store()
- it = ItemWithDefault()
- it.value = None
- self.assertEqual(it.value, None)
-
-
- def test_storedCallbackAfterActivateCallback(self):
- """
- Test that L{Item.stored} is only called after L{Item.activate} has been
- called.
- """
- st = store.Store()
- i = StoredNoticingItem(store=st)
- self.assertEquals(i.activatedCount, 1)
-
-
- def test_storedCallbackOnAttributeSet(self):
- """
- Test that L{Item.stored} is called when an item is actually added to a
- store and not before.
- """
- st = store.Store()
- i = StoredNoticingItem()
- self.assertEquals(i.storedCount, 0)
- i.store = st
- self.assertEquals(i.storedCount, 1)
-
-
- def test_storedCallbackOnItemCreation(self):
- """
- Test that L{Item.stored} is called when an item is created with a
- store.
- """
- st = store.Store()
- i = StoredNoticingItem(store=st)
- self.assertEquals(i.storedCount, 1)
-
-
- def test_storedCallbackNotOnLoad(self):
- """
- Test that pulling an item out of a store does not invoke its stored
- callback again.
- """
- st = store.Store()
- storeID = StoredNoticingItem(store=st).storeID
- self.assertEquals(st.getItemByID(storeID).storedCount, 1)
-
-
- def testTransactedTransacts(self):
- """
- Test that a method wrapped in C{axiom.item.transacted} is automatically
- run in a transaction.
- """
- s = store.Store()
- i = TransactedMethodItem(store=s, value=u"unchanged")
- exc = self.assertRaises(Exception, i.method, 'a', 'b', 'c')
- self.assertEquals(exc.args, ("TransactedMethodItem.method test exception",))
- self.assertEquals(i.value, u"unchanged")
-
-
- def testTransactedPassedArguments(self):
- """
- Test that positional and keyword arguments are passed through
- L{axiom.item.transacted}-wrapped methods correctly.
- """
- s = store.Store()
- i = TransactedMethodItem(store=s)
- exc = self.assertRaises(Exception, i.method, 'a', b='b', c='c')
- self.assertEquals(exc.args, ("TransactedMethodItem.method test exception",))
- self.assertEquals(i.calledWith, ['a', 'b', 'c'])
-
-
- def testTransactedPreservesAttributes(self):
- """
- Test that the original function attributes are available on a
- L{axiom.item.transacted}-wrapped function.
- """
- self.assertEquals(TransactedMethodItem.method.attribute, 'value')
-
-
- def testPersistentValues(self):
- st = store.Store()
- pi = itemtest.PlainItem(store=st, plain=u'hello')
- self.assertEqual(pi.persistentValues(), {'plain': u'hello'})
-
-
- def testPersistentValuesWithoutValue(self):
- st = store.Store()
- pi = itemtest.PlainItem(store=st)
- self.assertEqual(pi.persistentValues(), {'plain': None})
-
-
- def testCreateItemWithNoAttrs(self):
- st = store.Store()
- self.assertRaises(store.NoEmptyItems, NoAttrsItem, store=st)
-
- def testCreatePlainItem(self):
- st = store.Store()
- s = itemtest.PlainItem(store=st)
-
- def testLoadLoadedPlainItem(self):
- """
- Test that loading an Item out of the store by its Store ID
- when a Python object representing that Item already exists in
- memory returns the same object as the one which already
- exists.
- """
- st = store.Store()
- item = itemtest.PlainItem(store=st)
- self.assertIdentical(item, st.getItemByID(item.storeID))
-
- def testLoadUnimportedPlainItem(self):
- """
- Test that an Item in the database can be loaded out of the
- database, even if the module defining its Python class has not
- been imported, as long as its class definition has not moved
- since it was added to the database.
- """
- storePath = filepath.FilePath(self.mktemp())
- st = store.Store(storePath)
- itemID = itemtest.PlainItem(store=st, plain=u'Hello, world!!!').storeID
- st.close()
-
- e = os.environ.copy()
- d = defer.Deferred()
- p = ProcessOutputCollector(d)
- try:
- reactor.spawnProcess(p, sys.executable, [sys.executable, '-Wignore', itemtestmain.__file__.rstrip('co'), storePath.path, str(itemID)], e)
- except NotImplementedError:
- raise unittest.SkipTest("Implement processes here")
-
- def cbOutput(output):
- self.assertEquals(''.join(output).strip(), 'Hello, world!!!')
-
- def ebBlah(err):
- log.err(err)
- self.fail(''.join(err.value.args[0].error))
-
- return d.addCallbacks(cbOutput, ebBlah)
-
- def testDeleteCreatePair(self):
- # Test coverage for a bug which was present in Axiom: deleting
- # the newest item in a database and then creating a new item
- # re-used the deleted item's oid causing all manner of
- ridiculousness.
- st = store.Store()
-
- i = itemtest.PlainItem(store=st)
-
- oldStoreID = i.storeID
- i.deleteFromStore()
- j = itemtest.PlainItem(store=st)
- self.failIfEqual(oldStoreID, j.storeID)
-
- def testDeleteThenLoad(self):
- st = store.Store()
- i = itemtest.PlainItem(store=st)
- oldStoreID = i.storeID
- self.assertEquals(st.getItemByID(oldStoreID, default=1234),
- i)
- i.deleteFromStore()
- self.assertEquals(st.getItemByID(oldStoreID+100, default=1234),
- 1234)
- self.assertEquals(st.getItemByID(oldStoreID, default=1234),
- 1234)
-
-
- def test_duplicateDefinition(self):
- """
- When the same typeName is defined as an item class multiple times in
- memory, the second definition fails with a L{RuntimeError}.
- """
- class X(Item):
- dummy = integer()
- try:
- class X(Item):
- dummy = integer()
- except RuntimeError:
- pass
- else:
- self.fail("Duplicate definition should have failed.")
-
-
- def test_nonConflictingRedefinition(self):
- """
- If the python item class associated with a typeName is garbage
- collected, a new python item class can re-use that type name.
- """
- class X(Item):
- dummy = integer()
- del X
- class X(Item):
- dummy = integer()
-
-
-
-class TestItem(Item):
- """
- Boring, behaviorless Item subclass used when we just need an item
- someplace.
- """
- attribute = integer()
-
-
-
-class BrokenCommittedItem(Item):
- """
- Item class which changes database state in its committed method. Don't
- write items like this, they're broken.
- """
- attribute = integer()
- _committed = inmemory()
-
- def committed(self):
- Item.committed(self)
- if getattr(self, '_committed', None) is not None:
- self._committed(self)
-
-
-
-class CheckpointTestCase(TestCase):
- """
- Tests for Item checkpointing.
- """
- def setUp(self):
- self.checkpointed = []
- def checkpoint(item):
- self.checkpointed.append(item)
- self.originalCheckpoint = TestItem.checkpoint.im_func
- TestItem.checkpoint = checkpoint
-
-
- def tearDown(self):
- TestItem.checkpoint = self.originalCheckpoint
-
-
- def _autocommitBrokenCommittedMethodTest(self, method):
- store = Store()
- item = BrokenCommittedItem(store=store)
- item._committed = method
- self.assertRaises(ChangeRejected, setattr, item, 'attribute', 0)
-
-
- def _transactionBrokenCommittedMethodTest(self, method):
- store = Store()
- item = BrokenCommittedItem(store=store)
- item._committed = method
-
- def txn():
- item.attribute = 0
- self.assertRaises(ChangeRejected, store.transact, txn)
-
-
- def test_autocommitBrokenCommittedMethodMutate(self):
- """
- Test that changing a persistent attribute in the committed callback
- raises L{ChangeRejected}, even if the original change was made in
- autocommit mode.
- """
- def mutate(self):
- self.attribute = 0
- return self._autocommitBrokenCommittedMethodTest(mutate)
-
-
- def test_transactionBrokenCommittedMethodMutate(self):
- """
- Test changing a persistent attribute in the committed callback raises
- L{ChangeRejected}.
- """
- def mutate(item):
- item.attribute = 0
- return self._transactionBrokenCommittedMethodTest(mutate)
-
-
- def test_autocommitBrokenCommittedMethodDelete(self):
- """
- Test that deleting an item in the committed callback raises
- L{ChangeRejected}, even if the original change was made in autocommit mode.
- """
- def delete(item):
- item.deleteFromStore()
- return self._autocommitBrokenCommittedMethodTest(delete)
-
-
- def test_transactionBrokenCommittedMethodDelete(self):
- """
- Test that deleting an item in the committed callback raises
- L{ChangeRejected}.
- """
- def delete(item):
- item.deleteFromStore()
- return self._transactionBrokenCommittedMethodTest(delete)
-
-
- def test_autocommitBrokenCommittedMethodCreate(self):
- """
- Test that creating a new item in a committed callback raises
- L{ChangeRejected}, even if the original change was made in autocommit mode.
- """
- def create(item):
- TestItem(store=item.store)
- return self._autocommitBrokenCommittedMethodTest(create)
-
-
- def test_transactionBrokenCommittedMethodCreate(self):
- """
- Test that creating a new item in a committed callback raises
- L{ChangeRejected}.
- """
- def create(item):
- TestItem(store=item.store)
- return self._transactionBrokenCommittedMethodTest(create)
-
-
- def test_autocommitCheckpoint(self):
- """
- Test that an Item is checkpointed when it is created outside of a
- transaction.
- """
- store = Store()
- item = TestItem(store=store)
- self.assertEquals(self.checkpointed, [item])
-
-
- def test_transactionCheckpoint(self):
- """
- Test that an Item is checkpointed when the transaction it is created
- within is committed.
- """
- store = Store()
- def txn():
- item = TestItem(store=store)
- self.assertEquals(self.checkpointed, [])
- return item
- item = store.transact(txn)
- self.assertEquals(self.checkpointed, [item])
-
-
- def test_queryCheckpoint(self):
- """
- Test that a newly created Item is checkpointed before a query is
- executed.
- """
- store = Store()
- def txn():
- item = TestItem(store=store)
- list(store.query(TestItem))
- self.assertEquals(self.checkpointed, [item])
- store.transact(txn)
-
-
- def test_autocommitTouchCheckpoint(self):
- """
- Test that an existing Item is checkpointed if it has an attribute
- changed on it.
- """
- store = Store()
- item = TestItem(store=store)
-
- # Get rid of the entry that's there from creation
- self.checkpointed = []
-
- item.attribute = 0
- self.assertEquals(self.checkpointed, [item])
-
-
- def test_transactionTouchCheckpoint(self):
- """
- Test that in a transaction an existing Item is checkpointed if it has
- touch called on it and the store it is in is checkpointed.
- """
- store = Store()
- item = TestItem(store=store)
-
- # Get rid of the entry that's there from creation
- self.checkpointed = []
-
- def txn():
- item.touch()
- store.checkpoint()
- self.assertEquals(self.checkpointed, [item])
- store.transact(txn)
-
-
- def test_twoQueriesOneCheckpoint(self):
- """
- Test that if two queries are performed in a transaction, a touched item
- only has checkpoint called on it before the first.
- """
- store = Store()
- item = TestItem(store=store)
-
- # Get rid of the entry that's there from creation
- self.checkpointed = []
-
- def txn():
- item.touch()
- list(store.query(TestItem))
- self.assertEquals(self.checkpointed, [item])
- list(store.query(TestItem))
- self.assertEquals(self.checkpointed, [item])
- store.transact(txn)
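The C{transacted} tests earlier in this file hinge on two properties: the wrapped method runs inside a transaction (so an exception rolls its changes back), and the original function's attributes survive wrapping. Both are easy to demonstrate with a toy in-memory store; none of the names below are Axiom's real implementation:

```python
import functools

class Store:
    """Tiny stand-in store with rollback-on-exception transactions."""
    def __init__(self):
        self.data = {}
    def transact(self, f, *args, **kwargs):
        snapshot = dict(self.data)
        try:
            return f(*args, **kwargs)
        except Exception:
            self.data = snapshot   # roll the mutation back, then re-raise
            raise

def transacted(method):
    @functools.wraps(method)       # wraps copies __dict__, preserving attributes
    def wrapper(self, *args, **kwargs):
        return self.store.transact(method, self, *args, **kwargs)
    return wrapper

def _set_value(self, v):
    self.store.data['value'] = v
    raise Exception("boom")
_set_value.attribute = 'value'     # mirrors TransactedMethodItem.method.attribute

class Thing:
    def __init__(self, store):
        self.store = store
    set_value = transacted(_set_value)

assert Thing.set_value.attribute == 'value'   # attributes preserved

s = Store()
t = Thing(s)
try:
    t.set_value(5)
except Exception:
    pass
assert 'value' not in s.data      # the failed transaction left no trace
```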
=== removed file 'Axiom/axiom/test/test_listversions.py'
--- Axiom/axiom/test/test_listversions.py 2008-12-09 21:22:31 +0000
+++ Axiom/axiom/test/test_listversions.py 1970-01-01 00:00:00 +0000
@@ -1,116 +0,0 @@
-# Copyright 2008 Divmod, Inc.
-# See LICENSE file for details
-
-"""
-Tests for Axiom store version history.
-"""
-
-import sys, StringIO
-from twisted.trial import unittest
-from twisted.python.versions import Version
-
-from axiom.store import Store
-from axiom import version as axiom_version
-from axiom.listversions import (getSystemVersions,
- SystemVersion,
- checkSystemVersion)
-
-from axiom.scripts.axiomatic import Options as AxiomaticOptions
-from axiom.test.util import CommandStubMixin
-from axiom.plugins.axiom_plugins import ListVersions
-
-class SystemVersionTests(unittest.TestCase, CommandStubMixin):
- """
- Tests for recording the versions of software used to open a store
- throughout its lifetime.
- """
-
- def setUp(self):
- """
- Create an on-disk store.
- """
- self.dbdir = self.mktemp()
- self.store = Store(self.dbdir)
-
-
- def _reopenStore(self):
- """
- Close the store and reopen it.
- """
- self.store.close()
- self.store = Store(self.dbdir)
-
-
- def test_getSystemVersions(self):
- """
- L{getSystemVersions} returns all the version plugins it finds.
- """
- someVersions = [Version("foo", 1, 2, 3),
- Version("baz", 0, 0, 1)]
- def getSomeVersions(iface, package):
- return someVersions
- self.assertEqual(getSystemVersions(getSomeVersions),
- someVersions)
-
- def test_checkSystemVersion(self):
- """
- Calling checkSystemVersion:
- 1. Doesn't duplicate the system version when called with the
- same software package versions.
- 2. Creates a new system version when one of the software
- package versions has changed.
- 3. Notices and creates a new system version when the system
- config has reverted to a previous state.
- """
- versions = [Version("foo", 1, 2, 3)]
-
- checkSystemVersion(self.store, versions)
- checkSystemVersion(self.store, versions)
-
- query_results = list(self.store.query(SystemVersion))
- self.assertEquals(len(query_results), 1)
-
- # Adjust a version number and try again.
- v = versions[0]
- versions[0] = Version(v.package, v.major, v.minor + 1, v.micro)
- checkSystemVersion(self.store, versions)
-
- query_results = list(self.store.query(SystemVersion))
- self.assertEquals(len(query_results), 2)
-
- # Revert the version number and try again.
- versions[0] = v
-
- checkSystemVersion(self.store, versions)
- query_results = list(self.store.query(SystemVersion))
- self.assertEquals(len(query_results), 3)
-
- # Reopening the store does not duplicate the version.
- self._reopenStore()
- query_results = list(self.store.query(SystemVersion))
- self.assertEquals(len(query_results), 3)
-
-
- def test_commandLine(self):
- """
- L{ListVersions} will list versions of code used in this store when
- invoked as an axiomatic subcommand.
- """
- checkSystemVersion(self.store)
-
- out = StringIO.StringIO()
- self.patch(sys, 'stdout', out)
- lv = ListVersions()
- lv.parent = self
- lv.parseOptions([])
- result = out.getvalue()
- self.assertSubstring("axiom: " + axiom_version.short(), result)
-
-
- def test_axiomaticSubcommand(self):
- """
- L{ListVersions} is available as a subcommand of I{axiomatic}.
- """
- subCommands = AxiomaticOptions().subCommands
- [options] = [cmd[2] for cmd in subCommands if cmd[0] == 'list-version']
- self.assertIdentical(options, ListVersions)
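The core invariant tested by C{test_checkSystemVersion} is that a new version record is appended only when the current software versions differ from the most recent record, so a reverted configuration still produces a fresh entry. Reduced to a list-based sketch (not Axiom's item-backed implementation):

```python
def check_system_version(history, versions):
    """Append versions to history only if they differ from the latest entry."""
    current = list(versions)
    if not history or history[-1] != current:
        history.append(current)

history = []
check_system_version(history, [("foo", (1, 2, 3))])
check_system_version(history, [("foo", (1, 2, 3))])   # unchanged: no duplicate
assert len(history) == 1
check_system_version(history, [("foo", (1, 3, 3))])   # changed: new record
assert len(history) == 2
check_system_version(history, [("foo", (1, 2, 3))])   # reverted: also a new record
assert len(history) == 3
```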
=== removed file 'Axiom/axiom/test/test_mixin.py'
--- Axiom/axiom/test/test_mixin.py 2005-10-28 14:45:46 +0000
+++ Axiom/axiom/test/test_mixin.py 1970-01-01 00:00:00 +0000
@@ -1,56 +0,0 @@
-
-from twisted.trial.unittest import TestCase
-
-from axiom.item import Item
-from axiom.attributes import integer
-
-from axiom.slotmachine import hyper as super
-
-__metaclass__ = type
-
-class X:
- xm = 0
- def m(self):
- self.xm += 1
- return self.xm
-
-class Y(X):
- ym = 0
-
- def m(self):
- ret = super(Y, self).m()
- self.ym += 1
- ret += 1
- return ret
-
-class Z(X):
- zm = 0
- def m(self):
- ret = super(Z, self).m()
- ret += 1
- self.zm += 1
- return ret
-
-class XYZ(Y, Z):
- pass
-
-class ItemXYZ(Item, XYZ):
- typeName = 'item_xyz'
- schemaVersion = 1
-
- xm = integer(default=0)
- ym = integer(default=0)
- zm = integer(default=0)
-
-
-class TestBorrowedMixins(TestCase):
-
- def testSanity(self):
- xyz = XYZ()
- val = xyz.m()
- self.assertEquals(val, 3)
-
- def testItemSanity(self):
- xyz = ItemXYZ()
- val = xyz.m()
- self.assertEquals(val, 3)
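The mixin behaviour exercised above is ordinary cooperative multiple inheritance: each C{m} defers up the MRO, so the diamond visits Y, Z, then X exactly once. In modern Python the same shape can be restated with zero-argument C{super()} (a plain-Python restatement, not the slotmachine version):

```python
class X:
    def __init__(self):
        self.xm = self.ym = self.zm = 0
    def m(self):
        self.xm += 1
        return self.xm

class Y(X):
    def m(self):
        ret = super().m()   # defers to the next class in the MRO, not just X
        self.ym += 1
        return ret + 1

class Z(X):
    def m(self):
        ret = super().m()
        self.zm += 1
        return ret + 1

class XYZ(Y, Z):
    pass

# MRO is XYZ -> Y -> Z -> X: X.m returns 1, then Z and Y each add 1.
xyz = XYZ()
assert xyz.m() == 3
assert (xyz.xm, xyz.ym, xyz.zm) == (1, 1, 1)   # every class ran exactly once
```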
=== removed file 'Axiom/axiom/test/test_paginate.py'
--- Axiom/axiom/test/test_paginate.py 2006-12-19 22:42:15 +0000
+++ Axiom/axiom/test/test_paginate.py 1970-01-01 00:00:00 +0000
@@ -1,171 +0,0 @@
-# Copyright 2006 Divmod, Inc. See LICENSE file for details
-
-"""
-This module contains tests for the L{axiom.store.ItemQuery.paginate} method.
-"""
-
-from twisted.trial.unittest import TestCase
-
-
-from axiom.store import Store
-from axiom.item import Item
-from axiom.attributes import integer, compoundIndex
-
-from axiom.test.util import QueryCounter
-
-class SingleColumnSortHelper(Item):
- mainColumn = integer(indexed=True)
- other = integer()
- compoundIndex(mainColumn, other)
-
-class MultiColumnSortHelper(Item):
- columnOne = integer()
- columnTwo = integer()
- compoundIndex(columnOne, columnTwo)
-
-
-class CrossTransactionIteration(TestCase):
-
- def test_separateTransactions(self):
- """
- Verify that 'paginate' is iterable in separate transactions.
- """
- s = Store()
- b1 = SingleColumnSortHelper(store=s, mainColumn=1)
- b2 = SingleColumnSortHelper(store=s, mainColumn=2)
- b3 = SingleColumnSortHelper(store=s, mainColumn=3)
- itr = s.transact(lambda : iter(s.query(SingleColumnSortHelper).paginate()))
- self.assertIdentical(s.transact(itr.next), b1)
- self.assertEquals(s.transact(lambda : (itr.next(), itr.next())),
- (b2, b3))
- self.assertRaises(StopIteration, lambda : s.transact(itr.next))
-
-
- def test_moreItemsNotMoreWork(self):
- """
- Verify that each step of a paginate does not become more work as items
- are added.
- """
- s = Store()
- self._checkEfficiency(s.query(SingleColumnSortHelper))
-
- def test_moreItemsNotMoreWorkSorted(self):
- """
- Verify that each step of a paginate does not become more work as more
- items are added even if a sort is given.
- """
- s = Store()
- self._checkEfficiency(s.query(SingleColumnSortHelper,
- sort=SingleColumnSortHelper.mainColumn.ascending))
-
-
- def test_moreItemsNotMoreWorkRestricted(self):
- s = Store()
- self._checkEfficiency(s.query(SingleColumnSortH