
oship-dev team mailing list archive

Re: adl2py fundamentals

 

Hi, Tim:
 [... and thanks for the new thread, Roger!]

Yes, your explanations were very useful. I understand your approach much better now. By the way: generation of Python code using Python itself is what is called metaprogramming (http://en.wikipedia.org/wiki/Metaprogramming). It would be wonderful to find some sort of Python "metaprogrammer" ("disassembler" or "decompiler") library ready to use, don't you think? Unfortunately, the only ones I've found (up to now) are "low level" bytecode decompilers like: http://docs.python.org/library/dis.html , which are not capable of decompiling "high level" objects like classes, mixins etc. In any case, I suppose that the small "helper class" described in: http://effbot.org/zone/python-code-generator.htm will be useful here, as handling indentation can be very cumbersome at times.
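To make that concrete, here is a minimal sketch of such an indentation-aware code writer, in the spirit of the effbot helper linked above. The class and method names are illustrative only, not part of OSHIP:

```python
# Hypothetical indentation-aware code writer (names are illustrative).
class CodeWriter:
    def __init__(self, indent_with="    "):
        self.lines = []
        self.level = 0
        self.indent_with = indent_with

    def write(self, line):
        # Prefix each emitted line with the current indentation.
        self.lines.append(self.indent_with * self.level + line)

    def indent(self):
        self.level += 1

    def dedent(self):
        if self.level == 0:
            raise RuntimeError("cannot dedent below zero")
        self.level -= 1

    def render(self):
        return "\n".join(self.lines)


writer = CodeWriter()
writer.write("class EncounterV1(Composition):")
writer.indent()
writer.write("category = 'openehr::433'")
writer.dedent()
print(writer.render())
```

The generator never concatenates indentation by hand; it just calls indent()/dedent() around nested blocks.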

Now, let me explain my approach a little more. [Without any commitment, of course; just "brainstorming", OK?] Well: when the adl_1_4.py parser finishes, we have a series of Python objects (classes etc) "living" in program memory, right? Presently, these objects are made permanent by storing them in a ZODB database. But there is another "standard" way to make them permanent: any Python object can be pickled/unpickled to/from a file, and that includes lists, classes, instances, functions etc. There are in fact many "pickling" modules out there (pickle, cPickle, jsonpickle etc). If we choose to do the "pickling" using a friendly (readable, utf8, simple etc) format like JSON, these objects will also become reusable, in my view. Have a look at jsonpickle: http://jsonpickle.googlecode.com/svn/docs/index.html and Joose: http://joose-js.blogspot.com/2008/08/joose-now-supports-jsonpickle.html and you will see what I mean. A few other points favoring JSON as our "official internal representation" of the archetypes: Zope already comes with simplejson; starting from Python 2.6, json is a standard Python module (http://docs.python.org/library/json.html); and JSON can even be used to validate itself: http://bitbucket.org/IanLewis/jsonschema/ .
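The "readable pickling" idea can be illustrated with just the stdlib json module (jsonpickle handles arbitrary object graphs; this sketch only round-trips a simple attribute bag, and the class name here is hypothetical):

```python
import json

# Hypothetical node object standing in for a parsed archetype fragment.
class ArchetypeNode:
    def __init__(self, node_id, rm_class):
        self.node_id = node_id
        self.rm_class = rm_class

node = ArchetypeNode("at0000", "COMPOSITION")

# Serialize to a human-readable, UTF-8 friendly string...
text = json.dumps(vars(node), indent=2, sort_keys=True)

# ...and restore it later, e.g. from a shared .json file.
restored = ArchetypeNode(**json.loads(text))
print(restored.node_id, restored.rm_class)  # at0000 COMPOSITION
```

The point is that the intermediate file stays diffable and shareable, unlike a ZODB blob or a binary pickle.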

 Best regards,
Roberto.

P.S.: A third possible approach to this problem could be through "metaclassing" (http://www.ibm.com/developerworks/linux/library/l-pymeta.html), but when even the developers themselves call it "black magic"...
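For the curious, a tiny taste of that "black magic": a metaclass that stamps every class it creates with a default archetype node id. The names here are illustrative, not from OSHIP:

```python
# Hypothetical metaclass that injects a class attribute at creation time.
class NodeIdMeta(type):
    def __new__(mcs, name, bases, namespace):
        # Only set the attribute if the class body didn't define it.
        namespace.setdefault("archetype_node_id", "at0000")
        return super().__new__(mcs, name, bases, namespace)

class Composition(metaclass=NodeIdMeta):
    pass

print(Composition.archetype_node_id)  # at0000
```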

Roger Erens wrote:
Starting up a new thread...

> Now, concerning the "adl2py" experimental branch: instead of
> "grokifing" the .ADL files, maybe it'll be enough to convert them into
> an intermediary format that Zope/Grok already knows how to handle (e.g.
> XML or, better yet, JSON: http://www.json.org and:
> http://en.wikipedia.org/wiki/JSON). What do you think?

There are some issues with this and one very fundamental issue that I
think many people do not see.

Remember that the ADL represents an archetype, which is a constraint
expression on the reference model.

What this means in practical terms is that if you only 'convert' what is
in the ADL to some other format, you still haven't created a class that
can be instantiated as openEHR data.

For example, let us look at a simple ADL file like
openEHR-EHR-COMPOSITION.encounter.v1.adl.  It shows that you need the
class COMPOSITION with an archetype node id of at0000, that it has one
attribute named category, and that category contains an instance of
DV_CODED_TEXT with the defining_code being openehr::433.  That's it;
that's all you get from the ADL.

In reality the COMPOSITION class has three other mandatory and two other
optional attributes.  It also inherits from LOCATABLE, which adds six
additional attributes.
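The gap can be sketched in a few lines of Python. The attribute lists below are a rough, non-authoritative sketch of the openEHR RM; only the constrained category is what the ADL itself gives you:

```python
# What a naive ADL-to-data mapping yields: the single constrained attribute.
class FromAdlOnly:
    def __init__(self):
        self.category = ("DV_CODED_TEXT", "openehr::433")

# Rough sketch of the inherited RM class (attribute names approximate).
class Locatable:
    def __init__(self, name, archetype_node_id):
        # LOCATABLE contributes attributes of its own that the ADL
        # never mentions (name, node id, uid, links, ...).
        self.name = name
        self.archetype_node_id = archetype_node_id

class Composition(Locatable):
    def __init__(self, name, category, composer, language, territory):
        super().__init__(name, archetype_node_id="at0000")
        # Mandatory RM attributes beyond the single constrained one:
        self.category = category
        self.composer = composer
        self.language = language
        self.territory = territory

# An instance built from the ADL alone lacks most of what
# openEHR-compatible data requires.
```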

There are developers out there right now who say they are using openEHR
archetypes because what they are doing is mapping what they find in ADL
to their data model.  They will eventually discover (because they
haven't read and listened yet) that the 'data' they produce is not
openEHR compatible because it lacks so many components.

So my move to 'grokifing' OSHIP is to use the grok classes as mixins
where needed, so that people who do not understand (or do not want to
know) how the openEHR model works can still build applications that
create fully compatible openEHR data.

In the end we will have a lot of extra code.  Things like the AQL
translation engine will have to know how to translate a query into the
query expected by the catalog/indexing components of the ZCA.  The EHR
Extract module will have to do the same.

It is a lot of extra work but the upside is more openEHR applications
being built on a platform that people are more comfortable with and
knowledgeable about.

So, the short answer is that we need to take the ADL and build a Python
source code file containing the class.  In the example
openEHR-EHR-COMPOSITION.encounter.v1.adl the class name is EncounterV1.
It lives in the km/openehr/ehr/entry/observation directory and expresses
all of the RM classes with the ADL constraints applied.  This class can
then be used in any appropriate place in an application.  The Python
files can then be reused/shared among the OSHIP community.
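A hypothetical sketch of the kind of source file the generator might emit for the encounter example. The attribute names here are illustrative, not the actual OSHIP API:

```python
# The generator would write text like this to a .py file; here we keep
# it in a string and load it to show the round trip (names hypothetical).
GENERATED = '''
class EncounterV1:
    """COMPOSITION constrained by openEHR-EHR-COMPOSITION.encounter.v1."""
    archetype_node_id = "at0000"
    category_code = "openehr::433"
'''

namespace = {}
exec(GENERATED, namespace)          # load the generated class
EncounterV1 = namespace["EncounterV1"]
print(EncounterV1.category_code)    # openehr::433
```

Once written to disk, such a file can be imported like any other module and shared among OSHIP users.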

Does this explain my approach to 'grokifing' a little better?

Comments are not only welcome but encouraged.

Cheers,
Tim
