
launchpad-dev team mailing list archive

Some launchpad data model thoughts


Hi

I'm a new starter with Canonical, working on the Launchpad team. I have
been getting myself familiar with the Launchpad code base and have some
observations/questions I'd like to pose. A large part of my background
has been working on enterprise Java applications (web and thick client),
built on a service-oriented architecture with O/R mapping technologies
such as Hibernate, so I realise the design patterns used in Launchpad
may differ from those in other systems I have been exposed to. Be
gentle with me if I have made any incorrect assumptions. I'll try to
keep it brief, but I'd be happy to explain my thoughts in more detail if
required.

My initial questions pertain to the implementation of the data model
layer in Launchpad; specifically, the mixing of model business logic and
O/R mapping infrastructure within the domain object classes
(lp.code.model module). My experience has been that you generally want
to keep orthogonal concerns such as the two just mentioned quite
separate. Amongst other things, this allows the domain model to be
instantiated and manipulated without requiring a full database
implementation to be present. It also prevents the inevitable impedance
mismatch between a relational schema representation of the data model
and an object-based one from resulting in unnecessary design and/or
implementation compromises. The data model objects used at runtime by
the various business services encapsulate only object state and
behaviour and avoid coupling to unrelated concepts from other
architectural layers.
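To illustrate what I mean (with made-up names, not Launchpad's actual
classes), a domain object of this kind carries only state and business
behaviour, so it can be created and exercised with no database or O/R
mapper anywhere in sight:

```python
# Hypothetical sketch: a plain domain object with no O/R mapping coupling.
# The Branch class and its methods are illustrative, not Launchpad's API.

class Branch:
    """A domain object holding only state and behaviour."""

    def __init__(self, name, owner, landing_target=None):
        self.name = name
        self.owner = owner
        self.landing_target = landing_target

    def propose_merge_into(self, target):
        """Business rule: a branch cannot be merged into itself."""
        if target is self:
            raise ValueError("cannot propose a merge into the same branch")
        self.landing_target = target


# No database implementation needs to be present to use the model:
devel = Branch("devel", owner="ian")
feature = Branch("feature-x", owner="ian")
feature.propose_merge_into(devel)
assert feature.landing_target is devel
```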

It appears one consequence of the current implementation is that the
unit tests, which as we know should be fast and able to be run in
isolation (the main participants should be the test harness, the class
being tested, and possibly a few stubs or mocks), take too long to run
and hence adversely affect the continuous integration process. I may be
mistaken, but I think the current situation could be exacerbated by the
fact that the setup/teardown of each individual test creates/resets a
database instance?
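For concreteness, here is a minimal sketch of the kind of isolated unit
test I have in mind (all names are hypothetical): the only participants
are the test harness, the class under test, and a stub standing in for
the database-backed collaborator, so nothing is created or reset per
test.

```python
import unittest


class BranchNamer:
    """Hypothetical class under test: derives a branch's display name
    via a store collaborator that would normally be database-backed."""

    def __init__(self, branch_store):
        self._store = branch_store

    def display_name(self, branch_id):
        branch = self._store.get(branch_id)
        return "~%s/%s" % (branch.owner, branch.name)


class FakeBranch:
    """Trivial value object used as test data."""

    def __init__(self, owner, name):
        self.owner = owner
        self.name = name


class StubBranchStore:
    """In-memory stand-in for the database-backed store."""

    def __init__(self, branches):
        self._branches = branches

    def get(self, branch_id):
        return self._branches[branch_id]


class BranchNamerTest(unittest.TestCase):
    def test_display_name(self):
        # No database setup/teardown: the stub supplies the data directly.
        store = StubBranchStore({42: FakeBranch("ian", "feature-x")})
        self.assertEqual("~ian/feature-x",
                         BranchNamer(store).display_name(42))
```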

What would I do? Well, layered architectures I've worked with in the past
include a persistence layer containing data access objects (DAOs). These
are responsible for collaborating with the O/R mapping infrastructure to
retrieve/save objects from the domain model, and this is where the O/R
mapping logic lives. For testing purposes, the DAOs can be stubbed
out or an alternative method can be used, but the key point is that
business domain objects are created for the various unit tests without
the need for the database. The approach also typically results in a much
simplified design, one which preserves the solution's architectural
integrity and reduces unnecessary coupling between layers.
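A rough sketch of that arrangement (again, the class names are
illustrative and don't correspond to Launchpad's actual code): the
domain object stays free of persistence concerns, the DAO is the only
layer that knows about the O/R mapper, and tests substitute an
in-memory implementation.

```python
# Hypothetical sketch of the DAO arrangement described above.

class Person:
    """Pure domain object: state and behaviour only, no O/R mapping."""

    def __init__(self, person_id, display_name):
        self.id = person_id
        self.display_name = display_name


class PersonDAO:
    """Persistence-layer interface: the only place that knows about storage."""

    def get(self, person_id):
        raise NotImplementedError

    def save(self, person):
        raise NotImplementedError


class DatabasePersonDAO(PersonDAO):
    """The production implementation would collaborate with the O/R
    mapping infrastructure here; elided in this sketch."""


class InMemoryPersonDAO(PersonDAO):
    """Test double: lets unit tests build domain objects with no database."""

    def __init__(self):
        self._people = {}

    def get(self, person_id):
        return self._people[person_id]

    def save(self, person):
        self._people[person.id] = person


# A unit test can then exercise the domain model entirely in memory:
dao = InMemoryPersonDAO()
dao.save(Person(1, "Ian"))
assert dao.get(1).display_name == "Ian"
```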

What do others think? Are my thoughts sensible? Is the type of
refactoring that would be required to "fix" the current implementation
something that would be considered for inclusion in the MGPP?

Thanks,
Ian
