maria-discuss team mailing list archive
Re: The future of Dynamic Columns
On 29.01.15 01:12, Tom Worster wrote:
And Maria promises us that we could use that expression to write the
value back and the result will be the exact same thing. A sort of
idempotency promise (kinda).
This doesn't make the interface any less clunky (on the contrary) but
I have resigned myself to a clunky interface. The only alternative is
to forget all this and start something different. SQL is an insanely
clunky old language and that's why we use some kind of abstraction
software to make it nicer. If we're going to use SQL we may as well
try to use what's there.
Something like this COLUMN_GET_SQL() would at least allow the
application the possibility to get back what it put in.
I am not sure that I understand what you need. Is it the ability to know
the internal storage type?
When I use COLUMN_CREATE('x', ___ ) I provide in the ___ a literal that
has, in its SQL context, both a value and a datatype. So I have a way to
put something into the database that I cannot read back out.
COLUMN_CREATE() is the SQL way to deposit (value, datatype) into the
database. There is no corresponding SQL way to withdraw
(value, datatype). My proposal was aimed at resolving that.
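A minimal SQL sketch of the asymmetry (the table name t and blob column attrs are hypothetical):

```sql
-- Deposit: the literal 123 carries both a value and a datatype (integer).
INSERT INTO t SET attrs = COLUMN_CREATE('x', 123);

-- Withdrawal: the caller must choose the datatype up front.
-- There is no form of COLUMN_GET that returns "x with whatever
-- datatype it was stored as".
SELECT COLUMN_GET(attrs, 'x' AS INT) FROM t;   -- we had to guess INT
SELECT COLUMN_GET(attrs, 'x' AS CHAR) FROM t;  -- same value, different type
```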
My understanding is that we have to fix the datatype of what will be
returned from a read before any data is read. So we have been talking
about just fixing it as a string and encoding the (value, datatype) tuple into
that string.
In documenting my ORM stuff, I wrote "In Maria the situation is worse.
Data is saved via SQL and retrieved via JSON. This may seem perverse
but..." And I end up with a lifecycle for a model in the ORM that's just
strange: Start in PHP machine encoding. Convert to SQL and send to DB.
Request JSON representation of the data from the DB. Convert the JSON back
to machine encoding. PHP isn't doing anything weird. It uses very
conventional representations just like in C.
PHP -> SQL -> JSON -> PHP. There is exactly one too many data
representations in that round trip.
Do I explain myself any better now?
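The lifecycle described above can be sketched in SQL (table t and column attrs are hypothetical; the JSON output shown is illustrative):

```sql
-- Writing goes through SQL literals...
UPDATE t SET attrs = COLUMN_CREATE('x', 1.5, 'y', 'hello');

-- ...but reading everything back with names and types intact
-- goes through a JSON representation instead:
SELECT COLUMN_JSON(attrs) FROM t;   -- e.g. {"x":1.5,"y":"hello"}
```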
So the problem is to get the same data back after conversion via JSON, right?
Data is stored with its type (not exactly the SQL type, but close; the coding
aims to keep data as compact as possible). But if we need the
Dynamic Columns -> JSON -> Dynamic Columns type conversion to work in such a way
that after conversion we have the same data as before (not
internally, but from the point of view of external requests), it is doable,
because get_column() also does type casting and we can get exact results
even if the internal representation differs.
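For example, the casting on read means the internal storage format need not match the requested type (a sketch; the exact internal encoding chosen for the literal is an implementation detail):

```sql
-- 5 may be stored internally in the most compact encoding
-- (e.g. an unsigned int), but casting on read still yields
-- the value the caller expects:
SELECT COLUMN_GET(COLUMN_CREATE('n', 5), 'n' AS DECIMAL(10,2));
SELECT COLUMN_GET(COLUMN_CREATE('n', 5), 'n' AS DOUBLE);
```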
There are a few problems I can see in mapping JSON types back to dynamic columns:
1) Numeric types (here we can store in the most compact way without
losing precision), trying to use:
unsigned -> signed -> decimal (with a limit on digits) -> double (for
everything which cannot be represented by decimal)
Dynamic columns do automatic conversion, so even if the number is
requested in another way (get_column()) we still get the same data, except
for floating point; but floating point is something which should always be
compared with precision taken into account
(it is not recommended to compare a==b, but rather abs(a-b) < precision_delta).
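The floating-point caveat can be shown directly in SQL (a minimal sketch; the e0 suffix forces double arithmetic):

```sql
SET @a = 0.1e0 + 0.2e0;         -- double arithmetic: not exactly 0.3
SELECT @a = 0.3e0;              -- 0 (FALSE): equality is fragile
SELECT ABS(@a - 0.3e0) < 1e-9;  -- 1 (TRUE): compare with a delta
```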
2) JSON has no dates (AFAIK), so a string is safe (but not compact, that is
a drawback).
3) Boolean/array, as I said, is a problem, and here dynamic columns can be
extended to support them, so that conversion to JSON and back will not change
the data.