Re: Review of microsecond patch
Hi, Michael!
On Apr 05, Michael Widenius wrote:
> Here is finally a review of the microsecond patch.
Thanks!
> > === modified file 'client/mysqltest.cc'
> > --- client/mysqltest.cc 2010-11-22 09:21:10 +0000
> > +++ client/mysqltest.cc 2011-02-23 17:26:02 +0000
> > @@ -7342,7 +7342,8 @@ int util_query(MYSQL* org_mysql, const c
> > cur_con->util_mysql= mysql;
> > }
> >
> > - return mysql_query(mysql, query);
> > + int ret= mysql_query(mysql, query);
> > + DBUG_RETURN(ret);
> > }
>
> Not necessary change, but not important enough to revert.
It changes "return" to "DBUG_RETURN" (because the function uses
DBUG_ENTER) - that's the important part.
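Roughly, the point (an illustrative fragment, not the actual mysqltest.cc
code - the function name here is made up):

  int some_util_query(MYSQL *mysql, const char *query)
  {
    DBUG_ENTER("some_util_query");  /* pushes a frame on the dbug call stack */
    int ret= mysql_query(mysql, query);
    /* a plain "return ret" here would leave the dbug stack unbalanced
       and break the function nesting in the trace output */
    DBUG_RETURN(ret);               /* pops the frame and returns */
  }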
> > === modified file 'include/my_sys.h'
>
> <cut>
>
> > +#define my_micro_time() (my_getsystime()/10)
>
> I am not happy with the above change. The old my_micro_time() was
> defined to be significantly faster than getsystime() for some systems
> (especially windows and solaris) and we should consider keeping the
> old implementation....
> <long phone conversation>
> Conclusion: my_getsystime() is not depending on the clock and only to
> be used when calculating intervals.
I've made the changes you requested, and also
* renamed my_getsystime() to my_interval_timer(), to make the semantics
  clearer and let the compiler catch wrong usages of my_getsystime()
  (including future ones that may come in merges).
* increased its granularity from 100ns to 1ns. The old value was chosen
  for UUID, but as UUID can no longer use it directly there is no need
  to degrade the OS-provided value.
* fixed the UUID code to anchor my_interval_timer() to the epoch, as
  required by the UUID standard. That is, this anchoring was only needed
  by UUID, so I've moved it from my_getsystime() into the UUID code.
* fixed other wrong usages of my_getsystime() - e.g. in calculating
  times for pthread_cond_timedwait. That was buggy and could have caused
  long waits if the OS clock was changed; see the sketch below.
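In POSIX terms the distinction is roughly this (a simplified sketch, not
the actual mysys/my_pthread.h code):

  #include <pthread.h>
  #include <time.h>

  /* measuring an interval: a monotonic source, immune to clock changes */
  static long long elapsed_ns(void)
  {
    struct timespec t1, t2;
    clock_gettime(CLOCK_MONOTONIC, &t1);
    /* ... the work being timed ... */
    clock_gettime(CLOCK_MONOTONIC, &t2);
    return (t2.tv_sec - t1.tv_sec) * 1000000000LL + (t2.tv_nsec - t1.tv_nsec);
  }

  /* waiting until a point in time: pthread_cond_timedwait() takes an
     absolute deadline on the condition's clock (CLOCK_REALTIME by
     default), so it must be built from the wall clock, not from an
     interval timer */
  static int wait_with_timeout(pthread_cond_t *cond, pthread_mutex_t *mutex,
                               unsigned timeout_sec)
  {
    struct timespec abstime;
    clock_gettime(CLOCK_REALTIME, &abstime);
    abstime.tv_sec+= timeout_sec;
    return pthread_cond_timedwait(cond, mutex, &abstime);
  }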
> > === added file 'mysql-test/include/type_hrtime.inc'
> > --- mysql-test/include/type_hrtime.inc 1970-01-01 00:00:00 +0000
> > +++ mysql-test/include/type_hrtime.inc 2011-03-01 12:24:36 +0000
> > @@ -0,0 +1,94 @@
> > +select * from t1;
> > +select extract(microsecond from a + interval 100 microsecond) from t1 where a>'2010-11-12 01:02:03.456';
> > +select a from t1 where a>'2010-11-12 01:02:03.456' group by a;
>
> Shouldn't we do the above for all engines?
> If you can, please add a test of this in suite/engines/time.test
We don't have an "engines" suite like that.
Although I agree that it would be great to run this test - as well as
many others - on different engines, we don't quite have the support for
that now. I think that implementing it is not part of this WL. I'm happy
to do it as a separate task after I push this one (and finish the merges).
> > === modified file 'mysql-test/r/date_formats.result'
> > --- mysql-test/r/date_formats.result 2010-11-12 10:12:15 +0000
> > +++ mysql-test/r/date_formats.result 2011-03-08 18:41:58 +0000
> > @@ -196,16 +196,16 @@ date format datetime
> > 0003-01-02 8:11:2.123456 %Y-%m-%d %H:%i:%S.%# 0003-01-02 08:11:02
> > 03-01-02 8:11:2.123456 %Y-%m-%d %H:%i:%S.%# 2003-01-02 08:11:02
> > 2003-01-02 10:11:12 PM %Y-%m-%d %h:%i:%S %p 2003-01-02 22:11:12
> > -2003-01-02 01:11:12.12345AM %Y-%m-%d %h:%i:%S.%f%p 2003-01-02 01:11:12.123450
> > -2003-01-02 02:11:12.12345AM %Y-%m-%d %h:%i:%S.%f %p 2003-01-02 02:11:12.123450
> > -2003-01-02 12:11:12.12345 am %Y-%m-%d %h:%i:%S.%f%p 2003-01-02 00:11:12.123450
> > +2003-01-02 01:11:12.12345AM %Y-%m-%d %h:%i:%S.%f%p 2003-01-02 01:11:12
> > +2003-01-02 02:11:12.12345AM %Y-%m-%d %h:%i:%S.%f %p 2003-01-02 02:11:12
> > +2003-01-02 12:11:12.12345 am %Y-%m-%d %h:%i:%S.%f%p 2003-01-02 00:11:12
>
> Why has microseconds been removed from the output ?
>
> It looks like 'cast(str_to_date(date, format) as datetime)' removes
> all microseconds, which is an incompatible change. In the past
> cast(..., datetime) was the same as cast(..., datetime(6))
>
> <cut>
>
> I think that it's more important that cast() produces the same value
> as MySQL did than that create ... select cast() produces the same
> type as before.
>
> With the first issue, you will get lost data (without any warnings)
> for old applications if they would do things like:
>
> select * from t1 where cast(char_column as time) = "12:23:24.123456";
>
> which works with MySQL but not with this patch.
Looking at how Oracle implements this feature I can see that cast()
there works exactly as in my patch. If I do what you want, MariaDB won't
be compatible with MySQL in the future.
> > === modified file 'mysql-test/r/func_time.result'
> > --- mysql-test/r/func_time.result 2010-10-31 16:04:38 +0000
> > +++ mysql-test/r/func_time.result 2011-03-19 14:49:36 +0000
>
> As part of this, I noticed that from_unixtime() and unix_timestamp()
> are not enhanced to support microseconds.
>
> This needs to work:
>
> create table t2 (a int, b timestamp(6))
> insert into t2 (a) values(1);
> select * from t2;
> I got '1 | 2011-03-29 15:38:53.395516'
>
> select unix_timestamp(b) from t2; -> 1301402333 (no decimals)
>
> This needs to return sub seconds!
If I do that, unix_timestamp() will no longer return INT_RESULT,
which means it cannot be used as a partitioning function anymore.
A few tests use it that way. I've fixed the tests by wrapping
unix_timestamp() in cast(... as unsigned), which failed, because
cast cannot be used as a partitioning function either. I fixed that too.
Now the tests pass. But this is an incompatible change, it may break
existing applications. And fixing the application's SQL will make it
backward incompatible - partitioning by unix_timestamp() will not work
in newer MariaDB, partitioning by cast(unix_timestamp(...) as unsigned)
will work, but it will not work in MySQL or older MariaDB.
For the moment I've shelved this change and I'm not committing it until
we decide what to do.
> > === modified file 'mysql-test/r/ps.result'
> > --- mysql-test/r/ps.result 2010-10-04 08:51:26 +0000
> > +++ mysql-test/r/ps.result 2011-03-01 12:24:36 +0000
> > @@ -3040,3 +3040,18 @@ id select_type table type possible_keys
> > DEALLOCATE PREPARE stmt;
> > DROP TABLE t1;
> > End of 5.1 tests.
> > +prepare stmt from "select date('2010-10-10') between '2010-09-09' and ?";
> > +set @a='2010-11-11';
> > +execute stmt using @a;
> > +date('2010-10-10') between '2010-09-09' and ?
> > +1
> > +execute stmt using @a;
> > +date('2010-10-10') between '2010-09-09' and ?
> > +1
> > +set @a='2010-08-08';
> > +execute stmt using @a;
> > +date('2010-10-10') between '2010-09-09' and ?
> > +0
> > +execute stmt using @a;
> > +date('2010-10-10') between '2010-09-09' and ?
> > +0
>
> Please add a test that uses sub seconds (for completeness)
>
> set @a='2010-08-08 02:03:04.567890';
> execute stmt using @a;
It's not completeness, it's bloat. First, this is a test for dates.
Second, the comment says "restoring of the Item tree in BETWEEN with
dates" - this code path does not check for second_part > 0, so it is not
affected by the presence of microseconds at all.
> > === modified file 'mysql-test/r/subselect.result'
>
> <cut>
>
> > -delete from t1 where b = (select b from t1);
> > +delete from t1 where b in (select b from t1);
>
> Why the above change ?
As explained on IRC: the original one is ambiguous. It contains two
problems, and the server can return either one of two errors, both
replies being correct. I've fixed this to be unambiguous and to test
only one error condition. There is another statement in this file that
tests the other error condition.
> > +++ mysql-test/suite/funcs_1/r/is_columns_innodb.result 2011-03-24 14:55:52 +0000
> > @@ -382,333 +382,333 @@ LOAD DATA INFILE '<MYSQLTEST_VARDIR>/std
> > SELECT * FROM information_schema.columns
> > WHERE table_schema LIKE 'test%'
> > ORDER BY table_schema, table_name, column_name;
> > -TABLE_CATALOG TABLE_SCHEMA TABLE_NAME COLUMN_NAME ORDINAL_POSITION COLUMN_DEFAULT IS_NULLABLE DATA_TYPE CHARACTER_MAXIMUM_LENGTH CHARACTER_OCTET_LENGTH NUMERIC_PRECISION NUMERIC_SCALE CHARACTER_SET_NAME COLLATION_NAME COLUMN_TYPE COLUMN_KEY EXTRA PRIVILEGES COLUMN_COMMENT
> > -NULL test t1 f1 1 NULL YES char 20 20 NULL NULL latin1 latin1_swedish_ci char(20) select,insert,update,references
> ... lots of changes
>
> How about changing the SELECT * FROM information_schema.columns to
> just list the old columns in suite/funcs_1/datadict/columns.inc
>
> This gives a smaller change list and will not cause a diff if we add
> another column...
> Especially this change caused really huge diff and will likely cause
> merge conflicts sooner or later...
Changing it will also cause merge conflicts.
And I think it's good to see all the columns here.
> > === modified file 'sql/event_db_repository.cc'
> > --- sql/event_db_repository.cc 2010-01-22 09:38:21 +0000
> > +++ sql/event_db_repository.cc 2011-03-01 12:24:36 +0000
> > @@ -231,6 +231,9 @@ mysql_event_fill_row(THD *thd,
> > rs|= fields[ET_FIELD_STATUS]->store((longlong)et->status, TRUE);
> > rs|= fields[ET_FIELD_ORIGINATOR]->store((longlong)et->originator, TRUE);
> >
> > + if (!is_update)
> > + rs|= fields[ET_FIELD_CREATED]->set_time();
> > +
>
> Don't understand why this was moved from Event_db_repository::create_event().
> Probably safe but it would be nice to know the reason.
> (The bzr file comment didn't say anything about this)
To be able to write

  rs|= fields[ET_FIELD_CREATED]->set_time();

instead of

  if (fields[ET_FIELD_CREATED]->set_time())
  {
    my_error(ER_EVENT_STORE_FAILED, MYF(0),
             fields[ET_FIELD_CREATED]->field_name, 1);
    goto end;
  }

That is, to reuse the error reporting in mysql_event_fill_row() instead
of duplicating it.
> > === modified file 'sql/event_queue.cc'
> > --- sql/event_queue.cc 2008-05-09 07:43:02 +0000
> > +++ sql/event_queue.cc 2011-03-01 12:24:36 +0000
> > @@ -513,9 +513,10 @@ Event_queue::empty_queue()
> > */
> >
> > void
> > +Event_queue::dbug_dump_queue(my_time_t when)
> > {
> > #ifndef DBUG_OFF
> > + my_time_t now= when;
> > Event_queue_element *et;
> > uint i;
> > DBUG_ENTER("Event_queue::dbug_dump_queue");
> > @@ -592,14 +593,13 @@ Event_queue::get_top_for_execution_if_ti
> > thd->set_current_time(); /* Get current time */
> >
> > next_activation_at= top->execute_at;
> > + if (next_activation_at >thd->query_start())
> > {
> > /*
> > Not yet time for top event, wait on condition with
> > time or until signaled. Release LOCK_queue while waiting.
> > */
> > - struct timespec top_time;
> > - set_timespec(top_time, next_activation_at - thd->query_start());
> > + struct timespec top_time= { next_activation_at, 0 };
>
> Not sure if the above change is portable. That is why we have the
> set_timespec() defines.
>
> Any particular reason why you did the change ?
At this point we know when to wake up (next_activation_at). But because
the set_timespec() macro only takes a timeout argument (how long to
wait), the old code was subtracting the query start time from
next_activation_at, only to have the current time added back by the
set_timespec() macro.
Besides, it was subtracting the time when the query started, while
set_timespec() was adding the current time - so the activation time was
drifting slightly.
Here I thought it would be more logical to set up the structure directly,
as we know when to wake up.
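Schematically, the drift (illustrative only - the real 5.1 set_timespec()
definitions vary by platform):

  #include <time.h>

  /* The old code did, in effect:
       set_timespec(top_time, next_activation_at - query_start);
     and set_timespec() computes roughly "current time + timeout", i.e.
       now + (next_activation_at - query_start)
       = next_activation_at + (now - query_start)
     so the deadline drifts by however much time has already passed since
     query start. Setting the absolute wakeup time directly avoids the
     round trip: */
  static struct timespec absolute_deadline(time_t next_activation_at)
  {
    struct timespec top_time;
    top_time.tv_sec= next_activation_at;   /* seconds since the epoch */
    top_time.tv_nsec= 0;
    return top_time;
  }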
> I would prefer to have the original code as we know it works and
> compiles on a lot of platforms.
I'll fix it when merging into 5.3, as there we have a different set of
set_timespec* macros, and there I can set the wakeup time directly.
> However, after your definition change of my_getsystime() you have to
> redefine all the macros on my_pthread.h that calls my_getsystime() to
> call my_hrtime() instead as my_getsystime() can't anymore be used as
> a start point for timespec.
Already done. Even before my change my_getsystime() was only usable for
intervals, and the set_timespec() macros were buggy already.
> > === modified file 'sql/field.cc'
>
> > @@ -3151,13 +3141,9 @@ int Field_short::store(const char *from,
> >
> > error= get_int(cs, from, len, &rnd, UINT_MAX16, INT_MIN16, INT_MAX16);
> > store_tmp= unsigned_flag ? (int) (ulonglong) rnd : (int) rnd;
> > -#ifdef WORDS_BIGENDIAN
> > - if (table->s->db_low_byte_first)
> > - {
> > + if (ARCH_BIGENDIAN && table->s->db_low_byte_first)
> > int2store(ptr, store_tmp);
> > - }
>
> The above code can produce warnings on some compilers with the error
> 'if is always true'. I also find it much harder to read and understand!
>
> Please add back the original code and don't try to play smart games
> with the compiler!
>
> This is not part of microsecond patch and should not be done here.
>
> Same goes for all other changes of WORDS_BIGENDIAN.
>
> That said, we can probably remove the db_low_byte_first code in the
> future as no engine uses this anymore. This should however be a
> different patch!
As we agreed on IRC, I won't revert this change, but instead remove
ARCH_BIGENDIAN and db_low_byte_first in a separate patch. Will push both
together.
> > @@ -4822,136 +4682,136 @@ timestamp_auto_set_type Field_timestamp:
> > }
> > }
> >
> > +long Field_timestamp::get_timestamp(ulong *sec_part) const
> > +{
> > + ASSERT_COLUMN_MARKED_FOR_READ;
> > + *sec_part= 0;
> > + if (ARCH_BIGENDIAN && table && table->s->db_low_byte_first)
> > + return sint4korr(ptr);
> > + long tmp;
> > + longget(tmp,ptr);
> > + return tmp;
> > +}
> >
> Shouldn't this be of type 'my_time_t' for completeness?
I think 'long' is kind of safer, as these types use longget() anyway,
and being explicit about the type helps to avoid unreasonable
assumptions and expectations.
> > if (temp == 0 && sec_part == 0)
> > return(0);
>
>
> <cut>
>
> > longlong Field_timestamp::val_int(void)
> > {
> > MYSQL_TIME time_tmp;
> > THD *thd= table ? table->in_use : current_thd;
>
> Please add:
>
> ASSERT_COLUMN_MARKED_FOR_READ;
I have it in the get_timestamp() method that all val_*() methods use.
> <cut>
>
> > -void Field_timestamp::set_time()
> > +int Field_timestamp::set_time()
> > {
> > THD *thd= table ? table->in_use : current_thd;
> > - long tmp= (long) thd->query_start();
> > set_notnull();
> > - store_timestamp(tmp);
> > + store_TIME(thd->query_start(), 0);
> > + return 0;
> > }
>
> The set_time() function could be void as it can never fail.
> (Yes, I agree that it means that we have to test the type of field in
> event code, but that's something we should do at event startup anyway...)
I may do it after a merge - I've seen that Alik added this test to
MySQL.
> > +static void store_native(ulonglong num, uchar *to, uint bytes)
> > {
> > + switch(bytes) {
> > + case 1: *to= (uchar)num; break;
> > + case 2: shortstore(to, (ushort)num); break;
> > + case 3: int3store(to, num); /* Sic!*/ break;
> > + case 4: longstore(to, (ulong)num); break;
> > + case 8: longlongstore(to, num); break;
> > + default: DBUG_ASSERT(0);
> > + }
> > +}
>
> What would be even better is if each field had pointers to C
> functions with the right read and write functions; would avoid a
> switch for every call...
Mats Kindahl benchmarked and blogged about this.
A switch is about 20% slower than function pointers if the function is
big. But it is several times faster than function pointers if the
function (with the switch) is small and can be inlined.
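For illustration (a toy sketch, not the actual field code; memcpy stands
in for the shortstore/longstore macros and the 3-byte case is omitted):

  #include <assert.h>
  #include <stdint.h>
  #include <string.h>

  /* a small switch: when 'bytes' is a compile-time constant at the call
     site and the function is inlined, the compiler folds the switch away
     entirely; a per-field function pointer would avoid the switch but
     always costs an indirect, non-inlinable call */
  static inline void store_native_sketch(uint64_t num, unsigned char *to,
                                         unsigned bytes)
  {
    switch (bytes) {
    case 1: *to= (unsigned char) num;                          break;
    case 2: { uint16_t v= (uint16_t) num; memcpy(to, &v, 2); } break;
    case 4: { uint32_t v= (uint32_t) num; memcpy(to, &v, 4); } break;
    case 8: memcpy(to, &num, 8);                               break;
    default: assert(0);
    }
  }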
> > + if (!tmp)
> > + return fuzzydate & TIME_NO_ZERO_DATE;
>
> should be:
>
> > + return (fuzzydate & TIME_NO_ZERO_DATE) != 0;
>
> as the function is a bool function
No need to, because the function is a bool function. If it were my_bool,
then it would be needed.
> > + if (!ltime->month || !ltime->day)
> > + return !(fuzzydate & TIME_FUZZY_DATE);
> > + return 0;
> > +}
> > +
> > +uint32 Field_datetime_hires::pack_length() const
> > +{
> > + return 5 + sec_part_bytes[dec];
> > +}
>
> move to include file.
Can't. sec_part_bytes[] is defined here, and it's static.
> > @@ -8217,7 +8020,7 @@ Field_blob::unpack_key(uchar *to, const
> > length+= *from++ << 8;
> >
> > /* put the length into the record buffer */
> > - put_length(to, length);
> > + store_length(to, packlength, length, table->s->db_low_byte_first);
>
> Note that the old format was not depending on table->s->db_low_byte_first!
> It should always be stored in low-byte-first order!
>
> Change above to store_lowendian()
This was one of the bugs that I've fixed.
Field_blob::pack_key() was using get_length(), which depended on
table->s->db_low_byte_first. So this code was basically broken, but
it never showed up because table->s->db_low_byte_first is never used.
I've made Field_blob::unpack_key() the reverse of
Field_blob::pack_key().
Anyway, the point is moot. The ultimate fix will be the removal of
db_low_byte_first, as you suggested.
> > === modified file 'sql/item.cc'
> > --- sql/item.cc 2010-11-18 13:11:18 +0000
> > +++ sql/item.cc 2011-03-24 14:55:52 +0000
> > @@ -915,13 +982,15 @@ bool Item_string::eq(const Item *item, b
> >
> > bool Item::get_date(MYSQL_TIME *ltime,uint fuzzydate)
> > {
> > - if (result_type() == STRING_RESULT)
> > + if (field_type() == MYSQL_TYPE_TIME)
> > + fuzzydate|= TIME_TIME_ONLY;
> > + if (result_type() == STRING_RESULT || fuzzydate & TIME_TIME_ONLY)
> > {
>
> Please add a comment under what conditions or why field_type() can be
> MYSQL_TYPE_TIME and result_type() is STRING_RESULT.
>
> (Looks like this only happens for Item_field(Field_time))
It happens for any item that has field_type() == MYSQL_TYPE_TIME.
All temporal field types have result_type() == STRING_RESULT.
They have cmp_type() == TIME_RESULT.
> I am actually a bit confused by the fact that only Item_field can have
> TIME_RESULT. Why don't all time functions say they have TIME_RESULT?
They do. They have a temporal field_type() - e.g. MYSQL_TYPE_DATETIME -
and that automatically means cmp_type() == TIME_RESULT.
> > @@ -2790,10 +2843,12 @@ bool Item_param::set_from_user_var(THD *
> > case REAL_RESULT:
> > set_double(*(double*)entry->value);
> > item_type= Item::REAL_ITEM;
> > + param_type= MYSQL_TYPE_DOUBLE;
> > break;
>
> Here you set the param_type, but I wonder if there could be a problem
> that the param_type is different over two executions of a prepared
> statement.
>
> I tried some things to see if there is a problem, but couldn't see any
> behavior changes because of this, so this change may be safe.
>
> What did this change fix?
> (At at least a code comment why param_type is reset here)
This made cmp_type() return the correct value, one that corresponds to
the value type (to item_type, that is).
> > @@ -7009,6 +7047,21 @@ int stored_field_cmp_to_item(THD *thd, F
> > field_val= field->val_decimal(&field_buf);
> > return my_decimal_cmp(item_val, field_val);
> > }
>
> Add comment:
>
> /*
> We have to check field->cmp_type() here as res_type is TIME_RESULT
> if either field or item is of type TIME_RESULT
> */
No, the reason is that result_type() - and thus res_type - can never be
TIME_RESULT. Remember, TIME_RESULT can only be returned in cmp_type(),
never in result_type(). Yet.
I've added a comment.
...
> > + }
> > + return my_time_compare(&field_time, &item_time);
> > + }
>
> Looking at the code, it would probably be simpler if we replaced all
> the 'if (res_type)' with a switch.
Right. But it has to wait until we fix result_type() to return
TIME_RESULT.
> > @@ -7048,6 +7101,8 @@ Item_cache* Item_cache::get_cache(const
> > return new Item_cache_str(item);
> > case ROW_RESULT:
> > return new Item_cache_row();
> > + case TIME_RESULT:
> > + return new Item_cache_int(MYSQL_TYPE_DATETIME);
>
> Shouldn't this be a double or decimal as the time can contain microseconds?
>
> Looks like a potential bug!
No :)
It's an integer that stores a packed datetime value.
This is the value we pass around and compare with.
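Conceptually, something like this (an illustrative packing, not
necessarily the exact format the patch uses):

  #include <stdint.h>

  /* pack a datetime with microseconds into one int64 so that comparing
     the integers compares the datetimes */
  static int64_t pack_datetime_sketch(int year, int month, int day,
                                      int hour, int minute, int second,
                                      int microsecond)
  {
    int64_t v= year;
    v= v * 13 + month;           /* month  0..12 */
    v= v * 32 + day;             /* day    0..31 */
    v= v * 24 + hour;
    v= v * 60 + minute;
    v= v * 60 + second;
    v= v * 1000000 + microsecond;
    return v;
  }

Two such values order the same way as the datetimes they encode, which is
all the comparison code needs from the cached constant.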
> > === modified file 'sql/item.h'
> > --- sql/item.h 2010-07-30 13:35:06 +0000
> > +++ sql/item.h 2011-03-24 11:30:03 +0000
> > @@ -569,7 +569,8 @@ class Item {
> > virtual bool send(Protocol *protocol, String *str);
> > virtual bool eq(const Item *, bool binary_cmp) const;
> > virtual Item_result result_type() const { return REAL_RESULT; }
> > - virtual Item_result cast_to_int_type() const { return result_type(); }
> > + virtual Item_result cmp_type() const;
>
> Please add documentation what's the difference between cmp_type() and
> result_type() and when one should choose which.
Easy: cmp_type() is how the value should be compared, result_type() is
how it should be returned.
But ok.
> I also noticed that you have a Field->cmp_type() but there is no
> Item_field->cmp_type() mapped to Field->cmp_type().
> As I assume that for all fields the following should hold:
>
> Item_field->cmp_type() == Item_field->field->cmp_type()
Yes.
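Schematically (a much simplified sketch, not the real Item hierarchy):

  enum Item_result { STRING_RESULT, REAL_RESULT, INT_RESULT,
                     ROW_RESULT, DECIMAL_RESULT, TIME_RESULT };

  struct Item_sketch
  {
    virtual ~Item_sketch() {}
    /* how the value is returned to the caller */
    virtual Item_result result_type() const { return REAL_RESULT; }
    /* how the value should be compared */
    virtual Item_result cmp_type() const { return result_type(); }
  };

  /* a temporal item: returned as a string, compared as a datetime;
     an Item_field would simply forward cmp_type() to its Field */
  struct Item_datetime_sketch : public Item_sketch
  {
    Item_result result_type() const { return STRING_RESULT; }
    Item_result cmp_type() const    { return TIME_RESULT; }
  };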
> > @@ -3031,7 +3022,7 @@ class Item_cache_int: public Item_cache
> > protected:
> > longlong value;
> > public:
> > - Item_cache_int(): Item_cache(),
> > + Item_cache_int(): Item_cache(MYSQL_TYPE_LONGLONG),
> > value(0) {}
> > Item_cache_int(enum_field_types field_type_arg):
> > Item_cache(field_type_arg), value(0) {}
> > @@ -3043,8 +3034,13 @@ class Item_cache_int: public Item_cache
> > String* val_str(String *str);
> > my_decimal *val_decimal(my_decimal *);
> > enum Item_result result_type() const { return INT_RESULT; }
> > - bool result_as_longlong() { return TRUE; }
> > bool cache_value();
> > + Item *clone_item()
> > + {
> > + Item_cache_int *item= new Item_cache_int(cached_field_type);
> > + item->store(this, value);
> > + return item;
> > + }
> > };
>
> Don't understand why Item_cache_int() needs a clone_item when
> Item_cache_real() or Item_cache_decimal() doesn't have one.
>
> If this was needed, please check if the other Item_cache_xxx()
> functions needs it.
It only needed it because of this special usage, as a holder of a packed
datetime value. get_datetime_value() creates it to cache a converted
constant datetime. The existence of the clone_item() method tells the
server that this item is a proper constant and there's no need to
optimize it further. Other Item_cache_* objects are never used like that.
> > === modified file 'sql/item_cmpfunc.cc'
> <cut>
>
> > @@ -1026,9 +761,14 @@ Item** Arg_comparator::cache_converted_c
> > Item **cache_item,
> > Item_result type)
> > {
> > - /* Don't need cache if doing context analysis only. */
> > + /*
> > + Don't need cache if doing context analysis only.
> > + Also, get_datetime_value creates Item_cache internally.
> > + Unless fixed, we should not do it here.
> > + */
> > if (!thd_arg->is_context_analysis_only() &&
> > - (*value)->const_item() && type != (*value)->result_type())
> > + (*value)->const_item() && type != (*value)->result_type() &&
> > + type != TIME_RESULT)
> > {
> > Item_cache *cache= Item_cache::get_cache(*value, type);
> > cache->setup(*value);
>
> Why don't we cache it there instead of doing it in get_datetime_value()?
> This would save us one if and 3 tests for every call to
> get_datetime_value().
>
> You could here call get_datetime_value() and use this value in the
> cache. If this doesn't easily work (as indicated on IRC), at least
> expand the comment to explain the issue so that we can later try to
> fix it in a better way.
>
> > + if (args[i]->cmp_type() == TIME_RESULT)
> > {
> > - datetime_found= TRUE;
> > + if (args[i]->field_type() != MYSQL_TYPE_TIME ||
> > + (args[i]->field_type() == MYSQL_TYPE_TIME && compare_as_dates==0))
> > + compare_as_dates= args[i];
> > continue;
> > }
> > }
> > }
>
> I find it strange that you set compare_as_dates to the last
> non-MYSQL_TYPE_TIME column, as this is used to give warnings and the
> warning may in theory come from some other field.
> I tried to make a test case to show the problem but didn't succeed :(
Yes, the warning may come from some other field. But the
'compare_as_dates' field affects how we will interpret that other field.
For example, if you compare a string and a time - the string will be
parsed as a time value, and the warning will say "incorrect TIME value".
If you compare a string and a datetime - the warning will say "incorrect
DATETIME value". If you compare a string, a time, and a datetime (this
is BETWEEN, remember? Three operands), the string will be parsed as a
DATETIME.
> > === modified file 'sql/item_func.cc'
> > --- sql/item_func.cc 2010-11-23 13:08:10 +0000
> > +++ sql/item_func.cc 2011-03-19 13:54:46 +0000
> <cut>
> > +bool Item_func_min_max::get_date(MYSQL_TIME *ltime, uint fuzzy_date)
> > {
> > longlong UNINIT_VAR(min_max);
> > + DBUG_ASSERT(fixed == 1);
> > + if (!compare_as_dates)
> > + return Item_func::get_date(ltime, fuzzy_date);
>
> Add a comment in which case the above may not be true.
But it's a normal situation.
For all val_*() methods we consider all cases.
You don't want me to add a similar comment for the INT_RESULT case in
val_str(), for example, do you? And so on for all other combinations.
> > === modified file 'sql/item_timefunc.cc'
> > @@ -578,6 +464,10 @@ static bool extract_date_time(DATE_TIME_
> > l_time->minute > 59 || l_time->second > 59)
> > goto err;
> >
> > + if ((fuzzy_date & TIME_NO_ZERO_DATE) &&
> > + (l_time->year == 0 || l_time->month == 0 || l_time->day == 0))
> > + goto err;
> > +
>
> This is something we didn't check before. If this is a compatibility
> issue, you should list it in your matrix.
We did, but it used to be done by the caller.
> > @@ -2050,42 +1845,14 @@ bool Item_func_from_unixtime::get_date(M
> > void Item_func_convert_tz::fix_length_and_dec()
> > {
> > collation.set(&my_charset_bin);
> > - decimals= 0;
> > - max_length= MAX_DATETIME_WIDTH*MY_CHARSET_BIN_MB_MAXLEN;
> > + max_length= MAX_DATETIME_WIDTH;
>
> Should probably be:
> max_length= MAX_DATETIME_WIDTH*MY_CHARSET_BIN_MB_MAXLEN;
>
> This is how all the other functions are defined.
MY_CHARSET_BIN_MB_MAXLEN is always 1. It is assumed and hardcoded in so
many places in the code (buffer sizes, memory allocations, strlen usage,
byte-by-byte string processing, and so on) that it's practically
impossible to change. And changing it makes no sense either.
I've asked Bar; he said that this define (MY_CHARSET_BIN_MB_MAXLEN)
was removed from the MySQL sources some time ago, and that it's
meaningless and should never have been added in the first place.
> > + if ((fuzzy_date & TIME_NO_ZERO_DATE) && (seconds == 0) && (microseconds == 0))
> > + goto null_date;
>
> TIME_NO_ZERO_DATE means that neither day nor month should be 0
> As this is always the case for time_diff() we could merge this test
> with the above 'if (fuzzy_date & TIME_NO_ZERO_IN_DATE)'
I think that TIME_NO_ZERO_DATE prevents '0000-00-00 00:00:00.000000'
only. What you're talking about is TIME_NO_ZERO_IN_DATE.
> For some examples where this now goes strange:
>
> dayofyear(timediff('0000-00-01 00:00:05', '0000-00-01 00:00:05')) -> NULL
> dayofyear(timediff('0000-00-01 00:00:05', '0000-00-01 00:00:04')) -> 0
> dayofyear(cast('0000-00-01 00:00:05' as datetime)) -> NULL
Which means that dayofyear() needs to require both TIME_NO_ZERO_DATE and
TIME_NO_ZERO_IN_DATE. I'll fix that.
Alternatively, we can say that TIME_NO_ZERO_DATE implicitly includes
TIME_NO_ZERO_IN_DATE.
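In code terms, the distinction I mean (illustrative only; it assumes the
server's MYSQL_TIME struct and the fuzzydate flag names from the patch):

  /* TIME_NO_ZERO_DATE rejects the all-zero date '0000-00-00';
     TIME_NO_ZERO_IN_DATE rejects a zero month or day inside an
     otherwise non-zero date, e.g. '2010-00-01' */
  static int reject_zero_dates(const MYSQL_TIME *lt, unsigned fuzzydate)
  {
    int all_zero= (lt->year == 0 && lt->month == 0 && lt->day == 0);
    if ((fuzzydate & TIME_NO_ZERO_DATE) && all_zero)
      return 1;                                    /* reject */
    if ((fuzzydate & TIME_NO_ZERO_IN_DATE) && !all_zero &&
        (lt->month == 0 || lt->day == 0))
      return 1;                                    /* reject */
    return 0;                                      /* acceptable */
  }

With that, requiring both flags in dayofyear() covers both the all-zero
and the partly-zero cases.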
> > === modified file 'sql/log_event.cc'
>
> > @@ -1000,16 +1004,27 @@ class Log_event
> > { return 0; }
> > virtual bool write_data_body(IO_CACHE* file __attribute__((unused)))
> > { return 0; }
> > - inline time_t get_time()
> > + inline my_time_t get_time()
> > {
> > THD *tmp_thd;
> > if (when)
> > return when;
> > if (thd)
> > + {
> > + when= thd->start_time;
> > + when_sec_part= thd->start_time_sec_part;
> > + return when;
> > + }
>
> thd should always be true. Can you at some point in time check when
> this is not the case and if needed set thd for the class!
>
> > if ((tmp_thd= current_thd))
>
> current_thd() should never fail. If it can, there should be a comment
> in the code about this!
This is your code :)
http://bazaar.launchpad.net/~maria-captains/maria/5.1/revision/2476.293.2
And there was a comment that disappeared in that change.
It used to be

  THD *thd= current_thd;
  /* thd will only be 0 here at time of log creation */
  now= thd ? thd->start_time : my_time(0);
> > === modified file 'sql/mysql_priv.h'
>
> <cut>
>
> > +/*
> > + to unify the code that differs only in the argument passed to the
> > + error message (string vs. number)
> > +
> > + We pass this container around, and only convert the number
> > + to a string when necessary.
> > +*/
>
> Good idea, but please consider using a better name for the string.
> This is more a value wrapper than a string.
Will fix in 5.5
(5.5 has a class where this functionality naturally belongs to)
> > @@ -3045,10 +3045,10 @@ void sys_var_microseconds::set_default(T
> > uchar *sys_var_microseconds::value_ptr(THD *thd, enum_var_type type,
> > LEX_STRING *base)
> > {
> > - thd->tmp_double_value= (double) ((type == OPT_GLOBAL) ?
> > + thd->sys_var_tmp.double_value= (double) ((type == OPT_GLOBAL) ?
> > global_system_variables.*offset :
> > thd->variables.*offset) / 1000000.0;
>
> Change 100000.0 to same define constant as 1e6
No. I have a constant for "precision of my_hrtime_t"; it's 1000000,
which means "microseconds", but it does not have to be. We can have
my_hrtime_t storing, for example, hundreds of nanoseconds.
I have a constant for "precision of second_part"; it's also 1000000,
which means second_part is in microseconds, but it doesn't have to be
either.
But I won't introduce a constant for "the number of microseconds in a
second".
> > === modified file 'sql/sql_select.cc'
>
> > @@ -9455,7 +9445,7 @@ test_if_equality_guarantees_uniqueness(I
> > {
> > return r->const_item() &&
> > /* elements must be compared as dates */
> > - (Arg_comparator::can_compare_as_dates(l, r, 0) ||
> > + (l->cmp_type() == TIME_RESULT ||
> > /* or of the same result type */
> > (r->result_type() == l->result_type() &&
> > /* and must have the same collation if compared as strings */
>
> Add a comment:
>
> /*
> It's enough to test 'l' for TIME_RESULT as only 'l' can be a field
> in this context.
> */
No, that was not why I changed it.
It was, in fact, a bug. Cases where 'r' had TIME_RESULT but 'l' did not
could cause test_if_equality_guarantees_uniqueness() to return false
positives. Like:
create table t1 (a varchar(100));
insert t1 values ('2010-01-02'), ('2010-1-2'), ('20100102');
select distinct a from t1 where a=date('2010-01-02');
This fails in 5.1 and works correctly in 5.1-micro.
Regards,
Sergei