
maria-developers team mailing list archive

Re: Sachin weekly report

 

Hello Sergei!
Please review commit 71f9069 onward; I have changed the
mysql_prepare_alter_table function.
Regards,
Sachin

On Sat, Aug 13, 2016 at 4:31 PM, sachin setiya <sachinsetia1001@xxxxxxxxx>
wrote:

> Hello Sergei,
>
> Please review the code
>
> Work left
>
>
> 1. WHERE works on complex queries including joins, IN, and subqueries, but it
> fails on this query and I do not know why:
>
> CREATE table t1 (a blob unique not null , b blob unique not null );
> select * from t1 where a=b;
>
> 2. ALTER works, but I am thinking of changing the code so that it
> becomes less complex.
>
> 3. DELETE and UPDATE with a WHERE clause are optimized in complex cases involving
> joins and subqueries, but the normal case won't work,
> because it uses functions like sql_quick_select which I have not optimized.
>
> 4. Need to write test cases for UPDATE.
>
> 5. Tried to make a prototype for the update problem.
> In this prototype, whenever an update hits a duplicate key,
> the record is cached on a stack,
> and this continues until some record's update succeeds; then the succeeded
> record's old value is compared with the top element of the key stack;
> if it matches, we pop the stack and apply that deferred update. The update fails
> when there is some element left in the stack.
> The prototype is at https://github.com/SachinSetiya/server/tree/up_proto_2
> It works on a table like:
> create table t1 (a int unique , b int );
> insert into t1 values(1,1),(2,2),(3,3),(4,4);
> update t1 set a= a+1;
> Currently this works only for a sorted key, but it could be made to work for an
> unsorted key by sorting the records in the stack with respect to the key.
> To extend this to multiple keys we can use one key stack per key;
> a toy sketch of the idea follows.
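>
> A minimal, self-contained sketch on plain ints (not the server code; the
> names and structure are only to illustrate the caching and cascading):
>
>   #include <cstdio>
>   #include <set>
>   #include <utility>
>   #include <vector>
>
>   int main()
>   {
>     std::set<int> uniq;                        // current values of the unique key
>     for (int v= 1; v <= 4; v++)
>       uniq.insert(v);
>     int rows[4]= {1, 2, 3, 4};                 // rows to update: a= a+1
>     std::vector<std::pair<int,int> > deferred; // stack of (old, new) that hit a dup key
>
>     for (int i= 0; i < 4; i++)
>     {
>       int old_val= rows[i], new_val= old_val + 1;
>       if (uniq.count(new_val))                 // duplicate key: cache it on the stack
>       {
>         deferred.push_back(std::make_pair(old_val, new_val));
>         continue;
>       }
>       uniq.erase(old_val);                     // this update succeeds
>       uniq.insert(new_val);
>       int freed= old_val;                      // the old value is free now
>       while (!deferred.empty() && deferred.back().second == freed)
>       {                                        // stack top waited for exactly this value
>         std::pair<int,int> d= deferred.back();
>         deferred.pop_back();
>         uniq.erase(d.first);
>         uniq.insert(d.second);
>         freed= d.first;                        // cascades to the next deferred row
>       }
>     }
>     printf(deferred.empty() ? "update ok\n" : "update fails: stack not empty\n");
>     return 0;
>   }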
>
>
> On 07/11/2016 11:41 PM, Sergei Golubchik wrote:
>
>> Hi, Sachin!
>>
>> Here's a review of your commits from
>> b69e141a32 to 674eb4c4277.
>>
>> Last time I reviewed up to b69e141a32,
>> so next time I'll review from 674eb4c4277 and up.
>>
>> Thanks for your work!
>>
>> diff --git a/mysql-test/r/long_unique.result
>>> b/mysql-test/r/long_unique.result
>>> new file mode 100644
>>> index 0000000..fc6ff12
>>> --- /dev/null
>>> +++ b/mysql-test/r/long_unique.result
>>> @@ -0,0 +1,160 @@
>>> +create table z_1(abc blob unique);
>>> +insert into z_1 values(112);
>>> +insert into z_1 values('5666666666666666666666
>>> 666666666666666666666666666666666666666666666666666666666666
>>> 666666666666666666666666666666666666666666666666666666666666
>>> 666666666666666666666666666666666666666666666666666666666666
>>> 666666666666666666666666666666666666666666666666666666666666
>>> 666666666666666666666666666666');
>>> +insert into z_1 values('sachin');
>>> +insert into z_1 values('sachin');
>>> +ERROR 23000: Can't write; duplicate key in table 'z_1'
>>> +select * from z_1;
>>> +abc
>>> +112
>>> +56666666666666666666666666666666666666666666666666666666666
>>> 666666666666666666666666666666666666666666666666666666666666
>>> 666666666666666666666666666666666666666666666666666666666666
>>> 666666666666666666666666666666666666666666666666666666666666
>>> 66666666666666666666666666666666666666666666666666666
>>> +sachin
>>> +select db_row_hash_1 from z_1;
>>> +ERROR 42S22: Unknown column 'db_row_hash_1' in 'field list'
>>> +desc z_1;
>>> +Field  Type    Null    Key     Default Extra
>>> +abc    blob    YES             NULL
>>> +select * from information_schema.columns where table_schema='mtr' and
>>> table_name='z_1';
>>> +TABLE_CATALOG  TABLE_SCHEMA    TABLE_NAME      COLUMN_NAME
>>>  ORDINAL_POSITION        COLUMN_DEFAULT  IS_NULLABLE     DATA_TYPE
>>>  CHARACTER_MAXIMUM_LENGTH        CHARACTER_OCTET_LENGTH  NUMERIC_PRECISION
>>>      NUMERIC_SCALE   DATETIME_PRECISION      CHARACTER_SET_NAME
>>> COLLATION_NAME  COLUMN_TYPE     COLUMN_KEY      EXTRA   PRIVILEGES
>>> COLUMN_COMMENT
>>> +create table tst_1(xyz blob unique , x2 blob unique);
>>> +insert into tst_1 values(1,22);
>>> +insert into tst_1 values(2,22);
>>> +ERROR 23000: Can't write; duplicate key in table 'tst_1'
>>> +select * from tst_1;
>>> +xyz    x2
>>> +1      22
>>> +select db_row_hash_1 from tst_1;
>>> +ERROR 42S22: Unknown column 'db_row_hash_1' in 'field list'
>>> +select db_row_hash_2 from tst_1;
>>> +ERROR 42S22: Unknown column 'db_row_hash_2' in 'field list'
>>> +select db_row_hash_1,db_row_hash_2 from tst_1;
>>> +ERROR 42S22: Unknown column 'db_row_hash_1' in 'field list'
>>> +desc tst_1;
>>> +Field  Type    Null    Key     Default Extra
>>> +xyz    blob    YES             NULL
>>> +x2     blob    YES             NULL
>>> +select * from information_schema.columns where table_schema='mtr' and
>>> table_name='tst_1';
>>> +TABLE_CATALOG  TABLE_SCHEMA    TABLE_NAME      COLUMN_NAME
>>>  ORDINAL_POSITION        COLUMN_DEFAULT  IS_NULLABLE     DATA_TYPE
>>>  CHARACTER_MAXIMUM_LENGTH        CHARACTER_OCTET_LENGTH  NUMERIC_PRECISION
>>>      NUMERIC_SCALE   DATETIME_PRECISION      CHARACTER_SET_NAME
>>> COLLATION_NAME  COLUMN_TYPE     COLUMN_KEY      EXTRA   PRIVILEGES
>>> COLUMN_COMMENT
>>> +create table t1 (empnum smallint, grp int);
>>> +create table t2 (empnum int, name char(5));
>>> +insert into t1 values(1,1);
>>> +insert into t2 values(1,'bob');
>>> +create view v1 as select * from t2 inner join t1 using (empnum);
>>> +select * from v1;
>>> +empnum name    grp
>>> +1      bob     1
>>>
>> what is this test for? (with t1, t2, v1)
>>
> Removed. This was one of the InnoDB tests that was failing, so I had added it,
> but now it is removed.
>
>
>> +create table c_1(abc blob unique, db_row_hash_1 int unique);
>>> +desc c_1;
>>> +Field  Type    Null    Key     Default Extra
>>> +abc    blob    YES             NULL
>>> +db_row_hash_1  int(11) YES     UNI     NULL
>>> +insert into c_1 values(1,1);
>>> +insert into c_1 values(1,2);
>>> +ERROR 23000: Can't write; duplicate key in table 'c_1'
>>> +create table c_2(abc blob unique,xyz blob unique, db_row_hash_2
>>> +int,db_row_hash_1 int unique);
>>> +desc c_2;
>>> +Field  Type    Null    Key     Default Extra
>>> +abc    blob    YES             NULL
>>> +xyz    blob    YES             NULL
>>> +db_row_hash_2  int(11) YES             NULL
>>> +db_row_hash_1  int(11) YES     UNI     NULL
>>> +insert into c_2 values(1,1,1,1);
>>> +insert into c_2 values(1,23,4,5);
>>> +ERROR 23000: Can't write; duplicate key in table 'c_2'
>>> +create table u_1(abc int primary key , xyz blob unique);
>>> +insert into u_1 values(1,2);
>>> +insert into u_1 values(2,3);
>>> +update u_1 set xyz=2 where abc=1;
>>> +alter table z_1 drop db_row_hash_1;
>>> +ERROR 42000: Can't DROP 'db_row_hash_1'; check that column/key exists
>>> +alter table c_1 drop db_row_hash_2;
>>> +ERROR 42000: Can't DROP 'db_row_hash_2'; check that column/key exists
>>> +alter table c_1 drop db_row_hash_1;
>>> +alter table z_1 add column  db_row_hash_1 int unique;
>>> +show create table z_1;
>>> +Table  Create Table
>>> +z_1    CREATE TABLE `z_1` (
>>> +  `abc` blob,
>>> +  `db_row_hash_1` int(11) DEFAULT NULL,
>>> +  UNIQUE KEY `db_row_hash_1` (`db_row_hash_1`),
>>> +  UNIQUE KEY `abc`(`abc`)
>>> +) ENGINE=MyISAM DEFAULT CHARSET=latin1
>>> +insert into z_1 values('sachin',1);
>>> +ERROR 23000: Can't write; duplicate key in table 'z_1'
>>> +alter table c_1 add column db_row_hash_2 int;
>>> +show create table c_1;
>>> +Table  Create Table
>>> +c_1    CREATE TABLE `c_1` (
>>> +  `abc` blob,
>>> +  `db_row_hash_2` int(11) DEFAULT NULL,
>>> +  UNIQUE KEY `abc`(`abc`)
>>> +) ENGINE=MyISAM DEFAULT CHARSET=latin1
>>> +alter table z_1 change db_row_hash_2 temp int;
>>> +ERROR 42S22: Unknown column 'db_row_hash_2' in 'z_1'
>>> +alter table c_1 change db_row_hash_3 temp int;
>>> +ERROR 42S22: Unknown column 'db_row_hash_3' in 'c_1'
>>> +alter table c_1 change db_row_hash_4 temp int;
>>> +ERROR 42S22: Unknown column 'db_row_hash_4' in 'c_1'
>>> +alter table c_2 change db_row_hash_1 temp int;
>>> +create table c_3(abc blob unique, xyz blob);
>>> +alter table c_3 add index(db_row_hash_1);
>>> +ERROR 42000: Key column 'DB_ROW_HASH_1' doesn't exist in table
>>> +create table i_1(x1 int );
>>> +alter table i_1 add column x2 blob unique;
>>> +insert into i_1 value(1,1);
>>> +insert into i_1 value(2,1);
>>> +ERROR 23000: Can't write; duplicate key in table 'i_1'
>>> +alter table i_1 drop index x2;
>>> +insert into i_1 value(2,1);
>>> +create table i_2(abc blob);
>>> +insert into i_2 values(1);
>>> +alter table i_2 add  unique index(abc);
>>> +select * from i_2;
>>> +abc
>>> +1
>>> +insert into i_2 values(22);
>>> +insert into i_2 values(22);
>>> +ERROR 23000: Can't write; duplicate key in table 'i_2'
>>> +alter table i_2 drop key abc;
>>> +insert into i_2 values(22);
>>> +drop table i_2;
>>> +create table i_2(abc blob);
>>> +insert into i_2 values(1);
>>> +alter table i_2 add constraint  unique(abc);
>>> +select * from i_2;
>>> +abc
>>> +1
>>> +insert into i_2 values(22);
>>> +insert into i_2 values(22);
>>> +ERROR 23000: Can't write; duplicate key in table 'i_2'
>>> +alter table i_2 drop key abc;
>>> +insert into i_2 values(22);
>>> +create table di_1(abc blob , xyz int , index(abc,xyz));
>>>
>> what is this test for?
>>
> Earlier this would create a long unique index, but now it
> will create a truncated key instead.
>
>
>> +show create table di_1;
>>> +Table  Create Table
>>> +di_1   CREATE TABLE `di_1` (
>>> +  `abc` blob,
>>> +  `xyz` int(11) DEFAULT NULL,
>>> +  INDEX `abc_xyz`(`abc`,`xyz`)
>>> +) ENGINE=MyISAM DEFAULT CHARSET=latin1
>>> +insert into di_1 values(1,1);
>>> +insert into di_1 values(1,1);
>>> +insert into di_1 values(2,1);
>>> +create table di_2 (abc blob);
>>> +insert into di_2 values(1);
>>> +insert into di_2 values(1);
>>> +alter table di_2 add index(abc);
>>> +show create table di_2;
>>> +Table  Create Table
>>> +di_2   CREATE TABLE `di_2` (
>>> +  `abc` blob,
>>> +  INDEX `abc`(`abc`)
>>> +) ENGINE=MyISAM DEFAULT CHARSET=latin1
>>> +insert into di_2 values(1);
>>> +alter table di_2 drop index abc;
>>> +show create table di_2;
>>> +Table  Create Table
>>> +di_2   CREATE TABLE `di_2` (
>>> +  `abc` blob
>>> +) ENGINE=MyISAM DEFAULT CHARSET=latin1
>>>
>> please add tests for the following:
>>
>> 1. blob with a couple of really long lines. like
>>
>>   insert t1 values(repeat('x', 500*1024*1024));
>>   insert t1 values(repeat('y', 500*1024*1024));
>>   insert t1 values(repeat('x', 500*1024*1024));
>>
>> just don't do select * after that :)
>>
>> 2. TEXT, where collation matters and equal strings aren't always
>> bytewise identical:
>>
>>   create t1 (a TEXT CHARSET latin1 COLLATE latin1_german2_ci);
>>
>>   insert t1 values ('ae');
>>   insert t1 values ('AE');
>>   insert t1 values ('Ä');
>>
>> last two inserts must be an error (I suspect). If not sure, you can always
>> test it, for example, with SELECT 'ae' = 'AE' COLLATE latin1_german2_ci;
>>
>> 3. multi-column constraints, like
>>
>>   create t1 (a blob, b int, unique (a,b));
>>   create t2 (a blob, b blob, unique (a,b));
>>   create t3 (a varchar(2048), b varchar(2048), unique(a,b));
>>
>> and also test inserts and errors with this multi-column constraints,
>> not only create table statement.
>>
> Done
>
>> diff --git a/mysql-test/t/hidden_columns.test
>>> b/mysql-test/t/hidden_columns.test
>>> new file mode 100644
>>> index 0000000..f0fed41
>>> --- /dev/null
>>> +++ b/mysql-test/t/hidden_columns.test
>>>
>> where's the hidden_columns.result?
>>
> Sorry, added.
>
>> diff --git a/mysql-test/t/hidden_field.test
>>> b/mysql-test/t/hidden_field.test
>>> new file mode 100644
>>> index 0000000..3806a92
>>> --- /dev/null
>>> +++ b/mysql-test/t/hidden_field.test
>>>
>> where's the hidden_field.result?
>> how is it different from hidden_columns.test, why not to put all tests
>> there?
>>
>> I mean the difference between "hidden column" and "hidden field"
>> is really subtle. Either rename one of the test files or,
>> combine them, or, I think, the best way would be to move tests from
>> hidden_columns.test to long_unique.test
>>
> Done; now there is only one file, hidden_field.
>
>>
>> @@ -0,0 +1,35 @@
>>> +create table h_1(abc int primary key, xyz int hidden);
>>> +create table h_2(a1 int hidden);
>>>
>> This, probably, should not be allowed. I mean, what 'SELECT * FROM h_2'
>> will do?
>> Let's return an error ER_TABLE_MUST_HAVE_COLUMNS in this case.
>>
> Done
>
>>
>> +--error 1064
>>>
>> try to use error names, not error numbers (see them in
>> include/mysqld_error.h)
>>
> okay
>
>
>> +create table h_3(a1 blob,hidden(a1));
>>> +--error 1981
>>> +create table h_4(a1 int primary key hidden ,a2 int unique hidden , a3
>>> blob,a4
>>> +int not null hidden unique);
>>> +--error 1981
>>> +create table h_5(abc int not null hidden);
>>> +#
>>> +create table t1(abc int hidden);
>>> +#should automatically add null
>>> +insert into t1 values();
>>> +insert into t1 values();
>>> +insert into t1 values();
>>> +insert into t1 values();
>>> +--error 1096
>>> +select * from t1;
>>>
>> see, the error doesn't really fit here. 1096 is ER_NO_TABLES_USED,
>> but the table is used here. Let's just disallow tables with no
>> visible columns.
>>
>> +select abc from t1;
>>> +
>>> +create table t2(abc int primary key);
>>> +insert into t2 values();
>>> +select * from t2;
>>> +--error 1064
>>> +create table sdsdsd(a int , b int, hidden(a,b));
>>> +
>>> +create table hid_4(a int,abc int as (a mod 3) virtual hidden);
>>> +desc hid_4;
>>> +create table ht_5(abc int primary key hidden auto_increment, a int);
>>> +desc ht_5;
>>> +insert into ht_5 values(1);
>>> +insert into ht_5 values(2);
>>> +insert into ht_5 values(3);
>>> +select * from ht_5;
>>> +select abc,a from ht_5;
>>>
>> if you'd have hidden_field.result you'd notice that this test
>> doesn't clean up after itself :)
>>
>> diff --git a/mysql-test/t/long_unique.test b/mysql-test/t/long_unique.tes
>>> t
>>> new file mode 100644
>>> index 0000000..fc8bcf5
>>> --- /dev/null
>>> +++ b/mysql-test/t/long_unique.test
>>> @@ -0,0 +1,114 @@
>>> +create table z_1(abc blob unique);
>>> +insert into z_1 values(112);
>>> +insert into z_1 values('5666666666666666666666
>>> 666666666666666666666666666666666666666666666666666666666666
>>> 666666666666666666666666666666666666666666666666666666666666
>>> 666666666666666666666666666666666666666666666666666666666666
>>> 666666666666666666666666666666666666666666666666666666666666
>>> 666666666666666666666666666666');
>>>
>> better use repeat() function in these cases. makes test cases smaller
>> and more readable.
>>
> okay
>
>>
>> +insert into z_1 values('sachin');
>>> +--error 1022
>>> +insert into z_1 values('sachin');
>>> +select * from z_1;
>>> +--error 1054
>>> +select db_row_hash_1 from z_1;
>>> +desc z_1;
>>> +select * from information_schema.columns where table_schema='mtr' and
>>> table_name='z_1';
>>>
>> did you read results of your tests? wasn't it suspicious that
>> this select returns nothing? it's because the table is in the 'test'
>> schema, not in 'mtr'. Please, fix.
>>
> changed
>
>
>> +create table tst_1(xyz blob unique , x2 blob unique);
>>> +insert into tst_1 values(1,22);
>>> +--error 1022
>>> +insert into tst_1 values(2,22);
>>> +select * from tst_1;
>>> +--error 1054
>>> +select db_row_hash_1 from tst_1;
>>> +--error 1054
>>> +select db_row_hash_2 from tst_1;
>>> +--error 1054
>>> +select db_row_hash_1,db_row_hash_2 from tst_1;
>>> +desc tst_1;
>>> +select * from information_schema.columns where table_schema='mtr' and
>>> table_name='tst_1';
>>>
>> same
>>
>> +create table t1 (empnum smallint, grp int);
>>> +create table t2 (empnum int, name char(5));
>>> +insert into t1 values(1,1);
>>> +insert into t2 values(1,'bob');
>>> +create view v1 as select * from t2 inner join t1 using (empnum);
>>> +select * from v1;
>>>
>> what was the point of this t1/t2/v1 test?
>>
>> +#test on create table with column db_row_hash
>>> +create table c_1(abc blob unique, db_row_hash_1 int unique);
>>> +desc c_1;
>>> +insert into c_1 values(1,1);
>>> +--error 1022
>>> +insert into c_1 values(1,2);
>>> +create table c_2(abc blob unique,xyz blob unique, db_row_hash_2
>>> +int,db_row_hash_1 int unique);
>>> +desc c_2;
>>> +insert into c_2 values(1,1,1,1);
>>> +--error 1022
>>> +insert into c_2 values(1,23,4,5);
>>> +#update table
>>> +create table u_1(abc int primary key , xyz blob unique);
>>> +insert into u_1 values(1,2);
>>> +insert into u_1 values(2,3);
>>> +update u_1 set xyz=2 where abc=1;
>>>
>> this updates the row to its old values. test the update to new values:
>> to non-duplicate and to duplicate values.
>>
>> +#alter table drop column
>>> +--error 1091
>>> +alter table z_1 drop db_row_hash_1;
>>> +--error 1091
>>> +alter table c_1 drop db_row_hash_2;
>>> +alter table c_1 drop db_row_hash_1;
>>> +#add column
>>> +alter table z_1 add column  db_row_hash_1 int unique;
>>> +show create table z_1;
>>> +#insert should not break
>>> +--error 1022
>>> +insert into z_1 values('sachin',1);
>>>
>> good
>>
>> +alter table c_1 add column db_row_hash_2 int;
>>>
>> test also adding and removing unique index for blobs.
>> 1. adding when a column has no duplicates
>> 2. adding when a column has duplicates
>> 3. adding when a column has duplicates and IGNORE clause was used
>>
>> +show create table c_1;
>>> +#modify db_row_hash
>>> +--error 1054
>>> +alter table z_1 change db_row_hash_2 temp int;
>>> +--error 1054
>>> +alter table c_1 change db_row_hash_3 temp int;
>>> +--error 1054
>>> +alter table c_1 change db_row_hash_4 temp int;
>>> +alter table c_2 change db_row_hash_1 temp int;
>>> +# now try to index db_row_hash
>>> +create table c_3(abc blob unique, xyz blob);
>>> +--error 1072
>>> +alter table c_3 add index(db_row_hash_1);
>>> +create table i_1(x1 int );
>>> +alter table i_1 add column x2 blob unique;
>>> +insert into i_1 value(1,1);
>>> +--error 1022
>>> +insert into i_1 value(2,1);
>>> +alter table i_1 drop index x2;
>>> +insert into i_1 value(2,1);
>>> +create table i_2(abc blob);
>>> +insert into i_2 values(1);
>>> +alter table i_2 add  unique index(abc);
>>> +select * from i_2;
>>> +insert into i_2 values(22);
>>> +--error 1022
>>> +insert into i_2 values(22);
>>> +alter table i_2 drop key abc;
>>> +insert into i_2 values(22);
>>> +drop table i_2;
>>> +create table i_2(abc blob);
>>> +insert into i_2 values(1);
>>> +alter table i_2 add constraint  unique(abc);
>>> +select * from i_2;
>>> +insert into i_2 values(22);
>>> +--error 1022
>>> +insert into i_2 values(22);
>>> +alter table i_2 drop key abc;
>>> +insert into i_2 values(22);
>>> +# now some indexes on blob these should allow
>>> +# duplicates these will be used in where clause
>>> +create table di_1(abc blob , xyz int , index(abc,xyz));
>>> +show create table di_1;
>>> +insert into di_1 values(1,1);
>>> +insert into di_1 values(1,1);
>>> +insert into di_1 values(2,1);
>>> +create table di_2 (abc blob);
>>> +insert into di_2 values(1);
>>> +insert into di_2 values(1);
>>> +alter table di_2 add index(abc);
>>>
>> what kind of index is used here? should be a normal index,
>> no hash, no hidden db_row_hash_1 column, just a normal b-tree index.
>>
>> +show create table di_2;
>>> +insert into di_2 values(1);
>>> +alter table di_2 drop index abc;
>>> +show create table di_2;
>>>
>> you never clean up in these tests. see other - existing - test files,
>> they always drop all created tables and views, so that when the test
>> end all databases look exactly as before the test started.
>>
>> diff --git a/include/my_base.h b/include/my_base.h
>>> index 8b546ed..d5c697b 100644
>>> --- a/include/my_base.h
>>> +++ b/include/my_base.h
>>> @@ -250,6 +250,8 @@ enum ha_base_keytype {
>>>     #define HA_NOSAME            1      /* Set if not dupplicated
>>> records */
>>>   #define HA_PACK_KEY            2      /* Pack string key to previous
>>> key */
>>> +#define HA_INDEX_HASH  4  /* */
>>>
>> you don't need HA_INDEX_HASH. See above, non-unique index on blobs
>> should be just a normal b-tree index, no hash, subject to automatic
>> key truncation, see sql_table.cc around the line with the comment
>> /* not a critical problem */
>>
> removed
>
>> +#define HA_UNIQUE_HASH 8
>>>
>> as you've found out already, these two bits are not free
>>
>>   #define HA_AUTO_KEY            16
>>>   #define HA_BINARY_PACK_KEY     32     /* Packing of all keys to prev
>>> key */
>>>   #define HA_FULLTEXT           128     /* For full-text search */
>>> diff --git a/sql/field.h b/sql/field.h
>>> index 08905f2..2e6e0e4 100644
>>> --- a/sql/field.h
>>> +++ b/sql/field.h
>>> @@ -628,6 +628,8 @@ class Field: public Value_source
>>>        @c NOT @c NULL field, this member is @c NULL.
>>>     */
>>>     uchar               *null_ptr;
>>> +  field_visible_type field_visibility=NORMAL;
>>> +  bool is_hash=false;
>>>
>> please add a comment for is_hash or rename it to something that is
>> self-explanatory. field_visibility, for example, is self-explanatory.
>>
> okay
>
>>     /*
>>>       Note that you can use table->in_use as replacement for current_thd
>>> member
>>>       only inside of val_*() and store() members (e.g. you can't use it
>>> in cons)
>>> @@ -3051,6 +3052,10 @@ class Field_blob :public Field_longstr {
>>>     inline uint32 get_length(uint row_offset= 0)
>>>     { return get_length(ptr+row_offset, this->packlength); }
>>>     uint32 get_length(const uchar *ptr, uint packlength);
>>> +  uint32 data_length(int row_offset=0)
>>> +  {
>>> +    return get_length(row_offset);
>>> +  }
>>>
>> Huh?
>> There was no Field_blob::data_length(), it was using the parent
>> implementation Field::data_length(). That was returning pack_length(),
>> that is Field_blob::pack_length(), which is simply packlength +
>> portable_sizeof_char_ptr
>> Now you've defined Field_blob::data_length() to return get_length(0).
>> That is, you've changed the meaning of Field_blob::data_length().
>> Is that ok? Was old Field_blob::data_length() never used?
>>
> Reverted  actually i used this in earlier code but now no longer use this
> function
>
>>
>>     uint32 get_length(const uchar *ptr_arg)
>>>     { return get_length(ptr_arg, this->packlength); }
>>>     inline void get_ptr(uchar **str)
>>> @@ -3455,6 +3460,8 @@ class Create_field :public Sql_alloc
>>>       max number of characters.
>>>     */
>>>     ulonglong length;
>>> +  field_visible_type field_visibility=NORMAL;
>>> +  bool is_hash=false;
>>>
>> don't do that. gcc complains
>>
>>    warning: non-static data member initializers only available with
>> -std=c++11 or -std=gnu++11
>>
>> and we don't use c++11 yet, because some of our compilers in buildbot
>> don't support it.
>>
> done
>
>     /*
>>>       The value of `length' as set by parser: is the number of characters
>>>       for most of the types, or of bytes for BLOBs or numeric types.
>>> diff --git a/sql/handler.cc b/sql/handler.cc
>>> index 49e451e..e7d7815 100644
>>> --- a/sql/handler.cc
>>> +++ b/sql/handler.cc
>>> @@ -5864,16 +5864,60 @@ int handler::ha_reset()
>>>     DBUG_RETURN(reset());
>>>   }
>>>   -
>>> +/** @brief
>>> +   Compare two records
>>> +   It requires both records to be present in table->record[0]
>>> +   and table->record[1]
>>> +   @returns true if equal else false
>>> +  */
>>> +bool rec_hash_cmp(uchar *first_rec, uchar *sec_rec, Field *hash_field)
>>> +{
>>> +  Item_args * t_item=(Item_args *)hash_field->vcol_info->expr_item;
>>> +  int arg_count = t_item->argument_count();
>>> +  Item ** arguments=t_item->arguments();
>>> +  int diff = sec_rec-first_rec;
>>> +  Field * t_field;
>>> +  for(int i=0;i<arg_count;i++)
>>> +  {
>>> +    t_field = ((Item_field *)arguments[i])->field;//need a debug assert
>>> +    if(t_field->cmp_binary_offset(diff))
>>> +      return false;
>>> +  }
>>> +  return true;
>>> +}
>>>
>> 1. please format the code like it's done elsewhere in the file.
>> - in assignments: no space before '=', one space after
>> - after a function/method must be one or two empty lines
>> - in if/for/while: a space between the keyword and a parenthesys
>> - in for() loop - a space after a semicolon
>> - in function arguments - a space after a comma
>> - not in the function above, but everywhere in your patch
>>
>> 2. it's better to avoid type conversion where it's not needed, for
>> example, t_item->argument_count() returns uint, so your arg_count should
>> be of that type too.
>>
>> 3. either write a completely generic function that compares two records
>> given as pointers, or write a function that compares record[0] and
>> record[1].
>> Ok, it looks like you need to compare a record given as a pointer with record[0].
>> Then remove first_rec, it only creates a false pretence that the function
>> is more generic than it is.
>> Instead, do
>>
>>    int diff= sec_rec - hash_field->table->record[0];
>>
>> (and, may be, rename sec_rec to "other_rec")
>>
>> 4. what do you mean "need a debug assert"? if you need it, add it :)
>> if you want to add a debug assert, but don't know how - please, ask.
>>
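> Applying 1-4, the comparison would reduce to roughly this (just a sketch, with
> sec_rec renamed to other_rec as suggested):
>
>   static bool rec_hash_cmp(uchar *other_rec, Field *hash_field)
>   {
>     Item_args *t_item= (Item_args *) hash_field->vcol_info->expr_item;
>     uint arg_count= t_item->argument_count();
>     Item **arguments= t_item->arguments();
>     int diff= other_rec - hash_field->table->record[0];
>
>     for (uint i= 0; i < arg_count; i++)
>     {
>       DBUG_ASSERT(arguments[i]->type() == Item::FIELD_ITEM);
>       Field *t_field= ((Item_field *) arguments[i])->field;
>       if (t_field->cmp_binary_offset(diff))
>         return false;                /* some column differs: not a real duplicate */
>     }
>     return true;                     /* all columns of the hash expression are equal */
>   }
>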
>>   int handler::ha_write_row(uchar *buf)
>>>   {
>>> -  int error;
>>> +  int error,result;
>>>     Log_func *log_func= Write_rows_log_event::binlog_r
>>> ow_logging_function;
>>> +  Field *field_iter;
>>>     DBUG_ASSERT(table_share->tmp_table != NO_TMP_TABLE ||
>>>                 m_lock_type == F_WRLCK);
>>>     DBUG_ENTER("handler::ha_write_row");
>>>     DEBUG_SYNC_C("ha_write_row_start");
>>> -
>>> +  /* First need to whether inserted record is unique or not */
>>> +  /* One More Thing  if i implement hidden field then detection can be
>>> easy */
>>>
>> this second comment is probably obsolete, you can remove it
>>
>> +  for(uint i=0;i<table->s->keys;i++)
>>> +  {
>>> +    if(table->key_info[i].flags&HA_UNIQUE_HASH)
>>>
>> in fact, you don't need a special key flag. you can check, like
>>
>>     if (table->key_info[i].key_part->field->is_hash)
>>
>> but a flag makes it kinda simpler, so if you like the flag, go ahead and
>> use it (if you find a free bit for it). Perhaps, even, you can use the
>> key flag and remove Field::is_hash? The field does not need any special
>> treatment, does it? it only needs to be hidden.
>>
>> +    {
>>> +      field_iter=table->key_info[i].key_part->field;
>>>
>> why did you name it "field_iter"? it's not an iterator. may be
>> "hash_field"?
>>
>> +      int len =table->key_info[i].key_length;
>>> +      uchar  ptr[len];
>>>
>> we compile with -Wvla, so this will be a warning too. Just use
>> ptr[9], for example. And add DBUG_ASSERT(len < sizeof(ptr));
>>
>> +      if(field_iter->is_null())
>>> +      {
>>> +        goto write_row;
>>>
>> no goto here, just break;
>>
>> +      }
>>> +      key_copy(ptr,buf,&table->key_info[i],len,false);
>>> +      result= table->file->ha_index_read_idx
>>> _map(table->record[1],i,ptr,
>>> +          HA_WHOLE_KEY,HA_READ_KEY_EXACT);
>>> +      if(!result)
>>> +      {
>>> +        if(rec_hash_cmp(table->record[0],table->record[1],field_iter))
>>> +          DBUG_RETURN(HA_ERR_FOUND_DUPP_KEY);
>>> +      }
>>> +    }
>>> +  }
>>> +  write_row:
>>>
>> and no label here
>>
>>     MYSQL_INSERT_ROW_START(table_share->db.str,
>>> table_share->table_name.str);
>>>     mark_trx_read_write();
>>>     increment_statistics(&SSV::ha_write_count);
>>> @@ -5907,6 +5952,36 @@ int handler::ha_update_row(const uchar *old_data,
>>> uchar *new_data)
>>>     DBUG_ASSERT(new_data == table->record[0]);
>>>     DBUG_ASSERT(old_data == table->record[1]);
>>>   +  /* First need to whether inserted record is unique or not */
>>> +  /* One More Thing  if i implement hidden field then detection can be
>>> easy */
>>> +  for(uint i=0;i<table->s->keys;i++)
>>> +  {
>>> +    if(table->key_info[i].flags&HA_UNIQUE_HASH)
>>> +    {
>>> +      /*
>>> +        We need to add the null bit
>>> +        If the column can be NULL, then in the first byte we put 1 if
>>> the
>>> +        field value is NULL, 0 otherwise.
>>> +       */
>>> +      uchar *new_rec = (uchar *)alloc_root(&table->mem_root,
>>> +                                           table->s->reclength*sizeof(uc
>>> har));
>>>
>> do you know how alloc_root works? read include/mysql/service_thd_alloc.h
>> memory allocated with alloc_root() can not be freed individually, it is
>> freed when the complete MEM_ROOT is reset or destroyed.
>> so, three problems here:
>> 1. you allocate memory on MEM_ROOT in a loop. so, if the table has, say
>> three HA_UNIQUE_HASH keys, you'll allocate the buffer three times, while
>> one should've been totally sufficient
>> 2. you allocate for every row that is inserted
>> 3. you allocate it on the *TABLE::mem_root*. Table's memroot is freed
>> only when a table is closed!
>>
>> So, if you create a table with three unique(blob) keys, and insert
>> four rows in one statement: INSERT ... VALUES (),(),(),(),(). And
>> repeat this insert five times, you'll allocate 3*4*5=60 reclength bytes.
>> They'll not be freed, so if you repeat this insert five more times, it'll
>> be 60 reclength more bytes. And so on :)
>>
>> I think the good approach would be to have a row buffer, like
>> TABLE::check_unique_buf. See, TABLE now has:
>>
>>   uchar *record[2];                     /* Pointer to records */
>>   uchar *write_row_record;              /* Used as optimisation in
>>                                            THD::write_row */
>>   uchar *insert_values;                  /* used by INSERT ... UPDATE */
>>
>> that's where you add uchar *check_unique_buf;
>> it's initialized to 0. and on the first use you do, like
>>
>>   if (!table->check_unique_buf)
>>     table->check_unique_buf= alloc_root(&table->mem_root, ...)
>>
>> and then you can freely use it for checking your unique constraints.
>> it'll be allocated only once per TABLE.
>>
> done
>
>>
>> +      field_iter=table->key_info[i].key_part->field;
>>> +      uchar  ptr[9];
>>>
>> better #define UNIQ_HASH_KEY_LEN 9 and use it everywhere
>>
>> +      if(field_iter->is_null())
>>> +      {
>>> +        goto write_row;
>>> +      }
>>> +      key_copy(ptr,new_data,&table->key_info[i],9,false);
>>> +      result= table->file->ha_index_read_idx_map(new_rec,i,ptr,
>>> +                            HA_WHOLE_KEY,HA_READ_KEY_EXACT);
>>> +      if(!result)
>>> +      {
>>> +        if(rec_hash_cmp(table->record[0],new_rec,field_iter))
>>> +          return HA_ERR_FOUND_DUPP_KEY;
>>> +      }
>>> +    }
>>> +  }
>>> +  write_row:
>>>
>> this is the same code as in handler::ha_write_row, please combine them
>> into one function (for simplicity you can use table->check_unique_buf also
>> for INSERT)
>>
> done
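> Roughly along these lines (just a sketch; the helper name
> check_long_unique_duplicate is mine, and it assumes the UNIQ_HASH_KEY_LEN
> define you suggested and the rec_hash_cmp() from above):
>
>   /* returns HA_ERR_FOUND_DUPP_KEY if new_rec violates a long unique key */
>   static int check_long_unique_duplicate(TABLE *table, uchar *new_rec)
>   {
>     for (uint i= 0; i < table->s->keys; i++)
>     {
>       if (!(table->key_info[i].flags & HA_UNIQUE_HASH))
>         continue;
>       Field *hash_field= table->key_info[i].key_part->field;
>       if (hash_field->is_null())
>         continue;
>       if (!table->check_unique_buf)                 /* allocated once per TABLE */
>         table->check_unique_buf= (uchar *) alloc_root(&table->mem_root,
>                                                       table->s->reclength);
>       uchar key[UNIQ_HASH_KEY_LEN];
>       key_copy(key, new_rec, &table->key_info[i], UNIQ_HASH_KEY_LEN, false);
>       /* rec_hash_cmp() compares the found row against record[0], the new row */
>       if (!table->file->ha_index_read_idx_map(table->check_unique_buf, i, key,
>                                               HA_WHOLE_KEY, HA_READ_KEY_EXACT) &&
>           rec_hash_cmp(table->check_unique_buf, hash_field))
>         return HA_ERR_FOUND_DUPP_KEY;               /* same hash and same column values */
>     }
>     return 0;
>   }
>
> and then both handler::ha_write_row() and handler::ha_update_row() just call it
> with the new row buffer and bail out if it returns an error.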
>
>
>>     MYSQL_UPDATE_ROW_START(table_share->db.str,
>>> table_share->table_name.str);
>>>     mark_trx_read_write();
>>>     increment_statistics(&SSV::ha_update_count);
>>> diff --git a/sql/item.cc b/sql/item.cc
>>> index adf7031..2015752 100644
>>> --- a/sql/item.cc
>>> +++ b/sql/item.cc
>>> @@ -5059,7 +5059,13 @@ bool Item_field::fix_fields(THD *thd, Item
>>> **reference)
>>>       }
>>>       else if (!from_field)
>>>         goto error;
>>> -
>>> +    if(from_field->field_visibility==FULL_HIDDEN)
>>> +    {
>>> +      my_error(ER_BAD_FIELD_ERROR, MYF(0), (* reference)->full_name(),
>>> +               thd->where);
>>> +      from_field=NULL;
>>> +      goto error;
>>> +    }
>>>
>> So, you're patching the "search for field by name" process, pretending
>> that the field does not exist if it's FULL_HIDDEN. This is, of course,
>> correct. But I suspect you're doing it too late. Consider this case:
>>
>>    create table t1 (b blob, unique (b));
>>    create table t2 (db_row_hash_1 int, a int);
>>
>>      select * from t1, t2 where db_row_hash_1=5;
>>
>> in this case, mariadb needs to search for db_row_hash_1 field in t1,
>> should not find it, continue searching in t2, find it there.
>> in your code it will find it in t1, and later reject it with
>> ER_BAD_FIELD_ERROR.
>>
>> there may be more complex cases, with subqueries and outer selects,
>> views, etc. important part is - you need to reject FULL_HIDDEN field
>> immediately when it's found, so that the search could continue in other
>> tables, outer selects, etc.
>>
>> btw, don't forget to add tests for this behavior (one table or outer
>> select has db_row_hash_1 visible field, another table has db_row_hash_1
>> FULL_HIDDEN field).
>>
> done
>
>       table_list= (cached_table ? cached_table :
>>>                    from_field != view_ref_found ?
>>>                    from_field->table->pos_in_table_list : 0);
>>> diff --git a/sql/item_func.cc b/sql/item_func.cc
>>> index 6edb276..889ac40 100644
>>> --- a/sql/item_func.cc
>>> +++ b/sql/item_func.cc
>>> @@ -1916,6 +1916,37 @@ void Item_func_int_div::fix_length_and_dec()
>>>   }
>>>     +longlong  Item_func_hash::val_int()
>>> +{
>>> +  unsigned_flag= true;
>>> +  ulong nr1= 1,nr2= 4;
>>> +  CHARSET_INFO *cs;
>>> +  for(uint i= 0;i<arg_count;i++)
>>> +  {
>>> +    String * str = args[i]->val_str();
>>> +    if(args[i]->null_value)
>>> +    {
>>> +      null_value= 1;
>>> +      return 0;
>>> +    }
>>> +    cs= str->charset();
>>> +    uchar l[4];
>>> +    int4store(l,str->length());
>>> +    cs->coll->hash_sort(cs,l,sizeof(l), &nr1, &nr2);
>>>
>> looks good, but use my_charset_binary for the length.
>>
> did not get it :(
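> Do you mean hashing the four length bytes with my_charset_bin instead of the
> column collation? Just guessing, something like:
>
>     cs= str->charset();
>     uchar l[4];
>     int4store(l, str->length());
>     my_charset_bin.coll->hash_sort(&my_charset_bin, l, sizeof(l), &nr1, &nr2);
>     cs->coll->hash_sort(cs, (uchar *) str->ptr(), str->length(), &nr1, &nr2);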
>
>
>> +    cs->coll->hash_sort(cs, (uchar *)str->ptr(), str->length(), &nr1,
>>> &nr2);
>>> +  }
>>> +  return   (longlong)nr1;
>>> +}
>>> +
>>> +
>>> +void  Item_func_hash::fix_length_and_dec()
>>> +{
>>> +  maybe_null= 1;
>>> +  decimals= 0;
>>> +  max_length= 8;
>>> +}
>>> +
>>> +
>>>   longlong Item_func_mod::int_op()
>>>   {
>>>     DBUG_ASSERT(fixed == 1);
>>> diff --git a/sql/lex.h b/sql/lex.h
>>> index 22ff4e6..d7a4e14 100644
>>> --- a/sql/lex.h
>>> +++ b/sql/lex.h
>>> @@ -266,6 +266,7 @@ static SYMBOL symbols[] = {
>>>     { "HAVING",         SYM(HAVING)},
>>>     { "HELP",           SYM(HELP_SYM)},
>>>     { "HIGH_PRIORITY",  SYM(HIGH_PRIORITY)},
>>> +  {"HIDDEN",  SYM(HIDDEN)},
>>>
>> spacing! align it as all other rows are
>>
>>     { "HOST",           SYM(HOST_SYM)},
>>>     { "HOSTS",          SYM(HOSTS_SYM)},
>>>     { "HOUR",           SYM(HOUR_SYM)},
>>> diff --git a/sql/share/errmsg-utf8.txt b/sql/share/errmsg-utf8.txt
>>> index 709ac55..9ae3ac6 100644
>>> --- a/sql/share/errmsg-utf8.txt
>>> +++ b/sql/share/errmsg-utf8.txt
>>> @@ -7139,3 +7139,5 @@ ER_KILL_QUERY_DENIED_ERROR
>>>   ER_NO_EIS_FOR_FIELD
>>>           eng "Engine-independent statistics are not collected for
>>> column '%s'"
>>>           ukr "Незалежна від типу таблиці статистика не збирається для
>>> стовбця '%s'"
>>> +ER_HIDDEN_NOT_NULL_WOUT_DEFAULT
>>> +        eng "Hidden column '%s' either allow null values or it must
>>> have default value"
>>>
>> when you'll merge with 10.2 make sure this your new error will
>> end up at the very end, after ER_EXPRESSION_REFERS_TO_UNINIT_FIELD
>>
>> diff --git a/sql/sql_class.h b/sql/sql_class.h
>>> index 8bfc243..bb206a8 100644
>>> --- a/sql/sql_class.h
>>> +++ b/sql/sql_class.h
>>> @@ -297,7 +297,7 @@ class Key :public Sql_alloc, public DDL_options {
>>>     LEX_STRING name;
>>>     engine_option_value *option_list;
>>>     bool generated;
>>> -
>>> +  key_hash_type hash_type=NOT_HASH;
>>>
>> same as above about c++11
>>
>>     Key(enum Keytype type_par, const LEX_STRING &name_arg,
>>>         ha_key_alg algorithm_arg, bool generated_arg, DDL_options_st
>>> ddl_options)
>>>       :DDL_options(ddl_options),
>>> diff --git a/sql/sql_base.cc b/sql/sql_base.cc
>>> index e808fba..b2c8dfb 100644
>>> --- a/sql/sql_base.cc
>>> +++ b/sql/sql_base.cc
>>> @@ -8343,6 +8343,8 @@ insert_fields(THD *thd, Name_resolution_context
>>> *context, const char *db_name,
>>>         for (; !field_iterator.end_of_fields(); field_iterator.next())
>>>       {
>>> +      if(field_iterator.field()==NULL ||
>>>
>> eh... can it be NULL here, really?
>>
> I do not know; there was a test in InnoDB which was failing because of this.
> Anyway, I have changed the whole code.
>
>>
>> +             field_iterator.field()->field_visibility==NORMAL){
>>>         Item *item;
>>>           if (!(item= field_iterator.create_item(thd)))
>>> diff --git a/sql/sql_insert.cc b/sql/sql_insert.cc
>>> index fcf8c14..2bfb288 100644
>>> --- a/sql/sql_insert.cc
>>> +++ b/sql/sql_insert.cc
>>> @@ -723,7 +722,24 @@ bool mysql_insert(THD *thd,TABLE_LIST *table_list,
>>>       if (open_and_lock_tables(thd, table_list, TRUE, 0))
>>>         DBUG_RETURN(TRUE);
>>>     }
>>> -
>>> +
>>> +  context= &thd->lex->select_lex.context;
>>> +  if(table_list->table!=NULL)
>>>
>> can table_list->table be NULL here?
>>
> Same; a test in InnoDB was failing, but now I do not use this logic.
>
>>
>> +  {
>>> +    Field ** f=table_list->table->field;
>>> +    List<Item> * i_list = (List<Item> *)values_list.first_node()->info;
>>> +    for(uint i=0;i<table_list->table->s->fields;i++)
>>> +    {
>>> +      if((*f)->is_hash ||(*f)->field_visibility==USER_DEFINED_HIDDEN)
>>> +      {
>>> +        i_list->push_front(new (thd->mem_root)
>>> +        Item_default_value(thd,context),thd->mem_root);
>>> +      }
>>> +      f++;
>>> +    }
>>> +  }
>>> +  List_iterator_fast<List_item> its(values_list);
>>> +
>>>
>> is that needed at all? one can always do INSERT t1 (a) VALUES (1)
>> and all non-specified fields will get default values. without
>> specifically adding Item_default_value to the list. So, it seems, that
>> hidden fields will automatically get default values too.
>>
> Changed.
>
>     lock_type= table_list->lock_type;
>>>       THD_STAGE_INFO(thd, stage_init);
>>> diff --git a/sql/sql_show.cc b/sql/sql_show.cc
>>> index 1cbad7e..60e586c 100644
>>> --- a/sql/sql_show.cc
>>> +++ b/sql/sql_show.cc
>>> @@ -1932,6 +1931,26 @@ int show_create_table(THD *thd, TABLE_LIST
>>> *table_list, String *packet,
>>>       }
>>>       append_create_options(thd, packet, field->option_list,
>>> check_options,
>>>                             hton->field_options);
>>> +    //TODO need a better logic to find wheter to put comma or not
>>> +    int i=1;
>>> +    bool is_comma_needed=false;
>>> +    if (*(ptr+i)!=NULL)
>>> +    {
>>> +      is_comma_needed=true;
>>> +      while((*(ptr+i))->field_visibility==MEDIUM_HIDDEN ||
>>> +            (*(ptr+i))->field_visibility==FULL_HIDDEN)
>>> +      {
>>> +        i++;
>>> +        is_comma_needed=true;
>>> +        if(!*(ptr+i))
>>> +        {
>>> +          is_comma_needed =false;
>>> +          break;
>>> +        }
>>> +      }
>>> +    }
>>> +    if(is_comma_needed)
>>> +     packet->append(STRING_WITH_LEN(",\n"));
>>>
>> agree about "better logic". You can do it like this:
>>
>>    bool is_comma_needed= false;
>>    for (ptr=table->field ; (field= *ptr); ptr++)
>>    {
>>      if (is_comma_needed)
>>        packet->append(STRING_WITH_LEN(",\n"));
>>      is_comma_needed= false;
>>      ...
>>      print field
>>      is_comma_needed= true;
>>    }
>>
> But this won't work, because I need to put a comma only when there is a
> remaining field and at least one of the remaining fields is non-hidden,
> so I have to use a while loop.
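> Unless the comma is printed in front of each visible field and the hidden
> fields are skipped entirely; a sketch of that variant (same variables as in
> your loop):
>
>     bool is_comma_needed= false;
>     for (ptr= table->field; (field= *ptr); ptr++)
>     {
>       if (field->field_visibility == MEDIUM_HIDDEN ||
>           field->field_visibility == FULL_HIDDEN)
>         continue;                          /* never printed, so never counted */
>       if (is_comma_needed)
>         packet->append(STRING_WITH_LEN(",\n"));
>       /* ... print the field as before ... */
>       is_comma_needed= true;
>     }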
>
>>     }
>>>       key_info= table->key_info;
>>> @@ -1946,6 +1965,22 @@ int show_create_table(THD *thd, TABLE_LIST
>>> *table_list, String *packet,
>>>     for (uint i=0 ; i < share->keys ; i++,key_info++)
>>>     {
>>>       KEY_PART_INFO *key_part= key_info->key_part;
>>> +    if(key_info->flags&HA_UNIQUE_HASH||key_info->flags&HA_INDEX_HASH)
>>>
>> code formatting. spaces!
>>
> changed
>
>> +    {
>>> +      char * column_names= key_part->field->vcol_info->
>>> +                          expr_str.str+strlen("hash");
>>> +      int length=key_part->field->vcol_info->expr_str.length;
>>> +      length-=strlen("hash");
>>> +      packet->append(STRING_WITH_LEN(",\n"));
>>> +      if(key_info->flags&HA_INDEX_HASH)
>>> +        packet->append(STRING_WITH_LEN("  INDEX `"));
>>> +      else
>>> +        packet->append(STRING_WITH_LEN("  UNIQUE KEY `"));
>>> +      packet->append(key_info->name,strlen(key_info->name));
>>> +      packet->append(STRING_WITH_LEN("`"));
>>> +      packet->append(column_names,length);
>>>
>> I don't like that ±strlen("hash"). Better do it like this:
>>
>>   static LEX_CSTRING uniq_hash_func= { STRING_WITH_LEN("hash") };
>>
>> and instead of strlen("hash") you use uniq_hash_func.length,
>> and below instead of explicit "hash" string in the
>> mysql_prepare_create_table
>> you use uniq_hash_func.str.
>>
> Done,
> but instead of
>
> static LEX_CSTRING uniq_hash_func= { STRING_WITH_LEN("hash") };
>
> I have defined a macro in my_base.h.
>
> +      continue;
>>> +    }
>>>       bool found_primary=0;
>>>       packet->append(STRING_WITH_LEN(",\n  "));
>>>   @@ -5416,6 +5454,22 @@ static int get_schema_column_record(THD *thd,
>>> TABLE_LIST *tables,
>>>         else
>>>           table->field[17]->store(STRING_WITH_LEN("VIRTUAL"), cs);
>>>       }
>>> +    /*hidden can coexist with auto_increment and virtual */
>>> +    if(field->field_visibility==USER_DEFINED_HIDDEN)
>>> +    {
>>> +      table->field[17]->store(STRING_WITH_LEN("HIDDEN"), cs);
>>> +      if (field->unireg_check == Field::NEXT_NUMBER)
>>> +        table->field[17]->store(STRING_WITH_LEN("auto_increment ,
>>> HIDDEN"), cs);
>>> +      if (print_on_update_clause(field, &type, true))
>>> +        table->field[17]->store(type.ptr(), type.length(), cs);
>>> +      if (field->vcol_info)
>>> +      {
>>> +        if (field->stored_in_db)
>>> +          table->field[17]->store(STRING_WITH_LEN("PERSISTENT,
>>> HIDDEN"), cs);
>>> +        else
>>> +          table->field[17]->store(STRING_WITH_LEN("VIRTUAL, HIDDEN"),
>>> cs);
>>> +      }
>>> +    }
>>>
>> eh, please no. don't duplicate the whole block. A variant that avoids it:
>>
>>    StringBuffer<256> buf;
>>
>> if (autoinc) // this is pseudocode, see the correct condition is in the
>> code above
>>    buf.set(STRING_WITH_LEN("auto_increment"))
>> else if (virtual)
>>    buf.set(STRING_WITH_LEN("VIRTUAL");
>> ...
>>
>> if (hidden)
>> {
>>    if (buf.length)
>>      buf.append(STRING_WITH_LEN(", "));
>>    buf.append(STRING_WITH_LEN("HIDDEN"));
>> }
>> table->field[17]->store(buf.ptr(), buf.length(), cs);
>>
> changed
>
>>       table->field[19]->store(field->comment.str, field->comment.length,
>>> cs);
>>>       if (schema_table_store_record(thd, table))
>>>         DBUG_RETURN(1);
>>> diff --git a/sql/sql_table.cc b/sql/sql_table.cc
>>> index dae3b7a..d9322ca 100644
>>> --- a/sql/sql_table.cc
>>> +++ b/sql/sql_table.cc
>>> @@ -3013,7 +3013,7 @@ int prepare_create_field(Create_field *sql_field,
>>>       break;
>>>     }
>>>     if (!(sql_field->flags & NOT_NULL_FLAG) ||
>>> -      (sql_field->vcol_info))  /* Make virtual columns allow NULL
>>> values */
>>> +      (sql_field->vcol_info))
>>>
>> why did you remove this comment?
>>
> Reverted.
>
>       sql_field->pack_flag|= FIELDFLAG_MAYBE_NULL;
>>>     if (sql_field->flags & NO_DEFAULT_VALUE_FLAG)
>>>       sql_field->pack_flag|= FIELDFLAG_NO_DEFAULT;
>>> @@ -3223,6 +3222,130 @@ mysql_prepare_create_table(THD *thd,
>>> HA_CREATE_INFO *create_info,
>>>     bool tmp_table= create_table_mode == C_ALTER_TABLE;
>>>     DBUG_ENTER("mysql_prepare_create_table");
>>>   +  /*
>>> +    scan the the whole alter list
>>> +    and add one field if length of blob is zero
>>> +    TODO change to work for all too-long unique keys like varchar,text
>>> +  */
>>> +
>>> +  List_iterator<Key> key_iter(alter_info->key_list);
>>> +  Key *key_iter_key;
>>> +  Key_part_spec *temp_colms;
>>> +  int num= 1;
>>> +  bool is_long_unique=false;
>>> +  int key_initial_elements=alter_info->key_list.elements;
>>> +  while((key_iter_key=key_iter++)&&key_initial_elements)
>>> +  {
>>> +    key_initial_elements--;
>>> +    List_iterator<Key_part_spec> key_part_iter(key_iter_key->columns);
>>> +    while((temp_colms=key_part_iter++))
>>> +    {
>>> +      while ((sql_field=it++) &&sql_field&&
>>> my_strcasecmp(system_charset_info,
>>> +
>>> temp_colms->field_name.str,
>>> +
>>> sql_field->field_name)){}
>>> +      if(sql_field && (sql_field->sql_type==MYSQL_TYPE_BLOB ||
>>> +                       sql_field->sql_type==MYSQL_TYPE_MEDIUM_BLOB||
>>> +                       sql_field->sql_type==MYSQL_TYPE_LONG_BLOB)
>>> +         &&temp_colms->length==0)
>>> +      {
>>> +        is_long_unique=true;
>>> +        /*
>>> +          One long key  unique in enough
>>> +         */
>>> +        it.rewind();
>>> +        break;
>>> +      }
>>> +      // if(sql_field->sql_type==MYSQL_TYPE_VARCHAR)
>>> +      it.rewind();
>>> +    }
>>> +    if(is_long_unique)
>>> +    {
>>> +      /* make a virtual field */
>>> +      key_part_iter.rewind();
>>> +      Create_field *cf = new (thd->mem_root) Create_field();
>>> +      cf->flags|=UNSIGNED_FLAG;
>>> +      cf->length=cf->char_length=8;
>>> +      cf->charset=NULL;
>>> +      cf->decimals=0;
>>> +      char temp_name[30];
>>> +      strcpy(temp_name,"DB_ROW_HASH_");
>>> +      char num_holder[10];    //10 is way more but i think it is ok
>>> +      sprintf(num_holder,"%d",num);
>>> +      strcat(temp_name,num_holder);
>>> +      /*
>>> +        Check for collusions
>>> +       */
>>> +      while((dup_field=it++))
>>> +      {
>>> +        if(!my_strcasecmp(system_charset_info,temp_name,dup_field->
>>> field_name))
>>> +        {
>>> +          temp_name[12]='\0'; //now temp_name='DB_ROW_HASH_'
>>> +          num++;
>>> +          sprintf(num_holder,"%d",num);
>>> +          strcat(temp_name,num_holder);
>>> +          it.rewind();
>>> +        }
>>> +      }
>>> +      it.rewind();
>>> +      char * name = (char *)thd->alloc(30);
>>> +      strcpy(name,temp_name);
>>> +      cf->field_name=name;
>>> +      cf->stored_in_db=true;
>>> +      cf->sql_type=MYSQL_TYPE_LONGLONG;
>>> +      /* hash column should be atmost hidden */
>>> +      cf->field_visibility=FULL_HIDDEN;
>>> +      if(key_iter_key->type==Key::UNIQUE)
>>> +        key_iter_key->hash_type=UNIQUE_HASH;
>>> +      else
>>> +        key_iter_key->hash_type=INDEX_HASH;
>>> +      cf->is_hash=true;
>>> +      /* add the virtual colmn info */
>>> +      Virtual_column_info *v= new (thd->mem_root) Virtual_column_info();
>>> +      char * hash_exp=(char *)thd->alloc(252);
>>> +      char * key_name=(char *)thd->alloc(252);
>>> +      strcpy(hash_exp,"hash(`");
>>> +      temp_colms=key_part_iter++;
>>> +      strcat(hash_exp,temp_colms->field_name.str);
>>> +      strcpy(key_name,temp_colms->field_name.str);
>>> +      strcat(hash_exp,"`");
>>> +      while((temp_colms=key_part_iter++)){
>>> +        strcat(hash_exp,(const char * )",");
>>> +        strcat(key_name,"_");
>>> +        strcat(hash_exp,"`");
>>> +        strcat(hash_exp,temp_colms->field_name.str);
>>> +        strcat(key_name,temp_colms->field_name.str);
>>> +        strcat(hash_exp,"`");
>>> +      }
>>> +      strcat(hash_exp,(const char * )")");
>>> +      v->expr_str.str= hash_exp;
>>> +      v->expr_str.length= strlen(hash_exp);
>>> +      v->expr_item= NULL;
>>> +      v->set_stored_in_db_flag(true);
>>> +      cf->vcol_info=v;
>>> +      alter_info->create_list.push_front(cf,thd->mem_root);
>>> +      /*
>>> +         Now create  the key field kind
>>> +         of harder then prevoius one i guess
>>> +                */
>>> +      key_iter_key->type=Key::MULTIPLE;
>>> +      key_iter_key->columns.delete_elements();
>>> +      LEX_STRING  *ls =(LEX_STRING *)thd->alloc(sizeof(LEX_STRING)) ;
>>> +      ls->str=(char *)sql_field->field_name;
>>> +      ls->length =strlen(sql_field->field_name);
>>> +      if(key_iter_key->name.length==0)
>>> +      {
>>> +        LEX_STRING  *ls_name =(LEX_STRING *)thd->alloc(sizeof(LEX_STRING))
>>> ;
>>> +        ls_name->str=key_name;
>>> +        ls_name->length=strlen(key_name);
>>> +        key_iter_key->name= *ls_name;
>>> +      }
>>> +      key_iter_key->columns.push_back(new (thd->mem_root)
>>> Key_part_spec(name,
>>> +                                                  strlen(name),
>>> 0),thd->mem_root);
>>> +
>>> +    }
>>> +    is_long_unique=false;
>>> +  }
>>> +  it.rewind();
>>>
>> why are you doing it here, in a separate loop, instead of doing it where
>> ER_TOO_LONG_KEY is issued?
>>
> changed
>
>>
>>     select_field_pos= alter_info->create_list.elements -
>>> select_field_count;
>>>     null_fields=blob_columns=0;
>>>     create_info->varchar= 0;
>>> @@ -3241,7 +3364,7 @@ mysql_prepare_create_table(THD *thd,
>>> HA_CREATE_INFO *create_info,
>>>       /* Set field charset. */
>>>       save_cs= sql_field->charset= get_sql_field_charset(sql_field,
>>> create_info);
>>>       if ((sql_field->flags & BINCMP_FLAG) &&
>>> -       !(sql_field->charset= find_bin_collation(sql_field->charset)))
>>> +  !(sql_field->charset= find_bin_collation(sql_field->charset)))
>>>
>> typo. please revert
>>
>>         DBUG_RETURN(TRUE);
>>>         /*
>>> @@ -3274,7 +3397,17 @@ mysql_prepare_create_table(THD *thd,
>>> HA_CREATE_INFO *create_info,
>>>           DBUG_RETURN(TRUE);
>>>         }
>>>       }
>>> -
>>> +    if(sql_field->field_visibility==USER_DEFINED_HIDDEN)
>>> +    {
>>> +      if(sql_field->flags&NOT_NULL_FLAG)
>>> +      {
>>> +        if(sql_field->flags&NO_DEFAULT_VALUE_FLAG)
>>>
>> do it in one if(), not in three, please
>>
> changed
>
>> +        {
>>> +          my_error(ER_HIDDEN_NOT_NULL_WOUT_DEFAULT, MYF(0),
>>> sql_field->field_name);
>>> +          DBUG_RETURN(TRUE);
>>> +        }
>>> +      }
>>> +    }
>>>       if (sql_field->sql_type == MYSQL_TYPE_SET ||
>>>           sql_field->sql_type == MYSQL_TYPE_ENUM)
>>>       {
>>> @@ -3831,8 +3968,8 @@ mysql_prepare_create_table(THD *thd,
>>> HA_CREATE_INFO *create_info,
>>>             sql_field->charset->mbminlen > 1 || // ucs2 doesn't work yet
>>>             (ft_key_charset && sql_field->charset != ft_key_charset))
>>>         {
>>> -           my_error(ER_BAD_FT_COLUMN, MYF(0), column->field_name.str);
>>> -           DBUG_RETURN(-1);
>>> +        my_error(ER_BAD_FT_COLUMN, MYF(0), column->field_name.str);
>>> +        DBUG_RETURN(-1);
>>>         }
>>>
>> indentation is two spaces, not one
>>
> I do not know why this is not working; I tried in vi but it did not work.
> I removed most of the indentation problems, but this one is not getting fixed :(
>
>         ft_key_charset=sql_field->charset;
>>>         /*
>>> @@ -3874,10 +4011,9 @@ mysql_prepare_create_table(THD *thd,
>>> HA_CREATE_INFO *create_info,
>>>             if (f_is_geom(sql_field->pack_flag) && sql_field->geom_type
>>> ==
>>>                 Field::GEOM_POINT)
>>>               column->length= MAX_LEN_GEOM_POINT_FIELD;
>>> -         if (!column->length)
>>> +    if (!column->length)
>>>
>> again, as a couple of times above - tabs confuse you
>> and you mess up the indentation :(
>>
>>           {
>>> -           my_error(ER_BLOB_KEY_WITHOUT_LENGTH, MYF(0),
>>> column->field_name.str);
>>> -           DBUG_RETURN(TRUE);
>>> +                               DBUG_ASSERT(0);
>>>           }
>>>         }
>>>   #ifdef HAVE_SPATIAL
>>> @@ -7420,7 +7557,10 @@ mysql_prepare_alter_table(THD *thd, TABLE *table,
>>>       while ((drop=drop_it++))
>>>       {
>>>         if (drop->type == Alter_drop::COLUMN &&
>>> -         !my_strcasecmp(system_charset_info,field->field_name,
>>> drop->name))
>>> +    !my_strcasecmp(system_charset_info,field->field_name, drop->name)
>>>
>> again
>>
>> +          /* we cant not drop system generated column */
>>> +          && !(field->field_visibility==MEDIUM_HIDDEN
>>> +               ||field->field_visibility==FULL_HIDDEN))
>>>         {
>>>         /* Reset auto_increment value if it was dropped */
>>>         if (MTYP_TYPENR(field->unireg_check) == Field::NEXT_NUMBER &&
>>> @@ -7444,7 +7584,10 @@ mysql_prepare_alter_table(THD *thd, TABLE *table,
>>>       while ((def=def_it++))
>>>       {
>>>         if (def->change &&
>>> -         !my_strcasecmp(system_charset_info,field->field_name,
>>> def->change))
>>> +    !my_strcasecmp(system_charset_info,field->field_name, def->change)
>>>
>> and again
>>
>> +          /* we cant change  system generated column */
>>> +          && !(field->field_visibility==MEDIUM_HIDDEN
>>> +               ||field->field_visibility==FULL_HIDDEN))
>>>         break;
>>>       }
>>>       if (def)
>>> @@ -7512,6 +7655,51 @@ mysql_prepare_alter_table(THD *thd, TABLE *table,
>>>                  table->s->table_name.str);
>>>         goto err;
>>>       }
>>> +    /* Need to see whether new added column clashes with already existed
>>> +       DB_ROW_HASH_*(is must have is_hash==tru)
>>> +     */
>>> +    for (f_ptr=table->vfield ;f_ptr&&(field= *f_ptr) ; f_ptr++)
>>> +    {
>>> +      if ((field->is_hash)&&!my_strcasecmp(system_charset_info,field-
>>> >field_name,def->field_name))
>>> +      {
>>> +        /* got a clash  find the highest number in db_row_hash_* */
>>> +        Field *temp_field;
>>> +        int max=0;
>>> +        for (Field ** f_temp_ptr=table->vfield ; (temp_field=
>>> *f_temp_ptr); f_temp_ptr++)
>>> +        {
>>> +          if(temp_field->is_hash)
>>> +          {
>>> +            int temp = atoi(temp_field->field_name+12);
>>> +            if(temp>max)
>>> +              max=temp;
>>> +          }
>>> +        }
>>> +        max++;
>>> +
>>> +        Create_field * old_hash_field;
>>> +        while((old_hash_field=field_it++))
>>> +        {
>>> +          if(!my_strcasecmp(system_charset_info,old_hash_field->field_
>>> name,
>>> +                           field->field_name))
>>> +          {
>>> +            field_it.remove();
>>> +            break;
>>> +          }
>>> +        }
>>> +        /* Give field new name which does not clash with
>>> def->feild_name */
>>> +        new_hash_field = old_hash_field->clone(thd->mem_root);
>>> +        char *name = (char *)thd->alloc(30);
>>> +        strcpy(name,"DB_ROW_HASH_");
>>> +        char num_holder[10];
>>> +        sprintf(num_holder,"%d",max);
>>> +        strcat(name,num_holder);
>>> +        new_hash_field->field_name=name;
>>> +        new_hash_field->field=field;
>>> +        new_hash_field->change=old_hash_field->field_name;
>>> +        new_create_list.push_front(new_hash_field,thd->mem_root);
>>> +        field_it.rewind();
>>> +      }
>>> +    }
>>>
>> okay... I wonder, if it would be easier not to look for collisions in
>> mysql_prepare_alter_table, but simply rename all is_hash columns
>> in mysql_prepare_create_table(). then you'll only need to do it
>> in one place.
>>
> Actually, I am thinking of changing the whole approach in
> mysql_prepare_alter_table.
> Currently it works, but I think a better version would be:
> in mysql_prepare_alter_table we have the original table's table->field and
> table->key_info;
> I am thinking of making a copy of these two and making changes such as
> deleting db_row_hash_1 from
> the fields and expanding key_info->key_part of the keys that are long unique.
> This way mysql_prepare_create_table keeps working normally and all the changes
> are done in mysql_prepare_alter_table.
>
>       /*
>>>         Check that the DATE/DATETIME not null field we are going to add
>>> is
>>>         either has a default value or the '0000-00-00' is allowed by the
>>> @@ -7609,6 +7799,25 @@ mysql_prepare_alter_table(THD *thd, TABLE *table,
>>>       }
>>>       if (drop)
>>>       {
>>> +      /* If we drop index of blob unique then we need to drop the
>>> db_row_hash col */
>>> +      if(key_info->flags&HA_UNIQUE_HASH||key_info->flags&HA_INDEX_HASH)
>>> +      {
>>> +        char * name = (char *)key_info->key_part->field->field_name;
>>> +        /*iterate over field_it and remove db_row_hash col  */
>>> +        field_it.rewind();
>>> +        Create_field * temp_field;
>>> +        while ((temp_field=field_it++))
>>> +        {
>>> +          if(!my_strcasecmp(system_charset_info,temp_field->field_
>>> name,name))
>>> +          {
>>> +            field_it.remove();
>>> +            //add flag if it not exists
>>> +            alter_info->flags|=Alter_info::ALTER_DROP_COLUMN;
>>> +            break;
>>> +          }
>>> +        }
>>> +        field_it.rewind();
>>>
>> this should be a little bit more complicated. an index can cover many
>> columns, so the index should be dropped when its last column is removed.
>> I suppose you've already done that, I need to pull and check.
>>
> works
>
>
>> +      }
>>>         if (table->s->tmp_table == NO_TMP_TABLE)
>>>         {
>>>           (void) delete_statistics_for_index(thd, table, key_info,
>>> FALSE);
>>> @@ -7766,6 +7979,48 @@ mysql_prepare_alter_table(THD *thd, TABLE *table,
>>>         my_error(ER_WRONG_NAME_FOR_INDEX, MYF(0), key->name.str);
>>>           goto err;
>>>         }
>>> +      List_iterator<Key_part_spec> key_part_iter(key->columns);
>>> +      Key_part_spec * temp_colms;
>>> +      Create_field * sql_field;
>>> +      field_it.rewind();
>>> +      while((temp_colms=key_part_iter++))
>>> +      {
>>> +        while ((sql_field=field_it++))
>>> +        {
>>> +          if(!my_strcasecmp(system_charset_info,
>>> +                           temp_colms->field_name.str,
>>> +                           sql_field->field_name))
>>> +          {
>>> +            if(sql_field->field_visibility==MEDIUM_HIDDEN||sql_field->field_visibility==FULL_HIDDEN)
>>> +            {
>>> +              /* If we added one column (which clashes with db_row_hash) then this key
>>> +              is different from the one added by the user; to make sure, we check for it */
>>> +              if(new_hash_field)
>>> +              {
>>> +                if(sql_field!=new_hash_field)
>>> +                {
>>> +                  my_error(ER_KEY_COLUMN_DOES_NOT_EXITS, MYF(0), sql_field->field_name);
>>> +                  goto err;
>>> +                }
>>> +              }
>>> +              else /* this is for add index to db_row_hash which must show error */
>>> +              {
>>> +                my_error(ER_KEY_COLUMN_DOES_NOT_EXITS, MYF(0), sql_field->field_name);
>>> +                goto err;
>>> +              }
>>>
>> I don't quite understand these conditions of yours, but the logic should be
>> like this: MEDIUM_HIDDEN can be indexed, FULL_HIDDEN can never be.
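>>
>> i.e. roughly (sketch):
>>
>>   if (sql_field->field_visibility == FULL_HIDDEN)
>>   {
>>     /* DB_ROW_HASH_x and other fully hidden columns can never be
>>        indexed explicitly */
>>     my_error(ER_KEY_COLUMN_DOES_NOT_EXITS, MYF(0), sql_field->field_name);
>>     goto err;
>>   }
>>   /* MEDIUM_HIDDEN falls through and may be indexed */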
>>
>> +            }
>>> +            /* Check whether we are adding an index for a blob or other long length column; then add the column flag */
>>> +            if((sql_field->sql_type==MYSQL_TYPE_BLOB ||
>>> +                sql_field->sql_type==MYSQL_TYPE_MEDIUM_BLOB||
>>> +                sql_field->sql_type==MYSQL_TYPE_LONG_BLOB||
>>> +                sql_field->sql_type==MYSQL_TYPE_LONG_BLOB)
>>> +               &&(temp_colms->length==0))
>>> +              alter_info->flags|=Alter_info::ALTER_ADD_COLUMN;
>>>
>> really? why don't you simply set this flag when you actually create new
>> Create_field and push it into the list?
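>>
>> i.e. set it in the one place where the column is actually added
>> (sketch, using the lines already in your patch):
>>
>>   new_create_list.push_front(new_hash_field, thd->mem_root);
>>   alter_info->flags|= Alter_info::ALTER_ADD_COLUMN;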
>>
> okay
>
> +          }
>>> +        }
>>> +      }
>>> +      field_it.rewind();
>>> +
>>>       }
>>>     }
>>>   diff --git a/sql/sql_update.cc b/sql/sql_update.cc
>>> index 4221025..4955881 100644
>>> --- a/sql/sql_update.cc
>>> +++ b/sql/sql_update.cc
>>> @@ -92,24 +92,24 @@ bool compare_record(const TABLE *table)
>>>       }
>>>       return FALSE;
>>>     }
>>> -
>>> -  /*
>>> +
>>> +  /*
>>>        The storage engine has read all columns, so it's safe to compare
>>> all bits
>>>        including those not in the write_set. This is cheaper than the
>>> field-by-field
>>>        comparison done above.
>>> -  */
>>> +  */
>>>     if (table->s->can_cmp_whole_record)
>>>       return cmp_record(table,record[1]);
>>>     /* Compare null bits */
>>>     if (memcmp(table->null_flags,
>>> -            table->null_flags+table->s->rec_buff_length,
>>> -            table->s->null_bytes_for_compare))
>>> +       table->null_flags+table->s->rec_buff_length,
>>> +       table->s->null_bytes_for_compare))
>>>
>> in this file you've done *only* whitespace changes, some of them are ok,
>> but many result in broken indentation, please fix them all!
>>
> changed
>
>
>>       return TRUE;                              // Diff in NULL value
>>>     /* Compare updated fields */
>>>     for (Field **ptr= table->field ; *ptr ; ptr++)
>>>     {
>>>       if (bitmap_is_set(table->write_set, (*ptr)->field_index) &&
>>> -       (*ptr)->cmp_binary_offset(table->s->rec_buff_length))
>>> +  (*ptr)->cmp_binary_offset(table->s->rec_buff_length))
>>>         return TRUE;
>>>     }
>>>     return FALSE;
>>> diff --git a/sql/sql_yacc.yy b/sql/sql_yacc.yy
>>> index e614692..4872d20 100644
>>> --- a/sql/sql_yacc.yy
>>> +++ b/sql/sql_yacc.yy
>>> @@ -1271,6 +1271,7 @@ bool my_yyoverflow(short **a, YYSTYPE **b, ulong
>>> *yystacksize);
>>>   %token  HEX_NUM
>>>   %token  HEX_STRING
>>>   %token  HIGH_PRIORITY
>>> +%token  HIDDEN
>>>
>> HIDDEN_SYM, please
>>
>>   %token  HOST_SYM
>>>   %token  HOSTS_SYM
>>>   %token  HOUR_MICROSECOND_SYM
>>> @@ -6047,6 +6048,11 @@ vcol_attribute:
>>>               lex->alter_info.flags|= Alter_info::ALTER_ADD_INDEX;
>>>             }
>>>           | COMMENT_SYM TEXT_STRING_sys { Lex->last_field->comment= $2; }
>>> +        | HIDDEN
>>> +          {
>>> +              LEX *lex =Lex;
>>>
>> no need to do that ^^^ you can simply write Lex->last_field->field_visibility=...
>>
> changed, but in sql_yacc.yy this type of code is used everywhere
>
>>
>> +              lex->last_field->field_visibility=USER_DEFINED_HIDDEN;
>>> +          }
>>>           ;
>>>     parse_vcol_expr:
>>> @@ -6469,6 +6475,11 @@ attribute:
>>>               new (thd->mem_root)
>>>                 engine_option_value($1, &Lex->last_field->option_list,
>>> &Lex->option_list_last);
>>>             }
>>> +        | HIDDEN
>>> +          {
>>> +              LEX *lex =Lex;
>>> +              lex->last_field->field_visibility=USER_DEFINED_HIDDEN;
>>> +          }
>>>
>> no need to repeat this twice. Try to remove HIDDEN, COMMENT_SYM, UNIQUE,
>> and UNIQUE KEY rules from attribute and add the vcol_attribute instead.
>>
> done
>
>>           ;
>>>     diff --git a/sql/table.h b/sql/table.h
>>> index eb4076e..982de6f 100644
>>> --- a/sql/table.h
>>> +++ b/sql/table.h
>>> @@ -321,6 +321,24 @@ enum enum_vcol_update_mode
>>>     VCOL_UPDATE_ALL
>>>   };
>>>   +/* Field visibility enums */
>>> +
>>> +enum  field_visible_type{
>>> +       NORMAL=0,
>>>
>> better NOT_HIDDEN
>>
> changed
>
>> +       USER_DEFINED_HIDDEN,
>>> +       MEDIUM_HIDDEN,
>>> +       FULL_HIDDEN
>>> +};
>>> +
>>> +enum key_hash_type{
>>> +       /* normal column */
>>> +       NOT_HASH=0,
>>> +       /* hash for definition index(A,...); in this case no duplicates will be checked */
>>> +       INDEX_HASH,
>>>
>> there's no point in having a hash index for blobs, if it doesn't check
>> for duplicates
>>
> okay reverted
>
>
>>> +       /* hash for definition unique(A,...); in this case duplicates will be checked in ha_write_row and
>>> +                update */
>>> +       UNIQUE_HASH
>>> +};
>>>   class Filesort_info
>>>   {
>>>     /// Buffer for sorting keys.
>>> diff --git a/sql/table.cc b/sql/table.cc
>>> index a90eb2e..31d8f1e 100644
>>> --- a/sql/table.cc
>>> +++ b/sql/table.cc
>>> @@ -785,7 +785,8 @@ static bool create_key_infos(const uchar *strpos,
>>> const uchar *frm_image_end,
>>>       keyinfo->ext_key_parts= keyinfo->user_defined_key_parts;
>>>       keyinfo->ext_key_flags= keyinfo->flags;
>>>       keyinfo->ext_key_part_map= 0;
>>> -    if (share->use_ext_keys && i && !(keyinfo->flags & HA_NOSAME))
>>> +    if (share->use_ext_keys && i && !(keyinfo->flags & HA_NOSAME)
>>> +  )
>>>
>> revert
>>
>>       {
>>>         for (j= 0;
>>>              j < first_key_parts && keyinfo->ext_key_parts <
>>> MAX_REF_PARTS;
>>> @@ -985,7 +990,7 @@ int TABLE_SHARE::init_from_binary_frm_image(THD
>>> *thd, bool write,
>>>           if (length < 256)
>>>             goto err;
>>>         }
>>> -      if (extra2 + length > e2end)
>>> +      if ( extra2 + length > e2end)
>>>
>> revert
>>
>>           goto err;
>>>         switch (type) {
>>>         case EXTRA2_TABLEDEF_VERSION:
>>> @@ -1040,7 +1048,6 @@ int TABLE_SHARE::init_from_binary_frm_image(THD
>>> *thd, bool write,
>>>       if (extra2 != e2end)
>>>         goto err;
>>>     }
>>> -
>>>
>> nope, revert
>>
>>     if (frm_length < FRM_HEADER_SIZE + len ||
>>>         !(pos= uint4korr(frm_image + FRM_HEADER_SIZE + len)))
>>>       goto err;
>>> @@ -1671,7 +1678,8 @@ int TABLE_SHARE::init_from_binary_frm_image(THD
>>> *thd, bool write,
>>>       if (!reg_field)                           // Not supported field
>>> type
>>>         goto err;
>>>   -
>>> +    reg_field->field_visibility=(field_visible_type)*field_properties++;
>>>
>> please, add a feature status variable for it. search, for example, for
>> "feature_fulltext" or "feature_gis". Just as feature_gis, you can
>> increment your feature_hidden_column when a table with user-created hidden
>> columns is opened.
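>>
>> e.g. something like this (a sketch; the exact place and the flag on
>> the share are up to you, have_user_hidden_fields is a made-up name):
>>
>>   /* when the table is opened, next to the feature_gis counter */
>>   if (share->have_user_hidden_fields)      /* hypothetical share flag */
>>     status_var_increment(thd->status_var.feature_hidden_column);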
>>
> done
>
>> better use c++ casts here, like
>>
>>     static_cast<field_visible_type>(*field_properties++)
>>
>> won't look like multiplication :)
>>
>> +    reg_field->is_hash=(bool)*field_properties++;
>>>
>> you don't need a cast at all, but if you want to keep it, use a C++ cast
>>
>>       reg_field->field_index= i;
>>>       reg_field->comment=comment;
>>>       reg_field->vcol_info= vcol_info;
>>> @@ -1985,6 +1993,10 @@ int TABLE_SHARE::init_from_binary_frm_image(THD
>>> *thd, bool write,
>>>         if ((keyinfo->flags & HA_NOSAME) ||
>>>             (ha_option & HA_ANY_INDEX_MAY_BE_UNIQUE))
>>>           set_if_bigger(share->max_unique_length,keyinfo->key_length);
>>> +      if(keyinfo->flags&HA_UNIQUE_HASH||keyinfo->flags&HA_INDEX_HASH)
>>> +      {
>>> +        keyinfo->ext_key_parts=1;
>>>
>> why?
>>
> This is because of queries like
>     create table t1 (a int primary key, b blob unique);
> Here the server will add field a as a key part of the second key_info
> and increase ext_key_parts, and if I do not reset it, queries like
>     insert into t1 values(1,1);
>     insert into t1 values(2,1);
> will wrongly succeed (the duplicate in the unique blob column b is
> not detected).
>
>
>> +      }
>>>       }
>>>       if (primary_key < MAX_KEY &&
>>>         (share->keys_in_use.is_set(primary_key)))
>>> diff --git a/sql/unireg.h b/sql/unireg.h
>>> index 10751b6..211749a 100644
>>> --- a/sql/unireg.h
>>> +++ b/sql/unireg.h
>>> @@ -186,7 +186,8 @@ enum extra2_frm_value_type {
>>>     EXTRA2_TABLEDEF_VERSION=0,
>>>     EXTRA2_DEFAULT_PART_ENGINE=1,
>>>     EXTRA2_GIS=2,
>>> -
>>> +  EXTRA2_FIELD_FLAGS=3,
>>> +  EXTRA2_KEY_HASH_FLAG=4,
>>>
>> make it 129 and 130
>>
>>   #define EXTRA2_ENGINE_IMPORTANT 128
>>>       EXTRA2_ENGINE_TABLEOPTS=128,
>>> diff --git a/sql/unireg.cc b/sql/unireg.cc
>>> index 66959f4..0d764a1 100644
>>> --- a/sql/unireg.cc
>>> +++ b/sql/unireg.cc
>>> @@ -204,7 +224,8 @@ LEX_CUSTRING build_frm_image(THD *thd, const char
>>> *table,
>>>     if (gis_extra2_len)
>>>       extra2_size+= 1 + (gis_extra2_len > 255 ? 3 : 1) + gis_extra2_len;
>>>   -
>>> +  extra2_size+=1 + ( 2*create_fields.elements > 255 ? 3 : 1) +
>>> +       2*create_fields.elements; // first one for type(extra2_field_flags), next 2 for length
>>>
>> for backward compatibility it would make sense to add this
>> EXTRA2_FIELD_FLAGS data *only* if you have hidden fields.
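>>
>> i.e. roughly (sketch; assuming Create_field has the field_visibility
>> and is_hash members from your patch):
>>
>>   bool need_field_flags= false;
>>   List_iterator<Create_field> it(create_fields);
>>   Create_field *f;
>>   while ((f= it++))
>>   {
>>     if (f->field_visibility != NOT_HIDDEN || f->is_hash)
>>     {
>>       need_field_flags= true;
>>       break;
>>     }
>>   }
>>   if (need_field_flags)
>>     extra2_size+= 1 + (2*create_fields.elements > 255 ? 3 : 1) +
>>                   2*create_fields.elements;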
>>
> okay
>
>>
>>     key_buff_length= uint4korr(fileinfo+47);
>>>       frm.length= FRM_HEADER_SIZE;                  // fileinfo;
>>> diff --git a/storage/maria/maria_def.h b/storage/maria/maria_def.h
>>> index 7337b01..3d0860f 100644
>>> --- a/storage/maria/maria_def.h
>>> +++ b/storage/maria/maria_def.h
>>> @@ -895,7 +895,7 @@ struct st_maria_handler
>>>     /* The UNIQUE check is done with a hashed long key */
>>>   -#define MARIA_UNIQUE_HASH_TYPE       HA_KEYTYPE_ULONG_INT
>>> +#define MARIA_UNIQUE_hash_type HA_KEYTYPE_ULONG_INT
>>>
>> why?
>>
> reverted, sorry
>
>   #define maria_unique_store(A,B)    mi_int4store((A),(B))
>>>     extern mysql_mutex_t THR_LOCK_maria;
>>>
>>
>> Regards,
>> Sergei
>> Chief Architect MariaDB
>> and security@xxxxxxxxxxx
>>
>
>
