Re: 5365db70cb8: Improve update handler (long unique keys on blobs)
Hi!
<cut>
> > Improve update handler (long unique keys on blobs)
<cut>
>
> you may want to remove this line from the commit comment
Fixed
> >
> > diff --git a/mysql-test/main/long_unique.result b/mysql-test/main/long_unique.result
> > index a4955b3e7b5..fee8e7721bf 100644
> > --- a/mysql-test/main/long_unique.result
> > +++ b/mysql-test/main/long_unique.result
> > @@ -1477,4 +1477,28 @@ id select_type table type possible_keys key key_len ref rows Extra
> > SELECT t2.b FROM t1 JOIN t2 ON t1.d = t2.f WHERE t2.pk >= 20;
> > b
> > drop table t1,t2;
> > +#
> > +# MDEV-21470 MyISAM start_bulk_insert doesn't work with long unique
> > +#
> > +CREATE TABLE t1 (a INT, b BLOB) ENGINE=MyISAM;
> > +INSERT INTO t1 VALUES (1,'foo'),(2,'bar');
> > +CREATE TABLE t2 (c BIT, d BLOB, UNIQUE(d)) ENGINE=MyISAM;
> > +INSERT INTO t2 SELECT * FROM t1;
> > +Warnings:
> > +Warning 1264 Out of range value for column 'c' at row 2
> > +DROP TABLE t1, t2;
> > +#
> > +# MDEV-19338 Using AUTO_INCREMENT with long unique
> > +#
> > +CREATE TABLE t1 (pk INT, a TEXT NOT NULL DEFAULT '', PRIMARY KEY (pk), b INT AUTO_INCREMENT, UNIQUE(b), UNIQUE (a,b)) ENGINE=myisam;
> > +ERROR HY000: Function or expression 'AUTO_INCREMENT' cannot be used in the UNIQUE clause of `a`
>
> Quite confusing error message :(
> I understand where it comes from, but for a user, I'm afraid, it just won't
> make any sense.
>
> A reasonable message could be, for example,
>
> AUTO_INCREMENT column %`s cannot be used in the UNIQUE index %`s
I was trying to avoid creating a new error message just for this
very unlikely case.
But if it makes you happy, I can add a new one.
>
> > +#
> > +# MDEV-21819 Assertion `inited == NONE || update_handler != this'
> > +# failed in handler::ha_write_row
> > +#
> > +CREATE OR REPLACE TABLE t1 (a INT, b BLOB, s DATE, e DATE, PERIOD FOR app(s,e), UNIQUE(b)) ENGINE=MyISAM PARTITION BY HASH(a) PARTITIONS 2;
> > +INSERT INTO t1 VALUES (1,'foo','2022-01-01', '2025-01-01');
> > +DELETE FROM t1 FOR PORTION OF app FROM '2023-01-01' TO '2024-01-01';
> > +ERROR 23000: Duplicate entry 'foo' for key 'b'
> > +DROP TABLE t1;
> > set @@GLOBAL.max_allowed_packet= @allowed_packet;
> > diff --git a/mysql-test/suite/versioning/r/long_unique.result b/mysql-test/suite/versioning/r/long_unique.result
> > new file mode 100644
> > index 00000000000..da07bc66e22
> > --- /dev/null
> > +++ b/mysql-test/suite/versioning/r/long_unique.result
> > @@ -0,0 +1,8 @@
> > +#
> > +# Assertion `inited == NONE || update_handler != this' failed in
> > +# handler::ha_write_row
> > +#
> > +CREATE TABLE t1 (f VARCHAR(4096), s DATE, e DATE, PERIOD FOR app(s,e), UNIQUE(f)) ENGINE=MyISAM;
> > +INSERT INTO t1 VALUES ('foo', '2023-08-30', '2025-07-09'),('bar', '2021-01-01', '2021-12-31');
> > +DELETE FROM t1 FOR PORTION OF app FROM '2023-08-29' TO '2025-07-01';
> > +DROP TABLE t1;
>
> you've had almost the same test (MDEV-21819) in main/long_unique.test
>
> it's a bit strange to have two very similar tests in different suites
Agree. However, I didn't want to do too much cleanup work
(I have already done way more than my share :( ).
However, I noticed that there were different code paths for versioning when
it comes to unique handling, so I had to add the test in several places.
> I suggest to move MDEV-21819 test from main/long_unique.test to this file.
I don't know what you mean by 'this file'. The versioning/long_unique file
only contains the test relevant for versioning; there is no reason to have
any more tests in there.
> and then move this whole file from versioning suite to the period suite.
As this particular test crashes with versioning alone, it should be there,
not in period.
Feel free to do the above cleanup later if you want. However, I think
that the main long unique testing should be in the main test suite, not in
period.
> > diff --git a/sql/ha_partition.cc b/sql/ha_partition.cc
> > index 73191907aac..c7e61864366 100644
> > --- a/sql/ha_partition.cc
> > +++ b/sql/ha_partition.cc
> > @@ -4340,6 +4339,10 @@ int ha_partition::write_row(const uchar * buf)
> > }
> > m_last_part= part_id;
> > DBUG_PRINT("info", ("Insert in partition %u", part_id));
> > + if (table->s->long_unique_table &&
> > + m_file[part_id]->update_handler == m_file[part_id] && inited == RND)
> > + m_file[part_id]->prepare_for_insert(0);
>
> This looks quite incomprehensible. If the table has a long unique key
> and the update_handler is the handler itself you _prepare for insert_ ???
This is basically the original code we had in handler::write_row.
We have to have it in this place because we don't want to call
handler->prepare_for_insert() for all partitions, only for those
that are actually used.
> if it's an insert it should've been prepared earlier, like, normally for
> an insert, and it doesn't have to do anything with long uniques.
See above. The above is to avoid creating new handlers when we don't need
them! If you have 100 partitions, you really don't want to create 100 cloned
table handlers.
> What you actually do here (as I've found out looking further in the patch)
> you prepare update_handler. That is "prepare_for_insert" is VERY confusing
> here, it sounds like it prepares for a normal insert and should be
> called at the beginning of mysql_insert(). And it's nothing like that.
>
> It would remove half of the questions if you'd call it
> prepare_update_handler() or, better, prepare_auxiliary_handler() (because
> it'll be used not only for long uniques) or something like that.
The idea with prepare_for_insert() is that it gives us one place for
special handler code that we want to run once for an insert. We didn't
have that before, and I can see many uses for it.
In any case, I will add a comment to the above partition code to clarify
what we are doing:
"We have to call prepare_for_insert() if we have an update handler
in the underlying table (to clone the handler). This is because for
INSERTs, prepare_for_insert() is only called for the main table,
not for all partitions. This is done to avoid the huge overhead of
cloning a possibly unneeded handler when there are many partitions."
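To make the intent obvious in the code as well, the guarded call then reads
roughly like this (a sketch based on the quoted diff, with the reasoning
added as comments):

    /*
      Clone the update handler lazily, and only for the partition that
      actually receives the row, instead of for every partition up front.
    */
    if (table->s->long_unique_table &&
        m_file[part_id]->update_handler == m_file[part_id] && // not cloned yet
        inited == RND)                                        // insert during a scan
      m_file[part_id]->prepare_for_insert(0);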
> Still, some questions are left:
>
> 1. why partitioning needs special code for that?
The comment explains it all
> 2. when a handler itself is its update_handler AND inited==RND?
> the handler itself can be the update_handler on INSERT, but then inited is NONE
> Is it UPDATE with system versioning?
No, this is needed for normal updates, just like in the old code.
> > +
> > start_part_bulk_insert(thd, part_id);
> >
> > tmp_disable_binlog(thd); /* Do not replicate the low-level changes. */
> > diff --git a/sql/handler.cc b/sql/handler.cc
> > index 0700b1571e5..1e3f987b4e5 100644
> > --- a/sql/handler.cc
> > +++ b/sql/handler.cc
> > @@ -2648,6 +2649,39 @@ handler *handler::clone(const char *name, MEM_ROOT *mem_root)
> > return NULL;
> > }
> >
> > +/**
> > + @brief clone of current handler.
>
> you generally don't have to write @brief in these cases. Just
I usually don't write @brief; I must have copied the code from somewhere.
It turns out the @brief came from the original code that I copied from
table.cc and I didn't remove it.
I have now removed it. Someone else should fix all the other places
where @brief is used wrongly!
<cut>
> > +bool handler::clone_handler_for_update()
> > +{
> > + handler *tmp;
> > + DBUG_ASSERT(table->s->long_unique_table);
> > +
> > + if (update_handler != this)
> > + return 0; // Already done
> > + if (!(tmp= clone(table->s->normalized_path.str, table->in_use->mem_root)))
> > + return 1;
> > + update_handler= tmp;
> > + update_handler->ha_external_lock(table->in_use, F_RDLCK);
>
> Looks strange. Why F_RDLCK if it's an *update* handler?
> This definitely needs a comment, why you're using read lock for updates.
Because the update handler is ONLY used to check if there is an existing
row. We never use it to do any changes.
Note that this is old code. Didn't you catch this in your original review
before it was pushed?
Anyway, I have added a comment.
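Roughly along these lines (a sketch of the intent, not the exact wording
I committed):

    /*
      The cloned update_handler is only used to check whether a row with
      the same long unique hash value already exists; it never modifies
      any data, so a read lock is sufficient.
    */
    update_handler->ha_external_lock(table->in_use, F_RDLCK);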
<cut>
> > @@ -6481,6 +6515,8 @@ int handler::ha_reset()
> > DBUG_ASSERT(inited == NONE);
> > /* reset the bitmaps to point to defaults */
> > table->default_column_bitmaps();
> > + if (update_handler != this)
> > + delete_update_handler();
>
> you already have an if() inside the delete_update_handler().
> Either remove it from here or from there.
We don't check the handler before all calls.
I just added the check to ha_reset(), as this is a very common call,
and I wanted to avoid an extra function call there.
Keeping the current code.
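For reference, the pattern under discussion is roughly this (a sketch,
not the exact code):

    void handler::delete_update_handler()
    {
      if (update_handler != this)                 // only if a clone exists
      {
        update_handler->ha_external_lock(table->in_use, F_UNLCK);
        update_handler->ha_close();
        delete update_handler;
      }
      update_handler= this;                       // back to "no clone" state
    }

ha_reset() does the check itself to skip the call on the common path;
the check inside delete_update_handler() stays for callers that don't.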
> > +/**
> > + Do all initialization needed for insert
> > +
> > + @param force_update_handler Set to TRUE if we should always create an
> > + update handler. Needed if we don't know if we
> > + are going to do inserted while a scan is in
>
> "going to do inserts"
Fixed
<cut>
> > +bool handler::has_long_unique()
> > +{
> > + return table->s->long_unique_table;
> > +}
>
> I don't see why you need it for Aria.
> It can always check table->s->long_unique_table without an extra call
In this case I thought that an abstraction was much better (and I still
think so), because many other handlers may want to do the same check.
<cut>
> > +++ b/sql/sql_delete.cc
> > @@ -752,6 +752,10 @@ bool mysql_delete(THD *thd, TABLE_LIST *table_list, COND *conds,
> > && !table->versioned()
> > && table->file->has_transactions();
> >
> > + if (table->versioned(VERS_TIMESTAMP) ||
> > + (table_list->has_period() && !portion_of_time_through_update))
> > + table->file->prepare_for_insert(1);
>
> Ok, now I think that your idea of the default update_handler=this
> is rather fragile. If the default is NULL, then we can put in many places
>
> DBUG_ASSERT(update_handler != NULL);
>
> to make sure it was initialized when needed. Now we cannot do it.
The current approach is faster, as we can always rely on the value.
The code will work in most cases even if prepare_for_insert()
is not called.
For an assert, one can always do (through an inline handler function)
DBUG_ASSERT(update_handler != table->file)
Exactly the same thing.
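Such a helper could look like this (hypothetical name, not in the patch):

    /* Hypothetical inline helper: assert that the update handler has been
       cloned (i.e. prepare_for_insert() was called) before it is used. */
    inline void handler::assert_update_handler_prepared() const
    {
      DBUG_ASSERT(update_handler != table->file);
    }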
> And there are many places in the code and non-trivial conditions when
> the update_handler has to be initialized and these places and conditions
> will change in the future. An assert would be helpful here.
Nothing stops us from having an assert with the current code.
The only difference with my code is that we don't get a crash
if we forget an assert and wrongly call the update_handler.
> > +
> > THD_STAGE_INFO(thd, stage_updating);
> > while (likely(!(error=info.read_record())) && likely(!thd->killed) &&
> > likely(!thd->is_error()))
> > diff --git a/sql/table.h b/sql/table.h
> > index 6ce92ee048e..1a0b9514d23 100644
> > --- a/sql/table.h
> > +++ b/sql/table.h
> > @@ -1182,6 +1181,7 @@ struct TABLE
> > /* Map of keys dependent on some constraint */
> > key_map constraint_dependent_keys;
> > KEY *key_info; /* data of keys in database */
> > + KEY_PART_INFO *base_key_part; /* Where key parts are stored */
>
>
> again, this is *only* needed for INSERT DELAYED that very few people
> ever use. It should be in some INSERT DELAYED specific data structure,
> not in the common TABLE.
Feel free to change this after the push.
The real problem here is that the long unique hash code changes the
original key parts, which it should not do. We don't do that for other
virtual fields, and it would be better if the long unique code created
the keys correctly from the start. I would rather fix it that way so
that we can remove the above code.
> By the way, why cannot you get the list of keyparts
> from table->key_info->key_part ?
Because that table probably doesn't exist anymore
when the INSERT DELAYED table is used.
That was the bug in the original code that I had to fix.
> > --- a/storage/myisam/ha_myisam.cc
> > +++ b/storage/myisam/ha_myisam.cc
> > @@ -1740,7 +1740,7 @@ void ha_myisam::start_bulk_insert(ha_rows rows, uint flags)
> > enable_indexes() failed, thus it's essential that indexes are
> > disabled ONLY for an empty table.
> > */
> > - if (file->state->records == 0 && can_enable_indexes &&
> > + if (file->state->records == 0 && can_enable_indexes && !has_long_unique() &&
> > (!rows || rows >= MI_MIN_ROWS_TO_DISABLE_INDEXES))
>
> why do you disable bulk inserts for long uniques?
Because they are incompatible.
Didn't you read my excellent commit message?
" Disable write cache in MyISAM and Aria when using update_handler as
if cache is used, the row will not be inserted until end of statement
and update_handler would not find conflicting rows"
Does that explain things?
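In code terms, the change just keeps the indexes (and thus the duplicate
check) active whenever the table has a long unique key; a sketch of the
condition with the reasoning spelled out:

    /*
      Never disable the indexes for bulk insert if the table has a long
      unique key: the update handler must be able to see every row
      inserted earlier in the same statement to detect duplicates.
    */
    if (file->state->records == 0 && can_enable_indexes &&
        !has_long_unique() &&
        (!rows || rows >= MI_MIN_ROWS_TO_DISABLE_INDEXES))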
> > {
> > if (file->open_flag & HA_OPEN_INTERNAL_TABLE)
> > diff --git a/sql/table.cc b/sql/table.cc
> > index 718efa5767c..18a942964c3 100644
> > --- a/sql/table.cc
> > +++ b/sql/table.cc
> > @@ -1241,31 +1241,46 @@ bool parse_vcol_defs(THD *thd, MEM_ROOT *mem_root, TABLE *table,
> > Item *list_item;
> > KEY *key= 0;
> > uint key_index, parts= 0;
> > + KEY_PART_INFO *key_part= table->base_key_part;
> > +
> > for (key_index= 0; key_index < table->s->keys; key_index++)
> > {
> > - key=table->key_info + key_index;
> > - parts= key->user_defined_key_parts;
> > - if (key->key_part[parts].fieldnr == field->field_index + 1)
> > - break;
> > + /*
> > + We have to use key from share as this function may have changed
> > + table->key_info if it was ever invoked before. This could happen
> > + in case of INSERT DELAYED.
> > + */
> > + key= table->s->key_info + key_index;
> > + if (key->algorithm == HA_KEY_ALG_LONG_HASH)
> > + {
> > + parts= key->user_defined_key_parts;
> > + if (key_part[parts].fieldnr == field->field_index + 1)
> > + break;
> > + key_part++;
> > + }
>
> Here, again, you've basically duplicated the logic of how long uniques
> are represented in the keys and keyparts. All for the sake of INSERT DELAYED.
No, this is because the long unique hash code creates the wrong key
structure from the start and changes it after open.
> I'd really prefer INSERT DELAYED problems to be solved inside
> the Delayed_insert class. And it can use setup_keyinfo_hash()
> and re_setup_keyinfo_hash() functions to avoid duplicating the logic.
Then please do it that way.
But I would really prefer that someone fix the unique hash code
to not change anything in the KEY or KEY_PART structures.
This is how keys on virtual fields work, and there is no reason
for the unique hash to do it differently.
I have just fixed bugs in the unique key handling to get my code to
work; I don't intend to fix everything in it.
The biggest problem is not the above code, but the fact that
long unique keys don't work at all with InnoDB
(they will cause crashes in recovery).
But that is of course another issue.
I have attached the diff of changes from the review.
Regards,
Monty