
Re: 74b2eba1ca6: MDEV-15458 Segfault in heap_scan() upon UPDATE after ADD SYSTEM VERSIONING

 

Hi, Aleksey!

It seems that everyone prefers your approach :)

I also think it's easier to use and more lightweight than the clone.

A clone is justified when one wants to do two scans in parallel
(like, read a few rows from one handler, read a few rows from the clone,
then a few more from the first handler, and so on). That is not the case
here, so a clone looks like overkill.

The only problem here is that not all engines might implement
HA_EXTRA_REMEMBER_POS. But it looks like it's generally easy to
implement.
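Something like this in the engine's extra() should be enough (again,
just a sketch with made-up member names, not real heap code):

  int ha_example::extra(enum ha_extra_function operation)
  {
    switch (operation) {
    case HA_EXTRA_REMEMBER_POS:
      /* save the current scan position before the server writes a row */
      saved_pos= current_pos;
      pos_saved= true;
      break;
    case HA_EXTRA_RESTORE_POS:
      /* put the scan back where it was, so the next rnd_next()/index_next()
         continues from the remembered row */
      if (pos_saved)
      {
        current_pos= saved_pos;
        pos_saved= false;
      }
      break;
    default:
      break;
    }
    return 0;
  }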

On Mar 27, Aleksey Midenkov wrote:
> > >
> > > -  return table->file->ha_write_row(table->record[0]);
> > > +  int res;
> > > +  if ((res= table->file->extra(HA_EXTRA_REMEMBER_POS)))
> > > +    return res;
> > > +  if ((res= table->file->ha_write_row(table->record[0])))
> > > +    return res;
> > > +  return table->file->extra(HA_EXTRA_RESTORE_POS);
> > >  }
> >
> > Frankly, I don't like it. I mean, this particular fix is fine, but
> > here's the problem:
> >
> > We now have (at least) three features where the server might need to do
> > something (insert/update/search) during the index/rnd scan.
> >
> > And three different solutions: you use HA_EXTRA_REMEMBER_POS in system
> > versioning, Nikita simply sets delete_while_scanning=false in
> > application time period tables, and Sachin uses handler::clone() to
> > create a second handler that can be used to check for hash collisions
> > without disrupting the scan on the primary handler.
> >
> > I think it's getting somewhat out of hand. Can we please reduce the
> > number of approaches to fixing the same issue?
> >
> 
> What solution do you propose to use?

Regards,
Sergei
Chief Architect MariaDB
and security@xxxxxxxxxxx

