dolfin team mailing list archive
Message #26158
Re: Integer type transition
On Mon, Nov 19, 2012 at 7:25 AM, Anders Logg <logg@xxxxxxxxx> wrote:
> On Sat, Nov 17, 2012 at 10:36:10AM +0000, Garth N. Wells wrote:
>> On Sat, Nov 17, 2012 at 2:47 AM, Anders Logg <logg@xxxxxxxxx> wrote:
>> > On Thu, Nov 15, 2012 at 09:09:52PM +0000, Garth N. Wells wrote:
>> >> The first stage of the transition to a new integer type is complete.
>> >
>> > Great!
>> >
>> >> Most of DOLFIN now uses std::size_t, and a new typedef 'DolfinIndex'
>> >> has been introduced for compatibility with linear algebra backends.
>> >> Please feel free to suggest a better name for this typedef. Presently,
>> >> DolfinIndex is just a typedef for int. This typedef should rarely be
>> >> visible to a user, and it should have a name that discourages its
>> >> unnecessary use.
>> >
>> > I'm a bit confused. We now have
>> >
>> > PETScMatrix::add(const double* block, std::size_t m, const DolfinIndex* rows,
>> > std::size_t n, const DolfinIndex* cols);
>> >
>> > Here, the only thing being size_t is the dimension of the local
>> > stiffness matrix, which is never big (so the 4 bytes of uint should
>> > be sufficient).
>>
>> We could use unsigned int or DolfinIndex for the block size. It
>> doesn't really make any difference. I would like to change these
>> interfaces anyway and use std::vector<DolfinIndex> and/or
>> boost::multi_array.
>
> I'm also in favor of changing those interfaces, but I think it should
> wait until after 1.1.0.
>
>> > On the other hand the rows and cols pointers contain
>> > global indices and they now use int which have an even smaller range
>> > than uint. I would expect to have
>> >
>> > PETScMatrix::add(const double* block, uint m, size_t* rows,
>> > uint n, size_t* cols);
>> >
>> > or size_t all over.
>> >
>>
>> That doesn't work. The point of DolfinIndex is that it must *match*
>> the index type of the linear algebra backend, e.g. it can be a typedef
>> for PetscInt.
>
> Yes, that's what makes me confused. We want to move from uint to
> another int type to be able to solve larger problems (and for
> compatibility with LA libraries). But we can't solve bigger problems
> if the int type does not match the LA type.
>
Then configure the backend to support 64-bit integers.
> So why do we need size_t? Shouldn't we rather use the LA int type for
> everything that needs to be big (and dolfin::uint (typedef for
> unsigned int as before) for all other integers?
>
Because Mesh, etc. should not depend on the backend int type.
>> With the change I just pushed, if you make DolfinIndex a typedef for
>> 'long long int', Trilinos works with 64 bit integers. I did some basic
>> testing and it worked fine (with Krylov solvers). You need Trilinos 11
>> for this.
>
> Does Trilinos have a typedef for this like PETSc?
>
No. It overloads functions.
>> > And if we want to use PETSc for large index ranges, then we need to
>> > use PETScInt which may or may not be compatible with size_t.
>> >
>>
>> Which a typedef can handle. Block sizes are passed by value, which
>> takes care of any casting.
>
> Sure, but it's confusing to have three index types: the LA type (now
> dolfin::DolfinIndex), size_t and dolfin::uint.
>
> My suggestion would be:
>
> - typedef dolfin::Index (not dolfin::DolfinIndex) used for integers
> that need to be large
>
> This can be a typedef for either PetscInt or whatever Epetra uses.
>
We shouldn't couple all of DOLFIN to the backend typedef.
> - typedef dolfin::uint for unsigned int, as now, for all other
> integers that don't need to be large
>
> - don't use size_t
>
I think we should use std::size_t because it signals intent. It also
maps naturally onto the STL and numpy (numpy.uintp). I would like to
remove dolfin::uint and use 'unsigned int' when we want an 'unsigned
int'. The integer type for backend compatibility should be localised as
much as possible. Otherwise we will get into the situation we have
now: a typedef for uint that cannot be changed without breakages all
over.
Garth
> --
> Anders