
kicad-developers team mailing list archive

Re: Converting KiCAD to metric units: The approach


On Mon, 27 Jun 2011, Vladimir Uryvaev wrote:

Isn't long long int in the new C(++) standard? Or (as usual) did MS decide
to do it its own way? There should be a cmake incantation to detect it,
anyway...

It seems so; MS ignores new standards. But anyway, making this portable is not so difficult. Typedefs are great for this.

Exactly what I meant with 'cmake incantation'... autoconf
has some define which you can use to do these typedefs, I
don't know about automake.

Result of integer division is exact in terms of integers. :P

Obviously, but then you lose commutativity and
associativity (that's why x*a/b is sometimes better than
x/b*a). Otherwise you must track the remainder. There are
similar techniques for minimizing/tracking rounding error
with floats; look in the nearest numerics book for plenty
of examples (and big headaches)

Which "long int" you're speaking about? 32, 40, 64 bits wide or some another? ;)

Usually long ints are 32 bit, except on 64 bit platforms
where usually are 64 bits :D. I could have simply said int
since on kicad platform it's the same.

Anyway, rounding in FP is much harder to control than in fixed point. Spontaneous rounding may cause DRC to fail someday for no obvious reason.

That's correct. You could even go the Knuth way and
reimplement all the arithmetic in 16.16 fixed point to have
reproducible results on all platforms (in the '70s there
were strange ALUs around, it seems :D)

The main reason I started working on the metric scale is exact DRC matching.


From a performance view, today long long ints are slower than doubles on
non-64-bit architectures (given a modern FPU, obviously), and their size
is the same. Also most (maybe all) of the math routines are double only.

Computing technologies are heading towards 64 bit. :) And there are lots of math routines in fixed point.

*Lots* in the standard C lib is better approximated with
zero... looking around, yes, you can find almost everything
if you really need it. All this only for displaying? No
way. For plotting, maybe. For DRC I would say yes, if I
really cared. As I said, it's a case-by-case decision.

Anyway, there's a bigger problem: I think that an oval pad
tangent to a track is the most difficult DRC case to
handle. Can we do it correctly? Yes, of course, in fixed
point, checking overflows and whatever. Then do the DRC with
the fabricator's DRC... are you sure that it will accept your
results? Maybe it rounds 'the other way' and rejects an
analytically correct layout. The manufacturer is wrong, but
anyway it doesn't build your board, because its DRC is flawed
and you have *no* control over it... (just think about the
PNG support in the old IE to get a better idea :D)

Oh, that's not so problematic for those who are familiar with it. :)

No, actually it's easier, because you *have* to track where
overflow can happen and you can actually compute the
resulting precision. Property damage from unchecked
arithmetic is a big no in embedded development :D (space
probes have crashed for similar reasons in the past...)

OTOH you have to really think about your numeric domains,
and in the end you'll spend about four times more on the
implementation. Code quality is usually better than a
straight implementation's, at least...

PS: in the new standard there are multiple-precision integers (COBOL-like),
but I fear they would be slow as hell :P

It's up to the compiler. :)

gcc uses gmp for integers and mpfr for floating point. No idea whether these are fast or slow. But for positioning tracks on a Gerber plot I think vanilla ints are enough :P

BTW decimils are about the limit for conventional phototools, with laser direct imaging maybe more. And anyway etching tolerances are wayyy bigger than that...

I think that when we can exactly represent in decimal 1/10 of a mil (a decimil) we have already exceeded most fabrication requirements. So, if I'm not forgetting some zero, we would have:

1/10 mil = 1/10000" = 25.4/10000 mm = 25.4/10 µm = 2540 nm

Using signed 32 bit (~2000000000 on each side of zero)
you could handle about 4 m of linear space. Can you
fabricate a 4 m PCB? I don't think so, or at least it's well
beyond KiCad's target :P (I've seen 120 cm boards, and keeping
them in shape was a big issue since the epoxy collapsed
under its own weight...)

Yep, nanometers are small enough. You could use tens of nm
to gain an order of magnitude on the maximum size (40 m of
space, or about 3.3 bits of headroom for computations
without overflowing). OTOH you would have a slightly less legible file format.

--
Lorenzo Marcantonio
Logos Srl
