kicad-developers team mailing list archive

Re: Converting KiCAD to metric units: The approach

 

In a message of Monday, 27 June 2011 10:38:17, Lorenzo Marcantonio wrote:
> On Sun, 26 Jun 2011, Dick Hollenbeck wrote:
> > Wayne has pointed out that MS VC++ has no int64_t stuff and that there is
> > a wx equivalent to assist poor little Microsoft.  Maybe we should simply
> > define int64_t, if MS VC++, so we can use an int standard type rather
> > than the wxInt64 (or whatever it is).    Unnecessary allegiance to wx
> > over pure C++ seems
> 
> Isn't long long int in the new C(++) standard? Or did MS (as usual) decide
> to do it its own way? There should be a cmake incantation to detect it,
> anyway...

It seems so; MS ignores new standards. But anyway, doing this portably is not 
so difficult. Typedefs are great for this.
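
Something like this is what I have in mind (just a sketch; the typedef names
are mine, and I'm assuming older MS VC++ still lacks <stdint.h>):

    /* Portable 64-bit integer typedefs (sketch only). */
    #if defined( _MSC_VER )
    /* assumption: this MSVC has no <stdint.h>, so use the built-in type */
    typedef __int64            kicad_int64;
    typedef unsigned __int64   kicad_uint64;
    #else
    #include <stdint.h>
    typedef int64_t            kicad_int64;
    typedef uint64_t           kicad_uint64;
    #endif

That keeps the rest of the code on one name and drops the wx dependency for
the type itself.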

> 
> As for the 'dynamic range' issue, working in nm with 32 bits gives
> a useful range of >2 m (signed), so for variables it shouldn't be an issue.
> Intermediate computation *could* overflow (for example when scaling
> using the x*a/b instead of the x/b*a form to preserve precision).
> 
> Given kicad's use of coordinates I think this could only occur during
> scaling in display (plotting is done in floating point IIRC) and
> analytics (intersections, distances and so on) that are probably done in
> FP for a number of reasons. Also remember that FP rounds only when: 1)
> exceeding its significant figures (which for the IEEE 754 double is about
> 15 places) and 2) when *dividing*; there is a common misconception here: FP

Result of integer division is exact in terms of integers. :P

> doesn't necessarily round; it rounds only when there is a lack of
> dynamic range when adding (so that some digit falls off the significant
> range) and when dividing (because of some arithmetic stuff saying that
> some numbers can't be expressed in base-2 in aperiodic form).
> Multiplication only rounds when the combined significant figures exceed
> the mantissa size (i.e. an 8-figure by an 8-figure product would round). But
> that's anyway better than a long int.

Which "long int" you're speaking about? 32, 40, 64 bits wide or some another? 
;)
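
And about the intermediate overflow Lorenzo mentions: with 32-bit nanometre
coordinates, x*a can blow past 2^31 long before the division brings the value
back into range. A rough sketch of what I'd do (the helper name is made up),
widening only the intermediate:

    #include <stdint.h>

    /* Scale a 32-bit nm coordinate by a/b without overflowing the
       intermediate product (sketch).  The 64-bit temporary cannot
       overflow for 32-bit inputs, and the final division is exact
       integer division (truncated toward zero). */
    static int32_t ScaleNm( int32_t x, int32_t a, int32_t b )
    {
        int64_t tmp = (int64_t) x * a;
        return (int32_t) ( tmp / b );
    }

Doing x/b*a first avoids the overflow but throws away the remainder, which is
exactly the precision loss he is talking about.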

Anyway, rounding in FP is much harder to control than in fixed point. 
Spontaneous rounding may someday cause DRC to fail for no obvious reason.

The main reason I started working on the metric scale is exact DRC matching.
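
To show what I mean by exact matching, a toy clearance test (names are mine,
not real code): done in 64-bit integers, the comparison below gives the same
answer on every run and every machine, while the same check on doubles can
flip once square roots and scaling start rounding.

    #include <stdint.h>

    /* Clearance check in nanometres, all integer (sketch).  Comparing
       squared distances avoids sqrt() entirely; for 32-bit inputs the
       64-bit sums below cannot overflow. */
    static bool ClearanceViolated( int32_t dx, int32_t dy, int32_t clearance )
    {
        int64_t dist2 = (int64_t) dx * dx + (int64_t) dy * dy;
        int64_t clr2  = (int64_t) clearance * clearance;
        return dist2 < clr2;
    }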

> 
> From a performance view, today long long ints are slower than doubles on
> non-64-bit architectures (given a modern FPU, obviously), and their size
> is the same. Also most (maybe all) of the math routines are double only.

Computing technology is heading towards 64 bits. :) And there are lots of 
fixed-point math routines out there.
> 
> Given that I work all day with 8-/16-bit MCUs (and mostly in
> assembly) I can confirm that tracking fixed-point precision during
> computation is a moderate PITA :D I'd propose to use long ints for

Oh, that's not so problematic for those who are familiar with it. :)

> general use and double for computation where it's needed. Troublesome
> rounding cases should be decided on the spot (as usual when doing
> numerics...)
> 
> PS: in the new standard there are multiple-precision integers (COBOL-like)
> but I fear they would be slow as hell :P

It's up to the compiler. :)
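
For what it's worth, the long-ints-plus-double scheme Lorenzo proposes above
could look roughly like this (sketch, names mine): integers for storage,
double only for the trigonometric step, and one explicit rounding on the way
back.

    #include <stdint.h>
    #include <math.h>

    /* Rotate an integer point by an angle (sketch of the mixed scheme):
       coordinates stay integral, the computation drops to double, and
       rounding happens exactly once, symmetrically, on the way back. */
    static void RotatePoint( int32_t* x, int32_t* y, double angle_rad )
    {
        double fx = (double) *x;
        double fy = (double) *y;
        double c  = cos( angle_rad );
        double s  = sin( angle_rad );

        *x = (int32_t) llround( fx * c - fy * s );
        *y = (int32_t) llround( fx * s + fy * c );
    }

The troublesome part is deciding where that single rounding goes, which is
the "decide on the spot" point.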

-- 

--- KeyFP: DC07 D4C3 BB33 E596 CF5E  CEF9 D4E3 B618 65BA 2B61

