kicad-developers team mailing list archive
Message #10130
Re: layer based constraints
On 04/27/2013 12:10 PM, Lorenzo Marcantonio wrote:
> On Sat, Apr 27, 2013 at 11:11:54AM -0500, Dick Hollenbeck wrote:
>> But how important is that really? Will it result in a reduction in pay?
>>
>> The "common" user has been a hobbyist, and is becoming a corporate employee as the
>> software gets better.
>
> OK, but even most corporate employees probably have zero understanding
> of a boolean expression. Unless they are digital designers :D woe to
> them in that case
>
>> So while I agree, I think the importance of this concern is minimal. Actually zero.
>> If we like it, that should be good enough. We are corporate employees, and are paying to
>> make this software usable for our needs. You said the same thing yourself.
>
> Agree on that. But if it's possible to make it usable for
> non-programmers (without too much work), then so much the better.
> Otherwise I'd be looking for components using grep instead of the
> component browser, for example (the sad thing is that grepping is often
> more effective...)
>
>> Great idea, do it. It seems we are all still open to the best ideas.
>
> In a previous message I wrote about how CAM350/gerbtool do DRC. The CAM
> products are 60% DRC beasts, so they know how to do their job :D
>
> Eagle has a simple netclass-to-netclass clearance. I haven't used any
> of the other 'big ones' (Altium, Allegro, PADS or Expedition), so I don't
> really know how they handle it. Many entry-level products don't even
> have a notion of netclasses, so we are already way above them :D
>
>> Is there a way we can provide a python expression, but do most of the work in C++, by
>> implementing the functions called from the expression in C++, following the python
>> C API?
>>
>> This is a slight variation on the original idea, which could be a way to turbocharge
>> it, maybe to cut out a few lookups at runtime.
>>
>> Obviously the python expressions can all be "pre-compiled", but I don't think they can
>> be "pre-evaluated" when "both" is in play, since that is a dynamic matchmaker.
>
> Caching (memoizing, actually) strategies for the python calls would
> depend on the data available to the call. Assuming the function is pure
> (i.e. depending only on the passed parameters) you could cull *a lot* of
> calls.
>
> Example (in C, since I don't know Python well enough): a function returning
> the clearance (one of the contended values) between two netclasses on
> the current layer; with only the minimum data we'd have:
>
> int dynamic_clearance(const char *netclass1, const char *netclass2,
>                       const char *layer);
>
> *iff* the implementation is pure, as per the hypothesis, there are no
> external dependencies and no side effects, so *every* call with a given
> tuple (triplet) will always return the same value. Put it in a map
> (memoize it) and you have just resolved the clearance between these two
> classes on that layer with just one python call. AFAIK memoizing is
> a standard idiom in python, too.
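A sketch of that memoization in C++ (using C++11 std::tuple; the function names are
hypothetical, and dynamic_clearance() stands for the call into the user's Python
expression):

    #include <map>
    #include <string>
    #include <tuple>

    // The expensive call that ends up evaluating the user's Python expression;
    // assumed to be pure.
    int dynamic_clearance( const std::string& netclass1,
                           const std::string& netclass2,
                           const std::string& layer );

    int cached_clearance( const std::string& netclass1,
                          const std::string& netclass2,
                          const std::string& layer )
    {
        typedef std::tuple<std::string, std::string, std::string> KEY;
        static std::map<KEY, int> cache;

        KEY key( netclass1, netclass2, layer );

        std::map<KEY, int>::const_iterator it = cache.find( key );

        if( it != cache.end() )
            return it->second;          // cache hit: no Python call at all

        int clearance = dynamic_clearance( netclass1, netclass2, layer );
        cache[key] = clearance;         // memoize for every later pair of items
        return clearance;
    }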
>
> More flexibility reduces the optimization possibilities. Let's say we
> pass the netnames too (don't ask why; maybe the user wants more
> granularity than netclasses, no idea):
>
> int dynamic_clearance(const char *netclass1, const char *netclass2,
>                       const char *netname1, const char *netname2,
>                       const char *layer);
>
> Here memoizing is a lot less effective... you only cache the clearance
> between two nets (the netclasses are actually redundant and passed as
> a convenience). However you would still optimize the common case of buses
> wandering around the board. Still a good improvement, probably.
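In code the only change from the sketch above is the cache key, which now also has to
carry the net names:

    // Wider key: a cache hit now requires the exact same pair of nets to come
    // back, which mostly happens for buses.
    typedef std::tuple<std::string, std::string,    // netclass1, netclass2
                       std::string, std::string,    // netname1, netname2
                       std::string> WIDE_KEY;       // layer
    static std::map<WIDE_KEY, int> wide_cache;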
>
> At the other end, if the decision function has the whole segment
> information at hand:
>
> int dynamic_clearance(const char *netclass1, const char *netclass2,
>                       const char *netname1, const char *netname2,
>                       const char *layer, int stuff, int other_stuff...)
>
> Then we gain *no* performance, since (ideally) every segment is tested
> against every other segment just once. In fact the caching overhead would
> probably make it slower than not caching at all.
>
> That said, maybe a python call is just so fast that the big time is
> spent computing distances instead. Or the bottleneck is in the string
> comparison. If it were Lua instead of Python I'd say to call the
> function directly, without any doubt (Lua strings are interned, so string
> equality is pointer equality, for example, and the bytecode interpreter
> is... well, fast).
>
> As you said, instrumented profiling is needed, at least to compare the
> cost of a python call to the rest of the work in the DRC routines.
Caching is an interesting idea to keep in our toolbox. I think there is great potential
in developing some C++ wrapper classes to make using python expressions fairly painless.
It is an area I hope to contribute to when I have some more time.
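One possible shape for such a wrapper, continuing with the Python 2 C API assumption
above; the class name, method names and error handling strategy are purely illustrative:

    #include <Python.h>
    #include <stdexcept>
    #include <string>

    // Owns a pre-compiled rule expression and hides the refcounting.
    class PYTHON_EXPRESSION
    {
    public:
        PYTHON_EXPRESSION( const std::string& aExpr )
        {
            // Py_eval_input: compile a single expression, not statements.
            m_code = Py_CompileString( aExpr.c_str(), "<drc-rule>", Py_eval_input );

            if( !m_code )   // (a real implementation would also report the Python error)
                throw std::runtime_error( "cannot compile DRC expression: " + aExpr );
        }

        ~PYTHON_EXPRESSION()
        {
            Py_XDECREF( m_code );
        }

        // Evaluate with caller-supplied globals/locals dicts (e.g. holding the helper
        // module and the per-match data); the caller keeps ownership of both dicts.
        long Evaluate( PyObject* aGlobals, PyObject* aLocals ) const
        {
            // The Python 2 signature takes a PyCodeObject*, hence the cast.
            PyObject* result = PyEval_EvalCode( (PyCodeObject*) m_code, aGlobals, aLocals );
            long      value  = result ? PyInt_AsLong( result ) : -1;

            Py_XDECREF( result );
            return value;
        }

    private:
        PyObject* m_code;   // compiled once, evaluated many times
    };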
Each year we host a large Memorial Day party here at the ranch, and often have a pig roast
on a spit. The preparations for that can get overwhelming and start weeks in advance. That
has reached full scale as of this weekend, so time is more scarce than ever.
Dick
References
- layer based constraints (From: Simon Huwyler, 2013-04-24)
- Re: layer based constraints (From: Dimitris Lampridis, 2013-04-26)
- Re: layer based constraints (From: Dick Hollenbeck, 2013-04-26)
- Re: layer based constraints (From: Simon Huwyler, 2013-04-26)
- Re: layer based constraints (From: Simon Huwyler, 2013-04-26)
- Re: layer based constraints (From: Tomasz Wlostowski, 2013-04-26)
- Re: layer based constraints (From: Dick Hollenbeck, 2013-04-27)
- Re: layer based constraints (From: Lorenzo Marcantonio, 2013-04-27)
- Re: layer based constraints (From: Dick Hollenbeck, 2013-04-27)
- Re: layer based constraints (From: Lorenzo Marcantonio, 2013-04-27)