
kicad-developers team mailing list archive

Re: Kicad's way of drawing filled zones



I've been away for the weekend; here are my replies to all of the questions.

> As far as I am aware, all commercial tools in the space have more
> advanced / modern system requirements than KiCad.
> The integrated Intel GPUs that are old enough to not have OpenGL 3.0
> are no longer supported by Intel.

Most (if not all) commercial EDA tools that use GPUs run only under
Windows, which - paradoxically - gives their authors more freedom in
choosing which GPU features to use. While OpenGL 2.1 (with only
vertex/pixel shaders) is more or less supported on all Linux distros,
GL 3.0 (with Geometry Shader support) is at least troublesome. I'm not
sure whether the speed/memory improvements provided by using GS would
outweigh the additional support effort (a fallback to 2.1 on
unsupported systems?).

> I have a *strong preference* for solution 3.

JP, you convinced me. This will solve the drawing issue without
resorting to heavy weapons (like GL 3.0). The only thing I'm not sure
about is how to indicate that a zone has a 0-width outline (while still
keeping the minimum width parameter in the file). Should we add a zone
property field to the file format?
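As a sketch of what such a property could look like on the C++ side - the names `ZONE_OUTLINE_MODE`, `FormatOutlineMode()` and `ParseOutlineMode()` are hypothetical, not existing KiCad API; the important point is that old files without the token keep their current meaning:

```cpp
#include <string>

// Hypothetical property describing how a zone's outline contributes
// to the filled copper (names illustrative, not KiCad API).
enum class ZONE_OUTLINE_MODE
{
    THICK,     // legacy behaviour: outline stroked with min_thickness
    ZERO_WIDTH // new behaviour: outline has no width of its own
};

// Serialize the property for a hypothetical file-format token.
inline std::string FormatOutlineMode( ZONE_OUTLINE_MODE aMode )
{
    return aMode == ZONE_OUTLINE_MODE::THICK ? "thick" : "zero_width";
}

// Parse it back, defaulting to the legacy mode so that old files
// without the token are read exactly as before.
inline ZONE_OUTLINE_MODE ParseOutlineMode( const std::string& aToken )
{
    return aToken == "zero_width" ? ZONE_OUTLINE_MODE::ZERO_WIDTH
                                  : ZONE_OUTLINE_MODE::THICK;
}
```

Defaulting the parser to the legacy mode is what keeps the change backward compatible.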

Removing "thick" polygon outlines would solve some other issues too - I
noticed that Hyperlynx stores polygons with 0-width outlines, so the
ones currently exported from KiCad are thinner than they should be,
because there is no inflation code in the exporter. Other EDA exporters
might be affected as well.
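The missing inflation step amounts to offsetting each zone polygon outward by half the outline width before export. A minimal sketch for the convex case - a real exporter would need a robust offsetter (e.g. Clipper) that also handles concave outlines:

```cpp
#include <cmath>
#include <vector>

struct Point { double x, y; };

// Inflate a convex, counter-clockwise polygon by 'aDist': shift every
// edge outward along its normal, then intersect adjacent shifted edges.
// Sketch only - concave outlines and arcs need a proper offset library.
std::vector<Point> InflateConvex( const std::vector<Point>& aPoly, double aDist )
{
    const size_t n = aPoly.size();
    std::vector<Point> pts( n ), dirs( n ); // shifted edges: point + direction

    for( size_t i = 0; i < n; ++i )
    {
        const Point& a = aPoly[i];
        const Point& b = aPoly[( i + 1 ) % n];
        double dx = b.x - a.x, dy = b.y - a.y;
        double len = std::hypot( dx, dy );
        // Outward normal of a CCW polygon edge is (dy, -dx) / len.
        pts[i]  = { a.x + dy / len * aDist, a.y - dx / len * aDist };
        dirs[i] = { dx, dy };
    }

    std::vector<Point> out( n );

    for( size_t i = 0; i < n; ++i )
    {
        // New vertex j = intersection of shifted edges i and j.
        size_t j = ( i + 1 ) % n;
        const Point &p = pts[i], &r = dirs[i];
        const Point &q = pts[j], &s = dirs[j];
        double denom = r.x * s.y - r.y * s.x;
        double t = ( ( q.x - p.x ) * s.y - ( q.y - p.y ) * s.x ) / denom;
        out[j] = { p.x + t * r.x, p.y + t * r.y };
    }

    return out;
}
```

Inflating by half the former outline width restores the copper extent that the stroked outline used to provide.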

> I don't think any desktop computer released after 2010 would have
> issues with GL3 unless the hardware/OS is defective in some way.

Hardware wouldn't, but drivers would. I still have issues running Half
Life 2 EP2 (a 9-year-old game) on a 4-year-old laptop with Intel
graphics under Linux...

> Is it possible to determine OpenGL hardware support at runtime and use
> advanced API on newer machines while switching to fallback for older ones?

I'd rather go with JP's idea of not using thick outlines than support
rendering backends for two different OpenGL versions.
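For completeness, the detection itself is straightforward - the string returned by `glGetString(GL_VERSION)` starts with "major.minor". A minimal sketch of parsing it (context creation and the actual fallback policy are left out; `ParseGlVersion` and `CanUseGeometryShaders` are illustrative names):

```cpp
#include <cstdio>

// Parse the "major.minor" prefix of a GL_VERSION string, e.g.
// "3.0 Mesa 20.3.5" or "2.1 INTEL-14.7.28". Returns false if the
// string does not start with a version number.
bool ParseGlVersion( const char* aVersion, int* aMajor, int* aMinor )
{
    return aVersion && std::sscanf( aVersion, "%d.%d", aMajor, aMinor ) == 2;
}

// This thread assumes geometry shaders from GL 3.0 upwards (they only
// became core in 3.2, so an extension check would also be needed).
bool CanUseGeometryShaders( int aMajor, int aMinor )
{
    (void) aMinor;
    return aMajor >= 3;
}
```

The real cost isn't this check - it's maintaining two render paths behind it, which is exactly the support effort worth avoiding.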

> I don't see the need to tie OS support to hardware support. It's
> totally plausible to say, for example, "we'll support users on Debian 6
> but only if they have a <10 year old graphics card".

Expect a lot of complaints on the bug tracker and the forums then :)

> What about moving the knock-out code to the relative-error calculation
> first? Vias probably don't need 32 segments around the edge. Look at
> buildZoneFeatureHoleList(). We currently use 32 as the minimum value
> for segments per circle.

Good idea, especially combined with JP's solution. Are you going to fix it?
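A sketch of the relative-error approach: pick the segment count so that the maximum deviation (sagitta) of a chord from the true arc stays below a given error, instead of hardcoding 32. The function name and the minimum-segment default here are illustrative, not the existing KiCad code:

```cpp
#include <algorithm>
#include <cmath>

// Segments needed to approximate a full circle of radius 'aRadius'
// so that no chord deviates from the arc by more than 'aMaxError'
// (both in the same unit). From the sagitta formula:
//   error = r * (1 - cos(pi / n))   =>   n = pi / acos(1 - error / r)
int SegmentsPerCircle( double aRadius, double aMaxError, int aMinSegments = 8 )
{
    if( aMaxError >= aRadius )
        return aMinSegments;

    int n = (int) std::ceil( M_PI / std::acos( 1.0 - aMaxError / aRadius ) );
    return std::max( n, aMinSegments );
}
```

For a 0.6 mm via (radius 0.3 mm) and a 0.005 mm maximum error this gives 18 segments - noticeably fewer than a fixed 32 - while large zone arcs automatically get more.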

> For instance: trying to render just the visible part of the board
> (culling), or in the case of rendering the full board, implement some
> kind of "level of detail" to render a less accurate version? (e.g. as
> you pointed out, vias could be a special case, simplified at a distance)

We already render only the visible part of the board (glDrawElements()
with indices generated from the currently visible item set, obtained by
traversing the VIEW's R-tree). I'm not sure whether geometry LOD
wouldn't severely degrade performance (the cost of regenerating and
uploading a ~1 GB VBO on every LOD change - most of the board geometry
is cached in GPU memory), or simply be too complex to implement (GAL
has no LOD concept for primitives cached in GPU RAM).
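A sketch of that culling scheme: query the items overlapping the viewport, then build the index buffer that a single glDrawElements() call consumes. The structures below are simplified stand-ins for VIEW/GAL, not the actual KiCad classes, and the linear scan stands in for the R-tree query:

```cpp
#include <cstdint>
#include <vector>

struct BBox { double x1, y1, x2, y2; };

inline bool Intersects( const BBox& a, const BBox& b )
{
    return a.x1 <= b.x2 && b.x1 <= a.x2 && a.y1 <= b.y2 && b.y1 <= a.y2;
}

// Each cached item owns a contiguous range of vertices in the big VBO.
struct CachedItem
{
    BBox     bbox;
    uint32_t firstIndex; // first index of the item's triangles
    uint32_t indexCount; // number of indices (3 per triangle)
};

// Gather the index ranges of items overlapping the viewport; the
// result is uploaded as the element array for one glDrawElements().
std::vector<uint32_t> BuildVisibleIndices( const std::vector<CachedItem>& aItems,
                                           const BBox& aViewport )
{
    std::vector<uint32_t> indices;

    for( const CachedItem& item : aItems )
    {
        if( !Intersects( item.bbox, aViewport ) )
            continue;

        for( uint32_t i = 0; i < item.indexCount; ++i )
            indices.push_back( item.firstIndex + i );
    }

    return indices;
}
```

Since the vertex data never moves, panning and zooming only rebuild this (comparatively tiny) index buffer - which is why LOD, with its full VBO regeneration, is so much more expensive.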

> - render the via outline using a rectangular texture with transparent
> - render the via hole in the zone as a simple square, side length =
> via diameter, and underneath the texture, so the texture's curve fills
> in the square's corners?

Vias are already rendered using one triangle each (not even a quad),
except that the 'texture' is generated by the pixel shader instead of
being sampled from an actual texture.
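The geometry behind that trick: a circle of radius r fits inside an equilateral triangle whose vertices lie 2r from the centre (inradius r implies circumradius 2r), so one triangle per via suffices, and the pixel shader keeps only fragments with x² + y² ≤ r². A CPU-side sketch of the same idea - illustrative, not the actual GAL shader code:

```cpp
#include <cmath>

// Vertices of an equilateral triangle circumscribing a circle of
// radius 'r' centred at the origin: inradius r => circumradius 2r.
void ViaTriangle( double r, double (&x)[3], double (&y)[3] )
{
    for( int i = 0; i < 3; ++i )
    {
        double a = M_PI / 2.0 + i * 2.0 * M_PI / 3.0; // 90, 210, 330 deg
        x[i] = 2.0 * r * std::cos( a );
        y[i] = 2.0 * r * std::sin( a );
    }
}

// What the pixel shader does per fragment: keep it only when it lies
// inside the via circle (annular ring handling omitted for brevity).
bool FragmentInsideVia( double px, double py, double r )
{
    return px * px + py * py <= r * r;
}
```

Three vertices instead of a 32-segment fan is a large win when a board has tens of thousands of vias.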

> Would it be possible to share Victor's project, or where can we get it?

I have been asked not to share the board. Please ask Victor whether
he'd be willing to send you the design; it might be useful for
optimizing 3D support.
