
kicad-developers team mailing list archive

Re: Regression Testing

 

On 04/29/2013 07:51 AM, Dick Hollenbeck wrote:
On 04/29/2013 07:45 AM, Wayne Stambaugh wrote:
On 4/28/2013 8:15 PM, Dick Hollenbeck wrote:
On Apr 28, 2013 10:54 AM, "Brian Sidebotham" <brian.sidebotham@xxxxxxxxx> wrote:
Hi Guys,

I'm just catching up with the list, and I saw something that caught my
eye as it's something that's been on my mind for a while:
--------------------------

Dick Hollenbeck wrote:

- Right now, I am finding too many bugs in the software ...

- We might do well as a team by slowing down and focusing
- on reliability and quality not features for awhile.  Firstly,
- the bugs are damaging to the project.

---------------------------

I agree with this. There are things I'd like to add to KiCad, but only
on top of something I can be confident I'm not breaking, especially by
creating corner-case issues.
I would like us to think about regression testing using something like
CTest (which would make sense, as we're currently using CMake anyway!).
We could then publish dashboard regression testing results.
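To give a concrete feel for what the CTest route involves, a minimal sketch might look like the following; the target and test names are invented for illustration and are not from the KiCad tree:

```cmake
# Illustrative only: a hypothetical unit-test target wired into CTest.
enable_testing()

add_executable( test_drc test_drc.cpp )
add_test( NAME drc_basic COMMAND test_drc )
```

Running `ctest` in the build directory then executes the suite, and `ctest -D Experimental` can submit the results to a CDash dashboard of the kind mentioned above.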
I'm aware work is going into making eeschema and PCBNEW essentially
into DLLs, so perhaps it's best to wait until that work is complete
before starting down this road?
In particular I'd like to see regression testing on the DRC, Gerber
generation, and the Python exposed API. Probably in that order of
priority too. Certainly the Python API changes are already tripping us
up, but only when they have already been broken in committed code.
Being able to regression test changes to optimisations and code
tidying will help that move along anyway, as you can be more confident
of your changes having complete coverage once the number of tests increases.
I am prepared to say that I'll undertake this work too. Obviously it
can't start straight away as I'm currently doing work on the Windows
scripting build system and python-a-mingw-us packaging.
Is anyone against regression testing, or does anyone have an alternative
that would achieve similar confidence in committed code? My vote is for
regression testing.
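One common shape for such tests, regardless of harness, is golden-file comparison: generate a Gerber file or netlist, checksum it, and compare against a known-good copy. A minimal sketch, with the export step stood in by fixed illustrative content rather than any real KiCad call:

```python
import hashlib
import os
import tempfile

def file_checksum(path):
    """SHA-1 digest of a file's contents, used as the regression fingerprint."""
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def outputs_match(generated, golden):
    """The regression test passes when the fresh output matches the golden file."""
    return file_checksum(generated) == file_checksum(golden)

# Stand-in for a real Gerber/netlist export step; the content is illustrative.
with tempfile.TemporaryDirectory() as d:
    golden = os.path.join(d, "golden.gbr")
    fresh = os.path.join(d, "fresh.gbr")
    for p in (golden, fresh):
        with open(p, "w") as f:
            f.write("G04 example layer*\nM02*\n")
    print(outputs_match(fresh, golden))  # prints True
```

Checksum comparison keeps the golden data small, though storing the full reference file makes diffs far easier to debug when a test does fail.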
I think it's a good idea as long as we think it through before we start
implementing tests.  I want to avoid a free-for-all mentality and then
have to go back and clean up the mess.  We should set down some
preliminary guidelines for testing, along the lines of the coding policy,
before we start actually writing test code.  That way developers will
know what is expected.


Yes, it's a good idea and here are my high level $0.02.

I flash back to my hardware computer graphics ASIC design days,
where we could stimulate the input like a host driver
and produce an output: a picture. The input was an automatic
or interactive script that generated PCI bus cycles. A set of
text scripts was source-controlled along with the Verilog
source as a test suite that ran a simulation every night. The
picture output was verified automatically by comparing the
resultant frame buffer checksum against the expected one.
I haven't looked at the python interface yet as a possibility,
but one approach that seems to me to add little additional
code would be keystroke record and playback.
Other EDA packages I have run into allow the user to start
a recording, then stop and save it. This kind of feature would
require a keyboard shortcut for each menu pulldown. Each
recording session would constitute a test in the test suite:
some short, some longer; some, like DRC, could even start with
a partial design file load. Shorter tests should be encouraged,
as they are easier to debug. Good candidates for the automatically
verified output might be a netlist, or perhaps the .sch or .pcb file.
Such a feature could also be used to generate tutorials.
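A record/playback harness of that kind could be sketched roughly as below. Everything here is hypothetical: the `Recorder` class, the command names, and the JSON session format are invented for illustration, and a real implementation would hook the application's own action dispatch rather than a standalone file:

```python
import hashlib
import json

class Recorder:
    """Sketch of a session recorder; command names are invented examples,
    not real KiCad actions."""
    def __init__(self):
        self.commands = []

    def record(self, name, **args):
        # Each user action is captured as a command name plus its arguments.
        self.commands.append({"cmd": name, "args": args})

    def save(self, path):
        with open(path, "w") as f:
            json.dump(self.commands, f)

def playback(path, dispatch):
    """Replay a saved session, calling dispatch(cmd, args) for each step."""
    with open(path) as f:
        for step in json.load(f):
            dispatch(step["cmd"], step["args"])

def result_checksum(text):
    """Fingerprint of the resulting netlist/.sch/.pcb text, compared against
    a stored expected value, much like the frame-buffer checksum idea."""
    return hashlib.sha1(text.encode("utf-8")).hexdigest()
```

A test run would replay a saved session against the application and compare `result_checksum` of the produced output file with the checksum recorded when the test was created.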

Frank Bennett

