Re: Regression Testing

 

On 04/29/2013 07:45 AM, Wayne Stambaugh wrote:
> On 4/28/2013 8:15 PM, Dick Hollenbeck wrote:
>>
>> On Apr 28, 2013 10:54 AM, "Brian Sidebotham" <brian.sidebotham@xxxxxxxxx> wrote:
>>>
>>> Hi Guys,
>>>
>>> I'm just catching up with the list, and I saw something that caught my
>> eye as it's something that's been on my mind for a while:
>>>
>>> --------------------------
>>>
>>> Dick Hollenbeck wrote:
>>>
>>> - Right now, I am finding too many bugs in the software ...
>>>
>>> - We might do well as a team by slowing down and focusing
>>> - on reliability and quality, not features, for a while.  Firstly,
>>> - the bugs are damaging to the project.
>>>
>>> ---------------------------
>>>
>>> I agree with this. There are things I'd like to add to KiCad, but only
>> on top of something I can be confident I'm not breaking, especially by
>> creating corner-case issues.
>>>
>>> I would like us to think about regression testing using something like
>> CTest (which would make sense as we're currently using CMake anyway!).
>> We could then publish dashboard regression-testing results.
>>>
>>> I'm aware work is going into making eeschema and PCBNEW essentially
>> into DLLs, so perhaps it's best to wait until that work is complete
>> before starting down this road?
>>>
>>> In particular I'd like to see regression testing on the DRC, Gerber
>> generation, and the Python-exposed API, probably in that order of
>> priority too. Certainly the Python API changes are already tripping us
>> up, but only once they have already been broken in committed code.
>>>
>>> Being able to regression-test changes to optimisations and code
>> tidying will help that work move along too, as you can be more confident
>> in your changes once the number of tests, and thus coverage, increases.
>>>
>>> I am prepared to say that I'll undertake this work too. Obviously it
>> can't start straight away as I'm currently doing work on the Windows
>> scripting build system and python-a-mingw-us packaging.
>>>
>>> Is anyone against regression testing, or does anyone have alternatives
>> that would achieve similar confidence in committed code? My vote is for
>> regression testing.
> 
> I think it's a good idea as long as we think it through before we start
> implementing tests.  I want to avoid a free-for-all mentality and then having
> to go back and clean up the mess.  We should set down some preliminary
> guidelines for testing, along the lines of the coding policy, before we
> start actually writing test code.  This way developers will know what is
> expected.
> 
>>>
>>
>> I fully support the idea.  It will expand the size of the source tree
>> significantly over time, and increase maintenance, but these costs are
>> dwarfed by the benefits.
>>
>> Python itself has quite a developed test harness environment with lots of
>> tests.  They were helpful in getting a-mingw-us up to a certain
>> confidence level.  I did not need an understanding of the test harness
>> environment to use it.
>>
>> The other thing I learned was that Python can call arbitrary C functions
>> in an arbitrary DLL, even if they are not swigged.
> 
> I've used the Python unit test framework for testing Python code.  I
> never thought about using it for testing C++ code but it does seem
> feasible.  Does anybody have any experience doing this or are there any
> examples of anyone else doing this so we can get an idea of what is
> involved? 

The Python regression test suite that I referred to is about 60% tests that call into C
code.  This C code, more often than not, resides in C extensions.  So by necessity those
tests are using the stack that you ask about, although *through* the Python C API.  But
there is also the interface that Joseph referred to, the _ctypes module & extension
stack.  This allows any *C function* to be called, provided it is a *DLL entry point*.

a) C function
b) DLL entry point

It does not have to be swigged.

In testing _ctypes, the Python regression test suite, aka the unittests, arbitrarily
loads Windows' kernel32.dll and calls functions in there.  These are obviously not
Python-aware functions.
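
To make that concrete, here is a minimal sketch of the mechanism, calling a plain C
function through a DLL entry point with no SWIG involved (Windows-only, since it loads
kernel32.dll):

    import ctypes

    # Load kernel32.dll and call one of its plain C entry points.
    # GetTickCount() is obviously not a Python-aware function.
    kernel32 = ctypes.WinDLL("kernel32")
    kernel32.GetTickCount.restype = ctypes.c_uint32  # declare the return type
    print(kernel32.GetTickCount())                   # milliseconds since boot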

The main drawbacks are requirements a) and b) above.  The positives are the conciseness
of Python and the access to a reporting UI, wxPython.

We could address a) and b) by having a single test-specific entry point per DLL/DSO,
with a big switch in there to route to the actual test.  This scaffolding has some
maintenance cost, and begins to wash out the ease of using Python on top, but perhaps
not fully.
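
From the Python side, such a routing entry point could be driven like this; a minimal
sketch, where the DSO name, the exported function kicad_run_test(), and the test ids are
all assumptions for illustration:

    import ctypes

    # Load the hypothetical test library.  A single exported C function
    # takes an integer test id and routes to the real test via a big
    # switch on the C/C++ side.
    testlib = ctypes.CDLL("./_kicad_tests.so")
    testlib.kicad_run_test.argtypes = [ctypes.c_int]
    testlib.kicad_run_test.restype = ctypes.c_int

    DRC_CLEARANCE = 0   # assumed test ids, mirroring the C-side switch
    GERBER_OUTPUT = 1

    assert testlib.kicad_run_test(DRC_CLEARANCE) == 0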

Doing human-less testing on code written for human interaction requires some thought.

Here is the typical software stack we would want to test:

1) entry point called from a hotkey or menu
2) put up a dialog, ask for parameters
3) from the dialog, call the worker function

If you want to test the "worker function", it is probably a C++ member function, or at
least it should be.  Python will have difficulty calling it unless it is swigged.
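
When the worker function *is* swigged, a test can drive it directly and skip the dialog
entirely; a minimal sketch, where the pcbnew module, its LoadBoard() function, and the
fixture path are assumptions for illustration, not a settled API:

    import unittest
    import pcbnew   # assumed swigged module

    class WorkerFunctionTest(unittest.TestCase):
        def test_load_board(self):
            # Drive the swigged worker directly, bypassing hotkey and dialog.
            board = pcbnew.LoadBoard("fixtures/simple.kicad_pcb")  # assumed fixture
            self.assertIsNotNone(board)

    if __name__ == "__main__":
        unittest.main()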

But what if the bug is in the dialog window and in the gathering of parameters from it?

Please think about this for a while.  Does this not expose a severe limitation on what
can be tested?


> If the Python testing framework is not adequate, we can
> always take a look at the Boost testing framework.


Yes, and CTest also.

In my use of the python unittest framework, I came to believe that the sheer volume of
tests, and of output from them, makes any "test UI" essentially worthless.  This output
must be logged and looked through later with a text editor and grepping tools.  There
are thousands of lines of text, and watching them scroll up a GUI panel would tell you
far less than looking through a log file.
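
For example, the stock unittest runner can be pointed at a log file instead of a
terminal or GUI panel; a minimal sketch, with a throwaway placeholder test:

    import unittest

    class PlaceholderTest(unittest.TestCase):   # throwaway example test
        def test_truth(self):
            self.assertTrue(True)

    if __name__ == "__main__":
        # Send the runner's output to a log file for later grepping,
        # instead of watching it scroll past in a terminal or GUI panel.
        with open("testrun.log", "w") as log:
            runner = unittest.TextTestRunner(stream=log, verbosity=2)
            unittest.main(testRunner=runner, exit=False)

Afterwards, something like "grep -n FAIL testrun.log" finds the failures.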

We may not actually need the best test framework; what we actually need is one that is
good enough.  Wayne, your concept of documenting expectations up front, a priori, is
good.

But if we want [new] folks to follow it, the test framework itself must be quickly
understood by new folks, to a degree sufficient to use it.  Complexity could be a hurdle
in this regard.  Everything Boost, in general, is of above-average complexity.

I do not know CTest.

So we have at least three possibilities on the table.  I think we are still in a
brainstorming phase on it.



> I looked at this
> several months ago and it looked like a good fit given that we already
> have Boost as a dependency and its feature parity seemed as good as any
> of the other open source C/C++ testing frameworks and even many of the
> commercial testing frameworks.
> 
>>
>>> We would need good support from all: new code requires new tests, and
>> nobody is better suited to devising the tests than the person who
>> specified the new functionality - which means all developers would
>> probably end up writing test cases at some point.
>>>
>>> Best Regards, Brian.


