kicad-developers team mailing list archive
Re: Python Unittest Trial
Thanks for taking a look at this.
> ... this work is always going to be needed.
Well, that's enough reason for me to continue. I guess whether the python
tests are enough for validating the cpp code itself remains to be seen. I
can foresee some issues where you don't know whether the python bindings or
the cpp code is broken when tests fail. I will move these tests to
/pcbnew/scripting/tests for now, where there are already some really
basic tests written for the scripting work.
The only issue I have with this at the moment is that I will actually start
by writing tests for the common classes (e.g. EDA_RECT) rather than for
pcbnew. Can we maybe reorganize the bindings so there are separate
bindings for this common code which the bindings for pcbnew, eeschema, etc.
will make use of? Maybe a kicad module which has the common classes (e.g.
kicad.EDA_RECT) alongside the other modules (kicad.pcbnew etc.).
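As a sketch of what a common-class test could look like, here is a runnable stand-in. The EDA_RECT class below is a hypothetical pure-Python mock (its constructor and method shapes are my assumption, so the sketch runs without KiCad installed); a real test would import the class from the proposed kicad module instead.

```python
import unittest

# Hypothetical pure-Python stand-in for the EDA_RECT binding, so this sketch
# runs without KiCad; a real test would do e.g. "from kicad import EDA_RECT".
class EDA_RECT:
    def __init__(self, x, y, width, height):
        self.x, self.y, self.width, self.height = x, y, width, height

    def Centre(self):
        return (self.x + self.width // 2, self.y + self.height // 2)

    def Move(self, dx, dy):
        self.x += dx
        self.y += dy

class EDA_RECT_Test(unittest.TestCase):
    def setUp(self):
        # Fresh rectangle per test, so tests stay independent of each other
        self.rect = EDA_RECT(0, 0, 2, 2)

    def testCenter(self):
        self.assertEqual(self.rect.Centre(), (1, 1))

    def testMove(self):
        self.rect.Move(1, 0)
        self.assertEqual(self.rect.Centre(), (2, 1))

suite = unittest.defaultTestLoader.loadTestsFromTestCase(EDA_RECT_Test)
result = unittest.TextTestRunner(verbosity=2).run(suite)
```

Swapping the mock for the real binding is then a one-line import change, which is the point of keeping the common classes in their own module.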
> We also need to be able to test the UI code too.
To me, that sounds like a different kettle of fish and falls under
integration, rather than unit testing. I agree it will be useful but I am
not planning on working on that at the moment, unless you are talking about
unit testing the UI classes.
To address the criteria:
(a) QUICK to use. The tests need to be integrated into the build system. I
will try to get them to run automatically on the built module when
-DKICAD_SCRIPTING and -DKICAD_SCRIPTING_MODULES are on. How would you
suggest enabling more fine-grained control? Via CMake switches as well? It
seems to me that could get unwieldy quickly, but I don't have a better idea.
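One alternative to multiplying CMake switches would be to select subsets at the unittest level by dotted test name, with a single CMake target just forwarding that name. A minimal sketch (the RectSuite class and its test names are illustrative, not real KiCad tests):

```python
import unittest

class RectSuite(unittest.TestCase):
    """Illustrative stand-in for a bindings test module."""
    def testMove(self):
        self.assertEqual((1, 0), (1, 0))

    def testCenter(self):
        self.assertEqual((0, 0), (0, 0))

# Fine-grained selection without extra build switches: load a single test
# by name relative to its TestCase, instead of running the whole suite.
suite = unittest.defaultTestLoader.loadTestsFromName("testMove", RectSuite)
result = unittest.TextTestRunner(verbosity=2).run(suite)
```

The same mechanism works from the command line (`python -m unittest module.Class.test`), so the build system only needs one "run tests" entry point.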
(b) OBVIOUS assertions. Yes, we should test dependencies before testing
dependents; other than that, using nosetests, the verbose output of a failed
test will look something like this:
testCenter (test_EDA_RECT.EDA_RECT_Test) ... ok
testContains (test_EDA_RECT.EDA_RECT_Test) ... ok
testContains2 (test_EDA_RECT.EDA_RECT_Test) ... ok
testContains3 (test_EDA_RECT.EDA_RECT_Test) ... ok
testContains4 (test_EDA_RECT.EDA_RECT_Test) ... ok
testContains5 (test_EDA_RECT.EDA_RECT_Test) ... ok
testContains6 (test_EDA_RECT.EDA_RECT_Test) ... ok
testMove (test_EDA_RECT.EDA_RECT_Test) ... FAIL
FAIL: testMove (test_EDA_RECT.EDA_RECT_Test)
Traceback (most recent call last):
  File "/home/kaspar/kicad/kicad.bzr/tests/test_EDA_RECT.py", line 39, in
    self.assertEqual( self.rect.Centre() , tempPoint )
AssertionError: wxPoint(0, 0) != wxPoint(1, 0)
Ran 8 tests in 0.090s
The message can be easily customized and information can be added to it.
However, getting the relevant cpp source file and line number of the failed
function definition seems to be non-trivial (though maybe possible).
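To show what "easily customized" means in practice: unittest's assertions take a msg argument, and with longMessage enabled (the default in Python 3) the custom hint is appended to the standard "a != b" diff rather than replacing it. The hint text below is illustrative only; it does not resolve the C++ file/line problem, it just points a reader in the right direction.

```python
import unittest

class MessageDemo(unittest.TestCase):
    def testCentreWithContext(self):
        expected = (1, 0)
        got = (1, 0)  # stand-in for a real self.rect.Centre() call

        # With longMessage on, unittest keeps the default "a != b" detail and
        # appends this hint, so context about the C++ side is not lost.
        self.assertEqual(
            got, expected,
            msg="EDA_RECT::Centre disagrees with the expected centre; "
                "check the C++ implementation in the common library")

suite = unittest.defaultTestLoader.loadTestsFromTestCase(MessageDemo)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```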
(c) MINIMAL overhead. I feel this is one of Python's and nosetests' strong
suits, as they keep boilerplate to a minimum and have a good inheritance
model which encourages code re-use. Also, in my testing work with
SooperLooper I was able to quickly set up parametrized tests using Python
decorators that perform the same function repeatedly, taking input values
from lists or dictionaries. Though, given that we want tests for the
scripting bindings whether we have additional non-scripting tests or not,
the point seems moot.
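A sketch of that decorator style, in plain unittest so it runs anywhere (the parametrize helper and the contains stand-in are hypothetical, written here to illustrate the pattern, not taken from the SooperLooper code):

```python
import unittest

RECT = (0, 0, 10, 10)  # x, y, width, height -- stand-in for an EDA_RECT

def contains(point, rect=RECT):
    """Illustrative containment check standing in for EDA_RECT::Contains."""
    x, y = point
    rx, ry, w, h = rect
    return rx <= x <= rx + w and ry <= y <= ry + h

def parametrize(cases):
    # Hypothetical decorator: generate one test method per (input, expected)
    # pair, so testContains0..N each show up individually in verbose output.
    def decorator(cls):
        for i, (point, expected) in enumerate(cases):
            def make_test(p, e):
                def test(self):
                    self.assertEqual(contains(p), e)
                return test
            setattr(cls, "testContains%d" % i, make_test(point, expected))
        return cls
    return decorator

@parametrize([((5, 5), True), ((0, 0), True), ((11, 5), False)])
class ContainsTest(unittest.TestCase):
    pass

suite = unittest.defaultTestLoader.loadTestsFromTestCase(ContainsTest)
result = unittest.TextTestRunner(verbosity=1).run(suite)
```

The make_test factory is needed so each generated method captures its own (point, expected) pair rather than the loop's final values.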
On 19 July 2013 12:43, Brian Sidebotham <brian.sidebotham@xxxxxxxxx> wrote:
> On 17 July 2013 02:00, Kaspar Emanuel <kaspar.emanuel@xxxxxxxxx> wrote:
>> Hi all,
>> I had a brief look at using Python for unit testing today and wrote a few
>> tests for the EDA_RECT class.
>> I added a tests directory in the root-dir and in it there is a README.md
>> that tells you how to run the tests. There should be no dependencies except
>> Python 2.7. If I go further with this I would likely switch to the
>> nose testing module.
>> Any feedback welcome. What I really need is some help with a plan to assess
>> the usefulness and feasibility of this. Can you see yourself using this?
>> Where could this Python approach falter?
>> Here is my branch:
> Hi Kaspar,
> Thanks for doing some initial work on the regression testing in KiCad. I
> am still stuck with doing work on python scripting support for KiCad on
> Windows, but I should be at a place soon where that work can be released.
> In which case, I can turn my attention to creating a branch for working out
> test strategies.
> This initial python work looks good. One of the things that instigated
> regression testing was to include the python API so that we could be sure
> when changing things in the C++ code we weren't inadvertently breaking the
> Python API and therefore breaking lots of scripts. So this work is always
> going to be needed.
> We also need to be able to test the UI code too. We need a way of testing
> that various plotting options and settings are always honoured, so that we
> don't break the essentials of KiCad such as plotting.
> "Using" whatever framework we come up with should ideally become
> non-optional to developers. Therefore the testing framework should be:
> (a) QUICK to use. This doesn't necessarily mean that running all tests in
> the entire test suite takes a short amount of time, but rather all tests
> are performed as quickly as possible whilst also allowing an easy selection
> of subsets of tests to be run. Developers can always run all tests, but can
> save time by running tests only relevant to what they've been working on.
> (b) OBVIOUS assertions. Assertions should make it obvious what failed.
> There should be good information pointing to the source of the problem*
> (c) MINIMAL overhead. Writing new tests should be very quick. Once you
> start writing tests you realise that most functions require several tests.
> We might even want to have some template of tests that need to be performed
> on certain return types or input types, etc.
> * This can mean the order of tests will need thinking about. I would
> rather have an assertion on a WorkerFunction as the first fatal error
> report as opposed to a more complicated UI assertion which uses the failing
> WorkerFunction underneath.
> Best Regards, Brian.