ubuntu-appstore-developers team mailing list archive
Message #00631
Re: Determining CI low-hanging fruits
Hello,
I updated
https://wiki.ubuntu.com/AppStore/Decisions/ContinuousIntegration again
and feel we should have another meeting. Everybody should be busy with
last-minute landings and bug fixes right now, so I imagine the next
vUDS is probably the best time.
I'd still appreciate any feedback to help prepare the discussion, or we
could simply have the discussion here on the mailing list.
Have a great day,
Daniel
On 20.09.2013 17:52, Daniel Holbach wrote:
> Hello everybody,
>
> to clarify the tests I suggested a bit, I could imagine the following
> work items.
>
> On 19.09.2013 15:05, Daniel Holbach wrote:
>> = Suggested tests =
>> 1. Dev uses ubuntu-sdk (qtcreator) to create a .click package
>> - A (Step 1): Keep a list of app projects, check all of them
>> out and run lp:qtcreator-plugin-ubuntu scripts on them
>> (whenever qtcreator-plugin-ubuntu changes), generate
>> .click packages. Validate packages.
>
> - Determine initial list of projects (Core apps? Touch+Core apps?).
> - Write a broken app(?).
> - Make click review tools available on Launchpad.
> - Determine which CI tools let us most easily branch code, run
> QtC tools, then run validation scripts on generated .click packages.
> - Write code to run the tests automatically. (Not a very explicit
> work item, right?)
> - Hook up with CI infrastructure.
>
> Which other qtcreator-plugin-ubuntu scripts do we want to test? For
> which would we need QtC running? For which an attached device?
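To make the "Validate packages" part of A a bit more concrete, here is a
minimal sketch of a manifest check in Python. The required-field list is my
assumption for illustration only, not the authoritative rule set of the
click tools:

```python
import json

# Fields we assume a click manifest must carry -- an assumption for this
# sketch, not the authoritative list enforced by the click tools.
REQUIRED_FIELDS = ("name", "version", "framework")

def validate_manifest(manifest_json):
    """Return a list of problems found in a manifest.json string."""
    try:
        manifest = json.loads(manifest_json)
    except ValueError as e:
        return ["manifest is not valid JSON: %s" % e]
    problems = []
    for field in REQUIRED_FIELDS:
        if not manifest.get(field):
            problems.append("missing or empty field: %s" % field)
    return problems
```

A CI job could extract manifest.json from each generated .click package and
fail the run whenever validate_manifest() returns a non-empty list.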
>
>
>> 2. The .click is uploaded to the review website
>> 3. Our reviewers check the .click and approve it in the website
>> - B (Step 2+3): Keep a list of .click packages. Run validation
>> scripts on them server side (whenever scripts or click
>> change).
>
> - Test-run of current review scripts across current list of approved
> and rejected apps.
> - Reach out to app authors of approved apps to let them know of
> breakage, if we find any.
> - Set up a list of known-good and known-failing tests.
> - Write code to run the tests automatically.
> - Hook up with CI infrastructure.
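The work items above could hang together roughly like this: a sketch that
runs a review function over a package list and splits the results into the
known-good and known-failing lists mentioned above. run_review here is a
stand-in for the real review scripts, and its rule is purely hypothetical:

```python
def run_review(package_path):
    """Stand-in for the real review scripts: return a list of error
    strings, empty when the package passes."""
    # A hypothetical rule just to make the sketch runnable.
    return [] if package_path.endswith(".click") else ["not a .click file"]

def triage(package_paths, review=run_review):
    """Split packages into a known-good list and a known-failing map."""
    known_good = []
    known_failing = {}
    for path in package_paths:
        errors = review(path)
        if errors:
            known_failing[path] = errors
        else:
            known_good.append(path)
    return known_good, known_failing
```

Re-running triage() whenever the scripts or click change, and diffing the
two lists against the previous run, would flag new breakage to report to
app authors.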
>
>
>> 5. unity-scope-click gets the list of .clicks from the click
>> index webservice
>> - C (Step 5): ? Maybe some validation of the results from the
>> store?
>
> Can anyone comment on the kinds of breakage we saw here?
>
>
>> 10. Download finishes, ubuntu-download-manager calls “pkcon
>> install-local”
>> 11. pkcon uses packagekit dbus api to talk to click pk plugin
>> 12. click does the unpacking, calls hooks to create apparmor
>> profile and .desktop files
>> - D (Step 10-12): Keep a list of .click packages (whenever
>> relevant bits - which? - change), check if everything gets
>> installed in the right place and generated files make sense.
>
> Which paths does other code rely on? Do we have validation code for any
> of the generated files?
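For D, "check if everything gets installed in the right place" could start
as a simple path check. The file names below are placeholders I made up for
the sketch, not the actual layout the click hooks produce, so they would
need to be replaced with the real one:

```python
import os

# Assumed names of generated artifacts -- placeholders for this sketch,
# not verified against the actual click hook output.
EXPECTED_FILES = (
    "manifest.json",
    "apparmor.json",
    "app.desktop",
)

def missing_files(install_dir):
    """Return the expected files that are absent under install_dir."""
    return [name for name in EXPECTED_FILES
            if not os.path.exists(os.path.join(install_dir, name))]
```

On top of this, each generated file would still need its own sanity check
(e.g. the apparmor profile parses, the .desktop file has an Exec line).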
>
>
>> 13. unity-scope-click creates list of installed packages from
>> .desktop files
>> - E (Step 13) ? Check for duplicates? Check for missing icons,
>> etc?
>
> I'm no expert here. Does autopilot help us with this? Can we mock
> navigation and verify data in the scope easily?
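On the "check for duplicates, missing icons" idea, a sketch of the check
itself; real input would come from the .desktop files unity-scope-click
reads, here modeled as plain dicts of parsed entries:

```python
def check_desktop_entries(entries):
    """entries: list of dicts with at least 'Name' and optionally 'Icon',
    modeling parsed .desktop files.  Returns (duplicates, missing_icons)."""
    seen = set()
    duplicates = []
    missing_icons = []
    for entry in entries:
        name = entry.get("Name", "")
        if name in seen:
            duplicates.append(name)
        seen.add(name)
        if not entry.get("Icon"):
            missing_icons.append(name)
    return duplicates, missing_icons
```

A fuller version would parse the actual files (they are ini-style, so
configparser would do) and also verify that each Icon value points at a
file that exists.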
>
>
>> 14. user taps on installed app, app is started with the right
>> apparmor profile
>> - F (Step 14) ?
>
> Ideas?
>
>
> Any comments would be very welcome, be it a different choice of tests,
> work items, or priorities, or comments about feasibility or ease of
> implementation.
>
> Thanks a bunch in advance!
>
> Have a great day,
> Daniel
>
--
Get involved in Ubuntu development! developer.ubuntu.com/packaging
Follow @ubuntudev on identi.ca/twitter.com/facebook.com/G+