
checkbox-dev team mailing list archive

Re: Trouble with i18n

On 30.03.2015 at 20:22, Daniel Manrique wrote:
On Mon, Mar 30, 2015 at 1:53 PM, Zygmunt Krynicki wrote:

I think it's unfortunate that we don't pay much attention to i18n and
ignore untranslated or half-translated applications. Is there something
we can do to improve our process to ensure that our tools can handle
multiple languages and that our users can use them in their preferred
language?

Is there any way we could run at least the per-merge-request tests
with a different locale? For instance, we could (as part of one of the
provisioning scripts) install the languages the teams are fluent in

We could install all of the locale-specific dependencies, but I don't know what we could actually test for.

(Polish, Chinese, Spanish, French; we have at least 2 people fluent in
each of those, and we could add a few more languages with at least one
speaker) and run the test suite for each. We can make this conditional
on the langpack being installed so things don't fail horribly for
people who are testing locally and don't want 10 foreign languages
installed. This would be done once the container is provisioned; say,
in each provider's requirements/container-tests-blah script, you'd:

Well, the problem with translations is that there's no simple way to tell whether they are okay:

- We can measure coverage, but we cannot spot strings that were never marked for translation (missing markers); coverage itself is easy to check mechanically, see the sketch after this list.
- We can have a translation, but it is not used (broken mechanism).
- We can have a translation, but it looks bad in context (missing context markers).
- We can have a translation, but it uses unnatural words or just looks bad in practice.

I think what I'm trying to say is that this is very much tied to a human who can spot those high-level issues and react.
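For the coverage part at least, gettext's own tooling can report per-catalog numbers. A minimal sketch, assuming the usual layout with one .po catalog per language under po/ (the path is an assumption, adjust per provider):

for po in po/*.po; do
  echo "== $po =="
  # --statistics prints translated/fuzzy/untranslated counts to stderr;
  # the compiled catalog itself is discarded
  msgfmt --statistics --output-file=/dev/null "$po"
done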

./manage.py validate
for blah in $list_of_locales; do
  # re-run validation under each locale so locale-dependent breakage surfaces
  LANG=$blah LANGUAGE=$blah ./manage.py validate
done
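To make that conditional on the langpack actually being installed, as suggested above, the loop could skip anything that locale -a does not list. A rough sketch (the locale names here are examples, not a fixed set):

for blah in pl_PL.utf8 zh_CN.utf8 es_ES.utf8 fr_FR.utf8; do
  # skip locales that are not generated on this machine
  if ! locale -a | grep -qix "$blah"; then
    echo "skipping $blah (locale not installed)"
    continue
  fi
  LANG=$blah LANGUAGE=$blah ./manage.py validate
done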

I was thinking about one possible enhancement while thinking about QML and arrowhead (that flow control thing) a while ago. We could, given some effort, generate "logs" of the interaction between the user and the application in certain scenarios. We could render those for any desired language. This is something that we could do in a per-commit fashion and just stick the reports on a server. Those could be reviewed by anyone who looks after a particular language. This is much more powerful (logs and screenshots) than looking at raw code or endless translation catalogs, as it is closer to what a normal human interaction would look like.
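A rough sketch of the per-commit part, assuming some scriptable entry point that can replay a canned scenario (run-scenario.sh here is hypothetical, not an existing tool):

mkdir -p i18n-logs
for loc in pl_PL.utf8 zh_CN.utf8 es_ES.utf8 fr_FR.utf8; do
  # replay the same scenario under each locale and keep the rendered
  # output so per-language reviewers can read it like a session transcript
  LANG=$loc LANGUAGE=$loc ./run-scenario.sh smoke > "i18n-logs/smoke-$loc.txt" 2>&1
done
# i18n-logs/ would then be published per commit for reviewers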

The technical challenge is to come up with a way to do this for a useful chunk of our stack where we can see issues.

Sadly this is still a stopgap measure. The only thing that works really well and at low cost is users actively using a specific language, translating everything available, and reporting bugs on everything that stands out.

Best regards
ZK

