dhis2-devs team mailing list archive - Message #06730
Fwd: [OPENMRS-DEV] Bandwidth and performance issues to address in OpenMRS
---------- Forwarded message ----------
From: Bob Jolliffe <bobjolliffe@xxxxxxxxx>
Date: Wed, Jul 21, 2010 at 10:49 PM
Subject: Re: [OPENMRS-DEV] Bandwidth and performance issues to address
in OpenMRS
To: openmrs-devel-l@xxxxxxxxxxxxxxxxxx
The classic (read: old-fashioned :-) tool for modelling links on
FreeBSD is something called dummynet. Dummynet is a marvelously
simple concept: you define pipes with configurable bandwidth,
delay and packet loss probability, then use ipfw (firewall)
commands to direct packets through your pipes, e.g.:
# define bandwidth and delay of the emulated link
ipfw pipe 1 config bw 3Mbit/s delay 32ms
# pass all traffic through the emulator
ipfw add pipe 1 ip from any to any
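The same mechanism covers lossy and asymmetric links as well. A sketch along the same lines (the pipe numbers, rates, delay and loss figure below are illustrative, not a recommendation):

```shell
# emulate an asymmetric link: slow uplink, faster downlink,
# both with 300ms delay and 2% packet loss (plr = packet loss rate)
ipfw pipe 2 config bw 256Kbit/s delay 300ms plr 0.02
ipfw pipe 3 config bw 1Mbit/s delay 300ms plr 0.02

# outbound traffic through the slow pipe, inbound through the fast one
ipfw add pipe 2 ip from any to any out
ipfw add pipe 3 ip from any to any in
```

These are firewall configuration commands, so they need root on a FreeBSD box with ipfw/dummynet loaded.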
Now I know that not many systems are running on FreeBSD. But there
are options ...
(i) set up a virtual network with virtualbox or vmware or xen or
something, then run FreeBSD on a virtual machine, set up the dummynet
pipe to simulate your network conditions of choice, then route traffic
to your web application via the vm. I have done this, but it is maybe
more effort than it's worth for the casual experimenter.
(ii) as far as I know, ipfw and dummynet pipes are available on Mac OS X.
At the last implementers workshop I saw quite a few people wandering
around with these. People running that system (I don't) should be
able to set up dummynet pipes easily.
(iii) According to the dummynet home page
(http://info.iet.unipi.it/~luigi/dummynet/) dummynet has been ported
to linux and windoze earlier this year. So I guess it should be
possible to try something with that.
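For what it's worth, the stock Linux kernel also ships its own link emulator, netem (driven by the tc command from iproute2), which covers much the same ground without needing the dummynet port. A sketch, assuming eth0 is the interface carrying the traffic you want to shape; the rate, delay and loss values are illustrative:

```shell
# shape egress on eth0: 3 Mbit/s, 100ms added delay, 1% packet loss
tc qdisc add dev eth0 root netem rate 3mbit delay 100ms loss 1%

# inspect the emulation, then remove it when done
tc qdisc show dev eth0
tc qdisc del dev eth0 root
```

Note that netem's built-in rate shaping is a newer feature; older setups pair netem with a tbf qdisc instead. Like the ipfw commands above, these require root.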
If anybody's interested I'd be happy to assist someone who wanted to
come up with a generic dummynet based tool for link simulation.
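Even without a simulator, simple arithmetic gives a feel for what a constrained link does to a heavy page: bytes over bandwidth plus roundtrips times latency puts a floor on load time. A sketch with purely illustrative figures (not measurements from any real system):

```shell
# crude lower bound on page load time over a constrained link
page_kb=600        # total page weight in KB (all files needed to render)
roundtrips=25      # number of requests needed to render the page
bw_kbps=56         # link bandwidth in kbit/s (dial-up here)
rtt_ms=400         # round-trip latency in ms

awk -v kb="$page_kb" -v rt="$roundtrips" -v bw="$bw_kbps" -v lat="$rtt_ms" \
  'BEGIN {
     transfer = kb * 8 / bw       # seconds spent moving the bytes
     latency  = rt * lat / 1000   # seconds lost to roundtrips
     printf "transfer %.1fs + latency %.1fs = %.1fs\n",
            transfer, latency, transfer + latency
   }'
```

With these figures the transfer alone is well over a minute, which is why page weight and roundtrip count dominate on slow links.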
Regards
Bob
On 21 July 2010 21:03, Glen McCallum <mcglen@xxxxxxxxx> wrote:
> I stumbled on these resources:
> http://www.aptivate.org/webguidelines/Home.html
> (Yes. I realize it is biased because the publisher sells an analysis tool
> ... still interesting)
> http://appfrica.net/blog/2009/08/17/how-to-shoehorn-the-high-bandwidth-internet-into-a-low-bandwidth-connection/
> http://wiki.km4dev.org/wiki/index.php/Low-Bandwidth_Design
> http://www.kstoolkit.org/Low+Bandwidth+Tools
> regards,
> Glen
>
> On Wed, Jul 21, 2010 at 11:56 AM, Jeremy Keiper <jeremy@xxxxxxxxxxx> wrote:
>>
>> I'm pretty sure JMeter pulls a page's dependencies.
>>
>> Jeremy Keiper
>> OpenMRS Core Developer
>> AMPATH / IU-Kenya Support
>>
>>
>> On Wed, Jul 21, 2010 at 1:46 PM, Darius Jazayeri <djazayeri+pih@xxxxxxxxx>
>> wrote:
>>>
>>> Hi Glen,
>>> I think it's been years since any of the core code or modules were
>>> performance-tested.
>>> We're putting together a suite of performance benchmark tests for 1.8.
>>> We'll have two versions
>>> 1. simple version, for all devs: you deploy the application to your local
>>> Tomcat and invoke the test suite. It will report timing information and
>>> produce a YourKit snapshot of the runs. (This will only test the server, so
>>> it's not going to touch latency.)
>>> 2. sophisticated version, to be run by CI. This will run the same tests,
>>> but also generate load for the server.
>>> If someone can identify a free java tool that will download a web page
>>> and its dependent js/css/etc files under simulated network latency, we can
>>> try to include that in the suite as well.
>>> -Darius
>>>
>>> On Wed, Jul 21, 2010 at 9:38 AM, Glen McCallum <mcglen@xxxxxxxxx> wrote:
>>>>
>>>> Great suggestions. Thanks. Are any core developers using these tools? or
>>>> similar tools? Are performance constraints captured on the wiki?
>>>> Glen
>>>>
>>>> On Wed, Jul 21, 2010 at 7:59 AM, Maros Cunderlik <maros6677@xxxxxxxxx>
>>>> wrote:
>>>>>
>>>>> Glen,
>>>>> A few things to consider in order of difficulty:
>>>>> 1. Minimally, consider using any HTTP proxy tool and simply *look* at
>>>>> the size in bytes of your page load and the number of roundtrips it takes to
>>>>> render your page. There are numerous tools that do this with various levels
>>>>> of sophistication. If you do nothing else, simply use Firebug for Firefox
>>>>> (http://getfirebug.com/), which has a 'Net' panel that shows you a breakdown of
>>>>> the page load.
>>>>> 2. The next step up, but still easily doable with a single developer box
>>>>> setup, IMHO, is to get an actual client-side HTTP/web proxy such as Charles
>>>>> (http://www.charlesproxy.com/) or Fiddler
>>>>> (http://www.fiddler2.com/fiddler2/; this is a .NET app, for all of you Linux
>>>>> people out there :) ). Charles will let you record traffic, inspect it and
>>>>> then replay it with different throttle settings. Still, this is just one
>>>>> client at a time.
>>>>> The next options really require a server/client machine setup on a network:
>>>>> 3. Actual load testing tools (I am only familiar with commercial ones
>>>>> like Rational Robot or HP LoadRunner). In addition to simulating multiple
>>>>> clients, these tools in general also allow you to adjust throttle settings.
>>>>> 4. Finally, the Cadillac approach: an actual network device/software that
>>>>> introduces latency into your test scenarios. There are HW devices that do
>>>>> this; I am familiar with a software solution called Shunra (http://www.shunra.com/).
>>>>>
>>>>> Hope this helps!
>>>>> Maros
>>>>>
>>>>> On Wed, Jul 21, 2010 at 9:08 AM, Glen McCallum <mcglen@xxxxxxxxx>
>>>>> wrote:
>>>>>>
>>>>>> Are there any tools that developers are using to simulate poor
>>>>>> connectivity/bandwidth during development? (A wiki page I may not have
>>>>>> seen yet?)
>>>>>> I'm guilty of running the server on my dev machine. There are ample
>>>>>> resources and no network constraints. Should I be trying to run it over a
>>>>>> dial-up connection and timing page loads? It could have a huge impact on the
>>>>>> design of a module.
>>>>>> Glen
>>>>>> On Fri, Jul 16, 2010 at 8:10 PM, Hamish Fraser
>>>>>> <hamish_fraser@xxxxxxxxxxxxxxx> wrote:
>>>>>>>
>>>>>>> One aspect that hasn’t had a lot of attention recently is performance
>>>>>>> over an internet connection, i.e. in an ASP mode. This is an important
>>>>>>> benefit of having a web interface of course but to date relatively few
>>>>>>> projects have run OpenMRS remotely. In Lesotho, one place that works this
>>>>>>> way, we have seen real challenges due to page download size in addition to
>>>>>>> slow rendering of the patient page and the other recently discussed
>>>>>>> performance issues. In the Misys test of OpenMRS, performance fell off
>>>>>>> significantly at low bandwidth.
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> When we plan the performance tests for future releases I would like
>>>>>>> to put in a bid to include tests of page load times over limited bandwidth
>>>>>>> connections. I think lots more projects will want to use the internet/ASP
>>>>>>> model for OpenMRS as fiber starts to spread in Africa.
>>>>>>>
>>>>>>> Regards
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> Hamish
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> From: dev@xxxxxxxxxxx [mailto:dev@xxxxxxxxxxx] On Behalf Of Maros
>>>>>>> Cunderlik
>>>>>>> Sent: Thursday, July 15, 2010 2:05 PM
>>>>>>> To: openmrs-devel-l@xxxxxxxxxxxxxxxxxx
>>>>>>> Subject: Re: [OPENMRS-DEV] Performance issues to address in OpenMRS
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> Just following up on this email with regard to the performance improvements
>>>>>>> from last week: what I don't see here or on the top 10 list on the wiki is running
>>>>>>> the app through a Java *and* database profiler under load scenarios.
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> Wyclif's email refers to this indirectly, but just my two cents in
>>>>>>> response to/support of Burke's email: if we are seriously planning to make a
>>>>>>> sustainable difference then we ought to start having standardized load tests
>>>>>>> (with perf benchmarks and profiling on both app and db) as part of every
>>>>>>> major release process.
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> In addition, while I agree that having every dev run a local
>>>>>>> multi-gig DB is not a sustainable way to test performance, I would also
>>>>>>> argue that we can't just let devs completely off the hook either. In the
>>>>>>> end, of course, it is about how to integrate perf into the overall dev life
>>>>>>> cycle; to be practical, a few common-sense steps:
>>>>>>>
>>>>>>> - create perf guidelines for feature design reviews
>>>>>>>
>>>>>>> - minimal perf guidelines for code reviews: a) web, Java, Hibernate,
>>>>>>> RDBMS best practices, b) static code analysis, c) specific OpenMRS metrics of
>>>>>>> our choosing (max & average page size and load time, # of web-to-app and
>>>>>>> app-to-db roundtrips per single user interaction with the application, i.e.
>>>>>>> roundtrips per page render, mandatory explain plans for any new SQL code against
>>>>>>> the obs table, etc.)
>>>>>>>
>>>>>>> - standardized perf test bed (as part of CI or major release cycle,
>>>>>>> ideally both)
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> How far and how formal we make it is certainly open to a reasonable
>>>>>>> conversation and disagreement but having it as part of 'normal' dev process
>>>>>>> is necessary, IMHO.
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> Maros
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On Sun, Jul 11, 2010 at 11:47 PM, Burke Mamlin
>>>>>>> <bmamlin@xxxxxxxxxxxxxxx> wrote:
>>>>>>>
>>>>>>> Wyclif,
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> I think there is some low-hanging fruit for performance
>>>>>>> improvements, like (especially first-time) loading of the patient dashboard,
>>>>>>> patient searching, and concept searching. We'll have to work hard on getting
>>>>>>> an anonymized version of AMPATH's data soon, because running the demo
>>>>>>> dataset in a 1.6.1 virtual appliance on my MacBook Pro is significantly
>>>>>>> snappier than what we see at AMPATH on a high-end server. Something
>>>>>>> (whether it's amount of patient data or some other bottlenecked resource) is
>>>>>>> slowing down the production environments well beyond what I can approximate
>>>>>>> with the demo dataset. It would be nice if we could simulate the production
>>>>>>> issues without forcing every dev working on 1.8 to load multiple gigs of
>>>>>>> data on their machine.
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> FWIW, I cleaned up the Top Ten List for 1.8 Improvements wiki page a
>>>>>>> couple days ago, trying to summarize the input thus far from TRUNK-379.
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> -Burke
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On Jul 9, 2010, at 4:21 PM, Wyclif Luyima wrote:
>>>>>>>
>>>>>>> Hi Devs,
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> The Performance experts at ThoughtWorks had a discussion with Darius
>>>>>>> and they are willing to help us with the task of addressing performance
>>>>>>> issues in OpenMRS.
>>>>>>>
>>>>>>> They asked for a test dataset, which as of now we don't yet have in place,
>>>>>>> so they will probably use the demo dataset in the meantime. They also wish
>>>>>>> to know the common tasks that users concurrently carry out in OpenMRS; with
>>>>>>> this they will be able to simulate a heavy load on the server for purposes
>>>>>>> of load testing and tracing performance bottlenecks, to enable them to come
>>>>>>> up with rational recommendations and probably close solutions to improve
>>>>>>> performance.
>>>>>>> So I am asking for your views about these common tasks (can we call them
>>>>>>> the top 10 or 15 tasks?), and should we consider scheduled tasks as being
>>>>>>> among them?
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> Waiting on all your feedback.
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> Wyclif
>>>>>>>
>>>>>>> ________________________________
>>>>>>>
>>>>>>> Click here to unsubscribe from OpenMRS Developers' mailing list
_________________________________________
To unsubscribe from OpenMRS Developers' mailing list, send an e-mail
to LISTSERV@xxxxxxxxxxxxxxxxxx with "SIGNOFF openmrs-devel-l" in the
body (not the subject) of your e-mail.
[mailto:LISTSERV@xxxxxxxxxxxxxxxxxx?body=SIGNOFF%20openmrs-devel-l]
--
Cheers,
Knut Staring