
dhis2-users team mailing list archive

Re: [Dhis2-devs] 25 hours in completing Analytic


Hi Neeraj,

Using VACUUM and ANALYZE

Like Brajesh, my background is MySQL, and one database admin task that is often overlooked in MySQL is running OPTIMIZE TABLE.  This reclaims unused space (we’ve had 100 GB database files drop to half their size) and refreshes index statistics (if the shape of your data has changed over time, this can make queries that use those indices faster).
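For anyone on the MySQL side wanting to try this, it is a one-liner per table – the table and schema names below are just placeholders, not anything from a real DHIS2 install:

```sql
-- Reclaim unused space and refresh index statistics for one table.
-- On InnoDB this rebuilds the table, so run it during a quiet period.
OPTIMIZE TABLE my_table;

-- Check estimated reclaimable space beforehand:
SELECT table_name,
       ROUND(data_free / 1024 / 1024) AS free_mb
FROM information_schema.tables
WHERE table_schema = 'my_database';
```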

I’m new to PostgreSQL, but the core principles are the same, and a quick bit of Googling shows that the equivalents in PostgreSQL are the VACUUM and ANALYZE commands.  If your database isn’t set to automatically do VACUUMs (the default DHIS2 postgres config doesn’t seem to be), you might want to try VACUUM FULL, which will literally rewrite all of your database tables and indices into smaller, more efficient files (note, however, that it takes an exclusive lock on each table while it runs, and on a 500 GB database it could take a looong time – perhaps test on a backup first?).  The following forum post is a really nice, plain-English explanation of what VACUUM does:
http://dba.stackexchange.com/questions/126258/what-is-table-bloating-in-databases
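As a sketch of what those commands look like in practice (these are the standard PostgreSQL maintenance commands; as I said, test VACUUM FULL on a copy of your database first):

```sql
-- Refresh planner statistics only (cheap, safe to run at any time):
ANALYZE;

-- Mark dead rows as reusable without rewriting files (no exclusive lock):
VACUUM ANALYZE;

-- Fully rewrite tables and indices into compact files.
-- WARNING: takes an exclusive lock on each table while it runs,
-- and needs enough free disk space for a temporary copy of each table.
VACUUM FULL ANALYZE;

-- Check whether the autovacuum daemon is enabled on your server:
SHOW autovacuum;
```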

As I mentioned, my background is MySQL rather than Postgres, so someone with more specific Postgres experience might like to also chime in here.

Cheers, Sam.


From: Dhis2-users <dhis2-users-bounces+samuel.johnson=qebo.co.uk@xxxxxxxxxxxxxxxxxxx> on behalf of Brajesh Murari <brajesh.murari@xxxxxxxxx>
Date: Wednesday, 19 October 2016 at 08:28
To: Knut Staring <knutst@xxxxxxxxx>
Cc: DHIS 2 Users list <dhis2-users@xxxxxxxxxxxxxxxxxxx>, DHIS2 Developers <dhis2-devs@xxxxxxxxxxxxxxxxxxx>
Subject: Re: [Dhis2-users] [Dhis2-devs] 25 hours in completing Analytic

Dear Neeraj,

The physical database size doesn't matter much, and even the number of records doesn't matter on its own. In my experience the biggest problem you are going to run into is not size, but the number of queries you can handle at a given moment, especially while the analytics functionality is executing. Most probably you will have to move to a master/slave configuration for your database, so that read queries run against the slaves and write queries run against the master. However, if you and your database management team are not ready for this, you can tweak your indexes for the queries you are running to speed up response times. There is also a lot of tweaking you can do to the network stack and kernel on the Linux server where the database (MySQL in my case) is installed that will help. I would focus first on your indexes, then have a server admin look at your OS, and if all that doesn't help it might be time to implement a master/slave configuration.

The most important scalability factor is RAM. If the indexes of your tables fit into memory and your analytics queries are highly optimized, you can serve a reasonable number of requests with an average machine. The number of records does matter, depending on what your tables look like: it makes a difference whether you have a lot of varchar fields or only a couple of ints or longs. The physical size of the database matters as well – think of backups, for instance. Depending on your engine, the physical db files grow but don't shrink (with InnoDB, for instance), so deleting a lot of rows doesn't shrink your physical files. Thus the database size does matter.

If you have more than one table with more than a million records, performance does indeed start to degrade. Indexing is one of the important things to take care of: once you hit a million records you will get performance problems if the indices are not set right (for example, no indices on fields used in WHERE clauses or in the ON conditions of joins). Once you hit 10 million records, you will start to get performance problems even if you have all your indices right. Hardware upgrades – adding more memory and more processor power, especially memory – often help to reduce the most severe problems, at least to a certain degree.
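To make the point about WHERE clauses and join conditions concrete, here is a sketch in standard SQL – the table and column names are invented for illustration, not taken from the actual DHIS2 schema:

```sql
-- A query like this...
--   SELECT v.value FROM datavalue v
--   JOIN period p ON v.periodid = p.periodid
--   WHERE v.dataelementid = 42;
-- ...benefits from indices on the join key and the filter column:
CREATE INDEX idx_datavalue_periodid      ON datavalue (periodid);
CREATE INDEX idx_datavalue_dataelementid ON datavalue (dataelementid);

-- In PostgreSQL, EXPLAIN ANALYZE shows whether an index is actually used:
EXPLAIN ANALYZE
SELECT value FROM datavalue WHERE dataelementid = 42;
```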

On Wed, Oct 19, 2016 at 12:35 PM, Knut Staring <knutst@xxxxxxxxx> wrote:
Just a heads-up that there seems to be a JDBC issue with Postgres 9.6, so perhaps you should try upgrading to 9.5 first.

On Wed, Oct 19, 2016 at 8:58 AM, Lars Helge Øverland <lars@xxxxxxxxx> wrote:

Hi Neeraj,

What usually helps to improve analytics runtime is to improve/increase:

- SSD (read and write speed)
- number of CPUs
- using the latest PostgreSQL (9.6 claims to have even better index performance <https://www.postgresql.org/docs/9.6/static/release-9-6.html> than 9.5)
- tuning <https://dhis2.github.io/dhis2-docs/master/en/implementer/html/install_server_setup.html#install_postgresql_performance_tuning> of PostgreSQL
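The tuning guide linked above comes down to a handful of postgresql.conf settings. The values below are only an illustrative sketch for a machine with around 16 GB of RAM, not numbers taken from the DHIS2 documentation – the right figures depend on your hardware and workload:

```
# postgresql.conf – illustrative values only, adjust to your hardware
shared_buffers = 4GB                # ~25% of RAM is a common starting point
work_mem = 32MB                     # per sort/hash operation, so keep modest
maintenance_work_mem = 1GB          # speeds up VACUUM and CREATE INDEX
effective_cache_size = 12GB         # planner hint: RAM available for caching
checkpoint_completion_target = 0.9  # spread checkpoint I/O over time
```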


regards,

Lars



--
Lars Helge Øverland
Lead developer, DHIS 2
University of Oslo
Skype: larshelgeoverland
lars@xxxxxxxxx
https://www.dhis2.org/


_______________________________________________
Mailing list: https://launchpad.net/~dhis2-users
Post to     : dhis2-users@xxxxxxxxxxxxxxxxxxx
Unsubscribe : https://launchpad.net/~dhis2-users
More help   : https://help.launchpad.net/ListHelp



--
Knut Staring
Dept. of Informatics, University of Oslo
Norway: +4791880522
Skype: knutstar
http://dhis2.org

_______________________________________________
Mailing list: https://launchpad.net/~dhis2-devs
Post to     : dhis2-devs@xxxxxxxxxxxxxxxxxxx
Unsubscribe : https://launchpad.net/~dhis2-devs
More help   : https://help.launchpad.net/ListHelp



--
Best Regards,

Brajesh Murari,
Postgraduate, Department of Computer Science and Engineering,
Chaudhary Devi Lal University, Sirsa,
India.

The three basic dimensions of human development: a long and healthy life, access to knowledge, and a decent standard of living.
