Re: OpenERP: Hardware sizing for OpenERP
Let's take this rather technical discussion about sizing and configuring
an OpenERP production environment to the community list...
See my random comments below in the text...
On 01/05/2012 08:38 PM, Rostam Azarbehi wrote:
> Hi Everyone,
>
> Just my two cents, if that is ok. I agree with the comments: there
> should be some basic expectations for each of the major components of
> OpenERP. PostgreSQL is pretty straightforward as that info is readily
> available. However, there are no clear measures for the OpenERP server
> or the web server.
PostgreSQL is probably one of the most important components to
configure, because the default settings are quite conservative and mean
you might not be getting the best performance out of your hardware.
This is the reason why the links I mentioned are mostly
postgres-related [2,3,4].
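To make that concrete, here is the kind of postgresql.conf tuning those
links typically suggest for a dedicated 16GB database machine. The
values below are illustrative rules of thumb on my part, not tested
recommendations, so treat them as a starting point to measure against:

    # postgresql.conf -- illustrative starting values for a dedicated
    # machine with 16GB RAM, derived from the rules of thumb in [2,3,4]
    shared_buffers = 4GB           # ~25% of RAM
    effective_cache_size = 8GB     # ~50% of RAM; a planner hint only,
                                   # it doesn't allocate anything
    work_mem = 32MB                # per sort/hash and per connection,
                                   # so keep it modest
    maintenance_work_mem = 800MB   # ~50MB per GB of RAM
    checkpoint_segments = 16       # smoother checkpoints under write load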
The other OpenERP components (web and server) don't have configuration
settings that would make such a big "generic" difference. What matters
here is the deployment architecture: whether you will be running all
components on the same machine or not, whether you will be doing
load-balancing, etc.
OpenERP 6.1 introduces several important changes [5] to make the server
truly stateless and WSGI-compliant, so that really scalable deployment
architectures are easier to achieve (including harnessing all CPU cores
for OpenERP!).
We don't have any recommended settings yet, but you can find a sample
"gunicorn.conf.py" configuration file [6] in the server to get started.
All feedback on deployment experience with gunicorn, mod_wsgi or others
would be most welcome, and I'll be glad to summarize that in our
installation documentation for 6.1.
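For illustration only, a minimal config along the lines of that sample
file could look like this (the values are my own rough defaults, not
tested recommendations; the real sample also wires in OpenERP-specific
hooks, so check it first):

    # gunicorn.conf.py -- minimal sketch, see the actual sample [6]
    bind = '127.0.0.1:8069'   # the usual OpenERP port
    workers = 4               # rule of thumb: ~2x CPU cores, then measure
    timeout = 240             # OpenERP requests can be long-running
    max_requests = 2000       # recycle workers to limit memory growth

You would then start the server through gunicorn instead of
openerp-server, along these lines (double-check the exact WSGI entry
point against the sample file and the 6.1 release notes [5]):

    gunicorn openerp:wsgi.core.application -c /path/to/gunicorn.conf.py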
The server team also plans to improve the isolation of "OpenERP
workers" soon (early 2012, for the next version), making it possible to
enforce per-process limits. As part of this task we should be
benchmarking typical OpenERP flows under various configurations, so
we'll be able to provide some raw benchmark numbers.
> In other solutions, like General Dynamics and
> Prophet 21, we have received recommended requirements based on load
> factors for an installation with base components. Yes, there are many
> complexities in OpenERP, especially when you consider all the modules
> partners have created, but basic specs based on the base modules should
> be easy to create for different load factors. As partners we can factor
> in buffer capacity for additional modules and complexities when we start
> working with the clients.
I'd be interested to see in what form the "load factor"-based
requirements were provided to you, to see if similar numbers could be
provided for OpenERP (unless this is confidential).
For OpenERP 6.0 and earlier the "generic" requirements are so simple
compared to other solutions that they're barely worth mentioning:
*one decent server machine*.
Who would be serious enough to ask about hardware requirements and still
expect to deploy on less than that anyway?
More precise requirements depend on so many project-specific factors
that they'd quickly be misleading... If you expect heavy load (e.g.
several dozen active concurrent users), split the database and the
OpenERP server onto separate, similarly spec'd machines. And this rule
remains the most important one:
*test in real conditions and measure!*
Note: a decent server machine means a recent multi-core machine with a
lot of RAM (it's cheap, 16GB minimum) and fast RAIDed disks.
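Schematically, the heavy-load setup I mean is simply (one possible
topology, not the only valid one):

    [clients] --> [OpenERP server + web client] --> [PostgreSQL server]
                  (multi-core, 16GB+ RAM)           (similar specs,
                                                     fast RAIDed disks)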
As of 6.1 and with a WSGI-based deployment with multiple workers, you
should be able to get more bang for the buck on your existing
infrastructure, but we don't have numbers yet.
> On 12-01-04 02:35 PM, Hery Atmadja wrote:
>> Hi everyone,
>>
>> While I do share Olivier's viewpoint in principle, as Carlos has
>> kindly pointed out it is ultimately beneficial for OpenERP and
>> partners to move towards establishing a basic framework for
>> hardware sizing. Unfortunately, the added complexity in all of
>> this also lies in the fact that _not a single entity/party_ knows
>> exactly how all the various modules (authored by many parties)
>> would impact the sizing (load).
>>
>> Lacking a more 'methodical' way of creating a 'yardstick', a
>> possible viable alternative is to start comparing it to live sites.
>> With enough data, potential 'methods' or 'sizing parameters' may
>> be surmised as a start. The option is either we stay put or we
>> move forward together.
Sure, and I'll be glad to summarize any feedback regarding performance
and deployment settings in our documentation.
>> On 01/05/2012 12:05 AM, Carlos Almeida wrote:
>>> Hello everyone,
>>>
>>> I found Olivier's email very useful, thank you.
>>>
>>> But your email points our attention only to postgres. The links
>>> are very useful but they only refer to database optimization.
As I mentioned, it is the main thing that needs configuration.
>>> There must be some parametrisation and/or configuration rules of
>>> thumb for OpenERP; for example, postgres suggests 50MB of
>>> maintenance_work_mem for every GB of RAM, something like this.
The workhorse of OpenERP is PostgreSQL, so this is where such
configuration settings matter most.
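Incidentally, the pgtune utility [2] automates exactly this kind of
rule of thumb: it takes your current postgresql.conf and emits a tuned
copy based on your RAM and workload type. Roughly like this (check
pgtune's help for the options supported by your version; the paths
here are made up):

    pgtune -i /etc/postgresql/9.1/main/postgresql.conf \
           -o postgresql.conf.tuned -T Web -c 100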
>>> Also in OpenERP we have to deal with (i) server and (ii) web
>>> client parameters; it's even worse when they are separated,
>>> because they have to communicate through sockets, and so on...
The configuration parameters of OpenERP won't affect performance except
in very specific cases; they're mostly there to connect the various
components together.
>>> At least having some "starting base" to work on, or some basic
>>> (not quite right) rules, would be great for a start. I'm talking
>>> about memory settings, timeouts and connection-related settings
>>> (e.g. server.thread_pool, server.socket_queue_size,
>>> server.socket_timeout, openerp.server.timeout, db_maxconn,
>>> etc).
The timeout settings only start to matter if you're actually hitting
them, which means you're either running long operations or already
having performance issues. Raising them won't solve any performance
issue, and fine-tuning them depends on why you're hitting them in the
first place.
The main connection limit that matters is PostgreSQL's max_connections
(again), which should be bumped up according to the number of concurrent
requests you expect to service: 10 concurrent requests = 10 db
connections. The db_maxconn limit of the server should be set
accordingly, depending on how many servers/workers are connecting to
the db.
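As a quick worked example (the numbers are made up): with 2 OpenERP
server processes each allowed db_maxconn = 64, PostgreSQL may see up to
2 x 64 = 128 connections, so max_connections should be at least ~150 to
leave headroom for cron jobs, psql sessions, backups, etc.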
[1] http://wiki.postgresql.org/wiki/Replication,_Clustering,_and_Connection_Pooling
[2] pgtune utility: https://github.com/gregs1104/pgtune
[3] http://blip.tv/djangocon/secrets-of-postgresql-performance-5572403
[4] http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server
[5] Draft 6.1 release notes: http://bit.ly/zzxdsK
[6] Sample gunicorn.conf.py: http://bazaar.launchpad.net/~openerp/openobject-server/trunk/view/head:/gunicorn.conf.py