fuel-dev team mailing list archive
Message #00370
Re: Ceilometer+ mongo (simple and HA)
Seconded.
Also, when we implement this we should take the current Ceph disk
space allocation code into account: it already allows dedicating
separate drives to OSD and OSD journal devices, although it assumes
that Ceph is the only component that requires whole disks. In its
current form it may not work correctly if we try to combine e.g. the
ceph-osd and ceilometer-db roles on a single node.
-DmitryB
On Tue, Feb 4, 2014 at 1:00 AM, Bogdan Dobrelya <bdobrelia@xxxxxxxxxxxx> wrote:
> On 02/04/2014 08:21 AM, Matthew Mosesohn wrote:
>> One major element to consider is the number of transactions per second
>> we need to anticipate, rather than volume in MB/GB/TB. No matter how
>> small a transaction is, it creates overhead. Yes, storage capacity is
>> important, but the system has to remain functional in order to fill the
>> disk up with Ceilometer events.
>> What we ought to do is determine the breaking point: how many
>> nova/neutron/glance DB transactions per minute create so much
>> API/Galera traffic plus Ceilometer traffic that it starts to interfere
>> with normal user requests?
>>
>> I'm still working on sizing for Galera/SQLAlchemy, and that breaking
>> point will rise for environments with many CPUs and lots of memory,
>> but it won't go away.
>> I vote for Ceilometer + MongoDB as a role and on a separate node if we
>> have over 50 nodes.
>>
> +1, I vote for making roles as atomic as possible. I believe the best
> practice for enterprise clusters (i.e. HA and large-scale ones) is to
> separate roles, not to combine them.
>>
>>
>> On Tue, Feb 4, 2014 at 9:55 AM, Mike Scherbakov
>> <mscherbakov@xxxxxxxxxxxx> wrote:
>>> Some numbers can be found here for large amount of instances (20k) and
>>> volumes (20k):
>>> https://docs.google.com/a/mirantis.com/spreadsheet/ccc?key=0AtziNGvs-uPudDhRbEJJOHFXV3d0ZGc1WE9NLTVPX0E#gid=0
>>>
>>> According to this table, it generates 339 MB/hour, or ~8 GB/day, or
>>> ~0.25 TB/month. Even at such a scale, it should survive on existing
>>> controller nodes if we allocate dedicated disks, shouldn't it?
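A quick sanity check of the arithmetic above, starting from the 339 MB/hour figure taken from the linked spreadsheet:

```python
# Rough Ceilometer storage projection from the ~339 MB/hour figure above.
MB_PER_HOUR = 339

per_day_gb = MB_PER_HOUR * 24 / 1024    # MB/hour -> GB/day
per_month_tb = per_day_gb * 30 / 1024   # GB/day -> TB/month (30 days)

print(f"~{per_day_gb:.1f} GB/day, ~{per_month_tb:.2f} TB/month")
```

This comes out at roughly 7.9 GB/day and 0.23 TB/month, consistent with the ballpark figures quoted above.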
>>>
>>>
>>>
>>> On Tue, Feb 4, 2014 at 9:44 AM, Mike Scherbakov <mscherbakov@xxxxxxxxxxxx>
>>> wrote:
>>>>
>>>> +David, Nadya
>>>>
>>>> Hi Max,
>>>> as we discussed verbally, there is a major concern about the placement
>>>> of MongoDB. As I understand it, heavy disk IO consumption is expected
>>>> in larger deployments (let's say >50 nodes).
>>>> If that is the case, then I would agree that we may not want to share
>>>> disks between MongoDB and other OpenStack components. I see two options
>>>> here:
>>>>
>>>> 1. Make sure MongoDB uses dedicated disk(s) on the server where it's
>>>> installed; it can then be part of the existing controller role.
>>>>
>>>> Nailgun can make the default allocation such that MongoDB gets a
>>>> dedicated disk by default whenever the server has more than one disk
>>>> (which I assume covers virtually 100% of real cases).
>>>> The user experience would be simply enabling Ceilometer installation by
>>>> clicking a checkbox. In simple mode, ceilometer + mongo will be installed
>>>> on the controller node. In HA mode, ceilometer + mongo will be installed
>>>> on all 3 controllers under Pacemaker control.
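The default allocation in option #1 could look something like the sketch below. This is hypothetical, not actual Nailgun code; the policy of picking the largest non-boot disk is an assumption for illustration:

```python
# Hypothetical Nailgun-style default allocation: when Ceilometer is
# enabled and a node has more than one disk, dedicate one non-boot
# disk entirely to MongoDB and leave the rest to the other volumes.
def allocate_mongo_disk(disks, ceilometer_enabled):
    """disks: list of (name, size_gb) tuples, boot disk first.
    Returns the disk name reserved for MongoDB, or None."""
    if not ceilometer_enabled or len(disks) < 2:
        return None  # single disk: MongoDB shares the OS volumes
    # Pick the largest non-boot disk for MongoDB data (assumed policy).
    return max(disks[1:], key=lambda d: d[1])[0]
```

For example, on a node with sda (boot, 100 GB), sdb (500 GB), and sdc (300 GB), this would reserve sdb for MongoDB.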
>>>>
>>>> 2. Make sure MongoDB is installed on a separate server.
>>>>
>>>> In the UI, the user will have to:
>>>>
>>>> - enable the Ceilometer checkbox ("Install Ceilometer")
>>>> - remember to add the "ceilometer-db" (MongoDB) role to one of the
>>>> unallocated nodes
>>>>
>>>> The UI must ensure that this role does not intersect with any other
>>>> role; that it is assigned to at least one node in the environment if the
>>>> Ceilometer checkbox is enabled; and that it is assigned to at least 3
>>>> different unallocated nodes in HA deployment mode, to ensure that
>>>> Ceilometer itself is HA (we can skip this, but then add logic that if we
>>>> have more than one mongo node, we must build a cluster).
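The three UI constraints above could be checked with logic along these lines. This is a hypothetical sketch; only the role name "ceilometer-db" comes from the thread:

```python
# Hypothetical validation mirroring the three UI rules listed above.
def validate_ceilometer_roles(node_roles, ceilometer_enabled, ha_mode):
    """node_roles: list of role-name sets, one set per node."""
    errors = []
    db_nodes = [r for r in node_roles if "ceilometer-db" in r]

    # Rule 1: ceilometer-db must not intersect with any other role.
    if any(r != {"ceilometer-db"} for r in db_nodes):
        errors.append("ceilometer-db must not share a node with other roles")

    # Rule 2: at least one ceilometer-db node when Ceilometer is enabled.
    if ceilometer_enabled and not db_nodes:
        errors.append("Ceilometer enabled but no ceilometer-db node assigned")

    # Rule 3: HA mode needs >= 3 nodes for a MongoDB replica set.
    if ceilometer_enabled and ha_mode and len(db_nodes) < 3:
        errors.append("HA mode requires at least 3 ceilometer-db nodes")

    return errors
```

A single ceilometer-db node passes in simple mode but fails rule 3 in HA mode, which matches the behavior described above.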
>>>>
>>>> In terms of simplicity and ease of use, I would vote for option #1,
>>>> while leaving the ability to place MongoDB on a separate server via the
>>>> Fuel CLI for customized deployments. #1 solves the disk IO issue by
>>>> providing dedicated disk(s).
>>>>
>>>>> Do we need HA Cluster with non-HA Mongo?
>>>> For consistency across the Fuel story, I vote for HA for all OpenStack
>>>> components if HA mode is chosen. So my opinion is no - we do not need
>>>> such a case.
>>>>
>>>>> Puppet manifests are finished
>>>> Great! Please open pull requests for them ASAP - we will need time for
>>>> reviews. I hope we can complete the manifests for the HA story by the
>>>> end of the week too.
>>>>
>>>> Thanks,
>>>>
>>>>
>>>> On Mon, Feb 3, 2014 at 5:37 PM, Max Mazur <mmaxur@xxxxxxxxxxxx> wrote:
>>>>>
>>>>> Hi!
>>>>>
>>>>>
>>>>> I'd like to add to Fuel the following options:
>>>>>
>>>>> 1. Simple install
>>>>> - Ceilometer with MongoDB or Ceilometer with [...]. If the customer
>>>>> selects Mongo, it is necessary to deploy one more node with MongoDB.
>>>>> Puppet manifests are finished
>>>>>
>>>>> 2. HA Mode
>>>>> - Ceilometer with a MongoDB replica set. In this case we need 3
>>>>> MongoDB nodes to build an HA replica set.
>>>>> Puppet manifests are in progress now
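The 3-node replica set could be described with a configuration along these lines. The replica set name and IPs are placeholders, not values from the thread; in Fuel they would presumably be the three controllers:

```python
# Replica set config for a 3-node Ceilometer MongoDB backend.
# IPs and the set name "ceilometer" are placeholders for illustration.
members = ["10.0.0.3", "10.0.0.4", "10.0.0.5"]

rs_config = {
    "_id": "ceilometer",
    "members": [{"_id": i, "host": f"{ip}:27017"}
                for i, ip in enumerate(members)],
}

# With pymongo, the set would be initiated once against any one member:
#   MongoClient(members[0]).admin.command("replSetInitiate", rs_config)
# Three voting members keep the set writable if any single node fails.
```

This is why HA mode needs exactly three MongoDB nodes at minimum: with two, losing either node leaves no majority to elect a primary.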
>>>>>
>>>>>
>>>>> Do we need HA Cluster with non-HA Mongo?
>>>>>
>>>>> Best Regards,
>>>>> Max Mazur
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Mailing list: https://launchpad.net/~fuel-dev
>>>>> Post to : fuel-dev@xxxxxxxxxxxxxxxxxxx
>>>>> Unsubscribe : https://launchpad.net/~fuel-dev
>>>>> More help : https://help.launchpad.net/ListHelp
>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> Mike Scherbakov
>>>> #mihgen
>>>
>>>
>>>
>>>
>>> --
>>> Mike Scherbakov
>>> #mihgen
>>>
>>>
>>
>
>
> --
> Best regards,
> Bogdan Dobrelya.
>
>
--
Dmitry Borodaenko