
fuel-dev team mailing list archive

Re: HA for MySQL and ceph-mon: active/active?


I submitted https://bugs.launchpad.net/fuel/+bug/1270840 about this.
Anyone, feel free to take it. It is targeted to 4.1.

Thanks,


On Fri, Jan 10, 2014 at 11:04 PM, Roman Alekseenkov <
ralekseenkov@xxxxxxxxxxxx> wrote:

> Dmitry,
>
> You are welcome to create a pull request against the documentation to make
> it more precise.
>
> Thanks,
> Roman
>
>
> On Friday, January 10, 2014, Dmitriy Novakovskiy wrote:
>
>> Thanks guys. That helped a lot.
>>
>> ---
>> Regards,
>> Dmitriy
>>
>>
>> On Fri, Jan 10, 2014 at 5:44 AM, Miroslav Anashkin <
>> manashkin@xxxxxxxxxxxx> wrote:
>>
>> Hi Dmitriy,
>>
>> A1. Yes, MySQL+Galera is a true master-master solution, although it is
>> possible, and with 6+ nodes even recommended, to set up 1-2 Galera nodes
>> as slaves for additional data consistency.
>> Previous Fuel versions used HAProxy to manage MySQL.
>> In Mirantis OpenStack (starting from 3.0 or 3.1) the MySQL+Galera
>> cluster is managed by Pacemaker.
>> There is no single master node in a Galera cluster at all - there are nodes
>> holding the most recent data replica and nodes which still have to sync with
>> that replica.
>> The workflow is simple: a node serves a data-changing request and increments
>> its sequence number. All the other nodes must then synchronize their data
>> with the nodes whose sequence number is greater than their own.
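The catch-up rule described above can be sketched roughly as follows (hypothetical function and node names, not Galera's actual API; real Galera tracks writes with global transaction IDs rather than a bare counter):

```python
# Hypothetical sketch of the Galera catch-up rule: every node whose
# sequence number is behind the cluster maximum must synchronize.

def nodes_to_sync(seqnos):
    """Given each node's latest applied sequence number, return the
    nodes that still have to sync with the most recent replica."""
    latest = max(seqnos.values())
    return sorted(node for node, s in seqnos.items() if s < latest)

cluster = {"node-1": 1042, "node-2": 1042, "node-3": 1038}
print(nodes_to_sync(cluster))  # -> ['node-3']
```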
>>
>> A2. Yes, CEPH monitors are also master-master. One of them periodically
>> becomes the leader. The leader is the node which got the most recent
>> cluster map replica first. The other monitor nodes must sync their
>> cluster map with the current leader. Every monitor node already synced
>> with the leader becomes a provider, and the leader knows which nodes are
>> currently providers. So the leader also tells the other nodes which
>> provider each of them should use to get the data from.
>> The CEPH monitor synchronization algorithm works in a similar way to
>> Galera, but CEPH nodes are split by functionality into monitor nodes and
>> data storage (OSD) nodes, whereas every Galera node runs the same set of
>> services on each node. So CEPH monitor nodes only manage where the data
>> should actually be stored and maintain data consistency between OSD nodes.
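The leader/provider roles described above can be sketched as follows (hypothetical names and a toy leader rule, not Ceph's actual Paxos-based election; here the monitor with the newest map epoch simply stands in for the leader):

```python
# Hypothetical sketch of Ceph monitor sync roles: the leader holds the
# newest cluster map, already-synced monitors become providers, and the
# leader assigns a provider to each lagging monitor.

def elect_leader(epochs):
    """Toy stand-in for leader election: pick the monitor holding the
    most recent cluster map epoch."""
    return max(epochs, key=epochs.get)

def assign_providers(epochs):
    """Return a mapping: lagging monitor -> provider it should sync from."""
    leader = elect_leader(epochs)
    latest = epochs[leader]
    # Monitors already at the latest epoch act as providers.
    providers = [m for m, e in epochs.items() if e == latest]
    lagging = [m for m, e in epochs.items() if e < latest]
    # The leader spreads lagging monitors across the available providers.
    return {m: providers[i % len(providers)] for i, m in enumerate(lagging)}

mons = {"mon-a": 42, "mon-b": 42, "mon-c": 41}
print(elect_leader(mons))      # -> mon-a
print(assign_providers(mons))  # -> {'mon-c': 'mon-a'}
```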
>>
>>
>> A3. In addition to the previous answers, I may add that Neutron acts,
>> among its other functionality, as a router. That is the reason why there
>> are single entry points.
>>
>>
>> On Fri, Jan 10, 2014 at 12:18 PM, Dmitriy Novakovskiy <
>> dnovakovskiy@xxxxxxxxxxxx> wrote:
>>
>> Hi guys,
>>
>> I'm currently kicking off a project with a new customer, and stumbled on
>> some holes in my understanding of the Mirantis HA architecture.
>>
>> Can you please help me understand the following:
>>
>> Q1. MySQL + Galera - is it active/active HA? I was told, and tend to
>> think, "yes", but want to understand it better. A simple workflow example
>> would help
>>
>> Q2. ceph-mon on controllers - same question as Q1
>>
>> Q3. Neutron - is it in active/standby HA? I got this understanding from
>> the docs and want to understand why. I was told that Grizzly and Havana
>> support multiple L3 agents, but we don't leverage that in Fuel for some
>> reason.
>>
>> Thanks a lot in advance
>>
>> ---
>> Regards,
>> Dmitriy
>>
>> --
>> Mailing list: https://launchpad.net/~fuel-dev
>> Post to     : fuel-dev@xxxxxxxxxxxxxxxxxxx
>> Unsubscribe : https://launchpad.net/~fuel-dev
>> More help   : https://help.launchpad.net/ListHelp
>>
>>
>>
>>
>> --
>>
>> *Kind Regards*
>>
>> *Miroslav Anashkin*, *L2 support engineer*,
>> *Mirantis Inc.*
>>
>>
>
>


-- 
Mike Scherbakov
#mihgen
