fuel-dev team mailing list archive
Message #00289
Re: Ceph journal placement
On Jan 17, 2014 6:42 PM, "Roman Sokolkov" <rsokolkov@xxxxxxxxxxxx> wrote:
> Andrey,
>
> are we able to use multiple block devices for journal purposes right now?
>
> The answer to your question is yes; I have already checked it. I used 2 disks
> for journals and 2 for OSDs, and in that case each journal disk was mapped to a
> single OSD disk. I assume that if I add an additional (third) OSD disk, it will
> remain without a separate journal. Here is the logic:
> https://github.com/stackforge/fuel-library/blob/master/deployment/puppet/ceph/lib/facter/ceph_osd.rb#L30
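(For illustration, a minimal Python sketch of the pairing behaviour described above: each dedicated journal disk is assigned to exactly one OSD disk, and any OSD disk left over keeps its journal on itself. Device names are hypothetical; this is not the actual fuel-library code linked above.)

    # Sketch only: one-to-one pairing of journal disks to OSD disks,
    # with leftover OSD disks falling back to a colocated journal.
    def pair_journals(osd_disks, journal_disks):
        mapping = {}
        for i, osd in enumerate(osd_disks):
            # Use a dedicated journal disk while one is available,
            # otherwise keep the journal on the OSD disk itself.
            mapping[osd] = journal_disks[i] if i < len(journal_disks) else osd
        return mapping

    print(pair_journals(["sdb", "sdc", "sdd"], ["sde", "sdf"]))
    # -> {'sdb': 'sde', 'sdc': 'sdf', 'sdd': 'sdd'}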
>
> But the main question was:
>
> Is there a way to use one journal disk for multiple OSD disks?
>
> For example, I have 4 SSDs and 20 HDDs in a storage node and I want to use 1 SSD per 5
> HDDs. In that case each SSD should contain 5 journal partitions.
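(A hypothetical Python sketch of the layout being asked about: distribute N OSD disks across M SSD journal devices round-robin, so each SSD ends up carrying N/M journal partitions. The device names and the round-robin rule are assumptions for illustration, not an existing Fuel feature.)

    # Sketch only: spread OSD disks over SSD journal devices round-robin.
    def journal_layout(osd_disks, ssd_disks):
        layout = {ssd: [] for ssd in ssd_disks}
        for i, osd in enumerate(osd_disks):
            layout[ssd_disks[i % len(ssd_disks)]].append(osd)
        return layout

    hdds = ["hdd%02d" % i for i in range(20)]
    ssds = ["ssd%d" % i for i in range(4)]
    for ssd, osds in sorted(journal_layout(hdds, ssds).items()):
        # With 20 HDDs and 4 SSDs, each SSD carries 5 journal partitions.
        print("%s -> %d journals: %s" % (ssd, len(osds), ", ".join(osds)))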
>
> Thanks, Roman S.
>
>
> On Fri, Jan 17, 2014 at 6:31 PM, Andrey Korolev <akorolev@xxxxxxxxxxxx> wrote:
>
>> Hello,
>>
>> The services team came up with a question about the Ceph setup - are we able to use
>> multiple block devices for journal purposes right now?
>>
>> Also, I am reviving the issue of journal placement - our current approach of
>> slicing the journal device into equal chunks is not very good in terms of
>> possible reconfiguration: e.g. you cannot add more OSD daemons without
>> shutting down every daemon on the selected nodes, flushing the journals and
>> repartitioning. Can we please switch to filesystem-based journal placement in
>> the next release? By passing the 'discard' option to the filesystem we would
>> be able to maintain the same level of wear-out, and with a fixed or
>> proportional journal size (e.g. not allocating all of the free space) we
>> would achieve greater maintenance flexibility.
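(A small Python sketch of the sizing rule suggested here, under the assumption that "fixed or proportional" means either a fixed size per journal or a fraction of the device's free space split between the journals hosted on it; the function name, parameters and numbers are illustrative only.)

    # Sketch only: size a file-based journal without consuming all free space,
    # leaving room to add more OSD journals to the same device later.
    def journal_size_gb(device_free_gb, journals_on_device, fixed_gb=None, fraction=0.5):
        if fixed_gb is not None:
            return fixed_gb
        # Proportional: split a fraction of the free space between the journals.
        return device_free_gb * fraction / max(journals_on_device, 1)

    print(journal_size_gb(200, 5))               # proportional: 20.0 GB each
    print(journal_size_gb(200, 5, fixed_gb=10))  # fixed: 10 GB each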
>>
>
>
>
> --
> Best regards, Roman Sokolkov
>