
fuel-dev team mailing list archive

Re: bridges galore

+fuel-dev

I assume this will no longer be an issue if we get rid of Swift in favor of
Ceph.

Thanks,
Roman

On Thursday, January 9, 2014, Andrey Korolyov wrote:

> Hello,
>
> I suspect we've mixed up several different points in this
> conversation:
> 1) the first problem is packet processing in the pre-megaflow era
> (megaflows arrived in OVS >= 1.10), which adds a bit of overhead but
> is fine until someone tries to build a production-quality video
> streaming edge server on top of it,
> 2) our current vlan splinters workaround actually adds a relatively
> large overhead (the exact knob is shown at the end of this message),
> though it is tolerable until we launch the legacy OpenStack Swift
> daemon, which does enormous amounts of socket polling and therefore
> brings the system to its knees with a truckload of context switches,
> 3) our current topology, though pretty complex, does not add much
> overhead compared with a solution using fewer bridges; GRE tunnel
> performance is the first bottleneck in all bandwidth-related cases
> (packet rate may hit plain openvswitch bridges as hard as GRE
> endpoints). A sketch of both layouts follows below.
>
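> To make the comparison concrete, here is a minimal sketch of both
> layouts (bridge names, interface names and the VLAN tag are
> illustrative placeholders, not our exact deployment). Current scheme,
> one OVS bridge per network, wired to the physical bridge through an
> OVS patch pair:
>
>     # one bridge per network, linked to br-eth0 by a patch-port pair
>     ovs-vsctl add-br br-eth0
>     ovs-vsctl add-port br-eth0 eth0
>     ovs-vsctl add-br br-mgmt
>     ovs-vsctl add-port br-eth0 eth0--br-mgmt tag=101 \
>         -- set interface eth0--br-mgmt type=patch options:peer=br-mgmt--eth0
>     ovs-vsctl add-port br-mgmt br-mgmt--eth0 \
>         -- set interface br-mgmt--eth0 type=patch options:peer=eth0--br-mgmt
>
> versus the flat alternative Greg asks about, a tagged internal port
> per network on the single bridge:
>
>     # one tagged internal port per network, no extra bridges
>     ovs-vsctl add-port br-eth0 mgmt tag=101 \
>         -- set interface mgmt type=internal
>
> Worth noting for Greg's last question: patch ports are resolved at
> flow-translation time inside a single kernel datapath, so crossing
> them does not copy the packet, which is why the extra bridges by
> themselves cost little.
>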
> So, since only bullet #2 can be a real issue from the very beginning,
> we should take care, at least for paying customers, to work around it:
> install the newer kernel shipped in the 4.x series (3.10), or advise
> against the legacy Swift installation in favour of Ceph with RGW.
>
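> For reference, the vlan splinters workaround mentioned in bullet #2 is
> the per-interface knob below (eth0 is just a placeholder):
>
>     ovs-vsctl set interface eth0 other-config:enable-vlan-splinters=true
>
> It makes OVS create a Linux VLAN device for every in-use VLAN on the
> interface to dodge broken VLAN handling in some NIC drivers, and those
> extra devices are where the overhead comes from.
>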
> On 01/09/2014 10:39 PM, Mike Scherbakov wrote:
> > Sergey, Andrey,
> > can you guys comment on the performance implications Greg is talking
> > about?
> >
> >     On Wednesday, January 8, 2014, Gregory Elkinbard wrote:
> >
> >         I have confirmed with various sources that putting in
> >         bridges, while convenient, will increase both latency and
> >         CPU overhead.
> >
> >         Thanks
> >         Greg
> >
> >         On Tuesday, January 7, 2014, Gregory Elkinbard wrote:
> >
> >         > I am trying to make sense of our network scheme.
> >         > For some reason we install a bridge for every single VLAN
> >         > network that we are running,
> >         > so we have a bridge for storage, management, external, etc.
> >         >
> >         > Why are these not simply defined as ports on br-ethX?
> >         > What are the performance implications of having this many
> >         > bridges? Are the packets actually copied when they traverse
> >         > from bridge to bridge, or are we handing off memory
> >         > references?
> >         >
> >         > Thanks
> >         > Greg
> >
> > --
> > Mike Scherbakov
> > #mihgen
>
>