Re: HPC with Openstack?
On Mon, Dec 05, 2011 at 09:07:06PM -0500, Lorin Hochstein wrote:
>
>
> On Dec 4, 2011, at 7:46 AM, Soren Hansen wrote:
>
> > 2011/12/4 Lorin Hochstein <lorin@xxxxxxx>:
> >> Some of the LXC-related issues we've run into:
> >>
> >> - The CPU affinity issue on LXC you mention. Running LXC with OpenStack, you
> >> don't get proper "space sharing" out of the box; each instance actually sees
> >> all of the available CPUs. It's possible to restrict this, but that
> >> functionality doesn't seem to be exposed through libvirt, so it would have
> >> to be implemented in nova.
I recently added support for CPU affinity to the libvirt LXC driver. It will
be in libvirt 0.9.8. I also wired up various other cgroups tunables including
NUMA memory binding, block I/O tuning and CPU quota/period caps.
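For anyone wanting to try these, a rough sketch of how those tunables surface
in the domain XML (element names as in the libvirt documentation; the instance
name and values below are placeholder illustrations only):

    <domain type='lxc'>
      <name>instance-00000001</name>
      <memory>524288</memory>
      <!-- pin the container's tasks to host CPUs 0-3 -->
      <vcpu cpuset='0-3'>4</vcpu>
      <cputune>
        <!-- relative weight vs other domains -->
        <shares>1024</shares>
        <!-- hard cap: 50ms of CPU time per 100ms period -->
        <period>100000</period>
        <quota>50000</quota>
      </cputune>
      <numatune>
        <!-- bind memory allocation to NUMA node 0 -->
        <memory mode='strict' nodeset='0'/>
      </numatune>
      <blkiotune>
        <!-- block I/O cgroup weight -->
        <weight>500</weight>
      </blkiotune>
      ...
    </domain>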
> >> - LXC doesn't currently support volume attachment through libvirt. We were
> >> able to implement a workaround by invoking "lxc-attach" inside of OpenStack
> >> instead (e.g., see
> >> <https://github.com/usc-isi/nova/blob/hpc-testing/nova/virt/libvirt/connection.py#L482>.
> >> But to be able to use lxc-attach, we had to upgrade the Linux kernel in
> >> RHEL6.1 from 2.6.32 to 2.6.38. This kernel isn't supported by SGI, which
> >> means that we aren't able to load the SGI numa-related kernel modules.
Can you clarify what you mean by volume attachment?
Are you talking about passing through host block devices, or hotplug of
further filesystems for the container ?
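For context, a rough sketch of how those two cases look in LXC domain XML
(paths and device names here are made up for illustration, and support for
the block-device form depends on the libvirt version):

    <!-- hotplug of a further filesystem for the container -->
    <filesystem type='mount'>
      <source dir='/export/nova/vol-0001'/>
      <target dir='/mnt/vol-0001'/>
    </filesystem>

    <!-- vs. passing through a host block device -->
    <filesystem type='block'>
      <source dev='/dev/sdb1'/>
      <target dir='/mnt/vol-0001'/>
    </filesystem>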
> > Why not address these couple of issues in libvirt itself?
If you let me know what issues you have with libvirt + LXC in OpenStack,
I'll put them on my todo list.
Regards,
Daniel
--
|: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org -o- http://virt-manager.org :|
|: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|