
yahoo-eng-team team mailing list archive

[Bug 1616388] [NEW] [RFE] Provide separation between VNIC and VIF in core Nova and Neutron


Public bug reported:

It might be easier to view the VM plugging process in OpenStack as composed of three parts, each an attribute of a Neutron port:
* The VIF edge (plugging a TAP device into a bridge, configuring a NIC's VEB, some other form of virtual port manipulation…)
* The VNIC edge (hypervisor config, emulated hardware model, etc.)
* The transport mechanism (vhost-net, vhost-user, vfio for SR-IOV, etc.)
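The three parts above could be sketched as independent, composable objects. This is a hypothetical illustration, not an existing API; the names PortVIF, PortVNIC, and Transport are invented for the sketch:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class PortVIF:
    """The backend edge: how the port attaches to the host's networking."""
    kind: str                    # e.g. 'ovs', 'linux_bridge', 'sriov-veb'
    bridge: Optional[str] = None # bridge name, when applicable

@dataclass(frozen=True)
class PortVNIC:
    """The guest edge: what the hypervisor emulates for the VM."""
    model: str                   # e.g. 'virtio', 'e1000'

@dataclass(frozen=True)
class Transport:
    """The datapath connecting the two edges."""
    kind: str                    # e.g. 'vhost-net', 'vhost-user', 'vfio'

# A Neutron port would then carry one of each, set independently:
port = (PortVIF('ovs', 'br-int'), PortVNIC('virtio'), Transport('vhost-net'))
```

The point of the separation is that any edge can change without touching the others: swapping vhost-net for vhost-user would alter only the Transport value.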

Currently, there are a few places in core OpenStack that conflate the
three concepts. For example, nova/network/model.py has VNIC_TYPE_*
constants that denote the different methods of connecting to the
hardware, while VIF_MODEL_* denotes the different hardware emulation
settings for the hypervisor.
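To make the conflation concrete, here is an abridged sketch of the two constant families as they appear in nova/network/model.py (values match the tree at the time of this report; check the source for the authoritative list). They answer unrelated questions but live side by side in the same module:

```python
# How the port binds to the host's networking (the VIF edge):
VNIC_TYPE_NORMAL = 'normal'    # plugged into a software switch
VNIC_TYPE_DIRECT = 'direct'    # SR-IOV VF passed through directly
VNIC_TYPE_MACVTAP = 'macvtap'  # VF attached via a macvtap device

# What NIC the hypervisor emulates for the guest (the VNIC edge):
VIF_MODEL_VIRTIO = 'virtio'
VIF_MODEL_E1000 = 'e1000'
```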

Compounding this problem is the current plugging method with libvirt.
The plugging logic for many mechanism drivers has moved into libvirt,
meaning that nova passes both VNIC and VIF information to libvirt in
some cases but not in others. The os-vif library is a step away from
that conflated approach:
https://blueprints.launchpad.net/nova/+spec/os-vif-library

This RFE requests a mechanism that allows for more granularity in the
plugging semantics: a mechanism driver should not need to reinvent
common plugging code on either the VIF or the VNIC side. For example,
just as primitives for plugging a netdev into a bridge should be part
of the os-vif library, the hypervisor's VNIC configuration should be
abstracted and separated as cleanly as possible.
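The kind of shared primitive meant here might look like the following. This is an illustrative sketch, not the os-vif API; the function name and signature are invented, and the iproute2 commands shown are the standard way to enslave a netdev to a Linux bridge:

```python
import subprocess

def plug_netdev_into_bridge(netdev, bridge, dry_run=False):
    """Attach an existing netdev to a Linux bridge via iproute2.

    With dry_run=True, return the commands without executing them,
    which keeps the sketch safe to run anywhere.
    """
    cmds = [
        ['ip', 'link', 'set', netdev, 'master', bridge],  # enslave to bridge
        ['ip', 'link', 'set', netdev, 'up'],              # bring the link up
    ]
    if not dry_run:
        for cmd in cmds:
            subprocess.check_call(cmd)
    return cmds

# Inspect the commands without touching the host:
plan = plug_netdev_into_bridge('tap0', 'br-int', dry_run=True)
```

With such a primitive in one library, every mechanism driver that plugs TAP devices into bridges reuses the same tested code path instead of carrying its own copy.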

Initially, the hypervisor driver should receive both VNIC and VIF
objects and contain the logic to generate the required VM
configuration. For example, if an OVS VIF and a vhost-net VNIC are
passed to the libvirt driver, it generates the XML that handles both
the bridge and the VM plugging. If a driver requires a different
plugging method but can reuse the OVS VIF or VNIC code, it should be
able to do so.
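The decision described above could be sketched as a dispatch on the (VIF, transport) pair. The helper below is hypothetical (a real driver builds this through nova's config objects rather than raw strings, and the socket path is a made-up placeholder), but the element shapes follow libvirt's domain XML format for OVS and vhost-user interfaces:

```python
def interface_xml(vif_type, bridge, mac, transport='vhost-net'):
    """Return a libvirt <interface> element for the given VIF/transport pair."""
    if vif_type == 'ovs' and transport == 'vhost-net':
        # Kernel OVS port: a bridge-type interface with an
        # openvswitch virtualport, virtio model in the guest.
        return (
            "<interface type='bridge'>"
            f"<mac address='{mac}'/>"
            f"<source bridge='{bridge}'/>"
            "<virtualport type='openvswitch'/>"
            "<model type='virtio'/>"
            "</interface>"
        )
    if vif_type == 'ovs' and transport == 'vhost-user':
        # Userspace datapath: the VM attaches over a vhost-user
        # socket (placeholder path) instead of a kernel TAP device.
        return (
            "<interface type='vhostuser'>"
            f"<mac address='{mac}'/>"
            "<source type='unix' path='/run/vhu0.sock' mode='client'/>"
            "<model type='virtio'/>"
            "</interface>"
        )
    raise ValueError(f'no template for {vif_type}/{transport}')

xml = interface_xml('ovs', 'br-int', 'fa:16:3e:00:00:01')
```

Note how only the transport changes between the two branches; the OVS-specific and virtio-specific pieces are reusable, which is exactly the reuse this RFE asks for.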

Netronome is willing to devote resources to this project in order to
improve the OpenStack infrastructure and reduce code duplication.

** Affects: neutron
     Importance: Undecided
         Status: New

** Affects: nova
     Importance: Undecided
         Status: New


** Tags: neutron nova rfe

** Project changed: devstack => nova

** Also affects: neutron
   Importance: Undecided
       Status: New


-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1616388

Title:
  [RFE] Provide separation between VNIC and VIF in core Nova and Neutron

Status in neutron:
  New
Status in OpenStack Compute (nova):
  New


To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1616388/+subscriptions
