group.of.nepali.translators team mailing list archive

[Bug 1914283] Re: Enable CONFIG_PCI_MSI in the linux-kvm derivative


** Description changed:

- To be filled - I'm just reserving the LP number for now.
+ [Impact]
+ * Currently the linux-kvm derivative doesn't have CONFIG_PCI_MSI (and its dependent options) enabled. The goal of this derivative is to be minimal and to boot as fast as possible in virtual environments, hence most config options were dropped.
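+ 
+ (As an illustrative aside, not part of the original report: the
+ current state of the option can be checked against a running kernel's
+ config with something like the following.)
+ grep PCI_MSI /boot/config-$(uname -r)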
+ 
+ * It happens that MSI/MSI-X are the de facto standard for driver
+ interrupts, and as such the hot path is optimized for MSIs. Boot
+ testing with this config enabled showed improvements in boot time
+ (details in the next section).
+ 
+ * MSIs are also a good idea performance-wise, since they usually
+ allow multiple queues in network devices, and KVM is better optimized
+ for MSIs than for regular IRQs - tests (detailed in the next section)
+ showed performance improvements in virtio devices with MSIs.
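+ 
+ (Illustrative check, not from the original report: the number of
+ queues a virtio NIC exposes can be inspected with ethtool - the
+ interface name eth0 below is just an example.)
+ ethtool -l eth0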
+ 
+ * Based on these findings, we are hereby enabling MSIs for the linux-kvm
+ derivative in all series (Bionic / Focal / Groovy / Hirsute) - note
+ that Xenial already has this config option enabled.
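+ 
+ (For illustration, the change amounts to a config fragment along
+ these lines - a sketch, since the exact set of dependent options may
+ vary per series.)
+ CONFIG_PCI_MSI=y
+ CONFIG_PCI_MSI_IRQ_DOMAIN=y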
+ 
+ [Test Case]
+ * All tests below were performed in an x86-64 KVM guest with 2 VCPUs and 2 GB of RAM, running on a Focal host. Three runs of each test were performed, and we took the average.
+  
+ * The boot time test (measured by dmesg timestamps) showed an improvement of ~21%; the following chart exhibits the data: https://kernel.ubuntu.com/~gpiccoli/MSI/boot_time.svg
+ We also timed the full boot until the login prompt was available, and
+ observed a decrease of ~1 second.
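+ 
+ (A sketch of the dmesg-based measurement, assuming the approach
+ described above: the timestamp of the last kernel message approximates
+ the time spent booting.)
+ dmesg | tail -n1 | awk -F'[][]' '{print $2}'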
+ 
+ * The storage test was performed with the fio tool, using an empty virtio-blk disk. The following arguments were used:
+ fio --filename /dev/vdc --rw=rw --runtime 600 --loops 100 --ioengine libaio --numjobs 2 --group_reporting
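+ (Note: depending on the fio version, an additional --name=<job>
+ argument may be required when defining a job purely on the command
+ line; the command above is otherwise quoted as reported.)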
+ 
+ On average we had a ~4.5% speedup in both reads and writes; the
+ following chart represents the data:
+ https://kernel.ubuntu.com/~gpiccoli/MSI/fio_storage.svg
+ 
+ * From the network perspective, we used iPerf with the following
+ arguments: iperf -c <server> -t 300 (the server was the host machine).
+ On average, the performance improvement was ~8%, as per the following
+ chart: https://kernel.ubuntu.com/~gpiccoli/MSI/iperf_network.svg
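+ 
+ (Illustrative check, not from the original report: that virtio
+ devices in the guest are actually using MSI-X vectors can be
+ confirmed via the interrupt table.)
+ grep -i msi /proc/interrupts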
+ 
+ [Where problems could occur]
+ * Given that the main linux package (generic) and basically all other derivatives already enable this option, and given that MSIs are the standard for driver interrupts, it's safe to say the risks are minimal - likely smaller than the risk of not enabling MSIs (since the hot path is usually better tested/exercised).
+ 
+ * That said, problems could occur if there are bugs in MSI-related
+ code in drivers or in the PCI MSI core code; with this change,
+ potential problems that already affect all other derivatives would
+ begin to affect linux-kvm as well.
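+ 
+ (Illustrative verification sketch, not from the original report:
+ per-device MSI/MSI-X capability state can be inspected in a booted
+ guest as follows.)
+ sudo lspci -vv | grep -A2 MSI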

** Changed in: linux-kvm (Ubuntu Xenial)
       Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of नेपाली
भाषा समायोजकहरुको समूह, which is subscribed to Xenial.
Matching subscriptions: Ubuntu 16.04 Bugs
https://bugs.launchpad.net/bugs/1914283

Title:
  Enable CONFIG_PCI_MSI in the linux-kvm derivative

Status in linux-kvm package in Ubuntu:
  In Progress
Status in linux-kvm source package in Xenial:
  Invalid
Status in linux-kvm source package in Bionic:
  In Progress
Status in linux-kvm source package in Focal:
  In Progress
Status in linux-kvm source package in Groovy:
  In Progress
Status in linux-kvm source package in Hirsute:
  In Progress

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux-kvm/+bug/1914283/+subscriptions