
yahoo-eng-team team mailing list archive

[Bug 1310110] [NEW] resource_tracker.update_available_resources() calls virt driver for resources, then promptly throws them away

 

Public bug reported:

An abbreviated version of the
nova.compute.resource_tracker.update_available_resource() method (note the
code below; the name has no trailing "s"):

    def update_available_resource(self, context):
        resources = self.driver.get_available_resource(self.nodename)
...
        self._report_hypervisor_resource_view(resources)
...
        # Grab all instances assigned to this node:
        instances = instance_obj.InstanceList.get_by_host_and_node(
            context, self.host, self.nodename)

        # Now calculate usage based on instance utilization:
        self._update_usage_from_instances(resources, instances)

And the nova.compute.resource_tracker._update_usage_from_instances()
method looks like this:

    def _update_usage_from_instances(self, resources, instances):
        """Calculate resource usage based on instance utilization.  This is
        different than the hypervisor's view as it will account for all
        instances assigned to the local compute host, even if they are not
        currently powered on.
        """
        self.tracked_instances.clear()

        # purge old stats
        self.stats.clear()

        # set some initial values, reserve room for host/hypervisor:
        resources['local_gb_used'] = CONF.reserved_host_disk_mb / 1024
        resources['memory_mb_used'] = CONF.reserved_host_memory_mb
        resources['vcpus_used'] = 0
        resources['free_ram_mb'] = (resources['memory_mb'] -
                                    resources['memory_mb_used'])
        resources['free_disk_gb'] = (resources['local_gb'] -
                                     resources['local_gb_used'])
        resources['current_workload'] = 0
        resources['running_vms'] = 0

        for instance in instances:
            if instance['vm_state'] == vm_states.DELETED:
                continue
            else:
                self._update_usage_from_instance(resources, instance)
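To make the problem concrete, here is a minimal, self-contained sketch (simplified, hypothetical names, not the actual Nova code) showing that the usage fields the driver reported are overwritten with reserved-host baselines before anything ever reads them:

```python
# Hypothetical stand-ins for CONF.reserved_host_* settings.
RESERVED_HOST_DISK_MB = 2048
RESERVED_HOST_MEMORY_MB = 512


def get_available_resource():
    # Stand-in for driver.get_available_resource(): the hypervisor's view,
    # including its own usage figures.
    return {
        'memory_mb': 16384, 'memory_mb_used': 9000,  # driver-reported usage
        'local_gb': 100, 'local_gb_used': 60,
        'vcpus': 8, 'vcpus_used': 5,
    }


def update_usage_from_instances(resources, instances):
    # Mirrors _update_usage_from_instances(): the driver-reported *_used
    # values are clobbered before any code reads them, then rebuilt from
    # the instance list alone.
    resources['local_gb_used'] = RESERVED_HOST_DISK_MB // 1024
    resources['memory_mb_used'] = RESERVED_HOST_MEMORY_MB
    resources['vcpus_used'] = 0
    for inst in instances:
        resources['memory_mb_used'] += inst['memory_mb']
        resources['vcpus_used'] += inst['vcpus']


resources = get_available_resource()
update_usage_from_instances(resources, [{'memory_mb': 2048, 'vcpus': 2}])
print(resources['memory_mb_used'])  # 2560 (512 reserved + 2048), not the driver's 9000
```

The driver's 9000 MB used / 60 GB used figures never influence the result; only the capacity totals (memory_mb, local_gb) survive.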

While it is true that the _report_hypervisor_resource_view() method uses
the resources returned from the hypervisor, it does nothing more than
write a few of the usage stats to a DEBUG log:

        free_ram_mb = resources['memory_mb'] - resources['memory_mb_used']
        free_disk_gb = resources['local_gb'] - resources['local_gb_used']

        LOG.debug(_("Hypervisor: free ram (MB): %s") % free_ram_mb)
        LOG.debug(_("Hypervisor: free disk (GB): %s") % free_disk_gb)

What is the point of asking the virt driver to query its resource usage
information if the resource_tracker then throws that information away?
Given that update_available_resource() holds a lock on the entire compute
worker, and that this periodic task runs so frequently, it seems
imprudent to make these hypervisor calls when their results are never
used.
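One possible mitigation, sketched below with hypothetical names (this is not Nova code, just an illustration of the idea), is to skip the hypervisor query entirely when its only consumer, the DEBUG report, is disabled:

```python
import logging

LOG = logging.getLogger(__name__)


def report_hypervisor_resource_view(get_resources):
    """Log the hypervisor's view of free resources, but only pay for the
    driver call when DEBUG logging will actually emit the result.

    get_resources is a callable standing in for
    driver.get_available_resource(nodename).
    """
    if not LOG.isEnabledFor(logging.DEBUG):
        # The driver query's only consumer is the DEBUG log below,
        # so skip the (potentially expensive) hypervisor round trip.
        return

    resources = get_resources()
    free_ram_mb = resources['memory_mb'] - resources['memory_mb_used']
    free_disk_gb = resources['local_gb'] - resources['local_gb_used']
    LOG.debug("Hypervisor: free ram (MB): %s", free_ram_mb)
    LOG.debug("Hypervisor: free disk (GB): %s", free_disk_gb)
```

A fuller fix would restructure update_available_resource() so the instance-derived accounting no longer requires the driver's usage fields at all, but the guard above captures the basic point: do not query the hypervisor for data that is only ever logged.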

** Affects: nova
     Importance: Undecided
         Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1310110


