yahoo-eng-team team mailing list archive

[Bug 1736171] Re: create_and_delete_subnets rally test failures

 

Discussed with the team. For 18.02 we will change the OpenStack API
charms' haproxy timeout values from their current defaults:

  haproxy-server-timeout: 30000
  haproxy-client-timeout: 30000
  haproxy-connect-timeout: 5000
  haproxy-queue-timeout: 5000

To more forgiving values:

  haproxy-server-timeout: 90000
  haproxy-client-timeout: 90000
  haproxy-connect-timeout: 9000
  haproxy-queue-timeout: 9000
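
For reference, haproxy reads these timeouts in milliseconds as well, so
assuming the charms render the options into the defaults section of
haproxy.cfg (a sketch of the effect, not the charms' actual template),
the new values would come out roughly as:

  defaults
      timeout queue    9000
      timeout connect  9000
      timeout client   90000
      timeout server   90000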

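Until that lands, the workaround noted in the description below can be
applied across all of the affected applications in one pass (a sketch;
it assumes the deployed application names match the charm names):

  for app in keystone neutron-api nova-cloud-controller cinder \
             glance ceph-radosgw heat openstack-dashboard; do
      juju config $app haproxy-server-timeout=90000 \
          haproxy-client-timeout=90000 \
          haproxy-connect-timeout=9000 \
          haproxy-queue-timeout=9000
  done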

** Also affects: charm-neutron-api
   Importance: Undecided
       Status: New

** Also affects: charm-keystone
   Importance: Undecided
       Status: New

** Also affects: charm-nova-cloud-controller
   Importance: Undecided
       Status: New

** Also affects: charm-cinder
   Importance: Undecided
       Status: New

** Also affects: charm-glance
   Importance: Undecided
       Status: New

** Also affects: charm-ceph-radosgw
   Importance: Undecided
       Status: New

** Also affects: charm-heat
   Importance: Undecided
       Status: New

** Also affects: charm-openstack-dashboard
   Importance: Undecided
       Status: New

** Summary changed:

- create_and_delete_subnets rally test failures
+ Update OS API charm default haproxy timeout values

** Description changed:

- NeutronNetworks.create_and_delete_subnets is failing when run with
- concurrency greater than 1.
+ Change OpenStack API charm haproxy timeout values
+ 
+   haproxy-server-timeout: 90000
+   haproxy-client-timeout: 90000
+   haproxy-connect-timeout: 9000
+   haproxy-queue-timeout: 9000
+ 
+ The workaround until this lands is to set these values in config:
+ 
+ juju config neutron-api haproxy-server-timeout=90000 haproxy-client-timeout=90000 haproxy-queue-timeout=9000 haproxy-connect-timeout=9000
+ 
+ 
+ ------- Original Bug ---------
+ NeutronNetworks.create_and_delete_subnets is failing when run with concurrency greater than 1.
  
  Here's a snippet of a failure: http://paste.ubuntu.com/25927074/
  
  Here is my rally yaml: http://paste.ubuntu.com/26112719/
  
  This is happening with Pike on Xenial, from the Ubuntu Cloud Archive.
  The deployment is distributed across 9 nodes, with HA services.
  
  For now we have adjusted our test scenario to be more realistic.  When
  we spread the test over 30 tenants instead of 3, and simulate 2 users
  per tenant instead of 3, we do not hit the issue.

** Changed in: charm-cinder
   Importance: Undecided => Medium

** Changed in: charm-cinder
       Status: New => Triaged

** Changed in: charm-cinder
    Milestone: None => 18.02

** Changed in: charm-glance
   Importance: Undecided => Medium

** Changed in: charm-glance
       Status: New => Triaged

** Changed in: charm-glance
    Milestone: None => 18.02

** Changed in: charm-ceph-radosgw
   Importance: Undecided => Medium

** Changed in: charm-ceph-radosgw
       Status: New => Triaged

** Changed in: charm-ceph-radosgw
    Milestone: None => 18.02

** Changed in: charm-heat
   Importance: Undecided => Medium

** Changed in: charm-heat
       Status: New => Triaged

** Changed in: charm-heat
    Milestone: None => 18.02

** Changed in: charm-keystone
   Importance: Undecided => Medium

** Changed in: charm-keystone
       Status: New => Triaged

** Changed in: charm-keystone
    Milestone: None => 18.02

** Changed in: charm-neutron-api
   Importance: Undecided => Medium

** Changed in: charm-neutron-api
       Status: New => Triaged

** Changed in: charm-neutron-api
    Milestone: None => 18.02

** Changed in: charm-nova-cloud-controller
   Importance: Undecided => Medium

** Changed in: charm-nova-cloud-controller
       Status: New => Triaged

** Changed in: charm-nova-cloud-controller
    Milestone: None => 18.02

** Changed in: charm-openstack-dashboard
   Importance: Undecided => Medium

** Changed in: charm-openstack-dashboard
       Status: New => Triaged

** Changed in: charm-openstack-dashboard
    Milestone: None => 18.02

** Changed in: charm-neutron-gateway
     Assignee: David Ames (thedac) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1736171

Title:
  Update OS API charm default haproxy timeout values

Status in OpenStack ceph-radosgw charm:
  Triaged
Status in OpenStack cinder charm:
  Triaged
Status in OpenStack glance charm:
  Triaged
Status in OpenStack heat charm:
  Triaged
Status in OpenStack keystone charm:
  Triaged
Status in OpenStack neutron-api charm:
  Triaged
Status in OpenStack neutron-gateway charm:
  Invalid
Status in OpenStack nova-cloud-controller charm:
  Triaged
Status in OpenStack openstack-dashboard charm:
  Triaged
Status in neutron:
  Invalid

Bug description:
  Change OpenStack API charm haproxy timeout values

    haproxy-server-timeout: 90000
    haproxy-client-timeout: 90000
    haproxy-connect-timeout: 9000
    haproxy-queue-timeout: 9000

  The workaround until this lands is to set these values in config:

  juju config neutron-api haproxy-server-timeout=90000 haproxy-client-timeout=90000 haproxy-queue-timeout=9000 haproxy-connect-timeout=9000

  
  ------- Original Bug ---------
  NeutronNetworks.create_and_delete_subnets is failing when run with concurrency greater than 1.

  Here's a snippet of a failure: http://paste.ubuntu.com/25927074/

  Here is my rally yaml: http://paste.ubuntu.com/26112719/

  This is happening with Pike on Xenial, from the Ubuntu Cloud Archive.
  The deployment is distributed across 9 nodes, with HA services.

  For now we have adjusted our test scenario to be more realistic.  When
  we spread the test over 30 tenants instead of 3, and simulate 2 users
  per tenant instead of 3, we do not hit the issue.

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-ceph-radosgw/+bug/1736171/+subscriptions

