Re: [Quantum] Scalable agents
On 07/23/2012 11:02 AM, Dan Wendlandt wrote:
On Sun, Jul 22, 2012 at 5:51 AM, Gary Kotton <gkotton@xxxxxxxxxx> wrote:
This is an interesting idea. In addition to creation we will also
need updates. I would prefer that the agents have one topic
for all updates. When an agent connects to the plugin it will
register the types of operations that the specific agent supports.
The agent operations can be specified as bit masks.
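To make this concrete, a minimal sketch of what such a bit-mask
registration could look like; the flag names and the register call are
illustrative assumptions, not the actual plugin/agent RPC interface:

# Illustrative only: flag values and register_agent() are assumptions.
PORT_UPDATE = 1 << 0
PORT_DELETE = 1 << 1
NETWORK_UPDATE = 1 << 2
NETWORK_DELETE = 1 << 3

class LinuxBridgeAgentStub(object):
    # Operations this particular agent cares about, combined into one mask.
    SUPPORTED_OPS = PORT_UPDATE | NETWORK_DELETE

    def register_with_plugin(self, rpc_call):
        # On connect, tell the plugin which notifications to send us.
        rpc_call('register_agent', host='agent-host-1',
                 operations=self.SUPPORTED_OPS)

def should_notify(agent_ops, op):
    # Plugin side: check the mask before fanning a notification out.
    return bool(agent_ops & op)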
I have implemented something similar in
https://review.openstack.org/#/c/9591
This can certainly be improved and optimized. What are your thoughts?
Based on your follow-up emails, I think we're now thinking similarly
about this. Just to be clear though, for updates I was talking about
a different topic for each entity that has its own UUID (e.g., topic
port-update-f01c8dcb-d9c1-4bd6-9101-1924790b4b45)
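Purely as an example of that naming scheme (not code from any branch),
building the per-entity topic is just string construction:

def port_update_topic(port_id):
    # e.g. port_update_topic('f01c8dcb-d9c1-4bd6-9101-1924790b4b45')
    # -> 'port-update-f01c8dcb-d9c1-4bd6-9101-1924790b4b45'
    return 'port-update-%s' % port_id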
Printout from the rabbit queues (this is for the Linux bridge agent,
where at the moment only port updates and network deletion are of
interest, unless we decide to change the way the agent is implemented):
openstack@openstack:~/devstack$ sudo rabbitmqctl list_queues
Listing queues ...
q-agent-network-update 0
q-agent-network-update.10351797001a4a231279 0
q-plugin 0
q-agent-port-update.10351797001a4a231279 0
(the suffix 10351797001a4a231279 is derived from the IP and MAC of the host)
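Purely as an illustration, the suffix looks like it could be the IP
octets concatenated with the colon-stripped MAC; a hypothetical
reconstruction, not the agent's actual code:

def host_suffix(ip, mac):
    # e.g. host_suffix('10.35.17.97', '00:1a:4a:23:12:79')
    # -> '10351797001a4a231279'
    return ''.join(ip.split('.')) + mac.replace(':', '')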
In addition to this, we have a number of issues where the plugin
does not expose the information via the standard APIs - for
example the VLAN tag (this is being addressed via extensions in
the provider networks feature).
Agreed. There are a couple of options here: direct DB access (no
polling, just direct fetching), admin API extensions, or custom RPC
calls. Each has pluses and minuses. Perhaps my real goal here would
be better described as "if there's an existing plugin-agnostic way of
doing X, our strong bias should be to use it until presented with
concrete evidence to the contrary". For example, should a DHCP
client create a port for the DHCP server via the standard API, or via
a custom API or direct DB access? My strong bias would be toward
using the standard API.
Good question. I think that if the standard APIs can be used then we
should go for it. The problem is that these require additional configuration.
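As a rough sketch of the standard-API route (assuming the
python-quantumclient v2 bindings; exact argument names may differ),
a DHCP agent could create its own port like this rather than touching
the DB directly:

from quantumclient.v2_0 import client

quantum = client.Client(username='admin', password='secret',
                        tenant_name='admin',
                        auth_url='http://127.0.0.1:5000/v2.0')

# 'NETWORK_UUID' is a placeholder for the network the DHCP server serves.
quantum.create_port({'port': {'network_id': 'NETWORK_UUID',
                              'device_owner': 'network:dhcp'}})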
3. Logging. At the moment the agents do not have a decent logging
mechanism. This makes debugging the RPC code terribly difficult.
This was scheduled for F-3. I'll be happy to add this if there are
no objections.
That sounds valuable.
Hopefully I'll be able to find some time for this.
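A minimal sketch of what such agent-side logging could look like
(using the standard library here; Quantum's common logging module
would likely be used instead):

import logging

LOG = logging.getLogger('quantum.agent.linuxbridge')

def setup_logging(debug=False):
    # Debug level is what makes the RPC traffic visible.
    logging.basicConfig(
        level=logging.DEBUG if debug else logging.INFO,
        format='%(asctime)s %(levelname)s [%(name)s] %(message)s')

setup_logging(debug=True)
LOG.debug('received RPC message: %s', {'method': 'port_update'})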
4. We need to discuss the notifications that Yong added and how
these two methods can work together. More specifically, I think
that we need to address the configuration files.
Agreed. I think we need to decide on this at Monday's IRC meeting, so
we can move forward. Given F-3 deadlines, I'm well aware that I'll
have to be pragmatic here :)
The RPC code requires that the eventlet monkey patch be applied. This
caused havoc when I was using the events from pyudev for new device
creation. At the moment I have moved the event-driven support to
polling (if anyone reading this is familiar with the issue or
has an idea on how to address it, any help would be great).
Sorry, wish I could help, but I'm probably in the same boat as you on
this one.
I have a solution that works. In the long term it would be better if
this were event driven. This all depends on how the discussions above
play out.
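For reference, a rough illustration of the polling fallback described
above (the device paths and interval are assumptions, not the agent's
actual code):

import os
import time

def list_tap_devices():
    # Tap devices for VM ports show up under /sys/class/net.
    return set(d for d in os.listdir('/sys/class/net')
               if d.startswith('tap'))

def poll_for_new_devices(interval=2):
    known = list_tap_devices()
    while True:
        time.sleep(interval)
        current = list_tap_devices()
        for device in current - known:
            print('new device detected: %s' % device)
        known = current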
I'm going to make sure we have a good chunk of time to discuss this
during the IRC meeting on Monday (sorry, I know that's late night for
you...).
:). Tomorrow is jet lag day!
Dan
Thanks
Gary
Dan
~~~~~~~~~~~~~~~~~~~~~~~~~~~
Dan Wendlandt
Nicira, Inc: www.nicira.com <http://www.nicira.com>
twitter: danwendlandt
~~~~~~~~~~~~~~~~~~~~~~~~~~~