
openstack team mailing list archive

Re: Monitoring / Billing Architecture proposed

 

On Apr 24, 2012, at 11:00 AM, Loic Dachary wrote:

On 04/24/2012 04:45 PM, Monsyne Dragon wrote:

On Apr 24, 2012, at 9:03 AM, Loic Dachary wrote:

On 04/24/2012 03:06 PM, Monsyne Dragon wrote:

Yes, we emit bandwidth (bytes in/out) on a per-VIF basis from each instance. The event has the somewhat generic name of 'compute.instance.exists' and is emitted on a periodic basis, currently by a cronjob.
Currently, we only populate bandwidth data from XenServer, but if the hook is implemented for the KVM, etc. drivers, it will be picked up automatically for them as well.

Note that we could report other metrics similarly.
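To make that concrete, here is a rough sketch of what such a periodic usage event could carry. The field names are illustrative and may not match nova's exact payload:

# Hypothetical sketch of a periodic 'compute.instance.exists' event with
# per-VIF bandwidth counters (field names are approximate, not nova's
# exact schema).
exists_event = {
    "event_type": "compute.instance.exists",
    "payload": {
        "instance_id": "3d5f...",            # instance UUID (truncated)
        "tenant_id": "acme",
        "audit_period_beginning": "2012-04-24T00:00:00Z",
        "audit_period_ending": "2012-04-25T00:00:00Z",
        "bandwidth": {
            # one entry per VIF / network label
            "public":  {"bw_in": 123456789, "bw_out": 987654321},
            "private": {"bw_in": 1024,      "bw_out": 2048},
        },
    },
}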


Hi,

Thanks for clarifying this. So you're suggesting that the metering agent should collect this data from the nova queue instead of extracting it from the system (interface, disk stats, etc.)? And for other OpenStack components (as Nick Barcet suggests below), the metering agent will have to find another way. Or do you have something else in mind?

If it's something we have access to, we should emit it in those usage events. As for the other components, glance is already using the same notification system (there was a thread a while back about putting it into openstack.common). It would be nice to have all of the components using it.

Hi,

I don't see a section in http://wiki.openstack.org/SystemUsageData about making sure all messages related to a billable event are accounted for. I mean, for instance, what if the event that says an instance is deleted is lost? How is the billing software supposed to cope with that? If it checks the status of all VMs on a regular basis to deal with this, how can it figure out when the missed event occurred?

First, we use a reliable queueing mechanism to prevent that. Second, there are periodic audit events that act as a check (and also contain data for usage over a time period, like bandwidth).
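
As a rough illustration of how a consumer might use those audit events as a check, here is a hedged sketch; the function and event names below are hypothetical, not nova code, though the event type string follows the usual compute.instance.<action>.<phase> pattern:

def find_suspect_instances(lifecycle_events, exists_events):
    """Return instance UUIDs whose lifecycle history disagrees with
    the latest audit period."""
    deleted = {e["instance_id"] for e in lifecycle_events
               if e["event_type"] == "compute.instance.delete.end"}
    audited = {e["instance_id"] for e in exists_events}
    # Deleted according to lifecycle events, yet still reported as
    # existing by the periodic audit sweep (or the inverse check):
    # investigate before billing.
    return deleted & audited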


It would be worth adding a short section about this in http://wiki.openstack.org/SystemUsageData. Or I can do it if you give me a hint.

Cheers

On 04/24/2012 12:17 PM, Nick Barcet wrote:

On 04/23/2012 10:45 PM, Doug Hellmann wrote:


>
>
> On Mon, Apr 23, 2012 at 4:14 PM, Brian Schott
> <brian.schott@xxxxxxxxxxxxxxxxxx> wrote:
>
>     Doug,
>
>     Do we mirror the table structure of nova, etc. and add
>     created/modified columns?
>
>
>     Or do we flatten into an instance event record with everything?
>
>
> I lean towards flattening the data as it is recorded and making a second
> pass during the bill calculation. You need to record instance
> modifications separately from the creation, especially if the
> modification changes the billing rate. So you might have records for:
>
> created instance, with UUID, name, size, timestamp, ownership
> information, etc.
> resized instance, with UUID, name, new size, timestamp, ownership
> information, etc.
> deleted instance, with UUID, name, size, timestamp, ownership
> information, etc.
>
> Maybe some of those values don't need to be reported in some cases, but
> if you record a complete picture of the state of the instance then the
> code that aggregates the event records to produce billing information
> can use it to make decisions about how to record the charges.
>
> There is also the case where an instance is no longer running but
> nova thinks it is (or the reverse), so some sort of auditing sweep needs
> to be included (I think that's what Dough called the "farmer" but I
> don't have my notes in front of me).
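
A minimal sketch of the "flatten, then make a second pass at billing time" approach described above; the record shapes and rates are invented for illustration:

from datetime import datetime

# Flattened per-instance records, one per billing-relevant state change.
records = [
    {"event": "created", "uuid": "u1", "size": "m1.small",
     "when": datetime(2012, 4, 23, 10, 0)},
    {"event": "resized", "uuid": "u1", "size": "m1.large",
     "when": datetime(2012, 4, 23, 14, 0)},
    {"event": "deleted", "uuid": "u1", "size": "m1.large",
     "when": datetime(2012, 4, 23, 20, 0)},
]

HOURLY_RATE = {"m1.small": 0.05, "m1.large": 0.20}  # made-up rates

def bill(records):
    # Second pass: charge each interval at the size in effect during it.
    total = 0.0
    ordered = sorted(records, key=lambda r: r["when"])
    for prev, nxt in zip(ordered, ordered[1:]):
        hours = (nxt["when"] - prev["when"]).total_seconds() / 3600.0
        total += hours * HOURLY_RATE[prev["size"]]
    return total

print(round(bill(records), 2))  # 4h of m1.small + 6h of m1.large -> 1.4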


When I wrote [1], one of the things I deliberately did not assume was how agents
would collect their information. I imagined that the system should allow
for multiple implementations of agents that would collect the same
counters, assuming that two implementations for the same counter should
never be running at once.

That said, I am not sure an event-based collection of what nova is
notifying would satisfy the requirements I have heard from many cloud
providers:
- how do we ensure that events are not forged or lost in the current nova
system?
- how can I be sure that an instance has not simply crashed and never
started?
- how can I collect information which is not captured by nova events?

Hence the proposal to use a dedicated event queue for billing, allowing
agents to collect and eventually validate data from different
sources, including, but not necessarily limited to, collection from the
nova events.
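
To illustrate the idea (all names here are hypothetical, not an existing API), one way to structure such agents would be pluggable counter sources publishing onto the dedicated metering queue:

class CounterSource(object):
    counter_name = None
    def poll(self):
        """Return the current value of the counter."""
        raise NotImplementedError

class NovaEventBandwidth(CounterSource):
    """Derives the counter from nova usage notifications."""
    counter_name = "external_bytes_out"  # hypothetical counter name
    def poll(self):
        ...  # read from the nova notification feed

class HostStatsBandwidth(CounterSource):
    """Derives the same counter directly from host interface stats."""
    counter_name = "external_bytes_out"
    def poll(self):
        ...  # read from /sys, libvirt, the switch, ...

def run_agent(source, publish):
    # publish() would put the sample on the dedicated billing queue,
    # where it can be cross-checked against other sources later.
    # Only one implementation of a given counter runs at a time.
    publish("metering", {"counter": source.counter_name,
                         "value": source.poll()})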

Moreover, as soon as you generalize the problem to components other than
just Nova (swift, glance, quantum, daas, ...), just using the nova event
queue is not an option anymore.

[1] http://wiki.openstack.org/EfficientMetering

Nick





On Apr 24, 2012, at 6:20 AM, Sandy Walsh wrote:



I think we have support for this currently in some fashion, Dragon?

-S



On 04/24/2012 12:55 AM, Loic Dachary wrote:


Metering needs to account for the "volume of data sent to external network destinations" (i.e. n4 in http://wiki.openstack.org/EfficientMetering) or the disk I/O, etc. This kind of resource is billable.

The information described at http://wiki.openstack.org/SystemUsageData will be used by metering but other data sources need to be harvested as well.
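
For example, a system-level collector could read byte counters directly from the host; this is only a Linux-specific sketch, and the VIF name is hypothetical:

def vif_byte_counters(vif="vnet0"):
    # Rough sketch of harvesting interface byte counters straight from
    # the host, for data not carried in nova notifications.
    counters = {}
    for direction in ("rx", "tx"):
        path = "/sys/class/net/%s/statistics/%s_bytes" % (vif, direction)
        with open(path) as f:
            counters[direction] = int(f.read().strip())
    return counters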








--
Loïc Dachary         Chief Research Officer
// eNovance labs   http://labs.enovance.com
// ✉ loic@xxxxxxxxxxxx  ☎ +33 1 49 70 99 82









--
Monsyne M. Dragon
OpenStack/Nova
cell 210-441-0965
work x 5014190

