
ourdelta-developers team mailing list archive

Re: [Question #69430]: Per session statistics


Question #69430 on OurDelta changed:
https://answers.launchpad.net/ourdelta/+question/69430

    Status: Answered => Open

Neil Katin is still having a problem:

> Neil - re the leak, would you care to check whether there's a bug for this
> in the percona patches project?  If not, create it; if there is, add this
> info to it.  That'd be fab, thanks.

I'll go look at the percona bug site, but there is already an ourdelta bug
I reported on this in mid-March:
https://bugs.launchpad.net/ourdelta/+bug/344447.  I assumed from Mark's
comments that the problem was well understood and that the feature (host
statistics aggregation) was going away.  But from your comment I gather you
feel differently and could use an actual fix; if so, I'll see if I can put
one together.

> Hmm... tracking that could be quite high overhead.

I guess I'm missing the issue.  The code is already collecting all of this
data, so the overhead can't come from instrumentation.  I'm just trying to
aggregate it in a new way: by current session, in addition to
user/host/table.
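To make that concrete: the patch already exposes this aggregation per
client host, and a per-session view would be the same counters keyed by
connection id instead.  A sketch of what exists today (column names are
from the userstat patch's CLIENT_STATISTICS table and may differ between
patch versions):

  -- Existing per-host aggregation from the userstat patch; a per-session
  -- view would be analogous, keyed by connection id.  Column names may
  -- vary between patch versions.
  SELECT CLIENT, TOTAL_CONNECTIONS, BUSY_TIME, CPU_TIME
    FROM INFORMATION_SCHEMA.CLIENT_STATISTICS
   ORDER BY CPU_TIME DESC;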

> Using the enhanced slow query log stats, shouldn't you be able to
> catch the queries you want anyway?

If I just wanted to catch a few expensive queries, you would be right.  I
guess I didn't explain our use case well.

My interest is more of an accounting issue.  I would like to find metrics I
can collect to figure out the relative cost of various features.  We know
(on a given connection) which customers we did work for and what sorts of
things we did; we're looking for better visibility into how "expensive"
that work was, on an aggregated basis, so we can understand the underlying
infrastructure cost of each of our features.

So it's not that any individual query is taking "too long" (the classic
tuning problem).  It's that we want to understand which features are
costing us the most, and for that we need to be able to match some metric
of "underlying cost/effort" to our high-level set of requests.

We thought about assigning each of our customers to a different mysql
user; that way the by-user aggregation would give us some of the
information.  But we also wanted to understand the cost of individual
requests in our system, and we didn't want to create a huge number of
artificial mysql user accounts.
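Had we gone that route, the userstat patch's per-user table would have
given us the roll-up directly.  A sketch (the exact column set varies
between patch versions):

  -- Per-user roll-up from the userstat patch.
  SELECT USER, TOTAL_CONNECTIONS, BUSY_TIME, CPU_TIME
    FROM INFORMATION_SCHEMA.USER_STATISTICS
   ORDER BY BUSY_TIME DESC;

  -- Reset the counters between accounting periods.
  FLUSH USER_STATISTICS;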

I could turn on the slow query log for all requests, but that would be
very high overhead.
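(For reference, that would amount to something like the following,
assuming MySQL 5.1 where both settings are dynamic:

  SET GLOBAL slow_query_log = 1;
  SET GLOBAL long_query_time = 0;  -- log every statement

and then post-processing the entire log, which is the overhead I'd rather
avoid.)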

Is that clearer?

-- 
You received this question notification because you are a member of
OurDelta-developers, which is an answer contact for OurDelta.