One of the new features in the new release of MySQL Enterprise Monitor is the Query Analyzer. As the name suggests, the Query Analyzer provides information about the queries running on your server, their response times, and row and byte statistics. The information provided is great, and it doesn’t take very long to see from the query data supplied that there are places where you could improve a query, or even reduce the number of queries that you submit.
The system works by using the functionality of the MySQL Proxy to monitor the queries being executed and then pass that information up to the MySQL Enterprise Service Manager so that it can be displayed within the Query Analyzer page. To get queries monitored, you send them through the agent, which both monitors their execution and sends the information up to the Manager, along with all the other data being monitored.
The team, though, have been a bit clever and opened up the system to allow information to be sent to the Manager using a REST interface. This means that any system capable of providing the information you want to monitor can send it up to the Manager. Of course, you can’t just send anything; the Manager needs to know how to handle it, but it shows the flexibility of the design and the potential for the future.
So how does this help us?
Well, one of the new features in MySQL 6.0 that I’ve been working on (with Mikael Ronstrom and Alexey Kopytov) is DTrace probes. We’ve added a bunch of static DTrace probes into MySQL 6.0 (the full set will appear in MySQL 6.0.8, I think) designed to let you monitor the execution of queries within the server. The probes allow you to see not only the top-level information, such as overall execution time, but also deeper details, so that you can get information about individual row operations, whether the query used the query cache, and whether it used a filesort operation.
I haven’t finished the DTrace probes documentation yet, but I have been demonstrating the probes at conferences and talks (including my MySQL on OpenSolaris university session this week). Trust me, you’ll be pleased. I’ve got a separate blog post detailing some of the specifics in the works at the moment.
There’s an obvious synergy here: why don’t we extract data using DTrace and feed it up to the Enterprise Manager?
To do this, there are two parts to the process: the DTrace probes, and the script that passes that information up in a suitable format to the Manager.
The D script is quite straightforward: we initialize the structures, populate the core information that we need (query string, bytes, rows, and the time), and then use the remainder of the probes to finalize that information. Let’s have a look at the script and then go through the detail:
#!/usr/sbin/dtrace -s
#pragma D option quiet
mysql*:::query-start
{
self->query = copyinstr(arg0);
self->db = copyinstr(arg2);
self->rows = 0;
self->querystart = timestamp;
self->bytes = 0;
}
mysql*:::select-done
{
self->rows = arg1;
}
mysql*:::insert-done
{
self->rows = arg1;
}
mysql*:::update-done
{
self->rows = arg2;
}
mysql*:::multi-delete-done
{
self->rows = arg1;
}
mysql*:::delete-done
{
self->rows = arg1;
}
mysql*:::multi-update-done
{
self->rows = arg2;
}
mysql*:::net-write-start
{
self->bytes = self->bytes + arg0;
}
mysql*:::query-done
/self->query != NULL/
{
printf("%s:%s:%d:%d:%d\n",
self->query,
self->db,
((timestamp - self->querystart)/1000),
self->rows,self->bytes);
}
First, we set a pragma to quieten down the output so that the DTrace script only reports what we explicitly write out:
#pragma D option quiet
In DTrace, the individual execution points are called probes, and probes are triggered each time that point in the code is reached. To specify the probes we want to watch for, you use a special format, provider:module:function:name that identifies the probe by the name of the provider (the application), the module, the function, and the probe, each separated by a colon. We can just specify the provider and probe name, like mysql*:::query-start.
It should also be noted that probes are often provided in pairs at the start and end of an operation, so you can identify the start and end of a query by looking for the query-start and query-done probes.
The DTrace probes in the server are set up in a sort of nested structure, going deeper into the query process as needed. Although not at the very top of the execution cycle, the start of the main query processing is identified by the query-start probe. Each time a query is submitted to MySQL, this probe gets triggered, so for us it is the start of the process. The probe has a number of arguments, but for our purposes we only need the first (arg0), which contains the full query string, and the third (arg2), which contains the name of the database that the query was executed against.
We also initialize the row and byte counts, and the time when the query was executed using the built-in timestamp value. All of this information is placed into the special self structure, which is a persistent structure used to share information between the individual probes that get fired during execution.
mysql*:::query-start
{
self->query = copyinstr(arg0);
self->db = copyinstr(arg2);
self->rows = 0;
self->querystart = timestamp;
self->bytes = 0;
}
We can’t get the row counts from the query-done probe, because different operations provide different levels of information. For example, the select-done and insert-done probes just provide a count of the rows, but the update-done probe provides both the number of rows that matched the original WHERE clause and the number of rows actually modified.
To record the number of the rows modified by the query, we therefore need to pull out each piece of information individually:
mysql*:::select-done
{
self->rows = arg1;
}
mysql*:::insert-done
{
self->rows = arg1;
}
mysql*:::update-done
{
self->rows = arg2;
}
mysql*:::multi-delete-done
{
self->rows = arg1;
}
mysql*:::delete-done
{
self->rows = arg1;
}
mysql*:::multi-update-done
{
self->rows = arg2;
}
For the bytes retrieved by each query, the information is a bit more difficult to identify. I’m going to cheat a bit and use the bytes sent by mysqld during a net write to the client. There is a limitation here that I’ve skipped over: because I haven’t bothered to track connection IDs, we could end up reporting data sent to any client. I could track them, but it would make the script a little more complicated. Since the net-write-start probe might fire multiple times for a long query, we keep a cumulative byte count.
mysql*:::net-write-start
{
self->bytes = self->bytes + arg0;
}
That’s all of the information collection; now we just need to print out the information when the query completes. We do this by writing out a colon-separated list of the information that we’ve collected. One additional point here: to calculate the duration of the query, we subtract the timestamp recorded when query-start fired from the current timestamp.
Timestamp information is recorded in nanoseconds (yes, you read that right, nanoseconds), so we divide it by a thousand to get microseconds, which is what the Enterprise Manager expects.
mysql*:::query-done
/self->query != NULL/
{
printf("%s:%s:%d:%d:%d\n",
self->query,
self->db,
((timestamp - self->querystart)/1000),
self->rows,self->bytes);
}
If you run this script on its own (against a MySQL server running on Solaris/OpenSolaris, with the probes built in, of course), then you’ll get output like this:
SELECT DATABASE()::391:1:44
show databases:test:947:2:84
show tables:test:2018:3:74
select * from t limit 5:test:595:5:51
To provide the information up to the Enterprise Manager, we can’t use the D script alone. Instead, a wrapper around the D script reads the raw information produced and passes it up to the Enterprise Manager.
Before we look at that process, it is worth looking at the REST API that has been built into v2 of the Enterprise Monitor. The interface is available through the standard URL for the Enterprise service, typically your hostname and port 18080 if you’ve used the default settings. Therefore, assuming our host is nautilus, we can access the interface using the URL http://nautilus:18080/v2/rest/.
From the base URL, you can get or put information about the different entries in the repository, using the path in the URL to signify what it is we are looking for. Information about instances lives under instance, with the provider as mysql and the MySQL server as server. Put together, the base URL for MySQL server instances is http://nautilus:18080/v2/rest/instance/mysql/server/.
The last fragment of information we need is the UUID. All objects within the repository have a unique ID, and these exist at different levels. For example, an agent has a UUID, and so does the server it is monitoring. In our example, we want the UUID of the MySQL server, which is conveniently stored within the server itself in the mysql.inventory table.
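If you want to pull that UUID out programmatically rather than copying it by hand, a minimal Perl/DBI sketch might look like the following. Note that the name/value column layout of mysql.inventory, and the connection credentials, are assumptions here, so check the table on your own server first.

#!/usr/bin/perl
# Hedged sketch: fetch the server UUID from mysql.inventory.
use strict;
use warnings;
use DBI;

# Placeholder credentials for the monitored server.
my $dbh = DBI->connect('DBI:mysql:database=mysql;host=localhost',
                       'root', 'password', { RaiseError => 1 });

# Assumption: mysql.inventory stores name/value pairs, one of which is the UUID.
my ($uuid) = $dbh->selectrow_array(
    "SELECT value FROM mysql.inventory WHERE name = 'uuid'");

print "Server UUID: $uuid\n";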
Finally, we need the username and password of the agent user. The REST API uses basic HTTP authentication, which makes the process easy.
Putting all of this together, we can get the core information about an instance using wget:
$ wget -qO mysql.server --http-user=agent --http-password=password \
'http://nautilus:18080/v2/rest/instance/mysql/server/2b86b277-fb2b-492d-b946-3a2acaec0869'
If we now look at the output file, mysql.server:
{
"name": "2b86b277-fb2b-492d-b946-3a2acaec0869",
"parent": "/instance/os/Host/ssh:{88:e1:fc:6d:99:69:e4:5f:b4:0a:ec:5a:09:c0:6a:24}",
"values": {
"blackout": "false",
"displayname": null,
"registration-complete": "true",
"repl.groupName": null,
"server.connected": 1,
"server.last_error": null,
"server.reachable": 1,
"transport": "a3113263-4993-4890-8235-cadef9617c4b",
"visible.displayname": "bear:3306"
}
}
I won’t go into detail about what is here; most of it should be self-explanatory. However, there are a few things of note. First, the information is in JSON format, which makes it easy to read and, more importantly, easy to create.
Second, note the name and parent entries. The item is identified by its name, and also by its parent. This is an important construct because it associates the different elements with each other. In this case, the MySQL server is associated with a physical host (/instance/os/Host), and the individual host is identified by an SSH key, which is one of the alternative UUID formats supported by the Enterprise Server to identify individual entities.
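The same GET request can be issued from Perl using LWP, which we will also use for the uploads later; here is a minimal sketch, reusing the host, UUID, and agent credentials from the wget example above:

#!/usr/bin/perl
# Minimal LWP GET with HTTP basic authentication against the REST interface.
use strict;
use warnings;
use LWP::UserAgent;
use HTTP::Request;

my $ua  = LWP::UserAgent->new();
my $req = HTTP::Request->new(GET =>
    'http://nautilus:18080/v2/rest/instance/mysql/server/2b86b277-fb2b-492d-b946-3a2acaec0869');
$req->authorization_basic('agent', 'password');

my $response = $ua->request($req);
print $response->content if $response->is_success;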
When submitting information, we flip the process around: instead of a GET request to obtain the information, we use a PUT to send up a JSON packet containing the information we want. The URL for sending the information depends on what we are uploading. The main element for the statements used by the Query Analyzer is the statementsummary.
The URL for this is http://nautilus:18080/v2/rest/instance/mysql/statementsummary/. For the identifier at the end of the URL, you use a period-separated list that includes the UUID of the MySQL server, the name of the MySQL database the SQL statement relates to, and an MD5 hash of the SQL statement text.
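For example, the identifier and URL for one statement could be built like this (a sketch using the same Digest::MD5 call as the full script below; the database name and normalized query text are just illustrative):

use Digest::MD5 qw(md5_hex);

my $server_uuid = '2b86b277-fb2b-492d-b946-3a2acaec0869';
my $dbname      = 'test';                      # illustrative database name
my $query       = 'select * from t limit ?';   # normalized statement text

# Identifier is UUID.database.md5-of-statement, appended to the base URL.
my $md5 = md5_hex($query);
my $url = sprintf('http://nautilus:18080/v2/rest/instance/mysql/statementsummary/%s.%s.%s',
                  $server_uuid, $dbname, $md5);
print "$url\n";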
For the actual packet, we use the following format, taken here from the Perl script:
{
"name": "$server_uuid.$quanbase->{dbname}.$md5",
"parent": "/instance/mysql/server/$server_uuid",
"values" : {
"count": "$quanbase->{count}",
"text": "$quanbase->{query}",
"query_type": "$quanbase->{qtype}",
"text_hash": "$md5",
"max_exec_time": "$quanbase->{max_exec_time}",
"min_exec_time": "$quanbase->{min_exec_time}",
"exec_time": "$quanbase->{exec_time}",
"rows": "$quanbase->{rows}",
"max_rows": "$quanbase->{max_rows}",
"min_rows": "$quanbase->{min_rows}",
"database": "$quanbase->{dbname}",
"bytes": "$quanbase->{bytes}",
"max_bytes": "$quanbase->{max_bytes}",
"min_bytes": "$quanbase->{min_bytes}",
}
}
Most of this should be self-explanatory. Remember that this is a statement summary, which means that we can send up information about multiple invocations of the same statement in one packet. Thus, within the statementsummary packet we have the count of invocations of the statement; execution time, row, and byte totals with the maximum/minimum of each; and then the core information such as the actual query text, database name, and query type (SELECT, INSERT, etc.).
Once again, note the name and parent. Here the name is the same tuple as used in the URL: the UUID of the MySQL server, the database, and the hash of the query. This is used as the identifier for this query within the repository and allows us to uniquely identify the query, and its execution on this server. The parent is the location, and UUID, of the MySQL server.
Now, the Perl script that collates the information from our D script has to do two things: first, read the raw output that we create with the D script; and second, submit it as a PUT request to the Enterprise Server.
Dealing with the latter part first, I’ve used Perl and the LWP (libwww-perl) module to construct a suitable request object with the HTTP authorization attached:
my $header = HTTP::Headers->new;
$header->content_type('text/text');
$header->authorization_basic('agent','password');
my $res = LWP::UserAgent->new();

Once we’ve constructed a packet, sending it is a case of specifying the URL, the header, and the content:
$header->content_length(length $bio);
my $req = HTTP::Request->new(PUT => $url, $header, $bio);
$res->request($req);

The bulk of the rest of the script is devoted to reading the information from the D script output, and assembling the packet and min/max values per query.
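That reading step is straightforward; a minimal sketch of it looks like this (the full script below does exactly the same with split m{:}, plus the aggregation):

# Read the colon-separated lines produced by the D script and split out the fields.
open(my $dtrace, '-|', './merlin.d') or die "Couldn't run merlin.d: $!";
while (my $line = <$dtrace>) {
    chomp $line;
    my ($query, $dbname, $exec_time, $rows, $bytes) = split /:/, $line;
    # ...accumulate per-query counts, totals, and min/max values here...
}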
Within the Query Analyzer, the SQL statements are normalized, or canonicalized, so that variable values are replaced with a question mark. This ensures that we are tracking the query and not the individual values: we want to compare the raw SQL statement, of which there may only be a few hundred in a typical application, not each individual query with its WHERE and other clauses. Hence, the statement:

SELECT photoid,title from media_photos where photoid > 23785 limit 15

would be normalized to:

SELECT photoid,title from media_photos where photoid > ? limit ?

For the Perl script, I do just one type of normalization, removing the value from the LIMIT clause.
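If you wanted to go further than the single LIMIT rule, a slightly fuller (but still simplistic) normalization pass might look like the sketch below. This is not what my script does, and it will trip over awkward literals, so treat it as a starting point only.

# Hypothetical, more aggressive normalization: strip string and numeric literals.
sub normalize_query
{
    my ($query) = @_;
    $query =~ s/'(?:[^'\\]|\\.)*'/?/g;   # single-quoted string literals
    $query =~ s/"(?:[^"\\]|\\.)*"/?/g;   # double-quoted string literals
    $query =~ s/\b\d+\b/?/g;             # bare numeric literals
    return $query;
}

print normalize_query(
    'SELECT photoid,title from media_photos where photoid > 23785 limit 15'), "\n";
# SELECT photoid,title from media_photos where photoid > ? limit ?

The complete script, which sticks to the simple LIMIT rule, is below: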
#!/usr/bin/perl

use Data::Dumper;
use LWP;
use HTTP::Headers;
use HTTP::Request;
use Digest::MD5 qw/md5_hex/;

my $server_uuid = '2b86b277-fb2b-492d-b946-3a2acaec0869';

# HTTP header and user agent used for every upload
my $header = HTTP::Headers->new;
$header->content_type('text/text');
$header->authorization_basic('agent','password');
my $res = LWP::UserAgent->new();

# Number of queries to read before each upload (default 20)
my $interval = shift || 20;

print "Sending queries every $interval statement(s)\n";

open(DTRACE,"./merlin.d|") or die "Couldn't open DTRACE\n";

my $counter = 1;
my $querybase = {};

while (<DTRACE>)
{
    chomp;
    my ($origquery,$dbname,$time,$rows,$bytes) = split m{:};

    # Normalize the statement (LIMIT values only)
    my $query = $origquery;
    $query =~ s/limit \d+/limit ?/g;

    # Check whether we have seen this normalized statement before,
    # and either update or initialize the min/max values
    if (exists($querybase->{$query}))
    {
        $querybase->{$query}->{max_rows} = $rows
            if ($rows > $querybase->{$query}->{max_rows});
        $querybase->{$query}->{min_rows} = $rows
            if ($rows < $querybase->{$query}->{min_rows});
        $querybase->{$query}->{max_bytes} = $bytes
            if ($bytes > $querybase->{$query}->{max_bytes});
        $querybase->{$query}->{min_bytes} = $bytes
            if ($bytes < $querybase->{$query}->{min_bytes});
        $querybase->{$query}->{max_exec_time} = $time
            if ($time > $querybase->{$query}->{max_exec_time});
        $querybase->{$query}->{min_exec_time} = $time
            if ($time < $querybase->{$query}->{min_exec_time});
    }
    else
    {
        $querybase->{$query}->{max_rows} = $rows;
        $querybase->{$query}->{min_rows} = $rows;
        $querybase->{$query}->{max_bytes} = $bytes;
        $querybase->{$query}->{min_bytes} = $bytes;
        $querybase->{$query}->{max_exec_time} = $time;
        $querybase->{$query}->{min_exec_time} = $time;
    }

    # Accumulate the totals for this normalized statement
    $querybase->{$query}->{dbname} = $dbname;
    $querybase->{$query}->{query} = $query;
    # Record the statement type (SELECT, INSERT, ...) from the leading keyword
    $querybase->{$query}->{qtype} = uc((split /\s+/, $query)[0]);
    $querybase->{$query}->{count}++;
    $querybase->{$query}->{rows} += $rows;
    $querybase->{$query}->{bytes} += $bytes;
    $querybase->{$query}->{exec_time} += $time;

    # Every $interval queries, upload and reset the collected statistics
    if (($counter % $interval) == 0)
    {
        print STDERR "Writing quan packets ($counter queries sent)\n";
        foreach my $query (keys %{$querybase})
        {
            send_quandata($querybase->{$query});
            delete($querybase->{$query});
        }
    }
    $counter++;
}

sub send_quandata
{
    my ($quanbase) = @_;

    my $urlbase = 'http://nautilus:18080/v2/rest/instance/mysql/statementsummary/%s.%s.%s';
    my $md5 = md5_hex($quanbase->{query});
    my $url = sprintf($urlbase,$server_uuid,$quanbase->{dbname},$md5);

    my $bio = <<EOF;
{
  "name": "$server_uuid.$quanbase->{dbname}.$md5",
  "parent": "/instance/mysql/server/$server_uuid",
  "values" : {
     "count": "$quanbase->{count}",
     "text": "$quanbase->{query}",
     "query_type": "$quanbase->{qtype}",
     "text_hash": "$md5",
     "max_exec_time": "$quanbase->{max_exec_time}",
     "min_exec_time": "$quanbase->{min_exec_time}",
     "exec_time": "$quanbase->{exec_time}",
     "rows": "$quanbase->{rows}",
     "max_rows": "$quanbase->{max_rows}",
     "min_rows": "$quanbase->{min_rows}",
     "database": "$quanbase->{dbname}",
     "bytes": "$quanbase->{bytes}",
     "max_bytes": "$quanbase->{max_bytes}",
     "min_bytes": "$quanbase->{min_bytes}"
  }
}
EOF

    $header->content_length(length $bio);
    my $req = HTTP::Request->new(PUT => $url, $header, $bio);
    $res->request($req);
}

The basic structure is:
- Open the DTrace script
- Read a line
- Add that to the temporary list of queries I know about, adding stats
- When I’ve read N queries, send up the stats about each query as a JSON packet to the Enterprise Manager
- Repeat
Depending on how busy your server is, you may want to adjust the interval at which the stats data is uploaded. The default is every 20 queries, but on a really busy server, or when running benchmarks, you might want to increase it (for example, ./dtrace_merlin.pl 500) to prevent the script from spending too much time sending fairly small packets of stats.
If you run the script, it should just work in the background:
$ ./dtrace_merlin.pl
Sending queries every 20 statement(s)
Writing quan packets (20 queries sent)
Writing quan packets (40 queries sent)
Writing quan packets (60 queries sent)

That’s it!
I set this up and then sent some random queries to the server. The following graphic shows the query data from the DTrace-sourced information only.
There are some limitations to the current script. I don’t do full normalization, for example, and I don’t send the detailed information about individual statements up at the moment. There is also an EXPLAIN packet that you can send, containing the output from an EXPLAIN on a long-running query; I could do that by opening a connection to the server and picking out the information. But what I’d really like to do is use the DTrace-based output to show the detail of each part of the query process alongside the EXPLAIN output. I’m sure I can work on that with the Enterprise team.
