bzr commit into MariaDB 5.2, with Maria 2.0:maria/5.2 branch (monty:2726)
#At lp:maria/5.2 based on revid:monty@xxxxxxxxxxxx-20091014125853-gix3sqkenbazrrqz
2726 Michael Widenius 2009-10-19
This is based on the userstatv2 patch from Percona and OurDelta.
The original code comes, as far as I know, from Google (Mark Callaghan's team) with additional work from Percona, OurDelta and Weldon Whipple.
This code provides the same functionality, but with a lot of changes to make it faster and to better fit the MariaDB infrastructure.
Added new status variables:
- Com_show_client_statistics, Com_show_index_statistics, Com_show_table_statistics, Com_show_user_statistics
- Access_denied_errors, Busy_time (clock time), Binlog_bytes_written, Cpu_time, Empty_queries, Rows_sent, Rows_read
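For illustration (not part of the patch; the variable names come from the list above and the values depend on the workload), the new counters can be inspected with ordinary SHOW STATUS statements:
  SHOW GLOBAL STATUS LIKE 'Com_show_%_statistics';
  SHOW GLOBAL STATUS LIKE 'Rows_%';
  SHOW GLOBAL STATUS LIKE 'Busy_time';
  SHOW GLOBAL STATUS LIKE 'Cpu_time';
  SHOW GLOBAL STATUS LIKE 'Binlog_bytes_written';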
Added new variable / startup option 'userstat' to control whether user statistics are enabled
Added my_getcputime(), which returns the CPU time used by the current thread.
New FLUSH commands:
- FLUSH SLOW QUERY LOGS
- FLUSH TABLE_STATISTICS
- FLUSH INDEX_STATISTICS
- FLUSH USER_STATISTICS
- FLUSH CLIENT_STATISTICS
New SHOW commands:
- SHOW CLIENT_STATISTICS
- SHOW USER_STATISTICS
- SHOW TABLE_STATISTICS
- SHOW INDEX_STATISTICS
New INFORMATION_SCHEMA tables (see the usage example after this list):
- CLIENT_STATISTICS
- USER_STATISTICS
- INDEX_STATISTICS
- TABLE_STATISTICS
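For illustration only, a sketch of how the new objects are typically used together (test.t1 is a hypothetical table; the new status_user test below exercises essentially the same flow):
  SET GLOBAL userstat=1;
  -- ... run some workload against test.t1 ...
  SHOW TABLE_STATISTICS;
  SHOW INDEX_STATISTICS;
  SELECT * FROM information_schema.USER_STATISTICS;
  SELECT * FROM information_schema.CLIENT_STATISTICS;
  FLUSH TABLE_STATISTICS;
  FLUSH INDEX_STATISTICS;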
Added support for all new flush commands to mysqladmin
Added handler::ha_... wrappers for all handler read calls to do statistics counting
- Changed all code to use new ha_... calls
- Count number of read rows, changed rows and rows read through an index
Added counting of number of bytes sent to binary log (status variable Binlog_bytes_written)
Added counting of access denied errors (status variable Access_denied_errors)
Bugs fixed:
- Fixed bug in add_to_status() and add_diff_to_status() where longlong variables were treated as long
- CLOCK_GETTIME was not working properly on Linux
added:
mysql-test/r/status_user.result
mysql-test/t/status_user-master.opt
mysql-test/t/status_user.test
modified:
client/mysqladmin.cc
configure.in
include/my_sys.h
include/mysql_com.h
mysql-test/r/information_schema.result
mysql-test/r/information_schema_all_engines.result
mysql-test/r/information_schema_db.result
mysql-test/r/log_slow.result
mysql-test/t/log_slow.test
mysys/my_getsystime.c
sql/authors.h
sql/event_data_objects.cc
sql/event_db_repository.cc
sql/filesort.cc
sql/ha_partition.cc
sql/handler.cc
sql/handler.h
sql/item_subselect.cc
sql/lex.h
sql/log.cc
sql/log.h
sql/log_event.cc
sql/log_event_old.cc
sql/mysql_priv.h
sql/mysqld.cc
sql/opt_range.cc
sql/opt_range.h
sql/opt_sum.cc
sql/records.cc
sql/set_var.cc
sql/sp.cc
sql/sql_acl.cc
sql/sql_base.cc
sql/sql_class.cc
sql/sql_class.h
sql/sql_connect.cc
sql/sql_cursor.cc
sql/sql_handler.cc
sql/sql_help.cc
sql/sql_insert.cc
sql/sql_lex.h
sql/sql_parse.cc
sql/sql_plugin.cc
sql/sql_prepare.cc
sql/sql_select.cc
sql/sql_servers.cc
sql/sql_show.cc
sql/sql_table.cc
sql/sql_udf.cc
sql/sql_update.cc
sql/sql_yacc.yy
sql/structs.h
sql/table.cc
sql/table.h
sql/tztime.cc
per-file messages:
client/mysqladmin.cc
Added support for all new flush commands and some common combinations:
flush-slow-log
flush-table-statistics
flush-index-statistics
flush-user-statistics
flush-client-statistics
flush-all-status
flush-all-statistics
configure.in
Added a check for whether clock_gettime needs librt.
(Fixes Bug #37639 clock_gettime is never used/enabled in Linux/Unix)
include/my_sys.h
Added my_getcputime()
include/mysql_com.h
Added LIST_PROCESS_HOST_LEN & new REFRESH target defines
mysql-test/r/information_schema.result
New information schema tables added
mysql-test/r/information_schema_all_engines.result
New information schema tables added
mysql-test/r/information_schema_db.result
New information schema tables added
mysql-test/r/log_slow.result
Added a test that FLUSH SLOW QUERY LOGS is accepted
mysql-test/r/status_user.result
Basic testing of user, client, table and index statistics
mysql-test/t/log_slow.test
Added a test that FLUSH SLOW QUERY LOGS is accepted
mysql-test/t/status_user-master.opt
Ensure that we get a fresh restart before running status_user.test
mysql-test/t/status_user.test
Basic testing of user, client, table and index statistics
mysys/my_getsystime.c
Added my_getcputime()
Returns the CPU time used by the current thread.
sql/authors.h
Updated authors to have core and original MySQL developers first.
sql/event_data_objects.cc
Updated call to mysql_reset_thd_for_next_command()
sql/event_db_repository.cc
Changed to use new ha_... calls
sql/filesort.cc
Changed to use new ha_... calls
sql/ha_partition.cc
Changed to use new ha_... calls
Fixed comment syntax
sql/handler.cc
Changed to use new ha_... calls
Reset table statistics
Added code to update global table and index status
Added counting of rows changed
sql/handler.h
Added table and index statistics variables
Added function reset_statistics()
Added handler::ha_... wrappers for all handler read calls to do statistics counting
Protected all normal read calls to ensure we use the new calls in the server.
Made ha_partition a friend class so that partition code can call the old read functions
sql/item_subselect.cc
Changed to use new ha_... calls
sql/lex.h
Added keywords for new information schema tables and flush commands
sql/log.cc
Added flush_slow_log()
Added counting of number of bytes sent to binary log
Removed an unneeded test of thd (it is used earlier, so it is safe to use here)
Added THD object to MYSQL_BIN_LOG::write_cache() to simplify statistics counting
sql/log.h
Added new parameter to write_cache()
Added flush_slow_log() functions.
sql/log_event.cc
Updated call to mysql_reset_thd_for_next_command()
Changed to use new ha_... calls
sql/log_event_old.cc
Updated call to mysql_reset_thd_for_next_command()
Changed to use new ha_... calls
sql/mysql_priv.h
Updated call to mysql_reset_thd_for_next_command()
Added new statistics functions and variables needed by these.
sql/mysqld.cc
Added new statistics variables and structures to handle these
Added new status variables:
- Com_show_client_statistics, Com_show_index_statistics, Com_show_table_statistics, Com_show_user_statistics
- Access_denied_errors, Busy_time (clock time), Binlog_bytes_written, Cpu_time, Empty_queries, Rows_sent, Rows_read
Added new option 'userstat' to control whether user statistics are enabled
sql/opt_range.cc
Changed to use new ha_... calls
sql/opt_range.h
Changed to use new ha_... calls
sql/opt_sum.cc
Changed to use new ha_... calls
sql/records.cc
Changed to use new ha_... calls
sql/set_var.cc
Added variable 'userstat'
sql/sp.cc
Changed to use new ha_... calls
sql/sql_acl.cc
Changed to use new ha_... calls
Added counting of access_denied_errors
sql/sql_base.cc
Added call to statistics functions
sql/sql_class.cc
Added usage of org_status_var to store status variables at the start of a command
Added functions THD::update_stats(), THD::update_all_stats()
Fixed bug in add_to_status() and add_diff_to_status() where longlong variables were treated as long
sql/sql_class.h
Added new status variables to status_var
Moved variables in status_var that are not ulong to the end.
Added variables to THD for storing temporary values during statistics counting
sql/sql_connect.cc
Variables and functions to calculate user and client statistics
Added counting of access_denied_errors and lost_connections
sql/sql_cursor.cc
Changed to use new ha_... calls
sql/sql_handler.cc
Changed to use new ha_... calls
sql/sql_help.cc
Changed to use new ha_... calls
sql/sql_insert.cc
Changed to use new ha_... calls
sql/sql_lex.h
Added SQLCOM_SHOW_USER_STATS, SQLCOM_SHOW_TABLE_STATS, SQLCOM_SHOW_INDEX_STATS, SQLCOM_SHOW_CLIENT_STATS
sql/sql_parse.cc
Added handling of:
- SHOW CLIENT_STATISTICS
- SHOW USER_STATISTICS
- SHOW TABLE_STATISTICS
- SHOW INDEX_STATISTICS
Added handling of new FLUSH commands:
- FLUSH SLOW QUERY LOGS
- FLUSH TABLE_STATISTICS
- FLUSH INDEX_STATISTICS
- FLUSH USER_STATISTICS
- FLUSH CLIENT_STATISTICS
Added THD parameter to mysql_reset_thd_for_next_command()
Added initialization and calls to user statistics functions
Added increment of statistics variables empty_queries, rows_sent and access_denied_errors.
Added counting of cpu time per query
sql/sql_plugin.cc
Changed to use new ha_... calls
sql/sql_prepare.cc
Updated call to mysql_reset_thd_for_next_command()
sql/sql_select.cc
Changed to use new ha_... calls
Indentation changes
sql/sql_servers.cc
Changed to use new ha_... calls
sql/sql_show.cc
Added counting of access denied errors
Added function for new information schema tables:
- CLIENT_STATISTICS
- USER_STATISTICS
- INDEX_STATISTICS
- TABLE_STATISTICS
Changed to use new ha_... calls
sql/sql_table.cc
Changed to use new ha_... calls
sql/sql_udf.cc
Changed to use new ha_... calls
sql/sql_update.cc
Changed to use new ha_... calls
sql/sql_yacc.yy
Added new SHOW and FLUSH commands
sql/structs.h
Added name_length to KEY to avoid some strlen() calls
Added cache_name to KEY for fast storage of the key value in the cache
Added structs USER_STATS, TABLE_STATS, INDEX_STATS
Added function prototypes for statistics functions
sql/table.cc
Store db+table+index name into keyinfo->cache_name
sql/table.h
Added new information schema tables
sql/tztime.cc
Changed to use new ha_... calls
=== modified file 'client/mysqladmin.cc'
--- a/client/mysqladmin.cc 2009-09-07 20:50:10 +0000
+++ b/client/mysqladmin.cc 2009-10-19 17:14:48 +0000
@@ -23,7 +23,7 @@
#include <sys/stat.h>
#include <mysql.h>
-#define ADMIN_VERSION "8.42"
+#define ADMIN_VERSION "9.0"
#define MAX_MYSQL_VAR 512
#define SHUTDOWN_DEF_TIMEOUT 3600 /* Wait for shutdown */
#define MAX_TRUNC_LENGTH 3
@@ -96,7 +96,10 @@ enum commands {
ADMIN_FLUSH_HOSTS, ADMIN_FLUSH_TABLES, ADMIN_PASSWORD,
ADMIN_PING, ADMIN_EXTENDED_STATUS, ADMIN_FLUSH_STATUS,
ADMIN_FLUSH_PRIVILEGES, ADMIN_START_SLAVE, ADMIN_STOP_SLAVE,
- ADMIN_FLUSH_THREADS, ADMIN_OLD_PASSWORD
+ ADMIN_FLUSH_THREADS, ADMIN_OLD_PASSWORD, ADMIN_FLUSH_SLOW_LOG,
+ ADMIN_FLUSH_TABLE_STATISTICS, ADMIN_FLUSH_INDEX_STATISTICS,
+ ADMIN_FLUSH_USER_STATISTICS, ADMIN_FLUSH_CLIENT_STATISTICS,
+ ADMIN_FLUSH_ALL_STATUS, ADMIN_FLUSH_ALL_STATISTICS
};
static const char *command_names[]= {
"create", "drop", "shutdown",
@@ -106,7 +109,10 @@ static const char *command_names[]= {
"flush-hosts", "flush-tables", "password",
"ping", "extended-status", "flush-status",
"flush-privileges", "start-slave", "stop-slave",
- "flush-threads","old-password",
+ "flush-threads", "old-password", "flush-slow-log",
+ "flush-table-statistics", "flush-index-statistics",
+ "flush-user-statistics", "flush-client-statistics",
+ "flush-all-status", "flush-all-statistics",
NullS
};
@@ -518,7 +524,8 @@ static int execute_commands(MYSQL *mysql
for (; argc > 0 ; argv++,argc--)
{
- switch (find_type(argv[0],&command_typelib,2)) {
+ int command;
+ switch ((command= find_type(argv[0],&command_typelib,2))) {
case ADMIN_CREATE:
{
char buff[FN_REFLEN+20];
@@ -596,7 +603,11 @@ static int execute_commands(MYSQL *mysql
if (mysql_refresh(mysql,
(uint) ~(REFRESH_GRANT | REFRESH_STATUS |
REFRESH_READ_LOCK | REFRESH_SLAVE |
- REFRESH_MASTER)))
+ REFRESH_MASTER | REFRESH_TABLE_STATS |
+ REFRESH_INDEX_STATS |
+ REFRESH_USER_STATS |
+ REFRESH_SLOW_QUERY_LOG |
+ REFRESH_CLIENT_STATS)))
{
my_printf_error(0, "refresh failed; error: '%s'", error_flags,
mysql_error(mysql));
@@ -614,7 +625,8 @@ static int execute_commands(MYSQL *mysql
case ADMIN_VER:
new_line=1;
print_version();
- puts("Copyright 2000-2008 MySQL AB, 2008 Sun Microsystems, Inc.");
+ puts("Copyright 2000-2008 MySQL AB, 2008 Sun Microsystems, Inc,\n"
+ "2009 Monty Program Ab");
puts("This software comes with ABSOLUTELY NO WARRANTY. This is free software,\nand you are welcome to modify and redistribute it under the GPL license\n");
printf("Server version\t\t%s\n", mysql_get_server_info(mysql));
printf("Protocol version\t%d\n", mysql_get_proto_info(mysql));
@@ -790,9 +802,19 @@ static int execute_commands(MYSQL *mysql
}
case ADMIN_FLUSH_LOGS:
{
- if (mysql_refresh(mysql,REFRESH_LOG))
+ if (mysql_query(mysql,"flush logs"))
{
- my_printf_error(0, "refresh failed; error: '%s'", error_flags,
+ my_printf_error(0, "flush failed; error: '%s'", error_flags,
+ mysql_error(mysql));
+ return -1;
+ }
+ break;
+ }
+ case ADMIN_FLUSH_SLOW_LOG:
+ {
+ if (mysql_query(mysql,"flush slow query logs"))
+ {
+ my_printf_error(0, "flush failed; error: '%s'", error_flags,
mysql_error(mysql));
return -1;
}
@@ -802,7 +824,7 @@ static int execute_commands(MYSQL *mysql
{
if (mysql_query(mysql,"flush hosts"))
{
- my_printf_error(0, "refresh failed; error: '%s'", error_flags,
+ my_printf_error(0, "flush failed; error: '%s'", error_flags,
mysql_error(mysql));
return -1;
}
@@ -812,7 +834,7 @@ static int execute_commands(MYSQL *mysql
{
if (mysql_query(mysql,"flush tables"))
{
- my_printf_error(0, "refresh failed; error: '%s'", error_flags,
+ my_printf_error(0, "flush failed; error: '%s'", error_flags,
mysql_error(mysql));
return -1;
}
@@ -822,7 +844,71 @@ static int execute_commands(MYSQL *mysql
{
if (mysql_query(mysql,"flush status"))
{
- my_printf_error(0, "refresh failed; error: '%s'", error_flags,
+ my_printf_error(0, "flush failed; error: '%s'", error_flags,
+ mysql_error(mysql));
+ return -1;
+ }
+ break;
+ }
+ case ADMIN_FLUSH_TABLE_STATISTICS:
+ {
+ if (mysql_query(mysql,"flush table_statistics"))
+ {
+ my_printf_error(0, "flush failed; error: '%s'", error_flags,
+ mysql_error(mysql));
+ return -1;
+ }
+ break;
+ }
+ case ADMIN_FLUSH_INDEX_STATISTICS:
+ {
+ if (mysql_query(mysql,"flush index_statistics"))
+ {
+ my_printf_error(0, "flush failed; error: '%s'", error_flags,
+ mysql_error(mysql));
+ return -1;
+ }
+ break;
+ }
+ case ADMIN_FLUSH_USER_STATISTICS:
+ {
+ if (mysql_query(mysql,"flush user_statistics"))
+ {
+ my_printf_error(0, "flush failed; error: '%s'", error_flags,
+ mysql_error(mysql));
+ return -1;
+ }
+ break;
+ }
+ case ADMIN_FLUSH_CLIENT_STATISTICS:
+ {
+ if (mysql_query(mysql,"flush client_statistics"))
+ {
+ my_printf_error(0, "flush failed; error: '%s'", error_flags,
+ mysql_error(mysql));
+ return -1;
+ }
+ break;
+ }
+ case ADMIN_FLUSH_ALL_STATISTICS:
+ {
+ if (mysql_query(mysql,
+ "flush table_statistics,index_statistics,"
+ "user_statistics,client_statistics"))
+ {
+ my_printf_error(0, "flush failed; error: '%s'", error_flags,
+ mysql_error(mysql));
+ return -1;
+ }
+ break;
+ }
+ case ADMIN_FLUSH_ALL_STATUS:
+ {
+ if (mysql_query(mysql,
+ "flush status,table_statistics,index_statistics,"
+ "user_statistics,client_statistics"))
+ {
+ my_printf_error(0, "flush failed; error: '%s'", error_flags,
mysql_error(mysql));
return -1;
}
@@ -994,7 +1080,8 @@ static void print_version(void)
static void usage(void)
{
print_version();
- puts("Copyright 2000-2008 MySQL AB, 2008 Sun Microsystems, Inc.");
+ puts("Copyright 2000-2008 MySQL AB, 2008 Sun Microsystems, Inc,\n"
+ "2009 Monty Program Ab");
puts("This software comes with ABSOLUTELY NO WARRANTY. This is free software,\nand you are welcome to modify and redistribute it under the GPL license\n");
puts("Administration program for the mysqld daemon.");
printf("Usage: %s [OPTIONS] command command....\n", my_progname);
@@ -1002,16 +1089,23 @@ static void usage(void)
my_print_variables(my_long_options);
print_defaults("my",load_default_groups);
puts("\nWhere command is a one or more of: (Commands may be shortened)\n\
- create databasename Create a new database\n\
- debug Instruct server to write debug information to log\n\
- drop databasename Delete a database and all its tables\n\
- extended-status Gives an extended status message from the server\n\
- flush-hosts Flush all cached hosts\n\
- flush-logs Flush all logs\n\
- flush-status Clear status variables\n\
- flush-tables Flush all tables\n\
- flush-threads Flush the thread cache\n\
- flush-privileges Reload grant tables (same as reload)\n\
+ create databasename Create a new database\n\
+ debug Instruct server to write debug information to log\n\
+ drop databasename Delete a database and all its tables\n\
+ extended-status Gives an extended status message from the server\n\
+ flush-all-statistics Flush all statistics tables\n\
+ flush-all-status Flush status and statistics\n\
+ flush-client-statistics Flush client statistics\n\
+ flush-hosts Flush all cached hosts\n\
+ flush-index-statistics Flush index statistics\n\
+ flush-logs Flush all logs\n\
+ flush-privileges Reload grant tables (same as reload)\n\
+ flush-slow-log Flush slow query log\n\
+ flush-status Clear status variables\n\
+ flush-table-statistics Clear table statistics\n\
+ flush-tables Flush all tables\n\
+ flush-threads Flush the thread cache\n\
+ flush-user-statistics Flush user statistics\n\
kill id,id,... Kill mysql threads");
#if MYSQL_VERSION_ID >= 32200
puts("\
=== modified file 'configure.in'
--- a/configure.in 2009-10-08 09:43:31 +0000
+++ b/configure.in 2009-10-19 17:14:48 +0000
@@ -829,7 +829,7 @@ AC_CHECK_HEADERS(fcntl.h fenv.h float.h
sys/timeb.h sys/types.h sys/un.h sys/vadvise.h sys/wait.h term.h \
unistd.h utime.h sys/utime.h termio.h termios.h sched.h crypt.h alloca.h \
sys/ioctl.h malloc.h sys/malloc.h sys/ipc.h sys/shm.h linux/config.h \
- sys/prctl.h sys/resource.h sys/param.h port.h ieeefp.h \
+ sys/prctl.h sys/resource.h sys/param.h port.h ieeefp.h linux/unistd.h \
execinfo.h)
AC_CHECK_HEADERS([xfs/xfs.h])
@@ -2096,7 +2096,18 @@ case "$target" in
# We also disable for SCO for the time being, the headers for the
# thread library we use conflicts with other headers.
;;
- *) AC_CHECK_FUNCS(clock_gettime)
+*)
+ # most systems require the program be linked with librt library to use
+ # the function clock_gettime
+ my_save_LIBS="$LIBS"
+ LIBS=""
+ AC_CHECK_LIB(rt,clock_gettime)
+ LIBRT=$LIBS
+ LIBS="$my_save_LIBS"
+ AC_SUBST(LIBRT)
+
+ LIBS="$LIBS $LIBRT"
+ AC_CHECK_FUNCS(clock_gettime)
;;
esac
@@ -2786,7 +2797,7 @@ then
fi
sql_client_dirs="$sql_client_dirs client"
-CLIENT_LIBS="$NON_THREADED_LIBS $openssl_libs $ZLIB_LIBS $STATIC_NSS_FLAGS"
+CLIENT_LIBS="$NON_THREADED_LIBS $openssl_libs $ZLIB_LIBS $STATIC_NSS_FLAGS $LIBRT"
AC_SUBST(CLIENT_LIBS)
AC_SUBST(CLIENT_THREAD_LIBS)
=== modified file 'include/my_sys.h'
--- a/include/my_sys.h 2009-09-07 20:50:10 +0000
+++ b/include/my_sys.h 2009-10-19 17:14:48 +0000
@@ -904,6 +904,7 @@ void my_free_open_file_info(void);
extern time_t my_time(myf flags);
extern ulonglong my_getsystime(void);
+extern ulonglong my_getcputime(void);
extern ulonglong my_micro_time();
extern ulonglong my_micro_time_and_time(time_t *time_arg);
time_t my_time_possible_from_micro(ulonglong microtime);
=== modified file 'include/mysql_com.h'
--- a/include/mysql_com.h 2008-10-10 15:28:41 +0000
+++ b/include/mysql_com.h 2009-10-19 17:14:48 +0000
@@ -29,6 +29,7 @@
#define SERVER_VERSION_LENGTH 60
#define SQLSTATE_LENGTH 5
+#define LIST_PROCESS_HOST_LEN 64
/*
USER_HOST_BUFF_SIZE -- length of string buffer, that is enough to contain
@@ -115,6 +116,11 @@ enum enum_server_command
thread */
#define REFRESH_MASTER 128 /* Remove all bin logs in the index
and truncate the index */
+#define REFRESH_TABLE_STATS 256 /* Refresh table stats hash table */
+#define REFRESH_INDEX_STATS 512 /* Refresh index stats hash table */
+#define REFRESH_USER_STATS 1024 /* Refresh user stats hash table */
+#define REFRESH_SLOW_QUERY_LOG 4096 /* Flush slow query log and rotate*/
+#define REFRESH_CLIENT_STATS 8192 /* Refresh client stats hash table */
/* The following can't be set with mysql_refresh() */
#define REFRESH_READ_LOCK 16384 /* Lock tables for read */
=== modified file 'mysql-test/r/information_schema.result'
--- a/mysql-test/r/information_schema.result 2009-09-29 20:19:43 +0000
+++ b/mysql-test/r/information_schema.result 2009-10-19 17:14:48 +0000
@@ -45,6 +45,7 @@ NOT (table_schema = 'INFORMATION_SCHEMA'
select * from v1 ORDER BY c COLLATE utf8_bin;
c
CHARACTER_SETS
+CLIENT_STATISTICS
COLLATIONS
COLLATION_CHARACTER_SET_APPLICABILITY
COLUMNS
@@ -54,6 +55,7 @@ EVENTS
FILES
GLOBAL_STATUS
GLOBAL_VARIABLES
+INDEX_STATISTICS
INNODB_BUFFER_POOL_PAGES
INNODB_BUFFER_POOL_PAGES_BLOB
INNODB_BUFFER_POOL_PAGES_INDEX
@@ -82,8 +84,10 @@ STATISTICS
TABLES
TABLE_CONSTRAINTS
TABLE_PRIVILEGES
+TABLE_STATISTICS
TRIGGERS
USER_PRIVILEGES
+USER_STATISTICS
VIEWS
XTRADB_ENHANCEMENTS
columns_priv
@@ -121,6 +125,7 @@ c table_name
TABLES TABLES
TABLE_CONSTRAINTS TABLE_CONSTRAINTS
TABLE_PRIVILEGES TABLE_PRIVILEGES
+TABLE_STATISTICS TABLE_STATISTICS
TRIGGERS TRIGGERS
tables_priv tables_priv
time_zone time_zone
@@ -140,6 +145,7 @@ c table_name
TABLES TABLES
TABLE_CONSTRAINTS TABLE_CONSTRAINTS
TABLE_PRIVILEGES TABLE_PRIVILEGES
+TABLE_STATISTICS TABLE_STATISTICS
TRIGGERS TRIGGERS
tables_priv tables_priv
time_zone time_zone
@@ -159,6 +165,7 @@ c table_name
TABLES TABLES
TABLE_CONSTRAINTS TABLE_CONSTRAINTS
TABLE_PRIVILEGES TABLE_PRIVILEGES
+TABLE_STATISTICS TABLE_STATISTICS
TRIGGERS TRIGGERS
tables_priv tables_priv
time_zone time_zone
@@ -640,12 +647,13 @@ from information_schema.tables
where table_schema='information_schema' limit 2;
TABLE_NAME TABLE_TYPE ENGINE
CHARACTER_SETS SYSTEM VIEW MEMORY
-COLLATIONS SYSTEM VIEW MEMORY
+CLIENT_STATISTICS SYSTEM VIEW MEMORY
show tables from information_schema like "T%";
Tables_in_information_schema (T%)
TABLES
TABLE_CONSTRAINTS
TABLE_PRIVILEGES
+TABLE_STATISTICS
TRIGGERS
create database information_schema;
ERROR 42000: Access denied for user 'root'@'localhost' to database 'information_schema'
@@ -655,6 +663,7 @@ Tables_in_information_schema (T%) Table_
TABLES SYSTEM VIEW
TABLE_CONSTRAINTS SYSTEM VIEW
TABLE_PRIVILEGES SYSTEM VIEW
+TABLE_STATISTICS SYSTEM VIEW
TRIGGERS SYSTEM VIEW
create table t1(a int);
ERROR 42S02: Unknown table 't1' in information_schema
@@ -667,6 +676,7 @@ Tables_in_information_schema (T%)
TABLES
TABLE_CONSTRAINTS
TABLE_PRIVILEGES
+TABLE_STATISTICS
TRIGGERS
select table_name from tables where table_name='user';
table_name
@@ -856,6 +866,7 @@ TABLE_NAME COLUMN_NAME PRIVILEGES
COLUMNS TABLE_NAME select
COLUMN_PRIVILEGES TABLE_NAME select
FILES TABLE_NAME select
+INDEX_STATISTICS TABLE_NAME select
KEY_COLUMN_USAGE TABLE_NAME select
PARTITIONS TABLE_NAME select
REFERENTIAL_CONSTRAINTS TABLE_NAME select
@@ -863,6 +874,7 @@ STATISTICS TABLE_NAME select
TABLES TABLE_NAME select
TABLE_CONSTRAINTS TABLE_NAME select
TABLE_PRIVILEGES TABLE_NAME select
+TABLE_STATISTICS TABLE_NAME select
VIEWS TABLE_NAME select
INNODB_BUFFER_POOL_PAGES_INDEX table_name select
INNODB_INDEX_STATS table_name select
=== modified file 'mysql-test/r/information_schema_all_engines.result'
--- a/mysql-test/r/information_schema_all_engines.result 2009-08-03 20:09:53 +0000
+++ b/mysql-test/r/information_schema_all_engines.result 2009-10-19 17:14:48 +0000
@@ -2,6 +2,7 @@ use INFORMATION_SCHEMA;
show tables;
Tables_in_information_schema
CHARACTER_SETS
+CLIENT_STATISTICS
COLLATIONS
COLLATION_CHARACTER_SET_APPLICABILITY
COLUMNS
@@ -11,6 +12,7 @@ EVENTS
FILES
GLOBAL_STATUS
GLOBAL_VARIABLES
+INDEX_STATISTICS
KEY_COLUMN_USAGE
PARTITIONS
PLUGINS
@@ -26,8 +28,10 @@ STATISTICS
TABLES
TABLE_CONSTRAINTS
TABLE_PRIVILEGES
+TABLE_STATISTICS
TRIGGERS
USER_PRIVILEGES
+USER_STATISTICS
VIEWS
INNODB_BUFFER_POOL_PAGES
PBXT_STATISTICS
@@ -60,6 +64,7 @@ c2.column_name LIKE '%SCHEMA%'
);
table_name column_name
CHARACTER_SETS CHARACTER_SET_NAME
+CLIENT_STATISTICS CLIENT
COLLATIONS COLLATION_NAME
COLLATION_CHARACTER_SET_APPLICABILITY COLLATION_NAME
COLUMNS TABLE_SCHEMA
@@ -69,6 +74,7 @@ EVENTS EVENT_SCHEMA
FILES TABLE_SCHEMA
GLOBAL_STATUS VARIABLE_NAME
GLOBAL_VARIABLES VARIABLE_NAME
+INDEX_STATISTICS TABLE_SCHEMA
KEY_COLUMN_USAGE CONSTRAINT_SCHEMA
PARTITIONS TABLE_SCHEMA
PLUGINS PLUGIN_NAME
@@ -84,8 +90,10 @@ STATISTICS TABLE_SCHEMA
TABLES TABLE_SCHEMA
TABLE_CONSTRAINTS CONSTRAINT_SCHEMA
TABLE_PRIVILEGES TABLE_SCHEMA
+TABLE_STATISTICS TABLE_SCHEMA
TRIGGERS TRIGGER_SCHEMA
USER_PRIVILEGES GRANTEE
+USER_STATISTICS USER
VIEWS TABLE_SCHEMA
INNODB_BUFFER_POOL_PAGES page_type
PBXT_STATISTICS ID
@@ -118,6 +126,7 @@ c2.column_name LIKE '%SCHEMA%'
);
table_name column_name
CHARACTER_SETS CHARACTER_SET_NAME
+CLIENT_STATISTICS CLIENT
COLLATIONS COLLATION_NAME
COLLATION_CHARACTER_SET_APPLICABILITY COLLATION_NAME
COLUMNS TABLE_SCHEMA
@@ -127,6 +136,7 @@ EVENTS EVENT_SCHEMA
FILES TABLE_SCHEMA
GLOBAL_STATUS VARIABLE_NAME
GLOBAL_VARIABLES VARIABLE_NAME
+INDEX_STATISTICS TABLE_SCHEMA
KEY_COLUMN_USAGE CONSTRAINT_SCHEMA
PARTITIONS TABLE_SCHEMA
PLUGINS PLUGIN_NAME
@@ -142,8 +152,10 @@ STATISTICS TABLE_SCHEMA
TABLES TABLE_SCHEMA
TABLE_CONSTRAINTS CONSTRAINT_SCHEMA
TABLE_PRIVILEGES TABLE_SCHEMA
+TABLE_STATISTICS TABLE_SCHEMA
TRIGGERS TRIGGER_SCHEMA
USER_PRIVILEGES GRANTEE
+USER_STATISTICS USER
VIEWS TABLE_SCHEMA
INNODB_BUFFER_POOL_PAGES page_type
PBXT_STATISTICS ID
@@ -182,6 +194,7 @@ group by c2.column_type order by num lim
group by t.table_name order by num1, t.table_name;
table_name group_concat(t.table_schema, '.', t.table_name) num1
CHARACTER_SETS information_schema.CHARACTER_SETS 1
+CLIENT_STATISTICS information_schema.CLIENT_STATISTICS 1
COLLATIONS information_schema.COLLATIONS 1
COLLATION_CHARACTER_SET_APPLICABILITY information_schema.COLLATION_CHARACTER_SET_APPLICABILITY 1
COLUMNS information_schema.COLUMNS 1
@@ -191,6 +204,7 @@ EVENTS information_schema.EVENTS 1
FILES information_schema.FILES 1
GLOBAL_STATUS information_schema.GLOBAL_STATUS 1
GLOBAL_VARIABLES information_schema.GLOBAL_VARIABLES 1
+INDEX_STATISTICS information_schema.INDEX_STATISTICS 1
INNODB_BUFFER_POOL_PAGES information_schema.INNODB_BUFFER_POOL_PAGES 1
INNODB_BUFFER_POOL_PAGES_BLOB information_schema.INNODB_BUFFER_POOL_PAGES_BLOB 1
INNODB_BUFFER_POOL_PAGES_INDEX information_schema.INNODB_BUFFER_POOL_PAGES_INDEX 1
@@ -220,8 +234,10 @@ STATISTICS information_schema.STATISTICS
TABLES information_schema.TABLES 1
TABLE_CONSTRAINTS information_schema.TABLE_CONSTRAINTS 1
TABLE_PRIVILEGES information_schema.TABLE_PRIVILEGES 1
+TABLE_STATISTICS information_schema.TABLE_STATISTICS 1
TRIGGERS information_schema.TRIGGERS 1
USER_PRIVILEGES information_schema.USER_PRIVILEGES 1
+USER_STATISTICS information_schema.USER_STATISTICS 1
VIEWS information_schema.VIEWS 1
XTRADB_ENHANCEMENTS information_schema.XTRADB_ENHANCEMENTS 1
Database: information_schema
@@ -229,6 +245,7 @@ Database: information_schema
| Tables |
+---------------------------------------+
| CHARACTER_SETS |
+| CLIENT_STATISTICS |
| COLLATIONS |
| COLLATION_CHARACTER_SET_APPLICABILITY |
| COLUMNS |
@@ -238,6 +255,7 @@ Database: information_schema
| FILES |
| GLOBAL_STATUS |
| GLOBAL_VARIABLES |
+| INDEX_STATISTICS |
| KEY_COLUMN_USAGE |
| PARTITIONS |
| PLUGINS |
@@ -253,8 +271,10 @@ Database: information_schema
| TABLES |
| TABLE_CONSTRAINTS |
| TABLE_PRIVILEGES |
+| TABLE_STATISTICS |
| TRIGGERS |
| USER_PRIVILEGES |
+| USER_STATISTICS |
| VIEWS |
| INNODB_BUFFER_POOL_PAGES |
| PBXT_STATISTICS |
@@ -277,6 +297,7 @@ Database: INFORMATION_SCHEMA
| Tables |
+---------------------------------------+
| CHARACTER_SETS |
+| CLIENT_STATISTICS |
| COLLATIONS |
| COLLATION_CHARACTER_SET_APPLICABILITY |
| COLUMNS |
@@ -286,6 +307,7 @@ Database: INFORMATION_SCHEMA
| FILES |
| GLOBAL_STATUS |
| GLOBAL_VARIABLES |
+| INDEX_STATISTICS |
| KEY_COLUMN_USAGE |
| PARTITIONS |
| PLUGINS |
@@ -301,8 +323,10 @@ Database: INFORMATION_SCHEMA
| TABLES |
| TABLE_CONSTRAINTS |
| TABLE_PRIVILEGES |
+| TABLE_STATISTICS |
| TRIGGERS |
| USER_PRIVILEGES |
+| USER_STATISTICS |
| VIEWS |
| INNODB_BUFFER_POOL_PAGES |
| PBXT_STATISTICS |
@@ -328,5 +352,5 @@ Wildcard: inf_rmation_schema
+--------------------+
SELECT table_schema, count(*) FROM information_schema.TABLES WHERE table_schema IN ('mysql', 'INFORMATION_SCHEMA', 'test', 'mysqltest') AND table_name<>'ndb_binlog_index' AND table_name<>'ndb_apply_status' GROUP BY TABLE_SCHEMA;
table_schema count(*)
-information_schema 43
+information_schema 47
mysql 22
=== modified file 'mysql-test/r/information_schema_db.result'
--- a/mysql-test/r/information_schema_db.result 2009-09-07 20:50:10 +0000
+++ b/mysql-test/r/information_schema_db.result 2009-10-19 17:14:48 +0000
@@ -7,6 +7,7 @@ Tables_in_information_schema (T%)
TABLES
TABLE_CONSTRAINTS
TABLE_PRIVILEGES
+TABLE_STATISTICS
TRIGGERS
create database `inf%`;
create database mbase;
=== modified file 'mysql-test/r/log_slow.result'
--- a/mysql-test/r/log_slow.result 2009-09-03 14:05:38 +0000
+++ b/mysql-test/r/log_slow.result 2009-10-19 17:14:48 +0000
@@ -56,5 +56,6 @@ last_insert_id int(11) NO NULL
insert_id int(11) NO NULL
server_id int(10) unsigned NO NULL
sql_text mediumtext NO NULL
+flush slow query logs;
set @@log_slow_filter=default;
set @@log_slow_verbosity=default;
=== added file 'mysql-test/r/status_user.result'
--- a/mysql-test/r/status_user.result 1970-01-01 00:00:00 +0000
+++ b/mysql-test/r/status_user.result 2009-10-19 17:14:48 +0000
@@ -0,0 +1,166 @@
+DROP TABLE IF EXISTS t1;
+select variable_value from information_schema.global_status where variable_name="handler_read_key" into @global_read_key;
+show columns from information_schema.client_statistics;
+Field Type Null Key Default Extra
+CLIENT varchar(64) NO
+TOTAL_CONNECTIONS int(21) NO 0
+CONCURRENT_CONNECTIONS int(21) NO 0
+CONNECTED_TIME int(21) NO 0
+BUSY_TIME double NO 0
+CPU_TIME double NO 0
+BYTES_RECEIVED int(21) NO 0
+BYTES_SENT int(21) NO 0
+BINLOG_BYTES_WRITTEN int(21) NO 0
+ROWS_READ int(21) NO 0
+ROWS_SENT int(21) NO 0
+ROWS_DELETED int(21) NO 0
+ROWS_INSERTED int(21) NO 0
+ROWS_UPDATED int(21) NO 0
+SELECT_COMMANDS int(21) NO 0
+UPDATE_COMMANDS int(21) NO 0
+OTHER_COMMANDS int(21) NO 0
+COMMIT_TRANSACTIONS int(21) NO 0
+ROLLBACK_TRANSACTIONS int(21) NO 0
+DENIED_CONNECTIONS int(21) NO 0
+LOST_CONNECTIONS int(21) NO 0
+ACCESS_DENIED int(21) NO 0
+EMPTY_QUERIES int(21) NO 0
+show columns from information_schema.user_statistics;
+Field Type Null Key Default Extra
+USER varchar(48) NO
+TOTAL_CONNECTIONS int(21) NO 0
+CONCURRENT_CONNECTIONS int(21) NO 0
+CONNECTED_TIME int(21) NO 0
+BUSY_TIME double NO 0
+CPU_TIME double NO 0
+BYTES_RECEIVED int(21) NO 0
+BYTES_SENT int(21) NO 0
+BINLOG_BYTES_WRITTEN int(21) NO 0
+ROWS_READ int(21) NO 0
+ROWS_SENT int(21) NO 0
+ROWS_DELETED int(21) NO 0
+ROWS_INSERTED int(21) NO 0
+ROWS_UPDATED int(21) NO 0
+SELECT_COMMANDS int(21) NO 0
+UPDATE_COMMANDS int(21) NO 0
+OTHER_COMMANDS int(21) NO 0
+COMMIT_TRANSACTIONS int(21) NO 0
+ROLLBACK_TRANSACTIONS int(21) NO 0
+DENIED_CONNECTIONS int(21) NO 0
+LOST_CONNECTIONS int(21) NO 0
+ACCESS_DENIED int(21) NO 0
+EMPTY_QUERIES int(21) NO 0
+show columns from information_schema.index_statistics;
+Field Type Null Key Default Extra
+TABLE_SCHEMA varchar(192) NO
+TABLE_NAME varchar(192) NO
+INDEX_NAME varchar(192) NO
+ROWS_READ int(21) NO 0
+show columns from information_schema.table_statistics;
+Field Type Null Key Default Extra
+TABLE_SCHEMA varchar(192) NO
+TABLE_NAME varchar(192) NO
+ROWS_READ int(21) NO 0
+ROWS_CHANGED int(21) NO 0
+ROWS_CHANGED_X_INDEXES int(21) NO 0
+set @save_general_log=@@global.general_log;
+set @@global.general_log=0;
+set @@global.userstat=1;
+flush status;
+create table t1 (a int, primary key (a), b int default 0) engine=myisam;
+insert into t1 (a) values (1),(2),(3),(4);
+update t1 set b=1;
+update t1 set b=5 where a=2;
+delete from t1 where a=3;
+/* Empty query */
+select * from t1 where a=999;
+a b
+drop table t1;
+create table t1 (a int, primary key (a), b int default 0) engine=innodb;
+begin;
+insert into t1 values(1,1);
+commit;
+begin;
+insert into t1 values(2,2);
+commit;
+begin;
+insert into t1 values(3,3);
+rollback;
+drop table t1;
+select sleep(1);
+sleep(1)
+0
+show status like "rows%";
+Variable_name Value
+Rows_read 6
+Rows_sent 1
+show status like "ha%";
+Variable_name Value
+Handler_commit 10
+Handler_delete 1
+Handler_discover 0
+Handler_prepare 10
+Handler_read_first 0
+Handler_read_key 3
+Handler_read_next 0
+Handler_read_prev 0
+Handler_read_rnd 0
+Handler_read_rnd_next 5
+Handler_rollback 2
+Handler_savepoint 0
+Handler_savepoint_rollback 0
+Handler_update 5
+Handler_write 7
+select variable_value - @global_read_key as "handler_read_key" from information_schema.global_status where variable_name="handler_read_key";
+handler_read_key
+3
+set @@global.userstat=0;
+select * from information_schema.index_statistics;
+TABLE_SCHEMA TABLE_NAME INDEX_NAME ROWS_READ
+test t1 PRIMARY 2
+select * from information_schema.table_statistics;
+TABLE_SCHEMA TABLE_NAME ROWS_READ ROWS_CHANGED ROWS_CHANGED_X_INDEXES
+test t1 6 13 13
+show table_statistics;
+Table_schema Table_name Rows_read Rows_changed Rows_changed_x_#indexes
+test t1 6 13 13
+show index_statistics;
+Table_schema Table_name Index_name Rows_read
+test t1 PRIMARY 2
+select TOTAL_CONNECTIONS, CONCURRENT_CONNECTIONS, ROWS_READ, ROWS_SENT,
+ROWS_DELETED, ROWS_INSERTED, ROWS_UPDATED, SELECT_COMMANDS,
+UPDATE_COMMANDS, OTHER_COMMANDS, COMMIT_TRANSACTIONS,
+ROLLBACK_TRANSACTIONS, DENIED_CONNECTIONS, LOST_CONNECTIONS,
+ACCESS_DENIED, EMPTY_QUERIES from information_schema.client_statistics;
+TOTAL_CONNECTIONS CONCURRENT_CONNECTIONS ROWS_READ ROWS_SENT ROWS_DELETED ROWS_INSERTED ROWS_UPDATED SELECT_COMMANDS UPDATE_COMMANDS OTHER_COMMANDS COMMIT_TRANSACTIONS ROLLBACK_TRANSACTIONS DENIED_CONNECTIONS LOST_CONNECTIONS ACCESS_DENIED EMPTY_QUERIES
+1 0 6 2 1 8 5 3 11 9 10 2 0 0 0 1
+select TOTAL_CONNECTIONS, CONCURRENT_CONNECTIONS, ROWS_READ, ROWS_SENT,
+ROWS_DELETED, ROWS_INSERTED, ROWS_UPDATED, SELECT_COMMANDS,
+UPDATE_COMMANDS, OTHER_COMMANDS, COMMIT_TRANSACTIONS,
+ROLLBACK_TRANSACTIONS, DENIED_CONNECTIONS, LOST_CONNECTIONS,
+ACCESS_DENIED, EMPTY_QUERIES from information_schema.user_statistics;
+TOTAL_CONNECTIONS CONCURRENT_CONNECTIONS ROWS_READ ROWS_SENT ROWS_DELETED ROWS_INSERTED ROWS_UPDATED SELECT_COMMANDS UPDATE_COMMANDS OTHER_COMMANDS COMMIT_TRANSACTIONS ROLLBACK_TRANSACTIONS DENIED_CONNECTIONS LOST_CONNECTIONS ACCESS_DENIED EMPTY_QUERIES
+1 0 6 2 1 8 5 3 11 9 10 2 0 0 0 1
+flush table_statistics;
+flush index_statistics;
+select * from information_schema.index_statistics;
+TABLE_SCHEMA TABLE_NAME INDEX_NAME ROWS_READ
+select * from information_schema.table_statistics;
+TABLE_SCHEMA TABLE_NAME ROWS_READ ROWS_CHANGED ROWS_CHANGED_X_INDEXES
+show status like "%statistics%";
+Variable_name Value
+Com_show_client_statistics 0
+Com_show_index_statistics 1
+Com_show_table_statistics 1
+Com_show_user_statistics 0
+select connected_time <> 0, busy_time <> 0, bytes_received <> 0,
+bytes_sent <> 0, binlog_bytes_written <> 0
+from information_schema.user_statistics;
+connected_time <> 0 busy_time <> 0 bytes_received <> 0 bytes_sent <> 0 binlog_bytes_written <> 0
+1 1 1 1 1
+select connected_time <> 0, busy_time <> 0, bytes_received <> 0,
+bytes_sent <> 0, binlog_bytes_written <> 0
+from information_schema.client_statistics;
+connected_time <> 0 busy_time <> 0 bytes_received <> 0 bytes_sent <> 0 binlog_bytes_written <> 0
+1 1 1 1 1
+set @@global.general_log=@save_general_log;
=== modified file 'mysql-test/t/log_slow.test'
--- a/mysql-test/t/log_slow.test 2009-09-03 14:05:38 +0000
+++ b/mysql-test/t/log_slow.test 2009-10-19 17:14:48 +0000
@@ -36,6 +36,12 @@ select @@log_slow_verbosity;
show fields from mysql.slow_log;
+#
+# Check flush command
+#
+
+flush slow query logs;
+
# Reset used variables
set @@log_slow_filter=default;
=== added file 'mysql-test/t/status_user-master.opt'
--- a/mysql-test/t/status_user-master.opt 1970-01-01 00:00:00 +0000
+++ b/mysql-test/t/status_user-master.opt 2009-10-19 17:14:48 +0000
@@ -0,0 +1 @@
+--force-restart
=== added file 'mysql-test/t/status_user.test'
--- a/mysql-test/t/status_user.test 1970-01-01 00:00:00 +0000
+++ b/mysql-test/t/status_user.test 2009-10-19 17:14:48 +0000
@@ -0,0 +1,97 @@
+#
+# Testing of user status (the userstat variable).
+# Note that this test requires a fresh restart to avoid problems with
+# old status values
+
+-- source include/have_innodb.inc
+-- source include/have_log_bin.inc
+
+--disable_warnings
+DROP TABLE IF EXISTS t1;
+--enable_warnings
+
+select variable_value from information_schema.global_status where variable_name="handler_read_key" into @global_read_key;
+show columns from information_schema.client_statistics;
+show columns from information_schema.user_statistics;
+show columns from information_schema.index_statistics;
+show columns from information_schema.table_statistics;
+
+# Disable logging to get right number of writes into the tables.
+set @save_general_log=@@global.general_log;
+set @@global.general_log=0;
+set @@global.userstat=1;
+flush status;
+
+create table t1 (a int, primary key (a), b int default 0) engine=myisam;
+insert into t1 (a) values (1),(2),(3),(4);
+update t1 set b=1;
+update t1 set b=5 where a=2;
+delete from t1 where a=3;
+
+/* Empty query */
+select * from t1 where a=999;
+
+drop table t1;
+
+#
+# Test the commit and rollback are counted
+#
+
+create table t1 (a int, primary key (a), b int default 0) engine=innodb;
+begin;
+insert into t1 values(1,1);
+commit;
+begin;
+insert into t1 values(2,2);
+commit;
+begin;
+insert into t1 values(3,3);
+rollback;
+drop table t1;
+
+select sleep(1);
+
+show status like "rows%";
+show status like "ha%";
+select variable_value - @global_read_key as "handler_read_key" from information_schema.global_status where variable_name="handler_read_key";
+
+# Ensure that the following commands don't change statistics
+
+set @@global.userstat=0;
+
+#
+# Check that we got right statistics
+#
+select * from information_schema.index_statistics;
+select * from information_schema.table_statistics;
+show table_statistics;
+show index_statistics;
+select TOTAL_CONNECTIONS, CONCURRENT_CONNECTIONS, ROWS_READ, ROWS_SENT,
+ ROWS_DELETED, ROWS_INSERTED, ROWS_UPDATED, SELECT_COMMANDS,
+ UPDATE_COMMANDS, OTHER_COMMANDS, COMMIT_TRANSACTIONS,
+ ROLLBACK_TRANSACTIONS, DENIED_CONNECTIONS, LOST_CONNECTIONS,
+ ACCESS_DENIED, EMPTY_QUERIES from information_schema.client_statistics;
+select TOTAL_CONNECTIONS, CONCURRENT_CONNECTIONS, ROWS_READ, ROWS_SENT,
+ ROWS_DELETED, ROWS_INSERTED, ROWS_UPDATED, SELECT_COMMANDS,
+ UPDATE_COMMANDS, OTHER_COMMANDS, COMMIT_TRANSACTIONS,
+ ROLLBACK_TRANSACTIONS, DENIED_CONNECTIONS, LOST_CONNECTIONS,
+ ACCESS_DENIED, EMPTY_QUERIES from information_schema.user_statistics;
+flush table_statistics;
+flush index_statistics;
+select * from information_schema.index_statistics;
+select * from information_schema.table_statistics;
+show status like "%statistics%";
+
+#
+# Test that some variables are not 0
+#
+
+select connected_time <> 0, busy_time <> 0, bytes_received <> 0,
+ bytes_sent <> 0, binlog_bytes_written <> 0
+ from information_schema.user_statistics;
+select connected_time <> 0, busy_time <> 0, bytes_received <> 0,
+ bytes_sent <> 0, binlog_bytes_written <> 0
+ from information_schema.client_statistics;
+
+# Cleanup
+set @@global.general_log=@save_general_log;
=== modified file 'mysys/my_getsystime.c'
--- a/mysys/my_getsystime.c 2008-04-28 16:24:05 +0000
+++ b/mysys/my_getsystime.c 2009-10-19 17:14:48 +0000
@@ -28,6 +28,10 @@
#ifdef __NETWARE__
#include <nks/time.h>
#endif
+#ifdef HAVE_LINUX_UNISTD_H
+#include <linux/unistd.h>
+#endif
+
ulonglong my_getsystime()
{
@@ -222,3 +226,25 @@ time_t my_time_possible_from_micro(ulong
return (time_t) (microtime / 1000000);
#endif /* defined(__WIN__) */
}
+
+
+/*
+  Return cpu time in 100-nanosecond units (microseconds * 10)
+*/
+
+ulonglong my_getcputime()
+{
+#ifdef HAVE_CLOCK_GETTIME
+ struct timespec tp;
+ if (clock_gettime(CLOCK_THREAD_CPUTIME_ID, &tp))
+ return 0;
+ return (ulonglong)tp.tv_sec*10000000+(ulonglong)tp.tv_nsec/100;
+#elif defined(__NR_clock_gettime)
+ struct timespec tp;
+ if (syscall(__NR_clock_gettime, CLOCK_THREAD_CPUTIME_ID, &tp))
+ return 0;
+ return (ulonglong)tp.tv_sec*10000000+(ulonglong)tp.tv_nsec/100;
+#else
+ return 0;
+#endif /* HAVE_CLOCK_GETTIME */
+}
=== modified file 'sql/authors.h'
--- a/sql/authors.h 2007-03-16 06:39:07 +0000
+++ b/sql/authors.h 2009-10-19 17:14:48 +0000
@@ -34,23 +34,35 @@ struct show_table_authors_st {
*/
struct show_table_authors_st show_table_authors[]= {
+ { "Michael (Monty) Widenius", "Tusby, Finland",
+ "Lead developer and main author" },
+ { "David Axmark", "London, England",
+ "MySQL founder; Small stuff long time ago, Monty ripped it out!" },
+ { "Sergei Golubchik", "Kerpen, Germany",
+ "Full-text search, precision math" },
+ { "Igor Babaev", "Bellevue, USA", "Optimizer, keycache, core work"},
+ { "Sergey Petrunia", "St. Petersburg, Russia", "Optimizer"},
+ { "Oleksandr Byelkin", "Lugansk, Ukraine",
+ "Query Cache (4.0), Subqueries (4.1), Views (5.0)" },
{ "Brian (Krow) Aker", "Seattle, WA, USA",
"Architecture, archive, federated, bunch of little stuff :)" },
- { "Venu Anuganti", "", "Client/server protocol (4.1)" },
- { "David Axmark", "Uppsala, Sweden",
- "Small stuff long time ago, Monty ripped it out!" },
+ { "Kristian Nielsen", "Copenhagen, Denmark",
+    "General build stuff" },
{ "Alexander (Bar) Barkov", "Izhevsk, Russia",
"Unicode and character sets (4.1)" },
+ { "Guilhem Bichot", "Bordeaux, France", "Replication (since 4.0)" },
+ { "Venu Anuganti", "", "Client/server protocol (4.1)" },
+ { "Konstantin Osipov", "Moscow, Russia",
+ "Prepared statements (4.1), Cursors (5.0)" },
+ { "Dmitri Lenev", "Moscow, Russia",
+ "Time zones support (4.1), Triggers (5.0)" },
{ "Omer BarNir", "Sunnyvale, CA, USA",
"Testing (sometimes) and general QA stuff" },
- { "Guilhem Bichot", "Bordeaux, France", "Replication (since 4.0)" },
{ "John Birrell", "", "Emulation of pthread_mutex() for OS/2" },
{ "Andreas F. Bobak", "", "AGGREGATE extension to user-defined functions" },
{ "Alexey Botchkov (Holyfoot)", "Izhevsk, Russia",
"GIS extensions (4.1), embedded server (4.1), precision math (5.0)"},
{ "Reggie Burnett", "Nashville, TN, USA", "Windows development, Connectors" },
- { "Oleksandr Byelkin", "Lugansk, Ukraine",
- "Query Cache (4.0), Subqueries (4.1), Views (5.0)" },
{ "Kent Boortz", "Orebro, Sweden", "Test platform, and general build stuff" },
{ "Tim Bunce", "", "mysqlhotcopy" },
{ "Yves Carlier", "", "mysqlaccess" },
@@ -67,8 +79,6 @@ struct show_table_authors_st show_table_
{ "Yuri Dario", "", "OS/2 port" },
{ "Andrei Elkin", "Espoo, Finland", "Replication" },
{ "Patrick Galbraith", "Sharon, NH", "Federated Engine, mysqlslap" },
- { "Sergei Golubchik", "Kerpen, Germany",
- "Full-text search, precision math" },
{ "Lenz Grimmer", "Hamburg, Germany",
"Production (build and release) engineering" },
{ "Nikolay Grishakin", "Austin, TX, USA", "Testing - Server" },
@@ -83,8 +93,6 @@ struct show_table_authors_st show_table_
{ "Hakan Küçükyılmaz", "Walldorf, Germany", "Testing - Server" },
{ "Greg (Groggy) Lehey", "Uchunga, SA, Australia", "Backup" },
{ "Matthias Leich", "Berlin, Germany", "Testing - Server" },
- { "Dmitri Lenev", "Moscow, Russia",
- "Time zones support (4.1), Triggers (5.0)" },
{ "Arjen Lentz", "Brisbane, Australia",
"Documentation (2001-2004), Dutch error messages, LOG2()" },
{ "Marc Liyanage", "", "Created Mac OS X packages" },
@@ -96,8 +104,6 @@ struct show_table_authors_st show_table_
{ "Jonathan (Jeb) Miller", "Kyle, TX, USA",
"Testing - Cluster, Replication" },
{ "Elliot Murphy", "Cocoa, FL, USA", "Replication and backup" },
- { "Kristian Nielsen", "Copenhagen, Denmark",
- "General build stuff" },
{ "Pekka Nouisiainen", "Stockholm, Sweden",
"NDB Cluster: BLOB support, character set support, ordered indexes" },
{ "Alexander Nozdrin", "Moscow, Russia",
@@ -105,8 +111,6 @@ struct show_table_authors_st show_table_
{ "Per Eric Olsson", "", "Testing of dynamic record format" },
{ "Jonas Oreland", "Stockholm, Sweden",
"NDB Cluster, Online Backup, lots of other things" },
- { "Konstantin Osipov", "Moscow, Russia",
- "Prepared statements (4.1), Cursors (5.0)" },
{ "Alexander (Sasha) Pachev", "Provo, UT, USA",
"Statement-based replication, SHOW CREATE TABLE, mysql-bench" },
{ "Irena Pancirov", "", "Port to Windows with Borland compiler" },
@@ -144,9 +148,9 @@ struct show_table_authors_st show_table_
{ "Sergey Vojtovich", "Izhevsk, Russia", "Plugins infrastructure (5.1)" },
{ "Matt Wagner", "Northfield, MN, USA", "Bug fixing" },
{ "Jim Winstead Jr.", "Los Angeles, CA, USA", "Bug fixing" },
- { "Michael (Monty) Widenius", "Tusby, Finland",
- "Lead developer and main author" },
{ "Peter Zaitsev", "Tacoma, WA, USA",
"SHA1(), AES_ENCRYPT(), AES_DECRYPT(), bug fixing" },
+  {"Mark Callaghan", "Texas, USA", "Statistics patches"},
+ {"Percona", "CA, USA", "Microslow patches"},
{NULL, NULL, NULL}
};
=== modified file 'sql/event_data_objects.cc'
--- a/sql/event_data_objects.cc 2009-09-15 10:46:35 +0000
+++ b/sql/event_data_objects.cc 2009-10-19 17:14:48 +0000
@@ -1366,7 +1366,7 @@ Event_job_data::execute(THD *thd, bool d
DBUG_ENTER("Event_job_data::execute");
- mysql_reset_thd_for_next_command(thd);
+ mysql_reset_thd_for_next_command(thd, 0);
/*
MySQL parser currently assumes that current database is either
=== modified file 'sql/event_db_repository.cc'
--- a/sql/event_db_repository.cc 2009-02-15 10:58:34 +0000
+++ b/sql/event_db_repository.cc 2009-10-19 17:14:48 +0000
@@ -404,17 +404,18 @@ Event_db_repository::index_read_for_db_f
}
key_copy(key_buf, event_table->record[0], key_info, key_len);
- if (!(ret= event_table->file->index_read_map(event_table->record[0], key_buf,
- (key_part_map)1,
- HA_READ_PREFIX)))
+ if (!(ret= event_table->file->ha_index_read_map(event_table->record[0],
+ key_buf,
+ (key_part_map)1,
+ HA_READ_PREFIX)))
{
DBUG_PRINT("info",("Found rows. Let's retrieve them. ret=%d", ret));
do
{
ret= copy_event_to_schema_table(thd, schema_table, event_table);
if (ret == 0)
- ret= event_table->file->index_next_same(event_table->record[0],
- key_buf, key_len);
+ ret= event_table->file->ha_index_next_same(event_table->record[0],
+ key_buf, key_len);
} while (ret == 0);
}
DBUG_PRINT("info", ("Scan finished. ret=%d", ret));
@@ -883,8 +884,9 @@ Event_db_repository::find_named_event(LE
key_copy(key, table->record[0], table->key_info, table->key_info->key_length);
- if (table->file->index_read_idx_map(table->record[0], 0, key, HA_WHOLE_KEY,
- HA_READ_KEY_EXACT))
+ if (table->file->ha_index_read_idx_map(table->record[0], 0, key,
+ HA_WHOLE_KEY,
+ HA_READ_KEY_EXACT))
{
DBUG_PRINT("info", ("Row not found"));
DBUG_RETURN(TRUE);
=== modified file 'sql/filesort.cc'
--- a/sql/filesort.cc 2009-09-03 14:05:38 +0000
+++ b/sql/filesort.cc 2009-10-19 17:14:48 +0000
@@ -577,11 +577,11 @@ static ha_rows find_all_keys(SORTPARAM *
error= my_errno ? my_errno : -1; /* Abort */
break;
}
- error=file->rnd_pos(sort_form->record[0],next_pos);
+ error=file->ha_rnd_pos(sort_form->record[0],next_pos);
}
else
{
- error=file->rnd_next(sort_form->record[0]);
+ error=file->ha_rnd_next(sort_form->record[0]);
if (!flag)
{
my_store_ptr(ref_pos,ref_length,record); // Position to row
=== modified file 'sql/ha_partition.cc'
--- a/sql/ha_partition.cc 2009-09-07 20:50:10 +0000
+++ b/sql/ha_partition.cc 2009-10-19 17:14:48 +0000
@@ -1636,7 +1636,7 @@ int ha_partition::copy_partitions(ulongl
goto error;
while (TRUE)
{
- if ((result= file->rnd_next(m_rec0)))
+ if ((result= file->ha_rnd_next(m_rec0)))
{
if (result == HA_ERR_RECORD_DELETED)
continue; //Probably MyISAM
@@ -3495,7 +3495,7 @@ int ha_partition::rnd_next(uchar *buf)
while (TRUE)
{
- result= file->rnd_next(buf);
+ result= file->ha_rnd_next(buf);
if (!result)
{
m_last_part= part_id;
@@ -4345,8 +4345,8 @@ int ha_partition::handle_unordered_next(
}
else if (is_next_same)
{
- if (!(error= file->index_next_same(buf, m_start_key.key,
- m_start_key.length)))
+ if (!(error= file->ha_index_next_same(buf, m_start_key.key,
+ m_start_key.length)))
{
m_last_part= m_part_spec.start_part;
DBUG_RETURN(0);
@@ -4354,7 +4354,7 @@ int ha_partition::handle_unordered_next(
}
else
{
- if (!(error= file->index_next(buf)))
+ if (!(error= file->ha_index_next(buf)))
{
m_last_part= m_part_spec.start_part;
DBUG_RETURN(0); // Row was in range
@@ -4409,24 +4409,26 @@ int ha_partition::handle_unordered_scan_
break;
case partition_index_read:
DBUG_PRINT("info", ("index_read on partition %d", i));
- error= file->index_read_map(buf, m_start_key.key,
- m_start_key.keypart_map,
- m_start_key.flag);
+ error= file->ha_index_read_map(buf, m_start_key.key,
+ m_start_key.keypart_map,
+ m_start_key.flag);
break;
case partition_index_first:
DBUG_PRINT("info", ("index_first on partition %d", i));
- /* MyISAM engine can fail if we call index_first() when indexes disabled */
- /* that happens if the table is empty. */
- /* Here we use file->stats.records instead of file->records() because */
- /* file->records() is supposed to return an EXACT count, and it can be */
- /* possibly slow. We don't need an exact number, an approximate one- from*/
- /* the last ::info() call - is sufficient. */
+ /*
+ MyISAM engine can fail if we call index_first() when indexes disabled
+ that happens if the table is empty.
+ Here we use file->stats.records instead of file->records() because
+ file->records() is supposed to return an EXACT count, and it can be
+ possibly slow. We don't need an exact number, an approximate one- from
+ the last ::info() call - is sufficient.
+ */
if (file->stats.records == 0)
{
error= HA_ERR_END_OF_FILE;
break;
}
- error= file->index_first(buf);
+ error= file->ha_index_first(buf);
break;
case partition_index_first_unordered:
/*
@@ -4507,45 +4509,49 @@ int ha_partition::handle_ordered_index_s
switch (m_index_scan_type) {
case partition_index_read:
- error= file->index_read_map(rec_buf_ptr,
- m_start_key.key,
- m_start_key.keypart_map,
- m_start_key.flag);
+ error= file->ha_index_read_map(rec_buf_ptr,
+ m_start_key.key,
+ m_start_key.keypart_map,
+ m_start_key.flag);
break;
case partition_index_first:
- /* MyISAM engine can fail if we call index_first() when indexes disabled */
- /* that happens if the table is empty. */
- /* Here we use file->stats.records instead of file->records() because */
- /* file->records() is supposed to return an EXACT count, and it can be */
- /* possibly slow. We don't need an exact number, an approximate one- from*/
- /* the last ::info() call - is sufficient. */
+ /*
+ MyISAM engine can fail if we call index_first() when indexes disabled
+ that happens if the table is empty.
+ Here we use file->stats.records instead of file->records() because
+ file->records() is supposed to return an EXACT count, and it can be
+ possibly slow. We don't need an exact number, an approximate one- from
+ the last ::info() call - is sufficient.
+ */
if (file->stats.records == 0)
{
error= HA_ERR_END_OF_FILE;
break;
}
- error= file->index_first(rec_buf_ptr);
+ error= file->ha_index_first(rec_buf_ptr);
reverse_order= FALSE;
break;
case partition_index_last:
- /* MyISAM engine can fail if we call index_last() when indexes disabled */
- /* that happens if the table is empty. */
- /* Here we use file->stats.records instead of file->records() because */
- /* file->records() is supposed to return an EXACT count, and it can be */
- /* possibly slow. We don't need an exact number, an approximate one- from*/
- /* the last ::info() call - is sufficient. */
+ /*
+ MyISAM engine can fail if we call index_last() when indexes disabled
+ that happens if the table is empty.
+ Here we use file->stats.records instead of file->records() because
+ file->records() is supposed to return an EXACT count, and it can be
+ possibly slow. We don't need an exact number, an approximate one- from
+ the last ::info() call - is sufficient.
+ */
if (file->stats.records == 0)
{
error= HA_ERR_END_OF_FILE;
break;
}
- error= file->index_last(rec_buf_ptr);
+ error= file->ha_index_last(rec_buf_ptr);
reverse_order= TRUE;
break;
case partition_index_read_last:
- error= file->index_read_last_map(rec_buf_ptr,
- m_start_key.key,
- m_start_key.keypart_map);
+ error= file->ha_index_read_last_map(rec_buf_ptr,
+ m_start_key.key,
+ m_start_key.keypart_map);
reverse_order= TRUE;
break;
case partition_read_range:
@@ -4647,10 +4653,10 @@ int ha_partition::handle_ordered_next(uc
memcpy(rec_buf(part_id), table->record[0], m_rec_length);
}
else if (!is_next_same)
- error= file->index_next(rec_buf(part_id));
+ error= file->ha_index_next(rec_buf(part_id));
else
- error= file->index_next_same(rec_buf(part_id), m_start_key.key,
- m_start_key.length);
+ error= file->ha_index_next_same(rec_buf(part_id), m_start_key.key,
+ m_start_key.length);
if (error)
{
if (error == HA_ERR_END_OF_FILE)
@@ -4695,7 +4701,7 @@ int ha_partition::handle_ordered_prev(uc
handler *file= m_file[part_id];
DBUG_ENTER("ha_partition::handle_ordered_prev");
- if ((error= file->index_prev(rec_buf(part_id))))
+ if ((error= file->ha_index_prev(rec_buf(part_id))))
{
if (error == HA_ERR_END_OF_FILE)
{
=== modified file 'sql/handler.cc'
--- a/sql/handler.cc 2009-09-09 21:06:57 +0000
+++ b/sql/handler.cc 2009-10-19 17:14:48 +0000
@@ -1236,6 +1236,7 @@ int ha_commit_one_phase(THD *thd, bool a
my_error(ER_ERROR_DURING_COMMIT, MYF(0), err);
error=1;
}
+ /* Should this be done only if is_real_trans is set ? */
status_var_increment(thd->status_var.ha_commit_count);
ha_info_next= ha_info->next();
ha_info->reset(); /* keep it conveniently zero-filled */
@@ -2092,6 +2093,8 @@ int handler::ha_open(TABLE *table_arg, c
dup_ref=ref+ALIGN_SIZE(ref_length);
cached_table_flags= table_flags();
}
+ rows_read= rows_changed= 0;
+ memset(index_rows_read, 0, sizeof(index_rows_read));
DBUG_RETURN(error);
}
@@ -2513,9 +2516,10 @@ void handler::get_auto_increment(ulonglo
key_copy(key, table->record[0],
table->key_info + table->s->next_number_index,
table->s->next_number_key_offset);
- error= index_read_map(table->record[1], key,
- make_prev_keypart_map(table->s->next_number_keypart),
- HA_READ_PREFIX_LAST);
+ error= ha_index_read_map(table->record[1], key,
+ make_prev_keypart_map(table->s->
+ next_number_keypart),
+ HA_READ_PREFIX_LAST);
/*
MySQL needs to call us for next row: assume we are inserting ("a",null)
here, we return 3, and next this statement will want to insert
@@ -3549,6 +3553,122 @@ void handler::get_dynamic_partition_info
}
+/*
+ Updates the global table stats with the TABLE this handler represents
+*/
+
+void handler::update_global_table_stats()
+{
+ TABLE_STATS * table_stats;
+
+ status_var_add(table->in_use->status_var.rows_read, rows_read);
+
+ if (!table->in_use->userstat_running)
+ {
+ rows_read= rows_changed= 0;
+ return;
+ }
+
+ if (rows_read + rows_changed == 0)
+ return; // Nothing to update.
+
+ DBUG_ASSERT(table->s && table->s->table_cache_key.str);
+
+ pthread_mutex_lock(&LOCK_global_table_stats);
+ /* Gets the global table stats, creating one if necessary. */
+ if (!(table_stats= (TABLE_STATS*)
+ hash_search(&global_table_stats,
+ (uchar*) table->s->table_cache_key.str,
+ table->s->table_cache_key.length)))
+ {
+ if (!(table_stats = ((TABLE_STATS*)
+ my_malloc(sizeof(TABLE_STATS),
+ MYF(MY_WME | MY_ZEROFILL)))))
+ {
+ /* Out of memory error already given */
+ goto end;
+ }
+ memcpy(table_stats->table, table->s->table_cache_key.str,
+ table->s->table_cache_key.length);
+ table_stats->table_name_length= table->s->table_cache_key.length;
+ table_stats->engine_type= ht->db_type;
+ /* No need to set variables to 0, as we use MY_ZEROFILL above */
+
+ if (my_hash_insert(&global_table_stats, (uchar*) table_stats))
+ {
+ /* Out of memory error is already given */
+ my_free(table_stats, 0);
+ goto end;
+ }
+ }
+ // Updates the global table stats.
+ table_stats->rows_read+= rows_read;
+ table_stats->rows_changed+= rows_changed;
+ table_stats->rows_changed_x_indexes+= (rows_changed *
+ (table->s->keys ? table->s->keys :
+ 1));
+ rows_read= rows_changed= 0;
+end:
+ pthread_mutex_unlock(&LOCK_global_table_stats);
+}
+
+
+/*
+ Updates the global index stats with this handler's accumulated index reads.
+*/
+
+void handler::update_global_index_stats()
+{
+ DBUG_ASSERT(table->s);
+
+ if (!table->in_use->userstat_running)
+ {
+ /* Reset all index read values */
+ bzero(index_rows_read, sizeof(index_rows_read[0]) * table->s->keys);
+ return;
+ }
+
+ for (uint index = 0; index < table->s->keys; index++)
+ {
+ if (index_rows_read[index])
+ {
+ INDEX_STATS* index_stats;
+ uint key_length;
+ KEY *key_info = &table->key_info[index]; // Rows were read using this
+
+ DBUG_ASSERT(key_info->cache_name);
+ if (!key_info->cache_name)
+ continue;
+ key_length= table->s->table_cache_key.length + key_info->name_length + 1;
+ pthread_mutex_lock(&LOCK_global_index_stats);
+ // Gets the global index stats, creating one if necessary.
+ if (!(index_stats= (INDEX_STATS*) hash_search(&global_index_stats,
+ key_info->cache_name,
+ key_length)))
+ {
+ if (!(index_stats = ((INDEX_STATS*)
+ my_malloc(sizeof(INDEX_STATS),
+ MYF(MY_WME | MY_ZEROFILL)))))
+ goto end; // Error is already given
+
+ memcpy(index_stats->index, key_info->cache_name, key_length);
+ index_stats->index_name_length= key_length;
+ if (my_hash_insert(&global_index_stats, (uchar*) index_stats))
+ {
+ my_free(index_stats, 0);
+ goto end;
+ }
+ }
+ /* Updates the global index stats. */
+ index_stats->rows_read+= index_rows_read[index];
+ index_rows_read[index]= 0;
+end:
+ pthread_mutex_unlock(&LOCK_global_index_stats);
+ }
+ }
+}
+
+
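The two functions above follow the same find-or-create-and-accumulate pattern: take the corresponding global mutex, look the object up in the global hash, allocate a zero-filled entry on a miss, add the per-handler counters to it and reset them. The fragment below is a minimal sketch of that pattern using only standard C++ containers instead of the server's HASH, my_malloc() and pthread mutexes; the names SimpleTableStats, g_table_stats and flush_local_counters are illustrative and not part of the patch.

  #include <map>
  #include <mutex>
  #include <string>

  struct SimpleTableStats                 /* stand-in for TABLE_STATS */
  {
    unsigned long long rows_read= 0;
    unsigned long long rows_changed= 0;
  };

  static std::mutex stats_mutex;          /* plays the role of LOCK_global_table_stats */
  static std::map<std::string, SimpleTableStats> g_table_stats;

  void flush_local_counters(const std::string &table_key,
                            unsigned long long rows_read,
                            unsigned long long rows_changed)
  {
    if (rows_read + rows_changed == 0)
      return;                             /* nothing to update */
    std::lock_guard<std::mutex> guard(stats_mutex);
    /* operator[] creates a zero-initialized entry on first use */
    SimpleTableStats &stats= g_table_stats[table_key];
    stats.rows_read+= rows_read;
    stats.rows_changed+= rows_changed;
  }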
/****************************************************************************
** Some general functions that isn't in the handler class
****************************************************************************/
@@ -4207,17 +4327,16 @@ int handler::read_range_first(const key_
range_key_part= table->key_info[active_index].key_part;
if (!start_key) // Read first record
- result= index_first(table->record[0]);
+ result= ha_index_first(table->record[0]);
else
- result= index_read_map(table->record[0],
- start_key->key,
- start_key->keypart_map,
- start_key->flag);
+ result= ha_index_read_map(table->record[0],
+ start_key->key,
+ start_key->keypart_map,
+ start_key->flag);
if (result)
DBUG_RETURN((result == HA_ERR_KEY_NOT_FOUND)
? HA_ERR_END_OF_FILE
: result);
-
DBUG_RETURN (compare_key(end_range) <= 0 ? 0 : HA_ERR_END_OF_FILE);
}
@@ -4243,11 +4362,11 @@ int handler::read_range_next()
if (eq_range)
{
/* We trust that index_next_same always gives a row in range */
- DBUG_RETURN(index_next_same(table->record[0],
- end_range->key,
- end_range->length));
+ DBUG_RETURN(ha_index_next_same(table->record[0],
+ end_range->key,
+ end_range->length));
}
- result= index_next(table->record[0]);
+ result= ha_index_next(table->record[0]);
if (result)
DBUG_RETURN(result);
DBUG_RETURN(compare_key(end_range) <= 0 ? 0 : HA_ERR_END_OF_FILE);
@@ -4629,6 +4748,7 @@ int handler::ha_write_row(uchar *buf)
if (unlikely(error= write_row(buf)))
DBUG_RETURN(error);
+ rows_changed++;
if (unlikely(error= binlog_log_row(table, 0, buf, log_func)))
DBUG_RETURN(error); /* purecov: inspected */
DBUG_RETURN(0);
@@ -4650,6 +4770,7 @@ int handler::ha_update_row(const uchar *
if (unlikely(error= update_row(old_data, new_data)))
return error;
+ rows_changed++;
if (unlikely(error= binlog_log_row(table, old_data, new_data, log_func)))
return error;
return 0;
@@ -4664,6 +4785,7 @@ int handler::ha_delete_row(const uchar *
if (unlikely(error= delete_row(buf)))
return error;
+ rows_changed++;
if (unlikely(error= binlog_log_row(table, buf, 0, log_func)))
return error;
return 0;
=== modified file 'sql/handler.h'
--- a/sql/handler.h 2009-09-07 20:50:10 +0000
+++ b/sql/handler.h 2009-10-19 17:14:48 +0000
@@ -30,6 +30,10 @@
#define USING_TRANSACTIONS
+#if MAX_KEY > 128
+#error MAX_KEY is too large. Values up to 128 are supported.
+#endif
+
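The compile-time guard above is presumably there because the statistics counters added to class handler later in this file size an array by MAX_KEY (index_rows_read[MAX_KEY+1]), so an unexpectedly large MAX_KEY would quietly grow every handler object. On a compiler with C++11 support the same check could be written as a static_assert; the line below is only an equivalent illustration, not part of the patch.

  static_assert(MAX_KEY <= 128,
                "MAX_KEY is too large. Values up to 128 are supported.");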
// the following is for checking tables
#define HA_ADMIN_ALREADY_DONE 1
@@ -601,8 +605,9 @@ struct handlerton
SHOW_COMP_OPTION state;
/*
- Historical number used for frm file to determine the correct storage engine.
- This is going away and new engines will just use "name" for this.
+ Historical number used for frm file to determine the correct
+ storage engine. This is going away and new engines will just use
+ "name" for this.
*/
enum legacy_db_type db_type;
/*
@@ -1138,6 +1143,12 @@ public:
Interval returned by get_auto_increment() and being consumed by the
inserter.
*/
+ /* Statistics variables */
+ ulonglong rows_read;
+ ulonglong rows_changed;
+ /* One element bigger than needed so we do not have to test whether key == MAX_KEY */
+ ulonglong index_rows_read[MAX_KEY+1];
+
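The spare element referred to in the comment above means that code indexing this array with a key value that may equal MAX_KEY needs no guard first: the extra slot simply absorbs those increments and is never reported. A tiny standalone sketch of the same sentinel-slot idea (all names here are illustrative):

  enum { DEMO_MAX_KEY= 4 };
  static unsigned long long demo_rows_read[DEMO_MAX_KEY + 1]= {0};

  void demo_count_read(unsigned int key)    /* key may legitimately be DEMO_MAX_KEY */
  {
    demo_rows_read[key]++;                  /* no "if (key == DEMO_MAX_KEY)" test needed */
  }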
Discrete_interval auto_inc_interval_for_cur_row;
/**
Number of reserved auto-increment intervals. Serves as a heuristic
@@ -1156,7 +1167,10 @@ public:
locked(FALSE), implicit_emptied(0),
pushed_cond(0), next_insert_id(0), insert_id_for_cur_row(0),
auto_inc_intervals_count(0)
- {}
+ {
+ reset_statistics();
+ }
+
virtual ~handler(void)
{
DBUG_ASSERT(locked == FALSE);
@@ -1278,10 +1292,16 @@ public:
virtual void print_error(int error, myf errflag);
virtual bool get_error_message(int error, String *buf);
uint get_dup_key(int error);
+ void reset_statistics()
+ {
+ rows_read= rows_changed= 0;
+ bzero(index_rows_read, sizeof(index_rows_read));
+ }
virtual void change_table_ptr(TABLE *table_arg, TABLE_SHARE *share)
{
table= table_arg;
table_share= share;
+ reset_statistics();
}
virtual double scan_time()
{ return ulonglong2double(stats.data_file_length) / IO_SIZE + 2; }
@@ -1390,22 +1410,23 @@ public:
}
/**
@brief
- Positions an index cursor to the index specified in the handle. Fetches the
- row if available. If the key value is null, begin at the first key of the
- index.
+ Positions an index cursor to the index specified in the
+ handle. Fetches the row if available. If the key value is null,
+ begin at the first key of the index.
*/
+protected:
virtual int index_read_map(uchar * buf, const uchar * key,
key_part_map keypart_map,
enum ha_rkey_function find_flag)
{
uint key_len= calculate_key_len(table, active_index, key, keypart_map);
- return index_read(buf, key, key_len, find_flag);
+ return index_read(buf, key, key_len, find_flag);
}
/**
@brief
- Positions an index cursor to the index specified in the handle. Fetches the
- row if available. If the key value is null, begin at the first key of the
- index.
+ Positions an index cursor to the index specified in the
+ handle. Fetches the row if available. If the key value is null,
+ begin at the first key of the index.
*/
virtual int index_read_idx_map(uchar * buf, uint index, const uchar * key,
key_part_map keypart_map,
@@ -1430,6 +1451,79 @@ public:
uint key_len= calculate_key_len(table, active_index, key, keypart_map);
return index_read_last(buf, key, key_len);
}
+ inline void update_index_statistics()
+ {
+ index_rows_read[active_index]++;
+ rows_read++;
+ }
+public:
+
+ /* Functions similar to the ones above, but with statistics counting */
+ inline int ha_index_read_map(uchar * buf, const uchar * key,
+ key_part_map keypart_map,
+ enum ha_rkey_function find_flag)
+ {
+ int error= index_read_map(buf, key, keypart_map, find_flag);
+ if (!error)
+ update_index_statistics();
+ return error;
+ }
+ inline int ha_index_read_idx_map(uchar * buf, uint index, const uchar * key,
+ key_part_map keypart_map,
+ enum ha_rkey_function find_flag)
+ {
+ int error= index_read_idx_map(buf, index, key, keypart_map, find_flag);
+ if (!error)
+ {
+ rows_read++;
+ index_rows_read[index]++;
+ }
+ return error;
+ }
+ inline int ha_index_next(uchar * buf)
+ {
+ int error= index_next(buf);
+ if (!error)
+ update_index_statistics();
+ return error;
+ }
+ inline int ha_index_prev(uchar * buf)
+ {
+ int error= index_prev(buf);
+ if (!error)
+ update_index_statistics();
+ return error;
+ }
+ inline int ha_index_first(uchar * buf)
+ {
+ int error= index_first(buf);
+ if (!error)
+ update_index_statistics();
+ return error;
+ }
+ inline int ha_index_last(uchar * buf)
+ {
+ int error= index_last(buf);
+ if (!error)
+ update_index_statistics();
+ return error;
+ }
+ inline int ha_index_next_same(uchar *buf, const uchar *key, uint keylen)
+ {
+ int error= index_next_same(buf, key, keylen);
+ if (!error)
+ update_index_statistics();
+ return error;
+ }
+ inline int ha_index_read_last_map(uchar * buf, const uchar * key,
+ key_part_map keypart_map)
+ {
+ int error= index_read_last_map(buf, key, keypart_map);
+ if (!error)
+ update_index_statistics();
+ return error;
+ }
+
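With index_read_map() and its relatives moved under protected:, code outside the handler hierarchy can no longer bypass the counting logic; callers have to go through the ha_* wrappers, which update the counters only when the underlying call succeeds. A minimal sketch of the intended call-site shape, assuming the declarations from this header (the function name is illustrative):

  int fetch_first_row(handler *file, uchar *buf)
  {
    int error= file->ha_index_first(buf);   /* counts the row on success */
    if (error)
      return error;
    /* ... use the row now in buf ... */
    return 0;
  }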
virtual int read_multi_range_first(KEY_MULTI_RANGE **found_range_p,
KEY_MULTI_RANGE *ranges, uint range_count,
bool sorted, HANDLER_BUFFER *buffer);
@@ -1443,6 +1537,7 @@ public:
void ft_end() { ft_handler=NULL; }
virtual FT_INFO *ft_init_ext(uint flags, uint inx,String *key)
{ return NULL; }
+private:
virtual int ft_read(uchar *buf) { return HA_ERR_WRONG_COMMAND; }
virtual int rnd_next(uchar *buf)=0;
virtual int rnd_pos(uchar * buf, uchar *pos)=0;
@@ -1453,11 +1548,50 @@ public:
handlers for random position.
*/
virtual int rnd_pos_by_record(uchar *record)
- {
- position(record);
- return rnd_pos(record, ref);
- }
+ {
+ position(record);
+ return rnd_pos(record, ref);
+ }
virtual int read_first_row(uchar *buf, uint primary_key);
+public:
+
+ /* Same as above, but with statistics */
+ inline int ha_ft_read(uchar *buf)
+ {
+ int error= ft_read(buf);
+ if (!error)
+ rows_read++;
+ return error;
+ }
+ inline int ha_rnd_next(uchar *buf)
+ {
+ int error= rnd_next(buf);
+ if (!error)
+ rows_read++;
+ return error;
+ }
+ inline int ha_rnd_pos(uchar *buf, uchar *pos)
+ {
+ int error= rnd_pos(buf, pos);
+ if (!error)
+ rows_read++;
+ return error;
+ }
+ inline int ha_rnd_pos_by_record(uchar *buf)
+ {
+ int error= rnd_pos_by_record(buf);
+ if (!error)
+ rows_read++;
+ return error;
+ }
+ inline int ha_read_first_row(uchar *buf, uint primary_key)
+ {
+ int error= read_first_row(buf, primary_key);
+ if (!error)
+ rows_read++;
+ return error;
+ }
+
/**
The following 3 function is only needed for tables that may be
internal temporary tables during joins.
@@ -1626,6 +1760,9 @@ public:
virtual bool is_crashed() const { return 0; }
virtual bool auto_repair() const { return 0; }
+ void update_global_table_stats();
+ void update_global_index_stats();
+
#define CHF_CREATE_FLAG 0
#define CHF_DELETE_FLAG 1
#define CHF_RENAME_FLAG 2
@@ -1944,6 +2081,7 @@ private:
{ return HA_ERR_WRONG_COMMAND; }
virtual int rename_partitions(const char *path)
{ return HA_ERR_WRONG_COMMAND; }
+ friend class ha_partition;
};
=== modified file 'sql/item_subselect.cc'
--- a/sql/item_subselect.cc 2009-09-15 10:46:35 +0000
+++ b/sql/item_subselect.cc 2009-10-19 17:14:48 +0000
@@ -2048,7 +2048,7 @@ int subselect_uniquesubquery_engine::sca
table->null_row= 0;
for (;;)
{
- error=table->file->rnd_next(table->record[0]);
+ error=table->file->ha_rnd_next(table->record[0]);
if (error && error != HA_ERR_END_OF_FILE)
{
error= report_error(table, error);
@@ -2222,10 +2222,11 @@ int subselect_uniquesubquery_engine::exe
if (!table->file->inited)
table->file->ha_index_init(tab->ref.key, 0);
- error= table->file->index_read_map(table->record[0],
- tab->ref.key_buff,
- make_prev_keypart_map(tab->ref.key_parts),
- HA_READ_KEY_EXACT);
+ error= table->file->ha_index_read_map(table->record[0],
+ tab->ref.key_buff,
+ make_prev_keypart_map(tab->
+ ref.key_parts),
+ HA_READ_KEY_EXACT);
if (error &&
error != HA_ERR_KEY_NOT_FOUND && error != HA_ERR_END_OF_FILE)
error= report_error(table, error);
@@ -2343,10 +2344,11 @@ int subselect_indexsubquery_engine::exec
if (!table->file->inited)
table->file->ha_index_init(tab->ref.key, 1);
- error= table->file->index_read_map(table->record[0],
- tab->ref.key_buff,
- make_prev_keypart_map(tab->ref.key_parts),
- HA_READ_KEY_EXACT);
+ error= table->file->ha_index_read_map(table->record[0],
+ tab->ref.key_buff,
+ make_prev_keypart_map(tab->
+ ref.key_parts),
+ HA_READ_KEY_EXACT);
if (error &&
error != HA_ERR_KEY_NOT_FOUND && error != HA_ERR_END_OF_FILE)
error= report_error(table, error);
@@ -2367,9 +2369,9 @@ int subselect_indexsubquery_engine::exec
((Item_in_subselect *) item)->value= 1;
break;
}
- error= table->file->index_next_same(table->record[0],
- tab->ref.key_buff,
- tab->ref.key_length);
+ error= table->file->ha_index_next_same(table->record[0],
+ tab->ref.key_buff,
+ tab->ref.key_length);
if (error && error != HA_ERR_END_OF_FILE)
{
error= report_error(table, error);
=== modified file 'sql/lex.h'
--- a/sql/lex.h 2009-09-07 20:50:10 +0000
+++ b/sql/lex.h 2009-10-19 17:14:48 +0000
@@ -106,6 +106,7 @@ static SYMBOL symbols[] = {
{ "CHECKSUM", SYM(CHECKSUM_SYM)},
{ "CIPHER", SYM(CIPHER_SYM)},
{ "CLIENT", SYM(CLIENT_SYM)},
+ { "CLIENT_STATISTICS", SYM(CLIENT_STATS_SYM)},
{ "CLOSE", SYM(CLOSE_SYM)},
{ "COALESCE", SYM(COALESCE)},
{ "CODE", SYM(CODE_SYM)},
@@ -245,6 +246,7 @@ static SYMBOL symbols[] = {
{ "IN", SYM(IN_SYM)},
{ "INDEX", SYM(INDEX_SYM)},
{ "INDEXES", SYM(INDEXES)},
+ { "INDEX_STATISTICS", SYM(INDEX_STATS_SYM)},
{ "INFILE", SYM(INFILE)},
{ "INITIAL_SIZE", SYM(INITIAL_SIZE_SYM)},
{ "INNER", SYM(INNER_SYM)},
@@ -478,6 +480,7 @@ static SYMBOL symbols[] = {
{ "SIGNED", SYM(SIGNED_SYM)},
{ "SIMPLE", SYM(SIMPLE_SYM)},
{ "SLAVE", SYM(SLAVE)},
+ { "SLOW", SYM(SLOW_SYM)},
{ "SNAPSHOT", SYM(SNAPSHOT_SYM)},
{ "SMALLINT", SYM(SMALLINT)},
{ "SOCKET", SYM(SOCKET_SYM)},
@@ -526,6 +529,7 @@ static SYMBOL symbols[] = {
{ "TABLE", SYM(TABLE_SYM)},
{ "TABLES", SYM(TABLES)},
{ "TABLESPACE", SYM(TABLESPACE)},
+ { "TABLE_STATISTICS", SYM(TABLE_STATS_SYM)},
{ "TABLE_CHECKSUM", SYM(TABLE_CHECKSUM_SYM)},
{ "TEMPORARY", SYM(TEMPORARY)},
{ "TEMPTABLE", SYM(TEMPTABLE_SYM)},
@@ -569,6 +573,7 @@ static SYMBOL symbols[] = {
{ "USE", SYM(USE_SYM)},
{ "USER", SYM(USER)},
{ "USER_RESOURCES", SYM(RESOURCES)},
+ { "USER_STATISTICS", SYM(USER_STATS_SYM)},
{ "USE_FRM", SYM(USE_FRM)},
{ "USING", SYM(USING)},
{ "UTC_DATE", SYM(UTC_DATE_SYM)},
=== modified file 'sql/log.cc'
--- a/sql/log.cc 2009-09-15 10:46:35 +0000
+++ b/sql/log.cc 2009-10-19 17:14:48 +0000
@@ -821,6 +821,13 @@ void Log_to_file_event_handler::flush()
mysql_slow_log.reopen_file();
}
+void Log_to_file_event_handler::flush_slow_log()
+{
+ /* reopen slow log file */
+ if (opt_slow_log)
+ mysql_slow_log.reopen_file();
+}
+
/*
Log error with all enabled log event handlers
@@ -916,8 +923,6 @@ void LOGGER::init_log_tables()
bool LOGGER::flush_logs(THD *thd)
{
- int rc= 0;
-
/*
Now we lock logger, as nobody should be able to use logging routines while
log tables are closed
@@ -929,7 +934,24 @@ bool LOGGER::flush_logs(THD *thd)
/* end of log flush */
logger.unlock();
- return rc;
+ return 0;
+}
+
+
+bool LOGGER::flush_slow_log(THD *thd)
+{
+ /*
+ Now we lock logger, as nobody should be able to use logging routines while
+ log tables are closed
+ */
+ logger.lock_exclusive();
+
+ /* Reopen the slow query log file */
+ file_log_handler->flush_slow_log();
+
+ /* end of log flush */
+ logger.unlock();
+ return 0;
}
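LOGGER::flush_slow_log() gives the server a way to rotate only the slow query log: it takes the exclusive logger lock, asks the file-based handler to reopen the slow log, and releases the lock, leaving the general log untouched. The dispatch from the SQL layer lives in sql_parse.cc and sql_yacc.yy, which are outside this part of the diff; the fragment below only illustrates how a caller would invoke it, and rotate_slow_log is an invented name.

  bool rotate_slow_log(THD *thd)
  {
    /* Returns 0 (false) on success, matching LOGGER::flush_logs() */
    return logger.flush_slow_log(thd);
  }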
@@ -4070,6 +4092,7 @@ bool MYSQL_BIN_LOG::write(Log_event *eve
if (likely(is_open()))
{
IO_CACHE *file= &log_file;
+ my_off_t my_org_b_tell;
#ifdef HAVE_REPLICATION
/*
In the future we need to add to the following if tests like
@@ -4077,7 +4100,7 @@ bool MYSQL_BIN_LOG::write(Log_event *eve
binlog_[wild_]{do|ignore}_table?" (WL#1049)"
*/
const char *local_db= event_info->get_db();
- if ((thd && !(thd->options & OPTION_BIN_LOG)) ||
+ if ((!(thd->options & OPTION_BIN_LOG)) ||
(!binlog_filter->db_ok(local_db)))
{
VOID(pthread_mutex_unlock(&LOCK_log));
@@ -4085,6 +4108,8 @@ bool MYSQL_BIN_LOG::write(Log_event *eve
}
#endif /* HAVE_REPLICATION */
+ my_org_b_tell= my_b_tell(file);
+
#if defined(USING_TRANSACTIONS)
/*
Should we write to the binlog cache or to the binlog on disk?
@@ -4095,7 +4120,7 @@ bool MYSQL_BIN_LOG::write(Log_event *eve
trans/non-trans table types the best possible in binlogging)
- or if the event asks for it (cache_stmt == TRUE).
*/
- if (opt_using_transactions && thd)
+ if (opt_using_transactions)
{
if (thd->binlog_setup_trx_data())
goto err;
@@ -4136,7 +4161,6 @@ bool MYSQL_BIN_LOG::write(Log_event *eve
If row-based binlogging, Insert_id, Rand and other kind of "setting
context" events are not needed.
*/
- if (thd)
{
if (!thd->current_stmt_binlog_row_based)
{
@@ -4183,16 +4207,16 @@ bool MYSQL_BIN_LOG::write(Log_event *eve
}
}
- /*
- Write the SQL command
- */
-
+ /* Write the SQL command */
if (event_info->write(file) ||
DBUG_EVALUATE_IF("injecting_fault_writing", 1, 0))
goto err;
if (file == &log_file) // we are writing to the real log (disk)
{
+ ulonglong data_written= (my_b_tell(file) - my_org_b_tell);
+ status_var_add(thd->status_var.binlog_bytes_written, data_written);
+
if (flush_and_sync())
goto err;
signal_update();
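The Binlog_bytes_written accounting in this function is a position-delta measurement: my_b_tell() is sampled into my_org_b_tell before the event is written, and once the event has gone to the real log file the difference is added to the session counter via status_var_add(). The standalone sketch below shows the same measure-by-position idea with a plain std::ofstream; it is an analogue for illustration, not the server's IO_CACHE API.

  #include <fstream>
  #include <string>

  unsigned long long append_and_count(std::ofstream &log, const std::string &event)
  {
    std::ofstream::pos_type before= log.tellp();   /* position before the write */
    log << event;
    log.flush();
    return (unsigned long long) (log.tellp() - before);
  }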
@@ -4318,6 +4342,7 @@ uint MYSQL_BIN_LOG::next_file_id()
SYNOPSIS
write_cache()
+ thd Current thread
cache Cache to write to the binary log
lock_log True if the LOCK_log mutex should be aquired, false otherwise
sync_log True if the log should be flushed and sync:ed
@@ -4327,7 +4352,8 @@ uint MYSQL_BIN_LOG::next_file_id()
be reset as a READ_CACHE to be able to read the contents from it.
*/
-int MYSQL_BIN_LOG::write_cache(IO_CACHE *cache, bool lock_log, bool sync_log)
+int MYSQL_BIN_LOG::write_cache(THD *thd, IO_CACHE *cache, bool lock_log,
+ bool sync_log)
{
Mutex_sentry sentry(lock_log ? &LOCK_log : NULL);
@@ -4375,6 +4401,7 @@ int MYSQL_BIN_LOG::write_cache(IO_CACHE
/* write the first half of the split header */
if (my_b_write(&log_file, header, carry))
return ER_ERROR_ON_WRITE;
+ status_var_add(thd->status_var.binlog_bytes_written, carry);
/*
copy fixed second half of header to cache so the correct
@@ -4443,6 +4470,8 @@ int MYSQL_BIN_LOG::write_cache(IO_CACHE
/* Write data to the binary log file */
if (my_b_write(&log_file, cache->read_pos, length))
return ER_ERROR_ON_WRITE;
+ status_var_add(thd->status_var.binlog_bytes_written, length);
+
cache->read_pos=cache->read_end; // Mark buffer used up
} while ((length= my_b_fill(cache)));
@@ -4494,6 +4523,8 @@ bool MYSQL_BIN_LOG::write_incident(THD *
if (lock)
pthread_mutex_lock(&LOCK_log);
ev.write(&log_file);
+ status_var_add(thd->status_var.binlog_bytes_written, ev.data_written);
+
if (lock)
{
if (!error && !(error= flush_and_sync()))
@@ -4565,21 +4596,28 @@ bool MYSQL_BIN_LOG::write(THD *thd, IO_C
*/
if (qinfo.write(&log_file))
goto err;
+ status_var_add(thd->status_var.binlog_bytes_written, qinfo.data_written);
DBUG_EXECUTE_IF("crash_before_writing_xid",
{
- if ((write_error= write_cache(cache, false, true)))
+ if ((write_error= write_cache(thd, cache, FALSE,
+ TRUE)))
DBUG_PRINT("info", ("error writing binlog cache: %d",
write_error));
DBUG_PRINT("info", ("crashing before writing xid"));
abort();
});
- if ((write_error= write_cache(cache, false, false)))
+ if ((write_error= write_cache(thd, cache, FALSE, FALSE)))
goto err;
- if (commit_event && commit_event->write(&log_file))
- goto err;
+ if (commit_event)
+ {
+ if (commit_event->write(&log_file))
+ goto err;
+ status_var_add(thd->status_var.binlog_bytes_written,
+ commit_event->data_written);
+ }
if (incident && write_incident(thd, FALSE))
goto err;
=== modified file 'sql/log.h'
--- a/sql/log.h 2009-06-18 13:52:46 +0000
+++ b/sql/log.h 2009-10-19 17:14:48 +0000
@@ -359,7 +359,8 @@ public:
bool write(THD *thd, IO_CACHE *cache, Log_event *commit_event, bool incident);
bool write_incident(THD *thd, bool lock);
- int write_cache(IO_CACHE *cache, bool lock_log, bool flush_and_sync);
+ int write_cache(THD *thd, IO_CACHE *cache, bool lock_log,
+ bool flush_and_sync);
void set_write_error(THD *thd);
bool check_write_error(THD *thd);
@@ -487,6 +488,7 @@ public:
const char *sql_text, uint sql_text_len,
CHARSET_INFO *client_cs);
void flush();
+ void flush_slow_log();
void init_pthread_objects();
MYSQL_QUERY_LOG *get_mysql_slow_log() { return &mysql_slow_log; }
MYSQL_QUERY_LOG *get_mysql_log() { return &mysql_log; }
@@ -531,6 +533,7 @@ public:
void init_base();
void init_log_tables();
bool flush_logs(THD *thd);
+ bool flush_slow_log(THD *thd);
/* Perform basic logger cleanup. this will leave e.g. error log open. */
void cleanup_base();
/* Free memory. Nothing could be logged after this function is called */
=== modified file 'sql/log_event.cc'
--- a/sql/log_event.cc 2009-09-07 20:50:10 +0000
+++ b/sql/log_event.cc 2009-10-19 17:14:48 +0000
@@ -4465,7 +4465,7 @@ int Load_log_event::do_apply_event(NET*
as the present method does not call mysql_parse().
*/
lex_start(thd);
- mysql_reset_thd_for_next_command(thd);
+ mysql_reset_thd_for_next_command(thd, 0);
if (!use_rli_only_for_errors)
{
@@ -6262,7 +6262,7 @@ int Append_block_log_event::do_apply_eve
as the present method does not call mysql_parse().
*/
lex_start(thd);
- mysql_reset_thd_for_next_command(thd);
+ mysql_reset_thd_for_next_command(thd, 0);
my_delete(fname, MYF(0)); // old copy may exist already
if ((fd= my_create(fname, CREATE_MODE,
O_WRONLY | O_BINARY | O_EXCL | O_NOFOLLOW,
@@ -7202,7 +7202,7 @@ int Rows_log_event::do_apply_event(Relay
we need to do any changes to that value after this function.
*/
lex_start(thd);
- mysql_reset_thd_for_next_command(thd);
+ mysql_reset_thd_for_next_command(thd, 0);
/*
The current statement is just about to begin and
has not yet modified anything. Note, all.modified is reset
@@ -8465,7 +8465,7 @@ Rows_log_event::write_row(const Relay_lo
if (table->file->ha_table_flags() & HA_DUPLICATE_POS)
{
DBUG_PRINT("info",("Locating offending record using rnd_pos()"));
- error= table->file->rnd_pos(table->record[1], table->file->dup_ref);
+ error= table->file->ha_rnd_pos(table->record[1], table->file->dup_ref);
if (error)
{
DBUG_PRINT("info",("rnd_pos() returns error %d",error));
@@ -8497,10 +8497,10 @@ Rows_log_event::write_row(const Relay_lo
key_copy((uchar*)key.get(), table->record[0], table->key_info + keynum,
0);
- error= table->file->index_read_idx_map(table->record[1], keynum,
- (const uchar*)key.get(),
- HA_WHOLE_KEY,
- HA_READ_KEY_EXACT);
+ error= table->file->ha_index_read_idx_map(table->record[1], keynum,
+ (const uchar*)key.get(),
+ HA_WHOLE_KEY,
+ HA_READ_KEY_EXACT);
if (error)
{
DBUG_PRINT("info",("index_read_idx() returns %s", HA_ERR(error)));
@@ -8768,13 +8768,14 @@ int Rows_log_event::find_row(const Relay
length. Something along these lines should work:
ADD>>> store_record(table,record[1]);
- int error= table->file->rnd_pos(table->record[0], table->file->ref);
+ int error= table->file->ha_rnd_pos(table->record[0],
+ table->file->ref);
ADD>>> DBUG_ASSERT(memcmp(table->record[1], table->record[0],
table->s->reclength) == 0);
*/
DBUG_PRINT("info",("locating record using primary key (position)"));
- int error= table->file->rnd_pos_by_record(table->record[0]);
+ int error= table->file->ha_rnd_pos_by_record(table->record[0]);
if (error)
{
DBUG_PRINT("info",("rnd_pos returns error %d",error));
@@ -8834,9 +8835,9 @@ int Rows_log_event::find_row(const Relay
table->record[0][table->s->null_bytes - 1]|=
256U - (1U << table->s->last_null_bit_pos);
- if ((error= table->file->index_read_map(table->record[0], m_key,
- HA_WHOLE_KEY,
- HA_READ_KEY_EXACT)))
+ if ((error= table->file->ha_index_read_map(table->record[0], m_key,
+ HA_WHOLE_KEY,
+ HA_READ_KEY_EXACT)))
{
DBUG_PRINT("info",("no record matching the key found in the table"));
if (error == HA_ERR_RECORD_DELETED)
@@ -8898,7 +8899,7 @@ int Rows_log_event::find_row(const Relay
256U - (1U << table->s->last_null_bit_pos);
}
- while ((error= table->file->index_next(table->record[0])))
+ while ((error= table->file->ha_index_next(table->record[0])))
{
/* We just skip records that has already been deleted */
if (error == HA_ERR_RECORD_DELETED)
@@ -8934,7 +8935,7 @@ int Rows_log_event::find_row(const Relay
do
{
restart_rnd_next:
- error= table->file->rnd_next(table->record[0]);
+ error= table->file->ha_rnd_next(table->record[0]);
DBUG_PRINT("info", ("error: %s", HA_ERR(error)));
switch (error) {
=== modified file 'sql/log_event_old.cc'
--- a/sql/log_event_old.cc 2009-05-19 09:28:05 +0000
+++ b/sql/log_event_old.cc 2009-10-19 17:14:48 +0000
@@ -63,7 +63,7 @@ Old_rows_log_event::do_apply_event(Old_r
we need to do any changes to that value after this function.
*/
lex_start(thd);
- mysql_reset_thd_for_next_command(thd);
+ mysql_reset_thd_for_next_command(thd, 0);
/*
Check if the slave is set to use SBR. If so, it should switch
@@ -553,7 +553,7 @@ replace_record(THD *thd, TABLE *table,
*/
if (table->file->ha_table_flags() & HA_DUPLICATE_POS)
{
- error= table->file->rnd_pos(table->record[1], table->file->dup_ref);
+ error= table->file->ha_rnd_pos(table->record[1], table->file->dup_ref);
if (error)
{
DBUG_PRINT("info",("rnd_pos() returns error %d",error));
@@ -579,10 +579,10 @@ replace_record(THD *thd, TABLE *table,
key_copy((uchar*)key.get(), table->record[0], table->key_info + keynum,
0);
- error= table->file->index_read_idx_map(table->record[1], keynum,
- (const uchar*)key.get(),
- HA_WHOLE_KEY,
- HA_READ_KEY_EXACT);
+ error= table->file->ha_index_read_idx_map(table->record[1], keynum,
+ (const uchar*)key.get(),
+ HA_WHOLE_KEY,
+ HA_READ_KEY_EXACT);
if (error)
{
DBUG_PRINT("info", ("index_read_idx() returns error %d", error));
@@ -694,13 +694,13 @@ static int find_and_fetch_row(TABLE *tab
length. Something along these lines should work:
ADD>>> store_record(table,record[1]);
- int error= table->file->rnd_pos(table->record[0], table->file->ref);
+ int error= table->file->ha_rnd_pos(table->record[0], table->file->ref);
ADD>>> DBUG_ASSERT(memcmp(table->record[1], table->record[0],
table->s->reclength) == 0);
*/
table->file->position(table->record[0]);
- int error= table->file->rnd_pos(table->record[0], table->file->ref);
+ int error= table->file->ha_rnd_pos(table->record[0], table->file->ref);
/*
rnd_pos() returns the record in table->record[0], so we have to
move it to table->record[1].
@@ -738,8 +738,9 @@ static int find_and_fetch_row(TABLE *tab
my_ptrdiff_t const pos=
table->s->null_bytes > 0 ? table->s->null_bytes - 1 : 0;
table->record[1][pos]= 0xFF;
- if ((error= table->file->index_read_map(table->record[1], key, HA_WHOLE_KEY,
- HA_READ_KEY_EXACT)))
+ if ((error= table->file->ha_index_read_map(table->record[1], key,
+ HA_WHOLE_KEY,
+ HA_READ_KEY_EXACT)))
{
table->file->print_error(error, MYF(0));
table->file->ha_index_end();
@@ -793,7 +794,7 @@ static int find_and_fetch_row(TABLE *tab
256U - (1U << table->s->last_null_bit_pos);
}
- while ((error= table->file->index_next(table->record[1])))
+ while ((error= table->file->ha_index_next(table->record[1])))
{
/* We just skip records that has already been deleted */
if (error == HA_ERR_RECORD_DELETED)
@@ -822,7 +823,7 @@ static int find_and_fetch_row(TABLE *tab
do
{
restart_rnd_next:
- error= table->file->rnd_next(table->record[1]);
+ error= table->file->ha_rnd_next(table->record[1]);
DBUG_DUMP("record[0]", table->record[0], table->s->reclength);
DBUG_DUMP("record[1]", table->record[1], table->s->reclength);
@@ -2115,7 +2116,7 @@ Old_rows_log_event::write_row(const Rela
if (table->file->ha_table_flags() & HA_DUPLICATE_POS)
{
DBUG_PRINT("info",("Locating offending record using rnd_pos()"));
- error= table->file->rnd_pos(table->record[1], table->file->dup_ref);
+ error= table->file->ha_rnd_pos(table->record[1], table->file->dup_ref);
if (error)
{
DBUG_PRINT("info",("rnd_pos() returns error %d",error));
@@ -2147,10 +2148,10 @@ Old_rows_log_event::write_row(const Rela
key_copy((uchar*)key.get(), table->record[0], table->key_info + keynum,
0);
- error= table->file->index_read_idx_map(table->record[1], keynum,
- (const uchar*)key.get(),
- HA_WHOLE_KEY,
- HA_READ_KEY_EXACT);
+ error= table->file->ha_index_read_idx_map(table->record[1], keynum,
+ (const uchar*)key.get(),
+ HA_WHOLE_KEY,
+ HA_READ_KEY_EXACT);
if (error)
{
DBUG_PRINT("info",("index_read_idx() returns error %d", error));
@@ -2301,13 +2302,13 @@ int Old_rows_log_event::find_row(const R
length. Something along these lines should work:
ADD>>> store_record(table,record[1]);
- int error= table->file->rnd_pos(table->record[0], table->file->ref);
+ int error= table->file->ha_rnd_pos(table->record[0], table->file->ref);
ADD>>> DBUG_ASSERT(memcmp(table->record[1], table->record[0],
table->s->reclength) == 0);
*/
DBUG_PRINT("info",("locating record using primary key (position)"));
- int error= table->file->rnd_pos_by_record(table->record[0]);
+ int error= table->file->ha_rnd_pos_by_record(table->record[0]);
if (error)
{
DBUG_PRINT("info",("rnd_pos returns error %d",error));
@@ -2367,9 +2368,9 @@ int Old_rows_log_event::find_row(const R
table->s->null_bytes > 0 ? table->s->null_bytes - 1 : 0;
table->record[0][pos]= 0xFF;
- if ((error= table->file->index_read_map(table->record[0], m_key,
- HA_WHOLE_KEY,
- HA_READ_KEY_EXACT)))
+ if ((error= table->file->ha_index_read_map(table->record[0], m_key,
+ HA_WHOLE_KEY,
+ HA_READ_KEY_EXACT)))
{
DBUG_PRINT("info",("no record matching the key found in the table"));
if (error == HA_ERR_RECORD_DELETED)
@@ -2431,7 +2432,7 @@ int Old_rows_log_event::find_row(const R
256U - (1U << table->s->last_null_bit_pos);
}
- while ((error= table->file->index_next(table->record[0])))
+ while ((error= table->file->ha_index_next(table->record[0])))
{
/* We just skip records that has already been deleted */
if (error == HA_ERR_RECORD_DELETED)
@@ -2467,7 +2468,7 @@ int Old_rows_log_event::find_row(const R
do
{
restart_rnd_next:
- error= table->file->rnd_next(table->record[0]);
+ error= table->file->ha_rnd_next(table->record[0]);
switch (error) {
=== modified file 'sql/mysql_priv.h'
--- a/sql/mysql_priv.h 2009-10-06 14:53:46 +0000
+++ b/sql/mysql_priv.h 2009-10-19 17:14:48 +0000
@@ -1063,6 +1063,7 @@ bool setup_connection_thread_globals(THD
bool login_connection(THD *thd);
void end_connection(THD *thd);
void prepare_new_connection_state(THD* thd);
+void update_global_user_stats(THD* thd, bool create_user, time_t now);
int mysql_create_db(THD *thd, char *db, HA_CREATE_INFO *create, bool silent);
bool mysql_alter_db(THD *thd, const char *db, HA_CREATE_INFO *create);
@@ -1099,14 +1100,22 @@ bool is_update_query(enum enum_sql_comma
bool is_log_table_write_query(enum enum_sql_command command);
bool alloc_query(THD *thd, const char *packet, uint packet_length);
void mysql_init_select(LEX *lex);
-void mysql_reset_thd_for_next_command(THD *thd);
+void mysql_reset_thd_for_next_command(THD *thd, my_bool calculate_userstat);
bool mysql_new_select(LEX *lex, bool move_down);
void create_select_for_variable(const char *var_name);
void mysql_init_multi_delete(LEX *lex);
bool multi_delete_set_locks_and_link_aux_tables(LEX *lex);
void init_max_user_conn(void);
void init_update_queries(void);
+void init_global_user_stats(void);
+void init_global_table_stats(void);
+void init_global_index_stats(void);
+void init_global_client_stats(void);
void free_max_user_conn(void);
+void free_global_user_stats(void);
+void free_global_table_stats(void);
+void free_global_index_stats(void);
+void free_global_client_stats(void);
pthread_handler_t handle_bootstrap(void *arg);
int mysql_execute_command(THD *thd);
bool do_command(THD *thd);
@@ -1967,6 +1976,7 @@ extern ulong max_connect_errors, connect
extern ulong extra_max_connections;
extern ulong slave_net_timeout, slave_trans_retries;
extern uint max_user_connections;
+extern ulonglong denied_connections;
extern ulong what_to_log,flush_time;
extern ulong query_buff_size;
extern ulong max_prepared_stmt_count, prepared_stmt_count;
@@ -2020,6 +2030,7 @@ extern my_bool opt_safe_show_db, opt_loc
extern my_bool opt_slave_compressed_protocol, use_temp_pool;
extern ulong slave_exec_mode_options;
extern my_bool opt_readonly, lower_case_file_system;
+extern my_bool opt_userstat_running;
extern my_bool opt_enable_named_pipe, opt_sync_frm, opt_allow_suspicious_udfs;
extern my_bool opt_secure_auth;
extern char* opt_secure_file_priv;
@@ -2060,6 +2071,11 @@ extern pthread_mutex_t LOCK_des_key_file
#endif
extern pthread_mutex_t LOCK_server_started;
extern pthread_cond_t COND_server_started;
+extern pthread_mutex_t LOCK_global_user_client_stats;
+extern pthread_mutex_t LOCK_global_table_stats;
+extern pthread_mutex_t LOCK_global_index_stats;
+extern pthread_mutex_t LOCK_stats;
+
extern int mysqld_server_started;
extern rw_lock_t LOCK_grant, LOCK_sys_init_connect, LOCK_sys_init_slave;
extern rw_lock_t LOCK_system_variables_hash;
@@ -2086,6 +2102,11 @@ extern KNOWN_DATE_TIME_FORMAT known_date
extern String null_string;
extern HASH open_cache, lock_db_cache;
+extern HASH global_user_stats;
+extern HASH global_client_stats;
+extern HASH global_table_stats;
+extern HASH global_index_stats;
+
extern TABLE *unused_tables;
extern const char* any_db;
extern struct my_option my_long_options[];
=== modified file 'sql/mysqld.cc'
--- a/sql/mysqld.cc 2009-10-07 13:07:10 +0000
+++ b/sql/mysqld.cc 2009-10-19 17:14:48 +0000
@@ -416,6 +416,7 @@ static pthread_cond_t COND_thread_cache,
bool opt_update_log, opt_bin_log, opt_ignore_builtin_innodb= 0;
my_bool opt_log, opt_slow_log;
+my_bool opt_userstat_running;
ulong log_output_options;
my_bool opt_log_queries_not_using_indexes= 0;
bool opt_error_log= IF_WIN(1,0);
@@ -548,6 +549,7 @@ ulong binlog_cache_use= 0, binlog_cache_
ulong max_connections, max_connect_errors;
ulong extra_max_connections;
uint max_user_connections= 0;
+ulonglong denied_connections;
/**
Limit of the total number of prepared statements in the server.
Is necessary to protect the server against out-of-memory attacks.
@@ -649,6 +651,9 @@ pthread_mutex_t LOCK_mysql_create_db, LO
LOCK_global_system_variables,
LOCK_user_conn, LOCK_slave_list, LOCK_active_mi,
LOCK_connection_count, LOCK_uuid_generator;
+pthread_mutex_t LOCK_stats, LOCK_global_user_client_stats;
+pthread_mutex_t LOCK_global_table_stats, LOCK_global_index_stats;
+
/**
The below lock protects access to two global server variables:
max_prepared_stmt_count and prepared_stmt_count. These variables
@@ -1342,6 +1347,10 @@ void clean_up(bool print_message)
x_free(opt_secure_file_priv);
bitmap_free(&temp_pool);
free_max_user_conn();
+ free_global_user_stats();
+ free_global_client_stats();
+ free_global_table_stats();
+ free_global_index_stats();
#ifdef HAVE_REPLICATION
end_slave_list();
#endif
@@ -1428,6 +1437,11 @@ static void clean_up_mutexes()
(void) pthread_mutex_destroy(&LOCK_bytes_received);
(void) pthread_mutex_destroy(&LOCK_user_conn);
(void) pthread_mutex_destroy(&LOCK_connection_count);
+ (void) pthread_mutex_destroy(&LOCK_stats);
+ (void) pthread_mutex_destroy(&LOCK_global_user_client_stats);
+ (void) pthread_mutex_destroy(&LOCK_global_table_stats);
+ (void) pthread_mutex_destroy(&LOCK_global_index_stats);
+
Events::destroy_mutexes();
#ifdef HAVE_OPENSSL
(void) pthread_mutex_destroy(&LOCK_des_key_file);
@@ -3203,6 +3217,7 @@ SHOW_VAR com_status_vars[]= {
{"show_binlog_events", (char*) offsetof(STATUS_VAR, com_stat[(uint) SQLCOM_SHOW_BINLOG_EVENTS]), SHOW_LONG_STATUS},
{"show_binlogs", (char*) offsetof(STATUS_VAR, com_stat[(uint) SQLCOM_SHOW_BINLOGS]), SHOW_LONG_STATUS},
{"show_charsets", (char*) offsetof(STATUS_VAR, com_stat[(uint) SQLCOM_SHOW_CHARSETS]), SHOW_LONG_STATUS},
+ {"show_client_statistics", (char*) offsetof(STATUS_VAR, com_stat[(uint) SQLCOM_SHOW_CLIENT_STATS]), SHOW_LONG_STATUS},
{"show_collations", (char*) offsetof(STATUS_VAR, com_stat[(uint) SQLCOM_SHOW_COLLATIONS]), SHOW_LONG_STATUS},
{"show_column_types", (char*) offsetof(STATUS_VAR, com_stat[(uint) SQLCOM_SHOW_COLUMN_TYPES]), SHOW_LONG_STATUS},
{"show_contributors", (char*) offsetof(STATUS_VAR, com_stat[(uint) SQLCOM_SHOW_CONTRIBUTORS]), SHOW_LONG_STATUS},
@@ -3225,6 +3240,7 @@ SHOW_VAR com_status_vars[]= {
{"show_function_status", (char*) offsetof(STATUS_VAR, com_stat[(uint) SQLCOM_SHOW_STATUS_FUNC]), SHOW_LONG_STATUS},
{"show_grants", (char*) offsetof(STATUS_VAR, com_stat[(uint) SQLCOM_SHOW_GRANTS]), SHOW_LONG_STATUS},
{"show_keys", (char*) offsetof(STATUS_VAR, com_stat[(uint) SQLCOM_SHOW_KEYS]), SHOW_LONG_STATUS},
+ {"show_index_statistics", (char*) offsetof(STATUS_VAR, com_stat[(uint) SQLCOM_SHOW_INDEX_STATS]), SHOW_LONG_STATUS},
{"show_master_status", (char*) offsetof(STATUS_VAR, com_stat[(uint) SQLCOM_SHOW_MASTER_STAT]), SHOW_LONG_STATUS},
{"show_new_master", (char*) offsetof(STATUS_VAR, com_stat[(uint) SQLCOM_SHOW_NEW_MASTER]), SHOW_LONG_STATUS},
{"show_open_tables", (char*) offsetof(STATUS_VAR, com_stat[(uint) SQLCOM_SHOW_OPEN_TABLES]), SHOW_LONG_STATUS},
@@ -3241,9 +3257,11 @@ SHOW_VAR com_status_vars[]= {
{"show_slave_status", (char*) offsetof(STATUS_VAR, com_stat[(uint) SQLCOM_SHOW_SLAVE_STAT]), SHOW_LONG_STATUS},
{"show_status", (char*) offsetof(STATUS_VAR, com_stat[(uint) SQLCOM_SHOW_STATUS]), SHOW_LONG_STATUS},
{"show_storage_engines", (char*) offsetof(STATUS_VAR, com_stat[(uint) SQLCOM_SHOW_STORAGE_ENGINES]), SHOW_LONG_STATUS},
+ {"show_table_statistics", (char*) offsetof(STATUS_VAR, com_stat[(uint) SQLCOM_SHOW_TABLE_STATS]), SHOW_LONG_STATUS},
{"show_table_status", (char*) offsetof(STATUS_VAR, com_stat[(uint) SQLCOM_SHOW_TABLE_STATUS]), SHOW_LONG_STATUS},
{"show_tables", (char*) offsetof(STATUS_VAR, com_stat[(uint) SQLCOM_SHOW_TABLES]), SHOW_LONG_STATUS},
{"show_triggers", (char*) offsetof(STATUS_VAR, com_stat[(uint) SQLCOM_SHOW_TRIGGERS]), SHOW_LONG_STATUS},
+ {"show_user_statistics", (char*) offsetof(STATUS_VAR, com_stat[(uint) SQLCOM_SHOW_USER_STATS]), SHOW_LONG_STATUS},
{"show_variables", (char*) offsetof(STATUS_VAR, com_stat[(uint) SQLCOM_SHOW_VARIABLES]), SHOW_LONG_STATUS},
{"show_warnings", (char*) offsetof(STATUS_VAR, com_stat[(uint) SQLCOM_SHOW_WARNS]), SHOW_LONG_STATUS},
{"slave_start", (char*) offsetof(STATUS_VAR, com_stat[(uint) SQLCOM_SLAVE_START]), SHOW_LONG_STATUS},
@@ -3642,6 +3660,12 @@ static int init_thread_environment()
(void) pthread_mutex_init(&LOCK_prepared_stmt_count, MY_MUTEX_INIT_FAST);
(void) pthread_mutex_init(&LOCK_uuid_generator, MY_MUTEX_INIT_FAST);
(void) pthread_mutex_init(&LOCK_connection_count, MY_MUTEX_INIT_FAST);
+ (void) pthread_mutex_init(&LOCK_stats, MY_MUTEX_INIT_FAST);
+ (void) pthread_mutex_init(&LOCK_global_user_client_stats,
+ MY_MUTEX_INIT_FAST);
+ (void) pthread_mutex_init(&LOCK_global_table_stats, MY_MUTEX_INIT_FAST);
+ (void) pthread_mutex_init(&LOCK_global_index_stats, MY_MUTEX_INIT_FAST);
+
#ifdef HAVE_OPENSSL
(void) pthread_mutex_init(&LOCK_des_key_file,MY_MUTEX_INIT_FAST);
#ifndef HAVE_YASSL
@@ -4005,6 +4029,9 @@ server.");
/* call ha_init_key_cache() on all key caches to init them */
process_key_caches(&ha_init_key_cache);
+ init_global_table_stats();
+ init_global_index_stats();
+
/* Allow storage engine to give real error messages */
if (ha_init_errors())
DBUG_RETURN(1);
@@ -4210,6 +4237,8 @@ server.");
init_max_user_conn();
init_update_queries();
+ init_global_user_stats();
+ init_global_client_stats();
DBUG_RETURN(0);
}
@@ -5019,6 +5048,7 @@ static void create_new_thread(THD *thd)
DBUG_PRINT("error",("Too many connections"));
close_connection(thd, ER_CON_COUNT_ERROR, 1);
+ statistic_increment(denied_connections, &LOCK_status);
delete thd;
DBUG_VOID_RETURN;
}
@@ -5810,6 +5840,7 @@ enum options_mysqld
OPT_LOG_SLOW_RATE_LIMIT,
OPT_LOG_SLOW_VERBOSITY,
OPT_LOG_SLOW_FILTER,
+ OPT_USERSTAT,
OPT_GENERAL_LOG_FILE,
OPT_SLOW_QUERY_LOG_FILE,
OPT_IGNORE_BUILTIN_INNODB
@@ -7209,6 +7240,10 @@ The minimum value for this variable is 4
(uchar**) &max_system_variables.net_wait_timeout, 0, GET_ULONG,
REQUIRED_ARG, NET_WAIT_TIMEOUT, 1, IF_WIN(INT_MAX32/1000, LONG_TIMEOUT),
0, 1, 0},
+ {"userstat", OPT_USERSTAT,
+ "Enable or disable collection of USER_STATISTICS, CLIENT_STATISTICS, INDEX_STATISTICS and TABLE_STATISTICS",
+ (uchar**) &opt_userstat_running, (uchar**) &opt_userstat_running,
+ 0, GET_BOOL, NO_ARG, 0, 0, 1, 0, 1, 0},
{0, 0, 0, 0, 0, 0, GET_NO_ARG, NO_ARG, 0, 0, 0, 0, 0, 0}
};
@@ -7579,19 +7614,24 @@ static int show_ssl_get_cipher_list(THD
SHOW_VAR status_vars[]= {
{"Aborted_clients", (char*) &aborted_threads, SHOW_LONG},
{"Aborted_connects", (char*) &aborted_connects, SHOW_LONG},
+ {"Access_denied_errors", (char*) offsetof(STATUS_VAR, access_denied_errors), SHOW_LONG_STATUS},
{"Binlog_cache_disk_use", (char*) &binlog_cache_disk_use, SHOW_LONG},
{"Binlog_cache_use", (char*) &binlog_cache_use, SHOW_LONG},
+ {"Busy_time", (char*) offsetof(STATUS_VAR, busy_time), SHOW_DOUBLE_STATUS},
{"Bytes_received", (char*) offsetof(STATUS_VAR, bytes_received), SHOW_LONGLONG_STATUS},
{"Bytes_sent", (char*) offsetof(STATUS_VAR, bytes_sent), SHOW_LONGLONG_STATUS},
+ {"Binlog_bytes_written", (char*) offsetof(STATUS_VAR, binlog_bytes_written), SHOW_LONGLONG_STATUS},
{"Com", (char*) com_status_vars, SHOW_ARRAY},
{"Compression", (char*) &show_net_compression, SHOW_FUNC},
{"Connections", (char*) &thread_id, SHOW_LONG_NOFLUSH},
+ {"Cpu_time", (char*) offsetof(STATUS_VAR, cpu_time), SHOW_DOUBLE_STATUS},
{"Created_tmp_disk_tables", (char*) offsetof(STATUS_VAR, created_tmp_disk_tables), SHOW_LONG_STATUS},
{"Created_tmp_files", (char*) &my_tmp_file_created, SHOW_LONG},
{"Created_tmp_tables", (char*) offsetof(STATUS_VAR, created_tmp_tables), SHOW_LONG_STATUS},
{"Delayed_errors", (char*) &delayed_insert_errors, SHOW_LONG},
{"Delayed_insert_threads", (char*) &delayed_insert_threads, SHOW_LONG_NOFLUSH},
{"Delayed_writes", (char*) &delayed_insert_writes, SHOW_LONG},
+ {"Empty_queries", (char*) offsetof(STATUS_VAR, empty_queries), SHOW_LONG_STATUS},
{"Flush_commands", (char*) &refresh_version, SHOW_LONG_NOFLUSH},
{"Handler_commit", (char*) offsetof(STATUS_VAR, ha_commit_count), SHOW_LONG_STATUS},
{"Handler_delete", (char*) offsetof(STATUS_VAR, ha_delete_count), SHOW_LONG_STATUS},
@@ -7626,6 +7666,8 @@ SHOW_VAR status_vars[]= {
{"Opened_tables", (char*) offsetof(STATUS_VAR, opened_tables), SHOW_LONG_STATUS},
{"Opened_table_definitions", (char*) offsetof(STATUS_VAR, opened_shares), SHOW_LONG_STATUS},
{"Prepared_stmt_count", (char*) &show_prepared_stmt_count, SHOW_FUNC},
+ {"Rows_sent", (char*) offsetof(STATUS_VAR, rows_sent), SHOW_LONG_STATUS},
+ {"Rows_read", (char*) offsetof(STATUS_VAR, rows_read), SHOW_LONG_STATUS},
#ifdef HAVE_QUERY_CACHE
{"Qcache_free_blocks", (char*) &query_cache.free_memory_blocks, SHOW_LONG_NOFLUSH},
{"Qcache_free_memory", (char*) &query_cache.free_memory, SHOW_LONG_NOFLUSH},
@@ -9110,6 +9152,8 @@ void refresh_status(THD *thd)
/* Reset thread's status variables */
bzero((uchar*) &thd->status_var, sizeof(thd->status_var));
+ bzero((uchar*) &thd->org_status_var, sizeof(thd->org_status_var));
+ thd->start_bytes_received= 0;
/* Reset some global variables */
reset_status_vars();
=== modified file 'sql/opt_range.cc'
--- a/sql/opt_range.cc 2009-09-09 21:59:28 +0000
+++ b/sql/opt_range.cc 2009-10-19 17:14:48 +0000
@@ -8230,7 +8230,7 @@ int QUICK_ROR_INTERSECT_SELECT::get_next
/* We get here if we got the same row ref in all scans. */
if (need_to_fetch_row)
- error= head->file->rnd_pos(head->record[0], last_rowid);
+ error= head->file->ha_rnd_pos(head->record[0], last_rowid);
} while (error == HA_ERR_RECORD_DELETED);
DBUG_RETURN(error);
}
@@ -8296,7 +8296,7 @@ int QUICK_ROR_UNION_SELECT::get_next()
cur_rowid= prev_rowid;
prev_rowid= tmp;
- error= head->file->rnd_pos(quick->record, prev_rowid);
+ error= head->file->ha_rnd_pos(quick->record, prev_rowid);
} while (error == HA_ERR_RECORD_DELETED);
DBUG_RETURN(error);
}
@@ -8521,10 +8521,12 @@ int QUICK_RANGE_SELECT::get_next_prefix(
key_range start_key, end_key;
if (last_range)
{
- /* Read the next record in the same range with prefix after cur_prefix. */
+ /*
+ Read the next record in the same range with prefix after cur_prefix.
+ */
DBUG_ASSERT(cur_prefix != 0);
- result= file->index_read_map(record, cur_prefix, keypart_map,
- HA_READ_AFTER_KEY);
+ result= file->ha_index_read_map(record, cur_prefix, keypart_map,
+ HA_READ_AFTER_KEY);
if (result || (file->compare_key(file->end_range) <= 0))
DBUG_RETURN(result);
}
@@ -8580,8 +8582,8 @@ int QUICK_RANGE_SELECT_GEOM::get_next()
if (last_range)
{
// Already read through key
- result= file->index_next_same(record, last_range->min_key,
- last_range->min_length);
+ result= file->ha_index_next_same(record, last_range->min_key,
+ last_range->min_length);
if (result != HA_ERR_END_OF_FILE)
DBUG_RETURN(result);
}
@@ -8595,10 +8597,10 @@ int QUICK_RANGE_SELECT_GEOM::get_next()
}
last_range= *(cur_range++);
- result= file->index_read_map(record, last_range->min_key,
- last_range->min_keypart_map,
- (ha_rkey_function)(last_range->flag ^
- GEOM_FLAG));
+ result= file->ha_index_read_map(record, last_range->min_key,
+ last_range->min_keypart_map,
+ (ha_rkey_function)(last_range->flag ^
+ GEOM_FLAG));
if (result != HA_ERR_KEY_NOT_FOUND && result != HA_ERR_END_OF_FILE)
DBUG_RETURN(result);
last_range= 0; // Not found, to next range
@@ -8710,9 +8712,9 @@ int QUICK_SELECT_DESC::get_next()
{ // Already read through key
result = ((last_range->flag & EQ_RANGE &&
used_key_parts <= head->key_info[index].key_parts) ?
- file->index_next_same(record, last_range->min_key,
+ file->ha_index_next_same(record, last_range->min_key,
last_range->min_length) :
- file->index_prev(record));
+ file->ha_index_prev(record));
if (!result)
{
if (cmp_prev(*rev_it.ref()) == 0)
@@ -8728,7 +8730,7 @@ int QUICK_SELECT_DESC::get_next()
if (last_range->flag & NO_MAX_RANGE) // Read last record
{
int local_error;
- if ((local_error=file->index_last(record)))
+ if ((local_error= file->ha_index_last(record)))
DBUG_RETURN(local_error); // Empty table
if (cmp_prev(last_range) == 0)
DBUG_RETURN(0);
@@ -8740,9 +8742,9 @@ int QUICK_SELECT_DESC::get_next()
used_key_parts <= head->key_info[index].key_parts)
{
- result = file->index_read_map(record, last_range->max_key,
- last_range->max_keypart_map,
- HA_READ_KEY_EXACT);
+ result= file->ha_index_read_map(record, last_range->max_key,
+ last_range->max_keypart_map,
+ HA_READ_KEY_EXACT);
}
else
{
@@ -8750,11 +8752,11 @@ int QUICK_SELECT_DESC::get_next()
(last_range->flag & EQ_RANGE &&
used_key_parts > head->key_info[index].key_parts) ||
range_reads_after_key(last_range));
- result=file->index_read_map(record, last_range->max_key,
- last_range->max_keypart_map,
- ((last_range->flag & NEAR_MAX) ?
- HA_READ_BEFORE_KEY :
- HA_READ_PREFIX_LAST_OR_PREV));
+ result= file->ha_index_read_map(record, last_range->max_key,
+ last_range->max_keypart_map,
+ ((last_range->flag & NEAR_MAX) ?
+ HA_READ_BEFORE_KEY :
+ HA_READ_PREFIX_LAST_OR_PREV));
}
if (result)
{
@@ -10467,7 +10469,7 @@ int QUICK_GROUP_MIN_MAX_SELECT::reset(vo
DBUG_RETURN(result);
if (quick_prefix_select && quick_prefix_select->reset())
DBUG_RETURN(1);
- result= file->index_last(record);
+ result= file->ha_index_last(record);
if (result == HA_ERR_END_OF_FILE)
DBUG_RETURN(0);
/* Save the prefix of the last group. */
@@ -10569,9 +10571,9 @@ int QUICK_GROUP_MIN_MAX_SELECT::get_next
first sub-group with the extended prefix.
*/
if (!have_min && !have_max && key_infix_len > 0)
- result= file->index_read_map(record, group_prefix,
- make_prev_keypart_map(real_key_parts),
- HA_READ_KEY_EXACT);
+ result= file->ha_index_read_map(record, group_prefix,
+ make_prev_keypart_map(real_key_parts),
+ HA_READ_KEY_EXACT);
result= have_min ? min_res : have_max ? max_res : result;
} while ((result == HA_ERR_KEY_NOT_FOUND || result == HA_ERR_END_OF_FILE) &&
@@ -10633,9 +10635,10 @@ int QUICK_GROUP_MIN_MAX_SELECT::next_min
/* Apply the constant equality conditions to the non-group select fields */
if (key_infix_len > 0)
{
- if ((result= file->index_read_map(record, group_prefix,
- make_prev_keypart_map(real_key_parts),
- HA_READ_KEY_EXACT)))
+ if ((result=
+ file->ha_index_read_map(record, group_prefix,
+ make_prev_keypart_map(real_key_parts),
+ HA_READ_KEY_EXACT)))
DBUG_RETURN(result);
}
@@ -10650,9 +10653,9 @@ int QUICK_GROUP_MIN_MAX_SELECT::next_min
{
/* Find the first subsequent record without NULL in the MIN/MAX field. */
key_copy(tmp_record, record, index_info, 0);
- result= file->index_read_map(record, tmp_record,
- make_keypart_map(real_key_parts),
- HA_READ_AFTER_KEY);
+ result= file->ha_index_read_map(record, tmp_record,
+ make_keypart_map(real_key_parts),
+ HA_READ_AFTER_KEY);
/*
Check if the new record belongs to the current group by comparing its
prefix with the group's prefix. If it is from the next group, then the
@@ -10707,9 +10710,9 @@ int QUICK_GROUP_MIN_MAX_SELECT::next_max
if (min_max_ranges.elements > 0)
result= next_max_in_range();
else
- result= file->index_read_map(record, group_prefix,
- make_prev_keypart_map(real_key_parts),
- HA_READ_PREFIX_LAST);
+ result= file->ha_index_read_map(record, group_prefix,
+ make_prev_keypart_map(real_key_parts),
+ HA_READ_PREFIX_LAST);
DBUG_RETURN(result);
}
@@ -10752,7 +10755,7 @@ int QUICK_GROUP_MIN_MAX_SELECT::next_pre
{
if (!seen_first_key)
{
- result= file->index_first(record);
+ result= file->ha_index_first(record);
if (result)
DBUG_RETURN(result);
seen_first_key= TRUE;
@@ -10760,9 +10763,9 @@ int QUICK_GROUP_MIN_MAX_SELECT::next_pre
else
{
/* Load the first key in this group into record. */
- result= file->index_read_map(record, group_prefix,
- make_prev_keypart_map(group_key_parts),
- HA_READ_AFTER_KEY);
+ result= file->ha_index_read_map(record, group_prefix,
+ make_prev_keypart_map(group_key_parts),
+ HA_READ_AFTER_KEY);
if (result)
DBUG_RETURN(result);
}
@@ -10839,7 +10842,8 @@ int QUICK_GROUP_MIN_MAX_SELECT::next_min
HA_READ_AFTER_KEY : HA_READ_KEY_OR_NEXT;
}
- result= file->index_read_map(record, group_prefix, keypart_map, find_flag);
+ result= file->ha_index_read_map(record, group_prefix, keypart_map,
+ find_flag);
if (result)
{
if ((result == HA_ERR_KEY_NOT_FOUND || result == HA_ERR_END_OF_FILE) &&
@@ -10978,7 +10982,8 @@ int QUICK_GROUP_MIN_MAX_SELECT::next_max
HA_READ_BEFORE_KEY : HA_READ_PREFIX_LAST_OR_PREV;
}
- result= file->index_read_map(record, group_prefix, keypart_map, find_flag);
+ result= file->ha_index_read_map(record, group_prefix, keypart_map,
+ find_flag);
if (result)
{
=== modified file 'sql/opt_range.h'
--- a/sql/opt_range.h 2009-09-02 08:40:18 +0000
+++ b/sql/opt_range.h 2009-10-19 17:14:48 +0000
@@ -727,7 +727,7 @@ public:
~FT_SELECT() { file->ft_end(); }
int init() { return error=file->ft_init(); }
int reset() { return 0; }
- int get_next() { return error=file->ft_read(record); }
+ int get_next() { return error= file->ha_ft_read(record); }
int get_type() { return QS_TYPE_FULLTEXT; }
};
=== modified file 'sql/opt_sum.cc'
--- a/sql/opt_sum.cc 2009-09-07 20:50:10 +0000
+++ b/sql/opt_sum.cc 2009-10-19 17:14:48 +0000
@@ -254,7 +254,7 @@ int opt_sum_query(TABLE_LIST *tables, Li
error= table->file->ha_index_init((uint) ref.key, 1);
if (!ref.key_length)
- error= table->file->index_first(table->record[0]);
+ error= table->file->ha_index_first(table->record[0]);
else
{
/*
@@ -276,10 +276,10 @@ int opt_sum_query(TABLE_LIST *tables, Li
Closed interval: Either The MIN argument is non-nullable, or
we have a >= predicate for the MIN argument.
*/
- error= table->file->index_read_map(table->record[0],
- ref.key_buff,
- make_prev_keypart_map(ref.key_parts),
- HA_READ_KEY_OR_NEXT);
+ error= table->file->ha_index_read_map(table->record[0],
+ ref.key_buff,
+ make_prev_keypart_map(ref.key_parts),
+ HA_READ_KEY_OR_NEXT);
else
{
/*
@@ -288,10 +288,10 @@ int opt_sum_query(TABLE_LIST *tables, Li
2) there is a > predicate on it, nullability is irrelevant.
We need to scan the next bigger record first.
*/
- error= table->file->index_read_map(table->record[0],
- ref.key_buff,
- make_prev_keypart_map(ref.key_parts),
- HA_READ_AFTER_KEY);
+ error= table->file->ha_index_read_map(table->record[0],
+ ref.key_buff,
+ make_prev_keypart_map(ref.key_parts),
+ HA_READ_AFTER_KEY);
/*
If the found record is outside the group formed by the search
prefix, or there is no such record at all, check if all
@@ -314,10 +314,10 @@ int opt_sum_query(TABLE_LIST *tables, Li
key_cmp_if_same(table, ref.key_buff, ref.key, prefix_len)))
{
DBUG_ASSERT(item_field->field->real_maybe_null());
- error= table->file->index_read_map(table->record[0],
- ref.key_buff,
- make_prev_keypart_map(ref.key_parts),
- HA_READ_KEY_EXACT);
+ error= table->file->ha_index_read_map(table->record[0],
+ ref.key_buff,
+ make_prev_keypart_map(ref.key_parts),
+ HA_READ_KEY_EXACT);
}
}
}
@@ -402,13 +402,13 @@ int opt_sum_query(TABLE_LIST *tables, Li
error= table->file->ha_index_init((uint) ref.key, 1);
if (!ref.key_length)
- error= table->file->index_last(table->record[0]);
+ error= table->file->ha_index_last(table->record[0]);
else
- error= table->file->index_read_map(table->record[0], key_buff,
- make_prev_keypart_map(ref.key_parts),
- range_fl & NEAR_MAX ?
- HA_READ_BEFORE_KEY :
- HA_READ_PREFIX_LAST_OR_PREV);
+ error= table->file->ha_index_read_map(table->record[0], key_buff,
+ make_prev_keypart_map(ref.key_parts),
+ range_fl & NEAR_MAX ?
+ HA_READ_BEFORE_KEY :
+ HA_READ_PREFIX_LAST_OR_PREV);
if (!error && reckey_in_range(1, &ref, item_field->field,
conds, range_fl, prefix_len))
error= HA_ERR_KEY_NOT_FOUND;
=== modified file 'sql/records.cc'
--- a/sql/records.cc 2009-05-06 12:03:24 +0000
+++ b/sql/records.cc 2009-10-19 17:14:48 +0000
@@ -342,7 +342,7 @@ static int rr_quick(READ_RECORD *info)
static int rr_index_first(READ_RECORD *info)
{
- int tmp= info->file->index_first(info->record);
+ int tmp= info->file->ha_index_first(info->record);
info->read_record= rr_index;
if (tmp)
tmp= rr_handle_error(info, tmp);
@@ -368,7 +368,7 @@ static int rr_index_first(READ_RECORD *i
static int rr_index(READ_RECORD *info)
{
- int tmp= info->file->index_next(info->record);
+ int tmp= info->file->ha_index_next(info->record);
if (tmp)
tmp= rr_handle_error(info, tmp);
return tmp;
@@ -378,7 +378,7 @@ static int rr_index(READ_RECORD *info)
int rr_sequential(READ_RECORD *info)
{
int tmp;
- while ((tmp=info->file->rnd_next(info->record)))
+ while ((tmp= info->file->ha_rnd_next(info->record)))
{
if (info->thd->killed)
{
@@ -406,7 +406,7 @@ static int rr_from_tempfile(READ_RECORD
{
if (my_b_read(info->io_cache,info->ref_pos,info->ref_length))
return -1; /* End of file */
- if (!(tmp=info->file->rnd_pos(info->record,info->ref_pos)))
+ if (!(tmp= info->file->ha_rnd_pos(info->record,info->ref_pos)))
break;
/* The following is extremely unlikely to happen */
if (tmp == HA_ERR_RECORD_DELETED ||
@@ -457,7 +457,7 @@ static int rr_from_pointers(READ_RECORD
cache_pos= info->cache_pos;
info->cache_pos+= info->ref_length;
- if (!(tmp=info->file->rnd_pos(info->record,cache_pos)))
+ if (!(tmp= info->file->ha_rnd_pos(info->record,cache_pos)))
break;
/* The following is extremely unlikely to happen */
@@ -590,7 +590,7 @@ static int rr_from_cache(READ_RECORD *in
record=uint3korr(position);
position+=3;
record_pos=info->cache+record*info->reclength;
- if ((error=(int16) info->file->rnd_pos(record_pos,info->ref_pos)))
+ if ((error=(int16) info->file->ha_rnd_pos(record_pos,info->ref_pos)))
{
record_pos[info->error_offset]=1;
shortstore(record_pos,error);
=== modified file 'sql/set_var.cc'
--- a/sql/set_var.cc 2009-09-15 10:46:35 +0000
+++ b/sql/set_var.cc 2009-10-19 17:14:48 +0000
@@ -511,6 +511,9 @@ static sys_var_const sys_prot
static sys_var_thd_ulong sys_read_buff_size(&vars, "read_buffer_size",
&SV::read_buff_size);
static sys_var_opt_readonly sys_readonly(&vars, "read_only", &opt_readonly);
+static sys_var_bool_ptr sys_userstat(&vars, "userstat",
+ &opt_userstat_running);
+
static sys_var_thd_ulong sys_read_rnd_buff_size(&vars, "read_rnd_buffer_size",
&SV::read_rnd_buff_size);
static sys_var_thd_ulong sys_div_precincrement(&vars, "div_precision_increment",
=== modified file 'sql/sp.cc'
--- a/sql/sp.cc 2009-07-28 22:39:58 +0000
+++ b/sql/sp.cc 2009-10-19 17:14:48 +0000
@@ -344,8 +344,9 @@ db_find_routine_aux(THD *thd, int type,
key_copy(key, table->record[0], table->key_info,
table->key_info->key_length);
- if (table->file->index_read_idx_map(table->record[0], 0, key, HA_WHOLE_KEY,
- HA_READ_KEY_EXACT))
+ if (table->file->ha_index_read_idx_map(table->record[0], 0, key,
+ HA_WHOLE_KEY,
+ HA_READ_KEY_EXACT))
DBUG_RETURN(SP_KEY_NOT_FOUND);
DBUG_RETURN(SP_OK);
@@ -1101,9 +1102,9 @@ sp_drop_db_routines(THD *thd, char *db)
ret= SP_OK;
table->file->ha_index_init(0, 1);
- if (! table->file->index_read_map(table->record[0],
- (uchar *)table->field[MYSQL_PROC_FIELD_DB]->ptr,
- (key_part_map)1, HA_READ_KEY_EXACT))
+ if (!table->file->ha_index_read_map(table->record[0],
+ (uchar *) table->field[MYSQL_PROC_FIELD_DB]->ptr,
+ (key_part_map)1, HA_READ_KEY_EXACT))
{
int nxtres;
bool deleted= FALSE;
@@ -1118,9 +1119,11 @@ sp_drop_db_routines(THD *thd, char *db)
nxtres= 0;
break;
}
- } while (! (nxtres= table->file->index_next_same(table->record[0],
- (uchar *)table->field[MYSQL_PROC_FIELD_DB]->ptr,
- key_len)));
+ } while (!(nxtres= table->file->
+ ha_index_next_same(table->record[0],
+ (uchar *)table->field[MYSQL_PROC_FIELD_DB]->
+ ptr,
+ key_len)));
if (nxtres != HA_ERR_END_OF_FILE)
ret= SP_KEY_NOT_FOUND;
if (deleted)
=== modified file 'sql/sql_acl.cc'
--- a/sql/sql_acl.cc 2009-09-07 20:50:10 +0000
+++ b/sql/sql_acl.cc 2009-10-19 17:14:48 +0000
@@ -1834,9 +1834,9 @@ static bool update_user_table(THD *thd,
key_copy((uchar *) user_key, table->record[0], table->key_info,
table->key_info->key_length);
- if (table->file->index_read_idx_map(table->record[0], 0,
- (uchar *) user_key, HA_WHOLE_KEY,
- HA_READ_KEY_EXACT))
+ if (table->file->ha_index_read_idx_map(table->record[0], 0,
+ (uchar *) user_key, HA_WHOLE_KEY,
+ HA_READ_KEY_EXACT))
{
my_message(ER_PASSWORD_NO_MATCH, ER(ER_PASSWORD_NO_MATCH),
MYF(0)); /* purecov: deadcode */
@@ -1927,9 +1927,9 @@ static int replace_user_table(THD *thd,
key_copy(user_key, table->record[0], table->key_info,
table->key_info->key_length);
- if (table->file->index_read_idx_map(table->record[0], 0, user_key,
- HA_WHOLE_KEY,
- HA_READ_KEY_EXACT))
+ if (table->file->ha_index_read_idx_map(table->record[0], 0, user_key,
+ HA_WHOLE_KEY,
+ HA_READ_KEY_EXACT))
{
/* what == 'N' means revoke */
if (what == 'N')
@@ -2151,9 +2151,9 @@ static int replace_db_table(TABLE *table
key_copy(user_key, table->record[0], table->key_info,
table->key_info->key_length);
- if (table->file->index_read_idx_map(table->record[0],0, user_key,
- HA_WHOLE_KEY,
- HA_READ_KEY_EXACT))
+ if (table->file->ha_index_read_idx_map(table->record[0],0, user_key,
+ HA_WHOLE_KEY,
+ HA_READ_KEY_EXACT))
{
if (what == 'N')
{ // no row, no revoke
@@ -2369,8 +2369,9 @@ GRANT_TABLE::GRANT_TABLE(TABLE *form, TA
col_privs->field[4]->store("",0, &my_charset_latin1);
col_privs->file->ha_index_init(0, 1);
- if (col_privs->file->index_read_map(col_privs->record[0], (uchar*) key,
- (key_part_map)15, HA_READ_KEY_EXACT))
+ if (col_privs->file->ha_index_read_map(col_privs->record[0], (uchar*) key,
+ (key_part_map)15,
+ HA_READ_KEY_EXACT))
{
cols = 0; /* purecov: deadcode */
col_privs->file->ha_index_end();
@@ -2391,7 +2392,7 @@ GRANT_TABLE::GRANT_TABLE(TABLE *form, TA
return; /* purecov: deadcode */
}
my_hash_insert(&hash_columns, (uchar *) mem_check);
- } while (!col_privs->file->index_next(col_privs->record[0]) &&
+ } while (!col_privs->file->ha_index_next(col_privs->record[0]) &&
!key_cmp_if_same(col_privs,key,0,key_prefix_len));
col_privs->file->ha_index_end();
}
@@ -2532,8 +2533,8 @@ static int replace_column_table(GRANT_TA
key_copy(user_key, table->record[0], table->key_info,
table->key_info->key_length);
- if (table->file->index_read_map(table->record[0], user_key, HA_WHOLE_KEY,
- HA_READ_KEY_EXACT))
+ if (table->file->ha_index_read_map(table->record[0], user_key,
+ HA_WHOLE_KEY, HA_READ_KEY_EXACT))
{
if (revoke_grant)
{
@@ -2610,9 +2611,9 @@ static int replace_column_table(GRANT_TA
key_copy(user_key, table->record[0], table->key_info,
key_prefix_length);
- if (table->file->index_read_map(table->record[0], user_key,
- (key_part_map)15,
- HA_READ_KEY_EXACT))
+ if (table->file->ha_index_read_map(table->record[0], user_key,
+ (key_part_map)15,
+ HA_READ_KEY_EXACT))
goto end;
/* Scan through all rows with the same host,db,user and table */
@@ -2663,7 +2664,7 @@ static int replace_column_table(GRANT_TA
hash_delete(&g_t->hash_columns,(uchar*) grant_column);
}
}
- } while (!table->file->index_next(table->record[0]) &&
+ } while (!table->file->ha_index_next(table->record[0]) &&
!key_cmp_if_same(table, key, 0, key_prefix_length));
}
@@ -2713,9 +2714,9 @@ static int replace_table_table(THD *thd,
key_copy(user_key, table->record[0], table->key_info,
table->key_info->key_length);
- if (table->file->index_read_idx_map(table->record[0], 0, user_key,
- HA_WHOLE_KEY,
- HA_READ_KEY_EXACT))
+ if (table->file->ha_index_read_idx_map(table->record[0], 0, user_key,
+ HA_WHOLE_KEY,
+ HA_READ_KEY_EXACT))
{
/*
The following should never happen as we first check the in memory
@@ -2840,10 +2841,10 @@ static int replace_routine_table(THD *th
TRUE);
store_record(table,record[1]); // store at pos 1
- if (table->file->index_read_idx_map(table->record[0], 0,
- (uchar*) table->field[0]->ptr,
- HA_WHOLE_KEY,
- HA_READ_KEY_EXACT))
+ if (table->file->ha_index_read_idx_map(table->record[0], 0,
+ (uchar*) table->field[0]->ptr,
+ HA_WHOLE_KEY,
+ HA_READ_KEY_EXACT))
{
/*
The following should never happen as we first check the in memory
@@ -3548,7 +3549,7 @@ static my_bool grant_load_procs_priv(TAB
p_table->file->ha_index_init(0, 1);
p_table->use_all_columns();
- if (!p_table->file->index_first(p_table->record[0]))
+ if (!p_table->file->ha_index_first(p_table->record[0]))
{
memex_ptr= &memex;
my_pthread_setspecific_ptr(THR_MALLOC, &memex_ptr);
@@ -3600,7 +3601,7 @@ static my_bool grant_load_procs_priv(TAB
goto end_unlock;
}
}
- while (!p_table->file->index_next(p_table->record[0]));
+ while (!p_table->file->ha_index_next(p_table->record[0]));
}
/* Return ok */
return_val= 0;
@@ -3650,7 +3651,7 @@ static my_bool grant_load(THD *thd, TABL
t_table->use_all_columns();
c_table->use_all_columns();
- if (!t_table->file->index_first(t_table->record[0]))
+ if (!t_table->file->ha_index_first(t_table->record[0]))
{
memex_ptr= &memex;
my_pthread_setspecific_ptr(THR_MALLOC, &memex_ptr);
@@ -3685,7 +3686,7 @@ static my_bool grant_load(THD *thd, TABL
goto end_unlock;
}
}
- while (!t_table->file->index_next(t_table->record[0]));
+ while (!t_table->file->ha_index_next(t_table->record[0]));
}
return_val=0; // Return ok
@@ -3957,6 +3958,8 @@ err:
{
char command[128];
get_privilege_desc(command, sizeof(command), want_access);
+ status_var_increment(thd->status_var.access_denied_errors);
+
my_error(ER_TABLEACCESS_DENIED_ERROR, MYF(0),
command,
sctx->priv_user,
@@ -5203,9 +5206,9 @@ static int handle_grant_table(TABLE_LIST
table->key_info->key_part[1].store_length);
key_copy(user_key, table->record[0], table->key_info, key_prefix_length);
- if ((error= table->file->index_read_idx_map(table->record[0], 0,
- user_key, (key_part_map)3,
- HA_READ_KEY_EXACT)))
+ if ((error= table->file->ha_index_read_idx_map(table->record[0], 0,
+ user_key, (key_part_map)3,
+ HA_READ_KEY_EXACT)))
{
if (error != HA_ERR_KEY_NOT_FOUND && error != HA_ERR_END_OF_FILE)
{
@@ -5240,7 +5243,7 @@ static int handle_grant_table(TABLE_LIST
DBUG_PRINT("info",("scan table: '%s' search: '%s'@'%s'",
table->s->table_name.str, user_str, host_str));
#endif
- while ((error= table->file->rnd_next(table->record[0])) !=
+ while ((error= table->file->ha_rnd_next(table->record[0])) !=
HA_ERR_END_OF_FILE)
{
if (error)
=== modified file 'sql/sql_base.cc'
--- a/sql/sql_base.cc 2009-09-07 20:50:10 +0000
+++ b/sql/sql_base.cc 2009-10-19 17:14:48 +0000
@@ -1373,6 +1373,12 @@ bool close_thread_table(THD *thd, TABLE
DBUG_PRINT("tcache", ("table: '%s'.'%s' 0x%lx", table->s->db.str,
table->s->table_name.str, (long) table));
+ if (table->file)
+ {
+ table->file->update_global_table_stats();
+ table->file->update_global_index_stats();
+ }
+
*table_ptr=table->next;
/*
When closing a MERGE parent or child table, detach the children first.
@@ -1902,6 +1908,13 @@ void close_temporary(TABLE *table, bool
DBUG_PRINT("tmptable", ("closing table: '%s'.'%s'",
table->s->db.str, table->s->table_name.str));
+ /* in_use is not set for replication temporary tables during shutdown */
+ if (table->in_use)
+ {
+ table->file->update_global_table_stats();
+ table->file->update_global_index_stats();
+ }
+
free_io_cache(table);
closefrm(table, 0);
if (delete_table)
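
close_thread_table() and close_temporary() now flush the handler's per-open counters into the global table and index statistics before the handler goes away. Below is a minimal standalone sketch of that flush-on-close idea, with std::map and std::mutex standing in for the server's HASH and LOCK_global_table_stats; all names are illustrative.

// Standalone sketch only: roll per-open counters into a global map when the
// table is closed.
#include <cstdint>
#include <map>
#include <mutex>
#include <string>

static std::map<std::string, uint64_t> global_table_stats;
static std::mutex global_table_stats_lock;

struct TableHandler
{
  std::string table_name;
  uint64_t rows_read;                        // accumulated while the table is open

  void update_global_table_stats()
  {
    std::lock_guard<std::mutex> guard(global_table_stats_lock);
    global_table_stats[table_name]+= rows_read;
    rows_read= 0;                            // counters are per open instance
  }
};

int main()
{
  TableHandler h{"test.t1", 42};
  h.update_global_table_stats();             // 42 reads now visible globally
  return 0;
}
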
=== modified file 'sql/sql_class.cc'
--- a/sql/sql_class.cc 2009-09-15 10:46:35 +0000
+++ b/sql/sql_class.cc 2009-10-19 17:14:48 +0000
@@ -615,6 +615,7 @@ THD::THD()
mysys_var=0;
binlog_evt_union.do_union= FALSE;
enable_slow_log= 0;
+
#ifndef DBUG_OFF
dbug_sentry=THD_SENTRY_MAGIC;
#endif
@@ -817,7 +818,63 @@ void THD::init(void)
update_charset();
reset_current_stmt_binlog_row_based();
bzero((char *) &status_var, sizeof(status_var));
+ bzero((char *) &org_status_var, sizeof(org_status_var));
sql_log_bin_toplevel= options & OPTION_BIN_LOG;
+ select_commands= update_commands= other_commands= 0;
+ /* Set to handle counting of aborted connections */
+ userstat_running= opt_userstat_running;
+ last_global_update_time= current_connect_time= time(NULL);
+}
+
+
+/* Updates some status variables to be used by update_global_user_stats */
+
+void THD::update_stats(void)
+{
+ /* sql_command == SQLCOM_END in case of parse errors or quit */
+ if (lex->sql_command != SQLCOM_END)
+ {
+ /* The replication thread has the COM_CONNECT command */
+ DBUG_ASSERT(command == COM_QUERY || command == COM_CONNECT);
+
+ /* A SQL query. */
+ if (lex->sql_command == SQLCOM_SELECT)
+ select_commands++;
+ else if (!(sql_command_flags[lex->sql_command] & CF_STATUS_COMMAND))
+ {
+ /* Ignore 'SHOW ' commands */
+ }
+ else if (is_update_query(lex->sql_command))
+ update_commands++;
+ else
+ other_commands++;
+ }
+}
+
+
+void THD::update_all_stats()
+{
+ time_t save_time;
+ ulonglong end_cpu_time, end_utime;
+ double busy_time, cpu_time;
+
+ /* This is set at start of query if opt_userstat_running was set */
+ if (!userstat_running)
+ return;
+
+ end_cpu_time= my_getcputime();
+ end_utime= my_micro_time_and_time(&save_time);
+ busy_time= (end_utime - start_utime) / 1000000.0;
+ cpu_time= (end_cpu_time - start_cpu_time) / 10000000.0;
+ /* In case there are bad values, 2629743 is the #seconds in a month. */
+ if (cpu_time > 2629743.0)
+ cpu_time= 0;
+ status_var_add(status_var.cpu_time, cpu_time);
+ status_var_add(status_var.busy_time, busy_time);
+
+ /* Updates THD stats and the global user stats. */
+ update_stats();
+ update_global_user_stats(this, TRUE, save_time);
}
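
The divisors in update_all_stats() imply that my_micro_time_and_time() reports microseconds while my_getcputime() reports 100-nanosecond ticks; treat that as an assumption read off the constants above. The standalone sketch below just replays the conversion and the sanity cap on cpu_time.

// Standalone sketch only: the unit conversions used in update_all_stats().
#include <cstdint>
#include <cstdio>

int main()
{
  uint64_t start_utime= 0,   end_utime= 2500000;     // microseconds
  uint64_t start_cputime= 0, end_cputime= 12000000;  // assumed 100 ns ticks

  double busy_time= (end_utime - start_utime) / 1000000.0;      // 2.5 s wall clock
  double cpu_time=  (end_cputime - start_cputime) / 10000000.0; // 1.2 s of CPU

  /* Guard against bogus clock values, as the patch does (2629743 s ~ one month). */
  if (cpu_time > 2629743.0)
    cpu_time= 0;

  printf("busy=%.2fs cpu=%.2fs\n", busy_time, cpu_time);
  return 0;
}
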
@@ -984,9 +1041,8 @@ THD::~THD()
from_var from this array
NOTES
- This function assumes that all variables are long/ulong.
- If this assumption will change, then we have to explictely add
- the other variables after the while loop
+ This function assumes that all variables at start are long/ulong and
+ other types are handled explicitly
*/
void add_to_status(STATUS_VAR *to_var, STATUS_VAR *from_var)
@@ -998,6 +1054,13 @@ void add_to_status(STATUS_VAR *to_var, S
while (to != end)
*(to++)+= *(from++);
+
+ /* Handle the not ulong variables. See end of system_status_var */
+ to_var->bytes_received+= from_var->bytes_received;
+ to_var->bytes_sent+= from_var->bytes_sent;
+ to_var->binlog_bytes_written+= from_var->binlog_bytes_written;
+ to_var->cpu_time+= from_var->cpu_time;
+ to_var->busy_time+= from_var->busy_time;
}
/*
@@ -1010,7 +1073,8 @@ void add_to_status(STATUS_VAR *to_var, S
dec_var minus this array
NOTE
- This function assumes that all variables are long/ulong.
+ This function assumes that all variables at start are long/ulong and
+ other types are handled explicitly
*/
void add_diff_to_status(STATUS_VAR *to_var, STATUS_VAR *from_var,
@@ -1023,6 +1087,14 @@ void add_diff_to_status(STATUS_VAR *to_v
while (to != end)
*(to++)+= *(from++) - *(dec++);
+
+ to_var->bytes_received+= (from_var->bytes_received -
+ dec_var->bytes_received);
+ to_var->bytes_sent+= from_var->bytes_sent - dec_var->bytes_sent;
+ to_var->binlog_bytes_written+= (from_var->binlog_bytes_written -
+ dec_var->binlog_bytes_written);
+ to_var->cpu_time+= from_var->cpu_time - dec_var->cpu_time;
+ to_var->busy_time+= from_var->busy_time - dec_var->busy_time;
}
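
Both functions rely on the STATUS_VAR layout convention: the leading block of ulong counters is summed with a pointer walk up to last_system_status_var, and the wider members declared after that marker have to be added by hand, which is exactly what the explicit tail above does. A self-contained sketch of the same convention with a cut-down struct (names are illustrative):

// Standalone sketch only: pointer-walk over the leading ulong block, then
// explicit handling of the wider members after the marker.
#include <cstddef>
#include <cstdio>

struct StatusVar                 // illustrative stand-in for system_status_var
{
  unsigned long questions;
  unsigned long empty_queries;
  unsigned long last_counter;    // plays the role of last_system_status_var
  /* members below are NOT covered by the pointer walk */
  unsigned long long bytes_sent;
  double busy_time;
};

static void add_to_status(StatusVar *to, const StatusVar *from)
{
  unsigned long *t= (unsigned long*) to;
  const unsigned long *f= (const unsigned long*) from;
  unsigned long *end= (unsigned long*) ((char*) to +
                                        offsetof(StatusVar, last_counter) +
                                        sizeof(unsigned long));
  while (t != end)
    *(t++)+= *(f++);
  /* handle the non-ulong tail explicitly */
  to->bytes_sent+= from->bytes_sent;
  to->busy_time+=  from->busy_time;
}

int main()
{
  StatusVar global= {}, thread= {5, 1, 2, 4096, 0.5};
  add_to_status(&global, &thread);
  printf("questions=%lu bytes_sent=%llu busy=%.1f\n",
         global.questions, global.bytes_sent, global.busy_time);
  return 0;
}
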
#define SECONDS_TO_WAIT_FOR_KILL 2
@@ -2773,7 +2845,8 @@ void thd_increment_bytes_sent(ulong leng
{
THD *thd=current_thd;
if (likely(thd != 0))
- { /* current_thd==0 when close_connection() calls net_send_error() */
+ {
+ /* current_thd == 0 when close_connection() calls net_send_error() */
thd->status_var.bytes_sent+= length;
}
}
=== modified file 'sql/sql_class.h'
--- a/sql/sql_class.h 2009-09-15 10:46:35 +0000
+++ b/sql/sql_class.h 2009-10-19 17:14:48 +0000
@@ -415,8 +415,6 @@ struct system_variables
typedef struct system_status_var
{
- ulonglong bytes_received;
- ulonglong bytes_sent;
ulong com_other;
ulong com_stat[(uint) SQLCOM_END];
ulong created_tmp_disk_tables;
@@ -455,6 +453,8 @@ typedef struct system_status_var
ulong select_range_count;
ulong select_range_check_count;
ulong select_scan_count;
+ ulong rows_read;
+ ulong rows_sent;
ulong long_query_count;
ulong filesort_merge_passes;
ulong filesort_range_count;
@@ -472,6 +472,9 @@ typedef struct system_status_var
Number of statements sent from the client
*/
ulong questions;
+ ulong empty_queries;
+ ulong access_denied_errors; /* Can only be 0 or 1 */
+ ulong lost_connections;
/*
IMPORTANT!
SEE last_system_status_var DEFINITION BELOW.
@@ -480,12 +483,16 @@ typedef struct system_status_var
Status variables which it does not make sense to add to
global status variable counter
*/
+ ulonglong bytes_received;
+ ulonglong bytes_sent;
+ ulonglong binlog_bytes_written;
double last_query_cost;
+ double cpu_time, busy_time;
} STATUS_VAR;
/*
This is used for 'SHOW STATUS'. It must be updated to the last ulong
- variable in system_status_var which is makes sens to add to the global
+ variable in system_status_var which it makes sense to add to the global
counter
*/
@@ -1299,6 +1306,7 @@ public:
struct my_rnd_struct rand; // used for authentication
struct system_variables variables; // Changeable local variables
struct system_status_var status_var; // Per thread statistic vars
+ struct system_status_var org_status_var; // For user statistics
struct system_status_var *initial_status_var; /* used by show status */
THR_LOCK_INFO lock_info; // Locking info of this thread
THR_LOCK_OWNER main_lock_id; // To use for conventional queries
@@ -1399,6 +1407,8 @@ public:
uint in_sub_stmt;
/* TRUE when the current top has SQL_LOG_BIN ON */
bool sql_log_bin_toplevel;
+ /* True when opt_userstat_running is set at start of query */
+ bool userstat_running;
/* container for handler's private per-connection data */
Ha_data ha_data[MAX_HA];
@@ -1842,6 +1852,21 @@ public:
*/
LOG_INFO* current_linfo;
NET* slave_net; // network connection from slave -> m.
+
+ /*
+ Used to update global user stats. The global user stats are updated
+ occasionally with the 'diff' variables. After the update, the 'diff'
+ variables are reset to 0.
+ */
+ /* Time when the current thread connected to MySQL. */
+ time_t current_connect_time;
+ /* Last time when THD stats were updated in global_user_stats. */
+ time_t last_global_update_time;
+ /* Number of commands not reflected in global_user_stats yet. */
+ uint select_commands, update_commands, other_commands;
+ ulonglong start_cpu_time;
+ ulonglong start_bytes_received;
+
/* Used by the sys_var class to store temporary values */
union
{
@@ -1902,6 +1927,8 @@ public:
alloc_root.
*/
void init_for_queries();
+ void update_all_stats();
+ void update_stats(void);
void change_user(void);
void cleanup(void);
void cleanup_after_query();
@@ -2319,7 +2346,6 @@ private:
MEM_ROOT main_mem_root;
};
-
/** A short cut for thd->main_da.set_ok_status(). */
inline void
=== modified file 'sql/sql_connect.cc'
--- a/sql/sql_connect.cc 2009-09-07 20:50:10 +0000
+++ b/sql/sql_connect.cc 2009-10-19 17:14:48 +0000
@@ -20,6 +20,13 @@
#include "mysql_priv.h"
+HASH global_user_stats, global_client_stats, global_table_stats;
+HASH global_index_stats;
+/* Protects the above global stats */
+extern pthread_mutex_t LOCK_global_user_client_stats;
+extern pthread_mutex_t LOCK_global_table_stats;
+extern pthread_mutex_t LOCK_global_index_stats;
+
#ifdef HAVE_OPENSSL
/*
Without SSL the handshake consists of one packet. This packet
@@ -459,6 +466,7 @@ check_user(THD *thd, enum enum_server_co
check_for_max_user_connections(thd, thd->user_connect))
{
/* The error is set in check_for_max_user_connections(). */
+ status_var_increment(denied_connections);
DBUG_RETURN(1);
}
@@ -470,6 +478,7 @@ check_user(THD *thd, enum enum_server_co
/* mysql_change_db() has pushed the error message. */
if (thd->user_connect)
decrease_user_connections(thd->user_connect);
+ status_var_increment(thd->status_var.access_denied_errors);
DBUG_RETURN(1);
}
}
@@ -493,6 +502,8 @@ check_user(THD *thd, enum enum_server_co
thd->main_security_ctx.user,
thd->main_security_ctx.host_or_ip,
passwd_len ? ER(ER_YES) : ER(ER_NO));
+ status_var_increment(thd->status_var.access_denied_errors);
+
DBUG_RETURN(1);
#endif /* NO_EMBEDDED_ACCESS_CHECKS */
}
@@ -520,10 +531,14 @@ extern "C" void free_user(struct user_co
void init_max_user_conn(void)
{
#ifndef NO_EMBEDDED_ACCESS_CHECKS
- (void) hash_init(&hash_user_connections,system_charset_info,max_connections,
- 0,0,
- (hash_get_key) get_key_conn, (hash_free_key) free_user,
- 0);
+ if (hash_init(&hash_user_connections,system_charset_info,max_connections,
+ 0,0,
+ (hash_get_key) get_key_conn, (hash_free_key) free_user,
+ 0))
+ {
+ sql_print_error("Initializing hash_user_connections failed.");
+ exit(1);
+ }
#endif
}
@@ -576,6 +591,445 @@ void reset_mqh(LEX_USER *lu, bool get_th
#endif /* NO_EMBEDDED_ACCESS_CHECKS */
}
+/*****************************************************************************
+ Handle users statistics
+*****************************************************************************/
+
+/* 'mysql_system_user' is used when the user is not defined for a THD. */
+static const char mysql_system_user[]= "#mysql_system#";
+
+// Returns 'user' if it's not NULL. Returns 'mysql_system_user' otherwise.
+static const char * get_valid_user_string(char* user)
+{
+ return user ? user : mysql_system_user;
+}
+
+/*
+ Returns the IP string for the client side of the connection represented by
+ 'client'. Does not allocate memory. May return "".
+*/
+
+static const char *get_client_host(THD *client)
+{
+ return client->security_ctx->host_or_ip[0] ?
+ client->security_ctx->host_or_ip :
+ client->security_ctx->host ? client->security_ctx->host : "";
+}
+
+extern "C" uchar *get_key_user_stats(USER_STATS *user_stats, size_t *length,
+ my_bool not_used __attribute__((unused)))
+{
+ *length= user_stats->user_name_length;
+ return (uchar*) user_stats->user;
+}
+
+void free_user_stats(USER_STATS* user_stats)
+{
+ my_free(user_stats, MYF(0));
+}
+
+void init_user_stats(USER_STATS *user_stats,
+ const char *user,
+ size_t user_length,
+ const char *priv_user,
+ uint total_connections,
+ uint concurrent_connections,
+ time_t connected_time,
+ double busy_time,
+ double cpu_time,
+ ulonglong bytes_received,
+ ulonglong bytes_sent,
+ ulonglong binlog_bytes_written,
+ ha_rows rows_sent,
+ ha_rows rows_read,
+ ha_rows rows_inserted,
+ ha_rows rows_deleted,
+ ha_rows rows_updated,
+ ulonglong select_commands,
+ ulonglong update_commands,
+ ulonglong other_commands,
+ ulonglong commit_trans,
+ ulonglong rollback_trans,
+ ulonglong denied_connections,
+ ulonglong lost_connections,
+ ulonglong access_denied_errors,
+ ulonglong empty_queries)
+{
+ DBUG_ENTER("init_user_stats");
+ DBUG_PRINT("enter", ("user: %s priv_user: %s", user, priv_user));
+
+ user_length= min(user_length, sizeof(user_stats->user)-1);
+ memcpy(user_stats->user, user, user_length);
+ user_stats->user[user_length]= 0;
+ user_stats->user_name_length= user_length;
+ strmake(user_stats->priv_user, priv_user, sizeof(user_stats->priv_user)-1);
+
+ user_stats->total_connections= total_connections;
+ user_stats->concurrent_connections= concurrent_connections;
+ user_stats->connected_time= connected_time;
+ user_stats->busy_time= busy_time;
+ user_stats->cpu_time= cpu_time;
+ user_stats->bytes_received= bytes_received;
+ user_stats->bytes_sent= bytes_sent;
+ user_stats->binlog_bytes_written= binlog_bytes_written;
+ user_stats->rows_sent= rows_sent;
+ user_stats->rows_updated= rows_updated;
+ user_stats->rows_read= rows_read;
+ user_stats->select_commands= select_commands;
+ user_stats->update_commands= update_commands;
+ user_stats->other_commands= other_commands;
+ user_stats->commit_trans= commit_trans;
+ user_stats->rollback_trans= rollback_trans;
+ user_stats->denied_connections= denied_connections;
+ user_stats->lost_connections= lost_connections;
+ user_stats->access_denied_errors= access_denied_errors;
+ user_stats->empty_queries= empty_queries;
+ DBUG_VOID_RETURN;
+}
+
+
+#ifdef COMPLEAT_PATCH_NOT_ADDED_YET
+
+void add_user_stats(USER_STATS *user_stats,
+ uint total_connections,
+ uint concurrent_connections,
+ time_t connected_time,
+ double busy_time,
+ double cpu_time,
+ ulonglong bytes_received,
+ ulonglong bytes_sent,
+ ulonglong binlog_bytes_written,
+ ha_rows rows_sent,
+ ha_rows rows_read,
+ ha_rows rows_inserted,
+ ha_rows rows_deleted,
+ ha_rows rows_updated,
+ ulonglong select_commands,
+ ulonglong update_commands,
+ ulonglong other_commands,
+ ulonglong commit_trans,
+ ulonglong rollback_trans,
+ ulonglong denied_connections,
+ ulonglong lost_connections,
+ ulonglong access_denied_errors,
+ ulonglong empty_queries)
+{
+ user_stats->total_connections+= total_connections;
+ user_stats->concurrent_connections+= concurrent_connections;
+ user_stats->connected_time+= connected_time;
+ user_stats->busy_time+= busy_time;
+ user_stats->cpu_time+= cpu_time;
+ user_stats->bytes_received+= bytes_received;
+ user_stats->bytes_sent+= bytes_sent;
+ user_stats->binlog_bytes_written+= binlog_bytes_written;
+ user_stats->rows_sent+= rows_sent;
+ user_stats->rows_inserted+= rows_inserted;
+ user_stats->rows_deleted+= rows_deleted;
+ user_stats->rows_updated+= rows_updated;
+ user_stats->rows_read+= rows_read;
+ user_stats->select_commands+= select_commands;
+ user_stats->update_commands+= update_commands;
+ user_stats->other_commands+= other_commands;
+ user_stats->commit_trans+= commit_trans;
+ user_stats->rollback_trans+= rollback_trans;
+ user_stats->denied_connections+= denied_connections;
+ user_stats->lost_connections+= lost_connections;
+ user_stats->access_denied_errors+= access_denied_errors;
+ user_stats->empty_queries+= empty_queries;
+}
+#endif
+
+
+void init_global_user_stats(void)
+{
+ if (hash_init(&global_user_stats, system_charset_info, max_connections,
+ 0, 0, (hash_get_key) get_key_user_stats,
+ (hash_free_key)free_user_stats, 0))
+ {
+ sql_print_error("Initializing global_user_stats failed.");
+ exit(1);
+ }
+}
+
+void init_global_client_stats(void)
+{
+ if (hash_init(&global_client_stats, system_charset_info, max_connections,
+ 0, 0, (hash_get_key) get_key_user_stats,
+ (hash_free_key)free_user_stats, 0))
+ {
+ sql_print_error("Initializing global_client_stats failed.");
+ exit(1);
+ }
+}
+
+extern "C" uchar *get_key_table_stats(TABLE_STATS *table_stats, size_t *length,
+ my_bool not_used __attribute__((unused)))
+{
+ *length= table_stats->table_name_length;
+ return (uchar*) table_stats->table;
+}
+
+extern "C" void free_table_stats(TABLE_STATS* table_stats)
+{
+ my_free(table_stats, MYF(0));
+}
+
+void init_global_table_stats(void)
+{
+ if (hash_init(&global_table_stats, system_charset_info, max_connections,
+ 0, 0, (hash_get_key) get_key_table_stats,
+ (hash_free_key)free_table_stats, 0)) {
+ sql_print_error("Initializing global_table_stats failed.");
+ exit(1);
+ }
+}
+
+extern "C" uchar *get_key_index_stats(INDEX_STATS *index_stats, size_t *length,
+ my_bool not_used __attribute__((unused)))
+{
+ *length= index_stats->index_name_length;
+ return (uchar*) index_stats->index;
+}
+
+extern "C" void free_index_stats(INDEX_STATS* index_stats)
+{
+ my_free(index_stats, MYF(0));
+}
+
+void init_global_index_stats(void)
+{
+ if (hash_init(&global_index_stats, system_charset_info, max_connections,
+ 0, 0, (hash_get_key) get_key_index_stats,
+ (hash_free_key)free_index_stats, 0))
+ {
+ sql_print_error("Initializing global_index_stats failed.");
+ exit(1);
+ }
+}
+
+
+void free_global_user_stats(void)
+{
+ hash_free(&global_user_stats);
+}
+
+void free_global_table_stats(void)
+{
+ hash_free(&global_table_stats);
+}
+
+void free_global_index_stats(void)
+{
+ hash_free(&global_index_stats);
+}
+
+void free_global_client_stats(void)
+{
+ hash_free(&global_client_stats);
+}
+
+/*
+ Increments the global stats connection count for an entry from
+ global_client_stats or global_user_stats. Returns 0 on success
+ and 1 on error.
+*/
+
+static bool increment_count_by_name(const char *name, size_t name_length,
+ const char *role_name,
+ HASH *users_or_clients, THD *thd)
+{
+ USER_STATS *user_stats;
+
+ if (!(user_stats= (USER_STATS*) hash_search(users_or_clients, (uchar*) name,
+ name_length)))
+ {
+ /* First connection for this user or client */
+ if (!(user_stats= ((USER_STATS*)
+ my_malloc(sizeof(USER_STATS),
+ MYF(MY_WME | MY_ZEROFILL)))))
+ return TRUE; // Out of memory
+
+ init_user_stats(user_stats, name, name_length, role_name,
+ 0, 0, // connections
+ 0, 0, 0, // time
+ 0, 0, 0, // bytes sent, received and written
+ 0, 0, // Rows sent and read
+ 0, 0, 0, // rows inserted, deleted and updated
+ 0, 0, 0, // select, update and other commands
+ 0, 0, // commit and rollback trans
+ thd->status_var.access_denied_errors,
+ 0, // lost connections
+ 0, // access denied errors
+ 0); // empty queries
+
+ if (my_hash_insert(users_or_clients, (uchar*)user_stats))
+ {
+ my_free(user_stats, 0);
+ return TRUE; // Out of memory
+ }
+ }
+ user_stats->total_connections++;
+ return FALSE;
+}
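+
The function above is a find-or-create on the user/client hash: look the key up, allocate a zero-filled entry on a miss, insert it, then bump total_connections. A minimal standalone sketch of the same pattern with a standard container standing in for the server's HASH (names are illustrative, and the real code must also handle out-of-memory):

// Standalone sketch only: find-or-create, then increment.
#include <cstdio>
#include <string>
#include <unordered_map>

struct UserStats { unsigned long total_connections= 0; };

static std::unordered_map<std::string, UserStats> global_user_stats;

// returns false on success, mirroring increment_count_by_name()
static bool increment_count_by_name(const std::string &name)
{
  // operator[] inserts a default-constructed entry on a miss, playing the
  // role of my_malloc(MY_ZEROFILL) + my_hash_insert()
  UserStats &stats= global_user_stats[name];
  stats.total_connections++;
  return false;
}

int main()
{
  increment_count_by_name("root");
  increment_count_by_name("root");
  printf("root connections: %lu\n",
         global_user_stats["root"].total_connections);   // prints 2
  return 0;
}
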
+
+
+/*
+ Increments the global user and client stats connection count.
+
+ @param use_lock if true, LOCK_global_user_client_stats will be locked
+
+ @retval 0 ok
+ @retval 1 error.
+*/
+
+#ifndef EMBEDDED_LIBRARY
+static bool increment_connection_count(THD* thd, bool use_lock)
+{
+ const char *user_string= get_valid_user_string(thd->main_security_ctx.user);
+ const char *client_string= get_client_host(thd);
+ bool return_value= FALSE;
+
+ if (!thd->userstat_running)
+ return FALSE;
+
+ if (use_lock)
+ pthread_mutex_lock(&LOCK_global_user_client_stats);
+
+ if (increment_count_by_name(user_string, strlen(user_string), user_string,
+ &global_user_stats, thd))
+ {
+ return_value= TRUE;
+ goto end;
+ }
+ if (increment_count_by_name(client_string, strlen(client_string),
+ user_string, &global_client_stats, thd))
+ {
+ return_value= TRUE;
+ goto end;
+ }
+
+end:
+ if (use_lock)
+ pthread_mutex_unlock(&LOCK_global_user_client_stats);
+ return return_value;
+}
+#endif
+
+/*
+ Used to update the global user and client stats
+*/
+
+static void update_global_user_stats_with_user(THD *thd,
+ USER_STATS *user_stats,
+ time_t now)
+{
+ DBUG_ASSERT(thd->userstat_running);
+
+ user_stats->connected_time+= now - thd->last_global_update_time;
+ user_stats->busy_time+= (thd->status_var.busy_time -
+ thd->org_status_var.busy_time);
+ user_stats->cpu_time+= (thd->status_var.cpu_time -
+ thd->org_status_var.cpu_time);
+ /*
+ This is handled specially as bytes_received is incremented BEFORE
+ org_status_var is copied.
+ */
+ user_stats->bytes_received+= (thd->org_status_var.bytes_received-
+ thd->start_bytes_received);
+ user_stats->bytes_sent+= (thd->status_var.bytes_sent -
+ thd->org_status_var.bytes_sent);
+ user_stats->binlog_bytes_written+=
+ (thd->status_var.binlog_bytes_written -
+ thd->org_status_var.binlog_bytes_written);
+ user_stats->rows_read+= (thd->status_var.rows_read -
+ thd->org_status_var.rows_read);
+ user_stats->rows_sent+= (thd->status_var.rows_sent -
+ thd->org_status_var.rows_sent);
+ user_stats->rows_inserted+= (thd->status_var.ha_write_count -
+ thd->org_status_var.ha_write_count);
+ user_stats->rows_deleted+= (thd->status_var.ha_delete_count -
+ thd->org_status_var.ha_delete_count);
+ user_stats->rows_updated+= (thd->status_var.ha_update_count -
+ thd->org_status_var.ha_update_count);
+ user_stats->select_commands+= thd->select_commands;
+ user_stats->update_commands+= thd->update_commands;
+ user_stats->other_commands+= thd->other_commands;
+ user_stats->commit_trans+= (thd->status_var.ha_commit_count -
+ thd->org_status_var.ha_commit_count);
+ user_stats->rollback_trans+= (thd->status_var.ha_rollback_count +
+ thd->status_var.ha_savepoint_rollback_count -
+ thd->org_status_var.ha_rollback_count -
+ thd->org_status_var.
+ ha_savepoint_rollback_count);
+ user_stats->access_denied_errors+=
+ (thd->status_var.access_denied_errors -
+ thd->org_status_var.access_denied_errors);
+ user_stats->empty_queries+= (thd->status_var.empty_queries -
+ thd->org_status_var.empty_queries);
+
+ /* The following can only be 0 or 1; after that the connection ends */
+ user_stats->denied_connections+= thd->status_var.access_denied_errors;
+ user_stats->lost_connections+= thd->status_var.lost_connections;
+}
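+
Every field here is advanced by the difference between the current counter and the org_status_var snapshot taken at the start of the statement; bytes_received is the one exception, since the incoming packet is counted before the snapshot is copied, so it is diffed against start_bytes_received saved in do_command(). A cut-down standalone sketch of that delta arithmetic (field set and names trimmed for illustration):

// Standalone sketch only: accumulate (current - snapshot) into per-user totals.
#include <cstdint>
#include <cstdio>

struct StatusVar { uint64_t rows_read; uint64_t bytes_received; };
struct UserStats { uint64_t rows_read; uint64_t bytes_received; };

int main()
{
  StatusVar current=  {1200, 9000};      // thd->status_var at end of statement
  StatusVar snapshot= {1000, 8500};      // thd->org_status_var, copied at statement start
  uint64_t  start_bytes_received= 8400;  // taken even earlier, in do_command()

  UserStats user= {0, 0};
  user.rows_read+= current.rows_read - snapshot.rows_read;               // +200
  /* bytes_received is diffed against the do_command() value because the
     incoming packet was already counted before org_status_var was copied */
  user.bytes_received+= snapshot.bytes_received - start_bytes_received;  // +100

  printf("rows_read=+%llu bytes_received=+%llu\n",
         (unsigned long long) user.rows_read,
         (unsigned long long) user.bytes_received);
  return 0;
}
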
+
+
+/* Updates the global stats of a user or client */
+void update_global_user_stats(THD *thd, bool create_user, time_t now)
+{
+ const char *user_string, *client_string;
+ USER_STATS *user_stats;
+ size_t user_string_length, client_string_length;
+ DBUG_ASSERT(thd->userstat_running);
+
+ user_string= get_valid_user_string(thd->main_security_ctx.user);
+ user_string_length= strlen(user_string);
+ client_string= get_client_host(thd);
+ client_string_length= strlen(client_string);
+
+ pthread_mutex_lock(&LOCK_global_user_client_stats);
+
+ // Update by user name
+ if ((user_stats= (USER_STATS*) hash_search(&global_user_stats,
+ (uchar*) user_string,
+ user_string_length)))
+ {
+ /* Found user. */
+ update_global_user_stats_with_user(thd, user_stats, now);
+ }
+ else
+ {
+ /* Create the entry */
+ if (create_user)
+ {
+ increment_count_by_name(user_string, user_string_length, user_string,
+ &global_user_stats, thd);
+ }
+ }
+
+ /* Update by client IP */
+ if ((user_stats= (USER_STATS*)hash_search(&global_client_stats,
+ (uchar*) client_string,
+ client_string_length)))
+ {
+ // Found by client IP
+ update_global_user_stats_with_user(thd, user_stats, now);
+ }
+ else
+ {
+ // Create the entry
+ if (create_user)
+ {
+ increment_count_by_name(client_string, client_string_length,
+ user_string, &global_client_stats, thd);
+ }
+ }
+ /* Reset variables only used for counting */
+ thd->select_commands= thd->update_commands= thd->other_commands= 0;
+ thd->last_global_update_time= now;
+
+ pthread_mutex_unlock(&LOCK_global_user_client_stats);
+}
+
void thd_init_client_charset(THD *thd, uint cs_number)
{
@@ -970,6 +1424,14 @@ bool login_connection(THD *thd)
/* Connect completed, set read/write timeouts back to default */
my_net_set_read_timeout(net, thd->variables.net_read_timeout);
my_net_set_write_timeout(net, thd->variables.net_write_timeout);
+
+ /* Updates global user connection stats. */
+ if (increment_connection_count(thd, TRUE))
+ {
+ net_send_error(thd, ER_OUTOFMEMORY); // Out of memory
+ DBUG_RETURN(1);
+ }
+
DBUG_RETURN(0);
}
@@ -991,6 +1453,7 @@ void end_connection(THD *thd)
if (thd->killed || (net->error && net->vio != 0))
{
statistic_increment(aborted_threads,&LOCK_status);
+ status_var_increment(thd->status_var.lost_connections);
}
if (net->error && net->vio != 0)
@@ -1117,10 +1580,14 @@ pthread_handler_t handle_one_connection(
for (;;)
{
NET *net= &thd->net;
+ bool create_user= TRUE;
lex_start(thd);
if (login_connection(thd))
+ {
+ create_user= FALSE;
goto end_thread;
+ }
prepare_new_connection_state(thd);
@@ -1134,12 +1601,14 @@ pthread_handler_t handle_one_connection(
end_thread:
close_connection(thd, 0, 1);
+ if (thd->userstat_running)
+ update_global_user_stats(thd, create_user, time(NULL));
+
if (thd->scheduler->end_thread(thd,1))
return 0; // Probably no-threads
/*
- If end_thread() returns, we are either running with
- thread-handler=no-threads or this thread has been schedule to
+ If end_thread() returns, this thread has been scheduled to
handle the next connection.
*/
thd= current_thd;
=== modified file 'sql/sql_cursor.cc'
--- a/sql/sql_cursor.cc 2008-12-10 14:16:21 +0000
+++ b/sql/sql_cursor.cc 2009-10-19 17:14:48 +0000
@@ -655,7 +655,7 @@ void Materialized_cursor::fetch(ulong nu
result->begin_dataset();
for (fetch_limit+= num_rows; fetch_count < fetch_limit; fetch_count++)
{
- if ((res= table->file->rnd_next(table->record[0])))
+ if ((res= table->file->ha_rnd_next(table->record[0])))
break;
/* Send data only if the read was successful. */
result->send_data(item_list);
=== modified file 'sql/sql_handler.cc'
--- a/sql/sql_handler.cc 2009-07-15 23:23:57 +0000
+++ b/sql/sql_handler.cc 2009-10-19 17:14:48 +0000
@@ -565,8 +565,8 @@ retry:
if (table->file->inited != handler::NONE)
{
error=keyname ?
- table->file->index_next(table->record[0]) :
- table->file->rnd_next(table->record[0]);
+ table->file->ha_index_next(table->record[0]) :
+ table->file->ha_rnd_next(table->record[0]);
break;
}
/* else fall through */
@@ -575,13 +575,13 @@ retry:
{
table->file->ha_index_or_rnd_end();
table->file->ha_index_init(keyno, 1);
- error= table->file->index_first(table->record[0]);
+ error= table->file->ha_index_first(table->record[0]);
}
else
{
table->file->ha_index_or_rnd_end();
if (!(error= table->file->ha_rnd_init(1)))
- error= table->file->rnd_next(table->record[0]);
+ error= table->file->ha_rnd_next(table->record[0]);
}
mode=RNEXT;
break;
@@ -589,7 +589,7 @@ retry:
DBUG_ASSERT(keyname != 0);
if (table->file->inited != handler::NONE)
{
- error=table->file->index_prev(table->record[0]);
+ error=table->file->ha_index_prev(table->record[0]);
break;
}
/* else fall through */
@@ -597,13 +597,13 @@ retry:
DBUG_ASSERT(keyname != 0);
table->file->ha_index_or_rnd_end();
table->file->ha_index_init(keyno, 1);
- error= table->file->index_last(table->record[0]);
+ error= table->file->ha_index_last(table->record[0]);
mode=RPREV;
break;
case RNEXT_SAME:
/* Continue scan on "(keypart1,keypart2,...)=(c1, c2, ...) */
DBUG_ASSERT(keyname != 0);
- error= table->file->index_next_same(table->record[0], key, key_len);
+ error= table->file->ha_index_next_same(table->record[0], key, key_len);
break;
case RKEY:
{
@@ -643,8 +643,8 @@ retry:
table->file->ha_index_or_rnd_end();
table->file->ha_index_init(keyno, 1);
key_copy(key, table->record[0], table->key_info + keyno, key_len);
- error= table->file->index_read_map(table->record[0],
- key, keypart_map, ha_rkey_mode);
+ error= table->file->ha_index_read_map(table->record[0],
+ key, keypart_map, ha_rkey_mode);
mode=rkey_to_rnext[(int)ha_rkey_mode];
break;
}
=== modified file 'sql/sql_help.cc'
--- a/sql/sql_help.cc 2009-09-07 20:50:10 +0000
+++ b/sql/sql_help.cc 2009-10-19 17:14:48 +0000
@@ -294,13 +294,13 @@ int get_topics_for_keyword(THD *thd, TAB
rkey_id->store((longlong) key_id, TRUE);
rkey_id->get_key_image(buff, rkey_id->pack_length(), Field::itRAW);
- int key_res= relations->file->index_read_map(relations->record[0],
- buff, (key_part_map) 1,
- HA_READ_KEY_EXACT);
+ int key_res= relations->file->ha_index_read_map(relations->record[0],
+ buff, (key_part_map) 1,
+ HA_READ_KEY_EXACT);
for ( ;
!key_res && key_id == (int16) rkey_id->val_int() ;
- key_res= relations->file->index_next(relations->record[0]))
+ key_res= relations->file->ha_index_next(relations->record[0]))
{
uchar topic_id_buff[8];
longlong topic_id= rtopic_id->val_int();
@@ -308,8 +308,8 @@ int get_topics_for_keyword(THD *thd, TAB
field->store((longlong) topic_id, TRUE);
field->get_key_image(topic_id_buff, field->pack_length(), Field::itRAW);
- if (!topics->file->index_read_map(topics->record[0], topic_id_buff,
- (key_part_map)1, HA_READ_KEY_EXACT))
+ if (!topics->file->ha_index_read_map(topics->record[0], topic_id_buff,
+ (key_part_map)1, HA_READ_KEY_EXACT))
{
memorize_variant_topic(thd,topics,count,find_fields,
names,name,description,example);
=== modified file 'sql/sql_insert.cc'
--- a/sql/sql_insert.cc 2009-09-07 20:50:10 +0000
+++ b/sql/sql_insert.cc 2009-10-19 17:14:48 +0000
@@ -1425,7 +1425,7 @@ int write_record(THD *thd, TABLE *table,
goto err;
if (table->file->ha_table_flags() & HA_DUPLICATE_POS)
{
- if (table->file->rnd_pos(table->record[1],table->file->dup_ref))
+ if (table->file->ha_rnd_pos(table->record[1],table->file->dup_ref))
goto err;
}
else
@@ -1446,9 +1446,10 @@ int write_record(THD *thd, TABLE *table,
}
}
key_copy((uchar*) key,table->record[0],table->key_info+key_nr,0);
- if ((error=(table->file->index_read_idx_map(table->record[1],key_nr,
- (uchar*) key, HA_WHOLE_KEY,
- HA_READ_KEY_EXACT))))
+ if ((error= (table->file->ha_index_read_idx_map(table->record[1],
+ key_nr, (uchar*) key,
+ HA_WHOLE_KEY,
+ HA_READ_KEY_EXACT))))
goto err;
}
if (info->handle_duplicates == DUP_UPDATE)
=== modified file 'sql/sql_lex.h'
--- a/sql/sql_lex.h 2009-09-15 10:46:35 +0000
+++ b/sql/sql_lex.h 2009-10-19 17:14:48 +0000
@@ -118,6 +118,8 @@ enum enum_sql_command {
SQLCOM_SHOW_CREATE_TRIGGER,
SQLCOM_ALTER_DB_UPGRADE,
SQLCOM_SHOW_PROFILE, SQLCOM_SHOW_PROFILES,
+ SQLCOM_SHOW_USER_STATS, SQLCOM_SHOW_TABLE_STATS, SQLCOM_SHOW_INDEX_STATS,
+ SQLCOM_SHOW_CLIENT_STATS,
/*
When a command is added here, be sure it's also added in mysqld.cc
=== modified file 'sql/sql_parse.cc'
--- a/sql/sql_parse.cc 2009-09-15 10:46:35 +0000
+++ b/sql/sql_parse.cc 2009-10-19 17:14:48 +0000
@@ -331,10 +331,14 @@ void init_update_queries(void)
sql_command_flags[SQLCOM_SHOW_CREATE_EVENT]= CF_STATUS_COMMAND;
sql_command_flags[SQLCOM_SHOW_PROFILES]= CF_STATUS_COMMAND;
sql_command_flags[SQLCOM_SHOW_PROFILE]= CF_STATUS_COMMAND;
+ sql_command_flags[SQLCOM_SHOW_CLIENT_STATS]= CF_STATUS_COMMAND;
+ sql_command_flags[SQLCOM_SHOW_USER_STATS]= CF_STATUS_COMMAND;
+ sql_command_flags[SQLCOM_SHOW_TABLE_STATS]= CF_STATUS_COMMAND;
+ sql_command_flags[SQLCOM_SHOW_INDEX_STATS]= CF_STATUS_COMMAND;
- sql_command_flags[SQLCOM_SHOW_TABLES]= (CF_STATUS_COMMAND |
- CF_SHOW_TABLE_COMMAND |
- CF_REEXECUTION_FRAGILE);
+ sql_command_flags[SQLCOM_SHOW_TABLES]= (CF_STATUS_COMMAND |
+ CF_SHOW_TABLE_COMMAND |
+ CF_REEXECUTION_FRAGILE);
sql_command_flags[SQLCOM_SHOW_TABLE_STATUS]= (CF_STATUS_COMMAND |
CF_SHOW_TABLE_COMMAND |
CF_REEXECUTION_FRAGILE);
@@ -549,7 +553,6 @@ end:
DBUG_RETURN(0);
}
-
/**
@brief Check access privs for a MERGE table and fix children lock types.
@@ -801,6 +804,8 @@ bool do_command(THD *thd)
net_new_transaction(net);
+ /* Save for user statistics */
+ thd->start_bytes_received= thd->status_var.bytes_received;
packet_length= my_net_read(net);
#if defined(ENABLED_PROFILING) && defined(COMMUNITY_SERVER)
thd->profiling.start_new_query();
@@ -1324,7 +1329,7 @@ bool dispatch_command(enum enum_server_c
table_list.select_lex= &(thd->lex->select_lex);
lex_start(thd);
- mysql_reset_thd_for_next_command(thd);
+ mysql_reset_thd_for_next_command(thd, opt_userstat_running);
thd->lex->
select_lex.table_list.link_in_list((uchar*) &table_list,
@@ -1609,6 +1614,9 @@ bool dispatch_command(enum enum_server_c
/* Free tables */
close_thread_tables(thd);
+ /* Update status; Must be done after close_thread_tables */
+ thd->update_all_stats();
+
log_slow_statement(thd);
thd_proc_info(thd, "cleaning up");
@@ -1777,6 +1785,12 @@ int prepare_schema_table(THD *thd, LEX *
thd->profiling.discard_current_query();
#endif
break;
+ case SCH_USER_STATS:
+ case SCH_CLIENT_STATS:
+ if (check_global_access(thd, SUPER_ACL | PROCESS_ACL))
+ DBUG_RETURN(1);
+ case SCH_TABLE_STATS:
+ case SCH_INDEX_STATS:
case SCH_OPEN_TABLES:
case SCH_VARIABLES:
case SCH_STATUS:
@@ -2218,6 +2232,10 @@ mysql_execute_command(THD *thd)
case SQLCOM_SHOW_COLLATIONS:
case SQLCOM_SHOW_STORAGE_ENGINES:
case SQLCOM_SHOW_PROFILE:
+ case SQLCOM_SHOW_CLIENT_STATS:
+ case SQLCOM_SHOW_USER_STATS:
+ case SQLCOM_SHOW_TABLE_STATS:
+ case SQLCOM_SHOW_INDEX_STATS:
case SQLCOM_SELECT:
thd->status_var.last_query_cost= 0.0;
if (all_tables)
@@ -5059,6 +5077,10 @@ static bool execute_sqlcom_select(THD *t
delete result;
}
}
+ /* Count number of empty select queries */
+ if (!thd->sent_row_count)
+ status_var_increment(thd->status_var.empty_queries);
+ status_var_add(thd->status_var.rows_sent, thd->sent_row_count);
return res;
}
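
A SELECT that sends no rows now counts as an empty query, and sent_row_count feeds Rows_sent. The trivial standalone sketch below restates that bookkeeping with simplified names:

// Standalone sketch only: Empty_queries / Rows_sent accounting.
#include <cstdio>

struct Thd { unsigned long sent_row_count; unsigned long empty_queries; unsigned long rows_sent; };

static void after_select(Thd *thd)
{
  if (!thd->sent_row_count)       // SELECT produced no rows
    thd->empty_queries++;
  thd->rows_sent+= thd->sent_row_count;
}

int main()
{
  Thd thd= {0, 0, 0};
  after_select(&thd);             // an empty SELECT
  thd.sent_row_count= 3;
  after_select(&thd);             // a SELECT returning three rows
  printf("empty=%lu rows_sent=%lu\n", thd.empty_queries, thd.rows_sent);  // empty=1 rows_sent=3
  return 0;
}
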
@@ -5220,6 +5242,7 @@ check_access(THD *thd, ulong want_access
if (!no_errors)
{
const char *db_name= db ? db : thd->db;
+ status_var_increment(thd->status_var.access_denied_errors);
my_error(ER_DBACCESS_DENIED_ERROR, MYF(0),
sctx->priv_user, sctx->priv_host, db_name);
}
@@ -5252,12 +5275,15 @@ check_access(THD *thd, ulong want_access
{ // We can never grant this
DBUG_PRINT("error",("No possible access"));
if (!no_errors)
+ {
+ status_var_increment(thd->status_var.access_denied_errors);
my_error(ER_ACCESS_DENIED_ERROR, MYF(0),
sctx->priv_user,
sctx->priv_host,
(thd->password ?
ER(ER_YES) :
ER(ER_NO))); /* purecov: tested */
+ }
DBUG_RETURN(TRUE); /* purecov: tested */
}
@@ -5283,11 +5309,14 @@ check_access(THD *thd, ulong want_access
DBUG_PRINT("error",("Access denied"));
if (!no_errors)
+ {
+ status_var_increment(thd->status_var.access_denied_errors);
my_error(ER_DBACCESS_DENIED_ERROR, MYF(0),
sctx->priv_user, sctx->priv_host,
(db ? db : (thd->db ?
thd->db :
"unknown"))); /* purecov: tested */
+ }
DBUG_RETURN(TRUE); /* purecov: tested */
}
@@ -5316,6 +5345,7 @@ static bool check_show_access(THD *thd,
if (!thd->col_access && check_grant_db(thd, dst_db_name))
{
+ status_var_increment(thd->status_var.access_denied_errors);
my_error(ER_DBACCESS_DENIED_ERROR, MYF(0),
thd->security_ctx->priv_user,
thd->security_ctx->priv_host,
@@ -5378,14 +5408,14 @@ check_table_access(THD *thd, ulong want_
{
TABLE_LIST *org_tables= tables;
TABLE_LIST *first_not_own_table= thd->lex->first_not_own_table();
- uint i= 0;
Security_context *sctx= thd->security_ctx, *backup_ctx= thd->security_ctx;
+ uint i;
/*
The check that first_not_own_table is not reached is for the case when
the given table list refers to the list for prelocking (contains tables
of other queries). For simple queries first_not_own_table is 0.
*/
- for (; i < number && tables != first_not_own_table;
+ for (i=0; i < number && tables != first_not_own_table;
tables= tables->next_global, i++)
{
if (tables->security_ctx)
@@ -5397,9 +5427,12 @@ check_table_access(THD *thd, ulong want_
(want_access & ~(SELECT_ACL | EXTRA_ACL | FILE_ACL)))
{
if (!no_errors)
+ {
+ status_var_increment(thd->status_var.access_denied_errors);
my_error(ER_DBACCESS_DENIED_ERROR, MYF(0),
sctx->priv_user, sctx->priv_host,
INFORMATION_SCHEMA_NAME.str);
+ }
return TRUE;
}
/*
@@ -5563,6 +5596,7 @@ bool check_global_access(THD *thd, ulong
return 0;
get_privilege_desc(command, sizeof(command), want_access);
my_error(ER_SPECIFIC_ACCESS_DENIED_ERROR, MYF(0), command);
+ status_var_increment(thd->status_var.access_denied_errors);
return 1;
#else
return 0;
@@ -5666,7 +5700,7 @@ bool my_yyoverflow(short **yyss, YYSTYPE
Call it after we use THD for queries, not before.
*/
-void mysql_reset_thd_for_next_command(THD *thd)
+void mysql_reset_thd_for_next_command(THD *thd, my_bool calculate_userstat)
{
DBUG_ENTER("mysql_reset_thd_for_next_command");
DBUG_ASSERT(!thd->spcont); /* not for substatements of routines */
@@ -5711,6 +5745,15 @@ void mysql_reset_thd_for_next_command(TH
thd->total_warn_count=0; // Warnings for this query
thd->rand_used= 0;
thd->sent_row_count= thd->examined_row_count= 0;
+
+ /* Copy data for user stats */
+ if ((thd->userstat_running= calculate_userstat))
+ {
+ thd->start_cpu_time= my_getcputime();
+ memcpy(&thd->org_status_var, &thd->status_var, sizeof(thd->status_var));
+ thd->select_commands= thd->update_commands= thd->other_commands= 0;
+ }
+
thd->query_plan_flags= QPLAN_INIT;
thd->query_plan_fsort_passes= 0;
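
This is the other half of the snapshot/delta scheme: at the start of each statement the global userstat switch is latched into thd->userstat_running, the CPU clock is read and status_var is copied into org_status_var, so the end-of-statement code can compute per-statement deltas. A simplified standalone sketch of that lifecycle (one counter, illustrative names):

// Standalone sketch only: snapshot at reset time, delta at end of statement.
#include <cstdio>
#include <cstring>

struct StatusVar { unsigned long rows_sent; };

struct Thd
{
  bool userstat_running= false;
  StatusVar status_var= {0};
  StatusVar org_status_var= {0};
};

static bool opt_userstat_running= true;   // the global 'userstat' switch

static void reset_thd_for_next_command(Thd *thd)
{
  if ((thd->userstat_running= opt_userstat_running))
    memcpy(&thd->org_status_var, &thd->status_var, sizeof(thd->status_var));
}

static void update_all_stats(Thd *thd)
{
  if (!thd->userstat_running)             // cheap exit when userstat is off
    return;
  unsigned long delta= thd->status_var.rows_sent - thd->org_status_var.rows_sent;
  printf("rows sent by this statement: %lu\n", delta);
}

int main()
{
  Thd thd;
  reset_thd_for_next_command(&thd);
  thd.status_var.rows_sent+= 42;          // work done while executing the query
  update_all_stats(&thd);                 // prints 42
  return 0;
}
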
@@ -5909,7 +5952,6 @@ void mysql_parse(THD *thd, const char *i
const char ** found_semicolon)
{
DBUG_ENTER("mysql_parse");
-
DBUG_EXECUTE_IF("parser_debug", turn_parser_debug_on(););
/*
@@ -5929,7 +5971,7 @@ void mysql_parse(THD *thd, const char *i
FIXME: cleanup the dependencies in the code to simplify this.
*/
lex_start(thd);
- mysql_reset_thd_for_next_command(thd);
+ mysql_reset_thd_for_next_command(thd, opt_userstat_running);
if (query_cache_send_result_to_client(thd, (char*) inBuf, length) <= 0)
{
@@ -6001,10 +6043,11 @@ void mysql_parse(THD *thd, const char *i
}
else
{
+ /* Update statistics for getting the query from the cache */
+ thd->lex->sql_command= SQLCOM_SELECT;
/* There are no multi queries in the cache. */
*found_semicolon= NULL;
}
-
DBUG_VOID_RETURN;
}
@@ -6028,7 +6071,7 @@ bool mysql_test_parse_for_slave(THD *thd
Parser_state parser_state(thd, inBuf, length);
lex_start(thd);
- mysql_reset_thd_for_next_command(thd);
+ mysql_reset_thd_for_next_command(thd, 0);
if (!parse_sql(thd, & parser_state, NULL) &&
all_tables_not_ok(thd,(TABLE_LIST*) lex->select_lex.table_list.first))
@@ -6867,6 +6910,13 @@ bool reload_acl_and_cache(THD *thd, ulon
if (flush_error_log())
result=1;
}
+ if (((options & (REFRESH_SLOW_QUERY_LOG | REFRESH_LOG)) ==
+ REFRESH_SLOW_QUERY_LOG))
+ {
+ /* We are only flushing slow query log */
+ logger.flush_slow_log(thd);
+ }
+
#ifdef HAVE_QUERY_CACHE
if (options & REFRESH_QUERY_CACHE_FREE)
{
@@ -6949,26 +6999,55 @@ bool reload_acl_and_cache(THD *thd, ulon
}
#endif
#ifdef OPENSSL
- if (options & REFRESH_DES_KEY_FILE)
- {
- if (des_key_file && load_des_key_file(des_key_file))
- result= 1;
- }
+ if (options & REFRESH_DES_KEY_FILE)
+ {
+ if (des_key_file && load_des_key_file(des_key_file))
+ result= 1;
+ }
#endif
#ifdef HAVE_REPLICATION
- if (options & REFRESH_SLAVE)
- {
- tmp_write_to_binlog= 0;
- pthread_mutex_lock(&LOCK_active_mi);
- if (reset_slave(thd, active_mi))
- result=1;
- pthread_mutex_unlock(&LOCK_active_mi);
- }
-#endif
- if (options & REFRESH_USER_RESOURCES)
- reset_mqh((LEX_USER *) NULL, 0); /* purecov: inspected */
- *write_to_binlog= tmp_write_to_binlog;
- return result;
+ if (options & REFRESH_SLAVE)
+ {
+ tmp_write_to_binlog= 0;
+ pthread_mutex_lock(&LOCK_active_mi);
+ if (reset_slave(thd, active_mi))
+ result=1;
+ pthread_mutex_unlock(&LOCK_active_mi);
+ }
+#endif
+ if (options & REFRESH_USER_RESOURCES)
+ reset_mqh((LEX_USER *) NULL, 0); /* purecov: inspected */
+ if (options & REFRESH_TABLE_STATS)
+ {
+ pthread_mutex_lock(&LOCK_global_table_stats);
+ free_global_table_stats();
+ init_global_table_stats();
+ pthread_mutex_unlock(&LOCK_global_table_stats);
+ }
+ if (options & REFRESH_INDEX_STATS)
+ {
+ pthread_mutex_lock(&LOCK_global_index_stats);
+ free_global_index_stats();
+ init_global_index_stats();
+ pthread_mutex_unlock(&LOCK_global_index_stats);
+ }
+ if (options & (REFRESH_USER_STATS | REFRESH_CLIENT_STATS))
+ {
+ pthread_mutex_lock(&LOCK_global_user_client_stats);
+ if (options & REFRESH_USER_STATS)
+ {
+ free_global_user_stats();
+ init_global_user_stats();
+ }
+ if (options & REFRESH_CLIENT_STATS)
+ {
+ free_global_client_stats();
+ init_global_client_stats();
+ }
+ pthread_mutex_unlock(&LOCK_global_user_client_stats);
+ }
+ *write_to_binlog= tmp_write_to_binlog;
+ return result;
}
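
Each FLUSH ..._STATISTICS variant resets its counters by freeing and re-initializing the corresponding hash while holding the protecting mutex. A minimal standalone sketch of the same reset-under-lock idea, with std::map and std::mutex standing in for the server's HASH and LOCK_global_table_stats (illustrative names):

// Standalone sketch only: FLUSH TABLE_STATISTICS as "clear the shared map
// under its lock".
#include <cstdint>
#include <map>
#include <mutex>
#include <string>

static std::map<std::string, uint64_t> global_table_stats;
static std::mutex LOCK_global_table_stats;

static void flush_table_statistics()
{
  std::lock_guard<std::mutex> guard(LOCK_global_table_stats);
  global_table_stats.clear();   // free_global_table_stats() + init_global_table_stats()
}

int main()
{
  global_table_stats["test.t1"]= 100;
  flush_table_statistics();     // the statistics start again from zero
  return 0;
}
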
@@ -7004,7 +7083,6 @@ uint kill_one_thread(THD *thd, ulong id,
VOID(pthread_mutex_unlock(&LOCK_thread_count));
if (tmp)
{
-
/*
If we're SUPER, we can KILL anything, including system-threads.
No further checks.
=== modified file 'sql/sql_plugin.cc'
--- a/sql/sql_plugin.cc 2009-10-01 21:27:39 +0000
+++ b/sql/sql_plugin.cc 2009-10-19 17:14:48 +0000
@@ -1790,10 +1790,10 @@ bool mysql_uninstall_plugin(THD *thd, co
table->use_all_columns();
table->field[0]->store(name->str, name->length, system_charset_info);
- if (! table->file->index_read_idx_map(table->record[0], 0,
- (uchar *)table->field[0]->ptr,
- HA_WHOLE_KEY,
- HA_READ_KEY_EXACT))
+ if (! table->file->ha_index_read_idx_map(table->record[0], 0,
+ (uchar *)table->field[0]->ptr,
+ HA_WHOLE_KEY,
+ HA_READ_KEY_EXACT))
{
int error;
/*
=== modified file 'sql/sql_prepare.cc'
--- a/sql/sql_prepare.cc 2009-09-07 20:50:10 +0000
+++ b/sql/sql_prepare.cc 2009-10-19 17:14:48 +0000
@@ -2067,14 +2067,13 @@ void mysqld_stmt_prepare(THD *thd, const
Prepared_statement *stmt;
bool error;
DBUG_ENTER("mysqld_stmt_prepare");
-
DBUG_PRINT("prep_query", ("%s", packet));
/* First of all clear possible warnings from the previous command */
- mysql_reset_thd_for_next_command(thd);
+ mysql_reset_thd_for_next_command(thd, opt_userstat_running);
if (! (stmt= new Prepared_statement(thd)))
- DBUG_VOID_RETURN; /* out of memory: error is set in Sql_alloc */
+ goto end; /* out of memory: error is set in Sql_alloc */
if (thd->stmt_map.insert(thd, stmt))
{
@@ -2082,7 +2081,7 @@ void mysqld_stmt_prepare(THD *thd, const
The error is set in the insert. The statement itself
will be also deleted there (this is how the hash works).
*/
- DBUG_VOID_RETURN;
+ goto end;
}
/* Reset warnings from previous command */
@@ -2109,6 +2108,7 @@ void mysqld_stmt_prepare(THD *thd, const
thd->protocol= save_protocol;
/* check_prepared_statemnt sends the metadata packet in case of success */
+end:
DBUG_VOID_RETURN;
}
@@ -2450,7 +2450,7 @@ void mysqld_stmt_execute(THD *thd, char
packet+= 9; /* stmt_id + 5 bytes of flags */
/* First of all clear possible warnings from the previous command */
- mysql_reset_thd_for_next_command(thd);
+ mysql_reset_thd_for_next_command(thd, opt_userstat_running);
if (!(stmt= find_prepared_statement(thd, stmt_id)))
{
@@ -2549,7 +2549,8 @@ void mysqld_stmt_fetch(THD *thd, char *p
DBUG_ENTER("mysqld_stmt_fetch");
/* First of all clear possible warnings from the previous command */
- mysql_reset_thd_for_next_command(thd);
+ mysql_reset_thd_for_next_command(thd, opt_userstat_running);
+
status_var_increment(thd->status_var.com_stmt_fetch);
if (!(stmt= find_prepared_statement(thd, stmt_id)))
{
@@ -2615,7 +2616,7 @@ void mysqld_stmt_reset(THD *thd, char *p
DBUG_ENTER("mysqld_stmt_reset");
/* First of all clear possible warnings from the previous command */
- mysql_reset_thd_for_next_command(thd);
+ mysql_reset_thd_for_next_command(thd, opt_userstat_running);
status_var_increment(thd->status_var.com_stmt_reset);
if (!(stmt= find_prepared_statement(thd, stmt_id)))
=== modified file 'sql/sql_select.cc'
--- a/sql/sql_select.cc 2009-09-15 10:46:35 +0000
+++ b/sql/sql_select.cc 2009-10-19 17:14:48 +0000
@@ -10603,8 +10603,9 @@ error:
static bool open_tmp_table(TABLE *table)
{
int error;
- if ((error=table->file->ha_open(table, table->s->table_name.str,O_RDWR,
- HA_OPEN_TMP_TABLE | HA_OPEN_INTERNAL_TABLE)))
+ if ((error= table->file->ha_open(table, table->s->table_name.str, O_RDWR,
+ HA_OPEN_TMP_TABLE |
+ HA_OPEN_INTERNAL_TABLE)))
{
table->file->print_error(error,MYF(0)); /* purecov: inspected */
table->db_stat=0;
@@ -10949,7 +10950,7 @@ create_internal_tmp_table_from_heap2(THD
is safe as this is a temporary MyISAM table without timestamp/autoincrement
or partitioning.
*/
- while (!table->file->rnd_next(new_table.record[1]))
+ while (!table->file->ha_rnd_next(new_table.record[1]))
{
write_err= new_table.file->ha_write_row(new_table.record[1]);
DBUG_EXECUTE_IF("raise_error", write_err= HA_ERR_FOUND_DUPP_KEY ;);
@@ -11746,10 +11747,10 @@ int safe_index_read(JOIN_TAB *tab)
{
int error;
TABLE *table= tab->table;
- if ((error=table->file->index_read_map(table->record[0],
- tab->ref.key_buff,
- make_prev_keypart_map(tab->ref.key_parts),
- HA_READ_KEY_EXACT)))
+ if ((error= table->file->ha_index_read_map(table->record[0],
+ tab->ref.key_buff,
+ make_prev_keypart_map(tab->ref.key_parts),
+ HA_READ_KEY_EXACT)))
return report_error(table, error);
return 0;
}
@@ -11858,8 +11859,8 @@ join_read_system(JOIN_TAB *tab)
int error;
if (table->status & STATUS_GARBAGE) // If first read
{
- if ((error=table->file->read_first_row(table->record[0],
- table->s->primary_key)))
+ if ((error= table->file->ha_read_first_row(table->record[0],
+ table->s->primary_key)))
{
if (error != HA_ERR_END_OF_FILE)
return report_error(table, error);
@@ -11901,10 +11902,10 @@ join_read_const(JOIN_TAB *tab)
error=HA_ERR_KEY_NOT_FOUND;
else
{
- error=table->file->index_read_idx_map(table->record[0],tab->ref.key,
- (uchar*) tab->ref.key_buff,
- make_prev_keypart_map(tab->ref.key_parts),
- HA_READ_KEY_EXACT);
+ error= table->file->ha_index_read_idx_map(table->record[0],tab->ref.key,
+ (uchar*) tab->ref.key_buff,
+ make_prev_keypart_map(tab->ref.key_parts),
+ HA_READ_KEY_EXACT);
}
if (error)
{
@@ -11949,10 +11950,10 @@ join_read_key(JOIN_TAB *tab)
table->status=STATUS_NOT_FOUND;
return -1;
}
- error=table->file->index_read_map(table->record[0],
- tab->ref.key_buff,
- make_prev_keypart_map(tab->ref.key_parts),
- HA_READ_KEY_EXACT);
+ error= table->file->ha_index_read_map(table->record[0],
+ tab->ref.key_buff,
+ make_prev_keypart_map(tab->ref.key_parts),
+ HA_READ_KEY_EXACT);
if (error && error != HA_ERR_KEY_NOT_FOUND && error != HA_ERR_END_OF_FILE)
return report_error(table, error);
}
@@ -12005,10 +12006,10 @@ join_read_always_key(JOIN_TAB *tab)
if (cp_buffer_from_ref(tab->join->thd, table, &tab->ref))
return -1;
- if ((error=table->file->index_read_map(table->record[0],
- tab->ref.key_buff,
- make_prev_keypart_map(tab->ref.key_parts),
- HA_READ_KEY_EXACT)))
+ if ((error= table->file->ha_index_read_map(table->record[0],
+ tab->ref.key_buff,
+ make_prev_keypart_map(tab->ref.key_parts),
+ HA_READ_KEY_EXACT)))
{
if (error != HA_ERR_KEY_NOT_FOUND && error != HA_ERR_END_OF_FILE)
return report_error(table, error);
@@ -12039,9 +12040,9 @@ join_read_last_key(JOIN_TAB *tab)
}
if (cp_buffer_from_ref(tab->join->thd, table, &tab->ref))
return -1;
- if ((error=table->file->index_read_last_map(table->record[0],
- tab->ref.key_buff,
- make_prev_keypart_map(tab->ref.key_parts))))
+ if ((error= table->file->ha_index_read_last_map(table->record[0],
+ tab->ref.key_buff,
+ make_prev_keypart_map(tab->ref.key_parts))))
{
if (error != HA_ERR_KEY_NOT_FOUND && error != HA_ERR_END_OF_FILE)
return report_error(table, error);
@@ -12066,9 +12067,9 @@ join_read_next_same(READ_RECORD *info)
TABLE *table= info->table;
JOIN_TAB *tab=table->reginfo.join_tab;
- if ((error=table->file->index_next_same(table->record[0],
- tab->ref.key_buff,
- tab->ref.key_length)))
+ if ((error= table->file->ha_index_next_same(table->record[0],
+ tab->ref.key_buff,
+ tab->ref.key_length)))
{
if (error != HA_ERR_END_OF_FILE)
return report_error(table, error);
@@ -12086,7 +12087,7 @@ join_read_prev_same(READ_RECORD *info)
TABLE *table= info->table;
JOIN_TAB *tab=table->reginfo.join_tab;
- if ((error=table->file->index_prev(table->record[0])))
+ if ((error= table->file->ha_index_prev(table->record[0])))
return report_error(table, error);
if (key_cmp_if_same(table, tab->ref.key_buff, tab->ref.key,
tab->ref.key_length))
@@ -12158,7 +12159,7 @@ join_read_first(JOIN_TAB *tab)
error= table->file->ha_index_init(tab->index, tab->sorted);
if (!error)
error= table->file->prepare_index_scan();
- if (error || (error=tab->table->file->index_first(tab->table->record[0])))
+ if (error || (error=tab->table->file->ha_index_first(tab->table->record[0])))
{
if (error != HA_ERR_KEY_NOT_FOUND && error != HA_ERR_END_OF_FILE)
report_error(table, error);
@@ -12172,7 +12173,7 @@ static int
join_read_next(READ_RECORD *info)
{
int error;
- if ((error=info->file->index_next(info->record)))
+ if ((error= info->file->ha_index_next(info->record)))
return report_error(info->table, error);
return 0;
}
@@ -12199,7 +12200,7 @@ join_read_last(JOIN_TAB *tab)
error= table->file->ha_index_init(tab->index, 1);
if (!error)
error= table->file->prepare_index_scan();
- if (error || (error= tab->table->file->index_last(tab->table->record[0])))
+ if (error || (error= tab->table->file->ha_index_last(tab->table->record[0])))
return report_error(table, error);
return 0;
}
@@ -12209,7 +12210,7 @@ static int
join_read_prev(READ_RECORD *info)
{
int error;
- if ((error= info->file->index_prev(info->record)))
+ if ((error= info->file->ha_index_prev(info->record)))
return report_error(info->table, error);
return 0;
}
@@ -12234,7 +12235,7 @@ join_ft_read_first(JOIN_TAB *tab)
#endif
table->file->ft_init();
- if ((error= table->file->ft_read(table->record[0])))
+ if ((error= table->file->ha_ft_read(table->record[0])))
return report_error(table, error);
return 0;
}
@@ -12243,7 +12244,7 @@ static int
join_ft_read_next(READ_RECORD *info)
{
int error;
- if ((error= info->file->ft_read(info->table->record[0])))
+ if ((error= info->file->ha_ft_read(info->table->record[0])))
return report_error(info->table, error);
return 0;
}
@@ -12535,7 +12536,7 @@ end_write(JOIN *join, JOIN_TAB *join_tab
{
int error;
join->found_records++;
- if ((error=table->file->ha_write_row(table->record[0])))
+ if ((error= table->file->ha_write_row(table->record[0])))
{
if (!table->file->is_fatal_error(error, HA_CHECK_DUP))
goto end;
@@ -12590,15 +12591,15 @@ end_update(JOIN *join, JOIN_TAB *join_ta
if (item->maybe_null)
group->buff[-1]= (char) group->field->is_null();
}
- if (!table->file->index_read_map(table->record[1],
- join->tmp_table_param.group_buff,
- HA_WHOLE_KEY,
- HA_READ_KEY_EXACT))
+ if (!table->file->ha_index_read_map(table->record[1],
+ join->tmp_table_param.group_buff,
+ HA_WHOLE_KEY,
+ HA_READ_KEY_EXACT))
{ /* Update old record */
restore_record(table,record[1]);
update_tmptable_sum_func(join->sum_funcs,table);
- if ((error=table->file->ha_update_row(table->record[1],
- table->record[0])))
+ if ((error= table->file->ha_update_row(table->record[1],
+ table->record[0])))
{
table->file->print_error(error,MYF(0)); /* purecov: inspected */
DBUG_RETURN(NESTED_LOOP_ERROR); /* purecov: inspected */
@@ -12621,7 +12622,7 @@ end_update(JOIN *join, JOIN_TAB *join_ta
}
init_tmptable_sum_functions(join->sum_funcs);
copy_funcs(join->tmp_table_param.items_to_copy);
- if ((error=table->file->ha_write_row(table->record[0])))
+ if ((error= table->file->ha_write_row(table->record[0])))
{
if (create_internal_tmp_table_from_heap(join->thd, table,
&join->tmp_table_param,
@@ -12662,7 +12663,7 @@ end_unique_update(JOIN *join, JOIN_TAB *
copy_fields(&join->tmp_table_param); // Groups are copied twice.
copy_funcs(join->tmp_table_param.items_to_copy);
- if (!(error=table->file->ha_write_row(table->record[0])))
+ if (!(error= table->file->ha_write_row(table->record[0])))
join->send_records++; // New group
else
{
@@ -12671,15 +12672,15 @@ end_unique_update(JOIN *join, JOIN_TAB *
table->file->print_error(error,MYF(0)); /* purecov: inspected */
DBUG_RETURN(NESTED_LOOP_ERROR); /* purecov: inspected */
}
- if (table->file->rnd_pos(table->record[1],table->file->dup_ref))
+ if (table->file->ha_rnd_pos(table->record[1],table->file->dup_ref))
{
table->file->print_error(error,MYF(0)); /* purecov: inspected */
DBUG_RETURN(NESTED_LOOP_ERROR); /* purecov: inspected */
}
restore_record(table,record[1]);
update_tmptable_sum_func(join->sum_funcs,table);
- if ((error=table->file->ha_update_row(table->record[1],
- table->record[0])))
+ if ((error= table->file->ha_update_row(table->record[1],
+ table->record[0])))
{
table->file->print_error(error,MYF(0)); /* purecov: inspected */
DBUG_RETURN(NESTED_LOOP_ERROR); /* purecov: inspected */
@@ -14016,7 +14017,7 @@ static int remove_dup_with_compare(THD *
new_record=(char*) table->record[1]+offset;
file->ha_rnd_init(1);
- error=file->rnd_next(record);
+ error= file->ha_rnd_next(record);
for (;;)
{
if (thd->killed)
@@ -14035,9 +14036,9 @@ static int remove_dup_with_compare(THD *
}
if (having && !having->val_int())
{
- if ((error=file->ha_delete_row(record)))
+ if ((error= file->ha_delete_row(record)))
goto err;
- error=file->rnd_next(record);
+ error= file->ha_rnd_next(record);
continue;
}
if (copy_blobs(first_field))
@@ -14052,7 +14053,7 @@ static int remove_dup_with_compare(THD *
bool found=0;
for (;;)
{
- if ((error=file->rnd_next(record)))
+ if ((error= file->ha_rnd_next(record)))
{
if (error == HA_ERR_RECORD_DELETED)
continue;
@@ -14062,7 +14063,7 @@ static int remove_dup_with_compare(THD *
}
if (compare_record(table, first_field) == 0)
{
- if ((error=file->ha_delete_row(record)))
+ if ((error= file->ha_delete_row(record)))
goto err;
}
else if (!found)
@@ -14152,7 +14153,7 @@ static int remove_dup_with_hash_index(TH
error=0;
goto err;
}
- if ((error=file->rnd_next(record)))
+ if ((error= file->ha_rnd_next(record)))
{
if (error == HA_ERR_RECORD_DELETED)
continue;
@@ -14162,7 +14163,7 @@ static int remove_dup_with_hash_index(TH
}
if (having && !having->val_int())
{
- if ((error=file->ha_delete_row(record)))
+ if ((error= file->ha_delete_row(record)))
goto err;
continue;
}
@@ -14179,7 +14180,7 @@ static int remove_dup_with_hash_index(TH
if (hash_search(&hash, org_key_pos, key_length))
{
/* Duplicate found; remove the row */
- if ((error=file->ha_delete_row(record)))
+ if ((error= file->ha_delete_row(record)))
goto err;
}
else
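The hunks above are representative of the mechanical part of the patch: every direct call into the storage engine (rnd_next(), rnd_pos(), ft_read(), index_read_map(), ...) is routed through a handler::ha_...() wrapper so that row-read and row-change counters can be updated in one place. The toy program below is a minimal, self-contained sketch of that wrapper pattern; the class, member and counter names are invented for illustration and are not the actual MariaDB handler code.

#include <cstdio>

class toy_handler
{
public:
  unsigned long rows_read;
  toy_handler() : rows_read(0), row(0) {}

  // Non-virtual wrapper: one place where read statistics are collected
  // before the call is forwarded to the engine-specific implementation.
  int ha_rnd_next(int *buf)
  {
    int error= rnd_next(buf);
    if (!error)
      rows_read++;
    return error;
  }

protected:
  int row;
  // Engine-specific scan: three fake rows, then "end of file".
  virtual int rnd_next(int *buf)
  {
    if (row >= 3)
      return 1;                     /* stand-in for HA_ERR_END_OF_FILE */
    *buf= row++;
    return 0;
  }
};

int main()
{
  toy_handler h;
  int buf;
  while (!h.ha_rnd_next(&buf)) {}
  std::printf("rows read: %lu\n", h.rows_read);  /* prints 3 */
  return 0;
}

Because the wrapper is non-virtual, engines keep overriding the same virtual entry points as before and the counting cannot be bypassed by an engine that forgets to do it itself.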
=== modified file 'sql/sql_servers.cc'
--- a/sql/sql_servers.cc 2009-03-20 14:27:53 +0000
+++ b/sql/sql_servers.cc 2009-10-19 17:14:48 +0000
@@ -520,10 +520,10 @@ int insert_server_record(TABLE *table, F
system_charset_info);
/* read index until record is that specified in server_name */
- if ((error= table->file->index_read_idx_map(table->record[0], 0,
- (uchar *)table->field[0]->ptr,
- HA_WHOLE_KEY,
- HA_READ_KEY_EXACT)))
+ if ((error= table->file->ha_index_read_idx_map(table->record[0], 0,
+ (uchar *)table->field[0]->ptr,
+ HA_WHOLE_KEY,
+ HA_READ_KEY_EXACT)))
{
/* if not found, err */
if (error != HA_ERR_KEY_NOT_FOUND && error != HA_ERR_END_OF_FILE)
@@ -863,10 +863,10 @@ update_server_record(TABLE *table, FOREI
server->server_name_length,
system_charset_info);
- if ((error= table->file->index_read_idx_map(table->record[0], 0,
- (uchar *)table->field[0]->ptr,
- ~(longlong)0,
- HA_READ_KEY_EXACT)))
+ if ((error= table->file->ha_index_read_idx_map(table->record[0], 0,
+ (uchar *)table->field[0]->ptr,
+ ~(longlong)0,
+ HA_READ_KEY_EXACT)))
{
if (error != HA_ERR_KEY_NOT_FOUND && error != HA_ERR_END_OF_FILE)
table->file->print_error(error, MYF(0));
@@ -920,10 +920,10 @@ delete_server_record(TABLE *table,
/* set the field that's the PK to the value we're looking for */
table->field[0]->store(server_name, server_name_length, system_charset_info);
- if ((error= table->file->index_read_idx_map(table->record[0], 0,
- (uchar *)table->field[0]->ptr,
- HA_WHOLE_KEY,
- HA_READ_KEY_EXACT)))
+ if ((error= table->file->ha_index_read_idx_map(table->record[0], 0,
+ (uchar *)table->field[0]->ptr,
+ HA_WHOLE_KEY,
+ HA_READ_KEY_EXACT)))
{
if (error != HA_ERR_KEY_NOT_FOUND && error != HA_ERR_END_OF_FILE)
table->file->print_error(error, MYF(0));
=== modified file 'sql/sql_show.cc'
--- a/sql/sql_show.cc 2009-09-23 11:03:47 +0000
+++ b/sql/sql_show.cc 2009-10-19 17:14:48 +0000
@@ -714,6 +714,7 @@ bool mysqld_show_create_db(THD *thd, cha
sctx->master_access);
if (!(db_access & DB_ACLS) && check_grant_db(thd,dbname))
{
+ status_var_increment(thd->status_var.access_denied_errors);
my_error(ER_DBACCESS_DENIED_ERROR, MYF(0),
sctx->priv_user, sctx->host_or_ip, dbname);
general_log_print(thd,COM_INIT_DB,ER(ER_DBACCESS_DENIED_ERROR),
@@ -2100,11 +2101,6 @@ void remove_status_vars(SHOW_VAR *list)
}
}
-inline void make_upper(char *buf)
-{
- for (; *buf; buf++)
- *buf= my_toupper(system_charset_info, *buf);
-}
static bool show_status_array(THD *thd, const char *wild,
SHOW_VAR *variables,
@@ -2143,7 +2139,7 @@ static bool show_status_array(THD *thd,
strnmov(prefix_end, variables->name, len);
name_buffer[sizeof(name_buffer)-1]=0; /* Safety */
if (ucase_names)
- make_upper(name_buffer);
+ my_caseup_str(system_charset_info, name_buffer);
restore_record(table, s->default_values);
table->field[0]->store(name_buffer, strlen(name_buffer),
@@ -2270,6 +2266,323 @@ end:
DBUG_RETURN(res);
}
+#ifdef COMPLEAT_PATCH_NOT_ADDED_YET
+/*
+ Aggregate values for mapped_user entries by their role.
+
+ SYNOPSIS
+ aggregate_user_stats
+ all_user_stats - input to aggregate
+ agg_user_stats - returns aggregated values
+
+ RETURN
+ 0 - OK
+ 1 - error
+*/
+
+static int aggregate_user_stats(HASH *all_user_stats, HASH *agg_user_stats)
+{
+ DBUG_ENTER("aggregate_user_stats");
+ if (hash_init(agg_user_stats, system_charset_info,
+ max(all_user_stats->records, 1),
+ 0, 0, (hash_get_key)get_key_user_stats,
+ (hash_free_key)free_user_stats, 0))
+ {
+ sql_print_error("Malloc in aggregate_user_stats failed");
+ DBUG_RETURN(1);
+ }
+
+ for (uint i= 0; i < all_user_stats->records; i++)
+ {
+ USER_STATS *user= (USER_STATS*)hash_element(all_user_stats, i);
+ USER_STATS *agg_user;
+ uint name_length= strlen(user->priv_user);
+
+ if (!(agg_user= (USER_STATS*) hash_search(agg_user_stats,
+ (uchar*)user->priv_user,
+ name_length)))
+ {
+ // First entry for this role.
+ if (!(agg_user= (USER_STATS*) my_malloc(sizeof(USER_STATS),
+ MYF(MY_WME | MY_ZEROFILL))))
+ {
+ sql_print_error("Malloc in aggregate_user_stats failed");
+ DBUG_RETURN(1);
+ }
+
+ init_user_stats(agg_user, user->priv_user, name_length,
+ user->priv_user,
+ user->total_connections, user->concurrent_connections,
+ user->connected_time, user->busy_time, user->cpu_time,
+ user->bytes_received, user->bytes_sent,
+ user->binlog_bytes_written,
+ user->rows_sent, user->rows_read,
+ user->rows_inserted, user->rows_deleted,
+ user->rows_updated,
+ user->select_commands, user->update_commands,
+ user->other_commands,
+ user->commit_trans, user->rollback_trans,
+ user->denied_connections, user->lost_connections,
+ user->access_denied_errors, user->empty_queries);
+
+ if (my_hash_insert(agg_user_stats, (uchar*) agg_user))
+ {
+ /* Out of memory */
+ my_free(agg_user, 0);
+ sql_print_error("Malloc in aggregate_user_stats failed");
+ DBUG_RETURN(1);
+ }
+ }
+ else
+ {
+ /* Aggregate with existing values for this role. */
+ add_user_stats(agg_user,
+ user->total_connections, user->concurrent_connections,
+ user->connected_time, user->busy_time, user->cpu_time,
+ user->bytes_received, user->bytes_sent,
+ user->binlog_bytes_written,
+ user->rows_sent, user->rows_read,
+ user->rows_inserted, user->rows_deleted,
+ user->rows_updated,
+ user->select_commands, user->update_commands,
+ user->other_commands,
+ user->commit_trans, user->rollback_trans,
+ user->denied_connections, user->lost_connections,
+ user->access_denied_errors, user->empty_queries);
+ }
+ }
+ DBUG_PRINT("exit", ("aggregated %lu input into %lu output entries",
+ all_user_stats->records, agg_user_stats->records));
+ DBUG_RETURN(0);
+}
+#endif
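aggregate_user_stats() above (compiled out until the mapped_user part of the patch lands) folds per-user entries into per-role entries: the first user seen for a role creates the aggregate, every further user adds its counters to it. Below is a minimal stand-alone sketch of that insert-or-accumulate loop, using std::unordered_map instead of the server's HASH and a single counter; all names and values are invented.

#include <cstdio>
#include <string>
#include <unordered_map>
#include <vector>

struct toy_user_stats
{
  std::string user;        // login user
  std::string priv_user;   // role / account it is mapped to
  unsigned long long rows_read;
};

int main()
{
  // Per-user input, as global_user_stats would hold it.
  std::vector<toy_user_stats> all_user_stats= {
    {"alice", "reporting", 100},
    {"bob",   "reporting",  40},
    {"carol", "batch",       7}
  };

  // Aggregate by role: the first user for a role creates the entry,
  // later users add their counters to it.
  std::unordered_map<std::string, unsigned long long> agg_user_stats;
  for (const toy_user_stats &u : all_user_stats)
    agg_user_stats[u.priv_user]+= u.rows_read;

  for (const auto &role : agg_user_stats)
    std::printf("%s: %llu rows read\n", role.first.c_str(), role.second);
  return 0;
}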
+
+/*
+ Write result to network for SHOW USER_STATISTICS
+
+ SYNOPSIS
+    send_user_stats
+      thd            - current thread
+      all_user_stats - values to return
+      table          - I_S table
+
+ RETURN
+ 0 - OK
+ 1 - error
+*/
+
+int send_user_stats(THD* thd, HASH *all_user_stats, TABLE *table)
+{
+ DBUG_ENTER("send_user_stats");
+
+ for (uint i= 0; i < all_user_stats->records; i++)
+ {
+ uint j= 0;
+ USER_STATS *user_stats= (USER_STATS*) hash_element(all_user_stats, i);
+
+ table->field[j++]->store(user_stats->user, user_stats->user_name_length,
+ system_charset_info);
+ table->field[j++]->store((longlong)user_stats->total_connections,TRUE);
+ table->field[j++]->store((longlong)user_stats->concurrent_connections);
+ table->field[j++]->store((longlong)user_stats->connected_time);
+ table->field[j++]->store((double)user_stats->busy_time);
+ table->field[j++]->store((double)user_stats->cpu_time);
+ table->field[j++]->store((longlong)user_stats->bytes_received, TRUE);
+ table->field[j++]->store((longlong)user_stats->bytes_sent, TRUE);
+ table->field[j++]->store((longlong)user_stats->binlog_bytes_written, TRUE);
+ table->field[j++]->store((longlong)user_stats->rows_read, TRUE);
+ table->field[j++]->store((longlong)user_stats->rows_sent, TRUE);
+ table->field[j++]->store((longlong)user_stats->rows_deleted, TRUE);
+ table->field[j++]->store((longlong)user_stats->rows_inserted, TRUE);
+ table->field[j++]->store((longlong)user_stats->rows_updated, TRUE);
+ table->field[j++]->store((longlong)user_stats->select_commands, TRUE);
+ table->field[j++]->store((longlong)user_stats->update_commands, TRUE);
+ table->field[j++]->store((longlong)user_stats->other_commands, TRUE);
+ table->field[j++]->store((longlong)user_stats->commit_trans, TRUE);
+ table->field[j++]->store((longlong)user_stats->rollback_trans, TRUE);
+ table->field[j++]->store((longlong)user_stats->denied_connections, TRUE);
+ table->field[j++]->store((longlong)user_stats->lost_connections, TRUE);
+ table->field[j++]->store((longlong)user_stats->access_denied_errors, TRUE);
+ table->field[j++]->store((longlong)user_stats->empty_queries, TRUE);
+ if (schema_table_store_record(thd, table))
+ {
+ DBUG_PRINT("error", ("store record error"));
+ DBUG_RETURN(1);
+ }
+ }
+ DBUG_RETURN(0);
+}
+
+/*
+  Process SHOW USER_STATISTICS
+
+  SYNOPSIS
+    fill_schema_user_stats
+      thd    - current thread
+      tables - I_S table to fill
+      cond   - optional condition (currently unused)
+
+  RETURN
+    0 - OK
+    1 - error
+*/
+
+int fill_schema_user_stats(THD* thd, TABLE_LIST* tables, COND* cond)
+{
+ TABLE *table= tables->table;
+ int result;
+ DBUG_ENTER("fill_schema_user_stats");
+
+ if (check_global_access(thd, SUPER_ACL | PROCESS_ACL))
+ DBUG_RETURN(1);
+
+  /*
+    Iterate over all global per-user statistics and send them to the client.
+  */
+
+ pthread_mutex_lock(&LOCK_global_user_client_stats);
+ result= send_user_stats(thd, &global_user_stats, table) != 0;
+ pthread_mutex_unlock(&LOCK_global_user_client_stats);
+
+ DBUG_PRINT("exit", ("result: %d", result));
+ DBUG_RETURN(result);
+}
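fill_schema_user_stats() and the other fill_schema_*_stats() functions that follow all share one shape: take the global statistics mutex, walk the shared hash, store one I_S row per entry, release the mutex. Below is a self-contained sketch of that lock / iterate / unlock shape, with std::mutex and std::map standing in for the server's pthread mutex and HASH; the names are invented.

#include <cstdio>
#include <map>
#include <mutex>
#include <string>

// Stand-ins for LOCK_global_user_client_stats and global_user_stats.
static std::mutex stats_lock;
static std::map<std::string, unsigned long> user_rows_sent;

// Fill-style function: the whole scan runs under the lock so that sessions
// updating the shared map concurrently cannot invalidate the iteration.
static int fill_user_stats()
{
  std::lock_guard<std::mutex> guard(stats_lock);
  for (const auto &row : user_rows_sent)
    std::printf("%s: %lu\n", row.first.c_str(), row.second);
  return 0;
}

int main()
{
  {
    std::lock_guard<std::mutex> guard(stats_lock);
    user_rows_sent["root"]= 42;
  }
  return fill_user_stats();
}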
+
+/*
+  Process SHOW CLIENT_STATISTICS
+
+  SYNOPSIS
+    fill_schema_client_stats
+      thd    - current thread
+      tables - I_S table to fill
+      cond   - optional condition (currently unused)
+
+  RETURN
+    0 - OK
+    1 - error
+*/
+
+int fill_schema_client_stats(THD* thd, TABLE_LIST* tables, COND* cond)
+{
+ TABLE *table= tables->table;
+ int result;
+ DBUG_ENTER("fill_schema_client_stats");
+
+ if (check_global_access(thd, SUPER_ACL | PROCESS_ACL))
+ DBUG_RETURN(1);
+
+  /*
+    Iterate over all global per-client statistics and send them to the client.
+  */
+
+ pthread_mutex_lock(&LOCK_global_user_client_stats);
+ result= send_user_stats(thd, &global_client_stats, table) != 0;
+ pthread_mutex_unlock(&LOCK_global_user_client_stats);
+
+ DBUG_PRINT("exit", ("result: %d", result));
+ DBUG_RETURN(result);
+}
+
+
+/* Fill information schema table with table statistics */
+
+int fill_schema_table_stats(THD *thd, TABLE_LIST *tables, COND *cond)
+{
+ TABLE *table= tables->table;
+ DBUG_ENTER("fill_schema_table_stats");
+
+ pthread_mutex_lock(&LOCK_global_table_stats);
+ for (uint i= 0; i < global_table_stats.records; i++)
+ {
+ char *end_of_schema;
+ TABLE_STATS *table_stats=
+ (TABLE_STATS*)hash_element(&global_table_stats, i);
+ TABLE_LIST tmp_table;
+ size_t schema_length, table_name_length;
+
+ end_of_schema= strend(table_stats->table);
+ schema_length= (size_t) (end_of_schema - table_stats->table);
+ table_name_length= strlen(table_stats->table + schema_length + 1);
+
+ bzero((char*) &tmp_table,sizeof(tmp_table));
+ tmp_table.db= table_stats->table;
+ tmp_table.table_name= end_of_schema+1;
+ tmp_table.grant.privilege= 0;
+ if (check_access(thd, SELECT_ACL | EXTRA_ACL, tmp_table.db,
+ &tmp_table.grant.privilege, 0, 0,
+ is_schema_db(tmp_table.db)) ||
+ check_grant(thd, SELECT_ACL, &tmp_table, 1, UINT_MAX,
+ 1))
+ continue;
+
+ table->field[0]->store(table_stats->table, schema_length,
+ system_charset_info);
+ table->field[1]->store(table_stats->table + schema_length+1,
+ table_name_length, system_charset_info);
+ table->field[2]->store((longlong)table_stats->rows_read, TRUE);
+ table->field[3]->store((longlong)table_stats->rows_changed, TRUE);
+ table->field[4]->store((longlong)table_stats->rows_changed_x_indexes,
+ TRUE);
+ if (schema_table_store_record(thd, table))
+ {
+ VOID(pthread_mutex_unlock(&LOCK_global_table_stats));
+ DBUG_RETURN(1);
+ }
+ }
+ pthread_mutex_unlock(&LOCK_global_table_stats);
+ DBUG_RETURN(0);
+}
+
+
+/* Fill information schema table with index statistics */
+
+int fill_schema_index_stats(THD *thd, TABLE_LIST *tables, COND *cond)
+{
+ TABLE *table= tables->table;
+ DBUG_ENTER("fill_schema_index_stats");
+
+ pthread_mutex_lock(&LOCK_global_index_stats);
+ for (uint i= 0; i < global_index_stats.records; i++)
+ {
+ INDEX_STATS *index_stats =
+ (INDEX_STATS*) hash_element(&global_index_stats, i);
+ TABLE_LIST tmp_table;
+ char *index_name;
+ size_t schema_name_length, table_name_length, index_name_length;
+
+ bzero((char*) &tmp_table,sizeof(tmp_table));
+ tmp_table.db= index_stats->index;
+ tmp_table.table_name= strend(index_stats->index)+1;
+ tmp_table.grant.privilege= 0;
+ if (check_access(thd, SELECT_ACL | EXTRA_ACL, tmp_table.db,
+ &tmp_table.grant.privilege, 0, 0,
+ is_schema_db(tmp_table.db)) ||
+ check_grant(thd, SELECT_ACL, &tmp_table, 1, UINT_MAX, 1))
+ continue;
+
+ index_name= strend(tmp_table.table_name)+1;
+ schema_name_length= (tmp_table.table_name - index_stats->index) -1;
+ table_name_length= (index_name - tmp_table.table_name)-1;
+ index_name_length= (index_stats->index_name_length - schema_name_length -
+ table_name_length - 3);
+
+ table->field[0]->store(tmp_table.db, schema_name_length,
+ system_charset_info);
+ table->field[1]->store(tmp_table.table_name, table_name_length,
+ system_charset_info);
+ table->field[2]->store(index_name, index_name_length, system_charset_info);
+ table->field[3]->store((longlong)index_stats->rows_read, TRUE);
+
+ if (schema_table_store_record(thd, table))
+ {
+ VOID(pthread_mutex_unlock(&LOCK_global_index_stats));
+ DBUG_RETURN(1);
+ }
+ }
+ pthread_mutex_unlock(&LOCK_global_index_stats);
+ DBUG_RETURN(0);
+}
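The table and index statistics hashes key their entries on packed strings: TABLE_STATS uses db + '\0' + table, INDEX_STATS uses db + '\0' + table + '\0' + index (see the structs.h additions further down). fill_schema_index_stats() above recovers the individual components by walking from one terminating '\0' to the next. A small stand-alone illustration of packing and unpacking such a key (the database, table and index names are made up):

#include <cstdio>
#include <cstring>

int main()
{
  // Build a "db\0table\0index" key the way the statistics hashes pack it.
  char key[64];
  std::size_t pos= 0;
  const char *db= "test", *table= "t1", *index= "PRIMARY";
  std::strcpy(key + pos, db);    pos+= std::strlen(db) + 1;
  std::strcpy(key + pos, table); pos+= std::strlen(table) + 1;
  std::strcpy(key + pos, index); pos+= std::strlen(index) + 1;

  // Unpack: each component ends at its own '\0', so walking with strlen
  // recovers schema, table and index names from one contiguous buffer.
  const char *schema_name= key;
  const char *table_name=  schema_name + std::strlen(schema_name) + 1;
  const char *index_name=  table_name  + std::strlen(table_name)  + 1;
  std::printf("%s / %s / %s  (packed length: %zu bytes)\n",
              schema_name, table_name, index_name, pos);
  return 0;
}

Packing the whole identity into one buffer lets the hash compare and store keys with a single length-prefixed memory block instead of three separate strings.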
+
/* collect status for all running threads */
@@ -4206,7 +4519,7 @@ int fill_schema_proc(THD *thd, TABLE_LIS
DBUG_RETURN(1);
}
proc_table->file->ha_index_init(0, 1);
- if ((res= proc_table->file->index_first(proc_table->record[0])))
+ if ((res= proc_table->file->ha_index_first(proc_table->record[0])))
{
res= (res == HA_ERR_END_OF_FILE) ? 0 : 1;
goto err;
@@ -4216,7 +4529,7 @@ int fill_schema_proc(THD *thd, TABLE_LIS
res= 1;
goto err;
}
- while (!proc_table->file->index_next(proc_table->record[0]))
+ while (!proc_table->file->ha_index_next(proc_table->record[0]))
{
if (store_schema_proc(thd, table, proc_table, wild, full_access, definer))
{
@@ -5462,6 +5775,81 @@ struct schema_table_ref
ST_SCHEMA_TABLE *schema_table;
};
+ST_FIELD_INFO user_stats_fields_info[]=
+{
+ {"USER", USERNAME_LENGTH, MYSQL_TYPE_STRING, 0, 0, "User", SKIP_OPEN_TABLE},
+ {"TOTAL_CONNECTIONS", MY_INT64_NUM_DECIMAL_DIGITS, MYSQL_TYPE_LONG, 0, 0, "Total_connections",SKIP_OPEN_TABLE},
+ {"CONCURRENT_CONNECTIONS", MY_INT64_NUM_DECIMAL_DIGITS, MYSQL_TYPE_LONG, 0, 0, "Concurrent_connections",SKIP_OPEN_TABLE},
+ {"CONNECTED_TIME", MY_INT64_NUM_DECIMAL_DIGITS, MYSQL_TYPE_LONG, 0, 0, "Connected_time",SKIP_OPEN_TABLE},
+ {"BUSY_TIME", MY_INT64_NUM_DECIMAL_DIGITS, MYSQL_TYPE_DOUBLE, 0, 0, "Busy_time",SKIP_OPEN_TABLE},
+ {"CPU_TIME", MY_INT64_NUM_DECIMAL_DIGITS, MYSQL_TYPE_DOUBLE, 0, 0, "Cpu_time",SKIP_OPEN_TABLE},
+ {"BYTES_RECEIVED", MY_INT64_NUM_DECIMAL_DIGITS, MYSQL_TYPE_LONG, 0, 0, "Bytes_received",SKIP_OPEN_TABLE},
+ {"BYTES_SENT", MY_INT64_NUM_DECIMAL_DIGITS, MYSQL_TYPE_LONG, 0, 0, "Bytes_sent",SKIP_OPEN_TABLE},
+ {"BINLOG_BYTES_WRITTEN", MY_INT64_NUM_DECIMAL_DIGITS, MYSQL_TYPE_LONG, 0, 0, "Binlog_bytes_written",SKIP_OPEN_TABLE},
+ {"ROWS_READ", MY_INT64_NUM_DECIMAL_DIGITS, MYSQL_TYPE_LONG, 0, 0, "Rows_read",SKIP_OPEN_TABLE},
+ {"ROWS_SENT", MY_INT64_NUM_DECIMAL_DIGITS, MYSQL_TYPE_LONG, 0, 0, "Rows_sent",SKIP_OPEN_TABLE},
+ {"ROWS_DELETED", MY_INT64_NUM_DECIMAL_DIGITS, MYSQL_TYPE_LONG, 0, 0, "Rows_deleted",SKIP_OPEN_TABLE},
+ {"ROWS_INSERTED", MY_INT64_NUM_DECIMAL_DIGITS, MYSQL_TYPE_LONG, 0, 0, "Rows_inserted",SKIP_OPEN_TABLE},
+ {"ROWS_UPDATED", MY_INT64_NUM_DECIMAL_DIGITS, MYSQL_TYPE_LONG, 0, 0, "Rows_updated",SKIP_OPEN_TABLE},
+ {"SELECT_COMMANDS", MY_INT64_NUM_DECIMAL_DIGITS, MYSQL_TYPE_LONG, 0, 0, "Select_commands",SKIP_OPEN_TABLE},
+ {"UPDATE_COMMANDS", MY_INT64_NUM_DECIMAL_DIGITS, MYSQL_TYPE_LONG, 0, 0, "Update_commands",SKIP_OPEN_TABLE},
+ {"OTHER_COMMANDS", MY_INT64_NUM_DECIMAL_DIGITS, MYSQL_TYPE_LONG, 0, 0, "Other_commands",SKIP_OPEN_TABLE},
+ {"COMMIT_TRANSACTIONS", MY_INT64_NUM_DECIMAL_DIGITS, MYSQL_TYPE_LONG, 0, 0, "Commit_transactions",SKIP_OPEN_TABLE},
+ {"ROLLBACK_TRANSACTIONS", MY_INT64_NUM_DECIMAL_DIGITS, MYSQL_TYPE_LONG, 0, 0, "Rollback_transactions",SKIP_OPEN_TABLE},
+ {"DENIED_CONNECTIONS", MY_INT64_NUM_DECIMAL_DIGITS, MYSQL_TYPE_LONG, 0, 0, "Denied_connections",SKIP_OPEN_TABLE},
+ {"LOST_CONNECTIONS", MY_INT64_NUM_DECIMAL_DIGITS, MYSQL_TYPE_LONG, 0, 0, "Lost_connections",SKIP_OPEN_TABLE},
+ {"ACCESS_DENIED", MY_INT64_NUM_DECIMAL_DIGITS, MYSQL_TYPE_LONG, 0, 0, "Access_denied",SKIP_OPEN_TABLE},
+ {"EMPTY_QUERIES", MY_INT64_NUM_DECIMAL_DIGITS, MYSQL_TYPE_LONG, 0, 0, "Empty_queries",SKIP_OPEN_TABLE},
+ {0, 0, MYSQL_TYPE_STRING, 0, 0, 0, 0}
+};
+
+ST_FIELD_INFO client_stats_fields_info[]=
+{
+ {"CLIENT", LIST_PROCESS_HOST_LEN, MYSQL_TYPE_STRING, 0, 0, "Client",SKIP_OPEN_TABLE},
+ {"TOTAL_CONNECTIONS", MY_INT64_NUM_DECIMAL_DIGITS, MYSQL_TYPE_LONG, 0, 0, "Total_connections",SKIP_OPEN_TABLE},
+ {"CONCURRENT_CONNECTIONS", MY_INT64_NUM_DECIMAL_DIGITS, MYSQL_TYPE_LONG, 0, 0, "Concurrent_connections",SKIP_OPEN_TABLE},
+ {"CONNECTED_TIME", MY_INT64_NUM_DECIMAL_DIGITS, MYSQL_TYPE_LONG, 0, 0, "Connected_time",SKIP_OPEN_TABLE},
+ {"BUSY_TIME", MY_INT64_NUM_DECIMAL_DIGITS, MYSQL_TYPE_DOUBLE, 0, 0, "Busy_time",SKIP_OPEN_TABLE},
+ {"CPU_TIME", MY_INT64_NUM_DECIMAL_DIGITS, MYSQL_TYPE_DOUBLE, 0, 0, "Cpu_time",SKIP_OPEN_TABLE},
+ {"BYTES_RECEIVED", MY_INT64_NUM_DECIMAL_DIGITS, MYSQL_TYPE_LONG, 0, 0, "Bytes_received",SKIP_OPEN_TABLE},
+ {"BYTES_SENT", MY_INT64_NUM_DECIMAL_DIGITS, MYSQL_TYPE_LONG, 0, 0, "Bytes_sent",SKIP_OPEN_TABLE},
+ {"BINLOG_BYTES_WRITTEN", MY_INT64_NUM_DECIMAL_DIGITS, MYSQL_TYPE_LONG, 0, 0, "Binlog_bytes_written",SKIP_OPEN_TABLE},
+ {"ROWS_READ", MY_INT64_NUM_DECIMAL_DIGITS, MYSQL_TYPE_LONG, 0, 0, "Rows_read",SKIP_OPEN_TABLE},
+ {"ROWS_SENT", MY_INT64_NUM_DECIMAL_DIGITS, MYSQL_TYPE_LONG, 0, 0, "Rows_sent",SKIP_OPEN_TABLE},
+ {"ROWS_DELETED", MY_INT64_NUM_DECIMAL_DIGITS, MYSQL_TYPE_LONG, 0, 0, "Rows_deleted",SKIP_OPEN_TABLE},
+ {"ROWS_INSERTED", MY_INT64_NUM_DECIMAL_DIGITS, MYSQL_TYPE_LONG, 0, 0, "Rows_inserted",SKIP_OPEN_TABLE},
+ {"ROWS_UPDATED", MY_INT64_NUM_DECIMAL_DIGITS, MYSQL_TYPE_LONG, 0, 0, "Rows_updated",SKIP_OPEN_TABLE},
+ {"SELECT_COMMANDS", MY_INT64_NUM_DECIMAL_DIGITS, MYSQL_TYPE_LONG, 0, 0, "Select_commands",SKIP_OPEN_TABLE},
+ {"UPDATE_COMMANDS", MY_INT64_NUM_DECIMAL_DIGITS, MYSQL_TYPE_LONG, 0, 0, "Update_commands",SKIP_OPEN_TABLE},
+ {"OTHER_COMMANDS", MY_INT64_NUM_DECIMAL_DIGITS, MYSQL_TYPE_LONG, 0, 0, "Other_commands",SKIP_OPEN_TABLE},
+ {"COMMIT_TRANSACTIONS", MY_INT64_NUM_DECIMAL_DIGITS, MYSQL_TYPE_LONG, 0, 0, "Commit_transactions",SKIP_OPEN_TABLE},
+ {"ROLLBACK_TRANSACTIONS", MY_INT64_NUM_DECIMAL_DIGITS, MYSQL_TYPE_LONG, 0, 0, "Rollback_transactions",SKIP_OPEN_TABLE},
+ {"DENIED_CONNECTIONS", MY_INT64_NUM_DECIMAL_DIGITS, MYSQL_TYPE_LONG, 0, 0, "Denied_connections",SKIP_OPEN_TABLE},
+ {"LOST_CONNECTIONS", MY_INT64_NUM_DECIMAL_DIGITS, MYSQL_TYPE_LONG, 0, 0, "Lost_connections",SKIP_OPEN_TABLE},
+ {"ACCESS_DENIED", MY_INT64_NUM_DECIMAL_DIGITS, MYSQL_TYPE_LONG, 0, 0, "Access_denied",SKIP_OPEN_TABLE},
+ {"EMPTY_QUERIES", MY_INT64_NUM_DECIMAL_DIGITS, MYSQL_TYPE_LONG, 0, 0, "Empty_queries",SKIP_OPEN_TABLE},
+ {0, 0, MYSQL_TYPE_STRING, 0, 0, 0, 0}
+};
+
+
+ST_FIELD_INFO table_stats_fields_info[]=
+{
+ {"TABLE_SCHEMA", NAME_LEN, MYSQL_TYPE_STRING, 0, 0, "Table_schema",SKIP_OPEN_TABLE},
+ {"TABLE_NAME", NAME_LEN, MYSQL_TYPE_STRING, 0, 0, "Table_name",SKIP_OPEN_TABLE},
+ {"ROWS_READ", MY_INT64_NUM_DECIMAL_DIGITS, MYSQL_TYPE_LONG, 0, 0, "Rows_read",SKIP_OPEN_TABLE},
+ {"ROWS_CHANGED", MY_INT64_NUM_DECIMAL_DIGITS, MYSQL_TYPE_LONG, 0, 0, "Rows_changed",SKIP_OPEN_TABLE},
+ {"ROWS_CHANGED_X_INDEXES", MY_INT64_NUM_DECIMAL_DIGITS, MYSQL_TYPE_LONG, 0, 0, "Rows_changed_x_#indexes",SKIP_OPEN_TABLE},
+ {0, 0, MYSQL_TYPE_STRING, 0, 0, 0, 0}
+};
+
+ST_FIELD_INFO index_stats_fields_info[]=
+{
+ {"TABLE_SCHEMA", NAME_LEN, MYSQL_TYPE_STRING, 0, 0, "Table_schema",SKIP_OPEN_TABLE},
+ {"TABLE_NAME", NAME_LEN, MYSQL_TYPE_STRING, 0, 0, "Table_name",SKIP_OPEN_TABLE},
+ {"INDEX_NAME", NAME_LEN, MYSQL_TYPE_STRING, 0, 0, "Index_name",SKIP_OPEN_TABLE},
+ {"ROWS_READ", MY_INT64_NUM_DECIMAL_DIGITS, MYSQL_TYPE_LONG, 0, 0, "Rows_read",SKIP_OPEN_TABLE},
+ {0, 0, MYSQL_TYPE_STRING, 0, 0, 0,0}
+};
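Each new information schema table is described by a table-driven array of ST_FIELD_INFO rows, terminated by a zeroed sentinel: column name, display length, type, flags, and the old-style column heading used by the matching SHOW command. The toy descriptor array below only mirrors that convention with invented types; it is an illustration of the pattern, not the server's ST_FIELD_INFO definition.

#include <cstdio>

// Simplified column descriptor: name, display length, SHOW column heading.
struct toy_field_info
{
  const char *field_name;
  unsigned    field_length;
  const char *old_name;
};

static const toy_field_info toy_index_stats_fields[]=
{
  {"TABLE_SCHEMA", 64, "Table_schema"},
  {"TABLE_NAME",   64, "Table_name"},
  {"INDEX_NAME",   64, "Index_name"},
  {"ROWS_READ",    21, "Rows_read"},
  {0, 0, 0}                                  // sentinel row ends the array
};

int main()
{
  for (const toy_field_info *f= toy_index_stats_fields; f->field_name; f++)
    std::printf("%-13s -> %s\n", f->field_name, f->old_name);
  return 0;
}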
/*
Find schema_tables element by name
@@ -6683,6 +7071,8 @@ ST_SCHEMA_TABLE schema_tables[]=
{
{"CHARACTER_SETS", charsets_fields_info, create_schema_table,
fill_schema_charsets, make_character_sets_old_format, 0, -1, -1, 0, 0},
+ {"CLIENT_STATISTICS", client_stats_fields_info, create_schema_table,
+ fill_schema_client_stats, make_old_format, 0, -1, -1, 0, 0},
{"COLLATIONS", collation_fields_info, create_schema_table,
fill_schema_collation, make_old_format, 0, -1, -1, 0, 0},
{"COLLATION_CHARACTER_SET_APPLICABILITY", coll_charset_app_fields_info,
@@ -6707,6 +7097,8 @@ ST_SCHEMA_TABLE schema_tables[]=
fill_status, make_old_format, 0, 0, -1, 0, 0},
{"GLOBAL_VARIABLES", variables_fields_info, create_schema_table,
fill_variables, make_old_format, 0, 0, -1, 0, 0},
+ {"INDEX_STATISTICS", index_stats_fields_info, create_schema_table,
+ fill_schema_index_stats, make_old_format, 0, -1, -1, 0, 0},
{"KEY_COLUMN_USAGE", key_column_usage_fields_info, create_schema_table,
get_all_tables, 0, get_schema_key_column_usage_record, 4, 5, 0,
OPEN_TABLE_ONLY},
@@ -6748,11 +7140,15 @@ ST_SCHEMA_TABLE schema_tables[]=
get_all_tables, make_table_names_old_format, 0, 1, 2, 1, 0},
{"TABLE_PRIVILEGES", table_privileges_fields_info, create_schema_table,
fill_schema_table_privileges, 0, 0, -1, -1, 0, 0},
+ {"TABLE_STATISTICS", table_stats_fields_info, create_schema_table,
+ fill_schema_table_stats, make_old_format, 0, -1, -1, 0, 0},
{"TRIGGERS", triggers_fields_info, create_schema_table,
get_all_tables, make_old_format, get_schema_triggers_record, 5, 6, 0,
OPEN_TABLE_ONLY},
{"USER_PRIVILEGES", user_privileges_fields_info, create_schema_table,
fill_schema_user_privileges, 0, 0, -1, -1, 0, 0},
+ {"USER_STATISTICS", user_stats_fields_info, create_schema_table,
+ fill_schema_user_stats, make_old_format, 0, -1, -1, 0, 0},
{"VARIABLES", variables_fields_info, create_schema_table, fill_variables,
make_old_format, 0, 0, -1, 1, 0},
{"VIEWS", view_fields_info, create_schema_table,
=== modified file 'sql/sql_table.cc'
--- a/sql/sql_table.cc 2009-09-18 01:04:43 +0000
+++ b/sql/sql_table.cc 2009-10-19 17:14:48 +0000
@@ -7815,7 +7815,7 @@ bool mysql_checksum_table(THD *thd, TABL
goto err;
}
ha_checksum row_crc= 0;
- int error= t->file->rnd_next(t->record[0]);
+ int error= t->file->ha_rnd_next(t->record[0]);
if (unlikely(error))
{
if (error == HA_ERR_RECORD_DELETED)
=== modified file 'sql/sql_udf.cc'
--- a/sql/sql_udf.cc 2009-05-15 12:57:51 +0000
+++ b/sql/sql_udf.cc 2009-10-19 17:14:48 +0000
@@ -567,10 +567,10 @@ int mysql_drop_function(THD *thd,const L
goto err;
table->use_all_columns();
table->field[0]->store(exact_name_str, exact_name_len, &my_charset_bin);
- if (!table->file->index_read_idx_map(table->record[0], 0,
- (uchar*) table->field[0]->ptr,
- HA_WHOLE_KEY,
- HA_READ_KEY_EXACT))
+ if (!table->file->ha_index_read_idx_map(table->record[0], 0,
+ (uchar*) table->field[0]->ptr,
+ HA_WHOLE_KEY,
+ HA_READ_KEY_EXACT))
{
int error;
if ((error = table->file->ha_delete_row(table->record[0])))
=== modified file 'sql/sql_update.cc'
--- a/sql/sql_update.cc 2009-09-07 20:50:10 +0000
+++ b/sql/sql_update.cc 2009-10-19 17:14:48 +0000
@@ -142,7 +142,7 @@ static void prepare_record_for_error_mes
/* Tell the engine about the new set. */
table->file->column_bitmaps_signal();
/* Read record that is identified by table->file->ref. */
- (void) table->file->rnd_pos(table->record[1], table->file->ref);
+ (void) table->file->ha_rnd_pos(table->record[1], table->file->ref);
/* Copy the newly read columns into the new record. */
for (field_p= table->field; (field= *field_p); field_p++)
if (bitmap_is_set(&unique_map, field->field_index))
@@ -1928,7 +1928,7 @@ int multi_update::do_updates()
{
if (thd->killed && trans_safe)
goto err;
- if ((local_error=tmp_table->file->rnd_next(tmp_table->record[0])))
+ if ((local_error= tmp_table->file->ha_rnd_next(tmp_table->record[0])))
{
if (local_error == HA_ERR_END_OF_FILE)
break;
@@ -1943,12 +1943,12 @@ int multi_update::do_updates()
uint field_num= 0;
do
{
- if((local_error=
- tbl->file->rnd_pos(tbl->record[0],
- (uchar *) tmp_table->field[field_num]->ptr)))
+ if ((local_error=
+ tbl->file->ha_rnd_pos(tbl->record[0],
+ (uchar*) tmp_table->field[field_num]->ptr)))
goto err;
field_num++;
- } while((tbl= check_opt_it++));
+ } while ((tbl= check_opt_it++));
table->status|= STATUS_UPDATED;
store_record(table,record[1]);
=== modified file 'sql/sql_yacc.yy'
--- a/sql/sql_yacc.yy 2009-09-07 20:50:10 +0000
+++ b/sql/sql_yacc.yy 2009-10-19 17:14:48 +0000
@@ -598,6 +598,7 @@ bool my_yyoverflow(short **a, YYSTYPE **
%token CHECK_SYM /* SQL-2003-R */
%token CIPHER_SYM
%token CLIENT_SYM
+%token CLIENT_STATS_SYM
%token CLOSE_SYM /* SQL-2003-R */
%token COALESCE /* SQL-2003-N */
%token CODE_SYM
@@ -744,6 +745,7 @@ bool my_yyoverflow(short **a, YYSTYPE **
%token IMPORT
%token INDEXES
%token INDEX_SYM
+%token INDEX_STATS_SYM
%token INFILE
%token INITIAL_SIZE_SYM
%token INNER_SYM /* SQL-2003-R */
@@ -985,6 +987,7 @@ bool my_yyoverflow(short **a, YYSTYPE **
%token SIGNED_SYM
%token SIMPLE_SYM /* SQL-2003-N */
%token SLAVE
+%token SLOW_SYM
%token SMALLINT /* SQL-2003-R */
%token SNAPSHOT_SYM
%token SOCKET_SYM
@@ -1029,6 +1032,7 @@ bool my_yyoverflow(short **a, YYSTYPE **
%token TABLES
%token TABLESPACE
%token TABLE_REF_PRIORITY
+%token TABLE_STATS_SYM
%token TABLE_SYM /* SQL-2003-R */
%token TABLE_CHECKSUM_SYM
%token TEMPORARY /* SQL-2003-N */
@@ -1076,6 +1080,7 @@ bool my_yyoverflow(short **a, YYSTYPE **
%token UPGRADE_SYM
%token USAGE /* SQL-2003-N */
%token USER /* SQL-2003-R */
+%token USER_STATS_SYM
%token USE_FRM
%token USE_SYM
%token USING /* SQL-2003-R */
@@ -10131,6 +10136,34 @@ show_param:
{
Lex->sql_command = SQLCOM_SHOW_SLAVE_STAT;
}
+ | CLIENT_STATS_SYM
+ {
+ LEX *lex= Lex;
+ lex->sql_command= SQLCOM_SHOW_CLIENT_STATS;
+ if (prepare_schema_table(YYTHD, lex, 0, SCH_CLIENT_STATS))
+ MYSQL_YYABORT;
+ }
+ | USER_STATS_SYM
+ {
+ LEX *lex= Lex;
+ lex->sql_command= SQLCOM_SHOW_USER_STATS;
+ if (prepare_schema_table(YYTHD, lex, 0, SCH_USER_STATS))
+ MYSQL_YYABORT;
+ }
+ | TABLE_STATS_SYM
+ {
+ LEX *lex= Lex;
+ lex->sql_command= SQLCOM_SHOW_TABLE_STATS;
+ if (prepare_schema_table(YYTHD, lex, 0, SCH_TABLE_STATS))
+ MYSQL_YYABORT;
+ }
+ | INDEX_STATS_SYM
+ {
+ LEX *lex= Lex;
+ lex->sql_command= SQLCOM_SHOW_INDEX_STATS;
+ if (prepare_schema_table(YYTHD, lex, 0, SCH_INDEX_STATS))
+ MYSQL_YYABORT;
+ }
| CREATE PROCEDURE sp_name
{
LEX *lex= Lex;
@@ -10339,6 +10372,16 @@ flush_option:
{ Lex->type|= REFRESH_STATUS; }
| SLAVE
{ Lex->type|= REFRESH_SLAVE; }
+ | SLOW_SYM QUERY_SYM LOGS_SYM
+ { Lex->type |= REFRESH_SLOW_QUERY_LOG; }
+ | CLIENT_STATS_SYM
+ { Lex->type|= REFRESH_CLIENT_STATS; }
+ | USER_STATS_SYM
+ { Lex->type|= REFRESH_USER_STATS; }
+ | TABLE_STATS_SYM
+ { Lex->type|= REFRESH_TABLE_STATS; }
+ | INDEX_STATS_SYM
+ { Lex->type|= REFRESH_INDEX_STATS; }
| MASTER_SYM
{ Lex->type|= REFRESH_MASTER; }
| DES_KEY_FILE
@@ -11447,6 +11490,7 @@ keyword_sp:
| CHAIN_SYM {}
| CHANGED {}
| CIPHER_SYM {}
+ | CLIENT_STATS_SYM {}
| CLIENT_SYM {}
| COALESCE {}
| CODE_SYM {}
@@ -11508,6 +11552,7 @@ keyword_sp:
| HOSTS_SYM {}
| HOUR_SYM {}
| IDENTIFIED_SYM {}
+ | INDEX_STATS_SYM {}
| INVOKER_SYM {}
| IMPORT {}
| INDEXES {}
@@ -11631,6 +11676,7 @@ keyword_sp:
| SIMPLE_SYM {}
| SHARE_SYM {}
| SHUTDOWN {}
+ | SLOW_SYM {}
| SNAPSHOT_SYM {}
| SOUNDS_SYM {}
| SOURCE_SYM {}
@@ -11650,6 +11696,7 @@ keyword_sp:
| SUSPEND_SYM {}
| SWAPS_SYM {}
| SWITCHES_SYM {}
+ | TABLE_STATS_SYM {}
| TABLES {}
| TABLE_CHECKSUM_SYM {}
| TABLESPACE {}
@@ -11675,6 +11722,7 @@ keyword_sp:
| UNKNOWN_SYM {}
| UNTIL_SYM {}
| USER {}
+ | USER_STATS_SYM {}
| USE_FRM {}
| VARIABLES {}
| VIEW_SYM {}
=== modified file 'sql/structs.h'
--- a/sql/structs.h 2009-06-26 19:57:42 +0000
+++ b/sql/structs.h 2009-10-19 17:14:48 +0000
@@ -76,6 +76,7 @@ typedef struct st_key {
uint extra_length;
uint usable_key_parts; /* Should normally be = key_parts */
uint block_size;
+ uint name_length;
enum ha_key_alg algorithm;
/*
Note that parser is used when the table is opened for use, and
@@ -88,6 +89,8 @@ typedef struct st_key {
};
KEY_PART_INFO *key_part;
char *name; /* Name of key */
+ /* Unique name for cache; db + \0 + table_name + \0 + key_name + \0 */
+ uchar *cache_name;
/*
Array of AVG(#records with the same field value) for 1st ... Nth key part.
0 means 'not known'.
@@ -231,6 +234,111 @@ typedef struct user_conn {
USER_RESOURCES user_resources;
} USER_CONN;
+typedef struct st_user_stats
+{
+ char user[max(USERNAME_LENGTH, LIST_PROCESS_HOST_LEN) + 1];
+ // Account name the user is mapped to when this is a user from mapped_user.
+ // Otherwise, the same value as user.
+ char priv_user[max(USERNAME_LENGTH, LIST_PROCESS_HOST_LEN) + 1];
+ uint user_name_length;
+ uint total_connections;
+ uint concurrent_connections;
+ time_t connected_time; // in seconds
+ double busy_time; // in seconds
+ double cpu_time; // in seconds
+ ulonglong bytes_received;
+ ulonglong bytes_sent;
+ ulonglong binlog_bytes_written;
+ ha_rows rows_read, rows_sent;
+ ha_rows rows_updated, rows_deleted, rows_inserted;
+ ulonglong select_commands, update_commands, other_commands;
+ ulonglong commit_trans, rollback_trans;
+ ulonglong denied_connections, lost_connections;
+ ulonglong access_denied_errors;
+ ulonglong empty_queries;
+} USER_STATS;
+
+/* Lookup function for hash tables with USER_STATS entries */
+extern "C" uchar *get_key_user_stats(USER_STATS *user_stats, size_t *length,
+ my_bool not_used __attribute__((unused)));
+
+/* Free all memory for a hash table with USER_STATS entries */
+extern void free_user_stats(USER_STATS* user_stats);
+
+/* Initialize an instance of USER_STATS */
+extern void
+init_user_stats(USER_STATS *user_stats,
+ const char *user,
+ size_t user_length,
+ const char *priv_user,
+ uint total_connections,
+ uint concurrent_connections,
+ time_t connected_time,
+ double busy_time,
+ double cpu_time,
+ ulonglong bytes_received,
+ ulonglong bytes_sent,
+ ulonglong binlog_bytes_written,
+ ha_rows rows_sent,
+ ha_rows rows_read,
+ ha_rows rows_inserted,
+ ha_rows rows_deleted,
+ ha_rows rows_updated,
+ ulonglong select_commands,
+ ulonglong update_commands,
+ ulonglong other_commands,
+ ulonglong commit_trans,
+ ulonglong rollback_trans,
+ ulonglong denied_connections,
+ ulonglong lost_connections,
+ ulonglong access_denied_errors,
+ ulonglong empty_queries);
+
+/* Increment values of an instance of USER_STATS */
+extern void
+add_user_stats(USER_STATS *user_stats,
+ uint total_connections,
+ uint concurrent_connections,
+ time_t connected_time,
+ double busy_time,
+ double cpu_time,
+ ulonglong bytes_received,
+ ulonglong bytes_sent,
+ ulonglong binlog_bytes_written,
+ ha_rows rows_sent,
+ ha_rows rows_read,
+ ha_rows rows_inserted,
+ ha_rows rows_deleted,
+ ha_rows rows_updated,
+ ulonglong select_commands,
+ ulonglong update_commands,
+ ulonglong other_commands,
+ ulonglong commit_trans,
+ ulonglong rollback_trans,
+ ulonglong denied_connections,
+ ulonglong lost_connections,
+ ulonglong access_denied_errors,
+ ulonglong empty_queries);
+
+typedef struct st_table_stats
+{
+ char table[NAME_LEN * 2 + 2]; // [db] + '\0' + [table] + '\0'
+ uint table_name_length;
+ ulonglong rows_read, rows_changed;
+ ulonglong rows_changed_x_indexes;
+ /* Stores enum db_type, but forward declarations cannot be done */
+ int engine_type;
+} TABLE_STATS;
+
+typedef struct st_index_stats
+{
+ // [db] + '\0' + [table] + '\0' + [index] + '\0'
+ char index[NAME_LEN * 3 + 3];
+ uint index_name_length; /* Length of 'index' */
+ ulonglong rows_read;
+} INDEX_STATS;
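get_key_user_stats() declared above is the usual my_hash-style lookup callback: given an element, it returns a pointer to the key bytes stored inside the element and reports the key length. The snippet below is a stand-alone approximation of that kind of callback with invented names and standard types only (the real signature also takes the unused my_bool argument shown in the declaration).

#include <cstddef>
#include <cstring>

// Minimal element type; the key is the user name stored inline.
struct toy_user_stats
{
  char user[65];
  unsigned user_name_length;
};

// Hash lookup callback: expose the key bytes and their length.
static const unsigned char *toy_get_key(const toy_user_stats *entry,
                                        std::size_t *length)
{
  *length= entry->user_name_length;
  return reinterpret_cast<const unsigned char*>(entry->user);
}

int main()
{
  toy_user_stats u;
  std::strcpy(u.user, "root");
  u.user_name_length= 4;
  std::size_t len= 0;
  return (toy_get_key(&u, &len) && len == 4) ? 0 : 1;
}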
+
+
/* Bits in form->update */
#define REG_MAKE_DUPP 1 /* Make a copy of record when read */
#define REG_NEW_RECORD 2 /* Write a new record if not found */
=== modified file 'sql/table.cc'
--- a/sql/table.cc 2009-09-09 21:06:57 +0000
+++ b/sql/table.cc 2009-10-19 17:14:48 +0000
@@ -1325,6 +1325,19 @@ static int open_binary_frm(THD *thd, TAB
{
uint usable_parts= 0;
keyinfo->name=(char*) share->keynames.type_names[key];
+ keyinfo->name_length= strlen(keyinfo->name);
+ keyinfo->cache_name=
+ (uchar*) alloc_root(&share->mem_root,
+ share->table_cache_key.length+
+ keyinfo->name_length + 1);
+ if (keyinfo->cache_name) // If not out of memory
+ {
+ uchar *pos= keyinfo->cache_name;
+ memcpy(pos, share->table_cache_key.str, share->table_cache_key.length);
+ memcpy(pos + share->table_cache_key.length, keyinfo->name,
+ keyinfo->name_length+1);
+ }
+
/* Fix fulltext keys for old .frm files */
if (share->key_info[key].flags & HA_FULLTEXT)
share->key_info[key].algorithm= HA_KEY_ALG_FULLTEXT;
=== modified file 'sql/table.h'
--- a/sql/table.h 2009-09-15 10:46:35 +0000
+++ b/sql/table.h 2009-10-19 17:14:48 +0000
@@ -878,6 +878,7 @@ typedef struct st_foreign_key_info
enum enum_schema_tables
{
SCH_CHARSETS= 0,
+ SCH_CLIENT_STATS,
SCH_COLLATIONS,
SCH_COLLATION_CHARACTER_SET_APPLICABILITY,
SCH_COLUMNS,
@@ -887,6 +888,7 @@ enum enum_schema_tables
SCH_FILES,
SCH_GLOBAL_STATUS,
SCH_GLOBAL_VARIABLES,
+ SCH_INDEX_STATS,
SCH_KEY_COLUMN_USAGE,
SCH_OPEN_TABLES,
SCH_PARTITIONS,
@@ -905,8 +907,10 @@ enum enum_schema_tables
SCH_TABLE_CONSTRAINTS,
SCH_TABLE_NAMES,
SCH_TABLE_PRIVILEGES,
+ SCH_TABLE_STATS,
SCH_TRIGGERS,
SCH_USER_PRIVILEGES,
+ SCH_USER_STATS,
SCH_VARIABLES,
SCH_VIEWS
};
=== modified file 'sql/tztime.cc'
--- a/sql/tztime.cc 2009-09-07 20:50:10 +0000
+++ b/sql/tztime.cc 2009-10-19 17:14:48 +0000
@@ -1676,7 +1676,7 @@ my_tz_init(THD *org_thd, const char *def
tz_leapcnt= 0;
- res= table->file->index_first(table->record[0]);
+ res= table->file->ha_index_first(table->record[0]);
while (!res)
{
@@ -1698,7 +1698,7 @@ my_tz_init(THD *org_thd, const char *def
tz_leapcnt, (ulong) tz_lsis[tz_leapcnt-1].ls_trans,
tz_lsis[tz_leapcnt-1].ls_corr));
- res= table->file->index_next(table->record[0]);
+ res= table->file->ha_index_next(table->record[0]);
}
(void)table->file->ha_index_end();
@@ -1865,8 +1865,8 @@ tz_load_from_open_tables(const String *t
*/
(void)table->file->ha_index_init(0, 1);
- if (table->file->index_read_map(table->record[0], table->field[0]->ptr,
- HA_WHOLE_KEY, HA_READ_KEY_EXACT))
+ if (table->file->ha_index_read_map(table->record[0], table->field[0]->ptr,
+ HA_WHOLE_KEY, HA_READ_KEY_EXACT))
{
#ifdef EXTRA_DEBUG
/*
@@ -1893,8 +1893,8 @@ tz_load_from_open_tables(const String *t
table->field[0]->store((longlong) tzid, TRUE);
(void)table->file->ha_index_init(0, 1);
- if (table->file->index_read_map(table->record[0], table->field[0]->ptr,
- HA_WHOLE_KEY, HA_READ_KEY_EXACT))
+ if (table->file->ha_index_read_map(table->record[0], table->field[0]->ptr,
+ HA_WHOLE_KEY, HA_READ_KEY_EXACT))
{
sql_print_error("Can't find description of time zone '%u'", tzid);
goto end;
@@ -1920,8 +1920,8 @@ tz_load_from_open_tables(const String *t
table->field[0]->store((longlong) tzid, TRUE);
(void)table->file->ha_index_init(0, 1);
- res= table->file->index_read_map(table->record[0], table->field[0]->ptr,
- (key_part_map)1, HA_READ_KEY_EXACT);
+ res= table->file->ha_index_read_map(table->record[0], table->field[0]->ptr,
+ (key_part_map)1, HA_READ_KEY_EXACT);
while (!res)
{
ttid= (uint)table->field[1]->val_int();
@@ -1968,8 +1968,8 @@ tz_load_from_open_tables(const String *t
tmp_tz_info.typecnt= ttid + 1;
- res= table->file->index_next_same(table->record[0],
- table->field[0]->ptr, 4);
+ res= table->file->ha_index_next_same(table->record[0],
+ table->field[0]->ptr, 4);
}
if (res != HA_ERR_END_OF_FILE)
@@ -1991,8 +1991,8 @@ tz_load_from_open_tables(const String *t
table->field[0]->store((longlong) tzid, TRUE);
(void)table->file->ha_index_init(0, 1);
- res= table->file->index_read_map(table->record[0], table->field[0]->ptr,
- (key_part_map)1, HA_READ_KEY_EXACT);
+ res= table->file->ha_index_read_map(table->record[0], table->field[0]->ptr,
+ (key_part_map)1, HA_READ_KEY_EXACT);
while (!res)
{
ttime= (my_time_t)table->field[1]->val_int();
@@ -2021,8 +2021,8 @@ tz_load_from_open_tables(const String *t
("time_zone_transition table: tz_id: %u tt_time: %lu tt_id: %u",
tzid, (ulong) ttime, ttid));
- res= table->file->index_next_same(table->record[0],
- table->field[0]->ptr, 4);
+ res= table->file->ha_index_next_same(table->record[0],
+ table->field[0]->ptr, 4);
}
/*