Allocating RAM for MySQL - The Short Answer
If using just MyISAM, set key_buffer_size to 20% of available RAM.
(Plus innodb_buffer_pool_size=0) (You should be moving away from MyISAM.)
If using just InnoDB, set innodb_buffer_pool_size to 70% of available RAM. (Plus key_buffer_size = 10M, small, but not zero.)
"Available" memory is what is left over after leaving room for other apps (web server, PHP, Java, etc) if they are on the same server.
Rule of thumb for tuning MySQL:
⚈ Start with released copy of my.cnf / my.ini.
⚈ Change key_buffer_size and innodb_buffer_pool_size according to engine usage and RAM.
⚈ Slow queries can usually be 'fixed' via indexes, schema changes, or SELECT changes, not by tuning.
⚈ Don't get carried away with the Query cache until you understand what it can and cannot do.
⚈ Don't change anything else unless you run into trouble (eg, max connections).
⚈ Be sure the changes are under the [mysqld] section, not some other section.
The 20%/70% assume you have at least 4GB of RAM. If you have a tiny antique server, or a tiny VM, then those percentages are too high.
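The 20%/70% rule is simple arithmetic; here is a sketch of it in Python (the function name and structure are mine, not any official API):

```python
def suggest_cache_sizes(ram_bytes, engine):
    """Starting points per the 20%/70% rule of thumb.
    engine: 'myisam' or 'innodb'; mixed setups need lower numbers.
    Returns (key_buffer_size, innodb_buffer_pool_size) in bytes."""
    GB = 1024 ** 3
    MB = 1024 ** 2
    if ram_bytes < 4 * GB:
        raise ValueError("tiny server/VM: these percentages are too high")
    if engine == 'myisam':
        return int(ram_bytes * 0.20), 0          # key_buffer only
    if engine == 'innodb':
        return 10 * MB, int(ram_bytes * 0.70)    # key_buffer small, but not zero
    raise ValueError("unknown engine")

# Example: a dedicated 16GB InnoDB-only server
kb, bp = suggest_cache_sizes(16 * 1024 ** 3, 'innodb')
```

"RAM" here should be the "available" memory described above, after subtracting what other apps on the box need.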
Now for the gory details. (NDB Cluster is not discussed here.)
What is the key_buffer?
MyISAM does two different things for caching.
⚈ Index blocks (1KB each, BTree structured, from .MYI file) are cached in the "key buffer".
⚈ Data block caching (from .MYD file) is left to the OS, so be sure to leave a bunch of free space for this.
Caveat: Some flavors of OS always claim to be using over 90%, even when there is really lots of free space.
SHOW GLOBAL STATUS LIKE 'Key%'; then calculate Key_read_requests / Key_reads. If the ratio is high (say, over 10), then the key_buffer is big enough.
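That check is easy to script. A minimal sketch (helper name is mine; the two inputs come from SHOW GLOBAL STATUS LIKE 'Key%'):

```python
def key_buffer_ok(key_read_requests, key_reads, threshold=10):
    """True if the MyISAM key_buffer hit ratio looks healthy.
    A high requests-to-disk-reads ratio means the cache is big enough."""
    if key_reads == 0:
        return True  # the cache never had to go to disk
    return key_read_requests / key_reads > threshold

# e.g. 1,000,000 requests but only 20,000 disk reads -> ratio 50: fine
```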
What is the buffer_pool?
InnoDB does all its caching in the "buffer pool", whose size is controlled
by innodb_buffer_pool_size. It contains 16KB data and index blocks from
the open tables, plus some maintenance overhead.
MySQL 5.5 (and 5.1 with the "Plugin") lets you declare the block size to be 8KB or 4KB. (This is an almost-never-used feature.) MySQL 5.5 allows multiple buffer pool "instances"; this can help because there is one mutex per pool, thereby relieving some of the Mutex bottleneck.
More on InnoDB Tuning
This will set the main cache settings to the minimum; that
could be important on systems with lots of other processes
and/or 2GB or less of RAM.
Do SHOW TABLE STATUS for all the tables in all the databases. Or run this to see the values for your system. (If you have a lot of tables, it can take minute(s).)
SELECT ENGINE,
       ROUND(SUM(data_length) /1024/1024, 1) AS "Data MB",
       ROUND(SUM(index_length)/1024/1024, 1) AS "Index MB",
       ROUND(SUM(data_length + index_length)/1024/1024, 1) AS "Total MB",
       COUNT(*) "Num Tables"
    FROM information_schema.tables
    WHERE table_schema not in ("information_schema", "PERFORMANCE_SCHEMA", "SYS_SCHEMA", "ndbinfo", "sys")
    GROUP BY ENGINE;
(There may be an issue in MySQL 8.0 of getting these values initialized.)
Add up Index_length for all the MyISAM tables. Set key_buffer_size no larger than that size, but not bigger than 20% of RAM.
Add up Data_length + Index_length for all the InnoDB tables. Set innodb_buffer_pool_size to no more than 110% of that total, but not bigger than 70% of RAM.
If that leads to swapping, cut both settings back. Suggest cutting them down proportionately.
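The sizing arithmetic above can be sketched as follows (a rough helper of my own, not an official formula):

```python
def size_caches(myisam_index_bytes, innodb_total_bytes, ram_bytes):
    """Cap key_buffer_size at the MyISAM Index_length total (max 20% of RAM),
    and innodb_buffer_pool_size at 110% of InnoDB Data+Index (max 70% of RAM).
    Inputs come from summing SHOW TABLE STATUS / information_schema.tables."""
    key_buffer = min(myisam_index_bytes, int(ram_bytes * 0.20))
    buffer_pool = min(int(innodb_total_bytes * 1.10), int(ram_bytes * 0.70))
    return key_buffer, buffer_pool

# Example: 1GB of MyISAM indexes, 40GB of InnoDB data+index, 32GB of RAM
GB = 1024 ** 3
kb, bp = size_caches(1 * GB, 40 * GB, 32 * GB)
# kb is capped by the index total; bp is capped by the 70%-of-RAM rule
```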
MySQL was designed in the days of single-CPU machines, and designed
to be easily ported to many different architectures.
Unfortunately, that led to some sloppiness in how actions are interlocked.
There is a small number (too small) of "mutexes" controlling access to several critical
structures. Of note:
⚈ MyISAM's key_buffer
⚈ The Query Cache
⚈ InnoDB's buffer_pool
With multi-core boxes, the mutex problem is causing performance problems. In version 5.0, after 4-8 cores, MySQL gets slower, not faster. MySQL 5.5 and Percona's XtraDB are making that somewhat better in InnoDB; the practical limit for cores is more like 32, and performance tends to plateau after that rather than declining. 5.6 claims to scale up to about 48 cores.
HyperThreading and Multiple cores (CPUs)
HyperThreading is mostly a thing of the past, so this section may not apply.
HyperThreading is great for marketing, lousy for performance. It involves having two processing units sharing a single hardware cache. If both units are doing the same thing, the cache will be reasonably useful. If the units are doing different things, they will be clobbering each other's cache entries.
Furthermore, MySQL is not great at using multiple cores. So, if you turn off HT, the remaining cores run a little faster.
32-bit OS and MySQL
First, the OS (and the hardware?) may conspire to not let you use
all 4GB, if that is what you have.
If you have more than 4GB of RAM, the excess beyond 4GB
is totally inaccessible and unusable on a 32-bit OS.
Secondly, the OS probably has a limit on how much RAM it will allow any process to use.
Example: FreeBSD's maxdsiz, which defaults to 512MB.
$ ulimit -a
max memory size (kbytes, -m) 524288
So, once you have determined how much RAM is available to mysqld, then apply the 20%/70%, but round down some.
If you get an error like [ERROR] /usr/libexec/mysqld: Out of memory (Needed xxx bytes), it probably means that MySQL exceeded what the OS is willing to give it. Decrease the cache settings.
64-bit OS with 32-bit MySQL
The OS is not limited by 4GB, but MySQL is.
If you have more than 4GB of RAM, apply the 20%/70% guideline to only 4GB.
You should probably upgrade MySQL to 64-bit.
64-bit OS and MySQL
MyISAM only: key_buffer_size (before 5.0.52 / 5.1.23) had a hard limit of 4G.
Otherwise, use about 20% of RAM. Set (in my.cnf / my.ini) innodb_buffer_pool_size = 0.
InnoDB only: innodb_buffer_pool_size = 70% of RAM. If you have lots of RAM and are using 5.5 (or later), then consider having multiple pool instances. Recommend 1-16 innodb_buffer_pool_instances, such that each one is no smaller than 1GB. (Sorry, no metric on how much this will help; probably not a lot.)
Meanwhile, set key_buffer_size = 20M (tiny, but non-zero)
If you have a mixture of engines, lower both numbers.
For just InnoDB, this link has a detailed, nuanced, discussion of setting innodb_buffer_pool_size: scalegrid.io
Each "thread" takes some amount of RAM.
This used to be about 200KB; 100 threads would be 20MB, not a significant size.
If you have max_connections = 1000, then you are talking about 200MB, maybe more.
Having that many connections probably implies other issues that should be addressed.
In 5.6 (or MariaDB 5.5), optional thread pooling interacts with max_connections. This is a more advanced topic.
Thread stack overrun rarely happens. If it does, do something like thread_stack=256K
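The per-thread arithmetic above is worth doing before raising max_connections. A rough estimate (the 200KB figure is the historical ballpark cited above; newer versions use more per thread):

```python
def thread_ram_estimate(max_connections, per_thread_bytes=200 * 1024):
    """Worst-case RAM if every allowed connection is open at once.
    200KB/thread is a historical ballpark, not a guaranteed figure."""
    return max_connections * per_thread_bytes

# 1000 connections at ~200KB each is roughly 200MB
```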
More on max_connections, wait_timeout, connection pooling, etc
Although you may be tempted to increase max_connections, it is often the wrong approach to handling a "busy" system.
⚈ Speeding up queries by using composite indexes and/or reformulating queries is usually the best way to get queries, hence connections, to finish faster.
⚈ If the client (eg a web server such as Apache) is letting a lot of connections occur, they could be piling up and stumbling over each other. If Threads_running exceeds 100, you may have this thundering herd problem.
(The name changed in 5.1 from table_cache to table_open_cache.)
Note: The 'unit' for the setting is _files_, not bytes.
The OS has some limit on the number of open files it will let a process have. Each table needs 1 to 3 open files. Each PARTITION is effectively a table. Some operations on a partitioned table open all partitions.
In *nix, ulimit tells you what file limit the OS is allowing the process to have. The maximum value is in the tens of thousands, but sometimes it is set to only 1024. This limits you to about 300 tables. More discussion on ulimit
(This paragraph is disputed.) On the other side, the table cache is (was) inefficiently implemented -- lookups were done with a linear scan. Hence, setting table_open_cache in the thousands could actually slow down mysql. (Benchmarks have shown this.)
To help with the inefficiency of the table cache, table_open_cache_instances was added in 5.6. Suggest setting it to 16, which is the default as of version 5.7.
You can see how well your system is performing via SHOW GLOBAL STATUS; compute the opens/second via Opened_tables / Uptime. If this is more than, say, 5 (per second), table_open_cache should be increased. If it is less than, say, 1, you might get improvement by decreasing table_open_cache.
As of 5.6, Table_open_cache_misses/hits/overflows give a good indicator of whether table_open_cache is too low. (Alas, no clue of "too high".)
I recommend increasing table_open_cache in these cases:
⚈ Table_open_cache_misses / (Table_open_cache_hits + Table_open_cache_misses) > 3%
⚈ Table_open_cache_misses / Uptime > 1/second
⚈ Table_open_cache_overflows / Uptime > 1/second
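Those three indicators can be checked together; a sketch (the function is mine, the counters are the 5.6+ status variables named above):

```python
def should_raise_table_open_cache(hits, misses, overflows, uptime):
    """Apply the three table_open_cache indicators above.
    Inputs: Table_open_cache_hits/misses/overflows and Uptime (seconds)."""
    total = hits + misses
    if total and misses / total > 0.03:   # miss rate over 3%
        return True
    if misses / uptime > 1:               # more than 1 miss/second
        return True
    return overflows / uptime > 1         # more than 1 overflow/second
```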
Short answer: query_cache_type = OFF and query_cache_size = 0
The QC is effectively a hash mapping SELECT statements to resultsets.
Long answer... There are many aspects of the "Query cache"; many are negative.
⚈ Novice Alert! The QC is totally unrelated to the key_buffer and buffer_pool.
⚈ When it is useful, the QC is blazingly fast. It would not be hard to create a benchmark that runs 1000x faster.
⚈ There is (or was) a single mutex controlling the QC. So, multi-core servers can get bottlenecked here.
⚈ The QC, unless it is both OFF & 0, is consulted for every SELECT.
⚈ Yes, the mutex is hit even if query_cache_type = DEMAND (2).
⚈ Yes, the mutex is hit even for SQL_NO_CACHE.
⚈ Any change to a query (even adding a space) leads to a different entry in the QC.
⚈ If my.cnf says type=ON and you later turn it OFF, it is not fully OFF. Ref: https://bugs.mysql.com/bug.php?id=60696
"Pruning" is costly and frequent:
⚈ When any write happens on a table, all entries in the QC for that table are removed.
⚈ It happens even on a readonly Replica.
⚈ Purges are performed with a linear algorithm, so a large QC (even 200MB) can be noticeably slow.
To see how well your QC is performing, SHOW GLOBAL STATUS LIKE 'Qc%'; then compute the read hit rate: Qcache_hits / Qcache_inserts. If it is over, say, 5, the QC might be worth keeping.
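As a one-liner check (helper name is mine; inputs come from SHOW GLOBAL STATUS LIKE 'Qc%'):

```python
def qc_worth_keeping(qcache_hits, qcache_inserts, threshold=5):
    """Query-cache read hit rate per the rule of thumb above."""
    if qcache_inserts == 0:
        return False  # nothing ever cached; the QC is pure overhead
    return qcache_hits / qcache_inserts > threshold
```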
If you decide the QC is right for you, then I recommend
⚈ query_cache_size = no more than 50M. (Amazon Aurora re-implemented the QC, making this limit less important.)
⚈ query_cache_type = DEMAND
⚈ SQL_CACHE or SQL_NO_CACHE in all SELECTs, based on which queries are likely to benefit from caching.
Why to turn off the QC
Discussion about size
This is a minor tunable. thread_cache_size=0 for Windows.
For *nix, zero will slow down thread (connection) creation. A small (say, 10), non-zero number is good. The setting has essentially no impact on RAM usage.
If (Threads_created / Uptime) is more than, say, 3/second, increase thread_cache_size.
It is the number of extra processes to hang onto. It does not restrict the number of threads; max_connections does.
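The thread_cache check above, as a sketch (inputs are the Threads_created status counter and Uptime in seconds):

```python
def should_raise_thread_cache(threads_created, uptime, threshold=3.0):
    """True if threads are being created faster than ~3/second,
    suggesting thread_cache_size should be increased."""
    return threads_created / uptime > threshold
```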
There are hundreds of things that can be changed in the VARIABLES. In general, you should not be changing them; raising values can lead to swapping (bad for performance) or OOM (and a crash).
Here are some commonly changed things, plus the dangers.
Keep these under 1% of RAM. They could be allocated multiple times, thereby leading to filling up RAM: max_heap_table_size, tmp_table_size, join_buffer_size, sort_buffer_size, read_buffer_size, read_rnd_buffer_size
These should generally be left alone: innodb_log_buffer_size, myisam_sort_buffer_size, binlog_stmt_cache_size
In some specific situations these need to be raised; read what they do, then change accordingly: max_allowed_packet, bulk_insert_buffer_size
Some of those settings can be done at the "session" level. That would allow you to increase the setting right before a "big" query and have it apply only to that query if you change it back promptly. Likely cases:
⚈ group_concat_max_len when you want a list longer than 1024 bytes.
⚈ max_heap_table_size just before CREATE TABLE (...) ENGINE=MEMORY when you know that the table will be big. (You should also decrease innodb_buffer_pool_size permanently to make room in RAM.)
Again, don't make changes unless you have a good reason for it.
If you have turned on binary logging (via log_bin) for replication and/or point-in-time recovery, the server will create binary logs forever. That is, they can slowly fill up the disk. Suggest setting expire_logs_days = 14 to keep only 14 days' worth of logs.
If "day" is too coarse, see binlog_expire_logs_seconds (in 8.0).
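In my.cnf terms, a sketch (pick the retention that fits your recovery needs; 14 days is just the suggestion above):

```
# under [mysqld] -- keep two weeks of binary logs
expire_logs_days = 14
# or, in 8.0, with finer granularity (14 days = 1209600 seconds):
# binlog_expire_logs_seconds = 1209600
```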
RHEL, in its infinite wisdom, decided to let you control how aggressively the OS
will preemptively swap RAM. This is good in general, but lousy for MySQL.
MySQL would love for RAM allocations to be reasonably stable -- the caches are (mostly) pre-allocated; the threads, etc, are (mostly) of limited scope. ANY swapping is likely to severely hurt performance of MySQL.
With a high value for swappiness, you lose some RAM because the OS is trying to keep a lot of space free for future allocations (that MySQL is not likely to need).
With swappiness = 0, the OS will probably crash rather than swap. I would rather have MySQL limping than dead. The latest recommendation is swappiness = 1. (2015)
Somewhere in between (say, 5?) might be a good value for a MySQL-only server.
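On Linux, that looks something like this (the value 1 is the 2015 recommendation quoted above; a MySQL-only server might justify something slightly higher):

```
# apply immediately:
#   sysctl vm.swappiness=1
# persist across reboots, in /etc/sysctl.conf:
vm.swappiness = 1
```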
OK, it's time to complicate the architecture of how a CPU talks to RAM.
NUMA (Non-Uniform Memory Access) enters the picture.
Each CPU (or maybe socket with several cores) has a part of the RAM
hanging off it. This leads to memory access being faster for local
RAM, but slower (tens of cycles slower) for RAM hanging off other CPUs.
Then the OS enters the picture. In at least one case (RHEL?), two things seem to be done:
⚈ OS allocations are pinned to the 'first' CPU's RAM.
⚈ Other allocations go by default to the first CPU until it is full.
Now for the problem.
⚈ The OS and MySQL have allocated all the 'first' RAM.
⚈ MySQL has allocated some of the second RAM.
⚈ The OS needs to allocate something.
Ouch -- it is out of room in the one CPU where it is willing to allocate its stuff, so it swaps out some of MySQL. Bad.
dmesg | grep -i numa # to see if you have numa
Probable solution: Configure the BIOS to "interleave" the RAM allocations. This should prevent the premature swapping, at the cost of off-CPU RAM accesses half the time. Well, you have the costly accesses anyway, since you really want to use all of RAM. Older versions: numactl --interleave=all. Or: innodb_numa_interleave=1, available as a MySQL VARIABLE as of 5.6.27.
Another possible solution: Turn numa off (if the OS has a way of doing that)
Overall performance loss/gain: A few percent.
This is another hardware performance gimmick.
For a CPU to access RAM, especially mapping a 64-bit address to somewhere in, say, 128GB of 'real' RAM, the TLB is used. (TLB = Translation Lookaside Buffer.) Think of the TLB as a hardware associative memory lookup table, mapping 64-bit 'virtual' addresses into 'real' addresses.
Because it is an associative memory of finite size, sometimes there will be "misses" that require reaching into real RAM to resolve the lookup. This is costly, so should be avoided.
Normally, RAM is 'paged' in 4KB pieces; the TLB actually maps the top (64-12) bits into a specific page. Then the bottom 12 bits of the virtual address are carried over intact.
For example, 128GB of RAM broken into 4KB pages means 32M page-table entries. This is a lot, and probably far exceeds the capacity of the TLB. So, enter the Huge pages trick.
With the help of both the hardware and the OS, it is possible to have some of RAM in huge pages, of say 4MB (instead of 4KB). This leads to far fewer TLB entries, but it means the unit of paging is 4MB for such parts of RAM. Hence, huge pages tend to be non-pageable.
Now RAM is broken into pageable and non-pageable parts; what parts can reasonably be non-pageable? In MySQL, the innodb_buffer_pool is a perfect candidate. So, by correctly configuring these, InnoDB can run a little faster:
⚈ Huge pages enabled
⚈ Tell the OS to allocate the right amount (namely to match the buffer_pool)
⚈ Tell MySQL to use huge pages
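A sketch of those three steps on Linux (the sizes are hypothetical; the point is that vm.nr_hugepages times the huge-page size must cover the buffer pool):

```
# /etc/sysctl.conf -- 2048 x 2MB huge pages = 4GB reserved
vm.nr_hugepages = 2048

# /etc/my.cnf, under [mysqld]
large-pages
innodb_buffer_pool_size = 4G
```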
innodb memory usage vs swap
That thread has more details on what to look for and what to set.
Enabling Large Page Support
Overall performance gain: A few percent. Yawn. Too much hassle for too little benefit.
Jumbo Pages? Turn off.
This is a little-used alternative to MyISAM and InnoDB.
The data is not persistent, so it has limited uses.
The size of a MEMORY table is limited to max_heap_table_size, which defaults to 16MB, but can be changed just before doing CREATE TABLE ... ENGINE=Memory.
I mention it in case you have changed the value to something huge; this would
steal from other possible uses of RAM.
How to Set VARIABLEs
In the text file my.cnf (my.ini on Windows), add or modify a line to say something like
innodb_buffer_pool_size = 5G
That is, VARIABLE name, "=", and a value. Some suffixes are allowed, such as
M for mega (1048576) and G for giga (1073741824).
For the server to see it, the settings must be in the [mysqld] section of the file.
The settings in my.cnf or my.ini will not take effect until you restart the server.
Most settings can be changed on the live system by connecting as user root (or other mysql user with SUPER privilege) and doing something like
SET @@global.key_buffer_size = 77000000;
Note: No M or G suffix is allowed here.
To see the setting a global VARIABLE do something like
mysql> SHOW GLOBAL VARIABLES LIKE "key_buffer_size";
| Variable_name | Value |
| key_buffer_size | 76996608 |
Note that this particular setting was rounded down to some multiple that MySQL prefers.
You may want to do both (SET, and modify my.cnf) in order to make the change immediately and have it so that the next restart (for whatever reason) will again get the value.
8.0 has a way of setting variables that will persist after a restart: SET PERSIST
A web server like Apache runs multiple threads. If each thread opens a connection
to MySQL, you could run out of connections. Make sure MaxRequestWorkers (formerly called MaxClients) (or equivalent)
is set to some civilized number (under 50).
There are several tools that advise on memory. Some misleading entries they come up with:
Maximum possible memory usage: 31.3G (266% of installed RAM)
Don't let it scare you -- the formulas used are excessively conservative.
They assume all of max_connections are in use and active, and doing something memory-intensive.
Total fragmented tables: 23
This implies that OPTIMIZE TABLE might help. I suggest it for tables with either a high percentage of "free space" (see SHOW TABLE STATUS) or where you know you do a lot of DELETEs and/or UPDATEs. Still, don't bother to OPTIMIZE too often. Once a month might suffice. In the case of InnoDB, even monthly is likely to be too often.
Come to me for an analysis: MySQL Tuning Analysis
which involves about 250 checks, and usually provides about 20 significant recommendations.
5.7 stores a lot more information in RAM, leading to the footprint being perhaps half a GB larger than in 5.6. See Memory increase in 5.7
Refreshed Oct, 2012;
Refreshed Jan, 2014;
Refreshed Sep, 2017;
Minor Refresh Aug, 2019;
Refreshed Mar, 2020
The tips in this document apply to MySQL, MariaDB, and Percona.
More in-depth: Tocker's tuning for 5.6
Irfan's InnoDB performance optimization basics (redux)
10 MySQL settings to tune after installation
Peter Zaitsev's take on such (5/2016)
MySQL On Low Memory Virtual Machines
Small VM? Here's a list of tips.
Partition Maintenance (DROP+REORG) for time series (includes list of PARTITION uses)
Big DELETEs - how to optimize -- and other chunking advice, plus a use for PARTITIONing
Chunking lengthy DELETE/UPDATE/etc.
Data Warehouse techniques:
Overview Summary Tables High speed ingestion Bulk Normalization
Entity-Attribute-Value -- a common, poorly performing, design pattern (EAV); plus an alternative
5 methods for 'Find Nearest'
Find the nearest 10 pizza parlors -- efficient searching on Latitude + Longitude (another PARTITION use)
Lat/Long representation choices
Z-Order 'find nearest'(under construction)
Pagination, not with OFFSET, LIMIT
Techniques on efficiently finding a random row (On beyond ORDER BY RAND())
GUID/UUID Performance (type 1 only)
IP Range Table Performance -- or other disjoint ranges
Rollup Unique User Counts
Alter of a Huge table -- Mostly obviated by 5.6
Latest 10 news articles -- how to optimize the schema and code for such
Build and execute a "Pivot" SELECT (showing rows as columns)
Find largest row for each group ("groupwise max")
Other Tips, Tuning, Debugging, Optimizations, etc...
Rick's RoTs (Rules of Thumb -- lots of tips)
Datatypes and building a good schema
Memory Allocation (caching, etc)
Character Set and Collation problem solver
Trouble with UTF-8 If you want case folding, but accent sensitivity, please file a request at http://bugs.mysql.com .
Python tips, PHP tips, other language tips
utf8 Collations utf8mb4 Collations on 8.0
Converting from MyISAM to InnoDB -- includes differences between them
Compound INDEXes plus other insights into the mysteries of INDEXing
Cookbook for Creating Indexes
Many-to-many mapping table Handler counts wp_postmeta UNION+OFFSET
MySQL Limits -- built-in hard limits
767-byte INDEX limit
Galera, tips on converting to (Percona XtraDB Cluster, MariaDB 10, or manually installed)
5.7's Query Rewrite -- perhaps 5.7's best perf gain, at least for this forum's users
Analyze MySQL Performance
Analyze VARIABLEs and GLOBAL STATUS Analyze SlowLog
My slides from conferences
Percona Live 4/2017 - Rick's RoTs (Rules of Thumb) - MySQL/MariaDB
Percona Live 4/2017 - Index Cookbook - MySQL/MariaDB
Percona Live 9/2015 - PARTITIONing - MySQL/MariaDB
(older ones upon request)
Contact me via LinkedIn; be sure to include a brief teaser in the Invite request: