First, my Opinions on PARTITIONing
Why PARTITIONing is useless for speed
INDEXes in a PARTITIONed Table
Preface
This blog covers
⚈ PARTITIONing uses and non-uses
⚈ How to Maintain a time-series PARTITIONed table
⚈ AUTO_INCREMENT secrets
Definitions:
⚈ To "shard" is to split data across multiple machines. (This document does not cover sharding.)
⚈ To "partition" is to split one table into multiple sub-tables (partitions) on a single MySQL instance.
First, my Opinions on PARTITIONing
Taken from
Rick's RoTs - Rules of Thumb for MySQL
⚈ #1: Don't use PARTITIONing until you know how and why it will help.
⚈ Don't use PARTITION unless you will have >1M rows
⚈ No more than 50 PARTITIONs on a table (open, show table status, etc, are impacted) (This may be fixed in 5.6.6, or maybe in 8.0)
⚈ PARTITION BY RANGE is the only useful method. (I include RANGE COLUMNS.)
⚈ SUBPARTITIONs are not useful.
⚈ The partition field should not be the first field in any key. (This is a Rule of Thumb, not an absolute.)
⚈ It is OK to have an AUTO_INCREMENT as the first part of a compound key, or in a non-UNIQUE index.
It is so tempting to believe that PARTITIONing will solve performance problems.
But it is so often wrong.
PARTITIONing splits up one table into several smaller tables.
But table size is rarely a performance issue.
Instead, I/O time and indexes are the issues.
Why PARTITIONing is useless for speed
Well, mostly useless. It rarely provides performance benefits, and sometimes it even slows down the query.
Let's look at PARTITION BY RANGE, and at various SELECTs/INSERTs/DELETEs that might try to take advantage of partitioning in the WHERE clause.
⚈ WHERE without mentioning X -- What's the advantage of partitioning? Now the query must check each and every partition for the rows in question. You can make the counter-argument that the BTree being searched is slightly shallower. The counter-counter-argument is that the "pruning" may cost more than the savings in the BTree being shallower. Partitioning is probably slower.
⚈ WHERE X = 1234 -- This lets "partition pruning" look only in that one partition. But that's no better than INDEX(x) on a non-partitioned table. And you probably need that index anyway; after first 'pruning' down to the desired partition, you still need the index. No faster.
A common fallacy: "Partitioning will make my queries run faster".
It won't. Ponder what it takes for a 'point query'.
Without partitioning, but with an appropriate index,
there is a BTree (the index) to drill down to find the desired row.
For a billion rows, this might be 5 levels deep.
With partitioning, first the partition is chosen and "opened",
then a smaller BTree (of say 4 levels) is drilled down.
Well, the savings of the shallower BTree is consumed by having
to open the partition.
Similarly, if you look at the disk blocks that need to be touched,
and which of those are likely to be cached, you come to the
conclusion that about the same number of disk hits is likely.
Since disk hits are the main cost in a query, Partitioning
does not gain any performance (at least for this typical case).
The 2D case (below) is the main exception to this discussion.
⚈ WHERE X = 1234 AND ... -- There is possibly an index to help with the rest of the where clause. In the non-partitioned equivalent table, INDEX(X, ...) serves the same purpose and does it at least as fast.
⚈ WHERE X BETWEEN ... AND ... -- For BY LIST and BY RANGE, the above arguments apply. But for PARTITION BY HASH(X), any kind of "range" requires the Optimizer to look in all partitions. Slower!
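You can check whether pruning actually happens with EXPLAIN. A minimal sketch; the table `t` and its ranges are hypothetical:

```sql
-- Hypothetical table, partitioned BY RANGE on x
CREATE TABLE t (
    x   INT NOT NULL,
    msg VARCHAR(100),
    INDEX(x)
) ENGINE=InnoDB
PARTITION BY RANGE (x) (
    PARTITION p0   VALUES LESS THAN (1000),
    PARTITION p1   VALUES LESS THAN (2000),
    PARTITION pmax VALUES LESS THAN MAXVALUE
);

-- 5.7+ shows a "partitions" column in plain EXPLAIN;
-- older versions need EXPLAIN PARTITIONS SELECT ...
EXPLAIN SELECT * FROM t WHERE x = 1234;    -- pruned to p1 only
EXPLAIN SELECT * FROM t WHERE msg = 'hi';  -- no pruning: p0, p1, pmax all searched
```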
And what about ORDER BY x? Well, the results of scanning each partition need to be combined and then sorted. In the equivalent non-partitioned table, there is at least a chance of using an index; not so with a partitioned table. Slower.
What about disk space? Do a SHOW TABLE STATUS and look at Data_free. It is probably a huge number because each partition has 4MB-7MB of 'free' space of its own. Ugh.
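To see where that free space lives, you can ask information_schema for the per-partition numbers. A sketch; the schema and table names are placeholders:

```sql
-- Per-partition sizes and free space (names are placeholders)
SELECT  PARTITION_NAME, TABLE_ROWS, DATA_LENGTH, DATA_FREE
    FROM  information_schema.PARTITIONS
    WHERE  TABLE_SCHEMA = 'mydb'
      AND  TABLE_NAME = 'tbl';

-- The total, as reported in the Data_free column:
SHOW TABLE STATUS LIKE 'tbl';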
Now let's look at the very few cases that benefit from PARTITION BY RANGE. (I've been looking for more cases for a decade!)
Use Cases for PARTITIONing
Only two cases benefit SELECTs; the rest are beneficial for other reasons.
Use case #1 -- Time series. Perhaps the most common use case where PARTITIONing shines is in
a dataset where "old" data is periodically deleted from the table. RANGE PARTITIONing
by day (or other unit of time) lets you do a nearly instantaneous
DROP PARTITION plus REORGANIZE PARTITION instead of a much slower DELETE.
Much of this blog is focused on this use case; see below.
This use case is also discussed in
Big DELETEs
The big win for Case #1: DROP PARTITION is a lot faster than DELETEing a lot of rows.
Use case #2 -- 2-D index. INDEXes are inherently one-dimensional.
If you need two "ranges" in the WHERE clause, try to migrate one of them to PARTITIONing.
Finding the nearest 10 pizza parlors on a map needs a 2D index.
Partition pruning sort of gives a second dimension. See
Latitude/Longitude Indexing
That uses PARTITION BY RANGE(latitude) together with PRIMARY KEY(longitude, ...)
See also
a comparison of "find nearest" techniques
The big win for Case #2: Scanning fewer rows.
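A minimal sketch of the Case #2 layout; the column names and stripe boundaries are illustrative, and the linked article has the full algorithm:

```sql
CREATE TABLE pizza_finder (
    lat SMALLINT  NOT NULL,   -- scaled latitude; one "stripe" per partition
    lng MEDIUMINT NOT NULL,   -- scaled longitude
    id  INT UNSIGNED NOT NULL,
    PRIMARY KEY (lng, lat, id)   -- clustered: a longitude range scan within a stripe
) ENGINE=InnoDB
PARTITION BY RANGE (lat) (
    PARTITION p0   VALUES LESS THAN (-60),
    PARTITION p1   VALUES LESS THAN (-30),
    PARTITION p2   VALUES LESS THAN (0),
    PARTITION p3   VALUES LESS THAN (30),
    PARTITION p4   VALUES LESS THAN (60),
    PARTITION pmax VALUES LESS THAN MAXVALUE
);

-- Pruning handles the lat range; the PK handles the lng range:
-- SELECT ... WHERE lat BETWEEN ? AND ?  AND  lng BETWEEN ? AND ?;
```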
Use case #3 -- Hot spot. This is a bit complicated to explain. Given this combination:
⚈ A table's index is too big to be cached, but the index for one partition is cacheable, and
⚈ The index is randomly accessed, and
⚈ Data ingestion would normally be I/O bound due to updating the index
Partitioning can keep all the index "hot" in RAM, thereby avoiding a lot of I/O.
The big win for Case #3: Improving caching to decrease I/O to speed up operations.
Some use cases involve both Cases #1 and #3.
Use case #4 -- Transportable tablespace. Using EXPORT/IMPORT partition for quickly archiving or importing data.
(IMPORTing could be tricky because of the partition key.)
5.7 Import partition
Copying Transportable Tablespaces for Partitioned Tables in MariaDB
The big win for Case #4: Quickly moving a partition in between tables (or servers).
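A sketch of the 5.7 per-partition flow, assuming identical schemas on both servers; table, partition, and file names are hypothetical:

```sql
-- Destination: table with the identical schema and partitioning
ALTER TABLE tbl DISCARD PARTITION p20150606 TABLESPACE;

-- Source: quiesce the table, then copy the partition's files
FLUSH TABLES tbl FOR EXPORT;
-- (copy tbl#p#p20150606.ibd and .cfg at the filesystem level)
UNLOCK TABLES;

-- Destination, once the files are in place:
ALTER TABLE tbl IMPORT PARTITION p20150606 TABLESPACE;
```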
Use case #5 -- Separate datasources. Similar to #4. You have multiple sources for the dataset and they have (or can be modified to) the same schema. Each source is inserted into one partition by partition name using LOAD DATA. This assumes a full replacement when loading -- truncate the partition, then reload. And this is the first useful case for PARTITION BY LIST. (Note: Still, there is probably no performance gain in SELECT.)
How to handle 20M+ records from tables with same structure in MySQL
(especially the answer by GMB)
Use case #6 -- I have yet to find a 6th use case.
Note that almost always, these use cases involve BY RANGE partitioning, not the other forms.
AUTO_INCREMENT in PARTITION
⚈ For AUTO_INCREMENT to work (in any table), it must be the first field in some index. Period. There are no other requirements on indexing it.
⚈ Being the first field in some index lets the engine find the 'next' value when opening the table.
⚈ AUTO_INCREMENT need not be UNIQUE. What you lose: prevention of explicitly inserting a duplicate id. (This is rarely needed, anyway.)
Examples (where id is AUTO_INCREMENT):
⚈ PRIMARY KEY (...), INDEX(id) -- to get clustering on something more useful than id
⚈ PRIMARY KEY (...), UNIQUE(id, partition_key) -- not useful
⚈ INDEX(id), INDEX(...) (but no UNIQUE keys)
⚈ PRIMARY KEY(id), ... -- works only if id is the partition key (not very useful)
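A sketch combining the first example above with day-partitioning; the columns are illustrative:

```sql
CREATE TABLE log_by_day (
    id  INT UNSIGNED NOT NULL AUTO_INCREMENT,
    dt  DATETIME NOT NULL,
    msg VARCHAR(100),
    PRIMARY KEY (dt, id),   -- clusters by time; includes the partition key
    INDEX (id)              -- AUTO_INCREMENT only needs to be first in *some* index
) ENGINE=InnoDB
PARTITION BY RANGE (TO_DAYS(dt)) (
    PARTITION start        VALUES LESS THAN (0),
    PARTITION from20120315 VALUES LESS THAN (TO_DAYS('2012-03-16')),
    PARTITION future       VALUES LESS THAN MAXVALUE
);
```

Note that uniqueness of `id` is not enforced here; as mentioned above, that is rarely a problem in practice.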
INDEXes in a PARTITIONed Table
If you change a table from non-PARTITIONed to PARTITIONed, or vice versa, you should rethink all the indexes.
Here are some considerations:
With PARTITION
⚈ Since an index's scope is limited to one partition, the uniqueness constraint of a UNIQUE index is useless. So use INDEX, not UNIQUE.
⚈ Similarly, the uniqueness constraint of a PRIMARY KEY is useless. Instead consider having a PK for its "clustering" effects.
⚈ If you don't have a 'natural' PK (a combination of column(s) that is naturally unique), and want the advantage of clustering, then do have an AUTO_INCREMENT id and put it last in the PK. Then have INDEX(id).
⚈ If you choose to have the partition key in an index, it is usually (but not always) best to put the partition key last in any index. Note that the "pruning" is done before looking at the index, so the partition key is effectively used before the rest of the column(s) in the index.
Without PARTITION
The guidelines are here:
Cookbook for creating indexes from SELECTs
PARTITION Maintenance for the Time-Series Case
Let's focus on the maintenance task involved in Case #1, as described above.
You have a large table that is growing on one end and being purged on the other.
Examples include news, logs, and other transient information.
PARTITION BY RANGE is an excellent vehicle for such a table.
⚈ DROP PARTITION is much faster than DELETE. (This is the big reason for doing this flavor of partitioning.)
⚈ Queries often limit themselves to 'recent' data, thereby taking advantage of "partition pruning".
Depending on the type of data, and how long before it expires, you
might have daily or weekly or hourly (etc) partitions.
There is no simple SQL statement to "drop partitions older than 30 days" or
"add a new partition for tomorrow".
It would be tedious to do this by hand every day.
High Level View of the Code
ALTER TABLE tbl
DROP PARTITION from20120314;
ALTER TABLE tbl
REORGANIZE PARTITION future INTO (
from20120415 VALUES LESS THAN (TO_DAYS('2012-04-16')),
future VALUES LESS THAN MAXVALUE
);
After which you have...
CREATE TABLE tbl (
dt DATETIME NOT NULL, -- or DATE
...
PRIMARY KEY (..., dt),
UNIQUE KEY (..., dt),
...
)
PARTITION BY RANGE (TO_DAYS(dt)) (
PARTITION start VALUES LESS THAN (0),
PARTITION from20120315 VALUES LESS THAN (TO_DAYS('2012-03-16')),
PARTITION from20120316 VALUES LESS THAN (TO_DAYS('2012-03-17')),
...
PARTITION from20120414 VALUES LESS THAN (TO_DAYS('2012-04-15')),
PARTITION from20120415 VALUES LESS THAN (TO_DAYS('2012-04-16')),
PARTITION future VALUES LESS THAN MAXVALUE
);
Be sure to have innodb_file_per_table=ON.
Why?
Perhaps you noticed some odd things in the example. Let me explain them.
⚈ Partition naming: Make them useful.
⚈ from20120415 ... 04-16: Note that the LESS THAN is the next day's date
⚈ The "start" partition: See paragraph below.
⚈ The "future" partition: This is normally empty, but it can catch overflows; more later.
⚈ The range key (dt) must be included in any PRIMARY or UNIQUE key.
⚈ The range key (dt) should be last in any keys it is in -- You have already "pruned" with it; it is almost useless in the index, especially at the beginning.
⚈ Any column used for 'range' filtering should be at the end of an index.
⚈ DATETIME, etc -- I picked this datatype because it is typical for a time series. Newer MySQL versions allow TIMESTAMP. INT can be used. DECIMAL and FLOAT cannot.
⚈ There is an extra day (03-16 thru 04-16) to give you a full month: The latest day is only partially full.
Why the bogus "start" partition?
If an invalid datetime (Feb 31) were to be used, the datetime would turn into NULL.
NULLs are put into the first partition.
Since any SELECT could have an invalid date (yeah, this is stretching things),
the partition pruner always includes the first partition in the resulting set
of partitions to search.
So, if the SELECT must scan the first partition, it would be slightly more
efficient if that partition were empty.
Hence the bogus "start" partition.
Longer discussion, by The Data Charmer
5.5 eliminates the bogus check, but only if you switch to a new syntax:
PARTITION BY RANGE COLUMNS(dt) ( PARTITION day_20100226 VALUES LESS THAN ('2010-02-27'), ...
ALTER TABLE tbl
    REORGANIZE PARTITION future INTO (
        PARTITION from20150606 VALUES LESS THAN (736121),
        PARTITION future VALUES LESS THAN MAXVALUE
    );
ALTER TABLE tbl
    DROP PARTITION from20150603;
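Such statements are easiest to generate mechanically. One option is a nightly MySQL EVENT that builds the REORGANIZE via a prepared statement; this is only a sketch (the event name and schedule are illustrative, and it requires event_scheduler=ON):

```sql
DELIMITER //
CREATE EVENT tbl_add_tomorrow
    ON SCHEDULE EVERY 1 DAY
    STARTS '2015-06-07 00:05:00'
DO BEGIN
    -- Build "REORGANIZE PARTITION future" for today's date
    SET @sql = CONCAT(
        'ALTER TABLE tbl REORGANIZE PARTITION future INTO (',
        'PARTITION from', DATE_FORMAT(CURDATE(), '%Y%m%d'),
        ' VALUES LESS THAN (', TO_DAYS(CURDATE()) + 1, '),',
        ' PARTITION future VALUES LESS THAN MAXVALUE)');
    PREPARE s FROM @sql;
    EXECUTE s;
    DEALLOCATE PREPARE s;
END//
DELIMITER ;
```

The matching DROP PARTITION of the oldest partition can be generated the same way, by computing the partition name from CURDATE() minus the retention period.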
Partition Maintenance (DROP+REORG) for time series (includes list of PARTITION uses)
Big DELETEs - how to optimize -- and other chunking advice, plus a use for PARTITIONing
Chunking lengthy DELETE/UPDATE/etc.
Data Warehouse techniques:
Data Warehouse Overview
Summary Tables
High speed ingestion
Bulk Normalization
Schema and code design for large Sensor database
Entity-Attribute-Value (EAV) -- a common, poorly performing, design pattern; plus an alternative
Lat/Lng search to Find the nearest 10 pizza parlors
Lat/Long representation choices
Pagination, not with OFFSET, LIMIT
Techniques on efficiently finding a random row (On beyond ORDER BY RAND())
GUID/UUID Performance (type 1 only)
IP Range Table Performance -- or other disjoint ranges
Alter of a Huge table -- Mostly obviated by 5.6
Efficient List of Latest 10 news articles
Build and execute a Pivot SELECT (showing rows as columns)
(Groupwise Max): Efficiently find largest row(s) for each group
Other Tips, Tuning, Debugging, Optimizations, etc...
Rick's RoTs (Rules of Thumb -- lots of tips)
Datatypes and building a good schema
Memory Allocation (caching, etc)
Character Set and Collation problem solver
Trouble with UTF-8
If you want case folding, but accent sensitivity, please file a request at https://bugs.mysql.com .
Python tips,
PHP tips,
other language tips
utf8 Collations
utf8mb4 Collations on 8.0
Converting from MyISAM to InnoDB -- includes differences between them
Compound INDEXes plus other insights into the mysteries of INDEXing
Cookbook for Creating Indexes
Many-to-many mapping table
Handler counts
wp_postmeta
UNION+OFFSET
MySQL Limits -- built-in hard limits
767-byte INDEX limit
Galera, tips on converting to (Percona XtraDB Cluster, MariaDB 10, or manually installed)
5.7's Query Rewrite -- perhaps 5.7's best perf gain, at least for this forum's users
Analyze MySQL Performance
Analyze VARIABLEs and GLOBAL STATUS
Analyze SlowLog
My slides from conferences
MiniFest 2021 - Rick James & Daniel Black - Answering on Stack Overflow(+comments) - MariaDB Frontlines
Percona Live 4/2017 - Rick's RoTs (Rules of Thumb) - MySQL/MariaDB
Percona Live 4/2017 - Index Cookbook - MySQL/MariaDB
Percona Live 9/2015 - PARTITIONing - MySQL/MariaDB