I really wanted to run the latest MariaDB with LZ4 page compression; it is a game changer for many types of large databases I deal with. There isn't a package for CentOS in the trusted repos that includes any of the new algorithms, just zlib. So I compiled it manually in a way that is repeatable and follows best practices, and it's powering this site. Now I can use InnoDB page compression with lzo, lzma, bzip2, snappy, or my favorite, LZ4.
Thought this would be a good chance to post a howto and show that there is a lot you can do by compiling software yourself, breaking the package-management one-click-install shackles. That said, as soon as there is a stable CentOS package that supports the new algorithms, I'll switch to it.
It's a step-by-step that I just ran through a couple of times from scratch. There are also a lot of misc notes here for myself.
For CentOS 7 64-bit
This howto builds RPM packages so you can install the latest MariaDB with yum, only these are supercharged, bleeding edge, and compiled and tested on the machine you build them on. These directions are for a new CentOS 7 64-bit machine; it's best to start clean with a fresh instance. I'm doing this on Rackspace OpenStack.
These same instructions can be tweaked for other operating systems and other CentOS versions.
Prepare Build Environment
First, you must have sudo installed and configured for the non-privileged user that will actually do the builds. Never compile as root. Best to never use root at all.
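If you are starting from a minimal image, the one-time setup looks something like this (run as root; the "builder" username is just an example):

yum -y install sudo
useradd builder && passwd builder
usermod -aG wheel builder   # members of wheel can sudo on CentOS 7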
Enable Yum Repos
Just google for each of these and follow the instructions they recommend, such as verifying the RPMs cryptographically.
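For example, EPEL is a common repo to enable for extra build dependencies (an illustration; enable whichever repos you actually need), and you can list the GPG keys rpm has imported:

sudo yum -y install epel-release
rpm -qa gpg-pubkey*   # the signing keys rpm currently trusts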
Now upgrade your entire system, and afterwards reboot to start from a clean slate. After rebooting, stop any CPU/IO/memory-intensive services or programs to make the compilation much faster: things like crond, mysql, httpd, nginx, php-fpm, memcached, redis, postfix, cloud backup tools, datadog, nagios, and any daemons like that. It might be a good time to see if you can permanently disable some of those if you don't need them. The more RAM for the InnoDB buffer pool the better.
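Something like this (the service list is illustrative; stop whatever you actually run):

sudo yum -y update && sudo reboot
# after the reboot:
sudo systemctl stop crond httpd nginx php-fpm memcached redis postfix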
First, let's create the directories we will be using to build MariaDB. We are doing an out-of-source build, which has many benefits, such as making it easy to repeat this process for upgrades.
Building "out-of-source"
Building out-of-source provides additional benefits. For example, it allows building both Release and Debug configurations from a single source tree, or building the same source with different versions of the same compiler or with different compilers. It also prevents polluting the source tree with the objects and binaries produced during the make.
Create the root, src, and build folders; these need to be chmod 755 and owned by the non-root user you are using with sudo: mkdir -pv /opt/mariadb10/{src,build}
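In full, with ownership, permissions, and the source fetch (the "builder" username, version number, and download URL are illustrative; substitute your own):

sudo mkdir -pv /opt/mariadb10/{src,build}
sudo chown -R builder:builder /opt/mariadb10
chmod 755 /opt/mariadb10 /opt/mariadb10/{src,build}
cd /opt/mariadb10/src
curl -LO https://downloads.mariadb.org/f/mariadb-10.1.8/source/mariadb-10.1.8.tar.gz
tar xzf mariadb-10.1.8.tar.gz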
These tools will be used for make test, an important step that will alert you to any problems you will want or need to fix before installing.
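A dependency set along these lines is a reasonable starting point on CentOS 7 (an illustrative list, not exhaustive; cmake will report anything still missing, the *-devel compression libraries are what enable the new algorithms, and Perl's test modules are used by make test):

sudo yum -y install cmake gcc gcc-c++ bison ncurses-devel libaio-devel openssl-devel \
  lz4-devel lzo-devel xz-devel bzip2-devel snappy-devel rpm-build perl-Test-Simple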
Now that you are warmed up in the CLI, it's time to do the actual build. If all goes smoothly it should take about 30 minutes to 1 hour (mostly compiling).
Optionally, during these steps you may want to debug some failed make tests, or track down and install missing packages listed by cmake. You can also take some time to read up on all the various configuration options that are possible; read the BUILD-CMAKE file in the src/ directory. I ended up re-running cmake with different options and improvements several times before the final build/make.
C mode for speed: export LC_ALL=C LANG=C
Cd into your empty build directory: cd /opt/mariadb10/build/
Find the number of processors, and use it for the value of make -j[n] below for faster makes.
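For example:

nproc   # or: grep -c ^processor /proc/cpuinfo

Then run the configure step against the unpacked source. A minimal sketch (the source path and flags are illustrative; -DRPM=centos7 is what enables RPM generation for make package in MariaDB's CMake setup, and BUILD-CMAKE documents the rest):

cmake ../src/mariadb-10.1.8 -DCMAKE_BUILD_TYPE=Release -DRPM=centos7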
Make: make -j4
Create CentOS RPMs for installing with yum: make package
Run tests: make test. You should try to fix any failures before installing with yum.
Running tests...
Test project /opt/mariadb10/build
Start 1: pcre_test
1/60 Test #1: pcre_test ........................ Passed 0.37 sec
Start 2: pcre_grep_test
2/60 Test #2: pcre_grep_test ................... Passed 0.56 sec
...
100% tests passed, 0 tests failed out of 60
Total Test time (real) = 55.33 sec
Good practice: shut down any running mysql servers and create a full backup of the entire datadir /var/lib/mysql, like this: sudo rsync -alvPh --delete /var/lib/mysql/ /var/lib/mysql.bk/ (and while you are at it, back up your /etc/ directory as well).
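The full sequence might look like this (the .bk paths are just where I stash copies):

sudo systemctl stop mariadb
sudo rsync -alvPh --delete /var/lib/mysql/ /var/lib/mysql.bk/
sudo rsync -alvPh --delete /etc/ /etc.bk/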
Now uninstall any existing MariaDB packages, taking note of any dependencies that also get removed; after you install the new RPMs with yum, go back and reinstall those dependencies.
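Roughly like this (package names will vary; the new RPMs land in the build directory after make package):

rpm -qa | grep -iE 'mariadb|mysql'            # see what is currently installed
sudo yum remove mariadb-server mariadb-libs   # note the dependency list yum prints
sudo yum localinstall MariaDB-*.rpm           # the freshly built packages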
Now just configure the server normally, get it running with systemd, and you are good to go, free to experiment and learn how to use the new algorithms and MariaDB features.
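For instance (the unit may be named mariadb or mysql depending on how the package registers itself):

sudo systemctl daemon-reload
sudo systemctl enable mariadb
sudo systemctl start mariadb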
Verify Compression Support
$ mysql -Ntbe 'SHOW VARIABLES WHERE Variable_name LIKE "have_%" OR Variable_name LIKE "%_compression_%"'
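Trimmed, illustrative output after switching to lz4 (the exact variable list varies by version and build options):

+----------------------------------+-------+
| have_compress                    | YES   |
| innodb_compression_algorithm     | lz4   |
| innodb_compression_level         | 6     |
+----------------------------------+-------+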
Exclude MariaDB from yum by adding this to /etc/yum.conf; that way your yum-cron won't inadvertently replace your custom install with an updated mainline version from a repo, which, if it doesn't support compression, would prevent your mysql from starting and be a pain.
exclude=MariaDB*
Rebuilding and Updating
Note that unlike autotools, cmake tries to configure and build incrementally. You can modify one configuration option and cmake will only rebuild the part of the tree affected by it. For example, when you run cmake -DWITH_EMBEDDED_SERVER=1 in the already-built tree, it will cause libmysqld to be built, but no other configuration options will be changed or reset to their default values.
Alternatively, you might simply delete the CMakeCache.txt file — this is the file where cmake stores current build configuration — and rebuild everything from scratch.
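So a rebuild after tweaking one option can be as light as this (illustrative):

cd /opt/mariadb10/build
cmake . -DWITH_EMBEDDED_SERVER=1   # reconfigure just this option in place
make -j4 && make package

Or, to start over from a clean configuration:

rm CMakeCache.txt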
Choosing compression algorithm
You specify which compression algorithm to use with the --innodb-compression-algorithm= startup option for MariaDB. The options are:

Option   Description
none     Default. Data is not compressed.
zlib     Pages are compressed with the bundled zlib compression method.
lz4      Pages are compressed with the lz4 compression method.
lzo      Pages are compressed with the lzo compression method.
lzma     Pages are compressed with the lzma compression method.
bzip2    Pages are compressed with the bzip2 compression method.
snappy   Pages are compressed with the snappy compression method.
The compression method can be changed whenever needed. Currently the compression method is global (i.e. you can't specify the compression method per table).
set global innodb_compression_algorithm=lz4;
From this point on, page-compressed tables will use the lz4 compression method. This setting does not change already-compressed pages that were compressed with a different compression method, because MariaDB supports pages that are uncompressed, pages compressed with e.g. lzo, and pages compressed with e.g. lz4 in the same tablespace. This is possible because every page in an InnoDB tablespace records its compression method in the page header metadata.
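Creating a page-compressed table then looks like this (the table and column names are just examples):

SET GLOBAL innodb_compression_algorithm = lz4;
CREATE TABLE logs (
  id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  body TEXT
) ENGINE=InnoDB PAGE_COMPRESSED=1;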
Choosing compression level
You specify the default compression level with the --innodb-compression-level= startup option for MariaDB. Values are 0-9, and the default is 6. Note that not all compression methods allow choosing the compression level, in which case the compression level value is ignored.
InnoDB/XtraDB Page Compression
Page compression is an alternative way to compress your tables, different from (but similar to) the InnoDB COMPRESSED storage format. In page compression, only uncompressed pages are stored in the buffer pool. This approach differs significantly from legacy InnoDB compressed tables using innodb-file-per-table=1.
Page compression can be used on any file system but is most beneficial on SSDs and non-volatile memory (NVM) devices like the FusionIO atomic-series. The page compression design also works with doublewrite enabled, but best performance is reached if doublewrite is disabled (i.e. innodb-doublewrite=0) and atomic writes are enabled (innodb-use-atomic-writes=1). This naturally requires that the file system and storage device in use support atomic writes.
Server.cnf configuration
Some InnoDB settings are required in order to use a custom compression algorithm: you must have innodb_file_per_table enabled, and you must use the Barracuda file format.
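A minimal sketch of the relevant /etc/my.cnf.d/server.cnf section (the doublewrite and atomic-write lines are optional and only make sense on storage that supports atomic writes):

[mysqld]
innodb_file_per_table = 1
innodb_file_format = Barracuda
innodb_compression_algorithm = lz4
innodb_compression_level = 6
# optional, only with atomic-write-capable storage:
# innodb_doublewrite = 0
# innodb_use_atomic_writes = 1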