MySQL RPMS and the new yum repository

I was really pleased to see Oracle's announcement, MySQL yum repositories, saying that they have now produced a yum repository from which the MySQL RPMs they provide can be downloaded. This makes keeping up to date much easier. Many companies set up internal yum repositories with the software they need, as updating servers then becomes much easier and can be done with a single command. For many people at home it means you set this up once and no longer need to check for updates and do manual downloads: a quick yum update xxxx gets you the latest version. Great!

This new yum repository only covers RHEL6 and does not include RHEL5, which is not yet end of life and is still used by me and probably quite a lot of other people. I filed bug#70773 to ask for RHEL5 support to be considered, especially as RHEL5 packages were already being provided, and asked for 5.7 to be included too, but have been told that RHEL5 support will not be added. As you will see later, perhaps it is now clear why.
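For reference, the repo definition that a release rpm drops into /etc/yum.repos.d/ looks roughly like this. This is a sketch only: the section name, URL and key path are illustrative of what the announced repository installs, not copied from it.

```ini
# /etc/yum.repos.d/mysql-community.repo -- illustrative sketch
[mysql56-community]
name=MySQL 5.6 Community Server
baseurl=http://repo.mysql.com/yum/mysql-5.6-community/el/6/$basearch/
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-mysql
```

With something like this in place, updating is just `yum update mysql-community-server` on each machine.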

I was somewhat surprised, then, to read a later post, Updating MySQL using official repositories, which showed that the rpms in this new repository are actually completely different from the previous ones (MySQL community rpms) that we have been downloading from http://dev.mysql.com/downloads/mysql/#downloads. The package names differ: MySQL-server…rpm, MySQL-client…rpm on the download page, versus mysql-community-client…rpm, mysql-community-server…rpm, etc. in the new yum repository. The packages appear to be incompatible, so for those of us already using the first set of RPMs the new yum repository is rather useless, and switching from one set of RPMs to the other is quite a nuisance if you have a number of servers.

To avoid confusion in any later comments in this article I will refer to the ORIGINAL MySQL packages as the ones provided by Oracle previously and the NEW MySQL packages as these new ones provided in the new yum repo.

No mention of these differences was made by Oracle in their announcements, so no reasons were given for choosing to name the packages differently rather than simply providing a yum repository containing the packages they already build. That is what I had assumed they had done, as it would have seemed the most logical step to take.

Independently of Oracle's reasons for building two sets of RPMs, one thing is clear: they now have to maintain two different sets of packages for the same distribution (RHEL6) in parallel. It also raises the question of what will happen with MySQL 5.7, which is already out there in the ORIGINAL packaging format, and whether new 5.7 packages will be provided in the ORIGINAL form, the NEW form, or both. I have been trying out the current DEV version of 5.7, and of course 5.7 can change in any way Oracle chooses until they declare the version GA, so fair enough. It does, however, raise doubts about how to prepare for that moment, as MySQL 5.7 has many interesting features and some of us may want to use them as soon as possible.

Anyway, I digress. I thought I would see what differences there might be between the src rpms of the ORIGINAL and NEW packages. As the src rpms are available I was able to download both to check. One would expect them to be pretty much the same; after all, the software is the same. It is true that the NEW packages must also support Fedora releases, so some changes would have been needed to make that work correctly, but other than that I would not expect many differences.

I downloaded the src rpms from both locations: the original set of MySQL community rpms has a package named MySQL-5.6.14-1.el6.src.rpm, and the new yum repo version is called mysql-community-5.6.14-3.el6.src.rpm. There is a minor release difference here, but that should be easy to see in the spec file.

I installed each src rpm, which basically unpacks it ready for building. Given that I have set up my rpm directory structure to install src packages into a directory named after the package, it is quite easy to compare the resulting files contained in each source rpm. A source rpm normally contains the spec file, which defines how the package should be built, the original source tarball files, and any patches that need applying to the original sources to build the final binaries.
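The comparison itself is just a diff of the two unpacked directories. As a runnable sketch (the directory and file names here are fabricated to stand in for the real unpacked src rpms):

```shell
# Stand-in trees for the unpacked ORIGINAL and NEW src rpms; with the real
# packages you would diff the directories that "rpm -i" populated instead.
workdir=$(mktemp -d) && cd "$workdir"
mkdir -p ORIGINAL NEW
touch ORIGINAL/mysql.spec ORIGINAL/mysql-5.6.14.tar.gz
touch NEW/mysql.spec NEW/mysql-5.6.14.tar.gz NEW/mysql-5.1.70.tar.gz

# diff the sorted file listings; "|| true" keeps diff's exit status harmless
changed=$(diff <(ls ORIGINAL | sort) <(ls NEW | sort) || true)
echo "$changed"
```

The output immediately shows which files only one of the two packages carries.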

The results can be seen below with the first listing being from the ORIGINAL src rpm and the second from the NEW src rpm:

As you can see, there are several differences just in the number of files provided. It is nice to see that the NEW package actually contains an older version of mysql (5.1.70), which I assume is used to build the older “shared-compat” libraries. That was missing from the ORIGINAL package, which actually meant it was not possible to build the ORIGINAL package binaries from the src.rpm alone.

Some of the files in the NEW package are clearly for Fedora’s new systemd, which is the replacement for the traditional init scripts.

I also diffed the spec files, and there are several differences there too. One surprising one is in the licensing: the ORIGINAL rpms specify a GPL license, whilst the NEW rpms specify GPLv2. Oracle never changes things without a reason, and I know there have been many discussions on the topic of GPLv2 vs GPLv3, so I wonder why this change exists.

There are quite a lot of other differences between the two spec files; some changes seem insignificant but others may not be. I have not had time to investigate further.

Many of you may remember that we have had bugs related to Oracle releasing multiple tarballs with the same name but different checksums. Unlike some other package managers, RPM is not good at noticing or recording these differences, so they can go unnoticed. I have filed a feature request with Red Hat to see if this might be fixed: their bug#995822. If you think this would be useful, please tell Red Hat.

That feature request refers to bug#69512, where this issue came up, and I noticed it again in bug#69987 in September. Oracle said they would ensure they would not distribute source tarballs with the same name but different contents, to avoid the confusion that can result. It seems they have already forgotten, as I see:
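The mismatch is easy to demonstrate. The sketch below fabricates two same-named files with different contents; with the real downloads you would point sha256sum at the two copies of mysql-5.6.14.tar.gz instead:

```shell
# Two directories holding a file with the same name but different contents,
# standing in for the two published tarballs.
workdir=$(mktemp -d) && cd "$workdir"
mkdir -p dir_a dir_b
echo "tagged 5.6.14 contents"  > dir_a/mysql-5.6.14.tar.gz
echo "updated 5.6.14 contents" > dir_b/mysql-5.6.14.tar.gz

sum_a=$(sha256sum dir_a/mysql-5.6.14.tar.gz | cut -d' ' -f1)
sum_b=$(sha256sum dir_b/mysql-5.6.14.tar.gz | cut -d' ' -f1)

# same name, different checksums: exactly the situation rpm fails to flag
[ "$sum_a" != "$sum_b" ] && echo "WARNING: same name, different contents"
```

A check like this in a download script is a cheap way to catch the problem before it bites.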

So yes, it seems they have done this again. I know that they build from a VCS, but if they tag something as 5.6.14 it really helps to have a single tarball that corresponds to that tagged version. I have not bothered to diff the two tarballs, but I imagine the new tarball is somewhat more up to date than the old one. However, that is really no excuse. So it was time to file yet another bug report, which I have done with bug#70847.

In the end it seems that Oracle is trying to do the right thing and make life easier for us. That is welcome. However, they seem to trip up making that effort. Things need not be this complicated.

2013/11/07 update

I had a quick check of the differences in the 2 tarballs which are hugely different in size:

Despite what looks like a large change set, it seems most of the changes are in the documentation (mysql.info and man files), and these appear to be due to the source files having been regenerated between the tarballs; most of the changes are thus spacing or generated-timestamp differences. Also, mysql-5.1.70.tar.gz is included inside the new mysql-5.6.14 tarball, so there is really no need for it to be included separately in the mysql-community 5.6.14 src.rpm. That is likely an oversight and will no doubt get corrected.

My conclusion is simply that the mysql-community version of the mysql-5.6.14.tar.gz tarball is more up to date than the one provided in the MySQL-5.6.14 src rpm. So perhaps much ado about nothing. That said, a few questions remain unanswered, so it will be interesting to see how things develop.

Google Chrome and CentOS 6

I have been meaning to sit down and write about this for a while now, but with the summer upon us I have not had time.

I use CentOS 6 on my home PC for many reasons, but mainly because I have been using Red Hat versions of Linux for quite a long time and am comfortable with them. CentOS, like its upstream parent RHEL, is good because it is well tested and stable, and it is not necessary to perform a major version upgrade every few months as with its cousin Fedora. I do not have time for that, and most security issues are fixed through normal updates, so this works well.

The downside, of course, is that Red Hat cannot afford to ship bleeding-edge versions of software, which may be rather unstable, and that can be a nuisance if you need or want to use newer versions.

An example of that, from my DBA perspective, is the version of MySQL shipped with RHEL 6.4: MySQL 5.1.66 is very old, and I would not really want to run it on a production machine taking anything but a toy load. However, in this case others provide drop-in replacements: Oracle with MySQL 5.6 (the current GA version), and the MariaDB and Percona builds are other good alternatives.

In the browser world I moved over to Chrome some time ago. It works well on my Mac, and at the time I switched it saved me from having to restart Firefox frequently as it ate all my memory. Until rather recently there were versions of Chrome available for CentOS 6, provided by Google, and that was really good: it made life easy, and I could switch machines with little trouble and use the same browser.

There are versions of Chrome for other Linux distributions: Debian, SuSE and Fedora, but I want to keep using CentOS 6 for the reasons outlined above, and judging from other posts others feel the same. My understanding is that CentOS 6 support was dropped because newer libraries on which Chrome depends are not available on CentOS 6: Requires: libstdc++.so.6(GLIBCXX_3.4.15)(64bit), says rpm, when you try to install the latest version.

Upgrading the OS versions of these base libraries is troublesome, as it may break several things, and while people sometimes do this to try to work around similar issues, I do not think that is the way forward: something will eventually break. Yet if these libraries are needed, why can they not be built and placed, like google-chrome itself, in a location other than the system default, with google-chrome linked against that directory using RPATH settings? I have done things like this in a previous job, and unless I am mis-remembering the procedure it should work fine.

Rebuilding the required libstdc++ in a non-default location such as /opt/google/lib64, naming the package something like google-libstdc++, and then adjusting the google-chrome dependencies to require this package rather than the system libstdc++ is not really that hard. Doing so would fix the issue for google-chrome, and potentially make the library usable for others who also want the newer version on CentOS. Given that Google has already set up a yum repo for CentOS 6, the automatic installation of these new dependencies would just work out of the box.

Others suggest moving over to Chromium, but it is not the same as Chrome and has given me a few issues since switching. I would like to use the same browser on my different desktops and not have to worry.

It would also help if Red Hat would recognise that some people need newer versions of the libraries that come with the OS and, if they do not want to support this themselves, perhaps make recommendations on how it can be done by others, along the lines outlined above. In the end this avoids people being frustrated that their “stable OS” is out of date, and also avoids people hacking things around themselves or resorting to building from source, when the whole point of a package-based system is that you should not need to do that, and packaged builds tend to be more consistent and complete.

So here is to keeping my fingers crossed that perhaps we will see Chrome on CentOS 6 return in the future and make a few people happier.

What do you think?

First Madrid MySQL Users Group scheduled for the 4th July

As mentioned here, and after talking to a few people, we have created a meetup page, http://www.meetup.com/Madrid-MySQL-users-group/, and proposed the first meeting for Thursday, 4th July. If you are interested and in the area, come along and say hello. It should be interesting to meet a few others who work in the field. If you can, let us know you are coming so we have an idea of how much interest there is.

On operating system upgrades and a packager’s nightmare

A fairy tale

Once upon a time I did an operating system upgrade, a minor one that should do no harm, but just get me up to date by fixing any bugs in the version I had been using. It seemed like a good idea.

All seemed to be fine. I use a package provided by an external vendor rather than the one produced by the operating system provider, as the vendor provides a newer version of the package and I need that. The vendor has to make their package fit into the OS environment it is built for, and normally does a pretty good job.

I use automation to build my systems, and when I built a new one some issues appeared. In the new version of the OS, the provider had enhanced one of its packages, and the installation pulled in new dependencies. The install of the external package I use then broke, as it conflicted with the new dependency provided by the OS. While a workaround is possible (uninstall this, install that, …), the nice world the package manager provides to make these problems a non-issue was suddenly broken. Horrible.

To make matters worse, the external vendor cannot easily fix this. If they build a package which obsoletes the new dependency, then on earlier versions of the OS they change things that do not need changing. If they do not, the breakage remains on the latest version. Of course, in the real world many people use a mixture of OS versions and expect a single package to do the right thing, at least within the same major version of an operating system. An external vendor's packager's life is not a happy one: it is hard to provide a package that integrates seamlessly without breaking something.

Back to real life

The story above is a real one, relating to RHEL 5.9 versus the previous RHEL 5.8 (CentOS in my case) and the MySQL 5.5 and 5.6 rpms provided by Oracle. Red Hat have enhanced the version of postfix they provide (a core component of the servers we use) and added MySQL library dependencies which were not there in RHEL 5.8. This pulls in the ‘mysql’ package as a dependency, so installing the MySQL-server, MySQL-client and MySQL-shared-compat packages I had been using breaks, as these do not currently have the rpm “Obsoletes: mysql” entry needed to make the package install go smoothly.
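A hedged sketch of what that missing entry would look like in the vendor's package. These are illustrative spec-file lines only, not Oracle's actual spec:

```
# Illustrative rpm spec fragment -- not Oracle's actual spec file.
# Declaring that this package replaces and provides the distribution's
# "mysql" package lets yum resolve postfix's new dependency cleanly.
Obsoletes:  mysql
Provides:   mysql = %{version}-%{release}
```

The catch, as described above, is that shipping this unconditionally changes behaviour on the older OS releases where the dependency did not exist.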

Real life means I do not run the latest version of MySQL 5.5 but a version which works on my systems and has been tested with my software, so using it on CentOS 5.9 will now break. Oracle may not mind updating their latest MySQL 5.5 rpm (for CentOS 5), but they are unlikely to be interested in updating all their previous MySQL 5.5 or 5.6 rpms for this version of CentOS just in case I might be using one of the older versions. I am also not interested in upgrading all these MySQL 5.5 servers just to get a newer version which may have issues, so I will probably end up working around the problem in an ad-hoc “remove this package, add that one” way. It should not be necessary.

It has always been said when building software: do not make functional changes in a minor version upgrade, as you will break something somewhere and upset someone. I understand that providing MySQL support in postfix (and perhaps other packages) seems like a good idea, but it breaks this “rule”. I also realise that providing a package within an existing OS framework while avoiding breakage of any native OS packages is hard, but I still think, as I have commented in previous posts, that coming up with a way to let external vendors do this without interfering with “native OS” packages would be good: recommend that these packages be named a certain way, not be located in “native OS” locations, and not conflict with “native OS” libraries.
If people want to use these packages, show them how to link against them, and add dependencies in a way which may favour these vendors' packages if that is the deliberate intention. Provide these rules and life will be easier for all of us. One world where this is done is the *BSD ports systems, which are intentionally independent of the “native OS” and located separately; people can use them, or not, at their convenience. In my opinion something like that in the rpm world would be nice indeed, and external rpm repos would not have to worry so much about new functionality breaking the OS.

I am not fully sure how I am going to work around this issue, but it has been causing problems for a while now and will require some poking. I will try and get in touch with the parties concerned to alert them to this change which has broken something for me.

Am I the only one who has noticed this?

This is also likely to be an issue, I guess, for the MariaDB and Percona rpms for RHEL 5 / CentOS 5, as they will probably implement the same dependency behaviour.

References:

  • http://blog.wl0.org/2009/08/a-packagers-thoughts-on-rpm8/
  • https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/5/pdf/5.9_Technical_Notes/Red_Hat_Enterprise_Linux-5-5.9_Technical_Notes-en-US.pdf

Update: 2013-06-19

This is what I saw:

On a server that runs MySQL 5.5.23 (CentOS 5.8) I see:

which actually looks wrong: CentOS 5 does not have a mysql-libs package, but a package named mysql. CentOS 6 does have a package of that name, and on early releases of the MySQL 5.5 rpms some similar packaging issues did have to be ironed out.

Checking a later package I see things have changed but there still seems to be an issue.

The first two entries need fixing (the first needs no version and the second should not be there) as they cause other issues, but I had not noticed until now the package name mistake in the last line (it mentions a package from CentOS 6, not CentOS 5, and should be just ‘mysql’).

Maybe, given that I am not using the latest version of MySQL, I will need to repackage this version myself. A quick change to the rpm spec file and a rebuild should give me the package I need, and given that the src rpm is available it is good that I can do that. … but no. Further inspection shows that the MySQL-shared-compat package is not built from the MySQL-X.Y.Z-r.src.rpm but independently of it, and I think the old library files are “bundled” in manually to make this package. I vaguely remember that the source tarball may contain the spec file for this shared-compat package; let me check, as I may have to do some deeper magic to roll my own package.

Oracle: please do not do this. Provide enough of the source in the normal tarball to be able to build the old-version libraries, and build _everything_ from that single tarball. This would avoid all the magic you are doing now to build this package, and others who need to adjust these packages could do the same.

Madrid MySQL Users Group worth creating?

I’m interested in meeting up and sharing experiences about using MySQL and managing MySQL servers with people here locally in Madrid. I had a quick look around and could see no local MySQL user groups, so it might be nice to create one and exchange ideas over a beer, coffee or cola every once in a while. If you’re in Madrid and are interested, please let me know. I’ve created a temporary email address: madrid-mysql-users-2013 AT wl0.org (careful with the domain), which you can contact me on to confirm an interest. Oh, and I’d expect these meetups to be in Spanish, but that’s not a requirement.

(In Spanish:) I am interested in meeting up and sharing experiences about using MySQL and administering MySQL servers with people here in Madrid. I have had a look around the Internet and have seen no local MySQL user groups, so it would be good to create a group and exchange ideas over a beer, a coffee or a cola. If you are in Madrid and are interested, please let me know. I have created a temporary email address: madrid-mysql-usuarios-2013 AT wl0.org (careful with the domain), which you can use to get in touch with me to confirm your interest in attending. Being here in Spain, I imagine we would speak Spanish.

Update: 2013-06-19

Progress, slowly but surely. See: http://www.meetup.com/Madrid-MySQL-users-group/

Extend ansible hostnames

I have been playing with ansible recently and quite like some of what I see.

One issue I have when setting up the ansible hosts file, which contains the inventory of all hosts known to ansible, is that at work my hosts follow a slightly more complex naming pattern which ansible cannot directly match (see http://ansible.cc/docs/patterns.html). So I patched 1.0 to cope with these patterns.

This provides for:

  • multiple patterns in a single hostname
  • the optional use of a step, as in [1:99:3], to generate hosts which do not increase by 1.

The patch can be found here: http://merlin.wl0.org/201305/01/ansible-1.0-extend-patterns.patch, and it allows me to include several types of servers at work using simpler patterns.
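As an illustration, with the patch applied an inventory file can use entries like the following. The hostnames are invented; the bracket syntax is what the patch adds on top of stock ansible 1.0:

```ini
# Illustrative ansible hosts file -- hostnames are made up.
[webservers]
# several patterns in a single hostname: web01-1 ... web09-3
web[01:09]-[1:3].example.com

[databases]
# optional step value: db1, db4, db7, ..., db97
db[1:99:3].example.com
```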

My python is still a bit rusty but this seems to do the right thing. It would be nice to see this patch incorporated into ansible.

Update 2013-06-06

I’ve just upgraded ansible on my Mac to 1.1 (via macports) and this patch hasn’t been applied yet. I think 1.2 is out soon, so I’ll see if I can poke the developers to get the patch included. See: https://github.com/ansible/ansible/issues/3135

On warming up a MySQL 5.6 server

In the past…

One of the typical problems when restarting mysqld is that the InnoDB buffer pool (“buffer pool” from now on) is empty, so access to the database requires reading directly from disk. Performance suffers dramatically as a consequence.

So the common solution is to artificially warm up the server with queries which fill the buffer pool. A typical approach is to run SELECT COUNT(*) FROM some_table FORCE INDEX (PRIMARY) LIMIT ... on a number of tables at startup. Fitting this into the standard mysql init start script is somewhat tricky, as no hooks are provided for this sort of post-start action. (It would be nice to have these for other tasks too.)
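A sketch of such a warm-up step (the table names and database are placeholders; the final pipe to the mysql client is left commented out so the generated statements can be seen on their own):

```shell
# Generate one warm-up query per table; adjust the list to your hot tables.
tables="orders customers order_lines"
sql=$(for t in $tables; do
    printf 'SELECT COUNT(*) FROM %s FORCE INDEX (PRIMARY);\n' "$t"
done)
echo "$sql"
# echo "$sql" | mysql --batch some_database
```

A script like this, run once after the init script reports mysqld up, is the manual hook the standard start script lacks.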

Of course, choosing the right parameters here can be tricky, as workload changes over time, and as the ratio of the size of the database to the size of the buffer pool increases you need to be more selective about what you load in. This is a tricky problem to solve well and generically, I am sure you will agree.

Then MySQL 5.6 comes to the rescue (almost)

So I was pleased to hear that MySQL 5.6 has a new feature to take this problem away. When shutting down, mysqld can be configured to save the list of pages in the buffer pool, and on startup this list can be used to repopulate the empty buffer pool, putting you back in the warm state you were in before. See: http://dev.mysql.com/doc/refman/5.6/en/innodb-performance.html.
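For reference, the feature is switched on with two [mysqld] options. These are the documented 5.6 variable names; treat the fragment as a sketch of a typical setup rather than a recommended configuration:

```ini
[mysqld]
# save the list of buffer pool page IDs when mysqld shuts down
innodb_buffer_pool_dump_at_shutdown = 1
# reload those pages (not the whole pool contents) at startup
innodb_buffer_pool_load_at_startup  = 1
```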
Good news, but…

As the systems I manage have got busier and technology has advanced, so has the size of the memory in my servers. Some servers I manage now have 192 GB of RAM, and we have tried setups with even more memory. The innodb buffer pool size is likely to be around 90% of that. So loading 160 GB of data from hard disk, even when read 100% sequentially, is going to take some time, and doing it as efficiently as possible is essential. Yet perhaps loading the full pool is not really necessary: experimentation with the different setups I use indicates that mysqld performs sufficiently well even if the buffer pool is only 25% full. After that, natural usage will pull in the missing data and performance will gradually improve to expected levels.

The first systems I started using MySQL 5.6 on needed fast disks, so they were configured with SSDs. There the absolute need for a warm-up script was less important, as SSDs are so much faster anyway, so it is only now that I have paid more attention to this new feature and seen some potential weaknesses in its implementation. Having said that, it is a good first step, and for many people it may be just what they need once they enable it.

Below are some suggestions for improvement, but they will probably have to wait for 5.7, as new features should not really go into the current GA release.

  • Optionally time-limit the warm-up process, stopping when the time limit has been exceeded.
  • Optionally limit the number of pages loaded in, probably to a percentage of the current pool size. The pages loaded should be most-recently-used first, as these are the hottest and most needed pages. Ensure the load sorts the page list so that I/O is as sequential and fast as possible.
  • Optionally define the number of concurrent threads doing warm-up I/O. My experience is that, depending on the underlying disk subsystem, running several parallel warm-up threads may reduce the warm-up time considerably. I believe the current implementation in 5.6 only uses a single thread to load in pages.
  • Stop warming up once the buffer pool is full. Other normal database activity on the server may load data into the buffer pool, so it may fill up before the warm-up finishes; once buffer pool page eviction starts there is no point continuing the warm-up.
  • Take into account that copying an on-disk mysql image may mean the server being started is not the one the buffer pool list was saved for, and that the configured memory size and buffer pools may have changed. If this makes a difference, ensure the warm-up process still does the right thing and loads the right pages.

I have posted a feature request as Bug#68863 and hope some of these ideas will be considered.

So I still have to do more testing, but I think I may well be doing manual warm-ups for a while yet.

MySQL 5.6 is out, so what is next?

MySQL 5.6 is out now and that is good news. I have already been using pre-GA versions on some production servers with good success and now that the few wrinkles I have seen have been ironed out, I am sure a lot of people will find the new features in 5.6 well worth waiting for.

However, that does lead to the question of: “what next?”

I still have several things that I would like to see in MySQL in no specific order of preference such as:

  • Session transaction state exposed as a variable, to allow you to determine whether you have started a transaction, and thus whether to use BEGIN WORK, ROLLBACK or COMMIT as needed. This information is available via the C API, I believe, but not via SQL. Something like @@in_transaction = 1 would make modular programming easier.
  • Table space management. The default behaviour in 5.6 is now innodb_file_per_table = 1, but on a large system I would prefer not to have to manage each individual file or partition, but rather blocks of files containing one or more tables: that is, to manage table spaces. This should help reduce the number of open file handles needed and allow the DBA to manage the “database” or “instance” in blocks of a manageable size.
  • Better configuration management. If I want to compare the running configuration shown in SHOW GLOBAL VARIABLES with the [mysqld] section in /etc/my.cnf, there are quite a lot of inconsistencies in variable/parameter naming; some parameters are not visible in SHOW GLOBAL VARIABLES and vice versa. After seeing that the default performance_schema settings in MySQL 5.6 did not match my needs, I had to use quite a bit of magic to set up /etc/my.cnf with the correct parameters. Add in the way performance_schema has to be configured, and checking for consistency and/or adjusting the running configuration to match /etc/my.cnf is more of a challenge than it needs to be. So please clean this up and make our lives easier.
  • More dynamic configuration. Many configuration settings require restarting MySQL. Now that mysql is no longer a toy and people depend on it 24×7, any downtime is costly and really unhelpful, especially when it is not necessary.
    • Many replication configuration settings are still static.
    • Many innodb settings are also static (innodb_{read,write}_io_threads and back_log come to mind), so having to restart mysqld just to see whether a change improves performance is really not helpful.
  • Move the SHOW ENGINE INNODB STATUS output to tabular form (probably multiple tables in I_S) so that the information is easy to parse and people can make better use of it. Right now, if you want to see which threads are busy doing I/O, or want information on actively running transactions, locks or whatever, parsing the current output is a real pain. It is true that much of this information may now be in P_S, but not everything is visible there, and the missing information should move to a place where it is easy to extract and use.
  • Better memory management. I have some servers with 96 GB of RAM and a certain innodb buffer pool size; some of them, when busy with heavy queries, use too much memory and start to swap, whilst others use the full buffer pool and have plenty of memory left over. The end result is that the wrong amount of memory is used, and a change in workload can make a server swap or run out of memory. The memory in the system is there to be used by the database, yet MySQL does not manage it in any coherent way or allow you to control it to prevent overuse. Most other RDBMSes handle memory quite differently, pinning the configured size into RAM and using that memory and nothing else. This appears to be more efficient, and something similar in MySQL would make life a lot simpler.
  • Better binlog/relay log space management.
    • Several servers I manage write out a lot of binlogs in a day, so space is an issue; nevertheless, keeping as many binlogs as space allows is also important. Many of the applications are configured to do large INSERTs and UPDATEs, including using ODKU (up to our configured max_allowed_packet of 16M), and whether using RBR or SBR the resulting output is likely to be compressible. Given that compressing whole binlog/relay log files means you cannot jump to an arbitrary location, it struck me that compressing the individual events as they are written, or as they are read, might be possible and might save 2-3× the binlog space usage. (See: Bug#46435)
    • Related to this is the binlog retention period. expire_logs_days, configured as a number of days, sounds quite nice until you realise that you may fill the disk with one or two days’ worth of binlogs, so it would be nice to be able to configure a decimal value here, making smaller periods possible. Alternatively, a new setting could configure retention by size rather than period, something like binlog_retention_size = 200G.
    • I have filled up many servers’ disks after cloning a slave, as it catches up and pulls over old relay logs from the master, writing them locally before being able to apply them. There is a configuration setting to limit this size, but again it is static.

So these are just a few of the ideas I personally would like to see. I’ll see if I can update the post later providing some of the feature request bug reports as reference.

However, this is just part of the story. I have my ideas of what I would like to see in MySQL 5.7 and you have yours, but there does not currently appear to be a way to voice these interests to Oracle in a place where different people can be heard and where it is possible to see what is most wanted.

  • http://bugs.mysql.com does allow you to add feature requests and indeed I have added many, but finding them or voting on them is not possible. Oracle’s official feature request mechanism makes the request go into a black hole where any progress is completely invisible to the requestor!
  • Oracle obviously has its own agenda and has the inconvenience of competitors in the same arena.  That means that Oracle, MPAB and Percona each have to do their own thing and try to guess what people want (though they are obviously influenced by their main customers), and we do not see the result until a new release comes out.  I think that can lead to rough edges, as developers do not always appreciate the real-world use cases and problems a DBA faces day to day, and thus may focus on things which in the end are not the most important to us.
  • Some Open Source projects have a bounty mechanism where people can offer to pay to get a feature implemented. The idea sounds quite good and if several people want the same thing, they may be able to push towards a common goal more quickly. I am, however, not sure how that might work in practice with something like MySQL, but the idea sounds interesting, if only to help better gauge where there is interest.

My way of using MySQL may vary significantly from yours. My problems may also be quite different, but I imagine there is a lot of common ground. So what would you like to see in MySQL 5.7?

In the meantime I guess we will have to wait and see what comes out in 5.7 and where things are going to go. There is likely to be an 18-month wait for that, so plenty of time to do lots of new and exciting things.  Meanwhile MySQL 5.6 needs deploying on new servers once it has been fully tested and any issues resolved, and that is quite enough work to keep me busy for a few days …


Update:

  • 2013-04-08 Adding some more I would likes:
    • A working I_S (mysql data dictionary) to avoid the workarounds needed on servers with hundreds of thousands of tables.
    • A way to compress/uncompress MyISAM tables from the mysql command line. This would avoid having to work around the locking, blocking and flushing issues of doing it directly from a shell. I still use compressed MyISAM tables as they are much smaller than compressed InnoDB tables (and easier to move around too).
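For reference, the shell-level procedure I am working around looks roughly like this (paths and table names are illustrative, and it is shown with mysqld stopped, the simplest safe variant; doing it against a live server means holding table locks open in a client session for the whole duration, which is exactly the nuisance described):

```shell
# Sketch only: compress a MyISAM table from the shell with mysqld
# stopped.  myisampack operates on the index file of the table.
myisampack    /var/lib/mysql/mydb/t1.MYI   # compress the data file
myisamchk -rq /var/lib/mysql/mydb/t1.MYI   # rebuild the indexes
```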

Do we need a MySQL Cookbook?

The blog title says it all: Do we need a MySQL Cookbook? I tend to think so.

This seems to be something that is missing with current MySQL documentation. There is lots of information available but finding the appropriate bit can be quite tedious and it often requires looking in multiple places.

A lot of other software has such books, but for some reason MySQL seems to be missing one.

A recent example comes from a “documentation feature request” I posted today: http://bugs.mysql.com/bug.php?id=68171. MySQL 5.6 provides a way to “move InnoDB tables” from one server to another. There are many reasons why you may want to do this, but the documentation is currently rather sparse. A simple “example recipe” for this would be good, as would an equivalent recipe for other engines where this is possible, such as MyISAM. This is just an isolated example.
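For anyone looking for such a recipe, the 5.6 procedure boils down to something like the following sketch, assuming innodb_file_per_table is enabled on both servers and using an illustrative table name t:

```sql
-- On the source server: quiesce the table and write its .cfg metadata
FLUSH TABLES t FOR EXPORT;
--   (now copy t.ibd and t.cfg out of the source datadir)
UNLOCK TABLES;

-- On the destination server: create the same table definition, then
ALTER TABLE t DISCARD TABLESPACE;
--   (copy t.ibd and t.cfg into the destination datadir)
ALTER TABLE t IMPORT TABLESPACE;
```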

As far as I know there are no “best practices” on how to set up or manage a MySQL server, whether that comes from Oracle or the community. A common community view would probably be better, as it makes “all vendors” work towards a common goal even if they compete in other areas; look at the LSB as an example, adopted to some extent by all Linux distributions.  And since there are often several ways to solve a problem, a cookbook would be quite nice, suggesting how to solve common tasks and letting you use those suggestions as the basis for real implementations.

So what could these best practices, or cookbook, contain? Things that come to mind are:

  • How to move tables from one instance to another (the example above)
  • How to clone an instance, whether to make a slave, or simply for copying to a development environment. It’s clear that here there are multiple solutions depending on factors such as database size, whether you are allowed to take the instance down or not, etc.
  • How to run multiple instances on the same server. This is something I used to do but now try to avoid, as these instances often interfere with each other. However, many people still do it, so having common ways of setting up and managing each instance (starting, stopping, accessing each one) would be good. A common filesystem layout would be helpful too, so that people do things in a similar way while retaining enough flexibility to adapt as needed.
  • How to run multiple versions of MySQL on the same server. This can often be useful in a development environment, or even in production, but at least on some versions of Linux the package managers assume that only a single version of the “binaries” can be installed, preventing multiple versions from running at once.  Sometimes this is needed, and having to work outside the package managers is rather a pain. A common agreement on how to resolve this would be nice.
  • Backup and restore procedures, from simple full database dumps to using some of the commercial tools.
    • That includes doing full and incremental backups, where this is possible, and also partial restores, or restores to a certain point in time.
  • Replication. Plenty to talk about here.
    • How to set it up from scratch
    • How to make more slaves (see cloning above)
    • Different replication features, such as replicating only specific databases, and the benefits or otherwise of row- or statement-based replication
    • Differences between the different versions, especially as MySQL 5.6 brings lots of new features such as minimal logging of changes to reduce binlog sizes
    • How to diagnose and recover from common replication issues
  • Monitoring
    • Again there are lots of options here: many do-it-yourself approaches, but also both free and commercial solutions. You first need to determine what to check. Some checks are technical (the mysql server is running, the number of connections is not too close to max_connections, the number of running connections is not over a certain threshold, the SQL thread’s usage is not too high, replication is working, replication delay is not too high, etc.). For many of these there are already checks that do the right thing; for many others you have to write your own.
    • All of the above is great, but really people care about the application working, so on top of these checks you need to add business checks: is the number of widgets we made in the last hour > 0? …
  • Grant management.  If you have a single server you only set things up once; with more than one server it gets more complicated. MySQL allows you to specify network ranges or hostnames from which clients can connect, so all of a sudden managing this can be hard. I am not sure there is a single best way, but it would certainly be interesting to know how different people solve this problem, especially as the environment grows.  Replication can be used to help here, and that works pretty well, but you do have to be careful of gotchas with replicated grants if anything gets out of sync.  So this is certainly a topic of interest to most DBAs, whatever the chosen solution.
  • Setting up a new server. Simple things like choice of OS, MySQL version, MySQL engine to store the data, the expected query profiles for the server, and then basic settings for a couple of these common setups.
  • Configuration. A huge topic, but in many ways there are general things you should do, and probably things you should not do. A cookbook is probably not the place to go into huge detail about anything (existing books may well be better there), but it helps get you going, indicates how to approach the problem and lets you see that things look right.
  • Upgrade procedures for MySQL, a critical procedure for any DBA. We never stay on the same version, so we are always going to upgrade to a newer one. There are things to do to make that upgrade process work smoothly, and it gets more complicated in a replicated environment, so discussing how best to do this would be very good.  I have found the MySQL upgrade documentation to be rather incomplete in places, or at least not written in a cookbook fashion of “follow these steps to complete your upgrade”, so you end up moving all over the upgrade documentation trying to figure out if special steps are needed.  A minor version upgrade may not be an issue, but a major version upgrade, as many people will shortly be doing to MySQL 5.6, is not something you want to go wrong if you have important data in your database, so a cookbook-style guide would be good here.
  • Troubleshooting. I have had problems with the mysql servers I manage, and some of these problems come up again and again; recognising them and having solutions to the common ones would be good. Working out which problems are the most common is something that I guess can only be done by reading mailing list archives, though Oracle/SkySQL/Percona support probably have a good idea here.
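To illustrate the sort of technical check mentioned under Monitoring, here is a minimal sketch in Python. The thresholds and the hard-coded status values are assumptions; a real check would fetch them with SHOW GLOBAL STATUS and SHOW GLOBAL VARIABLES:

```python
def check_connections(status, variables, warn_pct=80, crit_pct=95):
    """Nagios-style check: how close are we to max_connections?"""
    used = int(status["Threads_connected"])
    limit = int(variables["max_connections"])
    pct = 100.0 * used / limit
    if pct >= crit_pct:
        return ("CRITICAL", pct)
    if pct >= warn_pct:
        return ("WARNING", pct)
    return ("OK", pct)

# Example with made-up numbers: 450 of 500 connections in use.
state, pct = check_connections({"Threads_connected": "450"},
                               {"max_connections": "500"})
print(state, pct)  # WARNING 90.0
```

The same shape works for most of the technical checks listed above: pull a counter, compare it against a threshold, return a state.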

What other things am I missing for a cook book?

I am sure this list could include many other things, and maybe there is a need to avoid too much detail, referring to existing documentation where appropriate; but just providing the recipes and the explanation of why and how would be good for a large number of people.  I for one have found from time to time that I could not find such a recipe for a particular task, and I am sure there must be others with the same problem. Existing books are very good, but their focus seems to be slightly different, and they may go into a lot of detail when that detail may not be needed initially.

So would a cookbook like this fill a need for you, and if so what might be missing from the suggestions I have made above?

CentOS 6.3 now installs on Mac Mini (mid-2011)

After lots of pain trying to get CentOS 6 to work on my Mac Mini mid-2011 and failing, I finally found that CentOS 6.3 does indeed install.  Previous versions did not work for me, but I am delighted to see that the work RedHat did to make Fedora 17 install on this hardware has obviously been ported to CentOS 6.  The issue in the past had been that the Mac would not boot the install DVDs.

So all that was needed was to burn the (new?) CentOS-6.3-x86_64-netinstall-EFI.iso image and press ALT on boot; the DVD booted fine, and the install went along the same way as any normal CentOS/RHEL installation.  In fact I have been able to keep my original Fedora setup, as the Volume Group I had previously set up on the disk was recognised, so all I needed to do was create a new “root” logical volume and install on to that.

So thanks RedHat. You now make it much easier for me to migrate my “main” but older system on to this Mac.

On MySQL Memory Usage and Configuration

I saw a post on profiling memory usage today and this reminds me of several discussions I have had with different people.

Why would you want to profile memory usage? Usually to see where memory is allocated and for what purposes, and usually you only care when usage is higher than expected. That is, a DBA normally wants mysqld to use all available memory on a server, whatever that size may be.

Configuration parameters may be inappropriate and need adjusting, so having a way to determine the range of memory usage based on those parameters would be most helpful.  However, the configuration parameters as a whole put no limit on the memory used, so different workloads can quite easily exhaust memory on the OS, triggering swapping, or perhaps cause the mysqld process to be killed by the kernel’s OOM killer. None of this is ideal.
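The usual back-of-the-envelope estimate adds the global buffers to the per-connection buffers multiplied by max_connections. The sketch below (values illustrative) shows why it is only a worst case: actual usage depends entirely on how many connections allocate those buffers at once, which is exactly the workload dependence described above.

```python
MiB = 1024 * 1024

def worst_case_memory(cfg):
    """Crude worst-case estimate in bytes: global buffers plus the
    per-connection buffers for every allowed connection."""
    global_bufs = (cfg["innodb_buffer_pool_size"]
                   + cfg["innodb_log_buffer_size"]
                   + cfg["key_buffer_size"])
    per_conn = (cfg["sort_buffer_size"] + cfg["join_buffer_size"]
                + cfg["read_buffer_size"] + cfg["read_rnd_buffer_size"])
    return global_bufs + cfg["max_connections"] * per_conn

cfg = {  # illustrative values only
    "innodb_buffer_pool_size": 64 * 1024 * MiB,  # 64 GiB
    "innodb_log_buffer_size": 8 * MiB,
    "key_buffer_size": 128 * MiB,
    "sort_buffer_size": 2 * MiB,
    "join_buffer_size": 2 * MiB,
    "read_buffer_size": 1 * MiB,
    "read_rnd_buffer_size": 1 * MiB,
    "max_connections": 500,
}
print(worst_case_memory(cfg) // MiB)  # 68672 MiB, i.e. about 67 GiB
```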

So I would like to configure a cap on the maximum amount of memory that can be used, and indeed that is what most RDBMSes do: the allocation is made on startup, ensuring the memory is available, and any memory requests are taken out of this pool. This technique _should_ avoid swapping (though I still see issues on NUMA architecture boxes). It also means that at some points in time the database server may not be able to honour a memory request and must therefore fail it, wait until the needed resources are available, or free existing resources to make space for the new request.

Doing this well is probably hard enough. Doing it in MySQL with different storage engines is probably harder still as each engine does things its own way.

Some mechanism like that would certainly help avoid strange behavioural issues when the query profiles change and this causes the server to behave in unexpected ways, such as swapping itself to death. The only solution right now seems to be to configure the server to ensure that these “edge cases” can not happen by using less memory, thus wasting valuable resources on the server.

Currently it is also pretty hard to see how memory is used. If you use InnoDB you know that the buffer pool takes up a sizeable portion of memory, but there is still a significant amount of other memory that’s used by each individual connection and this can vary significantly depending on the query profiles. It would be really nice to dynamically look inside MySQL and see how memory is used, and also see how adjusting different parameters may adjust this.

Of course none of this is in MySQL at the moment. I would certainly like to see, first, better information on current memory usage (I would guess some more tables in performance_schema), and at a later stage some way to cap memory usage at a desired level, in the hope that when those limits are reached mysqld will figure out how to free up some resources in order to continue, wait, or fail “gracefully”.

Until then we will have to continue playing the game of “configure mysqld”: see if it works, and then ensure that the queries to the system do not change enough to break it and require us to go back and do it all again.

Note: there are several feature requests for this, so I do not think I am the only one in wishing for more visibility and configurability of the memory usage.

Update: 2013-01-17

Some feature requests that seem related to all this are shown below:


Setting up “cloud services” at home sounds easy but proves to be hard

I have been using virtualisation at home for some time, since the days of vmware-server on Linux. It works well and if you have a sufficiently powerful box is a great way to experiment without destroying your own server or installing software which a few hours later you may want to remove again.

I also recently bought a Mac mini 2011, into which I put 16 GB of RAM, hoping to set it up as a host for various virtual machines.  This was going to run Linux: I wanted to keep the host as light as possible and do the real work inside the virtual machines.  I also wanted to use this server to migrate away from the older PC I use for my day-to-day services (web, dns, email, ftp etc.).

Getting Linux installed on the Mac was a bit trickier than expected. I have been using RedHat since version 3.0.3, so prefer that to Debian or Ubuntu, if only because it is an environment I am more familiar with. I tried in vain to get CentOS 6 to install on the Mac, but it seems the required EFI boot loader does not work; others have had problems too, I think. Less surprisingly, Ubuntu installed first time (credit to them for getting it to work on more exotic hardware), and when I tried Fedora 17 that worked too, so that is what I am using. I would still prefer to switch to CentOS as it stays stable for longer, and playing the upgrade game with Fedora gets to be a pain.  Maybe some time soon…

There is a lot of noise around cloud computing now, and I wanted something a bit more sophisticated than virt-manager on my PC.  virt-manager requires you to organise the exact setup of the boxes, network and disk images yourself, whereas cloud platforms seem to make this management easier for you.

oVirt seemed promising: there is even a Fedora 17 repo, and it is supposed to work out of the box according to various blogs and articles, but I had trouble getting it all up and running. If you run it on a single host it does require a lot of packages and software to be installed.  So after various attempts I gave up.

OpenStack is also pretty hot and has quite good documentation, though it seems complex, perhaps because of the scale it runs at in some deployments.  Again I read the manuals and documentation and had trouble getting the whole environment to work.  The configuration is quite complex, and it seems easy to make a mistake; if you do, figuring out what is broken, or why, is pretty hard. That is a shame, especially as this seems to be perhaps the most sophisticated of the three cloud environments I tried.

Family circumstances mean I have a lot less time than I would like to investigate but I did do quite a bit of homework before starting so this has been quite frustrating.

I then tried OpenNebula, version 3.8.1.  The software is evolving quite quickly and has changed a lot over time.  It also seems to be the smallest package of the three: the tarball is tiny, and at least on Fedora 17 it did not take me long to get the software installed (including sunstone-server).  I see it is tagged for inclusion in Fedora 17 but could not find actual packages to install.

There are quite a few howtos and tutorials for OpenNebula, but some of them are for older versions, and if you follow those things will not work as expected (differences in the templating options etc.). This is confusing for a new user.

I use Fedora 17 but do not see packages for OpenNebula. Working packages for Fedora, CentOS, Ubuntu etc. would be most welcome, as they would probably ensure that the different components were configured correctly against OS components like virt-manager.

Most of the OpenNebula daemons have start and stop commands but no status command.  A status command would at least allow you to check that the configuration was correct and the daemons had started properly, especially when doing manual installs.

I would also like to see some sort of “check installation” command that runs a more complete set of checks against each of the different components that make up, or are needed by, OpenNebula.  In many places the documentation says “Assuming you have a correct kvm setup …”, “Assuming you have a correct iscsi setup …”, etc., yet provides no way to check whether the setup of these base components actually works as OpenNebula requires.  Whether that guidance lives in the official documentation or on a wiki somewhere (“How to set up kvm on Fedora X for OpenNebula…”) I do not really mind.  I have had problems with PolicyKit not allowing oneadmin access to the virtual machine, with virtual machines not seeing their disks (I think due to user/group permissions, as on Fedora 17 kvm seems to assign file ownership as qemu:qemu), and with the VNC connections in Sunstone not working. I am pretty sure most of these issues are “trivial” to fix if you have done this before and understand what OpenNebula expects and what the host operating system is doing “incorrectly”.

For initial testing, including various standard template files would be useful, if only to help confirm simple things like:

    • booting from a CD image works
    • booting from a minimal hard disk works
    • booting from a hard disk with networking works (perhaps bridging or NATing out from the host’s network)
    • booting from a CD image in order to install a new image onto a hard disk (a very typical usage pattern; the exact operating system or Linux distribution/version may vary)

I was also confused by the networking documentation and exactly how it relates to the underlying host’s network configuration.  This does not seem to be well documented, but “assumed”.

To summarise: these technologies are here and being used quite widely, but I would guess mostly in larger environments where the people using them have time to investigate and set things up properly. I was hoping that setting them up at home would not be that hard, but I have found it much harder than anticipated just reading the documentation and trying on my own.  I think all these platforms would be used more widely if they were more plug and play, that is if they worked out of the box on most of the popular Linux distributions.

Having said that, things have obviously progressed since the first virtualisation environments were created (on the x86 architecture), and probably in no time at all these technologies, or similar ones, will just work out of the box and be the norm.  In the meantime I am going to keep poking away and hope to finally get one of these environments working for me.

So if you read this and have some useful pointers I would be most interested in using them to get up and running.

Update: 2013-01-02

I see the OpenNebula work originally planned for Fedora 17 is now delayed (see: https://bugzilla.redhat.com/show_bug.cgi?id=815001), but there are RPMs for CentOS 6 which can be found, or built, from here: https://nazar.karan.org/summary/misc!opennebula.git. So it looks as if I will soon be able to get a packaged version of OpenNebula working on my CentOS 5 and Fedora 17 home PCs.  Still trying to make progress in the meantime.

MySQL-5.6, GTID and binlogs on slaves

Not much to add really to the bug I’ve filed here: bug#67099.

I personally can think of some very nasty consequences of applying this on the slaves I manage. The reason I am posting the bug is that, while I guess this is too late to fix in 5.6 as it is effectively a new feature, I am sure many sites will bump into it and be somewhat disappointed if they want to use the new GTID feature and have several slaves.  Hence, if the fix/feature has to go into MySQL 5.7 then I hope it goes in sooner rather than later. We will see.

Updated: 2013-09-19

I probably should have updated this earlier but it does seem that Oracle have taken these comments on board. See: WL6559.  It looks like they plan to do this for 5.7 which is good news. I’m really hoping that we will not have to wait too long for this version as it has other “goodies” which are going to be interesting. Reducing the time of the release cycle will I think allow those of us who appreciate these new features to have access to them sooner.

More GTID troubles in MySQL 5.6

A colleague, Kris, commented on a recent find I made on 5.6.7-rc1 which, while not the end of the world, does seem to severely limit the use of the new GTID features, or at least makes them a pain to use.

Today I found another related issue while trying to check the syntax of one of the MySQL commands. That is now reported as bug#67073.

All of these are edge cases of a sort, but they do make 5.6 look a little rough around the edges, when the rest of the product is actually really good and a great improvement on the MySQL 5.5 GA version that everyone uses.  That is really a shame.

I did report the original problem as SR 3-6270525721: MySQL 5.6.7-rc1, grants, replication and GTID cause problems, but of course that is not visible to the general MySQL community. While this specific issue with the help tables has been reported on bugs.mysql.com, I do get the feeling that other users of MySQL may report issues which are not visible to the community as a whole, which means that solutions, workarounds, or even the possibility of saying “I have that problem too” are lost.

For many tickets I have chosen to file two: one on MySQL’s http://bugs.mysql.com and another internal Oracle ticket, but this is really a nuisance and a time waster, and in the end I do not believe it is helpful to anyone.  I am sure I am not the only person who suffers from this.  I also note that when browsing what has changed in the latest release, such as http://dev.mysql.com/doc/refman/5.6/en/news-5-6-7.html, you see references to internal bug numbers which many people cannot even access. I am able to see most of these tickets, but many people are not, so the problem and its resolution become rather opaque. That is counterproductive for everyone; improving the reporting of changes would help us all.

Oracle, please make this work in an easier manner, so that issues with mysql software can be more visible (if the reporter chooses to make them so) and information on changes and bug fixes more complete. This would save time and frustration for everyone.

Note: Opinions expressed above are my own.

Checking /proc/pid/numa_maps can be dangerous for mysql client connections

I’ve blogged before about using numactl to start up mysqld and thus try to better spread the memory usage on larger-memory servers. This came from an article by Jeremy Cole and works fine. I recently had some issues with mysqld seeming to run out of memory on a box which appeared to have plenty free, so it seemed like a good idea to adapt a minutely collector script I run to include the numa_maps output, so that I could see whether the memory failures were NUMA-related. So far so good.

Many of the clients that connect to the database servers I manage have a very short connect timeout, typically 2 seconds. In a normal network and under normal conditions this is more than enough to allow for successful operation and if the connect does not work in that time it’s an indication of some underlying issue, whether that be load on the mysqld server, or something else.

The change I implemented to collect the numa_maps information was not expected to cause any issues; typically quite a lot of information is cat’d out of /proc/somewhere to collect configuration or status information.
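The collector change was essentially a minutely cron-driven fragment along these lines (paths illustrative):

```shell
# Minutely collector sketch; the numa_maps read is the line that
# turned out to be dangerous, as reading it stalls the target process.
pid=$(pidof mysqld)
{ date; cat /proc/"$pid"/numa_maps; } >> /var/log/mysql-numa.log
```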

However, after implementing this change I suddenly noticed many more mysql client connection errors than before. It was not immediately apparent what the cause was, until I noticed that these errors only occurred a couple of seconds after the change of minute, at hh:mm:02, hh:mm:03 etc. Then it dawned on me that this was indeed due to the change in the collection script looking at /proc/<mysqld_pid>/numa_maps.  Disabling this functionality again removed all the issues. This was on servers with 192 GB of RAM.

The information provided by /proc is useful, and it would be even better if it could be collected in a way which does not block the process being “investigated”. As such I have filed a bug report with RedHat, though really this is a kernel “bug”, or at least behaviour which, even if known, is very unsatisfactory in this particular case.

So whilst most people do not configure such a short mysql connect timeout, if you do have a setting like this please be aware of the consequences of running numa-maps-summary.pl, or of looking at /proc/<mysqld_pid>/numa_maps directly. This was not discussed by Jeremy, and I would guess he was unaware of it, or did not expect people to run this type of script as frequently as I was. Database memory sizes keep increasing, so while this may not have been noticeable on smaller servers it can now become an issue.

Initial Reactions to MySQL 5.6

New versions of MySQL are always interesting to try out. Often they have features which I may have asked for myself, so it is satisfying to see them eventually appear on a system I use. Other new features make life easier for the DBA. Finally, we hope that overall performance will improve and that managing the server(s) will become easier.

So I had a system which needs to make heavy writes, and performance was a problem, even when writing to SSDs. Checkpointing seemed to be the big issue, and the combined ib_logfile size in MySQL 5.5 is limited to 4 GB. That seems a lot, but once MySQL starts to fill these files (and this happens at ~70% of the total, I believe) checkpointing kicks in heavily and slows things down.  So the main reason for trying out MySQL 5.6 was to see how things performed with larger ib_logfiles. (Yes, MariaDB 5.5 can do this too.)
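The 5.6 configuration I was testing was along these lines (sizes as used in my test; 5.6 raises the combined redo log limit far beyond the 4 GB of 5.5):

```
[mysqld]
innodb_log_file_size      = 16000M
innodb_log_files_in_group = 2        # 2 x 16000M, ~32 GB of redo log
```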

Things improved a lot for my specific workload, which was great news, but one thing I noticed was that the initial install/setup time of a system with these large files increased enormously. (I was using 2 x 16000M files.)  That was noticeable even on a box with SSDs; on a box with HDs it would be horrible.

mysql_install_db now uses innodb to set up some system things and restarts the mysqld binary a couple of times. When you also have a large innodb_buffer_pool (I was trying on a box with 192 GB of RAM) each startup of mysqld took quite a while.

So my initial thought is that the bootstrap probably does not need to use the configured innodb_buffer_pool_size setting, but should perhaps override it with the minimal size needed.

Managing the size of the ib_logfiles is also something that requires restarting mysqld. Given the sort of experimenting I was doing, it would be nice to be able to configure this dynamically.  I can see many issues with changing the sizes of the existing files, but what seems easier to implement would be the ability to increase or decrease the number of files used (removing the older, higher-numbered files once they have been checkpointed), and that would give the DBA a lot more freedom.

I clone servers quite a bit, and the extra 32 GB of redo log files is a pain to clone/copy and time consuming. Being able to reduce the number of files and checkpoint prior to shutting the server down for cloning, then re-enable the normal value afterwards, would save a lot of cloning time, so it would be a handy feature.

I also tried playing with the new GTID features.  They look great, except for one thing: they expect masters and slaves to have binlogs enabled, so that the GTID values are kept locally and available at startup.  I manage several boxes where, for performance reasons, the slaves do not have binlogs enabled; if a slave dies it simply gets recloned.  Adding back the requirement for binlogs (and log_slave_updates) on a slave to enable this feature seems wrong. I can see the need for the different GTID values to be kept somewhere, but I do not see the need to keep all the binlog information, at least on a slave; given that the ratio of slaves to masters can be quite high, that can be an issue.  If necessary, write this information somewhere else. Why not in an InnoDB table, so that on server recovery you have information consistent with the rest of the database, something that might not happen with the binlogs…?
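For reference, enabling GTIDs in 5.6 currently implies a my.cnf along these lines on every server, slaves included, which is precisely the objection above (option names as of the later 5.6 releases; earlier pre-releases used different names):

```
[mysqld]
gtid_mode                = ON
log_bin                  = mysql-bin
log_slave_updates        = 1
enforce_gtid_consistency = 1
```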

Talking of binlogs, one new feature I am using which I think is good is binlog_row_image = minimal. This reduces the size of the binlogs and thus I/O.  I am missing the next step, which would be to compress the binlog events and reduce binlog size further. That may not be an issue on many systems, but several servers I manage write over 100 GB of binlogs a day; being able to compress the binlogs would be nice, and having better control of expire_logs_days too (as the daily granularity is often too coarse for me) would help.

NOTE: In MySQL 5.6.5 with gtid_mode = ON, mysql_install_db crashes! Bug reported to Oracle, and I guess it will get fixed soon.

I have seen a few other crashes in 5.6.4 and 5.6.5; I can’t provide links unfortunately as they’re not public, and they are still not resolved. I’d like to make the details of these crashes public, as others may come across them, or may have come across them before me, but these issues are not visible in http://bugs.mysql.com. It would be nice if they were, but that requires me to duplicate information, which is a pain. Oracle, please make this easier; it is good for us all.

So, with the little testing I’ve done so far, MySQL 5.6 looks good. I need to spend more time with it and investigate the new features, many of which will make life easier.

When it finally goes GA I will have to spend a lot more time seeing how it behaves, but I believe there are a few issues which still need resolving before that happens.

Make InnoDB redo log file size configuration more flexible

Over the last few months, on different systems, I have been modifying the ib_logfile files and their sizes, increasing them due to the increase in load those systems have been experiencing. These files contain information about changes which have been committed but not yet written to the ibdata files, or to the individual .ibd files if using innodb_file_per_table. They are also used on recovery, if the server shuts down unexpectedly, to bring the system back into a consistent state.

Checkpointing in MySQL is well documented, but I was surprised just how much the throughput of heavy-write applications can change when the sizes of these files are changed. It may seem obvious, but given how disruptive it can be to shut down a system, change these values and start it up again, this is not done frequently. These files help improve performance because changes are written into them linearly, whereas the corresponding changes to the data files are written in pages, and those pages are probably scattered randomly over the disk; the latter process is slower.

I come from a Sybase background, where “a database” consists of “data” and “log” files. The data files correspond to the ibdataX or .ibd files and the log files correspond to the ib_logfileX files, the major difference being that (at least when I last looked, in Sybase 12.5) the log files were considered part of the database, and once added their size could not be reduced. Increasing them was easy, but you could not free up the space again.

So MySQL and Sybase are similar, but MySQL’s implementation keeps the ib_logfileX files independent of the database files. It struck me that it would be very useful if we could dynamically tailor the two settings that InnoDB provides: innodb_log_file_size and innodb_log_files_in_group. Since MySQL already creates missing files automatically, typically when you start MySQL for the first time or when you remove the files after changing the innodb_log_file_size parameter, this sounds reasonably easy: make innodb_log_files_in_group larger and, when you get past the last available file, build a new one; if the number is decreased, simply wrap back to the first file again, ignoring the later ones. Changing the size of the files could be done too, but is perhaps harder. In both cases you need to be sure this works when recovering from a crash, which is a critical moment.
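In the meantime, sizing still has to be done up front. A rule of thumb I find useful (a heuristic, not something from the MySQL documentation) is to sample the “Log sequence number” from SHOW ENGINE INNODB STATUS twice, measure the redo write rate, and size the log group to hold roughly an hour of writes. A sketch of the arithmetic, with made-up LSN values:

```python
def suggested_log_file_size(lsn_start, lsn_end, interval_secs,
                            files_in_group=2, target_hours=1.0):
    """Size each ib_logfile so the whole group holds ~target_hours of redo.

    lsn_start / lsn_end: "Log sequence number" values from two
    SHOW ENGINE INNODB STATUS samples taken interval_secs apart.
    """
    bytes_per_sec = (lsn_end - lsn_start) / interval_secs
    group_bytes = bytes_per_sec * target_hours * 3600
    return int(group_bytes / files_in_group)

# Example: ~1 MB/s of redo sampled over 60 seconds, 2 log files in the group.
size = suggested_log_file_size(3_836_410_803, 3_899_337_651, 60)
print(size // (1024 * 1024), "MB per ib_logfile")
```

The downside, and the point of this whole section, is that once you have the answer you still need a restart to apply it.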

A change like this would help people see whether changing the size improves performance, and certainly for systems where taking downtime is expensive and difficult it would be a welcome move, as initial sizing decisions may prove wrong over time and a dynamic change would resolve the problem quickly. I recently took down some old MySQL systems which had been running uninterrupted for just over a year; I would probably have adjusted these settings earlier had I been able to, and that would have been very convenient.