If you happen to have some free time tomorrow and are in Madrid, please come along to the second Madrid MySQL Users Group.
Details can be found here. The meeting will be in Spanish. I look forward to seeing you.
I have been meaning to sit down and write about this for a while now, but with the summer upon us I have not had time.
I use CentOS 6 on my home PC for many reasons, but mainly because I have been using RedHat versions of Linux for quite a long time and am comfortable with it. CentOS, like its upstream parent RHEL, is good because it is well tested and stable and it is not absolutely necessary to perform a major version update every few months like its cousin Fedora. I do not have time for that and most security issues will be fixed with normal updates so this works well.
The downside, of course, is that RedHat can not afford to ship bleeding-edge versions of software, which may be rather unstable, and that can be a nuisance if you need or want to use newer versions.
An example of that, from my DBA perspective, is the version of MySQL which is shipped with RHEL 6.4: MySQL 5.1.66 is very old and I would not really want to run that on a production machine taking anything but toy load. However, in this case, others provide drop-in replacements: Oracle with MySQL 5.6 (the current GA version), and the MariaDB and Percona builds are other good alternatives.
In the browser world I moved over some time ago to Chrome. It works well on my Mac, and at the time it saved me from having to restart Firefox frequently as it was eating all my memory. Versions of Chrome were available for CentOS 6 until rather recently, provided by Google, and that was really good. It made life easy and I could switch machines with little trouble and use the same browser.
There are versions of Chrome for other Linux distributions: Debian, SuSE and Fedora, but I want to keep using CentOS 6 for the reasons outlined above. Judging from other posts I have seen, others feel the same. My understanding is that CentOS 6 support was dropped because newer libraries on which Chrome depends are not available on CentOS 6: rpm says Requires: libstdc++.so.6(GLIBCXX_3.4.15)(64bit) when you try to install the latest version.
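If you want to check this yourself, the symbol versions a libstdc++ binary provides can be listed without any special tooling. A small sketch (the /usr/lib64 path is the usual 64-bit CentOS 6 location; adjust as needed):

```shell
#!/bin/sh
# List the GLIBCXX symbol versions embedded in a libstdc++ binary.
# grep -a treats the binary file as text, so no extra tools are needed.
list_glibcxx() {
    grep -ao 'GLIBCXX_[0-9][0-9.]*' "$1" | sort -Vu
}

# On a CentOS 6 box you would run something like:
#   list_glibcxx /usr/lib64/libstdc++.so.6
# and check whether GLIBCXX_3.4.15 appears in the output.
```

If GLIBCXX_3.4.15 is missing from the output, the rpm dependency above can never be satisfied by the system library.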
Upgrading the OS versions of these base libraries is just troublesome: it may break several things, and while people sometimes do this to try and work around similar issues, I do not think that is the way forward. Something will eventually break. Yet if these libraries are needed, why can they not be built and located, like google-chrome itself, in a different location to the system default, and google-chrome then linked using RPATH settings to the expected directory? I have done things like this before in a previous job and, unless I am mis-remembering the procedures, this should work fine. Rebuilding the required libstdc++ in a different location to the default, such as /opt/google/lib64, naming it something like google-libstdc++, and then adjusting the google-chrome dependencies to require this package and not the system libstdc++ is not really that hard. Doing so would fix the issue for google-chrome, and potentially make the library usable for others who might also want this newer version on CentOS. Given Google has already set up a yum repo for CentOS 6, the automatic install of these new dependencies would just work out of the box.
Others suggest moving over to Chromium, but it is not the same as Chrome and has given me a few issues since I switched. I would like to use the same browser on my different desktops and not have to worry.
It would also help if RedHat would recognise that some people will need to use newer versions of some libraries than those that come with the OS, and if they do not want to make the effort to support this themselves, it would help if they would make recommendations on how this can be done by others, perhaps along the lines outlined above. In the end this avoids people being frustrated that their “stable OS” is out-of-date, and also avoids issues with people hacking things around themselves, or resorting to building from source, when the whole point of a package-based system is that you should not need to do that, and the packaged builds tend to be more consistent and complete.
So here is to keeping my fingers crossed that perhaps we will see Chrome on CentOS 6 return in the future and make a few people happier.
What do you think?
“My MySQL bugs” is a list I recently created and intend to keep up to date with issues I have seen.
As mentioned here, and after talking to a few people, we have created a meetup page, http://www.meetup.com/Madrid-MySQL-users-group/, and proposed the first meeting for Thursday, 4th July. If you are interested and in the area, come along and say hello. It should be interesting to meet a few others who work in the field. If you can, let us know you are coming so we have an idea of how much interest there is.
Once upon a time I did an operating system upgrade, a minor one that should do no harm, but just get me up to date by fixing any bugs in the version I had been using. It seemed like a good idea.
All seemed to be fine. I use a package provided by an external vendor and not the one produced by the operating system provider, as this vendor provides a newer version of the package and I need that. The vendor has to make the package fit into the OS environment it is built for, and normally does a pretty good job.
I use automation to build my systems, and when I built a new one some issues appeared. With the new version of the OS, the provider had enhanced one of its packages and the installation pulled in new dependencies. The install of the external package I use then broke as it conflicted with the new dependency provided by the OS. While a workaround is possible (uninstall this, install that, …), the nice world the package manager provides to make these problems a non-issue was suddenly broken. Horrible.
To make matters worse the external vendor can not easily fix this. If he builds a package which obsoletes this new dependency, then on earlier versions of the OS he changes things that did not need changing. If he does not, the breakage remains on this latest version. Of course in the real world many people use a mixture of OS versions and expect a single package to do the right thing, at least for the same major version of an operating system. An external vendor’s packager’s life is not a happy one: it is hard to provide a package that integrates seamlessly without breaking something.
The story above is a real one, relating to RHEL 5.9 versus the previous RHEL 5.8 (or CentOS in my case) and the MySQL 5.5 and 5.6 rpms provided by Oracle. RedHat have enhanced the version of postfix they provide (a core component of the servers we use) and added MySQL library dependencies which were not there in RHEL 5.8. This pulls in the ‘mysql’ package as a dependency, and so installing the MySQL-server, MySQL-client and MySQL-shared-compat packages I had been using breaks, as these do not currently have the rpm “Obsoletes: mysql” entry needed to make the package install go smoothly.
Real life means I do not run the latest version of MySQL 5.5 but a version which works on my systems and has been tested with my software, so if I use that on CentOS 5.9 now it will break. Oracle may not mind updating their latest MySQL 5.5 rpm (for CentOS 5) but are unlikely to be interested in updating all their previous MySQL 5.5 or 5.6 rpms for this version of CentOS just in case I might be using one of the older versions. I am also not interested in upgrading all these MySQL 5.5 servers just to get a newer version which may have issues, so probably I will end up doing a “remove this package”, “add that one” workaround in an ad-hoc way to avoid this problem. It should not be necessary.
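Until the rpms gain the right Obsoletes entry, the least-bad version of that ad-hoc workaround is to let yum resolve the remove and the install in a single transaction via yum shell. A sketch, with the package names and versions from this post (adjust for your own setup):

```shell
#!/bin/sh
# Write a yum shell script that swaps the OS 'mysql' package for the
# vendor packages in one transaction, so postfix's new dependency is
# re-resolved rather than broken mid-way.
cat > /tmp/mysql-swap.yumsh <<'EOF'
remove mysql
install MySQL-server-5.5.23 MySQL-client-5.5.23 MySQL-shared-compat-5.5.23
run
EOF

# On the affected CentOS 5.9 host you would then execute (as root):
#   yum shell /tmp/mysql-swap.yumsh
```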
It has always been stated when building software: do not make functional changes in a minor version upgrade, as you will break something somewhere and upset someone. I understand that providing MySQL support in postfix (and perhaps other packages) seems like a good idea, but it breaks this “rule”. I also think that having to provide a package within an existing OS framework while trying to avoid breaking any native OS packages is hard, but I still think, as I have commented in previous posts, that coming up with a way to allow external vendors to do this without interfering with “native OS” packages would be good. Recommend that these packages get named a certain way, do not get located in “native OS” locations, and do not conflict with “native OS” libraries.
If people want to use these packages, show them how to link against them and add dependencies in a way which may favour these vendors’ packages, if that is a deliberate intention. Provide these rules and life will be easier for all of us. One world where this is done is the *BSD ports system, which is intentionally independent of the “native OS” and located separately; people can use it if needed, or not, at their convenience. In my opinion something like that in the rpm world would be nice indeed, and external rpm repos would not have to worry so much about adding new functionality and breaking the OS.
I am not fully sure how I am going to work around this issue, but it has been causing problems for a while now and will require some poking. I will try and get in touch with the parties concerned to alert them to this change which has broken something for me.
Am I the only one who has noticed this?
This is also likely, I guess, to be an issue for the MariaDB and Percona rpms for RHEL 5/CentOS 5, as they probably implement the same dependency behaviour.
This is what I saw:
2013-06-18 00:48:55 +0200 /Stage[main]/Mysql::Client/Package[MySQL-client]/ensure (err): change from absent to 5.5.23 failed: Could not update: Execution of '/usr/bin/yum -d 0 -e 0 -y install MySQL-client-5.5.23' returned 1:
Transaction Check Error:
  file /usr/bin/mysql from install of MySQL-client-5.5.23-1.rhel5.x86_64 conflicts with file from package mysql-5.0.95-3.el5.x86_64
  file /usr/bin/mysql_waitpid from install of MySQL-client-5.5.23-1.rhel5.x86_64 conflicts with file from package mysql-5.0.95-3.el5.x86_64
On a server that runs MySQL 5.5.23 (CentOS 5.8) I see:
[myuser@myhost ~]$ rpm -q --obsoletes MySQL-shared-compat
which actually looks wrong. CentOS 5 does not have a mysql-libs package, but a package named mysql; it is CentOS 6 that has a package named mysql-libs, and on early releases of the MySQL 5.5 rpms some packaging issues similar to the one described here did have to be ironed out.
Checking a later package I see things have changed but there still seems to be an issue.
[myuser@myhost ~]$ rpm -qp --obsoletes MySQL-shared-compat-5.5.32-1.rhel5.x86_64.rpm
MySQL-shared-compat-advanced < 5.5.32-1.rhel5
MySQL-shared-compat < 5.5.32-1.rhel5
mysql-libs < 5.5.32-1.rhel5
The first two entries need fixing (the first needs no version and the second should not be there) as they cause other issues, but I had not noticed until now the package name mistake in the last line, which mentions a package in CentOS 6, not CentOS 5, and should be just ‘mysql’.
Maybe, given I am not using the latest version of MySQL, I will need to repackage this version myself. A quick change to the rpm spec file and a rebuild should give me the package I need. Given the src rpm is available it is good I can do that… but no. Further inspection shows that the MySQL-shared-compat package is not built from the MySQL-X.Y.Z-r.src.rpm but is built independently of it, and I think old library files are “bundled” in manually to make this package. I vaguely remember that maybe the source tar ball has the spec file for this shared-compat package. Let me check, as maybe I have to do some deeper magic to roll my own package.
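For a package that is built from the src rpm, the spec change itself would be mechanical. A sketch of what I had in mind, with a purely hypothetical spec file layout (the real spec name and tags will differ):

```shell
#!/bin/sh
# Add the missing "Obsoletes: mysql" tag to a spec file, inserting it
# just after the Name: tag. GNU sed syntax.
fix_spec() {
    sed -i '/^Name:/a Obsoletes: mysql' "$1"
}

# With a real src rpm the round trip would be roughly:
#   rpm -i MySQL-5.5.x-1.rhel5.src.rpm
#   fix_spec ~/rpmbuild/SPECS/mysql.spec
#   rpmbuild -ba ~/rpmbuild/SPECS/mysql.spec
```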
Oracle: please do not do this. Provide enough of the source in the normal tar ball to be able to build the old version libraries, and build _everything_ from that single tar ball. This would avoid all the magic you are doing now to build this package and others who need to adjust these packages can do the same.
I’m interested in meeting up and sharing experiences about using MySQL and managing MySQL servers with people here locally in Madrid. I had a quick look around and could see no MySQL user groups locally, so it might be nice to create such a group and exchange ideas over a beer, coffee or cola every once in a while. If you’re in Madrid and are interested please let me know. I’ve created a temporary email address: madrid-mysql-users-2013 AT wl0.org (careful with the domain), which you can contact me on to confirm an interest. Oh and I’d expect these meet ups to be in Spanish, but that’s not a requirement.
I am interested in meeting up and sharing experiences about using MySQL and administering MySQL servers with people here in Madrid. I had a look around the Internet and saw no local MySQL user groups, so it would be good to create a group and exchange ideas over a beer, a coffee or a cola. If you are in Madrid and interested, please let me know. I have created a temporary email address: madrid-mysql-usuarios-2013 AT wl0.org (careful with the domain), which you can use to contact me to confirm your interest in attending. Being here in Spain, I imagine we would speak in Spanish.
Progress, slowly but surely. See: http://www.meetup.com/Madrid-MySQL-users-group/
I have been playing with ansible recently and quite like some of what I see.
One issue I have when setting up the ansible hosts file, which contains the inventory of all hosts known to ansible, is that at work my hosts follow a slightly more complex pattern which ansible can not directly match (see http://ansible.cc/docs/patterns.html), so I patched 1.0 to cope with these.
This provides for:
The patch can be found here: http://merlin.wl0.org/201305/01/ansible-1.0-extend-patterns.patch and it allows me to include several types of servers at work with a simpler pattern.
My python is still a bit rusty but this seems to do the right thing. It would be nice to see this patch incorporated into ansible.
I’ve just upgraded ansible on my Mac to 1.1 (via macports) and this patch hasn’t been applied yet. I think 1.2 is out soon so I’ll see if I can poke the developers to get this patch included. See: https://github.com/ansible/ansible/issues/3135
So the common solution is to artificially warm up the server by doing queries which will fill the buffer pool. Typical solutions might be to run SELECT COUNT(*) FROM some_table FORCE INDEX (PRIMARY) LIMIT ... on a number of tables to fill up the pool on startup. Fitting this into the standard mysql init start script is somewhat tricky as no hooks are provided for this sort of post-start action. (It would be nice to have this for other tasks too.)
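A minimal sketch of such a warm-up, generating one full PRIMARY-index scan per table; the table names, database name and the way it is hooked in after start are all assumptions:

```shell
#!/bin/sh
# Emit one buffer-pool-warming statement per table; piping the result
# into the mysql client after mysqld is up does the actual work.
warmup_sql() {
    for t in "$@"; do
        printf 'SELECT COUNT(*) FROM %s FORCE INDEX (PRIMARY);\n' "$t"
    done
}

# Hypothetical post-start hook:
#   warmup_sql orders customers invoices | mysql --batch mydb
```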
MySQL 5.6 is out now and that is good news. I have already been using pre-GA versions on some production servers with good success and now that the few wrinkles I have seen have been ironed out, I am sure a lot of people will find the new features in 5.6 well worth waiting for.
However, that does lead to the question of: “what next?”
I still have several things that I would like to see in MySQL in no specific order of preference such as:
So these are just a few of the ideas I personally would like to see. I’ll see if I can update the post later providing some of the feature request bug reports as reference.
However, this is just part of the story. I have my ideas of what I would like to see in MySQL 5.7 and you have yours, but there does not appear to be a way currently with MySQL to voice interests to Oracle in a place where different people can be heard and it is possible to see what is most wanted.
My way of using MySQL may vary significantly to yours. My problems may also be quite different, but I would imagine that there is a lot of common ground. So what would you like to see in MySQL 5.7?
In the meantime I guess we will have to wait and see what comes out in 5.7 and where things are going to go. There is likely to be an 18-month wait for that, I guess, so plenty of time to do lots of new and exciting things. Meanwhile, MySQL 5.6 needs deploying on new servers once it has been fully tested and any issues resolved, and that is quite a bit of work to keep me busy for a few days …
The blog title says it all: Do we need a MySQL Cookbook? I tend to think so.
This seems to be something that is missing with current MySQL documentation. There is lots of information available but finding the appropriate bit can be quite tedious and it often requires looking in multiple places.
A lot of other software has such books, but for some reason MySQL seems to be missing one.
A recent example comes from a “documentation feature request” I posted today: http://bugs.mysql.com/bug.php?id=68171. MySQL 5.6 provides a way to “move InnoDB tables” from one server to another. There are many reasons why you may want to do it, but the documentation is currently rather sparse. A simple “example recipe” for this would be good, as would an equivalent recipe for other engines where you can do this such as MyISAM. This is just an isolated example.
As far as I know there are no “best practices” on how to set up or manage a MySQL server, whether that comes from Oracle or the community. A common community view would probably be better as it makes “all vendors” work towards a common goal, even if they compete in other areas. Look at the LSB as an example, adopted to some extent by all Linux distributions. Again, there may often be several ways to solve a problem, so a Cookbook would be quite nice as it would suggest how to solve common tasks and let you use those as the basis for real implementations.
So what things could these Best Practices or Cook book contain? Things that come to mind are:
What other things am I missing for a cook book?
I’m sure this list could include many other things, and maybe there is a need to avoid too much detail, referring to existing documentation where appropriate; but just providing the recipes and the explanation of why and how would be good for a large number of people. I for one have found from time to time that I could not find such a recipe to do a particular task, and I am sure there must be others like me with the same problem. Existing books are very good, but their focus seems to be slightly different and they may go into a lot of detail when in some cases that detail is not needed initially.
So would a cook book like this solve a need for you, and if so what might be missing from the suggestions I have made above?
After lots of pain trying to get CentOS 6 to work on my Mac Mini mid-2011 and failing, I finally find that CentOS 6.3 does indeed install. Previous versions did not work for me, but I’m delighted to see that the work RedHat did to make this work on Fedora 17 has obviously made it into CentOS 6. The issue in the past had been that the Mac would not boot the install DVDs.
So all that was needed was to burn the (new?) CentOS-6.3-x86_64-netinstall-EFI.iso image and press ALT on boot; the DVD booted fine, and the install went along the same way as any normal CentOS/RHEL installation. In fact I’ve been able to keep my original Fedora setup, as the Volume Group I’d previously set up on my disk was recognised, so all I needed to do was create a new “root” logical volume and install on to that.
So thanks RedHat. You now make it much easier for me to migrate my “main” but older system on to this Mac.
I saw a post on profiling memory usage today and this reminds me of several discussions I have had with different people.
Why would you want to profile the memory usage? Usually to see where memory is allocated and for what purposes, and usually you only care when memory usage is higher than expected. That is, a DBA normally wants to use all available memory on a server for mysqld, whatever that size may be.
Configuration parameters may be inappropriate and need adjusting, so having a way to determine the range of memory usage based on those parameters would be most helpful. However, the configuration parameters as a whole put no limit on memory used, so different workloads can quite easily lead to memory being insufficient on the OS, triggering swapping or perhaps causing the mysqld process to be killed by the kernel’s OOM killer. None of this is ideal.
So I would like to configure a cap on the maximum amount of memory that can be used, and indeed that is what most RDBMSes do. That is, the allocation is made on startup, ensuring the memory is available, and any memory requests are taken out of this pool. This technique _should_ avoid swapping (though I still see issues on NUMA architecture boxes). It also means that at some points in time the database server may not be able to honour memory requests, and therefore must fail, wait until the needed resources are available, or free existing resources to make space for the requested new one.
Doing this well is probably hard enough. Doing it in MySQL with different storage engines is probably harder still as each engine does things its own way.
Some mechanism like that would certainly help avoid strange behavioural issues when the query profiles change and this causes the server to behave in unexpected ways, such as swapping itself to death. The only solution right now seems to be to configure the server to ensure that these “edge cases” can not happen by using less memory, thus wasting valuable resources on the server.
Currently it is also pretty hard to see how memory is used. If you use InnoDB you know that the buffer pool takes up a sizeable portion of memory, but there is still a significant amount of other memory that’s used by each individual connection and this can vary significantly depending on the query profiles. It would be really nice to dynamically look inside MySQL and see how memory is used, and also see how adjusting different parameters may adjust this.
Of course none of this is in MySQL at the moment. I would certainly like to see first better information on current memory usage, I would guess some more tables in performance_schema, and at a later stage some way to cap memory usage to a desired level, and hope that when those limits get reached mysqld will figure out how to free up some resources in order to continue, wait or fail “gracefully”.
Until then we will have to continue playing the game of “configure mysqld”: see if it works, and then ensure that queries to the system do not change enough to break it and require us to go back and do it all again.
Note: there are several feature requests for this, so I do not think I am the only one in wishing for more visibility and configurability of the memory usage.
Some feature requests that seem related to all this are shown below:
I have been using virtualisation at home for some time, since the days of vmware-server on Linux. It works well and if you have a sufficiently powerful box is a great way to experiment without destroying your own server or installing software which a few hours later you may want to remove again.
I also recently bought a Mac mini 2011, into which I put 16 GB of RAM, hoping to set it up as a host for various virtual machines. This was going to run Linux. I wanted to run the host as light as possible and do the real tasks inside my virtual machines. I also wanted to use this server to migrate from an older PC I use for my day-to-day services (web, dns, email, ftp etc).
Getting Linux installed on the Mac was a bit trickier than expected. I have been using RedHat since version 3.0.3 so prefer that to Debian or Ubuntu, if only because it is an environment that I am more familiar with. I tried in vain to get CentOS 6 to install on the Mac but it seems the required EFI boot loader does not work. Others have had problems too I think. Less surprising is that Ubuntu installed first time (credit to them for getting it to work on more exotic hardware), and when I tried Fedora 17 I saw that also worked, so that is what I am using. I would still prefer to switch to CentOS as it stays stable for longer, and playing the upgrade game with Fedora gets to be a pain. Maybe some time soon…
There is a lot of noise around now about cloud computing and I wanted to use something a bit more sophisticated than virt-manager on my PC. virt-manager requires you to organise the exact setup of the boxes, network and disk images yourself, and cloud platforms seem to make this management easier for you.
oVirt seemed promising: there is even a Fedora 17 repo and it is supposed to work out of the box according to various blogs and articles, but I had trouble getting it all up and running. If you run it on a single host it requires a lot of packages and software to be installed. So after various attempts I gave up.
OpenStack is also pretty hot and has quite good documentation, though it seems complex perhaps because of the scale that it runs on in some deployments. Again I read the manuals and documentation and had trouble getting the whole environment to work. The configuration is quite complex, and it seems easy to make a mistake. If you do that then figuring out what is broken or why is pretty hard. Again this is a shame especially as this seems to be perhaps the most sophisticated of the 3 cloud environments I tried.
Family circumstances mean I have a lot less time than I would like to investigate but I did do quite a bit of homework before starting so this has been quite frustrating.
I then tried OpenNebula, version 3.8.1. The software is still evolving quite quickly and has changed a lot over time. It also seems to be the smallest package: the tar ball is tiny and, at least on Fedora 17, it did not take me long to get the software installed (including sunstone-server). I see it is tagged to be included in Fedora 17 but I could not find actual packages to install.
There are quite a few howtos and tutorials for OpenNebula, but some of them are for older versions, and if you follow them things will not work as expected (differences in the templating options etc.). This is confusing for a new user.
I use Fedora 17 but do not see packages for OpenNebula. Working packages for Fedora, CentOS, Ubuntu etc would be most welcome as this would probably ensure that the different components were configured correctly with the OS components like virt-manager etc.
Most of the OpenNebula daemons have start and stop commands but no status command. This would at least allow you to check that the configuration was correct and the daemons correctly started, especially when doing manual installs.
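A status sub-command does not need much; a sketch of the sort of pidfile check I mean (the pidfile path is hypothetical, not what OpenNebula actually uses):

```shell
#!/bin/sh
# Report running/stopped for a daemon based on its pidfile:
# the pid must exist in the file and the process must still be alive.
daemon_status() {
    pidfile=$1
    if [ -f "$pidfile" ] && kill -0 "$(cat "$pidfile")" 2>/dev/null; then
        echo running
    else
        echo stopped
    fi
}

# e.g.  daemon_status /var/run/one/oned.pid
```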
I would like to see some sort of “check installation” command allowing a more complete set of checks to be done, to see whether the installation is correct and complete for each of the different components that make up or are needed by OpenNebula. In many places the documentation says “Assuming you have a correct kvm setup …”, “Assuming you have a correct iscsi setup …”, etc, yet provides no way for you to check whether the setup of these base components actually works as required by OpenNebula. Whether that is included in the documentation or on a wiki somewhere (“How to setup kvm on Fedora X for OpenNebula…”) I do not really mind. I have had problems with PolicyKit not allowing oneadmin access to the virtual machine, with the virtual machines not seeing their disks (I think due to user/group permissions, as on Fedora 17 kvm seems to assign file ownership as qemu:qemu), and with the VNC connections in Sunstone not working. I am pretty sure that most of these issues are “trivial” to fix if you have done this before and understand what OpenNebula expects and what the host operating system is doing “incorrectly”.
For initial testing, including various standard template files would be useful, if only to help confirm simple things like:
I was confused by the network documentation and exactly how that relates to the underlying host’s network configuration. This does not seem to be well documented but “assumed”.
To summarise, these technologies are here and being used quite widely, but I would guess in larger environments where the people using them have time to investigate and set things up properly. I was hoping that setting these up at home would not be that hard, but have found it much harder than anticipated just reading the documentation and trying on my own. I think all these platforms would be used more widely if they were more plug and play, that is, if they worked out of the box on most of the popular Linux distributions.
Having said that things have obviously progressed since the first virtualisation environments were created (on the x86 architecture) and probably in no time at all these technologies or similar will just work out of the box and be the norm. In the meantime I am going to keep poking away and hope to finally get one of these environments working for me.
So if you read this and have some useful pointers I would be most interested in using them to get up and running.
I see the OpenNebula work originally planned for Fedora 17 is now delayed (see: https://bugzilla.redhat.com/show_bug.cgi?id=815001), but there are RPMs for CentOS 6 which can be found or built from here: https://nazar.karan.org/summary/misc!opennebula.git. So it looks as if I will soon be able to get a packaged version of OpenNebula working on my CentOS 5 and Fedora 17 home PCs. Still trying to make progress in the meantime.
Not much to add really to the bug I’ve filed here: bug#67159.
Again this GTID stuff looks good, but seems to prevent changes in the configuration of performance_schema, which I think is not appropriate, especially as P_S now has lots of extra goodies and after 5.6 will surely have even more.
Not much to add really to the bug I’ve filed here: bug#67099.
I personally can think of some very nasty consequences of applying this on the slaves I manage. The reason I’m posting the bug is that, while I guess this is too late to fix in 5.6 as it’s effectively a new feature, I’m sure many sites will bump into this and be somewhat disappointed if they want to use the new GTID feature and have several slaves. Hence, if the fix/feature has to go into MySQL 5.7 then I hope it goes in sooner rather than later. We will see.
I probably should have updated this earlier but it does seem that Oracle have taken these comments on board. See: WL6559. It looks like they plan to do this for 5.7 which is good news. I’m really hoping that we will not have to wait too long for this version as it has other “goodies” which are going to be interesting. Reducing the time of the release cycle will I think allow those of us who appreciate these new features to have access to them sooner.
A colleague, Kris, commented on a recent find I made on 5.6.7-rc1 which, while not the end of the world, does seem to severely limit the use of the new GTID features, or if nothing else makes them a pain to use.
Today I found another related issue, when trying to check the syntax of one of the MySQL commands. That’s now reported as bug#67073.
All of these are sort of edge cases, but they do make 5.6 look a little rough round the edges, when the rest of the product is actually really good and a great improvement on the MySQL 5.5 GA version that everyone uses. That’s really a shame.
I did report the original problem as SR 3-6270525721: MySQL 5.6.7-rc1, grants, replication and GTID cause problems, but of course that is not visible to the general MySQL community. While this specific issue with the help tables has been reported in bugs.mysql.com, I do get the feeling that other users of MySQL may report issues which are not visible to the community as a whole, and that means that solutions, workarounds or even the possibility of saying “I have that problem too” is no longer possible.
For many issues I’ve chosen to file two tickets, one on MySQL’s http://bugs.mysql.com and another internal Oracle ticket, but this is really a nuisance and a time waster, and in the end I don’t believe it’s helpful to anyone. I’m sure I’m not the only person who suffers from this. I also note that when browsing what’s changed in the latest release, such as http://dev.mysql.com/doc/refman/5.6/en/news-5-6-7.html, you see references to internal bug numbers which many people cannot even access. I am able to see most of these tickets but many people are not, so the problem and its resolution become rather opaque. That’s counterproductive for everyone. Improving this reporting of changes would help us all.
Oracle, please make this work in an easier manner, so that issues with MySQL software can be more visible (if the reporter chooses to make them so) and information on changes and bug fixes is more complete; this will save time and frustration for everyone.
Note: Opinions expressed above are my own.
I’ve blogged before about the way to use numactl to start up mysqld, and thus to try to better spread the memory usage on larger-memory servers. This came from an article by Jeremy Cole and works fine. I recently had some issues with mysqld seeming to run out of memory on a box which appeared to have plenty free, so it seemed like a good idea to adapt a minutely collector script I run to include the numa_maps output, so that I could see whether the failed memory allocations were related to this. So far so good.
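The collector change was essentially a minutely job cat’ing that file into a log. A minimal sketch of the idea (the function name and log path are my own for illustration, not the actual script):

```shell
#!/bin/sh
# Sketch of a minutely numa_maps collector (names/paths illustrative).
# Note: reading numa_maps makes the kernel walk the target process's
# page tables, which can briefly stall the process being inspected on
# large-memory boxes -- exactly the effect described below.
collect_numa_maps() {
    pid=$1
    logfile=$2
    printf '=== %s pid=%s ===\n' "$(date '+%Y-%m-%d %H:%M:%S')" "$pid" >> "$logfile"
    cat "/proc/$pid/numa_maps" >> "$logfile" 2>/dev/null ||
        echo "numa_maps not readable" >> "$logfile"
}

# Example: collect for the current shell; a real collector would use
# the mysqld pid, e.g. $(pidof mysqld).
collect_numa_maps $$ /tmp/numa_maps.log
```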
Many of the clients that connect to the database servers I manage have a very short connect timeout, typically 2 seconds. In a normal network and under normal conditions this is more than enough to allow for successful operation and if the connect does not work in that time it’s an indication of some underlying issue, whether that be load on the mysqld server, or something else.
The change I implemented to collect the numa_maps information wasn’t expected to cause any issues; typically quite a lot of information is cat’d out of /proc/somewhere to collect configuration or status information.
However, after implementing this change I suddenly noticed more MySQL client connection errors than before, in fact a significant increase. It wasn’t immediately apparent what the cause was until I noticed that these errors only occurred a couple of seconds after the change of minute, at hh:mm:02, hh:mm:03, etc. Then it dawned on me that this was indeed due to the change in the collection script looking at /proc/&lt;mysqld_pid&gt;/numa_maps. Disabling this functionality again removed all issues. This was on servers with 192 GB of RAM.
The information provided by /proc is useful, and it would be even better if it could be collected in a way which doesn’t block the process being “investigated”. As such I have filed a bug report with RedHat, though really this is just a kernel “bug”, or behaviour which, while it may be known, in this particular case is very unsatisfactory.
So whilst most people do not configure such a short MySQL connect timeout, if you do have a setting like this please be aware of the consequences of running numa-maps-summary.pl, or of looking at /proc/&lt;mysqld_pid&gt;/numa_maps directly. This was not discussed by Jeremy and I’d guess he was unaware of it, or did not expect people to run this type of script as frequently as I was. Database memory sizes keep increasing, so what may not have been noticeable on smaller servers can now become an issue.
New versions of MySQL are always interesting to try out. Often they have features which I may have asked for myself, so it’s satisfying to see them eventually appear on a system I use. Often other new features make life easier for the DBA. Finally, we hope overall performance will improve and managing the server(s) will become easier.
So I had a system which needs to make heavy writes, and performance was a problem, even when writing to SSDs. Checkpointing seemed to be the big issue, and the combined ib_logfile size in MySQL 5.5 is limited to 4 GB. That seems a lot, but once MySQL starts to fill these files (and I believe this happens at around 70% of the total), checkpointing kicks in heavily and slows things down. So the main reason for trying out MySQL 5.6 was to see how things performed with larger ib_logfiles. (Yes, MariaDB 5.5 can do this too.)
Things improved a lot for my specific workload which was great news, but one thing which I noticed was that initial install / setup time of a system with these large files increased a huge amount. (I was using 2 x 16000M files.) That was noticeable even on a box with SSDs. On a box with HDs that would be horrible.
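As a back-of-envelope check on that configuration (the ~70% figure is my own approximation of where aggressive flushing seems to start, not an exact documented threshold):

```shell
#!/bin/sh
# Rough arithmetic for the 2 x 16000M configuration I was testing: total
# redo capacity and the approximate fill level where heavy checkpointing
# seems to kick in.  The 70% value is an estimate, not a documented one.
file_size_mb=16000
files_in_group=2
total_mb=$((file_size_mb * files_in_group))     # total redo space
heavy_checkpoint_mb=$((total_mb * 70 / 100))    # approximate trouble point
echo "total redo: ${total_mb}M, heavy checkpointing from about: ${heavy_checkpoint_mb}M"
```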
mysql_install_db now uses InnoDB to set up some system tables and restarts the mysqld binary a couple of times. When you also have a large innodb_buffer_pool (I was trying this on a box with 192 GB of RAM), each startup of mysqld took quite a while.
So my initial thought is that the bootstrap probably does not need to use the normal innodb_buffer_pool_size setting, but should perhaps override it with the minimal size needed.
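A hedged sketch of the kind of workaround I mean, shrinking the buffer pool just for the install step (paths, user and sizes are assumptions for illustration; check mysql_install_db’s options on your version):

```shell
#!/bin/sh
# Sketch: run mysql_install_db with a temporary defaults file that keeps
# innodb_log_file_size consistent with the real config (so the created
# ib_logfiles match) but shrinks the buffer pool for the bootstrap only.
# All paths and sizes here are illustrative assumptions.
TMPCNF=$(mktemp)
cat > "$TMPCNF" <<'EOF'
[mysqld]
datadir                   = /var/lib/mysql
innodb_log_file_size      = 16000M   # must match the real config
innodb_log_files_in_group = 2
innodb_buffer_pool_size   = 128M     # small pool, bootstrap only
EOF

# Guarded so the sketch is a no-op on a box without MySQL installed.
if command -v mysql_install_db >/dev/null 2>&1; then
    mysql_install_db --defaults-file="$TMPCNF" --user=mysql
fi
```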
Managing the size of the ib_logfiles is also something that requires restarting mysqld. Given the sort of experimenting that I was doing it would be nice to be able to dynamically configure this. I can see many issues with changing the sizes of the existing files, but what seems easier to implement would be to be able to increase or decrease the number of files used (removing the older, higher number files once they are “used”/checkpointed), and that would provide a lot more freedom to the DBA.
I clone servers quite a bit, and the extra 32 GB of redo log files is a pain and time-consuming to clone/copy. Being able to reduce the number of files and checkpoint prior to shutting the server down for cloning, then re-enabling the normal value afterwards, would save a lot of cloning time, so it would be a handy feature.
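For reference, the restart-based procedure I have to follow today to change the redo log size looks roughly like this. The sketch below only prints the steps as a dry run; the datadir, target size and service name are illustrative assumptions:

```shell
#!/bin/sh
# Dry-run sketch of resizing InnoDB redo logs via a restart: it only
# prints the steps.  Paths, sizes and service name are assumptions.
DATADIR=/var/lib/mysql
NEW_SIZE=4000M

steps="1. mysql -e 'SET GLOBAL innodb_fast_shutdown = 0'  # full flush on shutdown
2. mysqladmin shutdown
3. mv $DATADIR/ib_logfile* /var/tmp/               # keep until restart verified
4. edit my.cnf: innodb_log_file_size = $NEW_SIZE
5. service mysqld start                            # logs recreated at new size"

echo "$steps"
```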
I also tried playing with the new GTID features. They look great, except for one thing: they expect masters and slaves to have binlogs enabled, so that the GTID values are kept locally and survive a restart. I manage several boxes where, for performance reasons, the slaves do not have binlogs enabled; if a slave dies it simply gets recloned. Adding back the requirement for binlogs (and log_slave_updates) on a slave to enable this feature seems wrong. I can see the need for the different GTID values to be kept somewhere, but I don’t see the need to keep all the binlog information, at least on a slave. Given that the ratio of slaves to masters can be quite high, that can be an issue. If necessary, write this information somewhere else. Why not in an InnoDB table, so that on server recovery you have information consistent with the rest of the database, something that might not happen with the binlogs?
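For context, enabling GTIDs in the 5.6 previews needs something along these lines on every server, slaves included (option names shifted between pre-GA releases, so treat this as a sketch rather than a definitive list):

```ini
[mysqld]
gtid_mode         = ON
log_bin           = mysql-bin   # required even on slaves, hence my complaint
log_slave_updates = ON          # also required on slaves
# plus the release-specific consistency option
# (disable-gtid-unsafe-statements in the early 5.6 previews)
```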
Talking of binlogs, one new feature I’m using which I think is good is binlog_row_image = minimal. This reduces the size of the binlogs and thus I/O. I’m missing the next step, which would be to compress the binlog events and reduce binlog size further. That may not be an issue on many systems, but several servers I manage write over 100 GB of binlogs a day. Reducing this further by being able to compress the binlogs would be nice, and having finer control of expire_logs_days (as the daily granularity can be too coarse in many cases for me) would also help.
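The setting itself is a one-liner; here it is alongside the expiry option whose daily granularity I would like to see refined (values illustrative):

```ini
[mysqld]
binlog_format    = ROW
binlog_row_image = minimal   # log only the columns actually needed
expire_logs_days = 4         # whole days only, hence my gripe above
```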
NOTE: In MySQL 5.6.5 with gtid_mode = ON, mysql_install_db crashes! Bug reported to Oracle, and I guess it will get fixed soon.
I have seen a few other crashes in 5.6.4 and 5.6.5; I can’t provide links, unfortunately, as they’re not public, and they are still not resolved. I’d like to make the details of these crashes public, as others may come across them, or may have come across them before me, but these issues are not visible on http://bugs.mysql.com. It would be nice if they were, but that requires me to duplicate information, which is a pain. Oracle, please make this easier. It is good for us all.
So with the little testing I’ve done so far MySQL 5.6 looks good. I need to spend more time with it and investigate the new features, many of which will make life easier.
When it finally goes GA I will have to spend a lot more time seeing how it behaves, but I believe there are a few issues which still need resolving before that happens.
Over the last few months, on different systems, I have been modifying the ib_logfiles and their sizes, increasing them due to the increase in load those systems have been experiencing. These files contain information which has been committed but not yet written to the ibdata files, or to the individual .ibd files if using innodb_file_per_table. They are used during recovery processing, if the server shuts down unexpectedly, to get the system back into a consistent state again.
The checkpointing in MySQL is well documented, but I was surprised just how much the throughput of heavy-write applications can change if the sizes of these files are changed. It may seem obvious, but given how disruptive it can be to shut down a system, change these values and start it up again, this process is not done frequently. These files help improve performance because changes are written linearly into them, compared to the changes in the data files, which are written in pages probably scattered randomly over the disk, the latter process being slower.
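One way to watch how close you are running to the redo limit is to compute the checkpoint age from SHOW ENGINE INNODB STATUS. A small sketch (field names as printed in the LOG section of 5.5/5.6 output; client credentials assumed to be set up):

```shell
#!/bin/sh
# Sketch: compute the InnoDB checkpoint age (bytes of redo written but
# not yet checkpointed) by parsing SHOW ENGINE INNODB STATUS text on
# stdin.  $NF takes the last field, so it copes with minor format drift.
checkpoint_age() {
    awk '/Log sequence number/ {lsn  = $NF}
         /Last checkpoint at/  {ckpt = $NF}
         END {print lsn - ckpt}'
}

# Usage against a live server (commented out; needs mysqld running):
#   mysql -e 'SHOW ENGINE INNODB STATUS\G' | checkpoint_age
```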
I come from a Sybase background, where “a database” consists of “data” and “log” files. The data files correspond to the ibdataX or .ibd files and the log files correspond to the ib_logfileX files, the major difference being that (at least when I looked at this in Sybase 12.5) the log files were considered part of the database and, once added, you could not reduce their size. Increasing was easy, but then you couldn’t free up the space again.
So MySQL and Sybase are similar, but MySQL’s implementation keeps the ib_logfileX files independent of the database files. It struck me that it would be very useful if we could dynamically tailor the two settings that InnoDB provides: innodb_log_file_size and innodb_log_files_in_group. Since MySQL already automatically creates missing files (typically when you start MySQL for the first time, or if you remove the files after changing the innodb_log_file_size parameter), this sounds reasonably easy: make innodb_log_files_in_group larger and then, when you get past the last available file, build a new one. If the number is decreased then you simply go back to the first one again, ignoring later files. Changing the size of the files could be done too, but is perhaps harder. In both cases you need to be sure this works when recovering from a crash, which is a critical moment.
A change like this would help people see whether changing the size improves performance, and certainly for systems where taking downtime is expensive and difficult this would be a welcome move, as initial sizing guidelines may vary over time and a dynamic change would resolve the problem quickly. I recently took down some old MySQL systems which had been running uninterrupted for just over a year; I’m sure I would have adjusted these settings earlier had I been able to, and that would have been very convenient.
I see that MySQL 5.5.21 has just been released. This sounds interesting. I’m mainly running 5.5.16, which has been broadly stable, but I have been caught by a few important issues which 5.5.21 fixes according to the change list:
These, together with a few earlier fixes since 5.5.16, are certainly interesting to me, so I’m eager to try out 5.5.21 and see how it fares.