Archive for January, 2010

Further Thoughts on MySQL Upgrades

Saturday, January 30th, 2010

I have been upgrading more MySQL database instances recently and have found a few more potential gotchas which, if you are not careful, can be rather nasty. These are not documented explicitly by MySQL, so it may be handy for you to know about them if you have not come across this type of thing before.

Most of the issues relate to upgrading MySQL instances which are replicated, either the master servers or the slaves. Some seem specific to the rpm packages I am using (MySQL enterprise or MySQL advanced rpms), though others are not.

Take care upgrading a 5.0 master when you have 5.1 slaves

It is not a good idea to run mixed major versions of MySQL in a replicated environment, so why would I be doing this? If you work in a replicated environment and have several slaves, the recommendation is to upgrade the slaves first. I work with quite a few slaves, so the process of upgrading them all takes longer than you would think: quite a long time in fact, while the different systems are tested and upgraded. Along came a newer version of 5.0 and I thought of upgrading the master, which had been giving a few issues, one of which was resolved in the latest 5.0.89. At least when using rpm packages, the upgrade runs /usr/bin/mysql_install_db --rpm --user=mysql as part of the package upgrade procedure. This ensures that the mysql database is up to date, but it also writes to the binlog if one is configured, which on a master is normally the case. Of course these are 5.0 install statements and they are not really understood by the 5.1 slaves, which try to interpret them too. End result: broken replication, and you need to skip several transactions on each slave. If you have several slaves this can be rather painful.
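If this happens, each slave has to be told to skip the offending events. As a reminder, that looks like this (standard MySQL commands; how many times you repeat the skip depends on how many broken events the upgrade wrote to the binlog):

STOP SLAVE;
SET GLOBAL SQL_SLAVE_SKIP_COUNTER = 1;
START SLAVE;
SHOW SLAVE STATUS\G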

Note: this is a consequence of replicating the mysql database, which I do as it is a good, quick and clean way to distribute GRANT information to all slaves. As far as I know MySQL does not discourage this, though perhaps many sites do not replicate their mysql database.

If you upgrade a master server from 5.0 to 5.1 by running mysql_upgrade after upgrading the binaries, you may hit a similar issue: the mysql_upgrade script determines that some new columns or tables are needed on the master and adds them. Again this gets written to the binlog, which is good for point-in-time recovery on the server itself, but bad from a replication point of view if the slaves do not run the same version of MySQL.

In fact the binlog has these two uses: (a) recording changes for point-in-time recovery, and (b) recording changes for replication to slaves. One could argue that for (a) the upgrade changes should not be written to the binlog, but I think that is wrong if you have to recover just after an upgrade. So I would suggest that the upgrade changes should probably be flagged specially in the binlog, allowing a slave to ignore them in the normal situation but also to recognise them, so that you could stop the slave, perform the same binary upgrade, and then continue with the new binaries exactly as had been done on the master. This behaviour should be controlled by a runtime flag which can be configured dynamically.

RPM Installs/Upgrades always start the server

rpm(8) is good, but some of the design decisions in the MySQL rpms are questionable. One of them is that the current MySQL rpms always start the server, whether you are doing a fresh install or an upgrade. It is currently not good practice to do rpm -Uvh MySQL-server-advanced-gpl…rpm (a 5.1 rpm) if you are running MySQL-server-enterprise-gpl…rpm (a 5.0 rpm), so normally I stop MySQL, remove the enterprise rpm and install the advanced rpm. That starts the server, and the slave, on a box which is not yet completely “stable”. The workaround is to add skip-slave-start to /etc/my.cnf, but that should not be necessary, especially as immediately after running mysql_upgrade you need to remove the option again.
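Put together, my upgrade sequence on a slave looks roughly like this (a sketch only; the package names are abbreviated as above and the init script name may differ on your system):

/etc/init.d/mysql stop
# first add skip-slave-start (and optionally bind-address, see the next paragraph) to /etc/my.cnf
rpm -e MySQL-server-enterprise-gpl
rpm -ivh MySQL-server-advanced-gpl...rpm    # the install starts the server again
mysql_upgrade
# finally remove skip-slave-start from /etc/my.cnf and start the slave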

Also, during the upgrade process it is a good idea to avoid external client access, so sometimes I also set bind-address to 127.0.0.1 for the duration of the upgrade. That may not keep clients away if they are running on the local server, but it helps in many cases.
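In /etc/my.cnf the temporary settings look like this (both lines are removed again once the upgrade is finished):

[mysqld]
skip-slave-start
bind-address = 127.0.0.1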

Conclusions

All this leads me to a simple conclusion: if you can, upgrade one box the slow way and then clone the other slaves from it. Cloning is simple and requires no thought, so it is a good idea.

The other conclusion, which builds on the first: if you can, build a new master. That is, clone a slave to make a new master, configure it to write its own binlogs, and then move the existing slaves underneath it. Once all the slaves are underneath the new master (which is left as the only slave of the original master), you can simply point clients at the new master instead of the old one. That keeps downtime to a minimum and avoids many problems.

As far as I know none of the points I have made above are in the current 5.1 upgrade documentation. I have opened quite a few tickets requesting that the documentation be improved, and I guess that will happen slowly. For many people these points may seem irrelevant to their situation, but for me they have caused a few problems. If they were documented, they might save you a few tears when you least expect problems.

Please tell the Canadians (and others) to learn about IBAN numbers

Friday, January 8th, 2010

A friend of mine in Canada did me a favour and sent me a wallpaper border that I could not get hold of in Spain. I had originally bought it locally, and my daughter had decided to rip a large strip off the wall. The local distributor told me the paper was out of stock and could not be obtained, so I tried to contact the manufacturer directly, but had no luck at all. Even after several emails and attempted telephone calls I got nowhere. Please, manufacturers, support your customers, or at least return emails or telephone calls, even if they come from some faraway place such as Spain.

So I was very happy to have a friend in Ontario who could help me out. Thanks Andrea.

However, of course, things are not that easy. Over the last few years in Europe we’ve become a little spoilt in the EURO zone: international transfers are now far easier and cheaper than they were. In fact, within the EURO zone and for EURO transfers the cost cannot be more than that of a national transfer. In some countries that’s free.

There’s a reasonably new “system” designed to facilitate this type of thing: IBAN numbers. It is not only used in the EURO zone but by several other countries, including the UK. Basically, with an IBAN number and the SWIFT code of the bank you can make the transfer quickly and with little chance of errors. Canada does not seem to know (much) about this standard yet, or at least my friend Andrea was not able to provide me with an IBAN number for her bank account, so my bank charges more for the international payment. Of course, the “Internet” form used to make the transfer was not really very complete, and it took me a while to work out exactly how it should be filled in. So much so that I went and looked at the online pages of other banks where I have accounts to see if they were any better. One did not even allow me to make international transfers with the accounts I had; the other had a similarly obscure web page for making such transfers. If these pages were used more often they might actually be a bit more user-friendly.

Come on guys (banks in general): VISA, MASTERCARD, AMEX, … can work out how to charge me for a payment in pretty much any currency when I’m abroad, and there’s no real hassle in doing so. They do charge for the transaction as part of the exchange rate they apply, but the cost is not ridiculous. So how can the banks get away with such stupidly high charges? We live in a world where we CAN do real-time stock trading and messaging, and the real costs of doing this are very low. I’ve worked in the wholesale banking industry, so I know that technology is not the block. FX is basically a money-making machine for retail clients, and there’s little way around it.

Perhaps I should have checked my PayPal account to see if they are quicker and cheaper? However, getting money in and out of PayPal seems to be slower than I would like.

So perhaps it would be nice if the card issuers like VISA and Mastercard offered their clients a way to make transfers between cardholders, leaving the banks to one side? Technically this is possible, but I can imagine that, as these organisations are owned by the banks, it might be difficult.

Anyway this is just a cry of frustration from someone who believes that in this day and age of high technology there are still lots of blocks to real international commerce and trade.

Draytek Vigor 2820n Setup with Jazztel and Annex M

Thursday, January 7th, 2010

After successfully configuring my Draytek to talk to Jazztel, I thought I’d try the Annex M option they offer, which improves the uplink bandwidth relative to the downlink speed.

This seems to have worked quite well as can be seen below. Here are the different synchronisation speeds.

Without Annex M:
Down: 14149 kbit/s, Up: 1235 kbit/s

With Annex M:
Down: 1346 kbit/s, Up: 2034 kbit/s

That’s quite a difference, especially if upload speed is important to you.

I’ve already been told that my video over Skype is better, and I have also noticed fewer problems with VoIP: the increase over the 320 kbit/s uplink I had with Telefónica is sufficient to avoid many of the dropouts I had experienced earlier.

Here is the Jazztel configuration I’m using:

[Screenshot: Jazztel configuration with Annex M]

Hopefully, if you are considering using this router, this information will be helpful to you.

The devil is in the detail when changing ISP

Tuesday, January 5th, 2010

This article is about two things: having to change ISP and then getting the configuration to work with the new one.

How to lose a customer.

I wrote earlier about wanting to change ISP from Telefónica because, as a retail client, they were unable to offer me an ADSL connection with an UPLINK speed faster than 320 kbit/s. The downlink speed of 10 Mbit/s was not the issue. I often work from home and need to transfer files to the office, so the uplink limit really hurt me. It also caused some issues when using VoIP, as the uplink would frequently get saturated.

The silly thing is that Telefónica is Spain’s no. 1 incumbent telco, and the technical support people told me that the company DOES have other options, it seems only contractable by businesses. Calling from a residential number, it was impossible even to speak to the business sales people: calls were redirected to the residential sales number, where I was offered what I already had and told that nothing else existed. So it was not even an issue of price.

That stance has lost Telefonica a customer, and gained Jazztel a new one.

Preparing for the change

So I ordered my line (switchover) and waited for the date of the change. There have been many horror stories about switching phone/ADSL providers in Spain, so I was cautious. Jazztel confirmed the change for 31/12/2009 and said “downtime” would be a maximum of 6 hours.

Early in the morning on the 31st I noticed the phone line no longer worked, but I was surprised to find the existing ADSL line still worked fine. A couple of hours later the router informed me that the ADSL link had gone down, so I went to adjust the router’s ADSL configuration.

Jazztel had originally sent me a letter with some configuration details and said they would send me an SMS with the details of my new static IP address.

So I adjusted the configuration, and a few minutes later the ADSL link was up, but not using the static IP address I had expected. I called support and they told me that the configuration was correct, and that they could only check things if I was using the router they had supplied. I was using the Draytek Vigor 2820n I had bought previously, so I had to switch over. When I did so, I saw them adjust the configuration (logging in remotely), and then I noticed that the configuration information they had sent me originally and the SMS I had received gave slightly different account names to use. I had not noticed this difference, and it broke the configuration on the Draytek. Also, I’m now using PPPoE, and it seems that in spite of providing incorrect credentials the system does not give an error. This would have been much easier to spot had I seen an authentication error.

So now the Draytek is working with the account information adjusted.

So be prepared to check the information you are given, and don’t expect the technical support people you talk to to actually check why things don’t work; expect them simply to press the “configure router” button on some web page they have, which of course only works on the routers they provide.

Anyway, I’m now working at 14 Mbit/s down and 1.2 Mbit/s up with Jazztel, compared with 10 Mbit/s down and 320 kbit/s up with Telefónica, so a good start, and I still have the Annex M option to try out to give me an even better uplink speed. Remember, this is on the same physical line as before.

Managing MySQL with MySQL Sandbox?

Sunday, January 3rd, 2010

Normally I like to use the OS’s package manager to manage the software on my system. This ensures that things are done consistently. When managing MySQL I’d also like to manage all my instances the same way. That makes life easier for me but also for my fellow DBAs and sysadmin colleagues.

I use CentOS and the rpm packages it provides, and these work quite well. However, the MySQL rpms do not allow me to manage single and multiple instances alike, and while mysqld runs as the mysql user, the instance management (stopping and starting instances, default access) needs to be done as root. If you want to run multiple instances you can also use mysqld_multi, but that is not the default setup.
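For completeness, mysqld_multi drives several instances from numbered groups in my.cnf, something like this (the ports and paths here are illustrative only):

[mysqld_multi]
mysqld     = /usr/bin/mysqld_safe
mysqladmin = /usr/bin/mysqladmin

[mysqld1]
port    = 3306
socket  = /var/lib/mysql/instance1/mysql.sock
datadir = /var/lib/mysql/instance1

[mysqld2]
port    = 3307
socket  = /var/lib/mysql/instance2/mysql.sock
datadir = /var/lib/mysql/instance2

Instances are then handled with mysqld_multi start 1, mysqld_multi stop 2, mysqld_multi report, and so on.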

So this is not ideal. While this may not be much of an issue if you manage a single server, if you manage tens or hundreds of servers things change somewhat, and you REALLY want to manage all instances consistently.

So I’ve been thinking about finding an alternative which would suit me better. What do I need?

  • The same treatment of a single instance and of multiple instances on one box.
  • Management as far as possible from a non-root user (mysql?).
  • Separation of instances into separate “areas”, along the lines of Oracle’s Optimal Flexible Architecture (OFA).
  • Automatic startup on server boot, and shutdown, of “SELECTED” instances only.
  • Ability to support different versions of MySQL running at the same time.
  • A straightforward mechanism to upgrade a server or switch the version of MySQL used to run a specific instance.

I was thinking of writing my own scripts, but one solution which looks potentially close to my ideal would be to use Giuseppe Maxia’s MySQL Sandbox. I think this project was born with a completely different goal (enabling you to get a different MySQL version up and running quickly for testing), but it actually seems to solve most of the things I want for managing production instances.

If you’ve not used it yet, take a look. It works pretty well and is easy to set up.
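If I remember the basics correctly, getting an instance going from a tarball is as simple as this (with a default installation; the sandbox directory name is derived from the version):

make_sandbox mysql-5.1.42-linux-x86_64.tar.gz
cd ~/sandboxes/msb_5_1_42
./start
./use       # opens a mysql client session on this instance
./stop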

So what can it do at the moment?

  • It works from standard MySQL tarballs.
  • It is pretty much platform independent, requiring only perl.
  • It allows you to install different versions of MySQL on the same box.
  • It runs everything as a non-root user.
  • It stops and starts one instance or all instances easily.

For my needs it seems to be missing:

  • The ability to indicate which instances should start/stop, and a way to trigger this from init during startup/shutdown.
  • The way you access the instances is quite different from a normal MySQL installation. That has confused me quite a bit: as I use the Sandbox infrequently, I have to go back and work out how to “get in”, how to start or stop an instance, and how to check whether it is running.
  • I’m not entirely sure whether it is easy to adjust the paths of the different MySQL “areas” to fit in with my usage of MySQL, or something similar to OFA. These things are probably pretty easy to fix.
  • The name MySQL Sandbox sounds rather inappropriate if you really use it to manage production instances. That’s easy to change.

Ideal MySQL Layout

So how would I like the layout to look?

Tarball binaries should probably be located somewhere like /path/to/some_binary_name, which allows us to distinguish between different architectures (i686, x86_64) and versions (5.0.89, 5.1.42, 5.5.0-m2). That’s already solved by the current tarballs, which have a prefix such as mysql-5.5.0-m2-osx10.5-x86.

Each instance’s data should probably be located along the lines of:

  • /path/to/instance_name/data (for MySQL data files and my.cnf)
  • /path/to/instance_name/tmp (for MySQL tmpdir)
  • /path/to/instance_name/log (for MySQL logfiles: binlog and relaylog)

This allows the locations under /path/to/ to be mounted on different file systems for performance (if needed). In my case /path/to is simply /mysql, but it does not need to be that rigid. OFA is slightly different, but the idea is the same.
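With /mysql as the base, the layout for two instances “a” and “b” might look like this (a sketch of my own convention, not something any tool enforces):

/mysql/mysql-5.0.89-linux-x86_64/    # tarball binaries, one directory per version
/mysql/mysql-5.1.42-linux-x86_64/
/mysql/instance_a/data/              # data files and my.cnf
/mysql/instance_a/tmp/
/mysql/instance_a/log/
/mysql/instance_b/data/
/mysql/instance_b/tmp/
/mysql/instance_b/log/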

Accessing the Instances

How do I access the instances?

Currently I access the instances using the following mechanism: each instance has a name, and that name is used as a suffix to the normal MySQL command.

So instance “a” is managed and accessed as follows:

  • mysql_a: access to the mysql command line
  • mysqldump_a: access to mysqldump
  • mysqladmin_a: access to mysqladmin
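These suffixed commands are just thin wrappers. A minimal sketch for instance “a”, assuming the /mysql layout above with the socket kept under the instance’s data directory:

#!/bin/sh
# mysql_a - run the mysql client against instance "a" (sketch only)
exec /mysql/mysql-5.1.42-linux-x86_64/bin/mysql \
    --defaults-file=/mysql/instance_a/data/my.cnf \
    --socket=/mysql/instance_a/data/mysql.sock "$@"

mysqldump_a and mysqladmin_a would be identical apart from the binary name.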

Oracle does something slightly different: it sets up the environment to determine which instance should be accessed, and then the normal command line utilities talk to the right instance.

Either way is fine, but it should be very clear which instance you are attempting to access, to avoid what could be costly mistakes.

It would be nice if the sandbox provided a common directory which could be included in $PATH so that all instances could be accessed from one place.

I don’t currently have a managed way to start/stop the instances. A file like /etc/oratab would do, together with commands like:

mysql_init start      # start all 'marked' instances
mysql_init stop       # stop all 'marked' instances
mysql_init_a start    # start instance a
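The control file could mirror /etc/oratab. A hypothetical /etc/mysqltab (both the name and the format are my invention) might contain:

# instance_name:base_directory:start_on_boot(Y|N)
a:/mysql/instance_a:Y
b:/mysql/instance_b:N

mysql_init start would then walk this file and start every instance flagged Y; an init script run at boot would call exactly the same thing.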

The final thing I want to be able to do is upgrade an instance. Normally this involves:

  • Downloading the new version.
  • Stopping the instance to be upgraded.
  • Adjusting links/directories.
  • Performing various “admin tasks” to upgrade the instance.
  • Starting it normally again.

All that MySQL Sandbox would need to do would be to perform the third step: adjusting the links/directories.
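With the layout above, that step could be as simple as repointing a per-instance symlink at the new binaries (a sketch; the “product” link name is my invention):

mysql_init_a stop
ln -sfn /mysql/mysql-5.1.42-linux-x86_64 /mysql/instance_a/product
mysql_init_a start
/mysql/instance_a/product/bin/mysql_upgrade --socket=/mysql/instance_a/data/mysql.sock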

So these are some of my ideas. I need to look at the MySQL Sandbox code in more detail to see if it will fit my needs, but I expect it should not be too difficult to adjust if needed.

Perhaps if you use other operating systems or package managers and manage one or more MySQL instances on multiple servers, these issues are less of a problem. I have not seen others commenting on the limitations of the current MySQL rpms, and the problem is not really with rpm itself but with the expected usage pattern of a single instance on a single server.

So is there a better way to do this or might MySQL Sandbox be the right solution to achieve my goal?

Managing MySQL Grants

Friday, January 1st, 2010

MySQL has an unusual grants system that allows a user to be specified by host name, IP address or network address. That is, you identify a user as 'some_user'@'host.host.name', 'some_user'@'1.2.3.4' or 'some_user'@'10.3.%'.

That is quite a nice facility, but using it is rather tricky. It potentially provides a lot more security, as it allows you to specify that different types of database user may only perform certain actions from certain types of host. So even if you know the user name and password, you may still have trouble getting into a mysqld server. That’s good.

However, this flexibility comes at a price. There are no tools to help you manage it, and I have often seen people resort to using the simplest type of grant, like some_user@'%' or some_user@'10.%'.

I recently wrote a small template script which allows me to configure similar grants for the same user on different networks in a way which makes the process more manageable. This is only a partial solution to the problem, but it does make it possible to configure, in a reasonably simple fashion, grants for a user whose access comes from one of several locations.

An Example

An example might make this clearer.

A simple grant for a web user connecting from webserver1 might be:

GRANT USAGE ON *.* TO 'web'@'webserver1.example.com' IDENTIFIED BY 'some password';
GRANT SELECT ON web_db.* TO 'web'@'webserver1.example.com';

If we want to duplicate this for three web servers, we need two more sets of grants:

GRANT USAGE ON *.* TO 'web'@'webserver2.example.com' IDENTIFIED BY 'some password';
GRANT SELECT ON web_db.* TO 'web'@'webserver2.example.com';
GRANT USAGE ON *.* TO 'web'@'webserver3.example.com' IDENTIFIED BY 'some password';
GRANT SELECT ON web_db.* TO 'web'@'webserver3.example.com';

If this is done for a larger number of locations, the grants become unmanageable (for a human to follow easily).

So my script allows me to do:

-- VARIABLE: %WEB_USER% = 'web'@'webserver1.example.com'
-- VARIABLE: %WEB_USER% = 'web'@'webserver2.example.com'
-- VARIABLE: %WEB_USER% = 'web'@'webserver3.example.com'
-- Grants for %WEB_USER%
GRANT USAGE ON *.* TO %WEB_USER% IDENTIFIED BY 'some password';
GRANT SELECT ON web_db.* TO %WEB_USER%;

Adding more “web servers” just requires adding a single extra line.

With a simple case like this, where there are only two grant lines per user, it is not so complicated, but if we had 20 or 30 grants for each user then it would simplify things a huge amount.
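To show the idea rather than the actual code, the expansion logic amounts to something like this sketch (a re-implementation from memory, assuming a single %VARIABLE% name per template; the real script is linked below):

#!/bin/sh
# create_grants_from_template (sketch): expand a grants template
TEMPLATE="$1"
grep '^-- VARIABLE:' "$TEMPLATE"    # echo the variable definitions first
grep '^-- VARIABLE:' "$TEMPLATE" | while read -r _ _ name _ value; do
    # emit the template body once per value, with the placeholder substituted
    grep -v '^-- VARIABLE:' "$TEMPLATE" | sed "s|$name|$value|g"
done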

The script can be found here and generates the following output:

$ create_grants_from_template grants.template.sql
-- VARIABLE: %WEB_USER% = 'web'@'webserver1.example.com'
-- VARIABLE: %WEB_USER% = 'web'@'webserver2.example.com'
-- VARIABLE: %WEB_USER% = 'web'@'webserver3.example.com'
-- Grants for 'web'@'webserver1.example.com'
GRANT USAGE ON *.* TO 'web'@'webserver1.example.com' IDENTIFIED BY 'some password';
GRANT SELECT ON web_db.* TO 'web'@'webserver1.example.com';
-- Grants for 'web'@'webserver2.example.com'
GRANT USAGE ON *.* TO 'web'@'webserver2.example.com' IDENTIFIED BY 'some password';
GRANT SELECT ON web_db.* TO 'web'@'webserver2.example.com';
-- Grants for 'web'@'webserver3.example.com'
GRANT USAGE ON *.* TO 'web'@'webserver3.example.com' IDENTIFIED BY 'some password';
GRANT SELECT ON web_db.* TO 'web'@'webserver3.example.com';
$

This output can be piped directly into the mysql command line client.
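For example, assuming a login with sufficient privileges to create the grants:

$ create_grants_from_template grants.template.sql | mysql -u root -p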

Further Improvements

While this might be helpful, it does not really solve the problem of managing grants, especially if you have a large number of database users and are managing quite a few database servers. I’ve been thinking about what I would like to do to solve this problem, and while the ideas are not yet fully formed, I think that something like the /etc/sudoers file would be a good base to work from.

That is, use a file which defines various things:

  • database users and their passwords
  • database servers and database server groups
  • individual grants and groups of grants

and combine these so that a script can create the specific rights for a specific database server, with the results differing from one server to another.
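To make that concrete, such a definitions file might look something like this (a purely hypothetical format, invented here only to illustrate combining users, server groups and grant groups):

# database users and their passwords
USER    web      PASSWORD 'some password'
# database server groups
SERVERS webfarm = webserver1.example.com webserver2.example.com webserver3.example.com
# groups of grants
GRANTS  web_ro  = GRANT SELECT ON web_db.*
# give web_ro to user web when connecting from any host in webfarm
APPLY   web FROM webfarm USING web_ro

A generator script would then expand the APPLY lines into concrete GRANT statements, producing output tailored to each database server it is run for.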