Managing MySQL with MySQL Sandbox?

Normally I like to use the OS’s package manager to manage the software on my system. This ensures that things are done consistently. When managing MySQL I’d also like to manage all my instances the same way. That makes life easier not only for me but also for my fellow DBAs and sysadmin colleagues.

I use CentOS and the rpm packages it provides, and these work quite well. However, the MySQL rpms do not treat single and multiple instances alike: while mysqld runs as the mysql user, instance management (stopping and starting instances, or default access) needs to be done as root. If you want to run multiple instances you can also use mysqld_multi, but that’s not the default setup.

So this is not ideal. This may not be much of an issue if you manage a single server, but if you manage tens or hundreds things change somewhat, and you REALLY want to manage all instances consistently.

So I’ve been thinking about finding an alternative which would suit me better. What do I need?

  • The same treatment of a single instance or multiples on one box.
  • Management as much as possible from a non-root user (mysql?).
  • Separation of instances into separate “areas”, along the lines of Oracle’s Optimal Flexible Architecture.
  • Automatic start up on server boot, and shutdown of “SELECTED” instances.
  • Ability to support different versions of MySQL running at the same time.
  • A straightforward mechanism to upgrade a server or switch the version of MySQL that is used to manage a specific instance.

I was thinking of writing my own scripts but one solution which looks potentially close enough to my ideal solution would be to use Giuseppe Maxia’s MySQL Sandbox. I think this project was born with a completely different goal (enable you to quickly get a different MySQL version up and running for testing), but it actually seems to solve most of the things I want for managing production instances.

If you’ve not used it yet, take a look. It works pretty well and is easy to set up.

So what can it do at the moment?

  • It works from standard MySQL tarballs.
  • It’s pretty much platform independent, requiring only perl.
  • It allows you to install different versions of mysql on the same box.
  • It runs everything as a non-root user.
  • It can stop and start either one or all instances easily.

For my needs it seems to be missing:

  • The ability to indicate which instances should start / stop and a way to trigger this from init during startup / shutdown.
  • The way to access the instances is quite different from a normal mysql installation. That has confused me quite a bit, and as I use the Sandbox infrequently I have to go back and work out how to “get in” and also how to start or stop an instance, or check if it’s running.
  • I’m not entirely sure if it’s easy to adjust the paths for the different mysql “areas” to fit in with my usage of MySQL or something similar to OFA. These things are probably pretty easy to fix.
  • The name MySQL Sandbox sounds rather inappropriate if you really use it to manage mysql instances. That’s easy to change.

Ideal MySQL Layout

So how would I like the layout to look?

Tarball binaries should probably be located somewhere like /path/to/some_binary_name, which allows us to distinguish between different architectures (i686, x86_64), and versions (5.0.89, 5.1.42, 5.5.0-m2). That’s already solved by the current tarballs which have a prefix such as mysql-5.5.0-m2-osx10.5-x86.

Each instance’s data should probably be located along the lines of:

  • /path/to/instance_name/data (for MySQL data files and my.cnf)
  • /path/to/instance_name/tmp (for MySQL tmpdir)
  • /path/to/instance_name/log (for MySQL logfiles: binlog and relay log)

This allows the locations under /path/to/ to be mounted on different file systems for performance (if needed). In my case /path/to is simply /mysql, but that does not need to be that rigid. OFA is slightly different but the idea is the same.
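The layout above can be sketched in a few lines of shell. This is only an illustration: the base directory (here a temporary path standing in for /mysql) and the instance name “a” are my assumptions, not anything MySQL Sandbox creates for you.

```shell
#!/bin/sh
# Sketch: create the per-instance directory layout described above.
# BASE would be /mysql (or a mount point) in production; "a" is an
# illustrative instance name.
BASE=${BASE:-/tmp/mysql-demo}
INSTANCE=a

for dir in data tmp log; do
    mkdir -p "$BASE/$INSTANCE/$dir"
done

ls -d "$BASE/$INSTANCE"/*
```

Because each subdirectory is a separate path, any of them can later be remounted on its own file system without touching the others.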

Accessing the Instances

How do I access the instances?

Currently I access the instances using the following mechanism: each instance has a name, and that name is used as a suffix on the normal mysql commands.

So instance “a” is “managed”/”accessed” as follows:

  • mysql_a: access to the mysql command line
  • mysqldump_a: access to mysqldump
  • mysqladmin_a: access to mysqladmin

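Such suffixed commands are easy to generate as small wrapper scripts. The sketch below assumes each instance listens on its own socket under the layout discussed earlier; the socket path and the bin directory are my assumptions, not how MySQL Sandbox actually names things.

```shell
#!/bin/sh
# Sketch: generate per-instance wrappers such as mysql_a, mysqldump_a,
# mysqladmin_a. Paths are illustrative assumptions.
BASE=${BASE:-/tmp/mysql-demo}
BINDIR=${BINDIR:-/tmp/mysql-demo/bin}
INSTANCE=a

mkdir -p "$BINDIR"
for cmd in mysql mysqldump mysqladmin; do
    # Each wrapper just points the real tool at this instance's socket.
    cat > "$BINDIR/${cmd}_${INSTANCE}" <<EOF
#!/bin/sh
exec $cmd --socket="$BASE/$INSTANCE/tmp/mysql.sock" "\$@"
EOF
    chmod +x "$BINDIR/${cmd}_${INSTANCE}"
done
```

Putting $BINDIR on $PATH then gives the single common access point mentioned below: every instance is reachable from one place, and the suffix makes it obvious which one you are talking to.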
Oracle does something slightly different: it sets up the environment to determine which instance is to be accessed, and then the normal command line utilities talk to the right instance.

Either way is fine but it should be very clear which instance you are attempting to access to avoid what could be costly mistakes.

It would be nice if the sandbox provided a common directory which could be included in $PATH so that all instances could be accessed from one place.

I don’t currently have a managed way to start / stop the instances. A file like /etc/oratab would do and then a command like:

mysql_init start # start all ‘marked’ instances
mysql_init stop # stop all ‘marked’ instances
mysql_init_a start # start instance a

The final thing I want to be able to do is upgrade an instance. Normally this involves:

  1. Downloading the new version.
  2. Stopping the instance to be upgraded.
  3. Adjusting links/directories.
  4. Performing various “admin tasks” to upgrade the instance.
  5. Starting it normally again.

All that MySQL Sandbox would need to do is perform step 3 (adjusting the links/directories).
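The link adjustment itself is a one-liner if each instance points at its binaries through a symlink. The layout below (a binaries/ directory of unpacked tarballs and a per-instance “current” link) is my assumption, not Sandbox’s actual scheme.

```shell
#!/bin/sh
# Sketch: switch the version an instance uses by repointing a symlink.
# Directory names are illustrative assumptions.
BASE=${BASE:-/tmp/mysql-demo}
mkdir -p "$BASE/binaries/mysql-5.0.89" "$BASE/binaries/mysql-5.1.42"

# Instance 'a' currently runs 5.0.89 ...
ln -sfn "$BASE/binaries/mysql-5.0.89" "$BASE/a_current"
# ... stop the instance, then repoint the link at the new version:
ln -sfn "$BASE/binaries/mysql-5.1.42" "$BASE/a_current"

readlink "$BASE/a_current"
```

The -n flag matters here: without it, ln would follow the existing symlink and create the new link inside the old target directory instead of replacing the link.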

So these are some of my ideas. I need to look at the MySQL Sandbox code in more detail to see if it will fit my needs but I expect it should not be too difficult to adjust it if needed.

Perhaps if you use other operating systems or package managers and manage one or more MySQL instances on multiple servers these issues are less of a problem. I have not seen others commenting on the limitations of the current MySQL rpms, and the problem is not really with rpm itself but with its expected usage: a single instance on a single server.

So is there a better way to do this or might MySQL Sandbox be the right solution to achieve my goal?

Published by

Simon J Mudd

Born in England, I now live in Spain, but spent a few years living in the Netherlands. I previously worked in banking (financial markets) both in IT and as a broker, but IT has always had a stronger influence. Now working as a Senior Database Administrator. Other interests include photography and travel. Simon is married, with two children, and lives in Madrid.

4 thoughts on “Managing MySQL with MySQL Sandbox?”

  1. About the things that you say seem to be missing, please look at the reference manual, especially the sbtool and the scripts created by the sandbox installer. Some of what you need is already available.
    The ability to start/stop a sandbox at system startup/shutdown has never been part of the design.

    MySQL::Sandbox does already provide a common directory ($HOME/sandboxes)

    The single sandbox installer has more than 20 options that you can fiddle with to achieve optimal installations. The group sandbox installers can do the same fine tuning by means of environment variables that invoke the above mentioned options.

    Regarding the name, it is appropriate for the usage for which it was created.



  2. Hi Giuseppe,

    I’ll take a longer look at the manual to see if I can do what I want.

    Regarding the init scripts, both Oracle and Sybase have root-owned scripts which su to the non-root user that does the real work of starting/stopping the instances. This should not be too hard to replicate for MySQL.

    The thing that is important to me is some way to select which instances will be started on boot. On some development servers I have over 10 mysql instances running, and some of those may not be fully operational or I may want to keep them down. Having a way to say “start all selected instances” would be nice. I’m not sure how that fits in with your current scheme.

    In any case the MySQL Sandbox does seem to have the most generic and flexible way to manage MySQL servers and is easy to use.

  3. Hi Simon,
    I have some ideas to extend MySQL::Sandbox into a general purpose install-and-deploy application. I will propose some design plans in my blog quite soon.

  4. Hi Simon,

    In myEnv v0.2 this should be easy to implement. If you want to try I can send you a preview. But I have to implement it first.

