More GTID troubles in MySQL 5.6

A colleague, Kris, recently commented on a find I made on 5.6.7-rc1 which, while not the end of the world, does seem to severely limit the use of the new GTID features, or at the very least makes them a pain to use.

Today I found another related issue while trying to check the syntax of one of the MySQL commands. That’s now reported as bug#67073.

All of these are edge cases of a sort, but they do make 5.6 look a little rough around the edges, when the rest of the product is actually really good and a great improvement over the MySQL 5.5 GA version that everyone uses. That’s really a shame.

I did report the original problem as SR 3-6270525721: MySQL 5.6.7-rc1, grants, replication and GTID cause problems, but of course that’s not visible to the general MySQL community. While this specific issue with the help tables has been reported on bugs.mysql.com, I get the feeling that other users of MySQL may report issues which are never visible to the community as a whole, and that means that solutions, workarounds, or even the possibility of saying “I have that problem too” are lost.

For many issues I’ve chosen to file 2 tickets, one on MySQL’s http://bugs.mysql.com and another internal Oracle ticket, but this is really a nuisance and a time waster, and in the end I don’t believe it’s helpful to anyone. I’m sure I’m not the only person who suffers from this. I also note that when browsing what’s changed in the latest release, such as http://dev.mysql.com/doc/refman/5.6/en/news-5-6-7.html, you see references to internal bug numbers which many people can not even access. I am able to see most of these tickets but many people are not, so the problem and its resolution become rather opaque. That’s counterproductive for everyone. Improving this reporting of changes would help us all.

Oracle, please make this work in an easier manner so that issues with MySQL software can be more visible (if the reporter chooses to make them so) and information on changes and bug fixes is more complete, as this will save time and frustration for everyone.

Note: Opinions expressed above are my own.

Checking /proc/pid/numa_maps can be dangerous for mysql client connections

I’ve blogged before about the way to use numactl to start up mysqld, and thus to try to better spread the memory usage on larger memory servers. This came from an article by Jeremy Cole and works fine. I recently had some issues with mysqld seeming to run out of memory on a box which appeared to have plenty free, so it seemed like a good idea to adapt a minutely collector script I run to include the numa_maps output, so that I could see if the failed memory allocations were related to this. So far so good.

Many of the clients that connect to the database servers I manage have a very short connect timeout, typically 2 seconds. In a normal network and under normal conditions this is more than enough to allow for successful operation and if the connect does not work in that time it’s an indication of some underlying issue, whether that be load on the mysqld server, or something else.

The change I implemented to collect the numa_maps information wasn’t expected to cause any issues; typically quite a lot of information is cat’d out of /proc/somewhere to collect configuration or status information.
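
For illustration, the addition amounted to little more than this (a sketch rather than the original script; the paths and log naming are hypothetical):

# Append the NUMA allocation map of the running mysqld to a log file.
# Reading numa_maps forces the kernel to walk the whole address space
# of the target process, stalling it while the walk happens.
pid=$(pidof mysqld)
cat /proc/$pid/numa_maps >> /var/log/collector/numa_maps.log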

However, after implementing this change I suddenly noticed more mysql client connection errors than before, in fact a significant increase. It wasn’t immediately apparent what the cause was, until it was noticed that these errors only occurred a couple of seconds after the change in minute, at hh:mm:02, hh:mm:03 etc. Then it dawned on me that this was indeed due to the change in the collection script looking at /proc/<mysqld_pid>/numa_maps. Disabling this functionality again removed all issues. This was on servers with 192 GB of RAM.

The information provided by /proc is useful, and it would be even better if it could be collected in a way which doesn’t block the process that’s being “investigated”. As such I have filed a bug report with RedHat, though really this is a kernel “bug”: behaviour which, while it may be known, in this particular case is very unsatisfactory.

So whilst most people do not configure such a short mysql connection timeout, if you do have a setting like this please be aware of the consequences of running numa-maps-summary.pl, or of looking at /proc/<mysqld_pid>/numa_maps directly. This was not discussed by Jeremy and I’d guess he was unaware of it, or did not expect people to run this type of script as frequently as I was. Database memory sizes keep increasing, so while this may not have been noticeable on smaller servers it can now become an issue.

Initial Reactions to MySQL 5.6

New versions of MySQL are always interesting to try out. Often they have features which I may have asked for myself, so it’s satisfying to see them eventually appear on a system I use. Often other new features make life easier for the DBA. Finally, we hope overall performance will improve and managing the server(s) will become easier.

So I had a system which needs to make heavy writes, and performance was a problem, even when writing to SSDs. Checkpointing seemed to be the big issue, and the combined ib_logfile size in MySQL 5.5 is limited to 4 GB. That seems a lot, but once MySQL starts to fill these files (and this happens at ~70% of the total, I believe), checkpointing kicks in heavily and slows things down. So the main reason for trying out MySQL 5.6 was to see how things performed with larger ib_logfiles. (Yes, MariaDB 5.5 can do this too.)

Things improved a lot for my specific workload, which was great news, but one thing I noticed was that the initial install/setup time of a system with these large files increased hugely. (I was using 2 x 16000M files.) That was noticeable even on a box with SSDs; on a box with HDs it would be horrible.
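
For reference, that corresponds to a my.cnf along these lines (a sketch of the values from my tests; tune them to your own I/O capacity):

[mysqld]
# MySQL 5.6 lifts the 4 GB combined limit: 2 x 16000M = ~31 GB of redo space
innodb_log_files_in_group = 2
innodb_log_file_size      = 16000M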

mysql_install_db now uses InnoDB to set up some system tables and restarts the mysqld binary a couple of times. When you also have a large innodb_buffer_pool_size (I was trying this on a box with 192GB of RAM), each startup of mysqld took quite a while.

So my initial thought is that the bootstrapping probably does not need to use the normal innodb_buffer_pool_size setting (but should perhaps override it with the minimal size needed).

Managing the size of the ib_logfiles is also something that requires restarting mysqld. Given the sort of experimenting that I was doing it would be nice to be able to configure this dynamically. I can see many issues with changing the sizes of the existing files, but what seems easier to implement would be the ability to increase or decrease the number of files used (removing the older, higher-numbered files once they are “used”/checkpointed), and that would give the DBA a lot more freedom.

I clone servers quite a bit, and the extra 32 GB of redo log files is a pain and time-consuming to clone/copy. Being able to reduce the number of files and checkpoint prior to shutting the server down for cloning, then re-enabling the normal value afterwards, would save a lot of cloning time. So it would be a handy feature.

I also tried playing with the new GTID features. They look great, except for one thing: they expect masters and slaves to have binlogs enabled, so that the GTID values are kept locally and are available on startup. I manage several boxes where, for performance reasons, the slaves do not have binlogs enabled; if a slave dies it gets recloned. Adding back the requirement for binlogs (and log_slave_updates) on a slave to enable this feature seems wrong. I can see the need for the different GTID values to be kept somewhere, but I don’t see the need to keep all the binlog information, at least on a slave. Given that the ratio of slaves to masters can be quite high, that can be an issue. If necessary, write this information somewhere else; why not in an InnoDB table, so that on server recovery you have information consistent with the rest of the database, something that might not happen with the binlogs?

Talking of binlogs, one new feature I’m using which I think is good is binlog_row_image = minimal. This reduces the size of the binlogs and thus I/O. I’m missing the next step, which would be to compress those binlog events and reduce binlog size further. That may not be an issue on many systems, but several servers I manage write over 100 GB of binlogs a day. Reducing this further by being able to compress the binlogs would be nice, and having better control of expire_logs_days (as the daily granularity can be too coarse in many cases for me) would help too.
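
For reference, the settings in question look like this (binlog_row_image only applies to row-based logging, hence the binlog_format line):

[mysqld]
binlog_format    = ROW
binlog_row_image = minimal  # log key columns plus changed columns, not full rows
expire_logs_days = 2        # whole days only; finer granularity would be welcome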

NOTE: In MySQL 5.6.5 with gtid_mode = ON, mysql_install_db crashes! Bug reported to Oracle, and I guess it will get fixed soon.

I have seen a few other crashes in 5.6.4 and 5.6.5; I can’t provide links, unfortunately, as they’re not public. They are still not resolved. I’d like to make the details of these crashes public, as others may come across them, or may have come across them before me, but these issues are not visible on http://bugs.mysql.com. It would be nice if they were, but that requires me to duplicate information, which is a pain. Oracle, please make this easier. It is good for us all.

So with the little testing I’ve done so far MySQL 5.6 looks good. I need to spend more time with it and investigate the new features, many of which will make life easier.

When it finally goes GA I will have to spend a lot more time seeing how it behaves, but I believe there are still a few issues which need resolving prior to that happening.

Make InnoDB redo log file size configuration more flexible

Over the last few months, on different systems, I have been modifying the ib_logfiles and their sizes, increasing them due to the increase in load the systems have been experiencing. These files contain information which has been committed but not yet written to the ibdata files, or to the individual .ibd files if using innodb_file_per_table. They are used on recovery, if the server shuts down unexpectedly, to get the system back into a consistent state again.

The checkpointing in MySQL is well documented, but I was surprised just how much the throughput of heavy write applications can change if the sizes of these files are changed. It may seem obvious, but given how disruptive it can be to shut down a system, change these values and start it up again, this process is not done frequently. These files help improve performance because changes are written linearly into them, compared to the changes in the data files which are written in pages, probably scattered randomly over the disk; the latter process is slower.

I come from a Sybase background where “a database” consists of “data” and “log” files. The data files correspond with the ibdataX or .ibd files and the log files correspond with the “ib_log_fileX” files, the major difference being that (at least when I looked at this in Sybase 12.5) the log files were considered part of the database and once added you also could not reduce their size. Increasing was easy but then you couldn’t free up the space again.

So MySQL and Sybase are similar, but MySQL’s implementation keeps the ib_logfileX files independent of the database files. It struck me that it would be very useful if we could dynamically tailor the two settings that InnoDB provides: innodb_log_file_size and innodb_log_files_in_group. Since MySQL already automatically creates missing files (typically when you start MySQL for the first time, or if you remove the files after changing the innodb_log_file_size parameter), this sounds reasonably easy. Make innodb_log_files_in_group larger, and when you get past the last available file you build a new one. If the number is decreased then you simply go back to the first one again, ignoring the later files. Changing the size of the files could be done too, but is perhaps harder. In both cases you need to be sure this works when recovering from a crash, which is a critical moment.
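
For contrast, this is roughly the manual procedure such a dynamic change would replace (the usual pre-5.6 steps; the datadir path is an assumption):

mysql -e 'SET GLOBAL innodb_fast_shutdown = 0'  # force a full clean shutdown
/etc/init.d/mysql stop
# edit innodb_log_file_size / innodb_log_files_in_group in /etc/my.cnf
mv /var/lib/mysql/ib_logfile* /tmp/             # keep them until mysqld starts cleanly
/etc/init.d/mysql start                         # InnoDB recreates the log files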

A change like this would help people see whether changing the size improves performance, and certainly for systems where taking downtime is expensive and difficult this would be a welcome move, as initial sizing guidelines may prove wrong over time and a dynamic change would resolve the problem quickly. I recently took down some old MySQL systems which had been running uninterrupted for just over a year; I’m sure I would have adjusted these settings earlier had I been able to, and that would have been very convenient.

MySQL 5.5.21 now released – looking forward to trying it out

I see that MySQL 5.5.21 has just been released. This sounds interesting. I’m mainly running 5.5.16 which has been broadly stable, but I have been caught by a few important issues which 5.5.21 fixes according to the change list:

These, together with a few earlier fixes after 5.5.16, are certainly interesting to me, so I’m eager to try out 5.5.21 and see how it fares.

Better Controlling MySQL Memory Usage

MySQL, like a lot of other software, has many knobs you can tweak. Most of these knobs may affect behaviour, but more importantly most affect the memory usage of the server, so getting these settings right is very important.

Most of MySQL’s memory is really used just as a cache, in one form or another, for information that otherwise lives on disk. So ensuring you have as large a cache as possible is important. However, making these memory sizes too large will trigger the server to start swapping, and can possibly cause it to crash or cause the kernel to kill the process when it runs out of memory. That’s something we want to avoid.

Certain settings affect memory allocation on a per connection/thread basis, bounded by thread_cache_size and max_connections. If you configure for the worst case (max_connections) you may end up not actually using all the memory you have available, memory which could otherwise be used for other purposes.

Recently a server I managed was configured incorrectly with a large sort_buffer_size (raised from 4M to 256M) and a larger read_buffer_size (raised from 4M to 20M). The change in configuration on first glance looks quite innocent, and with no-one noticing that these are per-connection settings it got rolled out. max_connections on this server was set to 1000, while normally there were ~40 connections, of which only a few were active. The mysqld memory footprint on startup looked fine; in fact under normal usage it also worked fine. However, spiky load suddenly changed this nice behaviour: as configured, mysqld ramped up the thread count and hence the memory usage, resulting in swapping and finally server death…
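
A back-of-envelope check would have caught this: multiply max_connections by the sum of the per-connection buffers (a deliberate simplification, since these buffers are allocated lazily and some may be allocated more than once per query):

SELECT @@max_connections *
       (@@sort_buffer_size + @@read_buffer_size +
        @@read_rnd_buffer_size + @@join_buffer_size)
       / 1024 / 1024 / 1024 AS worst_case_gb;

With 1000 connections, a 256M sort buffer and a 20M read buffer alone give roughly 270 GB in the worst case, however healthy the footprint looks at startup.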

The fault of course was mine for not noticing, but this has been an issue with MySQL forever. Most other RDBMSes manage memory slightly differently: you define the maximum amount of memory you want to use, this is optionally locked into memory, and further requests for different buffers are taken from this global pool. That is much safer and avoids unexpected swapping or memory over-allocation, but it does raise the question of what happens when a memory request can not be honoured.

Initially I would expect two different behaviours: either

  1. kill a thread whose memory can not be allocated, or
  2. wait a certain time to allocate that memory, after which the thread gets killed anyway.

Option (2) is probably saner, and possibly some sort of deadlock detection can kick in if all threads are waiting for memory, perhaps killing the youngest thread, or the thread which has done the least work, first. Possibly there are other, better ways of doing this?

I can imagine that changing MySQL’s current behaviour to do something like this could be quite hard, especially as ideally the engines would also use the same memory management mechanisms, but I see this as being a good thing which would make MySQL more robust, especially under load, which is after all what counts. Of course this will not happen in today’s 5.5 GA version, or in tomorrow’s 5.6 version, which is probably likely to appear some time this year; that’s a major change. It would be nice if Oracle looked at this for 5.7 as a way of ensuring that when resource usage does come under pressure MySQL does not fall over, but attempts to use the allocated resources as best as possible.

In the meantime what would help would be:

  • better documentation, so we can see clearly how all MySQL memory is allocated. There are several web pages suggesting ways to calculate this, but certainly no definitive guide.
  • The InnoDB engine’s documentation talks about memory usage, and most people think that innodb_buffer_pool_size is the main setting, yet read further and there’s talk of an overhead of perhaps 1/8th. I have recently been playing with innodb_buffer_pool_instances settings > 1 (using values in the range of 20-40) and am inclined to think that this increases that overhead somewhat more, yet there’s no documentation on this to say whether my guess is right or not. Please, InnoDB developers, improve your documentation, if only to prove me wrong.
  • Ideally, some tools to tell you if your server is possibly misconfigured. Coming from a Sybase environment I’d be tempted to suggest a stored procedure in the mysql database which can tell you total memory usage and how it is broken down, as doing this with a single SELECT is going to be tricky.

Then once that is done consider adding some extra variable to enable total memory usage to be controlled. I made a feature request for this at http://bugs.mysql.com/?id=64108. If you think this feature might interest you please let Oracle know.

Why online disk storage is not the same as using tapes

A couple of years ago I posted about an unlucky encounter with my Thecus N5200 Pro. On the 1st of January at 06:15 I had a New Year’s present: a disk died on me. The Thecus duly notified me by starting to beep.

I still have my RAID-6 setup, so there was no real problem: I have 2 redundant disks. So I went and bought a replacement and fitted it, and the Thecus duly started rebuilding the RAID array, with an estimated time to finish of about 1 day (5x1TB disks, so a ~3TB array).

Disk prices have increased significantly since I originally bought my array, but buying a single disk was not a problem. During the time that the array was rebuilding, or shortly afterwards, a second disk failed, just as one had 2 years ago, when I ended up losing my array because the RAID-5 setup did not have any further redundant disks. This time however there was no problem: RAID-6 saved me and the photos and backup information I had on my Thecus, so I was happy, and I ran out and bought another disk. Fitting it triggered another array rebuild, which completed successfully.

That led me to think. A lot of people push the move to external disks as a backup alternative. Certainly this gives you online information and generally works well. I am using this device at home to keep my photos and other files, and also to back up my main machine to a separate location. However, one thing strikes me as clear: replacing disks is expensive. Had I been using tapes for backups, throwing away the 2 tapes and buying new ones would have been much cheaper than buying 2 new hard disks for my array. Of course a disk array does not provide you with a site backup, and also does not provide a convenient way to store multiple backups over time. For that it is not yet cost effective.

So after this experience I am tempted to look and see if there are any small tape devices which I could attach to my PC to give me these extra facilities. The hardware no doubt exists, but from previous searches it is expensive and out of the price range of the average home user. That is a shame. With a lot of people now dismissing tape devices as old fashioned, failure of drives in a small array like my 5-disk one may turn out to be pretty expensive (in money) or pretty costly (in lost data). RAID-1 or RAID-5 sound great, but can you afford for a second disk to fail while the array is rebuilding after the first disk has failed? If you can not, then arrays of this type may not be for you and may lull you into a false sense of security.

The title of my original post was Unlucky Encounter… So have I just been unlucky (again) or is this sort of problem something which has happened to more people?

The Problems of Managing MySQL’s Configuration

I want to keep a record of the configuration of the MySQL servers I manage. The configuration of some servers differs from others and over time the configuration may vary, partly as a result of upgrades in the mysql version or the use of the particular mysql instance, so tracking this is important.

Configuration items in MySQL can be thought of as 2 separate parts: the static configuration files which determine the behaviour of the server when it starts up (my.cnf), and the running configuration of the server in question. The latter information is usually obtained by running SHOW GLOBAL VARIABLES, and SHOW SLAVE STATUS if the server is a slave.

I’d also like to compare the 2 sets of configuration so I can see if a local change has been made to the running server which is not reflected in its configuration file. I might want to correct this, or at least be aware of it.

However, collecting and comparing this configuration information is not as easy to do completely as one would really like. There are various inconsistencies which make this task troublesome; a sketch of just how crude a comparison has to be follows the list.

  • Getting the default configuration out of the system is hard: mysqld --help --verbose sort of shows you this information but you have to filter out the junk.
  • Use of dashes ('-') in variable names in my.cnf or command line parameter names vs '_' in SHOW GLOBAL VARIABLES.
  • 5.5.16 seems to try to ‘partially’ startup when it shouldn’t.  See bug#63187.
  • You can not query the default configuration information from the running server.
  • SHOW SLAVE STATUS uses mixed-case variable names vs SHOW GLOBAL VARIABLES which uses lower-case variable names.
  • SHOW GLOBAL VARIABLES uses lower-case variable names vs SELECT * FROM INFORMATION_SCHEMA.GLOBAL_VARIABLES which shows them in upper-case.
  • Some variables get their multiple values sorted (others don’t).
  • Conversion of sql_mode special values is not consistent or clear.
  • my.cnf may use letter multipliers to indicate size (K, M, G); SHOW GLOBAL VARIABLES uses actual numbers.
  • my.cnf uses 0/1 vs SHOW GLOBAL VARIABLES using OFF/ON for certain switches. In some cases just having the variable defined implies ‘= 1’.
  • Some directory paths provided without a trailing ‘/’ in my.cnf have the trailing ‘/’ added in SHOW GLOBAL VARIABLES.
  • Some variables can take relative paths (relative to datadir) whereas SHOW GLOBAL VARIABLES shows the full path.
  • Some variable names differ in my.cnf compared to the equivalent setting in SHOW GLOBAL VARIABLES.
  • Some variable names have changed with newer versions of MySQL yet in many cases no warning is given that the new (correct?) variable name should be used and the old name is deprecated.
  • Not all my.cnf settings are shown in SHOW GLOBAL VARIABLES: in 5.5.16 there are 234 default settings vs 312 values shown (update 2013-04-08: 294 vs 420 values in 5.6.10).
  • Some settings are incomplete (log_bin does not provide the full name of the binlog files set in my.cnf).
  • etc…
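
To illustrate the point, even a crude comparison needs to normalise dashes, case and ordering before diffing, and that still leaves the size suffixes, 0/1 vs OFF/ON, paths and renames listed above unhandled (a rough sketch only):

# Running configuration: name=value, lower-cased, dashes folded to underscores.
mysql -NBe 'SHOW GLOBAL VARIABLES' | tr '\t' '=' | tr 'A-Z-' 'a-z_' | sort > running.txt

# Crude my.cnf extraction: keep name=value lines and normalise them the same way.
grep -E '^[[:space:]]*[a-zA-Z][a-zA-Z0-9_-]*[[:space:]]*=' /etc/my.cnf |
    tr -d ' \t' | tr 'A-Z-' 'a-z_' | sort > configured.txt

comm -3 configured.txt running.txt  # lines unique to either side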

So all of these issues mean that if I want to look at the /etc/my.cnf files and compare them to the running configuration, it’s pretty hard to do well. That may not seem like such a big deal until you realise that many mysql servers may have an uptime of over a year. In that time the configuration may have evolved, and this may mean you need to bring down mysqld to adjust its configuration, or alternatively see the difference and accept it’s not such a big issue and can be corrected the next time the server is restarted.

I’ve brought this up with Oracle support, but I guess that many in the community are so used to this that they think it’s normal. If you manage several servers then it really is helpful to be able to collect this information and also to see whether servers run with the expected settings. Not doing so may mean that performance is affected and no-one notices.

So I’d like to ask that Oracle consider addressing this issue, removing some of the aforementioned inconsistencies, and perhaps also providing something like a SHOW DEFAULT SETTINGS command or an INFORMATION_SCHEMA.DEFAULT_SETTINGS table. If you are in a company which goes through a lot of change, and that includes the servers being managed, it would then be much easier to ensure the servers are configured properly.

Note that software from other places makes this really easy, and that can be very handy. Postfix’s postconf shows the current configuration; when called with -n it shows only the non-default options, and postconf -d shows all default settings. So this certainly can be done.

Update: 2012-02-21

It seemed useful to actually add a list of bug reports and related posts.

Bug reports:

  • Bug#19612: InnoDB files per table my.cnf option is read inconsistently
  • Bug#55288: Use of the underscore or hyphen for variables and options is confusing
  • Bug#55705: configuration tool for MySQL server options
  • Bug#63187: mysqld --help --verbose should not try to startup but only provide help
  • Bug#64403: MySQL does not warn of deprecated variables on startup
  • Bug#64474: No clean way to get compiled-in settings from mysqld for a storage engine

Related posts:

Please advise me of other missing references.

Adding pre/post hooks into MySQL init scripts

On several occasions it may be necessary to do some tasks around the process of starting or stopping a MySQL server such as:

  • Perform a warmup of the database using custom scripts and external information that the init_file configuration may not have access to.
  • Change configuration settings such as innodb_log_file_size and adjust the on disk files that this needs.
  • Change configuration settings such as log_bin (I try to use relative paths to ensure that different instances use a common setting even if datadir is different, and changing from an absolute to a relative path often requires adjusting the binlog index file).
  • Shutting down MySQL may need to be delayed if the SQL thread has open temporary tables or open transactions are still being processed.
  • etc…

You may have others, and many of these are likely to be site specific. I could ask Oracle to add lots of configuration options to cover many of these special cases, but that is unlikely to work as my needs may not match others’. However, asking them to add hooks allowing me to put in the infrastructure that I need, especially if those hooks are normally disabled and require minimal changes to the current init scripts, might be acceptable. That’s why I’m curious as to what others think about this.

Some may be asking: why do you need this at all? I have certainly seen several issues such as those indicated above which I needed to resolve. Also, the DBA is not the only person who may have to restart a server. Sometimes this may be done by a system administrator, and on some servers by a developer. These people are not necessarily expected to know about the specifics of one MySQL server compared to another, or to know the server’s current state. It is also true that a server may often go a long time without needing to be restarted, so many configuration changes may be pending; they only need to be done when the server is restarted.

So I was considering the idea of some hooks which would work in a similar way to a system init script, probably composed of the following parts:

  • An /etc/mysql/init.d directory containing any scripts that might need to be run. They can be placed there manually or by system configuration tools.
  • An /etc/mysql/rc-start and an /etc/mysql/rc-stop directory, which would contain files named PRE-nn-scriptname and POST-nn-scriptname linked to the ../init.d/scriptname file, again similar to the system init scripts. The nn would, like the number in the system init scripts, define the relative ordering of the execution of any scripts in that directory.
  • A modification to /etc/init.d/mysql so that these scripts would be run (if present) at 4 stages (a sketch follows this list):
    • pre-start: before starting mysqld
    • post-start: after starting mysqld
    • pre-stop: before stopping mysqld
    • post-stop: after stopping mysqld
  • Not absolutely necessary but for completeness a tool which would manage the linking / unlinking of these files.
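
The modification itself could be tiny. A minimal sketch of the hook-runner, using the names proposed above (none of this exists in the stock init script):

run_hooks () {
    # $1 is PRE or POST, $2 is start or stop
    for hook in "/etc/mysql/rc-$2/$1"-[0-9][0-9]-*; do
        [ -x "$hook" ] && "$hook" "$2"
    done
    return 0
}

# Then, in the start) and stop) cases of /etc/init.d/mysql:
#   run_hooks PRE start; ...existing startup code...; run_hooks POST start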

I can certainly see the use of something like this for my work, but I do not expect that someone like Oracle will take this on unless they see a clear business case or technical advantage. It would also allow Oracle to provide some “standard” scripts if they have seen the need from their customers, and the community could equally share scripts written for their specific needs which may also be useful to others.

If you have a single mysql instance to manage this may seem like overkill. That’s fine: by default it would not be used. However, as MySQL is gradually being used on larger sites, I believe this sort of functionality could be quite helpful: more people may be responsible for managing the servers, some changes may need to be postponed to a later moment, and the change in the scripts I suggest would make management of the instances simpler.

Have you solved the problem already? If so how? If not does this seem like a reasonable idea?

About InnoDB’s: Shortened warm-up times with a preloaded InnoDB buffer pool

This comment is in regard to this InnoDB blog page. Trying to post a message says you need to be logged in, but there’s no register/login page I could see…

Anyway, the page talks about a new feature which allows you to save the InnoDB buffer pool on shutdown and to load it on startup, thus ensuring that once loaded the database will perform as if it had a “hot” cache.

That sounds interesting, as I have seen on numerous occasions that if the buffer pool is not warm, performance can be an order of magnitude worse.

This looks like a very welcome feature. However, a couple of things are not clear to me.

  1. Having some example benchmark times for this process, compared to trying to warm up the buffer pool by hand, would be useful. While this may depend heavily on database content, it would at least give us an idea of how much difference it may make.
  2. On servers I currently have to warm up, we find performance becomes acceptable when perhaps 20% of the buffer pool has been warmed up. The implementation behind the new functionality isn’t explained, but one option which might be useful would be to save and load only a configurable percentage of the most recently used pages, not 100% of the pages as this patch seems to do. This would save time on save and on load, and would possibly be good enough for most people. Many servers I use have 40 GB or 90 GB pools, so loading files of this size into memory is going to take time. Thus having some control over how much of the buffer pool is loaded/saved seems like a good idea, if this is possible.
  3. I assume the buffer pool is loaded and saved sequentially (in large blocks), giving mysqld and the OS the chance to read/write the pages in larger, faster chunks. Is this so? Certainly when I currently see mysqld shut down (in 5.1, and I believe 5.5 too) the shutdown times seem rather slow, and from what I can see the data does not appear to be written out quickly; perhaps the shutdown generates a lot of random I/O. So moving to a faster linear read/write seems likely to be much more efficient.
  4. There is no mention of how the content of the file is checked for “consistency” to make sure that invalid data won’t be loaded. I’m sure there is some checking, but otherwise this feature could be a way to corrupt the database inadvertently if the buffer pool cache file gets corrupted.
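
For reference, the feature is driven by a handful of variables (names as they appear in the 5.6 documentation; the behaviour may still change before GA):

# my.cnf
[mysqld]
innodb_buffer_pool_dump_at_shutdown = ON
innodb_buffer_pool_load_at_startup  = ON

# or on demand from a running server:
#   SET GLOBAL innodb_buffer_pool_dump_now = ON;
#   SET GLOBAL innodb_buffer_pool_load_now = ON;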

CentOS 6 – Blocked by the installer not quite doing things as I want

RHEL 6 has been out for some time and promises some nice new features. Finally CentOS 6 has been released, so I was looking forward to installing it and upgrading my existing system.

This is currently running CentOS 5.6 and has quite a few services on it. The best way to upgrade is usually to do a fresh install and then migrate over the existing services. That’s what I was planning to do. I only have a single PC, so I was trying to figure out the best way to go about this while at the same time minimising downtime.

I use LVM quite a lot. It makes worrying about which disk is used and where the partitions are located much less of an issue. My current setup is quite simple: 2 disks with 2 partitions each. sda1 holds /boot, sdb1 is free. sda2/sdb2 are configured in RAID-1 and hold a single volume group.

Plan A

My first plan was to use a free USB disk drive and install CentOS 6 onto that, leaving some space on the USB disk, and migrate the CentOS 5 volume group to the USB disk. Then I could move the CentOS 6 installation to the hard disks and complete the migration. That should have left me, during this process, with the option of booting either CentOS 5 or CentOS 6 and migrating data from one system to the other.

Plan A worked in the sense that I was able to complete the install of CentOS 6 on the USB disk, but I couldn’t get the installer to install grub on /dev/sdc (the MBR): it insisted on doing it on sdc1, which of course the BIOS just ignored. So I couldn’t get CentOS 6 to boot. I could have spent some more time fiddling around with this, but then I came up with what I thought was a better plan B.

Plan B

Plan B was simpler: given my setup I had a free partition I could boot from (/dev/sdb1) and also an existing volume group where I could put the new partitions. There was plenty of free space in the existing volume group, so this should have been much easier. However, it does not work. The CentOS 6 installer shows the disks correctly, and it is possible to edit /dev/sdb1 to be /boot. It also shows you the previously created volume group. However, it does not show you the existing logical volumes inside that volume group: the list is empty. So I could not tell it to reuse my swap partition, or to use /home from my existing home partition, and I was also fearful of creating new partitions and having the installer perhaps recreate the VG and thus wipe out my existing system. So I have reported the problem to the CentOS team, though this looks like a bug in RHEL 6, and will play around further to see if I can get this to work.

This makes me wish I had a bigger environment at home. In a work environment you can afford to install a new server, set it up, migrate existing functionality from the current one, and then finally switch over. I don’t have the hardware to do that, so upgrading is quite a bit more work, at least if I want to minimise downtime.

So I will continue to work out how best to get the CentOS 6 install running and then migrate everything over.

2011-07-30 Update

After further investigation, Plan A’s failure seemed to be a problem of my own making. The installer first shows you a list of disks and then asks which one you want to install the OS on, the others being called “data disks”. There’s an option here to install the boot loader. I missed this, and if you do, you are not later offered the possibility of installing the bootloader on that disk’s MBR. So: human error, though I would prefer the install disk’s MBR to always be offered as an option, even if with a warning like “You indicated you did not want to install the boot loader on the MBR of this disk”.

MySQL Partitioning and its Confusing Syntax

While looking at partitioning I recently made a mistake which I guess can happen to others. Often this is due to not fully reading the documentation or scanning it too quickly and misunderstanding what’s being said.

So this post is to complain about the MySQL partitioning syntax and to warn others who may easily make the same mistake without realising.

First we probably need to ask why we are partitioning a table in the first place. The main reasons for this are I think:

  • to improve query performance
  • to reduce individual .ibd file sizes for large tables (if using innodb_file_per_table)

In my case I wanted to do both. I had several tables which store a large number of rows (batches of data) based on an incremental batch number. One of these tables was around 40 GB and had about 500,000,000 rows in it. When processing data in this table, often all the data from a particular batch run would be aggregated, with the resultant data being stored elsewhere.

So I knew I wanted to partition, and the performance of these aggregations would be improved as the time to table-scan a batch would be reduced from the time to scan the whole table to the time to scan a single partition.

The primary key of these tables was already in the form ( batch_id, other_key ) so I really wanted to just partition by the batch_id key, using in my case 64 partitions. batch_id is defined as int unsigned NOT NULL.

This is where I made the mistake. There are various ways to partition tables in MySQL, named RANGE, LIST, HASH and KEY. Given that batch_id was a gradually increasing value and I didn’t want to modify the partitioning once it was created, RANGE and LIST seemed inappropriate. Of HASH and KEY, I incorrectly assumed that HASH would apply some complex hash function to my integer batch_id, and that since batch_id was part of the key, KEY would be the right way to partition.

So I incorrectly defined the table like this:

ALTER TABLE xxxxx PARTITION BY KEY ( batch_id ) PARTITIONS 64;

When you read the documentation you see that for HASH-type partitioning you provide an expression which must return a numeric value, and the partition to use is then determined by MOD( your_expression_value, number_of_partitions ).

KEY-type partitioning works differently: it only allows you to provide column names, and it then uses its own internal hashing function to generate the partition id.

So using PARTITION BY KEY ( numeric_column_name ) appears correct, but is likely to be more expensive to calculate, and for large tables this is likely to cause additional performance issues. It looks like I’m going to have to rebuild the tables I’ve just partitioned, and that’ll be another weekend of work.
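
The rebuild amounts to re-declaring the partitioning with HASH, which for an integer column reduces to the cheap MOD() calculation described above (same shape as the earlier statement, table name elided as before):

ALTER TABLE xxxxx PARTITION BY HASH ( batch_id ) PARTITIONS 64;
-- e.g. batch_id 12345 lands in partition MOD(12345, 64) = 57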

A suggestion to those at Oracle is that:

  • PARTITION BY KEY ( numeric_column_name ) should be modified to behave like PARTITION BY HASH ( numeric_column_name ). However, as this would likely cause on-disk incompatibilities during a version change if it were implemented, I’m guessing it just won’t happen, unless there’s some easy way to distinguish the current behaviour from my proposed new behaviour.
  • The documentation should be made a little clearer and mention this obvious case. What I see with a lot of the MySQL documentation is that it documents technically how things are done, rather than documenting the problem and then how to implement the solution. Since it’s likely that many people are going to partition on one of the columns, especially if a multi-column primary key is used, this use case should be made clearer.

If I had time I’d look at how partitioning is implemented in Oracle Database, Sybase, Postgres or DB2 and see whether it’s just MySQL which has chosen these unfortunate keywords for defining its partitioning methods.

However, I’m curious: am I the only one to fall into this trap?

Getting Started with GuruPlug Plus (almost)

A few months ago I came across the embedded platform SheevaPlug and its brother the GuruPlug. I was basically looking to move my Asterisk setup from my main server to a separate box, as I’ve been unable to avoid some of the latency issues I’ve reported in previous posts and think they are likely to be related to other software running on the server.

I was thus looking for a small size computer which I could tuck out of the way and hopefully have a nice simple working configuration. I also wanted to move over to FreeSwitch if I had time later.

So I bought myself a GuruPlug Plus. The hardware itself is very tiny and comes with only 500MB of built-in storage, but it does have space for a MicroSD card and also external e-SATA/USB drives. It looked economical and a good option.

However, to get it working for my intended use I needed to install more than the sample version of Debian that’s provided with the box, so I decided to try to install a larger Debian system on an 8GB MicroSD card. That’s enough for the single application I want to run.

The blog http://bzed.de/posts/2010/05/installing_debian_on_the_guruplug_server_plus/ seemed to have some good pointers as to how to install Debian and exactly on the hardware I was using.

So once I had uboot upgraded I followed the instructions. I checked I could see my MicroSD card:


Marvell>> usb tree

Device Tree:
1 Hub (480MBit/s, 0mA)
| Marvell EHCI
|
+-2 Hub (480MBit/s, 100mA)
| USB2.0 Hub
|
+-3 Mass Storage (480MBit/s, 500mA)
USB Storage 000000009909 <=====

I had decided to use the latest Debian version (Squeeze). I also wanted to use LVM, as this makes it easier to adjust partition sizes later, and it was an option provided by the installer. I followed the instructions on the website but failed to make the plug bootable. The reason, after a bit of looking around, is that the uInitrd image was bigger than the 0x600000 size given as an example. It would have been helpful if not only the commands but also the uboot output had been shown, as this would have made that clearer. I took out the MicroSD card and checked it on a normal Ubuntu server, where the device was recognised fine:


/dev/sdb1 is ext2 ( this is /boot )

LVM shows:

root@ubuntu:~# pvs
PV VG Fmt Attr PSize PFree
/dev/sdb5 guruplug lvm2 a- 7.23g 0

root@ubuntu:~# lvs
LV VG Attr LSize Origin Snap% Move Log Copy% Convert
home guruplug -wi--- 2.54g
root guruplug -wi--- 264.00m
swap_1 guruplug -wi--- 436.00m
tmp guruplug -wi--- 224.00m
usr guruplug -wi--- 2.57g
var guruplug -wi--- 1.22g


root@ubuntu:/mnt# ls -la
total 16584
drwxr-xr-x 3 root root 1024 2011-06-06 09:46 .
drwxr-xr-x 22 root root 4096 2011-06-03 14:34 ..
-rw-r--r-- 1 root root 84094 2011-05-19 23:58 config-2.6.32-5-kirkwood
lrwxrwxrwx 1 root root 28 2011-06-06 08:48 initrd.img -> initrd.img-2.6.32-5-kirkwood
-rw-r--r-- 1 root root 6465845 2011-06-06 09:45 initrd.img-2.6.32-5-kirkwood
drwxr-xr-x 2 root root 12288 2011-06-06 08:12 lost+found
-rw-r--r-- 1 root root 1003125 2011-05-19 23:58 System.map-2.6.32-5-kirkwood
-rw-r--r-- 1 root root 1433984 2011-06-06 09:46 uImage
-rw-r--r-- 1 root root 6465909 2011-06-06 09:46 uInitrd <<============ larger than 0x600000 !
lrwxrwxrwx 1 root root 25 2011-06-06 08:48 vmlinuz -> vmlinuz-2.6.32-5-kirkwood
-rw-r--r-- 1 root root 1433920 2011-05-19 23:57 vmlinuz-2.6.32-5-kirkwood

So I modified the steps in the other blog and did the following:


Marvell>> usb start
(Re)start USB...
USB: scanning bus for devices... 3 USB Device(s) found
Waiting for storage device(s) to settle before scanning...
Device NOT ready
Request Sense returned 02 3A 00
1 Storage Device(s) found
Marvell>> usb tree

Device Tree:
1 Hub (480MBit/s, 0mA)
| Marvell EHCI
|
+-2 Hub (480MBit/s, 100mA)
| USB2.0 Hub
|
+-3 Mass Storage (480MBit/s, 500mA)
USB Storage 000000009909

Marvell>> tftp 0x6400000 uImage
Using egiga0 device
TFTP from server 192.168.a.xx; our IP address is 192.168.a.yy
Filename 'uImage'.
Load address: 0x6400000
Loading: #################################################################
#################################################################
#################################################################
#################################################################
#####################
done
Bytes transferred = 1433984 (15e180 hex)
Marvell>> nand erase 0x100000 0x400000

NAND erase: device 0 offset 0x100000, size 0x400000
Erasing at 0x4e0000 -- 100% complete.
OK
Marvell>> nand write.e 0x6400000 0x100000 0x400000

NAND write: device 0 offset 0x100000, size 0x400000

Writing data at 0x4ff800 -- 100% complete.
4194304 bytes written: OK
Marvell>> tftp 0x6400000 uInitrd
Using egiga0 device
TFTP from server 192.168.a.xx; our IP address is 192.168.a.yy
Filename 'uInitrd'.
Load address: 0x6400000
Loading: #################################################################
#################################################################
#################################################################
#################################################################
#################################################################
#################################################################
#################################################################
#################################################################
#################################################################
#################################################################
#################################################################
#################################################################
#################################################################
#################################################################
#################################################################
#################################################################
#################################################################
#################################################################
#################################################################
############################
done
Bytes transferred = 6465909 (62a975 hex)
Marvell>> nand erase 0x500000 0x1fb00000

NAND erase: device 0 offset 0x500000, size 0x1fb00000
Skipping bad block at 0x17920000
Skipping bad block at 0x19e60000
Erasing at 0x1ffe0000 -- 100% complete.
OK
Marvell>> nand write.e 0x6400000 0x500000 0x700000

NAND write: device 0 offset 0x500000, size 0x700000


Writing data at 0xbff800 -- 100% complete.
7340032 bytes written: OK
Marvell>> setenv bootargs_debian 'console=ttyS0,115200 root=/dev/guruplug/root'
Marvell>> setenv bootcmd_nand 'nand start; nand read.e 0x00800000 0x100000 0x400000; nand read.e 0x01100000 0x500000 0x700000'
Marvell>> setenv bootcmd 'setenv bootargs $(bootargs_debian); run bootcmd_nand; bootm 0x00800000 0x01100000'
Marvell>> saveenv
Saving Environment to NAND...
Erasing Nand...Writing to Nand... done
Marvell>>

Now Debian does start the boot process but I see this:


[ 1.998201] hub 1-0:1.0: 1 port detected
Begin: Loading essential drivers ... done.
Begin: Running /scripts/init-premount ... done.
Begin: Mounting root file system ... Begin: Running /scripts/local-top ... [ 2.077782] device-mapper: uevent: version 1.0.3
[ 2.084521] device-mapper: ioctl: 4.15.0-ioctl (2009-04-01) initialised: dm-devel@redhat.com
Volume group "guruplug" not found
Skipping volume group guruplug
Unable to find LVM volume guruplug/swap_1
done.
Begin: Waiting for root file system ... [ 2.316249] usb 1-1: new high speed USB device using orion-ehci and address 2
[ 2.467658] usb 1-1: New USB device found, idVendor=05e3, idProduct=0608
[ 2.474392] usb 1-1: New USB device strings: Mfr=0, Product=1, SerialNumber=0
[ 2.481574] usb 1-1: Product: USB2.0 Hub
[ 2.486833] usb 1-1: configuration #1 chosen from 1 choice
[ 2.495535] hub 1-1:1.0: USB hub found
[ 2.499665] hub 1-1:1.0: 4 ports detected
[ 2.776575] usb 1-1.1: new high speed USB device using orion-ehci and address 3
[ 2.908157] usb 1-1.1: New USB device found, idVendor=05e3, idProduct=0726
[ 2.915067] usb 1-1.1: New USB device strings: Mfr=0, Product=1, SerialNumber=2
[ 2.922428] usb 1-1.1: Product: USB Storage
[ 2.926663] usb 1-1.1: SerialNumber: 000000009909
[ 2.932740] usb 1-1.1: configuration #1 chosen from 1 choice
[ 3.023159] SCSI subsystem initialized
[ 3.051611] Initializing USB Mass Storage driver...
[ 3.056811] scsi0 : SCSI emulation for USB Mass Storage devices
[ 3.063457] usbcore: registered new interface driver usb-storage
[ 3.070152] USB Mass Storage support registered.
[ 8.087371] scsi 0:0:0:0: Direct-Access Generic STORAGE DEVICE 9909 PQ: 0 ANSI: 0
[ 8.096094] scsi 0:0:0:1: Direct-Access Generic STORAGE DEVICE 9909 PQ: 0 ANSI: 0
[ 8.143441] sd 0:0:0:0: [sda] Attached SCSI removable disk
[ 8.149561] sd 0:0:0:1: [sdb] 15661056 512-byte logical blocks: (8.01 GB/7.46 GiB)
[ 8.157925] sd 0:0:0:1: [sdb] Write Protect is off
[ 8.162745] sd 0:0:0:1: [sdb] Assuming drive cache: write through
[ 8.171549] sd 0:0:0:1: [sdb] Assuming drive cache: write through
[ 8.177695] sdb: sdb1 sdb2 < sdb5 >
[ 8.187177] sd 0:0:0:1: [sdb] Assuming drive cache: write through
[ 8.193308] sd 0:0:0:1: [sdb] Attached SCSI removable disk
done.
Gave up waiting for root device. Common problems:
- Boot args (cat /proc/cmdline)
- Check rootdelay= (did the system wait long enough?)
- Check root= (did the system wait for the right device?)
- Missing modules (cat /proc/modules; ls /dev)
ALERT! /dev/guruplug/root does not exist. Dropping to a shell!

BusyBox v1.17.1 (Debian 1:1.17.1-8) built-in shell (ash)
Enter 'help' for a list of built-in commands.

/bin/sh: can't access tty; job control turned off
(initramfs)

It seems the device mapper is not being started at the right time, and hence the root partition can not be found.

I tried extending the rootdelay. That changes the error slightly, but it seems the real problem is that the device mapper is started too early, or that the LVM commands are not run again later so that the swap/root volumes become visible.


Marvell> setenv bootargs_debian 'console=ttyS0,115200 root=/dev/guruplug/root rootdelay=10'
Marvell> saveenv
Saving Environment to NAND...
Erasing Nand...Writing to Nand... done
Marvell>>

However, that still doesn’t seem to help.


...
[ 29.596839] sd 0:0:0:0: [sda] Attached SCSI removable disk
[ 29.602959] sd 0:0:0:1: [sdb] 15661056 512-byte logical blocks: (8.01 GB/7.46 GiB)
[ 29.611323] sd 0:0:0:1: [sdb] Write Protect is off
[ 29.616150] sd 0:0:0:1: [sdb] Assuming drive cache: write through
[ 29.624947] sd 0:0:0:1: [sdb] Assuming drive cache: write through
[ 29.631091] sdb: sdb1 sdb2 < sdb5 >
[ 29.640449] sd 0:0:0:1: [sdb] Assuming drive cache: write through
[ 29.646601] sd 0:0:0:1: [sdb] Attached SCSI removable disk
Begin: Loading essential drivers ... done.
Begin: Running /scripts/init-premount ... done.
Begin: Mounting root file system ... Begin: Running /scripts/local-top ... [ 33.555674] device-mapper: uevent: version 1.0.3
[ 33.562475] device-mapper: ioctl: 4.15.0-ioctl (2009-04-01) initialised: dm-devel@redhat.com
done.
Begin: Waiting for root file system ... done.
Gave up waiting for root device. Common problems:
- Boot args (cat /proc/cmdline)
- Check rootdelay= (did the system wait long enough?)
- Check root= (did the system wait for the right device?)
- Missing modules (cat /proc/modules; ls /dev)
ALERT! /dev/guruplug/root does not exist. Dropping to a shell!

BusyBox v1.17.1 (Debian 1:1.17.1-8) built-in shell (ash)
Enter 'help' for a list of built-in commands.


/bin/sh: can't access tty; job control turned off
(initramfs)
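
If the hypothesis above is right, activating the volume group by hand from the (initramfs) shell should make /dev/guruplug/root appear (standard LVM commands; I have not yet confirmed this gets the boot any further):

(initramfs) lvm vgscan
(initramfs) lvm vgchange -ay guruplug
(initramfs) ls -l /dev/mapper/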

So if anyone has any ideas, I’d be delighted to hear from you.

Thoughts about extlookup() in puppet

To manage some of the database servers I look after I use puppet.

I’ve recently been trying to clean up quite an extensive set of puppet recipes I’d written. A suggestion that was made was to remove the data from the recipes, and a pointer was given to the new extlookup functionality: that is, keep data and code separate, making it easier to modify the behaviour without having to modify the code itself.

In my case I manage a few sets of database servers. Similar groups of boxes are configured the same way. One specific function I was looking to improve was this snippet.

case $domain {
'dom1.example.com': { s_db::master_vif::check { 'master vif': interface => 'eth0:123', ip_address => '10.1.xxx.yyy' } }
'dom2.example.com': { s_db::master_vif::check { 'master vif': interface => 'eth0:456', ip_address => '10.2.xxx.yyy' } }
}

The intention here being that on the primary master the interface would be enabled and on the standby master the interface would NOT be enabled.

Note: this snippet applies to a pair of boxes but there are several other pairs to configure in the same way, each with their own specific settings.

The extlookup() functionality only allows a single “lookup value” to be searched for, and it determines implicitly rather than explicitly where to look for this value. The snippet above has 2 “parameters”, and if I have several pairs of boxes, MDB_A, MDB_B, MDB_C (the name is not part of the domain or the hostname), then using the extlookup() functionality I’m going to have to set up a lot of data files, which looks unwieldy, especially as I have other resources defined which take several parameters.

So I thought about how I would do this to make the lookup facility more generic and visible.

What seems better to me would be something a bit more generic, based on looking up a < "config_item", "lookup_value" > pair in a single location to get a single value as now. That would probably change the extlookup() prototype to something like:

extlookup( "config_item", [ list of variables to use ], "default_value", [ lookup "tables/csv files" ] )

the parameters being:
[1] config_item (the grouping of values to look up)
[2] the array of values to apply the lookup on, done in order as now but shown more explicitly
[3] the default value
[4] an explicit array of locations of the data files.

The .csv files would have 3 columns: config_item,lookup_value,result_value

Using something like this I can then simplify the snippet above to:

$interface_name = extlookup( 'MDB_A-vif-interface', [ $hostname, $domain ], '', [ 'dba_config' ] )
$ip_address = extlookup( 'MDB_A-vif-ip-address', [ $hostname, $domain ], '', [ 'dba_config' ] )

s_db::master_vif::check { 'master vif': interface => $interface_name, ip_address => $ip_address }

This is clearer, as the configuration is more explicit than with the current extlookup() definition.

If the performance of having a large number of entries in a single file were a concern, then talking to a DB backend and doing something like the following would work:

SELECT lookup_value FROM some_table WHERE config_item = ? AND lookup_key = ?

Also, while in the example above I can use $hostname or $domain as the retrieval key, on several other boxes I probably can’t, and might need to look up another value such as server_type or db_name. Having the explicit [ …. ] list, where I can add any string I want to be looked up, gives more flexibility and is clearer.

Using the current (puppet 2.6) mechanism would require me, if I understand it correctly, to configure different .csv files to be searched for each “configuration parameter”. I may also need to override $extlookup_precedence in several places. In any case it can’t be overridden several times within the same module, which is what I would need if I want to look up different variables.

That doesn’t strike me as being very useful. It’s also unclear from the current extlookup() call itself where the data is being retrieved from; using these external variables seems to me to make the calls behave like magic.

So my question is to ask others who are using the extlookup() functionality how they cope with settings more complex than system settings which depend on, say, $hostname or $domain etc., and whether my proposed extension makes any sense.

Thanks for your thoughts.

2011-05-12, update

Another snippet of something I’d like to do which seems easier to fulfil using my proposed extlookup() setup.

$notifications_1 = 'address1@example.com'
$notifications_2 = 'address2@example.com'
$notifications_3 = 'address3@example.com'

case $hostname {
'host01':  { $email = $notifications_1 }
'host02': { $email = $notifications_2 }
default:   { $email = $notifications_3 }
}

This would seem more naturally expressed as:

$email = extlookup( 'notification_email', [ $hostname ], $notifications_3 )

or perhaps the following if you want to only check for boxes in your domain.

$email = extlookup( 'notification_email', [ $hostname, $domain ] )

but most importantly with a single data file containing:
notification_email,host01,address1@example.com
notification_email,host02,address2@example.com
notification_email,example.com,default_email@example.com

How to start mysqld using numactl

Various people have complained about Linux swapping unexpectedly on boxes running mysqld, when apparently mysqld was not using all the memory and there was quite a bit of free memory available.

There’s also an article by Jeremy Cole. However, his solution requires a one-line change to mysqld_safe which, while small, does not work very well if you have to maintain a large number of servers and are using packages, as a package upgrade will overwrite the modified mysqld_safe and then restart mysqld with the unmodified script. This leads to the need to re-patch the script and restart mysqld again. Not very helpful.

So I was looking for another solution and came up with this option, which basically requires a minor change to /etc/my.cnf and the use of a small shell wrapper script. The change to my.cnf is simply to add something like the following to the [mysqld_safe] section (the paths and the wrapper name mysqld_numa are examples; adjust them to your installation):
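
# point mysqld_safe at the wrapper script instead of the real mysqld
ledir  = /usr/local/bin
mysqld = mysqld_numa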

The wrapper script is little more than this (a sketch: use whatever numactl and mysqld paths your system has):
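
#!/bin/sh
# interleave memory allocations across all NUMA nodes, then exec the real server
exec /usr/bin/numactl --interleave=all /usr/sbin/mysqld "$@"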

This minor change means that you can still use the standard mysql scripts and they will work even after an upgrade to a new rpm or other type of package. So if you’ve seen your mysql server swapping when you think you have the memory settings correctly adjusted, and you’ve been trying to figure out how to easily adjust the configuration to try out numactl, this seems to be an easier way of doing it.

log_processlist.sh – script for monitoring MySQL instances

I thought I would upload a small script I use which I find quite useful. log_processlist.sh is normally run minutely from cron and collects statistics from any MySQL instance it finds. Data is kept for up to a week and then rotated. This allows you to check what happened over a period of time, for example if an instance crashes or seems to have a lot of load.
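
The cron entry itself is nothing special; something like the following, assuming the script is installed in /usr/local/bin:

* * * * * /usr/local/bin/log_processlist.sh >/dev/null 2>&1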

I have found it invaluable on the instances I’m monitoring, and once set up it’s usually nice and silent. If you find this useful or have suggestions for improvements please let me know.

A few more things to add to my desired SOHO FreeSWITCH configuration

Since my last post I’ve spoken some more on #freeswitch and also freeswitch-users and have some ideas of how to implement some of the things I wanted to do on my SOHO PBX setup.

Additionally I’ve also added several other tasks which I think are important. So here is where you can find the updated post: http://blog.wl0.org/voip-configuration-requirement-for-freeswitch/.

I’ve not fully understood FreeSWITCH’s configuration using 2 different ports for internal and external traffic, especially as most SIP systems just use the default SIP port 5060. As such I want to configure FreeSWITCH so it works on a single port rather than the 2 ports the default configuration uses.
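
From what I can see the port is set per sofia profile, so presumably the first step is something like this in the one profile I keep (a sketch; I still need to check how this interacts with the NAT-related settings):

<!-- in the remaining sip profile: bind to the standard SIP port -->
<param name="sip-port" value="5060"/>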

Since the requirements list has become quite long I want to try to document how I get to the final setup in the hope it will allow someone else to do this more quickly than it’s taken me.

If you see a configuration or description of FreeSWITCH behaviour which is wrong, or know a better way to implement these features, please let me know. This is a learning process for me.

Freeswitch – progress and frustration

I’ve not posted for a while and thought it about time I gave an update on my progress trying to get FreeSWITCH to work.

Bluebox

I previously mentioned that I was trying out Bluebox and it looked promising. I did play around with it for a while but had various issues which I had trouble solving.  One of these was due to the way the configuration is stored. Bluebox configuration is stored in a database and then auto-generated. This means that the configuration files have entries for things like sip_interface_1, trunk_1, trunk_2, etc. When looking in the log files this can be a bit confusing as it’s not apparent which “real” entry these names refer to.

I’d suggest that the configuration optionally allow the user to provide their own names, so that you can have entries such as the_name_of_my_sip_provider instead.

There were also some configuration issues with registrations and dialing not working as I had expected.  To be honest I did get offers of help from some of the Bluebox developers, but did not think it right to expect them to hand-hold me while debugging certain software issues. In the end I could not get Bluebox to work the way I expected and so gave up. That is not to say that Bluebox is bad, but I think it needs to include some more tools to help diagnose the status of various things.  After all, if you provide a GUI interface it helps if you also provide some sort of GUI test tools.  Perhaps I will try again later and see if some of the issues I experienced have been resolved.

Back to Manual Configuration File editing

In the end I decided to try again and edit the configuration files manually. I used the Bluebox install iso as a base (running inside vmware) and this time decided, before I started, to write up a requirements document and work through it. That can be found here. I ran the setup inside a vmware guest while testing, as this keeps it behind my router’s NAT and in theory FreeSWITCH is not visible to the outside world, so it is safe while I learn and adjust the configuration.

The results of doing things this way have been promising but I notice several things which perhaps for a FreeSWITCH newbie may not be ideal.

  • I believe that the default FreeSWITCH logging may be too verbose. This is great for developers who understand all the logging messages and may be looking for a specific log entry, but it can be rather intimidating for someone who is just trying to see whether something worked and, if not, why not. For those users a less verbose default log level would be more appropriate; you can always increase logging later if you need the details of why something is not working as expected.
  • xml file breakage. I’ve occasionally edited the xml configuration files incorrectly and when running reloadxml I get the frustrating message about a failure to parse the file and some weird line number. Figuring out which entry is wrong, and why, from this message is quite tricky.  I only recently came across a comment saying that the preparsed xml files get written to log/freeswitch.xml.fsxml.  This helps a lot, as you see the final configuration generated from the various individual files, but I believe this should be mentioned more prominently in the various tutorials; it’s easy to miss. That said, FreeSWITCH could at least report the full path of the xml file where the error is found. (A well-formedness check before reloading also helps; see the one-liner after this list.)
  • Related to the same issue is the fact that, as far as I can see, comments need to be in the format <!-- some stuff -->. It is not clear to me whether white space around these comments is always allowed or only sometimes, as the “xml breakage” I seem to have triggered was often caused by adding several lines of comments like the above, often with white space before or after them. I did not expect that to make a difference but it seems that in some places it does.  I’d love some clarification on this, and if it’s possible to make FreeSWITCH’s xml parser slightly more tolerant that would be nice.
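
As a stopgap for the xml breakage mentioned above, checking well-formedness before running reloadxml saves a lot of pain. A sketch, assuming xmllint (from libxml2) is installed and the default configuration directory:

find /usr/local/freeswitch/conf -name '*.xml' | xargs xmllint --noout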

Default configuration FreeSWITCH security concerns

Note: the comments here relate to the 1.0.6 version of FreeSWITCH. I’ve tried to build newer versions on my Linux box but the build fails. Checking out 1.0.6 builds without problems, but I realise that a lot of changes have happened since. It would be nice to test on a newer version.

The default/sample configuration seems to be very open, which is great for learning as a lot of functionality is enabled by default. However, this also seems to be a perfect opportunity for SPIT (which seems to be the term used for the VoIP equivalent of SPAM).  While the default password for the extensions is configurable in vars.xml, it is not made apparent how important it is to change this to something really secure.  I was working with FreeSWITCH behind a NAT’d router so was pretty relaxed about this, thinking nobody could reach the software as it was not accessible on my public IP address.  However, that assumption seems not to be as true as I thought. It seems that registering with an external SIP gateway not only opens a hole out, but also opens a hole through the firewall to FreeSWITCH. That’s scary and I have not seen it mentioned anywhere.

A few days ago I noticed a lot of FreeSWITCH logging of something apparently trying to dial various international numbers. Looking at the logs it seems that someone had managed to guess the password of one of my extensions (not a default extension) and was trying to dial out to various numbers, including places like Somalia.  As my configuration was not complete, and the default gateway not set up, this was failing, but I was rather horrified at how easily someone could get in.  I was lucky: this could have cost me a lot of money.

I still don’t understand how the attacker found or figured out the extension that was being used to dial out. As it was not a default extension, it seems they must have tried a large number of combinations before finally finding one that worked.

However, this really reminded me of a similar situation several years back with sendmail. Before qmail, postfix and several other newer mail servers became popular, most UNIX boxes came pre-installed with sendmail. Sendmail is great: you can configure it to do anything, but the configuration file is rather complex and hard to really understand. At the time the configuration also assumed that it was fine to relay mail, and often sendmail was configured as an open relay.  Newer MTAs (and sendmail too now) are configured with a slightly different attitude: the Internet is hostile and there are lots of people who, if they can, may try to abuse your system.  So now the software is configured to be very restrictive about what mail it will accept and forward.

How does this relate to FreeSWITCH? I’ve seen attacks on my current Asterisk setup (though none successful), and after the attack on my FreeSWITCH configuration it seems that VoIP software should take a similar attitude. The configuration should be designed to allow the least amount of access necessary and also to have controls in place to mitigate attack attempts.  Things that come to mind are:

  • the external profile should log authentication failures by default
  • FreeSWITCH should have some sort of rate limiting configured, so that someone trying to access FreeSWITCH frequently will trigger this and be ignored for a while, with the issue logged clearly.
  • It should be more obvious how to configure network ACLs for extensions, and these should be configured by default.  Postfix by default allows relaying (the analogue of registering an extension) only from hosts in mynetworks, which by default is the network of the local network interface. This can of course be adjusted but is a good, safe starting point.  FreeSWITCH’s default configuration should perhaps be similar: only allow registrations from clients on my network (a sketch of what I mean follows this list).
  • It seems that to register the client can use FreeSWITCH’s ip address or domain name.  Once someone outside has figured out where the SIP listener is located it’s quite easy to try to register with different users at the ip address. If you only accept the registration to user@my.domain.com then the attacker has to know what the correct configuration realm is before they can make much progress. That’s much harder for them. So it again seems like a good idea to NOT allow registrations directly to the IP address (at least by default).
  • For trunk connections, if you have a DID number you expect the VoIP provider to call you. In this case it would be good if the example configuration clearly allowed you to limit where the incoming calls to that trunk number may come from, and whether they should be password protected or not.
  • Allow rate limiting of calls to a gateway, or from an extension. That’s different to rate limiting registration attempts to a specific extension, or from a specific ip address.  Also allow the configuration and limiting of the number of concurrent connections that may be allowed through a trunk/gateway.
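
On the ACL point, here is a minimal sketch of the sort of default I have in mind, using acl.conf.xml together with the sofia profile parameter apply-register-acl (the list name lan_only and the CIDR are my own examples):

<!-- conf/autoload_configs/acl.conf.xml: define the list -->
<list name="lan_only" default="deny">
  <node type="allow" cidr="192.168.1.0/24"/>
</list>

<!-- in the sip profile: only accept REGISTERs from hosts on that list -->
<param name="apply-register-acl" value="lan_only"/>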

I believe that some, and perhaps all, of these features are possible in FreeSWITCH, but they are not documented in the default configuration in a way that really makes you take note. It would be helpful if that were addressed.  You expect to have a mail server on all unix servers, and you expect them to be secure. Many people using voice software probably expect the same.

Additionally a hacked mail server does not cost you money. A hacked PBX does.

FreeSWITCH cookbook

One thing that I would like to see is a FreeSWITCH cookbook: something that gives examples of lots of different types of configuration and is complete enough to solve the different tasks that many people using FreeSWITCH need to get done. As in many other cookbooks I’ve seen (for perl, mysql), comments on the how and why help you understand things better.  This could be part of the existing FreeSWITCH book, though perhaps a separate book would be more appropriate.

The FreeSWITCH configuration files are complex and there are many of them. I think this is part of what makes the software hard to follow for someone who is not using it on a day to day basis. Compare that with Asterisk which basically
has 2 configuration files you are going to change: sip.conf and extensions.conf. FreeSWITCH has many more.  This is why I think you see a lot of people playing with Asterisk: its configuration is reasonably straightforward to understand and doing the basics is quite simple. From there you can hack away adding things that you want to improve. So a FreeSWITCH cookbook would be great. It could mention many of the things that I’ve mentioned in this post, but others, such as some of the ones I mention below, are issues I’m still working on resolving:

  • how to set up dialing to a trunk with a fallback trunk if the call fails.
  • common voicemail setups
    • set different languages for different extensions
    • set different languages for an outside caller depending on how he dials FreeSWITCH (which incoming DID number is used)
    • setup of a common voicemail mailbox (typical in a SOHO environment) shared by various extensions
  • how to set up FreeSWITCH in a new language (and what’s required to make this work). FreeSWITCH seems to come with English and Russian, and there are some external non-free language sound files, but from what I can see no more free ones. Could we “hack” or use a set of Asterisk sound files (using as many of them as “match”) in the meantime, as Asterisk has sound support in many more languages? A short guide to what is needed, or perhaps some sort of fundraising to build these, would be nice.  I’d be willing to donate something if this were coordinated by the FreeSWITCH developers, and if I could figure out what is needed I may do this myself.  My specific interest is in Spanish, but it would also be nice to have some “British English” sound files. Not strictly necessary of course, but nice.
  • How to change the dial tones to sound like different countries. I live in Spain but have never really figured out how to change the dial tones to match what my family members expect to hear. Though the default setup seems to match pretty closely, it would be nice to get this exactly right.  Again, specific examples for different countries are useful for everyone.
  • Setting up the equivalent of Asterisk’s macros. In several places I’d like to do what I’ve done in Asterisk, which is to set up a macro to parameterise common functionality, and I can’t figure out how to do that in FreeSWITCH (a sketch of the closest approach I have found follows this list). Something as simple as:
    • dial_gateway(gateway_name,gateway_name_file,number_to_call) which makes my configuration
      • say “using gateway <gateway_name>” (taken from some wav files)
      • set up recording of the call
      • dial the number via the specified gateway
    • for calls to an extension(extension_number,language)
      • ring for a while and if picked up, setup recording and answer
      • if not picked up redirect to voicemail (in the appropriate language)
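
For the dial_gateway() case, the closest I have come so far is an ordinary dialplan extension that bundles the common steps together. A sketch, in which provider1, the announcement file and the dial prefix 9 are all placeholders of mine:

<extension name="dial_via_provider1">
  <condition field="destination_number" expression="^9(\d+)$">
    <!-- announce which gateway is being used -->
    <action application="playback" data="using-gateway-provider1.wav"/>
    <!-- record the call -->
    <action application="record_session" data="$${recordings_dir}/${uuid}.wav"/>
    <!-- dial the stripped number via the named gateway -->
    <action application="bridge" data="sofia/gateway/provider1/$1"/>
  </condition>
</extension>

This still has to be duplicated for each gateway, which is exactly the duplication I would like to avoid, but it does show the building blocks.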

These are just some examples of things which it seems need to be grouped together as common functionality and so far I have not figured out how to do this in FreeSWITCH short of duplicating the configuration for each extension or trunk.

Summing up

In the meantime I am going to try and figure out how to tighten down my configuration so that it feels safe. Having seen various
attack attempts and one that partly succeeded I really want to be able to trust the configuration to not be the cause of a large bill
at some time in the future.  I like FreeSWITCH and would not have persevered with it if I did not think that the software is good.
It just seems to me that the documentation, while available, is not there in a form which many of us can digest easily.  The FreeSWITCH 1.0.6 book has been very helpful but I believe it needs to cover more detail than it does now, perhaps in a second edition. I think that more and better documentation will encourage more people to use this software. Sometimes I wonder if I am the only one who finds FreeSWITCH tricky to configure.

My Review of MySQL High Availability

Originally submitted at O’Reilly

Server bottlenecks and failures are a fact of life in any database deployment, but they don’t have to bring everything to a halt. MySQL provides several features that can protect you from outages, whether you’re running directly on the hardware, on virtual machines, or in the cloud. This…

Good coverage of MySQL and replication

By sjmudd from Madrid, Spain on 11/24/2010
4 out of 5

Pros: Easy to understand, Accurate, Well-written, Concise, Helpful examples

Best Uses: Intermediate, Expert

Describe Yourself: Sys Admin, Database Administrator

I work in quite a large site and found the book full of useful tips of things to do, and ways to do them in order to achieve the title’s goal of MySQL High Availability.

To be honest the book should really be called “MySQL and replication in large environments” but that does not sound so exciting.

There’s less information on performance tuning and more on working with a large replicated setup. That’s fine: it fits my needs and will certainly fit others too.

It’s also interesting to note how the authors solve many problems by writing wrapper routines around MySQL and the required replication procedures. That’s done in Python, which is fine, though most of the routines I have written use shell scripts or perl. If you are looking at the technique, the implementation language does not really matter. Oracle should really take note and come up with some standard routines to make more of these tasks doable the same way by everyone. That would make all DBAs’ and sysadmins’ lives easier.

So the book was full of good tips and procedures and is certainly worth the read.
