I’m a big fan of redundancy and distribution when it comes to network services. I don’t like to keep all my servers in one location, or even with a single provider; I’m currently using three different providers for various services. But when it comes to database communication, this poses a bit of a problem. Naturally, you would implement a firewall to restrict connections to specific IP addresses, but if you’ve got servers all across the United States (or the globe), the communication is completely unencrypted by default.
Fortunately, MySQL has the ability to secure those communications, and even to require that specific user accounts use encryption for all communication. So, I’m going to show you how to set up that encryption, give a brief overview of setting up MySQL replication, and give you several examples of different ways to securely connect to your database server(s). I used several different resources in setting this up for EWWW I.O., but none of them had everything I needed, and some had critical errors in them:
- Setting up MySQL and secure connections
- Getting Started with MySQL over SSL
- How to enable SSL for MySQL server and client
I use Debian 8 (fewer major releases than Ubuntu, and rock-solid stability), so these instructions apply to MySQL 5.5 and PHP 5.6, although most of it will work fine on any system. If you aren’t using PHP, you can just skip that section and apply this to MySQL client connections and replication. I’ll try to point out any areas where you might have difficulty on other versions, and you’ll need to modify any installation steps that use apt-get to use yum instead if you’re on CentOS, RHEL, or SuSE. If you’re running Windows, sorry, but just stop it. I would never trust a Windows server to do any of these things on the public Internet, even with secured connections. You could attempt some of this on a Windows box for testing, but you can set up a Linux virtual machine with VirtualBox for free if you really want to test things out locally.
Setup the Server
First, we need to install the MySQL server on our system (you should always use sudo, instead of logging in as root, as a matter of “best practice”):
sudo apt-get install mysql-server
The installation will ask for a root password, and for you to confirm it. This is the account that has full and complete privileges on your MySQL server, so pick a good one. If this gets compromised, all is lost (or very nearly). Backups are your best friend, but even then it might be difficult to know what damage was done, and when. You’ll also want to run this, to make sure your server isn’t allowing test/guest access:
sudo mysql_secure_installation
You should answer yes to just about everything, although naturally you don’t have to change your root password if you already set a good one. And just to be clear: the root password here is not the same as the root password for the server itself. This root password is only for MySQL. You shouldn’t ever use the root login on the server itself, EVER. It should be disabled so that you can only run privileged operations via sudo, which you can do like this:
sudo passwd -l root
That command just disabled the root user, and should also be a good test to verify you already have an account that can sudo successfully, although I’d recommend testing it with something a little less drastic before you disable the root login.
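For instance, before you lock the root account, you can verify that your own account has working sudo access with something harmless:
sudo whoami
If that prints “root”, you’re in good shape to go ahead and disable the root login.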
Generating Certificates & Keys for the server
Normally, setting up secure connections involves purchasing a certificate from an established Certificate Authority (CA), and then downloading that certificate to your machine. However, the prevailing attitude with MySQL seems to be that you should build your own CA so that no one else has any influence on the private keys used to issue your server certificates. That said, you can still purchase a cert if that feels like the right route to go for you. Every organization has different needs, and I’m a bit new to the MySQL SSL topic, so I won’t pretend to be an expert on what everyone should do.
The Certificate Authority consists of a private key, and a CA certificate. These are then used to generate the server and client certificates. Each time you generate a certificate, you first need a private key. These private keys cannot be allowed to fall into the wrong hands, but you also need to have them available on the server, as they are used in establishing the secure connection. So if anyone else has access to your server, you should make sure the permissions are set so that only the root user (or someone with sudo privileges) can access them.
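For example, once the keys are generated below, one approach is to keep the CA key readable by root only, and let the server key be readable by the mysql group as well, since the mysqld process runs as the mysql user and has to read that key at startup (a sketch; adjust to your own policy):
sudo chown root:root /etc/mysql/cakey.pem
sudo chmod 600 /etc/mysql/cakey.pem
sudo chown root:mysql /etc/mysql/server-key.pem
sudo chmod 640 /etc/mysql/server-key.pem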
The CA and your server’s private key are used to authenticate the certificate that the server uses when it starts up, and the CA certificate is also used to validate any incoming client certificates. By the same token, the client will use that same CA certificate to validate the server’s certificate as well. I store the bits necessary in the /etc/mysql/ directory, so navigate into that directory, and we’ll use that as a sort of “working directory”. Also, the first command here lets you establish a “sudo shell” so that you don’t have to type sudo in front of every command. Let’s generate the CA private key:
sudo -s
cd /etc/mysql/
openssl genrsa 2048 > cakey.pem
Next, generate a certificate based on that private key:
openssl req -sha256 -new -x509 -nodes -days 3650 -key cakey.pem > cacert.pem
Of note are the -sha256 flag (do not use -sha1 anymore, it is weak), and the certificate expiration, set by “-days 3650” (10 years). Answer all the questions as best you can. The common name (CN) here is usually the hostname of the server, and I try to use the same CN throughout the process, although it shouldn’t really matter what you choose as the CN. If you follow my instructions, the CN will not be validated; only the client and server certificates get validated against the CA cert, as I already mentioned. Especially if you have multiple servers, and multiple servers acting as clients, the CN values would be all over the place, so it’s best to keep it simple.
So the CA is now set up, and we need a private key for the server itself. We’ll generate the key and the certificate signing request (CSR) all at once:
openssl req -sha256 -newkey rsa:2048 -days 3650 -nodes -keyout server-key.pem > server-csr.pem
This will ask many of the same questions; answer them however you want, but be sure to leave the passphrase empty. This key will be needed by the MySQL service/daemon on startup, and a password would prevent MySQL from starting automatically. We also need to export the private key into RSA format, or MySQL won’t be able to read it:
openssl rsa -in server-key.pem -out server-key.pem
Lastly, we create the server certificate using the CSR (based on the server’s private key) along with the CA certificate and key:
openssl x509 -sha256 -req -in server-csr.pem -days 3650 -CA cacert.pem -CAkey cakey.pem -set_serial 01 > server-cert.pem
Now we have what we need for the server end of things, so let’s edit our MySQL config in /etc/mysql/my.cnf to contain these lines in the [mysqld] section:
ssl-ca=/etc/mysql/cacert.pem
ssl-cert=/etc/mysql/server-cert.pem
ssl-key=/etc/mysql/server-key.pem
If you are using Debian, those lines are probably already present, but commented out (with a # in front of them). Just remove the # from those three lines. If this is a fresh install, you’ll also want to set the bind-address so that it will allow communication from other servers:
bind-address = 198.51.100.10 # replace this with your actual IP address
or you can let it bind to all interfaces (if you have multiple IP addresses):
bind-address = *
Then restart the MySQL service:
sudo service mysql restart
Permissions
If this is an existing MySQL setup, you’ll want to wait until all the client connections are set up before requiring SSL, but on a new install, you can run this to create a new user with SSL required:
GRANT ALL PRIVILEGES ON `database`.* TO 'database-user'@'%' IDENTIFIED BY 'reallysecurepassword' REQUIRE SSL;
I recommend creating an individual user account for each database you have, so substitute the name of your database in the above command, and replace 'database-user' and 'reallysecurepassword' with suitable values. The command above also allows the user to connect from anywhere in the world; if you only want them to connect from a specific host, replace the '%' with the IP address of the client. I prefer to use my firewall to determine who can connect, as it is a bit easier than running a GRANT statement for every single host that is permitted. One could use a wildcard hostname like *.example.com, but that would entail a DNS lookup for every connection, unless you make sure to list all your addresses in /etc/hosts on the server (yuck). Additionally, using your firewall to limit which hosts can connect helps prevent brute-force attacks. I use ufw for that, which is a nice and simple command-line interface to iptables (there’s a sketch of that just below). You also need to run this after you GRANT privileges:
FLUSH PRIVILEGES;
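As for the ufw side of things, the rules might look roughly like this, assuming MySQL is on the default port 3306 and your other servers are at the hypothetical addresses 203.0.113.20 and 203.0.113.21:
sudo apt-get install ufw
sudo ufw allow ssh # make sure you don't lock yourself out before enabling the firewall
sudo ufw allow from 203.0.113.20 to any port 3306 proto tcp
sudo ufw allow from 203.0.113.21 to any port 3306 proto tcp
sudo ufw enable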
Generating a Certificate and Key for the client
With most forms of encryption, only the server needs a certificate and key, but with MySQL, both server and client can have encryption keys. A quick test from my local machine indicated that it would automatically trust the server cert when using the MySQL client, but we’ll set up the client to use encryption just to be safe. Since we already have a CA set up on the server, we’ll generate the client cert and key on the server. First, the private key and CSR:
openssl req -sha256 -newkey rsa:2048 -days 3650 -nodes -keyout client-key.pem > client-csr.pem
Again, we need to export the key into RSA format, or MySQL won’t be able to read it:
openssl rsa -in client-key.pem -out client-key.pem
And the last step is to create the certificate, again based on the CSR generated from the client key, and sign it with the CA cert and key:
openssl x509 -sha256 -req -in client-csr.pem -days 3650 -CA cacert.pem -CAkey cakey.pem -set_serial 01 > client-cert.pem
We now need to copy three files to the client. The certs are just text files, so you can copy and paste them, or you can use scp to transfer them:
- cacert.pem
- client-key.pem
- client-cert.pem
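The scp route might look something like this, run from the server, assuming the client is at the hypothetical address 203.0.113.20 and that the files end up in /etc/mysql/ on the client as well:
scp /etc/mysql/cacert.pem /etc/mysql/client-cert.pem /etc/mysql/client-key.pem user@203.0.113.20:~/
# then on the client:
sudo mv ~/cacert.pem ~/client-cert.pem ~/client-key.pem /etc/mysql/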
If you don’t need the full mysql-server on the client, or you just want to test it out, you can install the mysql-client like so:
sudo apt-get install mysql-client
Then, open /etc/mysql/my.cnf and put these three lines in the [client] section (usually near the top):
ssl-ca = /etc/mysql/cacert.pem
ssl-cert = /etc/mysql/client-cert.pem
ssl-key = /etc/mysql/client-key.pem
You can then connect to your server like so:
mysql -h 198.51.100.10 -u database-user -p
It will ask for a password, which you set to something really awesome and secure, right? At the MySQL prompt, you can just type the following command shortcut, and look for the SSL line, which should say something like “Cipher in use is …”
\s
You can also specify the --ssl-ca, --ssl-cert, and --ssl-key settings on the command line in the ‘mysql’ command to set the locations dynamically if need be. You may also be able to put them in your .my.cnf file (the leading dot makes it a hidden file, and it should live in ~/ which is your home directory). So for me that might be /home/shanebishop/.my.cnf
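For example, the command-line equivalent of the [client] settings above would be something like this:
mysql --ssl-ca=/etc/mysql/cacert.pem --ssl-cert=/etc/mysql/client-cert.pem --ssl-key=/etc/mysql/client-key.pem -h 198.51.100.10 -u database-user -p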
Using SSL for mysqldump
To my knowledge, mysqldump does not use the [client] settings, so you can specify the cert and key locations on the command line like I mentioned, or you can add them to the [mysqldump] section of /etc/mysql/my.cnf. To make sure SSL is enabled, I run it like so (substituting your database name for 'database', and entering the password when prompted):
mysqldump --ssl -h 198.51.100.10 -u database-user -p database > database-backup.sql
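If you’d rather not type the options every time, the [mysqldump] section mentioned above would look just like the [client] one:
[mysqldump]
ssl-ca = /etc/mysql/cacert.pem
ssl-cert = /etc/mysql/client-cert.pem
ssl-key = /etc/mysql/client-key.pem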
Setup Secure Connection from PHP
That’s all well and good, but most of the time you won’t be manually logging in with the mysql client, although mysqldump is very handy for automated nightly backups. I’m going to show you how to use SSL in a couple other areas, the first of which is PHP. It’s recommended to use the “native driver” packages, but from what I could see, the primary benefit of the native driver is decreased memory consumption. There just isn’t much to see in the way of speed improvement, but perhaps I didn’t look long enough. However, being one to follow what the “experts” say, you can install MySQL support in PHP like so:
sudo apt-get install php5-mysqlnd
If you are using PHP 7 on a newer version of Ubuntu, the “native driver” package is now standard:
sudo apt-get install php7.0-mysql
If you are on a version of PHP less than 5.6, you can use the example code at the Percona Blog. However, in PHP 5.6+, certificate validation is a bit more strict, and early versions just fell over when trying to use the mysqli class with self-signed certificates like we have. Now that the dust has settled with PHP 5.6 though, we can connect like so:
<?php
$server = '198.51.100.10';
$dbuser = 'database-user';
$dbpass = 'reallyawesomepassword';
$database = 'database';
$connection = mysqli_init();
if ( ! mysqli_real_connect( $connection, $server, $dbuser, $dbpass, $database, 3306, '/var/run/mysqld/mysqld.sock', MYSQLI_CLIENT_SSL ) ) {
error_log( 'Connect Error (' . mysqli_connect_errno() . ') ' . mysqli_connect_error() );
die( 'Connect Error (' . mysqli_connect_errno() . ') ' . mysqli_connect_error() );
}
$result = mysqli_query( $connection, "SHOW STATUS like 'Ssl_cipher'" );
print_r( mysqli_fetch_assoc( $result ) );
mysqli_close( $connection );
?>
Saving this as mysqli-ssl-test.php, you can run it like this, and you should get similar output:
user@debian8:~$ php mysqli-ssl-test.php
Array
(
[Variable_name] => Ssl_cipher
[Value] => DHE-RSA-AES256-SHA
)
Setup Secure (SSL) Replication
That’s all fine for a couple of servers, but at EWWW I.O. I quickly realized I could speed things up if each server had a copy of the database. In particular, a significant speed improvement can be had if you set up all SELECT queries to use the local database (replicated from the master). While a query to the master server might take 50ms or more, querying the local database gives you sub-millisecond query times. Beyond that, I also wanted redundant write ability, so I set up two masters that replicate off each other and ensure I never miss an UPDATE/INSERT/DELETE transaction if one of them dies. I’ve been running this setup since the fall of 2013, and it has worked quite well. There are a few things you have to watch out for. The biggest is that if a master server has a hard reboot and MySQL doesn’t get shut down properly, you have to set up replication again on any slaves that communicate with that master, as the binary log gets corrupted. You also have to resync the other master in a similar manner.
The other thing to be careful of is conflicting INSERT statements. If you try to INSERT two records with the same primary key from two different servers, it will cause a collision if that key is set to be UNIQUE. You also have to be careful if you are using numerical values to track various data points: use MySQL’s built-in arithmetic rather than querying a value, adding to it in your code, and then updating the new value in a separate query.
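In other words, let MySQL do the math in a single statement. With a hypothetical credits table, that looks like this, instead of a SELECT, some arithmetic in your code, and a second query to write the result back:
-- atomic: the subtraction happens on the server, so concurrent updates don't clobber each other
UPDATE credits SET balance = balance - 1 WHERE user_id = 42;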
So first I’ll show you how to set up replication (just basic master to slave), and then how to make sure that data is encrypted in transit. We should already have the MySQL server installed from above, so now we need to make some changes to the master configuration in /etc/mysql/my.cnf. All of these changes should be made in the [mysqld] section:
max_connections = 500 # the default is 100, and if you get a lot of servers running in your pool, that may not cut it
server-id = 1 # any numerical value will do, but every server should have a unique ID, I started at 1 for simplicity
log-bin = /var/log/mysql/mysql-bin.log
log-slave-updates = true # this one is only needed if you're running a dual-master setup
I’ve also just discovered that it is recommended to set sync_binlog to 1 when using InnoDB, which I am. I haven’t had a chance to see how that impacts performance, so I’ll update this after I’ve had a chance to play with it. The beauty of that is it *should* avoid the problems with a server crash that I mentioned above. At most, you would lose 1 transaction due to an improper server shutdown. All my servers use SSD, so the performance hit should be minimal, but if you’re using regular spinning platters, then be careful with the sync_binlog setting.
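If you want to try it, it is a single line in the [mysqld] section:
sync_binlog = 1 # flush the binary log to disk after every transaction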
Next, we do some changes on the slave config:
server-id = 2 # make sure the id is unique
report-host = myserver.example.com # this should also be unique, so that your master knows which slave it is talking to
log-bin = /var/log/mysql/mysql-bin.log
Once that is set up, you can run a GRANT statement similar to the one above to add a user for replication, or you can just give an existing user the REPLICATION SLAVE privilege.
IMPORTANT: If you run this on an existing slave-master setup, it will break replication, as the REQUIRE SSL statement seems to apply to all privileges granted to this user, and we haven’t told it what certificate and key to use. So run the CHANGE MASTER TO statement further down, and then come back here to enforce SSL for your replication user.
GRANT REPLICATION SLAVE ON *.* TO 'database-user'@'%' REQUIRE SSL;
Now we’re ready to synchronize the database from the master to the slave the first time. The slave needs 3 things:
- a dump of the existing data
- the binary log filename, since MySQL appends a numerical identifier to the log-bin setting above, and increments it as the binary logs reach their maximum size
- the position within the binary log where the slave should start applying changes
The hard way (which I used 3 years ago) can be found in the MySQL documentation. The easy way is to use mysqldump (found on a different page in the MySQL docs), which you probably would have used anyway to obtain a dump of the existing data:
mysqldump --all-databases --master-data -u root -p > dbdump.db
By using the --master-data flag, it will insert the binary log filename and position (the second and third items above) into the generated SQL file, and you will avoid having to enter the binary log coordinates by hand. At any rate, you then need to log in via your mysql client on the slave server, and run a few commands at the mysql prompt to prep the slave for the import (replacing the values as appropriate):
mysql -uroot -p
mysql> STOP SLAVE;
mysql> CHANGE MASTER TO
-> MASTER_HOST='master_host_name',
-> MASTER_USER='replication_user_name',
-> MASTER_PASSWORD='replication_password';
exit
Then you can import the dbdump.db file (copy it from the master using scp or sftp):
mysql -uroot -p < dbdump.db
Once that is imported, we want to make sure our replication is using SSL. You can also run this on an existing server to upgrade the connection to SSL, but be sure to STOP SLAVE first:
mysql> CHANGE MASTER TO MASTER_SSL=1,
-> MASTER_SSL_CA='/etc/mysql/cacert.pem',
-> MASTER_SSL_CERT='/etc/mysql/client-cert.pem',
-> MASTER_SSL_KEY='/etc/mysql/client-key.pem';
After that, you can start the slave:
START SLAVE;
Give it a few seconds, but you should be able to run this to check the status pretty quickly:
SHOW SLAVE STATUS\G
A successfully running slave should say something like “Waiting for master to send event”, which simply indicates that it has applied all transactions from the master, and is not lagging behind.
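The other fields I keep an eye on in that output are these, with the values you would hope to see on a healthy slave:
Slave_IO_State: Waiting for master to send event
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Seconds_Behind_Master: 0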
If you have additional slaves to set up, you can use the exact same dbdump.db and all the SQL statements that followed the mysqldump command, but if you add them a month or so down the road, there are two ways of doing it:
- Grab a fresh database dump using the mysqldump command, and repeat all of the steps that followed the mysqldump command above.
- Stop the MySQL service on an existing slave and on the new slave. Then copy the /var/lib/mysql/ folder to the new slave, and make sure it is owned by the mysql user/group (chown -R mysql:mysql /var/lib/mysql/). Lastly, start both slaves up again, and they’ll catch up to the master pretty quickly. A sketch of this option follows the list.
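That second option might look roughly like this; the transfer method is up to you (rsync, scp, or a tarball over sftp), and here I sketch it with a tarball:
# on the existing slave and on the new slave:
sudo service mysql stop
# on the existing slave, package up the data directory:
sudo tar czf /tmp/mysql-data.tar.gz -C /var/lib mysql
# transfer /tmp/mysql-data.tar.gz to the new slave however you prefer, then on the new slave:
sudo tar xzf /tmp/mysql-data.tar.gz -C /var/lib
sudo chown -R mysql:mysql /var/lib/mysql/
# finally, start MySQL up again on both slaves:
sudo service mysql start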
Conclusion
In a distributed environment, securing your MySQL communications is an important step in preventing unauthorized access to your services. While it can be a bit daunting to put all the pieces together, it is well worth the effort to make sure no one can intercept traffic from your MySQL server(s). The EWWW I.O. API supports encryption at every layer of the stack, to make sure our customer information stays private and secure. Doing the same in your environment builds trust with your user base, and customer trust is a precious commodity.