Yahoo's MySQL Partition Manager is Open Source

The guys at Yahoo have released their partition management script on GitHub:
At Yahoo, we manage a massive number of MySQL databases spread across multiple data centers.
We have thousands of databases and each database has many partitioned tables. In order to efficiently create and maintain partitions, we developed a partition manager which automatically manages these for you with minimal pre-configuration.
You can check out the original announcement at the MySQL@Yahoo blog and the code at its GitHub repo.


MySQL Bug #79497: Full text indexes and aggregate functions

Playing around with the Employees sample database and full text search on MySQL, I found a weird bug. After creating a full text index targeting a column on a table, a "select distinct" query to retrieve the range of values returns an empty set.
So if you initially perform the following query, the outcome has 7 rows:
mysql> select distinct title from titles;
+--------------------+
| title              |
+--------------------+
| Senior Engineer    |
| Staff              |
| Engineer           |
| Senior Staff       |
| Assistant Engineer |
| Technique Leader   |
| Manager            |
+--------------------+
7 rows in set (0.38 sec)
If you create a fulltext index on the titles table, over the title column:
mysql> alter table titles add fulltext index `title` (`title`);
Query OK, 0 rows affected (14.65 sec)
Records: 0  Duplicates: 0  Warnings: 0
Issue the same query again:
mysql> select distinct title from titles;
Empty set (0.00 sec)
So, apparently our records are gone? Repeat the query, but with COUNT:
mysql> select count(distinct title) from titles;
+-----------------------+
| count(distinct title) |
+-----------------------+
|                     7 |
+-----------------------+
1 row in set (0.24 sec)
The records are not gone, they just won't appear when using DISTINCT. If you drop the FULLTEXT index, the query returns the correct result set.
If dropping the index is not an option, you can work around this bug by repeating the query with UCASE:
mysql> select distinct(ucase(title)) from titles;
+--------------------+
| (ucase(title))     |
+--------------------+
| SENIOR ENGINEER    |
| STAFF              |
| ENGINEER           |
| SENIOR STAFF       |
| ASSISTANT ENGINEER |
| TECHNIQUE LEADER   |
| MANAGER            |
+--------------------+
7 rows in set (0.44 sec)
This bug is already reported at MySQL Bugs. It's #79497. It appears to be an upstream issue, reproducible on current 5.6 and 5.7.


Get a summary footprint on a MySQL server instance

Landing on an enterprise with ongoing projects means that servers are often handed to IT staff without complete knowledge of what's inside.
I've built a simple script, scraping from here and there, to gather a summary of relevant information.
Once you've gained remote access to the MySQL instance, you can execute the queries to identify the following information regarding the target database server:
  • The host name, what operating system it runs on, the MySQL version installed, default collation of the instance, the installation directory and the data directory;
  • How many user databases it hosts, what they're called and the collation used;
  • The size of each database as a whole and broken down by storage engine;
  • All the tables and their space used;
  • The creation and last modification dates of each database.
The script is commented and follows the same order as the above topics:
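The script itself isn't embedded in this capture; as a sketch, the kind of queries it runs against the server variables and information_schema (following the same order as the topics above) would look something like this:

```sql
-- Host name, OS, MySQL version, default collation and directories
SELECT @@hostname           AS host_name,
       @@version_compile_os AS operating_system,
       @@version            AS mysql_version,
       @@collation_server   AS default_collation,
       @@basedir            AS install_directory,
       @@datadir            AS data_directory;

-- User databases and their collations (system schemas excluded)
SELECT schema_name, default_collation_name
FROM information_schema.schemata
WHERE schema_name NOT IN ('mysql', 'information_schema', 'performance_schema');

-- Size per database in MB, broken down by storage engine
SELECT table_schema, engine,
       ROUND(SUM(data_length + index_length) / 1024 / 1024, 2) AS size_mb
FROM information_schema.tables
GROUP BY table_schema, engine;

-- Per-table space used, split into records and indexes
SELECT table_schema, table_name,
       ROUND(data_length  / 1024 / 1024, 2) AS data_mb,
       ROUND(index_length / 1024 / 1024, 2) AS index_mb
FROM information_schema.tables
ORDER BY (data_length + index_length) DESC;

-- Most recent creation/modification dates per database
SELECT table_schema,
       MAX(create_time) AS last_created,
       MAX(update_time) AS last_updated
FROM information_schema.tables
GROUP BY table_schema;
```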

Running this on my test server, I find the following result for the host, operating system and directory locations:

The following result tells me how many databases there are on the instance:

If relevant, I can choose to see the database names and respective collations:

Then, the script gets the size on each database, in MB:

It breaks that size on the different storage engines:

Still on the size reporting, we get each table's size on records as well as indexes:

Finally, it checks for the creation and update dates of the existing databases to help determine if it's important to have a recent backup of them or not:

This script only gives a summary of the overall instance but has enough details to determine the versions of the software, volume of information and usage of the databases.

Photo credit: tableatny@Flickr


Configuring and testing MySQL binary log

The binary log contains “events” that describe database changes. On a basic installation with default options, it's not turned on. This log is essential for accommodating the possible following requirements:
  • Replication: the binary log on a master replication server provides a record of the data changes to be sent to slave servers.
  • Point-in-time recovery: allows recovering a database from a full backup and then replaying the subsequent events saved in the binary log, up to a given instant.
To turn on the binary log on a MySQL instance, edit the 'my.cnf' configuration file and add the following lines:

#Enabling the binary log

Basically we're doing the following configuration of the binary log:
  • The binary log is turned on and every file name will be 'binlog' with a sequential number as the extension;
  • The maximum file size for each log will be 500 megabytes;
  • The binary log files expire and can be purged after 7 days;
  • Our server has 1 as the identification number (this serves replication purposes but is always required).
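The actual configuration lines aren't shown in this capture; under MySQL 5.6/5.7 option names, a my.cnf fragment matching the four settings above would look roughly like this:

```ini
[mysqld]
# Enabling the binary log
log_bin          = binlog   # files named binlog.NNNNNN
max_binlog_size  = 500M     # rotate to a new file at 500 megabytes
expire_logs_days = 7        # logs can be purged after 7 days
server-id        = 1        # server identification number
```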
Afterwards, restart the service. On my Ubuntu labs server, the command is:
shell> sudo service mysql restart
Now, opening a mysql command line, we can check what binary log files exist:
mysql> show binary logs;
+---------------+-----------+
| Log_name      | File_size |
+---------------+-----------+
| binlog.000001 |       177 |
| binlog.000002 |       315 |
+---------------+-----------+
2 rows in set (0.00 sec)
Let's test if the binary log is working properly using the sample Sakila database. The sample comes with two files:
  • sakila-schema.sql: creates the schema and structure for the sakila database;
  • sakila-data.sql: loads the data into the tables on the sakila database.
Let's first run the database creation script:
shell> mysql -u root -p < sakila-schema.sql
If we manually flush the log file, the database instance will close the current file and open a new one. This is relevant because we want all the DML operations on the Sakila database to be recorded in a new file:
mysql> flush binary logs;
Query OK, 0 rows affected (0.01 sec)

mysql> show binary logs;
+---------------+-----------+
| Log_name      | File_size |
+---------------+-----------+
| binlog.000001 |       177 |
| binlog.000002 |       359 |
| binlog.000003 |       154 |
+---------------+-----------+
3 rows in set (0.00 sec)
Now, we can run the data script for the sakila database to insert the records:
shell> mysql -u root -p < sakila-data.sql
When it's done, we can check the binary logs to observe the file size increment:
mysql> show binary logs;
+---------------+-----------+
| Log_name      | File_size |
+---------------+-----------+
| binlog.000001 |       177 |
| binlog.000002 |       359 |
| binlog.000003 |   1359101 |
+---------------+-----------+
3 rows in set (0.01 sec)
Next, we can validate the content using the mysqlbinlog utility. By default, mysqlbinlog displays row events encoded as base-64 strings using BINLOG statements. To display actual statements in pseudo-SQL, the --verbose option should be used:
shell> sudo mysqlbinlog --verbose /var/lib/mysql/binlog.000003
Using the verbose option, the following commented output is produced from our log, one block per statement:
### INSERT INTO `sakila`.`country`
### SET
###   @1=1
###   @2='Afghanistan'
###   @3=1139978640
So, all the statements we executed since 'binlog.000003' was opened are registered there and can be replayed to produce the same set of changes on a given target.
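The replay itself can be done by piping mysqlbinlog's output back into a mysql client; for example (paths as above, and the datetime bound is purely illustrative):

```shell
# Replay everything in the file against the local server
shell> sudo mysqlbinlog /var/lib/mysql/binlog.000003 | mysql -u root -p

# Or replay only up to a given instant (point-in-time recovery)
shell> sudo mysqlbinlog --stop-datetime="2015-10-27 12:00:00" /var/lib/mysql/binlog.000003 | mysql -u root -p
```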


This is why you're not better off with a commercial database

When tackling a new enterprise project to support a given business, you face the challenge of choosing and committing to a database platform. The choice should be the one most adequate, given the needs and requirements of the new information system and data to be hosted and managed.
Typically, a number of factors should be taken into consideration like security features, storage requirements, reliability, high availability, backups, disaster recovery, data compression, technical support and last but definitely not least, the cost of the solution. Added to that there is also performance, scalability and ease of administration to think about.
With the result of this analysis, most of the time, the verdict is this: data platforms available as community editions or free open source fall short on the given requirements fulfillment. So, the advice is almost always to acquire commercial licenses or expand the licensing already owned.
And this should give you peace of mind for a while. At least until the first release of the system goes live. After that, some of the common pitfalls are:

  • Security permissions were not exhaustively identified for all database objects. To solve things quickly, you turn your database authorization management into Swiss cheese;
  • The new system has issues; until the bugs are fixed, manual correction scripts have to be executed during working hours, maiming overall business activity;
  • As the data volume grows, there is performance degradation due to inefficient indexing, bad user experience design or poor database coding skills;
  • Technical support provided by the database vendor performs an audit on the workload, does some tuning on the server instance, and shifts responsibility on the remaining lack of performance over to the development team;
  • The development team struggles to adopt the database vendor's recommendations, as they have a great impact on the source code;
  • Management wants high availability, but it won't commit the infrastructure resources and budget to set it up properly;
  • You do not have a remote site so that a disaster recovery plan can be made; you don't have a lab where you regularly restore backups and perform automated integrity checks;
  • You are understaffed, with no one possessing deep skills on the specific data platform you own.

Even if just a third of these pitfalls sound familiar, what are you doing with your next project? Still thinking of recommending commercial software because people are the ones to blame here?
In a global organization, after you deploy the first release and spread it across the offices, the licensing and support costs will skyrocket. That money could be spent preventing some of the pitfalls mentioned here. If you cut back on licensing and support, you can spend on infrastructure and staff.
There are wonderful commercial databases out there, but during the business requirements phase, the choice should be evaluated as a whole and not based on vendor promises, because the final solution will be a result of development and available budget, not a sales brochure.
Vision and engineering are the keys to success. And I'm afraid that doesn't come out of the box.


Oracle Announces General Availability of MySQL 5.7

Oracle today announced the general availability of MySQL 5.7, the latest version of the world’s most popular open source database. The new version delivers greater performance, scalability and manageability, plus enhanced NoSQL capabilities with JSON support and MySQL Router, which makes it easy to connect applications to multiple MySQL databases.
Read the press release at:


Ubuntu: Steps to install/update MySQL to the latest DMR

For learning, testing and keeping up with things, one might want to install or update MySQL to the latest DMR (Development Milestone Release). At the time of this writing, the latest DMR for MySQL is MySQL Community Server 5.7.8-rc.
Having chosen Ubuntu Server as my operating system, and since this is a development version and therefore not yet part of an official Ubuntu release, you have to install it directly from Oracle. You can achieve this by using the MySQL APT Repository.
Before carrying on, keep in mind that:
  • This shouldn't be done on a production environment;
  • If you're aiming for an upgrade instead of a fresh install, always backup your databases first.
Having said that, you can run the following command to download the debian package for MySQL APT repository:

shell> wget
To install the package, use dpkg and when it's done update the repository package lists:
shell> sudo dpkg -i mysql-apt-config_0.3.7-1ubuntu14.04_all.deb
shell> sudo apt-get update
During the installation of the package, you will be asked to choose the versions of the MySQL server and other components.
Finally, install MySQL, which will fetch the version selected on the APT repository configuration:
shell> sudo apt-get install mysql-server
In the end, log into your newly installed server. You should be greeted with the latest DMR version:
stuntman@MuSQL-Lab:~$ mysql -u root -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 4
Server version: 5.7.8-rc MySQL Community Server (GPL)
Remember to never do this on a real environment as it's a development version. Also, if you have an official release version installed previously, I'd recommend removing it completely before installing a development release.


MySQL - basic network security

Having secured our MySQL server and created a personal account to allow for remote administration, we can take one step further in blocking unwanted access to our database server. For this example, we'll continue to use the Ubuntu 14.04.2 LTS Server installed and configured in the previous posts.
After booting up the server, a network check for connections lists our server listening on ports 22 (ssh) and 3306 (mysql):
We also used the Uncomplicated Firewall (ufw), a frontend for iptables, to check whether the server's firewall is enabled. It's inactive, and consequently all listening ports are available to the network.
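For reference, the two checks just mentioned amount to the following commands (output not reproduced here; at this stage ufw reports 'Status: inactive'):

```shell
# List active/listening TCP sockets; expect *:ssh and *:mysql in LISTEN state
netstat -at

# Check whether the Uncomplicated Firewall is enabled
sudo ufw status
```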
Let's adopt the following policy:
  • SSH port is open to the network and available for sysadmin staff. Access control will be implemented on dedicated network firewalls;
  • MySQL port is closed and has to be authorized on demand, on a per-client-machine address basis.
This will allow the database administration team to know at all times who has access and from which hosts on the network. Furthermore, it prevents remote detection of the database server with tools like NMap.
First, let's add the inbound rule for SSH connections so we don't get locked out of the server. Then let's enable the firewall and check its status:
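The commands themselves aren't reproduced in this capture; with ufw they would be along these lines:

```shell
# Inbound rule for SSH, so the firewall won't lock us out
sudo ufw allow 22/tcp

# Turn the firewall on and confirm it's active
sudo ufw enable
sudo ufw status
```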
The test will be to connect MySQL Workbench from a remote machine and watch it fail:

Afterwards, add a specific rule for the remote machine using the ufw utility and try again:

sudo ufw allow from 192.168.204.XXX to any port 3306

You should succeed and while you have the database session open, run netstat again on the server and validate your remote connection:

stuntman@MuSQL-Lab:~$ netstat -at
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 *:ssh                   *:*                     LISTEN
tcp        0      0 *:mysql                 *:*                     LISTEN
tcp        0      0 MuSQL-Lab:mysql         192.168.204.X:54617     ESTABLISHED
tcp        0      0 MuSQL-Lab:mysql         192.168.204.X:54618     ESTABLISHED
tcp        0     64 MuSQL-Lab:ssh           192.168.204.X:54483     ESTABLISHED
tcp6       0      0 [::]:ssh                [::]:*                  LISTEN

As a final check, list the firewall rules to see which hosts are authorized for remote access:
stuntman@MuSQL-Lab:~$ sudo ufw status
Status: active

To                         Action      From
--                         ------      ----
22/tcp                     ALLOW       Anywhere
3306                       ALLOW       192.168.204.X
22/tcp (v6)                ALLOW       Anywhere (v6)
On this blog, we've covered instance lockdown, user accounts lockdown and network lockdown. All these security measures add value when facing production environments with sensitive data.


MySQL sample databases: for testing and training

Sometimes it's useful to have a set of data prepared to be used on a fresh MySQL install for testing purposes. Or you might be preparing some training or workshop and want to prepare examples with fake information. For MySQL there are some choices available:

  • Employees Sample Database: provides a combination of a large base of data (approximately 160MB) spread over six separate tables and consisting of 4 million records in total.
  • Sakila Sample Database: is intended to provide a standard schema that can be used for examples in books, tutorials, articles, samples, and so forth. Sakila sample database also serves to highlight the latest features of MySQL such as Views, Stored Procedures, and Triggers.
  • BIRT Sample Database: The schema is for Classic Models, a retailer of scale models of classic cars. The database contains typical business data such as customers, orders, order line items, products and so on.
In general, you can download a single ZIP file with the schema creation and the data loading in separate scripts.


Print tab separated values as table using MySQL

Using the mysql command line utility to get recordsets, the data row alignment and line breaks are often a mess. You can use the command line tool on a MySQL database server to dump a set of rows into a tab-separated output file like so:
shell>  mysql -u your_user -p < your_statement.sql > data.csv
I came up with the following python script to grab the output file and pretty print:
You can put your own filename instead of 'data.csv'. And of course, this script also works for other TSV files that don't come from mysql. To run the script, you should install the python tabulate package:


mysqlpump — A Database Backup Program

The MySQL 5.7 Release Notes for version 5.7.8 are out. Besides the new JSON data type, there is also a new tool called mysqlpump, which offers the following features:

  • Parallel processing of databases, and of objects within databases, to speed up the dump process
  • Better control over which databases and database objects (tables, views, stored programs, user accounts) to dump
  • Dumping of user accounts as account-management statements (CREATE USER, GRANT) rather than as inserts into the mysql system database
  • Capability of creating compressed output
  • Progress indicator
  • For dump file reloading, faster secondary index creation for InnoDB tables by adding indexes after rows are inserted
This is great stuff; however, it's still a release candidate, so let me point out a cautionary warning left by Morgan Tocker on his blog:
... mysqlpump is not currently consistent. That is to say that currently each of the dump threads lack a synchronization point before they start backing up the data. This makes it currently unsafe as a general purpose backup replacement.
Happy testing!



Creating user accounts on a secured MySQL server

After installing a MySQL database server and securing that installation with the mysql_secure_installation tool, you are locked out from remote access to perform any operation on the server. Since we all like the 'R' in RDBMS to stand for remote as well as relational, let's see how we can configure user credentials to provide remote access to the database server but still keep those credentials from providing local access.
Suppose you have a key user named James Bond to whom you want to grant remote administration of the MySQL server. However, you don't want him to be able to log on locally on the machine to perform any operations. This way you achieve three levels of security:
  • The credential only allows remote access;
  • You control from which remote computers the user can connect, by configuring rules on a network or server firewall;
  • If it's a shared server, console access on the machine doesn't give access to the database engine.
First, let's open a terminal session on the server and log in with the mysql command line tool, using our root account:
shell> mysql -u root -p
Now, for the sake of this example, let's create a user account for James Bond and give him full privileges:
mysql> CREATE USER 'jamesbond'@'%' IDENTIFIED BY 'double0seven';
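Note that CREATE USER by itself assigns no privileges; the "full privileges" part would come from a separate GRANT statement, for example:

```sql
GRANT ALL PRIVILEGES ON *.* TO 'jamesbond'@'%' WITH GRANT OPTION;
FLUSH PRIVILEGES;
```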
Having set this up, if James Bond tries to log in from any remote computer, he can, because we used the wildcard '%' to specify the user's machine. But he can also log in from the local server. To prevent this from happening, we create a second account with the same username and password, but do not grant it any permissions:
mysql> CREATE USER 'jamesbond'@'localhost' IDENTIFIED BY 'double0seven';
If the user then tries to log in from the local machine, he can, but he isn't able to perform any operations or queries.


What have we learnt in two decades of MySQL?

Article on Information Age:
From obscurity to the mainstream, the journey of MySQL shows the power of the open source community to drive innovation.
Read the full article here:


Securing your MySQL server

After installing a MySQL database server, like the one I posted about earlier, if it's going to be a production environment then you should consider securing the instance by eliminating some of the basic vulnerabilities that come with a generic install.
Fortunately, MySQL and MariaDB already come with a tool for that purpose, called mysql_secure_installation. This program enables you to perform the following improvements to the security of your installation:

  • set a password for root accounts.
  • remove root accounts that are accessible from outside the local host.
  • remove anonymous-user accounts.
  • remove the test database (if it exists), which by default can be accessed by anonymous users.

Be advised that as of MySQL 5.7.2, this tool is an executable binary available on all platforms. Before version 5.7.2, it was a script available only for Unix and Unix-like systems.
Invoking the tool without any arguments:
shell> mysql_secure_installation
The script will prompt you to determine which actions to perform:

In order to log into MySQL to secure it, we'll need the current
password for the root user.  If you've just installed MySQL, and
you haven't set the root password yet, the password will be blank,
so you should just press enter here.

Enter current password for root (enter for none):
OK, successfully used password, moving on...

Setting the root password ensures that nobody can log into the MySQL
root user without the proper authorisation.

You already have a root password set, so you can safely answer 'n'.

Change the root password? [Y/n] n
 ... skipping.

By default, a MySQL installation has an anonymous user, allowing anyone
to log into MySQL without having to have a user account created for
them.  This is intended only for testing, and to make the installation
go a bit smoother.  You should remove them before moving into a
production environment.

Remove anonymous users? [Y/n] y
 ... Success!

Normally, root should only be allowed to connect from 'localhost'.  This
ensures that someone cannot guess at the root password from the network.

Disallow root login remotely? [Y/n] y
 ... Success!

By default, MySQL comes with a database named 'test' that anyone can
access.  This is also intended only for testing, and should be removed
before moving into a production environment.

Remove test database and access to it? [Y/n] y
 - Dropping test database...
ERROR 1008 (HY000) at line 1: Can't drop database 'test'; database doesn't exist
 ... Failed!  Not critical, keep moving...
 - Removing privileges on test database...
 ... Success!

Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.

Reload privilege tables now? [Y/n] y
 ... Success!

Cleaning up...

All done!  If you've completed all of the above steps, your MySQL
installation should now be secure.

Thanks for using MySQL!
You can find everything there is to know in terms of options in the official documentation:


Basic MySQL installation on Ubuntu Server



The scenario for the installation is an Ubuntu Server with version 14.04 of the operating system. You can download the ISO image for the OS installation at Canonical's website.
You should perform this install only with the basic platform, maybe choosing only the SSH server in case you're planning remote shell access to this server.

Installation Process

After installing the OS, log into the server and follow the steps to perform a basic MySQL server installation on Ubuntu Server, as described in the official documentation.
To install MySQL, run the following command:
sudo apt-get install mysql-server
You'll be prompted to supply a password for user 'root'. When done, you can check if the service is listening:
sudo netstat -tap | grep mysql
If the service failed to start automatically, you can restart it by issuing the command:
sudo service mysql restart

Post Install Validation

To check if the database engine is running and access the database features, open up a command line on the server and execute the mysql command line utility using the root credentials you supplied during setup:
shell> mysql --user=user_name --password=your_password db_name
With the status command, you can confirm that the server is running, for how long, and which version it is:

For the list of all MySQL commands type 'help'. To exit, type 'quit'.

Installation path analysis

After installing the database server, its files were placed in the following paths on the local file system (keep in mind this is Ubuntu/Debian specific):

  • /usr/bin/ - binary files for the server and command line utilities
  • /usr/lib/ - libraries
  • /usr/share/man/ - manual pages
  • /usr/share/mysql/ - helper scripts for installation and language support
  • /var/lib/mysql/ - data files
  • /usr/lib/mysql/plugin/ - plugins installed
  • /etc/mysql/ - configuration files
The my.cnf file, on the /etc/mysql/ directory, is the essential place to control the database engine configuration options.
