Configuring and testing MySQL binary log

The binary log contains “events” that describe database changes. On a basic installation with default options, it is not turned on. This log is essential for the following requirements:
  • Replication: the binary log on a master replication server provides a record of the data changes to be sent to slave servers.
  • Point-in-time recovery: allows recovering a database from a full backup and then replaying the subsequent events saved in the binary log, up to a given instant.
To turn on the binary log on a MySQL instance, edit the 'my.cnf' configuration file and add the following lines:

#Enabling the binary log
log-bin=binlog
max_binlog_size=500M
expire_logs_days=7
server_id=1

These lines configure the binary log as follows:
  • Binary logging is turned on; each file is named 'binlog' with a sequential number as the extension;
  • The maximum file size for each log will be 500 megabytes;
  • The binary log files expire and can be purged after 7 days;
  • Our server has 1 as its identification number (this identifier serves replication purposes, but it is required whenever binary logging is enabled).
Afterwards, restart the service. On my Ubuntu labs server, the command is:
shell> sudo service mysql restart
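Before listing the log files, we can confirm that binary logging is actually enabled. The output below is illustrative of what a healthy setup returns:

mysql> show variables like 'log_bin';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| log_bin       | ON    |
+---------------+-------+
1 row in set (0.00 sec)

If the value is OFF, double-check the option names against your MySQL version before going further.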
Now, opening a mysql command line, we can check what binary log files exist:
mysql> show binary logs;
+---------------+-----------+
| Log_name      | File_size |
+---------------+-----------+
| binlog.000001 |       177 |
| binlog.000002 |       315 |
+---------------+-----------+
2 rows in set (0.00 sec)
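The 7-day expiration configured above removes old files automatically, but logs can also be purged by hand. For instance, to discard everything before the second file (the file name here is just an example taken from this setup):

mysql> purge binary logs to 'binlog.000002';
Query OK, 0 rows affected (0.01 sec)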
Let's test if the binary log is working properly using the sample Sakila database. The sample comes with two files:
  • sakila-schema.sql: creates the schema and structure for the sakila database;
  • sakila-data.sql: loads the data into the tables on the sakila database.
Let's first run the database creation script:
shell> mysql -u root -p < sakila-schema.sql
If we manually flush the log file, the database instance will close the current file and open a new one. This is relevant because we want all the DML operations on the Sakila database to be recorded in a new file:
mysql> flush binary logs;
Query OK, 0 rows affected (0.01 sec)

mysql> show binary logs;
+---------------+-----------+
| Log_name      | File_size |
+---------------+-----------+
| binlog.000001 |       177 |
| binlog.000002 |       359 |
| binlog.000003 |       154 |
+---------------+-----------+
3 rows in set (0.00 sec)
Now, we can run the data script for the sakila database to insert the records:
shell> mysql -u root -p < sakila-data.sql
When it's done, we can check the binary logs to observe the file size increment:
mysql> show binary logs;
+---------------+-----------+
| Log_name      | File_size |
+---------------+-----------+
| binlog.000001 |       177 |
| binlog.000002 |       359 |
| binlog.000003 |   1359101 |
+---------------+-----------+
3 rows in set (0.01 sec)
Next, we can validate the content using the mysqlbinlog utility. By default, mysqlbinlog displays row events encoded as base-64 strings using BINLOG statements. To reconstruct row events as commented pseudo-SQL statements, the --verbose option should be used:
shell> sudo mysqlbinlog --verbose /var/lib/mysql/binlog.000003
Using the verbose option, the following commented output is produced from our log, one block per row change:
### INSERT INTO `sakila`.`country`
### SET
###   @1=1
###   @2='Afghanistan'
###   @3=1139978640
So, all the changes made since 'binlog.000003' was opened are recorded there and can be replayed to produce the same set of changes on a given target.
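This is exactly what enables the point-in-time recovery scenario mentioned at the start: after restoring the full backup (here, re-running 'sakila-schema.sql' on the target), the saved events can be piped back into the server, optionally stopping at a chosen instant. The date below is only a placeholder:

shell> sudo mysqlbinlog --stop-datetime="2019-06-01 12:00:00" /var/lib/mysql/binlog.000003 | mysql -u root -p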


