Lately I have been working on a highly data-driven project: a movie social network, which means there are lots of assets, i.e. photos, audio, and video. That's why we keep continuous backups of our data with a third-party provider or somewhere else that is easier to manage. We are using the Amazon::S3 module for our assets and database backups.
Database Backup
For the database backup to S3 we used a Ruby gem named "mysql_s3_backup", a simple backup script for MySQL and S3 with support for incremental backups. Before starting the backups, create a YAML config file:
mysql:
# Database name to backup
database: xyz_development
# Mysql user and password to execute commands
user: dbuser
password: dbpassword
# Path to mysql binaries, like mysql, mysqldump (optional)
bin_path: /usr/bin/
# Path to the binary logs, should match the bin_log option in your my.cnf
bin_log: /var/lib/mysql/binlog/mysql-bin
s3:
# S3 bucket name to backup to
bucket: bucketname
# S3 credentials
access_key_id: XXXXXXXXXXXXXXX
secret_access_key: XXXXXXXXXXXXXXXXXXXXXX
"Create a full backup:
mysql_s3_backup -c=your_config.yml full
"Create an incremental backup:
mysql_s3_backup -c=your_config.yml inc
"Restore the latest backup (applying incremental backups):
mysql_s3_backup -c=your_config.yml restore
"Restore a specific backup (NOT applying incremental backups):
We planned to keep only the last 5 copies of the database. So I tweaked the code and wrote a wrapper class over the gem that stores only the last 5 backups (in the case of full backups only).
require 'mysql_s3_backup'

class MysqlS3Dumper
  attr_accessor :config

  class MysqlS3Backup::Backup
    def full(name = make_new_name)
      lock do
        # When the full backup runs it deletes any binary log files that might already
        # exist in the bucket. Otherwise the restore will try to restore them even
        # though they're older than the full backup.
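The gist of keeping only the last five backups is to list the full backups in the bucket after each run and delete everything older. A minimal sketch of that idea, written against the aws-sdk-s3 gem rather than mysql_s3_backup's internal S3 wrapper; the bucket name and the backups/full/ prefix are assumptions for illustration, not what the gem actually uses:

# Pruning sketch: keep only the newest 5 full backups under an assumed prefix.
require 'aws-sdk-s3'

KEEP = 5

s3 = Aws::S3::Client.new(
  region:            'us-east-1',
  access_key_id:     ENV['AWS_ACCESS_KEY_ID'],
  secret_access_key: ENV['AWS_SECRET_ACCESS_KEY']
)

# List the full-backup objects, sort newest first, and delete everything past KEEP.
objects = s3.list_objects_v2(bucket: 'bucketname', prefix: 'backups/full/').contents
objects.sort_by(&:last_modified).reverse.drop(KEEP).each do |obj|
  s3.delete_object(bucket: 'bucketname', key: obj.key)
end

Sorting by last_modified is enough here; sorting by key would work just as well if the backup names are timestamped.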
Assets Backup
For the assets backup we use the S3Backup Ruby gem. S3Backup is a tool that backs up local directories to Amazon S3. It uploads local directories to S3 with compression, and directories that haven't been modified since the previous backup are not uploaded again. It can also encrypt the uploaded files if a password and salt are configured. To use it, prepare a YAML backup configuration file such as the one below:
bucket:"bucket name"
directories:
-"absolute path to directory for backup/restore"
-"iterate directory as you like"
access_key_id:'Amazon access_key_id'
secret_access_key:'Amazon secret_access_key'
password:'password for aes. (optional)'
salt:'HexString(16 length) (must when password is specified)'
buffer_size:'number of byte max 50000000000 (optional default 32000000)'
max_retry_count:'number of retry of post if post failed.(optional default 10)'
proxy_host: proxy host address if you use proxy.
proxy_port: proxy port if you use proxy.
proxy_user: login name for proxy server if you use proxy.
proxy_password: login password for proxy server if you use proxy.
log_level:'output log level. value is debug or info or warn or error(optional default info)'
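When a password is set, the salt has to be a 16-character hex string. One quick way to generate one is shown below; this is just a convenience sketch, assuming any 16 hex characters are acceptable:

# SecureRandom.hex(8) returns 8 random bytes as 16 hex characters,
# which matches the "HexString (16 characters)" requirement above.
require 'securerandom'
puts SecureRandom.hex(8)   # e.g. "a3f91c0d5b7e2468"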