Safest way to perform mysqldump on a live system with active reads and writes?

  • I'm not sure if this is true, but I remember reading that if you run the following command in Linux

    mysqldump -u username -p database_name > backup_db.sql
    

    while reads and writes are being made to the database, the dump may contain errors.

    Are there particular mysqldump options to make sure this is done safely on a live system? I'm okay with reads/writes being disabled for our users for a few seconds (the database is <50 MB).

  • All Data is InnoDB

    This is what will give you an exact point-in-time snapshot of the data:

    mysqldump -uuser -ppass --single-transaction --routines --triggers --all-databases > backup_db.sql
    

    --single-transaction starts a consistent-read transaction that acts as a checkpoint: the dump captures all data as of that point while the server keeps receiving incoming changes. Those incoming changes do not become part of the dump, which ensures the same point in time for all tables.

    --routines dumps all stored procedures and stored functions

    --triggers dumps all triggers for each table that has them
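
    For example, a small wrapper around that command (just a sketch; the backup directory, credentials, and file naming are placeholders to adapt) can timestamp and compress each dump:

    #!/bin/bash
    # Hypothetical wrapper around the InnoDB-only dump shown above; adjust credentials and paths.
    BACKUP_DIR=/var/backups/mysql
    STAMP=$(date +%Y%m%d_%H%M%S)
    mkdir -p "$BACKUP_DIR"
    
    mysqldump -uuser -ppass --single-transaction --routines --triggers \
        --all-databases | gzip > "$BACKUP_DIR/backup_db_${STAMP}.sql.gz"
    
    # Check mysqldump's own exit status (not gzip's) before trusting the file
    if [ "${PIPESTATUS[0]}" -ne 0 ]; then
        echo "mysqldump failed; do not trust $BACKUP_DIR/backup_db_${STAMP}.sql.gz" >&2
        exit 1
    fi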

    All Data is MyISAM or Mix of InnoDB/MyISAM

    You will have to impose a global read lock, perform the mysqldump, and then release the global lock:

    # Hold a global read lock in a background session; SLEEP(86400) keeps that session (and the lock) alive
    mysql -uuser -ppass -Ae"FLUSH TABLES WITH READ LOCK; SELECT SLEEP(86400)" &
    sleep 5
    # Find the connection ID of the sleeping lock-holder and write a KILL statement for later
    mysql -uuser -ppass -ANe"SHOW PROCESSLIST" | grep "SELECT SLEEP(86400)" > /tmp/proclist.txt
    SLEEP_ID=`cat /tmp/proclist.txt | awk '{print $1}'`
    echo "KILL ${SLEEP_ID};" > /tmp/kill_sleep.sql
    # Dump while the global read lock is held
    mysqldump -uuser -ppass --single-transaction --routines --triggers --all-databases > backup_db.sql
    # Kill the sleeping session to release the global read lock
    mysql -uuser -ppass -A < /tmp/kill_sleep.sql
    

    Give it a Try !!!

    UPDATE 2012-06-22 08:12 EDT

    Since you have <50 MB of total data, I have another option. Instead of launching a SLEEP command into the background to hold the global read lock for 86400 seconds (that's 24 hours) just to get the process ID and kill it from the outside, let's try setting a 5-second timeout in mysql rather than in the OS:

    SLEEP_TIMEOUT=5
    SQLSTMT="FLUSH TABLES WITH READ LOCK; SELECT SLEEP(${SLEEP_TIMEOUT})"
    # The lock is released automatically after SLEEP_TIMEOUT seconds, so no KILL is needed afterwards
    mysql -uuser -ppass -Ae"${SQLSTMT}" &
    mysqldump -uuser -ppass --single-transaction --routines --triggers --all-databases > backup_db.sql
    

    This is a cleaner and simpler approach for very small databases.
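
    If you want a little more safety margin with that approach, here is a variation (my own sketch, not part of the original suggestion) that gives the locking session a head start and checks the dump's exit status before trusting the file:

    SLEEP_TIMEOUT=5
    SQLSTMT="FLUSH TABLES WITH READ LOCK; SELECT SLEEP(${SLEEP_TIMEOUT})"
    mysql -uuser -ppass -Ae"${SQLSTMT}" &
    LOCK_PID=$!
    sleep 1          # give the background session a moment to acquire the global read lock
    mysqldump -uuser -ppass --single-transaction --routines --triggers --all-databases > backup_db.sql
    DUMP_STATUS=$?
    wait $LOCK_PID   # let SLEEP() expire so the lock is released before anything else runs
    [ $DUMP_STATUS -eq 0 ] && echo "Dump OK" || echo "Dump FAILED" >&2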

    Thanks for the thorough answer, Rolando. I'll try this out when I get back home.

    Rolando - would it ever take longer than `sleep 5` to perform a read lock?

    5 secs is just precautionary. You can try lower values.

    Rolando - is `ERROR 2013 (HY000) at line 1: Lost connection to MySQL server during query` an expected error message?

    Did all the MySQL data come out in the mysqldump?

    I'm not sure about the error message. This is only a guess, but that might have come from the one-line script that kills the user-defined SLEEP function call I mentioned in the second script.

    Yes, all the data came out in the mysqldump and I was able to successfully load it into a fresh database. Should I be concerned about the error message? It comes up after I execute the final command.

    Try my new suggestion and see if it goes well. Hopefully, there will be no error message.

    Hi Rolando, your second suggestion works (I modified the syntax slightly). Just to clarify, if I choose your original suggestion, is the error message anything to be concerned about (i.e., does it indicate that the data in the dump could possibly be corrupt)?

    As long as the mysqldump goes to completion, the error message just becomes an annoyance.

    Since the mysqldump is probably somewhere between 50MB and 100MB, you should be able to look at the first 30 lines and last 20 lines. If the mysqldump went successfully, the top and bottom lines will have the same appearance every time.
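
    If you would rather script that check than eyeball it, something along these lines works (a sketch; with the default --comments/--dump-date behaviour, a finished dump ends with a "-- Dump completed on ..." line):

    head -30 backup_db.sql    # header: mysqldump version, host, character-set settings
    tail -20 backup_db.sql    # last table's data plus the restored session settings
    tail -1 backup_db.sql | grep -q "Dump completed" && echo "Dump looks complete"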

    Why do you sleep for 86400 seconds, couldn't you just do a read lock without the sleep?

    @chudapati09 if you do not call the SLEEP function to keep the session open, the read lock will die immediately as the session terminates.

    Hi @RolandoMySQLDBA, that's an awesome answer. I have one doubt about it, though: in my case I have a highly concurrent live database (~7 GB, only InnoDB tables). Would the dump cause a significant impact on system performance? Thanks.

    For InnoDB only, don't your recommended options suffer from being corrupted if a DDL change is made in parallel? That's one of the warnings that goes along with using --single-transaction: that repeatable reads are not isolated from such DDL changes and that can cause improper / invalid dumps to occur. What's the best way around that issue? @RolandoMySQLDBA

    • For InnoDB tables, you should use the --single-transaction option, as mentioned in another answer.
    • For MyISAM there is --lock-tables.

    See the official mysqldump documentation for details.
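
    For reference, the two invocations would look roughly like this (credentials and the database name are placeholders):

    # InnoDB only: consistent snapshot without blocking readers or writers
    mysqldump -uuser -ppass --single-transaction database_name > backup_db.sql
    
    # MyISAM or mixed engines: lock the dumped tables for the duration of the dump
    mysqldump -uuser -ppass --lock-tables database_name > backup_db.sql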

  • If you want to do this for MyISAM or mixed tables without any downtime from locking the tables, you can set up a slave database and take your snapshots from there. Setting up the slave database unfortunately causes some downtime to export the live database, but once it's running, you should be able to lock its tables and export using the methods others have described. While this is happening, the slave will lag behind the master, but it won't stop the master from updating its tables, and it will catch up as soon as the backup is complete. A rough sketch of that workflow is shown below.
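
    One way to hold the slave still during the export, instead of locking its tables, is to pause the replica's SQL thread; this is a different but common approach, sketched here with placeholder credentials:

    # On the slave: stop applying replicated changes so the data does not move during the dump
    mysql -uuser -ppass -e "STOP SLAVE SQL_THREAD;"
    
    mysqldump -uuser -ppass --routines --triggers --all-databases > slave_backup.sql
    
    # Resume replication; the slave then catches up with the master
    mysql -uuser -ppass -e "START SLAVE SQL_THREAD;"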

  • Here's how I did it. It should work in all cases since it uses FLUSH TABLES WITH READ LOCK.

    #!/bin/bash
    
    DB=example
    DUMP_FILE=export.sql
    
    # Lock the database and sleep in background task
    mysql -uroot -proot $DB -e "FLUSH TABLES WITH READ LOCK; DO SLEEP(3600);" &
    sleep 3
    
    # Export the database while it is locked
    mysqldump -uroot -proot --opt $DB > $DUMP_FILE
    
    # When finished, kill the previous background task to unlock
    kill $! 2>/dev/null
    wait $! 2>/dev/null
    
    echo "Finished export, and unlocked !"
    

    The shell sleep command is just to make sure that the background task running the mysql locking command is executed before the mysqldump starts. You could reduce it to 1 second and it should still be fine. Increase it to 30 seconds and try inserting a value into any table from another client during those 30 seconds: you'll see it's locked.

    There are two advantages to using this manual background locking instead of the mysqldump options --single-transaction and --lock-tables:

    1. This locks everything, if you have mixed MyISAM/InnoDB tables.
    2. You can run other commands in addition to the mysqldump during the same locking period. It's useful, for instance, when setting up replication on a master node, because you need to get the binary log position with SHOW MASTER STATUS; at the exact state of the dump you created (before unlocking the database) to be able to create a replication slave (see the sketch after this list).
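
    As a sketch of point 2 (reusing the placeholder credentials, $DB and $DUMP_FILE from the script above), the binary log coordinates can be captured while the background session still holds the lock:

    # The backgrounded FLUSH TABLES WITH READ LOCK session from the script is still holding the lock here
    mysql -uroot -proot -e "SHOW MASTER STATUS\G" > master_status.txt   # binlog file + position for the future slave
    mysqldump -uroot -proot --opt $DB > $DUMP_FILE
    # ...then kill/wait the background task, as in the script, to release the lock
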
  • The suggestion of the official MySQL documentation is that you should have a master "M1" database and a slave "S1" database, as described in "Scenario 2: Backup with a Read-Only Slave" under Backing Up a Master or Slave by Making It Read Only.

    You should set the slave database to read-only and perform the backup on the slave.
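
    As a rough sketch of that "Scenario 2" recipe (credentials are placeholders; the key point is that the session holding FLUSH TABLES WITH READ LOCK must stay open for the whole dump, which also blocks the replication SQL thread from applying changes):

    # Hold the read lock and set read_only in one background session that stays open during the dump
    mysql -uuser -ppass -Ae"FLUSH TABLES WITH READ LOCK; SET GLOBAL read_only = ON; SELECT SLEEP(3600)" &
    sleep 3
    
    mysqldump -uuser -ppass --all-databases > slave_backup.sql
    
    # Allow writes again and end the locking session (which also releases the global read lock)
    mysql -uuser -ppass -Ae"SET GLOBAL read_only = OFF"
    kill $! 2>/dev/null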

  • If you have a very big MyISAM table and you need to dump it without locking it for the whole duration and without putting high load on the server, you can use the following script, which dumps the table in chunks.

    #!/bin/bash
    
    my_user="user"
    my_password="password"
    my_db="vpn"
    my_table="traffic"
    my_step=100000
    
    read -p "Dumping table ${my_db}.${my_table} to ${my_table}.sql? " yn
    case $yn in
        [Yy]* ) ;;                            # proceed with the dump
        * ) echo "User cancel."; exit;;
    esac
    
    # Count the rows so the table can be dumped in chunks of ${my_step}
    my_count=$(mysql $my_db -u $my_user -p$my_password -se "SELECT COUNT(*) FROM $my_table")
    my_count=$(($my_count + 0))
    
    if [ "$my_count" -eq 0 ]
    then
        echo "No records found"
        exit
    fi
    
    echo "Records in table ${my_db}.${my_table}: ${my_count}"
    
    echo "" > $my_table.sql
    
    max_progress=60
    
    for (( limit=0; limit<=$my_count; limit+=$my_step )); do
        progress=$((max_progress * ( limit + my_step) / my_count))
    
        echo -ne "Dumping ["
        for ((i=0; i<$progress; i ++)); do
            echo -ne "#"
        done
        for ((; i<$max_progress; i ++)); do
            echo -ne "."
        done
    
        # Dump only this chunk of rows by abusing --where with a LIMIT clause; append to the dump file
        mysqldump -u $my_user -p$my_password --complete-insert --no-create-info --opt --where="1 limit $limit , $my_step" $my_db $my_table >> $my_table.sql
        echo "" >> $my_table.sql
    
        echo -ne "] $((100 * ( limit + my_step ) / my_count)) %"
        echo -ne "\r"
    
        sleep 1   # brief pause between chunks to keep server load down
    
    done
    
    echo -ne "\n"
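
    Because the chunks are written with --no-create-info, the resulting file contains only INSERT statements, so the table structure has to be recreated before reloading. A possible restore sketch (same placeholder credentials; vpn_restore is a hypothetical target database):

    # Copy the table structure only (no rows) into the target database, then replay the data chunks
    mysqldump -u user -ppassword --no-data vpn traffic | mysql -u user -ppassword vpn_restore
    mysql -u user -ppassword vpn_restore < traffic.sql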
    
