How can I move a database from one server to another?

  • How can I move MySQL tables from one physical server to another?

    Such as this exact scenario: I have a MySQL server that uses InnoDB tables and is about 20 GB in size.

    I want to move it to a new server, what's the most efficient way to do this?

    I would use xtrabackup. It's very much like copying the files over, but you can keep the server running, and since you mainly use InnoDB tables (which you said you do), you can consider it a "hot" backup as well.

    I don't benefit from other people using this tool. It's free/open-source and even though it was made by a company, it's easy enough to use that you do not need to consider buying support from that company.

  • My favorite way is to pipe a mysqldump command into a mysql command. You can do all databases or a specific one. So, for instance,

    mysqldump -uuser -ppassword myDatabase | mysql -hremoteserver -uremoteuser -premoteserverpassword 

    You can do all databases with

    mysqldump --all-databases -uuser -ppassword | mysql -hremoteserver -uremoteuser -premoteserverpassword

    The only problem is when the database is too big and the pipe collapses. In that case, you can do table by table or any of the other methods mentioned below.
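
    For large databases, the same pipe works better with compression in the middle. A minimal offline sketch follows; the local files stand in for the two servers so it runs anywhere, and the commented pipeline (hostnames and credentials are placeholders) shows what the real invocation would look like:

    ```shell
    # With live servers, the compressed pipe would be something like:
    #   mysqldump -uuser -ppassword myDatabase | gzip | \
    #     ssh remoteserver 'gunzip | mysql -uremoteuser -ppassword myDatabase'
    set -e
    workdir=$(mktemp -d)
    # dump.sql stands in for mysqldump output.
    printf 'CREATE TABLE t (id INT);\nINSERT INTO t VALUES (1);\n' > "$workdir/dump.sql"
    # Compressing inside the pipe is what keeps a 20 GB transfer tolerable.
    gzip -c "$workdir/dump.sql" | gunzip -c > "$workdir/replayed.sql"
    diff "$workdir/dump.sql" "$workdir/replayed.sql" && echo "pipe roundtrip OK"
    ```

    The data never touches disk in compressed form, so no extra space is needed on either end.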

    Tip: If neither database allows remote connections, pipe through `netcat`.

    For this to work for me I had to create an empty database with the same name on the remote server and then add that database's name to the end of the command.

    This is elegant, but without compression it's hardly fast enough for a 20 GB database.

    This solution is good only for a fully controlled network (if and only if on a secure private network, read: not the internet), but it's a fast and easy solution!

    This is lightning fast! But what @Zugwalt said was true for me too: the command didn't work until I had created the (empty) database and added the database name to the end of the command.

  • I recently moved a 30GB database with the following strategy:

    Old Server

    • Stop mysql server
    • Copy contents of datadir to another location on disk (~/mysqldata/*)
    • Start mysql server again (downtime was 10-15 minutes)
    • compress the data (tar -czvf mysqldata.tar.gz ~/mysqldata)
    • copy the compressed file to new server

    New Server

    • install mysql (don't start it)
    • unzip the compressed file (tar -xzvf mysqldata.tar.gz)
    • move the contents of mysqldata to the datadir
    • Make sure your innodb_log_file_size is the same on the new server; if it's not, don't copy the old log files (mysql will regenerate them)
    • Start mysql

    Skip the copy after compress/decompress: stream the tar over the network using ssh, or (if and only if on a secure private network, read: *not* the internet) use netcat to avoid the encryption overhead. Also, if on a local network, skip the gzipping; if you have a fast network pipe you'll find the transfer bottlenecked on a pegged core doing the compression.
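
    The stream-the-tar suggestion can be sketched locally. Two temp directories stand in for the old and new datadirs; in real use the middle of the pipe would be ssh or netcat, as shown in the comment:

    ```shell
    set -e
    src=$(mktemp -d); dst=$(mktemp -d)
    mkdir -p "$src/mydb"
    echo 'row data' > "$src/mydb/mytable.MYD"
    # Stream the compressed tar through a pipe instead of writing
    # mysqldata.tar.gz to disk first; across machines this becomes:
    #   tar -cz -C /var/lib/mysql . | ssh newserver 'tar -xz -C /var/lib/mysql'
    tar -cz -C "$src" . | tar -xz -C "$dst"
    diff -r "$src" "$dst" && echo "datadirs match"
    ```

    This avoids needing scratch space for the intermediate archive on either server.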

    This works for both innodb and myisam? Also, mysql users are in datadir as well?

    @giorgio79 sure, as long as you move the ibdata files as well. By default those are in the datadir. MySQL users are stored in the mysql schema, which lives in the mysql folder inside the datadir.

    Will this procedure work for moving from Windows to Linux?

    @TypoCubeᵀᴹ sorry to take over two years to respond, but it would have helped me had someone said definitively, "Yes, it works from Windows to Linux". In my case, I went **from Windows Server 2012 R2** to **CentOS (Red Hat 4.8.5-11)**. The specific mysql version was **MariaDB 10.1**. As prescribed, I stopped both mysql services, rsynced the data directory, and upon starting the mysql service on the new server, all databases, database tables, and database users were completely intact.

    Is this going to work if the target is a newer version of MySQL than the source? And does it work when the target is even MariaDB?

  • According to the MySQL 5.0 Certification Study Guide, Chapter 32, Section 32.3.4 (pages 456-457) describes the Conditions for Binary Portability, which bring out the following:

    Binary portability is important if you want to take a binary backup that was made on one machine and use it on another machine that has a different architecture. For example, using a binary backup is one way to copy databases from one MySQL server to another.

    For MyISAM, binary portability means that you can directly copy the files for a MyISAM table from one MySQL server to another on a different machine and the second server will be able to access the table.

    For InnoDB, binary portability means that you can directly copy the tablespace files from a MySQL server on one machine to another server on a different machine and the second server will be able to access the tablespace. By default, all the InnoDB tables managed by a server are stored together in the tablespace, so portability of the tablespace is a function of whether all individual InnoDB tables are portable. If even one table is not portable, neither is the tablespace.

    MyISAM tables and InnoDB tablespaces are binary portable from one host to another if two conditions are met:

    • Both machines must use two's-complement integer arithmetic
    • Both machines must use IEEE floating-point format or else the tables must contain no floating-point columns (FLOAT or DOUBLE)

    In practice, those two conditions pose little restriction. Two's-complement integer arithmetic and IEEE floating-point format are the norm on modern hardware. A third condition for InnoDB binary portability is that you should use lowercase names for tables and databases. This is because InnoDB stores these names internally (in its data dictionary) in lowercase on Windows. Using lowercase names allows binary portability between Windows and Unix. To force the use of lowercase names, you can put the following lines in an option file:

    [mysqld]
    lower_case_table_names=1

    If you configure InnoDB to use per-table tablespaces, the conditions for binary portability are extended to include the .ibd files for InnoDB tables as well. (The conditions for the shared tablespace still apply because it contains the data dictionary that stores information about all InnoDB tables.)

    If conditions for binary portability are not satisfied, you can copy MyISAM or InnoDB tables from one server to another by dumping them using some text format (for example, with mysqldump) and reloading them into the destination server.

    There are two major ways based on storage engine to move individual tables.

    For the given example we will suppose the following:

    1. datadir is /var/lib/mysql
    2. database called mydb
    3. table in mydb database called mytable.

    MyISAM tables

    If mydb.mytable uses the MyISAM storage engine, the table will physically be manifested as three separate files

    1. /var/lib/mysql/mydb/mytable.frm (.frm file)
    2. /var/lib/mysql/mydb/mytable.MYD (.MYD file)
    3. /var/lib/mysql/mydb/mytable.MYI (.MYI file)

    The .frm contains the table structure
    The .MYD contains the table data
    The .MYI contains the table index page

    These files are used interdependently to represent the table from a logical standpoint in mysql. Since these files have no further logical association attached to them, you can migrate a table from one DB server to another simply by copying them. You can even do this from a Windows server to a Linux server or macOS. Of course, you could shut down mysql and copy the 3 table files. Or you could run the following:

    LOCK TABLES mydb.mytable READ;
    SELECT SLEEP(86400);

    in one ssh session to hold the table as read-only, keeping the lock for up to 24 hours. A second later, perform the copy in another ssh session, then kill the mysql session holding the 24-hour lock. You need not wait 24 hours.
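
    A sketch of that two-session copy, with dummy files standing in for the real MyISAM files (the SQL lines appear as comments because they run inside mysql, not the shell):

    ```shell
    set -e
    datadir=$(mktemp -d); dest=$(mktemp -d)
    mkdir -p "$datadir/mydb"
    # Stand-ins for the three files that make up mydb.mytable.
    touch "$datadir/mydb/mytable.frm" "$datadir/mydb/mytable.MYD" "$datadir/mydb/mytable.MYI"
    # Session 1 (inside mysql) pins the table read-only:
    #   LOCK TABLES mydb.mytable READ; SELECT SLEEP(86400);
    # Session 2 copies the three files while the lock is held:
    cp "$datadir/mydb/mytable.frm" "$datadir/mydb/mytable.MYD" \
       "$datadir/mydb/mytable.MYI" "$dest/"
    # Then kill the SLEEP query in session 1 to release the lock.
    ls "$dest"
    ```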

    InnoDB tables

    Based on the aforementioned quote from the Certification book, there are many factors that govern how to back up a specific InnoDB table. For the sake of simplicity, clarity, and brevity, simply perform a mysqldump of the desired table using the --single-transaction option to get a perfect point-in-time dump of the table. No need to concern yourself with InnoDB semantics if you just want one table. You can reload that dumpfile into any MySQL server of your choosing.

    EDIT (since two questions were merged here - jcolebrand):

    If you are more than willing to live with some slow DB performance, you can perform a series of rsyncs from the old server (ServerA) to the new server (ServerB) even while mysql is still running on ServerA.

    Step 01) install the same version of mysql on ServerB that ServerA has

    Step 02) On ServerA, run SET GLOBAL innodb_max_dirty_pages_pct = 0; from mysql and wait about 10 minutes (this purges dirty pages from the InnoDB Buffer Pool; it also helps mysql shut down faster). If your database is all MyISAM, you can skip this step.

    Step 03) rsync --archive --verbose --stats --partial --progress --human-readable ServerA:/var/lib/mysql ServerB:/var/lib/mysql

    Step 04) Repeat Step 03 until an rsync takes less than 1 minute

    Step 05) service mysql stop on ServerA

    Step 06) Perform one more rsync

    Step 07) scp ServerA:/etc/my.cnf ServerB:/etc/

    Step 08) service mysql start on ServerB

    Step 09) service mysql start on ServerA (optional)
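
    The converging rsync loop (Steps 03-06) can be sketched locally, assuming rsync is installed; two temp directories stand in for ServerA's and ServerB's datadirs:

    ```shell
    set -e
    serverA=$(mktemp -d); serverB=$(mktemp -d)
    echo 'version 1' > "$serverA/ibdata1"
    # Step 03: the first pass copies everything while mysqld keeps writing.
    rsync -a "$serverA/" "$serverB/"
    echo 'version 2' > "$serverA/ibdata1"   # the source changed underneath us
    # Step 04: repeat; only the changed files move, so later passes are fast.
    rsync -a "$serverA/" "$serverB/"
    # Steps 05-06: after `service mysql stop`, one final pass makes B exact.
    rsync -a --delete "$serverA/" "$serverB/"
    diff -r "$serverA" "$serverB" && echo "in sync"
    ```

    The trailing slashes matter: `"$serverA/"` syncs the directory's contents rather than nesting the directory itself inside the target.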

    Give it a Try !!!


    You can create a replication slave like this. Just remember to have server-id explicitly set in the master /etc/my.cnf and a different number for server-id in the slave /etc/my.cnf.

  • You don't even need mysqldump if you're moving a whole database schema and you're willing to stop the first database (so it's consistent when being transferred):

    1. Stop the database (or lock it)
    2. Go to the directory where the mysql data files are.
    3. Transfer over the folder (and its contents) over to the new server's mysql data directory
    4. Start back up the database
    5. On the new server, issue a 'create database' command.
    6. Re-create the users & grant permissions.

    I can't remember if mysqldump handles users and permissions, or just the data ... but even if it does, this is way faster than doing a dump & running it. I'd only use that if I needed to dump a mysql database to then re-insert into some other RDBMS, if I needed to change storage options (innodb vs. myisam), or maybe if I was changing major versions of mysql (though I think I've done this between 4 & 5).

    This is more efficient, especially if one is a sysadmin/DBA. BTW mysqldump using `--all-databases` dumps the mysql schema. Starting up mysql on the next machine brings up the permissions provided you transferred the data folder to the other machine with the same major release of MySQL. (MySQL 5.5.x to MySQL 5.5.x, MySQL 5.1.x to MySQL 5.1.x, MySQL 5.0.x to MySQL 5.0.x)

    @Joe, yes, `mysqldump` handles users and permissions, as these are stored within the `mysql` schema.

    This approach is especially useful with cloud hosting, like AWS. You can stop mysql, unmount and detach the volume from the current server, then attach and mount it to the new server and start mysql. There is no copy overhead if the volume remained in the same server farm.

    If you're making a replica of a large database, lock the database and make a local copy of the files. Make a note of the information from `show master status;` query and then `unlock tables;`. You can copy the files over to the slave server at your convenience. This is the shortest downtime I was able to come up with in this scenario.

  • If you just want to move a specific table try:

    mysqldump -u username -ppassword databasename tablename > databasename.tablename.sql

    You can specify more table names above, in the same command. Once the command completes, move the databasename.tablename.sql file to the other server and then restore using:

    mysql -u username -ppassword databasename < databasename.tablename.sql

    Note that the backup .sql file is created using the mysqldump program, and the restore is done directly with mysql.

    1. If you have ssh access you can use mysqldump from the command line
    2. If you don't have ssh access but you have phpMyAdmin access you can use it to export/import
    3. If you don't have phpMyAdmin access there are some handy php scripts that will dump and import (however, speaking from my own experience, I never found one as reliable as phpMyAdmin).

    There is also the possibility of moving the actual database files (for my install they are located at /var/lib/mysql), but I'm not really sure how that would work out.

  • You're going to need to take some downtime. How long will depend on your network speed. I'm going to assume you're running MySQL on Linux/Unix. Here's the process I use:

    1. Stop the mysql daemon on the source host.
    2. Make a tmp folder on your target host to receive the files.
    3. Use screen to make a shell session that will survive should your ssh get disconnected.
    4. Use rsync to transfer the files between hosts. Something like: rsync -avhP source user@target.host:/path/to/folder/
    5. Run your test cases to make sure you didn't lose anything in the transfer.

    Then proceed as usual getting the local MySQL set up.

    Note: you can also use the -c parameter with rsync to add checksum verification to the transfer; however, this will be slow depending on CPU speed.
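
    A cheaper alternative to rsync's -c is to checksum once after the transfer finishes. A sketch with md5sum (GNU coreutils assumed), where two temp directories stand in for the source and target hosts:

    ```shell
    set -e
    src=$(mktemp -d); dst=$(mktemp -d)
    echo 'innodb page data' > "$src/ibdata1"
    cp "$src/ibdata1" "$dst/ibdata1"        # stands in for the rsync transfer
    # Record checksums on the source, then verify them against the target copy.
    (cd "$src" && md5sum ibdata1) > "$src/sums.txt"
    (cd "$dst" && md5sum -c "$src/sums.txt")
    ```

    This pays the hashing cost once at the end instead of on every rsync pass.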

  • I can confirm that DTest's method also works for copying between ubuntu and osx.

    To copy all of the databases without having to do any dumping or similar:

    Make sure you have a clean install of mysql (installed from the dmg downloaded from mysql) that (VERY IMPORTANT) has never been run.

    Copy the /var/lib/mysql/ folder contents from the ubuntu machine on top of the /usr/local/mysql/data/ contents on the mac. To get access to the folder on the ubuntu machine I had to use sudo, i.e.:

    sudo cp -R /var/lib/mysql /home/foouser/mysql_data_folder
    sudo chown -R foouser /home/foouser/mysql_data_folder

    I copied the folder using scp.

    Before you start take a copy of the mysql folder on the mac to make sure you don't mess up anything.

    After copying the folder, do the following on the mac machine:

    sudo chown -R _mysql /usr/local/mysql/data/
    sudo chgrp -R wheel /usr/local/mysql/data/
    sudo chmod -R g+rx /usr/local/mysql/data/

    Start the mysql server for the first time (from the preferences pane under System Preferences->mysql). All users and databases should now be set up correctly.

    This worked with mysql 5.1.61 on ubuntu 64 bit 11.10 and mysql 5.1.63 on osx lion (macbook pro).

  • I think all of the earlier answers probably work fine, but don't really address the issue of setting a database name during the transfer.

    This is how I just did it with bash:

    You might be better off using rsync than scp, and not compressing the file if you are doing this often.

    On my source server:

    user@source:~$ d=members
    user@source:~$ mysqldump $d | gzip > $d.sql.gz
    user@source:~$ scp -i .ssh/yourkeynamehere $d.sql.gz $sbox:$d.sql.gz

    On my destination server:

    user@destination:~$ d1=members
    user@destination:~$ d2=members_sb
    user@destination:~$ mysqladmin create $d2
    user@destination:~$ cat $d1.sql.gz | gunzip | mysql $d2

    On either machine to see progress:

    $ ls *.gz
    $ cat $d.sql.gz | gunzip | less

    This all assumes you have a MySQL configuration file in your home directory on both machines, with the permissions set:

    $ echo "
    database=nz_drupal" > .my.cnf
    $ chmod 0600 ~/.my.cnf
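
    For the credential-less mysqldump/mysql/mysqladmin invocations above to work, the ~/.my.cnf on each machine needs at least a [client] section. A minimal sketch (user, password, and database names are placeholders):

    ```ini
    [client]
    user = youruser
    password = yourpassword

    [mysql]
    database = members
    ```

    Keeping it mode 0600, as shown above, stops other local users from reading the password.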
  • Generic linux method:

    /etc/init.d/mysqld stop
    rsync -avz source_files destination
    vi /etc/my.cnf

    edit the datadir (and socket) for both mysqld and mysqld_safe (if applicable) to point to the new location, then

    /etc/init.d/mysqld start

    I posted this because no one seemed to simply list out the fewest steps to do this, and personally I feel it's the simplest way.

License under CC-BY-SA with attribution

Content dated before 6/26/2020 9:53 AM