Full dd copy from HDD to HDD


    If I have 2 identical hard drives with the following characteristics:

    • SATA 6.0 Gb/s
    • 5400 rpm
    • 3TB

    How long should a full dd copy take to complete?

    So far it's been running for 5 hours and still going...

    I am using Linux Ubuntu 12.04 64bit and the command I am using is:

    dd if=/dev/sdb of=/dev/sdc

    UPDATE: 1

    I can now see the progress, and it's been 6+ hours to copy 430GB. The HDD is 3TB...

    Is there no faster way of doing this?

    UPDATE: 2

    This seems a lot better than before (Thanks to Groxxda for the suggestions):

    sudo dd if=/dev/sdb bs=128K | pv -s 3000G | sudo dd of=/dev/sdc bs=128K

    ETA is about 9 hours for 3TB, whereas before it reached 430GB after 6 hours, so I am guessing it would have taken about 36 hours using the previous command.
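
    For reference, the arithmetic behind these estimates, as a straight linear extrapolation from the figures above (integer shell arithmetic, so results are rounded down; the update's "about 36 hours" guess is slightly optimistic by this reckoning):

```shell
# Extrapolate from the observed figures: 430 GB copied in 6 hours.
copied_gb=430
hours=6

# Observed throughput: 430,000 MB over 21,600 seconds.
echo "rate: $(( copied_gb * 1000 / (hours * 3600) )) MB/s"

# Time to copy the full 3 TB (3000 GB) at that rate.
echo "full copy: $(( 3000 * hours / copied_gb )) hours"
```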

    Try to grab the statistics of the process: sending a `USR1` signal to a running `dd` process makes it print I/O statistics to standard error and then resume copying. For example, start a copy with `dd if=/dev/zero of=/dev/null & pid=$!` and then run `kill -USR1 $pid`. Check your man page for the actual signal, as it differs between `dd` implementations.

    @Groxxda, I have no idea how to do that.

    GNU dd uses `SIGUSR1`, and BSD dd uses `SIGINFO`.
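
    A harmless way to try the signal trick, assuming GNU dd on Linux (it copies zeros to /dev/null, so no disk is touched):

```shell
stats_file=$(mktemp)

# Harmless dd run: ~20 GB of zeros to /dev/null, done in a few seconds.
dd if=/dev/zero of=/dev/null bs=1M count=20000 2>"$stats_file" &
pid=$!
sleep 1

# GNU dd prints I/O statistics to stderr on SIGUSR1 and keeps copying.
kill -USR1 "$pid" 2>/dev/null || true   # dd may already have finished
wait "$pid"

cat "$stats_file"   # lines like "... bytes ... copied, ... s, ... GB/s"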

    Also what do you mean with "connected to the same sata cable"? Are you using some sort of port multiplier? (If you achieve a transfer rate of 150MB/s it should take you 5-6hrs, but I think half of that is more realistic.)

    @Groxxda, No, it's a single sata cable, which allows 2 hdds to connect to a single sata port. It's doing 19MB/s for some reason...

    You may be able to speed up the process by specifying a different (bigger) blocksize (`bs=` argument to `dd`). Also consider connecting each HDD to its own sata port.

    @Groxxda, what blocksize do you recommend?

    Have a look at this thread on superuser. I would suggest you try a few values from 64K to 4M and see what works best for you. They also mention a flag `direct` that might speed up the copy, but I haven't used that.
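
    A quick way to pick among those values is to time a copy of a scratch file at each block size — a sketch only (it uses temporary files rather than your real disks, and the page cache will flatter repeat runs, so treat the numbers as relative):

```shell
# Time a 256 MB scratch-file copy at several block sizes.
# Safe to run: only temporary files are touched, never real disks.
src=$(mktemp)
dst=$(mktemp)
dd if=/dev/zero of="$src" bs=1M count=256 2>/dev/null

for bs in 64K 256K 1M 4M; do
    t0=$(date +%s%N)
    dd if="$src" of="$dst" bs="$bs" 2>/dev/null
    t1=$(date +%s%N)
    echo "bs=$bs: $(( (t1 - t0) / 1000000 )) ms"
done

rm -f "$src" "$dst"
```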

    @Groxxda you could combine your comments into an answer, there's some good information there.

    Updated question.

    What did you finally use for the other disks? dd (with direct? what bs?), or was cat just as fast? Any insight on copying a SATA HDD (now connected via USB) to the SSD that is replacing it?

  • dd was useful in the old days when people used tapes (when block sizes mattered) and when simpler tools such as cat might not be binary-safe.

    Nowadays, dd if=/dev/sdb of=/dev/sdc is just a complicated, error-prone, slow way of writing cat /dev/sdb >/dev/sdc. While dd is still useful for some relatively rare tasks, it is a lot less useful than the number of tutorials mentioning it would lead you to believe. There is no magic in dd; the magic is all in /dev/sdb.

    Your new command sudo dd if=/dev/sdb bs=128K | pv -s 3000G | sudo dd of=/dev/sdc bs=128K is again needlessly slow and complicated. The data is read 128kB at a time (which is better than the dd default of 512B, but not as good as even larger values). It then goes through two pipes before being written.

    Use the simpler and faster cat command. (In some benchmarks I made a couple of years ago under Linux, cat was faster than cp for a copy between different disks, and cp was faster than dd with any block size; dd with a large block size was slightly faster when copying onto the same disk.)

    cat /dev/sdb >/dev/sdc

    If you want to run this command in sudo, you need to make the redirection happen as root:

    sudo sh -c 'cat /dev/sdb >/dev/sdc'
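
    To see why cat (or dd with a large block size) outruns dd's 512-byte default, you can compare them on a scratch file — an illustration only; absolute numbers depend on your machine, and disk-to-disk copies behave differently from cached files:

```shell
# Compare cat against dd at its 512-byte default and at 1 MiB,
# using temporary files so nothing here touches a real disk.
src=$(mktemp)
dst=$(mktemp)
dd if=/dev/zero of="$src" bs=1M count=256 2>/dev/null

t0=$(date +%s%N); cat "$src" >"$dst"; t1=$(date +%s%N)
echo "cat:       $(( (t1 - t0) / 1000000 )) ms"

t0=$(date +%s%N); dd if="$src" of="$dst" bs=512 2>/dev/null; t1=$(date +%s%N)
echo "dd bs=512: $(( (t1 - t0) / 1000000 )) ms"   # far more system calls

t0=$(date +%s%N); dd if="$src" of="$dst" bs=1M 2>/dev/null; t1=$(date +%s%N)
echo "dd bs=1M:  $(( (t1 - t0) / 1000000 )) ms"

# Sanity check: the last copy is byte-identical to the source.
cmp -s "$src" "$dst" && copy_ok=yes || copy_ok=no
rm -f "$src" "$dst"
echo "copy intact: $copy_ok"
```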

    If you want a progress report, since you're using Linux, you can easily get one by noting the PID of the cat process (say 1234) and looking at the position of its input (or output) file descriptor.

    # cat /proc/1234/fdinfo/0
    pos:    64155648 
    flags:  0100000
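
    That pos value can be turned into a rough percentage. A sketch of the arithmetic — here parsing canned fdinfo text (with the question's 430 GB figure as the position) rather than a live /proc file, so it runs anywhere:

```shell
# Parse the "pos:" field and report progress against a 3 TB drive.
# In real use, read "/proc/<pid>/fdinfo/0" for the live value instead.
fdinfo='pos:    430000000000
flags:  0100000'

pos=$(printf '%s\n' "$fdinfo" | awk '/^pos:/ {print $2}')
total=3000000000000    # 3 TB, in bytes

echo "${pos} of ${total} bytes ($(( pos * 100 / total ))% copied)"
```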

    If you want a progress report and your unix variant doesn't provide an easy way to get at file descriptor positions, you can install and use pv instead of cat.

    What is strange is that with large blocks the bottleneck is the disk, so what makes `cat` faster than `dd`? Could it be that `cat` uses the cache?

    @Gilles, thanks for the answer. I have another five 3TB drives to clone and will try the cat option next. As far as I can tell, that new dd command is going to take another 3 hours to complete, to about 11 hours in total. If the cat approach is faster than 11 hours for the second 3TB HDD, I will use that method for the remaining drives.

    @Emmanuel Both use the cache in the same way. I don't understand why there's a significant difference between `cat`, `cp` and `dd` with a large block size (it's easy to understand why `dd` with a small block size is slower: it makes more system calls for the same amount of data).

    @oshirowanen You're getting a little over 80MB/s, which sounds pretty good for a 5400rpm drive.

    @Gilles, I have pv installed and am currently copying the data using the cat method you suggested. How do I use pv to get a human readable progress report?

    @Gilles I was thinking: if `cat` uses the cache, that would allow it to read from one disk while writing to the other. If `dd` was intended to write to tapes, perhaps it bypasses the cache to have better control.

    @Gilles, so to get progress report, do I use `sudo sh -c 'pv /dev/sdb >/dev/sdc'` instead of `sudo sh -c 'cat /dev/sdb >/dev/sdc'`?

    @oshirowanen Yes, use `pv` where you'd use `cat`.

    @Emmanuel No, `dd` doesn't (can't) bypass the cache.

    @Gilles I think it can with `oflag=direct`, but doesn't by default.

    @Gilles +1 for suggesting `cat`. Just copied a 128 GB SSD, and it took only half an hour.

    I have a question: if I have two 2TB HDDs (sdb and sdc) and want to clone the first one, sdc, should I use `cat /dev/sdc >/dev/sdb`? Is that a safe way to do it? Thanks!

    What happens if you do this on a multi-device btrfs fs?

    @unhammer I don't know how btrfs stores information about where to find the other devices. If it's able to assemble filesystem parts from their content alone regardless of their location (like Linux LVM), then just copying one or more of the devices byte for byte will work. A robust system like btrfs should be able to do this, otherwise plugging in an external disk or restoring from a backup would be very painful.

    OK, I guess it'd have to be something like `cat /dev/sdc1 /dev/sdd1 > /dev/sdb` (given the original btrfs mount used `-odevice=/dev/sdc1,device=/dev/sdd1`). I felt a bit too unsure so I ended up waiting for `/bin/cp -a` instead, which wasn't too bad.

    @unhammer No! You would copy each device one by one, e.g. `cat /dev/sdc1 >/dev/sdb1 && cat /dev/sdd1 >/dev/sde1`. Concatenating the two parts doesn't make sense. If you want to change the structure of the btrfs volume, to change it from having two subvolumes to one, you need to use btrfs tools to change the structure, or as you did to create a new filesystem with the desired structure and copy the files.

    OK, had a feeling it wouldn't work exactly like that, thanks for the confirmation!

License under CC-BY-SA with attribution

Content dated before 6/26/2020 9:53 AM
