How do you test the network speed between two boxes?

  • I have a gigabit network set up in my house and a few Ubuntu-based boxes. Out of pure curiosity I would like to check the speed between two of the boxes. I am not having any problems with speed or anything; it really is just the geek in me that is curious. Plus maybe the results will let me know if there is room for improvement, or whether I have something configured incorrectly.

    So how do you properly test the network speed between Ubuntu boxes?

  • Oli (accepted answer)

    I use iperf. It's a client/server arrangement in that you run it in server mode at one end and connect to it from another computer on the other side of the network.

    On both machines run:

    sudo apt-get install iperf
    

    We'll start an iperf server on one of the machines:

    iperf -s
    

    And then on the other computer, tell iperf to connect as a client:

    iperf -c <address of other computer>
    

    On the client machine, you'll see something like this:

    [email protected]:~$ iperf -c tim
    ------------------------------------------------------------
    Client connecting to tim, TCP port 5001
    TCP window size: 16.0 KByte (default)
    ------------------------------------------------------------
    [  3] local 192.168.0.4 port 37248 connected with 192.168.0.5 port 5001
    [ ID] Interval       Transfer     Bandwidth
    [  3]  0.0-10.0 sec  1.04 GBytes    893 Mbits/sec
    

    Of course, if you're running a firewall on the server machine, you'll need to allow connections on port 5001 or change the port with the -p flag.
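
    For example, on an Ubuntu box using ufw (an assumption; your firewall may differ), opening the default port or moving the test to another port might look like this:

    sudo ufw allow 5001/tcp                          # allow the default iperf port on the server
    # or run the test on a different port instead:
    iperf -s -p 5002                                 # server
    iperf -c <address of other computer> -p 5002     # client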


    You can do pretty much the same thing with plain old nc (netcat) if you're that way inclined. On the server machine:

    nc -vvlnp 12345 >/dev/null
    

    And the client can pipe a gigabyte of zeros through dd over the nc tunnel.

    dd if=/dev/zero bs=1M count=1K | nc -vvn 10.10.0.2 12345
    

    As a demo:

    $ dd if=/dev/zero bs=1M count=1K | nc -vvn 10.10.0.2 12345
    Connection to 10.10.0.2 12345 port [tcp/*] succeeded!
    1024+0 records in
    1024+0 records out
    1073741824 bytes (1.1 GB) copied, 9.11995 s, 118 MB/s
    

    The timing there is given by dd, but it should be accurate enough as it can only output as fast as the pipe will take it. If you're unhappy with that you could wrap the whole thing up in a time call.
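
    For instance, a rough sketch of such a time wrapper (using the same placeholder address and port as above; depending on your nc variant you may need -q 0 or -N so the client exits at end of input):

    time sh -c 'dd if=/dev/zero bs=1M count=1K | nc -vvn -q 0 10.10.0.2 12345'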

    Remember that the result is in megabytes per second, so multiply it by 8 to get a megabits-per-second speed. The demo above is running at 944 Mbps.

    Man, you have all the answers to my questions! My network apparently is not set up as well as yours; it only transfers 714 MBytes with a bandwidth of 598 Mbits/sec. Dunno, I may look into that in the future. Thanks.

    In fairness, the other box is only one switch (and 20 meters of Cat5e) away and there's no congestion. 600 Mbps is still pretty fast.

    This is great, but I don't have root access to the server.

    Try -P 10. My result with a single connection is similar to jschoens', but with 3+ parallel connections it consistently pushes 920 Mbps.

    What if you don't control the server?

    @CMCDragonkai You probably shouldn't be testing resources that aren't yours. Bandwidth-heavy tests can have an impact on short-term stability.

    If you're using TCP, `iperf` might be faster than `nc` because of the way they use TCP. See: http://serverfault.com/questions/296539/netcat-throughput-low-but-iperf-high

    `iperf` shows a 27 Mbit speed on a gigabit connection; I tried to increase the window size but it doesn't help.

    This is a nice solution but unfortunately it is NOT GOOD; both solutions give misleading answers. iperf is REALLY BAD; I get as low as 75 Mbps. netcat is better, but it also fluctuates... Both fall short because they are client/server tools that aren't advanced enough for *squeezing* every bit of speed. I'll post my solution as an answer.

    What if the client is a Windows machine?

    @jonney ncat might be a viable substitute for one side of the `nc` command above; otherwise I'd suggest WSL. If all else fails, a LiveUSB stick with Ubuntu on it.

  • Same as Oli's recommendation for iperf. Just want to add several points:

    1. There are also Windows clients, which enable testing across platforms.
    2. -t <seconds> changes the test length. -P <n> changes the number of simultaneous connections. For example, iperf -c [target IP] -P 10 -t 30 tests 10 connections together for 30 seconds and gives aggregated results along with 10 separate connection speeds.
    3. You don't need sudo. You can simply download the binary at http://iperf.fr/ with wget, make it executable with chmod, and run it directly (see the sketch after this list). It works perfectly.
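
    A rough sketch of points 2 and 3 together (the download URL and the target IP are placeholders; pick the right binary for your platform from http://iperf.fr/):

    wget <binary URL from iperf.fr> -O iperf   # download the static binary
    chmod +x iperf                             # make it executable
    ./iperf -c <target IP> -P 10 -t 30         # 10 parallel streams for 30 seconds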

    I found that, using the default settings, the single-connection speed fluctuates quite a bit. However, with 3+ parallel connections, the results are more consistent on my gigabit switch (consistently at 910-920 Mbps).

  • Using this script you can easily test the connection speed between your machine and some remote host. Example usage:

    $ scp-speed-test.sh [email protected]_host 80000
    
    • [email protected]_host is your destination host (you must have SSH access to this host)
    • 80000 is the approximate size of the test file (in kB) that will be sent to the remote host. It is not a mandatory argument.
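
    For reference, a minimal sketch of what such a script typically does (the zero payload, the /tmp paths, and the cleanup step are my assumptions, not the actual script):

    #!/bin/bash
    # sketch: time an scp upload of a throwaway file of roughly the requested size
    HOST="$1"                       # e.g. your destination host
    SIZE_KB="${2:-80000}"           # approximate test file size in kB

    dd if=/dev/zero of=/tmp/scp-test.bin bs=1024 count="$SIZE_KB" 2>/dev/null
    time scp /tmp/scp-test.bin "$HOST:/tmp/scp-test.bin"   # scp also prints its own rate
    ssh "$HOST" rm -f /tmp/scp-test.bin                    # clean up on the remote side
    rm -f /tmp/scp-test.bin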

    This seems to test the speed of the SCP application, which will be lower than a test at a lower layer. For example, nc uses L4. Of course, this is great if you care more about the speed of SCP.

    It has problems: the script writes and reads a file on disk, which is slower than RAM, so that can be an artificial slowdown. It also only sends zeros; in case they get compressed, that's a big artificial speedup. If you do want pseudorandom data, **don't use `/dev/random` (it can block) or `/dev/urandom`** (linked comments suggested that); they can be very slow too. Instead use dm-crypt (see cryptsetup's FAQ 2.19, "How can I wipe a device with crypto-grade randomness?"), maybe with a file in RAM.
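
    A hedged sketch of that cryptsetup-FAQ approach (the file size, paths, and mapping name are assumptions): create a file in RAM, map it with plain dm-crypt keyed from /dev/urandom, and write zeros through the mapping so the backing file ends up full of incompressible pseudorandom data.

    dd if=/dev/zero of=/dev/shm/payload.bin bs=1M count=512   # backing file lives in RAM
    LOOP=$(sudo losetup --find --show /dev/shm/payload.bin)
    sudo cryptsetup open --type plain -d /dev/urandom "$LOOP" randpayload
    sudo dd if=/dev/zero of=/dev/mapper/randpayload bs=1M     # ends with "No space left" when full
    sudo cryptsetup close randpayload
    sudo losetup -d "$LOOP"
    # /dev/shm/payload.bin now holds 512M of crypto-grade pseudorandom data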

  • The command below does not require any additional packages, only SSH access:

    ssh [email protected] 'dd if=/dev/zero bs=1GB count=3 2>/dev/null' | dd of=/dev/null status=progress
    

    Example output:

    2992238080 bytes (3.0 GB) copied, 27.010250 s, 111 MB/s
    5859375+0 records in
    5859375+0 records out
    3000000000 bytes (3.0 GB) copied, 27.1943 s, 110 MB/s
    

    The command writes a 3 GB (1000^3 bytes) dummy stream of zeros to stdout on the remote server, which is transferred via SSH to stdout on the local machine and then piped to /dev/null (i.e. discarded). You can even see the progress of the test while it is running.
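
    If you also want to measure the opposite direction (the local box uploading to the remote one), the same idea works in reverse; this is just a variant of the command above, with user@remote-host as a placeholder:

    dd if=/dev/zero bs=1GB count=3 status=progress | ssh user@remote-host 'dd of=/dev/null'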

    Certainly not as precise as other tools but my use case was to debug a backup process where I wanted to test if network speed was the issue without installing additional packages.

  • If you want to test your Ethernet LAN at a lower level, you can use Etherate, which is a free Linux CLI Ethernet testing tool:

    https://github.com/jwbensley/Etherate

    Throwing it into the mix because tools like iPerf (which are very good!) operate over IP with TCP or UDP, whereas Etherate tests directly over Ethernet / OSI layer 2.

  • There are also some other nice command-line tools for bandwidth benchmarking between two hosts:

    nuttcp

    server$ nuttcp -S
    client$ nuttcp -v -v -i1 1.1.1.1 ;# 1.1.1.1 is server's address
    

    nepim

     server$ nepim
     client$ nepim -d -c 1.1.1.1 ;# 1.1.1.1 is server's address
    

    goben

     server$ goben
     client$ goben -hosts 1.1.1.1 ;# 1.1.1.1 is server's address
    

    How are these different from each other, and from iperf? Do they work the same way, and what do they do? nuttcp is in Debian, and apparently *"nuttcp is based on nttcp, which in turn was an enhancement by someone at Silicon Graphics (SGI) on the original ttcp, which was written by Mike Muuss at BRL sometime before December 1984, to compare the performance of TCP stacks by U.C. Berkeley and BBN to help DARPA decide which version to place in the first BSD Unix release."*

  • As I pointed out in my comment on the accepted answer, that solution is not good enough because client/server tools are not optimized to ... squeeze every bit of speed.

    my solution:

    Make a ramdisk on both sides, so you aren't limited by storage speed (I suggest you make them with ramfs, not tmpfs, so they won't go into swap; just be careful to leave at least 512M of memory free for the system. This is required if you have gigabit Ethernet, because at that speed even SSDs may slow things down). Install Apache on the server, create a link to the ramdisk, and create a few large files on the ramdisk (100M-1G; you can create them with dd from /dev/random, or copy some if you have them at hand). Then go to the client side and download them (also onto that side's ramdisk) with an advanced download program; I used lftp.
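
    Roughly, the setup could look like this (the mount points, file size, Apache docroot, and lftp segment count are my assumptions; /dev/urandom is used instead of /dev/random because the latter can block):

    # on the server
    sudo mkdir -p /mnt/ramdisk
    sudo mount -t ramfs ramfs /mnt/ramdisk
    sudo dd if=/dev/urandom of=/mnt/ramdisk/test.bin bs=1M count=500
    sudo ln -s /mnt/ramdisk /var/www/html/ramdisk      # assuming the default Apache docroot

    # on the client
    sudo mkdir -p /mnt/ramdisk
    sudo mount -t ramfs ramfs /mnt/ramdisk
    sudo chmod 1777 /mnt/ramdisk                       # let a normal user write here
    cd /mnt/ramdisk
    lftp -e 'pget -n 4 http://<server address>/ramdisk/test.bin; exit'   # segmented download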

    Oh well, the difference was major: from 75 Mbps reported by iperf and 9.5 MB/s by netcat

    to 11.18 MB/s with my solution:

    1591129421 bytes transferred in 136 seconds (11.18M/s)
    

    9.5 MB/s * 8 = 76 Mbps; that is quite close to the 75 Mbps iperf reported.

  • It's easy: plug your computer into the first box, then plug the other box into the first box. From your computer, ping the first box and save the result, then ping the other box, and do the subtraction.

    That shows network latency, which is only one part of speed. For example, my phone's 3G connection has huge latency (100-300 ms) but it can still manage a throughput of 5 Mbps.

    Not my fault if he asked for speed but wanted throughput.

    Latency is reaction-time, not speed.

License under CC-BY-SA with attribution

