Network performance during a large data transfer

3

I'm using DD over Netcat to copy a hard disk from one system to another, straight clone.

I booted RIP on each system.

target system:

    nc -l -p 9000 | pv | dd of=/dev/hda

source system:

    dd if=/dev/hda | pv | nc 9000 -q 10

The transfer seems to be hovering around 10 or 11 MB/s, with bursts near 18 MB/s. The two systems are connected to a gigabit switch. ethtool eth0 on both shows:

Settings for eth0:
    Supported ports: [ TP ]
    Supported link modes:   10baseT/Half 10baseT/Full 
                            100baseT/Half 100baseT/Full 
                            1000baseT/Full 
    Supports auto-negotiation: Yes
    Advertised link modes:  10baseT/Half 10baseT/Full 
                            100baseT/Half 100baseT/Full 
                            1000baseT/Full 
    Advertised auto-negotiation: Yes
    Speed: 1000Mb/s
    Duplex: Full
    Port: Twisted Pair
    PHYAD: 0
    Transceiver: internal
    Auto-negotiation: on
    Supports Wake-on: umbg
    Wake-on: g
    Current message level: 0x00000007 (7)
    Link detected: yes

I think I may be mixing up some of the transfer-rate numbers, but is this an expected speed for this kind of transfer?

EDIT: I just tried two different cables marked as Cat 5e compliant and used a crossover connector to link the two systems directly. While ethtool still reports a speed of 1000Mb/s, the transfer rate is only slightly higher than before. So either the drives are sucktacular, the network cards are crud, or the processor is the bottleneck, I'm guessing.

EDIT 2: I just tried taking a second hard disk from one of the units that needs to be cloned to, and physically connecting it to the master. Originally one IDE channel went to a hard disk and another went to the CD-ROM. I took the master's hard disk and connected it to the same channel as the CD-ROM, so they should be /dev/hda and /dev/hdb. I took the cable that was on the CD-ROM and connected it to the "blank slate" disk, so it should be /dev/hdc.

I rebooted and ran "dd if=/dev/hda | pv | dd of=/dev/hdc", and I'm getting a whopping... 10 MB/s. It fluctuates wildly, dropping to 8 MB/s and spiking to 12 MB/s.

So... I'm thinking it is the hard disks that are giving crap performance. I'm just so used to the network being the bottleneck that it's weird for me to think of the disks as being the problem!
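(For reference, one way to check each drive's raw sequential read rate on its own would be something like the following; whether hdparm is actually on the RIP disc is an assumption.)

    # Per-disk cached and buffered read timings (hdparm assumed available on the boot disc)
    hdparm -tT /dev/hda
    hdparm -tT /dev/hdc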

networking
performance
netcat
asked on Server Fault Aug 3, 2009 by Bart Silverstrim • edited Aug 5, 2009 by Bart Silverstrim

5 Answers

2

What do dd if=/dev/zero of=/dev/hda on the destination and dd if=/dev/hda of=/dev/null on the source give? The lower of the two will give you your best case.
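Roughly like this (a sketch; the bs/count values are just to keep the test short, and the write test overwrites the start of the destination disk, which should be fine here since it's about to be cloned anyway):

    # Destination: raw sequential write speed (destroys data on /dev/hda!)
    dd if=/dev/zero of=/dev/hda bs=1M count=1024

    # Source: raw sequential read speed
    dd if=/dev/hda of=/dev/null bs=1M count=1024

    # dd prints a MB/s figure when it finishes; the lower of the two
    # numbers is the best case for the network copy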

If you have spare CPU, consider gzip --fast in the pipeline.

It is also worth considering enabling jumbo frames (a larger MTU).
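Something along these lines, assuming the NICs and the switch actually support jumbo frames (9000 is the usual value, not one taken from the question):

    # On both systems; the switch must also be configured to pass jumbo frames
    ifconfig eth0 mtu 9000

    # Confirm the interface picked up the new MTU
    ifconfig eth0 | grep -i mtu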

answered on Server Fault Aug 3, 2009 by James
1

I would expect more like 20 MB/s. Are you using Cat 6 / Cat 5e cabling?

I would also run iostat (part of the sysstat package) and see whether it thinks the drives are at 100% utilization:

iostat -c 2 -x

Here is a nice article on gigabit networks by Tom's Hardware: Gigabit Ethernet: Dude, Where's My Bandwidth?

answered on Server Fault Aug 3, 2009 by Kyle Brandt • edited Aug 3, 2009 by Kyle Brandt
0

Pipe your data stream through compress/uncompress to boost your overall throughput at the cost of some CPU.
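For example, roughly like this (a sketch only: gzip --fast keeps the CPU cost down, and <target-host> is a placeholder for the receiving system's address, which isn't shown in the question):

    # Target system: receive, decompress, write to disk
    nc -l -p 9000 | gzip -d | pv | dd of=/dev/hda

    # Source system: read, compress lightly, send
    # (<target-host> is a placeholder, not from the original commands)
    dd if=/dev/hda | pv | gzip --fast | nc <target-host> 9000 -q 10

Whether this actually helps depends on how compressible the disk contents are; on mostly incompressible data the CPU cost can outweigh the saving.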

answered on Server Fault Aug 3, 2009 by Chris Nava
0

I get 12 MB/s, but I used to get a lot less - my particular problem was the drives. I was using a nasty cheapass Dell 'storage server' with a horrible RAID card. Scrapping the hardware RAID and replacing it with a JBOD configuration made a large difference, even when I then put software RAID 5 on top.

I'd also consider setting jumbo frames on the switch, which will improve your throughput considerably. (Use ifconfig eth0 mtu 9000 to set it temporarily, or add MTU=9000 to your ifcfg file to set jumbo frames on your Linux interfaces.)

answered on Server Fault Aug 4, 2009 by gbjbaanb
0

Most of the slowness comes from the hard disk bottleneck. Your average hard disk will push 40-50 MB/s across a network with a completely idle disk/system/network. Add in the overhead of dd feeding a plain TCP netcat pipe, which is in no way optimised for network traffic, and the speeds begin to drop way off.

Much of the slowdown comes from the TCP window size. A packet goes across and has to wait for an acknowledgement before more data can be sent. Usually an internal network has such low latency that nobody notices this, but when you dump data through a pipeline that isn't tuned for the network, the window sizing starts to go all over the place. A great example of this was Vista's network file copy before SP1, which gave transfer speeds of less than 100 KB/s when the TCP window tuning got it very wrong.
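If window sizing really is the limit, the usual knobs on Linux are the kernel's TCP buffer ceilings; a rough sketch, with illustrative values that are not from this answer (recent kernels autotune these fairly well):

    # Show the current TCP receive/send buffer settings (min default max, in bytes)
    sysctl net.ipv4.tcp_rmem net.ipv4.tcp_wmem

    # Raise the ceilings so the window can grow on a fast LAN (illustrative values)
    sysctl -w net.core.rmem_max=4194304
    sysctl -w net.core.wmem_max=4194304
    sysctl -w net.ipv4.tcp_rmem="4096 87380 4194304"
    sysctl -w net.ipv4.tcp_wmem="4096 65536 4194304"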

Also for reference, I have two boxes here that consistently push 60-80 MB/s across the network to each other. They do have dedicated NICs, RAID 10, and a bunch of 10,000 RPM SAS drives to give that kind of speed.

answered on Server Fault Aug 4, 2009 by Ryaner

User contributions licensed under CC BY-SA 3.0