2016-01-18

Good evening Server Fault!

I was recently contracted to help a company clean up and consolidate their network/server equipment, and I'm stuck on one issue. I set them up with a new R1Soft box for backups. The backup server is on the same network as the two servers being backed up to it. Although the three servers are on different VLANs, they are all plugged into the same Cisco Catalyst gigabit switch.

I'm still running the initial backup, but it is crawling along at less than one megabit per second for each server. Occasionally the speed drops to 0, then climbs all the way back up to a whopping 800 Kbps.

Here's a quick overview of the equipment involved:

Backup Server:

- CentOS 6.7, latest kernel, fully up to date
- Server Backup Manager SE, latest version (installed today)
- SELinux disabled
- Physical machine

Servers Being Backed Up (2 with identical configs):

- CentOS 6.7 with CloudLinux
- Server Backup Agent, latest version (updated today)
- SELinux disabled
- Xen HVM guests

The two servers were originally being backed up to an off-site server. Looking at the Task History on that original backup server, the average throughput for those jobs ranged from a low of 499.5 KB/s to no more than about 4 MB/s.

I tested the speed between these two servers using two methods. First, an iperf test showed 930 Mbit/s between them. Second, I created a 1 GB test file, put it on the web server, and curl'd it from the backup server, which came through at about 120 MB/s. Those two results are roughly consistent (930 Mbit/s is about 116 MB/s), and both are far better than anything I've seen from R1Soft between these two machines.
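
In case the details matter, the tests were along these lines (commands reconstructed from memory; the hostnames are placeholders for the actual boxes):

    # On the backup server: start an iperf server listening on the default port (5001)
    iperf -s

    # On the web server: push traffic to the backup server for 10 seconds
    # (backup01.example.com stands in for the real backup server hostname)
    iperf -c backup01.example.com -t 10

    # HTTP transfer test: pull the 1 GB test file from the web server and discard it,
    # letting curl's progress meter report the average download speed
    curl -o /dev/null http://web01.example.com/testfile-1G.bin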

So here's where I'm stuck: I have a hunch that something was put in place to limit bandwidth and prevent overages with their colocation provider, but there doesn't appear to be any traffic shaping on any of the servers involved.
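
For what it's worth, these are the sorts of checks I ran on each box to rule out local shaping (eth0 is just an example; substitute the real interface name):

    # List queueing disciplines and classes; a bare pfifo_fast/mq with no classes
    # means no tc-based shaping is configured on this interface
    tc qdisc show dev eth0
    tc class show dev eth0

    # Confirm the NIC actually negotiated gigabit full duplex
    ethtool eth0

    # Look for drops or errors in the interface counters
    ip -s link show dev eth0

None of that turned up any rate limiting on the servers themselves, which is why I'm starting to suspect something upstream or in the R1Soft configuration.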

Can someone please recommend a good next step? Thanks!
