The Network Engineer’s Guide to Enterprise Backup and Recovery
October 25, 2011

Filed under: Industry Insights

Backup and recovery methods have been around about as long as computers, but, as with everything else computing-related, the emergence of new paradigms, new vendors and new technologies (e.g., protecting virtualized server environments and very large databases) has introduced new options in the past several years. In particular, tape-based backup strategies, and even the virtual tape libraries (VTLs) that emulate tape on disk, are increasingly giving way to disk-to-disk or cloud-based storage scenarios, which in turn require new network performance management strategies. The move to disk is not surprising as the price of disk comes down: disk is faster than tape to write to, faster to recover from and more reliable overall.

As the volume of business data grows exponentially and becomes distributed among servers, desktops and laptops dispersed across multiple sites – whether around town or around the world – backup and recovery challenges intensify. Complex regulations and the potential costs and risks of business disruption raise the ante even further.

How prevalent are cloud-based backup/recovery implementations at the enterprise level today? Best practice has always been to maintain backups off site, and cloud backups are the easiest, most cost-effective way of doing so. Some enterprise server data is now backed up in the cloud; however, the majority of current implementations involve SMB server data and/or branch office and desktop/laptop data.

Gartner worldwide survey data indicates that “cloud-based recovery solutions will increasingly be evaluated by organizations of all sizes.” Many implementations will likely involve branch offices, where data is currently not as well protected as at main offices.

Another rapidly ascending backup technology is deduplication of the data being backed up to disk. Deduplication reduces both backup/recovery times and data volume, and hence backup cost as well. Some solutions are bundled with the disk hardware, while others come with backup software. Most deduplication technologies work inline, during the transfer of the data to disk, making them both compute- and network-intensive.
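To make the mechanics concrete, here is a minimal sketch of the fingerprint-and-index idea behind most dedupe engines. It uses fixed-size chunks and a hypothetical in-memory chunk store purely for illustration; production systems typically use variable-size chunking and persistent on-disk indexes, but the principle is the same.

```python
import hashlib

CHUNK_SIZE = 4 * 1024 * 1024  # fixed 4 MB chunks, kept simple for illustration

def dedupe_backup(path, chunk_store):
    """Back up one file into chunk_store; return its manifest of fingerprints."""
    manifest = []
    with open(path, "rb") as f:
        while True:
            chunk = f.read(CHUNK_SIZE)
            if not chunk:
                break
            fp = hashlib.sha256(chunk).hexdigest()
            if fp not in chunk_store:      # unique chunk: store it exactly once
                chunk_store[fp] = chunk
            manifest.append(fp)            # duplicates cost only a manifest entry
    return manifest

def restore(manifest, chunk_store, out_path):
    """Recovery is just replaying the manifest against the chunk store."""
    with open(out_path, "wb") as f:
        for fp in manifest:
            f.write(chunk_store[fp])
```

Because every duplicate chunk collapses to a single stored copy plus a fingerprint, repeated full backups of largely unchanged data shrink dramatically.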

Another option is to perform deduplication at the backup server level. This has the advantage of reducing network traffic between the backup server and the backup target, but not between the client and the backup server. Some products still perform deduplication post-transfer, at the backup target. While slower overall, this approach eliminates the chance that CPU-intensive deduplication processes will create a bottleneck between the backup server and the secondary storage target.
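As a rough sketch of that post-process approach, the target ingests raw data at full speed and a separate background pass folds out the duplicates afterward; the staging_blobs input and chunk_store below are hypothetical stand-ins, not any vendor's actual interface.

```python
import hashlib

CHUNK_SIZE = 4 * 1024 * 1024

def post_process_dedupe(staging_blobs, chunk_store):
    """Fold duplicates out of already-landed backup data in a background pass."""
    manifests = []
    for blob in staging_blobs:            # raw data was already written at full speed
        manifest = []
        for i in range(0, len(blob), CHUNK_SIZE):
            chunk = blob[i:i + CHUNK_SIZE]
            fp = hashlib.sha256(chunk).hexdigest()
            chunk_store.setdefault(fp, chunk)   # keep one copy per fingerprint
            manifest.append(fp)
        manifests.append(manifest)        # raw blobs can now be purged from staging
    return manifests
```

The point of the split is timing: the CPU-heavy hashing happens after the transfer window, so it cannot slow the incoming backup stream.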

Emerging technology now enables deduplication on the server that hosts the application. Eventually deduplication will take place as a function of primary storage rather than as a backup/archiving function.
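A sketch of why source-side deduplication is so attractive on constrained networks: the client trades one small fingerprint exchange for the chance to skip every chunk the server already holds. The server object and its two methods are hypothetical placeholders for whatever protocol a given product actually uses.

```python
import hashlib

def source_side_backup(chunks, server):
    """Send only the chunks the backup server does not already hold."""
    fingerprints = [hashlib.sha256(c).hexdigest() for c in chunks]

    # One small round trip: ask which fingerprints are unknown to the server.
    missing = set(server.missing_fingerprints(fingerprints))

    # Only unknown chunks cross the network; duplicates cost a hash, not bandwidth.
    for fp, chunk in zip(fingerprints, chunks):
        if fp in missing:
            server.upload_chunk(fp, chunk)

    return fingerprints  # the manifest needed for a later restore
```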

What all of today’s disk- and cloud-based backup/recovery technologies have in common is a complete reliance on the performance of the networks carrying the data. Often these are the same networks that support a growing diversity of IP-based applications such as SaaS and cloud services, desktop virtualization and video conferencing. If you are short of WAN bandwidth, be sure to choose a pre-transfer dedupe solution.

When network capacity is overtaxed, backups fail and transfer times grow unacceptably long. At the same time, VoIP and video conference calls can degrade or drop, and network-dependent business applications can stumble and crash, resulting in lost data and productivity. Failures such as these are inexcusable on a business network.

Meanwhile, server-based data volumes are growing far faster than network bandwidth can be built out – even with source-based data deduplication to reduce bandwidth demands. WAN-optimizing remote backup technologies like “smart bandwidth throttling” and “auto-resume on network reconnect” can help ensure that backups don’t break, or at least simplify the restart process.
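As an illustration of what “auto-resume on network reconnect” amounts to in practice, the sketch below tracks the last byte offset the remote side has confirmed and continues from there after a dropped connection, with exponential backoff between retries. The remote object and its methods are hypothetical, standing in for a product's transfer API.

```python
import time

BLOCK = 1024 * 1024  # send in 1 MB blocks

def resumable_upload(path, remote, max_retries=8):
    """Continue an interrupted transfer from the last confirmed byte offset."""
    offset = remote.committed_offset(path)   # how much has already arrived
    retries = 0
    with open(path, "rb") as f:
        f.seek(offset)
        while True:
            block = f.read(BLOCK)
            if not block:
                return                        # whole file confirmed
            try:
                remote.append(path, offset, block)
                offset += len(block)
                retries = 0
            except ConnectionError:
                retries += 1
                if retries > max_retries:
                    raise
                time.sleep(min(2 ** retries, 300))        # back off, then retry
                offset = remote.committed_offset(path)    # resync after reconnect
                f.seek(offset)
```

The key design point is that a failed WAN link costs a resync and a retry, not a restart of the entire multi-hour backup job.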

Many cloud backup solutions tout their ability to manage how much bandwidth their backups use. Being able to guarantee that a backup will never use more than 5 Mbps of a 10 Mbps connection is good, but if 6 Mbps are already in use when the backup starts, a static cap like that lacks the intelligence to manage the contention properly.
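Here is a sketch of what a more intelligent throttle might look like: rather than enforcing a static cap, it samples how busy the link already is and paces the backup into whatever headroom remains. The measure_other_traffic_bps hook is hypothetical; in practice that figure might come from SNMP interface counters or flow records.

```python
import time

LINK_CAPACITY_BPS = 10_000_000   # a 10 Mbps WAN link
HEADROOM = 0.9                   # never push the link past 90% utilization

def adaptive_send(blocks, sock, measure_other_traffic_bps):
    """Pace backup traffic into whatever capacity is actually free right now."""
    for block in blocks:
        # Wait until the link has spare capacity for backup traffic at all.
        while True:
            budget_bps = max(
                0.0, LINK_CAPACITY_BPS * HEADROOM - measure_other_traffic_bps()
            )
            if budget_bps > 0:
                break
            time.sleep(1.0)      # link saturated; yield to production traffic
        sock.sendall(block)
        time.sleep(len(block) * 8 / budget_bps)   # pace so this block fits the budget
```

With a scheme like this, the backup in the 6 Mbps scenario above would slow itself to roughly the 3 Mbps of genuine headroom instead of blindly claiming its 5 Mbps allowance.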

But whatever technologies are in place to reduce data volume or otherwise streamline IP traffic, the basic bottleneck of network performance remains a critical issue in the success of disk- or cloud-based backup/recovery scenarios. Organizations need network performance management capabilities in order to ensure the availability of business applications during backups. This requires the ability to monitor capacity, latency, packet loss and jitter in real time across the distributed environment.
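As a very crude illustration of active path monitoring, the sketch below times TCP handshakes to a backup target to estimate latency, jitter and loss. A real network performance management tool would use dedicated probes, SNMP or flow telemetry, but the metrics it reports are the same.

```python
import socket
import statistics
import time

def probe_path(host, port=443, samples=20, timeout=2.0):
    """Estimate latency, jitter and loss along the path to a backup target."""
    rtts, lost = [], 0
    for _ in range(samples):
        start = time.monotonic()
        try:
            # Time a full TCP handshake as a rough round-trip measurement.
            with socket.create_connection((host, port), timeout=timeout):
                rtts.append((time.monotonic() - start) * 1000.0)  # in ms
        except OSError:
            lost += 1                       # timeout or refusal counts as loss
        time.sleep(0.25)
    return {
        "latency_ms": statistics.mean(rtts) if rtts else None,
        "jitter_ms": statistics.stdev(rtts) if len(rtts) > 1 else 0.0,
        "loss_pct": 100.0 * lost / samples,
    }

# Example: print(probe_path("backup-target.example.com"))  # hypothetical host
```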