It is a long-accepted practice to build three-tier data center LANs based on access, distribution and core switches. But now data center LANs are being re-architected to meet the demands of virtualization, SaaS and cloud computing.
The question now is, what architectures and technologies are appropriate for this new purpose? For example, what is the best alternative to the spanning tree protocol? Does it make sense to converge the LAN and the SAN? These and other strategic questions will be debated in a two-hour “deep dive” panel discussion at the upcoming Interop New York 2011 event, entitled “Architecting and Evaluating Technologies for Your Next Data Center LAN.”
Among the most exciting topics mentioned in the session description is the new OpenFlow technology, which is being standardized by the Open Networking Foundation (ONF). OpenFlow is not a routing protocol itself; it is an open protocol that lets an external controller program the forwarding tables of network devices, and it aims to make networks more flexible and easier to configure by enabling “software-defined networking.” More and more vendors of routers, switches and access points are supporting OpenFlow, including Cisco, Brocade and Netgear.
OpenFlow trades the visibility of easily understood, predictable routing for intelligent provisioning. You can use OpenFlow to define flows and programmatically specify what paths those flows take through the network, independent of the underlying infrastructure. In essence, OpenFlow enables “remote controllers” (network owners and even individual users or applications) to control how traffic moves through a network. The combination of the dynamic capacity allocation of QoS and the dynamic routing of OpenFlow results in a highly optimized network whose performance is extremely difficult to determine.
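To make the flow-based model concrete, here is a minimal Python sketch of a flow table and its lookup logic. The `FlowRule` class, its field names and the sample addresses are illustrative inventions for this article, not part of the OpenFlow specification, which defines a much richer set of match fields and actions.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass(frozen=True)
class FlowRule:
    """One entry in a switch's flow table: match fields plus a forwarding action.
    A None match field acts as a wildcard."""
    src_ip: Optional[str]
    dst_ip: Optional[str]
    out_port: int
    priority: int = 0

def lookup(table: List[FlowRule], src_ip: str, dst_ip: str) -> Optional[int]:
    """Return the output port of the highest-priority matching rule."""
    matches = [r for r in table
               if r.src_ip in (None, src_ip) and r.dst_ip in (None, dst_ip)]
    if not matches:
        # A real OpenFlow switch would punt an unmatched packet to the controller.
        return None
    return max(matches, key=lambda r: r.priority).out_port

# A controller "programs" the network by installing rules like these:
table = [
    FlowRule(None, "10.0.0.5", out_port=2, priority=10),  # steer traffic to a video server
    FlowRule(None, None, out_port=1, priority=0),         # default path for everything else
]
```

The point of the sketch is that forwarding behavior lives in data (the rule list) rather than in a distributed routing protocol, so a controller can change paths for a specific flow without touching anything else.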
A centralized controller computes paths on behalf of the OpenFlow-enabled network gear. Potentially, you could configure policies that would find paths with less congestion, fewer hops, etc. Early adopters recommend OpenFlow for load balancing, flow control and virtual networking, especially in environments where proliferating devices and burgeoning IP-based traffic strain traditional designs built on protocols like Spanning Tree.
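As a sketch of how a controller might favor less-congested paths, the following plain-Python Dijkstra search weights each link by an assumed congestion cost rather than by hop count. The topology and costs are made up for illustration; a real controller would feed in measured link utilization.

```python
import heapq

def least_cost_path(links, src, dst):
    """Dijkstra over a dict {node: [(neighbor, cost), ...]}. The cost can be
    any policy metric: hop count, measured utilization, latency, etc."""
    dist = {src: 0}
    prev = {}
    heap = [(0, src)]
    seen = set()
    while heap:
        d, node = heapq.heappop(heap)
        if node in seen:
            continue
        seen.add(node)
        for nbr, cost in links.get(node, []):
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(heap, (nd, nbr))
    if dst not in dist:
        return None  # no path exists
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return path[::-1]

# Costs reflect assumed congestion: the direct A-C link is heavily loaded,
# so the controller routes around it even though it is fewer hops.
links = {
    "A": [("B", 1), ("C", 9)],
    "B": [("C", 1)],
    "C": [],
}
```

Here the two-hop path A-B-C (total cost 2) beats the congested one-hop path A-C (cost 9), which is exactly the kind of policy-driven decision a traditional spanning tree cannot make.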
Is OpenFlow a game-changing technology? That depends on whom you talk to. For those network administrators looking to assure application service delivery, OpenFlow could be a mixed blessing.
It is well understood that maximizing the benefits of virtualization and cloud computing requires continuous monitoring of network performance. This is a prerequisite to ensuring acceptable application service levels from the perspective of distributed users.
If you’re using OpenFlow to reconfigure the network flow for specific classes of users or specific applications (e.g. video conferencing), how is that affecting other network traffic? Do the changes being made with OpenFlow streamline the overall network topology and serve to reduce latency, jitter and packet loss? Or has something gone awry in the midst of all the changes being directed through OpenFlow?
For that matter, how would you best determine when and how to use OpenFlow to reroute traffic? Performance management data would be critically important for assessing the network and identifying bottlenecks and other weak spots in the network before making changes programmatically.
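One simple way such performance data could drive a rerouting decision is a threshold check against recent measurements for a flow's current path. The metric names and threshold values below are hypothetical choices for illustration, not drawn from any standard.

```python
def should_reroute(samples, latency_ms_max=50.0, loss_pct_max=1.0):
    """Given recent per-path measurements as dicts with 'latency_ms' and
    'loss_pct' keys, report whether the path is violating its targets.
    Thresholds are illustrative defaults, not SLA standards."""
    avg_latency = sum(s["latency_ms"] for s in samples) / len(samples)
    avg_loss = sum(s["loss_pct"] for s in samples) / len(samples)
    return avg_latency > latency_ms_max or avg_loss > loss_pct_max
```

Averaging over a window of samples, rather than reacting to a single spike, keeps the controller from flapping traffic between paths on transient blips.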
Whether your data center LAN is configured traditionally, or might soon leverage a cutting-edge technology like OpenFlow, network performance management tools are essential to ensure the LAN meets the demands of virtualization, IP storage, VoIP, cloud computing, SaaS, mobile devices and the ever-growing list of network-dependent services users are demanding.