The basic principle of net neutrality, which prohibited internet service providers from throttling speeds or intentionally blocking content, applications or websites, is no longer our reality. Now, as bandwidth-hungry applications and connected devices push global IP traffic toward 3.3 ZB per year by 2021, we're left to wonder: what will ISPs do with their free rein over traffic management?
One answer that is garnering attention is a lesser-known network design called network slicing. As with net neutrality, concerns have been raised about its potentially dangerous approach to traffic management.
What Is Network Slicing?
Building on the agility principles of software-defined networking (SDN) and network functions virtualization (NFV), network slicing allows network operators to create multiple logical networks on top of a common shared physical infrastructure. With the goal of tailoring the network to the unique requirements of each environment, every slice carries its own policies for data speed, quality, latency, reliability and security.
Network slicing essentially offers the network as a service. For example, if one customer requires low latency but not high throughput, and another customer requires the exact opposite, two slices could be tailor-made for the specific functionality and operations that each customer requires. At face value, this seems to benefit both parties. Providers are able to capitalize on their network investments and offer a better user experience, while businesses finally have a way to reach their goals with networks that facilitate revenue-impacting activities.
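For a rough sense of what that looks like, here's a minimal sketch in Python, with hypothetical names and values rather than any operator's real API, of two tailor-made slice policies and a toy selection step:

```python
from dataclasses import dataclass

# Illustrative model of per-slice policies; field names and values are
# made up for this sketch, not taken from any operator's actual system.
@dataclass
class SlicePolicy:
    name: str
    max_latency_ms: float       # one-way latency target for the slice
    min_throughput_mbps: float  # guaranteed throughput floor
    reliability: float          # target packet delivery ratio

# Two tailor-made slices on the same physical network: one tuned for low
# latency (e.g. interactive or control traffic), one for high throughput
# (e.g. bulk video delivery).
low_latency_slice = SlicePolicy("low-latency", max_latency_ms=5, min_throughput_mbps=10, reliability=0.99999)
high_throughput_slice = SlicePolicy("high-throughput", max_latency_ms=100, min_throughput_mbps=500, reliability=0.999)

def pick_slice(needs_low_latency: bool) -> SlicePolicy:
    """Toy slice selection: route a customer to the slice matching its dominant requirement."""
    return low_latency_slice if needs_low_latency else high_throughput_slice

print(pick_slice(needs_low_latency=True).name)   # low-latency
print(pick_slice(needs_low_latency=False).name)  # high-throughput
```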
But network slicing isn’t all good. Sure, providers can now deliver a unique network experience for each customer, but will it be fair? Without net neutrality, they still have the ability to throttle traffic deemed less important in favor of higher-paying traffic. That means your slice, riding on the same physical network, could be slower than another, more favored slice.
Performance Challenges of Network Slicing
Network slicing isn’t so clean-cut. Because it is a relatively new network design, questions remain about what good network slicing looks like in practice.
When isolating bandwidth into slices, providers cannot completely prevent a single packet flow from being affected by other packets sharing the same resources. Because bandwidth is an average measure over many packet times, partitioning will not prevent bandwidth-sensitive applications from suffering packet delays or drops over shorter time frames. To counteract the potential performance issues, providers will need a way to express and enforce requirements over different time frames, so that if resource oversubscription occurs, either within or between slices, it can be resolved efficiently.
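To illustrate the point about time frames, here's a toy simulation (not provider or AppNeta code) of a slice whose average offered load exactly matches its allocated rate, yet still drops traffic because the packets arrive in bursts:

```python
import random

random.seed(1)

SLICE_RATE = 10     # packets the slice can serve per tick
BUFFER_SIZE = 20    # packets the slice's queue can hold
TICKS = 1_000

queue = arrived = dropped = 0
for _ in range(TICKS):
    # Bursty arrivals: mostly idle ticks punctuated by 50-packet bursts,
    # averaging ~10 packets/tick -- exactly the allocated rate.
    burst = 50 if random.random() < 0.2 else 0
    arrived += burst
    accepted = min(burst, BUFFER_SIZE - queue)
    dropped += burst - accepted
    queue = max(0, queue + accepted - SLICE_RATE)  # serve up to SLICE_RATE per tick

print(f"average offered load: {arrived / TICKS:.1f} pkts/tick (allocation: {SLICE_RATE})")
print(f"dropped: {dropped} of {arrived} packets ({100 * dropped / arrived:.1f}%)")
```

The long-run average fits the allocation, but the short bursts overflow the slice's buffer, which is exactly the kind of behavior an average bandwidth guarantee alone won't catch.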
Hold Your ISPs Accountable
While network slicing could still be years away from becoming reality, enterprises should begin taking steps now to see how traffic is performing on operators’ networks. That visibility also allows enterprise IT teams to enforce QoS levels and policies.
Network teams need a tool that captures useful metrics to help them understand not only network performance, but also application performance and the end-user experience. With AppNeta, IT teams can:
- Calculate the available capacity of a link based on the average dispersion of a number of packet trains (see the sketch after this list).
- Identify oversubscription when utilized capacity is high without a corresponding increase in AppNeta Usage data.
- Visualize provisioned capacity to ensure it matches the guaranteed end-to-end service level agreement.
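As a rough illustration of that first capacity calculation, here's the generic packet-train dispersion math with hypothetical timestamps; it is the general technique, not AppNeta's TruPath implementation:

```python
from statistics import mean

PACKET_SIZE_BITS = 1500 * 8  # 1500-byte probe packets
TRAIN_LENGTH = 10            # packets per probe train

# Hypothetical (first-packet, last-packet) arrival timestamps for three
# trains, in seconds; real measurements come from timestamped probe replies.
trains = [(0.000000, 0.001130), (1.000000, 1.001095), (2.000000, 2.001168)]

# The dispersion of a train is how much the path spread it out in time.
avg_dispersion_s = mean(last - first for first, last in trains)

# (N - 1) packets' worth of bits were clocked out over the dispersion
# interval, so capacity ~= bits sent / average dispersion.
capacity_mbps = (TRAIN_LENGTH - 1) * PACKET_SIZE_BITS / avg_dispersion_s / 1e6
print(f"estimated available capacity: {capacity_mbps:.1f} Mbps")
```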
While network slicing promises to improve performance, it’s important to independently verify ISP performance and SLA fulfillment regardless. AppNeta’s end-to-end monitoring includes hop-by-hop data for the most complete picture, so you can see promised performance honored from source to destination.
Want to learn more? Read our guide to continuous monitoring with AppNeta’s TruPath™.