John English: Isolating Last-Mile Service Disruptions in Evolved Cable Networks

The adoption of new technologies presents operators with a plethora of new variables to manage on the user and control planes.

The author of this Expert Opinion is John English, Director of Service Provider Marketing and Business Development for Netscout.

Cable operators are increasingly investing in next-generation network infrastructure, including upgrades to support distributed access architecture and fiber to the home.

By bringing this infrastructure closer to subscribers, cable operators are evolving their networks, adopting greater virtualization and redistributing key elements toward the edges. They expect these changes to increase their network’s interoperability and, ultimately, improve the speed and uptime available to subscribers. In turn, cable operators expect these new capabilities will help redefine what services they can offer.

However, these new advanced networks are much more complex than previous generations. By virtualizing or cloudifying functions at the edge, operators risk losing the sort of visibility that is essential to rapidly pinpointing the source of service disruptions – and ensuring their networks are meeting desired performance thresholds for next-gen applications.

The challenge of complexity in virtualized networks

As cable networks evolve, so does their complexity. The adoption of technologies like virtualized Cable Modem Termination Systems (vCMTS) and distributed access architecture presents operators with a plethora of new variables to manage, particularly on the user and control planes.

Always-on applications and those most sensitive to changes in network performance – video games, AR/VR, and remotely piloted drones, to name just a few – require continuous measurement and monitoring to remain reliable. But ensuring consistent quality of service under every condition the network may face is no small feat.

To illustrate, let’s consider how cable operators will manage disruptions in a virtualized environment. When issues inevitably pop up, will they be able to isolate the problem virtually, or will they need to dispatch a technician to investigate? Additionally, once a technician is onsite, will they have advanced intelligence to determine if the source of the problem is hardware- or software-related?

Or will they need to update or replace multiple systems (e.g., customer premises equipment, optical network terminals, routers, modems) to try to resolve the problem? Finally, will they also need to investigate additional network termination points if that doesn’t do the trick?

Indeed, every truck roll or technician dispatch represents a significant expenditure of resources, and a trial-and-error, process-of-elimination approach is a costly means of restoring service that cable operators cannot afford at scale. Likewise, the customers that depend most on constant network availability and performance – such as content distribution networks, transportation services, and industrial manufacturers – won’t tolerate significant disruptions for long.

Packet monitoring for rapid resolution of last-mile disruptions

In the evolving landscape of cable networks, where downtime can lead to customer dissatisfaction, churn, and revenue loss, rapid resolution of last-mile service disruptions is paramount. Cable operators need more advanced network telemetry to understand where – and why – disruptions are occurring. In short, evolved networks require evolved monitoring. This starts with deep packet inspection at scale.

Packets don’t lie, so they offer an excellent barometer of the health of both the control and user planes. They can also help determine last-mile and core latency per subscriber, as well as by dimension, so operators can test how different configurations affect performance.
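To make this concrete, consider a simplified sketch of how a monitoring point near the edge could split the TCP handshake round-trip time into a core-side and a last-mile component for each subscriber. This is an illustrative example only – the Packet fields and function names below are hypothetical, not drawn from any vendor’s product – but it shows the kind of per-subscriber measurement that packet data makes possible:

from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Packet:
    ts: float            # capture timestamp in seconds at the edge probe
    flow_id: str         # identifies the TCP flow (e.g., a 5-tuple hash)
    subscriber_id: str   # subscriber the flow belongs to
    tcp_flags: str       # simplified: "SYN", "SYN-ACK", or "ACK"

def handshake_latency(packets):
    """Estimate per-subscriber (core_rtt, last_mile_rtt) in seconds.

    core_rtt:      SYN from the subscriber -> SYN-ACK back from the server
    last_mile_rtt: SYN-ACK toward the subscriber -> final ACK from the subscriber
    """
    seen = {}                      # flow_id -> {flag: timestamp}
    results = defaultdict(list)    # subscriber_id -> list of (core, last_mile)
    for p in sorted(packets, key=lambda p: p.ts):
        times = seen.setdefault(p.flow_id, {})
        times[p.tcp_flags] = p.ts
        if p.tcp_flags == "ACK" and "SYN" in times and "SYN-ACK" in times:
            core = times["SYN-ACK"] - times["SYN"]
            last_mile = times["ACK"] - times["SYN-ACK"]
            results[p.subscriber_id].append((core, last_mile))
    return results

Aggregating such estimates by modem model, node, or software version is what allows latency to be broken down by dimension as well as by subscriber.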

In the event of a major service disruption, packet monitoring at the edge also enables operators to accurately measure how many subscribers are out of service – regardless of the hardware or software they’re using – and determine whether there is a common cause behind a mass outage, helping technicians resolve problems faster. Finally, proactive monitoring, especially when combined with artificial intelligence, empowers operators to detect and address potential issues before they impact subscribers.
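As a simplified illustration of what proactive monitoring might look like at its most basic, the sketch below flags latency samples that drift far outside a rolling baseline before a subscriber-visible outage occurs. The window size and threshold are arbitrary placeholders, not values recommended by any monitoring product:

from collections import deque
from statistics import mean, stdev

def latency_anomalies(samples, window=30, threshold=3.0):
    """Yield (index, value) for latency samples far outside the recent baseline."""
    history = deque(maxlen=window)
    for i, value in enumerate(samples):
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) > threshold * sigma:
                yield i, value
        history.append(value)

# Example: a steady ~20 ms baseline with a single 200 ms spike gets flagged.
readings = [20.0 + (i % 3) for i in range(40)] + [200.0]
print(list(latency_anomalies(readings)))    # [(40, 200.0)]

Production systems would use far richer models, but the principle is the same: establish a baseline from packet-derived metrics and act on deviations before subscribers call in.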

All in all, cable operators are navigating a challenging yet exciting era of network evolution. The transition to advanced infrastructure and the demand for high-quality, low-latency services necessitate sophisticated monitoring and diagnostic tools. Deep packet inspection technology will continue to play a pivotal role in ensuring the smooth operation of evolved cable networks.

Additionally, in the quest to maintain the quality of service expected by subscribers, operators must abandon the costly process-of-elimination approach and adopt rapid resolution techniques. By doing so, they will not only reduce service disruption but also make more efficient use of resources, ultimately benefiting both their bottom line and the end user’s experience. Evolved cable networks require evolved strategies, and rapid issue isolation through advanced monitoring must be at the forefront of this transformation.

John English is Director of Service Provider Marketing and Business Development at Netscout’s Service Provider unit. He has an extensive background in telecom, including a decade at a major communications service provider and time with numerous OEMs and ecosystem partners. English is an expert on how communications service providers can successfully implement new technologies like 5G and virtualization/cloudification while continually assuring the performance of their networks and services. This piece is exclusive to Broadband Breakfast.

Broadband Breakfast accepts commentary from informed observers of the broadband scene. Please send pieces to commentary@breakfast.media. The views expressed in Expert Opinion pieces do not necessarily reflect the views of Broadband Breakfast and Breakfast Media LLC.

Broadband Breakfast is a decade-old news organization based in Washington that is building a community of interest around broadband policy and internet technology, with a particular focus on better broadband infrastructure, the politics of privacy and the regulation of social media. Learn more about Broadband Breakfast.

Hyperscalers are the New Disrupters in Cloud Computing and Digital Infrastructure

As part of digital infrastructure investment, hyperscale data centers embody the communications and computation necessary for scaling broadband.

Photo of Antonio Piranio, managing partner, Worldview Solutions; Shannon Hulbert, CEO, Opus Interactive; Emil Sayegh, CEO, Ntirety; Tom Wilten, president, Otava; and Michael Levy, product director, Rackspace, by Maggie Yun

TORONTO, November 3, 2022 – The digital infrastructure industry is undergoing a disruption brought on by hyperscalers, companies that build cloud service platforms on hyperscale computing.

Hyperscalers, an increasingly used buzzword in the cloud computing industry, can provide and add compute, memory, networking, and storage resources on the nodes that make up a larger distributed computing environment.

As part of the broadband buildout driven by digital infrastructure investment, hyperscale data centers embody the communications and computation environment necessary for scaling broadband capacity.

Hyperscalers gathered in Toronto

As North America and the rest of the world begin to exit the COVID-19 pandemic, the computing power embodied by hyperscalers has been in high demand. And a key segment of the digital infrastructure community – operators and investors – gathered at the infra/Structure summit here to discuss hyperscalers’ ability to accelerate the digital transformation of cloud computing globally.

Hyperscale computing is increasingly used in cloud and big data infrastructure systems, and is often associated with the clouds used to run Google, Facebook, Twitter, Amazon, Microsoft, IBM Cloud and Oracle.

According to Structure Search, which sponsored the infra/Structure conference here on September 14 and 15, the global data center colocation market is estimated to reach $109 billion in 2027, growing at an average annual rate of 12%. As such, businesses in the industry are fighting to position themselves and claim a piece of this growing market.

“The market is big enough that you will see different tier businesses looking for different solutions,” said Sherri Liebo, senior vice president and head of marketing at Flexential, a data center company. “Small businesses are cloud native and designed to run on public cloud. Mid-market businesses need a hybrid solution of private and public cloud, and the large corporations prefer to have a private cloud where they have more control over the system.”

Hyperscale computing is a type of computing architecture that can flexibly add computing, memory and networking resources. It supports seamless software scaling and enables cloud and “big data” technologies. Some notable names are Amazon Web Services and Microsoft Azure.
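The core idea is horizontal scale-out: capacity grows by adding identical nodes rather than by enlarging a single machine. The toy sketch below illustrates that principle only – the node sizes and the 80 percent utilization trigger are assumptions for illustration, not any provider’s actual orchestration logic:

from dataclasses import dataclass, field

@dataclass
class Node:
    cpu_cores: int = 32     # assumed per-node size, for illustration only
    memory_gb: int = 256

@dataclass
class Cluster:
    nodes: list = field(default_factory=lambda: [Node()])

    def utilization(self, demand_cores: float) -> float:
        return demand_cores / sum(n.cpu_cores for n in self.nodes)

    def scale_out(self, demand_cores: float, target: float = 0.8) -> None:
        """Add identical nodes until projected CPU utilization falls below the target."""
        while self.utilization(demand_cores) > target:
            self.nodes.append(Node())

cluster = Cluster()
cluster.scale_out(demand_cores=200)          # a demand spike arrives
print(len(cluster.nodes), "nodes in pool")   # capacity was added horizontally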

Though hyperscale data centers excel in computation and storage, they alone cannot facilitate communication across data centers. Hence the need to collaborate with transit centers, a type of data center where physical connections and cables allow traffic to travel between different data centers across the globe.

Wholesale third-party providers, such as Equinix, CoreSite and Digital Realty, provide transit center services to these hyperscalers.

The losers from the rise of hyperscalers

The rise of hyperscalers directly impacts retail data centers – also known as colocation centers or carrier hotels – that serve small and medium enterprises. Hyperscalers provide significant value to their smaller customers by helping enterprises reduce capital expenditures and maintain stable operational expenditures.

This allows smaller businesses to focus on their core business rather than investing and managing infrastructure. The time ahead will be challenging for retail data centers – and a revision of their value proposition will be necessary to remain relevant.

The winners

Amid this reshuffling of the cards, two players are benefiting from the growth of hyperscalers: managed service providers and real estate developers. MSPs are consultants who recommend and implement digital infrastructure services for their large or small enterprise customers.

“We have seen five times growth per year in our revenue by helping on-prem customers develop data strategies to support adoption and enablement of hybrid hyperscaler, on-premises and private cloud,” said Sean Charnock, CEO of Faction, a cloud data services company.

Another winner is real estate developers who work closely with hyperscalers in their global expansion journey to design and build data centers for them. As the right hand of the conquerors, real estate developers undoubtedly benefit from the fundamental growth of this digital transformation wave.

The next target

According to Structure Search, in 2022 more than 50% of hyperscalers’ data center capacity – including space, cooling, and connection – is provided by wholesale third-party providers at colocation sites. However, there is a growing trend of self-build activity in planning by four major hyperscalers: Google, Microsoft, Amazon and Meta.

If hyperscalers decide to integrate upward by building more data centers of their own, it will significantly reduce the value captured by wholesale third-party providers. The silver lining for third-party providers is that they still provide high business value as transit centers, which are a vital piece in supporting hyperscalers’ edge infrastructure development needs as they integrate upward.

Are hyperscalers unbeatable?

One might think that hyperscalers, with their scale, capital and technology, can grow without any constraint. Yet, to maintain hyperscale computation, hyperscale data centers consume enormous amounts of electricity.

In fact, data centers use 1% of the global electricity supply annually. They also require large amounts of land and water, and can be susceptible to natural disasters.

The Cloud Continues to Revolutionize Industry, With Businesses Transitioning at a Rapid Pace

Screenshot from the CES 2021 event

January 19, 2021 – “Cloud technology has fundamentally taken a different form during the COVID-19 pandemic,” said Karthik Narain, cloud first lead at Accenture, adding that businesses are transitioning to the cloud at a rapid pace.

Investments in and usage of cloud computing have surged 57 percent due to the impact of the COVID-19 pandemic, said a panel of experts contributing to the Consumer Technology Association’s annual Consumer Electronics Show.

Cloud computing is rapidly reshaping the mobile experience and growing at an incredible rate, said Brian Comiskey, industry intelligence manager at the Consumer Technology Association.

According to Narain, moving a business’ operations to the cloud is a significant commitment, but one that increases efficiency and scalability. Businesses and governments alike have more flexibility to scale up or scale down with cloud computing power, he said.

The value of cloud technology to businesses is also seen in a company’s return on investment. Weighing the associated pros and cons, Narain said one concern businesses may have about cloud technology revolves around security risks, such as ensuring that new and existing cloud consumers’ data is kept secure. Yet he maintained that cloud providers are well-equipped to handle this issue.

Companies can explore cloud computing technology as a testing ground, without committing too much early-on when deciding to move to the cloud, said Edna Conway, vice president and chief security and risk officer for Azure at Microsoft.

When it comes to cloud adoption rates, estimates for the coming years range from as low as 20 percent of companies to more than 60 percent.

Conway said that the variance in adoption predictions is no cause for concern, as it merely reflects the different rates at which industries are moving to the cloud. For example, the healthcare sector is integrating cloud services more slowly than the retail sector.

Cloud-Based Wave of 5G Services Could Revolutionize Education and Democracy, Says Amazon

Screenshot of Jonathan Solomon from the Broadband Communities Virtual Summit

September 26, 2020 – The next wave of 5G has the potential to revolutionize education and democracy, said Jonathan Solomon, solutions architect for Amazon Web Services, in a question and answer session at the Broadband Communities Summit on Wednesday with Bob Knight, executive vice president of Harrison Edwards.

“There is a basic connectivity problem that needs to be overcome,” said Solomon, speaking of the digital divide. “But when we do overcome this, classroom experiences will go to the next level.”

Solomon said that virtual reality, augmented reality and so-called “mixed reality” would be a large part of this change.

For example, instead of dissecting a real frog, which can be messy and make some students sick, students could perform a virtual dissection remotely using special glasses, without needing a frog or the dissection equipment.

“Glasses are a good way to start, but I think there needs to be more than that,” he said.

Just as many people have been clearing rooms in their houses for Zoom calls, Solomon projected that “an immersive room could be in the future.” If this were to happen, Solomon said broadband providers would need to support it, which would include making graphics processing units more ubiquitous, as they are required for these immersive experiences.

These experiences will be enabled by multiaccess edge computing, which allows for synchronous interaction even if parties are far away from each other – and Solomon highlighted this as a core competency of AWS Wavelength, Amazon’s newest up-and-coming technology.

Wavelength, an extension of the Amazon Web Services cloud, is fully managed within a carrier’s network, as opposed to a more remote region. Wavelength allows developers to “deploy applications directly in the network, connected to the network itself, and also leverage the capabilities of AWS and the resources back in the region,” explained Solomon.

The two also discussed broadband’s impact on democracy.

While government can be slow to adapt to new technologies, the pandemic has fostered greater civic engagement. In the move towards a more virtual society, feeling secure to vote has become a major concern.

Solomon responded that Amazon would continue to participate in democracy by providing governments with the tools and technology needed to do “whatever [they] want to do.”
