At Light Reading's recent 2020 Vision summit, a number of telecom operators expressed concern about whether they have the staff skill sets and organization needed to support innovation. Is this a valid concern? Yes, but it may not be as big as they fear, and it is addressable.
How Big Is the Need?
The need for new skills and thinking is significant. Service providers are moving from a world of closed appliances to a Telco Cloud model of rapid service innovation. These service providers need to greatly enhance their capability for software development and integration. They also need to change the way they organize and work, moving from operational silos and waterfall development to agile and DevOps for development and delivery.
Most telecom operators do have some in-house software development capability. However, as a percentage of overall staff, these numbers are low compared to those at cloud and OTT operators.
Not Everything Changes — The Value of Domain Knowledge
At the same time, communications service providers have a wealth of skills and domain knowledge that is still relevant and needed. I discussed this point during a recent “Real CTOs of NFV” interview with Tim Naramore of Masergy. Tim noted that Masergy’s new Virtual f(n) service, based on NFV at the customer premises, uses the current Masergy staff and operational systems.
“One of the reasons that we like our Virtual Network Function partners is that we already use Brocade in our network. We already use Fortinet in our network. When we rolled this service out I didn’t have to go to the NOC and teach them how to use a different firewall from another company. I was able to say, ‘This is the same thing. It’s just a different IP address.’
How you get it set up is different, but that’s a one-time thing. How you maintain it going forward and how you interact with it on a daily basis, that’s the same. I wanted to deliver it to those brand names for one, but also I wanted to leverage the operational knowledge that I already have in my group.”
Tim’s logic applies to other aspects of service providers’ operations. End users don’t care about the internal guts of a service; they want the features and availability they are accustomed to receiving. Service provider staffers have long-term experience with those service requirements.
We see that some aspects of the service provider world are not changing. So where does the pain of change come in?
Old Dogs Can Learn New Tricks
We at the former Overture (now part of ADVA Optical Networking) experienced some of the angst that operators have described. Overture had many years of experience with traditional Carrier Ethernet appliances, and a large installed base of major service providers worldwide. Starting in 2012 we began re-inventing the Overture product line to focus on delivering services using virtualized functions. As a result, we also had to re-invent ourselves. Principal Engineer David Griswold had a front-row seat to this process at Overture. After a long career in embedded development, he learned a new set of development processes, languages and programming models.
The first change was in the complexity of the environment.
“With an appliance like our traditional products, the environment is very controlled,” David notes. “In a cloud-based model, you have a lot more variability: processor type, speed and core count, cache and memory size, core affinities, NIC flavors, kernel versions, etc. Making the packet performance deterministic is much more difficult than in the appliance model.”
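The environmental variability David describes can be made concrete with a small illustration. As a hedged sketch (not Overture's actual tooling), a cloud-deployed packet-processing application might begin by probing the host it landed on, since none of these properties can be assumed at design time the way they can in an appliance:

```python
import os
import platform

def describe_environment():
    """Collect a few runtime properties that vary across cloud hosts.

    In an appliance these are fixed at design time; in a cloud
    deployment each one can differ per host, which is one reason
    deterministic packet performance is harder to achieve.
    """
    return {
        "machine": platform.machine(),      # CPU architecture
        "core_count": os.cpu_count(),       # logical cores available
        "kernel": platform.release(),       # kernel version
        "system": platform.system(),        # OS family
    }

if __name__ == "__main__":
    for key, value in describe_environment().items():
        print(f"{key}: {value}")
```

A real deployment would go further, probing cache sizes, NUMA topology, core affinities and NIC capabilities before pinning packet-processing threads, but the principle is the same: discover the environment, then tune to it.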
The next area of change is in the increased use of open source software.
“The sheer volume of information can be overwhelming,” David explains. “For example, I receive dozens of emails daily just for the discussion list for DPDK. To work effectively in this environment you have to get over the need to know how everything works. In addition, you wind up putting a lot of trust in open source, and that can be scary.”
The changes described above create a need to balance innovation with stability.
“It’s important to keep up with the changing versions of tools like OpenStack, but you don’t necessarily want the absolute latest code, which will be buggy. You have to strike a balance, and also have a good upgrade/downgrade strategy and machinery,” he says.
I asked David if he felt his previous embedded experience helped in this transition.
“Absolutely. We have always been creating software that moves and manipulates packets, and we are very familiar with the operators’ services and networks. That domain knowledge gave me the background to succeed in the new virtualized environment,” he adds. “Developers must have a willingness to learn and change, but it is also important for leadership teams to make the goals clear and explain why change is needed.”
The Conversation is Just Beginning
The topic of staffing and organizing for innovation is far too large to cover in one post, so this is the first in a series on this topic. As we dive deeper into this transformation I am interested in hearing how others are addressing this transition. Please tell me your view!
Here are links to the complete series: Part 1, Part 2, Part 3.