How to implement a VoIP platform in public cloud

Published: 2021/10/21

Leveraging cloud native technologies and public cloud providers is becoming ever more common across different sectors and industries, and telecom companies are no exception. This will be even more visible with the adoption of 5G and the digital transformations it drives. Employing solutions that use the public cloud not only helps keep initial investment costs low, but also ensures a high level of scalability and agility.

How can you effectively deploy a telecommunication VoIP platform in the public cloud? What advantages do the Infrastructure as a Service (IaaS) and Platform as a Service (PaaS) offerings of public cloud providers bring to telco platforms? Read on to learn more.

VoIP 101

The most common VoIP signaling protocol used in telecommunication platforms is SIP (Session Initiation Protocol), a request-response protocol that somewhat resembles HTTP. It is text-based, contains headers and a payload, and employs response codes similar to those of HTTP. The primary function of SIP in typical VoIP platforms is session setup and management, for example for phone calls. The SIP payload exchanged by the parties of a telephone call is typically SDP (Session Description Protocol), which carries the media parameters of the call.

A typical SIP request for initiating a VoIP call looks like this:

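(The request below is a representative example; all addresses, tags and identifiers are illustrative, and the SDP body it announces is shown a little further on.)

    INVITE sip:bob@example.com SIP/2.0
    Via: SIP/2.0/UDP 203.0.113.10:5060;branch=z9hG4bK776asdhds
    Max-Forwards: 70
    From: "Alice" <sip:alice@example.com>;tag=1928301774
    To: <sip:bob@example.com>
    Call-ID: a84b4c76e66710@203.0.113.10
    CSeq: 314159 INVITE
    Contact: <sip:alice@203.0.113.10:5060>
    Allow: INVITE, ACK, CANCEL, BYE, OPTIONS, UPDATE
    Expires: 300
    Content-Type: application/sdp
    Content-Length: 189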

As we mentioned before, there are some similarities between SIP and HTTP. As you can see, there are a few common headers, such as Contact, Allow, Expires, Content-Type and Content-Length.

Thanks to the information in the SDP payload, both endpoints of a VoIP call can prepare a dedicated channel for the transmission of an audio stream.

The media packets (i.e. voice), digitally coded using a codec such as G.711, are then sent over RTP (Real-time Transport Protocol). Both RTP and (most of the time) SIP use UDP in the transport layer. Whereas SIP usually uses port 5060 for communication, RTP ports are negotiated dynamically by the parties, typically in the range between 10000 and 20000.
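For illustration, the SDP body announced by the example INVITE above could look like this (again, the IP address and port are example values; payload type 0, PCMU, is the G.711 µ-law codec):

    v=0
    o=alice 2890844526 2890844526 IN IP4 203.0.113.10
    s=-
    c=IN IP4 203.0.113.10
    t=0 0
    m=audio 10000 RTP/AVP 0 101
    a=rtpmap:0 PCMU/8000
    a=rtpmap:101 telephone-event/8000
    a=sendrecv

The m= line tells the remote party to send G.711 audio to UDP port 10000 at the address given in the c= line, which is exactly how the dynamically negotiated RTP ports mentioned above come about.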

A SIP-based VoIP platform such as a soft-switch will typically be composed of SIP signaling and RTP media handling nodes, some type of application server for call control and business logic processing, and database and web servers for service provisioning.

The high-level logical architecture of our platform in classical, on-premise deployment, is presented below:

[Figure: High-level logical architecture of the platform in a classical on-premise deployment]

OpenSIPS and RTPEngine are the core VoIP nodes processing SIP and RTP packets. A Java-based routing application server executes business logic using the service configuration stored in a PostgreSQL database. There is also a GUI application for service provisioning, some auxiliary components for CDR processing, and HOMER for VoIP traffic monitoring and analysis.
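To give a flavour of how the two core components cooperate, here is a heavily simplified fragment of an OpenSIPS configuration that hands media off to RTPEngine. It is a sketch built on the stock rtpengine module, not our production routing script, and the control socket address is only an example:

    loadmodule "sipmsgops.so"
    loadmodule "rtpengine.so"

    # Control socket of the RTPEngine daemon (example address)
    modparam("rtpengine", "rtpengine_sock", "udp:127.0.0.1:2223")

    route {
        # ... normal request validation and routing omitted ...
        if (is_method("INVITE")) {
            # rewrite the SDP so the audio stream is relayed through RTPEngine
            rtpengine_offer();
        }
        # ... forwarding, accounting, etc. omitted ...
    }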

Before examining how to deploy a VoIP platform in a public cloud environment and how the architecture changes, it's worth evaluating the advantages of deploying a VoIP platform in a public cloud in the first place.

The benefits of public cloud

There are many reasons why companies want to deploy services, including VoIP platforms, in the cloud. The most obvious benefits come from the fact that the public cloud offers (in theory) unlimited computing power and scalability, making it easy and fast to add resources. The other obvious benefit is that there is no initial cost related to the purchase and deployment of physical hardware, as there is in a standard on-premise installation.

In our project the key decision factors for our client were a) the initial project costs and b) the implementation duration. The fact that one does not have to invest in physical infrastructure means that the initial costs are lower – no servers, rack space, network switches or the associated engineering costs. It also means that no time is required for these activities, so the project implementation is in turn shorter.

Public cloud deployment: key elements

In this blog we will use AWS as the public cloud, since this is the provider we used in the project and can therefore share more deployment details.

The first step in preparing a public cloud deployment is deciding on the overall architecture and which PaaS and other cloud services to use. Let's start by discussing the easy things – here your decisions will be straightforward.

For the application server and the related back-end, it makes a lot of sense to go with the PaaS-based services provided by the cloud.

Database

The database service offered as PaaS handles tasks related to installation and maintenance and provides mechanisms such as high availability and backup. In our case we chose Amazon RDS for PostgreSQL. Backed with fast SSD storage, it is very efficient. In our solution we used one primary database instance for writes, used by the provisioning application, and a pool of read replicas for serving requests from the routing logic nodes. Traffic to the database is balanced and managed by HAProxy with proper health checks, so if one replica fails it can be disabled from the pool. Having a pool of database instances greatly simplifies scaling the solution.
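For illustration, the HAProxy configuration for the read-replica pool could look roughly like the sketch below (the hostnames, listening port and check user are placeholders, not our actual setup):

    listen postgres_read
        bind *:5433
        mode tcp
        balance roundrobin
        # lightweight PostgreSQL-protocol health check; failed replicas drop out of the pool
        option pgsql-check user haproxy_check
        server replica1 replica1.internal.example:5432 check
        server replica2 replica2.internal.example:5432 check

The routing application then connects to the HAProxy frontend instead of a specific replica, so replicas can be added or removed without touching the application configuration.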

Application and Web Servers

Currently, most web and business applications are containerized and designed with cloud native principles in mind. The most popular system for orchestrating containerized applications is, of course, Kubernetes. It provides out-of-the-box mechanisms for high availability, scaling, rolling updates and management. Here we also went with the managed PaaS option, using Amazon EKS for orchestrating containerized applications, along with Amazon ECR for storing container images. For these applications a Kubernetes Deployment was used to run the Pods as ReplicaSets. The most important component – the routing server application – is a stateless REST microservice, so it can be easily scaled in Kubernetes. In our first on-premise deployment there was one important bottleneck – connections from the routing server to the database replicas – which was overcome thanks to the speed of the SSDs.
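As an example, a Deployment for such a stateless routing service might look like the sketch below (the name, image reference, replica count and health-check path are illustrative, not our actual manifests):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: routing-server
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: routing-server
      template:
        metadata:
          labels:
            app: routing-server
        spec:
          containers:
          - name: routing-server
            # image stored in Amazon ECR (account and region are placeholders)
            image: 123456789012.dkr.ecr.eu-west-1.amazonaws.com/routing-server:1.0.0
            ports:
            - containerPort: 8080
            readinessProbe:
              httpGet:
                path: /health
                port: 8080

Scaling out is then just a matter of increasing the replica count (or attaching a HorizontalPodAutoscaler), which is exactly what makes stateless REST services such a good fit for Kubernetes.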

VoIP Servers

For the VoIP signaling and media processing nodes, things are not that straightforward, and more aspects related to VoIP traffic have to be considered. For the VoIP processing components, we suggest sticking with the more traditional approach and deploying them as virtualized EC2 compute instances. Why? We analyzed and considered deploying them as Kubernetes applications as well, however there were just too many drawbacks. Kubernetes was primarily designed to support stateless services using HTTP, and running stateful VoIP applications – with more complicated SIP and RTP traffic that uses UDP packets and large numbers of ports – can be cumbersome. Standard Kubernetes functionality cannot readily expose and handle the UDP traffic of SIP and RTP services. In order to properly process SIP and RTP networking, the Pods would have to be deployed in host network mode with headless services, or with a custom-developed CNI plugin and a dedicated ingress controller with some specific method for provisioning UDP traffic through mechanisms like config maps or annotations. The cons outweighed the pros, so, in order not to increase project costs, we decided to use EC2 for the VoIP components.
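To illustrate what the first of those options would mean in practice, an RTPEngine Pod would have to bind its dynamically negotiated UDP port range directly on the node, roughly like this (a purely illustrative sketch of the approach we decided against; the image name is a placeholder):

    apiVersion: v1
    kind: Pod
    metadata:
      name: rtpengine
      labels:
        app: rtpengine
    spec:
      # bypass cluster networking so the SIP/RTP UDP ports are opened
      # directly on the node's network interface
      hostNetwork: true
      containers:
      - name: rtpengine
        image: registry.example.com/rtpengine:latest

Combined with a headless Service for discovery this works, but it gives up much of what Kubernetes networking normally provides and ties each Pod to its node – one of the reasons the cons outweighed the pros for us.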

Auxiliary Components

We also used EC2 compute instances for some auxiliary components related to monitoring and billing integration with on-premise systems. From the point of view of the project goals, it did not make sense to put effort into deploying these components using cloud services.

The overall architecture of the AWS cloud deployment is shown below:

[Figure: The overall architecture of the AWS cloud deployment]

The results and takeaways of implementing a VoIP platform in public cloud

We used c4.large instances (2 vCPU, 3.75 GB memory) for VoIP and m5.xlarge instances (4 vCPU, 16 GB memory) for the application server nodes. The initial deployment had 4 RTPEngine instances and 2 routing application server instances. Before the commercial launch, performance and stress tests were carried out, during which traffic of 4,000 concurrent calls (with two-way RTP) and 80 calls per second was run without any errors.

At the time of writing, the platform that we deployed in the AWS public cloud (which provides Class 4 Softswitch functionality for wholesale VoIP traffic) has been running for over a year with no service interruptions. Some inconveniences may be related to maintenance activities carried out by the cloud provider.

It definitely makes sense to use the managed services offered in the PaaS model by the cloud provider – they work well, eliminate tedious tasks related to installation and maintenance, and make scaling quite simple.

Employing Kubernetes for VoIP remains a question mark, though. It seems to be the direction taken for developing Cloud Native Network Functions, especially in the 5G core. Both the Cloud Native Computing Foundation (CNCF) Telecom User Group and the Telecom Infra Project's (TIP) Open Core Network (OCN) project group clearly point to containers orchestrated by Kubernetes as the underlying infrastructure technology for Cloud Native Network Functions (CNF). TIP, however, in its OCN project, does not recommend a public cloud-managed K8s service and proposes deploying its own K8s infrastructure (following their recommendations) on top of bare VM compute instances. For K8s networking, OCN recommends DANM as the CNI plugin and Traefik as the ingress controller. What is intriguing to us is the idea of using the OpenSIPS / RTPEngine duo as a kind of ingress controller dedicated to VoIP traffic in Kubernetes. Imagine: all you would need to do is deploy an ingress controller as usual, then auto-discovery mechanisms would create a config map with all the media processing (RTP proxying, bridging, transcoding, etc.) pods available in the cluster. You could then just start sending traffic and the VoIP ingress controller would do its magic (for both inbound and outbound traffic). The OpenSIPS part would take care of SIP signaling and RTPEngine would pass audio/video streams to the media processing cluster inside Kubernetes. The result? A pure cloud native approach, with failover and easy scaling. Who knows what the future holds…
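Purely as a thought experiment, the auto-generated config map described above might contain nothing more than a list of discovered media-processing endpoints; every name and address in the sketch below is invented:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: voip-ingress-media-nodes
    data:
      # hypothetical list of media-processing pods discovered in the cluster
      media-nodes: |
        transcoder-0=10.0.1.15:2223
        transcoder-1=10.0.1.16:2223
        rtp-bridge-0=10.0.2.21:2223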

Getting the most out of the public cloud

While the benefits of implementing a VoIP platform in the public cloud, rather than relying on an on-premise environment, are straightforward, making the most of it requires extensive knowledge and experience. As there's no silver bullet solution, you should always customize your approach to ensure it fits your particular scenario and needs. While we can't predict what would work best in your case, we're happy to share our extensive experience in telecommunication software development and help you find the most effective way to employ technology that can drive your business growth – just let us know what your goals are by using this contact form!

The project described above involved the implementation of a Class 4 Softswitch platform in an AWS environment for the Polish operator Globitell Wholesale.

About the author

Dawid Mielnik

Operational Director

A telecommunications expert and proficient manager with nearly 20 years’ experience building and integrating core network platforms. At Software Mind, Dawid leads a service unit responsible for delivering best-of-breed and tailored software solutions for operators. Dawid enjoys building teams, cultivating their potential and collaborating to achieve shared business goals. An eager learner with a strong inner drive, Dawid avidly looks for new ideas and knowledge. He advocates open source and is a keen enthusiast of 5G and cloud native technologies in telecommunication networks.
