
Performance testing, optimisation on 3G networks

By Tim Courtenay, Managing Director: Telecom Services, Atio
Johannesburg, 02 Aug 2007

As mobile network technology has advanced, the need for and importance of performance management has increased. Telecom operators in most markets face renewed and complex competitive threats.

Being able to deliver a quality service to their customers is no longer a competitive advantage; at its most basic level it is merely a 'ticket to the game', and operators have to rise to this new challenge or wither away.

3G networks are transitioning from what Gartner calls the 'Trough of Disillusionment' to the 'Slope of Enlightenment'. In other words, operators have dealt with managing their subscribers' deflated expectations of what 3G can deliver, are now providing 'reality' to their customer base, and are starting to find effective ways to leverage 3G for what it was originally intended to be - a modern multimedia communications delivery platform.

Any organisation that wants to compete must know how its services perform - and doing so requires a measurement function, a feedback function, and an optimisation function. This is as much of a requirement in telecoms as it is in any other industry concerned with quality.

The need to test

It goes without saying that as a basic principle it is necessary to measure and monitor the performance of network services. Customers, shareholders, management, the media, regulatory authorities - these stakeholders all have an interest in understanding the performance and quality of mobile networks.

Operationally, decisions need to be made about network investment, vendor payment and performance management, staff performance incentives and marketing campaigns. Factual measurement data is essential when making these decisions.

In addition, 3G networks are inherently more complex than 2G networks. Video calling, HSDPA and portal services add to the existing offering of voice, SMS, and MMS services - all of which need to be tested to manage the customer experience.

For operators who have a 3G network layer on top of a 2G layer, inter-layer handover is a particular operational challenge that needs to be tested and optimised.

A classic problem that mobile operators have to deal with is reconciling feedback from customer help-desk services with that of network operations statistics. Testing, and facts, cut through anecdotal evidence of service performance.

What to monitor and test

The focus of monitoring and testing depends on where in the network life cycle that particular service or technology is. For example, an operator can have a very mature 2G radio network, a 3G radio network that is in high-growth mode, and be testing a new service on the 3G network that is very immature, such as streaming TV.

The testing approach for each will be different. In addition, testing should fit into the overall network quality management strategy. This is shown simplistically in figure one.

The graphic shows the maturity life cycle for technology and service roll-out in a telecoms network. Four broad phases relating to the maturity of the underlying technology are defined, ranging from 'New' to 'Mature'. For example, an operator deploying a new technology (eg, DVB-H) will typically run a new service (mobile TV) on a trial basis with a small number of controlled subscribers, using one or two handset models, and limited infrastructure. The exercise is run almost as a proof of concept and learning exercise.

Testing would focus on basic usability and performance, coverage, user experience testing, and so on. It would not include testing against benchmarked KPIs, which would be pointless. The technology and service would later move into the next phase, 'Vendor Conformity to Standards'. In this phase, the underlying technology and the service model have been tested and proven, and now the operator must move into commercial launch.

To do this, interoperability between handset vendors, infrastructure and service providers is key, and testing would focus on this requirement. Later on, the technology and service would move into the 'Inter-Network Conformity' phase, where users are able to use the service across networks. Testing would focus on this, and start to benchmark and optimise the service performance to deliver an optimal customer experience. Finally, the service would move into 'Maturity', where further technology changes are limited and the underlying technology, as well as the service model, is very well understood. Testing would focus on maintaining performance criteria and ensuring that change control is implemented so that changes do not adversely affect target service levels.

Monitoring and testing generally covers three areas:

1. Network element counters typically aggregated in dashboard functions in the network operations centre (NOC). This would include congestion rates, busy hour traffic parameters, dropped calls and abnormal call release, handover failures, and so on.

2. Static service monitoring tools and probes located around the network, for example testing data services.

3. Mobile or drive test data simulating subscriber calls in a mobile or in-building environment, and measuring competitor networks for benchmarking purposes. These can be manual, semi-automated, or fully automated tools.

Combined, these methods provide a holistic view of network performance and quality. In addition, measured data should contain engineering information (layer 3 messages, radio parameters, service quality indicators) that the optimisation teams then use in fault-finding and network reconfiguration planning.
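As a rough illustration of what combining these sources can look like in practice, the sketch below merges NOC counter statistics and drive-test results on a per-cell basis, so that an optimisation team sees both views in one record. The field names and dictionary-based structures are assumptions made for the example, not taken from any particular tool.

```python
# Illustrative-only sketch: merging NOC counter statistics with drive-test
# measurements per cell. Field names and structures are assumptions.

from typing import Dict


def merge_cell_views(noc_counters: Dict[str, dict],
                     drive_tests: Dict[str, dict]) -> Dict[str, dict]:
    """Key both data sources by cell ID and combine them into one record per cell."""
    merged = {}
    for cell_id in sorted(set(noc_counters) | set(drive_tests)):
        merged[cell_id] = {
            # e.g. congestion rate, dropped calls, handover failures from the NOC
            "noc": noc_counters.get(cell_id, {}),
            # e.g. measured Ec/No, RSCP, test call outcomes from drive testing
            "drive_test": drive_tests.get(cell_id, {}),
        }
    return merged


if __name__ == "__main__":
    noc = {"cell_001": {"drop_rate": 0.015, "ho_failures": 12}}
    drives = {"cell_001": {"avg_ecno_db": -9.5}, "cell_002": {"avg_ecno_db": -13.2}}
    print(merge_cell_views(noc, drives))
```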

The simplest and earliest key performance indicators (KPIs) in mobile telecoms relate to voice calls (a short calculation sketch follows the list):

* Call set-up success rate (CSSR)
* Call success rate (CSR)
* Call drop rate (CDR)
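As a minimal sketch of how these voice KPIs could be calculated from a batch of logged test calls, consider the following. The call-record fields are assumptions chosen for the example rather than the output format of any particular drive-test tool.

```python
# Minimal sketch (assumed call-record format): computing the basic voice
# KPIs - call set-up success rate, call success rate and call drop rate -
# from a batch of logged test calls.

from dataclasses import dataclass
from typing import Dict, List


@dataclass
class TestCall:
    setup_ok: bool   # the call was successfully established
    dropped: bool    # the call was released abnormally after set-up


def voice_kpis(calls: List[TestCall]) -> Dict[str, float]:
    attempts = len(calls)
    setups = sum(1 for c in calls if c.setup_ok)
    drops = sum(1 for c in calls if c.setup_ok and c.dropped)
    completed = setups - drops

    return {
        # CSSR: established calls / attempted calls
        "CSSR": setups / attempts if attempts else 0.0,
        # CDR: abnormally released calls / established calls
        "CDR": drops / setups if setups else 0.0,
        # CSR: calls set up and completed normally / attempted calls
        "CSR": completed / attempts if attempts else 0.0,
    }


if __name__ == "__main__":
    sample = ([TestCall(True, False)] * 95
              + [TestCall(True, True)] * 2
              + [TestCall(False, False)] * 3)
    print(voice_kpis(sample))  # CSSR 0.97, CDR ~0.021, CSR 0.95
```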

It is important to be able to benchmark your network against whatever international standards are available. While bodies such as ETSI and the ITU have defined communication standards and quality of service metrics, they have not defined what constitutes an internationally acceptable level of performance. For example, ETSI TS 102 250-2 defines QoS parameters for IP-based data services, including PDP context activation ratios, service accessibility ratio, mean data rates, and other parameters. The ITU has adopted the P.862 (PESQ) methodology for voice quality measurement. It is a good idea for networks to adopt these standards. In addition, many vendors and standards bodies are working on new standards for measuring quality of service for multimedia-type applications such as video streaming, portal content, VOIP and others.

3G networks require the same testing as 2G networks, and more besides. 2G networks, including GPRS technology, are by definition a lot easier to monitor and test than 3G networks. The static nature of the radio subsystem makes the 'plan-implement-test-optimise' cycle simpler. The radio network can be adequately tested using voice calls and GPRS tests only. Voice testing will identify signal coverage, frequency planning and circuit-switched voice quality problems on a sample basis. GPRS testing will specifically measure IP data KPIs as mentioned previously, as well as confirm coding schemes and expected throughput rates.
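As a hedged example of what measuring two such IP data KPIs might look like, the sketch below computes a PDP context activation success ratio and a mean data rate from a set of logged data-test sessions, in the spirit of the ETSI TS 102 250-2 parameters mentioned above. The session-record layout is an assumption made for the example, not the standard's own data model.

```python
# Illustrative sketch only (assumed session-log layout): a PDP context
# activation success ratio and a mean data rate, in the spirit of the
# ETSI TS 102 250-2 IP-data QoS parameters.

from dataclasses import dataclass
from typing import List


@dataclass
class DataSession:
    pdp_activation_ok: bool   # PDP context successfully activated
    bytes_transferred: int    # payload volume moved during the session
    duration_s: float         # transfer time in seconds


def pdp_activation_ratio(sessions: List[DataSession]) -> float:
    """Successful PDP context activations divided by activation attempts."""
    if not sessions:
        return 0.0
    return sum(s.pdp_activation_ok for s in sessions) / len(sessions)


def mean_data_rate_kbps(sessions: List[DataSession]) -> float:
    """Average throughput in kbit/s over sessions that actually transferred data."""
    rates = [(s.bytes_transferred * 8 / 1000) / s.duration_s
             for s in sessions
             if s.pdp_activation_ok and s.duration_s > 0]
    return sum(rates) / len(rates) if rates else 0.0
```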

3G networks require a more sophisticated approach due to the use of WCDMA in the radio part of the network. Video calls, although they do not generate significant traffic on most networks today, have specific circuit-switched call setup requirements that need to be logged for analysis. Cells adjust their effective coverage area depending on the traffic load in that area ('cell breathing'), so the analysis of measurement data needs to take this into account. Calls can also hand down to the 2G layer, so handover analysis and the classification of handover failures must cover both layers.

Some networks have different vendors on their 2G and 3G layers, so when a handover failure occurs between layers, whose problem is it? In addition, the 3G call release mechanism requires more message transactions between the device and the UTRAN than in the 2G case. In some cases a call may not be released properly, but the user does not notice anything because they pressed the call release button anyway - does this count as a dropped call or not? These issues may seem trivial, but they can make the difference between a network meeting or missing the KPI targets that management has set. The measurement of Ec/No and scrambling code information is also key in 3G networks.
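As a rough sketch of how such inter-layer (3G-to-2G) handover attempts might be classified from logged layer 3 messages, consider the example below. The message names are a simplified subset used for illustration, and the classification rules are assumptions; a real analysis tool would parse the full RRC and GSM signalling.

```python
# Rough sketch (assumed message names and rules): classifying one 3G-to-2G
# handover attempt from the ordered layer 3 messages logged for that attempt.

from typing import List


def classify_inter_rat_handover(messages: List[str]) -> str:
    if "handoverFromUTRANCommand-GSM" not in messages:
        return "no_handover_attempted"
    if "handoverComplete" in messages:
        # the call continued successfully on the 2G layer
        return "handover_success"
    if "handoverFromUTRANFailure" in messages:
        # the device returned to the 3G layer and reported the failure
        return "handover_failure_reported"
    # a command was sent but neither completion nor failure was seen -
    # typically counted as a drop at the 3G/2G boundary
    return "handover_failure_silent"


# Example: the command was issued but nothing followed it.
print(classify_inter_rat_handover(["measurementReport",
                                   "handoverFromUTRANCommand-GSM"]))
```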

Choosing tools for testing

This is a fairly simple matter, but network quality and operations staff should have a clear idea about what they want to monitor, measure, and manage before investing in 3G tools. For most operators there is a logical progression from their existing vendors in the 2G or 2.5G environment to 3G. The key to success is to know what can be achieved through the correct use of the tools, and which tools integrate well with the existing infrastructure and reporting environment within the operator's network. For example, drive test tools should provide data in a format that can be loaded seamlessly into post-processing and reporting tools, and even into radio planning tools.
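As an illustration of the kind of integration meant here, the sketch below flattens per-sample drive-test measurements into a plain CSV that a post-processing, reporting or radio planning tool could import. The column set (timestamp, position, RSCP, Ec/No, scrambling code) is an assumption for the example, not any vendor's actual export format.

```python
# Illustrative sketch (assumed column set): flattening drive-test samples
# into a plain CSV for import by post-processing or radio planning tools.

import csv
from typing import Iterable, Mapping


COLUMNS = ["timestamp", "latitude", "longitude",
           "rscp_dbm", "ecno_db", "scrambling_code"]


def export_drive_test(samples: Iterable[Mapping], path: str) -> None:
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=COLUMNS)
        writer.writeheader()
        for sample in samples:
            # missing fields are left blank rather than failing the export
            writer.writerow({col: sample.get(col, "") for col in COLUMNS})


# Usage: export_drive_test(parsed_samples, "drive_test_export.csv")
```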

The standard set of tools typically should include:

1. Drive test solutions for in-field test call generation and coverage/performance analysis.
2. Static service availability tools to provide a heartbeat check of voice, messaging, data and other services.
3. SS7 probes to monitor the signalling network.
4. Network element monitoring tools including dashboard software.

For operators who are in the early growth stages of 3G network roll-out, it is essential to have rapid feedback from the monitoring and testing tools to the NOC and operations teams. A tool selection process should look very clearly at the feedback loop time from starting a test process, to analysis of test data and confirmation of network performance. This is usually critical after a network change event, such as a switch or UTRAN software upgrade. It is not uncommon for an upgrade to go well in a test and 'golden cluster' environment, only for problems to occur after rolling out into the broader network.
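As a minimal sketch of that feedback loop after a change event, the example below compares KPI values measured before and after an upgrade and flags any that have degraded beyond a tolerance. The KPI names and tolerance values are assumptions chosen for illustration.

```python
# Minimal sketch (assumed KPI names and tolerances): flag KPIs that have
# degraded beyond a tolerance after a network change event.

from typing import Dict

# maximum tolerated degradation per KPI (absolute values chosen arbitrarily)
TOLERANCE = {"CSSR": 0.005, "CSR": 0.005, "CDR": 0.002}


def kpi_regressions(before: Dict[str, float],
                    after: Dict[str, float]) -> Dict[str, float]:
    """Return the KPIs whose degradation exceeds the tolerance."""
    regressions = {}
    for kpi, limit in TOLERANCE.items():
        # CDR is better when lower; CSSR and CSR are better when higher
        delta = (after[kpi] - before[kpi]) if kpi == "CDR" else (before[kpi] - after[kpi])
        if delta > limit:
            regressions[kpi] = round(delta, 4)
    return regressions


print(kpi_regressions(
    before={"CSSR": 0.975, "CSR": 0.955, "CDR": 0.012},
    after={"CSSR": 0.968, "CSR": 0.940, "CDR": 0.018},
))  # flags all three KPIs as having degraded beyond tolerance
```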

An interesting alternative, which many operators are implementing, is to outsource the test and measurement function, and in some cases even the network optimisation function. This makes sense once the operator has reached an operational level where this function is no longer a source of strategic advantage, and the aim becomes maintaining performance KPIs while driving costs down and operating efficiency up.

The future

Interesting challenges will emerge for operators in the next five years that will redefine the concept of quality management and, with it, the approach to monitoring and testing. The very business model of most incumbent operators is changing, driven by the emerging role of content providers, innovation in messaging services, and outsourcing models for network operation and maintenance.

Tools will, more than ever, need to monitor performance and assist in providing guaranteed quality of service at an application level, eg video streaming and VOIP, and will increasingly be driven by service level agreement management - between vendors, operators and their customers.


ATIO

ATIO is a black empowered company specialising in ICT solutions and services. ATIO's two business divisions - ATIO-Interactive Communications Solutions (ICS) and ATIO-Telecom Services - target clearly defined niches within the ICT market. ICS provides integrated contact-centre, CRM and messaging solutions and services. Telecom Services provides end-to-end network performance and revenue assurance testing solutions and services to mobile and fixed-line operators. ATIO's solutions are widely used in SA, the rest of Africa as well as the European Union and the UK. For more information, please visit http://www.atio.com/.

Editorial contacts

Tim Courtenay
ATIO Corporation
(011) 235 7208
TimC@atio.com