Last week, on 14 March, undersea rock falls near Côte d'Ivoire severed four submarine cables – WACS, MainOne, SAT-3 and ACE – causing enormous disruptions for local public cloud users. Many experienced vastly reduced speeds to specific public cloud services. For some the disruption was minor; for others it shut down operations completely.
Why did the breaks affect local public cloud services so dramatically? South Africa is a region for several major public cloud platforms, which host considerable numbers of servers and services locally. Submarine cables are high-capacity network pipes that link global regions together, so why did their failure cause local cloud services to fail as well?
Several factors are at play, but the overarching lesson is that local public cloud servers do not translate into complete redundancy. They do not duplicate every service on a major public cloud platform. Instead, they remain part of a global network that keeps certain services in specific places for financial and security reasons.
Local servers exist for two primary reasons: data laws and lower latency. Data laws can be split into data sovereignty and data localisation. Data sovereignty requires that data remain subject to the laws of a nation, but it doesn't mean the data has to reside inside that nation. For example, a European server can store local data, provided it complies with SA's laws for that data.
In the case of data localisation, the data must reside within a geographic and sovereign region. For example, employee tax data cannot be transferred to an entity in a foreign country without approval from the SA Revenue Service. Most data falls under data sovereignty and could be stored abroad, and thus accessed through a submarine cable.
Overseas servers also play a role in balancing latency – essentially the delay on a network – against cost to customers. A server's physical proximity determines the latency of access to data and services, but the closer you want to be to the source, the more it can cost. The economies of scale of overseas public cloud markets can make storing less time-sensitive data abroad cheaper.
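The distance–latency trade-off can be made concrete with a rough propagation-delay calculation. The sketch below is illustrative only: the cable distances and fibre speed are assumptions, and real-world latency adds routing, queuing and processing overhead on top of this physical floor.

```python
# Best-case round-trip propagation delay over optical fibre.
# Light travels at roughly 200,000 km/s in fibre (about two-thirds
# of its speed in a vacuum) -- an assumed, rounded figure.
FIBRE_SPEED_KM_S = 200_000

def rtt_ms(cable_km: float) -> float:
    """Round-trip time in milliseconds for a given one-way cable distance."""
    return 2 * cable_km / FIBRE_SPEED_KM_S * 1000

# Illustrative distances (assumptions, not measured routes):
print(rtt_ms(50))     # nearby local data centre
print(rtt_ms(9_000))  # SA to Europe via a West Coast submarine cable
```

Even in this best case, a round trip to Europe takes on the order of 90 ms against well under a millisecond locally, which is why latency-sensitive workloads favour local regions while bulk, less time-sensitive data can sit abroad.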
A third consideration is cloud services themselves. One impact of the cable breaks was that many groups could not authenticate their credentials for services, effectively locking them out. For security reasons, some authentication services are concentrated in highly protected hubs in specific countries. Even if a service runs on local servers, the authentication required to access it could happen abroad.
Other examples include backhaul for caches and the distribution of updates. No region of a global public cloud can remain fully functional without interacting with the outside world.
But why was this incident so severe? Submarine cable business models also play a part. Submarine cables are not public utilities – they sell bandwidth allocations to regional networks, which in turn sell capacity to internet service providers. If a cable breaks, its customers don't automatically jump to another cable unless they have a prior arrangement, and unplanned rerouting onto a different cable incurs extra costs.
Cloud providers create redundancy by running traffic over multiple submarine cables. But when four of the five West Coast cables connecting SA break, that can scupper even the most robust redundancy plans. It's fair to ask why East Coast cables weren't used, but that comes down to the cloud provider's strategy and its equity in specific cables.
Why did undersea cable breaks cripple SA's public cloud? The regions of global cloud platforms are not entirely autonomous – they rely on services hosted elsewhere, such as certain authentication services. Data is not necessarily hosted locally, even when it is covered by data sovereignty. Economies of scale and company budgets determine latency and data location – storing data overseas might be cheaper. And submarine cables are competing businesses – losing access to one does not automatically shift traffic to another.