Google claims that its new Equiano subsea cable will increase South Africa's internet speed three-fold

elf_lord_ZC5

Executive Member
Joined
Jan 3, 2010
Messages
9,239
With a Telkom LIT box you can stream 4K Netflix without any problems

I have recently upgraded my LIT box to an nVidia Shield TV Pro, which is a treat, but it helps nada to improve the streaming.

Besides, the Telkom LIT boxes have, as of the beginning of this month, lost the ability to stream Netflix. :(
 

Dan C

Honorary Master
Joined
Nov 21, 2005
Messages
30,660
I have recently upgraded my LIT box to an nVidia Shield TV Pro, which is a treat, but it helps nada to improve the streaming.

Besides, the Telkom LIT boxes have, as of the beginning of this month, lost the ability to stream Netflix. :(
It was a joke!! lol
 

Vorastra

Executive Member
Joined
Jan 13, 2013
Messages
7,576
That 160ms is probably only achievable if you plug in to the fibre optic cable at the landing station, and are communicating with a device at the landing station at the other end...
That's not true though, is it?

[Attached image: speedtest_london.jpg]

Internal latency in SA would be between 20ms and 40ms depending on routing etc., and the same on the other side. So I would say a real-world 200ms is actually not bad at all.
My ping from Durban to CPT is 22ms.

What's weird is that in games I get 200 to Europe, but on Ookla Speedtest itself I'm getting 185ms to London from DBN.
Let's assume it's just bad routing for the games I play or they're in fong kong parts of Europe.

So 185ms from DBN to London.
22ms from DBN to CPT.

London to CPT is 152ms as per the image above.

Raw RTT (AKA just on the cable) is ~127ms from CPT to London if assuming an RI of 1.467 (quick Googling) and ~13,000KM for our current cables' distance.

Now, raw RTT from New York to London is ~58ms assuming a cable distance of ~6000KM, and people in London get 70ms real-world to NY. That's an overhead of ~12ms.

Why are we in South Africa sitting with a much higher latency overhead? Because our local routing is garbage, and I suspect the only reason that that ping test is not in the low 140ms range is purely down to garbage routing as soon as it hits ZA side.

TL;DR: Routing in ZA is big sad.
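
For reference, here is the arithmetic above as a quick Python sketch (the 1.467 refractive index and the cable distances are the assumptions quoted in this post, not measured values, and the names are just illustrative):

# Rough sketch of the on-cable RTT figures used above.
C_VACUUM_KM_S = 299_792.458        # speed of light in vacuum, km/s
REFRACTIVE_INDEX = 1.467           # assumed index from a quick Google
V_FIBRE_KM_S = C_VACUUM_KM_S / REFRACTIVE_INDEX  # ~204,358 km/s in the glass

def raw_rtt_ms(cable_km):
    """Round trip over the cable alone, ignoring routers and other equipment."""
    return 2 * cable_km / V_FIBRE_KM_S * 1000

print(round(raw_rtt_ms(13_000)))   # CPT to London, ~127 ms
print(round(raw_rtt_ms(6_000)))    # New York to London, ~59 ms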
 

ToxicBunny

Oi! Leave me out of this...
Joined
Apr 8, 2006
Messages
100,970
That's not true though, is it?

[Attached: speedtest_london.jpg]


My ping from Durban to CPT is 22ms.

What's weird is that in games I get 200 to Europe, but on Ookla Speedtest itself I'm getting 185ms to London from DBN.
Let's assume it's just bad routing for the games I play or they're in fong kong parts of Europe.

So 185ms from DBN to London.
22ms from DBN to CPT.

London to CPT is 152ms as per the image above.

Raw RTT (AKA just on the cable) is ~127ms from CPT to London if assuming an RI of 1.467 (quick Googling) and ~13,000KM for our current cables' distance.

Now, raw RTT from New York to London is ~58ms assuming a cable distance of ~6000KM, and people in London get 70ms real-world to NY. That's an overhead of ~12ms.

Why are we in South Africa sitting with a much higher latency overhead? Because our local routing is garbage, and I suspect the only reason that that ping test is not in the low 140ms range is purely down to garbage routing as soon as it hits ZA side.

TL;DR: Routing in ZA is big sad.

My numbers were guesstimates.

And yes, you will get lower pings on Speedtest than to production gaming systems, as Speedtest servers will very likely be colocated quite close to the landing points, whereas gaming servers might be in bigger, more central DCs, and the routing could be different based on how the companies have developed their infrastructure.

Our routing isn't necessarily garbage. You do need to consider that traffic doesn't always go directly to the UK: it might drop out somewhere in Europe and then be routed to the UK, or your servers might be in Europe and route via the UK, or any number of other factors.
 

Vorastra

Executive Member
Joined
Jan 13, 2013
Messages
7,576
My numbers were guesstimates.

And yes, you will get lower pings on Speedtest than to production gaming systems, as Speedtest servers will very likely be colocated quite close to the landing points, whereas gaming servers might be in bigger, more central DCs, and the routing could be different based on how the companies have developed their infrastructure.

Our routing isn't necessarily garbage. You do need to consider that traffic doesn't always go directly to the UK: it might drop out somewhere in Europe and then be routed to the UK, or your servers might be in Europe and route via the UK, or any number of other factors.
I think you double posted.

 

Geoff.D

Honorary Master
Joined
Aug 4, 2005
Messages
24,982
so the limit is what exactly? quality of the Fiber itself, quality of the glass strands?
The physics of light transmission through a medium other than free space.
Yes, the purity of the glass has some effect, but not much, seeing that the chosen wavelengths are already within the best possible range. The most significant "impurity" is water.

Next is the mode of transmission (single versus multimode). But these days there are NO multimode systems in use anymore; ALL long-distance systems are single-mode.
 

Geoff.D

Honorary Master
Joined
Aug 4, 2005
Messages
24,982
It does not make sense... increase capacity maybe but speed... ?
Yes. The advances in technology are mostly about increases in capacity per fibre strand rather than an increase in data rates. The use of the term "speed" is, and always will be, incorrect, even if it is universally used by those ignorant of the science behind using the electromagnetic spectrum for communication over distance.
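
To put that in concrete terms, here is a minimal sketch (illustrative numbers only, nothing from a real link) of why extra capacity shortens transfers without changing how long a single bit spends in flight:

# Capacity changes how long a transfer takes, not the propagation delay.
def transfer_time_s(size_megabytes, capacity_mbps, rtt_ms):
    """Very rough model: one RTT of overhead plus serialisation at the line rate."""
    return rtt_ms / 1000 + (size_megabytes * 8) / capacity_mbps

print(transfer_time_s(100, 100, 150))     # ~8.15 s on a 100 Mbps link
print(transfer_time_s(100, 1000, 150))    # ~0.95 s on a 1 Gbps link
print(transfer_time_s(0.001, 1000, 150))  # a tiny request is still ~0.15 s, set by distance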
 

Geoff.D

Honorary Master
Joined
Aug 4, 2005
Messages
24,982
Great circle distance between Cape Town and Portugal is about 8500km (I know the cable is going to be quite a bit longer)

To make the numbers a bit more sane:
Speed of light in the cable is 214,137,470 m/s.
Dividing that by 1,000 gives metres per millisecond, and by another 1,000 gives km/ms: 214.13747.

So doing that, 8500 / 214.13747 = ~40ms one way. A round trip gives you a theoretical latency of 80ms.
That cable isn't quite going on a great circle though, so I would guess that it is about 1.5 times the distance, which leaves you with about 120ms. I suspect the rest is network overhead.
The "rule of thumb" for typical cable routing, compared to the shortest distance between two points on the earth's surface, is a factor of between 1.4 and 1.7.

The distance between SA and the UK (WACS cable distance) is about 14 000 km.
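
As a quick sketch of the same calculation (using the 214,137,470 m/s figure quoted above, which corresponds to a refractive index of roughly 1.4, together with the route-factor rule of thumb; the function name is just illustrative):

# Propagation speed in the fibre from the post above: ~214.137 km per millisecond.
V_FIBRE_KM_PER_MS = 214.13747

def cable_rtt_ms(great_circle_km, route_factor=1.0):
    """Round trip on the cable only; route_factor accounts for the detour."""
    return 2 * great_circle_km * route_factor / V_FIBRE_KM_PER_MS

print(round(cable_rtt_ms(8_500)))        # great-circle CPT to Portugal, ~79 ms
print(round(cable_rtt_ms(8_500, 1.5)))   # with a 1.5x route factor, ~119 ms
print(round(cable_rtt_ms(14_000)))       # the ~14,000 km WACS run to the UK, ~131 ms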
 

Geoff.D

Honorary Master
Joined
Aug 4, 2005
Messages
24,982
That's not true though, is it?

[Attached: speedtest_london.jpg]


My ping from Durban to CPT is 22ms.

What's weird is that in games I get 200 to Europe, but on Ookla Speedtest itself I'm getting 185ms to London from DBN.
Let's assume it's just bad routing for the games I play or they're in fong kong parts of Europe.

So 185ms from DBN to London.
22ms from DBN to CPT.

London to CPT is 152ms as per the image above.

Raw RTT (AKA just on the cable) is ~127ms from CPT to London if assuming an RI of 1.467 (quick Googling) and ~13,000KM for our current cables' distance.

Now, raw RTT from New York to London is ~58ms assuming a cable distance of ~6000KM, and people in London get 70ms real-world to NY. That's an overhead of ~12ms.

Why are we in South Africa sitting with a much higher latency overhead? Because our local routing is garbage, and I suspect the only reason that that ping test is not in the low 140ms range is purely down to garbage routing as soon as it hits ZA side.

TL;DR: Routing in ZA is big sad.
There are practical limitations, but in general, routing on national networks is very poorly managed.
In the good olde days we spent plenty of time ensuring optimal routing, and then ensured that under fault conditions that routing was ultimately restored to the optimum. But these days I doubt any of the major long-haul providers bother at all.
Hence the variability in latency you see over time (all things being equal), and that's excluding the disgusting contention management done by all the major operators.
 

Vorastra

Executive Member
Joined
Jan 13, 2013
Messages
7,576
There are practical limitations, but in general, routing on national networks is very poorly managed.
In the good olde days we spent plenty of time ensuring optimal routing, and then ensured that under fault conditions that routing was ultimately restored to the optimum. But these days I doubt any of the major long-haul providers bother at all.
Hence the variability in latency you see over time (all things being equal), and that's excluding the disgusting contention management done by all the major operators.
I could've sworn 10 years ago I was getting ~180ms to Germany; now it's ~200ms.
I'm absolutely convinced that if routing were taken seriously, Cape Town would be seeing ~140ms and Durban ~160ms to London.
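
Rough arithmetic behind those estimates, using only figures quoted earlier in the thread (the ~127ms on-cable CPT to London RTT, the ~12ms overhead seen on the NY to London route, and the 22ms DBN to CPT ping):

# All inputs are figures quoted earlier in the thread, not new measurements.
raw_cpt_london_ms = 127   # on-cable RTT, ~13,000 km at RI 1.467
overhead_ms = 12          # real-world overhead observed on the NY-London route
dbn_cpt_ms = 22           # measured DBN to CPT ping

cpt_estimate = raw_cpt_london_ms + overhead_ms   # ~139 ms
dbn_estimate = cpt_estimate + dbn_cpt_ms         # ~161 ms
print(cpt_estimate, dbn_estimate)                # 139 161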
 

Dan C

Honorary Master
Joined
Nov 21, 2005
Messages
30,660
All this bitching, in my day we had 3 gig accounts via a single cable. :D
For South Africans, that meant taking up service from SAT-2, which was reaching maximum capacity. SAT-2 had been brought into service in the early 1990s as a replacement for the original undersea cable, SAT-1, which was constructed in the 1960s.
 