Cool Ideas Fibre ISP – Feedback Thread 6

Wow, nice!

Will you be moving the caches that you have to the new one then, or will everything network-wise remain the same?
At this point it's uncertain. Currently the caches are the largest traffic generators on the network, so in theory moving them closer to the hand-off points would make more sense than backhauling multiple hundreds of gigabits. But they are also power-hungry, and Teraco charges are super expensive.

We will do a launch and some tours etc, maybe even host a LAN :)
 
Best would be to get them into Teraco, obviously, but I know rack costs alone in Teraco are insane, and paying for power on top of that would be crazy for all the caches.

I think the better option would be to have them in the new DC, with all the space and power that you need at a much cheaper relative cost. You could then also use the bigger DC for hosting other services; caches plus hosting services would make the backhauls worthwhile.
You would still need the backhaul links, but at least you could have big backhaul links between the new DC and Teraco, and a smaller backhaul to the old, smaller DC site to use as DR.

Like the tour idea, as well as the LAN :ROFL:
 
Well, yeah, the cost to host in either the smaller or the bigger one is basically insignificant, and we can do 30+ racks in the old DC, so space isn't an issue either.

The old DC has dual utility feeds, whilst the new one only has a single feed but triple gensets.

Even though the old one has dual feeds, they both get affected by load shedding etc.
 

When can we move in? :p
 
Wowza, what are you planning with all the space between the new and old DCs... taking over the world? 🤣

I think power-wise the new DC makes sense then, if there is no impact from load shedding.

You would then just need big backhaul links between Teraco and the DC, which would be the only real cost.
 
The old DC still has dual gensets and dual UPS as normal. Uptime has been over 99% in 7 years.

The new DC is currently 250 cabs, with space to grow to 450.
 
No need to brag!

Haha.

Well, if the old DC is so good, then you are pretty much spoilt for choice with what you can do.
 
Cool Ideas, I keep getting disconnected from international gaming at night. I am on Metrofibre, 1Gbps in KZN. It's not my LAN connection or anything like that; something's wrong. Final Fantasy XIV has never been this bad. What's going on? Do you need me to set up a SmokePing for your techs to look at from my house? I'll take anything over these constant disconnections.

Firstly, you guys added me to CGNAT, which was not cool and was not communicated. In fairness, that was fixed after I exploded a bit. BUT ever since then the connection quality has really been dropping, to the point where I am indeed looking for a new ISP. I know you guys have bigger/better customers to care about, but this really sucks, not gonna lie. Today there has been a grand total of *eleven* disconnects to German game servers. Not even Vodacom 4G is this bad, and that's saying something. Twitter / X? Barely works at night. Pings? 100% normal. Loading content? Absolute garbage. And no, I am not on Wi-Fi; I'm hardwired to the ONT, and dialing your PPPoE on my PC makes it worse.

Are you guys flapping routes around automatically all day based on path costs, rather than testing latency and packet loss and seeing what's actually bad? Every time I reconnect, ping times change: from 230ms down to 180, up to 210, then down again, and then back up to 230ms. Please fix this. Seriously, it is sugar honey ice tea. Regards, probably all gamers in Durban who play MMOs with sensitive packet/ping times.

P.S. My VPN in the UK has the same problems, indicating it is a routing problem to there, rather than anything else.
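
For anyone wanting to capture this kind of evidence while waiting on support, below is a minimal sketch of a latency/disconnect logger, assuming a hypothetical target (game.example.net stands in for the real game server; HOST, PORT, and the intervals are placeholders, not anything Cool Ideas provides). It times a TCP handshake as a rough RTT proxy and logs drops to a CSV that could be attached to a ticket.

```python
# Minimal latency/disconnect logger (a sketch; HOST/PORT are hypothetical).
import csv
import socket
import time
from datetime import datetime

HOST = "game.example.net"   # placeholder for the actual game server
PORT = 443                  # any TCP port the server answers on
INTERVAL_S = 10             # seconds between probes
TIMEOUT_S = 2               # seconds before a probe counts as a drop

with open("latency_log.csv", "a", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["timestamp", "rtt_ms", "status"])
    while True:
        start = time.monotonic()
        try:
            # Time a TCP handshake as a rough RTT proxy (no root needed,
            # unlike raw ICMP ping).
            with socket.create_connection((HOST, PORT), timeout=TIMEOUT_S):
                rtt_ms = (time.monotonic() - start) * 1000
            writer.writerow([datetime.now().isoformat(), f"{rtt_ms:.1f}", "ok"])
        except OSError:
            writer.writerow([datetime.now().isoformat(), "", "drop"])
        f.flush()
        time.sleep(INTERVAL_S)
```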
 
@PBCool

When is latency going to return to normal? (<=140ms)
Since around 23h00 last night it has been elevated even more...
Vuma Trenched, CPT, 1000/250

[attached: latency graph screenshots]


Speeds are good!

[attached: speed test screenshot]
 
Hi there, can you please log it with support and quote your support ref here? Random disconnects are typically just line-related.
 
Twitch and Battle.net are still having issues, likely related to all the issues we've experienced over the last few days. Bypassing the router and going directly into the ONT has not helped.

Edit:

Connecting directly to the ONT, I briefly had a connection, but now I'm completely offline.
 
It seems you are having specific issues, though, based on your posts. Please PM me your account details so I can check your session info.
 

So here is the complete explanation. Historically we get two layer-2 services from a carrier from Cape Town to London for international capacity. Of late we've had some challenges with these services (which are effectively EVPN tunnels over the carrier's MPLS network), like recently, when we would see loss on every third packet.

We were in the process of turning up our own 100Gbps wave to Lisbon, which had some delays but went live a few weeks ago. When it did, we observed that the latency was elevated (157ms), and it turns out the carrier was looping via Angola, which is what accounted for the additional latency. They will be moving the wave to the "express" path at the beginning of July.

There is also the challenge of the routes from Lisbon to London, for which our carrier uses another carrier. These routes also have an express/low-latency path, plus some alternative longer paths.
By default these routes are not set to revertive, so what would happen is that the express route from Lisbon to London would fail, traffic would route via the alternative path, and when the express route recovered the traffic wouldn't switch back. (This is what you are seeing now.)

They are going to make the revertive change permanent in the next few days.
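
To make the revertive/non-revertive distinction concrete, here is a toy sketch of protection switching (my illustration only; the path names are made up and this is not the carrier's actual implementation). In non-revertive mode, traffic that fails over to the longer path stays there even after the express path recovers, which matches the behaviour described above.

```python
# Toy model of revertive vs non-revertive protection switching.
# Path names are illustrative, not the carrier's real circuits.

EXPRESS = "Lisbon-London express (low latency)"
ALTERNATE = "Lisbon-London alternate (longer path)"

def select_path(express_up: bool, current: str, revertive: bool) -> str:
    if not express_up:
        return ALTERNATE   # express failed: traffic must switch away
    if revertive:
        return EXPRESS     # express recovered: switch back automatically
    return current         # non-revertive: stay on whatever we are using

# The express path fails once, then recovers.
events = [True, False, True, True]

for revertive in (False, True):
    path = EXPRESS
    print(f"revertive={revertive}:")
    for up in events:
        path = select_path(up, path, revertive)
        print(f"  express_up={up} -> {path}")
```

With revertive=False the final state is still the alternate path, which is why the latency stays elevated until the carrier intervenes.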

We are also in the process of commissioning our Lisbon PoP, which will reach into Europe directly from Lisbon and then still extend to London from there.

So, a lot to take in, but basically in the short term we either use the lossy layer-2 tunnels with low latency or a clean wavelength with elevated latency.
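
As a back-of-envelope illustration of why that trade-off favours the clean wave (my numbers, not Cool Ideas'), the Mathis et al. approximation for steady-state TCP throughput, rate ≈ 1.22 × MSS / (RTT × √p), shows that heavy packet loss hurts a single flow far more than an extra ~17ms of RTT. The loss figures below are assumptions: ~1/3 for the lossy tunnel (loss on every third packet, as described above) and a nominal 0.01% for the clean wave.

```python
# Back-of-envelope per-flow TCP throughput via the Mathis et al.
# approximation: rate ≈ 1.22 * MSS / (RTT * sqrt(loss)).
# RTT and loss values are illustrative assumptions, not measurements.
from math import sqrt

MSS_BITS = 1460 * 8  # typical TCP segment size, in bits

def mathis_bps(rtt_s: float, loss: float) -> float:
    return 1.22 * MSS_BITS / (rtt_s * sqrt(loss))

lossy_tunnel = mathis_bps(0.140, 1 / 3)   # ~140ms RTT, every 3rd packet lost
clean_wave = mathis_bps(0.157, 0.0001)    # ~157ms RTT, 0.01% residual loss

print(f"lossy tunnel: {lossy_tunnel / 1e3:7.0f} kbit/s per flow")
print(f"clean wave  : {clean_wave / 1e6:7.1f} Mbit/s per flow")
```

Even with the extra latency, the clean wavelength comes out orders of magnitude ahead per TCP flow.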

If anyone is interested in the European NLD network, it is here:


We are also exploring some options using the Amitié undersea cable back to the US, which claims a latency of 34ms from Bordeaux to New York. That somehow defies physics, but it looks like that figure is one-way, and two-way would be more like 70ms. The theoretical latency from Cape Town–Lisbon–Bordeaux–New York would then be something like 120ms + 10ms + 70ms, so a potential slight improvement.
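
For what it's worth, the arithmetic behind that estimate works out as below, treating the advertised 34ms as a one-way figure and summing the round-trip segments quoted above (all values are the rough figures from this post, not measurements):

```python
# Rough RTT estimate for a hypothetical Cape Town -> New York path
# via the Amitie cable, using the round-trip figures quoted above (ms).
one_way_bordeaux_nyc = 34                    # advertised figure, one-way
rtt_bordeaux_nyc = 2 * one_way_bordeaux_nyc  # ~68-70ms round trip

segments_ms = {
    "Cape Town -> Lisbon": 120,
    "Lisbon -> Bordeaux": 10,
    "Bordeaux -> New York": rtt_bordeaux_nyc,
}
print(f"theoretical RTT: {sum(segments_ms.values())} ms")  # ~200 ms
```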

 

Some great insight, thank you.

When would be the realistic go-live for the LIS PoP?

Would you then pick up Orange in LIS and the UK?
 
The cabinet should be ready in the next few weeks; we need to have hardware delivered and cross-connects installed.

Then it's a case of turn-up and testing.

We are moving Orange to LIS. Voxility will stay in London, we will keep peering at LINX, and we will then also peer at DE-CIX in Lisbon and Madrid.
 
It looks like both Twitch (Amazon IVS) and Blizzard peer at DE-CIX Madrid; it would be interesting to see how things go once everything is set up.

 
Is it just me or has the internet been flapping since about 9am?

CI - Openserve - 200/100
Pinelands, Cape Town

Edit - 12h42 - rebooted the ONT and things appear to have normalised.
 

Twitch also peers at LINX, so I don't see any major difference there. Blizzard could potentially see some latency benefits.
 