Cool Ideas Fibre ISP – Feedback Thread 4

Status
Not open for further replies.

DYreX146

Senior Member
Joined
Nov 14, 2018
Messages
706
OK. It's been mentioned by me and @PBCool, but the information is obviously spread across several threads, so it's hard for anyone to get a summary, I suppose. We've also been down in the "thick of things", trying to minimize the impact on our network.

  • We are moving our network to a blend of Seacom (East Coast) and WACS (West Coast) capacity via NTT.
    • This has happened for KZN and JHB initially.
    • This will provide better routes to Asia, Brazil, and other locations.
    • Previously we used EASSY as a backup route only on the east coast.
    • This has led to increased latency on certain international paths, as JHB now prefers Seacom over WACS, depending on the route.
    • We are working on this, but it is a process.
  • Between our Rosebank datacentre and Teraco in Isando, we have deployed an additional 150Gbps of capacity.
    • During the process, we have had several issues with routers in Teraco Isando and Parklands/Rosebank.
    • Several of the new links flapped, due to unexpected issues on the build.
      • This caused a lot of the daily reconvergence, where local and peered traffic would be unavailable for a few minutes.
      • We believe these have now been stabilized.
  • We have also experienced stability issues on our Rosebank->Midrand->Isando paths.
    • This caused traffic to flow along lower capacity links, leading to packet loss.
  • We upgraded our core routers in Teraco Isando.
    • One of these routers contributed to packet loss for Rosebank (Trenched/Aerial) customers, which turned out to be a device issue. This router is now running clear.
  • We experienced consistent packet loss via one of our NAP Africa 100Gbps ports, which was eventually resolved by NAP Africa/Teraco replacing an optic on their side, after several escalations to them. This was ongoing for several weeks.
  • We experienced capacity issues on another of our NAP Africa routers (40Gbps), which we resolved by collapsing the 2x20G port-channels into a single 40Gbps port-channel.
    • This led to peering issues with WhatsApp/Facebook/Meta.
  • Due to the new transit capacity with NTT/Didata, we also experienced routing issues at JINX, in Rosebank, which caused things like Quad9 and Microsoft Teams to have routing loops. We have resolved this by preferring NAP Africa routes for these peers.
  • Our TCP accelerator, which manages packet loss to international destinations, developed a fault in one of its line cards, and the replacement took two weeks to arrive.
    • With the changes in transit capacity, we decided NOT to install it in-line with our current UK capacity, but rather in-line with the new NTT/Didata capacity, as it was pending the change.
    • This led to degraded international throughput over the past few weeks, as we were finalising our deployment of the new NTT/Didata capacity.
  • And then Vumatel Villages has been a nightmare, which has been escalated to both COOs at Vumatel, but nothing seems to be happening. We are pushing all the levers we can.
  • In between all this, load shedding has caused issues across various FNO networks, which is not something we can control.
I can understand the frustration that this has caused, and it has all compressed into several events in several weeks.

I hope the transparency helps. I've just not had the time to write this essay, respond to queries on MyBroadband, and deal with our network team all at the same time in the fashion that I normally would.

TL;DR: we have embarked on a massive network redesign project for the better, and unfortunately, some of the changes have affected services.
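The JINX fix mentioned above (preferring NAP Africa routes for the affected peers) comes down to ordinary BGP path preference. A toy sketch of the local-preference tie-break, with entirely made-up peer names and values, not CISP's actual configuration:

```python
# Toy BGP best-path selection: higher local-preference wins; on a tie,
# the shorter AS-path wins. Values below are illustrative only.

def best_path(routes):
    """Pick the preferred route: max local-pref, then min AS-path length."""
    return max(routes, key=lambda r: (r["local_pref"], -r["as_path_len"]))

# Two candidate routes to the same prefix, e.g. a Quad9 prefix learned
# at both exchange points (hypothetical attributes):
routes = [
    {"via": "JINX",       "local_pref": 100, "as_path_len": 2},
    {"via": "NAP Africa", "local_pref": 200, "as_path_len": 2},
]

print(best_path(routes)["via"])  # NAP Africa
```

Bumping local-preference on routes learned from one exchange makes every router in the AS agree on the exit point, which is what breaks the loop.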
I understand that you're busy at the moment, but I just have a quick question. Will KZN eventually get routed directly through Seacom? Right now we're getting routed to JHB and then going back again so we just get extra latency in a lot of cases.
 

TheRoDent

Cool Ideas Rep
Joined
Aug 6, 2003
Messages
6,218
I understand that you're busy at the moment, but I just have a quick question. Will KZN eventually get routed directly through Seacom? Right now we're getting routed to JHB and then going back again so we just get extra latency in a lot of cases.
Getting routed to JHB was always the case, and YES, KZN will soon be routing directly on Seacom via Durban.

We are cutting things over slowly, so it will take some time.

KZN will soon have the best latency paths that we can provide, with failover to JHB, and WACS.
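The failover order described here can be sketched as a simple ordered preference list; a toy illustration, with hypothetical path names and health states:

```python
# Illustrative failover ordering for KZN traffic, per the post above:
# prefer the direct Durban/Seacom path, fall back to JHB, then WACS.

PREFERENCE = ["Seacom via Durban", "JHB", "WACS"]

def active_path(health):
    """Return the first healthy path in preference order, else None."""
    for path in PREFERENCE:
        if health.get(path, False):
            return path
    return None

# If the Durban path is down, traffic fails over to JHB:
print(active_path({"Seacom via Durban": False, "JHB": True, "WACS": True}))  # JHB
```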
 

DYreX146

Senior Member
Joined
Nov 14, 2018
Messages
706
Getting routed to JHB was always the case, and YES, KZN will soon be routing directly on Seacom via Durban.

We are cutting things over slowly, so it will take some time.

KZN will soon have the best latency paths that we can provide, with failover to JHB, and WACS.
Ok thanks for letting me know, I'll stop complaining for the next few days haha.
 

DeatheCore

Senior Member
Joined
Dec 4, 2006
Messages
847
@TheRoDent Are the new transit changes supposed to have affected CPT <--> UK traffic? If not, please see my previous post.

Also, appreciate the transparency/explanatory post :)
 

TheRoDent

Cool Ideas Rep
Joined
Aug 6, 2003
Messages
6,218
@TheRoDent Are the new transit changes supposed to have affected CPT <--> UK traffic? If not, please see my previous post.

Also, appreciate the transparency/explanatory post :)
No, it *shouldn't*, but it may have. It's a bit of a juggle.

Repost your trace with IPs not DNS names plz.
 

rolandos

Well-Known Member
Joined
Jul 19, 2005
Messages
117
OK. It's been mentioned by me and @PBCool, but the information is obviously spread across several threads, so it's hard for anyone to get a summary, I suppose. We've also been down in the "thick of things", trying to minimize the impact on our network.

  • We are moving our network to a blend of Seacom (East Coast) and WACS (West Coast) capacity via NTT.
    • This has happened for KZN and JHB initially.
    • This will provide better routes to Asia, Brazil, and other locations.
    • Previously we used EASSY as a backup route only on the east coast.
    • This has led to increased latency on certain international paths, as JHB now prefers Seacom over WACS, depending on the route.
    • We are working on this, but it is a process.
  • Between our Rosebank datacentre and Teraco in Isando, we have deployed an additional 150Gbps of capacity.
    • During the process, we have had several issues with routers in Teraco Isando and Parklands/Rosebank.
    • Several of the new links flapped, due to unexpected issues on the build.
      • This caused a lot of the daily reconvergence, where local and peered traffic would be unavailable for a few minutes.
      • We believe these have now been stabilized.
  • We have also experienced stability issues on our Rosebank->Midrand->Isando paths.
    • This caused traffic to flow along lower capacity links, leading to packet loss.
  • We upgraded our core routers in Teraco Isando.
    • One of these routers contributed to packet loss for Rosebank (Trenched/Aerial) customers, which turned out to be a device issue. This router is now running clear.
  • We experienced consistent packet loss via one of our NAP Africa 100Gbps ports, which was eventually resolved by NAP Africa/Teraco replacing an optic on their side, after several escalations to them. This was ongoing for several weeks.
  • We experienced capacity issues on another of our NAP Africa routers (40Gbps), which we resolved by collapsing the 2x20G port-channels into a single 40Gbps port-channel.
    • This led to peering issues with WhatsApp/Facebook/Meta.
  • Due to the new transit capacity with NTT/Didata, we also experienced routing issues at JINX, in Rosebank, which caused things like Quad9 and Microsoft Teams to have routing loops. We have resolved this by preferring NAP Africa routes for these peers.
  • Our TCP accelerator, which manages packet loss to international destinations, developed a fault in one of its line cards, and the replacement took two weeks to arrive.
    • With the changes in transit capacity, we decided NOT to install it in-line with our current UK capacity, but rather in-line with the new NTT/Didata capacity, as it was pending the change.
    • This led to degraded international throughput over the past few weeks, as we were finalising our deployment of the new NTT/Didata capacity.
  • And then Vumatel Villages has been a nightmare, which has been escalated to both COOs at Vumatel, but nothing seems to be happening. We are pushing all the levers we can.
  • In between all this, load shedding has caused issues across various FNO networks, which is not something we can control.
I can understand the frustration that this has caused, and it has all compressed into several events in several weeks.

I hope the transparency helps. I've just not had the time to write this essay, respond to queries on MyBroadband, and deal with our network team all at the same time in the fashion that I normally would.

TL;DR: we have embarked on a massive network redesign project for the better, and unfortunately, some of the changes have affected services.
Thanks for the update. As a long-time customer, the transparency means a lot, and it brings more clarity to why Microsoft Teams has been a nightmare from home recently.
Hope it all works out soon.
 

TheLostPacket

Well-Known Member
Joined
Oct 31, 2019
Messages
173
Big kudos to @TheRoDent, @PBCool and Russel for all the late nights and hard work over the last few weeks.

Yes the network has had some issues but as Rodent explained the upgrades are all big ones. I think everything will settle down pretty soon and the network will be even better than before.

Thanks Guys
 

TheRoDent

Cool Ideas Rep
Joined
Aug 6, 2003
Messages
6,218
New transit facilities sound like a good thing :) Blizz still wonky @ 180-200ms; same with your UK VPN. Return path issue perhaps?

Code:
MTR:
Start: Thu Dec  8 17:35:18 2022
HOST: ams1a-admin-looking-glass-back01             Loss%   Snt   Last   Avg  Best  Wrst StDev
  1.|-- Blizzard                                0.0%    10   11.8   3.1   0.9  11.8   3.3
  2.|-- ???                                          100.0    10    0.0   0.0   0.0   0.0   0.0
  3.|-- 137.221.66.44                                 0.0%    10    0.3   2.0   0.3  15.8   4.8
  4.|-- 137.221.78.66                                 0.0%    10   13.2  69.0   6.2 336.1 106.2
  5.|-- 137.221.65.156                                0.0%    10   19.9  12.5   6.1  27.8   8.8
  6.|-- 137.221.65.28                                 0.0%    10    6.3   9.2   6.2  35.9   9.3
  7.|-- 137.221.80.32                                 0.0%    10    6.7  12.0   6.3  54.1  15.0
  8.|-- ae114-0.ffttr6.frankfurt.opentransit.net      0.0%    10    6.0   7.7   5.9  19.6   4.2
  9.|-- 193.251.241.143                               0.0%    10  189.7 189.8 189.7 189.9   0.0
 10.|-- 193.251.250.170                               0.0%    10  193.1 193.2 193.0 193.5   0.0
 11.|-- 168.209.0.211                                 0.0%    10  196.0 196.1 196.0 196.4   0.0
 12.|-- 168.209.93.252                                0.0%    10  201.8 201.9 201.6 202.1   0.0
 13.|-- za-kzn-umh-p-2-be-211.ip.ddii.network         0.0%    10  217.6 217.7 217.5 218.0   0.0
 14.|-- za-gp-pkl-p-2-ten-0-0-0-12-2.ip.ddii.network  0.0%    10  217.7 218.2 217.6 222.6   1.5
 15.|-- 168.209.90.45                                 0.0%    10  218.2 218.4 217.3 225.8   2.6
 16.|-- za-wc-cpt-hpe-1-be-211.ip.ddii.network        0.0%    10  218.4 218.6 218.2 219.4   0.0
 17.|-- ar1-ctn-ten-ge-0-0-2-1.ip.ddii.network        0.0%    10  195.8 194.4 192.6 200.2   2.4
 18.|-- ???                                          100.0    10    0.0   0.0   0.0   0.0   0.0
 19.|-- ???                                          100.0    10    0.0   0.0   0.0   0.0   0.0
 20.|-- c3i-backbone.coolideas.co.za                 10.0%    10  195.7 199.2 195.1 217.3   7.2
 21.|-- um5f-cust.coolideas.co.za                     0.0%    10  193.0 213.2 192.7 271.9  28.6
 22.|-- ???                                          100.0    10    0.0   0.0   0.0   0.0   0.0

MTR:
Start: Thu Dec  8 17:35:19 2022
HOST: cdg1a-admin-looking-glass-back01             Loss%   Snt   Last   Avg  Best  Wrst StDev
  1.|-- 37.244.50.131                                 0.0%    10    1.6   1.2   0.9   1.6   0.0
  2.|-- ???                                          100.0    10    0.0   0.0   0.0   0.0   0.0
  3.|-- 137.221.66.38                                 0.0%    10   16.3   2.1   0.4  16.3   5.0
  4.|-- 137.221.77.50                                 0.0%    10   58.4   6.6   0.6  58.4  18.2
  5.|-- 137.221.77.32                                 0.0%    10    1.0   1.1   0.9   1.4   0.0
  6.|-- bundle-ether31.auvtr5.paris.opentransit.net   0.0%    10    1.0   1.2   0.7   2.3   0.3
  7.|-- 193.251.243.112                               0.0%    10  165.6 165.6 165.3 165.9   0.0
  8.|-- 193.251.250.170                               0.0%    10  174.9 174.9 174.7 175.3   0.0
  9.|-- 168.209.0.211                                 0.0%    10  177.9 177.9 177.7 178.4   0.0
 10.|-- 168.209.93.253                                0.0%    10  178.8 178.5 178.1 178.8   0.0
 11.|-- za-kzn-umh-p-2-be-212.ip.ddii.network         0.0%    10  186.0 185.9 185.6 186.3   0.0
 12.|-- za-gp-pkl-p-2-ten-0-0-0-12-2.ip.ddii.network  0.0%    10  187.0 187.1 186.8 187.3   0.0
 13.|-- za-gp-tis-p-2-hu-0-0-1-4.ip.ddii.network      0.0%    10  187.9 188.0 187.8 188.2   0.0
 14.|-- 168.209.132.137                               0.0%    10  187.4 187.8 186.8 192.5   1.6
 15.|-- 197.103.32.149                                0.0%    10  187.6 187.6 187.5 187.7   0.0
 16.|-- ???                                          100.0    10    0.0   0.0   0.0   0.0   0.0
 17.|-- ???                                          100.0    10    0.0   0.0   0.0   0.0   0.0
 18.|-- ???                                          100.0    10    0.0   0.0   0.0   0.0   0.0
 19.|-- c3i-backbone.coolideas.co.za                  0.0%    10  292.0 201.2 180.6 292.0  35.6
 20.|-- um5f-cust.coolideas.co.za                     0.0%    10  181.3 185.1 181.0 196.7   5.5
 21.|-- ???                                          100.0    10    0.0   0.0   0.0   0.0   0.0
Hmm, CPT announcements shouldn't be using the new network for the return path yet. Can you PM me your IP?
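A quick way to eyeball where the long-haul leg sits in traces like the ones above is to find the hop with the largest jump in average RTT. A rough parser sketch (the sample is abridged from the Frankfurt trace above; the simplified regex assumes standard `mtr --report` column layout):

```python
import re

# Rough sketch: find the hop where average RTT jumps the most in an MTR
# report, which usually marks the long-haul (submarine cable) leg.

MTR_SAMPLE = """\
  8.|-- ae114-0.ffttr6.frankfurt.opentransit.net      0.0%    10    6.0   7.7   5.9  19.6   4.2
  9.|-- 193.251.241.143                               0.0%    10  189.7 189.8 189.7 189.9   0.0
 10.|-- 193.251.250.170                               0.0%    10  193.1 193.2 193.0 193.5   0.0
"""

def biggest_jump(mtr_text):
    """Return (hop_number, delta_ms) for the largest increase in Avg RTT."""
    hops = []
    for line in mtr_text.splitlines():
        # hop.|-- host  Loss%  Snt  Last  Avg ...  (we capture hop and Avg)
        m = re.match(r"\s*(\d+)\.\|--\s+\S+\s+[\d.]+%?\s+\d+\s+[\d.]+\s+([\d.]+)", line)
        if m:
            hops.append((int(m.group(1)), float(m.group(2))))
    return max(
        ((b[0], b[1] - a[1]) for a, b in zip(hops, hops[1:])),
        key=lambda t: t[1],
    )

print(biggest_jump(MTR_SAMPLE))  # the intercontinental leg shows up at hop 9
```

In the full trace, the ~182 ms step between the opentransit hop and the next one is simply the Europe-to-Africa leg, so everything after it inherits that baseline.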
 

CrypticZA

Expert Member
Joined
Sep 21, 2019
Messages
3,046
OK. It's been mentioned by me and @PBCool, but the information is obviously spread across several threads, so it's hard for anyone to get a summary, I suppose. We've also been down in the "thick of things", trying to minimize the impact on our network.

  • We are moving our network to a blend of Seacom (East Coast) and WACS (West Coast) capacity via NTT.
    • This has happened for KZN and JHB initially.
    • This will provide better routes to Asia, Brazil, and other locations.
    • Previously we used EASSY as a backup route only on the east coast.
    • This has led to increased latency on certain international paths, as JHB now prefers Seacom over WACS, depending on the route.
    • We are working on this, but it is a process.
  • Between our Rosebank datacentre and Teraco in Isando, we have deployed an additional 150Gbps of capacity.
    • During the process, we have had several issues with routers in Teraco Isando and Parklands/Rosebank.
    • Several of the new links flapped, due to unexpected issues on the build.
      • This caused a lot of the daily reconvergence, where local and peered traffic would be unavailable for a few minutes.
      • We believe these have now been stabilized.
  • We have also experienced stability issues on our Rosebank->Midrand->Isando paths.
    • This caused traffic to flow along lower capacity links, leading to packet loss.
  • We upgraded our core routers in Teraco Isando.
    • One of these routers contributed to packet loss for Rosebank (Trenched/Aerial) customers, which turned out to be a device issue. This router is now running clear.
  • We experienced consistent packet loss via one of our NAP Africa 100Gbps ports, which was eventually resolved by NAP Africa/Teraco replacing an optic on their side, after several escalations to them. This was ongoing for several weeks.
  • We experienced capacity issues on another of our NAP Africa routers (40Gbps), which we resolved by collapsing the 2x20G port-channels into a single 40Gbps port-channel.
    • This led to peering issues with WhatsApp/Facebook/Meta.
  • Due to the new transit capacity with NTT/Didata, we also experienced routing issues at JINX, in Rosebank, which caused things like Quad9 and Microsoft Teams to have routing loops. We have resolved this by preferring NAP Africa routes for these peers.
  • Our TCP accelerator, which manages packet loss to international destinations, developed a fault in one of its line cards, and the replacement took two weeks to arrive.
    • With the changes in transit capacity, we decided NOT to install it in-line with our current UK capacity, but rather in-line with the new NTT/Didata capacity, as it was pending the change.
    • This led to degraded international throughput over the past few weeks, as we were finalising our deployment of the new NTT/Didata capacity.
  • And then Vumatel Villages has been a nightmare, which has been escalated to both COOs at Vumatel, but nothing seems to be happening. We are pushing all the levers we can.
  • In between all this, load shedding has caused issues across various FNO networks, which is not something we can control.
I can understand the frustration that this has caused, and it has all compressed into several events in several weeks.

I hope the transparency helps. I've just not had the time to write this essay, respond to queries on MyBroadband, and deal with our network team all at the same time in the fashion that I normally would.

TL;DR: we have embarked on a massive network redesign project for the better, and unfortunately, some of the changes have affected services.
Piqued my interest with "This will provide better routes to Asia, Brazil, and other locations." Potential route via SAFE? Should I be looking out for VPS providers in Singapore? haha
 

TheRoDent

Cool Ideas Rep
Joined
Aug 6, 2003
Messages
6,218
Peaked my interest with "This will provide better routes to Asia, Brazil, and other locations." Potential route via SAFE? Should i be looking out for VPS providers in Singapore? haha
Yes, potentially via SAFE, but only for some selective routes.
 

KoolKop

Active Member
Joined
Nov 5, 2022
Messages
34
Connect your PC directly to the ONT via network cable. Reset all the changes you made above. Create a new PPPoE connection in Windows with the login details CISP gave you, using automatically assigned IP / DNS, then dial it.
This didn't work for me.

I keep getting error 651 "the modem (or other connecting device) has reported an error"
 

TheLostPacket

Well-Known Member
Joined
Oct 31, 2019
Messages
173
This didn't work for me.

I keep getting error 651 "the modem (or other connecting device) has reported an error"
You mentioned you reset the ONT...

If that is the case, then it is possible that you will need to have OpenServe reprovision your ONT config.

Back in the day they used to reprovision unconfigured ONTs every 24 hours; not sure if they still do this.
 

KoolKop

Active Member
Joined
Nov 5, 2022
Messages
34
You mentioned you reset the ONT...

If that is the case, then it is possible that you will need to have OpenServe reprovision your ONT config.

Back in the day they used to reprovision unconfigured ONTs every 24 hours; not sure if they still do this.
Neverending...

What a drag. It took them forever just to hand over the line; how many years is this going to take?

Thanks
 

Getit

Well-Known Member
Joined
Oct 23, 2020
Messages
125
Getting routed to JHB was always the case, and YES, KZN will soon be routing directly on Seacom via Durban.

We are cutting things over slowly, so it will take some time.

KZN will soon have the best latency paths that we can provide, with failover to JHB, and WACS.
Great feedback, thanks @TheRoDent. Any indicative timeline for when this would come about? This month? Jan? Feb?
 