Cool Ideas Fibre ISP – Feedback Thread 4

Status
Not open for further replies.

rolandos

Well-Known Member
Joined
Jul 19, 2005
Messages
117
Dead here for the last 45 minutes, or intermittently VERY SLOW (Morningside, Sandton, Vuma Aerial). Many disconnects on Teams during today as well. Something is whacked at Cool Ideas today, and it's becoming regular. Can't even get Cloudflare WARP to connect.
 

Satedah

Active Member
Joined
May 11, 2022
Messages
90
Connection went down for 10 minutes at 5pm.
It's back up now. This is starting to become quite frequent. :)
 

chrisGolf

Well-Known Member
Joined
Apr 30, 2019
Messages
309
@chrisGolf I believe a technician did try to get hold of you earlier. I have asked the team leader to assist with this.
Yes, they tried to call while I was in a meeting, then basically updated the ticket to say they called?
Why not update the ticket with the Vuma ticket number and ETR, and call Vuma every hour for updates? I'm sitting with 40+ hours of downtime this month alone, and it's only been 8 days.

Anyway, since 8 this morning there has been no internet. What is going on? What is the ETR? Is it a POP issue, a fibre break, or yet to be determined? After an entire day, surely they should have some indication of what the issue is and how long it will take to resolve. Don't call me, because the conversation will be something like "let's do an MTR from your ONT"; it's the entire complex. Please fix ASAP. If it's a Vuma issue, phone them continuously for updates...
 

DeatheCore

Senior Member
Joined
Dec 4, 2006
Messages
847
Not sure if it was mentioned in the previous posts today, but EU latency is all over the show (Vuma Trenched, CPT).

Blizzard: [latency screenshot]

CISP UK VPN: [latency screenshot]
 

TheRoDent

Cool Ideas Rep
Joined
Aug 6, 2003
Messages
6,218
Not sure if it was mentioned in the previous posts today, but EU latency is all over the show (Vuma Trenched, CPT).

Blizzard: [latency screenshot]

CISP UK VPN: [latency screenshot]
Blizzard should be improving. We are still making changes to peers and routing as we continue with our deployment of new transit facilities. Changes are limited to JHB and KZN for now.
 

DeatheCore

Senior Member
Joined
Dec 4, 2006
Messages
847
Blizzard should be improving. We are still making changes to peers and routing as we continue with our deployment of new transit facilities. Changes are limited to JHB and KZN for now.
New transit facilities sound like a good thing :) Blizzard is still wonky at 180-200 ms, same with your UK VPN. Return path issue, perhaps?
[latency graph screenshot]

Code:
MTR:
Start: Thu Dec  8 17:35:18 2022
HOST: ams1a-admin-looking-glass-back01             Loss%   Snt   Last   Avg  Best  Wrst StDev
  1.|-- Blizzard                                0.0%    10   11.8   3.1   0.9  11.8   3.3
  2.|-- ???                                          100.0    10    0.0   0.0   0.0   0.0   0.0
  3.|-- 137.221.66.44                                 0.0%    10    0.3   2.0   0.3  15.8   4.8
  4.|-- 137.221.78.66                                 0.0%    10   13.2  69.0   6.2 336.1 106.2
  5.|-- 137.221.65.156                                0.0%    10   19.9  12.5   6.1  27.8   8.8
  6.|-- 137.221.65.28                                 0.0%    10    6.3   9.2   6.2  35.9   9.3
  7.|-- 137.221.80.32                                 0.0%    10    6.7  12.0   6.3  54.1  15.0
  8.|-- ae114-0.ffttr6.frankfurt.opentransit.net      0.0%    10    6.0   7.7   5.9  19.6   4.2
  9.|-- 193.251.241.143                               0.0%    10  189.7 189.8 189.7 189.9   0.0
 10.|-- 193.251.250.170                               0.0%    10  193.1 193.2 193.0 193.5   0.0
 11.|-- 168.209.0.211                                 0.0%    10  196.0 196.1 196.0 196.4   0.0
 12.|-- 168.209.93.252                                0.0%    10  201.8 201.9 201.6 202.1   0.0
 13.|-- za-kzn-umh-p-2-be-211.ip.ddii.network         0.0%    10  217.6 217.7 217.5 218.0   0.0
 14.|-- za-gp-pkl-p-2-ten-0-0-0-12-2.ip.ddii.network  0.0%    10  217.7 218.2 217.6 222.6   1.5
 15.|-- 168.209.90.45                                 0.0%    10  218.2 218.4 217.3 225.8   2.6
 16.|-- za-wc-cpt-hpe-1-be-211.ip.ddii.network        0.0%    10  218.4 218.6 218.2 219.4   0.0
 17.|-- ar1-ctn-ten-ge-0-0-2-1.ip.ddii.network        0.0%    10  195.8 194.4 192.6 200.2   2.4
 18.|-- ???                                          100.0    10    0.0   0.0   0.0   0.0   0.0
 19.|-- ???                                          100.0    10    0.0   0.0   0.0   0.0   0.0
 20.|-- c3i-backbone.coolideas.co.za                 10.0%    10  195.7 199.2 195.1 217.3   7.2
 21.|-- um5f-cust.coolideas.co.za                     0.0%    10  193.0 213.2 192.7 271.9  28.6
 22.|-- ???                                          100.0    10    0.0   0.0   0.0   0.0   0.0

MTR:
Start: Thu Dec  8 17:35:19 2022
HOST: cdg1a-admin-looking-glass-back01             Loss%   Snt   Last   Avg  Best  Wrst StDev
  1.|-- 37.244.50.131                                 0.0%    10    1.6   1.2   0.9   1.6   0.0
  2.|-- ???                                          100.0    10    0.0   0.0   0.0   0.0   0.0
  3.|-- 137.221.66.38                                 0.0%    10   16.3   2.1   0.4  16.3   5.0
  4.|-- 137.221.77.50                                 0.0%    10   58.4   6.6   0.6  58.4  18.2
  5.|-- 137.221.77.32                                 0.0%    10    1.0   1.1   0.9   1.4   0.0
  6.|-- bundle-ether31.auvtr5.paris.opentransit.net   0.0%    10    1.0   1.2   0.7   2.3   0.3
  7.|-- 193.251.243.112                               0.0%    10  165.6 165.6 165.3 165.9   0.0
  8.|-- 193.251.250.170                               0.0%    10  174.9 174.9 174.7 175.3   0.0
  9.|-- 168.209.0.211                                 0.0%    10  177.9 177.9 177.7 178.4   0.0
 10.|-- 168.209.93.253                                0.0%    10  178.8 178.5 178.1 178.8   0.0
 11.|-- za-kzn-umh-p-2-be-212.ip.ddii.network         0.0%    10  186.0 185.9 185.6 186.3   0.0
 12.|-- za-gp-pkl-p-2-ten-0-0-0-12-2.ip.ddii.network  0.0%    10  187.0 187.1 186.8 187.3   0.0
 13.|-- za-gp-tis-p-2-hu-0-0-1-4.ip.ddii.network      0.0%    10  187.9 188.0 187.8 188.2   0.0
 14.|-- 168.209.132.137                               0.0%    10  187.4 187.8 186.8 192.5   1.6
 15.|-- 197.103.32.149                                0.0%    10  187.6 187.6 187.5 187.7   0.0
 16.|-- ???                                          100.0    10    0.0   0.0   0.0   0.0   0.0
 17.|-- ???                                          100.0    10    0.0   0.0   0.0   0.0   0.0
 18.|-- ???                                          100.0    10    0.0   0.0   0.0   0.0   0.0
 19.|-- c3i-backbone.coolideas.co.za                  0.0%    10  292.0 201.2 180.6 292.0  35.6
 20.|-- um5f-cust.coolideas.co.za                     0.0%    10  181.3 185.1 181.0 196.7   5.5
 21.|-- ???                                          100.0    10    0.0   0.0   0.0   0.0   0.0
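For anyone reading traces like the ones above, here is a rough Python sketch (my own, not from the thread) that parses `mtr --report` lines and flags the first responsive hop where the average RTT jumps sharply, which is where the path crosses onto the long-haul segment:

```python
import re

# Match "N.|-- host  loss%  snt  last  avg ..." lines from an mtr report.
# Captures: hostname, loss percentage, and average RTT.
LINE = re.compile(r"^\s*\d+\.\|--\s+(\S+)\s+([\d.]+)%?\s+\d+\s+[\d.]+\s+([\d.]+)")

def latency_jumps(report: str, threshold_ms: float = 100.0):
    """Return (host, avg_ms) pairs where avg RTT rises by more than threshold_ms."""
    jumps, prev_avg = [], None
    for line in report.splitlines():
        m = LINE.match(line)
        if not m:
            continue
        host, loss, avg = m.group(1), float(m.group(2)), float(m.group(3))
        if loss >= 100.0 or host == "???":
            continue  # unresponsive hop; no usable timing data
        if prev_avg is not None and avg - prev_avg > threshold_ms:
            jumps.append((host, avg))
        prev_avg = avg
    return jumps
```

On the Amsterdam trace above, this would flag hop 9 (193.251.241.143, ~190 ms average) right after the Frankfurt opentransit hop, consistent with the long return leg being the issue rather than anything local.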
 

TheRoDent

Cool Ideas Rep
Joined
Aug 6, 2003
Messages
6,218
Yeah, I think some transparency would be appreciated.
OK. It's been mentioned by me and @PBCool, but the information is obviously spread across several threads, so it's hard for anyone to get a summary, I suppose. We've also been down in the "thick of things", trying to minimise the impact on our network.

  • We are moving our network to a blend of Seacom (east coast) and WACS (west coast) capacity via NTT.
    • This has happened for KZN and JHB initially.
    • This will provide better routes to Asia, Brazil, and other locations.
    • Previously we used EASSy as a backup route only, on the east coast.
    • This has led to increased latency on certain international paths, as JHB now prefers Seacom over WACS, depending on the route.
    • We are working on this, but it is a process.
  • Between our Rosebank datacentre and Teraco in Isando, we have deployed an additional 150Gbps of capacity.
    • During the process, we have had several issues with routers in Teraco Isando and Parklands/Rosebank.
    • Several of the new links have flapped, due to unexpected issues on the build.
      • This caused a lot of the daily reconvergence, where local and peered traffic would be unavailable for a few minutes.
      • We believe these have now been stabilised.
  • We have also experienced stability issues on our Rosebank->Midrand->Isando paths.
    • This caused traffic to flow along lower-capacity links, leading to packet loss.
  • We upgraded our core routers in Teraco Isando.
    • One of these routers contributed to packet loss for Rosebank (Trenched/Aerial) customers, which turned out to be a device issue. This router is now running cleanly.
  • We experienced consistent packet loss via one of our NAP Africa 100Gbps ports, eventually resolved by NAP Africa/Teraco replacing an optic on their side after several escalations. This had been ongoing for several weeks.
  • We experienced capacity issues on another of our NAP Africa (40Gbps) routers, resolved by collapsing 2x20G port-channels into a single 40Gbps port-channel.
    • This led to peering issues with WhatsApp/Facebook/Meta.
  • Due to the new transit capacity with NTT/Didata, we also experienced routing issues at JINX in Rosebank, which caused routing loops for services like Quad9 and Microsoft Teams. We resolved this by preferring NAP Africa routes for these peers.
  • Our TCP accelerator, which manages packet loss to international destinations, developed a fault in one of its line cards; the replacement took 2 weeks to arrive.
    • With the transit changes pending, we decided NOT to reinstall it in-line with our current UK capacity, but rather in-line with the new NTT/Didata capacity.
    • This led to degraded international throughput over the past few weeks while we finalised the deployment of the new NTT/Didata capacity.
  • And then Vumatel Villages has been a nightmare, which has been escalated to both COOs at Vumatel, but nothing seems to be happening. We are pushing all the levers we can.
  • In between all this, load shedding has caused issues across various FNO networks, which is not something we can control.
I can understand the frustration this has caused; it has all been compressed into several events over a few weeks.

I hope the transparency helps. I've just not had the time to write this essay, respond to queries on MyBroadband, and deal with our network team all at the same time in the fashion that I normally would.

TL;DR: we have embarked on a massive network redesign project for the better, and unfortunately some of the changes have affected services.
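A couple of the fixes in the summary above come down to BGP path preference (e.g. steering Quad9 and Teams traffic towards NAP Africa routes instead of JINX). As a purely illustrative sketch, not CISP's actual configuration, here is the first two steps of BGP best-path selection in Python, showing how raising local-preference on one session makes it win even with a longer AS path:

```python
from dataclasses import dataclass

# Toy model of two candidate routes to the same prefix. All names and
# numbers here are hypothetical, for illustration only.
@dataclass
class Route:
    via: str          # e.g. "NAP Africa" or "JINX"
    local_pref: int   # higher wins first
    as_path_len: int  # shorter wins as the tie-breaker

def best_path(routes):
    """Pick the preferred route: max local-pref, then min AS-path length."""
    return max(routes, key=lambda r: (r.local_pref, -r.as_path_len))

routes = [
    Route("JINX", local_pref=100, as_path_len=2),
    Route("NAP Africa", local_pref=200, as_path_len=3),
]
print(best_path(routes).via)  # NAP Africa
```

Bumping local-preference is the usual knob for this kind of traffic steering because it is evaluated before AS-path length in the best-path algorithm, so it overrides whatever the peer advertises.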
 