After doing some investigating: your DSLAM's ATM links are currently provisioned at 50Mbps, and as you can see in the graph below for your DSLAM, it's hitting that limit most of the time:
[Attachment 241158: DSLAM ATM link utilisation graph]
This will require Telkom either to upgrade the links to Metro Ethernet or, if possible, to increase the ATM provisioning limits.
Thanks.
And to translate into English for the network-slang impaired:
The links (cables/pipes) that send data out from your exchange to ISPs are provisioned with too little capacity for the demand in the area. Packets therefore queue up at the exchange before being sent or received, which causes the increased latency you are seeing. This is what congestion looks like at an exchange.
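To make the queuing effect concrete, here's a minimal sketch (with assumed, illustrative numbers — the 50Mbps rate is from the post, the packet size and queue depths are mine) of how much delay a backlog of packets adds on a congested link:

```python
# Illustrative sketch: extra delay a new packet sees when it has to
# drain through a FIFO queue on a congested 50 Mbps exchange link.
LINK_MBPS = 50            # provisioned ATM link rate (from the post)
PACKET_BYTES = 1500       # assumed typical full-sized packet

def queueing_delay_ms(packets_queued, link_mbps=LINK_MBPS,
                      packet_bytes=PACKET_BYTES):
    """Time to transmit everything ahead of you in the queue, in ms."""
    bits_ahead = packets_queued * packet_bytes * 8
    return bits_ahead / (link_mbps * 1_000_000) * 1000

# A few hundred queued packets already add tens of milliseconds:
for q in (100, 500, 1000):
    print(q, "queued ->", round(queueing_delay_ms(q), 1), "ms extra")
```

With these numbers, 100 queued packets add about 24ms, and 1000 add about 240ms — exactly the kind of latency jump you see on a saturated exchange.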
This, contrary to popular opinion, affects both latency and speed, because of how operating systems and servers scale the TCP receive window. In plain terms: increased latency severely limits the maximum speed you can achieve on a connection, because of the way packets are sent and acknowledged and the number of unacknowledged packets allowed in flight at once (in very basic layman's terms).

When a packet is dropped, the operating system halves the receive window, which halves the maximum possible throughput; once that ceiling falls below your line speed, you notice the slowdown. For a given window size, throughput is capped at roughly the window size divided by the round-trip time, so as latency rises the ceiling falls. This is also why international connections are always slower over TCP: the latency to the other side of the planet is higher, and so is the chance of a dropped packet. It's why dropped packets at a remote server can sometimes grind your connection to a halt, and why even a slight increase in latency, especially to international destinations, causes an immediate drop in speed and a long recovery afterwards. Locally it's less of an issue, but past a certain threshold it certainly becomes a problem.
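The window-over-RTT ceiling described above can be sketched in a few lines; the window sizes and latencies are assumed, illustrative values, not measurements:

```python
# Sketch of the TCP receive-window throughput bound:
# max throughput ~= receive window / round-trip time.
def max_throughput_mbps(rwin_bytes, rtt_ms):
    """Upper bound on TCP throughput for a given window and RTT, in Mbps."""
    return (rwin_bytes * 8) / (rtt_ms / 1000) / 1_000_000

# A 64 KiB window (a common un-scaled maximum) at local vs
# international latencies:
print(round(max_throughput_mbps(65535, 20), 1))   # low local RTT
print(round(max_throughput_mbps(65535, 200), 1))  # international RTT
```

The same window that supports roughly 26Mbps at 20ms only supports roughly 2.6Mbps at 200ms, which is the latency-versus-speed relationship the paragraph describes.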
Let me illustrate with a quick chart:
Basically what this shows is that on Windows systems where the RWIN value is left in its default, restricted state (which helps guard against some packet loss), your latency caps your speed as above, i.e. if your latency to the server is around 120ms, your theoretical maximum speed is around 2Mbps. Many Windows installations, though, allow the window to scale much larger. This sounds great in theory, because a window of roughly 1MB at 120ms latency still gives a theoretical maximum throughput of around 62Mbps. However, every dropped packet halves the window on the operating system, and in turn halves the maximum attainable speed: two dropped packets on a 120ms connection reduce the ceiling to around 15Mbps. Now imagine dropping 3% of your packets (at your exchange, for example, which is why we ask for mtr data to analyse packet loss) and you can imagine how this impacts your connection. It's huge. If you're sending thousands of packets per second and even 1% of them are dropped, the window on the connection shrinks, the latency-dictated ceiling comes into play, and the connection suffers in both theory and practice.
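The halving behaviour can be sketched as follows. The starting window (1 MiB) and the 120ms RTT are assumptions chosen to roughly match the figures above, not exact Windows defaults:

```python
# Sketch: each loss event halves the TCP window, and the RTT then
# caps throughput at window / RTT. Assumed 1 MiB window, 120 ms RTT.
def throughput_after_drops(start_window_bytes, rtt_ms, loss_events):
    """Throughput ceiling in Mbps after a number of window-halving losses."""
    window = start_window_bytes / (2 ** loss_events)
    return (window * 8) / (rtt_ms / 1000) / 1_000_000

RTT_MS = 120
for drops in range(4):
    mbps = throughput_after_drops(1_048_576, RTT_MS, drops)
    print(drops, "loss events ->", round(mbps, 1), "Mbps ceiling")
```

With these assumptions the ceiling starts near 70Mbps and falls to roughly 17Mbps after two loss events — the same ballpark as the 62Mbps and 15Mbps figures quoted above.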
So this is why, although it can be tedious (and we're working on a better solution), we always try to look at mtr data in a live environment when issues arise, to ascertain what the problem may be.
Hope that helps...someone...