If that's the case, what is the upgrade that is happening then?

I think @PBCool made a mistake. Someone must have told him the NNI thing was an upgrade. As far as I know, all that's happening with Octotel is that NNI change and some random maintenance around me in the Southern Suburbs.

Have you considered making a stink with Octotel themselves and having them come out to check the quality and integrity of your actual installation?

Well, honestly, @TheRoDent and I have been crapping on Octotel for months at this point. I just assumed he would have had them check my line remotely at some point. I don't think it's my line though: most nights the proxy will give me my line speed in throughput to London, yet when I try without the proxy I get sub-10 Mbps. I also had flawless internet for the two-week holiday we had over Christmas. My opinion is that something else is going on; I just have no idea what it is.
 
Well, honestly, @TheRoDent and I have been crapping on Octotel for months at this point. I just assumed he would have had them check my line remotely at some point. I don't think it's my line though: most nights the proxy will give me my line speed in throughput to London, yet when I try without the proxy I get sub-10 Mbps. I also had flawless internet for the two-week holiday we had over Christmas. My opinion is that something else is going on; I just have no idea what it is.

If the proxy gives you better international speeds, it's likely your line that's experiencing packet loss.
 
If the proxy gives you better international speeds, it's likely your line that's experiencing packet loss.

Yeah, but how much packet loss is bad? No one ever seems to answer that question for me. I was told some packet loss is to be expected on GPON because it's the nature of a shared technology, but no one wants to tell me at what point I need to be worried.

This is what a normal iperf test looks like for me at around 9pm.

iperf3.exe --port 17001 -c trcvmh01.cisp.co.za --bandwidth 20M -l 1400 --omit 2 -R -u
Connecting to host trcvmh01.cisp.co.za, port 17001
Reverse mode, remote host trcvmh01.cisp.co.za is sending
[ 4] local 192.168.1.247 port 49591 connected to 154.0.15.181 port 17001
[ ID] Interval Transfer Bandwidth Jitter Lost/Total Datagrams
[ 4] 0.00-1.00 sec 2.48 MBytes 20.8 Mbits/sec 0.172 ms 0/1855 (0%) (omitted)
[ 4] 1.00-2.00 sec 2.38 MBytes 20.0 Mbits/sec 0.093 ms 2/1786 (0.11%) (omitted)
[ 4] 0.00-1.00 sec 2.39 MBytes 20.0 Mbits/sec 0.149 ms 1/1789 (0.056%)
[ 4] 1.00-2.00 sec 2.38 MBytes 20.0 Mbits/sec 0.100 ms 1/1783 (0.056%)
[ 4] 2.00-3.00 sec 2.39 MBytes 20.0 Mbits/sec 0.137 ms 0/1787 (0%)
[ 4] 3.00-4.00 sec 2.38 MBytes 20.0 Mbits/sec 0.108 ms 1/1785 (0.056%)
[ 4] 4.00-5.00 sec 2.38 MBytes 20.0 Mbits/sec 0.203 ms 1/1785 (0.056%)
[ 4] 5.00-6.00 sec 2.38 MBytes 20.0 Mbits/sec 0.127 ms 0/1785 (0%)
[ 4] 6.00-7.00 sec 2.38 MBytes 20.0 Mbits/sec 0.164 ms 2/1786 (0.11%)
[ 4] 7.00-8.00 sec 2.38 MBytes 20.0 Mbits/sec 0.132 ms 3/1787 (0.17%)
[ 4] 8.00-9.00 sec 2.38 MBytes 20.0 Mbits/sec 0.129 ms 0/1783 (0%)
[ 4] 9.00-10.00 sec 2.38 MBytes 20.0 Mbits/sec 0.088 ms 3/1788 (0.17%)
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth Jitter Lost/Total Datagrams
[ 4] 0.00-10.00 sec 23.9 MBytes 20.1 Mbits/sec 0.083 ms 12/17860 (0.067%)
[ 4] Sent 17860 datagrams

iperf Done.

That's a local iperf, so to a server in Cape Town.

iperf3.exe -R -u -b 20M -p 17001 -c iperf.cisp.co.za
Connecting to host iperf.cisp.co.za, port 17001
Reverse mode, remote host iperf.cisp.co.za is sending
[ 4] local 192.168.1.247 port 58571 connected to 172.104.157.197 port 17001
[ ID] Interval Transfer Bandwidth Jitter Lost/Total Datagrams
[ 4] 0.00-1.00 sec 2.44 MBytes 20.4 Mbits/sec 0.181 ms 6/318 (1.9%)
[ 4] 1.00-2.00 sec 2.28 MBytes 19.1 Mbits/sec 0.287 ms 12/304 (3.9%)
[ 4] 2.00-3.00 sec 2.36 MBytes 19.8 Mbits/sec 0.187 ms 3/305 (0.98%)
iperf3: OUT OF ORDER - incoming packet = 1189 and received packet = 1191 AND SP = 4
[ 4] 3.00-4.00 sec 2.36 MBytes 19.8 Mbits/sec 0.313 ms 5/306 (1.6%)
[ 4] 4.00-5.00 sec 2.35 MBytes 19.7 Mbits/sec 0.289 ms 3/304 (0.99%)
[ 4] 5.00-6.00 sec 2.34 MBytes 19.6 Mbits/sec 0.218 ms 7/306 (2.3%)
[ 4] 6.00-7.00 sec 2.30 MBytes 19.3 Mbits/sec 0.310 ms 10/305 (3.3%)
[ 4] 7.00-8.00 sec 2.29 MBytes 19.2 Mbits/sec 0.165 ms 13/306 (4.2%)
[ 4] 8.00-9.00 sec 2.35 MBytes 19.7 Mbits/sec 0.351 ms 3/304 (0.99%)
[ 4] 9.00-10.00 sec 2.32 MBytes 19.5 Mbits/sec 0.169 ms 8/305 (2.6%)
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth Jitter Lost/Total Datagrams
[ 4] 0.00-10.00 sec 24.3 MBytes 20.4 Mbits/sec 0.225 ms 70/3111 (2.3%)
[ 4] Sent 3111 datagrams
[SUM] 0.0-10.0 sec 1 datagrams received out-of-order

iperf Done.

And that's to a server in London, I believe, or it might be France. You can see the packet loss is way more severe internationally, but if it was my line causing that loss, surely the local test would mirror it?
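
Maybe a per-hop trace against the same hosts would at least show roughly where along the path the loss starts. Just a sketch using Windows' built-in pathping (WinMTR gives a similar view), pointed at the international iperf server from the test above:

pathping -q 50 iperf.cisp.co.za

pathping probes every hop itself, so the per-hop percentages are only a rough indication, but the interesting part is whether the loss only shows up after the traffic leaves the country.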

Everything just feels so incredibly fragile and transient with Octotel. Last night, for instance, the proxy didn't help me get better throughput at all: without the proxy I was getting 4 Mbps, with the proxy I was getting 6 Mbps. Then 10 minutes later my line was doing 23 Mbps and I could stream Twitch, and by 9pm it was back down to sub-10 Mbps.

It seriously feels like some guy is sitting there at Octotel with a tennis ball, throwing it at random buttons on a switchboard all night.
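
(For the record, a plain long-running ping to the Cape Town iperf box, left going through the evening, would at least pin down when the bad window opens; something like the below, with the log file name just an example:

ping -n 7200 154.0.15.181 > octotel_ping.log
find /c "Request timed out" octotel_ping.log

That's roughly two hours of once-a-second pings and then a count of the timeouts afterwards.)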
 
Are you able to share more than that? Are they upgrading/changing switches? Changing routing? Balancing the load on OLTs? Increasing capacity to any of the nodes?
Pretty sure it's switching and capacity on their IP core. That's about all they have told us.
 
And that's to a server in London, I believe, or it might be France. You can see the packet loss is way more severe internationally, but if it was my line causing that loss, surely the local test would mirror it?

It would. It sounds more like over-subscription during peak times, which is the conclusion you've already come to yourself. I wonder if they used some cheap equipment specifically in your area which is now struggling under load.
 
It would. It sounds more like over-subscription during peak times, which is the conclusion you've already come to yourself. I wonder if they used some cheap equipment specifically in your area which is now struggling under load.

Yeah, but then surely local would suffer from that too. Why is it only international that's garbage? I will add a caveat here: I used to have 3-5% local loss. Through @TheRoDent's black-box testing and him calling Octotel almost every day, that has subsided significantly, as you can see. So I don't know if it's, like you said, some budget equipment they used on me, and after complaining they migrated me to new stuff early, but when I try to route internationally it's still going through the old stuff?

Bah! I don't know. This stuff makes no sense, yet here we are.

*EDIT: and then, just to add to the confusion, most of the time the proxy helps me stream Twitch very well, so I'm assuming it somehow eliminates the international packet loss too.
 
Yeah, but then surely local would suffer from that too. Why is it only international that's garbage? I will add a caveat here: I used to have 3-5% local loss. Through @TheRoDent's black-box testing and him calling Octotel almost every day, that has subsided significantly, as you can see. So I don't know if it's, like you said, some budget equipment they used on me, and after complaining they migrated me to new stuff early, but when I try to route internationally it's still going through the old stuff?

Bah! I don't know. This stuff makes no sense, yet here we are.

*EDIT: and then, just to add to the confusion, most of the time the proxy helps me stream Twitch very well, so I'm assuming it somehow eliminates the international packet loss too.
The proxy has the same amount of access to the network that a regular client has.
 
Yeah, but then surely local would suffer from that too. Why is it only international that's garbage? I will add a caveat here: I used to have 3-5% local loss. Through @TheRoDent's black-box testing and him calling Octotel almost every day, that has subsided significantly, as you can see. So I don't know if it's, like you said, some budget equipment they used on me, and after complaining they migrated me to new stuff early, but when I try to route internationally it's still going through the old stuff?

Bah! I don't know. This stuff makes no sense, yet here we are.

*EDIT: and then, just to add to the confusion, most of the time the proxy helps me stream Twitch very well, so I'm assuming it somehow eliminates the international packet loss too.

I didn't realise you had no packet loss locally during peak times too. That's... really bizarre, as I'd usually come to the conclusion that it was an ISP issue in that case.
 
Yeah, but then surely local would suffer from that too. Why is it only international that's garbage? I will add a caveat here: I used to have 3-5% local loss. Through @TheRoDent's black-box testing and him calling Octotel almost every day, that has subsided significantly, as you can see. So I don't know if it's, like you said, some budget equipment they used on me, and after complaining they migrated me to new stuff early, but when I try to route internationally it's still going through the old stuff?

Bah! I don't know. This stuff makes no sense, yet here we are.

*EDIT: and then, just to add to the confusion, most of the time the proxy helps me stream Twitch very well, so I'm assuming it somehow eliminates the international packet loss too.

Why would local suffer? The latency from them to, say, our local speedtest/iperf server is so low (< 1 ms) that the TCP packet retry is instant and the packet is probably still cached on their system. Now between the EU and here there is 150 ms of latency, which is massive; their equipment might even have to ask your own PC/network for the packet again. And just looking at the JHB speeds from CPT, it paints the same picture, just to a lesser degree. It might not be packet loss per se, but rather congestion causing a delay.
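
Rough back-of-envelope with the old Mathis approximation for a single TCP stream, throughput ≈ MSS / (RTT × √loss), assuming ~1460-byte segments (not gospel, just to show the scale):

1% loss at 1 ms RTT: 11,680 bits / (0.001 × 0.1) ≈ 117 Mbit/s ceiling - you would never notice it locally.
1% loss at 150 ms RTT: 11,680 bits / (0.150 × 0.1) ≈ 0.8 Mbit/s ceiling - completely crippled.

Same loss rate, roughly a 150x difference in what a single TCP stream can push, purely because of the RTT.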

Edit: And they might be clever and give UDP traffic a much higher priority, although I don't know if they have access to the protocol at their level. Heck, I don't even know what they can see or what they do on that layer. :D
 
I didn't realise you had no packet loss locally during peak times too. That's... really bizarre, as I'd usually come to the conclusion that it was an ISP issue in that case.

CISP is my third ISP since moving to fibre on Octotel in May last year. The first 2-3 months were fine, then poopoo. I've been running tests since October last year. All three ISPs have had the same issue with Octotel.
 
CISP is my third ISP since moving to fibre on Octotel in May last year. The first 2-3 months were fine, then poopoo. I've been running tests since October last year. All three ISPs have had the same issue with Octotel.
Who were the first 2?
 
CISP is my third ISP since moving to fibre on Octotel in May last year. The first 2-3 months were fine, then poopoo. I've been running tests since October last year. All three ISPs have had the same issue with Octotel.

What a coincidence. I noticed my issues started in October last year too. The first few months (honeymoon period) were perfect for me too, and I also don't believe CISP is the issue here. However, they are the only way I can talk to Octotel, so I keep moaning here hoping to move the ball a millimetre in the right direction.
 
Why would local suffer? The latency from them to, say, our local speedtest/iperf server is so low (< 1 ms) that the TCP packet retry is instant and the packet is probably still cached on their system. Now between the EU and here there is 150 ms of latency, which is massive; their equipment might even have to ask your own PC/network for the packet again. And just looking at the JHB speeds from CPT, it paints the same picture, just to a lesser degree. It might not be packet loss per se, but rather congestion causing a delay.

Edit: And they might be clever and give UDP traffic a much higher priority, although I don't know if they have access to the protocol at their level. Heck, I don't even know what they can see or what they do on that layer. :D

But surely packet loss is packet loss, caused by something like a saturated link or bad SNR on one's line, and that loss should show up on an iperf whether it's local or international?

How can one have, let's say, 8% packet loss on an international iperf but 0.6% locally? If there is indeed loss on the line and retransmits, the loss would be tracked either way.

My understanding was that if you are dealing with packet loss, your throughput will be much better locally due to the <1 ms latency retransmits, and much lower internationally due to the much higher latency.

Does this mean that there is actually LESS loss because of the lower latency, and the higher latency simply exposes a bad line? Because this does not make sense in @image132's case: the proxy doesn't have the ability to reduce the latency to the EU, yet it gives him more than double his international throughput (which is supposedly caused by packet loss + latency to the EU) :unsure:o_O
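
One more thing that might be worth trying (assuming the CISP iperf servers also accept plain TCP tests on that port): run the same download in TCP mode alongside the UDP one, e.g.

iperf3.exe -p 17001 -c trcvmh01.cisp.co.za -t 10 -R
iperf3.exe -p 17001 -c iperf.cisp.co.za -t 10 -R

If the TCP run to the EU box collapses while the 20M UDP run only shows a couple of percent loss, that would fit the picture of latency amplifying a small amount of loss rather than the line itself dropping a huge chunk of packets.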
 
But surely packet loss is packet loss, caused by something like a saturated link or bad SNR on one's line, and that loss should show up on an iperf whether it's local or international?

How can one have, let's say, 8% packet loss on an international iperf but 0.6% locally? If there is indeed loss on the line and retransmits, the loss would be tracked either way.

My understanding was that if you are dealing with packet loss, your throughput will be much better locally due to the <1 ms latency retransmits, and much lower internationally due to the much higher latency.

Does this mean that there is actually LESS loss because of the lower latency, and the higher latency simply exposes a bad line? Because this does not make sense in @image132's case: the proxy doesn't have the ability to reduce the latency to the EU, yet it gives him more than double his international throughput (which is supposedly caused by packet loss + latency to the EU) :unsure:o_O

That was my understanding too.

It does my head in every time I try to work out what could be causing this. My best guess is that I'm going over some congested international link, but you'd think more people would be complaining if that was the case. Clearly I just don't understand networks well enough to comprehend this issue.
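
Maybe the route is the clue. If the proxy is a VPN-style tunnel (a plain HTTP/SOCKS proxy won't show up in a traceroute), then comparing a tracert with and without it to the same EU host should show whether the proxied traffic leaves over a different international path:

tracert -d iperf.cisp.co.za

If the hop where latency jumps to ~150 ms is different between the two runs, that's at least something concrete to wave at CISP/Octotel.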
 