The best ADSL service provider in South Africa

Status
Not open for further replies.

Pieter Uys

Member
Joined
Jan 7, 2014
Messages
11
So, my service provider is 2nd from last, and together with their useless fixed-line provider (I see they are last on the list), they provide an ADSL service with intermittent breaks. Up to 60 breaks in a 24-hour period.
There is no fibre available in Amanzimtoti yet, and satellite-based service providers apparently cannot "see" my property from their tower, so no go there either. Any other suggestions from MyBroadband?
 

CAPS LOCK

Executive Member
Joined
Jun 29, 2009
Messages
5,797
What kind of sorcery is this "only invited users" demographic - quota system? I did not receive any invite - blames fikile mbalula...
 

wingnut771

Honorary Master
Joined
Feb 15, 2011
Messages
12,467
Question:

I see Internet Solutions in the list but no Plugg, Incredible Connection etc. Would they fall under this? If so, why does OW etc get their own row in the table?
 

profeet

Banned
Joined
Oct 14, 2015
Messages
1,094

[attached image]
 

Bryn

Doubleplusgood
Joined
Oct 29, 2010
Messages
16,250
I never received an invite for this survey. Glad to see Mweb and Openweb getting the recognition they deserve though.
 

KingRat1

Well-Known Member
Joined
Jan 29, 2010
Messages
227
The standard deviations in this user poll show that customer experiences vary widely for some service providers.

But look at the number of people surveyed: from 28 for IS and 31 for OpenWeb up to 633 for Telkom and 753 for Afrihost. Is this representative of their market share?
A standard deviation of 2.88 on a sample of 31 indicates widely spread opinions within a small sample, so a larger sample is needed.
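To put that spread in perspective, the quoted deviation can be turned into a confidence half-width around the sample mean. A minimal sketch, assuming the 2.88 standard deviation and n = 31 from the post, and a normal approximation with z = 1.96 for 95% confidence:

```python
import math

def mean_confidence_halfwidth(std_dev: float, n: int, z: float = 1.96) -> float:
    """Half-width of the 95% confidence interval around a sample mean
    (normal approximation, z = 1.96)."""
    return z * std_dev / math.sqrt(n)

# Figures quoted above for OpenWeb: standard deviation 2.88, sample of 31
half = mean_confidence_halfwidth(2.88, 31)
print(round(half, 2))  # ~1.01: the true mean could plausibly sit a full point either way
```

A full point of slack either way on a 10-point scale is enough to reshuffle most of the table, which is the poster's point about needing a larger sample.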
 

Cactus

Senior Member
Joined
Jan 16, 2015
Messages
654
wingnut771 said:
Question:

I see Internet Solutions in the list but no Plugg, Incredible Connection etc. Would they fall under this? If so, why does OW etc get their own row in the table?

Because Openweb shares IS and Mtn business accounts? :D
 

DJ...

Banned
Joined
Jan 24, 2007
Messages
70,287
I’d sincerely like to open a frank and (hopefully) positive discussion about the methodology adopted here. It is, ultimately, an important consideration when it is being used to crown a so-called “best” ISP in South Africa. Are we really seeing the best ISP here, or are we seeing the results of major flaws? Let’s break it down:

To measure “the best”, one first has to define which factors and data points individually contribute to being the best, weight them fairly in combination, ensure access to sufficient and accurate source data, and determine acceptable margins of error that remain consistent across all combinations of long-term test data points. What is “best”? Without defining it, how do we qualify our methodology? If the methodology does not produce a consistent and accurate outcome with statistical confidence, what exactly are we measuring, and why is it claimed to achieve something it is not achieving, while potentially misrepresenting these brands?

One must then determine exactly what the aim of the survey is. When rating the best of a group of products, services, or companies, accuracy is the ultimate aim: you want very tight confidence intervals, from sufficient sample sizes, using normalised data, with minimal variance in standard deviation unless accounted for by reasonable exceptions. The results should therefore come with an analysis, a detailed explanation of the methodology and its rationale, and access to the anonymised source data for replication, benchmarking, and peer review.

If you wish to publish a supposedly statistically accurate industry benchmark of the best and worst companies (for which you assume liability on publication), where you are responsible for sampling, modelling, testing, regression, benchmarking, analysis, copywriting, and publication, there are no excuses for getting it wrong. These are the responsibilities that come with the scientific task you have adopted, and since you claim to host the national benchmark award based on this data, MyBroadband should be doing everything it can to get as close to 100% accuracy as possible.
Unfortunately, it does appear that these results are not accurate at all, at least not based on the information published so far, and certainly not without the anonymised source data being released for peer review.

Let’s analyse some fatal flaws here. Correct me where I am wrong, as I am open to sincere discussion: an industry standard MUST be set, and while we support your efforts to set one, you will do more harm than good by publishing incorrect data.

The problem with NPS is that with data sets above about 30 responses, the overall score tends to drift downwards: you end up with less of a normal distribution (statistically speaking), so more responses are caught in the discarded and negative buckets. Ironically, this means providers with very few responses tend to score higher, relatively speaking, than their counterparts with more responses. But response count is only a small factor in all of this. NPS is not often used for rating the best and worst of anything, as it does not lend itself well to that. Even where some correlated data exists, correlation does not equal causation, and a statement of intent (which is what NPS is) neither reflects nor historically correlates with action or performance (what it purports to measure). Where NPS is applicable, it is used for longer-term industry benchmarking and as a standard performance-over-time measure, to determine the effectiveness of targeted strategies that entrench brand loyalty. Because only responses of 9 and 10 count as promoters, 7s and 8s are discarded entirely from the score calculation, and 0 to 6 is negative, any ingrained negative industry sentiment means a large proportion of traditionally valuable data is discarded, even though it should contribute to a best-and-worst statistic. There is no need to discard any responses in a best-and-worst analysis unless fraud or manipulation is detected. If the intent is to normalise data, exclusion is not the way to do it, and in any case you would not want to exclude mid-tier, normally distributed data when determining who is best.
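For reference, the NPS bucketing described above (9s and 10s as promoters, 7s and 8s discarded from the score, 0 to 6 as detractors) can be sketched in a few lines:

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6).
    Passives (7-8) are dropped from both terms but stay in the denominator."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# Ten respondents: three promoters, four passives, three detractors
print(nps([10, 9, 9, 8, 8, 7, 7, 6, 5, 3]))  # 0.0, despite seven scores of 7 or higher
```

Note how seven of the ten respondents scored the provider at 7 or above, yet the NPS lands at zero: this is the discarded-bucket effect the post is describing.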

But let’s assume that you somehow find a way to solve that issue:

In South Africa, where telecoms traditionally lags behind the rest of the world, where relative cost is significantly higher and has been for decades, and where service levels are perceived to be below world industry standard, you have more than an inkling that negative sentiment will drive ratings down: you have a guarantee of it. If only the top two responses are considered positive, you end up with a perverse result: the more representative your data, the higher your confidence level, and the greater your relative sample sizes, the more likely you are to see traditionally happy customers and brand advocates shift responses expected in the 9s and 10s (the only positive scoring options) into 7s and 8s (which are discarded entirely and play no part in the NPS score). So, ironically, the more accurately you measure each ISP, the more likely your overall NPS is to fall, and the larger your relative standard deviation becomes. Since standard deviation is a measure of spread around the mean, the data must be normalised for it to be comparable in any way. Again, you do not appear to have that here, as there is no control for confidence levels and no granular market-share analysis to determine whether your sample sizes are representative and fairly comparable.

Let’s get crazy and assume that somehow this is all solved mathematically:

Another quite serious flaw in using NPS as a single data point for rating the best of an industry over time (multiple survey rounds, used as a standard measurement) is that it has no standard scale for measuring performance changes. Even if a provider is measured again after changes (up or down), the new result still sits on the same raw -100 to 100 scale, derived from the 0 to 10 ratings. I will let Bob Hayes explain the clear problems here, as he illustrates them well:
An NPS value of 15 could be derived from a different combination of promoters and detractors. For example, one company could arrive at an NPS of 15 with 40% promoters and 25% detractors while another company could arrive at the same NPS score of 15 with 20% promoters and 5% detractors. Are these two companies with the same NPS score really the same?
Are two companies with two different NPS scores really any different? How could you know unless you see the underlying data? And how can discarding data in the 7 to 8 response range help, seeing that it is a positive response from the user’s perspective and may itself be a result of negative industry sentiment? As the statisticians, you certainly cannot know. The problem with NPS is that you MUST publish the underlying data and use more than one data point to determine “the best” across an industry.
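Hayes’s point is easy to verify: NPS is simply the promoter percentage minus the detractor percentage, so very different customer bases collapse to the same score. A quick sketch using the mixes quoted above:

```python
def nps_from_mix(pct_promoters: float, pct_detractors: float) -> float:
    """NPS given the percentage of promoters and detractors."""
    return pct_promoters - pct_detractors

# Hayes's example: wildly different response mixes, identical score
print(nps_from_mix(40, 25))  # 15
print(nps_from_mix(20, 5))   # 15
```

A company with 25% detractors and one with 5% detractors report the same headline number, which is exactly the ambiguity being criticised.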

You also have to do a few other important things before then, which I am not seeing here (but let’s continue to assume that you’ve solved everything above):

You have a problem with non-representative data. While one might believe a captive internet audience is fully representative of an ISP’s users, it is in fact only representative of that specific sample’s profile segmentation. In the case of MyBroadband’s forum users, you reach neither the majority average user nor the occasional user; you reach a fairly non-diverse use case. One also has to ensure that confidence levels are very high and that there is zero interference from vested interests or representatives of the surveyed companies, especially if it is known that NPS will be used. Because NPS reduces an 11-point scale to three buckets, with only two responses counting as positive, two discarded, and seven negative (so really a two-point system with many data points thrown away), you need to be sure your sample reflects an average ISP’s clientele. Unfortunately, in MyBroadband’s sample only one ISP’s customer base is strongly correlated with the forum: Crystal Web, which as a ratio of market share has, by an absolute country mile, the largest proportion of MyBroadband use cases on its book. Across a survey of 3,108 MyBroadband respondents (three times the sample required for the industry-standard 95% confidence level with a 3% margin of error; it is actually indicative of a roughly 1.75% margin of error), 19% indicated that they are Crystal Web users. This is absurdly higher than for every other ISP in South Africa. So only one ISP’s user base correlates strongly with MyBroadband’s user segmentation, which remains a niche, tech-orientated audience, traditionally far more discerning and demanding than the average ISP end-user.
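The ~1.75% figure quoted for the 3,108-respondent sample can be reproduced with the standard worst-case formula, assuming simple random sampling and p = 0.5 (which maximises the variance):

```python
import math

def margin_of_error(n: int, z: float = 1.96, p: float = 0.5) -> float:
    """Worst-case margin of error, in percentage points, for a simple
    random sample of size n (p = 0.5 maximises the variance)."""
    return z * math.sqrt(p * (1 - p) / n) * 100

print(round(margin_of_error(3108), 2))  # 1.76, in line with the ~1.75% cited above
```

Of course, the formula assumes a random sample; the whole thrust of this paragraph is that a self-selected forum audience is not one, so the real uncertainty is larger than this number suggests.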

So even if we assume the data is comparably accurate for all ISPs (it is not), that the sample sizes are large enough to draw accurate, comparable conclusions (they are not), and that the data is normalised to account for NPS oddities (it is not), these scores still should not be used for any industry benchmarking purposes. In all economic likelihood they are far lower, relatively, than global industry scores, and as anyone who knows NPS will tell you, such benchmarking is really one of its only two potential benefits, and then only in certain instances for certain companies.

continued in part 2...
 
Last edited:

DJ...

Banned
Joined
Jan 24, 2007
Messages
70,287
Let’s just assume for a second that we can resolve all of the above problems. Another glaringly obvious issue persists: NPS rewards brand equity over product delivery in an absurdly perverse manner. If a company has high brand equity and a large proportion of brand advocates among its users, NPS rewards those users’ unwillingness to submit a very low score. Rather than a 0, users of a company with good brand equity (common in disruptive, utility-, subscription-, or usage-based businesses) will be inclined to give just below a 5 when the company is not offering a good product at that point in time. Does good brand equity make for a good ISP? No. It makes for a good business and lower marketing spend, but at no point will brand equity get your packets from point A to point B, nor can it contribute to the quality of that transmission. Being the best requires analysis and fair comparison of a number of aspects of an ISP’s operations, its network, and its value. NPS asks the user to make a single subjective decision based on a combination of emotions and their ability to develop valued relationships with friends and family.

But let’s assume we can somehow solve all of the above with novel, published, peer-reviewed research. We now face a problem of long-term analytics, because NPS does not scale changes in source data well. Remember that the next survey round is another -100 to 100 scale analysis with a multitude of ways to arrive at the exact same NPS score (again, what are you really measuring?). So your changes may not reflect at all, may be amplified, or may bear no relation to real-world outcomes. But there is a far bigger flaw in using NPS as a single data point to determine “the best”: collapsing an 11-point scale down to two points biased towards bad means you have to double your sample size for the next round of surveys, because this binary system doubles your margin of error. If you surveyed with the same sample size, any changes (even if NPS could accurately measure them) would be indistinguishable from the inherent sampling error. Now you have a major problem on your hands, because each NPS survey potentially becomes huge and requires access to masses of willing, relevant respondents, segmented similarly to the base data. You also have a major problem when performing overall margin-of-error calculations, because those do not scale up and down proportionately either. Good luck with those algorithms; they are not the simple Pythagorean sum-of-squares calculations you could have used for similar sample sizes. Speaking of rebasing numbers: how can Q1 operate on the old methodology and Q2 on a new one? The two quarters and their underlying data are not congruent in any way, nor is there any way to fairly rationalise them into a final, single number or expression. In Q1 you based the top-rated ISP on three relevant data points and published accordingly. While you did ask respondents to rate, out of 10 (1 to 10, IIRC), how likely they would be to recommend their ISP to a friend or family member, that result was suspiciously missing, or I missed its publication.
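The “Pythagorean” combination DJ refers to, the margin of error on the difference between two independent estimates, can be sketched as a root-sum-of-squares (this assumes independent samples; the 1.85-point figure is the per-round margin DJ derives later in the post):

```python
import math

def combined_margin(e1: float, e2: float) -> float:
    """Margin of error on the difference of two independent estimates:
    root-sum-of-squares of the individual margins."""
    return math.sqrt(e1 ** 2 + e2 ** 2)

# Two survey rounds, each carrying a ~1.85-point margin of error
print(round(combined_margin(1.85, 1.85), 2))  # 2.62 points of slack on any change
```

So a quarter-on-quarter movement smaller than about 2.6 points would be indistinguishable from sampling noise under these assumptions, which is the comparability problem being raised.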

OK, but let’s assume all of the above is somehow miraculously taken care of (below would have to form part of the solution). Here is your major problem:

You are aiming for statistical accuracy here. Your NPS-based-single-number-subjective-emotional-non-network-related-midswitch-undocumented-binary-analysis-double-error-margin-subjected-non-comparative-line-item-relevant-randomly-exclusionary-negative-sentiment-incentive-non-scalable-non-representative-high-brand-equity-rewarding calculation requires at least an effort to use, from the ground up, sufficient source data that fulfils even the most basic requirements for accuracy. Even if we were on the road to solving all of the aforementioned problems, you would still need to obtain responses from each ISP indicative of its market share. After all, in Q1 2015 Crystal Web was excluded from publication even though we won both top-rated ISP and overall most highly recommended ISP based on MyBB survey questions. Evidently MyBB must have detailed, up-to-date access to each ISP’s market share to have made such a determination, so you will know exactly how many responses you need from each ISP to publish any simple, non-NPS (NPS requires double the sample, given its binary evaluation), statistically accurate model purporting to represent the opinions of each ISP’s customers about their overall quality. Fortunately, I too have incredibly up-to-date, total market-share analytics, drilled down into detailed per-ISP information, specifically the number of DSL customers per ISP. And I’m afraid we have a problem. Without disclosing other ISPs’ specific subscriber numbers, here is how it plays out on the following logical assumptions:

Confidence level: the standard 95%. Ideally we should aim higher, but let’s stick with a 95% confidence level.
Confidence interval (margin of error): on such flimsy statistical methodology for measuring “the best”, you cannot afford wide variances; you need as tight a margin as you can get. So 1.5 percentage points.
That would mean a minimum sample of around 4,500 unique individuals of the correct user profile, segmentation, and distribution, and that is simply for an overall, non-granular, non-ISP-specific industry analysis. It is great baseline control and industry information. But you only received 2,732 respondents, so your margin of error is actually about 1.85. And we have a further problem: if we work with 1.85 and a 95% confidence level for the granular, per-ISP NPS approach, the potential margin of error could see up to 80% of the ISPs on the list move positions in either direction, unless you received surveys from around 68,214 unique individuals.
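The sample-size arithmetic above follows the standard formula n = z²·p(1-p)/E². A sketch under the same assumptions (95% confidence, worst-case p = 0.5); note it lands a little under the ~4,500 quoted, which presumably includes headroom:

```python
import math

def required_sample(margin_pct: float, z: float = 1.96, p: float = 0.5) -> int:
    """Minimum n for a given margin of error in percentage points
    (worst case p = 0.5)."""
    e = margin_pct / 100
    return math.ceil(z ** 2 * p * (1 - p) / e ** 2)

print(required_sample(1.5))  # 4269, near the ~4,500 quoted above
print(required_sample(3.0))  # 1068 suffices for the 3% baseline case
```

Halving the margin of error roughly quadruples the required sample, which is why the per-ISP, NPS-adjusted requirement balloons so quickly.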

Because I have accurate market-share statistics, I am able to calculate the confidence interval for each ISP using your published data. Because we do not know whether the error applies upwards or downwards, and can never statistically say for certain (there is insufficient source data for accurate confidence levels for every single ISP), the only conclusions that can be drawn are the upper and lower confidence limits of the data. This is the problem with a single, subjective, score-orientated approach to this process. And remember, this does not factor in all of the logical and statistical problems posted above, and is based on only a 95% confidence level. Here are the actual results, including margins of error:

[attached image: table of per-ISP scores with margins of error]


The only conclusion you can draw (even if NPS gave you accurate information) is that, for example, Axxess scored somewhere between 7.824 and 8.9, Crystal Web somewhere between 7.874 and 8.222, and Telkom somewhere between 5.38 and 6.359. You cannot know the exact score for any ISP with your data, nor do you have the statistical confidence to make that call for any ISP. If you were to claim that one is likely greater than another, you would have to rewrite statistics and revolutionise mathematical forecasting. The outcome in which Afrihost scored 8.6, Cybersmart 8.3, and Axxess 7.824 is just as likely as the set of results you published, and that is taking accurate market-share statistics into account. Without them, your numbers (using this NPS model) go absolutely berserk in terms of their lack of accuracy. In fact, the outcome in which Internet Solutions came 1st and Crystal Web 9th is just as likely as the published results. The fact that the company that came 8th according to your data could, with the same level of confidence as your published results, have come 1st shows a major flaw in the way this information is calculated, and an even bigger issue with the conclusions being drawn.
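The ranges quoted above make the ranking problem concrete: wherever two ISPs’ intervals overlap, their positions cannot be distinguished at the stated confidence level. A minimal overlap check using the published ranges:

```python
def intervals_overlap(a, b):
    """True if two (low, high) confidence intervals overlap, i.e. the two
    scores cannot be ranked at the stated confidence level."""
    return a[0] <= b[1] and b[0] <= a[1]

# Ranges quoted above
axxess = (7.824, 8.9)
crystal_web = (7.874, 8.222)
telkom = (5.38, 6.359)

print(intervals_overlap(axxess, crystal_web))  # True: these two cannot be ranked
print(intervals_overlap(axxess, telkom))       # False: Telkom is distinguishably lower
```

So the data does support one kind of claim (Telkom sits below the leading pack) while leaving the ordering among the top ISPs statistically undecidable, which is DJ's core objection.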

There is also no mention of how the closed groups of invitees were selected (nor the rationale for this; previously all MyBB members got a vote), or, if randomly, how that was achieved. Nor is it explained how you expect to make the accurate, comparable conclusions this article makes with a new subset of every ISP’s customers for every survey round; how you will incorporate Q1 results and criteria in a normalised fashion; how you will implement the algorithms required to make NPS equitably comparable across all ISPs, given the aforementioned limitations; how you expect to aggregate this data while achieving margins of error low enough for the statistics to be trusted; or where you expect to find the respondents, having to double the sample size for every voting round to account for the sampling error inherent in NPS’s binary bucket system.

Rudolph, I spoke to you about this a few months ago and offered to help, free of charge, to develop a properly defined, scientific set of algorithms applicable to the industry, which could serve as a local industry standard of measure. Whoever developed this method either doesn’t understand the nature of the ISP business or doesn’t understand statistics. Or both. If the market research house claims to have already normalised the data and developed solutions to all of the aforementioned problems, it will need to publish this as a research paper, with access to the source files, for proper peer review, because that seems highly, highly unlikely...
 
Last edited:

new_in_za2

Senior Member
Joined
Sep 25, 2012
Messages
610
I appreciate the effort, DJ, but I don't need that much detail to realise that the MyBroadband "best ISP" articles are 100% bull****. I've used Afrihost previously, and it was absolutely the worst ISP I've ever used, from both an internet-connection and a customer-service point of view. This ISP ended up in second place, with its sister company in first place. The best ISP I've had is middle of the pack. The second best ISP I've had is near the bottom.

Since I've previously paid Afrihost and not really gotten service in return, I know that they love to run internal competitions where they promise their customers free phones and other crap if they go and vote for Afrihost. I guess a lot of people jump at that opportunity; at least, that's the only explanation I have for this phenomenon.

In the end, if you take the MyBroadband "best ISP" list and read it upside down, it comes a lot closer to the truth than reading it normally. At best it is random noise; at worst it measures which ISP has the loudest and most obnoxious fanboys.
 

MickeyD

RIP
Joined
Oct 4, 2010
Messages
139,117
Here's my problem...

You could only rate one (1) ISP.

Let's say you selected Web Africa, as they are your current ISP. You give them decent scores because they have been good for you.

But during the period under review you moved away from another ISP, XYZ. Unfortunately, you cannot rate them as useless, because you can only select one ISP.

ISP XYZ gets away with providing a rubbish service... and ends up ahead of WA in the survey!
 