Understanding latency a little better: Latency is the round trip time between your PC and the server you are connecting to. It's that plain and simple. So it varies (by at least microseconds) for any connection between any two locations A and B. There is no such thing as a fixed latency based on link type; it depends on where you are and where you are connecting to.
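If you want to see this for yourself, here's a minimal sketch in Python that times a TCP handshake to a host, which takes exactly one round trip. The hostnames are arbitrary examples, not anything specific from this post; try one server near you and one far away.

```python
# Rough RTT check: time a TCP handshake (one round trip) to a host.
# Hostnames are arbitrary examples; try servers near and far from you.
import socket
import time

def tcp_rtt_ms(host, port=443, timeout=3.0):
    ip = socket.getaddrinfo(host, port)[0][4][0]  # resolve first so DNS isn't timed
    start = time.perf_counter()
    with socket.create_connection((ip, port), timeout=timeout):
        pass  # connect() returns after SYN/SYN-ACK, i.e. one round trip
    return (time.perf_counter() - start) * 1000

for host in ("www.google.com", "example.com"):
    print(f"{host}: ~{tcp_rtt_ms(host):.1f} ms")
```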
So first things first, if you're wondering about latency, please be specific about where you're measuring from and to. There are multiple factors behind your perceived ("eyeball") latency:
Last-mile latency: This is from the nearest PoP and/or datacentre to your home. Typically 1-3ms in major metros and anything up to 15-20ms for those in secondary metros. This is the latency you see when you run a speedtest to a local server (usually hosted in your "home" datacentre). This is also the latency the destination server can experience back to its main DC, e.g. a server hosted in Manchester connecting to London Telehouse.
Terrestrial backhaul latency: If you're in Cape Town and you're connecting to content in JHB, your traffic first leaves your home, goes to a DC, then on to JHB. The JHB<>CPT leg is anywhere between 16-21ms depending on the route your traffic takes.
Transit latency: If you're connecting to a server in the EU, your traffic goes from your home, to a DC, to a transit provider, to a cable system's landing station (WACS, SAT3, EASSy etc.), over the cable to an international landing station (e.g. London, Marseilles etc.), then to a datacentre in the EU and on to those networks. This adds at least 140-150ms for London Telehouse on the shortest paths.
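Put together, your end-to-end latency is roughly the sum of these legs. A quick back-of-the-envelope sketch using the ranges above (the route split is an assumption; some Cape Town routes reach the WACS landing station directly without the JHB backhaul leg):

```python
# Back-of-the-envelope: "eyeball" latency is roughly the sum of the legs
# described above. Ranges are the figures quoted in this post.
legs_ms = {
    "last mile (major metro)": (1, 3),
    "terrestrial backhaul (CPT<>JHB, only if routed via JHB)": (16, 21),
    "transit to London Telehouse": (140, 150),
}
low = sum(lo for lo, _ in legs_ms.values())
high = sum(hi for _, hi in legs_ms.values())
print(f"Expected RTT to London: ~{low}-{high} ms")  # ~157-174 ms
```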
All of the above is based on physics. Light travels at a fixed speed through lengths of glass, roughly 200,000km/s in fibre, about two-thirds of its speed in a vacuum. The longer the path, the higher the latency. And then there's actual processing that happens at dozens of locations along the path, each of which adds a small amount of latency to the travel of a packet.
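You can sanity-check the figures above from that alone. A minimal sketch, with path lengths as rough assumptions rather than exact route figures:

```python
# Propagation delay in fibre: light in glass travels at ~c/1.47,
# roughly 200,000 km/s. RTT is out-and-back, so double the one-way time.
C_VACUUM_KM_S = 299_792
FIBRE_INDEX = 1.47  # typical refractive index of single-mode fibre

def fibre_rtt_ms(path_km):
    return 2 * path_km / (C_VACUUM_KM_S / FIBRE_INDEX) * 1000

# Path lengths below are rough assumptions, not exact route figures.
print(f"JHB<>CPT (~1,400 km of fibre):  ~{fibre_rtt_ms(1_400):.1f} ms")   # ~13.7 ms
print(f"CPT<>London (~14,000 km cable): ~{fibre_rtt_ms(14_000):.1f} ms")  # ~137 ms
# Routing and per-hop processing push these towards the 16-21ms and
# 140-150ms figures quoted above.
```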
Now, looking at initial Starlink results, we've seen latencies of 23-35ms in US tests (client to nearest speedtest server). For now, there is very little satellite-to-satellite communication, so looking at the lowest results, we can assume that the last-mile latency will always be a minimum of 25-30ms (Client <> Satellite <> Ground station). This would be how long it takes for a signal to travel from a Starlink client dish to a satellite and (presumably) straight back down to a ground station which is linked to a datacentre. So from this, we can infer that the round-trip latency from the ground (client or ground station) to a Starlink satellite is 12-15ms, and satellite-to-satellite comms will add to this, the same way terrestrial backhaul adds to this.
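The geometry roughly supports this. Starlink satellites orbit at about 550km, and radio travels at the speed of light in a vacuum, so a bent-pipe round trip covers four slant legs. A sketch, with the slant distances as assumptions:

```python
# Bent-pipe geometry: Starlink orbits at ~550 km, and radio travels at c
# in a vacuum. A round trip covers four slant legs: client -> satellite,
# satellite -> ground station, and the same two on the way back.
C_KM_S = 299_792
ALT_KM = 550

def bent_pipe_rtt_ms(slant_km):
    return 4 * slant_km / C_KM_S * 1000

print(f"Satellite overhead (~{ALT_KM} km slants): ~{bent_pipe_rtt_ms(ALT_KM):.1f} ms")  # ~7.3 ms
print(f"Low elevation (~1,000 km slants):       ~{bent_pipe_rtt_ms(1_000):.1f} ms")    # ~13.3 ms
# That's propagation only; the measured 25-30ms includes scheduling and
# processing on the dish, the satellite and the ground station.
```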
Starlink satellite-to-satellite communication is currently radio and will move to laser soon. For both of these mediums the latency is identical (both travel at the speed of light in a vacuum, that simple); the benefit of laser is simply higher capacity. The low altitude of the satellites means that should a long-haul transit route be established satellite to satellite (i.e. SA to London), it would need multiple satellite hops and follow a larger-radius route than the ground path. The fact that it is (hopefully) more direct means it should be equal to or within a single-digit percentage difference of submarine systems.
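To put the "larger radius" point in numbers, here's a rough distance comparison on the Cape Town to London corridor. The great-circle and cable figures are illustrative assumptions, not measured route lengths:

```python
# The "larger radius" point in numbers: a satellite relay follows an arc
# at orbital altitude, covering more distance than the same great-circle
# angle along the ground. All distances here are illustrative assumptions.
EARTH_R_KM = 6_371
ALT_KM = 550

ground_arc_km = 9_700  # rough great-circle, Cape Town to London
orbit_arc_km = ground_arc_km * (EARTH_R_KM + ALT_KM) / EARTH_R_KM
min_sat_path_km = orbit_arc_km + 2 * ALT_KM  # plus up at one end, down at the other

print(f"Arc at {ALT_KM} km altitude: ~{orbit_arc_km:,.0f} km (vs ~{ground_arc_km:,} km on the ground)")
print(f"Minimum satellite path:  ~{min_sat_path_km:,.0f} km")
# Real multi-hop routes zigzag between satellites and run longer, which is
# what puts them in the same ballpark as a ~14,000 km submarine cable route.
```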
To understand this better: if Starlink wants to deliver great latency in SA, it would need at least 3 ground stations (JHB, DBN, CPT) and the ability to determine which ground station is optimal for your location, which is not easy when your satellites move over the entire landmass of SA in a matter of minutes. I.e. if you're in Cape Town, you would need to be routed to the Cape Town ground station, and in the north you're routed to a JHB ground station. And then you would probably pick up traffic via terrestrial backhaul, adding 16-21ms in addition to the 25-35ms. Remember, data and content still sit on earth and you need to pick them up from the nearest possible location.
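In its simplest form that selection is just a nearest-point problem. A toy sketch follows; the station sites and coordinates are hypothetical, and the real decision is harder because it's the constantly moving satellite, not the client, that has to pick the downlink:

```python
# Toy version of the "which ground station?" decision: pick the closest
# of three hypothetical SA ground stations by great-circle distance.
# Coordinates are approximate city centres; actual sites are assumptions.
import math

EARTH_R_KM = 6_371

def haversine_km(lat1, lon1, lat2, lon2):
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * EARTH_R_KM * math.asin(math.sqrt(a))

GROUND_STATIONS = {  # hypothetical sites
    "JHB": (-26.20, 28.05),
    "DBN": (-29.86, 31.03),
    "CPT": (-33.92, 18.42),
}

def nearest_station(lat, lon):
    return min(GROUND_STATIONS, key=lambda s: haversine_km(lat, lon, *GROUND_STATIONS[s]))

print(nearest_station(-33.9, 18.4))  # a Cape Town client -> "CPT"
print(nearest_station(-25.7, 28.2))  # a Pretoria client  -> "JHB"
```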
Chances are, with only limited total capacity on each satellite, satellite-to-satellite communications for global traffic would be reserved for special uses, e.g. financial markets and real-time communications, and at a significant cost (supply/demand). Most eyeball traffic will probably be routed to the closest regional ground station per country. Building multiple ground stations in each country, with power, backhaul, transit, routers and more, will be extremely expensive and frankly unnecessary for a market as small (geographically and in customer numbers) as South Africa, so we'll likely see only one or two built.