

The Factors Determining LEO Internet Performance

By Mike Hicks & Kemal Sanjta
| April 10, 2025 | 18 min read

Summary

Dive into how LEO Internet through Starlink works, which factors determine the download speed and latency of an individual connection, and the difference that various congestion avoidance algorithms can have on the service’s performance.


Low Earth Orbit (LEO) Internet is a transformative technology that offers a cost-effective method for providing widespread coverage without requiring extensive ground infrastructure. This is particularly beneficial for sparsely populated areas where fixed-line broadband is often impractical or prohibitively expensive.

LEO satellite technology has the potential for low latency and high throughput, making it a viable option for various applications, including Earth observation and research. Consequently, customer interest has surged, leading to a competitive market with multiple companies providing similar services.

In this research, we use Starlink as a case study to examine factors influencing performance, such as throughput, latency, and how different congestion avoidance algorithms affect service quality. Our findings will demonstrate that not all Starlink connections perform uniformly.

How Starlink Works

Starlink is a massive and growing fleet of satellites traveling in low Earth orbit, operated by SpaceX. At the time of writing, well over 6,000 Starlink satellites have been deployed, providing a mesh of coverage that spans more than 100 countries and several continents.

The satellites are deployed at altitudes ranging from 310-745 miles (500-1,200 km). This is significantly lower than the geostationary satellites that preceded LEO constellations, which orbit at approximately 21,750 miles (35,000 km) above the Earth. This closer proximity to Earth means LEO technology can offer lower latency and faster speeds than geostationary satellite Internet.
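To put the altitude difference in perspective, here is a rough, back-of-the-envelope sketch of the minimum round-trip propagation delay for a simple bent-pipe path (four space legs: dish to satellite to ground station and back), using only the speed of light and the altitudes cited above; it deliberately ignores processing, queuing, and the terrestrial path.

# Rough minimum round-trip propagation delay for a bent-pipe satellite path.
# Ignores ground-segment distance, processing, and queuing delays.
C_KM_PER_MS = 299_792.458 / 1_000  # speed of light, in km per millisecond

def min_rtt_ms(altitude_km: float) -> float:
    # Four space legs: dish -> satellite -> ground station, then back again.
    return 4 * altitude_km / C_KM_PER_MS

for label, altitude_km in [("LEO at 500 km", 500), ("LEO at 1,200 km", 1_200), ("GEO at ~35,000 km", 35_000)]:
    print(f"{label}: at least {min_rtt_ms(altitude_km):.0f} ms")

Even before any real-world overhead, geostationary paths start hundreds of milliseconds behind LEO, which is why the 25-60 ms latencies discussed below are achievable at all.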

Starlink customers connect to a network of satellites using their Starlink-supplied dish. Starlink offers Internet service for both residential and business customers, available as fixed or mobile options. 

The customer’s dish both sends and receives data from the satellites flying overhead within various frequency bands. Satellites connect with the rest of the Internet using Starlink’s network of ground stations.

Starlink has around 150 active ground stations, but these aren’t uniformly distributed across the planet. In some countries, such as the United Kingdom, there are several ground stations. In others, such as parts of Scandinavia, there are currently none. The significance of this will be discussed shortly.

The ground stations connect the satellite data via fiber to the company’s Points of Presence (POPs)—of which Starlink has many across the globe—and from there to the rest of the Internet.
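One simple way to observe this ground segment from a customer connection is to trace the route from a device behind the dish to any Internet destination; the hops beyond the customer gateway traverse Starlink's ground network and POP before reaching the wider Internet. A minimal sketch using the standard traceroute utility (Linux/macOS; the destination is a placeholder):

import subprocess

# Trace the path from a device behind the Starlink dish to a placeholder host.
# On a Starlink connection, the hops after the customer gateway typically belong
# to Starlink's ground network and the POP the dish is assigned to.
result = subprocess.run(
    ["traceroute", "example.com"],
    capture_output=True,
    text=True,
    check=False,
)
print(result.stdout)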

The Ground Station and POP Impact

To understand the impact of ground stations and POPs on performance, we conducted thousands of throughput tests in locations worldwide, aiming to identify patterns in the performance of LEO Internet as provided by Starlink.

The first thing to note is that our speed tests revealed that Starlink consistently delivers on—or outperforms—its stated speeds in all of the locations that we tested. We tested on the residential fixed plan, with estimated download speeds of 25-100 Mbps, uploads of 5-10 Mbps, and latency of 25-60 ms. The average download speeds were in triple digits in almost all of the locations we tested, with some regions comfortably exceeding 250 Mbps. 

However, we did notice significant variations in speeds and latency, and some of that can probably be attributed to the proximity of ground stations and POPs. As we noted earlier, some countries have multiple ground stations, while others have none. In the latter case, the wireless signal between satellite and ground station has to travel farther, which increases latency. We noted earlier that Scandinavia has no ground stations, so it's no great shock to see Stockholm as the test destination with the highest latency in Europe, albeit still within Starlink's estimated bounds.

It's also worth noting that the proximity of ground stations and POPs could become less relevant as time goes on. Why? Because the newer Starlink satellites are fitted with laser links called Inter-Satellite Links (ISL) that allow Starlink’s satellites to communicate directly with one another, rather than having to send data back and forth to the ground. This means that data can be relayed across the satellite network before reaching a ground station, allowing the service to operate in areas where ground stations aren’t available, such as in the polar regions.

There are also other potential reasons for the large discrepancies between regions that we saw in our tests. Obstructions in the satellite's path (such as tree branches swinging in the wind) can cause lower-than-expected performance; this was the case at our test location in Germany, for example. The Starlink app, though, highlights such obstructions, as shown in Figure 1.

Figure 1. Starlink application indicating the location of obstruction

Suboptimal peering strategies could also explain some of the variation, as could performance throttling when a particular satellite link or ground station is under heavy load. Satellite connectivity is also inherently a lossy technology; in other words, it typically suffers from much higher packet loss than fiber connections. This lossy characteristic leads us to the next part of our research.

Switching Congestion Algorithms

To minimize the impact of packet loss on performance, congestion control algorithms such as CUBIC and BBR play a critical role. CUBIC was designed to manage the effects of packet loss in high-speed, long-distance networks. BBR (Bottleneck Bandwidth and Round-trip propagation time), developed by Google, is designed to further optimize network utilization and throughput by continuously probing for available bandwidth. BBR adapts to increases in latency by gradually lowering the sending rate, in contrast to CUBIC, which reduces the delivery rate when it detects packet loss.

In our study on performance, we therefore conducted initial tests using the default congestion algorithm CUBIC, and then switched to BBR to compare results. Given that we controlled the environment end to end, we were able to enable BBR both on the client side (controlling egress traffic) and on the server side (controlling the client’s ingress traffic) to understand the benefits of using BBR in both directions.
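For reference, on a Linux host the congestion control algorithm can be switched system-wide (via the net.ipv4.tcp_congestion_control sysctl) or per socket. A minimal Python sketch of the per-socket approach, assuming the tcp_bbr kernel module is available and using a placeholder server and port:

import socket

# Opt a single TCP connection into BBR on Linux.
# System-wide alternative: sysctl -w net.ipv4.tcp_congestion_control=bbr
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, b"bbr")
sock.connect(("throughput-server.example.com", 5201))  # placeholder server and port

# Confirm which algorithm the kernel actually applied to this socket.
in_use = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, 16)
print(in_use.rstrip(b"\x00").decode())  # expected: bbr

Because the congestion control algorithm only governs the sending side, enabling BBR on the client affects egress traffic while enabling it on the server affects the client's ingress traffic, which is why we enabled it at both ends.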

Our tests spanned multiple locations globally, targeting dedicated servers at major points where we had Starlink dishes deployed. In the United States, we deployed dedicated, non-throttled servers in US East (Virginia), US Central (Iowa), and US West (Oregon). In Europe, we had dedicated servers in EU West (London, U.K.) and EU Central (Frankfurt, Germany). Lastly, in Australia, we deployed our testing server in AU East (Sydney). 

The results when we switched to BBR were startling. The download throughput between our Starlink dish in Georgetown, Texas, and our US West data center, for example, improved almost ten-fold. Between Weinstadt, Germany, with its partially obstructed link to the satellite, and the EU Central data center, the download throughput increased by a staggering 18.4 times after switching to BBR.

We saw improved performance on the uplink too, with anywhere between a 1.2-fold and 3.4-fold improvement in upload speeds when BBR was activated.

CUBIC and BBR Throughput Differences

The results listed below are based on sustained throughput measurements, with ingress and egress traffic tested separately. The results shown were obtained from over 7,200 data points and thus give a good indication of what to expect, throughput-wise, over longer time periods and for larger data transfers.
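As a simple illustration of how the fold improvements quoted below can be derived from the raw samples, here is a minimal sketch; the sample values are hypothetical stand-ins for the measured per-interval throughput of one agent/server pair:

from statistics import mean

def fold_improvement(cubic_mbps: list[float], bbr_mbps: list[float]) -> float:
    """Ratio of mean sustained throughput with BBR to mean sustained throughput with CUBIC."""
    return mean(bbr_mbps) / mean(cubic_mbps)

# Hypothetical per-interval samples (Mbps) for one agent/server pair.
cubic_samples = [10.2, 11.1, 10.9, 11.3]
bbr_samples = [104.5, 108.9, 106.1, 107.2]
print(f"{fold_improvement(cubic_samples, bbr_samples):.1f}x improvement")  # ~9.8x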

Results for the United States

As shown in Table 1, Selkirk, NY achieved the highest download speed of 40.102 Mbps, despite having the highest latency of 82.662 ms while using the default congestion algorithm, CUBIC. North Bend, WA recorded the highest upload speed at 6.773 Mbps with the lowest latency of 56.772 ms. In contrast, Georgetown, TX had the poorest performance, with download speeds of 10.860 Mbps and upload speeds of 4.902 Mbps.

After switching to the BBR congestion algorithm, all locations demonstrated significant improvements. Notably, Georgetown's download speed increased dramatically from 10.860 Mbps to 106.668 Mbps, representing a remarkable 9.8-fold improvement. Additionally, Selkirk experienced the most substantial increase in upload speed, rising from 5.631 Mbps to 19.404 Mbps, which reflects a 3.4-fold increase.

Table 1. Throughput differences between CUBIC and BBR when testing with a server hosted in US West

As shown in Table 2, our testing against the dedicated, non-throttled server in US Central showed that Selkirk, NY, demonstrated the highest download speed at 36.177 Mbps and an upload speed of 6.801 Mbps, with the lowest latency recorded at 50.664 ms. In contrast, Georgetown, TX, had one of the poorest performances, delivering the lowest download speed at 17.049 Mbps. Additionally, San Francisco, CA, registered the lowest upload speed of 4.509 Mbps.

Switching from the CUBIC to the BBR congestion control algorithm resulted in significant improvements. The agent in North Bend, WA, experienced a remarkable 7.7-fold increase in download speeds, rising from 17.458 Mbps to 133.741 Mbps. Furthermore, North Bend, WA, also witnessed the largest enhancement in upload speeds, improving 3.3-fold from 4.651 Mbps to 15.736 Mbps.

Table 2. Throughput differences when testing to US Central

As shown in Table 3, testing with a server located in US East showed that Selkirk had the highest download speed at 74.247 Mbps and the highest upload speed at 11.449 Mbps, along with the lowest latency of 32.210 ms. This emphasizes the importance of being close to the POP to which the dish is assigned. In contrast, North Bend, WA performed the worst, recording the lowest download speed at 12.436 Mbps and the lowest upload speed at 3.983 Mbps, along with the highest latency of 115.788 ms. The results for North Bend are to be expected, given the geographical characteristics of the dish's deployment and the testing server's location.

Table 3. Throughput differences when testing to US East

Results for Europe

Testing against the EU West server while using CUBIC as the congestion avoidance algorithm revealed that Weinstadt, DE achieved the highest download speed at 39.434 Mbps, while Jaen, ES recorded the highest upload speed at 8.840 Mbps. Epe, NL had the lowest download speed at 16.454 Mbps, and Weinstadt recorded the lowest upload speed at 6.353 Mbps. Interestingly, Weinstadt exhibited both the highest download and the lowest upload speeds. We attribute this discrepancy to the fact that the testing agent faced physical obstructions to the clear sky during the tests.

Switching to the BBR algorithm resulted in improved speed values across all locations, with the most significant improvement observed in Epe, NL, which experienced a 17.2-fold increase in download speeds—from 16.454 Mbps to 283.013 Mbps. Despite the obstructions, Weinstadt, DE saw a 2.5-fold increase in upload speeds, rising from 6.353 Mbps to 16.369 Mbps.

Table 4. Throughput results when testing to EU West

As shown in Table 5, the testing conducted against the EU Central server revealed that Epe, NL achieved the best results for both download (76.010 Mbps) and upload (10.975 Mbps) speeds. In contrast, Weinstadt, DE, despite having the lowest latency (27.251 ms) to the testing server, performed the worst, with a download speed of only 6.336 Mbps and an upload speed of 4.820 Mbps. This poor performance can be attributed to the physical obstruction that hindered its view of the sky.

After switching to BBR, Weinstadt, DE saw a significant improvement in its performance. Download speeds increased dramatically from 6.336 Mbps to 117.049 Mbps, marking an impressive 18.4-fold increase. Upload speeds also improved substantially, rising from 4.820 Mbps to 14.123 Mbps, a 2.9-fold increase. What makes these results even more remarkable is that the agent was still physically obstructed during this assessment, further underscoring the advantages of BBR over CUBIC.

Table 5. Throughput results when testing to EU Central

Results for Australia

Brookvale recorded the highest download speed at 61.367 Mbps and the highest upload speed at 9.862 Mbps, along with the lowest latency of 27.642 ms. In contrast, Perth experienced the highest latency at 88.038 ms. Erskineville had the lowest download speed at 33.199 Mbps, while Perth also had the lowest upload speed at 5.972 Mbps. This data further illustrates that physical proximity to the assigned POP significantly impacts performance.

Switching to BBR resulted in substantial improvements across all locations, with a notable highlight being Erskineville's download speed increase of 7.9-fold, improving from 33.199 Mbps to 264.460 Mbps. For uploads, Perth experienced the largest increase of 2.1-fold, rising from 5.972 Mbps using CUBIC to 12.988 Mbps with BBR.

Table 6. Throughput results when testing to AU East

While the results after switching to BBR are significant, there are a couple of important points to consider before we all rush to switch our LEO satellite connections to BBR. The speed tests we conducted were based on raw throughput, not application data. While BBR can provide higher throughput, it can also create issues such as bufferbloat and higher retransmission rates, especially in lossy network environments such as satellite connections.

By switching to BBR, you might actually be pushing the problem of retransmissions back to the application server, because the receiver is effectively saying: "I have a gap in my data, so you need to send that through again," whereas CUBIC would likely slow the rate of transmission to maximize the chances of getting all the data through the first time.
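One way to gauge that retransmission cost on a Linux sender is to compare the kernel's cumulative TCP retransmission counter before and after a transfer. A minimal sketch reading the RetransSegs field from /proc/net/snmp (Linux only; the transfer itself is left as a placeholder):

def tcp_retrans_segs() -> int:
    """Read the cumulative TCP RetransSegs counter from /proc/net/snmp."""
    with open("/proc/net/snmp") as f:
        rows = [line.split() for line in f if line.startswith("Tcp:")]
    header, values = rows[0], rows[1]
    return int(values[header.index("RetransSegs")])

before = tcp_retrans_segs()
# ... run the throughput test here (placeholder) ...
after = tcp_retrans_segs()
print(f"TCP segments retransmitted during the test: {after - before}")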

Therefore, until we can leverage real application data to perform tests on LEO connectivity over Starlink, it’s a little premature to suggest that switching to BBR is the performance panacea that it may first appear to be.

The Next Step

The ability to demonstrate increased throughput with BBR indicates that satellite links possess characteristics well-suited for BBR's hybrid approach, which combines bandwidth efficiency with control over latency caused by buffering. This underscores BBR’s potential to optimize LEO satellite communications and highlights its adaptability to distinct network conditions while effectively managing latency.

The next step for our research is to answer questions that revolve around how different applications react to varying amounts and spikes of packet loss. What would the impact be of switching to BBR when using LEO Internet? How would it affect application performance? And even if it did offer improved performance, would the associated costs of retransmission make it prohibitive to implement?

LEO Internet is a fascinating technology with its own unique characteristics. As with everything we test, you have to consider the full service delivery chain to truly understand its implications.


