Outage Analyses

Using ThousandEyes to Analyze a DDoS Attack on GitHub

By Doug Royer | 10 min read

Summary


On August 15th, 2013, a DDoS attack targeted github.com. The attack was widely reported by news organizations, including the article “GitHub code repository rocked by 'very large DDoS' attack” by Jack Clark of The Register. The GitHub Status Page from that day also provides additional timing for the attack. At 15:47 UTC the Status page reports “a major service outage”. At 15:50 UTC they state: “We are working to mitigate a large DDoS. We will provide an update once we have more information.”

ThousandEyes was monitoring GitHub through public agents before and during the attack, and we captured the coordinated effort by GitHub, Rackspace and their ISPs to fend off the attack. While multiple ISPs were involved in the effort, for the sake of simplicity we will focus primarily on Level 3 Communications and AboveNet. They used different techniques to counter the attack, and ThousandEyes Deep Path Analysis helped us understand how the attack and defense evolved over time.

Figure 1: This shows end-to-end network metrics from ThousandEyes agents reaching GitHub during the DDoS attack. All agents are reporting 100% packet loss.

The view above shows the end-to-end loss from several of our public agents around the world when testing the GitHub site. At 15:40 UTC the loss was at 16.7% worldwide; five minutes later, at 15:45 UTC, the loss had increased to 100% from all locations. This is typically an indication that something is wrong at the network level. To better understand where the problem is occurring, we need to go to the Path Visualization view.
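To make the idea concrete, here is a rough sketch, in Python, of the kind of end-to-end loss measurement described above: send a burst of ICMP echo probes at a target and report the percentage that go unanswered. This is not how the ThousandEyes agents are implemented; the probe count and the use of the system ping utility are assumptions for illustration only.

```python
# Minimal sketch: estimate end-to-end packet loss toward a host by running the
# system "ping" utility and parsing its summary line. Values are illustrative.
import re
import subprocess

def packet_loss(target: str, probes: int = 20) -> float:
    """Return estimated packet loss toward `target` as a fraction (0.0-1.0)."""
    result = subprocess.run(
        ["ping", "-c", str(probes), target],
        capture_output=True, text=True,
    )
    # Typical summary line: "20 packets transmitted, 17 received, 15% packet loss"
    match = re.search(r"(\d+(?:\.\d+)?)% packet loss", result.stdout)
    return float(match.group(1)) / 100 if match else 1.0

if __name__ == "__main__":
    print(f"Estimated loss toward github.com: {packet_loss('github.com'):.0%}")
```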

In August, GitHub was hosted by Rackspace, and from our agents we were able to identify four different upstream ISPs connected to Rackspace carrying GitHub traffic. While we will focus on Level 3 Communications and AboveNet, we also monitored provider links from Qwest (CenturyLink). Their configuration has changed significantly since this event in August.

Case #1: Level 3 Communications

Let’s take a step back and look at the state of the network before the DDoS attack on GitHub, as shown in Figure 2. This is what the Path Visualization looked like earlier in the day on the 15th for traffic using the Level 3 Communications network.

Figure 2: In the period before the attack, the Dallas, Ashburn, Philadelphia, and Raleigh agents all used Level 3 as a provider.

As we move forward to 15:40 UTC, we see nodes with loss, circled in red, in the Path Visualization. Specifically, there is loss inside of Level 3 and Rackspace. Red nodes indicate that there is packet loss on the adjacent links facing the destination. Figure 3 below shows, though, that there are still locations without loss (the green nodes on the left).

Figure 3: During the attack the Path Visualization shows loss on some nodes, but the destination is still being reached.

Fifteen minutes later, at 15:55 UTC, none of the traffic is reaching its destination, as Figure 4 below shows. All traffic from these agents is terminating inside of Level 3 Communications.

Figure 4: All locations now experiencing 100% packet loss in the Path Visualization.
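A bare-bones way to reproduce the kind of observation above from the outside, namely finding the last hop that still answers when traffic no longer reaches the destination, is to send TTL-limited probes hop by hop, as in the Scapy sketch below. It needs raw-socket privileges, and the target and hop limit are illustrative assumptions rather than how ThousandEyes gathers its path data.

```python
# Rough sketch: locate the last responding hop toward a destination with
# TTL-limited ICMP probes (a bare-bones traceroute). Requires root privileges.
from scapy.all import IP, ICMP, sr1  # type: ignore

def last_responding_hop(target: str, max_hops: int = 30):
    last_hop = None
    for ttl in range(1, max_hops + 1):
        reply = sr1(IP(dst=target, ttl=ttl) / ICMP(), timeout=2, verbose=0)
        if reply is None:
            continue  # no answer at this TTL; keep probing deeper
        last_hop = (ttl, reply.src)
        if ICMP in reply and reply[ICMP].type == 0:  # echo reply: destination reached
            return last_hop, True
    return last_hop, False  # probes died somewhere before the destination

if __name__ == "__main__":
    hop, reached = last_responding_hop("github.com")
    print(f"Last responding hop: {hop}, destination reached: {reached}")
```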

Case #2: AboveNet

The data from another provider, AboveNet, tells a similar story. At 15:45 UTC we detect five agents in the Path Visualization routing through AboveNet to reach GitHub. There is a link inside of AboveNet with an average delay of 124 ms, indicating some stress on their network, and the next three hops (one in AboveNet, two in Rackspace) are experiencing loss; we can see that the traffic is not making it to the destination.

Figure 5: Loss and latency in AboveNet and Rackspace during the DDoS attack in the Path Visualization view.
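Per-link delay figures like the 124 ms above can be roughly approximated from outside a network by comparing round-trip times to consecutive hops. The sketch below builds on the same TTL-limited probing idea; it is only an approximation (ICMP generation delay at routers and asymmetric return paths can skew it), and the target is an assumption.

```python
# Rough sketch: approximate per-link delay as the RTT difference between
# consecutive responding hops on the path. Requires root privileges.
import time
from scapy.all import IP, ICMP, sr1  # type: ignore

def hop_rtts(target: str, max_hops: int = 15):
    rtts = []
    for ttl in range(1, max_hops + 1):
        start = time.monotonic()
        reply = sr1(IP(dst=target, ttl=ttl) / ICMP(), timeout=2, verbose=0)
        if reply is not None:
            rtts.append((ttl, reply.src, (time.monotonic() - start) * 1000))
    return rtts

def link_delays(rtts):
    # Delay attributed to the link between consecutive responding hops, in ms.
    return [(a[1], b[1], max(b[2] - a[2], 0.0)) for a, b in zip(rtts, rtts[1:])]

if __name__ == "__main__":
    for src, dst, ms in link_delays(hop_rtts("github.com")):
        print(f"{src} -> {dst}: ~{ms:.1f} ms")
```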

Five minutes later, at 15:50 UTC, Figure 6 below shows that the loss en route to the destination persists, but the location of the loss is now completely different. The traffic is no longer terminating inside of AboveNet or Rackspace. In fact, it never makes it into either of their networks; it appears to be terminating on the ingress to AboveNet.

Figure 6: Red nodes on the far right represent traffic termination points, before AboveNet and Rackspace.

To investigate this further, we can select one or more of these nodes, move to a period before or after the attack, and see where that next hop is located. Figure 7 below shows us that two hours before the attack, all of the nodes now dropping packets had next hops into AboveNet. This helps show that there is some sort of destination-based filtering happening in AboveNet for GitHub traffic.

Figure 7: This image shows the Path Visualization before the DDoS attack on GitHub inside AboveNet, and the highlighted blue nodes are the AboveNet edge that during the attack is dropping traffic destined for GitHub.
Figure 8: All nodes inside of Level 3 now experiencing 100% packet loss during the attack.
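The before-versus-during comparison in Figure 7 boils down to simple set logic over two path snapshots: find where traffic terminates now, and look up which next hops that point had before the attack. The snippet below illustrates the idea with invented hop names, not ThousandEyes data.

```python
# Toy illustration: compare a path snapshot from before the attack with one
# taken during it to see where traffic now terminates and which next hop
# (here, the provider's ingress) stopped forwarding. Hop names are invented.
path_before = ["agent", "isp-edge", "abovenet-ingress", "abovenet-core",
               "rackspace-edge", "github.com"]
path_during = ["agent", "isp-edge"]  # traffic now dies before the provider

termination_point = path_during[-1]
idx = path_before.index(termination_point)
filtered_next_hops = path_before[idx + 1:idx + 2]

print(f"Traffic now terminates at: {termination_point}")
print(f"Next hop(s) seen before the attack but unreachable now: {filtered_next_hops}")
# Output points at the ingress to the provider, consistent with
# destination-based filtering applied at the provider edge.
```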

Conclusion

It is difficult to say exactly what techniques were used to mitigate the attack without access to internal access control lists or routing tables (or even iBGP feeds), or input from the teams involved in stopping this attack on GitHub. However, from the public statements on the GitHub Status Page and from our own data, it is clear there was a coordinated and cooperative effort to stop the DDoS attack and place destination-based filters in routers inside GitHub's providers. These filters are often distributed through iBGP inside the ISP using a mechanism such as Remotely Triggered Black Hole (RTBH) filtering. For additional information about this technique, see "Remotely Triggered Black Hole Filtering - Destination Based and Source Based," Cisco Systems, Inc., 2005 (PDF).
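As a purely conceptual illustration of destination-based RTBH (not actual router configuration, and not necessarily what GitHub's providers did), the sketch below shows how an iBGP-announced route for the victim prefix, with its next hop set to a pre-provisioned blackhole address, causes victim-bound packets to be discarded at every edge router. All prefixes and addresses are made up.

```python
# Conceptual sketch of destination-based RTBH. Prefixes and addresses below
# are documentation examples, not GitHub's actual addresses.
import ipaddress

BLACKHOLE_NEXT_HOP = "192.0.2.1"  # address every router statically routes to a discard interface

class EdgeRouter:
    def __init__(self):
        # Forwarding table: prefix -> next hop ("discard" means drop the packet).
        self.fib = {ipaddress.ip_network("0.0.0.0/0"): "upstream"}
        # Pre-provisioned static route for the blackhole address.
        self.fib[ipaddress.ip_network(BLACKHOLE_NEXT_HOP + "/32")] = "discard"

    def receive_ibgp_route(self, prefix: str, next_hop: str):
        """A trigger router injects the victim prefix with the blackhole next hop via iBGP."""
        self.fib[ipaddress.ip_network(prefix)] = next_hop

    def forward(self, dst: str) -> str:
        addr = ipaddress.ip_address(dst)
        # Longest-prefix match over the forwarding table.
        best = max((p for p in self.fib if addr in p), key=lambda p: p.prefixlen)
        next_hop = self.fib[best]
        if next_hop == BLACKHOLE_NEXT_HOP:
            # A real router would recursively resolve the blackhole next hop to
            # the static discard route; we shortcut that lookup here.
            return "discard"
        return next_hop

edge = EdgeRouter()
print(edge.forward("198.51.100.10"))                             # "upstream" before the trigger
edge.receive_ibgp_route("198.51.100.0/24", BLACKHOLE_NEXT_HOP)   # victim prefix blackholed
print(edge.forward("198.51.100.10"))                             # "discard" after the trigger
```

The appeal of this approach during an attack is that a single trigger announcement can make every edge of the provider drop the traffic at ingress, which is consistent with the termination points we observed at the AboveNet edge in Figure 6.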

Regardless of the specific techniques used by site owners, their hosting companies or their ISPs to combat DDoS attacks, these efforts require some level of coordination and cooperation. ThousandEyes can help in understanding how effective the filters are in mitigating these attacks, how DDoS attacks evolve over time and across the network, and, in some instances, even help identify the source of the attacks.
