Defeating Distributed Denial of Service Attacks

Introduction

The distributed denial of service attacks during the week of February 7 highlighted security weaknesses in hosts and software used in the Internet that put electronic commerce at risk. These attacks also illuminated several recent trends and served as a warning for the kinds of high-impact attacks that we may see in the near future. This document outlines key trends and other factors that have exacerbated these Internet security problems, summarizes near-term activities that can be taken to help reduce the threat, and suggests research and development directions that will be required to manage the emerging risks and keep them within more tolerable bounds. For the problems described, activities are listed for user organizations, Internet service providers, network manufacturers, and system software providers.

Key Trends and Factors

The recent attacks against e-commerce sites demonstrate the opportunities that attackers now have because of several Internet trends and related factors:

  • Attack technology is developing in an open-source environment and is evolving rapidly. Technology producers, system administrators, and users are improving their ability to react to emerging problems, but they are behind, and significant damage to systems and infrastructure can occur before effective defenses can be implemented. As long as defensive strategies are reactionary, this situation will worsen.
  • There are currently tens of thousands – perhaps even millions – of systems with weak security connected to the Internet. Attackers are compromising these machines and building attack networks, and will continue to do so. Attack technology takes advantage of the power of the Internet to exploit its own weaknesses and overcome defenses.
  • Increasingly complex software is being written by programmers who have no training in writing secure code and are working in organizations that sacrifice the safety of their clients for speed to market. This complex software is then deployed in security-critical environments and applications, to the detriment of all users.
  • User demand for new software features instead of safety, coupled with industry response to that demand, has resulted in software that is increasingly supportive of subversion, computer viruses, data theft, and other malicious acts.
  • Because of the scope and variety of the Internet, changing any particular piece of technology usually cannot eliminate newly emerging problems; broad community action is required. While point solutions can help dampen the effects of attacks, robust solutions will come only with concentrated effort over several years.
  • The explosion in use of the Internet is straining our scarce technical talent. The average level of system administrator technical competence has decreased dramatically in the last five years as non-technical people are pressed into service as system administrators. Additionally, there has been little organized support of higher-education programs that can train and produce new scientists and educators with meaningful experience and expertise in this emerging discipline.
  • The evolution of attack technology and the deployment of attack tools transcend geography and national boundaries, so solutions must be international in scope.
  • The difficulty of criminal investigation of cybercrime, coupled with the complexity of international law, means that successful apprehension and prosecution of computer criminals is unlikely, and thus little deterrent value is realized.
  • The number of directly connected homes, schools, libraries and other venues without trained system administration and security staff is rapidly increasing. These “always-on, rarely-protected” systems allow attackers to continue to add new systems to their arsenal of captured weapons.

Immediate Steps to Reduce Risk and Dampen the Effects of Attacks

There are several steps that can be taken immediately by user organizations, Internet service providers, network manufacturers, and system software providers to reduce risk and decrease the impact of attacks. We hope that major users, including governments around the world, will lead the user community by setting examples – taking the necessary steps to protect their computers. And we hope that industry and government will cooperate to educate the community of users – about threats and potential courses of action – through public information campaigns and technical education programs. In all of these recommendations, there may be instances where some steps are not feasible, but these will be rare, and requests for waivers within organizations should be granted only on the basis of substantive proof validated by independent security experts.

Problem 1: Spoofing

Attackers often hide the identity of machines used to carry out an attack by falsifying the source address of the network communication. This makes it more difficult to identify the sources of attack traffic and sometimes shifts attention onto innocent third parties. Limiting the ability of an attacker to spoof IP source addresses will not stop attacks, but it will dramatically shorten the time needed to trace an attack back to its origins.

Solutions:

User organizations and Internet service providers can ensure that traffic exiting an organization’s site, or entering an ISP’s network from a site, carries a source address consistent with the set of addresses for that site. Although this would still allow addresses to be spoofed within a site, it would allow attack traffic to be traced to the site from which it emanated, substantially assisting in the process of locating and isolating the sources of attack traffic.

Specifically, user organizations should ensure that all packets leaving their sites carry source addresses within the address range of those sites, and that no traffic from the “unroutable addresses” listed in RFC 1918 is sent from their sites. This activity is often called egress filtering. User organizations should take the lead in stopping this traffic because they have the capacity on their routers to handle the load. ISPs can provide backup to pick up spoofed traffic that is not caught by user filters. ISPs may also be able to stop spoofing by accepting traffic (and passing it along) only if it comes from authorized sources. This activity is often called ingress filtering.

Dial-up users are the source of some attacks, and stopping spoofing by these users is also an important step. ISPs, universities, libraries and others that serve dial-up users should ensure that proper filters are in place to prevent dial-up connections from using spoofed addresses. Network equipment vendors should ensure that no-IP-spoofing is a user setting, and the default setting, on their dial-up equipment.
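To make the egress-filtering rule concrete, the following is a minimal sketch of the decision logic, assuming a hypothetical site that owns 192.0.2.0/24. Real filtering is done on the border router, not in application code; this only illustrates the check the router performs on each outbound packet.

    # Egress-filter logic: allow a packet out only if its source address
    # belongs to the site and is not an RFC 1918 "unroutable" address.
    import ipaddress

    SITE_PREFIX = ipaddress.ip_network("192.0.2.0/24")  # hypothetical site range
    RFC1918 = [ipaddress.ip_network(n) for n in
               ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

    def allow_outbound(src: str) -> bool:
        addr = ipaddress.ip_address(src)
        if any(addr in net for net in RFC1918):
            return False              # unroutable source, drop
        return addr in SITE_PREFIX    # drop if spoofed (outside the site)

    print(allow_outbound("192.0.2.7"))    # True: legitimate site address
    print(allow_outbound("203.0.113.9"))  # False: spoofed source
    print(allow_outbound("10.1.2.3"))     # False: RFC 1918 source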
Problem 2: Broadcast Amplification

In a common attack, the malicious user generates packets with a forged source address of the site he wishes to attack (Site A), using spoofing as described in Problem 1, and then sends a series of network packets to an organization with many computers (Site B), using an address that broadcasts the packets to every machine at Site B. Unless precautions have been taken, every machine at Site B will respond to the packets and send data to the organization (Site A) that was the target of the attack. The target will be flooded, and people at Site A may blame the people at Site B. Attacks of this type are often referred to as Smurf attacks. In addition, the echo and chargen services can be used to create oscillation attacks similar in effect to Smurf attacks.

Solutions:

Unless an organization is aware of a legitimate need to support broadcast or multicast traffic within its environment, it should turn off the forwarding of directed broadcasts. Even when broadcast applications are legitimate, an organization should block certain types of traffic sent to broadcast addresses (e.g., ICMP Echo Reply messages) so that its systems cannot be used to effect these Smurf attacks. Network hardware vendors should ensure that routers can turn off the forwarding of IP directed broadcast packets as described in RFC 2644, and that this is the default configuration of every router. Users should turn off the echo and chargen services unless they have a specific need for them. (This is good advice in general for all network services: they should be disabled unless known to be needed.)

Problem 3: Lack of Appropriate Response to Attacks

Many organizations fail to respond to complaints of attacks originating from their sites, or to attacks against their sites, or respond in a haphazard manner. This makes containment and eradication of attacks difficult. Further, many organizations fail to share information about attacks, giving the attacker community the advantage of better intelligence sharing.

Solutions:

User organizations should establish incident response policies and teams with clearly defined responsibilities and procedures. ISPs should establish methods of responding quickly, and staffing to support those methods, for cases where their systems are found to have been used in attacks on other organizations. User organizations should encourage system administrators to participate in industry-wide early warning systems, where their corporate identities can be protected if necessary, to counter the rapid dissemination of information among the attack community. Attacks and system flaws should be reported to appropriate authorities (e.g., vendors, response teams) so that the information can be applied to defenses for other users.

Problem 4: Unprotected Computers

Many computers are vulnerable to take-over for distributed denial of service attacks because of inadequate implementation of well-known “best practices.” When those computers are used in attacks, the carelessness of their owners is instantly converted into major costs, headaches, and embarrassment for the owners of the computers being attacked. Furthermore, once a computer has been compromised, its data may be copied, altered or destroyed, its programs changed, and the system disabled.

Solutions:

User organizations should check their systems periodically to determine whether malicious software, including DDoS Trojan horse programs, has been installed. If such software is found, the system should be restored to a known good state.
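As a rough illustration of this periodic check, the sketch below hashes the files in a directory tree and flags any that match a list of “known bad” hashes (for example, hashes of DDoS Trojan horse binaries). The hash value and the directory are hypothetical placeholders.

    import hashlib
    import os

    KNOWN_BAD = {"5d41402abc4b2a76b9719d911017c592"}  # hypothetical Trojan hashes

    def md5_of(path, chunk=1 << 20):
        """Compute the MD5 of a file, reading it in chunks."""
        h = hashlib.md5()
        with open(path, "rb") as f:
            while block := f.read(chunk):
                h.update(block)
        return h.hexdigest()

    def scan(root):
        """Walk a directory tree and report files matching known-bad hashes."""
        for dirpath, _, files in os.walk(root):
            for name in files:
                path = os.path.join(dirpath, name)
                try:
                    if md5_of(path) in KNOWN_BAD:
                        print("possible Trojan:", path)
                except OSError:
                    pass  # unreadable file, skip

    scan("/usr/local/bin")  # hypothetical directory to audit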
User organizations should reduce the vulnerability of their systems by installing firewalls with rule sets that tightly limit transmission across the site’s perimeter (e.g., deny traffic, both incoming and outgoing, unless given specific instructions to allow it). All machines, routers, and other Internet-accessible equipment should be checked periodically to verify that all recommended security patches have been installed. The security community should maintain and publicize current “Top 20 Exploited Vulnerabilities” and “Top 20 Attacks” lists to help system administrators set priorities.

Users should turn off services that are not required and limit access to vulnerable management services (e.g., RPC-based services). Users and vendors should cooperate to create “system-hardening” scripts that less sophisticated users can run to close known holes and tighten settings to make their systems more secure, and users should employ these tools when they are available.

System software vendors should ship systems with security defaults set to the highest level of security rather than the lowest. These “secure out-of-the-box” configurations will greatly aid novice users and system administrators, and will save critically scarce time for even the most experienced security professionals.

System administrators should deploy “best practice” tools, including firewalls (as described above), intrusion detection systems, virus detection software, and software to detect unauthorized changes to files. This will reduce the risk that systems are compromised and used as a base for launching attacks, and it will increase confidence in the correct functioning of the systems. Software that detects unauthorized changes may also be helpful in restoring compromised systems to normal function.

System and network administrators should be given time and support for training and enhancement of their skills, and administrators and auditors should be certified periodically to verify that their security knowledge and skills are current.

Longer Term Efforts to Provide Adequate Safeguards

The steps listed above are needed now to allow us to begin to move away from the extremely vulnerable state we are in. While these steps will help, they will not adequately reduce the risk given the trends listed above. Those trends hint at new security requirements that will be met only if information technology and community attitudes about the Internet change in fundamental ways. In addition, research is needed in the areas of policy and law so that we can deal with aspects of the problem that technology improvements will not be able to address by themselves. The following are some of the items that should be considered:

  • Establish load and traffic volume monitoring at ISPs to provide early warning of attacks (see the sketch after this list).
  • Accelerate the adoption of the IPsec components of Internet Protocol version 6 and the Secure Domain Name System.
  • Increase the emphasis on security in the research and development of Internet2.
  • Support the development of tools that automatically generate access control lists for firewall and router policy.
  • Encourage the development of software and hardware that is engineered for safety, with possibly vulnerable settings and services turned off, and encourage vendors to automate security updating for their clients.
  • Sponsor research in network protocols and infrastructure to implement real-time flow analysis and flow control.
  • Encourage wider adoption of routers and switches that can perform sophisticated filtering with minimal performance degradation.
  • Sponsor continuing topological studies of the Internet to understand the nature of “choke points.”
  • Test deployment of, and continue research in, anomaly-based and other forms of intrusion detection.
  • Support community-wide consensus on uniform security policies that protect systems and outline the security responsibilities of network operators, Internet service providers, and Internet users.
  • Encourage development and deployment of a secure communications infrastructure that network operators and Internet service providers can use for real-time collaboration when dealing with attacks.
  • Sponsor research and development leading to safer operating systems that are also easier to maintain and manage.
  • Sponsor research into survivable systems that are better able to resist, recognize, and recover from attacks while still providing critical functionality.
  • Sponsor research into better forensic tools and methods to trace and apprehend malicious users without forcing the adoption of privacy-invading monitoring.
  • Provide meaningful infrastructure support for centers of excellence in information security education and research to produce a new generation of leaders in the field.
  • Consider changes in government procurement policy to emphasize security and safety rather than simply cost when acquiring information systems, and to hold managers accountable for poor security.
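The load-monitoring item above can be pictured with a toy sketch: keep an exponential moving average of per-interval packet counts and warn when the current interval far exceeds that average. The smoothing weight and threshold factor here are illustrative assumptions, not tuned values.

    class VolumeMonitor:
        def __init__(self, alpha=0.1, factor=5.0):
            self.avg = None       # smoothed packets-per-interval
            self.alpha = alpha    # weight given to each new sample
            self.factor = factor  # multiple of the average treated as anomalous

        def observe(self, count):
            """Feed one interval's packet count; True means it looks anomalous."""
            if self.avg is None:
                self.avg = float(count)
                return False
            anomalous = count > self.factor * self.avg
            self.avg = (1 - self.alpha) * self.avg + self.alpha * count
            return anomalous

    mon = VolumeMonitor()
    for pkts in [1000, 1100, 950, 1050, 9000]:  # synthetic interval counts
        if mon.observe(pkts):
            print("possible flood: interval count", pkts)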

Autopsy Forensic Browser for Computer Forensics

This is a step-by-step guide to the Autopsy Forensic Browser as a front end for computer forensics. The tool is essential for Linux forensics investigations and can also be used to analyze Windows images.

We will start from the presumption that you have the forensic toolkit installed, whether from a Live CD such as Helix or on a dedicated forensic workstation. Autopsy is built into the SANS Investigative Forensic Toolkit (SIFT) Workstation, which you can download from forensics.sans.org. You can start Autopsy by clicking on the magnifying glass in the upper right corner.

Step 1 — Start the Autopsy Forensic Browser

Autopsy is a web-based front end to TSK (The Sleuth Kit). By default, you connect to the Autopsy service using the URL http://localhost:9999. The default start page is displayed in Step 2.
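If the start page does not load, it can be worth confirming that the Autopsy daemon is actually listening before troubleshooting the browser. A quick sketch, assuming the default host and port described above:

    import socket

    def autopsy_up(host="localhost", port=9999, timeout=2):
        """Return True if something is accepting TCP connections on the port."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    print("Autopsy reachable:", autopsy_up())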

Step 2 — Start a New Case

Click New Case. This will add a new case folder to the system and allow you to begin adding evidence.


Step 3 — Enter the Case Details


Begin by entering the details of the case: the name of the case itself and a description. You should have a consistent means of identifying cases, whether you do external consulting, as I do, or use case designations specific to a company.

You will see the message (displayed in Step 4) when the case file is created.

Step 4 — Note where the Evidence Directory is located

In the example above, we see a case I created for a CHFI course. This screen displays where the evidence is located on the system.

Step 5 — Add a Host to the Case

Click “Add Host” and you will be presented with a screen (above) that allows you to add the host and a description. As the screen states, the time zone and clock skew can be configured, and you can add and use a list of known good or known bad hashes. This can be as complex as the NSRL lists or as simple as a hashed list of your own organization’s “known good” files. Hashes of known rootkits and other malware can be added as a known bad list.

Where a time skew is known, you can also add this in advance.

Step 6 — Note where the Host is located

Next, add the disk image (for example, /home/CHFI.img) by pressing the Add Image button. Autopsy allows you to use an image that you have already captured, such as an image of the disk taken with the dd command. You can also use Autopsy to capture an image, but that is not covered in this post.
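The dd command does this job from the shell. Purely to illustrate the raw-copy idea, here is a sketch that reads a source device block by block, writes the image file, and records an MD5 of everything copied; the source and destination paths are hypothetical.

    import hashlib

    def acquire(src="/dev/sdb", dst="/home/CHFI.img", block=1 << 20):
        """Copy src to dst in blocks and return the MD5 of the data copied."""
        h = hashlib.md5()
        with open(src, "rb") as fin, open(dst, "wb") as fout:
            while chunk := fin.read(block):
                fout.write(chunk)
                h.update(chunk)
        return h.hexdigest()

    # print("image MD5:", acquire())  # requires read access to the source device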

Step 7 — Add an Image to Analyze

The “Add Image” screen allows us to import the image that we are going to analyze in Autopsy.

Step 8 — Select the location of the Image to Analyze

This allows us to import an image into our evidence locker. Rather than working on the original image, you can choose to copy the image to the analysis host so that Autopsy works on a separate copy.

Step 9 — the Case Gallery

As you add hosts to the case, they will be displayed in the “Case Gallery”. When you go back to the Case Gallery, you will be presented with the options displayed in Step 10.

Step 10 — Now try the other options

You should experiment with the various features of the Autopsy browser to become familiar with its options and functionality. Try the other options and analyze an image to gain experience with the tool.

The Evidence Analysis Techniques in Autopsy

The Autopsy Forensic Browser acts as a graphical front end to The Sleuth Kit and other related tools, providing analysis, search, and case-management capabilities in a simple but comprehensive package. This collection of tools creates a simple yet powerful forensic analysis platform.

Analysis Modes in Autopsy

A dead analysis occurs when a dedicated analysis system is used to examine the data from a suspect system. When this occurs, Autopsy and The Sleuth Kit are run in a trusted environment, typically in a lab. Autopsy and TSK provide support for raw, Expert Witness, and AFF file formats.

A live analysis occurs when the suspect system is being analyzed while it is running. In this case, Autopsy and The Sleuth Kit are run from a CD in an untrusted environment. This is frequently used during incident response while the incident is being confirmed. Following confirmation, the system is acquired and a dead analysis performed.

Evidence Search Techniques

The Autopsy Browser provides the following evidence search functionality:

  • File Listing: Analyze the files and directories, including the names of deleted files and files with Unicode-based names.
  • File Content: The contents of files can be viewed in raw or hex form, or the ASCII strings can be extracted (see the first sketch following this list). When data is interpreted, Autopsy sanitizes it to prevent damage to the local analysis system. Autopsy does not use any client-side scripting languages.
  • Hash Databases: Look up unknown files in a hash database to quickly identify them as known good or known bad. Autopsy uses the NIST National Software Reference Library (NSRL) and user-created databases of known good and known bad files.
  • File Type Sorting: Sort the files based on their internal signatures to identify files of a known type. Autopsy can also extract only graphic images (including thumbnails). The extension of the file will also be compared to the file type to identify files that may have had their extension changed to hide them.
  • Timeline of File Activity: A timeline of file activity can help identify areas of a file system that may contain evidence (see the second sketch following this list). Autopsy can create timelines that contain entries for the Modified, Access, and Change (MAC) times of both allocated and unallocated files.
  • Keyword Search: Keyword searches of the file system image can be performed using ASCII strings and grep regular expressions. Searches can be performed on either the full file system image or just the unallocated space. An index file can be created for faster searches. Strings that are frequently searched for can be easily configured into Autopsy for automated searching.
  • Meta Data Analysis: Meta Data structures contain the details about files and directories. Autopsy allows you to view the details of any meta data structure in the file system. This is useful for recovering deleted content. Autopsy will search the directories to identify the full path of the file that has allocated the structure.
  • Data Unit Analysis: Data Units are where the file content is stored. Autopsy allows you to view the contents of any data unit in a variety of formats including ASCII, hexdump, and strings. The file type is also given and Autopsy will search the meta data structures to identify which has allocated the data unit.
  • Image Details: File system details can be viewed, including on-disk layout and times of activity. This mode provides information that is useful during data recovery.
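The string-extraction idea from the File Content item can be sketched in a few lines: scan a binary for runs of printable characters, the same approach as the Unix strings(1) tool. The image path and minimum string length below are assumptions for illustration.

    import re

    def ascii_strings(path, min_len=4):
        """Return printable-ASCII runs of at least min_len bytes."""
        pattern = re.compile(rb"[\x20-\x7e]{%d,}" % min_len)
        with open(path, "rb") as f:
            data = f.read()  # fine for a sketch; a real tool would read in chunks
        return [m.group().decode("ascii") for m in pattern.finditer(data)]

    for s in ascii_strings("/home/CHFI.img")[:20]:  # first 20 strings found
        print(s)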
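The timeline idea can likewise be approximated for allocated files using os.stat: collect the modified, accessed, and changed timestamps and sort them into one chronological list. Autopsy and The Sleuth Kit go further and include deleted and unallocated entries; the target tree below is a placeholder.

    import os
    import time

    def timeline(root):
        """Gather (timestamp, m/a/c, path) events for every file under root."""
        events = []
        for dirpath, _, files in os.walk(root):
            for name in files:
                path = os.path.join(dirpath, name)
                try:
                    st = os.stat(path)
                except OSError:
                    continue
                events += [(st.st_mtime, "m", path),
                           (st.st_atime, "a", path),
                           (st.st_ctime, "c", path)]
        return sorted(events)

    for ts, kind, path in timeline("/etc")[:10]:  # hypothetical target tree
        print(time.strftime("%Y-%m-%d %H:%M:%S", time.localtime(ts)), kind, path)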

Case Management

Autopsy provides a number of functions that aid in case management. In particular, investigations started within Autopsy are organized by cases, which can contain one or more hosts. Each host is configured to have its own time zone setting and clock skew so that the times shown are the same as the original user would have seen. Each host can contain one or more file system images to analyze. The following functions within Autopsy are specifically designed to aid in case management:

  • Event Sequencer: Time-based events can be added from file activity or IDS and firewall logs. Autopsy sorts the events so that the sequence of events in an incident can be easily determined.
  • Notes: Notes can be saved on a per-host and per-investigator basis. These allow the investigator to make quick notes about files and structures. The original location can be easily recalled with the click of a button when the notes are later reviewed. All notes are stored in an ASCII file.
  • Image Integrity: Because one of the most crucial aspects of a forensic investigation is ensuring that data is not modified during analysis, Autopsy by default generates an MD5 value for all files that are imported or created. The integrity of any file that Autopsy uses can be validated at any time (see the sketch following this list).
  • Reports: Autopsy can create ASCII reports for files and other file system structures. This enables the investigator to promptly produce consistent data sheets during the course of the investigation.
  • Logging: Audit logs are created on a case, host, and investigator level so that all actions can be easily retrieved. The Sleuth Kit commands are logged exactly as they are executed on the system.
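The integrity check described in the Image Integrity item amounts to recomputing a file’s MD5 and comparing it with the value recorded at import time. A minimal sketch, with both the stored hash and the image path as hypothetical placeholders:

    import hashlib

    def md5_file(path, block=1 << 20):
        """Compute the MD5 of a file, reading it in chunks."""
        h = hashlib.md5()
        with open(path, "rb") as f:
            while chunk := f.read(block):
                h.update(chunk)
        return h.hexdigest()

    recorded = "0123456789abcdef0123456789abcdef"  # hash stored at import time
    current = md5_file("/home/CHFI.img")           # hypothetical image path
    print("integrity OK" if current == recorded else "IMAGE HAS CHANGED")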

Autopsy is available from http://www.sleuthkit.org/autopsy.
