Friday, November 18, 2011

White Box Testing Techniques

Data-Flow Analysis

Data-flow analysis can be used to increase program understanding and to develop test cases based on data flow within the program. The data-flow testing technique is based on investigating the ways values are associated with variables and the ways that these associations affect the execution of the program. Data-flow analysis focuses on occurrences of variables, following paths from the definition (or initialization) of a variable to its uses. The variable values may be used for computing values for defining other variables or used as predicate variables to decide whether a predicate is true for traversing a specific execution path. A data-flow analysis for an entire program involving all variables and traversing all usage paths requires immense computational resources; however, this technique can be applied for select variables.

The simplest approach is to validate the usage of select sets of variables by executing a path that starts with definition and ends at uses of the definition. The path and the usage of the data can help in identifying suspicious code blocks and in developing test cases to validate the runtime behavior of the software. For example, for a chosen data definition-to-use path, with well-crafted test data, testing can uncover time-of-check-to-time-of-use (TOCTTOU) flaws. The "Security Testing" section in [Howard 02] explains the data mutation technique, which deals with perturbing environment data. The same technique can be applied to internal data as well, with the help of data-flow analysis.
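To make the definition-to-use idea concrete, here is a minimal Python sketch (the function and file names are hypothetical, not from any cited source) of the kind of path where a TOCTTOU flaw hides: the variable path is defined once, checked, and then used later, leaving a race window between the check and the use.

    import os

    def read_if_allowed(path):
        # Time of check: os.access() reports on the file as it exists now...
        if os.access(path, os.R_OK):
            # ...but between the check above and the use below, an attacker
            # who controls the directory can swap the file (e.g., a symlink).
            with open(path) as f:  # time of use
                return f.read()
        return None

A data-flow-driven test would exercise this definition-to-use path while changing the file between the check and the use; the usual fix is to open the file first and handle the resulting error instead of checking in advance.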

Code-Based Fault Injection

The fault injection technique perturbs program states by injecting software source code to force changes into the state of the program as it executes. Instrumentation is the process of non-intrusively inserting code into the software that is being analyzed and then compiling and executing the modified (or instrumented) software. Assertions are added to the code to raise a flag when a violation condition is encountered. This form of testing measures how software behaves when it is forced into anomalous circumstances. Essentially, this technique forces non-normative behavior of the software, and the resulting understanding can help determine whether a program has vulnerabilities that can lead to security violations. This technique can be used to force error conditions to exercise the error handling code, change execution paths, input unexpected (or abnormal) data, change return values, etc. In [Thompson 02], runtime fault injection is explained and advocated over code-based fault injection methods. One of the drawbacks of code-based methods listed in the book is the lack of access to source code. However, the discussion here assumes that source code is available and that the testers have the knowledge and expertise to understand the code for security implications. Refer to [Voas 98] for a detailed understanding of software fault injection concepts, methods, and tools.
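The fragment below is a minimal, illustrative sketch of code-based fault injection in Python (save_record and write_to_disk are hypothetical names): a fault is injected by forcing a helper to raise, and an assertion flags any deviation from safe failure.

    from unittest import mock

    def write_to_disk(data):
        with open("record.dat", "w") as f:
            f.write(data)

    def save_record(data):
        try:
            write_to_disk(data)
            return "saved"
        except OSError:
            return "failed-safely"  # the error-handling path we want to force

    # Inject the fault: make write_to_disk raise as if the disk were full,
    # then assert the program degrades safely instead of crashing.
    with mock.patch(f"{__name__}.write_to_disk", side_effect=OSError("disk full")):
        assert save_record("x") == "failed-safely"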

Abuse Cases

Abuse cases help security testers view the software under test in the same light as attackers do. Abuse cases capture the non-normative behavior of the system. While in [McGraw 04c] abuse cases are described more as a design analysis technique than as a white box testing technique, the same technique can be used to develop innovative and effective test cases mirroring the way attackers would view the system. With access to the source code, a tester is in a better position to quickly see where the weak spots are compared to an outside attacker. The abuse case can also be applied to interactions between components within the system to capture abnormal behavior, should a component misbehave. The technique can also be used to validate design decisions and assumptions. The simplest, most practical method for creating abuse cases is usually through a process of informed brainstorming, involving security, reliability, and subject matter expertise. Known attack patterns form a rich source for developing abuse cases.

Trust Boundaries Mapping

Defining zones of varying trust in an application helps identify vulnerable areas of communication and possible attack paths for security violations. Certain components of a system have trust relationships (sometimes implicit, sometimes explicit) with other parts of the system. Some of these trust relationships offer "trust elevation" possibilities—that is, these components can escalate trust privileges of a user when data or control flow cross internal boundaries from a region of less trust to a region of more trust [Hoglund 04]. For systems that have n-tier architecture or that rely on several third-party components, the potential for missing trust validation checks is high, so drawing trust boundaries becomes critical for such systems. Drawing clear boundaries of trust on component interactions and identifying data validation points (or chokepoints, as described in [Howard 02]) helps in validating those chokepoints and testing some of the design assumptions behind trust relationships. Combining trust zone mapping with data-flow analysis helps identify data that move from one trust zone to another and whether data checkpoints are sufficient to prevent trust elevation possibilities. This insight can be used to create effective test cases.
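As an illustration of testing at a chokepoint, the hedged Python sketch below (validate_account_id is a hypothetical validation function) targets a trust boundary directly: every value crossing from the untrusted tier must pass one validation routine, and the test throws attacker-style values at it.

    import re

    ACCOUNT_ID = re.compile(r"^[0-9]{1,10}$")

    def validate_account_id(raw: str) -> int:
        # Chokepoint: only a plain numeric ID may cross into the trusted zone.
        if not ACCOUNT_ID.match(raw):
            raise ValueError("invalid account id")
        return int(raw)

    # Values an attacker might send from the low-trust zone; each one the
    # chokepoint fails to reject is a potential trust elevation path.
    for hostile in ["1; DROP TABLE users", "-1", "9" * 11, "../etc/passwd"]:
        try:
            validate_account_id(hostile)
            print("MISSED:", hostile)
        except ValueError:
            pass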

Code Coverage Analysis

Code coverage is an important type of test effectiveness measurement. Code coverage is a way of determining which code statements or paths have been exercised during testing. With respect to testing, coverage analysis helps in identifying areas of code not exercised by a set of test cases. Alternatively, coverage analysis can also help in identifying redundant test cases that do not increase coverage. During ad hoc testing (testing performed without adhering to any specific test approach or process), coverage analysis can greatly reduce the time to determine the code paths exercised and thus improve understanding of code behavior. There are various measures for coverage, such as path coverage, statement coverage, multiple condition coverage, and function coverage. When planning to use coverage analysis, establish the coverage measure and the minimum percentage of coverage required. Many tools are available for code coverage analysis. It is important to note that coverage analysis should be used to measure test coverage and should not be used to create tests. After performing coverage analysis, if certain code paths or statements were found to be not covered by the tests, the questions to ask are whether the code path should be covered and why the tests missed those paths. A risk-based approach should be employed to decide whether additional tests are required. Covering all the code paths or statements does not guarantee that the software does not have faults; however, the missed code paths or statements should definitely be inspected. One obvious risk is that unexercised code will include Trojan horse functionality, whereby seemingly innocuous code can carry out an attack. Less obvious (but more pervasive) is the risk that unexercised code has serious bugs that can be leveraged into a successful attack [McGraw 02].
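The toy example below (parse_port and its test are hypothetical) shows the kind of gap a statement- or branch-coverage run, for instance with a tool such as coverage.py, would surface: the test exercises only the happy path, leaving the validation branch unexecuted and raising the risk question of whether that branch deserves a test.

    def parse_port(value: str) -> int:
        port = int(value)            # exercised by the test below
        if not 0 < port < 65536:     # branch never taken by the test
            raise ValueError("port out of range")
        return port

    def test_parse_port():
        # Happy-path only: coverage analysis would flag the missed branch.
        assert parse_port("8080") == 8080

    test_parse_port()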

Classes of Tests

Creating security tests other than ones that directly map to security specifications is challenging, especially tests that intend to exercise the non-normative or non-functional behavior of the system. When creating such tests, it is helpful to view the software under test from multiple angles, including the data the system is handling, the environment the system will be operating in, the users of the software (including software components), the options available to configure the system, and the error handling behavior of the system. There is an obvious interaction and overlap between the different views; however, treating each one with specific focus provides a unique perspective that is very helpful in developing effective tests.

Data

All input data should be untrusted until proven otherwise, and all data must be validated as it crosses the boundary between trusted and untrusted environments [Howard 02]. Data sensitivity/criticality plays a big role in data-based testing; however, this does not imply that other data can be ignored—non-sensitive data could allow a hacker to control a system. When creating tests, it is important to test and observe the validity of data at different points in the software. Tests based on data and data flow should explore incorrectly formed data and stress the size of the data. The section "Attacking with Data Mutation" in [Howard 02] describes different properties of data and how to mutate data based on given properties. To understand different attack patterns relevant to program input, refer to chapter six, "Crafting (Malicious) Input," in [Hoglund 04]. Tests should validate data from all channels, including web inputs, databases, networks, file systems, and environment variables. Risk analysis should guide the selection of tests and the data set to be exercised.
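A small, assumption-laden sketch of data mutation in Python follows (the valid record and the chosen mutations are illustrative): starting from one well-formed input, derive variants that stress size, type, and structure, and feed each to the handler under test.

    import copy

    valid = {"user": "alice", "age": 30, "note": "hello"}

    def mutations(record):
        oversized = copy.deepcopy(record)
        oversized["note"] = "A" * 1_000_000          # stress the data size
        wrong_type = copy.deepcopy(record)
        wrong_type["age"] = "thirty"                 # type confusion
        missing = {k: v for k, v in record.items() if k != "user"}
        return [oversized, wrong_type, missing]

    for m in mutations(valid):
        # handle_record(m) would be the assumed entry point under test; a
        # real test asserts it rejects malformed input rather than crashing.
        print("mutated case:", sorted(m.keys()))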

Fuzzing

Although normally associated exclusively with black box security testing, fuzzing can also provide value in a white box testing program. Specifically, [Howard 06] introduced the concept of “smart fuzzing.” Indeed, a rigorous testing program involving smart fuzzing can be quite similar to the sorts of data testing scenarios presented above and can produce useful and meaningful results as well. [Howard 06] claims that Microsoft finds some 20 to 25 percent of the bugs in their code via fuzzing techniques. Although much of that is no doubt “dumb” fuzzing in black box tests, “smart” fuzzing should also be strongly considered in a white box testing program.
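To show the difference in spirit (this is an illustrative sketch, not Microsoft's tooling), the Python fragment below fuzzes the payload bytes but rebuilds a valid length header each time, so the mutated input survives the outer parsing layer and reaches deeper code, which is the essence of smart fuzzing.

    import random

    def build_message(payload: bytes) -> bytes:
        # Keep the outer structure valid: a 4-byte big-endian length prefix.
        return len(payload).to_bytes(4, "big") + payload

    def smart_fuzz(payload: bytes, rng: random.Random) -> bytes:
        mutated = bytearray(payload)
        for _ in range(rng.randint(1, 4)):
            mutated[rng.randrange(len(mutated))] = rng.randrange(256)
        return build_message(bytes(mutated))  # re-wrap with a correct header

    rng = random.Random(0)
    for _ in range(3):
        print(smart_fuzz(b'{"cmd": "ping"}', rng))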

Environment

Software can only be considered secure if it behaves securely under all operating environments. The environment includes other systems, users, hardware, resources, networks, etc. A common cause of software field failure is miscommunication between the software and its environment [Whittaker 02]. Understanding the environment in which the software operates, and the interactions between the software and its environment, helps in uncovering vulnerable areas of the system. Understanding dependency on external resources (memory, network bandwidth, databases, etc.) helps in exploring the behavior of the software under different stress conditions. Another common source of input to programs is environment variables. If the environment variables can be manipulated, then they can have security implications. Similar concerns apply to registry information, configuration files, and property files. In general, analyzing entities outside the direct control of the system provides good insight for developing tests to ensure the robustness of the software under test, given its dependencies.
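As one hedged example of environment-focused testing, the sketch below runs a hypothetical program (target.py, with an assumed APP_CONFIG_PATH variable) under a deliberately hostile environment to observe whether it honors attacker-controlled values.

    import os
    import subprocess
    import sys

    # Launch the program under test with one environment variable pointed
    # at an attacker-controlled location.
    hostile_env = dict(os.environ, APP_CONFIG_PATH="/tmp/attacker-controlled.cfg")
    result = subprocess.run(
        [sys.executable, "target.py"],
        env=hostile_env,
        capture_output=True,
        text=True,
    )
    # The expected safe behavior (e.g., refusing to load configuration from
    # an untrusted path) determines what the test asserts on result.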

Component Interfaces

Applications usually communicate with other software systems. Within an application, components interface with each other to provide services and exchange data. Common causes of failure at interfaces include misunderstandings about data usage, data lengths, data validation, implicit assumptions, and trust relationships. Understanding the interfaces exposed by components is essential in exposing security bugs hidden in the interactions between components. The need for such understanding and testing becomes paramount when third-party software is used or when the source code is not available for a particular component. Another important benefit of understanding component interfaces is validation of principles of compartmentalization. The basic idea behind compartmentalization is to minimize the amount of damage that can be done to a system by breaking up the system into as few units as possible while still isolating code that has security privileges [McGraw 02]. Test cases can be developed to validate compartmentalization and to explore failure behavior of components in the event of security violations and how the failure affects other components.
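The sketch below is a minimal illustration (lookup_user is a stand-in for a third-party component, and the 64-character limit is an assumption) of probing an interface for undocumented behavior such as silent truncation, which is exactly the kind of interface misunderstanding that breeds security bugs.

    def lookup_user(name: str) -> dict:
        # Stand-in for a third-party component: note the silent truncation.
        return {"name": name[:64]}

    def test_interface_contract():
        # Probe the boundary: does oversized input get validated, truncated,
        # or propagated? The assertion documents the observed contract.
        result = lookup_user("A" * 1000)
        assert len(result["name"]) <= 64

    test_interface_contract()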

Configuration

In many cases, software comes with various parameters set by default, possibly with no regard for security. Often, functional testing is performed only with the default settings, thus leaving sections of code related to non-default settings untested. Two main security concerns with configuration parameters are sensitive data stored in configuration files and configuration parameters that change the flow of execution paths. For example, if user privileges, user roles, or user passwords are stored in configuration files, those files could be manipulated to elevate privileges, change roles, or access the system as a valid user. Configuration settings that change the path of execution could exercise vulnerable code sections that were not developed with security in mind. The change of flow also applies to cases where the settings move the system from one security level to another, even where the code sections were developed with security in mind. For example, changing an endpoint from requiring authentication to not requiring authentication means the endpoint can be accessed by everyone. When a system has multiple configurable options, testing all combinations of configuration can be time consuming; however, with access to source code, a risk-based approach can help in selecting combinations that have a higher probability of exposing security violations. In addition, coverage analysis should aid in determining gaps in test coverage of code paths.
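One way to put the risk-based selection into practice is sketched below (the option names are hypothetical): security-sensitive settings are tested exhaustively while low-risk settings stay at a single default, keeping the combination count manageable.

    import itertools

    security_options = {"require_auth": [True, False], "use_tls": [True, False]}

    # Exhaustive over the security-relevant settings; low-risk options such
    # as "verbose" are pinned to one default rather than multiplied in.
    for combo in itertools.product(*security_options.values()):
        config = dict(zip(security_options, combo), verbose=False)
        print("run test suite with", config)
        # e.g., assert that require_auth=False never exposes admin endpoints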

Error Handling

The most neglected code paths during the testing process are error handling routines. Error handling, as discussed here, includes exception handling, error recovery, and fault tolerance routines. Functionality tests are normally geared towards validating requirements, which generally do not describe negative (or error) scenarios. Even when negative functional tests are created, they don’t test for non-normative behavior or extreme error conditions, which can have security implications. For example, functional stress testing is not performed with an objective to break the system to expose security vulnerabilities. Validating the error handling behavior of the system is critical during security testing, especially subjecting the system to unusual and unexpected error conditions. Unusual errors are those that have a low probability of occurrence during normal usage. Unexpected errors are those that are not explicitly specified in the design specification and that the developers did not anticipate handling. For example, a system call may throw an "unable to load library" error, which may not be explicitly listed in the design documentation as an error to be handled.

All aspects of error handling should be verified and validated, including error propagation, error observability, and error recovery. Error propagation is how the errors are propagated through the call chain. Error observability is how the error is identified and what parameters are passed as error messages. Error recovery is getting back to a state conforming to specifications. For example, return codes for errors may not be checked, leading to uninitialized variables and garbage data in buffers; if the memory is manipulated before causing a failure, the uninitialized memory may contain attacker-supplied data. Another common mistake to look for is when sensitive information is included as part of the error messages.
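The closing sketch below (DB_PASSWORD and connect are hypothetical) targets error observability: it forces a failure and asserts that the resulting message carries no sensitive data. As written, the assertion fails, and that is the point: the test has caught a leaky error message.

    DB_PASSWORD = "s3cret"  # hypothetical sensitive value

    def connect(password):
        # Flawed on purpose: the error message echoes the secret back.
        raise ConnectionError(f"auth failed for password={password}")

    def test_error_message_hygiene():
        try:
            connect(DB_PASSWORD)
        except ConnectionError as e:
            # Error observability check: no secret may appear in the message.
            assert DB_PASSWORD not in str(e), "sensitive data leaked in error message"

    test_error_message_hygiene()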

