Chapter 3. Static techniques
With dynamic testing methods, software is executed using a set of input values and its output is then examined and compared to what is expected.
Types of defects that are easier to find during static testing are: deviations from standards, missing requirements, design defects, non-maintainable code and inconsistent interface specifications. Note that, in contrast to dynamic testing, static testing finds defects rather than failures.
Although inspection is perhaps the most documented and formal review technique, it is certainly not the only one. The formality of a review process is related to factors such as the maturity of the development process, any legal or regulatory requirements or the need for an audit trail.
Informal reviews are applied at various times during the early stages in the life cycle of a document. A two-person team can conduct an informal review, as the author can ask a colleague to review a document or code. In later stages these reviews often involve more people and a meeting. This normally involves peers of the author, who try to find defects in the document under review and discuss these defects in a review meeting. The goal is to help the author and to improve the quality of the document. Informal reviews come in various shapes and forms, but all have one characteristic in common - they are not documented.
A typical formal review process consists of six main steps:
Planning, Kick-off, Preparation, Review meeting, Rework, Follow-up.
Planning
The review process for a particular review begins with a 'request for review' by the author to the moderator (or inspection leader).
A moderator is often assigned to take care of the scheduling (dates, time, place and invitation) of the review.
For more formal reviews, e.g. inspections, the moderator always performs an entry check and defines formal exit criteria at this stage.
The entry check is carried out to ensure that the reviewers' time is not wasted on a document that is not ready for review.
For a review, the maximum size is usually between 10 and 20 pages.
In formal inspection, only a page or two may be looked at in depth in order to find the most serious defects that are not obvious.
The moderator assigns the roles to the reviewers; one such role is, for example, checking compliance to standards.
Kick-off
An optional step in a review procedure is a kick-off meeting.
The goal of this meeting is to get everybody on the same wavelength regarding the document under review and to commit to the time that will be spent on checking. For more formal reviews, the result of the entry check and the defined exit criteria are also discussed.
During the kick-off meeting the reviewers receive a short introduction on the objectives of the review and the documents. The relationships between the document under review and the other documents (sources) are explained, especially if the number of related documents is high.
Role assignments, checking rate, the pages to be checked, process changes and possible other questions are also discussed during this meeting. Of course, the distribution of the document under review, source documents and other related documentation can also be done during the kick-off.
Preparation
During preparation the reviewers work individually on the document under review, using related source documents, procedures, rules and checklists. All issues are recorded, preferably using a logging form. A critical success factor for a thorough preparation is the number of pages checked per hour. This is called the checking rate.
The optimum checking rate is the result of a mix of factors, including the type of document, its complexity, the number of related documents and the experience of the reviewer.
Usually the checking rate is in the range of five to ten pages per hour, but may be much less for formal inspection, e.g. one page per hour.
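As a rough illustration of how the checking rate translates into preparation effort, consider the sketch below; the document size and rates used are assumed figures, not prescribed values.

# Rough estimate of individual preparation effort for a review.
# The page count and checking rates below are illustrative assumptions.

def preparation_hours(pages: int, checking_rate: float) -> float:
    """Estimated preparation time per reviewer, in hours."""
    return pages / checking_rate

print(preparation_hours(15, 5.0))  # informal review at 5 pages/hour: 3.0 hours
print(preparation_hours(15, 1.0))  # formal inspection at 1 page/hour: 15.0 hours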
Review meeting
The meeting typically consists of the following elements: logging phase, discussion phase and decision phase. Defects identified during the preparation phase are logged, either by the author or by a scribe.
Every defect and its severity should be logged. The participant who identifies the defect proposes the severity. Severity classes could be:
• Critical: defects will cause downstream damage; the scope and impact of the defect is beyond the document under inspection.
• Major: defects could cause a downstream effect (e.g. a fault in a design can result in an error in the implementation).
• Minor: defects are not likely to cause downstream damage (e.g. non-compliance with the standards and templates).
The moderator tries to keep a good logging rate (number of defects logged per minute).
The most important exit criterion is the average number of critical and/or major defects found per page (e.g. no more than three critical/major defects per page).
If a project is under pressure, the moderator will sometimes be forced to skip re-reviews and exit with a defect-prone document. Setting, and agreeing, a quantified exit level criterion helps the moderator to make firm decisions at all times.
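The sketch below shows how such a quantified exit criterion could be evaluated from the logging list. The threshold of three critical/major defects per page comes from the example above; the data structure and figures are purely illustrative.

# Illustrative evaluation of a quantified exit criterion:
# "no more than three critical/major defects per page on average".
logged_defects = [                      # hypothetical logging list
    {"page": 1, "severity": "critical"},
    {"page": 1, "severity": "minor"},
    {"page": 2, "severity": "major"},
    {"page": 3, "severity": "major"},
]
pages_checked = 3
exit_threshold = 3.0                    # critical/major defects per page

serious = [d for d in logged_defects if d["severity"] in ("critical", "major")]
density = len(serious) / pages_checked
print(f"{density:.1f} critical/major defects per page")
print("exit criterion met" if density <= exit_threshold else "re-review needed")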
Rework
Based on the defects detected, the author will improve the document under review step by step.
Changes that are made to the document should be easy to identify during follow-up. Therefore the author has to indicate where changes are made (e.g. using 'Track changes' in word-processing software).
Follow-up
The moderator is responsible for ensuring that satisfactory actions have been taken on all (logged) defects, process improvement suggestions and change requests.
Although the moderator checks to make sure that the author has taken action on all known defects, it is not necessary for the moderator to check all the corrections in detail.
In order to control and optimize the review process, a number of measurements are collected by the moderator at each step of the process.
Roles and responsibilities
Within a review team, four types of participants can be distinguished: moderator, author, scribe and reviewer. In addition, management needs to play a role in the review process.
The moderator
The moderator (or review leader) leads the review process.
The moderator performs the entry check and the follow-up on the rework, in order to control the quality of the input and output of the review process.
The moderator also schedules the meeting, disseminates documents before the meeting, coaches other team members, paces the meeting, leads possible discussions and stores the data that is collected.
The author
The author's basic goal should be to learn as much as possible with regard to improving the quality of the document, but also to improve his or her ability to write future documents.
The author's task is to illuminate unclear areas and to understand the defects found.
The scribe
During the logging meeting, the scribe (or recorder) has to record each defect mentioned and any suggestions for process improvement.
Having someone other than the author take the role of the scribe (e.g. the moderator) can have significant advantages, since the author is freed up to think about the document rather than being tied down with lots of writing.
The reviewers
The task of the reviewers (also called checkers or inspectors) is to check any material for defects, mostly prior to the meeting. The level of thoroughness required depends on the type of review.
The manager
The manager is involved in the reviews as he or she decides on the execution of reviews, allocates time in project schedules and determines whether review process objectives have been met. The manager will also take care of any review training requested by the participants.
Types of review
A single document may be the subject of more than one review. For example, an informal review may be carried out before a technical review, or an inspection may be carried out on a requirements specification before a walkthrough with customers.
Walkthrough
A walkthrough is characterized by the author of the document under review guiding the participants through the document and his or her thought processes, to achieve a common understanding and to gather feedback. The content of the document is explained step by step by the author, to reach consensus on changes or to gather information. Within a walkthrough the author does most of the preparation.
A walkthrough is especially useful for higher-level documents, such as requirement specifications and architectural documents.
Key characteristics of walkthroughs are:
The meeting is led by the author; often a separate scribe is present.
Scenarios and dry runs may be used to validate the content.
Separate pre-meeting preparation for reviewers is optional.
Technical review
A technical review is a discussion meeting that focuses on achieving consensus about the technical content of a document.
Compared to inspections, technical reviews are less formal and there is little or no focus on defect identification on the basis of referenced documents, intended readership and rules.
During technical reviews defects are found by experts, who focus on the content of the document. The experts needed for a technical review are, for example, architects, chief designers and key users.
The goals of a technical review are to:
Assess the value of technical concepts and alternatives in the product and project environment;
Inform participants of the technical content of the document.
Key characteristics of a technical review are:
It is a documented defect-detection process that involves peers and technical experts.
It is often performed as a peer review without management participation.
Ideally it is led by a trained moderator, but possibly also by a technical expert.
Inspection
Inspection is the most formal review type. The document under inspection is prepared and checked thoroughly by the reviewers before the meeting, comparing the work product with its sources and other referenced documents, and using rules and checklists.
In the inspection meeting the defects found are logged and any discussion is postponed until the discussion phase. This makes the inspection meeting a very efficient meeting.
The generally accepted goals of inspection are to:
• help the author to improve the quality of the document under inspection;
• remove defects efficiently, as early as possible;
• improve product quality, by producing documents with a higher level of quality;
• create a common understanding by exchanging information among the inspection participants;
• train new employees in the organization's development process;
• learn from defects found and improve processes in order to prevent recurrence of similar defects;
• sample a few pages or sections from a larger document in order to measure the typical quality of the document, leading to improved work by individuals in the future, and to process improvements.
Key characteristics of an inspection are:
• It is usually led by a trained moderator (certainly not by the author).
• It uses defined roles during the process.
• It involves peers to examine the product.
• Rules and checklists are used during the preparation phase.
• A separate preparation is carried out during which the product is examined and the defects are found.
• The defects found are documented in a logging list or issue log.
• A formal follow-up is carried out by the moderator applying exit criteria.
• Optionally, a causal analysis step is introduced to address process improvement issues and learn from the defects found.
• Metrics are gathered and analyzed to optimize the process.
STATIC ANALYSIS BY TOOLS
Static analysis is an examination of requirements, design and code that differs from more traditional dynamic testing in a number of important ways:
- Static analysis is performed on requirements, design or code without actually executing the software artifact being examined.
- Static analysis is ideally performed before the types of formal review described earlier in this chapter.
- Static analysis is unrelated to dynamic properties of the requirements, design and code, such as test coverage.
- The goal of static analysis is to find defects, whether or not they may cause failures. As with reviews, static analysis finds defects rather than failures.
For static analysis there are many tools, and most of them focus on software code.
Static analysis tools are typically used by developers before, and sometimes during, component and integration testing and by designers during software modeling.
The tools can not only show structural attributes (code metrics), such as depth of nesting or cyclomatic number, and check against coding standards, but can also provide graphic depictions of control flow, data relationships and the number of distinct paths from one line of code to another.
Even the compiler can be considered a static analysis tool, since it builds a symbol table, points out incorrect usage and checks for non-compliance to coding language conventions (syntax).
Coding standards
Usually a coding standard consists of a set of programming rules, naming conventions and layout specifications. Rather than develop a coding standard from scratch, it is usually better to adopt an existing one; the main advantage of this is that it saves a lot of effort. An extra reason for adopting this approach is that if you take a well-known coding standard there will probably be checking tools available that support it.
Checking compliance to coding standards is best left to a tool rather than to human reviewers, for three main reasons: the number of rules in a coding standard is usually so large that nobody can remember them all; some context-sensitive rules that demand reviews of several files are very hard for human beings to check; and if people spend time checking coding standards in reviews, that will distract them from other defects they might otherwise find, making the review process less effective.
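As a purely illustrative sketch of how even a small tool can take over this kind of checking, the script below enforces two assumed rules (lower_snake_case function names and a 79-character line limit) on a Python source file. In practice an existing checker for the chosen standard would be used rather than a home-grown script.

# Minimal, illustrative coding-standard checker (the two rules are assumptions):
#  rule 1: function names must be lower_snake_case
#  rule 2: lines must not exceed 79 characters
import ast
import re
import sys

SNAKE_CASE = re.compile(r"^[a-z_][a-z0-9_]*$")

def check_file(path: str) -> list[str]:
    findings = []
    with open(path, encoding="utf-8") as handle:
        source = handle.read()
    for number, line in enumerate(source.splitlines(), start=1):
        if len(line) > 79:
            findings.append(f"{path}:{number}: line longer than 79 characters")
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef) and not SNAKE_CASE.match(node.name):
            findings.append(f"{path}:{node.lineno}: function '{node.name}' is not snake_case")
    return findings

if __name__ == "__main__":
    for finding in check_file(sys.argv[1]):
        print(finding)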
Code metrics
When performing static code analysis, information is usually calculated about structural attributes of the code, such as comment frequency, depth of nesting, cyclomatic number and number of lines of code. There are many different kinds of structural measures, each of which tells us something about the effort required to write the code in the first place, to understand the code when making a change, or to test the code using particular tools or techniques.
Complexity metrics identify high risk, complex areas.
The cyclomatic complexity metric is based on the number of decisions in a program. It is important to testers because it provides an indication of the amount of testing (including reviews) necessary to practically avoid defects.
There are many ways to calculate cyclomatic complexity; the easiest is to count the number of binary decision statements (e.g. if, while, for, etc.) and add 1. A more formal definition regarding the calculation rules is provided in the glossary. Below is a simple program as an example:
IF A = 354
   THEN IF B > C
      THEN A = B
      ELSE A = C
   ENDIF
ENDIF
Print A
The control flow generated from the program would look like Figure 3.2.
The control flow graph shows seven nodes (shapes) and eight edges (lines), so using the formal formula (edges - nodes + 2P) the cyclomatic complexity is 8 - 7 + 2 = 3. In this case there is no called graph or subroutine, so P = 1. Alternatively, one may calculate the cyclomatic complexity using the decision points rule: since there are two decision points, the cyclomatic complexity is 2 + 1 = 3.
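The decision points rule is also easy to automate. The sketch below, which assumes Python source text as input, simply counts the decision statements (if, while, for and conditional expressions) and adds 1; for a Python version of the example above it reproduces the value 3.

# Illustrative cyclomatic complexity estimate using the decision points rule:
# count the decision statements and add 1.
import ast

DECISION_NODES = (ast.If, ast.While, ast.For, ast.IfExp)

def cyclomatic_complexity(source: str) -> int:
    tree = ast.parse(source)
    decisions = sum(isinstance(node, DECISION_NODES) for node in ast.walk(tree))
    return decisions + 1

example = """
if a == 354:
    if b > c:
        a = b
    else:
        a = c
print(a)
"""
print(cyclomatic_complexity(example))  # 3, matching the worked example above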
Code structure
There are several aspects of code structure to consider:
• control flow structure;
• data flow structure;
• data structure.
The control flow structure addresses the sequence in which the instructions are executed. This aspect of structure reflects the iterations and loops in a program's design.
Control flow analysis can also be used to identify unreachable (dead) code. In fact many of the code metrics relate to the control flow structure, e.g. number of nested levels or cyclomatic complexity.
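For example, in the deliberately flawed function below (an assumed fragment, not taken from any real code base), control flow analysis would report the final statement as unreachable, because every path has already returned.

# Illustrative unreachable (dead) code that control flow analysis can flag.
def absolute_value(x: int) -> int:
    if x < 0:
        return -x
    else:
        return x
    print("done")  # unreachable: both branches above return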
Data flow structure follows the trail of a data item as it is accessed and modified by the code. Many times, the transactions applied to data are more complex than the instructions that implement them. Data flow measures therefore show how the data act as they are transformed by the program. Defects can be found such as referencing a variable with an undefined value and variables that are never used.
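The deliberately defective fragment below (illustrative names only) contains both kinds of anomaly: 'total' is referenced before it has a defined value and 'discount' is defined but never used. A data flow analyser, or a typical linter, would flag both without running the code.

# Illustrative data flow anomalies that static analysis can detect.
def compute_total(prices: list[float]) -> float:
    discount = 0.1              # defined but never used
    for price in prices:
        total = total + price   # 'total' referenced before it has a defined value
    return total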
Data structure refers to the organization of the data itself, independent of the program. When data is arranged as a list, queue, stack, or other well-defined structure, the algorithms for creating, modifying or deleting them are more likely to be well-defined, too. Thus, the data structure provides a lot of information about the difficulty in writing programs to handle the data and in designing test cases to show program correctness.
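As a small, assumed illustration of that point, the stack below has clearly defined operations, which makes both the implementation and the derivation of test cases (empty stack, push then pop, pop from an empty stack) straightforward.

# Illustrative well-defined data structure: the operations suggest the test cases.
class Stack:
    def __init__(self) -> None:
        self._items: list[int] = []

    def push(self, item: int) -> None:       # create/modify
        self._items.append(item)

    def pop(self) -> int:                     # delete
        if not self._items:
            raise IndexError("pop from empty stack")
        return self._items.pop()

    def is_empty(self) -> bool:
        return not self._items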
The important thing for the tester is that static analysis measures can be used as early warning signals of how good the code is likely to be when it is finished.
The value of static analysis lies especially in:
- early detection of defects prior to test execution;
- early warning about suspicious aspects of the code, design or requirements;
- identification of defects not easily found in dynamic testing;
- improved maintainability of code and design, since engineers work according to documented standards and rules;
- prevention of defects, provided that engineers are willing to learn from their errors and continuous improvement is practiced.