CHAPTER 6: Tool support for testing
1. TYPES OF TEST TOOL
Test tool classification
Test tools are grouped by the testing activities or areas that they support, for example tools that support management activities, tools that support static testing, and so on.
A 'test management' tool may provide support for managing testing (progress monitoring), configuration management of testware, incident management, and requirements management and traceability.
In order to measure coverage, the tool must first identify all of the structural elements that might be exercised, so that it can tell whether or not a test exercises each of them. This is called 'instrumenting the code'.
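To make the idea concrete, here is a minimal, hand-written sketch in Python of what instrumentation amounts to. A real coverage tool weaves in probes like these automatically; the function and probe names below are invented for illustration:

```python
# Minimal sketch of code instrumentation for coverage measurement.
# A real coverage tool inserts these probes automatically; here they
# are written by hand to show the idea. All names are illustrative.

executed = set()  # records which coverage items a test exercised

def probe(item_id):
    """Record that the coverage item with this id was exercised."""
    executed.add(item_id)

def grade(score):           # the code under test, instrumented
    probe("grade:entry")
    if score >= 50:
        probe("grade:pass-branch")
        return "pass"
    probe("grade:fail-branch")
    return "fail"

grade(70)  # a test: exercises the entry and the pass branch only
all_items = {"grade:entry", "grade:pass-branch", "grade:fail-branch"}
coverage = len(executed) / len(all_items) * 100
print(f"{coverage:.0f}% of coverage items exercised")  # 67%
```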
Non-intrusive coverage tools observe the blocks of memory containing the object code to get a rough measurement without instrumentation, e.g. for embedded software.
An example of the probe effect is when a debugging tool is used to try to find a particular defect. If the code is run with the debugger, the bug disappears; it only re-appears when the debugger is turned off (thereby making it much more difficult to find). Such defects are sometimes known as 'Heisenbugs'.
Coverage measurement tools are most often used in component testing.
Performance testing tools are more often used in system testing, system integration testing and acceptance testing.
Tool support for management of testing and tests
What does 'test management' mean? It could be 'the management of tests' or it could be 'managing the testing process'.
A test management tool may also manage the tests themselves; this would begin early in the project, and the tool would then continue to be used throughout the project and after the system has been released.
In practice, test management tools are typically used by specialist testers or test managers at system or acceptance test level.
Test management tools
Features or characteristics of test management tools include support for:
• management of tests (knowing which tests need to run in a common environment; the number of tests planned, written, run, passed or failed);
• scheduling of tests to be executed (manually or by a test execution tool);
• management of testing activities (time spent in test design, test execution, whether we are on schedule or on budget);
• interfaces to other tools, such as:
- test execution tools (test running tools);
- incident management tools;
- requirement management tools;
• traceability of tests, test results and defects to requirements or other sources;
• logging test results (note that the test management tool does not run tests, but could summarize results from test execution tools that it interfaces with);
• preparing progress reports based on metrics (quantitative analysis), such as:
- tests run and tests passed;
- incidents raised, defects fixed and outstanding.
This information can be used to monitor the testing process and decide what actions to take (test control). Test management tools help to gather, organize and communicate information about the testing on a project.
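As a small illustration of the kind of summarizing a test management tool performs, the sketch below (in Python, with invented result data) computes the planned/run/passed/failed figures mentioned above:

```python
# Sketch: summarizing test results into the progress metrics a test
# management tool might report. The result data is made up.
results = [
    {"test": "TC-01", "status": "passed"},
    {"test": "TC-02", "status": "failed"},
    {"test": "TC-03", "status": "passed"},
    {"test": "TC-04", "status": "not run"},
]

run = [r for r in results if r["status"] in ("passed", "failed")]
passed = sum(1 for r in run if r["status"] == "passed")
print(f"planned: {len(results)}, run: {len(run)}, "
      f"passed: {passed}, failed: {len(run) - passed}")
```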
Requirements management tools
Are requirements management tools really testing tools? Some people may say they are not, but because tests are based on requirements, these tools do provide some features that are very helpful to testing.
Features or characteristics of requirements management tools include support for:
• storing requirement statements;
• storing information about requirement attributes;
• checking consistency of requirements;
• identifying undefined, missing or 'to be defined later' requirements;
• prioritizing requirements for testing purposes;
• traceability of requirements to tests and tests to requirements, functions or features;
• traceability through levels of requirements;
• interfacing to test management tools;
• coverage of requirements by a set of tests (sometimes).
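The traceability support listed above can be pictured with a small sketch. The requirement and test identifiers below are invented; the point is simply mapping requirements to tests and spotting requirements that no test covers:

```python
# Sketch: traceability between requirements and tests, and a simple
# check for requirements not yet covered by any test. Data is invented.
req_to_tests = {
    "REQ-1": ["TC-01", "TC-02"],
    "REQ-2": ["TC-03"],
    "REQ-3": [],               # no test traces to this requirement
}

uncovered = [req for req, tests in req_to_tests.items() if not tests]
print("requirements with no tests:", uncovered)  # ['REQ-3']

# Reverse traceability: which requirement(s) does a test relate to?
test_to_reqs = {}
for req, tests in req_to_tests.items():
    for tc in tests:
        test_to_reqs.setdefault(tc, []).append(req)
print(test_to_reqs["TC-01"])  # ['REQ-1']
```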
Incident management tools
This type of tool is also known as a defect-tracking tool, a defect-management tool, a bug-tracking tool or a bug-management tool. However, 'incident management tool' is probably a better name for it because not all of the things tracked are actually defects or bugs; incidents may also be perceived problems, anomalies (that aren't necessarily defects) or enhancement requests. Also what is normally recorded is information about the failure (not the defect) that was generated during testing.
Incident reports go through a number of stages, from initial identification and recording of the details, through analysis, classification and assignment for fixing, to being fixed, re-tested and closed. Incident management tools make it much easier to keep track of incidents over time.
Features or characteristics of incident management tools include support for:
• storing information about the attributes of incidents (e.g. severity);
• storing attachments (e.g. a screen shot);
• prioritizing incidents;
• assigning actions to people (fix, confirmation test, etc.);
• recording the status of incidents (e.g. open, rejected, duplicate, deferred, ready for confirmation test, closed);
• reporting of statistics/metrics about incidents (e.g. average time open, number of incidents with each status, total number raised, open or closed).
Incident management tool functionality may be included in commercial test management tools.
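To illustrate the attributes and status lifecycle described above, here is a minimal sketch of an incident record in Python; the fields, statuses and example values are illustrative, not a real tool's schema:

```python
# Sketch: the kind of record and status lifecycle an incident
# management tool maintains. Fields and statuses are illustrative.
from dataclasses import dataclass, field

STATUSES = ["open", "rejected", "duplicate", "deferred",
            "ready for confirmation test", "closed"]

@dataclass
class Incident:
    ident: str
    summary: str
    severity: str              # e.g. "critical", "major", "minor"
    assigned_to: str = ""
    status: str = "open"
    attachments: list = field(default_factory=list)  # e.g. screen shots

    def move_to(self, new_status):
        if new_status not in STATUSES:
            raise ValueError(f"unknown status: {new_status}")
        self.status = new_status

inc = Incident("INC-042", "Total not updated after delete", "major")
inc.assigned_to = "dev-team"
inc.move_to("ready for confirmation test")
```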
Configuration management tools
Configuration management tools are not strictly testing tools either, but good configuration management is critical for controlled testing.
We need to know exactly what it is that we are supposed to test, such as the exact version of all of the things that belong in a system. It is possible to perform configuration management activities without the use of tools, but the tools make life a lot easier, especially in complex environments.
Features or characteristics of configuration management tools include support for:
• storing information about versions and builds of the software and testware;
• traceability between software and testware and different versions or variants;
• keeping track of which versions belong with which configurations (e.g. operating systems, libraries, browsers);
• build and release management;
• baselining (e.g. all the configuration items that make up a specific release);
• access control (checking in and out).
Tool support for static testing
Review process support tools
For a very informal review, where one person looks at another's document and gives a few comments about it, a tool such as this might just get in the way. However, when the review process is more formal, when many people are involved, or when the people involved are in different geographical locations, then tool support becomes far more beneficial.
One thing that should be monitored for each review is that the reviewers have not gone over the document too quickly, i.e. that the checking rate (number of pages checked per hour) was close to that recommended for that review cycle. A review process support tool could automatically calculate the checking rate and flag exceptions. The review process support tools can normally be tailored for the particular review process or type of review being done.
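The checking-rate calculation is simple enough to sketch directly. In the Python fragment below, the recommended rate and the flagging rule (more than twice the recommended rate) are assumed example values, not figures from any standard:

```python
# Sketch: the checking-rate calculation a review process support tool
# could automate. The recommended rate is an assumed example value.
RECOMMENDED_PAGES_PER_HOUR = 10  # illustrative target for this review

def checking_rate(pages_checked, hours_spent):
    return pages_checked / hours_spent

rate = checking_rate(pages_checked=45, hours_spent=1.5)  # 30 pages/hour
if rate > 2 * RECOMMENDED_PAGES_PER_HOUR:                # assumed rule
    print(f"flag: checking rate {rate:.0f} pages/hour is well above "
          f"the recommended {RECOMMENDED_PAGES_PER_HOUR}")
```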
Features or characteristics of review process support tools include support for:
• a common reference for the review process or processes to use in different situations;
• storing and sorting review comments;
• communicating comments to relevant people;
• coordinating online reviews;
• keeping track of comments, including defects found, and providing statistical information about them;
• providing traceability between comments, documents reviewed and related documents;
• a repository for rules, procedures and checklists to be used in reviews, as well as entry and exit criteria;
• monitoring the review status (passed, passed with corrections, requires re-review);
• collecting metrics and reporting on key factors.
Static analysis tools (D)
Static analysis tools are normally used by developers as part of the development and component testing process. The key aspect is that the code (or other artefact) is not executed or run. Of course the tool itself is executed, but the source code we are interested in is the input data to the tool.
Static analysis can also be carried out on things other than software code, for example static analysis of requirements or static analysis of websites.
Static analysis tools for code can help the developers to understand the structure of the code, and can also be used to enforce coding standards.
Features or characteristics of static analysis tools include support to:
• calculate metrics such as cyclomatic complexity or nesting levels (which can help to identify where more testing may be needed due to increased risk);
• enforce coding standards;
• analyze structures and dependencies;
• aid in code understanding;
• identify anomalies or defects in the code.
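As a very rough illustration of metric calculation, the sketch below estimates cyclomatic complexity by counting decision points with Python's standard ast module. Real static analysis tools are far more thorough; this only shows the principle:

```python
# Sketch: a crude cyclomatic-complexity estimate, counting decision
# points in Python source with the standard ast module. A real static
# analysis tool does far more; this only illustrates the idea.
import ast

def rough_complexity(source):
    tree = ast.parse(source)
    decisions = sum(isinstance(node, (ast.If, ast.For, ast.While,
                                      ast.BoolOp, ast.ExceptHandler))
                    for node in ast.walk(tree))
    return decisions + 1  # one basic path plus one per decision point

code = """
def classify(x):
    if x < 0:
        return "negative"
    if x == 0:
        return "zero"
    return "positive"
"""
print(rough_complexity(code))  # 3
```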
Modeling tools (D)
Modeling tools help to validate models of the system or software. For example a tool can check consistency of data objects in a database and can find inconsistencies and defects.
Modeling tools can also check state models or object models. Modeling tools are typically used by developers and can help in the design of the software.
One strong advantage of both modeling tools and static analysis tools is that they can be used before dynamic tests can be run. This enables any defects that these tools can find to be identified as early as possible, when it is easier and cheaper to fix them.
'Model-based testing tools' are actually tools that generate test inputs or test cases from stored information about a particular model (e.g. a state diagram), so they are classified as test design tools.
Features or characteristics of modeling tools include support for:
• identifying inconsistencies and defects within the model;
• helping to identify and prioritize areas of the model for testing;
• predicting system response and behavior under various situations, such as level of load;
• helping to understand system functions and identify test conditions using a modeling language such as UML.
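One consistency check a modeling tool might perform can be sketched directly: finding states in a state model that are unreachable from the initial state. The model below is invented for illustration:

```python
# Sketch: a consistency check a modeling tool might perform on a
# state model - finding states unreachable from the initial state.
from collections import deque

transitions = {
    "idle":        {"insert_card": "active"},
    "active":      {"eject": "idle", "withdraw": "dispensing"},
    "dispensing":  {"done": "idle"},
    "maintenance": {"finish": "idle"},   # nothing leads here: a defect?
}

def reachable(start):
    seen, queue = {start}, deque([start])
    while queue:
        for target in transitions.get(queue.popleft(), {}).values():
            if target not in seen:
                seen.add(target)
                queue.append(target)
    return seen

unreachable = set(transitions) - reachable("idle")
print("unreachable states:", unreachable)  # {'maintenance'}
```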
Tool support for test specification
These tools support the activities of specifying tests.
Test design tools (some use orthogonal arrays)
Test design tools help to construct test cases, or at least test inputs (which is part of a test case). If an automated oracle is available, then the tool can also construct the expected result, so it can actually generate test cases (rather than just test inputs).
For example, if the requirements are kept in a requirements management or test management tool, or in a Computer Aided Software Engineering (CASE) tool used by developers, then it is possible to identify the input fields, including the range of valid values. This range information can be used to identify boundary values and equivalence partitions. If the valid range is stored, the tool can distinguish between values that should be accepted and those that should generate an error message.
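A sketch of that idea: given a stored valid range, derive boundary values and one representative value per equivalence partition, as a test design tool might. The 'age' field and its range are invented examples:

```python
# Sketch: deriving boundary values and equivalence-partition
# representatives from a stored valid range. Field and range invented.
def boundary_values(lo, hi):
    """Two-point boundary values for an integer range [lo, hi]."""
    return [lo - 1, lo, hi, hi + 1]

def partition_representatives(lo, hi):
    return {"below (invalid)": lo - 1,
            "within (valid)": (lo + hi) // 2,
            "above (invalid)": hi + 1}

# e.g. an 'age' input field whose stored valid range is 18..99
print(boundary_values(18, 99))            # [17, 18, 99, 100]
print(partition_representatives(18, 99))  # one value per partition
```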
Another type of test design tool is sometimes called a 'screen scraper', a structured template or a test frame. The tool looks at a window of the graphical user interface and identifies all of the buttons, lists and input fields, and can set up a test for each thing that it finds. This means that every button will be clicked for example and every list box will be selected. This is a good start for a thorough set of tests and it can quickly and easily identify non-working buttons. However, unless the tool has access to an oracle, it may not know what should actually happen as a result of the button click.
Features or characteristics of test design tools include support for:
• generating test input values from:
- requirements;
- design models (state, data or object);
- code;
- graphical user interfaces;
- test conditions;
• generating expected results, if an oracle is available to the tool.
The benefit of this type of tool is that it can easily and quickly identify the tests (or test inputs) that will exercise all of the elements, e.g. input fields, buttons, branches. This helps the testing to be more thorough (if that is an objective of the test!).
Test data preparation tools
Setting up test data can be a significant effort, especially if an extensive range or volume of data is needed for testing. Test data preparation tools help in this area. They may be used by developers, but they may also be used during system or acceptance testing. They are particularly useful for performance and reliability testing, where a large amount of realistic data is needed.
Test data preparation tools enable data to be selected from an existing database or created, generated, manipulated and edited for use in tests. The most sophisticated tools can deal with a range of file and database formats.
Features or characteristics of test data preparation tools include support to:
• extract selected data records from files or databases;
• 'massage' data records to make them anonymous or not able to be identified with real people (which may be needed for data protection);
• enable records to be sorted or arranged in a different order;
• generate new records populated with pseudo-random data, or data set up according to some guidelines;
• construct a large number of similar records from a template, to give a large set of records for volume tests.
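As an illustration of the last two features above, the sketch below generates a large set of similar records from a template using pseudo-random data; all field names and values are invented:

```python
# Sketch: generating many similar records from a template with
# pseudo-random values, e.g. for a volume test. Fields are invented.
import random

random.seed(42)  # reproducible test data

def make_records(template, count):
    records = []
    for i in range(count):
        rec = dict(template)
        rec["id"] = i + 1
        rec["name"] = f"user{i + 1:05d}"     # anonymous names
        rec["balance"] = round(random.uniform(0, 10_000), 2)
        records.append(rec)
    return records

template = {"id": None, "name": None, "balance": None, "currency": "EUR"}
data = make_records(template, count=100_000)
print(len(data), data[0])
```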
Tool support for test execution and logging
Test execution tools
This type of tool is also referred to as a 'test running tool'. Most tools of this type offer a way to get started by capturing or recording manual tests; hence they are also known as 'capture/playback' tools, 'capture/replay' tools or 'record/playback' tools.
Test execution tools use a scripting language to drive the tool. The scripting language is actually a programming language. So any tester who wishes to use a test execution tool directly will need to use programming skills to create and modify the scripts.
The advantage of programmable scripting is that tests can repeat actions (in loops) for different data values (i.e. test inputs), they can take different routes depending on the outcome of a test and they can be called from other scripts giving some structure to the set of tests.
Capturing tests may seem attractive, but when you try to replay the captured tests, this approach does not scale up for large numbers of tests. The main reason for this is that a captured script is very difficult to maintain because:
• It is closely tied to the flow and interface presented by the GUI.
• The test input information is 'hard-coded', i.e. it is embedded in the individual script for each test.
Although they are commonly referred to as testing tools, they are actually best used for regression testing (so they could be referred to as 'regression testing tools' rather than 'testing tools').
One of the most significant benefits of using this type of tool is that whenever an existing system is changed (e.g. for a defect fix or an enhancement), all of the tests that were run earlier could potentially be run again, to make sure that the changes have not disturbed the existing system by introducing or revealing a defect.
Features or characteristics of test execution tools include support for:
• capturing (recording) test inputs while tests are executed manually;
• storing an expected result in the form of a screen or object to compare to, the next time the test is run;
• executing tests from stored scripts and optionally data files accessed by the script (if data-driven or keyword-driven scripting is used);
• dynamic comparison (while the test is running) of screens, elements, links, controls, objects and values;
• the ability to initiate post-execution comparison;
• logging results of tests run (pass/fail, differences between expected and actual results);
• masking or filtering of subsets of actual and expected results, for example excluding the screen-displayed current date and time, which is not of interest to a particular test;
• measuring timings for tests;
• synchronizing inputs with the application under test, e.g. wait until the application is ready to accept the next input, or insert a fixed delay to represent human interaction speed;
• sending summary results to a test management tool.
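Synchronization is essentially a polling wait. The sketch below shows the idea with only the Python standard library; the application object and its readiness check in the usage comment are hypothetical:

```python
# Sketch: the kind of synchronization a test execution tool provides -
# poll until the application is ready to accept the next input, or
# give up after a timeout. The readiness check is a stand-in.
import time

def wait_until(condition, timeout=10.0, interval=0.25):
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return False

# usage: block until a (hypothetical) dialog is displayed
# if not wait_until(lambda: app.dialog_visible("Save As")):
#     raise TimeoutError("application never became ready for input")
```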
Test harness/unit test framework tools (D)
These two types of tool are grouped together because they are variants of the type of support needed by developers when testing individual components or units of software. A test harness provides stubs and drivers, which are small programs that interact with the software under test (e.g. for testing middleware and embedded software).
Some unit test framework tools provide support for object-oriented software, others for other development paradigms. Unit test frameworks can be used in agile development to automate tests in parallel with development.
Both types of tool enable the developer to test, identify and localize any defects. The framework or the stubs and drivers supply any information needed by the software being tested (e.g. an input that would have come from a user) and also receive any information sent by the software. Stubs may also be referred to as 'mock objects'.
Test harnesses or drivers may be developed in-house for particular systems. Unit test framework tools are very similar to test execution tools, since they include facilities such as the ability to store test cases and monitor whether tests pass or fail.
Features or characteristics of test harnesses and unit test framework tools include support for:
• supplying inputs to the software being tested;
• receiving outputs generated by the software being tested;
• executing a set of tests within the framework or using the test harness;
• recording the pass/fail results of each test (framework tools);
• storing tests (framework tools);
• debugging (framework tools);
• coverage measurement at code level (framework tools).
Test comparators
We must compare what the software produces to what it should produce. A test comparator helps to automate aspects of that comparison.
There are two ways in which actual results of a test can be compared to the expected results for the test. Dynamic comparison is where the comparison is done dynamically, i.e. while the test is executing. The other way is post-execution comparison, where the comparison is performed after the test has finished executing and the software under test is no longer running.
Test execution tools include the capability to perform dynamic comparison while the tool is executing a test. Dynamic comparison is useful when an actual result does not match the expected result in the middle of a test - the tool can be programmed to take some recovery action at this point or go to a different set of tests.
Post-execution comparison is usually best done by a separate tool (i.e. not the test execution tool); typically it is a 'stand-alone' tool.
Post-execution comparison is best for comparing a large volume of data, for example comparing the contents of an entire file with the expected contents of that file, or comparing a large set of records from a database with the expected content of those records. For example, comparing the result of a batch run (e.g. overnight processing of the day's online transactions) is probably impossible to do without tool support.
Whether a comparison is dynamic or post-execution, the test comparator needs to know what the correct result is. This may be stored as part of the test case itself or it may be computed using a test oracle.
Features or characteristics of test comparators include support for:
• dynamic comparison of transient events that occur during test execution;
• post-execution comparison of stored data, e.g. in files or databases;
• masking or filtering of subsets of actual and expected results.
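Masking plus post-execution comparison can be sketched as follows: timestamps are replaced by a placeholder before the actual and expected output files are compared, so irrelevant differences do not fail the test. The file names in the usage comment are placeholders:

```python
# Sketch: post-execution comparison with masking - timestamps are
# replaced by a placeholder before actual and expected output files
# are compared, so irrelevant differences are ignored.
import re

TIMESTAMP = re.compile(r"\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}")

def masked_lines(path):
    with open(path) as f:
        return [TIMESTAMP.sub("<TIME>", line) for line in f]

def compare(actual_path, expected_path):
    actual = masked_lines(actual_path)
    expected = masked_lines(expected_path)
    if len(actual) != len(expected):
        return [("line count", len(actual), len(expected))]
    diffs = [(i + 1, a, e)
             for i, (a, e) in enumerate(zip(actual, expected)) if a != e]
    return diffs or None  # None means the files match after masking

# e.g. compare("batch_run.out", "batch_run.expected")
```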
Coverage measurement tools (D)
How thoroughly have you tested? Coverage tools can help answer this question.
A coverage tool first identifies the elements or coverage items that can be counted, so that it can tell when a test has exercised each coverage item.
At component testing level, the coverage items could be lines of code or code statements or decision outcomes (e.g. the True or False exit from an IF statement).
At component integration level, the coverage item may be a call to a function or module, although coverage can also be measured at system or acceptance testing levels.
The process of identifying the coverage items at component test level is called 'instrumenting the code'.
The coverage tool then counts the number of coverage items that have been executed by the test suite, and reports the percentage of coverage items that have been exercised, and may also identify the items that have not yet been exercised (i.e. not yet tested).
Features or characteristics of coverage measurement tools include support for:
• identifying coverage items (instrumenting the code);
• calculating the percentage of coverage items that were exercised by a suite of tests;
• reporting coverage items that have not been exercised as yet;
• identifying test inputs to exercise as yet uncovered items (test design tool functionality);
• generating stubs and drivers (if part of a unit test framework).
Note that coverage tools only measure the coverage of the items that they can identify. Just because your tests have achieved 100% statement coverage, this does not mean that your software is 100% tested!
Security tools
Security testing tools can be used to test security by trying to break into a system, whether or not it is protected by a security tool. The attacks may focus on the network, the support software, the application code or the underlying database.
Features or characteristics of security testing tools include support for:
• identifying viruses;
• detecting intrusions such as denial of service attacks;
• simulating various types of external attacks;
• probing for open ports or other externally visible points of attack;
• identifying weaknesses in password files and passwords;
• security checks during operation, e.g. checking the integrity of files, and intrusion detection, e.g. checking the results of test attacks.
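As a tiny stand-in for the port-probing feature above, the sketch below checks a host for open TCP ports using Python's standard socket module. Only probe systems you are authorized to test; the host and port list are examples:

```python
# Sketch: probing a host for open (externally visible) ports with the
# standard socket module - a very small stand-in for what security
# testing tools do. Only probe systems you are authorized to test.
import socket

def open_ports(host, ports, timeout=0.5):
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means connected
                found.append(port)
    return found

print(open_ports("127.0.0.1", ports=[22, 80, 443, 8080]))
```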
Tool support for performance and monitoring
These tools support activities that can take place during testing or after a system is released into live operation.
Dynamic analysis tools (D)
Dynamic analysis tools are 'dynamic' because they require the code to be running. They are 'analysis' rather than 'testing' tools because they analyze what is happening 'behind the scenes' while the software is running.
Features or characteristics of dynamic analysis tools include support for:
• detecting memory leaks;
• identifying pointer arithmetic errors such as null pointers;
• identifying time dependencies.
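As a small taste of dynamic analysis, the sketch below uses Python's built-in tracemalloc module to watch memory grow while a deliberately leaky function runs. The leak is contrived, and real dynamic analysis tools observe the program without this kind of hand-written harness:

```python
# Sketch: using Python's built-in tracemalloc as a stand-in for a
# dynamic analysis tool - watching memory grow while code runs to
# hint at a leak. The leaky function is a contrived example.
import tracemalloc

_cache = []

def leaky_operation():
    _cache.append(bytearray(1024 * 1024))  # 1 MiB that is never released

tracemalloc.start()
before, _ = tracemalloc.get_traced_memory()
for _ in range(20):
    leaky_operation()
after, peak = tracemalloc.get_traced_memory()
print(f"memory grew by {(after - before) / 1e6:.1f} MB "
      f"(peak {peak / 1e6:.1f} MB)")
tracemalloc.stop()
```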
These tools would typically be used by developers in component testing and component integration testing.
Performance-testing tools
If the performance is not up to the standard expected, then some analysis needs to be performed to see where the problem is and to know what can be done to improve the performance.
Features or characteristics of performance-testing tools include support for:
• generating a load on the system to be tested;
• measuring the timing of specific transactions as the load on the system varies;
• measuring average response times;
• producing graphs or charts of responses over time.
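A bare-bones illustration of load generation and response-time measurement, using only the Python standard library. The URL, the number of virtual users and the request count are invented, and real performance-testing tools offer much richer load profiles and reporting:

```python
# Sketch: generating a simple load and measuring response times with
# only the standard library. The URL is a placeholder endpoint.
import statistics
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost:8000/health"  # hypothetical endpoint

def timed_request(_):
    start = time.perf_counter()
    with urllib.request.urlopen(URL, timeout=10) as resp:
        resp.read()
    return time.perf_counter() - start

with ThreadPoolExecutor(max_workers=20) as pool:      # 20 virtual users
    timings = list(pool.map(timed_request, range(200)))  # 200 requests

print(f"avg {statistics.mean(timings):.3f}s, "
      f"max {max(timings):.3f}s over {len(timings)} requests")
```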
Monitoring tools
Monitoring tools are used to continuously keep track of the status of the system in use, in order to have the earliest warning of problems and to improve service. There are monitoring tools for servers, networks, databases, security, performance, website and internet usage, and applications.
Features or characteristics of monitoring tools include support for:
• identifying problems and sending an alert message to the administrator (e.g. network administrator);
• logging real-time and historical information;
• finding optimal settings;
• monitoring the number of users on a network;
• monitoring network traffic (either in real time or covering a given length of time of operation, with the analysis performed afterwards).
Tool support for specific application areas
There are also further specializations of tools within these classifications. For example there are web-based performance-testing tools as well as performance-testing tools for back-office systems.
There are static analysis tools for specific development platforms and programming languages, since each programming language and every platform has distinct characteristics.
There are dynamic analysis tools that focus on security issues, as well as dynamic analysis tools for embedded systems.
Commercial tool sets may be bundled for specific application areas such as web-based or embedded systems.
Tool support using other tools
Testers may also use SQL to set up and query databases containing test data. Tools used by developers when debugging, to help localize defects and check their fixes, are also testing tools.
Developers use debugging tools when identifying and fixing defects. The debugging tools enable them to run individual and localized tests to ensure that they have correctly identified the cause of a defect and to confirm that their change to the code will indeed fix the defect. Testers can use Perl scripts to help compare test results.
2. EFFECTIVE USE OF TOOLS: POTENTIAL BENEFITS AND RISKS
2.1 Potential benefits of using tools
There are many benefits that can be gained by using tools to support testing, whatever the specific type of tool. Benefits include:
• Reduction of repetitive work;
• Greater consistency and repeatability;
• Objective assessment;
• Ease of access to information about tests or testing.
Repetitive work is tedious to do manually. People become bored and make mistakes when doing the same task over and over. Examples of this type of repetitive work include running regression tests, entering the same test data over and over again (both of which can be done by a test execution tool), checking against coding standards (which can be done by a static analysis tool) or creating a specific test database (which can be done by a test data preparation tool).
Greater consistency and repeatability come from the fact that a tool will repeat exactly what it did before. Examples include checking to confirm the correctness of a fix to a defect (which can be done by a debugging tool or test execution tool), entering test inputs (test execution tool) and generating tests from requirements (test design tool or possibly a requirements management tool).
Examples of objective assessment include assessing the cyclomatic complexity or nesting levels of a component (which can be done by a static analysis tool), coverage (coverage measurement tool), system behavior (monitoring tools) and incident statistics (test management tool).
Ease of access to information is illustrated by statistics and graphs about test progress (test execution or test management tool), incident rates (incident management or test management tool) and performance (performance testing tool).
2.2 Risks of using tools
There are many risks that are present when tool support for testing is introduced and used, whatever the specific type of tool. Risks include:
• unrealistic expectations for the tool;
• underestimating the time, cost and effort for the initial introduction of a tool;
• underestimating the time and effort needed to achieve significant and continuing benefits from the tool;
• underestimating the effort required to maintain the test assets generated by the tool;
• over-reliance on the tool.
Unrealistic expectations may be one of the greatest risks to success with tools.
This list of risks is not exhaustive. Two other important factors are:
• The skill needed to create good tests;
• The skill needed to use the tools well, depending on the type of tool.
2.3 Special considerations for some types of tools
Test execution tools
There are tools that can generate scripts by identifying what is on the screen rather than by capturing a manual test, but they still generate scripts to be used in execution; they are not script-free.
There are different levels of scripting. Five are described below:
• linear scripts (which could be created manually or captured by recording a manual test);
• structured scripts (using selection and iteration programming structures);
• shared scripts (where a script can be called by other scripts, so can be re-used; shared scripts also require a formal script library under configuration management);
• data-driven scripts (where test data is in a file or spreadsheet to be read by a control script);
• keyword-driven scripts (where all of the information about the test is stored in a file or spreadsheet, with a number of control scripts that implement the tests described in the file).
A captured test (a linear script) is not a good solution, for a number of reasons, including:
• The script doesn't know what the expected result is until you program it in - it only stores inputs that have been recorded, not test cases.
• A small change to the software may invalidate dozens or hundreds of scripts.
• The recorded script can only cope with exactly the same conditions as when it was recorded.
• Unexpected events (e.g. a file that already exists) will not be interpreted correctly by the tool.
However, an audit trail of captured tests can be very useful if a failure occurs which cannot easily be reproduced: the recording of the specific failure can be played back to the developer to see exactly what sequence caused the problem.
Data-driven scripts allow the data, i.e. the test inputs and expected out-comes, to be stored separately from the script. This is particularly useful when there are a large number of data values that need to be tested using the same control script.
Keyword-driven scripts include not just data but also keywords in the data file or spreadsheet. This enables a tester to devise a great variety of tests. Keywords can deal with both test inputs and expected outcomes.
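A keyword-driven approach can be sketched in a few lines: each row of the test table holds a keyword plus its data, and a control script maps keywords to actions. The keywords and actions below are invented examples, and the table would normally be read from a spreadsheet or file:

```python
# Sketch: a tiny keyword-driven interpreter. Each row of the test
# table holds a keyword plus its data; a control script maps keywords
# to actions. Keywords and actions here are invented examples.
def do_login(user, password):
    print(f"logging in as {user}")

def do_enter(field, value):
    print(f"entering {value!r} into {field}")

def do_check(field, expected):
    print(f"checking that {field} shows {expected!r}")

ACTIONS = {"login": do_login, "enter": do_enter, "check": do_check}

test_table = [                     # normally read from a spreadsheet
    ("login", "alice", "secret"),
    ("enter", "amount", "100.00"),
    ("check", "balance", "900.00"),
]

for keyword, *args in test_table:
    ACTIONS[keyword](*args)
```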
Performance testing tools
In performance testing, we are not normally concerned so much with functional correctness, but with non-functional quality characteristics. When using a performance testing tool we are looking at the transaction throughput, the degree of accuracy of a given computation, the computer resources being used for a given level of transactions, the time taken for certain transactions or the number of users that can use the system at once.
There are particular issues with performance-testing tools, including:
• The design of the load to be generated by the tool (e.g. random input or according to user profiles);
• Timing aspects (e.g. inserting delays to make simulated user input more realistic);
• The length of the test and what to do if a test stops prematurely;
• Narrowing down the location of a bottleneck;
• Exactly what aspects to measure (e.g. user interaction level or server level);
• How to present the information gathered.
Static analysis tools
Static analysis tools are very useful to developers, as they can identify potential problems in code before the code is executed and they can also help to check that the code is written to coding standards.
The aim of using a static analysis tool is to produce code that will be easier to maintain in the future, so it would be a good idea to apply higher standards to new code that is still being tested, before it is released into use, but to allow older code to be checked less stringently. There is still a risk that changes made to conform to the new standard will introduce an unexpected side-effect, but there is a much greater likelihood that it will be found in testing, and there is time to fix it before the system is released.
Test management tools
A report produced by a test management tool (either directly or indirectly through another tool or spreadsheet) may be a very useful report at the moment, but may not be useful in three or six months. It is important to monitor the information produced to ensure it is the most relevant now.
It is important to have a defined test process before test management tools are introduced. If the testing process is working well manually, then a test management tool can help to support the process and make it more efficient. The best approach is to define your own processes, taking into account the tool you will be using, and then adapt the tool to provide the greatest benefit to your organization.
3. INTRODUCING A TOOL INTO AN ORGANIZATION
3.1 Main principles
The tool should help to build on the strengths of the organization and address its weaknesses. The organization needs to be ready for the changes that will come with the new tool.
The following factors are important in selecting a tool:
• Assessment of the organization's maturity (e.g. readiness for change);
• Identification of the areas within the organization where tool support will help to improve testing processes;
• Evaluation of tools against clear requirements and objective criteria;
• Proof-of-concept to see whether the product works as desired and meets the requirements and objectives defined for it;
• Evaluation of the vendor (training, support and other commercial aspects) or open-source network of support;
• Identifying and planning internal implementation (including coaching and mentoring for those new to the use of the tool).
3.2 Pilot project
One of the ways to do a proof-of-concept is to have a pilot project as the first thing done with a new tool. This will use the tool in earnest but on a small scale, with sufficient time to explore different ways of using the tool. Objectives should be set for the pilot in order to assess whether or not the concept is proven, i.e. that the tool can accomplish what is needed within the current organizational context.
The objectives for a pilot project for a new tool are:
• To learn more about the tool (more detail, more depth);
• To see how the tool would fit with existing processes or documentation, how those would need to change to work well with the tool, and how to use the tool to streamline existing processes;
• To decide on standard ways of using the tool that will work for all potential users (e.g. naming conventions, creation of libraries, defining modularity, where different elements will be stored, how they and the tool itself will be maintained);
• To evaluate the pilot project against its objectives (have the benefits been achieved at reasonable cost?).
3.3 Success factors
Success is not guaranteed or automatic when implementing a testing tool, but many organizations have succeeded. Here are some of the factors that have contributed to success:
• incremental roll-out (after the pilot) to the rest of the organization;
• adapting and improving processes, testware and tool artifacts to get the best fit and balance between them and the use of the tool;
• providing adequate training, coaching and mentoring of new users;
• defining and communicating guidelines for the use of the tool, based on what was learned in the pilot;
• implementing a continuous improvement mechanism as tool use spreads through more of the organization;
• monitoring the use of the tool and the benefits achieved, and adapting the use of the tool to take account of what is learned.
The terms 'error' and 'mistake' mean the same thing. Human beings, programmers and testers included, can make errors, and these errors may produce defects in the software code or system, or in a document. When the software code has been built and is executed, any defects in it may cause the system to fail to do what it should do, causing a failure.
The US Federal Aviation Administration's DO-178B standard [RTCA/DO-178B] has requirements for test coverage.
Root cause analysis:
When we detect failures, we might try to track them back to their root cause, the real reason that they happened. Techniques that can be used for root cause analysis include those described by Evans, TQMI and Robson, as well as group brainstorming. We do this to help reduce the risk of failures occurring in an operational environment. Understanding the root causes of defects is an important aspect of quality assurance activities.