Chapter 2. Testing throughout the software life cycle
Verification is concerned with evaluating a work product, component or system to determine whether it meets the specified requirements. Verification focuses on the question 'Is the deliverable built according to the specification?'
Validation is concerned with evaluating a work product, component or system to determine whether it meets the user needs and requirements. Validation focuses on the question 'Is the deliverable fit for purpose, e.g. does it provide a solution to the problem?'
• Component testing: searches for defects in, and verifies the functioning of, software components (e.g. modules, programs, objects, classes, etc.) that are separately testable;
• Integration testing: tests interfaces between components, interactions with different parts of a system (such as the operating system, file system and hardware), or interfaces between systems;
• System testing: concerned with the behavior of the whole system/product as defined by the scope of a development project or product. The main focus of system testing is verification against specified requirements;
• Acceptance testing: validation testing with respect to user needs, requirements, and business processes conducted to determine whether or not to accept the system.
For the integration of a commercial off-the-shelf (COTS) software product into a system, a purchaser may perform only integration testing at the system level (e.g. integration to the infrastructure and other systems) and, at a later stage, acceptance testing.
Examples of iterative or incremental development models are prototyping, Rapid Application Development (RAD), Rational Unified Process (RUP) and agile development.
Component testing, also known as unit, module and program testing, searches for defects in, and verifies the functioning of software (e.g. modules, programs, objects, classes, etc.) that are separately testable.
• A stub is called from the software component to be tested;
• A driver calls the component to be tested (both are sketched below).
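To make the distinction concrete, here is a minimal Python sketch; the price calculation, the TariffServiceStub and the test function are invented for this illustration. The stub stands in for a dependency that the component under test calls, while the driver is the code that calls the component itself.

```python
# Component under test: depends on a tariff service that is not yet available.
def calculate_price(amount, tariff_service):
    """Multiplies an amount by the rate obtained from a tariff service."""
    rate = tariff_service.get_rate()
    return amount * rate

# Stub: called FROM the component under test; returns a canned answer.
class TariffServiceStub:
    def get_rate(self):
        return 1.2  # fixed, predictable value instead of the real lookup

# Driver: calls the component under test and checks the result.
def test_calculate_price():
    assert calculate_price(100, TariffServiceStub()) == 120.0

if __name__ == "__main__":
    test_calculate_price()
    print("component test passed")
```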
Component testing may include testing of non-functional characteristics, such as performance or robustness testing, as well as structural testing (e.g. decision coverage).
One approach in component testing, used in Extreme Programming (XP), is to prepare and automate test cases before coding. This is called a test-first approach or test-driven development. This approach is highly iterative and is based on cycles of developing test cases, then building and integrating small pieces of code, and executing the component tests until they pass.
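A minimal sketch of one such cycle, using an invented leap-year example: the test is written and run first (it fails because the code does not exist yet), then just enough code is written to make it pass, after which the cycle repeats.

```python
# Step 1: write the test first - it fails while is_leap_year does not exist.
def test_leap_year():
    assert is_leap_year(2000) is True   # divisible by 400
    assert is_leap_year(1900) is False  # divisible by 100 but not 400
    assert is_leap_year(2024) is True   # divisible by 4
    assert is_leap_year(2023) is False  # not divisible by 4

# Step 2: write just enough code to make the test pass.
def is_leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Step 3: run the test; when it passes, refactor and repeat the cycle.
test_leap_year()
```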
Integration testing tests interfaces between components, interactions with different parts of a system (such as the operating system, file system and hardware), or interfaces between systems. Note that integration testing should be differentiated from other integration activities. Integration testing is often carried out by the integrator, but preferably by a specific integration tester or test team.
• Component integration testing tests the interactions between software components and is done after component testing;
• System integration testing tests the interactions between different systems and may be done after system testing. In this case, the developing organization may control only one side of the interface, so changes may be destabilizing. Business processes implemented as workflows may involve a series of systems that can even run on different platforms.
All components or systems are integrated simultaneously, after which everything is tested as a whole. This is called 'big-bang' integration testing.
Big-bang testing has the advantage that everything is finished before integration testing starts. There is no need to simulate (as yet unfinished) parts.
Since integration testing will find defects, it is good practice to consider whether time might be saved by breaking down the integration test process. The other extreme is that all programs are integrated one by one, and a test is carried out after each step (incremental testing).
The incremental approach has the advantage that defects are found early, in a smaller assembly, when it is relatively easy to detect the cause.
A disadvantage is that it can be time-consuming since stubs and drivers have to be developed and used in the test. Within incremental integration testing a range of possibilities exist, partly depending on the system architecture:
• Top-down: testing takes place from top to bottom, following the control flow or architectural structure (e.g. starting from the GUI or main menu). Components or systems are substituted by stubs (see the sketch after this list);
• Bottom-up: testing takes place from the bottom of the control flow upwards. Components or systems are substituted by drivers.
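A minimal sketch of the top-down case, assuming invented OrderService and PaymentGateway components: the top-level component is tested first, with the not-yet-integrated lower-level component substituted by a stub created with Python's standard unittest.mock.

```python
from unittest.mock import Mock

# Lower-level component, not yet integrated in a top-down strategy.
class PaymentGateway:
    def charge(self, amount):
        raise NotImplementedError("real gateway not available yet")

# Top-level component under test.
class OrderService:
    def __init__(self, gateway):
        self.gateway = gateway

    def place_order(self, amount):
        return "confirmed" if self.gateway.charge(amount) else "rejected"

# Top-down integration test: the gateway is substituted by a stub.
def test_place_order_with_stubbed_gateway():
    stub_gateway = Mock()
    stub_gateway.charge.return_value = True  # canned response
    service = OrderService(stub_gateway)
    assert service.place_order(50) == "confirmed"

test_place_order_with_stubbed_gateway()
```

In the bottom-up case, PaymentGateway would instead be tested first through a driver, and OrderService integrated on top of it later.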
System testing is concerned with the behavior of the whole system/product as defined by the scope of a development project or product. System testing is most often the final test carried out on behalf of development; it verifies that the system to be delivered meets the specification, and its purpose may be to find as many defects as possible.
System testing should investigate both functional and non-functional requirements of the system.
System testing requires a controlled test environment with regard to, amongst other things, control of the software versions, testware and the test data.
A system test is executed by the development organization in a (properly controlled) environment. The test environment should correspond to the final target or production environment as much as possible, in order to minimize the risk of environment-specific failures not being found by testing.
The acceptance test should answer questions such as: 'Can the system be released?', 'What, if any, are the outstanding (business) risks?' and 'Has development met their obligations?'
Acceptance testing is most often the responsibility of the user or customer, although other stakeholders may be involved as well. The execution of the acceptance test requires a test environment that is, for most aspects, representative of the production environment ('as-if production').
The goal of acceptance testing is to establish confidence in the system, part of the system or specific non-functional characteristics, e.g. usability, of the system. Acceptance testing is most often focused on a validation type of testing, whereby we are trying to determine whether the system is fit for purpose. Finding defects should not be the main focus in acceptance testing. Although it assesses the system's readiness for deployment and use, it is not necessarily the final level of testing.
Acceptance testing of a new functional enhancement may come before system testing.
The user acceptance test focuses mainly on the functionality thereby validating the fitness-for-use of the system by the business user, while the operational acceptance test (also called production acceptance test) validates whether the system meets the requirements for operation. The user acceptance test is performed by the users and application managers. In most organizations, system administration will perform the operational acceptance test shortly before the system is released. The operational acceptance test may include testing of backup/restore, disaster recovery, maintenance tasks and periodic check of security vulnerabilities.
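As an illustration of one such operational acceptance check, the following sketch (the file paths and the checksum comparison are assumptions made for this example) verifies that a backup/restore round-trip leaves the data intact:

```python
import hashlib
import os
import shutil
import tempfile

def sha256_of(path):
    """Returns the SHA-256 digest of a file's contents."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# Create sample "production" data in a temporary directory.
workdir = tempfile.mkdtemp()
data_file = os.path.join(workdir, "data.db")
backup_file = os.path.join(workdir, "data.db.bak")
with open(data_file, "w") as f:
    f.write("customer records")

# Back up, simulate loss, restore, then verify integrity via checksums.
checksum_before = sha256_of(data_file)
shutil.copy2(data_file, backup_file)   # backup
os.remove(data_file)                   # simulated data loss
shutil.copy2(backup_file, data_file)   # restore
assert sha256_of(data_file) == checksum_before
print("backup/restore check passed")
```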
Other types of acceptance testing that exist are contract acceptance testing and compliance acceptance testing. Contract acceptance testing is performed against a contract's acceptance criteria for producing custom-developed software. Acceptance criteria should be formally defined when the contract is agreed.
Compliance acceptance testing, or regulation acceptance testing, is performed against the regulations which must be adhered to, such as governmental, legal or safety regulations.
Depending on its objectives, testing will be organized differently. For example, component testing aimed at performance would be quite different to component testing aimed at achieving decision coverage.
Functional testing considers the specified behavior and is often also referred to as black-box testing. The latter term is not entirely accurate, though, since black-box testing also includes non-functional testing.
Function (or functionality) testing can, based upon ISO 9126, be done focusing on suitability, interoperability, security, accuracy and compliance. For example, security testing investigates the functions (e.g. a firewall) relating to the detection of threats, such as viruses, from malicious outsiders.
The techniques used for functional testing are often specification-based, but experience-based techniques can also be used.
The International Organization for Standardization (ISO) has defined a set of quality characteristics [ISO/IEC 9126, 2001].
The ISO 9126 standard defines six quality characteristics and subdivides each quality characteristic into a number of sub-characteristics. The standard is gaining increasing recognition in the industry, enabling development, testing and their stakeholders to use a common terminology for quality characteristics and thereby for non-functional testing.
The characteristics and their sub-characteristics are, respectively:
• Functionality, which consists of five sub-characteristics: suitability, accuracy, security, interoperability and compliance; this characteristic deals with functional testing;
• Reliability, which is defined further into the sub-characteristics maturity (robustness), fault-tolerance, recoverability and compliance;
• Usability, which is divided into the sub-characteristics understandability, learnability, operability, attractiveness and compliance;
• Efficiency, which is divided into time behavior (performance), resource utilization and compliance;
• Maintainability, which consists of five sub-characteristics: analyzability, changeability, stability, testability and compliance;
• Portability, which also consists of five sub-characteristics: adaptability, installability, co-existence, replaceability and compliance.
Structural testing is often referred to as 'white-box' or 'glass-box' because we are interested in what is happening 'inside the box'.
At component level, and to a lesser extent at component integration testing, there is good tool support to measure code coverage. Coverage measurement tools assess the percentage of executable elements (e.g. statements or decision outcomes) that have been exercised (i.e. covered) by a test suite. If coverage is not 100%, then additional tests may need to be written and run to cover those parts that have not yet been exercised. This of course depends on the exit criteria.
The techniques used for structural testing are structure-based techniques, also referred to as white-box techniques. Control flow models are often used to support structural testing.
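As a sketch at component level, consider decision coverage of an invented grading function: each decision outcome, the branch taken and not taken, needs at least one test.

```python
# Component under test: one decision with two outcomes.
def classify(score):
    if score >= 60:  # both the True and the False outcome must be exercised
        return "pass"
    return "fail"

# Two tests together achieve 100% decision (branch) coverage.
def test_decision_true():
    assert classify(75) == "pass"   # covers the True outcome

def test_decision_false():
    assert classify(40) == "fail"   # covers the False outcome

test_decision_true()
test_decision_false()
```

With a tool such as coverage.py, this could be measured by running, for example, coverage run --branch -m pytest followed by coverage report.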
Confirmation testing (re-testing)
When a test fails and we determine that the cause of the failure is a software defect, the defect is reported, and we can expect a new version of the software that has had the defect fixed. In this case we will need to execute the test again to confirm that the defect has indeed been fixed. This is known as confirmation testing (also known as re-testing).
When making a fix, however, the developer may inadvertently introduce new defects elsewhere in the software. The way to detect these 'unexpected side-effects' of fixes is to do regression testing.
The purpose of regression testing is to verify that modifications in the software or the environment have not caused unintended adverse side effects and that the system still meets its requirements.
It is common for organizations to have what is usually called a regression test suite or regression test pack. This is a set of test cases that is specifically used for regression testing.
All of the test cases in a regression test suite would be executed every time a new version of the software is produced, and this makes them ideal candidates for automation. If the regression test suite is very large, it may be more appropriate to select a subset for execution.
Regression tests are executed whenever the software changes, either as a result of fixes or new or changed functionality. It is also a good idea to execute them when some aspect of the environment changes, for example when a new version of a database management system is introduced or a new version of a source code compiler is used.
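A sketch of how such a suite is often organized with pytest (the markers, function names and discount logic are invented for this example): every test carries a regression marker so the whole pack can be selected for each new version, while a smaller smoke subset can be run when time is short.

```python
import pytest

# System behavior under regression test (a stand-in for real functionality).
def apply_discount(price, percent):
    return round(price * (1 - percent / 100), 2)

@pytest.mark.regression
@pytest.mark.smoke
def test_discount_basic():
    assert apply_discount(100.0, 10) == 90.0

@pytest.mark.regression
def test_discount_rounding():
    assert apply_discount(19.99, 15) == 16.99
```

The full pack would then run with pytest -m regression and the reduced subset with pytest -m smoke; custom markers like these would also need to be registered in the pytest configuration to avoid warnings.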
Once deployed, a system is often in service for years or even decades. During this time, the system and its operational environment are often corrected, changed or extended. Testing that is executed during this life cycle phase is called 'maintenance testing'.
Note that maintenance testing is different from maintainability testing, which evaluates how easy it is to maintain the system.
A maintenance test process usually begins with the receipt of an application for a change or a release plan. The test manager will use this as a basis for producing a test plan. Note that reproducibility of tests is also important for maintenance testing.
A major and important activity within maintenance testing is impact analysis. During impact analysis, together with stakeholders, a decision is made on what parts of the system may be unintentionally affected and therefore need careful regression testing. Risk analysis will help to decide where to focus regression testing - it is unlikely that the team will have time to repeat all the existing tests.
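A minimal sketch of how the outcome of impact analysis might drive this selection (the component-to-test mapping is entirely hypothetical):

```python
# Hypothetical mapping from system components to the regression tests
# that exercise them, maintained as part of impact analysis.
TEST_MAP = {
    "billing": ["test_invoice_totals", "test_discount_rules"],
    "auth": ["test_login", "test_password_reset"],
    "reporting": ["test_monthly_summary"],
}

def select_regression_tests(changed_components):
    """Returns the regression tests for components flagged by impact analysis."""
    selected = []
    for component in changed_components:
        selected.extend(TEST_MAP.get(component, []))
    return selected

# A change touching billing and auth narrows the run to four tests.
print(select_regression_tests(["billing", "auth"]))
```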
As stated, maintenance testing is done on an existing operational system. It is triggered by modifications, migration, or retirement of the system. Modifications include planned enhancement changes (e.g. release-based), corrective and emergency changes, and changes of environment, such as planned operating system or database upgrades, or patches to newly exposed or discovered vulnerabilities of the operating system. Maintenance testing for migration (e.g. from one platform to another) should include operational testing of the new environment, as well as the changed software. Maintenance testing for the retirement of a system may include the testing of data migration or archiving, if long data-retention periods are required.
Planned modifications
The following types of planned modification may be identified:
• Perfective modifications (adapting software to the user's wishes, for instance by supplying new functions or enhancing performance);
• Adaptive modifications (adapting software to environmental changes such as new hardware, new systems software or new legislation);
• Corrective planned modifications (deferrable correction of defects).
The standard structured test approach is almost fully applicable to planned modifications. On average, planned modification represents over 90% of all maintenance work on systems.