Tuesday, May 22, 2012


Chapter 5 Test management

1. TEST ORGANIZATION

1.1 Independent and integrated testing

Testing is an assessment of quality, and since that assessment is not always positive, many organizations strive to create an organizational climate where testers can deliver an independent, objective assessment of quality.

Moving toward independence, you find an integrated tester or group of testers working alongside the programmers, but still within and reporting to the development manager. You might find a team of testers who are independent and outside the development team, but reporting to project management.

On the other hand, a separate test team may report into the organization at a point equal to the development or project team. You might find specialists in the business domain (such as users of the system), specialists in technology (such as database experts), and specialists in testing (such as security testers, certification testers, or test automation experts) in a separate test team, as part of a larger independent test team, or as part of a contract, outsourced test team.

An independent tester can often see more, other, and different defects than a tester working within a programming team.

An independent tester brings a different set of assumptions to testing and to reviews, which often helps expose hidden defects and problems related to the group's way of thinking.

An independent tester who reports to senior management can report his results honestly and without concern for reprisals that might result from pointing out problems in coworkers' or, worse yet, the manager's work.

An independent test team often has a separate budget, which helps ensure the proper level of money is spent on tester training, testing tools and test equipment.

1.2 Working as a test leader

Test leaders tend to be involved in the planning, monitoring, and control of the testing activities and tasks.
They lead, guide and monitor the analysis, design, implementation and execution of the test cases, test procedures and test suites. They ensure proper configuration management of the testware produced and traceability of the tests to the test basis.
Sometimes test leaders wear different titles, such as test manager or test coordinator. Alternatively, the test leader role may wind up assigned to a project manager, a development manager or a quality assurance manager.
In short, the test leader's main job is to plan, monitor and control the testing work.

1.3 Working as a tester

In the planning and preparation phases of testing, testers should review and contribute to test plans, as well as analyze, review and assess requirements and design specifications.
They may be identifying test conditions and creating test designs, test cases, test procedure specifications and test data, and may automate or help to automate the tests.

2. TEST PLANS, ESTIMATES AND STRATEGIES

2.1 The purpose and substance of test plans

A test plan is the project plan for the testing work to be done. It is not a test design specification, a collection of test cases or a set of test procedures; in fact, most test plans do not address that level of detail.

The test plan also helps us manage change. During early phases of the project, as we gather more information, we revise our plans. As the project evolves and situations change, we adapt our plans. Written test plans give us a baseline against which to measure such revisions and changes.

For some systems projects, a hardware test plan and a software test plan will address different techniques and tools as well as different audiences. However, since there might be overlap between these test plans, a master test plan that addresses the common elements can reduce the amount of redundant documentation.

2.2 What to do with your brain while planning tests

Writing a good test plan is easier than writing a novel, but both require that you understand your purpose before you begin. In terms of the specific project, understanding the purpose of testing means knowing the answers to questions such as:

• What is in scope and what is out of scope for this testing effort?
• What are the test objectives?
• What are the important project and product risks?
• What constraints affect testing (e.g., budget limitations, hard deadlines, etc.)?
• What is most critical for this product and project?
• Which aspects of the product are more (or less) testable?
• What should be the overall test execution schedule and how should we decide the order in which to run specific tests?

You should then select strategies which are appropriate to the purpose of testing. In addition, you need to decide how to split the testing work into various levels (e.g., component, integration, system and acceptance).

Finally, moving back up to a higher level, think about 'entry criteria' and 'exit criteria'. For such criteria, typical factors are: acquisition and supply, test items, defects, tests, coverage, quality, money and risk.

When writing exit criteria, we try to remember that a successful project is a balance of quality, budget, schedule and feature considerations. This is even more important when applying exit criteria at the end of the project.
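To make this concrete, here is a minimal sketch in Python of how such exit criteria might be checked mechanically at the end of a test level. The criteria names, metric names and thresholds are hypothetical illustrations, not values prescribed by the syllabus:

# A minimal sketch of mechanically checking hypothetical exit criteria.
def unmet_exit_criteria(metrics: dict) -> list:
    """Return the names of exit criteria that are NOT yet satisfied."""
    criteria = {
        "all planned tests run": metrics["tests_run"] >= metrics["tests_planned"],
        "full requirements coverage": metrics["reqs_covered"] >= metrics["reqs_total"],
        "no open critical defects": metrics["open_critical_defects"] == 0,
        "budget not exhausted": metrics["spent"] <= metrics["budget"],
    }
    return [name for name, satisfied in criteria.items() if not satisfied]

print(unmet_exit_criteria({
    "tests_run": 480, "tests_planned": 500,
    "reqs_covered": 95, "reqs_total": 100,
    "open_critical_defects": 2,
    "spent": 40_000, "budget": 50_000,
}))
# -> ['all planned tests run', 'full requirements coverage', 'no open critical defects']

A report like this also supports the balancing act mentioned above: an unmet criterion is not automatically a blocker, but it forces an explicit quality-versus-schedule decision.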


2.3 Estimating what testing will involve and what it will cost

Starting at the highest level, we can break down a testing project into phases using the fundamental test process: planning and control; analysis and design; implementation and execution; evaluating exit criteria and reporting; and test closure. Within each phase we identify activities, and within each activity we identify tasks and perhaps subtasks. To identify the activities and tasks, we work both forward and backward. When we say we work forward, we mean that we start with the planning activities and then move forward in time step by step. Working backward means that we consider the risks that we identified during risk analysis; for those risks which we intend to address through testing, we identify the activities and tasks needed to do so.

When you are creating your work-breakdown structure, remember that you will want to use it for both estimation (at the beginning) and monitoring and control (as the project continues). To ensure accuracy of the estimate and precise control, make sure that you subdivide the work finely enough.

2.4 Estimation techniques

There are two techniques for estimation covered by the ISTQB Foundation Syllabus. One involves consulting the people who will do the work and other people with expertise on the tasks to be done. The other involves analyzing metrics from past projects and from industry data.
Asking the individual contributors and experts involves working with experienced staff members to develop a work-breakdown structure for the project.

Using a tool such as Microsoft Project or a whiteboard and sticky-notes, you and the team can then predict the testing end-date and major milestones.
This technique is often called 'bottom up' estimation because you start at the lowest level of the hierarchical breakdown in the work-breakdown structure - the task - and let the duration, effort, dependencies and resources for each task add up across all the tasks.
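To make the bottom-up arithmetic concrete, here is a minimal Python sketch. The phases follow the fundamental test process; the task names and person-hour figures are invented for the example:

# Bottom-up estimation sketch: effort rolls up from the lowest-level
# tasks of a hypothetical work-breakdown structure.
wbs = {
    "analysis and design": {"identify test conditions": 16, "design test cases": 40},
    "implementation and execution": {"create test data": 24, "execute test suites": 80},
    "evaluation and reporting": {"assess exit criteria": 8, "write summary report": 8},
}

total = 0
for phase, tasks in wbs.items():
    phase_effort = sum(tasks.values())  # person-hours add up task by task
    total += phase_effort
    print(f"{phase}: {phase_effort} person-hours")
print(f"estimated total: {total} person-hours")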

2.5 Factors affecting test effort

Testing is a complex endeavor on many projects and a variety of factors can influence it.
The test strategies or approaches you pick will have a major influence on the testing effort.
Product factors start with the presence of sufficient project documentation so that the testers can figure out what the system is, how it is supposed to work and what correct behavior looks like.
Complexity is another major product factor. Examples of complexity considerations include:

• The difficulty of comprehending and correctly handling the problem the system is being built to solve;
• The use of innovative technologies, especially those long on hyperbole and short on proven track records;
• The need for intricate and perhaps multiple test configurations, especially when these rely on the timely arrival of scarce software, hardware and other supplies;
• The prevalence of stringent security rules, strictly regimented processes or other regulations;
• The geographical distribution of the team, especially if the team crosses time-zones (as many outsourcing efforts do).

Process factors include the availability of test tools, especially those that reduce the effort associated with test execution, which is on the critical path for release. On the development side, debugging tools and a dedicated debugging environment also reduce the time required to complete testing.

2.6 Test approaches or strategies

The choice of test approaches or strategies is one powerful factor in the success of the test effort and the accuracy of the test plans and estimates. This factor is under the control of the testers and test leaders. The major types of test strategies commonly found are the following:

Analytical: For example, the risk-based strategy involves performing a risk analysis using project documents and stakeholder input, then planning, estimating, designing, and prioritizing the tests based on risk. Another analytical test strategy is the requirements-based strategy, where an analysis of the requirements specification forms the basis for planning, estimating and designing tests. Analytical test strategies have in common the use of some formal or informal analytical technique, usually during the requirements and design stages of the project.

Model-based: Model-based test strategies have in common the creation or selection of some formal or informal model for critical system behaviors, usually during the requirements and design stages of the project.
Methodical: Methodical test strategies have in common adherence to a pre-planned, systematized approach that has been developed in-house, assembled from various concepts developed in-house and gathered from outside, or adapted significantly from outside ideas. They may have an early or late point of involvement for testing.

Process- or standard-compliant: Process- or standard-compliant strategies have in common reliance on an externally developed approach to testing, such as a standard or a documented methodology, often with little or no customization.

Dynamic: Dynamic strategies, such as exploratory testing, have in common a concentration on finding as many defects as possible during test execution and adapting to the realities of the system under test as it is when delivered. They typically emphasize the later stages of testing.

Consultative or directed: Consultative or directed strategies have in common reliance on a group of non-testers to guide or perform the testing effort, and typically emphasize the later stages of testing, simply due to the lack of recognition of the value of early testing.

Regression-averse: Regression-averse strategies have in common a set of procedures - usually automated - that allow them to detect regression defects. A regression-averse strategy may involve automating functional tests prior to release of the function, in which case it requires early testing, but sometimes the testing is almost entirely focused on testing functions that already have been released, which is in some sense a form of post- release test involvement.
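As a small illustration of the regression-averse idea, here is a minimal sketch using Python's unittest module. The function calculate_discount and its expected values are hypothetical stand-ins for a released function; the point is that the checks are automated once and then re-run against every later build:

# Regression-averse sketch: automated checks that capture agreed behavior
# so later builds can be tested for regressions.
import unittest

def calculate_discount(order_total: float) -> float:
    """Hypothetical stand-in for the released function under test."""
    return round(order_total * 0.10, 2) if order_total >= 100 else 0.0

class DiscountRegressionTests(unittest.TestCase):
    def test_discount_applies_at_threshold(self):
        self.assertEqual(calculate_discount(100.0), 10.0)

    def test_no_discount_below_threshold(self):
        self.assertEqual(calculate_discount(99.99), 0.0)

if __name__ == "__main__":
    unittest.main()  # re-run against every new build to detect regressions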

How do you know which strategies to pick or blend for the best chance of success? There are many factors to consider, but let us highlight a few of the most important: Risks, Skills, Objectives, Regulations, Product, and Business.





3. TEST PROGRESS MONITORING AND CONTROL

3.1 Monitoring the progress of test activities

Test monitoring can serve various purposes during the project, including the following:

• Give the test team and the test manager feedback on how the testing work is going, allowing opportunities to guide and improve the testing and the project;
• Provide the project team with visibility about the test results;
• Measure the status of the testing, test coverage and test items against the exit criteria to determine whether the test work is done;
• Gather data for use in estimating future test efforts.

Especially for small projects, the test leader or a delegated person can gather test progress monitoring information manually using documents, spreadsheets and simple databases. When working with large teams, distributed projects and long-term test efforts, we find that the efficiency and consistency of data collection is aided by the use of automated tools.

One way to gather test progress information is to use the IEEE 829 test log template. While much of the information related to logging events can be usefully captured in a document, we prefer to capture the test-by-test information in spreadsheets.

Common metrics for test progress monitoring include:
• The extent of completion of test environment preparation;
• The extent of test coverage achieved, measured against requirements, risks, code, configurations or other areas of interest;
• The status of the testing (including analysis, design and implementation) compared to various test milestones;
• The economics of testing, such as the costs and benefits of continuing test execution in terms of finding the next defect or running the next test.
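As a small illustration, the following Python sketch computes two of these metrics from a test log. The log format is hypothetical, standing in for data that would normally come from a spreadsheet or a test management tool:

# Progress-monitoring sketch: derive execution progress and pass rate
# from a simple per-test log.
test_log = [
    {"id": "T1", "status": "passed"},
    {"id": "T2", "status": "failed"},
    {"id": "T3", "status": "passed"},
    {"id": "T4", "status": "not run"},
]

planned = len(test_log)
run = sum(1 for t in test_log if t["status"] in ("passed", "failed"))
passed = sum(1 for t in test_log if t["status"] == "passed")

print(f"execution progress: {run}/{planned} ({100 * run / planned:.0f}%)")
print(f"pass rate so far:   {passed}/{run} ({100 * passed / run:.0f}%)")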

As a complementary monitoring technique, you might assess the subjective level of confidence the testers have in the test items. However, avoid making important decisions based on subjective assessments alone, as people's impressions have a way of being inaccurate and colored by bias.

3.2 Reporting test status

Test progress monitoring is about gathering detailed test data; reporting test status is about effectively communicating our findings to other project stakeholders. Test status reports typically present variations or summaries of the metrics used for test progress monitoring.

The specific data you'll want to gather will depend on your specific reports, but common considerations include the following:

• How will you assess the adequacy of the test objectives for a given test level and whether those objectives were achieved?
• How will you assess the adequacy of the test approaches taken and whether they support the achievement of the project's testing goals?
• How will you assess the effectiveness of the testing with respect to these objectives and approaches?
If you are doing risk-based testing, one main test objective is to subject the important product risks to the appropriate extent of testing.
If you are doing requirements-based testing, you could measure coverage in terms of requirements or functional areas instead of risks.
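Either way, a traceability mapping from risks or requirements to tests lets you derive coverage figures for the status report directly. Here is a minimal Python sketch with hypothetical requirement IDs and test results:

# Coverage-reporting sketch: trace each requirement (or risk item) to its
# tests and summarize the status per item.
traceability = {
    "REQ-01": ["T1", "T2"],
    "REQ-02": ["T3"],
    "REQ-03": [],  # not yet covered by any test
}
results = {"T1": "passed", "T2": "failed", "T3": "passed"}

for req, tests in traceability.items():
    if not tests:
        print(f"{req}: NOT COVERED")
    elif all(results.get(t) == "passed" for t in tests):
        print(f"{req}: covered, all tests passed")
    else:
        print(f"{req}: covered, failures or tests outstanding")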

3.3 Test control

Testing can be delayed when the test items show up late or the test environment is unavailable. Test control is about guiding and corrective actions to try to achieve the best possible outcome for the project.

The specific corrective or guiding actions depend, of course, on what we are trying to control. Consider the following hypothetical examples:

  • A portion of the software under test will be delivered late, after the planned test start date. Market conditions dictate that we cannot change the release date. Test control might involve re-prioritizing the tests so that we start testing against what is available now.
  • For cost reasons, performance testing is normally run on weekday evenings, during off-hours, in the production environment. If the production environment becomes unavailable during those hours, test control might involve rescheduling the performance tests, for example to the weekend.

4. CONFIGURATION MANAGEMENT

Configuration management has a number of important implications for testing. For one thing, it allows the testers to manage their testware and test results using the same configuration management mechanisms, as if they were as valuable as the source code and documentation for the system itself - which of course they are.

For another thing, configuration management supports the build process, which is essential for delivery of a test release into the test environment. Release notes accompanying a test release are not always formal and do not always contain all the relevant information.

During the project planning stage - and perhaps as part of your own test plan - make sure that configuration management procedures and tools are selected. As the project proceeds, the configuration process and mechanisms must be implemented, and the key interfaces to the rest of the development process should be documented. Come test execution time, this will allow you and the rest of the project team to avoid nasty surprises like testing the wrong software, receiving uninstallable builds and reporting irreproducible defects against versions of code that don't exist anywhere but in the test environment.


5. RISK AND TESTING

5.1 Risks and levels of risk

Risk is a possibility of a negative or undesirable outcome. It is a possibility, not a certainty. We can classify risks into project risks (factors relating to the way the work is carried out, i.e. the test project) and product risks (factors relating to what is produced by the work, i.e. the thing we are testing).

5.2 Product risks

We can think of a product risk as the possibility that the system or software might fail to satisfy some reasonable customer, user, or stakeholder expectation.

Unsatisfactory software might omit some key function that the customers specified, the users required or the stakeholders were promised. Unsatisfactory software might be unreliable and frequently fail to behave normally.

Unsatisfactory software might fail in ways that cause financial or other damage to a user or the company that user works for.

Unsatisfactory software might have problems related to a particular quality characteristic, which might not be functionality, but rather security, reliability, usability, maintainability or performance.

Risk-based testing is the idea that we can organize our testing efforts in a way that reduces the residual level of product risk when the system ships. Risk-based testing uses risk to prioritize and emphasize the appropriate tests during test execution.

Risk-based testing starts early in the project, identifying risks to system quality and using that knowledge of risk to guide test planning, specification, preparation and execution. Risk-based testing involves both mitigation - testing to provide opportunities to reduce the likelihood of defects, especially high-impact defects - and contingency - testing to identify workarounds to make the defects that do get past us less painful.

Risk-based testing also involves measuring how well we are doing at finding and removing defects in critical areas. Risk-based testing can also involve using risk analysis to identify proactive opportunities to remove or prevent defects through non-testing activities and to help us select which test activities to perform.

Risk-based testing starts with product risk analysis. One technique for risk analysis is a close reading of the requirements specification, design specifications, user documentation and other items.

Another technique is brainstorming with many of the project stakeholders.
Another is a sequence of one-on-one or small-group sessions with the business and technology experts in the company.

A team-based approach that involves the key stakeholders and experts is preferable to a purely document-based approach, as team approaches draw on the knowledge, wisdom and insight of the entire team to determine what to test and how much.
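However the analysis is run, its output is typically a prioritized list of risk items. As a minimal sketch, the following Python snippet scores each risk item by likelihood and impact and sorts by the product of the two, which can then drive the order and the extent of testing. The scales and risk items here are hypothetical:

# Product risk analysis sketch: risk level = likelihood x impact.
risk_items = [
    # (risk item, likelihood 1-5, impact 1-5)
    ("payment calculation wrong", 3, 5),
    ("report layout misaligned",  4, 2),
    ("login fails under load",    2, 4),
]

for item, likelihood, impact in sorted(
        risk_items, key=lambda r: r[1] * r[2], reverse=True):
    score = likelihood * impact
    print(f"priority {score:>2}: {item}")  # highest scores get tested first, and most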

5.3 Project risks

Since a risk is the possibility of a negative outcome, what project risks affect testing? There are direct risks, such as the late delivery of the test items to the test team or availability issues with the test environment. There are also indirect risks, such as excessive delays in repairing defects found in testing or problems with getting professional system administration support for the test environment.

Checklists and examples can help you identify test project risks. For any risk, product or project, you have four typical options:

Mitigate: Take steps in advance to reduce the likelihood (and possibly the impact) of the risk.
Contingency: Have a plan in place to reduce the impact should the risk become an outcome.
Transfer: Convince some other member of the team or project stakeholder to reduce the likelihood or accept the impact of the risk.
Ignore: Do nothing about the risk, which is usually a smart option only when there's little that can be done or when the likelihood and impact are low.


5.4 Tying it all together for risk management

We can deal with test-related risks to the project and product by applying some straightforward, structured risk management techniques. The first step is to assess or analyze risks early in the project, so that you can consider and manage them throughout the planning phase.


6. INCIDENT MANAGEMENT

6.1 What are incident reports for and how do I write good ones?

When running a test, you might observe actual results that vary from expected results. Different organizations have different names to describe such situations. Commonly, they're called incidents, bugs, defects, problems or issues.

An incident is any situation where the system exhibits questionable behavior, but often we refer to an incident as a defect only when the root cause is some problem in the item we're testing.
Other causes of incidents include misconfiguration or failure of the test environment, corrupted test data, bad tests, invalid expected results and tester mistakes. We can also log, report, track and manage incidents found during development and reviews.

While many of these incidents will be user error or some other behavior not related to a defect, some percentage of defects does escape from quality assurance and testing activities. The defect detection percentage (DDP), which compares field defects with test defects, is an important metric of the effectiveness of the test process.
Here is an example of a DDP formula for calculating DDP for the last level of testing prior to release to the field:

DDP = defects(testers) / (defects(testers) + defects(field))
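In code form the calculation is trivial; here is a small Python version with invented defect counts:

# DDP sketch: a DDP of 90% means testing found 90% of the defects that
# could have been found at that level before release.
def defect_detection_percentage(tester_defects: int, field_defects: int) -> float:
    return tester_defects / (tester_defects + field_defects)

# e.g. 90 defects found in the last test level, 10 more reported from the field:
print(f"DDP = {defect_detection_percentage(90, 10):.0%}")  # -> DDP = 90%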

A good incident report is a technical document; since reviews are proven quality assurance techniques and incident reports are important project deliverables, incident reports themselves benefit from review.

6.2 What goes in an incident report?

An incident report describes some situation, behavior or event that occurred during testing that requires further investigation. In many cases, an incident report consists of one or two screens' worth of information gathered by a defect-tracking tool and stored in a database.

After the defect has been resolved, managers, programmers or others may want to capture conclusions and recommendations. Throughout the life cycle of the incident report, from discovery to resolution, the defect-tracking system should allow each person who works on the incident report to enter status and history information.

6.3 What happens to incident reports after you file them?

Incident reports are managed through a life cycle from discovery to resolution. The incident report life cycle is often shown as a state transition diagram (see Figure 5.3).
In the incident report life cycle shown in Figure 5.3, all incident reports move through a series of clearly identified states after being reported. Some of these state transitions occur when a member of the project team completes some assigned task related to closing an incident report. Some of these state transitions occur when the project team decides not to repair a defect during this project, leading to the deferral of the incident report. Some of these state transitions occur when an incident report is poorly written or describes behavior which is actually correct, leading to the rejection of that report.

Let's focus on the path taken by incident reports which are ultimately fixed. After an incident is reported, a peer tester or test manager reviews the report. If successful in the review, the incident report becomes opened, so now the project team must decide whether or not to repair the defect. If the defect is to be repaired, a programmer is assigned to repair it.

Once the programmer believes the repairs are complete, the incident report returns to the tester for confirmation testing. If the confirmation test fails, the incident report is re-opened and then re-assigned. Once the tester confirms a good repair, the incident report is closed. No further work remains to be done.

In any state other than rejected, deferred or closed, further work is required on the incident prior to the end of this project. In such a state, the incident report has a clearly identified owner. The owner is responsible for transitioning the incident into an allowed subsequent state. The arrows in the diagram show these allowed transitions.

In a rejected, deferred or closed state, the incident report will not be assigned to an owner. However, certain real-world events can cause an incident report to change state even if no active work is occurring on the incident report. Examples include the recurrence of a failure associated with a closed incident report and the discovery of a more serious failure associated with a deferred incident report.

Ideally, only the owner can transition the incident report from the current state to the next state and ideally the owner can only transition the incident report to an allowed next state. Most defect-tracking systems support and enforce the life cycle and life cycle rules. Good defect-tracking systems allow you to customize the set of states, the owners, and the transitions allowed to match your actual workflows. And, while a good defect-tracking system is helpful, the actual defect workflow should be monitored and supported by project and company management.
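The life cycle rules lend themselves naturally to a simple state machine. The following Python sketch approximates the states and transitions described above; the exact set in Figure 5.3 may differ slightly:

# Incident report life cycle sketch: allowed transitions per state.
ALLOWED_TRANSITIONS = {
    "reported":  {"opened", "rejected"},
    "opened":    {"assigned", "deferred", "rejected"},
    "assigned":  {"fixed"},
    "fixed":     {"closed", "re-opened"},  # confirmation test passes or fails
    "re-opened": {"assigned"},
    "rejected":  set(),
    "deferred":  {"re-opened"},            # e.g. a more serious failure is found
    "closed":    {"re-opened"},            # e.g. the failure recurs in the field
}

def transition(report: dict, new_state: str) -> None:
    current = report["state"]
    if new_state not in ALLOWED_TRANSITIONS[current]:
        raise ValueError(f"illegal transition: {current} -> {new_state}")
    report["state"] = new_state

bug = {"id": "IR-042", "state": "reported"}
for step in ("opened", "assigned", "fixed", "closed"):
    transition(bug, step)
print(bug)  # {'id': 'IR-042', 'state': 'closed'}

This mirrors what a good defect-tracking system enforces: only an allowed next state can be entered, and any other attempt is flagged rather than silently accepted.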

