Tuesday, April 26, 2011

Http status code 500 series

5xx Server Error
The server failed to fulfill an apparently valid request.
Response status codes beginning with the digit "5" indicate cases in which the server is aware that it has encountered an error or is otherwise incapable of performing the request. Except when responding to a HEAD request, the server should include an entity containing an explanation of the error situation, and indicate whether it is a temporary or permanent condition. Likewise, user agents should display any included entity to the user. These response codes are applicable to any request method.

500 Internal Server Error
A generic error message, given when no more specific message is suitable.

501 Not Implemented
The server either does not recognise the request method, or it lacks the ability to fulfill the request.

502 Bad Gateway
The server was acting as a gateway or proxy and received an invalid response from the upstream server.

503 Service Unavailable
The server is currently unavailable (because it is overloaded or down for maintenance). Generally, this is a temporary state.

504 Gateway Timeout
The server was acting as a gateway or proxy and did not receive a timely response from the upstream server.

505 HTTP Version Not Supported
The server does not support the HTTP protocol version used in the request.

506 Variant Also Negotiates (RFC 2295)
Transparent content negotiation for the request results in a circular reference.

507 Insufficient Storage (WebDAV) (RFC 4918)
The server is unable to store the representation needed to complete the request.

509 Bandwidth Limit Exceeded (Apache bw/limited extension)
This status code, while used by many servers, is not specified in any RFCs.

510 Not Extended (RFC 2774)
Further extensions to the request are required for the server to fulfill it.
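
As a practical illustration of how a client might react to the 5xx class, here is a minimal Python sketch. It assumes the third-party requests library and a made-up URL, and simply retries a GET a few times when a server-side error such as 502, 503 or 504 comes back (these are usually temporary conditions):

import time
import requests  # third-party HTTP client, assumed to be installed

def get_with_retry(url, attempts=3, delay=2):
    """Retry a GET request when the server answers with a 5xx status."""
    for attempt in range(attempts):
        response = requests.get(url)
        if response.status_code < 500:
            return response
        # 5xx: server-side problem, wait a little and try again
        time.sleep(delay)
    return response

response = get_with_retry("http://example.com/report")  # hypothetical URL
print(response.status_code, response.reason)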

Http status code 400 series

4xx Client Error
The 4xx class of status code is intended for cases in which the client seems to have erred. Except when responding to a HEAD request, the server should include an entity containing an explanation of the error situation, and whether it is a temporary or permanent condition. These status codes are applicable to any request method. User agents should display any included entity to the user. These are typically the most common error codes encountered while online.

400 Bad Request
The request cannot be fulfilled due to bad syntax.

401 Unauthorized
Similar to 403 Forbidden, but specifically for use when authentication is possible but has failed or not yet been provided. The response must include a WWW-Authenticate header field containing a challenge applicable to the requested resource. See Basic access authentication and Digest access authentication.
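
To show what the challenge looks like from the client side, here is a small sketch using the third-party requests library; the URL and the credentials are made up for illustration:

import requests  # third-party HTTP client, assumed installed

response = requests.get("http://example.com/private")        # hypothetical URL
if response.status_code == 401:
    # The challenge tells the client which authentication scheme to use
    print(response.headers.get("WWW-Authenticate"))
    # Retry with Basic access authentication (credentials are invented)
    response = requests.get("http://example.com/private",
                            auth=("user", "secret"))
print(response.status_code)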

402 Payment Required
Reserved for future use. The original intention was that this code might be used as part of some form of digital cash or micropayment scheme, but that has not happened, and this code is not usually used. As an example of its use, however, Apple's MobileMe service generates a 402 error ("httpStatusCode:402" in the Mac OS X Console log) if the MobileMe account is delinquent.

403 Forbidden
The request was a legal request, but the server is refusing to respond to it. Unlike a 401 Unauthorized response, authenticating will make no difference.

404 Not Found
The requested resource could not be found but may be available again in the future. Subsequent requests by the client are permissible.

405 Method Not Allowed
A request was made of a resource using a request method not supported by that resource; for example, using GET on a form which requires data to be presented via POST, or using PUT on a read-only resource.

406 Not Acceptable
The requested resource is only capable of generating content not acceptable according to the Accept headers sent in the request.

407 Proxy Authentication Required

408 Request Timeout
The server timed out waiting for the request. According to W3 HTTP specifications: "The client did not produce a request within the time that the server was prepared to wait. The client MAY repeat the request without modifications at any later time."

409 Conflict
Indicates that the request could not be processed because of conflict in the request, such as an edit conflict.

410 Gone
Indicates that the resource requested is no longer available and will not be available again. This should be used when a resource has been intentionally removed and the resource should be purged. Upon receiving a 410 status code, the client should not request the resource again in the future. Clients such as search engines should remove the resource from their indices. Most use cases do not require clients and search engines to purge the resource, and a "404 Not Found" may be used instead.

411 Length Required
The request did not specify the length of its content, which is required by the requested resource.

412 Precondition Failed
The server does not meet one of the preconditions that the requester put on the request.

413 Request Entity Too Large
The request is larger than the server is willing or able to process.

414 Request-URI Too Long
The URI provided was too long for the server to process.

415 Unsupported Media Type
The request entity has a media type which the server or resource does not support. For example, the client uploads an image as image/svg+xml, but the server requires that images use a different format.

416 Requested Range Not Satisfiable
The client has asked for a portion of the file, but the server cannot supply that portion. For example, if the client asked for a part of the file that lies beyond the end of the file.

417 Expectation Failed
The server cannot meet the requirements of the Expect request-header field.

418 I'm a teapot
This code was defined in 1998 as one of the traditional IETF April Fools' jokes, in RFC 2324, Hyper Text Coffee Pot Control Protocol, and is not expected to be implemented by actual HTTP servers.

422 Unprocessable Entity (WebDAV) (RFC 4918)
The request was well-formed but was unable to be followed due to semantic errors.

423 Locked (WebDAV) (RFC 4918)
The resource that is being accessed is locked.

424 Failed Dependency (WebDAV) (RFC 4918)
The request failed due to failure of a previous request (e.g. a PROPPATCH).

425 Unordered Collection (RFC 3648)
Defined in drafts of "WebDAV Advanced Collections Protocol", but not present in "Web Distributed Authoring and Versioning (WebDAV) Ordered Collections Protocol".

426 Upgrade Required (RFC 2817)
The client should switch to a different protocol such as TLS/1.0.

444 No Response
An Nginx HTTP server extension. The server returns no information to the client and closes the connection (useful as a deterrent for malware).

449 Retry With
A Microsoft extension. The request should be retried after performing the appropriate action.

450 Blocked by Windows Parental Controls
A Microsoft extension. This error is given when Windows Parental Controls are turned on and are blocking access to the given webpage.

499 Client Closed Request
An Nginx HTTP server extension. This code is used to log the case when the connection is closed by the client while the HTTP server is processing its request, making the server unable to send the HTTP header back.

Http status code 300 series

3xx Redirection
The client must take additional action to complete the request.
This class of status code indicates that further action needs to be taken by the user agent in order to fulfil the request. The action required may be carried out by the user agent without interaction with the user if and only if the method used in the second request is GET or HEAD. A user agent should not automatically redirect a request more than five times, since such redirections usually indicate an infinite loop.

300 Multiple Choices
Indicates multiple options for the resource that the client may follow. It, for instance, could be used to present different format options for video, list files with different extensions, or word sense disambiguation.

301 Moved Permanently
This and all future requests should be directed to the given URI.

302 Found
This is an example of industrial practice contradicting the standard. The HTTP/1.0 specification (RFC 1945) required the client to perform a temporary redirect (the original describing phrase was "Moved Temporarily"), but popular browsers implemented 302 with the functionality of a 303 See Other. Therefore, HTTP/1.1 added status codes 303 and 307 to distinguish between the two behaviours. However, the majority of Web applications and frameworks still use the 302 status code as if it were the 303.

303 See Other (since HTTP/1.1)
The response to the request can be found under another URI using a GET method. When received in response to a POST (or PUT/DELETE), it should be assumed that the server has received the data and the redirect should be issued with a separate GET message.

304 Not Modified
Indicates the resource has not been modified since last requested. Typically, the HTTP client provides a header like the If-Modified-Since header to provide a time against which to compare. Using this saves bandwidth and reprocessing on both the server and client, as only the header data must be sent and received in comparison to the entirety of the page being re-processed by the server, then sent again using more bandwidth of the server and client.
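
A short sketch of such a conditional request, assuming the third-party requests library and a hypothetical URL, would look like this:

import requests  # third-party HTTP client, assumed installed

url = "http://example.com/page.html"   # hypothetical URL
first = requests.get(url)
etag = first.headers.get("ETag")
last_modified = first.headers.get("Last-Modified")

# Revalidate the cached copy instead of downloading it again
headers = {}
if etag:
    headers["If-None-Match"] = etag
if last_modified:
    headers["If-Modified-Since"] = last_modified
second = requests.get(url, headers=headers)
if second.status_code == 304:
    print("Cached copy is still fresh; the body was not re-sent")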

305 Use Proxy (since HTTP/1.1)
Many HTTP clients (such as Mozilla and Internet Explorer) do not correctly handle responses with this status code, primarily for security reasons.

306 Switch Proxy
No longer used.

307 Temporary Redirect (since HTTP/1.1)
In this case, the request should be repeated with another URI; however, future requests can still use the original URI. In contrast to 303, the request method should not be changed when reissuing the original request. For instance, a POST request must be repeated using another POST request.

HTTP status code 200 series

2xx Success
This class of status codes indicates the action requested by the client was received, understood, accepted and processed successfully.

200 OK
Standard response for successful HTTP requests. The actual response will depend on the request method used. In a GET request, the response will contain an entity corresponding to the requested resource. In a POST request the response will contain an entity describing or containing the result of the action.

201 Created
The request has been fulfilled and resulted in a new resource being created.

202 Accepted
The request has been accepted for processing, but the processing has not been completed. The request might or might not eventually be acted upon, as it might be disallowed when processing actually takes place.

203 Non-Authoritative Information (since HTTP/1.1)
The server successfully processed the request, but is returning information that may be from another source.

204 No Content
The server successfully processed the request, but is not returning any content.

205 Reset Content
The server successfully processed the request, but is not returning any content. Unlike a 204 response, this response requires that the requester reset the document view.

206 Partial Content
The server is delivering only part of the resource due to a range header sent by the client. The range header is used by tools like wget to enable resuming of interrupted downloads, or split a download into multiple simultaneous streams.
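
For example, a client can ask for just the first kilobyte of a file with a Range header. This is only a sketch using the third-party requests library and a made-up URL:

import requests  # third-party HTTP client, assumed installed

url = "http://example.com/big-file.zip"   # hypothetical URL
# Ask only for bytes 0-1023; a server that honours ranges replies with 206
response = requests.get(url, headers={"Range": "bytes=0-1023"})
if response.status_code == 206:
    print("Partial content:", response.headers.get("Content-Range"))
    print("Received", len(response.content), "bytes")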

207 Multi-Status (WebDAV) (RFC 4918)
The message body that follows is an XML message and can contain a number of separate response codes, depending on how many sub-requests were made.

226 IM Used (RFC 3229)
The server has fulfilled a GET request for the resource, and the response is a representation of the result of one or more instance-manipulations applied to the current instance.

Http status code 100 series

The following is a list of HyperText Transfer Protocol (HTTP) response status codes. This includes codes from IETF internet standards as well as unstandardised RFCs, other specifications and some additional commonly used codes. The first digit of the status code specifies one of five classes of response; the bare minimum for an HTTP client is that it recognises these five classes. Microsoft IIS may use additional decimal sub-codes to provide more specific information, but these are not listed here. The phrases used are the standard examples, but any human-readable alternative can be provided. Unless otherwise stated, the status code is part of the HTTP/1.1 standard.

1xx Informational
Request received, continuing process.
This class of status code indicates a provisional response, consisting only of the Status-Line and optional headers, and is terminated by an empty line. Since HTTP/1.0 did not define any 1xx status codes, servers must not send a 1xx response to an HTTP/1.0 client except under experimental conditions.

100 Continue
This means that the server has received the request headers, and that the client should proceed to send the request body (in the case of a request for which a body needs to be sent; for example, a POST request). If the request body is large, sending it to a server when a request has already been rejected based upon inappropriate headers is inefficient. To have a server check if the request could be accepted based on the request's headers alone, a client must send Expect: 100-continue as a header in its initial request and check if a 100 Continue status code is received in response before continuing (or receive 417 Expectation Failed and not continue).
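
Most HTTP libraries hide this handshake, so the low-level sketch below uses a raw socket to make the exchange visible; the host and the /upload path are placeholders, not a real service:

import socket

# A low-level sketch of the Expect: 100-continue handshake; real clients
# normally leave this to their HTTP library.
host, body = "example.com", b"field=value"   # hypothetical host and body
with socket.create_connection((host, 80)) as sock:
    request_head = (
        "POST /upload HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        f"Content-Length: {len(body)}\r\n"
        "Expect: 100-continue\r\n"
        "\r\n"
    )
    sock.sendall(request_head.encode("ascii"))
    interim = sock.recv(1024).decode("ascii", "replace")
    if interim.startswith("HTTP/1.1 100"):
        sock.sendall(body)           # server is willing, so send the body
        print(sock.recv(1024).decode("ascii", "replace"))
    else:
        print("Server rejected the headers:", interim.splitlines()[0])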

101 Switching Protocols
This means the requester has asked the server to switch protocols and the server is acknowledging that it will do so.

102 Processing (WebDAV) (RFC 2518)
As a WebDAV request may contain many sub-requests involving file operations, it may take a long time to complete the request. This code indicates that the server has received and is processing the request, but no response is available yet. This prevents the client from timing out and assuming the request was lost.

122 Request-URI too long
This is a non-standard IE7-only code which means the URI is longer than the maximum of 2083 characters. (See code 414.)

Tuesday, April 19, 2011

Complete guide on testing web application

Before starting to test a web application, we should have in-depth knowledge of its key elements and an appropriate testing approach. While doing web-based testing, we have to concentrate on the areas of testing listed below.

Let's look at the web testing checklist first.
1) Functionality Testing
2) Usability testing
3) Interface testing
4) Compatibility testing
5) Performance testing
6) Security testing

1) Functionality Testing:
Test all the links in web pages, database connections, forms used in the web pages for submitting or getting information from the user, and cookies.
Check all the links:
Test the outgoing links from all the pages from specific domain under test.
Test all internal links.
Test links jumping on the same pages.
Test links used to send the email to admin or other users from web pages.
Test to check if there are any orphan pages.
Lastly in link checking, check for broken links in all above-mentioned links.
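
Link checking is easy to automate. The following Python sketch (it assumes the third-party requests library plus the standard html.parser module, and the start page URL is made up) collects every anchor on a page and reports those that answer with a 4xx or 5xx status:

from html.parser import HTMLParser
from urllib.parse import urljoin
import requests  # third-party HTTP client, assumed installed

class LinkCollector(HTMLParser):
    """Collect the href of every anchor tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

def find_broken_links(page_url):
    page = requests.get(page_url)
    collector = LinkCollector()
    collector.feed(page.text)
    broken = []
    for href in collector.links:
        target = urljoin(page_url, href)           # resolve relative links
        if not target.startswith("http"):          # skip mailto:, javascript:, etc.
            continue
        status = requests.head(target, allow_redirects=True).status_code
        if status >= 400:
            broken.append((target, status))
    return broken

print(find_broken_links("http://example.com/"))    # hypothetical start page
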
Test forms in all pages:
Forms are an integral part of any web site. Forms are used to get information from users and to keep interacting with them. So what should be checked on these forms?
First check all the validations on each field.
Check for the default values of fields.
Wrong inputs to the fields in the forms.
Options to create forms if any, form delete, view or modify the forms.
Let's take the example of the search engine project I am currently working on. In this project we have advertiser and affiliate sign-up steps. Each sign-up step is different but dependent on the other steps, so the sign-up flow should execute correctly. There are different field validations, like email ID and user financial information validations. All these validations should be checked in manual or automated web testing.
Cookies testing:
Cookies are small files stored on the user's machine. These are basically used to maintain sessions, mainly login sessions. Test the application by enabling or disabling the cookies in your browser options. Test if the cookies are encrypted before being written to the user's machine. If you are testing session cookies (i.e. cookies that expire after the session ends), check login sessions and user stats after the session ends. Check the effect on application security of deleting the cookies. (I will soon write a separate article on cookie testing.)
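
A quick way to inspect cookie behaviour programmatically is sketched below; it assumes the third-party requests library, and the login URL and form field names are invented for illustration:

import requests  # third-party HTTP client, assumed installed

# Hypothetical login endpoint; the field names are made up for illustration.
session = requests.Session()
session.post("http://example.com/login",
             data={"username": "tester", "password": "secret"})

for cookie in session.cookies:
    print(cookie.name,
          "secure" if cookie.secure else "NOT secure",
          "session-only" if cookie.expires is None else f"expires {cookie.expires}")
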
Validate your HTML/CSS:
If you are optimizing your site for search engines then HTML/CSS validation is very important. Mainly validate the site for HTML syntax errors. Check if the site is crawlable by different search engines.
Database testing:
Data consistency is very important in a web application. Check for data integrity and errors while you edit, delete or modify the forms or do any DB-related functionality.
Check if all the database queries are executing correctly, data is retrieved correctly and also updated correctly. Database testing can also cover load on the DB; we will address this under web load / performance testing below.

2) Usability Testing:
Test for navigation:
Navigation means how the user surfs the web pages and uses different controls like buttons and boxes, or how the user follows links on the pages to reach different pages.
Usability testing includes:
The web site should be easy to use. Instructions should be provided clearly. Check whether the provided instructions are correct, i.e. whether they satisfy their purpose.
Main menu should be provided on each page. It should be consistent.
Content checking:
Content should be logical and easy to understand. Check for spelling errors. Dark colors annoy users and should not be used in the site theme. You can follow some standards that are used for web page and content building. These are commonly accepted standards, like the ones I mentioned above about annoying colors, fonts, frames, etc.
Content should be meaningful. All the anchor text links should be working properly. Images should be placed properly with proper sizes.
These are some basic standards that should be followed in web development. Your task is to validate all of them during UI testing.
Other user information for user help:
Like search option, site map, help files etc. Site map should be present with all the links in web sites with proper tree view of navigation. Check for all links on the site map.
“Search in the site” option will help users to find content pages they are looking for easily and quickly. These are all optional items and if present should be validated.

3) Interface Testing:
The main interfaces are:
Web server and application server interface
Application server and Database server interface.
Check if all the interactions between these servers are executed properly and errors are handled properly. If the database or web server returns an error message for any query by the application server, then the application server should catch and display these error messages appropriately to users. Check what happens if the user interrupts a transaction in between, and what happens if the connection to the web server is reset in between.



4) Compatibility Testing:
Compatibility of your web site is a very important testing aspect. The following compatibility tests should be executed:
Browser compatibility
Operating system compatibility
Mobile browsing
Printing options
Browser compatibility:
In my web-testing career I have found this to be the most influential part of web site testing.
Some applications are very dependent on browsers. Different browsers have different configurations and settings that your web page should be compatible with. Your web site coding should be cross browser platform compatible. If you are using Java scripts or AJAX calls for UI functionality, performing security checks or validations then give more stress on browser compatibility testing of your web application.
Test the web application on different browsers like Internet Explorer, Firefox, Netscape Navigator, AOL, Safari and Opera, with different versions.
OS compatibility:
Some functionality in your web application may not be compatible with all operating systems. New technologies used in web development, such as graphics designs and interface calls to different APIs, may not be available on all operating systems.
Test your web application on different operating systems like Windows, Unix, Mac, Linux and Solaris, with different OS flavors.
Mobile browsing:
This is the age of new technology, and mobile browsing will only grow. Test your web pages on mobile browsers; compatibility issues may show up there as well.
Printing options:
If you are providing page-printing options then make sure fonts, page alignment and page graphics get printed properly. Pages should fit the paper size or the size mentioned in the printing option.

5) Performance testing:
A web application should sustain heavy load. Web performance testing should include:
Web Load Testing
Web Stress Testing
Test application performance on different Internet connection speed.
In web load testing, test whether many users can access or request the same page at once. Can the system sustain peak load times? The site should handle many simultaneous user requests, large input data from users, simultaneous connections to the DB, heavy load on specific pages, etc.
Stress testing: Generally, stress means stretching the system beyond its specified limits. Web stress testing is performed to break the site by applying stress and then checking how the system reacts to the stress and how it recovers from crashes.
Stress is generally given on input fields, login and sign up areas.
In web performance testing, web site functionality on different operating systems and different hardware platforms is checked for software and hardware memory leakage errors.
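
A very small load-test sketch, assuming the third-party requests library and a made-up URL, could fire simultaneous requests and report errors and response times like this:

import time
from concurrent.futures import ThreadPoolExecutor
import requests  # third-party HTTP client, assumed installed

URL = "http://example.com/search?q=test"   # hypothetical page under load

def timed_request(_):
    start = time.time()
    status = requests.get(URL).status_code
    return status, time.time() - start

# Fire 50 simultaneous requests and collect status codes and response times
with ThreadPoolExecutor(max_workers=50) as pool:
    results = list(pool.map(timed_request, range(50)))

errors = [status for status, _ in results if status >= 500]
slowest = max(elapsed for _, elapsed in results)
print(f"{len(errors)} server errors, slowest response {slowest:.2f}s")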


6) Security Testing:
Following are some test cases for web security testing:
Test by pasting an internal URL directly into the browser address bar without logging in. Internal pages should not open.
If you are logged in using a user name and password and browsing internal pages, then try changing URL parameters directly. For example, if you are checking some publisher site statistics with publisher site ID=123, try directly changing the URL site ID parameter to a different site ID which is not related to the logged-in user. Access should be denied for this user to view others' stats.
Try some invalid inputs in input fields like the login user name, password and other input text boxes. Check the system's reaction to all invalid inputs.
Web directories or files should not be accessible directly unless given download option.
Test the CAPTCHA against automated script logins.
Test whether SSL is used for security measures. If it is used, a proper message should be displayed when the user switches from non-secure http:// pages to secure https:// pages and vice versa.
All transactions, error messages and security breach attempts should get logged in log files somewhere on the web server.
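
The first two security checks above can be automated with a few lines of Python; the sketch below assumes the third-party requests library and uses made-up internal URLs:

import requests  # third-party HTTP client, assumed installed

# Hypothetical internal pages that must not be reachable without a login
PROTECTED_URLS = [
    "http://example.com/admin/reports",
    "http://example.com/publisher/stats?siteid=123",
]

for url in PROTECTED_URLS:
    response = requests.get(url, allow_redirects=False)
    # Anything other than a denial or a redirect to the login page is a finding
    if response.status_code not in (301, 302, 401, 403):
        print("SECURITY ISSUE: opened without login ->", url, response.status_code)
    else:
        print("OK:", url, response.status_code)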

Tuesday, April 12, 2011

Task- Based software testing

At a software testing conference held in Bangalore, India, the topic of discussion was "How to test software's impact on a system's mission effectiveness?"

Most customers want systems that are:

- On time
- Within budget
- Satisfying user requirements
- Reliable

The latter two concerns (out of the above four) can be refined into two broad objectives for operational testing:

1. To verify that a system's performance satisfies its requirements as specified in the Operational Requirements Document and related documents.

2. To identify any serious deficiencies in the system design that need correction before full rate production.

Following the path from the system level down to software, two generic reasons for testing software are:

- Test for defects so they can be fixed - Debug Testing
- Test for confidence in the software - Operational Testing

Debug testing is usually conducted using a combination of functional test techniques and structural test techniques. The goal is to locate defects in the most cost-effective manner and correct the defects, ensuring the performance satisfies the user requirements.

Operational testing is based on the expected usage profile for a system. The goal is to estimate the confidence in a system, ensuring the system is reliable for its intended use.

Task-Based Testing is a variation on operational testing. The particular techniques are not new, rather it leverages commonly accepted techniques by placing them within the context of current operational and acquisition strategies.

Task-based testing, as the name implies, uses task analysis. This begins with a comprehensive framework for all of the tasks that the system will perform. Through a series of hierarchical task analyses, each unit within the service creates a Mission Essential Task List (Mission of System).

These lists only describe "what" needs to be done, not "how" or "who." Further task decomposition identifies the system and people required to carry out a mission essential task. Another level of decomposition results in the system tasks (i.e. functions) a system must provide. This is, naturally, the level in which developers and testers are most interested. From a tester’s perspective, this framework identifies the most important functions to test by correlating functions against the mission essential tasks a system is designed to support.

This is distinctly different from the typical functional testing or "test-to-spec" approach where each function or specification carries equal importance. Ideally, there should be no function or specification which does not contribute to a task, but in reality there are often requirements, specifications, and capabilities which do not or minimally support a mission essential task. Using task analysis, one identifies those functions impacting the successful completion of mission essential tasks and highlights them for testing.


Operational Profiles: The process of task analysis has great benefit in identifying which functions are the most important to test. However, the task analysis only identifies the mission essential tasks and functions, not their frequency of use. Greater utility can be gained by combining the mission essential tasks with an operational profile: an estimate of the relative frequency of inputs that represents field use. This has several benefits:

1. Offers a basis for reliability assessment, so that the developer can have not only the assurance of having tried to improve the software, but also has an estimate of the reliability actually achieved.

2. Provides a common base for communicating with the developers about the intended use of the system and how it will be evaluated.

3. When software testing schedules and budgets are tightly constrained, this design yields the highest practical reliability because if failures are seen they would be the high frequency failures.


The first benefit has the advantage of applying statistical techniques in:

- The design of tests
- The analysis of resulting data

Software reliability estimation methods such as Task Analysis are available to estimate both the expected field reliability and the rate of growth in reliability. This directly supports an answer to the question about software’s impact on a system’s mission effectiveness.

Operational profiles are criticized as being difficult to develop. However, as part of their current operations and acquisition strategies, some organizations inherently develop an operational profile. At higher levels, this is reflected in the following documents:

- Analysis of Alternatives
- Operational Requirements Document (ORD)
- Operations Plans
- Concept of Operations (CONOPS) etc.

Closer to the tester's realm is the interaction between the user and the developer, which the current acquisition strategy encourages. The tester can act as a facilitator in helping the user refine his or her needs while providing insight to the developer on expected use. This highlights the second benefit above: the communication between the user, developer, and tester.

Despite years of improvement in the software development process, one still sees systems which have gone through intensive debug testing (statement coverage, branch coverage, etc.) and "test-to-spec," but still fail to satisfy the customer’s concerns (that I mentioned above). By involving a customer early in the process to develop an operational profile, the most needed functions to support a task will be developed and tested first, increasing the likelihood of satisfying the customer’s four concerns. This third benefit is certainly of interest in today’s environment of shrinking budgets and manpower, shorter schedules (spiral acquisition), and greater demands on a system.


Task-Based Software Testing

Thus, Task-based software testing is the combination of a task analysis and an operational profile. The task analysis helps partition the input domain into mission essential tasks and the system functions which support them. Operational profiles, based on these tasks, are developed to further focus the testing effort.
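
To make the idea concrete, here is a tiny Python sketch of test selection driven by an operational profile; the tasks and their relative frequencies are invented for illustration:

import random

# Hypothetical operational profile: relative frequency of each mission
# essential task as it is expected to occur in field use.
operational_profile = {
    "create_order": 0.55,
    "query_status": 0.30,
    "cancel_order": 0.10,
    "export_report": 0.05,
}

tasks = list(operational_profile)
weights = [operational_profile[t] for t in tasks]

# Draw 1000 test invocations so high-frequency tasks are exercised most often
selected = random.choices(tasks, weights=weights, k=1000)
for task in tasks:
    print(task, selected.count(task))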

Debug Testing

Debug testing is directed at finding as many bugs as possible, by either sampling all situations likely to produce failures using methods like code coverage & specification criteria etc, or concentrating on those that are considered most likely to produce failures like stress testing and boundary testing methods.
Unit testing methods are examples of debug testing methods. These include such techniques as statement testing, branch testing, basis path testing, etc. Typically associated with these methods are criteria based on coverage, so they are sometimes referred to as coverage methods. Debug testing is based on a tester's hypothesis of the likely types and locations of bugs.

Consequently, the effectiveness of this method depends heavily on whether the tester’s assumptions are correct.

If a developer and/or tester has a process in place to correctly identify the potential types and locations of bugs, then debug testing may be very effective at finding bugs. If a "standard" or "blind" approach is used, such as statement testing for its own sake, the testing effort may be ineffectual and wasted. A subtle hazard of debug testing is that it may uncover many failures, but in the process wastes test and repair effort without notably improving the software because the failures occur at a negligible rate during field use.

Integration of Test Methods

Historically, a system's developer relied on debug testing (which includes functional or "test-to-spec" testing). Testing from the perspective of how the system would be employed was not seen until an operational test agency (OTA) became involved. Even on the occasions when developmental test took on an operational flavor, this was viewed as too late in the process. This historical approach to testing amplifies the weaknesses of both operational and debug testing. I propose that task-based software testing be accelerated to a much earlier point in the acquisition process. This has the potential of countering each respective method's weaknesses with the other's strengths.

Conclusion: Task-based Software Testing evaluation is a combination of demonstrated, existing methods (task analysis and operational testing). Its strength lies in matching well with the current operational strategy of mission essential tasks and the acquisition community’s goal to deliver operational capability quickly. By integrating task-based software testing with existing debug testing, the risk of meeting the customer’s four concerns (on-time, within budget, satisfies requirements, and is reliable) can be reduced.

User Acceptance Testing

User Acceptance Testing is a formal way to ensure that the new system or process does actually meet the user requirements. Each module to be implemented will be subject to one or more User Acceptance Tests (UAT) before being ‘signed off’ as meeting user needs. The time required will vary depending on the extent of the functionality to be tested. The test schedule will allow time for discussion and issue resolution.

Thus, I can say a user acceptance test is a chance to completely test business processes implemented in the application or software.

Main Objectives of the user acceptance testing:


Validate system set-up for transactions and user access
Confirm use of system in performing business processes
Verify performance on business critical functions
Confirm integrity of converted and additional data, for example values that appear in a look-up table
Assess and sign off go-live readiness
The scope of each user acceptance test will vary depending on which business process is being tested. In general however, tests will cover the following broad areas:

A number of defined test cases using quality data to validate end-to-end business processes.
A comparison of actual test results against expected results
A meeting/discussion forum to evaluate the process and facilitate issue resolution.
User Acceptance Testing is a 7 step process:

UAT Planning
Designing User Acceptance Test Cases
Creating the UAT team
Executing Test Cases
Defect Logging
Resolving the issues/bug fixing
Sign Off
Designing UA Test Cases: The UA test cases help the UAT team to test the application thoroughly. This also helps ensure that the UAT provides sufficient coverage of all the scenarios. Generally, scenario based test cases are created for UAT. The inputs for these test cases are:

Use cases created during requirements gathering
Inputs from business analysts and subject matter experts
UAT test cases are written in very simple language and describe the steps to be taken to test various business workflows or scenarios.

Participants of UAT: Participants for a UAT can vary from project to project, client to client or organization to organization. The UAT team typically consists of the customer team and the project team.

Customer Team:
IT team of customer (if any)
Business Users / Managers / Application owner (E.g. If the developed application is for HR department, then, HR head can be the application owner).
End Users
Project Team:
Project Manager / Tech Lead
Testing Team / Test Lead
Business Analyst
Roles and Responsibilities: The project team will be responsible for coordinating the preparation of all test cases and the UAT group will be responsible for the execution of all test cases (with support from the project team). However, sometimes UAT test cases are prepared by the customer team, especially by business users.

The UAT team will

Ensure that the definitions of the tests provide comprehensive and effective coverage of all reasonable aspects of functionality
Execute the test cases using sample source documents as inputs and ensure that the final outcomes of the tests are satisfactory
Validate that all test case input sources and test case output results are documented and can be audited
Document any problems, and work with the project team to resolve problems identified during the tests
Sign off on all test cases by signing the completed test worksheets
Accept the results on behalf of the relevant user population
Recognize any changes necessary to existing processes and take a lead role locally in ensuring that the changes are made and adequately communicated to other users
The Project Team will:

Provide first level support for all testing issues
Advise on changes to business process and procedure and/or
Change the system functionality, where possible, via set up changes
Track and manage test problems

Black box Testing

Black box testing is a test design method. Black box testing treats the system as a "black box", so it doesn't explicitly use knowledge of the internal structure. In other words, the test engineer need not know the internal workings of the "black box". It focuses on the functionality of the module.

Some people refer to black box testing as behavioral, functional, opaque-box, or closed-box testing. While the term black box is most popularly used, many people prefer the terms "behavioral" and "structural" for black box and white box respectively. Behavioral test design is slightly different from black-box test design because the use of internal knowledge isn't strictly forbidden, but it's still discouraged.

Personally, we feel that there is a trade-off between the white box and black box approaches used to test a product.

There are some bugs that cannot be found using only black box or only white box testing. If the test cases are extensive and the test inputs are drawn from a large sample space, then it is generally possible to find the majority of the bugs through black box testing.

Tools used for Black Box testing: Many tool vendors have been producing tools for automated black box and automated white box testing for several years. The basic functional or regression testing tools capture the results of black box tests in a script format. Once captured, these scripts can be executed against future builds of an application to verify that new functionality hasn't disabled previous functionality.

Advantages of Black Box Testing:

- Tester can be non-technical.

- This testing is most likely to find the same bugs that the user would find.

- Testing helps to identify the vagueness and contradiction in functional specifications.

- Test cases can be designed as soon as the functional specifications are complete.

Disadvantages of Black Box Testing:

- Chances of having repetition of tests that are already done by programmer.

- The test inputs need to come from a large sample space.

- It is difficult to identify all possible inputs in limited testing time. So writing test cases is slow and difficult.

- Chances of having unidentified paths during this testing.

- Graph Based Testing Methods: Software testing begins by creating a graph of important objects and their relationships, and then devising a series of tests that will cover the graph so that each object and relationship is exercised and errors are uncovered.

Error Guessing: Error Guessing comes with experience with the technology and the project. Error Guessing is the art of guessing where errors can be hidden. There are no specific tools and techniques for this, but you can write test cases depending on the situation: Either when reading the functional documents or when you are testing and find an error that you have not documented.

Boundary Value Analysis: Boundary Value Analysis (BVA) is a test data selection technique (Functional Testing technique) where the extreme values are chosen. Boundary values include maximum, minimum, just inside/outside boundaries, typical values, and error values. The hope is that, if a system works correctly for these special values then it will work correctly for all values in between.

- Extends equivalence partitioning

- Test both sides of each boundary

- Look at output boundaries for test cases too

- Test min, min+1, max-1, max, and typical (nominal) values

- BVA focuses on the boundary of the input space to identify test cases

- The rationale is that errors tend to occur near the extreme values of an input variable

There are two ways to generalize the BVA techniques:

By the number of variables (For n variables): BVA yields 4n + 1 test cases.

By the kinds of ranges: Generalizing ranges depends on the nature or type of variables:

- NextDate has a variable Month and the range could be defined as {Jan, Feb, …Dec}
Min = Jan, Min +1 = Feb, etc.

- Triangle had a declared range of {1, 20,000}

- Boolean variables have extreme values True and False but there is no clear choice for the remaining three values
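
A small Python sketch of the 4n + 1 rule, using the classic min, min+1, nominal, max-1, max values and made-up variable ranges, might look like this:

def bva_test_cases(ranges):
    """Generate the classic 4n + 1 boundary value test cases.

    ranges maps each variable name to a (min, max) pair; every variable
    other than the one being probed is held at its nominal (mid-range) value."""
    nominal = {name: (lo + hi) // 2 for name, (lo, hi) in ranges.items()}
    cases = [dict(nominal)]                       # the all-nominal case
    for name, (lo, hi) in ranges.items():
        for value in (lo, lo + 1, hi - 1, hi):    # min, min+1, max-1, max
            case = dict(nominal)
            case[name] = value
            cases.append(case)
    return cases

# Hypothetical example: the triangle problem with three sides in 1..20000
for case in bva_test_cases({"a": (1, 20000), "b": (1, 20000), "c": (1, 20000)}):
    print(case)   # 4*3 + 1 = 13 test cases in total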

Advantages of Boundary Value Analysis:

- Robustness Testing - Boundary Value Analysis plus values that go beyond the limits
Min - 1, Min, Min +1, Nom, Max -1, Max, Max +1

- Forces attention to exception handling

- For strongly typed languages robust testing results in run-time errors that abort normal execution

Limitations of Boundary Value Analysis: BVA works best when the program is a function of several independent variables that represent bounded physical quantities:

1. Independent Variables:
NextDate test cases derived from BVA would be inadequate: focusing on the boundary would place no emphasis on February or leap years.

- Dependencies exist with NextDate's Day, Month and Year.
- Test cases derived without consideration of the function

2. Physical Quantities:
An example of physical variables being tested is telephone numbers: what faults might be revealed by numbers such as 000-0000, 000-0001, 555-5555, 999-9998 and 999-9999?

Equivalence Partitioning: Equivalence partitioning is a black box testing method that divides the input domain of a program into classes of data from which test cases can be derived. EP can be defined according to the following guidelines:

- If an input condition specifies a range, one valid and two invalid classes are defined.

- If an input condition requires a specific value, one valid and two invalid equivalence classes are defined.

- If an input condition specifies a member of a set, one valid and one invalid equivalence class is defined.

- If an input condition is Boolean, one valid and one invalid class is defined.
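
A minimal sketch of equivalence partitioning in Python, using a hypothetical "age" field that must accept whole numbers from 18 to 60 inclusive, could be:

# The partitions below group input values into classes that the program is
# expected to treat identically; one representative per class is enough.
partitions = {
    "valid: 18-60":          [18, 35, 60],
    "invalid: below 18":     [17, 0, -5],
    "invalid: above 60":     [61, 99],
    "invalid: not a number": ["abc", "", None],
}

def accepts_age(value):
    """Stand-in for the function under test (assumed behaviour)."""
    return isinstance(value, int) and 18 <= value <= 60

for name, samples in partitions.items():
    expected = name.startswith("valid")
    for sample in samples:
        result = accepts_age(sample)
        print(name, repr(sample), "PASS" if result == expected else "FAIL")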

Comparison Testing: There are situations where independent versions of software are developed for critical applications, even when only a single version will be used in the delivered computer-based system. It is these independent versions which form the basis of a black box testing technique called comparison testing or back-to-back testing.

Orthogonal Array Testing: The Orthogonal Array Testing Strategy (OATS) is a systematic, statistical way of testing pair-wise interactions by deriving a suitable small set of test cases (from a large number of possibilities).

White box Testing

What is WBT?
White box testing involves looking at the structure of the code. When you know the internal structure of a product, tests can be conducted to ensure that the internal operations are performed according to the specification and that all internal components have been adequately exercised. In other words, WBT tends to involve covering the specification in the code.

Code coverage is divided into the following types, as listed below:

1. Segment coverage – Each segment of code between control structures is executed at least once.

2. Branch Coverage or Node Testing – Each branch in the code is taken in each possible direction at least once.

3. Compound Condition Coverage – When there are multiple conditions, you must test not only each direction but also each possible combination of conditions, which is usually done by using a 'Truth Table'.

4. Basis Path Testing – Each independent path through the code is taken in a pre-determined order.

5. Data Flow Testing (DFT) – In this approach you track specific variables through each possible calculation, thus defining the set of intermediate paths through the code, i.e. those based on each piece of code chosen to be tracked. Even though the paths are considered independent, dependencies across multiple paths are not really tested for by this approach. DFT tends to reflect dependencies, but mainly through sequences of data manipulation. This approach tends to uncover bugs like variables used but not initialized, or declared but not used, and so on.

6. Path Testing – Path testing is where all possible paths through the code are defined and covered. This testing is extremely laborious and time consuming.

7. Loop Testing – In addition to the above measures, there are testing strategies based on loop testing. These strategies relate to testing single loops, concatenated loops, and nested loops. Loops are fairly simple to test unless dependencies exist among the loops or between a loop and the code it contains.

What do we do in WBT?

In WBT, we use the control structure of the procedural design to derive test cases. Using WBT methods a tester can derive the test cases that

- Guarantee that all independent paths within a module have been exercised at least once.

- Exercise all logical decisions on their true and false values.

- Execute all loops at their boundaries and within their operational bounds

- Exercise internal data structures to ensure their validity.

White box testing (WBT) is also called Structural or Glass box testing.

Why WBT?

We do WBT because Black box testing is unlikely to uncover numerous sorts of defects in the program. These defects can be of the following nature:

- Logic errors and incorrect assumptions are inversely proportional to the probability that a program path will be executed. Errors tend to creep into our work when we design and implement functions, conditions or controls that are out of the mainstream of the program.

- The logical flow of the program is sometimes counterintuitive, meaning that our unconscious assumptions about flow of control and data may lead to design errors that are uncovered only when path testing starts.

- Typographical errors are random, some of which will be uncovered by syntax checking mechanisms but others will go undetected until testing begins.

Skills Required

1. Theoretically speaking, all we need to do in WBT is define all logical paths, develop test cases to exercise them and evaluate the results, i.e. generate test cases to exercise the program logic exhaustively.

2. For this we need to know the program well, i.e. we should know the specification and the code to be tested, and related documents should be available to us. We must be able to tell the expected status of the program versus the actual status found at any point during the testing process.

Limitations

Unfortunately, in WBT exhaustive testing of code presents certain logistical problems. Even for small programs, the number of possible logical paths can be very large. For instance, consider a 100-line C program that, after some basic data declarations, contains two nested loops executing 1 to 20 times each depending upon some initial input, with four if-then-else constructs inside the interior loop. There are approximately 10^14 logical paths to be exercised to test the program exhaustively. This means that a magical test processor that could develop a test case, execute it and evaluate the results in one millisecond would still require about 3170 years of continuous work for this exhaustive testing, which is certainly impractical. Exhaustive WBT is impossible for large software systems. But that doesn't mean WBT should be considered impractical. Limited WBT, in which a limited number of important logical paths are selected and exercised and important data structures are probed for validity, is both practical and worthwhile. It is suggested that white and black box testing techniques can be coupled to provide an approach that validates the software interface and selectively ensures the correctness of the internal workings of the software.

Tools used for White Box testing:

Few Test automation tool vendors offer white box testing tools which:

1) Provide run-time error and memory leak detection;

2) Record the exact amount of time the application spends in any given block of code for the purpose of finding inefficient code bottlenecks; and

3) Pinpoint areas of the application that have and have not been executed.

Basis Path Testing: Basis path testing is a white box testing technique first proposed by Tom McCabe. The basis path method enables the test case designer to derive a logical complexity measure of a procedural design and to use this measure as a guide for defining a basis set of execution paths. Test cases derived to exercise the basis set are guaranteed to execute every statement in the program at least once during testing.

Flow Graph Notation: The flow graph depicts logical control flow using a diagrammatic notation. Each structured construct has a corresponding flow graph symbol.

Cyclomatic Complexity: Cyclomatic complexity is a software metric that provides a quantitative measure of the logical complexity of a program. When used in the context of the basis path testing method, the value computed for cyclomatic complexity defines the number of independent paths in the basis set of a program and provides us with an upper bound for the number of tests that must be conducted to ensure that all statements have been executed at least once.

An independent path is any path through the program that introduces at least one new set of processing statements or a new condition.

Computing Cyclomatic Complexity: Cyclomatic complexity has a foundation in graph theory and provides us with an extremely useful software metric. Complexity is computed in one of three ways:

1. The number of regions of the flow graph corresponds to the Cyclomatic complexity.

2. Cyclomatic complexity, V(G), for a flow graph G, is defined as
V(G) = E - N + 2, where E is the number of flow graph edges and N is the number of flow graph nodes.

3. Cyclomatic complexity, V(G), for a flow graph G, is also defined as
V(G) = P + 1, where P is the number of predicate nodes contained in the flow graph G.
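
Both formulas are easy to check by hand or in code. The following Python sketch computes them for a small, made-up flow graph held as an adjacency list:

# The graph below is an invented example with a single if/else decision.
flow_graph = {
    1: [2],        # sequential statement
    2: [3, 4],     # predicate node: if/else
    3: [5],
    4: [5],
    5: [],         # exit node
}

nodes = len(flow_graph)
edges = sum(len(successors) for successors in flow_graph.values())
predicates = sum(1 for successors in flow_graph.values() if len(successors) > 1)

print("V(G) = E - N + 2 =", edges - nodes + 2)   # 5 - 5 + 2 = 2
print("V(G) = P + 1     =", predicates + 1)      # 1 + 1 = 2 independent paths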

Graph Matrices: The procedure for deriving the flow graph and even determining a set of basis paths is amenable to mechanization. To develop a software tool that assists in basis path testing, a data structure, called a graph matrix can be quite useful.
A Graph Matrix is a square matrix whose size is equal to the number of nodes on the flow graph. Each row and column corresponds to an identified node, and matrix entries correspond to connections between nodes.

Control Structure Testing: Described below are some of the variations of Control Structure Testing:

Condition Testing: Condition testing is a test case design method that exercises the logical conditions contained in a program module.

Data Flow Testing: The data flow testing method selects test paths of a program according to the locations of definitions and uses of variables in the program.

Loop Testing: Loop Testing is a white box testing technique that focuses exclusively on the validity of loop constructs. Four classes of loops can be defined: Simple loops, Concatenated loops, nested loops, and unstructured loops.

Simple Loops: The following sets of tests can be applied to simple loops, where ‘n’ is the maximum number of allowable passes through the loop.

1. Skip the loop entirely.
2. Only one pass through the loop.
3. Two passes through the loop.
4. ‘m’ passes through the loop where m is less than n.
5. n-1, n, n+1 passes through the loop.

Nested Loops: If we extend the test approach from simple loops to nested loops, the number of possible tests would grow geometrically as the level of nesting increases.

1. Start at the innermost loop. Set all other loops to minimum values.
2. Conduct simple loop tests for the innermost loop while holding the outer loops at their minimum iteration parameter values. Add other tests for out-of-range or exclude values.
3. Work outward, conducting tests for the next loop, but keep all other outer loops at minimum values and other nested loops to "typical" values.
4. Continue until all loops have been tested.

Concatenated Loops: Concatenated loops can be tested using the approach defined for simple loops, if each of the loops is independent of the other. However, if two loops are concatenated and the loop counter for loop 1 is used as the initial value for loop 2, then the loops are not independent.

Unstructured Loops: Whenever possible, this class of loops should be redesigned to reflect the use of the structured programming constructs.

Difference between smoke and sanity software testing

Smoke Testing: Software testing done to ensure whether the build can be accepted for thorough software testing or not. Basically, it is done to check the stability of the build received for software testing.

Sanity testing: After receiving a build with minor changes in the code or functionality, a subset of regression test cases is executed to check whether the changes rectified the software bugs or issues and whether no other software bug was introduced by the changes. Sometimes, when multiple cycles of regression testing are executed, sanity testing of the software can be done at later cycles, after thorough regression test cycles. If we are moving a build from the staging / testing server to the production server, sanity testing of the software application can be done to check whether the build is sane enough to be moved further to the production server or not.

Difference between Smoke & Sanity Software Testing:


Smoke testing is a wide approach where all areas of the software application are tested without going too deep. However, sanity testing is a narrow regression test with a focus on one or a small set of areas of functionality of the software application.
The test cases for smoke testing of the software can be either manual or automated. However, a sanity test is generally performed without test scripts or test cases.
Smoke testing is done to check whether the main functions of the software application are working or not. During smoke testing of the software, we do not go into finer details. However, sanity testing is a cursory software testing type. It is done whenever a quick round of software testing can prove that the software application is functioning according to business / functional requirements.
Smoke testing of the software application is done to check whether the build can be accepted for thorough software testing. Sanity testing of the software is done to ensure whether the requirements are met or not.

How to do system testing

Testing the software system or software application as a whole is referred to as System Testing of the software. System testing of the application is done on complete application software to evaluate software's overall compliance with the business / functional / end-user requirements. The system testing comes under black box software testing. So, the knowledge of internal design or structure or code is not required for this type of software testing.

In system testing, a software test professional aims to detect defects or bugs both within the interfaces and within the software as a whole. During integration testing of the application or software, however, the software test professional aims to detect the bugs / defects between the individual units that are integrated together.

During system testing, the focus is on the software design, behavior and even the believed expectations of the customer. So, we can also refer to the system testing phase of software testing as the investigatory testing phase of the software development life cycle.

At what stage of SDLC the System Testing comes into picture:

After the integration of all components of the software being developed, the whole software system is rigorously tested to ensure that it meets the specified business, functional & non-functional requirements. System testing builds on the unit testing and integration testing levels. Generally, a separate and dedicated team is responsible for system testing, and system testing is performed on a staging server.

Why system testing is required:


It is the first level of software testing where the software / application is tested as a whole.
It is done to verify and validate the technical, business, functional and non-functional requirements of the software. It also includes the verification & validation of software application architecture.
System testing is done on a staging environment that closely resembles the production environment where the final software will be deployed.
Entry Criteria for System Testing:

Unit Testing must be completed
Integration Testing must be completed
Complete software system should be developed
A software testing environment closely resembling the production environment (a staging environment) must be available.
System Testing in seven steps:

Creation of System Test Plan
Creation of system test cases
Selection / creation of test data for system testing
Software Test Automation of execution of automated test cases (if required)
Execution of test cases
Bug fixing and regression testing
Repeat the software test cycle (if required on multiple environments)
Contents of a system test plan: The contents of a software system test plan may vary from organization to organization or project to project. They depend on how we have created the software test strategy, project plan and master test plan of the project. However, the basic contents of a software system test plan should be:

- Scope
- Goals & Objective
- Area of focus (Critical areas)
- Deliverables
- System testing strategy
- Schedule
- Entry and exit criteria
- Suspension & resumption criteria for software testing
- Test Environment
- Assumptions
- Staffing and Training Plan
- Roles and Responsibilities
- Glossary

How to write system test cases: System test cases are written in a similar way as functional test cases. However, while creating system test cases, the following two points need to be kept in mind:

- System test cases must cover the use cases and scenarios
- They must validate all types of requirements - technical, UI, functional, non-functional, performance etc.

As per Wikipedia, there are a total of 24 types of testing that need to be considered during system testing. These are:

GUI software testing, Usability testing, Performance testing, Compatibility testing, Error handling testing, Load testing, Volume testing, Stress testing, User help testing, Security testing, Scalability testing, Capacity testing, Sanity testing, Smoke testing, Exploratory testing, Ad hoc testing, Regression testing, Reliability testing, Recovery testing, Installation testing, Idempotency testing, Maintenance testing, Recovery testing and failover testing, Accessibility testing

The format of system test cases contains the following fields; a sample filled-in test case is shown after the list:

Test Case ID - a unique number
Test Suite Name
Tester - name of the tester who executes or writes the test case
Requirement - Requirement Id or brief description of the functionality / requirement
How to Test - Steps to follow for execution of the test case
Test Data - Input Data
Expected Result
Actual Result
Pass / Fail
Test Iteration
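
A sample filled-in system test case in this format (the application and values are hypothetical):

Test Case ID: ST_LOGIN_001
Test Suite Name: Login
Tester: <tester name>
Requirement: REQ-12 - A registered user can log in
How to Test: 1. Open the login page 2. Enter a valid username and password 3. Click Login
Test Data: username = testuser1, password = Passw0rd!
Expected Result: The user is taken to the home page
Actual Result: The user is taken to the home page
Pass / Fail: Pass
Test Iteration: 1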

Top down testing Vs Bottom up testing

Top down Testing: In this approach testing is conducted from the main module to the sub-modules. If a sub-module is not yet developed, a temporary program called a STUB is used to simulate the sub-module.

Advantages:
- Advantageous if major flaws occur toward the top of the program.
- Once the I/O functions are added, representation of test cases is easier.
- Early skeletal program allows demonstrations and boosts morale.

Disadvantages:
- Stub modules must be produced.
- Stub modules are often more complicated than they first appear to be.
- Before the I/O functions are added, representation of test cases in stubs can be difficult.
- Test conditions may be impossible, or very difficult, to create.
- Observation of test output is more difficult.
- Allows one to think that design and testing can be overlapped.
- Induces one to defer completion of the testing of certain modules.

Bottom up testing: In this approach testing is conducted from the sub-modules to the main module. If the main module is not yet developed, a temporary program called a DRIVER is used to simulate the main module.

Advantages:
- Advantageous if major flaws occur toward the bottom of the program.
- Test conditions are easier to create.
- Observation of test results is easier.


Disadvantages:
- Driver Modules must be produced.
- The program as an entity does not exist until the last module is added.

Stubs and Drivers
It is always a good idea to develop and test software in "pieces". But, it may seem impossible because it is hard to imagine how you can test one "piece" if the other "pieces" that it uses have not yet been developed (and vice versa).

A software application is made up of a number of ‘Units’, where output of one ‘Unit’ goes as an ‘Input’ of another Unit. e.g. A ‘Sales Order Printing’ program takes a ‘Sales Order’ as an input, which is actually an output of ‘Sales Order Creation’ program.

Due to such interfaces, independent testing of a Unit becomes impossible. But that is what we want to do; we want to test a Unit in isolation! So here we use a ‘Stub’ and a ‘Driver’.

A ‘Driver’ is a piece of software that drives (invokes) the Unit being tested. A driver creates necessary ‘Inputs’ required for the Unit and then invokes the Unit.

A driver passes test cases to another piece of code. A test harness or test driver is supporting code and data used to provide an environment for testing part of a system in isolation. It can be described as a software module which is used to invoke a module under test, provide test inputs, control and monitor execution, and report test results - or, most simply, a line of code that calls a method and passes that method a value.

For example, if you wanted to move a fighter in the game, the driver code would be: moveFighter(fighter, locationX, locationY);

This driver code would likely be called from the main method. A white-box test case would execute this driver line of code and check "fighter.getPosition()" to make sure the player is now on the expected cell on the board.
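
A minimal driver sketch in Java along these lines (the Fighter class, the 10-column board layout and the expected cell number are hypothetical assumptions made for the sketch, not taken from any particular game):

// Hypothetical class whose state the driver will inspect.
class Fighter {
    private int position;
    public int getPosition() { return position; }
    public void setPosition(int position) { this.position = position; }
}

public class FighterDriver {
    // Unit under test: maps (x, y) to a cell number, assuming a 10-column board.
    static void moveFighter(Fighter fighter, int locationX, int locationY) {
        fighter.setPosition(locationY * 10 + locationX);
    }

    public static void main(String[] args) {
        Fighter fighter = new Fighter();   // the driver creates the required input
        moveFighter(fighter, 3, 2);        // the driver invokes the unit under test
        // the driver checks the output and reports the result
        System.out.println(fighter.getPosition() == 23 ? "PASS" : "FAIL");
    }
}

In a real project the moveFighter unit would live in the production code; only the driver is throwaway, or it can be kept as an automated test case, as noted later in this post.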

A Unit may reference another Unit in its logic. A ‘Stub’ takes place of such subordinate unit during the Unit Testing.

A ‘Stub’ is a piece of software that works similarly to a unit which is referenced by the Unit being tested, but it is much simpler than the actual unit. A Stub works as a ‘stand-in’ for the subordinate unit and provides the minimum required behavior for that unit. A Stub is a dummy procedure, module or unit that stands in for an unfinished portion of a system.

Four basic types of Stubs for Top-Down Testing are (a small sketch of the last type follows the list):
- Display a trace message
- Display parameter value(s)
- Return a value from a table
- Return table value selected by parameter
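
As a rough illustration of the last type, here is a hypothetical stub in Java that returns a canned discount selected by its parameter (the getDiscount unit and the customer types are made up for the sketch):

import java.util.HashMap;
import java.util.Map;

public class DiscountStub {
    // Hard-coded table of canned return values, keyed by the input parameter.
    private static final Map<String, Double> DISCOUNTS = new HashMap<>();
    static {
        DISCOUNTS.put("GOLD", 0.20);
        DISCOUNTS.put("SILVER", 0.10);
        DISCOUNTS.put("REGULAR", 0.00);
    }

    // Stands in for the real discount-calculation unit until it is developed.
    public static double getDiscount(String customerType) {
        return DISCOUNTS.getOrDefault(customerType, 0.00);
    }
}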

A stub is a computer program which is used as a substitute for the body of a software module that is or will be defined elsewhere or a dummy component or object used to simulate the behavior of a real component until that component has been developed.

For example, if the moveFighter method has not been written yet, a stub such as the one below might be used temporarily – which moves any player to position 1.

public void moveFighter(Fighter player, int locationX, int locationY) {
    player.setPosition(1); // always moves the player to position 1
}

Ultimately, the dummy method would be completed with the proper program logic. However, developing the stub allows the programmer to call a method in the code being developed, even if the method does not yet have the desired behavior.

The programmer needs to create such ‘Drivers’ and ‘Stubs’ to carry out Unit Testing.

Both the Driver and the Stub are kept at a minimum level of complexity, so that they do not induce any errors while testing the Unit in question.

Stubs and drivers are often viewed as throwaway code. However, they do not have to be thrown away: Stubs can be "filled in" to form the actual method. Drivers can become automated test cases.

Example - For Unit Testing of the ‘Sales Order Printing’ program, a ‘Driver’ program will have code which creates Sales Order records using hardcoded data and then calls the ‘Sales Order Printing’ program. Suppose this printing program uses another unit which calculates sales discounts through some complex calculations. Then the call to this unit will be replaced by a ‘Stub’, which will simply return fixed discount data.
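
A rough Java sketch of this example (all names and values are hypothetical): the driver's main method creates a hard-coded Sales Order and invokes the printing unit, while the complex discount calculation is replaced by a stub that returns a fixed value.

public class SalesOrderPrintingDriver {

    static class SalesOrder {
        String customer;
        double amount;
        SalesOrder(String customer, double amount) { this.customer = customer; this.amount = amount; }
    }

    // Stub: stands in for the real, complex discount-calculation unit and
    // simply returns a fixed discount.
    static double calculateDiscount(SalesOrder order) {
        return 5.0;
    }

    // Unit under test: the 'Sales Order Printing' program.
    static void printSalesOrder(SalesOrder order) {
        double discount = calculateDiscount(order); // in production this would call the real unit
        System.out.println("Order for " + order.customer + ", net amount: " + (order.amount - discount));
    }

    // Driver: creates the Sales Order record with hard-coded data and invokes the unit under test.
    public static void main(String[] args) {
        printSalesOrder(new SalesOrder("ACME Corp", 100.0));
    }
}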

Monday, April 11, 2011

Test Efficiency Vs Test Effectiveness

I've seen that many test engineers are confused about the difference between Software Test Efficiency and Software Test Effectiveness. Below is a summary of what I understand by efficiency and effectiveness.

Software Test Efficiency:

- It is internal to the organization: how many resources were consumed and how much of those resources were actually utilized.
- Software Test Efficiency is the number of test cases executed per unit of time (generally per hour).
- Test efficiency measures the amount of code and testing resources required by a program to perform a particular function.

Here are some formulas to calculate Software Test Efficiency (for different factors); a worked example follows:

Test efficiency = (total number of defects found in unit + integration + system testing) / (total number of defects found in unit + integration + system + user acceptance testing)

Testing Efficiency = (No. of defects Resolved / Total No. of Defects Submitted)* 100
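
For example (hypothetical numbers): if 90 defects were found during unit, integration and system testing combined, and 10 more defects were found during user acceptance testing, then test efficiency = 90 / (90 + 10) = 0.90, i.e. 90%. Using the second formula: if 40 of 50 submitted defects have been resolved, testing efficiency = (40 / 50) * 100 = 80%.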

Software Test Effectiveness: Software Test Effectiveness covers three aspects:

- How much the customer's requirements are satisfied by the system.
- How well the customer specifications are achieved by the system.
- How much effort is put in developing the system.

Software Test Effectiveness judges the effect of the test environment on the application.

Here are some formulas to calculate Software Test Effectiveness (for different factors); a worked example follows:

- Test effectiveness = Number of defects found divided by number of test cases executed.

- Test effectiveness = (total number of defects found during testing) / (total number of defects found during testing + total number of defects that escaped) * 100

- Test Effectiveness = Loss due to problems / Total resources processed by the system
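
For example (hypothetical numbers): using the first formula, if 30 defects were found by executing 150 test cases, test effectiveness = 30 / 150 = 0.2 defects per test case (i.e. 20 defects per 100 test cases).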

Software testing principles

Below are some basic Software Testing Principles:

- A necessary part of a test case is a definition of the expected output or result.

- A programmer should avoid attempting to test his or her own program.

- A programming organization should not test its own programs.

- Thoroughly inspect the results of each test.

- Test cases must be written for input conditions that are invalid and unexpected, as well as for those that are valid and expected.

- Examining a program to see if it does not do what it is supposed to do is only half the battle; the other half is seeing whether the program does what it is not supposed to do.

- Avoid throwaway test cases unless the program is truly a throwaway program.

- Do not plan a testing effort under the tacit assumption that no errors will be found.

- The probability of the existence of more errors in a section of a program is proportional to the number of errors already found in that section.

- Software Testing is an extremely creative and intellectually challenging task.

Software Errors Categories

One common definition of a software error is a mismatch between the program and its specification. In other words, we can say, a software error is present in a program when the program does not do what its end user expects.

Categories of Software Errors:

User interface errors such as output errors or incorrect user messages.
Function errors
Hardware defects
Incorrect program version
Requirements errors
Design errors
Documentation errors
Architecture errors
Module interface errors
Performance errors
Boundary-related errors
Logic errors such as calculation errors; state-based behavior errors; communication errors; program structure errors, such as control-flow errors.

Most programmers are rather cavalier about controlling the quality of the software they write. They bang out some code, run it through some fairly obvious ad hoc tests, and if it seems okay, they’re done. While this approach may work all right for small, personal programs, it doesn’t cut the mustard for professional software development.

Modern software engineering practices include considerable effort directed toward software quality assurance and testing. The idea, of course, is to produce high-quality software with a high probability of satisfying the customer’s needs.

There are two ways to deliver software free of errors:

Preventing the introduction of errors in the first place.
Identifying the bugs lurking in the program code, seeking them out, and destroying them.
Obviously, the first method is superior. A big part of software quality comes from doing a good job of defining the requirements for the system you’re building and designing a software solution that will satisfy those requirements. Testing concentrates on detecting those errors that creep in despite your best efforts to keep them out.

Wednesday, April 6, 2011

What type of bugs are usually rejected and deferred by developers

Chance 1:
Deferred means that if a bug is raised in this cycle, the developers know about the bug but do not fix it in this cycle; they fix it in the next cycle or next version, and until then the bug is kept in the deferred state.
If a bug raised by a tester is not valid (for example, an incorrect expectation about a validation), the developers reject the bug.


Chance 2:
Developers reject a bug when---
--The bug is not genuine.
--The results are not the same (the bug cannot be reproduced).

Developers defer a bug when---
--The priority of the bug is low.
--There is a lack of time before the release.
--The bug does not have a major effect on the application.

Chance 3:
1. The tester raises the bug with reproduction steps. The project team analyzes whether the raised bug is valid or not. If the bug cannot be reproduced in the application, the project team will "Reject" the bug.
2. If the bug is not so important for the present release, the project team will "Defer" the bug.

Suppose you have to do a project of 50 hours of work in only 20 hours, and you have to make the delivery on time. What will be your testing approach?

Opinion 1:
In this critical situation, first analyze the business-critical areas and focus areas of the product. Break the product into small chunks and prioritize them from the client's perspective, then start working on them in priority order. This is also known as risk-based testing. This way you will be able to complete the delivery on time.
Basically, this type of situation should not arise because everything would be predefined during the whole cycle. In typical Agile terminology, all cross-functional team members, the product owner and the scrum master first set the delivery time as a "sprint" for the given product on the given release date, with the approval of the clients.

Opinion 2:
You need to work on a project strategy which involves priority-based scoping and incremental delivery.

Opinion 3:
As stated by other members, you have to prioritize and determine risk. You should consider talking to the users and the development teams when planning this deliverable. You should consider what the user's priorities (expectations) are, along with the stability of the requirements and code. If you do not get reasonable feedback, you will have to prioritize on your own and provide a risk statement. It sounds like you are in a somewhat challenging position: what you are describing indicates that the customer is setting limited timelines, which suggests the work was not well planned out and the customer did not work with you to understand your estimates. Ideally, you should be able to follow the SDLC and test properly. That said, the test approach is to focus your tests on the requirements that have the greatest value to the customer (detailed tests with good coverage), and the risk is that the other requirements (of lower importance, such as usability) will have limited coverage.

Now, please let me know: if a situation like the scenario below comes up, what should the tester do?

Suppose the product / application has to be delivered to the client at 5:00 PM, and at 3:00 PM you or your team member catches a high severity defect (remember, the defect is high severity). But the client cannot wait; you must deliver the product at exactly 5:00 PM. What procedure do you follow?

The bug is high severity, so first find out whether its priority is also high (priority one) or not; if it is, we ask the client to wait.
When we find defects / bugs at the last minute before the delivery or release date, we have the following options:
1. Explain the situation to the client and ask for some more time to fix the bug.
2. If the client is not ready to give more time, analyze the impact of the defect / bug, try to find workarounds for the defect, and mention these issues in the release notes as known issues or known limitations or known bugs. Here a workaround means a remedial process to be followed to overcome the effect of the defect.
3. Normally these known issues or known limitations (defects) will be fixed in the next version or next release of the software.

Severity and priority

High severity and low priority bug
The application crashes when a rarely used report is generated. The functional impact is severe (high severity), but because very few users exercise that feature, fixing it can wait for a later release (low priority).


Low severity and high priority bug
The Google page loads, but "Google" is misspelled on the page. The spelling mistake does not affect the functionality of the Google page, but it is a prestige issue for the Google brand. So severity will be low and priority will be high.


Example of high priority and high severity
Almost every application requires a login. If you are not able to log in to the application, you cannot proceed further; it is a show-stopper. So it is a high priority and high severity bug.

Example of Low priority and low severity
A spelling mistake on a page that is rarely visited.

Monday, April 4, 2011

How to identify broken links in QTP

Broken links, also sometimes called dead links, are links on the web which are permanently unavailable. The commonly encountered 404 error is one example of such a link. Now the question is: how can we identify broken links with the help of QTP during the run session?
There can be two ways to do this:
1. Using Automatic Page checkpoint.
2. By manually creating a Page checkpoint.
Using Automatic Page checkpoint: Go to Tools > Options > Web > Advanced and check the two boxes labeled “Create a checkpoint for each page while recording” and “Broken Links”



Now every time you record a new page, QTP will automatically include a checkpoint for broken links.
By manually creating a Page checkpoint: QTP does not provide a direct menu option to insert a page checkpoint. You need to take the help of a standard checkpoint. Start a recording session > Insert > Checkpoint > Standard Checkpoint (or press F12). Place and click the hand pointer anywhere on your web page. Select Page (as shown in the picture below) and click OK.



You will get the following screen:



Check the “Broken Link” checkbox at the bottom and click OK.
Now, how will you verify page checkpoint and hence broken links?
Run the above script. Go to Test Results > Your Checkpoint. Check the status of all links under “Broken Links Result”.
If you want to verify links pointing only to the current host, check the box titled “Broken Links - check only links to current host” under Tools > Options > Web. Similarly, if you want to verify links pointing to other hosts as well, uncheck it.

How to get the text of the status bar in QTP

When you work in QTP with web applications, it is sometimes necessary to get the text of the browser's status bar.
Let's see for example the following browser:



There are two ways:
1. The Object.StatusText property of the Browser object
2. The GetROProperty("text") method of the WinStatusBar object
1. Getting the text of the Status Bar using the Object.StatusText property of the Browser object

To access the text of the Status Bar, we use the Browser object's Object property and its StatusText property.
Browser("bname").Object is a reference to the Internet Explorer's DOM object. To be more precise, it's a reference to the Internet Explorer's IWebBrowser2 interface.

Using Browser("bname").Object, you can access different methods and properties of IE, for example:
1. Browser("bname").Object.GoBack - Navigates backward one item in the history list
2. Browser("bname").Object.LocationURL - Gets the URL of the page that is currently displayed
3. Browser("bname").Object.StatusText - Sets or gets the text in the status bar for the object
4. Browser("bname").Object.ToolBar - Sets or gets whether toolbars for the object are visible

So, our code is simple enough:

sText = Browser("QTP - How to get Status").Object.StatusText
MsgBox sText

And its result is:



Note: Since we use Internet Explorer's IWebBrowser2 interface, we can use this solution with IE only. It doesn't work with Firefox. The next solution is compatible with both IE and FF.

2. Getting the text of the Status Bar using the GetROProperty("text") method of the WinStatusBar object

The status bar is a part of the browser's window. There is a special class to handle it from QTP - WinStatusBar. We can get the text of a WinStatusBar using the GetROProperty("text") method.

So, I add Status Bar to QTP's Object Repository (OR):



The final script is:

sText = Browser("QTP - How to get Status").WinStatusBar("msctls_statusbar32").GetROProperty("text")
MsgBox sText

And its result is:



Note: This solution works correctly for both IE and FF, but it requires additional operations with the Object Repository.

When requirements are changing continuously

Many times I've been asked by various stakeholders about what to do when requirements are changing continuously. Here, I'm going to describe some basic things that we need to take care of when requirements are changing continuously.


Work with end users and management early on to understand how requirements might change, so that alternate test plans and strategies can be worked out in advance. It is helpful if the application's initial design allows for some adaptability, so that later changes do not require re-doing the whole work from scratch.



Below are some more points that might help:

Devote appropriate effort to risk analysis of changes, in order to minimize regression-testing needs.
In the project's initial stages, allow for some extra time commensurate with probable changes.
Balance the effort put into setting up automated testing against the expected effort required to redo those tests to deal with changes.
Design some flexibility into test cases; this is not easily done; the best bet is to minimize the detail in the test cases, or set up only higher-level generic-type test plans.
Focus less on detailed test plans and test cases and more on ad-hoc testing with an understanding of the added risk this entails.
Try to design some flexibility into automated test scripts.
Focus initial automated testing on application aspects that are most likely to remain unchanged.
Ensure management and client understand scheduling impacts, inherent risks and costs of significant requirements changes. Then let management or the customers decide if the changes are acceptable.

What is actual testing process in practical or company environment?

Today I got an interesting question from a reader: how is testing carried out in a company, i.e. in a practical environment? Those who are just out of college and starting to search for jobs have this curiosity: what would the actual working environment in companies be like?

Here I focus on the actual software testing process in companies. By now I have gained good experience of a software testing career and day-to-day testing activities, so I will try to share it more practically rather than theoretically.

Whenever we get a new project, there is an initial project familiarization meeting. In this meeting we basically discuss: who is the client? What are the project duration and delivery dates? Who is involved in the project, i.e. manager, tech leads, QA leads, developers, testers, etc.?

From the SRS (software requirement specification), the project plan is developed. The testers' responsibility is to create the software test plan from this SRS and project plan. Developers start coding from the design. The project work is divided into different modules, and these modules are distributed among the developers. In the meantime, the testers' responsibility is to create test scenarios and write test cases for the assigned modules. We try to cover almost all the functional test cases from the SRS. The data can be maintained manually in Excel test case templates or in bug tracking tools.

When developers finish individual modules, those modules are assigned to testers. Smoke testing is performed on these modules, and if they fail this test, the modules are reassigned to the respective developers for a fix. For modules that pass, manual testing is carried out from the written test cases. If any bug is found, it gets assigned to the module's developer and gets logged in the bug tracking tool. After a bug fix, the tester does bug verification and regression testing of all related modules. If the bug passes verification, it is marked as verified and then closed. Otherwise the above-mentioned bug cycle gets repeated. (I will cover the bug life cycle in another post.)

Different tests are performed on individual modules, and integration testing is performed on the integrated modules. These tests include compatibility testing, i.e. testing the application on different hardware, OS versions, software platforms, different browsers, etc. Load and stress testing is also carried out according to the SRS. Finally, system testing is performed by creating a virtual client environment. On passing all the test cases, a test report is prepared and the decision is taken to release the product!