Different message types should be displayed differently. I came across a website explaining that every web and Windows application handles four main message types: information, successful operation, warning, and error. Each message type should be presented in a different color and with a different icon. A special fifth type is the validation message.
1. Information Messages
The purpose of information messages is to inform the user about something relevant. They should be presented in blue because, regardless of content, people associate this color with information. This could be any information relevant to a user action.
Informational Messages
For example, an info message can show help information about the current user, or some tips.
2. Success Messages
Success messages should be displayed after a user successfully performs an operation. By that I mean a complete operation – no partial operations and no errors. For example, the message can say: "Your profile has been saved successfully and a confirmation mail has been sent to the email address you provided." This means that each operation in this process (saving the profile and sending the email) was performed successfully.
Success Messages
Show this message type with its own color and icon: green with a check mark.
3. Warning Messages
Warning messages should be displayed when an operation could not be completed as a whole. For example: "Your profile has been saved successfully, but the confirmation mail could not be sent to the email address you provided." Or: "If you don't finish your profile now, you won't be able to search for jobs." The usual warning color is yellow, and the usual icon is an exclamation mark.
4. Error Messages
Error messages should be displayed when an operation could not be completed at all. For example: "Your profile could not be saved." Red is very suitable for this, since people associate this color with an alert of any kind.
5. Validation Messages
The author of the article noticed that many developers can't distinguish validation messages from other message types (such as error or warning messages). I have seen many times that a validation message is displayed as an error message, causing confusion in the user's mind.
Validation is all about user input and should be treated that way. ASP.NET has built-in controls that enable full control over user input. The purpose of validation is to force the user to fill in all required fields, or to enter values in the correct format. Therefore it should be clear that the form will not be submitted if these rules are not met.
That's why I like to style validation messages in a slightly less intense red than error messages, with a red exclamation icon.
Monday, December 19, 2011
Tuesday, September 27, 2011
Ten Tips for effective bug tracking
1. Remember that the only person who can close a bug is the person who found it in the first place. Anyone can resolve it, but only the person who saw the bug can really be sure that what they saw is fixed.
2. A good tester will always try to reduce the reproduction steps to the minimal steps needed to reproduce the bug. This is extremely helpful for the programmer who has to find it.
3. There are many ways to resolve a bug. A developer can resolve a bug as fixed, won't fix, postponed, not repro, duplicate, or by design.
4. You will want to keep careful track of versions. Every build of the software that you give to testers should have a build ID number, so that the poor tester doesn't have to retest the bug on a version of the software where it wasn't even supposed to be fixed.
5. "Not repro" means that nobody could ever reproduce the bug. Programmers often use this when the bug report is missing the reproduction steps.
6. If you are a programmer and you are having trouble getting testers to use the bug database, just don't accept bug reports by any other method. If your testers are used to sending you email with bug reports, bounce the emails back to them with a brief message: "Please put this in the bug database. I can't keep track of emails."
7. If you are a tester and you are having trouble getting programmers to use the bug database, just don't tell them about bugs – put them in the database and let the database email them.
8. If you are a programmer and only some of your colleagues use the bug database, just start assigning them bugs in the database. Eventually they will get the hint.
9. If you are a manager and nobody seems to be using the bug database that you installed at great expense, start assigning new features to people using bugs. A bug database is also a great "unimplemented feature" database.
10. Avoid the temptation to add new fields to the bug database. Every month or so, somebody will come up with a great idea for a new field: keeping track of the file where the bug was found; keeping track of what percentage of the time the bug is reproducible; keeping track of how many times the bug occurred; keeping track of which exact versions of which DLLs were installed on the machine where the bug happened. It's very important not to give in to these ideas. If you do, your new-bug entry screen will end up with a thousand fields to fill in, and nobody will want to enter bug reports any more. For the bug database to work, everybody needs to use it, and if entering bugs "formally" is too much work, people will go around it.
Wednesday, September 21, 2011
Synchronize for a particular object
' Function to synchronize on a particular object: polls up to 10 times,
' waiting 1 second per Exist check, and returns True once the object appears.
Public Function fnSynchronization(objName)
    fnSynchronization = False
    Dim intLoopStart
    Dim intLoopWait
    intLoopStart = 1
    intLoopWait = 10
    ' Wait for the object to appear
    Do While intLoopStart <= intLoopWait
        ' Exist(1) waits up to 1 second for the object
        If objName.Exist(1) Then
            fnSynchronization = True
            Exit Do
        Else
            intLoopStart = intLoopStart + 1
        End If
    Loop
End Function
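A minimal usage sketch (the window description is taken from the standard Flight Reservation sample; adjust it to your own object):
' Wait for the Flight Reservation window and report the outcome
If fnSynchronization(Window("text:=Flight Reservation")) Then
    Reporter.ReportEvent micPass, "Sync", "Window appeared in time"
Else
    Reporter.ReportEvent micFail, "Sync", "Window did not appear within 10 seconds"
End If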
Verify that items exist in a drop-down box
' Verifies that each comma-separated item in strItemsToSearch
' is present in the given drop-down object.
Public Function fnVerifyDropDownItems(objDropdown, strItemsToSearch)
    Dim intItemsCount, intCounter, intItems, strItem, blnItemPresent
    Dim arrItemsToSearch
    fnVerifyDropDownItems = True
    If objDropdown.Exist = False Then
        Reporter.ReportEvent micFail, "The drop-down box '" & objDropdown.GetROProperty("name") & "' should exist", "The drop-down box does not exist"
        fnVerifyDropDownItems = False
        Exit Function
    End If
    ' Get the count of items in the drop-down
    intItemsCount = objDropdown.GetROProperty("items count")
    ' Split the search list on commas (,)
    arrItemsToSearch = Split(strItemsToSearch, ",")
    For intItems = 0 To UBound(arrItemsToSearch)
        blnItemPresent = False
        ' Loop through all items in the drop-down
        For intCounter = 1 To intItemsCount
            ' Get an item
            strItem = objDropdown.GetItem(intCounter)
            ' Case-insensitive comparison with the search item
            If StrComp(Trim(strItem), Trim(arrItemsToSearch(intItems)), 1) = 0 Then
                blnItemPresent = True
                Reporter.ReportEvent micPass, "The item '" & arrItemsToSearch(intItems) & "' should be present in the drop-down box", "The specified item exists in the drop-down box"
                Exit For
            End If
        Next
        If Not blnItemPresent Then
            Reporter.ReportEvent micFail, "The item '" & arrItemsToSearch(intItems) & "' should be present in the drop-down box", "The specified item does not exist in the drop-down box"
            fnVerifyDropDownItems = False
        End If
    Next
End Function
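A minimal usage sketch (the combo box and the item list are illustrative, taken from the Flight Reservation sample):
' Verify two cities in the "Fly From:" combo box
If fnVerifyDropDownItems(Window("Flight Reservation").WinComboBox("Fly From:"), "Denver,London") Then
    MsgBox "All items found"
End If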
Fetch a cell value from a table
Dim x
Browser("S.O.S. Math - Mathematical").Page("S.O.S. Math - Mathematical").Frame("Frame").Check CheckPoint("Frame")
Browser("S.O.S. Math - Mathematical").Page("S.O.S. Math - Mathematical").Frame("Frame").WebElement("Class 1 to Class 12 Lessons,").FireEvent "onmouseover",718,13
Browser("S.O.S. Math - Mathematical").Page("S.O.S. Math - Mathematical").Link("A Trigonometric Table").Click 66,12
Browser("S.O.S. Math - Mathematical").Navigate "http://www.sosmath.com/tables/trigtable/trigtable.html"
Browser("S.O.S. Math - Mathematical").Page("Trig Table").Frame("Frame").Check CheckPoint("Frame_2")
Browser("S.O.S. Math - Mathematical").Page("Trig Table").Frame("Frame").Link("Class 1 to Class 12").FireEvent "onmouseover",42,1
Browser("S.O.S. Math - Mathematical").Page("Trig Table").Frame("Frame").WebElement("Lessons, Animations, Videos").FireEvent "onmouseover",45,8
Browser("S.O.S. Math - Mathematical").Page("Trig Table").WebTable("0").Check CheckPoint("0")
x = Browser("S.O.S. Math - Mathematical").Page("Trig Table").WebTable("0").GetCellData(2,4)
msgbox " value is." & x
Browser("S.O.S. Math – Mathematical").CloseAllTabs
Browser("S.O.S. Math - Mathematical").Page("S.O.S. Math - Mathematical").Frame("Frame").Check CheckPoint("Frame")
Browser("S.O.S. Math - Mathematical").Page("S.O.S. Math - Mathematical").Frame("Frame").WebElement("Class 1 to Class 12 Lessons,").FireEvent "onmouseover",718,13
Browser("S.O.S. Math - Mathematical").Page("S.O.S. Math - Mathematical").Link("A Trigonometric Table").Click 66,12
Browser("S.O.S. Math - Mathematical").Navigate "http://www.sosmath.com/tables/trigtable/trigtable.html"
Browser("S.O.S. Math - Mathematical").Page("Trig Table").Frame("Frame").Check CheckPoint("Frame_2")
Browser("S.O.S. Math - Mathematical").Page("Trig Table").Frame("Frame").Link("Class 1 to Class 12").FireEvent "onmouseover",42,1
Browser("S.O.S. Math - Mathematical").Page("Trig Table").Frame("Frame").WebElement("Lessons, Animations, Videos").FireEvent "onmouseover",45,8
Browser("S.O.S. Math - Mathematical").Page("Trig Table").WebTable("0").Check CheckPoint("0")
x = Browser("S.O.S. Math - Mathematical").Page("Trig Table").WebTable("0").GetCellData(2,4)
msgbox " value is." & x
Browser("S.O.S. Math – Mathematical").CloseAllTabs
Web link counter script
Dim Des_obj,link_col
Browser("editorial » Blog Archive_2").Page("editorial » Blog Archive").Check CheckPoint("editorial » Blog Archive_2")
Browser("editorial » Blog Archive_2").Page("editorial » Blog Archive").Link("Debt Consolidation").Click 48,5
Browser("editorial » Blog Archive_2").Navigate "http://editorial.co.in/debt-consolidation/debt-consolidation.php"
Browser("editorial » Blog Archive_2").Page("editorial » Blog Archive_2").Check CheckPoint("editorial » Blog Archive_3")
Browser("editorial » Blog Archive_2").Page("editorial » Blog Archive_2").Link("Debt Consolidation Calculator").Click 121,4
Browser("editorial » Blog Archive_2").Navigate "http://www.lendingtree.com/home-equity-loans/calculators/loan-consolidation-calculator"
Browser("editorial » Blog Archive_2").Navigate "http://www.lendingtree.com/home-equity-loans/calculators/loan-consolidation-calculator/"
Browser("editorial » Blog Archive_2").Back
Browser("editorial » Blog Archive_2").Navigate "http://editorial.co.in/debt-consolidation/debt-consolidation.php"
Browser("editorial » Blog Archive_2").Back
Browser("editorial » Blog Archive_2").Navigate "http://editorial.co.in/software/software-testing-life-cycle.php"
Set Des_obj = Description.Create
Des_obj("micclass").value = "Link"
Set link_col = Browser("editorial » Blog Archive_2").Page("editorial » Blog Archive").ChildObjects(Des_obj)
MsgBox link_col.Count
Browser("editorial » Blog Archive_2").CloseAllTabs
Browser("editorial » Blog Archive_2").Page("editorial » Blog Archive").Check CheckPoint("editorial » Blog Archive_2")
Browser("editorial » Blog Archive_2").Page("editorial » Blog Archive").Link("Debt Consolidation").Click 48,5
Browser("editorial » Blog Archive_2").Navigate "http://editorial.co.in/debt-consolidation/debt-consolidation.php"
Browser("editorial » Blog Archive_2").Page("editorial » Blog Archive_2").Check CheckPoint("editorial » Blog Archive_3")
Browser("editorial » Blog Archive_2").Page("editorial » Blog Archive_2").Link("Debt Consolidation Calculator").Click 121,4
Browser("editorial » Blog Archive_2").Navigate "http://www.lendingtree.com/home-equity-loans/calculators/loan-consolidation-calculator"
Browser("editorial » Blog Archive_2").Navigate "http://www.lendingtree.com/home-equity-loans/calculators/loan-consolidation-calculator/"
Browser("editorial » Blog Archive_2").Back
Browser("editorial » Blog Archive_2").Navigate "http://editorial.co.in/debt-consolidation/debt-consolidation.php"
Browser("editorial » Blog Archive_2").Back
Browser("editorial » Blog Archive_2").Navigate "http://editorial.co.in/software/software-testing-life-cycle.php"
Set Des_obj = Description.Create
Des_obj("micclass").value = "Link"
Set link_col = Browser("editorial » Blog Archive_2").Page("editorial » Blog Archive").ChildObjects(Des_obj)
msgbox link_col.count
Browser("editorial » Blog Archive_2").CloseAllTabs
Find tooltips on a specific website
Dim descImage, listImages, attrAltText, attrSrcText, i
Browser("Browser_2").Navigate "http://www.medusind.com/"
Browser("US Healthcare Revenue").Page("US Healthcare Revenue").Image("Medusind Solutions - Enabling").Click 150,9
Browser("Browser_2").Navigate "http://www.medusind.com/index.asp"
Set descImage = Description.Create
descImage("html tag").Value = "IMG"
Set listImages = Browser("Webpage error").Page("US Healthcare Revenue").ChildObjects(descImage)
For i = 0 To listImages.Count - 1
    ' The "alt" attribute is what the browser shows as the image tooltip
    attrAltText = listImages(i).GetROProperty("alt")
    attrSrcText = listImages(i).GetROProperty("src")
    If attrAltText <> "" Then
        MsgBox "Image src: " & attrSrcText & vbNewLine & "Tooltip: " & attrAltText
    End If
Next
Browser("Webpage error").CloseAllTabs
Browser("Browser_2").Navigate "http://www.medusind.com/"
Browser("US Healthcare Revenue").Page("US Healthcare Revenue").Image("Medusind Solutions - Enabling").Click 150,9
Browser("Browser_2").Navigate "http://www.medusind.com/index.asp"
set descImage = description.create
descImage("html tag").value = "IMG"
SET listImages = Browser("Webpage error").page("US Healthcare Revenue").childobjects(descImage)
For i=0 to listimages.count-1
attrAltText = ListImages(i).GetRoProperty("alt")
attrrcText = listImages(i).GetRopRoperty("src")
If attrAltText <> "" Then
Msgbox "Images src: " & attrSrcText & vbnewline & "Tooltip: " & attrAltText
End If
Next
Browser("Webpage error").CloseAllTabs
Message window validation
If Not Dialog("Login").Exist(2) Then
    SystemUtil.Run "C:\Program Files\HP\QuickTest Professional\samples\flight\app\flight4a.exe","","C:\Program Files\HP\QuickTest Professional\samples\flight\app\",""
End If
Dialog("Login").Activate
Dialog("Login").WinEdit("Agent Name:").Set "rajan"
Dialog("Login").WinEdit("Agent Name:").Type micTab
Dialog("Login").WinEdit("Password:").SetSecure "4e2e616befbf25362991054e9261351ead9a"
Dialog("Login").WinButton("OK").Click
Dialog("Flight Reservations").WinButton("OK").Click
Dialog("Login").WinButton("Help").Click
Dialog("Flight Reservations").Static("The password is 'MERCURY'").Check CheckPoint("The password is 'MERCURY'")
' Read the message text from the Flight Reservations message box
message = Dialog("text:=Login").Dialog("text:=Flight Reservations").Static("window id:=65535").GetROProperty("text")
Dialog("Flight Reservations").WinButton("OK").Click
If message = "The password is 'MERCURY'" Then
    Reporter.ReportEvent micPass, "Res", "Correct message: " & message
Else
    Reporter.ReportEvent micFail, "Res", "Incorrect message"
End If
SystemUtil.Run "C:\Program Files\HP\QuickTest Professional\samples\flight\app\flight4a.exe","","C:\Program Files\HP\QuickTest Professional\samples\flight\app\",""
End If
Dialog("Login").Activate
Dialog("Login").WinEdit("Agent Name:").Set "rajan"
Dialog("Login").WinEdit("Agent Name:").Type micTab
Dialog("Login").WinEdit("Password:").SetSecure "4e2e616befbf25362991054e9261351ead9a"
Dialog("Login").WinButton("OK").Click
Dialog("Flight Reservations").WinButton("OK").Click
Dialog("Login").WinButton("Help").Click
Dialog("Flight Reservations").Static("The password is 'MERCURY'").Check CheckPoint("The password is 'MERCURY'")
message = Dialog("text:= Login").dialog("text:= Flight Reservations").Static("window id:= 65535").GetROProperty("text")
Dialog("Flight Reservations").WinButton("OK").Click
If message = "The password is 'MERCURY'" Then
reporter.ReportEvent 0,"Res","Correct message" & message
else
reporter.ReportEvent 1,"Res","Incorrect message"
End If
Check Checkbox using Descriptive programming
Option Explicit
Dim qtp, flight_app, f, t, i, j, x, y
If Not Window("text:=Flight Reservation").Exist(2) = True Then
    qtp = Environment("ProductDir")
    flight_app = "\samples\flight\app\flight4a.exe"
    SystemUtil.Run qtp & flight_app
    Dialog("text:=Login").Activate
    Dialog("text:=Login").WinEdit("attached text:=Agent Name:").Set "asdf"
    Dialog("text:=Login").WinEdit("attached text:=Password:").SetSecure "4e2d605c46a3b5d32706b9ea1735d00e79319dd2"
    Dialog("text:=Login").WinButton("text:=OK").Click
End If
Window("text:=Flight Reservation").Activate
Window("text:=Flight Reservation").ActiveX("acx_name:=MaskEdBox","window id:=0").Type "121212"
f = Window("text:=Flight Reservation").WinComboBox("attached text:=Fly From:").GetItemsCount
For i = 0 To f - 1
    Window("text:=Flight Reservation").WinComboBox("attached text:=Fly From:").Select(i)
    x = Window("text:=Flight Reservation").WinComboBox("attached text:=Fly From:").GetROProperty("text")
    t = Window("text:=Flight Reservation").WinComboBox("attached text:=Fly To:","x:=244","y:=143").GetItemsCount
    For j = 0 To t - 1
        Window("text:=Flight Reservation").WinComboBox("attached text:=Fly To:","x:=244","y:=143").Select(j)
        y = Window("text:=Flight Reservation").WinComboBox("attached text:=Fly To:","x:=244","y:=143").GetROProperty("text")
        ' The "Fly To" city should differ from the "Fly From" city
        If x <> y Then
            Reporter.ReportEvent micPass, "Res", "Test passed"
        Else
            Reporter.ReportEvent micFail, "Res", "Test failed"
        End If
    Next
Next
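The post title mentions checkboxes, so as a minimal sketch of checking one with the same descriptive-programming style (the "Open Order" dialog and its "Order No." checkbox from the Flight Reservation sample are assumed here, not verified):
' Sketch: set a checkbox by description
' (assumes the Open Order dialog is already open)
Window("text:=Flight Reservation").Dialog("text:=Open Order").WinCheckBox("attached text:=Order No.").Set "ON"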
Getting dynamic text from a webpage using a text output value
Step 1: Open Google and search for: site:motevich.blogspot.com QTP
Step 2: Click the Search button
Step 3: Get the search results
Step 4: Below the Google search bar, note the number of results displayed, e.g. 1000 or 2000
Step 5: We can capture just that result count
Step 6: Now search for "test" instead of "QTP"
Step 7: Click the Search button
Step 8: Get the search results
Step 9: Now we get some other result count, e.g. 3000 or 4000
Step 10: Stop recording
Step 11: Go to the Active Screen, select the value you want to capture, and right-click it
Step 12: Click "Text Output Value"; a dialog opens
Step 13: Make the required changes in that dialog
Step 14: Click OK
Step 15: Run the script again
Browser("Google").Page("Google").WebEdit("q").Set "site: motevich.blogspot.com QTP"
Browser("Google").Page("Google").WebButton("Google Search").Click
Browser("Google").Page("site: motevich.blogspot.com").Sync
Browser("Google").Page("Google").Output CheckPoint("ResCount_2")
MsgBox DataTable.Value("count")
Browser("Google").CloseAllTabs
Verify a checkpoint, and branch further processing through functions depending on whether the checkpoint passes or fails
Dim Str
Dialog("Login").WinEdit("Agent Name:").Set "rajan"
Dialog("Login").WinEdit("Agent Name:").Type micTab
Dialog("Login").WinEdit("Password:").SetSecure "4e2558dc476c11a2ea3597e2545811aef6477598"
Dialog("Login").WinButton("OK").Click
Window("Flight Reservation").WinComboBox("Fly To:").Check CheckPoint("Fly To:_2")
' Check returns True or False, so its result can drive the flow
Str = Window("Flight Reservation").WinComboBox("Fly To:").Check(CheckPoint("Fly To:"))
MsgBox Str
If Str = True Then
    process()
Else
    exitaction()
End If
Private Function process()
    Window("Flight Reservation").WinComboBox("Fly To:").Select "London"
    Window("Flight Reservation").Dialog("Flight Reservations").WinButton("OK").Click
    exitaction()
End Function
Private Function exitaction()
    Window("Flight Reservation").Close
End Function
Dialog("Login").WinEdit("Agent Name:").Set "rajan"
Dialog("Login").WinEdit("Agent Name:").Type micTab
Dialog("Login").WinEdit("Password:").SetSecure "4e2558dc476c11a2ea3597e2545811aef6477598"
Dialog("Login").WinButton("OK").Click
Window("Flight Reservation").WinComboBox("Fly To:").Check CheckPoint("Fly To:_2")
Str = Window("Flight Reservation").WinComboBox("Fly To:").Check (CheckPoint("Fly To:"))
msgbox (Str)
If Str = true Then
process()
Else
exitaction()
End If
Private Function process()
Window("Flight Reservation").WinComboBox("Fly To:").Select "London"
Window("Flight Reservation").Dialog("Flight Reservations").WinButton("OK").Click
exitaction()
End Function
Private Function exitaction()
Window("Flight Reservation").Close
End Function
Open various applications using VBScript
Dim wsh
' Launches the program, document, or URL passed in arg1
Public Function Launch_App(arg1)
    Set wsh = CreateObject("WScript.Shell")
    wsh.Run arg1
    Set wsh = Nothing
End Function
Call Launch_App("notepad.exe")
Wait 1
Call Launch_App("cmd.exe")
Wait 1
Call Launch_App("http://www.google.com")
Wait 1
Call Launch_App("calc.exe")
Run multiple QTP test scripts from the command prompt
Dim App                                                     ' Declaration
Set App = CreateObject("QuickTest.Application")             ' Create the QTP Application object
App.Launch                                                  ' Start QuickTest
App.Visible = True                                          ' Make the QTP window visible
Dim QTP_Tests(3)                                            ' Array of test paths (indexes 1 to 3 used)
QTP_Tests(1) = "C:\Users\perinbarajani\Documents\HP\QuickTest Professional\Tests\pagecheckpoint"
QTP_Tests(2) = "C:\Users\perinbarajani\Documents\HP\QuickTest Professional\Tests\textcheckpoint"
QTP_Tests(3) = "C:\Users\perinbarajani\Documents\HP\QuickTest Professional\Tests\Tooltip"
Set res_obj = CreateObject("QuickTest.RunResultsOptions")   ' Create the run-results object
For i = 1 To UBound(QTP_Tests)                              ' Run each test in turn
    App.Open QTP_Tests(i), True                             ' Open the test in read-only mode
    Set QTP_Test = App.Test
    res_obj.ResultsLocation = QTP_Tests(i) & "\QTPResults"  ' Store results under the test folder
    QTP_Test.Run res_obj, True                              ' Run the test and wait for it to finish
    QTP_Test.Close
Next
App.Quit                                                    ' Quit QTP
Set res_obj = Nothing                                       ' Release the results object
Set QTP_Test = Nothing                                      ' Release the test object
Set App = Nothing                                           ' Release the application object
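To launch this from the command prompt, save the driver as a .vbs file (the name RunTests.vbs below is just an example) and run it with cscript so any output stays in the console:
cscript //nologo RunTests.vbs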
Wednesday, September 7, 2011
Bugs, defects and issues
This article explains the terms bug, defect, and issue.
All three terms mean the same thing: they all represent a problem in the software.
Background of "Bug"
In 1947, a huge electromechanical computer stopped functioning suddenly. Operators traced the problem to a moth trapped in one of its relays, and they fixed it by removing the bug. Software "bug tracking" and "bug fixing" evolved from this!
Below is the picture of the first real bug reported:
For several years, the terms "bug" and "defect" were widely used in the software development process to indicate problems in software. However, software engineers started questioning these terms, because in many cases they argue that a certain "bug" is not a bug but a "feature", or "how the customer originally asked for it". To avoid conflicts between the testing and development teams, several companies now use a different term: "software issue".
Even though both "issue" and "bug" indicate some kind of problem in software, developers feel the term "issue" is less offensive than "bug". This is because a bug directly indicates a problem in the code a developer wrote, while an "issue" indicates any kind of general problem in the software, including a wrong requirement, bad design, and so on.
Whatever problems the QA or testing team finds, they will call them "issues". An issue may or may not be a bug. A tester may call a feature an "issue" because it is not what the customer wants, even though it is a nice feature. Or, if the software is not delivered to the QA team on the planned date, that can be reported as an "issue".
The tester reports issues, and his role ends there. It is the job of the product manager to decide whether and how to resolve an issue. Depending on the nature of the issue, the product manager assigns it to the appropriate team. The product manager may even decide to "waive" the issue if he feels it is not a problem. If the issue is a bug, it is assigned to the developers, and they fix the code. When the bug is fixed, the testing team retests the software and verifies the fix. If the issue is fixed, its status is changed to "closed".
An issue can be resolved in different ways, depending on its nature. If it is a software bug, it goes to the developer to correct the code. If it is due to a wrong requirement, it goes to the customer or marketing to correct the requirement. If the issue was caused by a bad configuration on the test machine, it is assigned to the appropriate hardware representative to correct the configuration.
Software developers prefer the term "issue" to "bug" because "issue" does not necessarily indicate a problem in their code. The term "issue" is becoming the standard in the software testing process for indicating problems in software.
Monday, August 29, 2011
Count Buttons of Flight reservation window
SystemUtil.Run "C:\Program Files\HP\QuickTest Professional\samples\flight\app\flight4a.exe","","C:\Program Files\HP\QuickTest Professional\samples\flight\app\",""
Dialog("Login").WinEdit("Agent Name:").Set "rajan"
Dialog("Login").WinEdit("Agent Name:").Type micTab
Dialog("Login").WinEdit("Password:").SetSecure "4e1ebb0beb016ff0acd0c8d19e774cb573"
Dialog("Login").WinButton("OK").Click
Window("Flight Reservation").ActiveX("MaskEdBox").Type "121212"
Window("Flight Reservation").WinComboBox("Fly From:").Select "Denver"
Window("Flight Reservation").WinComboBox("Fly To:").Select "London"
Window("Flight Reservation").WinButton("FLIGHT").Click
Window("Flight Reservation").Dialog("Flights Table").WinList("From").Select "20262 DEN 10:12 AM LON 05:23 PM AA $112.20"
Window("Flight Reservation").Dialog("Flights Table").WinButton("OK").Click
Window("Flight Reservation").WinEdit("Name:").Set "ranab"
Window("Flight Reservation").WinButton("Insert Order").Click
Call Count_Buttons()
Window("Flight Reservation").Close
Write the function below in a separate function library and attach it as a resource, so any script can call it:
Function Count_Buttons()
    Dim oButton, Buttons, ToButtons
    Set oButton = Description.Create
    oButton("Class Name").Value = "WinButton"
    ' Collect all WinButton children of the Flight Reservation window
    Set Buttons = Window("text:=Flight Reservation").ChildObjects(oButton)
    ToButtons = Buttons.Count
    MsgBox ToButtons
End Function
Dialog("Login").WinEdit("Agent Name:").Set "rajan"
Dialog("Login").WinEdit("Agent Name:").Type micTab
Dialog("Login").WinEdit("Password:").SetSecure "4e1ebb0beb016ff0acd0c8d19e774cb573"
Dialog("Login").WinButton("OK").Click
Window("Flight Reservation").ActiveX("MaskEdBox").Type "121212"
Window("Flight Reservation").WinComboBox("Fly From:").Select "Denver"
Window("Flight Reservation").WinComboBox("Fly To:").Select "London"
Window("Flight Reservation").WinButton("FLIGHT").Click
Window("Flight Reservation").Dialog("Flights Table").WinList("From").Select "20262 DEN 10:12 AM LON 05:23 PM AA $112.20"
Window("Flight Reservation").Dialog("Flights Table").WinButton("OK").Click
Window("Flight Reservation").WinEdit("Name:").Set "ranab"
Window("Flight Reservation").WinButton("Insert Order").Click
Call Count_Buttons()
Window("Flight Reservation").Close
write function in separate function window and add resource while running any script:
Function Count_Buttons()
Dim oButton,Buttons,ToButtons,i
Set oButton = Description.Create
oButton("Class Name").value = "WinButton"
Set Buttons = Window("text:= Flight Reservation").ChildObjects(oButton)
ToButtons = Buttons.count
msgbox ToButtons
End Function
Friday, August 19, 2011
Acceptance Testing
DEFINITION
Acceptance Testing is a level of the software testing process where a system is tested for acceptability.
The purpose of this test is to evaluate the system’s compliance with the business requirements and assess whether it is acceptable for delivery.
ANALOGY
During the process of manufacturing a ballpoint pen, the cap, the body, the tail and clip, the ink cartridge and the ballpoint are produced separately and unit tested separately. When two or more units are ready, they are assembled and Integration Testing is performed. When the complete pen is integrated, System Testing is performed. Once the System Testing is complete, Acceptance Testing is performed so as to confirm that the ballpoint pen is ready to be made available to the end-users.
METHOD
Usually, Black Box Testing method is used in Acceptance Testing.
Testing does not usually follow a strict procedure and is not scripted but is rather ad-hoc.
TASKS
Acceptance Test Plan
Prepare
Review
Rework
Baseline
Acceptance Test Cases/Checklist
Prepare
Review
Rework
Baseline
Acceptance Test
Perform
When is it performed?
Acceptance Testing is performed after System Testing and before making the system available for actual use.
Who performs it?
Internal Acceptance Testing (Also known as Alpha Testing) is performed by members of the organization that developed the software but who are not directly involved in the project (Development or Testing). Usually, it is the members of Product Management, Sales and/or Customer Support.
External Acceptance Testing is performed by people who are not employees of the organization that developed the software.
Customer Acceptance Testing is performed by the customers of the organization that developed the software. They are the ones who asked the organization to develop the software for them. [This is in the case of the software not being owned by the organization that developed it.]
User Acceptance Testing (Also known as Beta Testing) is performed by the end users of the software. They can be the customers themselves or the customers’ customers.
Definition by ISTQB
acceptance testing: Formal testing with respect to user needs, requirements, and business processes conducted to determine whether or not a system satisfies the acceptance criteria and to enable the user, customers or other authorized entity to determine whether or not to accept the system.
System Testing
DEFINITION
System Testing is a level of the software testing process where a complete, integrated system/software is tested.
The purpose of this test is to evaluate the system’s compliance with the specified requirements.
ANALOGY
During the process of manufacturing a ballpoint pen, the cap, the body, the tail, the ink cartridge and the ballpoint are produced separately and unit tested separately. When two or more units are ready, they are assembled and Integration Testing is performed. When the complete pen is integrated, System Testing is performed.
METHOD
Usually, Black Box Testing method is used.
TASKS
System Test Plan
Prepare
Review
Rework
Baseline
System Test Cases
Prepare
Review
Rework
Baseline
System Test
Perform
When is it performed?
System Testing is performed after Integration Testing and before Acceptance Testing.
Who performs it?
Normally, independent Testers perform System Testing.
Definition by ISTQB
system testing: The process of testing an integrated system to verify that it meets specified requirements.
Integration Testing
DEFINITION
Integration Testing is a level of the software testing process where individual units are combined and tested as a group.
The purpose of this level of testing is to expose faults in the interaction between integrated units.
Test drivers and test stubs are used to assist in Integration Testing.
Note: The definition of a unit is debatable; it could mean any of the following:
1. the smallest testable part of a software
2. a 'module', which could consist of many of (1)
3. a 'component', which could consist of many of (2)
ANALOGY
During the process of manufacturing a ballpoint pen, the cap, the body, the tail and clip, the ink cartridge and the ballpoint are produced separately and unit tested separately. When two or more units are ready, they are assembled and Integration Testing is performed. For example, whether the cap fits into the body or not.
METHOD
Any of Black Box Testing, White Box Testing, and Gray Box Testing methods can be used. Normally, the method depends on your definition of ‘unit’.
TASKS
Integration Test Plan
Prepare
Review
Rework
Baseline
Integration Test Cases/Scripts
Prepare
Review
Rework
Baseline
Integration Test
Perform
When is Integration Testing performed?
Integration Testing is performed after Unit Testing and before System Testing.
Who performs Integration Testing?
Either Developers themselves or independent Testers perform Integration Testing.
APPROACHES
Big Bang is an approach to Integration Testing where all or most of the units are combined together and tested at one go. This approach is taken when the testing team receives the entire software in a bundle. So what is the difference between Big Bang Integration Testing and System Testing? Well, the former tests only the interactions between the units while the latter tests the entire system.
Top Down is an approach to Integration Testing where top level units are tested first and lower level units are tested step by step after that. This approach is taken when top down development approach is followed. Test Stubs are needed to simulate lower level units which may not be available during the initial phases.
Bottom Up is an approach to Integration Testing where bottom level units are tested first and upper level units step by step after that. This approach is taken when bottom up development approach is followed. Test Drivers are needed to simulate higher level units which may not be available during the initial phases.
Sandwich/Hybrid is an approach to Integration Testing which is a combination of Top Down and Bottom Up approaches.
TIPS
Ensure that you have a proper Detail Design document where interactions between each unit are clearly defined. In fact, you will not be able to perform Integration Testing without this information.
Ensure that you have a robust Software Configuration Management system in place. Or else, you will have a tough time tracking the right version of each unit, especially if the number of units to be integrated is huge.
Make sure that each unit is first unit tested before you start Integration Testing.
As far as possible, automate your tests, especially when you use the Top Down or Bottom Up approach, since regression testing is important each time you integrate a unit, and manual regression testing can be inefficient.
Definition by ISTQB
integration testing: Testing performed to expose defects in the interfaces and in the interactions between integrated components or systems. See also component integration testing, system integration testing.
component integration testing: Testing performed to expose defects in the interfaces and interaction between integrated components.
system integration testing: Testing the integration of systems and packages; testing interfaces to external organizations (e.g. Electronic Data Interchange, Internet).
Unit Testing
DEFINITION
Unit Testing is a level of the software testing process where individual units/components of a software/system are tested. The purpose is to validate that each unit of the software performs as designed.
A unit is the smallest testable part of software. It usually has one or a few inputs and usually a single output. In procedural programming a unit may be an individual program, function, procedure, etc. In object-oriented programming, the smallest unit is a method, which may belong to a base/super class, abstract class or derived/child class. (Some treat a module of an application as a unit. This is to be discouraged as there will probably be many individual units within that module.)
Unit testing frameworks, drivers, stubs and mock or fake objects are used to assist in unit testing.
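For instance, a minimal unit test written with the JUnit framework might look like the sketch below (the Calculator class is hypothetical, defined here only to make the example self-contained).
// Minimal illustrative unit test (JUnit 4 style); Calculator is hypothetical.
import org.junit.Test;
import static org.junit.Assert.assertEquals;

class Calculator {
    int add(int a, int b) { return a + b; }
}

public class CalculatorTest {
    @Test
    public void addsTwoNumbers() {
        Calculator calc = new Calculator();
        assertEquals(5, calc.add(2, 3));
    }
}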
METHOD
Unit Testing is performed by using the White Box Testing method.
When is it performed?
Unit Testing is the first level of testing and is performed prior to Integration Testing.
Who performs it?
Unit Testing is normally performed by software developers themselves or their peers. In rare cases it may also be performed by independent software testers.
TASKS
Unit Test Plan
Prepare
Review
Rework
Baseline
Unit Test Cases/Scripts
Prepare
Review
Rework
Baseline
Unit Test
Perform
BENEFITS
Unit testing increases confidence in changing/maintaining code. If good unit tests are written and if they are run every time any code is changed, the likelihood of any defects due to the change being promptly caught is very high. If unit testing is not in place, the most one can do is hope for the best and wait till the test results at higher levels of testing are out. Also, if the code has already been made less interdependent to make unit testing possible, the unintended impact of changes to any code is lower.
Code is more reusable. In order to make unit testing possible, code needs to be modular, and modular code is easier to reuse.
Development is faster. How? If you do not have unit testing in place, you write your code and perform that fuzzy ‘developer test’ (you set some breakpoints, fire up the GUI, provide a few inputs that hopefully hit your code and hope that you are all set). If you have unit testing in place, you write the test, write the code and run the tests. Writing tests takes time, but that time is compensated for by how quickly the tests run: you need not fire up the GUI and provide all those inputs. And, of course, unit tests are more reliable than ‘developer tests’. Development is faster in the long run too. How? The effort required to find and fix defects found during unit testing is peanuts in comparison to those found during system testing or acceptance testing.
The cost of fixing a defect detected during unit testing is lower than that of defects detected at higher levels. Compare the cost (time, effort, destruction, humiliation) of a defect detected during acceptance testing, or, say, when the software is live.
Debugging is easy. When a test fails, only the latest changes need to be debugged. With testing at higher levels, changes made over the span of several days/weeks/months need to be debugged.
Code is more reliable. Why? I think there is no need to explain this to a sane person.
TIPS
Find a tool/framework for your language.
Do not create test cases for ‘everything’: some cases are covered implicitly by others. Instead, focus on the tests that impact the behavior of the system.
Isolate the development environment from the test environment.
Use test data that is close to that of production.
Before fixing a defect, write a test that exposes the defect. Why? First, you will later be able to catch the defect if you do not fix it properly. Second, your test suite is now more comprehensive. Third, you will most probably be too lazy to write the test after you have already ‘fixed’ the defect.
Write test cases that are independent of each other. For example, if a class depends on a database, do not write a case that interacts with the database to test the class. Instead, create an abstract interface around that database connection and implement that interface with a mock object, as in the sketch below.
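A minimal Java sketch of this idea (all names here, such as UserRepository and GreetingService, are hypothetical): the class under test depends on an interface rather than a concrete database, so the test can supply an in-memory mock.
// Hypothetical sketch: isolating a class from the database via an interface.
import java.util.HashMap;
import java.util.Map;

interface UserRepository {
    String findNameById(int id);
}

class GreetingService {
    private final UserRepository repository;
    GreetingService(UserRepository repository) { this.repository = repository; }
    String greet(int userId) {
        return "Hello, " + repository.findNameById(userId) + "!";
    }
}

// In-memory mock: no database connection needed, so the test stays
// independent and repeatable.
class InMemoryUserRepository implements UserRepository {
    private final Map<Integer, String> users = new HashMap<Integer, String>();
    void add(int id, String name) { users.put(id, name); }
    public String findNameById(int id) { return users.get(id); }
}

public class GreetingServiceTest {
    public static void main(String[] args) {
        InMemoryUserRepository repo = new InMemoryUserRepository();
        repo.add(1, "Alice");
        GreetingService service = new GreetingService(repo);
        String greeting = service.greet(1);
        if (!"Hello, Alice!".equals(greeting)) {
            throw new AssertionError("Unexpected greeting: " + greeting);
        }
        System.out.println("Test passed: " + greeting);
    }
}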
Aim at covering all paths through the unit. Pay particular attention to loop conditions.
Make sure you are using a version control system to keep track of your code as well as your test cases.
In addition to writing cases to verify the behavior, write cases to ensure performance of the code.
Perform unit tests continuously and frequently.
ONE MORE REASON
Let’s say you have a program comprising two units, and the only testing you perform is system testing. [You skip unit and integration testing.] During testing, you find a bug. Now, how will you determine the cause of the problem?
Is the bug due to an error in unit 1?
Is the bug due to an error in unit 2?
Is the bug due to errors in both units?
Is the bug due to an error in the interface between the units?
Is the bug due to an error in the test or test case?
Unit testing is often neglected but it is, in fact, the most important level of testing.
When do we have to stop testing?
Common factors in deciding when to stop are:
Deadlines (release deadlines, testing deadlines, etc.)
Test cases completed with a certain percentage passed.
Test budget depleted.
Coverage of code/functionality/requirements reaches a specified point.
Bug rate falls below a certain level.
Beta or alpha testing period ends.
A rough illustration of how such factors can be combined into explicit exit criteria follows this list.
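As a sketch only (the thresholds below are invented for illustration, not recommendations), a team might encode its exit criteria like this:
// Purely illustrative: evaluating simple test exit criteria.
// All threshold values are invented examples, not recommendations.
public class ExitCriteria {
    public static void main(String[] args) {
        double passRate = 0.97;        // fraction of executed test cases that passed
        double coverage = 0.85;        // code/requirements coverage achieved
        int openCriticalBugs = 0;      // unresolved critical defects
        boolean deadlineReached = false;

        boolean canStop = deadlineReached
                || (passRate >= 0.95 && coverage >= 0.80 && openCriticalBugs == 0);
        System.out.println("Stop testing? " + canStop);
    }
}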
Why we have to start testing early
Introduction:
You have probably heard and read in blogs that “testing should start early in the life cycle of development”. In this chapter, we will discuss very practically why testing should start early.
Fact One
Let’s start with the regular software development life cycle:
First we’ve got a planning phase: needs are expressed, people are contacted, meetings are booked. Then the decision is made: we are going to do this project.
After that analysis will be done, followed by code build.
Now it’s your turn: you can start testing.
Do you think this is what is going to happen? Dream on.
This is what's going to happen:
Planning, analysis and code build will take more time than planned.
That would not be a problem if the total project time were extended accordingly. Forget it; it is most likely that you will have to perform the tests in a few days.
The deadline is not going to be moved at all: promises have been made to customers, and project managers are going to lose their bonuses if they deliver past the deadline.
Fact Two
The earlier you find a bug, the cheaper it is to fix it.
If you are able to find the bug during requirements determination, it is going to be 50 times cheaper (!!) than when you find the same bug in testing.
It will even be 100 times cheaper (!!) than when you find the bug after going live.
Easy to understand: if you find the bug in the requirements definitions, all you have to do is change the text of the requirements. If you find the same bug in final testing, analysis and code build have already taken place. Much more effort has been spent building something that nobody wanted.
Conclusion: start testing early!
This is what you should do:
Golden software testing rules
Introduction
Read these simple golden rules for software testing. They are based on years of practical testing experience and solid theory.
It's all about finding bugs as early as possible:
Start the software testing process as soon as you get the requirement specification document. Review the specification carefully and get your queries resolved. This way you can find bugs in the requirement document itself (otherwise the development team might develop the software with the wrong functionality), and it often happens that the requirement document gets changed when the test team raises queries.
After the requirement document review, prepare scenarios and test cases.
Make sure you have at least these three software testing levels:
1. Integration testing or Unit testing (performed by dev team or separate white box testing team)
2. System testing (performed by professional testers)
3. Acceptance testing (performed by end users, sometimes Business Analyst and test leads assist end users)
Don’t expect too much of automated testing
Automated testing can be extremely useful and can be a real time saver. But it can also turn out to be a very expensive and unsuitable solution. Consider the ROI.
Deal with resistance
Don't try to become popular by completing tasks ahead of time at the cost of quality. Some testers do this and get appreciation from managers in early project cycles. But you should stick to quality and perform quality testing. If you really tested the application fully, then you can back it up with numbers (count of bugs reported, test cases prepared, etc.). Definitely your project partners will appreciate the great job you're doing!
Do regression testing for every new release:
Once the development team is done with the bug fixes and gives a release to the testing team, the testing team should perform regression testing in addition to verifying the bug fixes. In early test cycles, regression testing of the entire application is required. In late testing cycles, when the application is near UAT, discuss the impact of the bug fixes with the dev team and test the affected functionality accordingly.
Test with real data:
Apart from invalid data entry, testers must test the application with real data. For this, help can be taken from the Business Analyst and the client.
You can take help from sites like http://www.fakenamegenerator.com/. But when it comes to the finance domain, request sample data from the client, because there can be data like $10.87 million, etc.
Keep track of change requests
Sometimes, in later test cycles, everyone in the project becomes so busy that nobody gets time to document the change requests. In this situation, I suggest testers (test leads) keep track of change requests (which happen through email communication) in a separate Excel document.
Also give the change requests a priority status:
Show stopper (must have, no work around)
Major (must have, work around possible)
Minor (not business critical, but wanted)
Nice to have
Actively use the above statuses for reporting and follow-up!
Note - In CMMI or other process-oriented companies, there are already change request management (configuration management) systems.
Don't be a Quality Police:
Let the Business Analyst and technical managers decide which bugs need to be fixed. Testers can certainly give them input on why a fix is required.
'Impact' and 'Chance' are the keys to deciding on risk and priority
You should keep a helicopter view on your project. For each part of your application you have to define the 'impact' and the 'chance' of anything going wrong.
'Impact' being what happens if a certain situation occurs.
What’s the impact of an airplane crashing?
'Chance' is the likelihood that something happens.
What’s the chance of an airplane crashing?
Delivery to client:
Once the final testing cycle is completed, or when the application is going for UAT, the test lead should report how many bugs still persist (with their priorities) and let the technical manager (dev team), product manager and business analyst decide whether the application should be delivered or not. Testers can certainly give them input on the OPEN bugs.
Focus on the software testing process, not on the tools
Test management and other testing tools make our tasks easier, but these tools cannot perform testing. So instead of focusing on tools, focus on core software testing. You can be very successful using basic tools like MS Excel.
Golden rules for bug reporting
Introduction
Read these simple golden rules for bug reporting. They are based on years of practical testing experience and solid theory.
Make one change request for every bug
- This will enable you to keep count of the number of bugs in the application
- You'll be able to give a priority on every bug separately
- You'll be able to retest each resolved bug separately (and prevent requests that are only half resolved)
Give step by step description of the problem:
E.g. "- I entered the Client page
- I performed a search on 'Google'
- In the Result page 2 clients were displayed with ‘Google’ in their name
- I clicked on the second one
---> The application returned a server error"
Explain the problem in plain language:
- Developers / re-testers don't necessarily have business knowledge
- Don't use business terminology
Be concrete
- Errors usually don't appear for every case you test
- What is the difference between this case (that failed) and other cases (that didn't fail)?
Give a clear explanation on the circumstances where the bug appeared
- Give concrete information (field names, numbers, names,...)
If a result is not as expected, indicate what is expected exactly
- Not OK: "The message given in this screen is not correct"
- OK: "The message that appears in the Client screen when entering a wrong client number is 'enter client number'.
--> This should be: 'Enter a valid client number please'"
Explain why (in your opinion) the request is a "show stopper"
- Don't expect other contributors to the project to always know what is important
- If you know a certain bug is critical, explain why!
Last but not least: don't forget to use screen shots!
- One picture says more than 1000 words
- Use advanced tools like SnagIt (www.techsmith.com/screen-capture.asp)
When testers follow these rules, it will be a real time and money saver for your project! Don't expect the testers to know this by themselves. Explain these rules to them and give feedback when they do bad bug reporting!
QTP Scripts
Script to find out broken links:
'Start of Code
' Get the link under test and read its target URL
Set a = Browser().Page().Link()
Dim URL, httprot
URL = a.GetROProperty("href")
' Request the URL directly and inspect the HTTP status code
Set httprot = CreateObject("MSXML2.XmlHttp")
httprot.open "GET", URL, False
On Error Resume Next
httprot.send()
Print httprot.Status
' Status 200 means the link is reachable; anything else suggests it is broken
If httprot.Status <> 200 Then
    MsgBox "fail"
Else
    MsgBox "pass"
End If
Set httprot = Nothing
'End Of Code
---------------------------------------------------------
Get names of all open Browsers:
' Describe the browsers to look for (here: Internet Explorer 6)
Set bDesc = Description.Create()
bDesc("application version").Value = "internet explorer 6"
' Collect all matching browser objects on the desktop
Set bColl = Desktop.ChildObjects(bDesc)
Cnt = bColl.Count
MsgBox "There are total: " & Cnt & " browsers opened"
For i = 0 To (Cnt - 1)
    MsgBox "Browser: " & i & " has title: " & bColl(i).GetROProperty("title")
Next ' i
Set bColl = Nothing
Set bDesc = Nothing
------------------------------------------------------------
QTP script to capture the text of a tooltip:
' Place mouse cursor over the link
Browser("Yahoo!").Page("Yahoo!").WebElement("text:=My Yahoo!").FireEvent "onmouseover"
wait 1
' Grab tooltip
ToolTip = Window("nativeclass:=tooltips_class32").GetROProperty("text")
Capturing tooltips of images:
Now, I'm going to show how to capture tooltips of images located on a Web page. Actually, the solution is simple.
To capture the tooltip of an image, we can get the value of its "alt" run-time object property with the GetROProperty("alt") function:
Browser("brw").Page("pg").Image("img").GetROProperty("alt")
Let's verify this code in practice. For example, let's check the tooltips on the Wikipedia Main page.
SDLC (Software Development Life Cycle)
SOFTWARE DEVELOPMENT LIFE CYCLE [SDLC] Information:
Software Development Life Cycle, or Software Development Process, defines the steps/stages/phases in the building of software.
There are various kinds of software development models like:
Waterfall model
Spiral model
Iterative and incremental development (like ‘Unified Process’ and ‘Rational Unified Process’)
Agile development (like ‘Extreme Programming’ and ‘Scrum’)
Models are evolving with time and the development life cycle can vary significantly from one model to the next. It is beyond the scope of this particular article to discuss each model. However, each model comprises all or some of the following phases/activities/tasks.
SDLC IN SUMMARY
Project Planning
Requirements Development
Estimation
Scheduling
Design
Coding
Test Build/Deployment
Unit Testing
Integration Testing
User Documentation
System Testing
Acceptance Testing
Production Build/Deployment
Release
Maintenance
SDLC IN DETAIL
Project Planning
Prepare
Review
Rework
Baseline
Revise [if necessary] >> Review >> Rework >> Baseline
Requirements Development [Business Requirements and Software/Product Requirements]
Develop
Review
Rework
Baseline
Revise [if necessary] >> Review >> Rework >> Baseline
Estimation [Size / Effort / Cost]
Prepare
Review
Rework
Baseline
Revise [if necessary] >> Review >> Rework >> Baseline
Scheduling
Prepare
Review
Rework
Baseline
Revise [if necessary] >> Review >> Rework >> Baseline
Designing [High Level Design and Detail Design]
Coding
Code
Review
Rework
Commit
Recode [if necessary] >> Review >> Rework >> Commit
Test Builds Preparation/Deployment
Build/Deployment Plan
Prepare
Review
Rework
Baseline
Revise [if necessary] >> Review >> Rework >> Baseline
Build/Deploy
Unit Testing
Test Plan
Prepare
Review
Rework
Baseline
Revise [if necessary] >> Review >> Rework >> Baseline
Test Cases/Scripts
Prepare
Review
Rework
Baseline
Execute
Revise [if necessary] >> Review >> Rework >> Baseline >> Execute
Integration Testing
Test Plan
Prepare
Review
Rework
Baseline
Revise [if necessary] >> Review >> Rework >> Baseline
Test Cases/Scripts
Prepare
Review
Rework
Baseline
Execute
Revise [if necessary] >> Review >> Rework >> Baseline >> Execute
User Documentation
Prepare
Review
Rework
Baseline
Revise [if necessary] >> Review >> Rework >> Baseline
System Testing
Test Plan
Prepare
Review
Rework
Baseline
Revise [if necessary] >> Review >> Rework >> Baseline
Test Cases/Scripts
Prepare
Review
Rework
Baseline
Execute
Revise [if necessary] >> Review >> Rework >> Baseline >> Execute
Acceptance Testing [Internal Acceptance Test and External Acceptance Test]
Test Plan
Prepare
Review
Rework
Baseline
Revise [if necessary] >> Review >> Rework >> Baseline
Test Cases/Scripts
Prepare
Review
Rework
Baseline
Execute
Revise [if necessary] >> Review >> Rework >> Baseline >> Execute
Production Build/Deployment
Build/Deployment Plan
Prepare
Review
Rework
Baseline
Revise [if necessary] >> Review >> Rework >> Baseline
Build/Deploy
Release
Prepare
Review
Rework
Release
Maintenance
Recode [Enhance software / Fix bugs]
Retest
Redeploy
Rerelease
Notes:
The life cycle mentioned here is NOT set in stone and each phase does not necessarily have to be implemented in the order mentioned.
Though SDLC uses the term ‘Development’, it does not focus just on the coding tasks done by developers but incorporates the tasks of all stakeholders, including testers.
There may still be many other activities/ tasks which have not been specifically mentioned above, like Configuration Management. No matter what, it is essential that you clearly understand the software development life cycle your project is following. One issue that is widespread in many projects is that software testers are involved much later in the life cycle, due to which they lack visibility and authority (which ultimately compromises software quality).
Defect
DEFINITION
A Software Bug / Defect is a condition in a software product which does not meet a software requirement (as stated in the requirement specifications) or end-user expectations (which may not be specified but are reasonable). In other words, a bug is an error in coding or logic that causes a program to malfunction or to produce incorrect/unexpected results.
A program that contains a large number of bugs is said to be buggy.
Reports detailing bugs in software are known as bug reports.
Applications for tracking bugs are known as bug tracking tools.
The process of finding the cause of bugs is known as debugging.
The process of intentionally injecting bugs in a software program, to estimate test coverage by monitoring the detection of those bugs, is known as bebugging.
Software Testing proves that bugs exist but NOT that bugs do not exist.
CLASSIFICATION
Software Bugs /Defects are normally classified as per:
Severity / Impact
Probability / Visibility
Priority / Urgency
Related Module / Component
Related Dimension of Quality
Phase Detected
Phase Injected
Severity/Impact
Severity indicates the impact of a bug on the quality of the software. This is normally set by the Software Tester himself/herself.
Critical:
There is no workaround.
Affects critical functionality or critical data.
Example: Unsuccessful installation, complete failure of a feature.
Major:
There is a workaround, but it is not obvious and is difficult.
Affects major functionality or major data.
Example: A feature is not functional from one module but the task is doable if 10 complicated indirect steps are followed in another module/s.
Minor:
There is an easy workaround.
Affects minor functionality or non-critical data.
Example: A feature that is not functional in one module but the task is easily doable from another module.
Trivial:
There is no need for a workaround.
Does not affect functionality or data.
Does not impact productivity or efficiency.
Example: Layout discrepancies, spelling/grammatical errors.
Severity is also denoted as S1 for Critical, S2 for Major and so on.
The examples above are only guidelines and different organizations/projects may define severity differently for the same types of bugs.
Probability / Visibility
Probability / Visibility indicates the likelihood of a user encountering the bug.
High: Encountered by all or almost all the users of the feature
Medium: Encountered by about 50% of the users of the feature
Low: Encountered by no or very few users of the feature
The measure of Probability/Visibility is with respect to the usage of a feature and not the overall software. Hence, a bug in a rarely used feature can have a high probability if the bug is easily encountered by users of the feature. Similarly, a bug in a widely used feature can have a low probability if the users rarely detect it.
Priority / Urgency
Priority indicates the importance or urgency of fixing the bug. Though this may be initially set by the Software Tester himself/herself, the priority is finalized by the Project Manager.
Urgent: Must be fixed prior to next build
High: Must be fixed prior to next release
Medium: May be fixed after the release/ in the next release
Low: May or may not be fixed at all
Priority is also denoted as P1 for Urgent and so on.
Normally, the following are considered when determining the priority of bugs:
Severity/Impact
Probability/Visibility
Available Resources (Developers to fix and Testers to verify the fixes)
Available Time (Time for fixing, verifying the fixes and performing regression tests after the verification of the fixes)
If a release is already scheduled and if bugs with critical/major severity and high probability are still not fixed, the release is usually postponed.
If a release is already scheduled and bugs with minor/low severity and medium/low probability are not fixed, the release is usually made by mentioning them as Known Issues/Bugs. They are normally catered to in the next release cycle. Nevertheless, any project’s goal should be to make releases with all detected defects fixed.
Related Module /Component
Related Module / Component indicates the module or component of the software where the bug was detected. This provides information on which module / component is buggy or risky.
Module/Component A
Module/Component B
Module/Component C
…
Related Dimension of Quality
Related Dimension of Quality indicates the aspect of software quality that the bug is connected with.
Functionality
Usability
Performance
Security
Compatibility
…
Phase Detected
Phase Detected indicates the phase in the software development lifecycle where the bug was identified.
Unit Testing
Integration Testing
System Testing
Acceptance Testing
Phase Injected
Phase Injected indicates the phase in the software development lifecycle where the bug was introduced. Phase Injected is always earlier in the software development lifecycle than the Phase Detected. Phase Injected can be known only after a proper root-cause analysis of the bug.
Requirements Development
High Level Design
Detailed Design
Coding
Build/Deployment
Note that the categorizations above are just guidelines and it is up to the project/organization to decide on what kind of categorization to use. In most cases the categorization depends on the bug tracking tool that is being used. It is essential that project members agree beforehand on the categorization (and the meaning of each categorization) to be used so as to avoid arguments, conflicts, and unhealthy bickering later.
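To make these classification attributes concrete, here is a minimal Java sketch (purely illustrative; the field names are assumptions, and real bug tracking tools define their own schemas) of a defect record carrying the attributes discussed above.
// Illustrative sketch only: a defect record with the classification
// attributes discussed above. Real bug trackers define their own schemas.
public class Defect {
    enum Severity { CRITICAL, MAJOR, MINOR, TRIVIAL }  // S1..S4
    enum Probability { HIGH, MEDIUM, LOW }
    enum Priority { URGENT, HIGH, MEDIUM, LOW }        // P1..P4

    private final String summary;
    private final Severity severity;        // set by the tester
    private final Probability probability;  // likelihood of users hitting it
    private Priority priority;              // finalized by the project manager

    public Defect(String summary, Severity severity, Probability probability) {
        this.summary = summary;
        this.severity = severity;
        this.probability = probability;
    }

    // Priority is decided later, weighing severity, probability,
    // available resources and available time.
    public void setPriority(Priority priority) { this.priority = priority; }

    public static void main(String[] args) {
        Defect d = new Defect("Installation fails on a clean machine",
                Severity.CRITICAL, Probability.HIGH);
        d.setPriority(Priority.URGENT); // must be fixed before the next build
        System.out.println("Logged: " + d.summary + " [" + d.severity
                + "/" + d.probability + "/" + d.priority + "]");
    }
}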
A BUG JOKE
“There is a bug in this ant’s farm.”
“What do you mean? I don’t see any ants in it.”
“Well, that’s the bug.”
A BUG STORY
Once upon a time, in a jungle, there was a little bug. He was young but very smart. He quickly learned the tactics of other bugs in the jungle: how to bring maximum destruction to the plants; how to effectively pester the animals; and most importantly, how to maneuver underground so as to avoid detection. Soon, the little bug was famous / notorious for his ‘severity’. All the bugs in the jungle hailed him as the Lord of the Jungle. Others feared him as the Scourge of the Jungle and mothers started taking his name to deter their children from going out in the night.
The Jungle Council, headed by the Lion, announced a hefty prize for anyone being able to capture the bug but no one was yet successful in capturing, or even sighting, the bug. The bug was a living legend.
For years, the bug basked in glory and he swelled with pride day by day. One day, when the Lion was away hunting, he burrowed to the top of the Lion’s hill and, standing atop the hill, he roared “I have captured the lily-livered Lion’s domain. I am now the true King of the Jungle! I am the Greatest! I am Invincible!”
His words resonated through the jungle and life stood still for a moment in sheer awe of the bug’s capabilities. Just then, it so happened that a Tester was passing by the Jungle and he promptly submitted a bug report with the exact longitude and latitude of the bug’s location. Then, a Developer hurriedly ‘fixed’ the bug (The bug was so swollen up after his boastful speech that he could not squeeze himself back into the burrow on time.) and that was the tragic end of the legendary bug.
NOTE: We prefer the term ‘Defect’ over the term ‘Bug’ because ‘Defect’ is more comprehensive.
Unit testing - why, how and when
This article explains the benefits of unit testing, what components we should test, and gives some directions for writing better unit tests.
The article assumes that the reader knows how to create and run simple unit tests but doesn't know everything about unit testing and may find useful tips and pointers here.
A unit test is code for testing other code. It calls the code under test and compares the received results with the expected ones:
public void testTotal() {
    Bill bill = new Bill();
    bill.addItem(new Item("Thinking in Java", "book", 29.50));
    bill.setShippingPrice(15.50);
    // compare doubles with a tolerance
    assertEquals(45.00, bill.getTotal(), 0.001);
}
You can execute a unit test at any time to check whether the functionality works as expected.
Benefits from Unit testing
Perform testing frequently - automated testing doesn't take a lot of time.
After code changes we perform testing to guarantee that the program works as expected. If the product lifecycle isn't short, then automated testing is preferable: we capture requirements in the code (unit tests) and can perform these tests after each code change.
So we can find out what's broken as early as possible and fix the problem immediately. We write tests once and run them many times, and testing doesn't take a lot of time, which is almost impossible with manual testing.
Also, sometimes changes in one module require changes in another module or affect it, but we don't know about it until we test the other module. If we have unit tests, we can run them to check the other modules. It's good to discover this in the middle of development from failed unit tests rather than during later testing.
Keep your code cleaner. You can refactor without breaking the program.
We often don't refactor our code because we might break some functionality with our changes. With unit tests, you can refactor and then run the tests to see that the program still works correctly after the changes.
Find the exact place of an error.
Testing from the user interface is coarse-grained: you check a case that consists of smaller subcases. So you need additional effort to find what exactly doesn't work when an error occurs.
For example, client bill generation in a billing system is one call from the web page, but it includes several steps:
get the clients to whom bills are sent
compose a bill for each client
save the bill into the database
prepare a PDF file for each bill from the database data
If you receive an incorrect bill, you don't know where exactly the error is. It could be in bill composition, in placing the bill into the database, or in PDF generation from the database data.
In unit tests we can test these parts independently and under different conditions.
We can test PDF generation without the bill composing module: we create a plain bill object, set its fields to proper values and pass it to the PDF generation function:
Customer customer = new Customer();
customer.setAddress(new Address("UK", "London"));
Bill bill = new Bill();
bill.setCustomer(customer);
bill.addItem(new Item("Thinking in Java", "book", 29.50));
bill.setShippingPrice(15.50);
bill.setCurrency(Currency.USD);
Document doc = billPdfService.generate(bill);
List<String> sections = doc.getSections();
assertEquals(4, sections.size());
assertEquals("London, UK", sections.get(0));
assertEquals("Thinking in Java $29.50", sections.get(1));
assertEquals("Shipping $15.50", sections.get(2));
assertEquals("Total $45.00", sections.get(3));
If the output is unexpected then there is an error in the PDF generation.
Reuse your test data.
When we test functionality through the user interface, such a test needs some preparation: we should create the objects needed for the test (input data). For example, search engine testing requires objects with different properties to search for, plus search queries. It also requires manual checking of the output data: which objects are included in the search results.
A unit test also requires test preparation, but only once, during test creation. UI testing requires test preparation before each run and manual checking of results after it.
Requirements for test data: test data should be the same on each test run. So this and other tests should perform cleanup: delete created objects that would affect the next test runs. For example, delete the created bill from the database because it could be treated as an input bill by another test. Or delete a file created by the test, because otherwise the test fails on the next run with a file creation error since a file with the same name already exists. A sketch of such setup and cleanup follows below.
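One common way to keep runs repeatable is sketched below using JUnit's setUp/tearDown hooks (the BillDao class and its methods are hypothetical names, reusing the Bill and Item objects from the examples above): the test data is created before each test and deleted afterwards.
// Hypothetical sketch: per-test setup and cleanup so each run sees
// the same test data. BillDao and its methods are illustrative names.
import junit.framework.TestCase;

public class BillStorageTest extends TestCase {
    private BillDao billDao;
    private Bill bill;

    protected void setUp() {
        billDao = new BillDao();
        bill = new Bill();
        bill.addItem(new Item("Thinking in Java", "book", 29.50));
        billDao.save(bill); // create the data this test needs
    }

    protected void tearDown() {
        billDao.delete(bill); // remove it so the next run starts clean
    }

    public void testSavedBillCanBeLoaded() {
        Bill loaded = billDao.findById(bill.getId());
        assertNotNull(loaded);
    }
}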
Just do it: a unit test is a good starting point and encourages working on the feature.
Sometimes you write a lot of code and can't run it immediately because it requires integration with other modules or the creation of web pages or database objects. With a unit test you can run the code immediately after writing it and exercise it under different conditions. Seeing that the code works really encourages you!
Also, if we write the unit test before the code, we understand more clearly how the code will be called from other parts of the application: the constraints on input parameters and which methods to call. We end up with more usable and predictable code.
That's why we write unit tests first.
What code should we test in unit tests?
complicated logic such as billing, search engines, text parsing, and system states with transitions between them
code that can't be tested from the UI directly: it requires additional UI pages or special input data (like the steps of a multi-step process), or it is called by a timer
large sets of test cases that would take a lot of time to perform manually (many input variants, e.g. text parsing). Such testing from code is much easier: a unit test can use loops and parameterized method calls for sets of similar cases, as in the sketch below.
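A minimal sketch of that loop-driven style, assuming a hypothetical DateParser.parse(String) as the code under test:
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class DateParserTest {
    @Test
    public void testManyInputFormats() {
        // Each row: input text and the expected normalized result.
        String[][] cases = {
            {"19/12/2011", "2011-12-19"},
            {"2011-12-19", "2011-12-19"},
            {"Dec 19, 2011", "2011-12-19"},
        };
        for (String[] c : cases) {
            assertEquals("input: " + c[0], c[1], DateParser.parse(c[0]));
        }
    }
}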
How to write better unit tests.
Keep your user interface code and controllers simple; move all complicated logic into separate model or business-logic objects, which are easier to test. The controller for the client bill generation from the example above can look like:
void generateBill() {
    List<Client> clients = getClients();
    List<Bill> bills = createBills(clients);
    saveBills(bills);
    generatePdf(); // reads bills from the database
}
and then test each of these functions.
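For instance, one of the extracted steps can then be tested directly, without any web page; this sketch assumes hypothetical Client and Bill types and a hypothetical BillingService exposing the createBills step shown above.
import static org.junit.Assert.assertEquals;
import java.util.Arrays;
import java.util.List;
import org.junit.Test;

public class CreateBillsTest {
    private final BillingService billingService = new BillingService(); // hypothetical

    @Test
    public void testOneBillPerClient() {
        List<Client> clients = Arrays.asList(new Client("A"), new Client("B"));
        // Call the extracted step directly instead of going through the UI.
        List<Bill> bills = billingService.createBills(clients);
        assertEquals(2, bills.size());
    }
}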
Unit testing is most useful when you test business and model objects with minimal dependencies, or algorithms without dependencies.
Prefer plain objects that don't inherit from framework classes; this minimizes dependencies when the framework would otherwise require a lot of setup code to work with.
Avoid database access and other external services. Tests become easier to create because there are fewer additional objects to create or set up, and those objects introduce their own errors too, such as an incorrect database setup or bugs in the database-access code (another module of the system).
It's better if your domain objects are plain objects that can be created without a database. Alternatively, use the same separate test database for all tests to reduce the number of errors caused by that dependency.
Another problem is dependencies on other modules. You can use stubs with the same interfaces instead of real objects. These stubs can do nothing when that doesn't affect your test (for example, sending an email), or return the same predefined results without complicated calculations or database access. You don't have to create these stubs manually: use mock objects, which create them automatically, as in the sketch below.
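A minimal sketch using the Mockito library; BillProcessor and MailService are hypothetical, and the point is only that the stub is generated for us and no real email is ever sent.
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import org.junit.Test;

public class BillProcessorTest {
    @Test
    public void testProcessingSendsMail() {
        // Mockito creates a do-nothing stub of the interface automatically.
        MailService mailService = mock(MailService.class);
        BillProcessor processor = new BillProcessor(mailService);

        Bill bill = new Bill();
        processor.process(bill);

        // Check that the interaction happened, without a real mail server.
        verify(mailService).send(bill);
    }
}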
Conclusion:
Unit tests help you test frequently, find problems early and locate them with minimal effort, refactor the code, understand what it does, and increase test coverage; they are a good starting point for getting clean, working code as early as possible. Unit testing is most useful when you test business and model objects with minimal dependencies, or algorithms without dependencies.
Monday, June 27, 2011
IEEE 829 Documentation
Over the years a number of document types have been invented to allow for the control of testing. They apply to software testing of all kinds, from component testing through to release testing. Organizations develop these documents themselves, give them different names, and in some cases confuse their purposes. To provide a common set of standardised documents, the IEEE developed the 829 standard for software test documentation, covering any type of software testing, including user acceptance testing.
This article outlines each of the types of document in this standard and describes how they work together.
The Types of Document
There are eight document types in the IEEE 829 standard, which can be used in three distinct phases of software testing:
1.Preparation of Tests
* Test Plan: Plan how the testing will proceed
* Test Design Specification: Decide what needs to be tested
* Test Case Specification: Create the tests to be run
* Test Procedure: Describe how the tests are run
* Test Item Transmittal Report: Specify the items released for testing
2. Running the Tests
* Test Log: Record the details of tests in time order
* Test Incident Report: Record details of any events during testing that require investigation
3. Completion of Testing
* Test Summary Report: Summarise and evaluate the tests
These eight document types can be used across three distinct phases of software testing. If anybody wants elaborated details of the IEEE 829 documents, you can send your comments and the details will be posted as soon as possible.
Monday, May 16, 2011
Test Plan
A test plan template may vary from company to company, based on the application and the testing process. Here I mention a common template used by multinational companies.
Project Name
Test Plan
Document Change History
Version Number   Date         Contributor      Description
V1.0             12.04.2011   Perinbarajan.I   What changes (additions and deletions) were made for this version
Table of Contents
1.Introduction
1.1 Scope
1.1.1 In Scope
1.1.2 Out of Scope
1.2 Quality Objective
1.2.1 Primary Objective
1.2.2 Secondary Objective
1.3 Roles and Responsibilities
1.3.1 Developer
1.3.2 Adopter
1.3.3 Testing Process Management Team
1.4 Assumptions for Test Execution
1.5 Constraints for Test Execution
1.6 Definitions
2.Test Methodology
2.1 Purpose
2.1.1 Overview
2.1.2 Usability Testing
2.1.3 Unit Testing (Multiple)
2.1.4 Iteration/Regression Testing
2.1.5 Final release Testing
2.1.6 Testing completeness Criteria
2.2 Test Levels
2.2.1 Build Tests
2.2.2 Milestone Tests
2.2.3 Release Tests
2.3 Bug Regression
2.4 Bug Triage
2.5 Suspension Criteria and Resumption Requirements
2.6 Test Completeness
2.6.1 Standard Conditions:
2.6.2 Bug Reporting & Triage Conditions:
3.Test Deliverables
3.1 Deliverables Matrix
3.2 Documents
3.2.1 Test Approach Document
3.2.2 Test Plan
3.2.3 Test Schedule
3.2.4 Test Specifications
3.2.5 Requirements Traceability Matrix
3.3 Defect Tracking & Debugging
3.3.1 Testing Workflow
3.3.2 Defect reporting using G FORGE
3.4 Reports
3.4.1 Testing status reports
3.4.2 Phase Completion Reports
3.4.3 Test Final Report - Sign-Off
3.5 Responsibility Matrix
4.Resource & Environment Needs
4.1 Testing Tools
4.1.1 Tracking Tools
4.2 Test Environment
4.2.1 Hardware
4.2.2 Software
4.3 Bug Severity and Priority Definition
4.3.1 Severity List
4.3.2 Priority List
4.4 Bug Reporting
5.Terms/Acronyms
Monday, May 9, 2011
Responsibilities of a Test Manager/Lead
1.Understand the testing effort by analyzing the requirements of the project.
2.Estimate and obtain management support for the time, resources and budget required to perform the testing.
3.Organize the testing kick-off meeting.
4.Define the test strategy.
5.Build a testing team of professionals with appropriate skills, attitudes and motivation.
6.Identify training requirements (technical and soft skills) and forward them to the Project Manager.
7.Develop the test plan for the tasks, dependencies and participants required to mitigate the risks to system quality, and obtain stakeholder support for this plan.
8.Arrange the hardware and software requirements for the test setup.
9.Assign tasks to all testing team members and ensure that all of them have sufficient work in the project.
10.Ensure the content and structure of all testing documents/artifacts is documented and maintained.
11.Document, implement, monitor, and enforce all processes for testing as per the standards defined by the organization.
12.Check/review the test case documents.
13.Keep track of new requirements and changes in the requirements of the project.
14.Escalate issues about project requirements (software, hardware, resources) to the Project Manager / Sr. Test Manager.
15.Organize the status meetings and send the status reports (daily, weekly, etc.) to the client.
16.Attend the regular client call and discuss the weekly status with the client.
17.Communicate with the client (if required).
18.Act as the single point of contact between Development and Testers.
19.Track and prepare reports of testing activities: testing results, test case coverage, required resources, defects discovered and their status, performance baselines, etc.
20.Review the various reports prepared by test engineers.
21.Ensure the timely delivery of the different testing milestones.
22.Prepare/update the metrics dashboard at the end of a phase or at the completion of the project.
Difference between high-level and low-level test cases?
High-level test cases are those that cover the major functionality in the application (e.g. retrieve, update, display, cancel — functionality-related test cases — and database test cases).
Low-level test cases are those related to the user interface (UI) of the application.
What is Concurrency Testing
Concurrency testing (also commonly known as multi-user testing) is used to determine the effects of accessing the same application, code module or database by different users at the same time.
It helps in identifying and measuring problems in response time and in the levels of locking and deadlocking in the application.
Ex. LoadRunner is widely used for this type of testing; VuGen (Virtual User Generator) is used to set the number of concurrent users and to define how the users are added, e.g. gradual ramp-up or stepped spikes.
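For a feel of what "at the same time" means in code, here is a minimal plain-Java sketch that fires 50 simulated users at a shared service; AccountService is a hypothetical class whose deposit method is the code under test.
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ConcurrencyCheck {
    public static void main(String[] args) throws InterruptedException {
        final AccountService service = new AccountService(); // hypothetical shared service
        ExecutorService pool = Executors.newFixedThreadPool(50);
        // 50 "users" deposit at the same time.
        for (int i = 0; i < 50; i++) {
            pool.submit(new Runnable() {
                public void run() {
                    service.deposit(10);
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(30, TimeUnit.SECONDS);
        // With correct locking the balance must be exactly 50 * 10 = 500;
        // anything else indicates a race condition.
        System.out.println("balance = " + service.getBalance());
    }
}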
Entry and Exit criteria in software testing
Entry criteria are the conditions and artifacts that must be in place before testing of a system begins, such as:
SRS
FRS
Use case
Test case
Test plan
Exit criteria ensure that testing is completed and the application is ready for release; they include deliverables such as:
Test summary
Metrics
Defect analysis report
Bucket testing
Bucket testing (also known as A/B testing) is mostly used to study the impact of different product designs on website metrics: two versions run simultaneously on a single web page or a set of pages, and the difference in click-through rates, interface behaviour and traffic is measured.
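A minimal sketch of how visitors can be split between the two versions: hash a stable user id so the same visitor always sees the same variant. The method name and the 50/50 split here are illustrative choices, not a standard API.
public class BucketAssigner {
    /** Returns "A" or "B" deterministically for the same user id. */
    public static String bucketFor(String userId) {
        int h = userId.hashCode() & 0x7fffffff; // drop the sign bit
        return (h % 2 == 0) ? "A" : "B";
    }

    public static void main(String[] args) {
        // The same user lands in the same bucket on every visit.
        System.out.println(bucketFor("user-42"));
        System.out.println(bucketFor("user-42"));
    }
}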
Different Types of Severity
User interface defects - Low
Boundary-related defects - Medium
Error handling defects - Medium
Calculation defects - High
Data interpretation defects - High
Hardware failures and problems - High
Compatibility and intersystem defects - High
Control flow defects - High
Load conditions (memory leaks under load testing) - High
Tuesday, April 26, 2011
HTTP status code 500 series
5xx Server Error
The server failed to fulfill an apparently valid request.
Response status codes beginning with the digit "5" indicate cases in which the server is aware that it has encountered an error or is otherwise incapable of performing the request. Except when responding to a HEAD request, the server should include an entity containing an explanation of the error situation, and indicate whether it is a temporary or permanent condition. Likewise, user agents should display any included entity to the user. These response codes are applicable to any request method.
500 Internal Server Error
A generic error message, given when no more specific message is suitable.
501 Not Implemented
The server either does not recognise the request method, or it lacks the ability to fulfill the request.
502 Bad Gateway
The server was acting as a gateway or proxy and received an invalid response from the upstream server.
503 Service Unavailable
The server is currently unavailable (because it is overloaded or down for maintenance). Generally, this is a temporary state.
504 Gateway Timeout
The server was acting as a gateway or proxy and did not receive a timely response from the upstream server.
505 HTTP Version Not Supported
The server does not support the HTTP protocol version used in the request.
506 Variant Also Negotiates (RFC 2295)
Transparent content negotiation for the request results in a circular reference.
507 Insufficient Storage (WebDAV) (RFC 4918)
The server is unable to store the representation needed to complete the request.
509 Bandwidth Limit Exceeded (Apache bw/limited extension)
This status code, while used by many servers, is not specified in any RFCs.
510 Not Extended (RFC 2774)
Further extensions to the request are required for the server to fulfill it.
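When testing, the status code can be read directly from the response; here is a minimal sketch using the standard java.net.HttpURLConnection (the URL is a placeholder):
import java.net.HttpURLConnection;
import java.net.URL;

public class StatusCodeCheck {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://example.com/");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        int code = conn.getResponseCode(); // e.g. 200, 404, 500
        if (code >= 500) {
            System.out.println("Server error (5xx): " + code);
        } else if (code >= 400) {
            System.out.println("Client error (4xx): " + code);
        } else {
            System.out.println("Success/redirect: " + code);
        }
        conn.disconnect();
    }
}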
HTTP status code 400 series
4xx Client Error
The 4xx class of status code is intended for cases in which the client seems to have erred. Except when responding to a HEAD request, the server should include an entity containing an explanation of the error situation, and whether it is a temporary or permanent condition. These status codes are applicable to any request method. User agents should display any included entity to the user. These are typically the most common error codes encountered while online.
400 Bad Request
The request cannot be fulfilled due to bad syntax.
401 Unauthorized
Similar to 403 Forbidden, but specifically for use when authentication is possible but has failed or not yet been provided. The response must include a WWW-Authenticate header field containing a challenge applicable to the requested resource. See Basic access authentication and Digest access authentication.
402 Payment Required
Reserved for future use. The original intention was that this code might be used as part of some form of digital cash or micropayment scheme, but that has not happened, and this code is not usually used. As an example of its use, however, Apple's MobileMe service generates a 402 error ("httpStatusCode:402" in the Mac OS X Console log) if the MobileMe account is delinquent.
403 Forbidden
The request was a legal request, but the server is refusing to respond to it. Unlike a 401 Unauthorized response, authenticating will make no difference.
404 Not Found
The requested resource could not be found but may be available again in the future. Subsequent requests by the client are permissible.
405 Method Not Allowed
A request was made of a resource using a request method not supported by that resource; for example, using GET on a form which requires data to be presented via POST, or using PUT on a read-only resource.
406 Not Acceptable
The requested resource is only capable of generating content not acceptable according to the Accept headers sent in the request.
407 Proxy Authentication Required
The client must first authenticate itself with the proxy.
408 Request Timeout
The server timed out waiting for the request. According to W3 HTTP specifications: "The client did not produce a request within the time that the server was prepared to wait. The client MAY repeat the request without modifications at any later time."
409 Conflict
Indicates that the request could not be processed because of conflict in the request, such as an edit conflict.
410 Gone
Indicates that the resource requested is no longer available and will not be available again. This should be used when a resource has been intentionally removed and the resource should be purged. Upon receiving a 410 status code, the client should not request the resource again in the future. Clients such as search engines should remove the resource from their indices. Most use cases do not require clients and search engines to purge the resource, and a "404 Not Found" may be used instead.
411 Length Required
The request did not specify the length of its content, which is required by the requested resource.
412 Precondition Failed
The server does not meet one of the preconditions that the requester put on the request.
413 Request Entity Too Large
The request is larger than the server is willing or able to process.
414 Request-URI Too Long
The URI provided was too long for the server to process.
415 Unsupported Media Type
The request entity has a media type which the server or resource does not support. For example, the client uploads an image as image/svg+xml, but the server requires that images use a different format.
416 Requested Range Not Satisfiable
The client has asked for a portion of the file, but the server cannot supply that portion. For example, if the client asked for a part of the file that lies beyond the end of the file.
417 Expectation Failed
The server cannot meet the requirements of the Expect request-header field.
418 I'm a teapot
This code was defined in 1998 as one of the traditional IETF April Fools' jokes, in RFC 2324, Hyper Text Coffee Pot Control Protocol, and is not expected to be implemented by actual HTTP servers.
422 Unprocessable Entity (WebDAV) (RFC 4918)
The request was well-formed but was unable to be followed due to semantic errors.
423 Locked (WebDAV) (RFC 4918)
The resource that is being accessed is locked.
424 Failed Dependency (WebDAV) (RFC 4918)
The request failed due to failure of a previous request (e.g. a PROPPATCH).
425 Unordered Collection (RFC 3648)
Defined in drafts of "WebDAV Advanced Collections Protocol", but not present in "Web Distributed Authoring and Versioning (WebDAV) Ordered Collections Protocol".
426 Upgrade Required (RFC 2817)
The client should switch to a different protocol such as TLS/1.0.
444 No Response
An Nginx HTTP server extension. The server returns no information to the client and closes the connection (useful as a deterrent for malware).
449 Retry With
A Microsoft extension. The request should be retried after performing the appropriate action.
450 Blocked by Windows Parental Controls
A Microsoft extension. This error is given when Windows Parental Controls are turned on and are blocking access to the given webpage.
499 Client Closed Request
An Nginx HTTP server extension. This code is introduced to log the case when the connection is closed by client while HTTP server is processing its request, making server unable to send the HTTP header back.