Software testing: principles and practices / Srinivasan Desikan and Gopalaswamy Ramesh

By: Desikan, Srinivasan
Material type: Text
Publication details: Noida : Pearson, 2006
Description: xviii, 486 p.
ISBN: 9788177541218 (pb)
Subject(s): Computer Programming
DDC classification: 005.14
Contents:
Holdings:
Item type: General Books
Current library: Central Library, Sikkim University (General Book Section)
Call number: 005.14 DES/S
Status: Available
Barcode: P19916
Total holds: 0

1.1 Context of Testing in Producing Software
1.2 About this Chapter
1.3 The Incomplete Car
1.4 Dijkstra's Doctrine
1.5 A Test in Time!
1.6 The Cat and the Saint
1.7 Test the Tests First!
1.8 The Pesticide Paradox
1.9 The Convoy and the Rags
1.10 The Policemen on the Bridge
1.11 The Ends of the Pendulum
1.12 Men in Black
1.13 Automation Syndrome
1.14 Putting it All Together
References
Problems and Exercises
2.1 Phases of Software Project
2.1.1 Requirements Gathering and Analysis
2.1.2 Planning
2.1.3 Design
2.1.4 Development or Coding
2.1.5 Testing
2.1.6 Deployment and Maintenance
2.2 Quality, Quality Assurance, and Quality Control
2.3 Testing, Verification, and Validation
2.4 Process Model to Represent Different Phases
2.5 Life Cycle Models
2.5.1 Waterfall Model
2.5.2 Prototyping and Rapid Application Development Models
2.5.3 Spiral or Iterative Model
2.5.4 The V Model
2.5.5 Modified V Model
2.5.6 Comparison of Various Life Cycle Models
References
Problems and Exercises
3.1 What is White Box Testing?
3.2 Static Testing
3.2.1 Static Testing by Humans
3.2.2 Static Analysis Tools
3.3 Structural Testing
3.3.1 Unit/Code Functional Testing
3.3.2 Code Coverage Testing
3.3.3 Code Complexity Testing
3.4 Challenges in White Box Testing
References
Problems and Exercises
4.1 What is Black Box Testing?
4.2 Why Black Box Testing?
4.3 When to do Black Box Testing?
4.4 How to do Black Box Testing?
4.4.1 Requirements Based Testing
4.4.2 Positive and Negative Testing
4.4.3 Boundary Value Analysis
4.4.4 Decision Tables
4.4.5 Equivalence Partitioning
4.4.6 State Based or Graph Based Testing
4.4.7 Compatibility Testing
4.4.8 User Documentation Testing
4.4.9 Domain Testing
4.5 Conclusion
References
Problems and Exercises
5.1 What is Integration Testing?
5.2 Integration Testing as a Type of Testing
5.2.1 Top-Down Integration
5.2.2 Bottom-Up Integration
5.2.3 Bi-Directional Integration
5.2.4 System Integration
5.2.5 Choosing Integration Method
5.3 Integration Testing as a Phase of Testing
5.4 Scenario Testing
5.4.1 System Scenarios
5.4.2 Use Case Scenarios
5.5 Defect Bash
5.5.1 Choosing the Frequency and Duration of Defect Bash
5.5.2 Selecting the Right Product Build
5.5.3 Communicating the Objective of Defect Bash
5.5.4 Setting up and Monitoring the Lab
5.5.5 Taking Actions and Fixing Issues
5.5.6 Optimizing the Effort Involved in Defect Bash
5.6 Conclusion
References
Problems and Exercises
6.1 System Testing Overview
6.2 Why is System Testing Done?
6.3 Functional Versus Non-Functional Testing
6.4 Functional System Testing
6.4.1 Design/Architecture Verification
6.4.2 Business Vertical Testing
6.4.3 Deployment Testing
6.4.4 Beta Testing
6.4.5 Certification, Standards and Testing for Compliance
6.5 Non-Functional Testing
6.5.1 Setting up the Configuration
6.5.2 Coming up with Entry/Exit Criteria
6.5.3 Balancing Key Resources
6.5.4 Scalability Testing
6.5.5 Reliability Testing
6.5.6 Stress Testing
6.5.7 Interoperability Testing
6.6 Acceptance Testing
6.6.1 Acceptance Criteria
6.6.2 Selecting Test Cases for Acceptance Testing
6.6.3 Executing Acceptance Tests
6.7 Summary of Testing Phases
6.7.1 Multiphase Testing Model
6.7.2 Working Across Multiple Releases
6.7.3 Who Does What and When
References
Problems and Exercises
7.1 Introduction
7.2 Factors Governing Performance Testing
7.3 Methodology for Performance Testing
7.3.1 Collecting Requirements
7.3.2 Writing Test Cases
7.3.3 Automating Performance Test Cases
7.3.4 Executing Performance Test Cases
7.3.5 Analyzing the Performance Test Results
7.3.6 Performance Tuning
7.3.7 Performance Benchmarking
7.3.8 Capacity Planning
7.4 Tools for Performance Testing
7.5 Process for Performance Testing
7.6 Challenges
References
Problems and Exercises
8.1 What is Regression Testing?
8.2 Types of Regression Testing
8.3 When to do Regression Testing?
8.4 How to do Regression Testing?
8.4.1 Performing an Initial "Smoke" or "Sanity" Test
8.4.2 Understanding the Criteria for Selecting the Test Cases
8.4.3 Classifying Test Cases
8.4.4 Methodology for Selecting Test Cases
8.4.5 Resetting the Test Cases for Regression Testing
8.4.6 Concluding the Results of Regression Testing
8.5 Best Practices in Regression Testing
References
Problems and Exercises
9.1 Introduction
9.2 Primer on Internationalization
9.2.1 Definition of Language
9.2.2 Character Set
9.2.3 Locale
9.2.4 Terms Used in This Chapter
9.3 Test Phases for Internationalization Testing
9.4 Enabling Testing
9.5 Locale Testing
9.6 Internationalization Validation
9.7 Fake Language Testing
9.8 Language Testing
9.9 Localization Testing
9.10 Tools Used for Internationalization
9.11 Challenges and Issues
References
Problems and Exercises
10.1 Overview of Ad Hoc Testing
10.2 Buddy Testing
10.3 Pair Testing
10.3.1 Situations When Pair Testing Becomes Ineffective
10.4 Exploratory Testing
10.4.1 Exploratory Testing Techniques
10.5 Iterative Testing
10.6 Agile and Extreme Testing
10.6.1 XP Work Flow
10.6.2 Summary with an Example
10.7 Defect Seeding
10.8 Conclusion
References
Problems and Exercises
11.1 Introduction
11.2 Primer on Object-Oriented Software
11.3 Differences in OO Testing
11.3.1 Unit Testing a set of Classes
11.3.2 Putting Classes to Work Together—Integration Testing
11.3.3 System Testing and Interoperability of OO Systems
11.3.4 Regression Testing of OO Systems
11.3.5 Tools for Testing of OO Systems
11.3.6 Summary
References
Problems and Exercises
12.1 What is Usability Testing?
12.2 Approach to Usability
12.3 When to do Usability Testing?
12.4 How to Achieve Usability?
12.5 Quality Factors for Usability
12.6 Aesthetics Testing
12.7 Accessibility Testing
12.7.1 Basic Accessibility
12.7.2 Product Accessibility
12.8 Tools for Usability
12.9 Usability Lab Setup
12.10 Test Roles for Usability
12.11 Summary
References
Problems and Exercises
13.1 Perceptions and Misconceptions About Testing
13.1.1 "Testing is not Technically Challenging"
13.1.2 "Testing Does Not Provide me a Career Path or Growth"
13.1.3 "I Am Put in Testing—What is Wrong With Me?!"
13.1.4 "These Folks Are My Adversaries"
13.1.5 "Testing is What I Can Do in the End if I Get Time"
13.1.6 "There is no Sense of Ownership in Testing"
13.1.7 "Testing is only Destructive"
13.2 Comparison between Testing and Development Functions
13.3 Providing Career Paths for Testing Professionals
13.4 The Role of the Ecosystem and a Call for Action
13.4.1 Role of Education System
13.4.2 Role of Senior Management
13.4.3 Role of the Community
References
Problems and Exercises
14.1 Dimensions of Organization Structures
14.2 Structures in Single-Product Companies
14.2.1 Testing Team Structures for Single-Product Companies
14.2.2 Component-Wise Testing Teams
14.3 Structures for Multi-Product Companies
14.3.1 Testing Teams as Part of "CTO's Office"
14.3.2 Single Test Team for All Products
14.3.3 Testing Teams Organized by Product
14.3.4 Separate Testing Teams for Different Phases of Testing
14.3.5 Hybrid Models
14.4 Effects of Globalization and Geographically Distributed Teams on Product Testing
14.4.1 Business Impact of Globalization
14.4.2 Round the Clock Development/Testing Model
14.4.3 Testing Competency Center Model
14.4.4 Challenges in Global Teams
14.5 Testing Services Organizations
14.5.1 Business Need for Testing Services
14.5.2 Differences between Testing as a Service and Product-Testing Organizations
14.5.3 Typical Roles and Responsibilities of Testing Services Organization
14.5.4 Challenges and Issues in Testing Services Organizations
14.6 Success Factors for Testing Organizations
References
Problems and Exercises
15.1 Introduction
15.2 Test Planning
15.2.1 Preparing a Test Plan
15.2.2 Scope Management: Deciding Features to be Tested/Not Tested
15.2.3 Deciding Test Approach/Strategy
15.2.4 Setting up Criteria for Testing
15.2.5 Identifying Responsibilities, Staffing, and Training Needs
15.2.6 Identifying Resource Requirements
15.2.7 Identifying Test Deliverables
15.2.8 Testing Tasks: Size and Effort Estimation
15.2.9 Activity Breakdown and Scheduling
15.2.10 Communications Management
15.2.11 Risk Management
15.3 Test Management
15.3.1 Choice of Standards
15.3.2 Test Infrastructure Management
15.3.3 Test People Management
15.3.4 Integrating with Product Release
15.4 Test Process
15.4.1 Putting Together and Baselining a Test Plan
15.4.2 Test Case Specification
15.4.3 Update of Traceability Matrix
15.4.4 Identifying Possible Candidates for Automation
15.4.5 Developing and Baselining Test Cases
15.4.6 Executing Test Cases and Keeping Traceability Matrix Current
15.4.7 Collecting and Analyzing Metrics
15.4.8 Preparing Test Summary Report
15.4.9 Recommending Product Release Criteria
15.5 Test Reporting
15.5.1 Recommending Product Release
15.6 Best Practices
15.6.1 Process Related Best Practices
15.6.2 People Related Best Practices
15.6.3 Technology Related Best Practices
Appendix A: Test Planning Checklist
Appendix B: Test Plan Template
References
Problems and Exercises
16.1 What is Test Automation?
16.2 Terms Used in Automation
16.3 Skills Needed for Automation
16.4 What to Automate, Scope of Automation
16.4.1 Identifying the Types of Testing Amenable to Automation
16.4.2 Automating Areas Less Prone to Change
16.4.3 Automate Tests that Pertain to Standards
16.4.4 Management Aspects in Automation
16.5 Design and Architecture for Automation
16.5.1 External Modules
16.5.2 Scenario and Configuration File Modules
16.5.3 Test Cases and Test Framework Modules
16.5.4 Tools and Results Modules
16.5.5 Report Generator and Reports/Metrics Modules
16.6 Generic Requirements for Test Tool/Framework
16.7 Process Model for Automation
16.8 Selecting a Test Tool
16.8.1 Criteria for Selecting Test Tools
16.8.2 Steps for Tool Selection and Deployment
16.9 Automation for Extreme Programming Model
16.10 Challenges in Automation
16.11 Summary
References
Problems and Exercises
17.1 What are Metrics and Measurements?
17.2 Why Metrics in Testing?
17.3 Types of Metrics
17.4 Project Metrics
17.4.1 Effort Variance (Planned vs Actual)
17.4.2 Schedule Variance (Planned vs Actual)
17.4.3 Effort Distribution Across Phases
17.5 Progress Metrics
17.5.1 Test Defect Metrics
17.5.2 Development Defect Metrics
17.6 Productivity Metrics
17.6.1 Defects per 100 Hours of Testing
17.6.2 Test Cases Executed per 100 Hours of Testing
17.6.3 Test Cases Developed per 100 Hours of Testing
17.6.4 Defects per 100 Test Cases
17.6.5 Defects per 100 Failed Test Cases
17.6.6 Test Phase Effectiveness
17.6.7 Closed Defect Distribution
17.7 Release Metrics
17.8 Summary
References
Problems and Exercises
