Edition:
Authors: Chauhan, Naresh
Series:
ISBN: 9781680152906, 1680152904
Publisher: Oxford University Press
Publication year: 2010
Number of pages: 629
Language: English
File format: PDF (can be converted to EPUB or AZW3 at the user's request)
File size: 8 MB
Keywords for Software Testing: Principles and Practices: Computer software; Testing.
If you would like the book Software Testing: Principles and Practices converted to PDF, EPUB, AZW3, MOBI, or DJVU format, you can notify support to have the file converted.
Note that Software Testing: Principles and Practices is the original-language edition, not a Persian translation. The International Library website offers only original-language books and does not provide any books translated into or written in Persian.
Contents:
PART I TESTING METHODOLOGY
1. Introduction to Effective Software Testing
1.1 Introduction
1.2 Evolution of Software Testing
1.3 Software Testing Myths
1.4 Goals of Software Testing
1.5 Psychology for Software Testing
1.6 Software Testing Definitions
1.7 Model for Software Testing
1.8 Effective Software Testing vs Exhaustive Software Testing
1.9 Effective Testing is hard
1.10 Software Testing as a Process
1.11 Schools of Software Testing
1.12 Software Failure Case Studies
Summary
2. Software Testing Terminology and Methodology
2.1 Software Testing Terminology
2.1.1 Definitions
2.1.2 Life Cycle of a Bug
2.1.3 States of a Bug
2.1.4 Why do bugs occur?
2.1.5 Bug affects economics of software testing
2.1.6 Bug Classification based on Criticality
2.1.7 Bug Classification based on SDLC
2.1.8 Testing Principles
2.2 Software Testing Life Cycle (STLC)
2.3 Software Testing Methodology
2.3.1 Software Testing Strategy
2.3.2 Test Strategy Matrix
2.3.3 Development of Test Strategy
2.3.4 Testing Life Cycle Model
2.3.4.1 V Testing life cycle model
2.3.5 Validation Activities
2.3.6 Testing Tactics
2.3.7 Considerations in developing testing methodologies
Summary
3. Verification and Validation
3.1 Verification and Validation Activities
3.2 Verification
3.2.1 Verification Activities
3.3 Verification of Requirements
3.3.1 Verification of Objectives
3.3.2 How to verify Requirements and Objectives
3.4 Verification of High level Design
3.4.1 How to verify High level Design?
3.4.1.1 Verification of Data Design
3.4.1.2 Verification of Architectural Design
3.4.1.3 Verification of User Interface Design
3.5 Verification of Low level Design
3.5.1 How to verify Low level Design?
3.6 How to verify Code?
3.6.1 Unit Verification
3.7 Validation
3.7.1 Validation Activities
Summary
PART II TESTING TECHNIQUES
4. Dynamic Testing: Black Box Testing Techniques
4.1 Boundary Value Analysis
4.1.1 Boundary value checking
4.1.2 Robustness Testing method
4.1.3 Worst Case Testing method
4.2 Equivalence Class Testing
4.2.1 Identification of Equivalence classes
4.2.2 Identifying the Test cases
4.3 State Table based Testing
4.3.1 Finite State Machine
4.3.2 State Graphs
4.3.3 State Tables
4.3.4 State table based testing
4.4 Decision Table based Testing
4.4.1 Formation of Decision Table
4.4.2 Test case design using decision table
4.4.3 Expanding the immaterial test cases in decision table
4.5 Cause Effect Graphing based Testing
4.5.1 Basic notations
4.6 Error Guessing
Summary
5. Dynamic Testing: White Box Testing Techniques
5.1 Need of White box testing
5.2 Logic Coverage Criteria
5.3 Basis Path Testing
5.3.1 Control Flow Graph
5.3.2 Flow graph notations of different programming constructs
5.3.3 Path Testing Terminology
5.3.4 Cyclomatic Complexity
5.3.4.1 Formulae based on Cyclomatic complexity
5.3.4.2 Guidelines for Basis Path Testing
5.3.5 Applications of Path Testing
5.4 Graph Matrices
5.4.1 Graph Matrix
5.4.2 Connection Matrix
5.4.3 Use of connection matrix in finding cyclomatic complexity number
5.4.4 Use of graph matrix for finding the set of all paths
5.5 Loop Testing
5.6 Data Flow Testing
5.6.1 State of a Data Object
5.6.2 Data Flow Anomalies
5.6.3 Terminology used in Data Flow Testing
5.6.4 Static Data flow testing
5.6.4.1 Static analysis is not enough
5.6.5 Dynamic Data flow testing
5.6.6 Ordering of Data flow testing strategies
5.7 Mutation Testing
5.7.1 Primary Mutants
5.7.2 Secondary Mutants
5.7.3 Mutation Testing Process
Summary
6. Static Testing
6.1 Inspections
6.1.1 Inspection Team
6.1.2 Inspection Process
6.1.3 Benefits of Inspection Process
6.1.4 Effectiveness of Inspection Process
6.1.5 Cost of Inspection Process
6.1.6 Variants of Inspection process
6.1.7 Reading Techniques
6.1.8 Checklists for Inspection Process
6.2 Walkthroughs
6.3 Technical Reviews
Summary
7. Validation Activities
7.1 Unit Validation Testing
7.2 Integration Testing
7.2.1 Decomposition Based Integration
7.2.1.1 Types of Incremental Integration Testing
7.2.1.2 Comparison between Top-Down and Bottom-Up Integration Testing
7.2.1.3 Practical Approach for Integration Testing
7.2.2 Call Graph Based Integration
7.2.2.1 Pair-wise Integration
7.2.2.2 Neighborhood Integration
7.2.3 Path Based Integration
7.3 Function Testing
7.4 System Testing
7.4.1 Categories of Systems Tests
7.4.1.1 Recovery Testing
7.4.1.2 Security Testing
7.4.1.3 Performance Testing
7.4.1.4 Load Testing
7.4.1.5 Stress Testing
7.4.1.6 Usability Testing
7.4.1.7 Compatibility/Conversion/Configuration Testing
7.4.2 Guidelines for performing the system tests
7.5 Acceptance Testing
7.5.1 Alpha Testing
7.5.1.1 Entry to Alpha
7.5.1.2 Exit to Alpha
7.5.2 Beta Testing
7.5.2.1 Entry to Beta
7.5.2.2 Exit criteria
Summary
8. Regression Testing
8.1 Progressive vs Regression Testing
8.2 Regression testing produces quality software
8.3 Regression Testability
8.4 Objectives of Regression Testing
8.5 When to do regression testing?
8.6 Regression Testing Types
8.7 Defining Regression Test Problem
8.7.1 Regression Testing is a problem?
8.7.2 Regression Testing Problem
8.8 Regression Testing Techniques
8.8.1 Selective Retest Techniques
8.8.1.1 Strategy for Test Case Selection
8.8.1.2 Selection Criteria Based on Code
8.8.1.3 Regression Test selection Techniques
8.8.1.4 Evaluating Regression Test Selection Techniques
8.8.2 Regression Test Prioritization
8.9 Benefits of Regression Testing
Summary
PART III MANAGING THE TEST PROCESS
9. Test Management
9.1 Test Organization
9.2 Structure of Testing Group
9.3 Test Planning
9.3.1 Test Plan Components
9.3.2 Test Plan Hierarchy
9.3.3 Master Test Plan
9.3.4 Verification Test Plan
9.3.5 Validation Test Plan
9.3.5.1 Unit Test Plan
9.3.5.2 Integration Test Plan
9.3.5.3 Function Test Plan
9.3.5.4 System Test Plan
9.3.5.5 Acceptance Test Plan
9.4 Detailed Test Design and Test Specifications
9.4.1 Test Design Specifications
9.4.2 Test Case Specifications
9.4.3 Test Procedure Specifications
9.4.4 Test Result Specifications
9.4.4.1 Test Log
9.4.4.2 Test Incident Report
9.4.4.3 Test Summary Report
Summary
10. Software Metrics
10.1 Need of Software Measurement
10.2 Definition of Software Metrics
10.3 Classification of Software Metrics
10.3.1 Product vs. Process Metrics
10.3.2 Objective vs. Subjective Metrics
10.3.3 Primitive vs. Computed Metrics
10.3.4 Private vs. Public Metrics
10.4 Entities to be measured
10.5 Size Metrics
10.5.1 Line of Code (LOC)
10.5.2 Token Count (Halstead Product Metrics)
10.5.2.1 Program Vocabulary
10.5.2.2 Program Length
10.5.2.3 Program Volume
10.5.3 Function Point Analysis
10.5.3.1 Process used for calculating Function Points
10.5.3.2 Sizing Data Functions
10.5.3.3 Sizing Transactional Functions
10.5.3.4 Calculating Unadjusted Function Point (UFP)
10.5.3.5 Calculating Adjusted Function Point
Summary
11. Testing Metrics for Monitoring and Controlling the Testing Process
11.1 Measurement Objectives for Testing
11.2 Attributes and Corresponding Metrics in Software Testing
11.3 Attributes
11.3.1 Progress
11.3.2 Cost
11.3.3 Quality
11.3.4 Size
11.4 Estimation models for estimating testing efforts
11.4.1 Halstead metrics for estimating testing effort
11.4.2 Development ratio method
11.4.3 Project staff ratio method
11.4.4 Test procedure method
11.4.5 Task planning method
11.5 Architectural Design Metric used for Testing
11.6 Information Flow Metrics used for Testing
11.6.1 Henry & Kafura Design Metric
11.7 Cyclomatic Complexity measures for Testing
11.8 Function Point Metrics for Testing
11.9 Test Point Analysis
11.9.1 Procedure for Calculating TPA
11.9.2 Calculating Dynamic Test Points
11.9.3 Calculating Static Test Points
11.9.4 Calculating Primary Test Hours
11.9.5 Calculating Total Test Hours
11.10 Some Testing Metrics
Summary
12. Efficient Test Suite Management
12.1 Why Test Suite grows?
12.2 Minimizing the test suite and its benefits
12.3 Defining the Test Suite Minimization Problem
12.4 Test Suite Prioritization
12.5 Types of Test case Prioritization
12.6 Prioritization Techniques
12.6.1 Coverage based Test Case Prioritization
12.6.1.1 Total Statement Coverage Prioritization
12.6.1.2 Additional Statement Coverage Prioritization
12.6.1.3 Total Branch Coverage Prioritization
12.6.1.4 Additional Branch Coverage Prioritization
12.6.1.5 Total Fault-Exposing-Potential (FEP) Prioritization
12.6.2 Risk based Prioritization
12.6.3 Prioritization based on Operational Profiles
12.6.4 Prioritization using Relevant Slices
12.6.4.1 Execution Slice
12.6.4.2 Dynamic Slice
12.6.4.3 Relevant Slice
12.6.5 Prioritization based on Requirements
12.7 Measuring Effectiveness of Prioritized Test Suite
Summary
PART IV QUALITY MANAGEMENT
13. Software Quality Management
13.1 Software Quality
13.2 Quality Types
13.3 Broadening the concept of Quality
13.4 Quality Cost
13.5 Benefits of Investment on Quality
13.6 Quality Control
13.7 Quality Assurance
13.8 Quality management
13.9 QM and Project management
13.10 Quality factors
13.11 Methods of Quality Management
13.11.1 Procedural Approach to QM
13.11.2 Quantitative Approach to QM
13.12 Software Quality Metrics
Summary
14. Testing Process Maturity Models
14.1 Need for Test Process Maturity
14.2 Measurement and Improvement of a Test Process
14.3 Test Process Maturity Models
14.3.1 The Testing Improvement Model (TIM)
14.3.1.1 Maturity Model
14.3.1.2 Key areas of TIM
14.3.1.3 The Assessment procedure of TIM
14.3.2 Test Organization Model (TOM)
14.3.2.1 Questionnaire
14.3.2.2 Improvement Suggestions
14.3.3 Test Process Improvement (TPI) Model
14.3.3.1 Key Process Areas
14.3.3.2 Maturity Levels
14.3.3.3 Test Maturity Matrix
14.3.3.4 Checkpoints
14.3.3.5 Improvement Suggestions
14.3.4 Test Maturity Model (TMM)
14.3.4.1 TMM Components
14.3.4.2 TMM Levels
14.3.4.3 The Assessment Model
Summary
PART V TEST AUTOMATION
15. Automation and Testing Tools
15.1 Need of Automation
15.2 Categorization of Testing Tools
15.2.1 Static and Dynamic Testing Tools
15.2.2 Testing Activity Tools
15.3 Selection of Testing Tools
15.4 Costs incurred in Testing Tools
15.5 Guidelines for Automated Testing
15.6 Overview of some commercial Testing Tools
Summary
PART VI TESTING FOR SPECIALIZED ENVIRONMENT
16. Testing Object Oriented Software
16.1 OOT and Structured Approach
16.1.1 Terminology
16.1.2 Object-Oriented Modelling and UML
16.2 Object Oriented Testing
16.2.1 Differences between Conventional Testing and Object-Oriented Testing
16.2.2 Object-Oriented Testing and Maintenance Problems
16.2.3 Issues in OO Testing
16.2.4 Strategy and Tactics of Testing OOS
16.2.5 Verification of OOS
16.2.6 Validation Activities
16.2.7 Testing of OO Classes
16.2.7.1 Feature based Testing of Classes
16.2.7.2 Role of Invariants in Class Testing
16.2.7.3 State Based Testing
16.2.8 Inheritance Testing
16.2.8.1 Issues in Inheritance Testing
16.2.8.2 Inheritance of Invariants of Base Class
16.2.8.3 Incremental Testing
16.2.9 Integration Testing
16.2.9.1 Thread based Integration Testing
16.2.9.2 Implicit Control Flow based Integration Testing
16.2.10 UML based OO Testing
16.2.10.1 UML diagrams in software testing
16.2.10.2 System Testing based on Use cases
Summary
17. Testing Web based Systems
17.1 Web-based System
17.2 Web Technology Evolution
17.2.1 First Generation/ 2-tier Web system
17.2.2 Modern 3-tier & n-tier architecture
17.3 Differences between Traditional Software and Web-based Software
17.4 Challenges in Testing for Web-based Software
17.5 Quality Aspects
17.6 Web Engineering (WebE)
17.6.1 Analysis and Design of Web-based systems
17.6.1.1 Conceptual Modeling
17.6.1.2 Navigation Modeling
17.6.1.3 Presentation Modeling
17.6.1.4 Web Scenarios Modeling
17.6.1.5 Task Modeling
17.6.1.6 Configuration Modeling
17.6.2 Design Activities
17.6.2.1 Interface Design
17.6.2.2 Content Design
17.6.2.3 Architecture Design
17.6.2.4 Presentation Design
17.6.2.5 Navigation design
17.7 Testing of Web-Based Systems
17.7.1 Interface Testing
17.7.2 Usability Testing
17.7.3 Content Testing
17.7.4 Navigation Testing
17.7.5 Configuration/Compatibility Testing
17.7.6 Security Testing
17.7.6.1 Security Test Plan
17.7.6.2 Various Threat Types and their corresponding Test cases
17.7.7 Performance Testing
17.7.7.1 Performance Parameters
17.7.7.2 Types of Performance Testing
Summary
PART VII TRACKING THE BUG
18. Debugging
18.1 Debugging: An art or technique?
18.2 Debugging Process
18.3 Debugging is Difficult?
18.4 Debugging Techniques
18.4.1 Debugging with Memory Dump
18.4.2 Debugging with Watch Points
18.4.3 BackTracking
18.5 Correcting the Bugs
18.5.1 Debugging Guidelines
18.6 Debuggers
18.6.1 Types of Debuggers
Summary
CASE STUDIES
REFERENCES
APPENDICES
ANSWERS TO MULTIPLE CHOICE QUESTIONS
APPENDIX A SRS VERIFICATION CHECKLIST
APPENDIX B HLD VERIFICATION CHECKLIST
APPENDIX C LLD VERIFICATION CHECKLIST
APPENDIX D GENERAL SDD VERIFICATION CHECKLIST
APPENDIX E GENERIC CODE VERIFICATION CHECKLIST
INDEX