Edition: 5
Authors: Alan E. Kazdin
Series:
ISBN: 2015048757, 2010048486
Publisher: Pearson
Publication year: 2016
Number of pages: 577
Language: English
File format: PDF (can be converted to PDF, EPUB, or AZW3 on request)
File size: 11 MB
If you would like the file of Research Design in Clinical Psychology converted to PDF, EPUB, AZW3, MOBI, or DJVU, notify the support team and they will convert the file for you.
Please note that Research Design in Clinical Psychology is the original English edition, not a Persian translation. The International Library website provides original-language books only and does not offer any books translated into or written in Persian.
Cover, Title Page, Copyright Page, Brief Contents, Contents, Preface, Acknowledgments, About the Author

1.1 Why Do We Need Science at All? 1.1.1 Rationale 1.2 Illustrations of Our Limitations in Accruing Knowledge 1.2.1 Senses and Their Limits 1.2.2 Cognitive Heuristics 1.2.3 Additional Information Regarding Cognitive Heuristics 1.2.4 Memory 1.2.5 General Comments 1.3 Methodology 1.3.1 Definition and Its Components 1.3.2 Using Methodology to Answer Critical Questions 1.4 A Way of Thinking and Problem Solving 1.4.1 The Role of Theory 1.4.2 Findings and Conclusions 1.4.3 Additional Information Regarding Findings and Conclusions 1.4.4 Parsimony 1.4.5 How Parsimony Relates to Methodology 1.4.6 Plausible Rival Hypothesis 1.4.7 An Example of Plausible Rival Hypothesis 1.5 The Semmelweis Illustration of Problem Solving 1.5.1 Illustration: Saving Mothers from Dying 1.5.2 Additional Information Regarding the Semmelweis Illustration 1.5.3 A New Procedure 1.5.4 General Comments

2 Internal and External Validity
2.1 Types of Validity 2.2 Internal Validity 2.3 Threats to Internal Validity 2.3.1 History 2.3.2 Maturation 2.3.3 Testing 2.3.4 History, Maturation, and Testing Combined 2.4 Instrumentation as a Threat to Internal Validity 2.4.1 Some Examples Involving Instrumentation 2.4.2 Additional Information on Instrumentation 2.4.3 Response Shift 2.5 Additional Threats to Internal Validity 2.5.1 Statistical Regression 2.5.2 Three Ways to Help Protect Against Statistical Regression 2.5.3 Selection Biases 2.5.4 Attrition 2.5.5 Diffusion or Imitation of Treatment 2.5.6 Special Treatment or Reactions of Controls 2.5.7 Additional Information on Reactions of Controls 2.6 When and How These Threats Emerge 2.6.1 Poorly Designed Study 2.6.2 Well-Designed Study but Sloppily Conducted 2.6.3 Well-Designed Study with Influences Hard to Control during the Study 2.6.4 Well-Designed Study but the Results Obscure Drawing Conclusions 2.7 Managing Threats to Internal Validity 2.7.1 General Comments 2.8 External Validity 2.9 Threats to External Validity 2.9.1 Summary of Major Threats 2.9.2 Sample Characteristics 2.9.3 College Students as Subjects 2.9.4 Samples of Convenience 2.9.5 Underrepresented Groups 2.9.6 Additional Information on Underrepresented Groups 2.9.7 Narrow Stimulus Sampling 2.9.8 Additional Information on Narrow Stimulus Sampling 2.10 Additional Threats to External Validity 2.10.1 Reactivity of Experimental Arrangements 2.10.2 Reactivity of Assessment 2.10.3 Main Strategy for Combatting Reactivity 2.10.4 Test Sensitization 2.10.5 Multiple-Treatment Interference 2.10.6 Novelty Effects 2.10.7 Generality across Measures, Setting, and Time 2.10.8 Cohorts 2.11 When We Do and Do Not Care about External Validity 2.11.1 Proof of Concept (or Test of Principle) 2.11.2 Additional Information on Proof of Concept 2.12 Managing Threats to External Validity 2.12.1 General Comments 2.12.2 More General Comments on Managing Threats 2.13 Perspectives on Internal and External Validity 2.13.1 Parsimony and Plausibility 2.13.2 Priority of Internal Validity 2.13.3 Further Considerations Regarding Priority of Internal Validity
Summary and Conclusions: Internal and External Validity

3 Construct and Data-Evaluation Validity
3.1 Construct Validity Defined 3.2 Confounds and Other Intriguing Aspects of Construct Validity 3.3 Threats to Construct Validity 3.3.1 Attention and Contact with the Clients 3.3.2 Single Operations and Narrow Stimulus Sampling 3.3.3 Experimenter Expectancies 3.3.4 Cues of the Experimental Situation 3.4 Managing Threats to Construct Validity 3.4.1 General Comments 3.5 Data-Evaluation Validity Defined 3.6 Threats to Data-Evaluation Validity Defined 3.7 Overview of Essential Concepts of Data-Evaluation Validity 3.7.1 Statistical Test and Decision Making 3.7.2 Effect Size 3.8 Threats to Data-Evaluation Validity 3.8.1 Low Statistical Power 3.8.2 Subject Heterogeneity 3.8.3 Variability in the Procedures 3.8.4 Unreliability of the Measures 3.8.5 Restricted Range of the Measures 3.8.6 Errors in Data Recording, Analysis, and Reporting 3.8.7 Multiple Comparisons and Error Rates 3.8.8 Misreading or Misinterpreting the Data Analyses 3.9 Managing Threats to Data-Evaluation Validity 3.9.1 General Comments 3.10 Experimental Precision 3.10.1 Trade-Offs and Priorities 3.10.2 Holding Constant Versus Controlling Sources of Variation
Summary and Conclusions: Construct and Data-Evaluation Validity

4 Ideas that Begin the Research Process
4.1 Developing the Research Idea 4.2 Sources of Ideas for Study 4.2.1 Curiosity 4.2.2 The Case Study 4.2.3 Study of Special Populations 4.2.4 Additional Information Regarding Special Populations 4.2.5 Stimulated by Other Studies 4.2.6 Translations and Extensions between Human and Nonhuman Animals 4.2.7 Measurement Development and Validation 4.3 Investigating How Two (or more) Variables Relate to Each Other 4.3.1 Association or Correlation between Variables 4.3.2 Concepts That Serve as the Impetus for Research 4.3.3 Risk Factor 4.3.4 Understanding the Difference between a Correlate and a Risk Factor 4.3.5 Protective Factor 4.3.6 Causal Factors 4.3.7 Key Criteria for Inferring a Causal Relation 4.3.8 General Comments 4.4 Moderators, Mediators, and Mechanisms 4.4.1 Moderators 4.4.2 Moderator Research 4.4.3 Mediators and Mechanisms 4.4.4 Tutti: Bringing Moderators, Mediators, and Mechanisms Together 4.4.5 General Comments 4.5 Translating Findings from Research to Practice 4.5.1 Basic and Applied Research 4.5.2 Distinguishing Applied Research from Basic Research 4.5.3 Translational Research 4.5.4 Further Consideration Regarding Translational Research 4.6 Theory as a Guide to Research 4.6.1 Definition and Scope 4.6.2 Theory and Focus 4.7 Why Theory Is Needed 4.7.1 Some Additional Reasons Why Theory Is Needed 4.7.2 Generating Versus Testing Hypotheses 4.7.3 Further Considerations Regarding Generating Versus Testing Hypotheses 4.8 What Makes a Research Idea Interesting or Important? 4.8.1 Guiding Questions 4.8.2 More Information on Generating Guiding Questions 4.9 From Ideas to a Research Project 4.10 Overview of Key Steps 4.10.1 Abstract Ideas to Hypothesis and Operations 4.10.2 Moving to Operations Constructs and Procedures 4.10.3 Sample to Be Included 4.10.4 Research Design Options 4.10.5 Additional Information Regarding Research Design Options 4.10.6 Multiple Other Decision Points 4.11 General Comments
Summary and Conclusions: Ideas that Begin the Research Process

5 Experimental Research Using Group Designs
5.1 Subject Selection 5.1.1 Random Selection 5.1.2 More Information on Random Selection 5.2 Who Will Serve as Subjects and Why? 5.2.1 Diversity of the Sample 5.2.2 Dilemmas Related to Subject Selection 5.2.3 Samples of Convenience 5.2.4 Additional Sample Considerations 5.3 Subject Assignment and Group Formation 5.3.1 Random Assignment 5.3.2 Group Equivalence 5.3.3 Matching 5.3.4 Matching When Random Assignment is Not Possible 5.3.5 Perspective on Random Assignment and Matching 5.4 True-Experimental Designs 5.5 Pretest–Posttest Control Group Design 5.5.1 Description 5.5.2 An Example of a Randomized Controlled Trial (RCT) 5.5.3 Considerations in Using the Design 5.5.4 Additional Consideration Regarding Pretest–Posttest Design 5.6 Posttest-Only Control Group Design 5.6.1 Description 5.6.2 Considerations in Using the Design 5.7 Solomon Four-Group Design 5.7.1 Description 5.7.2 Considerations in Using the Design 5.8 Factorial Designs 5.8.1 Considerations in Using the Design 5.9 Quasi-Experimental Designs 5.10 Variations: Briefly Noted 5.10.1 Pretest–Posttest Design 5.10.2 Posttest-Only Design 5.11 Illustration 5.11.1 General Comments 5.12 Multiple-Treatment Designs 5.12.1 Crossover Design 5.12.2 Multiple-Treatment Counterbalanced Design 5.13 Considerations in Using the Designs 5.13.1 Order and Sequence Effects 5.13.2 Restrictions with Various Independent and Dependent Variables 5.13.3 Ceiling and Floor Effects 5.13.4 Additional Considerations Regarding Ceiling and Floor Effects
Summary and Conclusions: Experimental Research Using Group Designs

6 Control and Comparison Groups
6.1 Control Groups 6.2 No-Treatment Control Group 6.2.1 Description and Rationale 6.2.2 Special Considerations 6.3 Wait-List Control Group 6.3.1 Description and Rationale 6.3.2 Special Considerations 6.4 No-Contact Control Group 6.4.1 Description and Rationale 6.4.2 Special Considerations 6.5 Nonspecific Treatment or Attention-Placebo Control Group 6.5.1 Description and Rationale 6.5.2 More Information on Description and Rationale 6.5.3 Special Considerations 6.5.4 Ethical Issues 6.6 Treatment as Usual 6.6.1 Description and Rationale 6.6.2 Special Considerations 6.7 Yoked Control Group 6.7.1 Description and Rationale 6.7.2 More Information on Description and Rationale 6.7.3 Special Considerations 6.8 Nonrandomly Assigned or Nonequivalent Control Group 6.8.1 Description and Rationale 6.8.2 Special Considerations 6.9 Key Considerations in Group Selection 6.10 Evaluating Psychosocial Interventions 6.10.1 Intervention Package Strategy 6.10.2 Dismantling Intervention Strategy 6.10.3 Constructive Intervention Strategy 6.10.4 Parametric Intervention Strategy 6.11 Evaluating Additional Psychosocial Interventions 6.11.1 Comparative Intervention Strategy 6.11.2 Intervention Moderator Strategy 6.11.3 More Information on Intervention Moderator Strategy 6.11.4 Intervention Mediator/Mechanism Strategy 6.11.5 General Comments
Summary and Conclusions: Control and Comparison Groups

7 Case-Control and Cohort Designs
7.1 Critical Role of Observational Research: Overview 7.1.1 More Information on the Critical Role of Observational Research 7.2 Case-Control Designs 7.2.1 Cross-Sectional Design 7.2.2 Retrospective Design 7.2.3 More Information on Retrospective Design 7.2.4 Considerations in Using Case-Control Designs 7.2.5 Further Considerations in Using Case-Control Designs 7.3 Cohort Designs 7.3.1 Single-Group Cohort Design 7.3.2 Birth-Cohort Design 7.3.3 More Information on Birth-Cohort Design 7.3.4 Multigroup Cohort Design 7.3.5 More Information on Multigroup Cohort Design 7.3.6 Accelerated, Multi-Cohort Longitudinal Design 7.3.7 More Information on Accelerated, Multi-Cohort Longitudinal Design 7.3.8 Considerations in Using Cohort Designs 7.4 Prediction, Classification, and Selection 7.4.1 Identifying Varying Outcomes: Risk and Protective Factors 7.4.2 Sensitivity and Specificity: Classification, Selection, and Diagnosis 7.4.3 Further Considerations Regarding Sensitivity and Specificity 7.4.4 General Comments 7.5 Critical Issues in Designing and Interpreting Observational Studies 7.6 Specifying the Construct 7.6.1 Level of Specificity of the Construct 7.6.2 Operationalizing the Construct 7.6.3 Further Considerations Regarding Operationalizing the Construct 7.7 Selecting Groups 7.7.1 Special Features of the Sample 7.7.2 Selecting Suitable Controls 7.7.3 Additional Information on Selecting Suitable Controls 7.7.4 Possible Confounds 7.7.5 More Information on Possible Confounds 7.8 Time Line and Causal Inferences 7.9 General Comments
Summary and Conclusions: Case-Control and Cohort Designs

8 Single-Case Experimental Research Designs
8.1 Key Requirements of the Designs 8.1.1 Ongoing Assessment 8.1.2 Baseline Assessment 8.2 Stability of Performance 8.2.1 Trend in the Data 8.2.2 Variability in the Data 8.3 Major Experimental Design Strategies 8.4 ABAB Designs 8.4.1 Description 8.4.2 Illustration 8.4.3 Design Variations 8.4.4 Considerations in Using the Designs 8.5 Multiple-Baseline Designs 8.5.1 Description 8.5.2 Illustration 8.5.3 Design Variations 8.5.4 Considerations in Using the Designs 8.6 Changing-Criterion Designs 8.6.1 Description 8.6.2 Illustration 8.6.3 Design Variations 8.6.4 Considerations in Using the Designs 8.7 Data Evaluation in Single-Case Research 8.8 Visual Inspection 8.8.1 Criteria Used for Visual Inspection 8.8.2 Additional Information on Criteria Used for Visual Inspection 8.8.3 Considerations in Using Visual Inspection 8.9 Statistical Evaluation 8.9.1 Statistical Tests 8.9.2 Additional Information on Statistical Tests 8.9.3 Considerations in Using Statistical Tests 8.10 Evaluation of Single-Case Designs 8.10.1 Special Strengths and Contributions 8.10.2 Strength 1 of Single-Case Designs 8.10.3 Strengths 2 and 3 of Single-Case Designs 8.10.4 Strengths 4 and 5 of Single-Case Designs 8.10.5 Issues and Concerns
Summary and Conclusions: Single-Case Experimental Research Designs

9 Qualitative Research Methods
9.1 Key Characteristics 9.1.1 Overview 9.1.2 An Orienting Example 9.1.3 Definition and Core Features 9.1.4 Contrasting Qualitative and Quantitative Research 9.1.5 More Information on Contrasting Qualitative and Quantitative Research 9.2 Methods and Analyses 9.3 The Data for Qualitative Analysis 9.4 Validity and Quality of the Data 9.4.1 Validity 9.4.2 Qualitative Research on and with Its Own Terms 9.4.3 More Information on Key Concepts and Terms 9.4.4 Checks and Balances 9.5 Illustrations 9.5.1 Surviving a Major Bus Crash 9.5.2 Comments on This Illustration 9.5.3 Lesbian, Gay, Bisexual, and Transgender (LGBT) Youth and the Experience of Violence 9.5.4 Comments on This Illustration 9.5.5 Yikes! Why Did I Post That on Facebook? 9.5.6 Comments on This Illustration 9.6 Mixed Methods: Combining Quantitative and Qualitative Research 9.6.1 Motorcycle Helmet Use 9.6.2 Comments on This Example 9.7 Recapitulation and Perspectives on Qualitative Research 9.7.1 Contributions of Qualitative Research 9.7.2 Further Considerations Regarding Contributions of Qualitative Research 9.7.3 Limitations and Unfamiliar Characteristics 9.7.4 Unfamiliar Characteristics 1 and 2 of Qualitative Research 9.7.5 Unfamiliar Characteristics 3, 4, and 5 of Qualitative Research 9.7.6 General Comments
Summary and Conclusions: Qualitative Research Methods

10 Selecting Measures for Research
10.1 Key Considerations in Selecting Measures 10.1.1 Construct Validity 10.1.2 More Information on Construct Validity 10.1.3 Reasons for Carefully Selecting Measures 10.1.4 Psychometric Characteristics 10.1.5 More Information on Psychometric Characteristics 10.1.6 Sensitivity of the Measure 10.1.7 Diversity and Multicultural Relevance of the Measure 10.1.8 Core Features of Ethnicity, Culture, and Diversity 10.1.9 General Comments 10.2 Using Available or Devising New Measures 10.2.1 Using a Standardized Measure 10.2.2 Varying the Use or Contents of an Existing Measure 10.2.3 More Information on Varying the Use or Contents 10.2.4 Developing a New Measure 10.2.5 General Comments 10.3 Special Issues to Guide Measurement Selection 10.3.1 Awareness of Being Assessed: Measurement Reactivity 10.3.2 More Information on Awareness of Being Assessed 10.3.3 Countering Limited Generality 10.3.4 Use of Multiple Measures 10.4 Brief Measures, Shortened Forms, and Use of Single-Item Measures 10.4.1 Use of Brief Measures 10.4.2 Use of Short or Shortened Forms 10.4.3 Single or a Few Items 10.4.4 Considerations and Cautions 10.4.5 More Information Regarding Considerations and Cautions 10.5 Interrelations of Different Measures 10.5.1 Three Reasons for Lack of Correspondence among Measures 10.6 Construct and Method Variance 10.6.1 Using a Correlation Matrix 10.7 General Comments
Summary and Conclusions: Selecting Measures for Research

11 Assessment: Types of Measures and Their Use
11.1 Type of Assessment 11.1.1 Modalities of Assessment Used in Clinical Psychology 11.2 Objective Measures 11.2.1 Characteristics 11.2.2 Issues and Considerations 11.2.3 More Information on Issues and Considerations 11.3 Global Ratings 11.3.1 Characteristics 11.3.2 Issues and Considerations 11.3.3 More Information on Issues and Considerations 11.4 Projective Measures 11.4.1 Characteristics 11.4.2 Issues and Considerations 11.4.3 More Information on Issues and Considerations 11.5 Direct Observations of Behavior 11.5.1 Characteristics 11.5.2 More Information on Characteristics 11.5.3 Issues and Considerations 11.6 Psychobiological Measures 11.6.1 Characteristics 11.6.2 More Information on Characteristics 11.6.3 Issues and Considerations 11.7 Computerized, Technology-Based, and Web-Based Assessment 11.7.1 Characteristics 11.7.2 More Information on Characteristics 11.7.3 Issues and Considerations 11.8 Unobtrusiveness Measures 11.8.1 Characteristics 11.8.2 More Information on Characteristics 11.8.3 Issues and Considerations 11.9 General Comments
Summary and Conclusions: Assessment: Types of Measure and Their Use

12 Special Topics of Assessment
12.1 Assessing the Impact of the Experimental Manipulation 12.1.1 Checking on the Experimental Manipulation 12.2 Types of Manipulations 12.2.1 Variations of Information 12.2.2 Variations in Subject Tasks and Experience 12.2.3 Variation of Intervention Conditions 12.2.4 Additional Information on Variation of Intervention Conditions 12.3 Utility of Checking the Manipulation 12.3.1 No Differences between Groups 12.3.2 Keeping Conditions Distinct 12.4 Interpretive Problems in Checking the Manipulation 12.4.1 Effects on Manipulation Check and Dependent Measure 12.4.2 No Effect on Manipulation Check and Dependent Measure 12.4.3 Effect on Manipulation Check but No Effect on the Dependent Measure 12.4.4 No Effect on the Manipulation Check but an Effect on the Dependent Measure 12.4.5 General Comments 12.5 Special Issues and Considerations in Manipulation Checks 12.5.1 Assessment Issues 12.5.2 More Information on Assessment Issues 12.5.3 Data Analysis Issues: Omitting Subjects 12.5.4 More Information on Omitting Subjects 12.5.5 Intent-to-Treat Analyses and Omitting and Keeping Subjects in Separate Data Analyses 12.5.6 Pilot Work and Establishing Potent Manipulations 12.6 Assessing Clinical Significance or Practical Importance of the Changes 12.6.1 Most Frequently Used Measures 12.6.2 Further Considerations Regarding Most Frequently Used Measures 12.6.3 More Information on Most Frequently Used Measures 12.6.4 Other Criteria Briefly Noted 12.6.5 Further Considerations Regarding Other Criteria 12.6.6 Other Terms and Criteria Worth Knowing 12.6.7 General Comments 12.7 Assessment during the Course of Treatment 12.7.1 Evaluating Mediators of Change 12.7.2 More Information on Evaluating Mediators of Change 12.7.3 Improving Patient Care in Research and Clinical Practice 12.7.4 More Information on Improving Patient Care in Research 12.7.5 General Comments
Summary and Conclusions: Special Topics of Assessment

13 Null Hypothesis Significance Testing
13.1 Significance Tests and the Null Hypothesis 13.1.1 More Information on Significance Tests 13.2 Critical Concepts and Strategies in Significance Testing 13.2.1 Significance Level (alpha) 13.3 Power 13.3.1 The Power Problem 13.3.2 Relation to Alpha, Effect Size, and Sample Size 13.3.3 More Information on Relations to Alpha, Effect Size, and Sample Size 13.3.4 Variability in the Data 13.4 Ways to Increase Power 13.4.1 Increasing Expected Differences between Groups 13.4.2 Use of Pretests 13.4.3 Varying Alpha Levels within an Investigation 13.4.4 Using Directional Tests 13.4.5 Decreasing Variability (Error) in the Study 13.5 Planning the Data Analyses at the Design Stage 13.6 Objections to Statistical Significance Testing 13.6.1 Major Concerns 13.6.2 Misinterpretations 13.6.3 More Information on Misinterpretations 13.6.4 Significance Testing and Failures to Replicate 13.6.5 General Comments 13.7 Hypothesis Testing: Illustrating an Alternative 13.7.1 Bayesian Data Analyses 13.7.2 More Information on Bayesian Data Analyses 13.7.3 General Comments
Summary and Conclusions: Null Hypothesis Significance Testing

14 Presenting and Analyzing the Data
14.1 Overview of Data Evaluation 14.1.1 Checking the Data 14.1.2 Description and Preliminary Analyses 14.2 Supplements to Tests of Significance 14.2.1 Magnitude and Strength of Effect 14.2.2 Confidence Intervals 14.2.3 Error Bars in Data Presentation 14.2.4 Statistical Significance, Magnitude of Effect, and Clinical or Practical Significance 14.3 Critical Decisions in Presenting and Analyzing the Data 14.4 Handling Missing Data 14.4.1 Completer Analysis 14.4.2 Intent-to-Treat Analysis 14.4.3 Multiple Imputation Models 14.4.4 General Comments 14.5 Outliers and the Prospect of Deleting Data 14.6 Analyses Involving Multiple Comparisons 14.6.1 Controlling Alpha Levels 14.6.2 Considerations 14.7 Multivariate and Univariate Analyses 14.7.1 Considerations 14.8 General Comments 14.9 Special Topics in Data Analysis 14.9.1 Understanding and Exploring the Data 14.9.2 Research Based on Previously Collected Data
Summary and Conclusions: Presenting and Analyzing the Data

15 Cautions, Negative Effects, and Replication
15.1 Interpreting the Results of a Study 15.1.1 Common Leaps in Language and Conceptualization of the Findings 15.1.2 Meaning Changes of Innocent Words and One Variable “Predicts” Another 15.1.3 “Implications” in the Interpretation of Findings 15.1.4 Further Considerations regarding “Implications” 15.1.5 More Data Analyses Can Enhance Data Interpretation 15.1.6 Another Example of More Data Analyses Enhancing Data Interpretation 15.1.7 Searching for Moderators or Statistical Interactions 15.1.8 General Comments 15.2 Negative Results or No-Difference Findings 15.2.1 Ambiguity of Negative Results 15.3 Why Negative Results Are Useful 15.3.1 When Negative Results Are Interpretable 15.3.2 When Negative Results Are Important 15.3.3 Additional Examples of Negative Results Being Important 15.3.4 Further Considerations Regarding Importance of Negative Results 15.3.5 Special Case of Searching for Negative Effects 15.3.6 Negative Effects in Perspective 15.3.7 Further Considerations Regarding Negative Effects 15.4 Replication 15.4.1 Defined 15.4.2 Types of Replication 15.4.3 Expansion of Concepts and Terms 15.5 Importance of Replication 15.5.1 Reasons 1 and 2 for the Importance of Replication 15.5.2 Reasons 3, 4, and 5 for the Importance of Replication 15.5.3 Instructive but Brief Replication Examples 15.5.4 One Additional Replication Example 15.5.5 Renewed Attention to Replication 15.5.6 Additional Information Regarding Renewed Attention to Replication 15.5.7 The Reproducibility Project
Summary and Conclusions: Cautions, Negative Effects, and Replication

16 Ethical Issues and Guidelines for Research
16.1 Background and Contexts 16.2 Scope of Ethical Issues 16.3 Inherent Roles of Values and Ethics in Research 16.3.1 Values and Decisions in Research 16.3.2 Relevance to Psychological Research 16.3.3 Power Difference of Investigator and Participant 16.4 Critical Issues in Research 16.4.1 Deception 16.4.2 Further Considerations Regarding Deception 16.4.3 Debriefing 16.4.4 Further Considerations Regarding Debriefing 16.4.5 Invasion of Privacy 16.4.6 Sources of Protection 16.4.7 Special Circumstances and Cases 16.4.8 Further Considerations Regarding Special Circumstances 16.5 Informed Consent 16.5.1 Conditions and Elements 16.5.2 Important Considerations 16.5.3 Additional Important Considerations 16.5.4 Consent and Assent 16.5.5 Forms and Procedures 16.5.6 Certificate of Confidentiality 16.5.7 Letter and Spirit of Consent 16.6 Intervention Research Issues 16.6.1 Informing Clients about Treatment 16.6.2 Withholding the Intervention 16.6.3 Control Groups and Treatments of Questionable Efficacy 16.6.4 Consent and the Interface with Threats to Validity 16.6.5 General Comments 16.7 Regulations, Ethical Guidelines, and Protection of Client Rights 16.7.1 Federal Codes and Regulations 16.7.2 Professional Codes and Guidelines 16.7.3 More Information on Professional Codes and Guidelines 16.7.4 General Comments
Summary and Conclusions: Ethical Issues and Guidelines for Research

17 Scientific Integrity
17.1 Core Values Underpinning Scientific Integrity 17.2 Ethical Codes Related to Scientific Integrity 17.3 Critical Issues and Lapses of Scientific Integrity 17.3.1 Fraud in Science 17.3.2 More Information Regarding Fraud in Science 17.3.3 Questionable Practices and Distortion of Findings 17.3.4 More Information on Questionable Practices 17.3.5 Another Data Analysis Point 17.3.6 Plagiarism 17.3.7 Self-Plagiarism 17.4 Authorship and Allocation of Credit 17.4.1 Guidelines and Best Practices for Allocating Authorship 17.4.2 Special Circumstances and Challenges 17.5 Sharing of Materials and Data 17.5.1 “Big Data”: Special Circumstances in Data Sharing 17.5.2 More Information on “Big Data” 17.5.3 When Not to Share Data 17.5.4 General Comments 17.6 Conflict of Interest 17.6.1 Procedures to Address Conflict of Interest 17.6.2 Other Conflicts of Interest Briefly Noted 17.7 Breaches of Scientific Integrity 17.7.1 Jeopardizing the Public Trust 17.8 Remedies and Protections
Summary and Conclusions: Scientific Integrity

18 Communication of Research Findings
18.1 Methodologically Informed Manuscript Preparation 18.2 Overview 18.3 Main Sections of the Article 18.3.1 Title of the Article 18.3.2 Abstract 18.3.3 Introduction 18.3.4 More Information on the Introduction 18.3.5 Method 18.3.6 Results 18.3.7 Discussion 18.3.8 Tables, Figures, Appendices, and Other Supporting Data 18.4 General Comments 18.5 Further Guides to Manuscript Preparation 18.5.1 Questions to Guide Manuscript Preparation 18.5.2 Formal Guidelines for Presenting Research 18.5.3 General Comments 18.6 Selecting a Journal 18.6.1 What Journal Outlets Are Available? 18.6.2 Some Criteria for Choosing among the Many Options 18.6.3 Additional Criteria for Consideration 18.7 Manuscript Submission and Review 18.7.1 Overview of the Journal Review Process 18.7.2 More Information on Overview of the Journal Review Process 18.7.3 You Receive the Reviews 18.7.4 General Comments
Summary and Conclusions: Communication of Research Findings

19 Methodology: Constantly Evolving along with Advances in Science
Additional Information on Methodology 19.1 The Dynamic Nature of Methodology 19.2 Research Design 19.2.1 Assessment 19.2.2 Data Evaluation and Interpretation 19.2.3 Ethical Issues and Scientific Integrity 19.2.4 Communication of Research Findings 19.2.5 General Comments 19.3 Importance of Methodological Diversity 19.4 Abbreviated Guidelines for a Well- (and Quickly) Designed Study
Summary and Conclusions: Methodology: Constantly Evolving along with Advances in Science

Glossary, References, End Notes, Credits, Name Index, Subject Index