Edition: [Second ed.]
Authors: Robert A. Altmann, Daniel N. Allen, Cecil R. Reynolds
Series:
ISBN: 9783030594558, 3030594556
Publisher:
Year of publication: 2020
Number of pages: [726]
Language: English
File format: PDF (can be converted to PDF, EPUB, or AZW3 at the user's request)
File size: 9 MB
If you would like the book Mastering Modern Psychological Testing: Theory and Methods converted to PDF, EPUB, AZW3, MOBI, or DJVU format, you can notify support and they will convert the file for you.
Please note that Mastering Modern Psychological Testing: Theory and Methods is offered in its original language and is not a Persian translation. The International Library website provides original-language books only and does not offer any books translated into or written in Persian.
This book provides a comprehensive introduction to psychological assessment and covers areas not typically addressed in existing test and measurement texts, such as neuropsychological assessment and the use of tests in forensic settings. The book introduces the vocabulary of the profession and the most basic mathematics of testing early on, as both are fundamental to understanding the field. Numerous examples are drawn from tests that the authors have written or otherwise helped to develop, reflecting the authors' deep understanding of these tests and their familiarity with problems encountered in test development, use, and interpretation. Following the introduction of the basic areas of psychometrics, the book moves to areas of testing that represent various approaches to measuring different psychological constructs (memory, language, executive function, etc.), with emphasis on the complex issue of cultural bias in testing. Examples of existing tests are given throughout the book; however, this book is not designed to prepare students to go out and administer, score, and interpret specific psychological tests. Rather, its purpose is to provide the foundational core of knowledge about tests, measurement, and assessment constructs, issues, and quantitative tools.

The book:
- Explains what constitutes a psychological test, how tests are developed, how they are best used, and how to evaluate their strengths and weaknesses;
- Describes areas of testing that represent different approaches to measuring different psychological constructs;
- Explains applications of psychological testing to issues in the courts;
- Addresses how test authors and publishers design and research tests to address the difficult and demanding issues of cultural differences in test performance and interpretation of test results.
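The description notes that the book grounds readers early in "the most basic mathematics of testing," and the table of contents below lists derived, norm-referenced scores (Chapter 3). As a minimal illustrative sketch only, not drawn from the book and using invented sample data, the following Python snippet shows how raw scores are converted into common standard scores (z-scores, T-scores, and deviation-IQ-style scores):

```python
# Illustrative sketch (not from the book): converting raw test scores into
# derived, norm-referenced standard scores of the kind covered in Chapter 3.
# The norm-group data below are invented for demonstration only.

from statistics import mean, pstdev

raw_scores = [42, 47, 51, 53, 55, 58, 60, 63, 67, 74]  # hypothetical norm group

m = mean(raw_scores)
sd = pstdev(raw_scores)  # population SD, treating the norm group as fixed

def z_score(x):
    """Standard (z) score: mean 0, SD 1."""
    return (x - m) / sd

def t_score(x):
    """T-score: mean 50, SD 10."""
    return 50 + 10 * z_score(x)

def deviation_iq(x):
    """Deviation-IQ-style score: mean 100, SD 15."""
    return 100 + 15 * z_score(x)

for x in (47, 58, 67):
    print(f"raw={x:>3}  z={z_score(x):+.2f}  T={t_score(x):.1f}  IQ-scale={deviation_iq(x):.0f}")
```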
Contents
List of Figures
List of Tables
List of Special Interest Topics
1: Introduction to Psychological Assessment 1.1 Brief History of Testing 1.1.1 Earliest Testing: Circa 2200 BC 1.1.2 Eighteenth- and Nineteenth-Century Testing 1.1.2.1 Carl Frederich Gauss 1.1.2.2 Civil Service Examinations 1.1.2.3 Physicians and Psychiatrists 1.1.3 Brass Instruments Era 1.1.3.1 Sir Francis Galton 1.1.3.2 James McKeen Cattell 1.1.3.3 Clark Wissler 1.1.4 Twentieth-Century Testing 1.1.4.1 Alfred Binet: Bring on Intelligence Testing! 1.1.4.2 Army Alpha and Beta Tests 1.1.4.3 Robert Woodworth: Bring on Personality Testing! 1.1.4.4 Rorschach Inkblot Test 1.1.4.5 College Admission Tests 1.1.4.6 Wechsler Intelligence Scales 1.1.4.7 Minnesota Multiphasic Personality Inventory 1.1.5 Twenty-First-Century Testing 1.2 The Language of Assessment 1.2.1 Tests 1.2.2 Standardized Tests 1.2.3 Measurement 1.2.4 Assessment 1.2.5 Are Tests, Measurement, and Assessment Interchangeable Terms? 1.2.6 Other Important Terms 1.3 Types of Tests 1.3.1 Maximum Performance Tests 1.3.1.1 Achievement and Aptitude Tests 1.3.1.2 Objective and Subjective Tests 1.3.1.3 Speed and Power Tests 1.3.2 Typical Response Tests 1.3.2.1 Objective Personality Tests 1.3.2.2 Projective Personality Tests 1.4 Types of Scores 1.5 Assumptions of Psychological Assessment 1.5.1 Assumption #1: Psychological Constructs Exist 1.5.2 Assumption #2: Psychological Constructs Can Be Measured 1.5.3 Assumption #3: Although We Can Measure Constructs, Our Measurement Is Not Perfect 1.5.4 Assumption #4: There Are Different Ways to Measure Any Given Construct 1.5.5 Assumption #5: All Assessment Procedures Have Strengths and Limitations 1.5.6 Assumption #6: Multiple Sources of Information Should Be Part of the Assessment Process 1.5.7 Assumption #7: Performance on Tests Can Be Generalized to Non-Test Behaviors 1.5.8 Assumption #8: Assessment Can Provide Information that Helps Psychologists Make Better Professional Decisions 1.5.9 Assumption #9: Assessments Can Be Conducted in a Fair Manner 1.5.10 Assumption #10: Testing and Assessment Can Benefit Individuals and Society as a Whole 1.6 Why Use Tests? 1.7 Common Applications of Psychological Assessments 1.7.1 Diagnosis 1.7.2 Treatment Planning and Treatment Effectiveness 1.7.3 Selection, Placement, and Classification 1.7.4 Self-Understanding 1.7.5 Evaluation 1.7.6 Licensing 1.7.7 Program Evaluation 1.7.8 Scientific Method 1.8 Common Criticisms of Testing and Assessment 1.9 Participants in the Assessment Process 1.9.1 People Who Develop Tests 1.9.2 People Who Use Tests 1.9.3 People Who Take Tests 1.9.4 Other People Involved in Assessment Process 1.10 Psychological Assessment in the Twenty-First Century 1.10.1 Computerized Adaptive Testing (CAT) 1.10.2 Other Technological Applications Used in Assessment 1.10.3 "Authentic" Assessments 1.10.4 Health-Care Delivery Systems 1.10.5 High-Stakes Assessment 1.11 Summary References Recommended Reading and Internet Sites
2: The Basic Statistics of Measurement 2.1 Scales of Measurement 2.1.1 What Is Measurement? 2.1.2 Nominal Scales 2.1.3 Ordinal Scales 2.1.4 Interval Scales 2.1.5 Ratio Scales 2.2 The Description of Test Scores 2.2.1 Distributions 2.2.2 Measures of Central Tendency 2.2.2.1 Mean 2.2.2.2 Median 2.2.2.3 Mode 2.2.2.4 Choosing Between the Mean, Median, and Mode 2.2.3 Measures of Variability 2.2.3.1 Range 2.2.3.2 Standard Deviation 2.2.3.3 Variance 2.2.3.4 Choosing Between the Range, Standard Deviation, and Variance 2.2.4 The Normal Distribution 2.3 Correlation Coefficients 2.3.1 Scatterplots 2.3.2 Types of Correlation Coefficients 2.3.3 Factors that Affect Correlation Coefficients 2.3.3.1 Linear Relationship 2.3.3.2 Range Restriction 2.3.4 Correlation Versus Causation 2.4 Linear Regression 2.4.1 Standard Error of Estimate 2.5 Summary Practice Items References Recommended Reading Internet Sites of Interest
3: The Meaning of Test Scores 3.1 Norm-Referenced and Criterion-Referenced Score Interpretations 3.1.1 Norm-Referenced Interpretations 3.1.1.1 Norms and Reference Groups 3.1.1.2 Derived Scores Used with Norm-Referenced Interpretations 3.1.1.2.1 Standard Scores 3.1.1.2.2 Normalized Standard Scores 3.1.1.2.3 Percentile Rank 3.1.1.2.4 Grade Equivalents 3.1.2 Criterion-Referenced Interpretations 3.1.3 Norm-Referenced, Criterion-Referenced, or Both? 3.2 Scores Based on Item Response Theory 3.3 So What Scores Should We Use: Norm-Referenced, Criterion-Referenced, or Rasch-Based Scores? 3.4 Qualitative Description of Test Scores 3.5 Reporting Information on Normative Samples and Test Scores 3.6 Summary Practice Items References Recommended Reading Internet Sites of Interest
4: Reliability 4.1 Classical Test Theory and Measurement Error 4.2 Sources of Measurement Error 4.2.1 Content Sampling Error 4.2.2 Time Sampling Error 4.2.3 Other Sources of Error 4.3 Reliability Coefficients 4.3.1 Test-Retest Reliability 4.3.2 Alternate-Form Reliability 4.3.3 Internal-Consistency Reliability 4.3.3.1 Split-Half Reliability 4.3.3.2 Coefficient Alpha and Kuder-Richardson Reliability 4.3.4 Inter-Rater Reliability 4.3.5 Reliability Estimates Are Not Independent 4.3.6 Reliability of Composite Scores 4.3.7 Reliability of Difference Scores 4.3.8 Selecting a Reliability Coefficient 4.3.9 Evaluating Reliability Coefficients 4.3.9.1 Construct 4.3.9.2 Time Available for Testing 4.3.9.3 Test Score Use 4.3.9.4 Method of Estimating Reliability 4.3.9.5 General Guidelines 4.3.10 How to Improve Reliability 4.3.11 Special Problems in Estimating Reliability 4.4 The Standard Error of Measurement 4.4.1 Evaluating the Standard Error of Measurement 4.4.2 Calculating Confidence Intervals 4.5 Modern Test Theories 4.5.1 Generalizability Theory 4.5.2 Item Response Theory 4.6 Reporting Reliability Information 4.6.1 How Test Manuals Report Reliability Information: The Reynolds Intellectual Assessment Scales, Second Edition (RIAS-2) 4.7 Reliability: Practical Strategies for Educators 4.8 Summary Practice Items References Recommended Reading
5: Validity 5.1 Threats to Validity 5.1.1 Examinee Characteristics 5.1.2 Test Administration and Scoring Procedures 5.1.3 Instruction and Coaching 5.2 Reliability and Validity 5.3 "Types of Validity" Versus "Types of Validity Evidence" 5.4 Sources of Validity Evidence 5.4.1 Evidence Based on Test Content 5.4.1.1 Face Validity 5.4.2 Evidence Based on Response Processes 5.4.3 Evidence Based on Internal Structure 5.4.3.1 Factor Analysis: A Gentle Introduction 5.4.3.2 Factor Analysis: The Process 5.4.3.3 Confirmatory Factor Analysis 5.4.4 Evidence Based on Relations to Other Variables 5.4.4.1 Test-Criterion Relationships 5.4.4.1.1 Selecting a Criterion 5.4.4.1.2 Criterion Contamination 5.4.4.1.3 Interpreting Validity Coefficients 5.4.4.2 Contrasted Groups Studies 5.4.4.3 Decision-Theory Models 5.4.4.3.1 Selection Ratio and Base Rate 5.4.4.3.2 Sensitivity and Specificity 5.4.4.4 Convergent and Discriminant Evidence 5.4.4.5 Validity Generalization 5.4.5 Evidence Based on Consequences of Testing 5.5 Integrating Evidence of Validity 5.6 How Test Manuals Report Validity Evidence: The Reynolds Intellectual Assessment Scales, Second Edition (RIAS-2) 5.7 Summary References Recommended Reading
6: Item Development 6.1 Item Formats 6.2 General Item Writing Guidelines 6.3 Maximum-Performance Tests 6.3.1 Multiple-Choice Items 6.3.2 True–False Items 6.3.3 Matching Items 6.3.4 Essay Items 6.3.5 Short-Answer Items 6.4 Typical-Response Tests 6.4.1 Typical-Response Item Formats 6.4.1.1 Rating Scale Item Example 6.4.2 Typical-Response Item Guidelines 6.5 Summary References Suggested Reading and Internet Sites
7: Item Analysis: Methods for Fitting the Right Items to the Right Test 7.1 Item Difficulty Index (or Item Difficulty Level) 7.1.1 Special Assessment Situations and Item Difficulty 7.2 Item Discrimination 7.2.1 Discrimination Index 7.2.2 Item Discrimination on Mastery Tests 7.2.3 Item Discrimination on Typical-Response Tests 7.2.4 Difficulty and Discrimination on Speed Tests 7.2.5 Examples of Item Difficulty and Discrimination Indices 7.3 Distracter Analysis 7.3.1 How Distracters Influence Item Difficulty and Discrimination 7.4 Qualitative Item Analysis 7.5 Item Characteristic Curves and Item Response Theory 7.5.1 Item Characteristic Curves 7.5.2 IRT Models 7.5.3 Invariance of Item Parameters 7.5.4 Special Applications of IRT 7.5.4.1 Computer Adaptive Testing 7.5.4.2 Detecting Biased Items 7.5.4.3 Scores Based on Item Response Theory 7.5.4.4 Reliability 7.6 Summary References Recommended Reading
8: Achievement Tests in the Era of High-Stakes Assessment 8.1 The Impetus for Achievement Tests 8.2 Group-Administered Achievement Tests 8.2.1 Commercial Standardized Achievement Test 8.2.1.1 Data Recognition Corporation (DRC) 8.2.1.2 Pearson 8.2.1.3 Houghton Mifflin Harcourt (HMH) Assessments 8.2.1.4 Diagnostic Achievement Tests 8.2.2 State-Developed Achievement Tests 8.2.3 Best Practices in Preparing Students for Standardized Assessment 8.3 Individual Achievement Tests 8.3.1 Wechsler Individual Achievement Test, Third Edition (WIAT-III) 8.3.2 Woodcock-Johnson IV Tests of Achievement (WJ-IV ACH) 8.3.3 Wide Range Achievement Test Fifth Edition (WRAT5) 8.3.4 Individual Achievement Tests That Focus on Specific Skills 8.3.4.1 Gray Oral Reading Test: Fifth Edition (GORT-5) 8.3.4.2 KeyMath-3 Diagnostic Assessment (KeyMath-3) 8.4 Selecting an Achievement Battery 8.5 Teacher-Constructed Achievement Tests and Student Evaluation 8.6 Achievement Tests: Not Only in the Public Schools! 8.6.1 Examination for Professional Practice in Psychology (EPPP) 8.6.2 United States Medical Licensing Examination (USMLE) 8.7 Summary References Suggested Reading
9: Assessment of Intelligence 9.1 A Brief History of Intelligence Tests 9.2 The Use of Aptitude and Intelligence Tests in School Settings 9.2.1 Aptitude-Achievement Discrepancies 9.2.2 A New Assessment Strategy for Specific Learning Disabilities: Response to Intervention (RTI) 9.2.3 Diagnosing Intellectual Disability 9.3 The Use of Aptitude and Intelligence Tests in Clinical Settings 9.4 Major Aptitude/Intelligence Tests 9.4.1 Group Aptitude/Intelligence Tests 9.4.1.1 K-12 Tests 9.4.1.1.1 Otis-Lennon School Ability Test, 8th Edition (OLSAT-8) 9.4.1.1.2 Naglieri Nonverbal Ability Test, Third Edition (NNAT3) 9.4.1.1.3 Cognitive Abilities Test (CogAT), Form 7 9.4.1.2 Personnel and Vocational Assessment 9.4.1.3 College Admission Tests 9.4.1.3.1 Scholastic Assessment Test 9.4.1.3.2 American College Test 9.4.2 Individual Aptitude/Intelligence Tests 9.4.2.1 Wechsler Intelligence Scale for Children-Fifth Edition (WISC-V) 9.4.2.2 Stanford-Binet Intelligence Scales, Fifth Edition (SB5) 9.4.2.3 Woodcock-Johnson IV (WJ IV) Tests of Cognitive Abilities 9.4.2.4 Reynolds Intellectual Assessment Scales, Second Edition (RIAS-2) 9.5 Selecting Aptitude/Intelligence Tests 9.6 Understanding the Report of an Intellectual Assessment 9.7 Summary References Recommended Reading
10: Assessment of Personality 10.1 Assessing Personality 10.1.1 Response Sets and Dissimulation 10.1.2 Factors Affecting Reliability and Validity 10.2 Objective Personality Tests: An Overview 10.2.1 Content/Rational Approach 10.2.2 Empirical Criterion Keying 10.2.3 Factor Analysis 10.2.4 Theoretical Approach 10.3 Assessment of Personality in Children and Adolescents 10.3.1 Behavior Assessment System for Children, Third Edition: Self-Report of Personality (SRP) 10.3.2 Single-Domain Self-Report Measures 10.4 Projective Personality Tests: An Overview 10.4.1 Projective Drawings 10.4.1.1 Draw-A-Person (DAP) Test 10.4.1.2 House-Tree-Person (H-T-P) 10.4.1.3 Kinetic Family Drawing (KFD) 10.4.2 Sentence Completion Tests 10.4.3 Apperception Tests 10.4.4 Inkblot Techniques 10.5 Summary References Recommended Reading
11: Behavioral Assessment 11.1 Assessing Behavior 11.2 Response Sets 11.3 Assessment of Behavior in the Schools 11.4 Behavioral Interviewing 11.5 Behavior Rating Scales 11.5.1 Behavior Assessment System for Children, Third Edition: Teacher Rating Scales and Parent Rating Scales (TRSs and PRSs) 11.5.2 Achenbach System of Empirically Based Assessment: Child Behavior Checklist and Teacher Report Form (CBCL and TRF) 11.5.3 Single-Domain or Syndrome-Specific Rating Scales 11.5.3.1 Childhood Autism Rating Scale, Second Edition (CARS-2) 11.5.3.2 Reynolds Adolescent Depression Scale, Second Edition (RADS-2) 11.5.3.3 Pediatric Behavior Rating Scale (PBRS) 11.5.4 Adaptive Behavior Scales 11.5.4.1 Vineland Adaptive Behavior Scales, Third Edition (Vineland-3) 11.5.5 Adult Behavior Rating Scales 11.6 Direct Observation 11.7 Continuous Performance Tests 11.8 Psychophysiological Assessments 11.9 Summary References Recommended Reading
12: Employment and Vocational Testing 12.1 Historical View of I/O Psychology 12.2 Personnel Selection Approaches 12.2.1 Cognitive Ability 12.2.2 Interviews 12.2.3 Integrity 12.2.4 Assessment Centers 12.2.5 Work Sample Tests 12.2.6 Biodata 12.3 Choosing a Personnel Selection Approach 12.3.1 Advantages and Disadvantages of Different Approaches 12.3.2 Applicant Reactions 12.3.3 Job Analysis 12.4 Evaluating Job Performance 12.4.1 Approaches to Performance Ratings 12.4.2 Comparison of Rating Approaches 12.4.3 Types of Rating Methods 12.4.4 Sources of Error 12.5 Legal Issues 12.5.1 The Uniform Guidelines on Employee Selection Procedures (1978) 12.5.2 Principles for the Validation and Use of Personnel Selection Procedures, Fifth Edition (2018) 12.6 Career Assessment 12.6.1 Strong Interest Inventory, Revised Edition 12.6.2 Career Decision-Making System, Revised 12.6.3 Self-Directed Search 12.7 Summary References Recommended Reading
13: Neuropsychological Testing 13.1 Components of a Neuropsychological Evaluation 13.2 Neuropsychological Assessment Approaches and Instruments 13.2.1 The Halstead-Reitan Neuropsychological Test Battery (HRNB) 13.2.2 The Luria-Nebraska Neuropsychological Battery (LNNB) for Adults 13.2.3 The Boston Process Approach 13.3 Assessment of Memory Functions 13.3.1 TOMAL-2: An Example of a Contemporary Comprehensive Memory Assessment 13.4 The Process of Neuropsychological Assessment 13.4.1 Referral 13.4.2 Review of Records 13.4.3 Clinical Interview 13.4.4 Test Selection 13.4.5 Test Conditions 13.5 Measurement of Deficits and Strengths 13.5.1 Normative Approach 13.5.2 Deficit Measurement Approach 13.5.3 Premorbid Ability 13.5.4 Pattern Analysis 13.5.5 Pathognomonic Signs 13.6 Summary References Recommended Reading
14: Forensic Applications of Psychological Assessment 14.1 What Is Forensic Psychology? 14.2 Expert Witnesses and Expert Testimony 14.3 Clinical Assessment Versus Forensic Assessment 14.4 Applications in Criminal Proceedings 14.4.1 Not Guilty by Reason of Insanity: The NGRI Defense 14.4.2 Competency to Stand Trial 14.4.3 Transfer of a Juvenile to Adult Criminal Court 14.4.4 Mitigation in Sentencing 14.4.5 The Special Case of Intellectual Disability in Capital Sentencing 14.4.6 Competency to Be Executed 14.5 Applications in Civil Proceedings 14.5.1 Personal Injury Litigation 14.5.2 Divorce and Child Custody 14.5.3 Determining Common Civil Competencies 14.5.4 Other Civil Matters 14.6 Third Party Observers in Forensic Psychological Testing 14.7 Detection of Malingering and Other Forms of Dissimulation 14.8 The Admissibility of Testimony Based on Psychological Testing Results 14.9 Summary References Additional Reading
15: The Problem of Bias in Psychological Assessment 15.1 What Do We Mean by Bias? 15.2 Past and Present Concerns: A Brief Look 15.3 The Controversy Over Bias in Testing: Its Origin, What It Is, and What It Is Not 15.3.1 Explaining Mean Group Differences 15.3.2 Test Bias and Etiology 15.3.3 Test Bias and Fairness 15.3.4 Test Bias and Offensiveness 15.3.5 Test Bias and Inappropriate Test Administration and Use 15.3.6 Bias and Extraneous Factors 15.4 Cultural Bias and the Nature of Psychological Testing 15.5 Objections to the Use of Educational and Psychological Tests with Minority Students 15.5.1 Inappropriate Content 15.5.2 Inappropriate Standardization Samples 15.5.3 Examiner and Language Bias 15.5.4 Inequitable Social Consequences 15.5.5 Measurement of Different Constructs 15.5.6 Differential Predictive Validity 15.5.7 Qualitatively Distinct Aptitude and Personality 15.6 The Problem of Definition in Test Bias Research: Differential Validity 15.7 Cultural Loading, Cultural Bias, and Culture-Free Tests 15.8 Inappropriate Indicators of Bias: Mean Differences and Equivalent Distributions 15.9 Bias in Test Content 15.9.1 How Test Publishers Commonly Identify Biased Items 15.10 Bias in Other Internal Features of Tests 15.10.1 How Test Publishers Commonly Identify Bias in Construct Measurement 15.11 Bias in Prediction and in Relation to Variables External to the Test 15.11.1 How Test Publishers Commonly Identify Bias in Prediction 15.12 Summary References Recommended Reading
16: Assessment Accommodations 16.1 Accommodations Versus Modifications 16.2 The Rationale for Assessment Accommodations 16.3 When Are Accommodations Not Appropriate or Necessary? 16.4 Strategies for Accommodations 16.4.1 Modifications of Presentation Format 16.4.2 Modifications of Response Format 16.4.3 Modifications of Timing 16.4.4 Modifications of Setting 16.4.5 Adaptive Devices and Supports 16.4.6 Using Only Portions of a Test 16.4.7 Using Alternate Assessments 16.5 Determining What Accommodations to Provide 16.6 Assessment of English Language Learners (ELLs) 16.7 Reporting Results of Modified Assessments 16.8 Summary References Recommended Reading
17: Best Practices: Legal and Ethical Issues 17.1 Guidelines for Developing Assessments 17.2 Guidelines for Selecting Published Assessments 17.3 Guidelines for Administering Assessments 17.4 Guidelines for Scoring Assessments 17.5 Guidelines for Interpreting Assessment Results, Making Clinical Decisions, and Reporting Results 17.6 Responsibilities of Test Takers 17.7 Summary and Top 10 Assessment-Related Behaviors to Avoid References Suggested Reading Internet Sites of Interest
18: How to Develop a Psychological Test: A Practical Approach 18.1 Phase I: Test Conceptualization 18.1.1 Conduct a Literature Review and Develop a Statement of Need for the Test 18.1.2 Describe the Proposed Uses and Interpretations of Results From the Test 18.1.3 Determine Who Will Use the Test and Why 18.1.4 Develop Conceptual and Operational Definitions of Constructs You Intend to Measure 18.1.5 Determine Whether Measures of Dissimulation Are Needed and If So, What Kind 18.1.5.1 Scales for Detecting Dissimulation on Assessments of Personality and Behavior 18.1.5.2 Scales for Detecting Dissimulation on Assessments of Aptitude and Achievement 18.2 Phase II: Specification of Test Structure and Format 18.2.1 Designate the Age Range Appropriate for the Measure 18.2.2 Determine and Describe the Testing Format 18.2.3 Describe the Structure of the Test 18.2.4 Develop a Table of Specifications (TOS) 18.2.5 Determine and Describe the Item Formats and Write Instructions for Administration and Scoring 18.2.6 Develop an Explanation of Methods for Item Development, Tryout, and Final Item Selection 18.3 Phase III: Planning Standardization and Psychometric Studies 18.3.1 Specify a Sampling Plan for Standardization 18.3.2 Determine Your Choice of Scaling Methods and Rationale 18.3.3 Briefly Outline the Reliability Studies to Be Performed and Their Rationale 18.3.4 Briefly Outline the Validity Studies to Be Performed and Their Rationale 18.3.5 Determine If There Are Any Special Studies That May Be Needed for Development of This Test or to Support Proposed Interpretations of Performance 18.3.6 List the Components of the Test 18.4 Phase 4: Plan Implementation 18.4.1 Reevaluate the Test Content and Structure 18.4.2 Prepare the Test Manual 18.4.3 Submit a Test Proposal 18.5 Summary References Recommended Reading
Appendix: Calculation (Table A.1)
Index