Edition: 2
Authors: Vitaly Herasevich, MD, PhD, MSc; Brian Pickering, MD, MSc
ISBN: 9780367488215, 9781003042969
Publisher: Routledge
Publication year: 2021
Number of pages: 199
Language: English
File format: PDF (can be converted to EPUB or AZW3 on request)
File size: 3 MB
If you need the book Health Information Technology Evaluation Handbook: From Meaningful Use to Meaningful Outcomes converted to PDF, EPUB, AZW3, MOBI, or DJVU, you can notify support and they will convert the file for you.
Please note that Health Information Technology Evaluation Handbook: From Meaningful Use to Meaningful Outcomes is the original-language edition, not a Persian translation. The International Library website offers original-language books only and does not provide any books translated into or written in Persian.
Cover Page
Half Title Page
Title Page
Copyright Page
Contents
Foreword
Preface
Acknowledgments
Authors
1 The Foundation and Pragmatics of HIT Evaluation
  1.1 Need for Evaluation
    Historical Essay
  1.2 HIT: Why Should We Worry About It?
    Historical Essay
    Definitions
    History of Technology Assessment
    Medical or Health Technology Assessment
    Health Information Technology Assessment
  1.3 Regulatory Framework in the United States
    Food and Drug Administration
    Agency for Healthcare Research and Quality
  1.4 Fundamental Steps Required for Meaningful HIT Evaluation
  Suggested Reading
  References
2 Structure and Design of Evaluation Studies
  2.1 Review of Study Methodologies and Approaches that Can Be Used in Health IT Evaluations
    Define the Health IT (Application, System) to Be Studied
    Define the Stakeholders Whose Questions Should Be Addressed
    Define and Prioritize Study Questions
  2.2 Clinical Research Design Overview
    Clinical Epidemiology
    Evidence Pyramid
    Specific Study Design Considerations in Health IT Evaluation
    Randomized Controlled Trial in Health IT
    Diagnostic Performance Study
  2.3 How to Ask Good Evaluation Questions and Develop Protocol
  Suggested Reading
  References
3 Study Design and Measurements Fundamentals
  3.1 Fundamental Principles of Study Design
    Selection Criteria and Sample
    Validity
    Accuracy and Precision
    Bias
    Confounding
  3.2 Core Measurements in HIT Evaluation
    Clinical Outcome Measures
    Clinical Process Measurements
    Financial Impact Measures
    Other Outcome Measurement Concepts
    Intermediate Outcome
    Composite Outcome
    Patient-Reported Outcomes
    Health-Related Quality of Life
    Subjective and Objective Measurements
  3.3 Data Collection for Evaluation Studies
  3.4 Data Quality
  Suggested Reading
  References
4 Analyzing the Results of Evaluation
  4.1 Fundamental Principles of Statistics
    Measurement Variables
    Data Preparation
    Descriptive (Summary) Statistics
    Data Distribution
    Confidence Intervals
    p-Value
  4.2 Statistical Tests: Choosing the Right Test
    Hypothesis Testing
    Non-Parametric Tests
    One- and Two-Tailed Tests
    Paired and Independent Tests
    Number of Comparisons Groups
    Analytics Methods
    Identifying Relationship: Correlation
    Regression
    Longitudinal Studies: Repeated Measures
    Time-to-Event: Survival Analysis
    Diagnostic Accuracy Studies
    Assessing Agreements
    Outcome Measurements
    Other Statistical Considerations
    Multiple Comparisons
    Subgroup Analysis
    Sample Size Calculation
    Commonly Used Statistical Tools
  Suggested Reading
  References
5 Proposing and Communicating the Results of Evaluation Studies
  5.1 Target Audience
  5.2 Methods of Dissemination
  5.3 Universal, Scientifically Based Outline for the Dissemination of Evaluation Study Results
  5.4 Reporting Standards and Guidelines
  5.5 Other Communication Methods
  Suggested Reading
  References
6 Safety Evaluation
  6.1 Role of Government Organizations in HIT Safety Evaluation
    ONC EHR Technology Certification Program
    Meaningful Use (Stage 2) and 2014 Edition Standards and Certification Criteria
    Safety Evaluation Outside the Legislative Process
  6.2 Problem Identification and Related Metrics: What Should One Study?
    Where Can One Study the Safety Evaluation of HIT?
    Passive and Active Evaluation
  6.3 Tools and Methodologies to Assist Capture and Report HIT Safety Events: Passive Evaluation
    Simulation Studies and Testing in a Safe Environment: Active Evaluations
  6.4 Summary
  Suggested Reading
  References
7 Cost Evaluation
  7.1 Health Economics Basics
    Setting and Methodology
  7.2 Main Types of Cost Analysis Applied to HIT
    Cost-Benefit Analysis
    Cost-Effectiveness Analysis
    Cost-Minimization Analysis
    Return on Investment
    How to Report Economic Evaluation Studies
  Suggested Reading
  References
8 Efficacy and Effectiveness Evaluation
  8.1 Clinically Oriented Outcomes of Interest (What)
  8.2 Settings for Evaluation (Where)
  8.3 Evaluation Methods (How)
  8.4 Evaluation Timing (When)
  8.5 Example of HIT Evaluation Studies
    Example of a Survey Analysis Study
    Example of a Gold Standard Validation Study
    Example of a Before–After Study
  8.6 Security Evaluation
  Suggested Reading
  References
9 Usability Evaluation
  9.1 Evaluation of Efficiency
  9.2 Effectiveness and Evaluation of Errors
  9.3 Evaluating Consistency of Experience (User Satisfaction)
  9.4 Electronic Medical Record Usability Principles
  9.5 Usability and the EHR Evaluation Process
    A Note on Evaluating the Real-World Usability of HIT
  9.6 Usability Testing Approaches
  9.7 Specific Usability Testing Methods
  9.8 Cognitive Walk-Through
    Key Features and Output
    Procedure
    Phase 1: Defining the Users of the System
    Phase 2: Defining the Task(s) for the Walk-Through
    Phase 3: Walking Through the Actions and Critiquing Critical Information
    Phase 4: Summarization of the Walk-Through Results
    Phase 5: Recommendations to Designers
  9.9 Keystroke-Level Model
    Key Features and Output
  9.10 Heuristic Evaluation
    Key Features and Output
    Reporting
  9.11 System Usability Scale
    Benefits of Using an SUS
  9.12 Conclusions
  Suggested Reading
  References
10 Case Studies
  10.1 Case Study 1: SWIFT Score
    Rationale
    SWIFT Score Development
    SWIFT Implementation Results
    Case Discussion
    SWIFT Score Study Design
    Implementation Results
  10.2 Case Study 2: Lessons Applied, More to Be Learned (AWARE Dashboard)
    AWARE Design Principles
    Testing
    AWARE Implementation Results and Lessons Learned
  10.3 Summary
  References
11 Healthcare Artificial Intelligence Tools Evaluation
  11.1 Role
  11.2 Framework for Evaluation
  11.3 Special Considerations for AI/ML
  11.4 Evaluation Checklist (ABC of Artificial Intelligence Evaluation)
  11.5 Conclusion
  Suggested Reading
  References
Index