Category: Philosophy
Edition:
Authors: Jakob Mökander, Marta Ziosi
Series: Digital Ethics Lab Yearbook
ISBN: 3031098455, 9783031098451
Publisher: Springer
Publication year: 2023
Pages: 290
Language: English
File format: PDF (can be converted to EPUB or AZW3 on request)
File size: 8 MB
To have the file of The 2021 Yearbook of the Digital Ethics Lab converted to PDF, EPUB, AZW3, MOBI, or DJVU, you can notify support and they will convert the file for you.
Please note that The 2021 Yearbook of the Digital Ethics Lab is the original-language edition, not a Persian translation. The International Library website provides original-language books only and does not offer any books translated into or written in Persian.
This annual edited volume explores a wide range of topics in digital ethics and governance. Included are chapters that: analyze the opportunities and ethical challenges posed by digital innovation; delineate new approaches to solve them; and offer concrete guidance on how to govern emerging technologies. The contributors are all members of the Digital Ethics Lab (the DELab) at the Oxford Internet Institute, a research environment that draws on a wide range of academic traditions.
Collectively, the chapters of this book illustrate how the field of digital ethics - whether understood as an academic discipline or an area of practice - is undergoing a process of maturation. Most importantly, the focus of the discourse concerning how to design and use digital technologies is increasingly shifting from ‘soft ethics’ to ‘hard governance’. There is also an ongoing shift from ‘what’ to ‘how’, whereby abstract or ad-hoc approaches to AI governance are giving way to more concrete and systematic solutions. The maturation of the field of digital ethics has, as this book attempts to show, been both accelerated and illustrated by a series of recent events. This text thereby takes an important step towards defining and implementing feasible and effective approaches to digital governance. It appeals to students, researchers and professionals in the field.
Preface
References
Contents
Contributors

The European Legislation on AI: A Brief Analysis of Its Philosophical Approach
  References

Informational Privacy with Chinese Characteristics
  1 Introduction
  2 Exploring China’s ‘Privacy Awakening’
  3 The State-Individual Relationship in Confucian Thought
  4 Individualisation and the ‘Great Self’
  5 Informational Privacy as Relational Obligations
  6 Conclusion
  References

Lessons Learned from Co-governance Approaches – Developing Effective AI Policy in Europe
  1 Introduction
  2 Co-governance as an Approach
  3 Approach and Methodology – Challenges to Regulating AI and Identifying Common Themes and Examples
    3.1 Challenges in AI Governance
    3.2 Identifying Examples
  4 Co-governance Examples from Outside AI – Ideas, Implementation and Challenges
    4.1 Complex Supply Chain Management – The Transparency/Accountability Challenge
    4.2 Building Reliable and Professional Services – Transparency/Accountability/Dynamic Challenge
  5 Co-governance as It Relates to AI
    5.1 Multi-stakeholder Monitoring, Certification and Labeling of Products
    5.2 Professionalization
    5.3 Points of Control (Targeted Regulation with Stakeholder Buy-in)
  6 “Lessons Learned” – Recommendations for AI Governance in the EU
  7 Conclusions
  References

State-Firm Coordination in AI Governance
  1 Introduction
  2 Theoretical Framework: Power and Legitimacy
  3 The Role of States and Technology Firms in AI Governance
    3.1 Evaluating State-Driven AI Governance
    3.2 Evaluating Corporate-Driven AI Governance
    3.3 The Case for a Combined Approach to AI Governance
  4 How to Achieve Responsible AI Governance
    4.1 Updating the State for the Digital Age
    4.2 Rebooting Tech Firms with Purpose
    4.3 Avenues Toward Systemic Change
  5 Conclusion
  References

The Impact of Australia’s News Media Bargaining Code on Journalism, Democracy, and the Battle to Regulate Big Tech
  1 Introduction
  2 Outcompeted
  3 Google: Pre-emptive Dodging
  4 Facebook: Strongarm Tactics
  5 Concerns over the Media Code
  6 Predatory Aggression
  7 Conclusion
  References

App Store Governance: The Implications and Limitations of Duopolistic Dominance
  1 Introduction
  2 App Store Governance: Duopolistic Dominance
  3 Instructive Episodes in App Store Governance
    3.1 Babylon
    3.2 Parler
    3.3 Contact Tracing for Covid-19
    3.4 “Smart Voting”
  4 Conclusions
  References

A Legal Principles-Based Framework for AI Liability Regulation
  1 Introduction
  2 Methodology
  3 Ethical and Legal Principles
  4 A Case Study: How to Derive Legal Principles for AI Regulation
  5 General Aims or Meta-principles of AI Regulation
  6 Innovation and Trust
  7 Policy Recommendations
  8 Conclusions
  References

The New Morality of Debt
  1 “Datafied” Lending
  2 Recasting Regulation
  References

Site of the Living Dead: Clarifying Our Moral Obligations Towards Digital Remains
  1 Introduction
  2 Contextualizing Online Death
  3 Methodology
  4 Analysis
    4.1 LoAArch – A Balancing Act
    4.2 LoAGR – The Near and Forgotten Dead
    4.3 LoACur – Selective Memory
  5 Conclusion
  References

The Statistics of Interpretable Machine Learning
  1 Introduction
  2 Local Linear Approximators
  3 Rule Lists
  4 Case-Based Methods
  5 Variable Importance
  6 Conclusion
  References

Formalising Trade-Offs Beyond Algorithmic Fairness: Lessons from Ethical Philosophy and Welfare Economics
  1 Introduction
  2 Definitions
    2.1 Fairness Metrics
    2.2 Acceptability of Inequalities
      Legally Protected Characteristics
      Effort vs. Circumstances
      Source of Inequality
      Takeaways
  3 Lessons from Ethical Philosophy on (In)Equalities
    3.1 Ethical Subjectivity of Algorithmic Fairness
    3.2 Linking Ethical Philosophy to Algorithmic Fairness
  4 Lessons from Welfare Economics
    4.1 Welfare in Algorithmic Ethics: Beneficence and Nonmaleficence
    4.2 Liberty in Algorithmic Ethics: Autonomy and Explicability
      Autonomy: Liberty
      Autonomy: Forgiveness
      Autonomy: Vulnerability
      Explicability
  5 Proposed Method: Key Ethics Indicators
    5.1 Define Success
    5.2 Identify Sources of Inequality
    5.3 Identify Sources of Bias
    5.4 Design Mitigation Strategies
    5.5 Operationalise Key Ethics Indicators (KEIs), Calculate Trade-Offs Between KEIs
    5.6 Select a Model and Provide Justifications
  6 Conclusion
  References

Ethics Auditing Framework for Trustworthy AI: Lessons from the IT Audit Literature
  1 Introduction
  2 Governance, Assurance and Risk
  3 Results
    3.1 Governance
    3.2 Assurance
      Agile Methodologies and Safety-Critical Systems
    3.3 Risk
  4 Further Considerations for EA of AI
    4.1 COBIT as a Structural Analogue for an EA of AI Framework
    4.2 Accounting for Agile
    4.3 Risk Thresholds for Ethics Audits?
  5 Conclusion
  Appendices
    Appendix 1: Methodology
    Appendix 2: AGILE and Safety-Critical Systems
      Scrum Methodology
      R-Scrum Methodology
    Appendix 3: Ethics and Risk Mapping
      CITYCoP Ethics Compliance Process
      CITYCoP Ethics Compliance Matrix
  References

Ethics Auditing: Lessons from Business Ethics for Ethics Auditing of AI
  1 Introduction
  2 Definitions, Purposes, and Motivations
    2.1 Definition: What Is an Ethics Audit?
    2.2 Purpose and Motivations: Why Do Ethics Audits?
  3 Methods: How to Do an Ethics Audit?
  4 Evaluating Ethics Auditing
  5 Lessons for AI Ethics Auditing
  6 Conclusion
  Appendices
    Appendix 1: Methodology
    Appendix 2: General Business Ethics Audit – Radar Chart
    Appendix 3: Ethics Thermometer – Ethical Qualities Matrix
    Appendix 4: Economy for the Common Good Balance Sheet
    Appendix 5: Improved Ethics Audit – Risk Assessment Model
    Appendix 6: List of Abbreviations
  References

AI Ethics and Policies: Why European Journalism Needs More of Both
  1 Introduction
  2 The Issues Raised by AI in Journalism
    2.1 Algorithmic Accountability
    2.2 Excessive Personalization
    2.3 Lack of Media Pluralism
    2.4 Privacy
    2.5 Copyright
  3 AI’s Contribution to Journalism
    3.1 Newsgathering
    3.2 Story Production
    3.3 Content Distribution
  4 The European Policy Context
  5 Recommendations
  6 Conclusions
  References

Towards Equitable Health Outcomes Using Group Data Rights
  1 Introduction
  2 Health Inequities, Datafication, and Automation
  3 Losing the Forest for the Trees – How Individual Data Rights Leave Marginalized Populations Vulnerable
  4 Underrepresented, Underserved and Fighting Back – Two Case Studies on Group Data Rights
    4.1 Indigenous Data Sovereignty
    4.2 Disease Advocacy Organizations
  5 Who Are Group Data Rights For?
  6 Exercising Group Data Rights – Practical Barriers and Solutions
  7 Limitations
  8 Conclusion
  References

Ethical Principles for Artificial Intelligence in National Defence
  1 Introduction
  2 Methodology
  3 Ethical Challenges of AI for Defence Purposes
    3.1 Sustainment and Support Uses of AI
    3.2 Adversarial and Non-kinetic Uses of AI
    3.3 Adversarial and Kinetic Uses of AI
  4 Ethical Guidelines for the Use of AI
    4.1 Responsible
    4.2 Equitable
    4.3 Traceability
    4.4 Reliable and Governable
  5 Five Ethical Principles for Sustainment and Support and Adversarial and Non-kinetic Uses of AI
    5.1 Justified and Overridable Uses
    5.2 Just and Transparent Systems and Processes
    5.3 Human Moral Responsibility
    5.4 Meaningful Human Control
    5.5 Reliable AI Systems
  6 Conclusion
  References