Category: Design: Architecture
Edition: 5
Authors: David A. Patterson, John L. Hennessy
Series: The Morgan Kaufmann Series in Computer Architecture and Design
ISBN: 0124077269, 9780124077263
Publisher: Morgan Kaufmann
Publication year: 2013
Number of pages: 0
Language: English
File format: EPUB
File size: 14 MB
Keywords: Informatics and Computer Engineering; Computer Organization and Architecture
The 5th edition of Computer Organization and Design moves forward into the post-PC era with new examples, exercises, and material highlighting the emergence of mobile computing and the cloud. This generational change is emphasized and explored with updated content featuring tablet computers, cloud infrastructure, and the ARM (mobile computing devices) and x86 (cloud computing) architectures. Because an understanding of modern hardware is essential to achieving good performance and energy efficiency, this edition adds a new concrete example, "Going Faster," used throughout the text to demonstrate extremely effective optimization techniques. Also new to this edition is discussion of the "Eight Great Ideas" of computer architecture. As with previous editions, a MIPS processor is the core used to present the fundamentals of hardware technologies, assembly language, computer arithmetic, pipelining, memory hierarchies, and I/O.
- Includes new examples, exercises, and material highlighting the emergence of mobile computing and the cloud
- Covers parallelism in depth, with examples and content highlighting parallel hardware and software topics
- Features the Intel Core i7, ARM Cortex-A8, and NVIDIA Fermi GPU as real-world examples throughout the book
- Adds a new concrete example, "Going Faster," to demonstrate how understanding hardware can inspire software optimizations that improve performance by 200 times
- Discusses and highlights the "Eight Great Ideas" of computer architecture: Performance via Parallelism; Performance via Pipelining; Performance via Prediction; Design for Moore's Law; Hierarchy of Memories; Abstraction to Simplify Design; Make the Common Case Fast; and Dependability via Redundancy
- Includes a full set of updated and improved exercises
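One of the optimizations the "Going Faster" thread applies to matrix multiply is cache blocking (covered in the chapter section "Going Faster: Cache Blocking and Matrix Multiply"). As a rough illustration of the idea only, not the book's actual code, here is a minimal sketch in C; the matrix size, block size, and function names are my own assumptions:

```c
#define N 64      /* matrix dimension; kept small so the demo runs quickly */
#define BLOCK 16  /* block edge, chosen so a few blocks fit in a typical L1 cache */

/* Naive triple loop: for large N it strides through B column by column,
   so successive accesses to B tend to miss in the cache. */
void dgemm_naive(double A[N][N], double B[N][N], double C[N][N]) {
    for (int i = 0; i < N; ++i)
        for (int j = 0; j < N; ++j) {
            double cij = C[i][j];
            for (int k = 0; k < N; ++k)
                cij += A[i][k] * B[k][j];
            C[i][j] = cij;
        }
}

/* Cache-blocked version: the same multiply-accumulate operations reordered
   to work on BLOCK x BLOCK submatrices, so each submatrix is reused while
   it is still cache-resident. Assumes BLOCK divides N evenly. */
void dgemm_blocked(double A[N][N], double B[N][N], double C[N][N]) {
    for (int si = 0; si < N; si += BLOCK)
        for (int sj = 0; sj < N; sj += BLOCK)
            for (int sk = 0; sk < N; sk += BLOCK)
                /* multiply one pair of blocks into the C block */
                for (int i = si; i < si + BLOCK; ++i)
                    for (int j = sj; j < sj + BLOCK; ++j) {
                        double cij = C[i][j];
                        for (int k = sk; k < sk + BLOCK; ++k)
                            cij += A[i][k] * B[k][j];
                        C[i][j] = cij;
                    }
}
```

For each element of C the blocked version accumulates the same products in the same order, so the two results match exactly; the speedup comes purely from better cache reuse, and the book combines this with subword and instruction-level parallelism to reach the large improvements the description mentions.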
Front Cover; Computer Organization and Design; Copyright Page; Acknowledgments; Contents

Preface: About This Book; About the Other Book; Changes for the Fifth Edition; Concluding Remarks; Acknowledgments for the Fifth Edition

1 Computer Abstractions and Technology
1.1 Introduction: Classes of Computing Applications and Their Characteristics; Welcome to the Post-PC Era; What You Can Learn in This Book
1.2 Eight Great Ideas in Computer Architecture: Design for Moore's Law; Use Abstraction to Simplify Design; Make the Common Case Fast; Performance via Parallelism; Performance via Pipelining; Performance via Prediction; Hierarchy of Memories; Dependability via Redundancy
1.3 Below Your Program: From a High-Level Language to the Language of Hardware
1.4 Under the Covers: Through the Looking Glass; Touchscreen; Opening the Box; A Safe Place for Data; Communicating with Other Computers
1.5 Technologies for Building Processors and Memory
1.6 Performance: Defining Performance; Measuring Performance; CPU Performance and Its Factors; Instruction Performance; The Classic CPU Performance Equation
1.7 The Power Wall
1.8 The Sea Change: The Switch from Uniprocessors to Multiprocessors
1.9 Real Stuff: Benchmarking the Intel Core i7: SPEC CPU Benchmark; SPEC Power Benchmark
1.10 Fallacies and Pitfalls
1.11 Concluding Remarks: Road Map for This Book
1.12 Historical Perspective and Further Reading
1.13 Exercises

2 Instructions: Language of the Computer
2.1 Introduction
2.2 Operations of the Computer Hardware
2.3 Operands of the Computer Hardware: Memory Operands; Constant or Immediate Operands
2.4 Signed and Unsigned Numbers: Summary
2.5 Representing Instructions in the Computer: MIPS Fields
2.6 Logical Operations
2.7 Instructions for Making Decisions: Loops; Case/Switch Statement
2.8 Supporting Procedures in Computer Hardware: Using More Registers; Nested Procedures; Allocating Space for New Data on the Stack; Allocating Space for New Data on the Heap
2.9 Communicating with People: Characters and Strings in Java
2.10 MIPS Addressing for 32-bit Immediates and Addresses: 32-Bit Immediate Operands; Addressing in Branches and Jumps; MIPS Addressing Mode Summary; Decoding Machine Language
2.11 Parallelism and Instructions: Synchronization
2.12 Translating and Starting a Program: Compiler; Assembler; Linker; Loader; Dynamically Linked Libraries; Starting a Java Program
2.13 A C Sort Example to Put It All Together: The Procedure swap; Register Allocation for swap; Code for the Body of the Procedure swap; The Full swap Procedure; The Procedure sort; Register Allocation for sort; Code for the Body of the Procedure sort; The Procedure Call in sort; Passing Parameters in sort; Preserving Registers in sort; The Full Procedure sort
2.14 Arrays versus Pointers: Array Version of Clear; Pointer Version of Clear; Comparing the Two Versions of Clear
2.15 Advanced Material: Compiling C and Interpreting Java
2.16 Real Stuff: ARMv7 (32-bit) Instructions: Addressing Modes; Compare and Conditional Branch; Unique Features of ARM
2.17 Real Stuff: x86 Instructions: Evolution of the Intel x86; x86 Registers and Data Addressing Modes; x86 Integer Operations; x86 Instruction Encoding; x86 Conclusion
2.18 Real Stuff: ARMv8 (64-bit) Instructions
2.19 Fallacies and Pitfalls
2.20 Concluding Remarks
2.21 Historical Perspective and Further Reading
2.22 Exercises

3 Arithmetic for Computers
3.1 Introduction
3.2 Addition and Subtraction: Summary
3.3 Multiplication: Sequential Version of the Multiplication Algorithm and Hardware; Signed Multiplication; Faster Multiplication; Multiply in MIPS; Summary
3.4 Division: A Division Algorithm and Hardware; Signed Division; Faster Division; Divide in MIPS; Summary
3.5 Floating Point: Floating-Point Representation; Floating-Point Addition; Floating-Point Multiplication; Floating-Point Instructions in MIPS; Accurate Arithmetic; Summary
3.6 Parallelism and Computer Arithmetic: Subword Parallelism
3.7 Real Stuff: Streaming SIMD Extensions and Advanced Vector Extensions in x86
3.8 Going Faster: Subword Parallelism and Matrix Multiply
3.9 Fallacies and Pitfalls
3.10 Concluding Remarks
3.11 Historical Perspective and Further Reading
3.12 Exercises

4 The Processor
4.1 Introduction: A Basic MIPS Implementation; An Overview of the Implementation; Clocking Methodology
4.2 Logic Design Conventions
4.3 Building a Datapath: Creating a Single Datapath
4.4 A Simple Implementation Scheme: The ALU Control; Designing the Main Control Unit; Operation of the Datapath; Finalizing Control; Why a Single-Cycle Implementation Is Not Used Today
4.5 An Overview of Pipelining: Designing Instruction Sets for Pipelining; Pipeline Hazards; Structural Hazards; Data Hazards; Control Hazards; Pipeline Overview Summary
4.6 Pipelined Datapath and Control: Graphically Representing Pipelines; Pipelined Control
4.7 Data Hazards: Forwarding versus Stalling: Data Hazards and Stalls
4.8 Control Hazards: Assume Branch Not Taken; Reducing the Delay of Branches; Dynamic Branch Prediction; Pipeline Summary
4.9 Exceptions: How Exceptions Are Handled in the MIPS Architecture; Exceptions in a Pipelined Implementation
4.10 Parallelism via Instructions: The Concept of Speculation; Static Multiple Issue; An Example: Static Multiple Issue with the MIPS ISA; Dynamic Multiple-Issue Processors; Dynamic Pipeline Scheduling; Energy Efficiency and Advanced Pipelining
4.11 Real Stuff: The ARM Cortex-A8 and Intel Core i7 Pipelines: The ARM Cortex-A8; The Intel Core i7 920; Performance of the Intel Core i7 920
4.12 Going Faster: Instruction-Level Parallelism and Matrix Multiply
4.13 Advanced Topic: An Introduction to Digital Design Using a Hardware Design Language to Describe and Model a Pipeline and Mo…
4.14 Fallacies and Pitfalls
4.15 Concluding Remarks
4.16 Historical Perspective and Further Reading
4.17 Exercises

5 Large and Fast: Exploiting Memory Hierarchy
5.1 Introduction
5.2 Memory Technologies: SRAM Technology; DRAM Technology; Flash Memory; Disk Memory
5.3 The Basics of Caches: Accessing a Cache; Handling Cache Misses; Handling Writes; An Example Cache: The Intrinsity FastMATH Processor; Summary
5.4 Measuring and Improving Cache Performance: Reducing Cache Misses by More Flexible Placement of Blocks; Locating a Block in the Cache; Choosing Which Block to Replace; Reducing the Miss Penalty Using Multilevel Caches; Software Optimization via Blocking; Summary
5.5 Dependable Memory Hierarchy: Defining Failure; The Hamming Single Error Correcting, Double Error Detecting Code (SEC/DED)
5.6 Virtual Machines: Requirements of a Virtual Machine Monitor; (Lack of) Instruction Set Architecture Support for Virtual Machines; Protection and Instruction Set Architecture
5.7 Virtual Memory: Placing a Page and Finding It Again; Page Faults; What about Writes?; Making Address Translation Fast: the TLB; The Intrinsity FastMATH TLB; Integrating Virtual Memory, TLBs, and Caches; Implementing Protection with Virtual Memory; Handling TLB Misses and Page Faults; Summary
5.8 A Common Framework for Memory Hierarchy: Question 1: Where Can a Block Be Placed?; Question 2: How Is a Block Found?; Question 3: Which Block Should Be Replaced on a Cache Miss?; Question 4: What Happens on a Write?; The Three Cs: An Intuitive Model for Understanding the Behavior of Memory Hierarchies
5.9 Using a Finite-State Machine to Control a Simple Cache: A Simple Cache; Finite-State Machines; FSM for a Simple Cache Controller
5.10 Parallelism and Memory Hierarchy: Cache Coherence: Basic Schemes for Enforcing Coherence; Snooping Protocols
5.11 Parallelism and Memory Hierarchy: Redundant Arrays of Inexpensive Disks
5.12 Advanced Material: Implementing Cache Controllers
5.13 Real Stuff: The ARM Cortex-A8 and Intel Core i7 Memory Hierarchies: Performance of the A8 and Core i7 Memory Hierarchies
5.14 Going Faster: Cache Blocking and Matrix Multiply
5.15 Fallacies and Pitfalls
5.16 Concluding Remarks
5.17 Historical Perspective and Further Reading
5.18 Exercises

6 Parallel Processors from Client to Cloud
6.1 Introduction
6.2 The Difficulty of Creating Parallel Processing Programs
6.3 SISD, MIMD, SIMD, SPMD, and Vector: SIMD in x86: Multimedia Extensions; Vector; Vector versus Scalar; Vector versus Multimedia Extensions
6.4 Hardware Multithreading
6.5 Multicore and Other Shared Memory Multiprocessors
6.6 Introduction to Graphics Processing Units: An Introduction to the NVIDIA GPU Architecture; NVIDIA GPU Memory Structures; Putting GPUs into Perspective
6.7 Clusters, Warehouse Scale Computers, and Other Message-Passing Multiprocessors: Warehouse-Scale Computers
6.8 Introduction to Multiprocessor Network Topologies: Implementing Network Topologies
6.9 Communicating to the Outside World: Cluster Networking
6.10 Multiprocessor Benchmarks and Performance Models: Performance Models; The Roofline Model; Comparing Two Generations of Opterons
6.11 Real Stuff: Benchmarking and Rooflines of the Intel Core i7 960 and the NVIDIA Tesla GPU
6.12 Going Faster: Multiple Processors and Matrix Multiply
6.13 Fallacies and Pitfalls
6.14 Concluding Remarks
6.15 Historical Perspective and Further Reading
6.16 Exercises

Appendix A: Assemblers, Linkers, and the SPIM Simulator
A.1 Introduction; A.2 Assemblers; A.3 Linkers; A.4 Loading; A.5 Memory Usage; A.6 Procedure Call Convention; A.7 Exceptions and Interrupts; A.8 Input and Output; A.9 SPIM; A.10 MIPS R2000 Assembly Language; A.11 Concluding Remarks; A.12 Exercises

Appendix B: The Basics of Logic Design
B.1 Introduction; B.2 Gates, Truth Tables, and Logic Equations; B.3 Combinational Logic; B.4 Using a Hardware Description Language; B.5 Constructing a Basic Arithmetic Logic Unit; B.6 Faster Addition: Carry Lookahead; B.7 Clocks; B.8 Memory Elements: Flip-Flops, Latches, and Registers; B.9 Memory Elements: SRAMs and DRAMs; B.10 Finite-State Machines; B.11 Timing Methodologies; B.12 Field Programmable Devices; B.13 Concluding Remarks; B.14 Exercises

Index