Edition:
Authors: Pete Warden, Daniel Situnayake
Series:
ISBN: 1492052043, 9781492052043
Publisher: O'Reilly UK Ltd.
Publication year: 2020
Number of pages: 484
Language: English
File format: EPUB (converted to PDF, EPUB, or AZW3 on request)
File size: 26 MB
If you would like the book TinyML: Machine Learning with TensorFlow Lite on Arduino and Ultra-Low-Power Microcontrollers converted to PDF, EPUB, AZW3, MOBI, or DJVU, let the support team know and they will convert the file for you.
Please note that TinyML: Machine Learning with TensorFlow Lite on Arduino and Ultra-Low-Power Microcontrollers is the original English-language edition, not a Persian translation. The International Library website offers original-language books only and does not carry any books translated into or written in Persian.
Deep learning networks are getting smaller. Much smaller. The Google Assistant team can detect words with a model just 14 kilobytes in size, small enough to run on a microcontroller. With this practical book you'll enter the field of TinyML, where deep learning and embedded systems combine to make astounding things possible with tiny devices. Pete Warden and Daniel Situnayake explain how you can train models small enough to fit into any environment. Ideal for software and hardware developers who want to build embedded systems using machine learning, this guide walks you through creating a series of TinyML projects, step by step. No machine learning or microcontroller experience is necessary.

- Build a speech recognizer, a camera that detects people, and a magic wand that responds to gestures
- Work with Arduino and ultra-low-power microcontrollers
- Learn the essentials of ML and how to train your own models
- Train models to understand audio, image, and accelerometer data
- Explore TensorFlow Lite for Microcontrollers, Google's toolkit for TinyML
- Debug applications and provide safeguards for privacy and security
- Optimize latency, energy usage, and model and binary size
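A key reason models this small are possible is post-training quantization, which the book covers when optimizing model and binary size: each weight is stored as an 8-bit integer plus a shared scale and zero point. As a rough illustration only (the function names are ours, and this is simplified relative to TensorFlow Lite's actual per-tensor and per-channel schemes), the affine int8 mapping works like this:

```python
# Sketch of affine int8 quantization: real_value ≈ scale * (q - zero_point).
# Simplified single-range version for illustration, not TensorFlow Lite's code.

def compute_qparams(lo, hi):
    """Derive scale and zero point mapping the range [lo, hi] onto int8 [-128, 127]."""
    lo, hi = min(lo, 0.0), max(hi, 0.0)  # the range must contain 0.0 so it quantizes exactly
    scale = (hi - lo) / 255.0
    zero_point = round(-128 - lo / scale)
    return scale, max(-128, min(127, zero_point))

def quantize(x, scale, zero_point):
    """Map a float to its nearest int8 code, clamping to the representable range."""
    q = round(x / scale) + zero_point
    return max(-128, min(127, q))

def dequantize(q, scale, zero_point):
    """Recover the (approximate) float value from an int8 code."""
    return scale * (q - zero_point)

s, zp = compute_qparams(-1.0, 1.0)
roundtrip = dequantize(quantize(0.5, s, zp), s, zp)  # ≈ 0.502, within scale/2 of 0.5
```

The round-trip error is bounded by half the scale, which is why 8 bits per weight usually costs only a small amount of accuracy while cutting model size by 4x compared to float32.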
Cover
Copyright
Table of Contents
Preface
    Conventions Used in This Book; Using Code Examples; O'Reilly Online Learning; How to Contact Us; Acknowledgments
Chapter 1. Introduction
    Embedded Devices; Changing Landscape
Chapter 2. Getting Started
    Who Is This Book Aimed At?; What Hardware Do You Need?; What Software Do You Need?; What Do We Hope You'll Learn?
Chapter 3. Getting Up to Speed on Machine Learning
    What Machine Learning Actually Is; The Deep Learning Workflow; Decide on a Goal; Collect a Dataset; Design a Model Architecture; Train the Model; Convert the Model; Run Inference; Evaluate and Troubleshoot; Wrapping Up
Chapter 4. The "Hello World" of TinyML: Building and Training a Model
    What We're Building; Our Machine Learning Toolchain; Python and Jupyter Notebooks; Google Colaboratory; TensorFlow and Keras; Building Our Model; Importing Dependencies; Generating Data; Splitting the Data; Defining a Basic Model; Training Our Model; Training Metrics; Graphing the History; Improving Our Model; Testing; Converting the Model for TensorFlow Lite; Converting to a C File; Wrapping Up
Chapter 5. The "Hello World" of TinyML: Building an Application
    Walking Through the Tests; Including Dependencies; Setting Up the Test; Getting Ready to Log Data; Mapping Our Model; Creating an AllOpsResolver; Defining a Tensor Arena; Creating an Interpreter; Inspecting the Input Tensor; Running Inference on an Input; Reading the Output; Running the Tests; Project File Structure; Walking Through the Source; Starting with main_functions.cc; Handling Output with output_handler.cc; Wrapping Up main_functions.cc; Understanding main.cc; Running Our Application; Wrapping Up
Chapter 6. The "Hello World" of TinyML: Deploying to Microcontrollers
    What Exactly Is a Microcontroller?; Arduino; Handling Output on Arduino; Running the Example; Making Your Own Changes; SparkFun Edge; Handling Output on SparkFun Edge; Running the Example; Testing the Program; Viewing Debug Data; Making Your Own Changes; ST Microelectronics STM32F746G Discovery Kit; Handling Output on STM32F746G; Running the Example; Making Your Own Changes; Wrapping Up
Chapter 7. Wake-Word Detection: Building an Application
    What We're Building; Application Architecture; Introducing Our Model; All the Moving Parts; Walking Through the Tests; The Basic Flow; The Audio Provider; The Feature Provider; The Command Recognizer; The Command Responder; Listening for Wake Words; Running Our Application; Deploying to Microcontrollers; Arduino; SparkFun Edge; ST Microelectronics STM32F746G Discovery Kit; Wrapping Up
Chapter 8. Wake-Word Detection: Training a Model
    Training Our New Model; Training in Colab; Using the Model in Our Project; Replacing the Model; Updating the Labels; Updating command_responder.cc; Other Ways to Run the Scripts; How the Model Works; Visualizing the Inputs; How Does Feature Generation Work?; Understanding the Model Architecture; Understanding the Model Output; Training with Your Own Data; The Speech Commands Dataset; Training on Your Own Dataset; How to Record Your Own Audio; Data Augmentation; Model Architectures; Wrapping Up
Chapter 9. Person Detection: Building an Application
    What We're Building; Application Architecture; Introducing Our Model; All the Moving Parts; Walking Through the Tests; The Basic Flow; The Image Provider; The Detection Responder; Detecting People; Deploying to Microcontrollers; Arduino; SparkFun Edge; Wrapping Up
Chapter 10. Person Detection: Training a Model
    Picking a Machine; Setting Up a Google Cloud Platform Instance; Training Framework Choice; Building the Dataset; Training the Model; TensorBoard; Evaluating the Model; Exporting the Model to TensorFlow Lite; Exporting to a GraphDef Protobuf File; Freezing the Weights; Quantizing and Converting to TensorFlow Lite; Converting to a C Source File; Training for Other Categories; Understanding the Architecture; Wrapping Up
Chapter 11. Magic Wand: Building an Application
    What We're Building; Application Architecture; Introducing Our Model; All the Moving Parts; Walking Through the Tests; The Basic Flow; The Accelerometer Handler; The Gesture Predictor; The Output Handler; Detecting Gestures; Deploying to Microcontrollers; Arduino; SparkFun Edge; Wrapping Up
Chapter 12. Magic Wand: Training a Model
    Training a Model; Training in Colab; Other Ways to Run the Scripts; How the Model Works; Visualizing the Input; Understanding the Model Architecture; Training with Your Own Data; Capturing Data; Modifying the Training Scripts; Training; Using the New Model; Wrapping Up; Learning Machine Learning; What's Next
Chapter 13. TensorFlow Lite for Microcontrollers
    What Is TensorFlow Lite for Microcontrollers?; TensorFlow; TensorFlow Lite; TensorFlow Lite for Microcontrollers; Requirements; Why Is the Model Interpreted?; Project Generation; Build Systems; Specializing Code; Makefiles; Writing Tests; Supporting a New Hardware Platform; Printing to a Log; Implementing DebugLog(); Running All the Targets; Integrating with the Makefile Build; Supporting a New IDE or Build System; Integrating Code Changes Between Projects and Repositories; Contributing Back to Open Source; Supporting New Hardware Accelerators; Understanding the File Format; FlatBuffers; Porting TensorFlow Lite Mobile Ops to Micro; Separate the Reference Code; Create a Micro Copy of the Operator; Port the Test to the Micro Framework; Build a Bazel Test; Add Your Op to AllOpsResolver; Build a Makefile Test; Wrapping Up
Chapter 14. Designing Your Own TinyML Applications
    The Design Process; Do You Need a Microcontroller, or Would a Larger Device Work?; Understanding What's Possible; Follow in Someone Else's Footsteps; Find Some Similar Models to Train; Look at the Data; Wizard of Oz-ing; Get It Working on the Desktop First
Chapter 15. Optimizing Latency
    First Make Sure It Matters; Hardware Changes; Model Improvements; Estimating Model Latency; How to Speed Up Your Model; Quantization; Product Design; Code Optimizations; Performance Profiling; Optimizing Operations; Look for Implementations That Are Already Optimized; Write Your Own Optimized Implementation; Taking Advantage of Hardware Features; Accelerators and Coprocessors; Contributing Back to Open Source; Wrapping Up
Chapter 16. Optimizing Energy Usage
    Developing Intuition; Typical Component Power Usage; Hardware Choice; Measuring Real Power Usage; Estimating Power Usage for a Model; Improving Power Usage; Duty Cycling; Cascading Design; Wrapping Up
Chapter 17. Optimizing Model and Binary Size
    Understanding Your System's Limits; Estimating Memory Usage; Flash Usage; RAM Usage; Ballpark Figures for Model Accuracy and Size on Different Problems; Speech Wake-Word Model; Accelerometer Predictive Maintenance Model; Person Presence Detection; Model Choice; Reducing the Size of Your Executable; Measuring Code Size; How Much Space Is TensorFlow Lite for Microcontrollers Taking?; OpResolver; Understanding the Size of Individual Functions; Framework Constants; Truly Tiny Models; Wrapping Up
Chapter 18. Debugging
    Accuracy Loss Between Training and Deployment; Preprocessing Differences; Debugging Preprocessing; On-Device Evaluation; Numerical Differences; Are the Differences a Problem?; Establish a Metric; Compare Against a Baseline; Swap Out Implementations; Mysterious Crashes and Hangs; Desktop Debugging; Log Tracing; Shotgun Debugging; Memory Corruption; Wrapping Up
Chapter 19. Porting Models from TensorFlow to TensorFlow Lite
    Understand What Ops Are Needed; Look at Existing Op Coverage in TensorFlow Lite; Move Preprocessing and Postprocessing into Application Code; Implement Required Ops if Necessary; Optimize Ops; Wrapping Up
Chapter 20. Privacy, Security, and Deployment
    Privacy; The Privacy Design Document; Using a PDD; Security; Protecting Models; Deployment; Moving from a Development Board to a Product; Wrapping Up
Chapter 21. Learning More
    The TinyML Foundation; SIG Micro; The TensorFlow Website; Other Frameworks; Twitter; Friends of TinyML; Wrapping Up
Appendix A. Using and Generating an Arduino Library Zip
Appendix B. Capturing Audio on Arduino
Index
About the Authors
Colophon
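The energy-optimization chapter outlined above (duty cycling, estimating power usage) rests on one piece of arithmetic: average current is the duty-cycle-weighted mix of active and sleep current, and battery life is capacity divided by that average. A back-of-the-envelope sketch, with illustrative numbers we made up rather than figures from the book:

```python
# Rough battery-life estimation for a duty-cycled microcontroller.
# All component current figures below are illustrative assumptions.

def average_current_ma(active_ma, sleep_ma, duty_cycle):
    """Duty-cycle-weighted average current draw in milliamps.

    duty_cycle is the fraction of time spent awake, between 0.0 and 1.0.
    """
    if not 0.0 <= duty_cycle <= 1.0:
        raise ValueError("duty_cycle must be between 0 and 1")
    return active_ma * duty_cycle + sleep_ma * (1.0 - duty_cycle)

def battery_life_hours(capacity_mah, avg_ma):
    """Crude battery-life estimate: capacity divided by average draw."""
    return capacity_mah / avg_ma

# Hypothetical example: an MCU drawing 10 mA while running inference,
# 0.01 mA in deep sleep, awake 1% of the time, on a 2,000 mAh battery.
avg = average_current_ma(10.0, 0.01, 0.01)   # 0.1099 mA average
life = battery_life_hours(2000.0, avg)       # ~18,198 hours, roughly two years
```

With these assumed numbers the average draw is about 0.11 mA, two orders of magnitude below the active current, which is why duty cycling dominates embedded energy budgets.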