Edition: 1st
Author: Sinan Ozdemir
Series: Addison-Wesley Data & Analytics Series
ISBN: 0138199191, 9780138199197
Publisher: Addison-Wesley Professional
Year: 2023
Pages: 288 [432]
Language: English
Format: PDF (EPUB or AZW3 available on request)
File size: 19 MB
Note: Quick Start Guide to Large Language Models: Strategies and Best Practices for Using ChatGPT and Other LLMs (Addison-Wesley Data & Analytics Series) is the original English-language edition; no Persian translation is provided.
Table of Contents

Cover Page
About This eBook
Halftitle Page
Title Page
Copyright Page
Pearson's Commitment to Diversity, Equity, and Inclusion
Contents
Foreword
Preface
    Audience and Prerequisites
    How to Approach This Book
    Overview
    Unique Features
    Summary
Acknowledgments
About the Author

I: Introduction to Large Language Models
    1. Overview of Large Language Models
        What Are Large Language Models?
        Popular Modern LLMs
        Domain-Specific LLMs
        Applications of LLMs
        Summary
    2. Semantic Search with LLMs
        Introduction
        The Task
        Solution Overview
        The Components
        Putting It All Together
        The Cost of Closed-Source Components
        Summary
    3. First Steps with Prompt Engineering
        Introduction
        Prompt Engineering
        Working with Prompts Across Models
        Building a Q/A Bot with ChatGPT
        Summary

II: Getting the Most Out of LLMs
    4. Optimizing LLMs with Customized Fine-Tuning
        Introduction
        Transfer Learning and Fine-Tuning: A Primer
        A Look at the OpenAI Fine-Tuning API
        Preparing Custom Examples with the OpenAI CLI
        Setting Up the OpenAI CLI
        Our First Fine-Tuned LLM
        Case Study: Amazon Review Category Classification
        Summary
    5. Advanced Prompt Engineering
        Introduction
        Prompt Injection Attacks
        Input/Output Validation
        Batch Prompting
        Prompt Chaining
        Chain-of-Thought Prompting
        Revisiting Few-Shot Learning
        Testing and Iterative Prompt Development
        Summary
    6. Customizing Embeddings and Model Architectures
        Introduction
        Case Study: Building a Recommendation System
        Summary

III: Advanced LLM Usage
    7. Moving Beyond Foundation Models
        Introduction
        Case Study: Visual Q/A
        Case Study: Reinforcement Learning from Feedback
        Summary
    8. Advanced Open-Source LLM Fine-Tuning
        Introduction
        Example: Anime Genre Multilabel Classification with BERT
        Example: LaTeX Generation with GPT2
        Sinan's Attempt at Wise Yet Engaging Responses: SAWYER
        The Ever-Changing World of Fine-Tuning
        Summary
    9. Moving LLMs into Production
        Introduction
        Deploying Closed-Source LLMs to Production
        Deploying Open-Source LLMs to Production
        Summary

IV: Appendices
    A. LLM FAQs
        The LLM already knows about the domain I'm working in. Why should I add any grounding?
        I just want to deploy a closed-source API. What are the main things I need to look out for?
        I really want to deploy an open-source model. What are the main things I need to look out for?
        Creating and fine-tuning my own model architecture seems hard. What can I do to make it easier?
        I think my model is susceptible to prompt injections or going off task. How do I correct it?
        Why didn't we talk about third-party LLM tools like LangChain?
        How do I deal with overfitting or underfitting in LLMs?
        How can I use LLMs for non-English languages? Are there any unique challenges?
        How can I implement real-time monitoring or logging to understand the performance of my deployed LLM better?
        What are some things we didn't talk about in this book?
    B. LLM Glossary
        Transformer Architecture
        Attention Mechanism
        Large Language Model (LLM)
        Autoregressive Language Models
        Autoencoding Language Models
        Transfer Learning
        Prompt Engineering
        Alignment
        Reinforcement Learning from Human Feedback (RLHF)
        Reinforcement Learning from AI Feedback (RLAIF)
        Corpora
        Fine-Tuning
        Labeled Data
        Hyperparameters
        Learning Rate
        Batch Size
        Training Epochs
        Evaluation Metrics
        Incremental/Online Learning
        Overfitting
        Underfitting
    C. LLM Application Archetypes
        Chatbots/Virtual Assistants
        Fine-Tuning a Closed-Source LLM
        Fine-Tuning an Open-Source LLM
        Fine-Tuning a Bi-encoder to Learn New Embeddings
        Fine-Tuning an LLM for Following Instructions Using Both LM Training and Reinforcement Learning from Human/AI Feedback (RLHF & RLAIF)
        Open-Book Question-Answering

Index
Permissions and Image Credits
Code Snippets