


Download the book A First Course in Optimization

Book Details

Edition:
Authors:
Series:
ISBN: 9781482226560, 9781482226584
Publisher: CRC Press
Year: 2015
Pages: 313
Language: English
File format: PDF (can be converted to PDF, EPUB, or AZW3 on request)
File size: 2 MB

Price (toman): 28,000





If you would like the book A First Course in Optimization converted to PDF, EPUB, AZW3, MOBI, or DJVU format, let support know and they will convert the file for you.

Note that A First Course in Optimization is the original English-language edition, not a Persian translation. The International Library website offers original-language books only and does not provide any books translated into or written in Persian.




Description

"Designed for graduate and advanced undergraduate students, this text provides a much-needed contemporary introduction to optimization. Emphasizing general problems and the underlying theory, it covers the fundamental problems of constrained and unconstrained optimization, linear and convex programming, fundamental iterative solution algorithms, gradient methods, the Newton-Raphson algorithm and its variants, and sequential unconstrained optimization methods. The book presents the necessary mathematical tools and results as well as applications, such as game theory."



Table of Contents

Cover

S Title

A First Course in Optimization

© 2015 by Taylor & Francis Group LLC
     ISBN 978-1-4822-2658-4 (eBook - PDF)

Dedication

Contents

Preface

Overview

1. Optimization Without Calculus
     1.1 Chapter Summary
     1.2 The Arithmetic Mean-Geometric Mean Inequality
     1.3 Applying the AGM Inequality: the Number e
     1.4 Extending the AGM Inequality
     1.5 Optimization Using the AGM Inequality
     1.6 The Hölder and Minkowski Inequalities
          1.6.1 Hölder's Inequality
          1.6.2 Minkowski's Inequality
     1.7 Cauchy's Inequality
     1.8 Optimizing Using Cauchy's Inequality
     1.9 An Inner Product for Square Matrices
     1.10 Discrete Allocation Problems
     1.11 Exercises

2. Geometric Programming
     2.1 Chapter Summary
     2.2 An Example of a GP Problem
     2.3 Posynomials and the GP Problem
     2.4 The Dual GP Problem
     2.5 Solving the GP Problem
     2.6 Solving the DGP Problem
          2.6.1 The MART
          2.6.2 MART I
          2.6.3 MART II
          2.6.4 Using the MART to Solve the DGP Problem
     2.7 Constrained Geometric Programming
     2.8 Exercises

3. Basic Analysis
     3.1 Chapter Summary
     3.2 Minima and Infima
     3.3 Limits
     3.4 Completeness
     3.5 Continuity
     3.6 Limsup and Liminf
     3.7 Another View
     3.8 Semi-Continuity
     3.9 Exercises

4. Convex Sets
     4.1 Chapter Summary
     4.2 The Geometry of Real Euclidean Space
          4.2.1 Inner Products
          4.2.2 Cauchy's Inequality
          4.2.3 Other Norms
     4.3 A Bit of Topology
     4.4 Convex Sets in R^J
          4.4.1 Basic Definitions
          4.4.2 Orthogonal Projection onto Convex Sets
     4.5 More on Projections
     4.6 Linear and Affine Operators on R^J
     4.7 The Fundamental Theorems
          4.7.1 Basic Definitions
          4.7.2 The Separation Theorem
          4.7.3 The Support Theorem
     4.8 Block-Matrix Notation
     4.9 Theorems of the Alternative
     4.10 Another Proof of Farkas' Lemma
     4.11 Gordan's Theorem Revisited
     4.12 Exercises

5. Vector Spaces and Matrices
     5.1 Chapter Summary
     5.2 Vector Spaces
     5.3 Basic Linear Algebra
          5.3.1 Bases and Dimension
          5.3.2 The Rank of a Matrix
          5.3.3 The "Matrix Inversion Theorem"
          5.3.4 Systems of Linear Equations
          5.3.5 Real and Complex Systems of Linear Equations
     5.4 LU and QR Factorization
     5.5 The LU Factorization
          5.5.1 A Shortcut
          5.5.2 A Warning!
          5.5.3 The QR Factorization and Least Squares
     5.6 Exercises

6. Linear Programming
     6.1 Chapter Summary
     6.2 Primal and Dual Problems
          6.2.1 An Example
          6.2.2 Canonical and Standard Forms
          6.2.3 From Canonical to Standard and Back
     6.3 Converting a Problem to PS Form
     6.4 Duality Theorems
          6.4.1 Weak Duality
          6.4.2 Primal-Dual Methods
          6.4.3 Strong Duality
     6.5 A Basic Strong Duality Theorem
     6.6 Another Proof
     6.7 Proof of Gale's Strong Duality Theorem
     6.8 Some Examples
          6.8.1 The Diet Problem
          6.8.2 The Transport Problem
     6.9 The Simplex Method
     6.10 Yet Another Proof
     6.11 The Sherman-Morrison-Woodbury Identity
     6.12 An Example of the Simplex Method
     6.13 Another Example
     6.14 Some Possible Difficulties
          6.14.1 A Third Example
     6.15 Topics for Projects
     6.16 Exercises

7. Matrix Games and Optimization
     7.1 Chapter Summary
     7.2 Two-Person Zero-Sum Games
     7.3 Deterministic Solutions
          7.3.1 Optimal Pure Strategies
     7.4 Randomized Solutions
          7.4.1 Optimal Randomized Strategies
          7.4.2 An Exercise
          7.4.3 The Min-Max Theorem
     7.5 Symmetric Games
          7.5.1 An Example of a Symmetric Game
          7.5.2 Comments on the Proof of the Min-Max Theorem
     7.6 Positive Games
          7.6.1 Some Exercises
          7.6.2 Comments
     7.7 Example: The "Bluffing" Game
     7.8 Learning the Game
          7.8.1 An Iterative Approach
          7.8.2 An Exercise
     7.9 Non-Constant-Sum Games
          7.9.1 The Prisoners' Dilemma
          7.9.2 Two Payoff Matrices Needed
          7.9.3 An Example: Illegal Drugs in Sports

8. Differentiation
     8.1 Chapter Summary
     8.2 Directional Derivative
          8.2.1 Definitions
     8.3 Partial Derivatives
     8.4 Some Examples
     8.5 Gâteaux Derivative
     8.6 Fréchet Derivative
          8.6.1 The Definition
          8.6.2 Properties of the Fréchet Derivative
     8.7 The Chain Rule
     8.8 Exercises

9. Convex Functions
     9.1 Chapter Summary
     9.2 Functions of a Single Real Variable
          9.2.1 Fundamental Theorems
          9.2.2 Proof of Rolle's Theorem
          9.2.3 Proof of the Mean Value Theorem
          9.2.4 A Proof of the MVT for Integrals
          9.2.5 Two Proofs of the EMVT
          9.2.6 Lipschitz Continuity
          9.2.7 The Convex Case
     9.3 Functions of Several Real Variables
          9.3.1 Continuity
          9.3.2 Differentiability
          9.3.3 Second Differentiability
          9.3.4 Finding Maxima and Minima
          9.3.5 Solving F(x) = 0 through Optimization
          9.3.6 When Is F(x) a Gradient?
          9.3.7 Lower Semi-Continuity
          9.3.8 The Convex Case
     9.4 Sub-Differentials and Sub-Gradients
     9.5 Sub-Gradients and Directional Derivatives
          9.5.1 Some Definitions
          9.5.2 Sub-Linearity
          9.5.3 Sub-Differentials and Directional Derivatives
          9.5.4 An Example
     9.6 Functions and Operators
     9.7 Convex Sets and Convex Functions
     9.8 Exercises

10. Convex Programming
     10.1 Chapter Summary
     10.2 The Primal Problem
          10.2.1 The Perturbed Problem
          10.2.2 The Sensitivity Vector and the Lagrangian
     10.3 From Constrained to Unconstrained
     10.4 Saddle Points
          10.4.1 The Primal and Dual Problems
          10.4.2 The Main Theorem
          10.4.3 A Duality Approach to Optimization
     10.5 The Karush-Kuhn-Tucker Theorem
          10.5.1 Sufficient Conditions
          10.5.2 The KKT Theorem: Saddle-Point Form
          10.5.3 The KKT Theorem: The Gradient Form
     10.6 On Existence of Lagrange Multipliers
     10.7 The Problem of Equality Constraints
          10.7.1 The Problem
          10.7.2 The KKT Theorem for Mixed Constraints
          10.7.3 The KKT Theorem for LP
          10.7.4 The Lagrangian Fallacy
     10.8 Two Examples
          10.8.1 A Linear Programming Problem
          10.8.2 A Nonlinear Convex Programming Problem
     10.9 The Dual Problem
          10.9.1 When Is MP = MD?
          10.9.2 The Primal-Dual Method
          10.9.3 Using the KKT Theorem
     10.10 Nonnegative Least-Squares Solutions
     10.11 An Example in Image Reconstruction
     10.12 Solving the Dual Problem
          10.12.1 The Primal and Dual Problems
          10.12.2 Hildreth’s Dual Algorithm
     10.13 Minimum One-Norm Solutions
          10.13.1 Reformulation as an LP Problem
          10.13.2 Image Reconstruction
     10.14 Exercises

11. Iterative Optimization
     11.1 Chapter Summary
     11.2 The Need for Iterative Methods
     11.3 Optimizing Functions of a Single Real Variable
     11.4 Iteration and Operators
     11.5 The Newton-Raphson Approach
          11.5.1 Functions of a Single Variable
          11.5.2 Functions of Several Variables
     11.6 Approximate Newton-Raphson Methods
          11.6.1 Avoiding the Hessian Matrix
          11.6.2 The BFGS Method
          11.6.3 The Broyden Class
          11.6.4 Avoiding the Gradient
     11.7 Derivative-Free Methods
          11.7.1 Multi-Directional Search Algorithms
          11.7.2 The Nelder-Mead Algorithm
          11.7.3 Comments on the Nelder-Mead Algorithm

12. Solving Systems of Linear Equations
     12.1 Chapter Summary
     12.2 Arbitrary Systems of Linear Equations
          12.2.1 Under-Determined Systems of Linear Equations
          12.2.2 Over-Determined Systems of Linear Equations
          12.2.3 Landweber's Method
          12.2.4 The Projected Landweber Algorithm
          12.2.5 The Split-Feasibility Problem
          12.2.6 An Extension of the CQ Algorithm
          12.2.7 The Algebraic Reconstruction Technique
          12.2.8 Double ART
     12.3 Regularization
          12.3.1 Norm-Constrained Least-Squares
          12.3.2 Regularizing Landweber's Algorithm
          12.3.3 Regularizing the ART
     12.4 Nonnegative Systems of Linear Equations
          12.4.1 The Multiplicative ART
          12.4.2 MART I
          12.4.3 MART II
          12.4.4 The Simultaneous MART
          12.4.5 The EMML Iteration
          12.4.6 Alternating Minimization
          12.4.7 The Row-Action Variant of EMML
          12.4.8 EMART I
          12.4.9 EMART II
     12.5 Regularized SMART and EMML
          12.5.1 Regularized SMART
          12.5.2 Regularized EMML
     12.6 Block-Iterative Methods
     12.7 Exercises

13. Conjugate-Direction Methods
     13.1 Chapter Summary
     13.2 Iterative Minimization
     13.3 Quadratic Optimization
     13.4 Conjugate Bases for R^J
          13.4.1 Conjugate Directions
          13.4.2 The Gram-Schmidt Method
     13.5 The Conjugate Gradient Method
          13.5.1 The Main Idea
          13.5.2 A Recursive Formula
     13.6 Krylov Subspaces
     13.7 Extensions of the CGM
     13.8 Exercises

14. Operators
     14.1 Chapter Summary
     14.2 Operators
     14.3 Contraction Operators
          14.3.1 Lipschitz-Continuous Operators
          14.3.2 Nonexpansive Operators
          14.3.3 Strict Contractions
          14.3.4 Eventual Strict Contractions
          14.3.5 Instability
     14.4 Orthogonal-Projection Operators
          14.4.1 Properties of the Operator PC
          14.4.2 PC Is Nonexpansive
          14.4.3 PC Is Firmly Nonexpansive
          14.4.4 The Search for Other Properties of PC
     14.5 Two Useful Identities
     14.6 Averaged Operators
     14.7 Gradient Operators
     14.8 The Krasnosel'skii-Mann-Opial Theorem
     14.9 Affine-Linear Operators
     14.10 Paracontractive Operators
          14.10.1 Linear and Affine Paracontractions
          14.10.2 The Elsner-Koltracht-Neumann Theorem
     14.11 Matrix Norms
          14.11.1 Induced Matrix Norms
          14.11.2 Condition Number of a Square Matrix
          14.11.3 Some Examples of Induced Matrix Norms
          14.11.4 The Euclidean Norm of a Square Matrix
     14.12 Exercises

15. Looking Ahead
     15.1 Chapter Summary
     15.2 Sequential Unconstrained Minimization
     15.3 Examples of SUM
          15.3.1 Barrier-Function Methods
          15.3.2 Penalty-Function Methods
     15.4 Auxiliary-Function Methods
          15.4.1 General AF Methods
          15.4.2 AF Requirements
     15.5 The SUMMA Class of AF Methods

Bibliography

Back Cover



