
To contact us, you can call or text the mobile numbers below:

09117307688
09117179751

If there is no answer, contact support via SMS.

Unlimited access: for registered users

Money-back guarantee: if the description does not match the book

Support: 7 a.m. to 10 p.m.

Download: Machine Learning in Finance: From Theory to Practice (Instructor's Solution Manual with Extra Resources) (Solutions)

Book Details


Edition: [1 ed.]
Authors:
Series:
ISBN: 3030410676, 9783030410674
Publisher: Springer
Year: 2020
Pages: 96
Language: English
File format: ZIP (converted to PDF, EPUB, or AZW3 at the user's request)
File size: 2 Mb

Price (Toman): 36,000





If you would like the book Machine Learning in Finance: From Theory to Practice (Instructor's Solution Manual with Extra Resources) (Solutions) converted to PDF, EPUB, AZW3, MOBI, or DJVU, notify support and they will convert the file for you.

Note that Machine Learning in Finance: From Theory to Practice (Instructor's Solution Manual with Extra Resources) (Solutions) is the original-language (English) edition, not a Persian translation. The International Library website offers original-language books only and does not provide any books translated into or written in Persian.


About the book

Instructor's solution manual and (mostly) Python sources, officially obtained through Springer.com.



Table of Contents

Introduction
	Prerequisites
	Advantages of the Book
	Overview of the Book
		Chapter 1
		Chapter 2
		Chapter 3
		Chapter 4
		Chapter 5
		Chapter 6
		Chapter 7
		Chapter 8
		Chapter 9
		Chapter 10
		Chapter 11
		Chapter 12
		Source Code
		Scope
		Multiple-Choice Questions
		Exercises
	Instructor Materials
	Acknowledgements
Contents
About the Authors
Part I Machine Learning with Cross-Sectional Data
	1 Introduction
		1 Background
			1.1 Big Data—Big Compute in Finance
			1.2 Fintech
				1.2.1 Robo-Advisors
				1.2.2 Fraud Detection
				1.2.3 Cryptocurrencies
		2 Machine Learning and Prediction
			2.1 Entropy
			2.2 Neural Networks
		3 Statistical Modeling vs. Machine Learning
			3.1 Modeling Paradigms
			3.2 Financial Econometrics and Machine Learning
			3.3 Over-fitting
		4 Reinforcement Learning
		5 Examples of Supervised Machine Learning in Practice
			5.1 Algorithmic Trading
			5.2 High-Frequency Trade Execution
			5.3 Mortgage Modeling
				5.3.1 Model Stability
		6 Summary
		7 Exercises
		Appendix
			Answers to Multiple Choice Questions
		References
	2 Probabilistic Modeling
		1 Introduction
		2 Bayesian vs. Frequentist Estimation
		3 Frequentist Inference from Data
		4 Assessing the Quality of Our Estimator: Bias and Variance
		5 The Bias–Variance Tradeoff (Dilemma) for Estimators
		6 Bayesian Inference from Data
			6.1 A More Informative Prior: The Beta Distribution
			6.2 Sequential Bayesian updates
				6.2.1 Online Learning
				6.2.2 Prediction
			6.3 Practical Implications of Choosing a Classical or Bayesian Estimation Framework
		7 Model Selection
			7.1 Bayesian Inference
			7.2 Model Selection
			7.3 Model Selection When There Are Many Models
			7.4 Occam's Razor
			7.5 Model Averaging
		8 Probabilistic Graphical Models
			8.1 Mixture Models
				8.1.1 Hidden Indicator Variable Representation of Mixture Models
				8.1.2 Maximum Likelihood Estimation
		9 Summary
		10 Exercises
		Appendix
			Answers to Multiple Choice Questions
		References
	3 Bayesian Regression and Gaussian Processes
		1 Introduction
		2 Bayesian Inference with Linear Regression
			2.1 Maximum Likelihood Estimation
			2.2 Bayesian Prediction
			2.3 Schur Identity
		3 Gaussian Process Regression
			3.1 Gaussian Processes in Finance
			3.2 Gaussian Processes Regression and Prediction
			3.3 Hyperparameter Tuning
			3.4 Computational Properties
		4 Massively Scalable Gaussian Processes
			4.1 Structured Kernel Interpolation (SKI)
			4.2 Kernel Approximations
				4.2.1 Structure Exploiting Inference
		5 Example: Pricing and Greeking with Single-GPs
			5.1 Greeking
			5.2 Mesh-Free GPs
			5.3 Massively Scalable GPs
		6 Multi-response Gaussian Processes
			6.1 Multi-Output Gaussian Process Regression and Prediction
		7 Summary
		8 Exercises
			8.1 Programming Related Questions*
		Appendix
			Answers to Multiple Choice Questions
			Python Notebooks
		References
	4 Feedforward Neural Networks
		1 Introduction
		2 Feedforward Architectures
			2.1 Preliminaries
			2.2 Geometric Interpretation of Feedforward Networks
			2.3 Probabilistic Reasoning
			2.4 Function Approximation with Deep Learning*
			2.5 VC Dimension
			2.6 When Is a Neural Network a Spline?*
			2.7 Why Deep Networks?
				2.7.1 Approximation with Compositions of Functions
				2.7.2 Composition with ReLU Activation
		3 Convexity and Inequality Constraints
			3.1 Similarity of MLPs with Other Supervised Learners
		4 Training, Validation, and Testing
		5 Stochastic Gradient Descent (SGD)
			5.1 Back-Propagation
				5.1.1 Updating the Weight Matrices
			5.2 Momentum
				5.2.1 Computational Considerations
				5.2.2 Model Averaging via Dropout
		6 Bayesian Neural Networks*
		7 Summary
		8 Exercises
			8.1 Programming Related Questions*
		Appendix
			Answers to Multiple Choice Questions
			Back-Propagation
			Proof of Theorem 4.2
			Proof of Lemmas from Telgarsky (2016)
			Python Notebooks
		References
	5 Interpretability
		1 Introduction
		2 Background on Interpretability
			2.1 Sensitivities
		3 Explanatory Power of Neural Networks
			3.1 Multiple Hidden Layers
			3.2 Example: Step Test
		4 Interaction Effects
			4.1 Example: Friedman Data
		5 Bounds on the Variance of the Jacobian
			5.1 Chernoff Bounds
			5.2 Simulated Example
		6 Factor Modeling
			6.1 Non-linear Factor Models
			6.2 Fundamental Factor Modeling
		7 Summary
		8 Exercises
			8.1 Programming Related Questions*
		Appendix
			Other Interpretability Methods
			Proof of Variance Bound on Jacobian
			Russell 3000 Factor Model Description
			Python Notebooks
		References
Part II Sequential Learning
	6 Sequence Modeling
		1 Introduction
		2 Autoregressive Modeling
			2.1 Preliminaries
			2.2 Autoregressive Processes
			2.3 Stability
			2.4 Stationarity
			2.5 Partial Autocorrelations
			2.6 Maximum Likelihood Estimation
			2.7 Heteroscedasticity
			2.8 Moving Average Processes
			2.9 GARCH
			2.10 Exponential Smoothing
		3 Fitting Time Series Models: The Box–Jenkins Approach
			3.1 Stationarity
			3.2 Transformation to Ensure Stationarity
			3.3 Identification
			3.4 Model Diagnostics
		4 Prediction
			4.1 Predicting Events
			4.2 Time Series Cross-Validation
		5 Principal Component Analysis
			Projection
			5.1 Principal Component Projection
			5.2 Dimensionality Reduction
		6 Summary
		7 Exercises
		Appendix
			Hypothesis Tests
			Python Notebooks
		Reference
	7 Probabilistic Sequence Modeling
		1 Introduction
		2 Hidden Markov Modeling
			2.1 The Viterbi Algorithm
				2.1.1 Filtering and Smoothing with HMMs
			2.2 State-Space Models
		3 Particle Filtering
			3.1 Sequential Importance Resampling (SIR)
			3.2 Multinomial Resampling
			3.3 Application: Stochastic Volatility Models
		4 Point Calibration of Stochastic Filters
		5 Bayesian Calibration of Stochastic Filters
		6 Summary
		7 Exercises
		Appendix
			Python Notebooks
		References
	8 Advanced Neural Networks
		1 Introduction
		2 Recurrent Neural Networks
			2.1 RNN Memory: Partial Autocovariance
			2.2 Stability
			2.3 Stationarity
			2.4 Generalized Recurrent Neural Networks (GRNNs)
		3 Gated Recurrent Units
			3.1 α-RNNs
				3.1.1 Dynamic αt-RNNs
			3.2 Neural Network Exponential Smoothing
			3.3 Long Short-Term Memory (LSTM)
		4 Python Notebook Examples
			4.1 Bitcoin Prediction
			4.2 Predicting from the Limit Order Book
		5 Convolutional Neural Networks
			5.1 Weighted Moving Average Smoothers
			5.2 2D Convolution
			5.3 Pooling
			5.4 Dilated Convolution
			5.5 Python Notebooks
		6 Autoencoders
			6.1 Linear Autoencoders
			6.2 Equivalence of Linear Autoencoders and PCA
			6.3 Deep Autoencoders
		7 Summary
		8 Exercises
			8.1 Programming Related Questions*
		Appendix
			Answers to Multiple Choice Questions
			Python Notebooks
		References
Part III Sequential Data with Decision-Making
	9 Introduction to Reinforcement Learning
		1 Introduction
		2 Elements of Reinforcement Learning
			2.1 Rewards
			2.2 Value and Policy Functions
			2.3 Observable Versus Partially Observable Environments
		3 Markov Decision Processes
			3.1 Decision Policies
			3.2 Value Functions and Bellman Equations
			3.3 Optimal Policy and Bellman Optimality
		4 Dynamic Programming Methods
			4.1 Policy Evaluation
			4.2 Policy Iteration
			4.3 Value Iteration
		5 Reinforcement Learning Methods
			5.1 Monte Carlo Methods
			5.2 Policy-Based Learning
			5.3 Temporal Difference Learning
			5.4 SARSA and Q-Learning
			5.5 Stochastic Approximations and Batch-Mode Q-learning
			5.6 Q-learning in a Continuous Space: Function Approximation
			5.7 Batch-Mode Q-Learning
			5.8 Least Squares Policy Iteration
			5.9 Deep Reinforcement Learning
				5.9.1 Preliminaries
				5.9.2 Target Network
				5.9.3 Replay Memory
		6 Summary
		7 Exercises
		Appendix
			Answers to Multiple Choice Questions
			Python Notebooks
		References
	10 Applications of Reinforcement Learning
		1 Introduction
		2 The QLBS Model for Option Pricing
		3 Discrete-Time Black–Scholes–Merton Model
			3.1 Hedge Portfolio Evaluation
			3.2 Optimal Hedging Strategy
			3.3 Option Pricing in Discrete Time
			3.4 Hedging and Pricing in the BS Limit
		4 The QLBS Model
			4.1 State Variables
			4.2 Bellman Equations
			4.3 Optimal Policy
			4.4 DP Solution: Monte Carlo Implementation
			4.5 RL Solution for QLBS: Fitted Q Iteration
			4.6 Examples
			4.7 Option Portfolios
			4.8 Possible Extensions
		5 G-Learning for Stock Portfolios
			5.1 Introduction
			5.2 Investment Portfolio
			5.3 Terminal Condition
			5.4 Asset Returns Model
			5.5 Signal Dynamics and State Space
			5.6 One-Period Rewards
			5.7 Multi-period Portfolio Optimization
			5.8 Stochastic Policy
			5.9 Reference Policy
			5.10 Bellman Optimality Equation
			5.11 Entropy-Regularized Bellman Optimality Equation
			5.12 G-Function: An Entropy-Regularized Q-Function
			5.13 G-Learning and F-Learning
			5.14 Portfolio Dynamics with Market Impact
			5.15 Zero Friction Limit: LQR with Entropy Regularization
			5.16 Non-zero Market Impact: Non-linear Dynamics
		6 RL for Wealth Management
			6.1 The Merton Consumption Problem
			6.2 Portfolio Optimization for a Defined Contribution Retirement Plan
			6.3 G-Learning for Retirement Plan Optimization
			6.4 Discussion
		7 Summary
		8 Exercises
		Appendix
			Answers to Multiple Choice Questions
			Python Notebooks
		References
	11 Inverse Reinforcement Learning and Imitation Learning
		1 Introduction
		2 Inverse Reinforcement Learning
			2.1 RL Versus IRL
			2.2 What Are the Criteria for Success in IRL?
			2.3 Can a Truly Portable Reward Function Be Learned with IRL?
		3 Maximum Entropy Inverse Reinforcement Learning
			3.1 Maximum Entropy Principle
			3.2 Maximum Causal Entropy
			3.3 G-Learning and Soft Q-Learning
			3.4 Maximum Entropy IRL
			3.5 Estimating the Partition Function
		4 Example: MaxEnt IRL for Inference of Customer Preferences
			4.1 IRL and the Problem of Customer Choice
			4.2 Customer Utility Function
			4.3 Maximum Entropy IRL for Customer Utility
			4.4 How Much Data Is Needed? IRL and Observational Noise
			4.5 Counterfactual Simulations
			4.6 Finite-Sample Properties of MLE Estimators
			4.7 Discussion
		5 Adversarial Imitation Learning and IRL
			5.1 Imitation Learning
			5.2 GAIL: Generative Adversarial Imitation Learning
			5.3 GAIL as an Art of Bypassing RL in IRL
			5.4 Practical Regularization in GAIL
			5.5 Adversarial Training in GAIL
			5.6 Other Adversarial Approaches*
			5.7 f-Divergence Training*
			5.8 Wasserstein GAN*
			5.9 Least Squares GAN*
		6 Beyond GAIL: AIRL, f-MAX, FAIRL, RS-GAIL, etc.*
			6.1 AIRL: Adversarial Inverse Reinforcement Learning
			6.2 Forward KL or Backward KL?
			6.3 f-MAX
			6.4 Forward KL: FAIRL
			6.5 Risk-Sensitive GAIL (RS-GAIL)
			6.6 Summary
		7 Gaussian Process Inverse Reinforcement Learning
			7.1 Bayesian IRL
			7.2 Gaussian Process IRL
		8 Can IRL Surpass the Teacher?
			8.1 IRL from Failure
			8.2 Learning Preferences
			8.3 T-REX: Trajectory-Ranked Reward EXtrapolation
			8.4 D-REX: Disturbance-Based Reward EXtrapolation
		9 Let Us Try It Out: IRL for Financial Cliff Walking
			9.1 Max-Causal Entropy IRL
			9.2 IRL from Failure
			9.3 T-REX
			9.4 Summary
		10 Financial Applications of IRL
			10.1 Algorithmic Trading Strategy Identification
			10.2 Inverse Reinforcement Learning for Option Pricing
			10.3 IRL of a Portfolio Investor with G-Learning
			10.4 IRL and Reward Learning for Sentiment-Based Trading Strategies
			10.5 IRL and the "Invisible Hand" Inference
		11 Summary
		12 Exercises
		Appendix
			Answers to Multiple Choice Questions
			Python Notebooks
		References
	12 Frontiers of Machine Learning and Finance
		1 Introduction
		2 Market Dynamics, IRL, and Physics
			2.1 "Quantum Equilibrium–Disequilibrium" (QED) Model
			2.2 The Langevin Equation
			2.3 The GBM Model as the Langevin Equation
			2.4 The QED Model as the Langevin Equation
			2.5 Insights for Financial Modeling
			2.6 Insights for Machine Learning
		3 Physics and Machine Learning
			3.1 Hierarchical Representations in Deep Learning and Physics
			3.2 Tensor Networks
			3.3 Bounded-Rational Agents in a Non-equilibrium Environment
		4 A ``Grand Unification'' of Machine Learning?
			4.1 Perception-Action Cycles
			4.2 Information Theory Meets Reinforcement Learning
			4.3 Reinforcement Learning Meets Supervised Learning: Predictron, MuZero, and Other New Ideas
		References
Index



