
Download the book Evolution, Learning and Cognition

Book details

Evolution, Learning and Cognition

Edition:
Authors:
Series:
ISBN: 9971505290, 9971505304
Publisher: World Scientific
Year: 1988
Pages: 425
Language: English
File format: PDF (converted to PDF, EPUB, or AZW3 at the user's request)
File size: 59 MB

Book price (Toman): 40,000





If you would like the book Evolution, Learning and Cognition converted to PDF, EPUB, AZW3, MOBI, or DJVU, notify support and they will convert the file for you.

Note that the book Evolution, Learning and Cognition is the original-language edition, not a Persian translation. The International Library website offers only original-language books and does not provide any books translated into or written in Persian.


About the book Evolution, Learning and Cognition


This review volume represents the first attempt to provide a comprehensive overview of this exciting and rapidly evolving development. The book comprises specially commissioned articles by leading researchers in the areas of neural networks and connectionist systems, classifier systems, adaptive network systems, genetic algorithm, cellular automata, artificial immune systems, evolutionary genetics, cognitive science, optical computing, combinatorial optimization, and cybernetics.



Table of Contents

CONTENTS
PREFACE
Part One MATHEMATICAL THEORY
	Connectionist Learning Through Gradient Following
		INTRODUCTION
		CONNECTIONIST SYSTEMS
		LEARNING
			Supervised Learning vs. Associative Reinforcement Learning
		FORMAL ASSUMPTIONS AND NOTATION
		BACK-PROPAGATION ALGORITHM FOR SUPERVISED LEARNING
			Extended Back-Propagation
		REINFORCE ALGORITHMS FOR ASSOCIATIVE REINFORCEMENT LEARNING
			Extended REINFORCE Algorithms
		DISCUSSION
		SUMMARY
		REFERENCES
	Efficient Stochastic Gradient Learning Algorithm for Neural Network
		1 Introduction
		2 Learning as Stochastic Gradient Descents
		3 Convergence Theorems for First Order Schemes
		4 Convergence of the Second Order Schemes
		5 Discussion
		References
	INFORMATION STORAGE IN FULLY CONNECTED NETWORKS
		1 INTRODUCTION
			1.1 Neural Networks
			1.2 Organisation
			1.3 Notation
		2 THE MODEL OF McCULLOCH-PITTS
			2.1 State-Theoretic Description
			2.2 Associative Memory
		3 THE OUTER-PRODUCT ALGORITHM
			3.1 The Model
			3.2 Storage Capacity
		4 SPECTRAL ALGORITHMS
			4.1 Outer-Products Revisited
			4.2 Constructive Spectral Approaches
			4.3 Basins of Attraction
			4.4 Choice of Eigenvalues
		5 COMPUTER SIMULATIONS
		6 DISCUSSION
			A PROPOSITIONS
			B OUTER-PRODUCT THEOREMS
			C PROOFS OF SPECTRAL THEOREMS
		References
	NEURONIC EQUATIONS AND THEIR SOLUTIONS
		1. Introduction
			1.1. Reminiscing
			1.2. The 1961 Model
			1.3. Notation
		2. Linear Separable NE
			2.1. Neuronic Equations
			2.2. Polygonal Inequalities
			2.3. Computation of the n-expansion of arbitrary l.s. functions
			2.4. Continuous versus discontinuous behaviour: transitions
		3. General Boolean NE
			3.1. Linearization in tensor space
			3.2. Next-state matrix
			3.3. Normal modes, attractors
			3.4. Synthesis of nets: the inverse problem
			3.5. Separable versus Boolean nets; connections with spin formalism
		References
	The Dynamics of Searches Directed by Genetic Algorithms
		The Hyperplane Transformation.
		The Genetic Algorithm as a Hyperplane-Directed Search Procedure
			(1) Description of the genetic algorithm
			(2) Effects of the S's on the search generated by a genetic algorithm.
			(3) An Example.
			References.
	PROBABILISTIC NEURAL NETWORKS
		1. INTRODUCTION
		2. MODELING THE NOISY NEURON
			2.1. Empirical Properties of Neuron and Synapse
			2.2. Model of Shaw and Vasudevan
			2.3. Model of Little
			2.4. Model of Taylor
		3. NONEQUILIBRIUM STATISTICAL MECHANICS OF LINEAR MODELS
			3.1. Statistical Law of Motion - Markov Chain and Master Equation
			3.2. Entropy Production in the Neural
			3.3. Macroscopic Forces and Fluxes
			3.4. Conditions for Thermodynamic Equilibrium
			3.5. Implications for Memory Storage: How Dire?
		4. DYNAMICAL PROPERTIES OF NONLINEAR MODELS
			4.1. Views of Statistical Dynamics
			4.2. Multineuron Interactions, Revisited
			4.3. Cognitive Aspects of the Taylor Model
			4.4. Noisy RAMs and Noisy Nets
		5. THE END OF THE BEGINNING
		ACKNOWLEDGMENTS
		APPENDIX. TRANSITION PROBABILITIES IN 2-NEURON NETWORKS
		REFERENCES
Part Two ARCHITECTURAL DESIGN
	Some Quantitative Issues in the Theory of Perception
		I. PERFORMANCE
			Optimal Performance
			Discriminability
			Field Theory and Statistical Mechanics
			Likely and Unlikely Distortions
			Local versus Non-local Computations
			Some Questions
			Performance of Neural Nets
		II. MODELS
			Feature Detectors
			Ising Spins in Random Fields
			Linear Filters
			Perception by Steepest Descent
		III. NETWORKS
			Feed Forward Net and Grandmother Cells
			Visual Perception by Neural Nets
			Generalization
			The Discriminant in Neural Nets
			Neural Spike Trains
		ACKNOWLEDGEMENTS
		REFERENCES
	SPEECH PERCEPTION AND PRODUCTION BY A SELF-ORGANIZING NEURAL NETWORK
		Abstract
			1. The Learning of Language Units
			2. Low Stages of Processing: Circular Reactions and the Emerging Auditory and Motor Codes
			3. The Vector Integration to Endpoint Model
			4. Self-Stabilization of Imitation via Motor-to-Auditory Priming
			5. Higher Stages of Processing: Context-Sensitive Chunking and Unitization of the Emerging Auditory Speech Code
			6. Masking Fields
		References
	NEOCOGNITRON: A NEURAL NETWORK MODEL FOR VISUAL PATTERN RECOGNITION
		1. INTRODUCTION
		2. THE STRUCTURE AND BEHAVIOR OF THE NETWORK
			2.1 Physiological Background
			2.2 The Structure of the Network
			2.3 Deformation- and Position-Invariant Recognition
			2.4 Mathematical Description of the Cell's Response
		3. SELF-ORGANIZATION OF THE NETWORK
			3.1 Learning without a Teacher
				3.1.1 Reinforcement of maximum-output cells
				3.1.2 Generation of a feature-extracting S-cell
				3.1.3 Development of homogeneous connections
				3.1.4 Initial values of the variable connections
				3.1.5 Mathematical description of the reinforcement
		4. HANDWRITTEN NUMERAL RECOGNITION
		5. DISCUSSION
		REFERENCES
Part Three APPLICATIONS
	LEARNING TO PREDICT THE SECONDARY STRUCTURE OF GLOBULAR PROTEINS
		Acknowledgements
		References
		Figure Legends
	Exploiting Chaos to Predict the Future and Reduce Noise
		Abstract
		1 Introduction
			1.1 Chaos and randomness
		2 Model Building
			2.1 State space reconstruction
			2.2 Learning nonlinear transformations
				2.2.1 Representations
				2.2.2 Local approximation
				2.2.3 Trajectory segmenting
				2.2.4 Nonstationarity
				2.2.5 Discontinuities
				2.2.6 Implementing local approximation on computers
				2.2.7 An historical note
			2.3 Comparison to statistically motivated methods
		3 Scaling of Error estimates
			3.1 Dependence on number of data points
			3.2 Dependence on extrapolation time
				3.2.1 Higher order Lyapunov exponents
				3.2.2 Direct forecasting
				3.2.3 Iterative forecasting
				3.2.4 Temporal scaling with noise
			3.3 Continuous time
			3.4 Numerical results
			3.5 Is there an optimal approach?
		4 Experimental Data Analysis
			4.1 Computing fractal dimension: A review
			4.2 More accurate data analysis with higher order approximation
			4.3 Forecasting as a measure of self-consistency
		5 Noise Reduction
		6 Adaptive Dynamics
		7 Conclusions
		References
	How Neural Nets Work*
		1. Introduction
		2. Backpropagation
		3. Prediction
		4. Why It Works
		5. Conclusions
		References
	PATTERN RECOGNITION AND SINGLE LAYER NETWORKS
		Distinctions and Differences
		Adaptive Pattern Classifiers
		Discriminant Functions
		Choosing A Discriminant Function
		The Concept of Order
		Choosing a Φ Function
		Storage Capacity of a Φ Machine
		Supervised Learning Problem
		Optimal Associative Mappings
		Perceptron Learning Rule
		Symmetry Detection Problem
		Simulation Description
		Simulation Results
		Implementing Invariances
		Implementing Invariances: General Case
		Conclusion
		References
	WHAT IS THE SIGNIFICANCE OF NEURAL NETWORKS FOR AI?
		1. INTRODUCTION
		2. Associative Memory
		3. ATTENTIVE ASSOCIATIVE MEMORY
		4. Conclusion
		5. Other attributes yet to be discovered
		6. REFERENCES
	SELECTED BIBLIOGRAPHY ON CONNECTIONISM
		Introduction
	HIERTALKER: A DEFAULT HIERARCHY OF HIGH ORDER NEURAL NETWORKS THAT LEARNS TO READ ENGLISH ALOUD
		1. Introduction
		2. How HIERtalker works
		3. The Training Sets
		4. Conclusion
		References
		Acknowledgments



