Zurada, Jacek M.

Introduction to artificial neural systems / Jacek M. Zurada. - Ahmedabad: Jaico Publishing House, 2006. - xxiv, 683, A1-I19 p.: ill.

1 Artificial Neural Systems: Preliminaries
1.1 Neural Computation: Some Examples and Applications
Classifiers, Approximators, and Autonomous Drivers 4
Simple Memory and Restoration of Patterns 10
Optimizing Networks 14
Clustering and Feature Detecting Networks 16
1.2 History of Artificial Neural Systems Development 17
1.3 Future Outlook 21
References 22
2 Fundamental Concepts and Models of Artificial Neural Systems 25
2.1 Biological Neurons and Their Artificial Models 26
Biological Neuron 27
McCulloch-Pitts Neuron Model 30
Neuron Modeling for Artificial Neural Systems 31
2.2 Models of Artificial Neural Networks 37
Feedforward Network 37
Feedback Network 42
2.3 Neural Processing 53
2.4 Learning and Adaptation 55
Learning as Approximation or Equilibria Encoding 55
Supervised and Unsupervised Learning 56
2.5 Neural Network Learning Rules 59
Hebbian Learning Rule 60
Perceptron Learning Rule 64
Delta Learning Rule 66
Widrow-Hoff Learning Rule 69
Correlation Learning Rule 69
Winner-Take-All Learning Rule 70
Outstar Learning Rule 71
Summary of Learning Rules 72
2.6 Overview of Neural Networks 74
2.7 Concluding Remarks 76
Problems 78
References 89
3 Single-Layer Perceptron Classifiers 93
3.1 Classification Model, Features, and Decision Regions 94
3.2 Discriminant Functions 99
3.3 Linear Machine and Minimum Distance Classification 106
3.4 Nonparametric Training Concept 114
3.5 Training and Classification Using the Discrete Perceptron:
Algorithm and Example 120
3.6 Single-Layer Continuous Perceptron Networks for Linearly
Separable Classifications 132
3.7 Multicategory Single-Layer Perceptron Networks 142
3.8 Concluding Remarks 152
Problems 153
References 161
4 Multilayer Feedforward Networks 163
4.1 Linearly Nonseparable Pattern Classification 165
4.2 Delta Learning Rule for Multiperceptron Layer 175
4.3 Generalized Delta Learning Rule 181
4.4 Feedforward Recall and Error Back-Propagation Training 185
Feedforward Recall 185
Error Back-Propagation Training 186
Example of Error Back-Propagation Training 190
Training Errors 195
Multilayer Feedforward Networks as
Universal Approximators 196
4.5 Learning Factors 206
Initial Weights 208
Cumulative Weight Adjustment versus
Incremental Updating 208
Steepness of the Activation Function 209
Learning Constant 210
Momentum Method 211
Network Architectures Versus Data Representation 214
Necessary Number of Hidden Neurons 216
4.6 Classifying and Expert Layered Networks 220
Character Recognition Application 221
Expert Systems Applications 225
Learning Time Sequences 229
4.7 Functional Link Networks 230
4.8 Concluding Remarks 234
Problems 235
References 248
5 Single-Layer Feedback Networks 251
5.1 Basic Concepts of Dynamical Systems 253
5.2 Mathematical Foundations of Discrete-Time
Hopfield Networks 254
5.3 Mathematical Foundations of Gradient-Type
Hopfield Networks 264
5.4 Transient Response of Continuous-Time Networks 276
5.5 Relaxation Modeling in Single-Layer Feedback Networks 283
5.6 Example Solutions of Optimization Problems 287
Summing Network with Digital Outputs 287
Minimization of the Traveling Salesman Tour Length 294
5.7 Concluding Remarks 299
Problems 301
References 310
6 Associative Memories 313
6.1 Basic Concepts 314
6.2 Linear Associator 320
6.3 Basic Concepts of Recurrent Autoassociative Memory 325
Retrieval Algorithm 327
Storage Algorithm 328
Performance Considerations 336
6.4 Performance Analysis of Recurrent
Autoassociative Memory 339
Energy Function Reduction 342
Capacity of Autoassociative Recurrent Memory 343
Memory Convergence versus Corruption 345
Fixed Point Concept 349
Modified Memory Convergent Toward Fixed Points 351
Advantages and Limitations 354
6.5 Bidirectional Associative Memory 354
Memory Architecture 355
Association Encoding and Decoding 357
Stability Considerations 359
Memory Example and Performance Evaluation 360
Improved Coding of Memories 363
Multidirectional Associative Memory 368
6.6 Associative Memory of Spatio-temporal Patterns 370
6.7 Concluding Remarks 375
Problems 377
References 386
7 Matching and Self-Organizing Networks 389
7.1 Hamming Net and MAXNET 391
7.2 Unsupervised Learning of Clusters 399
Clustering and Similarity Measures 399
Winner-Take-All Learning 401
Recall Mode 406
Initialization of Weights 406
Separability Limitations 409
7.3 Counterpropagation Network 410
7.4 Feature Mapping 414
7.5 Self-Organizing Feature Maps 423
7.6 Cluster Discovery Network (ART1) 432
7.7 Concluding Remarks 444
Problems 445
References 452
8 Applications of Neural Algorithms and Systems 455
8.1 Linear Programming Modeling Network 456
8.2 Character Recognition Networks 464
Multilayer Feedforward Network for Printed
Character Classification 464
Handwritten Digit Recognition: Problem Statement 476
Recognition Based on Handwritten Character Skeletonization 478
Recognition of Handwritten Characters Based on Error
Back-propagation Training 482
8.3 Neural Networks Control Applications 485
Overview of Control Systems Concepts 485
Process Identification 489
Basic Nondynamic Learning Control Architectures 494
Inverted Pendulum Neurocontroller 499
Cerebellar Model Articulation Controller 504
Concluding Remarks 511
8.4 Networks for Robot Kinematics 513
Overview of Robot Kinematics Problems 514
Solution of the Forward and Inverse Kinematics Problems 516
Comparison of Architectures for the Forward
Kinematics Problem 519
Target Position Learning 523
8.5 Connectionist Expert Systems for Medical Diagnosis 527
Expert System for Skin Diseases Diagnosis 528
Expert System for Low Back Pain Diagnosis 532
Expert System for Coronary Occlusion Diagnosis 537
Concluding Remarks 539
8.6 Self-Organizing Semantic Maps 539
8.7 Concluding Remarks 546
Problems 548
References 559
9 Neural Networks Implementation
9.1 Artificial Neural Systems: Overview of Actual Models 566
Node Numbers and Complexity of Computing Systems 567
Neurocomputing Hardware Requirements 569
Digital and Analog Electronic Neurocomputing Circuits 575
9.2 Integrated Circuit Synaptic Connections 585
Voltage-controlled Weights 587
Analog Storage of Adjustable Weights 592
Digitally Programmable Weights 595
Learning Weight Implementation 605
9.3 Active Building Blocks of Neural Networks 608
Current Mirrors 610
Inverter-based Neuron 613
Differential Voltage Amplifiers 617
Scalar Product and Averaging Circuits with
Transconductance Amplifiers 624
Current Comparator 626
Template Matching Network 628
9.4 Analog Multipliers and Scalar Product Circuits 630
Depletion MOSFET Circuit 631
Enhancement Mode MOS Circuit 636
Analog Multiplier with Weight Storage 638
Floating-Gate Transistor Multipliers 640

9.5 Associative Memory Implementations 644
9.6 Electronic Neural Processors 652
9.7 Concluding Remarks 663
Problems 666
References 679

8172246501


Computer algorithms
Artificial neural systems

006.32 / ZUR/I