Mandic D.P., Chambers J.A. — Recurrent neural networks for prediction: learning algorithms, architectures and stability








Title: Recurrent neural networks for prediction: learning algorithms, architectures and stability

Authors: Mandic D.P., Chambers J.A.

Abstract:

In this text, neural networks are treated as massively interconnected nonlinear adaptive filters. The book offers new insight into the learning algorithms, architectures and stability of recurrent neural networks and, consequently, should have broad appeal.


Language: en

Category: Computer science/Genetics, neural networks/

Subject index status: index with page numbers is complete


Year of publication: 2001

Number of pages: 285

Added to catalogue: 19.11.2005

Subject index
A posteriori, algorithms      113 138
A posteriori, computational complexity      138
A posteriori, error      135
A posteriori, techniques      241
Activation functions, continual adaptation      202
Activation functions, definition      239
Activation functions, desirable properties      51
Activation functions, examples      53
Activation functions, properties to map to the complex plane      61
Activation functions, steepness      200
Activation functions, temperature      200
Activation functions, why nonlinear      50
Activation potential      36 54
ADALINE      2
Adaptability      9
Adaptation gain, behaviour of steepest descent      16
Adaptive algorithms, convergence criteria      163
Adaptive algorithms, momentum      211
Adaptive algorithms, performance criteria      161
Adaptive learning      21
Adaptive resonance theory      2
Adaptive systems, configurations      10
Adaptive systems, generic structure      9
Analytic continuation      61
Asymptotic convergence rate      121
Asymptotic stability      116
Attractors      115
Autonomous systems      263
Autoregressive (AR) models, coefficients      37
Autoregressive integrated moving average (ARIMA) model      171
Autoregressive moving average (ARMA) models, filter structure      37
Averaging methods      163
Backpropagation      18
Backpropagation, normalised algorithm      153
Backpropagation, through time      209
Backpropagation, through time dynamical equivalence      210
Batch learning      20
Bias/variance dilemma      112
Bilinear model      93
Black box modelling      73
Blind equalizer      12
Block-based estimators      15
Bounded Input Bounded Output (BIBO) stability      38 118
Brouwer’s Fixed Point Theorem      247
Canonical state-space representation      44
Cauchy — Riemann equations      228
Channel equalisation      11
Clamping functions      240
Classes of learning algorithm      18
Cognitive science      1
Complex numbers, elementary transformations      227
Connectionist models      1
Connectionist models, dates      2
Constructive learning      21
Contraction mapping theorem      245
Contractivity of relaxation      252
Curse of dimensionality      22
Data-reusing      114 135 142
Data-reusing, stabilising features      137 145
David Hilbert      47
Delay space embedding      41
Deseasonalising data      265
Deterministic learning      21
Deterministic versus stochastic (DVS) plots      172 175
Directed algorithms      111
Domain of attraction      116
Dynamic multilayer perceptron (DMLP)      79
Efficiency index      135
Electrocardiogram (ECG)      181
Embedded memory      111
Embedding dimension      74 76 174 178
Equation error      104
Equation error, adaptation      107
Equilibrium point      264
Error criterion      101
Error function      20
Exogenous inputs      40
Exponentially stable      264
Extended Kalman filter      109
Extended recursive least squares algorithm      217 236
Feedforward network, definition      239
Fixed point, iteration      143
Fixed point, theory      117 245
Forgetting behaviour      110
Forgetting mechanism      101
Frobenius matrix      251
Function definitions, conformal      227
Function definitions, entire      227
Function definitions, meromorphic      227
Gamma memory      42
Gaussian variables, fourth order standard factorisation      167
Gear shifting      21
Global asymptotic stability (GAS)      116 118 251 264
Gradient-based learning      12
Grey box modelling      73
Hammerstein model      77
Heart rate variability      181
Heaviside function      55
Hessian      15 52
Holomorphic function      227
Hyperbolic attractor      110
Incremental learning      21
Independence assumptions      163
Independent Identically Distributed (IID)      38
Induced local field      36
Infinite Impulse Response (IIR), equation error adaptive filter      107
Input transformations      23
Kalman Filter (KF) algorithm      14
Kolmogorov function      223
Kolmogorov — Sprecher Theorem      224
Kolmogorov’s theorem      6 93 223
Kolmogorov’s theorem, universal approximation theorem      47 48
Learning rate      13
Learning rate adaptation      200
Learning rate, continual adaptation      202
Learning rate, selection      202
Least Mean Square (LMS) algorithm      14 18
Least Mean Square (LMS) algorithm, data-reusing form      136
Linear filters      37
Linear prediction, foundations      31
Linear regression      14
Liouville Theorem      61 228
Lipschitz function      224 246 263
Logistic function      36 53
Logistic function, a contraction      118
Logistic function, approximation      58
Logistic function, fixed points of biased form      127
Lorenz equations      174 195
Lyapunov stability      116 143 162
Lyapunov stability, indirect method      162
Mandelbrot and Julia sets      61
Markov model, first order      164
Massive parallelism      6
Misalignment      168 169
Möbius transformation      47 228
Möbius transformation, fixed points      67
Model reference adaptive system (MRAS)      106
Modular group      229
Modular group, transfer function between neurons      66
Modular neural networks, dynamic equivalence      215
Modular neural networks, static equivalence      214
NARMA with eXogenous inputs (NARMAX) model, compact representation      71
NARMA with eXogenous inputs (NARMAX) model, validity      95
Nearest neighbours      175
Nesting      130
Neural dynamics      115
Neural network, bias term      50
Neural network, free parameters      199
Neural network, general architectures for prediction and system identification      99
Neural network, growing and pruning      21
Neural network, hybrid      84
Neural network, in complex plane      60
Neural network, invertibility      67
Neural network, modularity      26 199 214
Neural network, multilayer feedforward      41
Neural network, nesting      27
Neural network, node structure      2
Neural network, ontogenic      21
Neural network, properties      1
Neural network, radial basis function      60
Neural network, redundancy      113
Neural network, specifications      2
Neural network, spline      56
Neural network, time-delay      42
Neural network, topology      240
Neural network, universal approximators      49 54
Neural network, wavelet      57
Neural network, with locally distributed dynamics (LDNN)      79
Neuron, biological perspective      32
Neuron, definition      3
Neuron, structure      32 36
Noise cancellation      10
Non-recursive algorithm      25
Nonlinear Autoregressive (NAR) model      40
Nonlinear Autoregressive Moving Average (NARMA) model      39
Nonlinear Autoregressive Moving Average (NARMA) model, recurrent perceptron      97
Nonlinear Finite Impulse Response (FIR) filter, learning algorithm      18
Nonlinear Finite Impulse Response (FIR) filter, normalised gradient descent, optimal step size      153
Nonlinear Finite Impulse Response (FIR) filter, weight update      201
Nonlinear gradient descent      151
Nonlinear parallel model      103
Nonlinearity detection      171 173
Nonparametric modelling      72
Normalised LMS algorithm, learning rate      150
o notation      221
Objective function      20
Ontogenic functions      241
Orthogonal condition      34
Output Error      104
Output error, adaptive infinite impulse response (IIR) filter      105
Output error, learning algorithm      108
Parametric modelling      72
Pattern learning      26
Perceptron      2
Phase space      174
Piecewise-linear model      36
Pipelining      131
Polynomial equations      48
Polynomial time      221
Prediction, basic building blocks      35
Prediction, conditional mean      39 88
Prediction, configuration      11
Prediction, difficulties      5
Prediction, history      4
Prediction, principle      33
Prediction, reasons for using neural networks      5
Preservation of contractivity/expansivity      218
Principal component analysis      23
Proximity functions      54
Pseudolinear regression algorithm      105
Quasi-Newton learning algorithm      15
Rate of convergence      121
Real time recurrent learning (RTRL)      92 108 231
Real time recurrent learning (RTRL), a posteriori form      141
Real time recurrent learning (RTRL), normalised form      159
Real time recurrent learning (RTRL), teacher forcing      234
Real time recurrent learning (RTRL), weight update for static and dynamic equivalence      209
Recurrent backpropagation      109 209
Recurrent backpropagation, static and dynamic equivalence      211
Recurrent neural filter, a posteriori form      140
Recurrent neural filter, fully connected      98
Recurrent neural filter, stability bound for adaptive algorithm      166
Recurrent neural networks (RNNs), activation feedback      81
Recurrent neural networks (RNNs), dynamic behaviour      69
Recurrent neural networks (RNNs), dynamic equivalence      205 207
Recurrent neural networks (RNNs), Elman      82
Recurrent neural networks (RNNs), fully connected, relaxation      133
Recurrent neural networks (RNNs), fully connected, structure      231
Recurrent neural networks (RNNs), Jordan      83
Recurrent neural networks (RNNs), local or global feedback      43
Recurrent neural networks (RNNs), locally recurrent-globally feedforward      82
Recurrent neural networks (RNNs), nesting      130
Recurrent neural networks (RNNs), output feedback      81
Recurrent neural networks (RNNs), pipelined (PRNN)      85 132 204 234
Recurrent neural networks (RNNs), rate of convergence of relaxation      127
Recurrent neural networks (RNNs), relaxation      129
Recurrent neural networks (RNNs), RTRL optimal learning rate      159
Recurrent neural networks (RNNs), static equivalence      205 206
Recurrent neural networks (RNNs), universal approximators      49
Recurrent neural networks (RNNs), Williams — Zipser      83
Recurrent perceptron, GAS relaxation      125
Recursive algorithm      25
Recursive Least-Squares (RLS) algorithm      14
Referent network      205
Riccati equation      15
Robust stability      116
Sandwich structure      86
Santa Fe Institute      6
Saturated-modulus function      57
Seasonal ARIMA model      266
Seasonal behaviour      172
Semiparametric modelling      72
Sensitivities      108
Sequential estimators      15
Set, closure      224
Set, compact      225
Set, dense subset      224
Sigmoid packet      56
Sign-preserving      162
Spin glass      2
Spline, cubic      225
Staircase function      55
Standardisation      23
Stochastic learning      21
Stochastic matrix      253
Stone — Weierstrass theorem      62
Supervised learning      25
Supervised learning, definition      239
Surrogate dataset      173
System identification      10
System linearity      263
Takens’ theorem      44 71 96
Tanh activation function, contraction mapping      124
Teacher forced adaptation      108
Threshold nonlinearity      36
Training set construction      24
Turing machine      22
Undirected algorithms      111
Uniform approximation      51
Uniform asymptotic stability      264
Uniform stability      264
Unsupervised learning      25
Unsupervised learning, definition      239
Uryson model      77
Vanishing gradient      109 166
Vector and matrix differentiation rules      221
Volterra series, expansion      71
Weierstrass theorem      6 92 224
White box modelling      73
Wiener filter      17
Wiener model      77
Wiener model, represented by NARMA model      80
Wiener — Hammerstein model      78
Wold decomposition      39
Yule — Walker equations      34
Zero-memory nonlinearities      31
Zero-memory nonlinearities, examples      35
© Electronic library of the Trustee Council of the Faculty of Mechanics and Mathematics, Moscow State University (MSU), 2004-2024