Suykens J.A.K., Horvath G., Basu S. — Advances in learning theory: methods, models and applications



Title: Advances in learning theory: methods, models and applications

Authors: Suykens J.A.K., Horvath G., Basu S.

Abstract:

In recent years, considerable progress has been made in understanding problems of learning and generalization. In this context, intelligence basically means the ability to perform well on new data after learning a model from given data. Such problems arise in many different areas and are becoming increasingly important to many applications, such as bioinformatics, multimedia, computer vision and signal processing, internet search and information retrieval, data mining and text mining, finance, fraud detection, measurement systems and process control, and several others. New technologies now make it possible to generate massive amounts of data containing a wealth of information that remains to be explored. Often the dimensionality of the input spaces in these novel applications is huge; in the analysis of microarray data, for example, expression levels of thousands of genes need to be analyzed given only a limited number of experiments. Without dimensionality reduction, classical statistical paradigms show fundamental shortcomings at this point. Facing these new challenges, new mathematical foundations and models are needed so that the data can be processed reliably. These subjects are highly interdisciplinary and relate to problems studied in neural networks, machine learning, mathematics and statistics.
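
As a brief aside on the dimensionality-reduction point above (a minimal sketch, not taken from the book): the following Python/NumPy fragment illustrates a microarray-style setting with far more features (genes) than samples (experiments), reduced to a handful of principal components via PCA. All sizes and variable names are assumptions chosen for illustration.

import numpy as np

# Hypothetical sizes: few experiments, thousands of gene expression levels.
rng = np.random.default_rng(0)
n_samples, n_genes = 50, 5000
X = rng.standard_normal((n_samples, n_genes))   # stand-in for real expression data

# Center the columns, then project onto the top-k right singular vectors (PCA).
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 10                                          # assumed reduced dimension
Z = Xc @ Vt[:k].T                               # (50, 5000) -> (50, 10)
print(Z.shape)

With only 50 samples, classical estimators such as least squares are underdetermined on the raw 5000-dimensional inputs; fitting models in the reduced space Z is one standard way to make the problem tractable.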


Language: en

Category: Computer science/AI, knowledge/

Subject index status: Index with page numbers is ready


Year of publication: 2003

Number of pages: 414

Added to catalog: 15.11.2005

Subject index
$\beta$-mixing      368
$\epsilon$-insensitive loss function      389
$\nu$-support vector classifiers      181
Absolute cost function      382
Adaboost      255
Admissible set      72
Algorithmic stability      149
Annealed VC-entropy      9
Approximation      69
Approximation error      32
Active learning      42
Automatic relevance determination      163 275
Auxiliary field      309
Average binary loss      258
Average case geometry      331
Backpropagation method      18
Bagging      119
Base learning algorithm      255
Bayesian classification      280
Bayesian decision theory      291
Bayesian field theory      290
Bayesian inference      163 276 322
Bayesian regression      271
Besov spaces      49
Bias-variance problem      41
Black box models      378
Bochner’s theorem      53
Brownian motion      292
Case-based reasoning      320
Centered kernel matrix      165
Closure      71
CMAC      394
Coding matrix      257
Collocation scheme      59
Concept class      360
Conditional distribution function      2
Conditional expectation      344
Conjugate gradient method      136 239
Conjugate prior      326
Consistency      6
Consistent algorithm      365
Convergence in probability      5
Convex      72
Correlation coefficient      166 214
Covariance operator      300
Covering number      33 83 362
Cross-linguistic correlation      212
Cross-model likelihood      334
Cross-validation      112 387
CS-functional      48
Cumulative prediction error      342
Curse of dimensionality      73
Data smoothing      320
Decision trees      256
Deflation      231
Density estimation      3 170 295
Density operator      298
Dependent inputs      367
Diffusion kernel      207
Diffusion process      209
Direct method      136
Dispersion      367
Dual variables      135
Eigenfunctions      169
Embedded hardware      394
Empirical density      325
Empirical error      32 113
Empirical mean      358
Empirical risk functional      4
Empirical risk minimization      132 387
Energy      296
Entropy of a set of functions      7
Error correcting output codes      257
Error stability      117
Errors-in-variables      382
Euclidean orthonormal basis      83
Evaluation functional      100
Evaluation space      100
Evaluation subduality      101
Expected risk      132
Exponential family      328
Feature selection      123 243
Filter operator      304
Filtered differences      304
Fisher discriminant analysis      160
Fisher information matrix      330
Fixed-size LS-SVM      170
Fourier orthonormal basis      83
Fourier representation      81
Frobenius inner product      211
Fubini’s Theorem      30
Functional learning      89
Gagliardo diagram      49
Gaussian mixture prior      300
Gaussian prior factors      299
Gaussian process prior      299
Gaussian processes      163
Generalization capability      386
Generalization error      113 361
Generalized cross-validation      121 148
Generalized eigenvalue problem      168 214
Glivenko — Cantelli lemma      358
Globally exponentially stable      372
Gram matrix      201
Grey box models      378
Growth function      9
h-projection      328
Hamming decoding      259
Hardware complexity      396
Hilbert isomorphism      31
Hilbert space      30
Hoeffding’s Inequality      34 359
Hyperfield      303
Hyperparameter      303
Hyperparameter optimization      279
Hyperparameters      163
Hyperprior      308
Hypertext documents      215
Hypothesis      360
Hypothesis space      253
Hypothesis stability      114
Image classification      138 142
Image completion      303
Incomplete Cholesky factorization      140
Information geometry      327
Information retrieval      198
Information-based inference      324
Invariances      125
Inverse document frequency      203
Inverse quantum theory      298
Inverse temperature      296
Ivanov regularization      132
Joint density      322
Joint probability distribution      2
k-nearest neighbor algorithm      114
k-nearest neighbor estimate      343
Karush — Kuhn — Tucker conditions      20 189
Karush — Kuhn — Tucker system      159
Kernel CCA      166 212
Kernel estimate      344
Kernel FDA      159
Kernel machines      119
Kernel PCA      163
Kernel PLS      168 236
Kernel ridge regression      239
Kernelization      189
Kernels      89
Kerridge inaccuracy      295 325
Kolmogorov’s n-width      79
Kullback — Leibler distance      292
Kullback — Leibler divergence      327 329
Lagrangian      20 158 182
Laplacian operator      81
Latent semantic indexing      204
Law of Large Numbers      358
Learning machine      2
Learning rate      361
Least squares estimate      344 381
Least squares support vector machines      136 157 236 239 392
Leave-one-out bound      146
Leave-one-out error      113
Lie group      301
Likelihood energy      296
Likelihood field      291
Likelihood function      323
Linear system      32 121 137 159 239
Local averaging estimates      343
Local learning      320
Local modeling      320
Local models      333
Locally weighted geometry      336
Logistic regression      256
Loss      2
Loss-based decoding      259
Low-rank approximation      139 168
Margin      184 253
Markov chain      372
Maximal margin hyperplane      19
Maximum a posteriori approximation      293
Maximum entropy estimate      330
Maximum likelihood      382
Maximum likelihood estimate      330
Measurement      377
Mercer kernel      31 77
Mercer’s condition      23 159 184
Minimal empirical risk algorithm      363
Misclassification error      113
Model complexity      379
Model selection      112 380
Model validation      382
Modeling capability      384
Modulus of continuity      72
Monotonicity      297
Monte Carlo methods      42
Multilayer perceptron      383
Multiple-model prior      333
n-grams      220
Natural language processing      199
Newton’s method      38
NIPALS      231
Norm-induced topology      71
Nyström approximation      139 168
Optimal control      173
Optimal interpolant      63
Outliers      161
Output coding      257
Overfitting      24 42
P-dimension      364
Paley — Wiener theorem      55
Parameter estimation      383
Partial stability      118
Partition sum      296
Partitioning estimate      344
Pattern recognition      3 346
Peetre K-functional      48
Pointwise defined functions      101
Portfolio selection      348
Posterior      291
Posterior density      323
Posterior energy      296
Predictive density      291 324
Primal-dual neural network interpretation      160
Prior      291
Prior information      366 393
Probability measure      30
Probability-based inference      322
Probably approximately correct      360
Proximal support vector machine      135 239
Pruning      161 393
Pythagorean relation      329
Quadratic programming      395
Quadratic Rényi entropy      170
Radial basis function network      383
Random entropy      7
Random VC-entropy      8
Rate of convergence      9
Rayleigh quotient      160
Real normed linear space      71
Real-valued Boolean function      83
Recurrent networks      172
Recursive least squares      170
Reduced form      239
Regression function      3 30 293
Regularization functionals      120
Regularization networks      121 161
Regularization parameter      30 77
Regularized least-squares classification      134
Relevance vector machine      273
Representer theorem      77 91 133
Reproducing kernel      102
Reproducing kernel Hilbert space (RKHS)      31 105 120 132
Ridge regression      120 161
Risk functional      2
Robust statistics      162
Robustness-efficiency trade-off      162
Sample complexity      362
Sample error      32
Semantic proximity matrix      210
Semantic relations      202
Semantic similarity      207
Sensitivity analysis      124
Sherman — Morrison — Woodbury formula      137
Similar-case modeling      332
Similarity measures      200
Singular value decomposition      210
Small sample size      15
Sobolev space      79
Soft margin      158
Sparse models      275
Sparseness      161
Stationary and ergodic process      349
Statistical learning theory      2 358 387
Statistically dependent models      334
Stochastic process      358
String subsequence kernel      217
Structural risk minimization      15 389
Subduality kernel      102
Support vector machines      21 121 133 156 180 254 388
Support vectors      20
Target function      360
Text categorization      138
Tikhonov regularization      133
Total variation metric      366
Transductive inference      123 170
UCI machine learning repository      137 159 240
Underfitting      42
Uniform convergence of empirical means      358
Uniform stability      118
Universal approximators      73
Universally consistent      345
Universally consistent regression estimates      343
Variable-basis approximation      74
Variation w.r.t. set of functions      75
VC dimension      11 364 388
VC entropy      6
VC theory      117
Vector space model      201
Virtual samples      125
von Neumann kernel      209
Vowel-recognizer      70