Friedman J., Hastie T., Tibshirani R. — The Elements of Statistical Learning





Title: The Elements of Statistical Learning

Authors: Friedman J., Hastie T., Tibshirani R.

Annotation:

During the past decade there has been an explosion in computation and information technology. With it have come vast amounts of data in a variety of fields such as medicine, biology, finance, and marketing. The challenge of understanding these data has led to the development of new tools in the field of statistics and spawned new areas such as data mining, machine learning, and bioinformatics. Many of these tools have common underpinnings but are often expressed with different terminology. This book describes the important ideas in these areas in a common conceptual framework. While the approach is statistical, the emphasis is on concepts rather than mathematics. Many examples are given, with a liberal use of color graphics. It should be a valuable resource for statisticians and anyone interested in data mining in science or industry. The book's coverage is broad, from supervised learning (prediction) to unsupervised learning. The many topics include neural networks, support vector machines, classification trees, and boosting (the first comprehensive treatment of that topic in any book).

Trevor Hastie, Robert Tibshirani, and Jerome Friedman are professors of statistics at Stanford University. They are prominent researchers in this area: Hastie and Tibshirani developed generalized additive models and wrote a popular book of that title. Hastie wrote much of the statistical modeling software in S-PLUS and invented principal curves and surfaces. Tibshirani proposed the Lasso and is co-author of the very successful An Introduction to the Bootstrap. Friedman is the co-inventor of many data-mining tools, including CART, MARS, and projection pursuit.


Language: English

Category: Computer science/AI, knowledge

Subject index status: Index with page numbers is ready

Year of publication: 2001

Number of pages: 533

Added to catalog: 17.11.2005

Subject index
$C_{p}$ statistic      203
Abu-Mostafa, Y.S.      77 509
Activation function      350—352
AdaBoost      299—309
Adaptive methods      383
Adaptive nearest neighbor methods      427—430
Adaptive wavelet filtering      157
Additive model      257—266
Adjusted response      259
Affine invariant average      434
Affine set      106
Agrawal, R.      442 443 503 509
AIC      see "Akaike information criterion"
Akaike information criterion (AIC)      203
Akaike, H.      222 509
Allen, D.M.      222 509
Analysis of deviance      102
Applications, aorta      178
Applications, bone      128
Applications, California housing      335—336
Applications, countries      468
Applications, document      485
Applications, galaxy      175
Applications, heart attack      122 181
Applications, marketing      444
Applications, microarray      5 462 485
Applications, nuclear magnetic resonance      150
Applications, ozone      175
Applications, prostate cancer      2 47 57
Applications, satellite image      422
Applications, spam      2 262—264 274 276 282 289 314
Applications, vowel      391 416
Applications, waveform      402
Applications, ZIP code      3 362 488—489
Association rules      444—447 451—453
Automatic selection of smoothing parameters      134
B-spline      160
Back-propagation      349 353—355 366—367
Backfitting procedure      259
Backward pass      354
Backward stepwise selection      55
Bagging      246—249
Barron, A.R.      368 509
Barry, Ronald      335 519
Bartlett, P.      343 520
Basis expansions and regularization      115—164
Basis functions      117 161 163 283 289
Baskett, F.      513
Batch learning      355
Baum — Welch algorithm      236
Bayes classifier      21
Bayes factor      207
Bayes methods      206—207 231—236
Bayes rate      21
Bayesian information criterion (BIC)      206
Becker, R.      333 509
Bell, A.      504 509
Bellman, R.E.      22 510
Benade, A.      100 520
Bengio, Y.      363 366 368 517
Bentley, J.      513
Best, N.      255 520
Between-class covariance matrix      92
Bias      16 24 37 136 193
Bias-variance decomposition      24 37 193
Bias-variance tradeoff      37 193
Bibby, J.M.      75 111 495 504 518
BIC      see "Bayesian information criterion"
Bishop, C.M.      39 206 367 510
Boosting      299—346
Bootstrap      217 225—228 231 234—246
Bootstrap relationship to Bayesian method      235
Bootstrap relationship to maximum likelihood method      231
Boser, B.      362 368 517
Botha, J.      295 515
Bottom-up clustering      472—479
Bottou, L.      363 366 368 517
Breiman, L.      74 75 219 222 255 270 272 296 302 331 405 406 510
Brooks, R.J.      521
Bruce, A.      155 510
BRUTO      266 385
Buja, A.      88 260 399 404 406 500 504 510 515 516 521
Bump hunting      see "patient rule induction method (PRIM)"
Bumping      253—254
Burges, C.J.C.      406 510
Canonical variates      392
CART      see "classification and regression trees"
Categorical predictors      10 271—272
Chambers, J.      295 510
Cherkassky, V.      39 211 510
Chui, C.      155 511
Clark, L.G.      293 518
Classical multidimensional scaling      502
Classification      21 79—114 266—278 371—384
Classification and regression trees (CART)      266—278
Cleveland, W.      333 509
Clustering      453—479
Clustering, agglomerative      475—479
Clustering, hierarchical      472—479
Clustering, K-means      461—462
Codebook      465 468
Combinatorial algorithms      460
Combining models      250—252
Committee methods      251
Comon, P.      504 511
Comparison of learning methods      312—314
Complete data      240
Complexity parameter      37
Condensing procedure      432
Conditional likelihood      31
Confusion matrix      263
Conjugate gradients      355
Convolutional networks      364
Cook, D.      500 521
Copas, J. B.      75 330 511
Cost complexity pruning      270
Cover, T.M.      222 417 433 511
Cox, D.R.      254 511
Cressie, Noel A. C.      511
Cross-entropy      270—271
Cross-validation      214—216
Csiszar, I.      255 511
Cubic smoothing spline      127—128
Cubic spline      127—128
Curse of dimensionality      22—27
Dale, M.B.      518
Dasarathy, B.V.      432 433 511
Data augmentation      240
Daubechies symmlet-8 wavelets      150
Daubechies, I.      155 511
de Boor, C.      155 511
Dean, N.      504 510
Decision boundary      13 15 16 22
Decision trees      266—278
Decoding step      467
Degrees of freedom in an additive model      264
Degrees of freedom of a tree      297
Degrees of freedom in ridge regression      63
Degrees of freedom of smoother matrices      129—130 134
Delta rule      355
Demmler — Reinsch basis for splines      132
Dempster, A.      255 400 511
Denker, J.      362 368 517 520
Density estimation      182—189
Deviance      102 271
Devijver, P.A.      432 511
Discrete variables      10 272—273
Discriminant adaptive nearest neighbor (DANN) classifier      427—432
Discriminant analysis      84—94
Discriminant coordinates      85
Discriminant functions      87—88
Dissimilarity measure      455—456
Donoho, D.      331 511
du Plessis, J.      100 520
Duan, N.      432 511
Dubes, R.C.      461 475 516
Duchamp, T.      512
Duda, R.      39 111 512
Dummy variables      10
Early stopping      355
Effective degrees of freedom      15 63 129—130 134 205 264 297
Effective number of parameters      15 63 129—130 134 205 264 297
Efron, B.      105 204 222 254 295 512
Eigenvalues of a smoother matrix      130
EM algorithm      236—242
EM algorithm as a maximization-maximization procedure      241
EM algorithm for two-component Gaussian mixture      236
Encoder      466—467
Entropy      271
Equivalent kernel      133
Error rate      193—203
Estimates of in-sample prediction error      203
Evgeniou, T.      144 155 406 512
Expectation-maximization algorithm      see "EM algorithm"
Exponential loss and AdaBoost      305
Extra-sample error      202
Fan, J.      190 512
Feature extraction      126
Features      1
Feed-forward neural networks      350—366
Ferreira, J.      100 520
Finkel, R.      513
Fisher's linear discriminant      84—94 390
Fisher, N.      296 514
Fisher, R. A.      406 512
Fix, E.      433 512
Flexible discriminant analysis      391—396
Flury, B.      504 512 521
Forgy, E.W.      503 512
Forward pass algorithm      353
Forward selection      55
Forward stagewise additive modeling      304
Fourier transform      144
Frank, I.      70 75 512
Freiha, F.      3 47 521
Frequentist methods      231
Freund, Y.      299 341 343 513 520
Friedman, J.      39 70 74 75 90 219 223 270 272 296 301 307 326 331 333 335 343 344 367 405 429 500 504 510 512 513
Fukunaga, K.      429 520
Function approximation      28—36
Furnival, G.      55 514
Gao, H.      155 510
Gap statistic      472
Garlin, J.      255 514
Gating networks      290—291
Gauss — Markov theorem      49—50
Gauss — Newton method      349
Gaussian (normal) distribution      17
Gaussian mixtures      237 416 444 462
Gaussian radial basis functions      186
GCV      see "Generalized cross-validation"
Gelfand, A.      255 514
Gelman, A.      255 514
GEM (generalized EM)      241
Geman, D.      255 514
Geman, S.      255 514
Generalization error      194
Generalization performance      194
Generalized additive model      257—265
Generalized association rules      449—450
Generalized cross-validation      216
Generalized linear models      103
Generalizing linear discriminant analysis      390
Gersho, A.      466 468 480 503 514
Gibbs sampler      243—244
Gibbs sampler for mixtures      244
Gijbels, I.      190 512
Gilks, W.      255 520
Gill, P.E.      75 519
Gini index      271
Girosi, F.      144 148 155 368 514
Global dimension reduction for nearest neighbors      431
Golub, G.      222 296 514
Gordon, A.D.      503 514
Gradient boosting      320
Gradient descent      320 353—354
Gray, R.      466 468 480 503 514
Green, P.      155 157 295 515
Greenacre, M.      515
Haar basis function      150
Haffner, P.      363 366 368 517
Hall, P.      254 515
Hand, D.J.      111 429 515 519
Hansen, M.      289 521
Hansen, R.      75 517
Hart, P.      39 111 417 432 433 511 512 515
Hartigan, J. A.      462 503 515
Hastie, T.      88 113 190 222 260 261 262 266 295 301 307 343 344 382 385 399 402 404 406 429 431 432 433 472 504 510 514 515 516 519
Hat matrix      44
Hathaway, Richard J.      255 516
Heath, M.      222 514
Hebb, D.O.      367 516
Helix      506
Henderson, D.      362 368 517
Herman, A.      295 515
Hertz, J.      367 516
Hessian matrix      99
Hidden units      351—352
Hierarchical clustering      472—479
Hierarchical mixtures of experts      290—292
Hinkley, D.V.      254 511
Hinton, G.      255 296 367 516 519 520
Hints      77
Hodges, J.L.      433 512
Hoerl, A. E.      60 75 516
Hoff, M.E.      355 367 522
Howard, R.E.      362 368 517
Hubbard, W.      362 368 517
Huber, P.      311 367 386 504 516
Hyperplane, separating      108—110
Hyvarinen, A.      496 497 498 504 516
ICA      see "independent components analysis"
Ihaka, R.      406 510
In-sample prediction error      203
Incomplete data      293
Independent components analysis      494—501
Independent variables      9
Indicator response matrix      81
Inference      225—255
Information theory      208 496
Information, Fisher      230
Information, observed      239
Inputs      10
Inskip, H.      255 520
Instability of trees      274
Intercept      11
Invariance manifold      423
Invariant metric      423
Inverse wavelet transform      153
IRLS      see "iteratively reweighted least squares"
Irreducible error      197
Iteratively reweighted least squares (IRLS)      99
Izenman, A.      516
Jackel, L.D.      362 368 517
Jacobs, R.      296 516 517
Jain, A.K.      461 475 516
Jancey, P.C.      503 516
Jensen's inequality      255
Johnstone, I.      3 47 331 511 521
Jones, L.      368 517