Friedman J., Hastie T., Tibshirani R. — The Elements of Statistical Learning
Title: The Elements of Statistical Learning

Authors: Friedman J., Hastie T., Tibshirani R.

Abstract:

During the past decade there has been an explosion in computation and information technology. With it has come vast amounts of data in a variety of fields such as medicine, biology, finance, and marketing. The challenge of understanding these data has led to the development of new tools in the field of statistics, and spawned new areas such as data mining, machine learning, and bioinformatics. Many of these tools have common underpinnings but are often expressed with different terminology. This book describes the important ideas in these areas in a common conceptual framework. While the approach is statistical, the emphasis is on concepts rather than mathematics. Many examples are given, with a liberal use of color graphics. It should be a valuable resource for statisticians and anyone interested in data mining in science or industry.

The book's coverage is broad, from supervised learning (prediction) to unsupervised learning. The many topics include neural networks, support vector machines, classification trees, and boosting (the first comprehensive treatment of this topic in any book).

Trevor Hastie, Robert Tibshirani, and Jerome Friedman are professors of statistics at Stanford University. They are prominent researchers in this area: Hastie and Tibshirani developed generalized additive models and wrote a popular book of that title. Hastie wrote much of the statistical modeling software in S-PLUS and invented principal curves and surfaces. Tibshirani proposed the Lasso and is co-author of the very successful An Introduction to the Bootstrap. Friedman is the co-inventor of many data-mining tools including CART, MARS, and projection pursuit.


Language: English

Category: Computer science/AI, knowledge

Subject index status: index with page numbers is ready


Year of publication: 2001

Number of pages: 533

Added to catalog: 17.11.2005

Subject index
Squared error loss      18 24 37 193
Srikant, R.      442 443 503 509
SRM      see "structural risk minimization"
Stacking (stacked generalization)      252—253
Stamey, T.      3 47 521
Starting values      355
Statistical decision theory      18—21
Statistical model      28—29
Steepest descent      320 353—354
Stern, H.      255 514
Stochastic approximation      355
Stochastic search (bumping)      253—254
Stone, C.      219 270 272 289 296 331 405 510 521
Stone, M.      222 521
Stork, D.      39 111 512
Stress function      502—503
Structural risk minimization (SRM)      212—213
Stuetzle, W.      367 504 512 514 515
Subset selection      55—57
Supervised learning      2
Support vector classifier      371—376
Support vector machine      377—389
SURE shrinkage method      153
SVD      see "singular value decomposition"
Swayne, D.      500 504 510 521
Symmlet basis      150
Tangent distance      423—426
Tanh activation function      378
Tanner, M.      255 521
Target variable      10
Tarpey, T.      504 521
Tensor product basis      138
Test error      194—196
Test set      194
Thin plate spline      140
Thinning strategy      163
Thomas, J.A.      222 511
Tibshirani, R.      75 88 113 190 222 254 255 260 261 262 266 295 301 307 343 344 382 385 399 402 404 406 429 431 432 433 472 510 512 514 515 516 517 521
Toivonen, H.      442 443 503 509
Trace of a matrix      130
Training epoch      355
Training error      194—196
Training set      193—196
Tree for regression      267—269
Tree-based methods      266—278
Trees for classification      270—271
Trellis display      176
Truong, Y.      289 521
Tukey, J.      367 500 504 514
Turnbull, B.W.      293 518
Tusnady, G.      255 511
Universal approximator      348
Unsupervised learning      2 437—508
Unsupervised learning as supervised learning      447—448
Valiant, L. G.      521
Validation set      196
van der Merwe, A.      521
Van Loan, C.      296 514
Vapnik, V.      39 80 108 111 147 222 406 521
Vapnik—Chervonenkis (VC) dimension      210—211
Variable types and terminology      9
Variance      16 24 37 134—136 193
Variance, between      92 94
Variance, within      92 94 397
Varying coefficient models      177—178
Vazirani, U.      517
VC dimension      see "Vapnik—Chervonenkis dimension"
Vector quantization      466—467
Verkamo, A. I.      442 443 503 509
Vidakovic, B.      155 522
Voronoi regions      463
Wahba, G.      144 155 222 382 406 514 522
Wald test      103
Walther, G.      472 521
Wavelet basis functions      150—152
Wavelet smoothing      148
Wavelet transform      150—153
Weak learner      341
Weakest link pruning      270
Website for book      8
Weight decay      356
Weight elimination      356
Weights in a neural network      353
Weisberg, Sanford      75 522
Werbos, P.J.      367 522
Wickerhauser, M.V.      155 522
Widrow, B.      355 367 522
Wild, C.J.      262 522
Williams, R.      367 520
Williams, W.T.      518
Wilson, R.      55 514
Within class covariance matrix      92 94 397
Wold, H.      75 522
Wolpert, D.      255 522
Wong, M. A.      462 515
Wong, W.      255 521
Wright, M.H.      75 519
Yang, N.      3 47 521
Yee, T.W.      262 522
Zhang, H.      382 406 522
Zhang, P.      222 523
Zidek, J.      521
© Electronic Library of the Board of Trustees of the Faculty of Mechanics and Mathematics, MSU, 2004-2024