Minimum Divergence Methods in Statistical Machine Learning

By: Osamu Komori, Shinto Eguchi. English, Hardback.
This book explores minimum divergence methods of statistical machine learning for estimation, regression, prediction, and so forth, in which we engage information geometry to elucidate the intrinsic properties of the corresponding loss functions, learning algorithms, and statistical models. One of the most elementary examples is Gauss's least squares estimator in a linear regression model, in which the estimator is given by minimizing the sum of squares between a response vector and a vector in the linear subspace spanned by the explanatory vectors. This is extended to Fisher's maximum likelihood estimator (MLE) for an exponential model, in which the estimator is obtained by minimizing the Kullback-Leibler (KL) divergence between a data distribution and a parametric distribution of the exponential model in an empirical analogue. Thus, we envisage a geometric interpretation of such minimization procedures, in which a right triangle satisfies a Pythagorean identity in the sense of the KL divergence. This understanding leads to a dualistic interplay between statistical estimation and the statistical model, which requires dual geodesic paths, called m-geodesic and e-geodesic paths, in the framework of information geometry. We extend this dualistic structure of the MLE and the exponential model to that of the minimum divergence estimator and the maximum entropy model, which is applied to robust statistics, maximum entropy, density estimation, principal component analysis, independent component analysis, regression analysis, manifold learning, boosting algorithms, clustering, dynamic treatment regimes, and so forth.

We consider a variety of information divergence measures, typically including the KL divergence, to express the departure of one probability distribution from another. An information divergence is decomposed into the cross-entropy and the (diagonal) entropy, in which the entropy is associated with a generative model as a family of maximum-entropy distributions, while the cross-entropy is associated with a statistical estimation method via minimization of its empirical analogue based on given data. Thus, any statistical divergence pairs a generative model with an estimation method. Typically, the KL divergence leads to the exponential model and maximum likelihood estimation. It is shown that any information divergence leads to a Riemannian metric and a pair of linear connections in the framework of information geometry.

We focus on a class of information divergences generated by an increasing and convex function U, called the U-divergence. It is shown that any generator function U generates the U-entropy and the U-divergence, with a dualistic structure between the minimum U-divergence method and the maximum U-entropy model. We observe that a specific choice of U leads to a robust statistical procedure via the minimum U-divergence method. If U is selected as the exponential function, then the corresponding U-entropy and U-divergence reduce to the Boltzmann-Shannon entropy and the KL divergence, and the minimum U-divergence estimator is equivalent to the MLE. For robust supervised learning to predict a class label, we observe that the U-boosting algorithm performs well under contamination by mislabeled examples if U is appropriately selected. We present such maximum U-entropy and minimum U-divergence methods, in particular selecting a power function as U to provide flexible performance in statistical machine learning.
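As a point of reference for the U-divergence described above, here is a minimal sketch of the definition in one commonly used form; the book's own notation may differ. For a convex, increasing generator U with derivative u and inverse xi = u^{-1}, the U-divergence between densities f and g can be written as

D_U(f, g) = \int \{ U(\xi(g(x))) - U(\xi(f(x))) \} \, dx - \int f(x) \{ \xi(g(x)) - \xi(f(x)) \} \, dx .

Choosing U(t) = exp(t), so that xi(s) = log(s), reduces this to the extended KL divergence \int f \log(f/g) \, dx + \int (g - f) \, dx, whose empirical minimization over an exponential model recovers the MLE, as stated in the description.

The robustness of a power-type choice of U can be illustrated with a small Python sketch, assuming the closely related density power divergence with index beta; the names beta and dpd_objective are illustrative and not taken from the book. The sample mean (the minimum KL divergence estimate, i.e. the MLE) is pulled toward gross outliers, while the minimum power-divergence estimate of the mean stays near the true value.

# Hypothetical sketch: robust estimation of a normal mean by minimizing an empirical
# power (density power) divergence, compared against the sample mean (the MLE).
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)

# Contaminated sample: 190 points from N(0, 1) plus 10 gross outliers at 10.
x = np.concatenate([rng.normal(0.0, 1.0, 190), np.full(10, 10.0)])

sigma = 1.0   # scale treated as known, to keep the sketch one-dimensional
beta = 0.5    # power index; beta -> 0 recovers the (non-robust) MLE

def dpd_objective(mu):
    # Empirical density-power-divergence objective for N(mu, sigma^2), up to a constant.
    f = np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))
    # Closed form of the integral of f_theta^(1 + beta) for the normal model.
    int_term = (2.0 * np.pi * sigma ** 2) ** (-beta / 2.0) / np.sqrt(1.0 + beta)
    return int_term - (1.0 + 1.0 / beta) * np.mean(f ** beta)

mle = x.mean()  # minimum KL divergence estimate, pulled toward the outliers
robust = minimize_scalar(dpd_objective, bounds=(-5.0, 5.0), method="bounded").x

print(f"sample mean (MLE): {mle:.3f}")
print(f"minimum power-divergence estimate: {robust:.3f}")  # close to the true mean 0

As beta tends to 0 the objective reduces to the negative log-likelihood, so the sketch interpolates between the MLE and increasingly robust estimators as beta grows.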
Exclusive member price: DKK 919
Member price: DKK 951
This price is for members only. You automatically become a member when you buy at this price. Try 7 days of free membership (afterwards automatically DKK 89 per 30 days). Read more about the benefits.
Free shipping: 23-25 business days, DKK 10 low parcel fee
Regular price: DKK 1,239
Shipping: DKK 59, 23-25 business days, DKK 20 parcel fee
Save: DKK 320
Product details
Language: English
Pages: 221
ISBN-13: 9784431569206
Binding: Hardback
Edition:
ISBN-10: 4431569200
Publication date: 16 Mar 2022
Length: 0 mm
Width: 235 mm
Height: 155 mm
Publisher: Springer Verlag, Japan
Print date: 16 Mar 2022
Author(s): Osamu Komori, Shinto Eguchi
Category: Mathematics for computer science