


Masashi Sugiyama - Statistical Reinforcement Learning

Statistical Reinforcement Learning Modern Machine Learning Approaches




Availability: Normally available within 20 days
Due to Brexit-related supply problems, delivery delays are possible.


Price: 103,98 €
NICEPRICE: 98,78 €
Discount: 5%



This product qualifies for FREE SHIPPING when the Corriere Veloce (express courier) option is selected at checkout.


Also payable with the Carta della cultura giovani e del merito, 18App Bonus Cultura, and Carta del Docente.





Details

Genre: Book
Language: English
Published: 06/2015
Edition: 1st edition





Publisher's Note

Reinforcement learning is a mathematical framework for developing computer agents that can learn optimal behavior by relating generic reward signals to their past actions. With numerous successful applications in business intelligence, plant control, and gaming, the RL framework is ideal for decision making in unknown environments with large amounts of data.

Supplying an up-to-date and accessible introduction to the field, Statistical Reinforcement Learning: Modern Machine Learning Approaches presents fundamental concepts and practical algorithms of statistical reinforcement learning from the modern machine learning viewpoint. It covers various types of RL approaches, including model-based and model-free approaches, policy iteration, and policy search methods. The book:

- Covers the range of reinforcement learning algorithms from a modern perspective
- Lays out the associated optimization problems for each reinforcement learning scenario covered
- Provides a thought-provoking statistical treatment of reinforcement learning algorithms

The book covers approaches recently introduced in the data mining and machine learning fields to provide a systematic bridge between RL and data mining/machine learning researchers. It presents state-of-the-art results, including dimensionality reduction in RL and risk-sensitive RL. Numerous illustrative examples are included to help readers understand the intuition and usefulness of reinforcement learning techniques.

This book is an ideal resource for graduate-level students in computer science and applied statistics programs, as well as researchers and engineers in related fields.
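For readers new to the topic, the policy iteration the blurb mentions alternates policy evaluation with greedy policy improvement. The following minimal Python sketch illustrates the tabular case; the toy two-state MDP, its transition tensor P, reward matrix R, and all variable names are invented for illustration and are not taken from the book.

import numpy as np

n_states, n_actions, gamma = 2, 2, 0.9

# P[s, a, s'] : transition probabilities of a made-up toy MDP
P = np.array([[[0.8, 0.2], [0.1, 0.9]],
              [[0.5, 0.5], [0.3, 0.7]]])
# R[s, a] : expected immediate reward (toy values)
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])

policy = np.zeros(n_states, dtype=int)  # start with action 0 everywhere
while True:
    # Policy evaluation: solve (I - gamma * P_pi) V = R_pi for the state values V
    P_pi = P[np.arange(n_states), policy]   # transitions under the current policy
    R_pi = R[np.arange(n_states), policy]   # rewards under the current policy
    V = np.linalg.solve(np.eye(n_states) - gamma * P_pi, R_pi)
    # Policy improvement: act greedily with respect to the state-action values Q
    Q = R + gamma * P @ V                   # Q[s, a] = R[s, a] + gamma * sum_s' P[s, a, s'] V[s']
    new_policy = Q.argmax(axis=1)
    if np.array_equal(new_policy, policy):  # no change: the policy is optimal
        break
    policy = new_policy

print("optimal policy:", policy, "state values:", V)

Since a finite MDP admits only finitely many deterministic policies and each improvement step is monotone, this loop terminates at an optimal policy.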




Contents

Introduction to Reinforcement Learning
  Reinforcement Learning
  Mathematical Formulation
  Structure of the Book: Model-Free Policy Iteration; Model-Free Policy Search; Model-Based Reinforcement Learning

MODEL-FREE POLICY ITERATION
Policy Iteration with Value Function Approximation
  Value Functions: State Value Functions; State-Action Value Functions
  Least-Squares Policy Iteration: Immediate-Reward Regression; Algorithm; Regularization; Model Selection
  Remarks
Basis Design for Value Function Approximation
  Gaussian Kernels on Graphs: MDP-Induced Graph; Ordinary Gaussian Kernels; Geodesic Gaussian Kernels
  Extension to Continuous State Spaces
  Illustration: Setup; Geodesic Gaussian Kernels; Ordinary Gaussian Kernels; Graph-Laplacian Eigenbases; Diffusion Wavelets
  Numerical Examples: Robot-Arm Control; Robot-Agent Navigation
  Remarks
Sample Reuse in Policy Iteration
  Formulation
  Off-Policy Value Function Approximation: Episodic Importance Weighting; Per-Decision Importance Weighting; Adaptive Per-Decision Importance Weighting; Illustration
  Automatic Selection of Flattening Parameter: Importance-Weighted Cross-Validation; Illustration
  Sample-Reuse Policy Iteration: Algorithm; Illustration
  Numerical Examples: Inverted Pendulum; Mountain Car
  Remarks
Active Learning in Policy Iteration
  Efficient Exploration with Active Learning: Problem Setup; Decomposition of Generalization Error; Estimation of Generalization Error; Designing Sampling Policies; Illustration
  Active Policy Iteration: Sample-Reuse Policy Iteration with Active Learning; Illustration
  Numerical Examples
  Remarks
Robust Policy Iteration
  Robustness and Reliability in Policy Iteration: Robustness; Reliability
  Least Absolute Policy Iteration: Algorithm; Illustration; Properties
  Numerical Examples
  Possible Extensions: Huber Loss; Pinball Loss; Deadzone-Linear Loss; Chebyshev Approximation; Conditional Value-At-Risk
  Remarks

MODEL-FREE POLICY SEARCH
Direct Policy Search by Gradient Ascent
  Formulation
  Gradient Approach: Gradient Ascent; Baseline Subtraction for Variance Reduction; Variance Analysis of Gradient Estimators
  Natural Gradient Approach: Natural Gradient Ascent; Illustration
  Application in Computer Graphics (Artist Agent): Sumie Painting; Design of States, Actions, and Immediate Rewards; Experimental Results
  Remarks
Direct Policy Search by Expectation-Maximization
  Expectation-Maximization Approach
  Sample Reuse: Episodic Importance Weighting; Per-Decision Importance Weighting; Adaptive Per-Decision Importance Weighting; Automatic Selection of Flattening Parameter; Reward-Weighted Regression with Sample Reuse
  Numerical Examples
  Remarks
Policy-Prior Search
  Formulation
  Policy Gradients with Parameter-Based Exploration: Policy-Prior Gradient Ascent; Baseline Subtraction for Variance Reduction; Variance Analysis of Gradient Estimators; Numerical Examples
  Sample Reuse in Policy-Prior Search: Importance Weighting; Variance Reduction by Baseline Subtraction; Numerical Examples
  Remarks

MODEL-BASED REINFORCEMENT LEARNING
Transition Model Estimation
  Conditional Density Estimation: Regression-Based Approach; ε-Neighbor Kernel Density Estimation; Least-Squares Conditional Density Estimation
  Model-Based Reinforcement Learning
  Numerical Examples: Continuous Chain Walk; Humanoid Robot Control
  Remarks
Dimensionality Reduction for Transition Model Estimation
  Sufficient Dimensionality Reduction
  Squared-Loss Conditional Entropy: Conditional Independence; Dimensionality Reduction with SCE; Relation to Squared-Loss Mutual Information
  Numerical Examples: Artificial and Benchmark Datasets; Humanoid Robot
  Remarks

References
Index




Author

Masashi Sugiyama received his bachelor's, master's, and doctor of engineering degrees in computer science from the Tokyo Institute of Technology, Japan. In 2001 he was appointed assistant professor at the Tokyo Institute of Technology, and he was promoted to associate professor in 2003. He moved to the University of Tokyo as professor in 2014.

He received an Alexander von Humboldt Foundation Research Fellowship and conducted research at the Fraunhofer Institute in Berlin, Germany, from 2003 to 2004. In 2006, he received a European Commission Erasmus Mundus scholarship and conducted research at the University of Edinburgh, Scotland. He received the Faculty Award from IBM in 2007 for his contribution to machine learning under non-stationarity, the Nagao Special Researcher Award from the Information Processing Society of Japan in 2011, and the Young Scientists' Prize from the Commendation for Science and Technology by the Minister of Education, Culture, Sports, Science and Technology for his contribution to the density-ratio paradigm of machine learning.

His research interests include theories and algorithms of machine learning and data mining, as well as a wide range of applications such as signal processing, image processing, and robot control. He published Density Ratio Estimation in Machine Learning (Cambridge University Press, 2012) and Machine Learning in Non-Stationary Environments: Introduction to Covariate Shift Adaptation (MIT Press, 2012).










Additional Information

ISBN: 9781439856895
Condition: New
Series: Chapman & Hall/CRC Machine Learning & Pattern Recognition
Dimensions: 9.25 x 6.25 in; weight: 0.95 lb
Format: Hardcover
Illustration notes: 114 b/w images and 3 tables
Pages (Arabic-numbered): 206

