
This book presents an in-depth exploration of multimodal learning toward recommendation, along with a comprehensive survey of the most important research topics and state-of-the-art methods in this area.
First, it presents a semantic-guided feature distillation method, which employs a teacher-student framework to robustly extract effective recommendation-oriented features from generic multimodal features. Next, it introduces a novel multimodal attentive metric learning method to model users' diverse preferences for various items. Then it proposes a disentangled multimodal representation learning recommendation model, which captures users' fine-grained attention to different modalities on each disentangled factor of user preference. Furthermore, a meta-learning-based multimodal fusion framework is developed to model the various relationships among multimodal information. Building on the success of disentangled representation learning, it further proposes an attribute-driven disentangled representation learning method, which uses attributes to guide the disentanglement process in order to improve the interpretability and controllability of conventional recommendation methods. Finally, the book concludes with future research directions in multimodal learning toward recommendation.
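To give a flavor of the kind of technique covered in the feature distillation chapter, the sketch below shows a generic teacher-student setup in PyTorch: a frozen teacher projects generic multimodal item features into a semantic space, and a student learns compact recommendation-oriented features while being pulled toward the teacher's semantics. The module names, dimensions, dot-product scorer, and loss weighting here are illustrative assumptions, not the book's actual implementation.

```python
# Minimal, illustrative sketch of teacher-student feature distillation for
# recommendation (hypothetical shapes and module names; not the book's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class Teacher(nn.Module):
    """Frozen teacher mapping generic multimodal features (e.g. CNN/CLIP
    outputs) into a semantic space; assumed to be pretrained."""
    def __init__(self, in_dim=512, sem_dim=128):
        super().__init__()
        self.proj = nn.Linear(in_dim, sem_dim)

    def forward(self, x):
        return self.proj(x)

class Student(nn.Module):
    """Student that learns compact, recommendation-oriented item features."""
    def __init__(self, in_dim=512, rec_dim=64, sem_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                     nn.Linear(256, rec_dim))
        self.to_sem = nn.Linear(rec_dim, sem_dim)  # head matched to teacher space

    def forward(self, x):
        rec_feat = self.encoder(x)
        return rec_feat, self.to_sem(rec_feat)

def distillation_step(teacher, student, user_emb, item_raw, labels, alpha=0.5):
    """One training step: recommendation loss plus semantic-guided distillation."""
    with torch.no_grad():
        sem_target = teacher(item_raw)               # semantic guidance (fixed)
    rec_feat, sem_pred = student(item_raw)
    scores = (user_emb * rec_feat).sum(-1)           # simple dot-product scorer
    rec_loss = F.binary_cross_entropy_with_logits(scores, labels)
    distill_loss = F.mse_loss(sem_pred, sem_target)  # align with teacher semantics
    return rec_loss + alpha * distill_loss

# Toy usage with random tensors.
teacher, student = Teacher(), Student()
user_emb = torch.randn(8, 64)
item_raw = torch.randn(8, 512)                       # generic multimodal item features
labels = torch.randint(0, 2, (8,)).float()
loss = distillation_step(teacher, student, user_emb, item_raw, labels)
loss.backward()
print(loss.item())
```

In this sketch the teacher is kept fixed and only the student is trained, so the distillation term acts as a regularizer that keeps the recommendation-oriented features semantically grounded.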
The book is suitable for graduate students and researchers who are interested in multimodal learning and recommender systems. The multimodal learning methods presented are also applicable to other retrieval- or ranking-related research areas, such as image retrieval, moment localization, and visual question answering.
Preface
1) Introduction
2) Semantic-Guided Feature Distillation for Multimodal Recommendation
3) User Diverse Preference Modeling by Multimodal Attentive Metric Learning
4) Disentangled Multimodal Representation Learning for Recommendation
5) Dynamic Multimodal Fusion via Meta-Learning Towards Multimodal Recommendation
6) Attribute-Driven Disentangled Representation Learning for Multimodal Recommendation
7) Research Frontiers
Fan Liu is a Research Fellow with the School of Computing, National University of Singapore (NUS). His research interests lie primarily in multimedia computing and information retrieval. His work has been published in top venues, including ACM SIGIR, MM, WWW, TKDE, TOIS, TMM, and TCSVT. He is an area chair of ACM MM and a senior PC member of CIKM.
Zhenyang Li is a Postdoc with the Hong Kong Generative AI Research and Development Center Limited. His research interests are primarily in recommendation and visual question answering. His work has been published in top venues, including ACM MM, TIP, and TMM.
Liqiang Nie is a Professor and the Dean of the School of Computer Science and Technology, Harbin Institute of Technology (Shenzhen). His research interests are primarily in multimedia computing and information retrieval. He has co-authored more than 200 articles and four books. He is a regular area chair of ACM MM, NeurIPS, IJCAI, and AAAI, and a member of the ICME steering committee. He has received many awards, such as the ACM MM and SIGIR best paper honorable mention in 2019, SIGMM rising star in 2020, TR35 China in 2020, DAMO Academy Young Fellow in 2020, and the SIGIR best student paper in 2021.

