Supervised Machine Learning for Text Analysis in R

by Emil Hvitfeldt and Julia Silge




Availability: Normally available within 20 days.
Due to Brexit-related supply problems, delivery delays are possible.


Price: €58.98
NICEPRICE: €56.03
Discount: 5%



This product qualifies for FREE SHIPPING when you select the Corriere Veloce courier option at checkout.


Also payable with Carta della cultura giovani e del merito, 18App Bonus Cultura, and Carta del Docente.





Details

Genre: Book
Language: English
Published: 10/2021
Edition: 1st edition





Publisher's Note

Text data is important for many domains, from healthcare to marketing to the digital humanities, but specialized approaches are necessary to create features for machine learning from language. Supervised Machine Learning for Text Analysis in R explains how to preprocess text data for modeling, train models, and evaluate model performance using tools from the tidyverse and tidymodels ecosystem. Models like these can be used to make predictions for new observations, to understand what natural language features or characteristics contribute to differences in the output, and more. If you are already familiar with the basics of predictive modeling, use the comprehensive, detailed examples in this book to extend your skills to the domain of natural language processing.

This book provides practical guidance and directly applicable knowledge for data scientists and analysts who want to integrate unstructured text data into their modeling pipelines. Learn how to use text data for both regression and classification tasks, and how to apply more straightforward algorithms like regularized regression or support vector machines as well as deep learning approaches. Natural language must be dramatically transformed to be ready for computation, so we explore typical text preprocessing and feature engineering steps like tokenization and word embeddings from the ground up. These steps influence model results in ways we can measure, both in terms of model metrics and other tangible consequences such as how fair or appropriate model results are.
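The pipeline this description outlines can be sketched in a few lines of R. The following is a minimal, illustrative example of the kind of tidymodels/textrecipes workflow the book develops: tokenize text, weight tokens by tf-idf, and fit a lasso-regularized classifier. The complaints data frame and its text and product columns are hypothetical placeholders, not code taken from the book.

library(tidymodels)
library(textrecipes)

# Preprocessing recipe: tokenize the text column, keep the 1,000 most
# frequent tokens, and weight them by tf-idf.
# `complaints`, `text`, and `product` are hypothetical placeholder names.
text_rec <- recipe(product ~ text, data = complaints) %>%
  step_tokenize(text) %>%
  step_tokenfilter(text, max_tokens = 1000) %>%
  step_tfidf(text)

# A lasso-regularized logistic regression, one of the "more straightforward
# algorithms" mentioned above.
lasso_spec <- logistic_reg(penalty = 0.01, mixture = 1) %>%
  set_engine("glmnet")

# Bundle the preprocessing and the model, then fit on the training data.
text_wf <- workflow() %>%
  add_recipe(text_rec) %>%
  add_model(lasso_spec)

text_fit <- fit(text_wf, data = complaints)

Predictions for new observations then come from predict(text_fit, new_data = ...), as with any fitted tidymodels workflow.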




Contents

I. Natural Language Features
1. Language and modeling: Linguistics for text analysis; A glimpse into one area: morphology; Different languages; Other ways text can vary; Summary
2. Tokenization: What is a token?; Types of tokens (character tokens; word tokens; tokenizing by n-grams; lines, sentence, and paragraph tokens); Where does tokenization break down?; Building your own tokenizer (tokenize to characters, only keeping letters; allow for hyphenated words; wrapping it in a function); Tokenization for non-Latin alphabets; Tokenization benchmark; Summary
3. Stop words: Using premade stop word lists; Stop word removal in R; Creating your own stop words list; All stop word lists are context-specific; What happens when you remove stop words; Stop words in languages other than English; Summary
4. Stemming: How to stem text in R; Should you use stemming at all?; Understand a stemming algorithm; Handling punctuation when stemming; Compare some stemming options; Lemmatization and stemming; Stemming and stop words; Summary
5. Word embeddings: Motivating embeddings for sparse, high-dimensional data; Understand word embeddings by finding them yourself; Exploring CFPB word embeddings; Use pre-trained word embeddings; Fairness and word embeddings; Using word embeddings in the real world; Summary

II. Machine Learning Methods
6. Regression: A first regression model; Building our first regression model; Evaluation; Compare to the null model; Compare to a random forest model; Case study: removing stop words; Case study: varying n-grams; Case study: lemmatization; Case study: feature hashing; Text normalization; What evaluation metrics are appropriate?; The full game: regression (preprocess the data; specify the model; tune the model; evaluate the modeling); Summary
7. Classification: A first classification model; Building our first classification model; Evaluation; Compare to the null model; Compare to a lasso classification model; Tuning lasso hyperparameters; Case study: sparse encoding; Two class or multiclass?; Case study: including non-text data; Case study: data censoring; Case study: custom features (detect credit cards; calculate percentage censoring; detect monetary amounts); What evaluation metrics are appropriate?; The full game: classification (feature selection; specify the model; evaluate the modeling); Summary

III. Deep Learning Methods
8. Dense neural networks: Kickstarter data; A first deep learning model (preprocessing for deep learning; one-hot sequence embedding of text; simple flattened dense network; evaluation); Using bag-of-words features; Using pre-trained word embeddings; Cross-validation for deep learning models; Compare and evaluate DNN models; Limitations of deep learning; Summary
9. Long short-term memory (LSTM) networks: A first LSTM model (building an LSTM; evaluation); Compare to a recurrent neural network; Case study: bidirectional LSTM; Case study: stacking LSTM layers; Case study: padding; Case study: training a regression model; Case study: vocabulary size; The full game: LSTM (preprocess the data; specify the model); Summary
10. Convolutional neural networks: What are CNNs? (kernel; kernel size); A first CNN model; Case study: adding more layers; Case study: byte pair encoding; Case study: explainability with LIME; Case study: hyperparameter search; The full game: CNN (preprocess the data; specify the model); Summary

IV. Conclusion
Text models in the real world

Appendix A. Regular expressions: Literal characters; Meta characters; Full stop, the wildcard; Character classes; Shorthand character classes; Quantifiers; Anchors; Additional resources
Appendix B. Data: Hans Christian Andersen fairy tales; Opinions of the Supreme Court of the United States; Consumer Financial Protection Bureau (CFPB) complaints; Kickstarter campaign blurbs
Appendix C. Baseline linear classifier: Read in the data; Split into test/train and create resampling folds; Recipe for data preprocessing; Lasso regularized classification model; A model workflow; Tune the workflow




Authors

Emil Hvitfeldt is a clinical data analyst working in healthcare and an adjunct professor at American University, where he teaches statistical machine learning with tidymodels. He is also an open source R developer and the author of the textrecipes package. Julia Silge is a data scientist and software engineer at RStudio PBC, where she works on open source modeling tools. She is an author, an international keynote speaker and educator, and a real-world practitioner focusing on data analysis and machine learning.










Other Information

ISBN: 9780367554194
Condition: New
Series: Chapman & Hall/CRC Data Science Series
Dimensions: 9.25 x 6.25 in; 1.23 lb
Format: Paperback
Illustration notes: 8 b/w images, 57 color images, 1 table, 8 line drawings and 57 color line drawings
Pages: 402






