


Saurabh Gupta; Venkata Giri - Practical Enterprise Data Lake Insights

Practical Enterprise Data Lake Insights: Handle Data-Driven Challenges in an Enterprise Big Data Lake





Availability: Normally available within 15 days
Due to Brexit-related supply issues, delivery delays are possible.


PRICE
54,98 €
NICEPRICE
52,23 €
DISCOUNT
5%



This product qualifies for FREE SHIPPING
when the Express Courier option is selected at checkout.


Also payable with Carta della cultura giovani e del merito, 18App Bonus Cultura, and Carta del Docente.





Details

Genre: Book
Language: English
Publisher: Apress
Published: 06/2018
Edition: 1st ed.





Synopsis

Use this practical guide to successfully handle the challenges encountered when designing an enterprise data lake and learn industry best practices to resolve issues.

When designing an enterprise data lake you often hit a roadblock when you must leave the comfort of the relational world and learn the nuances of handling non-relational data. Starting from sourcing data into the Hadoop ecosystem, you will go through stages that can bring up tough questions such as data processing, data querying, and security. Concepts such as change data capture and data streaming are covered. The book takes an end-to-end solution approach in a data lake environment that includes data security, high availability, data processing, data streaming, and more.

Each chapter includes application of a concept, code snippets, and use case demonstrations to provide you with a practical approach. You will learn the concept, scope, application, and starting point.

What You'll Learn
  • Get to know data lake architecture and design principles
  • Implement data capture and streaming strategies
  • Implement data processing strategies in Hadoop
  • Understand the data lake security framework and availability model
Who This Book Is For

Big data architects and solution architects




Table of Contents

Chapter 1:  Data Lake Concepts Overview

Chapter Goal: This chapter highlights key concepts of the data lake and its tech stack. It briefs readers on the background of data management, the need for a data lake, and the latest trends.
Sub-topics:
1. Familiarization with Enterprise Data Lake ecosystem
2. Understand key components of Data Lake
3. Data understanding – Structured vs Unstructured

Chapter 2: Data Replication Strategies

Chapter Goal: This chapter focuses on how to replicate data into Hadoop from source systems. Depending on the nature of the source systems, strategies may change. The chapter starts with conventional approaches to ETL data into Hadoop and then dives into the latest trends in change data capture.
Sub-topics:
1. Conventional ETL strategies
2. Change data capture for relational data
3. Change data capture for time-series data
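The snapshot-diff flavor of change data capture described above can be sketched in a few lines. This is an illustrative toy, not code from the book; the function and data names are hypothetical:

```python
def capture_changes(old_rows, new_rows):
    """Diff two table snapshots (dicts keyed by primary key) into CDC events."""
    events = []
    for pk, row in new_rows.items():
        if pk not in old_rows:
            events.append(("insert", pk, row))
        elif old_rows[pk] != row:
            events.append(("update", pk, row))
    for pk in old_rows:
        if pk not in new_rows:
            events.append(("delete", pk, None))
    return events

# Two consecutive snapshots of the same table: one update, one insert.
old = {1: {"name": "alice"}, 2: {"name": "bob"}}
new = {1: {"name": "alice"}, 2: {"name": "robert"}, 3: {"name": "carol"}}
```

Production CDC tools read the database transaction log instead of diffing snapshots, but the emitted insert/update/delete event stream has the same shape.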

Chapter 3: Bring Data into Hadoop

Chapter Goal: This chapter focuses on how to get data into a Hadoop cluster. It covers several approaches and utilities that can be used to bring data into Hadoop for processing.
Sub-topics:
1. RDBMS to Hadoop
2. MPP database systems to Hadoop
3. Unstructured data into Hadoop
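Parallel RDBMS-to-Hadoop extractors typically split a numeric primary-key range into sub-ranges, one per mapper (Sqoop's split-by idea). A minimal sketch of that splitting logic, with hypothetical names, assuming a contiguous integer key:

```python
def split_ranges(min_id, max_id, num_mappers):
    """Split [min_id, max_id] into contiguous sub-ranges, one per mapper,
    so each mapper can issue its own bounded SELECT against the source table."""
    total = max_id - min_id + 1
    base, extra = divmod(total, num_mappers)
    ranges, start = [], min_id
    for i in range(num_mappers):
        size = base + (1 if i < extra else 0)  # spread the remainder evenly
        if size == 0:
            break
        ranges.append((start, start + size - 1))
        start += size
    return ranges
```

Each `(lo, hi)` pair then becomes a predicate like `WHERE id BETWEEN lo AND hi`, letting the mappers pull data in parallel without overlap.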

Chapter 4: Data Streaming Strategies

Chapter Goal: This chapter dives deep into the data streaming principles of Kafka. It explains how Kafka works and how it resolves the challenge of getting data into the data lake.
Sub-topics:
1. How to stream data with Kafka
2. How to persist the changes
3. How to batch the data
4. How to massage the data
5. Tools and technologies – HVR, Oracle GoldenGate for Big Data
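One Kafka property that matters for data lake ingestion is that records sharing a key land in the same partition, which preserves per-key ordering. A toy in-memory stand-in (not the Kafka client API; all names hypothetical) that mimics this routing:

```python
import hashlib

class MiniTopic:
    """Toy stand-in for a Kafka topic: records with the same key always
    land in the same partition, so per-key ordering is preserved."""

    def __init__(self, num_partitions=3):
        self.partitions = [[] for _ in range(num_partitions)]

    def produce(self, key, value):
        # Stable hash of the key picks the partition deterministically.
        digest = hashlib.md5(key.encode()).digest()
        p = int.from_bytes(digest[:4], "big") % len(self.partitions)
        self.partitions[p].append((key, value))
        return p
```

Real Kafka producers use murmur2 hashing and replicated, durable logs, but the routing contract shown here is the same.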

Chapter 5: Data Processing in Hadoop

Chapter Goal: This chapter provides an insight into various data querying platforms. It all started with MapReduce, but Hive has quickly acquired de facto status in the industry. The chapter dives deep into Hive and its SQL-like semantics and showcases its most recent capabilities. A dedicated section on Spark gives a detailed walk-through of Spark's approach to processing data in Hadoop.
Sub-topics:
1. MapReduce
2. Query engines – intro, Big Data SQL, Big SQL
3. Hive - focus
4. Spark – focus
5. Presto
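The MapReduce model the chapter starts from is easy to show with the canonical word count: a map phase emits `(key, 1)` pairs, a shuffle/sort groups them by key, and a reduce phase aggregates. A pure-Python sketch (illustrative only; no Hadoop involved):

```python
from itertools import groupby

def map_phase(lines):
    """Map: emit a (word, 1) pair for every word in the input."""
    for line in lines:
        for word in line.split():
            yield (word, 1)

def reduce_phase(pairs):
    """Shuffle/sort groups identical keys together, as Hadoop does
    between the map and reduce phases; reduce then sums each group."""
    counts = {}
    for key, group in groupby(sorted(pairs), key=lambda kv: kv[0]):
        counts[key] = sum(v for _, v in group)
    return counts
```

Hive and Spark hide this mechanics behind SQL-like or DataFrame APIs, but the underlying map-shuffle-reduce pattern is what they compile down to on a Hadoop cluster.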

Chapter 6: Data Security and Compliance

Chapter Goal: This chapter talks about the security aspects of a data lake in Hadoop. The fact that organizations have deliberately compromised on security in the past carries weight. The chapter discusses how to build a safety net around the data lake and mitigate the risks of unauthorized access and injection attacks.
Sub-topics:
1. Encryption in-transit and at rest
2. Data masking
3. Kerberos security and LDAP authentication
4. Ranger 
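Data masking, one of the sub-topics above, can be sketched in two common forms: deterministic pseudonymization (so masked values still join across tables) and outright redaction. A minimal illustration with hypothetical column names, not code from the book:

```python
import hashlib

def mask_email(email):
    """Deterministically pseudonymize the local part; keep the domain
    so domain-level analytics still work on the masked data."""
    local, _, domain = email.partition("@")
    token = hashlib.sha256(local.encode()).hexdigest()[:8]
    return f"{token}@{domain}"

def redact(row, sensitive=("ssn", "card_number")):
    """Static masking: blank out sensitive columns before the row
    lands in the data lake."""
    return {k: ("***" if k in sensitive else v) for k, v in row.items()}
```

Because the hash is deterministic, the same input always masks to the same token, which preserves joinability; a salted or keyed hash would be needed to resist dictionary attacks in practice.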

Chapter 7: Ensure Availability of a Data Lake

Chapter Goal: This chapter throws light on yet another key aspect of the data landscape: availability. It discusses disaster recovery strategies, how to set up replication between two data centers, and how to maintain data consistency and integrity.
Sub-topics:
1. Disaster recovery strategies
2. Set up data center replication
3. Active-passive mode
4. Active-active mode
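The active-passive mode listed above boils down to: serve from the active site, and promote the standby when the active fails. A toy simulation of that failover decision (names hypothetical; real deployments also need health checks, fencing, and replication lag handling):

```python
class FailoverCluster:
    """Toy active-passive pair: writes go to the active site; when it is
    unhealthy, the standby is promoted so the lake stays available."""

    def __init__(self, primary, standby):
        self.active, self.standby = primary, standby
        self.healthy = True

    def write(self, data):
        if not self.healthy:
            self.failover()  # promote the standby before serving
        return f"{self.active}:{data}"

    def failover(self):
        # Swap roles; the old active becomes the new standby once repaired.
        self.active, self.standby = self.standby, self.active
        self.healthy = True
```

Active-active mode differs in that both sites accept writes concurrently, which is why it raises the consistency and integrity questions the chapter addresses.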





Authors

Saurabh K. Gupta is a technology leader, published author, and database enthusiast with more than 11 years of industry experience in data architecture, engineering, development, and administration. As Manager, Data & Analytics at GE Transportation, he focuses on data lake analytics programs that build digital solutions for business stakeholders. In the past, he worked extensively with Oracle database design and development, PaaS and IaaS cloud service models, consolidation, and in-memory technologies. He has authored two books on advanced PL/SQL for Oracle versions 11g and 12c. He is a frequent speaker at conferences organized by user communities and technical institutions. He tweets at @saurabhkg and blogs at sbhoracle.wordpress.com.

Venkata Giri currently works with GE Digital and has been involved in building resilient distributed services at massive scale. He has worked on the big data tech stack, relational databases, high availability, and performance tuning. With over 20 years of experience in data technologies, he has in-depth knowledge of big data ecosystems, complex data ingestion pipelines, data engineering, data processing, and operations. Prior to GE, he worked with the data teams at LinkedIn and Yahoo.










Additional Information

ISBN:

9781484235218

Condition: New
Dimensions: 235 x 155 mm, 534 g
Format: Paperback
Illustration notes: XVIII, 327 p., 90 illus.
Arabic-numbered pages: 327
Roman-numbered pages: xviii

