10 Apr 2019 | 15:50 - 16:10 | Big-Data.AI Summit

Robust and Scalable ETL on Big Data with Apache Spark

John McCarthy Stage

ETL pipelines are a critical component of the data infrastructure of modern enterprises. As Big Data continues to grow, organisations need to process and integrate much higher volumes of data, arriving from more sources and at much greater speed than ever before, and traditional data warehouses and their ETL/DI processes are struggling to keep pace. Building ETL data pipelines for big data processing with Apache Spark has become a viable choice for many organisations, as it not only helps them dramatically reduce costs but also facilitates agile and iterative data discovery across legacy systems and big data sources. In this session, we present the feature-rich and flexible ADASTRA Framework for Big Data Integration, based on Apache Spark, which enables you to build robust, scalable and reliable data pipelines for your Data Lakes and Big Data environments. We will also discuss the benefits of a framework-based approach, drawing on experience from successful customer projects.
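To make the kind of pipeline discussed above concrete, the following is a minimal sketch of a Spark ETL job in Scala: it extracts raw CSV records, applies a simple cleaning transformation, and loads the result as partitioned Parquet into a data lake. This is an illustrative assumption, not the ADASTRA Framework's actual API; the paths and column names (rawPath, lakePath, amount, event_date) are hypothetical placeholders.

    // Minimal Spark ETL sketch: extract CSV, transform, load as Parquet.
    // NOTE: this is an illustration only; it is not the ADASTRA Framework.
    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions._

    object MinimalEtl {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("minimal-etl-sketch")
          .getOrCreate()

        val rawPath  = "hdfs:///data/raw/transactions"   // hypothetical source
        val lakePath = "hdfs:///data/lake/transactions"  // hypothetical sink

        // Extract: read raw CSV with a header row, inferring a schema.
        val raw = spark.read
          .option("header", "true")
          .option("inferSchema", "true")
          .csv(rawPath)

        // Transform: drop malformed rows and normalise a numeric column.
        val cleaned = raw
          .filter(col("amount").isNotNull)
          .withColumn("amount", col("amount").cast("double"))

        // Load: write partitioned Parquet so downstream jobs scale with the data.
        cleaned.write
          .mode("overwrite")
          .partitionBy("event_date")
          .parquet(lakePath)

        spark.stop()
      }
    }

A job like this would be packaged as a JAR and submitted to any Spark cluster with spark-submit; a framework-based approach, as presented in the session, factors the extract, transform, and load steps into reusable, configurable components instead of hard-coding them per pipeline.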
