Measure performance of AWS Glue Data Quality for ETL pipelines

AWS Big Data

In recent years, data lakes have become a mainstream architecture, and data quality validation is a critical factor to improve the reusability and consistency of the data. In this post, we provide benchmark results of running increasingly complex data quality rulesets over a predefined test dataset.
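
To make the benchmark's subject concrete, here is a minimal boto3 sketch of defining a DQDL ruleset and starting an evaluation run against a Glue Data Catalog table; the ruleset name, database, table, and IAM role are hypothetical placeholders, not values from the benchmark.

```python
import boto3

# Sketch: define a DQDL ruleset and evaluate it against a Catalog table.
# The ruleset name, database/table, and IAM role below are hypothetical.
glue = boto3.client("glue")

glue.create_data_quality_ruleset(
    Name="orders_ruleset",  # hypothetical ruleset name
    Ruleset=(
        'Rules = ['
        ' IsComplete "order_id",'
        ' ColumnValues "amount" > 0,'
        ' Uniqueness "order_id" > 0.99'
        ' ]'
    ),
    TargetTable={"DatabaseName": "sales_db", "TableName": "orders"},
)

run = glue.start_data_quality_ruleset_evaluation_run(
    DataSource={"GlueTable": {"DatabaseName": "sales_db", "TableName": "orders"}},
    Role="arn:aws:iam::123456789012:role/GlueDQRole",  # hypothetical role
    RulesetNames=["orders_ruleset"],
)
print(run["RunId"])  # track the evaluation run by its ID
```

Rulesets like this one can be grown incrementally (more rules, more columns), which is exactly the dimension the benchmark varies.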

How SumUp made digital analytics more accessible using AWS Glue

AWS Big Data

Founded in 2012, SumUp is the financial partner for more than 4 million small merchants in over 35 markets worldwide, helping them start, run, and grow their businesses. Its digital analytics data originates in Google Cloud, while much of the rest of its data does not, and the Data Science teams also use this data for churn prediction and customer lifetime value (CLTV) modeling.

Convergent Evolution

Peter James Thomas

That was the Science; here comes the Technology… A Brief Hydrology of Data Lakes. Overlapping with the above, from around 2012 I also began designing and implementing Big Data Architectures, initially for narrow purposes and later Data Lakes spanning entire enterprises.

How BMO improved data security with Amazon Redshift and AWS Lake Formation

AWS Big Data

One of the bank’s key challenges, driven by strict cybersecurity requirements, is implementing field-level encryption for personally identifiable information (PII), Payment Card Industry (PCI) data, and data classified as high privacy risk (HPR). Only users with the required permissions are allowed to access the data in clear text.
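
The encryption itself happens elsewhere in BMO's pipeline; as a complementary sketch of the permission side, assuming hypothetical role, database, table, and column names, column-level access can be granted with Lake Formation so that only authorized principals can read sensitive fields:

```python
import boto3

# Sketch: column-level access control with AWS Lake Formation, so that
# only an authorized principal can SELECT sensitive columns.
# The role ARN, database, table, and column names are hypothetical.
lf = boto3.client("lakeformation")

lf.grant_permissions(
    Principal={
        "DataLakePrincipalIdentifier": "arn:aws:iam::123456789012:role/PIIReaders"
    },
    Resource={
        "TableWithColumns": {
            "DatabaseName": "customer_db",
            "Name": "accounts",
            "ColumnNames": ["ssn", "card_number"],  # sensitive PII/PCI columns
        }
    },
    Permissions=["SELECT"],
)
```

Principals without this grant never see the protected columns, which pairs naturally with field-level encryption for defense in depth.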

Build efficient ETL pipelines with AWS Step Functions distributed map and redrive feature

AWS Big Data

Solution overview: A common function of data pipelines is extracting data from multiple data sources and exporting it to a data lake, or synchronizing the data to another database. The other step in the parallel state, which fetches the DynamoDB table, ran successfully.
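
As a minimal sketch of the redrive feature named in the title (the execution ARN is a hypothetical placeholder): after the root cause of a failure is fixed, redriving the execution re-runs only the failed or unstarted steps, which is why the already-successful DynamoDB fetch branch is not repeated.

```python
import boto3

# Sketch: redrive a failed Step Functions execution. Redrive resumes
# from the failed steps; branches that already succeeded (such as the
# parallel step that fetched the DynamoDB table) are not re-executed.
sfn = boto3.client("stepfunctions")

# Hypothetical ARN of the failed execution to redrive.
execution_arn = (
    "arn:aws:states:us-east-1:123456789012:execution:etl-pipeline:run-001"
)

response = sfn.redrive_execution(executionArn=execution_arn)
print(response["redriveDate"])  # timestamp of the redrive attempt
```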
