
Measure performance of AWS Glue Data Quality for ETL pipelines

AWS Big Data

In recent years, data lakes have become a mainstream architecture, and data quality validation has become a critical factor in improving the reusability and consistency of the data. In this post, we provide benchmark results from running increasingly complex data quality rulesets over a predefined test dataset.
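The rulesets in question are written in DQDL (Data Quality Definition Language). As a minimal sketch of how such a ruleset can be defined and evaluated with boto3 (the database, table, and IAM role names below are hypothetical; the benchmark varies the number and complexity of rules like these):

import boto3

glue = boto3.client("glue")

# A small DQDL ruleset; all object names here are hypothetical.
ruleset = """
Rules = [
    IsComplete "order_id",
    IsUnique "order_id",
    ColumnValues "amount" >= 0,
    RowCount > 1000
]
"""

glue.create_data_quality_ruleset(
    Name="orders-dq-ruleset",
    Ruleset=ruleset,
    TargetTable={"DatabaseName": "sales_db", "TableName": "orders"},
)

# Kick off an evaluation run against the Glue Data Catalog table.
run = glue.start_data_quality_ruleset_evaluation_run(
    DataSource={"GlueTable": {"DatabaseName": "sales_db", "TableName": "orders"}},
    Role="arn:aws:iam::123456789012:role/GlueDataQualityRole",
    RulesetNames=["orders-dq-ruleset"],
)
print(run["RunId"])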


How BMO improved data security with Amazon Redshift and AWS Lake Formation

AWS Big Data

One of the bank’s key challenges related to strict cybersecurity requirements is implementing field-level encryption for personally identifiable information (PII), Payment Card Industry (PCI) data, and data classified as high privacy risk (HPR). Only users with the required permissions are allowed to access data in clear text.
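The excerpt does not show code; as a hedged sketch of one related control, the snippet below uses Amazon Redshift dynamic data masking applied through the Redshift Data API, which is not necessarily BMO's field-level encryption approach, and all object, role, and workgroup names are hypothetical:

import boto3

rsd = boto3.client("redshift-data")

# Mask the column for everyone by default, then attach a higher-priority
# pass-through policy for an authorized role. All names are hypothetical.
statements = [
    "CREATE MASKING POLICY mask_ssn WITH (ssn varchar(11)) "
    "USING ('***-**-****'::varchar(11));",
    "CREATE MASKING POLICY show_ssn WITH (ssn varchar(11)) USING (ssn);",
    "ATTACH MASKING POLICY mask_ssn ON customers(ssn) TO PUBLIC;",
    "ATTACH MASKING POLICY show_ssn ON customers(ssn) "
    "TO ROLE pii_readers PRIORITY 10;",
]

for sql in statements:
    rsd.execute_statement(
        WorkgroupName="analytics-wg",  # hypothetical Redshift Serverless workgroup
        Database="dev",
        Sql=sql,
    )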


Trending Sources


Accelerate analytics on Amazon OpenSearch Service with AWS Glue through its native connector

AWS Big Data

As the volume and complexity of analytics workloads continue to grow, customers are looking for more efficient and cost-effective ways to ingest and analyze data. AWS Glue provides both visual and code-based interfaces to make data integration effortless. To get started, attach the AWS managed policy AWSGlueServiceRole to the job's IAM role.
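As a minimal sketch of reading an index through the native connector in a Glue PySpark job (the connection name and index are hypothetical, and the option names follow the connector's opensearch.* convention):

import sys
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glueContext = GlueContext(SparkContext.getOrCreate())
job = Job(glueContext)
job.init(args["JOB_NAME"], args)

# Read an OpenSearch index into a DynamicFrame via the native connector.
# "opensearch-connection" is a preconfigured Glue connection (hypothetical).
dyf = glueContext.create_dynamic_frame.from_options(
    connection_type="opensearch",
    connection_options={
        "connectionName": "opensearch-connection",
        "opensearch.resource": "orders",  # index to read
    },
)
print(f"Documents read: {dyf.count()}")
job.commit()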


How SumUp made digital analytics more accessible using AWS Glue

AWS Big Data

Founded in 2012, SumUp is the financial partner for more than 4 million small merchants in over 35 markets worldwide, helping them start, run, and grow their businesses. The data science teams also use this digital analytics data for churn prediction and customer lifetime value (CLTV) modeling.


Handle UPSERT data operations using open-source Delta Lake and AWS Glue

AWS Big Data

Many customers need a data lake that supports ACID (atomic, consistent, isolated, durable) transactions and can log change data capture (CDC) records from operational data sources. There is also demand for merging real-time data into batch data. The Delta Lake framework provides both capabilities.
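As a minimal sketch of the UPSERT (MERGE) capability using the open-source Delta Lake API from a Spark session (the S3 paths and join key below are hypothetical):

from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

# Hypothetical locations: the Delta target table and a batch of CDC records.
target = DeltaTable.forPath(spark, "s3://example-bucket/delta/orders/")
updates = spark.read.parquet("s3://example-bucket/cdc/orders/")

# UPSERT: update rows that match on the key, insert rows that do not.
(
    target.alias("t")
    .merge(updates.alias("s"), "t.order_id = s.order_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)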


Build efficient ETL pipelines with AWS Step Functions distributed map and redrive feature

AWS Big Data

One of the common functions of a data pipeline is extracting data from multiple data sources and exporting it to a data lake or synchronizing it to another database. By default, when one iteration of a Map state fails, all other iterations are aborted.
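Distributed Map can instead tolerate partial failure (for example, via ToleratedFailurePercentage in the Map state definition), and the redrive feature restarts a failed execution from its failed states, rerunning only the unsuccessful iterations. A minimal sketch with boto3, using a hypothetical state machine ARN:

import boto3

sfn = boto3.client("stepfunctions")

# Find failed executions of a (hypothetical) ETL state machine and redrive
# them; redrive resumes from failed states rather than restarting from scratch.
failed = sfn.list_executions(
    stateMachineArn="arn:aws:states:us-east-1:123456789012:stateMachine:etl-pipeline",
    statusFilter="FAILED",
)
for execution in failed["executions"]:
    sfn.redrive_execution(executionArn=execution["executionArn"])
    print(f"Redriving {execution['name']}")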


Convergent Evolution

Peter James Thomas

That was the Science, here comes the Technology… A Brief Hydrology of Data Lakes. Overlapping with the above, from around 2012 I also began to get involved in designing and implementing Big Data Architectures; initially for narrow purposes, and later Data Lakes spanning entire enterprises.