
Orca Security’s journey to a petabyte-scale data lake with Apache Iceberg and AWS Analytics

AWS Big Data

With data becoming the driving force behind many industries today, a modern data architecture is pivotal to an organization's success. In this post, we describe Orca's journey building a transactional data lake using Amazon Simple Storage Service (Amazon S3), Apache Iceberg, and AWS Analytics.
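The building blocks named in that journey, S3 for storage, Iceberg as the transactional table format, and AWS analytics engines on top, can be illustrated with a short sketch. The following PySpark example is a minimal, hypothetical setup (the catalog, namespace, bucket, and table names are assumptions, not Orca's actual implementation) that registers an Iceberg catalog backed by AWS Glue and creates a transactional table on S3; it assumes the Iceberg Spark runtime and AWS bundle jars are on the classpath.

```python
from pyspark.sql import SparkSession

# Hypothetical catalog name "lake" and S3 warehouse path; adjust to your environment.
spark = (
    SparkSession.builder
    .appName("iceberg-on-s3-sketch")
    .config("spark.sql.extensions",
            "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
    .config("spark.sql.catalog.lake", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.lake.catalog-impl", "org.apache.iceberg.aws.glue.GlueCatalog")
    .config("spark.sql.catalog.lake.warehouse", "s3://example-data-lake/warehouse/")
    .config("spark.sql.catalog.lake.io-impl", "org.apache.iceberg.aws.s3.S3FileIO")
    .getOrCreate()
)

# Create a namespace and a transactional Iceberg table partitioned by day.
spark.sql("CREATE NAMESPACE IF NOT EXISTS lake.security")
spark.sql("""
    CREATE TABLE IF NOT EXISTS lake.security.findings (
        asset_id    STRING,
        severity    STRING,
        detected_at TIMESTAMP
    )
    USING iceberg
    PARTITIONED BY (days(detected_at))
""")

# Writes go through Iceberg's snapshot mechanism, giving ACID semantics on S3.
spark.sql("""
    INSERT INTO lake.security.findings
    VALUES ('i-0abc123', 'high', current_timestamp())
""")
```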


How Gupshup built their multi-tenant messaging analytics platform on Amazon Redshift

AWS Big Data

Amazon Redshift makes it fast, simple, and cost-effective to analyze all your data using standard SQL and your existing business intelligence (BI) tools. To meet these needs, Gupshup uses a change data capture (CDC) mechanism to extract data from their source systems and persist it in Amazon S3, with Redshift materialized views at the core of the overall architecture.
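As a rough illustration of that pattern, the sketch below uses the Amazon Redshift Data API through boto3 to define and refresh a materialized view over a staging table fed from the CDC files in S3. The cluster, database, user, schema, and table names are hypothetical, not Gupshup's actual implementation.

```python
import time
import boto3

redshift_data = boto3.client("redshift-data", region_name="us-east-1")

def run_sql(sql: str) -> None:
    """Submit a statement through the Redshift Data API and wait for it to finish."""
    statement = redshift_data.execute_statement(
        ClusterIdentifier="analytics-cluster",  # assumption
        Database="analytics",                   # assumption
        DbUser="etl_user",                      # assumption
        Sql=sql,
    )
    while True:
        status = redshift_data.describe_statement(Id=statement["Id"])["Status"]
        if status in ("FINISHED", "FAILED", "ABORTED"):
            print(sql.split()[0], "->", status)
            break
        time.sleep(2)

# Materialized view over a hypothetical staging table loaded from the CDC files in S3.
run_sql("""
CREATE MATERIALIZED VIEW tenant_message_metrics AS
SELECT tenant_id,
       channel,
       DATE_TRUNC('hour', event_time) AS event_hour,
       COUNT(*)                       AS messages_sent
FROM cdc_staging.message_events
GROUP BY tenant_id, channel, DATE_TRUNC('hour', event_time)
""")

# Re-run after each CDC load so dashboards read precomputed aggregates.
run_sql("REFRESH MATERIALIZED VIEW tenant_message_metrics")
```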



Build a multi-Region and highly resilient modern data architecture using AWS Glue and AWS Lake Formation

AWS Big Data

The utility for cloning and experimentation is available in an open source GitHub repository. The solution replicates only the metadata in the Data Catalog, not the underlying data itself, so the data lake remains functional in another Region if Lake Formation has an availability issue in the primary one.
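That metadata-only replication can be approximated with a short boto3 sketch (not the open source utility itself): copy the Glue Data Catalog table definitions from the primary Region to a secondary one. The Region and database names below are assumptions.

```python
import boto3

SOURCE_REGION = "us-east-1"   # assumption: primary Region
TARGET_REGION = "us-west-2"   # assumption: secondary Region
DATABASE_NAME = "sales_lake"  # assumption: Data Catalog database to clone

# Only the fields accepted by create_table/update_table TableInput are copied.
TABLE_INPUT_FIELDS = {
    "Name", "Description", "Owner", "Retention", "StorageDescriptor",
    "PartitionKeys", "ViewOriginalText", "ViewExpandedText", "TableType",
    "Parameters", "TargetTable",
}

source = boto3.client("glue", region_name=SOURCE_REGION)
target = boto3.client("glue", region_name=TARGET_REGION)

# Ensure the database exists in the secondary Region.
try:
    target.create_database(DatabaseInput={"Name": DATABASE_NAME})
except target.exceptions.AlreadyExistsException:
    pass

# Recreate each table definition; the underlying S3 data is not copied.
paginator = source.get_paginator("get_tables")
for page in paginator.paginate(DatabaseName=DATABASE_NAME):
    for table in page["TableList"]:
        table_input = {k: v for k, v in table.items() if k in TABLE_INPUT_FIELDS}
        try:
            target.create_table(DatabaseName=DATABASE_NAME, TableInput=table_input)
        except target.exceptions.AlreadyExistsException:
            target.update_table(DatabaseName=DATABASE_NAME, TableInput=table_input)
```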


Interview with Dominic Sartorio, Senior Vice President for Products & Development, Protegrity

Corinium

For example, P&C insurers strive to understand their customers and households better through data in order to provide better customer service, anticipate insurance needs, and accurately measure risk. Life insurers need accurate data on consumer health, age, and other risk metrics.


Unleashing the power of Presto: The Uber case study

IBM Big Data Hub

Uber understood that digital superiority required the capture of all their transactional data, not just a sampling. They stood up a file-based data lake alongside their analytical database. Because much of the work done on their data lake is exploratory in nature, many users want to execute untested queries on petabytes of data.
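For illustration, an exploratory query of that kind might be submitted to a Presto coordinator as in the sketch below, using the presto-python-client package; the host, catalog, schema, and table names are hypothetical, not Uber's.

```python
import prestodb

# Hypothetical coordinator endpoint; the Hive connector exposes data-lake files as tables.
conn = prestodb.dbapi.connect(
    host="presto-coordinator.internal",
    port=8080,
    user="analyst",
    catalog="hive",
    schema="rides",
)
cur = conn.cursor()

# An ad-hoc, exploratory aggregation run directly against the data lake,
# without loading the data into the analytical database first.
cur.execute("""
    SELECT city_id, COUNT(*) AS trip_count
    FROM trips_raw
    WHERE trip_date >= DATE '2023-01-01'
    GROUP BY city_id
    ORDER BY trip_count DESC
    LIMIT 10
""")
for row in cur.fetchall():
    print(row)
```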
