Big Data Content on InfoQ
-
How Data Mesh Platforms Connect Data Producers and Consumers
A challenge companies often face when exploiting their data in data warehouses or data lakes is that ownership of analytical data is weak or non-existent, and quality can suffer as a result. A data mesh is an organizational paradigm shift in how companies create value from data, one that puts responsibility back into the hands of data producers and consumers.
-
Uber Migrates 1 Trillion Records from DynamoDB to LedgerStore to Save $6 Million Annually
Uber migrated all its payment transaction data from DynamoDB and blob storage into a new long-term solution, a purpose-built data store named LedgerStore. The company was looking for cost savings and had previously limited DynamoDB to storing only hot data (the most recent 12 weeks). The move resulted in significant savings and simplified the storage architecture.
-
QCon London: Lessons Learned from Building LinkedIn’s AI/ML Data Platform
At the QCon London 2024 conference, Félix GV from LinkedIn discussed the AI/ML platform powering the company’s products. He specifically delved into Venice DB, the NoSQL data store used for feature persistence. The presenter shared the lessons learned from evolving and operating the platform, including cluster management and library versioning.
-
Spotify's Approach to Leveraging Recursive Embedding and Clustering to Enhance Data Explainability
One of the main challenges for any online business is getting actionable insights from its data for decision-making. Spotify shares its methodology and experience solving this problem by clustering diverse data sets through a unique method involving dimensionality reduction, recursion, and supervised machine learning.
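As a rough illustration of the general reduce-then-cluster recursion, the sketch below reduces dimensionality, clusters, and then recurses into each resulting group. PCA and KMeans here are generic stand-ins rather than Spotify's actual methods, and the depth, size, and cluster-count knobs are hypothetical.

```python
# Minimal sketch of recursive "reduce then cluster"; PCA and KMeans are
# generic stand-ins, and max_depth/min_size/k are hypothetical knobs.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def recursive_cluster(X, depth=0, max_depth=2, min_size=50, k=4):
    """Reduce dimensionality, cluster, then recurse into each cluster."""
    if depth == max_depth or len(X) < min_size:
        return [np.arange(len(X))]  # leaf: keep this group whole
    reduced = PCA(n_components=min(10, X.shape[1])).fit_transform(X)
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(reduced)
    leaves = []
    for c in range(k):
        idx = np.where(labels == c)[0]
        if len(idx) == 0:
            continue
        for sub in recursive_cluster(X[idx], depth + 1, max_depth, min_size, k):
            leaves.append(idx[sub])  # map child indices back to this level
    return leaves

# Example: 1,000 points in a 32-dimensional feature space.
clusters = recursive_cluster(np.random.rand(1000, 32))
print(f"{len(clusters)} leaf clusters")
```

Recursing per cluster lets each subgroup get its own projection, which is what makes the fine-grained structure easier to explain than a single global clustering.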
-
Netflix Creates Incremental Processing Solution Using Maestro and Apache Iceberg
Netflix created a new solution for incremental processing in its data platform. The incremental approach significantly reduces compute cost and execution time because it avoids reprocessing complete datasets. The company used its Maestro workflow engine and Apache Iceberg to improve data freshness and accuracy, and plans to provide managed backfill capabilities.
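As a rough sketch of the Iceberg capability such a solution builds on, an incremental read that only processes rows committed between two table snapshots might look like the following; the catalog, table name, and snapshot IDs are placeholders, and a Spark session with the Iceberg runtime configured is assumed.

```python
# Sketch: incrementally read only the rows added between two Iceberg
# snapshots, instead of reprocessing the full table. Table name and
# snapshot IDs are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("incremental-demo").getOrCreate()

incremental = (
    spark.read.format("iceberg")
    .option("start-snapshot-id", "1234567890")  # exclusive lower bound
    .option("end-snapshot-id", "9876543210")    # inclusive upper bound
    .load("demo_catalog.db.events")
)

# Downstream logic only touches the newly committed slice of data.
incremental.groupBy("event_type").count().show()
```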
-
QCon San Francisco 2023 Day 1: Architectures, Data Engineering, Infra Languages, Staff+ Skills
The 17th annual QCon San Francisco conference was held at the Hyatt Regency in San Francisco, California. This five-day event, organized by C4Media, consisted of three days of presentations and two days of workshops. Day one, held on October 2nd, 2023, included a keynote address by Suhail Patel and presentations from four conference tracks and two sponsored tracks.
-
Grammarly Replaces Its In-House Data Lake with Databricks Platform Using Medallion Architecture
Grammarly adopted the medallion architecture while migrating from its in-house data lake, which stored Parquet files in AWS S3, to the Delta Lake lakehouse. The company created a new event store for over 6,000 event types from 40 internal and external clients and, in the process, improved data quality and reduced data-delivery time by 94%.
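A minimal sketch of the medallion pattern itself (bronze, silver, and gold layers) is shown below. The paths, columns, and cleansing rules are illustrative, not Grammarly's actual pipeline, and a Spark session configured with the Delta Lake package is assumed.

```python
# Illustrative medallion flow (bronze -> silver -> gold) on Delta Lake;
# paths, schemas, and rules are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion-demo").getOrCreate()

# Bronze: raw events landed as-is.
bronze = spark.read.format("delta").load("s3://lake/bronze/events")

# Silver: validated and de-duplicated records.
silver = (
    bronze.filter(F.col("event_type").isNotNull())
    .dropDuplicates(["event_id"])
)
silver.write.format("delta").mode("overwrite").save("s3://lake/silver/events")

# Gold: business-level aggregates ready for analytics.
gold = silver.groupBy("event_type").agg(F.count("*").alias("event_count"))
gold.write.format("delta").mode("overwrite").save("s3://lake/gold/event_counts")
```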
-
How LinkedIn Serves over 4.8 Million Member Profiles per Second
LinkedIn introduced Couchbase as a centralized caching tier to scale member profile reads as traffic outgrew its existing database cluster. The new solution achieved a hit rate of over 99%, reduced tail latencies by more than 60%, and cut costs by 10% annually.
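The read path behind such a caching tier typically follows the cache-aside pattern. The sketch below illustrates that pattern with a toy in-process cache; it is not LinkedIn's or Couchbase's actual client code, and all names are hypothetical.

```python
# Sketch of a cache-aside read path; CacheClient and the db parameter are
# hypothetical stand-ins for a remote cache and a profile database.
import time
from typing import Optional

class CacheClient:
    """Toy in-process cache with TTL, standing in for a remote cache."""
    def __init__(self):
        self._store = {}  # key -> (value, expires_at)

    def get(self, key: str) -> Optional[dict]:
        entry = self._store.get(key)
        if entry and entry[1] > time.time():
            return entry[0]
        return None

    def set(self, key: str, value: dict, ttl_s: int = 300) -> None:
        self._store[key] = (value, time.time() + ttl_s)

def get_profile(member_id: str, cache: CacheClient, db) -> dict:
    profile = cache.get(member_id)      # 1. try the cache first
    if profile is None:                 # 2. on a miss, fall back to the DB
        profile = db.read_profile(member_id)
        cache.set(member_id, profile)   # 3. populate for future reads
    return profile
```

With a hit rate over 99%, almost all reads resolve at step 1, which is what relieves the underlying database cluster.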
-
Discord Migrates Trillions of Messages from Cassandra to ScyllaDB
Discord has migrated trillions of message records from Apache Cassandra to ScyllaDB, shrinking its largest cluster from 177 Cassandra nodes to 72 ScyllaDB nodes and lowering tail latencies for reads and writes. The move has unlocked new product use cases thanks to the improved database stability and performance.
-
Adopting Artificial Intelligence: Things Leaders Need to Know
Artificial intelligence (AI) can help companies identify new opportunities and products, and stay ahead of the competition. Senior software managers should understand the basics of how this new technology works, why agility is important in developing AI products, and how to hire or train people for new roles.
-
AWS Introduces Athena Provisioned Capacity
AWS recently announced Provisioned Capacity, a new feature for Athena that allows users to run SQL queries on fully managed compute capacity for a fixed price, with no long-term commitments.
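A minimal sketch of using the feature, assuming boto3's Athena capacity-reservation calls; the reservation name, workgroup, and DPU count below are placeholders.

```python
# Sketch: reserving Athena capacity and routing a workgroup onto it.
# The reservation name, workgroup, and DPU count are placeholders.
import boto3

athena = boto3.client("athena", region_name="us-east-1")

# Reserve a fixed pool of Data Processing Units (DPUs).
athena.create_capacity_reservation(
    Name="analytics-reservation",
    TargetDpus=24,  # reservations are sized in DPUs
)

# Direct queries from a workgroup to the reserved capacity.
athena.put_capacity_assignment_configuration(
    CapacityReservationName="analytics-reservation",
    CapacityAssignments=[{"WorkGroupNames": ["analytics-workgroup"]}],
)
```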
-
Apache Linkis Graduated to Apache Top-Level Project
Apache Linkis is computation middleware that acts as a layer between upper-level applications and underlying engines such as Apache Spark, Apache Hive, and Apache Flink. It entered the Apache Incubator in 2021 and graduated to a Top-Level Project in January 2023.
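A rough sketch of how an application submits work through Linkis, assuming the Linkis 1.x entrance REST endpoint; the host, token, labels, and query below are all placeholders.

```python
# Sketch: submitting a SQL task through Linkis, which routes it to an
# underlying engine such as Hive or Spark. Host, token, labels, and code
# are placeholders; assumes the Linkis 1.x entrance REST API.
import requests

payload = {
    "executionContent": {"code": "SELECT 1", "runType": "sql"},
    "params": {},
    "labels": {
        "engineType": "hive-3.1.3",   # which underlying engine to use
        "userCreator": "hadoop-IDE",  # submitting user and client app
    },
}

resp = requests.post(
    "http://linkis-gateway:9001/api/rest_j/v1/entrance/submit",
    json=payload,
    headers={"Token-Code": "example-token", "Token-User": "hadoop"},
)
print(resp.json())  # returns a task ID to poll for status
```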
-
Apache Druid 25.0 Delivers Multi-Stage Query Engine and Kubernetes Task Management
Apache Druid is a high-performance, real-time datastore, and its latest release, version 25.0, provides many improvements and enhancements. The main new features are: the multi-stage query (MSQ) task engine used for SQL-based ingestion is now production-ready, and Kubernetes can be used to launch and manage tasks, eliminating the need for MiddleManager processes...
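A minimal sketch of SQL-based ingestion through the MSQ task engine, submitted to Druid's SQL task endpoint; the datasource, input URI, and schema below are placeholders.

```python
# Sketch: SQL-based ingestion via Druid's multi-stage query (MSQ) task
# engine. Datasource, input URI, and schema are placeholders.
import requests

ingest_sql = """
INSERT INTO example_datasource
SELECT
  TIME_PARSE("timestamp") AS __time,
  "page",
  "user"
FROM TABLE(
  EXTERN(
    '{"type": "http", "uris": ["https://example.com/data.json.gz"]}',
    '{"type": "json"}',
    '[{"name": "timestamp", "type": "string"},
      {"name": "page", "type": "string"},
      {"name": "user", "type": "string"}]'
  )
)
PARTITIONED BY DAY
"""

resp = requests.post(
    "http://druid-router:8888/druid/v2/sql/task",
    json={"query": ingest_sql},
)
print(resp.json())  # returns the taskId of the ingestion task
```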
-
How Twitter Automated Its Data Quality Check Process
Twitter engineering recently shared a blog post on how it architected and developed a data quality automation platform. Twitter ingests and creates thousands of data sets for different data products and applications, so the natural next step was to ensure data quality by adding automation on top of them. In this news post, we explore the architecture in more detail.
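As a generic illustration of the kind of checks such a platform automates (this is not Twitter's code), a minimal sketch of completeness, freshness, and volume checks might look like this; the thresholds and column names are hypothetical.

```python
# Generic illustration of automated data quality checks (completeness,
# freshness, volume); thresholds and column names are hypothetical.
import pandas as pd

def run_quality_checks(df: pd.DataFrame, ts_col: str = "event_time") -> dict:
    results = {}
    # Completeness: no column may exceed a 5% null rate.
    results["null_rate_ok"] = bool((df.isna().mean() <= 0.05).all())
    # Freshness: the newest record must be less than one hour old.
    newest = pd.to_datetime(df[ts_col], utc=True).max()
    results["freshness_ok"] = (pd.Timestamp.now(tz="UTC") - newest) < pd.Timedelta(hours=1)
    # Volume: guard against empty batches.
    results["volume_ok"] = len(df) > 0
    return results

# A failing check could page the owning team or block downstream jobs.
checks = run_quality_checks(pd.DataFrame({
    "event_time": [pd.Timestamp.now(tz="UTC")],
    "user_id": [42],
}))
print(checks)
```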
-
Uber Freight Near-Real-Time Analytics Architecture
Uber Freight is the Uber platform dedicated to connecting shippers with carriers. Since providing reliable service to shippers is crucial, Uber Freight developed the Carrier Scorecard, which tracks several metrics, including on-time pickup/delivery, tracking automation, and late cancellations.
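As a simple illustration of what such a scorecard computes, the sketch below derives the metrics from a flat list of load events; the field names are hypothetical, and the real system computes these in near-real-time over streams rather than in batch.

```python
# Sketch: Carrier Scorecard-style metrics from load events.
# Field names are hypothetical, not Uber Freight's actual schema.
def scorecard(loads: list[dict]) -> dict:
    total = len(loads)
    if total == 0:
        return {}
    return {
        "on_time_pickup_rate": sum(l["picked_up_on_time"] for l in loads) / total,
        "on_time_delivery_rate": sum(l["delivered_on_time"] for l in loads) / total,
        "auto_tracking_rate": sum(l["tracked_automatically"] for l in loads) / total,
        "late_cancellation_rate": sum(l["cancelled_late"] for l in loads) / total,
    }

print(scorecard([
    {"picked_up_on_time": True, "delivered_on_time": True,
     "tracked_automatically": True, "cancelled_late": False},
    {"picked_up_on_time": False, "delivered_on_time": True,
     "tracked_automatically": False, "cancelled_late": False},
]))
```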