Database Content on InfoQ
-
Big Data Processing with Apache Spark – Part 1: Introduction
Apache Spark is an open source big data processing framework built around speed, ease of use, and sophisticated analytics. In this article, Srini Penchikala talks about how the Apache Spark framework helps with big data processing and analytics through its standard API. He also discusses how Spark compares with traditional MapReduce implementations such as Apache Hadoop.
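For a rough sense of the API the article describes, here is a minimal PySpark word-count sketch; the local master and input path are illustrative assumptions, not details from the article.

    # Minimal PySpark sketch: word count expressed as a chain of transformations.
    # The "local[*]" master and "input.txt" path are assumptions for illustration.
    from pyspark import SparkContext

    sc = SparkContext("local[*]", "WordCount")

    counts = (sc.textFile("input.txt")               # read lines from a text file
                .flatMap(lambda line: line.split())  # split each line into words
                .map(lambda word: (word, 1))         # emit (word, 1) pairs
                .reduceByKey(lambda a, b: a + b))    # sum the counts per word

    for word, count in counts.collect():
        print(word, count)

    sc.stop()

The same job in classic Hadoop MapReduce typically needs separate mapper and reducer classes plus job configuration, which is the kind of contrast the article draws.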
-
Building a Mars Rover Application with DynamoDB
DynamoDB is a NoSQL database service designed to be easy to manage, so you don't have to worry about administrative burdens such as operations and scaling. This article shows how to use Amazon DynamoDB to create a Mars Rover application. You can use the same concepts described in this post to build your own web application.
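As a purely illustrative sketch of the idea (the table name, key schema, and attributes below are assumptions, not details from the article), storing rover observations in DynamoDB with boto3 might look like this:

    # Hypothetical sketch: a DynamoDB table for rover observations via boto3.
    # Table name, keys, and item attributes are illustrative assumptions only.
    import boto3

    dynamodb = boto3.resource("dynamodb", region_name="us-east-1")

    table = dynamodb.create_table(
        TableName="RoverObservations",
        KeySchema=[
            {"AttributeName": "rover_id", "KeyType": "HASH"},    # partition key
            {"AttributeName": "timestamp", "KeyType": "RANGE"},  # sort key
        ],
        AttributeDefinitions=[
            {"AttributeName": "rover_id", "AttributeType": "S"},
            {"AttributeName": "timestamp", "AttributeType": "N"},
        ],
        ProvisionedThroughput={"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
    )
    table.wait_until_exists()

    # Write one observation, then read it back by its composite key.
    table.put_item(Item={"rover_id": "curiosity", "timestamp": 1409152800,
                         "camera": "NAVCAM", "image_url": "https://example.com/img.jpg"})
    item = table.get_item(Key={"rover_id": "curiosity", "timestamp": 1409152800})["Item"]
    print(item)

Because the service handles partitioning and replication, application code stays at this level of put/get calls rather than cluster administration.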
-
The Definitive Guide to Database Version Control
In the brave new world of big data and BI, the only technology constant is change. When it comes to database change, agility through automation (the ability to do more with less, and to do it more rapidly to accelerate delivery) is what differentiates highly competitive, world-class enterprises from the rest of the crowd.
-
Apache Ignite GridGain Incubator Project - Q&A Interview with Nikita Ivanov
GridGain announced that its In-Memory Data Fabric has been accepted into the Apache Incubator program as Apache Ignite. InfoQ spoke with Nikita Ivanov about their product becoming part of Apache.
-
The Long Journey Towards Database Lifecycle Management
Application Lifecycle Management is a standard process, but there have been obstacles to using the same practice for databases. Ben Rees, General Manager for Database Continuous Delivery at Red Gate, explains why the road ahead is now clear for Database Lifecycle Management.
-
Interview with Alex Holmes, author of “Hadoop in Practice, Second Edition”
The new “Hadoop in Practice, Second Edition” book by Alex Holmes provides deep insight into the Hadoop ecosystem, covering a wide spectrum of topics: data organization, layouts, and serialization; data processing, including MapReduce and big data patterns; specialized structures and how they simplify big data processing; and SQL on Hadoop data.
-
Matt Schumpert on Datameer Smart Execution
Datameer, a big data analytics application for Hadoop, introduced Datameer 5.0 with Smart Execution, which dynamically selects the optimal compute framework at each step in the big data analytics process. InfoQ spoke with Matt Schumpert from the Datameer team about the new product and how it helps with big data analytics needs.
-
Stats Anomalies Detector
This article describes the general outline of the Stats Anomalies Detector we developed at MyHeritage and provides a detailed explanation of how to enhance the code (which will be available soon on the MyHeritage GitHub) to meet your company’s needs.
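The MyHeritage code is not yet published, so as a generic illustration only, a minimal statistical check of the kind such a detector might run could look like this (the threshold and example values are assumptions):

    # Generic anomaly-check sketch (not the MyHeritage implementation):
    # flag a metric whose latest value is more than `threshold` standard
    # deviations away from the mean of its recent history.
    from statistics import mean, stdev

    def is_anomalous(history, latest, threshold=3.0):
        """history: recent values of one metric; latest: the newest observation."""
        if len(history) < 2:
            return False                    # not enough data to judge
        mu, sigma = mean(history), stdev(history)
        if sigma == 0:
            return latest != mu             # any change from a flat series stands out
        return abs(latest - mu) / sigma > threshold

    # Example: daily sign-ups suddenly drop far below their recent level.
    print(is_anomalous([120, 118, 125, 121, 119, 123], 40))  # True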
-
SQL Server Source Control and Deployment with Visual Studio
The holy grail of database development is the ability to treat database objects (tables, views, stored procedures, etc.) as if they were just like any other form of source code. While SQL Server Data Tools doesn’t quite reach that level, it gets very close.
-
Analytics Across the Enterprise: How IBM Realizes Business Value from Big Data and Analytics
Analytics Across the Enterprise: How IBM Realizes Business Value from Big Data and Analytics, by Brenda L. Dietrich, Emily C. Plachy, and Maureen F. Norton, is a collection of experiences from analytics practitioners at IBM. InfoQ spoke with the authors about the lessons captured in the book, IBM's arsenal of Big Data technologies, and the future of analytics.
-
How to Effectively Map SQL Data to a NoSQL Store
Sytze Harkema explains how to store and retrieve relational SQL data in a NoSQL key-value store, as implemented by FoundationDB, an open source, scalable, fault-tolerant, ACID database.
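As a hedged sketch of one common layout for relational data in an ordered key-value store (the article's FoundationDB layer may use a different encoding), each column value can be stored under a (table, primary key, column) key:

    # Sketch of a row-to-key/value mapping; FoundationDB's SQL layer may encode
    # rows differently, so treat this as an illustration of the idea only.
    def row_to_kv(table, primary_key, row):
        """row: dict of column name -> value; returns key/value pairs for the store."""
        return {(table, primary_key, column): value for column, value in row.items()}

    kv = {}
    kv.update(row_to_kv("users", 42, {"name": "Ada", "email": "ada@example.com"}))

    # Reading the row back is a prefix scan over keys that start with ("users", 42).
    user = {col: val for (tbl, pk, col), val in kv.items() if (tbl, pk) == ("users", 42)}
    print(user)  # {'name': 'Ada', 'email': 'ada@example.com'}

In a real ordered store the tuple keys would be serialized so that a range read over the (table, primary key) prefix retrieves the whole row in one scan.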
-
A Rails Enthusiast’s take on MEAN.js
John looks at AngularJS and the MEAN stack as a productive alternative to Ruby on Rails for building typical web applications.