Performance & Scalability Content on InfoQ
-
Donkey: a Highly-Performant HTTP Stack for Clojure
Donkey is the product of our quest for a highly performant Clojure HTTP stack that can scale with the rapid growth we have been experiencing at AppsFlyer and reduce our computing costs. In this article, we’ll briefly outline the use case for a library like Donkey and present our benchmarks. Finally, we will discuss Clojure and immutability, and some of our design decisions.
-
Four Techniques Serverless Platforms Use to Balance Performance and Cost
Two aspects have been key to the rapid adoption of serverless computing: the performance and the cost model. This article looks at those aspects, the tradeoffs involved, and the opportunities ahead.
-
Scaling a Distributed Stream Processor in a Containerized Environment
This article presents our experience of scaling a distributed stream processor in Kubernetes. The stream processor needs to maintain an optimal level of parallelism, but adding more resources incurs additional cost and does not guarantee performance improvements. Instead, the stream processor should identify how many resources it needs and scale accordingly.
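The scaling decisions described above are specific to the authors’ stream processor; purely as an illustration of the general idea, the Python sketch below (with hypothetical numbers) applies the same proportional rule that Kubernetes’ Horizontal Pod Autoscaler uses: grow the replica count in proportion to observed load versus a per-replica target.

```python
import math

def desired_replicas(current_replicas: int,
                     observed_events_per_sec: float,
                     target_events_per_sec_per_replica: float) -> int:
    """Proportional scaling rule: replicas grow with observed load.

    Same shape as Kubernetes' HPA formula:
        desired = ceil(current * currentMetric / targetMetric)
    """
    current_per_replica = observed_events_per_sec / current_replicas
    ratio = current_per_replica / target_events_per_sec_per_replica
    return max(1, math.ceil(current_replicas * ratio))

# Hypothetical numbers: 4 replicas handling 10,000 events/s in total,
# with a target of 1,500 events/s per replica -> scale out to 7 replicas.
print(desired_replicas(4, 10_000, 1_500))  # 7
```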
-
Columnar Databases and Vectorization
In this article, author Siddharth Teotia discusses the Dremio database, which is based on Apache Arrow and its vectorization capabilities.
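Dremio’s and Arrow’s internals are covered in the article itself; as a rough, stand-alone illustration of why columnar layouts lend themselves to vectorization, the NumPy sketch below (a generic analogue, not Arrow’s API) filters a column with a single vectorized comparison instead of a row-by-row loop.

```python
import numpy as np

# Row-oriented: each record is a tuple; filtering touches every field of every row.
rows = [(1, 19.99), (2, 5.00), (3, 250.00), (4, 75.50)]
expensive_row_ids = [order_id for order_id, amount in rows if amount > 50.0]

# Column-oriented: each field lives in its own contiguous array,
# so a predicate becomes one vectorized comparison over a single column.
order_id = np.array([1, 2, 3, 4])
amount = np.array([19.99, 5.00, 250.00, 75.50])
mask = amount > 50.0                 # evaluated column-at-a-time, SIMD-friendly
expensive_col_ids = order_id[mask]

assert expensive_row_ids == expensive_col_ids.tolist()  # [3, 4]
```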
-
Six Tips for Running Scalable Workloads on Kubernetes
Tips to ensure Kubernetes knows what is happening with your deployment: where best to schedule it, when it is ready to serve requests, and how to ensure work is spread across as many nodes as possible.
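The six tips themselves are in the article; purely as a sketch of three of the signals mentioned above, the snippet below outlines a Pod template (expressed as a Python dict; names and values are hypothetical) with resource requests for scheduling, a readiness probe, and a topology spread constraint to spread replicas across nodes.

```python
# Illustrative Pod template fields only -- names and values are hypothetical.
pod_template_spec = {
    "containers": [{
        "name": "web",
        "image": "example.com/web:1.0",
        # Resource requests tell the scheduler how much room the pod needs.
        "resources": {"requests": {"cpu": "250m", "memory": "256Mi"}},
        # Readiness probe: only route traffic once the pod can serve requests.
        "readinessProbe": {
            "httpGet": {"path": "/healthz", "port": 8080},
            "initialDelaySeconds": 5,
            "periodSeconds": 10,
        },
    }],
    # Spread replicas across nodes instead of stacking them on one.
    "topologySpreadConstraints": [{
        "maxSkew": 1,
        "topologyKey": "kubernetes.io/hostname",
        "whenUnsatisfiable": "ScheduleAnyway",
        "labelSelector": {"matchLabels": {"app": "web"}},
    }],
}
```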
-
Unleashing the Power of .NET Big Memory and Memory Mapped Files
Continuing the Big Memory topic on the .NET platform, this article describes the benefits of working with large data sets in-process in managed CLR server environments using Agincore’s Big Memory Pile.
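The article’s code is .NET-specific; for the bare concept of a memory-mapped file, here is a minimal sketch using Python’s standard mmap module (the file name and contents are made up), mapping a file into the process address space so it can be read and written like an in-memory byte array.

```python
import mmap

# Create a small file to map (contents are made up for the example).
with open("data.bin", "wb") as f:
    f.write(b"\x00" * 4096)

# Map the file into the process address space; the OS pages data in on demand,
# so even very large files can be accessed like an in-memory byte array.
with open("data.bin", "r+b") as f:
    with mmap.mmap(f.fileno(), 0) as mm:
        mm[0:4] = b"DEMO"        # write through the mapping
        header = mm[0:4]         # read back without an explicit file read
        print(header)            # b'DEMO'
```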
-
Developing a Secure and Scalable Web Ecosystem at LinkedIn
LinkedIn’s hyper-growth put a strain on the organization’s infrastructure. A new release model was instrumental in scaling and led to increased code quality, security, and member satisfaction.
-
On Abstractions and For-Each Performance in C#
Donald Knuth famously said, “We should forget about small efficiencies, say about 97% of the time.” But when faced with the other 3%, it is good to know what’s going on behind the scenes. So in this article we’ll take a deep dive into the foreach loop.
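The article examines C#’s foreach and the enumerator machinery it compiles down to; as a rough cross-language analogue, the Python sketch below spells out the iterator protocol that a plain for loop expands to behind the scenes.

```python
numbers = [10, 20, 30]

# What `for n in numbers: print(n)` does behind the scenes:
it = iter(numbers)          # ask the collection for an iterator
while True:
    try:
        n = next(it)        # advance; roughly MoveNext() + Current in C# terms
    except StopIteration:   # the iterator signals exhaustion
        break
    print(n)
```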
-
Virtual Panel: Current State of NoSQL Databases
NoSQL databases have been around for several years now and have become a popular choice of data storage for managing semi-structured and unstructured data. These databases offer a lot of advantages in terms of linear scalability and better performance for both data writes and reads. InfoQ spoke with four panelists to get different perspectives on the current state of NoSQL databases.
-
Graph API in a Large Scale Environment
MyHeritage is a rapidly growing destination used around the world to discover, preserve, and share family histories. There is increasing demand for our services, which are accessed both internally and externally by our partners via the FamilyGraph API. Millions of API calls are made every day, which poses a huge challenge in terms of performance, scalability, and security.
-
Big Memory .NET Part 2 - Pile, Our Big Memory Solution for .NET
In part one, Leonid Ganeline introduced the concept of big memory and discussed why it is so hard to deal with in a .NET environment. In part two, Dmitriy Khmaladze describes their solution, NFX Pile: a hybrid memory manager written in C# with 100% managed code.
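NFX Pile itself is C# and is described in the article; purely to illustrate the underlying idea of serializing many small objects into a few large buffers, so that the garbage collector tracks one big allocation instead of millions of references, here is a toy sketch in Python (the class and layout are invented for illustration, not NFX’s design).

```python
import pickle

class TinyPile:
    """Toy 'pile': objects live as serialized bytes inside one big buffer,
    referenced by integer handles instead of ordinary object references."""

    def __init__(self):
        self._buffer = bytearray()      # one large allocation instead of millions

    def put(self, obj) -> tuple:
        data = pickle.dumps(obj)
        offset = len(self._buffer)
        self._buffer += data
        return (offset, len(data))      # handle: (offset, length)

    def get(self, handle):
        offset, length = handle
        return pickle.loads(bytes(self._buffer[offset:offset + length]))

pile = TinyPile()
h = pile.put({"name": "Ada", "visits": 42})
print(pile.get(h))                      # {'name': 'Ada', 'visits': 42}
```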
-
Big Memory .NET Part 1 – The Challenges in Handling 1 Billion Resident Business Objects
This article describes the concept of Big Memory and concentrates on its applicability to managed execution models like the one used in Microsoft’s Common Language Runtime (CLR). A few different approaches are suggested to resolve GC pausing issues that arise when a managed process starts to store over a few million objects.