Microsoft Azure On-Premises? Technical Preview of Azure Stack Released
Last week, Microsoft delivered the first technical preview of Microsoft Azure Stack – a product that promises to let organizations run Azure services in their own data centers. This represents the third attempt by Microsoft to deliver a local Azure experience, but the first one that’s faithful to the public cloud experience and targeted at a mass audience.
In a blog post announcing the availability of a technical preview, Mike Neil of Microsoft explained the hybrid reality faced by today’s enterprises.
However, we know many enterprises still have business concerns around moving fully to the public cloud, such as data sovereignty or regulatory considerations. This leaves them in a complicated position, with one foot in the public cloud and one on-premises.
To manage this complexity, Microsoft believes enterprises have to approach cloud as a model – not a place. This model cuts across infrastructure, applications and people, and requires a hybrid cloud approach that provides consistency across private, hosted, and public clouds. Today, Microsoft is delivering on the next phase of its hybrid cloud strategy with the first Technical Preview of Microsoft Azure Stack – the only hybrid cloud platform that is consistent with a leading public cloud. Born from Azure, Azure Stack helps organizations deliver Azure services from their own datacenter.
Microsoft pointed out three reasons that a consistent hybrid cloud platform matters for customers.
- Application developers can maximize their productivity using a ‘write once, deploy to Azure or Azure Stack’ approach. Using APIs that are identical to Microsoft Azure, they can create applications based on open source or .NET technology that can easily run on-premises or in the public cloud. They can also leverage the rich Azure ecosystem to jumpstart their Azure Stack development efforts.
- IT professionals can transform on-premises datacenter resources into Azure IaaS/PaaS services while maintaining oversight using the same management and automation tools that Microsoft uses to operate Azure. This approach to cloud enables IT professionals to have a valuable seat at the table – they are empowered to deliver services to the business quickly, while continuing to steward corporate governance needs.
- Organizations can embrace hybrid cloud computing on their terms by helping them address business and technical considerations like regulation, data sovereignty, customization and latency. Azure Stack enables that by giving businesses the freedom to decide where applications and workloads reside without being constrained by technology.
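The "write once, deploy to Azure or Azure Stack" approach rests on Azure Resource Manager templates, which describe resources declaratively. As a minimal sketch (the parameter name and resource are hypothetical, but the schema and API version match the era's public Azure template format), a template like this could be deployed unchanged to either environment:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "storageAccountName": { "type": "string" }
  },
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "name": "[parameters('storageAccountName')]",
      "apiVersion": "2015-06-15",
      "location": "[resourceGroup().location]",
      "properties": { "accountType": "Standard_LRS" }
    }
  ]
}
```

Because both clouds expose the same Resource Manager API surface, the only thing that changes between a public Azure deployment and an Azure Stack deployment is the endpoint the tooling targets, not the template itself.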
What actually comes with the Azure Stack Technical Preview? Microsoft says that both public cloud Azure and Azure Stack use a standardized architecture, the same user portal, an application model based on Azure Resource Manager, and support for the same tools – such as Visual Studio and PowerShell. The Technical Preview deploys to a single server running Windows Server 2016 Datacenter Edition Technical Preview 4, with recommended hardware specs of 16 physical cores, 128GB of RAM, and more than 1TB of storage. It also requires a connection to Azure Active Directory, and currently offers only a small subset of the dozens of services in the Microsoft public cloud.

The first full product, targeted for Q4 2016, is expected to contain core services around infrastructure-as-a-service and platform-as-a-service, including Virtual Machines, Storage Blobs and Tables, Virtual Network, Load Balancer, VPN Gateway, and Web Apps. Microsoft identified a handful of additional services that should be in preview when Azure Stack becomes available, including Service Fabric, Storage Queues, Key Vault, Logic Apps, Mobile Apps, and API Apps. In the comments of the announcement blog post, Microsoft employees explain that additional services – like Azure Machine Learning, Service Bus, or Azure SQL databases – will get prioritized based on user feedback. Ryan O’Hara of Microsoft told The Next Platform that some services just won’t make sense for Azure Stack.
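The "same tools" claim can be illustrated with the Azure PowerShell cmdlets of the day. The sketch below assumes the AzureRM module of that era; the Azure Stack endpoint URL and resource names are hypothetical, and the parameters for registering an additional environment varied across module versions:

```powershell
# Deploy a Resource Manager template to public Azure
# (resource group name and template file are placeholders)
Login-AzureRmAccount
New-AzureRmResourceGroup -Name "demo-rg" -Location "West US"
New-AzureRmResourceGroupDeployment -ResourceGroupName "demo-rg" `
    -TemplateFile ".\azuredeploy.json"

# Target an Azure Stack deployment instead by registering it as an
# additional Resource Manager environment (endpoint is an assumption),
# then logging in against that environment and repeating the same commands.
Add-AzureRmEnvironment -Name "AzureStack" `
    -ARMEndpoint "https://api.azurestack.local"
Login-AzureRmAccount -EnvironmentName "AzureStack"
```

The point is that the deployment workflow is identical; only the login target changes.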
“Taking a copy of Azure and putting it into the enterprise doesn’t make sense,” Ryan O’Hara, director of program management for the enterprise cloud division at Microsoft, tells The Next Platform. “While Azure Stack should be just another region as far as enterprise customers are concerned and we should maximize the number of services that can be deployed locally, these services have to be things that can scale down to the enterprise level. There are some services, such as the Azure Data Lake, data ingestion, or data encoding, where customers will benefit from the scale of the Azure public cloud.”
According to The Next Platform, Azure Stack will “run on the stripped-down Nano Server implementation of Windows Server” and any patches or updates will happen by doing clean installations of the hypervisor and Nano Server configuration. Microsoft is still working out the update frequency for Azure Stack, and recognizes that hourly or daily updates are too frequent, while annual updates would be too slow.
The Next Platform sees this move as differentiating for Microsoft because it is attacking ground that neither Google nor Amazon wants to take on.
Both Microsoft and Google wanted to skip the infrastructure cloud and expose their infrastructure to the outside world as platform services, and they were rebuffed by most enterprise customers. The leap from what they had in their datacenters – virtualized servers running a hodge-podge of hundreds to thousands of applications – was too great. Which is why AWS EC2 compute and S3 and EBS storage took off. While the gap between the corporate datacenter and these basic AWS services is still large, the spark can jump it. This is why Google and Microsoft ate some humble pie a few years ago and rolled out their own respective Compute Engine and Azure Virtual Machines services.
But Google and AWS still believe, fundamentally and at their core, that cloud means public cloud – unless you are a US government agency with hundreds of millions of dollars to spend on a truly private public cloud – and this is leaving an opening for Microsoft Azure to exploit.
“A lot of people think that cloud is a place, and we reject that thinking,” says Jeffrey Snover, Technical Fellow and chief architect of enterprise cloud at Microsoft. “We think of the cloud as a model that can exist in multiple places.”
Arguing about how that cloudy infrastructure is broken up is not the point. Showing how modern cloud software is better than traditional and more rigid infrastructure is where the focus has to be, according to the top brass running Azure and their peers at Microsoft who are trying to bring it into the datacenters of the world.
As described in Ars Technica, Azure Stack is somewhat like an “Azure-flavored counterpart to OpenStack” where businesses can have similar experiences across on-premises and cloud deployments. Ben Kepes of Computerworld plays with that theme further.
The real benefit that Azure Stack brings to organizations with an existing Microsoft footprint is that there is no incongruence between their public and private cloud utilization. Where an organization that, for example, takes advantage of Amazon Web Services (AWS) for the public cloud and builds a private cloud on OpenStack might have difficulties in the interplay between its public and private resources, with Azure Stack there is a high fidelity between the public and private offerings.
The reality for OpenStack as a hybrid cloud play, however, hasn't been what was envisaged. The project, while very successful in a number of areas, hasn't really seen the rise of a consistent series of vendor offerings. This means that, although OpenStack is touted as a consistent platform between different distributions, every distribution is a different flavor, thus limiting the portability between different OpenStack-powered clouds.
Some may recall that this is Microsoft’s third attempt at a private Azure experience. In 2010, the company discussed an Azure Appliance that would be deployed by select hosting partners. The plan was dropped after tepid adoption, and in 2013 Microsoft announced the Azure Pack, described at the time as delivering “Windows Azure technologies for you to run inside your datacenter, enabling you to offer rich, self-service, multi-tenant services that are consistent with Windows Azure.” In the comments of the announcement blog post, a Microsoft employee explains how customers might migrate from Azure Pack to Azure Stack.
It's important to understand that between Azure Pack and Azure Stack, we're dealing with two different architectures. The former is built on top of Windows Server + System Center + an extra layer of technology to effectively emulate an Azure experience for self-service provisioning of some core items like VMs, DBs, websites, etc. The latter (Azure Stack) is a product (not something deployed on top of Windows Server & System Center) designed to allow you to create and offer Azure services to your end users from your datacenter (or from a partner's). For example, the APIs used to interface with the Virtual Machines service in Azure Stack are consistent with the API for the service in Azure. So you really are running Azure services in your DC. As you can imagine, doing in-place upgrades of systems with different underlying architectures is not usually possible and that is the case here as well. Most customers we've talked to plan to keep their Azure Pack deployment running and stand up an Azure Stack stamp when it's released and then migrate workloads from one environment to the other.
Microsoft hasn’t yet talked about pricing, licensing, or what the infrastructure topology would look like in a highly available environment. Meanwhile, TechCrunch sees Azure Stack as a strategic bet.
In many ways, Azure Stack is the logical next step in Microsoft’s overall hybrid cloud strategy. If you’re expecting to regularly move some workloads between your own data center and Azure (or maybe add some capacity in the cloud as needed), having a single platform and only one set of APIs across your own data center and the cloud to work with greatly simplifies the process.