2015-07-11

Microsoft is poised to bring its market-leading cloud technology to private and hosted clouds with the upcoming Windows Server 2016. While the server product with all the planned additions is not yet available (a test build is due out later this summer), the company has disclosed its cloud technologies in considerable detail over the last two months. Microsoft is also adding virtualized containers and a Nano Server mode to Windows Server 2016 in order to serve the Next Generation Cloud market in a highly competitive way. System Center 2016, among other products, will also be an important part of that effort. See the whole Cloud Platform roadmap public preview site, which first went live on Jan 30, 2015 and has since offered a continuously updated “snapshot of what Microsoft is working on in the Cloud Platform business”.

I’ve been tracking all of this since then (though not posting about it here), and I’ve now come to the conclusion that it will revolutionize the whole cloud solution market, particularly the segment targeting network operators and telcos, who are in great need of proven solutions for their “Networked Society” efforts.

INTRODUCTIONS

I. Hyper-scale Azure with host SDN*:

*Software Defined Networking

In order to understand the true impact of the upcoming Microsoft Cloud Platform, I will quote here from the Microsoft Gives Software Networking a Hardware Boost article of June 30 on Light Reading, a leading community-driven media outlet for the communications industry:

To achieve scale, Microsoft had to use “hyperscale SDN,” breaking away from a proprietary appliance combining management, control and data plane and separating those functions. Now, the management plane exposes APIs, the control plane uses the APIs to create rules, and then passes those rules to switches.



The company has developed its own SmartNIC to offload networking processing loads from hosts, which can dedicate their processing power to application workloads.

“With this, we can offload functionality into the device, saving CPU,” says Mark Russinovich, Microsoft Azure CTO, speaking at the Open Networking Summit this month.

The SmartNIC improves performance to 30 Gbit/s on host networks using 40 Gbit/s NICs.

The SmartNIC uses a Field-Programmable Gate Array (FPGA) for reconfigurable functions. Microsoft already uses FPGAs in Catapult, an FPGA-based reconfigurable fabric for large-scale data centers developed jointly by Microsoft Research and Bing. Using programmable hardware allows Microsoft to update equipment as it does software.

Microsoft programs the hardware using Generic Flow Tables (GFT), which is the language used for programming SDN.

The SmartNIC can also do crypto, QoS, storage acceleration and more, says Russinovich.



But can we truly call it software-defined networking if it includes a custom hardware component? Does it matter? Microsoft has found a solution that works for it, and that other network operators might want to emulate.

To understand the real significance of that statement by the author of this communications-industry article, we should briefly characterize the state of the art of the cloud technology that network operators and telcos have focused on so far. That technology is OpenStack, the only available open-source option, primarily deployed as an IaaS solution, upon which the rest of the cloud stack is supposed to be built. Its promise has been huge, but “OpenStack is heading to the Trough of Disillusionment on the Technology Adoption Curve”, as Randy Bias characterized it in his State of the Stack v4 address to the attendees of the OpenStack Summit held on May 20, 2015:



Randy told the audience there: “This is some information from Gartner and some others. Yeah, you know, there’s a lot of coaching I find. But what I found most interesting is this quote at the bottom, which I want to quote to you. … It dovetails completely with what I’ve been thinking, and what I’ve been hearing. Which is that, number one, difficulty of the implementation is a problem. Two, shortage of skills, people who can actually build these things. Three, conflicting or uncoordinated project governance, and this is stuff that we’ve started to address with the “Big Tent” approach, and things like that. But you know there needs to be more. And, weak spots in some projects. And then, integration with the existing infrastructure, which I violently disagree with. You know, if you’re building OpenStack you should produce on new stuff, on net new stuff. But whatever, Gartner’s smarter than me, I guess, or supposedly. But this [is] pretty smart.” See also his slides.

For the continuation of my appraisal (highly recommended) of the OpenStack state of the art, go to my homepage here, scroll down to the above image, and read the information that follows it.

My conclusion at the end of that appraisal was that most of the network- and telecommunications-oriented contributions to the code will come in future OpenStack releases. My personal guess was that about two more years will be needed for “telco/carrier-grade” hardening of the OpenStack code, together with the necessary enhancements in functionality (see the May 12, 2014 “OpenStack as the Key Engine of NFV” story by Ericsson referenced earlier in the appraisal).

This represents a huge window of opportunity for the 2016 wave of Microsoft Cloud Platform products (Windows Server 2016 et al.) to penetrate the network operator and telco market, which is crucial for Microsoft’s survival. I will cover Microsoft’s upcoming moves in that direction in future posts; here I will only point it out.

II. IaaS 2.0 (INTRODUCTION):

The best introduction here is the full Virtual Machines and Containers in Azure article of July 2, 2015 by Ralph Squillace, a Senior Content Developer at Microsoft:

Azure offers you great cloud solutions, built on virtual machines—based on the emulation of physical computer hardware—to enable agile movement of software deployments and dramatically better resource consolidation than physical hardware. In the past few years, largely thanks to the Docker approach to containers and the docker ecosystem, Linux container technology has dramatically expanded the ways you can develop and manage distributed software. Application code in a container is isolated from the host Azure VM as well as other containers on the same VM, which gives you more development and deployment agility at the application level—in addition to the agility that Azure VMs already give you.

But that’s old news. The new news is that Azure offers you even more Docker goodness:

Many different ways to create Docker hosts for containers to suit your situation

Azure Resource Manager and resource group templates to simplify deploying and updating complex distributed applications

integration with a large array of both proprietary and open-source configuration management tools

And because you can programmatically create VMs and Linux containers on Azure, you can also use VM and container orchestration tools to create groups of Virtual Machines (VMs) and to deploy applications inside both Linux containers and soon Windows Server Containers.

This article not only discusses these concepts at a high level, it also contains tons of links to more information, tutorials, and products related to container and cluster usage on Azure. If you know all this, and just want the links, they’re right here.

The difference between virtual machines and containers

Virtual machines run inside an isolated hardware virtualization environment provided by a hypervisor. In Azure, the Virtual Machines service handles all that for you: You just create Virtual Machines by choosing the operating system and configuring it to run the way you want—or by uploading your own custom VM image. Virtual Machines are a time-tested, “battle-hardened” technology, and there are many tools available to manage operating systems and to configure the applications you install and run. Anything running in a virtual machine is hidden from the host operating system and, from the point of view of an application or user running inside a virtual machine, the virtual machine appears to be an autonomous physical computer.

Linux containers—which include those created and hosted using docker tools, though there are other approaches—do not require or use a hypervisor to provide isolation. Instead, the container host uses the process and file system isolation features of the Linux kernel to expose to the container (and its application) only certain kernel features and its own isolated file system (at a minimum). From the point of view of an application running inside a container, the container appears to be a unique operating system instance. A contained application cannot see processes or any other resources outside of its container.

Because in this isolation and execution model the kernel of the Docker host computer is shared, and because the disk requirements of the container now do not include an entire operating system, both the start-up time of the container and the required disk storage overhead are much, much smaller.

It’s pretty cool.
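
To make that concrete, here is a minimal, hedged sketch of the difference in practice, assuming a Linux VM with the Docker engine already installed (for example one created with the Azure Docker VM Extension discussed later); the image name is just an example:

    # start a throwaway container and time it; with the image already pulled,
    # this typically completes in well under a second
    time docker run --rm ubuntu:14.04 echo "hello from a container"

    # the on-disk cost is a set of image layers, not a full virtual machine disk
    docker images ubuntu

A comparable virtual machine would take minutes to provision and boot, and would carry gigabytes of dedicated operating system disk.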

Windows Server Containers provide the same advantages as Linux containers for applications that run on Windows. Windows Server Containers support the docker image format and the docker API. As a result, an application using Windows Server Containers can be developed, published, retrieved, and deployed using similar commands to those on Mac and Linux. That’s in addition to having new docker support in Microsoft Visual Studio. The larger container ecosystem will give everyone tools to do the work they need to do with containers.

That’s pretty cool, too.

Is this too good to be true?

Well, yes—and no. Containers, like any other technology, do not magically wipe away all the hard work required by distributed applications. Yet, at the same time containers do really change:

how fast application code can be developed and shared widely

how fast and with what confidence it can be tested

how fast and with what confidence it can be deployed

That said, remember containers execute on a container host—an operating system, and in Azure that means an Azure Virtual Machine. Even if you already love the idea of containers, you’re still going to need a VM infrastructure hosting the containers, but the benefits are that containers do not care on which VM they are running (although whether the container wants a Linux or Windows execution environment will be important, for example).

What are containers good for?

They’re great for many things, but they encourage—as do Azure Cloud Services and Azure Service Fabric—the creation of single-service, microservice-oriented distributed applications, in which application design is based on more small, composable parts rather than on larger, more strongly coupled components.

This is especially true in public cloud environments like Azure, in which you rent VMs when and where you want them. Not only do you get isolation and rapid deployment and orchestration tools, but you can make more efficient application infrastructure decisions.

For example, you might currently have a deployment consisting of 9 Azure VMs of a large size for a highly-available, distributed application. If the components of this application can be deployed in containers, you might be able to use only 4 VMs and deploy your application components inside 20 containers for redundancy and load balancing.

This is just an example, of course, but if you can do this in your scenario, you can adjust to usage spikes with more containers rather than more Azure VMs, and use the remaining overall CPU load much more efficiently than before.

In addition, there are many scenarios that do not lend themselves to a microservices approach; you will know best whether microservices and containers will help you.

Container benefits for developers

In general, it’s easy to see that container technology is a step forward, but there are more specific benefits as well. Let’s take the example of Docker containers. This topic will not dive deeply into Docker right now (read What is Docker? for that story, or wikipedia), but Docker and its ecosystem offer tremendous benefits to both developers and IT professionals.

Developers take to Docker containers quickly, because above all it makes using Linux containers easy:

They can use simple, incremental commands to create a fixed image that is easy to deploy and can automate building those images using a dockerfile (see the sketch after this list)

They can share those images easily using simple, git-style push and pull commands to public or private docker registries

They can think of isolated application components instead of computers

They can use a large number of tools that understand docker containers and different base images
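
To illustrate the first two points above, here is a minimal, hedged sketch of the dockerfile-plus-registry workflow; the registry address and image name are placeholders, not anything from the article:

    # Dockerfile (simple, incremental steps that produce a fixed image):
    #   FROM ubuntu:14.04
    #   RUN apt-get update && apt-get install -y nginx
    #   RUN echo '<h1>hello from a container</h1>' > /usr/share/nginx/html/index.html
    #   CMD ["nginx", "-g", "daemon off;"]

    # build the image from that Dockerfile and share it with git-style push/pull
    docker build -t myregistry.example.com/demo/web:1.0 .
    docker push myregistry.example.com/demo/web:1.0

    # on any other Docker host (or in an automated pipeline), pull and run the same image
    docker pull myregistry.example.com/demo/web:1.0
    docker run -d -p 80:80 myregistry.example.com/demo/web:1.0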

Container benefits for operations and IT professionals

IT and operations professionals also benefit from the combination of containers and virtual machines.

contained services are isolated from VM host execution environment

contained code is verifiably identical

contained services can be started, stopped, and moved quickly between development, test, and production environments

Features like these—and there are more—excite established businesses, where professional information technology organizations have the job of fitting resources—including pure processing power—to the tasks required to not only stay in business, but increase customer satisfaction and reach. Small businesses, ISVs, and startups have exactly the same requirement, but they might describe it differently.

What are virtual machines good for?

Virtual machines provide the backbone of cloud computing, and that doesn’t change. If virtual machines start more slowly, have a larger disk footprint, and do not map directly to a microservices architecture, they do have very important benefits:

By default, they have much more robust security protections for the host computer

They support any major OS and application configurations

They have longstanding tool ecosystems for command and control

They provide the execution environment to host containers

The last item is important, because a contained application still requires a specific operating system and CPU type, depending upon the calls the application will make. It’s important to remember that you install containers on VMs because they contain the applications you want to deploy; containers are not replacements for VMs or operating systems.

High-level feature comparison of VMs and containers

The following table describes at a very high level the kind of feature differences that—without much extra work—exist between VMs and Linux containers. Note that some features may be more or less desirable depending upon your own application needs, and that, as with all software, extra work provides increased feature support, especially in the area of security.

FEATURE | VMS | CONTAINERS
“Default” security support | To a greater degree | To a slightly lesser degree
Memory on disk required | Complete OS plus apps | App requirements only
Time taken to start up | Substantially longer: boot of OS plus app loading | Substantially shorter: only apps need to start because the kernel is already running
Portability | Portable with proper preparation | Portable within the image format; typically smaller
Image automation | Varies widely depending on OS and apps | Docker registry; others

Creating and managing groups of VMs and containers

At this point, any architect, developer, or IT operations specialist might be thinking, “I can automate ALL of this; this really IS Data-Center-As-A-Service!”.

You’re right, it can be, and there are any number of systems, many of which you may already use, that can manage groups of Azure VMs and inject custom code using scripts, often with the CustomScriptingExtension for Windows or the CustomScriptingExtension for Linux. You can—and perhaps already have—automated your Azure deployments using PowerShell or Azure CLI scripts like this.
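
As a brief, hedged sketch of what such scripting can look like with the 2015-era cross-platform azure CLI in its classic (Service Management) mode — the VM names, credentials, and image are placeholders, and exact flag spellings can differ between CLI versions:

    # pick any current Ubuntu image name from `azure vm image list`
    IMAGE="<ubuntu-14.04-image-name-from-the-image-list>"

    # stamp out three identically configured Linux VMs
    for i in 1 2 3; do
      azure vm create "demo-web-$i" "$IMAGE" azureuser 'P@ssw0rd123!' \
          --location "West US" --vm-size Small --ssh
    done

    # per-VM configuration can then be injected with the custom script extension;
    # the exact parameters vary by CLI version (see `azure vm extension set --help`)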

These abilities are often then migrated to tools like Puppet and Chef to automate the creation of and configuration for VMs at scale. (There are links to using these tools with Azure here.)

Azure resource group templates

More recently, Azure released the Azure resource management REST API, and updated PowerShell and Azure CLI tools to use it easily. You can deploy, modify, or redeploy entire application topologies using Azure Resource Manager templates with the Azure resource management API using:

the Azure preview portal using templates—hint, use the “DeployToAzure” button

the Azure CLI

the Azure PowerShell modules
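
For the CLI route in particular, here is a minimal, hedged sketch of a template-based deployment (the resource group name, location, and template/parameter file names are placeholders; flag spellings reflect the 2015 cross-platform CLI and may differ in other versions):

    azure config mode arm                    # switch the cross-platform CLI to Resource Manager mode
    azure group create demo-rg "West US"     # create the resource group that will hold the application
    azure group deployment create \
        -f azuredeploy.json \
        -e azuredeploy.parameters.json \
        -g demo-rg \
        -n demo-deployment                   # deploy (or redeploy) the whole topology in one operation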

Deployment and management of entire groups of Azure VMs and containers

There are several popular systems that can deploy entire groups of VMs and install Docker (or other Linux container host systems) on them as an automatable group. For direct links, see the containers and tools section, below. There are several systems that do this to a greater or lesser extent, and this list is not exhaustive. Depending upon your skill set and scenarios, they may or may not be useful.

Docker has its own set of VM-creation tools (docker-machine) and a load-balancing, docker-container cluster management tool (swarm). In addition, the Azure Docker VM Extension comes with default support for docker-compose, which can deploy configured application containers across multiple containers.
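
A hedged sketch of that docker-machine/swarm combination with the classic Azure driver (the subscription ID, certificate, and machine names are placeholders; driver flags changed between docker-machine releases):

    # get a swarm discovery token (hosted discovery via Docker Hub)
    SWARM_TOKEN=$(docker run --rm swarm create)

    # a swarm master plus one node, each an Azure VM provisioned by docker-machine
    docker-machine create --driver azure \
        --azure-subscription-id "<subscription-id>" \
        --azure-subscription-cert mycert.pem \
        --swarm --swarm-master --swarm-discovery "token://$SWARM_TOKEN" \
        swarm-master

    docker-machine create --driver azure \
        --azure-subscription-id "<subscription-id>" \
        --azure-subscription-cert mycert.pem \
        --swarm --swarm-discovery "token://$SWARM_TOKEN" \
        swarm-node-01

    # point the local docker client at the whole swarm and inspect it
    eval "$(docker-machine env --swarm swarm-master)"
    docker info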

In addition, you can try out Mesosphere’s Data Center Operating System (DCOS). DCOS is based on the open-source mesos “distributed systems kernel” that enables you to treat your datacenter as one addressable service. DCOS has built-in packages for several important systems such as Spark and Kafka (and others) as well as built-in services such as Marathon (a container control system) and Chronos (a distributed scheduler). Mesos was derived from lessons learned at Twitter, Airbnb, and other web-scale businesses.

Also, kubernetes is an open-source system for VM and container group management derived from lessons learned at Google. You can even use kubernetes with weave to provide networking support.

Deis is an open source “Platform-as-a-Service” (PaaS) that makes it easy to deploy and manage applications on your own servers. Deis builds upon Docker and CoreOS to provide a lightweight PaaS with a Heroku-inspired workflow. You can easily create a 3-Node Azure VM group and install Deis on Azure and then install a Hello World Go application.

CoreOS, a Linux distribution with an optimized footprint, Docker support, and its own container system called rkt, also has a container group management tool called fleet.

Ubuntu, another very popular Linux distribution, supports Docker very well, but also supports Linux (LXC-style) clusters.

Tools for working with Azure VMs and containers

Working with containers and Azure VMs requires tools. This section provides a list of only some of the most useful or important concepts and tools about containers, groups, and the larger configuration and orchestration tools used with them.

NOTE:

This area is changing amazingly rapidly, and while we will do our best to keep this topic and its links up to date, it might well be an impossible task. Make sure you search on interesting subjects to keep up to date!

Containers and VM technologies

Some Linux container technologies:

Docker

LXC

CoreOS and rkt

Open Container Project

RancherOS

Windows Server Container links:

Windows Server Containers

Visual Studio Docker links:

Visual Studio 2015 RC Tools for Docker – Preview

Docker tools:

Docker daemon

Docker clients

Windows Docker Client on Chocolatey

Docker installation instructions

Docker on Microsoft Azure:

Docker VM Extension for Linux on Azure

Azure Docker VM Extension User Guide

Using the Docker VM Extension from the Azure Command-line Interface (Azure CLI)

Using the Docker VM Extension from the Azure Preview Portal

Getting Started Quickly with Docker in the Azure Marketplace

How to use docker-machine on Azure

How to use docker with swarm on Azure

Get Started with Docker and Compose on Azure

Using an Azure resource group template to create a Docker host on Azure quickly

The built-in support for compose for contained applications

Implement a Docker private registry on Azure

Linux distributions and Azure examples:

CoreOS

Configuration, cluster management, and container orchestration:

Fleet on CoreOS

Deis

Create a 3-Node Azure VM group, install Deis, and start a Hello World Go application

Kubernetes

Complete guide to automated Kubernetes cluster deployment with CoreOS and Weave

Kubernetes Visualizer

Mesos

Mesosphere’s Data Center Operating System (DCOS)

Jenkins and Hudson

Blog: Jenkins Slave Plug-in for Azure

GitHub repo: Jenkins Storage Plug-in for Azure

Third Party: Hudson Slave Plug-in for Azure

Third Party: Hudson Storage Plug-in for Azure

Chef

Chef and Virtual Machines

Video: What is Chef and How does it Work?

Azure Automation

Video: How to Use Azure Automation with Linux VMs

Powershell DSC for Linux

Blog: How to do Powershell DSC for Linux

GitHub: Docker Client DSC

Next steps

Check out Docker and Windows Server Containers.

III. Hybrid flexibility and freedom of the Microsoft Cloud (INTRODUCTION):

xxx

DETAILS

I. Hyper-scale Azure with host SDN

Host Networking makes Physical Network Fast and Scalable

Massive, distributed 40GbE network built on commodity hardware

No Hardware per tenant ACLs

No Hardware NAT

No Hardware VPN / Overlay

No Vendor-specific control, management or data plane

This host networking approach we’re taking to SDN has enabled us to scale these massive physical networks while still getting the agility we need in the abstractions our customers need from their APIs, and to scale out to these kinds of numbers.

All policy is in software – and everything’s a VM

Network services deployed like all other services

Battle-tested solutions in Azure are coming to private cloud with Windows Server 2016

Building SDN for Hyperscale: Learnings

Cloud needs scale, availability, and agility

Achieve all three for the controller using microservices on Service Fabric

Achieve all three for host SDN with RDMA [Remote Direct Memory Access] and FPGAs [Field-Programmable Gate Arrays]

FOR MORE TECHNICAL INFORMATION WATCH THE FOLLOWING VIDEO:

June 17, 2015, Open Networking Summit: Achieving Hyper-Scale with Software Defined Networking By Mark Russinovich, CTO, Microsoft Azure in the Microsoft Azure Blog

Today, I am excited to deliver a keynote talk at the Open Networking Summit, where I’ll be talking about how Microsoft is leveraging software-defined networking to power one of the largest public clouds in the world – Microsoft Azure.

SDN is probably not a new term to you so what is the hype really about? To answer that question we need to take a step back and look at how the datacenter is evolving to meet the growing need for scalability, flexibility and reliability that many IT users need in this mobile-first, cloud-first world. Cloud-native apps and services are creating an unprecedented demand for scale and automation on IT infrastructure. Across the industry, this is driving the move of control systems from hardware devices into software in a trend called Software Defined Datacenter (SDDC), which means empowering customers to virtualize servers, storage and networking to optimize resources and apps with a single click.

With 22 hyper-scale regions around the world, Azure storage and compute usage doubling every six months, and 90,000 new Azure subscriptions a month, Azure has experienced exponential growth. In this environment, we’ve had to learn how to run a software-defined datacenter within our own infrastructure to deliver Azure services to a growing user base. Since the inception of SDDC, we have applied the principles of virtualized, scale-out, partitioned cloud design and central control to everything from the Azure compute plane implementation to cloud storage, and of course, to networking.

Leveraging SDN for Industry-Leading Virtual Networks

We are investing in bringing a cloud design pattern to networking to deliver scalability and flexibility to our customers consuming cloud services both from Azure and within their datacenters. How exactly are we doing this? For starters, we are delivering industry-leading virtual networks (Vnets), which are critical for any public cloud customer. Vnets are built using overlay and Network Functions Virtualization (NFV) technologies implemented in software running on commodity servers, on top of a shared physical network.

By abstracting the software from the hardware layer, we have developed Vnets that are both scalable and agile, but also secure and reliable. Through segmentation of subnets and security groups, traffic flow control with User Defined Routes, and ExpressRoute for private enterprise grade connectivity, we are able to mimic the feel of a physical network with these Vnets.

Elastic Scale through Disaggregating the Network

With the demands on Azure, Vnets must be able to scale up for very large workloads and back down for small workloads. By both separating the control plane and data plane, and centralizing the control plane, we enable networks that can be modified, scaled and programmed quickly. To give a concrete example of the kind of hyper-scale we can achieve in one region, we can scale the data plane to hundreds of thousands of servers by abstracting to hosts.

We use the Azure Virtual Filtering Platform (VFP) in the Hyper-V hosts to enable Azure’s data plane to act as a Hyper-V virtual network switch, enabling us to provide core SDN functionality for Azure networking services. VFP is a programmable switch that exposes an easy-to-program abstract interface to network agents that act on behalf of network controllers like the Vnet controller and our software load balancer controller. By leveraging host components and doing much of packet processing on each host running in the datacenter, the Azure SDN data plane scales massively – both out and up nodes from 1 Gbps to 40 Gbps, and growing.

Scaling up to 40 Gbs and beyond requires significant computation for packet processing. To help us scale up without consuming CPU cycles that can otherwise be made available for customer VMs, Microsoft is building network interface controller (NIC) offloads on Azure SmartNICs. With SmartNICs, Microsoft is bringing the flexibility and acceleration of Field Programmable Gate Arrays (FPGAs) into cloud servers. FPGAs have not yet been widely used as compute accelerators in servers, so Microsoft using them to enable rapid scale with the programmability of SDN and the performance of dedicated hardware is unique in the industry.

Network Security and Reliability with Azure Innovation

Security and reliability are paramount for us. On Azure, one of the ways we ensure a reliable, secure network is through partitioning Vnets with Azure Controllers, which are organized as a set of inter-connected services. Each service is partitioned to scale and runs protocols on multiple instances for high availability. A partition manager service is responsible for partitioning the load among these services based on subscriptions, while a gateway manager service routes requests to the appropriate partition by utilizing the partition service.

Introduced at //Build, Azure Service Fabric is the platform we used to build our network controllers. With Service Fabric’s microservices-based architectural design, customers can update individual application components on a rolling basis without having to update the entire application – resulting in a more reliable service, faster updates and higher scalability for building mission-critical applications. Service Fabric powers a broad range of Microsoft hyper-scale services like Azure Data Factory, SQL Database, Bing Cortana, and Event Hubs.

Bringing Azure SDN Innovation to Our Customers’ Datacenters

Every day we learn from the hyper-scale deployments of Microsoft Azure.  Those learnings enable us to bring new capabilities to your datacenter, functioning at a smaller scale to bring you cloud efficiency and reliability.  Our strategy is to adapt the cloud design patterns, points of innovation and structural practices that make Azure a true enterprise grade offering.  The capabilities for the on-premises components are the same, and they’re resident in technology currently in production in datacenters across the world.

We first released SDN technology in Windows Server 2012, including network virtualization, and subsequently enhanced this with the release of Windows Server 2012 R2 and System Center 2012 R2.  SDN capabilities in Windows Server derive from the foundational networking technologies that underlie Azure.  Moving forward, we will continue to enhance SDN capabilities with the release of Windows Server 2016 and Microsoft Azure Stack. New features include a data plane and programmable network controller based on Azure, as well as a load balancer that is proven at Azure scale.

To see more of what’s going on at ONS, check out the recording here.

Excerpts
June 17, 2015: Microsoft Azure Gives SDN a Hardware Assist By Craig Matsumoto of SDxCentral, which touts itself as “the leading centralized source of news and resources covering Software-Defined Everything (SDx), SDN, NFV, cloud and virtualization infrastructure”

… SmartNIC covers those functions that need a hardware boost, or that Microsoft would just prefer to offload from the CPU — the philosophy being that CPUs are better left running virtual machines to serve Azure customers, Russinovich said.

Encryption is a prime example of the “boost” case: Hardware will always be able to do it faster than software. It’s just a question of whether you need that much firepower. You often don’t. But as 100-Gb/s networking starts to become a reality in the data center, Microsoft is worried — rightfully so — about software’s ability to keep up.

So, the SmartNIC is going to be applied inline — meaning traffic flows through it — for functions including encryption, quality-of-service processing, and storage acceleration. “The sky’s the limit, really, with what we can do with an FPGA given its flexible programming,” Russinovich said.



Separately, Russinovich talked about Microsoft’s tiered system of SDN controllers — the tiering being necessary for controlling regions as large as 500,000 hosts apiece.

A regional controller oversees a region and delegates work to cluster controllers, which act as the proxies that talk to network switches.

The regional controller also keeps track of network state. If a cluster controller fails, its replacement can learn its state from the regional controller.

The tiered approach looks like it’s going to be common in large networks. AT&T wants to use tiered SDN controllers as well. A controller based on OpenDaylight Project code would be responsible for a global view, overseeing local controllers based on either ONOS (for white box switches) or OpenContrail (for virtual routers and virtual switches).

II. IaaS 2.0 (DETAILS)

Virtual Machines service with Resource Manager: New scalable Resource Manager for IaaS

⇒ Compute resources: Virtual machines, VM extensions

⇒ Storage resources: Storage accounts (blobs)

⇒ Networking resources: Virtual networks, Network interface cards (NICs), Load balancers, IP addresses, Network Security Groups

The classic management model of Azure for IaaS

Azure Resource Manager V2 – the new management model for IaaS

Source: Microsoft, May 2015

⇒ Faster Scalability, Larger overall deployments

⇒ Ability to make parallel configuration changes

⇒ Templates enable single-click deployments of complex applications into a resource group (see the sketch after this list). A resource group is a container that holds all related elements of an application and can be managed as a single unit, providing granular access control via role-based access control (RBAC)

⇒ A single unified Azure Stack for the Microsoft Cloud (public cloud, private cloud and hosted cloud)
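
To ground the resource-group point above, here is a brief, hedged sketch (using the same 2015-era cross-platform CLI as in the IaaS introduction; the group name and template file are placeholders) of treating an entire deployment as one unit:

    azure config mode arm
    azure group create contoso-app-rg "West Europe"
    azure group deployment create -f azuredeploy.json -g contoso-app-rg -n app-v1

    azure group show contoso-app-rg     # every VM, NIC, virtual network and storage account in one view
    azure group delete contoso-app-rg   # tears down the whole application as a single unit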

FOR MORE TECHNICAL INFORMATION WATCH THE FOLLOWING VIDEO:
May 5, 2015, Microsoft Ignite: Taking a Deep Dive into Microsoft Azure IaaS Capabilities

Excerpt from
April 29, 2015: Azure peaks in the valley: New features and innovation By Vibhor Kapoor, Director, Product Marketing, Microsoft Azure, in the Microsoft Azure Blog

At the core of every Azure innovation is our focus on solving for the needs of developers and ISVs. Today at //build, we announced exciting updates to Azure which enable developers of all types with the flexibility to build cloud apps and services across multiple devices and platforms. With the updates announced today, Microsoft has the most complete platform for predictive analytics and intelligent applications, empowering enterprises to realize the maximum value from their data.

SQL Database Enhancements
As Scott [Guthrie] shared on stage this morning, we made a number of updates and enhancements to SQL Database. Developers building software-as-a-service (SaaS) applications can leverage SQL Database to provide flexibility to support both explosive growth and profitable business models.  ….

Azure Data Lake

As part of Microsoft’s big data and analytics portfolio of products, we pre-announced Azure Data Lake, a hyper scale repository for big data analytic workloads. … Azure Data Lake is a Hadoop File System compatible with HDFS that works with the Hadoop ecosystem providing integration with Azure HDInsight and will be integrated with Microsoft offerings such as Revolution-R Enterprise, industry standard distributions like Hortonworks and Cloudera, and individual Hadoop projects like Spark, Storm, Flume, Sqoop, Kafka, etc. …

Azure SQL Data Warehouse

We are also pleased to preannounce Microsoft Azure SQL Data Warehouse. As part of Microsoft’s extension to Data Warehousing, Azure SQL Data Warehouse is an elastic data warehouse-as-a-service with enterprise-grade features based on SQL Server’s massively parallel processing architecture. It provides customers the ability to scale data, either on premise, or in our cloud. …

Azure Service Fabric

Today we are excited to make available the developer preview of Azure Service Fabric [a new PaaS platform announced on April 20th] – a high control platform that enables developers and ISVs to build cloud services with a high degree of scalability and customization. As we discussed last week, Service Fabric supports creating both stateless and stateful microservices – an architectural approach where complex applications are composed of small, independently versioned services – to power the most complex, low-latency, data-intensive scenarios and scale them into the cloud. [Azure Service Fabric is a mature technology that Microsoft is making available to customers for the first time, having powered Microsoft products and services for more than 5 years and being in development for the last 10 years.] …



Azure Resource Manager Support for VMs, Storage and Networking

Azure Resource Manager Support for Virtual Machines, Storage and Networking is now available in public preview. Azure Resource Manager templates enable single click deployments of complex applications into a resource group. A resource group can contain all elements of an application and can be managed as a single unit providing granular access control via role-based access control (RBAC). Furthermore, you have the ability to tag resources so you can better manage resources with a granular understanding of costs. We will also have a starting set of more than 80 templates available in GitHub at preview release.

As part of our Azure Resource Manager availability, we are announcing partnerships across a broad set of PaaS, orchestration and management partners building on the new scalable Resource Manager for IaaS, including Cloud Foundry, Mesosphere, Juju, Apprenda, Jelastic and Scalr. We will also make available templates for Apprenda and Mesosphere directly in GitHub. The initial set of templates will also include many open-source solutions from many sources, including templates for MySQL, Chef, ElasticSearch, Zookeeper, MongoDB, and PostgreSQL. For more information, please visit https://azure.microsoft.com/en-us/documentation/articles/resource-group-overview/

[⇒Azure Resource Manager Overview]

June 6, 2015: Azure Resource Manager Overview By Tom FitzMacken, previously Senior Programming Writer on the Web Platform & Tools Content team website

Applications are typically made up of many components – maybe a web app, database, database server, storage, and 3rd party services. You do not see these components as separate entities, instead you see them as related and interdependent parts of a single entity. You want to deploy, manage, and monitor them as a group. Azure Resource Manager enables you to work with the resources in your application as a group. You can deploy, update or delete all of the resources for your application in a single, coordinated operation.