2015-10-25

April 9, 2015: Microsoft Changes Course by Hsiao-wen Wang

from CommonWealth Magazine, Taiwan



On Oct 21, 2015, Julia Liuson, a corporate vice president at Microsoft, expanded her role as head of Visual Studio and .NET engineering to cover all the rest of the former DevDiv, except Brian Harry’s Visual Studio Online Team (responsible both for Microsoft’s 3rd-party developer services and for the new One Microsoft Engineering System). Product management and cross-platform developer tools now belong to her as well; see the announcement below. Note that in August 2013 she was the manager of Brian Harry’s TFS (Team Foundation Server) DevTeam. In her “…get more girls in STEM disciplines” (STEM = science, technology, engineering and mathematics) article of Sept 10, 2012 she was already a corporate vice president in Microsoft’s Developer Tools business. In June 2010 she was Visual Studio Business Applications and Server & Tools Business (STB) China co-General Manager.

Microsoft remains a technology giant that is able to post net earnings of more than NT$100 billion [$3B USD] per quarter. The giant is currently transforming itself and redefining its battlefield. Industry insiders wonder how Microsoft will make money if it can no longer rely on software licensing.

It seems that commercial cloud services are Microsoft’s answer. One of the key figures in overcoming in-house resistance to turning Windows into open source software is Julia Liuson. Born in Shanghai, Liuson grew up in Beijing. After obtaining a bachelor of science in electrical engineering from the University of Washington, she joined Microsoft in 1992, holding various technical and managerial positions while the company was still in its heyday. Liuson [as a corporate vice president] works closely with Microsoft’s new CEO Satya Nadella and oversees software development for Visual Studio and the .NET framework.


Q: When taking the helm of Microsoft, Nadella said, “Our industry does not respect tradition — it only respects innovation.” How has Microsoft changed since Nadella called for the company’s transformation when taking office more than a year ago?

A: There has been a very big change in terms of acceptance of going open source. In terms of operating procedures, we have also seen massive changes. In the past we used to release major software updates every three years, as if we were selling a precious encyclopedia set. But in a speed-hungry Internet business environment, someone needs to run and maintain [software], racing against time 24 hours a day. It is like having to update one encyclopedia page per day and a chapter every week.

We have also changed our organization’s operating model. In the past, the ratio of software developers to software testing personnel was one to one. When the developers had developed new software, they would throw it over the wall to the testing staff, where it was no longer the developers’ business. Now, the real work begins when the developers have written the software and release it into the market, because we need to pay attention to customer feedback before we go back to make modifications.

In order to tear down the fences between developers and other departments, we reorganized our staff in work teams of eight to twelve members so that planning, development, testing, marketing and sales as well as customer support can communicate closely with each other and shorten the time needed for product updates and new releases.

INSERT from Oct 1, 2015: Our DevOps Journey – Microsoft Engineering Stories

… In the past, we had three distinct roles on what we call “feature teams”: program managers, developers, and testers. We wanted to reduce delays in handoffs between developers and testers and focus on quality for all software created, so we combined the traditional developer and tester roles into one discipline: software engineers. Software engineers are now responsible for every aspect of making their features come to life and performing well in production. … One of our first steps was to bring the operations teams into the same organization. Before, our ops teams were organizationally distant from the engineering team. … [Now] we call our operations team “Service Engineers.” Service Engineers have to know the application architecture to be more efficient troubleshooters, suggest architectural changes to the infrastructure, be able to develop and test things like infrastructure as code and automation scripts, and make high-value contributions that impact the service design or management. …

In addition to the Our DevOps Journey – Microsoft Engineering Stories briefing from Microsoft, see also the background information at the end of this post under the “DevOps Journey” title.
END OF THE INSERT


Q: As Microsoft transforms, what attitudes and skills are needed most?

A: Microsoft must learn to listen more closely to its customers; that’s a huge change.

Just the Beginning

Corresponding to these attitudinal changes, everything is different from before: product requirements, analysis of customer behavior, and the collection of big data.

Previously, we only needed to sell our products and everything was fine; we didn’t need to look at what the user wanted. However, now that I need to collect [data] on the behavior of these users, how am I going to go about my product support? How do I analyze the data I’ve gathered? These have all been huge transformations at Microsoft.

We cannot dig moats like before to protect the high market share of our products, Windows and Office. Now we are a challenger, a new service that starts with zero market share and zero users. We need to win over every single customer.

We need to adjust our own mindset: If I were a small startup, what would I do? This is completely different from our mindset in the past, when Microsoft was the industry leader with a market share above 90 percent.


Q: What keeps you awake at night?

A: Everything (laughs)! Just kidding. Come to think of it, I am in charge of Microsoft software, which has millions of users around the globe. But I don’t know who they are and how they use our software. If you told this to the people at Amazon, they would laugh at you.

Microsoft must transform from a company that throws a box of software into the market without knowing who its customers are into a company that offers pure services and knows who every single customer is and how they use those services. This is what keeps me awake at night.

There are still many things that need to be done. How much I wish it was still yesterday. Then I would have another 24 hours to get things done (laughs).

Dec 22, 2011: Zoominfo-cached page of Microsoft Chinese Employee – 微软华人协会 > Julia Liuson

Julia Liuson (潘正磊) is the General Manager for Visual Studio Business Applications. Her teams are responsible for enabling developers to easily build business applications on Microsoft platforms by reinvigorating development paradigms for building LOB applications, delivering first-class tooling for Office server and client, and bringing .NET programmability to all ISV applications.

Julia joined Microsoft in 1992 as a software developer on Access 1.0. After the successful launch of Access 1.0, 1.1, and 2.0, she became development lead for the database and web project tools in Visual InterDev 1.0 and 2.0. In 1998, she assumed the role of development manager for Visual Basic .NET, and led the development effort for Visual Basic .NET 2002 and 2003. Julia then served as Director of Development for the entire Visual Studio product line, tackling division-wide process and engineering excellence issues.

As the Partner Product Unit Manager of Visual Studio Team Architect, she was a core member of the leadership team that led the successful development and launch of Visual Studio Team System in 2005.

In 2006, she became the Partner Product Unit Manager for Visual Basic, and was responsible for delivering the most productive development tool on .Net for professional developers, and for moving millions of VB6 users forward to the .Net platform.

Oct 21, 2015: Microsoft Executive VP of the Cloud and Enterprise Group [C+E] Scott Guthrie:

Today we are announcing some organizational changes within C+E that will enable us to further accelerate our customer momentum and move even faster as an organization.  Our new C+E structure will be aligned around our key strategic businesses (Cloud Infrastructure, Data and Analytics, Business Applications and App Platform, Enterprise Mobility, Developer).  As part of today’s changes we are also bringing several teams even closer together to enable us to make deeper shared technology bets.

Each team in C+E will have a clear, focused charter.  Our culture will continue to be grounded in a Growth Mindset.  We’ll exercise this by being Customer-Obsessed, Diverse and Inclusive, and by working as One Microsoft to Make a Difference for our customers and partners. We’ll embrace data driven decision making and optimize for continuous learning and improvement.


Developer Tools and Services

Our Visual Studio Family of developer tools and services provides a complete solution for building modern cloud and mobile applications.

The Visual Studio Tools and .NET Team will be led by Julia Liuson.  John Montgomery, who leads the Visual Studio and .NET PM team, will report to Julia going forward.  The VS Code Team, led by Shanku Niyogi, which is responsible for our cross-platform developer tools, will also join the Visual Studio Tools and .NET Team today, with Shanku also reporting to Julia.



The Visual Studio Online Team will continue to be led by Brian Harry.  The VSO team is responsible both for our 3rd-party developer services and for the new One Microsoft Engineering System.





“… TFS on-prem[ises] is growing slowly because it’s already huge. VS Online usage is growing more rapidly but is still far smaller than TFS on-prem[ises]. … Here’s a month by month trend of VS Online adoption by major organization. The numbers look a little larger than they really are because adoption is still early and people are using only subsets of the functionality or using VS Online as a supplement to on-prem TFS.” ASG = Application & Services Group for the “Reinvent productivity and business processes” ambition, C&E = Cloud & Enterprise for the “Build the intelligent cloud platform” ambition, OSG = Operating Systems Group for the “Create more personal computing” ambition. Source: Team Foundation Server and VS Online adoption at Microsoft by Brian Harry, June 3, 2015

Oct 24, 2014 excerpt from the web, found via a “Visual Studio Online” “One Microsoft Engineering System” search:

… Visual Studio Online’s goal is to become the single place for all developer targeted services – for both the internal One Microsoft Engineering System and for customers. It provides software development teams with capabilities of project planning, work item management, version control, build automation, test lab management, elastic load test, Application Insights and more. We ship new features every 3 weeks at http://www.visualstudio.com and our adoption is growing at a very rapid clip. Ultimately, our audience is Engineers like YOU! Come onboard to build one of the most mission-critical services that will set the tone for all future engineering practices – inside Microsoft and outside in the developer community!

VS Online makes use of a wide range of technologies on premises and in the cloud, so you’ll have the opportunity to learn new stuff and go deep in many domains. Our key technologies are Azure, SQL Azure, AAD, and ASP.NET MVC on the backend. On the front end we use Knockout to build out an awesome user experience on the web, WPF for VS, and SWT for Eclipse. …

Sept 1, 2015: Cached Software Engineer II career

As Microsoft transforms to a devices + services company, Visual Studio continues to evolve and adapt in significant ways to support this transformation, requiring a strong team to deliver great engineering tools and systems. The Visual Studio Engineering Tools and Systems team is driving big, bold improvements for current and future releases in the ability to operate at a faster cadence by improving daily engineer productivity, speeding up builds, and making other advancements in how the software is built and delivered. This team is tasked with creating the next-generation engineering system that aligns with the One Microsoft Engineering System vision (1ES): an engineering system that allows hundreds of people to work together efficiently and be very productive on one of the most important products at Microsoft, Visual Studio. This team is responsible for designing, creating, implementing, and managing the tools, services, and processes that arm the Developer Division engineers to do their best work.

As of 24 Oct, 2015: Principal Software Engineer Manager – C+E career

… The Tools for Software Engineers team (TSE) has set out to maximize the productivity of all Microsoft engineers and reduce the time from idea to production.

In Satya’s memo to the company he states “In order to deliver the experiences our customers need for the mobile-first and cloud-first world, we will modernize our engineering processes to be customer-obsessed, data-driven, speed-oriented and quality-focused.” Come join us to be a part of this change!

TSE develops and operates a set of engineering tools and services including build tools, build languages (MSBuild), CloudBuild service, drop and artifact services, verification services including unit test execution and code review tools, engineering reporting and analysis services; all working towards a unified, world-class engineering system offering for internal Microsoft needs and third parties.

CloudBuild is at the center of Microsoft 1ES and is helping major groups within the company build faster, more reliably and at scale. CloudBuild serves thousands of developers and builds millions of targets daily in a highly scalable and distributed service running at scale in multiple Data Centers across the world. …

July 31, 2015: 2015 Annual Report > The ambitions that drive us

To carry out our strategy, our research and development efforts focus on three interconnected ambitions:

Reinvent productivity and business processes.

Build the intelligent cloud platform.

Create more personal computing.

Reinvent productivity and business processes

We believe we can significantly enhance the lives of our customers using our broad portfolio of communication, productivity, and information services that spans devices and platforms. Productivity will be the first and foremost objective, to enable people to meet and collaborate more easily, and to effectively express ideas in new ways. We will design applications as dual-use with the intelligence to partition data between work and life while respecting each person’s privacy choices. The foundation for these efforts will rest on advancing our leading productivity, collaboration, and business process tools including Skype, OneDrive, OneNote, Outlook, Word, Excel, PowerPoint, Bing, and Dynamics. With Office 365, we provide these familiar industry-leading productivity and business process tools as cloud services, enabling access from anywhere and any device. This creates an opportunity to reach new customers, and expand the usage of our services by our existing customers.

We see opportunity in combining our offerings in new ways that are more contextual and personal, while ensuring people, rather than their devices, remain at the center of the digital experience. We will offer our services across ecosystems and devices outside our own. As people move from device to device, so will their content and the richness of their services. We are engineering our applications so users can find, try, and buy them in friction-free ways.

Build the intelligent cloud platform

In deploying technology that advances business strategy, enterprises decide what solutions will make employees more productive, collaborative, and satisfied, and connect with customers in new and compelling ways. They work to unlock business insights from a world of data. To achieve these objectives, increasingly businesses look to leverage the benefits of the cloud. Helping businesses move to the cloud is one of our largest opportunities, and we believe we work from a position of strength.

The shift to the cloud is driven by three important economies of scale: larger datacenters can deploy computational resources at significantly lower cost per unit than smaller ones; larger datacenters can coordinate and aggregate diverse customer, geographic, and application demand patterns, improving the utilization of computing, storage, and network resources; and multi-tenancy lowers application maintenance labor costs for large public clouds. As one of the largest providers of cloud computing at scale, we are well-positioned to help businesses move to the cloud so that businesses can focus on innovation while leaving non-differentiating activities to reliable and cost-effective providers like Microsoft.

With Azure, we are one of very few cloud vendors that run at a scale that meets the needs of businesses of all sizes and complexities. We believe the combination of Azure and Windows Server makes us the only company with a public, private, and hybrid cloud platform that can power modern business. We are working to enhance the return on information technology (“IT”) investment by enabling enterprises to combine their existing datacenters and our public cloud into a single cohesive infrastructure. Businesses can deploy applications in their own datacenter, a partner’s datacenter, or in our datacenters with common security, management, and administration across all environments, with the flexibility and scale they want.

We enable organizations to securely adopt software-as-a-service applications (both our own and third-party) and integrate them with their existing security and management infrastructure. We will continue to innovate with higher-level services including identity and directory services that manage employee corporate identity and manage and secure corporate information accessed and stored across a growing number of devices, rich data storage and analytics services, machine learning services, media services, web and mobile backend services, and developer productivity services. To foster a rich developer ecosystem, our digital work and life experiences will also be extensible, enabling customers and partners to further customize and enhance our solutions, achieving even more value. This strategy requires continuing investment in datacenters and other infrastructure to support our devices and services.

Create more personal computing

Windows 10 is the cornerstone of our ambition to usher in an era of more personal computing. We see the launch of Windows 10 in July 2015 as a critical, transformative moment for the Company because we will move from an operating system that runs on a PC to a service that can power the full spectrum of devices in our customers’ lives. We developed Windows 10 not only to be familiar to our users, but more safe and secure, and always up-to-date. We believe Windows 10 is more personal and productive, working seamlessly with functionality such as Cortana, Office, Continuum, and universal applications. We designed Windows 10 to foster innovation – from us, our partners and developers – through experiences such as our new browser Microsoft Edge, across the range of existing devices, and into entirely new device categories.

Our ambition for Windows 10 is to broaden our economic opportunity through three key levers: an original equipment manufacturer (“OEM”) ecosystem that creates exciting new hardware designs for Windows 10; our own commitment to the health and profitability of our first-party premium device portfolio; and monetization opportunities such as services, subscriptions, gaming, and search. Our OEM partners are investing in an extensive portfolio of hardware designs and configurations as they ready for Windows 10. By December 2015, we anticipate the widest range of Windows hardware ever to be available.

With the launch of Windows 10, we are realizing our vision of a single, unified Windows operating system on which developers and OEMs can contribute to a thriving Windows ecosystem. We invest heavily to make Windows the most secure, manageable, and capable operating system for the needs of a modern workforce. We are working to create a broad developer opportunity by unifying the installed base to Windows 10 through upgrades and ongoing updates, and by enabling universal Windows applications to run across all device targets. As part of our strategic objectives, we are committed to designing and marketing first-party devices to help drive innovation, create new categories, and stimulate demand in the Windows ecosystem, including across PCs, phones, tablets, consoles, wearables, large multi-touch displays, and new categories such as the HoloLens holographic computing platform. We are developing new input/output methods like speech, pen, gesture, and augmented reality holograms to power more personal computing experiences with Windows 10.

Our future opportunity

There are several distinct areas of technology that we aim to drive forward. Our goal is to lead the industry in these areas over the long-term, which we expect will translate to sustained growth. We are investing significant resources in:

Delivering new productivity, entertainment, and business processes to improve how people communicate, collaborate, learn, work, play, and interact with one another.

Establishing the Windows platform across the PC, tablet, phone, server, other devices, and the cloud to drive a thriving ecosystem of developers, unify the cross-device user experience, and increase agility when bringing new advances to market.

Building and running cloud-based services in ways that unleash new experiences and opportunities for businesses and individuals.

Developing new devices that have increasingly natural ways to interact with them, including speech, pen, gesture, and augmented reality holograms.

Applying machine learning to make technology more intuitive and able to act on our behalf, instead of at our command.

We believe the breadth of our products and services portfolio, our large global partner and customer base, our growing ecosystem, and our ongoing investment in innovation position us to be a leader in these areas and differentiate ourselves from competitors.

Regarding the digital work and life experiences, see my earlier Satya Nadella on “Digital Work and Life Experiences” supported by “Cloud OS” and “Device OS and Hardware” platforms – all from Microsoft post of July 23, 2014:

Those ambitions are also reporting segments now
Oct 22, 2015: Earnings Release FY16 Q1

Revenue in Productivity and Business Processes declined 3% (up 4% in constant currency) to $6.3 billion, with the following business highlights:

Office commercial products and cloud services revenue grew 5% in constant currency with Office 365 revenue growth of nearly 70% in constant currency and continued user growth across our productivity offerings

Office 365 consumer subscribers increased to 18.2 million, with approximately 3 million subscribers added in the quarter

Dynamics revenue grew 12% in constant currency, with the Dynamics CRM Online enterprise installed base growing more than 3x year-over-year

Revenue in Intelligent Cloud grew 8% (up 14% in constant currency) to $5.9 billion, with the following business highlights:

Server products and cloud services revenue grew 13% in constant currency, with revenue from premium products and services growing double-digits

Azure revenue and compute usage more than doubled year-over-year

Enterprise Mobility customers more than doubled year-over-year to over 20,000, and the installed base grew nearly 6x year-over-year

Revenue in More Personal Computing declined 17% (down 13% in constant currency) to $9.4 billion, with the following business highlights:

Windows OEM revenue declined 6%, performing better than the overall PC market, as the Windows 10 launch spurred PC ecosystem innovation and helped drive hardware mix toward premium devices

Phone revenue declined 54% in constant currency reflecting our updated strategy

Search advertising revenue excluding traffic acquisition costs grew 29% in constant currency with Bing US market share benefiting from Windows 10 usage

Xbox Live monthly active users grew 28% to 39 million

July 9, 2014: Upcoming VS Online Licensing Changes by Brian Harry

Through the fall and spring, we transitioned VS Online from Preview to General Availability.  That process included changes to branding, the SLA, the announcement of pricing, the end of the early adopter program and more.  We’ve been working closely with customers to understand where the friction is and what we can do to make adopting VS Online as easy as possible.  This is a continuing process and includes discussions about product functionality, compliance and privacy, pricing and licensing, etc.  This is a journey and we’ll keep taking feedback and adjusting.

Today I want to talk about one set of adjustments that we want to make to licensing.

As we ended the early adopter period, we got a lot of questions from customers about how to apply the licensing to their situation.  We also watched as people assigned licenses to their users: What kind of licenses did they choose?  How many people did they choose to remove from their account?  Etc.

From all of this learning, we’ve decided to roll out 2 licensing changes in the next couple of months:

Stakeholders

A common question we saw was “What do I do with all of the stakeholders in my organization?”  While the early adopter program was in effect and all users were free, customers were liberal with adding people to their account.  People who just wanted to track progress or occasionally file a bug or a suggestion were included.  As the early adopter period ended, customers had to decide – Is this really worth $20/user/month (minus appropriate Azure discounts)?  The result was that many of these “stakeholders” were removed from the VS Online accounts in the transition, just adding more friction for the development teams.

As a result of all this feedback we proposed a new “Stakeholder” license for VS Online.  Based on the scenarios we wanted to address, we designed a set of features that matched the needs most customers have.  These include:

Full read/write/create on all work items

Create, run and save (to “My Queries”) work item queries

View project and team home pages

Access to the backlog, including add and update (but no ability to reprioritize the work)

Ability to receive work item alerts

Some of the explicitly excluded items are:

No access to Code, Build or Test hubs.

No access to Team Rooms

No access to any administrative functionality (Team membership, license administration, permissions, area/iterations configuration, sprint configuration, home page configuration, creation of shared queries, etc.)

We then surveyed our “Top Customers” and tuned the list of features (to arrive at what I listed above).  One of the conversations we had with them was about the price/value of this feature set.  We tested 3 different price points – $5/user/month, $2/user/month and free.  Many thought it was worth $5.  Every single one thought it was worth $2.  However, one of the questions we asked was “How many stakeholders would you add to your account at each of these price points?”  The result was 3X more stakeholders if it’s free than if it’s $2.  That told us that any amount of money, even if it is perceived as “worth it”, is too much friction.  Our goal is to enable everyone who has a stake to participate in the development process (and, of course, to run a business in the process).  Ultimately, in balancing the goals of enabling everyone to participate and running a business, we concluded that “free” is the right answer.

As a result, any VS Online account will be able to have an unlimited number of “Stakeholder” users with access to the functionality listed above, at no charge.

Access to the Test Hub

Another point of friction that emerged in the transition was access to the Test hub.  During the Preview, all users had access to the Test hub but, at the end of the early adopter program, the only way to get access to the Test hub was by purchasing Visual Studio Test Professional with MSDN (or one of the other products that include it, like VS Premium or VS Ultimate).

We got ample feedback that there was a class of users who really only need access to the web-based Test functionality and don’t need all that’s in VS Test Professional.

Because of this, we’ve decided to include access to all of the Test hub functionality in the Visual Studio Online Advanced plan.

Timing

I’m letting you know now so that, if you are currently planning your future, you know what is coming.  I’m always loath to get too specific about dates in the future because, as we all know, stuff happens.  However, we are working hard to implement these licensing changes now and my expectation is that we’ve got about 2 sprints of work to do to get it all finished.  That would put the effective date somewhere in the neighborhood of mid-August.  I’ll update you with more certainty as the date gets a little closer.

What about Team Foundation Server?

In general, our goal is to keep the licensing for VS Online and Team Foundation Server as “parallel” as we can – to limit how confusing it could be.  As a result, we will be evolving the current “Work Item Web Access” TFS CAL exemption (currently known as “Limited” users in TFS) to match the “Stakeholder” capabilities.  That will result in significantly more functionality available to TFS users without CALs.  My hope is to get that change made for Team Foundation Server 2013 Update 4.  It’s too early yet to be sure that’s going to be possible but I’m hopeful.  We do not, currently, plan to provide an alternate license for the Test Hub functionality in TFS, though it’s certainly something we’re looking at and may have a solution in a future TFS version.

Conclusion

As I said, it’s a journey and we’ll keep listening.  It was interesting to me to watch the phenomenon of the transition from Preview to GA.  Despite announcing the planned pricing many months in advance, the feedback didn’t get really intense until, literally, the week before the end of the early adopter period when everyone had to finish choosing licenses.

One of the things that I’m proud of is that we were able to absorb that feedback, create a plan, review it with enough people, create an engineering plan and (assuming our timelines hold), deliver it in about 3 months.  In years past that kind of change would take a year or two.

Hopefully you’ll find this change valuable.  We’ll keep listening to feedback and tuning our offering to create the best, most friction-free solution that we can.

Thanks,

Brian

July 7, 2014: TFS Adoption at Microsoft – July 2014 by Brian Harry

Years ago, I used to do monthly updates on TFS adoption at Microsoft.  Eventually, the numbers got so astronomical that it just seemed silly so I stopped doing them.  It’s been long enough and there’s some changes happening that I figured it was worth updating you all on where we are.

First of all, adoption has continued to grow steadily year over year.  We’ve continued to onboard more teams and to deepen the feature set teams are using.  Any major change in the ALM solution of an organization of our size and complexity is a journey.

Let’s start with some stats:

As of today, we have 68 TFS “instances”.  Instance sizes vary from modest hardware up to very large scaled out hardware for the larger teams.  We have over 60K monthly active users and that number is still growing rapidly.  Growth varies month to month and the growth below seems unusually high (over 10%).  I grabbed the latest data I could get my hands on – and that happened to be from April.  The numbers are really staggeringly large.

|              | Current     | 30 day growth |
| ------------ | ----------- | ------------- |
| Unique users | 62,553      | 7,256         |
| TPCs         | 788         | 46            |
| Projects     | 15,581      | 187           |
| Work items   | 42,088,748  | 5,572,355     |
| Source files | 320,224,466 | 11,959,935    |
| Builds/month | 568,190     | 109,764       |
| Test cases   | 9,483,760   | 1,172,495     |

In addition we’ve started to make progress recently with Windows and Office – two of the Microsoft teams with the oldest and most entrenched engineering systems.  They’ve both used TFS in the past for work planning but recently Windows has also adopted TFS for all work management (including bugs) and Office is planning a move.  We’re also working with them on plans to move their source code over.

In the first couple of years of adoption of TFS at Microsoft, I remember a lot of fire drills.  Bringing on so many people and so much data with such mission critical needs really pushed the system and we spent a lot of time chasing down performance (and occasionally availability) problems.  These days things run pretty smoothly.  The system is scaled out enough and the code, and our dev processes have been tuned enough, that for the most part, the system just works.  We upgrade it pretty regularly (a couple of times a year for the breadth of the service, as often as every 3 weeks for our own instances).

As we close in on completing the first leg of our journey – getting all teams at Microsoft onto TFS – we are now beginning the second.  A few months ago, the TFS team and a few engineering systems teams working closely with them moved all of their assets into VS Online – code, work items, builds, etc.  This is a big step and, I think, foreshadows the future for the entire company.  At this point it’s only a few hundred people accessing it but it’s already the largest and most active account on VS Online and it will continue to grow.

It was a big decision for us – and we went through a lot of the same anxieties I hear from anyone wanting to adopt a cloud solution for a mission-critical need.  Will our intellectual property be safe?  What happens when the service goes down?  Will I lose any data?  Will performance be good?  Etc.  Etc.  At the same time, it was important to us to live the life that we are suggesting our customers live – taking the same risks and working to ensure that all of those risks are mitigated.

The benefits of moving are already visible.  I’ve had countless people remark to me how much they’ve enjoyed having access to their work – work items, build status, code reviews, etc. – from any device, anywhere.  No messing with remote desktop or any other connectivity technology.  As part of this, we also bound the account to the Microsoft Active Directory tenant so we can log in using the same corporate credentials as we do for everything else.  Combining this with a move to Office 365/SharePoint Online for our other collaboration workflows has created for us a fantastic mobile, cloud experience.

I’ll see about starting to post some statistics on our move to the cloud.  As I say, at this point, it’s a few hundred people and mostly just the TFS codebase – which is pretty large at this point.  Over time that will grow, but I expect it will be slow – getting larger year over year into a distant future when all of Microsoft has moved to the cloud for our engineering system tools.

I know I have to say this because people will ask.  No, we are not abandoning on-prem TFS.  The vast majority of our customers still use it, and the overwhelming majority of our internal teams still use it (the few hundred people using VS Online is still a rounding error on the more than 60K people using TFS on premises).  We continue to share a codebase between VS Online and TFS and the vast majority of the work we do accrues to both scenarios – and that will continue to be the case.  TFS is here to stay and we’ll keep using it ourselves for a very long time.  At the same time VS Online is here to stay too and our use of it will grow rapidly in the coming years.  It will be a big milestone when the first big product engineering team not associated with building VS Online/TFS moves over to VSO for all of their core engineering system needs – I’ll be sure to let you know when that happens.

Brian

DevOps Journey

Sept 2, 2015: DevOps – Enabling DevOps on the Microsoft Stack by Michael Learned, a Visual Studio ALM Ranger currently focused on DevOps and Microsoft Azure

There’s a lot of buzz around DevOps right now. An organization’s custom software is critical to providing rich experiences and useful data to its business users. Rapidly delivering quality software is no longer an option; it’s a requirement. Gone are the days of lengthy planning sessions and development iterations. Cloud platforms such as Microsoft Azure have removed traditional bottlenecks and helped commoditize infrastructure. Software reigns in every business as the key differentiator and factor in business outcomes. No organization, developer or IT worker can or should avoid the DevOps movement.

DevOps is defined from numerous points of view, but most often refers to removing both cultural and technology barriers between development and operations teams so software can move into production as efficiently as possible. Once software is running in production you need to ensure you can capture rich usage data and feed that data back into development teams and decision makers.

There are many technologies and tools that can help with DevOps. These tools and processes support rapid release cycles and data collection on production applications. On the Microsoft stack, tools such as Release Management drive rapid, predictable releases, and Application Insights helps capture rich app usage data. This article will explore and shed some light on critical tools and techniques used in DevOps, as well as the various aspects of DevOps (as shown in Figure 1).

Figure 1 The Various Aspects of DevOps

The Role of DevOps

Most organizations want to improve their DevOps story in the following areas:

Automated release pipelines in which you can reliably test and release on much shorter cycles.

Once the application is running in production, you need the ability to respond quickly to change requests and defects.

You must capture telemetry and usage data from running production applications and leverage that for data-driven decision making versus “crystal ball” decision making.

Are there silos in your organization blocking those aspects of DevOps? These silos exist in many forms, such as differing tools, scripting languages, politics and departmental boundaries. They are intended to provide separation of duties and to maintain security controls and stability in production.

Despite their intentions, these silos can sometimes impede an organization from achieving many DevOps goals, such as speedy, reliable releases and handling and responding to production defects. In many cases, this silo structure generates an alarming amount of waste. Developers and operations workers have traditionally worked on different teams with different goals. Those teams spend cycles fixing issues caused by these barriers and less time focused on driving the business.

Corporate decision makers need to take a fresh look at the various boundaries to evaluate the true ROI or benefits these silos intend to provide. It’s becoming clear the more you can remove those barriers, the easier it will be to implement DevOps solutions and reduce waste.

It’s a challenge to maintain proper security, controls, compliance and so on while balancing agility needs. Enterprise security teams must ensure data is kept secure and private. Security is arguably as important as anything else an organization does.

However, there’s an associated cost for every security boundary you build. If security boundaries are causing your teams waste and friction, those boundaries deserve a fresh look to ensure they generate ROI. You can be the most secure organization in the world, but if you can’t release software on time you’ll have a competitive disadvantage.

Balancing these priorities isn’t a new challenge, but it’s time for a fresh and honest look at the various processes and silos your organization has built. Teams should all be focused on business value over individual goals.

The Release Pipeline

The release pipeline is where your code is born with version control, then travels through various environments and is eventually released to production. Along the way, you perform automated build and testing. The pipeline should be in a state where moving changes to production is transparent, repeatable, reliable and fast. This will no doubt involve automation. The release pipeline might also include provisioning the application host environment.
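To make “repeatable, reliable and fast” concrete, here is a minimal sketch of one scripted deployment step such a pipeline could run. It assumes the Web Deploy command-line tool (msdeploy.exe); the server, package and parameter-file names are hypothetical, and the point is that the same pre-built package is promoted to every environment rather than rebuilt per environment:

```powershell
# Minimal sketch of an automated deployment step (hypothetical names/paths).
# The same pre-built Web Deploy package is promoted to each environment, so
# no step is manual and nothing is rebuilt between environments.
param(
    [Parameter(Mandatory)] [string] $TargetServer   # e.g. 'qa-web01' or 'prod-web01'
)

& 'C:\Program Files\IIS\Microsoft Web Deploy V3\msdeploy.exe' -verb:sync `
    -source:package='.\drop\MyApp.zip' `
    -dest:auto,computerName=$TargetServer `
    -setParamFile:".\drop\$TargetServer.SetParameters.xml"

if ($LASTEXITCODE -ne 0) { throw "Deployment to $TargetServer failed." }
```

Because the only thing that varies per environment is a parameter file under version control, the deployment is traceable and repeatable in exactly the sense described above.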

Your release pipeline might not be optimized if these factors are present:

Tool and process mismatches, whereby you have different tools and processes in place per environment. (For example, the dev teams deploy with one tool and ops deploy with another.)

Manual steps, which can introduce error.

Re-building just to deploy to the next environment.

You lack traceability and have issues understanding which versions have been released.

Release cycles are lengthy, even for hotfixes.

Provisioning

Provisioning containers is sometimes considered an optional part of a release pipeline. A classic on-premises scenario often exists in which an environment is already running to host a Web application. The IIS Web server or other host and back-end SQL Server have been running through numerous iterations. Rapid releases into these environments deploy only the application code and the subsequent SQL schema and data changes needed to move to the appropriate update levels. In this case, you’re not provisioning fresh infrastructure (both IIS and SQL) to host the application. You’re using a release pipeline that disregards provisioning and focuses only on the application code itself.

There are other scenarios in which you might want to change various container configuration settings. You might need to tweak some app pool settings in IIS. You could implement that as part of the release pipeline or handle it manually. Then you may opt to track those changes in some type of versioning system with an Infrastructure-as-Code (IaC) strategy.

There are several other scenarios in which you would want to provision as part of an automated release pipeline. For example, early in development cycles, you might wish to tear down and rebuild new SQL databases for each release to fully and automatically test the environment.

Cloud computing platforms such as Azure let you pay only for what you need. Using automated setup and tear down can be cost-effective. By automating provisioning and environmental changes, you can avoid error and control the entire application environment. Scenarios like these make it compelling to include provisioning as part of a holistic release management system.

There are many options and techniques for including provisioning as part of your release pipeline. These will differ based on the types of applications you’re hosting and where you host them. One example is hosting a classic ASP.NET Web application versus an Azure Web app or some other Platform-as-a-Service (PaaS) application such as Azure Cloud Services. The containers for those applications are different and require different tooling techniques to support the provisioning steps.

Infrastructure as Code

One popular provisioning technique is IaC. An application is an executable artifact (compiled code, scripts and so on) combined with an operational environment. Treating that environment as versioned, testable code yields many benefits.

Microsoft recently had Forrester Research Inc. conduct a research study on the impact of IaC (see bit.ly/1IiGRk1). The research showed IaC is a critical DevOps component. It also showed provisioning and configuration are a major point of friction for teams delivering software. You’ll need to leverage automation and IaC techniques if you intend to completely fulfill your DevOps goals.

One of the traditional operational challenges is automating the ability to provide appropriate environments in which to execute applications and services, and keeping those environments in known good states. Virtualization and other automation techniques are beneficial, but still have problems keeping nodes in sync and managing configuration drift. Operations and development teams continue to struggle with different toolsets, expertise and processes.

IaC is based on the premise that we should be able to describe, version, execute and test our infrastructure code via an automated release pipeline. For example, you can easily create a Windows virtual machine (VM) configured with IIS using a simple Windows PowerShell script. Operations should be able to use the same ALM tools to script, version and test the infrastructure.
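As a minimal sketch of that premise (the VM-creation half varies by platform and is omitted here; all names are illustrative), the following Windows PowerShell DSC configuration declares that IIS must be installed on a node. Compiling it produces a MOF file that can be versioned, tested and applied like any other code:

```powershell
# A minimal DSC configuration: declare that the Web-Server (IIS) role must be
# present on the target node.
Configuration WebServer
{
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    Node 'localhost'
    {
        WindowsFeature IIS
        {
            Ensure = 'Present'
            Name   = 'Web-Server'
        }
    }
}

# Compiling the configuration emits localhost.mof under .\WebServer;
# Start-DscConfiguration hands it to the LCM on the node to apply.
WebServer -OutputPath .\WebServer
Start-DscConfiguration -Path .\WebServer -Wait -Verbose
```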

Other benefits include being able to spin up and tear down known versions of your environments. You can avoid troublesome issues because of environmental differences between development and production. You can express the application environment-specific dependencies in code and carry them along in version control. In short, you can eliminate manual processes and ensure you’ve tested reliable automated environment containers for your applications. Development and operations can use common scripting languages and tools and achieve those efficiencies.

The application type and intended host location will dictate the tooling involved for executing your infrastructure code. There are several tools gaining popularity to support these techniques, including Desired State Configuration (DSC), Puppet, Chef and more. Each helps you achieve similar goals based on the scenario at hand.

The code piece of IaC could be one of several things. It could simply be Windows PowerShell scripts that provision resources. Again, the application types and hosting environment will dictate your choices here.

For Azure, you can use Cloud Deployment Projects that leverage Azure Resource Management APIs to create and manage Azure Resource Groups. This lets you describe your environments with JSON. Azure Resource Groups also let you manage related resources together, such as Web sites and SQL databases. With cloud deployment projects, you can store your provisioning requirements in version control and perform Azure provisioning as part of an automated release pipeline. Here are the sections that make up the basic structure of a provisioning template:
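The listing itself did not survive extraction; as a reconstruction, the standard top-level sections of a 2015-era Azure Resource Manager template are $schema, contentVersion, parameters, variables, resources and outputs. A minimal sketch, written from PowerShell so the skeleton can sit in version control alongside the pipeline scripts:

```powershell
# Minimal Azure Resource Manager template skeleton (standard top-level
# sections; the empty bodies are placeholders to fill per environment).
$template = @'
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": { },
  "variables": { },
  "resources": [ ],
  "outputs": { }
}
'@

# Store it with the source so provisioning is versioned like any other code.
Set-Content -Path .\azuredeploy.json -Value $template
```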

Figure 3 Separate Configuration Data Within a DSC Script

For more information on templates, go to bit.ly/1RQ3gvg, and for more on cloud deployment projects, check out bit.ly/1flDH3m.

The scripting languages and tooling are only part of the changes needed to successfully adopt an IaC strategy. Development and operations teams must work together to integrate their work streams toward a common set of goals. This can be challenging because historically operations teams have focused on keeping environments stable and development teams are more focused on introducing new features into those environments. Sophisticated technologies are emerging, but the foundation of a successful IaC implementation will depend on the ability of the operations and development teams to effectively collaborate.

Release Orchestration

Release Management is a technology in the Visual Studio ALM stack, but it’s really more of a concept whereby you can orchestrate the various objects and tasks that encompass a software release.  A few of these artifacts include the payload or package produced by a build system, the automated testing that happens as part of a release pipeline, approval workflows, notifications and security governance to control environments closer to production.

You can use technologies such as DSC, Windows PowerShell scripts, Azure Resource Manager, Chef, and so on to manage environment state and install software and dependencies into running environments. In terms of tooling provided by Visual Studio ALM, think of Release Management as the service that wraps around whatever technologies and tools you need to execute the deployments. Release Management might leverage simple command-line or Windows PowerShell scripts, use DSC, or even execute your own custom tools. You should aim to use the simplest solution possible to execute your releases.

It’s also a good practice to rely on Windows PowerShell because it’s ubiquitous. For example, you can use Windows PowerShell scripts as part of a release pipeline to deploy Azure Cloud Services (see the sketch after Figure 2). There are a lot of out-of-the-box tools with Release Management (see Figure 2), but you also have the flexibility to create your own.

Figure 2 Tools and Options Available for Release Management
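As one example of the Windows PowerShell route, here is a minimal sketch of a release step that deploys a packaged Azure Cloud Service. It assumes the classic Azure Service Management PowerShell module of that era; the subscription, service and file names are hypothetical:

```powershell
# Minimal sketch of a Release Management step for an Azure Cloud Service
# (hypothetical names; assumes the classic Azure Service Management module).
Add-AzureAccount
Select-AzureSubscription -SubscriptionName 'MySubscription'

# Push the build output (.cspkg + .cscfg produced by the build) to staging;
# a later, approved step could then swap staging into production.
New-AzureDeployment -ServiceName 'MyCloudService' -Slot 'Staging' `
    -Package '.\drop\MyApp.cspkg' `
    -Configuration '.\drop\ServiceConfiguration.Cloud.cscfg' `
    -Label "Release $(Get-Date -Format s)"
```

Release Management would wrap a script like this in approvals, notifications and per-environment security, which is the orchestration the paragraph above describes.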

Release Management can help you elegantly create an automated release pipeline and produce reliable automated application releases. You can also opt to include provisioning.  The Release Management tooling with Visual Studio and Team Foundation Server can help you orchestrate these artifacts into the overall release transaction. It also provides rich dashboard-style views into your current and historical states. There’s also rich integration with Team Foundation Server and Visual Studio Online.

Where Does DSC Fit In?

There has been a lot of press about DSC lately. DSC is not, however, some all-encompassing tool that can handle everything. You’ll use DSC as one of the tools in your DevOps structure, not the only tool.

You can use DSC in pull or push modes. Then you can use the “make it so” phase to control the server state. Controlling that state can be as simple as ensuring a file or directory exists, or something more complex such as modifying the registry, stopping or starting services, or running scripts to deploy an application. You can do this repeatedly without error. You can also define your own DSC resources or leverage a large number of built-in resources.
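For instance, the “ensure a file or directory exists” case from the paragraph above can be expressed with the built-in File resource. This is a minimal sketch with an illustrative path; re-applying it is safe because the “make it so” phase only changes what has drifted from the declared state:

```powershell
# Declare that a drop folder must exist on the node; applying this
# configuration repeatedly is idempotent.
Configuration AppDropFolder
{
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    Node 'localhost'
    {
        File DropFolder
        {
            Ensure          = 'Present'
            Type            = 'Directory'
            DestinationPath = 'C:\AppDrop'   # illustrative path
        }
    }
}
```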

DSC is implemented as a Local Configuration Manager (LCM) running on a target node, accepting a Managed Object Format (MOF) configuration file and using it to apply configuration to the node itself.  So there’s no hard-coupled tool. You don’t even have to use Windows PowerShell to produce the MOF.

To start using DSC, you simply produce the MOF file, which describes the various resources to execute; those resources end up written mostly in Windows PowerShell. One of the big advantages of DSC on Windows Server-based systems is that the LCM is native to the OS, giving you the concept of a built-in agent.
