In the last post we discussed in great detail the process of carefully planning your Hybrid Cloud. The planning process is vital, and in this post we'll see the time spent planning really start to pay off.
Below, I'll discuss the four steps that I believe are crucial when building any Hybrid environment.
Yes, I know this may sound obvious, but it underscores the idea that a Hybrid Cloud is often a function of building upon and improving what you already have – in this case, it’s a matter of adding public and/or service-provider cloud tiers to your on-prem datacenter. There are four primary components of a Microsoft Hybrid Cloud: Windows Server, Windows Azure, System Center, and Windows Azure Pack. For more on building a private cloud, check out these posts from Building Clouds.
In order to adopt aspects of the public cloud into your IT environment, it’s critical to ensure you’ve already started down that path with your on-premises IT capacity and services. One of the goals of a hybrid cloud is to allow IT and the business to make the best decisions about how and where workloads and data get deployed and run, without putting an undue burden on the application owners themselves.
One thing to be mindful of is avoiding the problems that can arise from integrating a modern public cloud experience into a traditional IT structure. If your infrastructure is not ready for these modern features, users can encounter inconsistent experiences, delays, and a lack of resources that will sour them on IT's key role in running their services.
When users perceive both options as appealing and useful, they will make the best decisions for the business (i.e. avoiding issues like shadow IT), not just for their own convenience.
Having established that your on-prem resources are ready and that you've identified the public cloud providers and hosters your organization will use to augment IT's services, everything will now just snap into place, right? If only the world were that simple.
One of the most important aspects of extending applications and services into another cloud is determining how to ensure that access to these services is consistent for your users.
That means asking questions about your Hybrid Cloud up front:

- Will services expose their endpoints directly on the internet?
- How will users who aren't on your private networks authenticate?
- Will services need access to data or other services that IT manages on-prem?
- How will traffic between your datacenter and the cloud be secured?
Answers to these questions will drive a lot of decisions about your Hybrid Cloud. For example, if services can expose their endpoints directly on the internet, a method for authenticating users who aren’t on your private networks becomes critical. Also keep in mind how services like ADFS extend your IT-managed identity services to the internet in a reliable way.
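To make the identity piece a bit more concrete, here's a minimal Python sketch of what a service endpoint might do with a federated security token before trusting it: validate the signature, issuer, and audience, then read the claims. It uses the PyJWT library and assumes a JWT-format token; the issuer, audience, and key values are hypothetical placeholders, not real AD FS configuration:

```python
# Hedged sketch: validating a federated security token at a service endpoint.
# Assumes a JWT signed with RS256 (pip install "pyjwt[crypto]"); the issuer,
# audience, and signing key below are hypothetical placeholders.
import jwt  # PyJWT

def validate_token(token: str, issuer_public_key: str) -> dict:
    """Return the token's claims only if signature, issuer, and audience check out."""
    claims = jwt.decode(
        token,
        issuer_public_key,                              # public key of the token issuer
        algorithms=["RS256"],
        audience="https://app.contoso.com",             # hypothetical relying-party identifier
        issuer="https://sts.contoso.com/adfs/services/trust",  # hypothetical issuer
    )
    # e.g. claims["upn"] identifies the user for authorization decisions
    return claims
```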
If services will require access to data or other services managed by IT, careful consideration needs to be made about how to connect these services while protecting your data and networks. A public cloud like Azure offers many different ways to connect these services.
One option is Windows Azure Service Bus. Service Bus provides a secure and scalable way of passing traffic between services over internet-standard protocols. More traditional networking-based options include Site-to-Site VPN connections and IPsec packet tunnels, which can connect individual services to a corporate network easily and cost-effectively. For large-scale deployments, Microsoft is investing in high-speed interconnect options like MPLS connections to Azure with carriers such as AT&T. To learn more about Service Bus, check out this post from the What's New in 2012 R2 series.
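If you're wondering what the Service Bus path looks like in practice, here's a minimal sketch using the azure-servicebus Python SDK (a newer, v7-style API than the 2012 R2 wave discussed here); the connection string and queue name are hypothetical placeholders:

```python
# Hedged sketch: relaying messages between an on-prem service and a cloud service
# through an Azure Service Bus queue. Requires `pip install azure-servicebus`;
# the connection string and queue name are hypothetical placeholders.
from azure.servicebus import ServiceBusClient, ServiceBusMessage

CONN_STR = "Endpoint=sb://<namespace>.servicebus.windows.net/;SharedAccessKeyName=...;SharedAccessKey=..."
QUEUE_NAME = "orders"  # hypothetical queue name

def send_order(payload: str) -> None:
    # The on-prem service pushes a message; no inbound firewall ports are needed
    # because both sides make outbound connections to the Service Bus namespace.
    with ServiceBusClient.from_connection_string(CONN_STR) as client:
        with client.get_queue_sender(QUEUE_NAME) as sender:
            sender.send_messages(ServiceBusMessage(payload))

def receive_orders() -> None:
    # The cloud-hosted service pulls messages at its own pace.
    with ServiceBusClient.from_connection_string(CONN_STR) as client:
        with client.get_queue_receiver(QUEUE_NAME, max_wait_time=5) as receiver:
            for msg in receiver:
                print(str(msg))
                receiver.complete_message(msg)
```

Either side can live on-prem or in the cloud; the point is that the queue decouples the two and keeps the traffic on well-understood, internet-standard protocols.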
Making these decisions early in the planning and building process will speed your implementation and deployment, as the appropriate teams can be engaged early in the design.
If you want your hybrid cloud to be as transparent as possible to your users, providing a consistent self-service experience is a critical first step. This is where Windows Azure Pack comes into play.
WAP isn’t just a pretty interface – it’s a powerful API that is shared with the Windows Azure public cloud. This means that service owners can design their applications around a consistent platform, management interface, and set of APIs. Having this level of consistency is going to directly result in a more dynamic IT environment and nimbler application development.
Because all of these services are driven by the Service Management API, exposed via RESTful web services, you can easily connect an IT Service Management system to them via automation. Now you can take your existing ITSM processes that are tried, tested, and approved, and put them in front of a powerful computing platform. This brings us to our last step.
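As a rough illustration of what that automation hook could look like, here's a Python sketch in which an ITSM workflow step calls a RESTful service management endpoint once a change request is approved. The base URL, resource path, payload fields, and token are hypothetical placeholders, not the actual Service Management API schema:

```python
# Hedged sketch: an approved ITSM change request triggering a provisioning call
# against a RESTful service management API. The endpoint URL, resource path,
# payload fields, and token are hypothetical placeholders.
import requests

API_BASE = "https://cloudapi.contoso.com"  # hypothetical service management endpoint
TOKEN = "<bearer-token-from-your-identity-provider>"

def provision_vm(change_request_id: str, vm_name: str, size: str) -> str:
    """Submit a VM provisioning request and return a tracking ID for the ITSM record."""
    resp = requests.post(
        f"{API_BASE}/subscriptions/self/virtualmachines",  # hypothetical resource path
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={
            "name": vm_name,
            "size": size,
            "tags": {"changeRequest": change_request_id},  # tie the deployment back to the ITSM record
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("trackingId", "")
```

The mechanics are deliberately mundane: because the platform speaks REST, whatever your ITSM tool can do with an HTTP client, it can do against your cloud.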
Trust me, this is a concrete topic, not something from the marketing team. Consider that, at this step, you’ve now done everything previously discussed to create a hybrid cloud that can respond to the demands of your business – does that mean you should hand over this web portal to your users and hope for the best? The reality is that there is going to need to be some IT involvement in the ongoing use of the cloud.
For the foreseeable future, IT is going to play a key role because of their knowledge about the technical and functional aspects of this cloud – and this expertise should work hand-in-hand with the needs communicated by the business leaders who will be finding ways to maximize the value of all this newfound flexibility.
By working with the IT department, business leaders can better understand the impact of technical decisions and changes. Also, if the leadership has an important update to a business system, and this update is being designed as a cloud app, the IT team will be able to advise on where that app should be deployed, and how to map the app requirements like SLAs, security, and data sovereignty onto the hybrid cloud – and how to do all of this cost effectively.
When making this transition, consider how your processes will change when it comes to managing things in a traditional on-prem environment vs. a Hybrid environment. Consider, for example, simple things like: What is your SLA? Where is your data? How do you handle disasters?
An application with a demanding SLA often needs to be designed differently for a private or public cloud deployment. Different cloud vendors offer different SLAs for different deployments and apps. Technologies like failover clustering are available more readily in a private cloud than a public one. Maintaining SLAs on an application with tiers in entirely different clouds means carefully looking at your network designs.
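To see why tiers in different clouds put pressure on an SLA, a quick back-of-the-envelope calculation helps. The availability figures below are illustrative assumptions, not any vendor's published numbers:

```python
# Hedged sketch: composite availability of an app whose tiers depend on each
# other in series. The figures are illustrative assumptions, not published SLAs.
web_tier  = 0.9995   # public cloud web tier
vpn_link  = 0.999    # site-to-site VPN between the clouds
data_tier = 0.999    # private cloud data tier

composite = web_tier * vpn_link * data_tier
print(f"Composite availability: {composite:.4%}")
# -> roughly 99.75%, i.e. about 22 hours of potential downtime per year,
#    noticeably worse than any single tier's figure on its own.
```

Every serial dependency multiplies in, which is why network design and placement decisions belong in the SLA conversation from the start.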
Regional or industry-specific regulations may demand that some data remains in a set physical or geographic location, but this is easily addressed during the build by identifying the right combination of hosting options, whether those are private clouds or geo-located public/hosted options. If regulations allow for data access to occur from the cloud (while the data itself remains on-prem), then data can be kept in the private cloud while the web tier runs in the public cloud.
Building a DR plan for a cloud app will often depend on answers to some of the questions covered above. DR is an incredibly important topic that I've discussed several times in the past, and it's something we've extensively planned for in the R2 products.
An app that can live entirely in a public cloud can usually be designed around distributed geolocation, which provides a built-in DR capability. Alternatively, an app that demands a private cloud often needs a DR plan designed for it explicitly. Taking all of this into account, during the build you will need to identify whether you have multiple private clouds or a trusted hoster who can provide DR capacity when needed, and you'll need to test your DR plans to ensure they work as expected.
* * * *
Having looked at Planning and Building a Hybrid Cloud, we’re only halfway there. In the next two posts we’ll look at best practices for Deploying and Operating a Hybrid Environment.
As always, if you have questions or ideas, don't hesitate to get in touch!