Do you need rugged, compact-sized hyperconverged infrastructure (HCI) enabled servers to run your branch office and edge workloads? Do you want to modernize your applications and IoT functions with container technology? Do you want to leverage Azure’s hybrid services such as backup, disaster recovery, update management, monitoring, and security compliance?
Microsoft and HPE have teamed up to validate the HPE Edgeline EL8000 Converged Edge system for Microsoft’s Azure Stack HCI program. Designed specifically for space-constrained environments, the HPE Edgeline EL8000 Converged Edge system has a unique 17-inch depth form factor that fits into limited infrastructures too small for other x86 systems. The chassis has an 8.7-inch width, which brings additional flexibility for deploying at the deep edge, whether in a telco environment, a mobile vehicle, or a manufacturing floor. This Network Equipment-Building System (NEBS) compliant system delivers secure scalability.
The HPE Edgeline EL8000 Converged Edge system provides:
Traditional x86 compute optimized for edge deployments, far from the traditional data center, without sacrificing compute performance.
Edge-optimized remote system management with wireless capabilities based on the Redfish industry standard.
Compact form factor, with short-depth and half-width options.
Rugged, modular form factor for secure scalability and serviceability in edge and hostile environments, including NEBS level three and American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) level three/four compliance.
Broad accelerator support for emerging edge artificial intelligence (AI) use cases, including field-programmable gate arrays (FPGAs) and graphics processing units (GPUs).
Up to four independent compute nodes, which are cluster-ready with embedded networks.
The HPE Edgeline EL8000 Converged Edge system offers flexibility of choice for compute density or for input/output expansion. These compact, ruggedized systems offer high-performance capacity to support the use cases that matter most, including media streaming, IoT, AI, and video analytics. The HPE Edgeline EL8000 is a versatile platform that enables edge compute transformation: as use case requirements change, the system’s flexible, modular architecture can scale to meet them.
Seamless management and security features with HPE Edgeline Chassis Manager
The HPE Edgeline EL8000 Converged Edge system features the HPE Edgeline Chassis Manager, which limits downtime by providing system-level health monitoring and alerts. It increases efficiency and reliability by managing chassis fan speeds for each installed server blade and by monitoring the health and status of the power supply. It also simplifies firmware upgrade management and implementation.
Microsoft Azure Stack HCI:
Azure Stack HCI solutions bring together highly virtualized compute, storage, and networking on industry-standard x86 servers and components. Combining resources in the same cluster makes it easier for you to deploy, manage, and scale. Manage with your choice of command-line automation or Windows Admin Center.
Achieve industry-leading virtual machine performance for your server applications with Hyper-V, the foundational hypervisor technology of the Microsoft cloud, and Storage Spaces Direct technology with built-in support for non-volatile memory express (NVMe), persistent memory, and remote direct memory access (RDMA) networking.
Help keep apps and data secure with shielded virtual machines, network microsegmentation, and native encryption.
You can take advantage of cloud and on-premises resources working together, with a hyperconverged infrastructure platform connected to the public cloud. Your team can start building cloud skills with built-in integration to Azure infrastructure management services, including:
Azure Site Recovery for high availability and disaster recovery as a service (DRaaS).
Azure Monitor, a centralized hub to track what’s happening across your applications, network, and infrastructure – with advanced analytics powered by AI.
Cloud Witness, to use Azure as the lightweight tie breaker for cluster quorum.
Azure Backup for offsite data protection and to protect against ransomware.
Azure Update Management for update assessment and update deployments for Windows virtual machines (VMs) running in Azure and on-premises.
Azure Network Adapter to connect resources on-premises with your VMs in Azure via a point-to-site virtual private network (VPN).
Sync your file server with the cloud, using Azure File Sync.
Azure Arc for Servers to manage role-based access control, governance, and compliance policy from the Azure portal.
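Of these services, Cloud Witness is easy to reason about concretely: a failover cluster maintains quorum through a simple majority of votes, and the witness contributes one extra vote. The sketch below illustrates only that arithmetic, not the Windows Server implementation:

```python
def has_quorum(total_votes: int, votes_present: int) -> bool:
    """A cluster keeps quorum while a strict majority of votes is reachable."""
    return votes_present > total_votes // 2

# Two-node cluster without a witness: losing either node loses quorum.
assert not has_quorum(total_votes=2, votes_present=1)

# Adding Cloud Witness contributes a third vote, so the surviving node
# plus the witness (2 of 3 votes) keeps the cluster online.
assert has_quorum(total_votes=3, votes_present=2)
```

For a two-node branch office cluster, that extra lightweight vote in Azure is what lets the cluster survive the loss of either node.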
By deploying the Microsoft and HPE HCI solution, you can quickly solve your branch office and edge needs with high performance and resiliency while protecting your business assets by enabling the Azure hybrid services built into the Azure Stack HCI branch office and edge solution.
At the recent Microsoft Ignite 2019 conference, we introduced two new and related perspectives on the future and roadmap of edge computing.
Before getting further into the details of Network Edge Compute (NEC) and Multi-access Edge Compute (MEC), let’s take a look at the key scenarios that are emerging in line with 5G network deployments. For a decade, we have been working with customers to move their workloads from their on-premises locations to Azure to take advantage of the massive economies of scale of the public cloud. We get this scale with the ongoing build-out of new Azure regions and the constant increase of capacity in our existing regions, reducing the overall costs of running data centers.
For most workloads, running in the cloud is the best choice. Our ability to innovate and run Azure as efficiently as possible allows customers to focus on their business instead of managing physical hardware and associated space, power, cooling, and physical security. Now, with the advent of 5G mobile technology promising larger bandwidth and better reliability, we see significant requirements for low latency offerings to enable scenarios such as smart-buildings, factories, and agriculture. The “smart” prefix highlights that there is a compute-intensive workload, typically running machine learning or artificial intelligence-type logic, requiring compute resources to execute in near real-time. Ultimately, the latency, or the time from when data is generated to the time it is analyzed and a meaningful result is available, becomes critical for these smart scenarios. Latency has become the new currency, and to reduce latency we need to move the required computing resources closer to the sensors, data origin, or users.
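To put rough numbers on why distance matters: even ignoring processing and queuing entirely, the speed of light in fiber puts a floor under round-trip time that only proximity can lower. A minimal sketch, using the common approximation of roughly 200 km of fiber per millisecond (an illustrative figure, not a measured one):

```python
# Light in optical fiber covers roughly 200 km per millisecond; the
# propagation floor on round-trip latency is therefore set by distance alone.
FIBER_KM_PER_MS = 200.0

def round_trip_ms(distance_km: float) -> float:
    """Round-trip propagation delay, ignoring queuing and processing time."""
    return 2 * distance_km / FIBER_KM_PER_MS

assert round_trip_ms(2000) == 20.0  # a distant region: 20 ms gone before any work happens
assert round_trip_ms(20) == 0.2     # an edge site: propagation delay is negligible
```

No amount of server-side optimization recovers that propagation floor, which is the core argument for moving compute to the edge.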
Multi-access Edge Compute: The intersection of compute and networking
Internet of Things (IoT) creates incredible opportunities, but it also presents real challenges. Local connectivity in the enterprise has historically been limited to Ethernet and Wi-Fi. Over the past two decades, Wi-Fi has become the de-facto standard for wireless networks, not because it is necessarily the best solution, but because of its entrenchment in the consumer ecosystem and the lack of alternatives. Our customers from around the world tell us that deploying Wi-Fi to service their IoT devices requires compromises on coverage, bandwidth, security, manageability, reliability, and interoperability/roaming. For example, autonomous robots require better bandwidth, coverage, and reliability to operate safely within a factory. Airports generally have decent Wi-Fi coverage inside the terminals, but on the tarmac, coverage often drops significantly, making it unsuitable for powering the smart airport.
Next-gen private cellular connectivity greatly improves bandwidth, coverage, reliability, and manageability. Through the combination of local compute resources and private mobile connectivity (private LTE), we can enable many new scenarios. For instance, in the smart factory example above, customers can now run their robotic control logic highly available and independent of connectivity to the public cloud. MEC helps ensure that operations and any associated critical first-stage data processing remain up and production can continue uninterrupted.
With its promise of near-infinite compute and storage, the cloud is ideal for large data-intensive and computational tasks, such as machine learning jobs for predictive maintenance analytics. At this year’s Ignite conference, we shared our thoughts and experience, along with a technology preview of MEC with Azure. The technology preview brings private mobile network capabilities to Azure Stack Edge, an on-premises compute platform managed from Azure. In practical terms, the MEC allows the robots to be controlled locally, even if the factory suffers a network outage.
From an edge computing perspective, containers run across both Azure Stack Edge and Azure. A key aspect is that the same programming paradigm can be used for Azure and the edge-based MEC platform. Code can be developed and tested in the cloud, then seamlessly deployed at the edge. Developers can take advantage of the vast array of DevOps tools and solutions available in Azure and apply them to new edge scenarios. The MEC technology preview focuses on the simplified experience of cross-premises deployment and operations of managed compute and Virtual Network Functions with integration to existing Azure services.
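One way to picture the “develop in the cloud, deploy at the edge” model is containerized code whose logic is identical in both places, with only configuration differing per deployment target. A hedged sketch (the environment variable name and threshold are hypothetical, not part of any Azure SDK):

```python
import os

def process(reading: float) -> dict:
    """Identical analysis logic whether the container runs in Azure or at the edge."""
    return {"reading": reading, "alert": reading > 75.0}

# The deployment target is configuration, not code: the same container image
# reads its results destination from the environment (illustrative name).
RESULTS_ENDPOINT = os.environ.get("RESULTS_ENDPOINT", "https://example.invalid/results")

assert process(80.0)["alert"] is True
assert process(50.0)["alert"] is False
```

Because the image is the same artifact everywhere, the DevOps pipeline that tests it against a cloud endpoint can promote it unchanged to the MEC platform.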
Network Edge Compute
Whereas Multi-access Edge Compute (MEC) is intended to be deployed at the customer’s premises, Network Edge Compute (NEC) is the network carrier equivalent, placing the edge computing platform within the carrier’s network. Last week we announced the initial deployment of our NEC platform in AT&T’s Dallas facility. Instead of needing to access applications and games running in the public cloud, software providers can bring their solutions physically closer to their end-users. At AT&T’s Business Summit, working with Taqtile, we gave an augmented reality demonstration of performing maintenance on an aircraft landing gear.
The HoloLens user sees the real landing gear alongside a virtual manual, with specific parts of the landing gear virtually highlighted. This mixing of real-world and virtual objects displayed via HoloLens is what is often referred to as augmented reality (AR) or mixed reality (MR).
Mixed reality use cases such as remote assistance can revolutionize several industrial automation scenarios. Lower latencies and higher bandwidth, coupled with local compute, enable new remote rendering scenarios that reduce battery consumption in handsets and MR devices.
Attabotics provides a robotic warehousing and fulfillment system for the retail and supply chain industries. Attabotics employs robots (Attabots) for storage and retrieval of goods from a grid of bins. A typical storage structure has about 100,000 bins and is serviced by between 60 and 80 Attabots. Azure Sphere powers the robots themselves. Communications using Wi-Fi or traditional 900 MHz spectrum do not meet the scale, performance, and reliability requirements.
The Nexus robot control system, used for command and control of the warehousing system, is built natively on Azure and uses Azure IoT Central for telemetry. With a Private LTE (CBRS) radio from our partners Sierra Wireless and Ruckus Wireless and packet core partner Metaswitch, we enabled the Attabots to communicate over a private LTE network. Reduced latency and improved reliability made the warehousing solution more efficient. The entire warehousing solution, including the private LTE network for a warehouse, runs on a single Azure Stack Edge.
Multi-player online gaming is one of the canonical scenarios for low-latency edge computing. Game Cloud Studios has developed a game based on Azure PlayFab, called Tap and Field. The game backend and controls run on Azure, while the game server instances reside and run on the NEC platform. Lower latencies mean better gaming experiences for nearby players at e-sports events, arcades, arenas, and similar venues.
The proliferation of drone use is disrupting many industries, from security and privacy to the delivery of goods. Air Traffic Control operations are on the cusp of one of the most significant disruptive events in the field, going from monitoring only dozens of aircraft today to thousands tomorrow. This necessitates a sophisticated near real-time tracking system. Vorpal VigilAir has built a solution where drone and operator tracking is done using a distributed sensor network powered by a real-time tracking application running on the NEC.
Data-driven digital agriculture solutions
Azure FarmBeats is an Azure solution that aggregates agriculture datasets across providers and generates actionable insights by building artificial intelligence (AI) or machine learning (ML) models that fuse those datasets. Gathering datasets from sensors distributed across the farm requires a reliable private network, and generating insights requires a robust edge computing platform that can operate in a disconnected mode in remote locations, where connectivity to the cloud is often sparse. Our solution, based on the Azure Stack Edge along with a managed private LTE network, offers a reliable and scalable connectivity fabric along with the right compute resources close to the farm.
MEC, NEC, and Azure: Bringing compute everywhere
MEC enables a low-latency connected Azure platform in your location, NEC provides a similar platform in a network carrier’s central office, and Azure provides a vast array of cloud services and controls.
At Microsoft, we fundamentally believe in providing options for all customers. Because it is impractical to deploy Azure datacenters in every major metropolitan city throughout the world, our new edge computing platforms provide a solution for specific low-latency application requirements that cannot be satisfied in the cloud. Software developers can use the same programming and deployment models for containerized applications using MEC where private mobile connectivity is required, deploying to NEC where apps are optimally located outside the customer’s premises, or directly in Azure. Many applications will look to take advantage of combined compute resources across the edge and public cloud.
We are building a new extended platform and continue to work with the growing ecosystem of mobile connectivity and edge computing partners. We are excited to enable a new wave of innovation unleashed by the convergence of 5G, private mobile connectivity, IoT and containerized software environments, powered by new and distributed programming models. The next phase of computing has begun.
Do you need rugged, compact-sized hyperconverged infrastructure (HCI) enabled servers to run your branch office and edge workloads? Do you want to modernize your applications and IoT functions with container technology? Do you want to leverage Azure’s hybrid services such as backup, disaster recovery, update management, monitoring, and security compliance?
Microsoft and Lenovo have teamed up to validate the Lenovo ThinkSystem SE350 for Microsoft’s Azure Stack HCI program. The ThinkSystem SE350 was designed and built with the unique requirements of edge servers in mind. It is versatile enough to stretch the limitations of server locations, providing a variety of connectivity and security options, and can be easily managed with Lenovo XClarity Controller. The ThinkSystem SE350 solution focuses on smart connectivity, business security, and manageability for harsh environments. To see all Lenovo servers validated for Azure Stack HCI, visit the Azure Stack HCI catalog.
Lenovo ThinkSystem SE350:
The ThinkSystem SE350 is the latest workhorse for the edge. Designed and built with the unique requirements for edge servers in mind, it is versatile enough to stretch the limitations of server locations, providing a variety of connectivity and security options and is easily managed with Lenovo XClarity Controller. The ThinkSystem SE350 is a rugged compact-sized edge solution with a focus on smart connectivity, business security, and manageability for the harsh environment.
The ThinkSystem SE350 is an Intel® Xeon® D processor-based server, with a 1U height, half-width and short depth case that can go anywhere. Mount it on a wall, stack it on a shelf, or install it in a rack. This rugged edge server operates from 0°C to 55°C and delivers full performance in high-dust and high-vibration environments.
Information availability is another challenging issue for users at the edge, who require insight into their operations at all times to ensure they are making the right decisions. The ThinkSystem SE350 is designed to provide several connectivity options with wired and secure wireless Wi-Fi and LTE connection ability. This purpose-built compact server is reliable for a wide variety of edge and IoT workloads.
Microsoft Azure Stack HCI:
Azure Stack HCI solutions bring together highly virtualized compute, storage, and networking on industry-standard x86 servers and components. Combining resources in the same cluster makes it easier for you to deploy, manage, and scale. Manage with your choice of command-line automation or Windows Admin Center.
Achieve industry-leading virtual machine (VM) performance for your server applications with Hyper-V, the foundational hypervisor technology of the Microsoft cloud, and Storage Spaces Direct technology with built-in support for non-volatile memory express (NVMe), persistent memory, and remote direct memory access (RDMA) networking.
Help keep apps and data secure with shielded virtual machines, network micro-segmentation, and native encryption.
You can take advantage of cloud and on-premises working together with a hyper-converged infrastructure platform in the public cloud. Your team can start building cloud skills with built-in integration to Azure infrastructure management services:
Azure Site Recovery for high availability and disaster recovery as a service (DRaaS).
Azure Monitor, a centralized hub to track what’s happening across your applications, network, and infrastructure, with advanced analytics powered by artificial intelligence.
Cloud Witness, to use Azure as the lightweight tie-breaker for cluster quorum.
Azure Backup for offsite data protection and to protect against ransomware.
Azure Update Management for update assessment and update deployments for Windows Virtual Machines running in Azure and on-premises.
Azure Network Adapter to connect resources on-premises with your VMs in Azure via a point-to-site VPN.
Sync your file server with the cloud, using Azure File Sync.
Azure Arc for Servers to manage role-based access control, governance, and compliance policy from the Azure portal.
By deploying the Microsoft + Lenovo HCI solution, you can quickly solve your branch office and edge needs with high performance and resiliency while protecting your business assets by enabling the Azure hybrid services built into the Azure Stack HCI Branch office and edge solution.
We are excited to announce the preview of Azure Dedicated Host, a new Azure service that enables you to run your organization’s Linux and Windows virtual machines on single-tenant physical servers. Azure Dedicated Hosts provide you with visibility and control to help address corporate compliance and regulatory requirements. We are extending Azure Hybrid Benefit to Azure Dedicated Hosts, so you can save money by using on-premises Windows Server and SQL Server licenses with Software Assurance or qualifying subscription licenses. Azure Dedicated Host is in preview in most Azure regions starting today.
You can use the Azure portal to create an Azure Dedicated Host, host groups (a collection of hosts), and to assign Azure Virtual Machines to hosts during the virtual machine (VM) creation process.
Visibility and control
Azure Dedicated Hosts can help address compliance requirements organizations may have in terms of physical security, data integrity, and monitoring. This is accomplished by giving you the ability to place Azure VMs on a specific and dedicated physical server. This offering also meets the needs of IT organizations seeking host-level isolation.
Azure Dedicated Hosts provide visibility over the server infrastructure running your Azure Virtual Machines. They allow you to gain further control over:
The underlying hardware infrastructure (host type)
Processor brand, capabilities, and more
Number of cores
Type and size of the Azure Virtual Machines you want to deploy
You can mix and match different Azure Virtual Machine sizes within the same virtual machine series on a given host.
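One way to picture the mix-and-match rule is as a simple capacity check: any combination of sizes from the same series fits as long as it stays within the host’s physical capacity. A sketch with hypothetical vCPU counts (actual capacities depend on the host type and VM series):

```python
# Hypothetical figures for illustration; real capacities depend on the host type.
HOST_VCPUS = 64  # e.g., a dedicated host exposing 64 vCPUs
VM_SIZES = {"D2s_v3": 2, "D4s_v3": 4, "D8s_v3": 8, "D16s_v3": 16}

def fits_on_host(placements: list[str], host_vcpus: int = HOST_VCPUS) -> bool:
    """True if the requested mix of sizes fits within the host's vCPU capacity."""
    return sum(VM_SIZES[size] for size in placements) <= host_vcpus

assert fits_on_host(["D16s_v3", "D16s_v3", "D8s_v3", "D8s_v3", "D4s_v3"])  # 52 vCPUs
assert not fits_on_host(["D16s_v3"] * 5)  # 80 vCPUs exceeds the 64-vCPU host
```

The same reasoning applies whichever resource (cores, memory) binds first on a given host type.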
With an Azure Dedicated Host, you can control all host-level platform maintenance initiated by Azure (e.g., host OS updates). An Azure Dedicated Host gives you the option to defer host maintenance operations and apply them within a defined 35-day maintenance window. During this self-maintenance window, you can apply maintenance to your hosts at your convenience, gaining full control over the sequence and velocity of the maintenance process.
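The 35-day window is simple date arithmetic; a minimal sketch of computing the self-maintenance deadline (illustrative only, not an Azure API):

```python
from datetime import date, timedelta

MAINTENANCE_WINDOW_DAYS = 35  # the self-maintenance window described above

def maintenance_deadline(notified_on: date) -> date:
    """Last day to apply a pending host maintenance operation yourself."""
    return notified_on + timedelta(days=MAINTENANCE_WINDOW_DAYS)

# A pending operation surfaced on July 1, 2019 must be applied by August 5, 2019.
assert maintenance_deadline(date(2019, 7, 1)) == date(2019, 8, 5)
```

Tracking deadlines this way makes it easy to schedule maintenance into regular low-traffic periods rather than letting the platform apply it for you.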
Licensing cost savings
We now offer Azure Hybrid Benefit for Windows Server and SQL Server on Azure Dedicated Hosts, making it the most cost-effective dedicated cloud service for Microsoft workloads.
Azure Hybrid Benefit allows you to use existing Windows Server and SQL Server licenses with Software Assurance, or qualifying subscription licenses, to pay a reduced rate on Azure services. Learn more by referring to the Azure Hybrid Benefit FAQ.
We are also expanding Azure Hybrid Benefit so you can take advantage of unlimited virtualization for Windows Server and SQL Server with Azure Dedicated Hosts. Customers with Windows Server Datacenter licenses and Software Assurance can use unlimited virtualization rights in Azure Dedicated Hosts. In other words, you can deploy as many Windows Server virtual machines as you like on the host, subject only to the physical capacity of the underlying server. Similarly, customers with SQL Server Enterprise Edition licenses and Software Assurance can use unlimited virtualization rights for SQL Server on their Azure Dedicated Hosts.
Azure Dedicated Hosts allow you to use other existing software licenses, such as SUSE or Red Hat Linux. Check with your vendors for detailed license terms.
With the introduction of Azure Dedicated Hosts, we’re updating the outsourcing terms for Microsoft on-premises licenses to clarify the distinction between on-premises/traditional outsourcing and cloud services. For more details about these changes, read the blog “Updated Microsoft licensing terms for dedicated hosted cloud services.” If you have any additional questions, please reach out to your Microsoft account team or partner.
Q: Which Azure Virtual Machines can I run on Azure Dedicated Host?
A: During the preview period you will be able to deploy Dsv3 and Esv3 Azure Virtual Machine series. Support for Fsv2 virtual machines is coming soon. Any virtual machine size from a given virtual machine series can be deployed on an Azure Dedicated Host instance, subject to the physical capacity of the host. For additional information please refer to the documentation.
Q: Which Azure Disk Storage solutions are available to Azure Virtual Machines running on an Azure Dedicated Host?
A: Azure Standard HDDs, Standard SSDs, and Premium SSDs are all supported during the preview program. Learn more about Azure Disk Storage.
Q: Where can I find pricing and more details about the new Azure Dedicated Host service?
A: You can find more details about the new Azure Dedicated Host service on our pricing page.
Q: Can I use Azure Hybrid Benefit for Windows Server/SQL Server licenses with my Azure Dedicated Host?
A: Yes, you can lower your costs by taking advantage of Azure Hybrid Benefit for your existing Windows Server and SQL Server licenses with Software Assurance or qualifying subscription licenses. With Windows Server Datacenter and SQL Server Enterprise Editions, you get unlimited virtualization when you license the entire host and use Azure Hybrid Benefit. As a result, you can deploy as many Windows Server virtual machines as you like on the host, subject to the physical capacity of the underlying server. All Windows Server and SQL Server workloads in Azure Dedicated Hosts are also eligible for free Extended Security Updates for Windows Server and SQL Server 2008/R2.
Q: Can I use my Windows Server/SQL Server licenses with dedicated cloud services?
A: In order to make software licenses consistent across multitenant and dedicated cloud services, we are updating licensing terms for Windows Server, SQL Server, and other Microsoft software products for dedicated cloud services. Beginning October 1, 2019, new licenses purchased without Software Assurance and mobility rights cannot be used in dedicated hosting environments in Azure and certain other cloud service providers. This is consistent with our policy for multitenant hosting environments. However, SQL Server licenses with Software Assurance can continue to use their licenses on dedicated hosts with any cloud service provider via License Mobility, even if licenses were purchased after October 1, 2019. Customers may use on-premises licenses purchased before October 1, 2019 on dedicated cloud services. For more details regarding licensing, please read the blog “Updated Microsoft licensing terms for dedicated hosted cloud services.”
In recent weeks, we’ve been talking about the many reasons why Windows Server and SQL Server customers choose Azure. Security is a major concern when moving to the cloud, and Azure gives you the tools and resources you need to address those concerns. Innovation in data can open new doors as you move to the cloud, and Azure offers the easiest cloud transition, especially for customers running on SQL Server 2008 or 2008 R2 with concerns about end of support. Today we’re going to look at another critical decision point for customers as they move to the cloud. How easy is it to combine new cloud resources with what you already have on-premises? Many Windows Server and SQL Server customers choose Azure for its industry leading hybrid capabilities.
Microsoft is committed to enabling a hybrid approach to cloud adoption. Our commitment and passion stems from a deep understanding of our customers and their businesses over the past several decades. We understand that customers have business imperatives to keep certain workloads and data on-premises, and our goal is to meet them where they are and prepare them for the future by providing the right technologies for every step along the way. That’s why we designed and built Azure to be hybrid from the beginning and have been delivering continuous innovation to help customers operate their hybrid environments seamlessly across on-premises, cloud, and edge. Enterprise customers are choosing Azure for their Windows Server and SQL Server workloads. In fact, in a 2019 Microsoft survey of 500 enterprise customers, when those customers were asked about their migration plans for Windows Server, they were 30 percent more likely to choose Azure.
Customers trust Azure to power their hybrid environments
Take Komatsu as an example. Komatsu achieved a 49 percent cost reduction and nearly 30 percent performance gain by moving on-premises applications to Azure SQL Database Managed Instance and building a holistic data management and analytics solution across their hybrid infrastructure.
Operating a $15 billion enterprise, Smithfield Foods slashed datacenter costs by 60 percent and accelerated application delivery from two months to one day using a hybrid cloud model built on Azure. Smithfield has factories and warehouses often in rural areas that have less than ideal internet bandwidth. It relies on Azure ExpressRoute to connect their major office locations globally to Azure to gain the flexibility and speed needed.
The government of Malta built a complete hybrid cloud eco-system powered by Azure and Azure Stack to modernize its infrastructure. This hybrid architecture, combined with a robust billing platform and integrated self-service backup, brings a new level of flexibility and agility to the Maltese government’s operations, while also providing citizens and businesses more efficient services that they can access whenever they want.
Let’s look at some of Azure’s unique built-in hybrid capabilities.
Bringing the cloud to local datacenters with Azure Stack
Azure Stack, our unparalleled hybrid offering, lets customers build and run cloud-native applications with Azure services in their local datacenters or in disconnected locations. Today, it’s available in 92 countries and customers like Airbus Defense & Space, iMOKO, and KPMG Norway are using Azure Stack to bring cloud benefits on-premises.
We recently introduced Azure Stack HCI solutions so customers can run virtualized applications on-premises in a familiar way and enjoy easy access to off-the-shelf Azure management services such as backup and disaster recovery.
With Azure, Azure Stack, and Azure Stack HCI, Microsoft is the only cloud provider in the market that offers a comprehensive set of hybrid solutions.
Modernizing server management with Windows Admin Center
Windows Admin Center, a free, modern browser-based application, allows customers to manage Windows Servers on-premises, in Azure, or in other clouds. With Windows Admin Center, customers can easily access Azure management services to perform tasks such as disaster recovery, backup, patching, and monitoring. Since its launch just over a year ago, Windows Admin Center has seen tremendous momentum, managing more than 2.5 million server nodes each month.
Easily migrating on-premises SQL Server to Azure
Azure SQL Database is a fully managed and intelligent database service. SQL Database is evergreen, so it’s always up to date: no more worrying about patching, upgrades, or end of support. Azure SQL Database Managed Instance has the full surface area of the SQL Server database engine in Azure. Customers use Managed Instance to migrate SQL Server to Azure without changing the application code. Because the service is consistent with on-premises SQL Server, customers can continue using familiar features, tools, and resources in Azure.
With SQL Database Managed Instance, customers like Komatsu, Carlsberg Group, and AllScripts were able to quickly migrate SQL databases to Azure with minimal downtime and benefit from built-in PaaS capabilities such as automatic patching, backup, and high availability.
Connecting hybrid environments with fast and secure networking services
Customers build extremely fast private connections between Azure and local infrastructure, allowing access both to and through Azure, using Azure ExpressRoute at bandwidths up to 100 Gbps. Azure Virtual WAN makes it possible to quickly add and connect thousands of branch sites by automating configuration and connectivity to Azure and for global transit across customer sites, using the Microsoft global network.
Managing anywhere access with a single identity platform
Over 90 percent of enterprise customers use Active Directory on-premises. With Azure, customers can easily connect on-premises Active Directory with Azure Active Directory to provide seamless directory services for all Office 365 and Azure services. Azure Active Directory gives users a single sign-on experience across cloud, mobile and on-premises applications, and secures data from unauthorized access without compromising productivity.
Innovating continuously at the edge
Customers are extending their hybrid environments to the edge so they can take on new business opportunities. Microsoft has been leading the innovation in this space. The following are some examples.
Azure Data Box Edge provides a cloud managed compute platform for containers at the edge, enabling customers to process data at the edge and accelerate machine learning workloads. Data Box Edge also enables customers to transfer data over the internet to Azure in real-time for deeper analytics, model re-training at cloud scale or long-term storage.
At Microsoft Build 2019, we announced the preview of Azure SQL Database Edge, which brings the SQL engine to the edge. Developers can now adopt a consistent programming surface area to develop on a SQL database and run the same code on-premises, in the cloud, or at the edge.
Get started – Integrate your hybrid environments with Azure
Check out the resources on Azure hybrid such as overviews, videos, and demos so you can learn more about how to use Azure to run Windows Server and SQL Server workloads successfully across your hybrid environments.
As customers adopt Azure and the cloud, they need fast, private, and secure connectivity across regions and Azure Virtual Networks (VNets). Based on the type of workload, customer needs vary. For example, if you want to ensure data replication across geographies you need a high bandwidth, low latency connection. Azure offers connectivity options for VNet that cater to varying customer needs, and you can connect VNets via VNet peering or VPN gateways.
It is not surprising that VNet is the fundamental building block for any customer network. VNet lets you create your own private space in Azure, or as I call it your own network bubble. VNets are crucial to your cloud network as they offer isolation, segmentation, and other key benefits. Read more about VNet’s key benefits in our documentation “What is Azure Virtual Network?”
VNet peering enables you to seamlessly connect Azure virtual networks. Once peered, the VNets appear as one, for connectivity purposes. The traffic between virtual machines in the peered virtual networks is routed through the Microsoft backbone infrastructure, much like traffic is routed between virtual machines in the same VNet, through private IP addresses only. No public internet is involved. You can peer VNets across Azure regions, too – all with a single click in the Azure Portal.
VNet peering – connecting VNets within the same Azure region
Global VNet peering – connecting VNets across Azure regions
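Beyond the single click in the portal, the same peering can be scripted with the Azure CLI. A minimal sketch, assuming a resource group rg-net that already contains two VNets, vnet-a and vnet-b (all names are hypothetical); note that peering must be created in both directions:

```shell
# Peer vnet-a to vnet-b.
az network vnet peering create \
  --resource-group rg-net \
  --name vnetA-to-vnetB \
  --vnet-name vnet-a \
  --remote-vnet vnet-b \
  --allow-vnet-access

# Peering is not bidirectional by default; create the reverse link too.
az network vnet peering create \
  --resource-group rg-net \
  --name vnetB-to-vnetA \
  --vnet-name vnet-b \
  --remote-vnet vnet-a \
  --allow-vnet-access
```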
A VPN gateway is a specific type of VNet gateway that is used to send traffic between an Azure virtual network and an on-premises location over the public internet. You can also use a VPN gateway to send traffic between VNets. Each VNet can have only one VPN gateway.
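Creating a VPN gateway can be sketched as follows; all names are hypothetical, and the VNet is assumed to already contain a subnet named GatewaySubnet, which VPN gateways require. Gateway deployment takes a while, so --no-wait returns control immediately:

```shell
# Public IP for the gateway, then the gateway itself (deployment can take 30+ minutes).
az network public-ip create --resource-group rg-net --name pip-vpngw

az network vnet-gateway create \
  --resource-group rg-net \
  --name vpngw-a \
  --vnet vnet-a \
  --public-ip-address pip-vpngw \
  --gateway-type Vpn \
  --vpn-type RouteBased \
  --sku VpnGw1 \
  --no-wait
```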
While we offer two ways to connect VNets, you might want to pick one over the other based on your specific scenario and needs.
VNet Peering provides a low latency, high bandwidth connection useful in scenarios such as cross-region data replication and database failover scenarios. Since traffic is completely private and remains on the Microsoft backbone, customers with strict data policies prefer to use VNet Peering as public internet is not involved. Since there is no gateway in the path, there are no extra hops, ensuring low latency connections.
VPN Gateways provide a limited-bandwidth connection and are useful in scenarios where encryption is needed but bandwidth restrictions are tolerable. In these scenarios, customers are also less latency-sensitive.
VNet Peering and VPN Gateways can also co-exist via gateway transit
Gateway transit enables you to use a peered VNet’s gateway for connecting to on-premises instead of creating a new gateway for connectivity. As you increase your workloads in Azure, you need to scale your networks across regions and VNets to keep up with the growth. Gateway transit allows you to share an ExpressRoute or VPN gateway with all peered VNets and lets you manage the connectivity in one place. Sharing enables cost-savings and reduction in management overhead.
With gateway transit enabled on VNet peering, you can create a transit VNet that contains your VPN gateway, Network Virtual Appliance, and other shared services. As your organization grows with new applications or business units and as you spin up new VNets, you can connect to your transit VNet with VNet peering. This prevents adding complexity to your network and reduces management overhead of managing multiple gateways and other appliances.
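In practice, gateway transit is enabled by a pair of flags on the two sides of the peering. A hedged sketch with hypothetical vnet-hub and vnet-spoke names, assuming vnet-hub already contains the shared gateway:

```shell
# Hub side: allow peered VNets to use this VNet's gateway.
az network vnet peering create \
  --resource-group rg-net \
  --name hub-to-spoke \
  --vnet-name vnet-hub \
  --remote-vnet vnet-spoke \
  --allow-vnet-access \
  --allow-gateway-transit

# Spoke side: send on-premises traffic through the hub's remote gateway.
az network vnet peering create \
  --resource-group rg-net \
  --name spoke-to-hub \
  --vnet-name vnet-spoke \
  --remote-vnet vnet-hub \
  --allow-vnet-access \
  --use-remote-gateways
```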
Is the connection private?
VNet peering: Yes. There are no public IP endpoints; traffic is routed through the Microsoft backbone and is completely private. No public internet is involved.
VPN gateways: A public IP is involved.
Are connections transitive?
VNet peering: No. If VNet A is peered to VNet B, and VNet B is peered to VNet C, VNet A and VNet C cannot currently communicate. Spoke-to-spoke communication can be achieved via NVAs or gateways in the hub VNet. See an example in our documentation.
VPN gateways: Yes. If VNet A, VNet B, and VNet C are connected via VPN gateways and BGP is enabled on the VNet connections, transitivity works.
Typical customer scenarios
VNet peering: Data replication, database failover, and other scenarios needing frequent backups of large data.
VPN gateways: Encryption-specific scenarios that are not latency-sensitive and do not need high throughput.
Initial setup time
VNet peering: It took me 24.38 seconds, but you should give it a shot!
We continue to expand our ecosystem by partnering with independent software vendors (ISV) around the globe to deliver prepackaged software solutions to Azure Stack customers. As we are getting closer to our two-year anniversary, we are humbled by the trust and confidence bestowed by our partners in the Azure Stack platform. We would like to highlight some of the partnerships that we built during this journey.
Thales now offers their CipherTrust Cloud Key Manager solution through the Azure Stack Marketplace. It works with Azure and Azure Stack “Bring Your Own Key” (BYOK) APIs to give customers control of their cloud data encryption keys. CipherTrust Cloud Key Manager creates Azure-compatible keys from the Vormetric Data Security Manager that can offer up to FIPS 140-2 Level 3 protection. Customers can upload, manage, and revoke keys, as needed, to and from Azure Key Vaults running in Azure Stack or Azure, all from a single pane of glass.
Every organization has a unique journey to the cloud based on its history, business specifics, culture, and, maybe most importantly, its starting point. The journey to the cloud provides many options, features, and functionalities, as well as opportunities to improve existing governance and operations, implement new ones, and even redesign applications to take advantage of cloud architectures.
Veeam Backup and Replication 9.5 is now available through the Azure Stack Marketplace, making it possible to protect both Windows and Linux-based workloads running in the cloud from one centrally managed console. Refer to this document to learn about all data protection and disaster recovery partner solutions that support the Azure Stack platform.
The VM-Series next-generation firewall from Palo Alto Networks allows customers to securely migrate their applications and data to Azure Stack, protecting them from known and unknown threats with application whitelisting and threat prevention policies. You can learn more about the VM-series next-generation firewall on Azure Stack.
Developer platform and tools
We continue to invest in open source technologies and Bitnami helps us make this possible with their extensive application catalog. Bitnami applications can be found on the Azure Stack Marketplace and can easily be launched directly on your Azure Stack platform. Learn more about Bitnami offerings.
With self-service simplicity, performance, and scale, the Iguazio Data Science Platform empowers developers to deploy AI apps faster at the edge. It will soon be available through the Azure Stack Marketplace.
PTC’s ThingWorx IIoT platform is designed for rapidly developing industrial IoT solutions, with the ability to scale securely from the cloud to the edge. ThingWorx runs on top of Microsoft Azure or Azure Stack and leverages Azure PaaS to bring a best-in-class IIoT solution to the manufacturing environment. Deploying ThingWorx on Azure Stack enables you to bring your cloud-based Industry 4.0 solution to the factory floor. On the show floor, experience a demonstration of how the ThingWorx connected factory solution pulls data from real factory assets and makes insightful data available in prebuilt applications that can be customized and extended using ThingWorx Composer and Mashup Builder.
Intelligent Edge devices
With the private preview of IoT Hub on Azure Stack, we are very excited to see our customers and partners creating solutions that perform data collection and AI inferencing in the field. Intel and its partners have created hardware kits that support IoT Edge and seamlessly integrate with Azure Stack. A few examples of such kits are the IEI Tank and UP2, which enable the creation of computer vision solutions and deep learning inference using a CPU, GPU, or an optional VPU. These kits let you kick-start your targeted application development with a superior out-of-the-box experience that includes pre-loaded software such as the Intel Distribution of OpenVINO™.
Scaling and optimizing hybrid network-attached storage (NAS) performance gets a boost today with the general availability of the Microsoft Azure FXT Edge Filer, a caching appliance that integrates on-premises network-attached storage and Azure Blob Storage. The Azure FXT Edge Filer creates a performance tier between compute and file storage and provides high-throughput and low-latency network file system (NFS) to high-performance computing (HPC) applications running on Linux compute farms, as well as the ability to tier storage data to Azure Blob Storage.
Fast performance tier for hybrid storage architectures
The availability of Azure FXT Edge Filer today further integrates the highly performant and efficient technology that Avere Systems pioneered to the Azure ecosystem. The Azure FXT Edge Filer is a purpose-built evolution of the popular Avere FXT Edge Filer, in use globally to optimize storage performance in read-heavy workloads.
The new hardware model goes beyond top-line integration with substantial updates. It is now being manufactured by Dell and has been upgraded with twice as much memory and 33 percent more SSD. Two models with varying specifications are available today. With the new 6600 model, customers will see about a 40 percent improvement in read performance over the Avere FXT 5850. The appliance now supports hybrid storage architectures that include Azure Blob storage.
Edge filer hardware is recognized as a proven solution for storage performance improvements. With many clusters deployed around the globe, Azure FXT Edge Filer can scale performance separately from capacity to optimize storage efficiency. Companies large and small use the appliance to accelerate challenging workloads for processes like media rendering, financial simulations, genomic analysis, seismic processing, and wide area network (WAN) optimization. Now with new Microsoft Azure supported appliances, these workloads can run with even better performance and easily leverage Azure Blob storage for active archive storage capacity.
Rendering more, faster
Visual effects studios have been long-time users of this type of edge appliance, as their rendering workloads frequently push storage infrastructures to their limits. When one of these companies, Digital Domain, heard about the new Azure FXT Edge Filer hardware, they quickly agreed to preview a 3-node cluster.
“I’ve been running my production renders on Avere FXT clusters for years and wanted to see how the new Azure FXT 6600 stacks up. Setup was easy as usual, and I was impressed with the new Dell hardware. After a week of lightweight testing, I decided to aim the entire render farm at the FXT 6600 cluster and it delivered the performance required without a hiccup and room to spare.”
Mike Thompson, Principal Engineer, Digital Domain
Digital Domain has nine locations in the United States, China, and India.
Manage heterogeneous storage resources easily
Azure FXT Edge Filers help keep analysts, artists, and engineers productive by ensuring that applications aren’t affected by storage latency. Storage administrators can easily manage these heterogeneous pools of storage in a single file system namespace, and users access their files through a single mount point, whether those files are stored in on-premises NAS or in Azure Blob storage.
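On a Linux compute node, that single namespace is consumed like any other NFS export. A hedged sketch; the vserver address 10.0.0.10 and junction path /nfs are placeholder values, and the mount options follow common guidance for FXT-style caching filers:

```shell
# Mount the cluster's unified namespace on a compute node (address and path are illustrative).
sudo mkdir -p /mnt/fxt
sudo mount -o hard,proto=tcp,mountproto=tcp,retry=30 10.0.0.10:/nfs /mnt/fxt
```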
Expanding a cluster to meet growing demands is as easy as adding additional nodes. The Azure FXT Edge Filer scales from three to 24 nodes, allowing even more productivity in peak periods. This scale helps companies avoid overprovisioning expensive storage arrays and enables moving to the cloud at the user’s own pace.
Gain low latency hybrid storage access
Azure FXT Edge Filers deliver high throughput and low latency for hybrid storage infrastructure supporting read-heavy HPC workloads. Azure FXT Edge Filers support storage architectures with NFS and server message block (SMB) protocol support for NetApp and Dell EMC Isilon NAS systems, as well as cloud APIs for Azure Blob storage and Amazon S3.
Customers are using the flexibility of the Azure FXT Edge Filer to move less frequently used data to cloud storage resources, while keeping files accessible with minimal latency. These active archives enable organizations to quickly leverage media assets, research, and other digital information as needed.
Enable powerful caching of data
Software on the Azure FXT Edge Filers identifies the most in-demand or hottest data and caches it closest to compute resources, whether that data is stored down the hall, across town, or across the world. With a cluster connected, the appliances take over, moving data as it warms and cools to optimize access and use of the storage.
Get started with Azure FXT Edge Filers
Whether you are currently running Avere FXT Edge Filers and looking to upgrade to the latest hardware to increase performance or expand your clusters, or you are new to the technology, the process to get started is the same. You can request information by completing this online form or by reaching out to your Microsoft representative.
Microsoft will work with you to configure the optimal combination of software and hardware for your workload and facilitate its purchase and installation.
Protecting your IaaS virtual machine based applications
Azure Stack is an extension of Azure that lets you deliver IaaS Azure services from your organization’s datacenter. Consuming IaaS services from Azure Stack requires a modern approach to business continuity and disaster recovery (BC/DR). If you’re just starting your journey with Azure and Azure Stack, make sure to work through a comprehensive BC/DR strategy so your organization understands the immediate and long-term impact of modernizing applications in the context of cloud. If you already have Azure Stack, keep in mind that each application must have a well-articulated BC/DR plan calling out the resiliency, reliability, and availability requirements that meet the business needs of your organization.
What Azure Stack is and what it isn’t
Since launching Azure Stack at Ignite 2017, we’ve received feedback from many customers on the challenges they face within their organization evangelizing Azure Stack to their end customers. The main concerns are the stark differences from traditional virtualization. In the context of modernizing BC/DR practices, three misconceptions stand out:
Azure Stack is just another virtualization platform
Azure Stack is delivered as an appliance on prescriptive hardware co-engineered with our integrated system partners. Your focus must be on the services delivered by Azure Stack and the applications your customers will deploy on the system. You are responsible for working with your applications teams to define how they will achieve high availability, backup recovery, disaster recovery, and monitoring in the context of modern IaaS, separate from infrastructure running the services.
I should be able to use the same virtualization protection schemes with Azure Stack
Azure Stack is delivered as a sealed system with multiple layers of security to protect the infrastructure. Constraints include:
Scale unit nodes and infrastructure services have code integrity enabled.
At the networking layer, the traffic flow defined in the switches is locked down at deployment time using access control lists.
Given these constraints, there is no opportunity to install backup/replication agents on the scale-unit nodes, grant access to the nodes from an external device for replication and snapshotting, or physically attach external storage devices for storage level replication to another site.
Another ask from customers is the possibility of deploying one Azure Stack scale-unit across multiple datacenters or sites. Azure Stack doesn’t support a stretched or multi-site topology for scale-units. In a stretched deployment, the expectation is that nodes in one site can go offline with the remaining nodes in the secondary site available to continue running applications. From an availability perspective, Azure Stack only supports N-1 fault tolerance, so losing half of the node count will take the system offline. In addition, based on how scale-units are configured, Azure Stack only supports fault domains at a node level. There is no concept of a site within the scale-unit.
I am not deploying modern applications in Azure, none of this applies to me
Azure Stack is designed to offer cloud services in your datacenter. There is a clear separation between the operation of the infrastructure and how IaaS VM-based applications are delivered. Even if you’re not planning to deploy any applications to Azure, deploying to Azure Stack is not “business as usual” and will require thinking through the BC/DR implications throughout the entire lifecycle of your application.
Define your level of risk tolerance
With the understanding that Azure Stack requires a different approach to BC/DR for your IaaS VM-based applications, let’s look at the implications of having one or more Azure Stack systems, the physical and logical constructs in Azure Stack, and the recovery objectives you and your application owners need to focus on.
How far apart will you deploy Azure Stack systems
Let’s start by defining the impact radius you want to protect against in the event of a disaster. This can be as small as a rack in a co-location facility or an entire region of a country or continent. Within the impact radius, you can choose to deploy one or more Azure Stack systems. If the region is large enough you may even have multiple datacenters close together, each with Azure Stack systems. The key takeaway is that if the site goes offline due to a disaster or catastrophic event, there is no amount of redundancy that will keep the Azure systems online. If your intent is to survive the loss of an entire site as the diagram below shows, then you must consider deploying Azure Stack systems into multiple geographic locations separated by enough distance so a disaster in one location does not impact any other locations.
Help your application owners understand the physical and logical layers of Azure Stack
Next it’s important to understand the physical and logical layers that come together in an Azure Stack environment. The Azure Stack system running all the foundational services and your applications physically reside within a rack in a datacenter. Each deployment of Azure Stack is a separate instance or cloud with its own portal. The diagram below shows the physical and logical layering that’s common for all Azure Stack systems deployed today and for the foreseeable future.
Define the recovery time objectives for each application with your application owners
Now that you have a clear understanding of your risk tolerance if a system goes offline, you need to decide the protection schemes for your applications. You need to make sure you can quickly recover applications and data on a healthy system. We’re talking about making sure your applications are designed to be highly available within a scale-unit using availability sets to protect against hardware failures. In addition, you should also consider the possibility of an application going offline due to corruption or accidental deletion. Recovery can be as simple as scaling-out an application or restoring from a backup.
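Within a single system, placing VMs into an availability set is a one-flag affair at deployment time. A minimal sketch with hypothetical names and a placeholder credential; Azure Stack supports fewer fault domains than Azure, so the counts below are illustrative and should be checked against your system:

```shell
# Create an availability set, then deploy a VM into it so replicas land in separate fault domains.
az vm availability-set create \
  --resource-group rg-app \
  --name app-avset \
  --platform-fault-domain-count 2 \
  --platform-update-domain-count 5

az vm create \
  --resource-group rg-app \
  --name app-vm1 \
  --image Win2019Datacenter \
  --availability-set app-avset \
  --admin-username azureuser \
  --admin-password 'P@ssw0rd-Example1'  # placeholder credential for illustration only
```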
To survive an outage of the entire system, you’ll need to identify the availability requirements of each application, where the application can run in the event of an outage, and what tools you need to introduce to enable recovery. If your application can run temporarily in Azure, you can use services like Azure Site Recovery and Azure Backup to protect your application. Another option is to have additional Azure Stack systems fully deployed, operational, and ready to run applications. The time required to get the application running on a secondary system is the recovery time objective (RTO). This objective is established between you and the application owners. Some application owners will only tolerate minimal downtime while others are ok with multiple days of downtime if the data is protected in a separate location. Achieving this RTO will differ from one application to another. The diagram below summarizes the common protection schemes used at the VM or application level.
In the event of a disaster, there will be no time to request an on-demand deployment of Azure Stack to a secondary location. If you don’t have a deployed system in a secondary location, you will need to order one from your hardware partner. The time required to deliver, install, and deploy the system is measured in weeks.
Establish the offerings for application and data protection
Now that you know what you need to protect on Azure Stack and your risk tolerance for each application, let’s review some specific patterns used with IaaS VMs.
Applications deployed into IaaS VMs can be protected at the guest OS level using backup agents. Data can be restored to the same IaaS VM, to a new VM on the same system, or a different system in the event of a disaster. Backup agents support multiple data sources in an IaaS VM such as:
Disk: This requires block-level backup of one, some, or all disks exposed to the guest OS. It protects the entire disk and captures any changes at the block level.
File or folder: This requires file system-level backup of specific files and folders on one, some, or all volumes attached to the guest OS.
OS state: This requires backup targeted at the OS state.
Application: This requires a backup coordinated with the application installed in the guest OS. Application-aware backups typically include quiescing input and output in the guest for application consistency (for example, Volume Shadow Copy Service (VSS) in the Windows OS).
Application data replication
Another option is to use replication at the guest OS level or at the application level to make data available in a different system. The replication isn’t offloaded to the underlying infrastructure; it’s handled at the guest OS level or above. For example, applications like SQL Server support asynchronous replication in a distributed availability group.
For high availability, you need to start by understanding the data persistence model of your applications:
Stateful workloads write data to one or more repositories. It’s necessary to understand which parts of the architecture need point-in-time data protection and high availability to recover from a catastrophic event.
Stateless workloads on the other hand don’t contain data that needs to be protected. These workloads typically support on-demand scale-up and scale-down and can be deployed in multiple locations in a scale-out topology behind a load balancer.
To support application level high availability within an Azure Stack system, multiple virtual machines are grouped into an availability set. Applications deployed in an availability set sit behind a load balancer that distributes incoming traffic randomly among multiple virtual machines.
Across Azure Stack systems, a similar approach is possible with the following differences: the load balancer must be external to both systems or in Azure (for example, Azure Traffic Manager), and availability sets do not span independent Azure Stack systems.
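One way to realize such an external load balancer is Azure Traffic Manager, with each Azure Stack system’s application endpoint registered as an external endpoint. A hedged sketch; the profile, DNS, and endpoint names below are hypothetical:

```shell
# Traffic Manager profile in Azure fronting the app on two Azure Stack systems.
az network traffic-manager profile create \
  --resource-group rg-global \
  --name app-tm \
  --routing-method Priority \
  --unique-dns-name myapp-contoso

# Register each Azure Stack deployment as an external endpoint.
az network traffic-manager endpoint create \
  --resource-group rg-global \
  --profile-name app-tm \
  --name stack-east \
  --type externalEndpoints \
  --target app.east.stack.contoso.com \
  --priority 1

az network traffic-manager endpoint create \
  --resource-group rg-global \
  --profile-name app-tm \
  --name stack-west \
  --type externalEndpoints \
  --target app.west.stack.contoso.com \
  --priority 2
```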
Deploying your IaaS VM-based applications to Azure and Azure Stack requires a comprehensive evaluation of your BC/DR strategy. “Business as usual” is not enough in the context of cloud. For Azure Stack, you need to evaluate the resiliency, availability, and recoverability requirements of the applications separate from the protection schemes for the underlying infrastructure.
You must also reset end user expectations starting with the agreed upon SLAs. Customers onboarding their VMs to Azure Stack will need to agree to the SLAs that are possible on Azure Stack. For example, Azure Stack will not meet the stringent zero data loss requirements required by some mission critical applications that rely on storage level synchronous replication between sites. Take the time to identify these requirements early on and build a successful track record of onboarding new applications to Azure Stack with the appropriate level of protection and disaster recovery.
This blog post was co-authored by David Armour Principal PM Manager, Azure Stack and Tiberiu Radu, Senior Program Manager, Azure Stack.
Foundation of Azure Stack IaaS
Remember back in the virtualization days when you had to pick a host for your virtual machine? Some of my business units could tell by the naming convention the make and manufacturer of the hardware. Using this knowledge, they’d fill up the better gear first, leaving the teams that didn’t know better with the oldest hosts.
Clouds take a different approach. Instead of hosts, VMs are placed into a pool of capacity. The physical infrastructure is abstract. The compute, storage, and networking resources consumed by the VM are defined through software.
Azure Stack is an instance of the Azure cloud that you can run in your own datacenter. Microsoft has taken the experience and technology from running one of the largest clouds in the world to design a solution you can host in your facility. This forms the foundation of Azure Stack’s infrastructure-as-service (IaaS).
Let’s explore some of the characteristics of the Azure Stack infrastructure that allows you to run cloud-native VMs directly in your facility.
Cloud inspired hardware
Microsoft employees can’t just purchase their favorite server and rack it into an Azure datacenter. The only servers that enter an Azure datacenter have been specifically built for Azure. Not only are the servers built for Azure, so are the networking devices, the racks, and the cabling. This extreme standardization allows the Azure team to operate an Azure datacenter with just a handful of employees. Because all the servers are standardized and can be uniformly operated and automated, adding additional capacity to a datacenter doesn’t require hiring more employees to operate them.
Another advantage of standardized hardware configurations is that standardization leads to expected, repeatable results – not only for Microsoft and Azure, but for its customers. The hardware integration has been validated and is a known recipe. Servers, storage, networking, cabling layout, and more are all well known, and based on these recipes, the ordering, delivery, and integration of new hardware components, as well as servicing and eventual retirement, are repeatable and scalable. The full end-to-end validation of these configurations is done once, with quick checks in place when the capacity is delivered and installed.
These principles are applied to Azure Stack solutions as well. The configurations, their capabilities, and validation are all well-known and the result is a repeatable and supportable product. Microsoft, its partners, and most importantly the end customer benefit. While an Azure Stack customer is limited to the defined, partner solutions, they have been built with reasonable flexibility so the customer can choose the specific capabilities or capacities required. Please note, there is one exception – the Azure Stack Development Kit (ASDK) allows you to install Azure Stack on any hardware that meets the hardware requirements. The ASDK is for evaluation purposes and not supported as a production environment.
Microsoft has partnered and co-engineered solutions with a variety of hardware partners or OEMs. The benefit is that Azure Stack can meet you where your existing relationships exist. These relationships may be based on existing hardware purchasing agreements, geographic location, or support capabilities. Keeping in mind the principles of operating a solution in a well-defined manner, Microsoft has set minimum requirements for Azure Stack hardware solutions. Each of our partners can then choose from their portfolio the components, servers, and network switches that best meet your needs. This creates a well-defined variety that continues to be supportable and delivers the overall solution value.
One of the principles we have taken from Microsoft’s experience in the enterprise and from Azure is overall solution resilience. The world of software and hardware is not perfect; things fail – cables go bad, software has bugs, power outages occur, and so on. While we work to build better software and partner to continually improve, we must expect that things fail. Azure Stack solutions are not perfect, but they have been constructed with the intent to overcome the common points of failure. For example, each copy of tenant/user data is stored on three separate storage devices in three separate servers. The physical network paths are redundant and provide better performance and resiliency to potential failure. The internal software of Azure Stack consists of services that coordinate across multiple instances. This type of end-to-end architectural design and implementation leads to a better end experience. Combining this approach to infrastructure resilience with the well-known and validated solutions approach described above provides a better experience for the customer.
When you run your IaaS VMs on Azure Stack, you should know they are running on a secure foundation. It turns out that one of the reasons people select Azure Stack is that they have data and/or processes that are either regulated or defined in a contractual agreement. Azure Stack not only gives its owners control of their data and processes, it comes with an infrastructure that is secured by default. In fact, the underlying infrastructure is locked down in a way that neither the owner nor Microsoft can access it. If it ever needs to be accessed because of a support issue, the owner and Microsoft combine their keys to obtain access to the system, and only for a limited time.
Azure leads the industry in security compliance, and security compliance is important for Azure Stack as well. In Azure, Microsoft fully manages the technology, people, and processes as well as its compliance responsibilities. Things are different with Azure Stack. While the technology is provided by Microsoft, the people and processes are managed by the operator. To help operators jump-start the certification process, Azure Stack has gone through a set of formal assessments by an independent third-party auditing firm to document how the Azure Stack infrastructure meets the applicable controls from several major compliance standards. Because the standards include several personnel-related and process-related controls, the documentation is an assessment of the technology, not a certification of Azure Stack, but it helps you get started. The technology assessments include the following standards:
If you face compliance mandates or internal processes that demand that you originate and manage your cloud data encryption keys, even for Azure Stack, the CipherTrust Cloud Key Manager (CCKM) from Thales works with Azure and Azure Stack “Bring Your Own Key” (BYOK) APIs to enable such key control. CipherTrust Cloud Key Manager creates Azure-compatible keys from a FIPS 140-2 source. You can then upload, manage, and revoke keys, as needed, to and from Azure Key Vaults running in Azure Stack or Azure, all from a single pane of glass.
For instance, you could create a salary app on Azure Stack, generate data encryption keys with CipherTrust Cloud Key Manager, and then set a policy to enable use of those keys in the Key Vault on Azure Stack only during the last week of the month, when the app is computing the salaries. Among many other benefits, CCKM provides reduced time exposure for the keys, remote backup, a secure location for storing the keys, and the decoupling of key management from the app itself, not to mention automated key versioning. CCKM supports both Azure Active Directory (AAD) and Active Directory Federation Services (ADFS) deployments.
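To make the salary-app scenario concrete, here is a minimal sketch of what such a time-window policy amounts to. This is purely illustrative, not CCKM's actual policy engine or API: the function name and the seven-day window are assumptions chosen to match the "last week of the month" example above.

```python
from datetime import date
import calendar

def key_available(today: date) -> bool:
    """Hypothetical policy check: permit key use only during the
    last seven days of the month, when the salary app runs."""
    last_day = calendar.monthrange(today.year, today.month)[1]
    return today.day > last_day - 7

# Outside that window the key stays unusable, shrinking its exposure:
print(key_available(date(2023, 1, 28)))  # True  (Jan 28-31 is the last week)
print(key_available(date(2023, 1, 10)))  # False (mid-month, key locked)
```

The point of such a policy is the reduced time exposure mentioned above: even if the app or its host is compromised mid-month, the key is simply not released.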
As noted earlier, Azure Stack is sold as an integrated hardware system, with software pre-installed on the validated hardware. It typically comes in a standard server rack. You choose where your system will be located. You can host it in your data center or perhaps you want to run it in a service provider’s facility.
With the Azure Stack running in your location of choice, you also have a choice of who operates the Azure Stack infrastructure. An Azure Stack operator is responsible for giving access to the Azure Stack, keeping the software and firmware up-to-date, providing the content in the marketplace, monitoring the system health, and diagnosing issues. Azure Stack provides automation, documentation, and training for all of these processes so that someone from your organization can operate Azure Stack. We also provide trained partner experts who can operate your Azure Stack either in their facility or yours.
Here is an overview of your options when you acquire your Azure Stack:
Once you have your Azure Stack up and running and you begin to plan your first IaaS VM deployments, you need to think about these VMs as cloud deployments, not virtualization deployments. IaaS VMs run best when they take advantage of the cloud infrastructure they are running on. Many times, the way you tune a VM in your cloud infrastructure will be very different from the way you tuned VMs in your traditional virtualization environment. That said, you can always start with what you already have and improve those solutions through modern operations.
A great example of this is the use of multiple disks to achieve the IOPS and throughput an application requires. As in Azure, virtual machines placed in Azure Stack have limits applied to their disk activity. This limits the impact of one VM’s activity on another, the so-called noisy neighbor problem. While these limits are great for IaaS environments, it may take extra work to deploy workloads so that they get the resources they need; in this example, that resource is IOPS.
For optimization of SQL Server deployments, our documentation provides guidance on how to configure storage to obtain the needed performance. In this case, the approach is to attach multiple disks and stripe them to obtain the required capacity and performance. Using managed disks for your VMs allows the system to optimize where the physical data is stored within your Azure Stack. Moving from virtualization environments to IaaS is reasonably straightforward and has its benefits, but it requires a little work on your first deployment. You can always use tools like Azure Monitor and the Virtual Machine solutions to better understand your workloads and gain insights into the performance of your VMs. When your VMs are not meeting their performance requirements, you can also use the Azure Performance Diagnostics VM extension for Windows to troubleshoot and identify potential bottlenecks.
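The disk-striping sizing described above is a simple ceiling calculation. The sketch below shows the arithmetic only; the per-disk caps and workload targets are assumed numbers for illustration, since actual limits depend on the disk sizes and tiers your Azure Stack operator offers.

```python
import math

# Assumed per-disk limits (not official Azure Stack figures):
disk_iops = 500        # IOPS cap per data disk
disk_mbps = 60         # throughput cap per data disk, MB/s

# Assumed SQL Server workload requirements:
required_iops = 3000
required_mbps = 200

# Striping aggregates per-disk limits, so we need enough disks
# to satisfy whichever requirement is the binding constraint.
disks_needed = max(math.ceil(required_iops / disk_iops),
                   math.ceil(required_mbps / disk_mbps))
print(disks_needed)  # 6 disks: IOPS is the binding constraint here
```

In this example six disks striped together satisfy both targets (6 × 500 = 3000 IOPS; 6 × 60 = 360 MB/s), which is exactly why the guidance favors several smaller disks over one large one.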
The great thing about IaaS, and specifically Azure Stack, is the ability to easily reuse the deployment templates or artifacts to reduce the work for migration of similar workloads. We will cover this more in a future blog post.
Infrastructure purpose-built for running cloud-native VMs
Few organizations can claim experience building one of the largest cloud infrastructures in the world. When you buy an Azure Stack, you get the benefit of Microsoft’s Azure experience. Microsoft has partnered with the best OEMs to deliver a standardized configuration so that you don’t have to worry about these details. The infrastructure of Azure Stack is purpose-built to get the best for your IaaS VMs – keeping them safe, secure, and performant.