The solution consists of several components.
First, FAS Storage Systems, running either 7-Mode or clustered Data ONTAP, must reside in a colocation facility operated by an Azure ExpressRoute Exchange provider (such as Equinix). Although both operating modes work, NetApp strongly recommends clustered Data ONTAP.
Note also that NetApp is testing E-Series Storage Arrays with iSCSI for inclusion in the solution.
Next, the solution requires Azure ExpressRoute: a private connection that bypasses the public Internet. ExpressRoute connections offer higher speeds, greater reliability, and stronger security than typical Internet connections. In NetApp's tests, ExpressRoute delivered 36% better performance than a VPN over the public Internet.
According to the three vendors, the solution is currently available in two Azure regions:
Azure US West (San Jose, California):
- 200Mbps, 500Mbps, 1Gbps, 10Gbps virtual circuits
- 1ms to 2ms latency observed
Azure US East (Ashburn, Virginia):
- 200Mbps, 500Mbps, 1Gbps, 10Gbps virtual circuits
- <1ms to 1ms latency observed
As ExpressRoute is rolled out globally, NetApp will be testing latency in additional locations.
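As a rough illustration of how latency figures like those above can be sanity-checked, the sketch below times TCP connection setup to an endpoint across the circuit. This is not NetApp's test methodology; the host and port are placeholders (3260 is the standard iSCSI port).

```python
import socket
import time

def tcp_rtt_ms(host: str, port: int, samples: int = 10) -> float:
    """Measure average TCP connect round-trip time in milliseconds."""
    total = 0.0
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            pass  # handshake completed; connect time is our RTT proxy
        total += time.perf_counter() - start
    return total / samples * 1000

# Placeholder endpoint: substitute a host reachable over the circuit.
print(f"avg RTT: {tcp_rtt_ms('10.0.0.10', 3260):.2f} ms")
```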
CUSTOMER NETWORK REQUIREMENTS
The customer's network equipment within the Equinix colocation facility must also provide several required features. NetApp does not certify specific network equipment for use in the solution; however, the equipment must support the following:
Border Gateway Protocol (BGP)
BGP is used to route network traffic between the local network in the Equinix colocation facility and the Azure virtual network.
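To illustrate the role BGP plays here, the following simplified sketch mimics two of BGP's best-path tie-breakers (highest local preference, then shortest AS path). It is a conceptual illustration only, not a router implementation; the prefix and next hops are invented, though 12076 is Azure's actual ASN.

```python
from dataclasses import dataclass

@dataclass
class Route:
    prefix: str          # destination network, e.g. the Azure virtual network
    next_hop: str
    local_pref: int      # higher is preferred
    as_path: list[int]   # shorter is preferred

def best_path(candidates: list[Route]) -> Route:
    """Pick the preferred route using two classic BGP tie-breakers."""
    return max(candidates, key=lambda r: (r.local_pref, -len(r.as_path)))

# Two advertisements for the same Azure VNet prefix; the primary link wins.
routes = [
    Route("10.1.0.0/16", "192.0.2.1", local_pref=200, as_path=[12076]),
    Route("10.1.0.0/16", "192.0.2.2", local_pref=100, as_path=[65001, 12076]),
]
print(best_path(routes).next_hop)  # -> 192.0.2.1
```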
Minimum of two 9/125 Single Mode Fiber (SMF) Ethernet ports
Azure ExpressRoute requires two physical connections (9/125 SMF) from the customer network equipment to the Equinix Cloud Exchange. Redundant physical connections protect against potential loss of ExpressRoute service caused by a failure in the physical link. The bandwidth of these physical connections can be 1Gbps or 10Gbps.
1000BASE-T Ethernet ports
1000BASE-T network ports on the switch provide network connectivity to the NetApp storage cluster. Although these ports can be used for data, NetApp recommends using 1GbE ports for node management and out-of-band management.
Support for 802.1Q VLAN tags
802.1Q VLAN tags are used by the Equinix Cloud Exchange and Azure ExpressRoute to segregate network traffic on the same physical network connection.
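For illustration, an 802.1Q tag is a 4-byte field inserted into the Ethernet header: the 0x8100 tag protocol identifier followed by priority, drop-eligible, and VLAN ID bits. A minimal sketch, with a hypothetical VLAN ID:

```python
import struct

def dot1q_tag(vlan_id: int, priority: int = 0, dei: int = 0) -> bytes:
    """Build the 4-byte 802.1Q tag inserted after the source MAC."""
    assert 0 <= vlan_id <= 4094, "VLAN IDs are 12 bits; 4095 is reserved"
    tpid = 0x8100                                    # Tag Protocol Identifier
    tci = (priority << 13) | (dei << 12) | vlan_id   # Tag Control Information
    return struct.pack("!HH", tpid, tci)

# Hypothetical VLAN 100 segregating ExpressRoute traffic on a shared link.
print(dot1q_tag(100).hex())  # -> 81000064
```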
Optional features for the solution include:
Open Shortest Path First (OSPF) protocol
The OSPF protocol is used when there are additional network connections back to on-premises data centers or to other NetApp Private Storage for Microsoft Azure solution locations. OSPF helps prevent routing loops.
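OSPF is a link-state protocol: each router computes loop-free shortest paths over a shared map of the topology using Dijkstra's algorithm. The sketch below shows that core computation on a toy topology; the node names and link costs are invented.

```python
import heapq

def shortest_paths(graph: dict, source: str) -> dict:
    """Dijkstra's shortest-path-first, the computation at OSPF's core."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for neighbor, cost in graph[node].items():
            nd = d + cost
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(heap, (nd, neighbor))
    return dist

# Toy topology: colo switch, on-premises router, and an alternate transit path.
topology = {
    "colo":    {"onprem": 10, "transit": 4},
    "transit": {"colo": 4, "onprem": 3},
    "onprem":  {"colo": 10, "transit": 3},
}
print(shortest_paths(topology, "colo"))  # -> {'colo': 0, 'onprem': 7, 'transit': 4}
```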
QinQ (stacked) VLAN tags
QinQ VLAN tags (IEEE 802.1ad) can be used by the Equinix Cloud Exchange to route network traffic from the customer network to Azure. The outer service tag (S-tag) routes traffic from the Cloud Exchange to Azure. The inner customer tag (C-tag) is passed through to Azure, where ExpressRoute uses it to route traffic to the Azure virtual network.
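Building on the 802.1Q sketch above, QinQ simply stacks a second tag in front of the first: an outer S-tag (TPID 0x88A8 per IEEE 802.1ad) for the Cloud Exchange, then the inner C-tag for ExpressRoute. The tag values below are hypothetical:

```python
import struct

def qinq_tags(s_vid: int, c_vid: int) -> bytes:
    """Stack an 802.1ad S-tag over an 802.1Q C-tag (8 bytes total)."""
    s_tag = struct.pack("!HH", 0x88A8, s_vid)  # outer: steers traffic to Azure
    c_tag = struct.pack("!HH", 0x8100, c_vid)  # inner: selects the virtual network
    return s_tag + c_tag

# Hypothetical IDs: S-tag 2000 assigned by the exchange, C-tag 100 for the VNet.
print(qinq_tags(2000, 100).hex())  # -> 88a807d081000064
```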
Virtual Routing and Forwarding (VRF)
Virtual Routing and Forwarding is used to isolate the routing of different Azure virtual networks and customer VLANs in the Equinix colocation facility. Each VRF has its own BGP configuration.
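Conceptually, a VRF is a separate routing table per tenant or virtual network, so the same prefix can resolve differently in each one. A minimal sketch with invented VRF names and next hops:

```python
# Each VRF keeps an independent routing table (and its own BGP session),
# so overlapping prefixes in different Azure VNets never collide.
vrfs = {
    "vrf-azure-east": {"10.1.0.0/16": "192.0.2.1"},     # peers with East circuit
    "vrf-azure-west": {"10.1.0.0/16": "198.51.100.1"},  # peers with West circuit
}

def lookup(vrf: str, prefix: str) -> str:
    """Resolve a prefix inside one VRF; other VRFs are invisible to it."""
    return vrfs[vrf][prefix]

print(lookup("vrf-azure-east", "10.1.0.0/16"))  # -> 192.0.2.1
print(lookup("vrf-azure-west", "10.1.0.0/16"))  # -> 198.51.100.1
```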
Redundant network switches
Redundant network switches protect against a loss of ExpressRoute service caused by a switch failure. They are not required, but they are highly recommended.
10Gbps Ethernet ports
Connecting the NetApp storage to the switch through 10Gbps Ethernet ports provides the greatest bandwidth between the switch and the storage for data access.
NetApp also notes that connectivity from FAS storage to Azure compute currently supports only IP storage protocols (SMB, NFS, and iSCSI).
There are several scenarios envisioned for this solution:
- Cloudburst for peak workloads
- Disaster Recovery
- Dev/Test and Production Workloads
- Multi-Cloud Application Continuity
- Data Center Migration/Consolidation
One of the more interesting scenarios is multi-cloud application continuity. For example, take two geographically dispersed Microsoft SQL Server 2012 Availability Group (AG) nodes in an Active/Passive configuration.
The primary SQL AG node is a Hyper-V virtual machine located in a Microsoft private cloud on the East Coast of the United States. This node is connected to NetApp storage via iSCSI.
The secondary SQL AG node is an Azure virtual machine located in a virtual network in the West US region. It is connected to NetApp Private Storage in the colocation facility via iSCSI over a secure, low-latency, high-bandwidth Azure ExpressRoute connection. A third SQL AG node could also run on an Amazon Web Services (AWS) compute node, providing further multi-cloud failover capability.
SQL AG replication occurs over a network connection between the on-premises private cloud and the Azure virtual network.
If a SQL node, the SQL storage in the primary location, or the entire primary datacenter is lost, the database replicas on the surviving SQL AG nodes are activated automatically.
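To make that failover behavior concrete, the replica states that drive it are exposed through SQL Server's HADR dynamic management views. The sketch below assumes the pyodbc package and a placeholder connection string; it simply reports each replica's role and synchronization health, and is illustrative rather than part of the NetApp solution:

```python
import pyodbc

# Placeholder DSN: point it at any node in the availability group.
CONN_STR = ("DRIVER={ODBC Driver 17 for SQL Server};"
            "SERVER=sql-ag-listener;DATABASE=master;Trusted_Connection=yes;")

QUERY = """
SELECT ar.replica_server_name, rs.role_desc, rs.synchronization_health_desc
FROM sys.dm_hadr_availability_replica_states AS rs
JOIN sys.availability_replicas AS ar ON rs.replica_id = ar.replica_id;
"""

with pyodbc.connect(CONN_STR) as conn:
    for server, role, health in conn.cursor().execute(QUERY):
        # e.g. PRIMARY/HEALTHY on-premises, SECONDARY/HEALTHY in Azure
        print(f"{server}: {role} ({health})")
```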
This application continuity model can be extended by deploying NPS for Azure in multiple Azure regions.
NetApp Private Storage for Microsoft Azure is immediately available through reseller partners and directly from NetApp, Microsoft, and Equinix in North America. The solution will be available in Europe and Asia in the near future.