Tuesday, July 15, 2014

Introducing NetApp Private Storage for Microsoft Azure

NetApp, Microsoft, and Equinix today introduced “NetApp Private Storage for Microsoft Azure”: a hybrid cloud infrastructure that links NetApp Storage with Azure Compute via Azure ExpressRoute.


DETAILS
The solution consists of several components.

First, FAS Storage Systems, running either 7-Mode or clustered Data ONTAP, must reside in a colocation facility that is an Azure ExpressRoute Exchange provider (such as Equinix). Although both operating modes work, NetApp strongly recommends clustered Data ONTAP.

NetApp is also testing E-Series Storage Arrays with iSCSI for inclusion in this solution.

Next, the solution requires Azure ExpressRoute: a private connection that bypasses the public Internet. ExpressRoute connections offer faster speeds, greater reliability, and higher security than typical connections. In NetApp's testing, ExpressRoute delivered 36% better performance than a VPN over the public Internet.


SUPPORTED REGIONS 
According to the three vendors, the solution is currently available in two Azure regions:

Azure US West (San Jose, California):
  • 200Mbps, 500Mbps, 1Gbps, 10Gbps virtual circuits
  • 1ms - 2ms latency observed

Azure US East (Ashburn, Virginia):
  • 200Mbps, 500Mbps, 1Gbps, 10Gbps virtual circuits
  • < 1ms - 1ms latency observed

As ExpressRoute is rolled out globally, NetApp will be testing latency in additional locations.


CUSTOMER NETWORK REQUIREMENTS
The customer's network equipment within the Equinix colocation facility must also support several features. NetApp does not certify specific network equipment for use in the solution; however, the equipment must support the following:

Border Gateway Protocol (BGP)
BGP is used to route network traffic between the local network in the Equinix colocation facility and the Azure virtual network. 
Minimum of two 9/125 Single Mode Fiber (SMF) Ethernet ports
Azure ExpressRoute requires two physical connections (9/125 SMF) from the customer network equipment to the Equinix Cloud Exchange. Redundant physical connections protect against potential loss of ExpressRoute service caused by a failure in the physical link. The bandwidth of these physical connections can be 1Gbps or 10Gbps. 
1000Base-T Ethernet ports
1000BASE-T network ports on the switch provide network connectivity from the NetApp storage cluster. Although these ports can be used for data, NetApp recommends using 1GbE ports for node management and out-of-band management. 
Support for 802.1Q VLAN tags
802.1Q VLAN tags are used by the Equinix Cloud Exchange and Azure ExpressRoute to segregate network traffic on the same physical network connection.

Other optional features for the solution include:

Open Shortest Path First (OSPF) protocol
OSPF protocol is used when there are additional network connections back to on-premises data centers or other NetApp Private Storage for Microsoft Azure solution locations. OSPF is used to help prevent routing loops. 
QinQ (stacked) VLAN tags
QinQ VLAN tags (IEEE 802.1ad) can be used by the Equinix Cloud Exchange to route network traffic from the customer network to Azure. The outer service tag (S-tag) is used to route traffic to Azure from the Cloud Exchange. The inner customer tag (C-tag) is passed on to Azure for routing to the Azure virtual network through ExpressRoute (see the sketch after this list). 
Virtual Routing and Forwarding (VRF)
Virtual Routing and Forwarding is used to isolate routing of different Azure Virtual Networks and the customer VLANs in the Equinix co-location facility. Each VRF will have its own BGP configuration. 
Redundant network switches
Redundant network switches protect against a loss of ExpressRoute service caused by a switch failure. They are not required, but NetApp highly recommends using them. 
10Gbps Ethernet ports
Connecting the NetApp storage to the switch through 10Gbps Ethernet ports provides the greatest bandwidth between the switch and the storage for data access.
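
To make the tagging requirements above more concrete, here is a minimal Python sketch that packs a single 802.1Q tag and a stacked QinQ (S-tag over C-tag) header by hand. The TPID values follow the 802.1Q/802.1ad standards, but the VLAN IDs are hypothetical examples, not values assigned by Equinix or Azure.

  import struct

  def vlan_tag(vid, tpid=0x8100, pcp=0, dei=0):
      # An 802.1Q tag is a 2-byte TPID followed by a 2-byte TCI
      # (3-bit priority, 1-bit drop-eligible flag, 12-bit VLAN ID).
      tci = (pcp << 13) | (dei << 12) | (vid & 0x0FFF)
      return struct.pack("!HH", tpid, tci)

  # Single 802.1Q tag: segregates one customer's traffic on a shared physical link.
  c_tag = vlan_tag(vid=100)               # hypothetical customer VLAN (C-tag)

  # QinQ (IEEE 802.1ad) stacking: the outer S-tag steers traffic to Azure at the
  # Cloud Exchange; the inner C-tag is carried through to the Azure virtual network.
  s_tag = vlan_tag(vid=500, tpid=0x88A8)  # hypothetical service VLAN (S-tag)
  qinq_header = s_tag + c_tag

  print(qinq_header.hex())                # 88a801f481000064

In a real Ethernet frame, these tags sit between the source MAC address and the original EtherType.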

NetApp also notes that connectivity from FAS Storage to Azure Compute currently supports only IP storage protocols (SMB, NFS, and iSCSI).


SCENARIOS
There are several scenarios envisioned for this solution:

  • Cloudburst for peak workloads
  • Disaster Recovery
  • Dev/Test and Production Workloads
  • Multi-Cloud Application Continuity
  • Data Center Migration/Consolidation

One of the more interesting scenarios is multi-cloud application continuity. For example, consider two geographically dispersed Microsoft SQL Server 2012 Availability Group (AG) nodes in an Active/Passive configuration.

The primary SQL AG node is a Hyper-V virtual machine located in a Microsoft private cloud on the East Coast of the United States. This node is connected to NetApp storage via iSCSI.

The secondary SQL AG node is an Azure virtual machine located in a Virtual Network in the West US Region. The secondary SQL AG node is connected to NetApp Private Storage in the co-location facility via iSCSI over a secure, low-latency, high-bandwidth Azure ExpressRoute network connection. Additionally, a third SQL AG node could run on an Amazon Web Services (AWS) compute instance, providing further multi-cloud failover capability.


SQL AG replication occurs via a network connection between the on-premises private cloud and the Azure virtual network.

If a SQL node, the SQL storage in the primary location, or the entire primary datacenter is lost, the database replicas on the surviving SQL Availability Group nodes are activated automatically.
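
To make the failover behavior in this scenario easier to observe, the sketch below polls Availability Group replica state from Python using pyodbc and the standard SQL Server HADR DMVs. The driver and server names are placeholders for illustration only; this is a monitoring sketch, not part of the NetApp solution itself.

  import pyodbc

  # Hypothetical connection to any reachable AG replica (placeholder names).
  CONN_STR = (
      "DRIVER={SQL Server Native Client 11.0};"
      "SERVER=sqlag-listener.example.com;"
      "DATABASE=master;Trusted_Connection=yes;"
  )

  # Which replica is PRIMARY right now, and is each replica healthy?
  QUERY = """
  SELECT ag.name, ar.replica_server_name, rs.role_desc, rs.synchronization_health_desc
  FROM sys.dm_hadr_availability_replica_states AS rs
  JOIN sys.availability_replicas AS ar ON rs.replica_id = ar.replica_id
  JOIN sys.availability_groups AS ag ON rs.group_id = ag.group_id
  """

  with pyodbc.connect(CONN_STR) as conn:
      for ag_name, replica, role, health in conn.execute(QUERY):
          print(ag_name, replica, role, health)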

This application continuity model can be extended by adding NetApp Private Storage for Azure deployments in multiple Azure regions.


SUMMARY
NetApp Private Storage for Microsoft Azure is immediately available through reseller partners and directly from NetApp, Microsoft, and Equinix in North America. The solution will be available in Europe and Asia in the near future.

Tuesday, June 17, 2014

NetApp Rebrands SSD-Only FAS Systems as All-Flash FAS

NetApp today rebranded its Fabric-Attached Storage (FAS) systems with only solid-state drives (SSD) as All-Flash FAS (or AFF) systems.

AFF systems can run any version of Data ONTAP that supports SSDs. That said, starting June 23, NetApp will also offer five pre-configured bundles that leverage the FAS8080 EX and FAS8060 with 200GB, 400GB, and 800GB SSDs.

But why is Data ONTAP good for flash?

I've asked Nick Triantos, one of our consulting systems engineers, to comment on why AFF is different. This is what he said:

"The biggest challenge for us is not how WAFL writes; in fact, that’s a real advantage. The biggest challenges for us have been:

Multi-core Optimizations – For a long time, Data ONTAP didn’t leverage multiple cores effectively. In fact, the project for multi-core optimizations started back with version 7.3 and has continued through the Data ONTAP 8 releases. I’m sure you’ve seen where one CPU was at 90% and the other at 20%! If the workload was hitting an ONTAP domain that would run on a single core, then your performance bottleneck was that particular core (90%). It didn’t matter if the other cores were underutilized. This has been addressed.

Metadata Management – When you leverage small block size like 4K, inherently, you create a ton of metadata you need to manage. In order to get to the data fast, you need even faster access to metadata. How do you access metadata faster? In memory. That’s why there’s a ton of memory in the FAS2500 and FAS8000 Series; so we can manage as much metadata as possible in DRAM.

Data Protection – This is actually related to the above. The AFF has more data protection features than any flash (or non-flash) array on the market. While this is a good thing, there’s a trade-off. The trade-off is longer I/O paths, because metadata has to be located and validated against the actual data blocks.

How do you protect against lost writes for example? What happens if I’m a trading firm and the SSD acknowledges that an SSD page has been written – when it was either not written at all or it was written to the wrong location? You just lost millions of dollars. Data ONTAP not only detects, but also protects and recovers from lost writes (which are a very insidious type of failure).”

I said, “Let’s talk more about lost writes.” Here’s his response:

“Lost writes are a rare but very stealthy failure, and the worst thing is you won’t know it happened until days or even months later. But once it happens, you just corrupted your data! Good luck trying to find out which backup or snapshot or replication point is not corrupted. Of course, all this additional data protection stuff comes with a trade-off.

On the other hand, claiming blazing speeds while protecting against only the loss of two drives is not sufficient to claim superior data protection – especially when flash arrays are typically deployed for business-critical, revenue-generating applications. You have to have worked through all the failure modes and make sure you can protect against those failures. We’ve hardened Data ONTAP over nearly 20 years to provide a very high level of resiliency against all modes of failure in various combinations.”
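
To make the lost-write discussion concrete, here is a toy Python sketch of the idea behind block "context" checking. It is purely conceptual and does not reflect WAFL's actual on-disk format; the point is that a checksum alone cannot catch a write that was acknowledged but never landed, while a stored block identity and write generation can.

  import zlib

  def write_block(disk, block_id, generation, data):
      # Store a checksum plus "context" (which block, which write generation)
      # alongside the data itself.
      disk[block_id] = {
          "data": data,
          "checksum": zlib.crc32(data),
          "context": (block_id, generation),
      }

  def read_block(disk, block_id, expected_generation):
      entry = disk[block_id]
      if zlib.crc32(entry["data"]) != entry["checksum"]:
          raise IOError("media corruption: checksum mismatch")
      if entry["context"] != (block_id, expected_generation):
          # The checksum is valid but the data is stale or misplaced:
          # an acknowledged write never landed where it should have.
          raise IOError("lost or misdirected write detected")
      return entry["data"]

  disk = {}
  write_block(disk, block_id=7, generation=1, data=b"old trade record")
  # A later write (generation 2) is acknowledged by the drive but silently dropped...
  try:
      read_block(disk, block_id=7, expected_generation=2)
  except IOError as err:
      print(err)   # lost or misdirected write detected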

To recap, NetApp AFF system bundles have:

  1. Larger memory
    • Larger read/write cache in FAS8000 = more in-memory metadata
  2. Faster NVRAM
    • Faster ACKs = lower response times
  3. Significant Multi-core optimizations
    • Since Data ONTAP 7.3 to version 8.2+
  4. Continuous Segment Size Cleaning (CSS)
    • Data ONTAP variable segment size (4K-256K)
  5. Intelligent Algorithms
    • Pattern Detection-based Read-Ahead (see the sketch after this list)
    • Sequential reads with the same block size (e.g., 32K) and with different block sizes (4K, 64K, 4K, 64K)
    • Strided reads: start at block N, read blocks 10 and 12, but skip block 11 in between
    • Backwards reads: start at block N and read 10 blocks backwards
    • Multiple threads simultaneously reading from multiple locations
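
As promised above, here is a toy Python sketch of pattern-detection-based read-ahead. It is not ONTAP's actual algorithm: it simply classifies a short history of read offsets as sequential, backwards, or strided, and guesses which blocks are worth prefetching next.

  def classify(offsets):
      # Reduce the recent read offsets to a single stride, if one exists.
      deltas = [b - a for a, b in zip(offsets, offsets[1:])]
      if not deltas or len(set(deltas)) != 1:
          return None                      # no consistent pattern detected
      step = deltas[0]
      if step == 1:
          return "sequential", step
      if step < 0:
          return "backwards", step         # e.g. start at block N and read backwards
      return "strided", step               # e.g. read blocks 10 and 12, skipping 11

  def prefetch(offsets, depth=4):
      # Prefetch the next `depth` blocks along the detected stride.
      detected = classify(offsets)
      if detected is None:
          return []
      _, step = detected
      return [offsets[-1] + step * i for i in range(1, depth + 1)]

  print(prefetch([10, 12, 14]))   # strided   -> [16, 18, 20, 22]
  print(prefetch([90, 80, 70]))   # backwards -> [60, 50, 40, 30]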

AFF system bundles are available to quote and order starting June 23, 2014.

NetApp Debuts FAS8080 EX

NetApp today launched the new FAS8080 EX, its most powerful storage system, which replaces the existing FAS/V6250 and FAS/V6290. This new system will initially ship with Data ONTAP 8.2.2 RC1 (supporting 7-Mode or clustered Data ONTAP).

Each FAS8080 EX chassis occupies 6U (12U per HA pair). The system is targeted at large enterprise customers with business-critical applications and at multi-petabyte cloud providers.

Let’s explore some of the technical details of this new system.

Unlike the FAS8020/40/60, FAS8080 EX configurations require the I/O Expansion Module (IOXM). Single-chassis configurations include a controller with IOXM. Dual-chassis configurations include two chassis, each with a controller and IOXM.

The new FAS8080 EX has been qualified with the DS2246, DS4246, DS4486, DS4243, DS14mk4, and the DS14mk2-AT disk shelves with IOM6, IOM3, ESH4, and AT-FCX shelf modules. Virtualized storage from multiple vendors can also be added to the FAS8080 EX -- without a dedicated V-Series “gateway” system -- with the new “FlexArray” software feature.

NetApp will not offer a separate FlexCache model for the FAS8080 EX.

Inside each FAS8080 EX Processor Control Module (PCM) is a dual-socket, 2.8 GHz Intel "Ivy Bridge" configuration with 20 cores per controller, an Intel Patsburg-J southbridge, and 128GB of DDR3 physical memory (256GB per HA pair). NetApp states that Data ONTAP will take advantage of all 20 cores on each PCM at launch.

Unique to the FAS8080 EX is the amount of NVRAM: Each FAS8080 EX PCM includes 16 GB of NVRAM9 (32GB per HA pair) with battery backup. Should a power loss occur, the NVRAM contents are destaged onto NAND Flash memory; once power is restored, the resulting NVLOG is then replayed to restore the system. NVRAM9 is integrated on the motherboard and does not take up a slot.
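
The destage-and-replay behavior is essentially a write journal. Below is a highly simplified Python sketch of that idea (conceptual only, not Data ONTAP's actual NVLOG implementation): writes are journaled and acknowledged immediately, flushed to disk at a consistency point, and any journal that survives a power loss is replayed so that no acknowledged write is lost.

  class NvramJournal:
      def __init__(self):
          self.log = []      # stands in for battery-backed NVRAM
          self.disk = {}     # stands in for the backing aggregate

      def write(self, block_id, data):
          self.log.append((block_id, data))   # journal the write first...
          return "ACK"                        # ...then acknowledge the client immediately

      def consistency_point(self):
          for block_id, data in self.log:     # flush journaled writes to disk
              self.disk[block_id] = data
          self.log = []

      def replay_after_power_loss(self):
          # On boot, the surviving journal (the destaged NVLOG) is replayed
          # exactly like a consistency point, so no acknowledged write is lost.
          self.consistency_point()

  journal = NvramJournal()
  journal.write(1, b"acknowledged but not yet on disk")
  # ...power loss here: in real hardware the journal survives via destage to NAND flash...
  journal.replay_after_power_loss()
  print(journal.disk[1])     # b'acknowledged but not yet on disk'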

The FAS8080 EX is built upon the Gen 3 PCI Express (PCIe) architecture for embedded devices (such as PCI bridges, Ethernet / Fibre Channel / InfiniBand adapters, and SAS controllers). HA Pair configurations include a total of 24 PCIe slots with x8 wide links.

Interestingly, the HA interconnect for the FAS8080 EX now leverages 40Gb QDR InfiniBand adapters, doubling the bandwidth of the FAS/V6290’s 20Gb DDR InfiniBand interconnect.

NetApp recommends connecting at least four onboard ports on the FAS8080 EX for the cluster interconnect; this allows clustered Data ONTAP to reach peak performance for remote workloads. The FAS8080 EX even supports up to six interconnect ports with the addition of the X1117A-R6 adapter in slot 12. However, using only two cluster interconnect ports is still supported.

As with previous FAS8000 systems, the FAS8080 EX includes support for the new Unified Target Adapter (UTA) 2 -- a storage industry first from NetApp. It supports 16Gb Fibre Channel (FC) or 10Gb Ethernet, providing future flexibility. Both ports must be set to the same "personality"; changing one UTA port will change the second port to the same personality.

The FC personality on the UTA2 will autorange link speeds across 16/8/4 Gb FC, but does not operate at 2 or 1 Gbps. The 10GbE personality will not autorange below 10Gb speeds. It is important to note that UTA2 ports are not supported with older DS14 FC shelves or FC tape devices; to connect to DS14 shelves or FC tape, use either the X1132A-R6 or X2054A-R6 adapter.

The FAS8080 EX can hold a maximum of 1,440 drives (per HA system), with a maximum capacity of 5,760TB. The maximum combined Flash Cache and Flash Pool capacity is 36TB per HA pair. The maximum aggregate size is 400TB, and the maximum volume size is 100TB.

The FAS8080 EX is available to quote and order starting June 23, 2014 with Data ONTAP 8.2.2 RC1.