Sunday, April 13, 2014

NetApp Confirms ‘Heartbleed’ Bug In Certain Products

NetApp last week released an advisory confirming that at least six of its current products are vulnerable to the widely publicized security flaw known as the “Heartbleed” bug. These vulnerable products are:

  • Antivirus Connector for Clustered Data ONTAP
  • NetApp Manageability SDK (5.0P1 and later)
  • OnCommand Unified Manager Core Package (5.x only)
  • OnCommand Workflow Automation (2.2RC1 and later)
  • SMI-S Agent for Data ONTAP
  • SMI-S Agent for E-Series

The Heartbleed bug is a serious security vulnerability in OpenSSL 1.0.1 releases prior to 1.0.1g. It allows remote attackers to obtain sensitive information from process memory via crafted packets that trigger a buffer over-read, the result of a missing bounds check in the handling of Transport Layer Security (TLS) heartbeat extension packets.
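As a rough illustration, the flaw amounts to trusting an attacker-supplied length field. Below is a simplified Python model of that behavior -- not the actual OpenSSL code, just a sketch of the missing bounds check and the 1.0.1g-style fix:

```python
# Simplified, hypothetical model of the Heartbleed over-read (not OpenSSL
# code): the server echoes back `claimed_len` bytes, trusting the length
# field in the request instead of the size of the payload actually received.

def heartbeat_vulnerable(memory: bytes, payload_offset: int,
                         claimed_len: int) -> bytes:
    # Missing bounds check: reads past the real payload into adjacent
    # process memory, leaking whatever happens to sit there.
    return memory[payload_offset:payload_offset + claimed_len]

def heartbeat_patched(memory: bytes, payload_offset: int,
                      claimed_len: int, actual_len: int) -> bytes:
    # The fix: silently discard heartbeat requests whose claimed length
    # exceeds the number of bytes actually received.
    if claimed_len > actual_len:
        return b""
    return memory[payload_offset:payload_offset + claimed_len]

# Process memory: a 4-byte payload followed by a "secret" that should
# never leave the server.
memory = b"PING" + b"SECRET_KEY"
leaked = heartbeat_vulnerable(memory, 0, 14)  # attacker claims 14, sent 4
safe = heartbeat_patched(memory, 0, 14, 4)    # same request, post-patch
```

In the vulnerable path, `leaked` contains the adjacent "secret" bytes; in the patched path, the malformed request returns nothing.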

Until software fixes are issued for the affected products, NetApp recommends deploying third-party Intrusion Prevention System (IPS) and Intrusion Detection System (IDS) products to detect and block attacks.

NetApp will continue to update its advisory, entitled “NTAP-20140410-heartbleed”, as more information becomes available.

Tuesday, March 25, 2014

NetApp to Recommend Increase to Cluster Port Count

With the recent debut of release candidates for clustered Data ONTAP 8.2.1, NetApp now recommends that customers running FAS6280, 6290, 8040, and 8060 systems use all four onboard ports for the cluster interconnect. While this is not a requirement, the additional interconnect ports are necessary to reach peak performance for "remote workloads" -- that is, when a logical interface (LIF) home port serves data from a node other than the node that actually owns the data.

This recommendation means high-throughput applications (such as animation, rendering, or computer-aided design) will leverage the cluster interconnect more effectively during large sequential remote reads. Future releases of clustered Data ONTAP are expected to extend these interconnect performance gains to additional workloads.

Even with clustered Data ONTAP 8.2.1, setting up a switched cluster will still default to two cluster interfaces per node; however, it is possible to override this setting to four interfaces. Once configured, clustered Data ONTAP components (SpinNP, CSM, etc.) will automatically load-balance across them -- just as they have done for many years now.

It is also possible to reconfigure an existing node from two to four cluster interfaces.

Clustered Data ONTAP 8.2.1 RC2 is now available for download from the NetApp Support Site.

Tuesday, March 18, 2014

Better RAID Through Software: An In-Depth Look at the NetApp Firmware Parity Engine for E/EF-Series Systems

One of the most fundamental features of the NetApp E-Series Storage Systems and EF-Series Flash Arrays is the ability to generate, verify, and store RAID parity calculations (e.g., XOR logic and Galois field multiplication). To do so, NetApp leverages a native hardware “engine” that is part of the Intel Xeon architecture. This engine, named “Crystal Beach 3” Direct Memory Access (DMA), is located within the Xeon CPU of every E5400/5500 and EF540/550 system.

However, unlike previous controller generations, the hardware components found within modern E/EF-Series controllers deliver much higher data rates than the “Crystal Beach” engine can support. This means software is needed to augment hardware during times of high-throughput RAID operations.

Meet the Firmware Parity Engine from NetApp.

Often referred to as “Software XOR”, this low-level feature emulates hardware parity calculations for RAID 5 and RAID 6. It functions in the same manner as the “Crystal Beach” DMA engine: generating and validating P, Q, and P+Q parity, performing copy operations, and generating and validating cyclic redundancy check (CRC) values. This functionality is presumably written in assembly language and/or C and runs within VxWorks (the operating system of the controller firmware).
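To give a sense of what such an engine actually computes, here is a hedged Python sketch of RAID 6 P/Q parity generation and verification. The arithmetic is the standard GF(2^8) construction used by RAID 6 in general -- none of this is NetApp's actual firmware code, and all names are illustrative:

```python
# Sketch of RAID 6 P/Q parity (standard GF(2^8) math, not NetApp firmware).
# P is a plain XOR across data blocks; Q weights each block by a power of
# the generator g = 2 in GF(2^8).

def gf_mul2(x: int) -> int:
    # Multiply by 2 in GF(2^8), reducing by the common polynomial 0x11d.
    x <<= 1
    if x & 0x100:
        x ^= 0x11d
    return x & 0xff

def pq_parity(blocks):
    """Return (P, Q) parity bytes for a stripe of equal-length blocks."""
    p = bytearray(len(blocks[0]))
    q = bytearray(len(blocks[0]))
    for block in reversed(blocks):  # Horner's method: Q = sum g^i * D_i
        for i, byte in enumerate(block):
            p[i] ^= byte
            q[i] = gf_mul2(q[i]) ^ byte
    return bytes(p), bytes(q)

def verify_stripe(blocks, p, q) -> bool:
    """Recompute parity and compare -- analogous to a parity scrub."""
    return pq_parity(blocks) == (p, q)

data = [b"\x11\x22", b"\x33\x44", b"\x55\x66"]  # three data blocks
p, q = pq_parity(data)
```

With P and Q stored on separate drives, any two failures in the stripe can be reconstructed -- which is why a RAID 6 engine must handle P, Q, and P+Q generation and validation.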

Based on my research and testing, it appears that a driver determines which engine to use by reviewing the current workload: if the hardware engine is highly utilized, parity calculations are processed in software. It also appears that there is no way for end-users to “override” or “force” processing to a specific engine.
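The inferred selection behavior can be sketched as a simple dispatcher. Everything below is hypothetical -- the function name and threshold are invented for illustration, since the real logic is not documented or user-visible:

```python
# Purely illustrative sketch of the inferred engine-selection logic: fall
# back to the firmware ("Software XOR") path when the hardware DMA engine
# is busy. The threshold is an assumption, not a documented value, and
# nothing like this is end-user configurable on a real system.

HW_BUSY_THRESHOLD = 0.90  # assumed utilization cutoff

def select_parity_engine(hw_utilization: float) -> str:
    """Return which engine would handle the next parity operation."""
    if hw_utilization >= HW_BUSY_THRESHOLD:
        return "firmware"       # software-emulated parity path
    return "crystal_beach_dma"  # native hardware path
```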

Be warned though: if you are searching for an interface to configure or tune RAID operations, you will be sorely disappointed! The Firmware Parity Engine has no end-user interface to configure settings.

Note that the E2600 and E2700 systems do not include this Intel-based feature, as they leverage LSI SAS RAID-on-Chip (ROC) processors.

The Firmware Parity Engine was originally released as part of SANtricity 10.86 and associated firmware. It is also bundled with the latest SANtricity 11.10 release.