New NetApp platform & Licensing improvements in ONTAP 9.6 (Part 1)

A320

NetApp introduced the all-flash A320, a 2U platform. Here are a few important details about this new AFF system:

  • From the performance point of view, the most notable claim is ~100 microseconds of latency on an SQL SLOB workload. If true, that is a notable improvement: previously we’ve seen only sub-1-millisecond (1,000 microseconds) latency, so the new figure is a few times (in the best-case scenario ~10 times) faster
  • About 20% better IOPS performance than the A300
  • NVDIMM instead of traditional NVRAM in a high-end/mid-range platform. This is the second NetApp AFF platform, after the A800, to adopt NVDIMM instead of PCIe-based NVRAM. Strictly speaking, NVDIMM has been used in entry-level FAS/AFF systems for an extended period of time, but only because of the lack of PCIe slots & space in those controllers
  • No disk drives in the controller chassis
  • No RoCE support for hosts. Yet
  • End to End NVMe
  • Rumors from Insight 2018 about new disk shelves confirmed:
    • NS224 directly connected over RoCE
    • 2 disk shelves maximum
    • 1.9 TB, 3.6 TB, and 7.6 TB drives supported
    • An upcoming ONTAP release will support disk shelves connected to controllers over a switch, and thus more than two disk shelves
  • Not very important to customers, but an interesting update from an engineering perspective: in this new platform, HA and cluster interconnect connectivity are now combined, unlike in any other NetApp appliance before.
  • 8x Onboard 100 GbE ports per controller:
    • 2 ports for cluster interconnect (and HA)
    • 2 for the first disk shelf, and optionally another 2 for the second disk shelf
    • which leaves 2 or 4 of the 100 GbE ports for host connections
  • 2 optional PCIe cards per controller with the following ports:
    • FC 16/32 Gb ports
    • RoCE capable 100/40 GbE
    • RoCE capable 25 GbE
    • Or 10GBASE-T ports

Entry Level Systems

The previously released A220 system is now available with 10GBASE-T ports, thanks to the increasing popularity of 10GBASE-T switches.

MCC IP for low-end platforms

MCC IP becomes available for low-end platforms in ONTAP 9.6: the A220 & FAS2750 (not the FAS2720 though), and, as with all MCC IP configs, it requires a four-node configuration. The new features are made in a way that reduces cost for such small configurations.

  • All AFF systems with MCC IP support disk partitioning, including the A220
  • Entry-level systems do not require special iWARP cards/ports like other storage systems do
  • Mixing MCC IP & other traffic is allowed (on all MCC IP configs?)
    • NetApp wants to ensure customers get a great experience with its solutions, so there will be some requirements your switch must meet, to maintain high performance, in order to be qualified for such an MCC IP configuration.

Brief history of MCC IP:

  • In ONTAP 9.5, the mid-range FAS8200 & A300 platforms added support for MCC IP
  • In ONTAP 9.4, MCC IP became available on the high-end A800
  • And initially, MCC IP was introduced in ONTAP 9.3 for the high-end A700 & FAS9000 systems.

New Cluster Switches

Two new port-dense switches from Cisco and Broadcom, with 48x 10/25 GbE SFP ports and a few 40 GbE or 100 GbE QSFP ports. You can use the same switches for MCC IP. The Broadcom-based BES-53248 will replace the CN1610.

And there is the new Cisco Nexus 92300YC, 1.2U high.

NVMe

New operating systems are supported with NVMe in ONTAP 9.6: Oracle Linux, VMware ESXi 6.7, and Windows Server 2012/2016. Previously, ONTAP 9.5 supported SUSE Linux 15 and Red Hat Enterprise Linux 7.5/7.6; Red Hat still doesn’t have ANA support. There is a new FlexPod configuration with an A800 connected over FC-NVMe to SUSE Linux. Volume move is now available with NVMe namespaces.
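Moving a volume that contains an NVMe namespace looks like any other volume move. A minimal sketch from the cluster shell, assuming hypothetical SVM, volume and aggregate names (svm1, nvme_vol1, aggr2 are placeholders, not from any real config):

    cluster1::> volume move start -vserver svm1 -volume nvme_vol1 -destination-aggregate aggr2
    cluster1::> volume move show -vserver svm1 -volume nvme_vol1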

NVMe protocol becomes free. Again

In ONTAP 9.6, the NVMe protocol becomes free. It was free when first introduced in 9.4 without ANA (an analog of SAN ALUA multipathing), and then it became a paid feature in 9.5.

SnapMirror Synchronous licensing adjusted

Licensing is simplified in 9.6: SM-S is included in the Premium Bundle. NetApp introduced SM-S in ONTAP 9.5 and previously licensed it per TB. If you are not going to use a secondary system as the source for replication to another system, SM-S does not need to be licensed on the secondary system.

New services

  • SupportEdge Prestige
  • Basic, Standard and Advanced Deployment options
  • Managed Upgrade Service


Disclaimer

All product names, logos, and brands are property of their respective owners. All company, product and service names used in this website are for identification purposes only. No one is sponsoring this article.

Which kind of Data Protection is SnapMirror? (Part 2)

I’m facing this question over and over again in different forms. To answer it, we need to understand what kinds of data protection exist. The first part of this article is How to make a Metro-HA from DR? (Part 1).

High Availability

This type of data protection tries its best to keep your data available all the time. If you have an HA service, it will continue to work even if one or a few components fail, which means your Recovery Point Objective (RPO) is always 0 with HA, and your Recovery Time Objective (RTO) is near 0. Whatever the RTO number is, we assume that our service, and the applications using it, will survive a failure (maybe with a small pause), continue to function, and not return an error to their clients. An essential part of any HA solution is automatic switchover between two or more components, so that your applications transparently switch to the surviving components and continue to interact with them instead of the failed one. With HA, timeouts should be set for your applications (typically up to 180 seconds) so that the RTO is equal to or lower than them. HA solutions are built so as not to reach those application timeouts, to make sure applications do not return an error to upstream services but instead see only a short pause. Whenever the RPO is not 0, that instantly means the data protection in question is not an HA solution.

The biggest problem with HA solutions is that they are limited by the distance across which their components can communicate: the more significant the gap between them, the more time they need to keep all your data fully synchronous across all of them and be ready to take over from the failed part.

In the context of NetApp FAS/AFF/ONTAP systems, HA can be a local HA pair or a MetroCluster stretched between two sites up to 700 km apart.


Disaster Recovery

The second kind of data protection is DR. What is the difference between DR and HA? They are both for data protection, right? By definition, DR is the kind of data protection that starts with the assumption that you have already got into a situation where your data is not available and your HA solution has failed for some reason. Why does DR assume your data is not available and that you have a disruption in your infrastructure service? The answer is “by definition.” With DR you might have an RPO of 0 or not, but your RTO is always not 0, which means you will get an error accessing your data; there will be a disruption in your service. DR assumes, by definition, that there is no fully automatic and transparent switchover.

Because HA and DR are both data protection techniques, people often confuse them, mix them up, and do not see the difference, or vice versa: they try to contrapose them and choose between them. But now, after this explanation of what they are and how they differ, you can already guess that you cannot replace one with the other; they do not compete but rather complement each other.

In the context of NetApp systems, the SnapMirror technology is strongly associated with DR capabilities.


Backup & Archive data protection

Backup is another type of data protection. Backup is an even lower level of data protection than DR; it allows you to access your data at any time from the backup site in order to restore it to a production site. An essential property of backup data is that it is not altered. Therefore, with backup, we assume we restore the data back to the original or to another place, but we do not alter the backed-up data, which means we do not run DR on our backup data. In the context of NetApp AFF/FAS/ONTAP systems, the backup solutions are local snapshots (a kind of backup) and the SnapVault D2D replication technology. In clustered ONTAP (version 8.3 and newer) SnapVault became XDP, just another engine for SnapMirror; with XDP, SnapMirror is capable of unified replication for both DR and backup. With archives, you do not have direct access to your backups, so you need some time to bring them online before you can restore them back to the source or to another location. A tape library or NetApp Cloud Backup are examples of archive solutions.
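A minimal sketch of what unified replication looks like from the CLI: a single XDP relationship with the built-in MirrorAndVault policy serves both DR and backup retention (SVM and volume names here are hypothetical placeholders):

    cluster1::> snapmirror create -source-path svm1:vol1 -destination-path svm1_dr:vol1_dst -type XDP -policy MirrorAndVault -schedule daily
    cluster1::> snapmirror initialize -destination-path svm1_dr:vol1_dst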


Is SnapMirror an HA or a DR data protection technology?

There is no straightforward answer to that; to answer the question, we have to consider the details.

SnapMirror comes in two flavors. Asynchronous SnapMirror transfers data to a secondary site from time to time; it is obviously a DR technology, because you cannot switch to the DR site automatically: you do not have the latest version of your data there. That means that before you start your applications, you might need to prepare them first. For instance, you might need to apply DB logs to your database, so that your “not the latest version of the data” becomes the latest one. Alternatively, you might need to choose one snapshot out of the last few to restore, because the latest one might contain corrupted data, infected with a virus for instance. Again, by definition a DR scenario assumes that you will not switch to DR instantly; it assumes you already have downtime, and it assumes you might need manual interaction, a script, or some modifications before you are able to start & run your services, all of which requires some downtime.
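For illustration, a hedged sketch of that “pick a snapshot” step: list the snapshots that reached the secondary, then restore the one you trust (all names, and the snapshot label, are hypothetical):

    cluster1::> volume snapshot show -vserver svm1_dr -volume vol1_dst
    cluster1::> snapmirror restore -source-path svm1_dr:vol1_dst -destination-path svm1:vol1 -source-snapshot daily.2019-05-20_0010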

Synchronous SnapMirror (SM-S) also has two modes: a strict, fully synchronous mode and a relaxed synchronous mode. The problem with synchronous replication, as with HA solutions, is that the longer the distance between the two sites, the more time is needed to replicate the data. And the longer it takes for the data to be transferred and acknowledged back to the first system, the longer your application waits for confirmation from your storage.

The relaxed mode tolerates lags and network outages, and auto-syncs again after network communication is restored, which means it is also a DR solution, because it allows the RPO to be not 0.

The strict mode does not tolerate network outages by definition, which means it ensures your RPO is always 0, which kind of makes it closer to HA.
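In ONTAP 9.5/9.6 these two modes map to the built-in Sync (relaxed) and StrictSync policies. A minimal sketch with placeholder SVM/volume names:

    # relaxed mode: on a network outage replication goes out of sync, then auto-resyncs
    cluster1::> snapmirror create -source-path svm1:vol1 -destination-path svm1_dr:vol1_dst -policy Sync
    # strict mode: client I/O fails rather than allowing RPO > 0
    cluster1::> snapmirror create -source-path svm1:vol2 -destination-path svm1_dr:vol2_dst -policy StrictSync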

Does it mean Synchronous SnapMirror in Strict mode is an HA solution?

Well, not precisely. Synchronous SnapMirror in strict mode can also be part of a DR solution. For instance, you can have a DB whose data is replicated asynchronously to a DR site while only the DB logs are replicated synchronously; this way we reduce network traffic between the two locations, provide a small overall RPO, and, with the synchronous DB logs, can roll the DB forward to ensure the entire DB has RPO 0. In such a scenario the RTO will not be that big, but it allows the two sites to be located very far from each other. The sketch below shows how SnapMirror Sync can be combined with SnapMirror Async to build a more robust DR solution.
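A sketch of that combination, with hypothetical volume names: the small log volume is replicated synchronously, the large data volume asynchronously:

    # DB logs: synchronous, guarantees RPO 0 for the logs
    cluster1::> snapmirror create -source-path svm1:db_logs -destination-path svm1_dr:db_logs_dst -policy StrictSync
    # DB data files: asynchronous, replicated on a schedule
    cluster1::> snapmirror create -source-path svm1:db_data -destination-path svm1_dr:db_data_dst -type XDP -policy MirrorAllSnapshots -schedule hourly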

To comply with the HA definition, you need not only an RPO of 0 but also the ability to switch over automatically, with an RTO not higher than the timeouts of your applications & services.

Can SM-S Strict mode switch over between sites automatically?

The answer is “not YET.” For automatic switchover between sites, NetApp has an entirely different technology called MetroCluster, which is a Metro-HA technology. Any MetroCluster or local HA system should be accompanied by DR, Backup & Archive technologies to provide the best data protection possible.
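For comparison, this is roughly what the MetroCluster switchover/switchback cycle looks like from the surviving cluster (a sketch only; consult the MCC documentation for the full procedure):

    # planned (negotiated) switchover:
    cluster1::> metrocluster switchover
    # or, after an actual site disaster:
    cluster1::> metrocluster switchover -forced-on-disaster true
    # once the failed site is repaired, heal and switch back:
    cluster1::> metrocluster heal -phase aggregates
    cluster1::> metrocluster heal -phase root-aggregates
    cluster1::> metrocluster switchback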

Will SM-S become HA?

I personally believe that NetApp will make it possible in the future to automatically switch over between two sites with SM-S. Most probably it will be built around the SVM-DR feature, to replicate not only data but also network interfaces and configuration, and for that SM-S will need some kind of tiebreaker like in MCC, but those pieces are not there yet. In my personal opinion, this kind of technology is most probably going to (and should) be positioned as an online data migration technology across the NetApp Data Fabric rather than as a (Metro-)HA solution.

Why should SM-S not be positioned as an HA solution?

A few reasons:

1) NetApp already has the MetroCluster (MCC) technology, and for many, many years it was, and still is, a superior Metro-HA technology proven to be stable, reliable and performant.

2) MCC has now become easier, simpler and smaller, and those three qualities are basically the only reasons you would want HA on top of SnapMirror in the first place. Since we already have MCC over IP (MCC IP), it is theoretically possible that it will run even on the smallest AFF systems someday.

Still, by my own sense of how things will evolve, SM-S might be used as an HA solution someday in some cases.

How are HA, DR & Backup solutions applied in practice?

As you remember, HA, DR & Backup solutions do not compete with, but rather complement, each other to provide full data protection. In a perfect world without money constraints, where you need the highest possible, fully covered data protection, you would need HA, DR, Backups, and Archives. HA lives in one place or is geo-distributed as far as possible (up to 700 km), and on top of that you need DR and Backups. For Backups, you would probably place the site as far away as possible, for instance on the other side of the country or even on another continent. In these circumstances, you can run Synchronous SnapMirror only for some of your data, like DB logs, and replicate the rest asynchronously to an intermediate DR site (up to ~10 ms of network RTT latency), and from that intermediate site replicate all the data asynchronously, or as backup protection, to the other continent. And from the DR and/or Backup sites we can archive to a tape library, NetApp Cloud Backup or another archive solution. A sketch of the replication layout follows below.
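A hedged sketch of the replication part of such a layout, with hypothetical SVM/volume names: sync for the logs to the intermediate DR site, async for the rest, and a cascade from the DR site to the far-away backup site:

    # primary -> intermediate DR site
    cluster1::> snapmirror create -source-path svm_prod:db_logs -destination-path svm_dr:db_logs_dst -policy StrictSync
    cluster1::> snapmirror create -source-path svm_prod:db_data -destination-path svm_dr:db_data_dst -type XDP -policy MirrorAllSnapshots -schedule hourly
    # intermediate DR site -> remote backup site (cascade, vault-style retention)
    cluster1::> snapmirror create -source-path svm_dr:db_data_dst -destination-path svm_bkp:db_data_bkp -type XDP -policy MirrorAndVault -schedule daily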


Summary

HA, DR, Backup and Archive are different types of data protection which complement each other. In the best-case scenario, a company should have not only an HA solution for its data but also DR, Backup, and Archive, or at the very least HA & Backup; but it always depends on business needs, the business’s willingness to pay for a given level of protection, and its understanding of the risks involved in not protecting the data properly.

How to make a Metro-HA from DR? (Part 1)

This is indeed a frequently asked question, posed in many different forms, like: can NetApp’s DR solution automatically switch sites on a DR event with a FAS2000/A200 system?

As you might guess, in the NetApp world Metro-HA is called MetroCluster (or MCC), and DR is Asynchronous SnapMirror. (Read about SnapMirror Synchronous in Part 2.)

It is the same sort of question as asking, “Can you build a MetroCluster-like solution based on an A200/FAS2000 with async SnapMirror, without buying a MetroCluster; is there an out-of-the-box solution?” The short answer is no, you cannot. There are a few quite good reasons for that:

  • First of all: DR & HA (or Metro-HA) protect from different kinds of failures, and therefore they are designed to behave & work quite differently, though both are data protection technologies. You see, MetroCluster is basically an HA solution stretched between two sites (up to 300 km for hardware MCC or up to 10 km for MetroCluster SDS); it is not a DR solution
  • MetroCluster is based on another technology called SyncMirror; it requires additional PCI cards and models higher than the A200/FAS2000, and there are some other requirements too.

Data Protection technologies comparison

Async SnapMirror, on the other hand, is designed to provide Disaster Recovery, not Metro-HA. When you say DR, it means you store point-in-time data (snapshots) for cases like logical data corruption, so you’ll have the ability to choose which snapshot to restore. Moreover, that ability also implies responsibility, because you or another human must decide which one to select & restore. So, there is no “automatic, out-of-the-box” switchover to a DR site with Async SnapMirror, as there is with MCC. Once you have many snapshots, you have many options, which makes it hard for a program or a system to decide which one it should switch to. SnapMirror also provides many capabilities for backup & restore:

  • Different platforms on the main & DR sites (in MCC both systems must be the same model)
  • A different number & type of drives (in MCC mirrored aggregates must be the same size & drive type)
  • Fan-out & cascade replicas (MCC has only two sites)
  • Replication can be done over L3, with no L2 requirements (MCC works only over L2)
  • You can replicate separate volumes or an entire SVM (with exclusions for some of the volumes if necessary). With MCC you replicate the entire storage system config and selected aggregates
  • Many snapshots (though MCC aggregates can contain snapshots, it switches only between the active file systems on the two sites).

All these options give async SnapMirror a lot of flexibility, and they mean your storage system would need very complex logic to switch between sites automatically. Long story short, it is impossible to build a single solution whose logic satisfies every customer, every possible configuration & every application. In other words, with a solution as flexible as async SnapMirror, switchover is in many cases done manually.
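To give a feel for what “manually” means here, a minimal sketch of activating the DR copy (placeholder names; remounting hosts, DNS/LIF changes and application startup remain your script’s job):

    # stop scheduled transfers and make the DR volume read-write
    cluster1::> snapmirror quiesce -destination-path svm_dr:vol1_dst
    cluster1::> snapmirror break -destination-path svm_dr:vol1_dst
    # after the primary is back: resync in the reverse direction before failing back
    cluster1::> snapmirror resync -source-path svm_dr:vol1_dst -destination-path svm_prod:vol1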

At the end of the day, an automatic or semi-automatic switchover is possible

An automatic or semi-automatic switchover is possible, but it must be built very carefully, with knowledge of the environment and an understanding of the precise customer situation, and customized for:

  • Different environments
  • Different protocols
  • Different applications.

MetroCluster, on the other hand, can automatically switch over between sites in case one site fails, but it operates only with the active file system and solves only the data availability problem, not data corruption. This means that if your data has been (logically) corrupted by, let’s say, a virus, a MetroCluster switchover is not going to help, but snapshots & SnapMirror will. Unlike SnapMirror, MetroCluster has strict, deterministic environmental requirements, only two sites between which your system can switch, and it works only with the active file system (no snapshots used); in this deterministic environment it is possible to determine the surviving site to choose, and to switch to it automatically with a tiebreaker. A tiebreaker is a piece of software with built-in logic which makes the decision for site switchover.

SVM DR

SVM DR does not replicate some parts of an SVM’s configuration to the DR site, so you must configure those parts manually, or prepare a script that will do it for you in case of a disaster.
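A hedged sketch of setting up SVM DR with identity preserved (it assumes cluster & SVM peering are already in place; all names are placeholders):

    # on the DR cluster: create a stopped DP-destination SVM
    cluster2::> vserver create -vserver svm1_dr -subtype dp-destination
    # replicate data plus network & protocol configuration
    cluster2::> snapmirror create -source-path svm1: -destination-path svm1_dr: -identity-preserve true
    cluster2::> snapmirror initialize -destination-path svm1_dr: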

Do not mix up Metro-HA (MetroCluster) & DR; those are two separate and not mutually exclusive data protection technologies: you can have both. Big companies usually run both MetroCluster & SnapMirror, because they have the budgets, business requirements & approvals for that. The same logic applies not only to NetApp systems but to all storage vendors.

The solution

In this particular case, a customer with a FAS2000/A200 & async SnapMirror can have only DR, so after a disaster event on the primary site the data must be mounted to hosts manually on the DR site, though it is possible to set up & configure your own script, with logic suitable for your environment, which switches between sites automatically or semi-automatically. For this purpose, things like NetApp Workflow Automation & a PowerShell script for backup/restore of ONTAP SMB shares can help do the job. You might also be interested in a VMware SRM + NetApp SRA (Storage Replication Adapter) configuration, which can give you a relatively easy way to switch between sites.

The second part of this article: Which kind of Data Protection is SnapMirror? (Part 2).