What you might have missed about NetApp from Aug-Nov 2019, including Insight in Las Vegas. Part 3

NetApp & Rubrik

NetApp & Rubrik announced a collaboration. First, StorageGRID can now be a target for Rubrik archives. Second, Rubrik now supports the NetApp SnapDiff API. SnapDiff is an ONTAP technology that compares two snapshots and returns the list of changed files, so Rubrik can copy only those files. Rubrik is not the first to work with the SnapDiff API (others like Catalogic, Commvault, IBM (TSM) and Veritas (NetBackup) support it as well), but Rubrik is the first to back that data up to a public cloud. The integration will be available in Rubrik Cloud Data Management (CDM) v5.2 in 2020.
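
To illustrate the idea, here is a minimal sketch of an incremental backup loop built on a SnapDiff-style call. The snapdiff() helper and its (path, change_type) output format are hypothetical placeholders, not the actual ONTAP API:

    # Hypothetical sketch of a SnapDiff-driven incremental backup.
    # snapdiff() stands in for the real ONTAP SnapDiff API call and is
    # assumed to yield (path, change_type) tuples for files that differ
    # between two snapshots of the same volume.

    import shutil
    from pathlib import Path

    def snapdiff(volume: str, base_snap: str, diff_snap: str):
        """Placeholder for the ONTAP SnapDiff call: yields changed files."""
        raise NotImplementedError("call the real SnapDiff API here")

    def incremental_backup(volume: str, base_snap: str, diff_snap: str,
                           src_root: Path, dst_root: Path) -> None:
        # Copy only the files reported as changed between the two
        # snapshots, instead of walking the whole file system.
        for path, change in snapdiff(volume, base_snap, diff_snap):
            if change in ("added", "modified"):
                src = src_root / path
                dst = dst_root / path
                dst.parent.mkdir(parents=True, exist_ok=True)
                shutil.copy2(src, dst)
            elif change == "deleted":
                (dst_root / path).unlink(missing_ok=True)  # Python 3.8+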

NetApp & Veeam

Veeam Availability Orchestrator v3 (VAO) provides a new level of NetApp integration for data protection (DP):

  • Full recovery orchestration for NetApp ONTAP Snapshots
  • Automated testing and reporting, which have become essential to DR strategies
  • TR-4777: Veeam & StorageGRID

Continue reading

All announcements from Aug-Nov 2019

Am I missing something?

Please let me know in the comments below!

If you spotted an error, please let me know personally 😉

Disclaimer

Opinions & observations are my own, and not official NetApp information. This post contains forward-looking statements and may contain errors. If you have spotted an error, please let me know.

What you might have missed about NetApp from Aug-Nov 2019, including Insight in Las Vegas. Part 2

MAX Data 1.5

  • Support for ONTAP 9.6 GA and later releases
  • Support for FAS storage systems and ONTAP Select systems running ONTAP 9.7, in addition to AFF storage systems
  • Resizing application memory allocation
  • Support for Red Hat Enterprise Linux 7.7
  • Support for local snapshots on server-only systems
  • Significant performance improvements: more I/O at lower latency (5.4M 4KB read IOPS at 12.5 µs latency)

Previously in 1.4

With version 1.4, you can use MAX Data without an AFF system: tiering now works between persistent memory (PMEM) and SSDs installed locally in the server.
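
As a conceptual illustration only (not the actual MAX Data implementation), the tiering idea can be sketched as an LRU scheme in which hot blocks live in PMEM and the coldest blocks are demoted to a local SSD:

    # Conceptual sketch of PMEM-to-SSD tiering, not MAX Data's real code:
    # hot blocks stay in persistent memory, cold blocks are demoted to SSD.

    from collections import OrderedDict

    class TwoTierStore:
        def __init__(self, pmem_capacity: int):
            self.pmem = OrderedDict()   # block_id -> data, kept in LRU order
            self.ssd = {}               # demoted cold blocks
            self.capacity = pmem_capacity

        def write(self, block_id, data):
            self.pmem[block_id] = data
            self.pmem.move_to_end(block_id)        # mark as hottest
            if len(self.pmem) > self.capacity:     # PMEM full:
                cold_id, cold = self.pmem.popitem(last=False)
                self.ssd[cold_id] = cold           # demote the coldest block

        def read(self, block_id):
            if block_id in self.pmem:              # hot path: PMEM hit
                self.pmem.move_to_end(block_id)
                return self.pmem[block_id]
            data = self.ssd.pop(block_id)          # cold path (assumes the
            self.write(block_id, data)             # block exists): promote
            return data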

There have been some leaks suggesting that NetApp HCI will support MAX Data at some point.

Considering the new H615C compute node with Cascade Lake CPUs, which are, by the way, required for Optane memory, it looks like NetApp is putting all the pieces together to make it happen.

Continue reading

Announcements from Aug-Nov 2019

Am I missing something?

Please let me know in the comments below!

If you spotted an error, please let me know personally 😉

Disclaimer

Opinions & observations are my own, and not official NetApp information. This post contains forward-looking statements and may contain errors. If you have spotted an error, please let me know.

What you might have missed about NetApp from Aug-Nov 2019, including Insight in Las Vegas. Part 1

E-Series

Performance

End-to-end NVMe with the EF600 – more I/O (2x more than the EF570), less latency:

NVMe in EF600

  • 100Gb NVMe/RoCE
  • 100Gb NVMe/InfiniBand
  • 32Gb NVMe/FC

E-Series Performance Analyzer

An automated installation and deployment of Grafana, NetApp E-Series Web Services, and supporting software for performance monitoring of NetApp E-Series storage systems. NetApp intends this project to let you quickly and simply deploy an instance of its performance analyzer for monitoring your E-Series storage systems, incorporating various open source components and tools to do so. While it is primarily meant to serve as a reference implementation for using Grafana to visualize the performance of your E-Series systems, it can also be customized and extended to fit your individual needs.

https://github.com/NetApp/eseries-perf-analyzer
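
If you want to pull the same performance data yourself, here is a minimal sketch against the E-Series Web Services REST API that the analyzer builds on. The host, credentials, endpoint path and field names are assumptions for illustration; check the Web Services API reference for your release:

    # Minimal sketch: query volume performance statistics from the
    # E-Series Web Services REST API (the data source behind the
    # Performance Analyzer's Grafana dashboards). Host, credentials,
    # endpoint and field names below are illustrative assumptions.

    import requests

    BASE = "https://webservices.example.com:8443/devmgr/v2"
    AUTH = ("admin", "admin")   # example credentials

    def get(path: str):
        resp = requests.get(BASE + path, auth=AUTH, verify=False)
        resp.raise_for_status()
        return resp.json()

    for system in get("/storage-systems"):
        sys_id = system["id"]
        for stats in get(f"/storage-systems/{sys_id}/analysed-volume-statistics"):
            print(sys_id, stats.get("volumeName"),
                  stats.get("readIOps"), stats.get("writeIOps"))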

New TR docs about EF & DB

Continue reading

All announcements from Aug-Nov 2019

Am I missing something?

Please let me know in the comments below!

If you spotted an error, please let me know personally 😉

Disclaimer

Opinions & observations are my own, and not official NetApp information. This post contains forward-looking statements and may contain errors. If you have spotted an error, please let me know.

Why GCP Anthos on NetApp HCI is a big deal

Google Cloud & NetApp announced a new validated design with GKE running on NetApp HCI on-premises.

Read what you might have missed from NetApp's announcements during Aug-Nov 2019, compressed into a single article.

Kubernetes was originally designed by Google, Google is one of the main contributors to Docker, and GKE is obviously the most advanced, mature & stable Kubernetes offering on the market. If you have tried GKE in GCP alongside competing solutions, you know what I'm talking about.

Containers on-premises are difficult when you want to build an enterprise solution for new containerized applications, for a number of reasons: installation, configuration, management & updates of your core infrastructure components; persistent, performant & predictable storage; and the fact that DevOps teams do not want to deal with infrastructure, they just want to consume it. These are the key problems to solve, and NetApp aims to do it.

NVA-1141: NetApp HCI with Anthos. NVA Design/Deployment

A few bullet points on why Google Anthos on NetApp HCI is an important announcement:

  • Hybrid cloud. NetApp, in line with its Data Fabric vision, continues to bring the hybrid cloud experience to its users. Now with Anthos on HCI, your on-prem data center becomes just another cloud zone. Software updates for GKE & Anthos are on Google's shoulders; you just consume the service. And not only NetApp HCI maintenance such as software & firmware updates can be bought as a service, but capacity as well: with NetApp Keystone you can pay as you go & consume infrastructure as a service, OPEX instead of CAPEX, on request
  • NetApp Kubernetes Services (NKS). In addition to NKS, which allows for the deployment & management of Kubernetes clusters on-premises & in the cloud, Anthos provides the ability to deploy on-prem clusters fully integrated with Google Cloud, including the ability to manage them from the GKE console. NKS comes bundled with Istio, Helm & many other components for your microservices, which takes DevOps to the next level. Cloud infrastructure has reached your on-premises data center
  • Storage automation. NetApp Trident is arguably the most advanced storage driver for containers on the market so far, bringing automation, an API and persistent storage to the container world; Trident with NKS & Anthos makes total sense. Speaking of automation, NetApp's Ansible playbooks are also the most advanced on the market at the moment, with 106 published & supported modules, and SolidFire itself is known as fully API-driven storage, so you can manage it solely through its API (see the sketch after this list)
  • Simple, predictable and performant enterprise storage with QoS, whether on-prem or in the cloud: use Trident and Ansible with NetApp HCI on-prem, or with CVO or CVS in AWS, Azure or GCP; moreover, you can replicate your data to the cloud for DR or test/dev
  • NetApp HCI vs other HCI solutions. One of the most notable HCI competitors is Nutanix, so I want to use it as an example. Nutanix's storage architecture with local disk drives is certainly interesting but not unique, and it obviously has some architectural disadvantages, scalability being one to name. Local disk drives are a blessing for tiny solutions but not such a good idea when you need to scale: the cheapness of a small solution with commodity hardware & local drives can turn into a curse at scale. That's why Nutanix eventually developed dedicated storage nodes connected over the network to overcome the issue, stepping into the very competitive land of network storage systems. And because dedicated storage nodes connected over the network are nothing new or unique, there are plenty of capable & scalable network storage systems out there; the most exciting part of Nutanix is its ecosystem & simplicity, not its storage architecture. Now, thanks to Anthos, NetApp HCI gets into a unique position with scalability, ecosystem, simplicity, hybrid cloud & functionality for microservices that other great competitors like Nutanix have not reached yet, and that gives NetApp momentum in the HCI market
  • Performance. Don't forget about NetApp's MAX Data software, which already works with VMware & SolidFire; it will take NetApp only one last step to bring DCPMM like Intel Optane to NetApp HCI. Note that NetApp just announced at Insight 2019 a compute node with Intel Cascade Lake CPUs, which are required for Optane. MAX Data is not available on NetApp HCI yet, but we can clearly see NetApp putting everything together to make it happen. Persistent memory in the form of a file system on a Linux host server, with tiering of cold blocks to "slow" SSD storage, could put NetApp on top of all competitors in terms of performance
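
To make the "fully API-driven" point above concrete, here is a minimal sketch of driving SolidFire (the storage layer of NetApp HCI) through its JSON-RPC Element API over HTTPS, setting per-volume QoS. The endpoint version, credentials, volume ID and QoS numbers are illustrative assumptions, not values from this announcement:

    # Minimal sketch: set per-volume QoS on SolidFire via the Element API.
    # The API is JSON-RPC over HTTPS; the endpoint version, credentials
    # and volume ID below are examples only.

    import requests

    MVIP = "https://192.0.2.10/json-rpc/11.0"   # cluster management VIP (example)

    def element_api(method: str, params: dict) -> dict:
        resp = requests.post(
            MVIP,
            json={"method": method, "params": params, "id": 1},
            auth=("admin", "password"),   # replace with real credentials
            verify=False,                 # lab only: self-signed certificate
        )
        resp.raise_for_status()
        return resp.json()["result"]

    # Guarantee a volume 1,000 IOPS, cap it at 10,000, allow bursts to 15,000.
    element_api("ModifyVolume", {
        "volumeID": 1,
        "qos": {"minIOPS": 1000, "maxIOPS": 10000, "burstIOPS": 15000},
    })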

HCI Performance

Speaking of which, take a look at these two performance tests:

  1. IOmark-VM-HC: 5 storage & 18 compute nodes using data stores & VVols
  2. IOmark-VDI-HC: 5 storage nodes & 12 compute nodes with only data stores

In total: 1,440 VMs and 3,200 VDI desktops.

Notice how asymmetrical the number of storage nodes is compared to the number of compute nodes. In "real" HCI architectures with local drives you would have to buy more equipment, while with NetApp HCI you can choose how much storage and how much compute you need and scale them separately. Dedup & compression were enabled in the tests.

Disclaimer

This article is for information purposes only and may contain errors and personal opinions. This text was neither authorized nor sponsored by NetApp. If you have spotted an error, please let me know.

What you might have missed about NetApp from Aug-Nov 2019, including Insight in Las Vegas. Contents

Some competitors might say NetApp does not innovate anymore. Well, read this article and decide for yourself whether that is true, or whether it is just yet another piece of shameless marketing.

Part 1

E-Series

Performance

NVMe in EF600

E-Series Performance Analyzer

New TR docs about EF & DB

Part 2

MAX Data 1.5

Previously in 1.4

Part 3

NetApp & Rubrik

NetApp & Veeam

Part 4

Active IQ 2.0

Active IQ Unified Manager 9.7

Part 5

AFF & FAS

AFF & NVMe

ONTAP AI with containers

ASA

ONTAP

ONTAP Select

ONTAP SDS embedded in non-x86 systems for edge devices

FlexGroup

SnapMirror Sync (SM-S)

NDAS

SnapCenter 4.2

New with VMware & VVOLs

Virtual Storage Console (VSC)

FlexCache

MCC

MetroCluster IP

MCC-FC

ONTAP Mediator instead of Tiebreaker

Part 6

StorageGRID v11.3

Part 7

Keystone

Complete Digital Advisors as part of Support Edge

Part 8

Lab on demand

Lab on demand for Customers

There are more labs for current NetApp customers

Part 9

NAbox

Harvest 1.6

Part 10

SaaS Backup

SaaS backup for Salesforce

Cloud Volumes

Cloud Volumes On-Premises

Cloud Compliance

Cloud Insights

Cloud Secure

NetApp Kubernetes Services (NKS)

HCI

Part 11

New Solutions

Part 12

Containers

NetApp Trident

Ansible

Part 13

Technical Support

How to collect logs before opening a support ticket

How to measure storage performance

Gartner Magic Quadrant for Primary Array

Will NetApp adopt QLC flash in 2020?

Continue reading

All announcements from Aug-Nov 2019

Am I missing something?

Please let me know in the comments below!

If you spotted an error, please let me know personally 😉

Disclaimer

Opinions & observations are my own, and not official NetApp information. This post contains forward-looking statements and may contain errors. If you have spotted an error, please let me know.

MAX Data: two primary use-cases

Thanks to the recent NetApp Tech ONTAP Podcast (Episode 185 – Oracle on MAX Data), I noted two main use cases for MAX Data software when used with a database (it does not really matter whether it is Oracle DB or any other). Here they are:

First configuration

When MAX Data is used without the MAX Recovery functionality, NetApp recommends placing DB data files on the MAX FS with snapshots (MAX Snap) enabled, and placing DB logs on a separate LUN B on the ONTAP system. In this case, if the persistent memory or the server is damaged, it is possible to fully restore the data by recovering from a MAX Data snapshot on LUN A and then rolling the latest transactions from the logs on LUN B forward into the DB (sketched in code after the pros & cons below).

  • Pros & cons: transactions execute fast but are confirmed to clients at the speed of the logs stored on LUN B, and the restore process may take some time because a storage LUN is usually much slower than persistent memory. On the other hand, this configuration is cheaper, since only one server with persistent memory is required.
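
A minimal sketch of that recovery flow, with all object and method names hypothetical (in practice this is driven by MAX Data and the database's own recovery tooling):

    # Hypothetical sketch of the first-configuration recovery flow:
    # 1) restore data files from the last MAX Snap snapshot (LUN A),
    # 2) replay transaction logs from LUN B to roll the database forward.
    # All names are illustrative; they do not map to a real MAX Data API.

    def recover(db, snapshots, log_store):
        snap = snapshots.latest()            # newest MAX Data snapshot, LUN A
        db.restore_data_files(snap)          # bulk restore, bounded by LUN speed
        for record in log_store.records_since(snap.created_at):  # logs, LUN B
            db.apply(record)                 # roll forward committed transactions
        db.open_for_clients()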

Second configuration

When the logs need to be placed on the fast MAX Data FS together with the DB data files to improve overall transaction latency (execution time plus confirmation time to clients), NetApp recommends using the MAX Recovery functionality, which synchronously copies data from the primary server's persistent memory to a recovery server's persistent memory.

  • Pros & cons: if the primary server loses its data due to a malfunction, the data can be quickly recovered back to the primary server over the RDMA connection from the recovery server's persistent memory tier, which restores normal operation in less time than the first configuration. A complete restore from storage might take a few hours for 1 TB of data, versus 5-10 minutes with MAX Recovery (see the back-of-the-envelope arithmetic below). Transaction execution latency is slightly worse in this configuration, by a few microseconds, due to the added network latency of synchronous replication, but overall transaction latency (execution + client confirmation) is much better than in the first configuration, because the entire DB, including data files and logs, is stored on the fast persistent memory tier. Those few extra microseconds of execution time are a relatively small price in terms of overall transaction latency. MAX Recovery requires a second server with the same or a greater amount of persistent memory & an RDMA connection, which adds cost to the solution, but it provides better protection and faster restores in case of a primary server malfunction. Overall, the second configuration provides much better transaction latency than placing the logs on a storage LUN.
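
As a back-of-the-envelope illustration of why the gap is so large (the throughput figures below are assumptions made for the sake of the arithmetic, not measured numbers):

    # Rough restore-time comparison for 1 TB of data. The throughput
    # figures are illustrative assumptions, not measurements.

    DATA_BYTES = 1 * 10**12          # 1 TB

    lun_bps = 100 * 10**6            # ~100 MB/s effective restore from a LUN
    rdma_bps = 3 * 10**9             # a few GB/s effective over RDMA

    print(DATA_BYTES / lun_bps / 3600, "hours from storage")       # ~2.8 hours
    print(DATA_BYTES / rdma_bps / 60, "minutes via MAX Recovery")  # ~5.6 minutes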

Some thoughts about RAM

Speaking of the MAX Data configuration with MAX Snap enabled, where DB logs are placed on a dedicated LUN (the first configuration), it got me thinking: what if we used this configuration with ordinary RAM instead of Optane?

Of course, there would be disadvantages, the same as in the first configuration, but there would be some pros as well:
1) In case of a disaster, all data in RAM would be lost, so we would need to restore from a MAX Snap snapshot and then roll the DB logs forward from the LUN, which takes some time
2) Transaction confirmation speed would be equal to the speed of the LUN holding the logs; however, transaction execution would run at the speed of RAM
3) RAM is more expensive; on the other hand, you would not need new "special" servers with special CPUs

I wouldn’t do second configuration on RAM though.