Creating the right targets for your backup
Data protection is a significant expense for customers. Any time a customer brings in new data, it needs to be protected. As we see it, the cost of data protection breaks down into four parts: backup software cost, deployment cost, ongoing operational cost, and the cost of the storage where the data is stored (aka backup targets). When we launched HYCU, a driving factor was that our purpose-built data protection solution for Nutanix needed to save customers money in every way possible.
We started by making HYCU extremely price competitive. We then practically eliminated the deployment cost, because the software takes three minutes to deploy. And because HYCU is designed for exception-based management, the ongoing operational cost is minimal as well. However, one area we did not have direct control over was the cost of the backup target. So, we did three things for customers to address this:
- Enabled them to leverage their existing infrastructure
- Provided them a choice of cloud storage, and
- Allowed them to expand their existing HCI infrastructure.
To let customers leverage their existing infrastructure, we decided to support all of the standard protocols: NFS, SMB, and iSCSI. This allowed us to cover not just the breadth of traditional storage systems, but also scale-out dedupe appliances like ExaGrid. When it comes to the cloud, the industry standards are Amazon S3 and Microsoft Azure. We support both, and we also support S3-compatible storage, which lets us work with solutions like Cloudian and Scality. We will be expanding to other S3-compatible storage shortly.
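To make the idea of target breadth concrete, here is a minimal sketch of how a multi-protocol target catalog might look. This is illustrative only: HYCU's actual target model is internal to the product, and the class and function names below are assumptions, not its real API. Note how an S3-compatible target (Cloudian, Scality) differs from Amazon S3 only in its endpoint URL; the client-side API is the same.

```python
from dataclasses import dataclass, field

# Hypothetical sketch -- these names are illustrative, not HYCU's internals.
@dataclass
class BackupTarget:
    name: str
    protocol: str                 # "nfs", "smb", "iscsi", or "s3"
    address: str                  # share path, LUN portal, or endpoint URL
    options: dict = field(default_factory=dict)

SUPPORTED_PROTOCOLS = {"nfs", "smb", "iscsi", "s3"}

def make_target(name: str, protocol: str, address: str, **options) -> BackupTarget:
    """Validate and build a backup-target description."""
    if protocol not in SUPPORTED_PROTOCOLS:
        raise ValueError(f"unsupported protocol: {protocol}")
    return BackupTarget(name, protocol, address, options)

targets = [
    make_target("nas-01", "nfs", "filer01:/export/backups"),
    make_target("dedupe-01", "smb", r"\\exagrid01\backups"),
    # Amazon S3 and an on-prem S3-compatible store share the same shape;
    # only the endpoint address changes.
    make_target("cloud-01", "s3", "https://s3.us-east-1.amazonaws.com",
                bucket="hycu-backups"),
    make_target("onprem-s3", "s3", "https://cloudian.example.local:9443",
                bucket="hycu-backups"),
]
```

The hostnames and bucket names are placeholders; the point is that one catalog can describe traditional NAS shares, dedupe appliances, and object stores uniformly.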
One thing we came to realize is that most customers are constantly adding capacity to their backup infrastructure. Traditionally, this means buying another Data Domain or Quantum box. But as customers move to HCI, they need to seriously rethink this. A move to HCI is usually made to eliminate silos and create a consolidated, scale-out infrastructure that can manage multiple workloads. If that's true, then isn't backup just another workload? Why would you create a separate island for it? If you need extra capacity, just add storage-dense nodes to your existing Nutanix infrastructure, or add a storage-dense cluster. This not only eliminates silos and cost, it also makes management simpler. We are starting to see many of our customers do this.
Beyond the four initial data protection costs, another area where we felt we could help was in operationalizing the backup infrastructure. Typically, there are five areas we see admins focus on the most. They are:
- Make sure to assign the right target to the right policy
- Make sure there is enough capacity in each of their backup tiers
- Make sure the backup tiers are load balanced
- Make sure to delete expired backups from the targets to free up space, and
- Make sure the backup infrastructure is still performing well enough to meet the policy guidelines.
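The checks in the list above can be sketched in a few lines. This is a simplified illustration of the kind of logic such automation performs, not HYCU's actual implementation; the names, thresholds, and retention window below are assumptions.

```python
from dataclasses import dataclass

# Illustrative sketch only -- HYCU runs these checks internally; the
# thresholds and names here are assumptions.
@dataclass
class Target:
    name: str
    capacity_gb: float
    used_gb: float

    @property
    def free_gb(self) -> float:
        return self.capacity_gb - self.used_gb

def needs_capacity(target: Target, threshold: float = 0.85) -> bool:
    """Flag a backup tier whose utilization crosses the threshold."""
    return target.used_gb / target.capacity_gb >= threshold

def pick_target(targets: list) -> Target:
    """Naive load balancing: route the next backup to the emptiest tier."""
    return max(targets, key=lambda t: t.free_gb)

def expire(backups: list, retention_days: int = 30) -> list:
    """Return names of backups older than the retention window."""
    return [name for name, age_days in backups if age_days > retention_days]

tiers = [Target("nas-01", 1000, 900), Target("dense-node", 4000, 1000)]
# nas-01 sits at 90% utilization and gets flagged; new backups are
# routed to the storage-dense node, which has the most free space.
```

In the product these decisions surface on the dashboard or as notifications rather than requiring the admin to run anything by hand.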
To address these, we wanted to make sure we did the heavy lifting for the customer. The cool thing about HYCU is that it automates all of these tasks, so customers don't have to do any of this on their own. They just need to glance at our dashboard to see if anything needs attention, or simply wait to be notified automatically. Isn't that what good software is supposed to do?
To see how easy and efficient HYCU is at minimizing costs and managing your backup targets, check out the following video, or give HYCU a try at tryhycu.com.