In my last blog post, I kicked off this series on multi-cloud data protection by discussing how choosing the right backup-as-a-service strategy can facilitate multi-cloud data protection and support a true Cloud Operations model that keeps pace with changing needs. In this post, let’s think about what’s involved in actually backing up your cloud-based data.
Number Three: Automating Protection
How you choose to set up your backups is a critical success factor. Once again, you have choices. One approach is to install agents (some people call these “connectors”), define the backup configuration, and create the backup jobs: how often to back up the data, how long to retain the backups, and which backup targets will store the data. Anyone accustomed to using backup software with on-premises systems is familiar with this approach. Like many traditional data-center-centric processes, it’s a time-consuming manual task.
But you moved to the cloud to reduce the burden on your IT team, right? Manually configuring backups for multiple clouds could actually increase that burden, rather than reduce it.
That brings us to option number two: Automated, policy-based backups. Ideally, multi-cloud data protection should provide 1-click backups based on flexible policies — “set and forget.” When evaluating a cloud backup and recovery solution, pay careful attention to whether it provides strong support for policy-based configuration to truly deliver on that “1-click” promise.
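As a rough illustration of the “set and forget” idea (a hypothetical sketch, not HYCU’s actual API — the class, field, and function names below are invented), a backup policy bundles frequency, retention, and target into one reusable object that can be applied to any number of workloads in a single step:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class BackupPolicy:
    """A declarative backup policy: define once, apply anywhere."""
    name: str
    frequency_hours: int   # how often to back up
    retention_days: int    # how long to keep each backup
    target: str            # where the backup data is stored

def apply_policy(workloads, policy):
    """The '1-click' step: attach one policy to many workloads at once,
    instead of hand-building a backup job per workload."""
    return {workload: policy.name for workload in workloads}

# Define the policy once...
gold = BackupPolicy("gold", frequency_hours=4,
                    retention_days=90, target="s3://backups/gold")
# ...then protecting a new workload is one call, not a new manual job.
assignments = apply_policy(["billing-db", "web-frontend"], gold)
```

The point of the sketch is the shape of the workflow: the schedule, retention, and target live in the policy, so onboarding a new workload never means re-specifying them.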
Number Four: Leave No Application Behind
When we think about data protection, we need to think about applications. After all, that’s what keeps your business running. But the picture is quite different in a dynamic, multi-cloud environment than in a static, on-premises environment.
Remember that a key reason for moving to the cloud is to gain greater flexibility, speed and agility. You can spin up new workloads on the fly, take down inactive workloads or move workloads around quickly and easily to meet your changing needs. In an environment this dynamic, manually configuring backups just doesn’t work. You can’t have someone just sitting there waiting for new applications to spin up and then manually setting up backups.
So finding a data protection solution that automatically discovers new applications is essential. It’s also critical that your backup solution provide application consistency by default—backups that capture applications in a consistent, recoverable state, not just crash-consistent copies of their disks.
In the first post in the series, I spoke about the importance of policy-based data protection. But manually assigning backup policies takes too long—you lose the speed advantage of the cloud. So it’s important that your backup solution provide automated assignment of backup policies.
There are two important success factors here. First, you can tag or label your infrastructure as you create it—whether through an automated CI/CD pipeline or by hand—and the backup solution should automatically assign the correct policy based on that tag. That is the most effective and efficient approach. But what if a developer neglects to add a tag? You need default policy assignment in place as a critical “safety net.” When evaluating any solution, make sure it provides both tag-based automated assignment and default assignment.
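The tag-plus-safety-net logic can be sketched in a few lines (again hypothetical — the tag names, policy names, and function are illustrative, and real products implement this internally):

```python
# Map infrastructure tags to backup policies. Anything untagged
# falls through to a default policy -- the "safety net".
POLICY_BY_TAG = {
    "prod": "gold-policy",
    "dev": "bronze-policy",
}
DEFAULT_POLICY = "silver-policy"

def assign_policy(workload_tags):
    """Return the policy matching the workload's tags, or the
    default policy when the developer forgot to tag it."""
    for tag in workload_tags:
        if tag in POLICY_BY_TAG:
            return POLICY_BY_TAG[tag]
    return DEFAULT_POLICY
```

So a workload tagged `prod` lands on the aggressive policy automatically, while an untagged one still gets protected by the default rather than slipping through unprotected.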
If there is a key theme to all of this, it’s ensuring maximum protection with minimum complexity and human effort. That’s a winning combination given the rising criticality of applications and data—and the growing pressures on IT budgets and headcounts.
In the next installment of this series, we’ll look at recoverability and cloning of backups, applications, VMs, containers, and more.
If you’d like to learn more about how HYCU handles multi-cloud data protection, visit www.hycu.com. Or, you can experience HYCU first hand by signing up for a free trial at TryHYCU.