Today, we’ll continue from where we left off last time. This time, we’ll focus on FinOps options suited to the more flexible and dynamic SAP systems, such as non-prod ones. In addition to all the possibilities described previously, it might be worthwhile to:
- Validate whether alternative choices for compute and storage make sense. Just because a productive SAP system uses a 2 TB GCE instance does not mean all the lower-end systems (QAS / dev / sandbox, etc.) need to use identical components. If you’re not planning to run performance tests on a particular system (e.g. dev or sandbox), why not optimize storage by using Balanced (or even Standard) Persistent Disks instead of SSDs? If you have a way to reduce the amount of data after a system copy from prod to a lower-end system, why not use this opportunity to deploy smaller GCE instances and Persistent Disks there? Finally, if your performance requirements fluctuate (they usually tend to be higher around month-end closing), why not…
- … add and remove app servers as needed? I know… this sounds like heresy to SAP SMEs, who tend to perform such activities only when a pressing need arises and all other tuning options have failed. It has always been a daunting and manual process… But hey, app servers are nothing special: they can use standard (= not memory-optimized) VMs, there is no OS clustering involved, and their provisioning (and destruction, or at least stop/start operations) can be automated, so I don’t see any compelling reason to treat them as static and long-lasting resources. Of course, some attention (and automation 😉) is needed to keep the workflow of adding or removing an application server clean (logon groups, batch jobs, monitoring, etc.), but that’s all possible.
- Stop / start whole SAP systems on a schedule to optimize infrastructure costs. This is a bit similar to the previous point, but this time – instead of a create/destroy or stop/start cycle for app servers – we stop and start entire SAP systems, including the database layer, which is usually the main cost driver (especially if that’s a memory-intensive HANA database).
- Validate the shared storage architecture. NFS is usually planned well in advance of the cloud migration itself, but it’s important to review the setup once in a while. Customers usually prefer managed solutions over a do-it-yourself NFS architecture. While that’s perfectly reasonable, they sometimes end up overspending quite a bit simply because the solution they chose is not optimal from a cost perspective. Or because there is one central, highly available and performance-optimized setup for prod and non-prod systems, while only a fraction of the data actually needs high availability. When you compare a cost-optimized Cloud Storage bucket (less than 0.01 USD/GB) to a high-end NFS solution costing over 0.6 USD/GB, it’s mind-blowing how much some decisions can influence the invoice at the end of each month.
- Validate OS licensing by bringing your own licenses (BYOL). Instead of using a premium image that incurs an additional cost for the third-party license, you might use custom images with licenses you already own. This strategy lets you keep realizing value from your existing investments in third-party licenses.
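To make the first point concrete, here is a minimal Python sketch of the kind of back-of-the-envelope disk comparison worth doing for a non-prod system. The per-GB prices are illustrative placeholders (an assumption, not current list prices) – always check the GCP pricing page for your region:

```python
# Back-of-the-envelope monthly cost for a 2 TB data volume on different
# GCE Persistent Disk types. The per-GB prices below are illustrative
# placeholders, NOT real list prices -- check GCP pricing for your region.
ILLUSTRATIVE_USD_PER_GB_MONTH = {
    "pd-ssd": 0.17,       # placeholder
    "pd-balanced": 0.10,  # placeholder
    "pd-standard": 0.04,  # placeholder
}

def monthly_disk_cost(size_gb: int, disk_type: str) -> float:
    """Monthly cost of a single Persistent Disk of the given type."""
    return size_gb * ILLUSTRATIVE_USD_PER_GB_MONTH[disk_type]

size_gb = 2048  # 2 TB, matching the prod-sized volume from the text
for disk_type in ILLUSTRATIVE_USD_PER_GB_MONTH:
    print(f"{disk_type}: {monthly_disk_cost(size_gb, disk_type):.2f} USD/month")
```

Even with placeholder prices, the relative gap between disk tiers is what matters for a dev or sandbox system that will never see a performance test.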
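The app-server and whole-system stop/start points share one practical detail: ordering. Here is a rough sketch – with hypothetical instance names and zones – that builds `gcloud compute instances stop|start` command lines so that app servers stop first and the database starts first:

```python
# Sketch: ordered stop/start of an SAP system's VMs via gcloud commands.
# App servers stop first and start last; the database does the opposite.
# Instance names and zones below are hypothetical examples.
LAYERS_START_ORDER = [
    ("db", [("sap-qas-db01", "europe-west4-a")]),
    ("ascs", [("sap-qas-ascs01", "europe-west4-a")]),
    ("app", [("sap-qas-app01", "europe-west4-a"),
             ("sap-qas-app02", "europe-west4-b")]),
]

def system_commands(action: str) -> list[str]:
    """Build 'gcloud compute instances stop|start' lines in a safe order."""
    layers = LAYERS_START_ORDER if action == "start" else list(reversed(LAYERS_START_ORDER))
    return [
        f"gcloud compute instances {action} {name} --zone={zone}"
        for _, instances in layers
        for name, zone in instances
    ]

for cmd in system_commands("stop"):
    print(cmd)
```

In a real workflow you’d wrap these commands with the SAP-level steps mentioned above: draining logon groups, checking batch jobs, and putting monitoring into maintenance mode before the VMs go down.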
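And to put the shared-storage numbers side by side – using the per-GB figures quoted above, applied to a made-up 10 TB data set that doesn’t actually need high availability:

```python
# Back-of-the-envelope comparison using the per-GB figures from the
# text: a cost-optimized Cloud Storage bucket (~0.01 USD/GB/month)
# vs a high-end NFS tier (~0.6 USD/GB/month).
ARCHIVE_BUCKET_USD_PER_GB = 0.01
HIGH_END_NFS_USD_PER_GB = 0.60

def monthly_cost(size_gb: int, usd_per_gb: float) -> float:
    return size_gb * usd_per_gb

size_gb = 10_000  # 10 TB of data that does NOT need high availability
nfs = monthly_cost(size_gb, HIGH_END_NFS_USD_PER_GB)
bucket = monthly_cost(size_gb, ARCHIVE_BUCKET_USD_PER_GB)
print(f"NFS: {nfs:.0f} USD, bucket: {bucket:.0f} USD, "
      f"ratio: {nfs / bucket:.0f}x")
```

A 60x per-GB difference is exactly the kind of gap that makes a periodic review of the shared-storage setup worth the effort.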
As you’ve probably noticed, these methods are much more invasive than the ones presented in the previous article, so more care should be taken when planning and implementing them. For example, the application layer needs to be considered: you might have running jobs and monitoring software that will trigger alerts when application servers simply disappear. That’s why it’s important to handle logon groups and monitoring maintenance mode as additional steps of the startup/shutdown procedure. Potential issues also need to be discovered and resolved. All of this translates into additional maintenance costs that chip away at the infrastructure savings… which should prompt the question: is it even worth it? Well, it is if the cost-vs-benefit equation still makes sense… so when does it?
First of all, these types of optimizations should be considered only after the simpler FinOps initiatives have already been implemented and battle-tested. It’s more of an area to squeeze out the last 20% of FinOps potential once you’ve already handled the first 80%.
When it comes to short-lived and ephemeral SAP systems, it’s always a good idea to provision them using IaC and destroy them the same way once they’re no longer needed. With system / application server start/stop, it all depends on how long each VM is planned to be online every month. Let’s say we want our systems to be up 12 hours a day, 7 days a week. By buying a 3-year Committed Use Discount, you get a ~65% discount regardless of monthly system uptime, so there is little to no incentive to invest in all that extra effort. But the picture starts to change if we’re fine with our systems being up around 10 or 8 hours a day, 5 days a week, possibly excluding public holidays.
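The break-even intuition above can be written down explicitly. With a ~65% CUD you always pay 35% of the on-demand price, while with scheduled stop/start you pay roughly the uptime fraction – a simplified model that ignores, for instance, persistent storage, which is billed even while VMs are stopped:

```python
# Break-even logic from the text: compare paying on-demand only for
# scheduled uptime vs running 24/7 with a ~65% committed use discount.
# Simplified model: compute only; stopped-VM storage costs are ignored.
CUD_DISCOUNT = 0.65  # ~3-year CUD discount from the text

def uptime_fraction(hours_per_day: float, days_per_week: float) -> float:
    """Fraction of a full 24x7 week the system is actually running."""
    return (hours_per_day * days_per_week) / (24 * 7)

def cheaper_option(hours_per_day: float, days_per_week: float) -> str:
    scheduled = uptime_fraction(hours_per_day, days_per_week)  # fraction of full price paid
    cud = 1 - CUD_DISCOUNT  # 0.35 of full price, always-on
    return "schedule" if scheduled < cud else "cud"

print(cheaper_option(12, 7))  # 50% uptime vs 35% with CUD -> "cud"
print(cheaper_option(8, 5))   # ~24% uptime vs 35% with CUD -> "schedule"
```

So at 12 hours a day, 7 days a week, the CUD simply wins; at 8 hours a day on weekdays only, scheduling starts to pay for the extra operational effort.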
It’s also about learning and gaining experience. Even if the incentives aren’t huge, getting comfortable with SAP systems being a bit more dynamic, plus the skills developed along the way, will pay dividends in the future. It’s a huge asset to have an SAP Basis team that understands the opex operational model, uses an IaC approach and has a more cloud-oriented mindset.
It’s a recommended practice to make informed decisions about what we try to optimize next. This way, we can prioritize and create a long-term FinOps strategy. With a skilled FinOps team, we can try to define these in the form of a table.
Before we wrap up, it’s worth mentioning GCP resources that are shared across workloads (for example, support subscriptions or NFS volumes shared between different SAP systems, or even between SAP and non-SAP systems). For those, labeling shared projects as “shared” allows you to see which projects are in use by everyone; the relative cost can then be split among different departments.
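As a sketch of that cost split, assuming usage shares derived from labels (the department names and numbers below are made up):

```python
# Sketch: split the cost of a shared resource among departments
# proportionally to their usage share (e.g. derived from labeled
# projects). Department names and figures are hypothetical.
def split_shared_cost(total_usd: float, shares: dict[str, float]) -> dict[str, float]:
    """Allocate total_usd proportionally to each department's share."""
    total_share = sum(shares.values())
    return {dept: round(total_usd * s / total_share, 2) for dept, s in shares.items()}

# e.g. a 900 USD shared NFS bill, with finance using twice as much as logistics
allocation = split_shared_cost(900.0, {"finance": 2, "logistics": 1})
print(allocation)
```

The hard part in practice isn’t the arithmetic but agreeing on what the “share” metric is – labeled project count, storage consumed, or something else entirely.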
We’ve already gone quite deep into FinOps for SAP workloads. In the last article in the series, I’m planning to equip you with a bunch of links and next steps to continue the “FinOps for SAP” journey you’ve started. See you next time!