The previous article was an easy one. It showed that with IaaS, there aren’t a lot of components to look at when cost-optimising IT workloads. Still, there are different optimisation techniques available, and today I’ll try to provide some recommendations for using them with respect to SAP systems.
Let’s start with the FinOps framework for SAP. Which is actually the same as for all other workloads…
Inform (usually: make information more visible), Optimise, Operate (embed optimisations into your organisation). Once you’re done, repeat for another FinOps-related challenge. Simple, huh? FinOps is an iterative process that should be embraced; it’s the sum of a thousand small steps that will get you far, so if you’re a fan of those kinds of challenges, you’ll definitely enjoy it!
It all starts with choosing “the thing” we want to optimise in the current cycle. We shouldn’t aim to do everything at once; instead, it’s good to pick some low-hanging fruit, which is usually the easiest and most effective measure, such as Committed Use Discounts (explained below). Once chosen, the three-phase process starts for this particular aspect:
1) Inform phase: It’s often the case that the SAP Basis team is not fully aware of the underlying cloud resources used by each SAP system. Without making them aware of and responsible for “their” part, you risk reducing the cloud-based dynamics of SAP systems to zero, which is just the opposite of FinOps aims and assumptions 😉. There also needs to be some level of awareness higher up, among SAP SMEs and decision makers. To achieve it, you should educate them (that’s one of FinOps responsibilities, remember? -> LINK) and hand over some control of the organisation’s spending. Visibility is key here! You should create:
- Budgets: Enable you to track actual Google Cloud spend against planned spend.
- Alerts: Threshold rules that trigger email notifications. Budget alert emails help you stay informed about how your spend is tracking against the planned budget.
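For illustration, here’s a minimal sketch of creating such a budget with alert thresholds programmatically, assuming the google-cloud-billing-budgets Python client library; the project ID, billing account and amount are placeholders:

```python
# pip install google-cloud-billing-budgets
from google.cloud.billing import budgets_v1
from google.type import money_pb2

client = budgets_v1.BudgetServiceClient()

budget = budgets_v1.Budget(
    display_name="sap-prod-monthly",  # placeholder name
    # Scope the budget to the project(s) hosting the SAP landscape
    budget_filter=budgets_v1.Filter(projects=["projects/my-sap-project"]),
    amount=budgets_v1.BudgetAmount(
        specified_amount=money_pb2.Money(currency_code="EUR", units=10_000)
    ),
    # Email billing admins when 80% and 100% of the budget is reached
    threshold_rules=[
        budgets_v1.ThresholdRule(threshold_percent=0.8),
        budgets_v1.ThresholdRule(threshold_percent=1.0),
    ],
)

created = client.create_budget(
    parent="billingAccounts/000000-AAAAAA-BBBBBB",  # placeholder billing account
    budget=budget,
)
print(f"Created budget: {created.name}")
```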

2) Once the information is shared across stakeholders, SAP can start to “Optimise“. Just like with any other critical workload, you must be really careful with the tooling and methods you use: there is a thin line between cost optimization and losing business continuity. In practical terms, it’s extremely important to have FinOps-positive SAP stakeholders who have a holistic overview of both areas (FinOps and SAP). Plus, you’d probably choose a different set of FinOps techniques for productive vs. non-productive SAP systems (full list below).
3) The Operate phase is mostly about embedding the whole process into the company culture, including continuous measurement and regular validations, which might result in challenging the status quo. Although SAP systems are more static than cloud-native workloads, they still require changes from time to time.
Once you’re done with the full cycle, you can take up another FinOps challenge and start from the beginning, potentially reusing some of the assets (like monitoring dashboards etc) you’ve created before.
Let’s now have a look at what’s in our “FinOps for SAP” toolbox when deploying on Google Cloud Platform. Starting with productive SAP systems, we should focus on the areas below:
- Proper system sizing before AND AFTER deployment
- Carefully choosing HA and DR components to be deployed
- Making sure to use Committed Use Discounts
- Validating the level of system dynamics that we can accept from the productive system
One may argue whether that’s even part of FinOps or just common sense and best practice for driving any implementation / migration project. There is no clear answer here, but being cautious about those topics from the beginning will set us on a good path to step two: constant validation of resource utilization with the aim of optimizing it in the long run. Especially since there is usually quite a broad area for improvement once the migration project is finished and the dust settles.
What are the technical means to implement FinOps-friendly, productive SAP systems in GCP? Let’s dive into the options:
- First of all, ensure cost visibility for your SAP team to build awareness of SAP-related costs. It does not have to be very detailed, but since cloud pricing is publicly available, being aware of the level of costs is an important part of FinOps evangelization.
- Regarding the tooling, you might consider using the pre-built visualization dashboards in Google Data Studio.
- Use GCE, the GCP-native compute environment consisting of Virtual Machines only. At the time of writing this article (July 2022), the biggest GCE VM offers 12 TB of memory. You should aim to minimize all “special” use-cases, which typically include Oracle-based SAP systems or OSes not supported on GCP (AIX, HP-UX, Solaris etc). If not modernized, they usually need to be deployed to Bare Metal Solution, which is a much less FinOps-friendly environment. It’s recommended to migrate as many SAP systems as possible to supported, GCE-based environments. If that’s not possible for some part of the workloads, aim to minimize BMS usage and create a plan to migrate out of this environment at some point in the future.
- Use Committed Use Discounts if it makes sense. This is the easiest and most effective method of considerably reducing your monthly bill for cloud resources! With SAP workloads, there is usually a baseline vCPU/memory usage that is more or less constant, which should encourage you to commit to using those resources for the long term (preferably 3 years).
- [TIP] With GCP, you don’t commit to using particular Virtual Machines; instead, you commit to a chosen number of vCPUs and memory in a chosen region and machine family. This gives you a ton of flexibility in the long run, which is so important to the FinOps approach.
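To build an intuition for the impact, here’s a toy calculation comparing on-demand vs. committed spend; the hourly rate and discount below are purely hypothetical placeholders, not real GCP list prices:

```python
# Hypothetical figures -- check the GCP pricing page for real numbers.
ON_DEMAND_PER_VCPU_HOUR = 0.032   # placeholder hourly rate
CUD_3Y_DISCOUNT = 0.55            # placeholder: ~55% off for a 3-year commitment

HOURS_PER_MONTH = 730
baseline_vcpus = 96               # constant vCPU baseline across the SAP landscape

on_demand = baseline_vcpus * ON_DEMAND_PER_VCPU_HOUR * HOURS_PER_MONTH
committed = on_demand * (1 - CUD_3Y_DISCOUNT)

print(f"On-demand: ${on_demand:,.0f}/month")
print(f"With 3-year CUD: ${committed:,.0f}/month "
      f"(saves ${on_demand - committed:,.0f}/month)")
```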
- Choose more cost-effective VM families from the list of supported GCE machine types. Each certified VM type has a measured SAPS capacity, so machines can be chosen against the number of SAPS needed.
- [TIP]: you can cost-optimize SAP workloads by choosing the AMD-based N2D machine family for SAP application servers or non-HANA databases. Please be aware that N2D is NOT supported for HANA workloads.
- [TIP]: make sure to use SAPS values when migrating from on-premises instead of just comparing the number of cores. Newer generations of hardware tend to provide better performance, and you may get away with smaller VMs than you used on-premises.
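As a simple illustration of SAPS-driven selection, here’s a sketch that picks the smallest machine type meeting a SAPS requirement; the SAPS figures in the dictionary are made-up placeholders, so substitute the officially published benchmark values for the certified machine types:

```python
# Placeholder SAPS capacities -- use the officially published values instead.
SAPS_BY_MACHINE = {
    "n2-standard-16": 20_000,
    "n2-standard-32": 40_000,
    "n2-standard-48": 60_000,
    "n2d-standard-32": 42_000,
}

def smallest_machine_for(required_saps: int) -> str:
    """Return the smallest-capacity machine type meeting the SAPS requirement."""
    candidates = {m: s for m, s in SAPS_BY_MACHINE.items() if s >= required_saps}
    if not candidates:
        raise ValueError("No single machine type satisfies the requirement")
    return min(candidates, key=candidates.get)

print(smallest_machine_for(35_000))  # -> n2-standard-32
```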
- Validate using custom VMs instead of fixed-sized ones. Both SAP application servers and HANA databases are certified to run on custom GCE machines. They usually make sense when you’re somewhere in between two sizes and don’t want to overprovision.
- [TIP]: Custom GCE VMs are supported by SAP, but you should reach out to SAP to inform them about this fact. Also, the capacity (SAPS) of a custom machine configuration is not predetermined.
- [TIP]: 640 GB is the maximum amount of memory a custom N2-family GCE machine can have (as of July 2022). This means custom VMs cannot be used for large HANA deployments.
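Custom machine types are addressed by name in the form FAMILY-custom-VCPUS-MEMORY_MB. Here’s a small sketch that builds and sanity-checks such a name; the 640 GB ceiling reflects the N2 limit mentioned above, and the zone and sizes are placeholders:

```python
N2_CUSTOM_MAX_MEMORY_GB = 640  # N2 custom ceiling as of July 2022

def n2_custom_machine_type(zone: str, vcpus: int, memory_gb: int) -> str:
    """Build the machine-type URI for an N2 custom machine (memory given in MB)."""
    if memory_gb > N2_CUSTOM_MAX_MEMORY_GB:
        raise ValueError("Too large for N2 custom -- use a fixed memory-optimized type")
    return f"zones/{zone}/machineTypes/n2-custom-{vcpus}-{memory_gb * 1024}"

# e.g. a system sized between n2-highmem-32 (256 GB) and n2-highmem-48 (384 GB):
print(n2_custom_machine_type("europe-west3-a", 40, 320))
```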
- Carefully size each system. I really can’t stress that enough. Depending on the use-case (greenfield, brownfield, bluefield) and the source/target system configuration, there are different ways of sizing SAP systems. Make sure to factor in the flexibility of GCP when sizing each workload, since trying to anticipate SAP performance requirements a few years in advance (as in the traditional, greenfield SAP sizing approach) will mostly result in overprovisioning. It’s easy to scale up or down in the cloud, so instead of trying to arrange a huge buffer for the coming years, keep the size adequate and make sure you’ll be able to choose a bigger VM when the right time comes.
- [TIP]: Copying the on-premises resources to the cloud 1:1 is usually not the best idea. At the same time, this is often what you’ll end up with when a risk-averse approach is chosen by you or a migration Partner. That just means you’ll have much more room for improvement after your systems are up and running; it will simply be harder to implement after SAP go-live (downtime needs to be planned for VM resize operations).
- [TIP] GCP automatically identifies oversized VMs (rightsizing recommendations), so you might have a look at those first… and you might be amazed at how much you can save every month!
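When the right time comes, resizing a VM boils down to a stop / set-machine-type / start sequence. Here’s a minimal sketch, assuming the google-cloud-compute Python client; the project, zone and instance names are placeholders, and SAP downtime must be planned around it:

```python
# pip install google-cloud-compute
from google.cloud import compute_v1

PROJECT, ZONE, VM = "my-sap-project", "europe-west3-a", "sap-app-01"  # placeholders
NEW_TYPE = f"zones/{ZONE}/machineTypes/n2-highmem-48"

instances = compute_v1.InstancesClient()

# SAP must be stopped gracefully before this point!
instances.stop(project=PROJECT, zone=ZONE, instance=VM).result()

instances.set_machine_type(
    project=PROJECT, zone=ZONE, instance=VM,
    instances_set_machine_type_request_resource=compute_v1.InstancesSetMachineTypeRequest(
        machine_type=NEW_TYPE
    ),
).result()

instances.start(project=PROJECT, zone=ZONE, instance=VM).result()
```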

- When application-level sizing is done, use the SAP Cost Calculator to translate it into GCP resources (VMs, Persistent Disks, Shared Storage etc). This tool is available internally to Googlers (and GCP Partners) and supports decisions regarding discounts, machine types and storage; please reach out to your Google Cloud Partner representative for details. It’s a spreadsheet-based tool that allows you to plan all SAP landscapes in one place and generate GCP resource specifications and reports that can be presented to the customer.
- Optimize Disaster Recovery deployment. Depending on the RTO/RPO values required by the customer, different decisions can be made to protect productive SAP systems against wide-scale, natural or human-made disasters. Basically, your options are:
- No DR needed (or: not needed at the moment). That’s a valid choice that some customers make, but additional risk needs to be accepted as part of such a decision.
- DR needed for some SAP systems (usually: productive ones, important from business continuity perspective). This can translate to cold / warm / hot DR implementation, each with different RPO/RTO targets and financial consequences.
| DR case | RTO | RPO | Database | File shares | App servers | Cost |
|---|---|---|---|---|---|---|
| Scenario 1 | Days | Day | Backup/restore | Backup/restore | Backup/restore | Low |
| Scenario 2 | Hours | Minutes | Backup/restore or asynchronous replication | Backup/restore or asynchronous file-sync replication | Backup/restore | Medium |
| Scenario 3 | Minutes | ~0 | Asynchronous replication | Storage asynchronous replication | Pre-built; restore from GCP image or snapshot | High |
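The table can be read as a simple decision rule: the tighter the RTO/RPO targets, the more expensive the scenario. A toy illustration of that mapping (the thresholds are placeholders matching the table above, not a formal methodology):

```python
def dr_scenario(rto_hours: float, rpo_hours: float) -> str:
    """Map RTO/RPO targets (in hours) to a DR scenario from the table above."""
    if rto_hours >= 24 and rpo_hours >= 24:
        return "Scenario 1: backup/restore everywhere (low cost)"
    if rto_hours >= 1:
        return "Scenario 2: backup/restore + async replication (medium cost)"
    return "Scenario 3: async replication + pre-built app servers (high cost)"

print(dr_scenario(rto_hours=4, rpo_hours=0.25))  # -> Scenario 2
```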
- Use an “Infrastructure as Code” approach. This brings a ton of advantages: it moves the SAP world closer to DevOps (and thus to FinOps), and it allows you to easily provision and destroy multiple environments, which can be useful if you need a temporary, sandbox SAP system for just a few hours or days. Since IaC is usually driven by a CI/CD pipeline, it also allows you to introduce additional policies that enforce label usage when creating cloud resources (labels are extensively used to implement automated, FinOps-related actions, since they provide a way to tag resources with an easily identifiable key-value pair; see the suggested labels and the audit sketch below).
- [TIP]: It sometimes makes sense to initially deploy systems using smaller VMs. Cloud resources are usually provisioned some weeks ahead of the actual migration / go-live, and you might be fine with smaller VMs during this time, while you prepare and adjust the database/application layer and perform most of the functional tests. Once everything is ready, resize the resources to their target sizes.
- [TIP]: It’s not the topic of this article, but there are multiple ways to apply the IaC approach when deploying SAP systems, mostly based on Terraform tooling (such as this GitHub repo).
Suggested labels:

| Label key | Description | Example values |
|---|---|---|
| purpose | The purpose of the resource, including the name of the application. | ecc, bw, gts, solman, po, grc, sap cs, sapgw, saprouter, sap cloud connector |
| role-class-name | The role of the resource: application / DB / web server (this can be filled in via ServiceNow in the future). | sap application, sap hana db, sap ase db, sap maxdb, sap ascs |
| costcenter | Required to assign costs to the responsible cost center. | |
| department | The department using/responsible for the resources. | it, hr, rd, fin |
| environment | The environment code. | prod, test, dev, qas, sbx |
| sid | The SAP SID to which the instance belongs. | PR1 |
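To make such a labeling policy actionable, here’s a minimal audit sketch that flags GCE instances missing the required labels, assuming the google-cloud-compute Python client; the project ID and the required-key set are placeholders:

```python
# pip install google-cloud-compute
from google.cloud import compute_v1

PROJECT = "my-sap-project"                       # placeholder
REQUIRED = {"sid", "environment", "costcenter"}  # pick your mandatory keys

client = compute_v1.InstancesClient()
# aggregated_list yields (zone, scoped-list) pairs covering all zones at once
for zone, scoped in client.aggregated_list(project=PROJECT):
    for instance in scoped.instances:
        missing = REQUIRED - set(instance.labels)
        if missing:
            print(f"{zone}/{instance.name}: missing labels {sorted(missing)}")
```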
- Carefully plan the backup and recovery strategy. Regardless of whether native tooling (Backint / database-native etc) or 3rd-party software (Actifio / NetWorker) is used, they all tend to use GCS as the target for backups. In order to optimize GCS storage costs, use appropriate GCS storage classes (Nearline / Coldline as the default for backups, depending on any planned DR drills etc).
- [TIP]: Use GCS Lifecycle Policies to migrate backups to colder classes or completely remove older backups. In fact, Lifecycle Policies can automate the execution of your backup retention policies.
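A minimal sketch of such a policy using the google-cloud-storage Python client; the bucket name and retention ages are placeholders matching a typical backup retention scheme:

```python
# pip install google-cloud-storage
from google.cloud import storage

bucket = storage.Client().get_bucket("my-sap-backups")  # placeholder bucket

# Demote backups to Coldline after 30 days, delete them after 365 days.
bucket.add_lifecycle_set_storage_class_rule("COLDLINE", age=30)
bucket.add_lifecycle_delete_rule(age=365)
bucket.patch()  # apply the updated lifecycle configuration

for rule in bucket.lifecycle_rules:
    print(rule)
```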
Wow, that’s a lot of areas for cost optimization, and we did not even go beyond Day 0. We mostly focused on the planning and deployment phases, which happen before a SAP system even goes live. In the next article, I’ll explain some more options that can be applied once a SAP system is already migrated to GCP.