Keeping Snowflake spend in check: an easy win

Shiv Gupta

Cloud spend has come under increased scrutiny over the past couple of years, and for good reason. The modern data warehouse offers tremendous flexibility and virtually limitless compute, which of course comes with the risk of overspending.

I have experienced this first-hand, having been surprised by a large Snowflake bill mostly attributable to an M warehouse being used where an XS was perfectly sufficient for the use case.

This is not a post about implementing sophisticated query optimization techniques. Rather, it's about finding an easy ~98% savings opportunity by identifying an inefficient query pattern.

For some background, Snowflake compute is billed per second, with a 60-second minimum each time a warehouse resumes. Each query runs on a virtual warehouse, and the per-second rate is determined by the warehouse size (XS = 1 credit/hour, S = 2, M = 4, and so on, doubling with each size up). A suspended warehouse accrues no compute charges, and warehouses can be configured to auto-suspend when idle. Snowflake's documentation has a fuller refresher.
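
To see where the credits are going in the first place, here is a minimal sketch of pulling daily credit usage per warehouse from the SNOWFLAKE.ACCOUNT_USAGE share with the Python connector. The connection parameters are placeholders and the 30-day window is arbitrary; adjust both for your account.

```python
# Sketch: daily credit usage per warehouse from Snowflake's ACCOUNT_USAGE share.
# Connection parameters are placeholders; ACCOUNT_USAGE requires a privileged role.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",      # placeholder
    user="my_user",            # placeholder
    password="my_password",    # placeholder
    role="ACCOUNTADMIN",
)

DAILY_CREDITS_SQL = """
SELECT
    warehouse_name,
    DATE_TRUNC('day', start_time) AS usage_day,
    SUM(credits_used)             AS credits
FROM snowflake.account_usage.warehouse_metering_history
WHERE start_time >= DATEADD('day', -30, CURRENT_TIMESTAMP())
GROUP BY 1, 2
ORDER BY credits DESC
"""

for warehouse, day, credits in conn.cursor().execute(DAILY_CREDITS_SQL):
    print(f"{day:%Y-%m-%d}  {warehouse:<20}  {credits:8.2f} credits")
```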

In my case, the win came from identifying an inefficient query pattern and from right-sizing the warehouse running the queries. I identified a service account, associated with an application, that issued several thousand near-identical OLTP-style queries non-stop throughout the day, each time with a different customer_id in the WHERE clause. This was problematic for a couple of reasons.
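
A pattern like this tends to show up clearly in ACCOUNT_USAGE.QUERY_HISTORY. Below is a rough sketch of the kind of query that surfaces chatty service accounts; the 1,000-queries-per-day threshold is arbitrary and the connection parameters are placeholders, as before.

```python
# Sketch: surface chatty accounts by counting queries per user per warehouse per day.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="my_password", role="ACCOUNTADMIN",
)

CHATTY_USERS_SQL = """
SELECT
    user_name,
    warehouse_name,
    DATE_TRUNC('day', start_time)        AS query_day,
    COUNT(*)                             AS query_count,
    SUM(total_elapsed_time) / 1000 / 3600 AS total_elapsed_hours  -- elapsed time is in ms
FROM snowflake.account_usage.query_history
WHERE start_time >= DATEADD('day', -7, CURRENT_TIMESTAMP())
  AND warehouse_name IS NOT NULL
GROUP BY 1, 2, 3
HAVING COUNT(*) > 1000                   -- arbitrary threshold for "non-stop" querying
ORDER BY query_count DESC
"""

for user, warehouse, day, n_queries, hours in conn.cursor().execute(CHATTY_USERS_SQL):
    print(f"{day:%Y-%m-%d}  {user:<25} {warehouse:<15} {n_queries:>7} queries  {hours:6.1f}h")
```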

First, this query pattern meant the warehouse was active at all hours of the day, leading to high credit usage. Given that the queries pointed to Snowflake tables that are only updated once a day, this was wasteful. A more efficient pattern would be for the querying application to cache the data once a day (immediately after the new data lands) and serve subsequent lookups from the cache. The cache could take the form of a full-fledged RDBMS (PostgreSQL, MySQL), a NoSQL database (MongoDB), an embedded database (SQLite, DuckDB), or even flat files on disk.
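
Here is a minimal sketch of the embedded-database flavour of that caching idea, using DuckDB. The table name daily_metrics, the customer_id column, and the connection parameters are illustrative, not taken from the actual application.

```python
# Sketch: refresh a local DuckDB cache once a day, then serve lookups from it
# instead of hitting Snowflake per request. Table/column names are illustrative.
import duckdb
import snowflake.connector  # requires the [pandas] extra for fetch_pandas_all()

def refresh_cache(cache_path: str = "cache.duckdb") -> None:
    """Pull the daily snapshot from Snowflake into a local DuckDB file."""
    sf = snowflake.connector.connect(
        account="my_account", user="my_user", password="my_password",
        warehouse="XS_WH", database="ANALYTICS", schema="PUBLIC",
    )
    df = sf.cursor().execute("SELECT * FROM daily_metrics").fetch_pandas_all()
    sf.close()

    con = duckdb.connect(cache_path)
    # DuckDB can query the in-memory DataFrame directly by its variable name.
    con.execute("CREATE OR REPLACE TABLE daily_metrics AS SELECT * FROM df")
    con.close()

def lookup_customer(customer_id: int, cache_path: str = "cache.duckdb"):
    """Serve the per-customer lookup from the local cache, not Snowflake."""
    con = duckdb.connect(cache_path, read_only=True)
    result = con.execute(
        "SELECT * FROM daily_metrics WHERE customer_id = ?", [customer_id]
    ).fetchall()
    con.close()
    return result

# refresh_cache() runs once per day, right after the Snowflake tables update;
# lookup_customer() replaces the thousands of per-request Snowflake queries.
```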

Second, the queries were issued against an M warehouse, which was overkill for the compute required. I verified this by replaying the queries over a period of time on an XS warehouse and comparing runtimes against the larger warehouse. The difference in runtime was negligible, indicating that the warehouse could be downsized with no noticeable drop in performance.
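
A rough sketch of how such a comparison might be run: replay a sample of the queries on each warehouse with the result cache disabled and compare wall-clock runtimes. The warehouse names, sample query, and customer IDs below are placeholders.

```python
# Sketch: replay sample queries on two warehouses and compare median runtimes.
import time
import statistics
import snowflake.connector

SAMPLE_QUERY = "SELECT * FROM daily_metrics WHERE customer_id = %s"  # illustrative
CUSTOMER_IDS = [101, 202, 303, 404, 505]                             # illustrative sample

def median_runtime(warehouse: str) -> float:
    """Return the median runtime (seconds) of the sample queries on a warehouse."""
    conn = snowflake.connector.connect(
        account="my_account", user="my_user", password="my_password",
        warehouse=warehouse, database="ANALYTICS", schema="PUBLIC",
    )
    cur = conn.cursor()
    cur.execute("ALTER SESSION SET USE_CACHED_RESULT = FALSE")  # avoid result-cache hits
    timings = []
    for customer_id in CUSTOMER_IDS:
        start = time.perf_counter()
        cur.execute(SAMPLE_QUERY, (customer_id,))
        cur.fetchall()
        timings.append(time.perf_counter() - start)
    conn.close()
    return statistics.median(timings)

for wh in ("M_WAREHOUSE", "XS_WAREHOUSE"):
    print(f"{wh}: median runtime {median_runtime(wh):.2f}s")
```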

Here’s some back-of-the-napkin math to break it down.

Before

  • M warehouse (4 credits / hour)
  • Warehouse active for 24 hours / day
  • 4 credits / hour * 24 hours / day = 96 credits / day (equivalent to $192/day at the $2/credit on-demand rate for a Standard Edition Snowflake subscription)

After

  • XS warehouse (1 credit / hour)
  • Warehouse active for 1 hour / day (conservative estimate of an hour’s worth of compute associated with caching the Snowflake data)
  • 1 credit / hour * 1 hour / day = 1 credit / day (equivalent to $2/day at the same $2/credit rate)

This represents a ~98% savings. To be clear, there are some costs associated with the caching solution: you'll likely pay to host the data store (e.g. an AWS RDS instance), or your data may be small enough to fit on disk in a local DuckDB instance hosted alongside the application (in which case there is no additional cost, even better!). Quantifying these costs will help determine whether the effort is worthwhile.
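
For readers who want to plug in their own numbers, here is the same arithmetic as a tiny calculator. The credit rates follow Snowflake's standard per-size ladder, and the $2/credit figure assumes Standard Edition on-demand pricing; adjust for your edition or contract.

```python
# Back-of-the-napkin cost calculator. Credits/hour per warehouse size follow
# Snowflake's standard ladder; $2/credit assumes Standard Edition on-demand pricing.
CREDITS_PER_HOUR = {"XS": 1, "S": 2, "M": 4, "L": 8, "XL": 16}
DOLLARS_PER_CREDIT = 2.0  # adjust for your edition / contract

def daily_cost(size: str, active_hours_per_day: float) -> tuple[float, float]:
    """Return (credits/day, dollars/day) for a warehouse size and daily active hours."""
    credits = CREDITS_PER_HOUR[size] * active_hours_per_day
    return credits, credits * DOLLARS_PER_CREDIT

print("before (M, 24h/day): %d credits/day, $%d/day" % daily_cost("M", 24))   # 96 credits, $192
print("after  (XS, 1h/day): %d credits/day, $%d/day" % daily_cost("XS", 1))   # 1 credit, $2
```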

Closing Thoughts

Aside from monitoring for inefficient query patterns and right-sizing compute resources, there are other opportunities to mitigate the risk of overspending in Snowflake and other data platforms. For example, data teams should identify workflows that produce tables with low query activity and revisit those workflows from a cost-benefit perspective. Lineage and data observability tools (e.g. Monte Carlo, Metaplane) help here. In addition, SELECT.dev, an automated Snowflake optimization and cost management platform, is worth checking out.
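
As a rough illustration of the low-query-activity idea, here is one way to surface rarely-read tables, assuming access to the ACCOUNT_USAGE.ACCESS_HISTORY view (available on Enterprise Edition and above). The 90-day window is arbitrary and the connection parameters are placeholders, as in the earlier sketches.

```python
# Sketch: tables with the fewest read queries over the last 90 days, via ACCESS_HISTORY.
# Note: tables that were never read at all won't appear here; join against
# ACCOUNT_USAGE.TABLES to catch those as well.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="my_password", role="ACCOUNTADMIN",
)

COLD_TABLES_SQL = """
SELECT
    obj.value:"objectName"::string AS table_name,
    COUNT(DISTINCT ah.query_id)    AS read_queries_90d
FROM snowflake.account_usage.access_history AS ah,
     LATERAL FLATTEN(input => ah.base_objects_accessed) AS obj
WHERE ah.query_start_time >= DATEADD('day', -90, CURRENT_TIMESTAMP())
  AND obj.value:"objectDomain"::string = 'Table'
GROUP BY 1
ORDER BY read_queries_90d ASC
LIMIT 50
"""

for table_name, n_reads in conn.cursor().execute(COLD_TABLES_SQL):
    print(f"{table_name:<60} {n_reads:>6} read queries in 90 days")
```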