Snowflake Cost Optimization: Rightsizing Your Warehouses

Snowflake cost optimization demystified! A quick and easy method to gain more value out of every Snowflake dollar spent.

In this article we’ll discuss a simple technique to drastically reduce the credit spend of a growing Snowflake project.

This is the first article in a series of posts dealing with cost optimization in Snowflake, the data cloud.

One of the most important differentiators of the platform is the ability to resize processing clusters with a click of the mouse. Often, new teams start by creating a single extra-small processing cluster and run every kind of data workload on that one node.

After a while, both long- and short-running queries typically execute on that very same warehouse. Now keep the cost model in mind: Snowflake bills per second, but every time a warehouse resumes, it bills a minimum of one minute.
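
To make that concrete with a back-of-the-envelope example of our own (not from the article): on a medium warehouse billed at 4 credits per hour, a query that runs for 5 seconds but resumes an idle warehouse is charged for a full 60 seconds:

4 credits/hour × 60 s / 3,600 s ≈ 0.067 credits — roughly twelve times the compute actually used.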

Let’s try to find a way to make sure that our queries run for a maximum of one minute.

Calculating the query length long tail

To get a first understanding of the query duration distribution per warehouse, we need to get to the data. In our demo setup, we have a warehouse named LOAD. It is optimistically sized as a medium warehouse, consuming 4 credits per hour.

Historically, all of our workloads have been running on this LOAD warehouse, and our assumption is that we can split data loading workloads from transformation and analysis workloads.  Let’s investigate this assumption and optimize our setup.

Snowflake exposes a number of statistics related to the inner workings of this warehouse through the shared SNOWFLAKE database. Find this information in the QUERY_HISTORY view in the ACCOUNT_USAGE schema.
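
As a quick sketch of what that view contains (our example, not from the article; note that ACCOUNT_USAGE views can lag real activity by up to roughly 45 minutes), the following query lists recent queries on the LOAD warehouse next to their worst-case billed time, assuming each one resumed an idle warehouse:

SELECT QUERY_ID,
       WAREHOUSE_NAME,
       EXECUTION_TIME / 1000 AS ACTUAL_SECONDS,
       -- Worst-case assumption: every query resumes an idle warehouse
       GREATEST(EXECUTION_TIME / 1000, 60) AS BILLED_SECONDS_WORST_CASE
FROM "SNOWFLAKE"."ACCOUNT_USAGE"."QUERY_HISTORY"
WHERE WAREHOUSE_NAME = 'LOAD'
ORDER BY START_TIME DESC
LIMIT 100;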

To quickly analyze the information in this view, we’ll use the Snowsight interface for Snowflake. It comes by default with any Snowflake deployment but is hidden in plain sight under the Preview App button.

This analyst interface to Snowflake allows us to execute queries and visualize the results right away. 

This query builds buckets with a width of one second, which we can use to build our long-tail histogram:

SELECT
FLOOR(EXECUTION_TIME/1000.00)*1000 AS BIN_FLOOR,
COUNT(EXECUTION_TIME) AS COUNT
FROM "SNOWFLAKE"."ACCOUNT_USAGE"."QUERY_HISTORY"
WHERE WAREHOUSE_NAME = 'LOAD'
GROUP BY 1;

Visualize the results immediately by selecting the Chart button in the middle of the screen.

[Image: Snowflake Cost Optimization query]

By default, Snowsight applies an extra layer of aggregation to its charts, which would distort the output of our analysis here, so let’s make sure to set that to none.

[Image: Snowflake Cost Optimization chart]

Finally, we get to the histogram that shows us how long queries take. The gray bar is the cutoff line for the one-minute minimum charge. 

It turns out that the majority of queries run blazing fast (thanks, Snowflake!), and we have considerable opportunity to optimize.

The action

Intuitively, we have two actions to take:

  • Downscale the current warehouse to an XS. This will roughly quadruple the processing time of the fastest queries, but they will still finish well within the one-minute minimum charge.
  • Shift the workloads that require faster processing to a new warehouse.

Action one is fairly straightforward and can be done with a single statement:

ALTER WAREHOUSE "LOAD" SET WAREHOUSE_SIZE = 'XSMALL' AUTO_SUSPEND = 60;
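
As a quick sanity check of our own, you can confirm that the new size and auto-suspend setting took effect:

SHOW WAREHOUSES LIKE 'LOAD';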

Action two takes a bit more planning. 

We start by isolating the queries that require more processing power. This particular Snowflake instance doesn’t apply query tags, so we need to investigate the actual queries that take over half a minute to process.

SELECT QUERY_TEXT, EXECUTION_TIME FROM "SNOWFLAKE"."ACCOUNT_USAGE"."QUERY_HISTORY"
WHERE EXECUTION_TIME > 30000
ORDER BY EXECUTION_TIME DESC;
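
As an aside, if the clients issuing these queries can be changed, setting a query tag makes this kind of triage much easier in the future. A minimal sketch (the tag value is just an example of ours):

ALTER SESSION SET QUERY_TAG = 'nightly_load';

Tagged queries can then be isolated in QUERY_HISTORY with a simple WHERE QUERY_TAG = 'nightly_load' filter instead of inspecting query text.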

Now, we need a warehouse to host those queries. Let’s create a new medium warehouse named NEW_WAREHOUSE by running this query:

CREATE WAREHOUSE NEW_WAREHOUSE WITH WAREHOUSE_SIZE = 'MEDIUM' WAREHOUSE_TYPE = 'STANDARD' AUTO_SUSPEND = 300 AUTO_RESUME = TRUE;
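
Depending on your access control setup, the roles used by those clients may also need permission to use the new warehouse, for example (the role name TRANSFORMER is a placeholder of ours):

GRANT USAGE ON WAREHOUSE NEW_WAREHOUSE TO ROLE TRANSFORMER;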

Analyze the queries that are reported back. Change the clients that execute those queries, and point those clients to the new warehouse we’ve just created.  
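
How you repoint a client depends on the client, but two common options are setting the user’s default warehouse or selecting the warehouse explicitly at the start of a session (the user name ETL_USER is a placeholder of ours):

ALTER USER ETL_USER SET DEFAULT_WAREHOUSE = NEW_WAREHOUSE;
USE WAREHOUSE NEW_WAREHOUSE;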

Conclusion

Far fewer queries will use the new warehouse, so it will be resumed less often. The queries executed on the original warehouse will take slightly more time to execute, but they will stay well within Snowflake’s one-minute minimum charge.

The method proposed in this article is a simplification of reality, yet it often has a material impact on cost. Multiple queries can run at the same time, and a query run on a warehouse that is already active doesn’t trigger another one-minute minimum charge. A more advanced method of cost management would take more parameters into account, such as parallel processing capabilities and warehouse sizes.

Celine Meulemans

