Panic hits when you mistakenly delete data. Maybe a mistake disrupted one process, or worse, an entire database was dropped. Thoughts of how recent the last backup is and how much work will be lost might have you wishing for a rewind button. With Snowflake's Time Travel, straightening out your database isn't a disaster to recover from. A few SQL commands let you go back in time and reclaim the past, saving you the time and stress of a more extensive restore.

We'll get started in the Snowflake web console, configure data retention, and use Time Travel to retrieve historic data. Before querying for your previous database states, let's review the prerequisites for this guide.

Prerequisites

  • Quick Video Introduction to Snowflake
  • Snowflake Data Loading Basics Video

What You'll Learn

  • Snowflake account and user permissions
  • Make database objects
  • Set data retention timelines for Time Travel
  • Query Time Travel data
  • Clone past database states
  • Remove database objects
  • Next options for data protection

What You'll Need

  • A Snowflake Account

What You'll Build

  • Create database objects with Time Travel data retention

First things first, let's get your Snowflake account and user permissions primed to use Time Travel features.

Create a Snowflake Account

Snowflake lets you try out its services for free with a trial account. A Standard account allows for one day of Time Travel data retention, and an Enterprise account allows for up to 90 days of data retention. An Enterprise account is necessary to practice some commands in this tutorial.

Login and Setup Lab

Log into your Snowflake account. You can access the SQL commands we will execute throughout this lab directly in your Snowflake account by setting up your environment below:

Setup Lab Environment

This will create worksheets containing the lab SQL that can be executed as we step through this lab.

[Image: Setup Lab Environment button]

Once the lab has been set up, it can be continued by revisiting the lab details page and clicking Continue Lab

[Image: Continue Lab button]

or by navigating to Worksheets and selecting the Getting Started with Time Travel folder.

[Image: Worksheets view]

Increase Your Account Permission

Snowflake's web interface has a lot to offer, but for now, switch the account role from the default SYSADMIN to ACCOUNTADMIN. You'll need this increase in permissions later.

[Image: changing the account role to ACCOUNTADMIN]

Now that you have the account and user permissions needed, let's create the required database objects to test drive Time Travel.

Within the Snowflake web console, navigate to Worksheets and use the ‘Getting Started with Time Travel’ worksheets we created earlier.

Create Database

[Image: CREATE DATABASE command in a Snowflake worksheet]

Use the above command to create a database called ‘timeTravel_db’. The Results output will show a status message of Database TIMETRAVEL_DB successfully created.
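If you're not running the prebuilt lab worksheet, the command referenced above is, in essence, the following sketch:

```sql
CREATE OR REPLACE DATABASE timeTravel_db;
```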

Create Table

[Image: CREATE TABLE command in a Snowflake worksheet]

This command creates a table named ‘timeTravel_table’ in the timeTravel_db database. The Results output should show a status message of Table TIMETRAVEL_TABLE successfully created.
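A minimal version of that command might look like this sketch (the column definitions here are illustrative placeholders, not the lab's exact schema):

```sql
CREATE OR REPLACE TABLE timeTravel_db.public.timeTravel_table (
    id   INT,
    name STRING
);
```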

With the Snowflake account and database ready, let's get down to business by configuring Time Travel.

Be ready for anything by setting up data retention beforehand. The default setting is one day of data retention. However, if the one-day mark passes and you need a previous database state back, you can't retroactively extend the retention period. This section teaches you how to be prepared by preconfiguring Time Travel retention.

Alter Table

[Image: ALTER TABLE command setting the data retention period]

The command above changes the table's data retention period to 55 days. If you opted for a Standard account, your data retention period is limited to the default of one day. An Enterprise account allows for 90 days of preservation in Time Travel.
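The alteration described above uses the DATA_RETENTION_TIME_IN_DAYS parameter and can be sketched as:

```sql
ALTER TABLE timeTravel_table SET DATA_RETENTION_TIME_IN_DAYS = 55;
```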

Now that you know how easy it is to alter your data retention, let's bend the rules of time by querying an old database state with Time Travel.

With your data retention period specified, let's turn back the clock with the AT and BEFORE clauses .

Use timestamp to summon the database state at a specific date and time.
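A timestamp-based query might look like this sketch (the timestamp value is an illustrative placeholder):

```sql
SELECT *
FROM timeTravel_table
  AT (TIMESTAMP => '2022-01-10 08:00:00'::TIMESTAMP_LTZ);
```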

Employ offset to call up the database state at a time difference relative to the current time. Calculate the offset in seconds with math expressions; for example, an offset of -60*5 translates to five minutes ago.
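Such an offset query could be sketched as:

```sql
SELECT *
FROM timeTravel_table
  AT (OFFSET => -60*5);  -- five minutes ago
```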

If you're looking to restore a database state just before a transaction occurred, grab the transaction's statement ID, then use it with the BEFORE clause to get the database state as it was right before that statement was executed.
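A statement-based query might look like this (the query ID below is a made-up placeholder; use one from your own query history):

```sql
SELECT *
FROM timeTravel_table
  BEFORE (STATEMENT => '01a0b2c3-0000-1234-0000-000000000001');
```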

By practicing these queries, you'll be confident in how to find a previous database state. After locating the desired database state, you'll need to get a copy by cloning in the next step.

With the past at your fingertips, make a copy of the old database state you need with the clone keyword.

Clone Table

[Image: CREATE TABLE ... CLONE command in a Snowflake worksheet]

The command above creates a new table named restoredTimeTravel_table that is an exact copy of the table timeTravel_table from five minutes prior.
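The clone described above can be sketched as:

```sql
CREATE TABLE restoredTimeTravel_table
  CLONE timeTravel_table
  AT (OFFSET => -60*5);
```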

Cloning will allow you to maintain the current database while getting a copy of a past database state. After practicing the steps in this guide, remove the practice database objects in the next section.

You've created a Snowflake account, made database objects, configured data retention, queried old table states, and generated a copy of an old table state. Pat yourself on the back! Complete this tutorial by deleting the objects you created.

[Image: DROP TABLE command in a Snowflake worksheet]

By dropping the table before the database, the retention period previously specified on the object is honored. If a parent object (e.g., a database) is removed without the child object (e.g., a table) being dropped first, the child's data retention period is not honored.
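Dropping the child table first might look like:

```sql
DROP TABLE timeTravel_db.public.timeTravel_table;
```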

Drop Database

[Image: DROP DATABASE command in a Snowflake worksheet]
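In SQL form, the database drop referenced here is simply:

```sql
DROP DATABASE timeTravel_db;
```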

With the database now removed, you've completed learning how to call, copy, and erase the past.


Time Travel snowflake: The Ultimate Guide to Understand, Use & Get Started 101

By: Harsh Varshney Published: January 13, 2022

To empower your business decisions with data, you need Real-Time High-Quality data from all of your data sources in a central repository. Traditional On-Premise Data Warehouse solutions have limited Scalability and Performance , and they require constant maintenance. Snowflake is a more Cost-Effective and Instantly Scalable solution with industry-leading Query Performance. It’s a one-stop-shop for Cloud Data Warehousing and Analytics, with full SQL support for Data Analysis and Transformations. One of the highlighting features of Snowflake is Snowflake Time Travel.

Table of Contents

Snowflake Time Travel allows you to access Historical Data (that is, data that has been updated or removed) at any point in time. It is an effective tool for doing the following tasks:

  • Restoring Data-Related Objects (Tables, Schemas, and Databases) that may have been removed by accident or on purpose.
  • Duplicating and Backing up Data from previous periods of time.
  • Analyzing Data Manipulation and Consumption over a set period of time.

In this article, you will learn everything about Snowflake Time Travel along with the process which you might want to carry out while using it with simple SQL code to make the process run smoothly.

  • What is Snowflake?
  • Key Features of Snowflake
  • What is Snowflake Time Travel Feature?
      • Enable Snowflake Time Travel
      • Disable Snowflake Time Travel
      • What are Data Retention Periods?
      • What are Snowflake Time Travel SQL Extensions?
  • How Many Days Does Snowflake Time Travel Work?
  • How to Specify a Custom Data Retention Period for Snowflake Time Travel?
  • How to Modify the Data Retention Period for Snowflake Objects?
  • How to Query Snowflake Time Travel Data?
  • How to Clone Historical Data in Snowflake?
  • Using UNDROP Command with Snowflake Time Travel: How to Restore Objects?
  • Snowflake Fail-Safe vs Snowflake Time Travel: What is the Difference?

What is Snowflake?


Snowflake is the world’s first Cloud Data Warehouse solution, built on the customer’s preferred Cloud Provider’s infrastructure (AWS, Azure, or GCP) . Snowflake (SnowSQL) adheres to the ANSI Standard and includes typical Analytics and Windowing Capabilities. There are some differences in Snowflake’s syntax, but there are also some parallels. 

Snowflake’s integrated development environment (IDE) is entirely web-based. Visit XXXXXXXX.us-east-1.snowflakecomputing.com. You’ll be sent to the primary Online GUI, which works as an IDE, where you can begin interacting with your Data Assets after logging in. Each query tab in the Snowflake interface is referred to as a “Worksheet” for simplicity. These “Worksheets,” like the tab history function, are automatically saved and can be viewed at any time.

  • Query Optimization: By using Clustering and Partitioning, Snowflake may optimize a query on its own. With Snowflake, Query Optimization isn’t something to be concerned about.
  • Secure Data Sharing: Data can be exchanged securely from one account to another using Snowflake Database Tables, Views, and UDFs.
  • Support for File Formats: JSON, Avro, ORC, Parquet, and XML are all Semi-Structured data formats that Snowflake can import. It has a VARIANT column type that lets you store Semi-Structured data.
  • Caching: Snowflake has a caching strategy that allows the results of the same query to be quickly returned from the cache when the query is repeated. Snowflake uses permanent (during the session) query results to avoid regenerating the report when nothing has changed.
  • SQL and Standard Support: Snowflake offers both standard and extended SQL support, as well as Advanced SQL features such as Merge, Lateral View, Statistical Functions, and many others.
  • Fault Resistant: Snowflake provides exceptional fault-tolerant capabilities to recover the Snowflake object in the event of a failure (tables, views, database, schema, and so on).

To get further information, check out the official Snowflake website.


Snowflake Time Travel is an interesting tool that allows you to access data from any point in the past. For example, if you have an Employee table, and you inadvertently delete it, you can utilize Time Travel to go back 5 minutes and retrieve the data. Snowflake Time Travel allows you to Access Historical Data (that is, data that has been updated or removed) at any point in time. It is an effective tool for doing the following tasks:

  • Query Data that has been changed or deleted in the past.
  • Make clones of complete Tables, Schemas, and Databases at or before certain dates.
  • Restore Tables, Schemas, and Databases that have been deleted.

As the ability of businesses to collect data explodes, data teams have a crucial role to play in fueling data-driven decisions. Yet, they struggle to consolidate the data scattered across sources into their warehouse to build a single source of truth. Broken pipelines, data quality issues, bugs and errors, and lack of control and visibility over the data flow make data integration a nightmare.

1000+ data teams rely on Hevo’s Data Pipeline Platform to integrate data from over 150+ sources in a matter of minutes. Billions of data events from sources as varied as SaaS apps, Databases, File Storage and Streaming sources can be replicated in near real-time with Hevo’s fault-tolerant architecture. What’s more – Hevo puts complete control in the hands of data teams with intuitive dashboards for pipeline monitoring, auto-schema management, custom ingestion/loading schedules. 

All of this combined with transparent pricing and 24×7 support makes us the most loved data pipeline software on review sites.

Take our 14-day free trial to experience a better way to manage data pipelines.

How to Enable & Disable Snowflake Time Travel Feature? 

1) Enable Snowflake Time Travel

To enable Snowflake Time Travel, no action is necessary. It is turned on by default, with a one-day retention period. However, if you want to configure longer Data Retention Periods of up to 90 days for Databases, Schemas, and Tables, you’ll need to upgrade to Snowflake Enterprise Edition. Please keep in mind that longer Data Retention necessitates more storage, which will be reflected in your monthly Storage Fees. See Storage Costs for Time Travel and Fail-safe for further information on storage fees.

For Snowflake Time Travel, the example below builds a table with 90 days of retention.
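A sketch of such a table definition (the table name and columns are illustrative):

```sql
CREATE TABLE employee (
    id   INT,
    name STRING
)
DATA_RETENTION_TIME_IN_DAYS = 90;
```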

To shorten the retention term for a certain table, the below query can be used.
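Shortening the retention on an existing table could be sketched as:

```sql
ALTER TABLE employee SET DATA_RETENTION_TIME_IN_DAYS = 30;
```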

2) Disable Snowflake Time Travel

Snowflake Time Travel cannot be turned off for an account, but it can be turned off for individual Databases, Schemas, and Tables by setting the object’s DATA_RETENTION_TIME_IN_DAYS to 0.

Users with the ACCOUNTADMIN role can also set DATA_RETENTION_TIME_IN_DAYS to 0 at the account level, which means that by default, all Databases (and, by extension, all Schemas and Tables) created in the account have no retention period. However, this default can be overridden at any time for any Database, Schema, or Table.
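Both levels of disabling might be sketched as follows (object names are illustrative):

```sql
-- Turn Time Travel off for a single table
ALTER TABLE employee SET DATA_RETENTION_TIME_IN_DAYS = 0;

-- Turn it off by default for the entire account (requires ACCOUNTADMIN)
ALTER ACCOUNT SET DATA_RETENTION_TIME_IN_DAYS = 0;
```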

3) What are Data Retention Periods?

Data Retention Time is an important part of Snowflake Time Travel. When data in a table is modified, such as rows being deleted or an object containing data being dropped, Snowflake preserves the state of the data before the change. The Data Retention Period sets the number of days that this historical data is stored, allowing Time Travel operations ( SELECT, CREATE… CLONE, UNDROP ) to be performed on it.

All Snowflake Accounts have a standard retention duration of one day (24 hours) , which is automatically enabled:

  • In Snowflake Standard Edition , the Retention Period can be set to 0 (or unset back to the default of 1 day) at the account and object level (i.e. for Databases, Schemas, and Tables).
  • For transient Databases, Schemas, and Tables, the Retention Period can be set to 0 (or unset back to the default of 1 day ). The same applies to Temporary Tables.
  • For permanent Databases, Schemas, and Tables, the Retention Time can be configured to any number between 0 and 90 days .

4) What are Snowflake Time Travel SQL Extensions?

The following SQL extensions have been added to facilitate Snowflake Time Travel:

  • The AT | BEFORE clause, which accepts TIMESTAMP (a specific date and time), OFFSET (time difference in seconds from the present time), or STATEMENT (identifier for a statement, e.g. a query ID)
  • The UNDROP command for Tables, Schemas, and Databases


The maximum Retention Time in Standard Edition is set to 1 day by default (i.e. one 24 hour period). The default for your account in Snowflake Enterprise Edition (and higher) can be set to any value up to 90 days :

  • The account default can be modified using the DATA_RETENTION_TIME_IN_DAYS parameter in the command when creating a Table, Schema, or Database.
  • If a Database or Schema has a Retention Period , that duration is inherited by default for all objects created in the Database/Schema.

The Data Retention Time can be set in the way it has been set in the example below. 
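One way to set the retention time at creation, as the example refers to, might be (the database name is illustrative):

```sql
CREATE DATABASE sales_db DATA_RETENTION_TIME_IN_DAYS = 90;
```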

Using manual scripts and custom code to move data into the warehouse is cumbersome. Frequent breakages, pipeline errors and lack of data flow monitoring makes scaling such a system a nightmare. Hevo’s reliable data pipeline platform enables you to set up zero-code and zero-maintenance data pipelines that just work.

  • Reliability at Scale : With Hevo, you get a world-class fault-tolerant architecture that scales with zero data loss and low latency. 
  • Monitoring and Observability : Monitor pipeline health with intuitive dashboards that reveal every stat of pipeline and data flow. Bring real-time visibility into your ELT with Alerts and Activity Logs  
  • Stay in Total Control : When automation isn’t enough, Hevo offers flexibility – data ingestion modes, ingestion, and load frequency, JSON parsing, destination workbench, custom schema management, and much more – for you to have total control.    
  • Auto-Schema Management : Correcting improper schema after the data is loaded into your warehouse is challenging. Hevo automatically maps source schema with destination warehouse so that you don’t face the pain of schema errors.
  • 24×7 Customer Support : With Hevo you get more than just a platform, you get a partner for your pipelines. Discover peace with round the clock “Live Chat” within the platform. What’s more, you get 24×7 support even during the 14-day full-feature free trial.
  • Transparent Pricing : Say goodbye to complex and hidden pricing models. Hevo’s Transparent Pricing brings complete visibility to your ELT spend. Choose a plan based on your business needs. Stay in control with spend alerts and configurable credit limits for unforeseen spikes in data flow. 

When you alter a Table’s Data Retention Period, the new Retention Period affects all active data as well as any data in Time Travel. Whether you lengthen or shorten the period has an impact:

1) Increasing Retention 

This causes the data in Snowflake Time Travel to be saved for a longer amount of time.

For example, if you increase the retention period from 10 to 20 days on a Table, data that would have been removed after 10 days is now kept for an additional 10 days before being moved to Fail-safe. This does not apply to data that is more than 10 days old and has already been moved to Fail-safe.

2) Decreasing Retention

  • Decreasing retention reduces the amount of time data is kept in Time Travel.
  • The new shorter Retention Period applies to active data modified after the Retention Period was trimmed.
  • If the data is still inside the new shorter period, it will stay in Time Travel.
  • If the data is not inside the new timeframe, it is moved to Fail-safe.

For example, if you have a table with a 10-day Retention Period and reduce it to one day, data from days 2 through 10 will be moved to Fail-safe, leaving just data from day 1 accessible through Time Travel.

However, since the data is moved from Snowflake Time Travel to Fail-Safe via a background operation, the change is not immediately obvious. Snowflake ensures that the data will be migrated, but does not say when the process will be completed; the data is still accessible using Time Travel until the background operation is completed.

Use the appropriate ALTER <object> Command to adjust an object’s Retention duration. For example, the below command is used to adjust the Retention duration for a table:
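Adjusting a table's retention, as described, can be sketched as (the table name and value are illustrative):

```sql
ALTER TABLE orders SET DATA_RETENTION_TIME_IN_DAYS = 45;
```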

How to Query Snowflake Time Travel Data?

When you make any DML actions on a table, Snowflake saves prior versions of the Table data for a set amount of time. Using the AT | BEFORE Clause, you can Query previous versions of the data.

This Clause allows you to query data at or immediately before a certain point in the Table’s history throughout the Retention Period . The supplied point can be either time-based (e.g., a Timestamp or a Time Offset from the present) or a Statement ID (e.g. of a SELECT or INSERT ).

  • The query below selects Historical Data from a Table as of the Date and Time indicated by the Timestamp:
  • The following Query pulls Data from a Table that was last updated 5 minutes ago:
  • The following Query collects Historical Data from a Table up to the specified statement’s Modifications, but not including them:
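The three variants listed above might be sketched as follows (the table name, timestamp, and query ID are illustrative placeholders):

```sql
-- 1) Historical data as of a specific date and time
SELECT * FROM employee
  AT (TIMESTAMP => '2022-01-10 08:00:00'::TIMESTAMP_LTZ);

-- 2) Historical data as of 5 minutes ago
SELECT * FROM employee
  AT (OFFSET => -60*5);

-- 3) Historical data up to, but not including, a statement's changes
SELECT * FROM employee
  BEFORE (STATEMENT => '01a0b2c3-0000-1234-0000-000000000001');
```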

The AT | BEFORE Clause, in addition to queries, can be combined with the CLONE keyword in the CREATE command for a Table, Schema, or Database to create a logical duplicate of the object at a specific point in its history.

Consider the following scenario:

  • The CREATE TABLE command below generates a Clone of a Table as of the Date and Time indicated by the Timestamp:
  • The following CREATE SCHEMA command produces a Clone of a Schema and all of its Objects as they were an hour ago:
  • The CREATE DATABASE command produces a Clone of a Database and all of its Objects as they were before the specified statement was completed:
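Those three clone commands might look roughly like this (object names and identifiers are illustrative):

```sql
-- Clone a table as of a specific timestamp
CREATE TABLE restored_employee CLONE employee
  AT (TIMESTAMP => '2022-01-10 08:00:00'::TIMESTAMP_LTZ);

-- Clone a schema and all its objects as they were an hour ago
CREATE SCHEMA restored_schema CLONE my_schema
  AT (OFFSET => -3600);

-- Clone a database up to, but not including, a statement's changes
CREATE DATABASE restored_db CLONE my_db
  BEFORE (STATEMENT => '01a0b2c3-0000-1234-0000-000000000001');
```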

Using UNDROP Command with Snowflake Time Travel: How to Restore Objects? 

The following commands can be used to restore a dropped object that has not been purged from the system (i.e. the object is still visible in the SHOW <object_type> HISTORY output):

  • UNDROP DATABASE
  • UNDROP TABLE
  • UNDROP SCHEMA

UNDROP returns the object to the state it was in before the DROP command was issued.

A dropped Database can be restored using the UNDROP command. For example,
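A sketch of such a restore (the database name is illustrative):

```sql
UNDROP DATABASE sales_db;
```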


Similarly, you can UNDROP Tables and Schemas.

In the event of a System Failure or other Catastrophic Event , such as a Hardware Failure or a Security Incident, Fail-safe ensures that Historical Data is preserved, whereas Snowflake Time Travel allows you to access Historical Data (that is, data that has been updated or removed) within the retention period.

Fail-Safe mode allows Snowflake to recover Historical Data for a (non-configurable) 7-day period . This time begins as soon as the Snowflake Time Travel Retention Period expires.

This article has exposed you to various Snowflake Time Travel features to help you improve your overall decision-making and experience when trying to make the most out of your data. In case you want to export data from a source of your choice into your desired Database/destination like Snowflake , then Hevo is the right choice for you! 

However, as a Developer, extracting complex data from a diverse set of data sources like Databases, CRMs, Project Management Tools, Streaming Services, and Marketing Platforms to your Database can seem quite challenging. If you are from a non-technical background or are new to the game of data warehousing and analytics, Hevo can help!

Hevo will automate your data transfer process, hence allowing you to focus on other aspects of your business like Analytics, Customer Management, etc. Hevo provides a wide range of sources – 150+ Data Sources (including 40+ Free Sources) – that connect with over 15+ Destinations. It will provide you with a seamless experience and make your work life much easier.

Want to take Hevo for a spin? Sign Up for a 14-day free trial and experience the feature-rich Hevo suite first hand.

You can also have a look at our unbeatable pricing that will help you choose the right plan for your business needs!

Harsh is a research analyst with a passion for data, software architecture, and technical writing. He has written more than 100 articles on data integration and infrastructure.


ThinkETL

Overview of Snowflake Time Travel

Consider a scenario where, instead of dropping a backup table, you accidentally dropped the actual table, or instead of updating a set of records, you accidentally updated all the records present in the table (because you didn’t use the WHERE clause in your UPDATE statement).

What would be your next action after realizing your mistake? You would probably wish you could go back in time to a point before you executed your incorrect statement so that you could undo your mistake.

Snowflake provides this exact feature where you could get back to the data present at a particular period of time. This feature in Snowflake is called Time Travel .

Let us understand more about Snowflake Time Travel in this article with examples.

1. What is Snowflake Time Travel?

Snowflake Time Travel enables accessing historical data that has been changed or deleted at any point within a defined period. It is a powerful CDP (Continuous Data Protection) feature which ensures the maintenance and availability of your historical data.

Snowflake Continuous Data Protection Lifecycle

Below actions can be performed using Snowflake Time Travel within a defined period of time:

  • Restore tables, schemas, and databases that have been dropped.
  • Query data in the past that has since been updated or deleted.
  • Create clones of entire tables, schemas, and databases at or before specific points in the past.

Once the defined period of time has elapsed, the data is moved into Snowflake Fail-Safe and these actions can no longer be performed.

2. Restoring Dropped Objects

A dropped object can be restored within the Snowflake Time Travel retention period using the “UNDROP” command.

Consider we have a table ‘Employee’ and it has been dropped accidentally instead of a backup table.

Dropping Employee table

It can be easily restored using the Snowflake UNDROP command as shown below.

Restoring Employee table using UNDROP
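In SQL terms, the drop and the subsequent rescue shown above boil down to:

```sql
DROP TABLE employee;

-- Restore it within the Time Travel retention period
UNDROP TABLE employee;
```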

Databases and Schemas can also be restored using the UNDROP command.

Calling UNDROP restores the object to its most recent state before the DROP command was issued.

3. Querying Historical Objects

When unwanted DML operations are performed on a table, the Snowflake Time Travel feature enables querying earlier versions of the data using the  AT | BEFORE  clause.

The AT | BEFORE clause is specified in the FROM clause immediately after the table name and it determines the point in the past from which historical data is requested for the object.

Let us understand with an example. Consider the table Employee. The table has a field IS_ACTIVE which indicates whether an employee is currently working in the Organization.

Employee table

The employee ‘Michael’ has left the organization and the field IS_ACTIVE needs to be updated as FALSE. But instead you have updated IS_ACTIVE as FALSE for all the records present in the table.

Updating IS_ACTIVE in Employee table
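The accidental statement described above, next to the intended one, might look like this sketch:

```sql
-- Intended: deactivate only Michael
-- UPDATE employee SET is_active = FALSE WHERE name = 'Michael';

-- Actually executed: the missing WHERE clause hits every row
UPDATE employee SET is_active = FALSE;
```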

There are three different ways you could query the historical data using AT | BEFORE Clause.

3.1. OFFSET

“OFFSET” is the time difference in seconds from the present time.

The following query selects historical data from a table as of 5 minutes ago.

Querying historical data using OFFSET
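A sketch of such a query:

```sql
SELECT * FROM employee AT (OFFSET => -60*5);
```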

3.2. TIMESTAMP

Use “TIMESTAMP” to get the data at or before a particular date and time.

The following query selects historical data from a table as of the date and time represented by the specified timestamp.

Querying historical data using TIMESTAMP
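For example (the timestamp value is illustrative):

```sql
SELECT * FROM employee
  AT (TIMESTAMP => '2022-06-01 10:00:00'::TIMESTAMP_LTZ);
```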

3.3. STATEMENT

“STATEMENT” is an identifier for a statement, e.g. a query ID.

The following query selects historical data from a table up to, but not including any changes made by the specified statement.

Querying historical data using STATEMENT
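For example (the query ID is a placeholder; take the real one from your query history):

```sql
SELECT * FROM employee
  BEFORE (STATEMENT => '01a0b2c3-0000-1234-0000-000000000001');
```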

The query ID used in the statement belongs to the UPDATE statement we executed earlier. The query ID can be obtained from “Open History”.

4. Cloning Historical Objects

We have seen how to query the historical data. In addition, the AT | BEFORE clause can be used with the CLONE keyword in the CREATE command to create a logical duplicate of the object at a specified point in the object’s history.

The following queries show how to clone a table using AT | BEFORE clause in three different ways using OFFSET, TIMESTAMP and STATEMENT.
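In sketch form (table names, the timestamp, and the query ID are illustrative):

```sql
CREATE TABLE employee_clone_offset CLONE employee
  AT (OFFSET => -60*5);

CREATE TABLE employee_clone_ts CLONE employee
  AT (TIMESTAMP => '2022-06-01 10:00:00'::TIMESTAMP_LTZ);

CREATE TABLE employee_clone_stmt CLONE employee
  BEFORE (STATEMENT => '01a0b2c3-0000-1234-0000-000000000001');
```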

To restore the data in the table to a historical state, create a clone using AT | BEFORE clause, drop the actual table and rename the cloned table to the actual table name.
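That restore recipe, clone, drop, and rename, might be sketched as:

```sql
-- 1) Clone the table as it was five minutes ago
CREATE TABLE employee_restored CLONE employee AT (OFFSET => -60*5);

-- 2) Drop the damaged table
DROP TABLE employee;

-- 3) Give the clone the original name
ALTER TABLE employee_restored RENAME TO employee;
```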

5. Data Retention Period

A key component of Snowflake Time Travel is the data retention period.

When data in a table is modified, deleted or the object containing data is dropped, Snowflake preserves the state of the data before the update. The data retention period specifies the number of days for which this historical data is preserved.

Time Travel operations can be performed on the data during this data retention period of the object. When the retention period ends for an object, the historical data is moved into Snowflake Fail-safe.

6. How to find the Time Travel Data Retention period of Snowflake Objects?

SHOW PARAMETERS command can be used to find the Time Travel retention period of Snowflake objects.

The below commands can be used to find the data retention period of databases, schemas, and tables.
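Sketches of those commands (object names are illustrative):

```sql
SHOW PARAMETERS LIKE 'DATA_RETENTION_TIME_IN_DAYS' IN DATABASE my_db;
SHOW PARAMETERS LIKE 'DATA_RETENTION_TIME_IN_DAYS' IN SCHEMA my_db.my_schema;
SHOW PARAMETERS LIKE 'DATA_RETENTION_TIME_IN_DAYS' IN TABLE employee;
```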

The DATA_RETENTION_TIME_IN_DAYS parameter specifies the number of days to retain the old version of deleted/updated data.

The below image shows that the table Employee has the DATA_RETENTION_TIME_IN_DAYS value set as 1.

Query showing Data Retention Period of Employee table

7. How to set custom Time-Travel Data Retention period for Snowflake Objects?

Time travel is automatically enabled with the standard, 1-day retention period. However, you may wish to upgrade to Snowflake Enterprise Edition or higher to enable configuring longer data retention periods of up to 90 days for databases, schemas, and tables.

You can configure the data retention period of a table while creating the table as shown below.

To modify the data retention period of an existing table, use the below syntax.
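Both operations might be sketched as follows (the column definitions are illustrative):

```sql
-- Set retention while creating the table
CREATE TABLE employee (
    id        INT,
    name      STRING,
    is_active BOOLEAN
)
DATA_RETENTION_TIME_IN_DAYS = 30;

-- Modify retention on an existing table
ALTER TABLE employee SET DATA_RETENTION_TIME_IN_DAYS = 30;
```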

The below image shows that the data retention period of the table has been altered to 30 days.

Altering Data Retention Period of Employee table

A retention period of 0 days for an object effectively disables Time Travel for the object.

8. Data Retention Period Rules and Inheritance

Changing the retention period for your account or individual objects changes the value for all lower-level objects that do not have a retention period explicitly set. For example:

  • If you change the retention period at the account level, all databases, schemas, and tables that do not have an explicit retention period automatically inherit the new retention period.
  • If you change the retention period at the schema level, all tables in the schema that do not have an explicit retention period inherit the new retention period.

Currently, when a database is dropped, the data retention period for child schemas or tables, if explicitly set to be different from the retention of the database, is not honored. The child schemas or tables are retained for the same period of time as the database.

  • To honor the data retention period for these child objects (schemas or tables), drop them explicitly before you drop the database or schema.



Time Travel and DDL Changes On Table

Time Travel works with DML changes against a table, but it does not provide the state of the table if it has undergone a DDL change, such as dropping a column from the table. For example, see the below demo code:
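The original demo code is not reproduced here; a hypothetical reconstruction illustrating the behavior might be:

```sql
CREATE OR REPLACE TABLE t1 (c1 INT, c2 STRING);
INSERT INTO t1 VALUES (1, 'a');

-- DDL change: drop a column
ALTER TABLE t1 DROP COLUMN c2;

-- Fails: Time Travel resolves columns against the *current* metadata,
-- which no longer contains c2
SELECT c2 FROM t1 AT (OFFSET => -60);
```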

Currently, when any DML operations are performed on a table, Snowflake retains previous versions of the table data for a defined period of time. This enables querying earlier versions of the data using the  AT | BEFORE  clause.

However, for DDL changes, it points to the current version of metadata and hence these changes at the schema level are not available yet.

As of now, the workaround would be to clone the table, schema, or database (whichever is suitable) before making the DDL change.

https://docs.snowflake.com/en/user-guide/data-time-travel.html#querying-historical-data


How to manage GDPR compliance with Snowflake’s Time Travel and Disaster Recovery

One year after implementation, the European Union’s General Data Protection Regulation ( GDPR ) continues to be a hot regulatory topic. As organizations work to bring their data practices into compliance with the new law, one question comes up repeatedly: How does Snowflake, the data warehouse built for the cloud, enable my organization to be GDPR compliant?

My answer tends to surprise people. Simply put, compliance is not a function of your database but rather a function of the design you choose . Although Snowflake provides the cloud-based technology and tools that enable compliance, each organization maintains sole responsibility for designing an architecture that is, in fact, GDPR compliant.

With that said, Snowflake offers some powerful features that don’t exist in other databases. Therefore, it behooves database architects to have a working knowledge of Snowflake’s data protection and recovery features when designing their cloud-based data warehouse.

How Time Travel and Fail-Safe work

Snowflake provides continuous data protection (CDP) with two features called Time Travel and Fail-Safe . These unique features eliminate traditional data warehousing challenges (costly backups, time-consuming rollbacks) and enable teams to experiment with their data with confidence, knowing that it will never get lost accidentally.

Time Travel

System administrators can use Time Travel to revert to any point in the last 24 hours. This feature is useful whenever a mistake is made (for example, a table or schema is dropped in production) or a failed release requires a database rollback (for example, a new ETL operation corrupts the data). Through a simple SQL interface, data can be restored based on a point in time or a query ID, at the database, table, and schema level. By default, Time Travel is always on for Snowflake customers and is set to 24 hours, although enterprise customers have the capability to set Time Travel for any window up to 90 days.
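The restore paths above can be sketched as follows (object names, the timestamp, and the query ID are illustrative):

```sql
-- Query a table as of a specific point in time.
SELECT * FROM orders AT (TIMESTAMP => '2024-01-15 08:00:00'::TIMESTAMP_TZ);

-- Query a table as it was just before a specific statement ran.
SELECT * FROM orders BEFORE (STATEMENT => '01a2b3c4-0000-1111-2222-333344445555');

-- Restore an accidentally dropped table, schema, or database.
UNDROP TABLE orders;
UNDROP SCHEMA reporting;
UNDROP DATABASE production_db;
```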

If you accidentally drop a table or database and the Time Travel window has passed, Snowflake offers a “get out of jail free” card called Fail-Safe. This data recovery feature provides seven days in which you can contact the Snowflake Support Team to bring your data back. A Snowflake administrator must complete this restoration, because the data is inaccessible to an end user. Once the Fail-Safe seven-day window passes, data is removed permanently from Snowflake and the cloud, so it’s important to act quickly.

Best practices for GDPR compliance with CDP

GDPR compliance can be extremely challenging if you don’t have a well-thought-out database architecture, especially for handling the “right to erasure (right to be forgotten)” in GDPR Article 17. Once an individual’s personally identifiable information (PII) is requested, organizations have 30 to 90 days in which to delete the individual’s PII from their database.

Two questions often arise at this point:

  • How do you ensure that an individual’s PII is completely and permanently removed from your database?
  • What do you need to account for in the data architecture, given the automated recovery measures that exist in Snowflake for CDP?

Based on strong data management principles, here are three best practices that alleviate concerns around GDPR compliance while you use Snowflake’s CDP features.

#1: Build a data model that segregates PII data

Arguably the most important data management decision you can make is to build a data model that segregates PII data into a separate table or set of tables. By creating an inventory, you can identify and account for every type of PII data you hold. This best practice is key for adhering to privacy regulations because it makes PII data simpler to find and delete.

The pitfalls of alternative strategies demonstrate why PII data segregation is your strongest option:

  • The risk of losing peripheral data : If PII data is interspersed in a big table with, say, 100 columns, and 20 of those columns are PII data, what happens when you need to delete PII for a single individual? You will likely end up deleting a row from the table that also eliminates 80 columns of non-GDPR-related data that could be valuable for analytical and business purposes.
  • Reliance on costly update operations : You can run an operation that obfuscates all the PII data in a table by scrambling the targeted information and leaving the other data intact. However, that procedure is prone to errors and amounts to a much more expensive methodology than simply deleting data from a separate PII table from the get-go.
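A minimal sketch of a segregated model (table and column names are hypothetical):

```sql
-- Non-PII facts keyed by a surrogate customer ID.
CREATE TABLE customer_activity (
    customer_id   NUMBER,
    order_count   NUMBER,
    last_order_dt DATE
);

-- All PII lives in one narrow table, so erasure is a single targeted DELETE.
CREATE TABLE customer_pii (
    customer_id NUMBER,
    full_name   VARCHAR,
    email       VARCHAR,
    phone       VARCHAR
);

-- A right-to-erasure request for one individual leaves the analytical data intact.
DELETE FROM customer_pii WHERE customer_id = 42;
```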

#2: Conduct batch deletions and apply Time Travel parameters

Rather than carry out PII deletions as requests come in, borrow a best practice from HIPAA (Health Insurance Portability and Accountability Act) and use batch deletions. By adding a GDPR delete flag and date to your data management process, you can execute a batch process once a month within the 30-day GDPR window.

For PII erasure requests, you must consider Time Travel and its setting. For example, GDPR regulations provide 30 days to delete PII (and up to 90 days under extenuating circumstances), which means that in Snowflake’s enterprise version, you should set Time Travel for PII-specific tables to no more than 30 days; otherwise, the data could be inadvertently restored.
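Under those assumptions, the flag-and-batch pattern might look like this (table and column names and the retention value are illustrative):

```sql
-- Keep Time Travel on the PII table short enough that deleted PII
-- cannot be restored past the GDPR window.
ALTER TABLE customer_pii SET DATA_RETENTION_TIME_IN_DAYS = 30;

-- As erasure requests arrive, flag the rows instead of deleting immediately.
UPDATE customer_pii
SET    gdpr_delete_flag = TRUE,
       gdpr_request_dt  = CURRENT_DATE()
WHERE  customer_id = 42;

-- Monthly batch job: delete everything flagged, within the 30-day window.
DELETE FROM customer_pii WHERE gdpr_delete_flag = TRUE;
```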

Conversely, another useful aspect of Time Travel is that if you inadvertently delete the wrong person’s data, you can easily do a point-in-time restore of just those records (if you are still within the Time Travel window). This strategy allows recovery from a mistake without violating GDPR.

#3: Implement tracking

Another best practice is to maintain a table where you track PII erasure requests and PII deletions. This tracking approach also helps you avoid any rollback issues, which is an important safety concern when using Time Travel. For instance, if you happen to restore back to a time before a batch deletion was executed, you’ll know to query the metadata table so you can delete the PII data again.
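One way to sketch such a tracking table (all names are hypothetical):

```sql
-- Record every erasure request and when a batch run executed it,
-- so PII can be re-deleted after any Time Travel or Fail-safe restore.
CREATE TABLE pii_erasure_log (
    customer_id  NUMBER,
    requested_on DATE,
    deleted_on   DATE
);

-- After restoring to a point before a batch run, replay the log:
-- re-delete PII for every request already marked as executed.
DELETE FROM customer_pii
WHERE  customer_id IN (
    SELECT customer_id FROM pii_erasure_log WHERE deleted_on IS NOT NULL
);
```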

The same holds true with Fail-Safe, which allows the restoration of all your “lost” or deleted data. As such, you may need to use your list of PII erasure requests to delete those individuals’ PII again. The good news is that Fail-Safe operates within a seven-day period, so you’ll always be within the 90-day GDPR window if you do monthly batch deletions.

At the heart of the EU law is the mandate for organizations to take full responsibility for the data they hold. This regulation is putting a much-needed focus on database architecture and management principles that ultimately makes companies better at safeguarding data.

If you design your database architecture with the most-restrictive privacy policies and regulations in mind, you can avoid heavy refactoring in the future. Today, that means adhering to GDPR and implementing a database design that keeps all your PII ducks in a row while still benefiting from Snowflake’s CDP.


SHOW TABLES ¶

Lists the tables for which you have access privileges, including dropped tables that are still within the Time Travel retention period and, therefore, can be undropped. The command can be used to list tables for the current/specified database or schema, or across your entire account.

The output returns table metadata and properties, ordered lexicographically by database, schema, and table name (see Output in this topic for descriptions of the output columns). This is important to note if you want to filter the results using the provided filters.

See also: CREATE TABLE , DROP TABLE , UNDROP TABLE , ALTER TABLE , DESCRIBE TABLE , TABLES View (Information Schema)

Parameters ¶

TERSE

Optionally returns only a subset of the output columns:

  • created_on
  • name
  • kind (the kind column value is always TABLE)
  • database_name
  • schema_name

Default: No value (all columns are included in the output)

HISTORY

Optionally includes dropped tables that have not yet been purged (i.e. they are still within their respective Time Travel retention periods). If multiple versions of a dropped table exist, the output displays a row for each version. The output also includes an additional dropped_on column, which displays:

Date and timestamp (for dropped tables).

NULL (for active tables).

Default: No value (dropped tables are not included in the output)

LIKE ' pattern '

Optionally filters the command output by object name. The filter uses case-insensitive pattern matching, with support for SQL wildcard characters ( % and _ ).

For example, because matching is case-insensitive, the following patterns return the same results:

... LIKE '%testing%' ...
... LIKE '%TESTING%' ...

Default: No value (no filtering is applied to the output).

IN ACCOUNT | DATABASE [ db_name ] | SCHEMA [ schema_name ]

Optionally specifies the scope of the command. Specify one of the following:

ACCOUNT

Returns records for the entire account.

DATABASE , DATABASE db_name

Returns records for the current database in use or for a specified database ( db_name ).

If you specify DATABASE without db_name and no database is in use, the keyword has no effect on the output.

SCHEMA , SCHEMA schema_name

Returns records for the current schema in use or a specified schema ( schema_name ).

SCHEMA is optional if a database is in use or if you specify the fully qualified schema_name (for example, db.schema ).

If no database is in use, specifying SCHEMA has no effect on the output.

Default: Depends on whether the session currently has a database in use:

Database: DATABASE is the default (that is, the command returns the objects you have privileges to view in the database).

No database: ACCOUNT is the default (that is, the command returns the objects you have privileges to view in your account).

STARTS WITH ' name_string '

Optionally filters the command output based on the characters that appear at the beginning of the object name. The string must be enclosed in single quotes and is case-sensitive .

For example, because matching is case-sensitive, the following strings return different results:

... STARTS WITH 'B' ...
... STARTS WITH 'b' ...

Default: No value (no filtering is applied to the output)

LIMIT rows [ FROM ' name_string ' ]

Optionally limits the maximum number of rows returned, while also enabling “pagination” of the results. The actual number of rows returned might be less than the specified limit (for example, when the number of existing objects is less than the specified limit).

The optional FROM ' name_string ' subclause effectively serves as a “cursor” for the results. This enables fetching the specified number of rows following the first row whose object name matches the specified string:

The string must be enclosed in single quotes and is case-sensitive .

The string does not have to include the full object name; partial names are supported.

Default: No value (no limit is applied to the output)

For SHOW commands that support both the FROM ' name_string ' and STARTS WITH ' name_string ' clauses, you can combine both of these clauses in the same statement. However, both conditions must be met; otherwise they cancel each other out and no results are returned.

In addition, objects are returned in lexicographic order by name, so FROM ' name_string ' only returns rows with a higher lexicographic value than the rows returned by STARTS WITH ' name_string ' .

For example:

... STARTS WITH 'A' LIMIT ... FROM 'B' would return no results.

... STARTS WITH 'B' LIMIT ... FROM 'A' would return no results.

... STARTS WITH 'A' LIMIT ... FROM 'AB' would return results (if any rows match the input strings).

Usage Notes ¶

If an account (or database or schema) has a large number of tables, then searching the entire account (or database or schema) can consume a significant amount of compute resources.

In the output, results are sorted by database name, schema name, and then table name. This means results for a database can contain tables from multiple schemas and might break pagination. In order for pagination to work as expected, you must execute the SHOW TABLES command for a single schema. You can use the IN SCHEMA schema_name parameter with the SHOW TABLES command. Alternatively, you can use the schema in the current context by executing a USE SCHEMA command before executing a SHOW TABLES command.

The command does not require a running warehouse to execute.

The value for LIMIT rows cannot exceed 10000 . If LIMIT rows is omitted, the command results in an error if the result set is larger than 10K rows.

To view results for which more than 10K records exist, either include LIMIT rows or query the corresponding view in the Snowflake Information Schema .

To post-process the output of this command, you can use the RESULT_SCAN function, which treats the output as a table that can be queried.
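For example, RESULT_SCAN can filter the most recent SHOW TABLES output like a regular table (the schema name is illustrative; the quoted column names are case-sensitive):

```sql
-- List tables, then post-process the output of the SHOW command.
SHOW TABLES IN SCHEMA my_db.my_schema;

-- RESULT_SCAN treats the previous result set as a queryable table.
SELECT "name", "rows"
FROM   TABLE(RESULT_SCAN(LAST_QUERY_ID()))
WHERE  "rows" > 0
ORDER  BY "name";
```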

The command output provides table properties and metadata in the following columns:

For more information about the properties that can be specified for a table, see CREATE TABLE .

For cloned tables and tables with deleted data, the bytes displayed for the table may be different than the number of physical bytes for the table:

A cloned table does not utilize additional data storage until new rows are added to the table or existing rows in the table are modified or deleted. If few or no changes have been made to the table, the number of bytes displayed is more than the actual physical bytes stored for the table.

Data deleted from a table is maintained in Snowflake until both the Time Travel retention period (default is 1 day) and Fail-safe period (7 days) for the data have passed. During these two periods, the number of bytes displayed is less than the actual physical bytes stored for the table.

For more detailed information about table size in bytes as it relates to cloning, Time Travel, and Fail-safe, see the TABLE_STORAGE_METRICS Information Schema view.

These examples show all of the tables that you have privileges to view based on the specified parameters.

Run SHOW TABLES on tables in the Sample Data Sets . The examples use the TERSE parameter to limit the output.

Show all the tables with a name that starts with LINE in the tpch_sf1 schema:

SHOW TERSE TABLES IN tpch_sf1 STARTS WITH 'LINE';

+-------------------------------+----------+-------+-----------------------+-------------+
| created_on                    | name     | kind  | database_name         | schema_name |
|-------------------------------+----------+-------+-----------------------+-------------|
| 2016-07-08 13:41:59.960 -0700 | LINEITEM | TABLE | SNOWFLAKE_SAMPLE_DATA | TPCH_SF1    |
+-------------------------------+----------+-------+-----------------------+-------------+

Show all of the tables with a name that includes the substring PART in the tpch_sf1 schema:

SHOW TERSE TABLES LIKE '%PART%' IN tpch_sf1;

+-------------------------------+-----------+-------+-----------------------+-------------+
| created_on                    | name      | kind  | database_name         | schema_name |
|-------------------------------+-----------+-------+-----------------------+-------------|
| 2016-07-08 13:41:59.960 -0700 | JPART     | TABLE | SNOWFLAKE_SAMPLE_DATA | TPCH_SF1    |
| 2016-07-08 13:41:59.960 -0700 | JPARTSUPP | TABLE | SNOWFLAKE_SAMPLE_DATA | TPCH_SF1    |
| 2016-07-08 13:41:59.960 -0700 | PART      | TABLE | SNOWFLAKE_SAMPLE_DATA | TPCH_SF1    |
| 2016-07-08 13:41:59.960 -0700 | PARTSUPP  | TABLE | SNOWFLAKE_SAMPLE_DATA | TPCH_SF1    |
+-------------------------------+-----------+-------+-----------------------+-------------+

Show the tables in the tpch_sf1 schema, but limit the output to three rows, and start with the table names that begin with J:

SHOW TERSE TABLES IN tpch_sf1 LIMIT 3 FROM 'J';

+-------------------------------+-----------+-------+-----------------------+-------------+
| created_on                    | name      | kind  | database_name         | schema_name |
|-------------------------------+-----------+-------+-----------------------+-------------|
| 2016-07-08 13:41:59.960 -0700 | JCUSTOMER | TABLE | SNOWFLAKE_SAMPLE_DATA | TPCH_SF1    |
| 2016-07-08 13:41:59.960 -0700 | JLINEITEM | TABLE | SNOWFLAKE_SAMPLE_DATA | TPCH_SF1    |
| 2016-07-08 13:41:59.960 -0700 | JNATION   | TABLE | SNOWFLAKE_SAMPLE_DATA | TPCH_SF1    |
+-------------------------------+-----------+-------+-----------------------+-------------+

Show a dropped table using the HISTORY parameter.

Create a table in your current schema, then drop it:

CREATE OR REPLACE TABLE test_show_tables_history (c1 NUMBER);
DROP TABLE test_show_tables_history;

Use the HISTORY parameter to include dropped tables in the command output:

SHOW TABLES HISTORY LIKE 'test_show_tables_history';

In the output, the dropped_on column shows the date and time when the table was dropped.
