SAP HANA Backup Notes

SAP HANA is an in-memory database, meaning it stores its database tables in RAM. RAM is the fastest data storage medium available today, but it is volatile: when the RAM chips lose power, the data bits on them are lost. To avoid data loss, SAP HANA takes regular savepoints using two persistent storage volumes.

The first is the database logging, or redo log, mount. Most SAP HANA hardware vendors place this file system on a Fusion-io SLC or MLC NAND flash card, or use a RAID 10 controller with an array of SSD disks to accommodate this mount. When you write data to SAP HANA, it is written to the logs first and then moved into RAM. The log mount should accommodate 100,000+ IOPS.

The second mount point is the persistent data storage mount point. This is typically mounted on a standard magnetic disk array and is used to store a copy of all committed data that is held in memory. With the combination of redo logging and in-memory data savepoints, the SAP HANA system is fully capable of recovering from a sudden power failure.

However, what happens when you lose the logging or persistent data storage mounts? Better yet, what happens when your SAP HANA server is no longer usable? To overcome these potential problems, regular backups of the SAP HANA system are required. This article will walk you through the process of backing up the SAP HANA database and help you identify the components that are critical to the process.

Question: What are the key backup components of the SAP HANA system?

The data and metadata

When performing a backup of the SAP HANA system, the tables, views, undo logs, packages, information views, and metadata are all saved to a configurable persistent disk location. In short, all of the data and code stored in SAP HANA will be backed up to a path that you specify.

The default location of the data backup is $(DIR_INSTANCE)/backup/data. Backups can be triggered using SAP HANA studio, the DBA Cockpit in BW, SQL commands, or 3rd party tools. They are not run automatically by the SAP HANA system. Each HANA DBA will have to devise a backup strategy with the hardware vendor before purchasing SAP HANA to ensure that they have the appropriate hardware to support the backup process. In most cases, it is a good idea to mount an NFS share to the SAP HANA OS to store these backup files. If you are using a 3rd party backup tool, make sure its first backup destination or media is also disk. The backup needs about 150–300 MB/s of throughput; failing to provide this will likely lead to long-running or failed backups.
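To put the throughput figure in perspective, here is a rough back-of-the-envelope sketch. The 1 TiB data volume size is an assumption for illustration; only the 150 MB/s rate comes from the range above:

```shell
#!/bin/sh
# Back-of-the-envelope backup-window estimate. The data volume size is
# an assumption for illustration; the rate is the low end of the
# 150-300 MB/s range mentioned above.
DATA_MB=$((1024 * 1024))   # assumed 1 TiB data volume, expressed in MB
RATE_MB_S=150              # assumed sustained throughput to the backup share
SECONDS_NEEDED=$((DATA_MB / RATE_MB_S))
echo "Estimated backup window: $((SECONDS_NEEDED / 60)) minutes"
```

At the high end of the range (300 MB/s), the same volume would take roughly half that time, which is why a slow first backup destination quickly becomes impractical.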

To initiate a backup using SQL scripts run the following from the SAP HANA Studio SQL window.
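The statement itself appears to have been lost from this post; it is the same BACKUP DATA statement that the hdbsql example below passes on the command line:

```sql
BACKUP DATA USING FILE ('COMPLETE_DATA_BACKUP');
```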


To initiate a backup using the hdbsql command line, run the following command using the SAP HANA client that is installed on your SAP HANA Linux server.

./hdbsql -i 0 -n localhost -u backup_user -p xxxx "BACKUP DATA USING FILE ('COMPLETE_DATA_BACKUP')"

To initiate the backup using SAP HANA studio, right-click your system in the navigation window and choose "Backup and Recovery" and then the "Back Up System" option.


For more information, see the SAP HANA Administration Guide.

The logs

By default, the SAP HANA system creates a log file backup every 15 minutes (900 seconds) or when the standard log segments become full. The logs are also backed up when the SAP HANA system starts. These log backups can be used during the SAP HANA recovery process to roll the logs forward or backward to a specific point in time. Each time the log backup job runs, a log segment snapshot, or series of files, is created in the $(DIR_INSTANCE)/backup/log directory. New files are created on each run; the existing files are not automatically deleted or overwritten. In short, the files will accumulate indefinitely until they are deleted by the SAP HANA DBA. This backup is independent of the standard full system data backup mentioned in the data and metadata section above.
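The 15-minute interval is driven by the log_backup_timeout_s parameter. As a sketch, the relevant fragment of global.ini looks like this (900 is the default; confirm the parameter name and section against the documentation for your revision):

```ini
[persistence]
# interval, in seconds, between automatic log backups
log_backup_timeout_s = 900
```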

As mentioned above, the SAP HANA DBA will need to maintain this log file backup location, either manually or using a script. Most 3rd party tools, using backint, will manage this process for you. If you need the ability to roll back several months, retain several months of these files. If you feel that your full system backups will be sufficient, keep only a few days' worth.

The following is an example script that you can run on the SAP HANA host to delete any log backups older than 5 days. However, in later versions of SAP HANA it is better to use the BACKUP CATALOG DELETE SQL command mentioned below. Never delete your persistent change log or database file directories. Make sure you only delete the "log backups" and that you have the path correct before deleting any files.

find /usr/sap/../../backup/log/ -type f -mtime +5 -exec rm {} \;
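If you want a little more protection than the bare one-liner, the same 5-day retention policy can be wrapped in a function that refuses to act on an unexpected path and defaults to a dry run. This is a sketch that assumes your log backups live under a path containing backup/log; adjust it to your layout:

```shell
#!/bin/sh
# Safer alternative to the bare find one-liner: refuses to touch anything
# unless the path looks like a log backup directory, and only deletes when
# explicitly asked. The 5-day retention mirrors the example above.
cleanup_log_backups() {
  dir="$1"; mode="$2"   # pass "--delete" as the second argument to remove files
  case "$dir" in
    */backup/log*) ;;   # expected location pattern; proceed
    *) echo "Refusing: $dir does not look like a log backup path" >&2; return 1 ;;
  esac
  if [ "$mode" = "--delete" ]; then
    find "$dir" -type f -mtime +5 -exec rm -f {} \;
  else
    find "$dir" -type f -mtime +5   # dry run: just list the candidates
  fi
}
```

Run it once without --delete, review the list, and only then run it again with --delete.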

Don’t forget about the backup catalog.

The backup catalog contains the full history of data backups and log backups. It is used during a system restore to locate the required backup files. Over time, this catalog will grow in size. Once it becomes very large, you will notice that SAP HANA studio takes several minutes to open and maneuver around the backup console. As of Revision 45, it will also affect the speed of the log segment backups. I have worked with a few systems where the backup catalog contained about 5 million entries. At that size, the log backup process was slowed and the backup console was useless. Therefore, I highly recommend that you purge the catalog of any records that are not needed.

Assuming that your version of SAP HANA supports purging the backup catalog, you can run the following SQL command to purge specific backup catalog records.
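The command appears to have been lost from this post; based on the SAP HANA SQL reference, it takes the following form, where the backup ID is a placeholder you would replace with a real value (for example, one returned by the query further below):

```sql
-- Remove all catalog entries older than the given backup ID
-- (1234567890 is a placeholder):
BACKUP CATALOG DELETE ALL BEFORE BACKUP_ID 1234567890;

-- Add COMPLETE to also delete the corresponding backup files
-- from the file system:
BACKUP CATALOG DELETE ALL BEFORE BACKUP_ID 1234567890 COMPLETE;
```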


If you specify the COMPLETE option at the end of the command, you will purge the catalog and remove the log backup files from the file system. In my opinion, this is the best way to purge the log backup segments, because it removes both the catalog entries and the files. If you only remove the files from the file system, the catalog will contain orphaned entries that might cause a restore process to return an error.

You can use the following SQL (for example) to find the oldest successful complete data backup within the last 5 days. You can then remove everything from the catalog and file system that is older than that backup ID. Keep in mind that the query is point-in-time sensitive: if your last backup was 10 days ago, it will not return the backup ID from 10 days ago.

SELECT MIN(TO_BIGINT(BACKUP_ID)) FROM SYS.M_BACKUP_CATALOG WHERE SYS_START_TIME >= ADD_DAYS(CURRENT_TIMESTAMP, -5) AND ENTRY_TYPE_NAME = 'complete data backup' AND STATE_NAME = 'successful'

The configuration and backup catalog files

The SAP HANA configuration files are not included in the standard full system backup or the automated log file backups. You need to back up these files using a script or a 3rd party backup tool. The configuration files contain any custom SAP HANA parameters or settings that will be needed if a full rebuild of the database is required. In addition, there is a backup catalog file that needs to be retained because it contains information needed for point-in-time restores. The backup catalog is not required for a full system restore, but it should be backed up with each full system backup.
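As a sketch of scripting this, the function below copies the .ini files to a destination of your choice. The source path shown in the comments is an assumption based on a typical installation; verify it on your own system before scheduling anything:

```shell
#!/bin/sh
# Sketch: copy the SAP HANA .ini configuration files into a backup
# location so that custom parameters survive a full rebuild.
# The example source path is an assumption; confirm it on your install.
backup_hana_config() {
  src="$1"    # e.g. /usr/sap/<SID>/SYS/global/hdb/custom/config
  dest="$2"   # e.g. a dated folder on your NFS backup share
  mkdir -p "$dest" || return 1
  for f in "$src"/*.ini; do
    [ -e "$f" ] || continue   # no .ini files present
    cp -p "$f" "$dest/" || return 1
  done
}
```

Using a dated destination folder (for example, one per full backup) keeps older configurations available alongside the corresponding data backups.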

Configuration File Locations

The customized .ini files are typically found under /usr/sap/&lt;SID&gt;/SYS/global/hdb/custom/config/, but verify the locations on your own installation.

Backup Catalog File Locations

As of Revision 45, the backup catalog is stored with the log segment backups automatically. There is no need to back up the BackupCatalog.xml file manually.


Where should I store my backup files?

While the SAP HANA system stores backups on the local file system by default, it is best to store these files on a different file system entirely. This file system can be mounted to the SAP HANA operating system but should be separate from the standard logging and persistent storage volumes. In addition, it would be wise to copy this mount to a disaster recovery location, tape system, or other backup media to ensure the redundancy and availability of the backup files. In short, make sure that you have a plan to manage both the standard full system backup files and the automatic log file backups.

One of the most common issues I see is a failure to properly manage the log segment backups. Remember that these files are created every 15 minutes by default. They will grow quickly during initial loads and when the RDBMS consumes large quantities of data. Once the log segment backup location runs out of space, the actual log segments (used by SAP HANA) will grow indefinitely. If both locations run out of disk space, you will not be able to recover the system until the volume is expanded or moved to an NFS mount. Again, I see this happening at over 50% of my clients.
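A simple way to catch this early is to watch the usage of the backup volume and alert before it fills. The function below is a sketch; the 80% default threshold is an arbitrary assumption you should tune for your landscape:

```shell
#!/bin/sh
# Sketch: warn when the log backup volume crosses a usage threshold, so
# the "both locations full" scenario described above is caught early.
# The default 80% threshold is an assumption; tune it for your landscape.
check_backup_space() {
  dir="$1"; limit_pct="${2:-80}"
  # take the Use% column from df for the file system holding $dir
  used=$(df -P "$dir" | awk 'NR==2 { sub(/%/, "", $5); print $5 }')
  if [ "$used" -ge "$limit_pct" ]; then
    echo "WARNING: $dir is ${used}% full (limit ${limit_pct}%)"
    return 1
  fi
  echo "OK: $dir is ${used}% full"
}
```

Scheduling this from cron against the log backup mount, with the warning mailed to the DBA team, gives you time to expand the volume before recovery is at risk.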

SAP Notes:

Scheduling the SAP HANA Backups: 1651055

SAP HANA database backup and recovery: 1642148

An example stored procedure to manage backup: DAILY_BACKUP_PROCEDURE


  1. Hello Jonathan,
    I have been following your blog; you have posted some very nice articles.
    I wanted to know how SAP HANA and BW work together in terms of functionality.
    Of course the performance is good, but are there any changes to the BW architecture or schemas?
    How has BI functionality changed with HANA?

  2. Really enjoyed and learned a great deal from your HANA Security presentation last week in Orlando!

  3. Dear Jonathan,

    First, thanks for the great blog.

    I am facing a problem while trying to use the DAILY_BACKUP_PROCEDURE

    the error is: Could not execute 'CALL SYSTEM.BACKUP_DAILY(0)' in 6:04.500 minutes.
    SAP DBTech JDBC: [256]: sql processing error: "SYSTEM"."BACKUP_DAILY": line 21 col 5 (at pos 918): [105] (range 1) NullConversion exception

    Do you have any answer to that?
