An alternative backup strategy is to directly copy the files that PostgreSQL uses to store the data in the database; Section 18.2 explains where these files are located. You can use whatever method you prefer for doing file system backups; for example:
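A minimal sketch, assuming the data directory is at /usr/local/pgsql/data (substitute the data directory of your own installation):

    tar -cf backup.tar /usr/local/pgsql/data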
There are two restrictions, however, which make this method impractical, or at least inferior to the pg_dump method:
The database server must be shut down in order to get a usable backup. Half-way measures, such as disallowing all connections, will not work (partly because tar and similar tools do not take an atomic snapshot of the state of the file system, but also because the server keeps internal buffers of data that has not yet been written out). Information about stopping the server can be found in Section 18.5. Needless to say, you also need to shut down the server before restoring the data.
If you have dug into the details of the file system layout of the database, you might be tempted to back up or restore only certain individual tables or databases from their respective files or directories. This will not work, because the information contained in those files is not usable without the commit log files, pg_xact/*, which contain the commit status of all transactions. A table file is only usable together with this information. Of course it is also impossible to restore only a table and its associated pg_xact data, because that would render all other tables in the database cluster useless. So file system backups only work for complete backup and restoration of an entire database cluster.
An alternative file-system backup approach is to make a "consistent snapshot" of the data directory, if the file system supports that functionality (and you are willing to trust that it is implemented correctly). The typical procedure is to make a "frozen snapshot" of the volume containing the database, then copy the whole data directory (not just parts, see above) from the snapshot to a backup device, then release the frozen snapshot. This will work even while the database server is running. However, a backup created this way saves the database files in a state as if the database server had not been shut down properly; therefore, when you start the server on the backed-up data, it will think the previous server instance crashed and will replay the WAL log. This is not a problem; just be aware of it (and be sure to include the WAL files in your backup). You can perform a CHECKPOINT before taking the snapshot to reduce recovery time.
If your database is spread across multiple file systems, there might not be any way to obtain exactly-simultaneous frozen snapshots of all the volumes. For example, if your data files and WAL log are on different disks, or if tablespaces are on different file systems, it might not be possible to use snapshot backup because the snapshots must be simultaneous. Read your file system documentation very carefully before trusting the consistent-snapshot technique in such situations.
If simultaneous snapshots are not possible, one option is to shut down the database server long enough to establish all the frozen snapshots. Another option is to perform a continuous archiving base backup (Section 25.3.2), because such backups are immune to file system changes during the backup. This requires enabling continuous archiving only during the backup process; restore is done using continuous archive recovery (Section 25.3.4).
Another option is to use rsync to perform a file system backup. This is done by first running rsync while the database server is running, then shutting down the database server long enough to do an rsync --checksum. (--checksum is necessary because rsync only has file modification-time granularity of one second.) The second rsync will be quicker than the first, because it has relatively little data to transfer, and the end result will be consistent because the server was down. This method allows a file system backup to be performed with minimal downtime.
Note that a file system backup will typically be larger than an SQL dump. (pg_dump does not need to dump the contents of indexes, for example, just the commands to recreate them.) However, taking a file system backup might be faster.
The idea behind this dump method is to generate a file with SQL commands that, when fed back to the server, will recreate the database in the same state as it was at the time of the dump. PostgreSQL provides the utility program pg_dump for this purpose. The basic usage of this command is:
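    pg_dump dbname > dumpfile

(Here dbname and dumpfile are placeholders for the database to be dumped and the output file to write.)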
As you see, pg_dump writes its result to the standard output. We will see below how this can be useful. While the above command creates a text file, pg_dump can create files in other formats that allow for parallelism and more fine-grained control of object restoration.
pg_dump is a regular PostgreSQL client application (albeit a particularly clever one). This means that you can perform this backup procedure from any remote host that has access to the database. But remember that pg_dump does not operate with special permissions. In particular, it must have read access to all tables that you want to back up, so in order to back up the entire database you almost always have to run it as a database superuser. (If you do not have sufficient privileges to back up the entire database, you can still back up portions of the database to which you do have access using options such as -n schema or -t table.)
To specify which database server pg_dump should contact, use the command line options -h host and -p port. The default host is the local host or whatever your PGHOST environment variable specifies. Similarly, the default port is indicated by the PGPORT environment variable or, failing that, by the compiled-in default. (Conveniently, the server will normally have the same compiled-in default.)
Like any other PostgreSQL client application, pg_dump will by default connect with the database user name that is equal to the current operating system user name. To override this, either specify the -U option or set the environment variable PGUSER. Remember that pg_dump connections are subject to the normal client authentication mechanisms (which are described in Chapter 20).
An important advantage of pg_dump over the other backup methods described later is that pg_dump's output can generally be re-loaded into newer versions of PostgreSQL, whereas file-level backups and continuous archiving are both extremely server-version-specific. pg_dump is also the only method that will work when transferring a database to a different machine architecture, such as going from a 32-bit to a 64-bit server.
Dumps created by pg_dump are internally consistent, meaning that the dump represents a snapshot of the database at the time pg_dump began running. pg_dump does not block other operations on the database while it is working. (Exceptions are those operations that need to operate with an exclusive lock, such as most forms of ALTER TABLE.)
Text files created by pg_dump are intended to be read in by the psql program. The general command form to restore a dump is
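    psql dbname < infile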
where infile is the file output by the pg_dump command. The database dbname will not be created by this command, so you must create it yourself from template0 before executing psql (e.g., with createdb -T template0 dbname). psql supports options similar to pg_dump for specifying the database server to connect to and the user name to use. See the psql reference page for more information. Non-text file dumps are restored using the pg_restore utility.
Before restoring an SQL dump, all the users who own objects or were granted permissions on objects in the dumped database must already exist. If they do not, the restore will fail to recreate the objects with the original ownership and/or permissions. (Sometimes this is what you want, but usually it is not.)
By default, the psql script will continue to execute after an SQL error is encountered. You might wish to run psql with the ON_ERROR_STOP variable set to alter that behavior and have psql exit with an exit status of 3 if an SQL error occurs:
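    psql --set ON_ERROR_STOP=on dbname < infile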
Either way, you will only have a partially restored database. Alternatively, you can specify that the whole dump should be restored as a single transaction, so the restore is either fully completed or fully rolled back. This mode can be specified by passing the -1 or --single-transaction command-line options to psql. When using this mode, be aware that even a minor error can roll back a restore that has already run for many hours. However, that might still be preferable to manually cleaning up a complex database after a partially restored dump.
The ability of pg_dump and psql to write to or read from pipes makes it possible to dump a database directly from one server to another, for example:
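    pg_dump -h host1 dbname | psql -h host2 dbname

(host1 and host2 are placeholders for the source and destination servers; the target database must already exist on host2.)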
The dumps produced by pg_dump are relative to template0. This means that any languages, procedures, etc. added via template1 will also be dumped by pg_dump. As a result, when restoring, if you are using a customized template1, you must create the empty database from template0, as in the example above.
After restoring a backup, it is wise to run ANALYZE on each database so the query optimizer has useful statistics; see Section 24.1.3 and Section 24.1.6 for more information. For more advice on how to load large amounts of data into PostgreSQL efficiently, refer to Section 14.4.
pg_dump dumps only a single database at a time, and it does not dump information about roles or tablespaces (because those are cluster-wide rather than per-database). To support convenient dumping of the entire contents of a database cluster, the pg_dumpall program is provided. pg_dumpall backs up each database in a given cluster, and also preserves cluster-wide data such as role and tablespace definitions. The basic usage of this command is:
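    pg_dumpall > dumpfile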
The resulting dump can be restored with psql:
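    psql -f dumpfile postgres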
(Actually, you can specify any existing database name to start from, but if you are loading into an empty cluster then postgres should usually be used.) It is always necessary to have database superuser access when restoring a pg_dumpall dump, as that is required to restore the role and tablespace information. If you use tablespaces, make sure that the tablespace paths in the dump are appropriate for the new installation.
pg_dumpall works by emitting commands to re-create roles, tablespaces, and empty databases, then invoking pg_dump for each database. This means that while each database will be internally consistent, the snapshots of different databases are not synchronized.
Cluster-wide data can be dumped alone using the pg_dumpall --globals-only option. This is necessary to fully back up the cluster if running the pg_dump command on individual databases.
Some operating systems have maximum file size limits that cause problems when creating large pg_dump output files. Fortunately, pg_dump can write to the standard output, so you can use standard Unix tools to work around this potential problem. There are several possible methods:
Use compressed dumps. You can use your favorite compression program, for example gzip:
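    pg_dump dbname | gzip > filename.gz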
Reload with:
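    gunzip -c filename.gz | psql dbname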
or:
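    cat filename.gz | gunzip | psql dbname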
Use split. The split command allows you to split the output into smaller files that are acceptable in size to the underlying file system. For example, to make chunks of 1 megabyte:
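    pg_dump dbname | split -b 1m - filename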
Reload with:
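    cat filename* | psql dbname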
Use pg_dump's custom dump format. If PostgreSQL was built on a system with the zlib compression library installed, the custom dump format will compress data as it writes it to the output file. This will produce dump file sizes similar to using gzip, but it has the added advantage that tables can be restored selectively. The following command dumps a database using the custom dump format:
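    pg_dump -Fc dbname > filename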
A custom-format dump is not a script for psql, but instead must be restored with pg_restore, for example:
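    pg_restore -d dbname filename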
See the pg_dump and pg_restore reference pages for details.
For very large databases, you might need to combine split with one of the other two approaches.
Use pg_dump's parallel dump feature. To speed up the dump of a large database, you can use pg_dump's parallel mode. This will dump multiple tables at the same time. You can control the degree of parallelism with the -j parameter. Parallel dumps are only supported for the "directory" archive format.
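A sketch of a parallel dump into a directory-format archive (num and out.dir are placeholders for the job count and output directory):

    pg_dump -j num -F d -f out.dir dbname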
You can use pg_restore -j to restore a dump in parallel. This will work for any archive of either the "custom" or the "directory" archive mode, whether or not it has been created with pg_dump -j.
As with everything that contains valuable data, PostgreSQL databases should be backed up regularly. While the procedure is essentially simple, it is important to have a clear understanding of the underlying techniques and assumptions.
There are three fundamentally different approaches to backing up PostgreSQL data:
SQL dump
File system level backup
Continuous archiving
Each has its own strengths and weaknesses; each is discussed in turn in the sections below.
At all times, a running PostgreSQL system maintains a write ahead log (WAL) in the pg_wal/ subdirectory of the cluster's data directory. The log records every change made to the database's data files. This log exists primarily for crash-safety purposes: if the system crashes, the database can be restored to consistency by "replaying" the log entries made since the last checkpoint. However, the existence of the log makes it possible to use a third strategy for backing up databases: we can combine a file-system-level backup with backup of the WAL files. If recovery is needed, we restore the file system backup and then replay from the backed-up WAL files to bring the system up to date. This approach is more complex to administer than either of the previous approaches, but it has some significant benefits:
We do not need a perfectly consistent file system backup as the starting point. Any internal inconsistency in the backup will be corrected by log replay (this is not significantly different from what happens during crash recovery). So we do not need a file system snapshot capability, just tar or a similar archiving tool.
Since we can combine an indefinitely long sequence of WAL files for replay, continuous backup can be achieved simply by continuing to archive the WAL files. This is particularly valuable for large databases, where it might not be convenient to take a full backup frequently.
It is not necessary to replay the WAL entries all the way to the end. We could stop the replay at any point and have a consistent snapshot of the database as it was at that time. Thus, this technique supports point-in-time recovery: the database can be restored to its state at any time since the base backup was taken.
If we continuously feed the series of WAL files to another server that has been loaded with the same base backup, we have a warm standby system: at any point we can bring up the second server, and it will have a nearly-current copy of the database.
pg_dump and pg_dumpall do not produce file-system-level backups and cannot be used as part of a continuous-archiving solution. Such dumps are logical and do not contain enough information to be used by WAL replay.
As with the plain file-system-backup technique, this method can only support restoration of an entire database cluster, not a subset. Also, it requires a lot of archival storage: the base backup might be bulky, and a busy system will generate many megabytes of WAL traffic that must be archived. Still, it is the preferred backup technique in many situations where high reliability is needed.
To recover successfully using continuous archiving (also called "online backup" by many database vendors), you need a continuous sequence of archived WAL files that extends back at least as far as the start time of your backup. So before starting the first base backup, you should set up and test your procedure for archiving WAL files. Accordingly, we first discuss the mechanics of archiving WAL files.
In an abstract sense, a running PostgreSQL system produces an indefinitely long sequence of WAL records. The system physically divides this sequence into WAL segment files, which are normally 16MB apiece (although the segment size can be altered during initdb). The segment files are given numeric names that reflect their position in the abstract WAL sequence. When not using WAL archiving, the system normally creates just a few segment files and then "recycles" them by renaming no-longer-needed segment files to higher segment numbers. The system assumes that segment files whose contents precede the last checkpoint are no longer of interest, and they can then be recycled.
When archiving WAL data, we need to capture the contents of each segment file once it is filled, and save that data somewhere before the segment file is recycled for reuse. Depending on the application and the available hardware, there could be many different ways of "saving the data somewhere": we could copy the segment files to an NFS-mounted directory on another machine, write them onto a tape drive (making sure you have a way of identifying the original name of each file), batch them together and burn them onto CDs, or something else entirely. To provide the database administrator with flexibility, PostgreSQL tries not to make any assumptions about how the archiving will be done. Instead, PostgreSQL lets the administrator specify a shell command to be executed to copy a completed segment file to wherever it needs to go. The command could be as simple as cp, or it could invoke a complex shell script; it is all up to you.
To enable WAL archiving, set the wal_level configuration parameter to replica or higher, set archive_mode to on, and specify the shell command to use in the archive_command configuration parameter. In practice these settings will always be placed in the postgresql.conf file. In archive_command, %p is replaced by the path name of the file to archive, while %f is replaced by only the file name. (The path name is relative to the current working directory, i.e., the cluster's data directory.) Use %% if you need to embed an actual % character in the command. The simplest useful command is something like:
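    archive_command = 'test ! -f /mnt/server/archivedir/%f && cp %p /mnt/server/archivedir/%f'  # Unix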
which will copy archivable WAL segments to the directory /mnt/server/archivedir. (This is an example, not a recommendation, and might not work on all platforms.) After the %p and %f parameters have been replaced, the actual command executed might look like this:
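    test ! -f /mnt/server/archivedir/00000001000000A900000065 && cp pg_wal/00000001000000A900000065 /mnt/server/archivedir/00000001000000A900000065

(The segment file name shown here is only illustrative.)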
A similar command will be generated for each new file to be archived.
The archive command will be executed under the ownership of the same user that the PostgreSQL server is running as. Since the series of WAL files being archived contains effectively everything in your database, you will want to be sure that the archived data is protected from prying eyes; for example, archive into a directory that does not have group or world read access.
It is important that the archive command return zero exit status if and only if it succeeds. Upon getting a zero result, PostgreSQL will assume that the file has been successfully archived, and will remove or recycle it. However, a nonzero status tells PostgreSQL that the file was not archived; it will try again periodically until it succeeds.
The archive command should generally be designed to refuse to overwrite any pre-existing archive file. This is an important safety feature to preserve the integrity of your archive in case of administrator error (such as sending the output of two different servers to the same archive directory).
It is advisable to test your proposed archive command to ensure that it indeed does not overwrite an existing file, and that it returns nonzero status in this case. The example command above for Unix ensures this by including a separate test step. On some Unix platforms, cp has switches such as -i that can be used to do the same thing less verbosely, but you should not rely on these without verifying that the right exit status is returned. (In particular, GNU cp will return status zero when -i is used and the target file already exists, which is not the desired behavior.)
While designing your archiving setup, consider what will happen if the archive command fails repeatedly because some aspect requires operator intervention or the archive runs out of space. For example, this could occur if you write to tape without an autochanger; when the tape fills, nothing further can be archived until the tape is swapped. You should ensure that any error condition or request to a human operator is reported appropriately so that the situation can be resolved reasonably quickly. The pg_wal/ directory will continue to fill with WAL segment files until the situation is resolved. (If the file system containing pg_wal/ fills up, PostgreSQL will do a PANIC shutdown. No committed transactions will be lost, but the database will remain offline until you free some space.)
The speed of the archiving command is unimportant as long as it can keep up with the average rate at which your server generates WAL data. Normal operation continues even if the archiving process falls a little behind. If archiving falls significantly behind, this will increase the amount of data that would be lost in the event of a disaster. It will also mean that the pg_wal/ directory will contain large numbers of not-yet-archived segment files, which could eventually exceed available disk space. You are advised to monitor the archiving process to ensure that it is working as you intend.
In writing your archive command, you should assume that the file names to be archived can be up to 64 characters long and can contain any combination of ASCII letters, digits, and dots. It is not necessary to preserve the original relative path (%p) but it is necessary to preserve the file name (%f).
Note that although WAL archiving will allow you to restore any modifications made to the data in your PostgreSQL database, it will not restore changes made to configuration files (that is, postgresql.conf, pg_hba.conf and pg_ident.conf), since those are edited manually rather than through SQL operations. You might wish to keep the configuration files in a location that will be backed up by your regular file system backup procedures. See Section 19.2 for how to relocate the configuration files.
The archive command is only invoked on completed WAL segments. Hence, if your server generates only little WAL traffic (or has slack periods where it does so), there could be a long delay between the completion of a transaction and its safe recording in archive storage. To put a limit on how old unarchived data can be, you can set archive_timeout to force the server to switch to a new WAL segment file at least that often. Note that archived files that are archived early due to a forced switch are still the same length as completely full files. It is therefore unwise to set a very short archive_timeout; it will bloat your archive storage. archive_timeout settings of a minute or so are usually reasonable.
Also, you can force a segment switch manually with pg_switch_wal if you want to ensure that a just-finished transaction is archived as soon as possible. Other utility functions related to WAL management are listed in Table 9.84.
When wal_level is minimal some SQL commands are optimized to avoid WAL logging, as described in Section 14.4.7. If archiving or streaming replication were turned on during execution of one of these statements, WAL would not contain enough information for archive recovery. (Crash recovery is unaffected.) For this reason, wal_level can only be changed at server start. However, archive_command can be changed with a configuration file reload. If you wish to temporarily stop archiving, one way to do it is to set archive_command to the empty string (''). This will cause WAL files to accumulate in pg_wal/ until a working archive_command is re-established.
The easiest way to perform a base backup is to use the pg_basebackup tool. It can create a base backup either as regular files or as a tar archive. If more flexibility than pg_basebackup can provide is required, you can also make a base backup using the low level API (see Section 25.3.3).
It is not necessary to be concerned about the amount of time it takes to make a base backup. However, if you normally run the server with full_page_writes disabled, you might notice a drop in performance while the backup runs since full_page_writes is effectively forced on during backup mode.
To make use of the backup, you will need to keep all the WAL segment files generated during and after the file system backup. To aid you in doing this, the base backup process creates a backup history file that is immediately stored into the WAL archive area. This file is named after the first WAL segment file that you need for the file system backup. For example, if the starting WAL file is 0000000100001234000055CD the backup history file will be named something like 0000000100001234000055CD.007C9330.backup. (The second part of the file name stands for an exact position within the WAL file, and can ordinarily be ignored.) Once you have safely archived the file system backup and the WAL segment files used during the backup (as specified in the backup history file), all archived WAL segments with names numerically less are no longer needed to recover the file system backup and can be deleted. However, you should consider keeping several backup sets to be absolutely certain that you can recover your data.
The backup history file is just a small text file. It contains the label string you gave to pg_basebackup, as well as the starting and ending times and WAL segments of the backup. If you used the label to identify the associated dump file, then the archived history file is enough to tell you which dump file to restore.
Since you have to keep around all the archived WAL files back to your last base backup, the interval between base backups should usually be chosen based on how much storage you want to expend on archived WAL files. You should also consider how long you are prepared to spend recovering, if recovery should be necessary; the system will have to replay all those WAL segments, and that could take a while if it has been a long time since the last base backup.
The procedure for making a base backup using the low level APIs contains a few more steps than the pg_basebackup method, but is relatively simple. It is very important that these steps are executed in sequence, and that the success of a step is verified before proceeding to the next step.
Low level base backups can be made in a non-exclusive or an exclusive way. The non-exclusive method is recommended and the exclusive one is deprecated and will eventually be removed.
A non-exclusive low level backup is one that allows other concurrent backups to be running (both those started using the same backup API and those started using pg_basebackup).
Ensure that WAL archiving is enabled and working.
Connect to the server (it does not matter which database) as a user with rights to run pg_start_backup (superuser, or a user who has been granted EXECUTE on the function) and issue the command:
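    SELECT pg_start_backup('label', false, false);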
where label is any string you want to use to uniquely identify this backup operation. The connection calling pg_start_backup must be maintained until the end of the backup, or the backup will be automatically aborted.
By default, pg_start_backup can take a long time to finish. This is because it performs a checkpoint, and the I/O required for the checkpoint will be spread out over a significant period of time, by default half your inter-checkpoint interval (see the configuration parameter checkpoint_completion_target). This is usually what you want, because it minimizes the impact on query processing. If you want to start the backup as soon as possible, change the second parameter to true, which will issue an immediate checkpoint using as much I/O as available.
The third parameter being false tells pg_start_backup to initiate a non-exclusive base backup.
Perform the backup, using any convenient file-system-backup tool such as tar or cpio (not pg_dump or pg_dumpall). It is neither necessary nor desirable to stop normal operation of the database while you do this. See Section 25.3.3.3 for things to consider during this backup.
In the same connection as before, issue the command:
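    SELECT * FROM pg_stop_backup(false, true);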
This terminates backup mode. On a primary, it also performs an automatic switch to the next WAL segment. On a standby, it is not possible to automatically switch WAL segments, so you may wish to run pg_switch_wal on the primary to perform a manual switch. The reason for the switch is to arrange for the last WAL segment file written during the backup interval to be ready to archive.
pg_stop_backup will return one row with three values. The second of these fields should be written to a file named backup_label in the root directory of the backup. The third field should be written to a file named tablespace_map unless the field is empty. These files are vital to the backup working, and must be written without modification.
Once the WAL segment files active during the backup are archived, you are done. The file identified by pg_stop_backup's first return value is the last segment that is required to form a complete set of backup files. On a primary, if archive_mode is enabled and the wait_for_archive parameter is true, pg_stop_backup does not return until the last segment has been archived. On a standby, archive_mode must be always in order for pg_stop_backup to wait. Archiving of these files happens automatically since you have already configured archive_command. In most cases this happens quickly, but you are advised to monitor your archive system to ensure there are no delays. If the archive process has fallen behind because of failures of the archive command, it will keep retrying until the archive succeeds and the backup is complete. If you wish to place a time limit on the execution of pg_stop_backup, set an appropriate statement_timeout value, but make note that if pg_stop_backup terminates because of this your backup may not be valid.
If the backup process monitors and ensures that all WAL segment files required for the backup are successfully archived then the wait_for_archive parameter (which defaults to true) can be set to false to have pg_stop_backup return as soon as the stop backup record is written to the WAL. By default, pg_stop_backup will wait until all WAL has been archived, which can take some time. This option must be used with caution: if WAL archiving is not monitored correctly then the backup might not include all of the WAL files and will therefore be incomplete and not able to be restored.
The exclusive backup method is deprecated and should be avoided. Prior to PostgreSQL 9.6, this was the only low-level method available, but it is now recommended that all users upgrade their scripts to use non-exclusive backups.
The process for an exclusive backup is mostly the same as for a non-exclusive one, but it differs in a few key steps. This type of backup can only be taken on a primary and does not allow concurrent backups. Moreover, because it creates a backup label file, as described below, it can block automatic restart of the master server after a crash. On the other hand, the erroneous removal of this file from a backup or standby is a common mistake, which can result in serious data corruption. If it is necessary to use this method, the following steps may be used.
Ensure that WAL archiving is enabled and working.
Connect to the server (it does not matter which database) as a user with rights to run pg_start_backup (superuser, or a user who has been granted EXECUTE on the function) and issue the command:
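    SELECT pg_start_backup('label');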
where label is any string you want to use to uniquely identify this backup operation. pg_start_backup creates a backup label file, called backup_label, in the cluster directory with information about your backup, including the start time and label string. The function also creates a tablespace map file, called tablespace_map, in the cluster directory with information about tablespace symbolic links in pg_tblspc/ if one or more such link is present. Both files are critical to the integrity of the backup, should you need to restore from it.
By default, pg_start_backup can take a long time to finish. This is because it performs a checkpoint, and the I/O required for the checkpoint will be spread out over a significant period of time, by default half your inter-checkpoint interval (see the configuration parameter checkpoint_completion_target). This is usually what you want, because it minimizes the impact on query processing. If you want to start the backup as soon as possible, use:
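    SELECT pg_start_backup('label', true);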
This forces the checkpoint to be done as quickly as possible.
Perform the backup, using any convenient file-system-backup tool such as tar or cpio (not pg_dump or pg_dumpall). It is neither necessary nor desirable to stop normal operation of the database while you do this. See Section 25.3.3.3 for things to consider during this backup.
As noted above, if the server crashes during the backup it may not be possible to restart until the backup_label file has been manually deleted from the PGDATA directory. Note that it is very important to never remove the backup_label file when restoring a backup, because this will result in corruption. Confusion about when it is appropriate to remove this file is a common cause of data corruption when using this method; be very certain that you remove the file only on an existing master and never when building a standby or restoring a backup, even if you are building a standby that will subsequently be promoted to a new master.
Again connect to the database as a user with rights to run pg_stop_backup (superuser, or a user who has been granted EXECUTE on the function), and issue the command:
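    SELECT pg_stop_backup();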
This function terminates backup mode and performs an automatic switch to the next WAL segment. The reason for the switch is to arrange for the last WAL segment written during the backup interval to be ready to archive.
Once the WAL segment files active during the backup are archived, you are done. The file identified by pg_stop_backup's result is the last segment that is required to form a complete set of backup files. If archive_mode is enabled, pg_stop_backup does not return until the last segment has been archived. Archiving of these files happens automatically since you have already configured archive_command. In most cases this happens quickly, but you are advised to monitor your archive system to ensure there are no delays. If the archive process has fallen behind because of failures of the archive command, it will keep retrying until the archive succeeds and the backup is complete.
When using exclusive backup mode, it is absolutely imperative to ensure that pg_stop_backup completes successfully at the end of the backup. Even if the backup itself fails, for example due to lack of disk space, failure to call pg_stop_backup will leave the server in backup mode indefinitely, causing future backups to fail and increasing the risk of a restart failure during the time that backup_label exists.
Some file system backup tools emit warnings or errors if the files they are trying to copy change while the copy proceeds. When taking a base backup of an active database, this situation is normal and not an error. However, you need to ensure that you can distinguish complaints of this sort from real errors. For example, some versions of rsync return a separate exit code for “vanished source files”, and you can write a driver script to accept this exit code as a non-error case. Also, some versions of GNU tar return an error code indistinguishable from a fatal error if a file was truncated while tar was copying it. Fortunately, GNU tar versions 1.16 and later exit with 1 if a file was changed during the backup, and 2 for other errors. With GNU tar version 1.23 and later, you can use the warning options --warning=no-file-changed --warning=no-file-removed to hide the related warning messages.
Be certain that your backup includes all of the files under the database cluster directory (e.g., /usr/local/pgsql/data). If you are using tablespaces that do not reside underneath this directory, be careful to include them as well (and be sure that your backup archives symbolic links as links, otherwise the restore will corrupt your tablespaces).
You should, however, omit from the backup the files within the cluster's pg_wal/ subdirectory. This slight adjustment is worthwhile because it reduces the risk of mistakes when restoring. This is easy to arrange if pg_wal/ is a symbolic link pointing to someplace outside the cluster directory, which is a common setup anyway for performance reasons. You might also want to exclude postmaster.pid and postmaster.opts, which record information about the running postmaster, not about the postmaster which will eventually use this backup. (These files can confuse pg_ctl.)
It is often a good idea to also omit from the backup the files within the cluster's pg_replslot/ directory, so that replication slots that exist on the master do not become part of the backup. Otherwise, the subsequent use of the backup to create a standby may result in indefinite retention of WAL files on the standby, and possibly bloat on the master if hot standby feedback is enabled, because the clients that are using those replication slots will still be connecting to and updating the slots on the master, not the standby. Even if the backup is only intended for use in creating a new master, copying the replication slots isn't expected to be particularly useful, since the contents of those slots will likely be badly out of date by the time the new master comes on line.
The contents of the directories pg_dynshmem/, pg_notify/, pg_serial/, pg_snapshots/, pg_stat_tmp/, and pg_subtrans/ (but not the directories themselves) can be omitted from the backup as they will be initialized on postmaster startup. If stats_temp_directory is set and is under the data directory then the contents of that directory can also be omitted.
Any file or directory beginning with pgsql_tmp can be omitted from the backup. These files are removed on postmaster start and the directories will be recreated as needed.
pg_internal.init files can be omitted from the backup whenever a file of that name is found. These files contain relation cache data that is always rebuilt when recovering.
The backup label file includes the label string you gave to pg_start_backup, as well as the time at which pg_start_backup was run, and the name of the starting WAL file. In case of confusion it is therefore possible to look inside a backup file and determine exactly which backup session the dump file came from. The tablespace map file includes the symbolic link names as they exist in the directory pg_tblspc/ and the full path of each symbolic link. These files are not merely for your information; their presence and contents are critical to the proper operation of the system's recovery process.
It is also possible to make a backup while the server is stopped. In this case, you obviously cannot use pg_start_backup or pg_stop_backup, and you will therefore be left to your own devices to keep track of which backup is which and how far back the associated WAL files go. It is generally better to follow the continuous archiving procedure above.
Okay, the worst has happened and you need to recover from your backup. Here is the procedure:
Stop the server, if it's running.
If you have the space to do so, copy the whole cluster data directory and any tablespaces to a temporary location in case you need them later. Note that this precaution will require that you have enough free space on your system to hold two copies of your existing database. If you do not have enough space, you should at least save the contents of the cluster's pg_wal subdirectory, as it might contain logs which were not archived before the system went down.
Remove all existing files and subdirectories under the cluster data directory and under the root directories of any tablespaces you are using.
Restore the database files from your file system backup. Be sure that they are restored with the right ownership (the database system user, not root!) and with the right permissions. If you are using tablespaces, you should verify that the symbolic links in pg_tblspc/ were correctly restored.
Remove any files present in pg_wal/; these came from the file system backup and are therefore probably obsolete rather than current. If you didn't archive pg_wal/ at all, then recreate it with proper permissions, being careful to ensure that you re-establish it as a symbolic link if you had it set up that way before.
If you have unarchived WAL segment files that you saved in step 2, copy them into pg_wal/. (It is best to copy them, not move them, so you still have the unmodified files if a problem occurs and you have to start over.)
Set recovery configuration settings in postgresql.conf (see Section 19.5.4) and create a file recovery.signal in the cluster data directory. You might also want to temporarily modify pg_hba.conf to prevent ordinary users from connecting until you are sure the recovery was successful.
Start the server. The server will go into recovery mode and proceed to read through the archived WAL files it needs. Should the recovery be terminated because of an external error, the server can simply be restarted and it will continue recovery. Upon completion of the recovery process, the server will remove recovery.signal (to prevent accidentally re-entering recovery mode later) and then commence normal database operations.
Inspect the contents of the database to ensure you have recovered to the desired state. If not, return to step 1. If all is well, allow your users to connect by restoring pg_hba.conf to normal.
The key part of all this is to set up a recovery configuration that describes how you want to recover and how far the recovery should run. The one thing that you absolutely must specify is the restore_command, which tells PostgreSQL how to retrieve archived WAL file segments. Like the archive_command, this is a shell command string. It can contain %f, which is replaced by the name of the desired log file, and %p, which is replaced by the path name to copy the log file to. (The path name is relative to the current working directory, i.e., the cluster's data directory.) Write %% if you need to embed an actual % character in the command. The simplest useful command is something like:
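    restore_command = 'cp /mnt/server/archivedir/%f %p'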
which will copy previously archived WAL segments from the directory /mnt/server/archivedir. Of course, you can use something much more complicated, perhaps even a shell script that requests the operator to mount an appropriate tape.
It is important that the command return nonzero exit status on failure. The command will be called requesting files that are not present in the archive; it must return nonzero when so asked. This is not an error condition. An exception is that if the command was terminated by a signal (other than SIGTERM, which is used as part of a database server shutdown) or an error by the shell (such as command not found), then recovery will abort and the server will not start up.
Not all of the requested files will be WAL segment files; you should also expect requests for files with a suffix of .history. Also be aware that the base name of the %p path will be different from %f; do not expect them to be interchangeable.
WAL segments that cannot be found in the archive will be sought in pg_wal/; this allows use of recent un-archived segments. However, segments that are available from the archive will be used in preference to files in pg_wal/.
Normally, recovery will proceed through all available WAL segments, thereby restoring the database to the current point in time (or as close as possible given the available WAL segments). Therefore, a normal recovery will end with a “file not found” message, the exact text of the error message depending upon your choice of restore_command. You may also see an error message at the start of recovery for a file named something like 00000001.history. This is also normal and does not indicate a problem in simple recovery situations; see Section 25.3.5 for discussion.
If you want to recover to some previous point in time (say, right before the junior DBA dropped your main transaction table), just specify the required stopping point. You can specify the stop point, known as the “recovery target”, either by date/time, named restore point or by completion of a specific transaction ID. As of this writing only the date/time and named restore point options are very usable, since there are no tools to help you identify with any accuracy which transaction ID to use.
The stop point must be after the ending time of the base backup, i.e., the end time of pg_stop_backup. You cannot use a base backup to recover to a time when that backup was in progress. (To recover to such a time, you must go back to your previous base backup and roll forward from there.)
If recovery finds corrupted WAL data, recovery will halt at that point and the server will not start. In such a case the recovery process could be re-run from the beginning, specifying a “recovery target” before the point of corruption so that recovery can complete normally. If recovery fails for an external reason, such as a system crash or if the WAL archive has become inaccessible, then the recovery can simply be restarted and it will restart almost from where it failed. Recovery restart works much like checkpointing in normal operation: the server periodically forces all its state to disk, and then updates the pg_control file to indicate that the already-processed WAL data need not be scanned again.
The ability to restore the database to a previous point in time creates some complexities that are akin to science-fiction stories about time travel and parallel universes. For example, in the original history of the database, suppose you dropped a critical table at 5:15PM on Tuesday evening, but didn't realize your mistake until Wednesday noon. Unfazed, you get out your backup, restore to the point-in-time 5:14PM Tuesday evening, and are up and running. In this history of the database universe, you never dropped the table. But suppose you later realize this wasn't such a great idea, and would like to return to sometime Wednesday morning in the original history. You won't be able to if, while your database was up-and-running, it overwrote some of the WAL segment files that led up to the time you now wish you could get back to. Thus, to avoid this, you need to distinguish the series of WAL records generated after you've done a point-in-time recovery from those that were generated in the original database history.
To deal with this problem, PostgreSQL has a notion of timelines. Whenever an archive recovery completes, a new timeline is created to identify the series of WAL records generated after that recovery. The timeline ID number is part of WAL segment file names so a new timeline does not overwrite the WAL data generated by previous timelines. It is in fact possible to archive many different timelines. While that might seem like a useless feature, it's often a lifesaver. Consider the situation where you aren't quite sure what point-in-time to recover to, and so have to do several point-in-time recoveries by trial and error until you find the best place to branch off from the old history. Without timelines this process would soon generate an unmanageable mess. With timelines, you can recover to any prior state, including states in timeline branches that you abandoned earlier.
Every time a new timeline is created, PostgreSQL creates a “timeline history” file that shows which timeline it branched off from and when. These history files are necessary to allow the system to pick the right WAL segment files when recovering from an archive that contains multiple timelines. Therefore, they are archived into the WAL archive area just like WAL segment files. The history files are just small text files, so it's cheap and appropriate to keep them around indefinitely (unlike the segment files which are large). You can, if you like, add comments to a history file to record your own notes about how and why this particular timeline was created. Such comments will be especially valuable when you have a thicket of different timelines as a result of experimentation.
The default behavior of recovery is to recover along the same timeline that was current when the base backup was taken. If you wish to recover into some child timeline (that is, you want to return to some state that was itself generated after a recovery attempt), you need to specify the target timeline ID in recovery_target_timeline. You cannot recover into timelines that branched off earlier than the base backup.
Some tips for configuring continuous archiving are given here.
It is possible to use PostgreSQL's backup facilities to produce standalone hot backups. These are backups that cannot be used for point-in-time recovery, yet are typically much faster to backup and restore than pg_dump dumps. (They are also much larger than pg_dump dumps, so in some cases the speed advantage might be negated.)
As with base backups, the easiest way to produce a standalone hot backup is to use the pg_basebackup tool. If you include the -X parameter when calling it, all the write-ahead log required to use the backup will be included in the backup automatically, and no special action is required to restore the backup.
If more flexibility in copying the backup files is needed, a lower level process can be used for standalone hot backups as well. To prepare for low level standalone hot backups, make sure wal_level is set to replica or higher, archive_mode to on, and set up an archive_command that performs archiving only when a switch file exists. For example:
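    archive_command = 'test ! -f /var/lib/pgsql/backup_in_progress || (test ! -f /var/lib/pgsql/archive/%f && cp %p /var/lib/pgsql/archive/%f)'

(The paths under /var/lib/pgsql/ are examples only; adjust them for your installation.)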
This command will perform archiving when /var/lib/pgsql/backup_in_progress exists, and otherwise silently return zero exit status (allowing PostgreSQL to recycle the unwanted WAL file).
With this preparation, a backup can be taken using a script like the following:
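A sketch of such a script, using the same example paths as above:

    touch /var/lib/pgsql/backup_in_progress
    psql -c "select pg_start_backup('hot_backup');"
    tar -cf /var/lib/pgsql/backup.tar /var/lib/pgsql/data/
    psql -c "select pg_stop_backup();"
    rm /var/lib/pgsql/backup_in_progress
    tar -rf /var/lib/pgsql/backup.tar /var/lib/pgsql/archive/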
The switch file /var/lib/pgsql/backup_in_progress is created first, enabling archiving of completed WAL files to occur. After the backup the switch file is removed. Archived WAL files are then added to the backup so that both base backup and all required WAL files are part of the same tar file. Please remember to add error handling to your backup scripts.
If archive storage size is a concern, you can use gzip to compress the archive files:
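    archive_command = 'gzip < %p > /var/lib/pgsql/archive/%f.gz'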
You will then need to use gunzip during recovery:
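    restore_command = 'gunzip < /mnt/server/archivedir/%f.gz > %p'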
Many people choose to use scripts to define their archive_command, so that their postgresql.conf entry looks very simple:
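    archive_command = 'local_backup_script.sh "%p" "%f"'

(local_backup_script.sh is a placeholder for whatever script you write.)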
Using a separate script file is advisable any time you want to use more than a single command in the archiving process. This allows all complexity to be managed within the script, which can be written in a popular scripting language such as bash or perl.
Examples of requirements that might be solved within a script include:
Copying data to secure off-site data storage
Batching WAL files so that they are transferred every three hours, rather than one at a time
Interfacing with other backup and recovery software
Interfacing with monitoring software to report errors
When using an archive_command script, it's desirable to enable logging_collector. Any messages written to stderr from the script will then appear in the database server log, allowing complex configurations to be diagnosed easily if they fail.
At this writing, there are several limitations of the continuous archiving technique. These will probably be fixed in future releases:
If a CREATE DATABASE command is executed while a base backup is being taken, and then the template database that the CREATE DATABASE copied is modified while the base backup is still in progress, it is possible that recovery will cause those modifications to be propagated into the created database as well. This is of course undesirable. To avoid this risk, it is best not to modify any template databases while taking a base backup.
CREATE TABLESPACE commands are WAL-logged with the literal absolute path, and will therefore be replayed as tablespace creations with the same absolute path. This might be undesirable if the log is being replayed on a different machine. It can be dangerous even if the log is being replayed on the same machine, but into a new data directory: the replay will still overwrite the contents of the original tablespace. To avoid potential gotchas of this sort, the best practice is to take a new base backup after creating or dropping tablespaces.
It should also be noted that the default WAL format is fairly bulky since it includes many disk page snapshots. These page snapshots are designed to support crash recovery, since we might need to fix partially-written disk pages. Depending on your system hardware and software, the risk of partial writes might be small enough to ignore, in which case you can significantly reduce the total volume of archived logs by turning off page snapshots using the full_page_writes parameter. (Read the notes and warnings in Chapter 29 before you do so.) Turning off page snapshots does not prevent use of the logs for PITR operations. An area for future development is to compress archived WAL data by removing unnecessary page copies even when full_page_writes is on. In the meantime, administrators might wish to reduce the number of page snapshots included in WAL by increasing the checkpoint interval parameters as much as feasible.