-Fc) and the “directory” format (
-Fd). They allow for selection and reordering of all archived items, support parallel restoration, and are compressed by default. The “directory” format is the only format that supports parallel dumps.
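A hedged sketch of a parallel dump and restore in the directory format (the database names mydb and newdb, the output directory dumpdir, and the job count are all illustrative):

```shell
$ pg_dump -Fd -j 4 -f dumpdir mydb      # parallel dump into a directory-format archive
$ pg_restore -j 4 -d newdb dumpdir      # parallel restore from that directory
```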
--schema-only is specified. The
-b switch is therefore only useful to add large objects to dumps where a specific schema or table has been requested. Note that blobs are considered data and therefore will be included when
--data-only is used, but not when
-B are given, the behavior is to output large objects when data is being dumped; see the
--if-exists is also specified, restore might generate some harmless error messages if any objects were not present in the destination database.)
--clean is also specified, the script drops and recreates the target database before reconnecting to it.
--create, the output also includes the database's comment if any, and any configuration variable settings that are specific to this database, that is, any
ALTER DATABASE ... SET ... and
ALTER ROLE ... IN DATABASE ... SET ... commands that mention this database. Access privileges for the database itself are also dumped, unless
PGCLIENTENCODING environment variable to the desired dump encoding.)
pg_dump and must not exist before.
format can be one of the following:
tar-format archive suitable for input into pg_restore. The tar format is compatible with the directory format: extracting a tar-format archive produces a valid directory-format archive. However, the tar format does not support compression. Also, when using tar format the relative order of table data items cannot be changed during restore.
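The compatibility between the two formats can be illustrated as follows (the archive names and the database mydb are hypothetical):

```shell
$ pg_dump -Ft mydb > db.tar
$ mkdir db.dir && tar -xf db.tar -C db.dir
$ pg_restore --list db.dir    # the extracted tree is a valid directory-format archive
```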
schema; this selects both the schema itself, and all its contained objects. When this option is not specified, all non-system schemas in the target database will be dumped. Multiple schemas can be selected by writing multiple
-n switches. Also, the
schema parameter is interpreted as a pattern according to the same rules used by psql's
\d commands (see Patterns), so multiple schemas can also be selected by writing wildcard characters in the pattern. When using wildcards, be careful to quote the pattern if needed to prevent the shell from expanding the wildcards; see Examples.
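The shell-expansion hazard can be demonstrated without a database at all; this sketch only shows what the shell does to an unquoted glob before any program (such as pg_dump) receives the argument:

```shell
# Show what the shell does to a wildcard pattern before pg_dump sees it.
demo_dir=$(mktemp -d)
cd "$demo_dir"
touch east_gsm west_gsm               # stray files that happen to match the pattern
unquoted=$(echo east*)                # shell expands the glob: east_gsm
quoted=$(echo 'east*')                # quoting preserves the literal pattern
echo "$unquoted"
echo "$quoted"
```

The quoted form is what pg_dump must receive for pattern matching to happen on the server side rather than against local filenames.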
-n is specified, pg_dump makes no attempt to dump any other database objects that the selected schema(s) might depend upon. Therefore, there is no guarantee that the results of a specific-schema dump can be successfully restored by themselves into a clean database.
-n is specified. You can add blobs back to the dump with the
schema pattern. The pattern is interpreted according to the same rules as for
-N can be given more than once to exclude schemas matching any of several patterns.
-N are given, the behavior is to dump just the schemas that match at least one
-n switch but no
-n, then schemas matching
-N are excluded from what is otherwise a normal dump.
SET SESSION AUTHORIZATION statements to set ownership of created database objects. These statements will fail when the script is run unless it is started by a superuser (or the same user that owns all of the objects in the script). To make a script that can be restored by any user, but will give that user ownership of all the objects, specify
--data-only. It is similar to, but for historical reasons not identical to, specifying
--schema option, which uses the word “schema” in a different meaning.)
--disable-triggers is used. (Usually, it's better to leave this out, and instead start the resulting script as superuser.)
-N switches have no effect when
-t is used, because tables selected by
-t will be dumped regardless of those switches, and non-table objects will not be dumped.
-t switch is not entirely upward compatible with pre-8.2 PostgreSQL versions. Formerly, writing
-t tab would dump all tables named
tab, but now it just dumps whichever one is visible in your default search path. To get the old behavior you can write
-t '*.tab'. Also, you must write something like
-t sch.tab to select a table in a particular schema, rather than the old locution of
-n sch -t tab.
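To summarize the behaviors discussed above (the table name tab, schema sch, and database mydb are illustrative):

```shell
$ pg_dump -t tab mydb        # only the "tab" visible in the default search path
$ pg_dump -t '*.tab' mydb    # every table named tab, in any schema (the old behavior)
$ pg_dump -t sch.tab mydb    # the table tab in schema sch
```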
table pattern. The pattern is interpreted according to the same rules as for
-T can be given more than once to exclude tables matching any of several patterns.
-T are given, the behavior is to dump just the tables that match at least one
-t switch but no
-t, then tables matching
-T are excluded from what is otherwise a normal dump.
INSERT commands with explicit column names (
column, ...) VALUES ...). This will make restoration very slow; it is mainly useful for making dumps that can be loaded into non-PostgreSQL databases. However, since this option generates a separate command for each row, an error in reloading a row causes only that row to be lost rather than the entire table contents.
--disable-triggers must be done as superuser. So, you should also specify a superuser name with
-S, or preferably be careful to start the resulting script as a superuser.
INSERT format, as the
COPY FROMduring restore does not support row security.
table pattern. The pattern is interpreted according to the same rules as for
--exclude-table-data can be given more than once to exclude tables matching any of several patterns. This option is useful when you need the definition of a particular table even though you do not need the data in it.
IF EXISTS clause) when cleaning database objects. This option is not valid unless
--clean is also specified.
INSERT commands (rather than
COPY). This will make restoration very slow; it is mainly useful for making dumps that can be loaded into non-PostgreSQL databases. However, since this option generates a separate command for each row, an error in reloading a row causes only that row to be lost rather than the entire table contents. Note that the restore might fail altogether if you have rearranged column order. The
--column-inserts option is safe against column order changes, though even slower.
INSERT statements target the root of the partitioning hierarchy that contains it, rather than the partition itself. This causes the appropriate partition to be re-determined for each row when the data is loaded. This may be useful when reloading data on a server where rows do not always fall into the same partitions as they did on the original server. That could happen, for example, if the partitioning column is of type text and the two systems have different definitions of the collation used to sort the partitioning column.
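As an illustration (the table and column names are hypothetical), the emitted rows differ like this:

```sql
INSERT INTO tab VALUES (1, 'one');               -- with plain --inserts
INSERT INTO tab (id, val) VALUES (1, 'one');     -- with --column-inserts
```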
timeout. The timeout may be specified in any of the formats accepted by
SET statement_timeout. (Allowed formats vary depending on the server version you are dumping from, but an integer number of milliseconds is accepted by all versions.)
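For example (the timeout values and the database name mydb are arbitrary):

```shell
$ pg_dump --lock-wait-timeout=10000 mydb    # 10000 milliseconds; accepted by all versions
$ pg_dump --lock-wait-timeout=2min mydb     # a format accepted by SET statement_timeout
```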
pg_dump will wait for all files to be written safely to disk. This option causes
pg_dump to return without waiting, which is faster, but means that a subsequent operating system crash can leave the dump corrupt. Generally, this option is useful for testing but should not be used when dumping data from a production installation.
pg_dump -j against a pre-9.2 server, see the documentation of the
-j parameter for more details.
--quote-all-identifiers prevents such issues, at the price of a harder-to-read dump script.
post-data. This option can be specified more than once to select multiple sections. The default is to dump all sections.
serializable transaction for the dump, to ensure that the snapshot used is consistent with later database states; but do this by waiting for a point in the transaction stream at which no anomalies can be present, so that there isn't a risk of the dump failing or causing other transactions to roll back with a
serialization_failure. See Chapter 13 for more information about transaction isolation and concurrency control.
--schema) and table (
--table) qualifier match at least one schema/table in the database to be dumped. Note that if none of the schema/table qualifiers find matches, pg_dump will generate an error even without --strict-names. This option has no effect on
-N, -T, or --exclude-table-data. An exclude pattern failing to match any objects is not considered an error.
SET SESSION AUTHORIZATION commands instead of
ALTER OWNER commands to determine object ownership. This makes the dump more standards-compatible, but depending on the history of the objects in the dump, might not restore properly. Also, a dump using
SET SESSION AUTHORIZATION will certainly require superuser privileges to restore correctly, whereas
ALTER OWNER requires lesser privileges.
dbname as the first non-option argument on the command line.
PGHOST environment variable, if set, else a Unix domain socket connection is attempted.
PGPORT environment variable, if set, or a compiled-in default.
.pgpass file, the connection attempt will fail. This option can be useful in batch jobs and scripts where no user is present to enter a password.
-W to avoid the extra connection attempt.
rolename command after connecting to the database. It is useful when the authenticated user (specified by
-U) lacks privileges needed by pg_dump, but can switch to a role with the required rights. Some installations have a policy against logging in directly as a superuser, and use of this option allows dumps to be made without violating the policy.
SELECT statements. If you have problems running pg_dump, make sure you are able to select information from the database using, for example, psql. Also, any default connection settings and environment variables used by the libpq front-end library will apply.
track_counts to false via
template1 database, be careful to restore the output of pg_dump into a truly empty database; otherwise you are likely to get errors due to duplicate definitions of the added objects. To make an empty database without any local additions, copy from
template0 not template1, for example:
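A minimal way to do this (the database name foo is illustrative):

```sql
CREATE DATABASE foo WITH TEMPLATE template0;
```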
--disable-triggers is used, pg_dump emits commands to disable triggers on user tables before inserting the data, and then commands to re-enable them after the data has been inserted. If the restore is stopped in the middle, the system catalogs might be left in the wrong state.
--quote-all-identifiers option is recommended in cross-version cases, as it can prevent problems arising from varying reserved-word lists in different PostgreSQL versions.
CREATE SUBSCRIPTION commands that use the
NOCONNECT option, so that restoring the subscription does not make remote connections for creating a replication slot or for initial table copy. That way, the dump can be restored without requiring network access to the remote servers. It is then up to the user to reactivate the subscriptions in a suitable way. If the involved hosts have changed, the connection information might have to be changed. It might also be appropriate to truncate the target tables before initiating a new full table copy.
west and end in
gsm, excluding any schemas whose names contain the word
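A command matching that description (assuming the elided word is test, and with mydb as the database) might look like:

```shell
$ pg_dump -n 'east*gsm' -n 'west*gsm' -N '*test*' mydb
```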
-t and related switches, you need to double-quote the name; else it will be folded to lower case (see Patterns). But double quotes are special to the shell, so in turn they must be quoted. Thus, to dump a single table with a mixed-case name, you need something like
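A minimal sketch of the quoting involved (the table name MixedCaps and database mydb are hypothetical; the pg_dump invocation itself is shown only as a comment):

```shell
# The inner double quotes are for PostgreSQL (they preserve case); the
# outer single quotes are for the shell (they protect the double quotes).
# The full command would resemble:
#   pg_dump -t '"MixedCaps"' mydb
arg='"MixedCaps"'     # what pg_dump receives as the -t argument
echo "$arg"
```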