pg_dump [connection-option...] [option...] [dbname]

Dumps can be output in script or archive file formats. The most flexible output file formats are the “custom” format (-Fc) and the “directory” format (-Fd). They allow for selection and reordering of all archived items, support parallel restoration, and are compressed by default. The “directory” format is the only format that supports parallel dumps.

Options

The following command-line options control the content and format of the output.

dbname
    Specifies the name of the database to be dumped. If this is not specified, the environment variable PGDATABASE is used. If that is not set, the user name specified for the connection is used.

-a
--data-only
    Dump only the data, not the schema (data definitions). Table data, large objects, and sequence values are dumped.
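As a quick illustration (the database name mydb and output file are placeholders):

```shell
# Dump only the rows of every table in mydb; no CREATE TABLE statements
pg_dump --data-only mydb > data.sql
```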
-b
--blobs
    Include large objects in the dump. This is the default behavior except when --schema, --table, or --schema-only is specified. The -b switch is therefore only useful to add large objects to dumps where a specific schema or table has been requested. Note that blobs are considered data and therefore will be included when --data-only is used, but not when --schema-only is.

-B
--no-blobs
    Exclude large objects in the dump.

    When both -b and -B are given, the behavior is to output large objects when data is being dumped; see the -b documentation.

-c
--clean
    Output commands to clean (drop) database objects prior to outputting the commands for creating them. (Unless --if-exists is also specified, restore might generate some harmless error messages, if any objects were not present in the destination database.)

    This option is only meaningful for the plain-text format. For the archive formats, you can specify the option when you call pg_restore.

-C
--create
    Begin the output with a command to create the database itself and reconnect to the created database. If --clean is also specified, the script drops and recreates the target database before reconnecting to it.

    With --create, the output also includes the database's comment if any, and any configuration variable settings that are specific to this database, that is, any ALTER DATABASE ... SET ... and ALTER ROLE ... IN DATABASE ... SET ... commands that mention this database. Access privileges for the database itself are also dumped, unless --no-acl is specified.

    This option is only meaningful for the plain-text format. For the archive formats, you can specify the option when you call pg_restore.

-E encoding
--encoding=encoding
    Create the dump in the specified character set encoding. By default, the dump is created in the database encoding. (Another way to get the same result is to set the PGCLIENTENCODING environment variable to the desired dump encoding.)

-f file
--file=file
    Send output to the specified file. This parameter can be omitted for file-based output formats, in which case the standard output is used. It must be given for the directory output format, however, where it specifies the target directory instead of a file. In this case the directory is created by pg_dump and must not exist before.
-F format
--format=format
    Selects the format of the output. format can be one of the following:

    p
    plain
        Output a plain-text SQL script file (the default).

    c
    custom
        Output a custom-format archive suitable for input into pg_restore. Together with the directory output format, this is the most flexible output format in that it allows manual selection and reordering of archived items during restore. This format is also compressed by default.

    d
    directory
        Output a directory-format archive suitable for input into pg_restore. This will create a directory with one file for each table and blob being dumped, plus a so-called Table of Contents file describing the dumped objects in a machine-readable format that pg_restore can read. This format is compressed by default and also supports parallel dumps.

    t
    tar
        Output a tar-format archive suitable for input into pg_restore. The tar format is compatible with the directory format: extracting a tar-format archive produces a valid directory-format archive. However, the tar format does not support compression. Also, when using tar format the relative order of table data items cannot be changed during restore.
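For comparison, one hypothetical invocation per output format (mydb and the output names are placeholders):

```shell
# Plain-text SQL script (the default)
pg_dump -Fp mydb > db.sql
# Custom-format archive, restorable with pg_restore
pg_dump -Fc mydb > db.dump
# Directory-format archive; -f names the target directory
pg_dump -Fd mydb -f dumpdir
# tar-format archive (no compression)
pg_dump -Ft mydb > db.tar
```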
-j njobs
--jobs=njobs
    Run the dump in parallel by dumping njobs tables simultaneously. This option reduces the time of the dump but it also increases the load on the database server. You can only use this option with the directory output format, because this is the only output format where multiple processes can write their data at the same time.

-n schema
--schema=schema
    Dump only schemas matching schema; this selects both the schema itself, and all its contained objects. When this option is not specified, all non-system schemas in the target database will be dumped. Multiple schemas can be selected by writing multiple -n switches. Also, the schema parameter is interpreted as a pattern according to the same rules used by psql's \d commands (see Patterns), so multiple schemas can also be selected by writing wildcard characters in the pattern. When using wildcards, be careful to quote the pattern if needed to prevent the shell from expanding the wildcards; see Examples.

    Note: When -n is specified, pg_dump makes no attempt to dump any other database objects that the selected schema(s) might depend upon. Therefore, there is no guarantee that the results of a specific-schema dump can be successfully restored by themselves into a clean database.

    Note: Non-schema objects such as blobs are not dumped when -n is specified. You can add blobs back to the dump with the --blobs switch.
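For example, a parallel dump might look like this (the job count of 4 and the names are arbitrary); the directory format is required for -j:

```shell
# Dump four tables at a time into a directory-format archive
pg_dump -Fd -j 4 -f dumpdir mydb
```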
-N schema
--exclude-schema=schema
    Do not dump any schemas matching the schema pattern. The pattern is interpreted according to the same rules as for -n. -N can be given more than once to exclude schemas matching any of several patterns.

    When both -n and -N are given, the behavior is to dump just the schemas that match at least one -n switch but no -N switches. If -N appears without -n, then schemas matching -N are excluded from what is otherwise a normal dump.
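A sketch of combining the two switches (the schema names are hypothetical):

```shell
# Dump every schema starting with sales, but skip any ending in _archive;
# quoting keeps the shell from expanding the wildcards
pg_dump -n 'sales*' -N '*_archive' mydb > db.sql
```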
-o
--oids
    Include object identifiers (OIDs) as part of the data for every table. Use this option only if your application references the OID columns in some way; otherwise it should not be used.

-O
--no-owner
    Do not output commands to set ownership of objects to match the original database. By default, pg_dump issues ALTER OWNER or SET SESSION AUTHORIZATION statements to set ownership of created database objects. These statements will fail when the script is run unless it is started by a superuser (or the same user that owns all of the objects in the script). To make a script that can be restored by any user, but will give that user ownership of all the objects, specify -O.

    This option is only meaningful for the plain-text format. For the archive formats, you can specify the option when you call pg_restore.

-R
--no-reconnect
    This option is obsolete but still accepted for backwards compatibility.

-s
--schema-only
    Dump only the object definitions (schema), not data. This option is the inverse of --data-only. It is similar to, but for historical reasons not identical to, specifying --section=pre-data --section=post-data.

    (Do not confuse this with the --schema option, which uses the word “schema” in a different meaning.)

    To exclude table data for only a subset of tables in the database, see --exclude-table-data.

-S username
--superuser=username
    Specify the superuser user name to use when disabling triggers. This is relevant only if --disable-triggers is used. (Usually, it's better to leave this out, and instead start the resulting script as superuser.)
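A common pattern is to split a dump into definitions and data (file names are placeholders):

```shell
# Object definitions only, then data only; restore in the same order
pg_dump -s mydb > schema.sql
pg_dump -a mydb > data.sql
```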
-t table
--table=table
    Dump only tables with names matching table. Multiple tables can be selected by writing multiple -t switches. Also, the table parameter is interpreted as a pattern according to the same rules used by psql's \d commands (see Patterns), so multiple tables can also be selected by writing wildcard characters in the pattern. When using wildcards, be careful to quote the pattern if needed to prevent the shell from expanding the wildcards; see Examples.

    The -n and -N switches have no effect when -t is used, because tables selected by -t will be dumped regardless of those switches, and non-table objects will not be dumped.

    Note: The behavior of the -t switch is not entirely upward compatible with pre-8.2 PostgreSQL versions. Formerly, writing -t tab would dump all tables named tab, but now it just dumps whichever one is visible in your default search path. To get the old behavior you can write -t '*.tab'. Also, you must write something like -t sch.tab to select a table in a particular schema, rather than the old locution of -n sch -t tab.
-T table
--exclude-table=table
    Do not dump any tables matching the table pattern. The pattern is interpreted according to the same rules as for -t. -T can be given more than once to exclude tables matching any of several patterns.

    When both -t and -T are given, the behavior is to dump just the tables that match at least one -t switch but no -T switches. If -T appears without -t, then tables matching -T are excluded from what is otherwise a normal dump.
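For instance (table names hypothetical):

```shell
# All tables beginning with emp, except the employee_log table
pg_dump -t 'emp*' -T employee_log mydb > db.sql
```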
-v
--verbose
    Specifies verbose mode. This will cause pg_dump to output detailed object comments and start/stop times to the dump file, and progress messages to standard error.

-V
--version
    Print the pg_dump version and exit.

-x
--no-privileges
--no-acl
    Prevent dumping of access privileges (grant/revoke commands).

-Z 0..9
--compress=0..9
    Specify the compression level to use. Zero means no compression. For the custom archive format, this specifies compression of individual table-data segments, and the default is to compress at a moderate level. For plain text output, setting a nonzero compression level causes the entire output file to be compressed, as though it had been fed through gzip; but the default is not to compress. The tar archive format currently does not support compression at all.
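For example, a compressed plain-text dump (the resulting stream is gzip-compatible):

```shell
# Maximum compression; the result can be decompressed with gzip tools
pg_dump -Z 9 mydb > db.sql.gz
```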
--binary-upgrade
    This option is for use by in-place upgrade utilities. Its use for other purposes is not recommended or supported. The behavior of the option may change in future releases without notice.

--column-inserts
--attribute-inserts
    Dump data as INSERT commands with explicit column names (INSERT INTO table (column, ...) VALUES ...). This will make restoration very slow; it is mainly useful for making dumps that can be loaded into non-PostgreSQL databases. However, since this option generates a separate command for each row, an error in reloading a row causes only that row to be lost rather than the entire table contents.
--disable-dollar-quoting
    This option disables the use of dollar quoting for function bodies, and forces them to be quoted using SQL standard string syntax.

--disable-triggers
    This option is relevant only when creating a data-only dump. It instructs pg_dump to include commands to temporarily disable triggers on the target tables while the data is reloaded. Use this if you have referential integrity checks or other triggers on the tables that you do not wish to invoke during data reload.

    Presently, the commands emitted for --disable-triggers must be done as superuser. So, you should also specify a superuser name with -S, or preferably be careful to start the resulting script as a superuser.

    This option is only meaningful for the plain-text format. For the archive formats, you can specify the option when you call pg_restore.

--enable-row-security
    This option is relevant only when dumping the contents of a table which has row security. By default, pg_dump will set row_security to off, to ensure that all data is dumped from the table. If the user does not have sufficient privileges to bypass row security, then an error is thrown.

    Note that if you use this option currently, you probably also want the dump to be in INSERT format, as the COPY FROM during restore does not support row security.

--exclude-table-data=table
    Do not dump data for any tables matching the table pattern. The pattern is interpreted according to the same rules as for -t. --exclude-table-data can be given more than once to exclude tables matching any of several patterns. This option is useful when you need the definition of a particular table even though you do not need the data in it.

    To exclude data for all tables in the database, see --schema-only.
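For instance, to keep a hypothetical audit_log table's definition while skipping its rows:

```shell
# The table's structure is dumped; its data is not
pg_dump --exclude-table-data=audit_log mydb > db.sql
```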
--if-exists
    Use conditional commands (i.e., add an IF EXISTS clause) when cleaning database objects. This option is not valid unless --clean is also specified.

--inserts
    Dump data as INSERT commands (rather than COPY). This will make restoration very slow; it is mainly useful for making dumps that can be loaded into non-PostgreSQL databases. However, since this option generates a separate command for each row, an error in reloading a row causes only that row to be lost rather than the entire table contents. Note that the restore might fail altogether if you have rearranged column order. The --column-inserts option is safe against column order changes, though even slower.

--load-via-partition-root
    When dumping data for a table partition, make the COPY or INSERT statements target the root of the partitioning hierarchy that contains it, rather than the partition itself. This causes the appropriate partition to be re-determined for each row when the data is loaded. This may be useful when reloading data on a server where rows do not always fall into the same partitions as they did on the original server. That could happen, for example, if the partitioning column is of type text and the two systems have different definitions of the collation used to sort the partitioning column.

--lock-wait-timeout=timeout
    Do not wait forever to acquire shared table locks at the beginning of the dump. Instead fail if unable to lock a table within the specified timeout. The timeout may be specified in any of the formats accepted by SET statement_timeout. (Allowed formats vary depending on the server version you are dumping from, but an integer number of milliseconds is accepted by all versions.)
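For example, to give up rather than block behind a conflicting lock:

```shell
# Fail if a needed table lock cannot be acquired within 10 seconds (10000 ms)
pg_dump --lock-wait-timeout=10000 mydb > db.sql
```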
--no-comments
    Do not dump comments.

--no-publications
    Do not dump publications.

--no-security-labels
    Do not dump security labels.

--no-subscriptions
    Do not dump subscriptions.

--no-sync
    By default, pg_dump will wait for all files to be written safely to disk. This option causes pg_dump to return without waiting, which is faster, but means that a subsequent operating system crash can leave the dump corrupt. Generally, this option is useful for testing but should not be used when dumping data from a production installation.

--no-synchronized-snapshots
    This option allows running pg_dump -j against a pre-9.2 server; see the documentation of the -j parameter for more details.

--no-tablespaces
    Do not output commands to select tablespaces. With this option, all objects will be created in whichever tablespace is the default during restore.

    This option is only meaningful for the plain-text format. For the archive formats, you can specify the option when you call pg_restore.

--no-unlogged-table-data
    Do not dump the contents of unlogged tables. This option has no effect on whether the table definitions are dumped; it only suppresses dumping the table data.

--quote-all-identifiers
    Force quoting of all identifiers. This option is recommended when dumping a database from a server whose PostgreSQL major version is different from pg_dump's, or when the output is intended to be loaded into a server of a different major version. By default, pg_dump quotes only identifiers that are reserved words in its own major version. This sometimes results in compatibility issues when dealing with servers of other versions that may have slightly different sets of reserved words. Using --quote-all-identifiers prevents such issues, at the price of a harder-to-read dump script.

--section=sectionname
    Only dump the named section. The section name can be pre-data, data, or post-data. This option can be specified more than once to select multiple sections. The default is to dump all sections.
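For example, the sections can be dumped to separate files so that data can be loaded between the definition and index/constraint steps:

```shell
# Table definitions, then data, then indexes/constraints/triggers
pg_dump --section=pre-data mydb > pre.sql
pg_dump --section=data mydb > data.sql
pg_dump --section=post-data mydb > post.sql
```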
--serializable-deferrable
    Use a serializable transaction for the dump, to ensure that the snapshot used is consistent with later database states; but do this by waiting for a point in the transaction stream at which no anomalies can be present, so that there isn't a risk of the dump failing or causing other transactions to roll back with a serialization_failure. See Chapter 13 for more information about transaction isolation and concurrency control.

--snapshot=snapshotname
    Use the specified synchronized snapshot when making a dump of the database. This option is useful when needing to synchronize the dump with a logical replication slot or with a concurrent session. In the case of a parallel dump, the snapshot name defined by this option is used rather than taking a new snapshot.

--strict-names
    Require that each schema (-n/--schema) and table (-t/--table) qualifier match at least one schema/table in the database to be dumped. Note that if none of the schema/table qualifiers find matches, pg_dump will generate an error even without --strict-names.

    This option has no effect on -N/--exclude-schema, -T/--exclude-table, or --exclude-table-data. An exclude pattern failing to match any objects is not considered an error.
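For instance, with --strict-names a typo in a pattern is reported instead of silently producing an empty selection:

```shell
# Errors out if no table matches 'acount*' (note the hypothetical typo)
pg_dump -t 'acount*' --strict-names mydb > db.sql
```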
--use-set-session-authorization
    Output SQL-standard SET SESSION AUTHORIZATION commands instead of ALTER OWNER commands to determine object ownership. This makes the dump more standards-compatible, but depending on the history of the objects in the dump, might not restore properly. Also, a dump using SET SESSION AUTHORIZATION will certainly require superuser privileges to restore correctly, whereas ALTER OWNER requires lesser privileges.

-?
--help
    Show help about pg_dump command line arguments, and exit.

The following command-line options control the database connection parameters.

-d dbname
--dbname=dbname
    Specifies the name of the database to connect to. This is equivalent to specifying dbname as the first non-option argument on the command line. If this parameter contains an = sign or starts with a valid URI prefix (postgresql:// or postgres://), it is treated as a conninfo string. See Section 34.1 for more information.
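As an illustration of the connection string form (the host, user, and database names are placeholders):

```shell
# Everything after -d is parsed as a conninfo URI
pg_dump -d "postgresql://alice@db.example.com:5432/mydb" > db.sql
```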
-h host
--host=host
    Specifies the host name of the machine on which the server is running. If the value begins with a slash, it is used as the directory for the Unix domain socket. The default is taken from the PGHOST environment variable, if set, else a Unix domain socket connection is attempted.

-p port
--port=port
    Specifies the TCP port or local Unix domain socket file extension on which the server is listening for connections. Defaults to the PGPORT environment variable, if set, or a compiled-in default.

-U username
--username=username
    User name to connect as.

-w
--no-password
    Never issue a password prompt. If the server requires password authentication and a password is not available by other means such as a .pgpass file, the connection attempt will fail. This option can be useful in batch jobs and scripts where no user is present to enter a password.

-W
--password
    Force pg_dump to prompt for a password before connecting to a database.

    This option is never essential, since pg_dump will automatically prompt for a password if the server demands password authentication. However, pg_dump will waste a connection attempt finding out that the server wants a password. In some cases it is worth typing -W to avoid the extra connection attempt.

--role=rolename
    Specifies a role name to be used to create the dump. This option causes pg_dump to issue a SET ROLE rolename command after connecting to the database. It is useful when the authenticated user (specified by -U) lacks privileges needed by pg_dump, but can switch to a role with the required rights. Some installations have a policy against logging in directly as a superuser, and use of this option allows dumps to be made without violating the policy.
Environment

PGDATABASE
PGHOST
PGOPTIONS
PGPORT
PGUSER
    Default connection parameters.

This utility, like most other PostgreSQL utilities, also uses the environment variables supported by libpq.

Diagnostics

pg_dump internally executes SELECT statements. If you have problems running pg_dump, make sure you are able to select information from the database using, for example, psql. Also, any default connection settings and environment variables used by the libpq front-end library will apply.

The database activity of pg_dump is normally collected by the statistics collector. If this is undesirable, you can set parameter track_counts to false via PGOPTIONS or the ALTER USER command.

Notes

If your database cluster has any local additions to the template1 database, be careful to restore the output of pg_dump into a truly empty database; otherwise you are likely to get errors due to duplicate definitions of the added objects. To make an empty database without any local additions, copy from template0 not template1, for example:

    createdb -T template0 newdb

When a data-only dump is chosen and the option --disable-triggers is used, pg_dump emits commands to disable triggers on user tables before inserting the data, and then commands to re-enable them after the data has been inserted. If the restore is stopped in the middle, the system catalogs might be left in the wrong state.

It is wise to run ANALYZE after restoring from a dump file to ensure optimal performance; see Section 24.1.3 and Section 24.1.6 for more information.

The --quote-all-identifiers option is recommended in cross-version cases, as it can prevent problems arising from varying reserved-word lists in different PostgreSQL versions.

When dumping logical replication subscriptions, pg_dump emits CREATE SUBSCRIPTION commands that use the NOCONNECT option, so that restoring the subscription does not make remote connections for creating a replication slot or for initial table copy. That way, the dump can be restored without requiring network access to the remote servers. It is then up to the user to reactivate the subscriptions in a suitable way. If the involved hosts have changed, the connection information might have to be changed. It might also be appropriate to truncate the target tables before initiating a new full table copy.

Examples

To dump a database called mydb into an SQL-script file:

    pg_dump mydb > db.sql

To reload such a script into a (freshly created) database named newdb:

    psql -d newdb -f db.sql

To dump all schemas whose names start with east or west and end in gsm, excluding any schemas whose names contain the word test:

    pg_dump -n 'east*gsm' -n 'west*gsm' -N '*test*' mydb > db.sql

To dump all database objects except for tables whose names begin with ts_:

    pg_dump -T 'ts_*' mydb > db.sql

To specify an upper-case or mixed-case name in -t and related switches, you need to double-quote the name; else it will be folded to lower case (see Patterns). But double quotes are special to the shell, so in turn they must be quoted. Thus, to dump a single table with a mixed-case name, you need something like:

    pg_dump -t "\"MixedCaseName\"" mydb > mytab.sql
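As a further sketch (not one of the original examples; the names and job count are arbitrary), a directory-format dump pairs naturally with a parallel restore:

```shell
# Parallel dump, then parallel restore into a freshly created database
pg_dump -Fd -j 4 -f dumpdir mydb
pg_restore -d newdb -j 4 dumpdir
```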