libpq is the C application programmer's interface to PostgreSQL. libpq is a set of library functions that allow client programs to pass queries to the PostgreSQL backend server and to receive the results of these queries.
libpq is also the underlying engine for several other PostgreSQL application interfaces, including those written for C++, Perl, Python, Tcl and ECPG. So some aspects of libpq's behavior will be important to you if you use one of those packages. In particular, Section 33.14, Section 33.15 and Section 33.18 describe behavior that is visible to the user of any application that uses libpq.
Some short programs are included at the end of this chapter (Section 33.21) to show how to write programs that use libpq. There are also several complete examples of libpq applications in the directory src/test/examples in the source code distribution.
Client programs that use libpq must include the header file libpq-fe.h and must link with the libpq library.
These functions can be used to interrogate the status of an existing database connection object.
Tip: libpq application programmers should be careful to maintain the PGconn abstraction. Use the accessor functions described below to get at the contents of PGconn. Reference to internal PGconn fields using libpq-int.h is not recommended because they are subject to change in the future.
The following functions return parameter values established at connection. These values are fixed for the life of the connection. If a multi-host connection string is used, the values of PQhost, PQport, and PQpass can change if a new connection is established using the same PGconn object. Other values are fixed for the lifetime of the PGconn object.
PQdb
Returns the database name of the connection.
PQuser
Returns the user name of the connection.
PQpass
Returns the password of the connection.
PQhost
Returns the server host name of the connection. This can be a host name, an IP address, or a directory path if the connection is via Unix-domain socket. (The path case can be distinguished because it will always be an absolute path, beginning with /.)
PQport
Returns the port of the connection.
PQtty
Returns the debug TTY of the connection. (This is obsolete, since the server no longer pays attention to the TTY setting, but the function remains for backward compatibility.)
PQoptions
Returns the command-line options passed in the connection request.
The following functions return status data that can change as operations are executed on the PGconn object.
PQstatus
Returns the status of the connection.
The status can be one of a number of values. However, only two of these are seen outside of an asynchronous connection procedure: CONNECTION_OK and CONNECTION_BAD. A good connection to the database has the status CONNECTION_OK. A failed connection attempt is signaled by status CONNECTION_BAD. Ordinarily, an OK status will remain so until PQfinish, but a communications failure might result in the status changing to CONNECTION_BAD prematurely. In that case the application could try to recover by calling PQreset.
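A minimal sketch of the usual check after connecting (the connection string here is only a placeholder):

    #include <stdio.h>
    #include <libpq-fe.h>

    int main(void)
    {
        /* Any connection string accepted by PQconnectdb can be used here. */
        PGconn *conn = PQconnectdb("dbname=postgres");

        if (PQstatus(conn) != CONNECTION_OK)
        {
            fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
            PQfinish(conn);      /* always free the PGconn, even on failure */
            return 1;
        }

        /* ... use the connection ... */
        PQfinish(conn);
        return 0;
    }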
See the entries for PQconnectStartParams, PQconnectStart and PQconnectPoll with regards to other status codes that might be returned.
PQtransactionStatus
Returns the current in-transaction status of the server.
The status can be PQTRANS_IDLE (currently idle), PQTRANS_ACTIVE (a command is in progress), PQTRANS_INTRANS (idle, in a valid transaction block), or PQTRANS_INERROR (idle, in a failed transaction block). PQTRANS_UNKNOWN is reported if the connection is bad. PQTRANS_ACTIVE is reported only when a query has been sent to the server and not yet completed.
PQparameterStatus
Looks up a current parameter setting of the server.
Certain parameter values are reported by the server automatically at connection startup or whenever their values change. PQparameterStatus can be used to interrogate these settings. It returns the current value of a parameter if known, or NULL if the parameter is not known.
Parameters reported as of the current release include server_version, server_encoding, client_encoding, application_name, is_superuser, session_authorization, DateStyle, IntervalStyle, TimeZone, integer_datetimes, and standard_conforming_strings. (server_encoding, TimeZone, and integer_datetimes were not reported by releases before 8.0; standard_conforming_strings was not reported by releases before 8.1; IntervalStyle was not reported by releases before 8.4; application_name was not reported by releases before 9.0.) Note that server_version, server_encoding and integer_datetimes cannot change after startup.
Pre-3.0-protocol servers do not report parameter settings, but libpq includes logic to obtain values for server_version and client_encoding anyway, so applications are encouraged to use PQparameterStatus rather than ad hoc code to determine these values. (Beware however that on a pre-3.0 connection, changing client_encoding via SET after connection startup will not be reflected by PQparameterStatus.) For server_version, see also PQserverVersion, which returns the information in a numeric form that is much easier to compare against.
If no value for standard_conforming_strings is reported, applications can assume it is off, that is, backslashes are treated as escapes in string literals. Also, the presence of this parameter can be taken as an indication that the escape string syntax (E'...') is accepted.
Although the returned pointer is declared const, it in fact points to mutable storage associated with the PGconn structure. It is unwise to assume the pointer will remain valid across queries.
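For instance, a short sketch of reading one reported setting (assuming conn is an open connection):

    const char *encoding = PQparameterStatus(conn, "server_encoding");
    if (encoding != NULL)
        printf("server_encoding = %s\n", encoding);
    else
        printf("server_encoding is not reported by this server\n");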
PQprotocolVersion
Interrogates the frontend/backend protocol being used.
Applications might wish to use this function to determine whether certain features are supported. Currently, the possible values are 2 (2.0 protocol), 3 (3.0 protocol), or zero (connection bad). The protocol version will not change after connection startup is complete, but it could theoretically change during a connection reset. The 3.0 protocol will normally be used when communicating with PostgreSQL 7.4 or later servers; pre-7.4 servers support only protocol 2.0. (Protocol 1.0 is obsolete and not supported by libpq.)
PQserverVersion
Returns an integer representing the server version.
Applications might use this function to determine the version of the database server they are connected to. The result is formed by multiplying the server's major version number by 10000 and adding the minor version number. For example, version 10.1 will be returned as 100001, and version 11.0 will be returned as 110000. Zero is returned if the connection is bad.
Prior to major version 10, PostgreSQL used three-part version numbers in which the first two parts together represented the major version. For those versions, PQserverVersion uses two digits for each part; for example version 9.1.5 will be returned as 90105, and version 9.2.0 will be returned as 90200.
Therefore, for purposes of determining feature compatibility, applications should divide the result of PQserverVersion by 100 not 10000 to determine a logical major version number. In all release series, only the last two digits differ between minor releases (bug-fix releases).
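A small sketch of the recommended arithmetic (assuming conn is an open connection):

    int ver = PQserverVersion(conn);    /* e.g. 110000 for 11.0, 90105 for 9.1.5 */
    if (ver == 0)
        fprintf(stderr, "connection is bad\n");
    else
    {
        int major = ver / 100;          /* logical major: 1100 for 11.x, 901 for 9.1.x */
        int minor = ver % 100;          /* bug-fix release number */
        printf("logical major version %d, minor release %d\n", major, minor);
    }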
PQerrorMessage
Returns the error message most recently generated by an operation on the connection.
Nearly all libpq functions will set a message for PQerrorMessage if they fail. Note that by libpq convention, a nonempty PQerrorMessage result can consist of multiple lines, and will include a trailing newline. The caller should not free the result directly; it will be freed when the associated PGconn handle is passed to PQfinish. The result string should not be expected to remain the same across operations on the PGconn structure.
PQsocket
Obtains the file descriptor number of the connection socket to the server. A valid descriptor will be greater than or equal to 0; a result of -1 indicates that no server connection is currently open. (This will not change during normal operation, but could change during connection setup or reset.)
PQbackendPID
Returns the process ID (PID) of the backend process handling this connection.
The backend PID is useful for debugging purposes and for comparison to NOTIFY
messages (which include the PID of the notifying backend process). Note that the PID belongs to a process executing on the database server host, not the local host!
PQconnectionNeedsPassword
Returns true (1) if the connection authentication method required a password, but none was available. Returns false (0) if not.
This function can be applied after a failed connection attempt to decide whether to prompt the user for a password.
PQconnectionUsedPassword
Returns true (1) if the connection authentication method used a password. Returns false (0) if not.
This function can be applied after either a failed or successful connection attempt to detect whether the server demanded a password.
The following functions return information related to SSL. This information usually doesn't change after a connection is established.
PQsslInUse
Returns true (1) if the connection uses SSL, false (0) if not.
PQsslAttribute
Returns SSL-related information about the connection.
The list of available attributes varies depending on the SSL library being used, and the type of connection. If an attribute is not available, returns NULL.
The following attributes are commonly available:
library
Name of the SSL implementation in use. (Currently, only "OpenSSL" is implemented.)
protocol
SSL/TLS version in use. Common values are "TLSv1", "TLSv1.1" and "TLSv1.2", but an implementation may return other strings if some other protocol is used.
key_bits
Number of key bits used by the encryption algorithm.
cipher
A short name of the ciphersuite used, e.g., "DHE-RSA-DES-CBC3-SHA". The names are specific to each SSL implementation.
compression
If SSL compression is in use, returns the name of the compression algorithm, or "on" if compression is used but the algorithm is not known. If compression is not in use, returns "off".
PQsslAttributeNames
Return an array of SSL attribute names available. The array is terminated by a NULL pointer.
PQsslStruct
Return a pointer to an SSL-implementation-specific object describing the connection.
The struct(s) available depend on the SSL implementation in use. For OpenSSL, there is one struct, available under the name "OpenSSL", and it returns a pointer to the OpenSSL SSL struct. To use this function, code along the following lines could be used:
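A sketch along those lines (the helper name is arbitrary; include paths depend on the OpenSSL installation):

    #include <stdio.h>
    #include <libpq-fe.h>
    #include <openssl/ssl.h>

    void report_ssl(PGconn *conn)
    {
        /* The second argument must be "OpenSSL" when libpq is built with OpenSSL. */
        SSL *ssl = PQsslStruct(conn, "OpenSSL");

        if (ssl != NULL)
            printf("TLS protocol: %s\n", SSL_get_version(ssl));
        else
            printf("connection is not using SSL\n");
    }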
This structure can be used to verify encryption levels, check server certificates, and more. Refer to the OpenSSL documentation for information about this structure.
PQgetssl
Returns the SSL structure used in the connection, or null if SSL is not in use.
This function is equivalent to PQsslStruct(conn, "OpenSSL"). It should not be used in new applications, because the returned struct is specific to OpenSSL and will not be available if another SSL implementation is used. To check if a connection uses SSL, call PQsslInUse instead, and for more details about the connection, use PQsslAttribute.
The following functions deal with making a connection to a PostgreSQL backend server. An application program can have several backend connections open at one time. (One reason to do that is to access more than one database.) Each connection is represented by a PGconn object, which is obtained from the function PQconnectdb, PQconnectdbParams, or PQsetdbLogin. Note that these functions will always return a non-null object pointer, unless perhaps there is too little memory even to allocate the PGconn object. The PQstatus function should be called to check the return value for a successful connection before queries are sent via the connection object.
If untrusted users have access to a database that has not adopted a secure schema usage pattern, begin each session by removing publicly-writable schemas from search_path. One can set the parameter key word options to the value -csearch_path=. Alternately, one can issue PQexec(conn, "SELECT pg_catalog.set_config('search_path', '', false)") after connecting. This consideration is not specific to libpq; it applies to every interface for executing arbitrary SQL commands.
On Unix, forking a process with open libpq connections can lead to unpredictable results because the parent and child processes share the same sockets and operating system resources. For this reason, such usage is not recommended, though doing an exec from the child process to load a new executable is safe.
PQconnectdbParams
Makes a new connection to the database server.
This function opens a new database connection using the parameters taken from two NULL-terminated arrays. The first, keywords, is defined as an array of strings, each one being a key word. The second, values, gives the value for each key word. Unlike PQsetdbLogin below, the parameter set can be extended without changing the function signature, so use of this function (or its nonblocking analogs PQconnectStartParams and PQconnectPoll) is preferred for new application programming.
The currently recognized parameter key words are listed in .
The passed arrays can be empty to use all default parameters, or can contain one or more parameter settings. They must be matched in length. Processing will stop at the first NULL entry in the keywords array. Also, if the values entry associated with a non-NULL keywords entry is NULL or an empty string, that entry is ignored and processing continues with the next pair of array entries.
When expand_dbname is non-zero, the value for the first dbname key word is checked to see if it is a connection string. If so, it is "expanded" into the individual connection parameters extracted from the string. The value is considered to be a connection string, rather than just a database name, if it contains an equal sign (=) or it begins with a URI scheme designator. (More details on connection string formats appear below.) Only the first occurrence of dbname is treated in this way; any subsequent dbname parameter is processed as a plain database name.
In general the parameter arrays are processed from start to end. If any key word is repeated, the last value (that is not NULL or empty) is used. This rule applies in particular when a key word found in a connection string conflicts with one appearing in the keywords array. Thus, the programmer may determine whether array entries can override or be overridden by values taken from a connection string. Array entries appearing before an expanded dbname entry can be overridden by fields of the connection string, and in turn those fields are overridden by array entries appearing after dbname (but, again, only if those entries supply non-empty values).
After processing all the array entries and any expanded connection string, any connection parameters that remain unset are filled with default values. If an unset parameter's corresponding environment variable (see the environment variables list below) is set, its value is used. If the environment variable is not set either, then the parameter's built-in default value is used.
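A hedged sketch of calling PQconnectdbParams with matched keyword/value arrays (the host and database names are placeholders):

    #include <stdio.h>
    #include <libpq-fe.h>

    int main(void)
    {
        /* NULL-terminated, matched arrays; values are illustrative */
        const char * const keywords[] = {"host", "dbname", "connect_timeout", NULL};
        const char * const values[]   = {"localhost", "mydb", "10", NULL};

        /* expand_dbname = 0: treat "mydb" as a plain database name */
        PGconn *conn = PQconnectdbParams(keywords, values, 0);

        if (PQstatus(conn) != CONNECTION_OK)
            fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));

        PQfinish(conn);
        return 0;
    }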
PQconnectdb
Makes a new connection to the database server.
This function opens a new database connection using the parameters taken from the string conninfo.
PQsetdbLogin
Makes a new connection to the database server.
PQsetdb
Makes a new connection to the database server.
Make a connection to the database server in a nonblocking manner.
You must ensure that the socket is in the appropriate state before calling PQconnectPoll, as described below.
CONNECTION_STARTED
Waiting for connection to be made.
CONNECTION_MADE
Connection OK; waiting to send.
CONNECTION_AWAITING_RESPONSE
Waiting for a response from the server.
CONNECTION_AUTH_OK
Received authentication; waiting for backend start-up to finish.
CONNECTION_SSL_STARTUP
Negotiating SSL encryption.
CONNECTION_SETENV
Negotiating environment-driven parameter settings.
CONNECTION_CHECK_WRITABLE
Checking if connection is able to handle write transactions.
CONNECTION_CONSUME
Consuming any remaining response messages on connection.
Note that, although these constants will remain (in order to maintain compatibility), an application should never rely upon these occurring in a particular order, or at all, or on the status always being one of these documented values. An application might do something like this:
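For example, a sketch of mapping the intermediate statuses to user feedback (the message strings are arbitrary):

    const char *feedback;

    switch (PQstatus(conn))
    {
        case CONNECTION_STARTED:
            feedback = "Connecting...";
            break;
        case CONNECTION_MADE:
            feedback = "Connected to server...";
            break;
        /* handle the other CONNECTION_* values the same way */
        default:
            feedback = "Connecting...";
            break;
    }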
PQconndefaults
Returns the default connection options.
Returns the connection options used by a live connection.
PQconninfoParse
Returns parsed connection options from the provided connection string.
All legal options will be present in the result array, but the PQconninfoOption for any option not present in the connection string will have val set to NULL; default values are not inserted.
If errmsg is not NULL, then *errmsg is set to NULL on success, else to a malloc'd error string explaining the problem. (It is also possible for *errmsg to be set to NULL and the function to return NULL; this indicates an out-of-memory condition.)
PQfinish
Closes the connection to the server. Also frees memory used by the PGconn object.
PQreset
Resets the communication channel to the server.
This function will close the connection to the server and attempt to establish a new connection, using all the same parameters previously used. This might be useful for error recovery if a working connection is lost.
PQresetStart
PQresetPoll
Reset the communication channel to the server, in a nonblocking manner.
PQpingParams
The function returns one of the following values:
PQPING_OK
The server is running and appears to be accepting connections.
PQPING_REJECT
The server is running but is in a state that disallows connections (startup, shutdown, or crash recovery).
PQPING_NO_RESPONSE
The server could not be contacted. This might indicate that the server is not running, or that there is something wrong with the given connection parameters (for example, wrong port number), or that there is a network connectivity problem (for example, a firewall blocking the connection request).
PQPING_NO_ATTEMPT
No attempt was made to contact the server, because the supplied parameters were obviously incorrect or there was some client-side problem (for example, out of memory).
PQping
PQsetSSLKeyPassHook_OpenSSL
The application passes a pointer to a callback function with signature:
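The expected shape is along these lines (the function name is up to the application):

    int my_ssl_key_passwd_cb(char *buf, int size, PGconn *conn);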
which libpq will then call instead of its default PQdefaultSSLKeyPassHook_OpenSSL handler. The callback should determine the password for the key and copy it to result-buffer buf of size size. The string in buf must be null-terminated. The callback must return the length of the password stored in buf excluding the null terminator. On failure, the callback should set buf[0] = '\0' and return 0. See PQdefaultSSLKeyPassHook_OpenSSL in libpq's source code for an example.
If the user specified an explicit key location, its path will be in conn->sslkey when the callback is invoked. This will be empty if the default key path is being used. For keys that are engine specifiers, it is up to engine implementations whether they use the OpenSSL password callback or define their own handling.
The app callback may choose to delegate unhandled cases to PQdefaultSSLKeyPassHook_OpenSSL, or call it first and try something else if it returns 0, or completely override it.
The callback must not escape normal flow control with exceptions, longjmp(...), etc. It must return normally.
PQgetSSLKeyPassHook_OpenSSL
PQgetSSLKeyPassHook_OpenSSL returns the current client certificate key password hook, or NULL if none has been set.
In the keyword/value format, each parameter setting is in the form keyword = value, with space(s) between settings. Spaces around a setting's equal sign are optional. To write an empty value, or a value containing spaces, surround it with single quotes, for example keyword = 'a value'. Single quotes and backslashes within a value must be escaped with a backslash, i.e., \' and \\.
Example:
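One plausible keyword/value string (the host, database and timeout values are placeholders):

    host=localhost port=5432 dbname=mydb connect_timeout=10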
The general form for a connection URI is:
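Schematically, with optional parts in brackets (the exact spelling of each component follows the rules described below):

    postgresql://[user[:password]@][host][:port][,...][/dbname][?param1=value1&...]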
The URI scheme designator can be either postgresql:// or postgres://. Each of the remaining URI parts is optional. The following examples illustrate valid URI syntax:
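For instance (user names, hosts and database names are placeholders):

    postgresql://
    postgresql://localhost
    postgresql://localhost:5433
    postgresql://localhost/mydb
    postgresql://user@localhost
    postgresql://user:secret@localhost
    postgresql://other@localhost/otherdb?connect_timeout=10&application_name=myapp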
Values that would normally appear in the hierarchical part of the URI can alternatively be given as named parameters. For example:
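For instance, the named-parameter form of a host/port/database URI could look like this:

    postgresql:///mydb?host=localhost&port=5433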
The host part may be either a host name or an IP address. To specify an IPv6 address, enclose it in square brackets:
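For example (the address and database name are placeholders):

    postgresql://[2001:db8::1234]/database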
It is possible to specify multiple host components, each with an optional port component, in a single URI. A URI of the form postgresql://host1:port1,host2:port2,host3:port3/ is equivalent to a connection string of the form host=host1,host2,host3 port=port1,port2,port3. As further described below, each host will be tried in turn until a connection is successfully established.
It is possible to specify multiple hosts to connect to, so that they are tried in the given order. In the keyword/value format, the host, hostaddr, and port options accept comma-separated lists of values. The same number of elements must be given in each option that is specified, such that for example the first hostaddr corresponds to the first host name, the second hostaddr corresponds to the second host name, and so forth. As an exception, if only one port is specified, it applies to all the hosts.
In the connection URI format, you can list multiple host:port pairs separated by commas in the host component of the URI.
In either format, a single host name can translate to multiple network addresses. A common example of this is a host that has both an IPv4 and an IPv6 address.
When multiple hosts are specified, or when a single host name is translated to multiple addresses, all the hosts and addresses will be tried in order, until one succeeds. If none of the hosts can be reached, the connection fails. If a connection is established successfully, but authentication fails, the remaining hosts in the list are not tried.
If a password file is used, you can have different passwords for different hosts. All the other connection options are the same for every host in the list; it is not possible to, for example, specify different user names for different hosts.
The currently recognized parameter key words are:
host
Name of host to connect to. If a host name looks like an absolute path name, it specifies Unix-domain communication rather than TCP/IP communication; the value is the name of the directory in which the socket file is stored. (On Unix, an absolute path name begins with a slash. On Windows, paths starting with drive letters are also recognized.) The default behavior when host is not specified, or is empty, is to connect to a Unix-domain socket in /tmp (or whatever socket directory was specified when PostgreSQL was built). On Windows and on machines without Unix-domain sockets, the default is to connect to localhost.
hostaddr
Numeric IP address of host to connect to. This should be in the standard IPv4 address format, e.g., 172.28.40.9. If your machine supports IPv6, you can also use those addresses. TCP/IP communication is always used when a nonempty string is specified for this parameter. If this parameter is not specified, the value of host will be looked up to find the corresponding IP address — or, if host specifies an IP address, that value will be used directly.
Using hostaddr allows the application to avoid a host name look-up, which might be important in applications with time constraints. However, a host name is required for GSSAPI or SSPI authentication methods, as well as for verify-full SSL certificate verification. The following rules are used:
If host is specified without hostaddr, a host name lookup occurs. (When using PQconnectPoll, the lookup occurs when PQconnectPoll first considers this host name, and it may cause PQconnectPoll to block for a significant amount of time.)
If hostaddr is specified without host, the value for hostaddr gives the server network address. The connection attempt will fail if the authentication method requires a host name.
If both host and hostaddr are specified, the value for hostaddr gives the server network address. The value for host is ignored unless the authentication method requires it, in which case it will be used as the host name.
Without either a host name or host address, libpq will connect using a local Unix-domain socket; or on Windows and on machines without Unix-domain sockets, it will attempt to connect to localhost.
port
Port number to connect to at the server host, or socket file name extension for Unix-domain connections. If multiple hosts were given in the host or hostaddr parameters, this parameter may specify a comma-separated list of ports of the same length as the host list, or it may specify a single port number to be used for all hosts. An empty string, or an empty item in a comma-separated list, specifies the default port number established when PostgreSQL was built.
dbname
user
PostgreSQL user name to connect as. Defaults to be the same as the operating system name of the user running the application.
password
Password to be used if the server demands password authentication.
passfile
channel_binding
This option controls the client's use of channel binding. A setting of require means that the connection must employ channel binding, prefer means that the client will choose channel binding if available, and disable prevents the use of channel binding. The default is prefer if PostgreSQL is compiled with SSL support; otherwise the default is disable.
Channel binding is a method for the server to authenticate itself to the client. It is only supported over SSL connections with PostgreSQL 11 or later servers using the SCRAM authentication method.
connect_timeout
Maximum time to wait while connecting, in seconds (write as a decimal integer, e.g., 10). Zero, negative, or not specified means wait indefinitely. The minimum allowed timeout is 2 seconds, therefore a value of 1 is interpreted as 2. This timeout applies separately to each host name or IP address. For example, if you specify two hosts and connect_timeout is 5, each host will time out if no connection is made within 5 seconds, so the total time spent waiting for a connection might be up to 10 seconds.
client_encoding
This sets the client_encoding configuration parameter for this connection. In addition to the values accepted by the corresponding server option, you can use auto to determine the right encoding from the current locale in the client (LC_CTYPE environment variable on Unix systems).
options
application_name
fallback_application_name
keepalives
Controls whether client-side TCP keepalives are used. The default value is 1, meaning on, but you can change this to 0, meaning off, if keepalives are not wanted. This parameter is ignored for connections made via a Unix-domain socket.
keepalives_idle
Controls the number of seconds of inactivity after which TCP should send a keepalive message to the server. A value of zero uses the system default. This parameter is ignored for connections made via a Unix-domain socket, or if keepalives are disabled. It is only supported on systems where TCP_KEEPIDLE or an equivalent socket option is available, and on Windows; on other systems, it has no effect.
keepalives_interval
Controls the number of seconds after which a TCP keepalive message that is not acknowledged by the server should be retransmitted. A value of zero uses the system default. This parameter is ignored for connections made via a Unix-domain socket, or if keepalives are disabled. It is only supported on systems where TCP_KEEPINTVL or an equivalent socket option is available, and on Windows; on other systems, it has no effect.
keepalives_count
Controls the number of TCP keepalives that can be lost before the client's connection to the server is considered dead. A value of zero uses the system default. This parameter is ignored for connections made via a Unix-domain socket, or if keepalives are disabled. It is only supported on systems where TCP_KEEPCNT or an equivalent socket option is available; on other systems, it has no effect.
tcp_user_timeout
Controls the number of milliseconds that transmitted data may remain unacknowledged before a connection is forcibly closed. A value of zero uses the system default. This parameter is ignored for connections made via a Unix-domain socket. It is only supported on systems where TCP_USER_TIMEOUT is available; on other systems, it has no effect.
tty
Ignored (formerly, this specified where to send server debug output).
replication
The following values, which are case-insensitive, are supported:
true, on, yes, 1
The connection goes into physical replication mode.
database
The connection goes into logical replication mode, connecting to the database specified in the dbname parameter.
false, off, no, 0
The connection is a regular one, which is the default behavior.
In physical or logical replication mode, only the simple query protocol can be used.
gssencmode
This option determines whether or with what priority a secure GSS TCP/IP connection will be negotiated with the server. There are three modes:
disable
only try a non-GSSAPI-encrypted connection
prefer (default)
if there are GSSAPI credentials present (i.e., in a credentials cache), first try a GSSAPI-encrypted connection; if that fails or there are no credentials, try a non-GSSAPI-encrypted connection. This is the default when PostgreSQL has been compiled with GSSAPI support.
require
only try a GSSAPI-encrypted connection
gssencmode is ignored for Unix domain socket communication. If PostgreSQL is compiled without GSSAPI support, using the require option will cause an error, while prefer will be accepted but libpq will not actually attempt a GSSAPI-encrypted connection.
sslmode
This option determines whether or with what priority a secure SSL TCP/IP connection will be negotiated with the server. There are six modes:
disable
only try a non-SSL connection
allow
first try a non-SSL connection; if that fails, try an SSL connection
prefer (default)
first try an SSL connection; if that fails, try a non-SSL connection
require
only try an SSL connection. If a root CA file is present, verify the certificate in the same way as if verify-ca was specified
verify-ca
only try an SSL connection, and verify that the server certificate is issued by a trusted certificate authority (CA)
verify-full
only try an SSL connection, verify that the server certificate is issued by a trusted CA and that the requested server host name matches that in the certificate
sslmode is ignored for Unix domain socket communication. If PostgreSQL is compiled without SSL support, using options require, verify-ca, or verify-full will cause an error, while options allow and prefer will be accepted but libpq will not actually attempt an SSL connection.
Note that if GSSAPI encryption is possible, that will be used in preference to SSL encryption, regardless of the value of sslmode. To force use of SSL encryption in an environment that has working GSSAPI infrastructure (such as a Kerberos server), also set gssencmode to disable.
requiressl
This option is deprecated in favor of the sslmode setting.
If set to 1, an SSL connection to the server is required (this is equivalent to sslmode require). libpq will then refuse to connect if the server does not accept an SSL connection. If set to 0 (default), libpq will negotiate the connection type with the server (equivalent to sslmode prefer). This option is only available if PostgreSQL is compiled with SSL support.
sslcompression
If set to 1, data sent over SSL connections will be compressed. If set to 0, compression will be disabled. The default is 0. This parameter is ignored if a connection without SSL is made.
SSL compression is nowadays considered insecure and its use is no longer recommended. OpenSSL 1.1.0 disables compression by default, and many operating system distributions disable it in prior versions as well, so setting this parameter to on will not have any effect if the server does not accept compression.
If security is not a primary concern, compression can improve throughput if the network is the bottleneck. Disabling compression can improve response time and throughput if CPU performance is the limiting factor.
sslcert
This parameter specifies the file name of the client SSL certificate, replacing the default ~/.postgresql/postgresql.crt. This parameter is ignored if an SSL connection is not made.
sslkey
This parameter specifies the location for the secret key used for the client certificate. It can either specify a file name that will be used instead of the default ~/.postgresql/postgresql.key, or it can specify a key obtained from an external "engine" (engines are OpenSSL loadable modules). An external engine specification should consist of a colon-separated engine name and an engine-specific key identifier. This parameter is ignored if an SSL connection is not made.
sslpassword
This parameter specifies the password for the secret key specified in sslkey, allowing client certificate private keys to be stored in encrypted form on disk even when interactive passphrase input is not practical.
Specifying this parameter with any non-empty value suppresses the Enter PEM pass phrase: prompt that OpenSSL will emit by default when an encrypted client certificate key is provided to libpq.
If the key is not encrypted this parameter is ignored. The parameter has no effect on keys specified by OpenSSL engines unless the engine uses the OpenSSL password callback mechanism for prompts.
There is no environment variable equivalent to this option, and no facility for looking it up in .pgpass. It can be used in a service file connection definition. Users with more sophisticated uses should consider using openssl engines and tools like PKCS#11 or USB crypto offload devices.
sslrootcert
This parameter specifies the name of a file containing SSL certificate authority (CA) certificate(s). If the file exists, the server's certificate will be verified to be signed by one of these authorities. The default is ~/.postgresql/root.crt.
sslcrl
This parameter specifies the file name of the SSL certificate revocation list (CRL). Certificates listed in this file, if it exists, will be rejected while attempting to authenticate the server's certificate. The default is ~/.postgresql/root.crl.
requirepeer
ssl_min_protocol_version
This parameter specifies the minimum SSL/TLS protocol version to allow for the connection. Valid values are TLSv1, TLSv1.1, TLSv1.2 and TLSv1.3. The supported protocols depend on the version of OpenSSL used, older versions not supporting the most modern protocol versions. If not specified, the default is TLSv1.2, which satisfies industry best practices as of this writing.
ssl_max_protocol_version
This parameter specifies the maximum SSL/TLS protocol version to allow for the connection. Valid values are TLSv1, TLSv1.1, TLSv1.2 and TLSv1.3. The supported protocols depend on the version of OpenSSL used, older versions not supporting the most modern protocol versions. If not set, this parameter is ignored and the connection will use the maximum bound defined by the backend, if set. Setting the maximum protocol version is mainly useful for testing or if some component has issues working with a newer protocol.
krbsrvname
gsslib
GSS library to use for GSSAPI authentication. Currently this is disregarded except on Windows builds that include both GSSAPI and SSPI support. In that case, set this to gssapi to cause libpq to use the GSSAPI library for authentication instead of the default SSPI.
service
target_session_attrs
If this parameter is set to read-write, only a connection in which read-write transactions are accepted by default is considered acceptable. The query SHOW transaction_read_only will be sent upon any successful connection; if it returns on, the connection will be closed. If multiple hosts were specified in the connection string, any remaining servers will be tried just as if the connection attempt had failed. The default value of this parameter, any, regards all connections as acceptable.
The PQexec function is adequate for submitting commands in normal, synchronous applications. It has a few deficiencies, however, that can be of importance to some users:
PQexec waits for the command to be completed. The application might have other work to do (such as maintaining a user interface), in which case it won't want to block waiting for the response.
Since the execution of the client application is suspended while it waits for the result, it is hard for the application to decide that it would like to try to cancel the ongoing command. (It can be done from a signal handler, but not otherwise.)
PQexec can return only one PGresult structure. If the submitted command string contains multiple SQL commands, all but the last PGresult are discarded by PQexec.
PQexec always collects the command's entire result, buffering it in a single PGresult. While this simplifies error-handling logic for the application, it can be impractical for results containing many rows.
Applications that do not like these limitations can instead use the underlying functions that PQexec is built from: PQsendQuery and PQgetResult. There are also PQsendQueryParams, PQsendPrepare, PQsendQueryPrepared, PQsendDescribePrepared, and PQsendDescribePortal, which can be used with PQgetResult to duplicate the functionality of PQexecParams, PQprepare, PQexecPrepared, PQdescribePrepared, and PQdescribePortal respectively.
PQsendQuery
Submits a command to the server without waiting for the result(s). 1 is returned if the command was successfully dispatched and 0 if not (in which case, use PQerrorMessage to get more information about the failure).
After successfully calling PQsendQuery, call PQgetResult one or more times to obtain the results. PQsendQuery cannot be called again (on the same connection) until PQgetResult has returned a null pointer, indicating that the command is done.
PQsendQueryParams
Submits a command and separate parameters to the server without waiting for the result(s).
This is equivalent to PQsendQuery except that query parameters can be specified separately from the query string. The function's parameters are handled identically to PQexecParams. Like PQexecParams, it will not work on 2.0-protocol connections, and it allows only one command in the query string.
PQsendPrepare
Sends a request to create a prepared statement with the given parameters, without waiting for completion.
This is an asynchronous version of PQprepare: it returns 1 if it was able to dispatch the request, and 0 if not. After a successful call, call PQgetResult to determine whether the server successfully created the prepared statement. The function's parameters are handled identically to PQprepare. Like PQprepare, it will not work on 2.0-protocol connections.
PQsendQueryPrepared
Sends a request to execute a prepared statement with given parameters, without waiting for the result(s).
This is similar to PQsendQueryParams, but the command to be executed is specified by naming a previously-prepared statement, instead of giving a query string. The function's parameters are handled identically to PQexecPrepared. Like PQexecPrepared, it will not work on 2.0-protocol connections.
PQsendDescribePrepared
Submits a request to obtain information about the specified prepared statement, without waiting for completion.
This is an asynchronous version of PQdescribePrepared: it returns 1 if it was able to dispatch the request, and 0 if not. After a successful call, call PQgetResult to obtain the results. The function's parameters are handled identically to PQdescribePrepared. Like PQdescribePrepared, it will not work on 2.0-protocol connections.
PQsendDescribePortal
Submits a request to obtain information about the specified portal, without waiting for completion.
This is an asynchronous version of PQdescribePortal: it returns 1 if it was able to dispatch the request, and 0 if not. After a successful call, call PQgetResult to obtain the results. The function's parameters are handled identically to PQdescribePortal. Like PQdescribePortal, it will not work on 2.0-protocol connections.
PQgetResult
Waits for the next result from a prior PQsendQuery, PQsendQueryParams, PQsendPrepare, PQsendQueryPrepared, PQsendDescribePrepared, or PQsendDescribePortal call, and returns it. A null pointer is returned when the command is complete and there will be no more results.
PQgetResult must be called repeatedly until it returns a null pointer, indicating that the command is done. (If called when no command is active, PQgetResult will just return a null pointer at once.) Each non-null result from PQgetResult should be processed using the same PGresult accessor functions previously described. Don't forget to free each result object with PQclear when done with it. Note that PQgetResult will block only if a command is active and the necessary response data has not yet been read by PQconsumeInput.
Even when PQresultStatus indicates a fatal error, PQgetResult should be called until it returns a null pointer, to allow libpq to process the error information completely.
Using PQsendQuery and PQgetResult solves one of PQexec's problems: If a command string contains multiple SQL commands, the results of those commands can be obtained individually. (This allows a simple form of overlapped processing, by the way: the client can be handling the results of one command while the server is still working on later queries in the same command string.)
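A minimal sketch of that pattern (assuming conn is an open connection; the SQL is illustrative):

    if (PQsendQuery(conn, "SELECT 1; SELECT 2;") == 0)
        fprintf(stderr, "dispatch failed: %s", PQerrorMessage(conn));
    else
    {
        PGresult *res;

        /* one PGresult per SQL command; NULL marks the end */
        while ((res = PQgetResult(conn)) != NULL)
        {
            if (PQresultStatus(res) == PGRES_TUPLES_OK)
                printf("%d rows\n", PQntuples(res));
            else if (PQresultStatus(res) == PGRES_FATAL_ERROR)
                fprintf(stderr, "%s", PQresultErrorMessage(res));
            PQclear(res);
        }
    }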
By itself, calling PQgetResult will still cause the client to block until the server completes the next SQL command. This can be avoided by proper use of two more functions:
PQconsumeInput
If input is available from the server, consume it.
PQconsumeInput normally returns 1 indicating “no error”, but returns 0 if there was some kind of trouble (in which case PQerrorMessage can be consulted). Note that the result does not say whether any input data was actually collected. After calling PQconsumeInput, the application can check PQisBusy and/or PQnotifies to see if their state has changed.
PQconsumeInput can be called even if the application is not prepared to deal with a result or notification just yet. The function will read available data and save it in a buffer, thereby causing a select() read-ready indication to go away. The application can thus use PQconsumeInput to clear the select() condition immediately, and then examine the results at leisure.
PQisBusy
Returns 1 if a command is busy, that is, PQgetResult would block waiting for input. A 0 return indicates that PQgetResult can be called with assurance of not blocking.
PQisBusy will not itself attempt to read data from the server; therefore PQconsumeInput must be invoked first, or the busy state will never end.
By using the functions described above, it is possible to avoid blocking while waiting for input from the database server. However, it is still possible that the application will block waiting to send output to the server. This is relatively uncommon but can happen if very long SQL commands or data values are sent. (It is much more probable if the application sends data via COPY IN, however.) To prevent this possibility and achieve completely nonblocking database operation, the following additional functions can be used.
PQsetnonblocking
Sets the nonblocking status of the connection.
Sets the state of the connection to nonblocking if arg is 1, or blocking if arg is 0. Returns 0 if OK, -1 if error.
In the nonblocking state, calls to PQsendQuery, PQputline, PQputnbytes, PQputCopyData, and PQendcopy will not block but instead return an error if they need to be called again.
Note that PQexec does not honor nonblocking mode; if it is called, it will act in blocking fashion anyway.
PQisnonblocking
Returns the blocking status of the database connection.
Returns 1 if the connection is set to nonblocking mode and 0 if blocking.
PQflush
Attempts to flush any queued output data to the server. Returns 0 if successful (or if the send queue is empty), -1 if it failed for some reason, or 1 if it was unable to send all the data in the send queue yet (this case can only occur if the connection is nonblocking).
After sending any command or data on a nonblocking connection, call PQflush. If it returns 1, wait for the socket to become read- or write-ready. If it becomes write-ready, call PQflush again. If it becomes read-ready, call PQconsumeInput, then call PQflush again. Repeat until PQflush returns 0. (It is necessary to check for read-ready and drain the input with PQconsumeInput, because the server can block trying to send us data, e.g., NOTICE messages, and won't read our data until we read its.) Once PQflush returns 0, wait for the socket to be read-ready and then read the response as described above.
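A condensed sketch of that send-side loop, using select() for readiness waiting (error handling trimmed; <stdio.h>, <sys/select.h> and <libpq-fe.h> are assumed included):

    /* after PQsendQuery(...) on a nonblocking connection */
    int rc;
    while ((rc = PQflush(conn)) == 1)
    {
        fd_set rfds, wfds;
        int sock = PQsocket(conn);

        FD_ZERO(&rfds);
        FD_ZERO(&wfds);
        FD_SET(sock, &rfds);
        FD_SET(sock, &wfds);

        if (select(sock + 1, &rfds, &wfds, NULL, NULL) < 0)
            break;                      /* treat a select() failure as an error */
        if (FD_ISSET(sock, &rfds))
            PQconsumeInput(conn);       /* drain server data so it will read ours */
        /* loop: PQflush is retried whether we became read- or write-ready */
    }
    if (rc == -1)
        fprintf(stderr, "flush failed: %s", PQerrorMessage(conn));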
The passed string can be empty to use all default parameters, or it can contain one or more parameter settings separated by whitespace, or it can contain a URI. See for details.
This is the predecessor of PQconnectdbParams with a fixed set of parameters. It has the same functionality except that the missing parameters will always take on default values. Write NULL or an empty string for any one of the fixed parameters that is to be defaulted.
If the dbName contains an = sign or has a valid connection URI prefix, it is taken as a conninfo string in exactly the same way as if it had been passed to PQconnectdb, and the remaining parameters are then applied as specified for PQconnectdbParams.
This is a macro that calls PQsetdbLogin with null pointers for the login and pwd parameters. It is provided for backward compatibility with very old programs.
PQconnectStartParams
PQconnectStart
PQconnectPoll
These three functions are used to open a connection to a database server such that your application's thread of execution is not blocked on remote I/O whilst doing so. The point of this approach is that the waits for I/O to complete can occur in the application's main loop, rather than down inside PQconnectdbParams or PQconnectdb, and so the application can manage this operation in parallel with other activities.
With PQconnectStartParams, the database connection is made using the parameters taken from the keywords and values arrays, and controlled by expand_dbname, as described above for PQconnectdbParams.
With PQconnectStart, the database connection is made using the parameters taken from the string conninfo as described above for PQconnectdb.
Neither PQconnectStartParams nor PQconnectStart nor PQconnectPoll will block, so long as a number of restrictions are met:
The hostaddr parameter must be used appropriately to prevent DNS queries from being made. See the documentation of this parameter for details.
If you call PQtrace, ensure that the stream object into which you trace will not block.
To begin a nonblocking connection request, call PQconnectStart or PQconnectStartParams. If the result is null, then libpq has been unable to allocate a new PGconn structure. Otherwise, a valid PGconn pointer is returned (though not yet representing a valid connection to the database). Next call PQstatus(conn). If the result is CONNECTION_BAD, the connection attempt has already failed, typically because of invalid connection parameters.
If PQconnectStart or PQconnectStartParams succeeds, the next stage is to poll libpq so that it can proceed with the connection sequence. Use PQsocket(conn) to obtain the descriptor of the socket underlying the database connection. (Caution: do not assume that the socket remains the same across PQconnectPoll calls.) Loop thus: If PQconnectPoll(conn) last returned PGRES_POLLING_READING, wait until the socket is ready to read (as indicated by select(), poll(), or similar system function). Then call PQconnectPoll(conn) again. Conversely, if PQconnectPoll(conn) last returned PGRES_POLLING_WRITING, wait until the socket is ready to write, then call PQconnectPoll(conn) again. On the first iteration, i.e., if you have yet to call PQconnectPoll, behave as if it last returned PGRES_POLLING_WRITING. Continue this loop until PQconnectPoll(conn) returns PGRES_POLLING_FAILED, indicating the connection procedure has failed, or PGRES_POLLING_OK, indicating the connection has been successfully made.
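A compressed sketch of that polling loop, using select() for readiness waiting (a real application would fold this into its main loop; the conninfo string is a placeholder):

    PGconn *conn = PQconnectStart("dbname=mydb");

    if (conn != NULL && PQstatus(conn) != CONNECTION_BAD)
    {
        PostgresPollingStatusType st = PGRES_POLLING_WRITING;   /* first iteration */

        while (st != PGRES_POLLING_OK && st != PGRES_POLLING_FAILED)
        {
            int sock = PQsocket(conn);     /* may change between calls */
            fd_set fds;

            FD_ZERO(&fds);
            FD_SET(sock, &fds);
            if (st == PGRES_POLLING_READING)
                select(sock + 1, &fds, NULL, NULL, NULL);   /* wait read-ready */
            else
                select(sock + 1, NULL, &fds, NULL, NULL);   /* wait write-ready */

            st = PQconnectPoll(conn);
        }
        /* st is PGRES_POLLING_OK on success, PGRES_POLLING_FAILED otherwise */
    }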
At any time during connection, the status of the connection can be checked by calling PQstatus. If this call returns CONNECTION_BAD, then the connection procedure has failed; if the call returns CONNECTION_OK, then the connection is ready. Both of these states are equally detectable from the return value of PQconnectPoll, described above. Other states might also occur during (and only during) an asynchronous connection procedure. These indicate the current stage of the connection procedure and might be useful to provide feedback to the user for example. These statuses are the CONNECTION_* values listed above.
The connect_timeout connection parameter is ignored when using PQconnectPoll; it is the application's responsibility to decide whether an excessive amount of time has elapsed. Otherwise, PQconnectStart followed by a PQconnectPoll loop is equivalent to PQconnectdb.
Note that when PQconnectStart or PQconnectStartParams returns a non-null pointer, you must call PQfinish when you are finished with it, in order to dispose of the structure and any associated memory blocks. This must be done even if the connection attempt fails or is abandoned.
Returns a connection options array. This can be used to determine all possible options and their current default values. The return value points to an array of PQconninfoOption structures, which ends with an entry having a null keyword pointer. The null pointer is returned if memory could not be allocated. Note that the current default values (val fields) will depend on environment variables and other context. A missing or invalid service file will be silently ignored. Callers must treat the connection options data as read-only.
After processing the options array, free it by passing it to PQconninfoFree. If this is not done, a small amount of memory is leaked for each call to PQconndefaults.
PQconninfo
Returns a connection options array. This can be used to determine all possible options and the values that were used to connect to the server. The return value points to an array of PQconninfoOption structures, which ends with an entry having a null keyword pointer. All notes above for PQconndefaults also apply to the result of PQconninfo.
Parses a connection string and returns the resulting options as an array; or returns NULL if there is a problem with the connection string. This function can be used to extract the options in the provided connection string. The return value points to an array of PQconninfoOption structures, which ends with an entry having a null keyword pointer.
After processing the options array, free it by passing it to PQconninfoFree. If this is not done, some memory is leaked for each call to PQconninfoParse. Conversely, if an error occurs and errmsg is not NULL, be sure to free the error string using PQfreemem.
Note that even if the server connection attempt fails (as indicated by PQstatus), the application should call PQfinish to free the memory used by the PGconn object. The PGconn pointer must not be used again after PQfinish has been called.
These functions will close the connection to the server and attempt to establish a new connection, using all the same parameters previously used. This can be useful for error recovery if a working connection is lost. They differ from PQreset (above) in that they act in a nonblocking manner. These functions suffer from the same restrictions as PQconnectStartParams, PQconnectStart and PQconnectPoll.
To initiate a connection reset, call PQresetStart. If it returns 0, the reset has failed. If it returns 1, poll the reset using PQresetPoll in exactly the same way as you would create the connection using PQconnectPoll.
PQpingParams reports the status of the server. It accepts connection parameters identical to those of PQconnectdbParams, described above. It is not necessary to supply correct user name, password, or database name values to obtain the server status; however, if incorrect values are provided, the server will log a failed connection attempt.
PQping reports the status of the server. It accepts connection parameters identical to those of PQconnectdb, described above. It is not necessary to supply correct user name, password, or database name values to obtain the server status; however, if incorrect values are provided, the server will log a failed connection attempt.
The return values are the same as for PQpingParams.
PQsetSSLKeyPassHook_OpenSSL
PQsetSSLKeyPassHook_OpenSSL lets an application override libpq's default handling of encrypted client certificate key files, which uses sslpassword or interactive prompting.
Several libpq functions parse a user-specified string to obtain connection parameters. There are two accepted formats for these strings: plain keyword/value strings and URIs. URIs generally follow RFC 3986, except that multi-host connection strings are allowed as further described below.
The recognized parameter key words are listed in .
All named parameters must match the key words listed above, except that for compatibility with JDBC connection URIs, instances of ssl=true are translated into sslmode=require.
The connection URI needs to be encoded with percent-encoding if it includes symbols with special meaning in any of its parts. Here is an example where the equal sign (=) is replaced with %3D and the space character with %20:
The host part is interpreted as described for the parameter host. In particular, a Unix-domain socket connection is chosen if the host part is either empty or looks like an absolute path name, otherwise a TCP/IP connection is initiated. Note, however, that the slash is a reserved character in the hierarchical part of the URI. So, to specify a non-standard Unix-domain socket directory, either omit the host part of the URI and specify the host as a named parameter, or percent-encode the path in the host part of the URI:
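Both spellings could look like this for a socket directory of /var/lib/postgresql (the path and database name are placeholders):

    postgresql:///dbname?host=/var/lib/postgresql
    postgresql://%2Fvar%2Flib%2Fpostgresql/dbname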
A comma-separated list of host names is also accepted, in which case each host name in the list is tried in order; an empty item in the list selects the default behavior as explained above. See for details.
Note that authentication is likely to fail if host is not the name of the server at network address hostaddr. Also, when both host and hostaddr are specified, host is used to identify the connection in a password file (see the password file description below).
A comma-separated list of hostaddr values is also accepted, in which case each host in the list is tried in order. An empty item in the list causes the corresponding host name to be used, or the default host name if that is empty as well. See for details.
The database name. Defaults to be the same as the user name. In certain contexts, the value is checked for extended formats; see for more details on those.
Specifies the name of the file used to store passwords (see the password file description below). Defaults to ~/.pgpass, or %APPDATA%\postgresql\pgpass.conf on Microsoft Windows. (No error is reported if this file does not exist.)
Specifies command-line options to send to the server at connection start. For example, setting this to -c geqo=off sets the session's value of the geqo parameter to off. Spaces within this string are considered to separate command-line arguments, unless escaped with a backslash (\); write \\ to represent a literal backslash. For a detailed discussion of the available options, consult the PostgreSQL server configuration documentation.
Specifies a value for the application_name configuration parameter.
Specifies a fallback value for the application_name configuration parameter. This value will be used if no value has been given for application_name via a connection parameter or the PGAPPNAME environment variable. Specifying a fallback name is useful in generic utility programs that wish to set a default application name but allow it to be overridden by the user.
This option determines whether the connection should use the replication protocol instead of the normal protocol. This is what PostgreSQL replication connections as well as tools such as pg_basebackup use internally, but it can also be used by third-party applications. For a description of the replication protocol, consult .
See for a detailed description of how these options work.
This parameter specifies the operating-system user name of the server, for example requirepeer=postgres. When making a Unix-domain socket connection, if this parameter is set, the client checks at the beginning of the connection that the server process is running under the specified user name; if it is not, the connection is aborted with an error. This parameter can be used to provide server authentication similar to that available with SSL certificates on TCP/IP connections. (Note that if the Unix-domain socket is in /tmp or another publicly writable location, any user could start a server listening there. Use this parameter to ensure that you are connected to a server run by a trusted user.) This option is only supported on platforms for which the peer authentication method is implemented.
Kerberos service name to use when authenticating with GSSAPI. This must match the service name specified in the server configuration for Kerberos authentication to succeed. The default value is normally postgres, but that can be changed when building PostgreSQL via the --with-krb-srvnam option of configure. In most environments, this parameter never needs to be changed. Some Kerberos implementations might require a different service name, such as Microsoft Active Directory which requires the service name to be in upper case (POSTGRES).
Service name to use for additional parameters. It specifies a service name in pg_service.conf that holds additional connection parameters. This allows applications to specify only a service name so connection parameters can be centrally maintained. See Section 33.16.
Another frequently-desired feature that can be obtained with PQsendQuery and PQgetResult is retrieving large query results a row at a time. This is discussed in the single-row mode description below.
A typical application using these functions will have a main loop that uses select() or poll() to wait for all the conditions that it must respond to. One of the conditions will be input available from the server, which in terms of select() means readable data on the file descriptor identified by PQsocket. When the main loop detects input ready, it should call PQconsumeInput to read the input. It can then call PQisBusy, followed by PQgetResult if PQisBusy returns false (0). It can also call PQnotifies to detect NOTIFY messages.
A client that uses PQsendQuery/PQgetResult can also attempt to cancel a command that is still being processed by the server. But regardless of the return value of PQcancel, the application must continue with the normal result-reading sequence using PQgetResult. A successful cancellation will simply cause the command to terminate sooner than it would have otherwise.
Ordinarily, libpq collects a SQL command's entire result and returns it to the application as a single PGresult. This can be unworkable for commands that return a large number of rows. For such cases, applications can use PQsendQuery and PQgetResult in single-row mode. In this mode, the result row(s) are returned to the application one at a time, as they are received from the server.
To enter single-row mode, call PQsetSingleRowMode immediately after a successful call of PQsendQuery (or a sibling function). This mode selection is effective only for the currently executing query. Then call PQgetResult repeatedly, until it returns null, as documented in Section 33.4. If the query returns any rows, they are returned as individual PGresult objects, which look like normal query results except for having status code PGRES_SINGLE_TUPLE instead of PGRES_TUPLES_OK. After the last row, or immediately if the query returns zero rows, a zero-row object with status PGRES_TUPLES_OK is returned; this is the signal that no more rows will arrive. (But note that it is still necessary to continue calling PQgetResult until it returns null.) All of these PGresult objects will contain the same row description data (column names, types, etc) that an ordinary PGresult object for the query would have. Each object should be freed with PQclear as usual.
PQsetSingleRowMode
Select single-row mode for the currently-executing query.
This function can only be called immediately after PQsendQuery or one of its sibling functions, before any other operation on the connection such as PQconsumeInput or PQgetResult. If called at the correct time, the function activates single-row mode for the current query and returns 1. Otherwise the mode stays unchanged and the function returns 0. In any case, the mode reverts to normal after completion of the current query.
While processing a query, the server may return some rows and then encounter an error, causing the query to be aborted. Ordinarily, libpq discards any such rows and reports only the error. But in single-row mode, those rows will have already been returned to the application. Hence, the application will see some PGRES_SINGLE_TUPLE PGresult objects followed by a PGRES_FATAL_ERROR object. For proper transactional behavior, the application must be designed to discard or undo whatever has been done with the previously-processed rows, if the query ultimately fails.
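A sketch of the single-row loop (assuming conn is an open connection; the query is a placeholder):

    if (PQsendQuery(conn, "SELECT generate_series(1, 1000000)") == 1 &&
        PQsetSingleRowMode(conn) == 1)
    {
        PGresult *res;

        while ((res = PQgetResult(conn)) != NULL)
        {
            if (PQresultStatus(res) == PGRES_SINGLE_TUPLE)
                printf("row: %s\n", PQgetvalue(res, 0, 0));   /* one row at a time */
            else if (PQresultStatus(res) == PGRES_TUPLES_OK)
                ;   /* zero-row terminator: no more rows for this query */
            else
                fprintf(stderr, "%s", PQresultErrorMessage(res));
            PQclear(res);
        }
    }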
The following environment variables can be used to select default connection parameter values, which will be used by PQconnectdb, PQsetdbLogin and PQsetdb if no value is directly specified by the calling code. These are useful to avoid hard-coding database connection information into simple client applications, for example.
PGHOST
behaves the same as the host connection parameter.
PGHOSTADDR
behaves the same as the hostaddr connection parameter. This can be set instead of or in addition to PGHOST to avoid DNS lookup overhead.
PGPORT
behaves the same as the port connection parameter.
PGDATABASE
behaves the same as the dbname connection parameter.
PGUSER
behaves the same as the user connection parameter.
PGPASSFILE
behaves the same as the passfile connection parameter.
PGSERVICE
behaves the same as the service connection parameter.
PGSERVICEFILE
specifies the name of the per-user connection service file. If not set, it defaults to ~/.pg_service.conf
(see Section 33.16).
PGOPTIONS
behaves the same as the options connection parameter.
PGAPPNAME
behaves the same as the application_name connection parameter.
PGSSLMODE
behaves the same as the sslmode connection parameter.
PGREQUIRESSL
behaves the same as the requiressl connection parameter. This environment variable is deprecated in favor of the PGSSLMODE
variable; setting both variables suppresses the effect of this one.
PGSSLCOMPRESSION
behaves the same as the sslcompression connection parameter.
PGSSLCERT
behaves the same as the sslcert connection parameter.
PGSSLKEY
behaves the same as the sslkey connection parameter.
PGSSLROOTCERT
behaves the same as the sslrootcert connection parameter.
PGSSLCRL
behaves the same as the sslcrl connection parameter.
PGREQUIREPEER
behaves the same as the requirepeer connection parameter.
PGKRBSRVNAME
behaves the same as the krbsrvname connection parameter.
PGGSSLIB
behaves the same as the gsslib connection parameter.
PGCONNECT_TIMEOUT
behaves the same as the connect_timeout connection parameter.
PGCLIENTENCODING
behaves the same as the client_encoding connection parameter.
PGTARGETSESSIONATTRS
behaves the same as the target_session_attrs connection parameter.
The following environment variables can be used to specify default behavior for each PostgreSQL session. (See also the ALTER ROLE and ALTER DATABASE commands for ways to set default behavior on a per-user or per-database basis.)
PGDATESTYLE
sets the default style of date/time representation. (Equivalent to SET datestyle TO ....)
PGTZ
sets the default time zone. (Equivalent to SET timezone TO ....)
PGGEQO
sets the default mode for the genetic query optimizer. (Equivalent to SET geqo TO ....)
Refer to the SQL command SET for information on correct values for these environment variables.
The following environment variables determine internal behavior of libpq; they override compiled-in defaults.
PGSYSCONFDIR
sets the directory containing the pg_service.conf file and, possibly in future versions, other system-wide configuration files.
PGLOCALEDIR
sets the directory containing the locale files for message localization.
The file .pgpass in a user's home directory can contain passwords to be used if the connection requires a password (and no password has been specified otherwise). On Microsoft Windows the file is named %APPDATA%\postgresql\pgpass.conf (where %APPDATA% refers to the Application Data subdirectory in the user's profile). Alternatively, a password file can be specified using the connection parameter passfile or the environment variable PGPASSFILE.
This file should contain lines of the following format:
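hostname:port:database:username:password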
(You can add a reminder comment to the file by copying the line above and preceding it with #.) Each of the first four fields can be a literal value, or *, which matches anything. The password field from the first line that matches the current connection parameters will be used. (Therefore, put more-specific entries first when you are using wildcards.) If an entry needs to contain : or \, escape this character with \. A host name of localhost matches both TCP (host name localhost) and Unix-domain socket (pghost empty or the default socket directory) connections coming from the local machine. On a standby server, a database name of replication matches streaming replication connections made to the master server. The database field is of limited usefulness because users have the same password for all databases in the same cluster.
On Unix systems, the permissions on a password file must disallow any access to world or group; achieve this by a command such as chmod 0600 ~/.pgpass. If the permissions are less strict than this, the file will be ignored. On Microsoft Windows, it is assumed that the file is stored in a directory that is secure, so no special permissions check is made.
These functions control miscellaneous details of libpq's behavior.PQclientEncoding
Returns the client encoding.
Note that it returns the encoding ID, not a symbolic string such as EUC_JP
. If unsuccessful, it returns -1. To convert an encoding ID to an encoding name, you can use:
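char *pg_encoding_to_char(int encoding_id);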
PQsetClientEncoding
Sets the client encoding.
conn
is a connection to the server, and encoding
is the encoding you want to use. If the function successfully sets the encoding, it returns 0, otherwise -1. The current encoding for this connection can be determined by using PQclientEncoding
.PQsetErrorVerbosity
Determines the verbosity of messages returned by PQerrorMessage
and PQresultErrorMessage
.
PQsetErrorVerbosity
sets the verbosity mode, returning the connection's previous setting. In TERSE mode, returned messages include severity, primary text, and position only; this will normally fit on a single line. The default mode produces messages that include the above plus any detail, hint, or context fields (these might span multiple lines). The VERBOSE mode includes all available fields. Changing the verbosity does not affect the messages available from already-existing PGresult
objects, only subsequently-created ones. (But see PQresultVerboseErrorMessage
if you want to print a previous error with a different verbosity.)PQsetErrorContextVisibility
Determines the handling of CONTEXT
fields in messages returned by PQerrorMessage
and PQresultErrorMessage
.
PQsetErrorContextVisibility
sets the context display mode, returning the connection's previous setting. This mode controls whether the CONTEXT
field is included in messages (unless the verbosity setting is TERSE, in which case CONTEXT
is never shown). The NEVER mode never includes CONTEXT
, while ALWAYS always includes it if available. In ERRORS mode (the default), CONTEXT
fields are included only for error messages, not for notices and warnings. Changing this mode does not affect the messages available from already-existing PGresult
objects, only subsequently-created ones. (But see PQresultVerboseErrorMessage
if you want to print a previous error with a different display mode.)PQtrace
Enables tracing of the client/server communication to a debugging file stream.
On Windows, if the libpq library and an application are compiled with different flags, this function call will crash the application because the internal representation of the FILE
pointers differ. Specifically, multithreaded/single-threaded, release/debug, and static/dynamic flags should be the same for the library and all applications using that library.PQuntrace
Disables tracing started by PQtrace
.
The information schema itself is a schema named information_schema
. This schema automatically exists in all databases. The owner of this schema is the initial database user in the cluster, and that user naturally has all the privileges on this schema, including the ability to drop it (but the space savings achieved by that are minuscule).
By default, the information schema is not in the schema search path, so you need to access all objects in it through qualified names. Since the names of some of the objects in the information schema are generic names that might occur in user applications, you should be careful if you want to put the information schema in the path.
The information schema consists of a set of views that contain information about the objects defined in the current database. The information schema is defined in the SQL standard and can therefore be expected to be portable and remain stable — unlike the system catalogs, which are specific to PostgreSQL and are modeled after implementation concerns. The information schema views do not, however, contain information about PostgreSQL-specific features; to inquire about those you need to query the system catalogs or other PostgreSQL-specific views.
When querying the database for constraint information, it is possible for a standard-compliant query that expects to return one row to return several. This is because the SQL standard requires constraint names to be unique within a schema, but PostgreSQL does not enforce this restriction. Constraint names created automatically by PostgreSQL avoid duplicates within a schema, but users can specify duplicate names.
This problem can appear when querying information schema views such as check_constraint_routine_usage, check_constraints, domain_constraints, and referential_constraints. Some other views have a similar issue but contain the table name to help distinguish duplicate rows, e.g., constraint_column_usage, constraint_table_usage, table_constraints.
The columns of the information schema views use special data types that are defined in the information schema. These are defined as simple domains over ordinary built-in types. You should not use these types for work outside the information schema, but your applications must be prepared for them if they select from the information schema.
These types are:
cardinal_number
A nonnegative integer.
character_data
A character string (without specific maximum length).
sql_identifier
A character string. This type is used for SQL identifiers; the type character_data
is used for any other kind of text data.
time_stamp
A domain over the type timestamp with time zone
yes_or_no
A character string domain that contains either YES
or NO
. This is used to represent Boolean (true/false) data in the information schema. (The information schema was invented before the type boolean
was added to the SQL standard, so this convention is necessary to keep the information schema backward compatible.)
Every column in the information schema has one of these five types.
PostgreSQL has a large object facility, which provides stream-style access to user data that is stored in a special large-object structure. Streaming access is useful when working with data values that are too large to manipulate conveniently as a whole.
This chapter describes the implementation and the programming and query language interfaces to PostgreSQL large object data. We use the libpq C library for the examples in this chapter, but most programming interfaces native to PostgreSQL support equivalent functionality. Other interfaces might use the large object interface internally to provide generic support for large values. This is not described here.
This chapter describes the embedded SQL package for PostgreSQL. It was written by Linus Tolke and Michael Meskes. Originally it was written to work with C. It also works with C++, but it does not recognize all C++ constructs yet.
This documentation is quite incomplete. But since this interface is standardized, additional information can be found in many resources about SQL.
Once a connection to a database server has been successfully established, the functions described here are used to perform SQL queries and commands.
PQexec
Submits a command to the server and waits for the result.
Returns a PGresult
pointer or possibly a null pointer. A non-null pointer will generally be returned except in out-of-memory conditions or serious errors such as inability to send the command to the server. The PQresultStatus
function should be called to check the return value for any errors (including the value of a null pointer, in which case it will return PGRES_FATAL_ERROR
). Use PQerrorMessage
to get more information about such errors.
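For illustration, a minimal sketch of submitting a command with PQexec and checking its status (the query text is only an example):

/* Sketch: submit a command and examine the result status. */
PGresult *res = PQexec(conn, "SELECT current_database()");

if (res == NULL)
    fprintf(stderr, "%s", PQerrorMessage(conn));   /* e.g. out of memory */
else
{
    if (PQresultStatus(res) == PGRES_TUPLES_OK)
        printf("connected to %s\n", PQgetvalue(res, 0, 0));
    else
        fprintf(stderr, "%s", PQresultErrorMessage(res));
    PQclear(res);
}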
The command string can include multiple SQL commands (separated by semicolons). Multiple queries sent in a single PQexec
call are processed in a single transaction, unless there are explicit BEGIN
/COMMIT
commands included in the query string to divide it into multiple transactions. Note however that the returned PGresult
structure describes only the result of the last command executed from the string. Should one of the commands fail, processing of the string stops with it and the returned PGresult
describes the error condition.PQexecParams
Submits a command to the server and waits for the result, with the ability to pass parameters separately from the SQL command text.
PQexecParams
is like PQexec
, but offers additional functionality: parameter values can be specified separately from the command string proper, and query results can be requested in either text or binary format. PQexecParams
is supported only in protocol 3.0 and later connections; it will fail when using protocol 2.0.
The function arguments are:conn
The connection object to send the command through.command
The SQL command string to be executed. If parameters are used, they are referred to in the command string as $1
, $2
, etc.nParams
The number of parameters supplied; it is the length of the arrays paramTypes[]
, paramValues[]
, paramLengths[]
, and paramFormats[]
. (The array pointers can be NULL
when nParams
is zero.)paramTypes[]
Specifies, by OID, the data types to be assigned to the parameter symbols. If paramTypes
is NULL
, or any particular element in the array is zero, the server infers a data type for the parameter symbol in the same way it would do for an untyped literal string.paramValues[]
Specifies the actual values of the parameters. A null pointer in this array means the corresponding parameter is null; otherwise the pointer points to a zero-terminated text string (for text format) or binary data in the format expected by the server (for binary format).paramLengths[]
Specifies the actual data lengths of binary-format parameters. It is ignored for null parameters and text-format parameters. The array pointer can be null when there are no binary parameters.paramFormats[]
Specifies whether parameters are text (put a zero in the array entry for the corresponding parameter) or binary (put a one in the array entry for the corresponding parameter). If the array pointer is null then all parameters are presumed to be text strings.
Values passed in binary format require knowledge of the internal representation expected by the backend. For example, integers must be passed in network byte order. Passing numeric
values requires knowledge of the server storage format, as implemented in src/backend/utils/adt/numeric.c::numeric_send()
and src/backend/utils/adt/numeric.c::numeric_recv()
.resultFormat
Specify zero to obtain results in text format, or one to obtain results in binary format. (There is not currently a provision to obtain different result columns in different formats, although that is possible in the underlying protocol.)
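A hedged sketch of calling PQexecParams with a single text-format parameter; the query and parameter value are only examples:

/* Sketch: one text-format parameter, server-inferred type, text results. */
const char *paramValues[1] = { "template1" };

PGresult *res = PQexecParams(conn,
                             "SELECT datname FROM pg_database WHERE datname = $1",
                             1,            /* one parameter */
                             NULL,         /* let the server infer its type */
                             paramValues,
                             NULL,         /* lengths are ignored for text params */
                             NULL,         /* all parameters are in text format */
                             0);           /* ask for text-format results */

if (PQresultStatus(res) != PGRES_TUPLES_OK)
    fprintf(stderr, "%s", PQerrorMessage(conn));
PQclear(res);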
The primary advantage of PQexecParams
over PQexec
is that parameter values can be separated from the command string, thus avoiding the need for tedious and error-prone quoting and escaping.
Unlike PQexec
, PQexecParams
allows at most one SQL command in the given string. (There can be semicolons in it, but not more than one nonempty command.) This is a limitation of the underlying protocol, but has some usefulness as an extra defense against SQL-injection attacks.
Specifying parameter types via OIDs is tedious, particularly if you prefer not to hard-wire particular OID values into your program. However, you can avoid doing so even in cases where the server by itself cannot determine the type of the parameter, or chooses a different type than you want. In the SQL command text, attach an explicit cast to the parameter symbol to show what data type you will send. For example:
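SELECT * FROM mytable WHERE x = $1::bigint;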
This forces parameter $1
to be treated as bigint
, whereas by default it would be assigned the same type as x
. Forcing the parameter type decision, either this way or by specifying a numeric type OID, is strongly recommended when sending parameter values in binary format, because binary format has less redundancy than text format and so there is less chance that the server will detect a type mismatch mistake for you.PQprepare
Submits a request to create a prepared statement with the given parameters, and waits for completion.
The function creates a prepared statement named stmtName
from the query
string, which must contain a single SQL command. stmtName
can be ""
to create an unnamed statement, in which case any pre-existing unnamed statement is automatically replaced; otherwise it is an error if the statement name is already defined in the current session. If any parameters are used, they are referred to in the query as $1
, $2
, etc. nParams
is the number of parameters for which types are pre-specified in the array paramTypes[]
. (The array pointer can be NULL
when nParams
is zero.) paramTypes[]
specifies, by OID, the data types to be assigned to the parameter symbols. If paramTypes
is NULL
, or any particular element in the array is zero, the server assigns a data type to the parameter symbol in the same way it would do for an untyped literal string. Also, the query can use parameter symbols with numbers higher than nParams
; data types will be inferred for these symbols as well. (See PQdescribePrepared
for a means to find out what data types were inferred.)
As with PQexec
, the result is normally a PGresult
object whose contents indicate server-side success or failure. A null result indicates out-of-memory or inability to send the command at all. Use PQerrorMessage
to get more information about such errors.
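For illustration, a hedged sketch that prepares a statement and then executes it with PQexecPrepared; the statement name and query are only examples:

/* Sketch: prepare once, execute as often as needed. */
PGresult *res = PQprepare(conn, "find_db",
                          "SELECT datname FROM pg_database WHERE datname = $1",
                          1, NULL);
if (PQresultStatus(res) != PGRES_COMMAND_OK)
    fprintf(stderr, "prepare failed: %s", PQerrorMessage(conn));
PQclear(res);

const char *paramValues[1] = { "template1" };
res = PQexecPrepared(conn, "find_db", 1, paramValues, NULL, NULL, 0);
if (PQresultStatus(res) == PGRES_TUPLES_OK && PQntuples(res) > 0)
    printf("found: %s\n", PQgetvalue(res, 0, 0));
PQclear(res);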
Sends a request to execute a prepared statement with given parameters, and waits for the result.
PQexecPrepared
is like PQexecParams
, but the command to be executed is specified by naming a previously-prepared statement, instead of giving a query string. This feature allows commands that will be used repeatedly to be parsed and planned just once, rather than each time they are executed. The statement must have been prepared previously in the current session. PQexecPrepared
is supported only in protocol 3.0 and later connections; it will fail when using protocol 2.0.
The parameters are identical to PQexecParams
, except that the name of a prepared statement is given instead of a query string, and the paramTypes[]
parameter is not present (it is not needed since the prepared statement's parameter types were determined when it was created).PQdescribePrepared
Submits a request to obtain information about the specified prepared statement, and waits for completion.
PQdescribePrepared
allows an application to obtain information about a previously prepared statement. PQdescribePrepared
is supported only in protocol 3.0 and later connections; it will fail when using protocol 2.0.
stmtName
can be ""
or NULL
to reference the unnamed statement, otherwise it must be the name of an existing prepared statement. On success, a PGresult
with status PGRES_COMMAND_OK
is returned. The functions PQnparams
and PQparamtype
can be applied to this PGresult
to obtain information about the parameters of the prepared statement, and the functions PQnfields
, PQfname
, PQftype
, etc provide information about the result columns (if any) of the statement.PQdescribePortal
Submits a request to obtain information about the specified portal, and waits for completion.
PQdescribePortal
allows an application to obtain information about a previously created portal. (libpq does not provide any direct access to portals, but you can use this function to inspect the properties of a cursor created with a DECLARE CURSOR
SQL command.) PQdescribePortal
is supported only in protocol 3.0 and later connections; it will fail when using protocol 2.0.
portalName
can be ""
or NULL
to reference the unnamed portal, otherwise it must be the name of an existing portal. On success, a PGresult
with status PGRES_COMMAND_OK
is returned. The functions PQnfields
, PQfname
, PQftype
, etc can be applied to the PGresult
to obtain information about the result columns (if any) of the portal.
The PGresult
structure encapsulates the result returned by the server. libpq application programmers should be careful to maintain the PGresult
abstraction. Use the accessor functions below to get at the contents of PGresult
. Avoid directly referencing the fields of the PGresult
structure because they are subject to change in the future.PQresultStatus
Returns the result status of the command.
PQresultStatus
can return one of the following values:PGRES_EMPTY_QUERY
The string sent to the server was empty.PGRES_COMMAND_OK
Successful completion of a command returning no data.PGRES_TUPLES_OK
Successful completion of a command returning data (such as a SELECT
or SHOW
).PGRES_COPY_OUT
Copy Out (from server) data transfer started.PGRES_COPY_IN
Copy In (to server) data transfer started.PGRES_BAD_RESPONSE
The server's response was not understood.PGRES_NONFATAL_ERROR
A nonfatal error (a notice or warning) occurred.PGRES_FATAL_ERROR
A fatal error occurred.PGRES_COPY_BOTH
Copy In/Out (to and from server) data transfer started. This feature is currently used only for streaming replication, so this status should not occur in ordinary applications.PGRES_SINGLE_TUPLE
If the result status is PGRES_TUPLES_OK
or PGRES_SINGLE_TUPLE
, then the functions described below can be used to retrieve the rows returned by the query. Note that a SELECT
command that happens to retrieve zero rows still shows PGRES_TUPLES_OK
. PGRES_COMMAND_OK
is for commands that can never return rows (INSERT
or UPDATE
without a RETURNING
clause, etc.). A response of PGRES_EMPTY_QUERY
might indicate a bug in the client software.
Converts the enumerated type returned by PQresultStatus
into a string constant describing the status code. The caller should not free the result.
PQresultErrorMessage
Returns the error message associated with the command, or an empty string if there was no error.
If there was an error, the returned string will include a trailing newline. The caller should not free the result directly. It will be freed when the associated PGresult
handle is passed to PQclear
.
Immediately following a PQexec
or PQgetResult
call, PQerrorMessage
(on the connection) will return the same string as PQresultErrorMessage
(on the result). However, a PGresult
will retain its error message until destroyed, whereas the connection's error message will change when subsequent operations are done. Use PQresultErrorMessage
when you want to know the status associated with a particular PGresult
; use PQerrorMessage
when you want to know the status from the latest operation on the connection.PQresultVerboseErrorMessage
Returns a reformatted version of the error message associated with a PGresult
object.
In some situations a client might wish to obtain a more detailed version of a previously-reported error. PQresultVerboseErrorMessage
addresses this need by computing the message that would have been produced by PQresultErrorMessage
if the specified verbosity settings had been in effect for the connection when the given PGresult
was generated. If the PGresult
is not an error result, “PGresult is not an error result” is reported instead. The returned string includes a trailing newline.
Unlike most other functions for extracting data from a PGresult
, the result of this function is a freshly allocated string. The caller must free it using PQfreemem()
when the string is no longer needed.
A NULL return is possible if there is insufficient memory.PQresultErrorField
Returns an individual field of an error report.
fieldcode
is an error field identifier; see the symbols listed below. NULL
is returned if the PGresult
is not an error or warning result, or does not include the specified field. Field values will normally not include a trailing newline. The caller should not free the result directly. It will be freed when the associated PGresult
handle is passed to PQclear
.
The following field codes are available:PG_DIAG_SEVERITY
The severity; the field contents are ERROR
, FATAL
, or PANIC
(in an error message), or WARNING
, NOTICE
, DEBUG
, INFO
, or LOG
(in a notice message), or a localized translation of one of these. Always present.PG_DIAG_SEVERITY_NONLOCALIZED
The severity; the field contents are ERROR
, FATAL
, or PANIC
(in an error message), or WARNING
, NOTICE
, DEBUG
, INFO
, or LOG
(in a notice message). This is identical to the PG_DIAG_SEVERITY
field except that the contents are never localized. This is present only in reports generated by PostgreSQL versions 9.6 and later.PG_DIAG_SQLSTATE
The SQLSTATE code for the error. The SQLSTATE code identifies the type of error that has occurred; it can be used by front-end applications to perform specific operations (such as error handling) in response to a particular database error. For a list of the possible SQLSTATE codes, see . This field is not localizable, and is always present.PG_DIAG_MESSAGE_PRIMARY
The primary human-readable error message (typically one line). Always present.PG_DIAG_MESSAGE_DETAIL
Detail: an optional secondary error message carrying more detail about the problem. Might run to multiple lines.PG_DIAG_MESSAGE_HINT
Hint: an optional suggestion what to do about the problem. This is intended to differ from detail in that it offers advice (potentially inappropriate) rather than hard facts. Might run to multiple lines.PG_DIAG_STATEMENT_POSITION
A string containing a decimal integer indicating an error cursor position as an index into the original statement string. The first character has index 1, and positions are measured in characters not bytes.PG_DIAG_INTERNAL_POSITION
This is defined the same as the PG_DIAG_STATEMENT_POSITION
field, but it is used when the cursor position refers to an internally generated command rather than the one submitted by the client. The PG_DIAG_INTERNAL_QUERY
field will always appear when this field appears.PG_DIAG_INTERNAL_QUERY
The text of a failed internally-generated command. This could be, for example, a SQL query issued by a PL/pgSQL function.PG_DIAG_CONTEXT
An indication of the context in which the error occurred. Presently this includes a call stack traceback of active procedural language functions and internally-generated queries. The trace is one entry per line, most recent first.PG_DIAG_SCHEMA_NAME
If the error was associated with a specific database object, the name of the schema containing that object, if any.PG_DIAG_TABLE_NAME
If the error was associated with a specific table, the name of the table. (Refer to the schema name field for the name of the table's schema.)PG_DIAG_COLUMN_NAME
If the error was associated with a specific table column, the name of the column. (Refer to the schema and table name fields to identify the table.)PG_DIAG_DATATYPE_NAME
If the error was associated with a specific data type, the name of the data type. (Refer to the schema name field for the name of the data type's schema.)PG_DIAG_CONSTRAINT_NAME
If the error was associated with a specific constraint, the name of the constraint. Refer to fields listed above for the associated table or domain. (For this purpose, indexes are treated as constraints, even if they weren't created with constraint syntax.)PG_DIAG_SOURCE_FILE
The file name of the source-code location where the error was reported.PG_DIAG_SOURCE_LINE
The line number of the source-code location where the error was reported.PG_DIAG_SOURCE_FUNCTION
The name of the source-code function reporting the error.
The client is responsible for formatting displayed information to meet its needs; in particular it should break long lines as needed. Newline characters appearing in the error message fields should be treated as paragraph breaks, not line breaks.
Errors generated internally by libpq will have severity and primary message, but typically no other fields. Errors returned by a pre-3.0-protocol server will include severity and primary message, and sometimes a detail message, but no other fields.
Note that error fields are only available from PGresult
objects, not PGconn
objects; there is no PQerrorField
function.PQclear
Frees the storage associated with a PGresult
. Every command result should be freed via PQclear
when it is no longer needed.
You can keep a PGresult
object around for as long as you need it; it does not go away when you issue a new command, nor even if you close the connection. To get rid of it, you must call PQclear
. Failure to do this will result in memory leaks in your application.
These functions are used to extract information from a PGresult
object that represents a successful query result (that is, one that has status PGRES_TUPLES_OK
or PGRES_SINGLE_TUPLE
). They can also be used to extract information from a successful Describe operation: a Describe's result has all the same column information that actual execution of the query would provide, but it has zero rows. For objects with other status values, these functions will act as though the result has zero rows and zero columns.PQntuples
Returns the number of rows (tuples) in the query result. (Note that PGresult
objects are limited to no more than INT_MAX
rows, so an int
result is sufficient.)
PQnfields
Returns the number of columns (fields) in each row of the query result.
PQfname
Returns the column name associated with the given column number. Column numbers start at 0. The caller should not free the result directly. It will be freed when the associated PGresult
handle is passed to PQclear
.
NULL
is returned if the column number is out of range.PQfnumber
Returns the column number associated with the given column name.
-1 is returned if the given name does not match any column.
The given name is treated like an identifier in an SQL command, that is, it is downcased unless double-quoted. For example, given a query result generated from the SQL command:
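SELECT 1 AS FOO, 2 AS "BAR";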
we would have the results:
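PQfname(res, 0)              foo
PQfname(res, 1)              BAR
PQfnumber(res, "FOO")        0
PQfnumber(res, "foo")        0
PQfnumber(res, "BAR")        -1
PQfnumber(res, "\"BAR\"")    1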
PQftable
Returns the OID of the table from which the given column was fetched. Column numbers start at 0.
InvalidOid
is returned if the column number is out of range, or if the specified column is not a simple reference to a table column, or when using pre-3.0 protocol. You can query the system table pg_class
to determine exactly which table is referenced.
The type Oid
and the constant InvalidOid
will be defined when you include the libpq header file. They will both be some integer type.PQftablecol
Returns the column number (within its table) of the column making up the specified query result column. Query-result column numbers start at 0, but table columns have nonzero numbers.
Zero is returned if the column number is out of range, or if the specified column is not a simple reference to a table column, or when using pre-3.0 protocol.PQfformat
Returns the format code indicating the format of the given column. Column numbers start at 0.
Format code zero indicates textual data representation, while format code one indicates binary representation. (Other codes are reserved for future definition.)PQftype
Returns the data type associated with the given column number. The integer returned is the internal OID number of the type. Column numbers start at 0.
You can query the system table pg_type
to obtain the names and properties of the various data types. The OIDs of the built-in data types are defined in the file src/include/catalog/pg_type.h
in the source tree.PQfmod
Returns the type modifier of the column associated with the given column number. Column numbers start at 0.
The interpretation of modifier values is type-specific; they typically indicate precision or size limits. The value -1 is used to indicate “no information available”. Most data types do not use modifiers, in which case the value is always -1.PQfsize
Returns the size in bytes of the column associated with the given column number. Column numbers start at 0.
PQfsize
returns the space allocated for this column in a database row, in other words the size of the server's internal representation of the data type. (Accordingly, it is not really very useful to clients.) A negative value indicates the data type is variable-length.PQbinaryTuples
Returns 1 if the PGresult
contains binary data and 0 if it contains text data.
This function is deprecated (except for its use in connection with COPY
), because it is possible for a single PGresult
to contain text data in some columns and binary data in others. PQfformat
is preferred. PQbinaryTuples
returns 1 only if all columns of the result are binary (format 1).PQgetvalue
Returns a single field value of one row of a PGresult
. Row and column numbers start at 0. The caller should not free the result directly. It will be freed when the associated PGresult
handle is passed to PQclear
.
For data in text format, the value returned by PQgetvalue
is a null-terminated character string representation of the field value. For data in binary format, the value is in the binary representation determined by the data type's typsend
and typreceive
functions. (The value is actually followed by a zero byte in this case too, but that is not ordinarily useful, since the value is likely to contain embedded nulls.)
An empty string is returned if the field value is null. See PQgetisnull
to distinguish null values from empty-string values.
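For illustration, a short sketch that walks a text-format result using PQntuples, PQnfields, PQgetisnull, and PQgetvalue; res is assumed to hold a PGRES_TUPLES_OK result:

/* Sketch: print every field of every row, handling null values. */
int nrows = PQntuples(res);
int ncols = PQnfields(res);

for (int r = 0; r < nrows; r++)
    for (int c = 0; c < ncols; c++)
    {
        if (PQgetisnull(res, r, c))
            printf("%s: (null)\n", PQfname(res, c));
        else
            printf("%s: %s\n", PQfname(res, c), PQgetvalue(res, r, c));
    }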
The pointer returned by PQgetvalue
points to storage that is part of the PGresult
structure. One should not modify the data it points to, and one must explicitly copy the data into other storage if it is to be used past the lifetime of the PGresult
structure itself.PQgetisnull
Tests a field for a null value. Row and column numbers start at 0.
This function returns 1 if the field is null and 0 if it contains a non-null value. (Note that PQgetvalue
will return an empty string, not a null pointer, for a null field.)PQgetlength
Returns the actual length of a field value in bytes. Row and column numbers start at 0.
This is the actual data length for the particular data value, that is, the size of the object pointed to by PQgetvalue
. For text data format this is the same as strlen()
. For binary format this is essential information. Note that one should not rely on PQfsize
to obtain the actual data length.PQnparams
Returns the number of parameters of a prepared statement.
This function is only useful when inspecting the result of PQdescribePrepared
. For other types of queries it will return zero.PQparamtype
Returns the data type of the indicated statement parameter. Parameter numbers start at 0.
This function is only useful when inspecting the result of PQdescribePrepared
. For other types of queries it will return zero.PQprint
Prints out all the rows and, optionally, the column names to the specified output stream.
This function was formerly used by psql to print query results, but this is no longer the case. Note that it assumes all the data is in text format.
These functions are used to extract other information from PGresult
objects.PQcmdStatus
Returns the command status tag from the SQL command that generated the PGresult
.
Commonly this is just the name of the command, but it might include additional data such as the number of rows processed. The caller should not free the result directly. It will be freed when the associated PGresult
handle is passed to PQclear
.PQcmdTuples
Returns the number of rows affected by the SQL command.
This function returns a string containing the number of rows affected by the SQL statement that generated the PGresult
. This function can only be used following the execution of a SELECT
, CREATE TABLE AS
, INSERT
, UPDATE
, DELETE
, MOVE
, FETCH
, or COPY
statement, or an EXECUTE
of a prepared query that contains an INSERT
, UPDATE
, or DELETE
statement. If the command that generated the PGresult
was anything else, PQcmdTuples
returns an empty string. The caller should not free the return value directly. It will be freed when the associated PGresult
handle is passed to PQclear
.PQoidValue
Returns the OID of the inserted row, if the SQL command was an INSERT
that inserted exactly one row into a table that has OIDs, or a EXECUTE
of a prepared query containing a suitable INSERT
statement. Otherwise, this function returns InvalidOid
. This function will also return InvalidOid
if the table affected by the INSERT
statement does not contain OIDs.
PQoidStatus
This function is deprecated in favor of PQoidValue
and is not thread-safe. It returns a string with the OID of the inserted row, while PQoidValue
returns the OID value.
PQescapeLiteral
PQescapeLiteral
escapes a string for use within an SQL command. This is useful when inserting data values as literal constants in SQL commands. Certain characters (such as quotes and backslashes) must be escaped to prevent them from being interpreted specially by the SQL parser. PQescapeLiteral
performs this operation.
PQescapeLiteral
returns an escaped version of the str
parameter in memory allocated with malloc()
. This memory should be freed using PQfreemem()
when the result is no longer needed. A terminating zero byte is not required, and should not be counted in length
. (If a terminating zero byte is found before length
bytes are processed, PQescapeLiteral
stops at the zero; the behavior is thus rather like strncpy
.) The return string has all special characters replaced so that they can be properly processed by the PostgreSQL string literal parser. A terminating zero byte is also added. The single quotes that must surround PostgreSQL string literals are included in the result string.
On error, PQescapeLiteral
returns NULL
and a suitable message is stored in the conn
object.
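A hedged sketch of using PQescapeLiteral to embed an untrusted value in a command string; the table name and the input parameter are hypothetical:

/* Sketch: quote an untrusted value before interpolating it into SQL. */
#include <stdio.h>
#include <string.h>
#include <libpq-fe.h>

static void lookup_person(PGconn *conn, const char *untrusted_name)
{
    char *quoted = PQescapeLiteral(conn, untrusted_name, strlen(untrusted_name));

    if (quoted == NULL)
    {
        fprintf(stderr, "%s", PQerrorMessage(conn));
        return;
    }

    char query[1024];
    snprintf(query, sizeof(query),
             "SELECT * FROM people WHERE name = %s", quoted);
    PQfreemem(quoted);

    PGresult *res = PQexec(conn, query);
    /* ... check PQresultStatus(res) ... */
    PQclear(res);
}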
It is especially important to do proper escaping when handling strings that were received from an untrustworthy source. Otherwise there is a security risk: you are vulnerable to “SQL injection” attacks wherein unwanted SQL commands are fed to your database.
Note that it is not necessary nor correct to do escaping when a data value is passed as a separate parameter in PQexecParams
or its sibling routines.PQescapeIdentifier
PQescapeIdentifier
escapes a string for use as an SQL identifier, such as a table, column, or function name. This is useful when a user-supplied identifier might contain special characters that would otherwise not be interpreted as part of the identifier by the SQL parser, or when the identifier might contain upper case characters whose case should be preserved.
PQescapeIdentifier
returns a version of the str
parameter escaped as an SQL identifier in memory allocated with malloc()
. This memory must be freed using PQfreemem()
when the result is no longer needed. A terminating zero byte is not required, and should not be counted in length
. (If a terminating zero byte is found before length
bytes are processed, PQescapeIdentifier
stops at the zero; the behavior is thus rather like strncpy
.) The return string has all special characters replaced so that it will be properly processed as an SQL identifier. A terminating zero byte is also added. The return string will also be surrounded by double quotes.
On error, PQescapeIdentifier
returns NULL
and a suitable message is stored in the conn
object.
As with string literals, to prevent SQL injection attacks, SQL identifiers must be escaped when they are received from an untrustworthy source.PQescapeStringConn
PQescapeStringConn
escapes string literals, much like PQescapeLiteral
. Unlike PQescapeLiteral
, the caller is responsible for providing an appropriately sized buffer. Furthermore, PQescapeStringConn
does not generate the single quotes that must surround PostgreSQL string literals; they should be provided in the SQL command that the result is inserted into. The parameter from
points to the first character of the string that is to be escaped, and the length
parameter gives the number of bytes in this string. A terminating zero byte is not required, and should not be counted in length
. (If a terminating zero byte is found before length
bytes are processed, PQescapeStringConn
stops at the zero; the behavior is thus rather like strncpy
.) to
shall point to a buffer that is able to hold at least one more byte than twice the value of length
, otherwise the behavior is undefined. Behavior is likewise undefined if the to
and from
strings overlap.
If the error
parameter is not NULL
, then *error
is set to zero on success, nonzero on error. Presently the only possible error conditions involve invalid multibyte encoding in the source string. The output string is still generated on error, but it can be expected that the server will reject it as malformed. On error, a suitable message is stored in the conn object, whether or not error is NULL.
PQescapeStringConn
returns the number of bytes written to to
, not including the terminating zero byte.PQescapeString
PQescapeString
is an older, deprecated version of PQescapeStringConn
.
The only difference from PQescapeStringConn
is that PQescapeString
does not take PGconn
or error
parameters. Because of this, it cannot adjust its behavior depending on the connection properties (such as character encoding) and therefore it might give the wrong results. Also, it has no way to report error conditions.
PQescapeString
can be used safely in client programs that work with only one PostgreSQL connection at a time (in this case it can find out what it needs to know “behind the scenes”). In other contexts it is a security hazard and should be avoided in favor of PQescapeStringConn
.PQescapeByteaConn
Escapes binary data for use within an SQL command with the type bytea
. As with PQescapeStringConn
, this is only used when inserting data directly into an SQL command string.
The from
parameter points to the first byte of the string that is to be escaped, and the from_length
parameter gives the number of bytes in this binary string. (A terminating zero byte is neither necessary nor counted.) The to_length
parameter points to a variable that will hold the resultant escaped string length. This result string length includes the terminating zero byte of the result.
PQescapeByteaConn
returns an escaped version of the from
parameter binary string in memory allocated with malloc()
. This memory should be freed using PQfreemem()
when the result is no longer needed. The return string has all special characters replaced so that they can be properly processed by the PostgreSQL string literal parser, and the bytea
input function. A terminating zero byte is also added. The single quotes that must surround PostgreSQL string literals are not part of the result string.
On error, a null pointer is returned, and a suitable error message is stored in the conn
object. Currently, the only possible error is insufficient memory for the result string.PQescapeBytea
PQescapeBytea
is an older, deprecated version of PQescapeByteaConn
.
The only difference from PQescapeByteaConn
is that PQescapeBytea
does not take a PGconn
parameter. Because of this, PQescapeBytea
can only be used safely in client programs that use a single PostgreSQL connection at a time (in this case it can find out what it needs to know “behind the scenes”). It might give the wrong results if used in programs that use multiple database connections (use PQescapeByteaConn
in such cases).PQunescapeBytea
Converts a string representation of binary data into binary data — the reverse of PQescapeBytea
. This is needed when retrieving bytea
data in text format, but not when retrieving it in binary format.
The from
parameter points to a string such as might be returned by PQgetvalue
when applied to a bytea
column. PQunescapeBytea
converts this string representation into its binary representation. It returns a pointer to a buffer allocated with malloc()
, or NULL
on error, and puts the size of the buffer in to_length
. The result must be freed using PQfreemem
when it is no longer needed.
This conversion is not exactly the inverse of PQescapeBytea
, because the string is not expected to be “escaped” when received from PQgetvalue
. In particular this means there is no need for string quoting considerations, and so no need for a PGconn
parameter.
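For illustration, a hedged sketch of escaping binary data with PQescapeByteaConn and decoding a text-format bytea value with PQunescapeBytea; conn and res are assumed to already exist:

/* Sketch: escape raw bytes for an SQL command, and decode a fetched value. */
const unsigned char raw[] = { 0x00, 0x01, 0xff };
size_t escaped_len;
unsigned char *escaped = PQescapeByteaConn(conn, raw, sizeof(raw), &escaped_len);

if (escaped != NULL)
{
    /* escaped is a null-terminated string, ready to be placed inside the
     * single quotes of an SQL literal. */
    PQfreemem(escaped);
}

/* The reverse direction, after fetching a bytea column in text format: */
size_t binary_len;
unsigned char *binary = PQunescapeBytea(
    (const unsigned char *) PQgetvalue(res, 0, 0), &binary_len);
if (binary != NULL)
    PQfreemem(binary);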
An embedded SQL program consists of code written in an ordinary programming language, in this case C, mixed with SQL commands in specially marked sections. To build the program, the source code (*.pgc
) is first passed through the embedded SQL preprocessor, which converts it to an ordinary C program (*.c
), and afterwards it can be processed by a C compiler. (For details about the compiling and linking see ). Converted ECPG applications call functions in the libpq library through the embedded SQL library (ecpglib), and communicate with the PostgreSQL server using the normal frontend-backend protocol.
Embedded SQL has advantages over other methods for handling SQL commands from C code. First, it takes care of the tedious passing of information to and from variables in your C program. Second, the SQL code in the program is checked at build time for syntactical correctness. Third, embedded SQL in C is specified in the SQL standard and supported by many other SQL database systems. The PostgreSQL implementation is designed to match this standard as much as possible, and it is usually possible to port embedded SQL programs written for other SQL databases to PostgreSQL with relative ease.
As already stated, programs written for the embedded SQL interface are normal C programs with special code inserted to perform database-related actions. This special code always has the form:
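EXEC SQL ...;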
These statements syntactically take the place of a C statement. Depending on the particular statement, they can appear at the global level or within a function. Embedded SQL statements follow the case-sensitivity rules of normal SQL code, and not those of C. Also they allow nested C-style comments that are part of the SQL standard. The C part of the program, however, follows the C standard of not accepting nested comments.
The following sections explain all the embedded SQL statements.
These examples and others can be found in the directory src/test/examples
in the source code distribution.
Example 33.1. libpq Example Program 1
Example 33.2. libpq Example Program 2
Example 33.3. libpq Example Program 3
The view applicable_roles
identifies all roles whose privileges the current user can use. This means there is some chain of role grants from the current user to the role in question. The current user itself is also an applicable role. The set of applicable roles is generally used for permission checking.
applicable_roles
ColumnsThe view attributes
contains information about the attributes of composite data types defined in the database. (Note that the view does not give information about table columns, which are sometimes called attributes in PostgreSQL contexts.) Only those attributes are shown that the current user has access to (by way of being the owner of or having some privilege on the type).
attributes
ColumnsThe view administrable_role_authorizations
identifies all roles that the current user has the admin option for.
Table 37.2. administrable_role_authorizations
Columns
The view collation_character_set_applicability
identifies which character set the available collations are applicable to. In PostgreSQL, there is only one character set per database (see explanation in ), so this view does not provide much useful information.
collation_character_set_applicability
ColumnsThe view character_sets
identifies the character sets available in the current database. Since PostgreSQL does not support multiple character sets within one database, this view only shows one, which is the database encoding.
Take note of how the following terms are used in the SQL standard:
character repertoire
An abstract collection of characters, for example UNICODE
, UCS
, or LATIN1
. Not exposed as an SQL object, but visible in this view.
character encoding form
An encoding of some character repertoire. Most older character repertoires only use one encoding form, and so there are no separate names for them (e.g., LATIN1
is an encoding form applicable to the LATIN1
repertoire). But for example Unicode has the encoding forms UTF8
, UTF16
, etc. (not all supported by PostgreSQL). Encoding forms are not exposed as an SQL object, but are visible in this view.
character set
A named SQL object that identifies a character repertoire, a character encoding, and a default collation. A predefined character set would typically have the same name as an encoding form, but users could define other names. For example, the character set UTF8
would typically identify the character repertoire UCS
, encoding form UTF8
, and some default collation.
You can think of an “encoding” in PostgreSQL either as a character set or a character encoding form. They will have the same name, and there can only be one in one database.
character_sets
ColumnsThe view columns
contains information about all table columns (or view columns) in the database. System columns (ctid
, etc.) are not included. Only those columns are shown that the current user has access to (by way of being the owner or having some privilege).
columns
ColumnsSince data types can be defined in a variety of ways in SQL, and PostgreSQL contains additional ways to define data types, their representation in the information schema can be somewhat difficult. The column data_type
is supposed to identify the underlying built-in type of the column. In PostgreSQL, this means that the type is defined in the system catalog schema pg_catalog
. This column might be useful if the application can handle the well-known built-in types specially (for example, format the numeric types differently or use the data in the precision columns). The columns udt_name
, udt_schema
, and udt_catalog
always identify the underlying data type of the column, even if the column is based on a domain. (Since PostgreSQL treats built-in types like user-defined types, built-in types appear here as well. This is an extension of the SQL standard.) These columns should be used if an application wants to process data differently according to the type, because in that case it wouldn't matter if the column is really based on a domain. If the column is based on a domain, the identity of the domain is stored in the columns domain_name
, domain_schema
, and domain_catalog
. If you want to pair up columns with their associated data types and treat domains as separate types, you could write coalesce(domain_name, udt_name)
, etc.
The view column_privileges
identifies all privileges granted on columns to a currently enabled role or by a currently enabled role. There is one row for each combination of column, grantor, and grantee.
If a privilege has been granted on an entire table, it will show up in this view as a grant for each column, but only for the privilege types where column granularity is possible: SELECT
, INSERT
, UPDATE
, REFERENCES
.
column_privileges
ColumnsThe view constraint_column_usage
identifies all columns in the current database that are used by some constraint. Only those columns are shown that are contained in a table owned by a currently enabled role. For a check constraint, this view identifies the columns that are used in the check expression. For a foreign key constraint, this view identifies the columns that the foreign key references. For a unique or primary key constraint, this view identifies the constrained columns.
constraint_column_usage
ColumnsThe view data_type_privileges
identifies all data type descriptors that the current user has access to, by way of being the owner of the described object or having some privilege for it. A data type descriptor is generated whenever a data type is used in the definition of a table column, a domain, or a function (as parameter or return type) and stores some information about how the data type is used in that instance (for example, the declared maximum length, if applicable). Each data type descriptor is assigned an arbitrary identifier that is unique among the data type descriptor identifiers assigned for one object (table, domain, function). This view is probably not useful for applications, but it is used to define some other views in the information schema.
data_type_privileges
ColumnsThe view role_udt_grants
is intended to identify USAGE
privileges granted on user-defined types where the grantor or grantee is a currently enabled role. Further information can be found under udt_privileges
. The only effective difference between this view and udt_privileges
is that this view omits objects that have been made accessible to the current user by way of a grant to PUBLIC
. Since data types do not have real privileges in PostgreSQL, but only an implicit grant to PUBLIC
, this view is empty.
role_udt_grants
ColumnsThe view domain_constraints
contains all constraints belonging to domains defined in the current database. Only those domains are shown that the current user has access to (by way of being the owner or having some privilege).
Table 37.19. domain_constraints
Columns
The view constraint_table_usage
identifies all tables in the current database that are used by some constraint and are owned by a currently enabled role. (This is different from the view table_constraints
, which identifies all table constraints along with the table they are defined on.) For a foreign key constraint, this view identifies the table that the foreign key references. For a unique or primary key constraint, this view simply identifies the table the constraint belongs to. Check constraints and not-null constraints are not included in this view.
constraint_table_usage
ColumnsThe view referential_constraints
contains all referential (foreign key) constraints in the current database. Only those constraints are shown for which the current user has write access to the referencing table (by way of being the owner or having some privilege other than SELECT
).
referential_constraints
ColumnsThe view role_usage_grants
identifies USAGE
privileges granted on various kinds of objects where the grantor or grantee is a currently enabled role. Further information can be found under usage_privileges
. The only effective difference between this view and usage_privileges
is that this view omits objects that have been made accessible to the current user by way of a grant to PUBLIC
.
role_usage_grants
ColumnsThe table sql_parts
contains information about which of the several parts of the SQL standard are supported by PostgreSQL.
Table 37.48. sql_parts
Columns
The view sequences
contains all sequences defined in the current database. Only those sequences are shown that the current user has access to (by way of being the owner or having some privilege).
sequences
ColumnsNote that in accordance with the SQL standard, the start, minimum, maximum, and increment values are returned as character strings.
The table sql_implementation_info
contains information about various aspects that are left implementation-defined by the SQL standard. This information is primarily intended for use in the context of the ODBC interface; users of other interfaces will probably find this information to be of little use. For this reason, the individual implementation information items are not described here; you will find them in the description of the ODBC interface.
sql_implementation_info
ColumnsPQprepare
creates a prepared statement for later execution with PQexecPrepared
. This feature allows commands to be executed repeatedly without being parsed and planned each time; see for details. PQprepare
is supported only in protocol 3.0 and later connections; it will fail when using protocol 2.0.
Prepared statements for use with PQexecPrepared
can also be created by executing SQL statements. Also, although there is no libpq function for deleting a prepared statement, the SQL statement can be used for that purpose.PQexecPrepared
The PGresult
contains a single result tuple from the current command. This status occurs only when single-row mode has been selected for the query (see ).
A result of status PGRES_NONFATAL_ERROR
will never be returned directly by PQexec
or other query execution functions; results of this kind are instead passed to the notice processor (see ).PQresStatus
The fields for schema name, table name, column name, data type name, and constraint name are supplied only for a limited number of error types; see . Do not assume that the presence of any of these fields guarantees the presence of another field. Core error sources observe the interrelationships noted above, but user-defined functions may use these fields in other ways. In the same vein, do not assume that these fields denote contemporary objects in the current database.
Certain byte values must be escaped when used as part of a bytea
literal in an SQL statement. PQescapeByteaConn
escapes bytes using either hex encoding or backslash escaping. See for more information.
See also under , a similarly structured view, for further information on some of the columns.
Column Type
Description
catalog_name
sql_identifier
Name of the database that contains this information schema
Column Type
Description
constraint_catalog
sql_identifier
Name of the database containing the constraint (always the current database)
constraint_schema
sql_identifier
Name of the schema containing the constraint
constraint_name
sql_identifier
Name of the constraint
specific_catalog
sql_identifier
Name of the database containing the function (always the current database)
specific_schema
sql_identifier
Name of the schema containing the function
specific_name
sql_identifier
The “specific name” of the function. See Section 37.45 for more information.
Column Type
Description
grantee
sql_identifier
Name of the role to which this role membership was granted (can be the current user, or a different role in case of nested role memberships)
role_name
sql_identifier
Name of a role
is_grantable
yes_or_no
YES
if the grantee has the admin option on the role, NO
if not
Columns:
udt_catalog (sql_identifier): Name of the database containing the data type (always the current database)
udt_schema (sql_identifier): Name of the schema containing the data type
udt_name (sql_identifier): Name of the data type
attribute_name (sql_identifier): Name of the attribute
ordinal_position (cardinal_number): Ordinal position of the attribute within the data type (count starts at 1)
attribute_default (character_data): Default expression of the attribute
is_nullable (yes_or_no): YES if the attribute is possibly nullable, NO if it is known not nullable.
data_type (character_data): Data type of the attribute, if it is a built-in type, or ARRAY if it is some array (in that case, see the view element_types), else USER-DEFINED (in that case, the type is identified in attribute_udt_name and associated columns).
character_maximum_length (cardinal_number): If data_type identifies a character or bit string type, the declared maximum length; null for all other data types or if no maximum length was declared.
character_octet_length (cardinal_number): If data_type identifies a character type, the maximum possible length in octets (bytes) of a datum; null for all other data types. The maximum octet length depends on the declared character maximum length (see above) and the server encoding.
character_set_catalog (sql_identifier): Applies to a feature not available in PostgreSQL
character_set_schema (sql_identifier): Applies to a feature not available in PostgreSQL
character_set_name (sql_identifier): Applies to a feature not available in PostgreSQL
collation_catalog (sql_identifier): Name of the database containing the collation of the attribute (always the current database), null if default or the data type of the attribute is not collatable
collation_schema (sql_identifier): Name of the schema containing the collation of the attribute, null if default or the data type of the attribute is not collatable
collation_name (sql_identifier): Name of the collation of the attribute, null if default or the data type of the attribute is not collatable
numeric_precision (cardinal_number): If data_type identifies a numeric type, this column contains the (declared or implicit) precision of the type for this attribute. The precision indicates the number of significant digits. It can be expressed in decimal (base 10) or binary (base 2) terms, as specified in the column numeric_precision_radix. For all other data types, this column is null.
numeric_precision_radix (cardinal_number): If data_type identifies a numeric type, this column indicates in which base the values in the columns numeric_precision and numeric_scale are expressed. The value is either 2 or 10. For all other data types, this column is null.
numeric_scale (cardinal_number): If data_type identifies an exact numeric type, this column contains the (declared or implicit) scale of the type for this attribute. The scale indicates the number of significant digits to the right of the decimal point. It can be expressed in decimal (base 10) or binary (base 2) terms, as specified in the column numeric_precision_radix. For all other data types, this column is null.
datetime_precision (cardinal_number): If data_type identifies a date, time, timestamp, or interval type, this column contains the (declared or implicit) fractional seconds precision of the type for this attribute, that is, the number of decimal digits maintained following the decimal point in the seconds value. For all other data types, this column is null.
interval_type (character_data): If data_type identifies an interval type, this column contains the specification which fields the intervals include for this attribute, e.g., YEAR TO MONTH, DAY TO SECOND, etc. If no field restrictions were specified (that is, the interval accepts all fields), and for all other data types, this field is null.
interval_precision (cardinal_number): Applies to a feature not available in PostgreSQL (see datetime_precision for the fractional seconds precision of interval type attributes)
attribute_udt_catalog (sql_identifier): Name of the database that the attribute data type is defined in (always the current database)
attribute_udt_schema (sql_identifier): Name of the schema that the attribute data type is defined in
attribute_udt_name (sql_identifier): Name of the attribute data type
scope_catalog (sql_identifier): Applies to a feature not available in PostgreSQL
scope_schema (sql_identifier): Applies to a feature not available in PostgreSQL
scope_name (sql_identifier): Applies to a feature not available in PostgreSQL
maximum_cardinality (cardinal_number): Always null, because arrays always have unlimited maximum cardinality in PostgreSQL
dtd_identifier (sql_identifier): An identifier of the data type descriptor of the column, unique among the data type descriptors pertaining to the table. This is mainly useful for joining with other instances of such identifiers. (The specific format of the identifier is not defined and not guaranteed to remain the same in future versions.)
is_derived_reference_attribute (yes_or_no): Applies to a feature not available in PostgreSQL
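The column list above corresponds to the information_schema.attributes view, which describes the attributes of composite data types defined in the database. Assuming that is the view being described, a minimal query might look like the following; the type name my_composite_type is only a placeholder:

    SELECT attribute_name, data_type, ordinal_position
    FROM information_schema.attributes
    WHERE udt_schema = 'public'
      AND udt_name = 'my_composite_type'   -- hypothetical composite type name
    ORDER BY ordinal_position;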
Columns:
grantee (sql_identifier): Name of the role to which this role membership was granted (can be the current user, or a different role in case of nested role memberships)
role_name (sql_identifier): Name of a role
is_grantable (yes_or_no): Always YES

Columns:
constraint_catalog (sql_identifier): Name of the database containing the constraint (always the current database)
constraint_schema (sql_identifier): Name of the schema containing the constraint
constraint_name (sql_identifier): Name of the constraint
check_clause (character_data): The check expression of the check constraint

Columns:
character_set_catalog (sql_identifier): Character sets are currently not implemented as schema objects, so this column is null.
character_set_schema (sql_identifier): Character sets are currently not implemented as schema objects, so this column is null.
character_set_name (sql_identifier): Name of the character set, currently implemented as showing the name of the database encoding
character_repertoire (sql_identifier): Character repertoire, showing UCS if the encoding is UTF8, else just the encoding name
form_of_use (sql_identifier): Character encoding form, same as the database encoding
default_collate_catalog (sql_identifier): Name of the database containing the default collation (always the current database, if any collation is identified)
default_collate_schema (sql_identifier): Name of the schema containing the default collation
default_collate_name (sql_identifier): Name of the default collation. The default collation is identified as the collation that matches the COLLATE and CTYPE settings of the current database. If there is no such collation, then this column and the associated schema and catalog columns are null.
Columns:
table_catalog (sql_identifier): Name of the database that contains the foreign table (always the current database)
table_schema (sql_identifier): Name of the schema that contains the foreign table
table_name (sql_identifier): Name of the foreign table
column_name (sql_identifier): Name of the column
option_name (sql_identifier): Name of an option
option_value (character_data): Value of the option

Columns:
table_catalog (sql_identifier): Name of the database containing the table (always the current database)
table_schema (sql_identifier): Name of the schema containing the table
table_name (sql_identifier): Name of the table
column_name (sql_identifier): Name of the base column that a generated column depends on
dependent_column (sql_identifier): Name of the generated column

Columns:
domain_catalog (sql_identifier): Name of the database containing the domain (always the current database)
domain_schema (sql_identifier): Name of the schema containing the domain
domain_name (sql_identifier): Name of the domain
table_catalog (sql_identifier): Name of the database containing the table (always the current database)
table_schema (sql_identifier): Name of the schema containing the table
table_name (sql_identifier): Name of the table
column_name (sql_identifier): Name of the column
Columns:
table_catalog (sql_identifier): Name of the database containing the table (always the current database)
table_schema (sql_identifier): Name of the schema containing the table
table_name (sql_identifier): Name of the table
column_name (sql_identifier): Name of the column
ordinal_position (cardinal_number): Ordinal position of the column within the table (count starts at 1)
column_default (character_data): Default expression of the column
is_nullable (yes_or_no): YES if the column is possibly nullable, NO if it is known not nullable. A not-null constraint is one way a column can be known not nullable, but there can be others.
data_type (character_data): Data type of the column, if it is a built-in type, or ARRAY if it is some array (in that case, see the view element_types), else USER-DEFINED (in that case, the type is identified in udt_name and associated columns). If the column is based on a domain, this column refers to the type underlying the domain (and the domain is identified in domain_name and associated columns).
character_maximum_length (cardinal_number): If data_type identifies a character or bit string type, the declared maximum length; null for all other data types or if no maximum length was declared.
character_octet_length (cardinal_number): If data_type identifies a character type, the maximum possible length in octets (bytes) of a datum; null for all other data types. The maximum octet length depends on the declared character maximum length (see above) and the server encoding.
numeric_precision (cardinal_number): If data_type identifies a numeric type, this column contains the (declared or implicit) precision of the type for this column. The precision indicates the number of significant digits. It can be expressed in decimal (base 10) or binary (base 2) terms, as specified in the column numeric_precision_radix. For all other data types, this column is null.
numeric_precision_radix (cardinal_number): If data_type identifies a numeric type, this column indicates in which base the values in the columns numeric_precision and numeric_scale are expressed. The value is either 2 or 10. For all other data types, this column is null.
numeric_scale (cardinal_number): If data_type identifies an exact numeric type, this column contains the (declared or implicit) scale of the type for this column. The scale indicates the number of significant digits to the right of the decimal point. It can be expressed in decimal (base 10) or binary (base 2) terms, as specified in the column numeric_precision_radix. For all other data types, this column is null.
datetime_precision (cardinal_number): If data_type identifies a date, time, timestamp, or interval type, this column contains the (declared or implicit) fractional seconds precision of the type for this column, that is, the number of decimal digits maintained following the decimal point in the seconds value. For all other data types, this column is null.
interval_type (character_data): If data_type identifies an interval type, this column contains the specification which fields the intervals include for this column, e.g., YEAR TO MONTH, DAY TO SECOND, etc. If no field restrictions were specified (that is, the interval accepts all fields), and for all other data types, this field is null.
interval_precision (cardinal_number): Applies to a feature not available in PostgreSQL (see datetime_precision for the fractional seconds precision of interval type columns)
character_set_catalog (sql_identifier): Applies to a feature not available in PostgreSQL
character_set_schema (sql_identifier): Applies to a feature not available in PostgreSQL
character_set_name (sql_identifier): Applies to a feature not available in PostgreSQL
collation_catalog (sql_identifier): Name of the database containing the collation of the column (always the current database), null if default or the data type of the column is not collatable
collation_schema (sql_identifier): Name of the schema containing the collation of the column, null if default or the data type of the column is not collatable
collation_name (sql_identifier): Name of the collation of the column, null if default or the data type of the column is not collatable
domain_catalog (sql_identifier): If the column has a domain type, the name of the database that the domain is defined in (always the current database), else null.
domain_schema (sql_identifier): If the column has a domain type, the name of the schema that the domain is defined in, else null.
domain_name (sql_identifier): If the column has a domain type, the name of the domain, else null.
udt_catalog (sql_identifier): Name of the database that the column data type (the underlying type of the domain, if applicable) is defined in (always the current database)
udt_schema (sql_identifier): Name of the schema that the column data type (the underlying type of the domain, if applicable) is defined in
udt_name (sql_identifier): Name of the column data type (the underlying type of the domain, if applicable)
scope_catalog (sql_identifier): Applies to a feature not available in PostgreSQL
scope_schema (sql_identifier): Applies to a feature not available in PostgreSQL
scope_name (sql_identifier): Applies to a feature not available in PostgreSQL
maximum_cardinality (cardinal_number): Always null, because arrays always have unlimited maximum cardinality in PostgreSQL
dtd_identifier (sql_identifier): An identifier of the data type descriptor of the column, unique among the data type descriptors pertaining to the table. This is mainly useful for joining with other instances of such identifiers. (The specific format of the identifier is not defined and not guaranteed to remain the same in future versions.)
is_self_referencing (yes_or_no): Applies to a feature not available in PostgreSQL
is_identity (yes_or_no): If the column is an identity column, then YES, else NO.
identity_generation (character_data): If the column is an identity column, then ALWAYS or BY DEFAULT, reflecting the definition of the column.
identity_start (character_data): If the column is an identity column, then the start value of the internal sequence, else null.
identity_increment (character_data): If the column is an identity column, then the increment of the internal sequence, else null.
identity_maximum (character_data): If the column is an identity column, then the maximum value of the internal sequence, else null.
identity_minimum (character_data): If the column is an identity column, then the minimum value of the internal sequence, else null.
identity_cycle (yes_or_no): If the column is an identity column, then YES if the internal sequence cycles or NO if it does not; otherwise null.
is_generated (character_data): If the column is a generated column, then ALWAYS, else NEVER.
generation_expression (character_data): If the column is a generated column, then the generation expression, else null.
is_updatable (yes_or_no): YES if the column is updatable, NO if not (Columns in base tables are always updatable, columns in views not necessarily)
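These are the columns of the information_schema.columns view. Assuming that is the view being described, a typical query lists every column of a given table in declaration order; the table name used here is only an example:

    SELECT column_name, data_type, is_nullable, column_default
    FROM information_schema.columns
    WHERE table_schema = 'public'
      AND table_name = 'my_table'           -- hypothetical table name
    ORDER BY ordinal_position;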
Columns:
grantor (sql_identifier): Name of the role that granted the privilege
grantee (sql_identifier): Name of the role that the privilege was granted to
table_catalog (sql_identifier): Name of the database that contains the table that contains the column (always the current database)
table_schema (sql_identifier): Name of the schema that contains the table that contains the column
table_name (sql_identifier): Name of the table that contains the column
column_name (sql_identifier): Name of the column
privilege_type (character_data): Type of the privilege: SELECT, INSERT, UPDATE, or REFERENCES
is_grantable (yes_or_no): YES if the privilege is grantable, NO if not

Columns:
table_catalog (sql_identifier): Name of the database that contains the table that contains the column that is used by some constraint (always the current database)
table_schema (sql_identifier): Name of the schema that contains the table that contains the column that is used by some constraint
table_name (sql_identifier): Name of the table that contains the column that is used by some constraint
column_name (sql_identifier): Name of the column that is used by some constraint
constraint_catalog (sql_identifier): Name of the database that contains the constraint (always the current database)
constraint_schema (sql_identifier): Name of the schema that contains the constraint
constraint_name (sql_identifier): Name of the constraint
Columns:
specific_catalog (sql_identifier): Name of the database containing the function (always the current database)
specific_schema (sql_identifier): Name of the schema containing the function
specific_name (sql_identifier): The “specific name” of the function. See Section 37.45 for more information.
ordinal_position (cardinal_number): Ordinal position of the parameter in the argument list of the function (count starts at 1)
parameter_mode (character_data): IN for input parameter, OUT for output parameter, and INOUT for input/output parameter.
is_result (yes_or_no): Applies to a feature not available in PostgreSQL
as_locator (yes_or_no): Applies to a feature not available in PostgreSQL
parameter_name (sql_identifier): Name of the parameter, or null if the parameter has no name
data_type (character_data): Data type of the parameter, if it is a built-in type, or ARRAY if it is some array (in that case, see the view element_types), else USER-DEFINED (in that case, the type is identified in udt_name and associated columns).
character_maximum_length (cardinal_number): Always null, since this information is not applied to parameter data types in PostgreSQL
character_octet_length (cardinal_number): Always null, since this information is not applied to parameter data types in PostgreSQL
character_set_catalog (sql_identifier): Applies to a feature not available in PostgreSQL
character_set_schema (sql_identifier): Applies to a feature not available in PostgreSQL
character_set_name (sql_identifier): Applies to a feature not available in PostgreSQL
collation_catalog (sql_identifier): Always null, since this information is not applied to parameter data types in PostgreSQL
collation_schema (sql_identifier): Always null, since this information is not applied to parameter data types in PostgreSQL
collation_name (sql_identifier): Always null, since this information is not applied to parameter data types in PostgreSQL
numeric_precision (cardinal_number): Always null, since this information is not applied to parameter data types in PostgreSQL
numeric_precision_radix (cardinal_number): Always null, since this information is not applied to parameter data types in PostgreSQL
numeric_scale (cardinal_number): Always null, since this information is not applied to parameter data types in PostgreSQL
datetime_precision (cardinal_number): Always null, since this information is not applied to parameter data types in PostgreSQL
interval_type (character_data): Always null, since this information is not applied to parameter data types in PostgreSQL
interval_precision (cardinal_number): Always null, since this information is not applied to parameter data types in PostgreSQL
udt_catalog (sql_identifier): Name of the database that the data type of the parameter is defined in (always the current database)
udt_schema (sql_identifier): Name of the schema that the data type of the parameter is defined in
udt_name (sql_identifier): Name of the data type of the parameter
scope_catalog (sql_identifier): Applies to a feature not available in PostgreSQL
scope_schema (sql_identifier): Applies to a feature not available in PostgreSQL
scope_name (sql_identifier): Applies to a feature not available in PostgreSQL
maximum_cardinality (cardinal_number): Always null, because arrays always have unlimited maximum cardinality in PostgreSQL
dtd_identifier (sql_identifier): An identifier of the data type descriptor of the parameter, unique among the data type descriptors pertaining to the function. This is mainly useful for joining with other instances of such identifiers. (The specific format of the identifier is not defined and not guaranteed to remain the same in future versions.)
parameter_default (character_data): The default expression of the parameter, or null if none or if the function is not owned by a currently enabled role.
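The column list above matches information_schema.parameters, which has one row per parameter of each function or procedure. Assuming that is the view being described, a sketch of how it can be joined with the routines view (documented later in this section) to list the parameters of the routines in a schema might look like this:

    SELECT r.routine_name, p.ordinal_position, p.parameter_name,
           p.parameter_mode, p.data_type
    FROM information_schema.parameters p
    JOIN information_schema.routines r
      USING (specific_catalog, specific_schema, specific_name)
    WHERE r.routine_schema = 'public'
    ORDER BY r.routine_name, p.ordinal_position;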
Columns:
constraint_catalog (sql_identifier): Name of the database that contains the constraint (always the current database)
constraint_schema (sql_identifier): Name of the schema that contains the constraint
constraint_name (sql_identifier): Name of the constraint
table_catalog (sql_identifier): Name of the database that contains the table that contains the column that is restricted by this constraint (always the current database)
table_schema (sql_identifier): Name of the schema that contains the table that contains the column that is restricted by this constraint
table_name (sql_identifier): Name of the table that contains the column that is restricted by this constraint
column_name (sql_identifier): Name of the column that is restricted by this constraint
ordinal_position (cardinal_number): Ordinal position of the column within the constraint key (count starts at 1)
position_in_unique_constraint (cardinal_number): For a foreign-key constraint, ordinal position of the referenced column within its unique constraint (count starts at 1); otherwise null
Columns:
collation_catalog (sql_identifier): Name of the database containing the collation (always the current database)
collation_schema (sql_identifier): Name of the schema containing the collation
collation_name (sql_identifier): Name of the default collation
pad_attribute (character_data): Always NO PAD (The alternative PAD SPACE is not supported by PostgreSQL.)

Columns:
object_catalog (sql_identifier): Name of the database that contains the described object (always the current database)
object_schema (sql_identifier): Name of the schema that contains the described object
object_name (sql_identifier): Name of the described object
object_type (character_data): The type of the described object: one of TABLE (the data type descriptor pertains to a column of that table), DOMAIN (the data type descriptor pertains to that domain), or ROUTINE (the data type descriptor pertains to a parameter or the return data type of that function).
dtd_identifier (sql_identifier): The identifier of the data type descriptor, which is unique among the data type descriptors for that same object.

Columns:
grantor (sql_identifier): The name of the role that granted the privilege
grantee (sql_identifier): The name of the role that the privilege was granted to
udt_catalog (sql_identifier): Name of the database containing the type (always the current database)
udt_schema (sql_identifier): Name of the schema containing the type
udt_name (sql_identifier): Name of the type
privilege_type (character_data): Always TYPE USAGE
is_grantable (yes_or_no): YES if the privilege is grantable, NO if not

Columns:
constraint_catalog (sql_identifier): Name of the database that contains the constraint (always the current database)
constraint_schema (sql_identifier): Name of the schema that contains the constraint
constraint_name (sql_identifier): Name of the constraint
domain_catalog (sql_identifier): Name of the database that contains the domain (always the current database)
domain_schema (sql_identifier): Name of the schema that contains the domain
domain_name (sql_identifier): Name of the domain
is_deferrable (yes_or_no): YES if the constraint is deferrable, NO if not
initially_deferred (yes_or_no): YES if the constraint is deferrable and initially deferred, NO if not
Columns:
grantor (sql_identifier): Name of the role that granted the privilege
grantee (sql_identifier): Name of the role that the privilege was granted to
table_catalog (sql_identifier): Name of the database that contains the table (always the current database)
table_schema (sql_identifier): Name of the schema that contains the table
table_name (sql_identifier): Name of the table
privilege_type (character_data): Type of the privilege: SELECT, INSERT, UPDATE, DELETE, TRUNCATE, REFERENCES, or TRIGGER
is_grantable (yes_or_no): YES if the privilege is grantable, NO if not
with_hierarchy (yes_or_no): In the SQL standard, WITH HIERARCHY OPTION is a separate (sub-)privilege allowing certain operations on table inheritance hierarchies. In PostgreSQL, this is included in the SELECT privilege, so this column shows YES if the privilege is SELECT, else NO.

Columns:
table_catalog (sql_identifier): Name of the database that contains the table that is used by some constraint (always the current database)
table_schema (sql_identifier): Name of the schema that contains the table that is used by some constraint
table_name (sql_identifier): Name of the table that is used by some constraint
constraint_catalog (sql_identifier): Name of the database that contains the constraint (always the current database)
constraint_schema (sql_identifier): Name of the schema that contains the constraint
constraint_name (sql_identifier): Name of the constraint
Columns:
grantor (sql_identifier): Name of the role that granted the privilege
grantee (sql_identifier): Name of the role that the privilege was granted to
specific_catalog (sql_identifier): Name of the database containing the function (always the current database)
specific_schema (sql_identifier): Name of the schema containing the function
specific_name (sql_identifier): The “specific name” of the function. See Section 37.45 for more information.
routine_catalog (sql_identifier): Name of the database containing the function (always the current database)
routine_schema (sql_identifier): Name of the schema containing the function
routine_name (sql_identifier): Name of the function (might be duplicated in case of overloading)
privilege_type (character_data): Always EXECUTE (the only privilege type for functions)
is_grantable (yes_or_no): YES if the privilege is grantable, NO if not

Columns:
catalog_name (sql_identifier): Name of the database that the schema is contained in (always the current database)
schema_name (sql_identifier): Name of the schema
schema_owner (sql_identifier): Name of the owner of the schema
default_character_set_catalog (sql_identifier): Applies to a feature not available in PostgreSQL
default_character_set_schema (sql_identifier): Applies to a feature not available in PostgreSQL
default_character_set_name (sql_identifier): Applies to a feature not available in PostgreSQL
sql_path (character_data): Applies to a feature not available in PostgreSQL
Columns:
constraint_catalog (sql_identifier): Name of the database containing the constraint (always the current database)
constraint_schema (sql_identifier): Name of the schema containing the constraint
constraint_name (sql_identifier): Name of the constraint
unique_constraint_catalog (sql_identifier): Name of the database that contains the unique or primary key constraint that the foreign key constraint references (always the current database)
unique_constraint_schema (sql_identifier): Name of the schema that contains the unique or primary key constraint that the foreign key constraint references
unique_constraint_name (sql_identifier): Name of the unique or primary key constraint that the foreign key constraint references
match_option (character_data): Match option of the foreign key constraint: FULL, PARTIAL, or NONE.
update_rule (character_data): Update rule of the foreign key constraint: CASCADE, SET NULL, SET DEFAULT, RESTRICT, or NO ACTION.
delete_rule (character_data): Delete rule of the foreign key constraint: CASCADE, SET NULL, SET DEFAULT, RESTRICT, or NO ACTION.
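The columns above match information_schema.referential_constraints, which has one row per foreign-key constraint. Assuming that is the view being described, it is commonly joined with the table_constraints view (shown near the end of this section) to map each foreign key to the table it is defined on; the query below is a minimal sketch:

    SELECT tc.table_schema, tc.table_name, rc.constraint_name,
           rc.update_rule, rc.delete_rule
    FROM information_schema.referential_constraints rc
    JOIN information_schema.table_constraints tc
      ON  tc.constraint_catalog = rc.constraint_catalog
      AND tc.constraint_schema  = rc.constraint_schema
      AND tc.constraint_name    = rc.constraint_name
    WHERE tc.constraint_type = 'FOREIGN KEY';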
Columns:
specific_catalog (sql_identifier): Name of the database containing the function (always the current database)
specific_schema (sql_identifier): Name of the schema containing the function
specific_name (sql_identifier): The “specific name” of the function. See Section 37.45 for more information.
routine_catalog (sql_identifier): Name of the database containing the function (always the current database)
routine_schema (sql_identifier): Name of the schema containing the function
routine_name (sql_identifier): Name of the function (might be duplicated in case of overloading)
table_catalog (sql_identifier): Name of the database that contains the table that is used by the function (always the current database)
table_schema (sql_identifier): Name of the schema that contains the table that is used by the function
table_name (sql_identifier): Name of the table that is used by the function
column_name (sql_identifier): Name of the column that is used by the function

Columns:
grantor (sql_identifier): The name of the role that granted the privilege
grantee (sql_identifier): The name of the role that the privilege was granted to
object_catalog (sql_identifier): Name of the database containing the object (always the current database)
object_schema (sql_identifier): Name of the schema containing the object, if applicable, else an empty string
object_name (sql_identifier): Name of the object
object_type (character_data): COLLATION or DOMAIN or FOREIGN DATA WRAPPER or FOREIGN SERVER or SEQUENCE
privilege_type (character_data): Always USAGE
is_grantable (yes_or_no): YES if the privilege is grantable, NO if not

Columns:
feature_id (character_data): An identifier string containing the number of the part
feature_name (character_data): Descriptive name of the part
is_supported (yes_or_no): YES if the part is fully supported by the current version of PostgreSQL, NO if not
is_verified_by (character_data): Always null, since the PostgreSQL development group does not perform formal testing of feature conformance
comments (character_data): Possibly a comment about the supported status of the part
Columns:
sequence_catalog (sql_identifier): Name of the database that contains the sequence (always the current database)
sequence_schema (sql_identifier): Name of the schema that contains the sequence
sequence_name (sql_identifier): Name of the sequence
data_type (character_data): The data type of the sequence.
numeric_precision (cardinal_number): This column contains the (declared or implicit) precision of the sequence data type (see above). The precision indicates the number of significant digits. It can be expressed in decimal (base 10) or binary (base 2) terms, as specified in the column numeric_precision_radix.
numeric_precision_radix (cardinal_number): This column indicates in which base the values in the columns numeric_precision and numeric_scale are expressed. The value is either 2 or 10.
numeric_scale (cardinal_number): This column contains the (declared or implicit) scale of the sequence data type (see above). The scale indicates the number of significant digits to the right of the decimal point. It can be expressed in decimal (base 10) or binary (base 2) terms, as specified in the column numeric_precision_radix.
start_value (character_data): The start value of the sequence
minimum_value (character_data): The minimum value of the sequence
maximum_value (character_data): The maximum value of the sequence
increment (character_data): The increment of the sequence
cycle_option (yes_or_no): YES if the sequence cycles, else NO
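The columns above correspond to information_schema.sequences. Assuming that is the view being described, a quick overview of the sequences visible outside the system schemas could be obtained with:

    SELECT sequence_schema, sequence_name, data_type,
           start_value, increment, cycle_option
    FROM information_schema.sequences
    WHERE sequence_schema NOT IN ('pg_catalog', 'information_schema')
    ORDER BY sequence_schema, sequence_name;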
Columns:
implementation_info_id (character_data): Identifier string of the implementation information item
implementation_info_name (character_data): Descriptive name of the implementation information item
integer_value (cardinal_number): Value of the implementation information item, or null if the value is contained in the column character_value
character_value (character_data): Value of the implementation information item, or null if the value is contained in the column integer_value
comments (character_data): Possibly a comment pertaining to the implementation information item
Columns:
specific_catalog (sql_identifier): Name of the database containing the function (always the current database)
specific_schema (sql_identifier): Name of the schema containing the function
specific_name (sql_identifier): The “specific name” of the function. This is a name that uniquely identifies the function in the schema, even if the real name of the function is overloaded. The format of the specific name is not defined, it should only be used to compare it to other instances of specific routine names.
routine_catalog (sql_identifier): Name of the database containing the function (always the current database)
routine_schema (sql_identifier): Name of the schema containing the function
routine_name (sql_identifier): Name of the function (might be duplicated in case of overloading)
routine_type (character_data): FUNCTION for a function, PROCEDURE for a procedure
module_catalog (sql_identifier): Applies to a feature not available in PostgreSQL
module_schema (sql_identifier): Applies to a feature not available in PostgreSQL
module_name (sql_identifier): Applies to a feature not available in PostgreSQL
udt_catalog (sql_identifier): Applies to a feature not available in PostgreSQL
udt_schema (sql_identifier): Applies to a feature not available in PostgreSQL
udt_name (sql_identifier): Applies to a feature not available in PostgreSQL
data_type (character_data): Return data type of the function, if it is a built-in type, or ARRAY if it is some array (in that case, see the view element_types), else USER-DEFINED (in that case, the type is identified in type_udt_name and associated columns). Null for a procedure.
character_maximum_length (cardinal_number): Always null, since this information is not applied to return data types in PostgreSQL
character_octet_length (cardinal_number): Always null, since this information is not applied to return data types in PostgreSQL
character_set_catalog (sql_identifier): Applies to a feature not available in PostgreSQL
character_set_schema (sql_identifier): Applies to a feature not available in PostgreSQL
character_set_name (sql_identifier): Applies to a feature not available in PostgreSQL
collation_catalog (sql_identifier): Always null, since this information is not applied to return data types in PostgreSQL
collation_schema (sql_identifier): Always null, since this information is not applied to return data types in PostgreSQL
collation_name (sql_identifier): Always null, since this information is not applied to return data types in PostgreSQL
numeric_precision (cardinal_number): Always null, since this information is not applied to return data types in PostgreSQL
numeric_precision_radix (cardinal_number): Always null, since this information is not applied to return data types in PostgreSQL
numeric_scale (cardinal_number): Always null, since this information is not applied to return data types in PostgreSQL
datetime_precision (cardinal_number): Always null, since this information is not applied to return data types in PostgreSQL
interval_type (character_data): Always null, since this information is not applied to return data types in PostgreSQL
interval_precision (cardinal_number): Always null, since this information is not applied to return data types in PostgreSQL
type_udt_catalog (sql_identifier): Name of the database that the return data type of the function is defined in (always the current database). Null for a procedure.
type_udt_schema (sql_identifier): Name of the schema that the return data type of the function is defined in. Null for a procedure.
type_udt_name (sql_identifier): Name of the return data type of the function. Null for a procedure.
scope_catalog (sql_identifier): Applies to a feature not available in PostgreSQL
scope_schema (sql_identifier): Applies to a feature not available in PostgreSQL
scope_name (sql_identifier): Applies to a feature not available in PostgreSQL
maximum_cardinality (cardinal_number): Always null, because arrays always have unlimited maximum cardinality in PostgreSQL
dtd_identifier (sql_identifier): An identifier of the data type descriptor of the return data type of this function, unique among the data type descriptors pertaining to the function. This is mainly useful for joining with other instances of such identifiers. (The specific format of the identifier is not defined and not guaranteed to remain the same in future versions.)
routine_body (character_data): If the function is an SQL function, then SQL, else EXTERNAL.
routine_definition (character_data): The source text of the function (null if the function is not owned by a currently enabled role). (According to the SQL standard, this column is only applicable if routine_body is SQL, but in PostgreSQL it will contain whatever source text was specified when the function was created.)
external_name (character_data): If this function is a C function, then the external name (link symbol) of the function; else null. (This works out to be the same value that is shown in routine_definition.)
external_language (character_data): The language the function is written in
parameter_style (character_data): Always GENERAL (The SQL standard defines other parameter styles, which are not available in PostgreSQL.)
is_deterministic (yes_or_no): If the function is declared immutable (called deterministic in the SQL standard), then YES, else NO. (You cannot query the other volatility levels available in PostgreSQL through the information schema.)
sql_data_access (character_data): Always MODIFIES, meaning that the function possibly modifies SQL data. This information is not useful for PostgreSQL.
is_null_call (yes_or_no): If the function automatically returns null if any of its arguments are null, then YES, else NO. Null for a procedure.
sql_path (character_data): Applies to a feature not available in PostgreSQL
schema_level_routine (yes_or_no): Always YES (The opposite would be a method of a user-defined type, which is a feature not available in PostgreSQL.)
max_dynamic_result_sets (cardinal_number): Applies to a feature not available in PostgreSQL
is_user_defined_cast (yes_or_no): Applies to a feature not available in PostgreSQL
is_implicitly_invocable (yes_or_no): Applies to a feature not available in PostgreSQL
security_type (character_data): If the function runs with the privileges of the current user, then INVOKER, if the function runs with the privileges of the user who defined it, then DEFINER.
to_sql_specific_catalog (sql_identifier): Applies to a feature not available in PostgreSQL
to_sql_specific_schema (sql_identifier): Applies to a feature not available in PostgreSQL
to_sql_specific_name (sql_identifier): Applies to a feature not available in PostgreSQL
as_locator (yes_or_no): Applies to a feature not available in PostgreSQL
created (time_stamp): Applies to a feature not available in PostgreSQL
last_altered (time_stamp): Applies to a feature not available in PostgreSQL
new_savepoint_level (yes_or_no): Applies to a feature not available in PostgreSQL
is_udt_dependent (yes_or_no): Currently always NO. The alternative YES applies to a feature not available in PostgreSQL.
result_cast_from_data_type (character_data): Applies to a feature not available in PostgreSQL
result_cast_as_locator (yes_or_no): Applies to a feature not available in PostgreSQL
result_cast_char_max_length (cardinal_number): Applies to a feature not available in PostgreSQL
result_cast_char_octet_length (cardinal_number): Applies to a feature not available in PostgreSQL
result_cast_char_set_catalog (sql_identifier): Applies to a feature not available in PostgreSQL
result_cast_char_set_schema (sql_identifier): Applies to a feature not available in PostgreSQL
result_cast_char_set_name (sql_identifier): Applies to a feature not available in PostgreSQL
result_cast_collation_catalog (sql_identifier): Applies to a feature not available in PostgreSQL
result_cast_collation_schema (sql_identifier): Applies to a feature not available in PostgreSQL
result_cast_collation_name (sql_identifier): Applies to a feature not available in PostgreSQL
result_cast_numeric_precision (cardinal_number): Applies to a feature not available in PostgreSQL
result_cast_numeric_precision_radix (cardinal_number): Applies to a feature not available in PostgreSQL
result_cast_numeric_scale (cardinal_number): Applies to a feature not available in PostgreSQL
result_cast_datetime_precision (cardinal_number): Applies to a feature not available in PostgreSQL
result_cast_interval_type (character_data): Applies to a feature not available in PostgreSQL
result_cast_interval_precision (cardinal_number): Applies to a feature not available in PostgreSQL
result_cast_type_udt_catalog (sql_identifier): Applies to a feature not available in PostgreSQL
result_cast_type_udt_schema (sql_identifier): Applies to a feature not available in PostgreSQL
result_cast_type_udt_name (sql_identifier): Applies to a feature not available in PostgreSQL
result_cast_scope_catalog (sql_identifier): Applies to a feature not available in PostgreSQL
result_cast_scope_schema (sql_identifier): Applies to a feature not available in PostgreSQL
result_cast_scope_name (sql_identifier): Applies to a feature not available in PostgreSQL
result_cast_maximum_cardinality (cardinal_number): Applies to a feature not available in PostgreSQL
result_cast_dtd_identifier (sql_identifier): Applies to a feature not available in PostgreSQL
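This long column list belongs to information_schema.routines, which covers both functions and procedures. Assuming that is the view being described, a compact listing of the routines in a given schema might look like this:

    SELECT routine_schema, routine_name, routine_type,
           data_type AS return_type, external_language
    FROM information_schema.routines
    WHERE routine_schema = 'public'
    ORDER BY routine_name;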
Columns:
collation_catalog (sql_identifier): Name of the database containing the collation (always the current database)
collation_schema (sql_identifier): Name of the schema containing the collation
collation_name (sql_identifier): Name of the default collation
character_set_catalog (sql_identifier): Character sets are currently not implemented as schema objects, so this column is null
character_set_schema (sql_identifier): Character sets are currently not implemented as schema objects, so this column is null
character_set_name (sql_identifier): Name of the character set

Columns:
udt_catalog (sql_identifier): Name of the database that the column data type (the underlying type of the domain, if applicable) is defined in (always the current database)
udt_schema (sql_identifier): Name of the schema that the column data type (the underlying type of the domain, if applicable) is defined in
udt_name (sql_identifier): Name of the column data type (the underlying type of the domain, if applicable)
table_catalog (sql_identifier): Name of the database containing the table (always the current database)
table_schema (sql_identifier): Name of the schema containing the table
table_name (sql_identifier): Name of the table
column_name (sql_identifier): Name of the column

Columns:
feature_id (character_data): Identifier string of the feature
feature_name (character_data): Descriptive name of the feature
sub_feature_id (character_data): Identifier string of the subfeature, or a zero-length string if not a subfeature
sub_feature_name (character_data): Descriptive name of the subfeature, or a zero-length string if not a subfeature
is_supported (yes_or_no): YES if the feature is fully supported by the current version of PostgreSQL, NO if not
is_verified_by (character_data): Always null, since the PostgreSQL development group does not perform formal testing of feature conformance
comments (character_data): Possibly a comment about the supported status of the feature
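The columns above form information_schema.sql_features, which reports the SQL-standard feature codes and whether the current server claims support for them. Assuming that is the view being described, the unsupported features can be listed directly:

    SELECT feature_id, feature_name
    FROM information_schema.sql_features
    WHERE is_supported = 'NO'
    ORDER BY feature_id;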
The view table_privileges lists all privileges granted on tables or views to a currently enabled role or by a currently enabled role. There is one row for each combination of table, grantor, and grantee.

table_privileges Columns:
grantor (sql_identifier): Name of the role that granted the privilege
grantee (sql_identifier): Name of the role that the privilege was granted to
table_catalog (sql_identifier): Name of the database that contains the table (always the current database)
table_schema (sql_identifier): Name of the schema that contains the table
table_name (sql_identifier): Name of the table
privilege_type (character_data): Type of the privilege: SELECT, INSERT, UPDATE, DELETE, TRUNCATE, REFERENCES, or TRIGGER
is_grantable (yes_or_no): YES if the privilege is grantable, NO if not
with_hierarchy (yes_or_no): In the SQL standard, WITH HIERARCHY OPTION is a separate (sub-)privilege allowing certain operations on table inheritance hierarchies. In PostgreSQL, this is included in the SELECT privilege, so this column shows YES if the privilege is SELECT, else NO.
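As a sketch of how table_privileges is typically used, the following query lists, for the current role, the grantable privileges it holds on tables outside the system schemas (a minimal example, not an exhaustive audit):

    SELECT table_schema, table_name, privilege_type, grantor
    FROM information_schema.table_privileges
    WHERE grantee = current_user
      AND is_grantable = 'YES'
    ORDER BY table_schema, table_name, privilege_type;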
The table sql_sizing contains information about various size limits and maximum values in PostgreSQL. This information is primarily intended for use in the context of the ODBC interface; users of other interfaces will probably find this information to be of little use. For this reason, the individual sizing items are not described here; you will find them in the description of the ODBC interface.

Table 37.49. sql_sizing Columns:
sizing_id (cardinal_number): Identifier of the sizing item
sizing_name (character_data): Descriptive name of the sizing item
supported_value (cardinal_number): Value of the sizing item, or 0 if the size is unlimited or cannot be determined, or null if the features for which the sizing item is applicable are not supported
comments (character_data): Possibly a comment pertaining to the sizing item
The view table_constraints contains all constraints belonging to tables that the current user owns or has some privilege other than SELECT on.

table_constraints Columns:
constraint_catalog (sql_identifier): Name of the database that contains the constraint (always the current database)
constraint_schema (sql_identifier): Name of the schema that contains the constraint
constraint_name (sql_identifier): Name of the constraint
table_catalog (sql_identifier): Name of the database that contains the table (always the current database)
table_schema (sql_identifier): Name of the schema that contains the table
table_name (sql_identifier): Name of the table
constraint_type (character_data): Type of the constraint: CHECK, FOREIGN KEY, PRIMARY KEY, or UNIQUE
is_deferrable (yes_or_no): YES if the constraint is deferrable, NO if not
initially_deferred (yes_or_no): YES if the constraint is deferrable and initially deferred, NO if not
enforced (yes_or_no): Applies to a feature not available in PostgreSQL (currently always YES)
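A common use of table_constraints is to summarize the constraints defined on a single table; the table name below is only a placeholder:

    SELECT constraint_name, constraint_type, is_deferrable, initially_deferred
    FROM information_schema.table_constraints
    WHERE table_schema = 'public'
      AND table_name   = 'my_table'         -- hypothetical table name
    ORDER BY constraint_type, constraint_name;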