This appendix and the next one contain information regarding the modules that can be found in the contrib
directory of the PostgreSQL distribution. These include porting tools, analysis utilities, and plug-in features that are not part of the core PostgreSQL system, mainly because they address a limited audience or are too experimental to be part of the main source tree. This does not preclude their usefulness.
This appendix covers the extensions and other server plug-in modules found in contrib; Appendix G covers the utility programs.
When building from the source distribution, these modules are not built automatically, unless you build the "world" target (see Step 2). You can build and install all of them by running:
    make
    make install
in the contrib directory of a configured source tree; or to build and install just one selected module, do the same in that module's subdirectory. Many of the modules have regression tests, which can be executed by running:
    make check
before installation or
    make installcheck
once you have a PostgreSQL server running.
If you are using a pre-packaged version of PostgreSQL, these modules are typically made available as a separate subpackage, such as postgresql-contrib.
Many modules supply new user-defined functions, operators, or types. To make use of one of these modules, after installing the software you need to register the new SQL objects in the database system. This is done by executing a CREATE EXTENSION command. In a fresh database, you can simply do:
    CREATE EXTENSION module_name;
This command registers the new SQL objects in the current database only, so you need to run it in each database that you want the module's facilities to be available in. Alternatively, run it in database template1 so that the extension will be copied into subsequently-created databases by default.
For all these modules, CREATE EXTENSION must be run by a database superuser, unless the module is considered “trusted”, in which case it can be run by any user who has CREATE privilege on the current database. Modules that are trusted are identified as such in the sections that follow. Generally, trusted modules are ones that cannot provide access to outside-the-database functionality.
Many modules allow you to install their objects in a schema of your choice. To do that, add SCHEMA schema_name to the CREATE EXTENSION command. By default, the objects will be placed in your current creation target schema, which in turn defaults to public.
Note, however, that some of these modules are not “extensions” in this sense, but are loaded into the server in some other way, for instance by way of shared_preload_libraries. See the documentation of each module for details.
btree_gin provides sample GIN operator classes that implement B-tree equivalent behavior for the data types int2, int4, int8, float4, float8, timestamp with time zone, timestamp without time zone, time with time zone, time without time zone, date, interval, oid, money, "char", varchar, text, bytea, bit, varbit, macaddr, macaddr8, inet, cidr, uuid, name, bool, bpchar, and all enum types.
In general, these operator classes will not outperform the equivalent standard B-tree index methods, and they lack one major feature of the standard B-tree code: the ability to enforce uniqueness. However, they are useful for GIN testing and as a base for developing other GIN operator classes. Also, for queries that test both a GIN-indexable column and a B-tree-indexable column, it might be more efficient to create a multicolumn GIN index that uses one of these operator classes than to create two separate indexes that would have to be combined via bitmap ANDing.
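As a brief, hedged illustration (the table, column, and index names here are hypothetical, not from the original text), a single multicolumn GIN index can cover a query that tests both a B-tree-indexable integer column and a GIN-indexable array column:
    CREATE EXTENSION btree_gin;
    -- hypothetical table with an ordinary integer column and an array column
    CREATE TABLE test_gin (id int4, tags text[]);
    -- one multicolumn GIN index serves predicates on both columns
    CREATE INDEX test_gin_idx ON test_gin USING gin (id, tags);
    -- both conditions can be handled by the single GIN index
    SELECT * FROM test_gin WHERE id = 42 AND tags @> ARRAY['postgres'];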
Teodor Sigaev (<teodor@stack.net>) and Oleg Bartunov (<oleg@sai.msu.su>). See http://www.sai.msu.su/~megera/oddmuse/index.cgi/Gin for additional information.
auth_delay causes the server to pause briefly before reporting authentication failure, in order to make brute-force attacks on database passwords more difficult. Note that it does nothing to prevent denial-of-service attacks, and may even exacerbate them, since processes that are waiting before reporting authentication failure will still consume connection slots.
In order to function, this module must be loaded via shared_preload_libraries in postgresql.conf.
auth_delay.milliseconds (int)
The number of milliseconds to wait before reporting an authentication failure. The default is 0.
These parameters must be set in postgresql.conf. Typical usage might be:
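The elided configuration example is presumably along these lines (the 500 ms value is only illustrative):
    # postgresql.conf
    shared_preload_libraries = 'auth_delay'
    auth_delay.milliseconds = '500'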
KaiGai Kohei <>
bloom provides an index access method based on Bloom filters.
A Bloom filter is a space-efficient data structure that is used to test whether an element is a member of a set. In the case of an index access method, it allows fast exclusion of non-matching tuples via signatures whose size is determined at index creation.
A signature is a lossy representation of the indexed attribute(s), and as such is prone to reporting false positives; that is, it may report that an element is in the set when it is not. So index search results must always be rechecked using the actual attribute values from the table. Larger signatures reduce the odds of a false positive and thus reduce the number of useless visits, but of course also make the index larger and hence slower to scan.
This type of index is most useful when a table has many attributes and queries test arbitrary combinations of them. A traditional btree index is faster than a bloom index, but it can require many btree indexes to support all possible queries where one needs only a single bloom index. Note however that bloom indexes only support equality queries, whereas btree indexes can also perform inequality and range searches.
A bloom index accepts the following parameters in its WITH clause:
length
Length of each signature (index entry) in bits. It is rounded up to the nearest multiple of 16. The default is 80 bits and the maximum is 4096.
col1 — col32
Number of bits generated for each index column. Each parameter's name refers to the number of the index column that it controls. The default is 2 bits and the maximum is 4095. Parameters for index columns not actually used are ignored.
This is an example of creating a bloom index:
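The example itself was lost in conversion; based on the description that follows, it presumably resembles this sketch (the table name tbloom is an assumption; the columns i1, i2, and i3 are taken from the surrounding discussion):
    CREATE EXTENSION bloom;
    CREATE INDEX bloomidx ON tbloom USING bloom (i1, i2, i3)
           WITH (length=80, col1=2, col2=2, col3=4);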
The index is created with a signature length of 80 bits, with attributes i1 and i2 mapped to 2 bits, and attribute i3 mapped to 4 bits. We could have omitted the length, col1, and col2 specifications since those have the default values.
Here is a more complete example of bloom index definition and usage, as well as a comparison with equivalent btree indexes. The bloom index is considerably smaller than the btree index, and can perform better.
A sequential scan over this large table takes a long time:
So the planner will usually select an index scan if possible. With a btree index, we get results like this:
Bloom is better than btree in handling this type of search:
Note that the number of false positives here is relatively high: 2439 rows were selected to be rechecked, but none actually match the query. We could reduce that by specifying a larger signature length. In this example, creating the index with length=200 reduced the number of false positives to 55; but it also doubled the index size (to 306 MB) and ended up making this query slower overall (125 ms total).
Now, the main problem with the btree search above is that btree is inefficient when the search conditions do not constrain the leading index column(s). A better strategy for btree is to create a separate index on each column. Then the planner will choose a plan like this:
Although this query runs much faster than with either of the single indexes, we pay a large penalty in index size. Each of the single-column btree indexes occupies 214 MB, so the total space needed is over 1.2 GB, more than 8 times the space used by the bloom index.
An operator class for bloom indexes requires only a hash function for the indexed data type and an equality operator for searching. This example shows the operator class definition for the text data type:
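The definition was dropped during conversion; a sketch of what such an operator class looks like (mirroring the text operator class shipped with the module) is:
    CREATE OPERATOR CLASS text_ops
    DEFAULT FOR TYPE text USING bloom AS
        OPERATOR    1   =(text, text),
        FUNCTION    1   hashtext(text);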
Only operator classes for int4 and text are included with the module.
Only the = operator is supported for search. But it is possible to add support for arrays with union and intersection operations in the future.
The bloom access method doesn't support UNIQUE indexes.
The bloom access method doesn't support searching for NULL values.
Teodor Sigaev <teodor@postgrespro.ru>, Postgres Professional, Moscow, Russia
Alexander Korotkov <a.korotkov@postgrespro.ru>, Postgres Professional, Moscow, Russia
Oleg Bartunov <obartunov@postgrespro.ru>, Postgres Professional, Moscow, Russia
dblink_connect — opens a persistent connection to a remote database
dblink_connect() establishes a connection to a remote PostgreSQL database. The server and database to be contacted are identified through a standard libpq connection string. Optionally, a name can be assigned to the connection. Multiple named connections can be open at once, but only one unnamed connection is permitted at a time. The connection will persist until closed or until the database session is ended.
The connection string may also be the name of an existing foreign server. It is recommended to use the foreign-data wrapper dblink_fdw when defining the foreign server. See the example below, as well as CREATE SERVER and CREATE USER MAPPING.
connname
The name to use for this connection; if omitted, an unnamed connection is opened, replacing any existing unnamed connection.
connstr
libpq-style connection info string, for example hostaddr=127.0.0.1 port=5432 dbname=mydb user=postgres password=mypasswd options=-csearch_path=. For details see Section 33.1.1. Alternatively, the name of a foreign server.
Returns status, which is always OK (since any error causes the function to throw an error instead of returning).
If untrusted users have access to a database that has not adopted a secure schema usage pattern, begin each session by removing publicly-writable schemas from search_path. One could, for example, add options=-csearch_path= to connstr. This consideration is not specific to dblink; it applies to every interface for executing arbitrary SQL commands.
Only superusers may use dblink_connect to create non-password-authenticated connections. If non-superusers need this capability, use dblink_connect_u instead.
It is unwise to choose connection names that contain equal signs, as this opens a risk of confusion with connection info strings in other dblink functions.
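A minimal usage sketch (the connection string values are illustrative):
    -- unnamed connection
    SELECT dblink_connect('dbname=postgres options=-csearch_path=');
    -- named connection
    SELECT dblink_connect('myconn', 'dbname=postgres options=-csearch_path=');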
dblink_disconnect — closes a persistent connection to a remote database
dblink_disconnect() closes a connection previously opened by dblink_connect(). The form with no arguments closes an unnamed connection.
connname
The name of a named connection to be closed.
Returns status, which is always OK (since any error causes the function to throw an error instead of returning).
dblink_exec — executes a command in a remote database
dblink_exec executes a command (that is, any SQL statement that doesn't return rows) in a remote database.
When two text arguments are given, the first one is first looked up as a persistent connection's name; if found, the command is executed on that connection. If not found, the first argument is treated as a connection info string as for dblink_connect, and the indicated connection is made just for the duration of this command.
connname
Name of the connection to use; omit this parameter to use the unnamed connection.
connstr
A connection info string, as previously described for dblink_connect.
sql
The SQL command that you wish to execute in the remote database, for example insert into foo values(0,'a','{"a0","b0","c0"}').
fail_on_error
If true (the default when omitted) then an error thrown on the remote side of the connection causes an error to also be thrown locally. If false, the remote error is locally reported as a NOTICE, and the function's return value is set to ERROR.
Returns status, either the command's status string or ERROR.
dblink_connect_u — opens a persistent connection to a remote database, insecurely
dblink_connect_u() is identical to dblink_connect(), except that it will allow non-superusers to connect using any authentication method.
If the remote server selects an authentication method that does not involve a password, then impersonation and subsequent escalation of privileges can occur, because the session will appear to have originated from the user as which the local PostgreSQL server runs. Also, even if the remote server does demand a password, it is possible for the password to be supplied from the server environment, such as a ~/.pgpass file belonging to the server's user. This opens not only a risk of impersonation, but the possibility of exposing a password to an untrustworthy remote server. Therefore, dblink_connect_u() is initially installed with all privileges revoked from PUBLIC, making it un-callable except by superusers. In some situations it may be appropriate to grant EXECUTE permission for dblink_connect_u() to specific users who are considered trustworthy, but this should be done with care. It is also recommended that any ~/.pgpass file belonging to the server's user not contain any records specifying a wildcard host name.
For further details see dblink_connect().
adminpack provides a number of support functions which pgAdmin and other administration and management tools can use to provide additional functionality, such as remote management of server log files. Use of all these functions is only allowed to database superusers by default, but may be allowed to other users by using the GRANT command.
The functions shown in the table below provide write access to files on the machine hosting the server. (See also the built-in file-access functions, which provide read-only access.) Only files within the database cluster directory can be accessed, unless the user is a superuser or has been granted the pg_read_server_files or pg_write_server_files role, as appropriate for the function; either a relative or an absolute path is allowed.
pg_file_write writes the specified data into the file named by filename. If append is false, the file must not already exist. If append is true, the file can already exist, and will be appended to if so. Returns the number of bytes written.
pg_file_sync fsyncs the file or directory named by filename. An error is thrown on failure (e.g., the specified path does not exist). Note that the data_sync_retry setting has no effect on this function, and therefore a PANIC-level error will not be raised even on failure to flush database files.
pg_file_rename renames a file. If archivename is omitted or NULL, it simply renames oldname to newname (which must not already exist). If archivename is provided, it first renames newname to archivename (which must not already exist), and then renames oldname to newname. In event of failure of the second rename step, it will try to rename archivename back to newname before reporting the error. Returns true on success, false if the source file(s) are not present or not writable; other cases throw errors.
pg_file_unlink removes the specified file. Returns true on success, false if the specified file is not present or the unlink() call fails; other cases throw errors.
pg_logdir_ls returns the start timestamps and path names of all the log files in the log_directory directory. The log_filename parameter must have its default setting (postgresql-%Y-%m-%d_%H%M%S.log) to use this function.
The auto_explain module provides a means for logging execution plans of slow statements automatically, without having to run EXPLAIN by hand. This is especially helpful for tracking down un-optimized queries in large applications.
The module provides no SQL-accessible functions. To use it, simply load it into the server. You can load it into an individual session:
    LOAD 'auto_explain';
(You must be superuser to do that.) More typical usage is to preload it into some or all sessions by including auto_explain in session_preload_libraries or shared_preload_libraries in postgresql.conf. Then you can track unexpectedly slow queries no matter when they happen. Of course there is a price in overhead for that.
There are several configuration parameters that control the behavior of auto_explain. Note that the default behavior is to do nothing, so you must set at least auto_explain.log_min_duration if you want any results.
auto_explain.log_min_duration (integer)
auto_explain.log_min_duration is the minimum statement execution time, in milliseconds, that will cause the statement's plan to be logged. Setting this to zero logs all plans. Minus-one (the default) disables logging of plans. For example, if you set it to 250ms then all statements that run 250ms or longer will be logged. Only superusers can change this setting.
auto_explain.log_analyze (boolean)
auto_explain.log_analyze causes EXPLAIN ANALYZE output, rather than just EXPLAIN output, to be printed when an execution plan is logged. This parameter is off by default. Only superusers can change this setting.
When this parameter is on, per-plan-node timing occurs for all statements executed, whether or not they run long enough to actually get logged. This can have an extremely negative impact on performance. Turning off auto_explain.log_timing ameliorates the performance cost, at the price of obtaining less information.
auto_explain.log_buffers (boolean)
auto_explain.log_buffers controls whether buffer usage statistics are printed when an execution plan is logged; it's equivalent to the BUFFERS option of EXPLAIN. This parameter has no effect unless auto_explain.log_analyze is enabled. This parameter is off by default. Only superusers can change this setting.
auto_explain.log_wal (boolean)
auto_explain.log_wal controls whether WAL usage statistics are printed when an execution plan is logged; it's equivalent to the WAL option of EXPLAIN. This parameter has no effect unless auto_explain.log_analyze is enabled. This parameter is off by default. Only superusers can change this setting.
auto_explain.log_timing (boolean)
auto_explain.log_timing controls whether per-node timing information is printed when an execution plan is logged; it's equivalent to the TIMING option of EXPLAIN. The overhead of repeatedly reading the system clock can slow down queries significantly on some systems, so it may be useful to set this parameter to off when only actual row counts, and not exact times, are needed. This parameter has no effect unless auto_explain.log_analyze is enabled. This parameter is on by default. Only superusers can change this setting.
auto_explain.log_triggers (boolean)
auto_explain.log_triggers causes trigger execution statistics to be included when an execution plan is logged. This parameter has no effect unless auto_explain.log_analyze is enabled. This parameter is off by default. Only superusers can change this setting.
auto_explain.log_verbose (boolean)
auto_explain.log_verbose controls whether verbose details are printed when an execution plan is logged; it's equivalent to the VERBOSE option of EXPLAIN. This parameter is off by default. Only superusers can change this setting.
auto_explain.log_settings (boolean)
auto_explain.log_settings controls whether information about modified configuration options is printed when an execution plan is logged. Only options affecting query planning with a value different from the built-in default are included in the output. This parameter is off by default. Only superusers can change this setting.
auto_explain.log_format (enum)
auto_explain.log_format selects the EXPLAIN output format to be used. The allowed values are text, xml, json, and yaml. The default is text. Only superusers can change this setting.
auto_explain.log_level (enum)
auto_explain.log_level selects the log level at which auto_explain will log the query plan. Valid values are DEBUG5, DEBUG4, DEBUG3, DEBUG2, DEBUG1, INFO, NOTICE, WARNING, and LOG. The default is LOG. Only superusers can change this setting.
auto_explain.log_nested_statements (boolean)
auto_explain.log_nested_statements causes nested statements (statements executed inside a function) to be considered for logging. When it is off, only top-level query plans are logged. This parameter is off by default. Only superusers can change this setting.
auto_explain.sample_rate (real)
auto_explain.sample_rate causes auto_explain to only explain a fraction of the statements in each session. The default is 1, meaning explain all the queries. In case of nested statements, either all will be explained or none. Only superusers can change this setting.
In ordinary usage, these parameters are set in postgresql.conf, although superusers can alter them on the fly within their own sessions. Typical usage might be:
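The elided configuration example is presumably along these lines (the 3s threshold is only illustrative):
    # postgresql.conf
    session_preload_libraries = 'auto_explain'
    auto_explain.log_min_duration = '3s'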
This might produce log output such as:
dblink — executes a query in a remote database
dblink executes a query (usually a SELECT, but it can be any SQL statement that returns rows) in a remote database.
When two text arguments are given, the first one is first looked up as a persistent connection's name; if found, the command is executed on that connection. If not found, the first argument is treated as a connection info string as for dblink_connect, and the indicated connection is made just for the duration of this command.
connname
Name of the connection to use; omit this parameter to use the unnamed connection.
connstr
A connection info string, as previously described for dblink_connect.
sql
The SQL query that you wish to execute in the remote database, for example select * from foo.
fail_on_error
If true (the default when omitted) then an error thrown on the remote side of the connection causes an error to also be thrown locally. If false, the remote error is locally reported as a NOTICE, and the function returns no rows.
The function returns the row(s) produced by the query. Since dblink can be used with any query, it is declared to return record, rather than specifying any particular set of columns. This means that you must specify the expected set of columns in the calling query — otherwise PostgreSQL would not know what to expect. Here is an example:
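The original example was lost in conversion; a sketch of such a call (the connection string and remote query are illustrative) looks like this:
    SELECT *
        FROM dblink('dbname=mydb options=-csearch_path=',
                    'select proname, prosrc from pg_proc')
          AS t1(proname name, prosrc text)
        WHERE proname LIKE 'bytea%';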
The “alias” part of the FROM clause must specify the column names and types that the function will return. (Specifying column names in an alias is actually standard SQL syntax, but specifying column types is a PostgreSQL extension.) This allows the system to understand what * should expand to, and what proname in the WHERE clause refers to, in advance of trying to execute the function. At run time, an error will be thrown if the actual query result from the remote database does not have the same number of columns shown in the FROM clause. The column names need not match, however, and dblink does not insist on exact type matches either. It will succeed so long as the returned data strings are valid input for the column type declared in the FROM clause.
A convenient way to use dblink with predetermined queries is to create a view. This allows the column type information to be buried in the view, instead of having to spell it out in every query. For example,
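A hedged sketch of such a view (connection string, query, and view name are illustrative):
    CREATE VIEW myremote_pg_proc AS
      SELECT *
        FROM dblink('dbname=postgres options=-csearch_path=',
                    'select proname, prosrc from pg_proc')
        AS t1(proname name, prosrc text);

    SELECT * FROM myremote_pg_proc WHERE proname LIKE 'bytea%';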
The amcheck module provides functions that allow you to verify the logical consistency of the structure of relations. If the structure appears to be valid, no error is raised.
The functions verify various invariants in the structure of the representation of particular relations. The correctness of the access method functions behind index scans and other important operations relies on these invariants always holding. For example, certain functions verify, among other things, that all B-Tree pages have items in “logical” order (e.g., for B-Tree indexes on text, index tuples should be in collated lexical order). If that particular invariant somehow fails to hold, we can expect binary searches on the affected page to incorrectly guide index scans, resulting in wrong answers to SQL queries.
Verification is performed using the same procedures as those used by index scans themselves, which may be user-defined operator class code. For example, B-Tree index verification relies on comparisons made with one or more B-Tree support function 1 routines. See the index access method documentation for details of operator class support functions.
amcheck functions may only be used by superusers.
bt_index_check(index regclass, heapallindexed boolean) returns void
bt_index_check tests that its target, a B-Tree index, respects a variety of invariants. Example usage:
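The example query was lost in conversion; a sketch along the lines of the one in the amcheck documentation (checking the 10 largest pg_catalog indexes and requesting heapallindexed verification for the unique ones) is:
    SELECT bt_index_check(index => c.oid, heapallindexed => i.indisunique),
           c.relname,
           c.relpages
    FROM pg_index i
    JOIN pg_opclass op ON i.indclass[0] = op.oid
    JOIN pg_am am ON op.opcmethod = am.oid
    JOIN pg_class c ON i.indexrelid = c.oid
    JOIN pg_namespace n ON c.relnamespace = n.oid
    WHERE am.amname = 'btree' AND n.nspname = 'pg_catalog'
    -- Don't check temp tables, which may be from another session:
    AND c.relpersistence != 't'
    -- Function may throw an error when this is omitted:
    AND c.relkind = 'i' AND i.indisready AND i.indisvalid
    ORDER BY c.relpages DESC LIMIT 10;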
This example shows a session that performs verification of the 10 largest catalog indexes in the database “test”. Verification of the presence of heap tuples as index tuples is requested for the subset that are unique indexes. Since no error is raised, all indexes tested appear to be logically consistent. Naturally, this query could easily be changed to call bt_index_check for every index in the database where verification is supported.
bt_index_check acquires an AccessShareLock on the target index and the heap relation it belongs to. This lock mode is the same lock mode acquired on relations by simple SELECT statements. bt_index_check does not verify invariants that span child/parent relationships, but will verify the presence of all heap tuples as index tuples within the index when heapallindexed is true. When a routine, lightweight test for corruption is required in a live production environment, using bt_index_check often provides the best trade-off between thoroughness of verification and limiting the impact on application performance and availability.
bt_index_parent_check(index regclass, heapallindexed boolean, rootdescend boolean) returns void
bt_index_parent_check tests that its target, a B-Tree index, respects a variety of invariants. Optionally, when the heapallindexed argument is true, the function verifies the presence of all heap tuples that should be found within the index. When the optional rootdescend argument is true, verification re-finds tuples on the leaf level by performing a new search from the root page for each tuple. The checks that can be performed by bt_index_parent_check are a superset of the checks that can be performed by bt_index_check. bt_index_parent_check can be thought of as a more thorough variant of bt_index_check: unlike bt_index_check, bt_index_parent_check also checks invariants that span parent/child relationships, including checking that there are no missing downlinks in the index structure. bt_index_parent_check follows the general convention of raising an error if it finds a logical inconsistency or other problem.
A ShareLock is required on the target index by bt_index_parent_check (a ShareLock is also acquired on the heap relation). These locks prevent concurrent data modification from INSERT, UPDATE, and DELETE commands. The locks also prevent the underlying relation from being concurrently processed by VACUUM, as well as all other utility commands. Note that the function holds locks only while running, not for the entire transaction.
bt_index_parent_check's additional verification is more likely to detect various pathological cases. These cases may involve an incorrectly implemented B-Tree operator class used by the index that is checked, or, hypothetically, undiscovered bugs in the underlying B-Tree index access method code. Note that bt_index_parent_check cannot be used when Hot Standby mode is enabled (i.e., on read-only physical replicas), unlike bt_index_check.
bt_index_check and bt_index_parent_check both output log messages about the verification process at DEBUG1 and DEBUG2 severity levels. These messages provide detailed information about the verification process that may be of interest to PostgreSQL developers. Advanced users may also find this information helpful, since it provides additional context should verification actually detect an inconsistency. Running:
    SET client_min_messages = DEBUG1;
in an interactive psql session before running a verification query will display messages about the progress of verification with a manageable level of detail.
heapallindexed Verification
When the heapallindexed argument to verification functions is true, an additional phase of verification is performed against the table associated with the target index relation. This consists of a “dummy” CREATE INDEX operation, which checks for the presence of all hypothetical new index tuples against a temporary, in-memory summarizing structure (this is built when needed during the basic first phase of verification). The summarizing structure “fingerprints” every tuple found within the target index. The high level principle behind heapallindexed verification is that a new index that is equivalent to the existing, target index must only have entries that can be found in the existing structure.
The additional heapallindexed phase adds significant overhead: verification will typically take several times longer. However, there is no change to the relation-level locks acquired when heapallindexed verification is performed.
The summarizing structure is bound in size by maintenance_work_mem. In order to ensure that there is no more than a 2% probability of failure to detect an inconsistency for each heap tuple that should be represented in the index, approximately 2 bytes of memory are needed per tuple. As less memory is made available per tuple, the probability of missing an inconsistency slowly increases. This approach limits the overhead of verification significantly, while only slightly reducing the probability of detecting a problem, especially for installations where verification is treated as a routine maintenance task. Any single absent or malformed tuple has a new opportunity to be detected with each new verification attempt.
Using amcheck Effectively
Structural inconsistencies caused by incorrect operator class implementations.
This includes issues caused by the comparison rules of operating system collations changing. Comparisons of datums of a collatable type like text must be immutable (just as all comparisons used for B-Tree index scans must be immutable), which implies that operating system collation rules must never change. Though rare, updates to operating system collation rules can cause these issues. More commonly, an inconsistency in the collation order between a master server and a standby server is implicated, possibly because the major operating system version in use is inconsistent. Such inconsistencies will generally only arise on standby servers, and so can generally only be detected on standby servers.
Structural inconsistencies between indexes and the heap relations that are indexed (when heapallindexed verification is performed).
There is no cross-checking of indexes against their heap relation during normal operation. Symptoms of heap corruption can be subtle.
Corruption caused by hypothetical undiscovered bugs in the underlying PostgreSQL access method code, sort code, or transaction management code.
File system or storage subsystem faults where checksums happen to simply not be enabled.
Note that amcheck examines a page as represented in some shared memory buffer at the time of verification if there is only a shared buffer hit when accessing the block. Consequently, amcheck does not necessarily examine data read from the file system at the time of verification. Note that when checksums are enabled, amcheck may raise an error due to a checksum failure when a corrupt block is read into a buffer.
Corruption caused by faulty RAM, or the broader memory subsystem.
PostgreSQL does not protect against correctable memory errors and it is assumed you will operate using RAM that uses industry standard Error Correcting Codes (ECC) or better protection. However, ECC memory is typically only immune to single-bit errors, and should not be assumed to provide absolute protection against failures that result in memory corruption.
When heapallindexed verification is performed, there is generally a greatly increased chance of detecting single-bit errors, since strict binary equality is tested, and the indexed attributes within the heap are tested.
In general, amcheck can only prove the presence of corruption; it cannot prove its absence.
No error concerning corruption raised by amcheck should ever be a false positive. amcheck raises errors in the event of conditions that, by definition, should never happen, and so careful analysis of amcheck errors is often required.
Takahiro Itagaki <>
amcheck can be effective at detecting the various types of failure modes listed above, which data checksums will always fail to catch.
If a problem like this arises, it may not affect each individual index that is ordered using an affected collation, simply because indexed values might happen to have the same absolute ordering regardless of the behavioral inconsistency. See the documentation on locales and collations for further details about how PostgreSQL uses operating system locales and collations.
Automatic verification of the structural integrity of indexes plays a role in the general testing of new or proposed PostgreSQL features that could plausibly allow a logical inconsistency to be introduced. Verification of table structure and associated visibility and transaction status information plays a similar role. One obvious testing strategy is to call amcheck functions continuously when running the standard regression tests. See the regression test documentation for details on running the tests.
There is no general method of repairing problems that amcheck detects. An explanation for the root cause of an invariant violation should be sought. The pageinspect module may play a useful role in diagnosing corruption that amcheck detects. A REINDEX may not be effective in repairing corruption.
adminpack Functions
Function
Description
pg_catalog.pg_file_write ( filename text, data text, append boolean ) → bigint
Writes, or appends to, a text file.
pg_catalog.pg_file_sync ( filename text ) → void
Flushes a file or directory to disk.
pg_catalog.pg_file_rename ( oldname text, newname text [, archivename text ] ) → boolean
Renames a file.
pg_catalog.pg_file_unlink ( filename text ) → boolean
Removes a file.
pg_catalog.pg_logdir_ls () → setof record
Lists the log files in the log_directory directory.
dblink is a module that supports connections to other PostgreSQL databases from within a database session.
See also postgres_fdw, which provides roughly the same functionality using a more modern and standards-compliant infrastructure.
dblink_fetch — returns rows from an open cursor in a remote database
dblink_fetch fetches rows from a cursor previously established by dblink_open.
connname
Name of the connection to use; omit this parameter to use the unnamed connection.
cursorname
The name of the cursor to fetch from.
howmany
The maximum number of rows to retrieve. The next howmany rows are fetched, starting at the current cursor position, moving forward. Once the cursor has reached its end, no more rows are produced.
fail_on_error
If true (the default when omitted) then an error thrown on the remote side of the connection causes an error to also be thrown locally. If false, the remote error is locally reported as a NOTICE, and the function returns no rows.
The function returns the row(s) fetched from the cursor. To use this function, you will need to specify the expected set of columns, as previously discussed for dblink.
On a mismatch between the number of return columns specified in the FROM clause, and the actual number of columns returned by the remote cursor, an error will be thrown. In this event, the remote cursor is still advanced by as many rows as it would have been if the error had not occurred. The same is true for any other error occurring in the local query after the remote FETCH has been done.
dblink_open — opens a cursor in a remote database
dblink_open() opens a cursor in a remote database. The cursor can subsequently be manipulated with dblink_fetch() and dblink_close().
connname
Name of the connection to use; omit this parameter to use the unnamed connection.
cursorname
The name to assign to this cursor.
sql
The SELECT statement that you wish to execute in the remote database, for example select * from pg_class.
fail_on_error
If true (the default when omitted) then an error thrown on the remote side of the connection causes an error to also be thrown locally. If false, the remote error is locally reported as a NOTICE, and the function's return value is set to ERROR.
Returns status, either OK or ERROR.
Since a cursor can only persist within a transaction, dblink_open starts an explicit transaction block (BEGIN) on the remote side, if the remote side was not already within a transaction. This transaction will be closed again when the matching dblink_close is executed. Note that if you use dblink_exec to change data between dblink_open and dblink_close, and then an error occurs or you use dblink_disconnect before dblink_close, your change will be lost because the transaction will be aborted.
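A hedged end-to-end sketch of cursor handling with these functions (connection string, cursor name, and query are illustrative):
    SELECT dblink_connect('dbname=postgres options=-csearch_path=');
    SELECT dblink_open('foo', 'select proname, prosrc from pg_proc');
    SELECT * FROM dblink_fetch('foo', 5) AS (funcname name, source text);
    SELECT dblink_close('foo');
    SELECT dblink_disconnect();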
dblink_error_message — gets last error message on the named connection
dblink_error_message fetches the most recent remote error message for a given connection.
connname
Name of the connection to use.
Returns last error message, or OK if there has been no error in this connection.
When asynchronous queries are initiated by dblink_send_query, the error message associated with the connection might not get updated until the server's response message is consumed. This typically means that dblink_is_busy or dblink_get_result should be called prior to dblink_error_message, so that any error generated by the asynchronous query will be visible.
dblink_close — closes a cursor in a remote database
dblink_close closes a cursor previously opened with dblink_open.
connname
Name of the connection to use; omit this parameter to use the unnamed connection.
cursorname
The name of the cursor to close.
fail_on_error
If true (the default when omitted) then an error thrown on the remote side of the connection causes an error to also be thrown locally. If false, the remote error is locally reported as a NOTICE, and the function's return value is set to ERROR.
Returns status, either OK or ERROR.
If dblink_open started an explicit transaction block, and this is the last remaining open cursor in this connection, dblink_close will issue the matching COMMIT.
dblink_send_query — sends an async query to a remote database
dblink_send_query sends a query to be executed asynchronously, that is, without immediately waiting for the result. There must not be an async query already in progress on the connection.
After successfully dispatching an async query, completion status can be checked with dblink_is_busy, and the results are ultimately collected with dblink_get_result. It is also possible to attempt to cancel an active async query using dblink_cancel_query.
connname
Name of the connection to use.
sql
The SQL statement that you wish to execute in the remote database, for example select * from pg_class.
Returns 1 if the query was successfully dispatched, 0 otherwise.
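A brief sketch (the connection name dtest1 and the remote table foo are assumptions, not part of the original text):
    SELECT dblink_send_query('dtest1', 'SELECT * FROM foo WHERE f1 < 3');
    -- later, collect the result; the column list must be given, as for dblink
    SELECT * FROM dblink_get_result('dtest1') AS t1(f1 int, f2 text);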
dblink_is_busy — checks if connection is busy with an async query
dblink_is_busy tests whether an async query is in progress.
connname
Name of the connection to check.
Returns 1 if connection is busy, 0 if it is not busy. If this function returns 0, it is guaranteed that dblink_get_result will not block.
dblink_get_notify — retrieve async notifications on a connection
dblink_get_notify retrieves notifications on either the unnamed connection, or on a named connection if specified. To receive notifications via dblink, LISTEN must first be issued, using dblink_exec. For details see LISTEN and NOTIFY.
connname
The name of a named connection to get notifications on.
Returns setof (notify_name text, be_pid int, extra text), or an empty set if none.
dblink_get_result — gets an async query result
dblink_get_result collects the results of an asynchronous query previously sent with dblink_send_query. If the query is not already completed, dblink_get_result will wait until it is.
connname
Name of the connection to use.
fail_on_error
If true (the default when omitted) then an error thrown on the remote side of the connection causes an error to also be thrown locally. If false, the remote error is locally reported as a NOTICE, and the function returns no rows.
For an async query (that is, a SQL statement returning rows), the function returns the row(s) produced by the query. To use this function, you will need to specify the expected set of columns, as previously discussed for dblink.
For an async command (that is, a SQL statement not returning rows), the function returns a single row with a single text column containing the command's status string. It is still necessary to specify that the result will have a single text column in the calling FROM clause.
This function must be called if dblink_send_query returned 1. It must be called once for each query sent, and one additional time to obtain an empty set result, before the connection can be used again.
When using dblink_send_query and dblink_get_result, dblink fetches the entire remote query result before returning any of it to the local query processor. If the query returns a large number of rows, this can result in transient memory bloat in the local session. It may be better to open such a query as a cursor with dblink_open and then fetch a manageable number of rows at a time. Alternatively, use plain dblink(), which avoids memory bloat by spooling large result sets to disk.
dblink_cancel_query — cancels any active query on the named connection
dblink_cancel_query attempts to cancel any query that is in progress on the named connection. Note that this is not certain to succeed (since, for example, the remote query might already have finished). A cancel request simply improves the odds that the query will fail soon. You must still complete the normal query protocol, for example by calling dblink_get_result.
connname
Name of the connection to use.
Returns OK if the cancel request has been sent, or the text of an error message on failure.
dblink_get_pkey — returns the positions and field names of a relation's primary key fields
dblink_get_pkey provides information about the primary key of a relation in the local database. This is sometimes useful in generating queries to be sent to remote databases.
relname
Name of a local relation, for example foo or myschema.mytab. Include double quotes if the name is mixed-case or contains special characters, for example "FooBar"; without quotes, the string will be folded to lower case.
Returns one row for each primary key field, or no rows if the relation has no primary key. The result row type is defined as
The position column simply runs from 1 to N; it is the number of the field within the primary key, not the number within the table's columns.
dblink_build_sql_insert — builds an INSERT statement using a local tuple, replacing the primary key field values with alternative supplied values
dblink_build_sql_insert can be useful in doing selective replication of a local table to a remote database. It selects a row from the local table based on primary key, and then builds a SQL INSERT command that will duplicate that row, but with the primary key values replaced by the values in the last argument. (To make an exact copy of the row, just specify the same values for the last two arguments.)
relname
Name of a local relation, for example foo or myschema.mytab. Include double quotes if the name is mixed-case or contains special characters, for example "FooBar"; without quotes, the string will be folded to lower case.
primary_key_attnums
Attribute numbers (1-based) of the primary key fields, for example 1 2.
num_primary_key_atts
The number of primary key fields.
src_pk_att_vals_array
Values of the primary key fields to be used to look up the local tuple. Each field is represented in text form. An error is thrown if there is no local row with these primary key values.
tgt_pk_att_vals_array
Values of the primary key fields to be placed in the resulting INSERT command. Each field is represented in text form.
Returns the requested SQL statement as text.
As of PostgreSQL 9.0, the attribute numbers in primary_key_attnums are interpreted as logical column numbers, corresponding to the column's position in SELECT * FROM relname. Previous versions interpreted the numbers as physical column positions. There is a difference if any column(s) to the left of the indicated column have been dropped during the lifetime of the table.
dblink_build_sql_delete — builds a DELETE statement using supplied values for primary key field values
dblink_build_sql_delete can be useful in doing selective replication of a local table to a remote database. It builds a SQL DELETE command that will delete the row with the given primary key values.
relname
Name of a local relation, for example foo or myschema.mytab. Include double quotes if the name is mixed-case or contains special characters, for example "FooBar"; without quotes, the string will be folded to lower case.
primary_key_attnums
Attribute numbers (1-based) of the primary key fields, for example 1 2.
num_primary_key_atts
The number of primary key fields.
tgt_pk_att_vals_array
Values of the primary key fields to be used in the resulting DELETE command. Each field is represented in text form.
Returns the requested SQL statement as text.
As of PostgreSQL 9.0, the attribute numbers in primary_key_attnums are interpreted as logical column numbers, corresponding to the column's position in SELECT * FROM relname. Previous versions interpreted the numbers as physical column positions. There is a difference if any column(s) to the left of the indicated column have been dropped during the lifetime of the table.
dblink_build_sql_update — builds an UPDATE statement using a local tuple, replacing the primary key field values with alternative supplied values
dblink_build_sql_update can be useful in doing selective replication of a local table to a remote database. It selects a row from the local table based on primary key, and then builds a SQL UPDATE command that will duplicate that row, but with the primary key values replaced by the values in the last argument. (To make an exact copy of the row, just specify the same values for the last two arguments.) The UPDATE command always assigns all fields of the row — the main difference between this and dblink_build_sql_insert is that it's assumed that the target row already exists in the remote table.
relname
Name of a local relation, for example foo or myschema.mytab. Include double quotes if the name is mixed-case or contains special characters, for example "FooBar"; without quotes, the string will be folded to lower case.
primary_key_attnums
Attribute numbers (1-based) of the primary key fields, for example 1 2.
num_primary_key_atts
The number of primary key fields.
src_pk_att_vals_array
Values of the primary key fields to be used to look up the local tuple. Each field is represented in text form. An error is thrown if there is no local row with these primary key values.
tgt_pk_att_vals_array
Values of the primary key fields to be placed in the resulting UPDATE command. Each field is represented in text form.
Returns the requested SQL statement as text.
As of PostgreSQL 9.0, the attribute numbers in primary_key_attnums are interpreted as logical column numbers, corresponding to the column's position in SELECT * FROM relname. Previous versions interpreted the numbers as physical column positions. There is a difference if any column(s) to the left of the indicated column have been dropped during the lifetime of the table.
The earthdistance module provides two different approaches to calculating great circle distances on the surface of the Earth. The one described first depends on the cube module (which must be installed before earthdistance can be installed). The second one is based on the built-in point data type, using longitude and latitude for the coordinates.
In this module, the Earth is assumed to be perfectly spherical. (If that's too inaccurate for you, you might want to look at the PostGIS project.)
Data is stored in cubes that are points (both corners are the same) using 3 coordinates representing the x, y, and z distance from the center of the Earth. A domain earth over cube is provided, which includes constraint checks that the value meets these restrictions and is reasonably close to the actual surface of the Earth.
The radius of the Earth is obtained from the earth() function. It is given in meters. But by changing this one function you can change the module to use some other units, or to use a different value of the radius that you feel is more appropriate.
This package has applications to astronomical databases as well. Astronomers will probably want to change earth() to return a radius of 180/pi() so that distances are in degrees.
Functions are provided to support input in latitude and longitude (in degrees), to support output of latitude and longitude, to calculate the great circle distance between two points and to easily specify a bounding box usable for index searches.
The provided functions are shown in Table F.5.
earth()
float8
Returns the assumed radius of the Earth.
sec_to_gc(float8)
float8
Converts the normal straight line (secant) distance between two points on the surface of the Earth to the great circle distance between them.
gc_to_sec(float8)
float8
Converts the great circle distance between two points on the surface of the Earth to the normal straight line (secant) distance between them.
ll_to_earth(float8, float8)
earth
Returns the location of a point on the surface of the Earth given its latitude (argument 1) and longitude (argument 2) in degrees.
latitude(earth)
float8
Returns the latitude in degrees of a point on the surface of the Earth.
longitude(earth)
float8
Returns the longitude in degrees of a point on the surface of the Earth.
earth_distance(earth, earth)
float8
Returns the great circle distance between two points on the surface of the Earth.
earth_box(earth, float8)
cube
Returns a box suitable for an indexed search using the cube @> operator for points within a given great circle distance of a location. Some points in this box are further than the specified great circle distance from the location, so a second check using earth_distance should be included in the query.
The second part of the module relies on representing Earth locations as values of type point, in which the first component is taken to represent longitude in degrees, and the second component is taken to represent latitude in degrees. Points are taken as (longitude, latitude) and not vice versa because longitude is closer to the intuitive idea of x-axis and latitude to y-axis.
A single operator is provided, shown in Table F.6.
point <@> point
float8
Gives the distance in statute miles between two points on the Earth's surface.
Note that unlike the cube-based part of the module, units are hardwired here: changing the earth() function will not affect the results of this operator.
One disadvantage of the longitude/latitude representation is that you need to be careful about the edge conditions near the poles and near +/- 180 degrees of longitude. The cube-based representation avoids these discontinuities.
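A short, hedged sketch of the cube-based functions (the coordinates, the places table, and its columns are illustrative assumptions):
    CREATE EXTENSION cube;
    CREATE EXTENSION earthdistance;
    -- great circle distance between two latitude/longitude points, in meters
    SELECT earth_distance(ll_to_earth(48.85, 2.35), ll_to_earth(51.50, -0.12));
    -- index-friendly search within 100 km of a point, assuming a hypothetical table
    -- places(name text, lat float8, lon float8) with a GiST index on ll_to_earth(lat, lon)
    SELECT name FROM places
    WHERE earth_box(ll_to_earth(48.85, 2.35), 100000) @> ll_to_earth(lat, lon)
      AND earth_distance(ll_to_earth(48.85, 2.35), ll_to_earth(lat, lon)) < 100000;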
The file_fdw module provides the foreign-data wrapper file_fdw, which can be used to access data files in the server's file system, or to execute programs on the server and read their output. The data file or program output must be in a format that can be read by COPY FROM; see COPY for details. Access to data files is currently read-only.
A foreign table created using this wrapper can have the following options:
filename
Specifies the file to be read. Must be an absolute path name. Either filename or program must be specified, but not both.
program
Specifies the command to be executed. The standard output of this command will be read as though COPY FROM PROGRAM were used. Either program or filename must be specified, but not both.
format
Specifies the data format, the same as COPY's FORMAT option.
header
Specifies whether the data has a header line, the same as COPY's HEADER option.
delimiter
Specifies the data delimiter character, the same as COPY's DELIMITER option.
quote
Specifies the data quote character, the same as COPY's QUOTE option.
escape
Specifies the data escape character, the same as COPY's ESCAPE option.
null
Specifies the data null string, the same as COPY's NULL option.
encoding
Specifies the data encoding, the same as COPY's ENCODING option.
Note that while COPY allows options such as HEADER to be specified without a corresponding value, the foreign table option syntax requires a value to be present in all cases. To activate COPY options typically written without a value, you can pass the value TRUE, since all such options are Booleans.
A column of a foreign table created using this wrapper can have the following options:
force_not_null
This is a Boolean option. If true, it specifies that values of the column should not be matched against the null string (that is, the table-level null option). This has the same effect as listing the column in COPY's FORCE_NOT_NULL option.
force_null
This is a Boolean option. If true, it specifies that values of the column which match the null string are returned as NULL even if the value is quoted. Without this option, only unquoted values matching the null string are returned as NULL. This has the same effect as listing the column in COPY's FORCE_NULL option.
COPY's FORCE_QUOTE option is currently not supported by file_fdw.
These options can only be specified for a foreign table or its columns, not in the options of the file_fdw foreign-data wrapper, nor in the options of a server or user mapping using the wrapper.
Changing table-level options requires being a superuser or having the privileges of the default role pg_read_server_files (to use a filename) or the default role pg_execute_server_program (to use a program), for security reasons: only certain users should be able to control which file is read or which program is run. In principle regular users could be allowed to change the other options, but that's not supported at present.
When specifying the program option, keep in mind that the option string is executed by the shell. If you need to pass any arguments to the command that come from an untrusted source, you must be careful to strip or escape any characters that might have special meaning to the shell. For security reasons, it is best to use a fixed command string, or at least avoid passing any user input in it.
For a foreign table using file_fdw, EXPLAIN shows the name of the file to be read or program to be run. For a file, unless COSTS OFF is specified, the file size (in bytes) is shown as well.
One of the obvious uses for file_fdw is to make the PostgreSQL activity log available as a table for querying. To do this, first you must be logging to a CSV file, which here we will call pglog.csv. First, install file_fdw as an extension:
    CREATE EXTENSION file_fdw;
Then create a foreign server:
    CREATE SERVER pglog FOREIGN DATA WRAPPER file_fdw;
Now you are ready to create the foreign table. Using the CREATE FOREIGN TABLE command, you will need to define the columns for the table, the CSV file name, and its format:
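The original column list was lost in conversion; a heavily abbreviated sketch is shown below (the real CSV log format has many more columns, so the definition would need to be extended to match it exactly):
    CREATE FOREIGN TABLE pglog (
      log_time timestamp(3) with time zone,
      user_name text,
      database_name text,
      process_id integer,
      connection_from text,
      error_severity text,
      message text
    ) SERVER pglog
      OPTIONS ( filename 'log/pglog.csv', format 'csv' );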
That's it. Now you can query your log directly. In production, of course, you would need to define some way to deal with log rotation.
The pg_buffercache module provides a means for examining what is happening in the shared buffer cache in real time.
The module provides a C function pg_buffercache_pages that returns a set of records, plus a view pg_buffercache that wraps the function for convenient use.
By default, use is restricted to superusers and members of the pg_monitor role. Access may be granted to others using GRANT.
The pg_buffercache View
The column definitions of the view are shown in the pg_buffercache Columns table below.
There is one row for each buffer in the shared cache. Unused buffers are shown with all fields null except bufferid. Shared system catalogs are shown as belonging to database zero.
Because the cache is shared by all the databases, there will normally be pages from relations not belonging to the current database. This means that there may not be matching join rows in pg_class for some rows, or that there could even be incorrect joins. If you are trying to join against pg_class, it's a good idea to restrict the join to rows having reldatabase equal to the current database's OID or zero.
Since buffer manager locks are not taken to copy the buffer state data that the view will display, accessing the pg_buffercache view has less impact on normal buffer activity, but it doesn't provide a consistent set of results across all buffers. However, we ensure that the information of each buffer is self-consistent.
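A sketch of a typical query against the view, joining pg_class to see which relations of the current database occupy the most buffers (column aliases are illustrative):
    SELECT n.nspname, c.relname, count(*) AS buffers
    FROM pg_buffercache b
      JOIN pg_class c ON b.relfilenode = pg_relation_filenode(c.oid)
           AND b.reldatabase IN (0, (SELECT oid FROM pg_database
                                     WHERE datname = current_database()))
      JOIN pg_namespace n ON n.oid = c.relnamespace
    GROUP BY n.nspname, c.relname
    ORDER BY 3 DESC
    LIMIT 10;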
The pg_stat_statements module provides a means for tracking statistics of the SQL statements executed by a server.
The module must be loaded by adding pg_stat_statements to shared_preload_libraries in postgresql.conf, because it requires additional shared memory. This means that a server restart is needed to add or remove the module.
When pg_stat_statements is loaded, it tracks statistics across all databases of the server. To access and manipulate these statistics, the module provides the view pg_stat_statements and the utility functions pg_stat_statements_reset and pg_stat_statements. These are not available globally but can be enabled for a specific database with CREATE EXTENSION pg_stat_statements.
The pg_stat_statements View
The statistics gathered by the module are made available via a view named pg_stat_statements. This view contains one row for each distinct combination of database ID, user ID and query ID (up to the maximum number of distinct statements that the module can track). The columns of the view are shown in Table F.21.
For security reasons, only superusers and members of the pg_read_all_stats role are allowed to see the SQL text and queryid of queries executed by other users. Other users can see the statistics, however, if the view has been installed in their database.
Plannable queries (that is, SELECT, INSERT, UPDATE, and DELETE) are combined into a single pg_stat_statements entry whenever they have identical query structures according to an internal hash calculation. Typically, two queries will be considered the same for this purpose if they are semantically equivalent except for the values of literal constants appearing in the query. Utility commands (that is, all other commands) are compared strictly on the basis of their textual query strings, however.
When a constant's value has been ignored in order to match the query with other queries, the constant is replaced by a parameter symbol, such as $1, in the pg_stat_statements display. The rest of the query text is that of the first query that had the particular queryid hash value associated with the pg_stat_statements entry.
In some cases, queries with visibly different texts might get merged into a single pg_stat_statements entry. Normally this will happen only for semantically equivalent queries, but there is a small chance of hash collisions causing unrelated queries to be merged into one entry. (This cannot happen for queries belonging to different users or databases, however.)
Since the queryid hash value is computed on the post-parse-analysis representation of the queries, the opposite is also possible: queries with identical texts might appear as separate entries, if they have different meanings as a result of factors such as different search_path settings.
Consumers of pg_stat_statements may wish to use queryid (perhaps in combination with dbid and userid) as a more stable and reliable identifier for each entry than its query text. However, it is important to understand that there are only limited guarantees around the stability of the queryid hash value. Since the identifier is derived from the post-parse-analysis tree, its value is a function of, among other things, the internal object identifiers appearing in this representation. This has some counterintuitive implications. For example, pg_stat_statements will consider two apparently-identical queries to be distinct, if they reference a table that was dropped and recreated between the executions of the two queries. The hashing process is also sensitive to differences in machine architecture and other facets of the platform. Furthermore, it is not safe to assume that queryid will be stable across major versions of PostgreSQL.
As a rule of thumb, queryid values can be assumed to be stable and comparable only so long as the underlying server version and catalog metadata details stay exactly the same. Two servers participating in replication based on physical WAL replay can be expected to have identical queryid values for the same query. However, logical replication schemes do not promise to keep replicas identical in all relevant details, so queryid will not be a useful identifier for accumulating costs across a set of logical replicas. If in doubt, direct testing is recommended.
The parameter symbols used to replace constants in representative query texts start from the next number after the highest $n parameter in the original query text, or $1 if there was none. It's worth noting that in some cases there may be hidden parameter symbols that affect this numbering. For example, PL/pgSQL uses hidden parameter symbols to insert values of function local variables into queries, so that a PL/pgSQL statement like SELECT i + 1 INTO j would have representative text like SELECT i + $2.
The representative query texts are kept in an external disk file, and do not consume shared memory. Therefore, even very lengthy query texts can be stored successfully. However, if many long query texts are accumulated, the external file might grow unmanageably large. As a recovery method if that happens, pg_stat_statements may choose to discard the query texts, whereupon all existing entries in the pg_stat_statements view will show null query fields, though the statistics associated with each queryid are preserved. If this happens, consider reducing pg_stat_statements.max to prevent recurrences.
plans and calls aren't always expected to match, because planning and execution statistics are updated at their respective end phases, and only for successful operations. For example, if a statement is successfully planned but fails in the execution phase, only its planning statistics are updated. If planning is skipped because a cached plan is used, only its execution statistics are updated.
pg_stat_statements_reset(userid Oid, dbid Oid, queryid bigint) returns void
pg_stat_statements_reset discards statistics gathered so far by pg_stat_statements corresponding to the specified userid, dbid and queryid. If any of the parameters are not specified, the default value 0 (invalid) is used for each of them, and the statistics that match the other parameters will be reset. If no parameter is specified, or if all the specified parameters are 0 (invalid), all statistics are discarded. By default, this function can only be executed by superusers. Access may be granted to others using GRANT.
pg_stat_statements(showtext boolean) returns setof record
The pg_stat_statements view is defined in terms of a function also named pg_stat_statements. It is possible for clients to call the pg_stat_statements function directly, and by specifying showtext := false have query text be omitted (that is, the OUT argument that corresponds to the view's query column will return nulls). This feature is intended to support external tools that might wish to avoid the overhead of repeatedly retrieving query texts of indeterminate length. Such tools can instead cache the first query text observed for each entry themselves, since that is all pg_stat_statements itself does, and then retrieve query texts only as needed. Since the server stores query texts in a file, this approach may reduce the physical I/O required to repeatedly examine the pg_stat_statements data.
pg_stat_statements.max (integer)
pg_stat_statements.max is the maximum number of statements tracked by the module (i.e., the maximum number of rows in the pg_stat_statements view). If more distinct statements than that are observed, information about the least-executed statements is discarded. The default value is 5000. This parameter can only be set at server start.
pg_stat_statements.track (enum)
pg_stat_statements.track controls which statements are counted by the module. Specify top to track top-level statements (those issued directly by clients), all to also track nested statements (such as statements invoked within functions), or none to disable statement statistics collection. The default value is top. Only superusers can change this setting.
pg_stat_statements.track_utility (boolean)
pg_stat_statements.track_utility controls whether utility commands are tracked by the module. Utility commands are all those other than SELECT, INSERT, UPDATE and DELETE. The default value is on. Only superusers can change this setting.
pg_stat_statements.save (boolean)
pg_stat_statements.save specifies whether to save statement statistics across server shutdowns. If it is off then statistics are not saved at shutdown nor reloaded at server start. The default value is on. This parameter can only be set in the postgresql.conf file or on the server command line.
The module requires additional shared memory proportional to pg_stat_statements.max. Note that this memory is consumed whenever the module is loaded, even if pg_stat_statements.track is set to none.
These parameters must be set in postgresql.conf. Typical usage might be:
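The elided configuration example presumably looks like this (the specific values are illustrative):
    # postgresql.conf
    shared_preload_libraries = 'pg_stat_statements'
    pg_stat_statements.max = 10000
    pg_stat_statements.track = all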
Mark Kirkwood <>
Design suggestions: Neil Conway <>
Debugging advice: Tom Lane <>
Takahiro Itagaki <>. Query normalization added by Peter Geoghegan <>.
pg_buffercache Columns
Column Type
Description
bufferid
integer
ID, in the range 1..shared_buffers
relfilenode
oid (references pg_class.relfilenode)
Filenode number of the relation
reltablespace
oid (references pg_tablespace.oid)
Tablespace OID of the relation
reldatabase
oid (references pg_database.oid)
Database OID of the relation
relforknumber
smallint
Fork number within the relation; see include/common/relpath.h
relblocknumber
bigint
Page number within the relation
isdirty
boolean
Is the page dirty?
usagecount
smallint
Clock-sweep access count
pinning_backends
integer
Number of backends pinning this buffer
pg_stat_statements Columns
Column Type
Description
userid
oid (references pg_authid.oid)
OID of user who executed the statement
dbid
oid (references pg_database.oid)
OID of database in which the statement was executed
queryid
bigint
Internal hash code, computed from the statement's parse tree
query
text
Text of a representative statement
plans
bigint
Number of times the statement was planned (if pg_stat_statements.track_planning is enabled, otherwise zero)
total_plan_time
double precision
Total time spent planning the statement, in milliseconds (if pg_stat_statements.track_planning is enabled, otherwise zero)
min_plan_time
double precision
Minimum time spent planning the statement, in milliseconds (if pg_stat_statements.track_planning is enabled, otherwise zero)
max_plan_time
double precision
Maximum time spent planning the statement, in milliseconds (if pg_stat_statements.track_planning is enabled, otherwise zero)
mean_plan_time
double precision
Mean time spent planning the statement, in milliseconds (if pg_stat_statements.track_planning is enabled, otherwise zero)
stddev_plan_time
double precision
Population standard deviation of time spent planning the statement, in milliseconds (if pg_stat_statements.track_planning is enabled, otherwise zero)
calls
bigint
Number of times the statement was executed
total_exec_time
double precision
Total time spent executing the statement, in milliseconds
min_exec_time
double precision
Minimum time spent executing the statement, in milliseconds
max_exec_time
double precision
Maximum time spent executing the statement, in milliseconds
mean_exec_time
double precision
Mean time spent executing the statement, in milliseconds
stddev_exec_time
double precision
Population standard deviation of time spent executing the statement, in milliseconds
rows
bigint
Total number of rows retrieved or affected by the statement
shared_blks_hit
bigint
Total number of shared block cache hits by the statement
shared_blks_read
bigint
Total number of shared blocks read by the statement
shared_blks_dirtied
bigint
Total number of shared blocks dirtied by the statement
shared_blks_written
bigint
Total number of shared blocks written by the statement
local_blks_hit
bigint
Total number of local block cache hits by the statement
local_blks_read
bigint
Total number of local blocks read by the statement
local_blks_dirtied
bigint
Total number of local blocks dirtied by the statement
local_blks_written
bigint
Total number of local blocks written by the statement
temp_blks_read
bigint
Total number of temp blocks read by the statement
temp_blks_written
bigint
Total number of temp blocks written by the statement
blk_read_time
double precision
Total time the statement spent reading blocks, in milliseconds (if track_io_timing is enabled, otherwise zero)
blk_write_time
double precision
Total time the statement spent writing blocks, in milliseconds (if track_io_timing is enabled, otherwise zero)
wal_records
bigint
Total number of WAL records generated by the statement
wal_fpi
bigint
Total number of WAL full page images generated by the statement
wal_bytes
numeric
Total amount of WAL bytes generated by the statement
This module implements the hstore
data type for storing sets of key/value pairs within a single PostgreSQL value. This can be useful in various scenarios, such as rows with many attributes that are rarely examined, or semi-structured data. Keys and values are simply text strings.
This module is considered “trusted”, that is, it can be installed by non-superusers who have CREATE
privilege on the current database.
hstore External Representation
The text representation of an hstore
, used for input and output, includes zero or more key
=>
value
pairs separated by commas. Some examples:
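A few valid hstore literals, for instance:

    k => v
    foo => bar, baz => whatever
    "1-a" => "anything at all"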
The order of the pairs is not significant (and may not be reproduced on output). Whitespace between pairs or around the =>
sign is ignored. Double-quote keys and values that include whitespace, commas, =
s or >
s. To include a double quote or a backslash in a key or value, escape it with a backslash.
Each key in an hstore
is unique. If you declare an hstore
with duplicate keys, only one will be stored in the hstore
and there is no guarantee as to which will be kept:
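For instance (which of the duplicate values is kept is not guaranteed):

    SELECT 'a=>1,a=>2'::hstore;
      hstore
    ----------
     "a"=>"1"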
A value (but not a key) can be an SQL NULL
. For example:
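For instance, the following literal gives the key a NULL value:

    key => NULL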
The NULL
keyword is case-insensitive. Double-quote the NULL
to treat it as the ordinary string “NULL”.
Keep in mind that the hstore
text format, when used for input, applies before any required quoting or escaping. If you are passing an hstore
literal via a parameter, then no additional processing is needed. But if you're passing it as a quoted literal constant, then any single-quote characters and (depending on the setting of the standard_conforming_strings
configuration parameter) backslash characters need to be escaped correctly. See Section 4.1.2.1 for more on the handling of string constants.
On output, double quotes always surround keys and values, even when it's not strictly necessary.
hstore Operators and Functions
The operators provided by the hstore module are shown in Table F.7, the functions in Table F.8.
hstore Operators
Operator
Description
Example(s)
hstore
->
text
→ text
Returns value associated with given key, or NULL
if not present.
'a=>x, b=>y'::hstore -> 'a'
→ x
hstore
->
text[]
→ text[]
Returns values associated with given keys, or NULL
if not present.
'a=>x, b=>y, c=>z'::hstore -> ARRAY['c','a']
→ {"z","x"}
hstore
?
text
→ boolean
Does hstore
contain key?
'a=>1'::hstore ? 'a'
→ t
hstore
?&
text[]
→ boolean
Does hstore
contain all the specified keys?
'a=>1,b=>2'::hstore ?& ARRAY['a','b']
→ t
hstore
?|
text[]
→ boolean
Does hstore contain any of the specified keys?
'a=>1,b=>2'::hstore ?| ARRAY['b','c']
→ t
hstore
@>
hstore
→ boolean
Does left operand contain right?
'a=>b, b=>1, c=>NULL'::hstore @> 'b=>1'
→ t
hstore
<@
hstore
→ boolean
Is left operand contained in right?
'a=>c'::hstore <@ 'a=>b, b=>1, c=>NULL'
→ f
hstore
-
text
→ hstore
Deletes key from left operand.
'a=>1, b=>2, c=>3'::hstore - 'b'::text
→ "a"=>"1", "c"=>"3"
hstore
-
text[]
→ hstore
Deletes keys from left operand.
'a=>1, b=>2, c=>3'::hstore - ARRAY['a','b']
→ "c"=>"3"
hstore
-
hstore
→ hstore
Deletes pairs from left operand that match pairs in the right operand.
'a=>1, b=>2, c=>3'::hstore - 'a=>4, b=>2'::hstore
→ "a"=>"1", "c"=>"3"
anyelement
#=
hstore
→ anyelement
Replaces fields in the left operand (which must be a composite type) with matching values from hstore
.
ROW(1,3) #= 'f1=>11'::hstore
→ (11,3)
%%
hstore
→ text[]
Converts hstore
to an array of alternating keys and values.
%% 'a=>foo, b=>bar'::hstore
→ {a,foo,b,bar}
%#
hstore
→ text[]
Converts hstore
to a two-dimensional key/value array.
%# 'a=>foo, b=>bar'::hstore
→ {{a,foo},{b,bar}}
hstore Functions
Function
Description
Example(s)
hstore
( record
) → hstore
Constructs an hstore
from a record or row.
hstore(ROW(1,2))
→ "f1"=>"1", "f2"=>"2"
hstore
( text[]
) → hstore
Constructs an hstore
from an array, which may be either a key/value array, or a two-dimensional array.
hstore(ARRAY['a','1','b','2'])
→ "a"=>"1", "b"=>"2"
hstore(ARRAY[['c','3'],['d','4']])
→ "c"=>"3", "d"=>"4"
hstore
( text[]
, text[]
) → hstore
Constructs an hstore
from separate key and value arrays.
hstore(ARRAY['a','b'], ARRAY['1','2'])
→ "a"=>"1", "b"=>"2"
hstore
( text
, text
) → hstore
Makes a single-item hstore
.
hstore('a', 'b')
→ "a"=>"b"
akeys
( hstore
) → text[]
Extracts an hstore
's keys as an array.
akeys('a=>1,b=>2')
→ {a,b}
skeys
( hstore
) → setof text
Extracts an hstore
's keys as a set.
skeys('a=>1,b=>2')
→
avals
( hstore
) → text[]
Extracts an hstore
's values as an array.
avals('a=>1,b=>2')
→ {1,2}
svals
( hstore
) → setof text
Extracts an hstore
's values as a set.
svals('a=>1,b=>2')
→
hstore_to_array
( hstore
) → text[]
Extracts an hstore
's keys and values as an array of alternating keys and values.
hstore_to_array('a=>1,b=>2')
→ {a,1,b,2}
hstore_to_matrix
( hstore
) → text[]
Extracts an hstore
's keys and values as a two-dimensional array.
hstore_to_matrix('a=>1,b=>2')
→ {{a,1},{b,2}}
hstore_to_json
( hstore
) → json
Converts an hstore
to a json
value, converting all non-null values to JSON strings.
This function is used implicitly when an hstore
value is cast to json
.
hstore_to_json('"a key"=>1, b=>t, c=>null, d=>12345, e=>012345, f=>1.234, g=>2.345e+4')
→ {"a key": "1", "b": "t", "c": null, "d": "12345", "e": "012345", "f": "1.234", "g": "2.345e+4"}
hstore_to_jsonb
( hstore
) → jsonb
Converts an hstore
to a jsonb
value, converting all non-null values to JSON strings.
This function is used implicitly when an hstore
value is cast to jsonb
.
hstore_to_jsonb('"a key"=>1, b=>t, c=>null, d=>12345, e=>012345, f=>1.234, g=>2.345e+4')
→ {"a key": "1", "b": "t", "c": null, "d": "12345", "e": "012345", "f": "1.234", "g": "2.345e+4"}
hstore_to_json_loose
( hstore
) → json
Converts an hstore
to a json
value, but attempts to distinguish numerical and Boolean values so they are unquoted in the JSON.
hstore_to_json_loose('"a key"=>1, b=>t, c=>null, d=>12345, e=>012345, f=>1.234, g=>2.345e+4')
→ {"a key": 1, "b": true, "c": null, "d": 12345, "e": "012345", "f": 1.234, "g": 2.345e+4}
hstore_to_jsonb_loose
( hstore
) → jsonb
Converts an hstore
to a jsonb
value, but attempts to distinguish numerical and Boolean values so they are unquoted in the JSON.
hstore_to_jsonb_loose('"a key"=>1, b=>t, c=>null, d=>12345, e=>012345, f=>1.234, g=>2.345e+4')
→ {"a key": 1, "b": true, "c": null, "d": 12345, "e": "012345", "f": 1.234, "g": 2.345e+4}
slice
( hstore
, text[]
) → hstore
Extracts a subset of an hstore
containing only the specified keys.
slice('a=>1,b=>2,c=>3'::hstore, ARRAY['b','c','x'])
→ "b"=>"2", "c"=>"3"
each
( hstore
) → setof record
( key
text
, value
text
)
Extracts an hstore
's keys and values as a set of records.
select * from each('a=>1,b=>2')
→
exist
( hstore
, text
) → boolean
Does hstore
contain key?
exist('a=>1', 'a')
→ t
defined
( hstore
, text
) → boolean
Does hstore
contain a non-NULL
value for key?
defined('a=>NULL', 'a')
→ f
delete
( hstore
, text
) → hstore
Deletes pair with matching key.
delete('a=>1,b=>2', 'b')
→ "a"=>"1"
delete
( hstore
, text[]
) → hstore
Deletes pairs with matching keys.
delete('a=>1,b=>2,c=>3', ARRAY['a','b'])
→ "c"=>"3"
delete
( hstore
, hstore
) → hstore
Deletes pairs matching those in the second argument.
delete('a=>1,b=>2', 'a=>4,b=>2'::hstore)
→ "a"=>"1"
populate_record
( anyelement
, hstore
) → anyelement
Replaces fields in the left operand (which must be a composite type) with matching values from hstore
.
populate_record(ROW(1,2), 'f1=>42'::hstore)
→ (42,2)
In addition to these operators and functions, values of the hstore
type can be subscripted, allowing them to act like associative arrays. Only a single subscript of type text
can be specified; it is interpreted as a key and the corresponding value is fetched or stored. For example,
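assuming a table mytable with an hstore column h:

    UPDATE mytable SET h['c'] = '3';
    SELECT h['a'] FROM mytable;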
A subscripted fetch returns NULL
if the subscript is NULL
or that key does not exist in the hstore
. (Thus, a subscripted fetch is not greatly different from the ->
operator.) A subscripted update fails if the subscript is NULL
; otherwise, it replaces the value for that key, adding an entry to the hstore
if the key does not already exist.
hstore
has GiST and GIN index support for the @>
, ?
, ?&
and ?|
operators. For example:
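A sketch, using illustrative names testhstore, h, and hidx:

    CREATE INDEX hidx ON testhstore USING GIST (h);
    CREATE INDEX hidx ON testhstore USING GIN (h);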
gist_hstore_ops
GiST opclass approximates a set of key/value pairs as a bitmap signature. Its optional integer parameter siglen
determines the signature length in bytes. The default length is 16 bytes. Valid values of signature length are between 1 and 2024 bytes. Longer signatures lead to a more precise search (scanning a smaller fraction of the index and fewer heap pages), at the cost of a larger index.
Example of creating such an index with a signature length of 32 bytes:
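For instance (same illustrative names):

    CREATE INDEX hidx ON testhstore USING GIST (h gist_hstore_ops(siglen=32));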
hstore
also supports btree
or hash
indexes for the =
operator. This allows hstore
columns to be declared UNIQUE
, or to be used in GROUP BY
, ORDER BY
or DISTINCT
expressions. The sort ordering for hstore
values is not particularly useful, but these indexes may be useful for equivalence lookups. Create indexes for =
comparisons as follows:
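For instance (same illustrative names):

    CREATE INDEX hidx ON testhstore USING BTREE (h);
    CREATE INDEX hidx ON testhstore USING HASH (h);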
Add a key, or update an existing key with a new value:
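A sketch, assuming a table tab with an hstore column h:

    UPDATE tab SET h['c'] = '3';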
Another way to do the same thing is:
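For instance, using concatenation instead of subscripting:

    UPDATE tab SET h = h || hstore('c', '3');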
If multiple keys are to be added or changed in one operation, the concatenation approach is more efficient than subscripting:
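For instance:

    UPDATE tab SET h = h || hstore(ARRAY['q', 'w'], ARRAY['11', '12']);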
Delete a key:
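Continuing the same sketch:

    UPDATE tab SET h = delete(h, 'k1');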
Convert a record
to an hstore
:
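A sketch, using an illustrative table test:

    CREATE TABLE test (col1 integer, col2 text, col3 text);
    INSERT INTO test VALUES (123, 'foo', 'bar');
    SELECT hstore(t) FROM test AS t;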
Convert an hstore
to a predefined record
type:
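Continuing the sketch with the same test table:

    SELECT * FROM populate_record(null::test, '"col1"=>"456", "col2"=>"zzz"');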
Modify an existing record using the values from an hstore
:
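One possible form, again with the illustrative test table:

    SELECT (r).* FROM (SELECT t #= '"col3"=>"baz"' AS r FROM test t) s;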
The hstore
type, because of its intrinsic liberality, could contain a lot of different keys. Checking for valid keys is the task of the application. The following examples demonstrate several techniques for checking keys and obtaining statistics.
Simple example:
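For instance:

    SELECT * FROM each('aaa=>bq, b=>NULL, ""=>1');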
Using a table:
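A sketch, assuming a table testhstore with an hstore column h:

    SELECT (each(h)).key, (each(h)).value INTO stuff FROM testhstore;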
Online statistics:
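Continuing with the same illustrative table:

    SELECT key, count(*) FROM
      (SELECT (each(h)).key FROM testhstore) AS stat
      GROUP BY key
      ORDER BY count DESC, key;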
As of PostgreSQL 9.0, hstore
uses a different internal representation than previous versions. This presents no obstacle for dump/restore upgrades since the text representation (used in the dump) is unchanged.
In the event of a binary upgrade, upward compatibility is maintained by having the new code recognize old-format data. This will entail a slight performance penalty when processing data that has not yet been modified by the new code. It is possible to force an upgrade of all values in a table column by doing an UPDATE
statement as follows:
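A sketch (tablename and hstorecol are placeholders):

    UPDATE tablename SET hstorecol = hstorecol || '';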
Another way to do it is:
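With the same placeholders:

    ALTER TABLE tablename ALTER hstorecol TYPE hstore USING hstorecol || '';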
The ALTER TABLE
method requires an ACCESS EXCLUSIVE
lock on the table, but does not result in bloating the table with old row versions.
Additional extensions are available that implement transforms for the hstore
type for the languages PL/Perl and PL/Python. The extensions for PL/Perl are called hstore_plperl
and hstore_plperlu
, for trusted and untrusted PL/Perl. If you install these transforms and specify them when creating a function, hstore
values are mapped to Perl hashes. The extensions for PL/Python are called hstore_plpythonu
, hstore_plpython2u
, and hstore_plpython3u
(see Section 46.1 for the PL/Python naming convention). If you use them, hstore
values are mapped to Python dictionaries.
It is strongly recommended that the transform extensions be installed in the same schema as hstore
. Otherwise there are installation-time security hazards if a transform extension's schema contains objects defined by a hostile user.
Oleg Bartunov <oleg@sai.msu.su>, Moscow, Moscow University, Russia
Teodor Sigaev <teodor@sigaev.ru>, Moscow, Delta-Soft Ltd., Russia
Additional enhancements by Andrew Gierth <andrew@tao11.riddles.org.uk>, United Kingdom
The passwordcheck module checks users' passwords whenever they are set with CREATE ROLE or ALTER ROLE. If a password is considered too weak, it will be rejected and the command will terminate with an error.
To enable this module, add '$libdir/passwordcheck' to shared_preload_libraries in postgresql.conf, then restart the server.
You can adapt this module to your needs by changing the source code. For example, you can use CrackLib to check passwords; this only requires uncommenting two lines in the Makefile and rebuilding the module. (We cannot include CrackLib by default for license reasons.) Without CrackLib, the module enforces a few simple rules for password strength, which you can modify or extend as you see fit.
Caution: To prevent unencrypted passwords from being sent across the network, written to the server log, or otherwise stolen by a database administrator, PostgreSQL allows the user to supply pre-encrypted passwords. Many client programs make use of this functionality and encrypt the password before it is sent to the server.
This limits the usefulness of the passwordcheck module, because in that case it can only try to guess the password. For this reason, passwordcheck is not recommended if your security requirements are high. It is more secure to use an external authentication method such as GSSAPI (see Chapter 21) than to rely on passwords within the database.
Alternatively, you could modify passwordcheck to reject pre-encrypted passwords, but forcing users to set their passwords in clear text carries its own security risks.
The pgstattuple module provides various functions to obtain tuple-level statistics.
Because these functions return detailed page-level information, access is restricted by default. By default, only the role pg_stat_scan_tables has EXECUTE privilege; superusers of course bypass this restriction. After the extension has been installed, users may issue GRANT commands to change the privileges on the functions to allow others to execute them. However, it is usually preferable to add those users to the pg_stat_scan_tables role instead.
pgstattuple(regclass) returns record
pgstattuple
returns a relation's physical length, percentage of “dead” tuples, and other info. This may help users to determine whether vacuum is necessary or not. The argument is the target relation's name (optionally schema-qualified) or OID. For example:
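A call might look like this (any table name or OID can be used):

    SELECT * FROM pgstattuple('pg_catalog.pg_proc');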
The output columns are described in Table F.22.
pgstattuple Output Columns
table_len (bigint): Physical relation length in bytes
tuple_count (bigint): Number of live tuples
tuple_len (bigint): Total length of live tuples in bytes
tuple_percent (float8): Percentage of live tuples
dead_tuple_count (bigint): Number of dead tuples
dead_tuple_len (bigint): Total length of dead tuples in bytes
dead_tuple_percent (float8): Percentage of dead tuples
free_space (bigint): Total free space in bytes
free_percent (float8): Percentage of free space
The table_len
will always be greater than the sum of the tuple_len
, dead_tuple_len
and free_space
. The difference is accounted for by fixed page overhead, the per-page table of pointers to tuples, and padding to ensure that tuples are correctly aligned.
pgstattuple
acquires only a read lock on the relation. So the results do not reflect an instantaneous snapshot; concurrent updates will affect them.
pgstattuple judges a tuple is “dead” if HeapTupleSatisfiesDirty returns false.
pgstattuple(text) returns record
This is the same as pgstattuple(regclass), except that the target relation is specified as TEXT. This function is kept because of backward compatibility so far, and will be deprecated in some future release.
pgstatindex(regclass) returns record
pgstatindex
returns a record showing information about a B-tree index. For example:
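A call might look like this (the index name is illustrative):

    SELECT * FROM pgstatindex('pg_cast_oid_index');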
The output columns are:
version (integer): B-tree version number
tree_level (integer): Tree level of the root page
index_size (bigint): Total index size in bytes
root_block_no (bigint): Location of root page (zero if none)
internal_pages (bigint): Number of “internal” (upper-level) pages
leaf_pages (bigint): Number of leaf pages
empty_pages (bigint): Number of empty pages
deleted_pages (bigint): Number of deleted pages
avg_leaf_density (float8): Average density of leaf pages
leaf_fragmentation (float8): Leaf page fragmentation
The reported index_size
will normally correspond to one more page than is accounted for by internal_pages + leaf_pages + empty_pages + deleted_pages
, because it also includes the index's metapage.
As with pgstattuple, the results are accumulated page-by-page, and should not be expected to represent an instantaneous snapshot of the whole index.
pgstatindex(text) returns record
This is the same as pgstatindex(regclass), except that the target index is specified as TEXT. This function is kept because of backward compatibility so far, and will be deprecated in some future release.
pgstatginindex(regclass) returns record
pgstatginindex
returns a record showing information about a GIN index. For example:
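A call might look like this (the index name is illustrative):

    SELECT * FROM pgstatginindex('test_gin_index');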
The output columns are:
version (integer): GIN version number
pending_pages (integer): Number of pages in the pending list
pending_tuples (bigint): Number of tuples in the pending list
pgstathashindex(regclass) returns record
pgstathashindex
returns a record showing information about a HASH index. For example:
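A call might look like this (the index name is illustrative):

    SELECT * FROM pgstathashindex('con_hash_index');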
The output columns are:
version (integer): HASH version number
bucket_pages (bigint): Number of bucket pages
overflow_pages (bigint): Number of overflow pages
bitmap_pages (bigint): Number of bitmap pages
unused_pages (bigint): Number of unused pages
live_items (bigint): Number of live tuples
dead_tuples (bigint): Number of dead tuples
free_percent (float): Percentage of free space
pg_relpages(regclass) returns bigint
pg_relpages returns the number of pages in the relation.
pg_relpages(text) returns bigint
This is the same as pg_relpages(regclass), except that the target relation is specified as TEXT. This function is kept because of backward compatibility so far, and will be deprecated in some future release.
pgstattuple_approx(regclass) returns record
pgstattuple_approx
is a faster alternative to pgstattuple
that returns approximate results. The argument is the target relation's name or OID. For example:
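A call might look like this (the table name is illustrative):

    SELECT * FROM pgstattuple_approx('textblob');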
The output columns are described in Table F.23.
Whereas pgstattuple
always performs a full-table scan and returns an exact count of live and dead tuples (and their sizes) and free space, pgstattuple_approx
tries to avoid the full-table scan and returns exact dead tuple statistics along with an approximation of the number and size of live tuples and free space.
It does this by skipping pages that have only visible tuples according to the visibility map (if a page has the corresponding VM bit set, then it is assumed to contain no dead tuples). For such pages, it derives the free space value from the free space map, and assumes that the rest of the space on the page is taken up by live tuples.
For pages that cannot be skipped, it scans each tuple, recording its presence and size in the appropriate counters, and adding up the free space on the page. At the end, it estimates the total number of live tuples based on the number of pages and tuples scanned (in the same way that VACUUM estimates pg_class.reltuples).
pgstattuple_approx Output Columns
table_len (bigint): Physical relation length in bytes (exact)
scanned_percent (float8): Percentage of table scanned
approx_tuple_count (bigint): Number of live tuples (estimated)
approx_tuple_len (bigint): Total length of live tuples in bytes (estimated)
approx_tuple_percent (float8): Percentage of live tuples
dead_tuple_count (bigint): Number of dead tuples (exact)
dead_tuple_len (bigint): Total length of dead tuples in bytes (exact)
dead_tuple_percent (float8): Percentage of dead tuples
approx_free_space (bigint): Total free space in bytes (estimated)
approx_free_percent (float8): Percentage of free space
In the above output, the free space figures may not match the pgstattuple
output exactly, because the free space map gives us an exact figure, but is not guaranteed to be accurate to the byte.
Tatsuo Ishii, Satoshi Nagayasu and Abhijit Menon-Sen
The pg_visibility
module provides a means for examining the visibility map (VM) and page-level visibility information of a table. It also provides functions to check the integrity of a visibility map and to force it to be rebuilt.
Three different bits are used to store information about page-level visibility. The all-visible bit in the visibility map indicates that every tuple in the corresponding page of the relation is visible to every current and future transaction. The all-frozen bit in the visibility map indicates that every tuple in the page is frozen; that is, no future vacuum will need to modify the page until such time as a tuple is inserted, updated, deleted, or locked on that page. The page header's PD_ALL_VISIBLE
bit has the same meaning as the all-visible bit in the visibility map, but is stored within the data page itself rather than in a separate data structure. These two bits will normally agree, but the page's all-visible bit can sometimes be set while the visibility map bit is clear after a crash recovery. The reported values can also disagree because of a change that occurs after pg_visibility
examines the visibility map and before it examines the data page. Any event that causes data corruption can also cause these bits to disagree.
Functions that display information about PD_ALL_VISIBLE
bits are much more costly than those that only consult the visibility map, because they must read the relation's data blocks rather than only the (much smaller) visibility map. Functions that check the relation's data blocks are similarly expensive.
pg_visibility_map(relation regclass, blkno bigint, all_visible OUT boolean, all_frozen OUT boolean) returns record
Returns the all-visible and all-frozen bits in the visibility map for the given block of the given relation.
pg_visibility(relation regclass, blkno bigint, all_visible OUT boolean, all_frozen OUT boolean, pd_all_visible OUT boolean) returns record
Returns the all-visible and all-frozen bits in the visibility map for the given block of the given relation, plus the PD_ALL_VISIBLE bit of that block.
pg_visibility_map(relation regclass, blkno OUT bigint, all_visible OUT boolean, all_frozen OUT boolean) returns setof record
Returns the all-visible and all-frozen bits in the visibility map for each block of the given relation.
pg_visibility(relation regclass, blkno OUT bigint, all_visible OUT boolean, all_frozen OUT boolean, pd_all_visible OUT boolean) returns setof record
Returns the all-visible and all-frozen bits in the visibility map for each block of the given relation, plus the PD_ALL_VISIBLE bit of each block.
pg_visibility_map_summary(relation regclass, all_visible OUT bigint, all_frozen OUT bigint) returns record
Returns the number of all-visible pages and the number of all-frozen pages in the relation according to the visibility map.
pg_check_frozen(relation regclass, t_ctid OUT tid) returns setof tid
Returns the TIDs of non-frozen tuples stored in pages marked all-frozen in the visibility map. If this function returns a non-empty set of TIDs, the visibility map is corrupt.
pg_check_visible(relation regclass, t_ctid OUT tid) returns setof tid
Returns the TIDs of non-all-visible tuples stored in pages marked all-visible in the visibility map. If this function returns a non-empty set of TIDs, the visibility map is corrupt.
pg_truncate_visibility_map(relation regclass) returns void
Truncates the visibility map for the given relation. This function is useful if you believe that the visibility map for the relation is corrupt and wish to force rebuilding it. The first VACUUM
executed on the given relation after this function is executed will scan every page in the relation and rebuild the visibility map. (Until that is done, queries will treat the visibility map as containing all zeroes.)
By default, these functions are executable only by superusers and members of the pg_stat_scan_tables
role, with the exception of pg_truncate_visibility_map(relation regclass)
which can only be executed by superusers.
The pg_trgm module provides functions and operators for determining the similarity of alphanumeric text based on trigram matching, as well as index operator classes that support fast searching for similar strings.
A trigram is a group of three consecutive characters taken from a string. We can measure the similarity of two strings by counting the number of trigrams they share. This simple idea turns out to be very effective for measuring the similarity of words in many natural languages.
pg_trgm
ignores non-word characters (non-alphanumerics) when extracting trigrams from a string. Each word is considered to have two spaces prefixed and one space suffixed when determining the set of trigrams contained in the string. For example, the set of trigrams in the string “cat
” is “ c
”, “ ca
”, “cat
”, and “at
”. The set of trigrams in the string “foo|bar
” is “ f
”, “ fo
”, “foo
”, “oo
”, “ b
”, “ ba
”, “bar
”, and “ar
”.
The functions and operators provided by the pg_trgm module are shown in the lists near the end of this section.
Consider the following example:
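A minimal illustration:

    SELECT word_similarity('word', 'two words');
     word_similarity
    -----------------
                 0.8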
In the first string, the set of trigrams is {" w"," wo","wor","ord","rd "}
. In the second string, the ordered set of trigrams is {" t"," tw","two","wo "," w"," wo","wor","ord","rds","ds "}
. The most similar extent of an ordered set of trigrams in the second string is {" w"," wo","wor","ord"}
, and the similarity is 0.8
.
This function returns a value that can be approximately understood as the greatest similarity between the first string and any substring of the second string. However, this function does not add padding to the boundaries of the extent. Thus, the number of additional characters present in the second string is not considered, except for the mismatched word boundaries.
At the same time, strict_word_similarity(text, text)
selects an extent of words in the second string. In the example above, strict_word_similarity(text, text)
would select the extent of a single word 'words'
, whose set of trigrams is {" w"," wo","wor","ord","rds","ds "}
.
Thus, the strict_word_similarity(text, text)
function is useful for finding the similarity to whole words, while word_similarity(text, text)
is more suitable for finding the similarity for parts of words.
pg_trgm.similarity_threshold (real)
Sets the current similarity threshold that is used by the % operator. The threshold must be between 0 and 1 (default is 0.3).
pg_trgm.word_similarity_threshold (real)
Sets the current word similarity threshold that is used by the <% and %> operators. The threshold must be between 0 and 1 (default is 0.6).
pg_trgm.strict_word_similarity_threshold (real)
Sets the current strict word similarity threshold that is used by the <<% and %>> operators. The threshold must be between 0 and 1 (default is 0.5).
The pg_trgm
module provides GiST and GIN index operator classes that allow you to create an index over a text column for the purpose of very fast similarity searches. These index types support the above-described similarity operators, and additionally support trigram-based index searches for LIKE
, ILIKE
, ~
and ~*
queries. (These indexes do not support equality nor simple comparison operators, so you may need a regular B-tree index too.)
Example:
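A sketch using illustrative names test_trgm, t, and trgm_idx:

    CREATE INDEX trgm_idx ON test_trgm USING GIST (t gist_trgm_ops);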
or
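with GIN instead (same illustrative names):

    CREATE INDEX trgm_idx ON test_trgm USING GIN (t gin_trgm_ops);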
At this point, you will have an index on the t
column that you can use for similarity searching. A typical query is
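one of roughly this shape (illustrative names as above):

    SELECT t, similarity(t, 'word') AS sml
      FROM test_trgm
      WHERE t % 'word'
      ORDER BY sml DESC, t;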
This will return all values in the text column that are sufficiently similar to word
, sorted from best match to worst. The index will be used to make this a fast operation even over very large data sets.
A variant of the above query is
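one of roughly this shape:

    SELECT t, t <-> 'word' AS dist
      FROM test_trgm
      ORDER BY dist LIMIT 10;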
This can be implemented quite efficiently by GiST indexes, but not by GIN indexes. It will usually beat the first formulation when only a small number of the closest matches is wanted.
Also you can use an index on the t
column for word similarity or strict word similarity. Typical queries are:
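For word similarity, a sketch:

    SELECT t, word_similarity('word', t) AS sml
      FROM test_trgm
      WHERE 'word' <% t
      ORDER BY sml DESC, t;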
and
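for strict word similarity:

    SELECT t, strict_word_similarity('word', t) AS sml
      FROM test_trgm
      WHERE 'word' <<% t
      ORDER BY sml DESC, t;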
This will return all values in the text column for which there is a continuous extent in the corresponding ordered trigram set that is sufficiently similar to the trigram set of word
, sorted from best match to worst. The index will be used to make this a fast operation even over very large data sets.
Possible variants of the above queries are:
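For word similarity distance, a sketch:

    SELECT t, 'word' <<-> t AS dist
      FROM test_trgm
      ORDER BY dist LIMIT 10;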
and
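for strict word similarity distance:

    SELECT t, 'word' <<<-> t AS dist
      FROM test_trgm
      ORDER BY dist LIMIT 10;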
This can be implemented quite efficiently by GiST indexes, but not by GIN indexes.
Beginning in PostgreSQL 9.1, these index types also support index searches for LIKE
and ILIKE
, for example
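such as (table and column names as in the earlier sketch):

    SELECT * FROM test_trgm WHERE t LIKE '%foo%bar';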
The index search works by extracting trigrams from the search string and then looking these up in the index. The more trigrams in the search string, the more effective the index search is. Unlike B-tree based searches, the search string need not be left-anchored.
Beginning in PostgreSQL 9.3, these index types also support index searches for regular-expression matches (~
and ~*
operators), for example
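such as:

    SELECT * FROM test_trgm WHERE t ~ '(foo|bar)';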
The index search works by extracting trigrams from the regular expression and then looking these up in the index. The more trigrams that can be extracted from the regular expression, the more effective the index search is. Unlike B-tree based searches, the search string need not be left-anchored.
For both LIKE
and regular-expression searches, keep in mind that a pattern with no extractable trigrams will degenerate to a full-index scan.
The choice between GiST and GIN indexing depends on the relative performance characteristics of GiST and GIN, which are discussed elsewhere.
Trigram matching is a very useful tool when used in conjunction with a full text index. In particular it can help to recognize misspelled input words that will not be matched directly by the full text search mechanism.
The first step is to generate an auxiliary table containing all the unique words in the documents:
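One way to do this is with ts_stat, roughly as follows:

    CREATE TABLE words AS SELECT word FROM
            ts_stat('SELECT to_tsvector(''simple'', bodytext) FROM documents');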
where documents
is a table that has a text field bodytext
that we wish to search. The reason for using the simple
configuration with the to_tsvector
function, instead of using a language-specific configuration, is that we want a list of the original (unstemmed) words.
Next, create a trigram index on the word column:
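For instance:

    CREATE INDEX words_idx ON words USING GIN (word gin_trgm_ops);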
Now, a SELECT
query similar to the previous example can be used to suggest spellings for misspelled words in user search terms. A useful extra test is to require that the selected words are also of similar length to the misspelled word.
Since the words
table has been generated as a separate, static table, it will need to be periodically regenerated so that it remains reasonably up-to-date with the document collection. Keeping it exactly current is usually unnecessary.
Documentation: Christopher Kings-Lynne
This module is sponsored by Delta-Soft Ltd., Moscow, Russia.
Robert Haas
GiST Development Site
Tsearch2 Development Site
Oleg Bartunov, Moscow, Moscow University, Russia
Teodor Sigaev, Moscow, Delta-Soft Ltd., Russia
Alexander Korotkov, Moscow, Postgres Professional, Russia
pg_trgm Operators
text
%
text
boolean
Returns true
if its arguments have a similarity that is greater than the current similarity threshold set by pg_trgm.similarity_threshold
.
text
<%
text
boolean
Returns true
if the similarity between the trigram set in the first argument and a continuous extent of an ordered trigram set in the second argument is greater than the current word similarity threshold set by pg_trgm.word_similarity_threshold
parameter.
text
%>
text
boolean
Commutator of the <%
operator.
text
<<%
text
boolean
Returns true
if its second argument has a continuous extent of an ordered trigram set that matches word boundaries, and its similarity to the trigram set of the first argument is greater than the current strict word similarity threshold set by the pg_trgm.strict_word_similarity_threshold
parameter.
text
%>>
text
boolean
Commutator of the <<%
operator.
text
<->
text
real
Returns the “distance” between the arguments, that is one minus the similarity()
value.
text
<<->
text
real
Returns the “distance” between the arguments, that is one minus the word_similarity()
value.
text
<->>
text
real
Commutator of the <<->
operator.
text
<<<->
text
real
Returns the “distance” between the arguments, that is one minus the strict_word_similarity()
value.
text
<->>>
text
real
Commutator of the <<<->
operator.
pg_trgm Functions
similarity(text, text)
real
Returns a number that indicates how similar the two arguments are. The range of the result is zero (indicating that the two strings are completely dissimilar) to one (indicating that the two strings are identical).
show_trgm(text)
text[]
Returns an array of all the trigrams in the given string. (In practice this is seldom useful except for debugging.)
word_similarity(text, text)
real
Returns a number that indicates the greatest similarity between the set of trigrams in the first string and any continuous extent of an ordered set of trigrams in the second string. For details, see the explanation below.
strict_word_similarity(text, text)
real
Same as word_similarity(text, text)
, but forces extent boundaries to match word boundaries. Since we don't have cross-word trigrams, this function actually returns greatest similarity between first string and any continuous extent of words of the second string.
show_limit()
real
Returns the current similarity threshold used by the %
operator. This sets the minimum similarity between two words for them to be considered similar enough to be misspellings of each other, for example (deprecated).
set_limit(real)
real
Sets the current similarity threshold that is used by the %
operator. The threshold must be between 0 and 1 (default is 0.3). Returns the same value passed in (deprecated).
The tsm_system_time module provides the table sampling method SYSTEM_TIME, which can be used in the TABLESAMPLE clause of a SELECT command.
This table sampling method accepts a single floating-point argument that is the maximum number of milliseconds to spend reading the table. This gives you direct control over how long the query takes, at the price that the size of the sample becomes hard to predict. The resulting sample will contain as many rows as could be read in the specified time, unless the whole table has been read first.
Like the built-in SYSTEM sampling method, SYSTEM_TIME performs block-level sampling, so the sample is not completely random but may be subject to clustering effects, especially if only a small number of rows are requested.
SYSTEM_TIME does not support the REPEATABLE clause.
Here is an example of selecting a sample of a table with SYSTEM_TIME. First install the extension:
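For instance:

    CREATE EXTENSION tsm_system_time;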
Then you can use it in a SELECT command, for instance:
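A sketch, assuming a table named my_table:

    SELECT * FROM my_table TABLESAMPLE SYSTEM_TIME(1000);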
This command will return as large a sample of my_table as it can read in 1 second (1,000 milliseconds). Of course, if the whole table can be read in under 1 second, all of its rows will be returned.
sepgsql
is a loadable module that supports label-based mandatory access control (MAC) based on SELinux security policy.
The current implementation has significant limitations, and does not enforce mandatory access control for all actions. See Section F.35.7.
This module integrates with SELinux to provide an additional layer of security checking above and beyond what is normally provided by PostgreSQL. From the perspective of SELinux, this module allows PostgreSQL to function as a user-space object manager. Each table or function access initiated by a DML query will be checked against the system security policy. This check is in addition to the usual SQL permissions checking performed by PostgreSQL.
SELinux access control decisions are made using security labels, which are represented by strings such as system_u:object_r:sepgsql_table_t:s0
. Each access control decision involves two labels: the label of the subject attempting to perform the action, and the label of the object on which the operation is to be performed. Since these labels can be applied to any sort of object, access control decisions for objects stored within the database can be (and, with this module, are) subjected to the same general criteria used for objects of any other type, such as files. This design is intended to allow a centralized security policy to protect information assets independent of the particulars of how those assets are stored.
The SECURITY LABEL statement allows assignment of a security label to a database object.
sepgsql
can only be used on Linux 2.6.28 or higher with SELinux enabled. It is not available on any other platform. You will also need libselinux 2.1.10 or higher and selinux-policy 3.9.13 or higher (although some distributions may backport the necessary rules into older policy versions).
The sestatus
command allows you to check the status of SELinux. A typical display is:
If SELinux is disabled or not installed, you must set that product up first before installing this module.
To build this module, include the option --with-selinux
in your PostgreSQL configure
command. Be sure that the libselinux-devel
RPM is installed at build time.
To use this module, you must include sepgsql
in the shared_preload_libraries parameter in postgresql.conf
. The module will not function correctly if loaded in any other manner. Once the module is loaded, you should execute sepgsql.sql
in each database. This will install functions needed for security label management, and assign initial security labels.
Here is an example showing how to initialize a fresh database cluster with sepgsql
functions and security labels installed. Adjust the paths shown as appropriate for your installation:
Please note that you may see some or all of the following notifications depending on the particular versions you have of libselinux and selinux-policy:
These messages are harmless and should be ignored.
If the installation process completes without error, you can now start the server normally.
Due to the nature of SELinux, running the regression tests for sepgsql
requires several extra configuration steps, some of which must be done as root. The regression tests will not be run by an ordinary make check
or make installcheck
command; you must set up the configuration and then invoke the test script manually. The tests must be run in the contrib/sepgsql
directory of a configured PostgreSQL build tree. Although they require a build tree, the tests are designed to be executed against an installed server, that is they are comparable to make installcheck
not make check
.
First, set up sepgsql
in a working database according to the instructions in Section F.35.2. Note that the current operating system user must be able to connect to the database as superuser without password authentication.
Second, build and install the policy package for the regression test. The sepgsql-regtest
policy is a special purpose policy package which provides a set of rules to be allowed during the regression tests. It should be built from the policy source file sepgsql-regtest.te
, which is done using make
with a Makefile supplied by SELinux. You will need to locate the appropriate Makefile on your system; the path shown below is only an example. Once built, install this policy package using the semodule
command, which loads supplied policy packages into the kernel. If the package is correctly installed, semodule
-l should list sepgsql-regtest
as an available policy package:
Third, turn on sepgsql_regression_test_mode
. For security reasons, the rules in sepgsql-regtest
are not enabled by default; the sepgsql_regression_test_mode
parameter enables the rules needed to launch the regression tests. It can be turned on using the setsebool
command:
Fourth, verify your shell is operating in the unconfined_t
domain:
See Section F.35.8 for details on adjusting your working domain, if necessary.
Finally, run the regression test script:
This script will attempt to verify that you have done all the configuration steps correctly, and then it will run the regression tests for the sepgsql
module.
After completing the tests, it's recommended you disable the sepgsql_regression_test_mode
parameter:
You might prefer to remove the sepgsql-regtest
policy entirely:
sepgsql.permissive
(boolean
)
This parameter enables sepgsql
to function in permissive mode, regardless of the system setting. The default is off. This parameter can only be set in the postgresql.conf
file or on the server command line.
When this parameter is on, sepgsql
functions in permissive mode, even if SELinux in general is working in enforcing mode. This parameter is primarily useful for testing purposes.
sepgsql.debug_audit
(boolean
)
This parameter enables the printing of audit messages regardless of the system policy settings. The default is off, which means that messages will be printed according to the system settings.
The security policy of SELinux also has rules to control whether or not particular accesses are logged. By default, access violations are logged, but allowed accesses are not.
This parameter forces all possible logging to be turned on, regardless of the system policy.
The security model of SELinux describes all the access control rules as relationships between a subject entity (typically, a client of the database) and an object entity (such as a database object), each of which is identified by a security label. If access to an unlabeled object is attempted, the object is treated as if it were assigned the label unlabeled_t
.
Currently, sepgsql
allows security labels to be assigned to schemas, tables, columns, sequences, views, and functions. When sepgsql
is in use, security labels are automatically assigned to supported database objects at creation time. This label is called a default security label, and is decided according to the system security policy, which takes as input the creator's label, the label assigned to the new object's parent object and optionally name of the constructed object.
A new database object basically inherits the security label of the parent object, except when the security policy has special rules known as type-transition rules, in which case a different label may be applied. For schemas, the parent object is the current database; for tables, sequences, views, and functions, it is the containing schema; for columns, it is the containing table.
For tables, db_table:select
, db_table:insert
, db_table:update
or db_table:delete
are checked for all the referenced target tables depending on the kind of statement; in addition, db_table:select
is also checked for all the tables that contain columns referenced in the WHERE
or RETURNING
clause, as a data source for UPDATE
, and so on.
Column-level permissions will also be checked for each referenced column. db_column:select
is checked on not only the columns being read using SELECT
, but those being referenced in other DML statements; db_column:update
or db_column:insert
will also be checked for columns being modified by UPDATE
or INSERT
.
For example, consider:
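a statement of roughly this shape (t1, func1, and the column names are illustrative):

    UPDATE t1 SET x = 2, y = func1(y) WHERE z = 100;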
Here, db_column:update
will be checked for t1.x
, since it is being updated, db_column:{select update}
will be checked for t1.y
, since it is both updated and referenced, and db_column:select
will be checked for t1.z
, since it is only referenced. db_table:{select update}
will also be checked at the table level.
For sequences, db_sequence:get_value
is checked when we reference a sequence object using SELECT
; however, note that we do not currently check permissions on execution of corresponding functions such as lastval()
.
For views, db_view:expand
will be checked, then any other required permissions will be checked on the objects being expanded from the view, individually.
For functions, db_procedure:{execute}
will be checked when user tries to execute a function as a part of query, or using fast-path invocation. If this function is a trusted procedure, it also checks db_procedure:{entrypoint}
permission to check whether it can perform as entry point of trusted procedure.
In order to access any schema object, db_schema:search
permission is required on the containing schema. When an object is referenced without schema qualification, schemas on which this permission is not present will not be searched (just as if the user did not have USAGE
privilege on the schema). If an explicit schema qualification is present, an error will occur if the user does not have the requisite permission on the named schema.
The client must be allowed to access all referenced tables and columns, even if they originated from views which were then expanded, so that we apply consistent access control rules independent of the manner in which the table contents are referenced.
The default database privilege system allows database superusers to modify system catalogs using DML commands, and reference or modify toast tables. These operations are prohibited when sepgsql
is enabled.
SELinux defines several permissions to control common operations for each object type; such as creation, alter, drop and relabel of security label. In addition, several object types have special permissions to control their characteristic operations; such as addition or deletion of name entries within a particular schema.
Creating a new database object requires create
permission. SELinux will grant or deny this permission based on the client's security label and the proposed security label for the new object. In some cases, additional privileges are required:
CREATE DATABASE additionally requires getattr
permission for the source or template database.
Creating a schema object additionally requires add_name
permission on the parent schema.
Creating a table additionally requires permission to create each individual table column, just as if each table column were a separate top-level object.
Creating a function marked as LEAKPROOF
additionally requires install
permission. (This permission is also checked when LEAKPROOF
is set for an existing function.)
When DROP
command is executed, drop
will be checked on the object being removed. Permissions will be also checked for objects dropped indirectly via CASCADE
. Deletion of objects contained within a particular schema (tables, views, sequences and procedures) additionally requires remove_name
on the schema.
When ALTER
command is executed, setattr
will be checked on the object being modified for each object types, except for subsidiary objects such as the indexes or triggers of a table, where permissions are instead checked on the parent object. In some cases, additional permissions are required:
Moving an object to a new schema additionally requires remove_name
permission on the old schema and add_name
permission on the new one.
Setting the LEAKPROOF
attribute on a function requires install
permission.
Using SECURITY LABEL on an object additionally requires relabelfrom
permission for the object in conjunction with its old security label and relabelto
permission for the object in conjunction with its new security label. (In cases where multiple label providers are installed and the user tries to set a security label, but it is not managed by SELinux, only setattr
should be checked here. This is currently not done due to implementation restrictions.)
Trusted procedures are similar to security definer functions or setuid commands. SELinux provides a feature to allow trusted code to run using a security label different from that of the client, generally for the purpose of providing highly controlled access to sensitive data (e.g. rows might be omitted, or the precision of stored values might be reduced). Whether or not a function acts as a trusted procedure is controlled by its security label and the operating system security policy. For example:
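A sketch along these lines (the table, function, and data are illustrative; the label names come from the reference SELinux policy):

    CREATE TABLE customer (cid int PRIMARY KEY, cname text, credit text);
    SECURITY LABEL ON COLUMN customer.credit
        IS 'system_u:object_r:sepgsql_secret_table_t:s0';
    CREATE FUNCTION show_credit(int) RETURNS text
        AS 'SELECT regexp_replace(credit, ''-[0-9]+$'', ''-xxxx'', ''g'')
            FROM customer WHERE cid = $1'
        LANGUAGE sql;
    SECURITY LABEL ON FUNCTION show_credit(int)
        IS 'system_u:object_r:sepgsql_trusted_proc_exec_t:s0';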
The above operations should be performed by an administrative user.
In this case, a regular user cannot reference customer.credit
directly, but a trusted procedure show_credit
allows the user to print the credit card numbers of customers with some of the digits masked out.
It is possible to use SELinux's dynamic domain transition feature to switch the security label of the client process, the client domain, to a new context, if that is allowed by the security policy. The client domain needs the setcurrent
permission and also dyntransition
from the old to the new domain.
Dynamic domain transitions should be considered carefully, because they allow users to switch their label, and therefore their privileges, at their option, rather than (as in the case of a trusted procedure) as mandated by the system. Thus, the dyntransition
permission is only considered safe when used to switch to a domain with a smaller set of privileges than the original one. For example:
In this example above we were allowed to switch from the larger MCS range c1.c1023
to the smaller range c1.c4
, but switching back was denied.
A combination of dynamic domain transition and trusted procedure enables an interesting use case that fits the typical process life-cycle of connection pooling software. Even if your connection pooling software is not allowed to run most of SQL commands, you can allow it to switch the security label of the client using the sepgsql_setcon()
function from within a trusted procedure; that should take some credential to authorize the request to switch the client label. After that, this session will have the privileges of the target user, rather than the connection pooler. The connection pooler can later revert the security label change by again using sepgsql_setcon()
with NULL
argument, again invoked from within a trusted procedure with appropriate permissions checks. The point here is that only the trusted procedure actually has permission to change the effective security label, and only does so when given proper credentials. Of course, for secure operation, the credential store (table, procedure definition, or whatever) must be protected from unauthorized access.
We reject the LOAD command across the board, because any module loaded could easily circumvent security policy enforcement.
Table F.29 shows the available functions.
sepgsql_getcon() returns text
Returns the client domain, the current security label of the client.
sepgsql_setcon(text) returns bool
Switches the client domain of the current session to the new domain, if allowed by the security policy. It also accepts NULL
input as a request to transition to the client's original domain.
sepgsql_mcstrans_in(text) returns text
Translates the given qualified MLS/MCS range into raw format if the mcstrans daemon is running.
sepgsql_mcstrans_out(text) returns text
Translates the given raw MLS/MCS range into qualified format if the mcstrans daemon is running.
sepgsql_restorecon(text) returns bool
Sets up initial security labels for all objects within the current database. The argument may be NULL, or the name of a specfile to be used as alternative of the system default.
Data Definition Language (DDL) Permissions
Due to implementation restrictions, some DDL operations do not check permissions.
Data Control Language (DCL) Permissions
Due to implementation restrictions, DCL operations do not check permissions.
Row-level access control
PostgreSQL supports row-level access, but sepgsql does not.
Covert channels
sepgsql does not try to hide the existence of a certain object, even if the user is not allowed to reference it. For example, we can infer the existence of an invisible object as a result of primary key conflicts, foreign key violations, and so on, even if we cannot obtain the contents of the object. The existence of a top secret table cannot be hidden; we only hope to conceal its contents.
This wiki page provides a brief overview, security design, architecture, administration and upcoming features.
SELinux User's and Administrator's Guide
This document provides a wide spectrum of knowledge to administer SELinux on your systems. It focuses primarily on Red Hat operating systems, but is not limited to them.
Fedora SELinux FAQ
This document answers frequently asked questions about SELinux. It focuses primarily on Fedora, but is not limited to Fedora.
KaiGai Kohei <kaigai@ak.jp.nec.com>
The postgres_fdw module provides the foreign-data wrapper postgres_fdw, which can be used to access data stored in external PostgreSQL servers.
The functionality provided by this module overlaps substantially with that of the older dblink module. But postgres_fdw provides more transparent and standards-compliant syntax for accessing remote tables, and can give better performance in many cases.
To prepare for remote access using postgres_fdw (a sketch of these steps appears after this list):
Install the postgres_fdw extension using CREATE EXTENSION.
Create a foreign server object, using CREATE SERVER, to represent each remote database you want to connect to. Specify connection information, except user and password, as options of the server object.
Create a user mapping, using CREATE USER MAPPING, for each database user you want to allow to access each foreign server. Specify the remote user name and password to use as the user and password options of the user mapping.
Create a foreign table, using CREATE FOREIGN TABLE or IMPORT FOREIGN SCHEMA, for each remote table you want to access. The columns of the foreign table must match the referenced remote table. You can, however, use table and/or column names different from the remote table's, if you specify the correct remote names as options of the foreign table object.
Now you need only SELECT from a foreign table to access the data stored in its underlying remote table. You can also modify the remote table using INSERT, UPDATE, or DELETE. (Of course, the remote user you have specified in your user mapping must have privileges to do these things.)
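A minimal sketch of these steps (the server address, names, and table definition are illustrative):

    CREATE EXTENSION postgres_fdw;

    CREATE SERVER foreign_server
        FOREIGN DATA WRAPPER postgres_fdw
        OPTIONS (host '192.83.123.89', port '5432', dbname 'foreign_db');

    CREATE USER MAPPING FOR local_user
        SERVER foreign_server
        OPTIONS (user 'foreign_user', password 'password');

    CREATE FOREIGN TABLE foreign_table (
        id integer NOT NULL,
        data text
    )
        SERVER foreign_server
        OPTIONS (schema_name 'some_schema', table_name 'some_table');

    SELECT * FROM foreign_table;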
Note that postgres_fdw currently lacks support for INSERT statements with an ON CONFLICT DO UPDATE clause. However, the ON CONFLICT DO NOTHING clause is supported, provided a unique index inference specification is omitted. Note also that postgres_fdw supports row movement invoked by UPDATE statements executed on partitioned tables, but it currently does not handle the case where a remote partition chosen to insert a moved row into is also an UPDATE target partition that will be updated later.
It is generally recommended that the columns of a foreign table be declared with exactly the same data types, and collations if applicable, as the referenced columns of the remote table. Although postgres_fdw is currently rather forgiving about performing data type conversions at need, surprising semantic anomalies may arise when the types or collations do not match, because the remote server may interpret WHERE clauses slightly differently from the local server.
Note that a foreign table can be declared with fewer columns, or with a different column order, than its underlying remote table has. Matching of columns to the remote table is by name, not position.
A foreign server using the postgres_fdw
foreign data wrapper can have the same options that libpq accepts in connection strings, as described in Section 34.1.2, except that these options are not allowed or have special handling:
user
, password
and sslpassword
(specify these in a user mapping, instead, or use a service file)
client_encoding
(this is automatically set from the local server encoding)
application_name
- this may appear in either or both a connection and postgres_fdw.application_name. If both are present, postgres_fdw.application_name
overrides the connection setting. Unlike libpq, postgres_fdw
allows application_name
to include “escape sequences”. See postgres_fdw.application_name for details.
fallback_application_name
(always set to postgres_fdw
)
sslkey
and sslcert
- these may appear in either or both a connection and a user mapping. If both are present, the user mapping setting overrides the connection setting.
Only superusers may create or modify user mappings with the sslcert
or sslkey
settings.
Only superusers may connect to foreign servers without password authentication, so always specify the password
option for user mappings belonging to non-superusers.
A superuser may override this check on a per-user-mapping basis by setting the user mapping option password_required 'false'
, e.g.,
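for instance (the server name is illustrative):

    ALTER USER MAPPING FOR some_user SERVER loopback_nopw
        OPTIONS (ADD password_required 'false');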
To prevent unprivileged users from exploiting the authentication rights of the unix user the postgres server is running as to escalate to superuser rights, only the superuser may set this option on a user mapping.
Care is required to ensure that this does not allow the mapped user the ability to connect as superuser to the mapped database per CVE-2007-3278 and CVE-2007-6601. Don't set password_required=false
on the public
role. Keep in mind that the mapped user can potentially use any client certificates, .pgpass
, .pg_service.conf
etc. in the unix home directory of the system user the postgres server runs as. They can also use any trust relationship granted by authentication modes like peer
or ident
authentication.
These options can be used to control the names used in SQL statements sent to the remote PostgreSQL server. These options are needed when a foreign table is created with names different from the underlying remote table's names.
schema_name
(string
)
This option, which can be specified for a foreign table, gives the schema name to use for the foreign table on the remote server. If this option is omitted, the name of the foreign table's schema is used.
table_name
(string
)
This option, which can be specified for a foreign table, gives the table name to use for the foreign table on the remote server. If this option is omitted, the foreign table's name is used.
column_name
(string
)
This option, which can be specified for a column of a foreign table, gives the column name to use for the column on the remote server. If this option is omitted, the column's name is used.
postgres_fdw
retrieves remote data by executing queries against remote servers, so ideally the estimated cost of scanning a foreign table should be whatever it costs to be done on the remote server, plus some overhead for communication. The most reliable way to get such an estimate is to ask the remote server and then add something for overhead — but for simple queries, it may not be worth the cost of an additional remote query to get a cost estimate. So postgres_fdw
provides the following options to control how cost estimation is done:
use_remote_estimate
(boolean
)
This option, which can be specified for a foreign table or a foreign server, controls whether postgres_fdw
issues remote EXPLAIN
commands to obtain cost estimates. A setting for a foreign table overrides any setting for its server, but only for that table. The default is false
.
fdw_startup_cost
(floating point
)
This option, which can be specified for a foreign server, is a floating point value that is added to the estimated startup cost of any foreign-table scan on that server. This represents the additional overhead of establishing a connection, parsing and planning the query on the remote side, etc. The default value is 100
.
fdw_tuple_cost
(floating point
)
This option, which can be specified for a foreign server, is a floating point value that is used as extra cost per-tuple for foreign-table scans on that server. This represents the additional overhead of data transfer between servers. You might increase or decrease this number to reflect higher or lower network delay to the remote server. The default value is 0.01
.
When use_remote_estimate
is true, postgres_fdw
obtains row count and cost estimates from the remote server and then adds fdw_startup_cost
and fdw_tuple_cost
to the cost estimates. When use_remote_estimate
is false, postgres_fdw
performs local row count and cost estimation and then adds fdw_startup_cost
and fdw_tuple_cost
to the cost estimates. This local estimation is unlikely to be very accurate unless local copies of the remote table's statistics are available. Running ANALYZE on the foreign table is the way to update the local statistics; this will perform a scan of the remote table and then calculate and store statistics just as though the table were local. Keeping local statistics can be a useful way to reduce per-query planning overhead for a remote table — but if the remote table is frequently updated, the local statistics will soon be obsolete.
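For example, to refresh the local statistics for a foreign table (the table name is illustrative):

    ANALYZE foreign_table;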
By default, only WHERE
clauses using built-in operators and functions will be considered for execution on the remote server. Clauses involving non-built-in functions are checked locally after rows are fetched. If such functions are available on the remote server and can be relied on to produce the same results as they do locally, performance can be improved by sending such WHERE
clauses for remote execution. This behavior can be controlled using the following option:
extensions
(string
)
This option is a comma-separated list of names of PostgreSQL extensions that are installed, in compatible versions, on both the local and remote servers. Functions and operators that are immutable and belong to a listed extension will be considered shippable to the remote server. This option can only be specified for foreign servers, not per-table.
When using the extensions
option, it is the user's responsibility that the listed extensions exist and behave identically on both the local and remote servers. Otherwise, remote queries may fail or behave unexpectedly.
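As a sketch, assuming the hypothetical case that the cube and seg extensions are installed in compatible versions on both servers:

    ALTER SERVER foreign_server OPTIONS (ADD extensions 'cube,seg');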
fetch_size
(integer
)
This option specifies the number of rows postgres_fdw
should get in each fetch operation. It can be specified for a foreign table or a foreign server. The option specified on a table overrides an option specified for the server. The default is 100
.
batch_size
(integer
)
This option specifies the number of rows postgres_fdw
should insert in each insert operation. It can be specified for a foreign table or a foreign server. The option specified on a table overrides an option specified for the server. The default is 1
.
Note the actual number of rows postgres_fdw
inserts at once depends on the number of columns and the provided batch_size
value. The batch is executed as a single query, and the libpq protocol (which postgres_fdw
uses to connect to a remote server) limits the number of parameters in a single query to 65535. When the number of columns * batch_size
exceeds the limit, the batch_size
will be adjusted to avoid an error.
postgres_fdw
supports asynchronous execution, which runs multiple parts of an Append
node concurrently rather than serially to improve performance. This execution can be controlled using the following option:
async_capable
(boolean
)
This option controls whether postgres_fdw
allows foreign tables to be scanned concurrently for asynchronous execution. It can be specified for a foreign table or a foreign server. A table-level option overrides a server-level option. The default is false
.
In order to ensure that the data being returned from a foreign server is consistent, postgres_fdw
will only open one connection for a given foreign server and will run all queries against that server sequentially even if there are multiple foreign tables involved, unless those tables are subject to different user mappings. In such a case, it may be more performant to disable this option to eliminate the overhead associated with running queries asynchronously.
Asynchronous execution is applied even when an Append
node contains subplan(s) executed synchronously as well as subplan(s) executed asynchronously. In such a case, if the asynchronous subplans are ones processed using postgres_fdw
, tuples from the asynchronous subplans are not returned until after at least one synchronous subplan returns all tuples, as that subplan is executed while the asynchronous subplans are waiting for the results of asynchronous queries sent to foreign servers. This behavior might change in a future release.
As described in the Transaction Management section, in postgres_fdw
transactions are managed by creating corresponding remote transactions, and subtransactions are managed by creating corresponding remote subtransactions. When multiple remote transactions are involved in the current local transaction, by default postgres_fdw
commits those remote transactions serially when the local transaction is committed. When multiple remote subtransactions are involved in the current local subtransaction, by default postgres_fdw
commits those remote subtransactions serially when the local subtransaction is committed. Performance can be improved with the following option:
parallel_commit
(boolean
)
This option controls whether postgres_fdw
commits in parallel remote transactions opened on a foreign server in a local transaction when the local transaction is committed. This setting also applies to remote and local subtransactions. This option can only be specified for foreign servers, not per-table. The default is false
.
If multiple foreign servers with this option enabled are involved in a local transaction, multiple remote transactions on those foreign servers are committed in parallel across those foreign servers when the local transaction is committed.
When this option is enabled, a foreign server with many remote transactions may see a negative performance impact when the local transaction is committed.
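A minimal sketch of enabling the option (the server name is illustrative):

    ALTER SERVER foreign_server OPTIONS (ADD parallel_commit 'true');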
By default all foreign tables using postgres_fdw
are assumed to be updatable. This may be overridden using the following option:
updatable
(boolean
)
This option controls whether postgres_fdw
allows foreign tables to be modified using INSERT
, UPDATE
and DELETE
commands. It can be specified for a foreign table or a foreign server. A table-level option overrides a server-level option. The default is true
.
Of course, if the remote table is not in fact updatable, an error would occur anyway. Use of this option primarily allows the error to be thrown locally without querying the remote server. Note however that the information_schema
views will report a postgres_fdw
foreign table to be updatable (or not) according to the setting of this option, without any check of the remote server.
By default all foreign tables using postgres_fdw
are assumed to be truncatable. This may be overridden using the following option:
truncatable
(boolean
)
This option controls whether postgres_fdw
allows foreign tables to be truncated using the TRUNCATE
command. It can be specified for a foreign table or a foreign server. A table-level option overrides a server-level option. The default is true
.
Of course, if the remote table is not in fact truncatable, an error would occur anyway. Use of this option primarily allows the error to be thrown locally without querying the remote server.
postgres_fdw
is able to import foreign table definitions using IMPORT FOREIGN SCHEMA. This command creates foreign table definitions on the local server that match tables or views present on the remote server. If the remote tables to be imported have columns of user-defined data types, the local server must have compatible types of the same names.
Importing behavior can be customized with the following options (given in the IMPORT FOREIGN SCHEMA
command):
import_collate
(boolean
)
This option controls whether column COLLATE
options are included in the definitions of foreign tables imported from a foreign server. The default is true
. You might need to turn this off if the remote server has a different set of collation names than the local server does, which is likely to be the case if it's running on a different operating system. If you do so, however, there is a very severe risk that the imported table columns' collations will not match the underlying data, resulting in anomalous query behavior.
Even when this parameter is set to true
, importing columns whose collation is the remote server's default can be risky. They will be imported with COLLATE "default"
, which will select the local server's default collation, which could be different.
import_default
(boolean
)
This option controls whether column DEFAULT
expressions are included in the definitions of foreign tables imported from a foreign server. The default is false
. If you enable this option, be wary of defaults that might get computed differently on the local server than they would be on the remote server; nextval()
is a common source of problems. The IMPORT
will fail altogether if an imported default expression uses a function or operator that does not exist locally.
import_generated
(boolean
)
This option controls whether column GENERATED
expressions are included in the definitions of foreign tables imported from a foreign server. The default is true
. The IMPORT
will fail altogether if an imported generated expression uses a function or operator that does not exist locally.
import_not_null
(boolean
)
This option controls whether column NOT NULL
constraints are included in the definitions of foreign tables imported from a foreign server. The default is true
.
Note that constraints other than NOT NULL
will never be imported from the remote tables. Although PostgreSQL does support check constraints on foreign tables, there is no provision for importing them automatically, because of the risk that a constraint expression could evaluate differently on the local and remote servers. Any such inconsistency in the behavior of a check constraint could lead to hard-to-detect errors in query optimization. So if you wish to import check constraints, you must do so manually, and you should verify the semantics of each one carefully. For more detail about the treatment of check constraints on foreign tables, see CREATE FOREIGN TABLE.
Tables or foreign tables which are partitions of some other table are imported only when they are explicitly specified in LIMIT TO
clause. Otherwise they are automatically excluded from IMPORT FOREIGN SCHEMA. Since all data can be accessed through the partitioned table which is the root of the partitioning hierarchy, importing only partitioned tables should allow access to all the data without creating extra objects.
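As an illustrative sketch, importing only two tables from a remote schema while skipping NOT NULL constraints (all names are placeholders):

    IMPORT FOREIGN SCHEMA some_schema
        LIMIT TO (some_table, another_table)
        FROM SERVER foreign_server
        INTO public
        OPTIONS (import_not_null 'false');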
By default, all connections that postgres_fdw
establishes to foreign servers are kept open in the local session for re-use.
keep_connections
(boolean
)
This option controls whether postgres_fdw
keeps the connections to the foreign server open so that subsequent queries can re-use them. It can only be specified for a foreign server. The default is on
. If set to off
, all connections to this foreign server will be discarded at the end of each transaction.
postgres_fdw_get_connections(OUT server_name text, OUT valid boolean) returns setof record
This function returns the foreign server names of all the open connections that postgres_fdw
established from the local session to the foreign servers. It also returns whether each connection is valid or not. false
is returned if the foreign server connection is used in the current local transaction but its foreign server or user mapping has been changed or dropped (note that the server name of an invalid connection will be NULL
if the server is dropped); such an invalid connection will be closed at the end of that transaction. true
is returned otherwise. If there are no open connections, no record is returned. Example usage of the function:
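A minimal invocation might look like this (the returned server names depend on which connections the session has opened):

    SELECT * FROM postgres_fdw_get_connections() ORDER BY 1;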
postgres_fdw_disconnect(server_name text) returns boolean
This function discards the open connections that are established by postgres_fdw
from the local session to the foreign server with the given name. Note that there can be multiple connections to the given server using different user mappings. If the connections are used in the current local transaction, they are not disconnected and warning messages are reported. This function returns true
if it disconnects at least one connection, otherwise false
. If no foreign server with the given name is found, an error is reported. Example usage of the function:
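For instance, assuming a foreign server named loopback1 exists:

    SELECT postgres_fdw_disconnect('loopback1');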
postgres_fdw_disconnect_all() returns boolean
This function discards all the open connections that are established by postgres_fdw
from the local session to foreign servers. If the connections are used in the current local transaction, they are not disconnected and warning messages are reported. This function returns true
if it disconnects at least one connection, otherwise false
. Example usage of the function:
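For instance:

    SELECT postgres_fdw_disconnect_all();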
postgres_fdw
establishes a connection to a foreign server during the first query that uses a foreign table associated with the foreign server. By default this connection is kept and re-used for subsequent queries in the same session. This behavior can be controlled using keep_connections
option for a foreign server. If multiple user identities (user mappings) are used to access the foreign server, a connection is established for each user mapping.
When changing the definition of or removing a foreign server or a user mapping, the associated connections are closed. But note that if any connections are in use in the current local transaction, they are kept until the end of the transaction. Closed connections will be re-established when they are necessary by future queries using a foreign table.
Once a connection to a foreign server has been established, it's by default kept until the local or corresponding remote session exits. To disconnect a connection explicitly, keep_connections
option for a foreign server may be disabled, or postgres_fdw_disconnect
and postgres_fdw_disconnect_all
functions may be used. For example, these are useful to close connections that are no longer necessary, thereby releasing connections on the foreign server.
During a query that references any remote tables on a foreign server, postgres_fdw
opens a transaction on the remote server if one is not already open corresponding to the current local transaction. The remote transaction is committed or aborted when the local transaction commits or aborts. Savepoints are similarly managed by creating corresponding remote savepoints.
The remote transaction uses SERIALIZABLE
isolation level when the local transaction has SERIALIZABLE
isolation level; otherwise it uses REPEATABLE READ
isolation level. This choice ensures that if a query performs multiple table scans on the remote server, it will get snapshot-consistent results for all the scans. A consequence is that successive queries within a single transaction will see the same data from the remote server, even if concurrent updates are occurring on the remote server due to other activities. That behavior would be expected anyway if the local transaction uses SERIALIZABLE
or REPEATABLE READ
isolation level, but it might be surprising for a READ COMMITTED
local transaction. A future PostgreSQL release might modify these rules.
Note that it is currently not supported by postgres_fdw
to prepare the remote transaction for two-phase commit.
postgres_fdw
attempts to optimize remote queries to reduce the amount of data transferred from foreign servers. This is done by sending query WHERE
clauses to the remote server for execution, and by not retrieving table columns that are not needed for the current query. To reduce the risk of misexecution of queries, WHERE
clauses are not sent to the remote server unless they use only data types, operators, and functions that are built-in or belong to an extension that's listed in the foreign server's extensions
option. Operators and functions in such clauses must be IMMUTABLE
as well. For an UPDATE
or DELETE
query, postgres_fdw
attempts to optimize the query execution by sending the whole query to the remote server if there are no query WHERE
clauses that cannot be sent to the remote server, no local joins for the query, no row-level local BEFORE
or AFTER
triggers or stored generated columns on the target table, and no CHECK OPTION
constraints from parent views. In UPDATE
, expressions to assign to target columns must use only built-in data types, IMMUTABLE
operators, or IMMUTABLE
functions, to reduce the risk of misexecution of the query.
When postgres_fdw
encounters a join between foreign tables on the same foreign server, it sends the entire join to the foreign server, unless for some reason it believes that it will be more efficient to fetch rows from each table individually, or unless the table references involved are subject to different user mappings. While sending the JOIN
clauses, it takes the same precautions as mentioned above for the WHERE
clauses.
The query that is actually sent to the remote server for execution can be examined using EXPLAIN VERBOSE
.
In the remote sessions opened by postgres_fdw
, the search_path parameter is set to just pg_catalog
, so that only built-in objects are visible without schema qualification. This is not an issue for queries generated by postgres_fdw
itself, because it always supplies such qualification. However, this can pose a hazard for functions that are executed on the remote server via triggers or rules on remote tables. For example, if a remote table is actually a view, any functions used in that view will be executed with the restricted search path. It is recommended to schema-qualify all names in such functions, or else attach SET search_path
options (see CREATE FUNCTION) to such functions to establish their expected search path environment.
postgres_fdw
likewise establishes remote session settings for various parameters:
TimeZone is set to UTC
DateStyle is set to ISO
IntervalStyle is set to postgres
extra_float_digits is set to 3
for remote servers 9.0 and newer and is set to 2
for older versions
These are less likely to be problematic than search_path
, but can be handled with function SET
options if the need arises.
It is not recommended that you override this behavior by changing the session-level settings of these parameters; that is likely to cause postgres_fdw
to malfunction.
postgres_fdw
can be used with remote servers dating back to PostgreSQL 8.3. Read-only capability is available back to 8.1. A limitation however is that postgres_fdw
generally assumes that immutable built-in functions and operators are safe to send to the remote server for execution, if they appear in a WHERE
clause for a foreign table. Thus, a built-in function that was added since the remote server's release might be sent to it for execution, resulting in “function does not exist” or a similar error. This type of failure can be worked around by rewriting the query, for example by embedding the foreign table reference in a sub-SELECT
with OFFSET 0
as an optimization fence, and placing the problematic function or operator outside the sub-SELECT
.
postgres_fdw.application_name
(string
)
Specifies a value for application_name configuration parameter used when postgres_fdw
establishes a connection to a foreign server. This overrides application_name
option of the server object. Note that a change of this parameter does not affect any existing connections until they are re-established.
postgres_fdw.application_name
can be any string of any length and contain even non-ASCII characters. However when it's passed to and used as application_name
in a foreign server, note that it will be truncated to less than NAMEDATALEN
characters and anything other than printable ASCII characters will be replaced with question marks (?
). See application_name for details.
%
characters begin “escape sequences” that are replaced with status information as outlined below. Unrecognized escapes are ignored. Other characters are copied straight to the application name. Note that it's not allowed to specify a plus/minus sign or a numeric literal after the %
and before the option, for alignment and padding.
%a
Application name on local server
%c
Session ID on local server (see for details)
%C
Cluster name on local server (see for details)
%u
User name on local server
%d
Database name on local server
%p
Process ID of backend on local server
%%
Literal %
For example, suppose user local_user
establishes a connection from database local_db
to foreign_db
as user foreign_user
, the setting 'db=%d, user=%u'
is replaced with 'db=local_db, user=local_user'
.
Here is an example of creating a foreign table with postgres_fdw
. First install the extension:
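    CREATE EXTENSION postgres_fdw;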
Then create a foreign server using CREATE SERVER. In this example we wish to connect to a PostgreSQL server on host 192.83.123.89
listening on port 5432
. The database to which the connection is made is named foreign_db
on the remote server:
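(A sketch; the server name foreign_server is illustrative and is the name referenced by the user mapping and foreign table below.)

    CREATE SERVER foreign_server
        FOREIGN DATA WRAPPER postgres_fdw
        OPTIONS (host '192.83.123.89', port '5432', dbname 'foreign_db');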
A user mapping, defined with CREATE USER MAPPING, is needed as well to identify the role that will be used on the remote server:
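(A sketch; local_user, foreign_user, and the password are illustrative.)

    CREATE USER MAPPING FOR local_user
        SERVER foreign_server
        OPTIONS (user 'foreign_user', password 'password');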
Now it is possible to create a foreign table with CREATE FOREIGN TABLE. In this example we wish to access the table named some_schema.some_table
on the remote server. The local name for it will be foreign_table
:
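(A sketch; the column list is illustrative and must match the columns of the actual remote table.)

    CREATE FOREIGN TABLE foreign_table (
        id integer NOT NULL,
        data text
    )
        SERVER foreign_server
        OPTIONS (schema_name 'some_schema', table_name 'some_table');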
It's essential that the data types and other properties of the columns declared in CREATE FOREIGN TABLE
match the actual remote table. Column names must match as well, unless you attach column_name
options to the individual columns to show how they are named in the remote table. In many cases, use of IMPORT FOREIGN SCHEMA
is preferable to constructing foreign table definitions manually.
Shigeru Hanada <
shigeru.hanada@gmail.com
>
The tablefunc module includes various functions that return tables (that is, multiple rows). These functions are useful both in their own right and as examples of how to write C functions that return multiple rows.
The table below lists the functions provided by the tablefunc module.
tablefunc
Functions
normal_rand
produces a set of normally distributed random values (Gaussian distribution).
numvals
is the number of values to be returned from the function. mean
is the mean of the normal distribution of values and stddev
is the standard deviation of the normal distribution of values.
For example, this call requests 1000 values with a mean of 5 and a standard deviation of 3:
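    SELECT * FROM normal_rand(1000, 5, 3);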
The crosstab
function is used to produce “pivot” displays, wherein data is listed across the page rather than down. For example, we might have data like
which we wish to display like
The crosstab
function takes a text parameter that is a SQL query producing raw data formatted in the first way, and produces a table formatted in the second way.
The sql
parameter is a SQL statement that produces the source set of data. This statement must return one row_name
column, one category
column, and one value
column. N
is an obsolete parameter, ignored if supplied (formerly this had to match the number of output value columns, but now that is determined by the calling query).
For example, the provided query might produce a set something like:
The crosstab
function is declared to return setof record
, so the actual names and types of the output columns must be defined in the FROM
clause of the calling SELECT
statement, for example:
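(A sketch; the source query and the table name mytable are illustrative.)

    SELECT *
    FROM crosstab(
           'SELECT rowid, attribute, value
              FROM mytable
             ORDER BY 1,2')
         AS ct(row_name text, category_1 text, category_2 text, category_3 text);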
This example produces a set something like:
The FROM
clause must define the output as one row_name
column (of the same data type as the first result column of the SQL query) followed by N value
columns (all of the same data type as the third result column of the SQL query). You can set up as many output value columns as you wish. The names of the output columns are up to you.
The crosstab
function produces one output row for each consecutive group of input rows with the same row_name
value. It fills the output value
columns, left to right, with the value
fields from these rows. If there are fewer rows in a group than there are output value
columns, the extra output columns are filled with nulls; if there are more rows, the extra input rows are skipped.
In practice the SQL query should always specify ORDER BY 1,2
to ensure that the input rows are properly ordered, that is, values with the same row_name
are brought together and correctly ordered within the row. Notice that crosstab
itself does not pay any attention to the second column of the query result; it's just there to be ordered by, to control the order in which the third-column values appear across the page.
Here is a complete example:
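(A sketch using a throwaway table named ct; the names and values are illustrative.)

    CREATE TABLE ct(id serial, rowid text, attribute text, value text);

    INSERT INTO ct(rowid, attribute, value) VALUES
        ('test1','att1','val1'), ('test1','att2','val2'),
        ('test1','att3','val3'), ('test1','att4','val4'),
        ('test2','att1','val5'), ('test2','att2','val6'),
        ('test2','att3','val7'), ('test2','att4','val8');

    SELECT *
    FROM crosstab(
           'SELECT rowid, attribute, value
              FROM ct
             WHERE attribute = ''att2'' OR attribute = ''att3''
             ORDER BY 1,2')
         AS ct(row_name text, category_1 text, category_2 text, category_3 text);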
You can avoid always having to write out a FROM
clause to define the output columns, by setting up a custom crosstab function that has the desired output row type wired into its definition. This is described in the next section. Another possibility is to embed the required FROM
clause in a view definition. See also the \crosstabview command in psql, which provides functionality similar to crosstab().
The crosstab
N
functions are examples of how to set up custom wrappers for the general crosstab
function, so that you need not write out column names and types in the calling SELECT
query. The tablefunc
module includes crosstab2
, crosstab3
, and crosstab4
, whose output row types are defined as
Thus, these functions can be used directly when the input query produces row_name
and value
columns of type text
, and you want 2, 3, or 4 output values columns. In all other ways they behave exactly as described above for the general crosstab
function.
For instance, the example given in the previous section would also work as
These functions are provided mostly for illustration purposes. You can create your own return types and functions based on the underlying crosstab()
function. There are two ways to do it:
Create a composite type describing the desired output columns, similar to the examples in contrib/tablefunc/tablefunc--1.0.sql
. Then define a unique function name accepting one text
parameter and returning setof your_type_name
, but linking to the same underlying crosstab
C function. For example, if your source data produces row names that are text
, and values that are float8
, and you want 5 value columns:
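(A sketch; the type and function names are illustrative, and the C function linked to is the module's underlying crosstab function.)

    CREATE TYPE my_crosstab_float8_5_cols AS (
        my_row_name text,
        my_category_1 float8,
        my_category_2 float8,
        my_category_3 float8,
        my_category_4 float8,
        my_category_5 float8
    );

    CREATE OR REPLACE FUNCTION crosstab_float8_5_cols(text)
        RETURNS setof my_crosstab_float8_5_cols
        AS '$libdir/tablefunc', 'crosstab'
        LANGUAGE C STABLE STRICT;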
Use OUT
parameters to define the return type implicitly. The same example could also be done this way:
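(Again a sketch with illustrative names.)

    CREATE OR REPLACE FUNCTION crosstab_float8_5_cols(
        IN text,
        OUT my_row_name text,
        OUT my_category_1 float8,
        OUT my_category_2 float8,
        OUT my_category_3 float8,
        OUT my_category_4 float8,
        OUT my_category_5 float8)
        RETURNS setof record
        AS '$libdir/tablefunc', 'crosstab'
        LANGUAGE C STABLE STRICT;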
The main limitation of the single-parameter form of crosstab
is that it treats all values in a group alike, inserting each value into the first available column. If you want the value columns to correspond to specific categories of data, and some groups might not have data for some of the categories, that doesn't work well. The two-parameter form of crosstab
handles this case by providing an explicit list of the categories corresponding to the output columns.
source_sql
is a SQL statement that produces the source set of data. This statement must return one row_name
column, one category
column, and one value
column. It may also have one or more “extra” columns. The row_name
column must be first. The category
and value
columns must be the last two columns, in that order. Any columns between row_name
and category
are treated as “extra”. The “extra” columns are expected to be the same for all rows with the same row_name
value.
For example, source_sql
might produce a set something like:
category_sql
is a SQL statement that produces the set of categories. This statement must return only one column. It must produce at least one row, or an error will be generated. Also, it must not produce duplicate values, or an error will be generated. category_sql
might be something like:
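(A sketch; cth is an illustrative source table, also used in the complete example further below.)

    SELECT DISTINCT attribute FROM cth ORDER BY 1;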
The crosstab
function is declared to return setof record
, so the actual names and types of the output columns must be defined in the FROM
clause of the calling SELECT
statement, for example:
This will produce a result something like:
The FROM
clause must define the proper number of output columns of the proper data types. If there are N
columns in the source_sql
query's result, the first N
-2 of them must match up with the first N
-2 output columns. The remaining output columns must have the type of the last column of the source_sql
query's result, and there must be exactly as many of them as there are rows in the category_sql
query's result.
The crosstab
function produces one output row for each consecutive group of input rows with the same row_name
value. The output row_name
column, plus any “extra” columns, are copied from the first row of the group. The output value
columns are filled with the value
fields from rows having matching category
values. If a row's category
does not match any output of the category_sql
query, its value
is ignored. Output columns whose matching category is not present in any input row of the group are filled with nulls.
In practice the source_sql
query should always specify ORDER BY 1
to ensure that values with the same row_name
are brought together. However, ordering of the categories within a group is not important. Also, it is essential to be sure that the order of the category_sql
query's output matches the specified output column order.
Here are two complete examples:
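(A sketch of one such example, using an illustrative table named cth.)

    CREATE TABLE cth(rowid text, rowdt timestamp, attribute text, val text);

    INSERT INTO cth VALUES
        ('test1','2006-01-01','temperature','42'),
        ('test1','2006-01-01','test_result','PASS'),
        ('test1','2006-01-01','volts','2.6987'),
        ('test2','2006-01-02','temperature','53'),
        ('test2','2006-01-02','test_result','FAIL'),
        ('test2','2006-01-02','test_startdate','01 June 2006'),
        ('test2','2006-01-02','volts','3.1234');

    SELECT * FROM crosstab(
        'SELECT rowid, rowdt, attribute, val FROM cth ORDER BY 1',
        'SELECT DISTINCT attribute FROM cth ORDER BY 1')
    AS (rowid text, rowdt timestamp,
        temperature int4, test_result text, test_startdate timestamp, volts float8);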
You can create predefined functions to avoid having to write out the result column names and types in each query. See the examples in the previous section. The underlying C function for this form of crosstab
is named crosstab_hash
.
The connectby
function produces a display of hierarchical data that is stored in a table. The table must have a key field that uniquely identifies rows, and a parent-key field that references the parent (if any) of each row. connectby
can display the sub-tree descending from any row.
connectby
Parameters
The key and parent-key fields can be any data type, but they must be the same type. Note that the start_with
value must be entered as a text string, regardless of the type of the key field.
The connectby
function is declared to return setof record
, so the actual names and types of the output columns must be defined in the FROM
clause of the calling SELECT
statement, for example:
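(A sketch; the table and column names match the complete example shown later in this section.)

    SELECT * FROM connectby('connectby_tree', 'keyid', 'parent_keyid', 'pos',
                            'row2', 0, '~')
        AS t(keyid text, parent_keyid text, level int, branch text, pos int);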
The first two output columns are used for the current row's key and its parent row's key; they must match the type of the table's key field. The third output column is the depth in the tree and must be of type integer
. If a branch_delim
parameter was given, the next output column is the branch display and must be of type text
. Finally, if an orderby_fld
parameter was given, the last output column is a serial number, and must be of type integer
.
The “branch” output column shows the path of keys taken to reach the current row. The keys are separated by the specified branch_delim
string. If no branch display is wanted, omit both the branch_delim
parameter and the branch column in the output column list.
If the ordering of siblings of the same parent is important, include the orderby_fld
parameter to specify which field to order siblings by. This field can be of any sortable data type. The output column list must include a final integer serial-number column, if and only if orderby_fld
is specified.
The parameters representing table and field names are copied as-is into the SQL queries that connectby
generates internally. Therefore, include double quotes if the names are mixed-case or contain special characters. You may also need to schema-qualify the table name.
In large tables, performance will be poor unless there is an index on the parent-key field.
It is important that the branch_delim
string not appear in any key values, else connectby
may incorrectly report an infinite-recursion error. Note that if branch_delim
is not provided, a default value of ~
is used for recursion detection purposes.
Here is an example:
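(A sketch; the table contents are illustrative.)

    CREATE TABLE connectby_tree(keyid text, parent_keyid text, pos int);

    INSERT INTO connectby_tree VALUES
        ('row1', NULL, 0), ('row2', 'row1', 0), ('row3', 'row1', 0),
        ('row4', 'row2', 1), ('row5', 'row2', 0), ('row6', 'row4', 0),
        ('row7', 'row3', 0), ('row8', 'row6', 0), ('row9', 'row5', 0);

    -- with branch and orderby_fld (so the final serial-number column is required)
    SELECT * FROM connectby('connectby_tree', 'keyid', 'parent_keyid', 'pos',
                            'row2', 0, '~')
        AS t(keyid text, parent_keyid text, level int, branch text, pos int);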
Joe Conway
test_decoding
is an example of a logical decoding output plugin. It doesn't do anything especially useful, but can serve as a starting point for developing your own output plugin.
test_decoding
receives WAL through the logical decoding mechanism and decodes it into text representations of the operations performed.
Typical output from this plugin, used over the SQL logical decoding interface, consists of one text line per decoded change: BEGIN and COMMIT markers for each transaction, with a description of each inserted, updated, or deleted row in between.
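(A sketch of exercising the plugin through that interface; the slot and table names are illustrative, and wal_level must be set to logical.)

    -- create a logical replication slot that uses the test_decoding plugin
    SELECT * FROM pg_create_logical_replication_slot('test_slot', 'test_decoding');

    -- generate some changes to decode
    CREATE TABLE data(id int PRIMARY KEY, payload text);
    INSERT INTO data VALUES (1, 'one');

    -- consume the decoded changes as text
    SELECT * FROM pg_logical_slot_get_changes('test_slot', NULL, NULL);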
The tsm_system_rows module provides the table sampling method SYSTEM_ROWS, which can be used in the TABLESAMPLE clause of a SELECT command.
This table sampling method accepts a single integer argument that is the maximum number of rows to read. The resulting sample will always contain exactly that many rows, unless the table does not contain enough rows, in which case the whole table is returned.
Like the built-in SYSTEM sampling method, SYSTEM_ROWS performs block-level sampling, so the sample is not completely random but may be subject to clustering effects, especially when only a small number of rows are requested.
SYSTEM_ROWS does not support the REPEATABLE clause.
Here is an example of selecting a sample of a table with SYSTEM_ROWS. First install the extension:
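    CREATE EXTENSION tsm_system_rows;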
Then you can use it in a SELECT command, for instance:
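    SELECT * FROM my_table TABLESAMPLE SYSTEM_ROWS(100);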
This command will return a sample of 100 rows from the table my_table (unless the table does not have 100 rows, in which case all of its rows are returned).
The uuid-ossp module provides functions to generate universally unique identifiers (UUIDs) using one of several standard algorithms. There are also functions to produce certain special UUID constants. This module is only necessary for special requirements beyond what core PostgreSQL provides; see the core documentation for the built-in function for generating UUIDs.
This module is considered “trusted”, that is, it can be installed by non-superusers who have CREATE privilege on the current database.
uuid-ossp
Functions
The table below lists the functions available to generate UUIDs. The relevant standards ITU-T Rec. X.667, ISO/IEC 9834-8:2005, and RFC 4122 specify four algorithms for generating UUIDs, identified by version numbers 1, 3, 4, and 5. (There is no version 2 algorithm.) Each of these algorithms could be suitable for a different set of applications.
uuid-ossp
Historically, this module depended on the OSSP UUID library, which accounts for the module's name. While the OSSP UUID library can still be found, it is not well maintained and is becoming increasingly difficult to port to newer platforms. uuid-ossp can now be built without the OSSP library on some platforms. On FreeBSD, NetBSD, and some other BSD-derived platforms, suitable UUID creation functions are included in the core libc library. On Linux, macOS, and some other platforms, suitable functions are provided in the libuuid library, which originally came from the e2fsprogs project (though on modern Linux it is considered part of util-linux-ng). When invoking configure, specify --with-uuid=bsd to use the BSD functions, --with-uuid=e2fs to use e2fsprogs' libuuid, or --with-uuid=ossp to use the OSSP UUID library. More than one of these libraries might be available on a particular machine, so configure does not automatically choose one.
Peter Eisentraut <
>
The following list explains the parameters of connectby:
relname
Name of the source relation
keyid_fld
Name of the key field
parent_keyid_fld
Name of the parent-key field
orderby_fld
Name of the field to order siblings by (optional)
start_with
Key value of the row to start at
max_depth
Maximum depth to descend to, or zero for unlimited depth
branch_delim
String to separate keys with in branch output (optional)
normal_rand(int numvals, float8 mean, float8 stddev)
setof float8
Produces a set of normally distributed random values
crosstab(text sql)
setof record
Produces a “pivot table” containing row names plus N
value columns, where N
is determined by the row type specified in the calling query
crosstab
N
(text sql)
setof table_crosstab_
N
Produces a “pivot table” containing row names plus N
value columns. crosstab2
, crosstab3
, and crosstab4
are predefined, but you can create additional crosstab
N
functions as described below
crosstab(text source_sql, text category_sql)
setof record
Produces a “pivot table” with the value columns specified by a second query
crosstab(text sql, int N)
setof record
Obsolete version of crosstab(text)
. The parameter N
is now ignored, since the number of value columns is always determined by the calling query
connectby(text relname, text keyid_fld, text parent_keyid_fld [, text orderby_fld ], text start_with, int max_depth [, text branch_delim ])
setof record
Produces a representation of a hierarchical tree structure
Function
Description
uuid_generate_v1
() → uuid
Generates a version 1 UUID. This involves the MAC address of the computer and a time stamp. Note that UUIDs of this kind reveal the identity of the computer that created the identifier and the time at which it did so, which might make them unsuitable for certain security-sensitive applications.
uuid_generate_v1mc
() → uuid
Generates a version 1 UUID, but uses a random multicast MAC address instead of the real MAC address of the computer.
uuid_generate_v3
( namespace
uuid
, name
text
) → uuid
Generates a version 3 UUID in the given namespace using the specified input name. The namespace should be one of the special constants produced by the uuid_ns_*() functions listed in Table F.33. (It could be any UUID in theory.) The name is an identifier in the selected namespace.
For example: SELECT uuid_generate_v3(uuid_ns_url(), 'http://www.postgresql.org');
The name parameter will be MD5-hashed, so the cleartext cannot be derived back from the generated UUID. UUIDs generated by this method have no random or environment-dependent element and are therefore reproducible.
uuid_generate_v4
() → uuid
Generates a version 4 UUID, which is derived entirely from random numbers.
uuid_generate_v5
( namespace
uuid
, name
text
) → uuid
Generates a version 5 UUID, which works like a version 3 UUID except that SHA-1 is used as the hashing method. Version 5 should be preferred over version 3 because SHA-1 is thought to be more secure than MD5.
Function
Description
uuid_nil
() → uuid
Returns a “nil” UUID constant, which does not occur as a real UUID.
uuid_ns_dns
() → uuid
Returns a constant designating the DNS namespace for UUIDs.
uuid_ns_url
() → uuid
Returns a constant designating the URL namespace for UUIDs.
uuid_ns_oid
() → uuid
Returns a constant designating the ISO object identifier (OID) namespace for UUIDs. (This pertains to ASN.1 OIDs, which are unrelated to the OIDs used in PostgreSQL.)
uuid_ns_x500
() → uuid
Returns a constant designating the X.500 distinguished name (DN) namespace for UUIDs.