Ordinarily, if a date/time string is syntactically valid but contains out-of-range field values, an error will be thrown. For example, input specifying the 31st of February will be rejected.
During a daylight-savings-time transition, it is possible for a seemingly valid timestamp string to represent a nonexistent or ambiguous timestamp. Such cases are not rejected; the ambiguity is resolved by determining which UTC offset to apply. For example, supposing that the TimeZone parameter is set to America/New_York, consider:
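(The following query is illustrative, using the 2018 US spring-forward date of March 11.)

    SELECT '2018-03-11 02:30'::timestamptz;
    -- 2018-03-11 03:30:00-04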
Because that day was a spring-forward transition date in that time zone, there was no civil time instant 2:30AM; clocks jumped forward from 2AM EST to 3AM EDT. PostgreSQL interprets the given time as if it were standard time (UTC-5), which then renders as 3:30AM EDT (UTC-4).
Conversely, consider the behavior during a fall-back transition:
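(Again illustrative, using the 2018 US fall-back date of November 4.)

    SELECT '2018-11-04 01:30'::timestamptz;
    -- 2018-11-04 01:30:00-05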
On that date, there were two possible interpretations of 1:30AM; there was 1:30AM EDT, and then an hour later after clocks jumped back from 2AM EDT to 1AM EST, there was 1:30AM EST. Again, PostgreSQL interprets the given time as if it were standard time (UTC-5). We can force the other interpretation by specifying daylight-savings time:
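Continuing the illustrative date:

    SELECT '2018-11-04 01:30 EDT'::timestamptz;
    -- 2018-11-04 01:30:00-04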
The precise rule that is applied in such cases is that an invalid timestamp that appears to fall within a jump-forward daylight savings transition is assigned the UTC offset that prevailed in the time zone just before the transition, while an ambiguous timestamp that could fall on either side of a jump-back transition is assigned the UTC offset that prevailed just after the transition. In most time zones this is equivalent to saying that “the standard-time interpretation is preferred when in doubt”.
In all cases, the UTC offset associated with a timestamp can be specified explicitly, using either a numeric UTC offset or a time zone abbreviation that corresponds to a fixed UTC offset. The rule just given applies only when it is necessary to infer a UTC offset for a time zone in which the offset varies.
PostgreSQL can accept time zone specifications that are written according to the POSIX standard's rules for the TZ environment variable. POSIX time zone specifications are inadequate to deal with the complexity of real-world time zone history, but there are sometimes reasons to use them.
A POSIX time zone specification has the form
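    STD offset [ DST [ dstoffset ] [ , rule ] ]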
(For readability, we show spaces between the fields, but spaces should not be used in practice.) The fields are:
STD
is the zone abbreviation to be used for standard time.
offset
is the zone's standard-time offset from UTC.
DST
is the zone abbreviation to be used for daylight-savings time. If this field and the following ones are omitted, the zone uses a fixed UTC offset with no daylight-savings rule.
dstoffset
is the daylight-savings offset from UTC. This field is typically omitted, since it defaults to one hour less than the standard-time offset, which is usually the right thing.
rule
defines the rule for when daylight savings is in effect, as described below.
In this syntax, a zone abbreviation can be a string of letters, such as EST, or an arbitrary string surrounded by angle brackets, such as <UTC-05>. Note that the zone abbreviations given here are only used for output, and even then only in some timestamp output formats. The zone abbreviations recognized in timestamp input are determined as explained in Section B.4.
The offset fields specify the hours, and optionally minutes and seconds, difference from UTC. They have the format hh[:mm[:ss]] optionally with a leading sign (+ or -). The positive sign is used for zones west of Greenwich. (Note that this is the opposite of the ISO-8601 sign convention used elsewhere in PostgreSQL.) hh can have one or two digits; mm and ss (if used) must have two.
The daylight-savings transition rule has the format
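    dstdate [ / dsttime ] , stddate [ / stdtime ]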
(As before, spaces should not be included in practice.) The dstdate and dsttime fields define when daylight-savings time starts, while stddate and stdtime define when standard time starts. (In some cases, notably in zones south of the equator, the former might be later in the year than the latter.) The date fields have one of these formats:
n
A plain integer denotes a day of the year, counting from zero to 364, or to 365 in leap years.
Jn
In this form, n counts from 1 to 365, and February 29 is not counted even if it is present. (Thus, a transition occurring on February 29 could not be specified this way. However, days after February have the same numbers whether it's a leap year or not, so that this form is usually more useful than the plain-integer form for transitions on fixed dates.)
Mm.n.d
This form specifies a transition that always happens during the same month and on the same day of the week. m identifies the month, from 1 to 12. n specifies the n'th occurrence of the weekday identified by d. n is a number between 1 and 4, or 5 meaning the last occurrence of that weekday in the month (which could be the fourth or the fifth). d is a number between 0 and 6, with 0 indicating Sunday. For example, M3.2.0 means “the second Sunday in March”.
The M format is sufficient to describe many common daylight-savings transition laws. But note that none of these variants can deal with daylight-savings law changes, so in practice the historical data stored for named time zones (in the IANA time zone database) is necessary to interpret past time stamps correctly.
The time fields in a transition rule have the same format as the offset fields described previously, except that they cannot contain signs. They define the current local time at which the change to the other time occurs. If omitted, they default to 02:00:00.
If a daylight-savings abbreviation is given but the transition rule field is omitted, the fallback behavior is to use the rule M3.2.0,M11.1.0, which corresponds to USA practice as of 2020 (that is, spring forward on the second Sunday of March, fall back on the first Sunday of November, both transitions occurring at 2AM prevailing time). Note that this rule does not give correct USA transition dates for years before 2007.
As an example, CET-1CEST,M3.5.0,M10.5.0/3 describes current (as of 2020) timekeeping practice in Paris. This specification says that standard time has the abbreviation CET and is one hour ahead (east) of UTC; daylight savings time has the abbreviation CEST and is implicitly two hours ahead of UTC; daylight savings time begins on the last Sunday in March at 2AM CET and ends on the last Sunday in October at 3AM CEST.
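For instance, this specification can be used directly (an illustrative session; the offset shown depends on the date):

    SET TIME ZONE 'CET-1CEST,M3.5.0,M10.5.0/3';
    SELECT '2020-07-15 12:00'::timestamptz;
    -- 2020-07-15 12:00:00+02  (the daylight-savings rule applies in July)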
The four timezone names EST5EDT, CST6CDT, MST7MDT, and PST8PDT look like they are POSIX zone specifications. However, they actually are treated as named time zones because (for historical reasons) there are files by those names in the IANA time zone database. The practical implication of this is that these zone names will produce valid historical USA daylight-savings transitions, even when a plain POSIX specification would not.
One should be wary that it is easy to misspell a POSIX-style time zone specification, since there is no check on the reasonableness of the zone abbreviation(s). For example, SET TIMEZONE TO FOOBAR0 will work, leaving the system effectively using a rather peculiar abbreviation for UTC.
Since timezone abbreviations are not well standardized, PostgreSQL provides a means to customize the set of abbreviations accepted by the server. The timezone_abbreviations run-time parameter determines the active set of abbreviations. While this parameter can be altered by any database user, the possible values for it are under the control of the database administrator — they are in fact names of configuration files stored in .../share/timezonesets/ of the installation directory. By adding or altering files in that directory, the administrator can set local policy for timezone abbreviations.
timezone_abbreviations can be set to any file name found in .../share/timezonesets/, if the file's name is entirely alphabetic. (The prohibition against non-alphabetic characters in timezone_abbreviations prevents reading files outside the intended directory, as well as reading editor backup files and other extraneous files.)
A timezone abbreviation file can contain blank lines and comments beginning with #. Non-comment lines must have one of these formats:
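    zone_abbreviation offset
    zone_abbreviation offset D
    zone_abbreviation time_zone_name
    @INCLUDE file_name
    @OVERRIDE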
zone_abbreviation is just the abbreviation being defined. offset is an integer giving the equivalent offset in seconds from UTC, positive being east from Greenwich and negative being west. For example, -18000 would be five hours west of Greenwich, or North American east coast standard time. D indicates that the zone name represents local daylight-savings time rather than standard time.
Alternatively, a time_zone_name can be given, referencing a zone name defined in the IANA time zone database. The zone's definition is consulted to see whether the abbreviation is or has been in use in that zone, and if so, the appropriate meaning is used — that is, the meaning that was currently in use at the timestamp whose value is being determined, or the meaning in use immediately before that if it wasn't current at that time, or the oldest meaning if it was used only after that time. This behavior is essential for dealing with abbreviations whose meaning has historically varied. It is also allowed to define an abbreviation in terms of a zone name in which that abbreviation does not appear; then using the abbreviation is just equivalent to writing out the zone name.
Using a simple integer offset is preferred when defining an abbreviation whose offset from UTC has never changed, since such abbreviations are much cheaper to process than those that require consulting a time zone definition.
The @INCLUDE syntax allows inclusion of another file in the .../share/timezonesets/ directory. Inclusion can be nested, to a limited depth.
The @OVERRIDE syntax indicates that subsequent entries in the file can override previous entries (typically, entries obtained from included files). Without this, conflicting definitions of the same timezone abbreviation are considered an error.
In an unmodified installation, the file Default contains all the non-conflicting time zone abbreviations for most of the world. Additional files Australia and India are provided for those regions: these files first include the Default file and then add or modify abbreviations as needed.
For reference purposes, a standard installation also contains files Africa.txt, America.txt, etc., containing information about every time zone abbreviation known to be in use according to the IANA time zone database. The zone name definitions found in these files can be copied and pasted into a custom configuration file as needed. Note that these files cannot be directly referenced as timezone_abbreviations settings, because of the dot embedded in their names.
If an error occurs while reading the time zone abbreviation set, no new value is applied and the old set is kept. If the error occurs while starting the database, startup fails.
The abbreviations defined in the configuration file override non-timezone meanings built into PostgreSQL. For example, the Australia configuration file defines SAT (for South Australian Standard Time). When this file is active, SAT will not be recognized as an abbreviation for Saturday.
If you modify files in .../share/timezonesets/, it is up to you to make backups — a normal database dump will not include this directory.
Release date: 2023-02-09
This release contains a variety of fixes from 15.1. For information about new features in major release 15, see Section E.3.
A dump/restore is not required for those running 15.X.
However, if you are upgrading from a version earlier than 15.1, see Section E.2.
libpq can leak memory contents after GSSAPI transport encryption initiation fails (Jacob Champion)
A modified server, or an unauthenticated man-in-the-middle, can send a not-zero-terminated error message during setup of GSSAPI (Kerberos) transport encryption. libpq will then copy that string, as well as following bytes in application memory up to the next zero byte, to its error report. Depending on what the calling application does with the error report, this could result in disclosure of application memory contents. There is also a small probability of a crash due to reading beyond the end of memory. Fix by properly zero-terminating the server message. (CVE-2022-41862)
Fix calculation of which GENERATED
columns need to be updated in child tables during an UPDATE
on a partitioned table or inheritance tree (Amit Langote, Tom Lane)
This fixes failure to update GENERATED
columns that do not exist in the parent table, or that have different dependencies than are in the parent column's generation expression.
Fix possible failure of MERGE
to compute GENERATED
columns (Dean Rasheed)
When the first row-level action of the MERGE
was an UPDATE
, any subsequent INSERT
actions would fail to compute GENERATED
columns that were deemed unnecessary to compute for the UPDATE
action (due to not depending on any of the UPDATE
target columns).
Fix MERGE
's check for unreachable WHEN
clauses (Dean Rasheed)
A WHEN
clause following an unconditional WHEN
clause should be rejected as unreachable, but this case was not always detected.
Fix MERGE
's rule-detection test (Dean Rasheed)
MERGE
is not supported on tables with rules; but it also failed on tables that once had rules but no longer do.
In MERGE
, don't count a DO NOTHING
action as a processed tuple (Álvaro Herrera)
This makes the code's behavior match the documentation.
Allow a WITH RECURSIVE ... CYCLE
CTE to access its output column (Tom Lane)
A reference to the SET
column from within the CTE would fail with “cache lookup failed for type 0”.
Fix handling of pending inserts when doing a bulk insertion to a foreign table (Etsuro Fujita)
In some cases pending insertions were not flushed to the FDW soon enough, leading to logical inconsistencies, for example BEFORE ROW
triggers not seeing rows they should be able to see.
Allow REPLICA IDENTITY
to be set on an index that's not (yet) valid (Tom Lane)
When pg_dump dumps a partitioned index that's marked REPLICA IDENTITY
, it generates a command sequence that applies REPLICA IDENTITY
before the partitioned index has been marked valid, causing restore to fail. There seems no very good reason to prohibit doing it in that order, so allow it. The marking will have no effect anyway until the index becomes valid.
Fix handling of DEFAULT
markers in rules that perform an INSERT
from a multi-row VALUES
list (Dean Rasheed)
In some cases a DEFAULT
marker would not get replaced with the proper default-value expression, leading to an “unrecognized node type” error.
Reject uses of undefined variables in jsonpath
existence checks (Alexander Korotkov, David G. Johnston)
While jsonpath
match operators threw an error for an undefined variable in the path pattern, the existence operators silently treated it as a match.
Fix jsonb
subscripting to cope with toasted subscript values (Tom Lane, David G. Johnston)
Using a text value fetched directly from a table as a jsonb
subscript was likely to fail. Fetches would usually not find any matching element. Assignments could store the value with a garbage key, although keys long enough to cause that problem are probably rare in the field.
Fix edge-case data corruption in parallel hash joins (Dmitry Astapov)
If the final chunk of a large tuple being written out to a temporary file was exactly 32760 bytes, it would be corrupted due to a fencepost bug. The query would typically fail later with corrupted-data symptoms.
Honor non-default settings of checkpoint_completion_target
(Bharath Rupireddy)
Internal state was not updated after a change in checkpoint_completion_target
, possibly resulting in performing checkpoint I/O faster or slower than desired, especially if that setting was changed on-the-fly.
Log the correct ending timestamp in recovery_target_xid
mode (Tom Lane)
When ending recovery based on the recovery_target_xid
setting with recovery_target_inclusive
= off
, we printed an incorrect timestamp (always 2000-01-01) in the “recovery stopping before ... transaction” log message.
Improve error reporting for some buffered file read failures (Peter Eisentraut)
Correctly report a short read, giving the numbers of bytes desired and actually read, instead of reporting an irrelevant error code. Most places got this right already, but some recently-written replication logic did not.
Remove arbitrary limit on number of elements in int2vector
and oidvector
(Tom Lane)
The input functions for these types previously rejected more than 100 elements. With the introduction of the logical replication column list feature, it's necessary to accept int2vector
s having up to 1600 columns, otherwise long column lists cause logical-replication failures.
In extended query protocol, avoid an immediate commit after ANALYZE
if we're running a pipeline (Tom Lane)
If there's not been an explicit BEGIN TRANSACTION
, ANALYZE
would take it on itself to commit, which should not happen within a pipelined series of commands.
Reject cancel request packets having the wrong length (Andrey Borodin)
The server would process a cancel request even if its length word was too small. This led to reading beyond the end of the allocated buffer. In theory that could cause a segfault, but it seems quite unlikely to happen in practice, since the buffer would have to be very close to the end of memory. The more likely outcome was a bogus log message about wrong backend PID or cancel code. Complain about the wrong length, instead.
Fix planner preprocessing oversights for window function run-condition expressions (Richard Guo, David Rowley)
This could lead to planner errors such as “WindowFunc not found in subplan target lists”.
Fix possible dangling-pointer access during execution of window function run-condition expressions (David Rowley)
In practice, because the run-condition optimization is only applied to certain window functions that happen to all return int8
, this only manifested as a problem on 32-bit builds.
Add recursion and looping defenses in subquery pullup (Tom Lane)
A contrived query can result in deep recursion and unreasonable amounts of time spent trying to flatten subqueries. A proper fix for that seems unduly invasive for a back-patch, but we can at least add stack depth checks and an interrupt check to allow the query to be cancelled.
Fix planner issues when combining Memoize nodes with partitionwise joins or parameterized nestloops (Richard Guo)
These errors could lead to not using Memoize in contexts where it would be useful, or possibly to wrong query plans.
Fix partitionwise-join code to tolerate failure to produce a plan for each partition (Tom Lane)
This could result in “could not devise a query plan for the given query” errors.
Limit the amount of cleanup work done by get_actual_variable_range
(Simon Riggs)
Planner runs occurring just after deletion of a large number of tuples appearing at the end of an index could expend significant amounts of work setting the “killed” bits for those index entries. Limit the amount of work done in any one query by giving up on this process after examining 100 heap pages. All the cleanup will still happen eventually, but without so large a performance hiccup.
Prevent the statistics machinery from getting confused when a relation's relkind changes (Andres Freund)
Converting a table to a view could lead to crashes or assertion failures.
Fix under-parenthesized display of AT TIME ZONE
constructs (Tom Lane)
This could result in dump/restore failures for rules or views in which an argument of AT TIME ZONE
is itself an expression.
Prevent clobbering of cached parsetrees for utility statements in SQL functions (Tom Lane, Daniel Gustafsson)
If a SQL-language function executes the same utility command more than once within a single calling query, it could crash or report strange errors such as “unrecognized node type”.
Ensure that execution of full-text-search queries can be cancelled while they are performing phrase matches (Tom Lane)
Fix memory leak in hashing strings with nondeterministic collations (Jeff Davis)
Fix deadlock between DROP DATABASE
and logical replication worker process (Hou Zhijie)
This was caused by an ill-advised choice to block interrupts while creating a logical replication slot in the worker. In version 15 that could lead to an undetected deadlock. In version 14, no deadlock has been observed, but it's still a bad idea to block interrupts while waiting for network I/O.
Clean up the libpq connection object after a failed replication connection attempt (Andres Freund)
The previous coding leaked the connection object. In background code paths that's pretty harmless because the calling process will give up and exit. But in commands such as CREATE SUBSCRIPTION
, such a failure resulted in a small session-lifespan memory leak.
In hot-standby servers, reduce processing effort for tracking XIDs known to be active on the primary (Simon Riggs, Michail Nikolaev)
Insufficiently-aggressive cleanup of the KnownAssignedXids array could lead to poor performance, particularly when max_connections
is set to a large value on the standby.
Ignore invalidated logical-replication slots while determining oldest catalog xmin (Sirisha Chamarthi)
A replication slot could prevent cleanup of dead tuples in the system catalogs even after it becomes invalidated due to exceeding max_slot_wal_keep_size
. Thus, failure of a replication consumer could lead to indefinitely-large catalog bloat.
In logical decoding, notify the remote node when a transaction is detected to have crashed (Hou Zhijie)
After a server restart, we'll re-stream the changes for transactions occurring shortly before the restart. Some of these transactions probably never completed; when we realize that one didn't we throw away the relevant decoding state locally, but we neglected to tell the subscriber about it. That led to the subscriber keeping useless streaming files until it's next restarted.
Fix uninitialized-memory usage in logical decoding (Masahiko Sawada)
In certain cases, resumption of logical decoding could try to re-use XID data that had already been freed, leading to unpredictable behavior.
Acquire spinlock while updating shared state during logical decoding context creation (Masahiko Sawada)
We neglected to acquire the appropriate lock while updating data about two-phase transactions, potentially allowing other processes to see inconsistent data.
Fix pgoutput replication plug-in to not send columns not listed in a table's replication column list (Hou Zhijie)
UPDATE
and DELETE
events did not pay attention to the configured column list, thus sending more data than expected. This did not cause a problem when the receiver is our built-in logical replication code, but it might confuse other receivers, and in any case it wasted network bandwidth.
Avoid rare “failed to acquire cleanup lock” panic during WAL replay of hash-index page split operations (Robert Haas)
Advance a heap page's LSN when setting its all-visible bit during WAL replay (Jeff Davis)
Failure to do this left the page possibly different on standby servers than the primary, and violated some other expectations about when the LSN changes. This seems only a theoretical hazard so far as PostgreSQL itself is concerned, but it could upset third-party tools.
Fix int64_div_fast_to_numeric()
to work for a wider range of inputs (Dean Rasheed)
This function misbehaved with some values of its second argument. No such usages exist in core PostgreSQL, but it's clearly a hazard for external modules, so repair.
Fix latent buffer-overrun problem in WaitEventSet
logic (Thomas Munro)
The epoll
-based and kqueue
-based implementations could ask the kernel for too many events if the size of their internal buffer was different from the size of the caller's output buffer. That case is not known to occur in released PostgreSQL versions, but this error is a hazard for external modules and future bug fixes.
Avoid nominally-undefined behavior when accessing shared memory in 32-bit builds (Andres Freund)
clang's undefined-behavior sanitizer complained about use of a pointer that was less aligned than it should be. It's very unlikely that this would cause a problem in non-debug builds, but it's worth fixing for testing purposes.
Fix assertion failure in BRIN minmax-multi opclasses (Tomas Vondra)
The assertion was overly strict, so this mistake was harmless in non-assert builds.
Remove faulty assertion in useless-RESULT-RTE optimization logic (Tom Lane)
Fix copy-and-paste errors in cache-lookup-failure messages for ACL checks (Justin Pryzby)
In principle these errors should never be reached. But if they are, some of them reported the wrong type of object.
Fix possible corruption of very large tablespace map files in pg_basebackup (Antonin Houska)
Avoid harmless warning from pg_dump in --if-exists
mode (Tom Lane)
If the public
schema has a non-default owner then use of pg_dump's --if-exists
option resulted in a warning message “warning: could not find where to insert IF EXISTS in statement "-- *not* dropping schema, since initdb creates it"”. The dump output was okay, though.
Fix psql's \sf
and \ef
commands to handle SQL-language functions that have SQL-standard function bodies (Tom Lane)
These commands misidentified the start of the function body when it used new-style syntax.
Fix tab completion of ALTER FUNCTION/PROCEDURE/ROUTINE
... SET SCHEMA
(Dean Rasheed)
Update contrib/pageinspect
to mark its disk-accessing functions as PARALLEL RESTRICTED
(Tom Lane)
This avoids possible failure if one of these functions is used to examine a temporary table, since a session's temporary tables are not accessible from parallel workers.
Fix contrib/seg
to not crash or print garbage if an input number has more than 127 digits (Tom Lane)
Fix build on Microsoft Visual Studio 2013 (Tom Lane)
A previous patch supposed that all platforms of interest have snprintf()
, but MSVC 2013 isn't quite there yet. Revert to using sprintf()
on that platform.
Fix compile failure in building PL/Perl with MSVC when using Strawberry Perl (Andrew Dunstan)
Fix mismatch of PL/Perl built with MSVC versus a Perl library built with gcc (Andrew Dunstan)
Such combinations could previously fail with “loadable library and perl binaries are mismatched” errors.
Suppress compiler warnings from Perl's header files (Andres Freund)
Our preferred compiler options provoke warnings about constructs appearing in recent versions of Perl's header files. When using gcc, we can suppress these warnings with a pragma.
Fix pg_waldump to build on compilers that don't discard unused static-inline functions (Tom Lane)
Update time zone data files to tzdata release 2022g for DST law changes in Greenland and Mexico, plus historical corrections for northern Canada, Colombia, and Singapore.
Notably, a new timezone America/Ciudad_Juarez has been split off from America/Ojinaga.
This appendix and the next one contain information regarding the modules that can be found in the contrib directory of the PostgreSQL distribution. These include porting tools, analysis utilities, and plug-in features that are not part of the core PostgreSQL system, mainly because they address a limited audience or are too experimental to be part of the main source tree. This does not preclude their usefulness.
This appendix covers the extensions and other server plug-in modules found in contrib; Appendix G covers the utility programs.
When building from the source distribution, these modules are not built automatically, unless you build the “world” target (see Step 2). You can build and install all of them by running:
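    make            # build all contrib modules (GNU make assumed)
    make install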
in the contrib directory of a configured source tree; or to build and install just one selected module, do the same in that module's subdirectory. Many of the modules have regression tests, which can be executed before installation or once you have a PostgreSQL server running, using the commands sketched below.
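Roughly (GNU make assumed, run in the same directory):

    make check          # run a module's regression tests before installation
    make installcheck   # run them against an already-running server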
If you are using a pre-packaged version of PostgreSQL, these modules are typically made available as a separate subpackage, such as postgresql-contrib.
Many modules supply new user-defined functions, operators, or types. To make use of one of these modules, after the code has been installed you need to register the new SQL objects in the database system. This is done by executing a CREATE EXTENSION command. In a fresh database, you can simply do:
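    CREATE EXTENSION module_name;   -- replace module_name with the name of the desired module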
This command registers the new SQL objects in the current database only, so you need to run it in each database that you want the module's facilities to be available in. Alternatively, run it in database template1 so that the extension will be copied into subsequently-created databases by default.
For all these modules, CREATE EXTENSION must be run by a database superuser, unless the module is considered “trusted”, in which case it can be run by any user who has CREATE privilege on the current database. Modules that are trusted are identified as such in the sections that follow. Generally, trusted modules are ones that cannot provide access to outside-the-database functionality.
Many modules allow you to install their objects in a schema of your choice. To do that, add SCHEMA schema_name to the CREATE EXTENSION command. By default, the objects will be placed in your current creation target schema, which in turn defaults to public.
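For instance, a hypothetical installation of the hstore module into a schema named addons:

    CREATE EXTENSION hstore SCHEMA addons;   -- 'addons' is an illustrative schema name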
Note, however, that some of these modules are not “extensions” in this sense, but are loaded into the server in some other way, for instance by way of shared_preload_libraries. See the documentation of each module for details.
PostgreSQL has built-in support for flexible date/time input. Dates and times are entered as strings and are split into fields in a predetermined way. Each field may be interpreted as a number, ignored, or rejected. The parser also contains built-in tables of common keywords, such as month names, weekday names, and time zone names.
This appendix describes how date and time input is interpreted, and includes a list of the common keywords.
The following features defined in SQL:2016 are not implemented in this release of PostgreSQL. In a few cases, equivalent functionality is available.
The amcheck module provides functions that allow you to verify the logical consistency of the structure of relations. If the structure appears to be valid, no error is raised.
The functions verify various invariants in the structure of the representation of particular relations. The correctness of the access method functions behind index scans and other important operations relies on these invariants always holding. For example, certain functions verify, among other things, that all B-Tree pages have items in “logical” order (e.g., for B-Tree indexes on text, index tuples should be in collated lexical order). If that particular invariant somehow fails to hold, we can expect binary searches on the affected page to incorrectly guide index scans, resulting in wrong answers to SQL queries.
Verification is performed using the same procedures as those used by index scans themselves, which may be user-defined operator class code. For example, B-Tree index verification relies on comparisons made with one or more B-Tree support function 1 routines. See the documentation on index operator classes and support functions for details.
amcheck functions may only be used by superusers.
bt_index_check(index regclass, heapallindexed boolean) returns void
bt_index_check tests that its target, a B-Tree index, respects a variety of invariants. Example usage:
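A sketch of such a session (adapted from typical usage; the database name and exact filters are illustrative):

    SELECT bt_index_check(index => c.oid, heapallindexed => i.indisunique),
           c.relname, c.relpages
    FROM pg_index i
    JOIN pg_opclass op ON i.indclass[0] = op.oid
    JOIN pg_am am ON op.opcmethod = am.oid
    JOIN pg_class c ON i.indexrelid = c.oid
    JOIN pg_namespace n ON c.relnamespace = n.oid
    WHERE am.amname = 'btree' AND n.nspname = 'pg_catalog'
      -- restrict to valid, ready, non-temporary btree indexes
      AND c.relpersistence != 't' AND c.relkind = 'i'
      AND i.indisready AND i.indisvalid
    ORDER BY c.relpages DESC
    LIMIT 10;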
This example shows a session that performs verification of the 10 largest catalog indexes in the database “test”. Verification of the presence of heap tuples as index tuples is requested for the subset that are unique indexes. Since no error is raised, all indexes tested appear to be logically consistent. Naturally, this query could easily be changed to call bt_index_check for every index in the database where verification is supported.
bt_index_check acquires an AccessShareLock on the target index and the heap relation it belongs to. This lock mode is the same lock mode acquired on relations by simple SELECT statements. bt_index_check does not verify invariants that span child/parent relationships, but will verify the presence of all heap tuples as index tuples within the index when heapallindexed is true. When a routine, lightweight test for corruption is required in a live production environment, using bt_index_check often provides the best trade-off between thoroughness of verification and limiting the impact on application performance and availability.
bt_index_parent_check(index regclass, heapallindexed boolean, rootdescend boolean) returns void
bt_index_parent_check tests that its target, a B-Tree index, respects a variety of invariants. Optionally, when the heapallindexed argument is true, the function verifies the presence of all heap tuples that should be found within the index. When the optional rootdescend argument is true, verification re-finds tuples on the leaf level by performing a new search from the root page for each tuple. The checks that can be performed by bt_index_parent_check are a superset of the checks that can be performed by bt_index_check. bt_index_parent_check can be thought of as a more thorough variant of bt_index_check: unlike bt_index_check, bt_index_parent_check also checks invariants that span parent/child relationships, including checking that there are no missing downlinks in the index structure. bt_index_parent_check follows the general convention of raising an error if it finds a logical inconsistency or other problem.
A ShareLock is required on the target index by bt_index_parent_check (a ShareLock is also acquired on the heap relation). These locks prevent concurrent data modification from INSERT, UPDATE, and DELETE commands. The locks also prevent the underlying relation from being concurrently processed by VACUUM, as well as all other utility commands. Note that the function holds locks only while running, not for the entire transaction.
bt_index_parent_check's additional verification is more likely to detect various pathological cases. These cases may involve an incorrectly implemented B-Tree operator class used by the index that is checked, or, hypothetically, undiscovered bugs in the underlying B-Tree index access method code. Note that bt_index_parent_check cannot be used when Hot Standby mode is enabled (i.e., on read-only physical replicas), unlike bt_index_check.
bt_index_check and bt_index_parent_check both output log messages about the verification process at DEBUG1 and DEBUG2 severity levels. These messages provide detailed information about the verification process that may be of interest to PostgreSQL developers. Advanced users may also find this information helpful, since it provides additional context should verification actually detect an inconsistency. Running:
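    SET client_min_messages = DEBUG1;  -- lower the client message threshold to the level mentioned above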
in an interactive psql session before running a verification query will display messages about the progress of verification with a manageable level of detail.
heapallindexed Verification
When the heapallindexed argument to verification functions is true, an additional phase of verification is performed against the table associated with the target index relation. This consists of a “dummy” CREATE INDEX operation, which checks for the presence of all hypothetical new index tuples against a temporary, in-memory summarizing structure (this is built when needed during the basic first phase of verification). The summarizing structure “fingerprints” every tuple found within the target index. The high level principle behind heapallindexed verification is that a new index that is equivalent to the existing, target index must only have entries that can be found in the existing structure.
The additional heapallindexed phase adds significant overhead: verification will typically take several times longer. However, there is no change to the relation-level locks acquired when heapallindexed verification is performed.
The summarizing structure is bound in size by maintenance_work_mem. In order to ensure that there is no more than a 2% probability of failure to detect an inconsistency for each heap tuple that should be represented in the index, approximately 2 bytes of memory are needed per tuple. As less memory is made available per tuple, the probability of missing an inconsistency slowly increases. This approach limits the overhead of verification significantly, while only slightly reducing the probability of detecting a problem, especially for installations where verification is treated as a routine maintenance task. Any single absent or malformed tuple has a new opportunity to be detected with each new verification attempt.
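As a rough illustration of the memory rule above, a table of 100 million rows calls for on the order of 200MB of maintenance_work_mem to keep the per-tuple miss probability near 2%. A hypothetical session (the index name is illustrative):

    SET maintenance_work_mem = '256MB';
    SELECT bt_index_check('some_table_pkey', heapallindexed => true);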
Using amcheck Effectively
Structural inconsistencies caused by incorrect operator class implementations.
This includes issues caused by the comparison rules of operating system collations changing. Comparisons of datums of a collatable type like text must be immutable (just as all comparisons used for B-Tree index scans must be immutable), which implies that operating system collation rules must never change. Though rare, updates to operating system collation rules can cause these issues. More commonly, an inconsistency in the collation order between a master server and a standby server is implicated, possibly because the major operating system version in use is inconsistent. Such inconsistencies will generally only arise on standby servers, and so can generally only be detected on standby servers.
Structural inconsistencies between indexes and the heap relations that are indexed (when heapallindexed verification is performed).
There is no cross-checking of indexes against their heap relation during normal operation. Symptoms of heap corruption can be subtle.
Corruption caused by hypothetical undiscovered bugs in the underlying PostgreSQL access method code, sort code, or transaction management code.
File system or storage subsystem faults where checksums happen to simply not be enabled.
Note that amcheck examines a page as represented in some shared memory buffer at the time of verification if there is only a shared buffer hit when accessing the block. Consequently, amcheck does not necessarily examine data read from the file system at the time of verification. Note that when checksums are enabled, amcheck may raise an error due to a checksum failure when a corrupt block is read into a buffer.
Corruption caused by faulty RAM, or the broader memory subsystem.
PostgreSQL does not protect against correctable memory errors and it is assumed you will operate using RAM that uses industry standard Error Correcting Codes (ECC) or better protection. However, ECC memory is typically only immune to single-bit errors, and should not be assumed to provide absolute protection against failures that result in memory corruption.
When heapallindexed verification is performed, there is generally a greatly increased chance of detecting single-bit errors, since strict binary equality is tested, and the indexed attributes within the heap are tested.
In general, amcheck can only prove the presence of corruption; it cannot prove its absence.
No error concerning corruption raised by amcheck should ever be a false positive. amcheck raises errors in the event of conditions that, by definition, should never happen, and so careful analysis of amcheck errors is often required.
The auto_explain module provides a means for logging execution plans of slow statements automatically, without having to run EXPLAIN by hand. This is especially helpful for tracking down un-optimized queries in large applications.
The module provides no SQL-accessible functions. To use it, simply load it into the server. You can load it into an individual session:
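    LOAD 'auto_explain';  -- loads the module into the current session only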
(You must be superuser to do that.) More typical usage is to preload it into some or all sessions by including auto_explain in session_preload_libraries or shared_preload_libraries in postgresql.conf. Then you can track unexpectedly slow queries no matter when they happen, at the cost of some overhead for all queries.
There are several configuration parameters that control the behavior of auto_explain. Note that the default behavior is to do nothing, so you must set at least auto_explain.log_min_duration if you want any results.
auto_explain.log_min_duration (integer)
auto_explain.log_min_duration is the minimum statement execution time, in milliseconds, that will cause the statement's plan to be logged. Setting this to zero logs all plans. -1 (the default) disables logging of plans. For example, if you set it to 250ms then all statements that run 250ms or longer will be logged. Only superusers can change this setting.
auto_explain.log_analyze (boolean)
auto_explain.log_analyze causes EXPLAIN ANALYZE output, rather than just EXPLAIN output, to be printed when an execution plan is logged. This parameter is off by default. Only superusers can change this setting.
When this parameter is on, per-plan-node timing occurs for all statements executed, whether or not they run long enough to actually get logged. This can have an extremely negative impact on performance. Turning off auto_explain.log_timing ameliorates the performance cost, at the price of obtaining less information.
auto_explain.log_buffers (boolean)
auto_explain.log_buffers controls whether buffer usage statistics are printed when an execution plan is logged; it is equivalent to the BUFFERS option of EXPLAIN. This parameter has no effect unless auto_explain.log_analyze is enabled. It is off by default. Only superusers can change this setting.
auto_explain.log_wal (boolean)
auto_explain.log_wal controls whether WAL usage statistics are printed when an execution plan is logged; it is equivalent to the WAL option of EXPLAIN. This parameter has no effect unless auto_explain.log_analyze is enabled. It is off by default. Only superusers can change this setting.
auto_explain.log_timing (boolean)
auto_explain.log_timing controls whether per-node timing information is printed when an execution plan is logged; it is equivalent to the TIMING option of EXPLAIN. The overhead of repeatedly reading the system clock can slow down queries significantly on some systems, so it may be useful to set this parameter to off when only actual row counts, and not exact times, are needed. This parameter has no effect unless auto_explain.log_analyze is enabled. It is on by default. Only superusers can change this setting.
auto_explain.log_triggers (boolean)
auto_explain.log_triggers causes trigger execution statistics to be included when an execution plan is logged. This parameter has no effect unless auto_explain.log_analyze is enabled. It is off by default. Only superusers can change this setting.
auto_explain.log_verbose (boolean)
auto_explain.log_verbose controls whether verbose details are printed when an execution plan is logged; it is equivalent to the VERBOSE option of EXPLAIN. It is off by default. Only superusers can change this setting.
auto_explain.log_settings (boolean)
auto_explain.log_settings controls whether information about modified configuration options is printed when an execution plan is logged. Only options affecting query planning whose value differs from the built-in default are included in the output. It is off by default. Only superusers can change this setting.
auto_explain.log_format (enum)
auto_explain.log_format selects the EXPLAIN output format to be used. The allowed values are text, xml, json, and yaml. The default is text. Only superusers can change this setting.
auto_explain.log_level (enum)
auto_explain.log_level selects the log level at which auto_explain will log the query plan. Valid values are DEBUG5, DEBUG4, DEBUG3, DEBUG2, DEBUG1, INFO, NOTICE, WARNING, and LOG. The default is LOG. Only superusers can change this setting.
auto_explain.log_nested_statements (boolean)
auto_explain.log_nested_statements causes nested statements (statements executed inside a function) to be considered for logging. When it is off, only top-level query plans are logged. It is off by default. Only superusers can change this setting.
auto_explain.sample_rate (real)
auto_explain.sample_rate causes auto_explain to explain only a fraction of the statements in each session. The default is 1, meaning explain all the queries. In the case of nested statements, either all of them will be explained or none. Only superusers can change this setting.
In ordinary usage, these parameters are set in postgresql.conf, although superusers can alter them on-the-fly within their own sessions. Typical usage might be:
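    # postgresql.conf (illustrative values)
    session_preload_libraries = 'auto_explain'
    auto_explain.log_min_duration = '3s'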
This might produce log output such as:
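An illustrative sketch (the query, costs, and timing shown here are made up):

    LOG:  duration: 3521.136 ms  plan:
      Query Text: SELECT count(*) FROM pg_class, pg_index WHERE oid = indrelid AND indisunique;
      Aggregate  (cost=24.16..24.17 rows=1 width=8)
        ->  Hash Join  (cost=7.29..23.83 rows=130 width=0)
              Hash Cond: (pg_class.oid = pg_index.indrelid)
              ->  Seq Scan on pg_class  (cost=0.00..15.47 rows=347 width=4)
              ->  Hash  (cost=5.67..5.67 rows=130 width=4)
                    ->  Seq Scan on pg_index  (cost=0.00..5.67 rows=130 width=4)
                          Filter: indisunique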
amcheck can be effective at detecting various types of failure modes that data checksums will always fail to catch. These include:
If a problem like this arises, it may not affect each individual index that is ordered using an affected collation, simply because indexed values might happen to have the same absolute ordering regardless of the behavioral inconsistency. See the documentation on locale and collation support for further details about how PostgreSQL uses operating system locales and collations.
Automatic verification of the structural integrity of indexes plays a role in the general testing of new or proposed PostgreSQL features that could plausibly allow a logical inconsistency to be introduced. Verification of table structure and associated visibility and transaction status information plays a similar role. One obvious testing strategy is to call amcheck functions continuously when running the standard regression tests. See the regression test documentation for details on running the tests.
There is no general method of repairing problems that amcheck detects. An explanation for the root cause of an invariant violation should be sought; the pageinspect module may play a useful role in diagnosing corruption that amcheck detects. A REINDEX may not be effective in repairing corruption.
Takahiro Itagaki
Identifier | Core? | Description | Comment |
---|---|---|---|
B011 | | Embedded Ada | |
B013 | | Embedded COBOL | |
B014 | | Embedded Fortran | |
B015 | | Embedded MUMPS | |
B016 | | Embedded Pascal | |
B017 | | Embedded PL/I | |
B031 | | Basic dynamic SQL | |
B032 | | Extended dynamic SQL | |
B032-01 | | \<describe input statement\> | |
B033 | | Untyped SQL-invoked function arguments | |
B034 | | Dynamic specification of cursor attributes | |
B035 | | Non-extended descriptor names | |
B041 | | Extensions to embedded SQL exception declarations | |
B051 | | Enhanced execution rights | |
B111 | | Module language Ada | |
B112 | | Module language C | |
B113 | | Module language COBOL | |
B114 | | Module language Fortran | |
B115 | | Module language MUMPS | |
B116 | | Module language Pascal | |
B117 | | Module language PL/I | |
B121 | | Routine language Ada | |
B122 | | Routine language C | |
B123 | | Routine language COBOL | |
B124 | | Routine language Fortran | |
B125 | | Routine language MUMPS | |
B126 | | Routine language Pascal | |
B127 | | Routine language PL/I | |
B128 | | Routine language SQL | |
B200 | | Polymorphic table functions | |
B201 | | More than one PTF generic table parameter | |
B202 | | PTF Copartitioning | |
B203 | | More than one copartition specification | |
B204 | | PRUNE WHEN EMPTY | |
B205 | | Pass-through columns | |
B206 | | PTF descriptor parameters | |
B207 | | Cross products of partitionings | |
B208 | | PTF component procedure interface | |
B209 | | PTF extended names | |
B211 | | Module language Ada: VARCHAR and NUMERIC support | |
B221 | | Routine language Ada: VARCHAR and NUMERIC support | |
F054 | | TIMESTAMP in DATE type precedence list | |
F121 | | Basic diagnostics management | |
F121-01 | | GET DIAGNOSTICS statement | |
F121-02 | | SET TRANSACTION statement: DIAGNOSTICS SIZE clause | |
F122 | | Enhanced diagnostics management | |
F123 | | All diagnostics | |
F263 | | Comma-separated predicates in simple CASE expression | |
F291 | | UNIQUE predicate | |
F301 | | CORRESPONDING in query expressions | |
F312 | | MERGE statement | consider INSERT ... ON CONFLICT DO UPDATE |
F313 | | Enhanced MERGE statement | |
F314 | | MERGE statement with DELETE branch | |
F341 | | Usage tables | no ROUTINE_*_USAGE tables |
F403 | | Partitioned joined tables | |
F404 | | Range variable for common column names | |
F451 | | Character set definition | |
F461 | | Named character sets | |
F492 | | Optional table constraint enforcement | |
F521 | | Assertions | |
F671 | | Subqueries in CHECK | intentionally omitted |
F673 | | Reads SQL-data routine invocations in CHECK constraints | |
F693 | | SQL-session and client module collations | |
F695 | | Translation support | |
F696 | | Additional translation documentation | |
F721 | | Deferrable constraints | foreign and unique keys only |
F741 | | Referential MATCH types | no partial match yet |
F812 | Core | Basic flagging | |
F813 | | Extended flagging | |
F821 | | Local table references | |
F831 | | Full cursor update | |
F831-01 | | Updatable scrollable cursors | |
F831-02 | | Updatable ordered cursors | |
F841 | | LIKE_REGEX predicate | |
F842 | | OCCURRENCES_REGEX function | |
F843 | | POSITION_REGEX function | |
F844 | | SUBSTRING_REGEX function | |
F845 | | TRANSLATE_REGEX function | |
F846 | | Octet support in regular expression operators | |
F847 | | Nonconstant regular expressions | |
F866 | | FETCH FIRST clause: PERCENT option | |
R010 | | Row pattern recognition: FROM clause | |
R020 | | Row pattern recognition: WINDOW clause | |
R030 | | Row pattern recognition: full aggregate support | |
S011 | Core | Distinct data types | |
S011-01 | Core | USER_DEFINED_TYPES view | |
S023 | | Basic structured types | |
S024 | | Enhanced structured types | |
S025 | | Final structured types | |
S026 | | Self-referencing structured types | |
S027 | | Create method by specific method name | |
S028 | | Permutable UDT options list | |
S041 | | Basic reference types | |
S043 | | Enhanced reference types | |
S051 | | Create table of type | partially supported |
S081 | | Subtables | |
S091 | | Basic array support | partially supported |
S091-01 | | Arrays of built-in data types | |
S091-02 | | Arrays of distinct types | |
S091-03 | | Array expressions | |
S094 | | Arrays of reference types | |
S097 | | Array element assignment | |
S151 | | Type predicate | |
S161 | | Subtype treatment | |
S162 | | Subtype treatment for references | |
S202 | | SQL-invoked routines on multisets | |
S231 | | Structured type locators | |
S232 | | Array locators | |
S233 | | Multiset locators | |
S241 | | Transform functions | |
S242 | | Alter transform statement | |
S251 | | User-defined orderings | |
S261 | | Specific type method | |
S271 | | Basic multiset support | |
S272 | | Multisets of user-defined types | |
S274 | | Multisets of reference types | |
S275 | | Advanced multiset support | |
S281 | | Nested collection types | |
S291 | | Unique constraint on entire row | |
S401 | | Distinct types based on array types | |
S402 | | Distinct types based on distinct types | |
S403 | | ARRAY_MAX_CARDINALITY | |
S404 | | TRIM_ARRAY | |
T011 | | Timestamp in Information Schema | |
T021 | | BINARY and VARBINARY data types | |
T022 | | Advanced support for BINARY and VARBINARY data types | |
T023 | | Compound binary literal | |
T024 | | Spaces in binary literals | |
T041 | | Basic LOB data type support | |
T041-01 | | BLOB data type | |
T041-02 | | CLOB data type | |
T041-03 | | POSITION, LENGTH, LOWER, TRIM, UPPER, and SUBSTRING functions for LOB data types | |
T041-04 | | Concatenation of LOB data types | |
T041-05 | | LOB locator: non-holdable | |
T042 | | Extended LOB data type support | |
T043 | | Multiplier T | |
T044 | | Multiplier P | |
T051 | | Row types | |
T053 | | Explicit aliases for all-fields reference | |
T061 | | UCS support | |
T076 | | DECFLOAT data type | |
T101 | | Enhanced nullability determination | |
T111 | | Updatable joins, unions, and columns | |
T175 | | Generated columns | mostly supported |
T176 | | Sequence generator support | supported except for NEXT VALUE FOR |
T180 | | System-versioned tables | |
T181 | | Application-time period tables | |
T211 | | Basic trigger capability | |
T211-06 | | Support for run-time rules for the interaction of triggers and constraints | |
T211-08 | | Multiple triggers for the same event are executed in the order in which they were created in the catalog | intentionally omitted |
T251 | | SET TRANSACTION statement: LOCAL option | |
T272 | | Enhanced savepoint management | |
T301 | | Functional dependencies | partially supported |
T321 | Core | Basic SQL-invoked routines | |
T321-05 | Core | RETURN statement | |
T322 | | Declared data type attributes | |
T324 | | Explicit security for SQL routines | |
T326 | | Table functions | |
T332 | | Extended roles | mostly supported |
T434 | | GROUP BY DISTINCT | |
T471 | | Result sets return value | |
T472 | | DESCRIBE CURSOR | |
T495 | | Combined data change and retrieval | different syntax |
T502 | | Period predicates | |
T511 | | Transaction counts | |
T522 | | Default values for IN parameters of SQL-invoked procedures | supported except DEFAULT key word in invocation |
T561 | | Holdable locators | |
T571 | | Array-returning external SQL-invoked functions | |
T572 | | Multiset-returning external SQL-invoked functions | |
T601 | | Local cursor references | |
T616 | | Null treatment option for LEAD and LAG functions | |
T618 | | NTH_VALUE function | function exists, but some options missing |
T619 | | Nested window functions | |
T625 | | LISTAGG | |
T641 | | Multiple column assignment | only some syntax variants supported |
T652 | | SQL-dynamic statements in SQL routines | |
T654 | | SQL-dynamic statements in external routines | |
T811 | | Basic SQL/JSON constructor functions | |
T812 | | SQL/JSON: JSON_OBJECTAGG | |
T813 | | SQL/JSON: JSON_ARRAYAGG with ORDER BY | |
T814 | | Colon in JSON_OBJECT or JSON_OBJECTAGG | |
T821 | | Basic SQL/JSON query operators | |
T822 | | SQL/JSON: IS JSON WITH UNIQUE KEYS predicate | |
T823 | | SQL/JSON: PASSING clause | |
T824 | | JSON_TABLE: specific PLAN clause | |
T825 | | SQL/JSON: ON EMPTY and ON ERROR clauses | |
T826 | | General value expression in ON ERROR or ON EMPTY clauses | |
T827 | | JSON_TABLE: sibling NESTED COLUMNS clauses | |
T828 | | JSON_QUERY | |
T829 | | JSON_QUERY: array wrapper options | |
T830 | | Enforcing unique keys in SQL/JSON constructor functions | |
T838 | | JSON_TABLE: PLAN DEFAULT clause | |
T839 | | Formatted cast of datetimes to/from character strings | |
M001 | | Datalinks | |
M002 | | Datalinks via SQL/CLI | |
M003 | | Datalinks via Embedded SQL | |
M004 | | Foreign data support | partially supported |
M005 | | Foreign schema support | |
M006 | | GetSQLString routine | |
M007 | | TransmitRequest | |
M009 | | GetOpts and GetStatistics routines | |
M010 | | Foreign data wrapper support | different API |
M011 | | Datalinks via Ada | |
M012 | | Datalinks via C | |
M013 | | Datalinks via COBOL | |
M014 | | Datalinks via Fortran | |
M015 | | Datalinks via M | |
M016 | | Datalinks via Pascal | |
M017 | | Datalinks via PL/I | |
M018 | | Foreign data wrapper interface routines in Ada | |
M019 | | Foreign data wrapper interface routines in C | different API |
M020 | | Foreign data wrapper interface routines in COBOL | |
M021 | | Foreign data wrapper interface routines in Fortran | |
M022 | | Foreign data wrapper interface routines in MUMPS | |
M023 | | Foreign data wrapper interface routines in Pascal | |
M024 | | Foreign data wrapper interface routines in PL/I | |
M030 | | SQL-server foreign data support | |
M031 | | Foreign data wrapper general routines | |
X012 | | Multisets of XML type | |
X013 | | Distinct types of XML type | |
X015 | | Fields of XML type | |
X025 | | XMLCast | |
X030 | | XMLDocument | |
X038 | | XMLText | |
X065 | | XMLParse: BLOB input and CONTENT option | |
X066 | | XMLParse: BLOB input and DOCUMENT option | |
X068 | | XMLSerialize: BOM | |
X069 | | XMLSerialize: INDENT | |
X073 | | XMLSerialize: BLOB serialization and CONTENT option | |
X074 | | XMLSerialize: BLOB serialization and DOCUMENT option | |
X075 | | XMLSerialize: BLOB serialization | |
X076 | | XMLSerialize: VERSION | |
X077 | | XMLSerialize: explicit ENCODING option | |
X078 | | XMLSerialize: explicit XML declaration | |
X080 | | Namespaces in XML publishing | |
X081 | | Query-level XML namespace declarations | |
X082 | | XML namespace declarations in DML | |
X083 | | XML namespace declarations in DDL | |
X084 | | XML namespace declarations in compound statements | |
X085 | | Predefined namespace prefixes | |
X086 | | XML namespace declarations in XMLTable | |
X091 | | XML content predicate | |
X096 | | XMLExists | XPath 1.0 only |
X100 | | Host language support for XML: CONTENT option | |
X101 | | Host language support for XML: DOCUMENT option | |
X110 | | Host language support for XML: VARCHAR mapping | |
X111 | | Host language support for XML: CLOB mapping | |
X112 | | Host language support for XML: BLOB mapping | |
X113 | | Host language support for XML: STRIP WHITESPACE option | |
X114 | | Host language support for XML: PRESERVE WHITESPACE option | |
X131 | | Query-level XMLBINARY clause | |
X132 | | XMLBINARY clause in DML | |
X133 | | XMLBINARY clause in DDL | |
X134 | | XMLBINARY clause in compound statements | |
X135 | | XMLBINARY clause in subqueries | |
X141 | | IS VALID predicate: data-driven case | |
X142 | | IS VALID predicate: ACCORDING TO clause | |
X143 | | IS VALID predicate: ELEMENT clause | |
X144 | | IS VALID predicate: schema location | |
X145 | | IS VALID predicate outside check constraints | |
X151 | | IS VALID predicate with DOCUMENT option | |
X152 | | IS VALID predicate with CONTENT option | |
X153 | | IS VALID predicate with SEQUENCE option | |
X155 | | IS VALID predicate: NAMESPACE without ELEMENT clause | |
X157 | | IS VALID predicate: NO NAMESPACE with ELEMENT clause | |
X160 | | Basic Information Schema for registered XML Schemas | |
X161 | | Advanced Information Schema for registered XML Schemas | |
X170 | | XML null handling options | |
X171 | | NIL ON NO CONTENT option | |
X181 | | XML(DOCUMENT(UNTYPED)) type | |
X182 | | XML(DOCUMENT(ANY)) type | |
X190 | | XML(SEQUENCE) type | |
X191 | | XML(DOCUMENT(XMLSCHEMA)) type | |
X192 | | XML(CONTENT(XMLSCHEMA)) type | |
X200 | | XMLQuery | |
X201 | | XMLQuery: RETURNING CONTENT | |
X202 | | XMLQuery: RETURNING SEQUENCE | |
X203 | | XMLQuery: passing a context item | |
X204 | | XMLQuery: initializing an XQuery variable | |
X205 | | XMLQuery: EMPTY ON EMPTY option | |
X206 | | XMLQuery: NULL ON EMPTY option | |
X211 | | XML 1.1 support | |
X222 | | XML passing mechanism BY REF | parser accepts BY REF but ignores it; passing is always BY VALUE |
X231 | | XML(CONTENT(UNTYPED)) type | |
X232 | | XML(CONTENT(ANY)) type | |
X241 | | RETURNING CONTENT in XML publishing | |
X242 | | RETURNING SEQUENCE in XML publishing | |
X251 | | Persistent XML values of XML(DOCUMENT(UNTYPED)) type | |
X252 | | Persistent XML values of XML(DOCUMENT(ANY)) type | |
X253 | | Persistent XML values of XML(CONTENT(UNTYPED)) type | |
X254 | | Persistent XML values of XML(CONTENT(ANY)) type | |
X255 | | Persistent XML values of XML(SEQUENCE) type | |
X256 | | Persistent XML values of XML(DOCUMENT(XMLSCHEMA)) type | |
X257 | | Persistent XML values of XML(CONTENT(XMLSCHEMA)) type | |
X260 | | XML type: ELEMENT clause | |
X261 | | XML type: NAMESPACE without ELEMENT clause | |
X263 | | XML type: NO NAMESPACE with ELEMENT clause | |
X264 | | XML type: schema location | |
X271 | | XMLValidate: data-driven case | |
X272 | | XMLValidate: ACCORDING TO clause | |
X273 | | XMLValidate: ELEMENT clause | |
X274 | | XMLValidate: schema location | |
X281 | | XMLValidate with DOCUMENT option | |
X282 | | XMLValidate with CONTENT option | |
X283 | | XMLValidate with SEQUENCE option | |
X284 | | XMLValidate: NAMESPACE without ELEMENT clause | |
X286 | | XMLValidate: NO NAMESPACE with ELEMENT clause | |
X300 | | XMLTable | XPath 1.0 only |
X305 | | XMLTable: initializing an XQuery variable | |
Date/time input strings are interpreted using the following procedure (all delimiters are plain ASCII characters):
Break the input into tokens at the delimiters, categorizing each token as a string, time, time zone, or number.
If a numeric token contains a colon (:), it is a time string; everything it contains is part of the time information.
If a numeric token contains a dash (-), slash (/), or two or more dots (.), it is a date string, possibly with a textual month name. However, if a date delimiter has already been seen, the token is instead interpreted as a time zone name (for example, Asia/Taipei).
If the token is entirely numeric, it may be an ISO 8601 style date (e.g., 19990113 for January 13, 1999) or time (e.g., 141516 for 14:15:16).
If the token starts with a plus (+) or minus (-) sign, it is either a numeric time zone or a special field.
If the token is a text string, match it against the following rules:
Do a binary-search table lookup for the token as a time zone abbreviation.
If not found, search for special words (e.g., today), days of the week (e.g., Thursday), month names (e.g., January), or noise words (e.g., at, on).
If still not found, raise an error.
When the token is a number or number field:
If there are eight or six digits, and no other date fields have been previously read, it is interpreted as a “concatenated date” (e.g., 19990118 or 990118), with the fields taken as YYYYMMDD or YYMMDD.
If the token is three digits and a year has already been read, it is interpreted as the day of the year.
If the token is four or six digits and a year has already been read, it is interpreted as a time (HHMM or HHMMSS).
If the token is three or more digits and no date fields have yet been found, it is interpreted as the year. (This forces yy-mm-dd ordering of the remaining date fields.)
Otherwise, the date field ordering is assumed to follow the DateStyle setting: mm-dd-yy, dd-mm-yy, or yy-mm-dd. An error is raised if a month or day field cannot be matched to a valid value.
If BC has been specified, the value actually stored is the negative of the input year plus one. (There is no year zero in the Gregorian calendar, so numerically 1 BC becomes year zero.)
If BC was not specified, and the year field was two digits long, the year is adjusted to four digits: if the value is less than 70, add 2000; otherwise add 1900. (See the example following this list.)
Gregorian years AD 1–99 can be entered by padding the year with zeros to four digits (e.g., 0099 is AD 99).
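An illustrative session (assuming a DateStyle with MDY field ordering):

    SET datestyle = 'ISO, MDY';
    SELECT '01-01-69'::date;   -- 2069-01-01 (69 is less than 70, so 2000 is added)
    SELECT '01-01-70'::date;   -- 1970-01-01 (70 is not less than 70, so 1900 is added)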
The Julian Date system is a method for numbering days. It is unrelated to the Julian calendar, though it is confusingly named similarly to that calendar. The Julian Date system was invented by the French scholar Joseph Justus Scaliger (1540–1609) and probably takes its name from Scaliger's father, the Italian scholar Julius Caesar Scaliger (1484–1558).
In the Julian Date system, each day has a sequential number, starting from JD 0 (which is sometimes called the Julian Date). JD 0 corresponds to 1 January 4713 BC in the Julian calendar, or 24 November 4714 BC in the Gregorian calendar. Julian Date counting is most often used by astronomers for labeling their nightly observations, and therefore a date runs from noon UTC to the next noon UTC, rather than from midnight to midnight: JD 0 designates the 24 hours from noon UTC on 24 November 4714 BC to noon UTC on 25 November 4714 BC.
Although PostgreSQL supports Julian Date notation for input and output of dates (and also uses Julian dates for some internal datetime calculations), it does not observe the nicety of having dates run from noon to noon. PostgreSQL treats a Julian Date as running from local midnight to local midnight, the same as a normal date.
This definition does, however, provide a way to obtain the astronomical definition when you need it: do the arithmetic in time zone UTC+12. For example,
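an illustrative query (recall that the POSIX-style name UTC+12 denotes a zone twelve hours west of Greenwich):

    SELECT extract(julian from '2021-06-23 7:00:00-04'::timestamptz
                   AT TIME ZONE 'UTC+12');
    -- approximately 2459388.9583333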
This section attempts to outline to what extent PostgreSQL conforms to the current SQL standard. The following information is not a full statement of conformance, but it presents the main topics in as much detail as is both reasonable and useful for users.
The formal name of the SQL standard is ISO/IEC 9075 “Database Language SQL”. A revised version of the standard is released from time to time; the most recent update appeared in 2016. The 2016 version is referred to as ISO/IEC 9075:2016, or simply as SQL:2016. The versions prior to that were SQL:2011, SQL:2008, SQL:2006, SQL:2003, SQL:1999, and SQL-92. Each version replaces the previous one, so claims of conformance to earlier versions have no official merit. PostgreSQL development aims for conformance with the latest official version of the standard where such conformance does not contradict traditional features or common sense. Many of the features required by the SQL standard are supported, though sometimes with slightly differing syntax or function. Further moves towards conformance can be expected over time.
SQL-92 defined three feature sets for conformance: Entry, Intermediate, and Full. Most database management systems claiming SQL standard conformance were conforming at only the Entry level, since the entire set of features in the Intermediate and Full levels was either too voluminous or in conflict with legacy behaviors.
Starting with SQL:1999, the SQL standard defines a large set of individual features rather than the ineffectively broad three levels found in SQL-92. A large subset of these features represents the “Core” features, which every conforming SQL implementation must supply. The rest of the features are purely optional.
The standard versions beginning with SQL:2003 are also split into a number of parts. Each is known by a shorthand name. Note that these parts are not consecutively numbered.
ISO/IEC 9075-1 Framework (SQL/Framework)
ISO/IEC 9075-2 Foundation (SQL/Foundation)
ISO/IEC 9075-3 Call Level Interface (SQL/CLI)
ISO/IEC 9075-4 Persistent Stored Modules (SQL/PSM)
ISO/IEC 9075-9 Management of External Data (SQL/MED)
ISO/IEC 9075-10 Object Language Bindings (SQL/OLB)
ISO/IEC 9075-11 Information and Definition Schemas (SQL/Schemata)
ISO/IEC 9075-13 Routines and Types using the Java Language (SQL/JRT)
ISO/IEC 9075-14 XML-related specifications (SQL/XML)
ISO/IEC 9075-15 Multi-dimensional arrays (SQL/MDA)
The PostgreSQL core covers parts 1, 2, 9, 11, and 14. Part 3 is covered by the ODBC driver, and part 13 is covered by the PL/Java plug-in, but exact conformance is currently not being verified for these components. There are currently no implementations of parts 4, 10, and 15 for PostgreSQL.
PostgreSQL supports most of the major features of SQL:2016. Out of 177 mandatory features required for full Core conformance, PostgreSQL conforms to at least 170. In addition, there is a long list of supported optional features. It might be worth noting that at the time of writing, no current version of any database management system claims full conformance to Core SQL:2016.
In the following two sections, we provide a list of those features that PostgreSQL supports, followed by a list of the features defined in SQL:2016 which are not yet supported in PostgreSQL. Both of these lists are approximate: There might be minor details that are nonconforming for a feature that is listed as supported, and large parts of an unsupported feature might in fact be implemented. The main body of the documentation always contains the most accurate information about what does and does not work.
Feature codes containing a hyphen are subfeatures. Therefore, if a particular subfeature is not supported, the main feature is listed as unsupported even if some other subfeatures are supported.
Significant revisions to the XML-related specifications in ISO/IEC 9075-14 (SQL/XML) were introduced with SQL:2006. PostgreSQL's implementation of the XML data type and related functions largely follows the earlier 2003 edition, with some borrowing from later editions. In particular:
Where the current standard provides a family of XML data types to hold “document” or “content” in untyped or XML Schema-typed variants, and a type XML(SEQUENCE)
to hold arbitrary pieces of XML content, PostgreSQL provides the single xml
type, which can hold “document” or “content”. There is no equivalent of the standard's “sequence” type.
PostgreSQL provides two functions introduced in SQL:2006, but in variants that use the XPath 1.0 language, rather than XML Query as specified for them in the standard.
This section presents some of the resulting differences you may encounter.
The PostgreSQL-specific functions xpath()
and xpath_exists()
query XML documents using the XPath language. PostgreSQL also provides XPath-only variants of the standard functions XMLEXISTS
and XMLTABLE
, which officially use the XQuery language. For all of these functions, PostgreSQL relies on the libxml2 library, which provides only XPath 1.0.
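As an illustration of the XPath-only variants, an XMLEXISTS call evaluates an XPath 1.0 expression against a document and returns a boolean (the XML literal here is arbitrary):

```sql
SELECT xmlexists('//town[text() = ''Toronto'']'
                 PASSING BY VALUE '<towns><town>Toronto</town><town>Ottawa</town></towns>');
-- returns true
```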
There is a strong connection between the XQuery language and XPath versions 2.0 and later: any expression that is syntactically valid and executes successfully in both produces the same result (with a minor exception for expressions containing numeric character references or predefined entity references, which XQuery replaces with the corresponding character while XPath leaves them alone). But there is no such connection between these languages and XPath 1.0; it was an earlier language and differs in many respects.
There are two categories of limitation to keep in mind: the restriction from XQuery to XPath for the functions specified in the SQL standard, and the restriction of XPath to version 1.0 for both the standard and the PostgreSQL-specific functions.
Features of XQuery beyond those of XPath include:
XQuery expressions can construct and return new XML nodes, in addition to all possible XPath values. XPath can create and return values of the atomic types (numbers, strings, and so on) but can only return XML nodes that were already present in documents supplied as input to the expression.
XQuery has control constructs for iteration, sorting, and grouping.
XQuery allows declaration and use of local functions.
Recent XPath versions begin to offer capabilities overlapping with these (such as functional-style for-each
and sort
, anonymous functions, and parse-xml
to create a node from a string), but such features were not available before XPath 3.0.
For developers familiar with XQuery and XPath 2.0 or later, XPath 1.0 presents a number of differences to contend with:
The fundamental type of an XQuery/XPath expression, the sequence
, which can contain XML nodes, atomic values, or both, does not exist in XPath 1.0. A 1.0 expression can only produce a node-set (containing zero or more XML nodes), or a single atomic value.
Unlike an XQuery/XPath sequence, which can contain any desired items in any desired order, an XPath 1.0 node-set has no guaranteed order and, like any set, does not allow multiple appearances of the same item.
The libxml2 library does seem to always return node-sets to PostgreSQL with their members in the same relative order they had in the input document. Its documentation does not commit to this behavior, and an XPath 1.0 expression cannot control it.
While XQuery/XPath provides all of the types defined in XML Schema and many operators and functions over those types, XPath 1.0 has only node-sets and the three atomic types boolean
, double
, and string
.
XPath 1.0 has no conditional operator. An XQuery/XPath expression such as if ( hat ) then hat/@size else "no hat"
has no XPath 1.0 equivalent.
XPath 1.0 has no ordering comparison operator for strings. Both "cat" < "dog"
and "cat" > "dog"
are false, because each is a numeric comparison of two NaN
s. In contrast, =
and !=
do compare the strings as strings.
XPath 1.0 blurs the distinction between value comparisons and general comparisons as XQuery/XPath define them. Both sale/@hatsize = 7
and sale/@customer = "alice"
are existentially quantified comparisons, true if there is any sale
with the given value for the attribute, but sale/@taxable = false()
is a value comparison to the effective boolean value of a whole node-set. It is true only if no sale
has a taxable
attribute at all.
In the XQuery/XPath data model, a document node can have either document form (i.e., exactly one top-level element, with only comments and processing instructions outside of it) or content form (with those constraints relaxed). Its equivalent in XPath 1.0, the root node, can only be in document form. This is part of the reason an xml
value passed as the context item to any PostgreSQL XPath-based function must be in document form.
In SQL:2006 and later, both directions of conversion between standard SQL data types and the XML Schema types are specified precisely. However, the rules are expressed using the types and semantics of XQuery/XPath, and have no direct application to the different data model of XPath 1.0.
When PostgreSQL maps SQL data values to XML (as in xmlelement
), or XML to SQL (as in the output columns of xmltable
), except for a few cases treated specially, PostgreSQL simply assumes that the XML data type's XPath 1.0 string form will be valid as the text-input form of the SQL datatype, and conversely. This rule has the virtue of simplicity while producing, for many data types, results similar to the mappings specified in the standard.
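For instance, a small sketch of this text-based mapping (the element and attribute names are made up for illustration):

```sql
-- The SQL numeric value's text form becomes the XML content verbatim.
SELECT xmlelement(name price, xmlattributes('USD' AS currency), 12.50);
-- Result: <price currency="USD">12.50</price>
```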
This section concerns limits that are not inherent in the libxml2 library, but apply to the current implementation in PostgreSQL.
Only BY VALUE Passing Mechanism Is Supported
The SQL standard defines two passing mechanisms that apply when passing an XML argument from SQL to an XML function or receiving a result: BY REF
, in which a particular XML value retains its node identity, and BY VALUE
, in which the content of the XML is passed but node identity is not preserved. A mechanism can be specified before a list of parameters, as the default mechanism for all of them, or after any parameter, to override the default.
To illustrate the difference, if x
is an XML value, these two queries in an SQL:2006 environment would produce true and false, respectively:
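Since XMLQUERY with XQuery is not implemented in PostgreSQL, the following is only a sketch of the kind of queries involved, and the exact form in a conforming implementation may differ. The XQuery "is" operator compares node identity, so passing the same value twice BY REF binds both parameters to the same node, while BY VALUE binds them to distinct copies:

```sql
SELECT XMLQUERY('$a is $b' PASSING BY REF   x AS a, x AS b NULL ON EMPTY);  -- true
SELECT XMLQUERY('$a is $b' PASSING BY VALUE x AS a, x AS b NULL ON EMPTY);  -- false
```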
PostgreSQL will accept BY VALUE
or BY REF
in an XMLEXISTS
or XMLTABLE
construct, but it ignores them. The xml
data type holds a character-string serialized representation, so there is no node identity to preserve, and passing is always effectively BY VALUE
.
The XPath-based functions support passing one parameter to serve as the XPath expression's context item, but do not support passing additional values to be available to the expression as named parameters.
No XML(SEQUENCE) Type
The PostgreSQL xml
data type can only hold a value in DOCUMENT
or CONTENT
form. An XQuery/XPath expression context item must be a single XML node or atomic value, but XPath 1.0 further restricts it to be only an XML node, and has no node type allowing CONTENT
. The upshot is that a well-formed DOCUMENT
is the only form of XML value that PostgreSQL can supply as an XPath context item.
All messages emitted by the PostgreSQL server are assigned five-character error codes that follow the SQL standard's conventions for SQLSTATE codes. Applications should base their behavior on the error code rather than on the textual error message. The error codes are only occasionally updated across PostgreSQL releases, and their definitions are not affected by localization of the message text. Note that some of the error codes are defined by the SQL standard, while others are defined additionally to cover situations the standard does not address.
According to the standard, the first two characters of an error code denote a class of errors, while the last three characters indicate a specific condition within that class. Thus, an application that does not recognize a specific error code can still infer roughly what has happened from the class code.
The table below lists all the error codes defined in PostgreSQL 10. (Some are not actually used at present, but are still defined per the SQL standard.) Each error class has a "standard" error condition whose last three characters are 000; it is used only for error conditions within that class that do not have a more specific code of their own.
The symbol shown in the "Condition Name" column can be used in PL/pgSQL. Condition names can be written in either upper or lower case. (Note that PL/pgSQL does not support error classes 00, 01, and 02.)
For some types of errors, the server reports the name of the database object (table, table column, data type, or constraint) associated with the error; for example, the name of the constraint that caused a unique_violation error. Such names are supplied in separate fields of the error report message, so applications need not try to extract them from the human-readable message text. Before PostgreSQL 9.3, this reporting was limited to SQLSTATE class 23 (integrity constraint violation); it is intended to be extended to other classes where possible.
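For example, a PL/pgSQL exception clause can refer to a condition name instead of the raw SQLSTATE value; a minimal sketch (the table is created only for the demonstration):

```sql
CREATE TEMP TABLE demo (id int PRIMARY KEY);
INSERT INTO demo VALUES (1);

DO $$
BEGIN
    INSERT INTO demo VALUES (1);          -- violates the primary key
EXCEPTION
    WHEN unique_violation THEN            -- condition name for SQLSTATE 23505
        RAISE NOTICE 'duplicate key ignored (SQLSTATE %)', SQLSTATE;
END
$$;
```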
Error Code | Condition Name |
---|---|
Release date: 2022-11-10
This release contains a variety of fixes from 15.0. For information about new features in major release 15, see .
A dump/restore is not required for those running 15.X.
However, if you regularly create and drop tables exceeding 1GB, see the first changelog entry below.
Fix failure to remove non-first segments of large tables (Tom Lane)
PostgreSQL splits large tables into multiple files (normally with 1GB per file). The logic for dropping a table was broken and would miss removing all but the first such file, in two cases: drops of temporary tables and WAL replay of drops of regular tables. Applications that routinely create multi-gigabyte temporary tables could suffer significant disk space leakage.
Orphaned temporary-table files are removed during postmaster start, so the mere act of updating to 15.1 is sufficient to clear any leaked temporary-table storage. However, if you suffered any database crashes while using 15.0, and there might have been large tables dropped just before such crashes, it's advisable to check the database directories for files named according to the pattern NNNN.NN. If there is no matching file named just NNNN (without the .NN suffix), these files should be removed manually.
Fix handling of DEFAULT
tokens that appear in a multi-row VALUES
clause of an INSERT
on an updatable view (Tom Lane)
This oversight could lead to “cache lookup failed for type” errors, or in older branches even to crashes.
Disallow rules named _RETURN
that are not ON SELECT
(Tom Lane)
This avoids confusion between a view's ON SELECT
rule and any other rules it may have.
Avoid failure in EXPLAIN VERBOSE
for a query using SEARCH BREADTH FIRST
with constant initial values (Tom Lane)
Prevent use of MERGE
on a partitioned table with foreign-table partitions (Álvaro Herrera)
The case isn't supported, and previously threw an incomprehensible error.
Fix construction of per-partition foreign key constraints while doing ALTER TABLE ATTACH PARTITION
(Jehan-Guillaume de Rorthais, Álvaro Herrera)
Previously, incorrect or duplicate constraints could be constructed for the newly-added partition.
Fix planner failure with extended statistics on partitioned or inherited tables (Richard Guo, Justin Pryzby)
Some cases failed with “cache lookup failed for statistics object”.
Fix mis-ordering of WAL operations in fast insert path for GIN indexes (Matthias van de Meent, Zhang Mingli)
This mistake is not known to have any negative consequences within core PostgreSQL, but it did cause issues for some extensions.
Fix bugs in logical decoding when replay starts from a point between the beginning of a transaction and the beginning of its subtransaction (Masahiko Sawada, Kuroda Hayato)
These errors could lead to assertion failures in debug builds, and otherwise to memory leaks.
Accept interrupts in more places during logical decoding (Amit Kapila, Masahiko Sawada)
This ameliorates problems with slow shutdown of replication workers.
Prevent attempts to replicate into a foreign-table partition in replication workers (Shi Yu, Tom Lane)
Although partitioned tables can have foreign tables as partitions, replicating into such a partition isn't currently supported. The logical replication worker process would crash if it was attempted. Now, an error is thrown.
Avoid crash after function syntax error in replication workers (Maxim Orlov, Anton Melnikov, Masahiko Sawada, Tom Lane)
If a syntax error occurred in a SQL-language or PL/pgSQL-language CREATE FUNCTION
or DO
command executed in a logical replication worker, the worker process would crash with a null pointer dereference or assertion failure.
Avoid double call of the shutdown callback of an archiver module (Nathan Bossart, Bharath Rupireddy)
Add plan-time check for attempted access to a table that has no table access method (Tom Lane)
This prevents a crash in some catalog-corruption scenarios, for example use of a view whose ON SELECT
rule is missing.
Prevent postmaster crash when shared-memory state is corrupted (Tom Lane)
The postmaster process is supposed to survive and initiate a database restart if shared memory becomes corrupted, but one bit of code was being insufficiently cautious about that.
In libpq, handle single-row mode correctly when pipelining (Denis Laxalde)
The single-row flag was not reset at the correct time if pipeline mode was also active.
Fix psql's exit status when a command-line query is canceled (Peter Eisentraut)
psql -c
query
would exit successfully if the query was canceled. Fix it to exit with nonzero status, as in other error cases.
Allow cross-platform tablespace relocation in pg_basebackup (Robert Haas)
Allow the remote path in --tablespace-mapping
to be either a Unix-style or Windows-style absolute path, since the source server could be on a different OS than the local system.
Fix pg_dump's failure to dump comments attached to some CHECK
constraints (Tom Lane)
Fix CREATE DATABASE
to allow its oid
parameter to exceed 2^31 (Tom Lane)
This oversight prevented pg_upgrade from succeeding when the source installation contained databases with OIDs larger than that.
In pg_stat_statements, fix access to already-freed memory (zhaoqigui)
This occurred if pg_stat_statements tracked a ROLLBACK
command issued via extended query protocol. In debug builds it consistently led to an assertion failure. In production builds there would often be no visible ill effect; but if the freed memory had already been reused, the likely result would be to store garbage for the query string.
Fix incompatibilities with LLVM 15 (Thomas Munro, Andres Freund)
Allow use of __sync_lock_test_and_set()
for spinlocks on any machine (Tom Lane)
This eases porting to new machine architectures, at least if you're using a compiler that supports this GCC builtin function.
Rename symbol REF
to REF_P
to avoid compile failure on recent macOS (Tom Lane)
Avoid using sprintf
, to avoid compile-time deprecation warnings (Tom Lane)
Update time zone data files to tzdata release 2022f for DST law changes in Chile, Fiji, Iran, Jordan, Mexico, Palestine, and Syria, plus historical corrections for Chile, Crimea, Iran, and Mexico.
Also, the Europe/Kiev zone has been renamed to Europe/Kyiv. Also, the following zones have been merged into nearby, more-populous zones whose clocks have agreed with them since 1970: Antarctica/Vostok, Asia/Brunei, Asia/Kuala_Lumpur, Atlantic/Reykjavik, Europe/Amsterdam, Europe/Copenhagen, Europe/Luxembourg, Europe/Monaco, Europe/Oslo, Europe/Stockholm, Indian/Christmas, Indian/Cocos, Indian/Kerguelen, Indian/Mahe, Indian/Reunion, Pacific/Chuuk, Pacific/Funafuti, Pacific/Majuro, Pacific/Pohnpei, Pacific/Wake and Pacific/Wallis. (This indirectly affects zones that were already links to one of these: Arctic/Longyearbyen, Atlantic/Jan_Mayen, Iceland, Pacific/Ponape, Pacific/Truk, and Pacific/Yap.) America/Nipigon, America/Rainy_River, America/Thunder_Bay, Europe/Uzhgorod, and Europe/Zaporozhye were also merged into nearby zones after discovering that their claimed post-1970 differences from those zones seem to have been errors. In all these cases, the previous zone name remains as an alias; but the actual data is that of the zone that was merged into.
These zone mergers result in loss of pre-1970 timezone history for the merged zones, which may be troublesome for applications expecting consistency of timestamptz
display. As an example, the stored value 1944-06-01 12:00 UTC
would previously display as 1944-06-01 13:00:00+01
if the Europe/Stockholm zone is selected, but now it will read out as 1944-06-01 14:00:00+02
.
It is possible to build the time zone data files with options that will restore the older zone data, but that choice also inserts a lot of other old (and typically poorly-attested) zone data, resulting in more total changes from the previous release than accepting these upstream changes does. PostgreSQL has chosen to ship the tzdb data as-recommended, and so far as we are aware most major operating system distributions are doing likewise. However, if these changes cause significant problems for your application, a possible solution is to install a local build of the time zone data files using tzdb's backwards-compatibility options (see their PACKRATDATA
and PACKRATLIST
options).
The release notes contain the significant changes in each PostgreSQL release, with major features and migration issues listed at the top. The release notes do not contain changes that affect only a few users or changes that are internal and therefore not user-visible. For example, the optimizer is improved in almost every release, but the improvements are usually observed by users as simply faster queries.
A complete list of changes for each release can be obtained by viewing the Git logs for each release. The commit logs record all source code changes as well. There is also a web interface that shows changes to specific files.
The name appearing next to each item represents the major developer for that item. Of course all changes involve community discussion and patch review, so each item is truly a community effort.
The SQL standard states that “Within the definition of a ‘datetime literal’, the ‘datetime values’ are constrained by the natural rules for dates and times according to the Gregorian calendar”. PostgreSQL follows the SQL standard's lead by counting dates exclusively in the Gregorian calendar, even for years before that calendar was in use. This rule is known as the proleptic Gregorian calendar.
The Julian calendar was introduced by Julius Caesar in 45 BC. It was in common use in the Western world until the year 1582, when countries started changing to the Gregorian calendar. In the Julian calendar, the tropical year is approximated as 365 1/4 days = 365.25 days. This gives an error of about 1 day in 128 years.
The accumulating calendar error prompted Pope Gregory XIII to reform the calendar in accordance with instructions from the Council of Trent. In the Gregorian calendar, the tropical year is approximated as 365 + 97 / 400 days = 365.2425 days. Thus it takes approximately 3300 years for the tropical year to shift one day with respect to the Gregorian calendar.
The approximation 365+97/400 is achieved by having 97 leap years every 400 years, using the following rules:
Every year divisible by 4 is a leap year.
However, every year divisible by 100 is not a leap year.
However, every year divisible by 400 is a leap year after all.
So, 1700, 1800, 1900, 2100, and 2200 are not leap years. But 1600, 2000, and 2400 are leap years. By contrast, in the older Julian calendar all years divisible by 4 are leap years.
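Because PostgreSQL applies these Gregorian rules to all dates, the behavior can be checked directly; a quick illustration (expected outcomes shown as comments):

```sql
SELECT '2000-02-29'::date;   -- valid: 2000 is divisible by 400
SELECT '1900-02-29'::date;   -- fails: 1900 is divisible by 100 but not by 400
```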
The papal bull of February 1582 decreed that 10 days should be dropped from October 1582 so that 15 October should follow immediately after 4 October. This was observed in Italy, Poland, Portugal, and Spain. Other Catholic countries followed shortly after, but Protestant countries were reluctant to change, and the Greek Orthodox countries didn't change until the start of the 20th century. The reform was observed by Great Britain and its dominions (including what is now the USA) in 1752. Thus 2 September 1752 was followed by 14 September 1752. This is why Unix systems have the cal program produce the following:
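On such a system, cal 9 1752 produces output along these lines (exact formatting varies between cal implementations):

```
$ cal 9 1752
   September 1752
 S  M Tu  W Th  F  S
       1  2 14 15 16
17 18 19 20 21 22 23
24 25 26 27 28 29 30
```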
But, of course, this calendar is only valid for Great Britain and dominions, not other places. Since it would be difficult and confusing to try to track the actual calendars that were in use in various places at various times, PostgreSQL does not try, but rather follows the Gregorian calendar rules for all dates, even though this method is not historically accurate.
Different calendars have been developed in various parts of the world, many predating the Gregorian system. For example, the beginnings of the Chinese calendar can be traced back to the 14th century BC. Legend has it that the Emperor Huangdi invented that calendar in 2637 BC. The People's Republic of China uses the Gregorian calendar for civil purposes. The Chinese calendar is used for determining festivals.
The Julian Date system is another type of calendar, unrelated to the Julian calendar though it is confusingly named similarly to that calendar. The Julian Date system was invented by the French scholar Joseph Justus Scaliger (1540–1609) and probably takes its name from Scaliger's father, the Italian scholar Julius Caesar Scaliger (1484–1558). In the Julian Date system, each day has a sequential number, starting from JD 0 (which is sometimes called the Julian Date). JD 0 corresponds to 1 January 4713 BC in the Julian calendar, or 24 November 4714 BC in the Gregorian calendar. Julian Date counting is most often used by astronomers for labeling their nightly observations, and therefore a date runs from noon UTC to the next noon UTC, rather than from midnight to midnight: JD 0 designates the 24 hours from noon UTC on 24 November 4714 BC to noon UTC on 25 November 4714 BC.
Although PostgreSQL supports Julian Date notation for input and output of dates (and also uses Julian dates for some internal datetime calculations), it does not observe the nicety of having dates run from noon to noon. PostgreSQL treats a Julian Date as running from midnight to midnight.
Identifier | Core? | Description | Comment |
---|---|---|---|
adminpack provides a number of support functions which pgAdmin and other administration and management tools can use to provide additional functionality, such as remote management of server log files. Use of all these functions is restricted to superusers by default, but may be allowed to other users with the GRANT command.
The functions shown in the table below provide write access to files on the machine hosting the server. (See also the functions providing read-only access.) Only files within the database cluster directory can be accessed, unless the user is a superuser or has been granted the pg_read_server_files or pg_write_server_files role, as appropriate; both relative and absolute paths are allowable.
adminpack Functions
pg_file_write
writes the specified data
into the file named by filename
. If append
is false, the file must not already exist. If append
is true, the file can already exist, and will be appended to if so. Returns the number of bytes written.
pg_file_sync fsyncs the file or directory named by filename. An error is thrown on failure (for example, if the specified path does not exist). Note that data_sync_retry has no effect on this function, and therefore a PANIC-level error will not be raised even on failure to flush database files.
pg_file_rename
renames a file. If archivename
is omitted or NULL, it simply renames oldname
to newname
(which must not already exist). If archivename
is provided, it first renames newname
to archivename
(which must not already exist), and then renames oldname
to newname
. In event of failure of the second rename step, it will try to rename archivename
back to newname
before reporting the error. Returns true on success, false if the source file(s) are not present or not writable; other cases throw errors.
pg_file_unlink
removes the specified file. Returns true on success, false if the specified file is not present or the unlink()
call fails; other cases throw errors.
pg_logdir_ls returns the start timestamps and path names of all the log files in the log_directory directory. The log_filename parameter must have its default setting (postgresql-%Y-%m-%d_%H%M%S.log) to use this function.
The differences highlighted here are not an exhaustive list. In XQuery and the 2.0 and later versions of XPath, there is an XPath 1.0 compatibility mode, and the W3C lists of changes applied in that mode offer a more complete (but still not exhaustive) account of the differences. The compatibility mode cannot make the later languages exactly equivalent to XPath 1.0.
Where interoperability with other systems is a concern, for some data types, it may be necessary to use data type formatting functions (such as those in ) explicitly to produce the standard mappings.
Month | Abbreviations |
---|---|
January | Jan |
February | Feb |
March | Mar |
April | Apr |
May |  |
June | Jun |
July | Jul |
August | Aug |
September | Sep, Sept |
October | Oct |
November | Nov |
December | Dec |

Day of the Week | Abbreviations |
---|---|
Sunday | Sun |
Monday | Mon |
Tuesday | Tue, Tues |
Wednesday | Wed, Weds |
Thursday | Thu, Thur, Thurs |
Friday | Fri |
Saturday | Sat |

Modifier | Description |
---|---|
AM | Time is before 12:00 |
AT | Ignored |
JULIAN, JD, J | Next field is Julian Date |
ON | Ignored |
PM | Time is on or after 12:00 |
T | Next field is time |
B012 |
| Embedded C |
|
B021 |
| Direct SQL |
|
B128 |
| Routine language SQL |
|
E011 | Core | Numeric data types |
|
E011-01 | Core | INTEGER and SMALLINT data types |
|
E011-02 | Core | REAL, DOUBLE PRECISION, and FLOAT data types |
|
E011-03 | Core | DECIMAL and NUMERIC data types |
|
E011-04 | Core | Arithmetic operators |
|
E011-05 | Core | Numeric comparison |
|
E011-06 | Core | Implicit casting among the numeric data types |
|
E021 | Core | Character data types |
|
E021-01 | Core | CHARACTER data type |
|
E021-02 | Core | CHARACTER VARYING data type |
|
E021-03 | Core | Character literals |
|
E021-04 | Core | CHARACTER_LENGTH function | trims trailing spaces from CHARACTER values before counting |
E021-05 | Core | OCTET_LENGTH function |
|
E021-06 | Core | SUBSTRING function |
|
E021-07 | Core | Character concatenation |
|
E021-08 | Core | UPPER and LOWER functions |
|
E021-09 | Core | TRIM function |
|
E021-10 | Core | Implicit casting among the character string types |
|
E021-11 | Core | POSITION function |
|
E021-12 | Core | Character comparison |
|
E031 | Core | Identifiers |
|
E031-01 | Core | Delimited identifiers |
|
E031-02 | Core | Lower case identifiers |
|
E031-03 | Core | Trailing underscore |
|
E051 | Core | Basic query specification |
|
E051-01 | Core | SELECT DISTINCT |
|
E051-02 | Core | GROUP BY clause |
|
E051-04 | Core | GROUP BY can contain columns not in <select list> |
|
E051-05 | Core | Select list items can be renamed |
|
E051-06 | Core | HAVING clause |
|
E051-07 | Core | Qualified * in select list |
|
E051-08 | Core | Correlation names in the FROM clause |
|
E051-09 | Core | Rename columns in the FROM clause |
|
E061 | Core | Basic predicates and search conditions |
|
E061-01 | Core | Comparison predicate |
|
E061-02 | Core | BETWEEN predicate |
|
E061-03 | Core | IN predicate with list of values |
|
E061-04 | Core | LIKE predicate |
|
E061-05 | Core | LIKE predicate ESCAPE clause |
|
E061-06 | Core | NULL predicate |
|
E061-07 | Core | Quantified comparison predicate |
|
E061-08 | Core | EXISTS predicate |
|
E061-09 | Core | Subqueries in comparison predicate |
|
E061-11 | Core | Subqueries in IN predicate |
|
E061-12 | Core | Subqueries in quantified comparison predicate |
|
E061-13 | Core | Correlated subqueries |
|
E061-14 | Core | Search condition |
|
E071 | Core | Basic query expressions |
|
E071-01 | Core | UNION DISTINCT table operator |
|
E071-02 | Core | UNION ALL table operator |
|
E071-03 | Core | EXCEPT DISTINCT table operator |
|
E071-05 | Core | Columns combined via table operators need not have exactly the same data type |
|
E071-06 | Core | Table operators in subqueries |
|
E081 | Core | Basic Privileges |
|
E081-01 | Core | SELECT privilege |
|
E081-02 | Core | DELETE privilege |
|
E081-03 | Core | INSERT privilege at the table level |
|
E081-04 | Core | UPDATE privilege at the table level |
|
E081-05 | Core | UPDATE privilege at the column level |
|
E081-06 | Core | REFERENCES privilege at the table level |
|
E081-07 | Core | REFERENCES privilege at the column level |
|
E081-08 | Core | WITH GRANT OPTION |
|
E081-09 | Core | USAGE privilege |
|
E081-10 | Core | EXECUTE privilege |
|
E091 | Core | Set functions |
|
E091-01 | Core | AVG |
|
E091-02 | Core | COUNT |
|
E091-03 | Core | MAX |
|
E091-04 | Core | MIN |
|
E091-05 | Core | SUM |
|
E091-06 | Core | ALL quantifier |
|
E091-07 | Core | DISTINCT quantifier |
|
E101 | Core | Basic data manipulation |
|
E101-01 | Core | INSERT statement |
|
E101-03 | Core | Searched UPDATE statement |
|
E101-04 | Core | Searched DELETE statement |
|
E111 | Core | Single row SELECT statement |
|
E121 | Core | Basic cursor support |
|
E121-01 | Core | DECLARE CURSOR |
|
E121-02 | Core | ORDER BY columns need not be in select list |
|
E121-03 | Core | Value expressions in ORDER BY clause |
|
E121-04 | Core | OPEN statement |
|
E121-06 | Core | Positioned UPDATE statement |
|
E121-07 | Core | Positioned DELETE statement |
|
E121-08 | Core | CLOSE statement |
|
E121-10 | Core | FETCH statement implicit NEXT |
|
E121-17 | Core | WITH HOLD cursors |
|
E131 | Core | Null value support (nulls in lieu of values) |
|
E141 | Core | Basic integrity constraints |
|
E141-01 | Core | NOT NULL constraints |
|
E141-02 | Core | UNIQUE constraints of NOT NULL columns |
|
E141-03 | Core | PRIMARY KEY constraints |
|
E141-04 | Core | Basic FOREIGN KEY constraint with the NO ACTION default for both referential delete action and referential update action |
|
E141-06 | Core | CHECK constraints |
|
E141-07 | Core | Column defaults |
|
E141-08 | Core | NOT NULL inferred on PRIMARY KEY |
|
E141-10 | Core | Names in a foreign key can be specified in any order |
|
E151 | Core | Transaction support |
|
E151-01 | Core | COMMIT statement |
|
E151-02 | Core | ROLLBACK statement |
|
E152 | Core | Basic SET TRANSACTION statement |
|
E152-01 | Core | SET TRANSACTION statement: ISOLATION LEVEL SERIALIZABLE clause |
|
E152-02 | Core | SET TRANSACTION statement: READ ONLY and READ WRITE clauses |
|
E153 | Core | Updatable queries with subqueries |
|
E161 | Core | SQL comments using leading double minus |
|
E171 | Core | SQLSTATE support |
|
E182 | Core | Host language binding |
|
F021 | Core | Basic information schema |
|
F021-01 | Core | COLUMNS view |
|
F021-02 | Core | TABLES view |
|
F021-03 | Core | VIEWS view |
|
F021-04 | Core | TABLE_CONSTRAINTS view |
|
F021-05 | Core | REFERENTIAL_CONSTRAINTS view |
|
F021-06 | Core | CHECK_CONSTRAINTS view |
|
F031 | Core | Basic schema manipulation |
|
F031-01 | Core | CREATE TABLE statement to create persistent base tables |
|
F031-02 | Core | CREATE VIEW statement |
|
F031-03 | Core | GRANT statement |
|
F031-04 | Core | ALTER TABLE statement: ADD COLUMN clause |
|
F031-13 | Core | DROP TABLE statement: RESTRICT clause |
|
F031-16 | Core | DROP VIEW statement: RESTRICT clause |
|
F031-19 | Core | REVOKE statement: RESTRICT clause |
|
F032 |
| CASCADE drop behavior |
|
F033 |
| ALTER TABLE statement: DROP COLUMN clause |
|
F034 |
| Extended REVOKE statement |
|
F034-01 |
| REVOKE statement performed by other than the owner of a schema object |
|
F034-02 |
| REVOKE statement: GRANT OPTION FOR clause |
|
F034-03 |
| REVOKE statement to revoke a privilege that the grantee has WITH GRANT OPTION |
|
F041 | Core | Basic joined table |
|
F041-01 | Core | Inner join (but not necessarily the INNER keyword) |
|
F041-02 | Core | INNER keyword |
|
F041-03 | Core | LEFT OUTER JOIN |
|
F041-04 | Core | RIGHT OUTER JOIN |
|
F041-05 | Core | Outer joins can be nested |
|
F041-07 | Core | The inner table in a left or right outer join can also be used in an inner join |
|
F041-08 | Core | All comparison operators are supported (rather than just =) |
|
F051 | Core | Basic date and time |
|
F051-01 | Core | DATE data type (including support of DATE literal) |
|
F051-02 | Core | TIME data type (including support of TIME literal) with fractional seconds precision of at least 0 |
|
F051-03 | Core | TIMESTAMP data type (including support of TIMESTAMP literal) with fractional seconds precision of at least 0 and 6 |
|
F051-04 | Core | Comparison predicate on DATE, TIME, and TIMESTAMP data types |
|
F051-05 | Core | Explicit CAST between datetime types and character string types |
|
F051-06 | Core | CURRENT_DATE |
|
F051-07 | Core | LOCALTIME |
|
F051-08 | Core | LOCALTIMESTAMP |
|
F052 |
| Intervals and datetime arithmetic |
|
F053 |
| OVERLAPS predicate |
|
F081 | Core | UNION and EXCEPT in views |
|
F111 |
| Isolation levels other than SERIALIZABLE |
|
F111-01 |
| READ UNCOMMITTED isolation level |
|
F111-02 |
| READ COMMITTED isolation level |
|
F111-03 |
| REPEATABLE READ isolation level |
|
F131 | Core | Grouped operations |
|
F131-01 | Core | WHERE, GROUP BY, and HAVING clauses supported in queries with grouped views |
|
F131-02 | Core | Multiple tables supported in queries with grouped views |
|
F131-03 | Core | Set functions supported in queries with grouped views |
|
F131-04 | Core | Subqueries with GROUP BY and HAVING clauses and grouped views |
|
F131-05 | Core | Single row SELECT with GROUP BY and HAVING clauses and grouped views |
|
F171 |
| Multiple schemas per user |
|
F181 | Core | Multiple module support |
|
F191 |
| Referential delete actions |
|
F200 |
| TRUNCATE TABLE statement |
|
F201 | Core | CAST function |
|
F202 |
| TRUNCATE TABLE: identity column restart option |
|
F221 | Core | Explicit defaults |
|
F222 |
| INSERT statement: DEFAULT VALUES clause |
|
F231 |
| Privilege tables |
|
F231-01 |
| TABLE_PRIVILEGES view |
|
F231-02 |
| COLUMN_PRIVILEGES view |
|
F231-03 |
| USAGE_PRIVILEGES view |
|
F251 |
| Domain support |
|
F261 | Core | CASE expression |
|
F261-01 | Core | Simple CASE |
|
F261-02 | Core | Searched CASE |
|
F261-03 | Core | NULLIF |
|
F261-04 | Core | COALESCE |
|
F262 |
| Extended CASE expression |
|
F271 |
| Compound character literals |
|
F281 |
| LIKE enhancements |
|
F292 |
| UNIQUE null treatment | SQL:202x draft |
F302 |
| INTERSECT table operator |
|
F302-01 |
| INTERSECT DISTINCT table operator |
|
F302-02 |
| INTERSECT ALL table operator |
|
F304 |
| EXCEPT ALL table operator |
|
F311 | Core | Schema definition statement |
|
F311-01 | Core | CREATE SCHEMA |
|
F311-02 | Core | CREATE TABLE for persistent base tables |
|
F311-03 | Core | CREATE VIEW |
|
F311-04 | Core | CREATE VIEW: WITH CHECK OPTION |
|
F311-05 | Core | GRANT statement |
|
F312 |
| MERGE statement |
|
F313 |
| Enhanced MERGE statement |
|
F314 |
| MERGE statement with DELETE branch |
|
F321 |
| User authorization |
|
F341 |
| Usage tables |
|
F361 |
| Subprogram support |
|
F381 |
| Extended schema manipulation |
|
F381-01 |
| ALTER TABLE statement: ALTER COLUMN clause |
|
F381-02 |
| ALTER TABLE statement: ADD CONSTRAINT clause |
|
F381-03 |
| ALTER TABLE statement: DROP CONSTRAINT clause |
|
F382 |
| Alter column data type |
|
F383 |
| Set column not null clause |
|
F384 |
| Drop identity property clause |
|
F385 |
| Drop column generation expression clause |
|
F386 |
| Set identity column generation clause |
|
F391 |
| Long identifiers |
|
F392 |
| Unicode escapes in identifiers |
|
F393 |
| Unicode escapes in literals |
|
F394 |
| Optional normal form specification |
|
F401 |
| Extended joined table |
|
F401-01 |
| NATURAL JOIN |
|
F401-02 |
| FULL OUTER JOIN |
|
F401-04 |
| CROSS JOIN |
|
F402 |
| Named column joins for LOBs, arrays, and multisets |
|
F404 |
| Range variable for common column names |
|
F411 |
| Time zone specification | differences regarding literal interpretation |
F421 |
| National character |
|
F431 |
| Read-only scrollable cursors |
|
F431-01 |
| FETCH with explicit NEXT |
|
F431-02 |
| FETCH FIRST |
|
F431-03 |
| FETCH LAST |
|
F431-04 |
| FETCH PRIOR |
|
F431-05 |
| FETCH ABSOLUTE |
|
F431-06 |
| FETCH RELATIVE |
|
F441 |
| Extended set function support |
|
F442 |
| Mixed column references in set functions |
|
F471 | Core | Scalar subquery values |
|
F481 | Core | Expanded NULL predicate |
|
F491 |
| Constraint management |
|
F501 | Core | Features and conformance views |
|
F501-01 | Core | SQL_FEATURES view |
|
F501-02 | Core | SQL_SIZING view |
|
F502 |
| Enhanced documentation tables |
|
F531 |
| Temporary tables |
|
F555 |
| Enhanced seconds precision |
|
F561 |
| Full value expressions |
|
F571 |
| Truth value tests |
|
F591 |
| Derived tables |
|
F611 |
| Indicator data types |
|
F641 |
| Row and table constructors |
|
F651 |
| Catalog name qualifiers |
|
F661 |
| Simple tables |
|
F672 |
| Retrospective check constraints |
|
F690 |
| Collation support | but no character set support |
F692 |
| Extended collation support |
|
F701 |
| Referential update actions |
|
F711 |
| ALTER domain |
|
F731 |
| INSERT column privileges |
|
F751 |
| View CHECK enhancements |
|
F761 |
| Session management |
|
F762 |
| CURRENT_CATALOG |
|
F763 |
| CURRENT_SCHEMA |
|
F771 |
| Connection management |
|
F781 |
| Self-referencing operations |
|
F791 |
| Insensitive cursors |
|
F801 |
| Full set function |
|
F850 |
| Top-level <order by clause> in <query expression> |
|
F851 |
| <order by clause> in subqueries |
|
F852 |
| Top-level <order by clause> in views |
|
F855 |
| Nested <order by clause> in <query expression> |
|
F856 |
| Nested <fetch first clause> in <query expression> |
|
F857 |
| Top-level <fetch first clause> in <query expression> |
|
F858 |
| <fetch first clause> in subqueries |
|
F859 |
| Top-level <fetch first clause> in views |
|
F860 |
| <fetch first row count> in <fetch first clause> |
|
F861 |
| Top-level <result offset clause> in <query expression> |
|
F862 |
| <result offset clause> in subqueries |
|
F863 |
| Nested <result offset clause> in <query expression> |
|
F864 |
| Top-level <result offset clause> in views |
|
F865 |
| <offset row count> in <result offset clause> |
|
F867 |
| FETCH FIRST clause: WITH TIES option |
|
S071 |
| SQL paths in function and type name resolution |
|
S091-01 |
| Arrays of built-in data types |
|
S091-03 |
| Array expressions |
|
S092 |
| Arrays of user-defined types |
|
S095 |
| Array constructors by query |
|
S096 |
| Optional array bounds |
|
S098 |
| ARRAY_AGG |
|
S111 |
| ONLY in query expressions |
|
S201 |
| SQL-invoked routines on arrays |
|
S201-01 |
| Array parameters |
|
S201-02 |
| Array as result type of functions |
|
S211 |
| User-defined cast functions |
|
S301 |
| Enhanced UNNEST |
|
S404 |
| TRIM_ARRAY |
|
T031 |
| BOOLEAN data type |
|
T071 |
| BIGINT data type |
|
T121 |
| WITH (excluding RECURSIVE) in query expression |
|
T122 |
| WITH (excluding RECURSIVE) in subquery |
|
T131 |
| Recursive query |
|
T132 |
| Recursive query in subquery |
|
T133 |
| Enhanced cycle mark values | SQL:202x draft |
T141 |
| SIMILAR predicate |
|
T151 |
| DISTINCT predicate |
|
T152 |
| DISTINCT predicate with negation |
|
T171 |
| LIKE clause in table definition |
|
T172 |
| AS subquery clause in table definition |
|
T173 |
| Extended LIKE clause in table definition |
|
T174 |
| Identity columns |
|
T177 |
| Sequence generator support: simple restart option |
|
T178 |
| Identity columns: simple restart option |
|
T191 |
| Referential action RESTRICT |
|
T201 |
| Comparable data types for referential constraints |
|
T211-01 |
| Triggers activated on UPDATE, INSERT, or DELETE of one base table |
|
T211-02 |
| BEFORE triggers |
|
T211-03 |
| AFTER triggers |
|
T211-04 |
| FOR EACH ROW triggers |
|
T211-05 |
| Ability to specify a search condition that must be true before the trigger is invoked |
|
T211-07 |
| TRIGGER privilege |
|
T212 |
| Enhanced trigger capability |
|
T213 |
| INSTEAD OF triggers |
|
T241 |
| START TRANSACTION statement |
|
T261 |
| Chained transactions |
|
T271 |
| Savepoints |
|
T281 |
| SELECT privilege with column granularity |
|
T285 |
| Enhanced derived column names |
|
T312 |
| OVERLAY function |
|
T321-01 | Core | User-defined functions with no overloading |
|
T321-02 | Core | User-defined stored procedures with no overloading |
|
T321-03 | Core | Function invocation |
|
T321-04 | Core | CALL statement |
|
T321-05 | Core | RETURN statement |
|
T321-06 | Core | ROUTINES view |
|
T321-07 | Core | PARAMETERS view |
|
T323 |
| Explicit security for external routines |
|
T325 |
| Qualified SQL parameter references |
|
T331 |
| Basic roles |
|
T332 |
| Extended roles |
|
T341 |
| Overloading of SQL-invoked functions and procedures |
|
T351 |
| Bracketed SQL comments (/*...*/ comments) |
|
T431 |
| Extended grouping capabilities |
|
T432 |
| Nested and concatenated GROUPING SETS |
|
T433 |
| Multiargument GROUPING function |
|
T434 |
| GROUP BY DISTINCT |
|
T441 |
| ABS and MOD functions |
|
T461 |
| Symmetric BETWEEN predicate |
|
T491 |
| LATERAL derived table |
|
T501 |
| Enhanced EXISTS predicate |
|
T521 |
| Named arguments in CALL statement |
|
T523 |
| Default values for INOUT parameters of SQL-invoked procedures |
|
T524 |
| Named arguments in routine invocations other than a CALL statement |
|
T525 |
| Default values for parameters of SQL-invoked functions |
|
T551 |
| Optional key words for default syntax |
|
T581 |
| Regular expression substring function |
|
T591 |
| UNIQUE constraints of possibly null columns |
|
T611 |
| Elementary OLAP operations |
|
T612 |
| Advanced OLAP operations |
|
T613 |
| Sampling |
|
T614 |
| NTILE function |
|
T615 |
| LEAD and LAG functions |
|
T617 |
| FIRST_VALUE and LAST_VALUE function |
|
T620 |
| WINDOW clause: GROUPS option |
|
T621 |
| Enhanced numeric functions |
|
T622 |
| Trigonometric functions |
|
T623 |
| General logarithm functions |
|
T624 |
| Common logarithm functions |
|
T631 | Core | IN predicate with one list element |
|
T651 |
| SQL-schema statements in SQL routines |
|
T653 |
| SQL-schema statements in external routines |
|
T655 |
| Cyclically dependent routines |
|
T831 |
| SQL/JSON path language: strict mode |
|
T832 |
| SQL/JSON path language: item method |
|
T833 |
| SQL/JSON path language: multiple subscripts |
|
T834 |
| SQL/JSON path language: wildcard member accessor |
|
T835 |
| SQL/JSON path language: filter expressions |
|
T836 |
| SQL/JSON path language: starts with predicate |
|
T837 |
| SQL/JSON path language: regex_like predicate |
|
X010 |
| XML type |
|
X011 |
| Arrays of XML type |
|
X014 |
| Attributes of XML type |
|
X016 |
| Persistent XML values |
|
X020 |
| XMLConcat |
|
X031 |
| XMLElement |
|
X032 |
| XMLForest |
|
X034 |
| XMLAgg |
|
X035 |
| XMLAgg: ORDER BY option |
|
X036 |
| XMLComment |
|
X037 |
| XMLPI |
|
X040 |
| Basic table mapping |
|
X041 |
| Basic table mapping: nulls absent |
|
X042 |
| Basic table mapping: null as nil |
|
X043 |
| Basic table mapping: table as forest |
|
X044 |
| Basic table mapping: table as element |
|
X045 |
| Basic table mapping: with target namespace |
|
X046 |
| Basic table mapping: data mapping |
|
X047 |
| Basic table mapping: metadata mapping |
|
X048 |
| Basic table mapping: base64 encoding of binary strings |
|
X049 |
| Basic table mapping: hex encoding of binary strings |
|
X050 |
| Advanced table mapping |
|
X051 |
| Advanced table mapping: nulls absent |
|
X052 |
| Advanced table mapping: null as nil |
|
X053 |
| Advanced table mapping: table as forest |
|
X054 |
| Advanced table mapping: table as element |
|
X055 |
| Advanced table mapping: with target namespace |
|
X056 |
| Advanced table mapping: data mapping |
|
X057 |
| Advanced table mapping: metadata mapping |
|
X058 |
| Advanced table mapping: base64 encoding of binary strings |
|
X059 |
| Advanced table mapping: hex encoding of binary strings |
|
X060 |
| XMLParse: character string input and CONTENT option |
|
X061 |
| XMLParse: character string input and DOCUMENT option |
|
X070 |
| XMLSerialize: character string serialization and CONTENT option |
|
X071 |
| XMLSerialize: character string serialization and DOCUMENT option |
|
X072 |
| XMLSerialize: character string serialization |
|
X090 |
| XML document predicate |
|
X120 |
| XML parameters in SQL routines |
|
X121 |
| XML parameters in external routines |
|
X221 |
| XML passing mechanism BY VALUE |
|
X301 |
| XMLTable: derived column list option |
|
X302 |
| XMLTable: ordinality column option |
|
X303 |
| XMLTable: column default option |
|
X304 |
| XMLTable: passing a context item | must be XML DOCUMENT |
X400 |
| Name and identifier mapping |
|
X410 |
| Alter column data type: XML type |
|
Class 00 — Successful Completion
Class 01 — Warning
Class 02 — No Data (this is also a warning class per the SQL standard)
Class 03 — SQL Statement Not Yet Complete
Class 08 — Connection Exception
Class 09 — Triggered Action Exception
Class 0A — Feature Not Supported
Class 0B — Invalid Transaction Initiation
Class 0F — Locator Exception
Class 0L — Invalid Grantor
Class 0P — Invalid Role Specification
Class 0Z — Diagnostics Exception
Class 20 — Case Not Found
Class 21 — Cardinality Violation
Class 22 — Data Exception
Class 23 — Integrity Constraint Violation
Class 24 — Invalid Cursor State
Class 25 — Invalid Transaction State
Class 26 — Invalid SQL Statement Name
Class 27 — Triggered Data Change Violation
Class 28 — Invalid Authorization Specification
Class 2B — Dependent Privilege Descriptors Still Exist
Class 2D — Invalid Transaction Termination
Class 2F — SQL Routine Exception
Class 34 — Invalid Cursor Name
Class 38 — External Routine Exception
Class 39 — External Routine Invocation Exception
Class 3B — Savepoint Exception
Class 3D — Invalid Catalog Name
Class 3F — Invalid Schema Name
Class 40 — Transaction Rollback
Class 42 — Syntax Error or Access Rule Violation
Class 44 — WITH CHECK OPTION Violation
Class 53 — Insufficient Resources
Class 54 — Program Limit Exceeded
Class 55 — Object Not In Prerequisite State
Class 57 — Operator Intervention
Class 58 — System Error (errors external to PostgreSQL itself)
Class 72 — Snapshot Failure
Class F0 — Configuration File Error
Class HV — Foreign Data Wrapper Error (SQL/MED)
Class P0 — PL/pgSQL Error
Class XX — Internal Error
Function | Description |
---|---|
pg_file_write | Writes, or appends to, a text file. |
pg_file_sync | Flushes a file or directory to disk. |
pg_file_rename | Renames a file. |
pg_file_unlink | Removes a file. |
pg_logdir_ls | Lists the log files in the log_directory directory. |
btree_gin
provides sample GIN operator classes that implement B-tree equivalent behavior for the data types int2
, int4
, int8
, float4
, float8
, timestamp with time zone
, timestamp without time zone
, time with time zone
, time without time zone
, date
, interval
, oid
, money
, "char"
, varchar
, text
, bytea
, bit
, varbit
, macaddr
, macaddr8
, inet
, cidr
, uuid
, name
, bool
, bpchar
, and all enum
types.
In general, these operator classes will not outperform the equivalent standard B-tree index methods, and they lack one major feature of standard B-trees: the ability to enforce uniqueness. However, they are useful for GIN testing and as a base for developing other GIN operator classes. Also, for queries that test both a GIN-indexable column and a B-tree-indexable column, it might be more efficient to create a multicolumn GIN index that uses one of these operator classes than to create two separate indexes that would have to be combined via bitmap ANDing.
Teodor Sigaev (<
teodor@stack.net
>
) and Oleg Bartunov (<
oleg@sai.msu.su
>
). See http://www.sai.msu.su/~megera/oddmuse/index.cgi/Gin for additional information.
Release notes for prior release branches can be found at https://www.postgresql.org/docs/release/
auth_delay causes the server to pause briefly before reporting authentication failure, making brute-force attacks on database passwords more difficult. Note that it does nothing to block denial-of-service attacks, and may even exacerbate them, since processes that are waiting before reporting authentication failure will still consume connection slots.
In order to function, this module must be loaded via shared_preload_libraries in postgresql.conf.
auth_delay.milliseconds
(int
)
The number of milliseconds to wait before reporting an authentication failure. The default is 0.
These parameters must be set in postgresql.conf
. Typical usage might be:
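For example, loading the module and configuring a half-second delay might look like this in postgresql.conf (the 500 ms value is just an illustration):

```
# postgresql.conf
shared_preload_libraries = 'auth_delay'

auth_delay.milliseconds = '500'
```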
KaiGai Kohei <
kaigai@ak.jp.nec.com
>
dblink is a module that supports connections to other PostgreSQL databases from within a database session.
See also postgres_fdw, which provides roughly the same functionality using a more modern and standards-compliant infrastructure.
dblink_connect_u — opens a persistent connection to a remote database, insecurely
dblink_connect_u()
is identical to dblink_connect()
, except that it will allow non-superusers to connect using any authentication method.
If the remote server selects an authentication method that does not involve a password, then impersonation and subsequent escalation of privileges can occur, because the session will appear to have originated from the user as which the local PostgreSQL server runs. Also, even if the remote server does demand a password, it is possible for the password to be supplied from the server environment, such as a ~/.pgpass
file belonging to the server's user. This opens not only a risk of impersonation, but the possibility of exposing a password to an untrustworthy remote server. Therefore, dblink_connect_u()
is initially installed with all privileges revoked from PUBLIC
, making it un-callable except by superusers. In some situations it may be appropriate to grant EXECUTE
permission for dblink_connect_u()
to specific users who are considered trustworthy, but this should be done with care. It is also recommended that any ~/.pgpass
file belonging to the server's user not contain any records specifying a wildcard host name.
For further details see dblink_connect()
.
dblink_connect — opens a persistent connection to a remote database
dblink_connect()
establishes a connection to a remote PostgreSQL database. The server and database to be contacted are identified through a standard libpq connection string. Optionally, a name can be assigned to the connection. Multiple named connections can be open at once, but only one unnamed connection is permitted at a time. The connection will persist until closed or until the database session is ended.
The connection string may also be the name of an existing foreign server. It is recommended to use the foreign-data wrapper dblink_fdw
when defining the foreign server. See the example below, as well as CREATE SERVER and CREATE USER MAPPING.
connname
The name to use for this connection; if omitted, an unnamed connection is opened, replacing any existing unnamed connection.connstr
libpq-style connection info string, for example hostaddr=127.0.0.1 port=5432 dbname=mydb user=postgres password=mypasswd options=-csearch_path=
. For details see Section 33.1.1. Alternatively, the name of a foreign server.
Returns status, which is always OK
(since any error causes the function to throw an error instead of returning).
If untrusted users have access to a database that has not adopted a secure schema usage pattern, begin each session by removing publicly-writable schemas from search_path
. One could, for example, add options=-csearch_path=
to connstr
. This consideration is not specific to dblink
; it applies to every interface for executing arbitrary SQL commands.
Only superusers may use dblink_connect
to create non-password-authenticated connections. If non-superusers need this capability, use dblink_connect_u
instead.
It is unwise to choose connection names that contain equal signs, as this opens a risk of confusion with connection info strings in other dblink
functions.
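A minimal usage sketch (the connection name and connection string are illustrative):

```sql
SELECT dblink_connect('myconn', 'dbname=postgres options=-csearch_path=');
--  dblink_connect
-- ----------------
--  OK

SELECT dblink_disconnect('myconn');
```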
bloom provides an index access method based on Bloom filters.
A Bloom filter is a space-efficient data structure that is used to test whether an element is a member of a set. In the case of an index access method, it allows fast exclusion of non-matching tuples via signatures whose size is determined at index creation.
A signature is a lossy representation of the indexed attribute(s), and as such it is prone to reporting false positives; that is, it may report that an element is in the set when it is not. So index search results must always be rechecked using the actual attribute values from the table. Larger signatures reduce the odds of a false positive, and thus reduce the number of useless visits, but of course also make the index larger and hence slower to scan.
This type of index is most useful when a table has many attributes and queries test arbitrary combinations of them. A traditional btree index is faster than a bloom index, but it can require many btree indexes to support all possible queries where one needs only a single bloom index. Note however that bloom indexes only support equality queries, whereas btree indexes can also perform inequality and range searches.
A bloom index accepts the following parameters in its WITH clause:
length
Length of each signature (index entry) in bits. It is rounded up to the nearest multiple of 16. The default is 80 bits and the maximum is 4096.
col1 — col32
Number of bits generated for each index column. Each parameter's name refers to the number of the index column that it controls. The default is 2 bits and the maximum is 4095. Parameters for index columns not actually used are ignored.
This is an example of creating a bloom index:
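A sketch of such a statement (the table and index names are illustrative; the settings match the description that follows):

```sql
CREATE INDEX bloomidx ON tbloom USING bloom (i1, i2, i3)
       WITH (length=80, col1=2, col2=2, col3=4);
```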
The index is created with a signature length of 80 bits, with attributes i1 and i2 mapped to 2 bits, and attribute i3 mapped to 4 bits. We could have omitted the length
, col1
, and col2
specifications since those have the default values.
Here is a more complete example of bloom index definition and usage, as well as a comparison with equivalent btree indexes. The bloom index is considerably smaller than the btree index, and can perform better.
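A possible setup for such a comparison (the column list, row count, and names are illustrative):

```sql
-- A wide table of random integers, large enough that index choice matters.
CREATE TABLE tbloom AS
   SELECT
     (random() * 1000000)::int AS i1,
     (random() * 1000000)::int AS i2,
     (random() * 1000000)::int AS i3,
     (random() * 1000000)::int AS i4,
     (random() * 1000000)::int AS i5,
     (random() * 1000000)::int AS i6
   FROM generate_series(1, 10000000);

-- One bloom index covering all six columns...
CREATE INDEX bloomidx ON tbloom USING bloom (i1, i2, i3, i4, i5, i6);

-- ...versus one multicolumn btree index on the same columns.
CREATE INDEX btreeidx ON tbloom (i1, i2, i3, i4, i5, i6);
```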
A sequential scan over this large table takes a long time:
So the planner will usually select an index scan if possible. With a btree index, we get results like this:
Bloom is better than btree in handling this type of search:
Note the relatively large number of false positives here: 2439 rows were selected to be visited in the heap, but none actually matched the query. We could reduce that by specifying a larger signature length. In this example, creating the index with length=200 reduced the number of false positives to 55; but it also doubled the index size (to 306 MB) and ended up making this query slower overall (125 ms in total).
Now, the main problem with the btree search above is that btree is inefficient when the search conditions do not constrain the leading index column(s). A better strategy for btree is to create a separate index on each column. Then the planner will choose a plan like this:
Although this query runs much faster than with a single index, we pay a large penalty in index size. Each of the single-column btree indexes occupies 214 MB, so the total space needed is over 1.2 GB, more than 8 times the space used by the bloom index.
An operator class for bloom indexes requires only a hash function for the indexed data type and an equality operator for searching. This example shows the operator class definition for the text
data type:
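A sketch of such a definition, pairing the built-in hashtext function with the equality operator:

```sql
CREATE OPERATOR CLASS text_ops
DEFAULT FOR TYPE text USING bloom AS
    OPERATOR    1   =(text, text),
    FUNCTION    1   hashtext(text);
```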
Only operator classes for int4
and text
are included with the module.
Only the =
operator is supported for search. But it is possible to add support for arrays with union and intersection operations in the future.
bloom
access method doesn't support UNIQUE
indexes.
bloom
access method doesn't support searching for NULL
values.
Teodor Sigaev <
teodor@postgrespro.ru
>
, Postgres Professional, Moscow, Russia
Alexander Korotkov <
a.korotkov@postgrespro.ru
>
, Postgres Professional, Moscow, Russia
Oleg Bartunov <
obartunov@postgrespro.ru
>
, Postgres Professional, Moscow, Russia
dblink_open — opens a cursor in a remote database
dblink_open()
opens a cursor in a remote database. The cursor can subsequently be manipulated with dblink_fetch()
and dblink_close()
.
connname
Name of the connection to use; omit this parameter to use the unnamed connection.
cursorname
The name to assign to this cursor.
sql
The SELECT
statement that you wish to execute in the remote database, for example select * from pg_class
.
fail_on_error
If true (the default when omitted) then an error thrown on the remote side of the connection causes an error to also be thrown locally. If false, the remote error is locally reported as a NOTICE, and the function's return value is set to ERROR
.
Returns status, either OK
or ERROR
.
Since a cursor can only persist within a transaction, dblink_open
starts an explicit transaction block (BEGIN
) on the remote side, if the remote side was not already within a transaction. This transaction will be closed again when the matching dblink_close
is executed. Note that if you use dblink_exec
to change data between dblink_open
and dblink_close
, and then an error occurs or you use dblink_disconnect
before dblink_close
, your change will be lost because the transaction will be aborted.
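A typical cursor round trip might look like this (connection string, cursor name, and column aliases are illustrative):

```sql
SELECT dblink_connect('dbname=postgres options=-csearch_path=');
SELECT dblink_open('foo', 'select proname, prosrc from pg_proc where proname like ''bytea%''');
SELECT * FROM dblink_fetch('foo', 5) AS (funcname name, source text);
SELECT dblink_close('foo');
SELECT dblink_disconnect();
```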
dblink_disconnect — closes a persistent connection to a remote database
dblink_disconnect()
closes a connection previously opened by dblink_connect()
. The form with no arguments closes an unnamed connection.
connname
The name of a named connection to be closed.
Returns status, which is always OK
(since any error causes the function to throw an error instead of returning).
dblink — executes a query in a remote database
dblink
executes a query (usually a SELECT
, but it can be any SQL statement that returns rows) in a remote database.
When two text
arguments are given, the first one is first looked up as a persistent connection's name; if found, the command is executed on that connection. If not found, the first argument is treated as a connection info string as for dblink_connect
, and the indicated connection is made just for the duration of this command.
connname
Name of the connection to use; omit this parameter to use the unnamed connection.connstr
A connection info string, as previously described for dblink_connect
.
sql
The SQL query that you wish to execute in the remote database, for example select * from foo
.
fail_on_error
If true (the default when omitted) then an error thrown on the remote side of the connection causes an error to also be thrown locally. If false, the remote error is locally reported as a NOTICE, and the function returns no rows.
The function returns the row(s) produced by the query. Since dblink
can be used with any query, it is declared to return record
, rather than specifying any particular set of columns. This means that you must specify the expected set of columns in the calling query — otherwise PostgreSQL would not know what to expect. Here is an example:
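The original listing is not reproduced above; a sketch in the same spirit (database name and column list are placeholders) is:

    SELECT *
      FROM dblink('dbname=mydb options=-csearch_path=',
                  'select proname, prosrc from pg_proc')
        AS t1(proname name, prosrc text)
      WHERE proname LIKE 'bytea%';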
The “alias” part of the FROM
clause must specify the column names and types that the function will return. (Specifying column names in an alias is actually standard SQL syntax, but specifying column types is a PostgreSQL extension.) This allows the system to understand what *
should expand to, and what proname
in the WHERE
clause refers to, in advance of trying to execute the function. At run time, an error will be thrown if the actual query result from the remote database does not have the same number of columns shown in the FROM
clause. The column names need not match, however, and dblink
does not insist on exact type matches either. It will succeed so long as the returned data strings are valid input for the column type declared in the FROM
clause.
A convenient way to use dblink
with predetermined queries is to create a view. This allows the column type information to be buried in the view, instead of having to spell it out in every query. For example,
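A sketch of such a view (connection string and column list are placeholders):

    CREATE VIEW myremote_pg_proc AS
      SELECT *
        FROM dblink('dbname=postgres options=-csearch_path=',
                    'select proname, prosrc from pg_proc')
          AS t1(proname name, prosrc text);

    SELECT * FROM myremote_pg_proc WHERE proname LIKE 'bytea%';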
dblink_exec — executes a command in a remote database
dblink_exec
executes a command (that is, any SQL statement that doesn't return rows) in a remote database.
When two text
arguments are given, the first one is first looked up as a persistent connection's name; if found, the command is executed on that connection. If not found, the first argument is treated as a connection info string as for dblink_connect
, and the indicated connection is made just for the duration of this command.
connname
Name of the connection to use; omit this parameter to use the unnamed connection.
connstr
A connection info string, as previously described for dblink_connect
.
sql
The SQL command that you wish to execute in the remote database, for example insert into foo values(0,'a','{"a0","b0","c0"}')
.
fail_on_error
If true (the default when omitted) then an error thrown on the remote side of the connection causes an error to also be thrown locally. If false, the remote error is locally reported as a NOTICE, and the function's return value is set to ERROR
.
Returns status, either the command's status string or ERROR
.
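For illustration (the target database, table, and values are placeholders):

    SELECT dblink_connect('dbname=remotedb');
    SELECT dblink_exec('insert into foo values(21, ''z'', ''{"a0","b0","c0"}'');');
    -- returns the remote command's status string, e.g. INSERT 0 1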
dblink_close — closes a cursor in a remote database
dblink_close
closes a cursor previously opened with dblink_open
.
connname
Name of the connection to use; omit this parameter to use the unnamed connection.
cursorname
The name of the cursor to close.
fail_on_error
If true (the default when omitted) then an error thrown on the remote side of the connection causes an error to also be thrown locally. If false, the remote error is locally reported as a NOTICE, and the function's return value is set to ERROR
.
Returns status, either OK
or ERROR
.
If dblink_open
started an explicit transaction block, and this is the last remaining open cursor in this connection, dblink_close
will issue the matching COMMIT
.
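For illustration (assuming a cursor foo opened earlier with dblink_open):

    SELECT dblink_close('foo');
    -- returns OK; also issues the matching COMMIT if this was the last open cursor
    -- in a transaction that dblink_open started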
dblink_is_busy — checks if connection is busy with an async query
dblink_is_busy
tests whether an async query is in progress.
connname
Name of the connection to check.
Returns 1 if connection is busy, 0 if it is not busy. If this function returns 0, it is guaranteed that dblink_get_result
will not block.
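For illustration (the connection name dtest1 is a placeholder):

    SELECT dblink_is_busy('dtest1');
    -- 1 while the async query is still running, 0 once dblink_get_result will not block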
Release date: 2022-10-13
PostgreSQL 15 contains many new features and enhancements, including:
Support for the SQL MERGE
command.
Selective publication of tables' contents within logical replication publications, through the ability to specify column lists and row filter conditions.
More options for compression, including support for Zstandard (zstd) compression. This includes support for performing compression on the server side during pg_basebackup.
Support for structured server log output using the JSON format.
Performance improvements, particularly for in-memory and on-disk sorting.
The above items and other new features of PostgreSQL 15 are explained in more detail in the sections below.
A dump/restore using pg_dumpall or use of pg_upgrade or logical replication is required for those wishing to migrate data from any previous release. See Section 19.6 for general information on migrating to new major releases.
Version 15 contains a number of changes that may affect compatibility with previous releases. Observe the following incompatibilities:
Remove PUBLIC
creation permission on the public
schema (Noah Misch)
The new default is one of the secure schema usage patterns that Section 5.9.6 has recommended since the security release for CVE-2018-1058. The change applies to new database clusters and to newly-created databases in existing clusters. Upgrading a cluster or restoring a database dump will preserve public
's existing permissions.
For existing databases, especially those having multiple users, consider revoking CREATE
permission on the public
schema to adopt this new default. For new databases having no need to defend against insider threats, granting CREATE
permission will yield the behavior of prior releases.
Change the owner of the public
schema to be the new pg_database_owner
role (Noah Misch)
This allows each database's owner to have ownership privileges on the public
schema within their database. Previously it was owned by the bootstrap superuser, so that non-superuser database owners could not do anything with it.
This change applies to new database clusters and to newly-created databases in existing clusters. Upgrading a cluster or restoring a database dump will preserve public
's existing ownership specification.
Remove long-deprecated exclusive backup mode (David Steele, Nathan Bossart)
If the database server stops abruptly while in this mode, the server could fail to start. The non-exclusive backup mode is considered superior for all purposes. Functions pg_start_backup()
/pg_stop_backup()
have been renamed to pg_backup_start()
/pg_backup_stop()
, and the functions pg_backup_start_time()
and pg_is_in_backup()
have been removed.
Increase hash_mem_multiplier
default to 2.0 (Peter Geoghegan)
This allows query hash operations to use more work_mem
memory than other operations.
Remove server-side language plpython2u
and generic Python language plpythonu
(Andres Freund)
Python 2.x is no longer supported. While the original intent of plpythonu
was that it could eventually refer to plpython3u
, changing it now seems more likely to cause problems than solve them, so it's just been removed.
Generate an error if array_to_tsvector()
is passed an empty-string array element (Jean-Christophe Arnu)
This is prohibited because lexemes should never be empty. Users of previous Postgres releases should verify that no empty lexemes are stored because they can lead to dump/restore failures and inconsistent results.
Generate an error when chr()
is supplied with a negative argument (Peter Eisentraut)
Prevent CREATE OR REPLACE VIEW
from changing the collation of an output column (Tom Lane)
Disallow zero-length Unicode identifiers, e.g., U&""
(Peter Eisentraut)
Non-Unicode zero-length identifiers were already disallowed.
Prevent numeric literals from having non-numeric trailing characters (Peter Eisentraut)
Previously, query text like 123abc
would be interpreted as 123
followed by a separate token abc
.
Adjust JSON numeric literal processing to match the SQL/JSON standard (Peter Eisentraut)
This accepts numeric formats like .1
and 1.
, and disallows trailing junk after numeric literals, like 1.type()
.
When interval
input provides a fractional value for a unit greater than months, round to the nearest month (Bruce Momjian)
For example, convert 1.99 years
to 2 years
, not 1 year 11 months
as before.
Improve consistency of interval
parsing with trailing periods (Tom Lane)
Numbers with trailing periods were rejected on some platforms.
Mark the interval
output function as stable, not immutable, since it depends on IntervalStyle
(Tom Lane)
This will, for example, cause creation of indexes relying on the text output of interval
values to fail.
Detect integer overflow in interval justification functions (Joe Koshakow)
The affected functions are justify_interval()
, justify_hours()
, and justify_days()
.
Change the I/O format of type "char"
for non-ASCII characters (Tom Lane)
Bytes with the high bit set are now output as a backslash and three octal digits, to avoid encoding issues.
Remove the default ADMIN OPTION
privilege a login role has on its own role membership (Robert Haas)
Previously, a login role could add/remove members of its own role, even without ADMIN OPTION
privilege.
Allow logical replication to run as the owner of the subscription (Mark Dilger)
Because row-level security policies are not checked, only superusers, roles with bypassrls
, and table owners can replicate into tables with row-level security policies.
Prevent UPDATE
and DELETE
logical replication operations on tables where the subscription owner does not have SELECT
permission on the table (Jeff Davis)
UPDATE
and DELETE
commands typically involve reading the table as well, so require the subscription owner to have table SELECT
permission.
When EXPLAIN
references the session's temporary object schema, refer to it as pg_temp
(Amul Sul)
Previously the actual schema name was reported, leading to inconsistencies across sessions.
Fix pg_statio_all_tables
to sum values for the rare case of TOAST tables with multiple indexes (Andrei Zubkov)
Previously such cases would show one row for each index.
Disallow setting custom options that match the name of an installed extension, but are not one of the extension's declared variables (Florin Irion, Tom Lane)
This change causes any such pre-existing variables to be deleted during extension load, and then prevents new ones from being created later in the session. The intent is to prevent confusion about whether a variable is associated with an extension or not.
Remove obsolete server variable stats_temp_directory
(Andres Freund, Kyotaro Horiguchi)
Improve the algorithm used to compute random()
(Fabien Coelho)
This will cause random()
's results to differ from what was emitted by prior versions, even for the same seed value.
libpq's PQsendQuery()
function is no longer supported in pipeline mode (Álvaro Herrera)
Applications that are using that combination will need to be modified to use PQsendQueryParams()
instead.
On non-Windows platforms, consult the HOME
environment variable to find the user's home directory (Anders Kaseorg)
If HOME
is empty or unset, fall back to the previous method of checking the <pwd.h>
database. This change affects libpq (for example, while looking up ~/.pgpass
) as well as various client application programs.
Remove pg_dump's --no-synchronized-snapshots
option (Tom Lane)
All still-supported server versions support synchronized snapshots, so there's no longer a need for this option.
After an error is detected in psql's --single-transaction
mode, change the final COMMIT
command to ROLLBACK
only if ON_ERROR_STOP
is set (Michael Paquier)
Avoid unnecessary casting of constants in queries sent by postgres_fdw (Dian Fay)
When column types are intentionally different between local and remote databases, such casts could cause errors.
Remove xml2's xml_is_well_formed()
function (Tom Lane)
This function has been implemented in the core backend since Postgres 9.1.
Allow custom scan providers to indicate if they support projections (Sven Klemm)
The default is now that custom scan providers are assumed to not support projections; those that do will need to be updated for this release.
Below you will find a detailed account of the changes between PostgreSQL 15 and the previous major release.
Record and check the collation version of each database (Peter Eisentraut)
This feature is designed to detect collation version changes to avoid index corruption. Function pg_database_collation_actual_version()
reports the underlying operating system collation version, and ALTER DATABASE ... REFRESH
sets the recorded database collation version to match the operating system collation version.
Allow ICU collations to be set as the default for clusters and databases (Peter Eisentraut)
Previously, only libc-based collations could be selected at the cluster and database levels. ICU collations could only be used via explicit COLLATE
clauses.
Add system view pg_ident_file_mappings
to report pg_ident.conf
information (Julien Rouhaud)
Improve planning time for queries referencing partitioned tables (David Rowley)
This change helps when only a few of many partitions are relevant.
Allow ordered scans of partitions to avoid sorting in more cases (David Rowley)
Previously, a partitioned table with a DEFAULT
partition or a LIST
partition containing multiple values could not be used for ordered partition scans. Now they can be used if such partitions are pruned during planning.
Improve foreign key behavior of updates on partitioned tables that move rows between partitions (Amit Langote)
Previously, such updates ran a delete action on the source partition and an insert action on the target partition. PostgreSQL will now run an update action on the partition root, providing cleaner semantics.
Allow CLUSTER
on partitioned tables (Justin Pryzby)
Fix ALTER TRIGGER RENAME
on partitioned tables to properly rename triggers on all partitions (Arne Roland, Álvaro Herrera)
Also prohibit cloned triggers from being renamed.
Allow btree indexes on system and TOAST tables to efficiently store duplicates (Peter Geoghegan)
Previously de-duplication was disabled for these types of indexes.
Improve lookup performance of GiST indexes that were built using sorting (Aliaksandr Kalenik, Sergei Shoulbakov, Andrey Borodin)
Allow unique constraints and indexes to treat NULL
values as not distinct (Peter Eisentraut)
Previously NULL
entries were always treated as distinct values, but this can now be changed by creating constraints and indexes using UNIQUE NULLS NOT DISTINCT
.
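A small sketch of the new syntax (table and column names are hypothetical):

    CREATE TABLE t (x int, UNIQUE NULLS NOT DISTINCT (x));
    INSERT INTO t VALUES (NULL);  -- accepted
    INSERT INTO t VALUES (NULL);  -- rejected, because NULLs now count as duplicates here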
Allow extended statistics to record statistics for a parent with all its children (Tomas Vondra, Justin Pryzby)
Regular statistics already tracked parent and parent-plus-all-children statistics separately.
Add server variable recursive_worktable_factor
to allow the user to specify the expected size of the working table of a recursive query (Simon Riggs)
Allow hash lookup for NOT IN
clauses with many constants (David Rowley, James Coleman)
Previously the code always sequentially scanned the list of values.
Allow SELECT DISTINCT
to be parallelized (David Rowley)
Speed up encoding validation of UTF-8 text by processing 16 bytes at a time (John Naylor, Heikki Linnakangas)
This will improve text-heavy operations like COPY FROM
.
Improve performance for sorts that exceed work_mem
(Heikki Linnakangas)
When the sort data no longer fits in work_mem
, switch to a batch sorting algorithm that uses more output streams than before.
Improve performance and reduce memory consumption of in-memory sorts (Ronan Dunklau, David Rowley, Thomas Munro, John Naylor)
Allow WAL full page writes to use LZ4 and Zstandard compression (Andrey Borodin, Justin Pryzby)
This is controlled by the wal_compression
server setting.
Add support for writing WAL using direct I/O on macOS (Thomas Munro)
This only works if max_wal_senders = 0
and wal_level = minimal
.
Allow vacuum to be more aggressive in setting the oldest frozen and multi transaction id (Peter Geoghegan)
Allow a query referencing multiple foreign tables to perform parallel foreign table scans in more cases (Andrey Lepikhov, Etsuro Fujita)
Improve the performance of window functions that use row_number()
, rank()
, dense_rank()
and count()
(David Rowley)
Improve the performance of spinlocks on high-core-count ARM64 systems (Geoffrey Blake)
Enable default logging of checkpoints and slow autovacuum operations (Bharath Rupireddy)
This changes the default of log_checkpoints
to on
and that of log_autovacuum_min_duration
to 10 minutes. This will cause even an idle server to generate some log output, which might cause problems on resource-constrained servers without log file rotation. These defaults should be changed in such cases.
Generate progress messages in the server log during slow server starts (Nitin Jadhav, Robert Haas)
The messages report the cause of the delay. The time interval for notification is controlled by the new server variable log_startup_progress_interval
.
Store cumulative statistics system data in shared memory (Kyotaro Horiguchi, Andres Freund, Melanie Plageman)
Previously this data was sent to a statistics collector process via UDP packets, and could only be read by sessions after transferring it via the file system. There is no longer a separate statistics collector process.
Add additional information to VACUUM VERBOSE
and autovacuum logging messages (Peter Geoghegan)
Add EXPLAIN (BUFFERS)
output for temporary file block I/O (Masahiko Sawada)
Allow log output in JSON format (Sehrope Sarkuni, Michael Paquier)
The new setting is log_destination = jsonlog
.
Allow pg_stat_reset_single_table_counters()
to reset the counters of relations shared across all databases (Sadhuprasad Patro)
Add wait events for local shell commands (Fujii Masao)
The new wait events are used when calling archive_command
, archive_cleanup_command
, restore_command
and recovery_end_command
.
Allow table accesses done by a view to optionally be controlled by privileges of the view's caller (Christoph Heiss)
Previously, view accesses were always treated as being done by the view's owner. That's still the default.
Allow members of the pg_write_server_files
predefined role to perform server-side base backups (Dagfinn Ilmari Mannsåker)
Previously only superusers could perform such backups.
Allow GRANT
to grant permissions to change individual server variables via SET
and ALTER SYSTEM
(Mark Dilger)
The new function has_parameter_privilege()
reports on this privilege.
Add predefined role pg_checkpoint
that allows members to run CHECKPOINT
(Jeff Davis)
Previously checkpoints could only be run by superusers.
Allow members of the pg_read_all_stats
predefined role to access the views pg_backend_memory_contexts
and pg_shmem_allocations
(Bharath Rupireddy)
Previously these views could only be accessed by superusers.
Allow GRANT
to grant permissions on pg_log_backend_memory_contexts()
(Jeff Davis)
Previously this function could only be run by superusers.
Add server variable shared_memory_size
to report the size of allocated shared memory (Nathan Bossart)
Add server variable shared_memory_size_in_huge_pages
to report the number of huge memory pages required (Nathan Bossart)
This is only supported on Linux.
Honor server variable shared_preload_libraries
in single-user mode (Jeff Davis)
This change supports use of shared_preload_libraries
to load custom access methods and WAL resource managers, which would be essential for database access even in single-user mode.
On Solaris, make the default setting of dynamic_shared_memory_type
be sysv
(Thomas Munro)
The previous default choice, posix
, can result in spurious failures on this platform.
Allow postgres -C
to properly report runtime-computed values (Nathan Bossart)
Previously runtime-computed values data_checksums
, wal_segment_size
, and data_directory_mode
could not be reported accurately by this option. However, this still does not work on a running server.
Add support for LZ4 and Zstandard compression of server-side base backups (Jeevan Ladhe, Robert Haas)
Run the checkpointer and bgwriter processes during crash recovery (Thomas Munro)
This helps to speed up long crash recoveries.
Allow WAL processing to pre-fetch needed file contents (Thomas Munro)
This is controlled by the server variable recovery_prefetch
.
Allow archiving via loadable modules (Nathan Bossart)
Previously, archiving was only done by calling shell commands. The new server variable archive_library
can be set to specify a library to be called for archiving.
No longer require IDENTIFY_SYSTEM
to be run before START_REPLICATION
(Jeff Davis)
Allow publication of all tables in a schema (Vignesh C, Hou Zhijie, Amit Kapila)
For example, this syntax is now supported: CREATE PUBLICATION pub1 FOR TABLES IN SCHEMA s1,s2
. ALTER PUBLICATION
supports a similar syntax. Tables added later to the listed schemas will also be replicated.
Allow publication content to be filtered using a WHERE
clause (Hou Zhijie, Euler Taveira, Peter Smith, Ajin Cherian, Tomas Vondra, Amit Kapila)
Rows not satisfying the WHERE
clause are not published.
Allow publication content to be restricted to specific columns (Tomas Vondra, Álvaro Herrera, Rahila Syed)
Allow skipping of transactions on a subscriber using ALTER SUBSCRIPTION ... SKIP
(Masahiko Sawada)
Add support for prepared (two-phase) transactions to logical replication (Peter Smith, Ajin Cherian, Amit Kapila, Nikhil Sontakke, Stas Kelvich)
The new CREATE_REPLICATION_SLOT
option is called TWO_PHASE
. pg_recvlogical now supports a new --two-phase
option during slot creation.
Prevent logical replication of empty transactions (Ajin Cherian, Hou Zhijie, Euler Taveira)
Previously, publishers would send empty transactions to subscribers if subscribed tables were not modified.
Add SQL functions to monitor the directory contents of logical replication slots (Bharath Rupireddy)
The new functions are pg_ls_logicalsnapdir()
, pg_ls_logicalmapdir()
, and pg_ls_replslotdir()
. They can be run by members of the predefined pg_monitor
role.
Allow subscribers to stop the application of logical replication changes on error (Osumi Takamichi, Mark Dilger)
This is enabled with the subscriber option disable_on_error
and avoids possible infinite error loops during stream application.
Adjust subscriber server variables to match the publisher so datetime and float8 values are interpreted consistently (Japin Li)
Some publishers might be relying on inconsistent behavior.
Add system view pg_stat_subscription_stats
to report on subscriber activity (Masahiko Sawada)
The new function pg_stat_reset_subscription_stats()
allows resetting these statistics counters.
Suppress duplicate entries in the pg_publication_tables
system view (Hou Zhijie)
In some cases a partition could appear more than once.
Add SQL MERGE
command to adjust one table to match another (Simon Riggs, Pavan Deolasee, Álvaro Herrera, Amit Langote)
This is similar to INSERT ... ON CONFLICT
but more batch-oriented.
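A minimal sketch of the new command (the table and column names here are hypothetical):

    MERGE INTO customer_account ca
    USING recent_transactions t
       ON t.customer_id = ca.customer_id
    WHEN MATCHED THEN
      UPDATE SET balance = ca.balance + t.transaction_value
    WHEN NOT MATCHED THEN
      INSERT (customer_id, balance)
      VALUES (t.customer_id, t.transaction_value);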
Add support for HEADER
option in COPY
text format (Rémi Lapeyre)
The new option causes the column names to be output, and optionally verified on input.
Add new WAL-logged method for database creation (Dilip Kumar)
This is the new default method for copying the template database, as it avoids the need for checkpoints during database creation. However, it might be slow if the template database is large, so the old method is still available.
Allow CREATE DATABASE
to set the database OID (Shruthi Gowda, Antonin Houska)
Prevent DROP DATABASE
, DROP TABLESPACE
, and ALTER DATABASE SET TABLESPACE
from occasionally failing during concurrent use on Windows (Thomas Munro)
Allow foreign key ON DELETE SET
actions to affect only specified columns (Paul Martinez)
Previously, all of the columns in the foreign key were always affected.
Allow ALTER TABLE
to modify a table's ACCESS METHOD
(Justin Pryzby, Jeff Davis)
Properly call object access hooks when ALTER TABLE
causes table rewrites (Michael Paquier)
Allow creation of unlogged sequences (Peter Eisentraut)
Track dependencies on individual columns in the results of functions returning composite types (Tom Lane)
Previously, if a view or rule contained a reference to a specific column within the result of a composite-returning function, that was not noted as a dependency; the view or rule was only considered to depend on the composite type as a whole. This meant that dropping the individual column would be allowed, causing problems in later use of the view or rule. The column-level dependency is now also noted, so that dropping such a column will be rejected unless the view is changed or dropped.
Allow the scale of a numeric
value to be negative, or greater than its precision (Dean Rasheed, Tom Lane)
This allows rounding of values to the left of the decimal point, e.g., '1234'::numeric(4, -2)
returns 1200.
Improve overflow detection when casting values to interval (Joe Koshakow)
Change the I/O format of type "char"
for non-ASCII characters (Tom Lane)
Update the display width information of modern Unicode characters, like emojis (Jacob Champion)
Also update from Unicode 5.0 to 14.0.0. There is now an automated way to keep Postgres updated with Unicode releases.
Add multirange input to range_agg()
(Paul Jungwirth)
Add regular expression functions for compatibility with other relational systems (Gilles Darold, Tom Lane)
The new functions are regexp_count()
, regexp_instr()
, regexp_like()
, and regexp_substr()
. Some new optional arguments were also added to regexp_replace()
.
Add the ability to compute the distance between polygons
(Tom Lane)
Add to_char()
format codes of, tzh, and tzm (Nitin Jadhav)
The upper-case equivalents of these were already supported.
When applying AT TIME ZONE
to a time with time zone
value, use the transaction start time rather than wall clock time to determine whether DST applies (Aleksander Alekseev, Tom Lane)
This allows the conversion to be considered stable rather than volatile, and it saves a kernel call per invocation.
Ignore NULL array elements in ts_delete()
and setweight()
functions with array arguments (Jean-Christophe Arnu)
These functions effectively ignore empty-string array elements (since those could never match a valid lexeme). It seems consistent to let them ignore NULL elements too, instead of failing.
Add support for petabyte units to pg_size_pretty()
and pg_size_bytes()
(David Christensen)
Change pg_event_trigger_ddl_commands()
to output references to other sessions' temporary schemas using the actual schema name (Tom Lane)
Previously this function reported all temporary schemas as pg_temp
, but it's misleading to use that for any but the current session's temporary schema.
Fix enforcement of PL/pgSQL variable CONSTANT
markings (Tom Lane)
Previously, a variable could be used as a CALL
output parameter or refcursor OPEN
variable despite being marked CONSTANT
.
Allow IP address matching against a server certificate's Subject Alternative Name (Jacob Champion)
Allow PQsslAttribute()
to report the SSL library type without requiring a libpq connection (Jacob Champion)
Change query cancellations sent by the client to use the same TCP settings as normal client connections (Jelte Fennema)
This allows configured TCP timeouts to apply to query cancel connections.
Prevent libpq event callback failures from forcing an error result (Tom Lane)
Allow pgbench to retry after serialization and deadlock failures (Yugo Nagata, Marina Polyakova)
Improve performance of psql's \copy
command, by sending data in larger chunks (Heikki Linnakangas)
Add \dconfig
command to report server variables (Mark Dilger, Tom Lane)
This is similar to the server-side SHOW
command, but it can process patterns to show multiple variables conveniently.
Add \getenv
command to assign the value of an environment variable to a psql variable (Tom Lane)
Add +
option to the \lo_list
and \dl
commands to show large-object privileges (Pavel Luzanov)
Add a pager option for the \watch
command (Pavel Stehule, Thomas Munro)
This is only supported on Unix and is controlled by the PSQL_WATCH_PAGER
environment variable.
Make psql include intra-query double-hyphen comments in queries sent to the server (Tom Lane, Greg Nancarrow)
Previously such comments were removed from the query before being sent. Double-hyphen comments that are before any query text are not sent, and are not recorded as separate psql history entries.
Adjust psql so that Readline's meta-#
command will insert a double-hyphen comment marker (Tom Lane)
Previously a pound marker was inserted, unless the user had taken the trouble to configure a non-default comment marker.
Make psql output all results when multiple queries are passed to the server at once (Fabien Coelho)
Previously, only the last query result was displayed. The old behavior can be restored by setting the SHOW_ALL_RESULTS
psql variable to off
.
After an error is detected in --single-transaction
mode, change the final COMMIT
command to ROLLBACK
only if ON_ERROR_STOP
is set (Michael Paquier)
Previously, detection of an error in a -c
command or -f
script file would lead to issuing ROLLBACK
at the end, regardless of the value of ON_ERROR_STOP
.
Improve psql's tab completion (Shinya Kato, Dagfinn Ilmari Mannsåker, Peter Smith, Koyu Tanigawa, Ken Kato, David Fetter, Haiying Tang, Peter Eisentraut, Álvaro Herrera, Tom Lane, Masahiko Sawada)
Limit support of psql's backslash commands to servers running PostgreSQL 9.2 or later (Tom Lane)
Remove code that was only used when running with an older server. Commands that do not require any version-specific adjustments compared to 9.2 will still work.
Make pg_dump dump public
schema ownership changes and security labels (Noah Misch)
Improve performance of dumping databases with many objects (Tom Lane)
This will also improve the performance of pg_upgrade.
Improve parallel pg_dump's performance for tables with large TOAST tables (Tom Lane)
Add dump/restore option --no-table-access-method
to force restore to only use the default table access method (Justin Pryzby)
Limit support of pg_dump and pg_dumpall to servers running PostgreSQL 9.2 or later (Tom Lane)
Add new pg_basebackup option --target
to control the base backup location (Robert Haas)
The new options are server
to write the backup locally and blackhole
to discard the backup (for testing).
Allow pg_basebackup to do server-side gzip, LZ4, and Zstandard compression and client-side LZ4 and Zstandard compression of base backup files (Dipesh Pandit, Jeevan Ladhe)
Client-side gzip
compression was already supported.
Allow pg_basebackup to compress on the server side and decompress on the client side before storage (Dipesh Pandit)
This is accomplished by specifying compression on the server side and plain output format.
Allow pg_basebackup's --compress
option to control the compression location (server or client), compression method, and compression options (Michael Paquier, Robert Haas)
Add the LZ4 compression method to pg_receivewal (Georgios Kokolatos)
This is enabled via --compress=lz4
and requires binaries to be built using --with-lz4
.
Add additional capabilities to pg_receivewal's --compress
option (Georgios Kokolatos)
Improve pg_receivewal's ability to restart at the proper WAL location (Ronan Dunklau)
Previously, pg_receivewal would start based on the WAL file stored in the local archive directory, or at the sending server's current WAL flush location. With this change, if the sending server is running Postgres 15 or later, the local archive directory is empty, and a replication slot is specified, the replication slot's restart point will be used.
Add pg_rewind option --config-file
to simplify use when server configuration files are stored outside the data directory (Gunnar Bluth)
Store pg_upgrade's log and temporary files in a subdirectory of the new cluster called pg_upgrade_output.d
(Justin Pryzby)
Previously such files were left in the current directory, requiring manual cleanup. Now they are automatically removed on successful completion of pg_upgrade.
Disable default status reporting during pg_upgrade operation if the output is not a terminal (Andres Freund)
The status reporting output can be enabled for non-tty usage by using --verbose
.
Make pg_upgrade report all databases with invalid connection settings (Jeevan Ladhe)
Previously only the first database with an invalid connection setting was reported.
Make pg_upgrade preserve tablespace and database OIDs, as well as relation relfilenode numbers (Shruthi Gowda, Antonin Houska)
Add a --no-sync
option to pg_upgrade (Michael Paquier)
This is recommended only for testing.
Limit support of pg_upgrade to old servers running PostgreSQL 9.2 or later (Tom Lane)
Allow pg_waldump output to be filtered by relation file node, block number, fork number, and full page images (David Christensen, Thomas Munro)
Make pg_waldump report statistics before an interrupted exit (Bharath Rupireddy)
For example, issuing a control-C in a terminal running pg_waldump --stats --follow
will report the current statistics before exiting. This does not work on Windows.
Improve descriptions of some transaction WAL records reported by pg_waldump (Masahiko Sawada, Michael Paquier)
Allow pg_waldump to dump information about multiple resource managers (Heikki Linnakangas)
This is enabled by specifying the --rmgr
option multiple times.
Add documentation for pg_encoding_to_char()
and pg_char_to_encoding()
(Ian Lawrence Barwick)
Document the ^@
starts-with operator (Tom Lane)
Add support for continuous integration testing using cirrus-ci (Andres Freund, Thomas Munro, Melanie Plageman)
Add configure option --with-zstd
to enable Zstandard builds (Jeevan Ladhe, Robert Haas, Michael Paquier)
Add an ABI identifier field to the magic block in loadable libraries, allowing non-community PostgreSQL distributions to identify libraries that are not compatible with other builds (Peter Eisentraut)
An ABI field mismatch will generate an error at load time.
Create a new pg_type.typcategory
value for "char"
(Tom Lane)
Some other internal-use-only types have also been assigned to this category.
Add new protocol message TARGET
to specify a new COPY
method to be used for base backups (Robert Haas)
pg_basebackup now uses this method.
Add new protocol message COMPRESSION
and COMPRESSION_DETAIL
to specify the compression method and options (Robert Haas)
Remove server support for old BASE_BACKUP
command syntax and base backup protocol (Robert Haas)
Add support for extensions to set custom backup targets (Robert Haas)
Allow extensions to define custom WAL resource managers (Jeff Davis)
Add function pg_settings_get_flags()
to get the flags of server variables (Justin Pryzby)
On Windows, export all the server's global variables using PGDLLIMPORT
markers (Robert Haas)
Previously, only specific variables were accessible to extensions on Windows.
Require GNU make version 3.81 or later to build PostgreSQL (Tom Lane)
Require OpenSSL to build the pgcrypto extension (Peter Eisentraut)
Require Perl version 5.8.3 or later (Dagfinn Ilmari Mannsåker)
Require Python version 3.2 or later (Andres Freund)
Allow amcheck to check sequences (Mark Dilger)
Improve amcheck sanity checks for TOAST tables (Mark Dilger)
Add new module basebackup_to_shell as an example of a custom backup target (Robert Haas)
Add new module basic_archive as an example of performing archiving via a library (Nathan Bossart)
Allow btree_gist indexes on boolean columns (Emre Hasegeli)
These can be used for exclusion constraints.
Fix pageinspect's page_header()
to handle 32-kilobyte page sizes (Quan Zongliang)
Previously, improper negative values could be returned in certain cases.
Add counters for temporary file block I/O to pg_stat_statements (Masahiko Sawada)
Add JIT counters to pg_stat_statements (Magnus Hagander)
Add new module pg_walinspect (Bharath Rupireddy)
This gives SQL-level output similar to pg_waldump.
Indicate the permissive/enforcing state in sepgsql log messages (Dave Page)
Allow postgres_fdw to push down CASE
expressions (Alexander Pyhalov)
Add server variable postgres_fdw.application_name
to control the application name of postgres_fdw connections (Hayato Kuroda)
Previously the remote session's application_name
could only be set on the remote server or via a postgres_fdw connection specification. postgres_fdw.application_name
supports some escape sequences for customization, making it easier to tell such connections apart on the remote server.
Allow parallel commit on postgres_fdw servers (Etsuro Fujita)
This is enabled with the CREATE SERVER
option parallel_commit
.
The following individuals (in alphabetical order) have contributed to this release as patch authors, committers, reviewers, testers, or reporters of issues.
dblink_get_result — gets an async query result
dblink_get_result
collects the results of an asynchronous query previously sent with dblink_send_query
. If the query is not already completed, dblink_get_result
will wait until it is.
connname
Name of the connection to use.
fail_on_error
If true (the default when omitted) then an error thrown on the remote side of the connection causes an error to also be thrown locally. If false, the remote error is locally reported as a NOTICE, and the function returns no rows.
For an async query (that is, a SQL statement returning rows), the function returns the row(s) produced by the query. To use this function, you will need to specify the expected set of columns, as previously discussed for dblink
.
For an async command (that is, a SQL statement not returning rows), the function returns a single row with a single text column containing the command's status string. It is still necessary to specify that the result will have a single text column in the calling FROM
clause.
This function must be called if dblink_send_query
returned 1. It must be called once for each query sent, and one additional time to obtain an empty set result, before the connection can be used again.
When using dblink_send_query
and dblink_get_result
, dblink fetches the entire remote query result before returning any of it to the local query processor. If the query returns a large number of rows, this can result in transient memory bloat in the local session. It may be better to open such a query as a cursor with dblink_open
and then fetch a manageable number of rows at a time. Alternatively, use plain dblink()
, which avoids memory bloat by spooling large result sets to disk.
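A usage sketch tying this together with dblink_send_query (connection name, database, and column list are placeholders):

    SELECT dblink_connect('dtest1', 'dbname=remotedb');
    SELECT dblink_send_query('dtest1', 'select * from foo where f1 < 3');
    SELECT * FROM dblink_get_result('dtest1') AS t1(f1 int, f2 text, f3 text[]);
    -- call once more to drain the empty final result before reusing the connection
    SELECT * FROM dblink_get_result('dtest1') AS t1(f1 int, f2 text, f3 text[]);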
dblink_get_pkey — returns the positions and field names of a relation's primary key fields
dblink_get_pkey
provides information about the primary key of a relation in the local database. This is sometimes useful in generating queries to be sent to remote databases.
relname
Name of a local relation, for example foo
or myschema.mytab
. Include double quotes if the name is mixed-case or contains special characters, for example "FooBar"
; without quotes, the string will be folded to lower case.
Returns one row for each primary key field, or no rows if the relation has no primary key. The result row type is defined as
The position
column simply runs from 1 to N
; it is the number of the field within the primary key, not the number within the table's columns.
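For illustration (the table is hypothetical):

    CREATE TABLE foobar (f1 int, f2 int, f3 int,
                         PRIMARY KEY (f1, f2, f3));
    SELECT * FROM dblink_get_pkey('foobar');
    -- one row per key column: positions 1..3 with column names f1, f2, f3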
dblink_build_sql_insert — builds an INSERT statement using a local tuple, replacing the primary key field values with alternative supplied values
dblink_build_sql_insert
can be useful in doing selective replication of a local table to a remote database. It selects a row from the local table based on primary key, and then builds a SQL INSERT
command that will duplicate that row, but with the primary key values replaced by the values in the last argument. (To make an exact copy of the row, just specify the same values for the last two arguments.)
relname
Name of a local relation, for example foo
or myschema.mytab
. Include double quotes if the name is mixed-case or contains special characters, for example "FooBar"
; without quotes, the string will be folded to lower case.
primary_key_attnums
Attribute numbers (1-based) of the primary key fields, for example 1 2
.
num_primary_key_atts
The number of primary key fields.
src_pk_att_vals_array
Values of the primary key fields to be used to look up the local tuple. Each field is represented in text form. An error is thrown if there is no local row with these primary key values.
tgt_pk_att_vals_array
Values of the primary key fields to be placed in the resulting INSERT
command. Each field is represented in text form.
Returns the requested SQL statement as text.
As of PostgreSQL 9.0, the attribute numbers in primary_key_attnums
are interpreted as logical column numbers, corresponding to the column's position in SELECT * FROM relname
. Previous versions interpreted the numbers as physical column positions. There is a difference if any column(s) to the left of the indicated column have been dropped during the lifetime of the table.
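For illustration (assuming a local table foo whose primary key is its first two columns):

    SELECT dblink_build_sql_insert('foo', '1 2', 2, '{"1", "a"}', '{"1", "b"}');
    -- returns an INSERT statement that copies the local row with key (1, a),
    -- substituting (1, b) for the primary key values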
dblink_send_query — sends an async query to a remote database
dblink_send_query sends a query to be executed asynchronously, that is, without immediately waiting for the result. There must not be an async query already in progress on the connection.
After successfully dispatching an async query, completion status can be checked with dblink_is_busy, and the results are ultimately collected with dblink_get_result. It is also possible to attempt cancellation of an in-progress async query using dblink_cancel_query.
connname
Name of the connection to use.
sql
The SQL statement that you wish to execute in the remote database, for example select * from pg_class
.
Returns 1 if the query was successfully dispatched, 0 otherwise.
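For illustration (connection name and query are placeholders):

    SELECT dblink_send_query('dtest1', 'select * from foo where f1 < 3');
    -- returns 1; collect the rows later with dblink_get_result('dtest1')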
dblink_get_notify — retrieve async notifications on a connection
dblink_get_notify
retrieves notifications on either the unnamed connection, or on a named connection if specified. To receive notifications via dblink, LISTEN
must first be issued, using dblink_exec
. For details see and .
connname
The name of a named connection to get notifications on.
Returns setof (notify_name text, be_pid int, extra text)
, or an empty set if none.
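A usage sketch (the channel name virtual is arbitrary):

    SELECT dblink_exec('LISTEN virtual');
    SELECT * FROM dblink_get_notify();
    -- returns any notifications that have arrived on the unnamed connection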
dblink_error_message — gets last error message on the named connection
dblink_error_message
fetches the most recent remote error message for a given connection.
connname
Name of the connection to use.
Returns last error message, or OK
if there has been no error in this connection.
When asynchronous queries are initiated by dblink_send_query
, the error message associated with the connection might not get updated until the server's response message is consumed. This typically means that dblink_is_busy
or dblink_get_result
should be called prior to dblink_error_message
, so that any error generated by the asynchronous query will be visible.
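A sketch of the ordering described above (the connection name dtest1 is a placeholder):

    SELECT dblink_is_busy('dtest1');        -- poll the async query first
    SELECT dblink_error_message('dtest1');  -- then any error it raised will be visible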
dblink_fetch — returns rows from an open cursor in a remote database
dblink_fetch
fetches rows from a cursor previously established by dblink_open
.
connname
Name of the connection to use; omit this parameter to use the unnamed connection.
cursorname
The name of the cursor to fetch from.
howmany
The maximum number of rows to retrieve. The next howmany
rows are fetched, starting at the current cursor position, moving forward. Once the cursor has reached its end, no more rows are produced.
fail_on_error
If true (the default when omitted) then an error thrown on the remote side of the connection causes an error to also be thrown locally. If false, the remote error is locally reported as a NOTICE, and the function returns no rows.
The function returns the row(s) fetched from the cursor. To use this function, you will need to specify the expected set of columns, as previously discussed for dblink
.
On a mismatch between the number of return columns specified in the FROM
clause, and the actual number of columns returned by the remote cursor, an error will be thrown. In this event, the remote cursor is still advanced by as many rows as it would have been if the error had not occurred. The same is true for any other error occurring in the local query after the remote FETCH
has been done.
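For illustration (cursor name, query, and column list are placeholders):

    SELECT dblink_open('foo',
           'select proname, prosrc from pg_proc where proname like ''bytea%''');
    SELECT * FROM dblink_fetch('foo', 5) AS (funcname name, source text);
    -- fetches the next five rows; repeat until no more rows are returned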
The file_fdw module provides the foreign-data wrapper file_fdw, which can be used to access data files in the server's file system, or to execute programs on the server and read their output. The data file or program output must be in a format that can be read by COPY FROM; see the description of COPY for details. Access to data files is currently read-only.
A foreign table created using this wrapper can have the following options:
filename
Specifies the file to be read. Must be an absolute path name. Either filename
or program
must be specified, but not both.
program
Specifies the command to be executed. The standard output of this command will be read as though COPY FROM PROGRAM
were used. Either program
or filename
must be specified, but not both.
format
Specifies the data format, the same as COPY
's FORMAT
option.
header
Specifies whether the data has a header line, the same as COPY
's HEADER
option.
delimiter
Specifies the data delimiter character, the same as COPY
's DELIMITER
option.
quote
Specifies the data quote character, the same as COPY
's QUOTE
option.
escape
Specifies the data escape character, the same as COPY
's ESCAPE
option.
null
Specifies the data null string, the same as COPY
's NULL
option.
encoding
Specifies the data encoding, the same as COPY
's ENCODING
option.
Note that while COPY
allows options such as HEADER
to be specified without a corresponding value, the foreign table option syntax requires a value to be present in all cases. To activate COPY
options typically written without a value, you can pass the value TRUE, since all such options are Booleans.
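A sketch of a foreign table using some of these options (server name, file path, and columns are hypothetical):

    CREATE SERVER csv_files FOREIGN DATA WRAPPER file_fdw;

    CREATE FOREIGN TABLE sales_csv (
        id     integer,
        amount numeric
    ) SERVER csv_files
      OPTIONS (filename '/path/to/sales.csv', format 'csv', header 'TRUE');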
A column of a foreign table created using this wrapper can have the following options:
force_not_null
This is a Boolean option. If true, it specifies that values of the column should not be matched against the null string (that is, the table-level null
option). This has the same effect as listing the column in COPY
's FORCE_NOT_NULL
option.
force_null
This is a Boolean option. If true, it specifies that values of the column which match the null string are returned as NULL
even if the value is quoted. Without this option, only unquoted values matching the null string are returned as NULL
. This has the same effect as listing the column in COPY
's FORCE_NULL
option.
COPY
's FORCE_QUOTE
option is currently not supported by file_fdw
.
These options can only be specified for a foreign table or its columns, not in the options of the file_fdw
foreign-data wrapper, nor in the options of a server or user mapping using the wrapper.
Changing table-level options requires being a superuser or having the privileges of the default role pg_read_server_files
(to use a filename) or the default role pg_execute_server_program
(to use a program), for security reasons: only certain users should be able to control which file is read or which program is run. In principle regular users could be allowed to change the other options, but that's not supported at present.
When specifying the program
option, keep in mind that the option string is executed by the shell. If you need to pass any arguments to the command that come from an untrusted source, you must be careful to strip or escape any characters that might have special meaning to the shell. For security reasons, it is best to use a fixed command string, or at least avoid passing any user input in it.
For a foreign table using file_fdw
, EXPLAIN
shows the name of the file to be read or program to be run. For a file, unless COSTS OFF
is specified, the file size (in bytes) is shown as well.
One obvious use of file_fdw is to make the PostgreSQL activity log available as a table for querying. To do this, you must first be logging to a CSV file, which here we will call pglog.csv. First, install file_fdw as an extension, then create a foreign server, and finally create the foreign data table with the CREATE FOREIGN TABLE command, defining the table's columns, the CSV file name, and its format (all three steps are sketched together below).
That's it; now you can query your log directly. In production, of course, you would need to define some way to deal with log rotation.
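The three statements referred to above are not reproduced in this copy; a sketch of them, with an abbreviated column list (the real CSV log format has many more columns, as documented for csvlog), is:

    CREATE EXTENSION file_fdw;

    CREATE SERVER pglog FOREIGN DATA WRAPPER file_fdw;

    -- abbreviated sketch: add the remaining csvlog columns for real use
    CREATE FOREIGN TABLE pglog (
      log_time       timestamp(3) with time zone,
      user_name      text,
      database_name  text,
      process_id     integer,
      error_severity text,
      message        text
    ) SERVER pglog
    OPTIONS ( filename 'log/pglog.csv', format 'csv' );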
dblink_cancel_query — cancels any active query on the named connection
dblink_cancel_query
attempts to cancel any query that is in progress on the named connection. Note that this is not certain to succeed (since, for example, the remote query might already have finished). A cancel request simply improves the odds that the query will fail soon. You must still complete the normal query protocol, for example by calling dblink_get_result
.
connname
Name of the connection to use.
Returns OK
if the cancel request has been sent, or the text of an error message on failure.
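For illustration (the connection name dtest1 is a placeholder):

    SELECT dblink_cancel_query('dtest1');
    -- returns OK if the cancel request was sent; the async result must still be collected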
dblink_build_sql_delete — builds a DELETE statement using supplied values for primary key field values
dblink_build_sql_delete
can be useful in doing selective replication of a local table to a remote database. It builds a SQL DELETE
command that will delete the row with the given primary key values.
relname
Name of a local relation, for example foo
or myschema.mytab
. Include double quotes if the name is mixed-case or contains special characters, for example "FooBar"
; without quotes, the string will be folded to lower case.
primary_key_attnums
Attribute numbers (1-based) of the primary key fields, for example 1 2
.
num_primary_key_atts
The number of primary key fields.
tgt_pk_att_vals_array
Values of the primary key fields to be used in the resulting DELETE
command. Each field is represented in text form.
Returns the requested SQL statement as text.
As of PostgreSQL 9.0, the attribute numbers in primary_key_attnums
are interpreted as logical column numbers, corresponding to the column's position in SELECT * FROM relname
. Previous versions interpreted the numbers as physical column positions. There is a difference if any column(s) to the left of the indicated column have been dropped during the lifetime of the table.
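For illustration (assuming a local table foo whose primary key is its first two columns):

    SELECT dblink_build_sql_delete('foo', '1 2', 2, '{"0", "a"}');
    -- returns a DELETE statement targeting the row whose key values are (0, a)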
The earthdistance
module provides two different approaches to calculating great circle distances on the surface of the Earth. The one described first depends on the cube
module (which must be installed before earthdistance
can be installed). The second one is based on the built-in point
data type, using longitude and latitude for the coordinates.
In this module, the Earth is assumed to be perfectly spherical. (If that's too inaccurate for you, you might want to look at the project.)
Data is stored in cubes that are points (both corners are the same) using 3 coordinates representing the x, y, and z distance from the center of the Earth. A domain earth
over cube
is provided, which includes constraint checks that the value meets these restrictions and is reasonably close to the actual surface of the Earth.
The radius of the Earth is obtained from the earth()
function. It is given in meters. But by changing this one function you can change the module to use some other units, or to use a different value of the radius that you feel is more appropriate.
This package has applications to astronomical databases as well. Astronomers will probably want to change earth()
to return a radius of 180/pi()
so that distances are in degrees.
Functions are provided to support input in latitude and longitude (in degrees), to support output of latitude and longitude, to calculate the great circle distance between two points and to easily specify a bounding box usable for index searches.
The provided functions are shown in the table below.
Function | Returns | Description |
---|---|---|
earth() | float8 | Returns the assumed radius of the Earth. |
sec_to_gc(float8) | float8 | Converts the normal straight line (secant) distance between two points on the surface of the Earth to the great circle distance between them. |
gc_to_sec(float8) | float8 | Converts the great circle distance between two points on the surface of the Earth to the normal straight line (secant) distance between them. |
ll_to_earth(float8, float8) | earth | Returns the location of a point on the surface of the Earth given its latitude (argument 1) and longitude (argument 2) in degrees. |
latitude(earth) | float8 | Returns the latitude in degrees of a point on the surface of the Earth. |
longitude(earth) | float8 | Returns the longitude in degrees of a point on the surface of the Earth. |
earth_distance(earth, earth) | float8 | Returns the great circle distance between two points on the surface of the Earth. |
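For illustration, a small sketch using these functions (the coordinates are arbitrary sample values):

    -- great circle distance in meters between two latitude/longitude pairs
    SELECT earth_distance(ll_to_earth(52.52, 13.40),
                          ll_to_earth(48.85, 2.35));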
The second part of the module relies on representing Earth locations as values of type point
, in which the first component is taken to represent longitude in degrees, and the second component is taken to represent latitude in degrees. Points are taken as (longitude, latitude) and not vice versa because longitude is closer to the intuitive idea of x-axis and latitude to y-axis.
Note that unlike the cube
-based part of the module, units are hardwired here: changing the earth()
function will not affect the results of this operator.
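For illustration (the coordinates are arbitrary sample values, given as longitude then latitude):

    -- distance in statute miles between two points
    SELECT '(-118.4079, 33.9434)'::point <@> '(-0.4615, 51.4775)'::point;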
One disadvantage of the longitude/latitude representation is that you need to be careful about the edge conditions near the poles and near +/- 180 degrees of longitude. The cube
-based representation avoids these discontinuities.
dblink_build_sql_update — builds an UPDATE statement using a local tuple, replacing the primary key field values with alternative supplied values
dblink_build_sql_update
can be useful in doing selective replication of a local table to a remote database. It selects a row from the local table based on primary key, and then builds a SQL UPDATE
command that will duplicate that row, but with the primary key values replaced by the values in the last argument. (To make an exact copy of the row, just specify the same values for the last two arguments.) The UPDATE
command always assigns all fields of the row — the main difference between this and dblink_build_sql_insert
is that it's assumed that the target row already exists in the remote table.
relname
Name of a local relation, for example foo
or myschema.mytab
. Include double quotes if the name is mixed-case or contains special characters, for example "FooBar"
; without quotes, the string will be folded to lower case.
primary_key_attnums
Attribute numbers (1-based) of the primary key fields, for example 1 2
.
num_primary_key_atts
The number of primary key fields.
src_pk_att_vals_array
Values of the primary key fields to be used to look up the local tuple. Each field is represented in text form. An error is thrown if there is no local row with these primary key values.
tgt_pk_att_vals_array
Values of the primary key fields to be placed in the resulting UPDATE
command. Each field is represented in text form.
Returns the requested SQL statement as text.
As of PostgreSQL 9.0, the attribute numbers in primary_key_attnums
are interpreted as logical column numbers, corresponding to the column's position in SELECT * FROM relname
. Previous versions interpreted the numbers as physical column positions. There is a difference if any column(s) to the left of the indicated column have been dropped during the lifetime of the table.
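For illustration (assuming a local table foo whose primary key is its first two columns):

    SELECT dblink_build_sql_update('foo', '1 2', 2, '{"1", "a"}', '{"1", "b"}');
    -- returns an UPDATE statement that rewrites the row with key (1, a),
    -- assigning (1, b) as the new primary key values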
The passwordcheck module checks users' passwords whenever they are set with CREATE ROLE or ALTER ROLE. If a password is considered too weak, it will be rejected and the command will terminate with an error.
To enable this module, add '$libdir/passwordcheck' to shared_preload_libraries in postgresql.conf, then restart the server.
You can adapt this module to your needs by changing the source code. For example, you can use CrackLib to check passwords; this only requires uncommenting two lines in the Makefile and rebuilding the module. (We cannot include CrackLib by default for license reasons.) Without CrackLib, the module enforces a few simple rules for password strength, which you can modify or extend as you see fit.
Note: To prevent unencrypted passwords from being sent across the network, written to the server log, or otherwise stolen by a database administrator, PostgreSQL allows the user to supply pre-encrypted passwords. Many client programs make use of this functionality and encrypt the password before sending it to the server.
This limits the usefulness of the passwordcheck module, because in that case it can only try to guess the password. For this reason, passwordcheck is not recommended if your security requirements are high. It is more secure to use an external authentication method such as GSSAPI (see Chapter 21) than to rely on passwords within the database.
Alternatively, you could modify passwordcheck to reject pre-encrypted passwords, but forcing users to set their passwords in clear text carries its own security risks.
Abhijit Menon-Sen
A single operator is provided, shown in the table below.
Operator | Returns | Description |
---|---|---|
point <@> point | float8 | Gives the distance in statute miles between two points on the Earth's surface. |
Adam Brusselback
Adam Mackler
Adrian Ho
Ahsan Hadi
Ajin Cherian
Alastair McKinley
Aleksander Alekseev
Ales Zeleny
Alex Kingsborough
Alex Kozhemyakin
Alexander Korotkov
Alexander Kukushkin
Alexander Lakhin
Alexander Nawratil
Alexander Pyhalov
Alexey Borzov
Alexey Ermakov
Aliaksandr Kalenik
Álvaro Herrera
Amit Kapila
Amit Khandekar
Amit Langote
Amul Sul
Anastasia Lubennikova
Anders Kaseorg
Andreas Dijkman
Andreas Grob
Andreas Seltenreich
Andrei Zubkov
Andres Freund
Andrew Alsup
Andrew Bille
Andrew Dunstan
Andrew Gierth
Andrew Kesper
Andrey Borodin
Andrey Lepikhov
Andrey Sokolov
Andy Fan
Anton Melnikov
Anton Voloshin
Antonin Houska
Arjan van de Ven
Arne Roland
Arthur Zakirov
Ashutosh Bapat
Ashutosh Sharma
Ashwin Agrawal
Asif Rehman
Asim Praveen
Atsushi Torikoshi
Aya Iwata
Bauyrzhan Sakhariyev
Benoit Lobréau
Bernd Dorn
Bertrand Drouvot
Bharath Rupireddy
Björn Harrtell
Boris Kolpackov
Boris Korzun
Brad Nicholson
Brar Piening
Bruce Momjian
Bruno da Silva
Bryn Llewellyn
Carl Sopchak
Cary Huang
Chapman Flack
Chen Jiaoqian
Chris Bandy
Chris Lowder
Christian Quest
Christoph Berg
Christoph Heiss
Christophe Pettus
Christopher Painter-Wakefield
Claudio Freire
Clemens Zeidler
Corey Huinker
Dag Lem
Dagfinn Ilmari Mannsåker
Dan Kubb
Daniel Cherniy
Daniel Gustafsson
Daniel Polski
Daniel Vérité
Daniel Westermann
Daniele Varrazzo
Daniil Anisimov
Danny Shemesh
Darafei Praliaskouski
Daria Lepikhova
Dave Cramer
Dave Page
David Christensen
David Fetter
David G. Johnston
David Rowley
David Steele
David Zhang
Dean Rasheed
Dian Fay
Dilip Kumar
Dipesh Pandit
Dmitry Dolgov
Dmitry Koval
Dmitry Marakasov
Dominique Devienne
Dong Wook
Drew DeVault
Eduard Català
Egor Chindyaskin
Egor Rogov
Ekaterina Kiryanova
Elena Indrupskaya
Elvis Pranskevichus
Emmanuel Quincerot
Emre Hasegeli
Eric Mutta
Erica Zhang
Erik Rijkers
Erki Eessaar
Etsuro Fujita
Euler Taveira
Fabien Coelho
Fabrice Chapuis
Fabrice Fontaine
Fabrízio de Royes Mello
Feike Steenbergen
Filip Gospodinov
Florin Irion
Floris Van Nee
Frédéric Yhuel
Gabriela Serventi
Gaurab Dey
Geoff Winkless
Geoffrey Blake
Georgios Kokolatos
Gilles Darold
Greg Nancarrow
Greg Rychlewski
Greg Sabino Mullane
Greg Stark
Gregory Smith
Guillaume Lelarge
Gunnar Bluth
Gurjeet Singh
Haiyang Wang
Haiying Tang
Hannu Krosing
Hans Buschmann
Hayato Kuroda
Heath Lord
Heikki Linnakangas
Herwig Goemans
Himanshu Upadhyaya
Holly Roberts
Hou Zhijie
Hubert Lubaczewski
Ian Barwick
Ian Campbell
Ibrar Ahmed
Ildus Kurbangaliev
Ilya Anfimov
Itamar Gafni
Jacob Champion
Jaime Casanova
Jakub Wartak
James Coleman
James Hilliard
James Inform
Jan Piotrowski
Japin Li
Jason Harvey
Jason Kim
Jean-Christophe Arnu
Jeevan Ladhe
Jeff Davis
Jeff Janes
Jehan-Guillaume de Rorthais
Jelte Fennema
Jeremy Evans
Jeremy Schneider
Jian Guo
Jian He
Jimmy Yih
Jiri Fejfar
Jitka Plesníková
Joe Conway
Joe Wildish
Joel Jacobson
Joey Bodoia
John Naylor
Jonathan Katz
Josef Simanek
Joseph Koshakow
Josh Soref
Joshua Brindle
Juan José Santamaría Flecha
Julien Rouhaud
Julien Roze
Junwang Zhao
Jürgen Purtz
Justin Pryzby
Ken Kato
Kevin Burke
Kevin Grittner
Kevin Humphreys
Kevin McKibbin
Kevin Sweet
Kevin Zheng
Klaudie Willis
Konstantin Knizhnik
Konstantina Skovola
Kosei Masumura
Kotaro Kawamoto
Koyu Tanigawa
Kuntal Ghosh
Kyotaro Horiguchi
Lars Kanis
Lauren Fliksteen
Laurent Hasson
Laurenz Albe
Leslie Lemaire
Liam Bowen
Lingjie Qiang
Liu Huailing
Louis Jachiet
Lukas Fittl
Ma Liangzhu
Maciek Sakrejda
Magnus Hagander
Mahendra Singh Thalor
Maksim Milyutin
Marc Bachmann
Marcin Krupowicz
Marcus Gartner
Marek Szuba
Marina Polyakova
Mario Emmenlauer
Mark Dilger
Mark Murawski
Mark Wong
Markus Wanner
Markus Winand
Martijn van Oosterhout
Martin Jurca
Martin Kalcher
Martín Marqués
Masahiko Sawada
Masahiro Ikeda
Masao Fujii
Masaya Kawamoto
Masayuki Hirose
Matthias van de Meent
Matthijs van der Vleuten
Maxim Orlov
Maxim Yablokov
Melanie Plageman
Michael Banck
Michael Harris
Michael J. Sullivan
Michael Meskes
Michael Mühlbeyer
Michael Paquier
Michael Powers
Mike Fiedler
Mike Oh
Mikhail Kulagin
Miles Delahunty
Naoki Okano
Nathan Bossart
Nathan Long
Nazir Bilal Yavuz
Neha Sharma
Neil Chen
Nicola Contu
Nicolas Lutic
Nikhil Benesch
Nikhil Shetty
Nikhil Sontakke
Nikita Glukhov
Nikolai Berkoff
Nikolay Samokhvalov
Nikolay Shaplov
Nitin Jadhav
Noah Misch
Noboru Saito
Noriyoshi Shinoda
Olaf Bohlen
Olly Betts
Onder Kalaci
Oskar Stenberg
Otto Kekalainen
Paul Guo
Paul Jungwirth
Paul Martinez
Pavan Deolasee
Pavel Borisov
Pavel Luzanov
Pavel Stehule
Peter Eisentraut
Peter Geoghegan
Peter Slavov
Peter Smith
Petr Jelínek
Phil Florent
Phil Krylov
Pierre-Aurélien Georges
Prabhat Sahu
Quan Zongliang
Rachel Heaton
Rahila Syed
Rajakavitha Kodhandapani
Rajkumar Raghuwanshi
Ranier Vilela
Rei Kamigishi
Reid Thompson
Rémi Lapeyre
Renan Soares Lopes
Richard Guo
Richard Wesley
RKN Sai Krishna
Robert Haas
Robert Treat
Roberto Mello
Robins Tharakan
Roger Mason
Roman Zharkov
Ronan Dunklau
Rui Zhao
Ryan Kelly
Ryo Matsumura
Ryohei Takahashi
Sadhuprasad Patro
Sait Talha Nisanci
Sami Imseih
Sandeep Thakkar
Sebastian Kemper
Sehrope Sarkuni
Sergei Kornilov
Sergei Shoulbakov
Sergey Shinderuk
Shay Rojansky
Shenhao Wang
Shi Yu
Shinya Kato
Shruthi Gowda
Simon Perepelitsa
Simon Riggs
Sirisha Chamarthi
Soumyadeep Chakraborty
Stan Hu
Stas Kelvich
Stefen Hillman
Stephen Frost
Steve Chavez
Sumanta Mukherjee
Suraj Khamkar
Suraj Kharage
Sven Klemm
Takamichi Osumi
Takayuki Tsunakawa
Takeshi Ideriha
Tatsuhiro Nakamori
Tatsuhito Kasahara
Tatsuo Ishii
Tatsuro Yamada
Teja Mupparti
Teodor Sigaev
Thibaud Walkowiak
Thom Brown
Thomas McKay
Thomas Munro
Tim McNamara
Timo Stolz
Timur Khanjanov
Tom Lane
Tomas Barton
Tomas Vondra
Tony Reix
Troy Frericks
Tushar Ahuja
Victor Wagner
Victor Yegorov
Vignesh C
Vik Fearing
Vincas Dargis
Vitaly Burovoy
Vitaly Voronov
Vladimir Sitnikov
Wang Ke
Wei Sun
Wei Wang
Whale Song
Will Mortensen
Wolfgang Walther
Yanliang Lei
Yaoguang Chen
Yogendra Suralkar
YoungHwan Joo
Yugo Nagata
Yukun Wang
Yura Sokolov
Yusuke Egashira
Yuzuko Hosoya
Zhang Mingli
Zhang Wenjie
Zhihong Yu
Zhiyong Wu
The earthdistance operator and cube-based functions are summarized below. A single operator is provided:

| Operator | Returns | Description |
|---|---|---|
| point <@> point | float8 | Gives the distance in statute miles between two points on the Earth's surface. |

The cube-based functions are:

| Function | Returns | Description |
|---|---|---|
| earth() | float8 | Returns the assumed radius of the Earth. |
| sec_to_gc(float8) | float8 | Converts the normal straight line (secant) distance between two points on the surface of the Earth to the great circle distance between them. |
| gc_to_sec(float8) | float8 | Converts the great circle distance between two points on the surface of the Earth to the normal straight line (secant) distance between them. |
| ll_to_earth(float8, float8) | earth | Returns the location of a point on the surface of the Earth given its latitude (argument 1) and longitude (argument 2) in degrees. |
| latitude(earth) | float8 | Returns the latitude in degrees of a point on the surface of the Earth. |
| longitude(earth) | float8 | Returns the longitude in degrees of a point on the surface of the Earth. |
| earth_distance(earth, earth) | float8 | Returns the great circle distance between two points on the surface of the Earth. |
| earth_box(earth, float8) | cube | Returns a box suitable for an indexed search using the cube @> operator. |
The pg_visibility
module provides a means for examining the visibility map (VM) and page-level visibility information of a table. It also provides functions to check the integrity of a visibility map and to force it to be rebuilt.
Three different bits are used to store information about page-level visibility. The all-visible bit in the visibility map indicates that every tuple in the corresponding page of the relation is visible to every current and future transaction. The all-frozen bit in the visibility map indicates that every tuple in the page is frozen; that is, no future vacuum will need to modify the page until such time as a tuple is inserted, updated, deleted, or locked on that page. The page header's PD_ALL_VISIBLE
bit has the same meaning as the all-visible bit in the visibility map, but is stored within the data page itself rather than in a separate data structure. These two bits will normally agree, but the page's all-visible bit can sometimes be set while the visibility map bit is clear after a crash recovery. The reported values can also disagree because of a change that occurs after pg_visibility
examines the visibility map and before it examines the data page. Any event that causes data corruption can also cause these bits to disagree.
Functions that display information about PD_ALL_VISIBLE
bits are much more costly than those that only consult the visibility map, because they must read the relation's data blocks rather than only the (much smaller) visibility map. Functions that check the relation's data blocks are similarly expensive.
pg_visibility_map(relation regclass, blkno bigint, all_visible OUT boolean, all_frozen OUT boolean) returns record

Returns the all-visible and all-frozen bits in the visibility map for the given block of the given relation.

pg_visibility(relation regclass, blkno bigint, all_visible OUT boolean, all_frozen OUT boolean, pd_all_visible OUT boolean) returns record

Returns the all-visible and all-frozen bits in the visibility map for the given block of the given relation, plus the PD_ALL_VISIBLE bit of that block.

pg_visibility_map(relation regclass, blkno OUT bigint, all_visible OUT boolean, all_frozen OUT boolean) returns setof record

Returns the all-visible and all-frozen bits in the visibility map for each block of the given relation.

pg_visibility(relation regclass, blkno OUT bigint, all_visible OUT boolean, all_frozen OUT boolean, pd_all_visible OUT boolean) returns setof record

Returns the all-visible and all-frozen bits in the visibility map for each block of the given relation, plus the PD_ALL_VISIBLE bit of each block.

pg_visibility_map_summary(relation regclass, all_visible OUT bigint, all_frozen OUT bigint) returns record

Returns the number of all-visible pages and the number of all-frozen pages in the relation according to the visibility map.

pg_check_frozen(relation regclass, t_ctid OUT tid) returns setof tid

Returns the TIDs of non-frozen tuples stored in pages marked all-frozen in the visibility map. If this function returns a non-empty set of TIDs, the visibility map is corrupt.

pg_check_visible(relation regclass, t_ctid OUT tid) returns setof tid

Returns the TIDs of non-all-visible tuples stored in pages marked all-visible in the visibility map. If this function returns a non-empty set of TIDs, the visibility map is corrupt.

pg_truncate_visibility_map(relation regclass) returns void

Truncates the visibility map for the given relation. This function is useful if you believe that the visibility map for the relation is corrupt and wish to force rebuilding it. The first VACUUM executed on the given relation after this function is executed will scan every page in the relation and rebuild the visibility map. (Until that is done, queries will treat the visibility map as containing all zeroes.)
By default, these functions are executable only by superusers and members of the pg_stat_scan_tables
role, with the exception of pg_truncate_visibility_map(relation regclass)
which can only be executed by superusers.
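As a hedged sketch of how these functions are typically invoked (my_table is a placeholder table name, and the extension must first be installed with CREATE EXTENSION pg_visibility):

```sql
-- Summarize the visibility map for one table.
SELECT * FROM pg_visibility_map_summary('my_table');

-- Inspect the bits for block 0, including PD_ALL_VISIBLE.
SELECT * FROM pg_visibility('my_table', 0);

-- Both checks should return zero rows if the visibility map is not corrupt.
SELECT * FROM pg_check_frozen('my_table');
SELECT * FROM pg_check_visible('my_table');
```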
Robert Haas <rhaas@postgresql.org>
oid2name — resolve OIDs and file nodes in a PostgreSQL data directory
vacuumlo — remove orphaned large objects from a PostgreSQL database
This section covers PostgreSQL client applications in contrib
. They can be run from anywhere, independent of where the database server resides. See also PostgreSQL Client Applications for information about client applications that are part of the core PostgreSQL distribution.
The tsm_system_time module provides the table sampling method SYSTEM_TIME, which can be used in the TABLESAMPLE clause of a SELECT command.
This table sampling method accepts a single floating-point argument that is the maximum number of milliseconds to spend reading the table. This gives you direct control over how long the query takes, at the price that the size of the sample becomes hard to predict. The resulting sample will contain as many rows as could be read in the specified time, unless the whole table has been read first.
Like the built-in SYSTEM sampling method, SYSTEM_TIME performs block-level sampling, so that the sample is not completely random but may be subject to clustering effects, especially if only a small number of rows are requested.
SYSTEM_TIME does not support the REPEATABLE clause.
Here is an example of selecting a sample of a table with SYSTEM_TIME. First install the extension:
Then you can use it in a SELECT command, for instance:
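A minimal sketch of those two steps (my_table is a placeholder table name):

```sql
CREATE EXTENSION tsm_system_time;

SELECT * FROM my_table TABLESAMPLE SYSTEM_TIME(1000);
```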
This command will return a sample of my_table containing as many rows as can be read in 1 second (1,000 milliseconds). Of course, if the whole table can be read in under 1 second, all of its rows will be returned.
test_decoding
is an example of a logical decoding output plugin. It doesn't do anything especially useful, but can serve as a starting point for developing your own output plugin.
test_decoding
receives WAL through the logical decoding mechanism and decodes it into text representations of the operations performed.
Typical output from this plugin, used over the SQL logical decoding interface, might be:
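The sample output itself is not reproduced here, but as a hedged sketch, the SQL-level interface can be exercised roughly as follows (some_table is a hypothetical table, and the server must run with wal_level = logical):

```sql
-- Create a slot that uses the test_decoding output plugin.
SELECT pg_create_logical_replication_slot('test_slot', 'test_decoding');

-- Perform a change to decode.
INSERT INTO some_table (id, data) VALUES (1, 'demo');

-- Read the text representation of the decoded change stream.
SELECT * FROM pg_logical_slot_get_changes('test_slot', NULL, NULL);
```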
The tsm_system_rows module provides the table sampling method SYSTEM_ROWS, which can be used in the TABLESAMPLE clause of a SELECT command.
This table sampling method accepts a single integer argument that is the maximum number of rows to read. The resulting sample will always contain exactly that many rows, unless the table does not contain enough rows, in which case the whole table is returned.
Like the built-in SYSTEM sampling method, SYSTEM_ROWS performs block-level sampling, so that the sample is not completely random but may be subject to clustering effects, especially if only a small number of rows are requested.
SYSTEM_ROWS does not support the REPEATABLE clause.
Here is an example of selecting a sample of a table with SYSTEM_ROWS. First install the extension:
Then you can use it in a SELECT command, for instance:
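A minimal sketch of those two steps (my_table is a placeholder table name):

```sql
CREATE EXTENSION tsm_system_rows;

SELECT * FROM my_table TABLESAMPLE SYSTEM_ROWS(100);
```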
This command will return a sample of 100 rows from the table my_table (unless the table does not have 100 rows, in which case all of its rows are returned).
This appendix and the previous one contain information regarding the modules that can be found in the contrib
directory of the PostgreSQL distribution. See Appendix F for more information about the contrib
section in general and server extensions and plug-ins found in contrib
specifically.
This appendix covers utility programs found in contrib
. Once installed, either from source or a packaging system, they are found in the bin
directory of the PostgreSQL installation and can be used like any other program.
oid2name — resolve OIDs and file nodes in a PostgreSQL data directory
oid2name
[option
...]
oid2name is a utility program that helps administrators to examine the file structure used by PostgreSQL. To make use of it, you need to be familiar with the database file structure, which is described in Chapter 66.
The name “oid2name” is historical, and is actually rather misleading, since most of the time when you use it, you will really be concerned with tables' filenode numbers (which are the file names visible in the database directories). Be sure you understand the difference between table OIDs and table filenodes!
oid2name connects to a target database and extracts OID, filenode, and/or table name information. You can also have it show database OIDs or tablespace OIDs.
oid2name accepts the following command-line arguments:
-f filenode
show info for table with filenode filenode

-i
include indexes and sequences in the listing

-o oid
show info for table with OID oid

-q
omit headers (useful for scripting)

-s
show tablespace OIDs

-S
include system objects (those in information_schema, pg_toast and pg_catalog schemas)

-t tablename_pattern
show info for table(s) matching tablename_pattern

-V
--version
Print the oid2name version and exit.

-x
display more information about each object shown: tablespace name, schema name, and OID

-?
--help
Show help about oid2name command line arguments, and exit.
oid2name also accepts the following command-line arguments for connection parameters:
-d database
database to connect to

-H host
database server's host

-p port
database server's port

-U username
user name to connect as

-P password
password (deprecated; putting this on the command line is a security hazard)
To display specific tables, select which tables to show by using -o, -f and/or -t. -o takes an OID, -f takes a filenode, and -t takes a table name (actually, it's a LIKE pattern, so you can use things like foo%). You can use as many of these options as you like, and the listing will include all objects matched by any of the options. But note that these options can only show objects in the database given by -d.

If you don't give any of -o, -f or -t, but do give -d, it will list all tables in the database named by -d. In this mode, the -S and -i options control what gets listed.

If you don't give -d either, it will show a listing of database OIDs. Alternatively you can give -s to get a tablespace listing.
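As a hedged illustration of these modes (mydb and the foo% pattern are placeholders; add connection options such as -H, -p, or -U as your setup requires):

```
$ # With no object-selection options, list all databases and their OIDs.
$ oid2name

$ # Show OID, filenode, and extra detail for tables matching foo% in mydb.
$ oid2name -d mydb -t 'foo%' -x
```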
oid2name requires a running database server with non-corrupt system catalogs. It is therefore of only limited use for recovering from catastrophic database corruption situations.
B. Palmer <bpalmer@crimelabs.net>
vacuumlo — remove orphaned large objects from a PostgreSQL database
vacuumlo
[option
...] dbname
...
vacuumlo is a simple utility program that will remove any “orphaned” large objects from a PostgreSQL database. An orphaned large object (LO) is considered to be any LO whose OID does not appear in any oid
or lo
data column of the database.
If you use this, you may also be interested in the lo_manage
trigger in the lo module. lo_manage
is useful to try to avoid creating orphaned LOs in the first place.
All databases named on the command line are processed.
vacuumlo accepts the following command-line arguments:
-l limit
Remove no more than limit large objects per transaction (default 1000). Since the server acquires a lock per LO removed, removing too many LOs in one transaction risks exceeding max_locks_per_transaction. Set the limit to zero if you want all removals done in a single transaction.

-n
Don't remove anything, just show what would be done.

-v
Write a lot of progress messages.

-V
--version
Print the vacuumlo version and exit.

-?
--help
Show help about vacuumlo command line arguments, and exit.
vacuumlo also accepts the following command-line arguments for connection parameters:
-h hostname
Database server's host.

-p port
Database server's port.

-U username
User name to connect as.

-w
--no-password
Never issue a password prompt. If the server requires password authentication and a password is not available by other means such as a .pgpass file, the connection attempt will fail. This option can be useful in batch jobs and scripts where no user is present to enter a password.

-W
Force vacuumlo to prompt for a password before connecting to a database.
This option is never essential, since vacuumlo will automatically prompt for a password if the server demands password authentication. However, vacuumlo will waste a connection attempt finding out that the server wants a password. In some cases it is worth typing -W to avoid the extra connection attempt.
vacuumlo works by the following method: First, vacuumlo builds a temporary table which contains all of the OIDs of the large objects in the selected database. It then scans through all columns in the database that are of type oid
or lo
, and removes matching entries from the temporary table. (Note: Only types with these names are considered; in particular, domains over them are not considered.) The remaining entries in the temporary table identify orphaned LOs. These are removed.
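A hedged usage sketch (mydb is a placeholder database name; connection options can be added as needed):

```
$ # Dry run: report the orphaned large objects that would be removed.
$ vacuumlo -n -v mydb

$ # Remove orphaned large objects, at most 500 per transaction.
$ vacuumlo -l 500 mydb
```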
Peter Mount <peter@retep.org.uk>
PostgreSQL is a complex software project, and managing it is difficult. We have found that many enhancements to PostgreSQL can be developed more efficiently separately from the core project.
There are several administration tools available for PostgreSQL. The most popular is pgAdmin III, and there are several commercially available ones as well.
PostgreSQL is designed to be easily extensible. For this reason, extensions loaded into the database can function just like features that are built in. The contrib/
directory shipped with the source code contains several extensions, which are described in Appendix F. Other extensions are developed independently, like PostGIS. Even PostgreSQL replication solutions can be developed externally. For example, Slony-I is a popular master/standby replication solution that is developed independently from the core project.
pg_standby — supports the creation of a PostgreSQL warm standby server
pg_standby
[option
...] archivelocation
nextwalfile
walfilepath
[restartwalfile
]
pg_standby supports creation of a “warm standby” database server. It is designed to be a production-ready program, as well as a customizable template should you require specific modifications.
pg_standby is designed to be a waiting restore_command
, which is needed to turn a standard archive recovery into a warm standby operation. Other configuration is required as well, all of which is described in the main server manual (see Section 26.2).
To configure a standby server to use pg_standby, put this into its recovery.conf
configuration file:
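A minimal sketch of that entry, following the synopsis above (the rest of the recovery configuration is not shown):

```
restore_command = 'pg_standby archiveDir %f %p %r'
```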
where archiveDir
is the directory from which WAL segment files should be restored.
If restartwalfile
is specified, normally by using the %r
macro, then all WAL files logically preceding this file will be removed from archivelocation
. This minimizes the number of files that need to be retained, while preserving crash-restart capability. Use of this parameter is appropriate if the archivelocation
is a transient staging area for this particular standby server, but not when the archivelocation
is intended as a long-term WAL archive area.
pg_standby assumes that archivelocation
is a directory readable by the server-owning user. If restartwalfile
(or -k
) is specified, the archivelocation
directory must be writable too.
There are two ways to fail over to a “warm standby” database server when the master server fails:

Smart Failover
In smart failover, the server is brought up after applying all WAL files available in the archive. This results in zero data loss, even if the standby server has fallen behind, but if there is a lot of unapplied WAL it can be a long time before the standby server becomes ready. To trigger a smart failover, create a trigger file containing the word smart, or just create it and leave it empty.

Fast Failover
In fast failover, the server is brought up immediately. Any WAL files in the archive that have not yet been applied will be ignored, and all transactions in those files are lost. To trigger a fast failover, create a trigger file and write the word fast into it. pg_standby can also be configured to execute a fast failover automatically if no new WAL file appears within a defined interval.
pg_standby accepts the following command-line arguments:
-c
Use cp or copy command to restore WAL files from archive. This is the only supported behavior so this option is useless.

-d
Print lots of debug logging output on stderr.

-k
Remove files from archivelocation so that no more than this many WAL files before the current one are kept in the archive. Zero (the default) means not to remove any files from archivelocation. This parameter will be silently ignored if restartwalfile is specified, since that specification method is more accurate in determining the correct archive cut-off point. Use of this parameter is deprecated as of PostgreSQL 8.3; it is safer and more efficient to specify a restartwalfile parameter. A too small setting could result in removal of files that are still needed for a restart of the standby server, while a too large setting wastes archive space.

-r maxretries
Set the maximum number of times to retry the copy command if it fails (default 3). After each failure, we wait for sleeptime * num_retries so that the wait time increases progressively. So by default, we will wait 5 secs, 10 secs, then 15 secs before reporting the failure back to the standby server. This will be interpreted as end of recovery and the standby will come up fully as a result.

-s sleeptime
Set the number of seconds (up to 60, default 5) to sleep between tests to see if the WAL file to be restored is available in the archive yet. The default setting is not necessarily recommended; consult Section 26.2 for discussion.

-t triggerfile
Specify a trigger file whose presence should cause failover. It is recommended that you use a structured file name to avoid confusion as to which server is being triggered when multiple servers exist on the same system; for example /tmp/pgsql.trigger.5432.

-V
--version
Print the pg_standby version and exit.

-w maxwaittime
Set the maximum number of seconds to wait for the next WAL file, after which a fast failover will be performed. A setting of zero (the default) means wait forever. The default setting is not necessarily recommended; consult Section 26.2 for discussion.

-?
--help
Show help about pg_standby command line arguments, and exit.
pg_standby is designed to work with PostgreSQL 8.2 and later.
PostgreSQL 8.3 provides the %r
macro, which is designed to let pg_standby know the last file it needs to keep. With PostgreSQL 8.2, the -k
option must be used if archive cleanup is required. This option remains available in 8.3, but its use is deprecated.
PostgreSQL 8.4 provides the recovery_end_command
option. Without this option a leftover trigger file can be hazardous.
pg_standby is written in C and has easy-to-modify source code, with specifically designated sections to modify for your own needs.
On Linux or Unix systems, you might use:
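The command itself is not shown above; reconstructed as a hedged sketch from the behaviors listed below, with /path/to/archive as a placeholder for the actual archive directory, it would look roughly like:

```
archive_command = 'cp %p /path/to/archive/%f'   # on the primary server

restore_command = 'pg_standby -d -s 2 -t /tmp/pgsql.trigger.5442 /path/to/archive %f %p %r 2>>standby.log'
recovery_end_command = 'rm -f /tmp/pgsql.trigger.5442'
```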
where the archive directory is physically located on the standby server, so that the archive_command
is accessing it across NFS, but the files are local to the standby (enabling use of ln
). This will:
produce debugging output in standby.log
sleep for 2 seconds between checks for next WAL file availability
stop waiting only when a trigger file called /tmp/pgsql.trigger.5442
appears, and perform failover according to its content
remove the trigger file when recovery ends
remove no-longer-needed files from the archive directory
On Windows, you might use:
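Again as a hedged sketch, with C:\archive standing in for the real archive directory:

```
archive_command = 'copy %p C:\\archive\\%f'   # on the primary server

restore_command = 'pg_standby -d -s 5 -t C:\pgsql.trigger.5442 C:\archive %f %p %r 2>>standby.log'
recovery_end_command = 'del C:\pgsql.trigger.5442'
```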
Note that backslashes need to be doubled in the archive_command
, but not in the restore_command
or recovery_end_command
. This will:
use the copy
command to restore WAL files from archive
produce debugging output in standby.log
sleep for 5 seconds between checks for next WAL file availability
stop waiting only when a trigger file called C:\pgsql.trigger.5442
appears, and perform failover according to its content
remove the trigger file when recovery ends
remove no-longer-needed files from the archive directory
The copy
command on Windows sets the final file size before the file is completely copied, which would ordinarily confuse pg_standby. Therefore pg_standby waits sleeptime
seconds once it sees the proper file size. GNUWin32's cp
sets the file size only after the file copy is complete.
Since the Windows example uses copy
at both ends, either or both servers might be accessing the archive directory across the network.
Simon Riggs <simon@2ndquadrant.com>
The documentation sources are written in DocBook, which is a markup language defined in XML. In what follows, the terms DocBook and XML are both used, but technically they are not interchangeable.
DocBook allows an author to specify the structure and content of a technical document without worrying about presentation details. A document style defines how that content is rendered into one of several final forms. DocBook is maintained by the OASIS group. The official DocBook site has good introductory and reference documentation and a complete O'Reilly book for your online reading pleasure. The NewbieDoc Docbook Guide is very helpful for beginners. The FreeBSD Documentation Project also uses DocBook and has some good information, including a number of style guidelines that might be worth considering.
The PostgreSQL source code is stored and managed using the Git version control system. A public mirror of the master repository is available; it is updated within a minute of any change to the master repository.
Our wiki, https://wiki.postgresql.org/wiki/Working_with_Git, has some discussion on working with Git.
Note that building PostgreSQL from the source repository requires reasonably up-to-date versions of bison, flex, and Perl. These tools are not needed to build from a distribution tarball, because the files that these tools are used to build are included in the tarball. Other tool requirements are the same as shown in Section 16.2.
The following tools are used to process the documentation. Some might be optional, as noted.
This is the definition of DocBook itself. We currently use version 4.5; you cannot use later or earlier versions. You need the XML variant of the DocBook DTD, not the SGML variant.
These contain the processing instructions for converting the DocBook sources to other formats, such as HTML.
The minimum required version is currently 1.77.0, but it is recommended to use the latest available version for best results.
Libxml2 for xmllint
This library and the xmllint
tool it contains are used for processing XML. Many developers will already have Libxml2 installed, because it is also used when building the PostgreSQL code. Note, however, that xmllint
might need to be installed from a separate subpackage.
Libxslt for xsltproc
xsltproc
is an XSLT processor, that is, a program to convert XML to other formats using XSLT stylesheets.
This is a program for converting, among other things, XML to PDF.
We have documented experience with several installation methods for the various tools that are needed to process the documentation. These will be described below. There might be some other packaged distributions for these tools. Please report package status to the documentation mailing list, and we will include that information here.
You can get away with not installing DocBook XML and the DocBook XSLT stylesheets locally, because the required files will be downloaded from the Internet and cached locally. This may in fact be the preferred solution if your operating system packages provide only an old version of these files, or if no packages are available at all. If you want to prevent any attempt to access the Internet while building the documentation, you need to pass the --nonet
option to xmllint
and xsltproc
; see below for an example.
To install the required packages, use:
To install the required packages with pkg
, use:
When building the documentation from the doc
directory you'll need to use gmake
, because the makefile provided is not suitable for FreeBSD's make
.
There is a full set of packages of the documentation tools available for Debian GNU/Linux. To install, simply use:
On macOS, you can build the HTML and man documentation without installing anything extra. If you want to build PDFs or want to install a local copy of DocBook, you can get those from your preferred package manager.
If you use MacPorts, the following will get you set up:
If you use Homebrew, use this:
configure
Before you can build the documentation you need to run the configure
script, as you would when building the PostgreSQL programs themselves. Check the output near the end of the run; it should look something like this:
If xmllint
or xsltproc
is not found, you will not be able to build any of the documentation. fop
is only needed to build the documentation in PDF format. dbtoepub
is only needed to build the documentation in EPUB format.
If necessary, you can tell configure
where to find these programs, for example
Also, if you want to ensure that xmllint
and xsltproc
will not perform any network access, you can do something like
With Git you will make a copy of the entire code repository on your local machine, so you will have access to all history and branches offline. This is the fastest and most flexible way to develop or test patches.
Git
You will need an installed version of Git, which you can get from https://git-scm.com. Many systems already have a recent version of Git installed by default, or available in their package distribution system.
To begin using the Git repository, make a clone of the official mirror:
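The clone command is presumably:

```
git clone https://git.postgresql.org/git/postgresql.git
```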
This will copy the full repository to your local machine, so it may take a while to complete, especially if you have a slow Internet connection. The files will be placed in a new subdirectory postgresql
of your current directory.
The Git mirror can also be reached via the Git protocol. Just change the URL prefix to git
, as in:
Whenever you want to get the latest updates in the system, cd
into the repository, and run:
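That is, something like:

```
cd postgresql
git pull
```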
Git can do a lot more things than just fetch the source. For more information, consult the Git man pages, or see the website at https://git-scm.com.
PostgreSQL has four primary documentation formats:
Plain text, for pre-installation information
HTML, for on-line browsing and reference
PDF, for printing
man pages, for quick reference.
Additionally, a number of plain-text README
files can be found throughout the PostgreSQL source tree, documenting various implementation issues.
HTML documentation and man pages are part of a standard distribution and are installed by default. PDF format documentation is available separately for download.
Once you have everything set up, change to the directory doc/src/sgml
and run one of the commands described in the following subsections to build the documentation. (Remember to use GNU make.)
To build the HTML version of the documentation:
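Presumably this is simply:

```
make html
```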
This is also the default target. The output appears in the subdirectory html
.
To produce HTML documentation with the stylesheet used on postgresql.org instead of the default simple style use:
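That is, something like:

```
make STYLE=website html
```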
If the STYLE=website
option is used, the generated HTML files include references to stylesheets hosted on postgresql.org and require network access to view.
We use the DocBook XSL stylesheets to convert DocBook refentry
pages to *roff output suitable for man pages. To create the man pages, use the command:
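Presumably:

```
make man
```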
To produce a PDF rendition of the documentation using FOP, you can use one of the following commands, depending on the preferred paper format:
For A4 format:
For U.S. letter format:
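The corresponding make targets are presumably:

```
make postgres-A4.pdf    # A4 format
make postgres-US.pdf    # U.S. letter format
```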
Because the PostgreSQL documentation is fairly big, FOP will require a significant amount of memory. Because of that, on some systems, the build will fail with a memory-related error message. This can usually be fixed by configuring Java heap settings in the configuration file ~/.foprc
, for example:
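For example (the exact variable name and heap size are assumptions that may need adjusting for your FOP packaging):

```
# FOP binary distribution
FOP_OPTS='-Xmx1500m'
```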
There is a minimum amount of memory that is required, and to some extent more memory appears to make things a bit faster. On systems with very little memory (less than 1 GB), the build will either be very slow due to swapping or will not work at all.
Other XSL-FO processors can also be used manually, but the automated build process only supports FOP.
The installation instructions are also distributed as plain text, in case they are needed in a situation where better reading tools are not available. The INSTALL
file corresponds to Chapter 17, with some minor changes to account for the different context. To recreate the file, change to the directory doc/src/sgml
and enter make INSTALL
. Building text output requires Pandoc version 1.13 or newer as an additional build tool.
In the past, the release notes and regression testing instructions were also distributed as plain text, but this practice has been discontinued.
Building the documentation can take very long. But there is a method to just check the correct syntax of the documentation files, which only takes a few seconds:
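Presumably, run from doc/src/sgml:

```
make check
```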
Reference pages should follow a standard layout. This allows users to find the desired information more quickly, and it also encourages writers to document all relevant aspects of a command. Consistency is not only desired among PostgreSQL reference pages, but also with reference pages provided by the operating system and other packages. Hence the following guidelines have been developed. They are for the most part consistent with similar guidelines established by various operating systems.
Reference pages that describe executable commands should contain the following sections, in this order. Sections that do not apply can be omitted. Additional top-level sections should only be used in special circumstances; often that information belongs in the “Usage” section.
Name
This section is generated automatically. It contains the command name and a half-sentence summary of its functionality.
Synopsis
This section contains the syntax diagram of the command. The synopsis should normally not list each command-line option; that is done below. Instead, list the major components of the command line, such as where input and output files go.
Description
Several paragraphs explaining what the command does.
Options
A list describing each command-line option. If there are a lot of options, subsections can be used.
Exit Status
If the program uses 0 for success and non-zero for failure, then you do not need to document it. If there is a meaning behind the different non-zero exit codes, list them here.
Usage
Describe any sublanguage or run-time interface of the program. If the program is not interactive, this section can usually be omitted. Otherwise, this section is a catch-all for describing run-time features. Use subsections if appropriate.
Environment
List all environment variables that the program might use. Try to be complete; even seemingly trivial variables like SHELL
might be of interest to the user.
Files
List any files that the program might access implicitly. That is, do not list input and output files that were specified on the command line, but list configuration files, etc.
Diagnostics
Explain any unusual output that the program might create. Refrain from listing every possible error message. This is a lot of work and has little use in practice. But if, say, the error messages have a standard format that the user can parse, this would be the place to explain it.
Notes
Anything that doesn't fit elsewhere, but in particular bugs, implementation flaws, security considerations, compatibility issues.
Examples
Examples
History
If there were some major milestones in the history of the program, they might be listed here. Usually, this section can be omitted.
Author
Author (only used in the contrib section)
See Also
Cross-references, listed in the following order: other PostgreSQL command reference pages, PostgreSQL SQL command reference pages, citation of PostgreSQL manuals, other reference pages (e.g., operating system, other packages), other documentation. Items in the same group are listed alphabetically.
Reference pages describing SQL commands should contain the following sections: Name, Synopsis, Description, Parameters, Outputs, Notes, Examples, Compatibility, History, See Also. The Parameters section is like the Options section, but there is more freedom about which clauses of the command can be listed. The Outputs section is only needed if the command returns something other than a default command-completion tag. The Compatibility section should explain to what extent this command conforms to the SQL standard(s), or to which other database system it is compatible. The See Also section of SQL commands should list SQL commands before cross-references to programs.
The actual colors to be used are configured using the environment variable PG_COLORS
(note plural). The value is a colon-separated list of key
=value
pairs. The keys specify what the color is to be used for. The values are SGR (Select Graphic Rendition) specifications, which are interpreted by the terminal.
The following keys are currently in use:
error
used to highlight the text “error” in error messages
warning
used to highlight the text “warning” in warning messages
locus
used to highlight location information (e.g., program name and file name) in messages
The default value is error=01;31:warning=01;35:locus=01
(01;31
= bold red, 01;35
= bold magenta, 01
= bold default color).
This color specification format is also used by other software packages, such as GCC, GNU coreutils, and GNU grep.
Functionality is sometimes removed from PostgreSQL, feature, setting and file names sometimes change, or documentation moves to different places. This section directs users coming from old versions of the documentation or from external links to the appropriate new location for the information they need.
Most programs in the PostgreSQL package can produce colorized console output. This appendix describes how that is configured.
To use colorized output, set the environment variable PG_COLOR
as follows:
If the value is always
, then color is used.
If the value is auto
and the standard error stream is associated with a terminal device, then color is used.
Otherwise, color is not used.
This module implements the hstore
data type for storing sets of key/value pairs within a single PostgreSQL value. This can be useful in various scenarios, such as rows with many attributes that are rarely examined, or semi-structured data. Keys and values are simply text strings.
This module is considered “trusted”, that is, it can be installed by non-superusers who have CREATE
privilege on the current database.
hstore
External RepresentationThe text representation of an hstore
, used for input and output, includes zero or more key
=>
value
pairs separated by commas. Some examples:
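For instance, hedged sketches of valid hstore literals:

```sql
SELECT 'a=>1, b=>2'::hstore;
SELECT '"a key" => "a value, with a comma"'::hstore;
```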
The order of the pairs is not significant (and may not be reproduced on output). Whitespace between pairs or around the =>
sign is ignored. Double-quote keys and values that include whitespace, commas, =
s or >
s. To include a double quote or a backslash in a key or value, escape it with a backslash.
Each key in an hstore
is unique. If you declare an hstore
with duplicate keys, only one will be stored in the hstore
and there is no guarantee as to which will be kept:
A value (but not a key) can be an SQL NULL
. For example:
The NULL
keyword is case-insensitive. Double-quote the NULL
to treat it as the ordinary string “NULL”.
Keep in mind that the hstore
text format, when used for input, applies before any required quoting or escaping. If you are passing an hstore
literal via a parameter, then no additional processing is needed. But if you're passing it as a quoted literal constant, then any single-quote characters and (depending on the setting of the standard_conforming_strings
configuration parameter) backslash characters need to be escaped correctly. See Section 4.1.2.1 for more on the handling of string constants.
On output, double quotes always surround keys and values, even when it's not strictly necessary.
hstore
Operators and FunctionsThe operators provided by the hstore
module are shown in Table F.7, the functions in Table F.8.
hstore
Operatorshstore
FunctionsIn addition to these operators and functions, values of the hstore
type can be subscripted, allowing them to act like associative arrays. Only a single subscript of type text
can be specified; it is interpreted as a key and the corresponding value is fetched or stored. For example,
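A hedged sketch, assuming a table mytable with an hstore column h:

```sql
SELECT h['a'] FROM mytable;
UPDATE mytable SET h['c'] = '3';
```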
A subscripted fetch returns NULL
if the subscript is NULL
or that key does not exist in the hstore
. (Thus, a subscripted fetch is not greatly different from the ->
operator.) A subscripted update fails if the subscript is NULL
; otherwise, it replaces the value for that key, adding an entry to the hstore
if the key does not already exist.
hstore
has GiST and GIN index support for the @>
, ?
, ?&
and ?|
operators. For example:
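For instance, assuming a table testhstore with an hstore column h:

```sql
CREATE INDEX hidx_gist ON testhstore USING gist (h);
CREATE INDEX hidx_gin ON testhstore USING gin (h);
```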
gist_hstore_ops
GiST opclass approximates a set of key/value pairs as a bitmap signature. Its optional integer parameter siglen
determines the signature length in bytes. The default length is 16 bytes. Valid values of signature length are between 1 and 2024 bytes. Longer signatures lead to a more precise search (scanning a smaller fraction of the index and fewer heap pages), at the cost of a larger index.
Example of creating such an index with a signature length of 32 bytes:
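Using the same assumed table and column:

```sql
CREATE INDEX hidx ON testhstore USING gist (h gist_hstore_ops(siglen=32));
```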
hstore
also supports btree
or hash
indexes for the =
operator. This allows hstore
columns to be declared UNIQUE
, or to be used in GROUP BY
, ORDER BY
or DISTINCT
expressions. The sort ordering for hstore
values is not particularly useful, but these indexes may be useful for equivalence lookups. Create indexes for =
comparisons as follows:
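Again with the assumed table testhstore and column h:

```sql
CREATE INDEX hidx_btree ON testhstore USING btree (h);
CREATE INDEX hidx_hash ON testhstore USING hash (h);
```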
Add a key, or update an existing key with a new value:
Another way to do the same thing is:
If multiple keys are to be added or changed in one operation, the concatenation approach is more efficient than subscripting:
Delete a key:
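Hedged sketches of the operations just listed, assuming a table tab with an hstore column h:

```sql
-- Add a key, or update an existing key, via subscripting:
UPDATE tab SET h['c'] = '3';

-- The same thing using concatenation:
UPDATE tab SET h = h || hstore('c', '3');

-- Add or change several keys in one operation:
UPDATE tab SET h = h || hstore(ARRAY['q', 'w'], ARRAY['11', '12']);

-- Delete a key:
UPDATE tab SET h = delete(h, 'c');
```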
Convert a record
to an hstore
:
Convert an hstore
to a predefined record
type:
Modify an existing record using the values from an hstore
:
The hstore
type, because of its intrinsic liberality, could contain a lot of different keys. Checking for valid keys is the task of the application. The following examples demonstrate several techniques for checking keys and obtaining statistics.
Simple example:
Using a table:
Online statistics:
As of PostgreSQL 9.0, hstore
uses a different internal representation than previous versions. This presents no obstacle for dump/restore upgrades since the text representation (used in the dump) is unchanged.
In the event of a binary upgrade, upward compatibility is maintained by having the new code recognize old-format data. This will entail a slight performance penalty when processing data that has not yet been modified by the new code. It is possible to force an upgrade of all values in a table column by doing an UPDATE
statement as follows:
Another way to do it is:
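Hedged sketches of both approaches, with tablename and hstorecol as placeholders:

```sql
UPDATE tablename SET hstorecol = hstorecol || '';

ALTER TABLE tablename ALTER hstorecol TYPE hstore USING hstorecol || '';
```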
The ALTER TABLE
method requires an ACCESS EXCLUSIVE
lock on the table, but does not result in bloating the table with old row versions.
Additional extensions are available that implement transforms for the hstore
type for the languages PL/Perl and PL/Python. The extensions for PL/Perl are called hstore_plperl
and hstore_plperlu
, for trusted and untrusted PL/Perl. If you install these transforms and specify them when creating a function, hstore
values are mapped to Perl hashes. The extensions for PL/Python are called hstore_plpythonu
, hstore_plpython2u
, and hstore_plpython3u
(see Section 46.1 for the PL/Python naming convention). If you use them, hstore
values are mapped to Python dictionaries.
It is strongly recommended that the transform extensions be installed in the same schema as hstore
. Otherwise there are installation-time security hazards if a transform extension's schema contains objects defined by a hostile user.
Oleg Bartunov <oleg@sai.msu.su>, Moscow, Moscow University, Russia
Teodor Sigaev <teodor@sigaev.ru>, Moscow, Delta-Soft Ltd., Russia
Additional enhancements by Andrew Gierth <andrew@tao11.riddles.org.uk>, United Kingdom
The postgres_fdw module provides the foreign-data wrapper postgres_fdw, which can be used to access data stored in external PostgreSQL servers.
The functionality provided by this module overlaps substantially with the functionality of the older dblink module. But postgres_fdw provides more transparent and standards-compliant syntax for accessing remote tables, and can give better performance in many cases.
To prepare for remote access using postgres_fdw:
Install the postgres_fdw extension using CREATE EXTENSION.
Create a foreign server object, using CREATE SERVER, for each remote database you want to connect to. Specify connection information, except user name and password, as options of the server object.
Create a user mapping, using CREATE USER MAPPING, for each database user that should access each foreign server. Specify the remote user name and password to use as the user and password options of the user mapping.
Create a foreign table, using CREATE FOREIGN TABLE or IMPORT FOREIGN SCHEMA, for each remote table you want to access. The columns of the foreign table must match the referenced remote table. You can, however, use table and/or column names different from the remote table's if you specify the correct remote names as options of the foreign table object.
Now you need only SELECT from a foreign table to access the data stored in its underlying remote table. You can also modify the remote table using INSERT, UPDATE, or DELETE. (Of course, the remote user you have specified in your user mapping must have privileges to do these things.)
Note that postgres_fdw currently does not support INSERT statements with an ON CONFLICT DO UPDATE clause. However, the ON CONFLICT DO NOTHING clause is supported, provided a unique index inference specification is omitted. Note also that postgres_fdw supports row movement invoked by UPDATE statements executed on partitioned tables, but it currently does not handle the case where the remote partition chosen to insert a moved row into is also an UPDATE target partition that will be updated later.
It is generally recommended that the columns of a foreign table be declared with exactly the same data types and collations as the columns of the referenced remote table. Although postgres_fdw is currently rather forgiving about performing data type conversions as needed, surprising semantic anomalies may arise when types or collations do not match, because the remote server may interpret WHERE clauses slightly differently from the local server.
Note that a foreign table can be declared with fewer columns, or with a different column order, than its underlying remote table has. Matching of columns to the remote table is done by name, not position.
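A hedged end-to-end sketch of the steps above; all server, user, table, and connection values here are placeholders:

```sql
CREATE EXTENSION postgres_fdw;

CREATE SERVER foreign_server
    FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host '192.0.2.10', port '5432', dbname 'foreign_db');

CREATE USER MAPPING FOR local_user
    SERVER foreign_server
    OPTIONS (user 'foreign_user', password 'secret');

CREATE FOREIGN TABLE foreign_table (
    id   integer NOT NULL,
    data text
)
    SERVER foreign_server
    OPTIONS (schema_name 'some_schema', table_name 'some_table');

SELECT * FROM foreign_table;
```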
A foreign server using the postgres_fdw
foreign data wrapper can have the same options that libpq accepts in connection strings, as described in Section 34.1.2, except that these options are not allowed or have special handling:
user
, password
and sslpassword
(specify these in a user mapping, instead, or use a service file)
client_encoding
(this is automatically set from the local server encoding)
application_name
- this may appear in either or both a connection and postgres_fdw.application_name. If both are present, postgres_fdw.application_name
overrides the connection setting. Unlike libpq, postgres_fdw
allows application_name
to include “escape sequences”. See postgres_fdw.application_name for details.
fallback_application_name
(always set to postgres_fdw
)
sslkey
and sslcert
- these may appear in either or both a connection and a user mapping. If both are present, the user mapping setting overrides the connection setting.
Only superusers may create or modify user mappings with the sslcert
or sslkey
settings.
Only superusers may connect to foreign servers without password authentication, so always specify the password
option for user mappings belonging to non-superusers.
A superuser may override this check on a per-user-mapping basis by setting the user mapping option password_required 'false'
, e.g.,
To prevent unprivileged users from exploiting the authentication rights of the unix user the postgres server is running as to escalate to superuser rights, only the superuser may set this option on a user mapping.
Care is required to ensure that this does not allow the mapped user the ability to connect as superuser to the mapped database per CVE-2007-3278 and CVE-2007-6601. Don't set password_required=false
on the public
role. Keep in mind that the mapped user can potentially use any client certificates, .pgpass
, .pg_service.conf
etc. in the unix home directory of the system user the postgres server runs as. They can also use any trust relationship granted by authentication modes like peer
or ident
authentication.
These options can be used to control the names used in SQL statements sent to the remote PostgreSQL server. These options are needed when a foreign table is created with names different from the underlying remote table's names.
schema_name
(string
)
This option, which can be specified for a foreign table, gives the schema name to use for the foreign table on the remote server. If this option is omitted, the name of the foreign table's schema is used.
table_name
(string
)
This option, which can be specified for a foreign table, gives the table name to use for the foreign table on the remote server. If this option is omitted, the foreign table's name is used.
column_name
(string
)
This option, which can be specified for a column of a foreign table, gives the column name to use for the column on the remote server. If this option is omitted, the column's name is used.
postgres_fdw
retrieves remote data by executing queries against remote servers, so ideally the estimated cost of scanning a foreign table should be whatever it costs to be done on the remote server, plus some overhead for communication. The most reliable way to get such an estimate is to ask the remote server and then add something for overhead — but for simple queries, it may not be worth the cost of an additional remote query to get a cost estimate. So postgres_fdw
provides the following options to control how cost estimation is done:
use_remote_estimate
(boolean
)
This option, which can be specified for a foreign table or a foreign server, controls whether postgres_fdw
issues remote EXPLAIN
commands to obtain cost estimates. A setting for a foreign table overrides any setting for its server, but only for that table. The default is false
.
fdw_startup_cost
(floating point
)
This option, which can be specified for a foreign server, is a floating point value that is added to the estimated startup cost of any foreign-table scan on that server. This represents the additional overhead of establishing a connection, parsing and planning the query on the remote side, etc. The default value is 100
.
fdw_tuple_cost
(floating point
)
This option, which can be specified for a foreign server, is a floating point value that is used as extra cost per-tuple for foreign-table scans on that server. This represents the additional overhead of data transfer between servers. You might increase or decrease this number to reflect higher or lower network delay to the remote server. The default value is 0.01
.
When use_remote_estimate
is true, postgres_fdw
obtains row count and cost estimates from the remote server and then adds fdw_startup_cost
and fdw_tuple_cost
to the cost estimates. When use_remote_estimate
is false, postgres_fdw
performs local row count and cost estimation and then adds fdw_startup_cost
and fdw_tuple_cost
to the cost estimates. This local estimation is unlikely to be very accurate unless local copies of the remote table's statistics are available. Running ANALYZE on the foreign table is the way to update the local statistics; this will perform a scan of the remote table and then calculate and store statistics just as though the table were local. Keeping local statistics can be a useful way to reduce per-query planning overhead for a remote table — but if the remote table is frequently updated, the local statistics will soon be obsolete.
By default, only WHERE
clauses using built-in operators and functions will be considered for execution on the remote server. Clauses involving non-built-in functions are checked locally after rows are fetched. If such functions are available on the remote server and can be relied on to produce the same results as they do locally, performance can be improved by sending such WHERE
clauses for remote execution. This behavior can be controlled using the following option:
extensions
(string
)
This option is a comma-separated list of names of PostgreSQL extensions that are installed, in compatible versions, on both the local and remote servers. Functions and operators that are immutable and belong to a listed extension will be considered shippable to the remote server. This option can only be specified for foreign servers, not per-table.
When using the extensions
option, it is the user's responsibility that the listed extensions exist and behave identically on both the local and remote servers. Otherwise, remote queries may fail or behave unexpectedly.
fetch_size
(integer
)
This option specifies the number of rows postgres_fdw
should get in each fetch operation. It can be specified for a foreign table or a foreign server. The option specified on a table overrides an option specified for the server. The default is 100
.
batch_size
(integer
)
This option specifies the number of rows postgres_fdw
should insert in each insert operation. It can be specified for a foreign table or a foreign server. The option specified on a table overrides an option specified for the server. The default is 1
.
Note the actual number of rows postgres_fdw
inserts at once depends on the number of columns and the provided batch_size
value. The batch is executed as a single query, and the libpq protocol (which postgres_fdw
uses to connect to a remote server) limits the number of parameters in a single query to 65535. When the number of columns * batch_size
exceeds the limit, the batch_size
will be adjusted to avoid an error.
postgres_fdw
supports asynchronous execution, which runs multiple parts of an Append
node concurrently rather than serially to improve performance. This execution can be controlled using the following option:
async_capable
(boolean
)
This option controls whether postgres_fdw
allows foreign tables to be scanned concurrently for asynchronous execution. It can be specified for a foreign table or a foreign server. A table-level option overrides a server-level option. The default is false
.
In order to ensure that the data being returned from a foreign server is consistent, postgres_fdw
will only open one connection for a given foreign server and will run all queries against that server sequentially even if there are multiple foreign tables involved, unless those tables are subject to different user mappings. In such a case, it may be more performant to disable this option to eliminate the overhead associated with running queries asynchronously.
Asynchronous execution is applied even when an Append
node contains subplan(s) executed synchronously as well as subplan(s) executed asynchronously. In such a case, if the asynchronous subplans are ones processed using postgres_fdw
, tuples from the asynchronous subplans are not returned until after at least one synchronous subplan returns all tuples, as that subplan is executed while the asynchronous subplans are waiting for the results of asynchronous queries sent to foreign servers. This behavior might change in a future release.
As described in the Transaction Management section, in postgres_fdw
transactions are managed by creating corresponding remote transactions, and subtransactions are managed by creating corresponding remote subtransactions. When multiple remote transactions are involved in the current local transaction, by default postgres_fdw
commits those remote transactions serially when the local transaction is committed. When multiple remote subtransactions are involved in the current local subtransaction, by default postgres_fdw
commits those remote subtransactions serially when the local subtransaction is committed. Performance can be improved with the following option:
parallel_commit
(boolean
)
This option controls whether postgres_fdw
commits in parallel remote transactions opened on a foreign server in a local transaction when the local transaction is committed. This setting also applies to remote and local subtransactions. This option can only be specified for foreign servers, not per-table. The default is false
.
If multiple foreign servers with this option enabled are involved in a local transaction, multiple remote transactions on those foreign servers are committed in parallel across those foreign servers when the local transaction is committed.
When this option is enabled, a foreign server with many remote transactions may see a negative performance impact when the local transaction is committed.
By default all foreign tables using postgres_fdw
are assumed to be updatable. This may be overridden using the following option:
updatable
(boolean
)
This option controls whether postgres_fdw
allows foreign tables to be modified using INSERT
, UPDATE
and DELETE
commands. It can be specified for a foreign table or a foreign server. A table-level option overrides a server-level option. The default is true
.
Of course, if the remote table is not in fact updatable, an error would occur anyway. Use of this option primarily allows the error to be thrown locally without querying the remote server. Note however that the information_schema
views will report a postgres_fdw
foreign table to be updatable (or not) according to the setting of this option, without any check of the remote server.
By default all foreign tables using postgres_fdw
are assumed to be truncatable. This may be overridden using the following option:
truncatable
(boolean
)
This option controls whether postgres_fdw
allows foreign tables to be truncated using the TRUNCATE
command. It can be specified for a foreign table or a foreign server. A table-level option overrides a server-level option. The default is true
.
Of course, if the remote table is not in fact truncatable, an error would occur anyway. Use of this option primarily allows the error to be thrown locally without querying the remote server.
postgres_fdw
is able to import foreign table definitions using IMPORT FOREIGN SCHEMA. This command creates foreign table definitions on the local server that match tables or views present on the remote server. If the remote tables to be imported have columns of user-defined data types, the local server must have compatible types of the same names.
Importing behavior can be customized with the following options (given in the IMPORT FOREIGN SCHEMA
command):
import_collate
(boolean
)
This option controls whether column COLLATE
options are included in the definitions of foreign tables imported from a foreign server. The default is true
. You might need to turn this off if the remote server has a different set of collation names than the local server does, which is likely to be the case if it's running on a different operating system. If you do so, however, there is a very severe risk that the imported table columns' collations will not match the underlying data, resulting in anomalous query behavior.
Even when this parameter is set to true
, importing columns whose collation is the remote server's default can be risky. They will be imported with COLLATE "default"
, which will select the local server's default collation, which could be different.
import_default
(boolean
)
This option controls whether column DEFAULT
expressions are included in the definitions of foreign tables imported from a foreign server. The default is false
. If you enable this option, be wary of defaults that might get computed differently on the local server than they would be on the remote server; nextval()
is a common source of problems. The IMPORT
will fail altogether if an imported default expression uses a function or operator that does not exist locally.
import_generated
(boolean
)
This option controls whether column GENERATED
expressions are included in the definitions of foreign tables imported from a foreign server. The default is true
. The IMPORT
will fail altogether if an imported generated expression uses a function or operator that does not exist locally.
import_not_null (boolean)
This option controls whether column NOT NULL
constraints are included in the definitions of foreign tables imported from a foreign server. The default is true
.
Note that constraints other than NOT NULL
will never be imported from the remote tables. Although PostgreSQL does support check constraints on foreign tables, there is no provision for importing them automatically, because of the risk that a constraint expression could evaluate differently on the local and remote servers. Any such inconsistency in the behavior of a check constraint could lead to hard-to-detect errors in query optimization. So if you wish to import check constraints, you must do so manually, and you should verify the semantics of each one carefully. For more detail about the treatment of check constraints on foreign tables, see CREATE FOREIGN TABLE.
Tables or foreign tables which are partitions of some other table are imported only when they are explicitly specified in LIMIT TO
clause. Otherwise they are automatically excluded from IMPORT FOREIGN SCHEMA. Since all data can be accessed through the partitioned table which is the root of the partitioning hierarchy, importing only partitioned tables should allow access to all the data without creating extra objects.
By default, all connections that postgres_fdw
establishes to foreign servers are kept open in the local session for re-use.
keep_connections (boolean)
This option controls whether postgres_fdw
keeps the connections to the foreign server open so that subsequent queries can re-use them. It can only be specified for a foreign server. The default is on
. If set to off
, all connections to this foreign server will be discarded at the end of each transaction.
postgres_fdw_get_connections(OUT server_name text, OUT valid boolean) returns setof record
This function returns the foreign server names of all the open connections that postgres_fdw
established from the local session to the foreign servers. It also returns whether each connection is valid or not. false
is returned if the foreign server connection is used in the current local transaction but its foreign server or user mapping has been changed or dropped (note that the server name of an invalid connection will be NULL
if the server is dropped); such an invalid connection will be closed at the end of that transaction. true
is returned otherwise. If there are no open connections, no record is returned. Example usage of the function:
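(The following is a minimal sketch of calling the function; the server names loopback1 and loopback2 are hypothetical, and the output shown is illustrative only.)

```sql
postgres=# SELECT * FROM postgres_fdw_get_connections() ORDER BY 1;
 server_name | valid
-------------+-------
 loopback1   | t
 loopback2   | f
(2 rows)
```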
postgres_fdw_disconnect(server_name text) returns boolean
This function discards the open connections that are established by postgres_fdw
from the local session to the foreign server with the given name. Note that there can be multiple connections to the given server using different user mappings. If the connections are used in the current local transaction, they are not disconnected and warning messages are reported. This function returns true
if it disconnects at least one connection, otherwise false
. If no foreign server with the given name is found, an error is reported. Example usage of the function:
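(A minimal sketch, assuming a foreign server named loopback1; the name is a placeholder.)

```sql
postgres=# SELECT postgres_fdw_disconnect('loopback1');
 postgres_fdw_disconnect
-------------------------
 t
(1 row)
```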
postgres_fdw_disconnect_all() returns boolean
This function discards all the open connections that are established by postgres_fdw
from the local session to foreign servers. If the connections are used in the current local transaction, they are not disconnected and warning messages are reported. This function returns true
if it disconnects at least one connection, otherwise false
. Example usage of the function:
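(A minimal sketch; the result shown assumes at least one idle connection was open.)

```sql
postgres=# SELECT postgres_fdw_disconnect_all();
 postgres_fdw_disconnect_all
-----------------------------
 t
(1 row)
```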
postgres_fdw
establishes a connection to a foreign server during the first query that uses a foreign table associated with the foreign server. By default this connection is kept and re-used for subsequent queries in the same session. This behavior can be controlled using keep_connections
option for a foreign server. If multiple user identities (user mappings) are used to access the foreign server, a connection is established for each user mapping.
When changing the definition of or removing a foreign server or a user mapping, the associated connections are closed. But note that if any connections are in use in the current local transaction, they are kept until the end of the transaction. Closed connections will be re-established when future queries using a foreign table require them.
Once a connection to a foreign server has been established, it's by default kept until the local or corresponding remote session exits. To disconnect a connection explicitly, keep_connections
option for a foreign server may be disabled, or postgres_fdw_disconnect
and postgres_fdw_disconnect_all
functions may be used. For example, these are useful to close connections that are no longer necessary, thereby releasing connections on the foreign server.
During a query that references any remote tables on a foreign server, postgres_fdw
opens a transaction on the remote server if one is not already open corresponding to the current local transaction. The remote transaction is committed or aborted when the local transaction commits or aborts. Savepoints are similarly managed by creating corresponding remote savepoints.
The remote transaction uses SERIALIZABLE
isolation level when the local transaction has SERIALIZABLE
isolation level; otherwise it uses REPEATABLE READ
isolation level. This choice ensures that if a query performs multiple table scans on the remote server, it will get snapshot-consistent results for all the scans. A consequence is that successive queries within a single transaction will see the same data from the remote server, even if concurrent updates are occurring on the remote server due to other activities. That behavior would be expected anyway if the local transaction uses SERIALIZABLE
or REPEATABLE READ
isolation level, but it might be surprising for a READ COMMITTED
local transaction. A future PostgreSQL release might modify these rules.
Note that it is currently not supported by postgres_fdw
to prepare the remote transaction for two-phase commit.
postgres_fdw
attempts to optimize remote queries to reduce the amount of data transferred from foreign servers. This is done by sending query WHERE
clauses to the remote server for execution, and by not retrieving table columns that are not needed for the current query. To reduce the risk of misexecution of queries, WHERE
clauses are not sent to the remote server unless they use only data types, operators, and functions that are built-in or belong to an extension that's listed in the foreign server's extensions
option. Operators and functions in such clauses must be IMMUTABLE
as well. For an UPDATE
or DELETE
query, postgres_fdw
attempts to optimize the query execution by sending the whole query to the remote server if there are no query WHERE
clauses that cannot be sent to the remote server, no local joins for the query, no row-level local BEFORE
or AFTER
triggers or stored generated columns on the target table, and no CHECK OPTION
constraints from parent views. In UPDATE
, expressions to assign to target columns must use only built-in data types, IMMUTABLE
operators, or IMMUTABLE
functions, to reduce the risk of misexecution of the query.
When postgres_fdw
encounters a join between foreign tables on the same foreign server, it sends the entire join to the foreign server, unless for some reason it believes that it will be more efficient to fetch rows from each table individually, or unless the table references involved are subject to different user mappings. While sending the JOIN
clauses, it takes the same precautions as mentioned above for the WHERE
clauses.
The query that is actually sent to the remote server for execution can be examined using EXPLAIN VERBOSE
.
In the remote sessions opened by postgres_fdw
, the search_path parameter is set to just pg_catalog
, so that only built-in objects are visible without schema qualification. This is not an issue for queries generated by postgres_fdw
itself, because it always supplies such qualification. However, this can pose a hazard for functions that are executed on the remote server via triggers or rules on remote tables. For example, if a remote table is actually a view, any functions used in that view will be executed with the restricted search path. It is recommended to schema-qualify all names in such functions, or else attach SET search_path
options (see CREATE FUNCTION) to such functions to establish their expected search path environment.
postgres_fdw
likewise establishes remote session settings for various parameters:
TimeZone is set to UTC
DateStyle is set to ISO
IntervalStyle is set to postgres
extra_float_digits is set to 3
for remote servers 9.0 and newer and is set to 2
for older versions
These are less likely to be problematic than search_path
, but can be handled with function SET
options if the need arises.
It is not recommended that you override this behavior by changing the session-level settings of these parameters; that is likely to cause postgres_fdw
to malfunction.
postgres_fdw
can be used with remote servers dating back to PostgreSQL 8.3. Read-only capability is available back to 8.1. A limitation however is that postgres_fdw
generally assumes that immutable built-in functions and operators are safe to send to the remote server for execution, if they appear in a WHERE
clause for a foreign table. Thus, a built-in function that was added since the remote server's release might be sent to it for execution, resulting in “function does not exist” or a similar error. This type of failure can be worked around by rewriting the query, for example by embedding the foreign table reference in a sub-SELECT
with OFFSET 0
as an optimization fence, and placing the problematic function or operator outside the sub-SELECT
.
postgres_fdw.application_name (string)
Specifies a value for application_name configuration parameter used when postgres_fdw
establishes a connection to a foreign server. This overrides application_name
option of the server object. Note that change of this parameter doesn't affect any existing connections until they are re-established.
postgres_fdw.application_name
can be any string of any length and contain even non-ASCII characters. However when it's passed to and used as application_name
in a foreign server, note that it will be truncated to less than NAMEDATALEN
characters and anything other than printable ASCII characters will be replaced with question marks (?
). See application_name for details.
%
characters begin “escape sequences” that are replaced with status information as outlined below. Unrecognized escapes are ignored. Other characters are copied straight to the application name. Note that it's not allowed to specify a plus/minus sign or a numeric literal after the %
and before the option, for alignment and padding.
For example, suppose user local_user
establishes a connection from database local_db
to foreign_db
as user foreign_user
, the setting 'db=%d, user=%u'
is replaced with 'db=local_db, user=local_user'
.
Here is an example of creating a foreign table with postgres_fdw
. First install the extension:
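```sql
CREATE EXTENSION postgres_fdw;
```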
Then create a foreign server using CREATE SERVER. In this example we wish to connect to a PostgreSQL server on host 192.83.123.89
listening on port 5432
. The database to which the connection is made is named foreign_db
on the remote server:
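(A sketch using the host, port, and database name given above; the server name foreign_server is an arbitrary local name.)

```sql
CREATE SERVER foreign_server
        FOREIGN DATA WRAPPER postgres_fdw
        OPTIONS (host '192.83.123.89', port '5432', dbname 'foreign_db');
```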
A user mapping, defined with CREATE USER MAPPING, is needed as well to identify the role that will be used on the remote server:
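(A sketch; local_user, foreign_user, and the password are placeholders to be replaced with your own values.)

```sql
CREATE USER MAPPING FOR local_user
        SERVER foreign_server
        OPTIONS (user 'foreign_user', password 'password');
```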
Now it is possible to create a foreign table with CREATE FOREIGN TABLE. In this example we wish to access the table named some_schema.some_table
on the remote server. The local name for it will be foreign_table
:
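(A sketch; the column list shown is illustrative and must be adapted to match the actual remote table.)

```sql
CREATE FOREIGN TABLE foreign_table (
        id integer NOT NULL,
        data text
)
        SERVER foreign_server
        OPTIONS (schema_name 'some_schema', table_name 'some_table');
```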
It's essential that the data types and other properties of the columns declared in CREATE FOREIGN TABLE
match the actual remote table. Column names must match as well, unless you attach column_name
options to the individual columns to show how they are named in the remote table. In many cases, use of IMPORT FOREIGN SCHEMA
is preferable to constructing foreign table definitions manually.
Shigeru Hanada <
shigeru.hanada@gmail.com
>
sepgsql
is a loadable module that supports label-based mandatory access control (MAC) based on SELinux security policy.
The current implementation has significant limitations, and does not enforce mandatory access control for all actions. See Section F.35.7.
This module integrates with SELinux to provide an additional layer of security checking above and beyond what is normally provided by PostgreSQL. From the perspective of SELinux, this module allows PostgreSQL to function as a user-space object manager. Each table or function access initiated by a DML query will be checked against the system security policy. This check is in addition to the usual SQL permissions checking performed by PostgreSQL.
SELinux access control decisions are made using security labels, which are represented by strings such as system_u:object_r:sepgsql_table_t:s0
. Each access control decision involves two labels: the label of the subject attempting to perform the action, and the label of the object on which the operation is to be performed. Since these labels can be applied to any sort of object, access control decisions for objects stored within the database can be (and, with this module, are) subjected to the same general criteria used for objects of any other type, such as files. This design is intended to allow a centralized security policy to protect information assets independent of the particulars of how those assets are stored.
The SECURITY LABEL statement allows assignment of a security label to a database object.
sepgsql
can only be used on Linux 2.6.28 or higher with SELinux enabled. It is not available on any other platform. You will also need libselinux 2.1.10 or higher and selinux-policy 3.9.13 or higher (although some distributions may backport the necessary rules into older policy versions).
The sestatus
command allows you to check the status of SELinux. A typical display is:
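(A hedged sketch; the exact fields and values vary with your SELinux version and configuration.)

```
$ sestatus
SELinux status:                 enabled
SELinuxfs mount:                /sys/fs/selinux
Current mode:                   enforcing
Mode from config file:          enforcing
Policy from config file:        targeted
```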
If SELinux is disabled or not installed, you must set that product up first before installing this module.
To build this module, include the option --with-selinux
in your PostgreSQL configure
command. Be sure that the libselinux-devel
RPM is installed at build time.
To use this module, you must include sepgsql
in the shared_preload_libraries parameter in postgresql.conf
. The module will not function correctly if loaded in any other manner. Once the module is loaded, you should execute sepgsql.sql
in each database. This will install functions needed for security label management, and assign initial security labels.
Here is an example showing how to initialize a fresh database cluster with sepgsql
functions and security labels installed. Adjust the paths shown as appropriate for your installation:
Please note that you may see some or all of the following notifications depending on the particular versions you have of libselinux and selinux-policy:
These messages are harmless and should be ignored.
If the installation process completes without error, you can now start the server normally.
Due to the nature of SELinux, running the regression tests for sepgsql
requires several extra configuration steps, some of which must be done as root. The regression tests will not be run by an ordinary make check
or make installcheck
command; you must set up the configuration and then invoke the test script manually. The tests must be run in the contrib/sepgsql
directory of a configured PostgreSQL build tree. Although they require a build tree, the tests are designed to be executed against an installed server; that is, they are comparable to make installcheck
, not make check
.
First, set up sepgsql
in a working database according to the instructions in Section F.35.2. Note that the current operating system user must be able to connect to the database as superuser without password authentication.
Second, build and install the policy package for the regression test. The sepgsql-regtest
policy is a special purpose policy package which provides a set of rules to be allowed during the regression tests. It should be built from the policy source file sepgsql-regtest.te
, which is done using make
with a Makefile supplied by SELinux. You will need to locate the appropriate Makefile on your system; the path shown below is only an example. Once built, install this policy package using the semodule
command, which loads supplied policy packages into the kernel. If the package is correctly installed, semodule
-l should list sepgsql-regtest
as an available policy package:
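(A hedged sketch of the build-and-install sequence; the Makefile path and the version number shown are examples and will differ between systems.)

```
$ cd /path/to/postgresql/contrib/sepgsql
$ make -f /usr/share/selinux/devel/Makefile
$ sudo semodule -u sepgsql-regtest.pp
$ sudo semodule -l | grep sepgsql
sepgsql-regtest 1.07
```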
Third, turn on sepgsql_regression_test_mode
. For security reasons, the rules in sepgsql-regtest
are not enabled by default; the sepgsql_regression_test_mode
parameter enables the rules needed to launch the regression tests. It can be turned on using the setsebool
command:
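```
$ sudo setsebool sepgsql_regression_test_mode on
$ getsebool sepgsql_regression_test_mode
sepgsql_regression_test_mode --> on
```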
Fourth, verify your shell is operating in the unconfined_t
domain:
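(A sketch; the exact label depends on your local policy and login configuration.)

```
$ id -Z
unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
```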
See Section F.35.8 for details on adjusting your working domain, if necessary.
Finally, run the regression test script:
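```
$ ./test_sepgsql
```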
This script will attempt to verify that you have done all the configuration steps correctly, and then it will run the regression tests for the sepgsql
module.
After completing the tests, it's recommended you disable the sepgsql_regression_test_mode
parameter:
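```
$ sudo setsebool sepgsql_regression_test_mode off
```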
You might prefer to remove the sepgsql-regtest
policy entirely:
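```
$ sudo semodule -r sepgsql-regtest
```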
sepgsql.permissive
(boolean
)
This parameter enables sepgsql
to function in permissive mode, regardless of the system setting. The default is off. This parameter can only be set in the postgresql.conf
file or on the server command line.
When this parameter is on, sepgsql
functions in permissive mode, even if SELinux in general is working in enforcing mode. This parameter is primarily useful for testing purposes.
sepgsql.debug_audit (boolean)
This parameter enables the printing of audit messages regardless of the system policy settings. The default is off, which means that messages will be printed according to the system settings.
The security policy of SELinux also has rules to control whether or not particular accesses are logged. By default, access violations are logged, but allowed accesses are not.
This parameter forces all possible logging to be turned on, regardless of the system policy.
The security model of SELinux describes all the access control rules as relationships between a subject entity (typically, a client of the database) and an object entity (such as a database object), each of which is identified by a security label. If access to an unlabeled object is attempted, the object is treated as if it were assigned the label unlabeled_t
.
Currently, sepgsql
allows security labels to be assigned to schemas, tables, columns, sequences, views, and functions. When sepgsql
is in use, security labels are automatically assigned to supported database objects at creation time. This label is called a default security label, and is decided according to the system security policy, which takes as input the creator's label, the label assigned to the new object's parent object, and optionally the name of the constructed object.
A new database object basically inherits the security label of the parent object, except when the security policy has special rules known as type-transition rules, in which case a different label may be applied. For schemas, the parent object is the current database; for tables, sequences, views, and functions, it is the containing schema; for columns, it is the containing table.
For tables, db_table:select
, db_table:insert
, db_table:update
or db_table:delete
are checked for all the referenced target tables depending on the kind of statement; in addition, db_table:select
is also checked for all the tables that contain columns referenced in the WHERE
or RETURNING
clause, as a data source for UPDATE
, and so on.
Column-level permissions will also be checked for each referenced column. db_column:select
is checked on not only the columns being read using SELECT
, but those being referenced in other DML statements; db_column:update
or db_column:insert
will also be checked for columns being modified by UPDATE
or INSERT
.
For example, consider:
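(A minimal sketch consistent with the column references discussed below; t1 and func1 are hypothetical names.)

```sql
UPDATE t1 SET x = 2, y = func1(y) WHERE z = 100;
```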
Here, db_column:update
will be checked for t1.x
, since it is being updated, db_column:{select update}
will be checked for t1.y
, since it is both updated and referenced, and db_column:select
will be checked for t1.z
, since it is only referenced. db_table:{select update}
will also be checked at the table level.
For sequences, db_sequence:get_value
is checked when we reference a sequence object using SELECT
; however, note that we do not currently check permissions on execution of corresponding functions such as lastval()
.
For views, db_view:expand
will be checked, then any other required permissions will be checked on the objects being expanded from the view, individually.
For functions, db_procedure:{execute}
will be checked when a user tries to execute a function as part of a query, or using fast-path invocation. If the function is a trusted procedure, the db_procedure:{entrypoint}
permission is also checked, to determine whether the function can act as the entry point of a trusted procedure.
In order to access any schema object, db_schema:search
permission is required on the containing schema. When an object is referenced without schema qualification, schemas on which this permission is not present will not be searched (just as if the user did not have USAGE
privilege on the schema). If an explicit schema qualification is present, an error will occur if the user does not have the requisite permission on the named schema.
The client must be allowed to access all referenced tables and columns, even if they originated from views which were then expanded, so that we apply consistent access control rules independent of the manner in which the table contents are referenced.
The default database privilege system allows database superusers to modify system catalogs using DML commands, and reference or modify toast tables. These operations are prohibited when sepgsql
is enabled.
SELinux defines several permissions to control common operations for each object type, such as creation, alteration, dropping, and relabeling of the security label. In addition, several object types have special permissions to control their characteristic operations, such as addition or deletion of name entries within a particular schema.
Creating a new database object requires create
permission. SELinux will grant or deny this permission based on the client's security label and the proposed security label for the new object. In some cases, additional privileges are required:
CREATE DATABASE additionally requires getattr
permission for the source or template database.
Creating a schema object additionally requires add_name
permission on the parent schema.
Creating a table additionally requires permission to create each individual table column, just as if each table column were a separate top-level object.
Creating a function marked as LEAKPROOF
additionally requires install
permission. (This permission is also checked when LEAKPROOF
is set for an existing function.)
When a DROP
command is executed, drop
will be checked on the object being removed. Permissions will also be checked for objects dropped indirectly via CASCADE
. Deletion of objects contained within a particular schema (tables, views, sequences and procedures) additionally requires remove_name
on the schema.
When an ALTER
command is executed, setattr
will be checked on the object being modified for each object type, except for subsidiary objects such as the indexes or triggers of a table, where permissions are instead checked on the parent object. In some cases, additional permissions are required:
Moving an object to a new schema additionally requires remove_name
permission on the old schema and add_name
permission on the new one.
Setting the LEAKPROOF
attribute on a function requires install
permission.
Using SECURITY LABEL on an object additionally requires relabelfrom
permission for the object in conjunction with its old security label and relabelto
permission for the object in conjunction with its new security label. (In cases where multiple label providers are installed and the user tries to set a security label, but it is not managed by SELinux, only setattr
should be checked here. This is currently not done due to implementation restrictions.)
Trusted procedures are similar to security definer functions or setuid commands. SELinux provides a feature to allow trusted code to run using a security label different from that of the client, generally for the purpose of providing highly controlled access to sensitive data (e.g. rows might be omitted, or the precision of stored values might be reduced). Whether or not a function acts as a trusted procedure is controlled by its security label and the operating system security policy. For example:
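(A hedged sketch of such a setup, using the customer table and show_credit function referred to in the next paragraph; the security labels shown assume a stock SELinux policy and may need adjusting.)

```sql
CREATE TABLE customer (
    cid    int PRIMARY KEY,
    cname  text,
    credit text
);

-- mark the sensitive column as secret
SECURITY LABEL ON COLUMN customer.credit
    IS 'system_u:object_r:sepgsql_secret_table_t:s0';

-- a function that exposes only a masked form of the data
CREATE FUNCTION show_credit(int) RETURNS text
    AS $$ SELECT regexp_replace(credit, '-[0-9]+$', '-xxxx', 'g')
          FROM customer WHERE cid = $1 $$
    LANGUAGE sql;

-- mark the function as a trusted procedure
SECURITY LABEL ON FUNCTION show_credit(int)
    IS 'system_u:object_r:sepgsql_trusted_proc_exec_t:s0';
```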
The above operations should be performed by an administrative user.
In this case, a regular user cannot reference customer.credit
directly, but a trusted procedure show_credit
allows the user to print the credit card numbers of customers with some of the digits masked out.
It is possible to use SELinux's dynamic domain transition feature to switch the security label of the client process, the client domain, to a new context, if that is allowed by the security policy. The client domain needs the setcurrent
permission and also dyntransition
from the old to the new domain.
Dynamic domain transitions should be considered carefully, because they allow users to switch their label, and therefore their privileges, at their option, rather than (as in the case of a trusted procedure) as mandated by the system. Thus, the dyntransition
permission is only considered safe when used to switch to a domain with a smaller set of privileges than the original one. For example:
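(A hedged sketch using the MCS ranges mentioned below; the labels are illustrative and depend on your policy.)

```sql
-- check the current client label
SELECT sepgsql_getcon();
--  unconfined_u:unconfined_r:unconfined_t:s0-s0:c1.c1023

-- switching to a narrower MCS range is allowed
SELECT sepgsql_setcon('unconfined_u:unconfined_r:unconfined_t:s0-s0:c1.c4');

-- switching back to the wider range is denied
SELECT sepgsql_setcon('unconfined_u:unconfined_r:unconfined_t:s0-s0:c1.c1023');
-- ERROR:  SELinux: security policy violation
```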
In the example above, we were allowed to switch from the larger MCS range c1.c1023
to the smaller range c1.c4
, but switching back was denied.
A combination of dynamic domain transition and trusted procedure enables an interesting use case that fits the typical process life-cycle of connection pooling software. Even if your connection pooling software is not allowed to run most of SQL commands, you can allow it to switch the security label of the client using the sepgsql_setcon()
function from within a trusted procedure; that should take some credential to authorize the request to switch the client label. After that, this session will have the privileges of the target user, rather than the connection pooler. The connection pooler can later revert the security label change by again using sepgsql_setcon()
with NULL
argument, again invoked from within a trusted procedure with appropriate permissions checks. The point here is that only the trusted procedure actually has permission to change the effective security label, and only does so when given proper credentials. Of course, for secure operation, the credential store (table, procedure definition, or whatever) must be protected from unauthorized access.
We reject the LOAD command across the board, because any module loaded could easily circumvent security policy enforcement.
Table F.29 shows the available functions.
Data Definition Language (DDL) Permissions
Due to implementation restrictions, some DDL operations do not check permissions.
Data Control Language (DCL) Permissions
Due to implementation restrictions, DCL operations do not check permissions.
Row-level access control
PostgreSQL supports row-level access, but sepgsql
does not.
Covert channels
sepgsql
does not try to hide the existence of a certain object, even if the user is not allowed to reference it. For example, we can infer the existence of an invisible object as a result of primary key conflicts, foreign key violations, and so on, even if we cannot obtain the contents of the object. The existence of a top secret table cannot be hidden; we only hope to conceal its contents.
This wiki page provides a brief overview, security design, architecture, administration and upcoming features.
SELinux User's and Administrator's Guide
This document provides a wide spectrum of knowledge to administer SELinux on your systems. It focuses primarily on Red Hat operating systems, but is not limited to them.
Fedora SELinux FAQ
This document answers frequently asked questions about SELinux. It focuses primarily on Fedora, but is not limited to Fedora.
KaiGai Kohei <
kaigai@ak.jp.nec.com
>
The pgstattuple module provides various functions to obtain tuple-level statistics.
Because these functions return detailed page-level information, access to them is restricted by default. By default, only the role pg_stat_scan_tables has EXECUTE privilege on them; superusers of course bypass this restriction. After the extension has been installed, users may issue GRANT commands to allow others to execute the functions, but it is usually preferable to add those users to the pg_stat_scan_tables role instead.
pgstattuple(regclass) returns record
pgstattuple
returns a relation's physical length, percentage of “dead” tuples, and other info. This may help users to determine whether vacuum is necessary or not. The argument is the target relation's name (optionally schema-qualified) or OID. For example:
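(A minimal sketch; the target relation is arbitrary and the figures shown are illustrative only.)

```sql
test=> SELECT * FROM pgstattuple('pg_catalog.pg_proc');
-[ RECORD 1 ]------+-------
table_len          | 458752
tuple_count        | 1470
tuple_len          | 438896
tuple_percent      | 95.67
dead_tuple_count   | 11
dead_tuple_len     | 3157
dead_tuple_percent | 0.69
free_space         | 8932
free_percent       | 1.95
```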
The output columns are described in Table F.22.
pgstattuple
Output Columns
The table_len
will always be greater than the sum of the tuple_len
, dead_tuple_len
and free_space
. The difference is accounted for by fixed page overhead, the per-page table of pointers to tuples, and padding to ensure that tuples are correctly aligned.
pgstattuple
acquires only a read lock on the relation. So the results do not reflect an instantaneous snapshot; concurrent updates will affect them.
pgstattuple
judges a tuple is “dead” if HeapTupleSatisfiesDirty
returns false.
pgstattuple(text) returns record
This is the same as pgstattuple(regclass)
, except that the target relation is specified as TEXT. This function is kept for backward compatibility and will be deprecated in a future release.
pgstatindex(regclass) returns record
pgstatindex
returns a record showing information about a B-tree index. For example:
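(A minimal sketch; the index name is arbitrary and the values shown are illustrative.)

```sql
test=> SELECT * FROM pgstatindex('pg_cast_oid_index');
-[ RECORD 1 ]------+------
version            | 2
tree_level         | 0
index_size         | 16384
root_block_no      | 1
internal_pages     | 0
leaf_pages         | 1
empty_pages        | 0
deleted_pages      | 0
avg_leaf_density   | 54.27
leaf_fragmentation | 0
```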
The output columns are:
The reported index_size
will normally correspond to one more page than is accounted for by internal_pages + leaf_pages + empty_pages + deleted_pages
, because it also includes the index's metapage.
As with pgstattuple
, the results are accumulated page-by-page, and should not be expected to represent an instantaneous snapshot of the whole index.
pgstatindex(text) returns record
This is the same as pgstatindex(regclass)
, except that the target index is specified as TEXT. This function is kept for backward compatibility and will be deprecated in a future release.
pgstatginindex(regclass) returns record
pgstatginindex
returns a record showing information about a GIN index. For example:
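(A minimal sketch; test_gin_index is a hypothetical index name and the values are illustrative.)

```sql
test=> SELECT * FROM pgstatginindex('test_gin_index');
 version | pending_pages | pending_tuples
---------+---------------+----------------
       1 |             0 |              0
(1 row)
```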
The output columns are:
pgstathashindex(regclass) returns record
pgstathashindex
returns a record showing information about a HASH index. For example:
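(A minimal sketch; con_hash_index is a hypothetical index name and the values are illustrative.)

```sql
test=> SELECT * FROM pgstathashindex('con_hash_index');
-[ RECORD 1 ]--+---------
version        | 4
bucket_pages   | 33081
overflow_pages | 0
bitmap_pages   | 1
unused_pages   | 32455
live_items     | 10204006
dead_items     | 0
free_percent   | 61.8
```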
The output columns are:
pg_relpages(regclass) returns bigint
pg_relpages
returns the number of pages in the relation.
pg_relpages(text) returns bigint
This is the same as pg_relpages(regclass)
, except that the target relation is specified as TEXT. This function is kept for backward compatibility and will be deprecated in a future release.
pgstattuple_approx(regclass) returns record
pgstattuple_approx
is a faster alternative to pgstattuple
that returns approximate results. The argument is the target relation's name or OID. For example:
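(A minimal invocation sketch; the target relation is arbitrary, and the columns returned are those described in Table F.23.)

```sql
SELECT * FROM pgstattuple_approx('pg_catalog.pg_proc');
```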
The output columns are described in Table F.23.
Whereas pgstattuple
always performs a full-table scan and returns an exact count of live and dead tuples (and their sizes) and free space, pgstattuple_approx
tries to avoid the full-table scan and returns exact dead tuple statistics along with an approximation of the number and size of live tuples and free space.
It does this by skipping pages that have only visible tuples according to the visibility map (if a page has the corresponding VM bit set, then it is assumed to contain no dead tuples). For such pages, it derives the free space value from the free space map, and assumes that the rest of the space on the page is taken up by live tuples.
For pages that cannot be skipped, it scans each tuple, recording its presence and size in the appropriate counters, and adding up the free space on the page. At the end, it estimates the total number of live tuples based on the number of pages and tuples scanned (in the same way that VACUUM estimates pg_class.reltuples).
pgstattuple_approx
Output Columns
In the above output, the free space figures may not match the pgstattuple
output exactly, because the free space map gives us an exact figure, but is not guaranteed to be accurate to the byte.
Tatsuo Ishii, Satoshi Nagayasu and Abhijit Menon-Sen
The pg_buffercache extension provides a means for examining what is happening in the shared buffer cache in real time.
The module provides a C function pg_buffercache_pages that returns a set of records, plus a view pg_buffercache that wraps the function for convenient use.
By default, use is restricted to superusers and members of the pg_monitor role. Access may be granted to other roles using GRANT.
pg_buffercache
View
Table F.15 lists the columns of this view.
pg_buffercache
Columns
There is one row for each buffer in the shared cache. Unused buffers are shown with all fields null except bufferid
. Shared system catalogs are shown as belonging to database zero.
Because the cache is shared by all the databases, there will normally be pages from relations not belonging to the current database. This means that there may not be matching join rows in pg_class
for some rows, or that there could even be incorrect joins. If you are trying to join against pg_class
, it's a good idea to restrict the join to rows having reldatabase
equal to the current database's OID or zero.
Since buffer manager locks are not taken to copy the buffer state data that the view will display, accessing the pg_buffercache
view has less impact on normal buffer activity, but it doesn't provide a consistent set of results across all buffers. However, we ensure that the information of each buffer is self-consistent.
Mark Kirkwood <
markir@paradise.net.nz
>
Design suggestions: Neil Conway <
neilc@samurai.com
>
Debugging advice: Tom Lane <
tgl@sss.pgh.pa.us
>
The pg_stat_statements module provides a means for tracking execution statistics of the SQL statements executed by a server.
The module must be loaded by adding pg_stat_statements to shared_preload_libraries in postgresql.conf, because it requires additional shared memory. This means that a server restart is needed to add or remove the module.
When pg_stat_statements is loaded, it tracks statistics across all databases of the server. To access and manipulate these statistics, the module provides the view pg_stat_statements and the utility functions pg_stat_statements_reset and pg_stat_statements. These are not available globally but can be enabled for a specific database with CREATE EXTENSION pg_stat_statements.
pg_stat_statements
View
The statistics gathered by this extension are made available via a view named pg_stat_statements. This view contains one row for each distinct combination of database ID, user ID, and query ID (up to the maximum number of distinct statements that the module can track). The columns of the view are described in Table F.21.
pg_stat_statements
Columns
For security reasons, only superusers and members of the pg_read_all_stats role are allowed to see the SQL text and queryid of queries executed by other users. Other users can see the statistics, however, if the view has been installed in their database.
Plannable queries (that is, SELECT, INSERT, UPDATE, and DELETE) are combined into a single pg_stat_statements entry whenever they have identical query structures according to an internal hash calculation. Typically, two queries will be considered the same for this purpose if they are semantically equivalent except for the values of literal constants appearing in the query. Utility commands (that is, all other commands) are compared strictly on the basis of their textual query strings, however.
When a constant's value has been ignored in order to match the query with other queries, the constant is replaced by a parameter symbol, such as $1, in the pg_stat_statements display. The rest of the query text is that of the first query that had the particular queryid hash value associated with the pg_stat_statements entry.
In some cases, queries with visibly different texts might get merged into a single pg_stat_statements
entry. Normally this will happen only for semantically equivalent queries, but there is a small chance of hash collisions causing unrelated queries to be merged into one entry. (This cannot happen for queries belonging to different users or databases, however.)
Since the queryid
hash value is computed on the post-parse-analysis representation of the queries, the opposite is also possible: queries with identical texts might appear as separate entries, if they have different meanings as a result of factors such as different search_path
settings.
Consumers of pg_stat_statements
may wish to use queryid
(perhaps in combination with dbid
and userid
) as a more stable and reliable identifier for each entry than its query text. However, it is important to understand that there are only limited guarantees around the stability of the queryid
hash value. Since the identifier is derived from the post-parse-analysis tree, its value is a function of, among other things, the internal object identifiers appearing in this representation. This has some counterintuitive implications. For example, pg_stat_statements
will consider two apparently-identical queries to be distinct, if they reference a table that was dropped and recreated between the executions of the two queries. The hashing process is also sensitive to differences in machine architecture and other facets of the platform. Furthermore, it is not safe to assume that queryid
will be stable across major versions of PostgreSQL.
As a rule of thumb, queryid
values can be assumed to be stable and comparable only so long as the underlying server version and catalog metadata details stay exactly the same. Two servers participating in replication based on physical WAL replay can be expected to have identical queryid
values for the same query. However, logical replication schemes do not promise to keep replicas identical in all relevant details, so queryid
will not be a useful identifier for accumulating costs across a set of logical replicas. If in doubt, direct testing is recommended.
The parameter symbols used to replace constants in representative query texts start from the next number after the highest $
n
parameter in the original query text, or $1
if there was none. It's worth noting that in some cases there may be hidden parameter symbols that affect this numbering. For example, PL/pgSQL uses hidden parameter symbols to insert values of function local variables into queries, so that a PL/pgSQL statement like SELECT i + 1 INTO j
would have representative text like SELECT i + $2
.
The representative query texts are kept in an external disk file, and do not consume shared memory. Therefore, even very lengthy query texts can be stored successfully. However, if many long query texts are accumulated, the external file might grow unmanageably large. As a recovery method if that happens, pg_stat_statements
may choose to discard the query texts, whereupon all existing entries in the pg_stat_statements
view will show null query
fields, though the statistics associated with each queryid
are preserved. If this happens, consider reducing pg_stat_statements.max
to prevent recurrences.
plans and calls are not necessarily expected to be exactly equal, because planning and execution statistics are updated at the end of their respective phases, and only for successful operations. For example, if a statement is successfully planned but fails during execution, only its planning statistics are updated. If the planning phase is skipped because a cached plan is used, only its execution statistics are updated.
pg_stat_statements_reset(userid Oid, dbid Oid, queryid bigint) returns void
pg_stat_statements_reset discards statistics gathered so far by pg_stat_statements corresponding to the specified userid, dbid, and queryid. If any of the parameters are not specified, the default value 0 (invalid) is used, and the statistics that match the other parameters are reset. If no parameter is specified, or if all the specified parameters are 0 (invalid), all statistics are discarded. By default, this function can only be executed by superusers. Access may be granted to others using GRANT.
pg_stat_statements(showtext boolean) returns setof record
The pg_stat_statements view is defined in terms of a function also named pg_stat_statements. Clients can call the pg_stat_statements function directly, and by specifying showtext := false they can omit the query texts (that is, the OUT argument that corresponds to the view's query column will return nulls). This feature is intended to support external tools that might wish to avoid the overhead of repeatedly retrieving query texts of indeterminate length. Such tools can instead cache the first query text observed for each entry themselves, which is all that pg_stat_statements itself does, and then retrieve query texts only as needed. Since the server stores query texts in a file, this approach may reduce the physical I/O required for repeated examination of the pg_stat_statements data.
pg_stat_statements.max (integer)
pg_stat_statements.max is the maximum number of statements tracked by the module (that is, the maximum number of rows in the pg_stat_statements view). If more distinct statements than that are observed, information about the least-executed statements is discarded. The default value is 5,000. This parameter can only be set at server start.
pg_stat_statements.track (enum)
pg_stat_statements.track controls which statements are counted by the module. Specify top to track top-level statements (those issued directly by clients), all to also track nested statements (such as statements invoked within functions), or none to disable statement statistics collection. The default value is top. Only superusers can change this setting.
pg_stat_statements.track_utility (boolean)
pg_stat_statements.track_utility controls whether utility commands are tracked by the module. Utility commands are all those other than SELECT, INSERT, UPDATE, and DELETE. The default value is on. Only superusers can change this setting.
pg_stat_statements.save (boolean)
pg_stat_statements.save specifies whether to save statement statistics across server shutdowns. If it is off, then statistics are not saved at shutdown nor reloaded at server start. The default value is on. This parameter can only be set in the postgresql.conf file or on the server command line.
The module requires additional shared memory proportional to pg_stat_statements.max. Note that this memory is consumed whenever the module is loaded, even if pg_stat_statements.track is set to none.
These parameters must be set in postgresql.conf. Typical usage might be:
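(A hedged sketch of a typical configuration; the values are illustrative.)

```
# postgresql.conf
shared_preload_libraries = 'pg_stat_statements'
pg_stat_statements.max = 10000
pg_stat_statements.track = all
```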
Takahiro Itagaki <
itagaki.takahiro@oss.ntt.co.jp
>
. Query normalization added by Peter Geoghegan <
peter@2ndquadrant.com
>
.
The tablefunc module includes various functions that return tables (that is, multiple rows). These functions are useful both in their own right and as examples of how to write C functions that return multiple rows.
Table F.30 summarizes the functions provided by the tablefunc module.
tablefunc
Functions
Function | Returns | Description |
---|---|---|
normal_rand
produces a set of normally distributed random values (Gaussian distribution).
numvals
is the number of values to be returned from the function. mean
is the mean of the normal distribution of values and stddev
is the standard deviation of the normal distribution of values.
For example, this call requests 1000 values with a mean of 5 and a standard deviation of 3:
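```sql
SELECT * FROM normal_rand(1000, 5, 3);
```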
The crosstab
function is used to produce “pivot” displays, wherein data is listed across the page rather than down. For example, we might have data like
which we wish to display like
The crosstab
function takes a text parameter that is a SQL query producing raw data formatted in the first way, and produces a table formatted in the second way.
The sql
parameter is a SQL statement that produces the source set of data. This statement must return one row_name
column, one category
column, and one value
column. N
is an obsolete parameter, ignored if supplied (formerly this had to match the number of output value columns, but now that is determined by the calling query).
For example, the provided query might produce a set something like:
The crosstab
function is declared to return setof record
, so the actual names and types of the output columns must be defined in the FROM
clause of the calling SELECT
statement, for example:
This example produces a set something like:
The FROM
clause must define the output as one row_name
column (of the same data type as the first result column of the SQL query) followed by N value
columns (all of the same data type as the third result column of the SQL query). You can set up as many output value columns as you wish. The names of the output columns are up to you.
The crosstab
function produces one output row for each consecutive group of input rows with the same row_name
value. It fills the output value
columns, left to right, with the value
fields from these rows. If there are fewer rows in a group than there are output value
columns, the extra output columns are filled with nulls; if there are more rows, the extra input rows are skipped.
In practice the SQL query should always specify ORDER BY 1,2
to ensure that the input rows are properly ordered, that is, values with the same row_name
are brought together and correctly ordered within the row. Notice that crosstab
itself does not pay any attention to the second column of the query result; it's just there to be ordered by, to control the order in which the third-column values appear across the page.
Here is a complete example:
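(A hedged reconstruction of the kind of complete example this refers to; the table name ct and its contents are illustrative.)

```sql
CREATE TABLE ct(id serial, rowid text, attribute text, value text);

INSERT INTO ct(rowid, attribute, value) VALUES
  ('test1','att1','val1'), ('test1','att2','val2'),
  ('test1','att3','val3'), ('test1','att4','val4'),
  ('test2','att1','val5'), ('test2','att2','val6'),
  ('test2','att3','val7'), ('test2','att4','val8');

SELECT *
FROM crosstab(
  'select rowid, attribute, value
   from ct
   where attribute = ''att2'' or attribute = ''att3''
   order by 1,2')
AS ct(row_name text, category_1 text, category_2 text);

--  row_name | category_1 | category_2
-- ----------+------------+------------
--  test1    | val2       | val3
--  test2    | val6       | val7
```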
You can avoid always having to write out a FROM
clause to define the output columns, by setting up a custom crosstab function that has the desired output row type wired into its definition. This is described in the next section. Another possibility is to embed the required FROM
clause in a view definition.
See also the \crosstabview command in psql, which provides functionality similar to crosstab().
The crosstab
N
functions are examples of how to set up custom wrappers for the general crosstab
function, so that you need not write out column names and types in the calling SELECT
query. The tablefunc
module includes crosstab2
, crosstab3
, and crosstab4
, whose output row types are defined as
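(A schematic sketch of those row types, where N stands for 2, 3, or 4.)

```sql
CREATE TYPE tablefunc_crosstab_N AS (
    row_name   text,
    category_1 text,
    category_2 text,
    -- ... continuing up to ...
    category_N text
);
```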
Thus, these functions can be used directly when the input query produces row_name
and value
columns of type text
, and you want 2, 3, or 4 output values columns. In all other ways they behave exactly as described above for the general crosstab
function.
For instance, the example given in the previous section would also work as
These functions are provided mostly for illustration purposes. You can create your own return types and functions based on the underlying crosstab()
function. There are two ways to do it:
Create a composite type describing the desired output columns, similar to the examples in contrib/tablefunc/tablefunc--1.0.sql
. Then define a unique function name accepting one text
parameter and returning setof your_type_name
, but linking to the same underlying crosstab
C function. For example, if your source data produces row names that are text
, and values that are float8
, and you want 5 value columns:
Use OUT
parameters to define the return type implicitly. The same example could also be done this way:
The main limitation of the single-parameter form of crosstab
is that it treats all values in a group alike, inserting each value into the first available column. If you want the value columns to correspond to specific categories of data, and some groups might not have data for some of the categories, that doesn't work well. The two-parameter form of crosstab
handles this case by providing an explicit list of the categories corresponding to the output columns.
source_sql
is a SQL statement that produces the source set of data. This statement must return one row_name
column, one category
column, and one value
column. It may also have one or more “extra” columns. The row_name
column must be first. The category
and value
columns must be the last two columns, in that order. Any columns between row_name
and category
are treated as “extra”. The “extra” columns are expected to be the same for all rows with the same row_name
value.
For example, source_sql
might produce a set something like:
category_sql
is a SQL statement that produces the set of categories. This statement must return only one column. It must produce at least one row, or an error will be generated. Also, it must not produce duplicate values, or an error will be generated. category_sql
might be something like:
The crosstab
function is declared to return setof record
, so the actual names and types of the output columns must be defined in the FROM
clause of the calling SELECT
statement, for example:
This will produce a result something like:
The FROM
clause must define the proper number of output columns of the proper data types. If there are N
columns in the source_sql
query's result, the first N
-2 of them must match up with the first N
-2 output columns. The remaining output columns must have the type of the last column of the source_sql
query's result, and there must be exactly as many of them as there are rows in the category_sql
query's result.
The crosstab
function produces one output row for each consecutive group of input rows with the same row_name
value. The output row_name
column, plus any “extra” columns, are copied from the first row of the group. The output value
columns are filled with the value
fields from rows having matching category
values. If a row's category
does not match any output of the category_sql
query, its value
is ignored. Output columns whose matching category is not present in any input row of the group are filled with nulls.
In practice the source_sql
query should always specify ORDER BY 1
to ensure that values with the same row_name
are brought together. However, ordering of the categories within a group is not important. Also, it is essential to be sure that the order of the category_sql
query's output matches the specified output column order.
Here are two complete examples:
You can create predefined functions to avoid having to write out the result column names and types in each query. See the examples in the previous section. The underlying C function for this form of crosstab
is named crosstab_hash
.
The connectby
function produces a display of hierarchical data that is stored in a table. The table must have a key field that uniquely identifies rows, and a parent-key field that references the parent (if any) of each row. connectby
can display the sub-tree descending from any row.
Table F.31 explains the parameters.
connectby
ParametersThe key and parent-key fields can be any data type, but they must be the same type. Note that the start_with
value must be entered as a text string, regardless of the type of the key field.
The connectby
function is declared to return setof record
, so the actual names and types of the output columns must be defined in the FROM
clause of the calling SELECT
statement, for example:
The first two output columns are used for the current row's key and its parent row's key; they must match the type of the table's key field. The third output column is the depth in the tree and must be of type integer
. If a branch_delim
parameter was given, the next output column is the branch display and must be of type text
. Finally, if an orderby_fld
parameter was given, the last output column is a serial number, and must be of type integer
.
The “branch” output column shows the path of keys taken to reach the current row. The keys are separated by the specified branch_delim
string. If no branch display is wanted, omit both the branch_delim
parameter and the branch column in the output column list.
If the ordering of siblings of the same parent is important, include the orderby_fld
parameter to specify which field to order siblings by. This field can be of any sortable data type. The output column list must include a final integer serial-number column, if and only if orderby_fld
is specified.
The parameters representing table and field names are copied as-is into the SQL queries that connectby
generates internally. Therefore, include double quotes if the names are mixed-case or contain special characters. You may also need to schema-qualify the table name.
In large tables, performance will be poor unless there is an index on the parent-key field.
It is important that the branch_delim
string not appear in any key values, else connectby
may incorrectly report an infinite-recursion error. Note that if branch_delim
is not provided, a default value of ~
is used for recursion detection purposes.
Here is an example:
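(A hedged sketch of a small hierarchy and a connectby call over it; the table name, keys, and start row are illustrative.)

```sql
CREATE TABLE connectby_tree(keyid text, parent_keyid text, pos int);

INSERT INTO connectby_tree VALUES
  ('row1', NULL,  0), ('row2', 'row1', 0), ('row3', 'row1', 0),
  ('row4', 'row2', 1), ('row5', 'row2', 0), ('row6', 'row4', 0),
  ('row7', 'row3', 0), ('row8', 'row6', 0), ('row9', 'row5', 0);

-- descend from 'row2', unlimited depth, with a branch column
SELECT *
FROM connectby('connectby_tree', 'keyid', 'parent_keyid', 'row2', 0, '~')
  AS t(keyid text, parent_keyid text, level int, branch text);
```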
Joe Conway
The pg_trgm module provides functions and operators for determining the similarity of alphanumeric text based on trigram matching, as well as index operator classes that support fast searching for similar strings.
A trigram is a group of three consecutive characters taken from a string. We can measure the similarity of two strings by counting the number of trigrams they share. This simple idea turns out to be very effective for measuring the similarity of words in many natural languages.
pg_trgm
ignores non-word characters (non-alphanumerics) when extracting trigrams from a string. Each word is considered to have two spaces prefixed and one space suffixed when determining the set of trigrams contained in the string. For example, the set of trigrams in the string “cat
” is “ c
”, “ ca
”, “cat
”, and “at
”. The set of trigrams in the string “foo|bar
” is “ f
”, “ fo
”, “foo
”, “oo
”, “ b
”, “ ba
”, “bar
”, and “ar
”.
The functions provided by the pg_trgm
module are shown in Table F.24, the operators in Table F.25.
pg_trgm
Functions
Function | Returns | Description |
---|---|---|
Consider the following example:
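```sql
SELECT word_similarity('word', 'two words');
 word_similarity
-----------------
             0.8
(1 row)
```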
In the first string, the set of trigrams is {" w"," wo","wor","ord","rd "}
. In the second string, the ordered set of trigrams is {" t"," tw","two","wo "," w"," wo","wor","ord","rds","ds "}
. The most similar extent of an ordered set of trigrams in the second string is {" w"," wo","wor","ord"}
, and the similarity is 0.8
.
This function returns a value that can be approximately understood as the greatest similarity between the first string and any substring of the second string. However, this function does not add padding to the boundaries of the extent. Thus, the number of additional characters present in the second string is not considered, except for the mismatched word boundaries.
At the same time, strict_word_similarity(text, text)
selects an extent of words in the second string. In the example above, strict_word_similarity(text, text)
would select the extent of a single word 'words'
, whose set of trigrams is {" w"," wo","wor","ord","rds","ds "}
.
Thus, the strict_word_similarity(text, text)
function is useful for finding the similarity to whole words, while word_similarity(text, text)
is more suitable for finding the similarity for parts of words.
pg_trgm
Operators
pg_trgm.similarity_threshold (real)
Sets the current similarity threshold that is used by the %
operator. The threshold must be between 0 and 1 (default is 0.3).
pg_trgm.word_similarity_threshold (real)
Sets the current word similarity threshold that is used by the <%
and %>
operators. The threshold must be between 0 and 1 (default is 0.6).
pg_trgm.strict_word_similarity_threshold (real)
Sets the current strict word similarity threshold that is used by the <<%
and %>>
operators. The threshold must be between 0 and 1 (default is 0.5).
The pg_trgm
module provides GiST and GIN index operator classes that allow you to create an index over a text column for the purpose of very fast similarity searches. These index types support the above-described similarity operators, and additionally support trigram-based index searches for LIKE
, ILIKE
, ~
and ~*
queries. (These indexes do not support equality nor simple comparison operators, so you may need a regular B-tree index too.)
Example:
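(A sketch assuming a table test_trgm with a text column t, which the following discussion also uses.)

```sql
CREATE TABLE test_trgm (t text);
CREATE INDEX trgm_idx ON test_trgm USING GIST (t gist_trgm_ops);
```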
or
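```sql
CREATE INDEX trgm_idx ON test_trgm USING GIN (t gin_trgm_ops);
```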
At this point, you will have an index on the t
column that you can use for similarity searching. A typical query is
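```sql
SELECT t, similarity(t, 'word') AS sml
  FROM test_trgm
  WHERE t % 'word'
  ORDER BY sml DESC, t;
```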
This will return all values in the text column that are sufficiently similar to word
, sorted from best match to worst. The index will be used to make this a fast operation even over very large data sets.
A variant of the above query is
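```sql
SELECT t, t <-> 'word' AS dist
  FROM test_trgm
  ORDER BY dist LIMIT 10;
```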
This can be implemented quite efficiently by GiST indexes, but not by GIN indexes. It will usually beat the first formulation when only a small number of the closest matches is wanted.
Also you can use an index on the t
column for word similarity or strict word similarity. Typical queries are:
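```sql
SELECT t, word_similarity('word', t) AS sml
  FROM test_trgm
  WHERE 'word' <% t
  ORDER BY sml DESC, t;
```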
and
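```sql
SELECT t, strict_word_similarity('word', t) AS sml
  FROM test_trgm
  WHERE 'word' <<% t
  ORDER BY sml DESC, t;
```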
This will return all values in the text column for which there is a continuous extent in the corresponding ordered trigram set that is sufficiently similar to the trigram set of word
, sorted from best match to worst. The index will be used to make this a fast operation even over very large data sets.
Possible variants of the above queries are:
and
This can be implemented quite efficiently by GiST indexes, but not by GIN indexes.
Beginning in PostgreSQL 9.1, these index types also support index searches for LIKE
and ILIKE
, for example
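```sql
SELECT * FROM test_trgm WHERE t LIKE '%foo%bar';
```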
The index search works by extracting trigrams from the search string and then looking these up in the index. The more trigrams in the search string, the more effective the index search is. Unlike B-tree based searches, the search string need not be left-anchored.
Beginning in PostgreSQL 9.3, these index types also support index searches for regular-expression matches (~
and ~*
operators), for example
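```sql
SELECT * FROM test_trgm WHERE t ~ '(foo|bar)';
```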
The index search works by extracting trigrams from the regular expression and then looking these up in the index. The more trigrams that can be extracted from the regular expression, the more effective the index search is. Unlike B-tree based searches, the search string need not be left-anchored.
For both LIKE
and regular-expression searches, keep in mind that a pattern with no extractable trigrams will degenerate to a full-index scan.
The choice between GiST and GIN indexing depends on the relative performance characteristics of GiST and GIN, which are discussed elsewhere.
Trigram matching is a very useful tool when used in conjunction with a full text index. In particular it can help to recognize misspelled input words that will not be matched directly by the full text search mechanism.
The first step is to generate an auxiliary table containing all the unique words in the documents:
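(A sketch of building that auxiliary table; the table and column names words, documents, and bodytext follow the description below.)

```sql
CREATE TABLE words AS
  SELECT word FROM ts_stat('SELECT to_tsvector(''simple'', bodytext) FROM documents');
```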
where documents
is a table that has a text field bodytext
that we wish to search. The reason for using the simple
configuration with the to_tsvector
function, instead of using a language-specific configuration, is that we want a list of the original (unstemmed) words.
Next, create a trigram index on the word column:
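```sql
CREATE INDEX words_idx ON words USING GIN (word gin_trgm_ops);
```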
Now, a SELECT
query similar to the previous example can be used to suggest spellings for misspelled words in user search terms. A useful extra test is to require that the selected words are also of similar length to the misspelled word.
Since the words
table has been generated as a separate, static table, it will need to be periodically regenerated so that it remains reasonably up-to-date with the document collection. Keeping it exactly current is usually unnecessary.
GiST Development Site http://www.sai.msu.su/~megera/postgres/gist/
Tsearch2 Development Site http://www.sai.msu.su/~megera/postgres/gist/tsearch/V2/
Oleg Bartunov <
oleg@sai.msu.su
>
, Moscow, Moscow University, Russia
Teodor Sigaev <
teodor@sigaev.ru
>
, Moscow, Delta-Soft Ltd.,Russia
Alexander Korotkov <
a.korotkov@postgrespro.ru
>
, Moscow, Postgres Professional, Russia
Documentation: Christopher Kings-Lynne
This module is sponsored by Delta-Soft Ltd., Moscow, Russia.
The uuid-ossp module provides functions to generate universally unique identifiers (UUIDs) using one of several standard algorithms. There are also functions to produce certain special UUID constants. This module is only necessary for special requirements beyond what is available in core PostgreSQL. See Section 9.14 for built-in ways to generate UUIDs.
This module is considered “trusted”, that is, it can be installed by non-superusers who have CREATE privilege on the current database.
uuid-ossp
Functions
Table F.32 shows the functions available to generate UUIDs. The relevant standards ITU-T Rec. X.667, ISO/IEC 9834-8:2005, and RFC 4122 specify four algorithms for generating UUIDs, identified by the version numbers 1, 3, 4, and 5. (There is no version 2.) Each of these algorithms could be suitable for a different set of applications.
uuid-ossp
Historically, this module depended on the OSSP UUID library, which accounts for the module's name. While the OSSP UUID library can still be found at http://www.ossp.org/pkg/lib/uuid/, it is not well maintained and is becoming increasingly difficult to port to newer platforms. uuid-ossp can now be built without the OSSP library on some platforms. On FreeBSD, NetBSD, and some other BSD-derived platforms, suitable UUID creation functions are included in the core libc library. On Linux, macOS, and some other platforms, suitable functions are provided in the libuuid library, which originally came from the e2fsprogs project (though on modern Linux it is considered part of util-linux-ng). When invoking configure, specify --with-uuid=bsd to use the BSD functions, --with-uuid=e2fs to use e2fsprogs' libuuid, or --with-uuid=ossp to use the OSSP UUID library. More than one of these libraries might be available on a particular machine, so configure does not automatically choose one.
Peter Eisentraut <
peter_e@gmx.net
>
PostgreSQL includes several procedural languages with the base distribution: PL/pgSQL, PL/Tcl, PL/Perl, and PL/Python.
In addition, there are a number of procedural languages that are developed and maintained outside the core PostgreSQL distribution. Table H.2 lists some of these packages. Note that some of these projects might not be released under the same license as PostgreSQL. For more information on each procedural language, including licensing information, refer to its website and documentation.
Table H.2. Externally Maintained Procedural Languages
Name | Language | Website |
---|---|---|
There are only two client interfaces included in the base PostgreSQL distribution:
libpq is included because it is the primary C language interface, and because many other client interfaces are built on top of it.
ECPG is included because it depends on the server-side SQL grammar, and is therefore sensitive to changes in PostgreSQL itself.
All other language interfaces are external projects and are distributed separately. Table H.1 includes a list of some of these projects. Note that some of these packages might not be released under the same license as PostgreSQL. For more information on each language interface, including licensing terms, refer to its website and documentation.
Table H.1. Externally Maintained Client Interfaces
Name | Language | Comments | Website |
---|---|---|---|
This section covers PostgreSQL server-related applications in contrib
. They are typically run on the host where the database server resides. See also PostgreSQL Server Applications for information about server applications that are part of the core PostgreSQL distribution.
This page lists abbreviations that appear frequently in this translated documentation and in the official manual.
ANSI
API
ASCII
BKI
CA
CIDR
CPAN
CRL
CSV
CTE
CVE
DBA
DBI
DBMS
DDL
DML
DST
ECPG
ESQL
FAQ
FSM
GEQO
GIN
GiST
Git
GMT
GSSAPI
GUC
HBA
HOT
IEC
IEEE
IPC
ISO
ISSN
JDBC
JIT
JSON
LDAP
LSN
MSVC
MVCC
NLS
ODBC
OID
OLAP
OLTP
ORDBMS
PAM
PGSQL
PGXS
PID
PITR
PL
POSIX
RDBMS
RFC
SGML
SPI
SP-GiST
SQL
SRF
SSH
SSL
SSPI
SYSV
TCP/IP
TID
TOAST
TPC
URL
UTC
UTF
UTF8
UUID
WAL
XID
XML
The documentation sources are most conveniently modified with an editor that has a mode for editing XML, and even more so if it has some awareness of XML schema languages so that it can know about DocBook syntax specifically.
Note that for historical reasons the documentation source files are named with an extension .sgml
even though they are now XML files. So you might need to adjust your editor configuration to set the correct mode.
nXML Mode, which ships with Emacs, is the most common mode for editing XML documents with Emacs. It will allow you to use Emacs to insert tags and check markup consistency, and it supports DocBook out of the box. Check the nXML manual for detailed documentation.
src/tools/editors/emacs.samples
contains recommended settings for this mode.
This is a list of terms and their meaning in the context of PostgreSQL and relational database systems in general.
Atomicity, Consistency, Isolation, and Durability. This set of properties of database transactions is intended to guarantee validity in concurrent operation and even in the event of errors, power failures, etc.
A that combines (aggregates) multiple input values, for example by counting, averaging or adding, yielding a single output value.
For more information, see .
See Also .
See .
The process of collecting statistics from data in and other to help the to make decisions about how to execute .
(Don't confuse this term with the ANALYZE
option to the command.)
For more information, see .
In reference to a : the fact that its value cannot be broken down into smaller components.
In reference to a : see .
Space in data pages which does not contain current row versions, such as unused (free) space or outdated row versions.Cast
A checkpoint is also the act of carrying out all the actions that are necessary to reach a checkpoint as defined above. This process is initiated when predefined conditions are met, such as a specified amount of time has passed, or a certain volume of records has been written; or it can be invoked by the user with the command CHECKPOINT
.
A specialized process responsible for executing checkpoints.
A collection of databases and global SQL objects, and their common static and dynamic metadata. Sometimes referred to as a cluster.
In PostgreSQL, the term cluster is also sometimes used to refer to an instance. (Don't confuse this term with the SQL command CLUSTER
.)Database server
The internal representation of one value of an SQL data type.Delete
A storage structure that keeps metadata about each data page of a table's main fork. The free space map entry for each page stores the amount of free space that's available for future tuples, and is structured to be efficiently searched for available space for a new tuple of a given size.
A mechanism that allows a process to limit or prevent simultaneous access to a resource.Log file
Log files contain human-readable text lines about events. Examples include login failures, long-running queries, etc.
The property that some information has been pre-computed and stored for later use, rather than computing it on-the-fly.
This term is also used to refer to some multi-step queries to mean that the data resulting from executing a given step is stored in memory (with the possibility of spilling to disk), so that it can be read multiple times by another step.Materialized view (relation)
A concept of non-existence that is a central tenet of relational database theory. It represents the absence of a definite value.Optimizer
One of several disjoint (not overlapping) subsets of a larger set.
A type of routine. Their distinctive qualities are that they do not return values, and that they are allowed to make transactional statements such as COMMIT
and ROLLBACK
. They are invoked via the CALL
command.
More generically, a relation is a set of tuples; for example, the result of a query is also a relation.
In PostgreSQL, Class is an archaic synonym for relation.Replica (server)
The fact that a result set is a relation means that a query can be used in the definition of another query, becoming a subquery.Revoke
Many routines are already defined within PostgreSQL itself, but user-defined ones can also be added.Row
All system-defined SQL objects reside in schema pg_catalog
.
This term is sometimes used to refer to an instance or to a host.Session
The largest part of shared memory is known as shared buffers and is used to mirror part of data files, organized into pages. When a page is modified, it is called a dirty page until it is written back to the file system.
Any object that can be created with a CREATE
command. Most objects are specific to one database, and are commonly known as local objects.
A series of documents that define the SQL language.Standby (server)
A mechanism by which large attributes of table rows are split and stored in a secondary table, called the TOAST table. Each relation with large attributes has its own TOAST table.
The numerical, unique, sequentially-assigned identifier that each transaction receives when it first causes a database modification. Frequently abbreviated as xid. When stored on disk, xids are only 32-bits wide, so only approximately four billion write transaction IDs can be generated; to permit the system to run for longer than that, epochs are used, also 32 bits wide. When the counter reaches the maximum xid value, it starts over at 3
(values under that are reserved) and the epoch value is incremented by one. In some contexts, the epoch and xid values are considered together as a single 64-bit value.
Average number of transactions that are executed per second, totaled across all sessions active for a measured run. This is used as a measure of the performance characteristics of an instance.Trigger
The primary use of unlogged tables is for storing transient work data that must be shared across processes.
A storage structure that keeps metadata about each data page of a table's main fork. The visibility map entry for each page stores two bits: the first one (all-visible
) indicates that all tuples in the page are visible to all transactions. The second one (all-frozen
) indicates that all tuples in the page are marked frozen.WAL
A low-level description of an individual data change. It contains sufficient information for the data change to be re-executed (replayed) in case a system failure causes the change to be lost. WAL records use a non-printable binary format.
The table below describes various hard limits of PostgreSQL. However, practical limits, such as performance limitations or available disk space, may apply before absolute hard limits are reached.
Item | Upper Limit | Comment |
---|---|---|
The maximum number of columns for a table is further reduced as the tuple being stored must fit in a single 8192-byte heap page. For example, excluding the tuple header, a tuple made up of 1600 int columns would consume 6400 bytes and could be stored in a heap page, but a tuple of 1600 bigint columns would consume 12800 bytes and would therefore not fit inside a heap page. Variable-length fields of types such as text, varchar, and char can have their values stored out of line in the table's TOAST table when the values are large enough to require it; only an 18-byte pointer then remains in the tuple in the table's heap. For shorter variable-length fields, either a 4-byte or a 1-byte field header is used and the value is stored inside the heap tuple.
Columns that have been dropped from the table also contribute to the maximum column limit. Moreover, although the dropped column values for newly created tuples are internally marked as null in the tuple's null bitmap, the null bitmap also occupies space.
Escape | Effect |
---|---|
Column | Type | Description |
---|---|---|
Column | Type | Description |
---|---|---|
Column | Type | Description |
---|---|---|
Column | Type | Description |
---|---|---|
Column | Type | Description |
---|---|---|
Parameter | Description |
---|---|
Operator | Returns | Description |
---|---|---|
, SQL commands such as CREATE TABLE
, ALTER USER
, SQL commands such as INSERT
, UPDATE
, DELETE
, the PostgreSQL subsystem that handles server configuration
Log Sequence Number, see and .
(Continuous Archiving)
The property of a that either all its operations complete as a single unit or none do. In addition, if a system failure occurs during the execution of a transaction, no partial results are visible after recovery. This is one of the ACID properties.
An element with a certain name and data type found within a .
A set of background processes that routinely perform and operations.
For more information, see .
Process of an which acts on behalf of a and handles its requests.
(Don't confuse this term with the similar terms or ).
Process within an , which runs system- or user-supplied code. Serves as infrastructure for several features in PostgreSQL, such as and . In addition, can add custom background worker processes.
For more information, see .
A process that writes dirty from to the file system. It wakes up periodically, but works only for a short period in order to distribute its expensive I/O activity over time to avoid generating larger I/O peaks which could block other processes.
For more information, see .
A conversion of a from its current data type to another data type.
For more information, see .
The SQL standard uses this term to indicate what is called a in PostgreSQL's terminology.
(Don't confuse this term with ).
For more information, see .
A type of defined on a which restricts the values allowed in one or more . The check constraint can make reference to any attribute of the same row in the relation, but cannot reference other rows of the same relation or other relations.
For more information, see .
A point in the sequence at which it is guaranteed that the heap and index data files have been updated with all information from modified before that checkpoint; a checkpoint record is written and flushed to WAL to mark that point.
For more information, see .
See .Client (process)
Any process, possibly remote, that establishes a by to an to interact with a .
An found in a or .Commit
The act of finalizing a within the , which makes it visible to other transactions and assures its .
For more information, see .
The concept that multiple independent operations happen within the at the same time. In PostgreSQL, concurrency is controlled by the mechanism.
An established line of communication between a client process and a process, usually over a network, supporting a . This term is sometimes used as a synonym for session.
For more information, see .
The property that the data in the is always in compliance with . Transactions may be allowed to violate some of the constraints transiently before it commits, but if such violations are not resolved by the time it commits, such a transaction is automatically . This is one of the ACID properties.
A restriction on the values of data allowed within a , or in attributes of a domain.
For more information, see .
See .Database
A named collection of .
For more information, see .Database cluster
See .Data directory
The base directory on the file system of a that contains all data files and subdirectories associated with a (with the exception of , and optionally ). The environment variable PGDATA
is commonly used to refer to the data directory.
A 's storage space comprises the data directory plus any additional tablespaces.
For more information, see .Data page
The basic structure used to store relation data. All pages are of the same size. Data pages are typically stored on disk, each in a specific file, and can be read to where they can be modified, becoming dirty. They become clean when written to disk. New pages, which initially exist in memory only, are also dirty until written.Datum
An SQL command which removes from a given or .
For more information, see .Durability
The assurance that once a has been , the changes remain even after a system failure or crash. This is one of the ACID properties.Epoch
See .Extension
A software add-on package that can be installed on an to get extra features.
For more information, see .File segment
A physical file which stores data for a given . File segments are limited in size by a configuration value (typically 1 gigabyte), so if a relation exceeds that size, it is split into multiple segments.
For more information, see .
(Don't confuse this term with the similar term ).Foreign data wrapper
A means of representing data that is not contained in the local so that it appears as if were in local . With a foreign data wrapper it is possible to define a and .
For more information, see .Foreign key
A type of defined on one or more in a which requires the value(s) in those to identify zero or one in another (or, infrequently, the same) .Foreign server
A named collection of which all use the same and have other configuration values in common.
For more information, see .Foreign table (relation)
A which appears to have and similar to a regular , but will forward requests for data through its , which will return structured according to the definition of the .
For more information, see .Fork
Each of the separate segmented file sets in which a relation is stored. The main fork is where the actual data resides. There also exist two secondary forks for metadata: the and the . also have an init fork.Free space map (fork)
For more information, see .Function (routine)
A type of routine that receives zero or more arguments, returns zero or more output values, and is constrained to run within one transaction. Functions are invoked as part of a query, for example via SELECT
. Certain functions can return ; those are called set-returning functions.
Functions can also be used for to invoke.
For more information, see .Grant
An SQL command that is used to allow a or to access specific objects within the .
For more information, see .Heap
Contains the values of attributes (i.e., the data) for a . The heap is realized within one or more in the relation's .Host
A computer that communicates with other computers over a network. This is sometimes used as a synonym for . It is also used to refer to a computer where run.Index (relation)
A that contains data derived from a or . Its internal structure supports fast retrieval of and access to the original data.
For more information, see .Insert
An SQL command used to add new data into a .
For more information, see .Instance
A group of backend and auxiliary processes that communicate using a common shared memory area. One manages the instance; one instance manages exactly one with all its databases. Many instances can run on the same as long as their TCP ports do not conflict.
The instance handles all key features of a DBMS: read and write access to files and shared memory, assurance of the ACID properties, to , privilege verification, crash recovery, replication, etc.Isolation
The property that the effects of a transaction are not visible to before it commits. This is one of the ACID properties.
For more information, see .Join
An operation and SQL keyword used in for combining data from multiple .Key
A means of identifying a within a or other by values contained within one or more in that relation.Lock
For more information, see .Logged
A is considered if changes to it are sent to the . By default, all regular tables are logged. A table can be specified as either at creation time or via the ALTER TABLE
command.Logger (process)
If activated, the process writes information about database events into the current . When reaching certain time- or volume-dependent criteria, a new log file is created. Also called syslogger.
For more information, see .Log record
Archaic term for a .Master (server)
See .Materialized
This term is used in , to mean that the data derived from the view's query is stored on disk separately from the sources of that data.
A that is defined by a SELECT
statement (just like a ), but stores data in the same way that a does. It cannot be modified via INSERT
, UPDATE
, or DELETE
operations.
For more information, see .Multi-version concurrency control (MVCC)
A mechanism designed to allow several to be reading and writing the same rows without one process causing other processes to stall. In PostgreSQL, MVCC is implemented by creating copies (versions) of as they are modified; after transactions that can see the old versions terminate, those old versions need to be removed.Null
See .Parallel query
The ability to handle parts of executing a to take advantage of parallel processes on servers with multiple CPUs.Partition
In reference to a : One of the tables that each contain part of the data of the partitioned table, which is said to be the parent. The partition is itself a table, so it can also be queried directly; at the same time, a partition can sometimes be a partitioned table, allowing hierarchies to be created.
In reference to a in a , a partition is a user-defined criterion that identifies which neighboring of the can be considered by the function.Partitioned table (relation)
A that is in semantic terms the same as a , but whose storage is distributed across several .Postmaster (process)
The very first process of an . It starts and manages the other auxiliary processes and creates on demand.
For more information, see .Primary key
A special case of a defined on a or other that also guarantees that all of the within the do not have values. As the name implies, there can be only one primary key per table, though it is possible to have multiple unique constraints that also have no null-capable attributes.Primary (server)
When two or more are linked via , the that is considered the authoritative source of information is called the primary, also known as a master.Procedure (routine)
For more information, see .Query
A request sent by a client to a , usually to return results or to modify data on the database.Query planner
The part of PostgreSQL that is devoted to determining (planning) the most efficient way to execute . Also known as query optimizer, optimizer, or simply planner.Record
See .Recycling
See .Referential integrity
A means of restricting data in one by a so that it must have matching data in another .Relation
The generic term for all objects in a that have a name and a list of defined in a specific order. , , , , , composite types, and are all relations.
A that is paired with a database and is maintaining a copy of some or all of the primary database's data. The foremost reasons for doing this are to allow for greater access to that data, and to maintain availability of the data in the event that the becomes unavailable.Replication
The act of reproducing data on one onto another server called a . This can take the form of physical replication, where all file changes from one server are copied verbatim, or logical replication where a defined subset of data changes are conveyed using a higher-level representation.Result set
A transmitted from a to a upon the completion of an SQL command, usually a SELECT
but it can be an INSERT
, UPDATE
, or DELETE
command if the RETURNING
clause is specified.
A command to prevent access to a named set of objects for a named list of .
For more information, see .Role
A collection of access privileges to the . Roles are themselves a privilege that can be granted to other roles. This is often done for convenience or to ensure completeness when multiple need the same privileges.
For more information, see .Rollback
A command to undo all of the operations performed since the beginning of a .
For more information, see .Routine
A defined set of instructions stored in the database system that can be invoked for execution. A routine can be written in a variety of programming languages. Routines can be (including set-returning functions and ), , and .
See .Savepoint
A special mark in the sequence of steps in a . Data modifications after this point in time may be reverted to the time of the savepoint.
For more information, see .Schema
A schema is a namespace for , which all reside in the same . Each SQL object must reside in exactly one schema.
More generically, the term schema is used to mean all data descriptions ( definitions, , comments, etc) for a given or subset thereof.
For more information, see .Segment
See .Select
The SQL command used to request data from a . Normally, SELECT
commands are not expected to modify the in any way, but it is possible that invoked within the query could have side effects that do modify data.
For more information, see .Sequence (relation)
A type of relation that is used to generate values. Typically the generated values are sequential non-repeating numbers. They are commonly used to generate surrogate values.Server
A computer on which PostgreSQL run. The term server denotes real hardware, a container, or a virtual machine.
A state that allows a client and a backend to interact, communicating over a .Shared memory
RAM which is used by the processes common to an . It mirrors parts of files, provides a transient area for , and stores additional common information. Note that shared memory belongs to the complete instance, not to a single database.
For more information, see .SQL object
Most local objects belong to a specific in their containing database, such as (all types), (all types), data types, etc. The names of such objects of the same type in the same schema are enforced to be unique.
There also exist local objects that do not belong to schemas; some examples are , , and . The names of such objects of the same type are enforced to be unique within the database.
Other object types, such as , , replication origins, subscriptions for logical replication, and databases themselves are not local SQL objects since they exist entirely outside of any specific database; they are called global objects. The names of such objects are enforced to be unique within the whole database cluster.
For more information, see .SQL standard
See .Stats collector (process)
This process collects statistical information about the 's activities.
For more information, see .System catalog
A collection of which describe the structure of all of the instance. The system catalog resides in the schema pg_catalog
. These tables contain data in internal representation and are not typically considered useful for user examination; a number of user-friendlier , also in schema pg_catalog
, offer more convenient access to some of that information, while additional tables and views exist in schema information_schema
(see ) that expose some of the same and additional information as mandated by the .
For more information, see .Table
A collection of having a common data structure (the same number of , in the same order, having the same name and type per position). A table is the most common form of in PostgreSQL.
For more information, see .Tablespace
A named location on the server file system. All which require storage beyond their definition in the must belong to a single tablespace. Initially, a database cluster contains a single usable tablespace which is used as the default for all SQL objects, called pg_default
.
For more information, see .Temporary table
that exist either for the lifetime of a or a , as specified at the time of creation. The data in them is not visible to other sessions, and is not . Temporary tables are often used to store intermediate data for a multi-step operation.
For more information, see .TOAST
For more information, see .Transaction
A combination of commands that must act as a single command: they all succeed or all fail as a single unit, and their effects are not visible to other until the transaction is complete, and possibly even later, depending on the isolation level.
For more information, see .Transaction ID
For more information, see .Transactions per second (TPS)
A which can be defined to execute whenever a certain operation (INSERT
, UPDATE
, DELETE
, TRUNCATE
) is applied to a . A trigger executes within the same as the statement which invoked it, and if the function fails, then the invoking statement also fails.
For more information, see .Tuple
A collection of in a fixed order. That order may be defined by the (or other ) where the tuple is contained, in which case the tuple is often called a row. It may also be defined by the structure of a result set, in which case it is sometimes called a record.Unique constraint
A type of defined on a which restricts the values allowed in one or a combination of columns so that each value or combination of values can only appear once in the relation — that is, no other row in the relation contains values that are equal to those.
Because are not considered equal to each other, multiple rows with null values are allowed to exist without violating the unique constraint.Unlogged
The property of certain that the changes to them are not reflected in the . This disables replication and crash recovery for these relations.
are always unlogged.Update
An SQL command used to modify that may already exist in a specified . It cannot create or remove rows.
For more information, see .User
A that has the LOGIN
privilege.User mapping
The translation of login credentials in the local to credentials in a remote data system defined by a .
For more information, see .Vacuum
The process of removing outdated from tables or materialized views, and other closely related processing required by PostgreSQL's implementation of . This can be initiated through the use of the VACUUM
command, but can also be handled automatically via processes.
For more information, see .View
A that is defined by a SELECT
statement, but has no storage of its own. Any time a query references a view, the definition of the view is substituted into the query as if the user had typed it as a subquery instead of the name of the view.
For more information, see .Visibility map (fork)
See .WAL archiver (process)
A process that saves copies of for the purpose of creating backups or keeping current.
For more information, see .WAL file
Also known as WAL segment or WAL segment file. Each of the sequentially-numbered files that provide storage space for . The files are all of the same predefined size and are written in sequential order, interspersing changes as they occur in multiple simultaneous sessions. If the system crashes, the files are read in order, and each of the changes is replayed to restore the system to the state it was in before the crash.
Each WAL file can be released after a writes all the changes in it to the corresponding data files. Releasing the file can be done either by deleting it, or by changing its name so that it will be used in the future, which is called recycling.
For more information, see .WAL record
For more information, see .WAL segment
See .WAL writer (process)
A process that writes from to .
For more information, see .Window function (routine)
A type of used in a that applies to a of the query's ; the function's result is based on values found in of the same partition or frame.
All can be used as window functions, but window functions can also be used to, for example, give ranks to each of the rows in the partition. Also known as analytic functions.
For more information, see .Write-ahead log
The journal that keeps track of the changes in the as user- and system-invoked operations take place. It comprises many individual written sequentially to .
Column Type
Description
bufferid
integer
ID, in the range 1..shared_buffers
relfilenode
oid
(references pg_class
.relfilenode
)
Filenode number of the relation
reltablespace
oid
(references pg_tablespace
.oid
)
Tablespace OID of the relation
reldatabase
oid
(references pg_database
.oid
)
Database OID of the relation
relforknumber
smallint
Fork number within the relation; see include/common/relpath.h
relblocknumber
bigint
Page number within the relation
isdirty
boolean
Is the page dirty?
usagecount
smallint
Clock-sweep access count
pinning_backends
integer
Number of backends pinning this buffer
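As an illustration of how these columns are typically combined, the following sketch (close to the example shipped with the module) counts how many shared buffers each relation of the current database occupies:

```sql
SELECT c.relname, count(*) AS buffers
FROM pg_buffercache b
JOIN pg_class c
  ON b.relfilenode = pg_relation_filenode(c.oid)
 AND b.reldatabase IN (0, (SELECT oid FROM pg_database
                           WHERE datname = current_database()))
GROUP BY c.relname
ORDER BY buffers DESC
LIMIT 10;
```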
Operator
Description
Example(s)
hstore
->
text
→ text
Returns value associated with given key, or NULL
if not present.
'a=>x, b=>y'::hstore -> 'a'
→ x
hstore
->
text[]
→ text[]
Returns values associated with given keys, or NULL
if not present.
'a=>x, b=>y, c=>z'::hstore -> ARRAY['c','a']
→ {"z","x"}
hstore
||
hstore
→ hstore
Concatenates two hstores.
'a=>b, c=>d'::hstore || 'c=>x, d=>q'::hstore
→ "a"=>"b", "c"=>"x", "d"=>"q"
hstore
?
text
→ boolean
Does hstore
contain key?
'a=>1'::hstore ? 'a'
→ t
hstore
?&
text[]
→ boolean
Does hstore
contain all the specified keys?
'a=>1,b=>2'::hstore ?& ARRAY['a','b']
→ t
hstore
?|
text[]
→ boolean
Does hstore
contain any of the specified keys?
'a=>1,b=>2'::hstore ?| ARRAY['b','e']
→ t
hstore
@>
hstore
→ boolean
Does left operand contain right?
'a=>b, b=>1, c=>NULL'::hstore @> 'b=>1'
→ t
hstore
<@
hstore
→ boolean
Is left operand contained in right?
'a=>c'::hstore <@ 'a=>b, b=>1, c=>NULL'
→ f
hstore
-
text
→ hstore
Deletes key from left operand.
'a=>1, b=>2, c=>3'::hstore - 'b'::text
→ "a"=>"1", "c"=>"3"
hstore
-
text[]
→ hstore
Deletes keys from left operand.
'a=>1, b=>2, c=>3'::hstore - ARRAY['a','b']
→ "c"=>"3"
hstore
-
hstore
→ hstore
Deletes pairs from left operand that match pairs in the right operand.
'a=>1, b=>2, c=>3'::hstore - 'a=>4, b=>2'::hstore
→ "a"=>"1", "c"=>"3"
anyelement
#=
hstore
→ anyelement
Replaces fields in the left operand (which must be a composite type) with matching values from hstore
.
ROW(1,3) #= 'f1=>11'::hstore
→ (11,3)
%%
hstore
→ text[]
Converts hstore
to an array of alternating keys and values.
%% 'a=>foo, b=>bar'::hstore
→ {a,foo,b,bar}
%#
hstore
→ text[]
Converts hstore
to a two-dimensional key/value array.
%# 'a=>foo, b=>bar'::hstore
→ {{a,foo},{b,bar}}
Function
Description
Example(s)
hstore
( record
) → hstore
Constructs an hstore
from a record or row.
hstore(ROW(1,2))
→ "f1"=>"1", "f2"=>"2"
hstore
( text[]
) → hstore
Constructs an hstore
from an array, which may be either a key/value array, or a two-dimensional array.
hstore(ARRAY['a','1','b','2'])
→ "a"=>"1", "b"=>"2"
hstore(ARRAY[['c','3'],['d','4']])
→ "c"=>"3", "d"=>"4"
hstore
( text[]
, text[]
) → hstore
Constructs an hstore
from separate key and value arrays.
hstore(ARRAY['a','b'], ARRAY['1','2'])
→ "a"=>"1", "b"=>"2"
hstore
( text
, text
) → hstore
Makes a single-item hstore
.
hstore('a', 'b')
→ "a"=>"b"
akeys
( hstore
) → text[]
Extracts an hstore
's keys as an array.
akeys('a=>1,b=>2')
→ {a,b}
skeys
( hstore
) → setof text
Extracts an hstore
's keys as a set.
skeys('a=>1,b=>2')
→
avals
( hstore
) → text[]
Extracts an hstore
's values as an array.
avals('a=>1,b=>2')
→ {1,2}
svals
( hstore
) → setof text
Extracts an hstore
's values as a set.
svals('a=>1,b=>2')
→
hstore_to_array
( hstore
) → text[]
Extracts an hstore
's keys and values as an array of alternating keys and values.
hstore_to_array('a=>1,b=>2')
→ {a,1,b,2}
hstore_to_matrix
( hstore
) → text[]
Extracts an hstore
's keys and values as a two-dimensional array.
hstore_to_matrix('a=>1,b=>2')
→ {{a,1},{b,2}}
hstore_to_json
( hstore
) → json
Converts an hstore
to a json
value, converting all non-null values to JSON strings.
This function is used implicitly when an hstore
value is cast to json
.
hstore_to_json('"a key"=>1, b=>t, c=>null, d=>12345, e=>012345, f=>1.234, g=>2.345e+4')
→ {"a key": "1", "b": "t", "c": null, "d": "12345", "e": "012345", "f": "1.234", "g": "2.345e+4"}
hstore_to_jsonb
( hstore
) → jsonb
Converts an hstore
to a jsonb
value, converting all non-null values to JSON strings.
This function is used implicitly when an hstore
value is cast to jsonb
.
hstore_to_jsonb('"a key"=>1, b=>t, c=>null, d=>12345, e=>012345, f=>1.234, g=>2.345e+4')
→ {"a key": "1", "b": "t", "c": null, "d": "12345", "e": "012345", "f": "1.234", "g": "2.345e+4"}
hstore_to_json_loose
( hstore
) → json
Converts an hstore
to a json
value, but attempts to distinguish numerical and Boolean values so they are unquoted in the JSON.
hstore_to_json_loose('"a key"=>1, b=>t, c=>null, d=>12345, e=>012345, f=>1.234, g=>2.345e+4')
→ {"a key": 1, "b": true, "c": null, "d": 12345, "e": "012345", "f": 1.234, "g": 2.345e+4}
hstore_to_jsonb_loose
( hstore
) → jsonb
Converts an hstore
to a jsonb
value, but attempts to distinguish numerical and Boolean values so they are unquoted in the JSON.
hstore_to_jsonb_loose('"a key"=>1, b=>t, c=>null, d=>12345, e=>012345, f=>1.234, g=>2.345e+4')
→ {"a key": 1, "b": true, "c": null, "d": 12345, "e": "012345", "f": 1.234, "g": 2.345e+4}
slice
( hstore
, text[]
) → hstore
Extracts a subset of an hstore
containing only the specified keys.
slice('a=>1,b=>2,c=>3'::hstore, ARRAY['b','c','x'])
→ "b"=>"2", "c"=>"3"
each
( hstore
) → setof record
( key
text
, value
text
)
Extracts an hstore
's keys and values as a set of records.
select * from each('a=>1,b=>2')
→
exist
( hstore
, text
) → boolean
Does hstore
contain key?
exist('a=>1', 'a')
→ t
defined
( hstore
, text
) → boolean
Does hstore
contain a non-NULL
value for key?
defined('a=>NULL', 'a')
→ f
delete
( hstore
, text
) → hstore
Deletes pair with matching key.
delete('a=>1,b=>2', 'b')
→ "a"=>"1"
delete
( hstore
, text[]
) → hstore
Deletes pairs with matching keys.
delete('a=>1,b=>2,c=>3', ARRAY['a','b'])
→ "c"=>"3"
delete
( hstore
, hstore
) → hstore
Deletes pairs matching those in the second argument.
delete('a=>1,b=>2', 'a=>4,b=>2'::hstore)
→ "a"=>"1"
populate_record
( anyelement
, hstore
) → anyelement
Replaces fields in the left operand (which must be a composite type) with matching values from hstore
.
populate_record(ROW(1,2), 'f1=>42'::hstore)
→ (42,2)
Column Type
Description
userid
oid
(references pg_authid
.oid
)
OID of user who executed the statement
dbid
oid
(references pg_database
.oid
)
OID of database in which the statement was executed
queryid
bigint
Internal hash code, computed from the statement's parse tree
query
text
Text of a representative statement
plans
bigint
Number of times the statement was planned (if pg_stat_statements.track_planning
is enabled, otherwise zero)
total_plan_time
double precision
Total time spent planning the statement, in milliseconds (if pg_stat_statements.track_planning
is enabled, otherwise zero)
min_plan_time
double precision
Minimum time spent planning the statement, in milliseconds (if pg_stat_statements.track_planning
is enabled, otherwise zero)
max_plan_time
double precision
Maximum time spent planning the statement, in milliseconds (if pg_stat_statements.track_planning
is enabled, otherwise zero)
mean_plan_time
double precision
Mean time spent planning the statement, in milliseconds (if pg_stat_statements.track_planning
is enabled, otherwise zero)
stddev_plan_time
double precision
Population standard deviation of time spent planning the statement, in milliseconds (if pg_stat_statements.track_planning
is enabled, otherwise zero)
calls
bigint
Number of times the statement was executed
total_exec_time
double precision
Total time spent executing the statement, in milliseconds
min_exec_time
double precision
Minimum time spent executing the statement, in milliseconds
max_exec_time
double precision
Maximum time spent executing the statement, in milliseconds
mean_exec_time
double precision
Mean time spent executing the statement, in milliseconds
stddev_exec_time
double precision
Population standard deviation of time spent executing the statement, in milliseconds
rows
bigint
Total number of rows retrieved or affected by the statement
shared_blks_hit
bigint
Total number of shared block cache hits by the statement
shared_blks_read
bigint
Total number of shared blocks read by the statement
shared_blks_dirtied
bigint
Total number of shared blocks dirtied by the statement
shared_blks_written
bigint
Total number of shared blocks written by the statement
local_blks_hit
bigint
Total number of local block cache hits by the statement
local_blks_read
bigint
Total number of local blocks read by the statement
local_blks_dirtied
bigint
Total number of local blocks dirtied by the statement
local_blks_written
bigint
Total number of local blocks written by the statement
temp_blks_read
bigint
Total number of temp blocks read by the statement
temp_blks_written
bigint
Total number of temp blocks written by the statement
blk_read_time
double precision
Total time the statement spent reading blocks, in milliseconds (if track_io_timing is enabled, otherwise zero)
blk_write_time
double precision
Total time the statement spent writing blocks, in milliseconds (if track_io_timing is enabled, otherwise zero)
wal_records
bigint
Total number of WAL records generated by the statement
wal_fpi
bigint
Total number of WAL full page images generated by the statement
wal_bytes
numeric
Total amount of WAL bytes generated by the statement
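For example, a common use of this view is to list the statements with the highest cumulative execution time; a sketch:

```sql
SELECT query, calls, total_exec_time, mean_exec_time, rows
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 5;
```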
normal_rand(int numvals, float8 mean, float8 stddev)
setof float8
Produces a set of normally distributed random values
crosstab(text sql)
setof record
Produces a “pivot table” containing row names plus N
value columns, where N
is determined by the row type specified in the calling query
crosstab
N
(text sql)
setof tablefunc_crosstab_
N
Produces a “pivot table” containing row names plus N
value columns. crosstab2
, crosstab3
, and crosstab4
are predefined, but you can create additional crosstab
N
functions as described below
crosstab(text source_sql, text category_sql)
setof record
Produces a “pivot table” with the value columns specified by a second query
crosstab(text sql, int N)
setof record
Obsolete version of crosstab(text)
. The parameter N
is now ignored, since the number of value columns is always determined by the calling query
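A sketch of basic crosstab usage, assuming a source table ct(rowid, attribute, value); note that the output column list must be declared explicitly in the FROM clause:

```sql
SELECT *
FROM crosstab(
       'SELECT rowid, attribute, value
          FROM ct
         WHERE attribute IN (''att2'', ''att3'')
         ORDER BY 1, 2')
     AS ct(row_name text, category_1 text, category_2 text);
```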
connectby(text relname, text keyid_fld, text parent_keyid_fld [, text orderby_fld ], text start_with, int max_depth [, text branch_delim ])
setof record
Produces a representation of a hierarchical tree structure
relname
Name of the source relation
keyid_fld
Name of the key field
parent_keyid_fld
Name of the parent-key field
orderby_fld
Name of the field to order siblings by (optional)
start_with
Key value of the row to start at
max_depth
Maximum depth to descend to, or zero for unlimited depth
branch_delim
String to separate keys with in branch output (optional)
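Putting these parameters together, a call might look like the following sketch; the connectby_tree table and its columns are assumptions, and the declared output columns must match the parameters supplied (including branch and pos, since branch_delim and orderby_fld are given):

```sql
SELECT *
FROM connectby('connectby_tree', 'keyid', 'parent_keyid', 'pos',
               'row2', 0, '~')
     AS t(keyid text, parent_keyid text, level int, branch text, pos int);
```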
similarity(text, text)
real
Returns a number that indicates how similar the two arguments are. The range of the result is zero (indicating that the two strings are completely dissimilar) to one (indicating that the two strings are identical).
show_trgm(text)
text[]
Returns an array of all the trigrams in the given string. (In practice this is seldom useful except for debugging.)
word_similarity(text, text)
real
Returns a number that indicates the greatest similarity between the set of trigrams in the first string and any continuous extent of an ordered set of trigrams in the second string. For details, see the explanation below.
strict_word_similarity(text, text)
real
Same as word_similarity(text, text)
, but forces extent boundaries to match word boundaries. Since we don't have cross-word trigrams, this function actually returns the greatest similarity between the first string and any continuous extent of words in the second string.
show_limit()
real
Returns the current similarity threshold used by the %
operator. This sets the minimum similarity between two words for them to be considered similar enough to be misspellings of each other, for example (deprecated).
set_limit(real)
real
Sets the current similarity threshold that is used by the %
operator. The threshold must be between 0 and 1 (default is 0.3). Returns the same value passed in (deprecated).
text
%
text
boolean
Returns true
if its arguments have a similarity that is greater than the current similarity threshold set by pg_trgm.similarity_threshold
.
text
<%
text
boolean
Returns true
if the similarity between the trigram set in the first argument and a continuous extent of an ordered trigram set in the second argument is greater than the current word similarity threshold set by pg_trgm.word_similarity_threshold
parameter.
text
%>
text
boolean
Commutator of the <%
operator.
text
<<%
text
boolean
Returns true
if its second argument has a continuous extent of an ordered trigram set that matches word boundaries, and its similarity to the trigram set of the first argument is greater than the current strict word similarity threshold set by the pg_trgm.strict_word_similarity_threshold
parameter.
text
%>>
text
boolean
Commutator of the <<%
operator.
text
<->
text
real
Returns the “distance” between the arguments, that is one minus the similarity()
value.
text
<<->
text
real
Returns the “distance” between the arguments, that is one minus the word_similarity()
value.
text
<->>
text
real
Commutator of the <<->
operator.
text
<<<->
text
real
Returns the “distance” between the arguments, that is one minus the strict_word_similarity()
value.
text
<->>>
text
real
Commutator of the <<<->
operator.
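A few illustrative expressions combining these operators with the similarity threshold setting (0.3 is the default; the exact values returned depend on the input strings):

```sql
SET pg_trgm.similarity_threshold = 0.3;   -- threshold used by the % operator
SELECT similarity('word', 'two words');   -- a value between 0 and 1
SELECT 'word' % 'two words';              -- true if similarity exceeds the threshold
SELECT 'word' <-> 'two words';            -- distance: 1 - similarity
```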
%a
Application name on local server
%c
Session ID on local server (see log_line_prefix for details)
%C
Cluster name on local server (see cluster_name for details)
%u
User name on local server
%d
Database name on local server
%p
Process ID of backend on local server
%%
Literal %
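Assuming these escapes are the ones accepted by the postgres_fdw.application_name setting, a sketch of using them might be:

```sql
-- Expanded when postgres_fdw opens a connection to the remote server.
SET postgres_fdw.application_name TO 'fdw_%u_on_%d_pid_%p';
```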
sepgsql_getcon() returns text
Returns the client domain, the current security label of the client.
sepgsql_setcon(text) returns bool
Switches the client domain of the current session to the new domain, if allowed by the security policy. It also accepts NULL
input as a request to transition to the client's original domain.
sepgsql_mcstrans_in(text) returns text
Translates the given qualified MLS/MCS range into raw format if the mcstrans daemon is running.
sepgsql_mcstrans_out(text) returns text
Translates the given raw MLS/MCS range into qualified format if the mcstrans daemon is running.
sepgsql_restorecon(text) returns bool
Sets up initial security labels for all objects within the current database. The argument may be NULL, or the name of a specfile to be used as an alternative to the system default.
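For illustration only (these calls assume a SELinux-enabled system with sepgsql loaded):

```sql
SELECT sepgsql_getcon();          -- current security label of the client
SELECT sepgsql_restorecon(NULL);  -- relabel all objects using the system default specfile
```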
table_len
bigint
Physical relation length in bytes
tuple_count
bigint
Number of live tuples
tuple_len
bigint
Total length of live tuples in bytes
tuple_percent
float8
Percentage of live tuples
dead_tuple_count
bigint
Number of dead tuples
dead_tuple_len
bigint
Total length of dead tuples in bytes
dead_tuple_percent
float8
Percentage of dead tuples
free_space
bigint
Total free space in bytes
free_percent
float8
Percentage of free space
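For example (any heap relation name can be substituted):

```sql
SELECT * FROM pgstattuple('pg_catalog.pg_proc');
```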
version
integer
B-tree version number
tree_level
integer
Tree level of the root page
index_size
bigint
Total index size in bytes
root_block_no
bigint
Location of root page (zero if none)
internal_pages
bigint
Number of “internal” (upper-level) pages
leaf_pages
bigint
Number of leaf pages
empty_pages
bigint
Number of empty pages
deleted_pages
bigint
Number of deleted pages
avg_leaf_density
float8
Average density of leaf pages
leaf_fragmentation
float8
Leaf page fragmentation
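For example, on a B-tree index (the index name here is only an illustration):

```sql
SELECT * FROM pgstatindex('pg_cast_oid_index');
```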
version
integer
GIN version number
pending_pages
integer
Number of pages in the pending list
pending_tuples
bigint
Number of tuples in the pending list
version
integer
HASH version number
bucket_pages
bigint
Number of bucket pages
overflow_pages
bigint
Number of overflow pages
bitmap_pages
bigint
Number of bitmap pages
unused_pages
bigint
Number of unused pages
live_items
bigint
Number of live tuples
dead_tuples
bigint
Number of dead tuples
free_percent
float
Percentage of free space
table_len
bigint
Physical relation length in bytes (exact)
scanned_percent
float8
Percentage of table scanned
approx_tuple_count
bigint
Number of live tuples (estimated)
approx_tuple_len
bigint
Total length of live tuples in bytes (estimated)
approx_tuple_percent
float8
Percentage of live tuples
dead_tuple_count
bigint
Number of dead tuples (exact)
dead_tuple_len
bigint
Total length of dead tuples in bytes (exact)
dead_tuple_percent
float8
Percentage of dead tuples
approx_free_space
bigint
Total free space in bytes (estimated)
approx_free_percent
float8
Percentage of free space
Function
Description
uuid_generate_v1
() → uuid
Generates a version 1 UUID. This involves the MAC address of the computer and a time stamp. Note that UUIDs of this kind reveal the identity of the computer that created the identifier and the time at which it did so, which might make them unsuitable for certain security-sensitive applications.
uuid_generate_v1mc
() → uuid
Generates a version 1 UUID, but uses a random multicast MAC address instead of the real MAC address of the computer.
uuid_generate_v3
( namespace
uuid
, name
text
) → uuid
Generates a version 3 UUID in the given namespace using the specified input name. The namespace should be one of the special constants produced by the uuid_ns_*() functions shown in Table F.33. (It could, in theory, be any UUID.) The name is an identifier in the selected namespace.
例如: SELECT uuid_generate_v3(uuid_ns_url(), 'http://www.postgresql.org');
The name parameter will be MD5-hashed, so the cleartext cannot be derived from the generated UUID. The generation of UUIDs by this method has no random or environment-dependent element and is therefore reproducible.
uuid_generate_v4
() → uuid
Generates a version 4 UUID, which is derived entirely from random numbers.
uuid_generate_v5
( namespace
uuid
, name
text
) → uuid
Generates a version 5 UUID, which works like a version 3 UUID except that SHA-1 is used as a hashing method. Version 5 should be preferred over version 3 because SHA-1 is thought to be more secure than MD5.
Function
Description
uuid_nil
() → uuid
Returns a “nil” UUID constant, which does not occur as a real UUID.
uuid_ns_dns
() → uuid
Returns a constant designating the DNS namespace for UUIDs.
uuid_ns_url
() → uuid
Returns a constant designating the URL namespace for UUIDs.
uuid_ns_oid
() → uuid
Returns a constant designating the ISO object identifier (OID) namespace for UUIDs. (This pertains to ASN.1 OIDs, which are unrelated to the OIDs used in PostgreSQL.)
uuid_ns_x500
() → uuid
Returns a constant designating the X.500 distinguished name (DN) namespace for UUIDs.
PL/Java
Java
PL/Lua
Lua
PL/R
R
PL/sh
Unix shell
PL/v8
JavaScript
DBD::Pg
Perl
Perl DBI driver
JDBC
Java
Type 4 JDBC driver
libpqxx
C++
New-style C++ interface
node-postgres
JavaScript
Node.js driver
Npgsql
.NET
.NET data provider
pgtcl
Tcl
pgtclng
Tcl
pq
Go
Pure Go driver for Go's database/sql
psqlODBC
ODBC
ODBC driver
psycopg
Python
DB API 2.0-compliant
database size | unlimited |
number of databases | 4,294,950,911 |
relations per database | 1,431,650,303 |
relation size | 32 TB | with the default |
rows per table | limited by the number of tuples that can fit onto 4,294,967,295 pages |
columns per table | 1600 | further limited by tuple size fitting on a single page; see note below |
columns in a result set | 1664 |
field size | 1 GB |
identifier length | 63 bytes | can be increased by recompiling PostgreSQL |
indexes per table | unlimited | constrained by maximum relations per database |
columns per index | 32 | can be increased by recompiling PostgreSQL |
partition keys | 32 | can be increased by recompiling PostgreSQL |
The table below lists all tokens that are key words in the SQL standard and in PostgreSQL 15.2. Background information can be found in the chapter on SQL syntax. (For space reasons, only the two most recent versions of the SQL standard, plus SQL-92 for historical comparison, are listed; the differences between these and the other intermediate standard versions are small.)
SQL distinguishes between reserved and non-reserved key words. According to the standard, only reserved key words are real key words; they are never allowed as identifiers. Non-reserved key words have a special meaning only in particular contexts and can be used as identifiers in other contexts. Most non-reserved key words are actually the names of built-in tables and functions specified by SQL. The concept of non-reserved key words essentially only exists to declare that some predefined meaning is attached to a word in some contexts.
In the PostgreSQL parser, the situation is a bit more complicated. There are several different classes of tokens, ranging from those that can never be used as an identifier to those that have absolutely no special status in the parser and are treated as ordinary identifiers. (The latter is usually the case for functions specified by SQL.) Even reserved key words are not completely reserved in PostgreSQL, but can be used as column labels (for example, SELECT 55 AS CHECK is allowed, even though CHECK is a reserved key word).
In the PostgreSQL column of the table, we classify as “non-reserved” those key words that are explicitly known to the parser but are allowed as column or table names. Some key words that are otherwise non-reserved cannot be used as function or data type names and are marked accordingly. (Most of these words represent built-in functions or data types with special syntax. The function or type is still available, but it cannot be redefined by the user.) Labeled “reserved” are those tokens that are not allowed as column or table names. Some reserved key words are allowable as names for functions or data types; this is also shown in the table. If not so marked, a reserved key word is only allowed as a column label. A blank entry in this column means that the word is treated as an ordinary identifier by PostgreSQL.
Furthermore, while most key words can be used as column labels without writing AS before them (as described elsewhere), there are a few that require a leading AS to avoid ambiguity. These are marked in the table as “requires AS”.
As a general rule, if you get spurious parser errors for commands that use any of the listed key words as an identifier, you should try quoting the identifier with double quotes to see if the problem goes away.
Before studying the table, it is important to understand that the fact that a key word is not reserved in PostgreSQL does not mean that the feature related to that word is not implemented. Conversely, the presence of a key word does not indicate the existence of a feature.
Key Word | PostgreSQL | SQL:2016 | SQL:2011 | SQL-92 |
| non-reserved | non-reserved | non-reserved |
|
| non-reserved (cannot be function or type) |
|
|
|
| reserved, requires | reserved | reserved | reserved |
| non-reserved (cannot be function or type) | reserved | reserved |
|
| non-reserved | reserved | reserved |
|
| non-reserved |
|
|
|
| reserved, requires | reserved | reserved | reserved |
| non-reserved |
|
|
|
|
| non-reserved | non-reserved |
|
|
| non-reserved | non-reserved |
|
| non-reserved | reserved | reserved |
|
| non-reserved, requires | reserved | reserved | reserved |
|
| non-reserved | non-reserved |
|
| non-reserved | reserved | reserved | reserved |
| non-reserved |
|
|
|
|
| non-reserved | non-reserved |
|
| reserved (can be function or type) |
|
|
|
| non-reserved | non-reserved | non-reserved | reserved |
|
| non-reserved | non-reserved |
|
| non-reserved |
|
|
|
|
| non-reserved | non-reserved |
|
| non-reserved |
|
|
|
| non-reserved | reserved | reserved |
|
| reserved | reserved | reserved | reserved |
| non-reserved |
|
|
|
| non-reserved | non-reserved | non-reserved |
|
| non-reserved | non-reserved | non-reserved |
|
|
| non-reserved | non-reserved |
|
| non-reserved |
|
|
|
| non-reserved |
|
|
|
|
| reserved | reserved | reserved |
| non-reserved |
|
|
|
| non-reserved |
|
|
|
|
| reserved |
|
|
| reserved | non-reserved | non-reserved | reserved |
| non-reserved |
|
|
|
| reserved (can be function or type) | reserved | reserved | reserved |
| non-reserved (cannot be function or type) | reserved | reserved |
|
| non-reserved | non-reserved | non-reserved | reserved |
| non-reserved | reserved | reserved | reserved |
| non-reserved | reserved | reserved | reserved |
|
| non-reserved | non-reserved |
|
|
| non-reserved | non-reserved |
|
| non-reserved | non-reserved | non-reserved |
|
| non-reserved (cannot be function or type) | reserved | reserved | reserved |
| non-reserved (cannot be function or type) | reserved | reserved | reserved |
|
| non-reserved | non-reserved |
|
| reserved, requires | reserved | reserved | reserved |
|
| reserved | reserved |
|
| non-reserved (cannot be function or type) | reserved | reserved | reserved |
| reserved, requires | reserved | reserved | reserved |
| non-reserved | non-reserved | non-reserved |
|
| reserved (can be function or type) | reserved | reserved | reserved |
| reserved (can be function or type), requires |
|
|
|
| non-reserved | non-reserved | non-reserved | reserved |
| reserved (can be function or type) | reserved | reserved | reserved |
|
| reserved |
|
|
|
| reserved |
|
|
|
| reserved |
|
|
|
| reserved |
|
|
|
| reserved |
|
|
|
| reserved |
|
|
|
| reserved |
|
|
|
| reserved |
|
|
|
| reserved |
|
|
|
| non-reserved | non-reserved |
|
|
| non-reserved |
|
|
| non-reserved | non-reserved | non-reserved | reserved |
|
| non-reserved |
|
|
|
| non-reserved | non-reserved |
|
|
| non-reserved | non-reserved |
|
| non-reserved |
|
|
|
|
| reserved | reserved |
|
| non-reserved | reserved | reserved | reserved |
| non-reserved | reserved | reserved |
|
| non-reserved | non-reserved | non-reserved | reserved |
|
| reserved | reserved |
|
| reserved | reserved | reserved |
|
|
| reserved | reserved |
|
| reserved | reserved | reserved | reserved |
| non-reserved |
|
|
|
| non-reserved (cannot be function or type) |
|
|
|
| reserved (can be function or type) | reserved | reserved | reserved |
|
| non-reserved | non-reserved | non-reserved |
| non-reserved | non-reserved | non-reserved | reserved |
|
| non-reserved | non-reserved |
|
| reserved (can be function or type) | reserved | reserved | reserved |
|
| reserved | reserved |
|
| reserved, requires | non-reserved | non-reserved |
|
|
| non-reserved | non-reserved |
|
|
| reserved |
|
|
| non-reserved |
|
|
|
|
| reserved | reserved |
|
| non-reserved |
|
|
|
| non-reserved | reserved | reserved | reserved |
| reserved | reserved | reserved |
|
| reserved | reserved | reserved |
|
| non-reserved | non-reserved | non-reserved |
|
|
| non-reserved | non-reserved |
|
| non-reserved |
|
|
|
| non-reserved |
|
|
|
|
| reserved |
|
|
|
| reserved |
|
|
| non-reserved |
|
|
|
|
| reserved | reserved | reserved |
|
| non-reserved | non-reserved |
|
|
| non-reserved | non-reserved |
|
| non-reserved | non-reserved | non-reserved |
|
| non-reserved | reserved | reserved | reserved |
| non-reserved | non-reserved | non-reserved |
|
|
| reserved |
|
|
|
| reserved |
|
|
|
| reserved |
|
|
| non-reserved |
|
|
|
|
| reserved | reserved | reserved |
| non-reserved | non-reserved | non-reserved |
|
|
| reserved |
|
|
|
| reserved | reserved |
|
| non-reserved | reserved | reserved |
|
|
| non-reserved | non-reserved | non-reserved |
|
| non-reserved | non-reserved | non-reserved |
|
| non-reserved | non-reserved | non-reserved |
| non-reserved | reserved | reserved |
|
|
| reserved | reserved | reserved |
| non-reserved, requires | reserved | reserved | reserved |
| non-reserved | non-reserved | non-reserved |
|
|
| reserved | reserved |
|
| non-reserved |
|
|
|
|
| reserved | reserved |
|
|
| reserved | reserved | reserved |
| non-reserved, requires | reserved | reserved | reserved |
|
| non-reserved | non-reserved | non-reserved |
| non-reserved |
|
|
|
|
| reserved | reserved |
|
|
| non-reserved | non-reserved | non-reserved |
| non-reserved | non-reserved | non-reserved | non-reserved |
| non-reserved | non-reserved | non-reserved | reserved |
|
| non-reserved | non-reserved |
|
| non-reserved (cannot be function or type) | reserved | reserved | reserved |
| reserved (can be function or type) | reserved | reserved | reserved |
| non-reserved (cannot be function or type) | reserved | reserved | reserved |
|
| reserved | reserved |
|
|
| non-reserved |
|
|
|
| non-reserved | non-reserved |
|
| non-reserved | reserved | reserved |
|
| non-reserved | non-reserved | non-reserved | reserved |
| non-reserved | non-reserved | non-reserved |
|
| non-reserved | non-reserved | non-reserved |
|
| non-reserved | non-reserved | non-reserved |
|
| non-reserved | non-reserved | non-reserved |
|
|
| non-reserved | non-reserved |
|
| non-reserved | reserved | reserved | reserved |
| non-reserved (cannot be function or type) | reserved | reserved |
|
| non-reserved (cannot be function or type) | reserved | reserved |
|
| non-reserved | non-reserved | non-reserved |
|
| reserved | reserved | reserved | reserved |
| non-reserved |
|
|
|
| non-reserved |
|
|
|
| reserved (can be function or type), requires |
|
|
|
| non-reserved |
|
|
|
|
| reserved | reserved |
|
|
| reserved | reserved |
|
| reserved | reserved | reserved | reserved |
|
| non-reserved | non-reserved | non-reserved |
| non-reserved (cannot be function or type) | reserved | reserved | reserved |
| non-reserved | non-reserved | non-reserved |
|
|
| non-reserved |
|
|
|
| non-reserved | non-reserved | non-reserved |
| non-reserved (cannot be function or type) | reserved | reserved | reserved |
| non-reserved | non-reserved | non-reserved |
|
|
| non-reserved |
|
|
|
| reserved | reserved |
|
|
| non-reserved | non-reserved |
|
|
| reserved | reserved | reserved |
| non-reserved | reserved | reserved | reserved |
| non-reserved | non-reserved | non-reserved |
|
| reserved, requires | reserved | reserved |
|
| non-reserved |
|
|
|
| non-reserved | reserved | reserved |
|
|
| reserved |
|
|
| reserved, requires | reserved | reserved | reserved |
|
| reserved |
|
|
| reserved | reserved | reserved | reserved |
|
| reserved | reserved | reserved |
| non-reserved |
|
|
|
| non-reserved | non-reserved | non-reserved | reserved |
| non-reserved | non-reserved | non-reserved |
|
| reserved | reserved | reserved | reserved |
| reserved, requires | reserved | reserved | reserved |
|
| non-reserved | non-reserved |
|
| non-reserved | non-reserved | non-reserved |
|
| non-reserved | non-reserved | non-reserved |
|
| non-reserved (cannot be function or type) | reserved | reserved |
|
| reserved (can be function or type) | reserved | reserved | reserved |
|
| non-reserved | non-reserved | reserved |
| non-reserved, requires | reserved | reserved |
|
|
| non-reserved |
|
|
| reserved (can be function or type), requires | reserved | reserved | reserved |
| non-reserved (cannot be function or type) | reserved | reserved |
|
| non-reserved | non-reserved | non-reserved |
|
| non-reserved |
|
|
|
| non-reserved |
|
|
|
|
| non-reserved | non-reserved |
|
|
| non-reserved | non-reserved | reserved |
| non-reserved |
|
|
|
| non-reserved | reserved | reserved |
|
|
| non-reserved | non-reserved |
|
|
| non-reserved | non-reserved |
|
|
| non-reserved | non-reserved |
|
|
| non-reserved | non-reserved |
|
|
| non-reserved | non-reserved |
|
|
| non-reserved | non-reserved |
|
| non-reserved |
|
|
|
| non-reserved | non-reserved | non-reserved | reserved |
| non-reserved | reserved | reserved |
|
|
| non-reserved | non-reserved | non-reserved |
|
| non-reserved |
|
|
| non-reserved | non-reserved | non-reserved |
|
|
| non-reserved | non-reserved |
|
| non-reserved |
|
|
|
|
| non-reserved |
|
|
|
| non-reserved | non-reserved |
|
|
| reserved |
|
|
|
| reserved |
|
|
|
| reserved | reserved |
|
|
| reserved | reserved |
|
|
| reserved | reserved |
|
|
| reserved | reserved |
|
|
| reserved | reserved |
|
|
| non-reserved | non-reserved |
|
|
| reserved |
|
|
|
| non-reserved |
|
|
| reserved | non-reserved | non-reserved |
|
|
| non-reserved |
|
|
| non-reserved |
|
|
|
|
| non-reserved | non-reserved | non-reserved |
| non-reserved |
|
|
|
|
| reserved | reserved |
|
| non-reserved (cannot be function or type) | reserved | reserved | reserved |
|
| reserved | reserved |
|
|
| reserved | reserved |
|
|
| reserved | reserved |
|
| non-reserved | non-reserved | non-reserved |
|
| non-reserved (cannot be function or type), requires | reserved | reserved | reserved |
| non-reserved | reserved | reserved | reserved |
| non-reserved |
|
|
|
| non-reserved | non-reserved | non-reserved | reserved |
|
| non-reserved |
|
|
| reserved | reserved | reserved | reserved |
| non-reserved | non-reserved | non-reserved | reserved |
|
| non-reserved |
|
|
| non-reserved | non-reserved | non-reserved | reserved |
| non-reserved |
|
|
|
| non-reserved | reserved | reserved | reserved |
| non-reserved |
|
|
|
| non-reserved |
|
|
|
|
| non-reserved |
|
|
|
| reserved |
|
|
|
| non-reserved | non-reserved | reserved |
| non-reserved |
|
|
|
| non-reserved |
|
|
|
|
| non-reserved |
|
|
| non-reserved | reserved | reserved |
|
|
| reserved | reserved |
|
| non-reserved | non-reserved | non-reserved | reserved |
|
| reserved | reserved |
|
| non-reserved (cannot be function or type) | reserved | reserved | reserved |
| non-reserved |
|
|
|
| non-reserved |
|
|
|
|
| non-reserved | non-reserved |
|
| non-reserved | reserved | reserved |
|
| non-reserved | reserved | reserved |
|
| reserved | reserved | reserved | reserved |
| non-reserved | reserved | reserved |
|
| non-reserved |
|
|
|
|
| reserved | reserved |
|
|
| reserved | reserved |
|
|
| reserved | reserved |
|
|
| reserved | reserved |
|
|
| reserved | reserved |
|
|
| reserved | reserved |
|
|
| reserved | reserved |
|
|
| reserved | reserved |
|
|
| reserved | reserved |
|
| non-reserved |
|
|
|
| non-reserved | non-reserved | non-reserved | reserved |
| non-reserved | reserved | reserved |
|
| non-reserved |
|
|
|
| non-reserved | non-reserved | non-reserved | non-reserved |
| non-reserved |
|
|
|
| non-reserved |
|
|
|
|
| non-reserved | non-reserved |
|
| non-reserved |
|
|
|
|
| non-reserved | non-reserved |
|
| non-reserved | non-reserved | non-reserved |
|
|
| non-reserved | non-reserved |
|
| non-reserved | non-reserved | non-reserved | reserved |
|
| reserved | reserved |
|
| non-reserved | reserved | reserved |
|
|
| non-reserved | non-reserved |
|
|
| non-reserved | non-reserved | non-reserved |
|
| non-reserved | non-reserved | non-reserved |
|
| non-reserved | non-reserved | non-reserved |
| reserved, requires | non-reserved | non-reserved |
|
| non-reserved | reserved | reserved |
|
| non-reserved | reserved | reserved | reserved |
| reserved (can be function or type) | reserved | reserved | reserved |
| non-reserved | non-reserved | non-reserved |
|
| non-reserved | reserved | reserved | reserved |
| non-reserved | reserved | reserved |
|
| non-reserved | non-reserved | non-reserved |
|
| non-reserved |
|
|
|
|
| non-reserved | non-reserved |
|
|
| non-reserved | non-reserved |
|
|
| non-reserved | non-reserved |
|
| non-reserved (cannot be function or type) | reserved | reserved |
|
| non-reserved | reserved | reserved | reserved |
|
| non-reserved | non-reserved | non-reserved |
|
| reserved | reserved |
|
| non-reserved |
|
|
|
|
| reserved |
|
|
| non-reserved | reserved | reserved |
|
|
| non-reserved |
|
|
|
| non-reserved | non-reserved | non-reserved |
| non-reserved | non-reserved | non-reserved | reserved |
| non-reserved |
|
|
|
|
| non-reserved | non-reserved | non-reserved |
|
| reserved | reserved |
|
|
| non-reserved | non-reserved |
|
|
| non-reserved | non-reserved |
|
|
| non-reserved | non-reserved |
|
| non-reserved | reserved | reserved | reserved |
| non-reserved | reserved | reserved |
|
| non-reserved, requires | reserved | reserved | reserved |
|
| non-reserved | non-reserved | reserved |
| non-reserved | non-reserved | non-reserved |
|
|
| reserved |
|
|
| reserved | reserved | reserved | reserved |
|
| non-reserved | non-reserved |
|
|
| non-reserved | non-reserved |
|
|
| non-reserved |
|
|
|
| reserved | reserved |
|
| non-reserved | non-reserved | non-reserved |
|
| non-reserved |
|
|
|
| non-reserved | non-reserved | non-reserved | non-reserved |
| non-reserved | non-reserved | non-reserved |
|
|
| non-reserved | non-reserved | non-reserved |
| non-reserved | non-reserved | non-reserved | reserved |
| reserved | reserved | reserved | reserved |
| non-reserved | reserved | reserved | reserved |
| non-reserved (cannot be function or type) |
|
|
|
| non-reserved | non-reserved | non-reserved |
|
| non-reserved |
|
|
|
| non-reserved | reserved |
|
|
| reserved (can be function or type) | reserved | reserved |
|
| non-reserved | non-reserved | non-reserved |
|
|
| reserved |
|
|
|
| reserved |
|
|
|
| non-reserved | non-reserved | reserved |
| non-reserved | reserved |
|
|
| non-reserved (cannot be function or type) | reserved | reserved | reserved |
| non-reserved |
|
|
|
| reserved | reserved | reserved | reserved |
|
| non-reserved |
|
|
|
| non-reserved | non-reserved |
|
|
| non-reserved | non-reserved | reserved |
|
| reserved | reserved |
|
|
| reserved | reserved |
|
|
| non-reserved | non-reserved |
|
| non-reserved | reserved | reserved | reserved |
|
|
|
| reserved |
|
|
|
| reserved |
|
| reserved | reserved |
|
|
| reserved | reserved | reserved |
|
| reserved | reserved |
|
|
| reserved | reserved |
|
| non-reserved |
|
|
|
| non-reserved | non-reserved | non-reserved |
|
| non-reserved | reserved | reserved |
|
|
| non-reserved | non-reserved |
|
| non-reserved | non-reserved | non-reserved |
|
|
| reserved | reserved |
|
| non-reserved |
|
|
|
|
| reserved | reserved |
|
|
| reserved | reserved |
|
| non-reserved |
|
|
|
| non-reserved |
|
|
|
| non-reserved |
|
|
|
| non-reserved |
|
|
|
| non-reserved |
|
|
|
|
| non-reserved |
|
|
| non-reserved | non-reserved | non-reserved |
|
|
| non-reserved | non-reserved |
|
|
| non-reserved | non-reserved |
|
|
| non-reserved | non-reserved | non-reserved |
|
| reserved | reserved |
|
| non-reserved |
|
|
|
|
| reserved |
|
|
| non-reserved (cannot be function or type) | reserved | reserved | reserved |
|
| reserved | reserved |
|
|
| reserved | reserved |
|
|
| reserved | reserved | reserved |
| non-reserved |
|
|
|
| reserved | reserved | reserved |
|
| non-reserved |
|
|
|
| non-reserved | reserved | reserved |
|
|
| reserved | reserved |
|
|
| reserved | reserved | reserved |
|
| non-reserved | non-reserved |
|
| reserved | reserved | reserved | reserved |
| non-reserved |
|
|
|
| reserved (can be function or type) | reserved | reserved |
|
| non-reserved |
|
|
|
|
| non-reserved | non-reserved | non-reserved |
|
| reserved |
|
|
|
| reserved |
|
|
| non-reserved |
|
|
|
| non-reserved |
|
|
|
| non-reserved | non-reserved | non-reserved | reserved |
| non-reserved |
|
|
|
| reserved | reserved | reserved | reserved |
|
| non-reserved |
|
|
| non-reserved | non-reserved | non-reserved |
|
| non-reserved (cannot be function or type) | reserved | reserved | reserved |
| non-reserved (cannot be function or type) | reserved | reserved | reserved |
|
| reserved | reserved | reserved |
|
| reserved | reserved | reserved |
| reserved, requires | reserved | reserved | reserved |
|
| non-reserved | non-reserved |
|
|
| non-reserved | non-reserved |
|
| reserved | reserved | reserved | reserved |
| non-reserved | non-reserved | non-reserved | reserved |
|
| non-reserved | non-reserved |
|
|
| non-reserved | non-reserved |
|
|
| non-reserved | non-reserved |
|
| non-reserved | non-reserved | non-reserved |
|
|
| non-reserved | non-reserved |
|
|
| reserved | reserved | reserved |
|
| reserved | reserved |
|
|
| reserved | reserved | reserved |
| non-reserved (cannot be function or type) | reserved | reserved |
|
| non-reserved | reserved | reserved |
|
|
| non-reserved | non-reserved |
|
|
| non-reserved | non-reserved |
|
|
| non-reserved | non-reserved |
|
| non-reserved (cannot be function or type) | reserved | reserved | reserved |
|
| reserved | reserved |
|
| reserved | reserved | reserved | reserved |
| non-reserved | reserved | reserved |
|
| non-reserved |
|
|
|
| non-reserved | non-reserved | non-reserved | non-reserved |
| non-reserved |
|
|
|
| non-reserved | reserved | reserved |
|
| non-reserved | non-reserved | non-reserved |
|
| non-reserved | non-reserved | non-reserved | non-reserved |
|
| non-reserved |
|
|
|
| non-reserved | non-reserved |
|
| non-reserved |
|
|
|
| reserved, requires | reserved | reserved | reserved |
| reserved | reserved | reserved | reserved |
| non-reserved | reserved | reserved | reserved |
|
| non-reserved | non-reserved |
|
| non-reserved |
|
|
|
| non-reserved |
|
|
|
|
| reserved |
|
|
|
| non-reserved | non-reserved | non-reserved |
|
| reserved | reserved |
|
| non-reserved |
|
|
|
|
| non-reserved | non-reserved |
|
| non-reserved | reserved | reserved | reserved |
|
| reserved | reserved | reserved |
|
| non-reserved | non-reserved |
|
|
| non-reserved | non-reserved | reserved |
| reserved | reserved | reserved | reserved |
|
| non-reserved | non-reserved |
|
|
| non-reserved | non-reserved |
|
|
| non-reserved | non-reserved |
|
|
| non-reserved | non-reserved |
|
| reserved | reserved | reserved | reserved |
|
| non-reserved |
|
|
|
| non-reserved |
|
|
|
| non-reserved |
|
|
| non-reserved |
|
|
|
| non-reserved | non-reserved | non-reserved |
|
| non-reserved |
|
|
|
| non-reserved |
|
|
|
| non-reserved | reserved | reserved | reserved |
| non-reserved (cannot be function or type) | reserved | reserved | reserved |
|
| reserved | reserved |
|
|
| reserved | reserved |
|
| non-reserved (cannot be function or type) | reserved | reserved | reserved |
| reserved |
|
|
|
| non-reserved, requires | reserved | reserved | reserved |
|
| reserved | reserved |
|
|
| reserved | reserved |
|
| reserved (can be function or type) |
|
|
|
| non-reserved | non-reserved | non-reserved |
|
|
| reserved | reserved |
|
| non-reserved | non-reserved | non-reserved | reserved |
| non-reserved |
|
|
|
| non-reserved |
|
|
|
| reserved | reserved | reserved | reserved |
|
| reserved | reserved | reserved |
| reserved, requires | reserved | reserved | reserved |
| non-reserved | non-reserved | non-reserved |
|
|
| reserved | reserved |
|
| reserved, requires | reserved | reserved |
|
| reserved, requires | reserved | reserved | reserved |
| non-reserved, requires | reserved | reserved |
|
| non-reserved, requires | reserved | reserved |
|
| non-reserved | non-reserved | non-reserved | reserved |
| non-reserved | non-reserved | non-reserved |
|
| non-reserved | non-reserved | non-reserved | reserved |
| non-reserved | reserved | reserved |
|
|
| reserved | reserved |
|
| non-reserved (cannot be function or type) | reserved | reserved |
|
|
| reserved | reserved |
|
|
| reserved | reserved |
|
|
| reserved | reserved |
|
| non-reserved (cannot be function or type) | reserved | reserved |
|
|
| non-reserved | non-reserved |
|
|
| reserved | reserved |
|
| non-reserved (cannot be function or type) | reserved | reserved |
|
| non-reserved (cannot be function or type) | reserved | reserved |
|
| non-reserved (cannot be function or type) | reserved | reserved |
|
|
| reserved | reserved |
|
| non-reserved (cannot be function or type) | reserved | reserved |
|
| non-reserved (cannot be function or type) | reserved | reserved |
|
| non-reserved (cannot be function or type) | reserved | reserved |
|
|
| reserved | reserved |
|
| non-reserved (cannot be function or type) |
|
|
|
|
| non-reserved | non-reserved |
|
| non-reserved (cannot be function or type) | reserved | reserved |
|
| non-reserved (cannot be function or type) | reserved | reserved |
|
|
| reserved | reserved |
|
|
| reserved | reserved |
|
| non-reserved, requires | reserved | reserved | reserved |
| non-reserved | non-reserved | non-reserved |
|
| non-reserved | non-reserved | non-reserved | reserved |
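Where an up-to-date, machine-readable view of this categorization is wanted, the key word list can also be queried at run time. The following is a minimal sketch, assuming the pg_get_keywords() system function and its catcode column as provided by recent PostgreSQL releases; it lists the fully reserved key words known to the server you are connected to.

SELECT word, catcode, catdesc
  FROM pg_get_keywords()          -- assumed available in modern PostgreSQL releases
 WHERE catcode = 'R'              -- 'R' marks fully reserved key words
 ORDER BY word;

The other catcode values distinguish the remaining categories, so omitting the WHERE clause lists every key word together with its category description.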