PostgreSQL provides a large number of functions and operators for the built-in data types. Users can also define their own functions and operators, as described in Part V. The psql commands \df and \do can be used to list all available functions and operators, respectively.
The notation used throughout this chapter to describe the argument and result data types of a function or operator is like this:

repeat ( text, integer ) → text

which says that the function repeat takes one text and one integer argument and returns a result of type text. The right arrow is also used to indicate the result of an example, thus:

repeat('Pg', 4) → PgPgPgPg
If you are concerned about portability, then note that most of the functions and operators described in this chapter, with the exception of the most common arithmetic and comparison operators and some explicitly marked functions, are not specified by the SQL standard. Some of this extended functionality is present in other SQL database management systems, and in many cases this functionality is compatible and consistent between the various implementations. This chapter is not necessarily complete; additional functions appear in other relevant sections of the manual.

Version: 11
There are three separate approaches to pattern matching provided by PostgreSQL: the traditional SQL LIKE operator, the more recent SIMILAR TO operator (added in SQL:1999), and POSIX-style regular expressions. Aside from the basic "does this string match this pattern?" operators, functions are available to extract or replace matching substrings, and to split a string at matching locations.
Tip: If you have pattern matching needs that go beyond this, consider writing a user-defined function in Perl or Tcl.
While most regular-expression searches can be executed very quickly, regular expressions can be contrived that take arbitrary amounts of time and memory to process. Be wary of accepting regular-expression search patterns from hostile sources. If you must do so, it is advisable to impose a statement timeout.
Searches using SIMILAR TO patterns have the same security hazards, since SIMILAR TO provides many of the same capabilities as POSIX-style regular expressions.
LIKE searches are much simpler than the other two options, and are safer to use with possibly-malicious pattern sources.
LIKE
The LIKE expression returns true if the string matches the supplied pattern. (As expected, the NOT LIKE expression returns false if LIKE returns true, and vice versa. An equivalent expression is NOT (string LIKE pattern).)
If pattern does not contain percent signs or underscores, then the pattern only represents the string itself; in that case LIKE acts like the equals operator. An underscore (_) in pattern stands for (matches) any single character; a percent sign (%) matches any sequence of zero or more characters.
Some examples:
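For instance (expected results shown as comments):

```sql
SELECT 'abc' LIKE 'abc';   -- true
SELECT 'abc' LIKE 'a%';    -- true
SELECT 'abc' LIKE '_b_';   -- true
SELECT 'abc' LIKE 'c';     -- false
```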
LIKE pattern matching always covers the entire string. Therefore, if it's desired to match a sequence anywhere within a string, the pattern must start and end with a percent sign.
To match a literal underscore or percent sign without matching other characters, the respective character in pattern must be preceded by the escape character. The default escape character is the backslash but a different one can be selected by using the ESCAPE clause. To match the escape character itself, write two escape characters.
If you have standard_conforming_strings turned off, any backslashes you write in literal string constants will need to be doubled. See Section 4.1.2.1 for more information.
It's also possible to select no escape character by writing ESCAPE ''. This effectively disables the escape mechanism, which makes it impossible to turn off the special meaning of underscore and percent signs in the pattern.
The key word ILIKE can be used instead of LIKE to make the match case-insensitive according to the active locale. This is not in the SQL standard but is a PostgreSQL extension.
The operator ~~ is equivalent to LIKE, and ~~* corresponds to ILIKE. There are also !~~ and !~~* operators that represent NOT LIKE and NOT ILIKE, respectively. All of these operators are PostgreSQL-specific.
There is also the prefix operator ^@ and corresponding starts_with function, which cover cases when only searching by the beginning of the string is needed.
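A brief illustration of the prefix operator and its equivalent function (results as comments):

```sql
SELECT 'alphabet' ^@ 'alph';             -- true
SELECT starts_with('alphabet', 'alph');  -- true
```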
SIMILAR TO Regular Expressions

The SIMILAR TO operator returns true or false depending on whether its pattern matches the given string. It is similar to LIKE, except that it interprets the pattern using the SQL standard's definition of a regular expression. SQL regular expressions are a curious cross between LIKE notation and common regular expression notation.
Like LIKE, the SIMILAR TO operator succeeds only if its pattern matches the entire string; this is unlike common regular expression behavior where the pattern can match any part of the string. Also like LIKE, SIMILAR TO uses _ and % as wildcard characters denoting any single character and any string, respectively (these are comparable to . and .* in POSIX regular expressions).
In addition to these facilities borrowed from LIKE, SIMILAR TO supports these pattern-matching metacharacters borrowed from POSIX regular expressions:

| denotes alternation (either of two alternatives).
* denotes repetition of the previous item zero or more times.
+ denotes repetition of the previous item one or more times.
? denotes repetition of the previous item zero or one time.
{m} denotes repetition of the previous item exactly m times.
{m,} denotes repetition of the previous item m or more times.
{m,n} denotes repetition of the previous item at least m and not more than n times.
Parentheses () can be used to group items into a single logical item.
A bracket expression [...] specifies a character class, just as in POSIX regular expressions.

Notice that the period (.) is not a metacharacter for SIMILAR TO.
As with LIKE, a backslash disables the special meaning of any of these metacharacters; or a different escape character can be specified with ESCAPE.
Some examples:
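For instance (expected results shown as comments):

```sql
SELECT 'abc' SIMILAR TO 'abc';      -- true
SELECT 'abc' SIMILAR TO 'a';        -- false
SELECT 'abc' SIMILAR TO '%(b|d)%';  -- true
SELECT 'abc' SIMILAR TO '(b|c)%';   -- false
```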
The substring function with three parameters, substring(string from pattern for escape-character), provides extraction of a substring that matches an SQL regular expression pattern. As with SIMILAR TO, the specified pattern must match the entire data string, or else the function fails and returns null. To indicate the part of the pattern that should be returned on success, the pattern must contain two occurrences of the escape character followed by a double quote ("). The text matching the portion of the pattern between these markers is returned.
Some examples, with #" delimiting the return string:
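```sql
SELECT substring('foobar' from '%#"o_b#"%' for '#');  -- oob
SELECT substring('foobar' from '#"o_b#"%' for '#');   -- NULL
```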
Table 9.14 lists the available operators for pattern matching using POSIX regular expressions.
POSIX regular expressions provide a more powerful means for pattern matching than the LIKE and SIMILAR TO operators. Many Unix tools such as egrep, sed, or awk use a pattern matching language that is similar to the one described here.
A regular expression is a character sequence that is an abbreviated definition of a set of strings (a regular set). A string is said to match a regular expression if it is a member of the regular set described by the regular expression. As with LIKE, pattern characters match string characters exactly unless they are special characters in the regular expression language — but regular expressions use different special characters than LIKE does. Unlike LIKE patterns, a regular expression is allowed to match anywhere within a string, unless the regular expression is explicitly anchored to the beginning or end of the string.
Some examples:
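For instance, using the ~ match operator (expected results shown as comments):

```sql
SELECT 'abc' ~ 'abc';     -- true
SELECT 'abc' ~ '^a';      -- true
SELECT 'abc' ~ '(b|d)';   -- true
SELECT 'abc' ~ '^(b|c)';  -- false
```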
The POSIX pattern language is described in much greater detail below.
The substring function with two parameters, substring(string from pattern), provides extraction of a substring that matches a POSIX regular expression pattern. It returns null if there is no match, otherwise the portion of the text that matched the pattern. But if the pattern contains any parentheses, the portion of the text that matched the first parenthesized subexpression (the one whose left parenthesis comes first) is returned. You can put parentheses around the whole expression if you want to use parentheses within it without triggering this exception. If you need parentheses in the pattern before the subexpression you want to extract, see the non-capturing parentheses described below.
Some examples:
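For instance (expected results shown as comments):

```sql
SELECT substring('foobar' from 'o.b');    -- oob
SELECT substring('foobar' from 'o(.)b');  -- o
```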
The regexp_replace function provides substitution of new text for substrings that match POSIX regular expression patterns. It has the syntax regexp_replace(source, pattern, replacement [, flags ]). The source string is returned unchanged if there is no match to the pattern. If there is a match, the source string is returned with the replacement string substituted for the matching substring. The replacement string can contain \n, where n is 1 through 9, to indicate that the source substring matching the n'th parenthesized subexpression of the pattern should be inserted, and it can contain \& to indicate that the substring matching the entire pattern should be inserted. Write \\ if you need to put a literal backslash in the replacement text. The flags parameter is an optional text string containing zero or more single-letter flags that change the function's behavior. Flag i specifies case-insensitive matching, while flag g specifies replacement of each matching substring rather than only the first one. Supported flags (though not g) are described in Table 9.22.
Some examples:
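For instance (expected results shown as comments):

```sql
SELECT regexp_replace('foobarbaz', 'b..', 'X');            -- fooXbaz
SELECT regexp_replace('foobarbaz', 'b..', 'X', 'g');       -- fooXX
SELECT regexp_replace('foobarbaz', 'b(..)', 'X\1Y', 'g');  -- fooXarYXazY
```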
The regexp_match function returns a text array of captured substring(s) resulting from the first match of a POSIX regular expression pattern to a string. It has the syntax regexp_match(string, pattern [, flags ]). If there is no match, the result is NULL. If a match is found, and the pattern contains no parenthesized subexpressions, then the result is a single-element text array containing the substring matching the whole pattern. If a match is found, and the pattern contains parenthesized subexpressions, then the result is a text array whose n'th element is the substring matching the n'th parenthesized subexpression of the pattern (not counting "non-capturing" parentheses; see below for details). The flags parameter is an optional text string containing zero or more single-letter flags that change the function's behavior. Supported flags are described in Table 9.22.
Some examples:
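For instance (expected results shown as comments):

```sql
SELECT regexp_match('foobarbequebaz', 'bar.*que');      -- {barbeque}
SELECT regexp_match('foobarbequebaz', '(bar)(beque)');  -- {bar,beque}
```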
In the common case where you just want the whole matching substring or NULL for no match, write something like
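```sql
SELECT (regexp_match('foobarbequebaz', 'bar.*que'))[1];  -- barbeque
```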
The regexp_matches function returns a set of text arrays of captured substring(s) resulting from matching a POSIX regular expression pattern to a string. It has the same syntax as regexp_match. This function returns no rows if there is no match, one row if there is a match and the g flag is not given, or N rows if there are N matches and the g flag is given. Each returned row is a text array containing the whole matched substring or the substrings matching parenthesized subexpressions of the pattern, just as described above for regexp_match. regexp_matches accepts all the flags shown in Table 9.22, plus the g flag which commands it to return all matches, not just the first one.
Some examples:
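For instance (expected results shown as comments):

```sql
SELECT regexp_matches('foo', 'not there');  -- (0 rows)
SELECT regexp_matches('foobarbequebazilbarfbonk', '(b[^b]+)(b[^b]+)', 'g');
-- {bar,beque}
-- {bazil,barf}
```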
In most cases regexp_matches() should be used with the g flag, since if you only want the first match, it's easier and more efficient to use regexp_match(). However, regexp_match() only exists in PostgreSQL version 10 and up. When working in older versions, a common trick is to place a regexp_matches() call in a sub-select, for example:
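A sketch of that trick, using hypothetical table and column names (tab, col1, col2):

```sql
SELECT col1, (SELECT regexp_matches(col2, '(bar)(beque)')) FROM tab;
```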
This produces a text array if there's a match, or NULL if not, the same as regexp_match() would do. Without the sub-select, this query would produce no output at all for table rows without a match, which is typically not the desired behavior.
The regexp_split_to_table function splits a string using a POSIX regular expression pattern as a delimiter. It has the syntax regexp_split_to_table(string, pattern [, flags ]). If there is no match to the pattern, the function returns the string. If there is at least one match, for each match it returns the text from the end of the last match (or the beginning of the string) to the beginning of the match. When there are no more matches, it returns the text from the end of the last match to the end of the string. The flags parameter is an optional text string containing zero or more single-letter flags that change the function's behavior. regexp_split_to_table supports the flags described in Table 9.22.
The regexp_split_to_array function behaves the same as regexp_split_to_table, except that regexp_split_to_array returns its result as an array of text. It has the syntax regexp_split_to_array(string, pattern [, flags ]). The parameters are the same as for regexp_split_to_table.
Some examples:
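For instance (expected results shown as comments; the final pattern, \s*, matches between every pair of characters, so the split degenerates to one row per character):

```sql
SELECT foo FROM regexp_split_to_table('the quick brown fox', '\s+') AS foo;
-- the / quick / brown / fox  (one word per row)

SELECT regexp_split_to_array('the quick brown fox', '\s+');
-- {the,quick,brown,fox}

SELECT foo FROM regexp_split_to_table('the quick brown fox', '\s*') AS foo;
-- t / h / e / q / u / i / c / k / ...  (one character per row)
```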
As the last example demonstrates, the regexp split functions ignore zero-length matches that occur at the start or end of the string or immediately after a previous match. This is contrary to the strict definition of regexp matching that is implemented by regexp_match and regexp_matches, but is usually the most convenient behavior in practice. Other software systems such as Perl use similar definitions.
PostgreSQL's regular expressions are implemented using a software package written by Henry Spencer. Much of the description of regular expressions below is copied verbatim from his manual.
Regular expressions (REs), as defined in POSIX 1003.2, come in two forms: extended REs or EREs (roughly those of egrep), and basic REs or BREs (roughly those of ed). PostgreSQL supports both forms, and also implements some extensions that are not in the POSIX standard, but have become widely used due to their availability in programming languages such as Perl and Tcl. REs using these non-POSIX extensions are called advanced REs or AREs in this documentation. AREs are almost an exact superset of EREs, but BREs have several notational incompatibilities (as well as being much more limited). We first describe the ARE and ERE forms, noting features that apply only to AREs, and then describe how BREs differ.
PostgreSQL always initially presumes that a regular expression follows the ARE rules. However, the more limited ERE or BRE rules can be chosen by prepending an embedded option to the RE pattern, as described in Section 9.7.3.4. This can be useful for compatibility with applications that expect exactly the POSIX 1003.2 rules.
A regular expression is defined as one or more branches, separated by |. It matches anything that matches one of the branches.
A branch is zero or more quantified atoms or constraints, concatenated. It matches a match for the first, followed by a match for the second, etc; an empty branch matches the empty string.
A quantified atom is an atom possibly followed by a single quantifier. Without a quantifier, it matches a match for the atom. With a quantifier, it can match some number of matches of the atom. An atom can be any of the possibilities shown in Table 9.15. The possible quantifiers and their meanings are shown in Table 9.16.
A constraint matches an empty string, but matches only when specific conditions are met. A constraint can be used where an atom could be used, except it cannot be followed by a quantifier. The simple constraints are shown in Table 9.17; some more constraints are described later.
An RE cannot end with a backslash (\).
If you have standard_conforming_strings turned off, any backslashes you write in literal string constants will need to be doubled. See Section 4.1.2.1 for more information.
The forms using {...} are known as bounds. The numbers m and n within a bound are unsigned decimal integers with permissible values from 0 to 255 inclusive.
Non-greedy quantifiers (available in AREs only) match the same possibilities as their corresponding normal (greedy) counterparts, but prefer the smallest number rather than the largest number of matches. See Section 9.7.3.5 for more detail.
A quantifier cannot immediately follow another quantifier, e.g., ** is invalid. A quantifier cannot begin an expression or subexpression or follow ^ or |.
Lookahead and lookbehind constraints cannot contain back references (see Section 9.7.3.3), and all parentheses within them are considered non-capturing.
A bracket expression is a list of characters enclosed in []. It normally matches any single character from the list (but see below). If the list begins with ^, it matches any single character not from the rest of the list. If two characters in the list are separated by -, this is shorthand for the full range of characters between those two (inclusive) in the collating sequence, e.g., [0-9] in ASCII matches any decimal digit. It is illegal for two ranges to share an endpoint, e.g., a-c-e. Ranges are very collating-sequence-dependent, so portable programs should avoid relying on them.
To include a literal ] in the list, make it the first character (after ^, if that is used). To include a literal -, make it the first or last character, or the second endpoint of a range. To use a literal - as the first endpoint of a range, enclose it in [. and .] to make it a collating element (see below). With the exception of these characters, some combinations using [ (see next paragraphs), and escapes (AREs only), all other special characters lose their special significance within a bracket expression. In particular, \ is not special when following ERE or BRE rules, though it is special (as introducing an escape) in AREs.
Within a bracket expression, a collating element (a character, a multiple-character sequence that collates as if it were a single character, or a collating-sequence name for either) enclosed in [. and .] stands for the sequence of characters of that collating element. The sequence is treated as a single element of the bracket expression's list. This allows a bracket expression containing a multiple-character collating element to match more than one character, e.g., if the collating sequence includes a ch collating element, then the RE [[.ch.]]*c matches the first five characters of chchcc.
PostgreSQL currently does not support multi-character collating elements. This information describes possible future behavior.
Within a bracket expression, a collating element enclosed in [= and =] is an equivalence class, standing for the sequences of characters of all collating elements equivalent to that one, including itself. (If there are no other equivalent collating elements, the treatment is as if the enclosing delimiters were [. and .].) For example, if o and ^ are the members of an equivalence class, then [[=o=]], [[=^=]], and [o^] are all synonymous. An equivalence class cannot be an endpoint of a range.
Within a bracket expression, the name of a character class enclosed in [: and :] stands for the list of all characters belonging to that class. Standard character class names are: alnum, alpha, blank, cntrl, digit, graph, lower, print, punct, space, upper, xdigit. These stand for the character classes defined in ctype. A locale can provide others. A character class cannot be used as an endpoint of a range.
There are two special cases of bracket expressions: the bracket expressions [[:<:]] and [[:>:]] are constraints, matching empty strings at the beginning and end of a word respectively. A word is defined as a sequence of word characters that is neither preceded nor followed by word characters. A word character is an alnum character (as defined by ctype) or an underscore. This is an extension, compatible with but not specified by POSIX 1003.2, and should be used with caution in software intended to be portable to other systems. The constraint escapes described below are usually preferable; they are no more standard, but are easier to type.
Escapes are special sequences beginning with \ followed by an alphanumeric character. Escapes come in several varieties: character entry, class shorthands, constraint escapes, and back references. A \ followed by an alphanumeric character but not constituting a valid escape is illegal in AREs. In EREs, there are no escapes: outside a bracket expression, a \ followed by an alphanumeric character merely stands for that character as an ordinary character, and inside a bracket expression, \ is an ordinary character. (The latter is the one actual incompatibility between EREs and AREs.)
Character-entry escapes exist to make it easier to specify non-printing and other inconvenient characters in REs. They are shown in Table 9.18.
Class-shorthand escapes provide shorthands for certain commonly-used character classes. They are shown in Table 9.19.
A constraint escape is a constraint, matching the empty string if specific conditions are met, written as an escape. They are shown in Table 9.20.
A back reference (\n) matches the same string matched by the previous parenthesized subexpression specified by the number n (see Table 9.21). For example, ([bc])\1 matches bb or cc but not bc or cb. The subexpression must entirely precede the back reference in the RE. Subexpressions are numbered in the order of their leading parentheses. Non-capturing parentheses do not define subexpressions.
Hexadecimal digits are 0-9, a-f, and A-F. Octal digits are 0-7.
Numeric character-entry escapes specifying values outside the ASCII range (0-127) have meanings dependent on the database encoding. When the encoding is UTF-8, escape values are equivalent to Unicode code points, for example \u1234 means the character U+1234. For other multibyte encodings, character-entry escapes usually just specify the concatenation of the byte values for the character. If the escape value does not correspond to any legal character in the database encoding, no error will be raised, but it will never match any data.
The character-entry escapes are always taken as ordinary characters. For example, \135 is ] in ASCII, but \135 does not terminate a bracket expression.
Within bracket expressions, \d, \s, and \w lose their outer brackets, and \D, \S, and \W are illegal. (So, for example, [a-c\d] is equivalent to [a-c[:digit:]]. Also, [a-c\D], which is equivalent to [a-c^[:digit:]], is illegal.)
A word is defined as in the specification of [[:<:]] and [[:>:]] above. Constraint escapes are illegal within bracket expressions.
There is an inherent ambiguity between octal character-entry escapes and back references, which is resolved by the following heuristics, as hinted at above. A leading zero always indicates an octal escape. A single non-zero digit, not followed by another digit, is always taken as a back reference. A multi-digit sequence not starting with a zero is taken as a back reference if it comes after a suitable subexpression (i.e., the number is in the legal range for a back reference), and otherwise is taken as octal.
In addition to the main syntax described above, there are some special forms and miscellaneous syntactic facilities available.
An RE can begin with one of two special director prefixes. If an RE begins with ***:, the rest of the RE is taken as an ARE. (This normally has no effect in PostgreSQL, since REs are assumed to be AREs; but it does have an effect if ERE or BRE mode had been specified by the flags parameter to a regex function.) If an RE begins with ***=, the rest of the RE is taken to be a literal string, with all characters considered ordinary characters.
An ARE can begin with embedded options: a sequence (?xyz) (where xyz is one or more alphabetic characters) specifies options affecting the rest of the RE. These options override any previously determined options — in particular, they can override the case-sensitivity behavior implied by a regex operator, or the flags parameter to a regex function. The available option letters are shown in Table 9.22. Note that these same option letters are used in the flags parameters of regex functions.
Embedded options take effect at the ) terminating the sequence. They can appear only at the start of an ARE (after the ***: director if any).
In addition to the usual (tight) RE syntax, in which all characters are significant, there is an expanded syntax, available by specifying the embedded x option. In the expanded syntax, white-space characters in the RE are ignored, as are all characters between a # and the following newline (or the end of the RE). This permits paragraphing and commenting a complex RE. There are three exceptions to that basic rule:

a white-space character or # preceded by \ is retained
white space or # within a bracket expression is retained
white space and comments cannot appear within multi-character symbols, such as (?:
For this purpose, white-space characters are blank, tab, newline, and any character that belongs to the space character class.
Finally, in an ARE, outside bracket expressions, the sequence (?#ttt) (where ttt is any text not containing a )) is a comment, completely ignored. Again, this is not allowed between the characters of multi-character symbols, like (?:. Such comments are more a historical artifact than a useful facility, and their use is deprecated; use the expanded syntax instead.
None of these metasyntax extensions is available if an initial ***= director has specified that the user's input be treated as a literal string rather than as an RE.
In the event that an RE could match more than one substring of a given string, the RE matches the one starting earliest in the string. If the RE could match more than one substring starting at that point, either the longest possible match or the shortest possible match will be taken, depending on whether the RE is greedy or non-greedy.
Whether an RE is greedy or not is determined by the following rules:
Most atoms, and all constraints, have no greediness attribute (because they cannot match variable amounts of text anyway).
Adding parentheses around an RE does not change its greediness.
A quantified atom with a fixed-repetition quantifier ({m} or {m}?) has the same greediness (possibly none) as the atom itself.
A quantified atom with other normal quantifiers (including {m,n} with m equal to n) is greedy (prefers longest match).
A quantified atom with a non-greedy quantifier (including {m,n}? with m equal to n) is non-greedy (prefers shortest match).
A branch — that is, an RE that has no top-level | operator — has the same greediness as the first quantified atom in it that has a greediness attribute.
An RE consisting of two or more branches connected by the | operator is always greedy.
The above rules associate greediness attributes not only with individual quantified atoms, but with branches and entire REs that contain quantified atoms. What that means is that the matching is done in such a way that the branch, or whole RE, matches the longest or shortest possible substring as a whole. Once the length of the entire match is determined, the part of it that matches any particular subexpression is determined on the basis of the greediness attribute of that subexpression, with subexpressions starting earlier in the RE taking priority over ones starting later.
An example of what this means:
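The two cases discussed below can be sketched like this (expected results shown as comments):

```sql
SELECT SUBSTRING('XY1234Z', 'Y*([0-9]{1,3})');   -- 123
SELECT SUBSTRING('XY1234Z', 'Y*?([0-9]{1,3})');  -- 1
```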
In the first case, the RE as a whole is greedy because Y* is greedy. It can match beginning at the Y, and it matches the longest possible string starting there, i.e., Y123. The output is the parenthesized part of that, or 123. In the second case, the RE as a whole is non-greedy because Y*? is non-greedy. It can match beginning at the Y, and it matches the shortest possible string starting there, i.e., Y1. The subexpression [0-9]{1,3} is greedy but it cannot change the decision as to the overall match length; so it is forced to match just 1.
In short, when an RE contains both greedy and non-greedy subexpressions, the total match length is either as long as possible or as short as possible, according to the attribute assigned to the whole RE. The attributes assigned to the subexpressions only affect how much of that match they are allowed to “eat” relative to each other.
The quantifiers {1,1} and {1,1}? can be used to force greediness or non-greediness, respectively, on a subexpression or a whole RE. This is useful when you need the whole RE to have a greediness attribute different from what's deduced from its elements. As an example, suppose that we are trying to separate a string containing some digits into the digits and the parts before and after them. We might try to do that like this:
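```sql
SELECT regexp_match('abc01234xyz', '(.*)(\d+)(.*)');  -- {abc0123,4,xyz}
```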
That didn't work: the first .* is greedy so it "eats" as much as it can, leaving the \d+ to match at the last possible place, the last digit. We might try to fix that by making it non-greedy:
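```sql
SELECT regexp_match('abc01234xyz', '(.*?)(\d+)(.*)');  -- {abc,0,""}
```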
That didn't work either, because now the RE as a whole is non-greedy and so it ends the overall match as soon as possible. We can get what we want by forcing the RE as a whole to be greedy:
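```sql
SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}');  -- {abc,01234,xyz}
```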
Controlling the RE's overall greediness separately from its components' greediness allows great flexibility in handling variable-length patterns.
When deciding what is a longer or shorter match, match lengths are measured in characters, not collating elements. An empty string is considered longer than no match at all. For example: bb* matches the three middle characters of abbbc; (week|wee)(night|knights) matches all ten characters of weeknights; when (.*).* is matched against abc the parenthesized subexpression matches all three characters; and when (a*)* is matched against bc both the whole RE and the parenthesized subexpression match an empty string.
If case-independent matching is specified, the effect is much as if all case distinctions had vanished from the alphabet. When an alphabetic that exists in multiple cases appears as an ordinary character outside a bracket expression, it is effectively transformed into a bracket expression containing both cases, e.g., x becomes [xX]. When it appears inside a bracket expression, all case counterparts of it are added to the bracket expression, e.g., [x] becomes [xX] and [^x] becomes [^xX].
If newline-sensitive matching is specified, . and bracket expressions using ^ will never match the newline character (so that matches will never cross newlines unless the RE explicitly arranges it) and ^ and $ will match the empty string after and before a newline respectively, in addition to matching at beginning and end of string respectively. But the ARE escapes \A and \Z continue to match beginning or end of string only.
If partial newline-sensitive matching is specified, this affects . and bracket expressions as with newline-sensitive matching, but not ^ and $.
If inverse partial newline-sensitive matching is specified, this affects ^ and $ as with newline-sensitive matching, but not . and bracket expressions. This isn't very useful but is provided for symmetry.
No particular limit is imposed on the length of REs in this implementation. However, programs intended to be highly portable should not employ REs longer than 256 bytes, as a POSIX-compliant implementation can refuse to accept such REs.
The only feature of AREs that is actually incompatible with POSIX EREs is that \ does not lose its special significance inside bracket expressions. All other ARE features use syntax which is illegal or has undefined or unspecified effects in POSIX EREs; the *** syntax of directors likewise is outside the POSIX syntax for both BREs and EREs.
Many of the ARE extensions are borrowed from Perl, but some have been changed to clean them up, and a few Perl extensions are not present. Incompatibilities of note include \b, \B, the lack of special treatment for a trailing newline, the addition of complemented bracket expressions to the things affected by newline-sensitive matching, the restrictions on parentheses and back references in lookahead/lookbehind constraints, and the longest/shortest-match (rather than first-match) matching semantics.
Two significant incompatibilities exist between AREs and the ERE syntax recognized by pre-7.4 releases of PostgreSQL:
In AREs, \ followed by an alphanumeric character is either an escape or an error, while in previous releases, it was just another way of writing the alphanumeric. This should not be much of a problem because there was no reason to write such a sequence in earlier releases.
In AREs, \ remains a special character within [], so a literal \ within a bracket expression must be written \\.
BREs differ from EREs in several respects. In BREs, |, +, and ? are ordinary characters and there is no equivalent for their functionality. The delimiters for bounds are \{ and \}, with { and } by themselves ordinary characters. The parentheses for nested subexpressions are \( and \), with ( and ) by themselves ordinary characters. ^ is an ordinary character except at the beginning of the RE or the beginning of a parenthesized subexpression, $ is an ordinary character except at the end of the RE or the end of a parenthesized subexpression, and * is an ordinary character if it appears at the beginning of the RE or the beginning of a parenthesized subexpression (after a possible leading ^). Finally, single-digit back references are available, and \< and \> are synonyms for [[:<:]] and [[:>:]] respectively; no other escapes are available in BREs.
This section describes functions and operators for examining and manipulating bit strings, that is values of the types bit
and bit varying
. Aside from the usual comparison operators, the operators shown in can be used. Bit string operands of &
, |
, and #
must be of equal length. When bit shifting, the original length of the string is preserved, as shown in the examples.
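The behavior described above can be checked directly in psql; a short sketch (results follow the stock bit string semantics, with shifts preserving the five-bit width):

```sql
-- Operands of &, | and # must be of equal length
SELECT B'10001' & B'01101';   -- 00001
SELECT B'10001' | B'01101';   -- 11101
SELECT B'10001' # B'01101';   -- 11100

-- Shifting preserves the original string length:
-- bits shifted off one end are lost, zeroes fill in from the other
SELECT B'10001' << 3;         -- 01000
SELECT B'10001' >> 2;         -- 00100
```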
The following SQL-standard functions work on bit strings as well as character strings: length
, bit_length
, octet_length
, position
, substring
, overlay
.
The following functions work on bit strings as well as binary strings: get_bit
, set_bit
. When working with a bit string, these functions number the first (leftmost) bit of the string as bit 0.
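Assuming a server version whose get_bit/set_bit accept bit strings as described above, a quick sketch of the left-to-right bit numbering:

```sql
-- For bit strings, positions count from 0 at the LEFT end
SELECT get_bit(B'101010', 0);      -- 1 (leftmost bit)
SELECT get_bit(B'101010', 1);      -- 0
SELECT set_bit(B'101010', 5, 1);   -- 101011 (rightmost bit set)
```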
In addition, it is possible to cast integral values to and from type bit
. Some examples:
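A sketch of the casts discussed in the following paragraphs; the results follow from two's-complement representation and the rightmost-n-bits rule:

```sql
SELECT 44::bit(10);              -- 0000101100
SELECT 44::bit(3);               -- 100 (rightmost 3 bits of 101100)
SELECT cast(-44 AS bit(12));     -- 111111010100 (sign-extended)
SELECT '1110'::bit(4)::integer;  -- 14
```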
Note that casting to just “bit” means casting to bit(1)
, and so will deliver only the least significant bit of the integer.
Casting an integer to bit(n)
copies the rightmost n
bits. Casting an integer to a bit string width wider than the integer itself will sign-extend on the left.
This section describes functions and operators for examining and manipulating values of type bytea
.
SQL defines some string functions that use key words, rather than commas, to separate arguments. Details are in . PostgreSQL also provides versions of these functions that use the regular function invocation syntax (see ).
The sample results shown on this page assume that the server parameter is set to escape
(the traditional PostgreSQL format).
Additional binary string manipulation functions are available and are listed in . Some of them are used internally to implement the SQL-standard string functions listed in .
get_byte
and set_byte
number the first byte of a binary string as byte 0. get_bit
and set_bit
number bits from the right within each byte; for example bit 0 is the least significant bit of the first byte, and bit 15 is the most significant bit of the second byte.
Note that for historic reasons, the function md5
returns a hex-encoded value of type text
whereas the SHA-2 functions return type bytea
. Use the functions encode
and decode
to convert between the two, for example encode(sha256('abc'), 'hex')
to get a hex-encoded text representation.
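For example, to compare a SHA-2 digest against a hex string produced by another system (the SHA-256 value below is the standard test vector for the input 'abc'):

```sql
-- sha256() returns bytea; encode(..., 'hex') yields text
SELECT encode(sha256('abc'), 'hex');
-- ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad

-- md5() already returns hex text; decode(..., 'hex') turns it back into bytea
SELECT decode(md5('abc'), 'hex');
```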
The PostgreSQL formatting functions provide a powerful set of tools for converting various data types (date/time, integer, floating point, numeric) to formatted strings and for converting from formatted strings to specific data types. These functions all follow a common calling convention: the first argument is the value to be formatted and the second argument is a template that defines the output or input format.
Note: There is also a single-argument to_timestamp function; see .

Tip: to_timestamp and to_date exist to handle input formats that cannot be converted by simple casting. For most standard date/time formats, simply casting the source string to the required data type works, and is much easier. Similarly, to_number is unnecessary for standard numeric representations.

In a to_char output template string, certain patterns are recognized and replaced with appropriately-formatted data based on the given value. Any text that is not a template pattern is simply copied verbatim. Similarly, in an input template string (for the other functions), template patterns identify the values to be supplied by the input data string. If there are characters in the template string that are not template patterns, the corresponding characters in the input data string are simply skipped over (whether or not they are equal to the template string characters).
shows the template patterns available for formatting date and time values.
Usage notes for date/time formatting:
FM
suppresses leading zeroes and trailing blanks that would otherwise be added to make the output of a pattern be fixed-width. In PostgreSQL, FM
modifies only the next specification, while in Oracle FM
affects all subsequent specifications, and repeated FM
modifiers toggle fill mode on and off.
TM
does not include trailing blanks. to_timestamp
and to_date
ignore the TM
modifier.
to_timestamp
and to_date
skip multiple blank spaces at the beginning of the input string and around date and time values unless the FX
option is used. For example, to_timestamp(' 2000 JUN', 'YYYY MON')
and to_timestamp('2000 - JUN', 'YYYY-MON')
work, but to_timestamp('2000 JUN', 'FXYYYY MON')
returns an error because to_timestamp
expects only a single space. FX
must be specified as the first item in the template.
A separator (a space or non-letter/non-digit character) in the template string of to_timestamp
and to_date
matches any single separator in the input string or is skipped, unless the FX
option is used. For example, to_timestamp('2000JUN', 'YYYY///MON')
and to_timestamp('2000/JUN', 'YYYY MON')
work, but to_timestamp('2000//JUN', 'YYYY/MON')
returns an error because the number of separators in the input string exceeds the number of separators in the template.
If FX
is specified, a separator in the template string matches exactly one character in the input string. But note that the input string character is not required to be the same as the separator from the template string. For example, to_timestamp('2000/JUN', 'FXYYYY MON')
works, but to_timestamp('2000/JUN', 'FXYYYY  MON') (note the two spaces in the template)
returns an error because the second space in the template string consumes the letter J
from the input string.
A TZH
template pattern can match a signed number. Without the FX
option, minus signs may be ambiguous, and could be interpreted as a separator. This ambiguity is resolved as follows: If the number of separators before TZH
in the template string is less than the number of separators before the minus sign in the input string, the minus sign is interpreted as part of TZH
. Otherwise, the minus sign is considered to be a separator between values. For example, to_timestamp('2000 -10', 'YYYY TZH')
matches -10
to TZH
, but to_timestamp('2000 -10', 'YYYY  TZH') (note the two spaces before TZH)
matches 10
to TZH
.
Ordinary text is allowed in to_char
templates and will be output literally. You can put a substring in double quotes to force it to be interpreted as literal text even if it contains template patterns. For example, in '"Hello Year "YYYY'
, the YYYY
will be replaced by the year data, but the single Y
in Year
will not be. In to_date
, to_number
, and to_timestamp
, literal text and double-quoted strings result in skipping the number of characters contained in the string; for example "XX"
skips two input characters (whether or not they are XX
).
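A short sketch of the double-quoting rule; the Y in "Year" is protected by the quotes and is not treated as a template pattern:

```sql
SELECT to_char(date '2000-06-01', '"Hello Year "YYYY');
-- Hello Year 2000
```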
Tip
Prior to PostgreSQL 12, it was possible to skip arbitrary text in the input string using non-letter or non-digit characters. For example, to_timestamp('2000y6m1d', 'yyyy-MM-DD')
used to work. Now you can only use letter characters for this purpose. For example, to_timestamp('2000y6m1d', 'yyyytMMtDDt')
and to_timestamp('2000y6m1d', 'yyyy"y"MM"m"DD"d"')
skip y
, m
, and d
.
If you want to have a double quote in the output you must precede it with a backslash, for example '\"YYYY Month\"'
. Backslashes are not otherwise special outside of double-quoted strings. Within a double-quoted string, a backslash causes the next character to be taken literally, whatever it is (but this has no special effect unless the next character is a double quote or another backslash).
In to_timestamp
and to_date
, if the year format specification is less than four digits, e.g. YYY
, and the supplied year is less than four digits, the year will be adjusted to be nearest to the year 2020, e.g. 95
becomes 1995.
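A sketch of the nearest-to-2020 adjustment for short year fields:

```sql
SELECT to_date('95-06-01', 'YY-MM-DD');  -- 1995-06-01
SELECT to_date('19-06-01', 'YY-MM-DD');  -- 2019-06-01
```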
In to_timestamp
and to_date
, the YYYY
conversion has a restriction when processing years with more than 4 digits. You must use some non-digit character or template after YYYY
, otherwise the year is always interpreted as 4 digits. For example (with the year 20000): to_date('200001131', 'YYYYMMDD')
will be interpreted as a 4-digit year; instead use a non-digit separator after the year, like to_date('20000-1131', 'YYYY-MMDD')
or to_date('20000Nov31', 'YYYYMonDD')
.
In to_timestamp
and to_date
, the CC
(century) field is accepted but ignored if there is a YYY
, YYYY
or Y,YYY
field. If CC
is used with YY
or Y
then the result is computed as that year in the specified century. If the century is specified but the year is not, the first year of the century is assumed.
In to_timestamp
and to_date
, weekday names or numbers (DAY
, D
, and related field types) are accepted but are ignored for purposes of computing the result. The same is true for quarter (Q
) fields.
In to_timestamp
and to_date
, an ISO 8601 week-numbering date (as distinct from a Gregorian date) can be specified in one of two ways:
Year, week number, and weekday: for example to_date('2006-42-4', 'IYYY-IW-ID')
returns the date 2006-10-19
. If you omit the weekday it is assumed to be 1 (Monday).
Year and day of year: for example to_date('2006-291', 'IYYY-IDDD')
also returns 2006-10-19
.
Attempting to enter a date using a mixture of ISO 8601 week-numbering fields and Gregorian date fields is nonsensical, and will cause an error. In the context of an ISO 8601 week-numbering year, the concept of a “month” or “day of month” has no meaning. In the context of a Gregorian year, the ISO week has no meaning.
Caution
In to_timestamp
, millisecond (MS
) or microsecond (US
) fields are used as the seconds digits after the decimal point. For example to_timestamp('12.3', 'SS.MS')
is not 3 milliseconds, but 300, because the conversion treats it as 12 + 0.3 seconds. So, for the format SS.MS
, the input values 12.3
, 12.30
, and 12.300
specify the same number of milliseconds. To get three milliseconds, one must write 12.003
, which the conversion treats as 12 + 0.003 = 12.003 seconds.
Here is a more complex example: to_timestamp('15:12:02.020.001230', 'HH24:MI:SS.MS.US')
is 15 hours, 12 minutes, and 2 seconds + 20 milliseconds + 1230 microseconds = 2.021230 seconds.
to_char(..., 'ID')
's day of the week numbering matches the extract(isodow from ...)
function, but to_char(..., 'D')
's does not match extract(dow from ...)
's day numbering.
to_char(interval)
formats HH
and HH12
as shown on a 12-hour clock, for example zero hours and 36 hours both output as 12
, while HH24
outputs the full hour value, which can exceed 23 in an interval
value.
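A quick illustration of the HH12 versus HH24 behavior for intervals:

```sql
SELECT to_char(interval '0 hours', 'HH12:MI:SS');   -- 12:00:00
SELECT to_char(interval '36 hours', 'HH24:MI:SS');  -- 36:00:00
```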
Usage notes for numeric formatting:
0
specifies a digit position that will always be printed, even if it contains a leading/trailing zero. 9
also specifies a digit position, but if it is a leading zero then it will be replaced by a space, while if it is a trailing zero and fill mode is specified then it will be deleted. (For to_number()
, these two pattern characters are equivalent.)
If no explicit provision is made for a sign in to_char()
's pattern, one column will be reserved for the sign, and it will be anchored to (appear just left of) the number. If S
appears just left of some 9
's, it will likewise be anchored to the number.
A sign formatted using SG
, PL
, or MI
is not anchored to the number; for example, to_char(-12, 'MI9999')
produces '- 12'
but to_char(-12, 'S9999')
produces ' -12'
. (The Oracle implementation does not allow the use of MI
before 9
, but rather requires that 9
precede MI
.)
TH
does not convert values less than zero and does not convert fractional numbers.
PL
, SG
, and TH
are PostgreSQL extensions.
In to_number
, if non-data template patterns such as L
or TH
are used, the corresponding number of input characters are skipped, whether or not they match the template pattern, unless they are data characters (that is, digits, sign, decimal point, or comma). For example, TH
would skip two non-data characters.
V
with to_char
multiplies the input values by 10^
n
, where n
is the number of digits following V
. V
with to_number
divides in a similar manner. to_char
and to_number
do not support the use of V
combined with a decimal point (e.g., 99.9V99
is not allowed).
EEEE
(scientific notation) cannot be used in combination with any of the other formatting patterns or modifiers other than digit and decimal point patterns, and must be at the end of the format string (e.g., 9.99EEEE
is a valid pattern).
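For instance, scientific notation output with EEEE at the end of the format:

```sql
-- EEEE may combine only with digit and decimal-point patterns
SELECT to_char(0.0004859, '9.99EEEE');  -- ' 4.86e-04'
```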
See also the aggregate function string_agg
in and the large object functions in .
Modifiers can be applied to any template pattern to alter its behavior. For example, FMMonth
is the Month
pattern with the FM
modifier. shows the modifier patterns for date/time formatting.
While to_date
will reject a mixture of Gregorian and ISO week-numbering date fields, to_char
will not, since output format specifications like YYYY-MM-DD (IYYY-IDDD)
can be useful. But avoid writing something like IYYY-MM-DD
; that would yield surprising results near the start of the year. (See for more information.)
shows the template patterns available for formatting numeric values.
The pattern characters S
, L
, D
, and G
represent the sign, currency symbol, decimal point, and thousands separator characters defined by the current locale (see and ). The pattern characters period and comma represent those exact characters, with the meanings of decimal point and thousands separator, regardless of locale.
Certain modifiers can be applied to any template pattern to alter its behavior. For example, FM99.99
is the 99.99
pattern with the FM
modifier. shows the modifier patterns for numeric formatting.
shows some examples of the use of the to_char
function.
| Expression | Result |
|---|---|
| `to_char(current_timestamp, 'Day, DD  HH12:MI:SS')` | `'Tuesday  , 06  05:39:18'` |
| `to_char(current_timestamp, 'FMDay, FMDD  HH12:MI:SS')` | `'Tuesday, 6  05:39:18'` |
| `to_char(-0.1, '99.99')` | `'  -.10'` |
| `to_char(-0.1, 'FM9.99')` | `'-.1'` |
| `to_char(0.1, '0.9')` | `' 0.1'` |
| `to_char(12, '9990999.9')` | `'    0012.0'` |
| `to_char(12, 'FM9990999.9')` | `'0012.'` |
| `to_char(485, '999')` | `' 485'` |
| `to_char(-485, '999')` | `'-485'` |
| `to_char(485, '9 9 9')` | `' 4 8 5'` |
| `to_char(1485, '9,999')` | `' 1,485'` |
| `to_char(148.5, '999.999')` | `' 148.500'` |
| `to_char(148.5, 'FM999.999')` | `'148.5'` |
| `to_char(148.5, 'FM999.990')` | `'148.500'` |
| `to_char(-485, '999S')` | `'485-'` |
| `to_char(-485, '999MI')` | `'485-'` |
| `to_char(485, '999MI')` | `'485 '` |
| `to_char(485, 'FM999MI')` | `'485'` |
| `to_char(485, 'PL999')` | `'+485'` |
| `to_char(485, 'SG999')` | `'+485'` |
| `to_char(-485, 'SG999')` | `'-485'` |
| `to_char(-485, '9SG99')` | `'4-85'` |
| `to_char(-485, '999PR')` | `'<485>'` |
| `to_char(485, 'RN')` | `'        CDLXXXV'` |
| `to_char(485, 'FMRN')` | `'CDLXXXV'` |
| `to_char(5.2, 'FMRN')` | `'V'` |
| `to_char(482, '999th')` | `' 482nd'` |
| `to_char(485, '"Good number:"999')` | `'Good number: 485'` |
| `to_char(485.8, '"Pre:"999" Post:" .999')` | `'Pre: 485 Post: .800'` |
| `to_char(12, '99V999')` | `' 12000'` |
| `to_char(12.4, '99V999')` | `' 12400'` |
| `to_char(12.45, '99V9')` | `' 125'` |
| `to_char(0.0004859, '9.99EEEE')` | `' 4.86e-04'` |
| Operator | Description | Example |
|---|---|---|
| `~` | Matches regular expression, case sensitive | `'thomas' ~ '.*thomas.*'` |
| `~*` | Matches regular expression, case insensitive | `'thomas' ~* '.*Thomas.*'` |
| `!~` | Does not match regular expression, case sensitive | `'thomas' !~ '.*Thomas.*'` |
| `!~*` | Does not match regular expression, case insensitive | `'thomas' !~* '.*vadim.*'` |
| Atom | Description |
|---|---|
| `(re)` | (where `re` is any regular expression) matches a match for `re`, with the match noted for possible reporting |
| `(?:re)` | as above, but the match is not noted for reporting (a "non-capturing" set of parentheses) (AREs only) |
| `.` | matches any single character |
| `[chars]` | a bracket expression, matching any one of the `chars` (see Section 9.7.3.2 for more detail) |
| `\k` | (where `k` is a non-alphanumeric character) matches that character taken as an ordinary character, e.g., `\\` matches a backslash character |
| `\c` | where `c` is alphanumeric (possibly followed by other characters) is an escape, see Section 9.7.3.3 (AREs only; in EREs and BREs, this matches `c`) |
| `{` | when followed by a character other than a digit, matches the left-brace character `{`; when followed by a digit, it is the beginning of a bound (see below) |
| `x` | where `x` is a single character with no other significance, matches that character |
| Quantifier | Matches |
|---|---|
| `*` | a sequence of 0 or more matches of the atom |
| `+` | a sequence of 1 or more matches of the atom |
| `?` | a sequence of 0 or 1 matches of the atom |
| `{m}` | a sequence of exactly `m` matches of the atom |
| `{m,}` | a sequence of `m` or more matches of the atom |
| `{m,n}` | a sequence of `m` through `n` (inclusive) matches of the atom; `m` cannot exceed `n` |
| `*?` | non-greedy version of `*` |
| `+?` | non-greedy version of `+` |
| `??` | non-greedy version of `?` |
| `{m}?` | non-greedy version of `{m}` |
| `{m,}?` | non-greedy version of `{m,}` |
| `{m,n}?` | non-greedy version of `{m,n}` |
| Constraint | Description |
|---|---|
| `^` | matches at the beginning of the string |
| `$` | matches at the end of the string |
| `(?=re)` | positive lookahead matches at any point where a substring matching `re` begins (AREs only) |
| `(?!re)` | negative lookahead matches at any point where no substring matching `re` begins (AREs only) |
| `(?<=re)` | positive lookbehind matches at any point where a substring matching `re` ends (AREs only) |
| `(?<!re)` | negative lookbehind matches at any point where no substring matching `re` ends (AREs only) |
| Escape | Description |
|---|---|
| `\a` | alert (bell) character, as in C |
| `\b` | backspace, as in C |
| `\B` | synonym for backslash (`\`) to help reduce the need for backslash doubling |
| `\cX` | (where `X` is any character) the character whose low-order 5 bits are the same as those of `X`, and whose other bits are all zero |
| `\e` | the character whose collating-sequence name is `ESC`, or failing that, the character with octal value `033` |
| `\f` | form feed, as in C |
| `\n` | newline, as in C |
| `\r` | carriage return, as in C |
| `\t` | horizontal tab, as in C |
| `\uwxyz` | (where `wxyz` is exactly four hexadecimal digits) the character whose hexadecimal value is `0xwxyz` |
| `\Ustuvwxyz` | (where `stuvwxyz` is exactly eight hexadecimal digits) the character whose hexadecimal value is `0xstuvwxyz` |
| `\v` | vertical tab, as in C |
| `\xhhh` | (where `hhh` is any sequence of hexadecimal digits) the character whose hexadecimal value is `0xhhh` (a single character no matter how many hexadecimal digits are used) |
| `\0` | the character whose value is `0` (the null byte) |
| `\xy` | (where `xy` is exactly two octal digits, and is not a back reference) the character whose octal value is `0xy` |
| `\xyz` | (where `xyz` is exactly three octal digits, and is not a back reference) the character whose octal value is `0xyz` |
| Escape | Description |
|---|---|
| `\d` | `[[:digit:]]` |
| `\s` | `[[:space:]]` |
| `\w` | `[[:alnum:]_]` (note underscore is included) |
| `\D` | `[^[:digit:]]` |
| `\S` | `[^[:space:]]` |
| `\W` | `[^[:alnum:]_]` (note underscore is included) |
| Escape | Description |
|---|---|
| `\A` | matches only at the beginning of the string (see Section 9.7.3.5 for how this differs from `^`) |
| `\m` | matches only at the beginning of a word |
| `\M` | matches only at the end of a word |
| `\y` | matches only at the beginning or end of a word |
| `\Y` | matches only at a point that is not the beginning or end of a word |
| `\Z` | matches only at the end of the string (see Section 9.7.3.5 for how this differs from `$`) |
| Escape | Description |
|---|---|
| `\m` | (where `m` is a nonzero digit) a back reference to the `m`'th subexpression |
| `\mnn` | (where `m` is a nonzero digit, and `nn` is some more digits, and the decimal value `mnn` is not greater than the number of closing capturing parentheses seen so far) a back reference to the `mnn`'th subexpression |
| Option | Description |
|---|---|
| `b` | rest of RE is a BRE |
| `c` | case-sensitive matching (overrides operator type) |
| `e` | rest of RE is an ERE |
| `i` | case-insensitive matching (see Section 9.7.3.5) (overrides operator type) |
| `m` | historical synonym for `n` |
| `n` | newline-sensitive matching (see Section 9.7.3.5) |
| `p` | partial newline-sensitive matching (see Section 9.7.3.5) |
| `q` | rest of RE is a literal ("quoted") string, all ordinary characters |
| `s` | non-newline-sensitive matching (default) |
| `t` | tight syntax (default; see below) |
| `w` | inverse partial newline-sensitive ("weird") matching (see Section 9.7.3.5) |
| `x` | expanded syntax (see below) |
| Pattern | Description |
|---|---|
| `9` | digit position (can be dropped if insignificant) |
| `0` | digit position (will not be dropped, even if insignificant) |
| `.` (period) | decimal point |
| `,` (comma) | group (thousands) separator |
| `PR` | negative value in angle brackets |
| `S` | sign anchored to number (uses locale) |
| `L` | currency symbol (uses locale) |
| `D` | decimal point (uses locale) |
| `G` | group separator (uses locale) |
| `MI` | minus sign in specified position (if number < 0) |
| `PL` | plus sign in specified position (if number > 0) |
| `SG` | plus/minus sign in specified position |
| `RN` | Roman numeral (input between 1 and 3999) |
| `TH` or `th` | ordinal number suffix |
| `V` | shift specified number of digits (see notes) |
| `EEEE` | exponent for scientific notation |
| Modifier | Description | Example |
|---|---|---|
| `FM` prefix | fill mode (suppress trailing zeroes and padding blanks) | `FM99.99` |
| `TH` suffix | upper case ordinal number suffix | `999TH` |
| `th` suffix | lower case ordinal number suffix | `999th` |
| Operator | Description | Example | Result |
|---|---|---|---|
| `\|\|` | concatenation | `B'10001' \|\| B'011'` | `10001011` |
| `&` | bitwise AND | `B'10001' & B'01101'` | `00001` |
| `\|` | bitwise OR | `B'10001' \| B'01101'` | `11101` |
| `#` | bitwise XOR | `B'10001' # B'01101'` | `11100` |
| `~` | bitwise NOT | `~ B'10001'` | `01110` |
| `<<` | bitwise shift left | `B'10001' << 3` | `01000` |
| `>>` | bitwise shift right | `B'10001' >> 2` | `00100` |
| Function | Return Type | Description | Example | Result |
|---|---|---|---|---|
| `string \|\| string` | `bytea` | String concatenation | `'\\Post'::bytea \|\| '\047gres\000'::bytea` | `\\Post'gres\000` |
| `octet_length(string)` | `int` | Number of bytes in binary string | `octet_length('jo\000se'::bytea)` | `5` |
| `overlay(string placing string from int [for int])` | `bytea` | Replace substring | `overlay('Th\000omas'::bytea placing '\002\003'::bytea from 2 for 3)` | `T\002\003mas` |
| `position(substring in string)` | `int` | Location of specified substring | `position('\000om'::bytea in 'Th\000omas'::bytea)` | `3` |
| `substring(string [from int] [for int])` | `bytea` | Extract substring | `substring('Th\000omas'::bytea from 2 for 3)` | `h\000o` |
| `trim([both] bytes from string)` | `bytea` | Remove the longest string containing only bytes appearing in `bytes` from the start and end of `string` | `trim('\000\001'::bytea from '\000Tom\001'::bytea)` | `Tom` |
| Function | Return Type | Description | Example | Result |
|---|---|---|---|---|
| `btrim(string bytea, bytes bytea)` | `bytea` | Remove the longest string containing only bytes appearing in `bytes` from the start and end of `string` | `btrim('\000trim\001'::bytea, '\000\001'::bytea)` | `trim` |
| `decode(string text, format text)` | `bytea` | Decode binary data from textual representation in `string`; options for `format` are the same as in `encode` | `decode('123\000456', 'escape')` | `123\000456` |
| `encode(data bytea, format text)` | `text` | Encode binary data into a textual representation. Supported formats are: `base64`, `hex`, `escape` | `encode('123\000456'::bytea, 'escape')` | `123\000456` |
| `get_bit(string, offset)` | `int` | Extract bit from string | `get_bit('Th\000omas'::bytea, 45)` | `1` |
| `get_byte(string, offset)` | `int` | Extract byte from string | `get_byte('Th\000omas'::bytea, 4)` | `109` |
| `length(string)` | `int` | Length of binary string | `length('jo\000se'::bytea)` | `5` |
| `md5(string)` | `text` | Calculates the MD5 hash of `string`, returning the result in hexadecimal | `md5('Th\000omas'::bytea)` | `8ab2d3c9689aaf18b4958c334c82d8b1` |
| `set_bit(string, offset, newvalue)` | `bytea` | Set bit in string | `set_bit('Th\000omas'::bytea, 45, 0)` | `Th\000omAs` |
| `set_byte(string, offset, newvalue)` | `bytea` | Set byte in string | `set_byte('Th\000omas'::bytea, 4, 64)` | `Th\000o@as` |
| `sha224(bytea)` | `bytea` | SHA-224 hash | `sha224('abc')` | `\x23097d223405d8228642a477bda255b32aadbce4bda0b3f7e36c9da7` |
| `sha256(bytea)` | `bytea` | SHA-256 hash | `sha256('abc')` | `\xba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad` |
| `sha384(bytea)` | `bytea` | SHA-384 hash | `sha384('abc')` | `\xcb00753f45a35e8bb5a03d699ac65007272c32ab0eded1631a8b605a43ff5bed8086072ba1e7cc2358baeca134c825a7` |
| `sha512(bytea)` | `bytea` | SHA-512 hash | `sha512('abc')` | `\xddaf35a193617abacc417349ae20413112e6fa4e89a97ea20a9eeee64b55d39a2192992a274fc1a836ba3c23a3feebbd454d4423643ce80e2a9ac94fa54ca49f` |
| Function | Return Type | Description | Example |
|---|---|---|---|
| `to_char(timestamp, text)` | `text` | convert time stamp to string | `to_char(current_timestamp, 'HH12:MI:SS')` |
| `to_char(interval, text)` | `text` | convert interval to string | `to_char(interval '15h 2m 12s', 'HH24:MI:SS')` |
| `to_char(int, text)` | `text` | convert integer to string | `to_char(125, '999')` |
| `to_char(double precision, text)` | `text` | convert real/double precision to string | `to_char(125.8::real, '999D9')` |
| `to_char(numeric, text)` | `text` | convert numeric to string | `to_char(-125.8, '999D99S')` |
| `to_date(text, text)` | `date` | convert string to date | `to_date('05 Dec 2000', 'DD Mon YYYY')` |
| `to_number(text, text)` | `numeric` | convert string to numeric | `to_number('12,454.8-', '99G999D9S')` |
| `to_timestamp(text, text)` | `timestamp with time zone` | convert string to time stamp | `to_timestamp('05 Dec 2000', 'DD Mon YYYY')` |
| Pattern | Description |
|---|---|
| `HH` | hour of day (01-12) |
| `HH12` | hour of day (01-12) |
| `HH24` | hour of day (00-23) |
| `MI` | minute (00-59) |
| `SS` | second (00-59) |
| `MS` | millisecond (000-999) |
| `US` | microsecond (000000-999999) |
| `SSSS` | seconds past midnight (0-86399) |
| `AM`, `am`, `PM` or `pm` | meridiem indicator (without periods) |
| `A.M.`, `a.m.`, `P.M.` or `p.m.` | meridiem indicator (with periods) |
| `Y,YYY` | year (4 or more digits) with comma |
| `YYYY` | year (4 or more digits) |
| `YYY` | last 3 digits of year |
| `YY` | last 2 digits of year |
| `Y` | last digit of year |
| `IYYY` | ISO 8601 week-numbering year (4 or more digits) |
| `IYY` | last 3 digits of ISO 8601 week-numbering year |
| `IY` | last 2 digits of ISO 8601 week-numbering year |
| `I` | last digit of ISO 8601 week-numbering year |
| `BC`, `bc`, `AD` or `ad` | era indicator (without periods) |
| `B.C.`, `b.c.`, `A.D.` or `a.d.` | era indicator (with periods) |
| `MONTH` | full upper case month name (blank-padded to 9 chars) |
| `Month` | full capitalized month name (blank-padded to 9 chars) |
| `month` | full lower case month name (blank-padded to 9 chars) |
| `MON` | abbreviated upper case month name (3 chars in English, localized lengths vary) |
| `Mon` | abbreviated capitalized month name (3 chars in English, localized lengths vary) |
| `mon` | abbreviated lower case month name (3 chars in English, localized lengths vary) |
| `MM` | month number (01-12) |
| `DAY` | full upper case day name (blank-padded to 9 chars) |
| `Day` | full capitalized day name (blank-padded to 9 chars) |
| `day` | full lower case day name (blank-padded to 9 chars) |
| `DY` | abbreviated upper case day name (3 chars in English, localized lengths vary) |
| `Dy` | abbreviated capitalized day name (3 chars in English, localized lengths vary) |
| `dy` | abbreviated lower case day name (3 chars in English, localized lengths vary) |
| `DDD` | day of year (001-366) |
| `IDDD` | day of ISO 8601 week-numbering year (001-371; day 1 of the year is Monday of the first ISO week) |
| `DD` | day of month (01-31) |
| `D` | day of the week, Sunday (`1`) to Saturday (`7`) |
| `ID` | ISO 8601 day of the week, Monday (`1`) to Sunday (`7`) |
| `W` | week of month (1-5) (the first week starts on the first day of the month) |
| `WW` | week number of year (1-53) (the first week starts on the first day of the year) |
| `IW` | week number of ISO 8601 week-numbering year (01-53; the first Thursday of the year is in week 1) |
| `CC` | century (2 digits) (the twenty-first century starts on 2001-01-01) |
| `J` | Julian Day (integer days since November 24, 4714 BC at midnight UTC) |
| `Q` | quarter |
| `RM` | month in upper case Roman numerals (I-XII; I=January) |
| `rm` | month in lower case Roman numerals (i-xii; i=January) |
| `TZ` | upper case time-zone abbreviation (only supported in `to_char`) |
| `tz` | lower case time-zone abbreviation (only supported in `to_char`) |
| `TZH` | time-zone hours |
| `TZM` | time-zone minutes |
| `OF` | time-zone offset from UTC (only supported in `to_char`) |
The functions and function-like expressions described in this section operate on values of type xml. See Section 8.13 for information about the xml type. The function-like expressions xmlparse and xmlserialize for converting to and from type xml are documented there and not repeated here. Use of most of these functions requires the installation to have been built with configure --with-libxml.
A set of functions and function-like expressions is available for producing XML content from SQL data. As such, they are particularly suitable for formatting query results into XML documents for processing in client applications.
The function xmlcomment creates an XML value containing an XML comment with the specified text as content. The text cannot contain "--" or end with a "-", so that the resulting construct is a valid XML comment. If the argument is null, the result is null.

Example:
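A minimal illustration of xmlcomment:

```sql
SELECT xmlcomment('hello');
-- <!--hello-->
```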
The function xmlconcat concatenates a list of individual XML values to create a single value containing an XML content fragment. Null values are omitted; the result is only null if there are no nonnull arguments.

Example:
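A minimal illustration of xmlconcat:

```sql
SELECT xmlconcat('<abc/>', '<bar>foo</bar>');
-- <abc/><bar>foo</bar>
```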
XML declarations, if present, are combined as follows. If all argument values have the same XML version declaration, that version is used in the result; otherwise no version is used. If all argument values have the standalone declaration value "yes", then that value is used in the result. If all argument values have a standalone declaration value and at least one is "no", then that is used in the result. Otherwise the result will have no standalone declaration. If the result is determined to require a standalone declaration but no version declaration, a version declaration with version 1.0 will be used, because XML requires an XML declaration to contain a version declaration. Encoding declarations are ignored and removed in all cases.

Example:
The xmlelement expression produces an XML element with the given name, attributes, and content.

Examples:
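A few short illustrations of xmlelement (the date in the last result depends on the day the query runs):

```sql
SELECT xmlelement(name foo);
-- <foo/>

SELECT xmlelement(name foo, xmlattributes('xyz' AS bar));
-- <foo bar="xyz"/>

SELECT xmlelement(name foo, xmlattributes(current_date AS bar), 'cont', 'ent');
-- e.g., <foo bar="2007-01-26">content</foo>
```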
Element and attribute names that are not valid XML names are escaped by replacing the offending characters with the sequence _xHHHH_, where HHHH is the character's Unicode code point in hexadecimal notation. For example:

An explicit attribute name need not be specified if the attribute value is a column reference, in which case the column's name will be used as the attribute name by default. In other cases, the attribute must be given an explicit name. So this example is valid:

But these are not:

Element content, if specified, will be formatted according to its data type. If the content is itself of type xml, complex XML documents can be constructed. For example:

Content of other types will be formatted into valid XML character data. This means in particular that the characters <, >, and & will be converted to their entity forms. Binary data (data type bytea) will be represented in base64 or hex encoding, depending on the setting of the configuration parameter xmlbinary. In order to keep the SQL and PostgreSQL data types aligned with the XML Schema specification, the particular behavior for individual data types is expected to evolve, at which point a more precise description will appear.
The xmlforest expression produces an XML forest (sequence) of elements using the given names and content.

Examples:
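A minimal illustration of xmlforest:

```sql
SELECT xmlforest('abc' AS foo, 123 AS bar);
-- <foo>abc</foo><bar>123</bar>
```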
As shown in the second example, the element name can be omitted if the content value is a column reference, in which case the column name is used by default. Otherwise, a name must be specified.

Element names that are not valid XML names are escaped as shown for xmlelement above. Similarly, content data is escaped to make valid XML content, unless it is already of type xml.

Note that XML forests are not valid XML documents if they consist of more than one element, so it might be useful to wrap xmlforest expressions in xmlelement.
The xmlpi expression creates an XML processing instruction. The content, if present, must not contain the character sequence ?>.

Example:
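A minimal illustration of xmlpi:

```sql
SELECT xmlpi(name php, 'echo "hello world";');
-- <?php echo "hello world";?>
```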
The xmlroot expression alters the properties of the root node of an XML value. If a version is specified, it replaces the value in the root node's version declaration; if a standalone setting is specified, it replaces the value in the root node's standalone declaration.

Unlike the other functions described here, the function xmlagg is an aggregate function. It concatenates the input values to the aggregate function call, much like xmlconcat does, except that concatenation occurs across rows rather than across expressions in a single row. See Section 9.20 for additional information about aggregate functions.
Example:

To determine the order of the concatenation, an ORDER BY clause can be added to the aggregate call, as described in Section 4.2.7. For example:
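A short sketch of xmlagg with an ORDER BY clause, using a throwaway test table:

```sql
CREATE TABLE test (y int, x xml);
INSERT INTO test VALUES (1, '<foo>abc</foo>');
INSERT INTO test VALUES (2, '<bar/>');

SELECT xmlagg(x ORDER BY y DESC) FROM test;
-- <bar/><foo>abc</foo>
```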
The following non-standard approach used to be recommended in previous versions, and may still be useful in specific cases:
The expressions described in this section check properties of xml values.

The expression IS DOCUMENT returns true if the argument XML value is a proper XML document, false if it is not (that is, it is a content fragment), or null if the argument is null. See Section 8.13 about the difference between documents and content fragments.

The xmlexists function returns true if the XPath expression in the first argument returns any nodes, and false otherwise. (If either argument is null, the result is null.)

Example:

The BY REF clauses have no effect in PostgreSQL, but are accepted for SQL conformance and compatibility with other implementations. Per the SQL standard, the first BY REF is required, the second is optional. Also note that the SQL standard specifies the xmlexists construct to take an XQuery expression as first argument, but PostgreSQL currently only supports XPath, which is a subset of XQuery.
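A minimal illustration of xmlexists:

```sql
SELECT xmlexists('//town[text() = ''Toronto'']'
                 PASSING BY REF '<towns><town>Toronto</town><town>Ottawa</town></towns>');
-- t
```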
These functions check whether a text string represents well-formed XML, returning a Boolean result. xml_is_well_formed_document checks for a well-formed document, while xml_is_well_formed_content checks for well-formed content. xml_is_well_formed does the former if the xmloption configuration parameter is set to DOCUMENT, or the latter if it is set to CONTENT. This means that xml_is_well_formed is useful for seeing whether a simple cast to type xml will succeed, whereas the other two functions are useful for seeing whether the corresponding variants of XMLPARSE will succeed.

Examples:

The last example shows that the checks include whether namespaces are correctly matched.
To process values of data type xml, PostgreSQL offers the functions xpath and xpath_exists, which evaluate XPath 1.0 expressions, and the XMLTABLE table function.

The function xpath evaluates the XPath expression xpath (given as text) against the XML value xml. It returns an array of XML values corresponding to the node set produced by the XPath expression. If the XPath expression returns a scalar value rather than a node set, a single-element array is returned.

The second argument must be a well-formed XML document. In particular, it must have a single root node element.

The optional third argument of the function is an array of namespace mappings. This array should be a two-dimensional text array with the length of the second axis being equal to 2 (i.e., it should be an array of arrays, each of which consists of exactly 2 elements). The first element of each array entry is the namespace name (alias), the second the namespace URI. It is not required that aliases provided in this array be the same as those being used in the XML document itself (in other words, both in the XML document and in the xpath function context, aliases are local).

Example:

To deal with default (anonymous) namespaces, do something like this:
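Two short illustrations of xpath with namespace mappings; in the second, the alias mydefns is an arbitrary local name bound to the document's default namespace URI:

```sql
SELECT xpath('/my:a/text()',
             '<my:a xmlns:my="http://example.com">test</my:a>',
             ARRAY[ARRAY['my', 'http://example.com']]);
-- {test}

-- Default (anonymous) namespace: bind any alias to the default namespace URI
SELECT xpath('//mydefns:b/text()',
             '<a xmlns="http://example.com"><b>test</b></a>',
             ARRAY[ARRAY['mydefns', 'http://example.com']]);
-- {test}
```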
The function xpath_exists
is a specialized form of the xpath
function. Instead of returning the individual XML values that satisfy the XPath, this function returns a Boolean indicating whether the query was satisfied or not. This function is equivalent to the standard XMLEXISTS
predicate, except that it also offers support for a namespace mapping argument.
Example:
The xmltable
function produces a table based on the given XML value, an XPath filter to extract rows, and an optional set of column definitions.
The optional XMLNAMESPACES
clause is a comma-separated list of namespaces. It specifies the XML namespaces used in the document and their aliases. A default namespace specification is not currently supported.
The required row_expression
argument is an XPath expression that is evaluated against the supplied XML document to obtain an ordered sequence of XML nodes. This sequence is what xmltable
transforms into output rows.
document_expression
provides the XML document to operate on. The BY REF
clauses have no effect in PostgreSQL, but are allowed for SQL conformance and compatibility with other implementations. The argument must be a well-formed XML document; fragments/forests are not accepted.
The mandatory COLUMNS
clause specifies the list of columns in the output table. If the COLUMNS
clause is omitted, the rows in the result set contain a single column of type xml
containing the data matched by row_expression
. If COLUMNS
is specified, each entry describes a single column. See the syntax summary above for the format. The column name and type are required; the path, default and nullability clauses are optional.
A column marked FOR ORDINALITY
will be populated with row numbers matching the order in which the output rows appeared in the original input XML document. At most one column may be marked FOR ORDINALITY
.
The column_expression
for a column is an XPath expression that is evaluated for each row, relative to the result of the row_expression
, to find the value of the column. If no column_expression
is given, then the column name is used as an implicit path.
If a column's XPath expression returns multiple elements, an error is raised. If the expression matches an empty tag, the result is an empty string (not NULL
). Any xsi:nil
attributes are ignored.
The text body of the XML matched by the column_expression
is used as the column value. Multiple text()
nodes within an element are concatenated in order. Any child elements, processing instructions, and comments are ignored, but the text contents of child elements are concatenated to the result. Note that the whitespace-only text()
node between two non-text elements is preserved, and that leading whitespace on a text()
node is not flattened.
If the path expression does not match for a given row but default_expression
is specified, the value resulting from evaluating that expression is used. If no DEFAULT
clause is given for the column, the field will be set to NULL
. It is possible for a default_expression
to reference the value of output columns that appear prior to it in the column list, so the default of one column may be based on the value of another column.
Columns may be marked NOT NULL
. If the column_expression
for a NOT NULL
column does not match anything and there is no DEFAULT
or the default_expression
also evaluates to null, an error is reported.
Unlike regular PostgreSQL functions, column_expression
and default_expression
are not evaluated to a simple value before calling the function. column_expression
is normally evaluated exactly once per input row, and default_expression
is evaluated each time a default is needed for a field. If the expression qualifies as stable or immutable the repeat evaluation may be skipped. Effectively xmltable
behaves more like a subquery than a function call. This means that you can usefully use volatile functions like nextval
in default_expression
, and column_expression
may depend on other parts of the XML document.
Examples:
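A self-contained sketch of basic xmltable usage with an inline document and made-up data, showing an attribute path, a FOR ORDINALITY column, and an explicit element path:

```sql
SELECT xt.*
  FROM XMLTABLE('//ROWS/ROW'
                PASSING '<ROWS>
                           <ROW id="1"><COUNTRY_NAME>Australia</COUNTRY_NAME></ROW>
                           <ROW id="2"><COUNTRY_NAME>Japan</COUNTRY_NAME></ROW>
                         </ROWS>'
                COLUMNS id int PATH '@id',
                        ordinality FOR ORDINALITY,
                        country_name text PATH 'COUNTRY_NAME') AS xt;
-- id | ordinality | country_name
--  1 |          1 | Australia
--  2 |          2 | Japan
```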
The following example shows concatenation of multiple text() nodes, usage of the column name as XPath filter, and the treatment of whitespace, XML comments and processing instructions:
The following example illustrates how the XMLNAMESPACES
clause can be used to specify the default namespace, and a list of additional namespaces used in the XML document as well as in the XPath expressions:
The following functions map the contents of relational tables to XML values. They can be thought of as XML export functionality:
The return type of each function is xml
.
table_to_xml
maps the content of the named table, passed as parameter tbl
. The regclass
type accepts strings identifying tables using the usual notation, including optional schema qualifications and double quotes. query_to_xml
executes the query whose text is passed as parameter query
and maps the result set. cursor_to_xml
fetches the indicated number of rows from the cursor specified by the parameter cursor
. This variant is recommended if large tables have to be mapped, because the result value is built up in memory by each function.
If tableforest
is false, then the resulting XML document looks like this:
If tableforest
is true, the result is an XML content fragment that looks like this:
If no table name is available, that is, when mapping a query or a cursor, the string table
is used in the first format, row
in the second format.
The choice between these formats is up to the user. The first format is a proper XML document, which will be important in many applications. The second format tends to be more useful in the cursor_to_xml
function if the result values are to be reassembled into one document later on. The functions for producing XML content discussed above, in particular xmlelement
, can be used to alter the results to taste.
The data values are mapped in the same way as described for the function xmlelement
above.
The parameter nulls
determines whether null values should be included in the output. If true, null values in columns are represented as:
where xsi
is the XML namespace prefix for XML Schema Instance. An appropriate namespace declaration will be added to the result value. If false, columns containing null values are simply omitted from the output.
The parameter targetns
specifies the desired XML namespace of the result. If no particular namespace is wanted, an empty string should be passed.
The following functions return XML Schema documents describing the mappings performed by the corresponding functions above:
It is essential that the same parameters are passed in order to obtain matching XML data mappings and XML Schema documents.
The following functions produce XML data mappings and the corresponding XML Schema in one document (or forest), linked together. They can be useful where self-contained and self-describing results are wanted:
In addition, the following functions are available to produce analogous mappings of entire schemas or the entire current database:
Note that these potentially produce a lot of data, which needs to be built up in memory. When requesting content mappings of large schemas or databases, it might be worthwhile to consider mapping the tables separately instead, possibly even through a cursor.
The result of a schema content mapping looks like this:
where the format of a table mapping depends on the tableforest
parameter as explained above.
The result of a database content mapping looks like this:
where the schema mapping is as above.
As an example of using the output produced by these functions, Figure 9.1 shows an XSLT stylesheet that converts the output of table_to_xml_and_xmlschema
to an HTML document containing a tabular rendition of the table data. In a similar manner, the results from these functions can be converted into other XML-based formats.
| Modifier | Description |
|---|---|
| FM prefix | fill mode (suppress leading zeroes and padding blanks) |
| TH suffix | upper case ordinal number suffix |
| th suffix | lower case ordinal number suffix |
| FX prefix | fixed format global option (see usage notes) |
| SP suffix | spell mode (not implemented) |
This section describes PostgreSQL's mathematical operators and functions. For types without common mathematical conventions (e.g., date/time types), the actual behavior is described in the relevant later sections.
Table 9.4 shows the available mathematical operators.
Operator | Description | Example | Result |
---|---|---|---|
The bitwise operators work only on integral data types, and are also available for the bit string types bit and bit varying, as shown in Table 9.14.
Table 9.5 shows the available mathematical functions. In the table, dp indicates double precision. Many of these functions are provided in multiple forms with different argument types. Except where noted, any given form of a function returns the same data type as its argument. The functions working with double precision data are mostly implemented on top of the host system's C library; accuracy and behavior in boundary cases can therefore vary depending on the host system.
Table 9.6 shows functions for generating random numbers.
The random()
function uses a simple linear congruential algorithm. It is fast but not suitable for cryptographic applications; see the pgcrypto module for a more secure alternative. If setseed()
is called, the results of subsequent random()
calls in the current session are repeatable by re-issuing setseed()
with the same argument.
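For example, seeding the generator makes subsequent random() results repeatable within the session:

```sql
SELECT setseed(0.5);   -- seed must be between -1.0 and 1.0
SELECT random();       -- some value in [0, 1)
SELECT setseed(0.5);   -- re-seed with the same argument
SELECT random();       -- same value as the previous random() call
```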
Table 9.7 shows the available trigonometric functions. All these functions take arguments and return values of type double precision
. Each of the trigonometric functions comes in two variants, one that measures angles in radians and one that measures angles in degrees.
Another way to work with angles measured in degrees is to use the unit transformation functions radians()
and degrees()
shown earlier. However, using the degree-based trigonometric functions is preferred, as that way avoids round-off error for special cases such as sind(30)
.
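For instance, the degree-based variant returns an exact result where the radian-based computation accumulates round-off:

```sql
SELECT sind(30);          -- exactly 0.5
SELECT sin(radians(30));  -- slightly off 0.5 on many platforms
```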
Table 9.8 shows the available hyperbolic functions. All these functions take arguments and return values of type double precision
.
The usual comparison operators are available, as shown in Table 9.1.
Operator | Description |
---|---|
The !=
operator is converted to <>
in the parser stage. It is not possible to implement !=
and <>
operators that do different things.
Comparison operators are available for all relevant data types. All comparison operators are binary operators that return values of type boolean
; expressions like 1 < 2 < 3
are not valid (because there is no <
operator to compare a Boolean value with 3
).
There are also some comparison predicates, as shown in Table 9.2. These behave much like operators, but have special syntax mandated by the SQL standard.
Predicate | Description |
---|---|
The BETWEEN
predicate simplifies range tests:
is equivalent to
Notice that BETWEEN
treats the endpoint values as included in the range. NOT BETWEEN
does the opposite comparison:
is equivalent to
BETWEEN SYMMETRIC
is like BETWEEN
except there is no requirement that the argument to the left of AND
be less than or equal to the argument on the right. If it is not, those two arguments are automatically swapped, so that a nonempty range is always implied.
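A few illustrative comparisons:

```sql
SELECT 2 BETWEEN 1 AND 3;            -- true (endpoints are included)
SELECT 2 BETWEEN 3 AND 1;            -- false (empty range)
SELECT 2 BETWEEN SYMMETRIC 3 AND 1;  -- true (bounds are swapped)
```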
Ordinary comparison operators yield null (signifying “unknown”), not true or false, when either input is null. For example, 7 = NULL
yields null, as does 7 <> NULL
. When this behavior is not suitable, use the IS [ NOT ] DISTINCT FROM
predicates:
For non-null inputs, IS DISTINCT FROM
is the same as the <>
operator. However, if both inputs are null it returns false, and if only one input is null it returns true. Similarly, IS NOT DISTINCT FROM
is identical to =
for non-null inputs, but it returns true when both inputs are null, and false when only one input is null. Thus, these predicates effectively act as though null were a normal data value, rather than “unknown”.
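The difference is easy to see with null inputs:

```sql
SELECT 7 = NULL;                        -- null ("unknown")
SELECT 7 IS DISTINCT FROM NULL;         -- true
SELECT NULL IS DISTINCT FROM NULL;      -- false
SELECT NULL IS NOT DISTINCT FROM NULL;  -- true
```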
To check whether a value is or is not null, use the predicates:
or the equivalent, but nonstandard, predicates:
Do not write expression
= NULL because NULL
is not “equal to” NULL
. (The null value represents an unknown value, and it is not known whether two unknown values are equal.)
Some applications might expect that expression
= NULL returns true if expression
evaluates to the null value. It is highly recommended that these applications be modified to comply with the SQL standard. However, if that cannot be done the transform_null_equals configuration variable is available. If it is enabled, PostgreSQL will convert x = NULL
clauses to x IS NULL
.
If the expression
is row-valued, then IS NULL
is true when the row expression itself is null or when all the row's fields are null, while IS NOT NULL
is true when the row expression itself is non-null and all the row's fields are non-null. Because of this behavior, IS NULL
and IS NOT NULL
do not always return inverse results for row-valued expressions; in particular, a row-valued expression that contains both null and non-null fields will return false for both tests. In some cases, it may be preferable to write row
IS DISTINCT FROM NULL
or row
IS NOT DISTINCT FROM NULL
, which will simply check whether the overall row value is null without any additional tests on the row fields.
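For example, with a row containing both a null and a non-null field:

```sql
SELECT ROW(1, NULL) IS NULL;                -- false (not all fields are null)
SELECT ROW(1, NULL) IS NOT NULL;            -- false (not all fields are non-null)
SELECT ROW(1, NULL) IS DISTINCT FROM NULL;  -- true (the row value itself is not null)
```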
Boolean values can also be tested using the predicates
These will always return true or false, never a null value, even when the operand is null. A null input is treated as the logical value “unknown”. Notice that IS UNKNOWN
and IS NOT UNKNOWN
are effectively the same as IS NULL
and IS NOT NULL
, respectively, except that the input expression must be of Boolean type.
Some comparison-related functions are also available, as shown in Table 9.3.
The usual logical operators are available:
SQL uses a three-valued logic system with true, false, and null, where null represents "unknown". Observe the following truth tables:
The operators AND and OR are commutative, that is, you can switch the left and right operands without affecting the result. See Section 4.2.14 for more information about the order of evaluation of subexpressions.
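The truth tables can be checked directly; note that null propagates only when the other operand does not already force the result:

```sql
SELECT true AND NULL;      -- null
SELECT false AND NULL;     -- false (false AND anything is false)
SELECT true OR NULL;       -- true  (true OR anything is true)
SELECT false OR NULL;      -- null
SELECT NOT NULL::boolean;  -- null
```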
Table 9.31 shows the available functions for date/time value processing, with details appearing in the following subsections. Table 9.30 illustrates the behaviors of the basic arithmetic operators (+
, *
, etc.). For formatting functions, refer to Section 9.8. You should be familiar with the background information on date/time data types from Section 8.5.
All the functions and operators described below that take time
or timestamp
inputs actually come in two variants: one that takes time with time zone
or timestamp with time zone
, and one that takes time without time zone
or timestamp without time zone
. For brevity, these variants are not shown separately. Also, the +
and *
operators come in commutative pairs (for example both date + integer and integer + date); we show only one of each such pair.
Operator | Example | Result |
---|---|---|
In addition to these functions, the SQL OVERLAPS
operator is supported:
This expression yields true when two time periods (defined by their endpoints) overlap, false when they do not overlap. The endpoints can be specified as pairs of dates, times, or time stamps; or as a date, time, or time stamp followed by an interval. When a pair of values is provided, either the start or the end can be written first; OVERLAPS
automatically takes the earlier value of the pair as the start. Each time period is considered to represent the half-open interval start
<=
time
<
end
, unless start
and end
are equal in which case it represents that single time instant. This means for instance that two time periods with only an endpoint in common do not overlap.
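Some examples:

```sql
SELECT (DATE '2001-02-16', DATE '2001-12-21') OVERLAPS
       (DATE '2001-10-30', DATE '2002-10-30');   -- true
SELECT (DATE '2001-02-16', INTERVAL '100 days') OVERLAPS
       (DATE '2001-10-30', DATE '2002-10-30');   -- false
```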
When adding an interval
value to (or subtracting an interval
value from) a timestamp with time zone
value, the days component advances or decrements the date of the timestamp with time zone
by the indicated number of days, keeping the time of day the same. Across daylight saving time changes (when the session time zone is set to a time zone that recognizes DST), this means interval '1 day'
does not necessarily equal interval '24 hours'
. For example, with the session time zone set to America/Denver
:
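```sql
SET TIME ZONE 'America/Denver';
SELECT TIMESTAMP WITH TIME ZONE '2005-04-02 12:00:00-07' + INTERVAL '1 day';
-- Result: 2005-04-03 12:00:00-06
SELECT TIMESTAMP WITH TIME ZONE '2005-04-02 12:00:00-07' + INTERVAL '24 hours';
-- Result: 2005-04-03 13:00:00-06
```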
This happens because an hour was skipped due to a change in daylight saving time at 2005-04-03 02:00:00
in time zone America/Denver
.
Note there can be ambiguity in the months
field returned by age
because different months have different numbers of days. PostgreSQL's approach uses the month from the earlier of the two dates when calculating partial months. For example, age('2004-06-01', '2004-04-30')
uses April to yield 1 mon 1 day
, while using May would yield 1 mon 2 days
because May has 31 days, while April has only 30.
Subtraction of dates and timestamps can also be complex. One conceptually simple way to perform subtraction is to convert each value to a number of seconds using EXTRACT(EPOCH FROM ...)
, then subtract the results; this produces the number of seconds between the two values. This will adjust for the number of days in each month, timezone changes, and daylight saving time adjustments. Subtraction of date or timestamp values with the “-
” operator returns the number of days (24-hours) and hours/minutes/seconds between the values, making the same adjustments. The age
function returns years, months, days, and hours/minutes/seconds, performing field-by-field subtraction and then adjusting for negative field values. The following queries illustrate the differences in these approaches. The sample results were produced with timezone = 'US/Eastern'
; there is a daylight saving time change between the two dates used:
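```sql
SELECT EXTRACT(EPOCH FROM timestamptz '2013-07-01 12:00:00') -
       EXTRACT(EPOCH FROM timestamptz '2013-03-01 12:00:00');
-- Result: 10537200 (122 days of seconds, minus the hour skipped at the DST change)
SELECT timestamptz '2013-07-01 12:00:00' - timestamptz '2013-03-01 12:00:00';
-- Result: 121 days 23:00:00
SELECT age(timestamptz '2013-07-01 12:00:00', timestamptz '2013-03-01 12:00:00');
-- Result: 4 mons
```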
EXTRACT
, date_part
The extract
function retrieves subfields such as year or hour from date/time values. source
must be a value expression of type timestamp
, time
, or interval
. (Expressions of type date
are cast to timestamp
and can therefore be used as well.) field
is an identifier or string that selects what field to extract from the source value. The extract
function returns values of type double precision
. The following are valid field names:

century: The century. The first century starts at 0001-01-01 00:00:00 AD, although they did not know it at the time. This definition applies to all Gregorian calendar countries. There is no century number 0; you go from -1 century to 1 century. (If you disagree with this, please write your complaint to: Pope, Cathedral Saint-Peter of Roma, Vatican.)

day: For timestamp values, the day (of the month) field (1-31); for interval values, the number of days.

decade: The year field divided by 10.

dow: The day of the week as Sunday (0) to Saturday (6). Note that extract's day-of-the-week numbering differs from that of the to_char(..., 'D') function.

doy: The day of the year (1-365/366).

epoch: For timestamp with time zone values, the number of seconds since 1970-01-01 00:00:00 UTC (can be negative); for date and timestamp values, the number of seconds since 1970-01-01 00:00:00 local time; for interval values, the total number of seconds in the interval. You can convert an epoch value back to a time stamp with to_timestamp.

hour: The hour field (0-23).

isodow: The day of the week as Monday (1) to Sunday (7). This is identical to dow except for Sunday. This matches the ISO 8601 day-of-the-week numbering.

isoyear: The ISO 8601 week-numbering year that the date falls in (not applicable to intervals). Each ISO 8601 week-numbering year begins with the Monday of the week containing the 4th of January, so in early January or late December the ISO year may be different from the Gregorian year. See the week field for more information. This field is not available in PostgreSQL releases prior to 8.3.

microseconds: The seconds field, including fractional parts, multiplied by 1,000,000; note that this includes full seconds.

millennium: The millennium. Years in the 1900s are in the second millennium. The third millennium started January 1, 2001.

milliseconds: The seconds field, including fractional parts, multiplied by 1000. Note that this includes full seconds.

minute: The minutes field (0-59).

month: For timestamp values, the number of the month within the year (1-12); for interval values, the number of months, modulo 12 (0-11).

quarter: The quarter of the year (1-4) that the date is in.

second: The seconds field, including fractional parts (0-59[7]).

timezone: The time zone offset from UTC, measured in seconds. Positive values correspond to time zones east of UTC, negative values to zones west of UTC. (Technically, PostgreSQL does not use UTC because leap seconds are not handled.)

timezone_hour: The hour component of the time zone offset.

timezone_minute: The minute component of the time zone offset.

week: The number of the ISO 8601 week-numbering week of the year. By definition, ISO weeks start on Mondays and the first week of a year contains January 4 of that year. In other words, the first Thursday of a year is in week 1 of that year. In the ISO week-numbering system, it is possible for early-January dates to be part of the 52nd or 53rd week of the previous year, and for late-December dates to be part of the first week of the next year. For example, 2005-01-01 is part of the 53rd week of year 2004, and 2006-01-01 is part of the 52nd week of year 2005, while 2012-12-31 is part of the first week of 2013. It's recommended to use the isoyear field together with week to get consistent results.

year: The year field. Keep in mind there is no 0 AD, so subtracting BC years from AD
When the input value is +/-Infinity, extract
returns +/-Infinity for monotonically-increasing fields (epoch
, julian
, year
, isoyear
, decade
, century
, and millennium
). For other fields, NULL is returned. PostgreSQL versions before 9.6 returned zero for all cases of infinite input.
The extract
function is primarily intended for computational processing. For formatting date/time values for display, see Section 9.8.
The date_part
function is modeled on the traditional Ingres equivalent to the SQL-standard function extract
:
Note that here the field
parameter needs to be a string value, not a name. The valid field names for date_part
are the same as for extract
.
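Some examples of both forms:

```sql
SELECT EXTRACT(HOUR FROM TIMESTAMP '2001-02-16 20:38:40');  -- 20
SELECT date_part('day', TIMESTAMP '2001-02-16 20:38:40');   -- 16
```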
date_trunc
The function date_trunc
is conceptually similar to the trunc
function for numbers.
source
is a value expression of type timestamp
, timestamp with time zone
, or interval
. (Values of type date
and time
are cast automatically to timestamp
or interval
, respectively.) field
selects to which precision to truncate the input value. The return value is likewise of type timestamp
, timestamp with time zone
, or interval
, and it has all fields that are less significant than the selected one set to zero (or one, for day and month).
Valid values for field
are:
When the input value is of type timestamp with time zone
, the truncation is performed with respect to a particular time zone; for example, truncation to day
produces a value that is midnight in that zone. By default, truncation is done with respect to the current TimeZone setting, but the optional time_zone
argument can be provided to specify a different time zone. The time zone name can be specified in any of the ways described in Section 8.5.3.
A time zone cannot be specified when processing timestamp without time zone
or interval
inputs. These are always taken at face value.
Examples (assuming the local time zone is America/New_York
):
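```sql
SELECT date_trunc('hour', TIMESTAMP '2001-02-16 20:38:40');
-- Result: 2001-02-16 20:00:00
SELECT date_trunc('year', TIMESTAMP '2001-02-16 20:38:40');
-- Result: 2001-01-01 00:00:00
```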
AT TIME ZONE
The AT TIME ZONE
construct converts time stamp without time zone to/from time stamp with time zone, and converts time values to different time zones. Table 9.32 shows its variants.
AT TIME ZONE
Variants

In these expressions, the desired time zone zone
can be specified either as a text string (e.g., 'America/Los_Angeles'
) or as an interval (e.g., INTERVAL '-08:00'
). In the text case, a time zone name can be specified in any of the ways described in Section 8.5.3.
Examples (assuming the local time zone is America/Los_Angeles
):
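```sql
SELECT TIMESTAMP '2001-02-16 20:38:40' AT TIME ZONE 'America/Denver';
-- Result: 2001-02-16 19:38:40-08
SELECT TIMESTAMP WITH TIME ZONE '2001-02-16 20:38:40-05' AT TIME ZONE 'America/Denver';
-- Result: 2001-02-16 18:38:40
SELECT TIMESTAMP '2001-02-16 20:38:40' AT TIME ZONE 'Asia/Tokyo' AT TIME ZONE 'America/Chicago';
-- Result: 2001-02-16 05:38:40
```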
The first example adds a time zone to a value that lacks it, and displays the value using the current TimeZone
setting. The second example shifts the time stamp with time zone value to the specified time zone, and returns the value without a time zone. This allows storage and display of values different from the current TimeZone
setting. The third example converts Tokyo time to Chicago time. Converting time values to other time zones uses the currently active time zone rules since no date is supplied.
The function timezone
(zone
, timestamp
) is equivalent to the SQL-conforming construct timestamp
AT TIME ZONE zone
.
PostgreSQL provides a number of functions that return values related to the current date and time. These SQL-standard functions all return values based on the start time of the current transaction:
CURRENT_TIME
and CURRENT_TIMESTAMP
deliver values with time zone; LOCALTIME
and LOCALTIMESTAMP
deliver values without time zone.
CURRENT_TIME
, CURRENT_TIMESTAMP
, LOCALTIME
, and LOCALTIMESTAMP
can optionally take a precision parameter, which causes the result to be rounded to that many fractional digits in the seconds field. Without a precision parameter, the result is given to the full available precision.
Some examples:
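```sql
SELECT CURRENT_TIMESTAMP;     -- e.g. 2001-12-23 14:39:53.662522-05
SELECT CURRENT_TIMESTAMP(2);  -- e.g. 2001-12-23 14:39:53.66-05 (2 fractional digits)
SELECT CURRENT_DATE;          -- e.g. 2001-12-23
```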
Since these functions return the start time of the current transaction, their values do not change during the transaction. This is considered a feature: the intent is to allow a single transaction to have a consistent notion of the “current” time, so that multiple modifications within the same transaction bear the same time stamp.
Other database systems might advance these values more frequently.
PostgreSQL also provides functions that return the start time of the current statement, as well as the actual current time at the instant the function is called. The complete list of non-SQL-standard time functions is:
transaction_timestamp()
is equivalent to CURRENT_TIMESTAMP
, but is named to clearly reflect what it returns. statement_timestamp()
returns the start time of the current statement (more specifically, the time of receipt of the latest command message from the client). statement_timestamp()
and transaction_timestamp()
return the same value during the first command of a transaction, but might differ during subsequent commands. clock_timestamp()
returns the actual current time, and therefore its value changes even within a single SQL command. timeofday()
is a historical PostgreSQL function. Like clock_timestamp()
, it returns the actual current time, but as a formatted text
string rather than a timestamp with time zone
value. now()
is a traditional PostgreSQL equivalent to transaction_timestamp()
.
All the date/time data types also accept the special literal value now
to specify the current date and time (again, interpreted as the transaction start time). Thus, the following three all return the same result:
You do not want to use the third form when specifying a DEFAULT
clause while creating a table. The system will convert now
to a timestamp
as soon as the constant is parsed, so that when the default value is needed, the time of the table creation would be used! The first two forms will not be evaluated until the default value is used, because they are function calls. Thus they will give the desired behavior of defaulting to the time of row insertion.
The following functions are available to delay execution of the server process:
pg_sleep
makes the current session's process sleep until seconds
seconds have elapsed. seconds
is a value of type double precision
, so fractional-second delays can be specified. pg_sleep_for
is a convenience function for larger sleep times specified as an interval
. pg_sleep_until
is a convenience function for when a specific wake-up time is desired. For example:
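```sql
SELECT pg_sleep(1.5);                     -- sleep for 1.5 seconds
SELECT pg_sleep_for('5 minutes');         -- interval-valued delay
SELECT pg_sleep_until('tomorrow 03:00');  -- sleep until a specific wake-up time
```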
The effective resolution of the sleep interval is platform-specific; 0.01 seconds is a common value. The sleep delay will be at least as long as specified. It might be longer depending on factors such as server load. In particular, pg_sleep_until
is not guaranteed to wake up exactly at the specified time, but it will not wake up any earlier.
Make sure that your session does not hold more locks than necessary when calling pg_sleep
or its variants. Otherwise other sessions might have to wait for your sleeping process, slowing down the entire system.
[7] 60 if leap seconds are implemented by the operating system
This section describes:
functions and operators for processing and creating JSON data
the SQL/JSON path language
For more information about the SQL/JSON standard, see []. For details on the JSON types supported in PostgreSQL, see .
lists the operators that are available for use with JSON data types (see ).
json
and jsonb
Operators

Operator | Right Operand Type | Return type | Description | Example | Example Result |
---|---|---|---|---|---|
There are parallel variants of these operators for both the json and jsonb types. The field/element/path extraction operators return the same type as their left-hand input (either json or jsonb), except for those specified as returning text, which coerce the value to text. The field/element/path extraction operators return NULL, rather than failing, if the JSON input does not have the right structure to match the request; for example, if no such element exists. The field/element/path extraction operators that accept integer JSON array subscripts all support negative subscripting from the end of arrays.
The standard comparison operators shown in are available for jsonb
, but not for json
. They follow the ordering rules for B-tree operations outlined at .
Some further operators also exist only for jsonb
, as shown in . Many of these operators can be indexed by jsonb
operator classes. For a full description of jsonb
containment and existence semantics, see . describes how these operators can be used to effectively index jsonb
data.
jsonb
Operators

The ||
operator concatenates the elements at the top level of each of its operands. It does not operate recursively. For example, if both operands are objects with a common key field name, the value of the field in the result will just be the value from the right hand operand.
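For example, when both operands are objects with a common key:

```sql
SELECT '{"a": 1, "b": 2}'::jsonb || '{"b": 3, "c": 4}'::jsonb;
-- Result: {"a": 1, "b": 3, "c": 4}   (the right-hand value wins for key "b")
```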
The @?
and @@
operators suppress the following errors: lacking object field or array element, unexpected JSON item type, and numeric errors. This behavior might be helpful while searching over JSON document collections of varying structure.
array_to_json
and row_to_json
have the same behavior as to_json
except for offering a pretty-printing option. The behavior described for to_json
likewise applies to each individual value converted by the other JSON creation functions.
The functions json[b]_populate_record
, json[b]_populate_recordset
, json[b]_to_record
and json[b]_to_recordset
operate on a JSON object, or array of objects, and extract the values associated with keys whose names match column names of the output row type. Object fields that do not correspond to any output column name are ignored, and output columns that do not match any object field will be filled with nulls. To convert a JSON value to the SQL type of an output column, the following rules are applied in sequence:
A JSON null value is converted to a SQL null in all cases.
If the output column is of type json
or jsonb
, the JSON value is just reproduced exactly.
If the output column is a composite (row) type, and the JSON value is a JSON object, the fields of the object are converted to columns of the output row type by recursive application of these rules.
Likewise, if the output column is an array type and the JSON value is a JSON array, the elements of the JSON array are converted to elements of the output array by recursive application of these rules.
Otherwise, if the JSON value is a string literal, the contents of the string are fed to the input conversion function for the column's data type.
Otherwise, the ordinary text representation of the JSON value is fed to the input conversion function for the column's data type.
While the examples for these functions use constants, the typical use would be to reference a table in the FROM
clause and use one of its json
or jsonb
columns as an argument to the function. Extracted key values can then be referenced in other parts of the query, like WHERE
clauses and target lists. Extracting multiple values in this way can improve performance over extracting them separately with per-key operators.
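A minimal illustration with a constant (in practice the argument would usually be a json or jsonb column):

```sql
SELECT * FROM json_to_record('{"a": 1, "b": "foo", "extra": true}')
    AS x(a int, b text, c text);
-- a = 1, b = 'foo', c = NULL; the unmatched "extra" field is ignored
```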
All the items of the path
parameter of jsonb_set
as well as jsonb_insert
except the last item must be present in the target
. If create_missing
is false, all items of the path
parameter of jsonb_set
must be present. If these conditions are not met the target
is returned unchanged.
If the last path item is an object key, it will be created if it is absent and given the new value. If the last path item is an array index, if it is positive the item to set is found by counting from the left, and if negative by counting from the right - -1
designates the rightmost element, and so on. If the item is out of the range -array_length .. array_length -1, and create_missing is true, the new value is added at the beginning of the array if the item is negative, and at the end of the array if it is positive.
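Some examples:

```sql
SELECT jsonb_set('[{"f1": 1, "f2": null}, 2]', '{0,f1}', '[2,3,4]', false);
-- Result: [{"f1": [2, 3, 4], "f2": null}, 2]
SELECT jsonb_insert('{"a": [0, 1, 2]}', '{a,1}', '"new_value"');
-- Result: {"a": [0, "new_value", 1, 2]}
```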
The json_typeof
function's null
return value should not be confused with a SQL NULL. While calling json_typeof('null'::json)
will return null
, calling json_typeof(NULL::json)
will return a SQL NULL.
If the argument to json_strip_nulls
contains duplicate field names in any object, the result could be semantically somewhat different, depending on the order in which they occur. This is not an issue for jsonb_strip_nulls
since jsonb
values never have duplicate object field names.
The jsonb_path_exists
, jsonb_path_match
, jsonb_path_query
, jsonb_path_query_array
, and jsonb_path_query_first
functions have optional vars
and silent
arguments.
If the vars
argument is specified, it provides an object containing named variables to be substituted into a jsonpath
expression.
If the silent
argument is specified and has the true
value, these functions suppress the same errors as the @?
and @@
operators.
JSON query functions and operators pass the provided path expression to the path engine for evaluation. If the expression matches the queried JSON data, the corresponding SQL/JSON item is returned. Path expressions are written in the SQL/JSON path language and can also include arithmetic expressions and functions. Query functions treat the provided expression as a text string, so it must be enclosed in single quotes.
A path expression consists of a sequence of elements allowed by the jsonpath
data type. The path expression is evaluated from left to right, but you can use parentheses to change the order of operations. If the evaluation is successful, a sequence of SQL/JSON items (SQL/JSON sequence) is produced, and the evaluation result is returned to the JSON query function that completes the specified computation.
For example, suppose you have some JSON data from a GPS tracker that you would like to parse, such as:
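The sample data used in this discussion has roughly this shape (coordinates and values here are illustrative):

```json
{
  "track": {
    "segments": [
      {
        "location": [47.763, 13.4034],
        "start time": "2018-10-14 10:05:14",
        "HR": 73
      },
      {
        "location": [47.706, 13.2635],
        "start time": "2018-10-14 10:39:21",
        "HR": 135
      }
    ]
  }
}
```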
To retrieve the available track segments, you need to use the .
key
accessor operator for all the preceding JSON objects:
If the item to retrieve is an element of an array, you have to unnest this array using the [*]
operator. For example, the following path will return location coordinates for all the available track segments:
To return the coordinates of the first segment only, you can specify the corresponding subscript in the []
accessor operator. Note that the SQL/JSON arrays are 0-relative:
When defining the path, you can also use one or more filter expressions that work similar to the WHERE
clause in SQL. A filter expression begins with a question mark and provides a condition in parentheses:
Filter expressions must be specified right after the path evaluation step to which they are applied. The result of this step is filtered to include only those items that satisfy the provided condition. SQL/JSON defines three-valued logic, so the condition can be true
, false
, or unknown
. The unknown
value plays the same role as SQL NULL
and can be tested for with the is unknown
predicate. Further path evaluation steps use only those items for which filter expressions return true
.
Suppose you would like to retrieve all heart rate values higher than 130. You can achieve this using the following expression:
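For the GPS tracker data described above, the filter looks like this:

```
'$.track.segments[*].HR ? (@ > 130)'
```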
To get the start time of segments with such values instead, you have to filter out irrelevant segments before returning the start time, so the filter expression is applied to the previous step, and the path used in the condition is different:
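```
'$.track.segments[*] ? (@.HR > 130)."start time"'
```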
You can use several filter expressions on the same nesting level, if required. For example, the following expression selects all segments that contain locations with relevant coordinates and high heart rate values:
Using filter expressions at different nesting levels is also allowed. The following example first filters all segments by location, and then returns high heart rate values for these segments, if available:
You can also nest filter expressions within each other:
This expression returns the size of the track if it contains any segments with high heart rate values, or an empty sequence otherwise.
PostgreSQL's implementation of SQL/JSON path language has the following deviations from the SQL/JSON standard:
.datetime()
item method is not implemented yet mainly because immutable jsonpath
functions and operators cannot reference session timezone, which is used in some datetime operations. Datetime support will be added to jsonpath
in future versions of PostgreSQL.
A path expression can be a Boolean predicate, although the SQL/JSON standard allows predicates only in filters. This is necessary for implementation of the @@
operator. For example, the following jsonpath
expression is valid in PostgreSQL:
When you query JSON data, the path expression may not match the actual JSON data structure. An attempt to access a non-existent member of an object or element of an array results in a structural error. SQL/JSON path expressions have two modes of handling structural errors:
lax (default) — the path engine implicitly adapts the queried data to the specified path. Any remaining structural errors are suppressed and converted to empty SQL/JSON sequences.
strict — if a structural error occurs, an error is raised.
The lax mode facilitates matching of a JSON document structure and path expression if the JSON data does not conform to the expected schema. If an operand does not match the requirements of a particular operation, it can be automatically wrapped as an SQL/JSON array or unwrapped by converting its elements into an SQL/JSON sequence before performing this operation. Besides, comparison operators automatically unwrap their operands in the lax mode, so you can compare SQL/JSON arrays out-of-the-box. An array of size 1 is considered equal to its sole element. Automatic unwrapping is not performed only when:
The path expression contains type()
or size()
methods that return the type and the number of elements in the array, respectively.
The queried JSON data contain nested arrays. In this case, only the outermost array is unwrapped, while all the inner arrays remain unchanged. Thus, implicit unwrapping can only go one level down within each path evaluation step.
For example, when querying the GPS data listed above, you can abstract from the fact that it stores an array of segments when using the lax mode:
In the strict mode, the specified path must exactly match the structure of the queried JSON document to return an SQL/JSON item, so using this path expression will cause an error. To get the same result as in the lax mode, you have to explicitly unwrap the segments
array:
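The lax path and its strict equivalent look like this:

```
lax    $.track.segments.location
strict $.track.segments[*].location
```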
SQL/JSON path expressions allow matching text to a regular expression with the like_regex
filter. For example, the following SQL/JSON path query would case-insensitively match all strings in an array that start with an English vowel:
The optional `flag` string may include one or more of the characters: `i` for case-insensitive match, `m` to allow `^` and `$` to match at newlines, `s` to allow `.` to match a newline, and `q` to quote the whole pattern (reducing the behavior to a simple substring match).
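A hedged illustration of the vowel example mentioned above (the JSON literal is invented for demonstration):

```sql
SELECT jsonb_path_query_array(
  '["apple", "Orange", "pear", "Umbrella"]',
  '$[*] ? (@ like_regex "^[aeiou]" flag "i")'
);
-- expected: ["apple", "Orange", "Umbrella"]
```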
For enum types (described in Section 8.7), there are several functions that allow cleaner programming without hard-coding particular values of an enum type. These are listed in Table 9.32. The examples assume an enum type created as:
Table 9.32. Enum Support Functions
Notice that except for the two-argument form of enum_range, these functions disregard the specific value passed to them; they care only about its declared data type. Either null or a specific value of the type can be passed, with the same result. It is more common to apply these functions to a table column or function argument than to a hardwired type name as suggested by the examples.
shows the functions that are available for creating json
and jsonb
values. (There are no equivalent functions for jsonb
, of the row_to_json
and array_to_json
functions. However, the to_jsonb
function supplies much the same functionality as these functions would.)
Function | Description | Example | Example Result |
---|
The hstore extension has a cast from hstore to json, so that hstore values converted via the JSON creation functions will be represented as JSON objects, not as primitive string values.
shows the functions that are available for processing json
and jsonb
values.
Many of these functions and operators will convert Unicode escapes in JSON strings to the appropriate single character. This is a non-issue if the input is type jsonb
, because the conversion was already done; but for json
input, this may result in throwing an error, as noted in .
See also for the aggregate function json_agg
which aggregates record values as JSON, and the aggregate function json_object_agg
which aggregates pairs of values into a JSON object, and their jsonb
equivalents, jsonb_agg
and jsonb_object_agg
.
SQL/JSON path expressions specify the items to be retrieved from the JSON data, similar to XPath expressions used for SQL access to XML. In PostgreSQL, path expressions are implemented as the jsonpath
data type and can use any elements described in .
To refer to the JSON data to be queried (the context item), use the $ sign in the path expression. It can be followed by one or more accessor operators, which go down the JSON structure level by level to retrieve the content of the context item. Each operator that follows deals with the result of the previous evaluation step.
The result of each path evaluation step can be processed by one or more jsonpath
operators and methods listed in . Each method name must be preceded by a dot. For example, you can get an array size:
For more examples of using jsonpath
operators and methods within path expressions, see .
Functions and operators that can be used in filter expressions are listed in . The path evaluation result to be filtered is denoted by the @
variable. To refer to a JSON element stored at a lower nesting level, add one or more accessor operators after @
.
There are minor differences in the interpretation of regular expression patterns used in like_regex
filters, as described in .
The SQL/JSON standard borrows its definition for regular expressions from the LIKE_REGEX
operator, which in turn uses the XQuery standard. PostgreSQL does not currently support the LIKE_REGEX
operator. Therefore, the like_regex
filter is implemented using the POSIX regular expression engine described in . This leads to various minor discrepancies from standard SQL/JSON behavior, which are cataloged in . Note, however, that the flag-letter incompatibilities described there do not apply to SQL/JSON, as it translates the XQuery flag letters to match what the POSIX engine expects.
Keep in mind that the pattern argument of like_regex
is a JSON path string literal, written according to the rules given in . This means in particular that any backslashes you want to use in the regular expression must be doubled. For example, to match strings that contain only digits:
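A minimal sketch of the digits-only match (the JSON value is made up for demonstration):

```sql
-- In the SQL string, "\\d" reaches the jsonpath parser as "\d".
SELECT jsonb_path_exists('["1234", "12ab"]',
                         '$[*] ? (@ like_regex "^\\d+$")');
-- true: "1234" consists only of digits
```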
shows the operators and methods available in jsonpath
. shows the available filter expression elements.
| Operator | Description | Example | Result |
|---|---|---|---|
| `+` | addition | `2 + 3` | `5` |
| `-` | subtraction | `2 - 3` | `-1` |
| `*` | multiplication | `2 * 3` | `6` |
| `/` | division (integer division truncates the result) | `4 / 2` | `2` |
| `%` | modulo (remainder) | `5 % 4` | `1` |
| `^` | exponentiation (associates left to right) | `2.0 ^ 3.0` | `8` |
| `\|/` | square root | `\|/ 25.0` | `5` |
| `\|\|/` | cube root | `\|\|/ 27.0` | `3` |
| `!` | factorial | `5 !` | `120` |
| `!!` | factorial (prefix operator) | `!! 5` | `120` |
| `@` | absolute value | `@ -5.0` | `5` |
| `&` | bitwise AND | `91 & 15` | `11` |
| `\|` | bitwise OR | `32 \| 3` | `35` |
| `#` | bitwise XOR | `17 # 5` | `20` |
| `~` | bitwise NOT | `~1` | `-2` |
| `<<` | bitwise shift left | `1 << 4` | `16` |
| `>>` | bitwise shift right | `8 >> 2` | `2` |
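The operators above can be combined freely in ordinary expressions; for example:

```sql
SELECT 2 + 3     AS sum,
       4 / 2     AS int_div,
       5 % 4     AS remainder,
       |/ 25.0   AS square_root,
       91 & 15   AS bit_and;
-- sum | int_div | remainder | square_root | bit_and
--   5 |       2 |         1 |           5 |      11
```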
| Function | Return Type | Description | Example | Result |
|---|---|---|---|---|
| `abs(x)` | (same as input) | absolute value | `abs(-17.4)` | `17.4` |
| `cbrt(dp)` | `dp` | cube root | `cbrt(27.0)` | `3` |
| `ceil(dp or numeric)` | (same as input) | nearest integer greater than or equal to argument | `ceil(-42.8)` | `-42` |
| `ceiling(dp or numeric)` | (same as input) | nearest integer greater than or equal to argument (same as `ceil`) | `ceiling(-95.3)` | `-95` |
| `degrees(dp)` | `dp` | radians to degrees | `degrees(0.5)` | `28.6478897565412` |
| `div(y numeric, x numeric)` | `numeric` | integer quotient of `y`/`x` | `div(9,4)` | `2` |
| `exp(dp or numeric)` | (same as input) | exponential | `exp(1.0)` | `2.71828182845905` |
| `floor(dp or numeric)` | (same as input) | nearest integer less than or equal to argument | `floor(-42.8)` | `-43` |
| `ln(dp or numeric)` | (same as input) | natural logarithm | `ln(2.0)` | `0.693147180559945` |
| `log(dp or numeric)` | (same as input) | base 10 logarithm | `log(100.0)` | `2` |
| `log10(dp or numeric)` | (same as input) | base 10 logarithm | `log10(100.0)` | `2` |
| `log(b numeric, x numeric)` | `numeric` | logarithm to base `b` | `log(2.0, 64.0)` | `6.0000000000` |
| `mod(y, x)` | (same as argument types) | remainder of `y`/`x` | `mod(9,4)` | `1` |
| `pi()` | `dp` | "π" constant | `pi()` | `3.14159265358979` |
| `power(a dp, b dp)` | `dp` | `a` raised to the power of `b` | `power(9.0, 3.0)` | `729` |
| `power(a numeric, b numeric)` | `numeric` | `a` raised to the power of `b` | `power(9.0, 3.0)` | `729` |
| `radians(dp)` | `dp` | degrees to radians | `radians(45.0)` | `0.785398163397448` |
| `round(dp or numeric)` | (same as input) | round to nearest integer | `round(42.4)` | `42` |
| `round(v numeric, s int)` | `numeric` | round to `s` decimal places | `round(42.4382, 2)` | `42.44` |
| `scale(numeric)` | `integer` | scale of the argument (the number of decimal digits in the fractional part) | `scale(8.41)` | `2` |
| `sign(dp or numeric)` | (same as input) | sign of the argument (-1, 0, +1) | `sign(-8.4)` | `-1` |
| `sqrt(dp or numeric)` | (same as input) | square root | `sqrt(2.0)` | `1.4142135623731` |
| `trunc(dp or numeric)` | (same as input) | truncate toward zero | `trunc(42.8)` | `42` |
| `trunc(v numeric, s int)` | `numeric` | truncate to `s` decimal places | `trunc(42.4382, 2)` | `42.43` |
| `width_bucket(operand dp, b1 dp, b2 dp, count int)` | `int` | return the bucket number to which `operand` would be assigned in a histogram having `count` equal-width buckets spanning the range `b1` to `b2`; returns `0` or `count`+1 for an input outside the range | `width_bucket(5.35, 0.024, 10.06, 5)` | `3` |
| `width_bucket(operand numeric, b1 numeric, b2 numeric, count int)` | `int` | return the bucket number to which `operand` would be assigned in a histogram having `count` equal-width buckets spanning the range `b1` to `b2`; returns `0` or `count`+1 for an input outside the range | `width_bucket(5.35, 0.024, 10.06, 5)` | `3` |
| `width_bucket(operand anyelement, thresholds anyarray)` | `int` | return the bucket number to which `operand` would be assigned given an array listing the lower bounds of the buckets; returns `0` for an input less than the first lower bound; the `thresholds` array must be sorted, smallest first, or unexpected results will be obtained | `width_bucket(now(), array['yesterday', 'today', 'tomorrow']::timestamptz[])` | `2` |
| Function | Return Type | Description |
|---|---|---|
| `random()` | `dp` | random value in the range 0.0 <= x < 1.0 |
| `setseed(dp)` | `void` | set seed for subsequent `random()` calls (value between -1.0 and 1.0, inclusive) |
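A few of the functions above in action:

```sql
SELECT round(42.4382, 2) AS rounded,
       div(9, 4)         AS quotient,
       mod(9, 4)         AS remainder,
       width_bucket(5.35, 0.024, 10.06, 5) AS bucket;
-- rounded | quotient | remainder | bucket
--   42.44 |        2 |         1 |      3
```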
| Function (radians) | Function (degrees) | Description |
|---|---|---|
| `acos(x)` | `acosd(x)` | inverse cosine |
| `asin(x)` | `asind(x)` | inverse sine |
| `atan(x)` | `atand(x)` | inverse tangent |
| `atan2(y, x)` | `atan2d(y, x)` | inverse tangent of `y`/`x` |
| `cos(x)` | `cosd(x)` | cosine |
| `cot(x)` | `cotd(x)` | cotangent |
| `sin(x)` | `sind(x)` | sine |
| `tan(x)` | `tand(x)` | tangent |
| Function | Description | Example | Result |
|---|---|---|---|
| `sinh(x)` | hyperbolic sine | `sinh(0)` | `0` |
| `cosh(x)` | hyperbolic cosine | `cosh(0)` | `1` |
| `tanh(x)` | hyperbolic tangent | `tanh(0)` | `0` |
| `asinh(x)` | inverse hyperbolic sine | `asinh(0)` | `0` |
| `acosh(x)` | inverse hyperbolic cosine | `acosh(1)` | `0` |
| `atanh(x)` | inverse hyperbolic tangent | `atanh(0)` | `0` |
| Operator | Description |
|---|---|
| `<` | less than |
| `>` | greater than |
| `<=` | less than or equal to |
| `>=` | greater than or equal to |
| `=` | equal |
| `<>` or `!=` | not equal |
| Predicate | Description |
|---|---|
| `a BETWEEN x AND y` | between |
| `a NOT BETWEEN x AND y` | not between |
| `a BETWEEN SYMMETRIC x AND y` | between, after sorting the comparison values |
| `a NOT BETWEEN SYMMETRIC x AND y` | not between, after sorting the comparison values |
| `a IS DISTINCT FROM b` | not equal, treating null like an ordinary value |
| `a IS NOT DISTINCT FROM b` | equal, treating null like an ordinary value |
| `expression IS NULL` | is null |
| `expression IS NOT NULL` | is not null |
| `expression ISNULL` | is null (nonstandard syntax) |
| `expression NOTNULL` | is not null (nonstandard syntax) |
| `boolean_expression IS TRUE` | is true |
| `boolean_expression IS NOT TRUE` | is false or unknown |
| `boolean_expression IS FALSE` | is false |
| `boolean_expression IS NOT FALSE` | is true or unknown |
| `boolean_expression IS UNKNOWN` | is unknown |
| `boolean_expression IS NOT UNKNOWN` | is true or false |
| Function | Description | Example | Example Result |
|---|---|---|---|
| `num_nonnulls(VARIADIC "any")` | returns the number of non-null arguments | `num_nonnulls(1, NULL, 2)` | `2` |
| `num_nulls(VARIADIC "any")` | returns the number of null arguments | `num_nulls(1, NULL, 2)` | `1` |
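Combining the two examples from the table:

```sql
SELECT num_nonnulls(1, NULL, 2) AS nonnulls,  -- 2
       num_nulls(1, NULL, 2)    AS nulls;     -- 1
```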
The logical operators `AND`, `OR`, and `NOT` follow SQL's three-valued logic:

| a | b | a AND b | a OR b |
|---|---|---|---|
| TRUE | TRUE | TRUE | TRUE |
| TRUE | FALSE | FALSE | TRUE |
| TRUE | NULL | NULL | TRUE |
| FALSE | FALSE | FALSE | FALSE |
| FALSE | NULL | FALSE | NULL |
| NULL | NULL | NULL | NULL |

| a | NOT a |
|---|---|
| TRUE | FALSE |
| FALSE | TRUE |
| NULL | NULL |
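Because of three-valued logic, combining a null with AND or OR does not always yield null:

```sql
SELECT (FALSE AND NULL)          AS f_and_null,  -- false: FALSE AND anything is false
       (TRUE OR NULL)            AS t_or_null,   -- true: TRUE OR anything is true
       (TRUE AND NULL) IS NULL   AS unknown;     -- true: TRUE AND NULL is unknown
```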
| Operator | Example | Result |
|---|---|---|
| `+` | `date '2001-09-28' + integer '7'` | `date '2001-10-05'` |
| `+` | `date '2001-09-28' + interval '1 hour'` | `timestamp '2001-09-28 01:00:00'` |
| `+` | `date '2001-09-28' + time '03:00'` | `timestamp '2001-09-28 03:00:00'` |
| `+` | `interval '1 day' + interval '1 hour'` | `interval '1 day 01:00:00'` |
| `+` | `timestamp '2001-09-28 01:00' + interval '23 hours'` | `timestamp '2001-09-29 00:00:00'` |
| `+` | `time '01:00' + interval '3 hours'` | `time '04:00:00'` |
| `-` | `- interval '23 hours'` | `interval '-23:00:00'` |
| `-` | `date '2001-10-01' - date '2001-09-28'` | `integer '3'` (days) |
| `-` | `date '2001-10-01' - integer '7'` | `date '2001-09-24'` |
| `-` | `date '2001-09-28' - interval '1 hour'` | `timestamp '2001-09-27 23:00:00'` |
| `-` | `time '05:00' - time '03:00'` | `interval '02:00:00'` |
| `-` | `time '05:00' - interval '2 hours'` | `time '03:00:00'` |
| `-` | `timestamp '2001-09-28 23:00' - interval '23 hours'` | `timestamp '2001-09-28 00:00:00'` |
| `-` | `interval '1 day' - interval '1 hour'` | `interval '1 day -01:00:00'` |
| `-` | `timestamp '2001-09-29 03:00' - timestamp '2001-09-27 12:00'` | `interval '1 day 15:00:00'` |
| `*` | `900 * interval '1 second'` | `interval '00:15:00'` |
| `*` | `21 * interval '1 day'` | `interval '21 days'` |
| `*` | `double precision '3.5' * interval '1 hour'` | `interval '03:30:00'` |
| `/` | `interval '1 hour' / double precision '1.5'` | `interval '00:40:00'` |
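For example, combining a few of the operators above:

```sql
SELECT date '2001-09-28' + integer '7'        AS plus_days,
       date '2001-10-01' - date '2001-09-28'  AS day_diff,
       900 * interval '1 second'              AS scaled;
-- plus_days  | day_diff | scaled
-- 2001-10-05 |        3 | 00:15:00
```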
| Function | Return Type | Description | Example | Result |
|---|---|---|---|---|
| `age(timestamp, timestamp)` | `interval` | Subtract arguments, producing a "symbolic" result that uses years and months, rather than just days | `age(timestamp '2001-04-10', timestamp '1957-06-13')` | `43 years 9 mons 27 days` |
| `age(timestamp)` | `interval` | Subtract from `current_date` (at midnight) | `age(timestamp '1957-06-13')` | `43 years 8 mons 3 days` |
| `clock_timestamp()` | `timestamp with time zone` | Current date and time (changes during statement execution); see Section 9.9.4 | | |
| `current_date` | `date` | Current date; see Section 9.9.4 | | |
| `current_time` | `time with time zone` | Current time of day; see Section 9.9.4 | | |
| `current_timestamp` | `timestamp with time zone` | Current date and time (start of current transaction); see Section 9.9.4 | | |
| `date_part(text, timestamp)` | `double precision` | Get subfield (equivalent to `extract`); see Section 9.9.1 | `date_part('hour', timestamp '2001-02-16 20:38:40')` | `20` |
| `date_part(text, interval)` | `double precision` | Get subfield (equivalent to `extract`); see Section 9.9.1 | `date_part('month', interval '2 years 3 months')` | `3` |
| `date_trunc(text, timestamp)` | `timestamp` | Truncate to specified precision; see Section 9.9.2 | `date_trunc('hour', timestamp '2001-02-16 20:38:40')` | `2001-02-16 20:00:00` |
| `date_trunc(text, timestamp with time zone, text)` | `timestamp with time zone` | Truncate to specified precision in the specified time zone; see Section 9.9.2 | `date_trunc('day', timestamptz '2001-02-16 20:38:40+00', 'Australia/Sydney')` | `2001-02-16 13:00:00+00` |
| `date_trunc(text, interval)` | `interval` | Truncate to specified precision; see Section 9.9.2 | `date_trunc('hour', interval '2 days 3 hours 40 minutes')` | `2 days 03:00:00` |
| `extract(field from timestamp)` | `double precision` | Get subfield; see Section 9.9.1 | `extract(hour from timestamp '2001-02-16 20:38:40')` | `20` |
| `extract(field from interval)` | `double precision` | Get subfield; see Section 9.9.1 | `extract(month from interval '2 years 3 months')` | `3` |
| `isfinite(date)` | `boolean` | Test for finite date (not +/-infinity) | `isfinite(date '2001-02-16')` | `true` |
| `isfinite(timestamp)` | `boolean` | Test for finite time stamp (not +/-infinity) | `isfinite(timestamp '2001-02-16 21:28:30')` | `true` |
| `isfinite(interval)` | `boolean` | Test for finite interval | `isfinite(interval '4 hours')` | `true` |
| `justify_days(interval)` | `interval` | Adjust interval so 30-day time periods are represented as months | `justify_days(interval '35 days')` | `1 mon 5 days` |
| `justify_hours(interval)` | `interval` | Adjust interval so 24-hour time periods are represented as days | `justify_hours(interval '27 hours')` | `1 day 03:00:00` |
| `justify_interval(interval)` | `interval` | Adjust interval using `justify_days` and `justify_hours`, with additional sign adjustments | `justify_interval(interval '1 mon -1 hour')` | `29 days 23:00:00` |
| `localtime` | `time` | Current time of day; see Section 9.9.4 | | |
| `localtimestamp` | `timestamp` | Current date and time (start of current transaction); see Section 9.9.4 | | |
| `make_date(year int, month int, day int)` | `date` | Create date from year, month and day fields | `make_date(2013, 7, 15)` | `2013-07-15` |
| `make_interval(years int DEFAULT 0, months int DEFAULT 0, weeks int DEFAULT 0, days int DEFAULT 0, hours int DEFAULT 0, mins int DEFAULT 0, secs double precision DEFAULT 0.0)` | `interval` | Create interval from years, months, weeks, days, hours, minutes and seconds fields | `make_interval(days => 10)` | `10 days` |
| `make_time(hour int, min int, sec double precision)` | `time` | Create time from hour, minute and seconds fields | `make_time(8, 15, 23.5)` | `08:15:23.5` |
| `make_timestamp(year int, month int, day int, hour int, min int, sec double precision)` | `timestamp` | Create timestamp from year, month, day, hour, minute and seconds fields | `make_timestamp(2013, 7, 15, 8, 15, 23.5)` | `2013-07-15 08:15:23.5` |
| `make_timestamptz(year int, month int, day int, hour int, min int, sec double precision, [ timezone text ])` | `timestamp with time zone` | Create timestamp with time zone from year, month, day, hour, minute and seconds fields; if `timezone` is not specified, the current time zone is used | `make_timestamptz(2013, 7, 15, 8, 15, 23.5)` | `2013-07-15 08:15:23.5+01` |
| `now()` | `timestamp with time zone` | Current date and time (start of current transaction); see Section 9.9.4 | | |
| `statement_timestamp()` | `timestamp with time zone` | Current date and time (start of current statement); see Section 9.9.4 | | |
| `timeofday()` | `text` | Current date and time (like `clock_timestamp`, but as a `text` string); see Section 9.9.4 | | |
| `transaction_timestamp()` | `timestamp with time zone` | Current date and time (start of current transaction); see Section 9.9.4 | | |
| `to_timestamp(double precision)` | `timestamp with time zone` | Convert Unix epoch (seconds since 1970-01-01 00:00:00+00) to timestamp | `to_timestamp(1284352323)` | `2010-09-13 04:32:03+00` |
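For example, a few of the constructor and extraction functions together:

```sql
SELECT make_date(2013, 7, 15)                              AS d,
       make_interval(days => 10)                           AS i,
       date_part('hour', timestamp '2001-02-16 20:38:40')  AS h;
-- d          | i       | h
-- 2013-07-15 | 10 days | 20
```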
The `field` argument of `date_trunc` can be one of: `microseconds`, `milliseconds`, `second`, `minute`, `hour`, `day`, `week`, `month`, `quarter`, `year`, `decade`, `century`, `millennium`.
| Expression | Return Type | Description |
|---|---|---|
| `timestamp without time zone AT TIME ZONE zone` | `timestamp with time zone` | Treat given time stamp without time zone as located in the specified time zone |
| `timestamp with time zone AT TIME ZONE zone` | `timestamp without time zone` | Convert given time stamp with time zone to the new time zone, with no time zone designation |
| `time with time zone AT TIME ZONE zone` | `time with time zone` | Convert given time with time zone to the new time zone |
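Sketching the first two conversions (how the first result is displayed depends on the session's TimeZone setting):

```sql
-- Interpret a zone-less timestamp as America/Denver local time,
-- yielding a timestamp with time zone:
SELECT timestamp '2001-02-16 20:38:40' AT TIME ZONE 'America/Denver';

-- Convert a timestamp with time zone to Denver wall-clock time:
SELECT timestamp with time zone '2001-02-16 20:38:40-05'
       AT TIME ZONE 'America/Denver';
-- 2001-02-16 18:38:40
```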
Additional `jsonb` operators:

| Operator | Right Operand Type | Description |
|---|---|---|
| `@>` | `jsonb` | Does the left JSON value contain the right JSON path/value entries at the top level? |
| `<@` | `jsonb` | Are the left JSON path/value entries contained at the top level within the right JSON value? |
| `?` | `text` | Does the string exist as a top-level key within the JSON value? |
| `?\|` | `text[]` | Do any of these array strings exist as top-level keys? |
| `?&` | `text[]` | Do all of these array strings exist as top-level keys? |
| `\|\|` | `jsonb` | Concatenate two `jsonb` values into a new `jsonb` value |
| `-` | `text` | Delete key/value pair or string element from left operand. Key/value pairs are matched based on their key value. |
| `-` | `text[]` | Delete multiple key/value pairs or string elements from left operand. Key/value pairs are matched based on their key value. |
| `-` | `integer` | Delete the array element with specified index (negative integers count from the end). Throws an error if top level container is not an array. |
| `#-` | `text[]` | Delete the field or element with specified path (for JSON arrays, negative integers count from the end) |
| `@?` | `jsonpath` | Does JSON path return any item for the specified JSON value? |
| `@@` | `jsonpath` | Returns the result of JSON path predicate check for the specified JSON value. Only the first item of the result is taken into account. If the result is not Boolean, then `null` is returned. |
JSON creation functions:

| Function | Description |
|---|---|
| `to_json(anyelement)`, `to_jsonb(anyelement)` | Returns the value as `json` or `jsonb`. |
| `array_to_json(anyarray [, pretty_bool])` | Returns the array as a JSON array. A PostgreSQL multidimensional array becomes a JSON array of arrays. Line feeds will be added between dimension-1 elements if `pretty_bool` is true. |
| `row_to_json(record [, pretty_bool])` | Returns the row as a JSON object. Line feeds will be added between level-1 elements if `pretty_bool` is true. |
| `json_build_array(VARIADIC "any")` | Builds a possibly-heterogeneously-typed JSON array out of a variadic argument list. |
| `json_build_object(VARIADIC "any")` | Builds a JSON object out of a variadic argument list. By convention, the argument list consists of alternating keys and values. |
| `json_object(text[])` | Builds a JSON object out of a text array. The array must have either exactly one dimension with an even number of members, in which case they are taken as alternating key/value pairs, or two dimensions such that each inner array has exactly two elements, which are taken as a key/value pair. |
| `json_object(keys text[], values text[])` | This form of `json_object` takes keys and values pairwise from two separate arrays. In all other respects it is identical to the one-argument form. |
JSON processing functions:

| Function | Description |
|---|---|
| `json_array_length(json)`, `jsonb_array_length(jsonb)` | Returns the number of elements in the outermost JSON array. |
| `json_each(json)`, `jsonb_each(jsonb)` | Expands the outermost JSON object into a set of key/value pairs. |
| `json_each_text(json)`, `jsonb_each_text(jsonb)` | Expands the outermost JSON object into a set of key/value pairs. The returned values will be of type `text`. |
| `json_extract_path(from_json json, VARIADIC path_elems text[])` | Returns JSON value pointed to by `path_elems` (equivalent to the `#>` operator). |
| `json_extract_path_text(from_json json, VARIADIC path_elems text[])` | Returns JSON value pointed to by `path_elems` as `text` (equivalent to the `#>>` operator). |
| `json_object_keys(json)`, `jsonb_object_keys(jsonb)` | Returns set of keys in the outermost JSON object. |
| `json_populate_record(base anyelement, from_json json)` | Expands the object in `from_json` to a row whose columns match the record type defined by `base` (see note below). |
| `json_populate_recordset(base anyelement, from_json json)` | Expands the outermost array of objects in `from_json` to a set of rows whose columns match the record type defined by `base` (see note below). |
| `json_array_elements(json)`, `jsonb_array_elements(jsonb)` | Expands a JSON array to a set of JSON values. |
| `json_array_elements_text(json)` | Expands a JSON array to a set of `text` values. |
| `json_typeof(json)`, `jsonb_typeof(jsonb)` | Returns the type of the outermost JSON value as a text string. Possible types are `object`, `array`, `string`, `number`, `boolean`, and `null`. |
| `json_to_record(json)` | Builds an arbitrary record from a JSON object (see note below). As with all functions returning `record`, the caller must explicitly define the structure of the record with an `AS` clause. |
| `json_to_recordset(json)` | Builds an arbitrary set of records from a JSON array of objects (see note below). As with all functions returning `record`, the caller must explicitly define the structure of the record with an `AS` clause. |
| `json_strip_nulls(from_json json)`, `jsonb_strip_nulls(from_json jsonb)` | Returns `from_json` with all object fields that have null values omitted. Other null values are untouched. |
| `jsonb_set(target jsonb, path text[], new_value jsonb [, create_missing boolean])` | Returns `target` with the section designated by `path` replaced by `new_value`, or with `new_value` added if `create_missing` is true (default is true) and the item designated by `path` does not exist. |
| `jsonb_insert(target jsonb, path text[], new_value jsonb [, insert_after boolean])` | Returns `target` with `new_value` inserted at the position designated by `path`. |
| `jsonb_pretty(from_json jsonb)` | Returns `from_json` as indented JSON text. |
| `jsonb_path_exists(target jsonb, path jsonpath [, vars jsonb [, silent bool]])` | Checks whether JSON path returns any item for the specified JSON value. |
| `jsonb_path_match(target jsonb, path jsonpath [, vars jsonb [, silent bool]])` | Returns the result of JSON path predicate check for the specified JSON value. Only the first item of the result is taken into account. If the result is not Boolean, then `null` is returned. |
| `jsonb_path_query(target jsonb, path jsonpath [, vars jsonb [, silent bool]])` | Gets all JSON items returned by JSON path for the specified JSON value. |
| `jsonb_path_query_array(target jsonb, path jsonpath [, vars jsonb [, silent bool]])` | Gets all JSON items returned by JSON path for the specified JSON value and wraps result into an array. |
| `jsonb_path_query_first(target jsonb, path jsonpath [, vars jsonb [, silent bool]])` | Gets the first JSON item returned by JSON path for the specified JSON value. Returns `NULL` if there are no results. |
`jsonpath` operators and methods:

| Operator/Method | Description |
|---|---|
| `+` (unary) | Plus operator that iterates over the SQL/JSON sequence |
| `-` (unary) | Minus operator that iterates over the SQL/JSON sequence |
| `+` (binary) | Addition |
| `-` (binary) | Subtraction |
| `*` | Multiplication |
| `/` | Division |
| `%` | Modulus |
| `.type()` | Type of the SQL/JSON item |
| `.size()` | Size of the SQL/JSON item |
| `.double()` | Approximate floating-point number converted from an SQL/JSON number or a string |
| `.ceiling()` | Nearest integer greater than or equal to the SQL/JSON number |
| `.floor()` | Nearest integer less than or equal to the SQL/JSON number |
| `.abs()` | Absolute value of the SQL/JSON number |
| `.keyvalue()` | Sequence of object's key-value pairs represented as array of items containing three fields (`"key"`, `"value"`, and `"id"`) |
`json` and `jsonb` extraction operators:

| Operator | Right Operand Type | Description |
|---|---|---|
| `->` | `int` | Get JSON array element (indexed from zero, negative integers count from the end) |
| `->` | `text` | Get JSON object field by key |
| `->>` | `int` | Get JSON array element as `text` |
| `->>` | `text` | Get JSON object field as `text` |
| `#>` | `text[]` | Get JSON object at the specified path |
| `#>>` | `text[]` | Get JSON object at the specified path as `text` |
Enum support functions:

| Function | Description |
|---|---|
| `enum_first(anyenum)` | Returns the first value of the input enum type |
| `enum_last(anyenum)` | Returns the last value of the input enum type |
| `enum_range(anyenum)` | Returns all values of the input enum type in an ordered array |
| `enum_range(anyenum, anyenum)` | Returns the range between the two given enum values, as an ordered array. The values must be from the same enum type. If the first parameter is null, the result will start with the first value of the enum type. If the second parameter is null, the result will end with the last value of the enum type. |
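Assuming an example enum type named rainbow (the name used in the PostgreSQL documentation's enum examples), these functions behave like:

```sql
CREATE TYPE rainbow AS ENUM
  ('red', 'orange', 'yellow', 'green', 'blue', 'purple');

SELECT enum_first(NULL::rainbow);   -- red
SELECT enum_last(NULL::rainbow);    -- purple
SELECT enum_range(NULL::rainbow);   -- {red,orange,yellow,green,blue,purple}
SELECT enum_range('orange'::rainbow, 'green'::rainbow);  -- {orange,yellow,green}
```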
This section describes the SQL-compliant conditional expressions available in PostgreSQL.

Tip: if your needs go beyond the capabilities of these conditional expressions, you might want to consider writing a stored procedure in a more expressive programming language.

CASE

The SQL CASE expression is a generic conditional expression, similar to if/else statements in other procedural languages:

A CASE clause can be used wherever an expression is valid. Each condition is an expression that returns a Boolean result. If the condition's result is true, the value of the CASE expression is the result that follows the condition, and the remainder of the CASE expression is not processed. If the condition's result is not true, any subsequent WHEN clauses are examined in the same manner. If no WHEN condition is true, the value of the CASE expression is the result of the ELSE clause. If the ELSE clause is omitted and no condition is true, the result is null.

An example:

The data types of all the result expressions must be convertible to a single output type. See Section 10.5 for more details.

There is a "simple" form of CASE expression that is a variant of the general form above:

The first expression is computed, then compared to each of the expressions in the WHEN clauses until one is found that is equal to it. If no match is found, the result of the ELSE clause (or a null value) is returned. This is similar to the switch statement in C.

The example above can be written using the simple CASE syntax:

A CASE expression does not evaluate any subexpressions that are not needed to determine the result. For example, this is a possible way of avoiding a division-by-zero failure:

As described in Section 4.2.14, there are various situations in which subexpressions of an expression are evaluated at different times, so the principle that "CASE evaluates only necessary subexpressions" is not ironclad. For example, a constant 1/0 subexpression would usually result in a division-by-zero failure at query planning time, even if it were within a CASE arm that would never be entered at run time.
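A minimal sketch of such a division-by-zero guard (the table `t` and columns `a`, `b` are hypothetical):

```sql
-- The division a / b is evaluated only when b is nonzero.
SELECT a, b
FROM t
WHERE CASE WHEN b <> 0 THEN a / b > 1.5 ELSE false END;
```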
COALESCE

The COALESCE function returns the first of its arguments that is not null. Null is returned only if all of its arguments are null. It is often used to substitute a default value for null values when data is retrieved for display, for example:

This returns description if it is not null; otherwise short_description if it is not null; otherwise (none).

Like a CASE expression, COALESCE only evaluates the arguments that are needed to determine the result; that is, arguments to the right of the first non-null argument are not evaluated. This SQL-standard function provides capabilities similar to NVL and IFNULL, which are used in some other database systems.
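Using the column names from the text (the table name `items` is hypothetical):

```sql
SELECT COALESCE(description, short_description, '(none)') AS shown
FROM items;
```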
NULLIF

The NULLIF function returns a null value if value1 equals value2; otherwise it returns value1. This can be used to perform the inverse operation of the COALESCE example given above:

In this example, if value is (none), null is returned; otherwise the value of value is returned.
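A matching sketch (again with a hypothetical `items` table):

```sql
-- Returns NULL when value is '(none)', otherwise value itself.
SELECT NULLIF(value, '(none)') AS original
FROM items;
```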
GREATEST and LEAST

The GREATEST and LEAST functions select the largest or smallest value from a list of any number of expressions. The expressions must all be convertible to a common data type, which will be the type of the result (see Section 10.5 for details). NULL values in the list are ignored; the result is NULL only if all the expressions evaluate to NULL.

Note that GREATEST and LEAST are not in the SQL standard, but are a common extension. Some other databases make them return NULL if any argument is NULL, rather than only when all arguments are NULL.
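For example:

```sql
SELECT GREATEST(1, 7, 3)    AS g,   -- 7
       LEAST(1, 7, 3)       AS l,   -- 1
       GREATEST(2, NULL, 5) AS g2;  -- 5: NULLs in the list are ignored
```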
This section describes the SQL-compliant subquery expressions available in PostgreSQL. All of the expression forms documented in this section return Boolean (true/false) results.
EXISTS
The argument of EXISTS
is an arbitrary SELECT
statement, or subquery. The subquery is evaluated to determine whether it returns any rows. If it returns at least one row, the result of EXISTS
is “true”; if the subquery returns no rows, the result of EXISTS
is “false”.
The subquery can refer to variables from the surrounding query, which will act as constants during any one evaluation of the subquery.
The subquery will generally only be executed long enough to determine whether at least one row is returned, not all the way to completion. It is unwise to write a subquery that has side effects (such as calling sequence functions); whether the side effects occur might be unpredictable.
Since the result depends only on whether any rows are returned, and not on the contents of those rows, the output list of the subquery is normally unimportant. A common coding convention is to write all EXISTS
tests in the form EXISTS(SELECT 1 WHERE ...)
. There are exceptions to this rule however, such as subqueries that use INTERSECT
.
This simple example is like an inner join on col2
, but it produces at most one output row for each tab1
row, even if there are several matching tab2
rows:
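The query described above might be written as (using the tab1/tab2/col2 names from the text):

```sql
SELECT col1
FROM tab1
WHERE EXISTS (SELECT 1 FROM tab2 WHERE col2 = tab1.col2);
```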
IN
The right-hand side is a parenthesized subquery, which must return exactly one column. The left-hand expression is evaluated and compared to each row of the subquery result. The result of IN
is “true” if any equal subquery row is found. The result is “false” if no equal row is found (including the case where the subquery returns no rows).
Note that if the left-hand expression yields null, or if there are no equal right-hand values and at least one right-hand row yields null, the result of the IN
construct will be null, not false. This is in accordance with SQL's normal rules for Boolean combinations of null values.
As with EXISTS
, it's unwise to assume that the subquery will be evaluated completely.
The left-hand side of this form of IN
is a row constructor, as described in Section 4.2.13. The right-hand side is a parenthesized subquery, which must return exactly as many columns as there are expressions in the left-hand row. The left-hand expressions are evaluated and compared row-wise to each row of the subquery result. The result of IN
is “true” if any equal subquery row is found. The result is “false” if no equal row is found (including the case where the subquery returns no rows).
As usual, null values in the rows are combined per the normal rules of SQL Boolean expressions. Two rows are considered equal if all their corresponding members are non-null and equal; the rows are unequal if any corresponding members are non-null and unequal; otherwise the result of that row comparison is unknown (null). If all the per-row results are either unequal or null, with at least one null, then the result of IN
is null.
NOT IN
The right-hand side is a parenthesized subquery, which must return exactly one column. The left-hand expression is evaluated and compared to each row of the subquery result. The result of NOT IN
is “true” if only unequal subquery rows are found (including the case where the subquery returns no rows). The result is “false” if any equal row is found.
Note that if the left-hand expression yields null, or if there are no equal right-hand values and at least one right-hand row yields null, the result of the NOT IN
construct will be null, not true. This is in accordance with SQL's normal rules for Boolean combinations of null values.
As with EXISTS
, it's unwise to assume that the subquery will be evaluated completely.
The left-hand side of this form of NOT IN
is a row constructor, as described in Section 4.2.13. The right-hand side is a parenthesized subquery, which must return exactly as many columns as there are expressions in the left-hand row. The left-hand expressions are evaluated and compared row-wise to each row of the subquery result. The result of NOT IN
is “true” if only unequal subquery rows are found (including the case where the subquery returns no rows). The result is “false” if any equal row is found.
As usual, null values in the rows are combined per the normal rules of SQL Boolean expressions. Two rows are considered equal if all their corresponding members are non-null and equal; the rows are unequal if any corresponding members are non-null and unequal; otherwise the result of that row comparison is unknown (null). If all the per-row results are either unequal or null, with at least one null, then the result of NOT IN
is null.
ANY
/SOME
The right-hand side is a parenthesized subquery, which must return exactly one column. The left-hand expression is evaluated and compared to each row of the subquery result using the given operator
, which must yield a Boolean result. The result of ANY
is “true” if any true result is obtained. The result is “false” if no true result is found (including the case where the subquery returns no rows).
SOME
is a synonym for ANY
. IN
is equivalent to = ANY
.
Note that if there are no successes and at least one right-hand row yields null for the operator's result, the result of the ANY
construct will be null, not false. This is in accordance with SQL's normal rules for Boolean combinations of null values.
As with EXISTS
, it's unwise to assume that the subquery will be evaluated completely.
The left-hand side of this form of ANY
is a row constructor, as described in Section 4.2.13. The right-hand side is a parenthesized subquery, which must return exactly as many columns as there are expressions in the left-hand row. The left-hand expressions are evaluated and compared row-wise to each row of the subquery result, using the given operator
. The result of ANY
is “true” if the comparison returns true for any subquery row. The result is “false” if the comparison returns false for every subquery row (including the case where the subquery returns no rows). The result is NULL if no comparison with a subquery row returns true, and at least one comparison returns NULL.
See Section 9.24.5 for details about the meaning of a row constructor comparison.
ALL
The right-hand side is a parenthesized subquery, which must return exactly one column. The left-hand expression is evaluated and compared to each row of the subquery result using the given operator
, which must yield a Boolean result. The result of ALL
is “true” if all rows yield true (including the case where the subquery returns no rows). The result is “false” if any false result is found. The result is NULL if no comparison with a subquery row returns false, and at least one comparison returns NULL.
NOT IN
is equivalent to <> ALL
.
As with EXISTS
, it's unwise to assume that the subquery will be evaluated completely.
The left-hand side of this form of ALL
is a row constructor, as described in Section 4.2.13. The right-hand side is a parenthesized subquery, which must return exactly as many columns as there are expressions in the left-hand row. The left-hand expressions are evaluated and compared row-wise to each row of the subquery result, using the given operator
. The result of ALL
is “true” if the comparison returns true for all subquery rows (including the case where the subquery returns no rows). The result is “false” if the comparison returns false for any subquery row. The result is NULL if no comparison with a subquery row returns false, and at least one comparison returns NULL.
See Section 9.24.5 for details about the meaning of a row constructor comparison.
The left-hand side is a row constructor, as described in Section 4.2.13. The right-hand side is a parenthesized subquery, which must return exactly as many columns as there are expressions in the left-hand row. Furthermore, the subquery cannot return more than one row. (If it returns zero rows, the result is taken to be null.) The left-hand side is evaluated and compared row-wise to the single subquery result row.
See Section 9.24.5 for details about the meaning of a row constructor comparison.
This section describes several specialized constructs for making multiple comparisons between groups of values. These forms are syntactically related to the subquery forms of the previous section, but do not involve subqueries. The forms involving array subexpressions are PostgreSQL extensions; the rest are SQL-compliant. All of the expression forms documented in this section return Boolean (true/false) results.

IN

The right-hand side is a parenthesized list of scalar expressions. The result is "true" if the left-hand expression's result is equal to any of the right-hand expressions. This is a shorthand notation for

Note that if the left-hand expression yields null, or if there are no equal right-hand values and at least one right-hand expression yields null, the result of the IN construct will be null, not false. This is in accordance with SQL's normal rules for Boolean combinations of null values.

NOT IN

The right-hand side is a parenthesized list of scalar expressions. The result is "true" if the left-hand expression's result is unequal to all of the right-hand expressions. This is a shorthand notation for

Note that if the left-hand expression yields null, or if there are no equal right-hand values and at least one right-hand expression yields null, the result of the NOT IN construct will be null, not true as one might naively expect. This is in accordance with SQL's normal rules for Boolean combinations of null values.

x NOT IN y is equivalent to NOT (x IN y) in all cases. However, null values are much more likely to trip up the novice when working with NOT IN than when working with IN. It is best to express your condition positively if possible.
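For example, note how a null in the list changes the NOT IN result:

```sql
SELECT 2 IN (1, 2, 3)     AS in_list,      -- true
       2 NOT IN (1, 2, 3) AS not_in_list,  -- false
       2 NOT IN (1, NULL) AS with_null;    -- null, not true
```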
ANY/SOME (array)
The right-hand side is a parenthesized expression, which must yield an array value. The left-hand expression is evaluated and compared to each element of the array using the given operator, which must yield a Boolean result. The result of ANY is “true” if any true result is obtained. The result is “false” if no true result is found (including the case where the array has zero elements).
If the array expression yields a null array, the result of ANY will be null. If the left-hand expression yields null, the result of ANY is ordinarily null (though a non-strict comparison operator could possibly yield a different result). Also, if the right-hand array contains any null elements and no true comparison result is obtained, the result of ANY will be null, not false (again, assuming a strict comparison operator). This is in accordance with SQL's normal rules for Boolean combinations of null values.
SOME is a synonym for ANY.
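For illustration, a few ANY comparisons against array values (results shown as comments):

```sql
SELECT 3 = ANY (ARRAY[1, 2, 3]);   -- true
SELECT 5 > ANY (ARRAY[1, 2, 3]);   -- true: 5 is greater than at least one element
SELECT 5 = ANY (ARRAY[1, NULL]);   -- null: no true result, and a null element exists
```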
ALL (array)
The right-hand side is a parenthesized expression, which must yield an array value. The left-hand expression is evaluated and compared to each element of the array using the given operator, which must yield a Boolean result. The result of ALL is “true” if all comparisons yield true (including the case where the array has zero elements). The result is “false” if any false result is found.
If the array expression yields a null array, the result of ALL will be null. If the left-hand expression yields null, the result of ALL is ordinarily null (though a non-strict comparison operator could possibly yield a different result). Also, if the right-hand array contains any null elements and no false comparison result is obtained, the result of ALL will be null, not true (again, assuming a strict comparison operator). This is in accordance with SQL's normal rules for Boolean combinations of null values.
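And a few corresponding ALL comparisons (results shown as comments):

```sql
SELECT 5 > ALL (ARRAY[1, 2, 3]);   -- true: greater than every element
SELECT 5 > ALL (ARRAY[1, 2, 9]);   -- false
SELECT 5 > ALL (ARRAY[]::int[]);   -- true: vacuously true for an empty array
```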
Each side is a row constructor, as described in Section 4.2.13. The two row values must have the same number of fields. Each side is evaluated and they are compared row-wise. Row constructor comparisons are allowed when the operator is =, <>, <, <=, > or >=. Every row element must be of a type which has a default B-tree operator class, or the attempted comparison may generate an error.
Errors related to the number or types of elements might not occur if the comparison is resolved using earlier columns.
The = and <> cases work slightly differently from the others. Two rows are considered equal if all their corresponding members are non-null and equal; the rows are unequal if any corresponding members are non-null and unequal; otherwise the result of the row comparison is unknown (null).
For the <, <=, > and >= cases, the row elements are compared left-to-right, stopping as soon as an unequal or null pair of elements is found. If either of this pair of elements is null, the result of the row comparison is unknown (null); otherwise comparison of this pair of elements determines the result. For example, ROW(1, 2, NULL) < ROW(1, 3, 0) yields true, not null, because the third pair of elements are not considered.
Prior to PostgreSQL 8.2, the <, <=, > and >= cases were not handled per the SQL specification. A comparison like ROW(a, b) < ROW(c, d) was implemented as a < c AND b < d, whereas the correct behavior is equivalent to a < c OR (a = c AND b < d).
This construct is similar to a <> row comparison, but it does not yield null for null inputs. Instead, any null value is considered unequal to (distinct from) any non-null value, and any two nulls are considered equal (not distinct). Thus the result will either be true or false, never null.
This construct is similar to a = row comparison, but it does not yield null for null inputs. Instead, any null value is considered unequal to (distinct from) any non-null value, and any two nulls are considered equal (not distinct). Thus the result will always be either true or false, never null.
The SQL specification requires row-wise comparison to return NULL if the result depends on comparing two NULL values or a NULL and a non-NULL. PostgreSQL does this only when comparing the results of two row constructors (as in Section 9.23.5) or comparing a row constructor to the output of a subquery (as in Section 9.22). In other contexts where two composite-type values are compared, two NULL field values are considered equal, and a NULL is considered larger than a non-NULL. This is necessary in order to have consistent sorting and indexing behavior for composite types.
Each side is evaluated and they are compared row-wise. Composite type comparisons are allowed when the operator is =, <>, <, <=, > or >=, or has semantics similar to one of these. (To be specific, an operator can be a row comparison operator if it is a member of a B-tree operator class, or is the negator of the = member of a B-tree operator class.) The default behavior of the above operators is the same as for IS [NOT] DISTINCT FROM for row constructors (see Section 9.23.5).
To support matching of rows which include elements without a default B-tree operator class, the following operators are defined for composite type comparison: *=, *<>, *<, *<=, *> and *>=. These operators compare the internal binary representation of the two rows. Two rows might have a different binary representation even though comparisons of the two rows with the equality operator is true. The ordering of rows under these comparison operators is deterministic but not otherwise meaningful. These operators are used internally for materialized views and might be useful for other specialized purposes such as replication, but are not intended to be generally useful for writing queries.
Value/Predicate | Description |
---|---|
`==` | Equality operator |
`!=` | Non-equality operator |
`<>` | Non-equality operator (same as `!=`) |
`<` | Less-than operator |
`<=` | Less-than-or-equal-to operator |
`>` | Greater-than operator |
`>=` | Greater-than-or-equal-to operator |
`true` | Value used to perform comparison with JSON `true` literal |
`false` | Value used to perform comparison with JSON `false` literal |
`null` | Value used to perform comparison with JSON `null` value |
`&&` | Boolean AND |
`\|\|` | Boolean OR |
`!` | Boolean NOT |
`like_regex` | Tests whether the first operand matches the regular expression given by the second operand |
`starts with` | Tests whether the second operand is an initial substring of the first operand |
`exists` | Tests whether a path expression matches at least one SQL/JSON item |
`is unknown` | Tests whether a Boolean condition is unknown |
This section describes functions and operators for examining and manipulating string values. Strings in this context include values of the types character, character varying, and text. Unless otherwise noted, all of the functions listed below work on all of these types, but be wary of potential effects of automatic space-padding when using the character type. Some functions also exist natively for the bit-string types.
SQL defines some string functions that use key words, rather than commas, to separate arguments. Details are in Table 9.9. PostgreSQL also provides versions of these functions that use the regular function invocation syntax (see Table 9.10).
Before PostgreSQL 8.3, these functions would silently accept values of several non-string data types as well, due to the presence of implicit coercions from those data types to text. Those coercions have been removed because they frequently caused surprising behaviors. However, the string concatenation operator (||) still accepts non-string input, so long as at least one input is of a string type, as shown in Table 9.9. For other cases, insert an explicit coercion to text if you need to duplicate the previous behavior.
Function | Return Type | Description | Example | Result |
---|---|---|---|---|
Additional string manipulation functions are available and are listed in Table 9.10. Some of them are used internally to implement the SQL-standard string functions listed in Table 9.9.
The concat, concat_ws and format functions are variadic, so it is possible to pass the values to be concatenated or formatted as an array marked with the VARIADIC keyword (see Section 37.5.5). The array's elements are treated as if they were separate ordinary arguments to the function. If the variadic array argument is NULL, concat and concat_ws return NULL, but format treats a NULL as a zero-element array.
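A minimal sketch of the VARIADIC usage (results shown as comments):

```sql
SELECT concat_ws(',', VARIADIC ARRAY['a', 'b', 'c']);     -- a,b,c
SELECT format('%s %s', VARIADIC ARRAY['Hello', 'World']); -- Hello World
```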
See also the aggregate function string_agg in Section 9.20.
The conversion names follow a standard naming scheme: the official name of the source encoding with all non-alphanumeric characters replaced by underscores, followed by _to_, followed by the similarly processed destination encoding name. Therefore, the names might deviate from the customary encoding names.
format
The function format
produces output formatted according to a format string, in a style similar to the C function sprintf
.
formatstr
is a format string that specifies how the result should be formatted. Text in the format string is copied directly to the result, except where format specifiers are used. Format specifiers act as placeholders in the string, defining how subsequent function arguments should be formatted and inserted into the result. Each formatarg
argument is converted to text according to the usual output rules for its data type, and then formatted and inserted into the result string according to the format specifier(s).
Format specifiers are introduced by a %
character and have the form
where the component fields are:

position (optional)
A string of the form n
$ where n
is the index of the argument to print. Index 1 means the first argument after formatstr
. If the position
is omitted, the default is to use the next argument in sequence.

flags (optional)
Additional options controlling how the format specifier's output is formatted. Currently the only supported flag is a minus sign (-
) which will cause the format specifier's output to be left-justified. This has no effect unless the width
field is also specified.

width (optional)
Specifies the minimum number of characters to use to display the format specifier's output. The output is padded on the left or right (depending on the -
flag) with spaces as needed to fill the width. A too-small width does not cause truncation of the output, but is simply ignored. The width may be specified using any of the following: a positive integer; an asterisk (*
) to use the next function argument as the width; or a string of the form *
n
$ to use the _n_th function argument as the width.
If the width comes from a function argument, that argument is consumed before the argument that is used for the format specifier's value. If the width argument is negative, the result is left aligned (as if the -
flag had been specified) within a field of length abs
(width
).

type (required)
The type of format conversion to use to produce the format specifier's output. The following types are supported:
s
formats the argument value as a simple string. A null value is treated as an empty string.
I
treats the argument value as an SQL identifier, double-quoting it if necessary. It is an error for the value to be null (equivalent to quote_ident
).
L
quotes the argument value as an SQL literal. A null value is displayed as the string NULL
, without quotes (equivalent to quote_nullable
).
In addition to the format specifiers described above, the special sequence %%
may be used to output a literal %
character.
Here are some examples of the basic format conversions:
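For instance (results shown as comments):

```sql
SELECT format('Hello %s', 'World');
-- Hello World
SELECT format('Testing %s, %s, %s, %%', 'one', 'two', 'three');
-- Testing one, two, three, %
SELECT format('INSERT INTO %I VALUES(%L)', 'Foo bar', E'O\'Reilly');
-- INSERT INTO "Foo bar" VALUES('O''Reilly')
```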
Here are examples using width
fields and the -
flag:
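For instance (results shown as comments):

```sql
SELECT format('|%10s|', 'foo');      -- |       foo|
SELECT format('|%-10s|', 'foo');     -- |foo       |
SELECT format('|%*s|', 10, 'foo');   -- |       foo|
SELECT format('|%-*s|', 10, 'foo');  -- |foo       |
SELECT format('|%*s|', -10, 'foo');  -- |foo       |
```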
These examples show use of position
fields:
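For instance (result shown as a comment):

```sql
SELECT format('Testing %3$s, %2$s, %1$s', 'one', 'two', 'three');
-- Testing three, two, one
```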
Unlike the standard C function sprintf
, PostgreSQL's format
function allows format specifiers with and without position
fields to be mixed in the same format string. A format specifier without a position
field always uses the next argument after the last argument consumed. In addition, the format
function does not require all function arguments to be used in the format string. For example:
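Here the unpositioned %s picks up the argument after the last one consumed (result shown as a comment):

```sql
SELECT format('Testing %3$s, %2$s, %s', 'one', 'two', 'three');
-- Testing three, two, three
```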
The %I
and %L
format specifiers are particularly useful for safely constructing dynamic SQL statements. See Example 42.1.
This section describes functions for operating on _sequence objects_, also called sequence generators or just sequences. Sequence objects are special single-row tables created with CREATE SEQUENCE. Sequence objects are commonly used to generate unique identifiers for rows of a table. The sequence functions, listed in Table 9.47, provide simple, multiuser-safe methods for obtaining successive sequence values from sequence objects.
Table 9.47. Sequence Functions
Function | Return Type | Description |
---|---|---|
The sequence to be operated on by a sequence function is specified by a regclass argument, which is simply the OID of the sequence in the pg_class system catalog. You do not have to look up the OID by hand, however, since the regclass data type's input converter will do the work for you. Just write the sequence name enclosed in single quotes so that it looks like a literal constant. For compatibility with the handling of ordinary SQL names, the string will be converted to lower case unless it contains double quotes around the sequence name. Thus:
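For instance, assuming sequences named foo and Foo exist:

```sql
SELECT nextval('foo');     -- operates on sequence foo
SELECT nextval('FOO');     -- same as above (name is folded to lower case)
SELECT nextval('"Foo"');   -- operates on sequence Foo
```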
The sequence name can be schema-qualified if necessary:
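For instance, with a hypothetical schema myschema:

```sql
SELECT nextval('myschema.foo');     -- operates on myschema.foo
SELECT nextval('"myschema".foo');   -- same as above
SELECT nextval('foo');              -- searches the search path for foo
```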
See Section 8.18 for more information about regclass.
Before PostgreSQL 8.1, the arguments of the sequence functions were of type text, not regclass, and the above-described conversion from a text string to an OID value would happen at run time during each call. For backward compatibility, this facility still exists, but internally it is now handled as an implicit coercion from text to regclass before the function is invoked.
When you write the argument of a sequence function as an unadorned literal string, it becomes a constant of type regclass. Since this is really just an OID, it will track the originally identified sequence despite later renaming, schema reassignment, etc. This “early binding” behavior is usually desirable for sequence references in column defaults and views. But sometimes you might want “late binding” where the sequence reference is resolved at run time. To get late-binding behavior, force the constant to be stored as a text constant instead of regclass:
Note that late binding was the only behavior supported in PostgreSQL releases before 8.1, so you might need to do this to preserve the semantics of old applications.
Of course, the argument of a sequence function can be an expression as well as a constant. If it is a text expression then the implicit coercion will result in a run-time lookup.
The available sequence functions are:
nextval
Advance the sequence object to its next value and return that value. This is done atomically: even if multiple sessions execute nextval concurrently, each will safely receive a distinct sequence value.
If a sequence object has been created with default parameters, successive nextval calls will return successive values beginning with 1. Other behaviors can be obtained by using special parameters in the CREATE SEQUENCE command; see its command reference page for more information.
To avoid blocking concurrent transactions that obtain numbers from the same sequence, a nextval operation is never rolled back; that is, once a value has been fetched it is considered used and will not be returned again. This is true even if the surrounding transaction later aborts, or if the calling query ends up not using the value. For example an INSERT with an ON CONFLICT clause will compute the to-be-inserted tuple, including doing any required nextval calls, before detecting any conflict that would cause it to follow the ON CONFLICT rule instead. Such cases will leave unused “holes” in the sequence of assigned values. Thus, PostgreSQL sequence objects _cannot be used to obtain “gapless” sequences_.
This function requires USAGE or UPDATE privilege on the sequence.
currval
Return the value most recently obtained by nextval for this sequence in the current session. (An error is reported if nextval has never been called for this sequence in this session.) Because this is returning a session-local value, it gives a predictable answer whether or not other sessions have executed nextval since the current session did.
This function requires USAGE or SELECT privilege on the sequence.
lastval
Return the value most recently returned by nextval in the current session. This function is identical to currval, except that instead of taking the sequence name as an argument it refers to whichever sequence nextval was most recently applied to in the current session. It is an error to call lastval if nextval has not yet been called in the current session.
This function requires USAGE or SELECT privilege on the last used sequence.
setval
Reset the sequence object's counter value. The two-parameter form sets the sequence's last_value field to the specified value and sets its is_called field to true, meaning that the next nextval will advance the sequence before returning a value. The value reported by currval is also set to the specified value. In the three-parameter form, is_called can be set to either true or false. true has the same effect as the two-parameter form. If it is set to false, the next nextval will return exactly the specified value, and sequence advancement commences with the following nextval. Furthermore, the value reported by currval is not changed in this case. For example,
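assuming a sequence named foo:

```sql
SELECT setval('foo', 42);           -- next nextval will return 43
SELECT setval('foo', 42, true);     -- same as above
SELECT setval('foo', 42, false);    -- next nextval will return 42
```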
The result returned by setval is just the value of its second argument.
Because sequences are non-transactional, changes made by setval are not undone if the transaction rolls back.
This function requires UPDATE privilege on the sequence.
Table 9.40, Table 9.41 and Table 9.42 summarize the functions and operators that are provided for full text searching. See Chapter 12 for a detailed explanation of PostgreSQL's text search facility.
Table 9.40. Text Search Operators
Operator | Return Type | Description | Example | Result | |||||||
---|---|---|---|---|---|---|---|---|---|---|---|
The tsquery containment operators consider only the lexemes listed in the two queries, ignoring the combining operators.
In addition to the operators shown in the table, the ordinary B-tree comparison operators (=, <, etc) are defined for types tsvector and tsquery. These are not very useful for text searching but allow, for example, unique indexes to be built on columns of these types.
Table 9.41. Text Search Functions
Function | Return Type | Description | Example | Result | ||
---|---|---|---|---|---|---|
All the text search functions that accept an optional regconfig argument will use the configuration specified by default_text_search_config when that argument is omitted.
The functions in Table 9.42 are listed separately because they are not usually used in everyday text searching operations. They are helpful for development and debugging of new text search configurations.
Table 9.42. Text Search Debugging Functions
The geometric types point, box, lseg, line, path, polygon, and circle have a large set of native support functions and operators, shown in Table 9.33, Table 9.34, and Table 9.35.
Note that the “same as” operator, ~=, represents the usual notion of equality for the point, box, polygon, and circle types. Some of these types also have an = operator, but = compares for equal _areas_ only. The other scalar comparison operators (<= and so on) likewise compare areas for these types.
Table 9.33. Geometric Operators
Operator | Description | Example | ||||
---|---|---|---|---|---|---|
Before PostgreSQL 8.2, the containment operators @> and <@ were respectively called ~ and @. These names are still available, but are deprecated and will eventually be removed.
Table 9.34. Geometric Functions
Table 9.35. Geometric Type Conversion Functions
It is possible to access the two component numbers of a point as though the point were an array with indexes 0 and 1. For example, if t.p is a point column then SELECT p[0] FROM t retrieves the X coordinate and UPDATE t SET p[1] = ... changes the Y coordinate. In the same way, a value of type box or lseg can be treated as an array of two point values.
The area function works for the types box, circle, and path. The area function only works on the path data type if the points in the path are non-intersecting. For example, the path '((0,0),(0,1),(2,1),(2,2),(1,2),(1,0),(0,0))'::PATH will not work; however, the following visually identical path '((0,0),(0,1),(1,1),(1,2),(2,2),(2,1),(1,1),(1,0),(0,0))'::PATH will work. If the concept of an intersecting versus non-intersecting path is confusing, draw both of the above paths side by side on a piece of graph paper.
Table 9.36 shows the operators available for the cidr and inet types. The operators <<, <<=, >>, >>=, and && test for subnet inclusion. They consider only the network parts of the two addresses (ignoring any host part) and determine whether one network is identical to or a subnet of the other.
Table 9.36. cidr and inet Operators
Operator | Description | Example | ||
---|---|---|---|---|
Table 9.37 shows the functions available for use with the cidr and inet types. The abbrev, host, and text functions are primarily intended to offer alternative display formats.
Table 9.37. cidr and inet Functions
Any cidr value can be cast to inet implicitly or explicitly; therefore, the functions shown above as operating on inet also work on cidr values. (Where there are separate functions for inet and cidr, it is because the behavior should be different for the two cases.) Also, it is permitted to cast an inet value to cidr. When this is done, any bits to the right of the netmask are silently zeroed to create a valid cidr value. In addition, you can cast a text value to inet or cidr using normal casting syntax: for example, inet(expression) or colname::cidr.
Table 9.38 shows the functions available for use with the macaddr type. The function trunc(macaddr) returns a MAC address with the last 3 bytes set to zero. This can be used to associate the remaining prefix with a manufacturer.
Table 9.38. macaddr Functions
The macaddr type also supports the standard relational operators (>, <=, etc.) for lexicographical ordering, and the bitwise arithmetic operators (~, & and |) for NOT, AND and OR.
Table 9.39 shows the functions available for use with the macaddr8 type. The function trunc(macaddr8) returns a MAC address with the last 5 bytes set to zero. This can be used to associate the remaining prefix with a manufacturer.
Table 9.39. macaddr8 Functions
The macaddr8 type also supports the standard relational operators (>, <=, etc.) for ordering, and the bitwise arithmetic operators (~, & and |) for NOT, AND and OR.
See Section 8.17 for an overview of range types.
Table 9.53 shows the specialized operators available for range types. In addition to those, the usual comparison operators shown in Table 9.1 are available for range types. The comparison operators order first by the range lower bounds, and only if those are equal do they compare the upper bounds. This does not usually result in a useful overall ordering, but the operators are provided to allow unique indexes to be constructed on ranges.
The left-of/right-of/adjacent operators always return false when an empty range is involved; that is, an empty range is not considered to be either before or after any other range.
Table 9.54 shows the functions available for use with range types.
The lower_inc
, upper_inc
, lower_inf
, and upper_inf
functions all return false for an empty range.
Table 9.48 shows the operators available for array types.
Table 9.48. Array Operators
Operator | Description | Example | Result | ||||
---|---|---|---|---|---|---|---|
Array comparisons compare the array contents element-by-element, using the default B-tree comparison function for the element data type. In multidimensional arrays the elements are visited in row-major order (last subscript varies most rapidly). If the contents of two arrays are equal but the dimensionality is different, the first difference in the dimensionality information determines the sort order. (This is a change from versions of PostgreSQL prior to 8.2: older versions would claim that two arrays with the same contents were equal, even if the number of dimensions or subscript ranges were different.)
See Section 8.15 for more details about array operator behavior. See Section 11.2 for more details about which operators support indexed operations.
Table 9.49 shows the functions available for use with array types. See Section 8.15 for more information and examples of the use of these functions.
Table 9.49. Array Functions
In array_position and array_positions, each array element is compared to the searched value using IS NOT DISTINCT FROM semantics.
In array_position, NULL is returned if the value is not found.
In array_positions, NULL is returned only if the array is NULL; if the value is not found in the array, an empty array is returned instead.
In string_to_array, if the delimiter parameter is NULL, each character in the input string will become a separate element in the resulting array. If the delimiter is an empty string, then the entire input string is returned as a one-element array. Otherwise the input string is split at each occurrence of the delimiter string.
In string_to_array, if the null-string parameter is omitted or NULL, none of the substrings of the input will be replaced by NULL. In array_to_string, if the null-string parameter is omitted or NULL, any null elements in the array are simply skipped and not represented in the output string.
There are two differences in the behavior of string_to_array from pre-9.1 versions of PostgreSQL. First, it will return an empty (zero-element) array rather than NULL when the input string is of zero length. Second, if the delimiter string is NULL, the function splits the input into individual characters, rather than returning NULL as before.
See also Section 9.20 about the aggregate function array_agg for use with arrays.
This section describes functions that possibly return more than one row. The most widely used functions in this class are series generating functions, as detailed in Table 9.61 and Table 9.62. Other, more specialized set-returning functions are described elsewhere in this manual. See Section 7.2.1.4 for ways to combine multiple set-returning functions.
Function | Argument Type | Return Type | Description |
---|---|---|---|
When step is positive, zero rows are returned if start is greater than stop. Conversely, when step is negative, zero rows are returned if start is less than stop. Zero rows are also returned for NULL inputs. It is an error for step to be zero. Some examples follow:
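For instance (row sets shown as comments):

```sql
SELECT * FROM generate_series(2, 4);        -- 2, 3, 4
SELECT * FROM generate_series(5, 1, -2);    -- 5, 3, 1
SELECT * FROM generate_series(4, 3);        -- zero rows
```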
generate_subscripts is a convenience function that generates the set of valid subscripts for the specified dimension of the given array. Zero rows are returned for arrays that do not have the requested dimension, or for NULL arrays (but valid subscripts are returned for NULL array elements). Some examples follow:
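For instance (row sets shown as comments):

```sql
SELECT generate_subscripts('{NULL,1,NULL,2}'::int[], 1) AS s;  -- 1, 2, 3, 4
SELECT generate_subscripts('{-1,-2}'::int[], 2) AS s;          -- zero rows: no second dimension
```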
When a function in the FROM clause is suffixed by WITH ORDINALITY, a bigint column is appended to the output which starts from 1 and increments by 1 for each row of the function's output. This is most useful in the case of set-returning functions such as unnest().
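For instance (output shown as a comment):

```sql
SELECT * FROM unnest(ARRAY['a', 'b', 'c']) WITH ORDINALITY;
--  unnest | ordinality
-- --------+------------
--  a      |          1
--  b      |          2
--  c      |          3
```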
Window functions provide the ability to perform calculations across sets of rows that are related to the current query row. See Section 3.5 for an introduction to this feature, and Section 4.2.8 for syntax details.
The built-in window functions are listed in Table 9.60. Note that these functions must be invoked using window function syntax, i.e., an OVER
clause is required.
In addition to these functions, any built-in or user-defined ordinary aggregate (i.e., not ordered-set or hypothetical-set aggregates) can be used as a window function; see Section 9.21 for a list of the built-in aggregates. Aggregate functions act as window functions only when an OVER
clause follows the call; otherwise they act as plain aggregates and return a single row for the entire set.
All of the functions listed in Table 9.60 depend on the sort ordering specified by the ORDER BY
clause of the associated window definition. Rows that are not distinct when considering only the ORDER BY
columns are said to be peers. The four ranking functions (including cume_dist
) are defined so that they give the same answer for all rows of a peer group.
Note that first_value
, last_value
, and nth_value
consider only the rows within the “window frame”, which by default contains the rows from the start of the partition through the last peer of the current row. This is likely to give unhelpful results for last_value
and sometimes also nth_value
. You can redefine the frame by adding a suitable frame specification (RANGE
, ROWS
or GROUPS
) to the OVER
clause. See Section 4.2.8 for more information about frame specifications.
When an aggregate function is used as a window function, it aggregates over the rows within the current row's window frame. An aggregate used with ORDER BY
and the default window frame definition produces a “running sum” type of behavior, which may or may not be what's wanted. To obtain aggregation over the whole partition, omit ORDER BY
or use ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING
. Other frame specifications can be used to obtain other effects.
The SQL standard defines a RESPECT NULLS
or IGNORE NULLS
option for lead
, lag
, first_value
, last_value
, and nth_value
. This is not implemented in PostgreSQL: the behavior is always the same as the standard's default, namely RESPECT NULLS
. Likewise, the standard's FROM FIRST
or FROM LAST
option for nth_value
is not implemented: only the default FROM FIRST
behavior is supported. (You can achieve the result of FROM LAST
by reversing the ORDER BY
ordering.)
Aggregate functions compute a single result from a set of input values. The built-in general-purpose aggregate functions are listed in Table 9.55 while statistical aggregates are in Table 9.56. The built-in within-group ordered-set aggregate functions are listed in Table 9.57 while the built-in within-group hypothetical-set ones are in Table 9.58. Grouping operations, which are closely related to aggregate functions, are listed in Table 9.59. The special syntax considerations for aggregate functions are explained in Section 4.2.7. Consult Section 2.7 for additional introductory information.
Aggregate functions that support Partial Mode are eligible to participate in various optimizations, such as parallel aggregation.
It should be noted that except for count
, these functions return a null value when no rows are selected. In particular, sum
of no rows returns null, not zero as one might expect, and array_agg
returns null rather than an empty array when there are no input rows. The coalesce
function can be used to substitute zero or an empty array for null when necessary.
The aggregate functions array_agg
, json_agg
, jsonb_agg
, json_object_agg
, jsonb_object_agg
, string_agg
, and xmlagg
, as well as similar user-defined aggregate functions, produce meaningfully different result values depending on the order of the input values. This ordering is unspecified by default, but can be controlled by writing an ORDER BY
clause within the aggregate call, as shown in Section 4.2.7. Alternatively, supplying the input values from a sorted subquery will usually work. For example:
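A sketch of the sorted-subquery approach, where test, x and y are hypothetical names:

```sql
-- Aggregate the x values in descending order of y
SELECT xmlagg(x) FROM (SELECT x FROM test ORDER BY y DESC) AS tab;
```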
Beware that this approach can fail if the outer query level contains additional processing, such as a join, because that might cause the subquery's output to be reordered before the aggregate is computed.
The boolean aggregates bool_and
and bool_or
correspond to the standard SQL aggregates every
and any
or some
. PostgreSQL supports every
, but not any
or some
, because there is an ambiguity built into the standard syntax:
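The ambiguous form looks like this (t1, t2, b1 and b2 are hypothetical names):

```sql
-- Is ANY introducing a subquery comparison here, or is it an
-- aggregate applied to a one-row Boolean subquery result?
SELECT b1 = ANY ((SELECT b2 FROM t2)) FROM t1;
```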
Here ANY
can be considered either as introducing a subquery, or as being an aggregate function, if the subquery returns one row with a Boolean value. Thus the standard name cannot be given to these aggregates.
Users accustomed to working with other SQL database management systems might be disappointed by the performance of the count
aggregate when it is applied to the entire table. A query like:
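That is, a whole-table count such as (sometable being a hypothetical name):

```sql
SELECT count(*) FROM sometable;
```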
will require effort proportional to the size of the table: PostgreSQL will need to scan either the entire table or the entirety of an index that includes all rows in the table.
Table 9.56 shows aggregate functions typically used in statistical analysis. (These are separated out merely to avoid cluttering the listing of more-commonly-used aggregates.) Functions shown as accepting numeric_type
are available for all the types smallint
, integer
, bigint
, numeric
, real
, and double precision
. Where the description mentions N
, it means the number of input rows for which all the input expressions are non-null. In all cases, null is returned if the computation is meaningless, for example when N
is zero.
Table 9.57 shows some aggregate functions that use the ordered-set aggregate syntax. These functions are sometimes referred to as “inverse distribution” functions. Their aggregated input is introduced by ORDER BY
, and they may also take a direct argument that is not aggregated, but is computed only once. All these functions ignore null values in their aggregated input. For those that take a fraction
parameter, the fraction value must be between 0 and 1; an error is thrown if not. However, a null fraction
value simply produces a null result.
Each of the “hypothetical-set” aggregates listed in Table 9.58 is associated with a window function of the same name defined in Section 9.22. In each case, the aggregate's result is the value that the associated window function would have returned for the “hypothetical” row constructed from args
, if such a row had been added to the sorted group of rows represented by the sorted_args
. For each of these functions, the list of direct arguments given in args
must match the number and types of the aggregated arguments given in sorted_args
. Unlike most built-in aggregates, these aggregates are not strict, that is they do not drop input rows containing nulls. Null values sort according to the rule specified in the ORDER BY
clause.
The grouping operations shown in Table 9.59 are used in conjunction with grouping sets (see Section 7.2.4) to distinguish result rows. The arguments to the GROUPING
function are not actually evaluated, but they must exactly match expressions given in the GROUP BY
clause of the associated query level. For example:
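A sketch of such a query, where items_sold is a hypothetical table with make, model and sales columns:

```sql
SELECT make, model, GROUPING(make, model), sum(sales)
  FROM items_sold
 GROUP BY ROLLUP(make, model);
```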
Here, the grouping
value 0
in the first four rows shows that those have been grouped normally, over both the grouping columns. The value 1
indicates that model
was not grouped by in the next-to-last two rows, and the value 3
indicates that neither make
nor model
was grouped by in the last row (which therefore is an aggregate over all the input rows).
While many uses of triggers involve user-written trigger functions, PostgreSQL provides a few built-in trigger functions that can be used directly in user-defined triggers. These are summarized in . (Additional built-in trigger functions exist, which implement foreign key constraints and deferred index constraints. Those are not documented here since users need not use them directly.)
For more information about creating triggers, see .
The suppress_redundant_updates_trigger function, when applied as a row-level BEFORE UPDATE trigger, will prevent any update that does not actually change the data in the row from taking place. This overrides the normal behavior, which always performs a physical row update regardless of whether or not the data has changed. (This normal behavior makes updates run faster, since no checking is required, and is also useful in certain cases.)
Ideally, you should avoid running updates that don't actually change the data in the record. Redundant updates can cost considerable unnecessary time, especially if there are lots of indexes to alter, and space in dead rows that will eventually have to be vacuumed. However, detecting such situations in client code is not always easy, or even possible, and writing expressions to detect them is error-prone. An alternative is to use suppress_redundant_updates_trigger, which will skip updates that don't change the data. You should use this with care, however. The trigger takes a small but non-trivial time for each record, so if most of the records affected by an update are actually changed, use of this trigger will make the update run slower than average.
The suppress_redundant_updates_trigger function can be added to a table like this:
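where tablename stands for the table in question:

```sql
CREATE TRIGGER z_min_update
BEFORE UPDATE ON tablename
FOR EACH ROW EXECUTE PROCEDURE suppress_redundant_updates_trigger();
```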
In most cases, you would want to fire this trigger last for each row, so that it does not override other triggers that might wish to alter the row. Bearing in mind that triggers fire in name order, you would therefore choose a trigger name that comes after the name of any other trigger you might have on the table. (Hence the “z” prefix in the example.)
PostgreSQL provides a function to inspect complex statistics defined using the CREATE STATISTICS
command.
pg_mcv_list_items
returns a list of all items stored in a multi-column MCV list, and returns the following columns:
Name | Type | Description |
---|
The pg_mcv_list_items
function can be used like this:
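A sketch of such a query, assuming an extended statistics object named stts (adapted from the PostgreSQL documentation; the pg_statistic_ext_data catalog exists in PostgreSQL 12 and later):

```sql
SELECT m.* FROM pg_statistic_ext
  JOIN pg_statistic_ext_data ON (oid = stxoid),
       pg_mcv_list_items(stxdmcv) m
 WHERE stxname = 'stts';
```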
Values of the pg_mcv_list
can be obtained only from the pg_statistic_ext_data.stxdmcv
column.
PostgreSQL provides these helper functions to retrieve information from event triggers.
For more information about event triggers, see .
pg_event_trigger_ddl_commands
returns a list of DDL commands executed by each user action, when invoked in a function attached to a ddl_command_end
event trigger. If called in any other context, an error is raised. pg_event_trigger_ddl_commands
returns one row for each base command executed; some commands that are a single SQL sentence may return more than one row. This function returns the following columns:
Name | Type | Description |
---|
pg_event_trigger_dropped_objects
returns a list of all objects dropped by the command in whose sql_drop
event it is called. If called in any other context, an error is raised. This function returns the following columns:
The pg_event_trigger_dropped_objects
function can be used in an event trigger like this:
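A sketch of such an event trigger, following the example in the PostgreSQL documentation:

```sql
-- Log each object dropped by a DROP command
CREATE FUNCTION test_event_trigger_for_drops()
        RETURNS event_trigger LANGUAGE plpgsql AS $$
DECLARE
    obj record;
BEGIN
    FOR obj IN SELECT * FROM pg_event_trigger_dropped_objects()
    LOOP
        RAISE NOTICE '% dropped object: % %.% %',
                     tg_tag,
                     obj.object_type,
                     obj.schema_name,
                     obj.object_name,
                     obj.object_identity;
    END LOOP;
END;
$$;
CREATE EVENT TRIGGER test_event_trigger_for_drops
   ON sql_drop
   EXECUTE PROCEDURE test_event_trigger_for_drops();
```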
These functions can be used in an event trigger like this:
Tests whether the first operand matches the regular expression given by the second operand, optionally with modifications described by a string of flag
characters (see )
Function | Return Type | Description | Example | Result |
---|---|---|---|---|
Conversion Name | Source Encoding | Destination Encoding |
---|---|---|
Function | Return Type | Description | Example | Result |
---|---|---|---|---|
Function | Return Type | Description | Example |
---|---|---|---|
Function | Return Type | Description | Example |
---|---|---|---|
Function | Return Type | Description | Example | Result |
---|---|---|---|---|
Function | Return Type | Description | Example | Result |
---|---|---|---|---|
Function | Return Type | Description | Example | Result |
---|---|---|---|---|
Function | Return Type | Description | Example | Result |
---|---|---|---|---|
Function | Return Type | Description |
---|---|---|
Name | Type | Description |
---|---|---|
The functions shown below provide information about a table for which a table_rewrite
event has just been called. If called in any other context, an error is raised.
string
||
string
text
String concatenation
'Post' || 'greSQL'
PostgreSQL
string
||
non-string
or non-string
||
string
text
String concatenation with one non-string input
'Value: ' || 42
Value: 42
bit_length(
string
)
int
Number of bits in string
bit_length('jose')
32
char_length(
string
) or character_length(
string
)
int
Number of characters in string
char_length('jose')
4
lower(
string
)
text
Convert string to lower case
lower('TOM')
tom
octet_length(
string
)
int
Number of bytes in string
octet_length('jose')
4
overlay(
string
placing string
from int
[for int
])
text
Replace substring
overlay('Txxxxas' placing 'hom' from 2 for 4)
Thomas
position(
substring
in string
)
int
Location of specified substring
position('om' in 'Thomas')
3
substring(
string
[from int
] [for int
])
text
Extract substring
substring('Thomas' from 2 for 3)
hom
substring(
string
from pattern
)
text
Extract substring matching POSIX regular expression. See Section 9.7 for more information on pattern matching.
substring('Thomas' from '...$')
mas
substring(
string
from pattern
for escape
)
text
Extract substring matching SQL regular expression. See Section 9.7 for more information on pattern matching.
substring('Thomas' from '%#"o_a#"_' for '#')
oma
trim([leading | trailing | both] [
characters
] from string
)
text
Remove the longest string containing only characters from characters
(a space by default) from the start, end, or both ends (both
is the default) of string
trim(both 'xyz' from 'yxTomxx')
Tom
trim([leading | trailing | both] [from]
string
[, characters
] )
text
Non-standard syntax for trim()
trim(both from 'yxTomxx', 'xyz')
Tom
upper(
string
)
text
Convert string to upper case
upper('tom')
TOM
ascii(
string
)
int
ASCII code of the first character of the argument. For UTF8 returns the Unicode code point of the character. For other multibyte encodings, the argument must be an ASCII character.
ascii('x')
120
btrim(
string
text
[, characters
text
])
text
Remove the longest string consisting only of characters in characters
(a space by default) from the start and end of string
btrim('xyxtrimyyx', 'xyz')
trim
chr(int
)
text
Character with the given code. For UTF8 the argument is treated as a Unicode code point. For other multibyte encodings the argument must designate an ASCII character. The NULL (0) character is not allowed because text data types cannot store such bytes.
chr(65)
A
concat(
str
"any"
[, str
"any"
[, ...] ])
text
Concatenate the text representations of all the arguments. NULL arguments are ignored.
concat('abcde', 2, NULL, 22)
abcde222
concat_ws(
sep
text
, str
"any"
[, str
"any"
[, ...] ])
text
Concatenate all but the first argument with separators. The first argument is used as the separator string. NULL arguments are ignored.
concat_ws(',', 'abcde', 2, NULL, 22)
abcde,2,22
convert(
string
bytea
, src_encoding
name
, dest_encoding
name
)
bytea
Convert string to dest_encoding
. The original encoding is specified by src_encoding
. The string
must be valid in this encoding. Conversions can be defined by CREATE CONVERSION
. Also there are some predefined conversions. See Table 9.11 for available conversions.
convert('text_in_utf8', 'UTF8', 'LATIN1')
text_in_utf8
represented in Latin-1 encoding (ISO 8859-1)
convert_from(
string
bytea
, src_encoding
name
)
text
Convert string to the database encoding. The original encoding is specified by src_encoding
. The string
must be valid in this encoding.
convert_from('text_in_utf8', 'UTF8')
text_in_utf8
represented in the current database encoding
convert_to(
string
text
, dest_encoding
name
)
bytea
Convert string to dest_encoding
.
convert_to('some text', 'UTF8')
some text
represented in the UTF8 encoding
decode(
string
text
, format
text
)
bytea
Decode binary data from textual representation in string
. Options for format
are same as in encode
.
decode('MTIzAAE=', 'base64')
\x3132330001
encode(
data
bytea
, format
text
)
text
Encode binary data into a textual representation. Supported formats are: base64
, hex
, escape
. escape
converts zero bytes and high-bit-set bytes to octal sequences (\
nnn
) and doubles backslashes.
encode('123\000\001', 'base64')
MTIzAAE=
format
(formatstr
text
[, formatarg
"any"
[, ...] ])
text
Format arguments according to a format string. This function is similar to the C function sprintf
. See Section 9.4.1.
format('Hello %s, %1$s', 'World')
Hello World, World
initcap(
string
)
text
Convert the first letter of each word to upper case and the rest to lower case. Words are sequences of alphanumeric characters separated by non-alphanumeric characters.
initcap('hi THOMAS')
Hi Thomas
left(
str
text
, n
int
)
text
Return first n
characters in the string. When n
is negative, return all but last |n| characters.
left('abcde', 2)
ab
length(
string
)
int
Number of characters in string
length('jose')
4
length(
string
bytea
, encoding
name
)
int
Number of characters in string
in the given encoding
. The string
must be valid in this encoding.
length('jose', 'UTF8')
4
lpad(
string
text
, length
int
[, fill
text
])
text
Fill up the string
to length length
by prepending the characters fill
(a space by default). If the string
is already longer than length
then it is truncated (on the right).
lpad('hi', 5, 'xy')
xyxhi
ltrim(
string
text
[, characters
text
])
text
Remove the longest string containing only characters from characters
(a space by default) from the start of string
ltrim('zzzytest', 'xyz')
test
md5(
string
)
text
Calculates the MD5 hash of string
, returning the result in hexadecimal
md5('abc')
900150983cd24fb0 d6963f7d28e17f72
parse_ident(
qualified_identifier
text
[, strictmode
boolean
DEFAULT true ] )
text[]
Split qualified_identifier
into an array of identifiers, removing any quoting of individual identifiers. By default, extra characters after the last identifier are considered an error; but if the second parameter is false
, then such extra characters are ignored. (This behavior is useful for parsing names for objects like functions.) Note that this function does not truncate over-length identifiers. If you want truncation you can cast the result to name[]
.
parse_ident('"SomeSchema".someTable')
{SomeSchema,sometable}
pg_client_encoding()
name
Current client encoding name
pg_client_encoding()
SQL_ASCII
quote_ident(
string
text
)
text
Return the given string suitably quoted to be used as an identifier in an SQL statement string. Quotes are added only if necessary (i.e., if the string contains non-identifier characters or would be case-folded). Embedded quotes are properly doubled. See also Example 42.1.
quote_ident('Foo bar')
"Foo bar"
quote_literal(
string
text
)
text
Return the given string suitably quoted to be used as a string literal in an SQL statement string. Embedded single-quotes and backslashes are properly doubled. Note that quote_literal
returns null on null input; if the argument might be null, quote_nullable
is often more suitable. See also Example 42.1.
quote_literal(E'O\'Reilly')
'O''Reilly'
quote_literal(
value
anyelement
)
text
Coerce the given value to text and then quote it as a literal. Embedded single-quotes and backslashes are properly doubled.
quote_literal(42.5)
'42.5'
quote_nullable(
string
text
)
text
Return the given string suitably quoted to be used as a string literal in an SQL statement string; or, if the argument is null, return NULL
. Embedded single-quotes and backslashes are properly doubled. See also Example 42.1.
quote_nullable(NULL)
NULL
quote_nullable(
value
anyelement
)
text
Coerce the given value to text and then quote it as a literal; or, if the argument is null, return NULL
. Embedded single-quotes and backslashes are properly doubled.
quote_nullable(42.5)
'42.5'
regexp_match(
string
text
, pattern
text
[, flags
text
])
text[]
Return captured substring(s) resulting from the first match of a POSIX regular expression to the string
. See Section 9.7.3 for more information.
regexp_match('foobarbequebaz', '(bar)(beque)')
{bar,beque}
regexp_matches(
string
text
, pattern
text
[, flags
text
])
setof text[]
Return captured substring(s) resulting from matching a POSIX regular expression to the string
. See Section 9.7.3 for more information.
regexp_matches('foobarbequebaz', 'ba.', 'g')
{bar}
{baz}
(2 rows)
regexp_replace(
string
text
, pattern
text
, replacement
text
[, flags
text
])
text
Replace substring(s) matching a POSIX regular expression. See Section 9.7.3 for more information.
regexp_replace('Thomas', '.[mN]a.', 'M')
ThM
regexp_split_to_array(
string
text
, pattern
text
[, flags
text
])
text[]
Split string
using a POSIX regular expression as the delimiter. See Section 9.7.3 for more information.
regexp_split_to_array('hello world', '\s+')
{hello,world}
regexp_split_to_table(
string
text
, pattern
text
[, flags
text
])
setof text
Split string
using a POSIX regular expression as the delimiter. See Section 9.7.3 for more information.
regexp_split_to_table('hello world', '\s+')
hello
world
(2 rows)
repeat(
string
text
, number
int
)
text
Repeat string
the specified number
of times
repeat('Pg', 4)
PgPgPgPg
replace(
string
text
, from
text
, to
text
)
text
Replace all occurrences in string
of substring from
with substring to
replace('abcdefabcdef', 'cd', 'XX')
abXXefabXXef
reverse(
str
)
text
Return reversed string.
reverse('abcde')
edcba
right(
str
text
, n
int
)
text
Return last n
characters in the string. When n
is negative, return all but first |n| characters.
right('abcde', 2)
de
rpad(
string
text
, length
int
[, fill
text
])
text
Fill up the string
to length length
by appending the characters fill
(a space by default). If the string
is already longer than length
then it is truncated.
rpad('hi', 5, 'xy')
hixyx
rtrim(
string
text
[, characters
text
])
text
Remove the longest string containing only characters from characters
(a space by default) from the end of string
rtrim('testxxzx', 'xyz')
test
split_part(
string
text
, delimiter
text
, field
int
)
text
Split string
on delimiter
and return the given field (counting from one)
split_part('abc~@~def~@~ghi', '~@~', 2)
def
strpos(
string
, substring
)
int
Location of specified substring (same as position(
substring
in string
), but note the reversed argument order)
strpos('high', 'ig')
2
substr(
string
, from
[, count
])
text
Extract substring (same as substring(string from from for count))
substr('alphabet', 3, 2)
ph
starts_with(
string
, prefix
)
bool
Returns true if string
starts with prefix
.
starts_with('alphabet', 'alph')
t
to_ascii(
string
text
[, encoding
text
])
text
Convert string
to ASCII from another encoding (only supports conversion from LATIN1
, LATIN2
, LATIN9
, and WIN1250
encodings)
to_ascii('Karel')
Karel
to_hex(
number
int
or bigint
)
text
Convert number
to its equivalent hexadecimal representation
to_hex(2147483647)
7fffffff
translate(
string
text
, from
text
, to
text
)
text
Any character in string
that matches a character in the from
set is replaced by the corresponding character in the to
set. If from
is longer than to
, occurrences of the extra characters in from
are removed.
translate('12345', '143', 'ax')
a2x5
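A few of the functions above in action (an illustrative psql sketch; the results restate the table entries):

```sql
SELECT split_part('abc~@~def~@~ghi', '~@~', 2);  -- def
SELECT translate('12345', '143', 'ax');          -- a2x5
SELECT lpad('hi', 5, 'xy');                      -- xyxhi
SELECT format('Hello %s, %1$s', 'World');        -- Hello World, World
```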
ascii_to_mic
SQL_ASCII
MULE_INTERNAL
ascii_to_utf8
SQL_ASCII
UTF8
big5_to_euc_tw
BIG5
EUC_TW
big5_to_mic
BIG5
MULE_INTERNAL
big5_to_utf8
BIG5
UTF8
euc_cn_to_mic
EUC_CN
MULE_INTERNAL
euc_cn_to_utf8
EUC_CN
UTF8
euc_jp_to_mic
EUC_JP
MULE_INTERNAL
euc_jp_to_sjis
EUC_JP
SJIS
euc_jp_to_utf8
EUC_JP
UTF8
euc_kr_to_mic
EUC_KR
MULE_INTERNAL
euc_kr_to_utf8
EUC_KR
UTF8
euc_tw_to_big5
EUC_TW
BIG5
euc_tw_to_mic
EUC_TW
MULE_INTERNAL
euc_tw_to_utf8
EUC_TW
UTF8
gb18030_to_utf8
GB18030
UTF8
gbk_to_utf8
GBK
UTF8
iso_8859_10_to_utf8
LATIN6
UTF8
iso_8859_13_to_utf8
LATIN7
UTF8
iso_8859_14_to_utf8
LATIN8
UTF8
iso_8859_15_to_utf8
LATIN9
UTF8
iso_8859_16_to_utf8
LATIN10
UTF8
iso_8859_1_to_mic
LATIN1
MULE_INTERNAL
iso_8859_1_to_utf8
LATIN1
UTF8
iso_8859_2_to_mic
LATIN2
MULE_INTERNAL
iso_8859_2_to_utf8
LATIN2
UTF8
iso_8859_2_to_windows_1250
LATIN2
WIN1250
iso_8859_3_to_mic
LATIN3
MULE_INTERNAL
iso_8859_3_to_utf8
LATIN3
UTF8
iso_8859_4_to_mic
LATIN4
MULE_INTERNAL
iso_8859_4_to_utf8
LATIN4
UTF8
iso_8859_5_to_koi8_r
ISO_8859_5
KOI8R
iso_8859_5_to_mic
ISO_8859_5
MULE_INTERNAL
iso_8859_5_to_utf8
ISO_8859_5
UTF8
iso_8859_5_to_windows_1251
ISO_8859_5
WIN1251
iso_8859_5_to_windows_866
ISO_8859_5
WIN866
iso_8859_6_to_utf8
ISO_8859_6
UTF8
iso_8859_7_to_utf8
ISO_8859_7
UTF8
iso_8859_8_to_utf8
ISO_8859_8
UTF8
iso_8859_9_to_utf8
LATIN5
UTF8
johab_to_utf8
JOHAB
UTF8
koi8_r_to_iso_8859_5
KOI8R
ISO_8859_5
koi8_r_to_mic
KOI8R
MULE_INTERNAL
koi8_r_to_utf8
KOI8R
UTF8
koi8_r_to_windows_1251
KOI8R
WIN1251
koi8_r_to_windows_866
KOI8R
WIN866
koi8_u_to_utf8
KOI8U
UTF8
mic_to_ascii
MULE_INTERNAL
SQL_ASCII
mic_to_big5
MULE_INTERNAL
BIG5
mic_to_euc_cn
MULE_INTERNAL
EUC_CN
mic_to_euc_jp
MULE_INTERNAL
EUC_JP
mic_to_euc_kr
MULE_INTERNAL
EUC_KR
mic_to_euc_tw
MULE_INTERNAL
EUC_TW
mic_to_iso_8859_1
MULE_INTERNAL
LATIN1
mic_to_iso_8859_2
MULE_INTERNAL
LATIN2
mic_to_iso_8859_3
MULE_INTERNAL
LATIN3
mic_to_iso_8859_4
MULE_INTERNAL
LATIN4
mic_to_iso_8859_5
MULE_INTERNAL
ISO_8859_5
mic_to_koi8_r
MULE_INTERNAL
KOI8R
mic_to_sjis
MULE_INTERNAL
SJIS
mic_to_windows_1250
MULE_INTERNAL
WIN1250
mic_to_windows_1251
MULE_INTERNAL
WIN1251
mic_to_windows_866
MULE_INTERNAL
WIN866
sjis_to_euc_jp
SJIS
EUC_JP
sjis_to_mic
SJIS
MULE_INTERNAL
sjis_to_utf8
SJIS
UTF8
windows_1258_to_utf8
WIN1258
UTF8
uhc_to_utf8
UHC
UTF8
utf8_to_ascii
UTF8
SQL_ASCII
utf8_to_big5
UTF8
BIG5
utf8_to_euc_cn
UTF8
EUC_CN
utf8_to_euc_jp
UTF8
EUC_JP
utf8_to_euc_kr
UTF8
EUC_KR
utf8_to_euc_tw
UTF8
EUC_TW
utf8_to_gb18030
UTF8
GB18030
utf8_to_gbk
UTF8
GBK
utf8_to_iso_8859_1
UTF8
LATIN1
utf8_to_iso_8859_10
UTF8
LATIN6
utf8_to_iso_8859_13
UTF8
LATIN7
utf8_to_iso_8859_14
UTF8
LATIN8
utf8_to_iso_8859_15
UTF8
LATIN9
utf8_to_iso_8859_16
UTF8
LATIN10
utf8_to_iso_8859_2
UTF8
LATIN2
utf8_to_iso_8859_3
UTF8
LATIN3
utf8_to_iso_8859_4
UTF8
LATIN4
utf8_to_iso_8859_5
UTF8
ISO_8859_5
utf8_to_iso_8859_6
UTF8
ISO_8859_6
utf8_to_iso_8859_7
UTF8
ISO_8859_7
utf8_to_iso_8859_8
UTF8
ISO_8859_8
utf8_to_iso_8859_9
UTF8
LATIN5
utf8_to_johab
UTF8
JOHAB
utf8_to_koi8_r
UTF8
KOI8R
utf8_to_koi8_u
UTF8
KOI8U
utf8_to_sjis
UTF8
SJIS
utf8_to_windows_1258
UTF8
WIN1258
utf8_to_uhc
UTF8
UHC
utf8_to_windows_1250
UTF8
WIN1250
utf8_to_windows_1251
UTF8
WIN1251
utf8_to_windows_1252
UTF8
WIN1252
utf8_to_windows_1253
UTF8
WIN1253
utf8_to_windows_1254
UTF8
WIN1254
utf8_to_windows_1255
UTF8
WIN1255
utf8_to_windows_1256
UTF8
WIN1256
utf8_to_windows_1257
UTF8
WIN1257
utf8_to_windows_866
UTF8
WIN866
utf8_to_windows_874
UTF8
WIN874
windows_1250_to_iso_8859_2
WIN1250
LATIN2
windows_1250_to_mic
WIN1250
MULE_INTERNAL
windows_1250_to_utf8
WIN1250
UTF8
windows_1251_to_iso_8859_5
WIN1251
ISO_8859_5
windows_1251_to_koi8_r
WIN1251
KOI8R
windows_1251_to_mic
WIN1251
MULE_INTERNAL
windows_1251_to_utf8
WIN1251
UTF8
windows_1251_to_windows_866
WIN1251
WIN866
windows_1252_to_utf8
WIN1252
UTF8
windows_1256_to_utf8
WIN1256
UTF8
windows_866_to_iso_8859_5
WIN866
ISO_8859_5
windows_866_to_koi8_r
WIN866
KOI8R
windows_866_to_mic
WIN866
MULE_INTERNAL
windows_866_to_utf8
WIN866
UTF8
windows_866_to_windows_1251
WIN866
WIN1251
windows_874_to_utf8
WIN874
UTF8
euc_jis_2004_to_utf8
EUC_JIS_2004
UTF8
utf8_to_euc_jis_2004
UTF8
EUC_JIS_2004
shift_jis_2004_to_utf8
SHIFT_JIS_2004
UTF8
utf8_to_shift_jis_2004
UTF8
SHIFT_JIS_2004
euc_jis_2004_to_shift_jis_2004
EUC_JIS_2004
SHIFT_JIS_2004
shift_jis_2004_to_euc_jis_2004
SHIFT_JIS_2004
EUC_JIS_2004
currval(regclass)
bigint
Return value most recently obtained with nextval for specified sequence
lastval()
bigint
Return value most recently obtained with nextval for any sequence
nextval(regclass)
bigint
Advance sequence and return new value
setval(regclass, bigint)
bigint
Set sequence's current value
setval(regclass, bigint, boolean)
bigint
Set sequence's current value and is_called flag
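The sequence functions in combination (an illustrative sketch; the sequence name is arbitrary):

```sql
CREATE SEQUENCE serial START 101;
SELECT nextval('serial');    -- 101
SELECT currval('serial');    -- 101
SELECT setval('serial', 42); -- 42
SELECT nextval('serial');    -- 43, since setval left is_called true
```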
@@
boolean
tsvector matches tsquery?
to_tsvector('fat cats ate rats') @@ to_tsquery('cat & rat')
t
@@@
boolean
deprecated synonym for @@
to_tsvector('fat cats ate rats') @@@ to_tsquery('cat & rat')
t
||
tsvector
concatenate tsvectors
'a:1 b:2'::tsvector || 'c:1 d:2 b:3'::tsvector
'a':1 'b':2,5 'c':3 'd':4
&&
tsquery
AND tsquerys together
'fat | rat'::tsquery && 'cat'::tsquery
( 'fat' | 'rat' ) & 'cat'
||
tsquery
OR tsquerys together
'fat | rat'::tsquery || 'cat'::tsquery
( 'fat' | 'rat' ) | 'cat'
!!
tsquery
negate a tsquery
!! 'cat'::tsquery
!'cat'
<->
tsquery
tsquery followed by tsquery
to_tsquery('fat') <-> to_tsquery('rat')
'fat' <-> 'rat'
@>
boolean
tsquery contains another?
'cat'::tsquery @> 'cat & rat'::tsquery
f
<@
boolean
tsquery is contained in?
'cat'::tsquery <@ 'cat & rat'::tsquery
t
array_to_tsvector(text[])
tsvector
convert array of lexemes to tsvector
array_to_tsvector('{fat,cat,rat}'::text[])
'cat' 'fat' 'rat'
get_current_ts_config()
regconfig
get default text search configuration
get_current_ts_config()
english
length(tsvector)
integer
number of lexemes in tsvector
length('fat:2,4 cat:3 rat:5A'::tsvector)
3
numnode(tsquery)
integer
number of lexemes plus operators in tsquery
numnode('(fat & rat) | cat'::tsquery)
5
plainto_tsquery([ config regconfig , ] query text)
tsquery
produce tsquery ignoring punctuation
plainto_tsquery('english', 'The Fat Rats')
'fat' & 'rat'
phraseto_tsquery([ config regconfig , ] query text)
tsquery
produce tsquery that searches for a phrase, ignoring punctuation
phraseto_tsquery('english', 'The Fat Rats')
'fat' <-> 'rat'
querytree(query tsquery)
text
get indexable part of a tsquery
querytree('foo & ! bar'::tsquery)
'foo'
setweight(vector tsvector, weight "char")
tsvector
assign weight to each element of vector
setweight('fat:2,4 cat:3 rat:5B'::tsvector, 'A')
'cat':3A 'fat':2A,4A 'rat':5A
setweight(vector tsvector, weight "char", lexemes text[])
tsvector
assign weight to elements of vector that are listed in lexemes
setweight('fat:2,4 cat:3 rat:5B'::tsvector, 'A', '{cat,rat}')
'cat':3A 'fat':2,4 'rat':5A
strip(tsvector)
tsvector
remove positions and weights from tsvector
strip('fat:2,4 cat:3 rat:5A'::tsvector)
'cat' 'fat' 'rat'
to_tsquery([ config regconfig , ] query text)
tsquery
normalize words and convert to tsquery
to_tsquery('english', 'The & Fat & Rats')
'fat' & 'rat'
to_tsvector([ config regconfig , ] document text)
tsvector
reduce document text to tsvector
to_tsvector('english', 'The Fat Rats')
'fat':2 'rat':3
to_tsvector([ config regconfig , ] document json(b))
tsvector
reduce each string value in the document to a tsvector, and then concatenate those in document order to produce a single tsvector
to_tsvector('english', '{"a": "The Fat Rats"}'::json)
'fat':2 'rat':3
ts_delete(vector tsvector, lexeme text)
tsvector
remove given lexeme from vector
ts_delete('fat:2,4 cat:3 rat:5A'::tsvector, 'fat')
'cat':3 'rat':5A
ts_delete(vector tsvector, lexemes text[])
tsvector
remove any occurrence of lexemes in lexemes from vector
ts_delete('fat:2,4 cat:3 rat:5A'::tsvector, ARRAY['fat','rat'])
'cat':3
ts_filter(vector tsvector, weights "char"[])
tsvector
select only elements with given weights from vector
ts_filter('fat:2,4 cat:3b rat:5A'::tsvector, '{a,b}')
'cat':3B 'rat':5A
ts_headline([ config regconfig , ] document text, query tsquery [, options text ])
text
display a query match
ts_headline('x y z', 'z'::tsquery)
x y <b>z</b>
ts_headline([ config regconfig , ] document json(b), query tsquery [, options text ])
text
display a query match
ts_headline('{"a":"x y z"}'::json, 'z'::tsquery)
{"a":"x y <b>z</b>"}
ts_rank([ weights float4[] , ] vector tsvector, query tsquery [, normalization integer ])
float4
rank document for query
ts_rank(textsearch, query)
0.818
ts_rank_cd([ weights float4[] , ] vector tsvector, query tsquery [, normalization integer ])
float4
rank document for query using cover density
ts_rank_cd('{0.1, 0.2, 0.4, 1.0}', textsearch, query)
2.01317
ts_rewrite(query tsquery, target tsquery, substitute tsquery)
tsquery
replace target with substitute within query
ts_rewrite('a & b'::tsquery, 'a'::tsquery, 'foo | bar'::tsquery)
'b' & ( 'foo' | 'bar' )
ts_rewrite(query tsquery, select text)
tsquery
replace using targets and substitutes from a SELECT command
SELECT ts_rewrite('a & b'::tsquery, 'SELECT t,s FROM aliases')
'b' & ( 'foo' | 'bar' )
tsquery_phrase(query1 tsquery, query2 tsquery)
tsquery
make query that searches for query1 followed by query2 (same as <-> operator)
tsquery_phrase(to_tsquery('fat'), to_tsquery('cat'))
'fat' <-> 'cat'
tsquery_phrase(query1 tsquery, query2 tsquery, distance integer)
tsquery
make query that searches for query1 followed by query2 at distance distance
tsquery_phrase(to_tsquery('fat'), to_tsquery('cat'), 10)
'fat' <10> 'cat'
tsvector_to_array(tsvector)
text[]
convert tsvector to array of lexemes
tsvector_to_array('fat:2,4 cat:3 rat:5A'::tsvector)
{cat,fat,rat}
tsvector_update_trigger()
trigger
trigger function for automatic tsvector column update
CREATE TRIGGER ... tsvector_update_trigger(tsvcol, 'pg_catalog.swedish', title, body)
tsvector_update_trigger_column()
trigger
trigger function for automatic tsvector column update
CREATE TRIGGER ... tsvector_update_trigger_column(tsvcol, configcol, title, body)
unnest(tsvector, OUT lexeme text, OUT positions smallint[], OUT weights text)
setof record
expand a tsvector to a set of rows
unnest('fat:2,4 cat:3 rat:5A'::tsvector)
(cat,{3},{D}) ...
ts_debug([ config regconfig , ] document text, OUT alias text, OUT description text, OUT token text, OUT dictionaries regdictionary[], OUT dictionary regdictionary, OUT lexemes text[])
setof record
test a configuration
ts_debug('english', 'The Brightest supernovaes')
(asciiword,"Word, all ASCII",The,{english_stem},english_stem,{}) ...
ts_lexize(dict regdictionary, token text)
text[]
test a dictionary
ts_lexize('english_stem', 'stars')
{star}
ts_parse(parser_name text, document text, OUT tokid integer, OUT token text)
setof record
test a parser
ts_parse('default', 'foo - bar')
(1,foo) ...
ts_parse(parser_oid oid, document text, OUT tokid integer, OUT token text)
setof record
test a parser
ts_parse(3722, 'foo - bar')
(1,foo) ...
ts_token_type(parser_name text, OUT tokid integer, OUT alias text, OUT description text)
setof record
get token types defined by parser
ts_token_type('default')
(1,asciiword,"Word, all ASCII") ...
ts_token_type(parser_oid oid, OUT tokid integer, OUT alias text, OUT description text)
setof record
get token types defined by parser
ts_token_type(3722)
(1,asciiword,"Word, all ASCII") ...
ts_stat(sqlquery text, [ weights text , ] OUT word text, OUT ndoc integer, OUT nentry integer)
setof record
get statistics of a tsvector column
ts_stat('SELECT vector from apod')
(foo,10,15) ...
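A minimal full-text search built from the functions above (an illustrative sketch):

```sql
SELECT to_tsvector('english', 'The quick brown foxes jumped');
-- 'brown':3 'fox':4 'jump':5 'quick':2
SELECT to_tsvector('english', 'The quick brown foxes jumped')
       @@ to_tsquery('english', 'fox & quick');
-- t
```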
+
Translation
box '((0,0),(1,1))' + point '(2.0,0)'
-
Translation
box '((0,0),(1,1))' - point '(2.0,0)'
*
Scaling/rotation
box '((0,0),(1,1))' * point '(2.0,0)'
/
Scaling/rotation
box '((0,0),(2,2))' / point '(2.0,0)'
#
Point or box of intersection
box '((1,-1),(-1,1))' # box '((1,1),(-2,-2))'
#
Number of points in path or polygon
# path '((1,0),(0,1),(-1,0))'
@-@
Length or circumference
@-@ path '((0,0),(1,0))'
@@
Center
@@ circle '((0,0),10)'
##
Closest point to first operand on second operand
point '(0,0)' ## lseg '((2,0),(0,2))'
<->
Distance between
circle '((0,0),1)' <-> circle '((5,0),1)'
&&
Overlaps? (One point in common makes this true.)
box '((0,0),(1,1))' && box '((0,0),(2,2))'
<<
Is strictly left of?
circle '((0,0),1)' << circle '((5,0),1)'
>>
Is strictly right of?
circle '((5,0),1)' >> circle '((0,0),1)'
&<
Does not extend to the right of?
box '((0,0),(1,1))' &< box '((0,0),(2,2))'
&>
Does not extend to the left of?
box '((0,0),(3,3))' &> box '((0,0),(2,2))'
<<|
Is strictly below?
box '((0,0),(3,3))' <<| box '((3,4),(5,5))'
|>>
Is strictly above?
box '((3,4),(5,5))' |>> box '((0,0),(3,3))'
&<|
Does not extend above?
box '((0,0),(1,1))' &<| box '((0,0),(2,2))'
|&>
Does not extend below?
box '((0,0),(3,3))' |&> box '((0,0),(2,2))'
<^
Is below (allows touching)?
circle '((0,0),1)' <^ circle '((0,5),1)'
>^
Is above (allows touching)?
circle '((0,5),1)' >^ circle '((0,0),1)'
?#
Intersects?
lseg '((-1,0),(1,0))' ?# box '((-2,-2),(2,2))'
?-
Is horizontal?
?- lseg '((-1,0),(1,0))'
?-
Are horizontally aligned?
point '(1,0)' ?- point '(0,0)'
?|
Is vertical?
?| lseg '((-1,0),(1,0))'
?|
Are vertically aligned?
point '(0,1)' ?| point '(0,0)'
?-|
Is perpendicular?
lseg '((0,0),(0,1))' ?-| lseg '((0,0),(1,0))'
?||
Are parallel?
lseg '((-1,0),(1,0))' ?|| lseg '((-1,2),(1,2))'
@>
Contains?
circle '((0,0),2)' @> point '(1,1)'
<@
Contained in or on?
point '(1,1)' <@ circle '((0,0),2)'
~=
Same as?
polygon '((0,0),(1,1))' ~= polygon '((1,1),(0,0))'
area(object
)
double precision
area
area(box '((0,0),(1,1))')
center(object
)
point
center
center(box '((0,0),(1,2))')
diameter(circle
)
double precision
diameter of circle
diameter(circle '((0,0),2.0)')
height(box
)
double precision
vertical size of box
height(box '((0,0),(1,1))')
isclosed(path
)
boolean
a closed path?
isclosed(path '((0,0),(1,1),(2,0))')
isopen(path
)
boolean
an open path?
isopen(path '[(0,0),(1,1),(2,0)]')
length(object
)
double precision
length
length(path '((-1,0),(1,0))')
npoints(path
)
int
number of points
npoints(path '[(0,0),(1,1),(2,0)]')
npoints(polygon
)
int
number of points
npoints(polygon '((1,1),(0,0))')
pclose(path
)
path
convert path to closed
pclose(path '[(0,0),(1,1),(2,0)]')
popen(path
)
path
convert path to open
popen(path '((0,0),(1,1),(2,0))')
radius(circle
)
double precision
radius of circle
radius(circle '((0,0),2.0)')
width(box
)
double precision
horizontal size of box
width(box '((0,0),(1,1))')
box(circle
)
box
circle to box
box(circle '((0,0),2.0)')
box(point
)
box
point to empty box
box(point '(0,0)')
box(point
,point
)
box
points to box
box(point '(0,0)', point '(1,1)')
box(polygon
)
box
polygon to box
box(polygon '((0,0),(1,1),(2,0))')
bound_box(box
,box
)
box
boxes to bounding box
bound_box(box '((0,0),(1,1))', box '((3,3),(4,4))')
circle(box
)
circle
box to circle
circle(box '((0,0),(1,1))')
circle(point
,double precision
)
circle
center and radius to circle
circle(point '(0,0)', 2.0)
circle(polygon
)
circle
polygon to circle
circle(polygon '((0,0),(1,1),(2,0))')
line(point
,point
)
line
points to line
line(point '(-1,0)', point '(1,0)')
lseg(box
)
lseg
box diagonal to line segment
lseg(box '((-1,0),(1,0))')
lseg(point
,point
)
lseg
points to line segment
lseg(point '(-1,0)', point '(1,0)')
path(polygon
)
path
polygon to path
path(polygon '((0,0),(1,1),(2,0))')
point
(double precision
,double precision
)
point
construct point
point(23.4, -44.5)
point(box
)
point
center of box
point(box '((-1,0),(1,0))')
point(circle
)
point
center of circle
point(circle '((0,0),2.0)')
point(lseg
)
point
center of line segment
point(lseg '((-1,0),(1,0))')
point(polygon
)
point
center of polygon
point(polygon '((0,0),(1,1),(2,0))')
polygon(box
)
polygon
box to 4-point polygon
polygon(box '((0,0),(1,1))')
polygon(circle
)
polygon
circle to 12-point polygon
polygon(circle '((0,0),2.0)')
polygon(npts
,circle
)
polygon
circle tonpts
-point polygon
polygon(12, circle '((0,0),2.0)')
polygon(path
)
polygon
path to polygon
polygon(path '((0,0),(1,1),(2,0))')
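A few geometric operators and functions together (an illustrative sketch; the distance follows from the 3-4-5 right triangle):

```sql
SELECT box '((0,0),(1,1))' && box '((0,0),(2,2))';  -- t, boxes overlap
SELECT area(box '((0,0),(2,2))');                   -- 4
SELECT point '(0,0)' <-> point '(3,4)';             -- 5
```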
<
is less than
inet '192.168.1.5' < inet '192.168.1.6'
<=
is less than or equal
inet '192.168.1.5' <= inet '192.168.1.5'
=
equals
inet '192.168.1.5' = inet '192.168.1.5'
>=
is greater or equal
inet '192.168.1.5' >= inet '192.168.1.5'
>
is greater than
inet '192.168.1.5' > inet '192.168.1.4'
<>
is not equal
inet '192.168.1.5' <> inet '192.168.1.4'
<<
is contained by
inet '192.168.1.5' << inet '192.168.1/24'
<<=
is contained by or equals
inet '192.168.1/24' <<= inet '192.168.1/24'
>>
contains
inet '192.168.1/24' >> inet '192.168.1.5'
>>=
contains or equals
inet '192.168.1/24' >>= inet '192.168.1/24'
&&
contains or is contained by
inet '192.168.1/24' && inet '192.168.1.80/28'
~
bitwise NOT
~ inet '192.168.1.6'
&
bitwise AND
inet '192.168.1.6' & inet '0.0.0.255'
|
bitwise OR
inet '192.168.1.6' | inet '0.0.0.255'
+
addition
inet '192.168.1.6' + 25
-
subtraction
inet '192.168.1.43' - 36
-
subtraction
inet '192.168.1.43' - inet '192.168.1.19'
abbrev(inet
)
text
abbreviated display format as text
abbrev(inet '10.1.0.0/16')
10.1.0.0/16
abbrev(cidr
)
text
abbreviated display format as text
abbrev(cidr '10.1.0.0/16')
10.1/16
broadcast(inet
)
inet
broadcast address for network
broadcast('192.168.1.5/24')
192.168.1.255/24
family(inet
)
int
extract family of address; 4 for IPv4, 6 for IPv6
family('::1')
6
host(inet
)
text
extract IP address as text
host('192.168.1.5/24')
192.168.1.5
hostmask(inet
)
inet
construct host mask for network
hostmask('192.168.23.20/30')
0.0.0.3
masklen(inet
)
int
extract netmask length
masklen('192.168.1.5/24')
24
netmask(inet
)
inet
construct netmask for network
netmask('192.168.1.5/24')
255.255.255.0
network(inet
)
cidr
extract network part of address
network('192.168.1.5/24')
192.168.1.0/24
set_masklen(inet
,int
)
inet
set netmask length for inet value
set_masklen('192.168.1.5/24', 16)
192.168.1.5/16
set_masklen(cidr
,int
)
cidr
set netmask length for cidr value
set_masklen('192.168.1.0/24'::cidr, 16)
192.168.0.0/16
text(inet
)
text
extract IP address and netmask length as text
text(inet '192.168.1.5')
192.168.1.5/32
inet_same_family(inet
,inet
)
boolean
are the addresses from the same family?
inet_same_family('192.168.1.5/24', '::1')
false
inet_merge(inet
,inet
)
cidr
the smallest network which includes both of the given networks
inet_merge('192.168.1.5/24', '192.168.2.5/24')
192.168.0.0/22
trunc(macaddr
)
macaddr
set last 3 bytes to zero
trunc(macaddr '12:34:56:78:90:ab')
12:34:56:00:00:00
trunc(macaddr8
)
macaddr8
set last 5 bytes to zero
trunc(macaddr8 '12:34:56:78:90:ab:cd:ef')
12:34:56:00:00:00:00:00
macaddr8_set7bit(macaddr8
)
macaddr8
set 7th bit to one, also known as modified EUI-64, for inclusion in an IPv6 address
macaddr8_set7bit(macaddr8 '00:34:56:ab:cd:ef')
02:34:56:ff:fe:ab:cd:ef
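The network functions and containment operators together (an illustrative sketch; the results restate the table entries):

```sql
SELECT network('192.168.1.5/24');                    -- 192.168.1.0/24
SELECT broadcast('192.168.1.5/24');                  -- 192.168.1.255/24
SELECT inet '192.168.1.5' << inet '192.168.1.0/24';  -- t, address is in the network
```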
Operator
Description
Example(s)
anyrange
@>
anyrange
→ boolean
Does the first range contain the second?
int4range(2,4) @> int4range(2,3)
→ t
anyrange
@>
anyelement
→ boolean
Does the range contain the element?
'[2011-01-01,2011-03-01)'::tsrange @> '2011-01-10'::timestamp
→ t
anyrange
<@
anyrange
→ boolean
Is the first range contained by the second?
int4range(2,4) <@ int4range(1,7)
→ t
anyelement
<@
anyrange
→ boolean
Is the element contained in the range?
42 <@ int4range(1,7)
→ f
anyrange
&&
anyrange
→ boolean
Do the ranges overlap, that is, have any elements in common?
int8range(3,7) && int8range(4,12)
→ t
anyrange
<<
anyrange
→ boolean
Is the first range strictly left of the second?
int8range(1,10) << int8range(100,110)
→ t
anyrange
>>
anyrange
→ boolean
Is the first range strictly right of the second?
int8range(50,60) >> int8range(20,30)
→ t
anyrange
&<
anyrange
→ boolean
Does the first range not extend to the right of the second?
int8range(1,20) &< int8range(18,20)
→ t
anyrange
&>
anyrange
→ boolean
Does the first range not extend to the left of the second?
int8range(7,20) &> int8range(5,10)
→ t
anyrange
-|-
anyrange
→ boolean
Are the ranges adjacent?
numrange(1.1,2.2) -|- numrange(2.2,3.3)
→ t
anyrange
+
anyrange
→ anyrange
Computes the union of the ranges. The ranges must overlap or be adjacent, so that the union is a single range (but see range_merge()).
numrange(5,15) + numrange(10,20)
→ [5,20)
anyrange
*
anyrange
→ anyrange
Computes the intersection of the ranges.
int8range(5,15) * int8range(10,20)
→ [10,15)
anyrange
-
anyrange
→ anyrange
Computes the difference of the ranges. The second range must not be contained in the first in such a way that the difference would not be a single range.
int8range(5,15) - int8range(10,20)
→ [5,10)
Function | Description | Example(s) |
---|---|---|
`lower ( anyrange )` → `anyelement` | Extracts the lower bound of the range (`NULL` if the range is empty or the lower bound is infinite). | `lower(numrange(1.1,2.2))` → `1.1` |
`upper ( anyrange )` → `anyelement` | Extracts the upper bound of the range (`NULL` if the range is empty or the upper bound is infinite). | `upper(numrange(1.1,2.2))` → `2.2` |
`isempty ( anyrange )` → `boolean` | Is the range empty? | `isempty(numrange(1.1,2.2))` → `f` |
`lower_inc ( anyrange )` → `boolean` | Is the range's lower bound inclusive? | `lower_inc(numrange(1.1,2.2))` → `t` |
`upper_inc ( anyrange )` → `boolean` | Is the range's upper bound inclusive? | `upper_inc(numrange(1.1,2.2))` → `f` |
`lower_inf ( anyrange )` → `boolean` | Is the range's lower bound infinite? | `lower_inf('(,)'::daterange)` → `t` |
`upper_inf ( anyrange )` → `boolean` | Is the range's upper bound infinite? | `upper_inf('(,)'::daterange)` → `t` |
`range_merge ( anyrange, anyrange )` → `anyrange` | Computes the smallest range that includes both of the given ranges. | `range_merge('[1,2)'::int4range, '[3,4)'::int4range)` → `[1,4)` |
Operator | Description | Example | Result |
---|---|---|---|
`=` | equal | `ARRAY[1.1,2.1,3.1]::int[] = ARRAY[1,2,3]` | `t` |
`<>` | not equal | `ARRAY[1,2,3] <> ARRAY[1,2,4]` | `t` |
`<` | less than | `ARRAY[1,2,3] < ARRAY[1,2,4]` | `t` |
`>` | greater than | `ARRAY[1,4,3] > ARRAY[1,2,4]` | `t` |
`<=` | less than or equal | `ARRAY[1,2,3] <= ARRAY[1,2,3]` | `t` |
`>=` | greater than or equal | `ARRAY[1,4,3] >= ARRAY[1,4,3]` | `t` |
`@>` | contains | `ARRAY[1,4,3] @> ARRAY[3,1]` | `t` |
`<@` | is contained by | `ARRAY[2,7] <@ ARRAY[1,7,4,2,6]` | `t` |
`&&` | overlap (have elements in common) | `ARRAY[1,4,3] && ARRAY[2,1]` | `t` |
`\|\|` | array-to-array concatenation | `ARRAY[1,2,3] \|\| ARRAY[4,5,6]` | `{1,2,3,4,5,6}` |
`\|\|` | array-to-array concatenation | `ARRAY[1,2,3] \|\| ARRAY[[4,5,6],[7,8,9]]` | `{ {1,2,3},{4,5,6},{7,8,9} }` |
`\|\|` | element-to-array concatenation | `3 \|\| ARRAY[4,5,6]` | `{3,4,5,6}` |
`\|\|` | array-to-element concatenation | `ARRAY[4,5,6] \|\| 7` | `{4,5,6,7}` |
Function | Return Type | Description | Example | Result |
---|---|---|---|---|
`array_append(anyarray, anyelement)` | `anyarray` | append an element to the end of an array | `array_append(ARRAY[1,2], 3)` | `{1,2,3}` |
`array_cat(anyarray, anyarray)` | `anyarray` | concatenate two arrays | `array_cat(ARRAY[1,2,3], ARRAY[4,5])` | `{1,2,3,4,5}` |
`array_ndims(anyarray)` | `int` | returns the number of dimensions of the array | `array_ndims(ARRAY[[1,2,3], [4,5,6]])` | `2` |
`array_dims(anyarray)` | `text` | returns a text representation of the array's dimensions | `array_dims(ARRAY[[1,2,3], [4,5,6]])` | `[1:2][1:3]` |
`array_fill(anyelement, int[] [, int[]])` | `anyarray` | returns an array initialized with the supplied value and dimensions, optionally with lower bounds other than 1 | `array_fill(7, ARRAY[3], ARRAY[2])` | `[2:4]={7,7,7}` |
`array_length(anyarray, int)` | `int` | returns the length of the requested array dimension | `array_length(array[1,2,3], 1)` | `3` |
`array_lower(anyarray, int)` | `int` | returns the lower bound of the requested array dimension | `array_lower('[0:2]={1,2,3}'::int[], 1)` | `0` |
`array_position(anyarray, anyelement [, int])` | `int` | returns the subscript of the first occurrence of the second argument in the array, starting at the element indicated by the third argument or at the first element (array must be one-dimensional) | `array_position(ARRAY['sun','mon','tue','wed','thu','fri','sat'], 'mon')` | `2` |
`array_positions(anyarray, anyelement)` | `int[]` | returns an array of the subscripts of all occurrences of the second argument in the array given as first argument (array must be one-dimensional) | `array_positions(ARRAY['A','A','B','A'], 'A')` | `{1,2,4}` |
`array_prepend(anyelement, anyarray)` | `anyarray` | prepend an element to the beginning of an array | `array_prepend(1, ARRAY[2,3])` | `{1,2,3}` |
`array_remove(anyarray, anyelement)` | `anyarray` | remove all elements equal to the given value from the array (array must be one-dimensional) | `array_remove(ARRAY[1,2,3,2], 2)` | `{1,3}` |
`array_replace(anyarray, anyelement, anyelement)` | `anyarray` | replace each array element equal to the given value with a new value | `array_replace(ARRAY[1,2,5,4], 5, 3)` | `{1,2,3,4}` |
`array_to_string(anyarray, text [, text])` | `text` | concatenates array elements using the supplied delimiter and optional null string | `array_to_string(ARRAY[1, 2, 3, NULL, 5], ',', '*')` | `1,2,3,*,5` |
`array_upper(anyarray, int)` | `int` | returns the upper bound of the requested array dimension | `array_upper(ARRAY[1,8,3,7], 1)` | `4` |
`cardinality(anyarray)` | `int` | returns the total number of elements in the array, or 0 if the array is empty | `cardinality(ARRAY[[1,2],[3,4]])` | `4` |
`string_to_array(text, text [, text])` | `text[]` | splits string into array elements using the supplied delimiter and optional null string | `string_to_array('xx~^~yy~^~zz', '~^~', 'yy')` | `{xx,NULL,zz}` |
`unnest(anyarray)` | `setof anyelement` | expand an array to a set of rows | `unnest(ARRAY[1,2])` | `1` and `2` (2 rows) |
`unnest(anyarray, anyarray [, ...])` | `setof anyelement, anyelement [, ...]` | expand multiple arrays (possibly of different types) to a set of rows. This is only allowed in the FROM clause; see Section 7.2.1.4 | `unnest(ARRAY[1,2], ARRAY['foo','bar','baz'])` | `1 foo`, `2 bar`, `NULL baz` (3 rows) |
Function | Argument Type | Return Type | Description |
---|---|---|---|
`generate_series(start, stop)` | `int`, `bigint` or `numeric` | `setof int`, `setof bigint`, or `setof numeric` (same as argument type) | Generates a series of values from `start` to `stop`, with a step of one |
`generate_series(start, stop, step)` | `int`, `bigint` or `numeric` | `setof int`, `setof bigint` or `setof numeric` (same as argument type) | Generates a series of values from `start` to `stop`, with a step of `step` |
`generate_series(start, stop, step interval)` | `timestamp` or `timestamp with time zone` | `setof timestamp` or `setof timestamp with time zone` (same as argument type) | Generates a series of values from `start` to `stop`, with a step of `step` |
`generate_subscripts(array anyarray, dim int)` | | `setof int` | Generates a series comprising the subscripts of the given array dimension |
`generate_subscripts(array anyarray, dim int, reverse boolean)` | | `setof int` | Generates a series comprising the subscripts of the given array dimension. When `reverse` is true, the series is returned in reverse order |
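As a quick illustration of the series-generating functions above, a sketch of typical usage (table-free, so it can be run in any session):

```sql
-- Integer series with an explicit step of 2
SELECT * FROM generate_series(1, 10, 2);
-- returns the rows 1, 3, 5, 7, 9

-- A day-by-day series of timestamps using an interval step
SELECT * FROM generate_series('2024-01-01'::timestamp,
                              '2024-01-03'::timestamp,
                              '1 day'::interval);
-- returns three rows: Jan 1, Jan 2, and Jan 3 at midnight

-- Subscripts of the first dimension of an array
SELECT generate_subscripts(ARRAY['a','b','c'], 1);
-- returns the rows 1, 2, 3
```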
Function | Description |
---|---|
`row_number () → bigint` | Returns the number of the current row within its partition, counting from 1. |
`rank () → bigint` | Returns the rank of the current row, with gaps; that is, the `row_number` of the first row in its peer group. |
`dense_rank () → bigint` | Returns the rank of the current row, without gaps; this function effectively counts peer groups. |
`percent_rank () → double precision` | Returns the relative rank of the current row, that is (`rank` - 1) / (total partition rows - 1). The value thus ranges from 0 to 1 inclusive. |
`cume_dist () → double precision` | Returns the cumulative distribution, that is (number of partition rows preceding or peers with current row) / (total partition rows). The value thus ranges from 1/`N` to 1. |
`ntile ( num_buckets integer ) → integer` | Returns an integer ranging from 1 to the argument value, dividing the partition as equally as possible. |
`lag ( value anyelement [, offset integer [, default anyelement ]] ) → anyelement` | Returns `value` evaluated at the row that is `offset` rows before the current row within the partition; if there is no such row, instead returns `default` (which must be of the same type as `value`). Both `offset` and `default` are evaluated with respect to the current row. If omitted, `offset` defaults to 1 and `default` to `NULL`. |
`lead ( value anyelement [, offset integer [, default anyelement ]] ) → anyelement` | Returns `value` evaluated at the row that is `offset` rows after the current row within the partition; if there is no such row, instead returns `default` (which must be of the same type as `value`). Both `offset` and `default` are evaluated with respect to the current row. If omitted, `offset` defaults to 1 and `default` to `NULL`. |
`first_value ( value anyelement ) → anyelement` | Returns `value` evaluated at the row that is the first row of the window frame. |
`last_value ( value anyelement ) → anyelement` | Returns `value` evaluated at the row that is the last row of the window frame. |
`nth_value ( value anyelement, n integer ) → anyelement` | Returns `value` evaluated at the row that is the `n`'th row of the window frame (counting from 1); returns `NULL` if there is no such row. |
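A window function is invoked by attaching an `OVER` clause; a minimal sketch, assuming a hypothetical `empsalary(depname, empno, salary)` table:

```sql
-- Rank employees by salary within each department (highest salary ranks 1);
-- rows with equal salary share a rank, leaving gaps afterwards
SELECT depname, empno, salary,
       rank() OVER (PARTITION BY depname ORDER BY salary DESC)
  FROM empsalary;
```

`PARTITION BY` splits the rows into independent partitions and `ORDER BY` defines the peer-group ordering that `rank()` counts over.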
Function | Description | Partial Mode |
---|---|---|
`array_agg ( anynonarray ) → anyarray` | Collects all the input values, including nulls, into an array. | No |
`array_agg ( anyarray ) → anyarray` | Concatenates all the input arrays into an array of one higher dimension. (The inputs must all have the same dimensionality, and cannot be empty or null.) | No |
`avg ( smallint ) → numeric` <br> `avg ( integer ) → numeric` <br> `avg ( bigint ) → numeric` <br> `avg ( numeric ) → numeric` <br> `avg ( real ) → double precision` <br> `avg ( double precision ) → double precision` <br> `avg ( interval ) → interval` | Computes the average (arithmetic mean) of all the non-null input values. | Yes |
`bit_and ( smallint ) → smallint` <br> `bit_and ( integer ) → integer` <br> `bit_and ( bigint ) → bigint` <br> `bit_and ( bit ) → bit` | Computes the bitwise AND of all non-null input values. | Yes |
`bit_or ( smallint ) → smallint` <br> `bit_or ( integer ) → integer` <br> `bit_or ( bigint ) → bigint` <br> `bit_or ( bit ) → bit` | Computes the bitwise OR of all non-null input values. | Yes |
`bool_and ( boolean ) → boolean` | Returns true if all non-null input values are true, otherwise false. | Yes |
`bool_or ( boolean ) → boolean` | Returns true if any non-null input value is true, otherwise false. | Yes |
`count ( * ) → bigint` | Computes the number of input rows. | Yes |
`count ( "any" ) → bigint` | Computes the number of input rows in which the input value is not null. | Yes |
`every ( boolean ) → boolean` | This is the SQL standard's equivalent to `bool_and`. | Yes |
`json_agg ( anyelement ) → json` <br> `jsonb_agg ( anyelement ) → jsonb` | Collects all the input values, including nulls, into a JSON array. Values are converted to JSON as per `to_json` or `to_jsonb`. | No |
`json_object_agg ( key "any", value "any" ) → json` <br> `jsonb_object_agg ( key "any", value "any" ) → jsonb` | Collects all the key/value pairs into a JSON object. Key arguments are coerced to text; value arguments are converted as per `to_json` or `to_jsonb`. Values can be null, but not keys. | No |
`max ( see text ) → same as input type` | Computes the maximum of the non-null input values. Available for any numeric, string, date/time, or enum type, as well as `inet`, `interval`, `money`, `oid`, `pg_lsn`, `tid`, and arrays of any of these types. | Yes |
`min ( see text ) → same as input type` | Computes the minimum of the non-null input values. Available for any numeric, string, date/time, or enum type, as well as `inet`, `interval`, `money`, `oid`, `pg_lsn`, `tid`, and arrays of any of these types. | Yes |
`string_agg ( value text, delimiter text ) → text` <br> `string_agg ( value bytea, delimiter bytea ) → bytea` | Concatenates the non-null input values into a string. Each value after the first is preceded by the corresponding `delimiter` (if it's not null). | No |
`sum ( smallint ) → bigint` <br> `sum ( integer ) → bigint` <br> `sum ( bigint ) → numeric` <br> `sum ( numeric ) → numeric` <br> `sum ( real ) → real` <br> `sum ( double precision ) → double precision` <br> `sum ( interval ) → interval` <br> `sum ( money ) → money` | Computes the sum of the non-null input values. | Yes |
`xmlagg ( xml ) → xml` | Concatenates the non-null XML input values (see Section 9.15.1.7). | No |
Function | Description | Partial Mode |
---|---|---|
`corr ( Y double precision, X double precision ) → double precision` | Computes the correlation coefficient. | Yes |
`covar_pop ( Y double precision, X double precision ) → double precision` | Computes the population covariance. | Yes |
`covar_samp ( Y double precision, X double precision ) → double precision` | Computes the sample covariance. | Yes |
`regr_avgx ( Y double precision, X double precision ) → double precision` | Computes the average of the independent variable, `sum(X)/N`. | Yes |
`regr_avgy ( Y double precision, X double precision ) → double precision` | Computes the average of the dependent variable, `sum(Y)/N`. | Yes |
`regr_count ( Y double precision, X double precision ) → bigint` | Computes the number of rows in which both inputs are non-null. | Yes |
`regr_intercept ( Y double precision, X double precision ) → double precision` | Computes the y-intercept of the least-squares-fit linear equation determined by the (`X`, `Y`) pairs. | Yes |
`regr_r2 ( Y double precision, X double precision ) → double precision` | Computes the square of the correlation coefficient. | Yes |
`regr_slope ( Y double precision, X double precision ) → double precision` | Computes the slope of the least-squares-fit linear equation determined by the (`X`, `Y`) pairs. | Yes |
`regr_sxx ( Y double precision, X double precision ) → double precision` | Computes the “sum of squares” of the independent variable, `sum(X^2) - sum(X)^2/N`. | Yes |
`regr_sxy ( Y double precision, X double precision ) → double precision` | Computes the “sum of products” of independent times dependent variables, `sum(X*Y) - sum(X) * sum(Y)/N`. | Yes |
`regr_syy ( Y double precision, X double precision ) → double precision` | Computes the “sum of squares” of the dependent variable, `sum(Y^2) - sum(Y)^2/N`. | Yes |
`stddev ( numeric_type ) → double precision` for `real` or `double precision`, otherwise `numeric` | This is a historical alias for `stddev_samp`. | Yes |
`stddev_pop ( numeric_type ) → double precision` for `real` or `double precision`, otherwise `numeric` | Computes the population standard deviation of the input values. | Yes |
`stddev_samp ( numeric_type ) → double precision` for `real` or `double precision`, otherwise `numeric` | Computes the sample standard deviation of the input values. | Yes |
`variance ( numeric_type ) → double precision` for `real` or `double precision`, otherwise `numeric` | This is a historical alias for `var_samp`. | Yes |
`var_pop ( numeric_type ) → double precision` for `real` or `double precision`, otherwise `numeric` | Computes the population variance of the input values (square of the population standard deviation). | Yes |
`var_samp ( numeric_type ) → double precision` for `real` or `double precision`, otherwise `numeric` | Computes the sample variance of the input values (square of the sample standard deviation). | Yes |
Function | Description | Partial Mode |
---|---|---|
`mode () WITHIN GROUP ( ORDER BY anyelement ) → anyelement` | Computes the mode, the most frequent value of the aggregated argument (arbitrarily choosing the first one if there are multiple equally-frequent values). The aggregated argument must be of a sortable type. | No |
`percentile_cont ( fraction double precision ) WITHIN GROUP ( ORDER BY double precision ) → double precision` <br> `percentile_cont ( fraction double precision ) WITHIN GROUP ( ORDER BY interval ) → interval` | Computes the continuous percentile, a value corresponding to the specified `fraction` within the ordered set of aggregated argument values. This will interpolate between adjacent input items if needed. | No |
`percentile_cont ( fractions double precision[] ) WITHIN GROUP ( ORDER BY double precision ) → double precision[]` <br> `percentile_cont ( fractions double precision[] ) WITHIN GROUP ( ORDER BY interval ) → interval[]` | Computes multiple continuous percentiles. The result is an array of the same dimensions as the `fractions` parameter, with each non-null element replaced by the (possibly interpolated) value corresponding to that percentile. | No |
`percentile_disc ( fraction double precision ) WITHIN GROUP ( ORDER BY anyelement ) → anyelement` | Computes the discrete percentile, the first value within the ordered set of aggregated argument values whose position in the ordering equals or exceeds the specified `fraction`. The aggregated argument must be of a sortable type. | No |
`percentile_disc ( fractions double precision[] ) WITHIN GROUP ( ORDER BY anyelement ) → anyarray` | Computes multiple discrete percentiles. The result is an array of the same dimensions as the `fractions` parameter, with each non-null element replaced by the input value corresponding to that percentile. The aggregated argument must be of a sortable type. | No |
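Ordered-set aggregates use the `WITHIN GROUP` syntax; a sketch, assuming a hypothetical `tests(duration)` table:

```sql
-- Median duration: percentile_cont interpolates between the two middle
-- values when the row count is even; percentile_disc picks an actual
-- input value at or above the requested fraction.
SELECT percentile_cont(0.5) WITHIN GROUP (ORDER BY duration) AS median_cont,
       percentile_disc(0.5) WITHIN GROUP (ORDER BY duration) AS median_disc,
       percentile_cont(ARRAY[0.25, 0.5, 0.75])
           WITHIN GROUP (ORDER BY duration) AS quartiles
  FROM tests;
```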
Function | Description | Partial Mode |
---|---|---|
`rank ( args ) WITHIN GROUP ( ORDER BY sorted_args ) → bigint` | Computes the rank of the hypothetical row, with gaps; that is, the row number of the first row in its peer group. | No |
`dense_rank ( args ) WITHIN GROUP ( ORDER BY sorted_args ) → bigint` | Computes the rank of the hypothetical row, without gaps; this function effectively counts peer groups. | No |
`percent_rank ( args ) WITHIN GROUP ( ORDER BY sorted_args ) → double precision` | Computes the relative rank of the hypothetical row, that is (`rank` - 1) / (total rows - 1). The value thus ranges from 0 to 1 inclusive. | No |
`cume_dist ( args ) WITHIN GROUP ( ORDER BY sorted_args ) → double precision` | Computes the cumulative distribution, that is (number of rows preceding or peers with hypothetical row) / (total rows). The value thus ranges from 1/`N` to 1. | No |
Function | Description |
---|---|
`GROUPING ( group_by_expression(s) ) → integer` | Returns a bit mask indicating which `GROUP BY` expressions are not included in the current grouping set. Bits are assigned with the rightmost argument corresponding to the least-significant bit; each bit is 0 if the corresponding expression is included in the grouping criteria of the grouping set generating the current result row, and 1 if it is not included. |
Column | Description |
---|---|
`classid` | OID of catalog the object belonged in |
`objid` | OID of the object itself |
`objsubid` | Sub-object ID (e.g., attribute number for a column) |
`original` | True if this was one of the root object(s) of the deletion |
`normal` | True if there was a normal dependency relationship in the dependency graph leading to this object |
`is_temporary` | True if this was a temporary object |
`object_type` | Type of the object |
`schema_name` | Name of the schema the object belonged in, if any; otherwise `NULL` |
`object_name` | Name of the object, if the combination of schema and name can be used as a unique identifier for the object; otherwise `NULL` |
`object_identity` | Text rendering of the object identity, schema-qualified. Each identifier included in the identity is quoted if necessary. |
`address_names` | An array that, together with `object_type` and `address_args`, can be used by the `pg_get_object_address` function to recreate the object address in a remote server containing an identically named object of the same kind |
`address_args` | Complement for `address_names` |
Function | Description |
---|---|
`pg_event_trigger_table_rewrite_oid () → oid` | Returns the OID of the table about to be rewritten. |
`pg_event_trigger_table_rewrite_reason () → integer` | Returns a code explaining the reason(s) for rewriting. The exact meaning of the codes is release dependent. |
Column | Description |
---|---|
`index` | index of the item in the MCV list |
`values` | values stored in the MCV item |
`nulls` | flags identifying `NULL` values |
`frequency` | frequency of this MCV item |
`base_frequency` | base frequency of this MCV item |
Column | Description |
---|---|
`classid` | OID of catalog the object belongs in |
`objid` | OID of the object itself |
`objsubid` | Sub-object ID (e.g., attribute number for a column) |
`command_tag` | Command tag |
`object_type` | Type of the object |
`schema_name` | Name of the schema the object belongs in, if any; otherwise `NULL` |
`object_identity` | Text rendering of the object identity, schema-qualified. Each identifier included in the identity is quoted if necessary. |
`in_extension` | True if the command is part of an extension script |
`command` | A complete representation of the command, in internal format. This cannot be output directly, but it can be passed to other functions to obtain different pieces of information about the command. |
Function | Description |
---|---|
`suppress_redundant_updates_trigger` | Suppresses updates that would make no actual change to the row. See below for details. |
`tsvector_update_trigger` | Automatically updates a `tsvector` column from associated plain-text document column(s). The text search configuration to use is specified by name as a trigger argument. See below for a detailed description. |
`tsvector_update_trigger_column` | Automatically updates a `tsvector` column from associated plain-text document column(s). The text search configuration to use is taken from a `regconfig` column of the table. See below for a detailed description. |
The functions described in this section are used to control and monitor a PostgreSQL installation.
Table 9.83 shows the functions available to query and alter run-time configuration parameters.
The functions shown in Table 9.84 send control signals to other server processes. Use of these functions is restricted to superusers by default but access may be granted to others using GRANT
, with noted exceptions.
Each of these functions returns true
if successful and false
otherwise.
pg_cancel_backend
and pg_terminate_backend
send signals (SIGINT or SIGTERM respectively) to backend processes identified by process ID. The process ID of an active backend can be found from the pid
column of the pg_stat_activity
view, or by listing the postgres
processes on the server (using ps on Unix or the Task Manager on Windows). The role of an active backend can be found from the usename
column of the pg_stat_activity
view.
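Putting the two together, a minimal sketch (the role name `app_user` is hypothetical):

```sql
-- Cancel the current query of every backend owned by a given role,
-- taking pid and usename straight from pg_stat_activity
SELECT pg_cancel_backend(pid)
  FROM pg_stat_activity
 WHERE usename = 'app_user';

-- Terminate a single backend outright by its process ID
SELECT pg_terminate_backend(12345);
```

`pg_cancel_backend` sends SIGINT (cancels the query, keeps the session); `pg_terminate_backend` sends SIGTERM (ends the session).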
The functions shown in Table 9.85 assist in making on-line backups. These functions cannot be executed during recovery (except non-exclusive pg_start_backup
, non-exclusive pg_stop_backup
, pg_is_in_backup
, pg_backup_start_time
and pg_wal_lsn_diff
).
For details about proper usage of these functions, see Section 25.3.
pg_current_wal_lsn
displays the current write-ahead log write location in the same format used by the above functions. Similarly, pg_current_wal_insert_lsn
displays the current write-ahead log insertion location and pg_current_wal_flush_lsn
displays the current write-ahead log flush location. The insertion location is the “logical” end of the write-ahead log at any instant, while the write location is the end of what has actually been written out from the server's internal buffers, and the flush location is the last location known to be written to durable storage. The write location is the end of what can be examined from outside the server, and is usually what you want if you are interested in archiving partially-complete write-ahead log files. The insertion and flush locations are made available primarily for server debugging purposes. These are all read-only operations and do not require superuser permissions.
You can use pg_walfile_name_offset
to extract the corresponding write-ahead log file name and byte offset from a pg_lsn
value. For example:
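A sketch of such a call follows; the LSN source, file name, and offset shown are illustrative:

```sql
postgres=# SELECT * FROM pg_walfile_name_offset(pg_current_wal_lsn());
        file_name         | file_offset
--------------------------+-------------
 00000001000000000000000D |     4039624
(1 row)
```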
Similarly, pg_walfile_name
extracts just the write-ahead log file name. When the given write-ahead log location is exactly at a write-ahead log file boundary, both these functions return the name of the preceding write-ahead log file. This is usually the desired behavior for managing write-ahead log archiving behavior, since the preceding file is the last one that currently needs to be archived.
The functions shown in Table 9.86 provide information about the current status of a standby server. These functions may be executed both during recovery and in normal running.
The functions shown in Table 9.87 control the progress of recovery. These functions may be executed only during recovery.
pg_wal_replay_pause
and pg_wal_replay_resume
cannot be executed while a promotion is ongoing. If a promotion is triggered while recovery is paused, the paused state ends and promotion continues.
If streaming replication is disabled, the paused state may continue indefinitely without a problem. If streaming replication is in progress then WAL records will continue to be received, which will eventually fill available disk space, depending upon the duration of the pause, the rate of WAL generation and available disk space.
PostgreSQL allows database sessions to synchronize their snapshots. A snapshot determines which data is visible to the transaction that is using the snapshot. Synchronized snapshots are necessary when two or more sessions need to see identical content in the database. If two sessions just start their transactions independently, there is always a possibility that some third transaction commits between the executions of the two START TRANSACTION
commands, so that one session sees the effects of that transaction and the other does not.
To solve this problem, PostgreSQL allows a transaction to export the snapshot it is using. As long as the exporting transaction remains open, other transactions can import its snapshot, and thereby be guaranteed that they see exactly the same view of the database that the first transaction sees. But note that any database changes made by any one of these transactions remain invisible to the other transactions, as is usual for changes made by uncommitted transactions. So the transactions are synchronized with respect to pre-existing data, but act normally for changes they make themselves.
Snapshots are exported with the pg_export_snapshot
function, shown in Table 9.88, and imported with the SET TRANSACTION command.
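A minimal two-session sketch (the snapshot identifier shown is illustrative; use whatever `pg_export_snapshot` actually returns):

```sql
-- Session 1: export the snapshot and keep the transaction open
BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
SELECT pg_export_snapshot();
-- suppose this returns '00000003-0000001B-1'

-- Session 2: adopt the same snapshot before running any query
BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
SET TRANSACTION SNAPSHOT '00000003-0000001B-1';
-- both sessions now see identical pre-existing data
```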
The functions shown in Table 9.89 are for controlling and interacting with replication features. See Section 26.2.5, Section 26.2.6, and Chapter 49 for information about the underlying features. Use of functions for replication origin is restricted to superusers. Use of functions for replication slots is restricted to superusers and users having REPLICATION
privilege.
Many of these functions have equivalent commands in the replication protocol; see Section 52.4.
The functions described in Section 9.27.3, Section 9.27.4, and Section 9.27.5 are also relevant for replication.
The functions shown in Table 9.90 calculate the disk space usage of database objects, or assist in presentation of usage results. All these functions return sizes measured in bytes. If an OID that does not represent an existing object is passed to one of these functions, NULL
is returned.
The functions above that operate on tables or indexes accept a regclass
argument, which is simply the OID of the table or index in the pg_class
system catalog. You do not have to look up the OID by hand, however, since the regclass
data type's input converter will do the work for you. Just write the table name enclosed in single quotes so that it looks like a literal constant. For compatibility with the handling of ordinary SQL names, the string will be converted to lower case unless it contains double quotes around the table name.
The functions shown in Table 9.91 assist in identifying the specific disk files associated with database objects.
Table 9.92 lists functions used to manage collations.
Table 9.93 lists functions that provide information about the structure of partitioned tables.
For example, to check the total size of the data contained in a partitioned table measurement
, one could use the following query:
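```sql
-- Sum the on-disk size of every relation in the partition tree
SELECT pg_size_pretty(sum(pg_relation_size(relid))) AS total_size
  FROM pg_partition_tree('measurement');
```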
Table 9.94 shows the functions available for index maintenance tasks. (Note that these maintenance tasks are normally done automatically by autovacuum; use of these functions is only required in special cases.) These functions cannot be executed during recovery. Use of these functions is restricted to superusers and the owner of the given index.
The functions shown in Table 9.95 provide native access to files on the machine hosting the server. Only files within the database cluster directory and the log_directory
can be accessed, unless the user is a superuser or is granted the role pg_read_server_files
. Use a relative path for files in the cluster directory, and a path matching the log_directory
configuration setting for log files.
Note that granting users the EXECUTE privilege on pg_read_file()
, or related functions, allows them the ability to read any file on the server that the database server process can read; these functions bypass all in-database privilege checks. This means that, for example, a user with such access is able to read the contents of the pg_authid
table where authentication information is stored, as well as read any table data in the database. Therefore, granting access to these functions should be carefully considered.
Some of these functions take an optional missing_ok
parameter, which specifies the behavior when the file or directory does not exist. If true
, the function returns NULL
or an empty result set, as appropriate. If false
, an error is raised. The default is false
.
The functions shown in Table 9.96 manage advisory locks. For details about proper use of these functions, see Section 13.3.5.
All these functions are intended to be used to lock application-defined resources, which can be identified either by a single 64-bit key value or two 32-bit key values (note that these two key spaces do not overlap). If another session already holds a conflicting lock on the same resource identifier, the functions will either wait until the resource becomes available, or return a false
result, as appropriate for the function. Locks can be either shared or exclusive: a shared lock does not conflict with other shared locks on the same resource, only with exclusive locks. Locks can be taken at session level (so that they are held until released or the session ends) or at transaction level (so that they are held until the current transaction ends; there is no provision for manual release). Multiple session-level lock requests stack, so that if the same resource identifier is locked three times there must then be three unlock requests to release the resource in advance of session end.
Table 9.63 shows several functions that extract session and system information.
In addition to the functions listed in this section, there are a number of functions related to the statistics system that also provide system information. See Section 27.2.2 for more information.
current_catalog
, current_role
, current_schema
, current_user
, session_user
, and user
have special syntactic status in SQL: they must be called without trailing parentheses. (In PostgreSQL, parentheses can optionally be used with current_schema
, but not with the others.)
The session_user
is normally the user who initiated the current database connection; but superusers can change this setting with SET SESSION AUTHORIZATION. The current_user
is the user identifier that is applicable for permission checking. Normally it is equal to the session user, but it can be changed with SET ROLE. It also changes during the execution of functions with the attribute SECURITY DEFINER
. In Unix parlance, the session user is the “real user” and the current user is the “effective user”. current_role
and user
are synonyms for current_user
. (The SQL standard draws a distinction between current_role
and current_user
, but PostgreSQL does not, since it unifies users and roles into a single kind of entity.)
current_schema
returns the name of the schema that is first in the search path (or a null value if the search path is empty). This is the schema that will be used for any tables or other named objects that are created without specifying a target schema. current_schemas(boolean)
returns an array of the names of all schemas presently in the search path. The Boolean option determines whether or not implicitly included system schemas such as pg_catalog
are included in the returned search path.
The search path can be altered at run time. The command is:
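```sql
SET search_path TO schema [, schema, ...]
```

For example, `SET search_path TO myschema, public;` places `myschema` ahead of `public` when resolving unqualified names.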
inet_client_addr
returns the IP address of the current client, and inet_client_port
returns the port number. inet_server_addr
returns the IP address on which the server accepted the current connection, and inet_server_port
returns the port number. All these functions return NULL if the current connection is via a Unix-domain socket.
pg_blocking_pids
returns an array of the process IDs of the sessions that are blocking the server process with the specified process ID, or an empty array if there is no such server process or it is not blocked. One server process blocks another if it either holds a lock that conflicts with the blocked process's lock request (hard block), or is waiting for a lock that would conflict with the blocked process's lock request and is ahead of it in the wait queue (soft block). When using parallel queries the result always lists client-visible process IDs (that is, pg_backend_pid
results) even if the actual lock is held or awaited by a child worker process. As a result of that, there may be duplicated PIDs in the result. Also note that when a prepared transaction holds a conflicting lock, it will be represented by a zero process ID in the result of this function. Frequent calls to this function could have some impact on database performance, because it needs exclusive access to the lock manager's shared state for a short time.
pg_conf_load_time
returns the timestamp with time zone
when the server configuration files were last loaded. (If the current session was alive at the time, this will be the time when the session itself re-read the configuration files, so the reading will vary a little in different sessions. Otherwise it is the time when the postmaster process re-read the configuration files.)
pg_current_logfile
returns, as text
, the path of the log file(s) currently in use by the logging collector. The path includes the log_directory directory and the log file name. Log collection must be enabled or the return value is NULL
. When multiple log files exist, each in a different format, pg_current_logfile
called without arguments returns the path of the file having the first format found in the ordered list: stderr, csvlog. NULL
is returned when no log file has any of these formats. To request a specific file format supply, as text
, either csvlog or stderr as the value of the optional parameter. The return value is NULL
when the log format requested is not a configured log_destination. The pg_current_logfile
reflects the contents of the current_logfiles
file.
pg_my_temp_schema
returns the OID of the current session's temporary schema, or zero if it has none (because it has not created any temporary tables). pg_is_other_temp_schema
returns true if the given OID is the OID of another session's temporary schema. (This can be useful, for example, to exclude other sessions' temporary tables from a catalog display.)
pg_listening_channels
returns a set of names of asynchronous notification channels that the current session is listening to. pg_notification_queue_usage
returns the fraction of the total available space for notifications currently occupied by notifications that are waiting to be processed, as a double
in the range 0-1. See LISTEN and NOTIFY for more information.
pg_postmaster_start_time
returns the timestamp with time zone
when the server started.
pg_safe_snapshot_blocking_pids
returns an array of the process IDs of the sessions that are blocking the server process with the specified process ID from acquiring a safe snapshot, or an empty array if there is no such server process or it is not blocked. A session running a SERIALIZABLE
transaction blocks a SERIALIZABLE READ ONLY DEFERRABLE
transaction from acquiring a snapshot until the latter determines that it is safe to avoid taking any predicate locks. See Section 13.2.3 for more information about serializable and deferrable transactions. Frequent calls to this function could have some impact on database performance, because it needs access to the predicate lock manager's shared state for a short time.
version
returns a string describing the PostgreSQL server's version. You can also get this information from server_version, or, for a machine-readable version, from server_version_num. Software developers should use server_version_num
(available since 8.2) or PQserverVersion
instead of parsing the text version.
Table 9.64 lists functions that allow the user to query object access privileges programmatically. See Section 5.7 for more information about privileges.
has_table_privilege
checks whether a user can access a table in a particular way. The user can be specified by name, by OID (pg_authid.oid
), public
to indicate the PUBLIC pseudo-role, or, if the argument is omitted, current_user
is assumed. The table can be specified by name or by OID. (Thus, there are actually six variants of has_table_privilege
, which can be distinguished by the number and types of their arguments.) When specifying by name, the name can be schema-qualified if necessary. The desired access privilege type is specified by a text string, which must evaluate to one of the values SELECT
, INSERT
, UPDATE
, DELETE
, TRUNCATE
, REFERENCES
, or TRIGGER
. Optionally, WITH GRANT OPTION
can be added to a privilege type to test whether the privilege is held with grant option. Also, multiple privilege types can be listed separated by commas, in which case the result will be true
if any of the listed privileges is held. (Case of the privilege string is not significant, and extra whitespace is allowed between but not within privilege names.) Some examples:
has_sequence_privilege
checks whether a user can access a sequence in a particular way. The possibilities for its arguments are analogous to has_table_privilege
. The desired access privilege type must evaluate to one of USAGE
, SELECT
, or UPDATE
.
has_any_column_privilege
checks whether a user can access any column of a table in a particular way. Its argument possibilities are analogous to has_table_privilege
, except that the desired access privilege type must evaluate to some combination of SELECT
, INSERT
, UPDATE
, or REFERENCES
. Note that having any of these privileges at the table level implicitly grants it for each column of the table, so has_any_column_privilege
will always return true
if has_table_privilege
does for the same arguments. But has_any_column_privilege
also succeeds if there is a column-level grant of the privilege for at least one column.
has_column_privilege
checks whether a user can access a column in a particular way. Its argument possibilities are analogous to has_table_privilege
, with the addition that the column can be specified either by name or attribute number. The desired access privilege type must evaluate to some combination of SELECT
, INSERT
, UPDATE
, or REFERENCES
. Note that having any of these privileges at the table level implicitly grants it for each column of the table.
has_database_privilege
checks whether a user can access a database in a particular way. Its argument possibilities are analogous to has_table_privilege
. The desired access privilege type must evaluate to some combination of CREATE
, CONNECT
, TEMPORARY
, or TEMP
(which is equivalent to TEMPORARY
).
has_function_privilege
checks whether a user can access a function in a particular way. Its argument possibilities are analogous to has_table_privilege
. When specifying a function by a text string rather than by OID, the allowed input is the same as for the regprocedure
data type (see Section 8.19). The desired access privilege type must evaluate to EXECUTE
. An example is:
has_foreign_data_wrapper_privilege
checks whether a user can access a foreign-data wrapper in a particular way. Its argument possibilities are analogous to has_table_privilege
. The desired access privilege type must evaluate to USAGE
.
has_language_privilege
checks whether a user can access a procedural language in a particular way. Its argument possibilities are analogous to has_table_privilege
. The desired access privilege type must evaluate to USAGE
.
has_schema_privilege
checks whether a user can access a schema in a particular way. Its argument possibilities are analogous to has_table_privilege
. The desired access privilege type must evaluate to some combination of CREATE
or USAGE
.
has_server_privilege
checks whether a user can access a foreign server in a particular way. Its argument possibilities are analogous to has_table_privilege
. The desired access privilege type must evaluate to USAGE
.
has_tablespace_privilege
checks whether a user can access a tablespace in a particular way. Its argument possibilities are analogous to has_table_privilege
. The desired access privilege type must evaluate to CREATE
.
has_type_privilege
checks whether a user can access a type in a particular way. Its argument possibilities are analogous to has_table_privilege
. When specifying a type by a text string rather than by OID, the allowed input is the same as for the regtype
data type (see Section 8.19). The desired access privilege type must evaluate to USAGE
.
pg_has_role
checks whether a user can access a role in a particular way. Its argument possibilities are analogous to has_table_privilege
, except that public
is not allowed as a user name. The desired access privilege type must evaluate to some combination of MEMBER
or USAGE
. MEMBER
denotes direct or indirect membership in the role (that is, the right to do SET ROLE
), while USAGE
denotes whether the privileges of the role are immediately available without doing SET ROLE
.
row_security_active
checks whether row level security is active for the specified table in the context of the current_user
and environment. The table can be specified by name or by OID.
Table 9.65 shows the operators available for the aclitem
type, which is the catalog representation of access privileges. See Section 5.7 for information about how to read access privilege values.
Table 9.65. aclitem Operators

Table 9.66 shows some additional functions to manage the aclitem type.

Table 9.66. aclitem Functions

acldefault
returns the built-in default access privileges for an object of type type
belonging to role ownerId
. These represent the access privileges that will be assumed when an object's ACL entry is null. (The default access privileges are described in Section 5.7.) The type
parameter is a CHAR
: write 'c' for COLUMN
, 'r' for TABLE
and table-like objects, 's' for SEQUENCE
, 'd' for DATABASE
, 'f' for FUNCTION
or PROCEDURE
, 'l' for LANGUAGE
, 'L' for LARGE OBJECT
, 'n' for SCHEMA
, 't' for TABLESPACE
, 'F' for FOREIGN DATA WRAPPER
, 'S' for FOREIGN SERVER
, or 'T' for TYPE
or DOMAIN
.
aclexplode
returns an aclitem
array as a set of rows. Output columns are grantor oid
, grantee oid
(0
for PUBLIC
), granted privilege as text
(SELECT
, ...) and whether the privilege is grantable as boolean
. makeaclitem
performs the inverse operation.
Table 9.67 shows functions that determine whether a certain object is visible in the current schema search path. For example, a table is said to be visible if its containing schema is in the search path and no table of the same name appears earlier in the search path. This is equivalent to the statement that the table can be referenced by name without explicit schema qualification. To list the names of all visible tables:
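A query along these lines does that (relkind = 'r' restricts pg_class to ordinary tables):

```sql
SELECT relname FROM pg_class WHERE relkind = 'r' AND pg_table_is_visible(oid);
```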
Each function performs the visibility check for one type of database object. Note that pg_table_is_visible
can also be used with views, materialized views, indexes, sequences and foreign tables; pg_function_is_visible
can also be used with procedures and aggregates; pg_type_is_visible
can also be used with domains. For functions and operators, an object in the search path is visible if there is no object of the same name and argument data type(s) earlier in the path. For operator classes, both name and associated index access method are considered.
All these functions require object OIDs to identify the object to be checked. If you want to test an object by name, it is convenient to use the OID alias types (regclass
, regtype
, regprocedure
, regoperator
, regconfig
, or regdictionary
), for example:
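For instance, casting a schema-qualified name to regtype lets pg_type_is_visible check it (myschema.widget is a placeholder type name):

```sql
SELECT pg_type_is_visible('myschema.widget'::regtype);
```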
Note that it would not make much sense to test a non-schema-qualified type name in this way — if the name can be recognized at all, it must be visible.
Table 9.68 lists functions that extract information from the system catalogs.
format_type
returns the SQL name of a data type that is identified by its type OID and possibly a type modifier. Pass NULL for the type modifier if no specific modifier is known.
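A minimal sketch of both calling styles; the typmod value 104 corresponds to varchar(100), since varchar type modifiers store the declared length plus a 4-byte header:

```sql
SELECT format_type('integer'::regtype, NULL);   -- integer
SELECT format_type('varchar'::regtype, 104);    -- character varying(100)
```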
pg_get_keywords
returns a set of records describing the SQL keywords recognized by the server. The word
column contains the keyword. The catcode
column contains a category code: U
for unreserved, C
for column name, T
for type or function name, or R
for reserved. The catdesc
column contains a possibly-localized string describing the category.
pg_get_constraintdef
, pg_get_indexdef
, pg_get_ruledef
, pg_get_statisticsobjdef
, and pg_get_triggerdef
, respectively reconstruct the creating command for a constraint, index, rule, extended statistics object, or trigger. (Note that this is a decompiled reconstruction, not the original text of the command.) pg_get_expr
decompiles the internal form of an individual expression, such as the default value for a column. It can be useful when examining the contents of system catalogs. If the expression might contain Vars, specify the OID of the relation they refer to as the second parameter; if no Vars are expected, zero is sufficient. pg_get_viewdef
reconstructs the SELECT
query that defines a view. Most of these functions come in two variants, one of which can optionally “pretty-print” the result. The pretty-printed format is more readable, but the default format is more likely to be interpreted the same way by future versions of PostgreSQL; avoid using pretty-printed output for dump purposes. Passing false
for the pretty-print parameter yields the same result as the variant that does not have the parameter at all.
pg_get_functiondef
returns a complete CREATE OR REPLACE FUNCTION
statement for a function. pg_get_function_arguments
returns the argument list of a function, in the form it would need to appear in within CREATE FUNCTION
. pg_get_function_result
similarly returns the appropriate RETURNS
clause for the function. pg_get_function_identity_arguments
returns the argument list necessary to identify a function, in the form it would need to appear in within ALTER FUNCTION
, for instance. This form omits default values.
pg_get_serial_sequence
returns the name of the sequence associated with a column, or NULL if no sequence is associated with the column. If the column is an identity column, the associated sequence is the sequence internally created for the identity column. For columns created using one of the serial types (serial
, smallserial
, bigserial
), it is the sequence created for that serial column definition. In the latter case, this association can be modified or removed with ALTER SEQUENCE OWNED BY
. (The function probably should have been called pg_get_owned_sequence
; its current name reflects the fact that it has typically been used with serial
or bigserial
columns.) The first input parameter is a table name with optional schema, and the second parameter is a column name. Because the first parameter is potentially a schema and table, it is not treated as a double-quoted identifier, meaning it is lower cased by default, while the second parameter, being just a column name, is treated as double-quoted and has its case preserved. The function returns a value suitably formatted for passing to sequence functions (see Section 9.16). A typical use is in reading the current value of a sequence for an identity or serial column, for example:
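A typical call, with placeholder table and column names:

```sql
SELECT currval(pg_get_serial_sequence('sometable', 'id'));
```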
pg_get_userbyid
extracts a role's name given its OID.
pg_index_column_has_property
, pg_index_has_property
, and pg_indexam_has_property
return whether the specified index column, index, or index access method possesses the named property. NULL
is returned if the property name is not known or does not apply to the particular object, or if the OID or column number does not identify a valid object. Refer to Table 9.69 for column properties, Table 9.70 for index properties, and Table 9.71 for access method properties. (Note that extension access methods can define additional property names for their indexes.)
pg_options_to_table
returns the set of storage option name/value pairs (option_name
/option_value
) when passed pg_class
.reloptions
or pg_attribute
.attoptions
.
pg_tablespace_databases
allows a tablespace to be examined. It returns the set of OIDs of databases that have objects stored in the tablespace. If this function returns any rows, the tablespace is not empty and cannot be dropped. To display the specific objects populating the tablespace, you will need to connect to the databases identified by pg_tablespace_databases
and query their pg_class
catalogs.
pg_typeof
returns the OID of the data type of the value that is passed to it. This can be helpful for troubleshooting or dynamically constructing SQL queries. The function is declared as returning regtype
, which is an OID alias type (see Section 8.19); this means that it is the same as an OID for comparison purposes but displays as a type name. For example:
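Because the result is a regtype, it displays as a type name but can still be compared against OIDs:

```sql
SELECT pg_typeof(33);
-- → integer

SELECT typlen FROM pg_type WHERE oid = pg_typeof(33);
-- → 4
```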
The expression collation for
returns the collation of the value that is passed to it. Example:
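For instance (the second query assumes a de_DE collation exists in the installation):

```sql
SELECT collation for (description) FROM pg_description LIMIT 1;
-- → "default"

SELECT collation for ('foo' COLLATE "de_DE");
-- → "de_DE"
```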
The value might be quoted and schema-qualified. If no collation is derived for the argument expression, then a null value is returned. If the argument is not of a collatable data type, then an error is raised.
The to_regclass
, to_regproc
, to_regprocedure
, to_regoper
, to_regoperator
, to_regtype
, to_regnamespace
, and to_regrole
functions translate relation, function, operator, type, schema, and role names (given as text
) to objects of type regclass
, regproc
, regprocedure
, regoper
, regoperator
, regtype
, regnamespace
, and regrole
respectively. These functions differ from a cast from text in that they don't accept a numeric OID, and that they return null rather than throwing an error if the name is not found (or, for to_regproc
and to_regoper
, if the given name matches multiple objects).
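A brief illustration of the null-instead-of-error behavior (the second name is deliberately nonexistent):

```sql
SELECT to_regclass('pg_class');      -- pg_class
SELECT to_regtype('no_such_type');   -- NULL, not an error
```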
Table 9.72 lists functions related to database object identification and addressing.
pg_describe_object
returns a textual description of a database object specified by catalog OID, object OID, and sub-object ID (such as a column number within a table; the sub-object ID is zero when referring to a whole object). This description is intended to be human-readable, and might be translated, depending on server configuration. This is useful to determine the identity of an object as stored in the pg_depend
catalog.
pg_identify_object
returns a row containing enough information to uniquely identify the database object specified by catalog OID, object OID and sub-object ID. This information is intended to be machine-readable, and is never translated. type
identifies the type of database object; schema
is the schema name that the object belongs in, or NULL
for object types that do not belong to schemas; name
is the name of the object, quoted if necessary, if the name (along with schema name, if pertinent) is sufficient to uniquely identify the object, otherwise NULL
; identity
is the complete object identity, with the precise format depending on object type, and each name within the format being schema-qualified and quoted as necessary.
pg_identify_object_as_address
returns a row containing enough information to uniquely identify the database object specified by catalog OID, object OID and sub-object ID. The returned information is independent of the current server, that is, it could be used to identify an identically named object in another server. type
identifies the type of database object; object_names
and object_args
are text arrays that together form a reference to the object. These three values can be passed to pg_get_object_address
to obtain the internal address of the object. This function is the inverse of pg_get_object_address
.
pg_get_object_address
returns a row containing enough information to uniquely identify the database object specified by its type and object name and argument arrays. The returned values are the ones that would be used in system catalogs such as pg_depend
and can be passed to other system functions such as pg_identify_object
or pg_describe_object
. classid
is the OID of the system catalog containing the object; objid
is the OID of the object itself, and objsubid
is the sub-object ID, or zero if none. This function is the inverse of pg_identify_object_as_address
.
The functions shown in Table 9.73 extract comments previously stored with the COMMENT command. A null value is returned if no comment could be found for the specified parameters.
Table 9.73. Comment Information Functions
col_description
returns the comment for a table column, which is specified by the OID of its table and its column number. (obj_description
cannot be used for table columns since columns do not have OIDs of their own.)
The two-parameter form of obj_description
returns the comment for a database object specified by its OID and the name of the containing system catalog. For example, obj_description(123456,'pg_class')
would retrieve the comment for the table with OID 123456. The one-parameter form of obj_description
requires only the object OID. It is deprecated since there is no guarantee that OIDs are unique across different system catalogs; therefore, the wrong comment might be returned.
shobj_description
is used just like obj_description
except it is used for retrieving comments on shared objects. Some system catalogs are global to all databases within each cluster, and the descriptions for objects in them are stored globally as well.
The functions shown in Table 9.74 provide server transaction information in an exportable form. The main use of these functions is to determine which transactions were committed between two snapshots.
Table 9.74. Transaction IDs and Snapshots
The internal transaction ID type (xid
) is 32 bits wide and wraps around every 4 billion transactions. However, these functions export a 64-bit format that is extended with an “epoch” counter so it will not wrap around during the life of an installation. The data type used by these functions, txid_snapshot
, stores information about transaction ID visibility at a particular moment in time. Its components are described in Table 9.75.
Table 9.75. Snapshot Components
txid_snapshot
's textual representation is xmin
:xmax
:xip_list
. For example 10:20:10,14,15
means xmin=10, xmax=20, xip_list=10, 14, 15
.
txid_status(bigint)
reports the commit status of a recent transaction. Applications may use it to determine whether a transaction committed or aborted when the application and database server become disconnected while a COMMIT
is in progress. The status of a transaction will be reported as either in progress
, committed
, or aborted
, provided that the transaction is recent enough that the system retains the commit status of that transaction. If it is old enough that no references to that transaction survive in the system and the commit status information has been discarded, this function will return NULL. Note that prepared transactions are reported as in progress
; applications must check pg_prepared_xacts
if they need to determine whether the txid is a prepared transaction.
The functions shown in Table 9.76 provide information about transactions that have been already committed. These functions mainly provide information about when the transactions were committed. They only provide useful data when track_commit_timestamp configuration option is enabled and only for transactions that were committed after it was enabled.
Table 9.76. Committed Transaction Information
The functions shown in Table 9.77 print information initialized during initdb
, such as the catalog version. They also show information about write-ahead logging and checkpoint processing. This information is cluster-wide, and not specific to any one database. They provide most of the same information, from the same source, as pg_controldata, although in a form better suited to SQL functions.
Table 9.77. Control Data Functions
pg_control_checkpoint
returns a record, shown in Table 9.78.
Table 9.78. pg_control_checkpoint
Columns
pg_control_system
returns a record, shown in Table 9.79.
Table 9.79. pg_control_system
Columns
pg_control_init
returns a record, shown in Table 9.80.
Table 9.80. pg_control_init
Columns
pg_control_recovery
returns a record, shown in Table 9.81.
Table 9.81. pg_control_recovery
Columns
Function
Description
Example(s)
current_setting
( setting_name
text
[, missing_ok
boolean
] ) → text
Returns the current value of the setting setting_name
. If there is no such setting, current_setting
throws an error unless missing_ok
is supplied and is true
. This function corresponds to the SQL command SHOW
.
current_setting('datestyle')
→ ISO, MDY
set_config
( setting_name
text
, new_value
text
, is_local
boolean
) → text
Sets the parameter setting_name
to new_value
, and returns that value. If is_local
is true
, the new value will only apply for the current transaction. If you want the new value to apply for the current session, use false
instead. This function corresponds to the SQL command SET
.
set_config('log_statement_stats', 'off', false)
→ off
Function
Description
pg_cancel_backend
( pid
integer
) → boolean
Cancels the current query of the session whose backend process has the specified process ID. This is also allowed if the calling role is a member of the role whose backend is being canceled or the calling role has been granted pg_signal_backend
, however only superusers can cancel superuser backends.
pg_reload_conf
() → boolean
Causes all processes of the PostgreSQL server to reload their configuration files. (This is initiated by sending a SIGHUP signal to the postmaster process, which in turn sends SIGHUP to each of its children.)
pg_rotate_logfile
() → boolean
Signals the log-file manager to switch to a new output file immediately. This works only when the built-in log collector is running, since otherwise there is no log-file manager subprocess.
pg_terminate_backend
( pid
integer
) → boolean
Terminates the session whose backend process has the specified process ID. This is also allowed if the calling role is a member of the role whose backend is being terminated or the calling role has been granted pg_signal_backend
, however only superusers can terminate superuser backends.
Function
Description
pg_create_restore_point
( name
text
) → pg_lsn
Creates a named marker record in the write-ahead log that can later be used as a recovery target, and returns the corresponding write-ahead log location. The given name can then be used with recovery_target_name to specify the point up to which recovery will proceed. Avoid creating multiple restore points with the same name, since recovery will stop at the first one whose name matches the recovery target.
This function is restricted to superusers by default, but other users can be granted EXECUTE to run the function.
pg_current_wal_flush_lsn
() → pg_lsn
Returns the current write-ahead log flush location (see notes below).
pg_current_wal_insert_lsn
() → pg_lsn
Returns the current write-ahead log insert location (see notes below).
pg_current_wal_lsn
() → pg_lsn
Returns the current write-ahead log write location (see notes below).
pg_start_backup
( label
text
[, fast
boolean
[, exclusive
boolean
]] ) → pg_lsn
Prepares the server to begin an on-line backup. The only required parameter is an arbitrary user-defined label for the backup. (Typically this would be the name under which the backup dump file will be stored.) If the optional second parameter is given as true
, it specifies executing pg_start_backup
as quickly as possible. This forces an immediate checkpoint which will cause a spike in I/O operations, slowing any concurrently executing queries. The optional third parameter specifies whether to perform an exclusive or non-exclusive backup (default is exclusive).
When used in exclusive mode, this function writes a backup label file (backup_label
) and, if there are any links in the pg_tblspc/
directory, a tablespace map file (tablespace_map
) into the database cluster's data directory, then performs a checkpoint, and then returns the backup's starting write-ahead log location. (The user can ignore this result value, but it is provided in case it is useful.) When used in non-exclusive mode, the contents of these files are instead returned by the pg_stop_backup
function, and should be copied to the backup area by the user.
This function is restricted to superusers by default, but other users can be granted EXECUTE to run the function.
pg_stop_backup
( exclusive
boolean
[, wait_for_archive
boolean
] ) → setof record
( lsn
pg_lsn
, labelfile
text
, spcmapfile
text
)
Finishes performing an exclusive or non-exclusive on-line backup. The exclusive
parameter must match the previous pg_start_backup
call. In an exclusive backup, pg_stop_backup
removes the backup label file and, if it exists, the tablespace map file created by pg_start_backup
. In a non-exclusive backup, the desired contents of these files are returned as part of the result of the function, and should be written to files in the backup area (not in the data directory).
There is an optional second parameter of type boolean
. If false, the function will return immediately after the backup is completed, without waiting for WAL to be archived. This behavior is only useful with backup software that independently monitors WAL archiving. Otherwise, WAL required to make the backup consistent might be missing and make the backup useless. By default or when this parameter is true, pg_stop_backup
will wait for WAL to be archived when archiving is enabled. (On a standby, this means that it will wait only when archive_mode
= always
. If write activity on the primary is low, it may be useful to run pg_switch_wal
on the primary in order to trigger an immediate segment switch.)
When executed on a primary, this function also creates a backup history file in the write-ahead log archive area. The history file includes the label given to pg_start_backup
, the starting and ending write-ahead log locations for the backup, and the starting and ending times of the backup. After recording the ending location, the current write-ahead log insertion point is automatically advanced to the next write-ahead log file, so that the ending write-ahead log file can be archived immediately to complete the backup.
The result of the function is a single record. The lsn
column holds the backup's ending write-ahead log location (which again can be ignored). The second and third columns are NULL
when ending an exclusive backup; after a non-exclusive backup they hold the desired contents of the label and tablespace map files.
This function is restricted to superusers by default, but other users can be granted EXECUTE to run the function.
pg_stop_backup
() → pg_lsn
Finishes performing an exclusive on-line backup. This simplified version is equivalent to pg_stop_backup(true, true)
, except that it only returns the pg_lsn
result.
This function is restricted to superusers by default, but other users can be granted EXECUTE to run the function.
pg_is_in_backup
() → boolean
Returns true if an on-line exclusive backup is in progress.
pg_backup_start_time
() → timestamp with time zone
Returns the start time of the current on-line exclusive backup if one is in progress, otherwise NULL
.
pg_switch_wal
() → pg_lsn
Forces the server to switch to a new write-ahead log file, which allows the current file to be archived (assuming you are using continuous archiving). The result is the ending write-ahead log location plus 1 within the just-completed write-ahead log file. If there has been no write-ahead log activity since the last write-ahead log switch, pg_switch_wal
does nothing and returns the start location of the write-ahead log file currently in use.
This function is restricted to superusers by default, but other users can be granted EXECUTE to run the function.
pg_walfile_name
( lsn
pg_lsn
) → text
Converts a write-ahead log location to the name of the WAL file holding that location.
pg_walfile_name_offset
( lsn
pg_lsn
) → record
( file_name
text
, file_offset
integer
)
Converts a write-ahead log location to a WAL file name and byte offset within that file.
pg_wal_lsn_diff
( lsn
pg_lsn
, lsn
pg_lsn
) → numeric
Calculates the difference in bytes between two write-ahead log locations. This can be used with pg_stat_replication
or some of the functions shown in Table 9.85 to get the replication lag.
Function
Description
pg_is_in_recovery () → boolean
Returns true if recovery is still in progress.
pg_last_wal_receive_lsn () → pg_lsn
Returns the last write-ahead log location that has been received and synced to disk by streaming replication. While streaming replication is in progress this will increase monotonically. If recovery has completed then this will remain static at the location of the last WAL record received and synced to disk during recovery. If streaming replication is disabled, or if it has not yet started, the function returns NULL.
pg_last_wal_replay_lsn () → pg_lsn
Returns the last write-ahead log location that has been replayed during recovery. If recovery is still in progress this will increase monotonically. If recovery has completed then this will remain static at the location of the last WAL record applied during recovery. When the server has been started normally without recovery, the function returns NULL.
pg_last_xact_replay_timestamp () → timestamp with time zone
Returns the time stamp of the last transaction replayed during recovery. This is the time at which the commit or abort WAL record for that transaction was generated on the primary. If no transactions have been replayed during recovery, the function returns NULL. Otherwise, if recovery is still in progress this will increase monotonically. If recovery has completed then this will remain static at the time of the last transaction applied during recovery. When the server has been started normally without recovery, the function returns NULL.
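These functions are typically combined into a single status query on a standby, for instance:

```sql
-- On a standby: recovery status and how far replay lags behind receipt.
SELECT pg_is_in_recovery() AS in_recovery,
       pg_last_wal_receive_lsn() AS received,
       pg_last_wal_replay_lsn() AS replayed,
       pg_last_xact_replay_timestamp() AS last_replayed_commit;
```

On a primary started normally, the last three columns are all NULL.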
Function
Description
pg_is_wal_replay_paused () → boolean
Returns true if recovery is paused.
pg_promote ( wait boolean DEFAULT true, wait_seconds integer DEFAULT 60 ) → boolean
Promotes a standby server to primary status. With wait set to true (the default), the function waits until promotion is completed or wait_seconds seconds have passed, and returns true if promotion is successful and false otherwise. If wait is set to false, the function returns true immediately after sending a SIGUSR1 signal to the postmaster to trigger promotion.
This function is restricted to superusers by default, but other users can be granted EXECUTE to run the function.
pg_wal_replay_pause () → void
Pauses recovery. While recovery is paused, no further database changes are applied. If hot standby is active, all new queries will see the same consistent snapshot of the database, and no further query conflicts will be generated until recovery is resumed.
This function is restricted to superusers by default, but other users can be granted EXECUTE to run the function.
pg_wal_replay_resume () → void
Restarts recovery if it was paused.
This function is restricted to superusers by default, but other users can be granted EXECUTE to run the function.
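A minimal pause/resume sequence on a standby (requires superuser or granted EXECUTE):

```sql
-- Temporarily pause WAL replay, e.g. before inspecting standby data,
-- then resume it.
SELECT pg_wal_replay_pause();
SELECT pg_is_wal_replay_paused();  -- now reports true
SELECT pg_wal_replay_resume();
```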
Function
Description
pg_export_snapshot () → text
Saves the transaction's current snapshot and returns a text string identifying the snapshot. This string must be passed (outside the database) to clients that want to import the snapshot. The snapshot is available for import only until the end of the transaction that exported it.
A transaction can export more than one snapshot, if needed. Note that doing so is only useful in READ COMMITTED transactions, since in REPEATABLE READ and higher isolation levels, transactions use the same snapshot throughout their lifetime. Once a transaction has exported any snapshots, it cannot be prepared with PREPARE TRANSACTION.
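A sketch of sharing one snapshot between two sessions (the snapshot identifier shown is illustrative; use whatever value your session actually returns):

```sql
-- Session 1: export a snapshot and keep the transaction open.
BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
SELECT pg_export_snapshot();   -- returns an identifier, e.g. '00000003-0000001B-1'

-- Session 2: import that snapshot so both sessions see identical data.
BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
SET TRANSACTION SNAPSHOT '00000003-0000001B-1';  -- value from session 1
```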
Function
Description
pg_create_physical_replication_slot ( slot_name name [, immediately_reserve boolean, temporary boolean ] ) → record ( slot_name name, lsn pg_lsn )
Creates a new physical replication slot named slot_name. The optional second parameter, when true, specifies that the LSN for this replication slot be reserved immediately; otherwise the LSN is reserved on first connection from a streaming replication client. Streaming changes from a physical slot is only possible with the streaming-replication protocol — see Section 52.4. The optional third parameter, temporary, when set to true, specifies that the slot should not be permanently stored to disk and is only meant for use by the current session. Temporary slots are also released upon any error. This function corresponds to the replication protocol command CREATE_REPLICATION_SLOT ... PHYSICAL.
pg_drop_replication_slot ( slot_name name ) → void
Drops the physical or logical replication slot named slot_name. Same as replication protocol command DROP_REPLICATION_SLOT. For logical slots, this must be called while connected to the same database the slot was created on.
pg_create_logical_replication_slot ( slot_name name, plugin name [, temporary boolean ] ) → record ( slot_name name, lsn pg_lsn )
Creates a new logical (decoding) replication slot named slot_name using the output plugin plugin. The optional third parameter, temporary, when set to true, specifies that the slot should not be permanently stored to disk and is only meant for use by the current session. Temporary slots are also released upon any error. A call to this function has the same effect as the replication protocol command CREATE_REPLICATION_SLOT ... LOGICAL.
pg_copy_physical_replication_slot ( src_slot_name name, dst_slot_name name [, temporary boolean ] ) → record ( slot_name name, lsn pg_lsn )
Copies an existing physical replication slot named src_slot_name to a physical replication slot named dst_slot_name. The copied physical slot starts to reserve WAL from the same LSN as the source slot. temporary is optional. If temporary is omitted, the same value as the source slot is used.
pg_copy_logical_replication_slot ( src_slot_name name, dst_slot_name name [, temporary boolean [, plugin name ]] ) → record ( slot_name name, lsn pg_lsn )
Copies an existing logical replication slot named src_slot_name to a logical replication slot named dst_slot_name, optionally changing the output plugin and persistence. The copied logical slot starts from the same LSN as the source logical slot. Both temporary and plugin are optional; if they are omitted, the values of the source slot are used.
pg_logical_slot_get_changes ( slot_name name, upto_lsn pg_lsn, upto_nchanges integer, VARIADIC options text[] ) → setof record ( lsn pg_lsn, xid xid, data text )
Returns changes in the slot slot_name, starting from the point from which changes have been consumed last. If upto_lsn and upto_nchanges are NULL, logical decoding will continue until end of WAL. If upto_lsn is non-NULL, decoding will include only those transactions which commit prior to the specified LSN. If upto_nchanges is non-NULL, decoding will stop when the number of rows produced by decoding exceeds the specified value. Note, however, that the actual number of rows returned may be larger, since this limit is only checked after adding the rows produced when decoding each new transaction commit.
pg_logical_slot_peek_changes ( slot_name name, upto_lsn pg_lsn, upto_nchanges integer, VARIADIC options text[] ) → setof record ( lsn pg_lsn, xid xid, data text )
Behaves just like the pg_logical_slot_get_changes() function, except that changes are not consumed; that is, they will be returned again on future calls.
pg_logical_slot_get_binary_changes ( slot_name name, upto_lsn pg_lsn, upto_nchanges integer, VARIADIC options text[] ) → setof record ( lsn pg_lsn, xid xid, data bytea )
Behaves just like the pg_logical_slot_get_changes() function, except that changes are returned as bytea.
pg_logical_slot_peek_binary_changes ( slot_name name, upto_lsn pg_lsn, upto_nchanges integer, VARIADIC options text[] ) → setof record ( lsn pg_lsn, xid xid, data bytea )
Behaves just like the pg_logical_slot_peek_changes() function, except that changes are returned as bytea.
pg_replication_slot_advance ( slot_name name, upto_lsn pg_lsn ) → record ( slot_name name, end_lsn pg_lsn )
Advances the current confirmed position of a replication slot named slot_name. The slot will not be moved backwards, and it will not be moved beyond the current insert location. Returns the name of the slot and the actual position that it was advanced to. The updated slot position information is written out at the next checkpoint if any advancing is done. So in the event of a crash, the slot may return to an earlier position.
pg_replication_origin_create ( node_name text ) → oid
Creates a replication origin with the given external name, and returns the internal ID assigned to it.
pg_replication_origin_drop ( node_name text ) → void
Deletes a previously-created replication origin, including any associated replay progress.
pg_replication_origin_oid ( node_name text ) → oid
Looks up a replication origin by name and returns the internal ID. If no such replication origin is found, an error is thrown.
pg_replication_origin_session_setup ( node_name text ) → void
Marks the current session as replaying from the given origin, allowing replay progress to be tracked. Can only be used if no origin is currently selected. Use pg_replication_origin_session_reset to undo.
pg_replication_origin_session_reset () → void
Cancels the effects of pg_replication_origin_session_setup().
pg_replication_origin_session_is_setup () → boolean
Returns true if a replication origin has been selected in the current session.
pg_replication_origin_session_progress ( flush boolean ) → pg_lsn
Returns the replay location for the replication origin selected in the current session. The parameter flush determines whether the corresponding local transaction will be guaranteed to have been flushed to disk or not.
pg_replication_origin_xact_setup ( origin_lsn pg_lsn, origin_timestamp timestamp with time zone ) → void
Marks the current transaction as replaying a transaction that has committed at the given LSN and timestamp. Can only be called when a replication origin has been selected using pg_replication_origin_session_setup.
pg_replication_origin_xact_reset () → void
Cancels the effects of pg_replication_origin_xact_setup().
pg_replication_origin_advance ( node_name text, lsn pg_lsn ) → void
Sets replication progress for the given node to the given location. This is primarily useful for setting up the initial location, or setting a new location after configuration changes and similar. Be aware that careless use of this function can lead to inconsistently replicated data.
pg_replication_origin_progress ( node_name text, flush boolean ) → pg_lsn
Returns the replay location for the given replication origin. The parameter flush determines whether the corresponding local transaction will be guaranteed to have been flushed to disk or not.
pg_logical_emit_message ( transactional boolean, prefix text, content text ) → pg_lsn
pg_logical_emit_message ( transactional boolean, prefix text, content bytea ) → pg_lsn
Emits a logical decoding message. This can be used to pass generic messages to logical decoding plugins through WAL. The transactional parameter specifies if the message should be part of the current transaction, or if it should be written immediately and decoded as soon as the logical decoder reads the record. The prefix parameter is a textual prefix that can be used by logical decoding plugins to easily recognize messages that are interesting for them. The content parameter is the content of the message, given either in text or binary form.
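A sketch of observing logical decoding from SQL, assuming wal_level is set to logical; the slot and table names are illustrative, and test_decoding is the example output plugin shipped with PostgreSQL:

```sql
-- Create a temporary logical slot, make a change, then read the
-- decoded change stream (consuming it).
SELECT pg_create_logical_replication_slot('demo_slot', 'test_decoding', true);
CREATE TABLE demo (id int);
INSERT INTO demo VALUES (1);
SELECT lsn, xid, data
FROM pg_logical_slot_get_changes('demo_slot', NULL, NULL);
```

Using pg_logical_slot_peek_changes instead would return the same rows without consuming them.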
Function
Description
pg_column_size ( "any" ) → integer
Shows the number of bytes used to store any individual data value. If applied directly to a table column value, this reflects any compression that was done.
pg_database_size ( name ) → bigint
pg_database_size ( oid ) → bigint
Computes the total disk space used by the database with the specified name or OID. To use this function, you must have CONNECT privilege on the specified database (which is granted by default) or be a member of the pg_read_all_stats role.
pg_indexes_size ( regclass ) → bigint
Computes the total disk space used by indexes attached to the specified table.
pg_relation_size ( relation regclass [, fork text ] ) → bigint
Computes the disk space used by one “fork” of the specified relation. (Note that for most purposes it is more convenient to use the higher-level functions pg_total_relation_size or pg_table_size, which sum the sizes of all forks.) With one argument, this returns the size of the main data fork of the relation. The second argument can be provided to specify which fork to examine:
main returns the size of the main data fork of the relation.
fsm returns the size of the Free Space Map (see Section 68.3) associated with the relation.
vm returns the size of the Visibility Map (see Section 68.4) associated with the relation.
init returns the size of the initialization fork, if any, associated with the relation.
pg_size_bytes ( text ) → bigint
Converts a size in human-readable format (as returned by pg_size_pretty) into bytes.
pg_size_pretty ( bigint ) → text
pg_size_pretty ( numeric ) → text
Converts a size in bytes into a more easily human-readable format with size units (bytes, kB, MB, GB or TB as appropriate). Note that the units are powers of 2 rather than powers of 10, so 1kB is 1024 bytes, 1MB is 1024² = 1048576 bytes, and so on.
pg_table_size ( regclass ) → bigint
Computes the disk space used by the specified table, excluding indexes (but including its TOAST table if any, free space map, and visibility map).
pg_tablespace_size ( name ) → bigint
pg_tablespace_size ( oid ) → bigint
Computes the total disk space used in the tablespace with the specified name or OID. To use this function, you must have CREATE privilege on the specified tablespace or be a member of the pg_read_all_stats role, unless it is the default tablespace for the current database.
pg_total_relation_size ( regclass ) → bigint
Computes the total disk space used by the specified table, including all indexes and TOAST data. The result is equivalent to pg_table_size + pg_indexes_size.
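A common use of these functions is finding the largest tables in a database, for example:

```sql
-- Ten largest ordinary tables in the current database, with
-- human-readable totals (heap + indexes + TOAST).
SELECT c.oid::regclass AS table_name,
       pg_size_pretty(pg_total_relation_size(c.oid)) AS total_size
FROM pg_class c
WHERE c.relkind = 'r'
ORDER BY pg_total_relation_size(c.oid) DESC
LIMIT 10;
```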
Function
Description
pg_relation_filenode ( relation regclass ) → oid
Returns the “filenode” number currently assigned to the specified relation. The filenode is the base component of the file name(s) used for the relation (see Section 68.1 for more information). For most relations the result is the same as pg_class.relfilenode, but for certain system catalogs relfilenode is zero and this function must be used to get the correct value. The function returns NULL if passed a relation that does not have storage, such as a view.
pg_relation_filepath ( relation regclass ) → text
Returns the entire file path name (relative to the database cluster's data directory, PGDATA) of the relation.
pg_filenode_relation ( tablespace oid, filenode oid ) → regclass
Returns a relation's OID given the tablespace OID and filenode it is stored under. This is essentially the inverse mapping of pg_relation_filepath. For a relation in the database's default tablespace, the tablespace can be specified as zero. Returns NULL if no relation in the current database is associated with the given values.
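For example, mapping a catalog table to its on-disk file and back:

```sql
-- Where is pg_class stored on disk?
SELECT pg_relation_filenode('pg_class') AS filenode,
       pg_relation_filepath('pg_class') AS path;

-- Inverse mapping: tablespace 0 means the database's default tablespace.
SELECT pg_filenode_relation(0, pg_relation_filenode('pg_class'));
```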
Function
Description
pg_collation_actual_version ( oid ) → text
Returns the actual version of the collation object as it is currently installed in the operating system. If this is different from the value in pg_collation.collversion, then objects depending on the collation might need to be rebuilt. See also ALTER COLLATION.
pg_import_system_collations ( schema regnamespace ) → integer
Adds collations to the system catalog pg_collation based on all the locales it finds in the operating system. This is what initdb uses; see Section 23.2.2 for more details. If additional locales are installed into the operating system later on, this function can be run again to add collations for the new locales. Locales that match existing entries in pg_collation will be skipped. (But collation objects based on locales that are no longer present in the operating system are not removed by this function.) The schema parameter would typically be pg_catalog, but that is not a requirement; the collations could be installed into some other schema as well. The function returns the number of new collation objects it created.
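For example, after installing new OS locales:

```sql
-- Pick up any locales added to the OS since initdb; returns the number
-- of new collation objects created (0 if nothing new was found).
SELECT pg_import_system_collations('pg_catalog');
```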
Function
Description
pg_partition_tree ( regclass ) → setof record ( relid regclass, parentrelid regclass, isleaf boolean, level integer )
Lists the tables or indexes in the partition tree of the given partitioned table or partitioned index, with one row for each partition. Information provided includes the OID of the partition, the OID of its immediate parent, a boolean value telling if the partition is a leaf, and an integer telling its level in the hierarchy. The level value is 0 for the input table or index, 1 for its immediate child partitions, 2 for their partitions, and so on. Returns no rows if the relation does not exist or is not a partition or partitioned table.
pg_partition_ancestors ( regclass ) → setof regclass
Lists the ancestor relations of the given partition, including the relation itself. Returns no rows if the relation does not exist or is not a partition or partitioned table.
pg_partition_root ( regclass ) → regclass
Returns the top-most parent of the partition tree to which the given relation belongs. Returns NULL if the relation does not exist or is not a partition or partitioned table.
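For example, to check the total size of the data contained in a partitioned table (the table name measurement is illustrative):

```sql
-- Sum the sizes of all leaf partitions under 'measurement'.
SELECT pg_size_pretty(sum(pg_relation_size(relid))) AS total_size
FROM pg_partition_tree('measurement')
WHERE isleaf;
```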
Function
Description
brin_summarize_new_values ( index regclass ) → integer
Scans the specified BRIN index to find page ranges in the base table that are not currently summarized by the index; for any such range it creates a new summary index tuple by scanning those table pages. Returns the number of new page range summaries that were inserted into the index.
brin_summarize_range ( index regclass, blockNumber bigint ) → integer
Summarizes the page range covering the given block, if not already summarized. This is like brin_summarize_new_values except that it only processes the page range that covers the given table block number.
brin_desummarize_range ( index regclass, blockNumber bigint ) → void
Removes the BRIN index tuple that summarizes the page range covering the given table block, if there is one.
gin_clean_pending_list ( index regclass ) → bigint
Cleans up the “pending” list of the specified GIN index by moving entries in it, in bulk, to the main GIN data structure. Returns the number of pages removed from the pending list. If the argument is a GIN index built with the fastupdate option disabled, no cleanup happens and the result is zero, because the index doesn't have a pending list. See Section 66.4.1 and Section 66.5 for details about the pending list and fastupdate option.
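A minimal invocation (the index name brin_idx is illustrative):

```sql
-- Summarize any not-yet-summarized page ranges of a BRIN index;
-- returns the number of new summaries inserted.
SELECT brin_summarize_new_values('brin_idx');
```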
Function
Description
pg_ls_dir ( dirname text [, missing_ok boolean, include_dot_dirs boolean ] ) → setof text
Returns the names of all files (and directories and other special files) in the specified directory. The include_dot_dirs parameter indicates whether “.” and “..” are to be included in the result set; the default is to exclude them. Including them can be useful when missing_ok is true, to distinguish an empty directory from a non-existent directory.
This function is restricted to superusers by default, but other users can be granted EXECUTE to run the function.
pg_ls_logdir () → setof record ( name text, size bigint, modification timestamp with time zone )
Returns the name, size, and last modification time (mtime) of each ordinary file in the server's log directory. Filenames beginning with a dot, directories, and other special files are excluded.
This function is restricted to superusers and members of the pg_monitor role by default, but other users can be granted EXECUTE to run the function.
pg_ls_waldir () → setof record ( name text, size bigint, modification timestamp with time zone )
Returns the name, size, and last modification time (mtime) of each ordinary file in the server's write-ahead log (WAL) directory. Filenames beginning with a dot, directories, and other special files are excluded.
This function is restricted to superusers and members of the pg_monitor role by default, but other users can be granted EXECUTE to run the function.
pg_ls_archive_statusdir () → setof record ( name text, size bigint, modification timestamp with time zone )
Returns the name, size, and last modification time (mtime) of each ordinary file in the server's WAL archive status directory (pg_wal/archive_status). Filenames beginning with a dot, directories, and other special files are excluded.
This function is restricted to superusers and members of the pg_monitor role by default, but other users can be granted EXECUTE to run the function.
pg_ls_tmpdir ( [ tablespace oid ] ) → setof record ( name text, size bigint, modification timestamp with time zone )
Returns the name, size, and last modification time (mtime) of each ordinary file in the temporary file directory for the specified tablespace. If tablespace is not provided, the pg_default tablespace is examined. Filenames beginning with a dot, directories, and other special files are excluded.
This function is restricted to superusers and members of the pg_monitor role by default, but other users can be granted EXECUTE to run the function.
pg_read_file ( filename text [, offset bigint, length bigint [, missing_ok boolean ]] ) → text
Returns all or part of a text file, starting at the given byte offset, returning at most length bytes (less if the end of file is reached first). If offset is negative, it is relative to the end of the file. If offset and length are omitted, the entire file is returned. The bytes read from the file are interpreted as a string in the database's encoding; an error is thrown if they are not valid in that encoding.
This function is restricted to superusers by default, but other users can be granted EXECUTE to run the function.
pg_read_binary_file ( filename text [, offset bigint, length bigint [, missing_ok boolean ]] ) → bytea
Returns all or part of a file. This function is identical to pg_read_file except that it can read arbitrary binary data, returning the result as bytea not text; accordingly, no encoding checks are performed.
This function is restricted to superusers by default, but other users can be granted EXECUTE to run the function.
In combination with the convert_from function, this function can be used to read a text file in a specified encoding and convert to the database's encoding:
pg_stat_file ( filename text [, missing_ok boolean ] ) → record ( size bigint, access timestamp with time zone, modification timestamp with time zone, change timestamp with time zone, creation timestamp with time zone, isdir boolean )
Returns a record containing the file's size, last access time stamp, last modification time stamp, last file status change time stamp (Unix platforms only), file creation time stamp (Windows only), and a flag indicating if it is a directory.
This function is restricted to superusers by default, but other users can be granted EXECUTE to run the function.
Function
Description
pg_advisory_lock ( key bigint ) → void
pg_advisory_lock ( key1 integer, key2 integer ) → void
Obtains an exclusive session-level advisory lock, waiting if necessary.
pg_advisory_lock_shared ( key bigint ) → void
pg_advisory_lock_shared ( key1 integer, key2 integer ) → void
Obtains a shared session-level advisory lock, waiting if necessary.
pg_advisory_unlock ( key bigint ) → boolean
pg_advisory_unlock ( key1 integer, key2 integer ) → boolean
Releases a previously-acquired exclusive session-level advisory lock. Returns true if the lock is successfully released. If the lock was not held, false is returned, and in addition, an SQL warning will be reported by the server.
pg_advisory_unlock_all () → void
Releases all session-level advisory locks held by the current session. (This function is implicitly invoked at session end, even if the client disconnects ungracefully.)
pg_advisory_unlock_shared ( key bigint ) → boolean
pg_advisory_unlock_shared ( key1 integer, key2 integer ) → boolean
Releases a previously-acquired shared session-level advisory lock. Returns true if the lock is successfully released. If the lock was not held, false is returned, and in addition, an SQL warning will be reported by the server.
pg_advisory_xact_lock ( key bigint ) → void
pg_advisory_xact_lock ( key1 integer, key2 integer ) → void
Obtains an exclusive transaction-level advisory lock, waiting if necessary.
pg_advisory_xact_lock_shared ( key bigint ) → void
pg_advisory_xact_lock_shared ( key1 integer, key2 integer ) → void
Obtains a shared transaction-level advisory lock, waiting if necessary.
pg_try_advisory_lock ( key bigint ) → boolean
pg_try_advisory_lock ( key1 integer, key2 integer ) → boolean
Obtains an exclusive session-level advisory lock if available. This will either obtain the lock immediately and return true, or return false without waiting if the lock cannot be acquired immediately.
pg_try_advisory_lock_shared ( key bigint ) → boolean
pg_try_advisory_lock_shared ( key1 integer, key2 integer ) → boolean
Obtains a shared session-level advisory lock if available. This will either obtain the lock immediately and return true, or return false without waiting if the lock cannot be acquired immediately.
pg_try_advisory_xact_lock ( key bigint ) → boolean
pg_try_advisory_xact_lock ( key1 integer, key2 integer ) → boolean
Obtains an exclusive transaction-level advisory lock if available. This will either obtain the lock immediately and return true, or return false without waiting if the lock cannot be acquired immediately.
pg_try_advisory_xact_lock_shared ( key bigint ) → boolean
pg_try_advisory_xact_lock_shared ( key1 integer, key2 integer ) → boolean
Obtains a shared transaction-level advisory lock if available. This will either obtain the lock immediately and return true, or return false without waiting if the lock cannot be acquired immediately.
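A typical pattern is using a try-lock as an application-level mutex (the key 42 is an arbitrary, application-chosen identifier):

```sql
-- Ensure only one worker runs a given job at a time.
SELECT pg_try_advisory_lock(42) AS got_lock;
-- If got_lock is true, do the work, then release the session-level lock:
SELECT pg_advisory_unlock(42);
```

The transaction-level variants (pg_advisory_xact_lock and friends) need no explicit release; they are dropped automatically at commit or rollback.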
current_catalog → name
name of current database (called “catalog” in the SQL standard)
current_database() → name
name of current database
current_query() → text
text of the currently executing query, as submitted by the client (might contain more than one statement)
current_role → name
equivalent to current_user
current_schema [()] → name
name of current schema
current_schemas(boolean) → name[]
names of schemas in search path, optionally including implicit schemas
current_user → name
user name of current execution context
inet_client_addr() → inet
address of the remote connection
inet_client_port() → int
port of the remote connection
inet_server_addr() → inet
address of the local connection
inet_server_port() → int
port of the local connection
pg_backend_pid() → int
process ID of the server process attached to the current session
pg_blocking_pids(int) → int[]
process ID(s) that are blocking the specified server process ID from acquiring a lock
pg_conf_load_time() → timestamp with time zone
configuration load time
pg_current_logfile([text]) → text
primary log file name, or log in the requested format, currently in use by the logging collector
pg_my_temp_schema() → oid
OID of session's temporary schema, or 0 if none
pg_is_other_temp_schema(oid) → boolean
is schema another session's temporary schema?
pg_jit_available() → boolean
is a JIT compiler extension available (see Chapter 31) and is the jit configuration parameter set to on?
pg_listening_channels() → setof text
channel names that the session is currently listening on
pg_notification_queue_usage() → double
fraction of the asynchronous notification queue currently occupied (0-1)
pg_postmaster_start_time() → timestamp with time zone
server start time
pg_safe_snapshot_blocking_pids(int) → int[]
process ID(s) that are blocking the specified server process ID from acquiring a safe snapshot
pg_trigger_depth() → int
current nesting level of PostgreSQL triggers (0 if not called, directly or indirectly, from inside a trigger)
session_user → name
session user name
user → name
equivalent to current_user
version() → text
PostgreSQL version information. See also server_version_num for a machine-readable version.
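Several of these can be combined into a quick session summary:

```sql
-- Who am I, where am I connected, and what am I running?
SELECT current_database() AS db,
       current_user,
       session_user,
       pg_backend_pid() AS backend_pid,
       inet_client_addr() AS client_addr,
       version();
```

Note that inet_client_addr() is NULL for connections over a Unix-domain socket.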
has_any_column_privilege(user, table, privilege) → boolean
does user have privilege for any column of table
has_any_column_privilege(table, privilege) → boolean
does current user have privilege for any column of table
has_column_privilege(user, table, column, privilege) → boolean
does user have privilege for column
has_column_privilege(table, column, privilege) → boolean
does current user have privilege for column
has_database_privilege(user, database, privilege) → boolean
does user have privilege for database
has_database_privilege(database, privilege) → boolean
does current user have privilege for database
has_foreign_data_wrapper_privilege(user, fdw, privilege) → boolean
does user have privilege for foreign-data wrapper
has_foreign_data_wrapper_privilege(fdw, privilege) → boolean
does current user have privilege for foreign-data wrapper
has_function_privilege(user, function, privilege) → boolean
does user have privilege for function
has_function_privilege(function, privilege) → boolean
does current user have privilege for function
has_language_privilege(user, language, privilege) → boolean
does user have privilege for language
has_language_privilege(language, privilege) → boolean
does current user have privilege for language
has_schema_privilege(user, schema, privilege) → boolean
does user have privilege for schema
has_schema_privilege(schema, privilege) → boolean
does current user have privilege for schema
has_sequence_privilege(user, sequence, privilege) → boolean
does user have privilege for sequence
has_sequence_privilege(sequence, privilege) → boolean
does current user have privilege for sequence
has_server_privilege(user, server, privilege) → boolean
does user have privilege for foreign server
has_server_privilege(server, privilege) → boolean
does current user have privilege for foreign server
has_table_privilege(user, table, privilege) → boolean
does user have privilege for table
has_table_privilege(table, privilege) → boolean
does current user have privilege for table
has_tablespace_privilege(user, tablespace, privilege) → boolean
does user have privilege for tablespace
has_tablespace_privilege(tablespace, privilege) → boolean
does current user have privilege for tablespace
has_type_privilege(user, type, privilege) → boolean
does user have privilege for type
has_type_privilege(type, privilege) → boolean
does current user have privilege for type
pg_has_role(user, role, privilege) → boolean
does user have privilege for role
pg_has_role(role, privilege) → boolean
does current user have privilege for role
row_security_active(table) → boolean
does current user have row-level security active for table
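For example, checking several privileges on one table at once:

```sql
-- Which of these privileges does the current user hold on pg_class?
SELECT priv,
       has_table_privilege('pg_class', priv) AS granted
FROM unnest(ARRAY['SELECT', 'INSERT', 'UPDATE', 'DELETE']) AS t(priv);
```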
= (equal)
'calvin=r*w/hobbes'::aclitem = 'calvin=r*w*/hobbes'::aclitem → f
@> (contains element)
'{calvin=r*w/hobbes,hobbes=r*w*/postgres}'::aclitem[] @> 'calvin=r*w/hobbes'::aclitem → t
~ (contains element)
'{calvin=r*w/hobbes,hobbes=r*w*/postgres}'::aclitem[] ~ 'calvin=r*w/hobbes'::aclitem → t
acldefault(type, ownerId) → aclitem[]
get the default access privileges for an object belonging to ownerId
aclexplode(aclitem[]) → setof record
get aclitem array as tuples
makeaclitem(grantee, grantor, privilege, grantable) → aclitem
build an aclitem from input
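For example, acldefault and aclexplode can be combined to view an ACL one grant per row (OID 10 is the bootstrap superuser in a standard installation; 'r' requests the defaults for a relation):

```sql
-- Default relation privileges for the bootstrap superuser,
-- expanded into one row per grant.
SELECT grantee, privilege_type, is_grantable
FROM aclexplode(acldefault('r', 10));
```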
pg_collation_is_visible(collation_oid) → boolean
is collation visible in search path
pg_conversion_is_visible(conversion_oid) → boolean
is conversion visible in search path
pg_function_is_visible(function_oid) → boolean
is function visible in search path
pg_opclass_is_visible(opclass_oid) → boolean
is operator class visible in search path
pg_operator_is_visible(operator_oid) → boolean
is operator visible in search path
pg_opfamily_is_visible(opclass_oid) → boolean
is operator family visible in search path
pg_statistics_obj_is_visible(stat_oid) → boolean
is statistics object visible in search path
pg_table_is_visible(table_oid) → boolean
is table visible in search path
pg_ts_config_is_visible(config_oid) → boolean
is text search configuration visible in search path
pg_ts_dict_is_visible(dict_oid) → boolean
is text search dictionary visible in search path
pg_ts_parser_is_visible(parser_oid) → boolean
is text search parser visible in search path
pg_ts_template_is_visible(template_oid) → boolean
is text search template visible in search path
pg_type_is_visible(type_oid) → boolean
is type (or domain) visible in search path
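These functions take an object OID; a regclass (or other reg*) cast is the usual way to supply it by name:

```sql
-- Is pg_class visible without schema qualification under the
-- current search_path?
SELECT pg_table_is_visible('pg_class'::regclass);
```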
format_type(type_oid, typemod) → text
get SQL name of a data type
pg_get_constraintdef(constraint_oid) → text
get definition of a constraint
pg_get_constraintdef(constraint_oid, pretty_bool) → text
get definition of a constraint
pg_get_expr(pg_node_tree, relation_oid) → text
decompile internal form of an expression, assuming that any Vars in it refer to the relation indicated by the second parameter
pg_get_expr(pg_node_tree, relation_oid, pretty_bool) → text
decompile internal form of an expression, assuming that any Vars in it refer to the relation indicated by the second parameter
pg_get_functiondef(func_oid) → text
get definition of a function or procedure
pg_get_function_arguments(func_oid) → text
get argument list of function's or procedure's definition (with default values)
pg_get_function_identity_arguments(func_oid) → text
get argument list to identify a function or procedure (without default values)
pg_get_function_result(func_oid) → text
get RETURNS clause for function (returns null for a procedure)
pg_get_indexdef(index_oid) → text
get CREATE INDEX command for index
pg_get_indexdef(index_oid, column_no, pretty_bool) → text
get CREATE INDEX command for index, or definition of just one index column when column_no is not zero
pg_get_keywords() → setof record
get list of SQL keywords and their categories
pg_get_ruledef(rule_oid) → text
get CREATE RULE command for rule
pg_get_ruledef(rule_oid, pretty_bool) → text
get CREATE RULE command for rule
pg_get_serial_sequence(table_name, column_name) → text
get name of the sequence that a serial or identity column uses
pg_get_statisticsobjdef(statobj_oid) → text
get CREATE STATISTICS command for extended statistics object
pg_get_triggerdef(trigger_oid) → text
get CREATE [ CONSTRAINT ] TRIGGER command for trigger
pg_get_triggerdef(trigger_oid, pretty_bool) → text
get CREATE [ CONSTRAINT ] TRIGGER command for trigger
pg_get_userbyid(role_oid) → name
get role name with given OID
pg_get_viewdef(view_name) → text
get underlying SELECT command for view or materialized view (deprecated)
pg_get_viewdef(view_name, pretty_bool) → text
get underlying SELECT command for view or materialized view (deprecated)
pg_get_viewdef(view_oid) → text
get underlying SELECT command for view or materialized view
pg_get_viewdef(view_oid, pretty_bool) → text
get underlying SELECT command for view or materialized view
pg_get_viewdef(view_oid, wrap_column_int) → text
get underlying SELECT command for view or materialized view; lines with fields are wrapped to specified number of columns, pretty-printing is implied
pg_index_column_has_property(index_oid, column_no, prop_name) → boolean
test whether an index column has a specified property
pg_index_has_property(index_oid, prop_name) → boolean
test whether an index has a specified property
pg_indexam_has_property(am_oid, prop_name) → boolean
test whether an index access method has a specified property
pg_options_to_table(reloptions) → setof record
get the set of storage option name/value pairs
pg_tablespace_databases(tablespace_oid) → setof oid
get the set of database OIDs that have objects in the tablespace
pg_tablespace_location(
tablespace_oid
)
text
get the path in the file system that this tablespace is located in
pg_typeof(
any
)
regtype
get the data type of any value
collation for (
any
)
text
get the collation of the argument
to_regclass(
rel_name
)
regclass
get the OID of the named relation
to_regproc(
func_name
)
regproc
get the OID of the named function
to_regprocedure(
func_name
)
regprocedure
get the OID of the named function
to_regoper(
operator_name
)
regoper
get the OID of the named operator
to_regoperator(
operator_name
)
regoperator
get the OID of the named operator
to_regtype(
type_name
)
regtype
get the OID of the named type
to_regnamespace(
schema_name
)
regnamespace
get the OID of the named schema
to_regrole(
role_name
)
regrole
get the OID of the named role
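A few of these functions in action. Note that the `to_reg*` family returns NULL for an unknown name, whereas a direct cast such as `'name'::regclass` raises an error — that is the main reason to prefer them in queries over catalogs of uncertain content:

```sql
-- Reconstruct a view's SELECT statement, pretty-printed
SELECT pg_get_viewdef('pg_stat_activity'::regclass, true);

-- Safe name-to-OID lookup: NULL instead of an error for unknown names
SELECT to_regclass('no_such_table');  -- NULL
SELECT to_regclass('pg_class');       -- pg_class

-- What type does an expression have?
SELECT pg_typeof(1 + 2.5);            -- numeric
```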
**Index Column Properties**

| Name | Description |
| --- | --- |
| `asc` | Does the column sort in ascending order on a forward scan? |
| `desc` | Does the column sort in descending order on a forward scan? |
| `nulls_first` | Does the column sort with nulls first on a forward scan? |
| `nulls_last` | Does the column sort with nulls last on a forward scan? |
| `orderable` | Does the column possess any defined sort ordering? |
| `distance_orderable` | Can the column be scanned in order by a "distance" operator, for example `ORDER BY col <-> constant`? |
| `returnable` | Can the column value be returned by an index-only scan? |
| `search_array` | Does the column natively support `col = ANY(array)` searches? |
| `search_nulls` | Does the column support `IS NULL` and `IS NOT NULL` searches? |

**Index Properties**

| Name | Description |
| --- | --- |
| `clusterable` | Can the index be used in a `CLUSTER` command? |
| `index_scan` | Does the index support plain (non-bitmap) scans? |
| `bitmap_scan` | Does the index support bitmap scans? |
| `backward_scan` | Can the scan direction be changed in mid-scan (to support `FETCH BACKWARD` on a cursor without needing materialization)? |

**Index Access Method Properties**

| Name | Description |
| --- | --- |
| `can_order` | Does the access method support `ASC`, `DESC` and related keywords in `CREATE INDEX`? |
| `can_unique` | Does the access method support unique indexes? |
| `can_multi_col` | Does the access method support indexes with multiple columns? |
| `can_exclude` | Does the access method support exclusion constraints? |
| `can_include` | Does the access method support the `INCLUDE` clause of `CREATE INDEX`? |
**Object Information and Addressing Functions**

| Name | Return Type | Description |
| --- | --- | --- |
| `pg_describe_object(classid oid, objid oid, objsubid integer)` | `text` | get description of a database object |
| `pg_identify_object(classid oid, objid oid, objsubid integer)` | `type text, schema text, name text, identity text` | get identity of a database object |
| `pg_identify_object_as_address(classid oid, objid oid, objsubid integer)` | `type text, object_names text[], object_args text[]` | get external representation of a database object's address |
| `pg_get_object_address(type text, object_names text[], object_args text[])` | `classid oid, objid oid, objsubid integer` | get address of a database object from its external representation |
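These functions work on the `(classid, objid, objsubid)` triples used in catalogs such as `pg_depend`: `classid` is the OID of the system catalog the object lives in, `objid` is the object's own OID. A sketch using the `pg_am` system table (since tables live in `pg_class`, that is the `classid`):

```sql
-- Human-readable description of an object given its catalog address
SELECT pg_describe_object('pg_class'::regclass, 'pg_am'::regclass, 0);

-- External (name-based) representation of the same address...
SELECT * FROM pg_identify_object_as_address('pg_class'::regclass,
                                            'pg_am'::regclass, 0);

-- ...and the round trip back to OIDs
SELECT * FROM pg_get_object_address('table', '{pg_catalog,pg_am}', '{}');
```

The external representation survives dump/restore across clusters (where OIDs change), which is its main purpose.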
**Comment Information Functions**

| Name | Return Type | Description |
| --- | --- | --- |
| `col_description(table_oid, column_number)` | `text` | get comment for a table column |
| `obj_description(object_oid, catalog_name)` | `text` | get comment for a database object |
| `obj_description(object_oid)` | `text` | get comment for a database object (deprecated) |
| `shobj_description(object_oid, catalog_name)` | `text` | get comment for a shared database object |
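These retrieve comments stored by `COMMENT ON`. A minimal sketch (`mytable` is a hypothetical table created for the example):

```sql
CREATE TABLE mytable (id int);
COMMENT ON TABLE mytable IS 'demo table';
COMMENT ON COLUMN mytable.id IS 'surrogate key';

-- Table comment: obj_description needs the catalog the object lives in
SELECT obj_description('mytable'::regclass, 'pg_class');   -- demo table

-- Column comment: addressed by table OID and column number (1-based)
SELECT col_description('mytable'::regclass, 1);            -- surrogate key
```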
**Transaction IDs and Snapshots**

| Name | Return Type | Description |
| --- | --- | --- |
| `txid_current()` | `bigint` | get current transaction ID, assigning a new one if the current transaction does not have one |
| `txid_current_if_assigned()` | `bigint` | same as `txid_current()` but returns null instead of assigning a new transaction ID if none is already assigned |
| `txid_current_snapshot()` | `txid_snapshot` | get current snapshot |
| `txid_snapshot_xip(txid_snapshot)` | `setof bigint` | get in-progress transaction IDs in snapshot |
| `txid_snapshot_xmax(txid_snapshot)` | `bigint` | get `xmax` of snapshot |
| `txid_snapshot_xmin(txid_snapshot)` | `bigint` | get `xmin` of snapshot |
| `txid_visible_in_snapshot(bigint, txid_snapshot)` | `boolean` | is transaction ID visible in snapshot? (do not use with subtransaction ids) |
| `txid_status(bigint)` | `text` | report the status of the given transaction: `committed`, `aborted`, `in progress`, or null if the transaction ID is too old |
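A short interactive sketch showing the difference between the assigning and non-assigning variants:

```sql
BEGIN;
-- A read-only transaction has no XID yet, so this returns NULL
SELECT txid_current_if_assigned();

-- This forces assignment of an XID and returns it
SELECT txid_current();

-- Our own transaction is still open at this point
SELECT txid_status(txid_current());   -- in progress

-- Snapshot in xmin:xmax:xip_list text form, e.g. 612:615:613,614
SELECT txid_current_snapshot();
COMMIT;
```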
**Snapshot Components**

| Name | Description |
| --- | --- |
| `xmin` | Earliest transaction ID (txid) that is still active. All earlier transactions will either be committed and visible, or rolled back and dead. |
| `xmax` | First as-yet-unassigned txid. All txids greater than or equal to this are not yet started as of the time of the snapshot, and thus invisible. |
| `xip_list` | Active txids at the time of the snapshot. The list includes only those active txids between `xmin` and `xmax`; there might be active txids higher than `xmax`. A txid that is `xmin <= txid < xmax` and not in this list was already completed at the time of the snapshot, and thus either visible or dead according to its commit status. The list does not include txids of subtransactions. |
**Committed Transaction Information**

| Name | Return Type | Description |
| --- | --- | --- |
| `pg_xact_commit_timestamp(xid)` | `timestamp with time zone` | get commit timestamp of a transaction |
| `pg_last_committed_xact()` | `xid xid, timestamp timestamp with time zone` | get transaction ID and commit timestamp of latest committed transaction |
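Both functions depend on the `track_commit_timestamp` configuration parameter being `on` (it defaults to `off` and changing it requires a server restart); with it disabled, they raise an error. A sketch, assuming the parameter is enabled and using a hypothetical transaction ID:

```sql
-- Requires track_commit_timestamp = on in postgresql.conf
SELECT * FROM pg_last_committed_xact();

-- Commit time of a specific (hypothetical) transaction ID
SELECT pg_xact_commit_timestamp('1234'::xid);
```

Timestamps are only recorded for transactions committed while the parameter was enabled, so older XIDs return NULL even after it is turned on.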
**Control Data Functions**

| Name | Return Type | Description |
| --- | --- | --- |
| `pg_control_checkpoint()` | `record` | Returns information about current checkpoint state. |
| `pg_control_system()` | `record` | Returns information about current control file state. |
| `pg_control_init()` | `record` | Returns information about cluster initialization state. |
| `pg_control_recovery()` | `record` | Returns information about recovery state. |
**`pg_control_checkpoint` Columns**

| Column Name | Data Type |
| --- | --- |
| `checkpoint_lsn` | `pg_lsn` |
| `redo_lsn` | `pg_lsn` |
| `redo_wal_file` | `text` |
| `timeline_id` | `integer` |
| `prev_timeline_id` | `integer` |
| `full_page_writes` | `boolean` |
| `next_xid` | `text` |
| `next_oid` | `oid` |
| `next_multixact_id` | `xid` |
| `next_multi_offset` | `xid` |
| `oldest_xid` | `xid` |
| `oldest_xid_dbid` | `oid` |
| `oldest_active_xid` | `xid` |
| `oldest_multi_xid` | `xid` |
| `oldest_multi_dbid` | `oid` |
| `oldest_commit_ts_xid` | `xid` |
| `newest_commit_ts_xid` | `xid` |
| `checkpoint_time` | `timestamp with time zone` |
**`pg_control_system` Columns**

| Column Name | Data Type |
| --- | --- |
| `pg_control_version` | `integer` |
| `catalog_version_no` | `integer` |
| `system_identifier` | `bigint` |
| `pg_control_last_modified` | `timestamp with time zone` |
**`pg_control_init` Columns**

| Column Name | Data Type |
| --- | --- |
| `max_data_alignment` | `integer` |
| `database_block_size` | `integer` |
| `blocks_per_segment` | `integer` |
| `wal_block_size` | `integer` |
| `bytes_per_wal_segment` | `integer` |
| `max_identifier_length` | `integer` |
| `max_index_columns` | `integer` |
| `max_toast_chunk_size` | `integer` |
| `large_object_chunk_size` | `integer` |
| `float4_pass_by_value` | `boolean` |
| `float8_pass_by_value` | `boolean` |
| `data_page_checksum_version` | `integer` |
**`pg_control_recovery` Columns**

| Column Name | Data Type |
| --- | --- |
| `min_recovery_end_lsn` | `pg_lsn` |
| `min_recovery_end_timeline` | `integer` |
| `backup_start_lsn` | `pg_lsn` |
| `backup_end_lsn` | `pg_lsn` |
| `end_of_backup_record_required` | `boolean` |