The usual logical operators are available:
SQL uses a three-valued logic system with true, false, and null, which represents "unknown". Observe the following truth tables:
The operators AND and OR are commutative, that is, you can switch the left and right operands without affecting the result. See Section 4.2.14 for more information about the order of evaluation of subexpressions.
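The three-valued semantics can be mimicked in ordinary code. The following is an illustrative Python sketch (not part of PostgreSQL), with Python's None standing in for SQL null:

```python
# SQL three-valued logic modeled in Python; None plays the role of null.
def sql_and(a, b):
    # false AND anything is false; otherwise null if either side is null
    if a is False or b is False:
        return False
    if a is None or b is None:
        return None
    return True

def sql_or(a, b):
    # true OR anything is true; otherwise null if either side is null
    if a is True or b is True:
        return True
    if a is None or b is None:
        return None
    return False

def sql_not(a):
    # NOT null is null
    return None if a is None else not a

print(sql_and(True, None))   # None: true AND null -> null
print(sql_or(False, None))   # None: false OR null -> null
print(sql_or(True, None))    # True: true OR null -> true
print(sql_not(None))         # None: NOT null -> null
```

Note how `sql_and(False, None)` is still False and `sql_or(True, None)` is still True: a null operand only propagates when the other operand cannot decide the result on its own.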
This section describes functions and operators for examining and manipulating values of type bytea.
SQL defines some string functions that use key words, rather than commas, to separate arguments. Details appear in the first binary-string table below. PostgreSQL also provides versions of these functions that use the regular function invocation syntax (see the second binary-string table below).
The sample results shown on this page assume that the server parameter is set to escape (the traditional PostgreSQL format).
Additional binary string manipulation functions are available and are listed in the second binary-string table below. Some of them are used internally to implement the SQL-standard string functions listed in the first binary-string table below.
get_byte and set_byte number the first byte of a binary string as byte 0. get_bit and set_bit number bits from the right within each byte; for example, bit 0 is the least significant bit of the first byte, and bit 15 is the most significant bit of the second byte.
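This numbering convention can be sketched in Python (hypothetical helpers, not PostgreSQL's implementation):

```python
# Bytes are numbered from 0; bit n lives in byte n // 8 and is counted
# from that byte's least significant bit (bit n % 8).
def get_byte(data: bytes, n: int) -> int:
    return data[n]

def get_bit(data: bytes, n: int) -> int:
    return (data[n // 8] >> (n % 8)) & 1

data = bytes([0b00000001, 0b10000000])
print(get_byte(data, 0))   # 1
print(get_bit(data, 0))    # 1: least significant bit of the first byte
print(get_bit(data, 15))   # 1: most significant bit of the second byte
```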
Note that for historic reasons, the function md5 returns a hex-encoded value of type text whereas the SHA-2 functions return type bytea. Use the functions encode and decode to convert between the two, for example encode(sha256('abc'), 'hex') to get a hex-encoded text representation.
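A rough Python analogue of this distinction, using the standard hashlib module:

```python
import hashlib

# md5 is conventionally shown as hex-encoded text, while the SHA-2
# functions yield raw bytes (like bytea) that must be hex-encoded
# explicitly, as encode(sha256('abc'), 'hex') does in SQL.
md5_text = hashlib.md5(b'abc').hexdigest()    # already hex text
sha256_raw = hashlib.sha256(b'abc').digest()  # raw bytes
sha256_text = sha256_raw.hex()                # explicit hex encoding

print(md5_text)
print(sha256_text)
```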
The usual comparison operators are available, as shown in the operator table below.
<> is the standard SQL notation for "not equal". != is an alias, which is converted to <> at a very early stage of parsing. Hence, it is not possible to implement != and <> operators that do different things.
These comparison operators are available for all built-in data types that have a natural ordering, including numeric, string, and date/time types. In addition, arrays, composite types, and ranges can be compared if their component data types are comparable.
It is usually possible to compare values of related data types as well; for example integer > bigint will work. Some cases of this sort are implemented directly by "cross-type" comparison operators, but if no such operator is available, the parser will coerce the less-general type to the more-general type and apply the latter's comparison operator.
As shown above, all comparison operators are binary operators that return values of type boolean. Thus, expressions like 1 < 2 < 3 are not valid (because there is no < operator to compare a Boolean value with 3). Use the BETWEEN predicates shown below to perform range tests.
There are also some comparison predicates, as shown in the predicate table below. These behave much like operators, but have special syntax mandated by the SQL standard.
The BETWEEN predicate simplifies range tests: a BETWEEN x AND y is equivalent to a >= x AND a <= y.
Notice that BETWEEN treats the endpoint values as included in the range. BETWEEN SYMMETRIC is like BETWEEN except there is no requirement that the argument to the left of AND be less than or equal to the argument on the right. If it is not, those two arguments are automatically swapped, so that a nonempty range is always implied.
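A minimal Python sketch of these range-test semantics (null handling omitted; illustrative only):

```python
def between(a, x, y):
    # endpoints are included in the range
    return x <= a <= y

def between_symmetric(a, x, y):
    # swap the endpoints if needed so a nonempty range is always implied
    if x > y:
        x, y = y, x
    return x <= a <= y

print(between(2, 3, 1))            # False: 3..1 is an empty range
print(between_symmetric(2, 3, 1))  # True: endpoints are swapped first
```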
The various variants of BETWEEN are implemented in terms of the ordinary comparison operators, and therefore will work for any data type(s) that can be compared.
The use of AND in the BETWEEN syntax creates an ambiguity with the use of AND as a logical operator. To resolve this, only a limited set of expression types are allowed as the second argument of a BETWEEN clause. If you need to write a more complex sub-expression in BETWEEN, write parentheses around the sub-expression.
Ordinary comparison operators yield null (signifying "unknown"), not true or false, when either input is null. For example, 7 = NULL yields null, as does 7 <> NULL. When this behavior is not suitable, use the IS [ NOT ] DISTINCT FROM predicates.
For non-null inputs, IS DISTINCT FROM is the same as the <> operator. However, if both inputs are null it returns false, and if only one input is null it returns true. Similarly, IS NOT DISTINCT FROM is identical to = for non-null inputs, but it returns true when both inputs are null, and false when only one input is null. Thus, these predicates effectively act as though null were a normal data value, rather than "unknown".
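A Python sketch of these predicates, treating None as SQL null (illustrative only):

```python
def is_distinct_from(a, b):
    if a is None and b is None:
        return False           # two nulls are "not distinct"
    if a is None or b is None:
        return True            # one null, one value: distinct
    return a != b              # ordinary <> for non-null inputs

def is_not_distinct_from(a, b):
    return not is_distinct_from(a, b)

print(is_distinct_from(None, None))      # False
print(is_distinct_from(7, None))         # True
print(is_not_distinct_from(None, None))  # True
```

Unlike `=` and `<>`, these always return True or False, never None.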
To check whether a value is or is not null, use the predicates IS NULL and IS NOT NULL, or the equivalent, but nonstandard, predicates ISNULL and NOTNULL.
Do not write expression = NULL because NULL is not "equal to" NULL. (The null value represents an unknown value, and it is not known whether two unknown values are equal.)
If the expression is row-valued, then IS NULL is true when the row expression itself is null or when all the row's fields are null, while IS NOT NULL is true when the row expression itself is non-null and all the row's fields are non-null. Because of this behavior, IS NULL and IS NOT NULL do not always return inverse results for row-valued expressions; in particular, a row-valued expression that contains both null and non-null fields will return false for both tests. In some cases, it may be preferable to write row IS DISTINCT FROM NULL or row IS NOT DISTINCT FROM NULL, which will simply check whether the overall row value is null without any additional tests on the row fields.
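A Python sketch of this row-valued behavior, modeling a row as a tuple and null as None (illustrative only):

```python
def row_is_null(row):
    # true if the row itself is null, or every field is null
    return row is None or all(f is None for f in row)

def row_is_not_null(row):
    # true if the row itself is non-null and every field is non-null
    return row is not None and all(f is not None for f in row)

mixed = (1, None)              # a row with null and non-null fields
print(row_is_null(mixed))      # False
print(row_is_not_null(mixed))  # False: the two tests are not inverses
```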
Boolean values can also be tested using the predicates IS [NOT] TRUE, IS [NOT] FALSE, and IS [NOT] UNKNOWN. These will always return true or false, never a null value, even when the operand is null. A null input is treated as the logical value "unknown". Notice that IS UNKNOWN and IS NOT UNKNOWN are effectively the same as IS NULL and IS NOT NULL, respectively, except that the input expression must be of Boolean type.
See also the aggregate function string_agg and the large object functions, described elsewhere in this manual.
Some applications might expect that expression = NULL returns true if expression evaluates to the null value. It is highly recommended that these applications be modified to comply with the SQL standard. However, if that cannot be done, a configuration variable is available. If it is enabled, PostgreSQL will convert x = NULL clauses to x IS NULL.
Some comparison-related functions are also available, as shown in the comparison functions table below.
Truth table for AND and OR:

| a | b | a AND b | a OR b |
|---|---|---|---|
| TRUE | TRUE | TRUE | TRUE |
| TRUE | FALSE | FALSE | TRUE |
| TRUE | NULL | NULL | TRUE |
| FALSE | FALSE | FALSE | FALSE |
| FALSE | NULL | FALSE | NULL |
| NULL | NULL | NULL | NULL |
Truth table for NOT:

| a | NOT a |
|---|---|
| TRUE | FALSE |
| FALSE | TRUE |
| NULL | NULL |
| Function | Description |
|---|---|
| num_nonnulls(VARIADIC "any") | Returns the number of non-null arguments. |
| num_nulls(VARIADIC "any") | Returns the number of null arguments. |
SQL binary string functions:

| Function | Return Type | Description |
|---|---|---|
| string \|\| string | bytea | String concatenation |
| octet_length(string) | int | Number of bytes in binary string |
| overlay(string placing string from int [for int]) | bytea | Replace substring |
| position(substring in string) | int | Location of specified substring |
| substring(string [from int] [for int]) | bytea | Extract substring |
| trim([both] bytes from string) | bytea | Remove the longest string containing only bytes appearing in bytes from the start and end of string |

Other binary string functions:

| Function | Return Type | Description |
|---|---|---|
| btrim(string, bytes) | bytea | Remove the longest string containing only bytes appearing in bytes from the start and end of string |
| decode(string, format) | bytea | Decode binary data from textual representation in string; options for format are the same as in encode |
| encode(data, format) | text | Encode binary data into a textual representation. Supported formats are: base64, hex, escape |
| get_bit(string, offset) | int | Extract bit from string |
| get_byte(string, offset) | int | Extract byte from string |
| length(string) | int | Length of binary string |
| md5(string) | text | Calculates the MD5 hash of string, returning the result in hexadecimal |
| set_bit(string, offset, newvalue) | bytea | Set bit in string |
| set_byte(string, offset, newvalue) | bytea | Set byte in string |
| sha224(bytea) | bytea | SHA-224 hash |
| sha256(bytea) | bytea | SHA-256 hash |
| sha384(bytea) | bytea | SHA-384 hash |
| sha512(bytea) | bytea | SHA-512 hash |
| Operator | Description |
|---|---|
| < | Less than |
| > | Greater than |
| <= | Less than or equal to |
| >= | Greater than or equal to |
| = | Equal |
| <> | Not equal |
| != | Not equal |
| Predicate | Description |
|---|---|
| a BETWEEN x AND y | Between (inclusive of the range endpoints). |
| a NOT BETWEEN x AND y | Not between (the negation of BETWEEN). |
| a BETWEEN SYMMETRIC x AND y | Between, after sorting the two endpoint values. |
| a NOT BETWEEN SYMMETRIC x AND y | Not between, after sorting the two endpoint values. |
| a IS DISTINCT FROM b | Not equal, treating null as a comparable value. |
| a IS NOT DISTINCT FROM b | Equal, treating null as a comparable value. |
| expression IS NULL | Test whether value is null. |
| expression IS NOT NULL | Test whether value is not null. |
| expression ISNULL | Test whether value is null (nonstandard syntax). |
| expression NOTNULL | Test whether value is not null (nonstandard syntax). |
| boolean_expression IS TRUE | Test whether boolean expression yields true. |
| boolean_expression IS NOT TRUE | Test whether boolean expression yields false or unknown. |
| boolean_expression IS FALSE | Test whether boolean expression yields false. |
| boolean_expression IS NOT FALSE | Test whether boolean expression yields true or unknown. |
| boolean_expression IS UNKNOWN | Test whether boolean expression yields unknown. |
| boolean_expression IS NOT UNKNOWN | Test whether boolean expression yields true or false. |
PostgreSQL provides a large number of functions and operators for the built-in data types. Users can also define their own functions and operators, as described in Part V. The psql commands \df and \do can be used to list all available functions and operators, respectively.
The notation used throughout this chapter to describe the argument and result data types of a function or operator is like this: repeat ( text, integer ) → text, which says that the function repeat takes one text and one integer argument and returns a result of type text. The right arrow is also used to indicate the result of an example, thus: repeat('Pg', 4) → PgPgPgPg.
If you are concerned about portability, then note that most of the functions and operators described in this chapter, with the exception of the most common arithmetic and comparison operators and some explicitly marked functions, are not specified by the SQL standard. Some other SQL database management systems provide some of these extended features, and in many cases these features are compatible and consistent between the various implementations. This chapter may also not be complete; additional functions appear in other relevant chapters of the manual.
Version: 11
PostgreSQL provides three separate approaches to pattern matching: the traditional SQL LIKE operator, the more recent SIMILAR TO operator (added in SQL:1999), and POSIX-style regular expressions. Aside from the basic "does this string match this pattern?" operators, functions are available to extract or replace matching substrings, and to split a string at matching locations.
Tip: If you have pattern matching needs that go beyond this, consider writing a user-defined function in Perl or Tcl.
While most regular-expression searches can be executed very quickly, regular expressions can be contrived that take arbitrary amounts of time and memory to process. Be wary of accepting regular-expression search patterns from untrusted sources. If you must do so, it is advisable to impose a statement timeout.
Searches using SIMILAR TO patterns have the same security hazards, since SIMILAR TO provides many of the same capabilities as POSIX-style regular expressions.
LIKE searches, being much simpler than the other two options, are safer to use with possibly-malicious pattern sources.
LIKE
The LIKE expression returns true if the string matches the supplied pattern. (As expected, the NOT LIKE expression returns false if LIKE returns true, and vice versa. An equivalent expression is NOT (string LIKE pattern).)
If pattern does not contain percent signs or underscores, then the pattern only represents the string itself; in that case LIKE acts like the equals operator. An underscore (_) in pattern stands for (matches) any single character; a percent sign (%) matches any sequence of zero or more characters.
Some examples:
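The matching rules just described can be modeled in Python. This is an illustrative sketch, not PostgreSQL's matcher; the `like` helper is hypothetical:

```python
import re

def like(string: str, pattern: str, escape: str = '\\') -> bool:
    # Translate a LIKE pattern to a regex: _ -> ., % -> .*; an escaped
    # character is taken literally. The whole string must match.
    regex, i = '', 0
    while i < len(pattern):
        c = pattern[i]
        if c == escape and i + 1 < len(pattern):
            regex += re.escape(pattern[i + 1])
            i += 2
            continue
        if c == '_':
            regex += '.'
        elif c == '%':
            regex += '.*'
        else:
            regex += re.escape(c)
        i += 1
    return re.fullmatch(regex, string, flags=re.DOTALL) is not None

print(like('abc', 'abc'))    # True
print(like('abc', 'a%'))     # True
print(like('abc', '_b_'))    # True
print(like('abc', 'c'))      # False: the pattern must cover the string
```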
LIKE pattern matching always covers the entire string. Therefore, if it's desired to match a sequence anywhere within a string, the pattern must start and end with a percent sign.
To match a literal underscore or percent sign without matching other characters, the respective character in pattern must be preceded by the escape character. The default escape character is the backslash but a different one can be selected by using the ESCAPE clause. To match the escape character itself, write two escape characters.
If you have standard_conforming_strings turned off, any backslashes you write in literal string constants will need to be doubled. See Section 4.1.2.1 for more information.
It's also possible to select no escape character by writing ESCAPE ''. This effectively disables the escape mechanism, which makes it impossible to turn off the special meaning of underscore and percent signs in the pattern.
The key word ILIKE can be used instead of LIKE to make the match case-insensitive according to the active locale. This is not in the SQL standard but is a PostgreSQL extension.
The operator ~~ is equivalent to LIKE, and ~~* corresponds to ILIKE. There are also !~~ and !~~* operators that represent NOT LIKE and NOT ILIKE, respectively. All of these operators are PostgreSQL-specific.
There is also the prefix operator ^@ and corresponding starts_with function, which covers cases when only searching by beginning of the string is needed.
SIMILAR TO Regular Expressions
The SIMILAR TO operator returns true or false depending on whether its pattern matches the given string. It is similar to LIKE, except that it interprets the pattern using the SQL standard's definition of a regular expression. SQL regular expressions are a curious cross between LIKE notation and common regular expression notation.
Like LIKE, the SIMILAR TO operator succeeds only if its pattern matches the entire string; this is unlike common regular expression behavior where the pattern can match any part of the string. Also like LIKE, SIMILAR TO uses _ and % as wildcard characters denoting any single character and any string, respectively (these are comparable to . and .* in POSIX regular expressions).
In addition to these facilities borrowed from LIKE, SIMILAR TO supports these pattern-matching metacharacters borrowed from POSIX regular expressions:
| denotes alternation (either of two alternatives).
* denotes repetition of the previous item zero or more times.
+ denotes repetition of the previous item one or more times.
? denotes repetition of the previous item zero or one time.
{m} denotes repetition of the previous item exactly m times.
{m,} denotes repetition of the previous item m or more times.
{m,n} denotes repetition of the previous item at least m and not more than n times.
Parentheses () can be used to group items into a single logical item.
A bracket expression [...] specifies a character class, just as in POSIX regular expressions.
Notice that the period (.) is not a metacharacter for SIMILAR TO.
As with LIKE, a backslash disables the special meaning of any of these metacharacters; or a different escape character can be specified with ESCAPE.
Some examples:
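A rough Python model of SIMILAR TO matching (illustrative and incomplete; ESCAPE handling is omitted, and the `similar_to` helper is hypothetical):

```python
import re

def similar_to(string: str, pattern: str) -> bool:
    # _ -> ., % -> .*; a literal period stays literal, since the period
    # is not a metacharacter for SIMILAR TO. Other metacharacters
    # (|, *, +, ?, {}, (), []) pass through. Whole string must match.
    regex = ''
    for c in pattern:
        if c == '_':
            regex += '.'
        elif c == '%':
            regex += '.*'
        elif c == '.':
            regex += re.escape('.')
        else:
            regex += c
    return re.fullmatch(regex, string) is not None

print(similar_to('abc', 'abc'))       # True
print(similar_to('abc', 'a'))         # False: must match whole string
print(similar_to('abc', '%(b|d)%'))   # True
print(similar_to('abc', '(b|c)%'))    # False
```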
The substring function with three parameters, substring(string from pattern for escape-character), provides extraction of a substring that matches an SQL regular expression pattern. As with SIMILAR TO, the specified pattern must match the entire data string, or else the function fails and returns null. To indicate the part of the pattern that should be returned on success, the pattern must contain two occurrences of the escape character followed by a double quote ("). The text matching the portion of the pattern between these markers is returned.
Some examples, with #" delimiting the return string:
Table 9.14 lists the available operators for pattern matching using POSIX regular expressions.
POSIX regular expressions provide a more powerful means for pattern matching than the LIKE and SIMILAR TO operators. Many Unix tools such as egrep, sed, or awk use a pattern matching language that is similar to the one described here. Table 9.14 lists the available operators for pattern matching using POSIX regular expressions.
A regular expression is a character sequence that is an abbreviated definition of a set of strings (a regular set). A string is said to match a regular expression if it is a member of the regular set described by the regular expression. As with LIKE, pattern characters match string characters exactly unless they are special characters in the regular expression language; but regular expressions use different special characters than LIKE does. Unlike LIKE patterns, a regular expression is allowed to match anywhere within a string, unless the regular expression is explicitly anchored to the beginning or end of the string.
Some examples:
The POSIX pattern language is described in much greater detail below.
The substring function with two parameters, substring(string from pattern), provides extraction of a substring that matches a POSIX regular expression pattern. It returns null if there is no match, otherwise the portion of the text that matched the pattern. But if the pattern contains any parentheses, the portion of the text that matched the first parenthesized subexpression (the one whose left parenthesis comes first) is returned. You can put parentheses around the whole expression if you want to use parentheses within it without triggering this exception. If you need parentheses in the pattern before the subexpression you want to extract, see the non-capturing parentheses described below.
Some examples:
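These extraction rules can be approximated with Python's re module (an illustrative analogue, not the SQL function; `substring_posix` is a hypothetical helper name):

```python
import re

def substring_posix(string: str, pattern: str):
    m = re.search(pattern, string)
    if m is None:
        return None          # no match -> null
    if m.groups():
        return m.group(1)    # first parenthesized subexpression
    return m.group(0)        # otherwise the whole match

print(substring_posix('foobar', 'o.b'))    # 'oob'
print(substring_posix('foobar', 'o(.)b'))  # 'o'
```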
The regexp_replace function provides substitution of new text for substrings that match POSIX regular expression patterns. It has the syntax regexp_replace(source, pattern, replacement [, flags ]). The source string is returned unchanged if there is no match to the pattern. If there is a match, the source string is returned with the replacement string substituted for the matching substring. The replacement string can contain \n, where n is 1 through 9, to indicate that the source substring matching the n'th parenthesized subexpression of the pattern should be inserted, and it can contain \& to indicate that the substring matching the entire pattern should be inserted. Write \\ if you need to put a literal backslash in the replacement text. The flags parameter is an optional text string containing zero or more single-letter flags that change the function's behavior. Flag i specifies case-insensitive matching, while flag g specifies replacement of each matching substring rather than only the first one. Supported flags (though not g) are described in Table 9.22.
Some examples:
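An approximate Python analogue of regexp_replace, supporting only the i and g flags described above (illustrative; the \& translation is a rough mapping onto Python's \g<0>):

```python
import re

def regexp_replace(source, pattern, replacement, flags=''):
    count = 0 if 'g' in flags else 1            # g: replace every match
    f = re.IGNORECASE if 'i' in flags else 0    # i: case-insensitive
    # PostgreSQL's \& (whole match) corresponds to Python's \g<0>;
    # \1..\9 work the same way in both.
    repl = replacement.replace(r'\&', r'\g<0>')
    return re.sub(pattern, repl, source, count=count, flags=f)

print(regexp_replace('foobarbaz', 'b..', 'X'))       # 'fooXbaz'
print(regexp_replace('foobarbaz', 'b..', 'X', 'g'))  # 'fooXX'
```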
The regexp_match function returns a text array of captured substring(s) resulting from the first match of a POSIX regular expression pattern to a string. It has the syntax regexp_match(string, pattern [, flags ]). If there is no match, the result is NULL. If a match is found, and the pattern contains no parenthesized subexpressions, then the result is a single-element text array containing the substring matching the whole pattern. If a match is found, and the pattern contains parenthesized subexpressions, then the result is a text array whose n'th element is the substring matching the n'th parenthesized subexpression of the pattern (not counting "non-capturing" parentheses; see below for details). The flags parameter is an optional text string containing zero or more single-letter flags that change the function's behavior. Supported flags are described in Table 9.22.
Some examples:
In the common case where you just want the whole matching substring or NULL for no match, write something like
The regexp_matches function returns a set of text arrays of captured substring(s) resulting from matching a POSIX regular expression pattern to a string. It has the same syntax as regexp_match. This function returns no rows if there is no match, one row if there is a match and the g flag is not given, or N rows if there are N matches and the g flag is given. Each returned row is a text array containing the whole matched substring or the substrings matching parenthesized subexpressions of the pattern, just as described above for regexp_match. regexp_matches accepts all the flags shown in Table 9.22, plus the g flag which commands it to return all matches, not just the first one.
Some examples:
In most cases regexp_matches() should be used with the g flag, since if you only want the first match, it's easier and more efficient to use regexp_match(). However, regexp_match() only exists in PostgreSQL version 10 and up. When working in older versions, a common trick is to place a regexp_matches() call in a sub-select, for example:
This produces a text array if there's a match, or NULL if not, the same as regexp_match() would do. Without the sub-select, this query would produce no output at all for table rows without a match, which is typically not the desired behavior.
The regexp_split_to_table function splits a string using a POSIX regular expression pattern as a delimiter. It has the syntax regexp_split_to_table(string, pattern [, flags ]). If there is no match to the pattern, the function returns the string. If there is at least one match, for each match it returns the text from the end of the last match (or the beginning of the string) to the beginning of the match. When there are no more matches, it returns the text from the end of the last match to the end of the string. The flags parameter is an optional text string containing zero or more single-letter flags that change the function's behavior. regexp_split_to_table supports the flags described in Table 9.22.
The regexp_split_to_array function behaves the same as regexp_split_to_table, except that regexp_split_to_array returns its result as an array of text. It has the syntax regexp_split_to_array(string, pattern [, flags ]). The parameters are the same as for regexp_split_to_table.
Some examples:
As the last example demonstrates, the regexp split functions ignore zero-length matches that occur at the start or end of the string or immediately after a previous match. This is contrary to the strict definition of regexp matching that is implemented by regexp_match and regexp_matches, but is usually the most convenient behavior in practice. Other software systems such as Perl use similar definitions.
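The basic splitting behavior can be approximated with Python's re.split (a simplification: this version does not emulate the zero-length-match handling just described):

```python
import re

def regexp_split_to_array(string, pattern):
    # Split the string wherever the pattern matches.
    return re.split(pattern, string)

print(regexp_split_to_array('the quick brown fox', r'\s+'))
# ['the', 'quick', 'brown', 'fox']
```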
PostgreSQL's regular expressions are implemented using a software package written by Henry Spencer. Much of the description of regular expressions below is copied verbatim from his manual.
Regular expressions (REs), as defined in POSIX 1003.2, come in two forms: extended REs or EREs (roughly those of egrep), and basic REs or BREs (roughly those of ed). PostgreSQL supports both forms, and also implements some extensions that are not in the POSIX standard, but have become widely used due to their availability in programming languages such as Perl and Tcl. REs using these non-POSIX extensions are called advanced REs or AREs in this documentation. AREs are almost an exact superset of EREs, but BREs have several notational incompatibilities (as well as being much more limited). We first describe the ARE and ERE forms, noting features that apply only to AREs, and then describe how BREs differ.
PostgreSQL always initially presumes that a regular expression follows the ARE rules. However, the more limited ERE or BRE rules can be chosen by prepending an embedded option to the RE pattern, as described in Section 9.7.3.4. This can be useful for compatibility with applications that expect exactly the POSIX 1003.2 rules.
A regular expression is defined as one or more branches, separated by |. It matches anything that matches one of the branches.
A branch is zero or more quantified atoms or constraints, concatenated. It matches a match for the first, followed by a match for the second, etc; an empty branch matches the empty string.
A quantified atom is an atom possibly followed by a single quantifier. Without a quantifier, it matches a match for the atom. With a quantifier, it can match some number of matches of the atom. An atom can be any of the possibilities shown in Table 9.15. The possible quantifiers and their meanings are shown in Table 9.16.
A constraint matches an empty string, but matches only when specific conditions are met. A constraint can be used where an atom could be used, except it cannot be followed by a quantifier. The simple constraints are shown in Table 9.17; some more constraints are described later.
An RE cannot end with a backslash (\).
If you have standard_conforming_strings turned off, any backslashes you write in literal string constants will need to be doubled. See Section 4.1.2.1 for more information.
The forms using {...} are known as bounds. The numbers m and n within a bound are unsigned decimal integers with permissible values from 0 to 255 inclusive.
Non-greedy quantifiers (available in AREs only) match the same possibilities as their corresponding normal (greedy) counterparts, but prefer the smallest number rather than the largest number of matches. See Section 9.7.3.5 for more detail.
A quantifier cannot immediately follow another quantifier, e.g., ** is invalid. A quantifier cannot begin an expression or subexpression or follow ^ or |.
Lookahead and lookbehind constraints cannot contain back references (see Section 9.7.3.3), and all parentheses within them are considered non-capturing.
A bracket expression is a list of characters enclosed in []. It normally matches any single character from the list (but see below). If the list begins with ^, it matches any single character not from the rest of the list. If two characters in the list are separated by -, this is shorthand for the full range of characters between those two (inclusive) in the collating sequence, e.g., [0-9] in ASCII matches any decimal digit. It is illegal for two ranges to share an endpoint, e.g., a-c-e. Ranges are very collating-sequence-dependent, so portable programs should avoid relying on them.
To include a literal ] in the list, make it the first character (after ^, if that is used). To include a literal -, make it the first or last character, or the second endpoint of a range. To use a literal - as the first endpoint of a range, enclose it in [. and .] to make it a collating element (see below). With the exception of these characters, some combinations using [ (see next paragraphs), and escapes (AREs only), all other special characters lose their special significance within a bracket expression. In particular, \ is not special when following ERE or BRE rules, though it is special (as introducing an escape) in AREs.
Within a bracket expression, a collating element (a character, a multiple-character sequence that collates as if it were a single character, or a collating-sequence name for either) enclosed in [. and .] stands for the sequence of characters of that collating element. The sequence is treated as a single element of the bracket expression's list. This allows a bracket expression containing a multiple-character collating element to match more than one character, e.g., if the collating sequence includes a ch collating element, then the RE [[.ch.]]*c matches the first five characters of chchcc.
PostgreSQL currently does not support multi-character collating elements. This information describes possible future behavior.
Within a bracket expression, a collating element enclosed in [= and =] is an equivalence class, standing for the sequences of characters of all collating elements equivalent to that one, including itself. (If there are no other equivalent collating elements, the treatment is as if the enclosing delimiters were [. and .].) For example, if o and ^ are the members of an equivalence class, then [[=o=]], [[=^=]], and [o^] are all synonymous. An equivalence class cannot be an endpoint of a range.
Within a bracket expression, the name of a character class enclosed in [: and :] stands for the list of all characters belonging to that class. Standard character class names are: alnum, alpha, blank, cntrl, digit, graph, lower, print, punct, space, upper, xdigit. These stand for the character classes defined in ctype. A locale can provide others. A character class cannot be used as an endpoint of a range.
There are two special cases of bracket expressions: the bracket expressions [[:<:]] and [[:>:]] are constraints, matching empty strings at the beginning and end of a word respectively. A word is defined as a sequence of word characters that is neither preceded nor followed by word characters. A word character is an alnum character (as defined by ctype) or an underscore. This is an extension, compatible with but not specified by POSIX 1003.2, and should be used with caution in software intended to be portable to other systems. The constraint escapes described below are usually preferable; they are no more standard, but are easier to type.
Escapes are special sequences beginning with \ followed by an alphanumeric character. Escapes come in several varieties: character entry, class shorthands, constraint escapes, and back references. A \ followed by an alphanumeric character but not constituting a valid escape is illegal in AREs. In EREs, there are no escapes: outside a bracket expression, a \ followed by an alphanumeric character merely stands for that character as an ordinary character, and inside a bracket expression, \ is an ordinary character. (The latter is the one actual incompatibility between EREs and AREs.)
Character-entry escapes exist to make it easier to specify non-printing and other inconvenient characters in REs. They are shown in Table 9.18.
Class-shorthand escapes provide shorthands for certain commonly-used character classes. They are shown in Table 9.19.
A constraint escape is a constraint, matching the empty string if specific conditions are met, written as an escape. They are shown in Table 9.20.
A back reference (\n) matches the same string matched by the previous parenthesized subexpression specified by the number n (see Table 9.21). For example, ([bc])\1 matches bb or cc but not bc or cb. The subexpression must entirely precede the back reference in the RE. Subexpressions are numbered in the order of their leading parentheses. Non-capturing parentheses do not define subexpressions.
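The back-reference example can be checked with Python's re module, which uses the same \1 notation:

```python
import re

# ([bc])\1 requires the second character to repeat the first captured one.
pat = r'([bc])\1'
print(bool(re.fullmatch(pat, 'bb')))  # True
print(bool(re.fullmatch(pat, 'cc')))  # True
print(bool(re.fullmatch(pat, 'bc')))  # False
print(bool(re.fullmatch(pat, 'cb')))  # False
```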
Hexadecimal digits are 0-9, a-f, and A-F. Octal digits are 0-7.
Numeric character-entry escapes specifying values outside the ASCII range (0-127) have meanings dependent on the database encoding. When the encoding is UTF-8, escape values are equivalent to Unicode code points, for example \u1234 means the character U+1234. For other multibyte encodings, character-entry escapes usually just specify the concatenation of the byte values for the character. If the escape value does not correspond to any legal character in the database encoding, no error will be raised, but it will never match any data.
The character-entry escapes are always taken as ordinary characters. For example, \135 is ] in ASCII, but \135 does not terminate a bracket expression.
Within bracket expressions, \d, \s, and \w lose their outer brackets, and \D, \S, and \W are illegal. (So, for example, [a-c\d] is equivalent to [a-c[:digit:]]. Also, [a-c\D], which is equivalent to [a-c^[:digit:]], is illegal.)
A word is defined as in the specification of [[:<:]] and [[:>:]] above. Constraint escapes are illegal within bracket expressions.
There is an inherent ambiguity between octal character-entry escapes and back references, which is resolved by the following heuristics, as hinted at above. A leading zero always indicates an octal escape. A single non-zero digit, not followed by another digit, is always taken as a back reference. A multi-digit sequence not starting with a zero is taken as a back reference if it comes after a suitable subexpression (i.e., the number is in the legal range for a back reference), and otherwise is taken as octal.
In addition to the main syntax described above, there are some special forms and miscellaneous syntactic facilities available.
An RE can begin with one of two special director prefixes. If an RE begins with ***:, the rest of the RE is taken as an ARE. (This normally has no effect in PostgreSQL, since REs are assumed to be AREs; but it does have an effect if ERE or BRE mode had been specified by the flags parameter to a regex function.) If an RE begins with ***=, the rest of the RE is taken to be a literal string, with all characters considered ordinary characters.
An ARE can begin with embedded options: a sequence (?xyz) (where xyz is one or more alphabetic characters) specifies options affecting the rest of the RE. These options override any previously determined options; in particular, they can override the case-sensitivity behavior implied by a regex operator, or the flags parameter to a regex function. The available option letters are shown in Table 9.22. Note that these same option letters are used in the flags parameters of regex functions.
Embedded options take effect at the ) terminating the sequence. They can appear only at the start of an ARE (after the ***: director if any).
In addition to the usual (tight) RE syntax, in which all characters are significant, there is an expanded syntax, available by specifying the embedded x option. In the expanded syntax, white-space characters in the RE are ignored, as are all characters between a # and the following newline (or the end of the RE). This permits paragraphing and commenting a complex RE. There are three exceptions to that basic rule:
a white-space character or # preceded by \ is retained
white space or # within a bracket expression is retained
white space and comments cannot appear within multi-character symbols, such as (?:
For this purpose, white-space characters are blank, tab, newline, and any character that belongs to the space character class.
Finally, in an ARE, outside bracket expressions, the sequence (?#
ttt
)
(where ttt
is any text not containing a )
) is a comment, completely ignored. Again, this is not allowed between the characters of multi-character symbols, like (?:
. Such comments are more a historical artifact than a useful facility, and their use is deprecated; use the expanded syntax instead.
None of these metasyntax extensions is available if an initial ***=
director has specified that the user's input be treated as a literal string rather than as an RE.
In the event that an RE could match more than one substring of a given string, the RE matches the one starting earliest in the string. If the RE could match more than one substring starting at that point, either the longest possible match or the shortest possible match will be taken, depending on whether the RE is greedy or non-greedy.
Whether an RE is greedy or not is determined by the following rules:
- Most atoms, and all constraints, have no greediness attribute (because they cannot match variable amounts of text anyway).
- Adding parentheses around an RE does not change its greediness.
- A quantified atom with a fixed-repetition quantifier (`{m}` or `{m}?`) has the same greediness (possibly none) as the atom itself.
- A quantified atom with other normal quantifiers (including `{m,n}` with `m` equal to `n`) is greedy (prefers longest match).
- A quantified atom with a non-greedy quantifier (including `{m,n}?` with `m` equal to `n`) is non-greedy (prefers shortest match).
- A branch — that is, an RE that has no top-level `|` operator — has the same greediness as the first quantified atom in it that has a greediness attribute.
- An RE consisting of two or more branches connected by the `|` operator is always greedy.
The above rules associate greediness attributes not only with individual quantified atoms, but with branches and entire REs that contain quantified atoms. What that means is that the matching is done in such a way that the branch, or whole RE, matches the longest or shortest possible substring as a whole. Once the length of the entire match is determined, the part of it that matches any particular subexpression is determined on the basis of the greediness attribute of that subexpression, with subexpressions starting earlier in the RE taking priority over ones starting later.
An example of what this means:
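For instance, using `substring` with a greedy and then a non-greedy quantifier (results as shown in the upstream documentation):

```sql
SELECT SUBSTRING('XY1234Z', 'Y*([0-9]{1,3})');   -- 123
SELECT SUBSTRING('XY1234Z', 'Y*?([0-9]{1,3})');  -- 1
```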
In the first case, the RE as a whole is greedy because `Y*` is greedy. It can match beginning at the `Y`, and it matches the longest possible string starting there, i.e., `Y123`. The output is the parenthesized part of that, or `123`. In the second case, the RE as a whole is non-greedy because `Y*?` is non-greedy. It can match beginning at the `Y`, and it matches the shortest possible string starting there, i.e., `Y1`. The subexpression `[0-9]{1,3}` is greedy but it cannot change the decision as to the overall match length; so it is forced to match just `1`.
In short, when an RE contains both greedy and non-greedy subexpressions, the total match length is either as long as possible or as short as possible, according to the attribute assigned to the whole RE. The attributes assigned to the subexpressions only affect how much of that match they are allowed to “eat” relative to each other.
The quantifiers `{1,1}` and `{1,1}?` can be used to force greediness or non-greediness, respectively, on a subexpression or a whole RE. This is useful when you need the whole RE to have a greediness attribute different from what's deduced from its elements. As an example, suppose that we are trying to separate a string containing some digits into the digits and the parts before and after them. We might try to do that like this:
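A first attempt, using `regexp_match` (available in PostgreSQL 10 and later):

```sql
SELECT regexp_match('abc01234xyz', '(.*)(\d+)(.*)');
-- {abc0123,4,xyz}
```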
That didn't work: the first `.*` is greedy so it “eats” as much as it can, leaving the `\d+` to match at the last possible place, the last digit. We might try to fix that by making it non-greedy:
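Making the first quantifier non-greedy:

```sql
SELECT regexp_match('abc01234xyz', '(.*?)(\d+)(.*)');
-- {abc,0,""}
```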
That didn't work either, because now the RE as a whole is non-greedy and so it ends the overall match as soon as possible. We can get what we want by forcing the RE as a whole to be greedy:
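Wrapping the whole pattern in a non-capturing group and forcing greediness with `{1,1}`:

```sql
SELECT regexp_match('abc01234xyz', '(?:(.*?)(\d+)(.*)){1,1}');
-- {abc,01234,xyz}
```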
Controlling the RE's overall greediness separately from its components' greediness allows great flexibility in handling variable-length patterns.
When deciding what is a longer or shorter match, match lengths are measured in characters, not collating elements. An empty string is considered longer than no match at all. For example: `bb*` matches the three middle characters of `abbbc`; `(week|wee)(night|knights)` matches all ten characters of `weeknights`; when `(.*).*` is matched against `abc` the parenthesized subexpression matches all three characters; and when `(a*)*` is matched against `bc` both the whole RE and the parenthesized subexpression match an empty string.
If case-independent matching is specified, the effect is much as if all case distinctions had vanished from the alphabet. When an alphabetic that exists in multiple cases appears as an ordinary character outside a bracket expression, it is effectively transformed into a bracket expression containing both cases, e.g., `x` becomes `[xX]`. When it appears inside a bracket expression, all case counterparts of it are added to the bracket expression, e.g., `[x]` becomes `[xX]` and `[^x]` becomes `[^xX]`.

If newline-sensitive matching is specified, `.` and bracket expressions using `^` will never match the newline character (so that matches will never cross newlines unless the RE explicitly arranges it), and `^` and `$` will match the empty string after and before a newline respectively, in addition to matching at beginning and end of string respectively. But the ARE escapes `\A` and `\Z` continue to match beginning or end of string only.

If partial newline-sensitive matching is specified, this affects `.` and bracket expressions as with newline-sensitive matching, but not `^` and `$`.

If inverse partial newline-sensitive matching is specified, this affects `^` and `$` as with newline-sensitive matching, but not `.` and bracket expressions. This isn't very useful but is provided for symmetry.
No particular limit is imposed on the length of REs in this implementation. However, programs intended to be highly portable should not employ REs longer than 256 bytes, as a POSIX-compliant implementation can refuse to accept such REs.
The only feature of AREs that is actually incompatible with POSIX EREs is that `\` does not lose its special significance inside bracket expressions. All other ARE features use syntax which is illegal or has undefined or unspecified effects in POSIX EREs; the `***` syntax of directors likewise is outside the POSIX syntax for both BREs and EREs.

Many of the ARE extensions are borrowed from Perl, but some have been changed to clean them up, and a few Perl extensions are not present. Incompatibilities of note include `\b`, `\B`, the lack of special treatment for a trailing newline, the addition of complemented bracket expressions to the things affected by newline-sensitive matching, the restrictions on parentheses and back references in lookahead/lookbehind constraints, and the longest/shortest-match (rather than first-match) matching semantics.

Two significant incompatibilities exist between AREs and the ERE syntax recognized by pre-7.4 releases of PostgreSQL:

- In AREs, `\` followed by an alphanumeric character is either an escape or an error, while in previous releases, it was just another way of writing the alphanumeric. This should not be much of a problem because there was no reason to write such a sequence in earlier releases.
- In AREs, `\` remains a special character within `[]`, so a literal `\` within a bracket expression must be written `\\`.
BREs differ from EREs in several respects. In BREs, `|`, `+`, and `?` are ordinary characters and there is no equivalent for their functionality. The delimiters for bounds are `\{` and `\}`, with `{` and `}` by themselves ordinary characters. The parentheses for nested subexpressions are `\(` and `\)`, with `(` and `)` by themselves ordinary characters. `^` is an ordinary character except at the beginning of the RE or the beginning of a parenthesized subexpression, `$` is an ordinary character except at the end of the RE or the end of a parenthesized subexpression, and `*` is an ordinary character if it appears at the beginning of the RE or the beginning of a parenthesized subexpression (after a possible leading `^`). Finally, single-digit back references are available, and `\<` and `\>` are synonyms for `[[:<:]]` and `[[:>:]]` respectively; no other escapes are available in BREs.
This section describes functions and operators for examining and manipulating bit strings, that is, values of the types `bit` and `bit varying`. Aside from the usual comparison operators, the operators shown in Table 9.14 can be used. Bit string operands of `&`, `|`, and `#` must be of equal length. When bit shifting, the original length of the string is preserved, as shown in the examples.
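For example (operand lengths must match for `&`, `|`, and `#`; shifting preserves the string length):

```sql
SELECT B'10001' & B'01101';  -- 00001
SELECT B'10001' | B'01101';  -- 11101
SELECT B'10001' # B'01101';  -- 11100
SELECT ~ B'10001';           -- 01110
SELECT B'10001' << 3;        -- 01000
SELECT B'10001' >> 2;        -- 00100
```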
The following SQL-standard functions work on bit strings as well as character strings: `length`, `bit_length`, `octet_length`, `position`, `substring`, `overlay`.

The following functions work on bit strings as well as binary strings: `get_bit`, `set_bit`. When working with a bit string, these functions number the first (leftmost) bit of the string as bit 0.
In addition, it is possible to cast integral values to and from type `bit`. Some examples:
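For instance:

```sql
SELECT 44::bit(10);              -- 0000101100
SELECT 44::bit(3);               -- 100
SELECT cast(-44 as bit(12));     -- 111111010100
SELECT '1110'::bit(4)::integer;  -- 14
```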
Note that casting to just “bit” means casting to `bit(1)`, and so will deliver only the least significant bit of the integer.

Casting an integer to `bit(n)` copies the rightmost `n` bits. Casting an integer to a bit string width wider than the integer itself will sign-extend on the left.
This section describes PostgreSQL's mathematical operators and functions. For types without standard mathematical conventions (e.g., date/time types), the actual behavior is described in subsequent sections.

Table 9.4 lists the available mathematical operators.

The bitwise operators work only on integral data types, and are also available for the bit string types `bit` and `bit varying`, as shown in Table 9.14.

Table 9.5 shows the available mathematical functions. In the table, `dp` indicates `double precision`. Many of these functions are provided in multiple forms with different argument types. Except where noted, any given form of a function returns the same data type as its argument. The functions working with `double precision` data are mostly implemented on top of the host system's C library; accuracy and behavior in boundary cases can therefore vary depending on the host system.
Table 9.6 shows functions for generating random numbers.
The `random()` function uses a simple linear congruential algorithm. It is fast but not suitable for cryptographic applications; see the pgcrypto module for a more secure alternative. If `setseed()` is called, the results of subsequent `random()` calls in the current session are repeatable by re-issuing `setseed()` with the same argument.
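A quick sketch of re-seeding the session generator (the actual values returned are implementation-dependent, so no specific outputs are shown):

```sql
SELECT setseed(0.5);
SELECT random();   -- some value in [0, 1)
SELECT setseed(0.5);
SELECT random();   -- repeats the previous value within this session
```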
Table 9.7 shows the available trigonometric functions. All these functions take arguments and return values of type `double precision`. Each of the trigonometric functions comes in two variants, one that measures angles in radians and one that measures angles in degrees.

Another way to work with angles measured in degrees is to use the unit transformation functions `radians()` and `degrees()` shown earlier. However, using the degree-based trigonometric functions is preferred, as that way avoids round-off error for special cases such as `sind(30)`.
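For example, the degree-based variant gives an exact result at this special angle, while the radian-based computation typically picks up round-off error (the second result depends on the host's C library):

```sql
SELECT sind(30);          -- 0.5 (exact)
SELECT sin(radians(30));  -- slightly below 0.5 on typical platforms
```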
Table 9.8 shows the available hyperbolic functions. All these functions take arguments and return values of type `double precision`.
The PostgreSQL formatting functions provide a powerful set of tools for converting various data types (date/time, integer, floating point, numeric) to formatted strings and for converting from formatted strings to specific data types. Table 9.24 lists them. These functions all follow a common calling convention: the first argument is the value to be formatted and the second argument is a template that defines the output or input format.

Note: There is also a single-argument `to_timestamp` function; see Table 9.31.

Tip: `to_timestamp` and `to_date` exist to handle input formats that cannot be converted by simple casting. For most standard date/time formats, simply casting the source string to the required data type works, and is much easier. Similarly, `to_number` is unnecessary for standard numeric representations.

In a `to_char` output template string, certain patterns are recognized and replaced with appropriately-formatted data based on the given value. Any text that is not a template pattern is simply copied verbatim. Similarly, in an input template string (for the other functions), template patterns identify the values to be supplied by the input data string. If there are characters in the template string that are not template patterns, the corresponding characters in the input data string are simply skipped over (whether or not they are equal to the template string characters).
Table 9.25 shows the template patterns available for formatting date and time values.
Modifiers can be applied to any template pattern to alter its behavior. For example, `FMMonth` is the `Month` pattern with the `FM` modifier. Table 9.26 shows the modifier patterns for date/time formatting.
Usage notes for date/time formatting:
- `FM` suppresses leading zeroes and trailing blanks that would otherwise be added to make the output of a pattern be fixed-width. In PostgreSQL, `FM` modifies only the next specification, while in Oracle `FM` affects all subsequent specifications, and repeated `FM` modifiers toggle fill mode on and off.
- `TM` does not include trailing blanks. `to_timestamp` and `to_date` ignore the `TM` modifier.
- `to_timestamp` and `to_date` skip multiple blank spaces at the beginning of the input string and around date and time values unless the `FX` option is used. For example, `to_timestamp(' 2000    JUN', 'YYYY MON')` and `to_timestamp('2000 - JUN', 'YYYY-MON')` work, but `to_timestamp('2000    JUN', 'FXYYYY MON')` returns an error because `to_timestamp` expects only a single space. `FX` must be specified as the first item in the template.
- A separator (a space or non-letter/non-digit character) in the template string of `to_timestamp` and `to_date` matches any single separator in the input string or is skipped, unless the `FX` option is used. For example, `to_timestamp('2000JUN', 'YYYY///MON')` and `to_timestamp('2000/JUN', 'YYYY MON')` work, but `to_timestamp('2000//JUN', 'YYYY/MON')` returns an error because the number of separators in the input string exceeds the number of separators in the template.
- If `FX` is specified, a separator in the template string matches exactly one character in the input string. But note that the input string character is not required to be the same as the separator from the template string. For example, `to_timestamp('2000/JUN', 'FXYYYY MON')` works, but `to_timestamp('2000/JUN', 'FXYYYY  MON')` returns an error because the second space in the template string consumes the letter `J` from the input string.
- A `TZH` template pattern can match a signed number. Without the `FX` option, minus signs may be ambiguous, and could be interpreted as a separator. This ambiguity is resolved as follows: If the number of separators before `TZH` in the template string is less than the number of separators before the minus sign in the input string, the minus sign is interpreted as part of `TZH`. Otherwise, the minus sign is considered to be a separator between values. For example, `to_timestamp('2000 -10', 'YYYY TZH')` matches `-10` to `TZH`, but `to_timestamp('2000 -10', 'YYYY  TZH')` matches `10` to `TZH`.
- Ordinary text is allowed in `to_char` templates and will be output literally. You can put a substring in double quotes to force it to be interpreted as literal text even if it contains template patterns. For example, in `'"Hello Year "YYYY'`, the `YYYY` will be replaced by the year data, but the single `Y` in `Year` will not be. In `to_date`, `to_number`, and `to_timestamp`, literal text and double-quoted strings result in skipping the number of characters contained in the string; for example `"XX"` skips two input characters (whether or not they are `XX`).

Tip: Prior to PostgreSQL 12, it was possible to skip arbitrary text in the input string using non-letter or non-digit characters. For example, `to_timestamp('2000y6m1d', 'yyyy-MM-DD')` used to work. Now you can only use letter characters for this purpose. For example, `to_timestamp('2000y6m1d', 'yyyytMMtDDt')` and `to_timestamp('2000y6m1d', 'yyyy"y"MM"m"DD"d"')` skip `y`, `m`, and `d`.

- If you want to have a double quote in the output you must precede it with a backslash, for example `'\"YYYY Month\"'`. Backslashes are not otherwise special outside of double-quoted strings. Within a double-quoted string, a backslash causes the next character to be taken literally, whatever it is (but this has no special effect unless the next character is a double quote or another backslash).
- In `to_timestamp` and `to_date`, if the year format specification is less than four digits, e.g. `YYY`, and the supplied year is less than four digits, the year will be adjusted to be nearest to the year 2020, e.g. `95` becomes 1995.
- In `to_timestamp` and `to_date`, the `YYYY` conversion has a restriction when processing years with more than 4 digits. You must use some non-digit character or template after `YYYY`, otherwise the year is always interpreted as 4 digits. For example (with the year 20000): `to_date('200001131', 'YYYYMMDD')` will be interpreted as a 4-digit year; instead use a non-digit separator after the year, like `to_date('20000-1131', 'YYYY-MMDD')` or `to_date('20000Nov31', 'YYYYMonDD')`.
- In `to_timestamp` and `to_date`, the `CC` (century) field is accepted but ignored if there is a `YYY`, `YYYY` or `Y,YYY` field. If `CC` is used with `YY` or `Y` then the result is computed as that year in the specified century. If the century is specified but the year is not, the first year of the century is assumed.
- In `to_timestamp` and `to_date`, weekday names or numbers (`DAY`, `D`, and related field types) are accepted but are ignored for purposes of computing the result. The same is true for quarter (`Q`) fields.
- In `to_timestamp` and `to_date`, an ISO 8601 week-numbering date (as distinct from a Gregorian date) can be specified in one of two ways:
  - Year, week number, and weekday: for example `to_date('2006-42-4', 'IYYY-IW-ID')` returns the date `2006-10-19`. If you omit the weekday it is assumed to be 1 (Monday).
  - Year and day of year: for example `to_date('2006-291', 'IYYY-IDDD')` also returns `2006-10-19`.
Attempting to enter a date using a mixture of ISO 8601 week-numbering fields and Gregorian date fields is nonsensical, and will cause an error. In the context of an ISO 8601 week-numbering year, the concept of a “month” or “day of month” has no meaning. In the context of a Gregorian year, the ISO week has no meaning.
Caution
While `to_date` will reject a mixture of Gregorian and ISO week-numbering date fields, `to_char` will not, since output format specifications like `YYYY-MM-DD (IYYY-IDDD)` can be useful. But avoid writing something like `IYYY-MM-DD`; that would yield surprising results near the start of the year. (See Section 9.9.1 for more information.)
- In `to_timestamp`, millisecond (`MS`) or microsecond (`US`) fields are used as the seconds digits after the decimal point. For example `to_timestamp('12.3', 'SS.MS')` is not 3 milliseconds, but 300, because the conversion treats it as 12 + 0.3 seconds. So, for the format `SS.MS`, the input values `12.3`, `12.30`, and `12.300` specify the same number of milliseconds. To get three milliseconds, one must write `12.003`, which the conversion treats as 12 + 0.003 = 12.003 seconds.

  Here is a more complex example: `to_timestamp('15:12:02.020.001230', 'HH24:MI:SS.MS.US')` is 15 hours, 12 minutes, and 2 seconds + 20 milliseconds + 1230 microseconds = 2.021230 seconds.
- `to_char(..., 'ID')`'s day of the week numbering matches the `extract(isodow from ...)` function, but `to_char(..., 'D')`'s does not match `extract(dow from ...)`'s day numbering.
- `to_char(interval)` formats `HH` and `HH12` as shown on a 12-hour clock, for example zero hours and 36 hours both output as `12`, while `HH24` outputs the full hour value, which can exceed 23 in an `interval` value.
Table 9.27 shows the template patterns available for formatting numeric values.
Usage notes for numeric formatting:
- `0` specifies a digit position that will always be printed, even if it contains a leading/trailing zero. `9` also specifies a digit position, but if it is a leading zero then it will be replaced by a space, while if it is a trailing zero and fill mode is specified then it will be deleted. (For `to_number()`, these two pattern characters are equivalent.)
- The pattern characters `S`, `L`, `D`, and `G` represent the sign, currency symbol, decimal point, and thousands separator characters defined by the current locale (see lc_monetary and lc_numeric). The pattern characters period and comma represent those exact characters, with the meanings of decimal point and thousands separator, regardless of locale.
- If no explicit provision is made for a sign in `to_char()`'s pattern, one column will be reserved for the sign, and it will be anchored to (appear just left of) the number. If `S` appears just left of some `9`'s, it will likewise be anchored to the number.
- A sign formatted using `SG`, `PL`, or `MI` is not anchored to the number; for example, `to_char(-12, 'MI9999')` produces `'-  12'` but `to_char(-12, 'S9999')` produces `'  -12'`. (The Oracle implementation does not allow the use of `MI` before `9`, but rather requires that `9` precede `MI`.)
- `TH` does not convert values less than zero and does not convert fractional numbers.
- `PL`, `SG`, and `TH` are PostgreSQL extensions.
- In `to_number`, if non-data template patterns such as `L` or `TH` are used, the corresponding number of input characters are skipped, whether or not they match the template pattern, unless they are data characters (that is, digits, sign, decimal point, or comma). For example, `TH` would skip two non-data characters.
- `V` with `to_char` multiplies the input values by `10^n`, where `n` is the number of digits following `V`. `V` with `to_number` divides in a similar manner. `to_char` and `to_number` do not support the use of `V` combined with a decimal point (e.g., `99.9V99` is not allowed).
- `EEEE` (scientific notation) cannot be used in combination with any of the other formatting patterns or modifiers other than digit and decimal point patterns, and must be at the end of the format string (e.g., `9.99EEEE` is a valid pattern).

Certain modifiers can be applied to any template pattern to alter its behavior. For example, `FM99.99` is the `99.99` pattern with the `FM` modifier. Table 9.28 shows the modifier patterns for numeric formatting.
Table 9.29 shows some examples of the use of the `to_char` function.

For enum types (described in Section 8.7), there are several functions that allow cleaner programming without hard-coding particular values of an enum type. These are listed in Table 9.32. The examples assume an enum type created as:
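The enum definition assumed by the examples:

```sql
CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple');
```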
Table 9.32. Enum Support Functions
Notice that except for the two-argument form of `enum_range`, these functions disregard the specific value passed to them; they care only about its declared data type. Either null or a specific value of the type can be passed, with the same result. It is more common to apply these functions to a table column or function argument than to a hardwired type name as suggested by the examples.
This section describes functions and operators for examining and manipulating string values. Strings in this context include values of the types character, character varying, and text. Unless otherwise noted, all of the functions listed below work on all of these types, but be wary of potential effects of automatic space-padding when using the character type. Some of these functions also support the bit-string types.

SQL defines some string functions that use key words, rather than commas, to separate arguments. Details are in Table 9.9. PostgreSQL also provides versions of these functions that use the regular function invocation syntax (see Table 9.10).

Before PostgreSQL 8.3, these functions would also silently accept values of several non-string data types, due to the presence of implicit coercions from those data types to text. Those coercions have been removed because they frequently caused surprising results. However, the string concatenation operator (||) still accepts non-string input, so long as at least one input is of a string type, as shown in Table 9.9. For other cases, insert an explicit cast in the query if you need to duplicate the previous behavior.

Additional string manipulation functions are available and are listed in Table 9.10. Some of them are used internally to implement the SQL-standard string functions listed in Table 9.9.

The concat, concat_ws, and format functions are variadic, so it is possible to pass the values to be concatenated or formatted as an array marked with the VARIADIC keyword (see Section 37.5.5). The array's elements are treated as if they were separate ordinary arguments to the function. If the variadic array argument is NULL, concat and concat_ws return NULL, but format treats a NULL as a zero-element array.

See also the aggregate function string_agg in Section 9.20.

The conversion names follow a standard naming scheme: the official name of the source encoding with all non-alphanumeric characters replaced by underscores, followed by _to_, followed by the similarly processed destination encoding name. Therefore, the names might deviate from the customary encoding names.
`format`

The function `format` produces output formatted according to a format string, in a style similar to the C function `sprintf`.

`formatstr` is a format string that specifies how the result should be formatted. Text in the format string is copied directly to the result, except where format specifiers are used. Format specifiers act as placeholders in the string, defining how subsequent function arguments should be formatted and inserted into the result. Each `formatarg` argument is converted to text according to the usual output rules for its data type, and then formatted and inserted into the result string according to the format specifier(s).
Format specifiers are introduced by a `%` character and have the form `%[position][flags][width]type`, where the component fields are:

`position` (optional): A string of the form `n$` where `n` is the index of the argument to print. Index 1 means the first argument after `formatstr`. If the `position` is omitted, the default is to use the next argument in sequence.

`flags` (optional): Additional options controlling how the format specifier's output is formatted. Currently the only supported flag is a minus sign (`-`), which will cause the format specifier's output to be left-justified. This has no effect unless the `width` field is also specified.

`width` (optional): Specifies the minimum number of characters to use to display the format specifier's output. The output is padded on the left or right (depending on the `-` flag) with spaces as needed to fill the width. A too-small width does not cause truncation of the output, but is simply ignored. The width may be specified using any of the following: a positive integer; an asterisk (`*`) to use the next function argument as the width; or a string of the form `*n$` to use the `n`th function argument as the width.

If the width comes from a function argument, that argument is consumed before the argument that is used for the format specifier's value. If the width argument is negative, the result is left-aligned (as if the `-` flag had been specified) within a field of length `abs(width)`.

`type` (required): The type of format conversion to use to produce the format specifier's output. The following types are supported:

- `s` formats the argument value as a simple string. A null value is treated as an empty string.
- `I` treats the argument value as an SQL identifier, double-quoting it if necessary. It is an error for the value to be null (equivalent to `quote_ident`).
- `L` quotes the argument value as an SQL literal. A null value is displayed as the string `NULL`, without quotes (equivalent to `quote_nullable`).
In addition to the format specifiers described above, the special sequence `%%` may be used to output a literal `%` character.
Here are some examples of the basic format conversions:
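For instance:

```sql
SELECT format('Hello %s', 'World');
-- Hello World
SELECT format('Testing %s, %s, %s, %%', 'one', 'two', 'three');
-- Testing one, two, three, %
SELECT format('INSERT INTO %I VALUES(%L)', 'Finger Lakes', 'Robert''s Table');
-- INSERT INTO "Finger Lakes" VALUES('Robert''s Table')
```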
Here are examples using `width` fields and the `-` flag:
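For instance:

```sql
SELECT format('|%10s|', 'foo');      -- |       foo|
SELECT format('|%-10s|', 'foo');     -- |foo       |
SELECT format('|%*s|', 10, 'foo');   -- |       foo|
SELECT format('|%*s|', -10, 'foo');  -- |foo       |
```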
These examples show use of `position` fields:
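For instance:

```sql
SELECT format('Testing %3$s, %2$s, %1$s', 'one', 'two', 'three');
-- Testing three, two, one
SELECT format('|%*2$s|', 'foo', 10, 'bar');
-- |       bar|
SELECT format('|%1$*2$s|', 'foo', 10, 'bar');
-- |       foo|
```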
Unlike the standard C function `sprintf`, PostgreSQL's `format` function allows format specifiers with and without `position` fields to be mixed in the same format string. A format specifier without a `position` field always uses the next argument after the last argument consumed. In addition, the `format` function does not require all function arguments to be used in the format string. For example:
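For instance:

```sql
SELECT format('Testing %3$s, %2$s, %s', 'one', 'two', 'three');
-- Testing three, two, three
```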
The `%I` and `%L` format specifiers are particularly useful for safely constructing dynamic SQL statements. See Example 42.1.
Table 9.31 shows the available functions for date/time value processing, with details appearing in the following subsections. Table 9.30 illustrates the behaviors of the basic arithmetic operators (`+`, `*`, etc.). For formatting functions, refer to Section 9.8. You should be familiar with the background information on date/time data types from Section 8.5.
All the functions and operators described below that take `time` or `timestamp` inputs actually come in two variants: one that takes `time with time zone` or `timestamp with time zone`, and one that takes `time without time zone` or `timestamp without time zone`. For brevity, these variants are not shown separately. Also, the `+` and `*` operators come in commutative pairs (for example both `date + integer` and `integer + date`); we show only one of each such pair.
In addition to these functions, the SQL `OVERLAPS` operator is supported:

This expression yields true when two time periods (defined by their endpoints) overlap, false when they do not overlap. The endpoints can be specified as pairs of dates, times, or time stamps; or as a date, time, or time stamp followed by an interval. When a pair of values is provided, either the start or the end can be written first; `OVERLAPS` automatically takes the earlier value of the pair as the start. Each time period is considered to represent the half-open interval `start` `<=` `time` `<` `end`, unless `start` and `end` are equal in which case it represents that single time instant. This means for instance that two time periods with only an endpoint in common do not overlap.
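For example:

```sql
SELECT (DATE '2001-02-16', DATE '2001-12-21') OVERLAPS
       (DATE '2001-10-30', DATE '2002-10-30');
-- true
SELECT (DATE '2001-02-16', INTERVAL '100 days') OVERLAPS
       (DATE '2001-10-30', DATE '2002-10-30');
-- false
```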
When adding an `interval` value to (or subtracting an `interval` value from) a `timestamp with time zone` value, the days component advances or decrements the date of the `timestamp with time zone` by the indicated number of days, keeping the time of day the same. Across daylight saving time changes (when the session time zone is set to a time zone that recognizes DST), this means `interval '1 day'` does not necessarily equal `interval '24 hours'`. For example, with the session time zone set to `America/Denver`:
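For instance:

```sql
SELECT timestamp with time zone '2005-04-02 12:00:00-07' + interval '1 day';
-- 2005-04-03 12:00:00-06
SELECT timestamp with time zone '2005-04-02 12:00:00-07' + interval '24 hours';
-- 2005-04-03 13:00:00-06
```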
This happens because an hour was skipped due to a change in daylight saving time at `2005-04-03 02:00:00` in time zone `America/Denver`.
Note there can be ambiguity in the `months` field returned by `age` because different months have different numbers of days. PostgreSQL's approach uses the month from the earlier of the two dates when calculating partial months. For example, `age('2004-06-01', '2004-04-30')` uses April to yield `1 mon 1 day`, while using May would yield `1 mon 2 days` because May has 31 days, while April has only 30.
Subtraction of dates and timestamps can also be complex. One conceptually simple way to perform subtraction is to convert each value to a number of seconds using `EXTRACT(EPOCH FROM ...)`, then subtract the results; this produces the number of seconds between the two values. This will adjust for the number of days in each month, timezone changes, and daylight saving time adjustments. Subtraction of date or timestamp values with the "`-`" operator returns the number of days (24-hours) and hours/minutes/seconds between the values, making the same adjustments. The `age` function returns years, months, days, and hours/minutes/seconds, performing field-by-field subtraction and then adjusting for negative field values. The following queries illustrate the differences in these approaches. The sample results were produced with `timezone = 'US/Eastern'`; there is a daylight saving time change between the two dates used:
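A sketch of the three approaches (the exact fractional output depends on the client's display settings):

```sql
SELECT EXTRACT(EPOCH FROM timestamptz '2013-07-01 12:00:00') -
       EXTRACT(EPOCH FROM timestamptz '2013-03-01 12:00:00');
-- 10537200
SELECT (EXTRACT(EPOCH FROM timestamptz '2013-07-01 12:00:00') -
        EXTRACT(EPOCH FROM timestamptz '2013-03-01 12:00:00'))
       / 60 / 60 / 24;
-- approximately 121.96
SELECT timestamptz '2013-07-01 12:00:00' - timestamptz '2013-03-01 12:00:00';
-- 121 days 23:00:00
SELECT age(timestamptz '2013-07-01 12:00:00', timestamptz '2013-03-01 12:00:00');
-- 4 mons
```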
`EXTRACT`, `date_part`

The `extract` function retrieves subfields such as year or hour from date/time values. `source` must be a value expression of type `timestamp`, `time`, or `interval`. (Expressions of type `date` are cast to `timestamp` and can therefore be used as well.) `field` is an identifier or string that selects what field to extract from the source value. The `extract` function returns values of type `double precision`. The following are valid field names:

`century`: The century. The first century starts at 0001-01-01 00:00:00 AD, although they did not know it at the time. This definition applies to all Gregorian calendar countries. There is no century number 0; you go from -1 century to 1 century. If you disagree with this, please write your complaint to: Pope, Cathedral Saint-Peter of Roma, Vatican.

`day`: For `timestamp` values, the day (of the month) field (1 - 31); for `interval` values, the number of days.

`decade`: The year field divided by 10.

`dow`: The day of the week as Sunday (`0`) to Saturday (`6`). Note that `extract`'s day of the week numbering differs from that of the `to_char(..., 'D')` function.

`doy`: The day of the year (1 - 365/366).

`epoch`: For `timestamp with time zone` values, the number of seconds since 1970-01-01 00:00:00 UTC (can be negative); for `date` and `timestamp` values, the number of seconds since 1970-01-01 00:00:00 local time; for `interval` values, the total number of seconds in the interval.

You can convert an epoch value back to a time stamp with `to_timestamp`:
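For instance (output shown for a UTC session time zone):

```sql
SELECT to_timestamp(982384720.12);
-- 2001-02-17 04:38:40.12+00
```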
hour
The hour field (0 - 23)
isodow
The day of the week as Monday (1
) to Sunday (7
)
This is identical to dow
except for Sunday. This matches the ISO 8601 day of the week numbering.isoyear
The ISO 8601 week-numbering year that the date falls in (not applicable to intervals)
Each ISO 8601 week-numbering year begins with the Monday of the week containing the 4th of January, so in early January or late December the ISO year may be different from the Gregorian year. See the week
field for more information.
This field is not available in PostgreSQL releases prior to 8.3.microseconds
The seconds field, including fractional parts, multiplied by 1 000 000; note that this includes full seconds
millennium
The millennium
Years in the 1900s are in the second millennium. The third millennium started January 1, 2001.
milliseconds
The seconds field, including fractional parts, multiplied by 1000. Note that this includes full seconds.
minute
The minutes field (0 - 59)
month
For timestamp
values, the number of the month within the year (1 - 12) ; for interval
values, the number of months, modulo 12 (0 - 11)
quarter
The quarter of the year (1 - 4) that the date is in
second
timezone
The time zone offset from UTC, measured in seconds. Positive values correspond to time zones east of UTC, negative values to zones west of UTC. (Technically, PostgreSQL does not use UTC because leap seconds are not handled.)
timezone_hour
The hour component of the time zone offset
timezone_minute
The minute component of the time zone offset
week
The number of the ISO 8601 week-numbering week of the year. By definition, ISO weeks start on Mondays and the first week of a year contains January 4 of that year. In other words, the first Thursday of a year is in week 1 of that year.
In the ISO week-numbering system, it is possible for early-January dates to be part of the 52nd or 53rd week of the previous year, and for late-December dates to be part of the first week of the next year. For example, 2005-01-01
is part of the 53rd week of year 2004, and 2006-01-01
is part of the 52nd week of year 2005, while 2012-12-31
is part of the first week of 2013. It's recommended to use the isoyear
field together with week
to get consistent results.
year
The year field. Keep in mind there is no 0 AD
, so subtracting BC
years from AD
years should be done with care.
When the input value is +/-Infinity, extract
returns +/-Infinity for monotonically-increasing fields (epoch
, julian
, year
, isoyear
, decade
, century
, and millennium
). For other fields, NULL is returned. PostgreSQL versions before 9.6 returned zero for all cases of infinite input.
The date_part
function is modeled on the traditional Ingres equivalent to the SQL-standard function extract
:
Note that here the field
parameter needs to be a string value, not a name. The valid field names for date_part
are the same as for extract
.
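For example, these two calls are equivalent ways of extracting the day of the month:

```sql
SELECT date_part('day', TIMESTAMP '2001-02-16 20:38:40');  -- 16
SELECT extract(day FROM TIMESTAMP '2001-02-16 20:38:40');  -- 16
```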
date_trunc
The function date_trunc
is conceptually similar to the trunc
function for numbers.
source
is a value expression of type timestamp
, timestamp with time zone
, or interval
. (Values of type date
and time
are cast automatically to timestamp
or interval
, respectively.) field
selects to which precision to truncate the input value. The return value is likewise of type timestamp
, timestamp with time zone
, or interval
, and it has all fields that are less significant than the selected one set to zero (or one, for day and month).
Valid values for field
are:
A time zone cannot be specified when processing timestamp without time zone
or interval
inputs. These are always taken at face value.
Examples (assuming the local time zone is America/New_York):
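A couple of illustrative date_trunc calls on plain timestamp inputs (these do not depend on the time zone):

```sql
SELECT date_trunc('hour', TIMESTAMP '2001-02-16 20:38:40');
-- 2001-02-16 20:00:00
SELECT date_trunc('year', TIMESTAMP '2001-02-16 20:38:40');
-- 2001-01-01 00:00:00
```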
AT TIME ZONE
AT TIME ZONE Variants
Examples (assuming the local time zone is America/Los_Angeles):
The first example adds a time zone to a value that lacks it, and displays the value using the current TimeZone
setting. The second example shifts the time stamp with time zone value to the specified time zone, and returns the value without a time zone. This allows storage and display of values different from the current TimeZone
setting. The third example converts Tokyo time to Chicago time. Converting time values to other time zones uses the currently active time zone rules since no date is supplied.
The function timezone
(zone
, timestamp
) is equivalent to the SQL-conforming construct timestamp
AT TIME ZONE zone
.
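That is, the following two expressions produce the same result (a minimal sketch):

```sql
SELECT TIMESTAMP '2001-02-16 20:38:40' AT TIME ZONE 'America/Denver';
SELECT timezone('America/Denver', TIMESTAMP '2001-02-16 20:38:40');
```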
PostgreSQL provides a number of functions that return values related to the current date and time. These SQL-standard functions all return values based on the start time of the current transaction:
CURRENT_TIME
and CURRENT_TIMESTAMP
deliver values with time zone; LOCALTIME
and LOCALTIMESTAMP
deliver values without time zone.
CURRENT_TIME
, CURRENT_TIMESTAMP
, LOCALTIME
, and LOCALTIMESTAMP
can optionally take a precision parameter, which causes the result to be rounded to that many fractional digits in the seconds field. Without a precision parameter, the result is given to the full available precision.
Some examples:
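A few illustrative calls (the results depend on the current time and the TimeZone setting, so no outputs are shown):

```sql
SELECT CURRENT_TIMESTAMP;
SELECT CURRENT_DATE;
SELECT CURRENT_TIMESTAMP(2);  -- seconds rounded to two fractional digits
```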
Since these functions return the start time of the current transaction, their values do not change during the transaction. This is considered a feature: the intent is to allow a single transaction to have a consistent notion of the “current” time, so that multiple modifications within the same transaction bear the same time stamp.
Other database systems might advance these values more frequently.
PostgreSQL also provides functions that return the start time of the current statement, as well as the actual current time at the instant the function is called. The complete list of non-SQL-standard time functions is:
transaction_timestamp()
is equivalent to CURRENT_TIMESTAMP
, but is named to clearly reflect what it returns. statement_timestamp()
returns the start time of the current statement (more specifically, the time of receipt of the latest command message from the client). statement_timestamp()
and transaction_timestamp()
return the same value during the first command of a transaction, but might differ during subsequent commands. clock_timestamp()
returns the actual current time, and therefore its value changes even within a single SQL command. timeofday()
is a historical PostgreSQL function. Like clock_timestamp()
, it returns the actual current time, but as a formatted text
string rather than a timestamp with time zone
value. now()
is a traditional PostgreSQL equivalent to transaction_timestamp()
.
All the date/time data types also accept the special literal value now
to specify the current date and time (again, interpreted as the transaction start time). Thus, the following three all return the same result:
You do not want to use the third form when specifying a DEFAULT
clause while creating a table. The system will convert now
to a timestamp
as soon as the constant is parsed, so that when the default value is needed, the time of the table creation would be used! The first two forms will not be evaluated until the default value is used, because they are function calls. Thus they will give the desired behavior of defaulting to the time of row insertion.
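A sketch of the distinction (hypothetical table names):

```sql
-- Evaluated at row insertion time, as usually desired:
CREATE TABLE log_good (ts timestamptz DEFAULT now());

-- 'now' is coerced to a timestamp when the table is created,
-- so every inserted row would get the table-creation time:
CREATE TABLE log_bad (ts timestamptz DEFAULT 'now');
```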
The following functions are available to delay execution of the server process:
pg_sleep
makes the current session's process sleep until seconds
seconds have elapsed. seconds
is a value of type double precision
, so fractional-second delays can be specified. pg_sleep_for
is a convenience function for larger sleep times specified as an interval
. pg_sleep_until
is a convenience function for when a specific wake-up time is desired. For example:
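A sketch of the three variants:

```sql
SELECT pg_sleep(1.5);
SELECT pg_sleep_for('5 minutes');
SELECT pg_sleep_until('tomorrow 03:00');
```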
The effective resolution of the sleep interval is platform-specific; 0.01 seconds is a common value. The sleep delay will be at least as long as specified. It might be longer depending on factors such as server load. In particular, pg_sleep_until
is not guaranteed to wake up exactly at the specified time, but it will not wake up any earlier.
Make sure that your session does not hold more locks than necessary when calling pg_sleep
or its variants. Otherwise other sessions might have to wait for your sleeping process, slowing down the entire system.
PostgreSQL provides a function to generate a UUID:
This function returns a version 4 (random) UUID. This is the most commonly used type of UUID and is appropriate for most applications.
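Assuming a server or extension that provides gen_random_uuid() (in core in recent PostgreSQL releases, and available from the pgcrypto extension otherwise; the function name here is such an assumption):

```sql
SELECT gen_random_uuid();
-- e.g. a81bc81b-dead-4e5d-abff-90865d1e13b1 (output is random)
```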
An additional module provides functions that implement other standard algorithms for generating UUIDs.
PostgreSQL also provides the usual comparison operators for UUIDs, as listed in the comparison operators table.
Table 9.36 shows the operators available for the cidr and inet types. The operators <<, <<=, >>, >>=, and && test for subnet inclusion. They consider only the network parts of the two addresses (ignoring any host part) and determine whether one network is identical to or a subnet of the other.
**Table 9.36.** cidr and inet Operators
Operator | Description | Example |
---|---|---|
Table 9.37 shows the functions available for use with the cidr and inet types. The abbrev, host, and text functions are primarily intended to offer alternative display formats.
**Table 9.37.** cidr and inet Functions
Any cidr value can be cast to inet implicitly or explicitly; therefore, the functions shown above as operating on inet also work on cidr values. (Where there are separate functions for inet and cidr, it is because the behavior should be different for the two cases.) Also, it is permitted to cast an inet value to cidr. When this is done, any bits to the right of the netmask are silently zeroed to create a valid cidr value. In addition, you can cast a text value to inet or cidr using normal casting syntax: for example, inet(expression) or colname::cidr.
**Table 9.38.** macaddr Functions
The macaddr type also supports the standard relational operators (>, <=, etc.) for lexicographical ordering, and the bitwise arithmetic operators (~, & and |) for NOT, AND and OR.
**Table 9.39.** macaddr8 Functions
The macaddr8 type also supports the standard relational operators (>, <=, etc.) for ordering, and the bitwise arithmetic operators (~, & and |) for NOT, AND and OR.
The functions and function-like expressions described in this section operate on values of type xml. (See the description of the xml data type for details; the function-like expressions xmlparse and xmlserialize for converting to and from type xml are documented there and not repeated here.) Use of most of these functions requires the installation to have been built with configure --with-libxml.
A set of functions and function-like expressions is available for producing XML content from SQL data. As such, they are particularly suitable for formatting query results into XML documents for processing in client applications.
The function xmlcomment creates an XML value containing an XML comment with the specified text as content. The text cannot contain “--” or end with a “-”, so that the resulting construct is a valid XML comment. If the argument is null, the result is null.
For example:
The function xmlconcat concatenates a list of individual XML values to create a single value containing an XML content fragment. Null values are omitted; the result is null only if there are no nonnull arguments.
For example:
XML declarations, if present, are combined as follows. If all argument values have the same XML version declaration, that version is used in the result; otherwise no version is used. If all argument values have the standalone declaration value “yes”, that value is used in the result. If all argument values have a standalone declaration value and at least one is “no”, that is used in the result. Otherwise the result will have no standalone declaration. If the result is determined to require a standalone declaration but no version declaration, a version declaration with version 1.0 will be used, because XML requires an XML declaration to contain a version declaration. Encoding declarations are ignored and removed in all cases.
For example:
The xmlelement expression produces an XML element with the given name, attributes, and content.
Examples:
Element and attribute names that are not valid XML names are escaped by replacing the offending characters with the sequence xHHHH, where HHHH is the character's Unicode code point in hexadecimal notation. For example:
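For instance (following the escaping rule just described; $ is U+0024 and & is U+0026):

```sql
SELECT xmlelement(name "foo$bar", xmlattributes('xyz' AS "a&b"));
-- <foo_x0024_bar a_x0026_b="xyz"/>
```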
An explicit attribute name need not be specified if the attribute value is a column reference, in which case the column's name will be used as the attribute name by default. In other cases, the attribute must be given an explicit name. So this example is valid:
But these are not:
Element content, if specified, will be formatted according to its data type. If the content is itself of type xml, complex XML documents can be constructed. For example:
Content of other types will be formatted into valid XML character data. This means in particular that the characters <, >, and & will be converted to entities. Binary data (data type bytea) will be represented in base64 or hex encoding, depending on the setting of the configuration parameter xmlbinary. In order to align the SQL and PostgreSQL data types with the XML Schema specification, the particular behavior for individual data types is expected to evolve, at which time a more precise description will appear.
The xmlforest expression produces an XML forest (sequence) of elements using the given names and content.
Examples:
As seen in the second example, the element name can be omitted if the content value is a column reference, in which case the column name is used by default. Otherwise, a name must be specified.
Element names that are not valid XML names are escaped as shown for xmlelement above. Similarly, content data is escaped to make valid XML content, unless it is already of type xml.
Note that XML forests are not valid XML documents if they consist of more than one element, so it might be useful to wrap xmlforest expressions in xmlelement.
The xmlpi expression creates an XML processing instruction. The content, if present, must not contain the character sequence ?>.
Example:
The xmlroot expression alters the properties of the root node of an XML value. If a version is specified, it replaces the value in the root node's version declaration; if a standalone setting is specified, it replaces the value in the root node's standalone declaration.
Example:
To determine the order of the concatenation, an ORDER BY clause may be added to the aggregate call, as described in Section 4.2.7. For example:
The following non-standard approach used to be recommended in previous versions, and may still be useful in specific cases:
The expressions described in this section check properties of xml values.
The function xmlexists returns true if the XPath expression in the first argument returns any nodes, and false otherwise. (If either argument is null, the result is null.)
Example:
The BY REF clauses have no effect in PostgreSQL, but are accepted for SQL conformance and compatibility with other implementations. Per the SQL standard, the first BY REF is required, the second is optional. Also note that the SQL standard specifies the xmlexists construct to take an XQuery expression as first argument, but PostgreSQL currently only supports XPath, which is a subset of XQuery.
These functions check whether a text string is well-formed XML, returning a Boolean result. xml_is_well_formed_document checks for a well-formed document, while xml_is_well_formed_content checks for well-formed content. xml_is_well_formed does the former if the xmloption configuration parameter is set to DOCUMENT, or the latter if it is set to CONTENT. This means that xml_is_well_formed is useful for seeing whether a simple cast to type xml will succeed, whereas the other two functions are useful for seeing whether the corresponding variants of XMLPARSE will succeed.
Examples:
The last example shows that the checks include whether namespaces are correctly matched.
To process values of data type xml, PostgreSQL offers the functions xpath and xpath_exists, which evaluate XPath 1.0 expressions, and the XMLTABLE table function.
The function xpath evaluates the XPath expression xpath (given as text) against the XML value xml. It returns an array of XML values corresponding to the node set produced by the XPath expression. If the XPath expression returns a scalar value rather than a node set, a single-element array is returned.
The second argument must be a well-formed XML document. In particular, it must have a single root node element.
The optional third argument of the function is an array of namespace mappings. This array should be a two-dimensional text array with the length of the second axis being equal to 2 (i.e., it should be an array of arrays, each of which consists of exactly 2 elements). The first element of each array entry is the namespace name (alias), the second the namespace URI. It is not required that aliases provided in this array be the same as those being used in the XML document itself (in other words, both in the XML document and in the xpath function context, aliases are local).
Example:
To deal with default (anonymous) namespaces, do something like this:
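A sketch, binding an alias to the document's default namespace so the XPath expression can reference it:

```sql
SELECT xpath('//mydefns:b/text()',
             '<a xmlns="http://example.com"><b>test</b></a>',
             ARRAY[ARRAY['mydefns', 'http://example.com']]);
-- {test}
```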
The function xpath_exists
is a specialized form of the xpath
function. Instead of returning the individual XML values that satisfy the XPath, this function returns a Boolean indicating whether the query was satisfied or not. This function is equivalent to the standard XMLEXISTS
predicate, except that it also offers support for a namespace mapping argument.
Example:
The xmltable
function produces a table based on the given XML value, an XPath filter to extract rows, and an optional set of column definitions.
The optional XMLNAMESPACES
clause is a comma-separated list of namespaces. It specifies the XML namespaces used in the document and their aliases. A default namespace specification is not currently supported.
The required row_expression
argument is an XPath expression that is evaluated against the supplied XML document to obtain an ordered sequence of XML nodes. This sequence is what xmltable
transforms into output rows.
document_expression
provides the XML document to operate on. The BY REF
clauses have no effect in PostgreSQL, but are allowed for SQL conformance and compatibility with other implementations. The argument must be a well-formed XML document; fragments/forests are not accepted.
The mandatory COLUMNS
clause specifies the list of columns in the output table. If the COLUMNS
clause is omitted, the rows in the result set contain a single column of type xml
containing the data matched by row_expression
. If COLUMNS
is specified, each entry describes a single column. See the syntax summary above for the format. The column name and type are required; the path, default and nullability clauses are optional.
A column marked FOR ORDINALITY
will be populated with row numbers matching the order in which the output rows appeared in the original input XML document. At most one column may be marked FOR ORDINALITY
.
The column_expression
for a column is an XPath expression that is evaluated for each row, relative to the result of the row_expression
, to find the value of the column. If no column_expression
is given, then the column name is used as an implicit path.
If a column's XPath expression returns multiple elements, an error is raised. If the expression matches an empty tag, the result is an empty string (not NULL
). Any xsi:nil
attributes are ignored.
The text body of the XML matched by the column_expression
is used as the column value. Multiple text()
nodes within an element are concatenated in order. Any child elements, processing instructions, and comments are ignored, but the text contents of child elements are concatenated to the result. Note that the whitespace-only text()
node between two non-text elements is preserved, and that leading whitespace on a text()
node is not flattened.
If the path expression does not match for a given row but default_expression
is specified, the value resulting from evaluating that expression is used. If no DEFAULT
clause is given for the column, the field will be set to NULL
. It is possible for a default_expression
to reference the value of output columns that appear prior to it in the column list, so the default of one column may be based on the value of another column.
Columns may be marked NOT NULL
. If the column_expression
for a NOT NULL
column does not match anything and there is no DEFAULT
or the default_expression
also evaluates to null, an error is reported.
Unlike regular PostgreSQL functions, column_expression
and default_expression
are not evaluated to a simple value before calling the function. column_expression
is normally evaluated exactly once per input row, and default_expression
is evaluated each time a default is needed for a field. If the expression qualifies as stable or immutable the repeat evaluation may be skipped. Effectively xmltable
behaves more like a subquery than a function call. This means that you can usefully use volatile functions like nextval
in default_expression
, and column_expression
may depend on other parts of the XML document.
Examples:
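A minimal sketch of the syntax (hypothetical inline document and column names):

```sql
SELECT xt.*
  FROM XMLTABLE('/rows/row'
                PASSING xml '<rows><row id="1"><name>one</name></row>
                             <row id="2"><name>two</name></row></rows>'
                COLUMNS ord FOR ORDINALITY,
                        id   int  PATH '@id',
                        name text PATH 'name') AS xt;
-- ord | id | name
-- ----+----+------
--   1 |  1 | one
--   2 |  2 | two
```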
The following example shows concatenation of multiple text() nodes, usage of the column name as XPath filter, and the treatment of whitespace, XML comments and processing instructions:
The following example illustrates how the XMLNAMESPACES
clause can be used to specify the default namespace, and a list of additional namespaces used in the XML document as well as in the XPath expressions:
The following functions map the contents of relational tables to XML values. They can be thought of as XML export functionality:
The return type of each function is xml
.
table_to_xml
maps the content of the named table, passed as parameter tbl
. The regclass
type accepts strings identifying tables using the usual notation, including optional schema qualifications and double quotes. query_to_xml
executes the query whose text is passed as parameter query
and maps the result set. cursor_to_xml
fetches the indicated number of rows from the cursor specified by the parameter cursor
. This variant is recommended if large tables have to be mapped, because the result value is built up in memory by each function.
If tableforest
is false, then the resulting XML document looks like this:
If tableforest
is true, the result is an XML content fragment that looks like this:
If no table name is available, that is, when mapping a query or a cursor, the string table
is used in the first format, row
in the second format.
The choice between these formats is up to the user. The first format is a proper XML document, which will be important in many applications. The second format tends to be more useful in the cursor_to_xml
function if the result values are to be reassembled into one document later on. The functions for producing XML content discussed above, in particular xmlelement
, can be used to alter the results to taste.
The data values are mapped in the same way as described for the function xmlelement
above.
The parameter nulls
determines whether null values should be included in the output. If true, null values in columns are represented as:
where xsi
is the XML namespace prefix for XML Schema Instance. An appropriate namespace declaration will be added to the result value. If false, columns containing null values are simply omitted from the output.
The parameter targetns
specifies the desired XML namespace of the result. If no particular namespace is wanted, an empty string should be passed.
The following functions return XML Schema documents describing the mappings performed by the corresponding functions above:
It is essential that the same parameters are passed in order to obtain matching XML data mappings and XML Schema documents.
The following functions produce XML data mappings and the corresponding XML Schema in one document (or forest), linked together. They can be useful where self-contained and self-describing results are wanted:
In addition, the following functions are available to produce analogous mappings of entire schemas or the entire current database:
Note that these potentially produce a lot of data, which needs to be built up in memory. When requesting content mappings of large schemas or databases, it might be worthwhile to consider mapping the tables separately instead, possibly even through a cursor.
The result of a schema content mapping looks like this:
where the format of a table mapping depends on the tableforest
parameter as explained above.
The result of a database content mapping looks like this:
where the schema mapping is as above.
The geometric types point, box, lseg, line, path, polygon, and circle have a large set of native support functions and operators, shown in Table 9.33, Table 9.34, and Table 9.35.
Note that the “same as” operator, ~=, represents the usual notion of equality for the point, box, polygon, and circle types. Some of these types also have an = operator, but = compares for equal areas only. The other scalar comparison operators (<= and so on) likewise compare areas for these types.
Table 9.33. Geometric Operators
Operator | Description | Example |
---|---|---|
Before PostgreSQL 8.2, the containment operators @> and <@ were respectively called ~ and @. These names are still available, but are deprecated and will eventually be removed.
Table 9.34. Geometric Functions
Table 9.35. Geometric Type Conversion Functions
It is possible to access the two component numbers of a point as though the point were an array with indexes 0 and 1. For example, if t.p is a point column then SELECT p[0] FROM t retrieves the X coordinate and UPDATE t SET p[1] = ... changes the Y coordinate. In the same way, a value of type box or lseg can be treated as an array of two point values.
The area function works for the types box, circle, and path. The area function only works on the path data type if the points in the path are non-intersecting. For example, the path '((0,0),(0,1),(2,1),(2,2),(1,2),(1,0),(0,0))'::PATH will not work; however, the following visually identical path '((0,0),(0,1),(1,1),(1,2),(2,2),(2,1),(1,1),(1,0),(0,0))'::PATH will work. If the concept of an intersecting versus non-intersecting path is confusing, draw both of the above paths side by side on a piece of graph paper.
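For instance:

```sql
SELECT area(box '((0,0),(2,1))');  -- 2
SELECT area(circle '((0,0),1)');   -- approximately 3.14159265358979
```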
Table 9.40, Table 9.41, and Table 9.42 summarize the functions and operators that are provided for full text searching. See the full text search chapter for a detailed explanation of PostgreSQL's text search facility.
Table 9.40. Text Search Operators
Operator | Return Type | Description | Example | Result |
---|---|---|---|---|
The tsquery containment operators consider only the lexemes listed in the two queries, ignoring the combining operators.
In addition to the operators shown in the table, the ordinary B-tree comparison operators (=, <, etc) are defined for types tsvector and tsquery. These are not very useful for text searching but allow, for example, unique indexes to be built on columns of these types.
Table 9.41. Text Search Functions
Function | Return Type | Description | Example | Result |
---|---|---|---|---|
Table 9.42. Text Search Debugging Functions
The seconds field, including fractional parts (0 - 59)
The extract
function is primarily intended for computational processing. For formatting date/time values for display, see .
When the input value is of type timestamp with time zone
, the truncation is performed with respect to a particular time zone; for example, truncation to day
produces a value that is midnight in that zone. By default, truncation is done with respect to the current setting, but the optional time_zone
argument can be provided to specify a different time zone. The time zone name can be specified in any of the ways described in .
The AT TIME ZONE
converts time stamp without time zone to/from time stamp with time zone, and time values to different time zones. shows its variants.
Expression | Return Type | Description |
---|---|---|
In these expressions, the desired time zone zone
can be specified either as a text string (e.g., 'America/Los_Angeles'
) or as an interval (e.g., INTERVAL '-08:00'
). In the text case, a time zone name can be specified in any of the ways described in .
60 if leap seconds are implemented by the operating system
Function | Return Type | Description | Example | Result |
---|---|---|---|---|
Table 9.38 shows the functions available for use with the macaddr type. The function trunc(macaddr) returns a MAC address with the last 3 bytes set to zero. This can be used to associate the remaining prefix with a manufacturer.
Function | Return Type | Description | Example | Result |
---|---|---|---|---|
Table 9.39 shows the functions available for use with the macaddr8 type. The function trunc(macaddr8) returns a MAC address with the last 5 bytes set to zero. This can be used to associate the remaining prefix with a manufacturer.
Function | Return Type | Description | Example | Result |
---|---|---|---|---|
Unlike the other functions described here, the xmlagg function is an aggregate function. It concatenates the input values to the aggregate function call, much like xmlconcat does, except that concatenation occurs across rows rather than across expressions in a single row. See the documentation on aggregate functions for additional information.
The expression IS DOCUMENT returns true if the argument XML value is a proper XML document, false if it is not (that is, it is a content fragment), or null if the argument is null. See the xml data type documentation for the difference between documents and content fragments.
As an example of using the output produced by these functions, an XSLT stylesheet can convert the output of table_to_xml_and_xmlschema to an HTML document containing a tabular rendition of the table data. In a similar manner, the results from these functions can be converted into other XML-based formats.
Function | Return Type | Description | Example |
---|---|---|---|
Function | Return Type | Description | Example |
---|---|---|---|
All the text search functions that accept an optional regconfig argument will use the configuration specified by default_text_search_config when that argument is omitted.
The functions in Table 9.42 are listed separately because they are not usually used in everyday text searching operations. They are helpful for development and debugging of new text search configurations.
Function | Return Type | Description | Example | Result |
---|---|---|---|---|
Operator | Description | Example | Result
---|---|---|---
`\|\|` | concatenation | `B'10001' \|\| B'011'` | `10001011`
`&` | bitwise AND | `B'10001' & B'01101'` | `00001`
`\|` | bitwise OR | `B'10001' \| B'01101'` | `11101`
`#` | bitwise XOR | `B'10001' # B'01101'` | `11100`
`~` | bitwise NOT | `~ B'10001'` | `01110`
`<<` | bitwise shift left | `B'10001' << 3` | `01000`
`>>` | bitwise shift right | `B'10001' >> 2` | `00100`
Operator | Description | Example | Result
---|---|---|---
`+` | addition | `2 + 3` | `5`
`-` | subtraction | `2 - 3` | `-1`
`*` | multiplication | `2 * 3` | `6`
`/` | division (integer division truncates the result) | `4 / 2` | `2`
`%` | modulo (remainder) | `5 % 4` | `1`
`^` | exponentiation (associates left to right) | `2.0 ^ 3.0` | `8`
`\|/` | square root | `\|/ 25.0` | `5`
`\|\|/` | cube root | `\|\|/ 27.0` | `3`
`!` | factorial | `5 !` | `120`
`!!` | factorial (prefix operator) | `!! 5` | `120`
`@` | absolute value | `@ -5.0` | `5`
`&` | bitwise AND | `91 & 15` | `11`
`\|` | bitwise OR | `32 \| 3` | `35`
`#` | bitwise XOR | `17 # 5` | `20`
`~` | bitwise NOT | `~1` | `-2`
`<<` | bitwise shift left | `1 << 4` | `16`
`>>` | bitwise shift right | `8 >> 2` | `2`
Function | Return Type | Description | Example | Result
---|---|---|---|---
`abs(x)` | (same as input) | absolute value | `abs(-17.4)` | `17.4`
`cbrt(dp)` | `dp` | cube root | `cbrt(27.0)` | `3`
`ceil(dp or numeric)` | (same as input) | nearest integer greater than or equal to argument | `ceil(-42.8)` | `-42`
`ceiling(dp or numeric)` | (same as input) | nearest integer greater than or equal to argument (same as `ceil`) | `ceiling(-95.3)` | `-95`
`degrees(dp)` | `dp` | radians to degrees | `degrees(0.5)` | `28.6478897565412`
`div(y numeric, x numeric)` | `numeric` | integer quotient of `y`/`x` | `div(9,4)` | `2`
`exp(dp or numeric)` | (same as input) | exponential | `exp(1.0)` | `2.71828182845905`
`floor(dp or numeric)` | (same as input) | nearest integer less than or equal to argument | `floor(-42.8)` | `-43`
`ln(dp or numeric)` | (same as input) | natural logarithm | `ln(2.0)` | `0.693147180559945`
`log(dp or numeric)` | (same as input) | base 10 logarithm | `log(100.0)` | `2`
`log10(dp or numeric)` | (same as input) | base 10 logarithm | `log10(100.0)` | `2`
`log(b numeric, x numeric)` | `numeric` | logarithm to base `b` | `log(2.0, 64.0)` | `6.0000000000`
`mod(y, x)` | (same as argument types) | remainder of `y`/`x` | `mod(9,4)` | `1`
`pi()` | `dp` | “π” constant | `pi()` | `3.14159265358979`
`power(a dp, b dp)` | `dp` | `a` raised to the power of `b` | `power(9.0, 3.0)` | `729`
`power(a numeric, b numeric)` | `numeric` | `a` raised to the power of `b` | `power(9.0, 3.0)` | `729`
`radians(dp)` | `dp` | degrees to radians | `radians(45.0)` | `0.785398163397448`
`round(dp or numeric)` | (same as input) | round to nearest integer | `round(42.4)` | `42`
`round(v numeric, s int)` | `numeric` | round to `s` decimal places | `round(42.4382, 2)` | `42.44`
`scale(numeric)` | `integer` | scale of the argument (the number of decimal digits in the fractional part) | `scale(8.41)` | `2`
`sign(dp or numeric)` | (same as input) | sign of the argument (-1, 0, +1) | `sign(-8.4)` | `-1`
`sqrt(dp or numeric)` | (same as input) | square root | `sqrt(2.0)` | `1.4142135623731`
`trunc(dp or numeric)` | (same as input) | truncate toward zero | `trunc(42.8)` | `42`
`trunc(v numeric, s int)` | `numeric` | truncate to `s` decimal places | `trunc(42.4382, 2)` | `42.43`
`width_bucket(operand dp, b1 dp, b2 dp, count int)` | `int` | return the bucket number to which `operand` would be assigned in a histogram having `count` equal-width buckets spanning the range `b1` to `b2`; returns `0` or `count`+1 for an input outside the range | `width_bucket(5.35, 0.024, 10.06, 5)` | `3`
`width_bucket(operand numeric, b1 numeric, b2 numeric, count int)` | `int` | return the bucket number to which `operand` would be assigned in a histogram having `count` equal-width buckets spanning the range `b1` to `b2`; returns `0` or `count`+1 for an input outside the range | `width_bucket(5.35, 0.024, 10.06, 5)` | `3`
`width_bucket(operand anyelement, thresholds anyarray)` | `int` | return the bucket number to which `operand` would be assigned given an array listing the lower bounds of the buckets; returns `0` for an input less than the first lower bound; the `thresholds` array must be sorted, smallest first, or unexpected results will be obtained | `width_bucket(now(), array['yesterday', 'today', 'tomorrow']::timestamptz[])` | `2`
Function | Return Type | Description
---|---|---
`random()` | `dp` | random value in the range 0.0 <= x < 1.0
`setseed(dp)` | `void` | set seed for subsequent `random()` calls (value between -1.0 and 1.0, inclusive)
Function (radians) | Function (degrees) | Description
---|---|---
`acos(x)` | `acosd(x)` | inverse cosine
`asin(x)` | `asind(x)` | inverse sine
`atan(x)` | `atand(x)` | inverse tangent
`atan2(y, x)` | `atan2d(y, x)` | inverse tangent of `y`/`x`
`cos(x)` | `cosd(x)` | cosine
`cot(x)` | `cotd(x)` | cotangent
`sin(x)` | `sind(x)` | sine
`tan(x)` | `tand(x)` | tangent
Function | Description | Example | Result
---|---|---|---
`sinh(x)` | hyperbolic sine | `sinh(0)` | `0`
`cosh(x)` | hyperbolic cosine | `cosh(0)` | `1`
`tanh(x)` | hyperbolic tangent | `tanh(0)` | `0`
`asinh(x)` | inverse hyperbolic sine | `asinh(0)` | `0`
`acosh(x)` | inverse hyperbolic cosine | `acosh(1)` | `0`
`atanh(x)` | inverse hyperbolic tangent | `atanh(0)` | `0`
Function | Return Type | Description | Example
---|---|---|---
`to_char(timestamp, text)` | `text` | convert time stamp to string | `to_char(current_timestamp, 'HH12:MI:SS')`
`to_char(interval, text)` | `text` | convert interval to string | `to_char(interval '15h 2m 12s', 'HH24:MI:SS')`
`to_char(int, text)` | `text` | convert integer to string | `to_char(125, '999')`
`to_char(double precision, text)` | `text` | convert real/double precision to string | `to_char(125.8::real, '999D9')`
`to_char(numeric, text)` | `text` | convert numeric to string | `to_char(-125.8, '999D99S')`
`to_date(text, text)` | `date` | convert string to date | `to_date('05 Dec 2000', 'DD Mon YYYY')`
`to_number(text, text)` | `numeric` | convert string to numeric | `to_number('12,454.8-', '99G999D9S')`
`to_timestamp(text, text)` | `timestamp with time zone` | convert string to time stamp | `to_timestamp('05 Dec 2000', 'DD Mon YYYY')`
Pattern | Description
---|---
`HH` | hour of day (01-12)
`HH12` | hour of day (01-12)
`HH24` | hour of day (00-23)
`MI` | minute (00-59)
`SS` | second (00-59)
`MS` | millisecond (000-999)
`US` | microsecond (000000-999999)
`SSSS` | seconds past midnight (0-86399)
`AM`, `am`, `PM` or `pm` | meridiem indicator (without periods)
`A.M.`, `a.m.`, `P.M.` or `p.m.` | meridiem indicator (with periods)
`Y,YYY` | year (4 or more digits) with comma
`YYYY` | year (4 or more digits)
`YYY` | last 3 digits of year
`YY` | last 2 digits of year
`Y` | last digit of year
`IYYY` | ISO 8601 week-numbering year (4 or more digits)
`IYY` | last 3 digits of ISO 8601 week-numbering year
`IY` | last 2 digits of ISO 8601 week-numbering year
`I` | last digit of ISO 8601 week-numbering year
`BC`, `bc`, `AD` or `ad` | era indicator (without periods)
`B.C.`, `b.c.`, `A.D.` or `a.d.` | era indicator (with periods)
`MONTH` | full upper case month name (blank-padded to 9 chars)
`Month` | full capitalized month name (blank-padded to 9 chars)
`month` | full lower case month name (blank-padded to 9 chars)
`MON` | abbreviated upper case month name (3 chars in English, localized lengths vary)
`Mon` | abbreviated capitalized month name (3 chars in English, localized lengths vary)
`mon` | abbreviated lower case month name (3 chars in English, localized lengths vary)
`MM` | month number (01-12)
`DAY` | full upper case day name (blank-padded to 9 chars)
`Day` | full capitalized day name (blank-padded to 9 chars)
`day` | full lower case day name (blank-padded to 9 chars)
`DY` | abbreviated upper case day name (3 chars in English, localized lengths vary)
`Dy` | abbreviated capitalized day name (3 chars in English, localized lengths vary)
`dy` | abbreviated lower case day name (3 chars in English, localized lengths vary)
`DDD` | day of year (001-366)
`IDDD` | day of ISO 8601 week-numbering year (001-371; day 1 of the year is Monday of the first ISO week)
`DD` | day of month (01-31)
`D` | day of the week, Sunday (`1`) to Saturday (`7`)
`ID` | ISO 8601 day of the week, Monday (`1`) to Sunday (`7`)
`W` | week of month (1-5) (the first week starts on the first day of the month)
`WW` | week number of year (1-53) (the first week starts on the first day of the year)
`IW` | week number of ISO 8601 week-numbering year (01-53; the first Thursday of the year is in week 1)
`CC` | century (2 digits) (the twenty-first century starts on 2001-01-01)
`J` | Julian Day (integer days since November 24, 4714 BC at midnight UTC)
`Q` | quarter
`RM` | month in upper case Roman numerals (I-XII; I=January)
`rm` | month in lower case Roman numerals (i-xii; i=January)
`TZ` | upper case time-zone abbreviation (only supported in `to_char`)
`tz` | lower case time-zone abbreviation (only supported in `to_char`)
`TZH` | time-zone hours
`TZM` | time-zone minutes
`OF` | time-zone offset from UTC (only supported in `to_char`)
FM
prefix
fill mode (suppress leading zeroes and padding blanks)
FMMonth
TH
suffix
upper case ordinal number suffix
DDTH
, e.g., 12TH
th
suffix
lower case ordinal number suffix
DDth
, e.g., 12th
FX
prefix
fixed format global option (see usage notes)
FX Month DD Day
TM
prefix
translation mode (print localized day and month names based on lc_time)
TMMonth
SP
suffix
spell mode (not implemented)
DDSP
9
digit position (can be dropped if insignificant)
0
digit position (will not be dropped, even if insignificant)
.
(period)
decimal point
,
(comma)
group (thousands) separator
PR
negative value in angle brackets
S
sign anchored to number (uses locale)
L
currency symbol (uses locale)
D
decimal point (uses locale)
G
group separator (uses locale)
MI
minus sign in specified position (if number < 0)
PL
plus sign in specified position (if number > 0)
SG
plus/minus sign in specified position
RN
Roman numeral (input between 1 and 3999)
TH
or th
ordinal number suffix
V
shift specified number of digits (see notes)
EEEE
exponent for scientific notation
FM
prefix
fill mode (suppress trailing zeroes and padding blanks)
FM99.99
TH
suffix
upper case ordinal number suffix
999TH
th
suffix
lower case ordinal number suffix
999th
to_char(current_timestamp, 'Day, DD HH12:MI:SS')
'Tuesday  , 06 05:39:18'
to_char(current_timestamp, 'FMDay, FMDD HH12:MI:SS')
'Tuesday, 6 05:39:18'
to_char(-0.1, '99.99')
' -.10'
to_char(-0.1, 'FM9.99')
'-.1'
to_char(-0.1, 'FM90.99')
'-0.1'
to_char(0.1, '0.9')
' 0.1'
to_char(12, '9990999.9')
' 0012.0'
to_char(12, 'FM9990999.9')
'0012.'
to_char(485, '999')
' 485'
to_char(-485, '999')
'-485'
to_char(485, '9 9 9')
' 4 8 5'
to_char(1485, '9,999')
' 1,485'
to_char(1485, '9G999')
' 1 485'
to_char(148.5, '999.999')
' 148.500'
to_char(148.5, 'FM999.999')
'148.5'
to_char(148.5, 'FM999.990')
'148.500'
to_char(148.5, '999D999')
' 148,500'
to_char(3148.5, '9G999D999')
' 3 148,500'
to_char(-485, '999S')
'485-'
to_char(-485, '999MI')
'485-'
to_char(485, '999MI')
'485 '
to_char(485, 'FM999MI')
'485'
to_char(485, 'PL999')
'+485'
to_char(485, 'SG999')
'+485'
to_char(-485, 'SG999')
'-485'
to_char(-485, '9SG99')
'4-85'
to_char(-485, '999PR')
'<485>'
to_char(485, 'L999')
'DM 485'
to_char(485, 'RN')
' CDLXXXV'
to_char(485, 'FMRN')
'CDLXXXV'
to_char(5.2, 'FMRN')
'V'
to_char(482, '999th')
' 482nd'
to_char(485, '"Good number:"999')
'Good number: 485'
to_char(485.8, '"Pre:"999" Post:" .999')
'Pre: 485 Post: .800'
to_char(12, '99V999')
' 12000'
to_char(12.4, '99V999')
' 12400'
to_char(12.45, '99V9')
' 125'
to_char(0.0004859, '9.99EEEE')
' 4.86e-04'
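A few of the patterns above, exercised directly in SQL (results taken from the examples in this section; grouping and currency characters are locale-dependent):

```sql
-- Locale-aware decimal point (D) with a trailing anchored sign (S)
SELECT to_char(-125.8, '999D99S');                  -- '125.80-'

-- Parse a date string with a matching template
SELECT to_date('05 Dec 2000', 'DD Mon YYYY');       -- 2000-12-05

-- FM suppresses the leading padding that plain digit patterns emit
SELECT to_char(485, '999');                         -- ' 485'
SELECT to_char(485, 'FM999');                       -- '485'
```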
enum_first(anyenum)
Returns the first value of the input enum type
enum_first(null::rainbow)
red
enum_last(anyenum)
Returns the last value of the input enum type
enum_last(null::rainbow)
purple
enum_range(anyenum)
Returns all values of the input enum type in an ordered array
enum_range(null::rainbow)
{red,orange,yellow,green,blue,purple}
enum_range(anyenum, anyenum)
Returns the range between the two given enum values, as an ordered array. The values must be from the same enum type. If the first parameter is null, the result will start with the first value of the enum type. If the second parameter is null, the result will end with the last value of the enum type.
enum_range('orange'::rainbow, 'green'::rainbow)
{orange,yellow,green}
enum_range(NULL, 'green'::rainbow)
{red,orange,yellow,green}
enum_range('orange'::rainbow, NULL)
{orange,yellow,green,blue,purple}
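The `rainbow` type assumed by these examples can be created as follows; passing `null::rainbow` is simply a way of handing the function the type itself:

```sql
CREATE TYPE rainbow AS ENUM
    ('red', 'orange', 'yellow', 'green', 'blue', 'purple');

SELECT enum_first(null::rainbow);                        -- red
SELECT enum_range('orange'::rainbow, 'green'::rainbow);  -- {orange,yellow,green}
```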
~
Matches regular expression, case sensitive
'thomas' ~ '.*thomas.*'
~*
Matches regular expression, case insensitive
'thomas' ~* '.*Thomas.*'
!~
Does not match regular expression, case sensitive
'thomas' !~ '.*Thomas.*'
!~*
Does not match regular expression, case insensitive
'thomas' !~* '.*vadim.*'
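In practice these operators usually appear in a WHERE clause; a small sketch (the `staff` table and `name` column are illustrative, not part of the examples above):

```sql
-- Case-sensitive vs. case-insensitive matching
SELECT 'thomas' ~  'Thomas';   -- false
SELECT 'thomas' ~* 'Thomas';   -- true

-- Filter rows whose name is a single capitalized word
SELECT * FROM staff WHERE name ~ '^[A-Z][a-z]+$';
```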
(
re
)
(where re
is any regular expression) matches a match for re
, with the match noted for possible reporting
(?:
re
)
as above, but the match is not noted for reporting (a “non-capturing” set of parentheses) (AREs only)
.
matches any single character
[
chars
]
a bracket expression, matching any one of the chars
(see Section 9.7.3.2 for more detail)
\
k
(where k
is a non-alphanumeric character) matches that character taken as an ordinary character, e.g., \\
matches a backslash character
\
c
where c
is alphanumeric (possibly followed by other characters) is an escape, see Section 9.7.3.3 (AREs only; in EREs and BREs, this matches c
)
{
when followed by a character other than a digit, matches the left-brace character {
; when followed by a digit, it is the beginning of a bound
(see below)
x
where x
is a single character with no other significance, matches that character
*
a sequence of 0 or more matches of the atom
+
a sequence of 1 or more matches of the atom
?
a sequence of 0 or 1 matches of the atom
{
m
}
a sequence of exactly m
matches of the atom
{
m
,}
a sequence of m
or more matches of the atom
{
m
,
n
}
a sequence of m
through n
(inclusive) matches of the atom; m
cannot exceed n
*?
non-greedy version of *
+?
non-greedy version of +
??
non-greedy version of ?
{
m
}?
non-greedy version of {
m
}
{
m
,}?
non-greedy version of {
m
,}
{
m
,
n
}?
non-greedy version of {
m
,
n
}
^
matches at the beginning of the string
$
matches at the end of the string
(?=
re
)
positive lookahead matches at any point where a substring matching re
begins (AREs only)
(?!
re
)
negative lookahead matches at any point where no substring matching re
begins (AREs only)
(?<=
re
)
positive lookbehind matches at any point where a substring matching re
ends (AREs only)
(?<!
re
)
negative lookbehind matches at any point where no substring matching re
ends (AREs only)
\a
alert (bell) character, as in C
\b
backspace, as in C
\B
synonym for backslash (\
) to help reduce the need for backslash doubling
\c
X
(where X
is any character) the character whose low-order 5 bits are the same as those of X
, and whose other bits are all zero
\e
the character whose collating-sequence name is ESC
, or failing that, the character with octal value 033
\f
form feed, as in C
\n
newline, as in C
\r
carriage return, as in C
\t
horizontal tab, as in C
\u
wxyz
(where wxyz
is exactly four hexadecimal digits) the character whose hexadecimal value is 0x
wxyz
\U
stuvwxyz
(where stuvwxyz
is exactly eight hexadecimal digits) the character whose hexadecimal value is 0x
stuvwxyz
\v
vertical tab, as in C
\x
hhh
(where hhh
is any sequence of hexadecimal digits) the character whose hexadecimal value is 0x
hhh
(a single character no matter how many hexadecimal digits are used)
\0
the character whose value is 0
(the null byte)
\
xy
(where xy
is exactly two octal digits, and is not a back reference) the character whose octal value is 0
xy
\
xyz
(where xyz
is exactly three octal digits, and is not a back reference) the character whose octal value is 0
xyz
\d
[[:digit:]]
\s
[[:space:]]
\w
[[:alnum:]_]
(note underscore is included)
\D
[^[:digit:]]
\S
[^[:space:]]
\W
[^[:alnum:]_]
(note underscore is included)
\A
matches only at the beginning of the string (see Section 9.7.3.5 for how this differs from ^
)
\m
matches only at the beginning of a word
\M
matches only at the end of a word
\y
matches only at the beginning or end of a word
\Y
matches only at a point that is not the beginning or end of a word
\Z
matches only at the end of the string (see Section 9.7.3.5 for how this differs from $
)
\
m
(where m
is a nonzero digit) a back reference to the m
'th subexpression
\
mnn
(where m
is a nonzero digit, and nn
is some more digits, and the decimal value mnn
is not greater than the number of closing capturing parentheses seen so far) a back reference to the mnn
'th subexpression
b
rest of RE is a BRE
c
case-sensitive matching (overrides operator type)
e
rest of RE is an ERE
i
case-insensitive matching (see Section 9.7.3.5) (overrides operator type)
m
historical synonym for n
n
newline-sensitive matching (see Section 9.7.3.5)
p
partial newline-sensitive matching (see Section 9.7.3.5)
q
rest of RE is a literal (“quoted”) string, all ordinary characters
s
non-newline-sensitive matching (default)
t
tight syntax (default; see below)
w
inverse partial newline-sensitive (“weird”) matching (see Section 9.7.3.5)
x
expanded syntax (see below)
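Embedded options can stand in for the operator-level choices above; for example, `(?i)` and `(?q)` at the start of an ARE:

```sql
SELECT 'axb' ~ 'a.b';            -- true: . matches any single character
SELECT 'axb' ~ '(?q)a.b';        -- false: after (?q) the rest is a literal string
SELECT 'THOMAS' ~ '(?i)thomas';  -- true: case-insensitive despite the ~ operator
```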
string
||
string
text
String concatenation
'Post' || 'greSQL'
PostgreSQL
string
||
non-string
or non-string
||
string
text
String concatenation with one non-string input
'Value: ' || 42
Value: 42
bit_length(
string
)
int
Number of bits in string
bit_length('jose')
32
char_length(
string
) or character_length(
string
)
int
Number of characters in string
char_length('jose')
4
lower(
string
)
text
Convert string to lower case
lower('TOM')
tom
octet_length(
string
)
int
Number of bytes in string
octet_length('jose')
4
overlay(
string
placing string
from int
[for int
])
text
Replace substring
overlay('Txxxxas' placing 'hom' from 2 for 4)
Thomas
position(
substring
in string
)
int
Location of specified substring
position('om' in 'Thomas')
3
substring(
string
[from int
] [for int
])
text
Extract substring
substring('Thomas' from 2 for 3)
hom
substring(
string
from pattern
)
text
Extract substring matching POSIX regular expression. See Section 9.7 for more information on pattern matching.
substring('Thomas' from '...$')
mas
substring(
string
from pattern
for escape
)
text
Extract substring matching SQL regular expression. See Section 9.7 for more information on pattern matching.
substring('Thomas' from '%#"o_a#"_' for '#')
oma
trim([leading | trailing | both] [
characters
] from string
)
text
Remove the longest string containing only characters from characters
(a space by default) from the start, end, or both ends (both
is the default) of string
trim(both 'xyz' from 'yxTomxx')
Tom
trim([leading | trailing | both] [from]
string
[, characters
] )
text
Non-standard syntax for trim()
trim(both from 'yxTomxx', 'xyz')
Tom
upper(
string
)
text
Convert string to upper case
upper('tom')
TOM
ascii(
string
)
int
ASCII code of the first character of the argument. For UTF8 returns the Unicode code point of the character. For other multibyte encodings, the argument must be an ASCII character.
ascii('x')
120
btrim(
string
text
[, characters
text
])
text
Remove the longest string consisting only of characters in characters
(a space by default) from the start and end of string
btrim('xyxtrimyyx', 'xyz')
trim
chr(int
)
text
Character with the given code. For UTF8 the argument is treated as a Unicode code point. For other multibyte encodings the argument must designate an ASCII character. The NULL (0) character is not allowed because text data types cannot store such bytes.
chr(65)
A
concat(
str
"any"
[, str
"any"
[, ...] ])
text
Concatenate the text representations of all the arguments. NULL arguments are ignored.
concat('abcde', 2, NULL, 22)
abcde222
concat_ws(
sep
text
, str
"any"
[, str
"any"
[, ...] ])
text
Concatenate all but the first argument with separators. The first argument is used as the separator string. NULL arguments are ignored.
concat_ws(',', 'abcde', 2, NULL, 22)
abcde,2,22
convert(
string
bytea
, src_encoding
name
, dest_encoding
name
)
bytea
Convert string to dest_encoding
. The original encoding is specified by src_encoding
. The string
must be valid in this encoding. Conversions can be defined by CREATE CONVERSION
. Also there are some predefined conversions. See Table 9.11 for available conversions.
convert('text_in_utf8', 'UTF8', 'LATIN1')
text_in_utf8
represented in Latin-1 encoding (ISO 8859-1)
convert_from(
string
bytea
, src_encoding
name
)
text
Convert string to the database encoding. The original encoding is specified by src_encoding
. The string
must be valid in this encoding.
convert_from('text_in_utf8', 'UTF8')
text_in_utf8
represented in the current database encoding
convert_to(
string
text
, dest_encoding
name
)
bytea
Convert string to dest_encoding
.
convert_to('some text', 'UTF8')
some text
represented in the UTF8 encoding
decode(
string
text
, format
text
)
bytea
Decode binary data from textual representation in string
. Options for format
are same as in encode
.
decode('MTIzAAE=', 'base64')
\x3132330001
encode(
data
bytea
, format
text
)
text
Encode binary data into a textual representation. Supported formats are: base64
, hex
, escape
. escape
converts zero bytes and high-bit-set bytes to octal sequences (\
nnn
) and doubles backslashes.
encode('123\000\001', 'base64')
MTIzAAE=
format
(formatstr
text
[, formatarg
"any"
[, ...] ])
text
Format arguments according to a format string. This function is similar to the C function sprintf
. See Section 9.4.1.
format('Hello %s, %1$s', 'World')
Hello World, World
initcap(
string
)
text
Convert the first letter of each word to upper case and the rest to lower case. Words are sequences of alphanumeric characters separated by non-alphanumeric characters.
initcap('hi THOMAS')
Hi Thomas
left(
str
text
, n
int
)
text
Return first n
characters in the string. When n
is negative, return all but last |n
| characters.
left('abcde', 2)
ab
length(
string
)
int
Number of characters in string
length('jose')
4
length(
string
bytea
, encoding
name
)
int
Number of characters in string
in the given encoding
. The string
must be valid in this encoding.
length('jose', 'UTF8')
4
lpad(
string
text
, length
int
[, fill
text
])
text
Fill up the string
to length length
by prepending the characters fill
(a space by default). If the string
is already longer than length
then it is truncated (on the right).
lpad('hi', 5, 'xy')
xyxhi
ltrim(
string
text
[, characters
text
])
text
Remove the longest string containing only characters from characters
(a space by default) from the start of string
ltrim('zzzytest', 'xyz')
test
md5(
string
)
text
Calculates the MD5 hash of string
, returning the result in hexadecimal
md5('abc')
900150983cd24fb0d6963f7d28e17f72
parse_ident(
qualified_identifier
text
[, strictmode
boolean
DEFAULT true ] )
text[]
Split qualified_identifier
into an array of identifiers, removing any quoting of individual identifiers. By default, extra characters after the last identifier are considered an error; but if the second parameter is false
, then such extra characters are ignored. (This behavior is useful for parsing names for objects like functions.) Note that this function does not truncate over-length identifiers. If you want truncation you can cast the result to name[]
.
parse_ident('"SomeSchema".someTable')
{SomeSchema,sometable}
pg_client_encoding()
name
Current client encoding name
pg_client_encoding()
SQL_ASCII
quote_ident(
string
text
)
text
Return the given string suitably quoted to be used as an identifier in an SQL statement string. Quotes are added only if necessary (i.e., if the string contains non-identifier characters or would be case-folded). Embedded quotes are properly doubled. See also Example 42.1.
quote_ident('Foo bar')
"Foo bar"
quote_literal(
string
text
)
text
Return the given string suitably quoted to be used as a string literal in an SQL statement string. Embedded single-quotes and backslashes are properly doubled. Note that quote_literal
returns null on null input; if the argument might be null, quote_nullable
is often more suitable. See also Example 42.1.
quote_literal(E'O\'Reilly')
'O''Reilly'
quote_literal(
value
anyelement
)
text
Coerce the given value to text and then quote it as a literal. Embedded single-quotes and backslashes are properly doubled.
quote_literal(42.5)
'42.5'
quote_nullable(
string
text
)
text
Return the given string suitably quoted to be used as a string literal in an SQL statement string; or, if the argument is null, return NULL
. Embedded single-quotes and backslashes are properly doubled. See also Example 42.1.
quote_nullable(NULL)
NULL
quote_nullable(
value
anyelement
)
text
Coerce the given value to text and then quote it as a literal; or, if the argument is null, return NULL
. Embedded single-quotes and backslashes are properly doubled.
quote_nullable(42.5)
'42.5'
regexp_match(
string
text
, pattern
text
[, flags
text
])
text[]
Return captured substring(s) resulting from the first match of a POSIX regular expression to the string
. See Section 9.7.3 for more information.
regexp_match('foobarbequebaz', '(bar)(beque)')
{bar,beque}
regexp_matches(
string
text
, pattern
text
[, flags
text
])
setof text[]
Return captured substring(s) resulting from matching a POSIX regular expression to the string
. See Section 9.7.3 for more information.
regexp_matches('foobarbequebaz', 'ba.', 'g')
{bar}
{baz}
(2 rows)
regexp_replace(
string
text
, pattern
text
, replacement
text
[, flags
text
])
text
Replace substring(s) matching a POSIX regular expression. See Section 9.7.3 for more information.
regexp_replace('Thomas', '.[mN]a.', 'M')
ThM
regexp_split_to_array(
string
text
, pattern
text
[, flags
text
])
text[]
Split string
using a POSIX regular expression as the delimiter. See Section 9.7.3 for more information.
regexp_split_to_array('hello world', '\s+')
{hello,world}
regexp_split_to_table(
string
text
, pattern
text
[, flags
text
])
setof text
Split string
using a POSIX regular expression as the delimiter. See Section 9.7.3 for more information.
regexp_split_to_table('hello world', '\s+')
hello
world
(2 rows)
repeat(
string
text
, number
int
)
text
Repeat string
the specified number
of times
repeat('Pg', 4)
PgPgPgPg
replace(
string
text
, from
text
, to
text
)
text
Replace all occurrences in string
of substring from
with substring to
replace('abcdefabcdef', 'cd', 'XX')
abXXefabXXef
reverse(
str
)
text
Return reversed string.
reverse('abcde')
edcba
right(
str
text
, n
int
)
text
Return last n
characters in the string. When n
is negative, return all but first |n
| characters.
right('abcde', 2)
de
rpad(
string
text
, length
int
[, fill
text
])
text
Fill up the string
to length length
by appending the characters fill
(a space by default). If the string
is already longer than length
then it is truncated.
rpad('hi', 5, 'xy')
hixyx
rtrim(
string
text
[, characters
text
])
text
Remove the longest string containing only characters from characters
(a space by default) from the end of string
rtrim('testxxzx', 'xyz')
test
split_part(
string
text
, delimiter
text
, field
int
)
text
Split string
on delimiter
and return the given field (counting from one)
split_part('abc~@~def~@~ghi', '~@~', 2)
def
strpos(
string
, substring
)
int
Location of specified substring (same as position(
substring
in string
), but note the reversed argument order)
strpos('high', 'ig')
2
substr(
string
, from
[, count
])
text
Extract substring (same as substring(string
from from
for count
))
substr('alphabet', 3, 2)
ph
starts_with(
string
, prefix
)
bool
Returns true if string
starts with prefix
.
starts_with('alphabet', 'alph')
t
to_ascii(
string
text
[, encoding
text
])
text
Convert string
to ASCII from another encoding (only supports conversion from LATIN1
, LATIN2
, LATIN9
, and WIN1250
encodings)
to_ascii('Karel')
Karel
to_hex(
number
int
or bigint
)
text
Convert number
to its equivalent hexadecimal representation
to_hex(2147483647)
7fffffff
translate(
string
text
, from
text
, to
text
)
text
Any character in string
that matches a character in the from
set is replaced by the corresponding character in the to
set. If from
is longer than to
, occurrences of the extra characters in from
are removed.
translate('12345', '143', 'ax')
a2x5
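Several of these functions combine naturally when building dynamic SQL; a short sketch using only calls from the table above:

```sql
SELECT split_part('abc~@~def~@~ghi', '~@~', 2);   -- 'def'
SELECT translate('12345', '143', 'ax');           -- 'a2x5'

-- quote_ident/quote_literal are the safe way to splice identifiers
-- and literals into a query string
SELECT 'SELECT * FROM ' || quote_ident('Foo bar')
       || ' WHERE note = ' || quote_literal('O''Reilly');
-- SELECT * FROM "Foo bar" WHERE note = 'O''Reilly'
```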
ascii_to_mic
SQL_ASCII
MULE_INTERNAL
ascii_to_utf8
SQL_ASCII
UTF8
big5_to_euc_tw
BIG5
EUC_TW
big5_to_mic
BIG5
MULE_INTERNAL
big5_to_utf8
BIG5
UTF8
euc_cn_to_mic
EUC_CN
MULE_INTERNAL
euc_cn_to_utf8
EUC_CN
UTF8
euc_jp_to_mic
EUC_JP
MULE_INTERNAL
euc_jp_to_sjis
EUC_JP
SJIS
euc_jp_to_utf8
EUC_JP
UTF8
euc_kr_to_mic
EUC_KR
MULE_INTERNAL
euc_kr_to_utf8
EUC_KR
UTF8
euc_tw_to_big5
EUC_TW
BIG5
euc_tw_to_mic
EUC_TW
MULE_INTERNAL
euc_tw_to_utf8
EUC_TW
UTF8
gb18030_to_utf8
GB18030
UTF8
gbk_to_utf8
GBK
UTF8
iso_8859_10_to_utf8
LATIN6
UTF8
iso_8859_13_to_utf8
LATIN7
UTF8
iso_8859_14_to_utf8
LATIN8
UTF8
iso_8859_15_to_utf8
LATIN9
UTF8
iso_8859_16_to_utf8
LATIN10
UTF8
iso_8859_1_to_mic
LATIN1
MULE_INTERNAL
iso_8859_1_to_utf8
LATIN1
UTF8
iso_8859_2_to_mic
LATIN2
MULE_INTERNAL
iso_8859_2_to_utf8
LATIN2
UTF8
iso_8859_2_to_windows_1250
LATIN2
WIN1250
iso_8859_3_to_mic
LATIN3
MULE_INTERNAL
iso_8859_3_to_utf8
LATIN3
UTF8
iso_8859_4_to_mic
LATIN4
MULE_INTERNAL
iso_8859_4_to_utf8
LATIN4
UTF8
iso_8859_5_to_koi8_r
ISO_8859_5
KOI8R
iso_8859_5_to_mic
ISO_8859_5
MULE_INTERNAL
iso_8859_5_to_utf8
ISO_8859_5
UTF8
iso_8859_5_to_windows_1251
ISO_8859_5
WIN1251
iso_8859_5_to_windows_866
ISO_8859_5
WIN866
iso_8859_6_to_utf8
ISO_8859_6
UTF8
iso_8859_7_to_utf8
ISO_8859_7
UTF8
iso_8859_8_to_utf8
ISO_8859_8
UTF8
iso_8859_9_to_utf8
LATIN5
UTF8
johab_to_utf8
JOHAB
UTF8
koi8_r_to_iso_8859_5
KOI8R
ISO_8859_5
koi8_r_to_mic
KOI8R
MULE_INTERNAL
koi8_r_to_utf8
KOI8R
UTF8
koi8_r_to_windows_1251
KOI8R
WIN1251
koi8_r_to_windows_866
KOI8R
WIN866
koi8_u_to_utf8
KOI8U
UTF8
mic_to_ascii
MULE_INTERNAL
SQL_ASCII
mic_to_big5
MULE_INTERNAL
BIG5
mic_to_euc_cn
MULE_INTERNAL
EUC_CN
mic_to_euc_jp
MULE_INTERNAL
EUC_JP
mic_to_euc_kr
MULE_INTERNAL
EUC_KR
mic_to_euc_tw
MULE_INTERNAL
EUC_TW
mic_to_iso_8859_1
MULE_INTERNAL
LATIN1
mic_to_iso_8859_2
MULE_INTERNAL
LATIN2
mic_to_iso_8859_3
MULE_INTERNAL
LATIN3
mic_to_iso_8859_4
MULE_INTERNAL
LATIN4
mic_to_iso_8859_5
MULE_INTERNAL
ISO_8859_5
mic_to_koi8_r
MULE_INTERNAL
KOI8R
mic_to_sjis
MULE_INTERNAL
SJIS
mic_to_windows_1250
MULE_INTERNAL
WIN1250
mic_to_windows_1251
MULE_INTERNAL
WIN1251
mic_to_windows_866
MULE_INTERNAL
WIN866
sjis_to_euc_jp
SJIS
EUC_JP
sjis_to_mic
SJIS
MULE_INTERNAL
sjis_to_utf8
SJIS
UTF8
windows_1258_to_utf8
WIN1258
UTF8
uhc_to_utf8
UHC
UTF8
utf8_to_ascii
UTF8
SQL_ASCII
utf8_to_big5
UTF8
BIG5
utf8_to_euc_cn
UTF8
EUC_CN
utf8_to_euc_jp
UTF8
EUC_JP
utf8_to_euc_kr
UTF8
EUC_KR
utf8_to_euc_tw
UTF8
EUC_TW
utf8_to_gb18030
UTF8
GB18030
utf8_to_gbk
UTF8
GBK
utf8_to_iso_8859_1
UTF8
LATIN1
utf8_to_iso_8859_10
UTF8
LATIN6
utf8_to_iso_8859_13
UTF8
LATIN7
utf8_to_iso_8859_14
UTF8
LATIN8
utf8_to_iso_8859_15
UTF8
LATIN9
utf8_to_iso_8859_16
UTF8
LATIN10
utf8_to_iso_8859_2
UTF8
LATIN2
utf8_to_iso_8859_3
UTF8
LATIN3
utf8_to_iso_8859_4
UTF8
LATIN4
utf8_to_iso_8859_5
UTF8
ISO_8859_5
utf8_to_iso_8859_6
UTF8
ISO_8859_6
utf8_to_iso_8859_7
UTF8
ISO_8859_7
utf8_to_iso_8859_8
UTF8
ISO_8859_8
utf8_to_iso_8859_9
UTF8
LATIN5
utf8_to_johab
UTF8
JOHAB
utf8_to_koi8_r
UTF8
KOI8R
utf8_to_koi8_u
UTF8
KOI8U
utf8_to_sjis
UTF8
SJIS
utf8_to_windows_1258
UTF8
WIN1258
utf8_to_uhc
UTF8
UHC
utf8_to_windows_1250
UTF8
WIN1250
utf8_to_windows_1251
UTF8
WIN1251
utf8_to_windows_1252
UTF8
WIN1252
utf8_to_windows_1253
UTF8
WIN1253
utf8_to_windows_1254
UTF8
WIN1254
utf8_to_windows_1255
UTF8
WIN1255
utf8_to_windows_1256
UTF8
WIN1256
utf8_to_windows_1257
UTF8
WIN1257
utf8_to_windows_866
UTF8
WIN866
utf8_to_windows_874
UTF8
WIN874
windows_1250_to_iso_8859_2
WIN1250
LATIN2
windows_1250_to_mic
WIN1250
MULE_INTERNAL
windows_1250_to_utf8
WIN1250
UTF8
windows_1251_to_iso_8859_5
WIN1251
ISO_8859_5
windows_1251_to_koi8_r
WIN1251
KOI8R
windows_1251_to_mic
WIN1251
MULE_INTERNAL
windows_1251_to_utf8
WIN1251
UTF8
windows_1251_to_windows_866
WIN1251
WIN866
windows_1252_to_utf8
WIN1252
UTF8
windows_1256_to_utf8
WIN1256
UTF8
windows_866_to_iso_8859_5
WIN866
ISO_8859_5
windows_866_to_koi8_r
WIN866
KOI8R
windows_866_to_mic
WIN866
MULE_INTERNAL
windows_866_to_utf8
WIN866
UTF8
windows_866_to_windows_1251
WIN866
WIN1251
windows_874_to_utf8
WIN874
UTF8
euc_jis_2004_to_utf8
EUC_JIS_2004
UTF8
utf8_to_euc_jis_2004
UTF8
EUC_JIS_2004
shift_jis_2004_to_utf8
SHIFT_JIS_2004
UTF8
utf8_to_shift_jis_2004
UTF8
SHIFT_JIS_2004
euc_jis_2004_to_shift_jis_2004
EUC_JIS_2004
SHIFT_JIS_2004
shift_jis_2004_to_euc_jis_2004
SHIFT_JIS_2004
EUC_JIS_2004
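The conversions above are what convert_to/convert_from draw on; a round trip through Latin-1, assuming a UTF8 database:

```sql
-- Encode text into a bytea in Latin-1, then decode it back
SELECT convert_to('Grüße', 'LATIN1');                           -- bytea value
SELECT convert_from(convert_to('Grüße', 'LATIN1'), 'LATIN1');   -- 'Grüße'
```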
Expression | Return Type | Description
---|---|---
`timestamp without time zone AT TIME ZONE zone` | `timestamp with time zone` | Treat given time stamp without time zone as located in the specified time zone
`timestamp with time zone AT TIME ZONE zone` | `timestamp without time zone` | Convert given time stamp with time zone to the new time zone, with no time zone designation
`time with time zone AT TIME ZONE zone` | `time with time zone` | Convert given time with time zone to the new time zone
Function | Return Type | Description
---|---|---
`abbrev(inet)` | `text` | abbreviated display format as text
`abbrev(cidr)` | `text` | abbreviated display format as text
`broadcast(inet)` | `inet` | broadcast address for network
`family(inet)` | `int` | extract family of address; `4` for IPv4, `6` for IPv6
`host(inet)` | `text` | extract IP address as text
`hostmask(inet)` | `inet` | construct host mask for network
`masklen(inet)` | `int` | extract netmask length
`netmask(inet)` | `inet` | construct netmask for network
`network(inet)` | `cidr` | extract network part of address
`set_masklen(inet, int)` | `inet` | set netmask length for `inet` value
`set_masklen(cidr, int)` | `cidr` | set netmask length for `cidr` value
`text(inet)` | `text` | extract IP address and netmask length as text
`inet_same_family(inet, inet)` | `boolean` | are the addresses from the same family?
`inet_merge(inet, inet)` | `cidr` | the smallest network which includes both of the given networks

Function | Return Type | Description
---|---|---
`trunc(macaddr)` | `macaddr` | set last 3 bytes to zero
`trunc(macaddr8)` | `macaddr8` | set last 5 bytes to zero
`macaddr8_set7bit(macaddr8)` | `macaddr8` | set 7th bit to one, also known as modified EUI-64, for inclusion in an IPv6 address
Function | Return Type | Description
---|---|---
`area(object)` | `double precision` | area
`center(object)` | `point` | center
`diameter(circle)` | `double precision` | diameter of circle
`height(box)` | `double precision` | vertical size of box
`isclosed(path)` | `boolean` | a closed path?
`isopen(path)` | `boolean` | an open path?
`length(object)` | `double precision` | length
`npoints(path)` | `int` | number of points
`npoints(polygon)` | `int` | number of points
`pclose(path)` | `path` | convert path to closed
`popen(path)` | `path` | convert path to open
`radius(circle)` | `double precision` | radius of circle
`width(box)` | `double precision` | horizontal size of box

Function | Return Type | Description
---|---|---
`box(circle)` | `box` | circle to box
`box(point)` | `box` | point to empty box
`box(point, point)` | `box` | points to box
`box(polygon)` | `box` | polygon to box
`bound_box(box, box)` | `box` | boxes to bounding box
`circle(box)` | `circle` | box to circle
`circle(point, double precision)` | `circle` | center and radius to circle
`circle(polygon)` | `circle` | polygon to circle
`line(point, point)` | `line` | points to line
`lseg(box)` | `lseg` | box diagonal to line segment
`lseg(point, point)` | `lseg` | points to line segment
`path(polygon)` | `path` | polygon to path
`point(double precision, double precision)` | `point` | construct point
`point(box)` | `point` | center of box
`point(circle)` | `point` | center of circle
`point(lseg)` | `point` | center of line segment
`point(polygon)` | `point` | center of polygon
`polygon(box)` | `polygon` | box to 4-point polygon
`polygon(circle)` | `polygon` | circle to 12-point polygon
`polygon(npts, circle)` | `polygon` | circle to `npts`-point polygon
`polygon(path)` | `polygon` | path to polygon
Function | Return Type | Description
---|---|---
`ts_debug([config regconfig,] document text)` | `setof record` | test a configuration
`ts_lexize(dict regdictionary, token text)` | `text[]` | test a dictionary
`ts_parse(parser_name text, document text)` | `setof record` | test a parser
`ts_parse(parser_oid oid, document text)` | `setof record` | test a parser
`ts_token_type(parser_name text)` | `setof record` | get token types defined by parser
`ts_token_type(parser_oid oid)` | `setof record` | get token types defined by parser
`ts_stat(sqlquery text [, weights text])` | `setof record` | get statistics of a `tsvector` column
Operator | Description | Example
---|---|---
`<` | is less than | `inet '192.168.1.5' < inet '192.168.1.6'`
`<=` | is less than or equal | `inet '192.168.1.5' <= inet '192.168.1.5'`
`=` | equals | `inet '192.168.1.5' = inet '192.168.1.5'`
`>=` | is greater or equal | `inet '192.168.1.5' >= inet '192.168.1.5'`
`>` | is greater than | `inet '192.168.1.5' > inet '192.168.1.4'`
`<>` | is not equal | `inet '192.168.1.5' <> inet '192.168.1.4'`
`<<` | is contained by | `inet '192.168.1.5' << inet '192.168.1/24'`
`<<=` | is contained by or equals | `inet '192.168.1/24' <<= inet '192.168.1/24'`
`>>` | contains | `inet '192.168.1/24' >> inet '192.168.1.5'`
`>>=` | contains or equals | `inet '192.168.1/24' >>= inet '192.168.1/24'`
`&&` | contains or is contained by | `inet '192.168.1/24' && inet '192.168.1.80/28'`
`~` | bitwise NOT | `~ inet '192.168.1.6'`
`&` | bitwise AND | `inet '192.168.1.6' & inet '0.0.0.255'`
`\|` | bitwise OR | `inet '192.168.1.6' \| inet '0.0.0.255'`
`+` | addition | `inet '192.168.1.6' + 25`
`-` | subtraction | `inet '192.168.1.43' - 36`
`-` | subtraction | `inet '192.168.1.43' - inet '192.168.1.19'`
Operator | Description | Example
---|---|---
`+` | Translation | `box '((0,0),(1,1))' + point '(2.0,0)'`
`-` | Translation | `box '((0,0),(1,1))' - point '(2.0,0)'`
`*` | Scaling/rotation | `box '((0,0),(1,1))' * point '(2.0,0)'`
`/` | Scaling/rotation | `box '((0,0),(2,2))' / point '(2.0,0)'`
`#` | Point or box of intersection | `box '((1,-1),(-1,1))' # box '((1,1),(-2,-2))'`
`#` | Number of points in path or polygon | `# path '((1,0),(0,1),(-1,0))'`
`@-@` | Length or circumference | `@-@ path '((0,0),(1,0))'`
`@@` | Center | `@@ circle '((0,0),10)'`
`##` | Closest point to first operand on second operand | `point '(0,0)' ## lseg '((2,0),(0,2))'`
`<->` | Distance between | `circle '((0,0),1)' <-> circle '((5,0),1)'`
`&&` | Overlaps? (One point in common makes this true.) | `box '((0,0),(1,1))' && box '((0,0),(2,2))'`
`<<` | Is strictly left of? | `circle '((0,0),1)' << circle '((5,0),1)'`
`>>` | Is strictly right of? | `circle '((5,0),1)' >> circle '((0,0),1)'`
`&<` | Does not extend to the right of? | `box '((0,0),(1,1))' &< box '((0,0),(2,2))'`
`&>` | Does not extend to the left of? | `box '((0,0),(3,3))' &> box '((0,0),(2,2))'`
`<<\|` | Is strictly below? | `box '((0,0),(3,3))' <<\| box '((3,4),(5,5))'`
`\|>>` | Is strictly above? | `box '((3,4),(5,5))' \|>> box '((0,0),(3,3))'`
`&<\|` | Does not extend above? | `box '((0,0),(1,1))' &<\| box '((0,0),(2,2))'`
`\|&>` | Does not extend below? | `box '((0,0),(3,3))' \|&> box '((0,0),(2,2))'`
`<^` | Is below (allows touching)? | `circle '((0,0),1)' <^ circle '((0,5),1)'`
`>^` | Is above (allows touching)? | `circle '((0,5),1)' >^ circle '((0,0),1)'`
`?#` | Intersects? | `lseg '((-1,0),(1,0))' ?# box '((-2,-2),(2,2))'`
`?-` | Is horizontal? | `?- lseg '((-1,0),(1,0))'`
`?-` | Are horizontally aligned? | `point '(1,0)' ?- point '(0,0)'`
`?\|` | Is vertical? | `?\| lseg '((-1,0),(1,0))'`
`?\|` | Are vertically aligned? | `point '(0,1)' ?\| point '(0,0)'`
`?-\|` | Is perpendicular? | `lseg '((0,0),(0,1))' ?-\| lseg '((0,0),(1,0))'`
`?\|\|` | Are parallel? | `lseg '((-1,0),(1,0))' ?\|\| lseg '((-1,2),(1,2))'`
`@>` | Contains? | `circle '((0,0),2)' @> point '(1,1)'`
`<@` | Contained in or on? | `point '(1,1)' <@ circle '((0,0),2)'`
`~=` | Same as? | `polygon '((0,0),(1,1))' ~= polygon '((1,1),(0,0))'`
Operator | Description | Example | Result
---|---|---|---
`@@` | `tsvector` matches `tsquery`? | `to_tsvector('fat cats ate rats') @@ to_tsquery('cat & rat')` | `t`
`@@@` | deprecated synonym for `@@` | `to_tsvector('fat cats ate rats') @@@ to_tsquery('cat & rat')` | `t`
`\|\|` | concatenate `tsvector`s | `'a:1 b:2'::tsvector \|\| 'c:1 d:2 b:3'::tsvector` | `'a':1 'b':2,5 'c':3 'd':4`
`&&` | AND `tsquery`s together | `'fat \| rat'::tsquery && 'cat'::tsquery` | `( 'fat' \| 'rat' ) & 'cat'`
`\|\|` | OR `tsquery`s together | `'fat \| rat'::tsquery \|\| 'cat'::tsquery` | `( 'fat' \| 'rat' ) \| 'cat'`
`!!` | negate a `tsquery` | `!! 'cat'::tsquery` | `!'cat'`
`<->` | `tsquery` followed by `tsquery` | `to_tsquery('fat') <-> to_tsquery('rat')` | `'fat' <-> 'rat'`
`@>` | `tsquery` contains another? | `'cat'::tsquery @> 'cat & rat'::tsquery` | `f`
`<@` | `tsquery` is contained in? | `'cat'::tsquery <@ 'cat & rat'::tsquery` | `t`

Function | Return Type | Description
---|---|---
`array_to_tsvector(text[])` | `tsvector` | convert array of lexemes to `tsvector`
`get_current_ts_config()` | `regconfig` | get default text search configuration
`length(tsvector)` | `integer` | number of lexemes in `tsvector`
`numnode(tsquery)` | `integer` | number of lexemes plus operators in `tsquery` (e.g., `numnode('(fat & rat) \| cat'::tsquery)` is 5)
`plainto_tsquery([config regconfig,] query text)` | `tsquery` | produce `tsquery` ignoring punctuation
`phraseto_tsquery([config regconfig,] query text)` | `tsquery` | produce `tsquery` that searches for a phrase, ignoring punctuation
`querytree(query tsquery)` | `text` | get indexable part of a `tsquery`
`setweight(vector tsvector, weight "char")` | `tsvector` | assign `weight` to each element of `vector`
`setweight(vector tsvector, weight "char", lexemes text[])` | `tsvector` | assign `weight` to elements of `vector` that are listed in `lexemes`
`strip(tsvector)` | `tsvector` | remove positions and weights from `tsvector`
`to_tsquery([config regconfig,] query text)` | `tsquery` | normalize words and convert to `tsquery`
`to_tsvector([config regconfig,] document text)` | `tsvector` | reduce document text to `tsvector`
`to_tsvector([config regconfig,] document json(b))` | `tsvector` | reduce each string value in the document to a `tsvector`, concatenated in document order
`ts_delete(vector tsvector, lexeme text)` | `tsvector` | remove given `lexeme` from `vector`
`ts_delete(vector tsvector, lexemes text[])` | `tsvector` | remove any occurrence of lexemes in `lexemes` from `vector`
`ts_filter(vector tsvector, weights "char"[])` | `tsvector` | select only elements with given `weights` from `vector`
`ts_headline([config regconfig,] document text, query tsquery [, options text])` | `text` | display a query match
`ts_headline([config regconfig,] document json(b), query tsquery [, options text])` | `text` | display a query match
`ts_rank([weights float4[],] vector tsvector, query tsquery [, normalization integer])` | `float4` | rank document for query
`ts_rank_cd([weights float4[],] vector tsvector, query tsquery [, normalization integer])` | `float4` | rank document for query using cover density
`ts_rewrite(query tsquery, target tsquery, substitute tsquery)` | `tsquery` | replace `target` with `substitute` within query (e.g., `ts_rewrite('a & b'::tsquery, 'a'::tsquery, 'foo \| bar'::tsquery)` is `'b' & ( 'foo' \| 'bar' )`)
`ts_rewrite(query tsquery, select text)` | `tsquery` | replace using targets and substitutes from a `SELECT` command
`tsquery_phrase(query1 tsquery, query2 tsquery)` | `tsquery` | make query that searches for `query1` followed by `query2` (same as `<->` operator)
`tsquery_phrase(query1 tsquery, query2 tsquery, distance integer)` | `tsquery` | make query that searches for `query1` followed by `query2` at distance `distance`
`tsvector_to_array(tsvector)` | `text[]` | convert `tsvector` to array of lexemes
`tsvector_update_trigger()` | `trigger` | trigger function for automatic `tsvector` column update
`tsvector_update_trigger_column()` | `trigger` | trigger function for automatic `tsvector` column update
`unnest(tsvector)` | `setof record` | expand a tsvector to a set of rows
This section describes the SQL-compliant conditional expressions available in PostgreSQL.
If your needs go beyond the capabilities of these conditional expressions, consider writing a stored procedure in a more expressive procedural language.
CASE
The SQL CASE expression is a generic conditional expression, similar to if/else statements in other procedural languages:
A CASE clause can be used wherever an expression is valid. Each condition is an expression that returns a boolean result. If the condition's result is true, the value of the CASE expression is the result that follows the condition, and the remainder of the CASE expression is not processed. If the condition's result is not true, any subsequent WHEN clauses are examined in the same manner. If no WHEN condition yields true, the value of the CASE expression is the result of the ELSE clause. If the ELSE clause is omitted and no condition is true, the result is null.
An example:
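A minimal example, assuming an illustrative table `test` with an integer column `a`:

```sql
SELECT a,
       CASE WHEN a = 1 THEN 'one'
            WHEN a = 2 THEN 'two'
            ELSE 'other'
       END
FROM test;
```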
The data types of all the result expressions must be convertible to a single output type. See Section 10.5 for more details.
There is a “simple” form of CASE expression that is a variant of the general form above:
The first expression is computed, then compared to each of the value expressions in the WHEN clauses until one is found that equals it. If no match is found, the result of the ELSE clause (or a null value) is returned. This is similar to the switch statement in C.
The example above can be written using the simple CASE syntax:
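Using the same illustrative table `test`:

```sql
SELECT a,
       CASE a WHEN 1 THEN 'one'
              WHEN 2 THEN 'two'
              ELSE 'other'
       END
FROM test;
```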
A CASE expression does not evaluate any subexpressions that are not needed to determine the result. For example, this is a possible way of avoiding a division-by-zero exception:
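A sketch of that guard, with illustrative columns `x` and `y`:

```sql
-- y/x is evaluated only for rows where x is nonzero
SELECT * FROM tab WHERE CASE WHEN x <> 0 THEN y / x > 1.5 ELSE false END;
```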
As described in Section 4.2.14, there are various situations in which subexpressions of an expression are evaluated at different times, so the principle that “CASE evaluates only necessary subexpressions” is not ironclad. For example, a constant 1/0 subexpression will usually result in a division-by-zero failure at planning time, even if it appears within a CASE arm that would never be entered at run time.
COALESCE
The COALESCE function returns the first of its arguments that is not null. Null is returned only if all arguments are null. It is often used to substitute a default value for null values when data is retrieved for display, for example:
This returns description if it is not null; otherwise short_description if it is not null; otherwise (none).
Like a CASE expression, COALESCE only evaluates the arguments that are needed to determine the result; that is, arguments to the right of the first non-null argument are not evaluated. This SQL-standard function provides capabilities similar to NVL and IFNULL, which are used in some other database systems.
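The display-default pattern sketched against an illustrative `products` table:

```sql
SELECT COALESCE(description, short_description, '(none)') AS shown
FROM products;
```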
NULLIF
The NULLIF function returns a null value if value1 equals value2; otherwise it returns value1. This can be used to perform the inverse operation of the COALESCE example given above:
In this example, if value is (none), null is returned; otherwise the value of value is returned.
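Sketched concretely (`tab` and its `value` column are illustrative):

```sql
-- Yields NULL where value = '(none)', and value itself otherwise
SELECT NULLIF(value, '(none)') FROM tab;
```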
GREATEST and LEAST
The GREATEST and LEAST functions select the largest or smallest value from a list of any number of expressions. The expressions must all be convertible to a common data type, which becomes the type of the result (see Section 10.5 for details). NULL values in the list are ignored; the result is NULL only if all the expressions evaluate to NULL.
Note that GREATEST and LEAST are not in the SQL standard, but they are a common extension. Some other databases make them return NULL if any argument is NULL, rather than only when all are NULL.
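For example:

```sql
SELECT GREATEST(1, 2, 3);      -- 3
SELECT LEAST(1, NULL, 3);      -- 1: the NULL is ignored
SELECT GREATEST(NULL, NULL);   -- NULL: every argument is null
```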
Function | Return Type | Description
---|---|---
`age(timestamp, timestamp)` | `interval` | Subtract arguments, producing a “symbolic” result that uses years and months, rather than just days
`age(timestamp)` | `interval` | Subtract from `current_date` (at midnight)
`isfinite(date)` | `boolean` | Test for finite date (not +/-infinity)
`isfinite(timestamp)` | `boolean` | Test for finite time stamp (not +/-infinity)
`isfinite(interval)` | `boolean` | Test for finite interval
`justify_days(interval)` | `interval` | Adjust interval so 30-day time periods are represented as months
`justify_hours(interval)` | `interval` | Adjust interval so 24-hour time periods are represented as days
`justify_interval(interval)` | `interval` | Adjust interval using `justify_days` and `justify_hours`, with additional sign adjustments
`make_date(year int, month int, day int)` | `date` | Create date from year, month and day fields
`make_interval(years int, months int, weeks int, days int, hours int, mins int, secs double precision)` | `interval` | Create interval from years, months, weeks, days, hours, minutes and seconds fields
`make_time(hour int, min int, sec double precision)` | `time` | Create time from hour, minute and seconds fields
`make_timestamp(year int, month int, day int, hour int, min int, sec double precision)` | `timestamp` | Create timestamp from year, month, day, hour, minute and seconds fields
`make_timestamptz(year int, month int, day int, hour int, min int, sec double precision [, timezone text])` | `timestamp with time zone` | Create timestamp with time zone from year, month, day, hour, minute and seconds fields; if `timezone` is not specified, the current time zone is used
`to_timestamp(double precision)` | `timestamp with time zone` | Convert Unix epoch (seconds since 1970-01-01 00:00:00+00) to timestamp
This section describes functions for operating on *sequence objects*, also called sequence generators or just sequences. Sequence objects are special single-row tables created with CREATE SEQUENCE. Sequence objects are commonly used to generate unique identifiers for rows of a table. The sequence functions, listed in Table 9.47, provide simple, multiuser-safe methods for obtaining successive sequence values from sequence objects.
Table 9.47. Sequence Functions
Function | Return Type | Description
---|---|---
The sequence to be operated on by a sequence function is specified by a regclass argument, which is simply the OID of the sequence in the pg_class system catalog. You do not have to look up the OID by hand, however, since the regclass data type's input converter will do the work for you. Just write the sequence name enclosed in single quotes so that it looks like a literal constant. For compatibility with the handling of ordinary SQL names, the string will be converted to lower case unless it contains double quotes around the sequence name. Thus:
The sequence name can be schema-qualified if necessary:
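For example, for a hypothetical sequence named foo:

```sql
nextval('foo')              -- operates on sequence foo
nextval('FOO')              -- same as above (unquoted names fold to lower case)
nextval('"Foo"')            -- operates on sequence Foo
nextval('myschema.foo')     -- operates on myschema.foo
nextval('"myschema".foo')   -- same as above
```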
See Section 8.18 for more information about regclass.
Before PostgreSQL 8.1, the arguments of the sequence functions were of type text, not regclass, and the above-described conversion from a text string to an OID value would happen at run time during each call. For backward compatibility, this facility still exists, but internally it is now handled as an implicit coercion from text to regclass before the function is invoked.
When you write the argument of a sequence function as an unadorned literal string, it becomes a constant of type regclass. Since this is really just an OID, it will track the originally identified sequence despite later renaming, schema reassignment, etc. This “early binding” behavior is usually desirable for sequence references in column defaults and views. But sometimes you might want “late binding” where the sequence reference is resolved at run time. To get late-binding behavior, force the constant to be stored as a text constant instead of regclass:
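For a hypothetical sequence foo:

```sql
nextval('foo'::text)        -- foo is looked up at run time
```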
Note that late binding was the only behavior supported in PostgreSQL releases before 8.1, so you might need to do this to preserve the semantics of old applications.
Of course, the argument of a sequence function can be an expression as well as a constant. If it is a text expression then the implicit coercion will result in a run-time lookup.
The available sequence functions are:
nextval
Advance the sequence object to its next value and return that value. This is done atomically: even if multiple sessions execute nextval concurrently, each will safely receive a distinct sequence value.

If a sequence object has been created with default parameters, successive nextval calls will return successive values beginning with 1. Other behaviors can be obtained by using special parameters in the CREATE SEQUENCE command; see its command reference page for more information.

To avoid blocking concurrent transactions that obtain numbers from the same sequence, a nextval operation is never rolled back; that is, once a value has been fetched it is considered used and will not be returned again. This is true even if the surrounding transaction later aborts, or if the calling query ends up not using the value. For example an INSERT with an ON CONFLICT clause will compute the to-be-inserted tuple, including doing any required nextval calls, before detecting any conflict that would cause it to follow the ON CONFLICT rule instead. Such cases will leave unused “holes” in the sequence of assigned values. Thus, PostgreSQL sequence objects *cannot be used to obtain “gapless” sequences*.

This function requires USAGE or UPDATE privilege on the sequence.
currval
Return the value most recently obtained by nextval for this sequence in the current session. (An error is reported if nextval has never been called for this sequence in this session.) Because this is returning a session-local value, it gives a predictable answer whether or not other sessions have executed nextval since the current session did.

This function requires USAGE or SELECT privilege on the sequence.
lastval
Return the value most recently returned by nextval in the current session. This function is identical to currval, except that instead of taking the sequence name as an argument it refers to whichever sequence nextval was most recently applied to in the current session. It is an error to call lastval if nextval has not yet been called in the current session.

This function requires USAGE or SELECT privilege on the last used sequence.
setval
Reset the sequence object's counter value. The two-parameter form sets the sequence's last_value field to the specified value and sets its is_called field to true, meaning that the next nextval will advance the sequence before returning a value. The value reported by currval is also set to the specified value. In the three-parameter form, is_called can be set to either true or false. true has the same effect as the two-parameter form. If it is set to false, the next nextval will return exactly the specified value, and sequence advancement commences with the following nextval. Furthermore, the value reported by currval is not changed in this case. For example,
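With a hypothetical sequence foo:

```sql
SELECT setval('foo', 42);           -- next nextval will return 43
SELECT setval('foo', 42, true);     -- same as above
SELECT setval('foo', 42, false);    -- next nextval will return 42
```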
The result returned by setval is just the value of its second argument.

Because sequences are non-transactional, changes made by setval are not undone if the transaction rolls back.

This function requires UPDATE privilege on the sequence.
Table 9.48 shows the operators available for array types.
Table 9.48. Array Operators
Operator | Description | Example | Result
---|---|---|---
Array comparisons compare the array contents element-by-element, using the default B-tree comparison function for the element data type. In multidimensional arrays the elements are visited in row-major order (last subscript varies most rapidly). If the contents of two arrays are equal but the dimensionality is different, the first difference in the dimensionality information determines the sort order. (This is a change from versions of PostgreSQL prior to 8.2: older versions would claim that two arrays with the same contents were equal, even if the number of dimensions or subscript ranges were different.)
See Section 8.15 for more details about array operator behavior. See Section 11.2 for more details about which operators support indexed operations.
Table 9.49 shows the functions available for use with array types. See Section 8.15 for more information and examples of the use of these functions.
Table 9.49. Array Functions
In array_position and array_positions, each array element is compared to the searched value using IS NOT DISTINCT FROM semantics.

In array_position, NULL is returned if the value is not found.

In array_positions, NULL is returned only if the array is NULL; if the value is not found in the array, an empty array is returned instead.
In string_to_array, if the delimiter parameter is NULL, each character in the input string will become a separate element in the resulting array. If the delimiter is an empty string, then the entire input string is returned as a one-element array. Otherwise the input string is split at each occurrence of the delimiter string.

In string_to_array, if the null-string parameter is omitted or NULL, none of the substrings of the input will be replaced by NULL. In array_to_string, if the null-string parameter is omitted or NULL, any null elements in the array are simply skipped and not represented in the output string.

There are two differences in the behavior of string_to_array from pre-9.1 versions of PostgreSQL. First, it will return an empty (zero-element) array rather than NULL when the input string is of zero length. Second, if the delimiter string is NULL, the function splits the input into individual characters, rather than returning NULL as before.

See also Section 9.20 about the aggregate function array_agg for use with arrays.
See Section 8.17 for an overview of range types.
Table 9.55 shows the specialized operators available for range types. Table 9.56 shows the specialized operators available for multirange types. In addition to those, the usual comparison operators shown in Table 9.1 are available for range and multirange types. The comparison operators order first by the range lower bounds, and only if those are equal do they compare the upper bounds. The multirange operators compare each range until one is unequal. This does not usually result in a useful overall ordering, but the operators are provided to allow unique indexes to be constructed on ranges.
The left-of/right-of/adjacent operators always return false when an empty range or multirange is involved; that is, an empty range is not considered to be either before or after any other range.
Elsewhere empty ranges and multiranges are treated as the additive identity: anything unioned with an empty value is itself. Anything minus an empty value is itself. An empty multirange has exactly the same points as an empty range. Every range contains the empty range. Every multirange contains as many empty ranges as you like.
The range union and difference operators will fail if the resulting range would need to contain two disjoint sub-ranges, as such a range cannot be represented. There are separate operators for union and difference that take multirange parameters and return a multirange, and they do not fail even if their arguments are disjoint. So if you need a union or difference operation for ranges that may be disjoint, you can avoid errors by first casting your ranges to multiranges.
Table 9.57 shows the functions available for use with range types. Table 9.58 shows the functions available for use with multirange types.
The lower_inc, upper_inc, lower_inf, and upper_inf functions all return false for an empty range or multirange.
Aggregate functions compute a single result from a set of input values. The built-in general-purpose aggregate functions are listed in Table 9.55 while statistical aggregates are in Table 9.56. The built-in within-group ordered-set aggregate functions are listed in Table 9.57 while the built-in within-group hypothetical-set ones are in Table 9.58. Grouping operations, which are closely related to aggregate functions, are listed in Table 9.59. The special syntax considerations for aggregate functions are explained in Section 4.2.7. Consult Section 2.7 for additional introductory information.
Aggregate functions that support Partial Mode are eligible to participate in various optimizations, such as parallel aggregation.
It should be noted that except for count, these functions return a null value when no rows are selected. In particular, sum of no rows returns null, not zero as one might expect, and array_agg returns null rather than an empty array when there are no input rows. The coalesce function can be used to substitute zero or an empty array for null when necessary.

The aggregate functions array_agg, json_agg, jsonb_agg, json_object_agg, jsonb_object_agg, string_agg, and xmlagg, as well as similar user-defined aggregate functions, produce meaningfully different result values depending on the order of the input values. This ordering is unspecified by default, but can be controlled by writing an ORDER BY clause within the aggregate call, as shown in Section 4.2.7. Alternatively, supplying the input values from a sorted subquery will usually work. For example:
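A sketch, assuming a hypothetical table test(x, y):

```sql
SELECT xmlagg(x) FROM (SELECT x FROM test ORDER BY y DESC) AS tab;
```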
Beware that this approach can fail if the outer query level contains additional processing, such as a join, because that might cause the subquery's output to be reordered before the aggregate is computed.
The boolean aggregates bool_and and bool_or correspond to the standard SQL aggregates every and any or some. PostgreSQL supports every, but not any or some, because there is an ambiguity built into the standard syntax:
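Schematically (t1, t2, b1, b2 are placeholder names):

```sql
SELECT b1 = ANY((SELECT b2 FROM t2 ...)) FROM t1 ...;
```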
Here ANY can be considered either as introducing a subquery, or as being an aggregate function, if the subquery returns one row with a Boolean value. Thus the standard name cannot be given to these aggregates.
Users accustomed to working with other SQL database management systems might be disappointed by the performance of the count aggregate when it is applied to the entire table. A query like:
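For instance, with a hypothetical sometable:

```sql
SELECT count(*) FROM sometable;
```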
will require effort proportional to the size of the table: PostgreSQL will need to scan either the entire table or the entirety of an index that includes all rows in the table.
Table 9.56 shows aggregate functions typically used in statistical analysis. (These are separated out merely to avoid cluttering the listing of more-commonly-used aggregates.) Functions shown as accepting numeric_type are available for all the types smallint, integer, bigint, numeric, real, and double precision. Where the description mentions N, it means the number of input rows for which all the input expressions are non-null. In all cases, null is returned if the computation is meaningless, for example when N is zero.
Table 9.57 shows some aggregate functions that use the ordered-set aggregate syntax. These functions are sometimes referred to as “inverse distribution” functions. Their aggregated input is introduced by ORDER BY, and they may also take a direct argument that is not aggregated, but is computed only once. All these functions ignore null values in their aggregated input. For those that take a fraction parameter, the fraction value must be between 0 and 1; an error is thrown if not. However, a null fraction value simply produces a null result.
Each of the “hypothetical-set” aggregates listed in Table 9.58 is associated with a window function of the same name defined in Section 9.22. In each case, the aggregate's result is the value that the associated window function would have returned for the “hypothetical” row constructed from args, if such a row had been added to the sorted group of rows represented by the sorted_args. For each of these functions, the list of direct arguments given in args must match the number and types of the aggregated arguments given in sorted_args. Unlike most built-in aggregates, these aggregates are not strict, that is they do not drop input rows containing nulls. Null values sort according to the rule specified in the ORDER BY clause.
The grouping operations shown in Table 9.59 are used in conjunction with grouping sets (see Section 7.2.4) to distinguish result rows. The arguments to the GROUPING function are not actually evaluated, but they must exactly match expressions given in the GROUP BY clause of the associated query level. For example:
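A sketch using a hypothetical items_sold(make, model, sales) table:

```sql
SELECT make, model, GROUPING(make, model), sum(sales)
FROM items_sold
GROUP BY ROLLUP(make, model);
```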
Here, the grouping value 0 in the first four rows shows that those have been grouped normally, over both the grouping columns. The value 1 indicates that model was not grouped by in the next-to-last two rows, and the value 3 indicates that neither make nor model was grouped by in the last row (which therefore is an aggregate over all the input rows).
This section describes:
functions and operators for processing and creating JSON data
the SQL/JSON path language
To learn more about the SQL/JSON standard, see [sqltr-19075-6]. For details on JSON types supported in PostgreSQL, see Section 8.14.
Table 9.45 shows the operators that are available for use with JSON data types (see Section 8.14). In addition, the usual comparison operators shown in Table 9.1 are available for jsonb, though not for json. The comparison operators follow the ordering rules for B-tree operations outlined in Section 8.14.4. See also Section 9.21 for the aggregate function json_agg which aggregates record values as JSON, the aggregate function json_object_agg which aggregates pairs of values into a JSON object, and their jsonb equivalents, jsonb_agg and jsonb_object_agg.
json and jsonb Operators

The field/element/path extraction operators return NULL, rather than failing, if the JSON input does not have the right structure to match the request; for example if no such key or array element exists.
Some further operators exist only for jsonb, as shown in Table 9.46. Section 8.14.4 describes how these operators can be used to effectively search indexed jsonb data.
jsonb Operators

The jsonpath operators @? and @@ suppress the following errors: missing object field or array element, unexpected JSON item type, datetime and numeric errors. The jsonpath-related functions described below can also be told to suppress these types of errors. This behavior might be helpful when searching JSON document collections of varying structure.
Table 9.47 shows the functions that are available for constructing json and jsonb values.

Table 9.48 shows the functions that are available for processing json and jsonb values.
SQL/JSON path expressions specify the items to be retrieved from the JSON data, similar to XPath expressions used for SQL access to XML. In PostgreSQL, path expressions are implemented as the jsonpath data type and can use any elements described in Section 8.14.7.
JSON query functions and operators pass the provided path expression to the path engine for evaluation. If the expression matches the queried JSON data, the corresponding JSON item, or set of items, is returned. Path expressions are written in the SQL/JSON path language and can include arithmetic expressions and functions.
A path expression consists of a sequence of elements allowed by the jsonpath data type. The path expression is normally evaluated from left to right, but you can use parentheses to change the order of operations. If the evaluation is successful, a sequence of JSON items is produced, and the evaluation result is returned to the JSON query function that completes the specified computation.

To refer to the JSON value being queried (the context item), use the $ variable in the path expression. It can be followed by one or more accessor operators, which go down the JSON structure level by level to retrieve sub-items of the context item. Each operator that follows deals with the result of the previous evaluation step.
For example, suppose you have some JSON data from a GPS tracker that you would like to parse, such as:
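An illustrative document of this shape (the values are invented for the example):

```sql
SELECT '{
  "track": {
    "segments": [
      { "location":   [47.763, 13.4034],
        "start time": "2018-10-14 10:05:14",
        "HR": 73 },
      { "location":   [47.706, 13.2635],
        "start time": "2018-10-14 10:39:21",
        "HR": 135 }
    ]
  }
}'::jsonb;
```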
To retrieve the available track segments, you need to use the .key accessor operator to descend through surrounding JSON objects:
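Assuming the GPS document is stored in a jsonb column named gps_data of a hypothetical trips table:

```sql
SELECT jsonb_path_query(gps_data, '$.track.segments')
FROM trips;
```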
To retrieve the contents of an array, you typically use the [*] operator. For example, the following path will return the location coordinates for all the available track segments:
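A sketch, using the same hypothetical gps_data column:

```sql
SELECT jsonb_path_query(gps_data, '$.track.segments[*].location')
FROM trips;
```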
To return the coordinates of the first segment only, you can specify the corresponding subscript in the [] accessor operator. Recall that JSON array indexes are 0-relative:
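For example (hypothetical gps_data column):

```sql
SELECT jsonb_path_query(gps_data, '$.track.segments[0].location')
FROM trips;
```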
The result of each path evaluation step can be processed by one or more jsonpath operators and methods listed in Section 9.16.2.2. Each method name must be preceded by a dot. For example, you can get the size of an array:
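For the GPS data, a sketch would be:

```sql
SELECT jsonb_path_query(gps_data, '$.track.segments.size()')
FROM trips;   -- gps_data and trips are hypothetical names
```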
More examples of using jsonpath operators and methods within path expressions appear below in Section 9.16.2.2.
When defining a path, you can also use one or more filter expressions that work similarly to the WHERE clause in SQL. A filter expression begins with a question mark and provides a condition in parentheses:
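Schematically:

```
<path> ? (<condition>)
```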
Filter expressions must be written just after the path evaluation step to which they should apply. The result of that step is filtered to include only those items that satisfy the provided condition. SQL/JSON defines three-valued logic, so the condition can be true, false, or unknown. The unknown value plays the same role as SQL NULL and can be tested for with the is unknown predicate. Further path evaluation steps use only those items for which the filter expression returned true.
The functions and operators that can be used in filter expressions are listed in Table 9.50. Within a filter expression, the @ variable denotes the value being filtered (i.e., one result of the preceding path step). You can write accessor operators after @ to retrieve component items.
For example, suppose you would like to retrieve all heart rate values higher than 130. You can achieve this using the following expression:
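A sketch against the hypothetical gps_data column:

```sql
SELECT jsonb_path_query(gps_data, '$.track.segments[*].HR ? (@ > 130)')
FROM trips;
```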
To get the start times of segments with such values, you have to filter out irrelevant segments before returning the start times, so the filter expression is applied to the previous step, and the path used in the condition is different:
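For example (note the filter now applies to the segment objects themselves):

```sql
SELECT jsonb_path_query(gps_data,
                        '$.track.segments[*] ? (@.HR > 130)."start time"')
FROM trips;   -- hypothetical names
```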
You can use several filter expressions in sequence, if required. For example, the following expression selects start times of all segments that contain locations with relevant coordinates and high heart rate values:
Using filter expressions at different nesting levels is also allowed. The following example first filters all segments by location, and then returns high heart rate values for these segments, if available:
You can also nest filter expressions within each other:
This expression returns the size of the track if it contains any segments with high heart rate values, or an empty sequence otherwise.
PostgreSQL's implementation of the SQL/JSON path language has the following deviations from the SQL/JSON standard:
A path expression can be a Boolean predicate, although the SQL/JSON standard allows predicates only in filters. This is necessary for implementation of the @@ operator. For example, the following jsonpath expression is valid in PostgreSQL:
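For instance, a predicate-style expression such as:

```sql
SELECT '$.track.segments[*].HR > 130'::jsonpath;
```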
There are minor differences in the interpretation of regular expression patterns used in like_regex filters, as described in Section 9.16.2.3.
When you query JSON data, the path expression may not match the actual JSON data structure. An attempt to access a non-existent member of an object or element of an array results in a structural error. SQL/JSON path expressions have two modes of handling structural errors:
lax (default) — the path engine implicitly adapts the queried data to the specified path. Any remaining structural errors are suppressed and converted to empty SQL/JSON sequences.
strict — if a structural error occurs, an error is raised.
The lax mode facilitates matching of a JSON document structure and path expression if the JSON data does not conform to the expected schema. If an operand does not match the requirements of a particular operation, it can be automatically wrapped as an SQL/JSON array or unwrapped by converting its elements into an SQL/JSON sequence before performing this operation. Besides, comparison operators automatically unwrap their operands in the lax mode, so you can compare SQL/JSON arrays out-of-the-box. An array of size 1 is considered equal to its sole element. Automatic unwrapping is not performed only when:
The path expression contains type() or size() methods that return the type and the number of elements in the array, respectively.
The queried JSON data contain nested arrays. In this case, only the outermost array is unwrapped, while all the inner arrays remain unchanged. Thus, implicit unwrapping can only go one level down within each path evaluation step.
For example, when querying the GPS data listed above, you can abstract from the fact that it stores an array of segments when using the lax mode:
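For example, the [*] operator can be omitted in lax mode (hypothetical gps_data column):

```sql
SELECT jsonb_path_query(gps_data, 'lax $.track.segments.location')
FROM trips;
```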
In the strict mode, the specified path must exactly match the structure of the queried JSON document to return an SQL/JSON item, so using this path expression will cause an error. To get the same result as in the lax mode, you have to explicitly unwrap the segments array:
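A sketch of the strict-mode query with explicit unwrapping:

```sql
SELECT jsonb_path_query(gps_data, 'strict $.track.segments[*].location')
FROM trips;   -- hypothetical names
```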
The .** accessor can lead to surprising results when using the lax mode. For instance, the following query selects every HR value twice:
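For example (hypothetical gps_data column):

```sql
SELECT jsonb_path_query(gps_data, 'lax $.**.HR')
FROM trips;
```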
This happens because the .** accessor selects both the segments array and each of its elements, while the .HR accessor automatically unwraps arrays when using the lax mode. To avoid surprising results, we recommend using the .** accessor only in the strict mode. The following query selects each HR value just once:
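A sketch of the strict-mode variant:

```sql
SELECT jsonb_path_query(gps_data, 'strict $.**.HR')
FROM trips;   -- hypothetical names
```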
Table 9.49 shows the operators and methods available in jsonpath. Note that while the unary operators and methods can be applied to multiple values resulting from a preceding path step, the binary operators (addition etc.) can only be applied to single values.
Table 9.49. jsonpath Operators and Methods
The result type of the datetime() and datetime(template) methods can be date, timetz, time, timestamptz, or timestamp. Both methods determine their result type dynamically.
The datetime() method sequentially tries to match its input string to the ISO formats for date, timetz, time, timestamptz, and timestamp. It stops on the first matching format and emits the corresponding data type.
The datetime(template) method determines the result type according to the fields used in the provided template string.
The datetime() and datetime(template) methods use the same parsing rules as the to_timestamp SQL function does (see Section 9.8), with three exceptions. First, these methods don't allow unmatched template patterns. Second, only the following separators are allowed in the template string: minus sign, period, solidus (slash), comma, apostrophe, semicolon, colon and space. Third, separators in the template string must exactly match the input string.
If different date/time types need to be compared, an implicit cast is applied. A date value can be cast to timestamp or timestamptz, timestamp can be cast to timestamptz, and time to timetz. However, all but the first of these conversions depend on the current TimeZone setting, and thus can only be performed within timezone-aware jsonpath functions.
Table 9.50 shows the available filter expression elements.
jsonpath Filter Expression Elements

SQL/JSON path expressions allow matching text to a regular expression with the like_regex filter. For example, the following SQL/JSON path query would case-insensitively match all strings in an array that start with an English vowel:
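A sketch, assuming a jsonb array of strings in a hypothetical column arr of a hypothetical docs table:

```sql
SELECT jsonb_path_query(arr, '$[*] ? (@ like_regex "^[aeiou]" flag "i")')
FROM docs;
```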
The optional flag string may include one or more of the characters i for case-insensitive match, m to allow ^ and $ to match at newlines, s to allow . to match a newline, and q to quote the whole pattern (reducing the behavior to a simple substring match).
The SQL/JSON standard borrows its definition for regular expressions from the LIKE_REGEX operator, which in turn uses the XQuery standard. PostgreSQL does not currently support the LIKE_REGEX operator. Therefore, the like_regex filter is implemented using the POSIX regular expression engine described in Section 9.7.3. This leads to various minor discrepancies from standard SQL/JSON behavior, which are cataloged in Section 9.7.3.8. Note, however, that the flag-letter incompatibilities described there do not apply to SQL/JSON, as it translates the XQuery flag letters to match what the POSIX engine expects.
Keep in mind that the pattern argument of like_regex is a JSON path string literal, written according to the rules given in Section 8.14.7. This means in particular that any backslashes you want to use in the regular expression must be doubled. For example, to match string values of the root document that contain only digits:
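One way to write such a pattern (note the doubled backslashes):

```sql
SELECT '$.* ? (@ like_regex "^\\d+$")'::jsonpath;
```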
Window functions provide the ability to perform calculations across sets of rows that are related to the current query row. See for an introduction to this feature, and for syntax details.
The built-in window functions are listed in . Note that these functions must be invoked using window function syntax, i.e., an OVER clause is required.
In addition to these functions, any built-in or user-defined ordinary aggregate (i.e., not ordered-set or hypothetical-set aggregates) can be used as a window function; see for a list of the built-in aggregates. Aggregate functions act as window functions only when an OVER clause follows the call; otherwise they act as plain aggregates and return a single row for the entire set.
All of the functions listed in depend on the sort ordering specified by the ORDER BY clause of the associated window definition. Rows that are not distinct when considering only the ORDER BY columns are said to be peers. The four ranking functions (including cume_dist) are defined so that they give the same answer for all rows of a peer group.
Note that first_value, last_value, and nth_value consider only the rows within the “window frame”, which by default contains the rows from the start of the partition through the last peer of the current row. This is likely to give unhelpful results for last_value and sometimes also nth_value. You can redefine the frame by adding a suitable frame specification (RANGE, ROWS or GROUPS) to the OVER clause. See for more information about frame specifications.
When an aggregate function is used as a window function, it aggregates over the rows within the current row's window frame. An aggregate used with ORDER BY and the default window frame definition produces a “running sum” type of behavior, which may or may not be what's wanted. To obtain aggregation over the whole partition, omit ORDER BY or use ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING. Other frame specifications can be used to obtain other effects.
The SQL standard defines a RESPECT NULLS or IGNORE NULLS option for lead, lag, first_value, last_value, and nth_value. This is not implemented in PostgreSQL: the behavior is always the same as the standard's default, namely RESPECT NULLS. Likewise, the standard's FROM FIRST or FROM LAST option for nth_value is not implemented: only the default FROM FIRST behavior is supported. (You can achieve the result of FROM LAST by reversing the ORDER BY ordering.)
This section describes the SQL-compliant subquery expressions available in PostgreSQL. All of the expression forms documented in this section return Boolean (true/false) results.
EXISTS
The argument of EXISTS is an arbitrary SELECT statement, or subquery. The subquery is evaluated to determine whether it returns any rows. If it returns at least one row, the result of EXISTS is “true”; if the subquery returns no rows, the result of EXISTS is “false”.
The subquery can refer to variables from the surrounding query, which will act as constants during any one evaluation of the subquery.
The subquery will generally only be executed long enough to determine whether at least one row is returned, not all the way to completion. It is unwise to write a subquery that has side effects (such as calling sequence functions); whether the side effects occur might be unpredictable.
Since the result depends only on whether any rows are returned, and not on the contents of those rows, the output list of the subquery is normally unimportant. A common coding convention is to write all EXISTS tests in the form EXISTS(SELECT 1 WHERE ...). There are exceptions to this rule however, such as subqueries that use INTERSECT.
This simple example is like an inner join on col2, but it produces at most one output row for each tab1 row, even if there are several matching tab2 rows:
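A sketch of that query:

```sql
SELECT col1
FROM tab1
WHERE EXISTS (SELECT 1 FROM tab2 WHERE col2 = tab1.col2);
```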
IN
The right-hand side is a parenthesized subquery, which must return exactly one column. The left-hand expression is evaluated and compared to each row of the subquery result. The result of IN is “true” if any equal subquery row is found. The result is “false” if no equal row is found (including the case where the subquery returns no rows).
Note that if the left-hand expression yields null, or if there are no equal right-hand values and at least one right-hand row yields null, the result of the IN construct will be null, not false. This is in accordance with SQL's normal rules for Boolean combinations of null values.
As with EXISTS, it's unwise to assume that the subquery will be evaluated completely.
As usual, null values in the rows are combined per the normal rules of SQL Boolean expressions. Two rows are considered equal if all their corresponding members are non-null and equal; the rows are unequal if any corresponding members are non-null and unequal; otherwise the result of that row comparison is unknown (null). If all the per-row results are either unequal or null, with at least one null, then the result of IN
is null.
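These null-handling rules can be illustrated with a small sketch, where a VALUES list stands in for an arbitrary subquery:

```sql
SELECT 1 IN (SELECT x FROM (VALUES (1), (NULL)) AS t(x));  -- true: an equal row exists
SELECT 2 IN (SELECT x FROM (VALUES (1), (NULL)) AS t(x));  -- null: no equal row, but a null is present
SELECT 2 IN (SELECT x FROM (VALUES (1), (3)) AS t(x));     -- false: no equal row, no nulls
```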
NOT IN
The right-hand side is a parenthesized subquery, which must return exactly one column. The left-hand expression is evaluated and compared to each row of the subquery result. The result of NOT IN
is “true” if only unequal subquery rows are found (including the case where the subquery returns no rows). The result is “false” if any equal row is found.
Note that if the left-hand expression yields null, or if there are no equal right-hand values and at least one right-hand row yields null, the result of the NOT IN
construct will be null, not true. This is in accordance with SQL's normal rules for Boolean combinations of null values.
As with EXISTS
, it's unwise to assume that the subquery will be evaluated completely.
As usual, null values in the rows are combined per the normal rules of SQL Boolean expressions. Two rows are considered equal if all their corresponding members are non-null and equal; the rows are unequal if any corresponding members are non-null and unequal; otherwise the result of that row comparison is unknown (null). If all the per-row results are either unequal or null, with at least one null, then the result of NOT IN
is null.
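The same cases for NOT IN, again with a VALUES list standing in for an arbitrary subquery:

```sql
SELECT 2 NOT IN (SELECT x FROM (VALUES (1), (3)) AS t(x));     -- true: no equal row
SELECT 1 NOT IN (SELECT x FROM (VALUES (1), (NULL)) AS t(x));  -- false: an equal row exists
SELECT 2 NOT IN (SELECT x FROM (VALUES (1), (NULL)) AS t(x));  -- null, not true
```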
ANY
/SOME
The right-hand side is a parenthesized subquery, which must return exactly one column. The left-hand expression is evaluated and compared to each row of the subquery result using the given operator
, which must yield a Boolean result. The result of ANY
is “true” if any true result is obtained. The result is “false” if no true result is found (including the case where the subquery returns no rows).
SOME
is a synonym for ANY
. IN
is equivalent to = ANY
.
Note that if there are no successes and at least one right-hand row yields null for the operator's result, the result of the ANY
construct will be null, not false. This is in accordance with SQL's normal rules for Boolean combinations of null values.
As with EXISTS
, it's unwise to assume that the subquery will be evaluated completely.
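A brief sketch of ANY with an explicit operator:

```sql
-- 10 < 20 holds for at least one subquery row, so ANY yields true.
SELECT 10 < ANY (SELECT x FROM (VALUES (5), (20)) AS t(x));        -- true
-- The subquery returns no rows, so ANY yields false.
SELECT 10 < ANY (SELECT x FROM (VALUES (5)) AS t(x) WHERE false);  -- false
```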
ALL
The right-hand side is a parenthesized subquery, which must return exactly one column. The left-hand expression is evaluated and compared to each row of the subquery result using the given operator
, which must yield a Boolean result. The result of ALL
is “true” if all rows yield true (including the case where the subquery returns no rows). The result is “false” if any false result is found. The result is NULL if no comparison with a subquery row returns false, and at least one comparison returns NULL.
NOT IN
is equivalent to <> ALL
.
As with EXISTS
, it's unwise to assume that the subquery will be evaluated completely.
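A brief sketch of ALL with an explicit operator:

```sql
SELECT 10 < ALL (SELECT x FROM (VALUES (20), (30)) AS t(x));  -- true: every row satisfies it
SELECT 10 < ALL (SELECT x FROM (VALUES (5), (30)) AS t(x));   -- false: one comparison fails
```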
This section describes several specialized constructs for making multiple comparisons between groups of values. These forms are syntactically related to the subquery forms of the previous section, but do not involve subqueries. The forms involving array subexpressions are PostgreSQL extensions; the rest are SQL-compliant. All of the expression forms documented in this section return Boolean (true/false) results.
IN
The right-hand side is a parenthesized list of scalar expressions. The result is “true” if the left-hand expression's result is equal to any of the right-hand expressions. This is a shorthand notation for a chain of OR'd equality comparisons.
Note that if the left-hand expression yields null, or if there are no equal right-hand values and at least one right-hand expression yields null, the result of the IN construct will be null, not false. This is in accordance with SQL's normal rules for Boolean combinations of null values.
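A few illustrative cases of the scalar-list form:

```sql
SELECT 2 IN (1, 2, 3);     -- true
SELECT 4 IN (1, 2, 3);     -- false
SELECT 4 IN (1, NULL, 3);  -- null, not false
SELECT NULL IN (1, 2, 3);  -- null
```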
NOT IN
The right-hand side is a parenthesized list of scalar expressions. The result is “true” if the left-hand expression's result is unequal to all of the right-hand expressions. This is a shorthand notation for a chain of AND'd inequality comparisons.
Note that if the left-hand expression yields null, or if there are no equal right-hand values and at least one right-hand expression yields null, the result of the NOT IN construct will be null, not true. This is in accordance with SQL's normal rules for Boolean combinations of null values.
x NOT IN y is equivalent to NOT (x IN y) in all cases. However, null values are much more likely to trip up the novice when working with NOT IN than when working with IN. It is best to express your condition positively if possible.
ANY
/SOME
(array)
The right-hand side is a parenthesized expression, which must yield an array value. The left-hand expression is evaluated and compared to each element of the array using the given operator, which must yield a Boolean result. The result of ANY is “true” if any true result is obtained. The result is “false” if no true result is found (including the case where the array has zero elements).
If the array expression yields a null array, the result of ANY will be null. If the left-hand expression yields null, the result of ANY is ordinarily null (though a non-strict comparison operator could possibly yield a different result). Also, if the right-hand array contains any null elements and no true comparison result is obtained, the result of ANY will be null, not false (again, assuming a strict comparison operator). This is in accordance with SQL's normal rules for Boolean combinations of null values.
SOME is a synonym for ANY.
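The array form of ANY can be sketched as:

```sql
SELECT 3 = ANY (ARRAY[1, 2, 3]);      -- true
SELECT 5 = ANY (ARRAY[1, NULL, 3]);   -- null, not false
SELECT 5 = ANY (ARRAY[]::integer[]);  -- false: zero elements
```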
ALL
(array)
The right-hand side is a parenthesized expression, which must yield an array value. The left-hand expression is evaluated and compared to each element of the array using the given operator, which must yield a Boolean result. The result of ALL is “true” if all comparisons yield true (including the case where the array has zero elements). The result is “false” if any false result is found.
If the array expression yields a null array, the result of ALL will be null. If the left-hand expression yields null, the result of ALL is ordinarily null (though a non-strict comparison operator could possibly yield a different result). Also, if the right-hand array contains any null elements and no false comparison result is obtained, the result of ALL will be null, not true (again, assuming a strict comparison operator). This is in accordance with SQL's normal rules for Boolean combinations of null values.
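The array form of ALL can be sketched as:

```sql
SELECT 5 > ALL (ARRAY[1, 2, 3]);     -- true
SELECT 2 > ALL (ARRAY[1, 2, 3]);     -- false: 2 > 3 fails
SELECT 5 > ALL (ARRAY[1, NULL, 3]);  -- null, not true
```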
Each side is a row constructor, as described in Section 4.2.13. The two row values must have the same number of fields. Each side is evaluated and they are compared row-wise. Row constructor comparisons are allowed when the operator is =, <>, <, <=, > or >=. Every row element must be of a type which has a default B-tree operator class, or the attempted comparison may generate an error.
Errors related to the number or types of elements might not occur if the comparison is resolved using earlier columns.
The = and <> cases work slightly differently from the others. Two rows are considered equal if all their corresponding members are non-null and equal; the rows are unequal if any corresponding members are non-null and unequal; otherwise the result of the row comparison is unknown (null).
For the <, <=, > and >= cases, the row elements are compared left-to-right, stopping as soon as an unequal or null pair of elements is found. If either of this pair of elements is null, the result of the row comparison is unknown (null); otherwise comparison of this pair of elements determines the result. For example, ROW(1, 2, NULL) < ROW(1, 3, 0) yields true, not null, because the third pair of elements is not considered.
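The left-to-right rule can be demonstrated directly:

```sql
SELECT ROW(1, 2, NULL) < ROW(1, 3, 0);  -- true: decided at the second pair
SELECT ROW(1, NULL, 0) < ROW(1, 3, 0);  -- null: the first unequal-or-null pair contains a null
```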
Prior to PostgreSQL 8.2, the <, <=, > and >= cases were not handled per the SQL specification. A comparison like ROW(a, b) < ROW(c, d) was implemented as a < c AND b < d, whereas the correct behavior is equivalent to a < c OR (a = c AND b < d).
This construct is similar to a <> row comparison, but it does not yield null for null inputs. Instead, any null value is considered unequal to (distinct from) any non-null value, and any two nulls are considered equal (not distinct). Thus the result will always be either true or false, never null.
This construct is similar to a = row comparison, but it does not yield null for null inputs. Instead, any null value is considered unequal to (distinct from) any non-null value, and any two nulls are considered equal (not distinct). Thus the result will always be either true or false, never null.
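The difference from ordinary equality shows up with nulls:

```sql
SELECT NULL = NULL;                                     -- null
SELECT NULL IS NOT DISTINCT FROM NULL;                  -- true
SELECT ROW(1, NULL) IS NOT DISTINCT FROM ROW(1, NULL);  -- true
```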
To support matching of rows which include elements without a default B-tree operator class, the following operators are defined for composite type comparison: *=, *<>, *<, *<=, *> and *>=. These operators compare the internal binary representation of the two rows. Two rows might have a different binary representation even though comparisons of the two rows with the equality operator is true. The ordering of rows under these comparison operators is deterministic but not otherwise meaningful. These operators are used internally for materialized views and might be useful for other specialized purposes such as replication, but are not intended to be generally useful for writing queries.
This section describes functions that possibly return more than one row. The most widely used functions in this class are series generating functions, as detailed in and . Other, more specialized set-returning functions are described elsewhere in this manual. See for ways to combine multiple set-returning functions.
When step
is positive, zero rows are returned if start
is greater than stop
. Conversely, when step
is negative, zero rows are returned if start
is less than stop
. Zero rows are also returned if any input is NULL
. It is an error for step
to be zero. Some examples follow:
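For instance:

```sql
SELECT * FROM generate_series(2, 4);
--  2
--  3
--  4
SELECT * FROM generate_series(5, 1, -2);
--  5
--  3
--  1
SELECT * FROM generate_series(4, 3);
--  (no rows: start is greater than stop with a positive step)
```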
generate_subscripts
is a convenience function that generates the set of valid subscripts for the specified dimension of the given array. Zero rows are returned for arrays that do not have the requested dimension, or if any input is NULL
. Some examples follow:
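For instance:

```sql
SELECT generate_subscripts('{10,20,30}'::int[], 1) AS s;
--  1
--  2
--  3
SELECT generate_subscripts('{10,20,30}'::int[], 2) AS s;
--  (no rows: the array has no second dimension)
```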
When a function in the FROM
clause is suffixed by WITH ORDINALITY
, a bigint
column is appended to the function's output column(s), which starts from 1 and increments by 1 for each row of the function's output. This is most useful in the case of set returning functions such as unnest()
.
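A sketch of WITH ORDINALITY with unnest():

```sql
SELECT * FROM unnest(ARRAY['a', 'b', 'c']) WITH ORDINALITY;
--  unnest | ordinality
-- --------+-----------
--  a      |          1
--  b      |          2
--  c      |          3
```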
The functions described in this section are used to control and monitor a PostgreSQL installation.
shows the functions available to query and alter run-time configuration parameters.
The functions shown in send control signals to other server processes. Use of these functions is restricted to superusers by default but access may be granted to others using GRANT
, with noted exceptions.
Each of these functions returns true
if successful and false
otherwise.
pg_cancel_backend
and pg_terminate_backend
send signals (SIGINT or SIGTERM respectively) to backend processes identified by process ID. The process ID of an active backend can be found from the pid
column of the pg_stat_activity
view, or by listing the postgres
processes on the server (using ps on Unix or the Task Manager on Windows). The role of an active backend can be found from the usename
column of the pg_stat_activity
view.
pg_current_wal_lsn
displays the current write-ahead log write location in the same format used by the above functions. Similarly, pg_current_wal_insert_lsn
displays the current write-ahead log insertion location and pg_current_wal_flush_lsn
displays the current write-ahead log flush location. The insertion location is the “logical” end of the write-ahead log at any instant, while the write location is the end of what has actually been written out from the server's internal buffers, and the flush location is the last location known to be written to durable storage. The write location is the end of what can be examined from outside the server, and is usually what you want if you are interested in archiving partially-complete write-ahead log files. The insertion and flush locations are made available primarily for server debugging purposes. These are all read-only operations and do not require superuser permissions.
You can use pg_walfile_name_offset
to extract the corresponding write-ahead log file name and byte offset from a pg_lsn
value. For example:
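A sketch of such a call; the file name and offset shown are purely illustrative, since they depend on the server's current WAL position:

```sql
SELECT * FROM pg_walfile_name_offset(pg_current_wal_lsn());
--         file_name         | file_offset
-- --------------------------+-------------
--  00000001000000000000000D |     4039624
```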
Similarly, pg_walfile_name
extracts just the write-ahead log file name. When the given write-ahead log location is exactly at a write-ahead log file boundary, both these functions return the name of the preceding write-ahead log file. This is usually the desired behavior for managing write-ahead log archiving behavior, since the preceding file is the last one that currently needs to be archived.
pg_wal_replay_pause
and pg_wal_replay_resume
cannot be executed while a promotion is ongoing. If a promotion is triggered while recovery is paused, the paused state ends and promotion continues.
If streaming replication is disabled, the paused state may continue indefinitely without a problem. If streaming replication is in progress then WAL records will continue to be received, which will eventually fill available disk space, depending upon the duration of the pause, the rate of WAL generation and available disk space.
PostgreSQL allows database sessions to synchronize their snapshots. A snapshot determines which data is visible to the transaction that is using the snapshot. Synchronized snapshots are necessary when two or more sessions need to see identical content in the database. If two sessions just start their transactions independently, there is always a possibility that some third transaction commits between the executions of the two START TRANSACTION
commands, so that one session sees the effects of that transaction and the other does not.
To solve this problem, PostgreSQL allows a transaction to export the snapshot it is using. As long as the exporting transaction remains open, other transactions can import its snapshot, and thereby be guaranteed that they see exactly the same view of the database that the first transaction sees. But note that any database changes made by any one of these transactions remain invisible to the other transactions, as is usual for changes made by uncommitted transactions. So the transactions are synchronized with respect to pre-existing data, but act normally for changes they make themselves.
The functions above that operate on tables or indexes accept a regclass
argument, which is simply the OID of the table or index in the pg_class
system catalog. You do not have to look up the OID by hand, however, since the regclass
data type's input converter will do the work for you. Just write the table name enclosed in single quotes so that it looks like a literal constant. For compatibility with the handling of ordinary SQL names, the string will be converted to lower case unless it contains double quotes around the table name.
For example, to check the total size of the data contained in a partitioned table measurement
, one could use the following query:
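A sketch of such a query; pg_partition_tree (available in PostgreSQL 12 and later) lists the root table and all of its partitions:

```sql
SELECT pg_size_pretty(sum(pg_relation_size(relid))) AS total_size
FROM pg_partition_tree('measurement');
```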
Note that granting users the EXECUTE privilege on pg_read_file()
, or related functions, allows them the ability to read any file on the server that the database server process can read; these functions bypass all in-database privilege checks. This means that, for example, a user with such access is able to read the contents of the pg_authid
table where authentication information is stored, as well as read any table data in the database. Therefore, granting access to these functions should be carefully considered.
Some of these functions take an optional missing_ok
parameter, which specifies the behavior when the file or directory does not exist. If true
, the function returns NULL
or an empty result set, as appropriate. If false
, an error is raised. The default is false
.
All these functions are intended to be used to lock application-defined resources, which can be identified either by a single 64-bit key value or two 32-bit key values (note that these two key spaces do not overlap). If another session already holds a conflicting lock on the same resource identifier, the functions will either wait until the resource becomes available, or return a false
result, as appropriate for the function. Locks can be either shared or exclusive: a shared lock does not conflict with other shared locks on the same resource, only with exclusive locks. Locks can be taken at session level (so that they are held until released or the session ends) or at transaction level (so that they are held until the current transaction ends; there is no provision for manual release). Multiple session-level lock requests stack, so that if the same resource identifier is locked three times there must then be three unlock requests to release the resource in advance of session end.
shows several functions that extract session and system information.
In addition to the functions listed in this section, there are a number of functions related to the statistics system that also provide system information. See for more information.
current_catalog
, current_role
, current_schema
, current_user
, session_user
, and user
have special syntactic status in SQL: they must be called without trailing parentheses. (In PostgreSQL, parentheses can optionally be used with current_schema
, but not with the others.)
current_schema
returns the name of the schema that is first in the search path (or a null value if the search path is empty). This is the schema that will be used for any tables or other named objects that are created without specifying a target schema. current_schemas(boolean)
returns an array of the names of all schemas presently in the search path. The Boolean option determines whether or not implicitly included system schemas such as pg_catalog
are included in the returned search path.
The search path can be altered at run time. The command is:
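The general form is SET search_path TO schema [, schema, ...]; for example (my_schema is an illustrative name):

```sql
SET search_path TO my_schema, public;
```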
inet_client_addr
returns the IP address of the current client, and inet_client_port
returns the port number. inet_server_addr
returns the IP address on which the server accepted the current connection, and inet_server_port
returns the port number. All these functions return NULL if the current connection is via a Unix-domain socket.
pg_blocking_pids
returns an array of the process IDs of the sessions that are blocking the server process with the specified process ID, or an empty array if there is no such server process or it is not blocked. One server process blocks another if it either holds a lock that conflicts with the blocked process's lock request (hard block), or is waiting for a lock that would conflict with the blocked process's lock request and is ahead of it in the wait queue (soft block). When using parallel queries the result always lists client-visible process IDs (that is, pg_backend_pid
results) even if the actual lock is held or awaited by a child worker process. As a result of that, there may be duplicated PIDs in the result. Also note that when a prepared transaction holds a conflicting lock, it will be represented by a zero process ID in the result of this function. Frequent calls to this function could have some impact on database performance, because it needs exclusive access to the lock manager's shared state for a short time.
pg_conf_load_time
returns the timestamp with time zone
when the server configuration files were last loaded. (If the current session was alive at the time, this will be the time when the session itself re-read the configuration files, so the reading will vary a little in different sessions. Otherwise it is the time when the postmaster process re-read the configuration files.)
pg_my_temp_schema
returns the OID of the current session's temporary schema, or zero if it has none (because it has not created any temporary tables). pg_is_other_temp_schema
returns true if the given OID is the OID of another session's temporary schema. (This can be useful, for example, to exclude other sessions' temporary tables from a catalog display.)
pg_postmaster_start_time
returns the timestamp with time zone
when the server started.
has_table_privilege
checks whether a user can access a table in a particular way. The user can be specified by name, by OID (pg_authid.oid
), public
to indicate the PUBLIC pseudo-role, or if the argument is omitted current_user
is assumed. The table can be specified by name or by OID. (Thus, there are actually six variants of has_table_privilege
, which can be distinguished by the number and types of their arguments.) When specifying by name, the name can be schema-qualified if necessary. The desired access privilege type is specified by a text string, which must evaluate to one of the values SELECT
, INSERT
, UPDATE
, DELETE
, TRUNCATE
, REFERENCES
, or TRIGGER
. Optionally, WITH GRANT OPTION
can be added to a privilege type to test whether the privilege is held with grant option. Also, multiple privilege types can be listed separated by commas, in which case the result will be true
if any of the listed privileges is held. (Case of the privilege string is not significant, and extra whitespace is allowed between but not within privilege names.) Some examples:
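For instance (role and table names here are illustrative):

```sql
SELECT has_table_privilege('myuser', 'mytable', 'select');
SELECT has_table_privilege('joe', 'mytable', 'INSERT, SELECT WITH GRANT OPTION');
```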
has_sequence_privilege
checks whether a user can access a sequence in a particular way. The possibilities for its arguments are analogous to has_table_privilege
. The desired access privilege type must evaluate to one of USAGE
, SELECT
, or UPDATE
.
has_any_column_privilege
checks whether a user can access any column of a table in a particular way. Its argument possibilities are analogous to has_table_privilege
, except that the desired access privilege type must evaluate to some combination of SELECT
, INSERT
, UPDATE
, or REFERENCES
. Note that having any of these privileges at the table level implicitly grants it for each column of the table, so has_any_column_privilege
will always return true
if has_table_privilege
does for the same arguments. But has_any_column_privilege
also succeeds if there is a column-level grant of the privilege for at least one column.
has_column_privilege
checks whether a user can access a column in a particular way. Its argument possibilities are analogous to has_table_privilege
, with the addition that the column can be specified either by name or attribute number. The desired access privilege type must evaluate to some combination of SELECT
, INSERT
, UPDATE
, or REFERENCES
. Note that having any of these privileges at the table level implicitly grants it for each column of the table.
has_database_privilege
checks whether a user can access a database in a particular way. Its argument possibilities are analogous to has_table_privilege
. The desired access privilege type must evaluate to some combination of CREATE
, CONNECT
, TEMPORARY
, or TEMP
(which is equivalent to TEMPORARY
).
has_foreign_data_wrapper_privilege
checks whether a user can access a foreign-data wrapper in a particular way. Its argument possibilities are analogous to has_table_privilege
. The desired access privilege type must evaluate to USAGE
.
has_language_privilege
checks whether a user can access a procedural language in a particular way. Its argument possibilities are analogous to has_table_privilege
. The desired access privilege type must evaluate to USAGE
.
has_schema_privilege
checks whether a user can access a schema in a particular way. Its argument possibilities are analogous to has_table_privilege
. The desired access privilege type must evaluate to some combination of CREATE
or USAGE
.
has_server_privilege
checks whether a user can access a foreign server in a particular way. Its argument possibilities are analogous to has_table_privilege
. The desired access privilege type must evaluate to USAGE
.
has_tablespace_privilege
checks whether a user can access a tablespace in a particular way. Its argument possibilities are analogous to has_table_privilege
. The desired access privilege type must evaluate to CREATE
.
pg_has_role
checks whether a user can access a role in a particular way. Its argument possibilities are analogous to has_table_privilege
, except that public
is not allowed as a user name. The desired access privilege type must evaluate to some combination of MEMBER
or USAGE
. MEMBER
denotes direct or indirect membership in the role (that is, the right to do SET ROLE
), while USAGE
denotes whether the privileges of the role are immediately available without doing SET ROLE
.
row_security_active
checks whether row level security is active for the specified table in the context of the current_user
and environment. The table can be specified by name or by OID.
aclitem Operators
aclitem Functions
aclexplode returns an aclitem array as a set of rows. Output columns are grantor oid, grantee oid (0 for PUBLIC), granted privilege as text (SELECT, ...) and whether the privilege is grantable as boolean. makeaclitem performs the inverse operation.
Each function performs the visibility check for one type of database object. Note that pg_table_is_visible
can also be used with views, materialized views, indexes, sequences and foreign tables; pg_function_is_visible
can also be used with procedures and aggregates; pg_type_is_visible
can also be used with domains. For functions and operators, an object in the search path is visible if there is no object of the same name and argument data type(s) earlier in the path. For operator classes, both name and associated index access method are considered.
All these functions require object OIDs to identify the object to be checked. If you want to test an object by name, it is convenient to use the OID alias types (regclass
, regtype
, regprocedure
, regoperator
, regconfig
, or regdictionary
), for example:
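For instance (myschema.widget is an illustrative type name):

```sql
SELECT pg_type_is_visible('myschema.widget'::regtype);
```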
Note that it would not make much sense to test a non-schema-qualified type name in this way — if the name can be recognized at all, it must be visible.
format_type
returns the SQL name of a data type that is identified by its type OID and possibly a type modifier. Pass NULL for the type modifier if no specific modifier is known.
pg_get_keywords
returns a set of records describing the SQL keywords recognized by the server. The word
column contains the keyword. The catcode
column contains a category code: U
for unreserved, C
for column name, T
for type or function name, or R
for reserved. The catdesc
column contains a possibly-localized string describing the category.
pg_get_constraintdef
, pg_get_indexdef
, pg_get_ruledef
, pg_get_statisticsobjdef
, and pg_get_triggerdef
, respectively reconstruct the creating command for a constraint, index, rule, extended statistics object, or trigger. (Note that this is a decompiled reconstruction, not the original text of the command.) pg_get_expr
decompiles the internal form of an individual expression, such as the default value for a column. It can be useful when examining the contents of system catalogs. If the expression might contain Vars, specify the OID of the relation they refer to as the second parameter; if no Vars are expected, zero is sufficient. pg_get_viewdef
reconstructs the SELECT
query that defines a view. Most of these functions come in two variants, one of which can optionally “pretty-print” the result. The pretty-printed format is more readable, but the default format is more likely to be interpreted the same way by future versions of PostgreSQL; avoid using pretty-printed output for dump purposes. Passing false
for the pretty-print parameter yields the same result as the variant that does not have the parameter at all.
pg_get_functiondef
returns a complete CREATE OR REPLACE FUNCTION
statement for a function. pg_get_function_arguments
returns the argument list of a function, in the form it would need to appear in within CREATE FUNCTION
. pg_get_function_result
similarly returns the appropriate RETURNS
clause for the function. pg_get_function_identity_arguments
returns the argument list necessary to identify a function, in the form it would need to appear in within ALTER FUNCTION
, for instance. This form omits default values.
pg_get_userbyid
extracts a role's name given its OID.
pg_options_to_table
returns the set of storage option name/value pairs (option_name
/option_value
) when passed pg_class
.reloptions
or pg_attribute
.attoptions
.
pg_tablespace_databases
allows a tablespace to be examined. It returns the set of OIDs of databases that have objects stored in the tablespace. If this function returns any rows, the tablespace is not empty and cannot be dropped. To display the specific objects populating the tablespace, you will need to connect to the databases identified by pg_tablespace_databases
and query their pg_class
catalogs.
The expression collation for
returns the collation of the value that is passed to it. Example:
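For instance:

```sql
SELECT collation for (description) FROM pg_description LIMIT 1;
-- "default"
SELECT collation for ('foo' COLLATE "de_DE");
-- "de_DE" (assuming that collation exists on the server)
```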
The value might be quoted and schema-qualified. If no collation is derived for the argument expression, then a null value is returned. If the argument is not of a collatable data type, then an error is raised.
The to_regclass
, to_regproc
, to_regprocedure
, to_regoper
, to_regoperator
, to_regtype
, to_regnamespace
, and to_regrole
functions translate relation, function, operator, type, schema, and role names (given as text
) to objects of type regclass
, regproc
, regprocedure
, regoper
, regoperator
, regtype
, regnamespace
, and regrole
respectively. These functions differ from a cast from text in that they don't accept a numeric OID, and that they return null rather than throwing an error if the name is not found (or, for to_regproc
and to_regoper
, if the given name matches multiple objects).
pg_describe_object
returns a textual description of a database object specified by catalog OID, object OID, and sub-object ID (such as a column number within a table; the sub-object ID is zero when referring to a whole object). This description is intended to be human-readable, and might be translated, depending on server configuration. This is useful to determine the identity of an object as stored in the pg_depend
catalog.
pg_identify_object
returns a row containing enough information to uniquely identify the database object specified by catalog OID, object OID and sub-object ID. This information is intended to be machine-readable, and is never translated. type
identifies the type of database object; schema
is the schema name that the object belongs in, or NULL
for object types that do not belong to schemas; name
is the name of the object, quoted if necessary, if the name (along with schema name, if pertinent) is sufficient to uniquely identify the object, otherwise NULL
; identity
is the complete object identity, with the precise format depending on object type, and each name within the format being schema-qualified and quoted as necessary.
pg_identify_object_as_address
returns a row containing enough information to uniquely identify the database object specified by catalog OID, object OID and sub-object ID. The returned information is independent of the current server, that is, it could be used to identify an identically named object in another server. type
identifies the type of database object; object_names
and object_args
are text arrays that together form a reference to the object. These three values can be passed to pg_get_object_address
to obtain the internal address of the object. This function is the inverse of pg_get_object_address
.
pg_get_object_address
returns a row containing enough information to uniquely identify the database object specified by its type and object name and argument arrays. The returned values are the ones that would be used in system catalogs such as pg_depend
and can be passed to other system functions such as pg_identify_object
or pg_describe_object
. classid
is the OID of the system catalog containing the object; objid
is the OID of the object itself, and objsubid
is the sub-object ID, or zero if none. This function is the inverse of pg_identify_object_as_address
.
Table 9.73. Comment Information Functions
col_description
returns the comment for a table column, which is specified by the OID of its table and its column number. (obj_description
cannot be used for table columns since columns do not have OIDs of their own.)
The two-parameter form of obj_description
returns the comment for a database object specified by its OID and the name of the containing system catalog. For example, obj_description(123456,'pg_class')
would retrieve the comment for the table with OID 123456. The one-parameter form of obj_description
requires only the object OID. It is deprecated since there is no guarantee that OIDs are unique across different system catalogs; therefore, the wrong comment might be returned.
shobj_description
is used just like obj_description
except it is used for retrieving comments on shared objects. Some system catalogs are global to all databases within each cluster, and the descriptions for objects in them are stored globally as well.
Table 9.74. Transaction IDs and Snapshots
Table 9.75. Snapshot Components
txid_snapshot
's textual representation is xmin
:xmax
:xip_list
. For example 10:20:10,14,15
means xmin=10, xmax=20, xip_list=10, 14, 15
.
Table 9.76. Committed Transaction Information
Table 9.77. Control Data Functions
Table 9.78. pg_control_checkpoint
Columns
Table 9.79. pg_control_system
Columns
Table 9.80. pg_control_init
Columns
Table 9.81. pg_control_recovery
Columns
While many uses of triggers involve user-written trigger functions, PostgreSQL provides a few built-in trigger functions that can be used directly in user-defined triggers. These are summarized in . (Additional built-in trigger functions exist, which implement foreign key constraints and deferrable index constraints. Those are not documented here since users need not use them directly.)
For more information about creating triggers, see .
The suppress_redundant_updates_trigger function, when applied as a row-level BEFORE UPDATE trigger, will prevent any update that does not actually change the data in the row from taking place. This overrides the normal behavior, which always performs the physical row update regardless of whether or not the data has changed. (This normal behavior makes updates run faster, since no checking is required, and is also useful in certain cases.)
Ideally, you should avoid running updates that don't actually change the data in the record. Redundant updates can cost considerable unnecessary time, especially if there are lots of indexes to alter, and space in dead rows that will eventually have to be vacuumed. However, detecting such situations in client code is not always easy, or even possible, and writing expressions to detect them is error-prone. An alternative is to use suppress_redundant_updates_trigger, which will skip updates that don't change the data. The trigger takes a small but non-trivial amount of time for each record, so if most of the records affected by an update are actually changed, use of this trigger will make the update run slower on average.
The suppress_redundant_updates_trigger function can be added to a table like this:
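A sketch of the trigger definition (tablename is a placeholder):

```sql
CREATE TRIGGER z_min_update
BEFORE UPDATE ON tablename
FOR EACH ROW EXECUTE FUNCTION suppress_redundant_updates_trigger();
```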
In most cases, you would want to fire this trigger last for each row, so that it does not override other triggers that might wish to alter the row. Bear in mind that triggers fire in name order, so you would choose a trigger name that comes after the name of any other trigger you might have on the table. (Hence the “z” prefix in the example.)
Current date and time (changes during statement execution); see
Current date; see
Current time of day; see
Current date and time (start of current transaction); see
Get subfield (equivalent to extract
); see
Get subfield (equivalent to extract
); see
Truncate to specified precision; see
Truncate to specified precision in the specified time zone; see
Truncate to specified precision; see
Get subfield; see
Get subfield; see
Current time of day; see
Current date and time (start of current transaction); see
Current date and time (start of current transaction); see
Current date and time (start of current statement); see
Current date and time (like clock_timestamp
, but as a text
string); see
Current date and time (start of current transaction); see
The left-hand side of this form of IN
is a row constructor, as described in . The right-hand side is a parenthesized subquery, which must return exactly as many columns as there are expressions in the left-hand row. The left-hand expressions are evaluated and compared row-wise to each row of the subquery result. The result of IN
is “true” if any equal subquery row is found. The result is “false” if no equal row is found (including the case where the subquery returns no rows).
The left-hand side of this form of NOT IN
is a row constructor, as described in . The right-hand side is a parenthesized subquery, which must return exactly as many columns as there are expressions in the left-hand row. The left-hand expressions are evaluated and compared row-wise to each row of the subquery result. The result of NOT IN
is “true” if only unequal subquery rows are found (including the case where the subquery returns no rows). The result is “false” if any equal row is found.
The left-hand side of this form of ANY
is a row constructor, as described in . The right-hand side is a parenthesized subquery, which must return exactly as many columns as there are expressions in the left-hand row. The left-hand expressions are evaluated and compared row-wise to each row of the subquery result, using the given operator
. The result of ANY
is “true” if the comparison returns true for any subquery row. The result is “false” if the comparison returns false for every subquery row (including the case where the subquery returns no rows). The result is NULL if no comparison with a subquery row returns true, and at least one comparison returns NULL.
See for details about the meaning of a row constructor comparison.
The left-hand side of this form of ALL
is a row constructor, as described in . The right-hand side is a parenthesized subquery, which must return exactly as many columns as there are expressions in the left-hand row. The left-hand expressions are evaluated and compared row-wise to each row of the subquery result, using the given operator
. The result of ALL
is “true” if the comparison returns true for all subquery rows (including the case where the subquery returns no rows). The result is “false” if the comparison returns false for any subquery row. The result is NULL if no comparison with a subquery row returns false, and at least one comparison returns NULL.
See for details about the meaning of a row constructor comparison.
The left-hand side is a row constructor, as described in . The right-hand side is a parenthesized subquery, which must return exactly as many columns as there are expressions in the left-hand row. Furthermore, the subquery cannot return more than one row. (If it returns zero rows, the result is taken to be null.) The left-hand side is evaluated and compared row-wise to the single subquery result row.
See for details about the meaning of a row constructor comparison.
The SQL specification requires row-wise comparison to return NULL if the result depends on comparing two NULL values or a NULL and a non-NULL. PostgreSQL does this only when comparing the results of two row constructors (as in ) or comparing a row constructor to the output of a subquery (as in ). In other contexts where two composite-type values are compared, two NULL field values are considered equal, and a NULL is considered larger than a non-NULL. This is necessary in order to have consistent sorting and indexing behavior for composite types.
Each side is evaluated and they are compared row-wise. Row comparisons are allowed when the operator is =, <>, <, <=, > or >=, or has semantics similar to one of these. (To be specific, an operator can be a row comparison operator if it is a member of a B-tree operator class, or is the negator of the = member of a B-tree operator class.) The default behavior of the above operators is the same as for IS [NOT] DISTINCT FROM of row constructors (see ).
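For instance, row comparison proceeds left to right and stops at the first unequal pair of fields, while a NULL encountered before the result is decided makes the whole comparison unknown:

```sql
SELECT ROW(1, 2, NULL) < ROW(1, 3, 0);  -- t: decided at the second column,
                                        --    so the NULL is never examined
SELECT ROW(1, NULL) = ROW(1, 2);        -- NULL: the result is unknown
```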
The functions shown in assist in making on-line backups. These functions cannot be executed during recovery (except non-exclusive pg_start_backup
, non-exclusive pg_stop_backup
, pg_is_in_backup
, pg_backup_start_time
and pg_wal_lsn_diff
).
For details about proper usage of these functions, see .
The functions shown in provide information about the current status of a standby server. These functions may be executed both during recovery and in normal running.
The functions shown in control the progress of recovery. These functions may be executed only during recovery.
Snapshots are exported with the pg_export_snapshot
function, shown in , and imported with the command.
The functions shown in are for controlling and interacting with replication features. See , , and for information about the underlying features. Use of functions for replication origin is restricted to superusers. Use of functions for replication slots is restricted to superusers and users having REPLICATION
privilege.
Many of these functions have equivalent commands in the replication protocol; see .
The functions described in , , and are also relevant for replication.
The functions shown in calculate the disk space usage of database objects, or assist in presentation of usage results. All these functions return sizes measured in bytes. If an OID that does not represent an existing object is passed to one of these functions, NULL
is returned.
The functions shown in assist in identifying the specific disk files associated with database objects.
lists functions used to manage collations.
lists functions that provide information about the structure of partitioned tables.
shows the functions available for index maintenance tasks. (Note that these maintenance tasks are normally done automatically by autovacuum; use of these functions is only required in special cases.) These functions cannot be executed during recovery. Use of these functions is restricted to superusers and the owner of the given index.
The functions shown in provide native access to files on the machine hosting the server. Only files within the database cluster directory and the log_directory
can be accessed, unless the user is a superuser or is granted the role pg_read_server_files
. Use a relative path for files in the cluster directory, and a path matching the log_directory
configuration setting for log files.
The functions shown in manage advisory locks. For details about proper use of these functions, see .
The session_user
is normally the user who initiated the current database connection; but superusers can change this setting with . The current_user
is the user identifier that is applicable for permission checking. Normally it is equal to the session user, but it can be changed with . It also changes during the execution of functions with the attribute SECURITY DEFINER
. In Unix parlance, the session user is the “real user” and the current user is the “effective user”. current_role
and user
are synonyms for current_user
. (The SQL standard draws a distinction between current_role
and current_user
, but PostgreSQL does not, since it unifies users and roles into a single kind of entity.)
pg_current_logfile
returns, as text
, the path of the log file(s) currently in use by the logging collector. The path includes the directory and the log file name. Log collection must be enabled or the return value is NULL
. When multiple log files exist, each in a different format, pg_current_logfile
called without arguments returns the path of the file having the first format found in the ordered list: stderr, csvlog. NULL
is returned when no log file has any of these formats. To request a specific file format supply, as text
, either csvlog or stderr as the value of the optional parameter. The return value is NULL
when the log format requested is not a configured . pg_current_logfile
reflects the contents of the current_logfiles
file.
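A sketch of typical calls (the return values depend entirely on the server's logging configuration):

```sql
SELECT pg_current_logfile();          -- path of the current log file, or NULL
SELECT pg_current_logfile('csvlog');  -- NULL unless csvlog output is configured
```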
pg_listening_channels
returns a set of names of asynchronous notification channels that the current session is listening to. pg_notification_queue_usage
returns the fraction of the total available space for notifications currently occupied by notifications that are waiting to be processed, as a double
in the range 0-1. See and for more information.
pg_safe_snapshot_blocking_pids
returns an array of the process IDs of the sessions that are blocking the server process with the specified process ID from acquiring a safe snapshot, or an empty array if there is no such server process or it is not blocked. A session running a SERIALIZABLE
transaction blocks a SERIALIZABLE READ ONLY DEFERRABLE
transaction from acquiring a snapshot until the latter determines that it is safe to avoid taking any predicate locks. See for more information about serializable and deferrable transactions. Frequent calls to this function could have some impact on database performance, because it needs access to the predicate lock manager's shared state for a short time.
version
returns a string describing the PostgreSQL server's version. You can also get this information from or for a machine-readable version, . Software developers should use server_version_num
(available since 8.2) or instead of parsing the text version.
lists functions that allow the user to query object access privileges programmatically. See for more information about privileges.
Name | Return Type | Description |
---|
has_function_privilege
checks whether a user can access a function in a particular way. Its argument possibilities are analogous to has_table_privilege
. When specifying a function by a text string rather than by OID, the allowed input is the same as for the regprocedure
data type (see ). The desired access privilege type must evaluate to EXECUTE
. An example is:
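For instance (the role and function names are illustrative; the function must be written with its argument types, as for regprocedure):

```sql
SELECT has_function_privilege('joeuser', 'testfunc(int, text)', 'execute');
```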
has_type_privilege
checks whether a user can access a type in a particular way. Its argument possibilities are analogous to has_table_privilege
. When specifying a type by a text string rather than by OID, the allowed input is the same as for the regtype
data type (see ). The desired access privilege type must evaluate to USAGE
.
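For instance (the role and type names are illustrative):

```sql
SELECT has_type_privilege('joeuser', 'mytype', 'USAGE');
```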
shows the operators available for the aclitem
type, which is the catalog representation of access privileges. See for information about how to read access privilege values.
Operator | Description | Example | Result |
---|
shows some additional functions to manage the aclitem
type.
Name | Return Type | Description |
---|
acldefault
returns the built-in default access privileges for an object of type type
belonging to role ownerId
. These represent the access privileges that will be assumed when an object's ACL entry is null. (The default access privileges are described in .) The type
parameter is a CHAR
: write 'c' for COLUMN
, 'r' for TABLE
and table-like objects, 's' for SEQUENCE
, 'd' for DATABASE
, 'f' for FUNCTION
or PROCEDURE
, 'l' for LANGUAGE
, 'L' for LARGE OBJECT
, 'n' for SCHEMA
, 't' for TABLESPACE
, 'F' for FOREIGN DATA WRAPPER
, 'S' for FOREIGN SERVER
, or 'T' for TYPE
or DOMAIN
.
shows functions that determine whether a certain object is visible in the current schema search path. For example, a table is said to be visible if its containing schema is in the search path and no table of the same name appears earlier in the search path. This is equivalent to the statement that the table can be referenced by name without explicit schema qualification. To list the names of all visible tables:
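One way to do this, querying pg_class with pg_table_is_visible (relkind = 'r' restricts the result to ordinary tables):

```sql
SELECT relname
FROM pg_class
WHERE pg_table_is_visible(oid)
  AND relkind = 'r';
```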
Name | Return Type | Description |
---|
lists functions that extract information from the system catalogs.
Name | Return Type | Description |
---|
pg_get_serial_sequence
returns the name of the sequence associated with a column, or NULL if no sequence is associated with the column. If the column is an identity column, the associated sequence is the sequence internally created for the identity column. For columns created using one of the serial types (serial
, smallserial
, bigserial
), it is the sequence created for that serial column definition. In the latter case, this association can be modified or removed with ALTER SEQUENCE OWNED BY
. (The function probably should have been called pg_get_owned_sequence
; its current name reflects the fact that it has typically been used with serial
or bigserial
columns.) The first input parameter is a table name with optional schema, and the second parameter is a column name. Because the first parameter is potentially a schema and table, it is not treated as a double-quoted identifier, meaning it is lower cased by default, while the second parameter, being just a column name, is treated as double-quoted and has its case preserved. The function returns a value suitably formatted for passing to sequence functions (see ). A typical use is in reading the current value of a sequence for an identity or serial column, for example:
pg_index_column_has_property
, pg_index_has_property
, and pg_indexam_has_property
return whether the specified index column, index, or index access method possesses the named property. NULL
is returned if the property name is not known or does not apply to the particular object, or if the OID or column number does not identify a valid object. Refer to for column properties, for index properties, and for access method properties. (Note that extension access methods can define additional property names for their indexes.)
Name | Description |
---|
Name | Description |
---|
Name | Description |
---|
pg_typeof
returns the OID of the data type of the value that is passed to it. This can be helpful for troubleshooting or dynamically constructing SQL queries. The function is declared as returning regtype
, which is an OID alias type (see ); this means that it is the same as an OID for comparison purposes but displays as a type name. For example:
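For instance:

```sql
SELECT pg_typeof(33);
--  pg_typeof
-- -----------
--  integer

-- Because the result is a regtype, it compares directly against OID columns:
SELECT typlen FROM pg_type WHERE oid = pg_typeof(33);
--  typlen
-- --------
--       4
```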
lists functions related to database object identification and addressing.
Name | Return Type | Description |
---|
The functions shown in extract comments previously stored with the command. A null value is returned if no comment could be found for the specified parameters.
Name | Return Type | Description |
---|
The functions shown in provide server transaction information in an exportable form. The main use of these functions is to determine which transactions were committed between two snapshots.
Name | Return Type | Description |
---|
The internal transaction ID type (xid
) is 32 bits wide and wraps around every 4 billion transactions. However, these functions export a 64-bit format that is extended with an “epoch” counter so it will not wrap around during the life of an installation. The data type used by these functions, txid_snapshot
, stores information about transaction ID visibility at a particular moment in time. Its components are described in .
Name | Description |
---|
txid_status(bigint)
reports the commit status of a recent transaction. Applications may use it to determine whether a transaction committed or aborted when the application and database server become disconnected while a COMMIT
is in progress. The status of a transaction will be reported as either in progress
, committed
, or aborted
, provided that the transaction is recent enough that the system retains the commit status of that transaction. If the transaction is old enough that no references to it survive in the system and the commit status information has been discarded, this function will return NULL. Note that prepared transactions are reported as in progress
; applications must check if they need to determine whether the txid is a prepared transaction.
The functions shown in provide information about transactions that have been already committed. These functions mainly provide information about when the transactions were committed. They only provide useful data when configuration option is enabled and only for transactions that were committed after it was enabled.
Name | Return Type | Description |
---|
The functions shown in print information initialized during initdb
, such as the catalog version. They also show information about write-ahead logging and checkpoint processing. This information is cluster-wide, and not specific to any one database. They provide most of the same information, from the same source, as , although in a form better suited to SQL functions.
Name | Return Type | Description |
---|
pg_control_checkpoint
returns a record, shown in
Column Name | Data Type |
---|
pg_control_system
returns a record, shown in
Column Name | Data Type |
---|
pg_control_init
returns a record, shown in
Column Name | Data Type |
---|
pg_control_recovery
returns a record, shown in
Column Name | Data Type |
---|
currval(regclass
)
bigint
Return value most recently obtained withnextval
for specified sequence
lastval()
bigint
Return value most recently obtained withnextval
for any sequence
nextval(regclass
)
bigint
Advance sequence and return new value
setval(regclass
,bigint
)
bigint
Set sequence's current value
setval(regclass
,bigint
,boolean
)
bigint
Set sequence's current value andis_called
flag
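A sketch of the sequence functions used together (the sequence name is hypothetical):

```sql
CREATE SEQUENCE myseq;
SELECT nextval('myseq');            -- 1
SELECT setval('myseq', 42);         -- 42; the next nextval returns 43
SELECT setval('myseq', 42, false);  -- 42; the next nextval returns 42,
                                    --     because is_called is cleared
```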
=
equal
ARRAY[1.1,2.1,3.1]::int[] = ARRAY[1,2,3]
t
<>
not equal
ARRAY[1,2,3] <> ARRAY[1,2,4]
t
<
less than
ARRAY[1,2,3] < ARRAY[1,2,4]
t
>
greater than
ARRAY[1,4,3] > ARRAY[1,2,4]
t
<=
less than or equal
ARRAY[1,2,3] <= ARRAY[1,2,3]
t
>=
greater than or equal
ARRAY[1,4,3] >= ARRAY[1,4,3]
t
@>
contains
ARRAY[1,4,3] @> ARRAY[3,1]
t
<@
is contained by
ARRAY[2,7] <@ ARRAY[1,7,4,2,6]
t
&&
overlap (have elements in common)
ARRAY[1,4,3] && ARRAY[2,1]
t
||
array-to-array concatenation
ARRAY[1,2,3] || ARRAY[4,5,6]
{1,2,3,4,5,6}
||
array-to-array concatenation
ARRAY[1,2,3] || ARRAY[[4,5,6],[7,8,9]]
{ {1,2,3},{4,5,6},{7,8,9} }
||
element-to-array concatenation
3 || ARRAY[4,5,6]
{3,4,5,6}
||
array-to-element concatenation
ARRAY[4,5,6] || 7
{4,5,6,7}
array_append
(anyarray
,anyelement
)
anyarray
append an element to the end of an array
array_append(ARRAY[1,2], 3)
{1,2,3}
array_cat
(anyarray
,anyarray
)
anyarray
concatenate two arrays
array_cat(ARRAY[1,2,3], ARRAY[4,5])
{1,2,3,4,5}
array_ndims
(anyarray
)
int
returns the number of dimensions of the array
array_ndims(ARRAY[[1,2,3], [4,5,6]])
2
array_dims
(anyarray
)
text
returns a text representation of array's dimensions
array_dims(ARRAY[[1,2,3], [4,5,6]])
[1:2][1:3]
array_fill
(anyelement
,int[]
[,int[]
])
anyarray
returns an array initialized with supplied value and dimensions, optionally with lower bounds other than 1
array_fill(7, ARRAY[3], ARRAY[2])
[2:4]={7,7,7}
array_length
(anyarray
,int
)
int
returns the length of the requested array dimension
array_length(array[1,2,3], 1)
3
array_lower
(anyarray
,int
)
int
returns lower bound of the requested array dimension
array_lower('[0:2]={1,2,3}'::int[], 1)
0
array_position
(anyarray
,anyelement
[,int
])
int
returns the subscript of the first occurrence of the second argument in the array, starting at the element indicated by the third argument or at the first element (array must be one-dimensional)
array_position(ARRAY['sun','mon','tue','wed','thu','fri','sat'], 'mon')
2
array_positions
(anyarray
,anyelement
)
int[]
returns an array of subscripts of all occurrences of the second argument in the array given as first argument (array must be one-dimensional)
array_positions(ARRAY['A','A','B','A'], 'A')
{1,2,4}
array_prepend
(anyelement
,anyarray
)
anyarray
prepend an element to the beginning of an array
array_prepend(1, ARRAY[2,3])
{1,2,3}
array_remove
(anyarray
,anyelement
)
anyarray
remove all elements equal to the given value from the array (array must be one-dimensional)
array_remove(ARRAY[1,2,3,2], 2)
{1,3}
array_replace
(anyarray
,anyelement
,anyelement
)
anyarray
replace each array element equal to the given value with a new value
array_replace(ARRAY[1,2,5,4], 5, 3)
{1,2,3,4}
array_to_string
(anyarray
,text
[,text
])
text
concatenates array elements using supplied delimiter and optional null string
array_to_string(ARRAY[1, 2, 3, NULL, 5], ',', '*')
1,2,3,*,5
array_upper
(anyarray
,int
)
int
returns upper bound of the requested array dimension
array_upper(ARRAY[1,8,3,7], 1)
4
cardinality
(anyarray
)
int
returns the total number of elements in the array, or 0 if the array is empty
cardinality(ARRAY[[1,2],[3,4]])
4
string_to_array
(text
,text
[,text
])
text[]
splits string into array elements using supplied delimiter and optional null string
string_to_array('xx~^~yy~^~zz', '~^~', 'yy')
{xx,NULL,zz}
unnest
(anyarray
)
setof anyelement
expand an array to a set of rows
unnest(ARRAY[1,2])
1
2
(2 rows)
unnest
(anyarray
,anyarray
[, ...])
setof anyelement, anyelement [, ...]
expand multiple arrays (possibly of different types) to a set of rows. This is only allowed in the FROM clause; seeSection 7.2.1.4
unnest(ARRAY[1,2],ARRAY['foo','bar','baz'])
1 foo
2 bar
NULL baz
(3 rows)
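When row numbers are needed alongside the elements, unnest in the FROM clause can be combined with WITH ORDINALITY:

```sql
SELECT * FROM unnest(ARRAY['a','b','c']) WITH ORDINALITY AS t(elem, idx);
--  elem | idx
-- ------+-----
--  a    |   1
--  b    |   2
--  c    |   3
```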
Operator
Description
Example(s)
anyrange
@>
anyrange
→ boolean
Does the first range contain the second?
int4range(2,4) @> int4range(2,3)
→ t
anyrange
@>
anyelement
→ boolean
Does the range contain the element?
'[2011-01-01,2011-03-01)'::tsrange @> '2011-01-10'::timestamp
→ t
anyrange
<@
anyrange
→ boolean
Is the first range contained by the second?
int4range(2,4) <@ int4range(1,7)
→ t
anyelement
<@
anyrange
→ boolean
Is the element contained in the range?
42 <@ int4range(1,7)
→ f
anyrange
&&
anyrange
→ boolean
Do the ranges overlap, that is, have any elements in common?
int8range(3,7) && int8range(4,12)
→ t
anyrange
<<
anyrange
→ boolean
Is the first range strictly left of the second?
int8range(1,10) << int8range(100,110)
→ t
anyrange
>>
anyrange
→ boolean
Is the first range strictly right of the second?
int8range(50,60) >> int8range(20,30)
→ t
anyrange
&<
anyrange
→ boolean
Does the first range not extend to the right of the second?
int8range(1,20) &< int8range(18,20)
→ t
anyrange
&>
anyrange
→ boolean
Does the first range not extend to the left of the second?
int8range(7,20) &> int8range(5,10)
→ t
anyrange
-|-
anyrange
→ boolean
Are the ranges adjacent?
numrange(1.1,2.2) -|- numrange(2.2,3.3)
→ t
anyrange
+
anyrange
→ anyrange
Computes the union of the ranges. The ranges must overlap or be adjacent, so that the union is a single range (but see range_merge()
).
numrange(5,15) + numrange(10,20)
→ [5,20)
anyrange
*
anyrange
→ anyrange
Computes the intersection of the ranges.
int8range(5,15) * int8range(10,20)
→ [10,15)
anyrange
-
anyrange
→ anyrange
Computes the difference of the ranges. The second range must not be contained in the first in such a way that the difference would not be a single range.
int8range(5,15) - int8range(10,20)
→ [5,10)
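As a practical sketch (the timestamps are arbitrary), the overlap operator is a common way to test for conflicting time intervals:

```sql
SELECT tsrange('2024-01-05 10:00', '2024-01-05 12:00') &&
       tsrange('2024-01-05 11:00', '2024-01-05 13:00') AS double_booked;
--  double_booked
-- ---------------
--  t
```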
Operator
Description
Example(s)
anymultirange
@>
anymultirange
→ boolean
Does the first multirange contain the second?
'{[2,4)}'::int4multirange @> '{[2,3)}'::int4multirange
→ t
anymultirange
@>
anyrange
→ boolean
Does the multirange contain the range?
'{[2,4)}'::int4multirange @> int4range(2,3)
→ t
anymultirange
@>
anyelement
→ boolean
Does the multirange contain the element?
'{[2011-01-01,2011-03-01)}'::tsmultirange @> '2011-01-10'::timestamp
→ t
anyrange
@>
anymultirange
→ boolean
Does the range contain the multirange?
'[2,4)'::int4range @> '{[2,3)}'::int4multirange
→ t
anymultirange
<@
anymultirange
→ boolean
Is the first multirange contained by the second?
'{[2,4)}'::int4multirange <@ '{[1,7)}'::int4multirange
→ t
anymultirange
<@
anyrange
→ boolean
Is the multirange contained by the range?
'{[2,4)}'::int4multirange <@ int4range(1,7)
→ t
anyrange
<@
anymultirange
→ boolean
Is the range contained by the multirange?
int4range(2,4) <@ '{[1,7)}'::int4multirange
→ t
anyelement
<@
anymultirange
→ boolean
Is the element contained by the multirange?
4 <@ '{[1,7)}'::int4multirange
→ t
anymultirange
&&
anymultirange
→ boolean
Do the multiranges overlap, that is, have any elements in common?
'{[3,7)}'::int8multirange && '{[4,12)}'::int8multirange
→ t
anymultirange
&&
anyrange
→ boolean
Does the multirange overlap the range?
'{[3,7)}'::int8multirange && int8range(4,12)
→ t
anyrange
&&
anymultirange
→ boolean
Does the range overlap the multirange?
int8range(3,7) && '{[4,12)}'::int8multirange
→ t
anymultirange
<<
anymultirange
→ boolean
Is the first multirange strictly left of the second?
'{[1,10)}'::int8multirange << '{[100,110)}'::int8multirange
→ t
anymultirange
<<
anyrange
→ boolean
Is the multirange strictly left of the range?
'{[1,10)}'::int8multirange << int8range(100,110)
→ t
anyrange
<<
anymultirange
→ boolean
Is the range strictly left of the multirange?
int8range(1,10) << '{[100,110)}'::int8multirange
→ t
anymultirange
>>
anymultirange
→ boolean
Is the first multirange strictly right of the second?
'{[50,60)}'::int8multirange >> '{[20,30)}'::int8multirange
→ t
anymultirange
>>
anyrange
→ boolean
Is the multirange strictly right of the range?
'{[50,60)}'::int8multirange >> int8range(20,30)
→ t
anyrange
>>
anymultirange
→ boolean
Is the range strictly right of the multirange?
int8range(50,60) >> '{[20,30)}'::int8multirange
→ t
anymultirange
&<
anymultirange
→ boolean
Does the first multirange not extend to the right of the second?
'{[1,20)}'::int8multirange &< '{[18,20)}'::int8multirange
→ t
anymultirange
&<
anyrange
→ boolean
Does the multirange not extend to the right of the range?
'{[1,20)}'::int8multirange &< int8range(18,20)
→ t
anyrange
&<
anymultirange
→ boolean
Does the range not extend to the right of the multirange?
int8range(1,20) &< '{[18,20)}'::int8multirange
→ t
anymultirange
&>
anymultirange
→ boolean
Does the first multirange not extend to the left of the second?
'{[7,20)}'::int8multirange &> '{[5,10)}'::int8multirange
→ t
anymultirange
&>
anyrange
→ boolean
Does the multirange not extend to the left of the range?
'{[7,20)}'::int8multirange &> int8range(5,10)
→ t
anyrange
&>
anymultirange
→ boolean
Does the range not extend to the left of the multirange?
int8range(7,20) &> '{[5,10)}'::int8multirange
→ t
anymultirange
-|-
anymultirange
→ boolean
Are the multiranges adjacent?
'{[1.1,2.2)}'::nummultirange -|- '{[2.2,3.3)}'::nummultirange
→ t
anymultirange
-|-
anyrange
→ boolean
Is the multirange adjacent to the range?
'{[1.1,2.2)}'::nummultirange -|- numrange(2.2,3.3)
→ t
anyrange
-|-
anymultirange
→ boolean
Is the range adjacent to the multirange?
numrange(1.1,2.2) -|- '{[2.2,3.3)}'::nummultirange
→ t
anymultirange
+
anymultirange
→ anymultirange
Computes the union of the multiranges. The multiranges need not overlap or be adjacent.
'{[5,10)}'::nummultirange + '{[15,20)}'::nummultirange
→ {[5,10), [15,20)}
anymultirange
*
anymultirange
→ anymultirange
Computes the intersection of the multiranges.
'{[5,15)}'::int8multirange * '{[10,20)}'::int8multirange
→ {[10,15)}
anymultirange
-
anymultirange
→ anymultirange
Computes the difference of the multiranges.
'{[5,20)}'::int8multirange - '{[10,15)}'::int8multirange
→ {[5,10), [15,20)}
Function
Description
Example(s)
lower
( anyrange
) → anyelement
Extracts the lower bound of the range (NULL
if the range is empty or the lower bound is infinite).
lower(numrange(1.1,2.2))
→ 1.1
upper
( anyrange
) → anyelement
Extracts the upper bound of the range (NULL
if the range is empty or the upper bound is infinite).
upper(numrange(1.1,2.2))
→ 2.2
isempty
( anyrange
) → boolean
Is the range empty?
isempty(numrange(1.1,2.2))
→ f
lower_inc
( anyrange
) → boolean
Is the range's lower bound inclusive?
lower_inc(numrange(1.1,2.2))
→ t
upper_inc
( anyrange
) → boolean
Is the range's upper bound inclusive?
upper_inc(numrange(1.1,2.2))
→ f
lower_inf
( anyrange
) → boolean
Is the range's lower bound infinite?
lower_inf('(,)'::daterange)
→ t
upper_inf
( anyrange
) → boolean
Is the range's upper bound infinite?
upper_inf('(,)'::daterange)
→ t
range_merge
( anyrange
, anyrange
) → anyrange
Computes the smallest range that includes both of the given ranges.
range_merge('[1,2)'::int4range, '[3,4)'::int4range)
→ [1,4)
Function
Description
Example(s)
lower
( anymultirange
) → anyelement
Extracts the lower bound of the multirange (NULL
if the multirange is empty or the lower bound is infinite).
lower('{[1.1,2.2)}'::nummultirange)
→ 1.1
upper
( anymultirange
) → anyelement
Extracts the upper bound of the multirange (NULL
if the multirange is empty or the upper bound is infinite).
upper('{[1.1,2.2)}'::nummultirange)
→ 2.2
isempty
( anymultirange
) → boolean
Is the multirange empty?
isempty('{[1.1,2.2)}'::nummultirange)
→ f
lower_inc
( anymultirange
) → boolean
Is the multirange's lower bound inclusive?
lower_inc('{[1.1,2.2)}'::nummultirange)
→ t
upper_inc
( anymultirange
) → boolean
Is the multirange's upper bound inclusive?
upper_inc('{[1.1,2.2)}'::nummultirange)
→ f
lower_inf
( anymultirange
) → boolean
Is the multirange's lower bound infinite?
lower_inf('{(,)}'::datemultirange)
→ t
upper_inf
( anymultirange
) → boolean
Is the multirange's upper bound infinite?
upper_inf('{(,)}'::datemultirange)
→ t
range_merge
( anymultirange
) → anyrange
Computes the smallest range that includes the entire multirange.
range_merge('{[1,2), [3,4)}'::int4multirange)
→ [1,4)
multirange
( anyrange
) → anymultirange
Returns a multirange containing just the given range.
multirange('[1,2)'::int4range)
→ {[1,2)}
unnest
( anymultirange
) → setof anyrange
Expands a multirange into a set of ranges. The ranges are read out in storage order (ascending).
unnest('{[1,2), [3,4)}'::int4multirange)
→
Function
Description
Partial Mode
array_agg
( anynonarray
) → anyarray
Collects all the input values, including nulls, into an array.
No
array_agg
( anyarray
) → anyarray
Concatenates all the input arrays into an array of one higher dimension. (The inputs must all have the same dimensionality, and cannot be empty or null.)
No
avg
( smallint
) → numeric
avg
( integer
) → numeric
avg
( bigint
) → numeric
avg
( numeric
) → numeric
avg
( real
) → double precision
avg
( double precision
) → double precision
avg
( interval
) → interval
Computes the average (arithmetic mean) of all the non-null input values.
Yes
bit_and
( smallint
) → smallint
bit_and
( integer
) → integer
bit_and
( bigint
) → bigint
bit_and
( bit
) → bit
Computes the bitwise AND of all non-null input values.
Yes
bit_or
( smallint
) → smallint
bit_or
( integer
) → integer
bit_or
( bigint
) → bigint
bit_or
( bit
) → bit
Computes the bitwise OR of all non-null input values.
Yes
bool_and
( boolean
) → boolean
Returns true if all non-null input values are true, otherwise false.
Yes
bool_or
( boolean
) → boolean
Returns true if any non-null input value is true, otherwise false.
Yes
count
( *
) → bigint
Computes the number of input rows.
Yes
count
( "any"
) → bigint
Computes the number of input rows in which the input value is not null.
Yes
every
( boolean
) → boolean
This is the SQL standard's equivalent to bool_and
.
Yes
json_agg
( anyelement
) → json
jsonb_agg
( anyelement
) → jsonb
Collects all the input values, including nulls, into a JSON array. Values are converted to JSON as per to_json
or to_jsonb
.
No
json_object_agg
( key
"any"
, value
"any"
) → json
jsonb_object_agg
( key
"any"
, value
"any"
) → jsonb
Collects all the key/value pairs into a JSON object. Key arguments are coerced to text; value arguments are converted as per to_json
or to_jsonb
. Values can be null, but not keys.
No
max
( see text
) → same as input type
Computes the maximum of the non-null input values. Available for any numeric, string, date/time, or enum type, as well as inet
, interval
, money
, oid
, pg_lsn
, tid
, and arrays of any of these types.
Yes
min
( see text
) → same as input type
Computes the minimum of the non-null input values. Available for any numeric, string, date/time, or enum type, as well as inet
, interval
, money
, oid
, pg_lsn
, tid
, and arrays of any of these types.
Yes
string_agg
( value
text
, delimiter
text
) → text
string_agg
( value
bytea
, delimiter
bytea
) → bytea
Concatenates the non-null input values into a string. Each value after the first is preceded by the corresponding delimiter
(if it's not null).
No
sum
( smallint
) → bigint
sum
( integer
) → bigint
sum
( bigint
) → numeric
sum
( numeric
) → numeric
sum
( real
) → real
sum
( double precision
) → double precision
sum
( interval
) → interval
sum
( money
) → money
Computes the sum of the non-null input values.
Yes
xmlagg
( xml
) → xml
Concatenates the non-null XML input values (see Section 9.15.1.7).
No
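Several of the aggregates above are commonly combined in a single grouped query; as a sketch (the table and columns are hypothetical):

```sql
SELECT department,
       count(*)                              AS headcount,
       avg(salary)                           AS avg_salary,
       string_agg(name, ', ' ORDER BY name)  AS members
FROM employees
GROUP BY department;
```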
Function
Description
Partial Mode
corr
( Y
double precision
, X
double precision
) → double precision
Computes the correlation coefficient.
Yes
covar_pop
( Y
double precision
, X
double precision
) → double precision
Computes the population covariance.
Yes
covar_samp
( Y
double precision
, X
double precision
) → double precision
Computes the sample covariance.
Yes
regr_avgx
( Y
double precision
, X
double precision
) → double precision
Computes the average of the independent variable, sum(
X
)/N
.
Yes
regr_avgy
( Y
double precision
, X
double precision
) → double precision
Computes the average of the dependent variable, sum(
Y
)/N
.
Yes
regr_count
( Y
double precision
, X
double precision
) → bigint
Computes the number of rows in which both inputs are non-null.
Yes
regr_intercept
( Y
double precision
, X
double precision
) → double precision
Computes the y-intercept of the least-squares-fit linear equation determined by the (X
, Y
) pairs.
Yes
regr_r2
( Y
double precision
, X
double precision
) → double precision
Computes the square of the correlation coefficient.
Yes
regr_slope
( Y
double precision
, X
double precision
) → double precision
Computes the slope of the least-squares-fit linear equation determined by the (X
, Y
) pairs.
Yes
regr_sxx
( Y
double precision
, X
double precision
) → double precision
Computes the “sum of squares” of the independent variable, sum(
X
^2) - sum(X
)^2/N
.
Yes
regr_sxy
( Y
double precision
, X
double precision
) → double precision
Computes the “sum of products” of independent times dependent variables, sum(
X
*Y
) - sum(X
) * sum(Y
)/N
.
Yes
regr_syy
( Y
double precision
, X
double precision
) → double precision
Computes the “sum of squares” of the dependent variable, sum(
Y
^2) - sum(Y
)^2/N
.
Yes
stddev
( numeric_type
) → double precision
for real
or double precision
, otherwise numeric
This is a historical alias for stddev_samp
.
Yes
stddev_pop
( numeric_type
) → double precision
for real
or double precision
, otherwise numeric
Computes the population standard deviation of the input values.
Yes
stddev_samp
( numeric_type
) → double precision
for real
or double precision
, otherwise numeric
Computes the sample standard deviation of the input values.
Yes
variance
( numeric_type
) → double precision
for real
or double precision
, otherwise numeric
This is a historical alias for var_samp
.
Yes
var_pop
( numeric_type
) → double precision
for real
or double precision
, otherwise numeric
Computes the population variance of the input values (square of the population standard deviation).
Yes
var_samp
( numeric_type
) → double precision
for real
or double precision
, otherwise numeric
Computes the sample variance of the input values (square of the sample standard deviation).
Yes
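As a sketch of how these statistical aggregates fit together (assuming a hypothetical table samples(x, y) of double precision columns), the least-squares slope can be cross-checked against the sum-of-squares forms:

```sql
-- Hypothetical table: samples(x double precision, y double precision).
-- regr_slope and regr_intercept describe the least-squares line y = a*x + b;
-- the slope is also regr_sxy / regr_sxx, guarded against division by zero.
-- Note the argument order: the dependent variable Y comes first.
SELECT regr_slope(y, x)                               AS slope,
       regr_intercept(y, x)                           AS intercept,
       regr_sxy(y, x) / nullif(regr_sxx(y, x), 0)     AS slope_check,
       regr_r2(y, x)                                  AS r_squared,
       stddev_samp(x)                                 AS sd_x
FROM samples;
```

If the data is well-formed, slope and slope_check return the same value.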
Function
Description
Partial Mode
mode
() WITHIN GROUP
( ORDER BY
anyelement
) → anyelement
Computes the mode, the most frequent value of the aggregated argument (arbitrarily choosing the first one if there are multiple equally-frequent values). The aggregated argument must be of a sortable type.
No
percentile_cont
( fraction
double precision
) WITHIN GROUP
( ORDER BY
double precision
) → double precision
percentile_cont
( fraction
double precision
) WITHIN GROUP
( ORDER BY
interval
) → interval
Computes the continuous percentile, a value corresponding to the specified fraction
within the ordered set of aggregated argument values. This will interpolate between adjacent input items if needed.
No
percentile_cont
( fractions
double precision[]
) WITHIN GROUP
( ORDER BY
double precision
) → double precision[]
percentile_cont
( fractions
double precision[]
) WITHIN GROUP
( ORDER BY
interval
) → interval[]
Computes multiple continuous percentiles. The result is an array of the same dimensions as the fractions
parameter, with each non-null element replaced by the (possibly interpolated) value corresponding to that percentile.
No
percentile_disc
( fraction
double precision
) WITHIN GROUP
( ORDER BY
anyelement
) → anyelement
Computes the discrete percentile, the first value within the ordered set of aggregated argument values whose position in the ordering equals or exceeds the specified fraction
. The aggregated argument must be of a sortable type.
No
percentile_disc
( fractions
double precision[]
) WITHIN GROUP
( ORDER BY
anyelement
) → anyarray
Computes multiple discrete percentiles. The result is an array of the same dimensions as the fractions
parameter, with each non-null element replaced by the input value corresponding to that percentile. The aggregated argument must be of a sortable type.
No
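To make the WITHIN GROUP syntax concrete (assuming a hypothetical table employees(salary)), the continuous and discrete percentile forms can be compared side by side:

```sql
-- Hypothetical table: employees(salary numeric).
-- percentile_cont interpolates between adjacent values, so the result may
-- not appear in the data; percentile_disc always returns an actual input value.
SELECT percentile_cont(0.5) WITHIN GROUP (ORDER BY salary) AS median_interpolated,
       percentile_disc(0.5) WITHIN GROUP (ORDER BY salary) AS median_actual,
       percentile_cont(ARRAY[0.25, 0.5, 0.75])
         WITHIN GROUP (ORDER BY salary)                    AS quartiles
FROM employees;
```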
Function
Description
Partial Mode
rank
( args
) WITHIN GROUP
( ORDER BY
sorted_args
) → bigint
Computes the rank of the hypothetical row, with gaps; that is, the row number of the first row in its peer group.
No
dense_rank
( args
) WITHIN GROUP
( ORDER BY
sorted_args
) → bigint
Computes the rank of the hypothetical row, without gaps; this function effectively counts peer groups.
No
percent_rank
( args
) WITHIN GROUP
( ORDER BY
sorted_args
) → double precision
Computes the relative rank of the hypothetical row, that is (rank
- 1) / (total rows - 1). The value thus ranges from 0 to 1 inclusive.
No
cume_dist
( args
) WITHIN GROUP
( ORDER BY
sorted_args
) → double precision
Computes the cumulative distribution, that is (number of rows preceding or peers with hypothetical row) / (total rows). The value thus ranges from 1/N
to 1.
No
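A typical use of these hypothetical-set aggregates is asking where a candidate value would land in an existing distribution. A sketch, assuming a hypothetical table exam_scores(score):

```sql
-- Hypothetical table: exam_scores(score int).
-- Where would a score of 72 rank among the existing scores?
SELECT rank(72)       WITHIN GROUP (ORDER BY score DESC) AS rank_with_gaps,
       dense_rank(72) WITHIN GROUP (ORDER BY score DESC) AS rank_without_gaps,
       cume_dist(72)  WITHIN GROUP (ORDER BY score DESC) AS cumulative_dist
FROM exam_scores;
```

The argument inside the parentheses is the hypothetical row; it must match the ORDER BY column list in number and type.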
Function
Description
GROUPING
( group_by_expression(s)
) → integer
Returns a bit mask indicating which GROUP BY
expressions are not included in the current grouping set. Bits are assigned with the rightmost argument corresponding to the least-significant bit; each bit is 0 if the corresponding expression is included in the grouping criteria of the grouping set generating the current result row, and 1 if it is not included.
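The bit mask is most useful for telling subtotal rows apart from rows where the grouped column is genuinely NULL. A sketch, assuming a hypothetical table sales(region, product, amount):

```sql
-- Hypothetical table: sales(region text, product text, amount numeric).
-- With ROLLUP(region, product), GROUPING(region, product) yields:
--   0 for detail rows, 1 (bit 0: product rolled up) for region subtotals,
--   3 for the grand total. Bit 0 is the rightmost argument, product.
SELECT region, product, sum(amount) AS total,
       GROUPING(region, product)    AS grouping_mask
FROM sales
GROUP BY ROLLUP (region, product);
```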
Operator
Description
Example(s)
json
->
integer
→ json
jsonb
->
integer
→ jsonb
Extracts n
'th element of JSON array (array elements are indexed from zero, but negative integers count from the end).
'[{"a":"foo"},{"b":"bar"},{"c":"baz"}]'::json -> 2
→ {"c":"baz"}
'[{"a":"foo"},{"b":"bar"},{"c":"baz"}]'::json -> -3
→ {"a":"foo"}
json
->
text
→ json
jsonb
->
text
→ jsonb
Extracts JSON object field with the given key.
'{"a": {"b":"foo"}}'::json -> 'a'
→ {"b":"foo"}
json
->>
integer
→ text
jsonb
->>
integer
→ text
Extracts n
'th element of JSON array, as text
.
'[1,2,3]'::json ->> 2
→ 3
json
->>
text
→ text
jsonb
->>
text
→ text
Extracts JSON object field with the given key, as text
.
'{"a":1,"b":2}'::json ->> 'b'
→ 2
json
#>
text[]
→ json
jsonb
#>
text[]
→ jsonb
Extracts JSON sub-object at the specified path, where path elements can be either field keys or array indexes.
'{"a": {"b": ["foo","bar"]}}'::json #> '{a,b,1}'
→ "bar"
json
#>>
text[]
→ text
jsonb
#>>
text[]
→ text
Extracts JSON sub-object at the specified path as text
.
'{"a": {"b": ["foo","bar"]}}'::json #>> '{a,b,1}'
→ bar
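The practical distinction among these operators is the result type: -> and #> stay in JSON, while ->> and #>> produce text suitable for comparison with ordinary string values. A small illustration:

```sql
-- -> keeps the JSON quoting; ->> strips it; #>> walks a whole path at once.
SELECT doc -> 'a' -> 'b'   AS as_json,      -- "foo"  (jsonb)
       doc -> 'a' ->> 'b'  AS as_text,      -- foo    (text)
       doc #>> '{a,b}'     AS path_as_text  -- foo    (same, in one step)
FROM (VALUES ('{"a": {"b": "foo"}}'::jsonb)) AS t(doc);
```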
Operator
Description
Example(s)
jsonb
@>
jsonb
→ boolean
Does the first JSON value contain the second? (See Section 8.14.3 for details about containment.)
'{"a":1, "b":2}'::jsonb @> '{"b":2}'::jsonb
→ t
jsonb
<@
jsonb
→ boolean
Is the first JSON value contained in the second?
'{"b":2}'::jsonb <@ '{"a":1, "b":2}'::jsonb
→ t
jsonb
?
text
→ boolean
Does the text string exist as a top-level key or array element within the JSON value?
'{"a":1, "b":2}'::jsonb ? 'b'
→ t
'["a", "b", "c"]'::jsonb ? 'b'
→ t
jsonb
?|
text[]
→ boolean
Do any of the strings in the text array exist as top-level keys or array elements?
'{"a":1, "b":2, "c":3}'::jsonb ?| array['b', 'd']
→ t
jsonb
?&
text[]
→ boolean
Do all of the strings in the text array exist as top-level keys or array elements?
'["a", "b", "c"]'::jsonb ?& array['a', 'b']
→ t
jsonb
||
jsonb
→ jsonb
Concatenates two jsonb
values. Concatenating two arrays generates an array containing all the elements of each input. Concatenating two objects generates an object containing the union of their keys, taking the second object's value when there are duplicate keys. All other cases are treated by converting a non-array input into a single-element array, and then proceeding as for two arrays. Does not operate recursively: only the top-level array or object structure is merged.
'["a", "b"]'::jsonb || '["a", "d"]'::jsonb
→ ["a", "b", "a", "d"]
'{"a": "b"}'::jsonb || '{"c": "d"}'::jsonb
→ {"a": "b", "c": "d"}
'[1, 2]'::jsonb || '3'::jsonb
→ [1, 2, 3]
'{"a": "b"}'::jsonb || '42'::jsonb
→ [{"a": "b"}, 42]
To append an array to another array as a single entry, wrap it in an additional layer of array, for example:
'[1, 2]'::jsonb || jsonb_build_array('[3, 4]'::jsonb)
→ [1, 2, [3, 4]]
jsonb
-
text
→ jsonb
Deletes a key (and its value) from a JSON object, or matching string value(s) from a JSON array.
'{"a": "b", "c": "d"}'::jsonb - 'a'
→ {"c": "d"}
'["a", "b", "c", "b"]'::jsonb - 'b'
→ ["a", "c"]
jsonb
-
text[]
→ jsonb
Deletes all matching keys or array elements from the left operand.
'{"a": "b", "c": "d"}'::jsonb - '{a,c}'::text[]
→ {}
jsonb
-
integer
→ jsonb
Deletes the array element with specified index (negative integers count from the end). Throws an error if JSON value is not an array.
'["a", "b"]'::jsonb - 1
→ ["a"]
jsonb
#-
text[]
→ jsonb
Deletes the field or array element at the specified path, where path elements can be either field keys or array indexes.
'["a", {"b":1}]'::jsonb #- '{1,b}'
→ ["a", {}]
jsonb
@?
jsonpath
→ boolean
Does JSON path return any item for the specified JSON value?
'{"a":[1,2,3,4,5]}'::jsonb @? '$.a[*] ? (@ > 2)'
→ t
jsonb
@@
jsonpath
→ boolean
Returns the result of a JSON path predicate check for the specified JSON value. Only the first item of the result is taken into account. If the result is not Boolean, then NULL
is returned.
'{"a":[1,2,3,4,5]}'::jsonb @@ '$.a[*] > 2'
→ t
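Several of these operators compose naturally in one query. A brief illustration of containment, concatenation, and the two deletion styles:

```sql
-- @> is fully recursive containment; ||, - and #- operate on top-level
-- structure (plus one addressed path step for #-).
SELECT '{"a": 1, "b": 2}'::jsonb @> '{"a": 1}'     AS contains,     -- t
       '{"a": 1}'::jsonb || '{"b": 2}'::jsonb      AS merged,       -- {"a": 1, "b": 2}
       '{"a": 1, "b": 2}'::jsonb - 'a'             AS key_removed,  -- {"b": 2}
       '{"a": {"b": 1, "c": 2}}'::jsonb #- '{a,b}' AS path_removed; -- {"a": {"c": 2}}
```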
Function
Description
Example(s)
to_json
( anyelement
) → json
to_jsonb
( anyelement
) → jsonb
Converts any SQL value to json
or jsonb
. Arrays and composites are converted recursively to arrays and objects (multidimensional arrays become arrays of arrays in JSON). Otherwise, if there is a cast from the SQL data type to json
, the cast function will be used to perform the conversion;[a] otherwise, a scalar JSON value is produced. For any scalar other than a number, a Boolean, or a null value, the text representation will be used, with escaping as necessary to make it a valid JSON string value.
to_json('Fred said "Hi."'::text)
→ "Fred said \"Hi.\""
to_jsonb(row(42, 'Fred said "Hi."'::text))
→ {"f1": 42, "f2": "Fred said \"Hi.\""}
array_to_json
( anyarray
[, boolean
] ) → json
Converts an SQL array to a JSON array. The behavior is the same as to_json
except that line feeds will be added between top-level array elements if the optional boolean parameter is true.
array_to_json('{{1,5},{99,100}}'::int[])
→ [[1,5],[99,100]]
row_to_json
( record
[, boolean
] ) → json
Converts an SQL composite value to a JSON object. The behavior is the same as to_json
except that line feeds will be added between top-level elements if the optional boolean parameter is true.
row_to_json(row(1,'foo'))
→ {"f1":1,"f2":"foo"}
json_build_array
( VARIADIC
"any"
) → json
jsonb_build_array
( VARIADIC
"any"
) → jsonb
Builds a possibly-heterogeneously-typed JSON array out of a variadic argument list. Each argument is converted as per to_json
or to_jsonb
.
json_build_array(1, 2, 'foo', 4, 5)
→ [1, 2, "foo", 4, 5]
json_build_object
( VARIADIC
"any"
) → json
jsonb_build_object
( VARIADIC
"any"
) → jsonb
Builds a JSON object out of a variadic argument list. By convention, the argument list consists of alternating keys and values. Key arguments are coerced to text; value arguments are converted as per to_json
or to_jsonb
.
json_build_object('foo', 1, 2, row(3,'bar'))
→ {"foo" : 1, "2" : {"f1":3,"f2":"bar"}}
json_object
( text[]
) → json
jsonb_object
( text[]
) → jsonb
Builds a JSON object out of a text array. The array must have either exactly one dimension with an even number of members, in which case they are taken as alternating key/value pairs, or two dimensions such that each inner array has exactly two elements, which are taken as a key/value pair. All values are converted to JSON strings.
json_object('{a, 1, b, "def", c, 3.5}')
→ {"a" : "1", "b" : "def", "c" : "3.5"}
json_object('{{a, 1}, {b, "def"}, {c, 3.5}}')
→ {"a" : "1", "b" : "def", "c" : "3.5"}
json_object
( keys
text[]
, values
text[]
) → json
jsonb_object
( keys
text[]
, values
text[]
) → jsonb
This form of json_object
takes keys and values pairwise from separate text arrays. Otherwise it is identical to the one-argument form.
json_object('{a,b}', '{1,2}')
→ {"a": "1", "b": "2"}
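A common pattern is assembling a document from mixed SQL values in a single call; the nested call below shows how the builder functions compose:

```sql
-- Keys are coerced to text; values are converted as per to_jsonb,
-- so the timestamp becomes a JSON string, not a number.
SELECT jsonb_build_object(
         'id',      42,
         'tags',    jsonb_build_array('a', 'b'),
         'created', now()
       ) AS payload;
```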
Function
Description
Example(s)
json_array_elements
( json
) → setof json
jsonb_array_elements
( jsonb
) → setof jsonb
Expands the top-level JSON array into a set of JSON values.
select * from json_array_elements('[1,true, [2,false]]')
→
json_array_elements_text
( json
) → setof text
jsonb_array_elements_text
( jsonb
) → setof text
Expands the top-level JSON array into a set of text
values.
select * from json_array_elements_text('["foo", "bar"]')
→
json_array_length
( json
) → integer
jsonb_array_length
( jsonb
) → integer
Returns the number of elements in the top-level JSON array.
json_array_length('[1,2,3,{"f1":1,"f2":[5,6]},4]')
→ 5
jsonb_array_length('[]')
→ 0
json_each
( json
) → setof record
( key
text
, value
json
)
jsonb_each
( jsonb
) → setof record
( key
text
, value
jsonb
)
Expands the top-level JSON object into a set of key/value pairs.
select * from json_each('{"a":"foo", "b":"bar"}')
→
json_each_text
( json
) → setof record
( key
text
, value
text
)
jsonb_each_text
( jsonb
) → setof record
( key
text
, value
text
)
Expands the top-level JSON object into a set of key/value pairs. The returned value
s will be of type text
.
select * from json_each_text('{"a":"foo", "b":"bar"}')
→
json_extract_path
( from_json
json
, VARIADIC
path_elems
text[]
) → json
jsonb_extract_path
( from_json
jsonb
, VARIADIC
path_elems
text[]
) → jsonb
Extracts JSON sub-object at the specified path. (This is functionally equivalent to the #>
operator, but writing the path out as a variadic list can be more convenient in some cases.)
json_extract_path('{"f2":{"f3":1},"f4":{"f5":99,"f6":"foo"}}', 'f4', 'f6')
→ "foo"
json_extract_path_text
( from_json
json
, VARIADIC
path_elems
text[]
) → text
jsonb_extract_path_text
( from_json
jsonb
, VARIADIC
path_elems
text[]
) → text
Extracts JSON sub-object at the specified path as text
. (This is functionally equivalent to the #>>
operator.)
json_extract_path_text('{"f2":{"f3":1},"f4":{"f5":99,"f6":"foo"}}', 'f4', 'f6')
→ foo
json_object_keys
( json
) → setof text
jsonb_object_keys
( jsonb
) → setof text
Returns the set of keys in the top-level JSON object.
select * from json_object_keys('{"f1":"abc","f2":{"f3":"a", "f4":"b"}}')
→
json_populate_record
( base
anyelement
, from_json
json
) → anyelement
jsonb_populate_record
( base
anyelement
, from_json
jsonb
) → anyelement
Expands the top-level JSON object to a row having the composite type of the base
argument. The JSON object is scanned for fields whose names match column names of the output row type, and their values are inserted into those columns of the output. (Fields that do not correspond to any output column name are ignored.) In typical use, the value of base
is just NULL
, which means that any output columns that do not match any object field will be filled with nulls. However, if base
isn't NULL
then the values it contains will be used for unmatched columns.
To convert a JSON value to the SQL type of an output column, the following rules are applied in sequence:
A JSON null value is converted to an SQL null in all cases.
If the output column is of type json
or jsonb
, the JSON value is just reproduced exactly.
If the output column is a composite (row) type, and the JSON value is a JSON object, the fields of the object are converted to columns of the output row type by recursive application of these rules.
Likewise, if the output column is an array type and the JSON value is a JSON array, the elements of the JSON array are converted to elements of the output array by recursive application of these rules.
Otherwise, if the JSON value is a string, the contents of the string are fed to the input conversion function for the column's data type.
Otherwise, the ordinary text representation of the JSON value is fed to the input conversion function for the column's data type.
While the example below uses a constant JSON value, typical use would be to reference a json
or jsonb
column laterally from another table in the query's FROM
clause. Writing json_populate_record
in the FROM
clause is good practice, since all of the extracted columns are available for use without duplicate function calls.
create type subrowtype as (d int, e text);
create type myrowtype as (a int, b text[], c subrowtype);
select * from json_populate_record(null::myrowtype, '{"a": 1, "b": ["2", "a b"], "c": {"d": 4, "e": "a b c"}, "x": "foo"}')
→
json_populate_recordset
( base
anyelement
, from_json
json
) → setof anyelement
jsonb_populate_recordset
( base
anyelement
, from_json
jsonb
) → setof anyelement
Expands the top-level JSON array of objects to a set of rows having the composite type of the base
argument. Each element of the JSON array is processed as described above for json[b]_populate_record
.
create type twoints as (a int, b int);
select * from json_populate_recordset(null::twoints, '[{"a":1,"b":2}, {"a":3,"b":4}]')
→
json_to_record
( json
) → record
jsonb_to_record
( jsonb
) → record
Expands the top-level JSON object to a row having the composite type defined by an AS
clause. (As with all functions returning record
, the calling query must explicitly define the structure of the record with an AS
clause.) The output record is filled from fields of the JSON object, in the same way as described above for json[b]_populate_record
. Since there is no input record value, unmatched columns are always filled with nulls.
create type myrowtype as (a int, b text);
select * from json_to_record('{"a":1,"b":[1,2,3],"c":[1,2,3],"e":"bar","r": {"a": 123, "b": "a b c"}}') as x(a int, b text, c int[], d text, r myrowtype)
→
json_to_recordset
( json
) → setof record
jsonb_to_recordset
( jsonb
) → setof record
Expands the top-level JSON array of objects to a set of rows having the composite type defined by an AS
clause. (As with all functions returning record
, the calling query must explicitly define the structure of the record with an AS
clause.) Each element of the JSON array is processed as described above for json[b]_populate_record
.
select * from json_to_recordset('[{"a":1,"b":"foo"}, {"a":"2","c":"bar"}]') as x(a int, b text)
→
jsonb_set
( target
jsonb
, path
text[]
, new_value
jsonb
[, create_if_missing
boolean
] ) → jsonb
Returns target
with the item designated by path
replaced by new_value
, or with new_value
added if create_if_missing
is true (which is the default) and the item designated by path
does not exist. All earlier steps in the path must exist, or the target
is returned unchanged. As with the path oriented operators, negative integers that appear in the path
count from the end of JSON arrays. If the last path step is an array index that is out of range, and create_if_missing
is true, the new value is added at the beginning of the array if the index is negative, or at the end of the array if it is positive.
jsonb_set('[{"f1":1,"f2":null},2,null,3]', '{0,f1}', '[2,3,4]', false)
→ [{"f1": [2, 3, 4], "f2": null}, 2, null, 3]
jsonb_set('[{"f1":1,"f2":null},2]', '{0,f3}', '[2,3,4]')
→ [{"f1": 1, "f2": null, "f3": [2, 3, 4]}, 2]
jsonb_set_lax
( target
jsonb
, path
text[]
, new_value
jsonb
[, create_if_missing
boolean
[, null_value_treatment
text
]] ) → jsonb
If new_value
is not NULL
, behaves identically to jsonb_set
. Otherwise behaves according to the value of null_value_treatment
which must be one of 'raise_exception'
, 'use_json_null'
, 'delete_key'
, or 'return_target'
. The default is 'use_json_null'
.
jsonb_set_lax('[{"f1":1,"f2":null},2,null,3]', '{0,f1}', null)
→ [{"f1": null, "f2": null}, 2, null, 3]
jsonb_set_lax('[{"f1":99,"f2":null},2]', '{0,f3}', null, true, 'return_target')
→ [{"f1": 99, "f2": null}, 2]
jsonb_insert
( target
jsonb
, path
text[]
, new_value
jsonb
[, insert_after
boolean
] ) → jsonb
Returns target
with new_value
inserted. If the item designated by the path
is an array element, new_value
will be inserted before that item if insert_after
is false (which is the default), or after it if insert_after
is true. If the item designated by the path
is an object field, new_value
will be inserted only if the object does not already contain that key. All earlier steps in the path must exist, or the target
is returned unchanged. As with the path oriented operators, negative integers that appear in the path
count from the end of JSON arrays. If the last path step is an array index that is out of range, the new value is added at the beginning of the array if the index is negative, or at the end of the array if it is positive.
jsonb_insert('{"a": [0,1,2]}', '{a, 1}', '"new_value"')
→ {"a": [0, "new_value", 1, 2]}
jsonb_insert('{"a": [0,1,2]}', '{a, 1}', '"new_value"', true)
→ {"a": [0, 1, "new_value", 2]}
json_strip_nulls
( json
) → json
jsonb_strip_nulls
( jsonb
) → jsonb
Deletes all object fields that have null values from the given JSON value, recursively. Null values that are not object fields are untouched.
json_strip_nulls('[{"f1":1, "f2":null}, 2, null, 3]')
→ [{"f1":1},2,null,3]
jsonb_path_exists
( target
jsonb
, path
jsonpath
[, vars
jsonb
[, silent
boolean
]] ) → boolean
Checks whether the JSON path returns any item for the specified JSON value. If the vars
argument is specified, it must be a JSON object, and its fields provide named values to be substituted into the jsonpath
expression. If the silent
argument is specified and is true
, the function suppresses the same errors as the @?
and @@
operators do.
jsonb_path_exists('{"a":[1,2,3,4,5]}', '$.a[*] ? (@ >= $min && @ <= $max)', '{"min":2, "max":4}')
→ t
jsonb_path_match
( target
jsonb
, path
jsonpath
[, vars
jsonb
[, silent
boolean
]] ) → boolean
Returns the result of a JSON path predicate check for the specified JSON value. Only the first item of the result is taken into account. If the result is not Boolean, then NULL
is returned. The optional vars
and silent
arguments act the same as for jsonb_path_exists
.
jsonb_path_match('{"a":[1,2,3,4,5]}', 'exists($.a[*] ? (@ >= $min && @ <= $max))', '{"min":2, "max":4}')
→ t
jsonb_path_query
( target
jsonb
, path
jsonpath
[, vars
jsonb
[, silent
boolean
]] ) → setof jsonb
Returns all JSON items returned by the JSON path for the specified JSON value. The optional vars
and silent
arguments act the same as for jsonb_path_exists
.
select * from jsonb_path_query('{"a":[1,2,3,4,5]}', '$.a[*] ? (@ >= $min && @ <= $max)', '{"min":2, "max":4}')
→
jsonb_path_query_array
( target
jsonb
, path
jsonpath
[, vars
jsonb
[, silent
boolean
]] ) → jsonb
Returns all JSON items returned by the JSON path for the specified JSON value, as a JSON array. The optional vars
and silent
arguments act the same as for jsonb_path_exists
.
jsonb_path_query_array('{"a":[1,2,3,4,5]}', '$.a[*] ? (@ >= $min && @ <= $max)', '{"min":2, "max":4}')
→ [2, 3, 4]
jsonb_path_query_first
( target
jsonb
, path
jsonpath
[, vars
jsonb
[, silent
boolean
]] ) → jsonb
Returns the first JSON item returned by the JSON path for the specified JSON value. Returns NULL
if there are no results. The optional vars
and silent
arguments act the same as for jsonb_path_exists
.
jsonb_path_query_first('{"a":[1,2,3,4,5]}', '$.a[*] ? (@ >= $min && @ <= $max)', '{"min":2, "max":4}')
→ 2
jsonb_path_exists_tz
( target
jsonb
, path
jsonpath
[, vars
jsonb
[, silent
boolean
]] ) → boolean
jsonb_path_match_tz
( target
jsonb
, path
jsonpath
[, vars
jsonb
[, silent
boolean
]] ) → boolean
jsonb_path_query_tz
( target
jsonb
, path
jsonpath
[, vars
jsonb
[, silent
boolean
]] ) → setof jsonb
jsonb_path_query_array_tz
( target
jsonb
, path
jsonpath
[, vars
jsonb
[, silent
boolean
]] ) → jsonb
jsonb_path_query_first_tz
( target
jsonb
, path
jsonpath
[, vars
jsonb
[, silent
boolean
]] ) → jsonb
These functions act like their counterparts described above without the _tz
suffix, except that these functions support comparisons of date/time values that require timezone-aware conversions. The example below requires interpretation of the date-only value 2015-08-02
as a timestamp with time zone, so the result depends on the current TimeZone setting. Due to this dependency, these functions are marked as stable, which means they cannot be used in index expressions. Their non-_tz counterparts are immutable, and so can be used in indexes, but will throw errors if asked to make such comparisons.
jsonb_path_exists_tz('["2015-08-01 12:00:00-05"]', '$[*] ? (@.datetime() < "2015-08-02".datetime())')
→ t
jsonb_pretty
( jsonb
) → text
Converts the given JSON value to pretty-printed, indented text.
jsonb_pretty('[{"f1":1,"f2":null}, 2]')
→
json_typeof
( json
) → text
jsonb_typeof
( jsonb
) → text
Returns the type of the top-level JSON value as a text string. Possible types are object
, array
, string
, number
, boolean
, and null
. (The null
result should not be confused with an SQL NULL; see the examples.)
json_typeof('-123.4')
→ number
json_typeof('null'::json)
→ null
json_typeof(NULL::json) IS NULL
→ t
Operator/Method
Description
Example(s)
number
+
number
→ number
Addition
jsonb_path_query('[2]', '$[0] + 3')
→ 5
+
number
→ number
Unary plus (no operation); unlike addition, this can iterate over multiple values
jsonb_path_query_array('{"x": [2,3,4]}', '+ $.x')
→ [2, 3, 4]
number
-
number
→ number
Subtraction
jsonb_path_query('[2]', '7 - $[0]')
→ 5
-
number
→ number
Negation; unlike subtraction, this can iterate over multiple values
jsonb_path_query_array('{"x": [2,3,4]}', '- $.x')
→ [-2, -3, -4]
number
*
number
→ number
Multiplication
jsonb_path_query('[4]', '2 * $[0]')
→ 8
number
/
number
→ number
Division
jsonb_path_query('[8.5]', '$[0] / 2')
→ 4.2500000000000000
number
%
number
→ number
Modulo (remainder)
jsonb_path_query('[32]', '$[0] % 10')
→ 2
value
.
type()
→ string
Type of the JSON item (see json_typeof
)
jsonb_path_query_array('[1, "2", {}]', '$[*].type()')
→ ["number", "string", "object"]
value
.
size()
→ number
Size of the JSON item (number of array elements, or 1 if not an array)
jsonb_path_query('{"m": [11, 15]}', '$.m.size()')
→ 2
value
.
double()
→ number
Approximate floating-point number converted from a JSON number or string
jsonb_path_query('{"len": "1.9"}', '$.len.double() * 2')
→ 3.8
number
.
ceiling()
→ number
Nearest integer greater than or equal to the given number
jsonb_path_query('{"h": 1.3}', '$.h.ceiling()')
→ 2
number
.
floor()
→ number
Nearest integer less than or equal to the given number
jsonb_path_query('{"h": 1.7}', '$.h.floor()')
→ 1
number
.
abs()
→ number
Absolute value of the given number
jsonb_path_query('{"z": -0.3}', '$.z.abs()')
→ 0.3
string
.
datetime()
→ datetime_type
(see note)
Date/time value converted from a string
jsonb_path_query('["2015-8-1", "2015-08-12"]', '$[*] ? (@.datetime() < "2015-08-2".datetime())')
→ "2015-8-1"
string
.
datetime(
template
)
→ datetime_type
(see note)
Date/time value converted from a string using the specified to_timestamp
template
jsonb_path_query_array('["12:30", "18:40"]', '$[*].datetime("HH24:MI")')
→ ["12:30:00", "18:40:00"]
object
.
keyvalue()
→ array
The object's key-value pairs, represented as an array of objects containing three fields: "key"
, "value"
, and "id"
; "id"
is a unique identifier of the object the key-value pair belongs to
jsonb_path_query_array('{"x": "20", "y": 32}', '$.keyvalue()')
→ [{"id": 0, "key": "x", "value": "20"}, {"id": 0, "key": "y", "value": 32}]
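These item methods chain left to right, which is handy for coercing loosely typed data inside a path expression. A sketch (the key name price is hypothetical):

```sql
-- .double() converts the JSON string to a number so that .ceiling() applies.
SELECT jsonb_path_query('{"price": "12.30"}'::jsonb,
                        '$.price.double().ceiling()') AS rounded_up;  -- 13
```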
Predicate/Value
Description
Example(s)
value
==
value
→ boolean
Equality comparison (this, and the other comparison operators, work on all JSON scalar values)
jsonb_path_query_array('[1, "a", 1, 3]', '$[*] ? (@ == 1)')
→ [1, 1]
jsonb_path_query_array('[1, "a", 1, 3]', '$[*] ? (@ == "a")')
→ ["a"]
value
!=
value
→ boolean
value
<>
value
→ boolean
Non-equality comparison
jsonb_path_query_array('[1, 2, 1, 3]', '$[*] ? (@ != 1)')
→ [2, 3]
jsonb_path_query_array('["a", "b", "c"]', '$[*] ? (@ <> "b")')
→ ["a", "c"]
value
<
value
→ boolean
Less-than comparison
jsonb_path_query_array('[1, 2, 3]', '$[*] ? (@ < 2)')
→ [1]
value
<=
value
→ boolean
Less-than-or-equal-to comparison
jsonb_path_query_array('["a", "b", "c"]', '$[*] ? (@ <= "b")')
→ ["a", "b"]
value
>
value
→ boolean
Greater-than comparison
jsonb_path_query_array('[1, 2, 3]', '$[*] ? (@ > 2)')
→ [3]
value
>=
value
→ boolean
Greater-than-or-equal-to comparison
jsonb_path_query_array('[1, 2, 3]', '$[*] ? (@ >= 2)')
→ [2, 3]
true
→ boolean
JSON constant true
jsonb_path_query('[{"name": "John", "parent": false}, {"name": "Chris", "parent": true}]', '$[*] ? (@.parent == true)')
→ {"name": "Chris", "parent": true}
false
→ boolean
JSON constant false
jsonb_path_query('[{"name": "John", "parent": false}, {"name": "Chris", "parent": true}]', '$[*] ? (@.parent == false)')
→ {"name": "John", "parent": false}
null
→ value
JSON constant null
(note that, unlike in SQL, comparison to null
works normally)
jsonb_path_query('[{"name": "Mary", "job": null}, {"name": "Michael", "job": "driver"}]', '$[*] ? (@.job == null) .name')
→ "Mary"
boolean
&&
boolean
→ boolean
Boolean AND
jsonb_path_query('[1, 3, 7]', '$[*] ? (@ > 1 && @ < 5)')
→ 3
boolean
||
boolean
→ boolean
Boolean OR
jsonb_path_query('[1, 3, 7]', '$[*] ? (@ < 1 || @ > 5)')
→ 7
!
boolean
→ boolean
Boolean NOT
jsonb_path_query('[1, 3, 7]', '$[*] ? (!(@ < 5))')
→ 7
boolean
is unknown
→ boolean
Tests whether a Boolean condition is unknown
.
jsonb_path_query('[-1, 2, 7, "foo"]', '$[*] ? ((@ > 0) is unknown)')
→ "foo"
string
like_regex
string
[ flag
string
] → boolean
Tests whether the first operand matches the regular expression given by the second operand, optionally with modifications described by a string of flag
characters (see Section 9.16.2.3).
jsonb_path_query_array('["abc", "abd", "aBdC", "abdacb", "babc"]', '$[*] ? (@ like_regex "^ab.*c")')
→ ["abc", "abdacb"]
jsonb_path_query_array('["abc", "abd", "aBdC", "abdacb", "babc"]', '$[*] ? (@ like_regex "^ab.*c" flag "i")')
→ ["abc", "aBdC", "abdacb"]
string
starts with
string
→ boolean
Tests whether the second operand is an initial substring of the first operand.
jsonb_path_query('["John Smith", "Mary Stone", "Bob Johnson"]', '$[*] ? (@ starts with "John")')
→ "John Smith"
exists
(
path_expression
)
→ boolean
Tests whether a path expression matches at least one SQL/JSON item. Returns unknown
if the path expression would result in an error; the second example uses this to avoid a no-such-key error in strict mode.
jsonb_path_query('{"x": [1, 2], "y": [2, 4]}', 'strict $.* ? (exists (@ ? (@[*] > 2)))')
→ [2, 4]
jsonb_path_query_array('{"value": 41}', 'strict $ ? (exists (@.name)) .name')
→ []
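Filter predicates compose with the Boolean operators and can be followed by further accessors. A sketch combining a comparison, like_regex, and a trailing member accessor (the keys name and age are hypothetical):

```sql
-- Keep only elements passing both conditions, then project their .name.
SELECT jsonb_path_query_array(
         '[{"name": "ana", "age": 31}, {"name": "bob", "age": 17}]'::jsonb,
         '$[*] ? (@.age >= 18 && @.name like_regex "^a") .name'
       ) AS adult_a_names;  -- ["ana"]
```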
Function
Description
generate_subscripts ( array anyarray, dim integer ) → setof integer
Generates a series comprising the valid subscripts of the dim'th dimension of the given array.
generate_subscripts ( array anyarray, dim integer, reverse boolean ) → setof integer
Generates a series comprising the valid subscripts of the dim'th dimension of the given array. When reverse is true, the series is returned in reverse order.
Function
Description
pg_is_in_recovery () → boolean
Returns true if recovery is still in progress.
pg_last_wal_receive_lsn () → pg_lsn
Returns the last write-ahead log location that has been received and synced to disk by streaming replication. While streaming replication is in progress this will increase monotonically. If recovery has completed then this will remain static at the location of the last WAL record received and synced to disk during recovery. If streaming replication is disabled, or if it has not yet started, the function returns NULL.
pg_last_wal_replay_lsn () → pg_lsn
Returns the last write-ahead log location that has been replayed during recovery. If recovery is still in progress this will increase monotonically. If recovery has completed then this will remain static at the location of the last WAL record applied during recovery. When the server has been started normally without recovery, the function returns NULL.
pg_last_xact_replay_timestamp () → timestamp with time zone
Returns the time stamp of the last transaction replayed during recovery. This is the time at which the commit or abort WAL record for that transaction was generated on the primary. If no transactions have been replayed during recovery, the function returns NULL.
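Together these functions support a simple standby health check; a sketch to run on the standby (pg_wal_lsn_diff converts the two LSNs into a byte count):

```sql
-- Replication lag in bytes is the gap between WAL received and WAL replayed.
SELECT pg_is_in_recovery()                         AS in_recovery,
       pg_last_wal_receive_lsn()                   AS received,
       pg_last_wal_replay_lsn()                    AS replayed,
       pg_wal_lsn_diff(pg_last_wal_receive_lsn(),
                       pg_last_wal_replay_lsn())   AS replay_lag_bytes,
       pg_last_xact_replay_timestamp()             AS last_replay_time;
```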
Function
Description
pg_is_wal_replay_paused () → boolean
Returns true if recovery is paused.
pg_promote ( wait boolean DEFAULT true, wait_seconds integer DEFAULT 60 ) → boolean
Promotes a standby server to primary status. With wait set to true (the default), the function waits until promotion is completed or wait_seconds seconds have passed, and returns true if promotion is successful and false otherwise. This function is restricted to superusers by default, but other users can be granted EXECUTE to run the function.
pg_wal_replay_pause () → void
Pauses recovery. While recovery is paused, no further database changes are applied. If hot standby is active, all new queries will see the same consistent snapshot of the database, and no further query conflicts will be generated until recovery is resumed. This function is restricted to superusers by default, but other users can be granted EXECUTE to run the function.
pg_wal_replay_resume () → void
Restarts recovery if it was paused. This function is restricted to superusers by default, but other users can be granted EXECUTE to run the function.
Function
Description
pg_partition_tree ( regclass ) → setof record ( relid regclass, parentrelid regclass, isleaf boolean, level integer )
Lists the tables or indexes in the partition tree of the given partitioned table or partitioned index, with one row for each partition. Information provided includes the OID of the partition, the OID of its immediate parent, a boolean value telling if the partition is a leaf, and an integer telling its level in the hierarchy. The level value is 0 for the input table or index, 1 for its immediate child partitions, 2 for their partitions, and so on. Returns no rows if the relation does not exist or is not a partition or partitioned table.
pg_partition_ancestors ( regclass ) → setof regclass
Lists the ancestor relations of the given partition, including the relation itself. Returns no rows if the relation does not exist or is not a partition or partitioned table.
pg_partition_root ( regclass ) → regclass
Returns the top-most parent of the partition tree to which the given relation belongs. Returns NULL if the relation does not exist or is not a partition or partitioned table.
Function Description |
Returns the names of all files (and directories and other special files) in the specified directory. The This function is restricted to superusers by default, but other users can be granted EXECUTE to run the function. |
Returns the name, size, and last modification time (mtime) of each ordinary file in the server's log directory. Filenames beginning with a dot, directories, and other special files are excluded. This function is restricted to superusers and members of the |
Returns the name, size, and last modification time (mtime) of each ordinary file in the server's write-ahead log (WAL) directory. Filenames beginning with a dot, directories, and other special files are excluded. This function is restricted to superusers and members of the |
Returns the name, size, and last modification time (mtime) of each ordinary file in the server's WAL archive status directory ( This function is restricted to superusers and members of the |
Returns the name, size, and last modification time (mtime) of each ordinary file in the temporary file directory for the specified This function is restricted to superusers and members of the |
Returns all or part of a text file, starting at the given byte This function is restricted to superusers by default, but other users can be granted EXECUTE to run the function. |
Returns all or part of a file. This function is identical to This function is restricted to superusers by default, but other users can be granted EXECUTE to run the function. In combination with the |
Returns a record containing the file's size, last access time stamp, last modification time stamp, last file status change time stamp (Unix platforms only), file creation time stamp (Windows only), and a flag indicating if it is a directory. This function is restricted to superusers by default, but other users can be granted EXECUTE to run the function. |
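A sketch of typical calls, assuming the caller is a superuser or has been granted EXECUTE on these functions:

```sql
-- List ordinary files in the server's log directory
SELECT * FROM pg_ls_logdir();

-- Read the first 200 bytes of a file under the data directory
SELECT pg_read_file('postmaster.pid', 0, 200);

-- Size and last modification time of a file
SELECT size, modification FROM pg_stat_file('postgresql.conf');
```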
Function Description |
Obtains an exclusive session-level advisory lock, waiting if necessary. |
Obtains a shared session-level advisory lock, waiting if necessary. |
Releases a previously-acquired exclusive session-level advisory lock. Returns |
Releases all session-level advisory locks held by the current session. (This function is implicitly invoked at session end, even if the client disconnects ungracefully.) |
Releases a previously-acquired shared session-level advisory lock. Returns |
Obtains an exclusive transaction-level advisory lock, waiting if necessary. |
Obtains a shared transaction-level advisory lock, waiting if necessary. |
Obtains an exclusive session-level advisory lock if available. This will either obtain the lock immediately and return |
Obtains a shared session-level advisory lock if available. This will either obtain the lock immediately and return |
Obtains an exclusive transaction-level advisory lock if available. This will either obtain the lock immediately and return |
Obtains a shared transaction-level advisory lock if available. This will either obtain the lock immediately and return |
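The lock key `42` below is an arbitrary application-chosen value; advisory locks have meaning only by convention among the applications using them:

```sql
SELECT pg_advisory_lock(42);        -- waits until the session-level lock is granted
SELECT pg_try_advisory_lock(42);    -- returns immediately: true if obtained, false if not
SELECT pg_advisory_unlock(42);      -- true if the lock was held and is now released

-- Transaction-level variant: released automatically at commit or rollback
BEGIN;
SELECT pg_advisory_xact_lock(42);
COMMIT;
```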
|
| does user have privilege for any column of table |
|
| does current user have privilege for any column of table |
|
| does user have privilege for column |
|
| does current user have privilege for column |
|
| does user have privilege for database |
|
| does current user have privilege for database |
|
| does user have privilege for foreign-data wrapper |
|
| does current user have privilege for foreign-data wrapper |
|
| does user have privilege for function |
|
| does current user have privilege for function |
|
| does user have privilege for language |
|
| does current user have privilege for language |
|
| does user have privilege for schema |
|
| does current user have privilege for schema |
|
| does user have privilege for sequence |
|
| does current user have privilege for sequence |
|
| does user have privilege for foreign server |
|
| does current user have privilege for foreign server |
|
| does user have privilege for table |
|
| does current user have privilege for table |
|
| does user have privilege for tablespace |
|
| does current user have privilege for tablespace |
|
| does user have privilege for type |
|
| does current user have privilege for type |
|
| does user have privilege for role |
|
| does current user have privilege for role |
|
| does current user have row level security active for table |
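A few illustrative calls; `alice`, `accounts`, and `balance` are placeholder names:

```sql
SELECT has_table_privilege('alice', 'accounts', 'SELECT');
SELECT has_column_privilege('alice', 'accounts', 'balance', 'UPDATE');
SELECT has_database_privilege(current_user, current_database(), 'CONNECT');
SELECT row_security_active('accounts');
```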
| equal |
|
|
| contains element |
|
|
| contains element |
|
|
|
| get the default access privileges for an object belonging to |
|
| get |
|
| build an |
|
| is collation visible in search path |
|
| is conversion visible in search path |
|
| is function visible in search path |
|
| is operator class visible in search path |
|
| is operator visible in search path |
|
| is operator family visible in search path |
|
| is statistics object visible in search path |
|
| is table visible in search path |
|
| is text search configuration visible in search path |
|
| is text search dictionary visible in search path |
|
| is text search parser visible in search path |
|
| is text search template visible in search path |
|
| is type (or domain) visible in search path |
|
| get SQL name of a data type |
|
| get definition of a constraint |
|
| get definition of a constraint |
|
| decompile internal form of an expression, assuming that any Vars in it refer to the relation indicated by the second parameter |
|
| decompile internal form of an expression, assuming that any Vars in it refer to the relation indicated by the second parameter |
|
| get definition of a function or procedure |
|
| get argument list of function's or procedure's definition (with default values) |
|
| get argument list to identify a function or procedure (without default values) |
|
| get |
|
| get |
|
| get |
|
| get list of SQL keywords and their categories |
|
| get |
|
| get |
|
| get name of the sequence that a serial or identity column uses |
|
| get |
|
| get |
|
| get |
|
| get role name with given OID |
|
| get underlying |
|
| get underlying |
|
| get underlying |
|
| get underlying |
|
| get underlying |
|
| test whether an index column has a specified property |
|
| test whether an index has a specified property |
|
| test whether an index access method has a specified property |
|
| get the set of storage option name/value pairs |
|
| get the set of database OIDs that have objects in the tablespace |
|
| get the path in the file system that this tablespace is located in |
|
| get the data type of any value |
|
| get the collation of the argument |
|
| get the OID of the named relation |
|
| get the OID of the named function |
|
| get the OID of the named function |
|
| get the OID of the named operator |
|
| get the OID of the named operator |
|
| get the OID of the named type |
|
| get the OID of the named schema |
|
| get the OID of the named role |
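The `to_reg*` variants return `NULL` rather than raising an error when the name is not found, which makes them convenient in queries; `alice` below is a placeholder role name:

```sql
SELECT to_regclass('pg_catalog.pg_class');  -- OID of the relation, or NULL if absent
SELECT to_regproc('now');                   -- OID of the function
SELECT to_regtype('integer');               -- OID of the type
SELECT to_regrole('alice');                 -- OID of the role, or NULL
```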
| Does the column sort in ascending order on a forward scan? |
| Does the column sort in descending order on a forward scan? |
| Does the column sort with nulls first on a forward scan? |
| Does the column sort with nulls last on a forward scan? |
| Does the column possess any defined sort ordering? |
| Can the column be scanned in order by a “distance” operator, for example |
| Can the column value be returned by an index-only scan? |
| Does the column natively support |
| Does the column support |
| Can the index be used in a |
| Does the index support plain (non-bitmap) scans? |
| Does the index support bitmap scans? |
| Can the scan direction be changed in mid-scan (to support |
| Does the access method support |
| Does the access method support unique indexes? |
| Does the access method support indexes with multiple columns? |
| Does the access method support exclusion constraints? |
| Does the access method support the |
|
| get description of a database object |
|
| get identity of a database object |
|
| get external representation of a database object's address |
|
| get address of a database object from its external representation |
|
| get comment for a table column |
|
| get comment for a database object |
|
| get comment for a database object (deprecated) |
|
| get comment for a shared database object |
|
| get current transaction ID, assigning a new one if the current transaction does not have one |
|
| same as |
|
| get current snapshot |
|
| get in-progress transaction IDs in snapshot |
|
| get |
|
| get |
|
| is transaction ID visible in snapshot? (do not use with subtransaction ids) |
|
| report the status of the given transaction: |
| Earliest transaction ID (txid) that is still active. All earlier transactions will either be committed and visible, or rolled back and dead. |
| First as-yet-unassigned txid. All txids greater than or equal to this are not yet started as of the time of the snapshot, and thus invisible. |
| Active txids at the time of the snapshot. The list includes only those active txids between |
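A quick sketch of the txid functions (the literal txid `1000` is a placeholder):

```sql
SELECT txid_current();           -- assigns a txid if necessary and returns it
SELECT txid_current_snapshot();  -- formatted as xmin:xmax:xip_list
SELECT txid_visible_in_snapshot(1000, txid_current_snapshot());
```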
|
| get commit timestamp of a transaction |
|
| get transaction ID and commit timestamp of latest committed transaction |
|
| Returns information about current checkpoint state. |
|
| Returns information about current control file state. |
|
| Returns information about cluster initialization state. |
|
| Returns information about recovery state. |
|
Function Description |
Returns the number of the current row within its partition, counting from 1. |
Returns the rank of the current row, with gaps; that is, the |
Returns the rank of the current row, without gaps; this function effectively counts peer groups. |
Returns the relative rank of the current row, that is ( |
Returns the cumulative distribution, that is (number of partition rows preceding or peers with current row) / (total partition rows). The value thus ranges from 1/ |
Returns an integer ranging from 1 to the argument value, dividing the partition as equally as possible. |
Returns |
Returns |
Returns |
Returns |
Returns |
Function Description Example(s) |
Returns the current value of the setting
|
Sets the parameter
|
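A sketch of reading and changing settings from SQL:

```sql
SELECT current_setting('DateStyle');

-- With missing_ok = true, an unrecognized name yields NULL instead of an error
SELECT current_setting('no.such.setting', true);

-- is_local = false: the new value persists for the rest of the session
SELECT set_config('search_path', 'public', false);
```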
Function Description |
Cancels the current query of the session whose backend process has the specified process ID. This is also allowed if the calling role is a member of the role whose backend is being canceled or the calling role has been granted |
Causes all processes of the PostgreSQL server to reload their configuration files. (This is initiated by sending a SIGHUP signal to the postmaster process, which in turn sends SIGHUP to each of its children.) |
Signals the log-file manager to switch to a new output file immediately. This works only when the built-in log collector is running, since otherwise there is no log-file manager subprocess. |
Terminates the session whose backend process has the specified process ID. This is also allowed if the calling role is a member of the role whose backend is being terminated or the calling role has been granted |
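Typical usage, where `12345` is a placeholder process ID obtained from `pg_stat_activity`:

```sql
SELECT pg_cancel_backend(12345);     -- cancel that backend's current query
SELECT pg_terminate_backend(12345);  -- terminate that session entirely
SELECT pg_reload_conf();             -- make all server processes reread config files
```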
Function Description |
Generates a series of values from start to stop, with a step interval of step, which defaults to 1. |
|
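For example:

```sql
SELECT * FROM generate_series(2, 10, 2);      -- 2, 4, 6, 8, 10
SELECT * FROM generate_series(5, 1, -1);      -- counts down: 5, 4, 3, 2, 1
SELECT * FROM generate_series(1.0, 2.0, 0.5); -- numeric steps work too
```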
Function Description |
This function is restricted to superusers by default, but other users can be granted EXECUTE to run the function. |
Returns the current write-ahead log flush location (see notes below). |
Returns the current write-ahead log insert location (see notes below). |
Returns the current write-ahead log write location (see notes below). |
Prepares the server to begin an on-line backup. The only required parameter is an arbitrary user-defined label for the backup. (Typically this would be the name under which the backup dump file will be stored.) If the optional second parameter is given as When used in exclusive mode, this function writes a backup label file ( This function is restricted to superusers by default, but other users can be granted EXECUTE to run the function. |
Finishes performing an exclusive or non-exclusive on-line backup. The There is an optional second parameter of type When executed on a primary, this function also creates a backup history file in the write-ahead log archive area. The history file includes the label given to The result of the function is a single record. The This function is restricted to superusers by default, but other users can be granted EXECUTE to run the function. |
Finishes performing an exclusive on-line backup. This simplified version is equivalent to This function is restricted to superusers by default, but other users can be granted EXECUTE to run the function. |
Returns true if an on-line exclusive backup is in progress. |
Returns the start time of the current on-line exclusive backup if one is in progress, otherwise |
Forces the server to switch to a new write-ahead log file, which allows the current file to be archived (assuming you are using continuous archiving). The result is the ending write-ahead log location plus 1 within the just-completed write-ahead log file. If there has been no write-ahead log activity since the last write-ahead log switch, This function is restricted to superusers by default, but other users can be granted EXECUTE to run the function. |
Converts a write-ahead log location to the name of the WAL file holding that location. |
Converts a write-ahead log location to a WAL file name and byte offset within that file. |
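A sketch of inspecting the current WAL position (on a primary server):

```sql
SELECT pg_current_wal_lsn();                           -- current write location
SELECT pg_walfile_name(pg_current_wal_lsn());          -- WAL file holding that LSN
SELECT * FROM pg_walfile_name_offset(pg_current_wal_lsn());  -- file name plus byte offset
```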
|
Function Description |
Saves the transaction's current snapshot and returns a |
Function Description |
|
Drops the physical or logical replication slot named |
Creates a new logical (decoding) replication slot named |
Copies an existing physical replication slot named |
Copies an existing logical replication slot named |
Returns changes in the slot |
Behaves just like the |
Behaves just like the |
Behaves just like the |
Advances the current confirmed position of a replication slot named |
Creates a replication origin with the given external name, and returns the internal ID assigned to it. |
Deletes a previously-created replication origin, including any associated replay progress. |
Looks up a replication origin by name and returns the internal ID. If no such replication origin is found an error is thrown. |
Marks the current session as replaying from the given origin, allowing replay progress to be tracked. Can only be used if no origin is currently selected. Use |
Cancels the effects of |
Returns true if a replication origin has been selected in the current session. |
Returns the replay location for the replication origin selected in the current session. The parameter |
Marks the current transaction as replaying a transaction that has committed at the given LSN and timestamp. Can only be called when a replication origin has been selected using |
Cancels the effects of |
Sets replication progress for the given node to the given location. This is primarily useful for setting up the initial location, or setting a new location after configuration changes and similar. Be aware that careless use of this function can lead to inconsistently replicated data. |
Returns the replay location for the given replication origin. The parameter |
Emits a logical decoding message. This can be used to pass generic messages to logical decoding plugins through WAL. The |
Function Description |
Shows the number of bytes used to store any individual data value. If applied directly to a table column value, this reflects any compression that was done. |
Computes the total disk space used by the database with the specified name or OID. To use this function, you must have |
Computes the total disk space used by indexes attached to the specified table. |
Computes the disk space used by one “fork” of the specified relation. (Note that for most purposes it is more convenient to use the higher-level functions
|
Converts a size in human-readable format (as returned by |
Converts a size in bytes into a more easily human-readable format with size units (bytes, kB, MB, GB or TB as appropriate). Note that the units are powers of 2 rather than powers of 10, so 1kB is 1024 bytes, 1MB is 10242 = 1048576 bytes, and so on. |
Computes the disk space used by the specified table, excluding indexes (but including its TOAST table if any, free space map, and visibility map). |
Computes the total disk space used in the tablespace with the specified name or OID. To use this function, you must have |
Computes the total disk space used by the specified table, including all indexes and TOAST data. The result is equivalent to |
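A sketch of combining the size functions with `pg_size_pretty`; `accounts` is a placeholder table name:

```sql
SELECT pg_size_pretty(pg_database_size(current_database()));
SELECT pg_size_pretty(pg_table_size('accounts'));           -- table + TOAST, no indexes
SELECT pg_size_pretty(pg_total_relation_size('accounts'));  -- everything, indexes included
SELECT pg_size_bytes('2GB');                                -- 2147483648
```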
Function Description |
|
Returns the entire file path name (relative to the database cluster's data directory, |
Returns a relation's OID given the tablespace OID and filenode it is stored under. This is essentially the inverse mapping of |
Function Description |
|
|
Function Description |
Scans the specified BRIN index to find page ranges in the base table that are not currently summarized by the index; for any such range it creates a new summary index tuple by scanning those table pages. Returns the number of new page range summaries that were inserted into the index. |
Summarizes the page range covering the given block, if not already summarized. This is like |
Removes the BRIN index tuple that summarizes the page range covering the given table block, if there is one. |
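A sketch, assuming a BRIN index named `brin_idx` (a placeholder) exists:

```sql
SELECT brin_summarize_new_values('brin_idx');   -- summarize all unsummarized ranges
SELECT brin_summarize_range('brin_idx', 0);     -- summarize the range covering block 0
SELECT brin_desummarize_range('brin_idx', 0);   -- remove that range's summary tuple
```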
|
|
| name of current database (called “catalog” in the SQL standard) |
|
| name of current database |
|
| text of the currently executing query, as submitted by the client (might contain more than one statement) |
|
| equivalent to |
|
| name of current schema |
|
| names of schemas in search path, optionally including implicit schemas |
|
| user name of current execution context |
|
| address of the remote connection |
|
| port of the remote connection |
|
| address of the local connection |
|
| port of the local connection |
|
| Process ID of the server process attached to the current session |
|
| Process ID(s) that are blocking specified server process ID from acquiring a lock |
|
| configuration load time |
|
| Primary log file name, or log in the requested format, currently in use by the logging collector |
|
| OID of session's temporary schema, or 0 if none |
|
| is schema another session's temporary schema? |
|
|
|
| channel names that the session is currently listening on |
|
| fraction of the asynchronous notification queue currently occupied (0-1) |
|
| server start time |
|
| Process ID(s) that are blocking specified server process ID from acquiring a safe snapshot |
|
| current nesting level of PostgreSQL triggers (0 if not called, directly or indirectly, from inside a trigger) |
|
| session user name |
|
| equivalent to |
|
|
Function Description Example Usage |
Skips the update entirely if it would not cause any actual change to the row. See below for details.
|
Automatically updates a tsvector column from the associated plain-text document column(s). The text search configuration to use is specified by name as a trigger argument. See for details.
|
Automatically updates a tsvector column from the associated plain-text document column(s). The text search configuration to use is taken from a regconfig column of the table. See for details.
|
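A sketch of wiring up `tsvector_update_trigger` on a hypothetical `messages` table (on PostgreSQL versions before 11, write `EXECUTE PROCEDURE` instead of `EXECUTE FUNCTION`):

```sql
-- Hypothetical table whose tsv column is maintained by the trigger
CREATE TABLE messages (title text, body text, tsv tsvector);

CREATE TRIGGER tsvectorupdate BEFORE INSERT OR UPDATE
    ON messages FOR EACH ROW
    EXECUTE FUNCTION tsvector_update_trigger(tsv, 'pg_catalog.english', title, body);
```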
PostgreSQL provides a function to inspect complex statistics defined using the `CREATE STATISTICS` command. `pg_mcv_list_items` returns a list of all items stored in a multi-column MCV list, and returns the following columns:
The pg_mcv_list_items
function can be used like this:
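For example, assuming an extended statistics object named `stts` (a placeholder) whose definition includes an MCV list:

```sql
SELECT m.*
FROM pg_statistic_ext
  JOIN pg_statistic_ext_data ON (oid = stxoid),
  pg_mcv_list_items(stxdmcv) m
WHERE stxname = 'stts';
```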
Values of type `pg_mcv_list` can be obtained only from the `pg_statistic_ext_data.stxdmcv` column.
PostgreSQL provides these helper functions to retrieve information from event triggers.
For more information about event triggers, see Chapter 39.
`pg_event_trigger_ddl_commands` returns a list of DDL commands executed by each user action, when invoked in a function attached to a `ddl_command_end` event trigger. If called in any other context, an error is raised. `pg_event_trigger_ddl_commands` returns one row for each base command executed; some commands that are a single SQL sentence may return more than one row. This function returns the following columns:
`pg_event_trigger_dropped_objects` returns a list of all objects dropped by the command in whose `sql_drop` event it is called. If called in any other context, an error is raised. This function returns the following columns:
The `pg_event_trigger_dropped_objects` function can be used in an event trigger like this:
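For example (a sketch; on PostgreSQL versions before 11, write `EXECUTE PROCEDURE` instead of `EXECUTE FUNCTION`):

```sql
CREATE FUNCTION test_event_trigger_for_drops()
        RETURNS event_trigger LANGUAGE plpgsql AS $$
DECLARE
    obj record;
BEGIN
    FOR obj IN SELECT * FROM pg_event_trigger_dropped_objects()
    LOOP
        RAISE NOTICE '% dropped object: % %.% %',
                     tg_tag,
                     obj.object_type,
                     obj.schema_name,
                     obj.object_name,
                     obj.object_identity;
    END LOOP;
END;
$$;

CREATE EVENT TRIGGER test_event_trigger_for_drops
   ON sql_drop
   EXECUTE FUNCTION test_event_trigger_for_drops();
```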
The functions shown in Table 9.98 provide information about a table for which a `table_rewrite` event has just been called. If called in any other context, an error is raised.
These functions can be used in an event trigger like this:
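For example, a sketch that logs each rewrite as it happens (function and trigger names are placeholders; the reason code is release dependent):

```sql
CREATE FUNCTION log_rewrite() RETURNS event_trigger
LANGUAGE plpgsql AS $$
BEGIN
  RAISE NOTICE 'rewriting table % for reason %',
               pg_event_trigger_table_rewrite_oid()::regclass,
               pg_event_trigger_table_rewrite_reason();
END;
$$;

CREATE EVENT TRIGGER table_rewrite_log ON table_rewrite
   EXECUTE FUNCTION log_rewrite();
```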
Generates a series of values from start to stop, with a step interval of step. In the with time zone form, times of day and daylight-saving-time adjustments are computed according to the time zone named by the timezone argument, or according to the current setting if that argument is omitted.
Creates a named marker record in the write-ahead log that can later be used as a recovery target, and returns the corresponding write-ahead log location. The given name can then be used with to specify the point up to which recovery will proceed. Avoid creating multiple restore points with the same name, since recovery will stop at the first one whose name matches the recovery target.
Calculates the difference in bytes between two write-ahead log locations. This can be used with `pg_stat_replication` or some of the functions shown in to get the replication lag.
A transaction can export more than one snapshot, if needed. Note that doing so is only useful in `READ COMMITTED` transactions, since in `REPEATABLE READ` and higher isolation levels, transactions use the same snapshot throughout their lifetime. Once a transaction has exported any snapshots, it cannot be prepared with .
Creates a new physical replication slot named `slot_name`. The optional second parameter, when `true`, specifies that the LSN for this replication slot be reserved immediately; otherwise the LSN is reserved on first connection from a streaming replication client. Streaming changes from a physical slot is only possible with the streaming-replication protocol — see . The optional third parameter, `temporary`, when set to true, specifies that the slot should not be permanently stored to disk and is only meant for use by the current session. Temporary slots are also released upon any error. This function corresponds to the replication protocol command `CREATE_REPLICATION_SLOT ... PHYSICAL`.
`fsm` returns the size of the Free Space Map (see ) associated with the relation.

`vm` returns the size of the Visibility Map (see ) associated with the relation.
Returns the “filenode” number currently assigned to the specified relation. The filenode is the base component of the file name(s) used for the relation (see for more information). For most relations the result is the same as `pg_class.relfilenode`, but for certain system catalogs `relfilenode` is zero and this function must be used to get the correct value. The function returns NULL if passed a relation that does not have storage, such as a view.
Returns the actual version of the collation object as it is currently installed in the operating system. If this is different from the value in `pg_collation.collversion`, then objects depending on the collation might need to be rebuilt. See also .
Adds collations to the system catalog `pg_collation` based on all the locales it finds in the operating system. This is what `initdb` uses; see for more details. If additional locales are installed into the operating system later on, this function can be run again to add collations for the new locales. Locales that match existing entries in `pg_collation` will be skipped. (But collation objects based on locales that are no longer present in the operating system are not removed by this function.) The `schema` parameter would typically be `pg_catalog`, but that is not a requirement; the collations could be installed into some other schema as well. The function returns the number of new collation objects it created.
Cleans up the “pending” list of the specified GIN index by moving entries in it, in bulk, to the main GIN data structure. Returns the number of pages removed from the pending list. If the argument is a GIN index built with the `fastupdate` option disabled, no cleanup happens and the result is zero, because the index doesn't have a pending list. See and for details about the pending list and the `fastupdate` option.
is a JIT compiler extension available (see ) and the configuration parameter set to `on`?
PostgreSQL version information. See also for a machine-readable version.
| Name | Type | Description |
|---|---|---|
| `index` | `int` | index of the item in the MCV list |
| `values` | `text[]` | values stored in the MCV item |
| `nulls` | `boolean[]` | flags identifying `NULL` values |
| `frequency` | `double precision` | frequency of this MCV item |
| `base_frequency` | `double precision` | base frequency of this MCV item |
| Name | Type | Description |
|---|---|---|
| `classid` | `oid` | OID of catalog the object belongs in |
| `objid` | `oid` | OID of the object itself |
| `objsubid` | `integer` | Sub-object ID (e.g., attribute number for a column) |
| `command_tag` | `text` | Command tag |
| `object_type` | `text` | Type of the object |
| `schema_name` | `text` | Name of the schema the object belongs in, if any; otherwise `NULL`. No quoting is applied. |
| `object_identity` | `text` | Text rendering of the object identity, schema-qualified. Each identifier included in the identity is quoted if necessary. |
| `in_extension` | `boolean` | True if the command is part of an extension script |
| `command` | `pg_ddl_command` | A complete representation of the command, in internal format. This cannot be output directly, but it can be passed to other functions to obtain different pieces of information about the command. |
| Name | Type | Description |
|---|---|---|
| `classid` | `oid` | OID of catalog the object belonged in |
| `objid` | `oid` | OID of the object itself |
| `objsubid` | `integer` | Sub-object ID (e.g., attribute number for a column) |
| `original` | `boolean` | True if this was one of the root object(s) of the deletion |
| `normal` | `boolean` | True if there was a normal dependency relationship in the dependency graph leading to this object |
| `is_temporary` | `boolean` | True if this was a temporary object |
| `object_type` | `text` | Type of the object |
| `schema_name` | `text` | Name of the schema the object belonged in, if any; otherwise `NULL`. No quoting is applied. |
| `object_name` | `text` | Name of the object, if the combination of schema and name can be used as a unique identifier for the object; otherwise `NULL`. No quoting is applied, and name is never schema-qualified. |
| `object_identity` | `text` | Text rendering of the object identity, schema-qualified. Each identifier included in the identity is quoted if necessary. |
| `address_names` | `text[]` | An array that, together with `object_type` and `address_args`, can be used by the `pg_get_object_address` function to recreate the object address in a remote server containing an identically named object of the same kind. |
| `address_args` | `text[]` | Complement for `address_names` |
| Function | Description |
|---|---|
| `pg_event_trigger_table_rewrite_oid() → oid` | Returns the OID of the table about to be rewritten. |
| `pg_event_trigger_table_rewrite_reason() → integer` | Returns a code explaining the reason(s) for rewriting. The exact meaning of the codes is release dependent. |