The usual logical operators are available: AND, OR, NOT.

SQL uses a three-valued logic system with true, false, and null, which represents “unknown”. The operators AND and OR are commutative, that is, you can switch the left and right operand without affecting the result. But see Section 4.2.14 for more information about the order of evaluation of subexpressions. Observe the following truth tables:
| a | b | a AND b | a OR b |
|-------|-------|---------|--------|
| TRUE  | TRUE  | TRUE    | TRUE   |
| TRUE  | FALSE | FALSE   | TRUE   |
| TRUE  | NULL  | NULL    | TRUE   |
| FALSE | FALSE | FALSE   | FALSE  |
| FALSE | NULL  | FALSE   | NULL   |
| NULL  | NULL  | NULL    | NULL   |

| a | NOT a |
|-------|-------|
| TRUE  | FALSE |
| FALSE | TRUE  |
| NULL  | NULL  |
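These rules can be checked directly; a brief sketch (results follow the truth tables above):

```sql
SELECT (NULL AND FALSE) AS a,  -- false: the result is determined whatever the unknown is
       (NULL AND TRUE)  AS b,  -- null: the result depends on the unknown operand
       (NULL OR TRUE)   AS c,  -- true
       (NULL OR FALSE)  AS d;  -- null
```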
Mathematical operators are provided for many PostgreSQL types. For types without standard mathematical conventions (e.g., date/time types) we describe the actual behavior in subsequent sections.

Table 9.4 shows the available mathematical operators.
Table 9.4. Mathematical Operators
The bitwise operators work only on integral data types, whereas the others are available for all numeric data types. The bitwise operators are also available for the bit string types bit and bit varying, as shown in Table 9.13.
Table 9.5 shows the available mathematical functions. In the table, dp indicates double precision. Many of these functions are provided in multiple forms with different argument types. Except where noted, any given form of a function returns the same data type as its argument. The functions working with double precision data are mostly implemented on top of the host system's C library; accuracy and behavior in boundary cases can therefore vary depending on the host system.
Table 9.5. Mathematical Functions
Table 9.6 shows functions for generating random numbers.
Table 9.6. Random Functions
The characteristics of the values returned by random() depend on the system implementation. It is not suitable for cryptographic applications; see the pgcrypto module for an alternative.
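As a usage sketch (not part of the original text): a common idiom for a random integer between 1 and 10 is

```sql
-- random() returns a double precision value in [0.0, 1.0),
-- so floor(random() * 10 + 1) is an integer value from 1 to 10
SELECT floor(random() * 10 + 1)::int;
```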
Finally, Table 9.7 shows the available trigonometric functions. All trigonometric functions take arguments and return values of type double precision. Each of the trigonometric functions comes in two variants, one that measures angles in radians and one that measures angles in degrees.
Table 9.7. Trigonometric Functions
Another way to work with angles measured in degrees is to use the unit transformation functions radians() and degrees() shown earlier. However, using the degree-based trigonometric functions is preferred, as that way avoids roundoff error for special cases such as sind(30).
This section describes functions and operators for examining and manipulating bit strings, that is, values of the types bit and bit varying. Aside from the usual comparison operators, the operators shown in Table 9.13 can be used. Bit string operands of &, |, and # must be of equal length. When bit shifting, the original length of the string is preserved, as shown in the examples.
Table 9.13. Bit String Operators
The following SQL-standard functions work on bit strings as well as character strings: length, bit_length, octet_length, position, substring, overlay.
The following functions work on bit strings as well as binary strings: get_bit, set_bit. When working with a bit string, these functions number the first (leftmost) bit of the string as bit 0.
In addition, it is possible to cast integral values to and from type bit. Some examples:
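For instance:

```sql
SELECT 44::bit(10);              -- 0000101100
SELECT 44::bit(3);               -- 100
SELECT cast(-44 as bit(12));     -- 111111010100
SELECT '1110'::bit(4)::integer;  -- 14
```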
Note that casting to just “bit” means casting to bit(1), and so will deliver only the least significant bit of the integer.

Casting an integer to bit(n) copies the rightmost n bits. Casting an integer to a bit string width wider than the integer itself will sign-extend on the left.
The usual comparison operators are available, as shown in Table 9.1.
Table 9.1. Comparison Operators
The != operator is converted to <> in the parser stage. It is not possible to implement != and <> operators that do different things.
Comparison operators are available for all relevant data types. All comparison operators are binary operators that return values of type boolean; expressions like 1 < 2 < 3 are not valid (because there is no < operator to compare a Boolean value with 3).
There are also some comparison predicates, as shown in Table 9.2. These behave much like operators, but have special syntax mandated by the SQL standard.
Table 9.2. Comparison Predicates
The BETWEEN predicate simplifies range tests:

a BETWEEN x AND y

is equivalent to

a >= x AND a <= y

Notice that BETWEEN treats the endpoint values as included in the range. NOT BETWEEN does the opposite comparison:

a NOT BETWEEN x AND y

is equivalent to

a < x OR a > y
BETWEEN SYMMETRIC is like BETWEEN except there is no requirement that the argument to the left of AND be less than or equal to the argument on the right. If it is not, those two arguments are automatically swapped, so that a nonempty range is always implied.
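For example, a sketch of the swapping behavior:

```sql
SELECT 2 BETWEEN 3 AND 1;            -- false: the range (3 .. 1) is empty
SELECT 2 BETWEEN SYMMETRIC 3 AND 1;  -- true: the bounds are swapped to (1 .. 3)
```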
Ordinary comparison operators yield null (signifying “unknown”), not true or false, when either input is null. For example, 7 = NULL yields null, as does 7 <> NULL. When this behavior is not suitable, use the IS [NOT] DISTINCT FROM predicates:

a IS DISTINCT FROM b
a IS NOT DISTINCT FROM b
For non-null inputs, IS DISTINCT FROM is the same as the <> operator. However, if both inputs are null it returns false, and if only one input is null it returns true. Similarly, IS NOT DISTINCT FROM is identical to = for non-null inputs, but it returns true when both inputs are null, and false when only one input is null. Thus, these predicates effectively act as though null were a normal data value, rather than “unknown”.
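A brief illustration of the difference:

```sql
SELECT 7 = NULL;                    -- null ("unknown")
SELECT 7 IS DISTINCT FROM NULL;     -- true
SELECT NULL IS DISTINCT FROM NULL;  -- false
```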
To check whether a value is or is not null, use the predicates:

expression IS NULL
expression IS NOT NULL

or the equivalent, but nonstandard, predicates:

expression ISNULL
expression NOTNULL
Do not write expression = NULL because NULL is not “equal to” NULL. (The null value represents an unknown value, and it is not known whether two unknown values are equal.)
Some applications might expect that expression = NULL returns true if expression evaluates to the null value. It is highly recommended that these applications be modified to comply with the SQL standard. However, if that cannot be done, the transform_null_equals configuration variable is available. If it is enabled, PostgreSQL will convert x = NULL clauses to x IS NULL.
If the expression is row-valued, then IS NULL is true when the row expression itself is null or when all the row's fields are null, while IS NOT NULL is true when the row expression itself is non-null and all the row's fields are non-null. Because of this behavior, IS NULL and IS NOT NULL do not always return inverse results for row-valued expressions; in particular, a row-valued expression that contains both null and non-null fields will return false for both tests. In some cases, it may be preferable to write row IS DISTINCT FROM NULL or row IS NOT DISTINCT FROM NULL, which will simply check whether the overall row value is null without any additional tests on the row fields.
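For instance, a row with a mix of null and non-null fields fails both tests:

```sql
SELECT ROW(1, NULL) IS NULL     AS is_null,      -- false
       ROW(1, NULL) IS NOT NULL AS is_not_null;  -- false
```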
Boolean values can also be tested using the predicates

boolean_expression IS TRUE
boolean_expression IS NOT TRUE
boolean_expression IS FALSE
boolean_expression IS NOT FALSE
boolean_expression IS UNKNOWN
boolean_expression IS NOT UNKNOWN

These will always return true or false, never a null value, even when the operand is null. A null input is treated as the logical value “unknown”. Notice that IS UNKNOWN and IS NOT UNKNOWN are effectively the same as IS NULL and IS NOT NULL, respectively, except that the input expression must be of Boolean type.
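For example:

```sql
SELECT (NULL::boolean) IS UNKNOWN;   -- true
SELECT (NULL::boolean) IS NOT TRUE;  -- true
SELECT false IS UNKNOWN;             -- false
```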
Some comparison-related functions are also available, as shown in Table 9.3.
Table 9.3. Comparison Functions
| Operator | Description | Example | Result |
|----------|-------------|---------|--------|
| + | addition | 2 + 3 | 5 |
| - | subtraction | 2 - 3 | -1 |
| * | multiplication | 2 * 3 | 6 |
| / | division (integer division truncates the result) | 4 / 2 | 2 |
| % | modulo (remainder) | 5 % 4 | 1 |
| ^ | exponentiation (associates left to right) | 2.0 ^ 3.0 | 8 |
| \|/ | square root | \|/ 25.0 | 5 |
| \|\|/ | cube root | \|\|/ 27.0 | 3 |
| ! | factorial | 5 ! | 120 |
| !! | factorial (prefix operator) | !! 5 | 120 |
| @ | absolute value | @ -5.0 | 5 |
| & | bitwise AND | 91 & 15 | 11 |
| \| | bitwise OR | 32 \| 3 | 35 |
| # | bitwise XOR | 17 # 5 | 20 |
| ~ | bitwise NOT | ~1 | -2 |
| << | bitwise shift left | 1 << 4 | 16 |
| >> | bitwise shift right | 8 >> 2 | 2 |
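The truncating behavior of integer / and the sign convention of % can be illustrated (a sketch, assuming integer operands):

```sql
SELECT 5 / 2;     -- 2   (truncated toward zero)
SELECT (-5) / 2;  -- -2  (also truncated toward zero, not floored)
SELECT 5 % 4;     -- 1
SELECT (-5) % 4;  -- -1  (the result takes the sign of the dividend)
```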
| Function | Return Type | Description | Example | Result |
|----------|-------------|-------------|---------|--------|
| abs(x) | (same as input) | absolute value | abs(-17.4) | 17.4 |
| cbrt(dp) | dp | cube root | cbrt(27.0) | 3 |
| ceil(dp or numeric) | (same as input) | nearest integer greater than or equal to argument | ceil(-42.8) | -42 |
| ceiling(dp or numeric) | (same as input) | nearest integer greater than or equal to argument (same as ceil) | ceiling(-95.3) | -95 |
| degrees(dp) | dp | radians to degrees | degrees(0.5) | 28.6478897565412 |
| div(y numeric, x numeric) | numeric | integer quotient of y/x | div(9,4) | 2 |
| exp(dp or numeric) | (same as input) | exponential | exp(1.0) | 2.71828182845905 |
| floor(dp or numeric) | (same as input) | nearest integer less than or equal to argument | floor(-42.8) | -43 |
| ln(dp or numeric) | (same as input) | natural logarithm | ln(2.0) | 0.693147180559945 |
| log(dp or numeric) | (same as input) | base 10 logarithm | log(100.0) | 2 |
| log(b numeric, x numeric) | numeric | logarithm to base b | log(2.0, 64.0) | 6.0000000000 |
| mod(y, x) | (same as argument types) | remainder of y/x | mod(9,4) | 1 |
| pi() | dp | “π” constant | pi() | 3.14159265358979 |
| power(a dp, b dp) | dp | a raised to the power of b | power(9.0, 3.0) | 729 |
| power(a numeric, b numeric) | numeric | a raised to the power of b | power(9.0, 3.0) | 729 |
| radians(dp) | dp | degrees to radians | radians(45.0) | 0.785398163397448 |
| round(dp or numeric) | (same as input) | round to nearest integer | round(42.4) | 42 |
| round(v numeric, s int) | numeric | round to s decimal places | round(42.4382, 2) | 42.44 |
| scale(numeric) | integer | scale of the argument (the number of decimal digits in the fractional part) | scale(8.41) | 2 |
| sign(dp or numeric) | (same as input) | sign of the argument (-1, 0, +1) | sign(-8.4) | -1 |
| sqrt(dp or numeric) | (same as input) | square root | sqrt(2.0) | 1.4142135623731 |
| trunc(dp or numeric) | (same as input) | truncate toward zero | trunc(42.8) | 42 |
| trunc(v numeric, s int) | numeric | truncate to s decimal places | trunc(42.4382, 2) | 42.43 |
| width_bucket(operand dp, b1 dp, b2 dp, count int) | int | return the bucket number to which operand would be assigned in a histogram having count equal-width buckets spanning the range b1 to b2; returns 0 or count+1 for an input outside the range | width_bucket(5.35, 0.024, 10.06, 5) | 3 |
| width_bucket(operand numeric, b1 numeric, b2 numeric, count int) | int | return the bucket number to which operand would be assigned in a histogram having count equal-width buckets spanning the range b1 to b2; returns 0 or count+1 for an input outside the range | width_bucket(5.35, 0.024, 10.06, 5) | 3 |
| width_bucket(operand anyelement, thresholds anyarray) | int | return the bucket number to which operand would be assigned given an array listing the lower bounds of the buckets; returns 0 for an input less than the first lower bound; the thresholds array must be sorted, smallest first, or unexpected results will be obtained | width_bucket(now(), array['yesterday', 'today', 'tomorrow']::timestamptz[]) | 2 |
| Function | Return Type | Description |
|----------|-------------|-------------|
| random() | dp | random value in the range 0.0 <= x < 1.0 |
| setseed(dp) | void | set seed for subsequent random() calls (value between -1.0 and 1.0, inclusive) |
| Function (radians) | Function (degrees) | Description |
|--------------------|--------------------|-------------|
| acos(x) | acosd(x) | inverse cosine |
| asin(x) | asind(x) | inverse sine |
| atan(x) | atand(x) | inverse tangent |
| atan2(y, x) | atan2d(y, x) | inverse tangent of y/x |
| cos(x) | cosd(x) | cosine |
| cot(x) | cotd(x) | cotangent |
| sin(x) | sind(x) | sine |
| tan(x) | tand(x) | tangent |
| Operator | Description | Example | Result |
|----------|-------------|---------|--------|
| \|\| | concatenation | B'10001' \|\| B'011' | 10001011 |
| & | bitwise AND | B'10001' & B'01101' | 00001 |
| \| | bitwise OR | B'10001' \| B'01101' | 11101 |
| # | bitwise XOR | B'10001' # B'01101' | 11100 |
| ~ | bitwise NOT | ~ B'10001' | 01110 |
| << | bitwise shift left | B'10001' << 3 | 01000 |
| >> | bitwise shift right | B'10001' >> 2 | 00100 |
| Operator | Description |
|----------|-------------|
| < | less than |
| > | greater than |
| <= | less than or equal to |
| >= | greater than or equal to |
| = | equal |
| <> or != | not equal |
| Predicate | Description |
|-----------|-------------|
| a BETWEEN x AND y | between |
| a NOT BETWEEN x AND y | not between |
| a BETWEEN SYMMETRIC x AND y | between, after sorting the comparison values |
| a NOT BETWEEN SYMMETRIC x AND y | not between, after sorting the comparison values |
| a IS DISTINCT FROM b | not equal, treating null like an ordinary value |
| a IS NOT DISTINCT FROM b | equal, treating null like an ordinary value |
| expression IS NULL | is null |
| expression IS NOT NULL | is not null |
| expression ISNULL | is null (nonstandard syntax) |
| expression NOTNULL | is not null (nonstandard syntax) |
| boolean_expression IS TRUE | is true |
| boolean_expression IS NOT TRUE | is false or unknown |
| boolean_expression IS FALSE | is false |
| boolean_expression IS NOT FALSE | is true or unknown |
| boolean_expression IS UNKNOWN | is unknown |
| boolean_expression IS NOT UNKNOWN | is true or false |
| Function | Description | Example | Example Result |
|----------|-------------|---------|----------------|
| num_nonnulls(VARIADIC "any") | returns the number of non-null arguments | num_nonnulls(1, NULL, 2) | 2 |
| num_nulls(VARIADIC "any") | returns the number of null arguments | num_nulls(1, NULL, 2) | 1 |
PostgreSQL provides a large number of functions and operators for the built-in data types. Users can also define their own functions and operators, as described in Part V. The psql commands \df and \do can be used to list all available functions and operators, respectively.

If you are concerned about portability, then note that most of the functions and operators described in this chapter, with the exception of the most common arithmetic and comparison operators and some explicitly marked functions, are not specified by the SQL standard. Some other SQL database management systems provide some of these extended features, and in many cases these features are compatible and consistent across the various implementations. This chapter may not be complete; additional functions appear in other relevant chapters of the manual.
Version: 11
PostgreSQL provides three separate approaches to pattern matching: the traditional SQL LIKE operator, the more recent SIMILAR TO operator (added in SQL:1999), and POSIX-style regular expressions. Aside from the basic “does this string match this pattern?” operators, functions are available to extract or replace matching substrings, and to split a string at matching locations.

Tip: If you have pattern matching needs that go beyond this, consider writing a user-defined function in Perl or Tcl.

While most regular-expression searches can be executed very quickly, regular expressions can be contrived that take arbitrary amounts of time and memory to process. Be especially wary of accepting regular-expression search patterns from arbitrary sources. If you must do so, it is advisable to impose a statement execution timeout.

Searches using SIMILAR TO patterns have the same security hazards, since SIMILAR TO provides many of the same capabilities as POSIX-style regular expressions.

LIKE searches, being much simpler than the other two options, are safer to use with possibly-hostile pattern sources.
LIKE

string LIKE pattern [ESCAPE escape-character]
string NOT LIKE pattern [ESCAPE escape-character]

The LIKE expression returns true if the string matches the supplied pattern. (As expected, the NOT LIKE expression returns false if LIKE returns true, and vice versa. An equivalent expression is NOT (string LIKE pattern).)
If pattern does not contain percent signs or underscores, then the pattern only represents the string itself; in that case LIKE acts like the equals operator. An underscore (_) in pattern stands for (matches) any single character; a percent sign (%) matches any sequence of zero or more characters.
Some examples:
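For instance:

```sql
'abc' LIKE 'abc'    -- true
'abc' LIKE 'a%'     -- true
'abc' LIKE '_b_'    -- true
'abc' LIKE 'c'      -- false
```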
LIKE pattern matching always covers the entire string. Therefore, if it's desired to match a sequence anywhere within a string, the pattern must start and end with a percent sign.
To match a literal underscore or percent sign without matching other characters, the respective character in pattern must be preceded by the escape character. The default escape character is the backslash but a different one can be selected by using the ESCAPE clause. To match the escape character itself, write two escape characters.
If you have standard_conforming_strings turned off, any backslashes you write in literal string constants will need to be doubled. See Section 4.1.2.1 for more information.
It's also possible to select no escape character by writing ESCAPE ''. This effectively disables the escape mechanism, which makes it impossible to turn off the special meaning of underscore and percent signs in the pattern.
The key word ILIKE can be used instead of LIKE to make the match case-insensitive according to the active locale. This is not in the SQL standard but is a PostgreSQL extension.
The operator ~~ is equivalent to LIKE, and ~~* corresponds to ILIKE. There are also !~~ and !~~* operators that represent NOT LIKE and NOT ILIKE, respectively. All of these operators are PostgreSQL-specific.
There is also the prefix operator ^@ and corresponding starts_with function, which cover cases when only searching by beginning of the string is needed.
SIMILAR TO Regular Expressions

The SIMILAR TO operator returns true or false depending on whether its pattern matches the given string. It is similar to LIKE, except that it interprets the pattern using the SQL standard's definition of a regular expression. SQL regular expressions are a curious cross between LIKE notation and common regular expression notation.
Like LIKE, the SIMILAR TO operator succeeds only if its pattern matches the entire string; this is unlike common regular expression behavior where the pattern can match any part of the string. Also like LIKE, SIMILAR TO uses _ and % as wildcard characters denoting any single character and any string, respectively (these are comparable to . and .* in POSIX regular expressions).
In addition to these facilities borrowed from LIKE, SIMILAR TO supports these pattern-matching metacharacters borrowed from POSIX regular expressions:

- | denotes alternation (either of two alternatives).
- * denotes repetition of the previous item zero or more times.
- + denotes repetition of the previous item one or more times.
- ? denotes repetition of the previous item zero or one time.
- {m} denotes repetition of the previous item exactly m times.
- {m,} denotes repetition of the previous item m or more times.
- {m,n} denotes repetition of the previous item at least m and not more than n times.
- Parentheses () can be used to group items into a single logical item.
- A bracket expression [...] specifies a character class, just as in POSIX regular expressions.

Notice that the period (.) is not a metacharacter for SIMILAR TO.
As with LIKE, a backslash disables the special meaning of any of these metacharacters; or a different escape character can be specified with ESCAPE.
Some examples:
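For instance:

```sql
'abc' SIMILAR TO 'abc'      -- true
'abc' SIMILAR TO 'a'        -- false (pattern must match the entire string)
'abc' SIMILAR TO '%(b|d)%'  -- true
'abc' SIMILAR TO '(b|c)%'   -- false
```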
The substring function with three parameters, substring(string from pattern for escape-character), provides extraction of a substring that matches an SQL regular expression pattern. As with SIMILAR TO, the specified pattern must match the entire data string, or else the function fails and returns null. To indicate the part of the pattern that should be returned on success, the pattern must contain two occurrences of the escape character followed by a double quote ("). The text matching the portion of the pattern between these markers is returned.
Some examples, with #" delimiting the return string:
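```sql
SELECT substring('foobar' from '%#"o_b#"%' for '#');  -- oob
SELECT substring('foobar' from '#"o_b#"%' for '#');   -- NULL (pattern does not match the whole string)
```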
Table 9.14 lists the available operators for pattern matching using POSIX regular expressions.
POSIX regular expressions provide a more powerful means for pattern matching than the LIKE and SIMILAR TO operators. Many Unix tools such as egrep, sed, or awk use a pattern matching language that is similar to the one described here.
A regular expression is a character sequence that is an abbreviated definition of a set of strings (a regular set). A string is said to match a regular expression if it is a member of the regular set described by the regular expression. As with LIKE, pattern characters match string characters exactly unless they are special characters in the regular expression language — but regular expressions use different special characters than LIKE does. Unlike LIKE patterns, a regular expression is allowed to match anywhere within a string, unless the regular expression is explicitly anchored to the beginning or end of the string.
Some examples:
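For instance:

```sql
'abc' ~ 'abc'     -- true
'abc' ~ '^a'      -- true
'abc' ~ '(b|d)'   -- true
'abc' ~ '^(b|c)'  -- false
```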
The POSIX pattern language is described in much greater detail below.
The substring function with two parameters, substring(string from pattern), provides extraction of a substring that matches a POSIX regular expression pattern. It returns null if there is no match, otherwise the portion of the text that matched the pattern. But if the pattern contains any parentheses, the portion of the text that matched the first parenthesized subexpression (the one whose left parenthesis comes first) is returned. You can put parentheses around the whole expression if you want to use parentheses within it without triggering this exception. If you need parentheses in the pattern before the subexpression you want to extract, see the non-capturing parentheses described below.
Some examples:
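For instance:

```sql
SELECT substring('foobar' from 'o.b');    -- oob
SELECT substring('foobar' from 'o(.)b');  -- o  (first parenthesized subexpression)
```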
The regexp_replace function provides substitution of new text for substrings that match POSIX regular expression patterns. It has the syntax regexp_replace(source, pattern, replacement [, flags]). The source string is returned unchanged if there is no match to the pattern. If there is a match, the source string is returned with the replacement string substituted for the matching substring. The replacement string can contain \n, where n is 1 through 9, to indicate that the source substring matching the n'th parenthesized subexpression of the pattern should be inserted, and it can contain \& to indicate that the substring matching the entire pattern should be inserted. Write \\ if you need to put a literal backslash in the replacement text. The flags parameter is an optional text string containing zero or more single-letter flags that change the function's behavior. Flag i specifies case-insensitive matching, while flag g specifies replacement of each matching substring rather than only the first one. Supported flags (though not g) are described in Table 9.22.
Some examples:
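For instance:

```sql
SELECT regexp_replace('foobarbaz', 'b..', 'X');            -- fooXbaz
SELECT regexp_replace('foobarbaz', 'b..', 'X', 'g');       -- fooXX
SELECT regexp_replace('foobarbaz', 'b(..)', 'X\1Y', 'g');  -- fooXarYXazY
```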
The regexp_match function returns a text array of captured substring(s) resulting from the first match of a POSIX regular expression pattern to a string. It has the syntax regexp_match(string, pattern [, flags]). If there is no match, the result is NULL. If a match is found, and the pattern contains no parenthesized subexpressions, then the result is a single-element text array containing the substring matching the whole pattern. If a match is found, and the pattern contains parenthesized subexpressions, then the result is a text array whose n'th element is the substring matching the n'th parenthesized subexpression of the pattern (not counting “non-capturing” parentheses; see below for details). The flags parameter is an optional text string containing zero or more single-letter flags that change the function's behavior. Supported flags are described in Table 9.22.
Some examples:
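For instance:

```sql
SELECT regexp_match('foobarbequebaz', 'bar.*que');      -- {barbeque}
SELECT regexp_match('foobarbequebaz', '(bar)(beque)');  -- {bar,beque}
```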
In the common case where you just want the whole matching substring or NULL for no match, write something like:
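```sql
SELECT (regexp_match('foobarbequebaz', 'bar.*que'))[1];  -- barbeque
```

Subscripting the result array with [1] extracts the whole-pattern match, and the expression yields NULL when there is no match.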
The regexp_matches function returns a set of text arrays of captured substring(s) resulting from matching a POSIX regular expression pattern to a string. It has the same syntax as regexp_match. This function returns no rows if there is no match, one row if there is a match and the g flag is not given, or N rows if there are N matches and the g flag is given. Each returned row is a text array containing the whole matched substring or the substrings matching parenthesized subexpressions of the pattern, just as described above for regexp_match. regexp_matches accepts all the flags shown in Table 9.22, plus the g flag which commands it to return all matches, not just the first one.
Some examples:
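For instance:

```sql
SELECT regexp_matches('foo', 'not there');
-- (0 rows)

SELECT regexp_matches('foobarbequebazilbarfbonk', '(b[^b]+)(b[^b]+)', 'g');
-- {bar,beque}
-- {bazil,barf}
-- (2 rows)
```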
In most cases regexp_matches() should be used with the g flag, since if you only want the first match, it's easier and more efficient to use regexp_match(). However, regexp_match() only exists in PostgreSQL version 10 and up. When working in older versions, a common trick is to place a regexp_matches() call in a sub-select, for example:
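Here tab, col1, and col2 are placeholder table and column names:

```sql
SELECT col1, (SELECT regexp_matches(col2, '(bar)(beque)')) FROM tab;
```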
This produces a text array if there's a match, or NULL if not, the same as regexp_match() would do. Without the sub-select, this query would produce no output at all for table rows without a match, which is typically not the desired behavior.
The regexp_split_to_table function splits a string using a POSIX regular expression pattern as a delimiter. It has the syntax regexp_split_to_table(string, pattern [, flags]). If there is no match to the pattern, the function returns the string. If there is at least one match, for each match it returns the text from the end of the last match (or the beginning of the string) to the beginning of the match. When there are no more matches, it returns the text from the end of the last match to the end of the string. The flags parameter is an optional text string containing zero or more single-letter flags that change the function's behavior. regexp_split_to_table supports the flags described in Table 9.22.
The regexp_split_to_array function behaves the same as regexp_split_to_table, except that regexp_split_to_array returns its result as an array of text. It has the syntax regexp_split_to_array(string, pattern [, flags]). The parameters are the same as for regexp_split_to_table.
Some examples:
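For instance:

```sql
SELECT foo
FROM regexp_split_to_table('the quick brown fox jumps over the lazy dog', '\s+') AS foo;
-- one row per word: the, quick, brown, fox, ...

SELECT regexp_split_to_array('the quick brown fox', '\s+');
-- {the,quick,brown,fox}

SELECT foo
FROM regexp_split_to_table('the quick brown fox', '\s*') AS foo;
-- one row per character: t, h, e, q, u, ...
```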
As the last example demonstrates, the regexp split functions ignore zero-length matches that occur at the start or end of the string or immediately after a previous match. This is contrary to the strict definition of regexp matching that is implemented by regexp_match and regexp_matches, but is usually the most convenient behavior in practice. Other software systems such as Perl use similar definitions.
PostgreSQL's regular expressions are implemented using a software package written by Henry Spencer. Much of the description of regular expressions below is copied verbatim from his manual.
Regular expressions (REs), as defined in POSIX 1003.2, come in two forms: extended REs or EREs (roughly those of egrep), and basic REs or BREs (roughly those of ed). PostgreSQL supports both forms, and also implements some extensions that are not in the POSIX standard, but have become widely used due to their availability in programming languages such as Perl and Tcl. REs using these non-POSIX extensions are called advanced REs or AREs in this documentation. AREs are almost an exact superset of EREs, but BREs have several notational incompatibilities (as well as being much more limited). We first describe the ARE and ERE forms, noting features that apply only to AREs, and then describe how BREs differ.
PostgreSQL always initially presumes that a regular expression follows the ARE rules. However, the more limited ERE or BRE rules can be chosen by prepending an embedded option to the RE pattern, as described in Section 9.7.3.4. This can be useful for compatibility with applications that expect exactly the POSIX 1003.2 rules.
A regular expression is defined as one or more branches, separated by |. It matches anything that matches one of the branches.
A branch is zero or more quantified atoms or constraints, concatenated. It matches a match for the first, followed by a match for the second, etc; an empty branch matches the empty string.
A quantified atom is an atom possibly followed by a single quantifier. Without a quantifier, it matches a match for the atom. With a quantifier, it can match some number of matches of the atom. An atom can be any of the possibilities shown in Table 9.15. The possible quantifiers and their meanings are shown in Table 9.16.
A constraint matches an empty string, but matches only when specific conditions are met. A constraint can be used where an atom could be used, except it cannot be followed by a quantifier. The simple constraints are shown in Table 9.17; some more constraints are described later.
An RE cannot end with a backslash (\).
If you have standard_conforming_strings turned off, any backslashes you write in literal string constants will need to be doubled. See Section 4.1.2.1 for more information.
The forms using {...} are known as bounds. The numbers m and n within a bound are unsigned decimal integers with permissible values from 0 to 255 inclusive.
Non-greedy quantifiers (available in AREs only) match the same possibilities as their corresponding normal (greedy) counterparts, but prefer the smallest number rather than the largest number of matches. See Section 9.7.3.5 for more detail.
A quantifier cannot immediately follow another quantifier, e.g., ** is invalid. A quantifier cannot begin an expression or subexpression or follow ^ or |.
Lookahead and lookbehind constraints cannot contain back references (see Section 9.7.3.3), and all parentheses within them are considered non-capturing.
A bracket expression is a list of characters enclosed in []. It normally matches any single character from the list (but see below). If the list begins with ^, it matches any single character not from the rest of the list. If two characters in the list are separated by -, this is shorthand for the full range of characters between those two (inclusive) in the collating sequence, e.g., [0-9] in ASCII matches any decimal digit. It is illegal for two ranges to share an endpoint, e.g., a-c-e. Ranges are very collating-sequence-dependent, so portable programs should avoid relying on them.
To include a literal ] in the list, make it the first character (after ^, if that is used). To include a literal -, make it the first or last character, or the second endpoint of a range. To use a literal - as the first endpoint of a range, enclose it in [. and .] to make it a collating element (see below). With the exception of these characters, some combinations using [ (see next paragraphs), and escapes (AREs only), all other special characters lose their special significance within a bracket expression. In particular, \ is not special when following ERE or BRE rules, though it is special (as introducing an escape) in AREs.
Within a bracket expression, a collating element (a character, a multiple-character sequence that collates as if it were a single character, or a collating-sequence name for either) enclosed in [. and .] stands for the sequence of characters of that collating element. The sequence is treated as a single element of the bracket expression's list. This allows a bracket expression containing a multiple-character collating element to match more than one character, e.g., if the collating sequence includes a ch collating element, then the RE [[.ch.]]*c matches the first five characters of chchcc.
PostgreSQL currently does not support multi-character collating elements. This information describes possible future behavior.
Within a bracket expression, a collating element enclosed in [= and =] is an equivalence class, standing for the sequences of characters of all collating elements equivalent to that one, including itself. (If there are no other equivalent collating elements, the treatment is as if the enclosing delimiters were [. and .].) For example, if o and ^ are the members of an equivalence class, then [[=o=]], [[=^=]], and [o^] are all synonymous. An equivalence class cannot be an endpoint of a range.
Within a bracket expression, the name of a character class enclosed in [: and :] stands for the list of all characters belonging to that class. Standard character class names are: alnum, alpha, blank, cntrl, digit, graph, lower, print, punct, space, upper, xdigit. These stand for the character classes defined in ctype. A locale can provide others. A character class cannot be used as an endpoint of a range.
There are two special cases of bracket expressions: the bracket expressions [[:<:]] and [[:>:]] are constraints, matching empty strings at the beginning and end of a word respectively. A word is defined as a sequence of word characters that is neither preceded nor followed by word characters. A word character is an alnum character (as defined by ctype) or an underscore. This is an extension, compatible with but not specified by POSIX 1003.2, and should be used with caution in software intended to be portable to other systems. The constraint escapes described below are usually preferable; they are no more standard, but are easier to type.
Escapes are special sequences beginning with \ followed by an alphanumeric character. Escapes come in several varieties: character entry, class shorthands, constraint escapes, and back references. A \ followed by an alphanumeric character but not constituting a valid escape is illegal in AREs. In EREs, there are no escapes: outside a bracket expression, a \ followed by an alphanumeric character merely stands for that character as an ordinary character, and inside a bracket expression, \ is an ordinary character. (The latter is the one actual incompatibility between EREs and AREs.)
Character-entry escapes exist to make it easier to specify non-printing and other inconvenient characters in REs. They are shown in Table 9.18.
Class-shorthand escapes provide shorthands for certain commonly-used character classes. They are shown in Table 9.19.
A constraint escape is a constraint, matching the empty string if specific conditions are met, written as an escape. They are shown in Table 9.20.
A back reference (\n) matches the same string matched by the previous parenthesized subexpression specified by the number n (see Table 9.21). For example, ([bc])\1 matches bb or cc but not bc or cb. The subexpression must entirely precede the back reference in the RE. Subexpressions are numbered in the order of their leading parentheses. Non-capturing parentheses do not define subexpressions.
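Back references behave the same way in most modern regex engines; Python's re module (used here purely as an illustration, since AREs and Python regexes share this particular feature) accepts the same pattern:

```python
import re

# ([bc]) captures a single "b" or "c"; \1 requires the very same
# character to appear again, so only doubled letters match.
pattern = re.compile(r'([bc])\1')

print(bool(pattern.fullmatch('bb')))  # True
print(bool(pattern.fullmatch('cc')))  # True
print(bool(pattern.fullmatch('bc')))  # False: "c" differs from captured "b"
```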
Hexadecimal digits are 0-9, a-f, and A-F. Octal digits are 0-7.
Numeric character-entry escapes specifying values outside the ASCII range (0-127) have meanings dependent on the database encoding. When the encoding is UTF-8, escape values are equivalent to Unicode code points; for example, \u1234 means the character U+1234. For other multibyte encodings, character-entry escapes usually just specify the concatenation of the byte values for the character. If the escape value does not correspond to any legal character in the database encoding, no error will be raised, but it will never match any data.
The character-entry escapes are always taken as ordinary characters. For example, \135 is ] in ASCII, but \135 does not terminate a bracket expression.
Within bracket expressions, \d, \s, and \w lose their outer brackets, and \D, \S, and \W are illegal. (So, for example, [a-c\d] is equivalent to [a-c[:digit:]]. Also, [a-c\D], which is equivalent to [a-c^[:digit:]], is illegal.)
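Python's re engine (shown here only as a point of comparison) likewise accepts \d inside a bracket expression with the same meaning, though unlike AREs it also tolerates forms such as \D there:

```python
import re

# [a-c\d] matches "a" through "c" or any digit, just as the ARE
# form is equivalent to [a-c[:digit:]].
pattern = re.compile(r'[a-c\d]+')

print(bool(pattern.fullmatch('ab12')))  # True
print(bool(pattern.fullmatch('xy')))    # False
```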
A word is defined as in the specification of [[:<:]] and [[:>:]] above. Constraint escapes are illegal within bracket expressions.
There is an inherent ambiguity between octal character-entry escapes and back references, which is resolved by the following heuristics, as hinted at above. A leading zero always indicates an octal escape. A single non-zero digit, not followed by another digit, is always taken as a back reference. A multi-digit sequence not starting with a zero is taken as a back reference if it comes after a suitable subexpression (i.e., the number is in the legal range for a back reference), and otherwise is taken as octal.
In addition to the main syntax described above, there are some special forms and miscellaneous syntactic facilities available.
An RE can begin with one of two special director prefixes. If an RE begins with ***:, the rest of the RE is taken as an ARE. (This normally has no effect in PostgreSQL, since REs are assumed to be AREs; but it does have an effect if ERE or BRE mode had been specified by the flags parameter to a regex function.) If an RE begins with ***=, the rest of the RE is taken to be a literal string, with all characters considered ordinary characters.
An ARE can begin with embedded options: a sequence (?xyz) (where xyz is one or more alphabetic characters) specifies options affecting the rest of the RE. These options override any previously determined options; in particular, they can override the case-sensitivity behavior implied by a regex operator, or the flags parameter to a regex function. The available option letters are shown in Table 9.22. Note that these same option letters are used in the flags parameters of regex functions.

Embedded options take effect at the ) terminating the sequence. They can appear only at the start of an ARE (after the ***: director if any).
In addition to the usual (tight) RE syntax, in which all characters are significant, there is an expanded syntax, available by specifying the embedded x option. In the expanded syntax, white-space characters in the RE are ignored, as are all characters between a # and the following newline (or the end of the RE). This permits paragraphing and commenting a complex RE. There are three exceptions to that basic rule:

a white-space character or # preceded by \ is retained

white space or # within a bracket expression is retained

white space and comments cannot appear within multi-character symbols, such as (?:

For this purpose, white-space characters are blank, tab, newline, and any character that belongs to the space character class.
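Python's re.VERBOSE flag (used here as an analogous illustration; the rules about escaped whitespace and whitespace inside bracket expressions are essentially the same) shows what such an expanded-syntax RE looks like in practice:

```python
import re

# Whitespace and # comments are ignored in verbose mode, so the
# pattern can be laid out and documented like ordinary code.
pattern = re.compile(r"""
    \d{3}   # three leading digits
    -       # a literal separator
    \d{4}   # four trailing digits
""", re.VERBOSE)

print(bool(pattern.fullmatch('555-0123')))  # True
print(bool(pattern.fullmatch('5550123')))   # False: separator required
```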
Finally, in an ARE, outside bracket expressions, the sequence (?#ttt) (where ttt is any text not containing a )) is a comment, completely ignored. Again, this is not allowed between the characters of multi-character symbols, like (?:. Such comments are more a historical artifact than a useful facility, and their use is deprecated; use the expanded syntax instead.
None of these metasyntax extensions is available if an initial ***= director has specified that the user's input be treated as a literal string rather than as an RE.
In the event that an RE could match more than one substring of a given string, the RE matches the one starting earliest in the string. If the RE could match more than one substring starting at that point, either the longest possible match or the shortest possible match will be taken, depending on whether the RE is greedy or non-greedy.
Whether an RE is greedy or not is determined by the following rules:
Most atoms, and all constraints, have no greediness attribute (because they cannot match variable amounts of text anyway).
Adding parentheses around an RE does not change its greediness.
A quantified atom with a fixed-repetition quantifier ({m} or {m}?) has the same greediness (possibly none) as the atom itself.

A quantified atom with other normal quantifiers (including {m,n} with m equal to n) is greedy (prefers longest match).

A quantified atom with a non-greedy quantifier (including {m,n}? with m equal to n) is non-greedy (prefers shortest match).

A branch, that is, an RE that has no top-level | operator, has the same greediness as the first quantified atom in it that has a greediness attribute.

An RE consisting of two or more branches connected by the | operator is always greedy.
The above rules associate greediness attributes not only with individual quantified atoms, but with branches and entire REs that contain quantified atoms. What that means is that the matching is done in such a way that the branch, or whole RE, matches the longest or shortest possible substring as a whole. Once the length of the entire match is determined, the part of it that matches any particular subexpression is determined on the basis of the greediness attribute of that subexpression, with subexpressions starting earlier in the RE taking priority over ones starting later.
An example of what this means:
In the first case, the RE as a whole is greedy because Y* is greedy. It can match beginning at the Y, and it matches the longest possible string starting there, i.e., Y123. The output is the parenthesized part of that, or 123. In the second case, the RE as a whole is non-greedy because Y*? is non-greedy. It can match beginning at the Y, and it matches the shortest possible string starting there, i.e., Y1. The subexpression [0-9]{1,3} is greedy but it cannot change the decision as to the overall match length; so it is forced to match just 1.
In short, when an RE contains both greedy and non-greedy subexpressions, the total match length is either as long as possible or as short as possible, according to the attribute assigned to the whole RE. The attributes assigned to the subexpressions only affect how much of that match they are allowed to “eat” relative to each other.
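Perl-derived engines use first-match rather than whole-RE longest/shortest semantics, so they can give different answers for the same pattern. Python's re module (a Perl-style engine, shown purely for contrast with the ARE behavior described above; the sample input string is an illustration) captures 123 for both the greedy and the non-greedy form:

```python
import re

# In a first-match engine the lazy Y*? simply tries fewer Y's first;
# once a starting point is fixed, the greedy [0-9]{1,3} still grabs
# "123". A POSIX ARE instead makes the whole RE non-greedy and the
# overall match shortest, so it would capture only "1".
greedy = re.search(r'Y*([0-9]{1,3})', 'XY1234Z')
lazy = re.search(r'Y*?([0-9]{1,3})', 'XY1234Z')

print(greedy.group(1))  # 123
print(lazy.group(1))    # 123 (PostgreSQL's ARE would capture only 1)
```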
The quantifiers {1,1} and {1,1}? can be used to force greediness or non-greediness, respectively, on a subexpression or a whole RE. This is useful when you need the whole RE to have a greediness attribute different from what's deduced from its elements. As an example, suppose that we are trying to separate a string containing some digits into the digits and the parts before and after them. We might try to do that like this:

That didn't work: the first .* is greedy so it “eats” as much as it can, leaving the \d+ to match at the last possible place, the last digit. We might try to fix that by making it non-greedy:

That didn't work either, because now the RE as a whole is non-greedy and so it ends the overall match as soon as possible. We can get what we want by forcing the RE as a whole to be greedy:

Controlling the RE's overall greediness separately from its components' greediness allows great flexibility in handling variable-length patterns.
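For comparison only, here is how a first-match engine handles the same separation problem. In Python's re (the sample string is just an illustration) the naive greedy attempt fails the same way, but the lazy prefix already works, because a Perl-style engine does not propagate non-greediness to the RE as a whole, so PostgreSQL's {1,1} trick has no direct counterpart there:

```python
import re

s = 'abc01234xyz'

# Greedy prefix: the first .* eats everything it can, so \d+ is
# left with only the last digit. Same outcome as in an ARE.
print(re.match(r'(.*)(\d+)(.*)', s).groups())
# ('abc0123', '4', 'xyz')

# Lazy prefix: a first-match engine keeps the overall match anchored
# at the start and lets the greedy \d+ take all the digits. In an
# ARE the whole RE would turn non-greedy and \d+ would get only '0'.
print(re.match(r'(.*?)(\d+)(.*)', s).groups())
# ('abc', '01234', 'xyz')
```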
When deciding what is a longer or shorter match, match lengths are measured in characters, not collating elements. An empty string is considered longer than no match at all. For example: bb* matches the three middle characters of abbbc; (week|wee)(night|knights) matches all ten characters of weeknights; when (.*).* is matched against abc the parenthesized subexpression matches all three characters; and when (a*)* is matched against bc both the whole RE and the parenthesized subexpression match an empty string.
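The weeknights example is a classic place where POSIX longest-match and Perl-style first-match semantics diverge. Python's re (shown for contrast) tries the alternatives left to right and stops at the first overall match, so it settles for nine characters:

```python
import re

m = re.match(r'(week|wee)(night|knights)', 'weeknights')

# First-match: "week" is tried before "wee", then "night" succeeds,
# yielding a 9-character match. POSIX longest-match semantics pick
# ("wee", "knights") and cover all ten characters instead.
print(m.groups())   # ('week', 'night')
print(m.group(0))   # 'weeknight'
```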
If case-independent matching is specified, the effect is much as if all case distinctions had vanished from the alphabet. When an alphabetic that exists in multiple cases appears as an ordinary character outside a bracket expression, it is effectively transformed into a bracket expression containing both cases, e.g., x becomes [xX]. When it appears inside a bracket expression, all case counterparts of it are added to the bracket expression, e.g., [x] becomes [xX] and [^x] becomes [^xX].
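The same mental model applies to most engines' case-insensitive flags. A small Python illustration of the [^x] case, which trips people up because the negation also absorbs both cases:

```python
import re

# Case-insensitively, [^x] behaves like [^xX]: it rejects both cases.
print(bool(re.fullmatch(r'[^x]', 'X', re.IGNORECASE)))  # False
print(bool(re.fullmatch(r'[^x]', 'y', re.IGNORECASE)))  # True
print(re.findall(r'x', 'xX', re.IGNORECASE))            # ['x', 'X']
```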
If newline-sensitive matching is specified, . and bracket expressions using ^ will never match the newline character (so that matches will never cross newlines unless the RE explicitly arranges it) and ^ and $ will match the empty string after and before a newline respectively, in addition to matching at beginning and end of string respectively. But the ARE escapes \A and \Z continue to match beginning or end of string only.

If partial newline-sensitive matching is specified, this affects . and bracket expressions as with newline-sensitive matching, but not ^ and $.

If inverse partial newline-sensitive matching is specified, this affects ^ and $ as with newline-sensitive matching, but not . and bracket expressions. This isn't very useful but is provided for symmetry.
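Newline-sensitive matching corresponds roughly to combining two Python re behaviors: the default refusal of . to match a newline, plus re.MULTILINE for ^ and $. This is an analogy only, since Python has no equivalent of the partial modes:

```python
import re

text = 'foo\nbar'

# ^ and $ match around newlines under MULTILINE, much like
# PostgreSQL's newline-sensitive mode; \A still means string start.
print(re.findall(r'^\w+', text, re.MULTILINE))   # ['foo', 'bar']
print(re.findall(r'\A\w+', text, re.MULTILINE))  # ['foo']

# '.' does not cross the newline, so matches cannot span lines.
print(bool(re.search(r'foo.bar', text)))         # False
```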
No particular limit is imposed on the length of REs in this implementation. However, programs intended to be highly portable should not employ REs longer than 256 bytes, as a POSIX-compliant implementation can refuse to accept such REs.
The only feature of AREs that is actually incompatible with POSIX EREs is that \ does not lose its special significance inside bracket expressions. All other ARE features use syntax which is illegal or has undefined or unspecified effects in POSIX EREs; the *** syntax of directors likewise is outside the POSIX syntax for both BREs and EREs.

Many of the ARE extensions are borrowed from Perl, but some have been changed to clean them up, and a few Perl extensions are not present. Incompatibilities of note include \b, \B, the lack of special treatment for a trailing newline, the addition of complemented bracket expressions to the things affected by newline-sensitive matching, the restrictions on parentheses and back references in lookahead/lookbehind constraints, and the longest/shortest-match (rather than first-match) matching semantics.
Two significant incompatibilities exist between AREs and the ERE syntax recognized by pre-7.4 releases of PostgreSQL:

In AREs, \ followed by an alphanumeric character is either an escape or an error, while in previous releases, it was just another way of writing the alphanumeric. This should not be much of a problem because there was no reason to write such a sequence in earlier releases.

In AREs, \ remains a special character within [], so a literal \ within a bracket expression must be written \\.
BREs differ from EREs in several respects. In BREs, |, +, and ? are ordinary characters and there is no equivalent for their functionality. The delimiters for bounds are \{ and \}, with { and } by themselves ordinary characters. The parentheses for nested subexpressions are \( and \), with ( and ) by themselves ordinary characters. ^ is an ordinary character except at the beginning of the RE or the beginning of a parenthesized subexpression, $ is an ordinary character except at the end of the RE or the end of a parenthesized subexpression, and * is an ordinary character if it appears at the beginning of the RE or the beginning of a parenthesized subexpression (after a possible leading ^). Finally, single-digit back references are available, and \< and \> are synonyms for [[:<:]] and [[:>:]] respectively; no other escapes are available in BREs.
This section describes functions and operators for examining and manipulating string values. Strings in this context include values of the types character, character varying, and text. Unless otherwise noted, all of the functions listed below work on all of these types, but be wary of potential effects of automatic space-padding when using the character type. Some functions also exist natively for the bit-string types.

SQL defines some string functions that use key words, rather than commas, to separate arguments. Details are in Table 9.8. PostgreSQL also provides versions of these functions that use the regular function invocation syntax (see Table 9.9).

Note: Before PostgreSQL 8.3, these functions would also silently accept values of several non-string data types, due to the existence of implicit coercions from those data types to text. Those coercions have been removed because they frequently caused surprising behaviors. However, the string concatenation operator (||) still accepts non-string input, so long as at least one input is of a string type, as shown in Table 9.8. For other cases, insert an explicit coercion to text if you need to duplicate the previous behavior.

Additional string manipulation functions are available and are listed in Table 9.9. Some of them are used internally to implement the SQL-standard string functions listed in Table 9.8.

The concat, concat_ws and format functions are variadic, so it is possible to pass the values to be concatenated or formatted as an array marked with the VARIADIC keyword (see Section 37.4.5). The array's elements are treated as if they were separate ordinary arguments to the function. If the variadic array argument is NULL, concat and concat_ws return NULL, but format treats a NULL as a zero-element array.

See also the aggregate function string_agg in Section 9.20.
The format function produces output formatted according to a format string, in a style similar to the C function sprintf.

formatstr is a format string that specifies how the result should be formatted. Text in the format string is copied directly to the result, except where format specifiers are used. Format specifiers act as placeholders in the string, defining how subsequent function arguments should be formatted and inserted into the result. Each formatarg argument is converted to text according to the usual output rules for its data type, and then formatted and inserted into the result string according to the format specifier.

Format specifiers are introduced by a % character and have the form:
The component fields are:

position (optional)

A string of the form n$ where n is the index of the argument to print. Index 1 means the first argument after formatstr. If the position is omitted, the default is to use the next argument in sequence.

flags (optional)

Additional options controlling how the format specifier's output is formatted. Currently the only supported flag is a minus sign (-), which causes the format specifier's output to be left-justified. This has no effect unless the width field is also specified.

width (optional)

Specifies the minimum number of characters to use to display the format specifier's output. The output is padded on the left or right (depending on the - flag) with spaces as needed to fill the width. A too-small width does not cause truncation of the output; it is simply ignored. The width may be specified as a positive integer, an asterisk (*) to use the next function argument as the width, or a string of the form *n$ to use the nth function argument as the width.

If the width comes from a function argument, that argument is consumed before the argument that is used for the format specifier's value. If the width argument is negative, the result is left-aligned (as if the - flag had been specified) within a field of length abs(width).

type (required)

The type of format conversion to use to produce the format specifier's output. The following types are supported:
s

Formats the argument value as a simple string. A null value is treated as an empty string.

I

Treats the argument value as an SQL identifier, double-quoting it if necessary. It is an error for the value to be null (equivalent to quote_ident).

L

Quotes the argument value as an SQL literal. A null value is displayed as the string NULL, without quotes (equivalent to quote_nullable).
In addition to the format specifiers described above, the special sequence %% may be used to output a literal % character.
Here are some examples of the basic format conversions:
Here are examples using the width field and the - flag:
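The width and flag behavior follows C's sprintf conventions, so Python's printf-style % operator (used here as a stand-in illustration, since the original SQL sample output was not preserved on this page) exhibits the same padding rules:

```python
# '%10s' pads on the left to a minimum width of ten characters;
# '%-10s' left-justifies; '%*s' takes the width from an argument.
print('|%10s|' % 'foo')        # |       foo|
print('|%-10s|' % 'foo')       # |foo       |
print('|%*s|' % (10, 'foo'))   # |       foo|
```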
These examples show use of the position field:
Unlike the standard C function sprintf, PostgreSQL's format function allows format specifiers with and without position fields to be mixed in the same format string. A format specifier without a position field always uses the next argument after the last argument consumed. In addition, the format function does not require all function arguments to be used in the format string. For example:
The %I and %L format specifiers are particularly useful for safely constructing dynamic SQL statements. See Example 42.1.
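To see why %I matters, here is a deliberately simplified Python sketch of the quoting rule that quote_ident applies (an illustration only, not PostgreSQL's actual implementation: the real function skips the quotes when they are unnecessary and handles encoding details):

```python
def quote_ident(name: str) -> str:
    # Double any embedded double quotes, then wrap the whole
    # identifier in double quotes, as SQL requires.
    return '"' + name.replace('"', '""') + '"'

# An attacker-supplied table name stays a single identifier instead
# of injecting SQL into the statement text.
print(quote_ident('users'))          # "users"
print(quote_ident('evil"; DROP --')) # "evil""; DROP --"
```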
Table 9.40, Table 9.41 and Table 9.42 summarize the functions and operators that are provided for full text searching. See Chapter 12 for a detailed explanation of PostgreSQL's text search facility.
Table 9.40. Text Search Operators
The tsquery containment operators consider only the lexemes listed in the two queries, ignoring the combining operators.

In addition to the operators shown in the table, the ordinary B-tree comparison operators (=, <, etc) are defined for types tsvector and tsquery. These are not very useful for text searching but allow, for example, unique indexes to be built on columns of these types.
Table 9.41. Text Search Functions
All the text search functions that accept an optional regconfig argument will use the configuration specified by default_text_search_config when that argument is omitted.
The functions in Table 9.42 are listed separately because they are not usually used in everyday text searching operations. They are helpful for development and debugging of new text search configurations.
Table 9.42. Text Search Debugging Functions
The geometric types point, box, lseg, line, path, polygon, and circle have a large set of native support functions and operators, shown in Table 9.33, Table 9.34, and Table 9.35.
Note that the “same as” operator, ~=, represents the usual notion of equality for the point, box, polygon, and circle types. Some of these types also have an = operator, but = compares for equal areas only. The other scalar comparison operators (<= and so on) likewise compare areas for these types.
Table 9.33. Geometric Operators
Before PostgreSQL 8.2, the containment operators @> and <@ were respectively called ~ and @. These names are still available, but are deprecated and will eventually be removed.
Table 9.34. Geometric Functions
Table 9.35. Geometric Type Conversion Functions
It is possible to access the two component numbers of a point as though the point were an array with indexes 0 and 1. For example, if t.p is a point column then SELECT p[0] FROM t retrieves the X coordinate and UPDATE t SET p[1] = ... changes the Y coordinate. In the same way, a value of type box or lseg can be treated as an array of two point values.
The area function works for the types box, circle, and path. The area function only works on the path data type if the points in the path are non-intersecting. For example, the path '((0,0),(0,1),(2,1),(2,2),(1,2),(1,0),(0,0))'::PATH will not work; however, the following visually identical path '((0,0),(0,1),(1,1),(1,2),(2,2),(2,1),(1,1),(1,0),(0,0))'::PATH will work. If the concept of an intersecting versus non-intersecting path is confusing, draw both of the above paths side by side on a piece of graph paper.
Table 9.36 shows the operators available for the cidr and inet types. The operators <<, <<=, >>, >>=, and && test for subnet inclusion. They consider only the network parts of the two addresses (ignoring any host part) and determine whether one network is identical to or a subnet of the other.
Table 9.36. cidr and inet Operators
Table 9.37 shows the functions available for use with the cidr and inet types. The abbrev, host, and text functions are primarily intended to offer alternative display formats.
Table 9.37. cidr and inet Functions
Any cidr value can be cast to inet implicitly or explicitly; therefore, the functions shown above as operating on inet also work on cidr values. (Where there are separate functions for inet and cidr, it is because the behavior should be different for the two cases.) Also, it is permitted to cast an inet value to cidr. When this is done, any bits to the right of the netmask are silently zeroed to create a valid cidr value. In addition, you can cast a text value to inet or cidr using normal casting syntax: for example, inet(expression) or colname::cidr.
Table 9.38 shows the functions available for use with the macaddr type. The function trunc(macaddr) returns a MAC address with the last 3 bytes set to zero. This can be used to associate the remaining prefix with a manufacturer.

Table 9.38. macaddr Functions
The macaddr type also supports the standard relational operators (>, <=, etc.) for lexicographical ordering, and the bitwise arithmetic operators (~, & and |) for NOT, AND and OR.
Table 9.39 shows the functions available for use with the macaddr8 type. The function trunc(macaddr8) returns a MAC address with the last 5 bytes set to zero. This can be used to associate the remaining prefix with a manufacturer.

Table 9.39. macaddr8 Functions
The macaddr8 type also supports the standard relational operators (>, <=, etc.) for ordering, and the bitwise arithmetic operators (~, & and |) for NOT, AND and OR.
This section describes functions and operators for examining and manipulating values of type bytea.
SQL defines some string functions that use key words, rather than commas, to separate arguments. Details are in Table 9.11. PostgreSQL also provides versions of these functions that use the regular function invocation syntax (see Table 9.12).
The sample results shown on this page assume that the server parameter bytea_output is set to escape (the traditional PostgreSQL format).
Table 9.11. SQL Binary String Functions and Operators
Additional binary string manipulation functions are available and are listed in Table 9.12. Some of them are used internally to implement the SQL-standard string functions listed in Table 9.11.
Table 9.12. Other Binary String Functions
get_byte and set_byte number the first byte of a binary string as byte 0. get_bit and set_bit number bits from the right within each byte; for example bit 0 is the least significant bit of the first byte, and bit 15 is the most significant bit of the second byte.
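The bit-numbering convention can be sketched in Python (an illustration of the arithmetic only, not PostgreSQL code): bit n lives in byte n // 8, at position n % 8 counting from that byte's least significant bit.

```python
def get_bit(data: bytes, n: int) -> int:
    # Byte n // 8, bit n % 8 from the right within that byte.
    return (data[n // 8] >> (n % 8)) & 1

data = bytes([0b00000001, 0b10000000])
print(get_bit(data, 0))   # 1: least significant bit of the first byte
print(get_bit(data, 7))   # 0
print(get_bit(data, 15))  # 1: most significant bit of the second byte
```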
See also the aggregate function string_agg in Section 9.20 and the large object functions in Section 34.4.
Version: 11

The functions and function-like expressions described in this section operate on values of type xml. See the description of the xml data type for more information. The function-like expressions xmlparse and xmlserialize for converting to and from type xml are not repeated here. Use of most of these functions requires the installation to have been built with configure --with-libxml.

A set of functions and function-like expressions is available for producing XML content from SQL data. As such, they are particularly suitable for formatting query results into XML documents for processing in client applications.

The function xmlcomment creates an XML value containing an XML comment with the specified text as content. The text cannot contain "--" or end with a "-", so that the resulting construct is a valid XML comment. If the argument is null, the result is null.

For example:
The function xmlconcat concatenates a list of individual XML values to create a single value containing an XML content fragment. Null values are omitted; the result is only null if there are no nonnull arguments.

For example:

XML declarations, if present, are combined as follows. If all argument values have the same XML version declaration, that version is used in the result, else no version is used. If all argument values have the standalone declaration value "yes", then that value is used in the result. If all argument values have a standalone declaration value and at least one is "no", then that is used in the result. Otherwise the result will have no standalone declaration. If the result is determined to require a standalone declaration but no version declaration, a version declaration with version 1.0 will be used, because XML requires an XML declaration to contain a version declaration. Encoding declarations are ignored and removed in all cases.

For example:
The xmlelement expression produces an XML element with the given name, attributes, and content.

Examples:

Element and attribute names that are not valid XML names are escaped by replacing the offending characters with the sequence _xHHHH_, where HHHH is the character's Unicode code point in hexadecimal notation. For example:

An explicit attribute name need not be specified if the attribute value is a column reference, in which case the column's name will be used as the attribute name by default. In other cases, the attribute must be given an explicit name. So this example is valid:

But these are not:

Element content, if specified, will be formatted according to its data type. If the content is itself of type xml, complex XML documents can be constructed. For example:

Content of other types will be formatted into valid XML character data. This means in particular that the characters <, >, and & will be converted to their entity forms. Binary data (data type bytea) will be represented in base64 or hex encoding, depending on the setting of the configuration parameter xmlbinary. The particular behavior for individual data types is expected to evolve in order to align the SQL and PostgreSQL data types with the XML Schema specification, at which point a more precise description will appear.
The xmlforest expression produces an XML forest (sequence) of elements using the given names and content.

Examples:

As seen in the second example, the element name can be omitted if the content value is a column reference, in which case the column name is used by default. Otherwise, a name must be specified.

Element names that are not valid XML names are escaped as shown for xmlelement above. Similarly, content data is escaped to make valid XML content, unless it is already of type xml.

Note that XML forests are not valid XML documents if they consist of more than one element, so it might be useful to wrap xmlforest expressions in xmlelement.

The xmlpi expression creates an XML processing instruction. The content, if present, must not contain the character sequence ?>.

For example:

The xmlroot expression alters the properties of the root node of an XML value. If a version is specified, it replaces the value in the root node's version declaration; if a standalone setting is specified, it replaces the value in the root node's standalone declaration.

For example:
To determine the order of the concatenation, an ORDER BY clause may be added to the aggregate call, as described in Section 4.2.7. For example:

The following non-standard approach used to be recommended in previous versions, and may still be useful in specific cases:

The expressions described in this section check properties of xml values.

The function xmlexists returns true if the XPath expression in the first argument returns any nodes, and false otherwise. (If either argument is null, the result is null.)

Example:

The BY REF clauses have no effect in PostgreSQL, but are accepted for SQL conformance and compatibility with other implementations. Per the SQL standard, the first BY REF is required, the second is optional. Also note that the SQL standard specifies the xmlexists construct to take an XQuery expression as first argument, but PostgreSQL currently only supports XPath, which is a subset of XQuery.

These functions check whether a text string is well-formed XML, returning a Boolean result. xml_is_well_formed_document checks for a well-formed document, while xml_is_well_formed_content checks for well-formed content. xml_is_well_formed does the former if the xmloption configuration parameter is set to DOCUMENT, or the latter if it is set to CONTENT. This means that xml_is_well_formed is useful for seeing whether a simple cast to type xml will succeed, whereas the other two functions are useful for seeing whether the corresponding variants of XMLPARSE will succeed.

Examples:

The last example shows that the check also covers whether namespaces are correctly matched.
To process values of data type xml, PostgreSQL offers the functions xpath and xpath_exists, which evaluate XPath 1.0 expressions, and the XMLTABLE table function.

The function xpath evaluates the XPath expression xpath (given as text) against the XML value xml. It returns an array of XML values corresponding to the node set produced by the XPath expression. If the XPath expression returns a scalar value rather than a node set, a single-element array is returned.

The second argument must be a well-formed XML document. In particular, it must have a single root node element.

The optional third argument of the function is an array of namespace mappings. This array should be a two-dimensional text array with the length of the second axis being equal to 2 (i.e., it should be an array of arrays, each of which consists of exactly 2 elements). The first element of each array entry is the namespace name (alias), the second the namespace URI. It is not required that aliases provided in this array be the same as those being used in the XML document itself (in other words, both in the XML document and in the xpath function context, aliases are local).
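The idea that the alias is local to the query, not to the document, also holds in other XPath APIs. A Python xml.etree illustration (an analogy only; PostgreSQL's xpath takes the mapping as a two-dimensional text array rather than a dictionary):

```python
import xml.etree.ElementTree as ET

doc = ET.fromstring(
    '<my:a xmlns:my="http://example.com"><my:b>test</my:b></my:a>'
)

# The document uses the alias "my", but the query may bind any
# alias to the same URI; only the URI has to agree.
ns = {'x': 'http://example.com'}
print([e.text for e in doc.findall('x:b', ns)])  # ['test']
```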
For example:

To deal with default (anonymous) namespaces, do something like this:
The function xpath_exists is a specialized form of the xpath function. Instead of returning the individual XML values that satisfy the XPath, this function returns a Boolean indicating whether the query was satisfied or not. This function is equivalent to the standard XMLEXISTS predicate, except that it also offers support for a namespace mapping argument.
Example:
The xmltable function produces a table based on the given XML value, an XPath filter to extract rows, and an optional set of column definitions.

The optional XMLNAMESPACES clause is a comma-separated list of namespaces. It specifies the XML namespaces used in the document and their aliases. A default namespace specification is not currently supported.

The required row_expression argument is an XPath expression that is evaluated against the supplied XML document to obtain an ordered sequence of XML nodes. This sequence is what xmltable transforms into output rows.

document_expression provides the XML document to operate on. The BY REF clauses have no effect in PostgreSQL, but are allowed for SQL conformance and compatibility with other implementations. The argument must be a well-formed XML document; fragments/forests are not accepted.
The mandatory COLUMNS clause specifies the list of columns in the output table. If the COLUMNS clause is omitted, the rows in the result set contain a single column of type xml containing the data matched by row_expression. If COLUMNS is specified, each entry describes a single column. See the syntax summary above for the format. The column name and type are required; the path, default and nullability clauses are optional.

A column marked FOR ORDINALITY will be populated with row numbers matching the order in which the output rows appeared in the original input XML document. At most one column may be marked FOR ORDINALITY.
The column_expression for a column is an XPath expression that is evaluated for each row, relative to the result of the row_expression, to find the value of the column. If no column_expression is given, then the column name is used as an implicit path.

If a column's XPath expression returns multiple elements, an error is raised. If the expression matches an empty tag, the result is an empty string (not NULL). Any xsi:nil attributes are ignored.

The text body of the XML matched by the column_expression is used as the column value. Multiple text() nodes within an element are concatenated in order. Any child elements, processing instructions, and comments are ignored, but the text contents of child elements are concatenated to the result. Note that the whitespace-only text() node between two non-text elements is preserved, and that leading whitespace on a text() node is not flattened.
If the path expression does not match for a given row but default_expression is specified, the value resulting from evaluating that expression is used. If no DEFAULT clause is given for the column, the field will be set to NULL. It is possible for a default_expression to reference the value of output columns that appear prior to it in the column list, so the default of one column may be based on the value of another column.

Columns may be marked NOT NULL. If the column_expression for a NOT NULL column does not match anything and there is no DEFAULT or the default_expression also evaluates to null, an error is reported.
Unlike regular PostgreSQL functions, column_expression and default_expression are not evaluated to a simple value before calling the function. column_expression is normally evaluated exactly once per input row, and default_expression is evaluated each time a default is needed for a field. If the expression qualifies as stable or immutable, the repeat evaluation may be skipped. Effectively xmltable behaves more like a subquery than a function call. This means that you can usefully use volatile functions like nextval in default_expression, and column_expression may depend on other parts of the XML document.
Examples:
The following example shows concatenation of multiple text() nodes, usage of the column name as XPath filter, and the treatment of whitespace, XML comments and processing instructions:
The following example illustrates how the XMLNAMESPACES clause can be used to specify the default namespace, and a list of additional namespaces used in the XML document as well as in the XPath expressions:
The following functions map the contents of relational tables to XML values. They can be thought of as XML export functionality:
The return type of each function is xml.

table_to_xml maps the content of the named table, passed as parameter tbl. The regclass type accepts strings identifying tables using the usual notation, including optional schema qualification and double quotes. query_to_xml executes the query whose text is passed as parameter query and maps the result set. cursor_to_xml fetches the indicated number of rows from the cursor specified by the parameter cursor. This variant is recommended if large tables have to be mapped, because the result value is built up in memory by each function.
If tableforest is false, then the resulting XML document looks like this:

If tableforest is true, the result is an XML content fragment that looks like this:

If no table name is available, that is, when mapping a query or a cursor, the string table is used in the first format, row in the second format.
The choice between these formats is up to the user. The first format is a proper XML document, which will be important in many applications. The second format tends to be more useful in the cursor_to_xml function if the result values are to be reassembled into one document later on. The functions for producing XML content discussed above, in particular xmlelement, can be used to alter the results to taste.
The data values are mapped in the same way as described for the function xmlelement above.

The parameter nulls determines whether null values should be included in the output. If true, null values in columns are represented as:

where xsi is the XML namespace prefix for XML Schema Instance. An appropriate namespace declaration will be added to the result value. If false, columns containing null values are simply omitted from the output.

The parameter targetns specifies the desired XML namespace of the result. If no particular namespace is wanted, an empty string should be passed.
The following functions return XML Schema documents describing the mappings performed by the corresponding functions above:
It is essential that the same parameters are passed in order to obtain matching XML data mappings and XML Schema documents.
The following functions produce XML data mappings and the corresponding XML Schema in one document (or forest), linked together. They can be useful where self-contained and self-describing results are wanted:
In addition, the following functions are available to produce analogous mappings of entire schemas or the entire current database:
Note that these potentially produce a lot of data, which needs to be built up in memory. When requesting content mappings of large schemas or databases, it might be worthwhile to consider mapping the tables separately instead, possibly even through a cursor.
The result of a schema content mapping looks like this:
where the format of a table mapping depends on the tableforest parameter as explained above.
The result of a database content mapping looks like this:
where the schema mapping is as above.
Table 9.48 shows the operators available for array types.
Table 9.48. Array Operators
Array comparisons compare the array contents element-by-element, using the default B-tree comparison function for the element data type. In multidimensional arrays the elements are visited in row-major order (last subscript varies most rapidly). If the contents of two arrays are equal but the dimensionality is different, the first difference in the dimensionality information determines the sort order. (This is a change from versions of PostgreSQL prior to 8.2: older versions would claim that two arrays with the same contents were equal, even if the number of dimensions or subscript ranges were different.)
See the documentation on arrays for more details about array operator behavior, and the documentation on indexes for more details about which operators support indexed operations.
Table 9.49 shows the functions available for use with array types. See the documentation on arrays for more information and examples of the use of these functions.
Table 9.49. Array Functions
In array_position and array_positions, each array element is compared to the searched value using IS NOT DISTINCT FROM semantics.

In array_position, NULL is returned if the value is not found.

In array_positions, NULL is returned only if the array is NULL; if the value is not found in the array, an empty array is returned instead.
In string_to_array, if the delimiter parameter is NULL, each character in the input string will become a separate element in the resulting array. If the delimiter is an empty string, then the entire input string is returned as a one-element array. Otherwise the input string is split at each occurrence of the delimiter string.
In string_to_array, if the null-string parameter is omitted or NULL, none of the substrings of the input will be replaced by NULL. In array_to_string, if the null-string parameter is omitted or NULL, any null elements in the array are simply skipped and not represented in the output string.
There are two differences in the behavior of string_to_array from pre-9.1 versions of PostgreSQL. First, it will return an empty (zero-element) array rather than NULL when the input string is of zero length. Second, if the delimiter string is NULL, the function splits the input into individual characters, rather than returning NULL as before.
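The splitting rules above can be summarized in a short Python sketch (an illustration of the semantics only; the function name and the NULL-as-None convention are assumptions of this sketch, not PostgreSQL code):

```python
def string_to_array(s, delimiter):
    # NULL delimiter: split into individual characters.
    if delimiter is None:
        return list(s)
    # Zero-length input: an empty array, not NULL (post-9.1 behavior).
    if s == '':
        return []
    # Empty-string delimiter: the whole input as a one-element array.
    if delimiter == '':
        return [s]
    # Otherwise split at each occurrence of the delimiter.
    return s.split(delimiter)

print(string_to_array('a,b,c', ','))  # ['a', 'b', 'c']
print(string_to_array('abc', None))   # ['a', 'b', 'c']
print(string_to_array('abc', ''))     # ['abc']
print(string_to_array('', ','))       # []
```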
This section describes functions for operating on sequence objects, also called sequence generators or just sequences. Sequence objects are special single-row tables created with CREATE SEQUENCE. Sequence objects are commonly used to generate unique identifiers for rows of a table. The sequence functions, listed below, provide simple, multiuser-safe methods for obtaining successive sequence values from sequence objects.
Table 9.47. Sequence Functions
The sequence to be operated on by a sequence function is specified by a regclass argument, which is simply the OID of the sequence in the pg_class system catalog. You do not have to look up the OID by hand, however, since the regclass data type's input converter will do the work for you. Just write the sequence name enclosed in single quotes so that it looks like a literal constant. For compatibility with the handling of ordinary SQL names, the string will be converted to lower case unless it contains double quotes around the sequence name. Thus:
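For example (assuming a sequence named foo exists):

```sql
SELECT nextval('foo');      -- operates on sequence foo
SELECT nextval('FOO');      -- same as above: the name is folded to lower case
SELECT nextval('"Foo"');    -- operates on sequence "Foo"
```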
The sequence name can be schema-qualified if necessary:
Before PostgreSQL 8.1, the arguments of the sequence functions were of type text, not regclass, and the above-described conversion from a text string to an OID value would happen at run time during each call. For backward compatibility, this facility still exists, but internally it is now handled as an implicit coercion from text to regclass before the function is invoked.
When you write the argument of a sequence function as an unadorned literal string, it becomes a constant of type regclass. Since this is really just an OID, it will track the originally identified sequence despite later renaming, schema reassignment, etc. This "early binding" behavior is usually desirable for sequence references in column defaults and views. But sometimes you might want "late binding" where the sequence reference is resolved at run time. To get late-binding behavior, force the constant to be stored as a text constant instead of regclass:
Note that late binding was the only behavior supported in PostgreSQL releases before 8.1, so you might need to do this to preserve the semantics of old applications.
Of course, the argument of a sequence function can be an expression as well as a constant. If it is a text expression then the implicit coercion will result in a run-time lookup.
The available sequence functions are:
nextval

Advance the sequence object to its next value and return that value. This is done atomically: even if multiple sessions execute nextval concurrently, each will safely receive a distinct sequence value.
To avoid blocking concurrent transactions that obtain numbers from the same sequence, a nextval operation is never rolled back; that is, once a value has been fetched it is considered used and will not be returned again. This is true even if the surrounding transaction later aborts, or if the calling query ends up not using the value. For example an INSERT with an ON CONFLICT clause will compute the to-be-inserted tuple, including doing any required nextval calls, before detecting any conflict that would cause it to follow the ON CONFLICT rule instead. Such cases will leave unused "holes" in the sequence of assigned values. Thus, PostgreSQL sequence objects cannot be used to obtain "gapless" sequences.
This function requires USAGE or UPDATE privilege on the sequence.
currval

Return the value most recently obtained by nextval for this sequence in the current session. (An error is reported if nextval has never been called for this sequence in this session.) Because this is returning a session-local value, it gives a predictable answer whether or not other sessions have executed nextval since the current session did.
This function requires USAGE or SELECT privilege on the sequence.
lastval

Return the value most recently returned by nextval in the current session. This function is identical to currval, except that instead of taking the sequence name as an argument it refers to whichever sequence nextval was most recently applied to in the current session. It is an error to call lastval if nextval has not yet been called in the current session.
This function requires USAGE or SELECT privilege on the last used sequence.
setval

Reset the sequence object's counter value. The two-parameter form sets the sequence's last_value field to the specified value and sets its is_called field to true, meaning that the next nextval will advance the sequence before returning a value. The value reported by currval is also set to the specified value. In the three-parameter form, is_called can be set to either true or false. true has the same effect as the two-parameter form. If it is set to false, the next nextval will return exactly the specified value, and sequence advancement commences with the following nextval. Furthermore, the value reported by currval is not changed in this case. For example,
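the three forms behave as follows (sequence foo is illustrative):

```sql
SELECT setval('foo', 42);          -- next nextval will return 43
SELECT setval('foo', 42, true);    -- same as above
SELECT setval('foo', 42, false);   -- next nextval will return 42
```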
The result returned by setval is just the value of its second argument.
Because sequences are non-transactional, changes made by setval are not undone if the transaction rolls back.
This function requires UPDATE privilege on the sequence.
Version: 11
Aggregate functions compute a single result from a set of input values. The built-in general-purpose aggregate functions are listed in , and the statistical aggregates in . The built-in within-group ordered-set aggregate functions are listed in , while the built-in within-group hypothetical-set aggregate functions are listed in . Grouping operations, which are closely related to aggregate functions, are listed in . The special syntax considerations for aggregate functions are explained in . Consult for additional introductory information.
It should be noted that except for count, these functions return a null value when no rows are selected. In particular, sum of no rows returns null, not zero as one might expect, and array_agg returns null rather than an empty array when there are no input rows. The coalesce function can be used to substitute zero or an empty array for null when necessary.
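For instance (the orders table and its columns are illustrative):

```sql
-- sum() over zero rows yields NULL; coalesce substitutes zero:
SELECT coalesce(sum(amount), 0) FROM orders WHERE false;

-- likewise an empty array instead of NULL for array_agg:
SELECT coalesce(array_agg(id), '{}') FROM orders WHERE false;
```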
Aggregate functions which support Partial Mode are eligible to participate in various optimizations, such as parallel aggregation.
Boolean aggregates bool_and and bool_or correspond to the standard SQL aggregates every and any or some. As for any and some, it seems that there is an ambiguity built into the standard syntax:
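A sketch of the ambiguous form (tables t1 and t2 with boolean columns b1 and b2 are hypothetical):

```sql
SELECT b1 = ANY ((SELECT b2 FROM t2)) FROM t1;
```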
Here ANY can be considered either as introducing a subquery, or as being an aggregate function, if the subquery returns one row with a Boolean value. Thus the standard name cannot be given to these aggregates.
Users accustomed to working with other SQL database management systems might be disappointed by the performance of the count aggregate when it is applied to the entire table. A query like:
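(sometable here stands for any ordinary table)

```sql
SELECT count(*) FROM sometable;
```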
will require effort proportional to the size of the table: PostgreSQL will need to scan either the entire table or the entirety of an index which includes all rows in the table.
Beware that this approach can fail if the outer query level contains additional processing, such as a join, because that might cause the subquery's output to be reordered before the aggregate is computed.
For each of these hypothetical-set aggregates, the list of direct arguments given in args must match the number and types of the aggregated arguments given in sorted_args. Unlike most built-in aggregates, these aggregates are not strict, that is they do not drop input rows containing nulls. Null values sort according to the rule specified in the ORDER BY clause.
For enum types (described in ), there are several functions that allow cleaner programming without hard-coding particular values of an enum type. These are listed in Table 9.32. The examples assume an enum type created as:
Table 9.32. Enum Support Functions
Notice that except for the two-argument form of enum_range, these functions disregard the specific value passed to them; they care only about its declared data type. Either null or a specific value of the type can be passed, with the same result. It is more common to apply these functions to a table column or function argument than to a hardwired type name as suggested by the examples.
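For instance, assuming an enum type created as CREATE TYPE rainbow AS ENUM ('red', 'orange', 'yellow', 'green', 'blue', 'purple'):

```sql
SELECT enum_first(null::rainbow);   -- red
SELECT enum_last(null::rainbow);    -- purple
SELECT enum_range(null::rainbow);   -- {red,orange,yellow,green,blue,purple}
SELECT enum_range('orange'::rainbow, 'green'::rainbow);  -- {orange,yellow,green}
```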
The PostgreSQL formatting functions provide a powerful set of tools for converting various data types (date/time, integer, floating point, numeric) to formatted strings and for converting from formatted strings to specific data types. Table 9.23 lists them. These functions all follow a common calling convention: the first argument is the value to be formatted and the second argument is a template that defines the output or input format.
Table 9.23. Formatting Functions
There is also a single-argument to_timestamp function; see .
to_timestamp and to_date exist to handle input formats that cannot be converted by simple casting. For most standard date/time formats, simply casting the source string to the required data type works, and is much easier. Similarly, to_number is unnecessary for standard numeric representations.
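A minimal comparison:

```sql
-- a plain cast handles ISO and other standard formats:
SELECT '2000-12-05'::date;

-- to_date is needed only for formats a cast cannot parse:
SELECT to_date('05 Dec 2000', 'DD Mon YYYY');   -- 2000-12-05
```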
In a to_char output template string, there are certain patterns that are recognized and replaced with appropriately-formatted data based on the given value. Any text that is not a template pattern is simply copied verbatim. Similarly, in an input template string (for the other functions), template patterns identify the values to be supplied by the input data string.
Table 9.24 shows the template patterns available for formatting date and time values.
Table 9.24. Template Patterns for Date/Time Formatting
Table 9.25. Template Pattern Modifiers for Date/Time Formatting
Usage notes for date/time formatting:
FM suppresses leading zeroes and trailing blanks that would otherwise be added to make the output of a pattern be fixed-width. In PostgreSQL, FM modifies only the next specification, while in Oracle FM affects all subsequent specifications, and repeated FM modifiers toggle fill mode on and off.
TM does not include trailing blanks. to_timestamp and to_date ignore the TM modifier.
to_timestamp and to_date skip multiple blank spaces in the input string unless the FX option is used. For example, to_timestamp('2000    JUN', 'YYYY MON') works, but to_timestamp('2000    JUN', 'FXYYYY MON') returns an error because to_timestamp expects one space only. FX must be specified as the first item in the template.
Ordinary text is allowed in to_char templates and will be output literally. You can put a substring in double quotes to force it to be interpreted as literal text even if it contains pattern key words. For example, in '"Hello Year "YYYY', the YYYY will be replaced by the year data, but the single Y in Year will not be. In to_date, to_number, and to_timestamp, double-quoted strings skip the number of input characters contained in the string, e.g. "XX" skips two input characters.
If you want to have a double quote in the output you must precede it with a backslash, for example '\"YYYY Month\"'.
In to_timestamp and to_date, if the year format specification is less than four digits, e.g. YYY, and the supplied year is less than four digits, the year will be adjusted to be nearest to the year 2020, e.g. 95 becomes 1995.
In to_timestamp and to_date, the YYYY conversion has a restriction when processing years with more than 4 digits. You must use some non-digit character or template after YYYY, otherwise the year is always interpreted as 4 digits. For example (with the year 20000): to_date('200001131', 'YYYYMMDD') will be interpreted as a 4-digit year; instead use a non-digit separator after the year, like to_date('20000-1131', 'YYYY-MMDD') or to_date('20000Nov31', 'YYYYMonDD').
In to_timestamp and to_date, the CC (century) field is accepted but ignored if there is a YYY, YYYY or Y,YYY field. If CC is used with YY or Y then the result is computed as that year in the specified century. If the century is specified but the year is not, the first year of the century is assumed.
In to_timestamp and to_date, weekday names or numbers (DAY, D, and related field types) are accepted but are ignored for purposes of computing the result. The same is true for quarter (Q) fields.
In to_timestamp and to_date, an ISO 8601 week-numbering date (as distinct from a Gregorian date) can be specified in one of two ways:

Year, week number, and weekday: for example to_date('2006-42-4', 'IYYY-IW-ID') returns the date 2006-10-19. If you omit the weekday it is assumed to be 1 (Monday).

Year and day of year: for example to_date('2006-291', 'IYYY-IDDD') also returns 2006-10-19.
Attempting to enter a date using a mixture of ISO 8601 week-numbering fields and Gregorian date fields is nonsensical, and will cause an error. In the context of an ISO 8601 week-numbering year, the concept of a“month”or“day of month”has no meaning. In the context of a Gregorian year, the ISO week has no meaning.
Caution
In to_timestamp, millisecond (MS) or microsecond (US) fields are used as the seconds digits after the decimal point. For example to_timestamp('12.3', 'SS.MS') is not 3 milliseconds, but 300, because the conversion treats it as 12 + 0.3 seconds. So, for the format SS.MS, the input values 12.3, 12.30, and 12.300 specify the same number of milliseconds. To get three milliseconds, one must write 12.003, which the conversion treats as 12 + 0.003 = 12.003 seconds.
Here is a more complex example: to_timestamp('15:12:02.020.001230', 'HH24:MI:SS.MS.US') is 15 hours, 12 minutes, and 2 seconds + 20 milliseconds + 1230 microseconds = 2.021230 seconds.
to_char(..., 'ID')'s day of the week numbering matches the extract(isodow from ...) function, but to_char(..., 'D')'s does not match extract(dow from ...)'s day numbering.
to_char(interval) formats HH and HH12 as shown on a 12-hour clock, for example zero hours and 36 hours both output as 12, while HH24 outputs the full hour value, which can exceed 23 in an interval value.
Table 9.26. Template Patterns for Numeric Formatting
Usage notes for numeric formatting:
A sign formatted using SG, PL, or MI is not anchored to the number; for example, to_char(-12, 'MI9999') produces '-  12' but to_char(-12, 'S9999') produces '  -12'. The Oracle implementation does not allow the use of MI before 9, but rather requires that 9 precede MI.
9 results in a value with the same number of digits as there are 9s. If a digit is not available it outputs a space.
TH does not convert values less than zero and does not convert fractional numbers.
PL, SG, and TH are PostgreSQL extensions.
V with to_char multiplies the input values by 10^n, where n is the number of digits following V. V with to_number divides in a similar manner. to_char and to_number do not support the use of V combined with a decimal point (e.g., 99.9V99 is not allowed).
EEEE (scientific notation) cannot be used in combination with any of the other formatting patterns or modifiers other than digit and decimal point patterns, and must be at the end of the format string (e.g., 9.99EEEE is a valid pattern).
Table 9.27. Template Pattern Modifiers for Numeric Formatting
Table 9.28. to_char Examples
Table 9-28 shows the available functions for date/time value processing, with details appearing in the following subsections. Table 9-27 illustrates the behaviors of the basic arithmetic operators (+, *, etc.). For formatting functions, refer to . You should be familiar with the background information on date/time data types from .
All the functions and operators described below that take time or timestamp inputs actually come in two variants: one that takes time with time zone or timestamp with time zone, and one that takes time without time zone or timestamp without time zone. For brevity, these variants are not shown separately. Also, the + and * operators come in commutative pairs (for example both date + integer and integer + date); we show only one of each such pair.
Table 9-27. Date/Time Operators
Table 9-28. Date/Time Functions
In addition to these functions, the SQL OVERLAPS operator is supported:
This expression yields true when two time periods (defined by their endpoints) overlap, false when they do not overlap. The endpoints can be specified as pairs of dates, times, or time stamps; or as a date, time, or time stamp followed by an interval. When a pair of values is provided, either the start or the end can be written first; OVERLAPS automatically takes the earlier value of the pair as the start. Each time period is considered to represent the half-open interval start <= time < end, unless start and end are equal in which case it represents that single time instant. This means for instance that two time periods with only an endpoint in common do not overlap.
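For example:

```sql
SELECT (DATE '2001-02-16', DATE '2001-12-21') OVERLAPS
       (DATE '2001-10-30', DATE '2002-10-30');   -- true

SELECT (DATE '2001-02-16', INTERVAL '100 days') OVERLAPS
       (DATE '2001-10-30', DATE '2002-10-30');   -- false
```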
When adding an interval value to (or subtracting an interval value from) a timestamp with time zone value, the days component advances or decrements the date of the timestamp with time zone by the indicated number of days. Across daylight saving time changes (when the session time zone is set to a time zone that recognizes DST), this means interval '1 day' does not necessarily equal interval '24 hours'. For example, with the session time zone set to CST7CDT, timestamp with time zone '2005-04-02 12:00-07' + interval '1 day' will produce timestamp with time zone '2005-04-03 12:00-06', while adding interval '24 hours' to the same initial timestamp with time zone produces timestamp with time zone '2005-04-03 13:00-06', as there is a change in daylight saving time at 2005-04-03 02:00 in time zone CST7CDT.
Note there can be ambiguity in the months field returned by age because different months have different numbers of days. PostgreSQL's approach uses the month from the earlier of the two dates when calculating partial months. For example, age('2004-06-01', '2004-04-30') uses April to yield 1 mon 1 day, while using May would yield 1 mon 2 days because May has 31 days, while April has only 30.
Subtraction of dates and timestamps can also be complex. One conceptually simple way to perform subtraction is to convert each value to a number of seconds using EXTRACT(EPOCH FROM ...), then subtract the results; this produces the number of seconds between the two values. This will adjust for the number of days in each month, timezone changes, and daylight saving time adjustments. Subtraction of date or timestamp values with the "-" operator returns the number of days (24-hour periods) and hours/minutes/seconds between the values, making the same adjustments. The age function returns years, months, days, and hours/minutes/seconds, performing field-by-field subtraction and then adjusting for negative field values. The following queries illustrate the differences in these approaches. The sample results were produced with timezone = 'US/Eastern'; there is a daylight saving time change between the two dates used:
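a sketch of the three approaches (exact outputs depend on the timezone setting):

```sql
-- epoch subtraction: total seconds, adjusted for the DST change
SELECT EXTRACT(EPOCH FROM timestamptz '2013-07-01 12:00:00')
     - EXTRACT(EPOCH FROM timestamptz '2013-03-01 12:00:00');

-- the "-" operator: days plus hours/minutes/seconds
SELECT timestamptz '2013-07-01 12:00:00'
     - timestamptz '2013-03-01 12:00:00';

-- age(): years, months, and days
SELECT age(timestamptz '2013-07-01 12:00:00',
           timestamptz '2013-03-01 12:00:00');
```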
EXTRACT, date_part

The extract function retrieves subfields such as year or hour from date/time values. source must be a value expression of type timestamp, time, or interval. (Expressions of type date are cast to timestamp and can therefore be used as well.) field is an identifier or string that selects what field to extract from the source value. The extract function returns values of type double precision. The following are valid field names:

century
The century. The first century starts at 0001-01-01 00:00:00 AD, although they did not know it at the time. This definition applies to all Gregorian calendar countries. There is no century number 0; you go from -1 century to 1 century. If you disagree with this, please write your complaint to: Pope, Cathedral Saint-Peter of Roma, Vatican.

day
For timestamp values, the day (of the month) field (1 - 31); for interval values, the number of days.

decade
The year field divided by 10.

dow
The day of the week as Sunday (0) to Saturday (6). Note that extract's day of the week numbering differs from that of the to_char(..., 'D') function.

doy
The day of the year (1 - 365/366).

epoch
For timestamp with time zone values, the number of seconds since 1970-01-01 00:00:00 UTC (can be negative); for date and timestamp values, the number of seconds since 1970-01-01 00:00:00 local time; for interval values, the total number of seconds in the interval.

Here is how you can convert an epoch value back to a time stamp:
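For example:

```sql
SELECT TIMESTAMP WITH TIME ZONE 'epoch'
     + 982384720.12 * INTERVAL '1 second';
```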
(The to_timestamp function encapsulates the above conversion.)

hour
The hour field (0 - 23).

isodow
The day of the week as Monday (1) to Sunday (7). This is identical to dow except for Sunday. This matches the ISO 8601 day of the week numbering.

isoyear
The ISO 8601 week-numbering year that the date falls in (not applicable to intervals). Each ISO 8601 week-numbering year begins with the Monday of the week containing the 4th of January, so in early January or late December the ISO year may be different from the Gregorian year. See the week field for more information. This field is not available in PostgreSQL releases prior to 8.3.

microseconds
The seconds field, including fractional parts, multiplied by 1 000 000; note that this includes full seconds.

millennium
The millennium. Years in the 1900s are in the second millennium. The third millennium started January 1, 2001.

milliseconds
The seconds field, including fractional parts, multiplied by 1000. Note that this includes full seconds.

minute
The minutes field (0 - 59).

month
For timestamp values, the number of the month within the year (1 - 12); for interval values, the number of months, modulo 12 (0 - 11).

quarter
The quarter of the year (1 - 4) that the date is in.

second
The seconds field, including fractional parts (0 - 59).

timezone
The time zone offset from UTC, measured in seconds. Positive values correspond to time zones east of UTC, negative values to zones west of UTC. (Technically, PostgreSQL uses UT1 because leap seconds are not handled.)

timezone_hour
The hour component of the time zone offset.

timezone_minute
The minute component of the time zone offset.

week
The number of the ISO 8601 week-numbering week of the year. By definition, ISO weeks start on Mondays and the first week of a year contains January 4 of that year. In other words, the first Thursday of a year is in week 1 of that year.

In the ISO week-numbering system, it is possible for early-January dates to be part of the 52nd or 53rd week of the previous year, and for late-December dates to be part of the first week of the next year. For example, 2005-01-01 is part of the 53rd week of year 2004, and 2006-01-01 is part of the 52nd week of year 2005, while 2012-12-31 is part of the first week of 2013. It's recommended to use the isoyear field together with week to get consistent results.

year
The year field. Keep in mind there is no 0 AD, so subtracting BC years from AD years should be done with care.
The date_part function is modeled on the traditional Ingres equivalent to the SQL-standard function extract:

Note that here the field parameter needs to be a string value, not a name. The valid field names for date_part are the same as for extract.
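For example:

```sql
SELECT date_part('day', TIMESTAMP '2001-02-16 20:38:40');   -- 16
SELECT date_part('hour', INTERVAL '4 hours 3 minutes');     -- 4
```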
date_trunc

The function date_trunc is conceptually similar to the trunc function for numbers.
source is a value expression of type timestamp or interval. (Values of type date and time are cast automatically to timestamp or interval, respectively.) field selects to which precision to truncate the input value. The return value is of type timestamp or interval with all fields that are less significant than the selected one set to zero (or one, for day and month).
Valid values for field are:
Examples:
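For date_trunc:

```sql
SELECT date_trunc('hour', TIMESTAMP '2001-02-16 20:38:40');
-- 2001-02-16 20:00:00

SELECT date_trunc('year', TIMESTAMP '2001-02-16 20:38:40');
-- 2001-01-01 00:00:00
```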
Table 9-29. AT TIME ZONE Variants
Examples (assuming the local time zone is PST8PDT):
The first example takes a time stamp without time zone and interprets it as MST time (UTC-7), which is then converted to PST (UTC-8) for display. The second example takes a time stamp specified in EST (UTC-5) and converts it to local time in MST (UTC-7).
The function timezone(zone, timestamp) is equivalent to the SQL-conforming construct timestamp AT TIME ZONE zone.
PostgreSQL provides a number of functions that return values related to the current date and time. These SQL-standard functions all return values based on the start time of the current transaction:
CURRENT_TIME and CURRENT_TIMESTAMP deliver values with time zone; LOCALTIME and LOCALTIMESTAMP deliver values without time zone.
CURRENT_TIME, CURRENT_TIMESTAMP, LOCALTIME, and LOCALTIMESTAMP can optionally take a precision parameter, which causes the result to be rounded to that many fractional digits in the seconds field. Without a precision parameter, the result is given to the full available precision.
Some examples:
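(the timestamps shown are of course illustrative; the output reflects the current moment)

```sql
SELECT CURRENT_TIME;          -- e.g. 14:39:53.662522-05
SELECT CURRENT_TIMESTAMP;     -- e.g. 2001-12-23 14:39:53.662522-05
SELECT CURRENT_TIMESTAMP(2);  -- e.g. 2001-12-23 14:39:53.66-05
```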
Since these functions return the start time of the current transaction, their values do not change during the transaction. This is considered a feature: the intent is to allow a single transaction to have a consistent notion of the "current" time, so that multiple modifications within the same transaction bear the same time stamp.
Note: Other database systems might advance these values more frequently.
PostgreSQL also provides functions that return the start time of the current statement, as well as the actual current time at the instant the function is called. The complete list of non-SQL-standard time functions is:
transaction_timestamp() is equivalent to CURRENT_TIMESTAMP, but is named to clearly reflect what it returns. statement_timestamp() returns the start time of the current statement (more specifically, the time of receipt of the latest command message from the client). statement_timestamp() and transaction_timestamp() return the same value during the first command of a transaction, but might differ during subsequent commands. clock_timestamp() returns the actual current time, and therefore its value changes even within a single SQL command. timeofday() is a historical PostgreSQL function. Like clock_timestamp(), it returns the actual current time, but as a formatted text string rather than a timestamp with time zone value. now() is a traditional PostgreSQL equivalent to transaction_timestamp().
All the date/time data types also accept the special literal value now to specify the current date and time (again, interpreted as the transaction start time). Thus, the following three all return the same result:
Tip: You do not want to use the third form when specifying a DEFAULT clause while creating a table. The system will convert now to a timestamp as soon as the constant is parsed, so that when the default value is needed, the time of the table creation would be used! The first two forms will not be evaluated until the default value is used, because they are function calls. Thus they will give the desired behavior of defaulting to the time of row insertion.
The following functions are available to delay execution of the server process:
pg_sleep makes the current session's process sleep until seconds seconds have elapsed. seconds is a value of type double precision, so fractional-second delays can be specified. pg_sleep_for is a convenience function for larger sleep times specified as an interval. pg_sleep_until is a convenience function for when a specific wake-up time is desired. For example:
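(the interval and timestamp arguments are illustrative)

```sql
SELECT pg_sleep(1.5);
SELECT pg_sleep_for('5 minutes');
SELECT pg_sleep_until('tomorrow 03:00');
```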
Note: The effective resolution of the sleep interval is platform-specific; 0.01 seconds is a common value. The sleep delay will be at least as long as specified. It might be longer depending on factors such as server load. In particular, pg_sleep_until is not guaranteed to wake up exactly at the specified time, but it will not wake up any earlier.
Unlike the other functions described here, the function xmlagg is an aggregate function. It concatenates the input values to the aggregate function call, much like xmlconcat does, except that concatenation occurs across rows rather than across expressions in a single row. See for additional information about aggregate functions.
The expression IS DOCUMENT returns true if the argument XML value is a proper XML document, false if it is not (that is, it is a content fragment), and null if the argument is null. See about the difference between documents and content fragments.
As an example of using the output produced by these functions, shows an XSLT stylesheet that converts the output of table_to_xml_and_xmlschema to an HTML document containing a tabular rendition of the table data. In a similar manner, the results from these functions can be converted into other XML-based formats.
See also about the aggregate function array_agg for use with arrays.
See for more information about regclass.
If a sequence object has been created with default parameters, successive nextval calls will return successive values beginning with 1. Other behaviors can be obtained by using special parameters in the CREATE SEQUENCE command; see its command reference page for more information.
The aggregate functions array_agg, json_agg, jsonb_agg, json_object_agg, jsonb_object_agg, string_agg, and xmlagg, as well as similar user-defined aggregate functions, produce meaningfully different result values depending on the order of the input values. This ordering is unspecified by default, but can be controlled by writing an ORDER BY clause within the aggregate call, as shown in . Alternatively, supplying the input values from a sorted subquery will usually work. For example:
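A sketch of both techniques (table and column names are illustrative):

```sql
-- ORDER BY inside the aggregate call:
SELECT array_agg(name ORDER BY name DESC) FROM t;

-- or feed the aggregate from a sorted subquery:
SELECT xmlagg(x) FROM (SELECT x FROM test ORDER BY y DESC) AS tab;
```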
shows aggregate functions typically used in statistical analysis. (These are separated out merely to avoid cluttering the listing of more-commonly-used aggregates.) Where the description mentions N, it means the number of input rows for which all the input expressions are non-null. In all cases, null is returned if the computation is meaningless, for example when N is zero.
shows some aggregate functions that use the ordered-set aggregate syntax. These functions are sometimes referred to as “inverse distribution” functions.
All the aggregates listed in ignore null values in their sorted input. For those that take a fraction parameter, the fraction value must be between 0 and 1; an error is thrown if not. However, a null fraction value simply produces a null result.
Each of the aggregates listed in is associated with a window function of the same name defined in . In each case, the aggregate result is the value that the associated window function would have returned for the "hypothetical" row constructed from args, if such a row had been added to the sorted group of rows computed from the sorted_args.
Grouping operations are used in conjunction with grouping sets (see ) to distinguish result rows. The arguments to the GROUPING operation are not actually evaluated, but they must match exactly expressions given in the GROUP BY clause of the associated query level. Bits are assigned with the rightmost argument being the least-significant bit; each bit is 0 if the corresponding expression is included in the grouping criteria of the grouping set generating the result row, and 1 if it is not. For example:
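a sketch using ROLLUP (the items_sold table is illustrative):

```sql
SELECT make, model, GROUPING(make, model), sum(sales)
FROM items_sold
GROUP BY ROLLUP(make, model);
-- GROUPING(make, model) is 0 for rows grouped by both columns,
-- 1 where model is rolled up, and 3 for the grand-total row
```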
Modifiers can be applied to any template pattern to alter its behavior. For example, FMMonth is the Month pattern with the FM modifier. Table 9.25 shows the modifier patterns for date/time formatting.
While to_date will reject a mixture of Gregorian and ISO week-numbering date fields, to_char will not, since output format specifications like YYYY-MM-DD (IYYY-IDDD) can be useful. But avoid writing something like IYYY-MM-DD; that would yield surprising results near the start of the year. (See for more information.)
Table 9.26 shows the template patterns available for formatting numeric values.
Certain modifiers can be applied to any template pattern to alter its behavior. For example, FM9999 is the 9999 pattern with the FM modifier. Table 9.27 shows the modifier patterns for numeric formatting.
Table 9.28 shows some examples of the use of the to_char function.
The extract function is primarily intended for computational processing. For formatting date/time values for display, see .
The AT TIME ZONE construct allows conversions of time stamps to different time zones. Table 9-29 shows its variants.
In these expressions, the desired time zone zone can be specified either as a text string (e.g., 'PST') or as an interval (e.g., INTERVAL '-08:00'). In the text case, a time zone name can be specified in any of the ways described in .
Function
Return Type
Description
Example
Result
string
||
string
text
字串連接
'Post' || 'greSQL'
PostgreSQL
string
||
non-string
or non-string
||
string
text
字串與一個非字串輸入連接
'Value: ' || 42
Value: 42
bit_length(
string
)
int
字串中的位元數
bit_length('jose')
32
char_length(
string
) or character_length(
string
)
int
字串中的字元數
char_length('jose')
4
lower(
string
)
text
將字串轉換為小寫
lower('TOM')
tom
octet_length(
string
)
int
字串中的位元組數
octet_length('jose')
4
overlay(
string
placing string
fromint
[for int
])
text
子字串替換
overlay('Txxxxas' placing 'hom' from 2 for 4)
Thomas
position(
substring
in string
)
int
指出子字串的位置
position('om' in 'Thomas')
3
substring(
string
[from int
] [forint
])
text
提取子字串
substring('Thomas' from 2 for 3)
hom
substring(
string
from pattern
)
text
提取符合 POSIX 正規表示式的子字串。有關特徵比對的更多訊息,請參見第 9.7 節。
substring('Thomas' from '...$')
mas
substring(string from pattern for escape)
text
Extract substring matching SQL regular expression. See Section 9.7 for more information on pattern matching.
substring('Thomas' from '%#"o_a#"_' for '#')
oma
trim([leading | trailing | both] [
characters
] from string
)
text
Remove the longest string containing only characters from characters (a space by default) from the start, end, or both ends (both is the default) of string
trim(both 'xyz' from 'yxTomxx')
Tom
trim([leading | trailing | both] [from]
string
[, characters
] )
text
Non-standard syntax for trim()
trim(both from 'yxTomxx', 'xyz')
Tom
upper(
string
)
text
Convert string to upper case
upper('tom')
TOM
Function
Return Type
Description
Example
Result
ascii(
string
)
int
ASCII code of the first character of the argument. For UTF8 returns the Unicode code point of the character. For other multibyte encodings, the argument must be an ASCII character.
ascii('x')
120
btrim(
string
text
[, characters
text
])
text
Remove the longest string consisting only of characters in characters (a space by default) from the start and end of string
btrim('xyxtrimyyx', 'xyz')
trim
chr(int
)
text
Character with the given code. For UTF8 the argument is treated as a Unicode code point. For other multibyte encodings the argument must designate an ASCII character. The NULL (0) character is not allowed because text data types cannot store that byte.
chr(65)
A
concat(
str
"any"
[, str
"any"
[, ...] ])
text
Concatenate the text representations of all the arguments. NULL arguments are ignored.
concat('abcde', 2, NULL, 22)
abcde222
concat_ws(sep text, str "any" [, str "any" [, ...] ])
text
Concatenate all but the first argument with separators. The first argument is used as the separator string. NULL arguments are ignored.
concat_ws(',', 'abcde', 2, NULL, 22)
abcde,2,22
convert(string bytea, src_encoding name, dest_encoding name)
bytea
Convert string to dest_encoding. The original encoding is specified by src_encoding. The string must be valid in this encoding. Conversions can be defined by CREATE CONVERSION. Also there are some predefined conversions; see Table 9.10 for available conversions.
convert('text_in_utf8', 'UTF8', 'LATIN1')
text_in_utf8
represented in Latin-1 encoding (ISO 8859-1)
convert_from(
string
bytea
,src_encoding
name
)
text
Convert string to the database encoding. The original encoding is specified by src_encoding. The string must be valid in this encoding.
convert_from('text_in_utf8', 'UTF8')
text_in_utf8
represented in the current database encoding
convert_to(
string
text
,dest_encoding
name
)
bytea
Convert string to dest_encoding.
convert_to('some text', 'UTF8')
some text
represented in the UTF8 encoding
decode(
string
text
, format
text
)
bytea
Decode binary data from textual representation in string. Options for format are the same as in encode.
decode('MTIzAAE=', 'base64')
\x3132330001
encode(
data
bytea
, format
text
)
text
Encode binary data into a textual representation. Supported formats are: base64, hex, escape. escape converts zero bytes and high-bit-set bytes to octal sequences (\nnn) and doubles backslashes.
encode(E'123\\000\\001', 'base64')
MTIzAAE=
format(formatstr text [, formatarg "any" [, ...] ])
text
Format arguments according to a format string. This function is similar to the C function sprintf. See Section 9.4.1.
format('Hello %s, %1$s', 'World')
Hello World, World
initcap(
string
)
text
Convert the first letter of each word to upper case and the rest to lower case. Words are sequences of alphanumeric characters separated by non-alphanumeric characters.
initcap('hi THOMAS')
Hi Thomas
left(
str
text
, n
int
)
text
Return the first n characters in the string. When n is negative, return all but the last |n| characters.
left('abcde', 2)
ab
length(
string
)
int
Number of characters in string
length('jose')
4
length(
string
bytea
, encoding
name
)
int
Number of characters in string in the given encoding. The string must be valid in this encoding.
length('jose', 'UTF8')
4
lpad(string text, length int [, fill text])
text
Fill up the string to length length by prepending the characters fill (a space by default). If the string is already longer than length then it is truncated (on the right).
lpad('hi', 5, 'xy')
xyxhi
ltrim(
string
text
[, characters
text
])
text
Remove the longest string containing only characters from characters (a space by default) from the start of string
ltrim('zzzytest', 'xyz')
test
md5(
string
)
text
Calculates the MD5 hash of string, returning the result in hexadecimal
md5('abc')
900150983cd24fb0d6963f7d28e17f72
parse_ident(
qualified_identifier
text
[, strictmode
boolean
DEFAULT true ] )
text[]
Split qualified_identifier into an array of identifiers, removing any quoting of individual identifiers. By default, extra characters after the last identifier are considered an error; but if the second parameter is false, then such extra characters are ignored. (This behavior is useful for parsing names for objects like functions.) Note that this function does not truncate over-length identifiers. If you want truncation you can cast the result to name[].
parse_ident('"SomeSchema".someTable')
{SomeSchema,sometable}
pg_client_encoding()
name
Current client encoding name
pg_client_encoding()
SQL_ASCII
quote_ident(
string
text
)
text
Return the given string suitably quoted to be used as an identifier in an SQL statement string. Quotes are added only if necessary (i.e., if the string contains non-identifier characters or would be case-folded). Embedded quotes are properly doubled. See also Example 42.1.
quote_ident('Foo bar')
"Foo bar"
quote_literal(
string
text
)
text
Return the given string suitably quoted to be used as a string literal in an SQL statement string. Embedded single-quotes and backslashes are properly doubled. Note that quote_literal returns null on null input; if the argument might be null, quote_nullable is often more suitable. See also Example 42.1.
quote_literal(E'O\'Reilly')
'O''Reilly'
quote_literal(
value
anyelement
)
text
Coerce the given value to text and then quote it as a literal. Embedded single-quotes and backslashes are properly doubled.
quote_literal(42.5)
'42.5'
quote_nullable(
string
text
)
text
Return the given string suitably quoted to be used as a string literal in an SQL statement string; or, if the argument is null, return NULL. Embedded single-quotes and backslashes are properly doubled. See also Example 42.1.
quote_nullable(NULL)
NULL
quote_nullable(
value
anyelement
)
text
Coerce the given value to text and then quote it as a literal; or, if the argument is null, return NULL. Embedded single-quotes and backslashes are properly doubled.
quote_nullable(42.5)
'42.5'
regexp_match(string text, pattern text [, flags text])
text[]
Return captured substrings resulting from the first match of a POSIX regular expression to the string. See Section 9.7.3 for more information.
regexp_match('foobarbequebaz', '(bar)(beque)')
{bar,beque}
regexp_matches(string text, pattern text [, flags text])
setof text[]
Return captured substrings resulting from matching a POSIX regular expression to the string. See Section 9.7.3 for more information.
regexp_matches('foobarbequebaz', 'ba.', 'g')
{bar}
{baz}
(2 rows)
regexp_replace(string text, pattern text, replacement text [, flags text])
text
Replace substrings matching a POSIX regular expression. See Section 9.7.3 for more information.
regexp_replace('Thomas', '.[mN]a.', 'M')
ThM
regexp_split_to_array(string text, pattern text [, flags text])
text[]
Split string using a POSIX regular expression as the delimiter. See Section 9.7.3 for more information.
regexp_split_to_array('hello world', E'\\s+')
{hello,world}
regexp_split_to_table(string text, pattern text [, flags text])
setof text
Split string using a POSIX regular expression as the delimiter. See Section 9.7.3 for more information.
regexp_split_to_table('hello world', E'\\s+')
hello
world
(2 rows)
repeat(
string
text
, number
int
)
text
Repeat string the specified number of times
repeat('Pg', 4)
PgPgPgPg
replace(
string
text
, from
text
,to
text
)
text
Replace all occurrences in string of substring from with substring to
replace('abcdefabcdef', 'cd', 'XX')
abXXefabXXef
reverse(
str
)
text
Return reversed string.
reverse('abcde')
edcba
right(
str
text
, n
int
)
text
Return the last n characters in the string. When n is negative, return all but the first |n| characters.
right('abcde', 2)
de
rpad(string text, length int [, fill text])
text
Fill up the string to length length by appending the characters fill (a space by default). If the string is already longer than length then it is truncated.
rpad('hi', 5, 'xy')
hixyx
rtrim(
string
text
[, characters
text
])
text
Remove the longest string containing only characters from characters (a space by default) from the end of string
rtrim('testxxzx', 'xyz')
test
split_part(
string
text
,delimiter
text
, field
int
)
text
Split string on delimiter and return the given field (counting from one)
split_part('abc~@~def~@~ghi', '~@~', 2)
def
strpos(
string
, substring
)
int
Location of specified substring (same as position(substring in string), but note the reversed argument order)
strpos('high', 'ig')
2
substr(
string
, from
[, count
])
text
Extract substring (same as substring(string from from for count))
substr('alphabet', 3, 2)
ph
to_ascii(
string
text
[, encoding
text
])
text
Convert string to ASCII from another encoding (only supports conversion from LATIN1, LATIN2, LATIN9, and WIN1250 encodings)
to_ascii('Karel')
Karel
to_hex(
number
int
or bigint
)
text
Convert number to its equivalent hexadecimal representation
to_hex(2147483647)
7fffffff
translate(
string
text
, from
text
, to
text
)
text
Any character in string that matches a character in the from set is replaced by the corresponding character in the to set. If from is longer than to, occurrences of the extra characters in from are removed.
translate('12345', '143', 'ax')
a2x5
Conversion Name [a]
Source Encoding
Destination Encoding
ascii_to_mic
SQL_ASCII
MULE_INTERNAL
ascii_to_utf8
SQL_ASCII
UTF8
big5_to_euc_tw
BIG5
EUC_TW
big5_to_mic
BIG5
MULE_INTERNAL
big5_to_utf8
BIG5
UTF8
euc_cn_to_mic
EUC_CN
MULE_INTERNAL
euc_cn_to_utf8
EUC_CN
UTF8
euc_jp_to_mic
EUC_JP
MULE_INTERNAL
euc_jp_to_sjis
EUC_JP
SJIS
euc_jp_to_utf8
EUC_JP
UTF8
euc_kr_to_mic
EUC_KR
MULE_INTERNAL
euc_kr_to_utf8
EUC_KR
UTF8
euc_tw_to_big5
EUC_TW
BIG5
euc_tw_to_mic
EUC_TW
MULE_INTERNAL
euc_tw_to_utf8
EUC_TW
UTF8
gb18030_to_utf8
GB18030
UTF8
gbk_to_utf8
GBK
UTF8
iso_8859_10_to_utf8
LATIN6
UTF8
iso_8859_13_to_utf8
LATIN7
UTF8
iso_8859_14_to_utf8
LATIN8
UTF8
iso_8859_15_to_utf8
LATIN9
UTF8
iso_8859_16_to_utf8
LATIN10
UTF8
iso_8859_1_to_mic
LATIN1
MULE_INTERNAL
iso_8859_1_to_utf8
LATIN1
UTF8
iso_8859_2_to_mic
LATIN2
MULE_INTERNAL
iso_8859_2_to_utf8
LATIN2
UTF8
iso_8859_2_to_windows_1250
LATIN2
WIN1250
iso_8859_3_to_mic
LATIN3
MULE_INTERNAL
iso_8859_3_to_utf8
LATIN3
UTF8
iso_8859_4_to_mic
LATIN4
MULE_INTERNAL
iso_8859_4_to_utf8
LATIN4
UTF8
iso_8859_5_to_koi8_r
ISO_8859_5
KOI8R
iso_8859_5_to_mic
ISO_8859_5
MULE_INTERNAL
iso_8859_5_to_utf8
ISO_8859_5
UTF8
iso_8859_5_to_windows_1251
ISO_8859_5
WIN1251
iso_8859_5_to_windows_866
ISO_8859_5
WIN866
iso_8859_6_to_utf8
ISO_8859_6
UTF8
iso_8859_7_to_utf8
ISO_8859_7
UTF8
iso_8859_8_to_utf8
ISO_8859_8
UTF8
iso_8859_9_to_utf8
LATIN5
UTF8
johab_to_utf8
JOHAB
UTF8
koi8_r_to_iso_8859_5
KOI8R
ISO_8859_5
koi8_r_to_mic
KOI8R
MULE_INTERNAL
koi8_r_to_utf8
KOI8R
UTF8
koi8_r_to_windows_1251
KOI8R
WIN1251
koi8_r_to_windows_866
KOI8R
WIN866
koi8_u_to_utf8
KOI8U
UTF8
mic_to_ascii
MULE_INTERNAL
SQL_ASCII
mic_to_big5
MULE_INTERNAL
BIG5
mic_to_euc_cn
MULE_INTERNAL
EUC_CN
mic_to_euc_jp
MULE_INTERNAL
EUC_JP
mic_to_euc_kr
MULE_INTERNAL
EUC_KR
mic_to_euc_tw
MULE_INTERNAL
EUC_TW
mic_to_iso_8859_1
MULE_INTERNAL
LATIN1
mic_to_iso_8859_2
MULE_INTERNAL
LATIN2
mic_to_iso_8859_3
MULE_INTERNAL
LATIN3
mic_to_iso_8859_4
MULE_INTERNAL
LATIN4
mic_to_iso_8859_5
MULE_INTERNAL
ISO_8859_5
mic_to_koi8_r
MULE_INTERNAL
KOI8R
mic_to_sjis
MULE_INTERNAL
SJIS
mic_to_windows_1250
MULE_INTERNAL
WIN1250
mic_to_windows_1251
MULE_INTERNAL
WIN1251
mic_to_windows_866
MULE_INTERNAL
WIN866
sjis_to_euc_jp
SJIS
EUC_JP
sjis_to_mic
SJIS
MULE_INTERNAL
sjis_to_utf8
SJIS
UTF8
tcvn_to_utf8
WIN1258
UTF8
uhc_to_utf8
UHC
UTF8
utf8_to_ascii
UTF8
SQL_ASCII
utf8_to_big5
UTF8
BIG5
utf8_to_euc_cn
UTF8
EUC_CN
utf8_to_euc_jp
UTF8
EUC_JP
utf8_to_euc_kr
UTF8
EUC_KR
utf8_to_euc_tw
UTF8
EUC_TW
utf8_to_gb18030
UTF8
GB18030
utf8_to_gbk
UTF8
GBK
utf8_to_iso_8859_1
UTF8
LATIN1
utf8_to_iso_8859_10
UTF8
LATIN6
utf8_to_iso_8859_13
UTF8
LATIN7
utf8_to_iso_8859_14
UTF8
LATIN8
utf8_to_iso_8859_15
UTF8
LATIN9
utf8_to_iso_8859_16
UTF8
LATIN10
utf8_to_iso_8859_2
UTF8
LATIN2
utf8_to_iso_8859_3
UTF8
LATIN3
utf8_to_iso_8859_4
UTF8
LATIN4
utf8_to_iso_8859_5
UTF8
ISO_8859_5
utf8_to_iso_8859_6
UTF8
ISO_8859_6
utf8_to_iso_8859_7
UTF8
ISO_8859_7
utf8_to_iso_8859_8
UTF8
ISO_8859_8
utf8_to_iso_8859_9
UTF8
LATIN5
utf8_to_johab
UTF8
JOHAB
utf8_to_koi8_r
UTF8
KOI8R
utf8_to_koi8_u
UTF8
KOI8U
utf8_to_sjis
UTF8
SJIS
utf8_to_tcvn
UTF8
WIN1258
utf8_to_uhc
UTF8
UHC
utf8_to_windows_1250
UTF8
WIN1250
utf8_to_windows_1251
UTF8
WIN1251
utf8_to_windows_1252
UTF8
WIN1252
utf8_to_windows_1253
UTF8
WIN1253
utf8_to_windows_1254
UTF8
WIN1254
utf8_to_windows_1255
UTF8
WIN1255
utf8_to_windows_1256
UTF8
WIN1256
utf8_to_windows_1257
UTF8
WIN1257
utf8_to_windows_866
UTF8
WIN866
utf8_to_windows_874
UTF8
WIN874
windows_1250_to_iso_8859_2
WIN1250
LATIN2
windows_1250_to_mic
WIN1250
MULE_INTERNAL
windows_1250_to_utf8
WIN1250
UTF8
windows_1251_to_iso_8859_5
WIN1251
ISO_8859_5
windows_1251_to_koi8_r
WIN1251
KOI8R
windows_1251_to_mic
WIN1251
MULE_INTERNAL
windows_1251_to_utf8
WIN1251
UTF8
windows_1251_to_windows_866
WIN1251
WIN866
windows_1252_to_utf8
WIN1252
UTF8
windows_1256_to_utf8
WIN1256
UTF8
windows_866_to_iso_8859_5
WIN866
ISO_8859_5
windows_866_to_koi8_r
WIN866
KOI8R
windows_866_to_mic
WIN866
MULE_INTERNAL
windows_866_to_utf8
WIN866
UTF8
windows_866_to_windows_1251
WIN866
WIN1251
windows_874_to_utf8
WIN874
UTF8
euc_jis_2004_to_utf8
EUC_JIS_2004
UTF8
utf8_to_euc_jis_2004
UTF8
EUC_JIS_2004
shift_jis_2004_to_utf8
SHIFT_JIS_2004
UTF8
utf8_to_shift_jis_2004
UTF8
SHIFT_JIS_2004
euc_jis_2004_to_shift_jis_2004
EUC_JIS_2004
SHIFT_JIS_2004
shift_jis_2004_to_euc_jis_2004
SHIFT_JIS_2004
EUC_JIS_2004
[a] The conversion names follow a standard naming scheme: the official name of the source encoding with all non-alphanumeric characters replaced by underscores, followed by _to_, followed by the similarly processed destination encoding name. Therefore, the names might deviate from the customary encoding names.
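The naming scheme can be read off any row above. For example, the conversion named iso_8859_1_to_utf8 goes from LATIN1 (ISO 8859-1) to UTF8; a sketch of decoding a LATIN1 byte string with convert_from (assuming the database encoding can represent the result):

```sql
-- \xe9 is 'é' in LATIN1; convert_from decodes it into the database encoding
SELECT convert_from('\xe9'::bytea, 'LATIN1');
```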
Operator
Return Type
Description
Example
Result
@@
boolean
tsvector
matches tsquery
?
to_tsvector('fat cats ate rats') @@ to_tsquery('cat & rat')
t
@@@
boolean
deprecated synonym for @@
to_tsvector('fat cats ate rats') @@@ to_tsquery('cat & rat')
t
||
tsvector
concatenate tsvectors
'a:1 b:2'::tsvector || 'c:1 d:2 b:3'::tsvector
'a':1 'b':2,5 'c':3 'd':4
&&
tsquery
AND tsquerys together
'fat | rat'::tsquery && 'cat'::tsquery
( 'fat' | 'rat' ) & 'cat'
||
tsquery
OR tsquerys together
'fat | rat'::tsquery || 'cat'::tsquery
( 'fat' | 'rat' ) | 'cat'
!!
tsquery
negate a tsquery
!! 'cat'::tsquery
!'cat'
<->
tsquery
tsquery
followed by tsquery
to_tsquery('fat') <-> to_tsquery('rat')
'fat' <-> 'rat'
@>
boolean
tsquery
contains another ?
'cat'::tsquery @> 'cat & rat'::tsquery
f
<@
boolean
tsquery
is contained in ?
'cat'::tsquery <@ 'cat & rat'::tsquery
t
Function
Return Type
Description
Example
Result
array_to_tsvector(text[]
)
tsvector
convert array of lexemes to tsvector
array_to_tsvector('{fat,cat,rat}'::text[])
'cat' 'fat' 'rat'
get_current_ts_config()
regconfig
get default text search configuration
get_current_ts_config()
english
length(tsvector
)
integer
number of lexemes in tsvector
length('fat:2,4 cat:3 rat:5A'::tsvector)
3
numnode(tsquery
)
integer
number of lexemes plus operators in tsquery
numnode('(fat & rat) | cat'::tsquery)
5
plainto_tsquery([ config regconfig, ] query text)
tsquery
produce tsquery ignoring punctuation
plainto_tsquery('english', 'The Fat Rats')
'fat' & 'rat'
phraseto_tsquery([ config regconfig, ] query text)
tsquery
produce tsquery that searches for a phrase, ignoring punctuation
phraseto_tsquery('english', 'The Fat Rats')
'fat' <-> 'rat'
querytree(query tsquery)
text
get indexable part of a tsquery
querytree('foo & ! bar'::tsquery)
'foo'
setweight(vector tsvector, weight "char")
tsvector
assign weight to each element of vector
setweight('fat:2,4 cat:3 rat:5B'::tsvector, 'A')
'cat':3A 'fat':2A,4A 'rat':5A
setweight(vector tsvector, weight "char", lexemes text[])
tsvector
assign weight to elements of vector that are listed in lexemes
setweight('fat:2,4 cat:3 rat:5B'::tsvector, 'A', '{cat,rat}')
'cat':3A 'fat':2,4 'rat':5A
strip(tsvector
)
tsvector
remove positions and weights from tsvector
strip('fat:2,4 cat:3 rat:5A'::tsvector)
'cat' 'fat' 'rat'
to_tsquery([ config regconfig, ] query text)
tsquery
normalize words and convert to tsquery
to_tsquery('english', 'The & Fat & Rats')
'fat' & 'rat'
to_tsvector([ config regconfig, ] document text)
tsvector
reduce document text to tsvector
to_tsvector('english', 'The Fat Rats')
'fat':2 'rat':3
to_tsvector([ config regconfig, ] document json(b))
tsvector
reduce each string value in the document to a tsvector, and then concatenate those in document order to produce a single tsvector
to_tsvector('english', '{"a": "The Fat Rats"}'::json)
'fat':2 'rat':3
ts_delete(vector tsvector, lexeme text)
tsvector
remove given lexeme from vector
ts_delete('fat:2,4 cat:3 rat:5A'::tsvector, 'fat')
'cat':3 'rat':5A
ts_delete(vector tsvector, lexemes text[])
tsvector
remove any occurrence of lexemes in lexemes from vector
ts_delete('fat:2,4 cat:3 rat:5A'::tsvector, ARRAY['fat','rat'])
'cat':3
ts_filter(vector tsvector, weights "char"[])
tsvector
select only elements with given weights from vector
ts_filter('fat:2,4 cat:3b rat:5A'::tsvector, '{a,b}')
'cat':3B 'rat':5A
ts_headline([ config regconfig, ] document text, query tsquery [, options text ])
text
display a query match
ts_headline('x y z', 'z'::tsquery)
x y <b>z</b>
ts_headline([ config regconfig, ] document json(b), query tsquery [, options text ])
text
display a query match
ts_headline('{"a":"x y z"}'::json, 'z'::tsquery)
{"a":"x y <b>z</b>"}
ts_rank([ weights float4[], ] vector tsvector, query tsquery [, normalization integer ])
float4
rank document for query
ts_rank(textsearch, query)
0.818
ts_rank_cd([ weights float4[], ] vector tsvector, query tsquery [, normalization integer ])
float4
rank document for query using cover density
ts_rank_cd('{0.1, 0.2, 0.4, 1.0}', textsearch, query)
2.01317
ts_rewrite(query tsquery, target tsquery, substitute tsquery)
tsquery
replace target with substitute within query
ts_rewrite('a & b'::tsquery, 'a'::tsquery, 'foo|bar'::tsquery)
'b' & ( 'foo' | 'bar' )
ts_rewrite(query tsquery, select text)
tsquery
replace using targets and substitutes from a SELECT command
SELECT ts_rewrite('a & b'::tsquery, 'SELECT t,s FROM aliases')
'b' & ( 'foo' | 'bar' )
tsquery_phrase(query1 tsquery, query2 tsquery)
tsquery
make query that searches for query1 followed by query2 (same as <-> operator)
tsquery_phrase(to_tsquery('fat'), to_tsquery('cat'))
'fat' <-> 'cat'
tsquery_phrase(query1 tsquery, query2 tsquery, distance integer)
tsquery
make query that searches for query1 followed by query2 at distance distance
tsquery_phrase(to_tsquery('fat'), to_tsquery('cat'), 10)
'fat' <10> 'cat'
tsvector_to_array(tsvector
)
text[]
convert tsvector to array of lexemes
tsvector_to_array('fat:2,4 cat:3 rat:5A'::tsvector)
{cat,fat,rat}
tsvector_update_trigger()
trigger
trigger function for automatic tsvector column update
CREATE TRIGGER ... tsvector_update_trigger(tsvcol, 'pg_catalog.swedish', title, body)
tsvector_update_trigger_column()
trigger
trigger function for automatic tsvector column update
CREATE TRIGGER ... tsvector_update_trigger_column(tsvcol, configcol, title, body)
unnest(tsvector, OUT lexeme text, OUT positions smallint[], OUT weights text)
setof record
expand a tsvector to a set of rows
unnest('fat:2,4 cat:3 rat:5A'::tsvector)
(cat,{3},{D}) ...
Function
Return Type
Description
Example
Result
ts_debug([ config regconfig, ] document text, OUT alias text, OUT description text, OUT token text, OUT dictionaries regdictionary[], OUT dictionary regdictionary, OUT lexemes text[])
setof record
test a configuration
ts_debug('english', 'The Brightest supernovaes')
(asciiword,"Word, all ASCII",The,{english_stem},english_stem,{}) ...
ts_lexize(dict regdictionary, token text)
text[]
test a dictionary
ts_lexize('english_stem', 'stars')
{star}
ts_parse(parser_name text, document text, OUT tokid integer, OUT token text)
setof record
test a parser
ts_parse('default', 'foo - bar')
(1,foo) ...
ts_parse(parser_oid oid, document text, OUT tokid integer, OUT token text)
setof record
test a parser
ts_parse(3722, 'foo - bar')
(1,foo) ...
ts_token_type(parser_name text, OUT tokid integer, OUT alias text, OUT description text)
setof record
get token types defined by parser
ts_token_type('default')
(1,asciiword,"Word, all ASCII") ...
ts_token_type(parser_oid oid, OUT tokid integer, OUT alias text, OUT description text)
setof record
get token types defined by parser
ts_token_type(3722)
(1,asciiword,"Word, all ASCII") ...
ts_stat(sqlquery text, [ weights text, ] OUT word text, OUT ndoc integer, OUT nentry integer)
setof record
get statistics of a tsvector column
ts_stat('SELECT vector from apod')
(foo,10,15) ...
Operator
Description
Example
+
Translation
box '((0,0),(1,1))' + point '(2.0,0)'
-
Translation
box '((0,0),(1,1))' - point '(2.0,0)'
*
Scaling/rotation
box '((0,0),(1,1))' * point '(2.0,0)'
/
Scaling/rotation
box '((0,0),(2,2))' / point '(2.0,0)'
#
Point or box of intersection
box '((1,-1),(-1,1))' # box '((1,1),(-2,-2))'
#
Number of points in path or polygon
# path '((1,0),(0,1),(-1,0))'
@-@
Length or circumference
@-@ path '((0,0),(1,0))'
@@
Center
@@ circle '((0,0),10)'
##
Closest point to first operand on second operand
point '(0,0)' ## lseg '((2,0),(0,2))'
<->
Distance between
circle '((0,0),1)' <-> circle '((5,0),1)'
&&
Overlaps? (One point in common makes this true.)
box '((0,0),(1,1))' && box '((0,0),(2,2))'
<<
Is strictly left of?
circle '((0,0),1)' << circle '((5,0),1)'
>>
Is strictly right of?
circle '((5,0),1)' >> circle '((0,0),1)'
&<
Does not extend to the right of?
box '((0,0),(1,1))' &< box '((0,0),(2,2))'
&>
Does not extend to the left of?
box '((0,0),(3,3))' &> box '((0,0),(2,2))'
<<|
Is strictly below?
box '((0,0),(3,3))' <<| box '((3,4),(5,5))'
|>>
Is strictly above?
box '((3,4),(5,5))' |>> box '((0,0),(3,3))'
&<|
Does not extend above?
box '((0,0),(1,1))' &<| box '((0,0),(2,2))'
|&>
Does not extend below?
box '((0,0),(3,3))' |&> box '((0,0),(2,2))'
<^
Is below (allows touching)?
circle '((0,0),1)' <^ circle '((0,5),1)'
>^
Is above (allows touching)?
circle '((0,5),1)' >^ circle '((0,0),1)'
?#
Intersects?
lseg '((-1,0),(1,0))' ?# box '((-2,-2),(2,2))'
?-
Is horizontal?
?- lseg '((-1,0),(1,0))'
?-
Are horizontally aligned?
point '(1,0)' ?- point '(0,0)'
?|
Is vertical?
?| lseg '((-1,0),(1,0))'
?|
Are vertically aligned?
point '(0,1)' ?| point '(0,0)'
?-|
Is perpendicular?
lseg '((0,0),(0,1))' ?-| lseg '((0,0),(1,0))'
?||
Are parallel?
lseg '((-1,0),(1,0))' ?|| lseg '((-1,2),(1,2))'
@>
Contains?
circle '((0,0),2)' @> point '(1,1)'
<@
Contained in or on?
point '(1,1)' <@ circle '((0,0),2)'
~=
Same as?
polygon '((0,0),(1,1))' ~= polygon '((1,1),(0,0))'
Function
Return Type
Description
Example
area(object
)
double precision
area
area(box '((0,0),(1,1))')
center(object
)
point
center
center(box '((0,0),(1,2))')
diameter(circle
)
double precision
diameter of circle
diameter(circle '((0,0),2.0)')
height(box
)
double precision
vertical size of box
height(box '((0,0),(1,1))')
isclosed(path
)
boolean
a closed path?
isclosed(path '((0,0),(1,1),(2,0))')
isopen(path
)
boolean
an open path?
isopen(path '[(0,0),(1,1),(2,0)]')
length(object
)
double precision
length
length(path '((-1,0),(1,0))')
npoints(path
)
int
number of points
npoints(path '[(0,0),(1,1),(2,0)]')
npoints(polygon
)
int
number of points
npoints(polygon '((1,1),(0,0))')
pclose(path
)
path
convert path to closed
pclose(path '[(0,0),(1,1),(2,0)]')
popen(path
)
path
convert path to open
popen(path '((0,0),(1,1),(2,0))')
radius(circle
)
double precision
radius of circle
radius(circle '((0,0),2.0)')
width(box
)
double precision
horizontal size of box
width(box '((0,0),(1,1))')
Function
Return Type
Description
Example
box(circle
)
box
circle to box
box(circle '((0,0),2.0)')
box(point
)
box
point to empty box
box(point '(0,0)')
box(point
,point
)
box
points to box
box(point '(0,0)', point '(1,1)')
box(polygon
)
box
polygon to box
box(polygon '((0,0),(1,1),(2,0))')
bound_box(box
,box
)
box
boxes to bounding box
bound_box(box '((0,0),(1,1))', box '((3,3),(4,4))')
circle(box
)
circle
box to circle
circle(box '((0,0),(1,1))')
circle(point
,double precision
)
circle
center and radius to circle
circle(point '(0,0)', 2.0)
circle(polygon
)
circle
polygon to circle
circle(polygon '((0,0),(1,1),(2,0))')
line(point
,point
)
line
points to line
line(point '(-1,0)', point '(1,0)')
lseg(box
)
lseg
box diagonal to line segment
lseg(box '((-1,0),(1,0))')
lseg(point
,point
)
lseg
points to line segment
lseg(point '(-1,0)', point '(1,0)')
path(polygon
)
path
polygon to path
path(polygon '((0,0),(1,1),(2,0))')
point
(double precision
,double precision
)
point
construct point
point(23.4, -44.5)
point(box
)
point
center of box
point(box '((-1,0),(1,0))')
point(circle
)
point
center of circle
point(circle '((0,0),2.0)')
point(lseg
)
point
center of line segment
point(lseg '((-1,0),(1,0))')
point(polygon
)
point
center of polygon
point(polygon '((0,0),(1,1),(2,0))')
polygon(box
)
polygon
box to 4-point polygon
polygon(box '((0,0),(1,1))')
polygon(circle
)
polygon
circle to 12-point polygon
polygon(circle '((0,0),2.0)')
polygon(npts
,circle
)
polygon
circle to npts-point polygon
polygon(12, circle '((0,0),2.0)')
polygon(path
)
polygon
path to polygon
polygon(path '((0,0),(1,1),(2,0))')
Operator
Description
Example
<
is less than
inet '192.168.1.5' < inet '192.168.1.6'
<=
is less than or equal
inet '192.168.1.5' <= inet '192.168.1.5'
=
equals
inet '192.168.1.5' = inet '192.168.1.5'
>=
is greater or equal
inet '192.168.1.5' >= inet '192.168.1.5'
>
is greater than
inet '192.168.1.5' > inet '192.168.1.4'
<>
is not equal
inet '192.168.1.5' <> inet '192.168.1.4'
<<
is contained by
inet '192.168.1.5' << inet '192.168.1/24'
<<=
is contained by or equals
inet '192.168.1/24' <<= inet '192.168.1/24'
>>
contains
inet '192.168.1/24' >> inet '192.168.1.5'
>>=
contains or equals
inet '192.168.1/24' >>= inet '192.168.1/24'
&&
contains or is contained by
inet '192.168.1/24' && inet '192.168.1.80/28'
~
bitwise NOT
~ inet '192.168.1.6'
&
bitwise AND
inet '192.168.1.6' & inet '0.0.0.255'
|
bitwise OR
inet '192.168.1.6' | inet '0.0.0.255'
+
addition
inet '192.168.1.6' + 25
-
subtraction
inet '192.168.1.43' - 36
-
subtraction
inet '192.168.1.43' - inet '192.168.1.19'
Function
Return Type
Description
Example
Result
abbrev(inet
)
text
abbreviated display format as text
abbrev(inet '10.1.0.0/16')
10.1.0.0/16
abbrev(cidr
)
text
abbreviated display format as text
abbrev(cidr '10.1.0.0/16')
10.1/16
broadcast(inet
)
inet
broadcast address for network
broadcast('192.168.1.5/24')
192.168.1.255/24
family(inet
)
int
extract family of address; 4 for IPv4, 6 for IPv6
family('::1')
6
host(inet
)
text
extract IP address as text
host('192.168.1.5/24')
192.168.1.5
hostmask(inet
)
inet
construct host mask for network
hostmask('192.168.23.20/30')
0.0.0.3
masklen(inet
)
int
extract netmask length
masklen('192.168.1.5/24')
24
netmask(inet
)
inet
construct netmask for network
netmask('192.168.1.5/24')
255.255.255.0
network(inet
)
cidr
extract network part of address
network('192.168.1.5/24')
192.168.1.0/24
set_masklen(inet
,int
)
inet
set netmask length for inet value
set_masklen('192.168.1.5/24', 16)
192.168.1.5/16
set_masklen(cidr
,int
)
cidr
set netmask length for cidr value
set_masklen('192.168.1.0/24'::cidr, 16)
192.168.0.0/16
text(inet
)
text
extract IP address and netmask length as text
text(inet '192.168.1.5')
192.168.1.5/32
inet_same_family(inet
,inet
)
boolean
are the addresses from the same family?
inet_same_family('192.168.1.5/24', '::1')
false
inet_merge(inet
,inet
)
cidr
the smallest network which includes both of the given networks
inet_merge('192.168.1.5/24', '192.168.2.5/24')
192.168.0.0/22
Function
Return Type
Description
Example
Result
trunc(macaddr
)
macaddr
set last 3 bytes to zero
trunc(macaddr '12:34:56:78:90:ab')
12:34:56:00:00:00
Function
Return Type
Description
Example
Result
trunc(macaddr8
)
macaddr8
set last 5 bytes to zero
trunc(macaddr8 '12:34:56:78:90:ab:cd:ef')
12:34:56:00:00:00:00:00
macaddr8_set7bit(macaddr8
)
macaddr8
set 7th bit to one, also known as modified EUI-64, for inclusion in an IPv6 address
macaddr8_set7bit(macaddr8 '00:34:56:ab:cd:ef')
02:34:56:ff:fe:ab:cd:ef
Operator
Description
Example
~
Matches regular expression, case sensitive
'thomas' ~ '.*thomas.*'
~*
Matches regular expression, case insensitive
'thomas' ~* '.*Thomas.*'
!~
Does not match regular expression, case sensitive
'thomas' !~ '.*Thomas.*'
!~*
Does not match regular expression, case insensitive
'thomas' !~* '.*vadim.*'
Atom
Description
(
re
)
(where re
is any regular expression) matches a match for re
, with the match noted for possible reporting
(?:
re
)
as above, but the match is not noted for reporting (a “non-capturing” set of parentheses) (AREs only)
.
matches any single character
[
chars
]
a bracket expression, matching any one of the chars
(see Section 9.7.3.2 for more detail)
\
k
(where k
is a non-alphanumeric character) matches that character taken as an ordinary character, e.g., \\
matches a backslash character
\
c
where c
is alphanumeric (possibly followed by other characters) is an escape, see Section 9.7.3.3 (AREs only; in EREs and BREs, this matches c
)
{
when followed by a character other than a digit, matches the left-brace character {
; when followed by a digit, it is the beginning of a bound
(see below)
x
where x
is a single character with no other significance, matches that character
Quantifier
Matches
*
a sequence of 0 or more matches of the atom
+
a sequence of 1 or more matches of the atom
?
a sequence of 0 or 1 matches of the atom
{
m
}
a sequence of exactly m
matches of the atom
{
m
,}
a sequence of m
or more matches of the atom
{
m
,
n
}
a sequence of m
through n
(inclusive) matches of the atom; m
cannot exceed n
*?
non-greedy version of *
+?
non-greedy version of +
??
non-greedy version of ?
{
m
}?
non-greedy version of {
m
}
{
m
,}?
non-greedy version of {
m
,}
{
m
,
n
}?
non-greedy version of {
m
,
n
}
Constraint
Description
^
matches at the beginning of the string
$
matches at the end of the string
(?=
re
)
positive lookahead matches at any point where a substring matching re
begins (AREs only)
(?!
re
)
negative lookahead matches at any point where no substring matching re
begins (AREs only)
(?<=
re
)
positive lookbehind matches at any point where a substring matching re
ends (AREs only)
(?<!
re
)
negative lookbehind matches at any point where no substring matching re
ends (AREs only)
Escape
Description
\a
alert (bell) character, as in C
\b
backspace, as in C
\B
synonym for backslash (\
) to help reduce the need for backslash doubling
\c
X
(where X
is any character) the character whose low-order 5 bits are the same as those of X
, and whose other bits are all zero
\e
the character whose collating-sequence name is ESC
, or failing that, the character with octal value 033
\f
form feed, as in C
\n
newline, as in C
\r
carriage return, as in C
\t
horizontal tab, as in C
\u
wxyz
(where wxyz
is exactly four hexadecimal digits) the character whose hexadecimal value is 0x
wxyz
\U
stuvwxyz
(where stuvwxyz
is exactly eight hexadecimal digits) the character whose hexadecimal value is 0x
stuvwxyz
\v
vertical tab, as in C
\x
hhh
(where hhh
is any sequence of hexadecimal digits) the character whose hexadecimal value is 0x
hhh
(a single character no matter how many hexadecimal digits are used)
\0
the character whose value is 0
(the null byte)
\
xy
(where xy
is exactly two octal digits, and is not a back reference) the character whose octal value is 0
xy
\
xyz
(where xyz
is exactly three octal digits, and is not a back reference) the character whose octal value is 0
xyz
Escape
Description
\d
[[:digit:]]
\s
[[:space:]]
\w
[[:alnum:]_]
(note underscore is included)
\D
[^[:digit:]]
\S
[^[:space:]]
\W
[^[:alnum:]_]
(note underscore is included)
Escape
Description
\A
matches only at the beginning of the string (see Section 9.7.3.5 for how this differs from ^
)
\m
matches only at the beginning of a word
\M
matches only at the end of a word
\y
matches only at the beginning or end of a word
\Y
matches only at a point that is not the beginning or end of a word
\Z
matches only at the end of the string (see Section 9.7.3.5 for how this differs from $
)
Escape
Description
\
m
(where m
is a nonzero digit) a back reference to the m
'th subexpression
\
mnn
(where m
is a nonzero digit, and nn
is some more digits, and the decimal value mnn
is not greater than the number of closing capturing parentheses seen so far) a back reference to the mnn
'th subexpression
Option
Description
b
rest of RE is a BRE
c
case-sensitive matching (overrides operator type)
e
rest of RE is an ERE
i
case-insensitive matching (see Section 9.7.3.5) (overrides operator type)
m
historical synonym for n
n
newline-sensitive matching (see Section 9.7.3.5)
p
partial newline-sensitive matching (see Section 9.7.3.5)
q
rest of RE is a literal (“quoted”) string, all ordinary characters
s
non-newline-sensitive matching (default)
t
tight syntax (default; see below)
w
inverse partial newline-sensitive (“weird”) matching (see Section 9.7.3.5)
x
expanded syntax (see below)
Function
Return Type
Description
Example
Result
string || string
bytea
String concatenation
E'\\Post'::bytea || E'\047gres\000'::bytea
\\Post'gres\000
octet_length(string
)
int
Number of bytes in binary string
octet_length(E'jo\\000se'::bytea)
5
overlay(string placing string from int [for int])
bytea
Replace substring
overlay(E'Th\\000omas'::bytea placing E'\\002\\003'::bytea from 2 for 3)
T\\002\\003mas
position(substring in string)
int
Location of specified substring
position(E'\\000om'::bytea in E'Th\\000omas'::bytea)
3
substring(string [from int] [for int])
bytea
Extract substring
substring(E'Th\\000omas'::bytea from 2 for 3)
h\000o
trim([both] bytes from string)
bytea
Remove the longest string containing only bytes appearing in bytes from the start and end of string
trim(E'\\000\\001'::bytea from E'\\000Tom\\001'::bytea)
Tom
Function
Return Type
Description
Example
Result
btrim(string bytea, bytes bytea)
bytea
Remove the longest string containing only bytes appearing in bytes from the start and end of string
btrim(E'\\000trim\\001'::bytea, E'\\000\\001'::bytea)
trim
decode(string text, format text)
bytea
Decode binary data from textual representation in string. Options for format are the same as in encode.
decode(E'123\\000456', 'escape')
123\000456
encode(data bytea, format text)
text
Encode binary data into a textual representation. Supported formats are: base64, hex, escape. escape converts zero bytes and high-bit-set bytes to octal sequences (\nnn) and doubles backslashes.
encode(E'123\\000456'::bytea, 'escape')
123\000456
get_bit(string
,offset
)
int
Extract bit from string
get_bit(E'Th\\000omas'::bytea, 45)
1
get_byte(string
,offset
)
int
Extract byte from string
get_byte(E'Th\\000omas'::bytea, 4)
109
length(string
)
int
Length of binary string
length(E'jo\\000se'::bytea)
5
md5(string
)
text
Calculates the MD5 hash of string, returning the result in hexadecimal
md5(E'Th\\000omas'::bytea)
8ab2d3c9689aaf18 b4958c334c82d8b1
set_bit(string
,offset
,newvalue
)
bytea
Set bit in string
set_bit(E'Th\\000omas'::bytea, 45, 0)
Th\000omAs
set_byte(string
,offset
,newvalue
)
bytea
Set byte in string
set_byte(E'Th\\000omas'::bytea, 4, 64)
Th\000o@as
Function | Argument Type | Return Type | Partial Mode | Description |
corr(Y, X) | double precision | double precision | Yes | correlation coefficient
covar_pop(Y, X) | double precision | double precision | Yes | population covariance
covar_samp(Y, X) | double precision | double precision | Yes | sample covariance
regr_avgx(Y, X) | double precision | double precision | Yes | average of the independent variable (sum(X)/N)
regr_avgy(Y, X) | double precision | double precision | Yes | average of the dependent variable (sum(Y)/N)
regr_count(Y, X) | double precision | bigint | Yes | number of input rows in which both expressions are nonnull
regr_intercept(Y, X) | double precision | double precision | Yes | y-intercept of the least-squares-fit linear equation determined by the (X, Y) pairs
regr_r2(Y, X) | double precision | double precision | Yes | square of the correlation coefficient
regr_slope(Y, X) | double precision | double precision | Yes | slope of the least-squares-fit linear equation determined by the (X, Y) pairs
regr_sxx(Y, X) | double precision | double precision | Yes | sum(X^2) - sum(X)^2/N ("sum of squares" of the independent variable)
regr_sxy(Y, X) | double precision | double precision | Yes | sum(X*Y) - sum(X) * sum(Y)/N ("sum of products" of independent times dependent variable)
regr_syy(Y, X) | double precision | double precision | Yes | sum(Y^2) - sum(Y)^2/N ("sum of squares" of the dependent variable)
stddev(expression) | smallint, int, bigint, real, double precision, or numeric | double precision for floating-point arguments, otherwise numeric | Yes | historical alias for stddev_samp
stddev_pop(expression) | smallint, int, bigint, real, double precision, or numeric | double precision for floating-point arguments, otherwise numeric | Yes | population standard deviation of the input values
stddev_samp(expression) | smallint, int, bigint, real, double precision, or numeric | double precision for floating-point arguments, otherwise numeric | Yes | sample standard deviation of the input values
variance(expression) | smallint, int, bigint, real, double precision, or numeric | double precision for floating-point arguments, otherwise numeric | Yes | historical alias for var_samp
var_pop(expression) | smallint, int, bigint, real, double precision, or numeric | double precision for floating-point arguments, otherwise numeric | Yes | population variance of the input values (square of the population standard deviation)
var_samp(expression) | smallint, int, bigint, real, double precision, or numeric | double precision for floating-point arguments, otherwise numeric | Yes | sample variance of the input values (square of the sample standard deviation)
Function | Direct Argument Type(s) | Aggregated Argument Type(s) | Return Type | Partial Mode | Description |
mode() WITHIN GROUP (ORDER BY sort_expression) | | any sortable type | same as sort expression | No | returns the most frequent input value (arbitrarily choosing the first one if there are multiple equally-frequent results)
percentile_cont(fraction) WITHIN GROUP (ORDER BY sort_expression) | double precision | double precision or interval | same as sort expression | No | continuous percentile: returns a value corresponding to the specified fraction in the ordering, interpolating between adjacent input items if needed
percentile_cont(fractions) WITHIN GROUP (ORDER BY sort_expression) | double precision[] | double precision or interval | array of sort expression's type | No | multiple continuous percentile: returns an array of results matching the shape of the fractions parameter, with each non-null element replaced by the value corresponding to that percentile
percentile_disc(fraction) WITHIN GROUP (ORDER BY sort_expression) | double precision | any sortable type | same as sort expression | No | discrete percentile: returns the first input value whose position in the ordering equals or exceeds the specified fraction
percentile_disc(fractions) WITHIN GROUP (ORDER BY sort_expression) | double precision[] | any sortable type | array of sort expression's type | No | multiple discrete percentile: returns an array of results matching the shape of the fractions parameter, with each non-null element replaced by the input value positioned at that percentile
Function | Direct Argument Type(s) | Aggregated Argument Type(s) | Return Type | Partial Mode | Description |
rank(args) WITHIN GROUP (ORDER BY sorted_args) | VARIADIC "any" | VARIADIC "any" | bigint | No | rank of the hypothetical row, with gaps for duplicate rows
dense_rank(args) WITHIN GROUP (ORDER BY sorted_args) | VARIADIC "any" | VARIADIC "any" | bigint | No | rank of the hypothetical row, without gaps
percent_rank(args) WITHIN GROUP (ORDER BY sorted_args) | VARIADIC "any" | VARIADIC "any" | double precision | No | relative rank of the hypothetical row, ranging from 0 to 1
cume_dist(args) WITHIN GROUP (ORDER BY sorted_args) | VARIADIC "any" | VARIADIC "any" | double precision | No | relative rank of the hypothetical row, ranging from 1/N to 1
Function | Return Type | Description |
GROUPING(args...) | integer | Integer bit mask indicating which arguments are not being included in the current grouping set
Pattern | Description |
HH | hour of day (01-12)
HH12 | hour of day (01-12)
HH24 | hour of day (00-23)
MI | minute (00-59)
SS | second (00-59)
MS | millisecond (000-999)
US | microsecond (000000-999999)
SSSS | seconds past midnight (0-86399)
AM, am, PM or pm | meridiem indicator (without periods)
A.M., a.m., P.M. or p.m. | meridiem indicator (with periods)
Y,YYY | year (4 or more digits) with comma
YYYY | year (4 or more digits)
YYY | last 3 digits of year
YY | last 2 digits of year
Y | last digit of year
IYYY | ISO 8601 week-numbering year (4 or more digits)
IYY | last 3 digits of ISO 8601 week-numbering year
IY | last 2 digits of ISO 8601 week-numbering year
I | last digit of ISO 8601 week-numbering year
BC, bc, AD or ad | era indicator (without periods)
B.C., b.c., A.D. or a.d. | era indicator (with periods)
MONTH | full upper case month name (blank-padded to 9 chars)
Month | full capitalized month name (blank-padded to 9 chars)
month | full lower case month name (blank-padded to 9 chars)
MON | abbreviated upper case month name (3 chars in English, localized lengths vary)
Mon | abbreviated capitalized month name (3 chars in English, localized lengths vary)
mon | abbreviated lower case month name (3 chars in English, localized lengths vary)
MM | month number (01-12)
DAY | full upper case day name (blank-padded to 9 chars)
Day | full capitalized day name (blank-padded to 9 chars)
day | full lower case day name (blank-padded to 9 chars)
DY | abbreviated upper case day name (3 chars in English, localized lengths vary)
Dy | abbreviated capitalized day name (3 chars in English, localized lengths vary)
dy | abbreviated lower case day name (3 chars in English, localized lengths vary)
DDD | day of year (001-366)
IDDD | day of ISO 8601 week-numbering year (001-371; day 1 of the year is Monday of the first ISO week)
DD | day of month (01-31)
D | day of the week, Sunday (1) to Saturday (7)
ID | ISO 8601 day of the week, Monday (1) to Sunday (7)
W | week of month (1-5) (the first week starts on the first day of the month)
WW | week number of year (1-53) (the first week starts on the first day of the year)
IW | week number of ISO 8601 week-numbering year (01-53; the first Thursday of the year is in week 1)
CC | century (2 digits) (the twenty-first century starts on 2001-01-01)
J | Julian Day (integer days since November 24, 4714 BC at midnight UTC)
Q | quarter
RM | month in upper case Roman numerals (I-XII; I=January)
rm | month in lower case Roman numerals (i-xii; i=January)
TZ | upper case time-zone abbreviation (only supported in to_char)
tz | lower case time-zone abbreviation (only supported in to_char)
OF | time-zone offset from UTC (only supported in to_char)
Pattern | Description |
9 | value with the specified number of digits
0 | value with leading zeros
. (period) | decimal point
, (comma) | group (thousand) separator
PR | negative value in angle brackets
S | sign anchored to number (uses locale)
L | currency symbol (uses locale)
D | decimal point (uses locale)
G | group separator (uses locale)
MI | minus sign in specified position (if number < 0)
PL | plus sign in specified position (if number > 0)
SG | plus/minus sign in specified position
RN | Roman numeral (input between 1 and 3999)
TH or th | ordinal number suffix
V | shift specified number of digits (see notes)
EEEE | exponent for scientific notation
Modifier | Description | Example |
FM prefix | fill mode (suppress leading zeroes and padding blanks) | FM9999
TH suffix | upper case ordinal number suffix | 999TH
th suffix | lower case ordinal number suffix | 999th
Valid field values for date_trunc are: microseconds, milliseconds, second, minute, hour, day, week, month, quarter, year, decade, century, millennium.
Expression | Return Type | Description |
timestamp without time zone AT TIME ZONE zone | timestamp with time zone | Treat given time stamp without time zone as located in the specified time zone |
timestamp with time zone AT TIME ZONE zone | timestamp without time zone | Convert given time stamp with time zone to the new time zone, with no time zone designation |
time with time zone AT TIME ZONE zone | time with time zone | Convert given time with time zone to the new time zone |
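The first two conversions can be sketched as follows (the literals are illustrative; the results depend on the session's TimeZone setting, so none are shown):

```sql
-- Interpret a zone-less timestamp as Denver local time, producing timestamp with time zone:
SELECT TIMESTAMP '2001-02-16 20:38:40' AT TIME ZONE 'America/Denver';

-- Convert a timestamp with time zone to Denver wall-clock time, producing a plain timestamp:
SELECT TIMESTAMP WITH TIME ZONE '2001-02-16 20:38:40-05' AT TIME ZONE 'America/Denver';
```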
Warning: Make sure that your session does not hold more locks than necessary when calling pg_sleep or its variants.
Operator | Description | Example | Result |
= | equal | ARRAY[1.1,2.1,3.1]::int[] = ARRAY[1,2,3] | t
<> | not equal | ARRAY[1,2,3] <> ARRAY[1,2,4] | t
< | less than | ARRAY[1,2,3] < ARRAY[1,2,4] | t
> | greater than | ARRAY[1,4,3] > ARRAY[1,2,4] | t
<= | less than or equal | ARRAY[1,2,3] <= ARRAY[1,2,3] | t
>= | greater than or equal | ARRAY[1,4,3] >= ARRAY[1,4,3] | t
@> | contains | ARRAY[1,4,3] @> ARRAY[3,1] | t
<@ | is contained by | ARRAY[2,7] <@ ARRAY[1,7,4,2,6] | t
&& | overlap (have elements in common) | ARRAY[1,4,3] && ARRAY[2,1] | t
\|\| | array-to-array concatenation | ARRAY[1,2,3] \|\| ARRAY[4,5,6] | {1,2,3,4,5,6}
\|\| | array-to-array concatenation | ARRAY[1,2,3] \|\| ARRAY[[4,5,6],[7,8,9]] | {{1,2,3},{4,5,6},{7,8,9}}
\|\| | element-to-array concatenation | 3 \|\| ARRAY[4,5,6] | {3,4,5,6}
\|\| | array-to-element concatenation | ARRAY[4,5,6] \|\| 7 | {4,5,6,7}
Function | Return Type | Description |
currval(regclass) | bigint | Return value most recently obtained with nextval for specified sequence
lastval() | bigint | Return value most recently obtained with nextval for any sequence
nextval(regclass) | bigint | Advance sequence and return new value
setval(regclass, bigint) | bigint | Set sequence's current value
setval(regclass, bigint, boolean) | bigint | Set sequence's current value and is_called flag
Function | Description | Example | Example Result |
enum_first(anyenum) | Returns the first value of the input enum type | enum_first(null::rainbow) | red
enum_last(anyenum) | Returns the last value of the input enum type | enum_last(null::rainbow) | purple
enum_range(anyenum) | Returns all values of the input enum type in an ordered array | enum_range(null::rainbow) | {red,orange,yellow,green,blue,purple}
enum_range(anyenum, anyenum) | Returns the range between the two given enum values, as an ordered array. The values must be from the same enum type. If the first parameter is null, the result will start with the first value of the enum type. If the second parameter is null, the result will end with the last value of the enum type. | enum_range('orange'::rainbow, 'green'::rainbow) | {orange,yellow,green}
| | enum_range(NULL, 'green'::rainbow) | {red,orange,yellow,green}
| | enum_range('orange'::rainbow, NULL) | {orange,yellow,green,blue,purple}
Function | Return Type | Description | Example |
to_char(timestamp, text) | text | convert time stamp to string | to_char(current_timestamp, 'HH12:MI:SS')
to_char(interval, text) | text | convert interval to string | to_char(interval '15h 2m 12s', 'HH24:MI:SS')
to_char(int, text) | text | convert integer to string | to_char(125, '999')
to_char(double precision, text) | text | convert real/double precision to string | to_char(125.8::real, '999D9')
to_char(numeric, text) | text | convert numeric to string | to_char(-125.8, '999D99S')
to_date(text, text) | date | convert string to date | to_date('05 Dec 2000', 'DD Mon YYYY')
to_number(text, text) | numeric | convert string to numeric | to_number('12,454.8-', '99G999D9S')
to_timestamp(text, text) | timestamp with time zone | convert string to time stamp | to_timestamp('05 Dec 2000', 'DD Mon YYYY')
Operator | Example | Result |
+ | date '2001-09-28' + integer '7' | date '2001-10-05' |
+ | date '2001-09-28' + interval '1 hour' | timestamp '2001-09-28 01:00:00' |
+ | date '2001-09-28' + time '03:00' | timestamp '2001-09-28 03:00:00' |
+ | interval '1 day' + interval '1 hour' | interval '1 day 01:00:00' |
+ | timestamp '2001-09-28 01:00' + interval '23 hours' | timestamp '2001-09-29 00:00:00' |
+ | time '01:00' + interval '3 hours' | time '04:00:00' |
- | - interval '23 hours' | interval '-23:00:00' |
- | date '2001-10-01' - date '2001-09-28' | integer '3' (days) |
- | date '2001-10-01' - integer '7' | date '2001-09-24' |
- | date '2001-09-28' - interval '1 hour' | timestamp '2001-09-27 23:00:00' |
- | time '05:00' - time '03:00' | interval '02:00:00' |
- | time '05:00' - interval '2 hours' | time '03:00:00' |
- | timestamp '2001-09-28 23:00' - interval '23 hours' | timestamp '2001-09-28 00:00:00' |
- | interval '1 day' - interval '1 hour' | interval '1 day -01:00:00' |
- | timestamp '2001-09-29 03:00' - timestamp '2001-09-27 12:00' | interval '1 day 15:00:00' |
* | 900 * interval '1 second' | interval '00:15:00' |
* | 21 * interval '1 day' | interval '21 days' |
* | double precision '3.5' * interval '1 hour' | interval '03:30:00' |
/ | interval '1 hour' / double precision '1.5' | interval '00:40:00' |
This section describes the SQL-compliant conditional expressions available in PostgreSQL.
If your needs go beyond the capabilities of these conditional expressions, you might want to consider writing a stored procedure in a more expressive programming language.
CASE
The SQL CASE expression is a generic conditional expression, similar to if/else statements in other programming languages:
CASE clauses can be used wherever an expression is valid. Each condition is an expression that returns a Boolean result. If the condition's result is true, the value of the CASE expression is the result that follows the condition, and the remainder of the CASE expression is not processed. If the condition's result is not true, any subsequent WHEN clauses are examined in the same manner. If no WHEN condition yields true, the value of the CASE expression is the result of the ELSE clause. If the ELSE clause is omitted and no condition is true, the result is null.
An example:
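A minimal illustration of the general form, assuming a table test with an integer column a (as in the PostgreSQL manual's example):

```sql
SELECT a,
       CASE WHEN a = 1 THEN 'one'
            WHEN a = 2 THEN 'two'
            ELSE 'other'
       END
  FROM test;
```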
The data types of all the result expressions must be convertible to a single output type. See Section 10.5 for more details.
There is a "simple" form of CASE expression that is a variant of the general form above:
The first expression is computed, then compared to each of the value expressions in the WHEN clauses until one is found that is equal to it. If no match is found, the result of the ELSE clause (or a null value) is returned. This is similar to the switch statement in the C language.
The example above can be written using the simple CASE syntax:
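A sketch of the same query in the simple form, again assuming the test table with integer column a:

```sql
SELECT a,
       CASE a WHEN 1 THEN 'one'
              WHEN 2 THEN 'two'
              ELSE 'other'
       END
  FROM test;
```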
A CASE expression does not evaluate any subexpressions that are not needed to determine the result. For example, this is a possible way of avoiding a division-by-zero failure:
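A sketch of the guard described above; the table and column names (tbl, x, y) are illustrative:

```sql
SELECT x, y
  FROM tbl
 WHERE CASE WHEN x <> 0 THEN y / x > 1.5 ELSE false END;
```

Because CASE short-circuits, y / x is never evaluated for rows where x is zero.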
As described in Section 4.2.14, there are various situations in which subexpressions of an expression are evaluated at different times, so the principle that "CASE evaluates only necessary subexpressions" is not ironclad. For example, a constant 1/0 subexpression will usually result in a division-by-zero failure at planning time, even if it appears within a CASE arm that would never be entered at run time.
COALESCE
The COALESCE function returns the first of its arguments that is not null. Null is returned only if all arguments are null. It is often used to substitute a default value for null values when data is retrieved for display, for example:
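A sketch of that substitution; the table name items is illustrative, while description and short_description are the columns named in the surrounding text:

```sql
SELECT COALESCE(description, short_description, '(none)') FROM items;
```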
This returns description if it is not null, otherwise short_description if it is not null, otherwise (none).
Like a CASE expression, COALESCE only evaluates the arguments that are needed to determine the result; that is, arguments to the right of the first non-null argument are not evaluated. This SQL-standard function provides capabilities similar to NVL and IFNULL, which are used in some other database systems.
NULLIF
The NULLIF function returns a null value if value1 equals value2; otherwise it returns value1. This can be used to perform the inverse operation of the COALESCE example given above:
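A sketch of the inverse operation; the table name items is illustrative:

```sql
SELECT NULLIF(value, '(none)') FROM items;
```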
In this example, if value is (none), null is returned; otherwise the value of value is returned.
GREATEST and LEAST
The GREATEST and LEAST functions select the largest or smallest value from a list of any number of expressions. The expressions must all be convertible to a common data type, which will be the type of the result (see Section 10.5 for details). NULL values in the list are ignored. The result will be NULL only if all the expressions evaluate to NULL.
Note that GREATEST and LEAST are not in the SQL standard, but are a common extension. Some other databases make them return NULL if any argument is NULL, rather than only when all are NULL.
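The ignore-NULLs rule can be seen directly:

```sql
SELECT GREATEST(3, 5, 1);       -- 5
SELECT LEAST(3, 5, NULL, 1);    -- 1; the NULL is ignored, not propagated
```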
This section describes the SQL-compliant subquery expressions available in PostgreSQL. All of the expression forms documented in this section return Boolean (true/false) results.
EXISTS
The argument of EXISTS is an arbitrary SELECT statement, or subquery. The subquery is evaluated to determine whether it returns any rows. If it returns at least one row, the result of EXISTS is “true”; if the subquery returns no rows, the result of EXISTS is “false”.
The subquery can refer to variables from the surrounding query, which will act as constants during any one evaluation of the subquery.
The subquery will generally only be executed long enough to determine whether at least one row is returned, not all the way to completion. It is unwise to write a subquery that has side effects (such as calling sequence functions); whether the side effects occur might be unpredictable.
Since the result depends only on whether any rows are returned, and not on the contents of those rows, the output list of the subquery is normally unimportant. A common coding convention is to write all EXISTS tests in the form EXISTS(SELECT 1 WHERE ...). There are exceptions to this rule however, such as subqueries that use INTERSECT.
This simple example is like an inner join on col2, but it produces at most one output row for each tab1 row, even if there are several matching tab2 rows:
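The query for this example, as it appears in the PostgreSQL manual:

```sql
SELECT col1
  FROM tab1
 WHERE EXISTS (SELECT 1 FROM tab2 WHERE col2 = tab1.col2);
```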
IN
The right-hand side is a parenthesized subquery, which must return exactly one column. The left-hand expression is evaluated and compared to each row of the subquery result. The result of IN is “true” if any equal subquery row is found. The result is “false” if no equal row is found (including the case where the subquery returns no rows).
Note that if the left-hand expression yields null, or if there are no equal right-hand values and at least one right-hand row yields null, the result of the IN construct will be null, not false. This is in accordance with SQL's normal rules for Boolean combinations of null values.
As with EXISTS, it's unwise to assume that the subquery will be evaluated completely.
The left-hand side of this form of IN is a row constructor, as described in Section 4.2.13. The right-hand side is a parenthesized subquery, which must return exactly as many columns as there are expressions in the left-hand row. The left-hand expressions are evaluated and compared row-wise to each row of the subquery result. The result of IN is “true” if any equal subquery row is found. The result is “false” if no equal row is found (including the case where the subquery returns no rows).
As usual, null values in the rows are combined per the normal rules of SQL Boolean expressions. Two rows are considered equal if all their corresponding members are non-null and equal; the rows are unequal if any corresponding members are non-null and unequal; otherwise the result of that row comparison is unknown (null). If all the per-row results are either unequal or null, with at least one null, then the result of IN is null.
NOT IN
The right-hand side is a parenthesized subquery, which must return exactly one column. The left-hand expression is evaluated and compared to each row of the subquery result. The result of NOT IN is “true” if only unequal subquery rows are found (including the case where the subquery returns no rows). The result is “false” if any equal row is found.
Note that if the left-hand expression yields null, or if there are no equal right-hand values and at least one right-hand row yields null, the result of the NOT IN construct will be null, not true. This is in accordance with SQL's normal rules for Boolean combinations of null values.
As with EXISTS, it's unwise to assume that the subquery will be evaluated completely.
The left-hand side of this form of NOT IN is a row constructor, as described in Section 4.2.13. The right-hand side is a parenthesized subquery, which must return exactly as many columns as there are expressions in the left-hand row. The left-hand expressions are evaluated and compared row-wise to each row of the subquery result. The result of NOT IN is “true” if only unequal subquery rows are found (including the case where the subquery returns no rows). The result is “false” if any equal row is found.
As usual, null values in the rows are combined per the normal rules of SQL Boolean expressions. Two rows are considered equal if all their corresponding members are non-null and equal; the rows are unequal if any corresponding members are non-null and unequal; otherwise the result of that row comparison is unknown (null). If all the per-row results are either unequal or null, with at least one null, then the result of NOT IN is null.
ANY/SOME
The right-hand side is a parenthesized subquery, which must return exactly one column. The left-hand expression is evaluated and compared to each row of the subquery result using the given operator, which must yield a Boolean result. The result of ANY is “true” if any true result is obtained. The result is “false” if no true result is found (including the case where the subquery returns no rows).
SOME is a synonym for ANY. IN is equivalent to = ANY.
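A sketch of a typical use; the tables employees(name, salary) and managers(salary) are hypothetical:

```sql
-- Employees who earn more than at least one manager:
SELECT name
  FROM employees
 WHERE salary > ANY (SELECT salary FROM managers);
```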
Note that if there are no successes and at least one right-hand row yields null for the operator's result, the result of the ANY construct will be null, not false. This is in accordance with SQL's normal rules for Boolean combinations of null values.
As with EXISTS, it's unwise to assume that the subquery will be evaluated completely.
The left-hand side of this form of ANY is a row constructor, as described in Section 4.2.13. The right-hand side is a parenthesized subquery, which must return exactly as many columns as there are expressions in the left-hand row. The left-hand expressions are evaluated and compared row-wise to each row of the subquery result, using the given operator. The result of ANY is “true” if the comparison returns true for any subquery row. The result is “false” if the comparison returns false for every subquery row (including the case where the subquery returns no rows). The result is NULL if the comparison does not return true for any row, and it returns NULL for at least one row.
See Section 9.23.5 for details about the meaning of a row constructor comparison.
ALL
The right-hand side is a parenthesized subquery, which must return exactly one column. The left-hand expression is evaluated and compared to each row of the subquery result using the given operator, which must yield a Boolean result. The result of ALL is “true” if all rows yield true (including the case where the subquery returns no rows). The result is “false” if any false result is found. The result is NULL if the comparison does not return false for any row, and it returns NULL for at least one row.
NOT IN is equivalent to <> ALL.
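The equivalence can be sketched as follows; the tables orders and blocked are hypothetical:

```sql
-- These two queries are equivalent:
SELECT * FROM orders WHERE customer_id NOT IN (SELECT id FROM blocked);
SELECT * FROM orders WHERE customer_id <> ALL (SELECT id FROM blocked);
```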
As with EXISTS, it's unwise to assume that the subquery will be evaluated completely.
The left-hand side of this form of ALL is a row constructor, as described in Section 4.2.13. The right-hand side is a parenthesized subquery, which must return exactly as many columns as there are expressions in the left-hand row. The left-hand expressions are evaluated and compared row-wise to each row of the subquery result, using the given operator. The result of ALL is “true” if the comparison returns true for all subquery rows (including the case where the subquery returns no rows). The result is “false” if the comparison returns false for any subquery row. The result is NULL if the comparison does not return false for any subquery row, and it returns NULL for at least one row.
See Section 9.23.5 for details about the meaning of a row constructor comparison.
The left-hand side is a row constructor, as described in Section 4.2.13. The right-hand side is a parenthesized subquery, which must return exactly as many columns as there are expressions in the left-hand row. Furthermore, the subquery cannot return more than one row. (If it returns zero rows, the result is taken to be null.) The left-hand side is evaluated and compared row-wise to the single subquery result row.
See Section 9.23.5 for details about the meaning of a row constructor comparison.
This section describes several specialized constructs for making multiple comparisons between groups of values. These forms are syntactically related to the subquery forms of the previous section, but do not involve subqueries. The forms involving array subexpressions are PostgreSQL extensions; the rest are SQL-compliant. All of the expression forms documented in this section return Boolean (true/false) results.
IN
The right-hand side is a parenthesized list of scalar expressions. The result is “true” if the left-hand expression's result is equal to any of the right-hand expressions. This is a shorthand notation for a chain of OR'ed equality comparisons.
Note that if the left-hand expression yields null, or if there are no equal right-hand values and at least one right-hand expression yields null, the result of the IN construct will be null, not false. This is in accordance with SQL's normal rules for Boolean combinations of null values.
NOT IN
The right-hand side is a parenthesized list of scalar expressions. The result is “true” if the left-hand expression's result is unequal to all of the right-hand expressions. This is a shorthand notation for a chain of AND'ed inequality comparisons.
Note that if the left-hand expression yields null, or if there are no equal right-hand values and at least one right-hand expression yields null, the result of the NOT IN construct will be null, not true as one might naively expect. This is in accordance with SQL's normal rules for Boolean combinations of null values.
x NOT IN y is equivalent to NOT (x IN y) in all cases. However, null values are much more likely to trip up the novice when working with NOT IN than when working with IN. It is best to express your condition positively if possible.
ANY/SOME (array)
The right-hand side is a parenthesized expression, which must yield an array value. The left-hand expression is evaluated and compared to each element of the array using the given operator, which must yield a Boolean result. The result of ANY is “true” if any true result is obtained. The result is “false” if no true result is found (including the case where the array has zero elements).
If the array expression yields a null array, the result of ANY will be null. If the left-hand expression yields null, the result of ANY is ordinarily null (though a non-strict comparison operator could possibly yield a different result). Also, if the right-hand array contains any null elements and no true comparison result is obtained, the result of ANY will be null, not false (again, assuming a strict comparison operator). This is in accordance with SQL's normal rules for Boolean combinations of null values.
SOME is a synonym for ANY.
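The array forms can be seen directly:

```sql
SELECT 7 = ANY (ARRAY[3, 7, 9]);    -- true: 7 matches an element
SELECT 7 <> ALL (ARRAY[3, 8, 9]);   -- true: 7 differs from every element
```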
ALL (array)
The right-hand side is a parenthesized expression, which must yield an array value. The left-hand expression is evaluated and compared to each element of the array using the given operator, which must yield a Boolean result. The result of ALL is “true” if all comparisons yield true (including the case where the array has zero elements). The result is “false” if any false result is found.
If the array expression yields a null array, the result of ALL will be null. If the left-hand expression yields null, the result of ALL is ordinarily null (though a non-strict comparison operator could possibly yield a different result). Also, if the right-hand array contains any null elements and no false comparison result is obtained, the result of ALL will be null, not true (again, assuming a strict comparison operator). This is in accordance with SQL's normal rules for Boolean combinations of null values.
Each side is a row constructor, as described in Section 4.2.13. The two row values must have the same number of fields. Each side is evaluated and they are compared row-wise. Row constructor comparisons are allowed when the operator is =, <>, <, <=, > or >=. Every row element must be of a type which has a default B-tree operator class or the attempted comparison may generate an error.
Note: Errors related to the number or types of elements might not occur if the comparison is resolved using earlier columns.
The = and <> cases work slightly differently from the others. Two rows are considered equal if all their corresponding members are non-null and equal; the rows are unequal if any corresponding members are non-null and unequal; otherwise the result of the row comparison is unknown (null).
For the <, <=, > and >= cases, the row elements are compared left-to-right, stopping as soon as an unequal or null pair of elements is found. If either of this pair of elements is null, the result of the row comparison is unknown (null); otherwise comparison of this pair of elements determines the result. For example, ROW(1, 2, NULL) < ROW(1, 3, 0) yields true, not null, because the third pair of elements is not considered.
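These rules can be verified directly:

```sql
SELECT ROW(1, 2, NULL) < ROW(1, 3, 0);   -- true: decided at the second pair; the NULL is never examined
SELECT ROW(1, NULL) = ROW(1, 4);         -- null: the second pair involves a null, so equality is unknown
```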
Note: Prior to PostgreSQL 8.2, the <, <=, > and >= cases were not handled per the SQL specification. A comparison like ROW(a, b) < ROW(c, d) was implemented as a < c AND b < d, whereas the correct behavior is equivalent to a < c OR (a = c AND b < d).
This syntax is similar to a <> row comparison, but it does not yield null for null inputs. Instead, any null value is considered unequal to (distinct from) any non-null value, and any two nulls are considered equal (not distinct). Thus the result will either be true or false, never null.
This syntax is similar to a = row comparison, but it does not yield null for null inputs. Instead, any null value is considered unequal to (distinct from) any non-null value, and any two nulls are considered equal (not distinct). Thus the result will always be either true or false, never null.
The SQL specification requires row-wise comparison to return NULL if the result depends on comparing two NULL values or a NULL and a non-NULL. PostgreSQL does this only when comparing the results of two row constructors (as in Section 9.23.5) or comparing a row constructor to the output of a subquery (as in Section 9.22). In other contexts where two composite-type values are compared, two NULL field values are considered equal, and a NULL is considered larger than a non-NULL. This is necessary in order to have consistent sorting and indexing behavior for composite types.
Each side is evaluated and they are compared row-wise. Composite type comparisons are allowed when the operator is =, <>, <, <=, > or >=, or has semantics similar to one of these. (To be specific, an operator can be a row comparison operator if it is a member of a B-tree operator class, or is the negator of the = member of a B-tree operator class.) The default behavior of the above operators is the same as for IS [NOT] DISTINCT FROM for row constructors (see Section 9.23.5).
To support matching of rows which include elements without a default B-tree operator class, the following operators are defined for composite type comparison: *=, *<>, *<, *<=, *> and *>=. These operators compare the internal binary representation of the two rows. Two rows might have a different binary representation even though comparison of the two rows with the equality operator is true. The ordering of rows under these comparison operators is deterministic but not otherwise meaningful. These operators are used internally for materialized views and might be useful for other specialized purposes such as replication, but are not intended to be generally useful for writing queries.
Currently PostgreSQL provides one built-in trigger function, suppress_redundant_updates_trigger, which will prevent any update that does not actually change the data in the row from taking place, in contrast to the normal behavior which always performs the update regardless of whether or not the data has changed. (This normal behavior makes updates run faster, since no checking is required, and is also useful in certain cases.)
Ideally, you should normally avoid running updates that don't actually change the data in the record. Redundant updates can cost considerable unnecessary time, especially if there are lots of indexes to alter, and space in dead rows that will eventually have to be vacuumed. However, detecting such situations in client code is not always easy, or even possible, and writing expressions to detect them can be error-prone. An alternative is to use suppress_redundant_updates_trigger, which will skip updates that don't change the data. You should use this with care, however. The trigger takes a small but non-trivial time for each record, so if most of the records affected by an update are actually changed, use of this trigger will actually make the update run slower.
The suppress_redundant_updates_trigger function can be added to a table like this:
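The PostgreSQL manual's example (substitute your own table name for tablename):

```sql
CREATE TRIGGER z_min_update
BEFORE UPDATE ON tablename
FOR EACH ROW EXECUTE PROCEDURE suppress_redundant_updates_trigger();
```

The name z_min_update is chosen so the trigger sorts after other trigger names, per the name-order firing rule described below.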
In most cases, you would want to fire this trigger last for each row. Bearing in mind that triggers fire in name order, you would then choose a trigger name that comes after the name of any other trigger you might have on the table.
For more information about creating triggers, see CREATE TRIGGER.
Function | Return Type | Description | Example | Result |
array_append(anyarray, anyelement) | anyarray | append an element to the end of an array | array_append(ARRAY[1,2], 3) | {1,2,3}
array_cat(anyarray, anyarray) | anyarray | concatenate two arrays | array_cat(ARRAY[1,2,3], ARRAY[4,5]) | {1,2,3,4,5}
array_ndims(anyarray) | int | returns the number of dimensions of the array | array_ndims(ARRAY[[1,2,3],[4,5,6]]) | 2
array_dims(anyarray) | text | returns a text representation of array's dimensions | array_dims(ARRAY[[1,2,3],[4,5,6]]) | [1:2][1:3]
array_fill(anyelement, int[] [, int[]]) | anyarray | returns an array initialized with supplied value and dimensions, optionally with lower bounds other than 1 | array_fill(7, ARRAY[3], ARRAY[2]) | [2:4]={7,7,7}
array_length(anyarray, int) | int | returns the length of the requested array dimension | array_length(array[1,2,3], 1) | 3
array_lower(anyarray, int) | int | returns lower bound of the requested array dimension | array_lower('[0:2]={1,2,3}'::int[], 1) | 0
array_position(anyarray, anyelement [, int]) | int | returns the subscript of the first occurrence of the second argument in the array, starting at the element indicated by the third argument or at the first element (array must be one-dimensional) | array_position(ARRAY['sun','mon','tue','wed','thu','fri','sat'], 'mon') | 2
array_positions(anyarray, anyelement) | int[] | returns an array of subscripts of all occurrences of the second argument in the array given as first argument (array must be one-dimensional) | array_positions(ARRAY['A','A','B','A'], 'A') | {1,2,4}
array_prepend(anyelement, anyarray) | anyarray | append an element to the beginning of an array | array_prepend(1, ARRAY[2,3]) | {1,2,3}
array_remove(anyarray, anyelement) | anyarray | remove all elements equal to the given value from the array (array must be one-dimensional) | array_remove(ARRAY[1,2,3,2], 2) | {1,3}
array_replace(anyarray, anyelement, anyelement) | anyarray | replace each array element equal to the given value with a new value | array_replace(ARRAY[1,2,5,4], 5, 3) | {1,2,3,4}
array_to_string(anyarray, text [, text]) | text | concatenates array elements using supplied delimiter and optional null string | array_to_string(ARRAY[1, 2, 3, NULL, 5], ',', '*') | 1,2,3,*,5
array_upper(anyarray, int) | int | returns upper bound of the requested array dimension | array_upper(ARRAY[1,8,3,7], 1) | 4
cardinality(anyarray) | int | returns the total number of elements in the array, or 0 if the array is empty | cardinality(ARRAY[[1,2],[3,4]]) | 4
string_to_array(text, text [, text]) | text[] | splits string into array elements using supplied delimiter and optional null string | string_to_array('xx~^~yy~^~zz', '~^~', 'yy') | {xx,NULL,zz}
unnest(anyarray) | setof anyelement | expand an array to a set of rows | unnest(ARRAY[1,2]) | 1 2 (2 rows)
unnest(anyarray, anyarray [, ...]) | setof anyelement, anyelement [, ...] | expand multiple arrays (possibly of different types) to a set of rows; only allowed in the FROM clause | unnest(ARRAY[1,2], ARRAY['foo','bar','baz']) | 1 foo, 2 bar, NULL baz (3 rows)
Function | Argument Type(s) | Return Type | Partial Mode | Description |
array_agg(expression) | any non-array type | array of the argument type | No | input values, including nulls, concatenated into an array
array_agg(expression) | any array type | same as argument data type | No | input arrays concatenated into array of one higher dimension (inputs must all have same dimensionality, and cannot be empty or NULL)
avg(expression) | smallint, int, bigint, real, double precision, numeric, or interval | numeric for any integer-type argument, double precision for a floating-point argument, otherwise the same as the argument data type | Yes | the average (arithmetic mean) of all input values
bit_and(expression) | smallint, int, bigint, or bit | same as argument data type | Yes | the bitwise AND of all non-null input values, or null if none
bit_or(expression) | smallint, int, bigint, or bit | same as argument data type | Yes | the bitwise OR of all non-null input values, or null if none
bool_and(expression) | bool | bool | Yes | true if all input values are true, otherwise false
bool_or(expression) | bool | bool | Yes | true if at least one input value is true, otherwise false
count(*) | | bigint | Yes | number of input rows
count(expression) | any | bigint | Yes | number of input rows for which the value of expression is not null
every(expression) | bool | bool | Yes | equivalent to bool_and
json_agg(expression) | any | json | No | aggregates values as a JSON array
jsonb_agg(expression) | any | jsonb | No | aggregates values as a JSON array
json_object_agg(name, value) | (any, any) | json | No | aggregates name/value pairs as a JSON object
jsonb_object_agg(name, value) | (any, any) | jsonb | No | aggregates name/value pairs as a JSON object
max(expression) | any numeric, string, date/time, network, or enum type, or arrays of these types | same as argument type | Yes | maximum value of expression across all input values
min(expression) | any numeric, string, date/time, network, or enum type, or arrays of these types | same as argument type | Yes | minimum value of expression across all input values
string_agg(expression, delimiter) | (text, text) or (bytea, bytea) | same as argument types | No | input values concatenated into a string, separated by delimiter
sum(expression) | smallint, int, bigint, real, double precision, numeric, interval, or money | bigint for smallint or int arguments, numeric for bigint arguments, otherwise the same as the argument data type | Yes | sum of expression across all input values
xmlagg(expression) | xml | xml | No | concatenation of XML values
Modifier | Description | Example |
FM prefix | fill mode (suppress leading zeroes and padding blanks) | FMMonth
TH suffix | upper case ordinal number suffix | DDTH
th suffix | lower case ordinal number suffix | DDth
FX prefix | fixed format global option (see usage notes) | FX Month DD Day
TM prefix | translation mode (print localized day and month names based on lc_time) | TMMonth
SP suffix | spell mode (not implemented) | DDSP
Function | Return Type | Description | Example | Result |
age(timestamp, timestamp) | interval | Subtract arguments, producing a "symbolic" result that uses years and months, rather than just days | age(timestamp '2001-04-10', timestamp '1957-06-13') | 43 years 9 mons 27 days
age(timestamp) | interval | Subtract from current_date (at midnight) | age(timestamp '1957-06-13') | 43 years 8 mons 3 days
clock_timestamp() | timestamp with time zone | Current date and time (changes during statement execution) | |
current_date | date | Current date | |
current_time | time with time zone | Current time of day | |
current_timestamp | timestamp with time zone | Current date and time (start of current transaction) | |
date_part(text, timestamp) | double precision | Get subfield (equivalent to extract) | date_part('hour', timestamp '2001-02-16 20:38:40') | 20
date_part(text, interval) | double precision | Get subfield (equivalent to extract) | date_part('month', interval '2 years 3 months') | 3
date_trunc(text, timestamp) | timestamp | Truncate to specified precision; see Section 9.9.2 | date_trunc('hour', timestamp '2001-02-16 20:38:40') | 2001-02-16 20:00:00
date_trunc(text, interval) | interval | Truncate to specified precision; see Section 9.9.2 | date_trunc('hour', interval '2 days 3 hours 40 minutes') | 2 days 03:00:00
extract(field from timestamp) | double precision | Get subfield; see Section 9.9.1 | extract(hour from timestamp '2001-02-16 20:38:40') | 20
extract(field from interval) | double precision | Get subfield; see Section 9.9.1 | extract(month from interval '2 years 3 months') | 3
isfinite(date) | boolean | Test for finite date (not +/-infinity) | isfinite(date '2001-02-16') | true
isfinite(timestamp) | boolean | Test for finite time stamp (not +/-infinity) | isfinite(timestamp '2001-02-16 21:28:30') | true
isfinite(interval) | boolean | Test for finite interval | isfinite(interval '4 hours') | true
justify_days(interval) | interval | Adjust interval so 30-day time periods are represented as months | justify_days(interval '35 days') | 1 mon 5 days
justify_hours(interval) | interval | Adjust interval so 24-hour time periods are represented as days | justify_hours(interval '27 hours') | 1 day 03:00:00
justify_interval(interval) | interval | Adjust interval using justify_days and justify_hours, with additional sign adjustments | justify_interval(interval '1 mon -1 hour') | 29 days 23:00:00
localtime | time | Current time of day | |
localtimestamp | timestamp | Current date and time (start of current transaction) | |
make_date(year int, month int, day int) | date | Create date from year, month and day fields | make_date(2013, 7, 15) | 2013-07-15
make_interval(years, months, weeks, days, hours, mins, secs) | interval | Create interval from years, months, weeks, days, hours, minutes and seconds fields | make_interval(days := 10) | 10 days
make_time(hour int, min int, sec double precision) | time | Create time from hour, minute and seconds fields | make_time(8, 15, 23.5) | 08:15:23.5
make_timestamp(year, month, day, hour, min, sec) | timestamp | Create timestamp from year, month, day, hour, minute and seconds fields | make_timestamp(2013, 7, 15, 8, 15, 23.5) | 2013-07-15 08:15:23.5
make_timestamptz(year, month, day, hour, min, sec [, timezone]) | timestamp with time zone | Create timestamp with time zone from year, month, day, hour, minute and seconds fields. When timezone is not specified, then current time zone is used. | make_timestamptz(2013, 7, 15, 8, 15, 23.5) | 2013-07-15 08:15:23.5+01
now() | timestamp with time zone | Current date and time (start of current transaction) | |
statement_timestamp() | timestamp with time zone | Current date and time (start of current statement) | |
timeofday() | text | Current date and time (like clock_timestamp, but as a text string) | |
transaction_timestamp() | timestamp with time zone | Current date and time (start of current transaction) | |
Table 9.43 shows the operators that are available for use with the two JSON data types (see Section 8.14).
Table 9.43. json and jsonb Operators
There are parallel variants of these operators for both the json and jsonb types. The field/element/path extraction operators return the same type as their left-hand input (either json or jsonb), except for those specified as returning text, which coerce the value to text. The field/element/path extraction operators return NULL, rather than failing, if the JSON input does not have the right structure to match the request; for example if no such element exists. The field/element/path extraction operators that accept integer JSON array subscripts all support negative subscripting from the end of arrays.
The standard comparison operators shown in Table 9.1 are available for jsonb, but not for json. They follow the ordering rules for B-tree operations outlined in Section 8.14.4.
Some further operators also exist only for jsonb, as shown in Table 9.44. Many of these operators can be indexed by jsonb operator classes. For a full description of jsonb containment and existence semantics, see Section 8.14.3. Section 8.14.4 describes how these operators can be used to effectively index jsonb data.
Table 9.44. Additional jsonb Operators
The || operator concatenates the elements at the top level of each of its operands. It does not operate recursively. For example, if both operands are objects with a common key field name, the value of the field in the result will just be the value from the right-hand operand.
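The right-hand-wins rule for common keys can be seen directly:

```sql
SELECT '{"a": 1, "b": 2}'::jsonb || '{"b": 3, "c": 4}'::jsonb;
-- {"a": 1, "b": 3, "c": 4}   (the right-hand operand's value wins for the shared key "b")
```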
Table 9.45 shows the functions that are available for creating json and jsonb values. (There are no jsonb equivalents of the row_to_json and array_to_json functions; however, the to_jsonb function supplies much the same functionality as these functions would.)
Table 9.45. JSON Creation Functions
array_to_json and row_to_json have the same behavior as to_json except for offering a pretty-printing option. The behavior described for to_json likewise applies to each individual value converted by the other JSON creation functions.
The hstore extension has a cast from hstore to json, so that hstore values converted via the JSON creation functions will be represented as JSON objects, not as primitive string values.
Table 9.46 shows the functions that are available for processing json and jsonb values.
Table 9.46. JSON Processing Functions
Many of these functions and operators will convert Unicode escapes in JSON strings to the appropriate single character. This is a non-issue if the input is of type jsonb, because the conversion was already done; but for json input, this may result in an error being thrown, as noted in Section 8.14.
In json_populate_record, json_populate_recordset, json_to_record and json_to_recordset, type coercion from the JSON is “best effort” and may not result in desired values for some types. JSON keys are matched to identical column names in the target row type. JSON fields that do not appear in the target row type will be omitted from the output, and target columns that do not match any JSON field will simply be NULL.
All the items of the path parameter of jsonb_set as well as jsonb_insert, except the last item, must be present in the target. If create_missing is false, all items of the path parameter of jsonb_set must be present. If these conditions are not met, the target is returned unchanged.
If the last path item is an object key, it will be created if it is absent and given the new value. If the last path item is an array index, a positive value means the item to set is found by counting from the left, and a negative value by counting from the right; -1 designates the rightmost element, and so on. If the item is out of the range -array_length .. array_length - 1, and create_missing is true, the new value is added at the beginning of the array if the item is negative, and at the end of the array if it is positive.
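A short sketch of these path rules:

```sql
-- Replace an existing value addressed by a path.
SELECT jsonb_set('{"a": {"b": 1}}'::jsonb, '{a,b}', '2');
-- Result: {"a": {"b": 2}}

-- A negative array index counts from the right: -1 is the last element.
SELECT jsonb_set('[0, 1, 2]'::jsonb, '{-1}', '"last"');
-- Result: [0, 1, "last"]
```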
The json_typeof function's null return value should not be confused with a SQL NULL. While calling json_typeof('null'::json) will return null, calling json_typeof(NULL::json) will return a SQL NULL.
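The distinction can be seen directly:

```sql
SELECT json_typeof('null'::json);  -- the text value 'null'
SELECT json_typeof(NULL::json);    -- a SQL NULL
```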
If the argument to json_strip_nulls contains duplicate field names in any object, the result could be semantically somewhat different, depending on the order in which they occur. This is not an issue for jsonb_strip_nulls since jsonb values never have duplicate object field names.
See also Section 9.20 for the aggregate function json_agg, which aggregates record values as JSON, the aggregate function json_object_agg, which aggregates pairs of values into a JSON object, and their jsonb equivalents, jsonb_agg and jsonb_object_agg.
This section describes functions that possibly return more than one row. The most widely used functions in this class are series-generating functions, as detailed in Table 9.58 and Table 9.59. Other, more specialized set-returning functions are described elsewhere in this manual. See Section 7.2.1.4 for ways to combine multiple set-returning functions.
Table 9.58. Series Generating Functions
When step is positive, zero rows are returned if start is greater than stop. Conversely, when step is negative, zero rows are returned if start is less than stop. Zero rows are also returned for NULL inputs. It is an error for step to be zero. Some examples follow:
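For instance:

```sql
SELECT * FROM generate_series(2, 4);
-- 2, 3, 4

SELECT * FROM generate_series(5, 1, -2);
-- 5, 3, 1

-- start > stop with a positive step: zero rows
SELECT * FROM generate_series(4, 3);
```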
Table 9.59. Subscript Generating Functions
generate_subscripts
is a convenience function that generates the set of valid subscripts for the specified dimension of the given array. Zero rows are returned for arrays that do not have the requested dimension, or for NULL arrays (but valid subscripts are returned for NULL array elements). Some examples follow:
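For instance:

```sql
SELECT generate_subscripts('{10,20,30}'::int[], 1) AS s;
-- 1, 2, 3

-- The array has no second dimension, so zero rows are returned:
SELECT generate_subscripts('{10,20,30}'::int[], 2);
```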
When a function in the FROM clause is suffixed by WITH ORDINALITY, a bigint column is appended to the output which starts from 1 and increments by 1 for each row of the function's output. This is most useful in the case of set-returning functions such as unnest().
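A small illustration with unnest():

```sql
SELECT * FROM unnest(ARRAY['a','b','c']) WITH ORDINALITY AS t(elem, ord);
--  elem | ord
--  a    |   1
--  b    |   2
--  c    |   3
```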
The functions described in this section are used to control and monitor a PostgreSQL installation.
Table 9.77 shows the functions available to query and alter run-time configuration parameters.
The function current_setting yields the current value of the setting setting_name. It corresponds to the SQL command SHOW. An example:
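A sketch of such a query (the returned value naturally depends on the server configuration):

```sql
SELECT current_setting('datestyle');
-- e.g. 'ISO, MDY'
```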
If there is no setting named setting_name, current_setting throws an error unless missing_ok is supplied and is true.
set_config sets the parameter setting_name to new_value. If is_local is true, the new value will only apply to the current transaction. If you want the new value to apply for the rest of the current session, use false instead. This function corresponds to the SQL command SET. An example:
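For instance, the function returns the newly set value:

```sql
-- A session-wide change (is_local = false).
SELECT set_config('log_statement_stats', 'off', false);
-- Result: off
```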
The functions shown in Table 9.78 send control signals to other server processes. Use of these functions is restricted to superusers by default but access may be granted to others using GRANT
, with noted exceptions.
Each of these functions returns true
if successful and false
otherwise.
pg_cancel_backend
and pg_terminate_backend
send signals (SIGINT or SIGTERM respectively) to backend processes identified by process ID. The process ID of an active backend can be found from the pid
column of the pg_stat_activity
view, or by listing the postgres
processes on the server (using ps on Unix or the Task Manager on Windows). The role of an active backend can be found from the usename
column of the pg_stat_activity
view.
pg_reload_conf
sends a SIGHUP signal to the server, causing configuration files to be reloaded by all server processes.
pg_rotate_logfile
signals the log-file manager to switch to a new output file immediately. This works only when the built-in log collector is running, since otherwise there is no log-file manager subprocess.
The functions shown in Table 9.79 assist in making on-line backups. These functions cannot be executed during recovery (except pg_is_in_backup
, pg_backup_start_time
and pg_wal_lsn_diff
).
pg_start_backup
accepts an arbitrary user-defined label for the backup. (Typically this would be the name under which the backup dump file will be stored.) When used in exclusive mode, the function writes a backup label file (backup_label
) and, if there are any links in the pg_tblspc/
directory, a tablespace map file (tablespace_map
) into the database cluster's data directory, performs a checkpoint, and then returns the backup's starting write-ahead log location as text. The user can ignore this result value, but it is provided in case it is useful. When used in non-exclusive mode, the contents of these files are instead returned by the pg_stop_backup
function, and should be written to the backup by the caller.
There is an optional second parameter of type boolean
. If true
, it specifies executing pg_start_backup
as quickly as possible. This forces an immediate checkpoint which will cause a spike in I/O operations, slowing any concurrently executing queries.
In an exclusive backup, pg_stop_backup
removes the label file and, if it exists, the tablespace_map
file created by pg_start_backup
. In a non-exclusive backup, the contents of the backup_label
and tablespace_map
are returned in the result of the function, and should be written to files in the backup (and not in the data directory). There is an optional second parameter of type boolean
. If false, pg_stop_backup will return immediately after the backup is completed, without waiting for WAL to be archived. This behavior is only useful for backup software that independently monitors WAL archiving. Otherwise, WAL required to make the backup consistent might be missing, making the backup useless. When this parameter is set to true, pg_stop_backup
will wait for WAL to be archived when archiving is enabled; on the standby, this means that it will wait only when archive_mode = always
. If write activity on the primary is low, it may be useful to run pg_switch_wal
on the primary in order to trigger an immediate segment switch.
When executed on a primary, the function also creates a backup history file in the write-ahead log archive area. The history file includes the label given to pg_start_backup
, the starting and ending write-ahead log locations for the backup, and the starting and ending times of the backup. The return value is the backup's ending write-ahead log location (which again can be ignored). After recording the ending location, the current write-ahead log insertion point is automatically advanced to the next write-ahead log file, so that the ending write-ahead log file can be archived immediately to complete the backup.
pg_switch_wal
moves to the next write-ahead log file, allowing the current file to be archived (assuming you are using continuous archiving). The return value is the ending write-ahead log location + 1 within the just-completed write-ahead log file. If there has been no write-ahead log activity since the last write-ahead log switch, pg_switch_wal
does nothing and returns the start location of the write-ahead log file currently in use.
pg_create_restore_point
creates a named write-ahead log record that can be used as recovery target, and returns the corresponding write-ahead log location. The given name can then be used with recovery_target_name to specify the point up to which recovery will proceed. Avoid creating multiple restore points with the same name, since recovery will stop at the first one whose name matches the recovery target.
pg_current_wal_lsn
displays the current write-ahead log write location in the same format used by the above functions. Similarly, pg_current_wal_insert_lsn
displays the current write-ahead log insertion location and pg_current_wal_flush_lsn
displays the current write-ahead log flush location. The insertion location is the “logical” end of the write-ahead log at any instant, while the write location is the end of what has actually been written out from the server's internal buffers and flush location is the location guaranteed to be written to durable storage. The write location is the end of what can be examined from outside the server, and is usually what you want if you are interested in archiving partially-complete write-ahead log files. The insertion and flush locations are made available primarily for server debugging purposes. These are both read-only operations and do not require superuser permissions.
You can use pg_walfile_name_offset
to extract the corresponding write-ahead log file name and byte offset from the results of any of the above functions. For example:
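For instance, applied to the current write location on a primary (the file name and offset shown are illustrative and will vary):

```sql
SELECT * FROM pg_walfile_name_offset(pg_current_wal_lsn());
--         file_name        | file_offset
-- 00000001000000000000000D |     4039624
```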
Similarly, pg_walfile_name
extracts just the write-ahead log file name. When the given write-ahead log location is exactly at a write-ahead log file boundary, both these functions return the name of the preceding write-ahead log file. This is usually the desired behavior for managing write-ahead log archiving behavior, since the preceding file is the last one that currently needs to be archived.
pg_wal_lsn_diff
calculates the difference in bytes between two write-ahead log locations. It can be used with pg_stat_replication
or some functions shown in Table 9.79 to get the replication lag.
For details about proper usage of these functions, see Section 25.3.
The functions shown in Table 9.80 provide information about the current status of the standby. These functions may be executed both during recovery and in normal running.
Table 9.80. Recovery Information Functions
The functions shown in Table 9.81 control the progress of recovery. These functions may be executed only during recovery.
While recovery is paused no further database changes are applied. If in hot standby, all new queries will see the same consistent snapshot of the database, and no further query conflicts will be generated until recovery is resumed.
If streaming replication is disabled, the paused state may continue indefinitely without problem. While streaming replication is in progress WAL records will continue to be received, which will eventually fill available disk space, depending upon the duration of the pause, the rate of WAL generation and available disk space.
PostgreSQL allows database sessions to synchronize their snapshots. A snapshot determines which data is visible to the transaction that is using the snapshot. Synchronized snapshots are necessary when two or more sessions need to see identical content in the database. If two sessions just start their transactions independently, there is always a possibility that some third transaction commits between the executions of the two START TRANSACTION
commands, so that one session sees the effects of that transaction and the other does not.
To solve this problem, PostgreSQL allows a transaction to export the snapshot it is using. As long as the exporting transaction remains open, other transactions can import its snapshot, and thereby be guaranteed that they see exactly the same view of the database that the first transaction sees. But note that any database changes made by any one of these transactions remain invisible to the other transactions, as is usual for changes made by uncommitted transactions. So the transactions are synchronized with respect to pre-existing data, but act normally for changes they make themselves.
Snapshots are exported with the pg_export_snapshot
function, shown in Table 9.82, and imported with the SET TRANSACTION command.
The function pg_export_snapshot
saves the current snapshot and returns a text
string identifying the snapshot. This string must be passed (outside the database) to clients that want to import the snapshot. The snapshot is available for import only until the end of the transaction that exported it. A transaction can export more than one snapshot, if needed. Note that doing so is only useful in READ COMMITTED
transactions, since in REPEATABLE READ
and higher isolation levels, transactions use the same snapshot throughout their lifetime. Once a transaction has exported any snapshots, it cannot be prepared with PREPARE TRANSACTION.
See SET TRANSACTION for details of how to use an exported snapshot.
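A sketch of the workflow across two sessions (the snapshot identifier shown is illustrative; use the value actually returned by the first session):

```sql
-- Session 1: export a snapshot; the transaction must remain open.
BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
SELECT pg_export_snapshot();   -- e.g. '00000003-0000001B-1'

-- Session 2: import it to see exactly the same view of the database.
BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
SET TRANSACTION SNAPSHOT '00000003-0000001B-1';
```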
The functions shown in Table 9.83 are for controlling and interacting with replication features. See Section 26.2.5, Section 26.2.6, and Chapter 50 for information about the underlying features. Use of these functions is restricted to superusers.
Many of these functions have equivalent commands in the replication protocol; see Section 53.4.
The functions described in Section 9.26.3, Section 9.26.4, and Section 9.26.5 are also relevant for replication.
The functions shown in Table 9.84 calculate the disk space usage of database objects.
pg_column_size
shows the space used to store any individual data value.
pg_total_relation_size
accepts the OID or name of a table or toast table, and returns the total on-disk space used for that table, including all associated indexes. This function is equivalent to pg_table_size
+
pg_indexes_size
.
pg_table_size
accepts the OID or name of a table and returns the disk space needed for that table, exclusive of indexes. (TOAST space, free space map, and visibility map are included.)
pg_indexes_size
accepts the OID or name of a table and returns the total disk space used by all the indexes attached to that table.
pg_database_size
and pg_tablespace_size
accept the OID or name of a database or tablespace, and return the total disk space used therein. To use pg_database_size
, you must have CONNECT
permission on the specified database (which is granted by default), or be a member of the pg_read_all_stats
role. To use pg_tablespace_size
, you must have CREATE
permission on the specified tablespace, or be a member of the pg_read_all_stats
role unless it is the default tablespace for the current database.
pg_relation_size
accepts the OID or name of a table, index or toast table, and returns the on-disk size in bytes of one fork of that relation. (Note that for most purposes it is more convenient to use the higher-level functions pg_total_relation_size
or pg_table_size
, which sum the sizes of all forks.) With one argument, it returns the size of the main data fork of the relation. The second argument can be provided to specify which fork to examine:
'main'
returns the size of the main data fork of the relation.
'fsm'
returns the size of the Free Space Map (see Section 68.3) associated with the relation.
'vm'
returns the size of the Visibility Map (see Section 68.4) associated with the relation.
'init'
returns the size of the initialization fork, if any, associated with the relation.
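The relationship between these size functions can be seen by comparing them on one table (pg_class is used here only as a convenient example):

```sql
SELECT pg_relation_size('pg_class')        AS main_fork,
       pg_relation_size('pg_class', 'fsm') AS free_space_map,
       pg_table_size('pg_class')           AS all_forks_and_toast,
       pg_total_relation_size('pg_class')  AS with_indexes;
```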
pg_size_pretty
can be used to format the result of one of the other functions in a human-readable way, using bytes, kB, MB, GB or TB as appropriate.
pg_size_bytes
can be used to get the size in bytes from a string in human-readable format. The input may have units of bytes, kB, MB, GB or TB, and is parsed case-insensitively. If no units are specified, bytes are assumed.
The units kB, MB, GB and TB used by the functions pg_size_pretty
and pg_size_bytes
are defined using powers of 2 rather than powers of 10, so 1kB is 1024 bytes, 1MB is 1024^2 = 1048576 bytes, and so on.
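For example, the two functions are inverses over these binary units:

```sql
SELECT pg_size_pretty(10485760::bigint);  -- '10 MB'
SELECT pg_size_bytes('10 MB');            -- 10485760
SELECT pg_size_bytes('1kB');              -- 1024
```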
The functions above that operate on tables or indexes accept a regclass
argument, which is simply the OID of the table or index in the pg_class
system catalog. You do not have to look up the OID by hand, however, since the regclass
data type's input converter will do the work for you. Just write the table name enclosed in single quotes so that it looks like a literal constant. For compatibility with the handling of ordinary SQL names, the string will be converted to lower case unless it contains double quotes around the table name.
If an OID that does not represent an existing object is passed as argument to one of the above functions, NULL is returned.
The functions shown in Table 9.85 assist in identifying the specific disk files associated with database objects.
pg_relation_filenode
accepts the OID or name of a table, index, sequence, or toast table, and returns the “filenode” number currently assigned to it. The filenode is the base component of the file name(s) used for the relation (see Section 68.1 for more information). For most tables the result is the same as pg_class
.relfilenode
, but for certain system catalogs relfilenode
is zero and this function must be used to get the correct value. The function returns NULL if passed a relation that does not have storage, such as a view.
pg_relation_filepath
is similar to pg_relation_filenode
, but it returns the entire file path name (relative to the database cluster's data directory PGDATA
) of the relation.
pg_filenode_relation
is the reverse of pg_relation_filenode
. Given a “tablespace” OID and a “filenode”, it returns the associated relation's OID. For a table in the database's default tablespace, the tablespace can be specified as 0.
Table 9.86 lists functions used to manage collations.
pg_collation_actual_version
returns the actual version of the collation object as it is currently installed in the operating system. If this is different from the value in pg_collation.collversion
, then objects depending on the collation might need to be rebuilt. See also ALTER COLLATION.
pg_import_system_collations
adds collations to the system catalog pg_collation
based on all the locales it finds in the operating system. This is what initdb
uses; see Section 23.2.2 for more details. If additional locales are installed into the operating system later on, this function can be run again to add collations for the new locales. Locales that match existing entries in pg_collation
will be skipped. (But collation objects based on locales that are no longer present in the operating system are not removed by this function.) The schema
parameter would typically be pg_catalog
, but that is not a requirement; the collations could be installed into some other schema as well. The function returns the number of new collation objects it created.
Table 9.87 shows the functions available for index maintenance tasks. These functions cannot be executed during recovery. Use of these functions is restricted to superusers and the owner of the given index.
brin_summarize_new_values
accepts the OID or name of a BRIN index and inspects the index to find page ranges in the base table that are not currently summarized by the index; for any such range it creates a new summary index tuple by scanning the table pages. It returns the number of new page range summaries that were inserted into the index. brin_summarize_range
does the same, except it only summarizes the range that covers the given block number.
gin_clean_pending_list
accepts the OID or name of a GIN index and cleans up the pending list of the specified index by moving entries in it to the main GIN data structure in bulk. It returns the number of pages removed from the pending list. Note that if the argument is a GIN index built with the fastupdate
option disabled, no cleanup happens and the return value is 0, because the index doesn't have a pending list. Please see Section 66.4.1 and Section 66.5 for details of the pending list and fastupdate
option.
The functions shown in Table 9.88 provide native access to files on the machine hosting the server. Only files within the database cluster directory and the log_directory
can be accessed unless the user is granted the role pg_read_server_files
. Use a relative path for files in the cluster directory, and a path matching the log_directory
configuration setting for log files.
Note that granting users the EXECUTE privilege on the pg_read_file()
, or related, functions allows them the ability to read any file on the server which the database can read and that those reads bypass all in-database privilege checks. This means that, among other things, a user with this access is able to read the contents of the pg_authid
table where authentication information is contained, as well as read any file in the database. Therefore, granting access to these functions should be carefully considered.
Some of these functions take an optional missing_ok
parameter, which specifies the behavior when the file or directory does not exist. If true
, the function returns NULL (except pg_ls_dir
, which returns an empty result set). If false
, an error is raised. The default is false
.
pg_ls_dir
returns the names of all files (and directories and other special files) in the specified directory. The include_dot_dirs
indicates whether “.” and “..” are included in the result set. The default is to exclude them (false
), but including them can be useful when missing_ok
is true
, to distinguish an empty directory from a non-existent directory.
pg_ls_logdir
returns the name, size, and last modified time (mtime) of each file in the log directory. By default, only superusers and members of the pg_monitor
role can use this function. Access may be granted to others using GRANT
.
pg_ls_waldir
returns the name, size, and last modified time (mtime) of each file in the write ahead log (WAL) directory. By default only superusers and members of the pg_monitor
role can use this function. Access may be granted to others using GRANT
.
pg_read_file
returns part of a text file, starting at the given offset
, returning at most length
bytes (less if the end of file is reached first). If offset
is negative, it is relative to the end of the file. If offset
and length
are omitted, the entire file is returned. The bytes read from the file are interpreted as a string in the server encoding; an error is thrown if they are not valid in that encoding.
pg_read_binary_file
is similar to pg_read_file
, except that the result is a bytea
value; accordingly, no encoding checks are performed. In combination with the convert_from
function, this function can be used to read a file in a specified encoding:
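For instance (the file name is hypothetical; the path is interpreted relative to the data directory):

```sql
SELECT convert_from(pg_read_binary_file('file_in_utf8.txt'), 'UTF8');
```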
pg_stat_file
returns a record containing the file size, last accessed time stamp, last modified time stamp, last file status change time stamp (Unix platforms only), file creation time stamp (Windows only), and a boolean
indicating if it is a directory. Typical usages include:
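For instance (postgresql.conf sits in the data directory on a default setup):

```sql
SELECT size, isdir FROM pg_stat_file('postgresql.conf');
SELECT (pg_stat_file('postgresql.conf')).modification;
```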
The functions shown in Table 9.89 manage advisory locks. For details about proper use of these functions, see Section 13.3.5.
pg_advisory_lock
locks an application-defined resource, which can be identified either by a single 64-bit key value or two 32-bit key values (note that these two key spaces do not overlap). If another session already holds a lock on the same resource identifier, this function will wait until the resource becomes available. The lock is exclusive. Multiple lock requests stack, so that if the same resource is locked three times it must then be unlocked three times to be released for other sessions' use.
pg_advisory_lock_shared
works the same as pg_advisory_lock
, except the lock can be shared with other sessions requesting shared locks. Only would-be exclusive lockers are locked out.
pg_try_advisory_lock
is similar to pg_advisory_lock
, except the function will not wait for the lock to become available. It will either obtain the lock immediately and return true
, or return false
if the lock cannot be acquired immediately.
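A typical non-blocking pattern, using an arbitrary application-chosen key:

```sql
SELECT pg_try_advisory_lock(12345);  -- true if acquired, false otherwise

-- ... do work while holding the lock ...

SELECT pg_advisory_unlock(12345);    -- true if the lock was held and released
```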
pg_try_advisory_lock_shared
works the same as pg_try_advisory_lock
, except it attempts to acquire a shared rather than an exclusive lock.
pg_advisory_unlock
will release a previously-acquired exclusive session level advisory lock. It returns true
if the lock is successfully released. If the lock was not held, it will return false
, and in addition, an SQL warning will be reported by the server.
pg_advisory_unlock_shared
works the same as pg_advisory_unlock
, except it releases a shared session level advisory lock.
pg_advisory_unlock_all
will release all session level advisory locks held by the current session. (This function is implicitly invoked at session end, even if the client disconnects ungracefully.)
pg_advisory_xact_lock
works the same as pg_advisory_lock
, except the lock is automatically released at the end of the current transaction and cannot be released explicitly.
pg_advisory_xact_lock_shared
works the same as pg_advisory_lock_shared
, except the lock is automatically released at the end of the current transaction and cannot be released explicitly.
pg_try_advisory_xact_lock
works the same as pg_try_advisory_lock
, except the lock, if acquired, is automatically released at the end of the current transaction and cannot be released explicitly.
pg_try_advisory_xact_lock_shared
works the same as pg_try_advisory_lock_shared
, except the lock, if acquired, is automatically released at the end of the current transaction and cannot be released explicitly.
Window functions provide the ability to perform calculations across sets of rows that are related to the current query row. See Section 3.5 for an introduction to this feature, and Section 4.2.8 for syntax details.
The built-in window functions are listed in Table 9.57. Note that these functions must be invoked using window function syntax, i.e., an OVER clause is required.
In addition to these functions, any built-in or user-defined general-purpose or statistical aggregate (i.e., not ordered-set or hypothetical-set aggregates) can be used as a window function; see Section 9.20 for a list of the built-in aggregates. Aggregate functions act as window functions only when an OVER clause follows the call; otherwise they act as non-window aggregates and return a single row for the entire set.
Table 9.57. General-Purpose Window Functions
All of the functions listed in Table 9.57 depend on the sort ordering specified by the ORDER BY clause of the associated window definition. Rows that are not distinct when considering only the ORDER BY columns are said to be peers. The four ranking functions (including cume_dist) are defined so that they give the same answer for all peer rows.
Note that first_value, last_value, and nth_value consider only the rows within the “window frame”, which by default contains the rows from the start of the partition through the last peer of the current row. This is likely to give unhelpful results for last_value and sometimes also nth_value. You can redefine the frame by adding a suitable frame specification (RANGE or ROWS) to the OVER clause. See Section 4.2.8 for more information about frame specifications.
When an aggregate function is used as a window function, it aggregates over the rows within the current row's window frame. An aggregate used with ORDER BY and the default window frame definition produces a “running sum” type of behavior, which may or may not be what's wanted. To obtain aggregation over the whole partition, omit ORDER BY or use ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING. Other frame specifications can be used to obtain other effects.
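The contrast between the two framings, using an inline table:

```sql
SELECT x,
       sum(x) OVER (ORDER BY x) AS running_sum,
       sum(x) OVER ()           AS whole_partition
FROM (VALUES (1), (2), (3)) AS t(x);
--  x | running_sum | whole_partition
--  1 |           1 |               6
--  2 |           3 |               6
--  3 |           6 |               6
```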
The SQL standard defines a RESPECT NULLS or IGNORE NULLS option for lead, lag, first_value, last_value, and nth_value. This is not implemented in PostgreSQL: the behavior is always the same as the standard's default, namely RESPECT NULLS. Likewise, the standard's FROM FIRST or FROM LAST option for nth_value is not implemented: only the default FROM FIRST behavior is supported. (You can achieve the result of FROM LAST by reversing the ORDER BY ordering.)
cume_dist computes the fraction of partition rows that are less than or equal to the current row and its peers, while percent_rank computes the fraction of partition rows that are less than the current row, assuming the current row does not exist in the partition.
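A small example of how peers affect the two functions:

```sql
SELECT v,
       rank()         OVER (ORDER BY v) AS rnk,
       percent_rank() OVER (ORDER BY v) AS pct,
       cume_dist()    OVER (ORDER BY v) AS cume
FROM (VALUES (1), (2), (2), (3)) AS t(v);
-- The two rows with v = 2 are peers: both get rank 2,
-- percent_rank 1/3, and cume_dist 0.75.
```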
Table 9.60 shows several functions that extract session and system information.
In addition to the functions listed in this section, there are a number of functions related to the statistics system that also provide system information. See Section 28.2.2 for more information.
Note
current_catalog, current_role, current_schema, current_user, session_user, and user have special syntactic status in SQL: they must be called without trailing parentheses. (In PostgreSQL, parentheses can optionally be used with current_schema, but not with the others.)
session_user is normally the user who initiated the current database connection, but superusers can change this setting with SET SESSION AUTHORIZATION. current_user is the user identifier that is applicable for permission checking. Normally it is equal to the session user, but it can be changed with SET ROLE. It also changes during the execution of functions with the attribute SECURITY DEFINER. In Unix parlance, the session user is the “real user” and the current user is the “effective user”. current_role and user are synonyms for current_user. (The SQL standard draws a distinction between current_role and current_user, but PostgreSQL does not, since it unifies users and roles into a single kind of entity.)
current_schema returns the name of the schema that is first in the search path (or a null value if the search path is empty). This is the schema that will be used for any tables or other named objects that are created without specifying a target schema. current_schemas(boolean) returns an array of the names of all schemas presently in the search path. The Boolean option determines whether implicitly included system schemas such as pg_catalog appear in the returned search path.
The search path can be altered at run time. The command is:
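A sketch of the command, with a hypothetical schema name:

```sql
-- Put my_schema first in the search path.
SET search_path TO my_schema, public;

-- Or restrict the change to the current transaction only:
SET LOCAL search_path TO my_schema, public;
```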
inet_client_addr returns the IP address of the current client, and inet_client_port returns its port number. inet_server_addr returns the IP address on which the server accepted the current connection, and inet_server_port returns its port number. All these functions return NULL if the current connection is via a Unix-domain socket.
pg_blocking_pids returns an array of the process IDs of the sessions that are blocking the server process with the specified process ID, or an empty array if there is no such server process or it is not blocked. One server process blocks another if it either holds a lock that conflicts with the blocked process's lock request (hard block), or is waiting for a lock that would conflict with the blocked process's lock request and is ahead of it in the wait queue (soft block). When using parallel queries, the result always lists client-visible process IDs (that is, pg_backend_pid results), even if the actual lock is held or awaited by a child worker process; as a result, there may be duplicated PIDs in the result. Also note that when a prepared transaction holds a conflicting lock, it will be represented by a zero process ID in the result of this function. Frequent calls to this function could have some impact on database performance, because it needs exclusive access to the lock manager's shared state for a short time.
pg_conf_load_time returns the timestamp with time zone at which the server configuration files were last loaded. (If the current session was alive at the time, this will be the time when the session itself re-read the configuration files, so the reading will vary a little in different sessions; otherwise it is the time when the postmaster process re-read the configuration files.)
pg_current_logfile returns, as text, the path of the log file currently in use by the logging collector. The path includes the log_directory directory and the log file name. Log collection must be enabled or the return value is NULL. When multiple log files exist, each in a different format, pg_current_logfile called without an argument returns the path of the file having the first format found in this ordered list: stderr, csvlog. NULL is returned if no log file has any of these formats. To request a specific file format, supply either csvlog or stderr as the text argument. The return value is NULL when the requested log format is not a configured log_destination. pg_current_logfile reflects the contents of the current_logfiles file.
pg_my_temp_schema returns the OID of the current session's temporary schema, or zero if it has none (because it has not created any temporary tables). pg_is_other_temp_schema returns true if the given OID is the OID of another session's temporary schema. (This can be useful, for example, to exclude other sessions' temporary tables from a catalog display.)
pg_listening_channels returns the set of names of asynchronous notification channels that the current session is listening to. pg_notification_queue_usage returns the fraction of the total available space for notifications that is currently occupied by notifications waiting to be processed, in the range 0-1. See LISTEN and NOTIFY for more information.
pg_postmaster_start_time returns the timestamp with time zone at which the server started.
pg_safe_snapshot_blocking_pids returns an array of the process IDs of the sessions that are blocking the server process with the specified process ID from acquiring a safe snapshot, or an empty array if there is no such server process or it is not blocked. A session running a SERIALIZABLE transaction blocks a SERIALIZABLE READ ONLY DEFERRABLE transaction from acquiring a snapshot until the latter determines that it is safe to avoid taking any predicate locks. See Section 13.2.3 for more information about serializable and deferrable transactions. Frequent calls to this function could have some impact on database performance, because it needs access to the predicate lock manager's shared state for a short time.
version returns a string describing the PostgreSQL server's version. You can also get this information from server_version or, in a machine-readable form, from server_version_num. Software developers should use server_version_num (available since 8.2) or PQserverVersion instead of parsing the text version.
Table 9.61 lists functions that allow the user to query object access privileges programmatically. See Section 5.6 for more information about privileges.
has_table_privilege checks whether a user can access a table in a particular way. The user can be specified by name, by OID (pg_authid.oid), or as public to indicate the PUBLIC pseudo-role; if the argument is omitted, current_user is assumed. The table can be specified by name or by OID. (Thus, there are actually six variants of has_table_privilege, which can be distinguished by the number and types of their arguments.) When specifying by name, the name can be schema-qualified if necessary. The desired access privilege type is specified by a text string, which must be one of SELECT, INSERT, UPDATE, DELETE, TRUNCATE, REFERENCES, or TRIGGER. Optionally, WITH GRANT OPTION can be added to a privilege type to test whether the privilege is held with grant option. Also, multiple privilege types can be listed separated by commas, in which case the result will be true if any of the listed privileges is held. (Case of the privilege string is not significant, and extra whitespace is allowed between but not within privilege names.) Some examples:
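For instance (the table and role names are hypothetical):

```sql
SELECT has_table_privilege('myschema.mytable', 'select');
SELECT has_table_privilege('joe', 'mytable', 'INSERT, SELECT WITH GRANT OPTION');
```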
has_sequence_privilege checks whether a user can access a sequence in a particular way. Its argument possibilities are analogous to has_table_privilege. The desired access privilege type must be one of USAGE, SELECT, or UPDATE.
has_any_column_privilege checks whether a user can access any column of a table in a particular way. Its argument possibilities are analogous to has_table_privilege, except that the desired access privilege type must be some combination of SELECT, INSERT, UPDATE, or REFERENCES. Note that having any of these privileges at the table level implicitly grants it for each column of the table, so has_any_column_privilege will always return true if has_table_privilege does for the same arguments. But has_any_column_privilege also succeeds if there is a column-level grant of the privilege for at least one column.
has_column_privilege checks whether a user can access a column in a particular way. Its argument possibilities are analogous to has_table_privilege, with the addition that the column can be specified either by name or by attribute number. The desired access privilege type must be some combination of SELECT, INSERT, UPDATE, or REFERENCES. Note that having any of these privileges at the table level implicitly grants it for each column of the table.
has_database_privilege checks whether a user can access a database in a particular way. Its argument possibilities are analogous to has_table_privilege. The desired access privilege type must be some combination of CREATE, CONNECT, TEMPORARY, or TEMP (which is equivalent to TEMPORARY).
has_function_privilege checks whether a user can access a function in a particular way. Its argument possibilities are analogous to has_table_privilege. When specifying a function by a text string rather than by OID, the allowed input is the same as for the regprocedure data type (see Section 8.18). The desired access privilege type must be EXECUTE. An example:
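For instance (the role and function names are hypothetical):

```sql
SELECT has_function_privilege('joeuser', 'myfunc(int, text)', 'execute');
```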
has_foreign_data_wrapper_privilege checks whether a user can access a foreign-data wrapper in a particular way. Its argument possibilities are analogous to has_table_privilege. The desired access privilege type must be USAGE.
has_language_privilege checks whether a user can access a procedural language in a particular way. Its argument possibilities are analogous to has_table_privilege. The desired access privilege type must be USAGE.
has_schema_privilege checks whether a user can access a schema in a particular way. Its argument possibilities are analogous to has_table_privilege. The desired access privilege type must be some combination of CREATE or USAGE.
has_server_privilege checks whether a user can access a foreign server in a particular way. Its argument possibilities are analogous to has_table_privilege. The desired access privilege type must be USAGE.
has_tablespace_privilege checks whether a user can access a tablespace in a particular way. Its argument possibilities are analogous to has_table_privilege. The desired access privilege type must be CREATE.
has_type_privilege checks whether a user can access a data type in a particular way. Its argument possibilities are analogous to has_table_privilege. When specifying a type by a text string rather than by OID, the allowed input is the same as for the regtype data type (see Section 8.18). The desired access privilege type must be USAGE.
pg_has_role checks whether a user can access a role in a particular way. Its argument possibilities are analogous to has_table_privilege, except that public is not allowed as a user name. The desired access privilege type must be some combination of MEMBER or USAGE. MEMBER denotes direct or indirect membership in the role (that is, the right to do SET ROLE), while USAGE denotes whether the privileges of the role are immediately available without doing SET ROLE.
row_security_active checks whether row-level security is active for the specified table in the context of the current_user and environment. The table can be specified by name or by OID.
Table 9.62 lists functions that determine whether a certain object is visible in the current schema search path. For example, a table is said to be visible if its containing schema is in the search path and no table of the same name appears earlier in the search path. This is equivalent to the statement that the table can be referenced by name without explicit schema qualification. To list the names of all visible tables:
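Following the pattern in the upstream documentation, this can be written as:

```sql
SELECT relname FROM pg_class WHERE pg_table_is_visible(oid);
```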
Each function performs the visibility check for one type of database object. Note that pg_table_is_visible can also be used with views, materialized views, indexes, sequences, and foreign tables; pg_type_is_visible can also be used with domains. For functions and operators, an object in the search path is visible if there is no object of the same name and argument data type(s) earlier in the path. For operator classes, both the name and the associated index access method are considered.

All these functions require object OIDs to identify the object to be checked. If you want to test an object by name, it is convenient to use the OID alias types (regclass, regtype, regprocedure, regoperator, regconfig, or regdictionary), for example:
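For instance (`myschema.widget` is a placeholder type name):

```sql
SELECT pg_type_is_visible('myschema.widget'::regtype);
```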
Note that it would not make much sense to test a non-schema-qualified type name in this way: if the name can be recognized at all, it must be visible.

Table 9.63 lists functions that extract information from the system catalogs.

format_type returns the SQL name of a data type that is identified by its type OID and possibly a type modifier. Pass NULL for the type modifier if no specific modifier is known.

pg_get_keywords returns a set of records describing the SQL keywords recognized by the server. The word column contains the keyword. The catcode column contains a category code: U for unreserved, C for column name, T for type or function name, or R for reserved. The catdesc column contains a possibly-localized string describing the category.
pg_get_constraintdef, pg_get_indexdef, pg_get_ruledef, pg_get_statisticsobjdef, and pg_get_triggerdef respectively reconstruct the creating command for a constraint, index, rule, extended statistics object, or trigger. (Note that this is a decompiled reconstruction, not the original text of the command.) pg_get_expr decompiles the internal form of an individual expression, such as the default value for a column. It can be useful when examining the contents of system catalogs. If the expression might contain Vars, specify the OID of the relation they refer to as the second parameter; if no Vars are expected, zero is sufficient. pg_get_viewdef reconstructs the SELECT query that defines a view. Most of these functions come in two variants, one of which can optionally "pretty-print" the result. The pretty-printed format is more readable, but the default format is more likely to be interpreted the same way by future versions of PostgreSQL; avoid using pretty-printed output for dump purposes. Passing false for the pretty-print parameter yields the same result as the variant that does not have the parameter at all.
pg_get_functiondef returns a complete CREATE OR REPLACE FUNCTION statement for a function. pg_get_function_arguments returns the argument list of a function, in the form it would need to appear in within CREATE FUNCTION. pg_get_function_result similarly returns the appropriate RETURNS clause for the function. pg_get_function_identity_arguments returns the argument list necessary to identify a function, in the form it would need to appear in within, for example, ALTER FUNCTION; this form omits default values.

pg_get_serial_sequence returns the name of the sequence associated with a column, or NULL if no sequence is associated with the column. The first parameter is a table name with optional schema, and the second parameter is a column name. Because the first parameter is potentially a schema and table, it is not treated as a double-quoted identifier, meaning it is lower-cased by default, while the second parameter, being just a column name, is treated as double-quoted and has its case preserved. The function returns a value suitably formatted for passing to the sequence functions (see Section 9.16). This association can be modified or removed with ALTER SEQUENCE OWNED BY. (The function probably should have been called pg_get_owned_sequence; its current name reflects the fact that it has typically been used with serial or bigserial columns.)
pg_get_userbyid extracts a role's name given its OID.

pg_index_column_has_property, pg_index_has_property, and pg_indexam_has_property return whether the specified index column, index, or index access method possesses the named property. NULL is returned if the property name is not known or does not apply to the particular object, or if the OID or column number does not identify a valid object. Refer to Table 9.64 for column properties, Table 9.65 for index properties, and Table 9.66 for access method properties. (Note that extension access methods can define additional property names for their indexes.)

pg_options_to_table returns the set of storage option name/value pairs (option_name/option_value) when passed pg_class.reloptions or pg_attribute.attoptions.

pg_tablespace_databases allows a tablespace to be examined. It returns the set of OIDs of databases that have objects stored in the tablespace. If this function returns any rows, the tablespace is not empty and cannot be dropped. To display the specific objects populating the tablespace, you will need to connect to the databases identified by pg_tablespace_databases and query their pg_class catalogs.
pg_typeof returns the OID of the data type of the value that is passed to it. This can be helpful for troubleshooting or dynamically constructing SQL queries. The function is declared as returning regtype, which is an OID alias type (see Section 8.18); this means that it is the same as an OID for comparison purposes but displays as a type name. For example:
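A minimal illustration:

```sql
SELECT pg_typeof(33);
--  pg_typeof
-- -----------
--  integer

-- Because the result is comparable to an OID, it can be used in joins:
SELECT typlen FROM pg_type WHERE oid = pg_typeof(33);
```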
The expression collation for returns the collation of the value that is passed to it. Example:
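A sketch, assuming the `de_DE` collation exists on the system:

```sql
SELECT collation for ('foo' COLLATE "de_DE");
--  pg_collation_for
-- ------------------
--  "de_DE"
```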
The value might be quoted and schema-qualified. If no collation is derived for the argument expression, a null value is returned. If the argument is not of a collatable data type, an error is raised.

The to_regclass, to_regproc, to_regprocedure, to_regoper, to_regoperator, to_regtype, to_regnamespace, and to_regrole functions translate relation, function, operator, type, schema, and role names (given as text) to objects of type regclass, regproc, regprocedure, regoper, regoperator, regtype, regnamespace, and regrole respectively. These functions differ from a cast from text in that they will not accept a numeric OID, and that they return null rather than throwing an error if the name is not found (or, for to_regproc and to_regoper, if the given name matches multiple objects).
Table 9.67 lists functions related to database object identification and addressing.

pg_describe_object returns a textual description of a database object specified by catalog OID, object OID, and (possibly zero) sub-object ID. This description is intended to be human-readable, and might be translated, depending on server configuration. It is useful for determining the identity of an object as stored in the pg_depend catalog.

pg_identify_object returns a row containing enough information to uniquely identify the database object specified by catalog OID, object OID, and (possibly zero) sub-object ID. This information is intended to be machine-readable, and is never translated. type identifies the type of database object; schema is the schema name that the object belongs in, or NULL for object types that do not belong to schemas; name is the name of the object, quoted if necessary, and is only present if it can be used (alongside the schema name, if pertinent) as a unique identifier of the object, otherwise NULL; identity is the complete object identity, with the precise format depending on object type, and each part within the format being schema-qualified and quoted as necessary.

pg_identify_object_as_address returns a row containing enough information to uniquely identify the database object specified by catalog OID, object OID, and (possibly zero) sub-object ID. The returned information is independent of the current server; that is, it could be used to identify an identically named object in another server. type identifies the type of database object; name and args are text arrays that together form a reference to the object. These three columns can be passed to pg_get_object_address to obtain the internal address of the object. This function is the inverse of pg_get_object_address.

pg_get_object_address returns a row containing enough information to uniquely identify the database object specified by its type and object name and argument arrays. The returned values are the ones that would be used in system catalogs such as pg_depend, and can be passed to other system functions such as pg_identify_object or pg_describe_object. catalog_id is the OID of the system catalog containing the object; object_id is the OID of the object itself, and object_sub_id is the sub-object ID, or zero if none. This function is the inverse of pg_identify_object_as_address.
The functions shown in Table 9.68 extract comments previously stored with the COMMENT command. A null value is returned if no comment could be found for the specified parameters.

col_description returns the comment for a table column, which is specified by the OID of its table and its column number. (obj_description cannot be used for table columns, since columns do not have OIDs of their own.)

The two-parameter form of obj_description returns the comment for a database object specified by its OID and the name of the containing system catalog. For example, obj_description(123456, 'pg_class') would retrieve the comment for the table with OID 123456. The one-parameter form of obj_description requires only the object OID. It is deprecated since there is no guarantee that OIDs are unique across different system catalogs; therefore, the wrong comment might be returned.

shobj_description is used just like obj_description, except that it is used for retrieving comments on shared objects. Some system catalogs are global to all databases within each cluster, and the comments for objects in them are stored globally as well.
The functions shown in Table 9.69 provide server transaction information in an exportable form. The main use of these functions is to determine which transactions were committed between two snapshots.

The internal transaction ID type (xid) is 32 bits wide and wraps around every 4 billion transactions. However, these functions export a 64-bit format that is extended with an "epoch" counter, so it will not wrap around during the life of an installation. The data type used by these functions, txid_snapshot, stores information about transaction ID visibility at a particular moment in time. Its components are described in Table 9.70.

txid_snapshot's textual representation is xmin:xmax:xip_list. For example, 10:20:10,14,15 means xmin = 10, xmax = 20, xip_list = 10, 14, 15.

txid_status(bigint) reports the commit status of a recent transaction. Applications may use it to determine whether a transaction committed or aborted when the application and database server become disconnected while a COMMIT is in progress. The status of a transaction will be reported as in progress, committed, or aborted, provided that the transaction is recent enough that the system retains its commit status. If it is old enough that no references to that transaction survive in the system and the commit status information has been discarded, this function will return NULL. Note that prepared transactions are reported as in progress; applications must check pg_prepared_xacts if they need to determine whether the txid is a prepared transaction.

The functions shown in Table 9.71 provide information about transactions that have already been committed. These functions mainly provide information about when the transactions were committed. They only provide useful data when the track_commit_timestamp configuration option is enabled, and only for transactions that were committed after it was enabled.

The functions shown in Table 9.72 print information initialized during initdb, such as the catalog version. They also show information about write-ahead logging and checkpoint processing. This information is cluster-wide, and not specific to any one database. They provide most of the same information, from the same source, as pg_control, although in a form better suited to SQL functions.
pg_control_checkpoint returns a record, shown in Table 9.73 (pg_control_checkpoint Columns).

pg_control_system returns a record, shown in Table 9.74 (pg_control_system Columns).

pg_control_init returns a record, shown in Table 9.75 (pg_control_init Columns).

pg_control_recovery returns a record, shown in Table 9.76 (pg_control_recovery Columns).
PostgreSQL provides these helper functions to retrieve information from event triggers.

For more information about event triggers, see Chapter 39.
pg_event_trigger_ddl_commands returns a list of DDL commands executed by each user action, when invoked in a function attached to a ddl_command_end event trigger. If called in any other context, an error is raised. pg_event_trigger_ddl_commands returns one row for each base command executed; some commands that are a single SQL sentence may return more than one row. This function returns the following columns:
pg_event_trigger_dropped_objects returns a list of all objects dropped by the command in whose sql_drop event it is called. If called in any other context, pg_event_trigger_dropped_objects raises an error. pg_event_trigger_dropped_objects returns the following columns:
The pg_event_trigger_dropped_objects function can be used in an event trigger like this:
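A sketch along the lines of the upstream documentation (PL/pgSQL; the trigger and function names are illustrative):

```sql
CREATE FUNCTION test_event_trigger_for_drops()
        RETURNS event_trigger LANGUAGE plpgsql AS $$
DECLARE
    obj record;
BEGIN
    FOR obj IN SELECT * FROM pg_event_trigger_dropped_objects()
    LOOP
        RAISE NOTICE '% dropped object: % %.% %',
                     tg_tag,
                     obj.object_type,
                     obj.schema_name,
                     obj.object_name,
                     obj.object_identity;
    END LOOP;
END
$$;
CREATE EVENT TRIGGER test_event_trigger_for_drops
   ON sql_drop
   EXECUTE PROCEDURE test_event_trigger_for_drops();
```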
The functions shown in Table 9.90 provide information about a table for which a table_rewrite event has just been called. If called in any other context, an error is raised.

Table 9.90. Table Rewrite Information
The pg_event_trigger_table_rewrite_oid function can be used in an event trigger like this:
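A sketch (PL/pgSQL; the trigger and function names are illustrative):

```sql
CREATE FUNCTION test_event_trigger_table_rewrite_oid()
 RETURNS event_trigger
 LANGUAGE plpgsql AS
$$
BEGIN
  RAISE NOTICE 'rewriting table % for reason %',
                pg_event_trigger_table_rewrite_oid()::regclass,
                pg_event_trigger_table_rewrite_reason();
END;
$$;

CREATE EVENT TRIGGER test_table_rewrite_oid
                  ON table_rewrite
   EXECUTE PROCEDURE test_event_trigger_table_rewrite_oid();
```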
See Section 8.17 for an overview of range types.

Table 9.50 shows the operators available for range types.
Table 9.50. Range Operators
The simple comparison operators <, >, <=, and >= compare the lower bounds first, and only if those are equal, compare the upper bounds. These comparisons are not usually very useful for ranges, but are provided to allow B-tree indexes to be constructed on ranges.
The left-of/right-of/adjacent operators always return false when an empty range is involved; that is, an empty range is not considered to be either before or after any other range.
The union and difference operators will fail if the resulting range would need to contain two disjoint sub-ranges, as such a range cannot be represented.
Table 9.51 shows the functions available for use with range types.
Table 9.51. Range Functions
The lower and upper functions return null if the range is empty or the requested bound is infinite. The lower_inc, upper_inc, lower_inf, and upper_inf functions all return false for an empty range.
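A few illustrative calls (results noted only where the standard behavior makes them unambiguous):

```sql
SELECT int4range(10, 20) @> 15;                      -- containment: true
SELECT numrange(1.0, 14.0) && numrange(10.0, 20.0);  -- overlap: true
SELECT lower(int8range(15, 25));                     -- 15
SELECT upper_inc('empty'::int4range);                -- false (empty range)
```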
expand multiple arrays (possibly of different types) to a set of rows. This is only allowed in the FROM clause; see
concatenation of XML values (see also )
translation mode (print localized day and month names based on)
Current date and time (changes during statement execution); see
Current date; see
Current time of day; see
Current date and time (start of current transaction); see
Get subfield (equivalent to extract
); see
Get subfield (equivalent to extract
); see
Truncate to specified precision; see also
Truncate to specified precision; see also
Get subfield; see
Get subfield; see
Current time of day; see
Current date and time (start of current transaction); see
Current date and time (start of current transaction); see
Current date and time (start of current statement); see
Current date and time (like clock_timestamp
, but as a text string); see
Current date and time (start of current transaction); see
| Operator | Right Operand Type | Description | Example | Example Result |
| --- | --- | --- | --- | --- |
| `->` | `int` | Get JSON array element (indexed from zero, negative integers count from the end) | `'[{"a":"foo"},{"b":"bar"},{"c":"baz"}]'::json->2` | `{"c":"baz"}` |
| `->` | `text` | Get JSON object field by key | `'{"a": {"b":"foo"}}'::json->'a'` | `{"b":"foo"}` |
| `->>` | `int` | Get JSON array element as `text` | `'[1,2,3]'::json->>2` | `3` |
| `->>` | `text` | Get JSON object field as `text` | `'{"a":1,"b":2}'::json->>'b'` | `2` |
| `#>` | `text[]` | Get JSON object at specified path | `'{"a": {"b":{"c": "foo"}}}'::json#>'{a,b}'` | `{"c": "foo"}` |
| `#>>` | `text[]` | Get JSON object at specified path as `text` | `'{"a":[1,2,3],"b":[4,5,6]}'::json#>>'{a,2}'` | `3` |
| Operator | Right Operand Type | Description | Example |
| --- | --- | --- | --- |
| `@>` | `jsonb` | Does the left JSON value contain the right JSON path/value entries at the top level? | `'{"a":1, "b":2}'::jsonb @> '{"b":2}'::jsonb` |
| `<@` | `jsonb` | Are the left JSON path/value entries contained at the top level within the right JSON value? | `'{"b":2}'::jsonb <@ '{"a":1, "b":2}'::jsonb` |
| `?` | `text` | Does the *string* exist as a top-level key within the JSON value? | `'{"a":1, "b":2}'::jsonb ? 'b'` |
| `?\|` | `text[]` | Do any of these array *strings* exist as top-level keys? | `'{"a":1, "b":2, "c":3}'::jsonb ?\| array['b', 'c']` |
| `?&` | `text[]` | Do all of these array *strings* exist as top-level keys? | `'["a", "b"]'::jsonb ?& array['a', 'b']` |
| `\|\|` | `jsonb` | Concatenate two `jsonb` values into a new `jsonb` value | `'["a", "b"]'::jsonb \|\| '["c", "d"]'::jsonb` |
| `-` | `text` | Delete key/value pair or *string* element from left operand. Key/value pairs are matched based on their key value. | `'{"a": "b"}'::jsonb - 'a'` |
| `-` | `text[]` | Delete multiple key/value pairs or *string* elements from left operand. Key/value pairs are matched based on their key value. | `'{"a": "b", "c": "d"}'::jsonb - '{a,c}'::text[]` |
| `-` | `integer` | Delete the array element with specified index (negative integers count from the end). Throws an error if the top level container is not an array. | `'["a", "b"]'::jsonb - 1` |
| `#-` | `text[]` | Delete the field or element with specified path (for JSON arrays, negative integers count from the end) | `'["a", {"b":1}]'::jsonb #- '{1,b}'` |
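A few of these operators in action:

```sql
SELECT '{"a":1, "b":2}'::jsonb @> '{"b":2}'::jsonb;  -- true
SELECT '["a", "b"]'::jsonb || '["c"]'::jsonb;        -- ["a", "b", "c"]
SELECT '{"a": "b", "c": "d"}'::jsonb - 'a';          -- {"c": "d"}
```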
| Function | Description | Example | Example Result |
| --- | --- | --- | --- |
| `to_json(anyelement)`<br>`to_jsonb(anyelement)` | Returns the value as `json` or `jsonb`. Arrays and composites are converted (recursively) to arrays and objects; otherwise, if there is a cast from the type to `json`, the cast function will be used to perform the conversion; otherwise, a scalar value is produced. For any scalar type other than a number, a Boolean, or a null value, the text representation will be used, in such a fashion that it is a valid `json` or `jsonb` value. | `to_json('Fred said "Hi."'::text)` | `"Fred said \"Hi.\""` |
| `array_to_json(anyarray [, pretty_bool])` | Returns the array as a JSON array. A PostgreSQL multidimensional array becomes a JSON array of arrays. Line feeds will be added between dimension-1 elements if `pretty_bool` is true. | `array_to_json('{ {1,5},{99,100} }'::int[])` | `[[1,5],[99,100]]` |
| `row_to_json(record [, pretty_bool])` | Returns the row as a JSON object. Line feeds will be added between level-1 elements if `pretty_bool` is true. | `row_to_json(row(1,'foo'))` | `{"f1":1,"f2":"foo"}` |
| `json_build_array(VARIADIC "any")`<br>`jsonb_build_array(VARIADIC "any")` | Builds a possibly-heterogeneously-typed JSON array out of a variadic argument list. | `json_build_array(1,2,'3',4,5)` | `[1, 2, "3", 4, 5]` |
| `json_build_object(VARIADIC "any")`<br>`jsonb_build_object(VARIADIC "any")` | Builds a JSON object out of a variadic argument list. By convention, the argument list consists of alternating keys and values. | `json_build_object('foo',1,'bar',2)` | `{"foo": 1, "bar": 2}` |
| `json_object(text[])`<br>`jsonb_object(text[])` | Builds a JSON object out of a text array. The array must have either exactly one dimension with an even number of members, in which case they are taken as alternating key/value pairs, or two dimensions such that each inner array has exactly two elements, which are taken as a key/value pair. | `json_object('{a, 1, b, "def", c, 3.5}')`<br>`json_object('{ {a, 1},{b, "def"},{c, 3.5} }')` | `{"a": "1", "b": "def", "c": "3.5"}` |
| `json_object(keys text[], values text[])`<br>`jsonb_object(keys text[], values text[])` | This form of `json_object` takes keys and values pairwise from two separate arrays. In all other respects it is identical to the one-argument form. | `json_object('{a, b}', '{1,2}')` | `{"a": "1", "b": "2"}` |
| Function | Return Type | Description | Example | Example Result |
| --- | --- | --- | --- | --- |
| `json_array_length(json)`<br>`jsonb_array_length(jsonb)` | `int` | Returns the number of elements in the outermost JSON array. | `json_array_length('[1,2,3,{"f1":1,"f2":[5,6]},4]')` | `5` |
| `json_each(json)`<br>`jsonb_each(jsonb)` | `setof key text, value json`<br>`setof key text, value jsonb` | Expands the outermost JSON object into a set of key/value pairs. | `select * from json_each('{"a":"foo", "b":"bar"}')` | `a \| "foo"`<br>`b \| "bar"` |
| `json_each_text(json)`<br>`jsonb_each_text(jsonb)` | `setof key text, value text` | Expands the outermost JSON object into a set of key/value pairs. The returned values will be of type `text`. | `select * from json_each_text('{"a":"foo", "b":"bar"}')` | `a \| foo`<br>`b \| bar` |
| `json_extract_path(from_json json, VARIADIC path_elems text[])`<br>`jsonb_extract_path(from_json jsonb, VARIADIC path_elems text[])` | `json`<br>`jsonb` | Returns JSON value pointed to by `path_elems` (equivalent to `#>` operator). | `json_extract_path('{"f2":{"f3":1},"f4":{"f5":99,"f6":"foo"}}','f4')` | `{"f5":99,"f6":"foo"}` |
| `json_extract_path_text(from_json json, VARIADIC path_elems text[])`<br>`jsonb_extract_path_text(from_json jsonb, VARIADIC path_elems text[])` | `text` | Returns JSON value pointed to by `path_elems` as `text` (equivalent to `#>>` operator). | `json_extract_path_text('{"f2":{"f3":1},"f4":{"f5":99,"f6":"foo"}}','f4', 'f6')` | `foo` |
| `json_object_keys(json)`<br>`jsonb_object_keys(jsonb)` | `setof text` | Returns set of keys in the outermost JSON object. | `json_object_keys('{"f1":"abc","f2":{"f3":"a", "f4":"b"}}')` | `f1`<br>`f2` |
| `json_populate_record(base anyelement, from_json json)`<br>`jsonb_populate_record(base anyelement, from_json jsonb)` | `anyelement` | Expands the object in `from_json` to a row whose columns match the record type defined by `base` (see note below). | `select * from json_populate_record(null::myrowtype, '{"a": 1, "b": ["2", "a b"], "c": {"d": 4, "e": "a b c"}}')` | `1 \| {2,"a b"} \| (4,"a b c")` |
| `json_populate_recordset(base anyelement, from_json json)`<br>`jsonb_populate_recordset(base anyelement, from_json jsonb)` | `setof anyelement` | Expands the outermost array of objects in `from_json` to a set of rows whose columns match the record type defined by `base` (see note below). | `select * from json_populate_recordset(null::myrowtype, '[{"a":1,"b":2},{"a":3,"b":4}]')` | `1 \| 2`<br>`3 \| 4` |
| `json_array_elements(json)`<br>`jsonb_array_elements(jsonb)` | `setof json`<br>`setof jsonb` | Expands a JSON array to a set of JSON values. | `select * from json_array_elements('[1,true, [2,false]]')` | `1`<br>`true`<br>`[2,false]` |
| `json_array_elements_text(json)`<br>`jsonb_array_elements_text(jsonb)` | `setof text` | Expands a JSON array to a set of `text` values. | `select * from json_array_elements_text('["foo", "bar"]')` | `foo`<br>`bar` |
| `json_typeof(json)`<br>`jsonb_typeof(jsonb)` | `text` | Returns the type of the outermost JSON value as a text string. Possible types are `object`, `array`, `string`, `number`, `boolean`, and `null`. | `json_typeof('-123.4')` | `number` |
| `json_to_record(json)`<br>`jsonb_to_record(jsonb)` | `record` | Builds an arbitrary record from a JSON object (see note below). As with all functions returning `record`, the caller must explicitly define the structure of the record with an `AS` clause. | `select * from json_to_record('{"a":1,"b":[1,2,3],"c":[1,2,3],"e":"bar","r": {"a": 123, "b": "a b c"}}') as x(a int, b text, c int[], d text, r myrowtype)` | `1 \| [1,2,3] \| {1,2,3} \| \| (123,"a b c")` |
| `json_to_recordset(json)`<br>`jsonb_to_recordset(jsonb)` | `setof record` | Builds an arbitrary set of records from a JSON array of objects (see note below). As with all functions returning `record`, the caller must explicitly define the structure of the record with an `AS` clause. | `select * from json_to_recordset('[{"a":1,"b":"foo"},{"a":"2","c":"bar"}]') as x(a int, b text);` | `1 \| foo`<br>`2 \|` |
| `json_strip_nulls(from_json json)`<br>`jsonb_strip_nulls(from_json jsonb)` | `json`<br>`jsonb` | Returns `from_json` with all object fields that have null values omitted. Other null values are untouched. | `json_strip_nulls('[{"f1":1,"f2":null},2,null,3]')` | `[{"f1":1},2,null,3]` |
| `jsonb_set(target jsonb, path text[], new_value jsonb [, create_missing boolean])` | `jsonb` | Returns `target` with the section designated by `path` replaced by `new_value`, or with `new_value` added if `create_missing` is true (default is `true`) and the item designated by `path` does not exist. As with the path oriented operators, negative integers that appear in `path` count from the end of JSON arrays. | `jsonb_set('[{"f1":1,"f2":null},2,null,3]', '{0,f1}','[2,3,4]', false)`<br>`jsonb_set('[{"f1":1,"f2":null},2]', '{0,f3}','[2,3,4]')` | `[{"f1":[2,3,4],"f2":null},2,null,3]`<br>`[{"f1": 1, "f2": null, "f3": [2, 3, 4]}, 2]` |
| `jsonb_insert(target jsonb, path text[], new_value jsonb [, insert_after boolean])` | `jsonb` | Returns `target` with `new_value` inserted. If the `target` section designated by `path` is in a JSONB array, `new_value` will be inserted before the target, or after it if `insert_after` is true (default is `false`). If the `target` section designated by `path` is in a JSONB object, `new_value` will be inserted only if `target` does not exist. As with the path oriented operators, negative integers that appear in `path` count from the end of JSON arrays. | `jsonb_insert('{"a": [0,1,2]}', '{a, 1}', '"new_value"')`<br>`jsonb_insert('{"a": [0,1,2]}', '{a, 1}', '"new_value"', true)` | `{"a": [0, "new_value", 1, 2]}`<br>`{"a": [0, 1, "new_value", 2]}` |
| `jsonb_pretty(from_json jsonb)` | `text` | Returns `from_json` as indented JSON text. | `jsonb_pretty('[{"f1":1,"f2":null},2,null,3]')` | `[ { "f1": 1, "f2": null }, 2, null, 3 ]` (indented, one element per line) |
| Function | Argument Type | Return Type | Description |
| --- | --- | --- | --- |
| `generate_series(start, stop)` | `int`, `bigint`, or `numeric` | `setof int`, `setof bigint`, or `setof numeric` (same as argument type) | Generate a series of values, from `start` to `stop` with a step size of one |
| `generate_series(start, stop, step)` | `int`, `bigint`, or `numeric` | `setof int`, `setof bigint`, or `setof numeric` (same as argument type) | Generate a series of values, from `start` to `stop` with a step size of `step` |
| `generate_series(start, stop, step interval)` | `timestamp` or `timestamp with time zone` | `setof timestamp` or `setof timestamp with time zone` (same as argument type) | Generate a series of values, from `start` to `stop` with a step size of `step` |
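For example:

```sql
SELECT * FROM generate_series(2, 10, 3);
-- 2, 5, 8

SELECT * FROM generate_series('2008-03-01 00:00'::timestamp,
                              '2008-03-04 12:00', '10 hours');
-- one row every 10 hours, from 2008-03-01 00:00:00 up to 2008-03-04 08:00:00
```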
| Function | Return Type | Description |
| --- | --- | --- |
| `generate_subscripts(array anyarray, dim int)` | `setof int` | Generate a series comprising the given array's subscripts. |
| `generate_subscripts(array anyarray, dim int, reverse boolean)` | `setof int` | Generate a series comprising the given array's subscripts. When `reverse` is true, the series is returned in reverse order. |
| Function | Return Type | Description |
| --- | --- | --- |
| `current_setting(setting_name [, missing_ok ])` | `text` | Get current value of setting |
| `set_config(setting_name, new_value, is_local)` | `text` | Set parameter and return new value |
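For example (the exact value returned depends on the server's configuration):

```sql
SELECT current_setting('datestyle');
SELECT set_config('log_statement_stats', 'off', false);  -- returns 'off'
```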
| Name | Return Type | Description |
| --- | --- | --- |
| `pg_cancel_backend(pid int)` | `boolean` | Cancel a backend's current query. This is also allowed if the calling role is a member of the role whose backend is being canceled or the calling role has been granted `pg_signal_backend`; however, only superusers can cancel superuser backends. |
| `pg_reload_conf()` | `boolean` | Cause server processes to reload their configuration files |
| `pg_rotate_logfile()` | `boolean` | Rotate server's log file |
| `pg_terminate_backend(pid int)` | `boolean` | Terminate a backend. This is also allowed if the calling role is a member of the role whose backend is being terminated or the calling role has been granted `pg_signal_backend`; however, only superusers can terminate superuser backends. |
| Name | Return Type | Description |
| --- | --- | --- |
| `pg_create_restore_point(name text)` | `pg_lsn` | Create a named point for performing restore (restricted to superusers by default, but other users can be granted EXECUTE to run the function) |
| `pg_current_wal_flush_lsn()` | `pg_lsn` | Get current write-ahead log flush location |
| `pg_current_wal_insert_lsn()` | `pg_lsn` | Get current write-ahead log insert location |
| `pg_current_wal_lsn()` | `pg_lsn` | Get current write-ahead log write location |
| `pg_start_backup(label text [, fast boolean [, exclusive boolean ]])` | `pg_lsn` | Prepare for performing on-line backup (restricted to superusers by default, but other users can be granted EXECUTE to run the function) |
| `pg_stop_backup()` | `pg_lsn` | Finish performing exclusive on-line backup (restricted to superusers by default, but other users can be granted EXECUTE to run the function) |
| `pg_stop_backup(exclusive boolean [, wait_for_archive boolean ])` | `setof record` | Finish performing exclusive or non-exclusive on-line backup (restricted to superusers by default, but other users can be granted EXECUTE to run the function) |
| `pg_is_in_backup()` | `bool` | True if an on-line exclusive backup is still in progress. |
| `pg_backup_start_time()` | `timestamp with time zone` | Get start time of an on-line exclusive backup in progress. |
| `pg_switch_wal()` | `pg_lsn` | Force switch to a new write-ahead log file (restricted to superusers by default, but other users can be granted EXECUTE to run the function) |
| `pg_walfile_name(lsn pg_lsn)` | `text` | Convert write-ahead log location to file name |
| `pg_walfile_name_offset(lsn pg_lsn)` | `text`, `integer` | Convert write-ahead log location to file name and decimal byte offset within file |
| `pg_wal_lsn_diff(lsn pg_lsn, lsn pg_lsn)` | `numeric` | Calculate the difference between two write-ahead log locations |
| Name | Return Type | Description |
| --- | --- | --- |
| `pg_is_in_recovery()` | `bool` | True if recovery is still in progress. |
| `pg_last_wal_receive_lsn()` | `pg_lsn` | Get last write-ahead log location received and synced to disk by streaming replication. While streaming replication is in progress this will increase monotonically. If recovery has completed this will remain static at the value of the last WAL record received and synced to disk during recovery. If streaming replication is disabled, or if it has not yet started, the function returns NULL. |
| `pg_last_wal_replay_lsn()` | `pg_lsn` | Get last write-ahead log location replayed during recovery. If recovery is still in progress this will increase monotonically. If recovery has completed then this value will remain static at the value of the last WAL record applied during that recovery. When the server has been started normally without recovery the function returns NULL. |
| `pg_last_xact_replay_timestamp()` | `timestamp with time zone` | Get time stamp of last transaction replayed during recovery. This is the time at which the commit or abort WAL record for that transaction was generated on the primary. If no transactions have been replayed during recovery, this function returns NULL. Otherwise, if recovery is still in progress this will increase monotonically. If recovery has completed then this value will remain static at the value of the last transaction applied during that recovery. When the server has been started normally without recovery the function returns NULL. |
| Name | Return Type | Description |
| --- | --- | --- |
| `pg_is_wal_replay_paused()` | `bool` | True if recovery is paused. |
| `pg_wal_replay_pause()` | `void` | Pauses recovery immediately (restricted to superusers by default, but other users can be granted EXECUTE to run the function). |
| `pg_wal_replay_resume()` | `void` | Restarts recovery if it was paused (restricted to superusers by default, but other users can be granted EXECUTE to run the function). |
| Name | Return Type | Description |
| --- | --- | --- |
| `pg_export_snapshot()` | `text` | Save the current snapshot and return its identifier |
| Function | Return Type | Description |
| --- | --- | --- |
| `pg_create_physical_replication_slot(slot_name name [, immediately_reserve boolean, temporary boolean ])` | `(slot_name name, lsn pg_lsn)` | Creates a new physical replication slot named `slot_name`. The optional second parameter, when `true`, specifies that the LSN for this replication slot be reserved immediately; otherwise the LSN is reserved on first connection from a streaming replication client. Streaming changes from a physical slot is only possible with the streaming-replication protocol (see Section 53.4). The optional third parameter, `temporary`, when set to true, specifies that the slot should not be permanently stored to disk and is only meant for use by the current session. Temporary slots are also released upon any error. This function corresponds to the replication protocol command `CREATE_REPLICATION_SLOT ... PHYSICAL`. |
| `pg_drop_replication_slot(slot_name name)` | `void` | Drops the physical or logical replication slot named `slot_name`. Same as replication protocol command `DROP_REPLICATION_SLOT`. For logical slots, this must be called while connected to the same database the slot was created on. |
| `pg_create_logical_replication_slot(slot_name name, plugin name [, temporary boolean ])` | `(slot_name name, lsn pg_lsn)` | Creates a new logical (decoding) replication slot named `slot_name` using the output plugin `plugin`. The optional third parameter, `temporary`, when set to true, specifies that the slot should not be permanently stored to disk and is only meant for use by the current session. Temporary slots are also released upon any error. A call to this function has the same effect as the replication protocol command `CREATE_REPLICATION_SLOT ... LOGICAL`. |
| `pg_logical_slot_get_changes(slot_name name, upto_lsn pg_lsn, upto_nchanges int, VARIADIC options text[])` | `(lsn pg_lsn, xid xid, data text)` | Returns changes in the slot `slot_name`, starting from the point at which changes were last consumed. If `upto_lsn` and `upto_nchanges` are NULL, logical decoding will continue until end of WAL. If `upto_lsn` is non-NULL, decoding will include only those transactions which commit prior to the specified LSN. If `upto_nchanges` is non-NULL, decoding will stop when the number of rows produced by decoding exceeds the specified value. Note, however, that the actual number of rows returned may be larger, since this limit is only checked after adding the rows produced when decoding each new transaction commit. |
| `pg_logical_slot_peek_changes(slot_name name, upto_lsn pg_lsn, upto_nchanges int, VARIADIC options text[])` | `(lsn pg_lsn, xid xid, data text)` | Behaves just like the `pg_logical_slot_get_changes()` function, except that changes are not consumed; that is, they will be returned again on future calls. |
| `pg_logical_slot_get_binary_changes(slot_name name, upto_lsn pg_lsn, upto_nchanges int, VARIADIC options text[])` | `(lsn pg_lsn, xid xid, data bytea)` | Behaves just like the `pg_logical_slot_get_changes()` function, except that changes are returned as `bytea`. |
| `pg_logical_slot_peek_binary_changes(slot_name name, upto_lsn pg_lsn, upto_nchanges int, VARIADIC options text[])` | `(lsn pg_lsn, xid xid, data bytea)` | Behaves just like the `pg_logical_slot_get_changes()` function, except that changes are returned as `bytea` and that changes are not consumed; that is, they will be returned again on future calls. |
| `pg_replication_slot_advance(slot_name name, upto_lsn pg_lsn)` | `(slot_name name, end_lsn pg_lsn) bool` | Advances the current confirmed position of a replication slot named `slot_name`. The slot will not be moved backwards, and it will not be moved beyond the current insert location. Returns the name of the slot and the actual position that it was advanced to. |
| `pg_replication_origin_create(node_name text)` | `oid` | Create a replication origin with the given external name, and return the internal id assigned to it. |
| `pg_replication_origin_drop(node_name text)` | `void` | Delete a previously created replication origin, including any associated replay progress. |
| `pg_replication_origin_oid(node_name text)` | `oid` | Look up a replication origin by name and return the internal id. If no corresponding replication origin is found an error is thrown. |
| `pg_replication_origin_session_setup(node_name text)` | `void` | Mark the current session as replaying from the given origin, allowing replay progress to be tracked. Use `pg_replication_origin_session_reset` to revert. Can only be used if no previous origin is configured. |
| `pg_replication_origin_session_reset()` | `void` | Cancel the effects of `pg_replication_origin_session_setup()`. |
| `pg_replication_origin_session_is_setup()` | `bool` | Has a replication origin been configured in the current session? |
| `pg_replication_origin_session_progress(flush bool)` | `pg_lsn` | Return the replay location for the replication origin configured in the current session. The parameter `flush` determines whether the corresponding local transaction will be guaranteed to have been flushed to disk or not. |
| `pg_replication_origin_xact_setup(origin_lsn pg_lsn, origin_timestamp timestamptz)` | `void` | Mark the current transaction as replaying a transaction that has committed at the given LSN and timestamp. Can only be called when a replication origin has previously been configured using `pg_replication_origin_session_setup()`. |
| `pg_replication_origin_xact_reset()` | `void` | Cancel the effects of `pg_replication_origin_xact_setup()`. |
| `pg_replication_origin_advance(node_name text, lsn pg_lsn)` | `void` | Set replication progress for the given node to the given location. This is primarily useful for setting up the initial location, or setting a new location after configuration changes and similar. Be aware that careless use of this function can lead to inconsistently replicated data. |
| `pg_replication_origin_progress(node_name text, flush bool)` | `pg_lsn` | Return the replay location for the given replication origin. The parameter `flush` determines whether the corresponding local transaction will be guaranteed to have been flushed to disk or not. |
| `pg_logical_emit_message(transactional bool, prefix text, content text)` | `pg_lsn` | Emit a text logical decoding message. This can be used to pass generic messages to logical decoding plugins through WAL. The parameter `transactional` specifies if the message should be part of the current transaction or if it should be written immediately and decoded as soon as the logical decoding reads the record. The `prefix` is a textual prefix used by the logical decoding plugins to easily recognize messages that are interesting for them. The `content` is the text of the message. |
| `pg_logical_emit_message(transactional bool, prefix text, content bytea)` | `pg_lsn` | Emit a binary logical decoding message. This can be used to pass generic messages to logical decoding plugins through WAL. The parameter `transactional` specifies if the message should be part of the current transaction or if it should be written immediately and decoded as soon as the logical decoding reads the record. The `prefix` is a textual prefix used by the logical decoding plugins to easily recognize messages that are interesting for them. The `content` is the binary content of the message. |
Name
Return Type
Description
pg_column_size(any
)
int
Number of bytes used to store a particular value (possibly compressed)
pg_database_size(oid
)
bigint
Disk space used by the database with the specified OID
pg_database_size(name
)
bigint
Disk space used by the database with the specified name
pg_indexes_size(regclass
)
bigint
Total disk space used by indexes attached to the specified table
pg_relation_size(
relation
regclass
, fork
text
)
bigint
Disk space used by the specified fork ('main'
, 'fsm'
, 'vm'
, or 'init'
) of the specified table or index
pg_relation_size(
relation
regclass
)
bigint
Shorthand for pg_relation_size(..., 'main')
pg_size_bytes(text
)
bigint
Converts a size in human-readable format with size units into bytes
pg_size_pretty(bigint
)
text
Converts a size in bytes expressed as a 64-bit integer into a human-readable format with size units
pg_size_pretty(numeric
)
text
Converts a size in bytes expressed as a numeric value into a human-readable format with size units
pg_table_size(regclass
)
bigint
Disk space used by the specified table, excluding indexes (but including TOAST, free space map, and visibility map)
pg_tablespace_size(oid
)
bigint
Disk space used by the tablespace with the specified OID
pg_tablespace_size(name
)
bigint
Disk space used by the tablespace with the specified name
pg_total_relation_size(regclass
)
bigint
Total disk space used by the specified table, including all indexes and TOAST data
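As a sketch of how these functions combine (pg_class is used here only because it exists in every database):

```sql
-- Raw and human-readable sizes for one relation
SELECT pg_relation_size('pg_class')                        AS main_fork_bytes,
       pg_total_relation_size('pg_class')                  AS total_bytes,
       pg_size_pretty(pg_total_relation_size('pg_class'))  AS total_pretty;

-- pg_size_bytes() parses in the other direction
SELECT pg_size_bytes('1 GB');  -- 1073741824
```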
Name
Return Type
Description
pg_relation_filenode(
relation
regclass
)
oid
Filenode number of the specified relation
pg_relation_filepath(
relation
regclass
)
text
File path name of the specified relation
pg_filenode_relation(
tablespace
oid
, filenode
oid
)
regclass
Find the relation associated with a given tablespace and filenode
Name
Return Type
Description
pg_collation_actual_version(oid
)
text
Return actual version of collation from operating system
pg_import_system_collations(
schema
regnamespace
)
integer
Import operating system collations
Name
Return Type
Description
brin_summarize_new_values(
index
regclass
)
integer
summarize page ranges not already summarized
brin_summarize_range(
index
regclass
, blockNumber
bigint
)
integer
summarize the page range covering the given block, if not already summarized
brin_desummarize_range(
index
regclass
, blockNumber
bigint
)
integer
de-summarize the page range covering the given block, if summarized
gin_clean_pending_list(
index
regclass
)
bigint
move GIN pending list entries into main index structure
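A typical maintenance call looks like the following; the index names are hypothetical:

```sql
-- Summarize any BRIN page ranges that are not yet summarized
SELECT brin_summarize_new_values('my_brin_idx');

-- Flush the GIN pending list into the main index structure
SELECT gin_clean_pending_list('my_gin_idx');
```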
Name
Return Type
Description
pg_ls_dir(
dirname
text
[, missing_ok
boolean
,include_dot_dirs
boolean
])
setof text
List the contents of a directory. Restricted to superusers by default, but other users can be granted EXECUTE to run the function.
pg_ls_logdir()
setof record
List the name, size, and last modification time of files in the log directory. Access is granted to members of the pg_monitor
role and may be granted to other non-superuser roles.
pg_ls_waldir()
setof record
List the name, size, and last modification time of files in the WAL directory. Access is granted to members of the pg_monitor
role and may be granted to other non-superuser roles.
pg_read_file(
filename
text
[, offset
bigint
, length
bigint
[, missing_ok
boolean
] ])
text
Return the contents of a text file. Restricted to superusers by default, but other users can be granted EXECUTE to run the function.
pg_read_binary_file(
filename
text
[, offset
bigint
,length
bigint
[, missing_ok
boolean
] ])
bytea
Return the contents of a file. Restricted to superusers by default, but other users can be granted EXECUTE to run the function.
pg_stat_file(
filename
text
[, missing_ok
boolean
])
record
Return information about a file. Restricted to superusers by default, but other users can be granted EXECUTE to run the function.
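For example, assuming sufficient privileges (superuser, or GRANT EXECUTE on the functions):

```sql
-- First 200 bytes of postgresql.conf, resolved relative to the data directory
SELECT pg_read_file('postgresql.conf', 0, 200);

-- File metadata; with missing_ok set to true the call returns NULL rather
-- than raising an error if the file does not exist
SELECT * FROM pg_stat_file('postmaster.pid', true);
```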
Name
Return Type
Description
pg_advisory_lock(
key
bigint
)
void
Obtain exclusive session level advisory lock
pg_advisory_lock(
key1
int
, key2
int
)
void
Obtain exclusive session level advisory lock
pg_advisory_lock_shared(
key
bigint
)
void
Obtain shared session level advisory lock
pg_advisory_lock_shared(
key1
int
, key2
int
)
void
Obtain shared session level advisory lock
pg_advisory_unlock(
key
bigint
)
boolean
Release an exclusive session level advisory lock
pg_advisory_unlock(
key1
int
, key2
int
)
boolean
Release an exclusive session level advisory lock
pg_advisory_unlock_all()
void
Release all session level advisory locks held by the current session
pg_advisory_unlock_shared(
key
bigint
)
boolean
Release a shared session level advisory lock
pg_advisory_unlock_shared(
key1
int
, key2
int
)
boolean
Release a shared session level advisory lock
pg_advisory_xact_lock(
key
bigint
)
void
Obtain exclusive transaction level advisory lock
pg_advisory_xact_lock(
key1
int
, key2
int
)
void
Obtain exclusive transaction level advisory lock
pg_advisory_xact_lock_shared(
key
bigint
)
void
Obtain shared transaction level advisory lock
pg_advisory_xact_lock_shared(
key1
int
, key2
int
)
void
Obtain shared transaction level advisory lock
pg_try_advisory_lock(
key
bigint
)
boolean
Obtain exclusive session level advisory lock if available
pg_try_advisory_lock(
key1
int
, key2
int
)
boolean
Obtain exclusive session level advisory lock if available
pg_try_advisory_lock_shared(
key
bigint
)
boolean
Obtain shared session level advisory lock if available
pg_try_advisory_lock_shared(
key1
int
, key2
int
)
boolean
Obtain shared session level advisory lock if available
pg_try_advisory_xact_lock(
key
bigint
)
boolean
Obtain exclusive transaction level advisory lock if available
pg_try_advisory_xact_lock(
key1
int
, key2
int
)
boolean
Obtain exclusive transaction level advisory lock if available
pg_try_advisory_xact_lock_shared(
key
bigint
)
boolean
Obtain shared transaction level advisory lock if available
pg_try_advisory_xact_lock_shared(
key1
int
, key2
int
)
boolean
Obtain shared transaction level advisory lock if available
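A common pattern is to serialize application-level work on an arbitrary key (42 below is application-chosen):

```sql
-- Session-level lock: held until explicitly released or the session ends
SELECT pg_try_advisory_lock(42) AS got_lock;
-- ... critical section ...
SELECT pg_advisory_unlock(42);

-- Transaction-level lock: released automatically at COMMIT or ROLLBACK
BEGIN;
SELECT pg_advisory_xact_lock(42);
COMMIT;
```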
Function
Return Type
Description
row_number()
bigint
number of the current row within its partition, counting from 1
rank()
bigint
rank of the current row with gaps; same as row_number of its first peer
dense_rank()
bigint
rank of the current row without gaps; this function counts peer groups
percent_rank()
double precision
relative rank of the current row: (rank - 1) / (total partition rows - 1)
cume_dist()
double precision
cumulative distribution: (number of partition rows preceding or peer with current row) / total partition rows
ntile(num_buckets integer)
integer
integer ranging from 1 to the argument value, dividing the partition as equally as possible
lag(value anyelement [, offset integer [, default anyelement ]])
same type as value
returns value evaluated at the row that is offset rows before the current row within the partition; if there is no such row, instead returns default (which must be of the same type as value). Both offset and default are evaluated with respect to the current row. If omitted, offset defaults to 1 and default to null
lead(value anyelement [, offset integer [, default anyelement ]])
same type as value
returns value evaluated at the row that is offset rows after the current row within the partition; if there is no such row, instead returns default (which must be of the same type as value). Both offset and default are evaluated with respect to the current row. If omitted, offset defaults to 1 and default to null
first_value(value any)
same type as value
returns value evaluated at the row that is the first row of the window frame
last_value(value any)
same type as value
returns value evaluated at the row that is the last row of the window frame
nth_value(value any, nth integer)
same type as value
returns value evaluated at the row that is the nth row of the window frame (counting from 1); null if no such row
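Several of the window functions above can share one window definition; the empsalary table here is hypothetical:

```sql
SELECT depname, empno, salary,
       rank()      OVER w AS salary_rank,      -- rank with gaps, highest salary first
       lag(salary) OVER w AS prev_salary,      -- salary of the preceding row in the ordering
       ntile(4)    OVER w AS quartile          -- 1..4, as evenly divided as possible
FROM empsalary
WINDOW w AS (PARTITION BY depname ORDER BY salary DESC);
```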
Name
Return Type
Description
current_catalog
name
name of current database (called "catalog" in the SQL standard)
current_database()
name
name of current database
current_query()
text
text of the currently executing query, as submitted by the client (might contain more than one statement)
current_role
name
equivalent to current_user
current_schema[()]
name
name of current schema
current_schemas(boolean)
name[]
names of schemas in search path, optionally including implicit schemas
current_user
name
user name of the current execution context
inet_client_addr()
inet
address of the remote connection
inet_client_port()
int
port of the remote connection
inet_server_addr()
inet
address of the local connection
inet_server_port()
int
port of the local connection
pg_backend_pid()
int
Process ID of the server process attached to the current session
pg_blocking_pids(int)
int[]
Process ID(s) that are blocking the specified server process ID from acquiring a lock
pg_conf_load_time()
timestamp with time zone
configuration load time
pg_current_logfile([text])
text
primary log file name, or log in the requested format, currently in use by the logging collector
pg_my_temp_schema()
oid
OID of the session's temporary schema, or 0 if none
pg_is_other_temp_schema(oid)
boolean
is schema another session's temporary schema?
pg_listening_channels()
setof text
channel names that the session is currently listening on
pg_notification_queue_usage()
double
fraction of the asynchronous notification queue currently occupied (0-1)
pg_postmaster_start_time()
timestamp with time zone
server start time
pg_safe_snapshot_blocking_pids(int)
int[]
Process ID(s) that are blocking the specified server process ID from acquiring a safe snapshot
pg_trigger_depth()
int
current nesting level of PostgreSQL triggers (0 if not called, directly or indirectly, from inside a trigger)
session_user
name
session user name
user
name
equivalent to current_user
version()
text
PostgreSQL version information. See also server_version_num for a machine-readable version.
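A quick way to inspect the current session:

```sql
SELECT current_database(), current_user, pg_backend_pid(), version();
```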
Name
Return Type
Description
has_any_column_privilege(user, table, privilege)
boolean
does user have privilege for any column of table?
has_any_column_privilege(table, privilege)
boolean
does current user have privilege for any column of table?
has_column_privilege(user, table, column, privilege)
boolean
does user have privilege for column?
has_column_privilege(table, column, privilege)
boolean
does current user have privilege for column?
has_database_privilege(user, database, privilege)
boolean
does user have privilege for database?
has_database_privilege(database, privilege)
boolean
does current user have privilege for database?
has_foreign_data_wrapper_privilege(user, fdw, privilege)
boolean
does user have privilege for foreign-data wrapper?
has_foreign_data_wrapper_privilege(fdw, privilege)
boolean
does current user have privilege for foreign-data wrapper?
has_function_privilege(user, function, privilege)
boolean
does user have privilege for function?
has_function_privilege(function, privilege)
boolean
does current user have privilege for function?
has_language_privilege(user, language, privilege)
boolean
does user have privilege for language?
has_language_privilege(language, privilege)
boolean
does current user have privilege for language?
has_schema_privilege(user, schema, privilege)
boolean
does user have privilege for schema?
has_schema_privilege(schema, privilege)
boolean
does current user have privilege for schema?
has_sequence_privilege(user, sequence, privilege)
boolean
does user have privilege for sequence?
has_sequence_privilege(sequence, privilege)
boolean
does current user have privilege for sequence?
has_server_privilege(user, server, privilege)
boolean
does user have privilege for foreign server?
has_server_privilege(server, privilege)
boolean
does current user have privilege for foreign server?
has_table_privilege(user, table, privilege)
boolean
does user have privilege for table?
has_table_privilege(table, privilege)
boolean
does current user have privilege for table?
has_tablespace_privilege(user, tablespace, privilege)
boolean
does user have privilege for tablespace?
has_tablespace_privilege(tablespace, privilege)
boolean
does current user have privilege for tablespace?
has_type_privilege(user, type, privilege)
boolean
does user have privilege for type?
has_type_privilege(type, privilege)
boolean
does current user have privilege for type?
pg_has_role(user, role, privilege)
boolean
does user have privilege for role?
pg_has_role(role, privilege)
boolean
does current user have privilege for role?
row_security_active(table)
boolean
does current user have row-level security active for table?
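For example (the role name postgres below is an assumption; substitute any existing role):

```sql
SELECT has_table_privilege('pg_class', 'SELECT');              -- current user
SELECT has_table_privilege('postgres', 'pg_class', 'UPDATE');  -- named role
SELECT pg_has_role('postgres', 'pg_monitor', 'MEMBER');
```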
Name
Return Type
Description
pg_collation_is_visible(collation_oid)
boolean
is collation visible in search path?
pg_conversion_is_visible(conversion_oid)
boolean
is conversion visible in search path?
pg_function_is_visible(function_oid)
boolean
is function visible in search path?
pg_opclass_is_visible(opclass_oid)
boolean
is operator class visible in search path?
pg_operator_is_visible(operator_oid)
boolean
is operator visible in search path?
pg_opfamily_is_visible(opclass_oid)
boolean
is operator family visible in search path?
pg_statistics_obj_is_visible(stat_oid)
boolean
is statistics object visible in search path?
pg_table_is_visible(table_oid)
boolean
is table visible in search path?
pg_ts_config_is_visible(config_oid)
boolean
is text search configuration visible in search path?
pg_ts_dict_is_visible(dict_oid)
boolean
is text search dictionary visible in search path?
pg_ts_parser_is_visible(parser_oid)
boolean
is text search parser visible in search path?
pg_ts_template_is_visible(template_oid)
boolean
is text search template visible in search path?
pg_type_is_visible(type_oid)
boolean
is type (or domain) visible in search path?
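For example:

```sql
-- pg_catalog is implicitly part of every search path, so this returns true
SELECT pg_table_is_visible('pg_class'::regclass);
```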
Name
Return Type
Description
format_type(type_oid, typemod)
text
get SQL name of a data type
pg_get_constraintdef(constraint_oid)
text
get definition of a constraint
pg_get_constraintdef(constraint_oid, pretty_bool)
text
get definition of a constraint
pg_get_expr(pg_node_tree, relation_oid)
text
decompile internal form of an expression, assuming that any Vars in it refer to the relation indicated by the second parameter
pg_get_expr(pg_node_tree, relation_oid, pretty_bool)
text
decompile internal form of an expression, assuming that any Vars in it refer to the relation indicated by the second parameter
pg_get_functiondef(func_oid)
text
get definition of a function
pg_get_function_arguments(func_oid)
text
get argument list of function's definition (with default values)
pg_get_function_identity_arguments(func_oid)
text
get argument list to identify a function (without default values)
pg_get_function_result(func_oid)
text
get RETURNS clause for function
pg_get_indexdef(index_oid)
text
get CREATE INDEX command for index
pg_get_indexdef(index_oid, column_no, pretty_bool)
text
get CREATE INDEX command for index, or definition of just one index column when column_no is not zero
pg_get_keywords()
setof record
get list of SQL keywords and their categories
pg_get_ruledef(rule_oid)
text
get CREATE RULE command for a rule
pg_get_ruledef(rule_oid, pretty_bool)
text
get CREATE RULE command for a rule
pg_get_serial_sequence(table_name, column_name)
text
get name of the sequence that a serial, smallserial or bigserial column uses
pg_get_statisticsobjdef(statobj_oid)
text
get CREATE STATISTICS command for an extended statistics object
pg_get_triggerdef(trigger_oid)
text
get CREATE [CONSTRAINT] TRIGGER command for trigger
pg_get_triggerdef(trigger_oid, pretty_bool)
text
get CREATE [CONSTRAINT] TRIGGER command for trigger
pg_get_userbyid(role_oid)
name
get role name with given OID
pg_get_viewdef(view_name)
text
get underlying SELECT command for view or materialized view (deprecated)
pg_get_viewdef(view_name, pretty_bool)
text
get underlying SELECT command for view or materialized view (deprecated)
pg_get_viewdef(view_oid)
text
get underlying SELECT command for view or materialized view
pg_get_viewdef(view_oid, pretty_bool)
text
get underlying SELECT command for view or materialized view
pg_get_viewdef(view_oid, wrap_column_int)
text
get underlying SELECT command for view or materialized view; lines with fields are wrapped to the specified number of columns, pretty-printing is implied
pg_index_column_has_property(index_oid, column_no, prop_name)
boolean
test whether an index column has the specified property
pg_index_has_property(index_oid, prop_name)
boolean
test whether an index has the specified property
pg_indexam_has_property(am_oid, prop_name)
boolean
test whether an index access method has the specified property
pg_options_to_table(reloptions)
setof record
get the set of storage option name/value pairs
pg_tablespace_databases(tablespace_oid)
setof oid
get the set of database OIDs that have objects in the tablespace
pg_tablespace_location(tablespace_oid)
text
get the path in the file system that this tablespace is located in
pg_typeof(any)
regtype
get the data type of any value
collation for (any)
text
get the collation of the argument
to_regclass(rel_name)
regclass
get the OID of the named relation
to_regproc(func_name)
regproc
get the OID of the named function
to_regprocedure(func_name)
regprocedure
get the OID of the named function
to_regoper(operator_name)
regoper
get the OID of the named operator
to_regoperator(operator_name)
regoperator
get the OID of the named operator
to_regtype(type_name)
regtype
get the OID of the named type
to_regnamespace(schema_name)
regnamespace
get the OID of the named schema
to_regrole(role_name)
regrole
get the OID of the named role
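A few of these in action:

```sql
SELECT to_regclass('pg_class');             -- OID of the relation, or NULL if absent
SELECT pg_typeof(1.5);                      -- numeric
SELECT format_type('int4'::regtype, NULL);  -- integer
```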
Name
Description
asc
Does the column sort in ascending order on a forward scan?
desc
Does the column sort in descending order on a forward scan?
nulls_first
Does the column sort with nulls first on a forward scan?
nulls_last
Does the column sort with nulls last on a forward scan?
orderable
Does the column possess any defined sort ordering?
distance_orderable
Can the column be scanned in order by a "distance" operator, for example ORDER BY col <-> constant?
returnable
Can the column value be returned by an index-only scan?
search_array
Does the column natively support col = ANY(array) searches?
search_nulls
Does the column support IS NULL and IS NOT NULL searches?
Name
Description
clusterable
Can the index be used in a CLUSTER command?
index_scan
Does the index support plain (non-bitmap) scans?
bitmap_scan
Does the index support bitmap scans?
backward_scan
Can the index be scanned backwards?
Name
Description
can_order
Does the access method support ASC, DESC and related keywords in CREATE INDEX?
can_unique
Does the access method support unique indexes?
can_multi_col
Does the access method support multicolumn indexes?
can_exclude
Does the access method support exclusion constraints?
Name
Return Type
Description
pg_describe_object(catalog_id, object_id, object_sub_id)
text
get description of a database object
pg_identify_object(catalog_id oid, object_id oid, object_sub_id integer)
type text, schema text, name text, identity text
get identity of a database object
pg_identify_object_as_address(catalog_id oid, object_id oid, object_sub_id integer)
type text, name text[], args text[]
get external representation of a database object's address
pg_get_object_address(type text, name text[], args text[])
catalog_id oid, object_id oid, object_sub_id int32
get address of a database object from its external representation
Name
Return Type
Description
col_description(table_oid, column_number)
text
get comment for a table column
obj_description(object_oid, catalog_name)
text
get comment for a database object
obj_description(object_oid)
text
get comment for a database object (deprecated)
shobj_description(object_oid, catalog_name)
text
get comment for a shared database object
Name
Return Type
Description
txid_current()
bigint
get current transaction ID, assigning a new one if the current transaction does not have one
txid_current_if_assigned()
bigint
same as txid_current() but returns null instead of assigning a new transaction ID if none has been assigned
txid_current_snapshot()
txid_snapshot
get current snapshot
txid_snapshot_xip(txid_snapshot)
setof bigint
get in-progress transaction IDs in snapshot
txid_snapshot_xmax(txid_snapshot)
bigint
get xmax of snapshot
txid_snapshot_xmin(txid_snapshot)
bigint
get xmin of snapshot
txid_visible_in_snapshot(bigint, txid_snapshot)
boolean
is transaction ID visible in snapshot? (do not use with subtransaction ids)
txid_status(bigint)
txid_status
report whether the given transaction committed, aborted, or is in progress, or null if the transaction ID is too old
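For example, capturing a snapshot and inspecting its bounds:

```sql
SELECT txid_current() AS my_txid,
       txid_snapshot_xmin(txid_current_snapshot()) AS snap_xmin,
       txid_snapshot_xmax(txid_current_snapshot()) AS snap_xmax;
```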
Name
Description
xmin
Earliest transaction ID (txid) that is still active. All earlier transactions will either be committed and visible, or rolled back and dead.
xmax
First as-yet-unassigned txid. All txids greater than or equal to this had not yet started as of the snapshot time, and are thus invisible.
xip_list
Active txids at the time of the snapshot. The list includes only those active txids between xmin and xmax; there might be active txids higher than xmax. A txid that satisfies xmin <= txid < xmax and is not in this list was already completed at the time of the snapshot, and is thus either visible or dead according to its commit status. The list does not include txids of subtransactions.
Name
Return Type
Description
pg_xact_commit_timestamp(xid)
timestamp with time zone
get commit timestamp of a transaction
pg_last_committed_xact()
xid xid, timestamp timestamp with time zone
get transaction ID and commit timestamp of latest committed transaction
Name
Return Type
Description
pg_control_checkpoint()
record
Returns information about current checkpoint state.
pg_control_system()
record
Returns information about current control file state.
pg_control_init()
record
Returns information about cluster initialization state.
pg_control_recovery()
record
Returns information about recovery state.
Column Name
Data Type
checkpoint_lsn
pg_lsn
prior_lsn
pg_lsn
redo_lsn
pg_lsn
redo_wal_file
text
timeline_id
integer
prev_timeline_id
integer
full_page_writes
boolean
next_xid
text
next_oid
oid
next_multixact_id
xid
next_multi_offset
xid
oldest_xid
xid
oldest_xid_dbid
oid
oldest_active_xid
xid
oldest_multi_xid
xid
oldest_multi_dbid
oid
oldest_commit_ts_xid
xid
newest_commit_ts_xid
xid
checkpoint_time
timestamp with time zone
Column Name
Data Type
pg_control_version
integer
catalog_version_no
integer
system_identifier
bigint
pg_control_last_modified
timestamp with time zone
Column Name
Data Type
max_data_alignment
integer
database_block_size
integer
blocks_per_segment
integer
wal_block_size
integer
bytes_per_wal_segment
integer
max_identifier_length
integer
max_index_columns
integer
max_toast_chunk_size
integer
large_object_chunk_size
integer
float4_pass_by_value
boolean
float8_pass_by_value
boolean
data_page_checksum_version
integer
Column Name
Data Type
min_recovery_end_lsn
pg_lsn
min_recovery_end_timeline
integer
backup_start_lsn
pg_lsn
backup_end_lsn
pg_lsn
end_of_backup_record_required
boolean
Name
Type
Description
classid
Oid
OID of catalog the object belongs in
objid
Oid
OID of the object in the catalog
objsubid
integer
Object sub-id (e.g. attribute number for columns)
command_tag
text
command tag
object_type
text
Type of the object
schema_name
text
Name of the schema the object belongs in, if any; otherwiseNULL
. No quoting is applied.
object_identity
text
Text rendering of the object identity, schema-qualified. Each and every identifier present in the identity is quoted if necessary.
in_extension
bool
whether the command is part of an extension script
command
pg_ddl_command
A complete representation of the command, in internal format. This cannot be output directly, but it can be passed to other functions to obtain different pieces of information about the command.
Name
Type
Description
classid
Oid
OID of catalog the object belonged in
objid
Oid
OID the object had within the catalog
objsubid
int32
Object sub-id (e.g. attribute number for columns)
original
bool
Flag used to identify the root object(s) of the deletion
normal
bool
Flag indicating that there's a normal dependency relationship in the dependency graph leading to this object
is_temporary
bool
Flag indicating that the object was a temporary object.
object_type
text
Type of the object
schema_name
text
Name of the schema the object belonged in, if any; otherwiseNULL
. No quoting is applied.
object_name
text
Name of the object, if the combination of schema and name can be used as a unique identifier for the object; otherwiseNULL
. No quoting is applied, and name is never schema-qualified.
object_identity
text
Text rendering of the object identity, schema-qualified. Each and every identifier present in the identity is quoted if necessary.
address_names
text[]
An array that, together withobject_type
andaddress_args
, can be used by thepg_get_object_address()
to recreate the object address in a remote server containing an identically named object of the same kind.
address_args
text[]
Complement foraddress_names
above.
Name
Return Type
Description
pg_event_trigger_table_rewrite_oid()
Oid
The OID of the table about to be rewritten.
pg_event_trigger_table_rewrite_reason()
int
The reason code(s) explaining the reason for rewriting. The exact meaning of the codes is release dependent.
Operator
Description
Example
Result
=
equal
int4range(1,5) = '[1,4]'::int4range
t
<>
not equal
numrange(1.1,2.2) <> numrange(1.1,2.3)
t
<
less than
int4range(1,10) < int4range(2,3)
t
>
greater than
int4range(1,10) > int4range(1,5)
t
<=
less than or equal
numrange(1.1,2.2) <= numrange(1.1,2.2)
t
>=
greater than or equal
numrange(1.1,2.2) >= numrange(1.1,2.0)
t
@>
contains range
int4range(2,4) @> int4range(2,3)
t
@>
contains element
'[2011-01-01,2011-03-01)'::tsrange @> '2011-01-10'::timestamp
t
<@
range is contained by
int4range(2,4) <@ int4range(1,7)
t
<@
element is contained by
42 <@ int4range(1,7)
f
&&
overlap (have points in common)
int8range(3,7) && int8range(4,12)
t
<<
strictly left of
int8range(1,10) << int8range(100,110)
t
>>
strictly right of
int8range(50,60) >> int8range(20,30)
t
&<
does not extend to the right of
int8range(1,20) &< int8range(18,20)
t
&>
does not extend to the left of
int8range(7,20) &> int8range(5,10)
t
-|-
is adjacent to
numrange(1.1,2.2) -|- numrange(2.2,3.3)
t
+
union
numrange(5,15) + numrange(10,20)
[5,20)
*
intersection
int8range(5,15) * int8range(10,20)
[10,15)
-
difference
int8range(5,15) - int8range(10,20)
[5,10)
Function
Return Type
Description
Example
Result
lower
(anyrange
)
range's element type
lower bound of range
lower(numrange(1.1,2.2))
1.1
upper
(anyrange
)
range's element type
upper bound of range
upper(numrange(1.1,2.2))
2.2
isempty
(anyrange
)
boolean
is the range empty?
isempty(numrange(1.1,2.2))
false
lower_inc
(anyrange
)
boolean
is the lower bound inclusive?
lower_inc(numrange(1.1,2.2))
true
upper_inc
(anyrange
)
boolean
is the upper bound inclusive?
upper_inc(numrange(1.1,2.2))
false
lower_inf
(anyrange
)
boolean
is the lower bound infinite?
lower_inf('(,)'::daterange)
true
upper_inf
(anyrange
)
boolean
is the upper bound infinite?
upper_inf('(,)'::daterange)
true
range_merge
(anyrange
,anyrange
)
anyrange
the smallest range which includes both of the given ranges
range_merge('[1,2)'::int4range, '[3,4)'::int4range)
[1,4)