PostgreSQL is extensible because its operation is catalog-driven. If you are familiar with standard relational database systems, you know that they store information about databases, tables, columns, etc., in what are commonly known as system catalogs. (Some systems call this the data dictionary.) The catalogs appear to the user as tables like any other, but the DBMS stores its internal bookkeeping in them. One key difference between PostgreSQL and standard relational database systems is that PostgreSQL stores much more information in its catalogs: not only information about tables and columns, but also information about data types, functions, access methods, and so on. These tables can be modified by the user, and since PostgreSQL bases its operation on these tables, this means that PostgreSQL can be extended by users. By comparison, conventional database systems can only be extended by changing hardcoded procedures in the source code or by loading modules specially written by the DBMS vendor.
The PostgreSQL server can moreover incorporate user-written code into itself through dynamic loading. That is, the user can specify an object code file (e.g., a shared library) that implements a new type or function, and PostgreSQL will load it as required. Code written in SQL is even more trivial to add to the server. This ability to modify its operation “on the fly” makes PostgreSQL uniquely suited for rapid prototyping of new applications and storage structures.
PostgreSQL provides four kinds of functions:
query language functions (functions written in SQL) (Section 38.5)
procedural language functions (functions written in, for example, PL/pgSQL or PL/Tcl) (Section 38.8)
internal functions (Section 38.9)
C-language functions (Section 38.10)
Every kind of function can take base types, composite types, or combinations of these as arguments. In addition, every kind of function can return a base type or a composite type. Functions can also be defined to return sets of base or composite values.
Many kinds of functions can take or return certain pseudo-types (such as polymorphic types), but the available facilities vary. Consult each kind of function's description for more details.
It's easiest to define SQL functions, so we'll start by discussing those. Most of the concepts presented for SQL functions will carry over to the other kinds of functions.
Throughout this chapter, it can be useful to look at the reference page of the CREATE FUNCTION command to understand the examples better. Some examples from this chapter can be found in funcs.sql and funcs.c in the src/tutorial directory in the PostgreSQL source distribution.
PostgreSQL data types can be divided into base types, container types, domains, and pseudo-types.
Base types are those, like integer, that are implemented below the level of the SQL language (typically in a low-level language such as C). They generally correspond to what are often known as abstract data types. PostgreSQL can only operate on such types through functions provided by the user and only understands the behavior of such types to the extent that the user describes them. The built-in base types are described in Chapter 8.
Enumerated (enum) types can be considered as a subcategory of base types. The main difference is that they can be created using just SQL commands, without any low-level programming. Refer to Section 8.7 for more information.
PostgreSQL has three kinds of “container” types, which are types that contain multiple values of other types. These are arrays, composites, and ranges.
Arrays can hold multiple values that are all of the same type. An array type is automatically created for each base type, composite type, range type, and domain type. But there are no arrays of arrays. So far as the type system is concerned, multi-dimensional arrays are the same as one-dimensional arrays. Refer to Section 8.15 for more information.
Composite types, or row types, are created whenever the user creates a table. It is also possible to use CREATE TYPE to define a “stand-alone” composite type with no associated table. A composite type is simply a list of types with associated field names. A value of a composite type is a row or record of field values. Refer to Section 8.16 for more information.
A range type can hold two values of the same type, which are the lower and upper bounds of the range. Range types are user-created, although a few built-in ones exist. Refer to Section 8.17 for more information.
A domain is based on a particular underlying type and for many purposes is interchangeable with its underlying type. However, a domain can have constraints that restrict its valid values to a subset of what the underlying type would allow. Domains are created using the SQL command CREATE DOMAIN. Refer to Section 8.18 for more information.
There are a few “pseudo-types” for special purposes. Pseudo-types cannot appear as columns of tables or components of container types, but they can be used to declare the argument and result types of functions. This provides a mechanism within the type system to identify special classes of functions. Table 8.25 lists the existing pseudo-types.
Five pseudo-types of special interest are anyelement, anyarray, anynonarray, anyenum, and anyrange, which are collectively called polymorphic types. Any function declared using these types is said to be a polymorphic function. A polymorphic function can operate on many different data types, with the specific data type(s) being determined by the data types actually passed to it in a particular call.
Polymorphic arguments and results are tied to each other and are resolved to a specific data type when a query calling a polymorphic function is parsed. Each position (either argument or return value) declared as anyelement is allowed to have any specific actual data type, but in any given call they must all be the same actual type. Each position declared as anyarray can have any array data type, but similarly they must all be the same type. And similarly, positions declared as anyrange must all be the same range type. Furthermore, if there are positions declared anyarray and others declared anyelement, the actual array type in the anyarray positions must be an array whose elements are the same type appearing in the anyelement positions. Similarly, if there are positions declared anyrange and others declared anyelement, the actual range type in the anyrange positions must be a range whose subtype is the same type appearing in the anyelement positions. anynonarray is treated exactly the same as anyelement, but adds the additional constraint that the actual type must not be an array type. anyenum is treated exactly the same as anyelement, but adds the additional constraint that the actual type must be an enum type.
Thus, when more than one argument position is declared with a polymorphic type, the net effect is that only certain combinations of actual argument types are allowed. For example, a function declared as equal(anyelement, anyelement) will take any two input values, so long as they are of the same data type.
When the return value of a function is declared as a polymorphic type, there must be at least one argument position that is also polymorphic, and the actual data type supplied as the argument determines the actual result type for that call. For example, if there were not already an array subscripting mechanism, one could define a function that implements subscripting as subscript(anyarray, integer) returns anyelement. This declaration constrains the actual first argument to be an array type, and allows the parser to infer the correct result type from the actual first argument's type. Another example is that a function declared as f(anyarray) returns anyenum will only accept arrays of enum types.
Note that anynonarray and anyenum do not represent separate type variables; they are the same type as anyelement, just with an additional constraint. For example, declaring a function as f(anyelement, anyenum) is equivalent to declaring it as f(anyenum, anyenum): both actual arguments have to be the same enum type.
A variadic function (one taking a variable number of arguments, as in Section 38.5.5) can be polymorphic: this is accomplished by declaring its last parameter as VARIADIC anyarray. For purposes of argument matching and determining the actual result type, such a function behaves the same as if you had written the appropriate number of anynonarray parameters.
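As a brief illustrative sketch, a polymorphic SQL function can be declared like the following (make_array is an example name; the same technique appears later in the SQL-functions material):

    CREATE FUNCTION make_array(anyelement, anyelement) RETURNS anyarray AS $$
        SELECT ARRAY[$1, $2];
    $$ LANGUAGE SQL;

    -- anyelement resolves to integer here, so the result type is integer[]
    SELECT make_array(1, 2) AS intarray, make_array('a'::text, 'b') AS textarray;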
More than one function can be defined with the same SQL name, so long as the arguments they take are different. In other words, function names can be overloaded. Whether or not you use it, this capability entails security precautions when calling functions in databases where some users mistrust other users; see Section 10.3. When a query is executed, the server will determine which function to call from the data types and the number of the provided arguments. Overloading can also be used to simulate functions with a variable number of arguments, up to a finite maximum number.
When creating a family of overloaded functions, one should be careful not to create ambiguities. For instance, given the functions:
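The original example declarations are elided here; they were presumably a pair along these lines (the function bodies below are placeholders added only to make the sketch runnable):

    CREATE FUNCTION test(int, real) RETURNS text
        AS $$ SELECT 'int, real'::text $$ LANGUAGE SQL;
    CREATE FUNCTION test(smallint, double precision) RETURNS text
        AS $$ SELECT 'smallint, double precision'::text $$ LANGUAGE SQL;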
it is not immediately clear which function would be called with some trivial input like test(1, 1.5). The currently implemented resolution rules are described in Chapter 10, but it is unwise to design a system that subtly relies on this behavior.
A function that takes a single argument of a composite type should generally not have the same name as any attribute (field) of that type. Recall that attribute(table) is considered equivalent to table.attribute. In the case that there is an ambiguity between a function on a composite type and an attribute of the composite type, the attribute will always be used. It is possible to override that choice by schema-qualifying the function name (that is, schema.func(table)) but it's better to avoid the problem by not choosing conflicting names.
Another possible conflict is between variadic and non-variadic functions. For instance, it is possible to create both foo(numeric) and foo(VARIADIC numeric[]). In this case it is unclear which one should be matched to a call providing a single numeric argument, such as foo(10.1). The rule is that the function appearing earlier in the search path is used, or if the two functions are in the same schema, the non-variadic one is preferred.
When overloading C-language functions, there is an additional constraint: the C name of each function in the family of overloaded functions must be different from the C names of all other functions, either internal or dynamically loaded. If this rule is violated, the behavior is not portable. You might get a run-time linker error, or one of the functions will get called (usually the internal one). The alternative form of the AS clause for the SQL CREATE FUNCTION command decouples the SQL function name from the function name in the C source code. For instance:
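The elided example was presumably of this shape ('filename' stands for the shared library containing the compiled C code):

    CREATE FUNCTION test(int) RETURNS int
        AS 'filename', 'test_1arg'
        LANGUAGE C;
    CREATE FUNCTION test(int, int) RETURNS int
        AS 'filename', 'test_2arg'
        LANGUAGE C;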
The names of the C functions here reflect one of many possible conventions.
Internal functions are functions written in C that have been statically linked into the PostgreSQL server. The “body” of the function definition specifies the C-language name of the function, which need not be the same as the name being declared for SQL use. (For reasons of backward compatibility, an empty body is accepted as meaning that the C-language function name is the same as the SQL name.)
Normally, all internal functions present in the server are declared during the initialization of the database cluster (see Section 18.2), but a user could use CREATE FUNCTION to create additional alias names for an internal function. Internal functions are declared in CREATE FUNCTION with language name internal. For instance, to create an alias for the sqrt function:
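The elided alias declaration is presumably the following (dsqrt is the C-level name of the built-in square-root function):

    CREATE FUNCTION square_root(double precision) RETURNS double precision
        AS 'dsqrt'
        LANGUAGE internal
        STRICT;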
(Most internal functions expect to be declared “strict”.)
Not all “predefined” functions are “internal” in the above sense. Some predefined functions are written in SQL.
By default, a function is just a “black box” that the database system knows very little about the behavior of. However, that means that queries using the function may be executed much less efficiently than they could be. It is possible to supply additional knowledge that helps the planner optimize function calls.
Some basic facts can be supplied by declarative annotations provided in the CREATE FUNCTION command. Most important of these is the function's volatility category (IMMUTABLE, STABLE, or VOLATILE); one should always be careful to specify this correctly when defining a function. The parallel safety property (PARALLEL UNSAFE, PARALLEL RESTRICTED, or PARALLEL SAFE) must also be specified if you hope to use the function in parallelized queries. It can also be useful to specify the function's estimated execution cost, and/or the number of rows a set-returning function is estimated to return. However, the declarative way of specifying those two facts only allows specifying a constant value, which is often inadequate.
It is also possible to attach a planner support function to a SQL-callable function (called its target function), and thereby provide knowledge about the target function that is too complex to be represented declaratively. Planner support functions have to be written in C (although their target functions might not be), so this is an advanced feature that relatively few people will use.
A planner support function must have the SQL signature
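The signature itself is elided in this copy; it is, to the best of our reading of the upstream documentation, declared as:

    supportfn(internal) RETURNS internal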
It is attached to its target function by specifying the SUPPORT clause when creating the target function.
The details of the API for planner support functions can be found in file src/include/nodes/supportnodes.h in the PostgreSQL source code. Here we provide just an overview of what planner support functions can do. The set of possible requests to a support function is extensible, so more things might be possible in future versions.
Some function calls can be simplified during planning based on properties specific to the function. For example, int4mul(n, 1) could be simplified to just n. This type of transformation can be performed by a planner support function, by having it implement the SupportRequestSimplify request type. The support function will be called for each instance of its target function found in a query parse tree. If it finds that the particular call can be simplified into some other form, it can build and return a parse tree representing that expression. This will automatically work for operators based on the function, too — in the example just given, n * 1 would also be simplified to n. (But note that this is just an example; this particular optimization is not actually performed by standard PostgreSQL.) We make no guarantee that PostgreSQL will never call the target function in cases that the support function could simplify, so ensure rigorous equivalence between the simplified expression and an actual execution of the target function.
For target functions that return boolean, it is often useful to estimate the fraction of rows that will be selected by a WHERE clause using that function. This can be done by a support function that implements the SupportRequestSelectivity request type.
If the target function's run time is highly dependent on its inputs, it may be useful to provide a non-constant cost estimate for it. This can be done by a support function that implements the SupportRequestCost request type.
For target functions that return sets, it is often useful to provide a non-constant estimate for the number of rows that will be returned. This can be done by a support function that implements the SupportRequestRows request type.
For target functions that return boolean, it may be possible to convert a function call appearing in WHERE into an indexable operator clause or clauses. The converted clauses might be exactly equivalent to the function's condition, or they could be somewhat weaker (that is, they might accept some values that the function condition does not). In the latter case the index condition is said to be lossy; it can still be used to scan an index, but the function call will have to be executed for each row returned by the index to see if it really passes the WHERE condition or not. To create such conditions, the support function must implement the SupportRequestIndexCondition request type.
PostgreSQL allows user-defined functions to be written in languages other than SQL and C. These other languages are generically called procedural languages (PLs). Procedural languages aren't built into the PostgreSQL server; they are offered by loadable modules. See Chapter 41 and following chapters for more information.
Every operator is “syntactic sugar” for a call to an underlying function that does the real work; so you must first create the underlying function before you can create the operator. However, an operator is not merely syntactic sugar, because it carries additional information that helps the query planner optimize queries that use the operator. The next section will be devoted to explaining that additional information.
PostgreSQL supports left unary, right unary, and binary operators. Operators can be overloaded; that is, the same operator name can be used for different operators that have different numbers and types of operands. When a query is executed, the system determines the operator to call from the number and types of the provided operands.
Here is an example of creating an operator for adding two complex numbers. We assume we've already created the definition of type complex (see Section 38.12). First we need a function that does the work, then we can define the operator:
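The elided definitions are presumably along these lines ('filename' stands for the shared library holding the C implementation of complex_add):

    CREATE FUNCTION complex_add(complex, complex)
        RETURNS complex
        AS 'filename', 'complex_add'
        LANGUAGE C IMMUTABLE STRICT;

    CREATE OPERATOR + (
        leftarg = complex,
        rightarg = complex,
        function = complex_add,
        commutator = +
    );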
Now we could execute a query like this:
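A query of the kind the text has in mind would look like this (the table test_complex and its contents are illustrative):

    SELECT (a + b) AS c FROM test_complex;

            c
    -----------------
     (5.2,6.05)
     (133.42,144.95)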
We've shown how to create a binary operator here. To create unary operators, just omit one of leftarg (for left unary) or rightarg (for right unary). The function clause and the argument clauses are the only required items in CREATE OPERATOR. The commutator clause shown in the example is an optional hint to the query optimizer. Further details about commutator and other optimizer hints appear in the next section.
Every function has a volatility classification, with the possibilities being VOLATILE, STABLE, or IMMUTABLE. VOLATILE is the default if the CREATE FUNCTION command does not specify a category. The volatility category is a promise to the optimizer about the behavior of the function:
A VOLATILE function can do anything, including modifying the database. It can return different results on successive calls with the same arguments. The optimizer makes no assumptions about the behavior of such functions. A query using a volatile function will re-evaluate the function at every row where its value is needed.
A STABLE function cannot modify the database and is guaranteed to return the same results given the same arguments for all rows within a single statement. This category allows the optimizer to optimize multiple calls of the function to a single call. In particular, it is safe to use an expression containing such a function in an index scan condition. (Since an index scan will evaluate the comparison value only once, not once at each row, it is not valid to use a VOLATILE function in an index scan condition.)
An IMMUTABLE function cannot modify the database and is guaranteed to return the same results given the same arguments forever. This category allows the optimizer to pre-evaluate the function when a query calls it with constant arguments. For example, a query like SELECT ... WHERE x = 2 + 2 can be simplified on sight to SELECT ... WHERE x = 4, because the function underlying the integer addition operator is marked IMMUTABLE.
For best optimization results, you should label your functions with the strictest volatility category that is valid for them.
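A minimal sketch of how these labels are applied (the function names and the table my_table are hypothetical, not from the original text):

    -- Pure arithmetic, no database access: safe to label IMMUTABLE.
    CREATE FUNCTION add_one(i integer) RETURNS integer AS $$
        SELECT i + 1;
    $$ LANGUAGE SQL IMMUTABLE;

    -- Reads a table whose contents can change between statements:
    -- label STABLE, not IMMUTABLE.
    CREATE FUNCTION lookup_name(p_id integer) RETURNS text AS $$
        SELECT name FROM my_table WHERE id = p_id;
    $$ LANGUAGE SQL STABLE;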
Any function with side effects must be labeled VOLATILE, so that calls to it cannot be optimized away. Even a function with no side effects needs to be labeled VOLATILE if its value can change within a single query; some examples are random(), currval(), timeofday().
Another important example is that the current_timestamp family of functions qualify as STABLE, since their values do not change within a transaction.
There is relatively little difference between the STABLE and IMMUTABLE categories when considering simple interactive queries that are planned and immediately executed: it doesn't matter much whether a function is executed once during planning or once during query execution startup. But there is a big difference if the plan is saved and reused later. Labeling a function IMMUTABLE when it really isn't may allow it to be prematurely folded to a constant during planning, resulting in a stale value being re-used during subsequent uses of the plan. This is a hazard when using prepared statements or when using function languages that cache plans (such as PL/pgSQL).
For functions written in SQL or in any of the standard procedural languages, there is a second important property determined by the volatility category, namely the visibility of any data changes that have been made by the SQL command that is calling the function. A VOLATILE function will see such changes, a STABLE or IMMUTABLE function will not. This behavior is implemented using the snapshotting behavior of MVCC (see Chapter 13): STABLE and IMMUTABLE functions use a snapshot established as of the start of the calling query, whereas VOLATILE functions obtain a fresh snapshot at the start of each query they execute.
Note: Functions written in C can manage snapshots however they want, but it's usually a good idea to make C functions work this way too.
Because of this snapshotting behavior, a function containing only SELECT commands can safely be marked STABLE, even if it selects from tables that might be undergoing modifications by concurrent queries. PostgreSQL will execute all commands of a STABLE function using the snapshot established for the calling query, and so it will see a fixed view of the database throughout that query.
The same snapshotting behavior is used for SELECT commands within IMMUTABLE functions. It is generally unwise to select from database tables within an IMMUTABLE function at all, since the immutability will be broken if the table contents ever change. However, PostgreSQL does not enforce that you not do that.
A common error is to label a function IMMUTABLE when its results depend on a configuration parameter. For example, a function that manipulates timestamps might well have results that depend on the TimeZone setting. For safety, such functions should be labeled STABLE instead.
Note: PostgreSQL requires that STABLE and IMMUTABLE functions contain no SQL commands other than SELECT, to prevent data modification. (This is not a completely bulletproof restriction, since such functions could still call VOLATILE functions that modify the database. If you do that, you will find that the STABLE or IMMUTABLE function does not notice the database changes applied by the called function, since they are hidden from its snapshot.)
As described in Section 37.2, PostgreSQL can be extended to support new data types. This section describes how to define new base types, which are data types defined below the level of the SQL language. Creating a new base type requires implementing functions to operate on the type in a low-level language, usually C.
The examples in this section can be found in complex.sql and complex.c in the src/tutorial directory of the source distribution. See the README file in that directory for instructions about running the examples.
A user-defined type must always have input and output functions. These functions determine how the type appears in strings (for input by the user and output to the user) and how the type is organized in memory. The input function takes a null-terminated character string as its argument and returns the internal (in memory) representation of the type. The output function takes the internal representation of the type as argument and returns a null-terminated character string. If we want to do anything more with the type than merely store it, we must provide additional functions to implement whatever operations we'd like to have for the type.
Suppose we want to define a type complex that represents complex numbers. A natural way to represent a complex number in memory would be the following C structure:
We will need to make this a pass-by-reference type, since it's too large to fit into a single Datum value.
As the external string representation of the type, we choose a string of the form (x,y).
The input and output functions are usually not hard to write, especially the output function. But when defining the external string representation of the type, remember that you must eventually write a complete and robust parser for that representation as your input function. For instance:
The output function can simply be:
You should be careful to make the input and output functions inverses of each other. If you do not, you will have severe problems when you need to dump your data into a file and then read it back in. This is a particularly common problem when floating-point numbers are involved.
Optionally, a user-defined type can provide binary input and output routines. Binary I/O is normally faster but less portable than textual I/O. As with textual I/O, it is up to you to define exactly what the external binary representation is. Most of the built-in data types try to provide a machine-independent binary representation. For complex, we will piggy-back on the binary I/O converters for type float8:
Once we have written the I/O functions and compiled them into a shared library, we can define the complex type in SQL. First we declare it as a shell type:
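The shell-type declaration is presumably just:

    CREATE TYPE complex;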
This serves as a placeholder that allows us to reference the type while defining its I/O functions. Now we can define the I/O functions:
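The elided declarations were presumably the following ('filename' stands for the shared library built from complex.c):

    CREATE FUNCTION complex_in(cstring)
        RETURNS complex
        AS 'filename'
        LANGUAGE C IMMUTABLE STRICT;

    CREATE FUNCTION complex_out(complex)
        RETURNS cstring
        AS 'filename'
        LANGUAGE C IMMUTABLE STRICT;

    CREATE FUNCTION complex_recv(internal)
        RETURNS complex
        AS 'filename'
        LANGUAGE C IMMUTABLE STRICT;

    CREATE FUNCTION complex_send(complex)
        RETURNS bytea
        AS 'filename'
        LANGUAGE C IMMUTABLE STRICT;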
Finally, we can provide the full definition of the data type:
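The full type definition was presumably along these lines (16 bytes: two 8-byte floats):

    CREATE TYPE complex (
        internallength = 16,
        input = complex_in,
        output = complex_out,
        receive = complex_recv,
        send = complex_send,
        alignment = double
    );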
When you define a new base type, PostgreSQL automatically provides support for arrays of that type. The array type typically has the same name as the base type with the underscore character (_) prepended.
Once the data type exists, we can declare additional functions to provide useful operations on the data type. Operators can then be defined atop the functions, and if needed, operator classes can be created to support indexing of the data type. These additional layers are discussed in following sections.
If the internal representation of the data type is variable-length, the internal representation must follow the standard layout for variable-length data: the first four bytes must be a char[4] field which is never accessed directly (customarily named vl_len_). You must use the SET_VARSIZE() macro to store the total size of the datum (including the length field itself) in this field and VARSIZE() to retrieve it. (These macros exist because the length field may be encoded depending on platform.)
For further details see the description of the CREATE TYPE command.
If the values of your data type vary in size (in internal form), it's usually desirable to make the data type TOAST-able (see Section 68.2). You should do this even if the values are always too small to be compressed or stored externally, because TOAST can save space on small data too, by reducing header overhead.
To support TOAST storage, the C functions operating on the data type must always be careful to unpack any toasted values they are handed by using PG_DETOAST_DATUM. (This detail is customarily hidden by defining type-specific GETARG_DATATYPE_P macros.) Then, when running the CREATE TYPE command, specify the internal length as variable and select some appropriate storage option other than plain.
If data alignment is unimportant (either just for a specific function or because the data type specifies byte alignment anyway) then it's possible to avoid some of the overhead of PG_DETOAST_DATUM. You can use PG_DETOAST_DATUM_PACKED instead (customarily hidden by defining a GETARG_DATATYPE_PP macro) and use the macros VARSIZE_ANY_EXHDR and VARDATA_ANY to access a potentially-packed datum. Again, the data returned by these macros is not aligned even if the data type definition specifies an alignment. If the alignment is important you must go through the regular PG_DETOAST_DATUM interface.
Older code frequently declares vl_len_ as an int32 field instead of char[4]. This is OK as long as the struct definition has other fields that have at least int32 alignment. But it is dangerous to use such a struct definition when working with a potentially unaligned datum; the compiler may take it as license to assume the datum actually is aligned, leading to core dumps on architectures that are strict about alignment.
Another feature that's enabled by TOAST support is the possibility of having an expanded in-memory data representation that is more convenient to work with than the format that is stored on disk. The regular or “flat” varlena storage format is ultimately just a blob of bytes; it cannot for example contain pointers, since it may get copied to other locations in memory. For complex data types, the flat format may be quite expensive to work with, so PostgreSQL provides a way to “expand” the flat format into a representation that is more suited to computation, and then pass that format in-memory between functions of the data type.
To use expanded storage, a data type must define an expanded format that follows the rules given in src/include/utils/expandeddatum.h, and provide functions to “expand” a flat varlena value into expanded format and “flatten” the expanded format back to the regular varlena representation. Then ensure that all C functions for the data type can accept either representation, possibly by converting one into the other immediately upon receipt. This does not require fixing all existing functions for the data type at once, because the standard PG_DETOAST_DATUM macro is defined to convert expanded inputs into regular flat format. Therefore, existing functions that work with the flat varlena format will continue to work, though slightly inefficiently, with expanded inputs; they need not be converted until and unless better performance is important.
C functions that know how to work with an expanded representation typically fall into two categories: those that can only handle expanded format, and those that can handle either expanded or flat varlena inputs. The former are easier to write but may be less efficient overall, because converting a flat input to expanded form for use by a single function may cost more than is saved by operating on the expanded format. When only expanded format need be handled, conversion of flat inputs to expanded form can be hidden inside an argument-fetching macro, so that the function appears no more complex than one working with traditional varlena input. To handle both types of input, write an argument-fetching function that will detoast external, short-header, and compressed varlena inputs, but not expanded inputs. Such a function can be defined as returning a pointer to a union of the flat varlena format and the expanded format. Callers can use the VARATT_IS_EXPANDED_HEADER() macro to determine which format they received.
The TOAST infrastructure not only allows regular varlena values to be distinguished from expanded values, but also distinguishes “read-write” and “read-only” pointers to expanded values. C functions that only need to examine an expanded value, or will only change it in safe and non-semantically-visible ways, need not care which type of pointer they receive. C functions that produce a modified version of an input value are allowed to modify an expanded input value in-place if they receive a read-write pointer, but must not modify the input if they receive a read-only pointer; in that case they have to copy the value first, producing a new value to modify. A C function that has constructed a new expanded value should always return a read-write pointer to it. Also, a C function that is modifying a read-write expanded value in-place should take care to leave the value in a sane state if it fails partway through.
For examples of working with expanded values, see the standard array infrastructure, particularly src/backend/utils/adt/array_expanded.c.
A procedure is a database object similar to a function. The difference is that a procedure does not return a value, so there is no return type declaration. While a function is called as part of a query or DML command, a procedure is called explicitly using the CALL statement.
The explanations on how to define user-defined functions in the rest of this chapter apply to procedures as well, except that the CREATE PROCEDURE command is used instead, there is no return type, and some other features such as strictness don't apply.
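A minimal sketch of a procedure and its invocation (the table tbl is a placeholder and is assumed to exist):

    CREATE PROCEDURE insert_data(a integer, b integer)
    LANGUAGE SQL
    AS $$
    INSERT INTO tbl VALUES (a);
    INSERT INTO tbl VALUES (b);
    $$;

    CALL insert_data(1, 2);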
Collectively, functions and procedures are also known as routines. There are commands such as ALTER ROUTINE and DROP ROUTINE that can operate on functions and procedures without having to know which kind it is. Note, however, that there is no CREATE ROUTINE command.
A PostgreSQL operator definition can include several optional clauses that tell the system useful things about how the operator behaves. These clauses should be provided whenever appropriate, because they can make for considerable speedups in execution of queries that use the operator. But if you provide them, you must be sure that they are right! Incorrect use of an optimization clause can result in slow queries, subtly wrong output, or other Bad Things. You can always leave out an optimization clause if you are not sure about it; the only consequence is that queries might run slower than they need to.
Additional optimization clauses might be added in future versions of PostgreSQL. The ones described here are all the ones that release 11.1 understands.
COMMUTATOR
The COMMUTATOR clause, if provided, names an operator that is the commutator of the operator being defined. We say that operator A is the commutator of operator B if (x A y) equals (y B x) for all possible input values x, y. Notice that B is also the commutator of A. For example, operators < and > for a particular data type are usually each others' commutators, and operator + is usually commutative with itself. But operator - is usually not commutative with anything.
The left operand type of a commutable operator is the same as the right operand type of its commutator, and vice versa. So the name of the commutator operator is all that PostgreSQL needs to be given to look up the commutator, and that's all that needs to be provided in the COMMUTATOR clause.
It's critical to provide commutator information for operators that will be used in indexes and join clauses, because this allows the query optimizer to “flip around” such a clause to the forms needed for different plan types. For example, consider a query with a WHERE clause like tab1.x = tab2.y, where tab1.x and tab2.y are of a user-defined type, and suppose that tab2.y is indexed. The optimizer cannot generate an index scan unless it can determine how to flip the clause around to tab2.y = tab1.x, because the index-scan machinery expects to see the indexed column on the left of the operator it is given. PostgreSQL will not simply assume that this is a valid transformation — the creator of the = operator must specify that it is valid, by marking the operator with commutator information.
When you are defining a self-commutative operator, you just do it. When you are defining a pair of commutative operators, things are a little trickier: how can the first one to be defined refer to the other one, which you haven't defined yet? There are two solutions to this problem:
One way is to omit the COMMUTATOR clause in the first operator that you define, and then provide one in the second operator's definition. Since PostgreSQL knows that commutative operators come in pairs, when it sees the second definition it will automatically go back and fill in the missing COMMUTATOR clause in the first definition.
The other, more straightforward way is just to include COMMUTATOR clauses in both definitions. When PostgreSQL processes the first definition and realizes that COMMUTATOR refers to a nonexistent operator, the system will make a dummy entry for that operator in the system catalog. This dummy entry will have valid data only for the operator name, left and right operand types, and result type, since that's all that PostgreSQL can deduce at this point. The first operator's catalog entry will link to this dummy entry. Later, when you define the second operator, the system updates the dummy entry with the additional information from the second definition. If you try to use the dummy operator before it's been filled in, you'll just get an error message.
NEGATOR
The NEGATOR clause, if provided, names an operator that is the negator of the operator being defined. We say that operator A is the negator of operator B if both return Boolean results and (x A y) equals NOT (x B y) for all possible inputs x, y. Notice that B is also the negator of A. For example, < and >= are a negator pair for most data types. An operator can never validly be its own negator.
Unlike commutators, a pair of unary operators could validly be marked as each other's negators; that would mean (A x) equals NOT (B x) for all x, or the equivalent for right unary operators.
An operator's negator must have the same left and/or right operand types as the operator to be defined, so just as with COMMUTATOR, only the operator name need be given in the NEGATOR clause.
Providing a negator is very helpful to the query optimizer since it allows expressions like NOT (x = y) to be simplified into x <> y. This comes up more often than you might think, because NOT operations can be inserted as a consequence of other rearrangements.
Pairs of negator operators can be defined using the same methods explained above for commutator pairs.
RESTRICT
The RESTRICT clause, if provided, names a restriction selectivity estimation function for the operator. (Note that this is a function name, not an operator name.) RESTRICT clauses only make sense for binary operators that return boolean. The idea behind a restriction selectivity estimator is to guess what fraction of the rows in a table will satisfy a WHERE-clause condition of the form column OP constant for the current operator and a particular constant value. This assists the optimizer by giving it some idea of how many rows will be eliminated by WHERE clauses that have this form. (What happens if the constant is on the left, you might be wondering? Well, that's one of the things that COMMUTATOR is for...)
Writing new restriction selectivity estimation functions is far beyond the scope of this chapter, but fortunately you can usually just use one of the system's standard estimators for many of your own operators. These are the standard restriction estimators:
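The list itself is elided here; in the upstream documentation it names, roughly, the following estimators:
eqsel for =
neqsel for <>
scalarltsel for <
scalarlesel for <=
scalargtsel for >
scalargesel for >=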
You can frequently get away with using either eqsel or neqsel for operators that have very high or very low selectivity, even if they aren't really equality or inequality. For example, the approximate-equality geometric operators use eqsel on the assumption that they'll usually only match a small fraction of the entries in a table.
You can use scalarltsel, scalarlesel, scalargtsel and scalargesel for comparisons on data types that have some sensible means of being converted into numeric scalars for range comparisons. If possible, add the data type to those understood by the function convert_to_scalar() in src/backend/utils/adt/selfuncs.c. (Eventually, this function should be replaced by per-data-type functions identified through a column of the pg_type system catalog; but that hasn't happened yet.) If you do not do this, things will still work, but the optimizer's estimates won't be as good as they could be.
There are additional selectivity estimation functions designed for geometric operators in src/backend/utils/adt/geo_selfuncs.c: areasel, positionsel, and contsel. At this writing these are just stubs, but you might want to use them (or even better, improve them) anyway.
JOIN
The JOIN clause, if provided, names a join selectivity estimation function for the operator. (Note that this is a function name, not an operator name.) JOIN clauses only make sense for binary operators that return boolean. The idea behind a join selectivity estimator is to guess what fraction of the rows in a pair of tables will satisfy a WHERE-clause condition of the form table1.column1 OP table2.column2 for the current operator. As with the RESTRICT clause, this helps the optimizer very substantially by letting it figure out which of several possible join sequences is likely to take the least work.
As before, this chapter will make no attempt to explain how to write a join selectivity estimator function, but will just suggest that you use one of the standard estimators if one is applicable:
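The list itself is elided here; in the upstream documentation it names, roughly, the following join estimators:
eqjoinsel for =
neqjoinsel for <>
scalarltjoinsel for <
scalarlejoinsel for <=
scalargtjoinsel for >
scalargejoinsel for >=
areajoinsel for 2D area-based comparisons
positionjoinsel for 2D position-based comparisons
contjoinsel for 2D containment-based comparisons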
HASHES
The HASHES clause, if present, tells the system that it is permissible to use the hash join method for a join based on this operator. HASHES only makes sense for a binary operator that returns boolean, and in practice the operator must represent equality for some data type or pair of data types.
The assumption underlying hash join is that the join operator can only return true for pairs of left and right values that hash to the same hash code. If two values get put in different hash buckets, the join will never compare them at all, implicitly assuming that the result of the join operator must be false. So it never makes sense to specify HASHES for operators that do not represent some form of equality. In most cases it is only practical to support hashing for operators that take the same data type on both sides. However, sometimes it is possible to design compatible hash functions for two or more data types; that is, functions that will generate the same hash codes for “equal” values, even though the values have different representations. For example, it's fairly simple to arrange this property when hashing integers of different widths.
To be marked HASHES, the join operator must appear in a hash index operator family. This is not enforced when you create the operator, since of course the referencing operator family couldn't exist yet. But attempts to use the operator in hash joins will fail at run time if no such operator family exists. The system needs the operator family to find the data-type-specific hash function(s) for the operator's input data type(s). Of course, you must also create suitable hash functions before you can create the operator family.
Care should be exercised when preparing a hash function, because there are machine-dependent ways in which it might fail to do the right thing. For example, if your data type is a structure in which there might be uninteresting pad bits, you cannot simply pass the whole structure to hash_any. (Unless you write your other operators and functions to ensure that the unused bits are always zero, which is the recommended strategy.) Another example is that on machines that meet the IEEE floating-point standard, negative zero and positive zero are different values (different bit patterns) but they are defined to compare equal. If a float value might contain negative zero then extra steps are needed to ensure it generates the same hash value as positive zero.
A hash-joinable operator must have a commutator (itself if the two operand data types are the same, or a related equality operator if they are different) that appears in the same operator family. If this is not the case, planner errors might occur when the operator is used. Also, it is a good idea (but not strictly required) for a hash operator family that supports multiple data types to provide equality operators for every combination of the data types; this allows better optimization.
The function underlying a hash-joinable operator must be marked immutable or stable. If it is volatile, the system will never attempt to use the operator for a hash join.
If a hash-joinable operator has an underlying function that is marked strict, the function must also be complete: that is, it should return true or false, never null, for any two nonnull inputs. If this rule is not followed, hash-optimization of IN operations might generate wrong results. (Specifically, IN might return false where the correct answer according to the standard would be null; or it might yield an error complaining that it wasn't prepared for a null result.)
MERGES
The MERGES clause, if present, tells the system that it is permissible to use the merge-join method for a join based on this operator. MERGES only makes sense for a binary operator that returns boolean, and in practice the operator must represent equality for some data type or pair of data types.
Merge join is based on the idea of sorting the left- and right-hand tables into order and then scanning them in parallel. So, both data types must be capable of being fully ordered, and the join operator must be one that can only succeed for pairs of values that fall at the “same place” in the sort order. In practice this means that the join operator must behave like equality. But it is possible to merge-join two distinct data types so long as they are logically compatible. For example, the smallint-versus-integer equality operator is merge-joinable. We only need sorting operators that will bring both data types into a logically compatible sequence.
To be marked MERGES, the join operator must appear as an equality member of a btree index operator family. This is not enforced when you create the operator, since of course the referencing operator family couldn't exist yet. But the operator will not actually be used for merge joins unless a matching operator family can be found. The MERGES flag thus acts as a hint to the planner that it's worth looking for a matching operator family.
A merge-joinable operator must have a commutator (itself if the two operand data types are the same, or a related equality operator if they are different) that appears in the same operator family. If this is not the case, planner errors might occur when the operator is used. Also, it is a good idea (but not strictly required) for a btree operator family that supports multiple data types to provide equality operators for every combination of the data types; this allows better optimization.
The function underlying a merge-joinable operator must be marked immutable or stable. If it is volatile, the system will never attempt to use the operator for a merge join.
SQL functions execute an arbitrary list of SQL statements, returning the result of the last query in the list. In the simple (non-set) case, the first row of the last query's result will be returned. (Bear in mind that “the first row” of a multirow result is not well-defined unless you use ORDER BY.) If the last query happens to return no rows at all, the null value will be returned.
Alternatively, an SQL function can be declared to return a set (that is, multiple rows) by specifying the function's return type as SETOF sometype, or equivalently by declaring it as RETURNS TABLE(columns). In this case all rows of the last query's result are returned. Further details appear below.
The body of an SQL function must be a list of SQL statements separated by semicolons. A semicolon after the last statement is optional. Unless the function is declared to return void, the last statement must be a SELECT, or an INSERT, UPDATE, or DELETE that has a RETURNING clause.
Any collection of commands in the SQL language can be packaged together and defined as a function. Besides SELECT queries, the commands can include data modification queries (INSERT, UPDATE, and DELETE), as well as other SQL commands. (You cannot use transaction control commands, e.g., COMMIT, SAVEPOINT, and some utility commands, e.g., VACUUM, in SQL functions.) However, the final command must be a SELECT or have a RETURNING clause that returns whatever is specified as the function's return type. Alternatively, if you want to define a SQL function that performs actions but has no useful value to return, you can define it as returning void. For example, this function removes rows with negative salaries from the emp table:
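The elided example is presumably of this form (the emp table is assumed to have a salary column):

    CREATE FUNCTION clean_emp() RETURNS void AS '
        DELETE FROM emp
            WHERE salary < 0;
    ' LANGUAGE SQL;

    SELECT clean_emp();

     clean_emp
    -----------

    (1 row)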
The entire body of a SQL function is parsed before any of it is executed. While a SQL function can contain commands that alter the system catalogs (e.g., CREATE TABLE), the effects of such commands will not be visible during parse analysis of later commands in the function. Thus, for example, CREATE TABLE foo (...); INSERT INTO foo VALUES(...); will not work as desired if packaged up into a single SQL function, since foo won't exist yet when the INSERT command is parsed. It's recommended to use PL/pgSQL instead of a SQL function in this type of situation.
The syntax of the CREATE FUNCTION command requires the function body to be written as a string constant. It is usually most convenient to use dollar quoting for the string constant. If you choose to use regular single-quoted string constant syntax, you must double single quote marks (') and backslashes (\) (assuming escape string syntax) in the body of the function.
Arguments of a SQL function can be referenced in the function body using either names or numbers. Examples of both methods appear below.
To use a name, declare the function argument as having a name, and then just write that name in the function body. If the argument name is the same as any column name in the current SQL command within the function, the column name will take precedence. To override this, qualify the argument name with the name of the function itself, that is function_name.argument_name. (If this would conflict with a qualified column name, again the column name wins. You can avoid the ambiguity by choosing a different alias for the table within the SQL command.)
In the older numeric approach, arguments are referenced using the syntax $n: $1 refers to the first input argument, $2 to the second, and so on. This will work whether or not the particular argument was declared with a name.
If an argument is of a composite type, then the dot notation, e.g., argname.fieldname or $1.fieldname, can be used to access attributes of the argument. Again, you might need to qualify the argument's name with the function name to make the form with an argument name unambiguous.
SQL function arguments can only be used as data values, not as identifiers. Thus for example this is reasonable:
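The elided "reasonable" example is presumably a statement like this, where the argument supplies a data value (mytable is a placeholder table):

    INSERT INTO mytable VALUES ($1);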
but this will not work:
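The elided counterexample presumably tried to use the argument as an identifier, which is not allowed:

    INSERT INTO $1 VALUES (42);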
The ability to use names to reference SQL function arguments was added in PostgreSQL 9.2. Functions to be used in older servers must use the $n notation.
The simplest possible SQL function has no arguments and simply returns a base type, such as integer:
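The elided example is presumably along these lines:

    CREATE FUNCTION one() RETURNS integer AS $$
        SELECT 1 AS result;
    $$ LANGUAGE SQL;

    SELECT one();

     one
    -----
       1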
Notice that we defined a column alias within the function body for the result of the function (with the name result), but this column alias is not visible outside the function. Hence, the result is labeled one instead of result.
It is almost as easy to define SQL functions that take base types as arguments:
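The elided example is presumably something like:

    CREATE FUNCTION add_em(x integer, y integer) RETURNS integer AS $$
        SELECT x + y;
    $$ LANGUAGE SQL;

    SELECT add_em(1, 2) AS answer;

     answer
    --------
          3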
Alternatively, we could dispense with names for the arguments and use numbers:
Here is a more useful function, which might be used to debit a bank account:
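The elided definition is presumably of this shape (the bank table, with accountno and balance columns, is assumed to exist):

    CREATE FUNCTION tf1 (accountno integer, debit numeric) RETURNS numeric AS $$
        UPDATE bank
            SET balance = balance - debit
            WHERE accountno = tf1.accountno;
        SELECT 1;
    $$ LANGUAGE SQL;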
A user could execute this function to debit account 17 by $100.00 as follows:
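Presumably:

    SELECT tf1(17, 100.0);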
In this example, we chose the name accountno for the first argument, but this is the same as the name of a column in the bank table. Within the UPDATE command, accountno refers to the column bank.accountno, so tf1.accountno must be used to refer to the argument. We could of course avoid this by using a different name for the argument.
In practice one would probably like a more useful result from the function than a constant 1, so a more likely definition is:
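Presumably something like:

    CREATE FUNCTION tf1 (accountno integer, debit numeric) RETURNS numeric AS $$
        UPDATE bank
            SET balance = balance - debit
            WHERE accountno = tf1.accountno;
        SELECT balance FROM bank WHERE accountno = tf1.accountno;
    $$ LANGUAGE SQL;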
which adjusts the balance and returns the new balance. The same thing could be done in one command using RETURNING:
A SQL function must return exactly its declared result type. This may require inserting an explicit cast. For example, suppose we wanted the previous add_em function to return type float8 instead. This won't work:
even though in other contexts PostgreSQL would be willing to insert an implicit cast to convert integer to float8. We need to write it as
When writing functions with arguments of composite types, we must not only specify which argument we want but also the desired attribute (field) of that argument. For example, suppose that emp is a table containing employee data, and therefore also the name of the composite type of each row of the table. Here is a function double_salary that computes what someone's salary would be if it were doubled:
Notice the use of the syntax $1.salary to select one field of the argument row value. Also notice how the calling SELECT command uses table_name.* to select the entire current row of a table as a composite value. The table row can alternatively be referenced using just the table name, like this:
Sometimes it is handy to construct a composite argument value on-the-fly. This can be done with the ROW construct. For example, we could adjust the data being passed to the function:
It is also possible to build a function that returns a composite type. This is an example of a function that returns a single emp row:
In this example we have specified each of the attributes with a constant value, but any computation could have been substituted for these constants.
Note two important things about defining the function:
The select list order in the query must be exactly the same as that in which the columns appear in the table associated with the composite type. (Naming the columns, as we did above, is irrelevant to the system.)
We must ensure each expression's type matches the corresponding column of the composite type, inserting a cast if necessary. Otherwise we'll get errors like this:
As with the base-type case, the function will not insert any casts automatically.
A different way to define the same function is:
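Presumably:

    CREATE FUNCTION new_emp() RETURNS emp AS $$
        SELECT ROW('None', 1000.0, 25, '(2,2)')::emp;
    $$ LANGUAGE SQL;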
Here we wrote a SELECT that returns just a single column of the correct composite type. This isn't really better in this situation, but it is a handy alternative in some cases — for example, if we need to compute the result by calling another function that returns the desired composite value. Another example is that if we are trying to write a function that returns a domain over composite, rather than a plain composite type, it is always necessary to write it as returning a single column, since there is no other way to produce a value that is exactly of the domain type.
We could call this function directly either by using it in a value expression:
or by calling it as a table function:
When you use a function that returns a composite type, you might want only one field (attribute) from its result. You can do that with syntax like this:
The extra parentheses are needed to keep the parser from getting confused. If you try to do it without them, you get something like this:
Another option is to use functional notation for extracting an attribute:
Another way to use a function returning a composite type is to pass the result to another function that accepts the correct row type as input:
An alternative way of describing a function's results is to define it with output parameters, as in this example:
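The elided example is presumably along these lines:

    CREATE FUNCTION add_em (IN x int, IN y int, OUT sum int)
    AS 'SELECT x + y'
    LANGUAGE SQL;

    SELECT add_em(3, 7);

     add_em
    --------
         10
    (1 row)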
What has essentially happened here is that we have created an anonymous composite type for the result of the function. The above example has the same end result as
but not having to bother with the separate composite type definition is often handy. Notice that the names attached to the output parameters are not just decoration, but determine the column names of the anonymous composite type. (If you omit a name for an output parameter, the system will choose a name on its own.)
Notice that output parameters are not included in the calling argument list when invoking such a function from SQL. This is because PostgreSQL considers only the input parameters to define the function's calling signature. That means also that only the input parameters matter when referencing the function for purposes such as dropping it. We could drop the above function with either of
Parameters can be marked as IN (the default), OUT, INOUT, or VARIADIC. An INOUT parameter serves as both an input parameter (part of the calling argument list) and an output parameter (part of the result record type). VARIADIC parameters are input parameters, but are treated specially as described next.
SQL functions can be declared to accept variable numbers of arguments, so long as all the “optional” arguments are of the same data type. The optional arguments will be passed to the function as an array. The function is declared by marking the last parameter as VARIADIC; this parameter must be declared as being of an array type. For example:
Effectively, all the actual arguments at or beyond the VARIADIC position are gathered up into a one-dimensional array, as if you had written
You can't actually write that, though — or at least, it will not match this function definition. A parameter marked VARIADIC matches one or more occurrences of its element type, not of its own type.
This prevents expansion of the function's variadic parameter into its element type, thereby allowing the array argument value to match normally. VARIADIC can only be attached to the last actual argument of a function call.
Specifying VARIADIC in the call is also the only way to pass an empty array to a variadic function, for example:
Simply writing SELECT mleast() does not work because a variadic parameter must match at least one actual argument. (You could define a second function also named mleast, with no parameters, if you wanted to allow such calls.)
but not these:
For example:
The = sign can also be used in place of the key word DEFAULT.
All SQL functions can be used in the FROM clause of a query, but this is particularly useful for functions returning composite types. If the function is defined to return a base type, the table function produces a one-column table. If the function is defined to return a composite type, the table function produces a column for each attribute of the composite type.
Here is an example:
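The elided example is presumably of this shape (it builds a small foo table and a table function over it):

    CREATE TABLE foo (fooid int, foosubid int, fooname text);
    INSERT INTO foo VALUES (1, 1, 'Joe');
    INSERT INTO foo VALUES (1, 2, 'Ed');
    INSERT INTO foo VALUES (2, 1, 'Mary');

    CREATE FUNCTION getfoo(int) RETURNS foo AS $$
        SELECT * FROM foo WHERE fooid = $1;
    $$ LANGUAGE SQL;

    SELECT *, upper(fooname) FROM getfoo(1) AS t1;

     fooid | foosubid | fooname | upper
    -------+----------+---------+-------
         1 |        1 | Joe     | JOE
    (1 row)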
As the example shows, we can work with the columns of the function's result just the same as if they were columns of a regular table.
Note that we only got one row out of the function. This is because we did not use SETOF. That is described in the next section.
When an SQL function is declared as returning SETOF sometype, the function's final query is executed to completion, and each row it outputs is returned as an element of the result set.
This feature is normally used when calling the function in the FROM clause. In this case each row returned by the function becomes a row of the table seen by the query. For example, assume that table foo has the same contents as above, and we say:
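Presumably:

    CREATE FUNCTION getfoo(int) RETURNS SETOF foo AS $$
        SELECT * FROM foo WHERE fooid = $1;
    $$ LANGUAGE SQL;

    SELECT * FROM getfoo(1) AS t1;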
Then we would get:
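Given the sample data above, the result would be:

     fooid | foosubid | fooname
    -------+----------+---------
         1 |        1 | Joe
         1 |        2 | Ed
    (2 rows)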
It is also possible to return multiple rows with the columns defined by output parameters, like this:
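The elided example is presumably along these lines (the tab table here is illustrative):

    CREATE TABLE tab (y int, z int);
    INSERT INTO tab VALUES (1, 2), (3, 4), (5, 6), (7, 8);

    CREATE FUNCTION sum_n_product_with_tab (x int, OUT sum int, OUT product int)
    RETURNS SETOF record
    AS $$
        SELECT $1 + tab.y, $1 * tab.y FROM tab;
    $$ LANGUAGE SQL;

    SELECT * FROM sum_n_product_with_tab(10);

     sum | product
    -----+---------
      11 |      10
      13 |      30
      15 |      50
      17 |      70
    (4 rows)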
The key point here is that you must write RETURNS SETOF record to indicate that the function returns multiple rows instead of just one. If there is only one output parameter, write that parameter's type instead of record.
This example does not do anything that we couldn't have done with a simple join, but in more complex calculations the option to put some of the work into a function can be quite convenient.
Functions returning sets can also be called in the select list of a query. For each row that the query generates by itself, the set-returning function is invoked, and an output row is generated for each element of the function's result set. The previous example could also be done with queries like these:
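The "previous example" (a listchildren function over a nodes table) is not shown in this copy; it is presumably along these lines, followed by the select-list calls:

    CREATE TABLE nodes (name text, parent text);
    INSERT INTO nodes VALUES ('Top', ''), ('Child1', 'Top'), ('Child2', 'Top'),
                             ('Child3', 'Top'), ('SubChild1', 'Child1'), ('SubChild2', 'Child1');

    CREATE FUNCTION listchildren(text) RETURNS SETOF text AS $$
        SELECT name FROM nodes WHERE parent = $1;
    $$ LANGUAGE SQL;

    SELECT listchildren('Top');
    SELECT name, listchildren(name) FROM nodes;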
In the last SELECT, notice that no output row appears for Child2, Child3, etc. This happens because listchildren returns an empty set for those arguments, so no result rows are generated. This is the same behavior as we got from an inner join to the function result when using the LATERAL syntax.
PostgreSQL's behavior for a set-returning function in a query's select list is almost exactly the same as if the set-returning function had been written in a LATERAL FROM-clause item instead. For example,
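Presumably:

    SELECT x, generate_series(1, 5) AS g FROM tab;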
is almost equivalent to
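Presumably:

    SELECT x, g FROM tab, LATERAL generate_series(1, 5) AS g;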
It would be exactly the same, except that in this specific example, the planner could choose to put g on the outside of the nested-loop join, since g has no actual lateral dependency on tab. That would result in a different output row order. Set-returning functions in the select list are always evaluated as though they are on the inside of a nested-loop join with the rest of the FROM clause, so that the function(s) are run to completion before the next row from the FROM clause is considered.
If there is more than one set-returning function in the query's select list, the behavior is similar to what you get from putting the functions into a single LATERAL ROWS FROM( ... )
FROM
-clause item. For each row from the underlying query, there is an output row using the first result from each function, then an output row using the second result, and so on. If some of the set-returning functions produce fewer outputs than others, null values are substituted for the missing data, so that the total number of rows emitted for one underlying row is the same as for the set-returning function that produced the most outputs. Thus the set-returning functions run “in lockstep” until they are all exhausted, and then execution continues with the next underlying row.
Set-returning functions can be nested in a select list, although that is not allowed in FROM
-clause items. In such cases, each level of nesting is treated separately, as though it were a separate LATERAL ROWS FROM( ... )
item. For example, in
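(A sketch; srf1 through srf5 stand for arbitrary set-returning functions, and x, y, z for columns of tab.)

    SELECT srf1(srf2(x), srf3(y)), srf4(srf5(z)) FROM tab;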
the set-returning functions srf2
, srf3
, and srf5
would be run in lockstep for each row of tab
, and then srf1
and srf4
would be applied in lockstep to each row produced by the lower functions.
Set-returning functions cannot be used within conditional-evaluation constructs, such as CASE
or COALESCE
. For example, consider
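(A sketch, using generate_series and an illustrative column x of tab.)

    SELECT x, CASE WHEN x > 0 THEN generate_series(1, 5) ELSE 0 END FROM tab;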
It might seem that this should produce five repetitions of input rows that have x > 0
, and a single repetition of those that do not; but actually, because generate_series(1, 5)
would be run in an implicit LATERAL FROM
item before the CASE
expression is ever evaluated, it would produce five repetitions of every input row. To reduce confusion, such cases produce a parse-time error instead.
If a function's last command is INSERT
, UPDATE
, or DELETE
with RETURNING
, that command will always be executed to completion, even if the function is not declared with SETOF
or the calling query does not fetch all the result rows. Any extra rows produced by the RETURNING
clause are silently dropped, but the commanded table modifications still happen (and are all completed before returning from the function).
Before PostgreSQL 10, putting more than one set-returning function in the same select list did not behave very sensibly unless they always produced equal numbers of rows. Otherwise, what you got was a number of output rows equal to the least common multiple of the numbers of rows produced by the set-returning functions. Also, nested set-returning functions did not work as described above; instead, a set-returning function could have at most one set-returning argument, and each nest of set-returning functions was run independently. Also, conditional execution (set-returning functions inside CASE
etc) was previously allowed, complicating things even more. Use of the LATERAL
syntax is recommended when writing queries that need to work in older PostgreSQL versions, because that will give consistent results across different versions. If you have a query that is relying on conditional execution of a set-returning function, you may be able to fix it by moving the conditional test into a custom set-returning function. For example,
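a query along these lines (a sketch; the columns x, y, z of tab are illustrative)

    SELECT x, CASE WHEN y > 0 THEN generate_series(1, z) ELSE 5 END FROM tab;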
could become
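a rewrite along these lines (a sketch; case_generate_series is a hypothetical helper written in PL/pgSQL):

    CREATE FUNCTION case_generate_series(cond bool, start int, fin int, els int)
        RETURNS SETOF int AS $$
    BEGIN
        IF cond THEN
            RETURN QUERY SELECT generate_series(start, fin);
        ELSE
            RETURN QUERY SELECT els;
        END IF;
    END
    $$ LANGUAGE plpgsql;

    SELECT x, case_generate_series(y > 0, 1, z, 5) FROM tab;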
This formulation will work the same in all versions of PostgreSQL.
SQL Functions Returning TABLE
There is another way to declare a function as returning a set: use the syntax RETURNS TABLE(columns). This is equivalent to using one or more OUT parameters, plus marking the function as returning SETOF record (or SETOF a single output parameter's type, as appropriate). This notation is specified in recent versions of the SQL standard, and thus may be more portable than using SETOF.
For example, the preceding sum-and-product example could also be done this way:
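A sketch (reusing the illustrative tab table; drop the earlier OUT-parameter version first if both sketches are tried):

    CREATE FUNCTION sum_n_product_with_tab (x int)
    RETURNS TABLE (sum int, product int) AS $$
        SELECT $1 + tab.y, $1 * tab.y FROM tab;
    $$ LANGUAGE SQL;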
OUT and INOUT parameters cannot be used together with the RETURNS TABLE notation; you must put all the output columns in the TABLE list.
Notice the use of the typecast 'a'::text
to specify that the argument is of type text
. This is required if the argument is just a string literal, since otherwise it would be treated as type unknown
, and array of unknown
is not a valid type. Without the typecast, you will get errors like this:
It is permitted to have polymorphic arguments with a fixed return type, but the converse is not. For example:
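As a sketch, a polymorphic-argument function with a fixed result type is accepted, while a polymorphic result with no polymorphic argument is rejected (is_greater and invalid_func are illustrative names):

    CREATE FUNCTION is_greater(anyelement, anyelement) RETURNS boolean AS $$
        SELECT $1 > $2;
    $$ LANGUAGE SQL;

    -- rejected: the result type cannot be deduced from the arguments
    CREATE FUNCTION invalid_func() RETURNS anyelement AS $$
        SELECT 1;
    $$ LANGUAGE SQL;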
Polymorphism can be used with functions that have output arguments. For example:
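A sketch (the function name dup is illustrative):

    CREATE FUNCTION dup (f1 anyelement, OUT f2 anyelement, OUT f3 anyarray)
    AS 'SELECT $1, ARRAY[$1, $1]' LANGUAGE SQL;

    SELECT * FROM dup(22);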
Polymorphism can also be used with variadic functions. For example:
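A sketch of the anyleast function referred to below:

    CREATE FUNCTION anyleast (VARIADIC anyarray) RETURNS anyelement AS $$
        SELECT min($1[i]) FROM generate_subscripts($1, 1) g(i);
    $$ LANGUAGE SQL;

    SELECT anyleast(10, -1, 5, 4);
    SELECT anyleast('abc'::text, 'ABC');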
The result of the last call above, which compares text values, will depend on the database's default collation. In C
locale the result will be ABC
, but in many other locales it will be abc
. The collation to use can be forced by adding a COLLATE
clause to any of the arguments, for example
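(a sketch, continuing the anyleast example)

    SELECT anyleast('abc'::text, 'ABC' COLLATE "C");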
Alternatively, if you wish a function to operate with a particular collation regardless of what it is called with, insert COLLATE
clauses as needed in the function definition. This version of anyleast
would always use en_US
locale to compare strings:
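A sketch (this assumes the en_US collation exists in the database; it redefines the earlier anyleast):

    CREATE OR REPLACE FUNCTION anyleast (VARIADIC anyarray) RETURNS anyelement AS $$
        SELECT min($1[i] COLLATE "en_US") FROM generate_subscripts($1, 1) g(i);
    $$ LANGUAGE SQL;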
But note that this will throw an error if applied to a non-collatable data type.
If no common collation can be identified among the actual arguments, then a SQL function treats its parameters as having their data types' default collation (which is usually the database's default collation, but could be different for parameters of domain types).
The behavior of collatable parameters can be thought of as a limited form of polymorphism, applicable only to textual data types.
but this usage is deprecated since it's easy to get confused. (See for details about these two notations for the composite value of a table row.)
The second way is described more fully in .
As explained in , the field notation and functional notation are equivalent.
This is not essentially different from the version of add_em
shown in . The real value of output parameters is that they provide a convenient way of defining functions that return several columns. For example,
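a sketch such as this (the function name sum_n_product is illustrative):

    CREATE FUNCTION sum_n_product (x int, y int, OUT sum int, OUT product int)
    AS 'SELECT $1 + $2, $1 * $2'
    LANGUAGE SQL;

    SELECT * FROM sum_n_product(11, 42);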
Sometimes it is useful to be able to pass an already-constructed array to a variadic function; this is particularly handy when one variadic function wants to pass on its array parameter to another one. Also, this is the only secure way to call a variadic function found in a schema that permits untrusted users to create objects; see . You can do this by specifying VARIADIC
in the call:
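(a sketch, assuming the numeric mleast function discussed earlier)

    SELECT mleast(VARIADIC ARRAY[10, -1, 5, 4.4]);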
The array element parameters generated from a variadic parameter are treated as not having any names of their own. This means it is not possible to call a variadic function using named arguments (), except when you specify VARIADIC
. For example, this will work:
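(a sketch, assuming mleast's variadic parameter is named arr)

    SELECT mleast(VARIADIC arr => ARRAY[10, -1, 5, 4.4]);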
Functions can be declared with default values for some or all input arguments. The default values are inserted whenever the function is called with insufficiently many actual arguments. Since arguments can only be omitted from the end of the actual argument list, all parameters after a parameter with a default value have to have default values as well. (Although the use of named argument notation could allow this restriction to be relaxed, it's still enforced so that positional argument notation works sensibly.) Whether or not you use it, this capability creates a need for precautions when calling functions in databases where some users mistrust other users; see .
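As a sketch (the function foo is hypothetical):

    CREATE FUNCTION foo(a int, b int DEFAULT 2, c int DEFAULT 3)
    RETURNS int
    LANGUAGE SQL
    AS 'SELECT $1 + $2 + $3';

    SELECT foo(10, 20, 30);  -- all arguments supplied
    SELECT foo(10, 20);      -- c defaults to 3
    SELECT foo(10);          -- b and c default to 2 and 3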
It is frequently useful to construct a query's result by invoking a set-returning function multiple times, with the parameters for each invocation coming from successive rows of a table or subquery. The preferred way to do this is to use the LATERAL
key word, which is described in . Here is an example using a set-returning function to enumerate elements of a tree structure:
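A sketch (the nodes table layout is an assumption, chosen to be consistent with the listchildren calls shown elsewhere in this section):

    CREATE TABLE nodes (name text, parent text);
    INSERT INTO nodes VALUES
        ('Top', NULL),
        ('Child1', 'Top'), ('Child2', 'Top'), ('Child3', 'Top'),
        ('Sub1', 'Child1'), ('Sub2', 'Child1');

    CREATE FUNCTION listchildren(text) RETURNS SETOF text AS $$
        SELECT name FROM nodes WHERE parent = $1
    $$ LANGUAGE SQL STABLE;

    -- one output row per parent/child pair; parents with no children drop out
    SELECT * FROM nodes n, LATERAL listchildren(n.name) child;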
SQL functions can be declared to accept and return the polymorphic types anyelement
, anyarray
, anynonarray
, anyenum
, and anyrange
. See for a more detailed explanation of polymorphic functions. Here is a polymorphic function make_array
that builds up an array from two arbitrary data type elements:
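A sketch of such a function:

    CREATE FUNCTION make_array(anyelement, anyelement) RETURNS anyarray AS $$
        SELECT ARRAY[$1, $2];
    $$ LANGUAGE SQL;

    SELECT make_array(1, 2) AS intarray, make_array('a'::text, 'b') AS textarray;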
When a SQL function has one or more parameters of collatable data types, a collation is identified for each function call depending on the collations assigned to the actual arguments, as described in . If a collation is successfully identified (i.e., there are no conflicts of implicit collations among the arguments) then all the collatable parameters are treated as having that collation implicitly. This will affect the behavior of collation-sensitive operations within the function. For example, using the anyleast
function described above, the result of comparing text arguments will depend on the collation identified for the call, as illustrated earlier.
Version: 11
If you are thinking about distributing your PostgreSQL extension modules, setting up a portable build system for them can be fairly difficult. Therefore the PostgreSQL installation provides a build infrastructure for extensions, called PGXS, so that simple extension modules can be built simply against an already installed server. PGXS is mainly intended for extensions that include C code, although it can be used for pure-SQL extensions too. Note that PGXS is not intended to be a universal build system framework that can be used to build any software interfacing to PostgreSQL; it simply automates common build rules for simple server extension modules. For more complicated packages, you might need to write your own build system.
To use the PGXS infrastructure for your extension, you must write a simple makefile. In the makefile, you need to set some variables and include the global PGXS makefile. Here is an example that builds an extension module named isbn_issn
, consisting of a shared library containing some C code, an extension control file, a SQL script, an include file (only needed if other modules might need to access the extension functions without going via SQL), and a documentation text file:
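A sketch of such a makefile (file names follow the isbn_issn naming used in this example):

    MODULES = isbn_issn
    EXTENSION = isbn_issn
    DATA = isbn_issn--1.0.sql
    DOCS = README.isbn_issn
    HEADERS_isbn_issn = isbn_issn.h

    PG_CONFIG = pg_config
    PGXS := $(shell $(PG_CONFIG) --pgxs)
    include $(PGXS)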
The last three lines should always be the same. Earlier in the file, you assign variables or add custom make rules.
Set one of these three variables to specify what is built:

MODULES
    list of shared-library objects to be built from source files with same stem (do not include library suffixes in this list)
MODULE_big
    a shared library to build from multiple source files (list object files in OBJS)
PROGRAM
    an executable program to build (list object files in OBJS)

The following variables can also be set:

EXTENSION
    extension name(s); for each name you must provide an extension.control file, which will be installed into prefix/share/extension
MODULEDIR
    subdirectory of prefix/share into which DATA and DOCS files should be installed (if not set, default is extension if EXTENSION is set, or contrib if not)
DATA
    random files to install into prefix/share/$MODULEDIR
DATA_built
    random files to install into prefix/share/$MODULEDIR, which need to be built first
DATA_TSEARCH
    random files to install under prefix/share/tsearch_data
DOCS
    random files to install under prefix/doc/$MODULEDIR
HEADERS
HEADERS_built
    files to (optionally build and) install under prefix/include/server/$MODULEDIR/$MODULE_big.
    Unlike DATA_built, files in HEADERS_built are not removed by the clean target; if you want them removed, also add them to EXTRA_CLEAN or add your own rules to do it.
HEADERS_$MODULE
HEADERS_built_$MODULE
    files to install (after building if specified) under prefix/include/server/$MODULEDIR/$MODULE, where $MODULE must be a module name used in MODULES or MODULE_big.
    Unlike DATA_built, files in HEADERS_built_$MODULE are not removed by the clean target; if you want them removed, also add them to EXTRA_CLEAN or add your own rules to do it.
    It is legal to use both variables for the same module, or any combination, unless you have two module names in the MODULES list that differ only by the presence of a prefix built_, which would cause ambiguity. In that (hopefully unlikely) case, you should use only the HEADERS_built_$MODULE variables.
SCRIPTS
    script files (not binaries) to install into prefix/bin
SCRIPTS_built
    script files (not binaries) to install into prefix/bin, which need to be built first
REGRESS
    list of regression test cases (without suffix), see below
REGRESS_OPTS
    additional switches to pass to pg_regress
ISOLATION
    list of isolation test cases, see below for more details
ISOLATION_OPTS
    additional switches to pass to pg_isolation_regress
TAP_TESTS
    switch defining if TAP tests need to be run, see below
NO_INSTALLCHECK
    don't define an installcheck target, useful e.g. if tests require special configuration, or don't use pg_regress
EXTRA_CLEAN
    extra files to remove in make clean
PG_CPPFLAGS
    will be prepended to CPPFLAGS
PG_CFLAGS
    will be appended to CFLAGS
PG_CXXFLAGS
    will be appended to CXXFLAGS
PG_LDFLAGS
    will be prepended to LDFLAGS
PG_LIBS
    will be added to PROGRAM link line
SHLIB_LINK
    will be added to MODULE_big link line
PG_CONFIG
    path to pg_config program for the PostgreSQL installation to build against (typically just pg_config to use the first one in your PATH)
Put this makefile as Makefile
in the directory which holds your extension. Then you can do make
to compile, and then make install
to install your module. By default, the extension is compiled and installed for the PostgreSQL installation that corresponds to the first pg_config
program found in your PATH
. You can use a different installation by setting PG_CONFIG
to point to its pg_config
program, either within the makefile or on the make
command line.
You can also run make
in a directory outside the source tree of your extension, if you want to keep the build directory separate. This procedure is also called a VPATH build. Here's how:
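One possible sequence (the source-tree path is a placeholder):

    mkdir build_dir
    cd build_dir
    make -f /path/to/extension/source/tree/Makefile
    make -f /path/to/extension/source/tree/Makefile install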
Alternatively, you can set up a directory for a VPATH build in a similar way to how it is done for the core code. One way to do this is using the core script config/prep_buildtree
. Once this has been done you can build by setting the make
variable VPATH
like this:
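(again with a placeholder source-tree path)

    make VPATH=/path/to/extension/source/tree
    make VPATH=/path/to/extension/source/tree install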
This procedure can work with a greater variety of directory layouts.
The scripts listed in the REGRESS
variable are used for regression testing of your module, which can be invoked by make installcheck
after doing make install
. For this to work you must have a running PostgreSQL server. The script files listed in REGRESS
must appear in a subdirectory named sql/
in your extension's directory. These files must have extension .sql
, which must not be included in the REGRESS
list in the makefile. For each test there should also be a file containing the expected output in a subdirectory named expected/
, with the same stem and extension .out
. make installcheck
executes each test script with psql, and compares the resulting output to the matching expected file. Any differences will be written to the file regression.diffs
in diff -c
format. Note that trying to run a test that is missing its expected file will be reported as “trouble”, so make sure you have all expected files.
The scripts listed in the ISOLATION
variable are used for tests stressing behavior of concurrent session with your module, which can be invoked by make installcheck
after doing make install
. For this to work you must have a running PostgreSQL server. The script files listed in ISOLATION
must appear in a subdirectory named specs/
in your extension's directory. These files must have extension .spec
, which must not be included in the ISOLATION
list in the makefile. For each test there should also be a file containing the expected output in a subdirectory named expected/
, with the same stem and extension .out
. make installcheck
executes each test script, and compares the resulting output to the matching expected file. Any differences will be written to the file output_iso/regression.diffs
in diff -c
format. Note that trying to run a test that is missing its expected file will be reported as “trouble”, so make sure you have all expected files.
TAP_TESTS
enables the use of TAP tests. Data from each run is present in a subdirectory named tmp_check/
. See also Section 32.4 for more details.
The easiest way to create the expected files is to create empty files, then do a test run (which will of course report differences). Inspect the actual result files found in the results/
directory (for tests in REGRESS
), or output_iso/results/
directory (for tests in ISOLATION
), then copy them to expected/
if they match what you expect from the test.
Version: 11
Aggregate functions in PostgreSQL are defined in terms of state values and state transition functions. That is, an aggregate operates using a state value that is updated as each successive input row is processed. To define a new aggregate function, one selects a data type for the state value, an initial value for the state, and a state transition function. The state transition function takes the previous state value and the aggregate's input value(s) for the current row, and returns a new state value. A final function can also be specified, in case the desired result of the aggregate is different from the data that needs to be kept in the running state value. The final function takes the ending state value and returns whatever is wanted as the aggregate result. In principle, the transition and final functions are just ordinary functions that could also be used outside the context of the aggregate. (In practice, it's often helpful for performance reasons to create specialized transition functions that can only work when called as part of an aggregate.)
Thus, in addition to the argument and result data types seen by a user of the aggregate, there is an internal state-value data type that might be different from both the argument and result types.
If we define an aggregate that does not use a final function, we have an aggregate that computes a running function of the column values from each row. sum
is an example of this kind of aggregate. sum
starts at zero and always adds the current row's value to its running total. For example, if we want to make a sum
aggregate to work on a data type for complex numbers, we only need the addition function for that data type. The aggregate definition would be:
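A sketch (complex_add is assumed to be the user-defined addition function for the complex type):

    CREATE AGGREGATE sum (complex)
    (
        sfunc = complex_add,
        stype = complex,
        initcond = '(0,0)'
    );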
which we might use like this:
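(the table test_complex and its column a are illustrative)

    SELECT sum(a) FROM test_complex;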
(Notice that we are relying on function overloading: there is more than one aggregate named sum
, but PostgreSQL can figure out which kind of sum applies to a column of type complex
.)
The above definition of sum
will return zero (the initial state value) if there are no nonnull input values. Perhaps we want to return null in that case instead — the SQL standard expects sum
to behave that way. We can do this simply by omitting the initcond
phrase, so that the initial state value is null. Ordinarily this would mean that the sfunc
would need to check for a null state-value input. But for sum
and some other simple aggregates like max
and min
, it is sufficient to insert the first nonnull input value into the state variable and then start applying the transition function at the second nonnull input value. PostgreSQL will do that automatically if the initial state value is null and the transition function is marked “strict” (i.e., not to be called for null inputs).
Another bit of default behavior for a “strict” transition function is that the previous state value is retained unchanged whenever a null input value is encountered. Thus, null values are ignored. If you need some other behavior for null inputs, do not declare your transition function as strict; instead code it to test for null inputs and do whatever is needed.
avg
(average) is a more complex example of an aggregate. It requires two pieces of running state: the sum of the inputs and the count of the number of inputs. The final result is obtained by dividing these quantities. Average is typically implemented by using an array as the state value. For example, the built-in implementation of avg(float8)
looks like:
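Approximately like this (a sketch based on the built-in float8_accum and float8_avg support functions):

    CREATE AGGREGATE avg (float8)
    (
        sfunc = float8_accum,
        stype = float8[],
        finalfunc = float8_avg,
        initcond = '{0,0,0}'
    );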
float8_accum
requires a three-element array, not just two elements, because it accumulates the sum of squares as well as the sum and count of the inputs. This is so that it can be used for some other aggregates as well as avg
.
Aggregate function calls in SQL allow DISTINCT
and ORDER BY
options that control which rows are fed to the aggregate's transition function and in what order. These options are implemented behind the scenes and are not the concern of the aggregate's support functions.
For further details see the CREATE AGGREGATE command.
Aggregate functions can optionally support moving-aggregate mode, which allows substantially faster execution of aggregate functions within windows with moving frame starting points. (See Section 3.5 and Section 4.2.8 for information about use of aggregate functions as window functions.) The basic idea is that in addition to a normal “forward” transition function, the aggregate provides an inverse transition function, which allows rows to be removed from the aggregate's running state value when they exit the window frame. For example a sum
aggregate, which uses addition as the forward transition function, would use subtraction as the inverse transition function. Without an inverse transition function, the window function mechanism must recalculate the aggregate from scratch each time the frame starting point moves, resulting in run time proportional to the number of input rows times the average frame length. With an inverse transition function, the run time is only proportional to the number of input rows.
The inverse transition function is passed the current state value and the aggregate input value(s) for the earliest row included in the current state. It must reconstruct what the state value would have been if the given input row had never been aggregated, but only the rows following it. This sometimes requires that the forward transition function keep more state than is needed for plain aggregation mode. Therefore, the moving-aggregate mode uses a completely separate implementation from the plain mode: it has its own state data type, its own forward transition function, and its own final function if needed. These can be the same as the plain mode's data type and functions, if there is no need for extra state.
As an example, we could extend the sum
aggregate given above to support moving-aggregate mode like this:
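A sketch (complex_sub is assumed to be a subtraction function for the complex type):

    CREATE AGGREGATE sum (complex)
    (
        sfunc = complex_add,
        stype = complex,
        initcond = '(0,0)',
        msfunc = complex_add,
        minvfunc = complex_sub,
        mstype = complex,
        minitcond = '(0,0)'
    );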
The parameters whose names begin with m
define the moving-aggregate implementation. Except for the inverse transition function minvfunc
, they correspond to the plain-aggregate parameters without m
.
The forward transition function for moving-aggregate mode is not allowed to return null as the new state value. If the inverse transition function returns null, this is taken as an indication that the inverse function cannot reverse the state calculation for this particular input, and so the aggregate calculation will be redone from scratch for the current frame starting position. This convention allows moving-aggregate mode to be used in situations where there are some infrequent cases that are impractical to reverse out of the running state value. The inverse transition function can “punt” on these cases, and yet still come out ahead so long as it can work for most cases. As an example, an aggregate working with floating-point numbers might choose to punt when a NaN
(not a number) input has to be removed from the running state value.
When writing moving-aggregate support functions, it is important to be sure that the inverse transition function can reconstruct the correct state value exactly. Otherwise there might be user-visible differences in results depending on whether the moving-aggregate mode is used. An example of an aggregate for which adding an inverse transition function seems easy at first, yet where this requirement cannot be met is sum
over float4
or float8
inputs. A naive declaration of sum(float8
) could be
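something along these lines (a sketch; float8pl and float8mi are the built-in float8 addition and subtraction functions, and the aggregate name is illustrative):

    CREATE AGGREGATE unsafe_sum (float8)
    (
        stype = float8,
        sfunc = float8pl,
        mstype = float8,
        msfunc = float8pl,
        minvfunc = float8mi
    );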
This aggregate, however, can give wildly different results than it would have without the inverse transition function. For example, consider
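a window query such as this sketch:

    SELECT
        unsafe_sum(x) OVER (ORDER BY n ROWS BETWEEN CURRENT ROW AND 1 FOLLOWING)
    FROM (VALUES (1, 1.0e20::float8),
                 (2, 1.0::float8)) AS v (n, x);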
This query returns 0
as its second result, rather than the expected answer of 1
. The cause is the limited precision of floating-point values: adding 1
to 1e20
results in 1e20
again, and so subtracting 1e20
from that yields 0
, not 1
. Note that this is a limitation of floating-point arithmetic in general, not a limitation of PostgreSQL.
Aggregate functions can use polymorphic state transition functions or final functions, so that the same functions can be used to implement multiple aggregates. See Section 38.2.5 for an explanation of polymorphic functions. Going a step further, the aggregate function itself can be specified with polymorphic input type(s) and state type, allowing a single aggregate definition to serve for multiple input data types. Here is an example of a polymorphic aggregate:
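A sketch (array_accum is an illustrative name; array_append is the built-in function that appends an element to an array):

    CREATE AGGREGATE array_accum (anyelement)
    (
        sfunc = array_append,
        stype = anyarray,
        initcond = '{}'
    );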
Here, the actual state type for any given aggregate call is the array type having the actual input type as elements. The behavior of the aggregate is to concatenate all the inputs into an array of that type. (Note: the built-in aggregate array_agg
provides similar functionality, with better performance than this definition would have.)
Here's the output using two different actual data types as arguments:
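For instance, queries along these lines run the same aggregate over name and over regtype inputs (the system catalogs used here are just convenient sample data):

    SELECT attrelid::regclass, array_accum(attname)
        FROM pg_attribute
        WHERE attnum > 0 AND attrelid = 'pg_tablespace'::regclass
        GROUP BY attrelid;

    SELECT attrelid::regclass, array_accum(atttypid::regtype)
        FROM pg_attribute
        WHERE attnum > 0 AND attrelid = 'pg_tablespace'::regclass
        GROUP BY attrelid;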
Ordinarily, an aggregate function with a polymorphic result type has a polymorphic state type, as in the above example. This is necessary because otherwise the final function cannot be declared sensibly: it would need to have a polymorphic result type but no polymorphic argument type, which CREATE FUNCTION
will reject on the grounds that the result type cannot be deduced from a call. But sometimes it is inconvenient to use a polymorphic state type. The most common case is where the aggregate support functions are to be written in C and the state type should be declared as internal
because there is no SQL-level equivalent for it. To address this case, it is possible to declare the final function as taking extra “dummy” arguments that match the input arguments of the aggregate. Such dummy arguments are always passed as null values since no specific value is available when the final function is called. Their only use is to allow a polymorphic final function's result type to be connected to the aggregate's input type(s). For example, the definition of the built-in aggregate array_agg
is equivalent to
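Roughly like this (a sketch; array_agg_transfn is assumed to be the matching built-in transition function):

    CREATE AGGREGATE array_agg (anynonarray)
    (
        sfunc = array_agg_transfn,
        stype = internal,
        finalfunc = array_agg_finalfn,
        finalfunc_extra
    );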
Here, the finalfunc_extra
option specifies that the final function receives, in addition to the state value, extra dummy argument(s) corresponding to the aggregate's input argument(s). The extra anynonarray
argument allows the declaration of array_agg_finalfn
to be valid.
An aggregate function can be made to accept a varying number of arguments by declaring its last argument as a VARIADIC
array, in much the same fashion as for regular functions; see Section 38.5.5. The aggregate's transition function(s) must have the same array type as their last argument. The transition function(s) typically would also be marked VARIADIC
, but this is not strictly required.
Variadic aggregates are easily misused in connection with the ORDER BY
option (see Section 4.2.7), since the parser cannot tell whether the wrong number of actual arguments have been given in such a combination. Keep in mind that everything to the right of ORDER BY
is a sort key, not an argument to the aggregate. For example, in
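(a sketch; myaggregate and some_table are illustrative)

    SELECT myaggregate(a ORDER BY a, b, c) FROM some_table;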
the parser will see this as a single aggregate function argument and three sort keys. However, the user might have intended
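(again a sketch)

    SELECT myaggregate(a, b, c ORDER BY a) FROM some_table;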
If myaggregate
is variadic, both these calls could be perfectly valid.
For the same reason, it's wise to think twice before creating aggregate functions with the same names and different numbers of regular arguments.
The aggregates we have been describing so far are “normal” aggregates. PostgreSQL also supports ordered-set aggregates, which differ from normal aggregates in two key ways. First, in addition to ordinary aggregated arguments that are evaluated once per input row, an ordered-set aggregate can have “direct” arguments that are evaluated only once per aggregation operation. Second, the syntax for the ordinary aggregated arguments specifies a sort ordering for them explicitly. An ordered-set aggregate is usually used to implement a computation that depends on a specific row ordering, for instance rank or percentile, so that the sort ordering is a required aspect of any call. For example, the built-in definition of percentile_disc
is equivalent to:
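Roughly like this (a sketch; ordered_set_transition and percentile_disc_final are assumed to be the matching built-in support functions):

    CREATE AGGREGATE percentile_disc (float8 ORDER BY anyelement)
    (
        sfunc = ordered_set_transition,
        stype = internal,
        finalfunc = percentile_disc_final,
        finalfunc_extra
    );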
This aggregate takes a float8
direct argument (the percentile fraction) and an aggregated input that can be of any sortable data type. It could be used to obtain a median household income like this:
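(a sketch; households and income are illustrative names)

    SELECT percentile_disc(0.5) WITHIN GROUP (ORDER BY income) FROM households;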
Here, 0.5
is a direct argument; it would make no sense for the percentile fraction to be a value varying across rows.
Unlike the case for normal aggregates, the sorting of input rows for an ordered-set aggregate is not done behind the scenes, but is the responsibility of the aggregate's support functions. The typical implementation approach is to keep a reference to a “tuplesort” object in the aggregate's state value, feed the incoming rows into that object, and then complete the sorting and read out the data in the final function. This design allows the final function to perform special operations such as injecting additional “hypothetical” rows into the data to be sorted. While normal aggregates can often be implemented with support functions written in PL/pgSQL or another PL language, ordered-set aggregates generally have to be written in C, since their state values aren't definable as any SQL data type. (In the above example, notice that the state value is declared as type internal
— this is typical.) Also, because the final function performs the sort, it is not possible to continue adding input rows by executing the transition function again later. This means the final function is not READ_ONLY
; it must be declared in CREATE AGGREGATE as READ_WRITE
, or as SHAREABLE
if it's possible for additional final-function calls to make use of the already-sorted state.
The state transition function for an ordered-set aggregate receives the current state value plus the aggregated input values for each row, and returns the updated state value. This is the same definition as for normal aggregates, but note that the direct arguments (if any) are not provided. The final function receives the last state value, the values of the direct arguments if any, and (if finalfunc_extra
is specified) null values corresponding to the aggregated input(s). As with normal aggregates, finalfunc_extra
is only really useful if the aggregate is polymorphic; then the extra dummy argument(s) are needed to connect the final function's result type to the aggregate's input type(s).
Currently, ordered-set aggregates cannot be used as window functions, and therefore there is no need for them to support moving-aggregate mode.
Optionally, an aggregate function can support partial aggregation. The idea of partial aggregation is to run the aggregate's state transition function over different subsets of the input data independently, and then to combine the state values resulting from those subsets to produce the same state value that would have resulted from scanning all the input in a single operation. This mode can be used for parallel aggregation by having different worker processes scan different portions of a table. Each worker produces a partial state value, and at the end those state values are combined to produce a final state value. (In the future this mode might also be used for purposes such as combining aggregations over local and remote tables; but that is not implemented yet.)
To support partial aggregation, the aggregate definition must provide a combine function, which takes two values of the aggregate's state type (representing the results of aggregating over two subsets of the input rows) and produces a new value of the state type, representing what the state would have been after aggregating over the combination of those sets of rows. It is unspecified what the relative order of the input rows from the two sets would have been. This means that it's usually impossible to define a useful combine function for aggregates that are sensitive to input row order.
As simple examples, MAX
and MIN
aggregates can be made to support partial aggregation by specifying the combine function as the same greater-of-two or lesser-of-two comparison function that is used as their transition function. SUM
aggregates just need an addition function as combine function. (Again, this is the same as their transition function, unless the state value is wider than the input data type.)
The combine function is treated much like a transition function that happens to take a value of the state type, not of the underlying input type, as its second argument. In particular, the rules for dealing with null values and strict functions are similar. Also, if the aggregate definition specifies a non-null initcond
, keep in mind that that will be used not only as the initial state for each partial aggregation run, but also as the initial state for the combine function, which will be called to combine each partial result into that state.
If the aggregate's state type is declared as internal
, it is the combine function's responsibility that its result is allocated in the correct memory context for aggregate state values. This means in particular that when the first input is NULL
it's invalid to simply return the second input, as that value will be in the wrong context and will not have sufficient lifespan.
When the aggregate's state type is declared as internal
, it is usually also appropriate for the aggregate definition to provide a serialization function and a deserialization function, which allow such a state value to be copied from one process to another. Without these functions, parallel aggregation cannot be performed, and future applications such as local/remote aggregation will probably not work either.
A serialization function must take a single argument of type internal
and return a result of type bytea
, which represents the state value packaged up into a flat blob of bytes. Conversely, a deserialization function reverses that conversion. It must take two arguments of types bytea
and internal
, and return a result of type internal
. (The second argument is unused and is always zero, but it is required for type-safety reasons.) The result of the deserialization function should simply be allocated in the current memory context, as unlike the combine function's result, it is not long-lived.
Worth noting also is that for an aggregate to be executed in parallel, the aggregate itself must be marked PARALLEL SAFE
. The parallel-safety markings on its support functions are not consulted.
A function written in C can detect that it is being called as an aggregate support function by calling AggCheckCallContext
, for example:
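A minimal sketch inside a C-language function (fcinfo is the usual fmgr call-info argument):

    if (AggCheckCallContext(fcinfo, NULL))
    {
        /* being called as part of an aggregate */
    }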
One reason for checking this is that when it is true, the first input must be a temporary state value and can therefore safely be modified in-place rather than allocating a new copy. See int8inc()
for an example. (While aggregate transition functions are always allowed to modify the transition value in-place, aggregate final functions are generally discouraged from doing so; if they do so, the behavior must be declared when creating the aggregate. See CREATE AGGREGATE for more detail.)
The second argument of AggCheckCallContext
can be used to retrieve the memory context in which aggregate state values are being kept. This is useful for transition functions that wish to use “expanded” objects (see Section 38.12.1) as their state values. On first call, the transition function should return an expanded object whose memory context is a child of the aggregate state context, and then keep returning the same expanded object on subsequent calls. See array_append()
for an example. (array_append()
is not the transition function of any built-in aggregate, but it is written to behave efficiently when used as transition function of a custom aggregate.)
Another support routine available to aggregate functions written in C is AggGetAggref
, which returns the Aggref
parse node that defines the aggregate call. This is mainly useful for ordered-set aggregates, which can inspect the substructure of the Aggref
node to find out what sort ordering they are supposed to implement. Examples can be found in orderedsetaggs.c
in the PostgreSQL source code.
Version: 11
A useful extension to PostgreSQL typically includes multiple SQL objects; for example, a new data type will require new functions, new operators, and probably new index operator classes. It is helpful to collect all these objects into a single package to simplify database management. PostgreSQL calls such a package an extension. To define an extension, you need at least a script file that contains the SQL commands to create the extension's objects, and a control file that specifies a few basic properties of the extension itself. If the extension includes C code, there will typically also be a shared library file into which the C code has been built. Once you have these files, a simple CREATE EXTENSION command loads the objects into your database.
The main advantage of using an extension, rather than just running the SQL script to load a bunch of “loose” objects into your database, is that PostgreSQL will then understand that the objects of the extension go together. You can drop all the objects with a single DROP EXTENSION command (no need to maintain a separate “uninstall” script). Even more useful, pg_dump knows that it should not dump the individual member objects of the extension — it will just include a CREATE EXTENSION
command in dumps, instead. This vastly simplifies migration to a new version of the extension that might contain more or different objects than the old version. Note however that you must have the extension's control, script, and other files available when loading such a dump into a new database.
PostgreSQL will not let you drop an individual object contained in an extension, except by dropping the whole extension. Also, while you can change the definition of an extension member object (for example, via CREATE OR REPLACE FUNCTION
for a function), bear in mind that the modified definition will not be dumped by pg_dump. Such a change is usually only sensible if you concurrently make the same change in the extension's script file. (But there are special provisions for tables containing configuration data; see Section 37.17.4.) In production situations, it's generally better to create an extension update script to perform changes to extension member objects.
The extension script may set privileges on objects that are part of the extension via GRANT
and REVOKE
statements. The final set of privileges for each object (if any are set) will be stored in the pg_init_privs
system catalog. When pg_dump is used, the CREATE EXTENSION
command will be included in the dump, followed by the set of GRANT
and REVOKE
statements necessary to set the privileges on the objects to what they were at the time the dump was taken.
PostgreSQL does not currently support extension scripts issuing CREATE POLICY
or SECURITY LABEL
statements. These are expected to be set after the extension has been created. All RLS policies and security labels on extension objects will be included in dumps created by pg_dump.
The extension mechanism also has provisions for packaging modification scripts that adjust the definitions of the SQL objects contained in an extension. For example, if version 1.1 of an extension adds one function and changes the body of another function compared to 1.0, the extension author can provide an update script that makes just those two changes. The ALTER EXTENSION UPDATE
command can then be used to apply these changes and track which version of the extension is actually installed in a given database.
The kinds of SQL objects that can be members of an extension are shown in the description of ALTER EXTENSION. Notably, objects that are database-cluster-wide, such as databases, roles, and tablespaces, cannot be extension members since an extension is only known within one database. (Although an extension script is not prohibited from creating such objects, if it does so they will not be tracked as part of the extension.) Also notice that while a table can be a member of an extension, its subsidiary objects such as indexes are not directly considered members of the extension. Another important point is that schemas can belong to extensions, but not vice versa: an extension as such has an unqualified name and does not exist “within” any schema. The extension's member objects, however, will belong to schemas whenever appropriate for their object types. It may or may not be appropriate for an extension to own the schema(s) its member objects are within.
If an extension's script creates any temporary objects (such as temp tables), those objects are treated as extension members for the remainder of the current session, but are automatically dropped at session end, as any temporary object would be. This is an exception to the rule that extension member objects cannot be dropped without dropping the whole extension.
Most extensions should assume little about the particular database they occupy. In particular, unless you have used SET search_path = pg_temp, assume that each unqualified name could resolve to an object defined by a malicious user. Beware of constructs that depend on search_path implicitly: IN
and CASE expression WHEN
always select an operator using the search path. In their place, use OPERATOR(schema.=) ANY
and CASE WHEN expression
.
The CREATE EXTENSION command relies on a control file for each extension, which must be named the same as the extension with a suffix of .control
, and must be placed in the installation's SHAREDIR/extension
directory. There must also be at least one SQL script file, which follows the naming pattern extension
--version
.sql (for example, foo--1.0.sql
for version 1.0
of extension foo
). By default, the script file(s) are also placed in the SHAREDIR/extension
directory; but the control file can specify a different directory for the script file(s).
The file format for an extension control file is the same as for the postgresql.conf
file, namely a list of parameter_name
=
value
assignments, one per line. Blank lines and comments introduced by #
are allowed. Be sure to quote any value that is not a single word or number.
A control file can set the following parameters:directory
(string
)
The directory containing the extension's SQL script file(s). Unless an absolute path is given, the name is relative to the installation's SHAREDIR
directory. The default behavior is equivalent to specifying directory = 'extension'
.default_version
(string
)
The default version of the extension (the one that will be installed if no version is specified in CREATE EXTENSION
). Although this can be omitted, that will result in CREATE EXTENSION
failing if no VERSION
option appears, so you generally don't want to do that.comment
(string
)
A comment (any string) about the extension. The comment is applied when initially creating an extension, but not during extension updates (since that might override user-added comments). Alternatively, the extension's comment can be set by writing a COMMENT command in the script file.encoding
(string
)
The character set encoding used by the script file(s). This should be specified if the script files contain any non-ASCII characters. Otherwise the files will be assumed to be in the database encoding.module_pathname
(string
)
The value of this parameter will be substituted for each occurrence of MODULE_PATHNAME
in the script file(s). If it is not set, no substitution is made. Typically, this is set to $libdir/
shared_library_name
and then MODULE_PATHNAME
is used in CREATE FUNCTION
commands for C-language functions, so that the script files do not need to hard-wire the name of the shared library.requires
(string
)
A list of names of extensions that this extension depends on, for example requires = 'foo, bar'
. Those extensions must be installed before this one can be installed.superuser
(boolean
)
If this parameter is true
(which is the default), only superusers can create the extension or update it to a new version. If it is set to false
, just the privileges required to execute the commands in the installation or update script are required.relocatable
(boolean
)
An extension is relocatable if it is possible to move its contained objects into a different schema after initial creation of the extension. The default is false
, i.e. the extension is not relocatable. See Section 37.17.3 for more information.schema
(string
)
This parameter can only be set for non-relocatable extensions. It forces the extension to be loaded into exactly the named schema and not any other. The schema
parameter is consulted only when initially creating an extension, not during extension updates. See Section 37.17.3 for more information.
In addition to the primary control file extension
.control, an extension can have secondary control files named in the style extension
--version
.control. If supplied, these must be located in the script file directory. Secondary control files follow the same format as the primary control file. Any parameters set in a secondary control file override the primary control file when installing or updating to that version of the extension. However, the parameters directory
and default_version
cannot be set in a secondary control file.
An extension's SQL script files can contain any SQL commands, except for transaction control commands (BEGIN
, COMMIT
, etc) and commands that cannot be executed inside a transaction block (such as VACUUM
). This is because the script files are implicitly executed within a transaction block.
An extension's SQL script files can also contain lines beginning with \echo
, which will be ignored (treated as comments) by the extension mechanism. This provision is commonly used to throw an error if the script file is fed to psql rather than being loaded via CREATE EXTENSION
(see example script in Section 37.17.7). Without that, users might accidentally load the extension's contents as “loose” objects rather than as an extension, a state of affairs that's a bit tedious to recover from.
While the script files can contain any characters allowed by the specified encoding, control files should contain only plain ASCII, because there is no way for PostgreSQL to know what encoding a control file is in. In practice this is only an issue if you want to use non-ASCII characters in the extension's comment. Recommended practice in that case is to not use the control file comment
parameter, but instead use COMMENT ON EXTENSION
within a script file to set the comment.
Users often wish to load the objects contained in an extension into a different schema than the extension's author had in mind. There are three supported levels of relocatability:
A fully relocatable extension can be moved into another schema at any time, even after it's been loaded into a database. This is done with the ALTER EXTENSION SET SCHEMA
command, which automatically renames all the member objects into the new schema. Normally, this is only possible if the extension contains no internal assumptions about what schema any of its objects are in. Also, the extension's objects must all be in one schema to begin with (ignoring objects that do not belong to any schema, such as procedural languages). Mark a fully relocatable extension by setting relocatable = true
in its control file.
An extension might be relocatable during installation but not afterwards. This is typically the case if the extension's script file needs to reference the target schema explicitly, for example in setting search_path
properties for SQL functions. For such an extension, set relocatable = false
in its control file, and use @extschema@
to refer to the target schema in the script file. All occurrences of this string will be replaced by the actual target schema's name before the script is executed. The user can set the target schema using the SCHEMA
option of CREATE EXTENSION
.
If the extension does not support relocation at all, set relocatable = false
in its control file, and also set schema
to the name of the intended target schema. This will prevent use of the SCHEMA
option of CREATE EXTENSION
, unless it specifies the same schema named in the control file. This choice is typically necessary if the extension contains internal assumptions about schema names that can't be replaced by uses of @extschema@
. The @extschema@
substitution mechanism is available in this case too, although it is of limited use since the schema name is determined by the control file.
In all cases, the script file will be executed with search_path initially set to point to the target schema; that is, CREATE EXTENSION
does the equivalent of this:
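That is, roughly:

    SET LOCAL search_path TO @extschema@;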
This allows the objects created by the script file to go into the target schema. The script file can change search_path
if it wishes, but that is generally undesirable. search_path
is restored to its previous setting upon completion of CREATE EXTENSION
.
The target schema is determined by the schema
parameter in the control file if that is given, otherwise by the SCHEMA
option of CREATE EXTENSION
if that is given, otherwise the current default object creation schema (the first one in the caller's search_path
). When the control file schema
parameter is used, the target schema will be created if it doesn't already exist, but in the other two cases it must already exist.
If any prerequisite extensions are listed in requires
in the control file, their target schemas are appended to the initial setting of search_path
. This allows their objects to be visible to the new extension's script file.
Although a non-relocatable extension can contain objects spread across multiple schemas, it is usually desirable to place all the objects meant for external use into a single schema, which is considered the extension's target schema. Such an arrangement works conveniently with the default setting of search_path
during creation of dependent extensions.
Some extensions include configuration tables, which contain data that might be added or changed by the user after installation of the extension. Ordinarily, if a table is part of an extension, neither the table's definition nor its content will be dumped by pg_dump. But that behavior is undesirable for a configuration table; any data changes made by the user need to be included in dumps, or the extension will behave differently after a dump and reload.
To solve this problem, an extension's script file can mark a table or a sequence it has created as a configuration relation, which will cause pg_dump to include the table's or the sequence's contents (not its definition) in dumps. To do that, call the function pg_extension_config_dump(regclass, text)
after creating the table or the sequence, for example
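(a sketch; my_config and its columns are illustrative)

    CREATE TABLE my_config (key text, value text);
    CREATE SEQUENCE my_config_seq;

    SELECT pg_catalog.pg_extension_config_dump('my_config', '');
    SELECT pg_catalog.pg_extension_config_dump('my_config_seq', '');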
Any number of tables or sequences can be marked this way. Sequences associated with serial
or bigserial
columns can be marked as well.
When the second argument of pg_extension_config_dump
is an empty string, the entire contents of the table are dumped by pg_dump. This is usually only correct if the table is initially empty as created by the extension script. If there is a mixture of initial data and user-provided data in the table, the second argument of pg_extension_config_dump
provides a WHERE
condition that selects the data to be dumped. For example, you might do
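(a sketch, reusing the illustrative my_config table)

    CREATE TABLE my_config (key text, value text, standard_entry boolean);

    SELECT pg_catalog.pg_extension_config_dump('my_config', 'WHERE NOT standard_entry');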
and then make sure that standard_entry
is true only in the rows created by the extension's script.
For sequences, the second argument of pg_extension_config_dump
has no effect.
More complicated situations, such as initially-provided rows that might be modified by users, can be handled by creating triggers on the configuration table to ensure that modified rows are marked correctly.
You can alter the filter condition associated with a configuration table by calling pg_extension_config_dump
again. (This would typically be useful in an extension update script.) The only way to mark a table as no longer a configuration table is to dissociate it from the extension with ALTER EXTENSION ... DROP TABLE
.
Note that foreign key relationships between these tables will dictate the order in which the tables are dumped out by pg_dump. Specifically, pg_dump will attempt to dump the referenced-by table before the referencing table. As the foreign key relationships are set up at CREATE EXTENSION time (prior to data being loaded into the tables) circular dependencies are not supported. When circular dependencies exist, the data will still be dumped out but the dump will not be able to be restored directly and user intervention will be required.
Sequences associated with serial
or bigserial
columns need to be directly marked to dump their state. Marking their parent relation is not enough for this purpose.
One advantage of the extension mechanism is that it provides convenient ways to manage updates to the SQL commands that define an extension's objects. This is done by associating a version name or number with each released version of the extension's installation script. In addition, if you want users to be able to update their databases dynamically from one version to the next, you should provide update scripts that make the necessary changes to go from one version to the next. Update scripts have names following the pattern extension
--old_version
--target_version
.sql (for example, foo--1.0--1.1.sql
contains the commands to modify version 1.0
of extension foo
into version 1.1
).
Given that a suitable update script is available, the command ALTER EXTENSION UPDATE
will update an installed extension to the specified new version. The update script is run in the same environment that CREATE EXTENSION
provides for installation scripts: in particular, search_path
is set up in the same way, and any new objects created by the script are automatically added to the extension. Also, if the script chooses to drop extension member objects, they are automatically dissociated from the extension.
If an extension has secondary control files, the control parameters that are used for an update script are those associated with the script's target (new) version.
The update mechanism can be used to solve an important special case: converting a “loose” collection of objects into an extension. Before the extension mechanism was added to PostgreSQL (in 9.1), many people wrote extension modules that simply created assorted unpackaged objects. Given an existing database containing such objects, how can we convert the objects into a properly packaged extension? Dropping them and then doing a plain CREATE EXTENSION
is one way, but it's not desirable if the objects have dependencies (for example, if there are table columns of a data type created by the extension). The way to fix this situation is to create an empty extension, then use ALTER EXTENSION ADD
to attach each pre-existing object to the extension, then finally create any new objects that are in the current extension version but were not in the unpackaged release. CREATE EXTENSION
supports this case with its FROM
old_version
option, which causes it to not run the normal installation script for the target version, but instead the update script named extension
--old_version
--target_version
.sql. The choice of the dummy version name to use as old_version
is up to the extension author, though unpackaged
is a common convention. If you have multiple prior versions you need to be able to update into extension style, use multiple dummy version names to identify them.
ALTER EXTENSION
is able to execute sequences of update script files to achieve a requested update. For example, if only foo--1.0--1.1.sql
and foo--1.1--2.0.sql
are available, ALTER EXTENSION
will apply them in sequence if an update to version 2.0
is requested when 1.0
is currently installed.
PostgreSQL doesn't assume anything about the properties of version names: for example, it does not know whether 1.1
follows 1.0
. It just matches up the available version names and follows the path that requires applying the fewest update scripts. (A version name can actually be any string that doesn't contain --
or leading or trailing -
.)
Sometimes it is useful to provide “downgrade” scripts, for example foo--1.1--1.0.sql
to allow reverting the changes associated with version 1.1
. If you do that, be careful of the possibility that a downgrade script might unexpectedly get applied because it yields a shorter path. The risky case is where there is a “fast path” update script that jumps ahead several versions as well as a downgrade script to the fast path's start point. It might take fewer steps to apply the downgrade and then the fast path than to move ahead one version at a time. If the downgrade script drops any irreplaceable objects, this will yield undesirable results.
To check for unexpected update paths, use this command:
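(substitute your extension's name for the placeholder)

    SELECT * FROM pg_extension_update_paths('extension_name');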
This shows each pair of distinct known version names for the specified extension, together with the update path sequence that would be taken to get from the source version to the target version, or NULL
if there is no available update path. The path is shown in textual form with --
separators. You can use regexp_split_to_array(path,'--')
if you prefer an array format.
An extension that has been around for a while will probably exist in several versions, for which the author will need to write update scripts. For example, if you have released a foo
extension in versions 1.0
, 1.1
, and 1.2
, there should be update scripts foo--1.0--1.1.sql
and foo--1.1--1.2.sql
. Before PostgreSQL 10, it was necessary to also create new script files foo--1.1.sql
and foo--1.2.sql
that directly build the newer extension versions, or else the newer versions could not be installed directly, only by installing 1.0
and then updating. That was tedious and duplicative, but now it's unnecessary, because CREATE EXTENSION
can follow update chains automatically. For example, if only the script files foo--1.0.sql
, foo--1.0--1.1.sql
, and foo--1.1--1.2.sql
are available then a request to install version 1.2
is honored by running those three scripts in sequence. The processing is the same as if you'd first installed 1.0
and then updated to 1.2
. (As with ALTER EXTENSION UPDATE
, if multiple pathways are available then the shortest is preferred.) Arranging an extension's script files in this style can reduce the amount of maintenance effort needed to produce small updates.
If you use secondary (version-specific) control files with an extension maintained in this style, keep in mind that each version needs a control file even if it has no stand-alone installation script, as that control file will determine how the implicit update to that version is performed. For example, if foo--1.0.control
specifies requires = 'bar'
but foo
's other control files do not, the extension's dependency on bar
will be dropped when updating from 1.0
to another version.
Here is a complete example of an SQL-only extension, a two-element composite type that can store any type of value in its slots, which are named “k” and “v”. Non-text values are automatically coerced to text for storage.
The script file pair--1.0.sql
looks like this:
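One plausible shape for the script (the ~> operator name and the exact function body are illustrative; @extschema@ is the placeholder that CREATE EXTENSION substitutes with the installation schema):

```sql
-- complain if script is sourced in psql, rather than via CREATE EXTENSION
\echo Use "CREATE EXTENSION pair" to load this file. \quit

CREATE TYPE pair AS ( k text, v text );

CREATE FUNCTION pair(text, text)
RETURNS pair LANGUAGE SQL AS 'SELECT ROW($1, $2)::@extschema@.pair;';

CREATE OPERATOR ~> (LEFTARG = text, RIGHTARG = text, PROCEDURE = pair);
```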
The control file pair.control
looks like this:
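One plausible shape for the control file:

```
# pair extension
comment = 'A key/value pair data type'
default_version = '1.0'
# not relocatable because the install script refers to @extschema@
relocatable = false
```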
While you hardly need a makefile to install these two files into the correct directory, you could use a Makefile
containing this:
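A likely minimal PGXS makefile for the two files above:

```makefile
EXTENSION = pair
DATA = pair--1.0.sql

PG_CONFIG = pg_config
PGXS := $(shell $(PG_CONFIG) --pgxs)
include $(PGXS)
```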
This makefile relies on PGXS, which is described in Section 37.18. The command make install
will install the control and script files into the correct directory as reported by pg_config.
Once the files are installed, use the CREATE EXTENSION command to load the objects into any particular database.
版本:11
The procedures described thus far let you define new types, new functions, and new operators. However, we cannot yet define an index on a column of a new data type. To do this, we must define an operator class for the new data type. Later in this section, we will illustrate this concept in an example: a new operator class for the B-tree index method that stores and sorts complex numbers in ascending absolute value order.
Operator classes can be grouped into operator families to show the relationships between semantically compatible classes. When only a single data type is involved, an operator class is sufficient, so we'll focus on that case first and then return to operator families.
The pg_am
table contains one row for every index method (internally known as access method). Support for regular access to tables is built into PostgreSQL, but all index methods are described in pg_am
. It is possible to add a new index access method by writing the necessary code and then creating an entry in pg_am
— but that is beyond the scope of this chapter (see Chapter 62).
The routines for an index method do not directly know anything about the data types that the index method will operate on. Instead, an operator class identifies the set of operations that the index method needs to use to work with a particular data type. Operator classes are so called because one thing they specify is the set of WHERE
-clause operators that can be used with an index (i.e., can be converted into an index-scan qualification). An operator class can also specify some support functions that are needed by the internal operations of the index method, but do not directly correspond to any WHERE
-clause operator that can be used with the index.
It is possible to define multiple operator classes for the same data type and index method. By doing this, multiple sets of indexing semantics can be defined for a single data type. For example, a B-tree index requires a sort ordering to be defined for each data type it works on. It might be useful for a complex-number data type to have one B-tree operator class that sorts the data by complex absolute value, another that sorts by real part, and so on. Typically, one of the operator classes will be deemed most commonly useful and will be marked as the default operator class for that data type and index method.
The same operator class name can be used for several different index methods (for example, both B-tree and hash index methods have operator classes named int4_ops
), but each such class is an independent entity and must be defined separately.
The operators associated with an operator class are identified by “strategy numbers”, which serve to identify the semantics of each operator within the context of its operator class. For example, B-trees impose a strict ordering on keys, lesser to greater, and so operators like “less than” and “greater than or equal to” are interesting with respect to a B-tree. Because PostgreSQL allows the user to define operators, PostgreSQL cannot look at the name of an operator (e.g., <
or >=
) and tell what kind of comparison it is. Instead, the index method defines a set of “strategies”, which can be thought of as generalized operators. Each operator class specifies which actual operator corresponds to each strategy for a particular data type and interpretation of the index semantics.
The B-tree index method defines five strategies, shown in Table 38.3.
Operation | Strategy Number |
---|---|
less than | 1 |
less than or equal | 2 |
equal | 3 |
greater than or equal | 4 |
greater than | 5 |
Hash indexes support only equality comparisons, and so they use only one strategy, shown in Table 38.4.
GiST indexes are more flexible: they do not have a fixed set of strategies at all. Instead, the “consistency” support routine of each particular GiST operator class interprets the strategy numbers however it likes. As an example, several of the built-in GiST index operator classes index two-dimensional geometric objects, providing the “R-tree” strategies shown in Table 38.5. Four of these are true two-dimensional tests (overlaps, same, contains, contained by); four of them consider only the X direction; and the other four provide the same tests in the Y direction.
SP-GiST indexes are similar to GiST indexes in flexibility: they don't have a fixed set of strategies. Instead the support routines of each operator class interpret the strategy numbers according to the operator class's definition. As an example, the strategy numbers used by the built-in operator classes for points are shown in Table 38.6.
GIN indexes are similar to GiST and SP-GiST indexes, in that they don't have a fixed set of strategies either. Instead the support routines of each operator class interpret the strategy numbers according to the operator class's definition. As an example, the strategy numbers used by the built-in operator class for arrays are shown in Table 38.7.
BRIN indexes are similar to GiST, SP-GiST and GIN indexes in that they don't have a fixed set of strategies either. Instead the support routines of each operator class interpret the strategy numbers according to the operator class's definition. As an example, the strategy numbers used by the built-in Minmax
operator classes are shown in Table 38.8.
Notice that all the operators listed above return Boolean values. In practice, all operators defined as index method search operators must return type boolean
, since they must appear at the top level of a WHERE
clause to be used with an index. (Some index access methods also support ordering operators, which typically don't return Boolean values; that feature is discussed in Section 38.16.7.)
Strategies aren't usually enough information for the system to figure out how to use an index. In practice, the index methods require additional support routines in order to work. For example, the B-tree index method must be able to compare two keys and determine whether one is greater than, equal to, or less than the other. Similarly, the hash index method must be able to compute hash codes for key values. These operations do not correspond to operators used in qualifications in SQL commands; they are administrative routines used by the index methods, internally.
Just as with strategies, the operator class identifies which specific functions should play each of these roles for a given data type and semantic interpretation. The index method defines the set of functions it needs, and the operator class identifies the correct functions to use by assigning them to the “support function numbers” specified by the index method.
Additionally, some opclasses allow users to specify parameters which control their behavior. Each built-in index access method has an optional options
support function, which defines a set of opclass-specific parameters.
B-trees require a comparison support function, and allow four additional support functions to be supplied at the operator class author's option, as shown in Table 38.9. The requirements for these support functions are explained further in Section 64.3.
Hash indexes require one support function, and allow two additional ones to be supplied at the operator class author's option, as shown in Table 38.10.
GiST indexes have ten support functions, three of which are optional, as shown in Table 38.11. (For more information see Chapter 65.)
SP-GiST indexes have six support functions, one of which is optional, as shown in Table 38.12. (For more information see Chapter 66.)
GIN indexes have seven support functions, four of which are optional, as shown in Table 38.13. (For more information see Chapter 67.)
BRIN indexes have five basic support functions, one of which is optional, as shown in Table 38.14. Some versions of the basic functions require additional support functions to be provided. (For more information see Section 68.3.)
Unlike search operators, support functions return whichever data type the particular index method expects; for example in the case of the comparison function for B-trees, a signed integer. The number and types of the arguments to each support function are likewise dependent on the index method. For B-tree and hash the comparison and hashing support functions take the same input data types as do the operators included in the operator class, but this is not the case for most GiST, SP-GiST, GIN, and BRIN support functions.
Now that we have seen the ideas, here is the promised example of creating a new operator class. (You can find a working copy of this example in src/tutorial/complex.c
and src/tutorial/complex.sql
in the source distribution.) The operator class encapsulates operators that sort complex numbers in absolute value order, so we choose the name complex_abs_ops
. First, we need a set of operators. The procedure for defining operators was discussed in Section 38.14. For an operator class on B-trees, the operators we require are:
absolute-value less-than (strategy 1)
absolute-value less-than-or-equal (strategy 2)
absolute-value equal (strategy 3)
absolute-value greater-than-or-equal (strategy 4)
absolute-value greater-than (strategy 5)
The least error-prone way to define a related set of comparison operators is to write the B-tree comparison support function first, and then write the other functions as one-line wrappers around the support function. This reduces the odds of getting inconsistent results for corner cases. Following this approach, we first write:
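A sketch of that support function, assuming the Complex struct (with double fields x and y) defined earlier in this chapter:

```c
#define Mag(c)  ((c)->x*(c)->x + (c)->y*(c)->y)

static int
complex_abs_cmp_internal(Complex *a, Complex *b)
{
    double      amag = Mag(a),
                bmag = Mag(b);

    if (amag < bmag)
        return -1;
    if (amag > bmag)
        return 1;
    return 0;
}
```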
Now the less-than function looks like:
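Presumably along these lines:

```c
PG_FUNCTION_INFO_V1(complex_abs_lt);

Datum
complex_abs_lt(PG_FUNCTION_ARGS)
{
    Complex    *a = (Complex *) PG_GETARG_POINTER(0);
    Complex    *b = (Complex *) PG_GETARG_POINTER(1);

    PG_RETURN_BOOL(complex_abs_cmp_internal(a, b) < 0);
}
```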
The other four functions differ only in how they compare the internal function's result to zero.
Next we declare the functions and the operators based on the functions to SQL:
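A sketch of one function/operator pair; the other four follow the same pattern ('filename' stands for the path to the shared library):

```sql
CREATE FUNCTION complex_abs_lt(complex, complex) RETURNS bool
    AS 'filename', 'complex_abs_lt'
    LANGUAGE C IMMUTABLE STRICT;

CREATE OPERATOR < (
    leftarg = complex,
    rightarg = complex,
    procedure = complex_abs_lt,
    commutator = > ,
    negator = >= ,
    restrict = scalarltsel,
    join = scalarltjoinsel
);
```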
It is important to specify the correct commutator and negator operators, as well as suitable restriction and join selectivity functions, otherwise the optimizer will be unable to make effective use of the index.
There are a few other things worth noting in this example:
There can only be one operator named, say, =
and taking type complex
for both operands. In this case we don't have any other operator =
for complex
, but if we were building a practical data type we'd probably want =
to be the ordinary equality operation for complex numbers (and not the equality of the absolute values). In that case, we'd need to use some other operator name for complex_abs_eq
.
Although PostgreSQL can cope with functions having the same SQL name as long as they have different argument data types, C can only cope with one global function having a given name. So we shouldn't name the C function something simple like abs_eq
. Usually it's a good practice to include the data type name in the C function name, so as not to conflict with functions for other data types.
We could have made the SQL name of the function abs_eq
, relying on PostgreSQL to distinguish it by argument data types from any other SQL function of the same name. To keep the example simple, we make the function have the same names at the C level and SQL level.
The next step is the registration of the support routine required by B-trees. The example C code that implements this is in the same file that contains the operator functions. This is how we declare the function:
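Most likely a declaration of this shape (again with 'filename' as a placeholder):

```sql
CREATE FUNCTION complex_abs_cmp(complex, complex)
    RETURNS integer
    AS 'filename', 'complex_abs_cmp'
    LANGUAGE C IMMUTABLE STRICT;
```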
Now that we have the required operators and support routine, we can finally create the operator class:
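A sketch of the operator class definition:

```sql
CREATE OPERATOR CLASS complex_abs_ops
    DEFAULT FOR TYPE complex USING btree AS
        OPERATOR        1       < ,
        OPERATOR        2       <= ,
        OPERATOR        3       = ,
        OPERATOR        4       >= ,
        OPERATOR        5       > ,
        FUNCTION        1       complex_abs_cmp(complex, complex);
```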
And we're done! It should now be possible to create and use B-tree indexes on complex
columns.
We could have written the operator entries more verbosely, as in:
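That is, entries of roughly this form:

```sql
        OPERATOR        1       < (complex, complex) ,
```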
but there is no need to do so when the operators take the same data type we are defining the operator class for.
The above example assumes that you want to make this new operator class the default B-tree operator class for the complex
data type. If you don't, just leave out the word DEFAULT
.
So far we have implicitly assumed that an operator class deals with only one data type. While there certainly can be only one data type in a particular index column, it is often useful to index operations that compare an indexed column to a value of a different data type. Also, if there is use for a cross-data-type operator in connection with an operator class, it is often the case that the other data type has a related operator class of its own. It is helpful to make the connections between related classes explicit, because this can aid the planner in optimizing SQL queries (particularly for B-tree operator classes, since the planner contains a great deal of knowledge about how to work with them).
To handle these needs, PostgreSQL uses the concept of an operator family. An operator family contains one or more operator classes, and can also contain indexable operators and corresponding support functions that belong to the family as a whole but not to any single class within the family. We say that such operators and functions are “loose” within the family, as opposed to being bound into a specific class. Typically each operator class contains single-data-type operators while cross-data-type operators are loose in the family.
All the operators and functions in an operator family must have compatible semantics, where the compatibility requirements are set by the index method. You might therefore wonder why bother to single out particular subsets of the family as operator classes; and indeed for many purposes the class divisions are irrelevant and the family is the only interesting grouping. The reason for defining operator classes is that they specify how much of the family is needed to support any particular index. If there is an index using an operator class, then that operator class cannot be dropped without dropping the index — but other parts of the operator family, namely other operator classes and loose operators, could be dropped. Thus, an operator class should be specified to contain the minimum set of operators and functions that are reasonably needed to work with an index on a specific data type, and then related but non-essential operators can be added as loose members of the operator family.
As an example, PostgreSQL has a built-in B-tree operator family integer_ops
, which includes operator classes int8_ops
, int4_ops
, and int2_ops
for indexes on bigint
(int8
), integer
(int4
), and smallint
(int2
) columns respectively. The family also contains cross-data-type comparison operators allowing any two of these types to be compared, so that an index on one of these types can be searched using a comparison value of another type. The family could be duplicated by these definitions:
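The actual definitions run to many lines; a heavily abbreviated sketch of their shape, using the built-in comparison functions btint4cmp and btint42cmp, might look like:

```sql
CREATE OPERATOR FAMILY integer_ops USING btree;

CREATE OPERATOR CLASS int4_ops
DEFAULT FOR TYPE int4 USING btree FAMILY integer_ops AS
  -- standard int4 comparisons
  OPERATOR 1 < ,
  OPERATOR 2 <= ,
  OPERATOR 3 = ,
  OPERATOR 4 >= ,
  OPERATOR 5 > ,
  FUNCTION 1 btint4cmp(int4, int4);

-- int2_ops and int8_ops are defined analogously; then the cross-type
-- operators and support functions are added loose in the family:
ALTER OPERATOR FAMILY integer_ops USING btree ADD
  -- cross-type comparisons int4 vs int2
  OPERATOR 1 < (int4, int2),
  OPERATOR 2 <= (int4, int2),
  OPERATOR 3 = (int4, int2),
  OPERATOR 4 >= (int4, int2),
  OPERATOR 5 > (int4, int2),
  FUNCTION 1 btint42cmp(int4, int2);
```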
Notice that this definition “overloads” the operator strategy and support function numbers: each number occurs multiple times within the family. This is allowed so long as each instance of a particular number has distinct input data types. The instances that have both input types equal to an operator class's input type are the primary operators and support functions for that operator class, and in most cases should be declared as part of the operator class rather than as loose members of the family.
In a B-tree operator family, all the operators in the family must sort compatibly, as is specified in detail in Section 64.2. For each operator in the family there must be a support function having the same two input data types as the operator. It is recommended that a family be complete, i.e., for each combination of data types, all operators are included. Each operator class should include just the non-cross-type operators and support function for its data type.
To build a multiple-data-type hash operator family, compatible hash support functions must be created for each data type supported by the family. Here compatibility means that the functions are guaranteed to return the same hash code for any two values that are considered equal by the family's equality operators, even when the values are of different types. This is usually difficult to accomplish when the types have different physical representations, but it can be done in some cases. Furthermore, casting a value from one data type represented in the operator family to another data type also represented in the operator family via an implicit or binary coercion cast must not change the computed hash value. Notice that there is only one support function per data type, not one per equality operator. It is recommended that a family be complete, i.e., provide an equality operator for each combination of data types. Each operator class should include just the non-cross-type equality operator and the support function for its data type.
GiST, SP-GiST, and GIN indexes do not have any explicit notion of cross-data-type operations. The set of operators supported is just whatever the primary support functions for a given operator class can handle.
In BRIN, the requirements depend on the framework that provides the operator classes. For operator classes based on minmax
, the behavior required is the same as for B-tree operator families: all the operators in the family must sort compatibly, and casts must not change the associated sort ordering.
Prior to PostgreSQL 8.3, there was no concept of operator families, and so any cross-data-type operators intended to be used with an index had to be bound directly into the index's operator class. While this approach still works, it is deprecated because it makes an index's dependencies too broad, and because the planner can handle cross-data-type comparisons more effectively when both data types have operators in the same operator family.
PostgreSQL uses operator classes to infer the properties of operators in more ways than just whether they can be used with indexes. Therefore, you might want to create operator classes even if you have no intention of indexing any columns of your data type.
In particular, there are SQL features such as ORDER BY
and DISTINCT
that require comparison and sorting of values. To implement these features on a user-defined data type, PostgreSQL looks for the default B-tree operator class for the data type. The “equals” member of this operator class defines the system's notion of equality of values for GROUP BY
and DISTINCT
, and the sort ordering imposed by the operator class defines the default ORDER BY
ordering.
If there is no default B-tree operator class for a data type, the system will look for a default hash operator class. But since that kind of operator class only provides equality, it is only able to support grouping, not sorting.
When there is no default operator class for a data type, you will get errors like “could not identify an ordering operator” if you try to use these SQL features with the data type.
In PostgreSQL versions before 7.4, sorting and grouping operations would implicitly use operators named =
, <
, and >
. The new behavior of relying on default operator classes avoids having to make any assumption about the behavior of operators with particular names.
Sorting by a non-default B-tree operator class is possible by specifying the class's less-than operator in a USING
option, for example
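For instance, a hypothetical query using ~<~, the less-than operator of the built-in text_pattern_ops class:

```sql
SELECT * FROM mytable ORDER BY somecol USING ~<~;
```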
Alternatively, specifying the class's greater-than operator in USING
selects a descending-order sort.
Comparison of arrays of a user-defined type also relies on the semantics defined by the type's default B-tree operator class. If there is no default B-tree operator class, but there is a default hash operator class, then array equality is supported, but not ordering comparisons.
Another SQL feature that requires even more data-type-specific knowledge is the RANGE
offset
PRECEDING
/FOLLOWING
framing option for window functions (see Section 4.2.8). For a query such as
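Something like the following, with a hypothetical table my_table:

```sql
SELECT sum(x) OVER (ORDER BY x RANGE BETWEEN 5 PRECEDING AND 10 FOLLOWING)
  FROM my_table;
```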
it is not sufficient to know how to order by x
; the database must also understand how to “subtract 5” or “add 10” to the current row's value of x
to identify the bounds of the current window frame. Comparing the resulting bounds to other rows' values of x
is possible using the comparison operators provided by the B-tree operator class that defines the ORDER BY
ordering — but addition and subtraction operators are not part of the operator class, so which ones should be used? Hard-wiring that choice would be undesirable, because different sort orders (different B-tree operator classes) might need different behavior. Therefore, a B-tree operator class can specify an in_range support function that encapsulates the addition and subtraction behaviors that make sense for its sort order. It can even provide more than one in_range support function, in case there is more than one data type that makes sense to use as the offset in RANGE
clauses. If the B-tree operator class associated with the window's ORDER BY
clause does not have a matching in_range support function, the RANGE
offset
PRECEDING
/FOLLOWING
option is not supported.
Another important point is that an equality operator that appears in a hash operator family is a candidate for hash joins, hash aggregation, and related optimizations. The hash operator family is essential here since it identifies the hash function(s) to use.
Some index access methods (currently, only GiST and SP-GiST) support the concept of ordering operators. What we have been discussing so far are search operators. A search operator is one for which the index can be searched to find all rows satisfying WHERE
indexed_column
operator
constant
. Note that nothing is promised about the order in which the matching rows will be returned. In contrast, an ordering operator does not restrict the set of rows that can be returned, but instead determines their order. An ordering operator is one for which the index can be scanned to return rows in the order represented by ORDER BY
indexed_column
operator
constant
. The reason for defining ordering operators that way is that it supports nearest-neighbor searches, if the operator is one that measures distance. For example, a query like
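Presumably something like the following, with a hypothetical places table whose location column is of type point:

```sql
SELECT * FROM places ORDER BY location <-> point '(101,456)' LIMIT 10;
```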
finds the ten places closest to a given target point. A GiST index on the location column can do this efficiently because <->
is an ordering operator.
While search operators have to return Boolean results, ordering operators usually return some other type, such as float or numeric for distances. This type is normally not the same as the data type being indexed. To avoid hard-wiring assumptions about the behavior of different data types, the definition of an ordering operator is required to name a B-tree operator family that specifies the sort ordering of the result data type. As was stated in the previous section, B-tree operator families define PostgreSQL's notion of ordering, so this is a natural representation. Since the point <->
operator returns float8
, it could be specified in an operator class creation command like this:
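That is, an operator entry of roughly this form:

```sql
OPERATOR 15    <-> (point, point) FOR ORDER BY float_ops
```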
where float_ops
is the built-in operator family that includes operations on float8
. This declaration states that the index is able to return rows in order of increasing values of the <->
operator.
There are two special features of operator classes that we have not discussed yet, mainly because they are not useful with the most commonly used index methods.
Normally, declaring an operator as a member of an operator class (or family) means that the index method can retrieve exactly the set of rows that satisfy a WHERE
condition using the operator. For example:
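For instance, a condition like this (table and column names are placeholders):

```sql
SELECT * FROM my_table WHERE integer_column < 4;
```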
can be satisfied exactly by a B-tree index on the integer column. But there are cases where an index is useful as an inexact guide to the matching rows. For example, if a GiST index stores only bounding boxes for geometric objects, then it cannot exactly satisfy a WHERE
condition that tests overlap between nonrectangular objects such as polygons. Yet we could use the index to find objects whose bounding box overlaps the bounding box of the target object, and then do the exact overlap test only on the objects found by the index. If this scenario applies, the index is said to be “lossy” for the operator. Lossy index searches are implemented by having the index method return a recheck flag when a row might or might not really satisfy the query condition. The core system will then test the original query condition on the retrieved row to see whether it should be returned as a valid match. This approach works if the index is guaranteed to return all the required rows, plus perhaps some additional rows, which can be eliminated by performing the original operator invocation. The index methods that support lossy searches (currently, GiST, SP-GiST and GIN) allow the support functions of individual operator classes to set the recheck flag, and so this is essentially an operator-class feature.
Consider again the situation where we are storing in the index only the bounding box of a complex object such as a polygon. In this case there's not much value in storing the whole polygon in the index entry — we might as well store just a simpler object of type box
. This situation is expressed by the STORAGE
option in CREATE OPERATOR CLASS
: we'd write something like:
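A sketch, with the operator and support-function entries omitted:

```sql
CREATE OPERATOR CLASS polygon_ops
    DEFAULT FOR TYPE polygon USING gist AS
        -- operator and support function entries as usual, then:
        STORAGE box;
```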
At present, only the GiST, SP-GiST, GIN and BRIN index methods support a STORAGE
type that's different from the column data type. The GiST compress
and decompress
support routines must deal with data-type conversion when STORAGE
is used. SP-GiST likewise requires a compress
support function to convert to the storage type, when that is different; if an SP-GiST opclass also supports retrieving data, the reverse conversion must be handled by the consistent
function. In GIN, the STORAGE
type identifies the type of the “key” values, which normally is different from the type of the indexed column — for example, an operator class for integer-array columns might have keys that are just integers. The GIN extractValue
and extractQuery
support routines are responsible for extracting keys from indexed values. BRIN is similar to GIN: the STORAGE
type identifies the type of the stored summary values, and operator classes' support procedures are responsible for interpreting the summary values correctly.
版本:11
User-defined functions can be written in C (or a language that can be made compatible with C, such as C++). Such functions are compiled into dynamically loadable objects (also called shared libraries) and are loaded by the server on demand. The dynamic loading feature is what distinguishes “C language” functions from “internal” functions — the actual coding conventions are essentially the same for both. (Hence, the standard internal function library is a rich source of coding examples for user-defined C functions.)
Currently only one calling convention is used for C functions (“version 1”). Support for that calling convention is indicated by writing a PG_FUNCTION_INFO_V1()
macro call for the function, as illustrated below.
The first time a user-defined function in a particular loadable object file is called in a session, the dynamic loader loads that object file into memory so that the function can be called. The CREATE FUNCTION
for a user-defined C function must therefore specify two pieces of information for the function: the name of the loadable object file, and the C name (link symbol) of the specific function to call within that object file. If the C name is not explicitly specified then it is assumed to be the same as the SQL function name.
The following algorithm is used to locate the shared object file based on the name given in the CREATE FUNCTION
command:
If the name is an absolute path, the given file is loaded.
If the name starts with the string $libdir
, that part is replaced by the PostgreSQL package library directory name, which is determined at build time.
If the name does not contain a directory part, the file is searched for in the path specified by the dynamic_library_path configuration variable.
Otherwise (the file was not found in the path, or it contains a non-absolute directory part), the dynamic loader will try to take the name as given, which will most likely fail. (It is unreliable to depend on the current working directory.)
If this sequence does not work, the platform-specific shared library file name extension (often .so
) is appended to the given name and this sequence is tried again. If that fails as well, the load will fail.
It is recommended to locate shared libraries either relative to $libdir
or through the dynamic library path. This simplifies version upgrades if the new installation is at a different location. The actual directory that $libdir
stands for can be found out with the command pg_config --pkglibdir
.
The user ID the PostgreSQL server runs as must be able to traverse the path to the file you intend to load. Making the file or a higher-level directory not readable and/or not executable by the postgres user is a common mistake.
In any case, the file name that is given in the CREATE FUNCTION
command is recorded literally in the system catalogs, so if the file needs to be loaded again the same procedure is applied.
PostgreSQL will not compile a C function automatically. The object file must be compiled before it is referenced in a CREATE FUNCTION
command. See the section on compiling and linking dynamically-loaded functions below for additional information.
To ensure that a dynamically loaded object file is not loaded into an incompatible server, PostgreSQL checks that the file contains a “magic block” with the appropriate contents. This allows the server to detect obvious incompatibilities, such as code compiled for a different major version of PostgreSQL. To include a magic block, write this in one (and only one) of the module source files, after having included the header fmgr.h
:
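The line in question is:

```c
PG_MODULE_MAGIC;
```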
After it is used for the first time, a dynamically loaded object file is retained in memory. Future calls in the same session to the function(s) in that file will only incur the small overhead of a symbol table lookup. If you need to force a reload of an object file, for example after recompiling it, begin a fresh session.
Optionally, a dynamically loaded file can contain initialization and finalization functions. If the file includes a function named _PG_init
, that function will be called immediately after loading the file. The function receives no parameters and should return void. If the file includes a function named _PG_fini
, that function will be called immediately before unloading the file. Likewise, the function receives no parameters and should return void. Note that _PG_fini
will only be called during an unload of the file, not during process termination. (Presently, unloads are disabled and will never occur, but this may change in the future.)
To know how to write C-language functions, you need to know how PostgreSQL internally represents base data types and how they can be passed to and from functions. Internally, PostgreSQL regards a base type as a “blob of memory”. The user-defined functions that you define over a type in turn define the way that PostgreSQL can operate on it. That is, PostgreSQL will only store and retrieve the data from disk and use your user-defined functions to input, process, and output the data.
Base types can have one of three internal formats:
pass by value, fixed-length
pass by reference, fixed-length
pass by reference, variable-length
By-value types can only be 1, 2, or 4 bytes in length (also 8 bytes, if sizeof(Datum)
is 8 on your machine). You should be careful to define your types such that they will be the same size (in bytes) on all architectures. For example, the long
type is dangerous because it is 4 bytes on some machines and 8 bytes on others, whereas int
type is 4 bytes on most Unix machines. A reasonable implementation of the int4
type on Unix machines might be:
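Presumably:

```c
/* 4-byte integer, passed by value */
typedef int int4;
```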
On the other hand, fixed-length types of any size can be passed by-reference. For example, here is a sample implementation of a PostgreSQL type:
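A likely example is a 16-byte structure passed by reference:

```c
/* 16-byte structure, passed by reference */
typedef struct
{
    double  x, y;
} Point;
```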
Only pointers to such types can be used when passing them in and out of PostgreSQL functions. To return a value of such a type, allocate the right amount of memory with palloc
, fill in the allocated memory, and return a pointer to it. (Also, if you just want to return the same value as one of your input arguments that's of the same data type, you can skip the extra palloc
and just return the pointer to the input value.)
Finally, all variable-length types must also be passed by reference. All variable-length types must begin with an opaque length field of exactly 4 bytes, which will be set by SET_VARSIZE
; never set this field directly! All data to be stored within that type must be located in the memory immediately following that length field. The length field contains the total length of the structure, that is, it includes the size of the length field itself.
Another important point is to avoid leaving any uninitialized bits within data type values; for example, take care to zero out any alignment padding bytes that might be present in structs. Without this, logically-equivalent constants of your data type might be seen as unequal by the planner, leading to inefficient (though not incorrect) plans.
As an example, we can define the type text
as follows:
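The declaration is presumably:

```c
typedef struct
{
    int32 length;
    char  data[FLEXIBLE_ARRAY_MEMBER];
} text;
```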
The [FLEXIBLE_ARRAY_MEMBER]
notation means that the actual length of the data part is not specified by this declaration.
When manipulating variable-length types, we must be careful to allocate the correct amount of memory and set the length field correctly. For example, if we wanted to store 40 bytes in a text
structure, we might use a code fragment like this:
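A sketch using the text layout shown above; buffer is assumed to hold the 40 bytes of source data, and the statements run inside some function body:

```c
/* inside a function body: */
char    buffer[40];          /* our source data */

text   *destination = (text *) palloc(VARHDRSZ + 40);

SET_VARSIZE(destination, VARHDRSZ + 40);
memcpy(destination->data, buffer, 40);
```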
VARHDRSZ
is the same as sizeof(int32)
, but it's considered good style to use the macro VARHDRSZ
to refer to the size of the overhead for a variable-length type. Also, the length field must be set using the SET_VARSIZE
macro, not by simple assignment.
Now that we've gone over all of the possible structures for base types, we can show some examples of real functions.
The version-1 calling convention relies on macros to suppress most of the complexity of passing arguments and results. The C declaration of a version-1 function is always:
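Namely:

```c
Datum funcname(PG_FUNCTION_ARGS)
```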
In addition, the macro call:
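That is:

```c
PG_FUNCTION_INFO_V1(funcname);
```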
must appear in the same source file. (Conventionally, it's written just before the function itself.) This macro call is not needed for internal
-language functions, since PostgreSQL assumes that all internal functions use the version-1 convention. It is, however, required for dynamically-loaded functions.
In a version-1 function, each actual argument is fetched using a PG_GETARG_
xxx
() macro that corresponds to the argument's data type. (In non-strict functions there needs to be a previous check about argument null-ness using PG_ARGISNULL()
; see below.) The result is returned using a PG_RETURN_
xxx
() macro for the return type. PG_GETARG_
xxx
() takes as its argument the number of the function argument to fetch, where the count starts at 0. PG_RETURN_
xxx
() takes as its argument the actual value to return.
Here are some examples using the version-1 calling convention:
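A sketch of two such functions, one taking a pass-by-value argument and one a variable-length argument; further cases (multiple arguments, composite inputs) follow the same pattern:

```c
#include "postgres.h"
#include <string.h>
#include "fmgr.h"

PG_MODULE_MAGIC;

/* by value */

PG_FUNCTION_INFO_V1(add_one);

Datum
add_one(PG_FUNCTION_ARGS)
{
    int32   arg = PG_GETARG_INT32(0);

    PG_RETURN_INT32(arg + 1);
}

/* by reference, variable length */

PG_FUNCTION_INFO_V1(copytext);

Datum
copytext(PG_FUNCTION_ARGS)
{
    text   *t = PG_GETARG_TEXT_PP(0);

    /*
     * VARSIZE_ANY_EXHDR is the size of the data part, excluding the header.
     * Build a copy that has a full-length (4-byte) header.
     */
    text   *new_t = (text *) palloc(VARSIZE_ANY_EXHDR(t) + VARHDRSZ);

    SET_VARSIZE(new_t, VARSIZE_ANY_EXHDR(t) + VARHDRSZ);

    /* VARDATA / VARDATA_ANY point at the data parts of the structs */
    memcpy(VARDATA(new_t), VARDATA_ANY(t), VARSIZE_ANY_EXHDR(t));
    PG_RETURN_TEXT_P(new_t);
}
```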
Supposing that the above code has been prepared in file funcs.c
and compiled into a shared object, we could define the functions to PostgreSQL with commands like this:
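For the two functions sketched above, that would look roughly like:

```sql
CREATE FUNCTION add_one(integer) RETURNS integer
    AS 'DIRECTORY/funcs', 'add_one'
    LANGUAGE C STRICT;

CREATE FUNCTION copytext(text) RETURNS text
    AS 'DIRECTORY/funcs', 'copytext'
    LANGUAGE C STRICT;
```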
Here, DIRECTORY
stands for the directory of the shared library file (for instance the PostgreSQL tutorial directory, which contains the code for the examples used in this section). (Better style would be to use just 'funcs'
in the AS
clause, after having added DIRECTORY
to the search path. In any case, we can omit the system-specific extension for a shared library, commonly .so
.)
Notice that we have specified the functions as “strict”, meaning that the system should automatically assume a null result if any input value is null. By doing this, we avoid having to check for null inputs in the function code. Without this, we'd have to check for null values explicitly, using PG_ARGISNULL()
.
The macro PG_ARGISNULL(
n
) allows a function to test whether each input is null. (Of course, doing this is only necessary in functions not declared “strict”.) As with the PG_GETARG_
xxx
() macros, the input arguments are counted beginning at zero. Note that one should refrain from executing PG_GETARG_
xxx
() until one has verified that the argument isn't null. To return a null result, execute PG_RETURN_NULL()
; this works in both strict and nonstrict functions.
At first glance, the version-1 coding conventions might appear to be just pointless obscurantism, compared to using plain C
calling conventions. They do however allow us to deal with NULL
able arguments/return values, and “toasted” (compressed or out-of-line) values.
Other options provided by the version-1 interface are two variants of the PG_GETARG_
xxx
() macros. The first of these, PG_GETARG_
xxx
_COPY(), guarantees to return a copy of the specified argument that is safe for writing into. (The normal macros will sometimes return a pointer to a value that is physically stored in a table, which must not be written to. Using the PG_GETARG_
xxx
_COPY() macros guarantees a writable result.) The second variant consists of the PG_GETARG_
xxx
_SLICE() macros which take three arguments. The first is the number of the function argument (as above). The second and third are the offset and length of the segment to be returned. Offsets are counted from zero, and a negative length requests that the remainder of the value be returned. These macros provide more efficient access to parts of large values in the case where they have storage type “external”. (The storage type of a column can be specified using ALTER TABLE
tablename
ALTER COLUMN colname
SET STORAGE storagetype
. storagetype
is one of plain
, external
, extended
, or main
.)
Before we turn to the more advanced topics, we should discuss some coding rules for PostgreSQL C-language functions. While it might be possible to load functions written in languages other than C into PostgreSQL, this is usually difficult (when it is possible at all) because other languages, such as C++, FORTRAN, or Pascal often do not follow the same calling convention as C. That is, other languages do not pass argument and return values between functions in the same way. For this reason, we will assume that your C-language functions are actually written in C.
The basic rules for writing and building C functions are as follows:
Use pg_config --includedir-server
to find out where the PostgreSQL server header files are installed on your system (or the system that your users will be running on).
When allocating memory, use the PostgreSQL functions palloc
and pfree
instead of the corresponding C library functions malloc
and free
. The memory allocated by palloc
will be freed automatically at the end of each transaction, preventing memory leaks.
Always zero the bytes of your structures using memset
(or allocate them with palloc0
in the first place). Even if you assign to each field of your structure, there might be alignment padding (holes in the structure) that contain garbage values. Without this, it's difficult to support hash indexes or hash joins, as you must pick out only the significant bits of your data structure to compute a hash. The planner also sometimes relies on comparing constants via bitwise equality, so you can get undesirable planning results if logically-equivalent values aren't bitwise equal.
Most of the internal PostgreSQL types are declared in postgres.h
, while the function manager interfaces (PG_FUNCTION_ARGS
, etc.) are in fmgr.h
, so you will need to include at least these two files. For portability reasons it's best to include postgres.h
first, before any other system or user header files. Including postgres.h
will also include elog.h
and palloc.h
for you.
Symbol names defined within object files must not conflict with each other or with symbols defined in the PostgreSQL server executable. You will have to rename your functions or variables if you get error messages to this effect.
Before you are able to use your PostgreSQL extension functions written in C, they must be compiled and linked in a special way to produce a file that can be dynamically loaded by the server. To be precise, a shared library needs to be created.
For information beyond what is contained in this section you should read the documentation of your operating system, in particular the manual pages for the C compiler, cc
, and the link editor, ld
. In addition, the PostgreSQL source code contains several working examples in the contrib
directory. If you rely on these examples you will make your modules dependent on the availability of the PostgreSQL source code, however.
Creating shared libraries is generally analogous to linking executables: first the source files are compiled into object files, then the object files are linked together. The object files need to be created as position-independent code (PIC), which conceptually means that they can be placed at an arbitrary location in memory when they are loaded by the executable. (Object files intended for executables are usually not compiled that way.) The command to link a shared library contains special flags to distinguish it from linking an executable (at least in theory — on some systems the practice is much uglier).
In the following examples we assume that your source code is in a file foo.c
and we will create a shared library foo.so
. The intermediate object file will be called foo.o
unless otherwise noted. A shared library can contain more than one object file, but we only use one here.

FreeBSD
The compiler flag to create PIC is -fPIC
. To create shared libraries the compiler flag is -shared
.
This is applicable as of version 3.0 of FreeBSD.

HP-UX
The compiler flag of the system compiler to create PIC is +z
. When using GCC it's -fPIC
. The linker flag for shared libraries is -b
. So one would compile with cc +z -c foo.c (or, using GCC, gcc -fPIC -c foo.c) and then link with ld -b -o foo.sl foo.o.
HP-UX uses the extension .sl
for shared libraries, unlike most other systems.

Linux
The compiler flag to create PIC is -fPIC
. The compiler flag to create a shared library is -shared
. A complete example looks like this:
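Presumably along the lines of:

```
cc -fPIC -c foo.c
cc -shared -o foo.so foo.o
```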
macOS
Here is an example. It assumes the developer tools are installed.
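Likely something like:

```
cc -c foo.c
cc -bundle -flat_namespace -undefined suppress -o foo.so foo.o
```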
NetBSD
The compiler flag to create PIC is -fPIC
. For ELF systems, the compiler with the flag -shared
is used to link shared libraries. On the older non-ELF systems, ld -Bshareable
is used.
OpenBSD
The compiler flag to create PIC is -fPIC
. ld -Bshareable
is used to link shared libraries.
Solaris
The compiler flag to create PIC is -KPIC
with the Sun compiler and -fPIC
with GCC. To link shared libraries, the compiler option is -G
with either compiler or alternatively -shared
with GCC.
For example, with the Sun compiler: cc -KPIC -c foo.c followed by cc -G -o foo.so foo.o; or with GCC: gcc -fPIC -c foo.c followed by gcc -G -o foo.so foo.o.
The resulting shared library file can then be loaded into PostgreSQL. When specifying the file name to the CREATE FUNCTION
command, one must give it the name of the shared library file, not the intermediate object file. Note that the system's standard shared-library extension (usually .so
or .sl
) can be omitted from the CREATE FUNCTION
command, and normally should be omitted for best portability.
Composite types do not have a fixed layout like C structures. Instances of a composite type can contain null fields. In addition, composite types that are part of an inheritance hierarchy can have different fields than other members of the same inheritance hierarchy. Therefore, PostgreSQL provides a function interface for accessing fields of composite types from C.
Suppose we want to write a function to answer the query:
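For instance, with a hypothetical emp table whose rows have name and salary columns:

```sql
SELECT name, c_overpaid(emp, 1500) AS overpaid
    FROM emp
    WHERE name = 'Bill' OR name = 'Sam';
```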
Using the version-1 calling conventions, we can define c_overpaid
as:
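A sketch of such a function, assuming the emp rows have an integer salary column:

```c
#include "postgres.h"
#include "executor/executor.h"  /* for GetAttributeByName() */

PG_MODULE_MAGIC;

PG_FUNCTION_INFO_V1(c_overpaid);

Datum
c_overpaid(PG_FUNCTION_ARGS)
{
    HeapTupleHeader t = PG_GETARG_HEAPTUPLEHEADER(0);
    int32           limit = PG_GETARG_INT32(1);
    bool            isnull;
    Datum           salary;

    salary = GetAttributeByName(t, "salary", &isnull);
    if (isnull)
        PG_RETURN_BOOL(false);
    /* Alternatively, we might prefer to do PG_RETURN_NULL() for null salary. */

    PG_RETURN_BOOL(DatumGetInt32(salary) > limit);
}
```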
GetAttributeByName
is the PostgreSQL system function that returns attributes out of the specified row. It has three arguments: the argument of type HeapTupleHeader
passed into the function, the name of the desired attribute, and a return parameter that tells whether the attribute is null. GetAttributeByName
returns a Datum
value that you can convert to the proper data type by using the appropriate DatumGet
XXX
() macro. Note that the return value is meaningless if the null flag is set; always check the null flag before trying to do anything with the result.
There is also GetAttributeByNum
, which selects the target attribute by column number instead of name.
The following command declares the function c_overpaid
in SQL:
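Roughly:

```sql
CREATE FUNCTION c_overpaid(emp, integer) RETURNS boolean
    AS 'DIRECTORY/funcs', 'c_overpaid'
    LANGUAGE C STRICT;
```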
Notice we have used STRICT
so that we did not have to check whether the input arguments were NULL.
To return a row or composite-type value from a C-language function, you can use a special API that provides macros and functions to hide most of the complexity of building composite data types. To use this API, the source file must include:
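That is:

```c
#include "funcapi.h"
```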
There are two ways you can build a composite data value (henceforth a “tuple”): you can build it from an array of Datum values, or from an array of C strings that can be passed to the input conversion functions of the tuple's column data types. In either case, you first need to obtain or construct a TupleDesc
descriptor for the tuple structure. When working with Datums, you pass the TupleDesc
to BlessTupleDesc
, and then call heap_form_tuple
for each row. When working with C strings, you pass the TupleDesc
to TupleDescGetAttInMetadata
, and then call BuildTupleFromCStrings
for each row. In the case of a function returning a set of tuples, the setup steps can all be done once during the first call of the function.
Several helper functions are available for setting up the needed TupleDesc
. The recommended way to do this in most functions returning composite values is to call:
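The call is:

```c
TypeFuncClass get_call_result_type(FunctionCallInfo fcinfo,
                                   Oid *resultTypeId,
                                   TupleDesc *resultTupleDesc)
```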
passing the same fcinfo
struct passed to the calling function itself. (This of course requires that you use the version-1 calling conventions.) resultTypeId
can be specified as NULL
or as the address of a local variable to receive the function's result type OID. resultTupleDesc
should be the address of a local TupleDesc
variable. Check that the result is TYPEFUNC_COMPOSITE
; if so, resultTupleDesc
has been filled with the needed TupleDesc
. (If it is not, you can report an error along the lines of “function returning record called in context that cannot accept type record”.)
get_call_result_type
can resolve the actual type of a polymorphic function result; so it is useful in functions that return scalar polymorphic results, not only functions that return composites. The resultTypeId
output is primarily useful for functions returning polymorphic scalars.
get_call_result_type
has a sibling get_expr_result_type
, which can be used to resolve the expected output type for a function call represented by an expression tree. This can be used when trying to determine the result type from outside the function itself. There is also get_func_result_type
, which can be used when only the function's OID is available. However these functions are not able to deal with functions declared to return record
, and get_func_result_type
cannot resolve polymorphic types, so you should preferentially use get_call_result_type
.
Older, now-deprecated functions for obtaining TupleDesc
s are:
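Most likely:

```c
TupleDesc RelationNameGetTupleDesc(const char *relname)
```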
to get a TupleDesc
for the row type of a named relation, and:
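Presumably:

```c
TupleDesc TypeGetTupleDesc(Oid typeoid, List *colaliases)
```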
to get a TupleDesc
based on a type OID. This can be used to get a TupleDesc
for a base or composite type. It will not work for a function that returns record
, however, and it cannot resolve polymorphic types.
Once you have a TupleDesc
, call:
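That is:

```c
TupleDesc BlessTupleDesc(TupleDesc tupdesc)
```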
if you plan to work with Datums, or:
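Presumably:

```c
AttInMetadata *TupleDescGetAttInMetadata(TupleDesc tupdesc)
```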
if you plan to work with C strings. If you are writing a function returning set, you can save the results of these functions in the FuncCallContext
structure — use the tuple_desc
or attinmeta
field respectively.
When working with Datums, use:
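The function meant here is:

```c
HeapTuple heap_form_tuple(TupleDesc tupdesc, Datum *values, bool *isnull)
```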
to build a HeapTuple
given user data in Datum form.
When working with C strings, use:
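Namely:

```c
HeapTuple BuildTupleFromCStrings(AttInMetadata *attinmeta, char **values)
```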
to build a HeapTuple
given user data in C string form. values
is an array of C strings, one for each attribute of the return row. Each C string should be in the form expected by the input function of the attribute data type. In order to return a null value for one of the attributes, the corresponding pointer in the values
array should be set to NULL
. This function will need to be called again for each row you return.
Once you have built a tuple to return from your function, it must be converted into a Datum
. Use:
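Most likely:

```c
HeapTupleGetDatum(HeapTuple tuple)
```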
to convert a HeapTuple
into a valid Datum. This Datum
can be returned directly if you intend to return just a single row, or it can be used as the current return value in a set-returning function.
An example appears in the next section.
C-language functions have two options for returning sets (multiple rows). In one method, called ValuePerCall mode, a set-returning function is called repeatedly (passing the same arguments each time) and it returns one new row on each call, until it has no more rows to return and signals that by returning NULL. The set-returning function (SRF) must therefore save enough state across calls to remember what it was doing and return the correct next item on each call. In the other method, called Materialize mode, a SRF fills and returns a tuplestore object containing its entire result; then only one call occurs for the whole result, and no inter-call state is needed.
When using ValuePerCall mode, it is important to remember that the query is not guaranteed to be run to completion; that is, due to options such as LIMIT
, the executor might stop making calls to the set-returning function before all rows have been fetched. This means it is not safe to perform cleanup activities in the last call, because that might not ever happen. It's recommended to use Materialize mode for functions that need access to external resources, such as file descriptors.
The remainder of this section documents a set of helper macros that are commonly used (though not required to be used) for SRFs using ValuePerCall mode. Additional details about Materialize mode can be found in src/backend/utils/fmgr/README
. Also, the contrib
modules in the PostgreSQL source distribution contain many examples of SRFs using both ValuePerCall and Materialize mode.
To use the ValuePerCall support macros described here, include funcapi.h
. These macros work with a structure FuncCallContext
that contains the state that needs to be saved across calls. Within the calling SRF, fcinfo->flinfo->fn_extra
is used to hold a pointer to FuncCallContext
across calls. The macros automatically fill that field on first use, and expect to find the same pointer there on subsequent uses.
The macros to be used by an SRF using this infrastructure are:
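The first of them is:

```c
SRF_IS_FIRSTCALL()
```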
Use this to determine if your function is being called for the first or a subsequent time. On the first call (only), call:
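Namely:

```c
SRF_FIRSTCALL_INIT()
```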
to initialize the FuncCallContext
. On every function call, including the first, call:
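That is:

```c
SRF_PERCALL_SETUP()
```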
to set up for using the FuncCallContext
.
If your function has data to return in the current call, use:
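Namely:

```c
SRF_RETURN_NEXT(funcctx, result)
```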
to return it to the caller. (result
must be of type Datum
, either a single value or a tuple prepared as described above.) Finally, when your function is finished returning data, use:
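That is:

```c
SRF_RETURN_DONE(funcctx)
```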
to clean up and end the SRF.
The memory context that is current when the SRF is called is a transient context that will be cleared between calls. This means that you do not need to call pfree
on everything you allocated using palloc
; it will go away anyway. However, if you want to allocate any data structures to live across calls, you need to put them somewhere else. The memory context referenced by multi_call_memory_ctx
is a suitable location for any data that needs to survive until the SRF is finished running. In most cases, this means that you should switch into multi_call_memory_ctx
while doing the first-call setup. Use funcctx->user_fctx
to hold a pointer to any such cross-call data structures. (Data you allocate in multi_call_memory_ctx
will go away automatically when the query ends, so it is not necessary to free that data manually, either.)
While the actual arguments to the function remain unchanged between calls, if you detoast the argument values (which is normally done transparently by the PG_GETARG_
xxx
macro) in the transient context then the detoasted copies will be freed on each cycle. Accordingly, if you keep references to such values in your user_fctx
, you must either copy them into the multi_call_memory_ctx
after detoasting, or ensure that you detoast the values only in that context.
A complete pseudo-code example looks like the following:
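A sketch of the usual skeleton; the row count of 10 and the int4 result are placeholders:

```c
PG_FUNCTION_INFO_V1(my_set_returning_function);

Datum
my_set_returning_function(PG_FUNCTION_ARGS)
{
    FuncCallContext *funcctx;
    Datum            result;

    if (SRF_IS_FIRSTCALL())
    {
        MemoryContext oldcontext;

        funcctx = SRF_FIRSTCALL_INIT();

        /* switch to the long-lived context for one-time setup */
        oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);

        /*
         * One-time setup: total number of rows, any cross-call state in
         * funcctx->user_fctx, and, for composite results, the TupleDesc
         * or AttInMetadata.
         */
        funcctx->max_calls = 10;        /* hypothetical row count */

        MemoryContextSwitchTo(oldcontext);
    }

    /* per-call setup */
    funcctx = SRF_PERCALL_SETUP();

    if (funcctx->call_cntr < funcctx->max_calls)
    {
        /* compute and return the next item */
        result = Int32GetDatum((int32) funcctx->call_cntr);
        SRF_RETURN_NEXT(funcctx, result);
    }
    else
    {
        /* no more rows */
        SRF_RETURN_DONE(funcctx);
    }
}
```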
A complete example of a simple SRF returning a composite type looks like:
One way to declare this function in SQL is:
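Assuming the example function is named retcomposite and returns rows of three integers, one plausible declaration is:

```sql
CREATE TYPE __retcomposite AS (f1 integer, f2 integer, f3 integer);

CREATE FUNCTION retcomposite(integer, integer)
    RETURNS SETOF __retcomposite
    AS 'filename', 'retcomposite'
    LANGUAGE C IMMUTABLE STRICT;
```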
A different way is to use OUT parameters:
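For instance:

```sql
CREATE OR REPLACE FUNCTION retcomposite(IN integer, IN integer,
    OUT f1 integer, OUT f2 integer, OUT f3 integer)
    RETURNS SETOF record
    AS 'filename', 'retcomposite'
    LANGUAGE C IMMUTABLE STRICT;
```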
Notice that in this method the output type of the function is formally an anonymous record
type.
For example, suppose we want to write a function to accept a single element of any type, and return a one-dimensional array of that type:
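A sketch of such a function, using the fmgr.h helper get_fn_expr_argtype to discover the actual argument type:

```c
#include "postgres.h"
#include "fmgr.h"
#include "utils/array.h"
#include "utils/lsyscache.h"

PG_FUNCTION_INFO_V1(make_array);

Datum
make_array(PG_FUNCTION_ARGS)
{
    ArrayType  *result;
    Oid         element_type = get_fn_expr_argtype(fcinfo->flinfo, 0);
    Datum       element;
    bool        isnull;
    int16       typlen;
    bool        typbyval;
    char        typalign;
    int         ndims;
    int         dims[MAXDIM];
    int         lbs[MAXDIM];

    if (!OidIsValid(element_type))
        elog(ERROR, "could not determine data type of input");

    /* get the provided element, being careful in case it's NULL */
    isnull = PG_ARGISNULL(0);
    if (isnull)
        element = (Datum) 0;
    else
        element = PG_GETARG_DATUM(0);

    /* we have one dimension, with one element, lower bound 1 */
    ndims = 1;
    dims[0] = 1;
    lbs[0] = 1;

    /* get required info about the element type */
    get_typlenbyvalalign(element_type, &typlen, &typbyval, &typalign);

    /* now build the array */
    result = construct_md_array(&element, &isnull, ndims, dims, lbs,
                                element_type, typlen, typbyval, typalign);

    PG_RETURN_ARRAYTYPE_P(result);
}
```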
The following command declares the function make_array
in SQL:
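Roughly (note the lack of STRICT, since the code handles null inputs itself):

```sql
CREATE FUNCTION make_array(anyelement) RETURNS anyarray
    AS 'DIRECTORY/funcs', 'make_array'
    LANGUAGE C IMMUTABLE;
```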
There is a variant of polymorphism that is only available to C-language functions: they can be declared to take parameters of type "any"
. (Note that this type name must be double-quoted, since it's also a SQL reserved word.) This works like anyelement
except that it does not constrain different "any"
arguments to be the same type, nor do they help determine the function's result type. A C-language function can also declare its final parameter to be VARIADIC "any"
. This will match one or more actual arguments of any type (not necessarily the same type). These arguments will not be gathered into an array as happens with normal variadic functions; they will just be passed to the function separately. The PG_NARGS()
macro and the methods described above must be used to determine the number of actual arguments and their types when using this feature. Also, users of such a function might wish to use the VARIADIC
keyword in their function call, with the expectation that the function would treat the array elements as separate arguments. The function itself must implement that behavior if wanted, after using get_fn_expr_variadic
to detect that the actual argument was marked with VARIADIC
.
An add-in can reserve shared memory on server startup by calling RequestAddinShmemSpace(int size) from your _PG_init function.
LWLocks are reserved by calling:
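The call is:

```c
void RequestNamedLWLockTranche(const char *tranche_name, int num_lwlocks)
```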
from _PG_init
. This will ensure that an array of num_lwlocks
LWLocks is available under the name tranche_name
. Use GetNamedLWLockTranche
to get a pointer to this array.
To avoid possible race-conditions, each backend should use the LWLock AddinShmemInitLock
when connecting to and initializing its allocation of shared memory, as shown here:
Although the PostgreSQL backend is written in C, it is possible to write extensions in C++ if these guidelines are followed:
All functions accessed by the backend must present a C interface to the backend; these C functions can then call C++ functions. For example, extern C
linkage is required for backend-accessed functions. This is also necessary for any functions that are passed as pointers between the backend and C++ code.
Free memory using the appropriate deallocation method. For example, most backend memory is allocated using palloc()
, so use pfree()
to free it. Using C++ delete
in such cases will fail.
Prevent exceptions from propagating into the C code (use a catch-all block at the top level of all extern "C" functions). This is necessary even if the C++ code does not explicitly throw any exceptions, because events like out-of-memory can still throw exceptions. Any exceptions must be caught and appropriate errors passed back to the C interface. If possible, compile C++ with -fno-exceptions to eliminate exceptions entirely; in such cases, you must check for failures in your C++ code, e.g., check for NULL returned by new().
If calling backend functions from C++ code, be sure that the C++ call stack contains only plain old data structures (POD). This is necessary because backend errors generate a distant longjmp() that does not properly unroll a C++ call stack with non-POD objects.
In summary, it is best to place C++ code behind a wall of extern "C" functions that interface to the backend, and to avoid exception, memory, and call stack leakage.
(The actual PostgreSQL C code calls the C equivalent of the SQL integer type int32, because it is a convention in C that intXX means XX bits. Note therefore also that the C type int8 is 1 byte in size. The SQL type int8 is called int64 in C. See also the table of equivalent C types below.)
Never modify the contents of a pass-by-reference input value. If you do so you are likely to corrupt on-disk data, since the pointer you are given might point directly into a disk buffer. The sole exception to this rule is explained later in this chapter.
The table below specifies which C type corresponds to which SQL type when writing a C-language function that uses a built-in type of PostgreSQL. The “Defined In” column gives the header file that needs to be included to get the type definition. (The actual definition might be in a different file that is included by the listed file. It is recommended that users stick to the defined interface.) Note that you should always include postgres.h first in any source file, because it declares a number of things that you will need anyway.
SQL Type | C Type | Defined In |
---|---|---|
Finally, the version-1 function call conventions make it possible to return set results and to implement trigger functions and procedural-language call handlers. For more details see src/backend/utils/fmgr/README in the source distribution.
Compiling and linking your code so that it can be dynamically loaded into PostgreSQL always requires special flags. See the section on compiling and linking dynamically loaded functions for a detailed explanation of how to do it for your particular operating system.
Remember to define a “magic block” for your shared library, as described in the discussion of dynamic loading.
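For reference, a minimal complete source file might look like this (the function name add_one is illustrative); note that postgres.h comes first and that the magic block appears exactly once per shared library:

```c
#include "postgres.h"   /* must come first */
#include "fmgr.h"

PG_MODULE_MAGIC;        /* the required “magic block”, once per library */

PG_FUNCTION_INFO_V1(add_one);

Datum
add_one(PG_FUNCTION_ARGS)
{
    int32   arg = PG_GETARG_INT32(0);

    PG_RETURN_INT32(arg + 1);
}
```

After building it into a shared library, such a function could be declared to SQL with something like CREATE FUNCTION add_one(integer) RETURNS integer AS 'filename', 'add_one' LANGUAGE C STRICT.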
If this is too complicated for you, you should consider using PGXS, which hides the platform differences behind a uniform interface.
Refer back to the discussion of dynamic loading for where the server expects to find the shared library files.
C-language functions can be declared to accept and return the polymorphic types described earlier. When a function's arguments or return types are defined as polymorphic types, the function author cannot know in advance what data type it will be called with, or what type it needs to return. There are two routines provided in fmgr.h to allow a version-1 C function to discover the actual data types of its arguments and the type it is expected to return. The routines are called get_fn_expr_rettype(FmgrInfo *flinfo) and get_fn_expr_argtype(FmgrInfo *flinfo, int argnum). They return the result or argument type OID, or InvalidOid if the information is not available. The structure flinfo is normally accessed as fcinfo->flinfo. The parameter argnum is zero-based. get_call_result_type can also be used as an alternative to get_fn_expr_rettype. There is also get_fn_expr_variadic, which can be used to find out whether variadic arguments have been merged into an array. This is primarily useful for VARIADIC "any" functions, since such merging will always have occurred for variadic functions taking ordinary array types.
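As a small illustration, a hypothetical identity function (assumed to be declared in SQL as taking and returning anyelement, LANGUAGE C STRICT) could check the types the parser resolved for it:

```c
#include "postgres.h"
#include "fmgr.h"

PG_FUNCTION_INFO_V1(poly_identity);

Datum
poly_identity(PG_FUNCTION_ARGS)
{
    Oid argtype = get_fn_expr_argtype(fcinfo->flinfo, 0);  /* zero-based */
    Oid rettype = get_fn_expr_rettype(fcinfo->flinfo);

    if (!OidIsValid(argtype) || !OidIsValid(rettype))
        elog(ERROR, "could not determine polymorphic types");

    /* For an anyelement -> anyelement function the two OIDs will match. */
    Assert(argtype == rettype);

    /* return the argument unchanged */
    PG_RETURN_DATUM(PG_GETARG_DATUM(0));
}
```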
Add-ins can reserve LWLocks and an allocation of shared memory on server startup. The add-in's shared library must be preloaded by specifying it in shared_preload_libraries. Shared memory and named LWLocks are reserved by calling the appropriate request functions from the library's _PG_init function, for example:
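A minimal sketch of the reservation step, with an illustrative size constant and tranche name ("myext"); the rest of the module's boilerplate (PG_MODULE_MAGIC, the actual shared-memory initialization shown earlier) is omitted here.

```c
#include "postgres.h"
#include "fmgr.h"
#include "storage/ipc.h"
#include "storage/lwlock.h"
#include "storage/shmem.h"

/* Hypothetical amount of shared memory this extension needs. */
#define MYEXT_SHMEM_SIZE 1024

void _PG_init(void);

void
_PG_init(void)
{
    /*
     * These requests only take effect while the library is being preloaded
     * via shared_preload_libraries; a per-backend LOAD is too late to
     * reserve shared resources.
     */
    RequestAddinShmemSpace(MYEXT_SHMEM_SIZE);       /* shared memory */
    RequestNamedLWLockTranche("myext", 4);          /* four LWLocks */
}
```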
B-Tree Strategies

Operation | Strategy Number |
---|---|
less than | 1 |
less than or equal | 2 |
equal | 3 |
greater than or equal | 4 |
greater than | 5 |

Hash Strategies

Operation | Strategy Number |
---|---|
equal | 1 |

GiST Two-Dimensional ("R-tree") Strategies

Operation | Strategy Number |
---|---|
strictly left of | 1 |
does not extend to right of | 2 |
overlaps | 3 |
does not extend to left of | 4 |
strictly right of | 5 |
same | 6 |
contains | 7 |
contained by | 8 |
does not extend above | 9 |
strictly below | 10 |
strictly above | 11 |
does not extend below | 12 |

SP-GiST Point Strategies

Operation | Strategy Number |
---|---|
strictly left of | 1 |
strictly right of | 5 |
same | 6 |
contained by | 8 |
strictly below | 10 |
strictly above | 11 |

GIN Array Strategies

Operation | Strategy Number |
---|---|
overlap | 1 |
contains | 2 |
is contained by | 3 |
equal | 4 |

BRIN Minmax Strategies

Operation | Strategy Number |
---|---|
less than | 1 |
less than or equal | 2 |
equal | 3 |
greater than or equal | 4 |
greater than | 5 |

B-Tree Support Functions

Function | Support Number |
---|---|
Compare two keys and return an integer less than zero, zero, or greater than zero, indicating whether the first key is less than, equal to, or greater than the second | 1 |
Return the addresses of C-callable sort support function(s) (optional) | 2 |
Compare a test value to a base value plus/minus an offset, and return true or false according to the comparison result (optional) | 3 |
Determine if it is safe for indexes that use the operator class to apply the btree deduplication optimization (optional) | 4 |
Defines a set of options that are specific to this operator class (optional) | 5 |

Hash Support Functions

Function | Support Number |
---|---|
Compute the 32-bit hash value for a key | 1 |
Compute the 64-bit hash value for a key given a 64-bit salt; if the salt is 0, the low 32 bits of the result must match the value that would have been computed by function 1 (optional) | 2 |
Defines a set of options that are specific to this operator class (optional) | 3 |

GiST Support Functions

Function | Description | Support Number |
---|---|---|
consistent | determine whether key satisfies the query qualifier | 1 |
union | compute union of a set of keys | 2 |
compress | compute a compressed representation of a key or value to be indexed | 3 |
decompress | compute a decompressed representation of a compressed key | 4 |
penalty | compute penalty for inserting new key into subtree with given subtree's key | 5 |
picksplit | determine which entries of a page are to be moved to the new page and compute the union keys for resulting pages | 6 |
equal | compare two keys and return true if they are equal | 7 |
distance | determine distance from key to query value (optional) | 8 |
fetch | compute original representation of a compressed key for index-only scans (optional) | 9 |
options | Defines a set of options that are specific to this operator class (optional) | 10 |

SP-GiST Support Functions

Function | Description | Support Number |
---|---|---|
config | provide basic information about the operator class | 1 |
choose | determine how to insert a new value into an inner tuple | 2 |
picksplit | determine how to partition a set of values | 3 |
inner_consistent | determine which sub-partitions need to be searched for a query | 4 |
leaf_consistent | determine whether key satisfies the query qualifier | 5 |
options | Defines a set of options that are specific to this operator class (optional) | 6 |

GIN Support Functions

Function | Description | Support Number |
---|---|---|
compare | compare two keys and return an integer less than zero, zero, or greater than zero, indicating whether the first key is less than, equal to, or greater than the second | 1 |
extractValue | extract keys from a value to be indexed | 2 |
extractQuery | extract keys from a query condition | 3 |
consistent | determine whether value matches query condition (Boolean variant) (optional if support function 6 is present) | 4 |
comparePartial | compare partial key from query and key from index, and return an integer less than zero, zero, or greater than zero, indicating whether GIN should ignore this index entry, treat the entry as a match, or stop the index scan (optional) | 5 |
triConsistent | determine whether value matches query condition (ternary variant) (optional if support function 4 is present) | 6 |
options | Defines a set of options that are specific to this operator class (optional) | 7 |

BRIN Support Functions

Function | Description | Support Number |
---|---|---|
opcInfo | return internal information describing the indexed columns' summary data | 1 |
add_value | add a new value to an existing summary index tuple | 2 |
consistent | determine whether value matches query condition | 3 |
union | compute union of two summary tuples | 4 |
options | Defines a set of options that are specific to this operator class (optional) | 5 |