ALTER EXTENSION — change the definition of an extension
ALTER EXTENSION changes the definition of an installed extension. There are several subforms:
UPDATE
This form updates the extension to a newer version. The extension must supply a suitable update script (or series of scripts) that can modify the currently-installed version into the requested version.
SET SCHEMA
This form moves the extension's objects into another schema. The extension has to be relocatable for this command to succeed.
ADD
member_object
This form adds an existing object to the extension. This is mainly useful in extension update scripts. The object will subsequently be treated as a member of the extension; notably, it can only be dropped by dropping the extension.
DROP
member_object
This form removes a member object from the extension. This is mainly useful in extension update scripts. The object is not dropped, only disassociated from the extension.
See Section 37.17 for more information about these operations.
You must own the extension to use ALTER EXTENSION. The ADD/DROP forms require ownership of the added/dropped object as well.
name
The name of an installed extension.
new_version
The desired new version of the extension. This can be written as either an identifier or a string literal. If not specified, ALTER EXTENSION UPDATE
attempts to update to whatever is shown as the default version in the extension's control file.
new_schema
The new schema for the extension.
object_name
aggregate_name
function_name
operator_name
procedure_name
routine_name
The name of an object to be added to or removed from the extension. Names of tables, aggregates, domains, foreign tables, functions, operators, operator classes, operator families, procedures, routines, sequences, text search objects, types, and views can be schema-qualified.
source_type
The name of the source data type of the cast.
target_type
The name of the target data type of the cast.
argmode
The mode of a function, procedure, or aggregate argument: IN, OUT, INOUT, or VARIADIC. If omitted, the default is IN. Note that ALTER EXTENSION does not actually pay any attention to OUT arguments, since only the input arguments are needed to determine the function's identity. So it is sufficient to list the IN, INOUT, and VARIADIC arguments.
argname
The name of a function, procedure, or aggregate argument. Note that ALTER EXTENSION does not actually pay any attention to argument names, since only the argument data types are needed to determine the function's identity.
argtype
The data type of a function, procedure, or aggregate argument.
left_type
right_type
The data type(s) of the operator's arguments (optionally schema-qualified). Write NONE for the missing argument of a prefix or postfix operator.
PROCEDURAL
This is a noise word.
type_name
The name of the data type of the transform.
lang_name
The name of the language of the transform.
To update the hstore extension to version 2.0:
To change the schema of the hstore extension to utils:
To add an existing function to the hstore extension:
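The code snippets for these examples did not survive extraction; the statements below follow the stock PostgreSQL documentation (the populate_record signature is the documentation's example, not a requirement):

```sql
-- Update the hstore extension to version 2.0
ALTER EXTENSION hstore UPDATE TO '2.0';

-- Move the extension's objects into the utils schema
ALTER EXTENSION hstore SET SCHEMA utils;

-- Attach an existing function to the extension
ALTER EXTENSION hstore ADD FUNCTION populate_record(anyelement, hstore);
```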
ALTER EXTENSION is a PostgreSQL extension.
ALTER FUNCTION — change the definition of a function
ALTER FUNCTION
changes the definition of a function.
You must own the function to use ALTER FUNCTION. To change a function's schema, you must also have CREATE privilege on the new schema. To alter the owner, you must also be a direct or indirect member of the new owning role, and that role must have CREATE privilege on the function's schema. (These restrictions enforce that altering the owner doesn't do anything you couldn't do by dropping and recreating the function. However, a superuser can alter ownership of any function anyway.)
name
The name (optionally schema-qualified) of an existing function. If no argument list is specified, the name must be unique in its schema.
argmode
The mode of an argument: IN, OUT, INOUT, or VARIADIC. If omitted, the default is IN. Note that ALTER FUNCTION does not actually pay any attention to OUT arguments, since only the input arguments are needed to determine the function's identity. So it is sufficient to list the IN, INOUT, and VARIADIC arguments.
argname
The name of an argument. Note that ALTER FUNCTION does not actually pay any attention to argument names, since only the argument data types are needed to determine the function's identity.
argtype
The data type(s) of the function's arguments (optionally schema-qualified), if any.
new_name
The new name of the function.
new_owner
The new owner of the function. Note that if the function is marked SECURITY DEFINER, it will subsequently execute as the new owner.
new_schema
The new schema for the function.
extension_name
The name of the extension that the function is to depend on.
CALLED ON NULL INPUT
RETURNS NULL ON NULL INPUT
STRICT
CALLED ON NULL INPUT changes the function so that it will be invoked when some or all of its arguments are null. RETURNS NULL ON NULL INPUT or STRICT changes the function so that it is not invoked if any of its arguments are null; instead, a null result is assumed automatically. See CREATE FUNCTION for more information.
IMMUTABLE
STABLE
VOLATILE
Change the volatility of the function to the specified setting. See CREATE FUNCTION for details.
[ EXTERNAL ] SECURITY INVOKER
[ EXTERNAL ] SECURITY DEFINER
Change whether the function is a security definer or not. The key word EXTERNAL is ignored for SQL conformance. See CREATE FUNCTION for more information about this capability.
PARALLEL
Change whether the function is deemed safe for parallelism. See CREATE FUNCTION for details.
LEAKPROOF
Change whether the function is considered leakproof or not. See CREATE FUNCTION for more information about this capability.
COST
execution_cost
Change the estimated execution cost of the function. See CREATE FUNCTION for more information.
ROWS
result_rows
Change the estimated number of rows returned by a set-returning function. See CREATE FUNCTION for more information.
configuration_parameter
value
Add or change the assignment to be made to a configuration parameter when the function is called. If the value is DEFAULT or, equivalently, RESET is used, the function-local setting is removed, so that the function executes with the value present in its environment. Use RESET ALL to clear all function-local settings. SET FROM CURRENT saves the value of the parameter that is current when ALTER FUNCTION is executed as the value to be applied when the function is entered.
See SET and Chapter 19 for more information about allowed parameter names and values.
RESTRICT
This is ignored for conformance with the SQL standard.
To rename the function sqrt for type integer to square_root:
To change the owner of the function sqrt for type integer to joe:
To change the schema of the function sqrt for type integer to maths:
To mark the function sqrt for type integer as being dependent on the extension mathlib:
To set the search path automatically for a function:
To disable automatic setting of search_path for a function:
The function will now execute with whatever search path is used by its caller.
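The statements below, modeled on the stock PostgreSQL documentation, correspond to the examples above (check_password is the documentation's hypothetical function for the search_path examples):

```sql
ALTER FUNCTION sqrt(integer) RENAME TO square_root;
ALTER FUNCTION sqrt(integer) OWNER TO joe;
ALTER FUNCTION sqrt(integer) SET SCHEMA maths;
ALTER FUNCTION sqrt(integer) DEPENDS ON EXTENSION mathlib;
ALTER FUNCTION check_password(text) SET search_path = admin, pg_temp;
ALTER FUNCTION check_password(text) RESET search_path;
```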
This statement is partially compatible with the ALTER FUNCTION statement in the SQL standard. The standard allows more properties of a function to be modified, but does not provide the ability to rename a function, make a function a security definer, attach configuration parameter values to a function, or change the owner, schema, or volatility of a function. The standard also requires the RESTRICT key word, which is optional in PostgreSQL.
The entries in this reference are meant to provide in reasonable length an authoritative, complete, and formal summary about their respective subjects. More information about the use of PostgreSQL, in narrative, tutorial, or example form, can be found in other parts of this book. See the cross-references listed on each reference page.
The reference entries are also available as traditional "man" pages.
ALTER INDEX — change the definition of an index
ALTER INDEX
changes the definition of an existing index. There are several subforms:
RENAME
The RENAME form changes the name of the index. There is no effect on the stored data.
SET TABLESPACE
This form changes the index's tablespace to the specified tablespace and moves the data file(s) associated with the index to the new tablespace. To change the tablespace of an index, you must own the index and have CREATE privilege on the new tablespace. All indexes in the current database in a tablespace can be moved by using the ALL IN TABLESPACE form, which will lock all indexes to be moved and then move each one. This form also supports OWNED BY, which will only move indexes owned by the roles specified. If the NOWAIT option is specified then the command will fail if it is unable to acquire all of the locks required immediately. Note that system catalogs will not be moved by this command; use ALTER DATABASE or explicit ALTER INDEX invocations instead if desired. See also CREATE TABLESPACE.
DEPENDS ON EXTENSION
This form marks the index as dependent on the extension, such that if the extension is dropped, the index will automatically be dropped as well.
SET ( storage_parameter = value [, ... ] )
This form changes one or more index-method-specific storage parameters for the index. See CREATE INDEX for details on the available parameters. Note that the index contents will not be modified immediately by this command; depending on the parameter you might need to rebuild the index with REINDEX to get the desired effects.
RESET ( storage_parameter [, ... ] )
This form resets one or more index-method-specific storage parameters to their defaults. As with SET, a REINDEX might be needed to update the index entirely.
IF EXISTS
Do not throw an error if the index does not exist. A notice is issued in this case.
name
The name (possibly schema-qualified) of an existing index to alter.
new_name
The new name for the index.
tablespace_name
The tablespace to which the index will be moved.
extension_name
The name of the extension that the index is to depend on.
storage_parameter
The name of an index-method-specific storage parameter.
value
The new value for an index-method-specific storage parameter. This might be a number or a word depending on the parameter.
These operations are also possible using ALTER TABLE. ALTER INDEX
is in fact just an alias for the forms of ALTER TABLE
that apply to indexes.
There was formerly an ALTER INDEX OWNER
variant, but this is now ignored (with a warning). An index cannot have an owner different from its table's owner. Changing the table's owner automatically changes the index as well.
Changing any part of a system catalog index is not permitted.
To rename an existing index:
To move an index to a different tablespace:
To change an index's fill factor (assuming that the index method supports it):
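The statements for these examples, following the stock PostgreSQL documentation (distributors and fasttablespace are the documentation's example names):

```sql
ALTER INDEX distributors RENAME TO suppliers;
ALTER INDEX distributors SET TABLESPACE fasttablespace;
ALTER INDEX distributors SET (fillfactor = 75);
REINDEX INDEX distributors;
```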
ALTER INDEX
is a PostgreSQL extension.
Version: 11
ALTER LANGUAGE — change the definition of a procedural language
ALTER LANGUAGE changes the definition of a procedural language. The only functionality is to rename the language or assign a new owner. You must be superuser or owner of the language to use ALTER LANGUAGE.
name
Name of a language
new_name
The new name of the language
new_owner
The new owner of the language
There is no ALTER LANGUAGE statement in the SQL standard.
ALTER PUBLICATION — change the definition of a publication
The command ALTER PUBLICATION
can change the attributes of a publication.
The first three variants change which tables are part of the publication. The SET TABLE
clause will replace the list of tables in the publication with the specified one. The ADD TABLE
and DROP TABLE
clauses will add and remove one or more tables from the publication. Note that adding tables to a publication that is already subscribed to will require an ALTER SUBSCRIPTION ... REFRESH PUBLICATION
action on the subscribing side in order to become effective.
The fourth variant of this command listed in the synopsis can change all of the publication properties specified in CREATE PUBLICATION. Properties not mentioned in the command retain their previous settings.
The remaining variants change the owner and the name of the publication.
You must own the publication to use ALTER PUBLICATION
. To alter the owner, you must also be a direct or indirect member of the new owning role. The new owner must have CREATE
privilege on the database. Also, the new owner of a FOR ALL TABLES
publication must be a superuser. However, a superuser can change the ownership of a publication while circumventing these restrictions.
name
The name of an existing publication whose definition is to be altered.
table_name
Name of an existing table. If ONLY
is specified before the table name, only that table is affected. If ONLY
is not specified, the table and all its descendant tables (if any) are affected. Optionally, *
can be specified after the table name to explicitly indicate that descendant tables are included.
SET (
publication_parameter
[= value
] [, ... ] )
new_owner
The user name of the new owner of the publication.
new_name
The new name for the publication.
Change the publication to publish only deletes and updates:
Add some tables to the publication:
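The statements for these examples were lost in extraction; the ones below follow the stock PostgreSQL documentation (noinsert, mypublication, users, and departments are the documentation's example names):

```sql
ALTER PUBLICATION noinsert SET (publish = 'update, delete');
ALTER PUBLICATION mypublication ADD TABLE users, departments;
```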
ALTER PUBLICATION
is a PostgreSQL extension.
This clause alters publication parameters originally set by CREATE PUBLICATION. See there for more information.
ALTER DEFAULT PRIVILEGES — define default access privileges
ALTER DEFAULT PRIVILEGES allows you to set the privileges that will be applied to objects created in the future. (It does not affect privileges assigned to already-existing objects.) Currently, only the privileges for schemas, tables (including views and foreign tables), sequences, functions, and types (including domains) can be altered. For this command, functions include aggregates and procedures. The words FUNCTIONS and ROUTINES are equivalent in this command. (ROUTINES is preferred going forward as the standard term for functions and procedures taken together. In earlier PostgreSQL releases, only the word FUNCTIONS was allowed. It is not possible to set default privileges for functions and procedures separately.)
You can change default privileges only for objects that will be created by yourself or by roles that you are a member of. The privileges can be set globally (i.e., for all objects created in the current database), or just for objects created in specified schemas. Default privileges that are specified per-schema are added to whatever the global default privileges are for the particular object type.
As explained in GRANT, the default privileges for any object type normally grant all grantable permissions to the object owner, and may grant some privileges to PUBLIC as well. However, this behavior can be changed by altering the global default privileges with ALTER DEFAULT PRIVILEGES.
target_role
The name of an existing role of which the current role is a member. If FOR ROLE is omitted, the current role is assumed.
schema_name
The name of an existing schema. If specified, the default privileges are altered for objects later created in that schema. If IN SCHEMA is omitted, the global default privileges are altered. IN SCHEMA is not allowed when using ON SCHEMAS, as schemas can't be nested.
role_name
The name of an existing role to grant or revoke privileges for. This parameter, and all the other parameters in abbreviated_grant_or_revoke, act as described under GRANT or REVOKE, except that one is setting privileges for a whole class of objects rather than specific named objects.
Use psql's \ddp command to obtain information about existing assignments of default privileges. The meaning of the privilege values is the same as explained for \dp under GRANT.
If you wish to drop a role for which the default privileges have been altered, it is necessary to reverse the changes in its default privileges or use DROP OWNED BY to get rid of the default privileges entry for the role.
To grant everyone SELECT privilege, and give the role webuser INSERT privilege, on all tables (and views) subsequently created in schema myschema:
Undo the above, so that subsequently-created tables won't have any more permissions than normal:
Remove the public EXECUTE permission that is normally granted on functions, for all functions subsequently created by role admin:
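The statements for these examples, following the stock PostgreSQL documentation:

```sql
ALTER DEFAULT PRIVILEGES IN SCHEMA myschema GRANT SELECT ON TABLES TO PUBLIC;
ALTER DEFAULT PRIVILEGES IN SCHEMA myschema GRANT INSERT ON TABLES TO webuser;

ALTER DEFAULT PRIVILEGES IN SCHEMA myschema REVOKE SELECT ON TABLES FROM PUBLIC;
ALTER DEFAULT PRIVILEGES IN SCHEMA myschema REVOKE INSERT ON TABLES FROM webuser;

ALTER DEFAULT PRIVILEGES FOR ROLE admin REVOKE EXECUTE ON FUNCTIONS FROM PUBLIC;
```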
There is no ALTER DEFAULT PRIVILEGES statement in the SQL standard.
ALTER SEQUENCE — change the definition of a sequence generator
ALTER SEQUENCE changes the parameters of an existing sequence generator. Any parameters not specifically set in the ALTER SEQUENCE command retain their prior settings.
You must own the sequence to use ALTER SEQUENCE. To change a sequence's schema, you must also have CREATE privilege on the new schema. To alter the owner, you must also be a direct or indirect member of the new owning role, and that role must have CREATE privilege on the sequence's schema. (These restrictions enforce that altering the owner doesn't do anything you couldn't do by dropping and recreating the sequence. However, a superuser can alter ownership of any sequence anyway.)
name
The name (optionally schema-qualified) of a sequence to be altered.
IF EXISTS
Do not throw an error if the sequence does not exist. A notice is issued in this case.
data_type
The optional clause AS data_type changes the data type of the sequence. Valid types are smallint, integer, and bigint.
Changing the data type automatically changes the minimum and maximum values of the sequence if and only if the previous minimum and maximum values were the minimum or maximum value of the old data type (in other words, if the sequence had been created using NO MINVALUE or NO MAXVALUE, implicitly or explicitly). Otherwise, the minimum and maximum values are preserved, unless new values are given as part of the same command. If the minimum and maximum values do not fit into the new data type, an error will be generated.
increment
The clause INCREMENT BY increment is optional. A positive value will make an ascending sequence, a negative one a descending sequence. If unspecified, the old increment value will be maintained.
minvalue
NO MINVALUE
The optional clause MINVALUE minvalue determines the minimum value a sequence can generate. If NO MINVALUE is specified, the defaults of 1 and the minimum value of the data type for ascending and descending sequences, respectively, will be used. If neither option is specified, the current minimum value will be maintained.
maxvalue
NO MAXVALUE
The optional clause MAXVALUE maxvalue determines the maximum value for the sequence. If NO MAXVALUE is specified, the defaults of the maximum value of the data type and -1 for ascending and descending sequences, respectively, will be used. If neither option is specified, the current maximum value will be maintained.
start
The optional clause START WITH start changes the recorded start value of the sequence. This has no effect on the current sequence value; it simply sets the value that future ALTER SEQUENCE RESTART commands will use.
restart
The optional clause RESTART [ WITH restart ] changes the current value of the sequence. This is similar to calling the setval function with is_called = false: the specified value will be returned by the next call of nextval. Writing RESTART with no restart value is equivalent to supplying the start value that was recorded by CREATE SEQUENCE or last set by ALTER SEQUENCE START WITH.
In contrast to a setval call, a RESTART operation on a sequence is transactional and blocks concurrent transactions from obtaining numbers from the same sequence. If that's not the desired mode of operation, setval should be used.
cache
The clause CACHE cache enables sequence numbers to be preallocated and stored in memory for faster access. The minimum value is 1 (only one value can be generated at a time, i.e., no cache). If unspecified, the old cache value will be maintained.
CYCLE
The optional CYCLE key word can be used to enable the sequence to wrap around when the maxvalue or minvalue has been reached by an ascending or descending sequence, respectively. If the limit is reached, the next number generated will be the minvalue or maxvalue, respectively.
NO CYCLE
If the optional NO CYCLE key word is specified, any calls to nextval after the sequence has reached its maximum value will return an error. If neither CYCLE nor NO CYCLE are specified, the old cycle behavior will be maintained.
OWNED BY
table_name.column_name
OWNED BY NONE
The OWNED BY option causes the sequence to be associated with a specific table column, such that if that column (or its whole table) is dropped, the sequence will be automatically dropped as well. If specified, this association replaces any previously specified association for the sequence. The specified table must have the same owner and be in the same schema as the sequence. Specifying OWNED BY NONE removes any existing association, making the sequence free-standing.
new_owner
The user name of the new owner of the sequence.
new_name
The new name for the sequence.
new_schema
The new schema for the sequence.
ALTER SEQUENCE will not immediately affect nextval results in backends, other than the current one, that have preallocated (cached) sequence values. They will use up all cached values prior to noticing the changed sequence generation parameters. The current backend will be affected immediately.
ALTER SEQUENCE does not affect the currval status for the sequence. (Before PostgreSQL 8.3, it sometimes did.)
ALTER SEQUENCE blocks concurrent nextval, currval, lastval, and setval calls.
For historical reasons, ALTER TABLE can be used with sequences too; but the only variants of ALTER TABLE that are allowed with sequences are equivalent to the forms shown above.
To restart a sequence called serial, at 105:
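The statement for this example, per the stock PostgreSQL documentation:

```sql
ALTER SEQUENCE serial RESTART WITH 105;
```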
ALTER SEQUENCE conforms to the SQL standard, except for the AS, START WITH, OWNED BY, OWNER TO, RENAME TO, and SET SCHEMA clauses, which are PostgreSQL extensions.
Version: 11
ALTER STATISTICS — change the definition of an extended statistics object
ALTER STATISTICS
changes the parameters of an existing extended statistics object. Any parameters not specifically set in the ALTER STATISTICS
command retain their prior settings.
You must own the statistics object to use ALTER STATISTICS
. To change a statistics object's schema, you must also have CREATE
privilege on the new schema. To alter the owner, you must also be a direct or indirect member of the new owning role, and that role must have CREATE
privilege on the statistics object's schema. (These restrictions enforce that altering the owner doesn't do anything you couldn't do by dropping and recreating the statistics object. However, a superuser can alter ownership of any statistics object anyway.)
name
The name (optionally schema-qualified) of the statistics object to be altered.
new_owner
The user name of the new owner of the statistics object.
new_name
The new name for the statistics object.
new_schema
The new schema for the statistics object.
new_target
There is no ALTER STATISTICS
command in the SQL standard.
ALTER DATABASE — change a database
ALTER DATABASE
changes the attributes of a database.
The first form changes certain per-database settings. (See below for details.) Only the database owner or a superuser can change these settings.
The second form changes the name of the database. Only the database owner or a superuser can rename a database; non-superuser owners must also have the CREATEDB
privilege. The current database cannot be renamed. (Connect to a different database if you need to do that.)
The third form changes the owner of the database. To alter the owner, you must own the database and also be a direct or indirect member of the new owning role, and you must have the CREATEDB
privilege. (Note that superusers have all these privileges automatically.)
The fourth form changes the default tablespace of the database. Only the database owner or a superuser can do this; you must also have create privilege for the new tablespace. This command physically moves any tables or indexes in the database's old default tablespace to the new tablespace. The new default tablespace must be empty for this database, and no one can be connected to the database. Tables and indexes in non-default tablespaces are unaffected.
The remaining forms change the session default for a run-time configuration variable for a PostgreSQL database. Whenever a new session is subsequently started in that database, the specified value becomes the session default value. The database-specific default overrides whatever setting is present in postgresql.conf
or has been received from the postgres
command line. Only the database owner or a superuser can change the session defaults for a database. Certain variables cannot be set this way, or can only be set by a superuser.
name
The name of the database whose attributes are to be altered.
allowconn
If false then no one can connect to this database.
connlimit
How many concurrent connections can be made to this database. -1 means no limit.
istemplate
If true, then this database can be cloned by any user with CREATEDB privileges; if false, then only superusers or the owner of the database can clone it.
new_name
The new name of the database.
new_owner
The new owner of the database.
new_tablespace
The new default tablespace of the database.
This form of the command cannot be executed inside a transaction block.
configuration_parameter
value
Set this database's session default for the specified configuration parameter to the given value. If value
is DEFAULT
or, equivalently, RESET
is used, the database-specific setting is removed, so the system-wide default setting will be inherited in new sessions. Use RESET ALL
to clear all database-specific settings. SET FROM CURRENT
saves the session's current value of the parameter as the database-specific value.
To disable index scans by default in the database test
:
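The example statement itself was lost in extraction; per the stock PostgreSQL documentation it is:

```sql
ALTER DATABASE test SET enable_indexscan TO off;
```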
The ALTER DATABASE
statement is a PostgreSQL extension.
ALTER SYSTEM — change a server configuration parameter
ALTER SYSTEM is used for changing server configuration parameters across the entire database cluster. It can be more convenient than the traditional method of manually editing the postgresql.conf file. ALTER SYSTEM writes the given parameter setting to the postgresql.auto.conf file, which is read in addition to postgresql.conf. Setting a parameter to DEFAULT, or using the RESET variant, removes that configuration entry from the postgresql.auto.conf file. Use RESET ALL to remove all such configuration entries.
Values set with ALTER SYSTEM will be effective after the next server configuration reload, or after the next server restart in the case of parameters that can only be changed at server start. A server configuration reload can be commanded by calling the SQL function pg_reload_conf(), running pg_ctl reload, or sending a SIGHUP signal to the main server process.
Only superusers can use ALTER SYSTEM. Also, since this command acts directly on the file system and cannot be rolled back, it is not allowed inside a transaction block or function.
configuration_parameter
Name of a settable configuration parameter. Available parameters are documented in Chapter 19.
value
New value of the parameter. Values can be specified as string constants, identifiers, numbers, or comma-separated lists of these, as appropriate for the particular parameter. DEFAULT can be written to specify removing the parameter and its value from postgresql.auto.conf.
To set wal_level:
To undo that, restoring whatever setting was effective in postgresql.conf:
The ALTER SYSTEM statement is a PostgreSQL extension.
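The statements for these examples, following the stock PostgreSQL documentation:

```sql
ALTER SYSTEM SET wal_level = replica;
ALTER SYSTEM RESET wal_level;
```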
ALTER SUBSCRIPTION — change the definition of a subscription
ALTER SUBSCRIPTION can change most of the subscription properties that can be specified in CREATE SUBSCRIPTION.
You must own the subscription to use ALTER SUBSCRIPTION. To alter the owner, you must also be a direct or indirect member of the new owning role. The new owner has to be a superuser. (Currently, all subscription owners must be superusers, so the owner checks will be bypassed in practice. But this might change in the future.)
name
The name of a subscription whose properties are to be altered.
CONNECTION '
conninfo
'
SET PUBLICATION
publication_name
set_publication_option specifies additional options for this operation. The supported options are:
refresh (boolean)
When false, the command will not try to refresh table information. REFRESH PUBLICATION should then be executed separately. The default is true.
Additionally, refresh options as described under REFRESH PUBLICATION may be specified.
REFRESH PUBLICATION
Fetch missing table information from the publisher. This will start replication of tables that were added to the subscribed-to publications since the last invocation of REFRESH PUBLICATION or since CREATE SUBSCRIPTION.
refresh_option specifies additional options for the refresh operation. The supported options are:
copy_data (boolean)
Specifies whether the existing data in the publications that are being subscribed to should be copied once the replication starts. The default is true.
ENABLE
Enables the previously disabled subscription, starting the logical replication worker at the end of the transaction.
DISABLE
Disables the running subscription, stopping the logical replication worker at the end of the transaction.
SET (
subscription_parameter
[= value
] [, ... ] )
new_owner
The user name of the new owner of the subscription.
new_name
The new name for the subscription.
Change the publication subscribed by a subscription to insert_only:
Disable (stop) the subscription:
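The statements for these examples, following the stock PostgreSQL documentation (mysub is the documentation's example subscription name):

```sql
ALTER SUBSCRIPTION mysub SET PUBLICATION insert_only;
ALTER SUBSCRIPTION mysub DISABLE;
```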
ALTER SUBSCRIPTION is a PostgreSQL extension.
ALTER SCHEMA — change the definition of a schema
ALTER SCHEMA
changes the definition of a schema.
You must own the schema to use ALTER SCHEMA
. To rename a schema you must also have the CREATE
privilege for the database. To alter the owner, you must also be a direct or indirect member of the new owning role, and you must have the CREATE
privilege for the database. (Note that superusers have all these privileges automatically.)
name
The name of an existing schema.
new_name
The new name of the schema. The new name cannot begin with pg_
, as such names are reserved for system schemas.
new_owner
The new owner of the schema.
There is no ALTER SCHEMA
statement in the SQL standard.
This part contains reference information for the SQL commands supported by PostgreSQL. By "SQL" the language in general is meant; information about the standards conformance and compatibility of each command is included.
The links here lead to the official PostgreSQL manual; for this book's pages, please use the table of contents on the left.
Table of Contents
ABORT — abort the current transaction
ALTER AGGREGATE — change the definition of an aggregate function
ALTER COLLATION — change the definition of a collation
ALTER CONVERSION — change the definition of a conversion
ALTER DATABASE — change a database
ALTER DEFAULT PRIVILEGES — define default access privileges
ALTER DOMAIN — change the definition of a domain
ALTER EVENT TRIGGER — change the definition of an event trigger
ALTER EXTENSION — change the definition of an extension
ALTER FOREIGN DATA WRAPPER — change the definition of a foreign-data wrapper
ALTER FOREIGN TABLE — change the definition of a foreign table
ALTER FUNCTION — change the definition of a function
ALTER GROUP — change role name or membership
ALTER INDEX — change the definition of an index
The statistic-gathering target for this statistics object for subsequent operations. The target can be set in the range 0 to 10000; alternatively, set it to -1 to revert to using the maximum of the statistics target of the referenced columns, if set, or the system default statistics target (default_statistics_target). For more information on the use of statistics by the PostgreSQL query planner, refer to the planner statistics documentation.
See SET and Chapter 19 for more information about allowed parameter names and values.
It is also possible to tie a session default to a specific role rather than to a database; see ALTER ROLE. Role-specific settings override database-specific ones if there is a conflict.
This command cannot be used to set data_directory, nor parameters that are not allowed in postgresql.conf (e.g., preset options).
See Section 19.1 for other ways to set the parameters.
This clause replaces the connection string originally set by CREATE SUBSCRIPTION. See there for more information.
Changes the list of publications subscribed to. See CREATE SUBSCRIPTION for more information. By default, this command will also act like REFRESH PUBLICATION.
This clause alters parameters originally set by CREATE SUBSCRIPTION. See there for more information. The allowed options are slot_name and synchronous_commit.
ALTER LANGUAGE — change the definition of a procedural language
ALTER LARGE OBJECT — change the definition of a large object
ALTER MATERIALIZED VIEW — change the definition of a materialized view
ALTER OPERATOR — change the definition of an operator
ALTER OPERATOR CLASS — change the definition of an operator class
ALTER OPERATOR FAMILY — change the definition of an operator family
ALTER POLICY — change the definition of a row-level security policy
ALTER PROCEDURE — change the definition of a procedure
ALTER PUBLICATION — change the definition of a publication
ALTER ROLE — change a database role
ALTER ROUTINE — change the definition of a routine
ALTER RULE — change the definition of a rule
ALTER SCHEMA — change the definition of a schema
ALTER SEQUENCE — change the definition of a sequence generator
ALTER SERVER — change the definition of a foreign server
ALTER STATISTICS — change the definition of an extended statistics object
ALTER SUBSCRIPTION — change the definition of a subscription
ALTER SYSTEM — change a server configuration parameter
ALTER TABLE — change the definition of a table
ALTER TABLESPACE — change the definition of a tablespace
ALTER TEXT SEARCH CONFIGURATION — change the definition of a text search configuration
ALTER TEXT SEARCH DICTIONARY — change the definition of a text search dictionary
ALTER TEXT SEARCH PARSER — change the definition of a text search parser
ALTER TEXT SEARCH TEMPLATE — change the definition of a text search template
ALTER TRIGGER — change the definition of a trigger
ALTER TYPE — change the definition of a type
ALTER USER — change a database role
ALTER USER MAPPING — change the definition of a user mapping
ALTER VIEW — change the definition of a view
ANALYZE — collect statistics about a database
BEGIN — start a transaction block
CALL — invoke a procedure
CHECKPOINT — force a write-ahead log checkpoint
CLOSE — close a cursor
CLUSTER — cluster a table according to an index
COMMENT — define or change the comment of an object
COMMIT — commit the current transaction
COMMIT PREPARED — commit a transaction that was earlier prepared for two-phase commit
COPY — copy data between a file and a table
CREATE ACCESS METHOD — define a new access method
CREATE AGGREGATE — define a new aggregate function
CREATE CAST — define a new cast
CREATE COLLATION — define a new collation
CREATE CONVERSION — define a new encoding conversion
CREATE DATABASE — create a new database
CREATE DOMAIN — define a new domain
CREATE EVENT TRIGGER — define a new event trigger
CREATE EXTENSION — install an extension
CREATE FOREIGN DATA WRAPPER — define a new foreign-data wrapper
CREATE FOREIGN TABLE — define a new foreign table
CREATE FUNCTION — define a new function
CREATE GROUP — define a new database role
CREATE INDEX — define a new index
CREATE LANGUAGE — define a new procedural language
CREATE MATERIALIZED VIEW — define a new materialized view
CREATE OPERATOR — define a new operator
CREATE OPERATOR CLASS — define a new operator class
CREATE OPERATOR FAMILY — define a new operator family
CREATE POLICY — define a new row-level security policy for a table
CREATE PROCEDURE — define a new procedure
CREATE PUBLICATION — define a new publication
CREATE ROLE — define a new database role
CREATE RULE — define a new rewrite rule
CREATE SCHEMA — define a new schema
CREATE SEQUENCE — define a new sequence generator
CREATE SERVER — define a new foreign server
CREATE STATISTICS — define extended statistics
CREATE SUBSCRIPTION — define a new subscription
CREATE TABLE — define a new table
CREATE TABLE AS — define a new table from the results of a query
CREATE TABLESPACE — define a new tablespace
CREATE TEXT SEARCH CONFIGURATION — define a new text search configuration
CREATE TEXT SEARCH DICTIONARY — define a new text search dictionary
CREATE TEXT SEARCH PARSER — define a new text search parser
CREATE TEXT SEARCH TEMPLATE — define a new text search template
CREATE TRANSFORM — define a new transform
CREATE TRIGGER — define a new trigger
CREATE TYPE — define a new data type
CREATE USER — define a new database role
CREATE USER MAPPING — define a new mapping of a user to a foreign server
CREATE VIEW — define a new view
DEALLOCATE — deallocate a prepared statement
DECLARE — define a cursor
DELETE — delete rows of a table
DISCARD — discard session state
DO — execute an anonymous code block
DROP ACCESS METHOD — remove an access method
DROP AGGREGATE — remove an aggregate function
DROP CAST — remove a cast
DROP COLLATION — remove a collation
DROP CONVERSION — remove a conversion
DROP DATABASE — remove a database
DROP DOMAIN — remove a domain
DROP EVENT TRIGGER — remove an event trigger
DROP EXTENSION — remove an extension
DROP FOREIGN DATA WRAPPER — remove a foreign-data wrapper
DROP FOREIGN TABLE — remove a foreign table
DROP FUNCTION — remove a function
DROP GROUP — remove a database role
DROP INDEX — remove an index
DROP LANGUAGE — remove a procedural language
DROP MATERIALIZED VIEW — remove a materialized view
DROP OPERATOR — remove an operator
DROP OPERATOR CLASS — remove an operator class
DROP OPERATOR FAMILY — remove an operator family
DROP OWNED — remove database objects owned by a database role
DROP POLICY — remove a row-level security policy from a table
DROP PROCEDURE — remove a procedure
DROP PUBLICATION — remove a publication
DROP ROLE — remove a database role
DROP ROUTINE — remove a routine
DROP RULE — remove a rewrite rule
DROP SCHEMA — remove a schema
DROP SEQUENCE — remove a sequence
DROP SERVER — remove a foreign server descriptor
DROP STATISTICS — remove extended statistics
DROP SUBSCRIPTION — remove a subscription
DROP TABLE — remove a table
DROP TABLESPACE — remove a tablespace
DROP TEXT SEARCH CONFIGURATION — remove a text search configuration
DROP TEXT SEARCH DICTIONARY — remove a text search dictionary
DROP TEXT SEARCH PARSER — remove a text search parser
DROP TEXT SEARCH TEMPLATE — remove a text search template
DROP TRANSFORM — remove a transform
DROP TRIGGER — remove a trigger
DROP TYPE — remove a data type
DROP USER — remove a database role
DROP USER MAPPING — remove a user mapping for a foreign server
DROP VIEW — remove a view
END — commit the current transaction
EXECUTE — execute a prepared statement
EXPLAIN — show the execution plan of a statement
FETCH — retrieve rows from a query using a cursor
GRANT — define access privileges
IMPORT FOREIGN SCHEMA — import table definitions from a foreign server
INSERT — create new rows in a table
LISTEN — listen for a notification
LOAD — load a shared library file
LOCK — lock a table
MERGE — conditionally insert, update, or delete rows of a table
MOVE — position a cursor
NOTIFY — generate a notification
PREPARE — prepare a statement for execution
PREPARE TRANSACTION — prepare the current transaction for two-phase commit
REASSIGN OWNED — change the ownership of database objects owned by a database role
REFRESH MATERIALIZED VIEW — replace the contents of a materialized view
REINDEX — rebuild indexes
RELEASE SAVEPOINT — destroy a previously defined savepoint
RESET — restore the value of a run-time parameter to the default value
REVOKE — remove access privileges
ROLLBACK — abort the current transaction
ROLLBACK PREPARED — cancel a transaction that was earlier prepared for two-phase commit
ROLLBACK TO SAVEPOINT — roll back to a savepoint
SAVEPOINT — define a new savepoint within the current transaction
SECURITY LABEL — define or change a security label applied to an object
SELECT — retrieve rows from a table or view
SELECT INTO — define a new table from the results of a query
SET — change a run-time parameter
SET CONSTRAINTS — set constraint check timing for the current transaction
SET ROLE — set the current user identifier of the current session
SET SESSION AUTHORIZATION — set the session user identifier and the current user identifier of the current session
SET TRANSACTION — set the characteristics of the current transaction
SHOW — show the value of a run-time parameter
START TRANSACTION — start a transaction block
TRUNCATE — empty a table or set of tables
UNLISTEN — stop listening for a notification
UPDATE — update rows of a table
VACUUM — garbage-collect and optionally analyze a database
VALUES — compute a set of rows
ALTER TRIGGER — change the definition of a trigger
ALTER TRIGGER
changes properties of an existing trigger. The RENAME
clause changes the name of the given trigger without otherwise changing the trigger definition. The DEPENDS ON EXTENSION
clause marks the trigger as dependent on an extension, such that if the extension is dropped, the trigger will automatically be dropped as well.
You must own the table on which the trigger acts to be allowed to change its properties.
name
The name of an existing trigger to alter.
table_name
The name of the table on which this trigger acts.
new_name
The new name for the trigger.
extension_name
The name of the extension that the trigger is to depend on.
The ability to temporarily enable or disable a trigger is provided by ALTER TABLE, not by ALTER TRIGGER
, because ALTER TRIGGER
has no convenient way to express the option of enabling or disabling all of a table's triggers at once.
To rename an existing trigger:
To mark a trigger as being dependent on an extension:
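The statements for these examples, following the stock PostgreSQL documentation (emp, emp_stamp, and emplib are the documentation's example names):

```sql
ALTER TRIGGER emp_stamp ON emp RENAME TO emp_track_chgs;
ALTER TRIGGER emp_stamp ON emp DEPENDS ON EXTENSION emplib;
```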
ALTER TRIGGER
is a PostgreSQL extension of the SQL standard.
ANALYZE — collect statistics about a database
ANALYZE collects statistics about the contents of tables in the database, and stores the results in the pg_statistic system catalog. Subsequently, the query planner uses these statistics to help determine the most efficient execution plans for queries.
Without a parameter, ANALYZE examines every table in the current database. With a parameter, ANALYZE examines only that table. It is further possible to give a list of column names, in which case only the statistics for those columns are collected.
VERBOSE
Enables display of progress messages.
table_name
The name (possibly schema-qualified) of a specific table to analyze. If omitted, all regular tables, partitioned tables, and materialized views in the current database are analyzed (but not foreign tables). If the specified table is a partitioned table, both the inheritance statistics of the partitioned table and statistics of the individual partitions are updated.
column_name
The name of a specific column to analyze. Defaults to all columns.
When VERBOSE is specified, ANALYZE emits progress messages to indicate which table is currently being processed. Various statistics about the tables are printed as well.
Foreign tables are analyzed only when explicitly selected. Not all foreign data wrappers support ANALYZE. If the table's wrapper does not support ANALYZE, the command prints a warning and does nothing.
In the default PostgreSQL configuration, the autovacuum daemon (see Section 24.1.6) takes care of automatic analyzing of tables when they are first loaded with data, and as they change throughout regular operation. When autovacuum is disabled, it is a good idea to run ANALYZE periodically, or just after making major changes in the contents of a table. Accurate statistics will help the planner to choose the most appropriate query plan, and thereby improve the speed of query processing. A common strategy for read-mostly databases is to run VACUUM and ANALYZE once a day during a low-usage time of day. (This will not be sufficient if there is heavy update activity.)
ANALYZE requires only a read lock on the target table, so it can run in parallel with other activity on the table.
The statistics collected by ANALYZE usually include a list of some of the most common values in each column and a histogram showing the approximate data distribution in each column. One or both of these can be omitted if ANALYZE deems them uninteresting (for example, in a unique-key column, there are no common values) or if the column data type does not support the appropriate operators. There is more information about the statistics in Chapter 24.
For large tables, ANALYZE takes a random sample of the table contents, rather than examining every row. This allows even very large tables to be analyzed in a small amount of time. Note, however, that the statistics are only approximate, and will change slightly each time ANALYZE is run, even if the actual table contents did not change. This might result in small changes in the planner's estimated costs shown by EXPLAIN. In rare situations, this non-determinism will cause the planner's choices of query plans to change after ANALYZE is run. To avoid this, raise the amount of statistics collected by ANALYZE, as described below.
The extent of analysis can be controlled by adjusting the default_statistics_target configuration variable, or on a column-by-column basis by setting the per-column statistics target with ALTER TABLE ... ALTER COLUMN ... SET STATISTICS (see ALTER TABLE). The target value sets the maximum number of entries in the most-common-value list and the maximum number of bins in the histogram. The default target value is 100, but this can be adjusted up or down to trade off accuracy of planner estimates against the time taken for ANALYZE and the amount of space occupied in pg_statistic. In particular, setting the statistics target to zero disables collection of statistics for that column. It might be useful to do that for columns that are never used as part of the WHERE, GROUP BY, or ORDER BY clauses of queries, since the planner will have no use for statistics on such columns.
The largest statistics target among the columns being analyzed determines the number of table rows sampled to prepare the statistics. Increasing the target causes a proportional increase in the time and space needed to do ANALYZE.
One of the values estimated by ANALYZE is the number of distinct values that appear in each column. Because only a subset of the rows are examined, this estimate can sometimes be quite inaccurate, even with the largest possible statistics target. If this inaccuracy leads to bad query plans, a more accurate value can be determined manually and then installed with ALTER TABLE ... ALTER COLUMN ... SET (n_distinct = ...) (see ALTER TABLE).
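The per-column tuning just described can be sketched as follows (the table and column names, and the target values, are hypothetical):

```sql
-- Raise the statistics target for one column before the next ANALYZE
ALTER TABLE tenk1 ALTER COLUMN unique1 SET STATISTICS 500;

-- Manually override the planner's distinct-value estimate for that column
ALTER TABLE tenk1 ALTER COLUMN unique1 SET (n_distinct = 1000);
```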
If the table being analyzed has one or more children, ANALYZE will gather statistics twice: once on the rows of the parent table only, and a second time on the rows of the parent table with all of its children. This second set of statistics is needed when planning queries that traverse the entire inheritance tree. The autovacuum daemon, however, will only consider inserts or updates on the parent table itself when deciding whether to trigger an automatic analyze for that table. If that table is rarely inserted into or updated, the inheritance statistics will not be up to date unless you run ANALYZE manually.
If any of the child tables are foreign tables whose foreign data wrappers do not support ANALYZE, those child tables are ignored while gathering inheritance statistics.
If the table being analyzed is completely empty, ANALYZE will not record new statistics for that table. Any existing statistics will be retained.
There is no ANALYZE statement in the SQL standard.
ALTER TABLESPACE — change the definition of a tablespace
ALTER TABLESPACE can be used to change the definition of a tablespace.
You must own the tablespace to change the definition of a tablespace. To alter the owner, you must also be a direct or indirect member of the new owning role. (Note that superusers have these privileges automatically.)
name
The name of an existing tablespace.
new_name
The new name of the tablespace. The new name cannot begin with pg_, as such names are reserved for system tablespaces.
new_owner
The new owner of the tablespace.
tablespace_option
A tablespace parameter to be set or reset. Currently, the only available parameters are seq_page_cost, random_page_cost, and effective_io_concurrency. Setting either value for a particular tablespace will override the planner's usual estimate of the cost of reading pages from tables in that tablespace, as established by the configuration parameters of the same name (see seq_page_cost, random_page_cost, effective_io_concurrency). This may be useful if one tablespace is located on a disk which is faster or slower than the remainder of the I/O subsystem.
To rename tablespace index_space to fast_raid:
To change the owner of tablespace index_space:
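The statements for these examples, following the stock PostgreSQL documentation (mary is the documentation's example role):

```sql
ALTER TABLESPACE index_space RENAME TO fast_raid;
ALTER TABLESPACE index_space OWNER TO mary;
```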
There is no ALTER TABLESPACE statement in the SQL standard.
ALTER TYPE — change the definition of a type
ALTER TYPE
changes the definition of an existing type. There are several subforms:
ADD ATTRIBUTE
This form adds a new attribute to a composite type, using the same syntax as CREATE TYPE.
DROP ATTRIBUTE [ IF EXISTS ]
This form drops an attribute from a composite type. If IF EXISTS is specified and the attribute does not exist, no error is thrown. In this case a notice is issued instead.
SET DATA TYPE
This form changes the type of an attribute of a composite type.
OWNER
This form changes the owner of the type.
RENAME
This form changes the name of the type or the name of an individual attribute of a composite type.
SET SCHEMA
This form moves the type into another schema.
ADD VALUE [ IF NOT EXISTS ] [ BEFORE | AFTER ]
This form adds a new value to an enum type. The new value's place in the enum's ordering can be specified as being BEFORE or AFTER one of the existing values. Otherwise, the new item is added at the end of the list of values.
If IF NOT EXISTS is specified, it is not an error if the type already contains the new value: a notice is issued but no other action is taken. Otherwise, an error will occur if the new value is already present.
RENAME VALUE
This form renames a value of an enum type. The value's place in the enum's ordering is not affected. An error will occur if the specified value is not present or the new name is already present.
The ADD ATTRIBUTE
, DROP ATTRIBUTE
, and ALTER ATTRIBUTE
actions can be combined into a list of multiple alterations to apply in parallel. For example, it is possible to add several attributes and/or alter the type of several attributes in a single command.
You must own the type to use ALTER TYPE
. To change the schema of a type, you must also have CREATE
privilege on the new schema. To alter the owner, you must also be a direct or indirect member of the new owning role, and that role must have CREATE
privilege on the type's schema. (These restrictions enforce that altering the owner doesn't do anything you couldn't do by dropping and recreating the type. However, a superuser can alter ownership of any type anyway.) To add an attribute or alter an attribute type, you must also have USAGE
privilege on the data type.
name
The name (possibly schema-qualified) of an existing type to alter.
new_name
The new name for the type.
new_owner
The user name of the new owner of the type.
new_schema
The new schema for the type.
attribute_name
The name of the attribute to add, alter, or drop.
new_attribute_name
The new name of the attribute to be renamed.
data_type
The data type of the attribute to add, or the new type of the attribute to alter.
new_enum_value
The new value to be added to an enum type's list of values, or the new name to be given to an existing value. Like all enum literals, it needs to be quoted.
neighbor_enum_value
The existing enum value that the new value should be added immediately before or after in the enum type's sort ordering. Like all enum literals, it needs to be quoted.
existing_enum_value
The existing enum value that should be renamed. Like all enum literals, it needs to be quoted.
CASCADE
Automatically propagate the operation to typed tables of the type being altered, and their descendants.
RESTRICT
Refuse the operation if the type being altered is the type of a typed table. This is the default.
ALTER TYPE ... ADD VALUE (the form that adds a new value to an enum type) cannot be executed inside a transaction block.
Comparisons involving an added enum value will sometimes be slower than comparisons involving only original members of the enum type. This will usually only occur if BEFORE or AFTER is used to set the new value's sort position somewhere other than at the end of the list. However, sometimes it will happen even though the new value is added at the end (this occurs if the OID counter "wrapped around" since the original creation of the enum type). The slowdown is usually insignificant; but if it matters, optimal performance can be regained by dropping and recreating the enum type, or by dumping and reloading the database.
To rename a data type:
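For example (the type names are illustrative):

```sql
ALTER TYPE electronic_mail RENAME TO email;
```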
To change the owner of the type email to joe:
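For example:

```sql
ALTER TYPE email OWNER TO joe;
```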
To change the schema of the type email to customers:
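For example:

```sql
ALTER TYPE email SET SCHEMA customers;
```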
To add a new attribute to a type:
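For example (the composite type and attribute names are illustrative):

```sql
ALTER TYPE compfoo ADD ATTRIBUTE f3 int;
```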
To add a new value to an enum type in a particular sort position:
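For example (the enum type and its labels are illustrative):

```sql
ALTER TYPE colors ADD VALUE 'orange' AFTER 'red';
```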
To rename an enum value:
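For example (the enum type and its labels are illustrative):

```sql
ALTER TYPE colors RENAME VALUE 'purple' TO 'mauve';
```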
The variants to add and drop attributes are part of the SQL standard; the other variants are PostgreSQL extensions.
ALTER VIEW — change the definition of a view
ALTER VIEW changes various auxiliary properties of a view. (If you want to modify the view's defining query, use CREATE OR REPLACE VIEW.)
You must own the view to use ALTER VIEW. To change a view's schema, you must also have CREATE privilege on the new schema. To alter the owner, you must also be a direct or indirect member of the new owning role, and that role must have CREATE privilege on the view's schema. (These restrictions enforce that altering the owner doesn't do anything you couldn't do by dropping and recreating the view. However, a superuser can alter ownership of any view anyway.)
name
The name (optionally schema-qualified) of an existing view.
IF EXISTS
Do not throw an error if the view does not exist. A notice is issued in this case.
SET/DROP DEFAULT
These forms set or remove the default value for a column. A view column's default value is substituted into any INSERT or UPDATE command whose target is the view, before applying any rules or triggers for the view. The view's default will therefore take precedence over any default values from underlying relations.
new_owner
The user name of the new owner of the view.
new_name
The new name for the view.
new_schema
The new schema for the view.
SET ( view_option_name [= view_option_value] [, ... ] )
RESET ( view_option_name [, ... ] )
Sets or resets a view option. Currently supported options are:
check_option (string)
Changes the check option of the view. The value must be local or cascaded.
security_barrier (boolean)
Changes the security-barrier property of the view. The value must be a Boolean value, such as true or false.
security_invoker (boolean)
Changes the security-invoker property of the view. The value must be a Boolean value, such as true or false.
For historical reasons, ALTER TABLE can be used with views too; but the only variants of ALTER TABLE that are allowed with views are equivalent to the ones shown above.
To rename the view foo to bar:
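For example:

```sql
ALTER VIEW foo RENAME TO bar;
```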
To attach a default column value to an updatable view:
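A minimal sketch, with illustrative table and view names:

```sql
CREATE TABLE base_table (id int, ts timestamptz);
CREATE VIEW a_view AS SELECT * FROM base_table;
-- inserts through the view now fill ts with the current time
ALTER VIEW a_view ALTER COLUMN ts SET DEFAULT now();
```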
ALTER VIEW is a PostgreSQL extension of the SQL standard.
ALTER TABLE — change the definition of a table
ALTER TABLE changes the definition of an existing table. There are several subforms described below. Note that the lock level required may differ for each subform. An ACCESS EXCLUSIVE lock is held unless explicitly noted. When multiple subcommands are listed, the lock held will be the strictest one required from any subcommand.
ADD COLUMN [ IF NOT EXISTS ]
This form adds a new column to the table, using the same syntax as CREATE TABLE. If IF NOT EXISTS is specified and a column already exists with this name, no error is thrown.
DROP COLUMN [ IF EXISTS ]
This form drops a column from a table. Indexes and table constraints involving the column will be automatically dropped as well. Multivariate statistics referencing the dropped column will also be removed if the removal of the column would cause the statistics to contain data for only a single column. You will need to say CASCADE if anything outside the table depends on the column, for example, foreign key references or views. If IF EXISTS is specified and the column does not exist, no error is thrown. In this case a notice is issued instead.
SET DATA TYPE
This form changes the type of a column of a table. Indexes and simple table constraints involving the column will be automatically converted to use the new column type by reparsing the originally supplied expression. The optional COLLATE clause specifies a collation for the new column; if omitted, the collation is the default for the new column type. The optional USING clause specifies how to compute the new column value from the old; if omitted, the default conversion is the same as an assignment cast from old data type to new. A USING clause must be provided if there is no implicit or assignment cast from old to new type.
SET/DROP DEFAULT
These forms set or remove the default value for a column. Default values only apply in subsequent INSERT or UPDATE commands; they do not cause rows already in the table to change.
SET/DROP NOT NULL
These forms change whether a column is marked to allow null values or to reject null values. You can only use SET NOT NULL when the column contains no null values.
If this table is a partition, one cannot perform DROP NOT NULL on a column if it is marked NOT NULL in the parent table. To drop the NOT NULL constraint from all the partitions, perform DROP NOT NULL on the parent table. Even if there is no NOT NULL constraint on the parent, such a constraint can still be added to individual partitions, if desired; that is, the children can disallow nulls even if the parent allows them, but not the other way around.
ADD GENERATED { ALWAYS | BY DEFAULT } AS IDENTITY
SET GENERATED { ALWAYS | BY DEFAULT }
DROP IDENTITY [ IF EXISTS ]
These forms change whether a column is an identity column or change the generation attribute of an existing identity column. See CREATE TABLE for details.
If DROP IDENTITY IF EXISTS is specified and the column is not an identity column, no error is thrown. In this case a notice is issued instead.
SET sequence_option
RESTART
These forms alter the sequence that underlies an existing identity column. sequence_option is an option supported by ALTER SEQUENCE such as INCREMENT BY.
SET STATISTICS
This form sets the per-column statistics-gathering target for subsequent ANALYZE operations. The target can be set in the range 0 to 10000; alternatively, set it to -1 to revert to using the system default statistics target (default_statistics_target). For more information on the use of statistics by the PostgreSQL query planner, refer to Section 14.2.
SET STATISTICS acquires a SHARE UPDATE EXCLUSIVE lock.
SET ( attribute_option = value [, ... ] )
RESET ( attribute_option [, ... ] )
This form sets or resets per-attribute options. Currently, the only defined per-attribute options are n_distinct and n_distinct_inherited, which override the number-of-distinct-values estimates made by subsequent ANALYZE operations. n_distinct affects the statistics for the table itself, while n_distinct_inherited affects the statistics gathered for the table plus its inheritance children. When set to a positive value, ANALYZE will assume that the column contains exactly the specified number of distinct nonnull values. When set to a negative value, which must be greater than or equal to -1, ANALYZE will assume that the number of distinct nonnull values in the column is linear in the size of the table; the exact count is to be computed by multiplying the estimated table size by the absolute value of the given number. For example, a value of -1 implies that all values in the column are distinct, while a value of -0.5 implies that each value appears twice on the average. This can be useful when the size of the table changes over time, since the multiplication by the number of rows in the table is not performed until query planning time. Specify a value of 0 to revert to estimating the number of distinct values normally. For more information on the use of statistics by the PostgreSQL query planner, refer to Section 14.2.
Changing per-attribute options acquires a SHARE UPDATE EXCLUSIVE lock.
SET STORAGE
This form sets the storage mode for a column. This controls whether this column is held inline or in a secondary TOAST table, and whether the data should be compressed or not. PLAIN must be used for fixed-length values such as integer and is inline, uncompressed. MAIN is for inline, compressible data. EXTERNAL is for external, uncompressed data, and EXTENDED is for external, compressed data. EXTENDED is the default for most data types that support non-PLAIN storage. Use of EXTERNAL will make substring operations on very large text and bytea values run faster, at the penalty of increased storage space. Note that SET STORAGE doesn't itself change anything in the table; it just sets the strategy to be pursued during future table updates. See Section 68.2 for more information.
ADD table_constraint [ NOT VALID ]
This form adds a new constraint to a table using the same syntax as CREATE TABLE, plus the option NOT VALID, which is currently only allowed for foreign key and CHECK constraints. If the constraint is marked NOT VALID, the potentially-lengthy initial check to verify that all rows in the table satisfy the constraint is skipped. The constraint will still be enforced against subsequent inserts or updates (that is, they'll fail unless there is a matching row in the referenced table, in the case of foreign keys; and they'll fail unless the new row matches the specified check). But the database will not assume that the constraint holds for all rows in the table, until it is validated by using the VALIDATE CONSTRAINT option.
ADD table_constraint_using_index
This form adds a new PRIMARY KEY or UNIQUE constraint to a table based on an existing unique index. All the columns of the index will be included in the constraint.
The index cannot have expression columns nor be a partial index. Also, it must be a b-tree index with default sort ordering. These restrictions ensure that the index is equivalent to one that would be built by a regular ADD PRIMARY KEY or ADD UNIQUE command.
If PRIMARY KEY is specified, and the index's columns are not already marked NOT NULL, then this command will attempt to do ALTER COLUMN SET NOT NULL against each such column. That requires a full table scan to verify the column(s) contain no nulls. In all other cases, this is a fast operation.
If a constraint name is provided then the index will be renamed to match the constraint name. Otherwise the constraint will be named the same as the index.
After this command is executed, the index is "owned" by the constraint, in the same way as if the index had been built by a regular ADD PRIMARY KEY or ADD UNIQUE command. In particular, dropping the constraint will make the index disappear too.
Adding a constraint using an existing index can be helpful in situations where a new constraint needs to be added without blocking table updates for a long time. To do that, create the index using CREATE INDEX CONCURRENTLY, and then install it as an official constraint using this syntax. See the example below.
ALTER CONSTRAINT
This form alters the attributes of a constraint that was previously created. Currently only foreign key constraints may be altered.
VALIDATE CONSTRAINT
This form validates a foreign key or check constraint that was previously created as NOT VALID, by scanning the table to verify that there are no rows for which the constraint is not satisfied. Nothing happens if the constraint is already marked valid.
Validation can be a long process on larger tables. The value of separating validation from initial creation is that you can defer validation to less busy times, or can be used to give additional time to correct pre-existing errors while preventing new errors. Note also that validation on its own does not prevent normal write commands against the table while it runs.
Validation acquires only a SHARE UPDATE EXCLUSIVE lock on the table being altered. If the constraint is a foreign key then a ROW SHARE lock is also required on the table referenced by the constraint.
DROP CONSTRAINT [ IF EXISTS ]
This form drops the specified constraint on a table. If IF EXISTS is specified and the constraint does not exist, no error is thrown. In this case a notice is issued instead.
DISABLE/ENABLE [ REPLICA | ALWAYS ] TRIGGER
These forms configure the firing of trigger(s) belonging to the table. A disabled trigger is still known to the system, but is not executed when its triggering event occurs. For a deferred trigger, the enable status is checked when the event occurs, not when the trigger function is actually executed. One can disable or enable a single trigger specified by name, or all triggers on the table, or only user triggers (this option excludes internally generated constraint triggers, such as those that are used to implement foreign key constraints or deferrable uniqueness and exclusion constraints). Disabling or enabling internally generated constraint triggers requires superuser privileges; it should be done with caution since the integrity of the constraint cannot be guaranteed if the triggers are not executed. The trigger firing mechanism is also affected by the configuration variable session_replication_role. Simply enabled triggers will fire when the replication role is "origin" (the default) or "local". Triggers configured as ENABLE REPLICA will only fire if the session is in "replica" mode, and triggers configured as ENABLE ALWAYS will fire regardless of the current replication mode.
This command acquires a SHARE ROW EXCLUSIVE lock.
DISABLE/ENABLE [ REPLICA | ALWAYS ] RULE
These forms configure the firing of rewrite rules belonging to the table. A disabled rule is still known to the system, but is not applied during query rewriting. The semantics are as for disabled/enabled triggers. This configuration is ignored for ON SELECT rules, which are always applied in order to keep views working even if the current session is in a non-default replication role.
DISABLE/ENABLE ROW LEVEL SECURITY
These forms control the application of row security policies belonging to the table. If enabled and no policies exist for the table, then a default-deny policy is applied. Note that policies can exist for a table even if row level security is disabled; in this case, the policies will not be applied and will be ignored. See also CREATE POLICY.
NO FORCE/FORCE ROW LEVEL SECURITY
These forms control the application of row security policies belonging to the table when the user is the table owner. If enabled, row level security policies will be applied when the user is the table owner. If disabled (the default) then row level security will not be applied when the user is the table owner. See also CREATE POLICY.
CLUSTER ON
This form selects the default index for future CLUSTER operations. It does not actually re-cluster the table.
Changing cluster options acquires a SHARE UPDATE EXCLUSIVE lock.
SET WITHOUT CLUSTER
This form removes the most recently used CLUSTER index specification from the table. This affects future cluster operations that don't specify an index.
Changing cluster options acquires a SHARE UPDATE EXCLUSIVE lock.
SET WITH OIDS
This form adds an oid system column to the table (see Section 5.4). It does nothing if the table already has OIDs.
Note that this is not equivalent to ADD COLUMN oid oid; that would add a normal column that happened to be named oid, not a system column.
SET WITHOUT OIDS
This form removes the oid system column from the table. This is exactly equivalent to DROP COLUMN oid RESTRICT, except that it will not complain if there is no oid column.
SET TABLESPACE
This form changes the table's tablespace to the specified tablespace and moves the data file(s) associated with the table to the new tablespace. Indexes on the table, if any, are not moved; but they can be moved separately with additional SET TABLESPACE commands. All tables in the current database in a tablespace can be moved by using the ALL IN TABLESPACE form, which will lock all tables to be moved first and then move each one. This form also supports OWNED BY, which will only move tables owned by the roles specified. If the NOWAIT option is specified then the command will fail if it is unable to acquire all of the locks required immediately. Note that system catalogs are not moved by this command; use ALTER DATABASE or explicit ALTER TABLE invocations instead if desired. The information_schema relations are not considered part of the system catalogs and will be moved. See also CREATE TABLESPACE.
SET { LOGGED | UNLOGGED }
This form changes the table from unlogged to logged or vice-versa (see UNLOGGED). It cannot be applied to a temporary table.
SET ( storage_parameter = value [, ... ] )
This form changes one or more storage parameters for the table. See the storage parameter options for details on the available parameters. Note that the table contents will not be modified immediately by this command; depending on the parameter you might need to rewrite the table to get the desired effects. That can be done with VACUUM FULL, CLUSTER, or one of the forms of ALTER TABLE that forces a table rewrite. For planner related parameters, changes will take effect from the next time the table is locked, so currently executing queries will not be affected.
A SHARE UPDATE EXCLUSIVE lock will be taken for the fillfactor and autovacuum storage parameters, as well as the following planner related parameters: effective_io_concurrency, parallel_workers, seq_page_cost, random_page_cost, n_distinct and n_distinct_inherited.
While CREATE TABLE allows OIDS to be specified in the WITH (storage_parameter) syntax, ALTER TABLE does not treat OIDS as a storage parameter. Instead use the SET WITH OIDS and SET WITHOUT OIDS forms to change OID status.
RESET ( storage_parameter [, ... ] )
This form resets one or more storage parameters to their defaults. As with SET, a table rewrite might be needed to update the table entirely.
INHERIT parent_table
This form adds the target table as a new child of the specified parent table. Subsequently, queries against the parent will include records of the target table. To be added as a child, the target table must already contain all the same columns as the parent (it could have additional columns, too). The columns must have matching data types, and if they have NOT NULL constraints in the parent then they must also have NOT NULL constraints in the child.
There must also be matching child-table constraints for all CHECK constraints of the parent, except those marked non-inheritable in the parent (that is, created with ALTER TABLE ... ADD CONSTRAINT ... NO INHERIT), which are ignored; all matching child-table constraints must not be marked non-inheritable. Currently UNIQUE, PRIMARY KEY, and FOREIGN KEY constraints are not considered, but this might change in the future.
NO INHERIT parent_table
This form removes the target table from the list of children of the specified parent table. Queries against the parent table will no longer include records drawn from the target table.
OF type_name
This form links the table to a composite type as though CREATE TABLE OF had formed it. The table's list of column names and types must precisely match that of the composite type; the presence of an oid system column is permitted to differ. The table must not inherit from any other table. These restrictions ensure that CREATE TABLE OF would permit an equivalent table definition.
NOT OF
This form dissociates a typed table from its type.
OWNER
This form changes the owner of the table, sequence, view, materialized view, or foreign table to the specified user.
REPLICA IDENTITY
This form changes the information which is written to the write-ahead log to identify rows which are updated or deleted. This option has no effect except when logical replication is in use. DEFAULT (the default for non-system tables) records the old values of the columns of the primary key, if any. USING INDEX records the old values of the columns covered by the named index, which must be unique, not partial, not deferrable, and include only columns marked NOT NULL. FULL records the old values of all columns in the row. NOTHING records no information about the old row. (This is the default for system tables.) In all cases, no old values are logged unless at least one of the columns that would be logged differs between the old and new versions of the row.
RENAME
The RENAME forms change the name of a table (or an index, sequence, view, materialized view, or foreign table), the name of an individual column in a table, or the name of a constraint of the table. There is no effect on the stored data.
SET SCHEMA
This form moves the table into another schema. Associated indexes, constraints, and sequences are moved as well.
ATTACH PARTITION partition_name FOR VALUES partition_bound_spec
This form attaches an existing table (which might itself be partitioned) as a partition of the target table, using the same syntax for partition_bound_spec as CREATE TABLE. The partition bound specification must correspond to the partitioning strategy and partition key of the target table. The table to be attached must have all the same columns as the target table and no more; moreover, the column types must also match. Also, it must have all the NOT NULL and CHECK constraints of the target table. Currently UNIQUE, PRIMARY KEY, and FOREIGN KEY constraints are not considered. If any of the CHECK constraints of the table being attached is marked NO INHERIT, the command will fail; such a constraint must be recreated without the NO INHERIT clause.
If the new partition is a regular table, a full table scan is performed to check that existing rows in the table do not violate the partition constraint. It is possible to avoid this scan by adding a valid CHECK constraint to the table, before running this command, that allows only the rows satisfying the desired partition constraint. Using such a constraint, the table will not be scanned to validate the partition constraint. This does not work, however, if any of the partition keys is an expression and the partition does not accept NULL values. If attaching a list partition that will not accept NULL values, also add a NOT NULL constraint to the partition key column, unless it's an expression.
If the new partition is a foreign table, nothing is done to verify that all the rows in the foreign table obey the partition constraint. (See the discussion in CREATE FOREIGN TABLE about constraints on foreign tables.)
DETACH PARTITION partition_name
This form detaches the specified partition of the target table. The detached partition continues to exist as a standalone table, but no longer has any ties to the table from which it was detached.
All the forms of ALTER TABLE that act on a single table, except RENAME, SET SCHEMA, ATTACH PARTITION, and DETACH PARTITION, can be combined into a list of multiple alterations to apply in parallel. For example, it is possible to add several columns and/or alter the type of several columns in a single command. This is particularly useful with large tables, since only one pass over the table need be made.
You must own the table to use ALTER TABLE. To change the schema or tablespace of a table, you must also have CREATE privilege on the new schema or tablespace. To add the table as a new child of a parent table, you must own the parent table as well. Also, to attach a table as a new partition of the table, you must own the table being attached. To alter the owner, you must also be a direct or indirect member of the new owning role, and that role must have CREATE privilege on the table's schema. (These restrictions enforce that altering the owner doesn't do anything you couldn't do by dropping and recreating the table. However, a superuser can alter ownership of any table anyway.) To add a column or alter a column type or use the OF clause, you must also have USAGE privilege on the data type.
IF EXISTS
Do not throw an error if the table does not exist. A notice is issued in this case.
name
The name (optionally schema-qualified) of an existing table to alter. If ONLY is specified before the table name, only that table is altered. If ONLY is not specified, the table and all its descendant tables (if any) are altered. Optionally, * can be specified after the table name to explicitly indicate that descendant tables are included.
column_name
Name of a new or existing column.
new_column_name
New name for an existing column.
new_name
New name for the table.
data_type
Data type of the new column, or new data type for an existing column.
table_constraint
New table constraint for the table.
constraint_name
Name of a new or existing constraint.
CASCADE
Automatically drop objects that depend on the dropped column or constraint (for example, views referencing the column), and in turn all objects that depend on those objects (see Section 5.13).
RESTRICT
Refuse to drop the column or constraint if there are any dependent objects. This is the default behavior.
trigger_name
Name of a single trigger to disable or enable.
ALL
Disable or enable all triggers belonging to the table. (This requires superuser privilege if any of the triggers are internally generated constraint triggers, such as those that are used to implement foreign key constraints or deferrable uniqueness and exclusion constraints.)
USER
Disable or enable all triggers belonging to the table except for internally generated constraint triggers, such as those that are used to implement foreign key constraints or deferrable uniqueness and exclusion constraints.
index_name
The name of an existing index.
storage_parameter
The name of a table storage parameter.
value
The new value for a table storage parameter. This might be a number or a word depending on the parameter.
parent_table
A parent table to associate or de-associate with this table.
new_owner
The user name of the new owner of the table.
new_tablespace
The name of the tablespace to which the table will be moved.
new_schema
The name of the schema to which the table will be moved.
partition_name
The name of the table to attach as a new partition or to detach from this table.
partition_bound_spec
The partition bound specification for a new partition. Refer to CREATE TABLE for more details on the syntax.
The key word COLUMN can be omitted.
When a column is added with ADD COLUMN, all existing rows in the table are initialized with the column's default value (NULL if no DEFAULT clause is specified). If there is no DEFAULT clause, this is merely a metadata change and does not require any immediate update of the table's data; the added NULL values are supplied on readout, instead.
Adding a column with a DEFAULT clause or changing the type of an existing column will require the entire table and its indexes to be rewritten. As an exception, when changing the type of an existing column, if the USING clause does not change the column contents and the old type is either binary coercible to the new type or an unconstrained domain over the new type, a table rewrite is not needed; but any indexes on the affected columns must still be rebuilt. Adding or removing the system oid column also requires rewriting the entire table. Table and/or index rebuilds may take a significant amount of time for a large table, and will temporarily require as much as double the disk space.
Adding a CHECK or NOT NULL constraint requires scanning the table to verify that existing rows meet the constraint, but does not require a table rewrite.
Similarly, when attaching a new partition it may be scanned to verify that existing rows meet the partition constraint.
The main reason for providing the option to specify multiple changes in a single ALTER TABLE is that multiple table scans or rewrites can thereby be combined into a single pass over the table.
The DROP COLUMN form does not physically remove the column, but simply makes it invisible to SQL operations. Subsequent insert and update operations in the table will store a null value for the column. Thus, dropping a column is quick but it will not immediately reduce the on-disk size of your table, as the space occupied by the dropped column is not reclaimed. The space will be reclaimed over time as existing rows are updated. (These statements do not apply when dropping the system oid column; that is done with an immediate rewrite.)
To force immediate reclamation of space occupied by a dropped column, you can execute one of the forms of ALTER TABLE that performs a rewrite of the whole table. This results in reconstructing each row with the dropped column replaced by a null value.
The rewriting forms of ALTER TABLE are not MVCC-safe. After a table rewrite, the table will appear empty to concurrent transactions if they are using a snapshot taken before the rewrite occurred. See Section 13.5 for more details.
The USING option of SET DATA TYPE can actually specify any expression involving the old values of the row; that is, it can refer to other columns as well as the one being converted. This allows very general conversions to be done with the SET DATA TYPE syntax. Because of this flexibility, the USING expression is not applied to the column's default value (if any); the result might not be a constant expression as required for a default. This means that when there is no implicit or assignment cast from old to new type, SET DATA TYPE might fail to convert the default even though a USING clause is supplied. In such cases, drop the default with DROP DEFAULT, perform the ALTER TYPE, and then use SET DEFAULT to add a suitable new default. Similar considerations apply to indexes and constraints involving the column.
If a table has any descendant tables, it is not permitted to add, rename, or change the type of a column in the parent table without doing the same to the descendants. This ensures that the descendants always have columns matching the parent. Similarly, a constraint cannot be renamed in the parent without also renaming it in all descendants, so that constraints also match between the parent and its descendants. Also, because selecting from the parent also selects from its descendants, a constraint on the parent cannot be marked valid unless it is also marked valid for those descendants. In all of these cases, ALTER TABLE ONLY will be rejected.
A recursive DROP COLUMN operation will remove a descendant table's column only if the descendant does not inherit that column from any other parents and never had an independent definition of the column. A nonrecursive DROP COLUMN (i.e., ALTER TABLE ONLY ... DROP COLUMN) never removes any descendant columns, but instead marks them as independently defined rather than inherited. A nonrecursive DROP COLUMN command will fail for a partitioned table, because all partitions of a table must have the same columns as the partitioning root.
The actions for identity columns (ADD GENERATED, SET etc., DROP IDENTITY), as well as the actions TRIGGER, CLUSTER, OWNER, and TABLESPACE, never recurse to descendant tables; that is, they always act as though ONLY were specified. Adding a constraint recurses only for CHECK constraints that are not marked NO INHERIT.
Changing any part of a system catalog table is not permitted.
Refer to CREATE TABLE for a further description of valid parameters. Chapter 5 has further information on inheritance.
To add a column of type varchar to a table:
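For example (table and column names are illustrative):

```sql
ALTER TABLE distributors ADD COLUMN address varchar(30);
```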
To drop a column from a table:
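For example:

```sql
ALTER TABLE distributors DROP COLUMN address;
```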
To change the types of two existing columns in one operation:
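For example:

```sql
ALTER TABLE distributors
    ALTER COLUMN address TYPE varchar(80),
    ALTER COLUMN name TYPE varchar(100);
```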
To change an integer column containing Unix timestamps to timestamp with time zone via a USING clause:
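For example (the column foo_timestamp is illustrative):

```sql
ALTER TABLE foo
    ALTER COLUMN foo_timestamp SET DATA TYPE timestamp with time zone
    USING
        timestamp with time zone 'epoch' + foo_timestamp * interval '1 second';
```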
The same, when the column has a default expression that won't automatically cast to the new data type:
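For example, dropping the default first and re-adding it afterwards:

```sql
ALTER TABLE foo
    ALTER COLUMN foo_timestamp DROP DEFAULT,
    ALTER COLUMN foo_timestamp TYPE timestamp with time zone
    USING
        timestamp with time zone 'epoch' + foo_timestamp * interval '1 second',
    ALTER COLUMN foo_timestamp SET DEFAULT now();
```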
To rename an existing column:
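For example:

```sql
ALTER TABLE distributors RENAME COLUMN address TO city;
```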
To rename an existing table:
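For example:

```sql
ALTER TABLE distributors RENAME TO suppliers;
```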
To rename an existing constraint:
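For example (the constraint names are illustrative):

```sql
ALTER TABLE distributors RENAME CONSTRAINT zipchk TO zip_check;
```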
To add a not-null constraint to a column:
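For example:

```sql
ALTER TABLE distributors ALTER COLUMN street SET NOT NULL;
```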
To remove a not-null constraint from a column:
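For example:

```sql
ALTER TABLE distributors ALTER COLUMN street DROP NOT NULL;
```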
To add a check constraint to a table and all its children:
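For example:

```sql
ALTER TABLE distributors ADD CONSTRAINT zipchk CHECK (char_length(zipcode) = 5);
```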
To add a check constraint only to a table and not to its children:
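For example:

```sql
ALTER TABLE distributors
    ADD CONSTRAINT zipchk CHECK (char_length(zipcode) = 5) NO INHERIT;
```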
(The check constraint will not be inherited by future children, either.)
To remove a check constraint from a table and all its children:
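For example:

```sql
ALTER TABLE distributors DROP CONSTRAINT zipchk;
```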
To remove a check constraint from one table only:
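For example:

```sql
ALTER TABLE ONLY distributors DROP CONSTRAINT zipchk;
```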
(The check constraint remains in place for any child tables.)
To add a foreign key constraint to a table:
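For example (the referenced table addresses is illustrative):

```sql
ALTER TABLE distributors
    ADD CONSTRAINT distfk FOREIGN KEY (address) REFERENCES addresses (address);
```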
To add a foreign key constraint to a table with the least impact on other work:
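For example, adding the constraint as NOT VALID and validating it separately:

```sql
ALTER TABLE distributors
    ADD CONSTRAINT distfk FOREIGN KEY (address) REFERENCES addresses (address)
    NOT VALID;
ALTER TABLE distributors VALIDATE CONSTRAINT distfk;
```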
To add a (multicolumn) unique constraint to a table:
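For example:

```sql
ALTER TABLE distributors
    ADD CONSTRAINT dist_id_zipcode_key UNIQUE (dist_id, zipcode);
```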
To add an automatically named primary key constraint to a table, noting that a table can only ever have one primary key:
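For example:

```sql
ALTER TABLE distributors ADD PRIMARY KEY (dist_id);
```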
To move a table to a different tablespace:
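For example (the tablespace name is illustrative):

```sql
ALTER TABLE distributors SET TABLESPACE fasttablespace;
```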
To move a table to a different schema:
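For example (the schema names are illustrative):

```sql
ALTER TABLE myschema.distributors SET SCHEMA yourschema;
```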
To recreate a primary key constraint, without blocking updates while the index is rebuilt:
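For example, building the replacement index concurrently and then swapping it in:

```sql
CREATE UNIQUE INDEX CONCURRENTLY dist_id_temp_idx ON distributors (dist_id);
ALTER TABLE distributors
    DROP CONSTRAINT distributors_pkey,
    ADD CONSTRAINT distributors_pkey PRIMARY KEY USING INDEX dist_id_temp_idx;
```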
To attach a partition to a range-partitioned table:
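For example (the table names and bounds are illustrative):

```sql
ALTER TABLE measurement
    ATTACH PARTITION measurement_y2016m07
    FOR VALUES FROM ('2016-07-01') TO ('2016-08-01');
```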
To attach a partition to a list-partitioned table:
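For example (the table names and list values are illustrative):

```sql
ALTER TABLE cities
    ATTACH PARTITION cities_ab
    FOR VALUES IN ('a', 'b');
```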
To detach a partition from a partitioned table:
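For example:

```sql
ALTER TABLE measurement DETACH PARTITION measurement_y2016m07;
```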
The forms ADD (without USING INDEX), DROP [COLUMN], DROP IDENTITY, RESTART, SET DEFAULT, SET DATA TYPE (without USING), SET GENERATED, and SET sequence_option conform with the SQL standard. The other forms are PostgreSQL extensions of the SQL standard. Also, the ability to specify more than one manipulation in a single ALTER TABLE command is an extension.
ALTER TABLE DROP COLUMN can be used to drop the only column of a table, leaving a zero-column table. This is an extension of SQL, which disallows zero-column tables.
CLUSTER — cluster a table according to an index
CLUSTER instructs PostgreSQL to cluster the table specified by table_name based on the index specified by index_name. The index must already have been defined on table_name.
When a table is clustered, it is physically reordered based on the index information. Clustering is a one-time operation: when the table is subsequently updated, the changes are not clustered. That is, no attempt is made to store new or updated rows according to their index order. (If one wishes, one can periodically recluster by issuing the command again. Also, setting the table's fillfactor storage parameter to less than 100% can aid in preserving cluster ordering during updates, since updated rows are kept on the same page if enough space is available there.)
When a table is clustered, PostgreSQL remembers which index it was clustered by. The form CLUSTER table_name reclusters the table using the same index as before. You can also use the CLUSTER or SET WITHOUT CLUSTER forms of ALTER TABLE to set the index to be used for future cluster operations, or to clear any previous setting.
CLUSTER without any parameter reclusters all the previously-clustered tables in the current database that the calling user owns, or all such tables if called by a superuser. This form of CLUSTER cannot be executed inside a transaction block.
When a table is being clustered, an ACCESS EXCLUSIVE lock is acquired on it. This prevents any other database operations (both reads and writes) from operating on the table until the CLUSTER is finished.
table_name
The name (possibly schema-qualified) of a table.
index_name
The name of an index.
VERBOSE
Prints a progress report as each table is clustered.
In cases where you are accessing single rows randomly within a table, the actual order of the data in the table is unimportant. However, if you tend to access some data more than others, and there is an index that groups them together, you will benefit from using CLUSTER. If you are requesting a range of indexed values from a table, or a single indexed value that has multiple rows that match, CLUSTER will help because once the index identifies the table page for the first row that matches, all other rows that match are probably already on the same table page, and so you save disk accesses and speed up the query.
CLUSTER can re-sort the table using either an index scan on the specified index, or (if the index is a b-tree) a sequential scan followed by sorting. It will attempt to choose the method that will be faster, based on planner cost parameters and available statistical information.
When an index scan is used, a temporary copy of the table is created that contains the table data in the index order. Temporary copies of each index on the table are created as well. Therefore, you need free space on disk at least equal to the sum of the table size and the index sizes.
When a sequential scan and sort is used, a temporary sort file is also created, so that the peak temporary space requirement is as much as double the table size, plus the index sizes. This method is often faster than the index scan method, but if the disk space requirement is intolerable, you can disable this choice by temporarily setting enable_sort to off.
It is advisable to set maintenance_work_mem to a reasonably large value (but not more than the amount of RAM you can dedicate to the CLUSTER operation) before clustering.
Because the planner records statistics about the ordering of tables, it is advisable to run ANALYZE on the newly clustered table. Otherwise, the planner might make poor choices of query plans.
Because CLUSTER remembers which indexes are clustered, one can cluster the tables one wants clustered manually the first time, then set up a periodic maintenance script that executes CLUSTER without any parameters, so that the desired tables are periodically reclustered.
Cluster the table employees on the basis of its index employees_ind:
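For example:

```sql
CLUSTER employees USING employees_ind;
```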
Cluster the employees table using the same index that was used before:
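For example:

```sql
CLUSTER employees;
```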
Cluster all tables in the database that have previously been clustered:
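For example:

```sql
CLUSTER;
```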
There is no CLUSTER statement in the SQL standard.
The syntax CLUSTER index_name ON table_name is also supported for compatibility with pre-8.3 PostgreSQL versions.
COMMENT — define or change the comment of an object
COMMENT stores a comment about a database object.
Only one comment string is stored for each object, so to modify a comment, issue a new COMMENT command for the same object. To remove a comment, write NULL in place of the text string. Comments are automatically dropped when their object is dropped.
For most kinds of object, only the object's owner can set the comment. Roles don't have owners, so the rule for COMMENT ON ROLE is that you must be a superuser to comment on a superuser role, or have the CREATEROLE privilege to comment on non-superuser roles. Likewise, access methods don't have owners either; you must be a superuser to comment on an access method. Of course, a superuser can comment on anything.
Comments can be viewed using psql's \d family of commands. Other user interfaces to retrieve comments can be built atop the same built-in functions that psql uses, namely obj_description, col_description, and shobj_description (see Table 9.68).
object_name
relation_name
.column_name
aggregate_name
constraint_name
function_name
operator_name
policy_name
rule_name
trigger_name
The name of the object to be commented. Names of tables, aggregates, collations, conversions, domains, foreign tables, functions, indexes, operators, operator classes, operator families, sequences, statistics, text search objects, types, and views can be schema-qualified. When commenting on a column, relation_name must refer to a table, view, composite type, or foreign table.
table_name
domain_name
When creating a comment on a constraint, a trigger, a rule, or a policy, these parameters specify the name of the table or domain on which that object is defined.
source_type
The name of the source data type of the cast.
target_type
The name of the target data type of the cast.
argmode
The mode of a function or aggregate argument: IN, OUT, INOUT, or VARIADIC. If omitted, the default is IN. Note that COMMENT does not actually pay any attention to OUT arguments, since only the input arguments are needed to determine the function's identity. So it is sufficient to list the IN, INOUT, and VARIADIC arguments.
argname
The name of a function or aggregate argument. Note that COMMENT does not actually pay any attention to argument names, since only the argument data types are needed to determine the function's identity.
argtype
The data type of a function or aggregate argument.
large_object_oid
The OID of the large object.
left_type
right_type
The data type(s) of the operator's arguments (optionally schema-qualified). Write NONE for the missing argument of a prefix or postfix operator.
PROCEDURAL
This is a noise word.
type_name
The name of the data type of the transform.
lang_name
The name of the language of the transform.
text
The new comment, written as a string literal; or NULL to drop the comment.
There is no security mechanism for viewing comments: any user connected to a database can see all the comments for objects in that database. For shared objects such as databases, roles, and tablespaces, comments are stored globally, so any user connected to any database in the cluster can see all the comments for shared objects. Therefore, don't put security-critical information in comments.
Attach a comment to the table mytable:
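For example:

```sql
COMMENT ON TABLE mytable IS 'This is my table.';
```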
Remove it again:
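For example:

```sql
COMMENT ON TABLE mytable IS NULL;
```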
Some more examples:
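A few representative statements (all object names are illustrative):

```sql
COMMENT ON COLUMN my_table.my_column IS 'Employee ID number';
COMMENT ON DATABASE my_database IS 'Development Database';
COMMENT ON FUNCTION my_function (timestamp) IS 'Returns Roman Numeral';
COMMENT ON TYPE complex IS 'Complex number data type';
```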
There is no COMMENT command in the SQL standard.
CREATE DATABASE — create a new database
CREATE DATABASE creates a new PostgreSQL database.
To create a database, you must be a superuser or have the special CREATEDB privilege. See CREATE ROLE.
By default, the new database will be created by cloning the standard system database template1. A different template can be specified by writing TEMPLATE name. In particular, by writing TEMPLATE template0, you can create a pristine database containing only the standard objects predefined by your version of PostgreSQL. This is useful if you wish to avoid copying any installation-local objects that might have been added to template1.
name
The name of a database to create.
user_name
The role name of the user who will own the new database, or DEFAULT to use the default (namely, the user executing the command). To create a database owned by another role, you must be a direct or indirect member of that role, or be a superuser.
template
The name of the template from which to create the new database, or DEFAULT to use the default template (template1).
encoding
lc_collate
Collation order (LC_COLLATE) to use in the new database. This affects the sort order applied to strings, e.g., in queries with ORDER BY, as well as the order used in indexes on text columns. The default is to use the collation order of the template database. See below for additional restrictions.
lc_ctype
Character classification (LC_CTYPE) to use in the new database. This affects the categorization of characters, e.g., lower, upper and digit. The default is to use the character classification of the template database. See below for additional restrictions.
tablespace_name
allowconn
If false then no one can connect to this database. The default is true, allowing connections (except as restricted by other mechanisms, such as GRANT/REVOKE CONNECT).
connlimit
How many concurrent connections can be made to this database. -1 (the default) means no limit.
istemplate
If true, then this database can be cloned by any user with CREATEDB privileges; if false (the default), then only superusers or the owner of the database can clone it.
Optional parameters can be written in any order, not only the order illustrated above.
CREATE DATABASE cannot be executed inside a transaction block.
Errors along the line of "could not initialize database directory" are most likely related to insufficient permissions on the data directory, a full disk, or other file system problems.
Database-level configuration parameters (set via ALTER DATABASE) are not copied from the template database.
The character set encoding specified for the new database must be compatible with the chosen locale settings (LC_COLLATE and LC_CTYPE). If the locale is C (or equivalently POSIX), then all encodings are allowed, but for other locale settings there is only one encoding that will work properly. (On Windows, however, UTF-8 encoding can be used with any locale.) CREATE DATABASE will allow superusers to specify SQL_ASCII encoding regardless of the locale settings, but this choice is deprecated and may result in misbehavior of character-string functions if data that is not encoding-compatible with the locale is stored in the database.
The encoding and locale settings must match those of the template database, except when template0 is used as the template. This is because other databases might contain data that does not match the specified encoding, or might contain indexes whose sort ordering is affected by LC_COLLATE and LC_CTYPE. Copying such data would result in a database that is corrupt according to the new settings. template0, however, is known to not contain any data or indexes that would be affected.
The CONNECTION LIMIT option is only enforced approximately; if two new sessions start at about the same time when just one connection "slot" remains for the database, it is possible that both will fail. Also, the limit is not enforced against superusers or background worker processes.
To create a new database:
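For example (the database name is illustrative):

```sql
CREATE DATABASE lusiadas;
```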
To create a database sales owned by user salesapp with a default tablespace of salesspace:
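For example:

```sql
CREATE DATABASE sales OWNER salesapp TABLESPACE salesspace;
```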
To create a database music with a different locale:
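For example (the locale name is illustrative and operating-system specific):

```sql
CREATE DATABASE music
    LC_COLLATE 'sv_SE.utf8' LC_CTYPE 'sv_SE.utf8'
    TEMPLATE template0;
```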
In this example, the TEMPLATE template0 clause is required if the specified locale is different from the one in template1. (If it is not, then specifying the locale explicitly is redundant.)
To create a database music2 with a different locale and a different character set encoding:
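For example (the locale and encoding names are illustrative):

```sql
CREATE DATABASE music2
    LC_COLLATE 'sv_SE.iso885915' LC_CTYPE 'sv_SE.iso885915'
    ENCODING LATIN9
    TEMPLATE template0;
```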
The specified locale and encoding settings must match, or an error will be reported.
Note that locale names are specific to the operating system, so the above commands might not work in the same way everywhere.
There is no CREATE DATABASE statement in the SQL standard. Databases are equivalent to catalogs, whose creation is implementation-defined.
CREATE EXTENSION — install an extension
CREATE EXTENSION loads a new extension into the current database. There must not be an extension of the same name already loaded.
Loading an extension essentially amounts to running the extension's script file. The script will typically create new SQL objects such as functions, data types, operators and index support methods. CREATE EXTENSION additionally records the identities of all the created objects, so that they can be dropped again if DROP EXTENSION is issued.
The user who runs CREATE EXTENSION becomes the owner of the extension for purposes of later privilege checks, and normally also becomes the owner of any objects created by the extension's script.
Loading an extension ordinarily requires the same privileges that would be required to create its component objects. For many extensions this means superuser privileges are needed. However, if the extension is marked trusted in its control file, then it can be installed by any user who has CREATE privilege on the current database. In this case the extension object itself will be owned by the calling user, but the contained objects will be owned by the bootstrap superuser (unless the extension's script explicitly assigns them to the calling user). This configuration gives the calling user the right to drop the extension, but not to modify individual objects within it.
IF NOT EXISTS
Do not throw an error if an extension with the same name already exists. A notice is issued in this case. Note that there is no guarantee that the existing extension is anything like the one that would have been created from the currently-available script file.
extension_name
The name of the extension to be installed. PostgreSQL will create the extension using details from the file SHAREDIR/extension/extension_name.control.
schema_name
The name of the schema in which to install the extension's objects, given that the extension allows its contents to be relocated. The named schema must already exist. If not specified, and the extension's control file does not specify a schema either, the current default object creation schema is used.
If the extension specifies a schema parameter in its control file, then that schema cannot be overridden with a SCHEMA clause. Normally, an error will be raised if a SCHEMA clause is given and it conflicts with the extension's schema parameter. However, if the CASCADE clause is also given, then schema_name is ignored when it conflicts. The given schema_name will be used for installation of any needed extensions that do not specify schema in their control files.
Remember that the extension itself is not considered to be within any schema: extensions have unqualified names that must be unique database-wide. But objects belonging to the extension can be within schemas.
version
The version of the extension to install. This can be written as either an identifier or a string literal. The default version is whatever is specified in the extension's control file.
CASCADE
Automatically install any extensions that this extension depends on that are not already installed. Their dependencies are likewise automatically installed, recursively. The SCHEMA clause, if given, applies to all extensions that get installed this way. Other options of the statement are not applied to automatically-installed extensions; in particular, their default versions are always selected.
Installing an extension as superuser requires trusting that the extension's author wrote the extension installation script in a secure fashion. It is not terribly difficult for a malicious user to create trojan-horse objects that will compromise later execution of a carelessly-written extension script, allowing that user to acquire superuser privileges. However, trojan-horse objects are only hazardous if they are in the search_path during script execution, meaning that they are in the extension's installation target schema or in the schema of some extension it depends on. Therefore, a good rule of thumb when dealing with extensions whose scripts have not been carefully vetted is to install them only into schemas for which CREATE privilege has not been and will not be granted to any untrusted users. Likewise for any extensions they depend on.
The extensions supplied with PostgreSQL are believed to be secure against installation-time attacks of this sort, except for a few that depend on other extensions. As stated in the documentation for those extensions, they should be installed into secure schemas, or installed into the same schemas as the extensions they depend on, or both.
Install the hstore extension into the current database, placing its objects in schema addons:
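For example:

```sql
CREATE EXTENSION hstore SCHEMA addons;
```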
Another way to accomplish the same thing:
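For example, by first changing the default creation schema:

```sql
SET search_path = addons;
CREATE EXTENSION hstore;
```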
CREATE EXTENSION is a PostgreSQL extension.
CREATE FOREIGN TABLE — define a new foreign table
CREATE FOREIGN TABLE creates a new foreign table in the current database. The table will be owned by the user issuing the command.
If a schema name is given (for example, CREATE FOREIGN TABLE myschema.mytable ...) then the table is created in the specified schema. Otherwise it is created in the current schema. The name of the foreign table must be distinct from the name of any other foreign table, table, sequence, index, view, or materialized view in the same schema.
CREATE FOREIGN TABLE also automatically creates a data type that represents the composite type corresponding to one row of the foreign table. Therefore, foreign tables cannot have the same name as any existing data type in the same schema.
If a PARTITION OF clause is specified, then the table is created as a partition of parent_table with the specified bounds.
To be able to create a foreign table, you must have USAGE privilege on the foreign server, as well as USAGE privilege on all column types used in the table.
IF NOT EXISTS
Do not throw an error if a relation with the same name already exists. A notice is issued in this case. Note that there is no guarantee that the existing relation is anything like the one that would have been created.
table_name
The name (optionally schema-qualified) of the table to be created.
column_name
The name of a column to be created in the new table.
data_type
COLLATE collation
The COLLATE clause assigns a collation to the column (which must be of a collatable data type). If not specified, the column data type's default collation is used.
INHERITS ( parent_table [, ... ] )
CONSTRAINT constraint_name
An optional name for a column or table constraint. If the constraint is violated, the constraint name is present in error messages, so constraint names like col must be positive can be used to communicate helpful constraint information to client applications. (Double-quotes are needed to specify constraint names that contain spaces.) If a constraint name is not specified, the system generates a name.
NOT NULL
The column is not allowed to contain null values.
NULL
The column is allowed to contain null values. This is the default.
This clause is only provided for compatibility with non-standard SQL databases. Its use is discouraged in new applications.
CHECK ( expression ) [ NO INHERIT ]
The CHECK clause specifies an expression producing a Boolean result which each row in the foreign table is expected to satisfy; that is, the expression should produce TRUE or UNKNOWN, never FALSE, for all rows in the foreign table. A check constraint specified as a column constraint should reference that column's value only, while an expression appearing in a table constraint can reference multiple columns.
Currently, CHECK expressions cannot contain subqueries nor refer to variables other than columns of the current row. The system column tableoid may be referenced, but not any other system column.
A constraint marked with NO INHERIT will not propagate to child tables.
DEFAULT default_expr
The DEFAULT clause assigns a default data value for the column whose column definition it appears within. The value is any variable-free expression (subqueries and cross-references to other columns in the current table are not allowed). The data type of the default expression must match the data type of the column.
The default expression will be used in any insert operation that does not specify a value for the column. If there is no default for a column, then the default is null.
server_name
OPTIONS ( option 'value' [, ... ] )
Options to be associated with the new foreign table or one of its columns. The allowed option names and values are specific to each foreign data wrapper and are validated using the foreign-data wrapper's validator function. Duplicate option names are not allowed (although it's OK for a table option and a column option to have the same name).
Constraints on foreign tables (such as CHECK or NOT NULL clauses) are not enforced by the core PostgreSQL system, and most foreign data wrappers do not attempt to enforce them either; that is, the constraint is simply assumed to hold true. There would be little point in such enforcement since it would only apply to rows inserted or updated via the foreign table, and not to rows modified by other means, such as directly on the remote server. Instead, a constraint attached to a foreign table should represent a constraint that is being enforced by the remote server.
Some special-purpose foreign data wrappers might be the only access mechanism for the data they access, and in that case it might be appropriate for the foreign data wrapper itself to perform constraint enforcement. But you should not assume that a wrapper does that unless its documentation says so.
Although PostgreSQL does not attempt to enforce constraints on foreign tables, it does assume that they are correct for purposes of query optimization. If there are rows visible in the foreign table that do not satisfy a declared constraint, queries on the table might produce incorrect answers. It is the user's responsibility to ensure that the constraint definition matches reality.
Create foreign table films, which will be accessed through the server film_server:
Create foreign table measurement_y2016m07, which will be accessed through the server server_07, as a partition of the range partitioned table measurement:
CREATE FUNCTION — 定義一個新函數
CREATE FUNCTION 用來定義一個新函數。CREATE OR REPLACE FUNCTION 將建立一個新的函數,或是更換現有的函數定義。為了能夠定義一個函數,使用者必須具有該程式語言的 USAGE 權限。
如果包含 schema,則該函數將在指定的 schema 中建立。否則它會在目前的 schema中建立。新函數的名稱不得與相同 schema 中具有相同輸入參數型別的任何現有函數相同。但是,不同參數型別的函數可以共享一個名稱(稱為多載 overloading)。
要更換現有函數的目前定義,請使用 CREATE OR REPLACE FUNCTION。以這種方式改變函數的名稱或參數型別是不可行的(如果你嘗試做了,實際上你會建立一個新的、不同的函數)。另外,CREATE OR REPLACE FUNCTION 不會讓你改變現有函數的回傳型別。為此,你必須刪除並重新建立該函數。(使用 OUT 參數時,這意味著你不能更改任何 OUT 參數的型別,除非移除該函數。)
當使用 CREATE OR REPLACE FUNCTION 替換現有函數時,該函數的所有權和權限都不會改變。所有其他的函數屬性都會被指定為指令中指定或隱含的值。你必須擁有該函數才能替換它(這包括成為其所屬角色的成員)。
如果你刪除然後重新建立一個函數,那麼新函數與舊的函數不是同一個實體;你必須刪除引用舊功能的現有規則、view、觸發器等。使用 CREATE OR REPLACE FUNCTION 來更改函數定義而不會破壞引用該函數的物件。此外,ALTER FUNCTION 可用於更改現有函數的大部分輔助屬性。
建立函數的使用者會成為該函數的所有者。
為了能夠建立一個函數,你必須對參數型別和回傳型別具有 USAGE 權限。
name
要建立的函數名稱(可以加上 schema)。
argmode
參數的模式:IN、OUT、INOUT 或 VARIADIC。如果省略的話,則預設為 IN。只有 OUT 參數可以接在 VARIADIC 參數之後。此外,OUT 和 INOUT 參數不能與 RETURNS TABLE 表示法一起使用。
argname
argtype
如果有的話,函數參數的資料型別(可加上 schema)。參數型別可以是基本型別、複合型別或 domain 型別,也可以引用資料表欄位的型別。
根據實作語言的不同,它也可能被指定為「偽型別」,例如 cstring。偽型別表示實際參數型別或者是不完整指定的,或者是在普通 SQL 資料型別集合之外的型別。
可以寫成 table_name.column_name%TYPE 來引用資料表欄位的型別。使用此功能有時有助於使函數獨立於資料表定義的變更。
default_expr
如果未指定參數值時要用作預設值的表示式。該表示式必須可以強制轉換為該參數的資料型別。只有輸入(包括 INOUT)參數可以有預設值。具有預設值的參數之後的所有輸入參數也必須具有預設值。
rettype
回傳的資料型別(可加上 schema)。回傳型別可以是基本型別、複合型別或 domain 型別,也可以引用資料表欄位的型別。根據實作語言的不同,它也可能被指定為「偽型別」,例如 cstring。如果該函數不應該回傳一個值,則應指定 void 作為回傳型別。
當有 OUT 或 INOUT 參數時,可以省略 RETURNS 子句。如果存在的話,它就必須與輸出參數所暗示的結果型別一致:如果存在多個輸出參數,則為 RECORD,或者與單個輸出參數的型別相同。
SETOF 修飾字表示該函數將回傳一組值,而不是單個值。
以寫作 table_name.column_name %TYPE 的形式來引用欄位的型別。
column_name
RETURNS TABLE 語法中輸出欄位的名稱。這實際上是另一種宣告 OUT 參數的方式,除了 RETURNS TABLE 也意味著 RETURNS SETOF。
column_type
RETURNS TABLE 語法中輸出欄位的資料型別。
lang_name
該函數實作所使用的程式語言名稱。它可以是 sql、c、internal,或使用者定義的程序語言名稱,例如 plpgsql。將此名稱用單引號括起來的寫法已被棄用,且必須完全符合大小寫。
TRANSFORM { FOR TYPE type_name } [, ... ]
WINDOW
WINDOW 表示該函數是一個窗函數,而不是一個普通函數。目前這僅對用 C 寫成的函數有用。在替換現有函數定義時,不能更改 WINDOW 屬性。
IMMUTABLE
STABLE
VOLATILE
這些屬性告知查詢優化器關於函數的行為。至多只能指定一個選項。如果沒有這些選項出現,VOLATILE 是基本的假設。
IMMUTABLE 表示該函數不能修改資料庫,並且在給定相同的參數值時總是回傳相同的結果;也就是說,它不會執行資料庫查詢或以其他方式使用不直接存在於其參數列表中的訊息。如果給出這個選項,任何具有所有常量參數的函數呼叫都可以立即替換為函數值。
STABLE 表示該函數無法修改資料庫,並且在單個資料表掃描時,它將始終為相同的參數值回傳相同的結果,但其結果可能會跨 SQL 語句更改。對於結果取決於資料庫查詢,參數變數(如目前時區)等的函數,這是合適的選擇(對於希望查詢由目前命令修改資料列的 AFTER 觸發器並不合適)。另請注意,current_timestamp 類的函數符合穩定性,因為它們的值在事務中不會改變。
VOLATILE 表示即使在單次資料表掃描中,函數值也可能改變,因此不能進行優化。在這個意義上,相對較少的資料庫函數是 VOLATILE 的,例如 random()、currval()、timeofday()。但請注意,任何具有副作用的函數都必須歸類為 VOLATILE,即使其結果相當可預測,以防止呼叫被優化掉;setval() 就是一個例子。
LEAKPROOF
CALLED ON NULL INPUT
RETURNS NULL ON NULL INPUT
STRICT
CALLED ON NULL INPUT
(預設值)表示當其某些參數為 null 時,該函數仍將被正常呼叫。那麼函數作者有責任在必要時檢查 null,並作出適當的處理。
RETURNS NULL ON NULL INPUT
或 STRICT
表示函數每當其任何參數為 null 時就回傳 null。如果指定了該參數,那麼當有 null 參數時,該函數就不會被執行;也就是,會自動假定結果為 null。
[EXTERNAL] SECURITY INVOKER
[EXTERNAL] SECURITY DEFINER
SECURITY INVOKER 表示該函數將以呼叫它的使用者權限執行。這是預設的設定。
SECURITY DEFINER 指定該函數將以擁有它的使用者權限執行。
為了 SQL 一致性,允許使用關鍵字 EXTERNAL,但它是選擇性的,因為與 SQL 標準不同,此功能適用於所有函數,而不僅是外部函數。
PARALLEL
PARALLEL UNSAFE 表示該函數不能在平行模式下執行,並且在 SQL 語句中存在此類函數會強制執行串列的執行計劃。這是預設的設定。PARALLEL RESTRICTED 表示該功能可以以平行模式執行,但執行僅限於平行群組領導。PARALLEL SAFE 表示該功能可以安全無限制地在平行模式下執行。
如果函數修改任何資料庫狀態,或者以使用子交易進行錯誤恢復之外的方式改變交易狀態,或者存取序列物件,或試圖對設定進行永久性變更(例如 setval),那麼該函數應標記為 PARALLEL UNSAFE。如果函數存取臨時資料表、用戶端連線狀態、游標、prepared statement,或系統無法以平行模式同步的其他後端局部狀態(例如,setseed 只能由平行群組領導者執行,因為其他程序所做的變更不會反映在領導者中),則應標記為 PARALLEL RESTRICTED。一般來說,如果一個函數實際上是 RESTRICTED 或 UNSAFE 卻被標記為 SAFE,或者實際上是 UNSAFE 卻被標記為 RESTRICTED,那麼它在平行查詢中可能會引發錯誤或產生錯誤的結果。如果標記錯誤,C 語言函數理論上可能表現出完全未定義的行為,因為系統無法保護自己免受任意 C 程式碼的影響,但在大多數情況下,結果不會比其他函數更糟。如有疑問,函數就應該標記為 UNSAFE,這也是預設值。
execution_cost
一個正數,以 cpu_operator_cost 為單位,給予該函數的估計執行成本。如果函數回傳一個集合,則這是每個回傳資料列的成本。如果未指定成本,C 語言和 internal 函數假定為 1 個單位,其他語言的函數為 100 個單位。較大的值會使查詢規劃器嘗試避免過於頻繁地評估該函數。
result_rows
一個正數,它給予規劃單元應該期望函數回傳的估計資料列數。只有在函數宣告回傳一個集合時才允許這樣做。預設是 1000 個資料列。
configuration_parameter
value
SET 子句在輸入函數時將指定的配置參數設定為指定的值,然後在函數退出時恢復為其先前的值。 SET FROM CURRENT 將執行 CREATE FUNCTION 時當時參數的值保存為輸入函數時要應用的值。
如果將一個 SET 子句附加到一個函數,那麼在該函數內對同一個變數執行的 SET LOCAL 命令的作用將僅限於該函數:配置參數的先前的值仍然會在函數離開時恢復。 然而,一個普通的 SET 命令(沒有 LOCAL)會覆蓋 SET 子句,就像它對於先前的 SET LOCAL 指令所做的那樣:除非當下的事務被回復,否則這種指令的效果將在函數退出後持續存在。
definition
定義函數的字串常數;其意義取決於程式語言。它可以是內部函數名稱、目標檔案的路徑、SQL 指令或程序語言中的內容。
obj_file
,link_symbol
當重複 CREATE FUNCTION 呼叫引用同一個目標檔案時,該檔案僅會在每個連線中載入一次。要卸載並重新載入文件(可能在開發過程中),請重新啟動一個新的連線。
attribute
以歷史遺留方式指定函數的選擇性資訊。以下屬性可以在這裡出現:
isStrict
等同於 STRICT 或 RETURNS NULL ON NULL INPUT。
isCachable
isCachable 是 IMMUTABLE 的過時等價寫法;基於相容性理由,它仍然被接受。
屬性名稱都不區分大小寫。
PostgreSQL 允許函數多載;也就是說,只要具有不同的輸入參數型別,相同的名稱就可以用於多個不同的函數。但是,所有函數的 C 名稱必須不同,因此多載的 C 函數必須給予不同的 C 名稱(例如,使用參數型別作為 C 名稱的一部分)。
如果兩個函數具有相同的名稱和輸入參數型別(忽略任何 OUT 參數),則視為相同的函數。因此,像下面這樣的宣告就會有衝突:
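例如,以下兩個宣告便會發生衝突(以 ... 代表其餘定義):

```sql
CREATE FUNCTION foo(int) ...
CREATE FUNCTION foo(int, out text) ...
```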
具有不同參數型別列表的函數在建立時不會被視為衝突,但如果提供了預設值,則它們可能會在使用中發生衝突。 例如下面的例子:
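例如,以下兩個宣告在建立時都會成功,但在呼叫時可能產生歧義(以 ... 代表其餘定義):

```sql
CREATE FUNCTION foo(int) ...
CREATE FUNCTION foo(int, int default 42) ...
```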
呼叫 foo(10) 的話會因為不知道應該呼叫哪個函數而失敗。
以完整的 SQL 型別語法來宣告函數的參數和回傳值是可以的。但是,帶括號的型別修飾字(例如數值型別的精確度修飾字)將被 CREATE FUNCTION 捨棄。因此,例如 CREATE FUNCTION foo(varchar(10)) ... 與 CREATE FUNCTION foo(varchar) ... 完全相同。
使用 CREATE OR REPLACE FUNCTION 替換現有函數時,對於更改參數名稱是有限制的。你不能更改已指定給任何輸入參數的名稱(儘管你可以為先前沒有名稱的參數加上名稱)。如果有多個輸出參數,則不能更改輸出參數的名稱,因為這會更改描述函數結果的匿名複合型別的欄位名稱。這些限制是為了確保函數現有的呼叫在更換時不會停止運作。
如果使用 VARIADIC 參數將函數宣告為 STRICT,則嚴格性檢查會測試整個可變參數陣列是否為 non-null。如果陣列中含有 null 元素,該函數仍然會被呼叫。
將一個整數遞增,在 PL/pgSQL 中使用參數名稱:
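一個可能的寫法如下:

```sql
CREATE FUNCTION increment(i integer) RETURNS integer AS $$
        BEGIN
                RETURN i + 1;
        END;
$$ LANGUAGE plpgsql;
```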
回傳包含多個輸出參數的結果:
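示意如下(函數名稱 dup 為示範用途):

```sql
CREATE FUNCTION dup(in int, out f1 int, out f2 text)
    AS $$ SELECT $1, CAST($1 AS text) || ' is text' $$
    LANGUAGE SQL;

SELECT * FROM dup(42);
```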
你可以使用明確命名的複合型別更加詳細地完成同樣的事情:
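示意如下(型別名稱 dup_result 為示範用途):

```sql
CREATE TYPE dup_result AS (f1 int, f2 text);

CREATE FUNCTION dup(int) RETURNS dup_result
    AS $$ SELECT $1, CAST($1 AS text) || ' is text' $$
    LANGUAGE SQL;
```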
回傳多個欄位的另一種方法是使用 TABLE 函數:
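示意如下:

```sql
CREATE FUNCTION dup(int) RETURNS TABLE(f1 int, f2 text)
    AS $$ SELECT $1, CAST($1 AS text) || ' is text' $$
    LANGUAGE SQL;
```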
但是,TABLE 函數與前面的例子不同,因為它實際上回傳一堆記錄,而不僅僅是一條記錄。
由於SECURITY DEFINER函數是以擁有它的用戶的權限執行的,因此需要注意確保該函數不會被濫用。為了安全起見,應設定 search_path 以排除任何不受信任的使用者可以寫入的 schema。這可以防止惡意使用者建立掩蓋物件的物件(例如資料表、函數和運算元),使得該物件被函數使用。在這方面特別重要的是臨時資料表的 schema,它預設是首先被搜尋的,並且通常允許由任何人寫入。透過強制最後才搜尋臨時 schema 可以得到較為安全的處理。 為此,請將 pg_temp 作為 search_path 中的最後一個項目。此函數說明安全的使用情況:
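一個示意的安全寫法如下(資料表 admin.pwds 與其欄位為示範用途):

```sql
CREATE FUNCTION check_password(uname TEXT, pass TEXT)
RETURNS BOOLEAN AS $$
DECLARE passed BOOLEAN;
BEGIN
        SELECT  (pwd = $2) INTO passed
        FROM    pwds
        WHERE   username = $1;

        RETURN passed;
END;
$$  LANGUAGE plpgsql
    SECURITY DEFINER
    -- 安全地設定 search_path:先列出受信任的綱要,最後才是 pg_temp。
    SET search_path = admin, pg_temp;
```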
這個函數的意圖是存取一個資料表 admin.pwds。但是,如果沒有 SET 子句,或者只提及 admin 的 SET 子句,則可以透過建立名為 pwds 的臨時資料表來破壞該函數。
在 PostgreSQL 8.3 之前,SET 子句還不能使用,所以舊的函數可能需要相當複雜的邏輯來儲存、設定和恢復 search_path。有了 SET 子句便更容易用於此目的。
SQL:1999 及其後續版本定義了一個 CREATE FUNCTION 指令。PostgreSQL 版本的指令與其類似,但不完全相容。其屬性不可移植,不同的程序語言之間也無法移植。
為了與其他資料庫系統相容,可以在 argname 之前或之後編寫 argmode。但只有第一種方法符合標準。
對於參數預設值,SQL標準僅使用 DEFAULT 關鍵字指定語法。帶有 = 的語法在 T-SQL 和 Firebird 中使用。
CREATE EVENT TRIGGER — define a new event trigger
CREATE EVENT TRIGGER creates a new event trigger. Whenever the designated event occurs and the WHEN condition associated with the trigger, if any, is satisfied, the trigger function will be executed. For a general introduction to event triggers, see the event triggers chapter. The user who creates an event trigger becomes its owner.
name
The name to give the new trigger. This name must be unique within the database.
event
The name of the event that triggers a call to the given function. See the event trigger documentation for more information on event names.
filter_variable
The name of a variable used to filter events. This makes it possible to restrict the firing of the trigger to a subset of the cases in which it is supported. Currently the only supported filter_variable is TAG.
filter_value
A list of values for the associated filter_variable for which the trigger should fire. For TAG, this means a list of command tags (e.g., 'DROP FUNCTION').
function_name
A user-supplied function that is declared as taking no arguments and returning type event_trigger.
In the syntax of CREATE EVENT TRIGGER, the keywords FUNCTION and PROCEDURE are equivalent, but the referenced function must in any case be a function, not a procedure. The use of the keyword PROCEDURE here is historical and deprecated.
Only superusers can create event triggers.
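As a sketch, an event trigger that forbids all DDL commands might look like this (the function and trigger names are illustrative):

```sql
CREATE OR REPLACE FUNCTION abort_any_command()
  RETURNS event_trigger
 LANGUAGE plpgsql
  AS $$
BEGIN
  RAISE EXCEPTION 'command % is disabled', tg_tag;
END;
$$;

CREATE EVENT TRIGGER abort_ddl ON ddl_command_start
   EXECUTE FUNCTION abort_any_command();
```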
There is no CREATE EVENT TRIGGER statement in the SQL standard.
CREATE FOREIGN DATA WRAPPER — define a new foreign-data wrapper
CREATE FOREIGN DATA WRAPPER creates a new foreign-data wrapper. The user who defines a foreign-data wrapper becomes its owner.
The foreign-data wrapper name must be unique within the database.
Only superusers can create foreign-data wrappers.
name
The name of the foreign-data wrapper to be created.
HANDLER
handler_function
handler_function is the name of a previously registered function that will be called to retrieve the execution functions for foreign tables. The handler function must take no arguments, and its return type must be fdw_handler.
It is possible to create a foreign-data wrapper with no handler function, but foreign tables using such a wrapper can only be declared, not accessed.
VALIDATOR
validator_function
validator_function is the name of a previously registered function that will be called to check the generic options given to the foreign-data wrapper, as well as options for foreign servers, user mappings and foreign tables using the foreign-data wrapper. If no validator function or NO VALIDATOR is specified, then options will not be checked at creation time. (Foreign-data wrappers will possibly ignore or reject invalid option specifications at run time, depending on the implementation.) The validator function must take two arguments: one of type text[], which will contain the array of options as stored in the system catalogs, and one of type oid, which will be the OID of the system catalog containing the options. The return type is ignored; the function should report invalid options using the ereport(ERROR) function.
OPTIONS ( option 'value' [, ... ] )
This clause specifies options for the new foreign-data wrapper. The allowed option names and values are specific to each foreign data wrapper and are validated using the foreign-data wrapper's validator function. Option names must be unique.
PostgreSQL's foreign-data functionality is still under active development. Optimization of queries is primitive (and mostly left to the wrapper, too). Thus, there is considerable room for future performance improvements.
Create a useless foreign-data wrapper dummy:
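For instance:

```sql
CREATE FOREIGN DATA WRAPPER dummy;
```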
Create a foreign-data wrapper file with handler function file_fdw_handler:
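For instance:

```sql
CREATE FOREIGN DATA WRAPPER file HANDLER file_fdw_handler;
```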
Create a foreign-data wrapper mywrapper with some options:
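For instance (the option name and value are illustrative; valid options depend on the wrapper's validator):

```sql
CREATE FOREIGN DATA WRAPPER mywrapper
    OPTIONS (debug 'true');
```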
CREATE FOREIGN DATA WRAPPER conforms to ISO/IEC 9075-9 (SQL/MED), with the exception that the HANDLER and VALIDATOR clauses are extensions, and the standard clauses LIBRARY and LANGUAGE are not implemented in PostgreSQL.
Note, however, that the SQL/MED functionality as a whole is not yet conforming.
CREATE ACCESS METHOD — define a new access method
CREATE ACCESS METHOD creates a new access method.
The access method name must be unique within the database.
Only superusers can define new access methods.
name
The name of the access method to be created.
access_method_type
This clause specifies the type of access method to define. Only TABLE and INDEX are supported at present.
handler_function
handler_function is the name (possibly schema-qualified) of a previously registered function that represents the access method. The handler function must be declared to take a single argument of type internal, and its return type depends on the type of access method; for TABLE access methods, it must be table_am_handler, and for INDEX access methods, it must be index_am_handler. The C-level API that the handler function must implement varies depending on the type of access method; the table access method API and the index access method API are described in the PostgreSQL internals documentation.
Create an index access method heptree with handler function heptree_handler:
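For instance (assuming heptree_handler has already been registered):

```sql
CREATE ACCESS METHOD heptree TYPE INDEX HANDLER heptree_handler;
```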
CREATE ACCESS METHOD is a PostgreSQL extension.
COPY — 在檔案和資料表之間複製資料
COPY 在 PostgreSQL 資料表和標準檔案系統的檔案之間移動資料。COPY TO 將資料表的內容複製到檔案,而 COPY FROM 將資料從檔案複製到資料表(將資料附加到資料表中)。COPY TO 還可以複製 SELECT 查詢的結果。
如果指定了欄位列表,則 COPY 將僅將指定欄位中的資料複製到檔案或從檔案複製。如果資料表中有任何欄位不在欄位列表中,則 COPY FROM 將插入這些欄位的預設值。
帶有檔案名稱的 COPY 指示 PostgreSQL 伺服器直接讀取或寫入檔案。PostgreSQL 使用者必須可以存取該檔案(伺服器執行的作業系統使用者 ID),並且必須從伺服器的角度指定名稱。使用 PROGRAM 時,伺服器執行給定的命令並從程序的標準輸出讀取,或寫入程序的標準輸入。必須從伺服器的角度使用該命令,並且該命令可由 PostgreSQL 作業系統使用者執行。指定 STDIN 或 STDOUT 時,資料透過用戶端和伺服器之間的連線傳輸。
table_name
現有資料表的名稱(可選擇性加上綱要)。
column_name
要複製欄位的選擇性列表。如果未指定欄位列表,則將複製資料表的所有欄位。
query
要複製其結果的 SELECT、VALUES、INSERT、UPDATE 或 DELETE 命令。請注意,查詢語句需要以括號括住。對於 INSERT、UPDATE 和 DELETE 查詢,必須提供 RETURNING 子句,並且目標關連不能具有條件規則,也不能具有 ALSO 規則,也不能具有延伸為多個語句的 INSTEAD 規則。
filename
輸入或輸出檔案的路徑名稱。輸入檔案名稱可以是絕對路徑或相對路徑,但輸出檔案名稱必須是絕對路徑。Windows 使用者可能需要使用 E'' 字串,並將路徑名稱中使用的任何倒斜線加倍。
PROGRAM
要執行的命令。在 COPY FROM 中,從命令的標準輸出讀取輸入;在 COPY TO 中,輸出給寫入命令的標準輸入。
請注意,該命令由 shell 呼叫,因此如果您需要將任何參數傳遞給來自不受信任來源的 shell 命令,則必須小心去除或轉義可能對 shell 具有特殊含義的任何特殊字串。出於安全原因,最好使用固定的命令字串,或者至少避免在其中傳遞任何使用者輸入參數。
STDIN
指定輸入來自用戶端應用程式。
STDOUT
指定輸出傳送到用戶端應用程式。
boolean
指定所選選項應該開啟還是關閉。您可以寫 TRUE、ON 或 1 以啟用該選項,寫 FALSE、OFF 或 0 來停用它。布林值也可以省略,在這種情況下假定為 TRUE。
FORMAT
選擇要讀取或寫入的資料格式:text、csv(逗號分隔值)或 binary。預設為 text。
FREEZE
請求複製已經凍結的資料列,就像在執行 VACUUM FREEZE 命令之後一樣。這是初始資料載入的效能選項。只有在目前子事務中建立或清空正在載入的資料表時,才會凍結資料列,而沒有使用中游標,並且此事務不保留舊的快照。
請注意,所有其他連線在成功載入後將立即能夠看到資料。這違反了 MVCC 可見性的正常規則,使用者應該知道這可能的潛在問題。
DELIMITER
指定用於分隔檔案每行內欄位的字元。預設值為 text 格式的 tab 字元,CSV 格式的逗號。這必須是一個單位元組字元。採用二進位格式時不允許使用此選項。
NULL
指定表示空值的字串。預設值為 text 格式的 \N(倒斜線加 N),以及 CSV 格式的未加引號空字串。在不希望區分空值與空字串的情況下,即使採用 text 格式,您也可能偏好使用空字串。採用二進位格式時不允許使用此選項。
使用 COPY FROM 時,與該字串匹配的任何資料項都將儲存為空值,因此您應確保使用與 COPY TO 相同的字串。
HEADER
指定該檔案包含標題列,其中包含檔案中每個欄位的名稱。在輸出時,第一行包含資料表中的欄位名稱;在輸入時,第一行將被忽略。僅在採用 CSV 格式時才允許此選項。
QUOTE
指定引用資料值時要使用的引用字元。預設為雙引號。這必須是一個單位元組字元。僅在採用 CSV 格式時才允許此選項。
ESCAPE
指定應在與 QUOTE 值匹配的資料字元之前出現的字元。預設值與 QUOTE 值相同(因此,如果引號字元出現在資料中,則引號字元加倍)。這必須是一個單位元組字元。僅在使用 CSV 格式時才允許此選項。
FORCE_QUOTE
強制引用用於每個指定欄位中的所有非 NULL 值。從不引用 NULL 輸出。如果指定 *,則將在所有欄位中引用非 NULL 值。此選項僅在 COPY TO 中允許,並且僅在使用 CSV 格式時允許。
FORCE_NOT_NULL
不要將指定欄位的值與空字串匹配。在 null 字串為空的預設情況下,這意味著空值將被讀取為零長度字串而不是空值,即使它們未被引用也是如此。此選項僅在 COPY FROM 中允許,並且僅能用在 CSV 格式時。
FORCE_NULL
將指定欄位的值與空字串匹配,即使它已被引用,如果找到匹配項,則將值設定為 NULL。在 null 字串為空的預設情況下,這會將帶引號的空字串轉換為 NULL。此選項僅在 COPY FROM 中允許,並且僅能用在 CSV 格式。
ENCODING
指定文件在 encoding_name 中編碼。如果省略此選項,則使用目前用戶端編碼。有關詳細訊息,請參閱下面的註釋。
WHERE
The optional WHERE clause has the general form
WHERE condition
where condition is any expression that evaluates to a result of type boolean. Any row that does not satisfy this condition will not be inserted to the table. A row satisfies the condition if it returns true when the actual row values are substituted for any variable references.
Currently, subqueries are not allowed in WHERE expressions, and the evaluation does not see any changes made by the COPY itself (this matters when the expression contains calls to VOLATILE functions).
成功完成後,COPY 命令會回傳形式為 COPY count 的命令標記,其中 count 是複製的資料列數量。
提醒 僅當命令不是 COPY ... TO STDOUT 或等效的 psql 元命令 \copy ... to stdout 時,psql 才會輸出此命令標記。這是為了防止命令標記與剛剛輸出的資料混淆。
COPY TO 只能用於普通資料表,而不能用於檢視表。但是,您可以使用 COPY (SELECT * FROM viewname) TO ... 來複製檢視表的目前內容。
COPY FROM 可以與普通資料表一起使用,也可以與具有 INSTEAD OF INSERT 觸發器的檢視表一起使用。
COPY 僅處理指定名稱的資料表;它不會將資料複製到子資料表或從子資料表複製資料。因此,例如 COPY table TO 會輸出與 SELECT * FROM ONLY table 相同的資料。但 COPY (SELECT * FROM table) TO ... 可用於轉存繼承結構中的所有資料。
您必須對其值由 COPY TO 讀取的資料表具有 select 權限,並對透過 COPY FROM 插入值的資料表有 INSERT 權限。在命令中列出的欄位上具有欄位權限就足夠了。
如果為資料表啟用了資料列級安全性原則,則相關的 SELECT 安全原則將套用於 COPY table TO 語句。目前,具有資料列級安全性的資料表不支援 COPY FROM。請改用等效的 INSERT 語句。
在 COPY 命令中所指名的檔案由伺服器直接讀取或寫入,而不是由用戶端應用程序讀取或寫入。因此,它們必須儲存在資料庫伺服器主機上,或者具有它們的存取能力,而非用戶端。它們必須是 PostgreSQL 使用者帳號(伺服器執行的使用者 ID)可存取,可讀或可寫,而不是用戶端。同樣地,用 PROGRAM 指定的命令是由伺服器直接執行,而不是由用戶端應用程序執行,且必須由 PostgreSQL 使用者執行。COPY 指名的檔案或命令僅允許資料庫超級使用者使用,因為它允許讀取或寫入伺服器有權存取的任何檔案。
建議始終都將 COPY 中使用的檔案名稱指定為絕對路徑。這在 COPY TO 的情況下由伺服器是強制執行的,但對於 COPY FROM,您可以選擇由相對路徑指定的檔案中讀取。該路徑將相對於伺服器程序的工作目錄(通常是叢集的資料目錄)作為起點,而不是用戶端的工作目錄。
使用 PROGRAM 執行命令可能受作業系統的存取控制機制(如 SELinux)所限制。
COPY FROM 將呼叫目標資料表上的所有觸發器和檢查限制條件。但是,它不會呼叫規則。
對於標識欄位,COPY FROM 命令將會寫入輸入資料中提供的欄位值,如 INSERT 選項 OVERRIDING SYSTEM VALUE。
COPY 輸入和輸出受 DateStyle 影響。為確保對使用非預設 DateStyle 設定的其他 PostgreSQL 安裝的可移植性,應在使用 COPY TO 之前將 DateStyle 設定為 ISO。避免在 IntervalStyle 設定為 sql_standard 時轉存資料也是一個好主意,因為負的 interval 值可能會被 IntervalStyle 設定不同的伺服器誤解。
輸入資料根據 ENCODING 選項或目前用戶端編碼進行解譯,輸出資料以 ENCODING 或目前用戶端編碼進行編碼,即使資料未透過用戶端而直接由伺服器讀取或寫入檔案。
COPY 會在第一個錯誤時停止操作。這不應該會使 COPY TO 出現問題,但目標資料表已經收到了 COPY FROM 中的之前的資料列。這些資料列將不可見或無法存取,但它們仍佔用磁碟空間。 如果故障發生在大量的複製操作中,則可能相當於浪費大量磁碟空間。您可能需要呼叫 VACUUM 來恢復浪費的空間。
FORCE_NULL 和 FORCE_NOT_NULL 可以在同一個欄位上同時使用。這會導致將帶引號的空字串轉換為空值,將不帶引號的空字串轉換為空字串。
使用文字格式時,讀取或寫入的資料是一個文字檔案,資料表的每筆資料對應一行。行中的欄位由分隔字元分隔。欄位值本身是每個屬性的資料型別輸出函數所產生、或其輸入函數可接受的字串。值為 NULL 的欄位則以指定的 NULL 字串表示。如果輸入檔案的任何一行包含的欄位比預期的多或少,COPY FROM 將引發錯誤。如果指定了 OIDS,則 OID 會作為資料欄位之前的第一個欄位讀取或寫入。
End of data can be represented by a single line containing just backslash-period (\.
). An end-of-data marker is not necessary when reading from a file, since the end of file serves perfectly well; it is needed only when copying data to or from client applications using pre-3.0 client protocol.
Backslash characters (\
) can be used in the COPY
data to quote data characters that might otherwise be taken as row or column delimiters. In particular, the following characters must be preceded by a backslash if they appear as part of a column value: backslash itself, newline, carriage return, and the current delimiter character.
The specified null string is sent by COPY TO without adding any backslashes; conversely, COPY FROM matches the input against the null string before removing backslashes. Therefore, a null string such as \N cannot be confused with the actual data value \N (which would be represented as \\N).
The following special backslash sequences are recognized by COPY FROM
:
Presently, COPY TO
will never emit an octal or hex-digits backslash sequence, but it does use the other sequences listed above for those control characters.
Any other backslashed character that is not mentioned in the above table will be taken to represent itself. However, beware of adding backslashes unnecessarily, since that might accidentally produce a string matching the end-of-data marker (\.) or the null string (\N by default). These strings will be recognized before any other backslash processing is done.
It is strongly recommended that applications generating COPY data convert data newlines and carriage returns to the \n and \r sequences respectively. At present it is possible to represent a data carriage return by a backslash and carriage return, and to represent a data newline by a backslash and newline. However, these representations might not be accepted in future releases. They are also highly vulnerable to corruption if the COPY file is transferred across different machines (for example, from Unix to Windows or vice versa).
COPY TO will terminate each row with a Unix-style newline ("\n"). Servers running on Microsoft Windows instead output carriage return/newline ("\r\n"), but only for COPY to a server file; for consistency across platforms, COPY TO STDOUT always sends "\n" regardless of server platform. COPY FROM can handle lines ending with newlines, carriage returns, or carriage return/newlines. To reduce the risk of error due to un-backslashed newlines or carriage returns that were meant as data, COPY FROM will complain if the line endings in the input are not all alike.
此格式選項用於匯入和匯出許多其他應用程式(例如試算表)也常使用的逗號分隔(CSV, Comma Separated Value)檔案格式。它不會使用 PostgreSQL 的標準文字格式所使用轉譯規則,而是產成通用的 CSV 轉譯機制。
每個記錄中的值由 DELIMITER 字元分隔。如果該值包含 DELIMITER,QUOTE 字元,NULL 字串,Carriage Return 或換行字元,則整個值將以 QUOTE 字元前後夾住,以及在 QUOTE 字元或 ESCAPE 字元前面有轉譯字元。在特定欄位中輸出非 NULL 值時,也可以使用 FORCE_QUOTE 強制使用引號。
CSV 格式沒有區分空值和空字串的標準方法。PostgreSQL 的 COPY 透過引號來處理。輸出 NULL 作為 NULL 參數字串,並且不加引號,而與 NULL 參數字串相符的非 NULL 值則被加引號。例如,使用預設設定,將 NULL 寫入未加引號的空字串,而將空字串資料值寫入雙引號("")。 讀取時則遵循類似的規則。您可以使用 FORCE_NOT_NULL 來防止對特定欄位進行 NULL 輸入比較。您還可以使用 FORCE_NULL 將帶引號的空字串轉換為 NULL。
由於反斜線不是 CSV 格式的特殊字元,因此資料結尾標記 \. 也可能作為資料值出現。為避免任何誤解,在輸出時,以單獨項目出現在一行上的 \. 資料值會自動加上引號;在輸入時,若有加上引號,則不會被解釋為資料結尾標記。如果要載入的檔案是由另一個應用程式產生,該檔案只有一個未加引號的欄位,且值可能為 \.,則可能需要在輸入檔案中將該值加上引號。
在 CSV 格式中,所有字元均為有效字元。因此,如果加上引號的值被空白或 DELIMITER 以外的任何字元包圍,這些字元也會被包括在內。如果從以空白將 CSV 行填充至固定寬度的系統匯入資料,這可能會導致錯誤。如果出現這種情況,在將資料匯入 PostgreSQL 之前,可能需要預先處理 CSV 檔案以移除尾端的空白字元。
CSV 格式能夠識別並產生引號值中內含回車和換行字元的 CSV 檔案。因此,與 text 格式的檔案相比,這種檔案並非嚴格地每筆資料一行。
許多程式會產生奇怪的,有時是錯誤的 CSV 檔案,因此檔案格式更像是一種約定,而不是一種標準。因此,您可能會遇到一些無法使用此機制匯入的檔案,並且 COPY 也可能會產生成其他程式無法處理的檔案內容。
binary 格式選項使所有資料以二進位格式而不是文字形式儲存/讀取。它比 text 和 CSV 格式快一些,但二進位格式檔案在機器架構和 PostgreSQL 版本之間的可移植性較低。此外,二進位格式是資料型別專屬的;例如,它不能從 smallint 欄位輸出二進位資料並將其讀入 int 欄位,即使它在 text 格式中可以正常運作。
二進位檔案格式由檔案標頭,包含資料列資料的零個或多個 tuple 以及檔案結尾組成。標頭和資料按 network byte order 排列。
7.4 之前的 PostgreSQL 版本使用了不同的二進位檔案格式。
File Header
The file header consists of 15 bytes of fixed fields, followed by a variable-length header extension area. The fixed fields are:
Signature
11-byte sequence PGCOPY\n\377\r\n\0 — note that the zero byte is a required part of the signature. (The signature is designed to allow easy identification of files that have been munged by a non-8-bit-clean transfer. This signature will be changed by end-of-line-translation filters, dropped zero bytes, dropped high bits, or parity changes.)
Flags field
32-bit integer bit mask to denote important aspects of the file format. Bits are numbered from 0 (LSB) to 31 (MSB). Note that this field is stored in network byte order (most significant byte first), as are all the integer fields used in the file format. Bits 16-31 are reserved to denote critical file format issues; a reader should abort if it finds an unexpected bit set in this range. Bits 0-15 are reserved to signal backwards-compatible format issues; a reader should simply ignore any unexpected bits set in this range. Currently only one flag bit is defined, and the rest must be zero:
Bit 16
If 1, OIDs are included in the data; if 0, not.
Header extension area length
32-bit integer, length in bytes of remainder of header, not including self. Currently, this is zero, and the first tuple follows immediately. Future changes to the format might allow additional data to be present in the header. A reader should silently skip over any header extension data it does not know what to do with.
The header extension area is envisioned to contain a sequence of self-identifying chunks. The flags field is not intended to tell readers what is in the extension area. Specific design of header extension contents is left for a later release.
This design allows for both backwards-compatible header additions (add header extension chunks, or set low-order flag bits) and non-backwards-compatible changes (set high-order flag bits to signal such changes, and add supporting data to the extension area if needed).
Tuples
Each tuple begins with a 16-bit integer count of the number of fields in the tuple. (Presently, all tuples in a table will have the same count, but that might not always be true.) Then, repeated for each field in the tuple, there is a 32-bit length word followed by that many bytes of field data. (The length word does not include itself, and can be zero.) As a special case, -1 indicates a NULL field value. No value bytes follow in the NULL case.
There is no alignment padding or any other extra data between fields.
Presently, all data values in a binary-format file are assumed to be in binary format (format code one). It is anticipated that a future extension might add a header field that allows per-column format codes to be specified.
To determine the appropriate binary format for the actual tuple data you should consult the PostgreSQL source, in particular the *send and *recv functions for each column's data type (typically these functions are found in the src/backend/utils/adt/ directory of the source distribution).
If OIDs are included in the file, the OID field immediately follows the field-count word. It is a normal field except that it's not included in the field-count. In particular it has a length word — this will allow handling of 4-byte vs. 8-byte OIDs without too much pain, and will allow OIDs to be shown as null if that ever proves desirable.
File Trailer
The file trailer consists of a 16-bit integer word containing -1. This is easily distinguished from a tuple's field-count word.
A reader should report an error if a field-count word is neither -1 nor the expected number of columns. This provides an extra check against somehow getting out of sync with the data.
以下範例使用直線符號「|」作為欄位分隔字元,將資料表複製到用戶端:
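示意如下(country 資料表沿用下文的範例):

```sql
COPY country TO STDOUT (DELIMITER '|');
```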
要將檔案中的資料複製到 country 資料表中:
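示意如下(檔案路徑為示範用途):

```sql
COPY country FROM '/usr1/proj/bray/sql/country_data';
```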
要將名稱以「A」開頭的國家複製到檔案中:
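示意如下(欄位 country_name 與輸出路徑為示範用途):

```sql
COPY (SELECT * FROM country WHERE country_name LIKE 'A%')
    TO '/usr1/proj/bray/sql/a_list_countries.copy';
```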
要複製到壓縮檔案,可以透過外部壓縮程序輸出:
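示意如下(輸出路徑為示範用途):

```sql
COPY country TO PROGRAM 'gzip > /usr1/proj/bray/sql/country_data.gz';
```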
以下是適合從 STDIN 複製到資料表中的資料範例:
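一組可能的範例資料如下(兩個欄位之間以 tab 字元分隔):

```
AF      AFGHANISTAN
AL      ALBANIA
DZ      ALGERIA
ZM      ZAMBIA
ZW      ZIMBABWE
```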
請注意,每行上的空白實際上是 tab 字元。
以下是相同的資料,以二進位格式輸出。在透過 Unix 實用工具 od -c 過濾後顯示資料。該資料表有三個欄位;第一個是 char(2) 型別,第二個是 text 型別,第三個是 integer 型別。所有行在第三欄位中都具有空值。
SQL 標準中沒有 COPY 語句。
在 PostgreSQL 版本 9.0 之前使用了以下語法並且仍然支援:
請注意,在此語法中,BINARY 和 CSV 被視為獨立的關鍵字,而不是 FORMAT 選項的參數。
在 PostgreSQL 版本 7.3 之前使用了以下語法,並且仍然支援:
CREATE INDEX — 定義一個新的索引
CREATE INDEX 在指定關連的指定欄位上建構索引,該索引可以是資料表或具體化檢視表。索引主要用於增強資料庫效能(儘管不恰當的使用會導致效能降低)。
索引的主要欄位指定欄位名稱,或者作為括號中的表示式指定。如果索引方法支援多欄位索引,則可以指定多個欄位。
索引欄位可以是根據資料表的一個欄位或多個欄位的計算表示式。此功能可用於基於對基本資料的某些轉換來快速存取資料。例如,在 upper(col) 上計算的索引將允許子句 WHERE upper(col) = 'JIM' 使用索引。
PostgreSQL 提供索引方法 B-tree,hash,GiST,SP-GiST,GIN 和 BRIN。使用者也可以定義自己的索引方法,但這相當複雜。
WHERE 子句存在時,將建立部分索引。部分索引是只包含資料表一部分項目的索引,通常是比資料表其餘部分更有索引價值的部分。例如,如果您的資料表同時包含已開立發票和未開立發票的訂單,其中未開立發票的訂單只佔總資料表的一小部分,但卻是經常被查詢的部分,您可以透過僅在該部分上建立索引來提高效能。另一個可能的應用是使用帶有 UNIQUE 的 WHERE 來強制資料表子集的唯一性。有關更多討論,請參閱部分索引的相關章節。
WHERE 子句中使用的表示式只能引用基礎資料表的欄位,但它可以使用所有欄位,而不僅僅是被索引的欄位。目前,WHERE 中也禁止使用子查詢和彙總資料表示式。相同的限制適用於作為表示式的索引欄位。
索引定義中使用的所有函數和運算符必須是「immutable」,也就是說,它們的結果必須僅依賴於它們的參數,而不是任何外部影響(例如另一個資料表的內容或目前時間)。此限制可確保明確定義索引的行為。要在索引表示式或 WHERE 子句中使用使用者定義的函數,請記住在建立函數時將該函數標記為 immutable。
UNIQUE
在建立索引時(如果資料已存在)並且每次插入資料時,系統都會檢查資料表中的重複值。嘗試插入或更新如果導致重複項目的資料將產生錯誤。
CONCURRENTLY
IF NOT EXISTS
如果已存在具有相同名稱的關連,請不要拋出錯誤,在這種情況下發出 NOTICE。請注意,無法保證現有索引與已建立的索引類似。指定 IF NOT EXISTS 時需要索引名稱。
INCLUDE
The optional INCLUDE
clause specifies a list of columns which will be included in the index as non-key columns. A non-key column cannot be used in an index scan search qualification, and it is disregarded for purposes of any uniqueness or exclusion constraint enforced by the index. However, an index-only scan can return the contents of non-key columns without having to visit the index's table, since they are available directly from the index entry. Thus, addition of non-key columns allows index-only scans to be used for queries that otherwise could not use them.
It's wise to be conservative about adding non-key columns to an index, especially wide columns. If an index tuple exceeds the maximum size allowed for the index type, data insertion will fail. In any case, non-key columns duplicate data from the index's table and bloat the size of the index, thus potentially slowing searches. Furthermore, B-tree deduplication is never used with indexes that have a non-key column.
Columns listed in the INCLUDE
clause don't need appropriate operator classes; the clause can include columns whose data types don't have operator classes defined for a given access method.
Expressions are not supported as included columns since they cannot be used in index-only scans.
Currently, the B-tree and the GiST index access methods support this feature. In B-tree and the GiST indexes, the values of columns listed in the INCLUDE
clause are included in leaf tuples which correspond to heap tuples, but are not included in upper-level index entries used for tree navigation.
name
要建立的索引名稱。這裡不能包含綱要名稱;索引始終在與其父資料表相同的綱要中創建。如果省略該名稱,PostgreSQL會根據父資料表的名稱和索引的欄位名稱選擇合適的名稱。
ONLY
Indicates not to recurse creating indexes on partitions, if the table is partitioned. The default is to recurse.
table_name
要編制索引的資料表名稱(可以加上綱要名稱)。
method
要使用的索引方法的名稱。選項是 btree,hash,gist,spgist,gin 和 brin。預設方法是 btree。
column_name
資料表欄位的名稱。
expression
基於資料表的一個欄位或多個欄位的表示式。表示式通常必須與周圍的括號一起填寫,如語法中所示。但是,如果表示式具有函數呼叫的形式,則可以省略括號。
collation
用於索引的排序規則的名稱。預設情況下,索引使用為要索引的欄位宣告排序規則或要索引的表示式結果排序規則。具有非預設排序規則的索引對於涉及使用非預設排序規則的表示式查詢非常有用。
opclass
運算子類的名稱。請參閱下文了解詳情。
opclass_parameter
The name of an operator class parameter. See below for details.
ASC
指定遞增排序順序(預設值)。
DESC
指定遞減排序。
NULLS FIRST
指定 nulls 排在非 null 之前。這是指定 DESC 時的預設值。
NULLS LAST
指定 nulls 排在非 null 之後。這是未指定 DESC 時的預設值。
storage_parameter
tablespace_name
predicate
部分索引的限制條件表示式。
選擇性的 WITH 子句指定索引的儲存參數。每個索引方法都有自己的一組允許的儲存參數。B-tree,hash,GiST 和 SP-GiST 索引方法都接受此參數:
fillfactor
索引的 fillfactor 是一個百分比,用於決定索引方法嘗試填充索引頁面的程度。對於 B-tree,在初始索引建構期間以及在右端擴展索引時(插入新的最大索引值時),葉子頁面會填充到此百分比。如果頁面隨後被填滿,它們將被分割,導致索引效率逐漸下降。B-tree 的預設 fillfactor 為 90,但可以選擇 10 到 100 之間的任何整數值。如果資料表是靜態的,fillfactor 設為 100 能最小化索引的實體大小;但對於大量更新的資料表,較小的 fillfactor 能減少頁面分割的需要。其他索引方法以不同但大致類似的方式使用 fillfactor;預設的 fillfactor 會因方法而異。
B-tree 索引也接受以下參數:
deduplicate_items
(boolean
)
透過 ALTER INDEX 關閉 deduplicate_items 可以防止後續的插入觸發重複資料刪除,但是它本身並不會使現有的資料使用標準資料表示形式。
vacuum_cleanup_index_scale_factor
(floating point
)
GiST 索引另外接受此參數:
buffering
GIN 索引接受不同的參數:
fastupdate
透過 ALTER INDEX 關閉 fastupdate 可防止將來的插入進入擱置的索引項目列表,但本身不會更新以前的項目。您可能希望 VACUUM 資料表或之後呼叫 gin_clean_pending_list 函數以確保清空擱置列表。
gin_pending_list_limit
BRIN 索引接受不同的參數:
pages_per_range
autosummarize
定義每當在下一個頁面上檢測到插入時是否為前一頁面範圍進行摘要計算。
建立索引可能會干擾資料庫的日常操作。通常,PostgreSQL 會鎖定要對寫入進行索引的資料表,並通過對資料的單次掃描來執行整個索引建構。其他事務仍然可以讀取資料表,但如果它們嘗試插入,更新或刪除資料表中的資料列,它們將被阻擋,直到索引建構完成。如果系統是線上正式資料庫,這可能會產生嚴重影響。非常大的資料表可能需要很長時間才能被編入索引,即使對於較小的資料表,索引建構也可能會鎖定寫入程序,這些時間對於線上正式系統來說是不可接受的。
PostgreSQL 支援建構索引而不會鎖定寫入。透過指定 CREATE INDEX 的 CONCURRENTLY 選項來呼叫此方法。使用此選項時,PostgreSQL 必須對資料執行兩次掃描,此外,它必須等待可能修改或使用索引的所有事務。因此,這種方法比標準索引建構需要更多的工作,也需要更長的時間來完成。但是,由於它允許在建構索引時繼續正常操作,因此此方法對於在正式環境中增加新的索引很有用。當然,索引建立帶來的額外 CPU 和 I/O 負載可能會減慢其他操作。
如果在掃描資料表時出現問題,例如鎖死或唯一索引中的唯一性違規,則 CREATE INDEX 指令將會失敗但留下「無效」索引。出於查詢目的,該索引將被忽略,因為它可能不完整;但它仍然會消耗更新成本。psql \d 指令將回報此類索引為 INVALID:
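輸出類似如下(資料表、欄位與索引名稱為示範用途):

```
postgres=# \d tab
       Table "public.tab"
 Column |  Type   | Collation | Nullable | Default
--------+---------+-----------+----------+---------
 col    | integer |           |          |
Indexes:
    "idx" btree (col) INVALID
```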
在這種情況下,建議的恢復方法是刪除索引並再次嘗試同時執行 CREATE INDEX。(另一種可能性是使用 REINDEX 重建索引。但是,由於 REINDEX 不支持同步建構,因此該選項看起來不太有吸引力。)
同時建構唯一索引時的另一個警告是,當第二個資料表掃描開始時,已經對其他事務強制加上唯一性限制條件。這意味著在索引可供使用之前,甚至在索引建構最終失敗的情況下,可以在其他查詢中回報違反限制條件。此外,如果在第二次掃描中確實發生了故障,則「無效」索引將繼續強制執行其唯一性約束。
表示式索引和部分索引的同步建構也是支援的。在評估這些表示式時發生的錯誤可能導致類似於上面針對唯一性違規所描述的行為。
一般索引建立允許同一資料表上的其他一般索引建立同時執行,但一次只能在一個資料表上進行一個同步索引構立。在這兩種情況下,同時不允許在資料表上進行其他類型的結構變更。另一個區別是可以在事務塊中執行一般的 CREATE INDEX 指令,但 CREATE INDEX CONCURRENTLY 不能。
目前,只有B-tree,GiST,GIN 和 BRIN 索引方法支持多欄位索引。預設情況下最多可以指定 32 個欄位。(編譯 PostgreSQL 時可以變更此限制。)只有 B-tree 目前支援唯一索引。
When CREATE INDEX is invoked on a partitioned table, the default behavior is to recurse to all partitions to ensure they all have matching indexes. Each partition is first checked to determine whether an equivalent index already exists, and if so, that index will become attached as a partition index to the index being created, which will become its parent index. If no matching index exists, a new index will be created and automatically attached; the name of the new index in each partition will be determined as if no index name had been specified in the command. If the ONLY option is specified, no recursion is done, and the index is marked invalid. (ALTER INDEX ... ATTACH PARTITION marks the index valid, once all partitions acquire matching indexes.) Note, however, that any partition that is created in the future using CREATE TABLE ... PARTITION OF will automatically have a matching index, regardless of whether ONLY is specified.
對於支援有序掃描的索引方法(目前只有 B-tree),可以指定選擇性的子句 ASC、DESC、NULLS FIRST 和 NULLS LAST 來修改索引的排序順序。由於有序索引可以向前或向後掃描,因此建立單欄位的 DESC 索引通常沒有意義,該排序順序已經可以透過一般性索引取得。這些選項的價值在於,可以建立符合混合排序查詢所要求之排序順序的多欄位索引,例如 SELECT ... ORDER BY x ASC, y DESC。如果您需要在依賴索引以避免排序步驟的查詢中支援「nulls sort low」行為,而不是預設的「nulls sort high」,那麼 NULLS 選項就很有用。
PostgreSQL can build indexes while leveraging multiple CPUs in order to process the table rows faster. This feature is known as parallel index build. For index methods that support building indexes in parallel (currently, only B-tree), maintenance_work_mem specifies the maximum amount of memory that can be used by each index build operation as a whole, regardless of how many worker processes were started. Generally, a cost model automatically determines how many worker processes should be requested, if any.
You might want to reset parallel_workers after setting it as part of tuning an index build. This avoids inadvertent changes to query plans, since parallel_workers affects all parallel table scans.
While CREATE INDEX with the CONCURRENTLY option supports parallel builds without special restrictions, only the first table scan is actually performed in parallel.
PostgreSQL 的早期版本還有一個 R-tree 索引方法。此方法已被移除,因為它相較於 GiST 方法並沒有顯著優勢。如果指定了 USING rtree,CREATE INDEX 會將其解釋為 USING gist,以簡化舊資料庫轉換到 GiST 的工作。
To create a unique B-tree index on the column title in the table films:
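A command of this form might look like the following (the index name title_idx is illustrative):

```sql
CREATE UNIQUE INDEX title_idx ON films (title);
```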
To create a unique B-tree index on the column title with included columns director and rating in the table films:
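One way to write this (index name title_idx is illustrative):

```sql
CREATE UNIQUE INDEX title_idx ON films (title) INCLUDE (director, rating);
```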
To create a B-Tree index with deduplication disabled:
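A sketch of such a command, using the deduplicate_items storage parameter (index name is illustrative):

```sql
CREATE UNIQUE INDEX title_idx ON films (title) WITH (deduplicate_items = off);
```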
To create an index on the expression lower(title), allowing efficient case-insensitive searches:
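For example:

```sql
CREATE INDEX ON films ((lower(title)));
```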
(In this example we have chosen to omit the index name, so the system will choose a name, typically films_lower_idx.)
To create an index with non-default collation:
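A possible command (the index name and the German collation "de_DE" are illustrative; available collation names depend on the installation):

```sql
CREATE INDEX title_idx_german ON films (title COLLATE "de_DE");
```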
To create an index with non-default sort ordering of nulls:
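For example (index name is illustrative):

```sql
CREATE INDEX title_idx_nulls_low ON films (title NULLS FIRST);
```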
To create an index with non-default fill factor:
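For example, using the fillfactor storage parameter (index name is illustrative):

```sql
CREATE UNIQUE INDEX title_idx ON films (title) WITH (fillfactor = 70);
```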
To create a GIN index with fast updates disabled:
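A sketch of such a command (table and column names are illustrative):

```sql
CREATE INDEX gin_idx ON documents_archive USING GIN (locations) WITH (fastupdate = off);
```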
To create an index on the column code in the table films and have the index reside in the tablespace indexspace:
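For example (index name is illustrative):

```sql
CREATE INDEX code_idx ON films (code) TABLESPACE indexspace;
```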
To create a GiST index on a point attribute so that we can efficiently use box operators on the result of the conversion function:
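A sketch of this technique (table, column, and index names are illustrative), indexing the box produced from the point so that box operators can use the index:

```sql
CREATE INDEX pointloc ON points USING gist (box(location, location));

-- 此類查詢即可利用上面的索引
SELECT * FROM points WHERE box(location, location) && '(0,0),(1,1)'::box;
```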
To create an index without locking out writes to the table:
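For example, using the CONCURRENTLY option (table and index names are illustrative):

```sql
CREATE INDEX CONCURRENTLY sales_quantity_index ON sales_table (quantity);
```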
CREATE INDEX 是 PostgreSQL 延伸語法。SQL 標準中沒有索引的規定。
要在新資料庫中使用的字元集編碼。指定字串常數(例如 'SQL_ASCII')或整數編碼數字,或指定 DEFAULT 以使用預設編碼(即樣板資料庫的編碼)。PostgreSQL 伺服器支援的字元集在文件中關於字元集支援的章節有所描述。其他限制請參閱下面的說明。
將與新資料庫關連的資料表空間名稱,或 DEFAULT 以使用樣板資料庫的資料表空間。此資料表空間將是在此資料庫中建立物件時的預設資料表空間。有關更多訊息,請參閱 CREATE TABLESPACE。
使用 DROP DATABASE 移除資料庫。
工具程式 createdb 是一個封裝此指令的程式,為方便起見而提供。
雖然可以透過將其名稱指定為樣板來複製 template1 以外的資料庫,但這(目前)並非設計用來作為通用的「COPY DATABASE」工具。主要限制是在複製樣板資料庫時,不能有其他連線連到該樣板資料庫。如果啟動時存在任何其他連線,則 CREATE DATABASE 將失敗;否則,在 CREATE DATABASE 完成之前,對樣板資料庫的新連線將被鎖定。詳情請參閱下面的說明。
Before you can use CREATE EXTENSION to load an extension into a database, the extension's supporting files must be installed. Information about installing the extensions supplied with PostgreSQL can be found in the documentation on additional supplied modules.
The extensions currently available for loading can be identified from the pg_available_extensions or pg_available_extension_versions system views.
有關設計新的延伸功能的相關資訊,請參閱第 37.17 節。
The data type of the column. This can include array specifiers. For more information on the data types supported by PostgreSQL, refer to the chapter on data types.
The optional INHERITS clause specifies a list of tables from which the new foreign table automatically inherits all columns. Parent tables can be plain tables or foreign tables. See the similar form of CREATE TABLE for more details.
The name of an existing foreign server to use for the foreign table. For details on defining a server, see CREATE SERVER.
The CREATE FOREIGN TABLE command largely conforms to the SQL standard; however, much as with CREATE TABLE, NULL constraints and zero-column foreign tables are permitted. The ability to specify column default values is also a PostgreSQL extension. Table inheritance, in the form defined by PostgreSQL, is nonstandard.
參數的名稱。在某些語言(包括 SQL 和 PL/pgSQL)可讓你在函數中使用該名稱。對於其他語言而言,就函數本身而言,輸入參數的名稱只是額外的文件;但可以在呼叫函數時使用輸入參數名稱,以提高可讀性(請參閱)。在任何情況下,輸出參數的名稱都很重要,因為它會在結果的欄位型別中定義了欄位名稱。 (如果你省略輸出參數的名稱,系統將自行選擇一個預設的欄位名稱。)
列出呼叫該函數時應套用的轉換(transform)。轉換會在 SQL 型別和特定於語言的資料型別之間進行變換;請參閱 CREATE TRANSFORM。程序語言實作通常具有內建型別的既有知識,因此這些型別不需要在這裡列出。如果程序語言實作不知道如何處理某個型別,而且也沒有提供轉換,它將回退到轉換資料型別的預設行為,但這仍取決於該實作。
更多詳細訊息請參閱。
LEAKPROOF
表示該函數沒有副作用,除了其回傳值之外,它不會透露任何關於其參數的資訊。例如,對某些參數值拋出錯誤訊息、而對其他參數值則不會的函數,或者在任何錯誤訊息中包含參數值的函數,都不是防漏(leakproof)的。這會影響系統如何對使用 security_barrier 選項建立的檢視表,或啟用資料列級安全性的資料表執行查詢。系統會在查詢本身所包含、由使用者提供的任何非防漏條件之前,先執行來自安全原則和安全屏障檢視表的條件,以防止資料意外洩露。被標記為防漏的函數和運算子被視為可信任的,可以在安全原則和安全屏障檢視表的條件之前執行。此外,沒有參數、或者不會從安全屏障檢視表或資料表接收任何參數的函數,不必標記為防漏,也可以在安全條件之前執行。請參閱 CREATE VIEW 的說明。此選項只能由超級使用者設定。
有關允許的參數名稱和值的更多訊息,請參閱 SET 和第 19 章。
使用錢字號括弧(請參閱第 4.1.2.4 節)撰寫函數定義內容,而不是使用普通的單引號語法,通常會很有幫助。如果沒有錢字號括弧,函數定義中的任何單引號或反斜線都必須以加倍的方式跳脫。
當 C 語言原始碼中的函數名稱與 SQL 函數的名稱不同時,AS 子句的這種形式用於可動態載入的 C 語言函數。字串 obj_file 是包含已編譯 C 函數的共享函式庫檔案名稱,其解釋方式與 LOAD 指令相同。字串 link_symbol 是函數的連結符號,即 C 語言原始碼中該函數的名稱。如果省略連結符號,則假設它與所定義的 SQL 函數名稱相同。
有關撰寫函數的更多訊息,請參閱第 37.3 節。
這裡有一些簡單的例子可以幫助你開始。有關更多訊息和範例,請參閱第 37 章。
還有一點需要注意的是,預設情況下,新建立函數的執行權限將會授予 PUBLIC(更多訊息請參閱 GRANT)。通常情況下,你會希望將 security definer 函數的使用權限僅限於某些使用者。為此,你必須撤銷預設的 PUBLIC 權限,然後選擇性地授予執行權限。為了避免出現任何人都能存取新函數的空窗期,請在同一個交易事務中建立它並設定權限。例如:
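以下是一種可能的寫法(資料表 pwds、函數 check_password 與角色 admins 均為示意):

```sql
BEGIN;
CREATE FUNCTION check_password(uname TEXT, pass TEXT)
RETURNS BOOLEAN AS $$
DECLARE passed BOOLEAN;
BEGIN
    SELECT (pwd = $2) INTO passed
    FROM   pwds
    WHERE  username = $1;
    RETURN passed;
END;
$$ LANGUAGE plpgsql SECURITY DEFINER;

-- 撤銷預設授予 PUBLIC 的執行權限
REVOKE ALL ON FUNCTION check_password(text, text) FROM PUBLIC;
-- 僅選擇性地授予需要的角色
GRANT EXECUTE ON FUNCTION check_password(text, text) TO admins;
COMMIT;
```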
, , , ,
Event triggers are disabled in single-user mode (see the postgres reference page). If an erroneous event trigger disables the database so much that you can't even drop the trigger, restart in single-user mode and you'll be able to do that.
Forbid the execution of any command:
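A sketch of such a trigger (function and trigger names are illustrative; on PostgreSQL versions before 11, write EXECUTE PROCEDURE instead of EXECUTE FUNCTION):

```sql
CREATE OR REPLACE FUNCTION abort_any_command()
  RETURNS event_trigger
  LANGUAGE plpgsql
AS $$
BEGIN
  -- tg_tag 是目前正要執行之指令的 command tag
  RAISE EXCEPTION 'command % is disabled', tg_tag;
END;
$$;

CREATE EVENT TRIGGER abort_ddl ON ddl_command_start
  EXECUTE FUNCTION abort_any_command();
```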
, ,
, , , ,
, ,
Each backend running COPY will report its progress in the pg_stat_progress_copy view. See the progress reporting documentation for details.
SELECT、VALUES、INSERT、UPDATE 或 DELETE 指令,其結果將被複製。請注意,查詢周圍需要括號。
不要將 COPY 與 psql 指令 \copy 混淆。\copy 會呼叫 COPY FROM STDIN 或 COPY TO STDOUT,然後將資料讀取/儲存在 psql 用戶端可存取的檔案中。因此,使用 \copy 時,檔案的可存取性和存取權限取決於用戶端而不是伺服器端。
使用此選項時,PostgreSQL 在建立索引時,不會採取任何會阻擋該資料表上同時進行插入、更新或刪除的鎖定;而標準的索引建立則會鎖定資料表上的寫入(但不會鎖定讀取),直到完成為止。使用此選項時有幾點需要注意,請參閱下面關於同步建立索引的說明。
特定於索引方法的儲存參數名稱。有關詳情,請參閱下面的索引儲存參數說明。
用於建立索引的資料表空間。如果未指定,則會參考 default_tablespace;若是臨時資料表上的索引,則參考 temp_tablespaces。
控制 B-tree 重複資料刪除(deduplication)技術的使用。設定為 ON 或 OFF 以啟用或停用該機制。(允許使用 ON 和 OFF 的其他寫法。)預設值為 ON。
Per-index value for vacuum_cleanup_index_scale_factor.
確定是否使用緩衝建構技術來建構索引。使用 OFF 時停用,使用 ON 時啟用。當使用 AUTO 時,最初是停用的,但一旦索引大小達到 effective_cache_size,就會立即啟用。預設值為 AUTO。
此設定控制快速更新技術的運用。它是一個布林參數:ON 啟用快速更新,OFF 停用它。(允許使用 ON 和 OFF 的其他寫法。)預設為 ON。
自訂 gin_pending_list_limit 參數。此值以 KB 為單位。
定義構成 BRIN 索引每個項目的一個區塊範圍(block range)的資料表區塊數(更多細節請參閱 BRIN 索引的章節)。預設值為 128。
在同步索引建構中,索引實際上會在一個交易事務中登錄到系統目錄,然後在另外兩個交易事務中進行兩次資料表掃描。在每次掃描資料表之前,索引建構必須等待已經修改過該資料表的現有交易事務結束。在第二次掃描之後,索引建構還必須等待任何持有早於第二次掃描之快照的交易事務結束。最後,索引可以被標記為可用,CREATE INDEX 指令隨之完成。但即使如此,索引也可能無法立即用於查詢:在最壞的情況下,只要存在早於索引建構開始的交易事務,索引就無法被使用。
有關何時可以使用索引、何時不會使用索引,以及哪些特定情況下索引有用處的訊息,請參閱第 11 章。
可以為索引的每個欄位指定運算子類。運算子類識別該欄位的索引要使用的運算子。例如,4 bytes 整數的 B-tree 索引將使用 int4_ops 類;此運算子類包括 4 bytes 整數的比較函數。實際上,欄位資料型別的預設運算子類通常就足夠了。擁有運算子類的要點是,對於某些資料型別,可能存在多種有意義的排序。例如,我們可能希望按絕對值或按複數資料型別的實部進行排序。我們可以透過為資料型別定義兩個運算子類,然後在建立索引時選擇適當的類來實現。有關運算子類的更多訊息,請參閱相關章節。
對於大多數索引方法,建立索引的速度取決於 maintenance_work_mem 的設定。較大的值將減少索引建立所需的時間,只要您不要使其大於真正可用的記憶體容量,否則將迫使主機使用 SWAP。
Parallel index builds may benefit from increasing maintenance_work_mem where an equivalent serial index build will see little or no benefit. Note that maintenance_work_mem may influence the number of worker processes requested, since parallel workers must have at least a 32MB share of the total maintenance_work_mem budget. There must also be a remaining 32MB share for the leader process. Increasing maintenance_work_mem may allow more workers to be used, which will reduce the time needed for index creation, so long as the index build is not already I/O bound. Of course, there should also be sufficient CPU capacity that would otherwise lie idle.
Setting a value for parallel_workers via ALTER TABLE directly controls how many parallel worker processes will be requested by a CREATE INDEX against the table. This bypasses the cost model completely, and prevents maintenance_work_mem from affecting how many parallel workers are requested. Setting parallel_workers to 0 via ALTER TABLE will disable parallel index builds on the table in all cases.
使用 DROP INDEX 移除索引。
\b : Backspace (ASCII 8)
\f : Form feed (ASCII 12)
\n : Newline (ASCII 10)
\r : Carriage return (ASCII 13)
\t : Tab (ASCII 9)
\v : Vertical tab (ASCII 11)
\digits : Backslash followed by one to three octal digits specifies the byte with that numeric code
\xdigits : Backslash x followed by one or two hex digits specifies the byte with that numeric code
CREATE PROCEDURE — define a new procedure
CREATE PROCEDURE defines a new procedure. CREATE OR REPLACE PROCEDURE will either create a new procedure, or replace an existing definition. To be able to define a procedure, the user must have the USAGE privilege on the language.
If a schema name is included, then the procedure is created in the specified schema. Otherwise it is created in the current schema. The name of the new procedure must not match any existing procedure or function with the same input argument types in the same schema. However, procedures and functions of different argument types can share a name (this is called overloading).
To replace the current definition of an existing procedure, use CREATE OR REPLACE PROCEDURE. It is not possible to change the name or argument types of a procedure this way (if you tried, you would actually be creating a new, distinct procedure).
When CREATE OR REPLACE PROCEDURE is used to replace an existing procedure, the ownership and permissions of the procedure do not change. All other procedure properties are assigned the values specified or implied in the command. You must own the procedure to replace it (this includes being a member of the owning role).
The user that creates the procedure becomes the owner of the procedure.
To be able to create a procedure, you must have USAGE privilege on the argument types.
name
The name (optionally schema-qualified) of the procedure to create.
argmode
The mode of an argument: IN, INOUT, or VARIADIC. If omitted, the default is IN. (OUT arguments are currently not supported for procedures. Use INOUT instead.)
argname
The name of an argument.
argtype
The data type(s) of the procedure's arguments (optionally schema-qualified), if any. The argument types can be base, composite, or domain types, or can reference the type of a table column.
Depending on the implementation language it might also be allowed to specify “pseudo-types” such as cstring. Pseudo-types indicate that the actual argument type is either incompletely specified, or outside the set of ordinary SQL data types.
The type of a column is referenced by writing table_name.column_name%TYPE. Using this feature can sometimes help make a procedure independent of changes to the definition of a table.
default_expr
An expression to be used as default value if the parameter is not specified. The expression has to be coercible to the argument type of the parameter. All input parameters following a parameter with a default value must have default values as well.
lang_name
The name of the language that the procedure is implemented in. It can be sql, c, internal, or the name of a user-defined procedural language, e.g. plpgsql. Enclosing the name in single quotes is deprecated and requires matching case.
TRANSFORM { FOR TYPE type_name } [, ... ]
Lists which transforms a call to the procedure should apply. Transforms convert between SQL types and language-specific data types; see CREATE TRANSFORM. Procedural language implementations usually have hardcoded knowledge of the built-in types, so those don't need to be listed here. If a procedural language implementation does not know how to handle a type and no transform is supplied, it will fall back to a default behavior for converting data types, but this depends on the implementation.
[EXTERNAL] SECURITY INVOKER
[EXTERNAL] SECURITY DEFINER
SECURITY INVOKER indicates that the procedure is to be executed with the privileges of the user that calls it. That is the default. SECURITY DEFINER specifies that the procedure is to be executed with the privileges of the user that owns it.
The key word EXTERNAL is allowed for SQL conformance, but it is optional since, unlike in SQL, this feature applies to all procedures not only external ones.
A SECURITY DEFINER procedure cannot execute transaction control statements (for example, COMMIT and ROLLBACK, depending on the language).
configuration_parameter
value
The SET clause causes the specified configuration parameter to be set to the specified value when the procedure is entered, and then restored to its prior value when the procedure exits. SET FROM CURRENT saves the value of the parameter that is current when CREATE PROCEDURE is executed as the value to be applied when the procedure is entered.
If a SET clause is attached to a procedure, then the effects of a SET LOCAL command executed inside the procedure for the same variable are restricted to the procedure: the configuration parameter's prior value is still restored at procedure exit. However, an ordinary SET command (without LOCAL) overrides the SET clause, much as it would do for a previous SET LOCAL command: the effects of such a command will persist after procedure exit, unless the current transaction is rolled back.
If a SET clause is attached to a procedure, then that procedure cannot execute transaction control statements (for example, COMMIT and ROLLBACK, depending on the language).
See SET and Chapter 19 for more information about allowed parameter names and values.
definition
A string constant defining the procedure; the meaning depends on the language. It can be an internal procedure name, the path to an object file, an SQL command, or text in a procedural language.
It is often helpful to use dollar quoting (see Section 4.1.2.4) to write the procedure definition string, rather than the normal single quote syntax. Without dollar quoting, any single quotes or backslashes in the procedure definition must be escaped by doubling them.
obj_file, link_symbol
This form of the AS clause is used for dynamically loadable C language procedures when the procedure name in the C language source code is not the same as the name of the SQL procedure. The string obj_file is the name of the shared library file containing the compiled C procedure, and is interpreted as for the LOAD command. The string link_symbol is the procedure's link symbol, that is, the name of the procedure in the C language source code. If the link symbol is omitted, it is assumed to be the same as the name of the SQL procedure being defined.
When repeated CREATE PROCEDURE calls refer to the same object file, the file is only loaded once per session. To unload and reload the file (perhaps during development), start a new session.
See CREATE FUNCTION for more details on function creation that also apply to procedures.
Use CALL to execute a procedure.
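For example (the table name tbl is illustrative):

```sql
CREATE PROCEDURE insert_data(a integer, b integer)
LANGUAGE SQL
AS $$
INSERT INTO tbl VALUES (a);
INSERT INTO tbl VALUES (b);
$$;

CALL insert_data(1, 2);
```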
A CREATE PROCEDURE command is defined in the SQL standard. The PostgreSQL version is similar but not fully compatible. For details see also CREATE FUNCTION.
CREATE DOMAIN — define a new domain
CREATE DOMAIN creates a new domain. A domain is essentially a data type with optional constraints (restrictions on the allowed set of values). The user who defines a domain becomes its owner.
If a schema name is given (for example, CREATE DOMAIN myschema.mydomain ...) then the domain is created in the specified schema. Otherwise it is created in the current schema. The domain name must be unique among the types and domains existing in its schema.
Domains are useful for abstracting common constraints on fields into a single location for maintenance. For example, several tables might contain email address columns, all requiring the same CHECK constraint to verify the address syntax. Define a domain rather than setting up each table's constraint individually.
To be able to create a domain, you must have USAGE privilege on the underlying type.
name
The name (optionally schema-qualified) of a domain to be created.
data_type
The underlying data type of the domain. This can include array specifiers.
collation
An optional collation for the domain. If no collation is specified, the underlying data type's default collation is used. The underlying type must be collatable if COLLATE is specified.
DEFAULT expression
The DEFAULT clause specifies a default value for columns of the domain data type. The value is any variable-free expression (but subqueries are not allowed). The data type of the default expression must match the data type of the domain. If no default value is specified, then the default value is the null value.
The default expression will be used in any insert operation that does not specify a value for the column. If a default value is defined for a particular column, it overrides any default associated with the domain. In turn, the domain default overrides any default value associated with the underlying data type.
CONSTRAINT constraint_name
An optional name for a constraint. If not specified, the system generates a name.
NOT NULL
Values of this domain are prevented from being null (but see notes below).
NULL
Values of this domain are allowed to be null. This is the default.
This clause is only intended for compatibility with nonstandard SQL databases. Its use is discouraged in new applications.
CHECK (expression)
CHECK clauses specify integrity constraints or tests which values of the domain must satisfy. Each constraint must be an expression producing a Boolean result. It should use the key word VALUE to refer to the value being tested. Expressions evaluating to TRUE or UNKNOWN succeed. If the expression produces a FALSE result, an error is reported and the value is not allowed to be converted to the domain type.
Currently, CHECK expressions cannot contain subqueries nor refer to variables other than VALUE.
When a domain has multiple CHECK constraints, they will be tested in alphabetical order by name. (PostgreSQL versions before 9.5 did not honor any particular firing order for CHECK constraints.)
Domain constraints, particularly NOT NULL, are checked when converting a value to the domain type. It is possible for a column that is nominally of the domain type to read as null despite there being such a constraint. For example, this can happen in an outer-join query, if the domain column is on the nullable side of the outer join. A more subtle example follows.
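One statement that exhibits this behavior might be (table and column names are illustrative):

```sql
INSERT INTO tab (domcol) VALUES ((SELECT domcol FROM tab WHERE false));
```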
The empty scalar sub-SELECT will produce a null value that is considered to be of the domain type, so no further constraint checking is applied to it, and the insertion will succeed.
It is very difficult to avoid such problems, because of SQL's general assumption that a null value is a valid value of every data type. Best practice therefore is to design a domain's constraints so that a null value is allowed, and then to apply column NOT NULL constraints to columns of the domain type as needed, rather than directly to the domain type.
This example creates the us_postal_code data type and then uses the type in a table definition. A regular expression test is used to verify that the value looks like a valid US postal code:
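A sketch of such a definition (table and column names are illustrative):

```sql
CREATE DOMAIN us_postal_code AS TEXT
CHECK(
   VALUE ~ '^\d{5}$'
OR VALUE ~ '^\d{5}-\d{4}$'
);

CREATE TABLE us_snail_addy (
  address_id SERIAL PRIMARY KEY,
  street1 TEXT NOT NULL,
  street2 TEXT,
  city TEXT NOT NULL,
  postal us_postal_code NOT NULL
);
```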
The command CREATE DOMAIN conforms to the SQL standard.
CREATE PUBLICATION — define a new publication
CREATE PUBLICATION adds a new publication into the current database. The publication name must be distinct from the name of any existing publication in the current database.
A publication is essentially a group of tables whose data changes are intended to be replicated through logical replication. See Section 30.1 for details about how publications fit into the logical replication setup.
name
The name of the new publication.
FOR TABLE
Specifies a list of tables to add to the publication. If ONLY is specified before the table name, only that table is added to the publication. If ONLY is not specified, the table and all its descendant tables (if any) are added. Optionally, * can be specified after the table name to explicitly indicate that descendant tables are included. This does not apply to a partitioned table, however. The partitions of a partitioned table are always implicitly considered part of the publication, so they are never explicitly added to the publication.
Only persistent base tables and partitioned tables can be part of a publication. Temporary tables, unlogged tables, foreign tables, materialized views, and regular views cannot be part of a publication.
When a partitioned table is added to a publication, all of its existing and future partitions are implicitly considered to be part of the publication. So, even operations that are performed directly on a partition are also published via publications that its ancestors are part of.
FOR ALL TABLES
Marks the publication as one that replicates changes for all tables in the database, including tables created in the future.
WITH ( publication_parameter [= value] [, ... ] )
This clause specifies optional parameters for a publication. The following parameters are supported:
publish (string)
This parameter determines which DML operations will be published by the new publication to the subscribers. The value is a comma-separated list of operations. The allowed operations are insert, update, delete, and truncate. The default is to publish all actions, and so the default value for this option is 'insert, update, delete, truncate'.
publish_via_partition_root (boolean)
This parameter determines whether changes in a partitioned table (or on its partitions) contained in the publication will be published using the identity and schema of the partitioned table rather than that of the individual partitions that are actually changed; the latter is the default. Enabling this allows the changes to be replicated into a non-partitioned table or a partitioned table consisting of a different set of partitions.
If this is enabled, TRUNCATE operations performed directly on partitions are not replicated.
If neither FOR TABLE nor FOR ALL TABLES is specified, then the publication starts out with an empty set of tables. That is useful if tables are to be added later.
The creation of a publication does not start replication. It only defines a grouping and filtering logic for future subscribers.
To create a publication, the invoking user must have the CREATE privilege for the current database. (Of course, superusers bypass this check.)
To add a table to a publication, the invoking user must have ownership rights on the table. The FOR ALL TABLES clause requires the invoking user to be a superuser.
The tables added to a publication that publishes UPDATE and/or DELETE operations must have REPLICA IDENTITY defined. Otherwise those operations will be disallowed on those tables.
For an INSERT ... ON CONFLICT command, the publication will publish the operation that actually results from the command. So depending on the outcome, it may be published as either INSERT or UPDATE, or it may not be published at all.
COPY ... FROM commands are published as INSERT operations.
DDL operations are not published.
Create a publication that publishes all changes in two tables:
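For example (publication and table names are illustrative):

```sql
CREATE PUBLICATION mypublication FOR TABLE users, departments;
```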
Create a publication that publishes all changes in all tables:
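For example (publication name is illustrative):

```sql
CREATE PUBLICATION alltables FOR ALL TABLES;
```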
Create a publication that only publishes INSERT operations in one table:
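For example, using the publish parameter (publication and table names are illustrative):

```sql
CREATE PUBLICATION insert_only FOR TABLE mydata
    WITH (publish = 'insert');
```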
CREATE PUBLICATION is a PostgreSQL extension.
CREATE MATERIALIZED VIEW — 定義一個新的具體化檢視表
CREATE MATERIALIZED VIEW 定義查詢的具體化檢視表。執行該查詢並在命令完成後將資料儲存於檢視表中(除非使用 WITH NO DATA),並可在稍後使用 REFRESH MATERIALIZED VIEW 更新。
CREATE MATERIALIZED VIEW 與 CREATE TABLE AS 類似,只是它還會記住用於初始化檢視表的查詢,以便稍後可以根據需要更新它。具體化檢視表具有許多與資料表相同的屬性,但不支援臨時具體化檢視表或自動產生 OID。
IF NOT EXISTS
如果已經存在具有相同名稱的具體化檢視表的話,請不要拋出錯誤。在這種情況下會發布通知。請注意,這不能保證現有的具體化檢視表與想要建立的檢視表相似。
table_name
要建立的具體化檢視表名稱(可選擇性加上所屬綱要)。
column_name
新的具體化檢視表中欄位的名稱。如果未提供欄位名稱,則從查詢的輸出的欄位名稱中取得它們。
WITH ( storage_parameter [= value] [, ... ] )
此子句為新的具體化檢視表指定選擇性的儲存參數;請參閱儲存參數選項了解更多訊息。CREATE TABLE 支援的所有參數也都支援 CREATE MATERIALIZED VIEW,只有 OIDS 除外。有關更多訊息,請參閱 CREATE TABLE。
TABLESPACE
tablespace_name
tablespace_name 用於要在其中建立新的具體化檢視表的資料表空間名稱。如果未指定,則會使用 default_tablespace 設定。
query
一個 SELECT、TABLE 或 VALUES 指令。該查詢將在安全限制的操作環境中執行;特別是,對自身會建立臨時資料表之函數的呼叫將會失敗。
WITH [ NO ] DATA
此子句指定是否需要在建立時把資料填入具體化檢視表。如果沒有的話,則具體化檢視表將被標記為不可進行資料掃描,並且在使用 REFRESH MATERIALIZED VIEW 之前都無法查詢。
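以下是一個示意(資料表 films 與其欄位均為假設),先以 WITH NO DATA 建立未填入資料的具體化檢視表,之後再以 REFRESH MATERIALIZED VIEW 填入資料:

```sql
CREATE MATERIALIZED VIEW comedies AS
    SELECT * FROM films WHERE kind = 'Comedy'
    WITH NO DATA;

-- 在此之前,查詢 comedies 會回報該具體化檢視表尚無法掃描
REFRESH MATERIALIZED VIEW comedies;
```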
CREATE MATERIALIZED VIEW is a PostgreSQL extension.
ALTER MATERIALIZED VIEW, CREATE TABLE AS, CREATE VIEW, DROP MATERIALIZED VIEW, REFRESH MATERIALIZED VIEW
CREATE LANGUAGE — 宣告一種新的程序語言
CREATE LANGUAGE 使用 PostgreSQL 資料庫註冊新的程序語言。隨後即可使用這種新語言定義函數和觸發器程序。
注意:從 PostgreSQL 9.1 開始,大多數程序語言都被改製成「延伸功能(extension)」,因此應該使用 CREATE EXTENSION 而不是 CREATE LANGUAGE 來安裝。直接使用 CREATE LANGUAGE 的情況,現在應該僅限於延伸功能的安裝腳本。如果資料庫中存在未封裝的程序語言(可能是升級後的結果),則可以使用 CREATE EXTENSION langname FROM unpackaged 將其轉換為延伸功能。
CREATE LANGUAGE 有效地將語言名稱與負責執行用該語言撰寫的函數與語言處理函數相關聯。有關語言處理程序的更多訊息,請參閱第 55 章。
CREATE LANGUAGE 指令有兩種形式。在第一種形式中,使用者只提供所需語言的名稱,PostgreSQL 伺服器會查詢 pg_pltemplate 系統目錄以確定正確的參數。在第二種形式中,使用者提供語言參數以及語言名稱。第二種形式可用於建立未在 pg_pltemplate 中定義的語言,但此方法已被視為過時。
當伺服器在 pg_pltemplate 目錄中找到指定的語言名稱項目時,即使該命令包含語言參數,它也將使用目錄的資訊。此行為簡化了舊的備份檔案載入,這些備份檔案可能包含有關語言支援功能但過時的訊息。
通常,使用者必須具有 PostgreSQL 超級使用者權限才能註冊新的語言。但是,如果語言在 pg_pltemplate 目錄中列出並且標記為允許由資料庫擁有者建立(tmpldbacreate 為 true),則資料庫的擁有者可以在該資料庫中註冊新的語言。預設情況下,資料庫擁有者可以建立受信任的語言,但超級使用者可以透過修改 pg_pltemplate 的內容來調整它。語言的建立者成為其擁有者,以後可以將其移除,重新命名或將其分配給新的擁有者。
CREATE OR REPLACE LANGUAGE 將註冊新的語言或更換現有的定義。如果該語言已存在,則其參數將根據指定的值或從 pg_pltemplate 取得,但語言的擁有權和權限設定不會更改,並且假定使用該語言撰寫的任何現有函數仍然有效。除了建立語言的普通權限要求之外,使用者還必須是現有語言的擁有者或超級使用者。REPLACE 主要用於確保語言存在。如果該語言具有 pg_pltemplate 項目,則 REPLACE 實際上不會變更現有定義的任何內容,除非在建立語言後修改了 pg_pltemplate 項目的特殊情況。
TRUSTED
TRUSTED 表該語言不會授予使用者不應該擁有的資料存取權限。如果在註冊語言時省略了此關鍵字,則只有具有 PostgreSQL 超級使用者權限的使用者才能使用該語言建立新的函數。
PROCEDURAL
這是一個無功能的修飾詞。
name
新的程序語言名稱。此名稱在資料庫中的語言中必須是唯一的。
為了向下相容,名稱可以用單引號括起來。
HANDLER
call_handler
call_handler 是先前註冊的函數名稱,將呼叫該函數來執行程序語言的函數。程序語言的呼叫處理程序必須以編譯語言撰寫(例如 C,使用 version 1 呼叫慣例),並在 PostgreSQL 中註冊為不帶參數、回傳 language_handler 型別的函數;language_handler 是一種佔位(placeholder)型別,僅用於將函數識別為呼叫處理程序。
INLINE
inline_handler
inline_handler 是先前註冊的函數名稱,該函數將被呼叫以執行此語言的匿名程式碼區塊(DO 指令)。如果未指定 inline_handler 函數,則該語言不支援匿名程式碼區塊。處理函數必須接受一個型別為 internal 的參數,即 DO 指令的內部形式,且通常回傳 void。處理程序的回傳值會被忽略。
VALIDATOR
valfunction
valfunction 是先前註冊的函數名稱,該函數將在宣告語言中的新函數時呼叫,以驗證新函數。如果未指定驗證程序功能,則在建立新函數時不會檢查該函數。驗證程序函數必須使用一個型別為 oid 的參數,該參數將是要建立的函數 OID,並且通常回傳為 void。
驗證程序函數通常會檢查函數的語法正確性,但它也可以查看函數的其他屬性。例如,如果語言無法處理某些參數型別。要發出錯誤信號,驗證程序函數應使用ereport() 函數。該函數的回傳值將被忽略。
如果伺服器在 pg_pltemplate 中具有指定語言名稱的項目,則忽略 TRUSTED 選項和支援函數名稱。
使用 DROP LANGUAGE 移除程序語言。
系統目錄 pg_language(參閱第 51.29 節)記錄有關目前安裝的語言訊息。此外,psql 指令 \dL 可列出已安裝的語言。
要以程序語言建立函數,使用者必須具有該語言的 USAGE 權限。預設情況下,USAGE 被授予 PUBLIC(即每個人)在可信任的語言上。如果需要,可以撤銷此權限。
程序語言在各個資料庫之間是獨立。但是,可以在 template1 資料庫中安裝一種語言,這將使其在所有後續建立的資料庫中自動可用。
如果伺服器在 pg_pltemplate 中沒有該語言的項目,則呼叫處理函數、內嵌處理函數(如果有)和驗證程序函數(如果有)必須已經存在。但是當有項目時,處理函數就不一定需要存在;如果資料庫中不存在,它們將會自動被定義。(如果實作該語言的共享函式庫在安裝環境中不可用,則可能導致 CREATE LANGUAGE 失敗。)
在 7.3 之前的 PostgreSQL 版本中,有必要將處理函數宣告為回傳 placeholder 型別 opaque,而不是 language_handler。為了支援載入舊的備份檔案,CREATE LANGUAGE 將接受宣告為回傳 opaque 的函數,但它會發出通知並將函數宣告的回傳型別變更為 language_handler。
建立任何標準程序語言的最好方式是:
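例如(以 plperl 為例):

```sql
CREATE LANGUAGE plperl;
```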
對於 pg_pltemplate 目錄中未知的語言,需要這樣的指令程序:
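以下是一個示意(語言與函式庫名稱 plsample 均為假設),先註冊呼叫處理函數,再以該處理函數宣告語言:

```sql
CREATE FUNCTION plsample_call_handler() RETURNS language_handler
    AS '$libdir/plsample'
    LANGUAGE C;

CREATE LANGUAGE plsample
    HANDLER plsample_call_handler;
```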
CREATE LANGUAGE 是 PostgreSQL 的延伸功能。
ALTER LANGUAGE, CREATE FUNCTION, DROP LANGUAGE, GRANT, REVOKE
CREATE ROLE — 定義一個新的資料庫角色
CREATE ROLE 將新的角色加到 PostgreSQL 資料庫叢集之中。角色是可以擁有資料庫物件並具有資料庫權限的實體;根據使用方式的不同,角色可以被視為「使用者」、「群組」或是兩者兼具。有關管理使用者和身份驗證的訊息,請參閱第 21 章和第 20 章。 您必須具有 CREATEROLE 權限或成為資料庫的超級使用者才能使用此命令。
請注意,角色是在資料庫叢集等級所定義的,因此在叢集中的所有資料庫中都是有效的。
name
新角色的名稱。
SUPERUSER
NOSUPERUSER
這個子句決定新角色是否為「超級使用者」,他可以覆寫資料庫中的所有存取限制。超級使用者的狀態很危險,只能在真正需要時才使用。您必須自己成為超級使用者才能建立新的超級使用者。如果未指定,則 NOSUPERUSER 是預設值。
CREATEDB
NOCREATEDB
這個子句定義了角色是否可以建立資料庫。如果指定了 CREATEDB,則被定義的角色將被允許建立新的資料庫。指定 NOCREATEDB 則是不允許建立資料庫的角色。如果未指定,則預設為 NOCREATEDB。
CREATEROLE
NOCREATEROLE
這個子句決定是否允許角色建立新角色(即執行 CREATE ROLE)。具有 CREATEROLE 權限的角色也可以變更和刪除其他角色。如果未指定,則預設為 NOCREATEROLE。
INHERIT
NOINHERIT
這個子句決定角色是否「繼承」它所屬角色的權限。具有 INHERIT 屬性的角色可以自動使用其直接或間接所屬之所有角色已被授予的任何資料庫權限。若沒有 INHERIT,另一個角色的成員資格僅賦予對該角色執行 SET ROLE 的能力;其他角色的權限只有在這麼做之後才可使用。如果未指定,INHERIT 是預設值。
LOGIN
NOLOGIN
這個子句決定是否允許角色登入;也就是說,在用戶端連線期間,角色是否可以作為初始連線的授權名稱。具有 LOGIN 屬性的角色可以被認為是一個使用者。沒有此屬性的角色對於管理資料庫權限很有用,但不是一般認知上的使用者。如果未指定,則除了通過其替代指令 CREATE USER 使用 CREATE ROLE 時,NOLOGIN 是預設值。
REPLICATION
NOREPLICATION
這個子句確定角色是否是可以進行複製工作的角色。角色必須具有此屬性(或成為超級使用者)才能以複製模式(實體或邏輯複製)連線到伺服器,並且能夠建立或刪除複製單元。具有 REPLICATION 屬性的角色是一個非常高權限的角色,只能用於實際用於複製工作的角色。 如果未指定,則預設為 NOREPLICATION。
BYPASSRLS
NOBYPASSRLS
這個子句決定角色是否可以繞過每個資料列級安全(RLS)原則檢查。 NOBYPASSRLS 是預設值。請注意,預設情況下,pg_dump 會將 row_security 設定為 OFF,以確保資料表中的所有內容都被匯出。如果執行 pg_dump 的使用者沒有適當的權限,則會回報錯誤。超級使用者和匯出資料表的擁有者總是能夠繞過 RLS。
CONNECTION LIMIT
connlimit
如果角色可以登入,則指定該角色可以建立多少個同時連線。-1(預設值)表示沒有限制。請注意,只有正常連線才會計入此限制。預備交易和後端服務連線都不計入此限制。
[ ENCRYPTED ] PASSWORD 'password'
PASSWORD NULL
設定角色的密碼。(密碼僅用於具有 LOGIN 屬性的角色,但您仍可以為沒有該屬性的角色定義密碼。)如果您不打算使用密碼驗證,則可以省略此選項。如果未指定密碼,則密碼將設定為 NULL,該使用者的密碼驗證將始終失敗。也可以選擇將空密碼明確寫為 PASSWORD NULL。
指定一個空字串也會將密碼設定為 NULL,但在 PostgreSQL 10 版之前並非如此。在早期版本中,空字串能否使用,取決於驗證方法和確切的版本,而 libpq 無論如何都會拒絕使用空字串。為避免歧義,應避免指定空字串。
密碼總是會以加密方式儲存在系統目錄中。ENCRYPTED 關鍵字不起作用,但為了相容性而被接受。加密方法由配置參數 password_encryption 決定。如果提供的密碼字串已經以 MD5 加密或 SCRAM 加密的格式存在,則無論使用password_encryption 為何(因為系統無法解密指定的加密密碼字符串,如果以不同的格式對其進行加密的話),它都會按原樣儲存。 這允許在轉存/恢復期間重新載入加密的密碼。
VALID UNTIL
'timestamp
'
VALID UNTIL 子句設定角色密碼不再有效的日期和時間。 如果省略此項,則密碼將始終有效。
IN ROLE
role_name
IN ROLE 子句列出一個或多個新角色將立即添加為新成員的現有角色。(請注意,不能選擇以管理員身份增加新角色;請使用單獨的 GRANT 指令來執行此操作。)
IN GROUP
role_name
IN GROUP 是 IN ROLE 的過時語法。
ROLE
role_name
ROLE 子句列出了一個或多個自動增加為新角色成員的現有角色。 (這實際上使新角色成為「群組」。)
ADMIN
role_name
ADMIN 子句與 ROLE 類似,但已命名的角色被新增到新角色 WITH ADMIN OPTION 中,賦予他們將此角色的成員身份授予其他人的權利。
USER
role_name
USER 子句是 ROLE 子句的過時寫法。
SYSID
uid
SYSID 子句會被忽略,但為了相容性而被接受。
使用 ALTER ROLE 變更改角色的屬性,使用 DROP ROLE 刪除角色。所有由CREATE ROLE 指定的屬性都可以在後面的 ALTER ROLE 命令中修改。
從群組加入和移出角色成員的首選方法是使用 GRANT 和 REVOKE。
VALID UNTIL 子句僅定義密碼的到期時間,而不是角色本身。特別要注意的是,使用基於非密碼的身份驗證方法登錄時,不會強制實施到期的時間。
INHERIT 屬性管理可授予權限的繼承(即資料庫物件和角色成員的存取權限)。它不適用於由 CREATE ROLE 和 ALTER ROLE 設定的特殊角色屬性。例如,即使設定了INHERIT,作為 CREATEDB 權限角色的成員也不會立即授予建立資料庫的能力;在建立資料庫之前,有必要通過 SET ROLE 來扮演這個角色。
出於相容性的原因,INHERIT 屬性是預設屬性:在 PostgreSQL 的以前版本中,使用者總是可以存取它們所屬的群組的所有特權。但是,NOINHERIT 提供了與 SQL 標準中指定的語義更接近的設定。
請小心使用 CREATEROLE 權限。對於 CREATEROLE 角色的權限,並沒有繼承的概念。這意味著即使一個角色不具有某項特定權限,只要它被允許建立其他角色,就可以輕鬆建立一個權限與自己不同的角色(但無法建立具有超級使用者權限的角色)。例如,如果角色 user 具有 CREATEROLE 權限但沒有 CREATEDB 權限,它仍然可以建立具有 CREATEDB 權限的新角色。因此,應將具有 CREATEROLE 權限的角色視為幾乎等同於超級使用者的角色。
PostgreSQL 包含一個工具 createuser,它具有與 CREATE ROLE 相同的功能(實際上,它也使用此命令),但可以從命令列終端機中執行。
CONNECTION LIMIT 選項只會被粗略地執行;如果在該角色僅剩一個連線額度時,兩個新的連線幾乎同時啟動,則可能兩者都失敗。此外,此限制不適用於超級使用者。
使用此命令指定未加密的密碼時必須謹慎行事。密碼將以明文形式傳輸到伺服器,並且還可能會記錄在用戶端的命令歷史記錄或伺服器日誌中。但是,createuser 指令會傳輸加密的密碼。此外,psql 還包含一個命令 \password,可用於安全地更改密碼。
建立一個可以登入的角色,但不要給它一個密碼:
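例如(角色名稱 jonathan 為示意):

```sql
CREATE ROLE jonathan LOGIN;
```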
建立角色同時設定一個密碼:
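例如(角色名稱與密碼均為示意):

```sql
CREATE USER davide WITH PASSWORD 'jw8s0F4';
```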
(CREATE USER 與 CREATE ROLE 相同,但它暗示著 LOGIN。)
建立一個角色,其密碼的有效期限至 2004 年底;在 2005 年的第一秒之後,該密碼就不再有效:
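例如(角色名稱與密碼均為示意):

```sql
CREATE ROLE miriam WITH LOGIN PASSWORD 'jw8s0F4' VALID UNTIL '2005-01-01';
```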
建立一個可以建立資料庫和管理角色的角色:
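例如(角色名稱 admin 為示意):

```sql
CREATE ROLE admin WITH CREATEDB CREATEROLE;
```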
CREATE ROLE 語句存在於 SQL 標準中,但標準只要求 CREATE ROLE name [ WITH ADMIN role_name ] 這樣的語法。
多個初始管理員和 CREATE ROLE 的所有其他選項都是 PostgreSQL 延伸功能。
SQL 標準定義了使用者和角色的概念,且將它們視為不同的概念,並將所有定義使用者的命令留在每個資料庫的實作中。在 PostgreSQL 中,我們選擇將使用者和角色統一為單一類型的實體。因此,角色擁有比標準更多的選用屬性。
由 SQL 標準指定的行為最接近於給予使用者 NOINHERIT 屬性,而角色則賦予了 INHERIT 屬性。
CREATE POLICY — 為資料表定義新的資料列級的安全原則
CREATE POLICY 指令用於為資料表定義新的資料列級安全原則。請注意,你必須在資料表上啟用資料列級安全性(使用 ALTER TABLE ... ENABLE ROW LEVEL SECURITY)以便套用所建立的原則。
安全原則會授予 SELECT、INSERT、UPDATE 或 DELETE 與相關安全原則表示式所匹配的資料列的權限。根據 USING 中指定的表示式檢查現有的資料列,同時根據 WITH CHECK 中指定的表示式檢查透過 INSERT 或 UPDATE 建立的新資料列。當 USING 表示式對給定的資料列回傳 true 時,那麼該資料對使用者是可見的,而如果回傳 false 或 null,那麼該資料列為不可見。當 WITH CHECK 表示式對一筆資料列回傳 true 時,則插入或更新該資料列,而如果回傳 false 或 null,則會產生錯誤。
對於 INSERT 和 UPDATE 語句而言,在觸發 BEFORE 觸發器之後,以及在進行任何實際的資料修改之前,WITH CHECK 表示式都會強制執行。因此,BEFORE ROW 觸發器可能會修改要插入的資料,從而影響安全原則檢查的結果。WITH CHECK 表示式會在任何其他限制條件之前執行。
安全原則的名稱是對應每個資料表的。因此,一個原則名稱可用於許多不同的資料表,並為每個資料表定義適合該表格的定義。
安全原則可以應用於特定的指令或特定角色。新建立的安全原則預設適用於所有的指令和角色,除非另有設定。多個原則可能適用於單個命令;請參閱下面的詳細訊息。Table 240 總結了不同類型的原則如何應用於特定指令。
對於同時具有 USING 和 WITH CHECK 表達式(ALL 和 UPDATE)的安全原則,如果沒有定義 WITH CHECK 表示式,那麼 USING 表示式將用於確定哪些資料列為可見(一般的 USING 情況)以及哪些新資料列將會允許新增(WITH CHECK 情況下)。
如果對資料表啟用資料列級安全性,但卻沒有適用的原則,則會假定「預設拒絕」的原則,不會顯示或更新任何資料列。
name
要建立的原則名稱。它必須與資料表的任何其他原則的名稱不同。
table_name
該原則適用的資料表名稱(可選擇性加上 schema)。
PERMISSIVE
指定將原則建立為寬鬆的原則。所有適用查詢的寬鬆原則將使用布林運算的「OR」運算組合在一起。通過建立寬鬆的原則,管理者可以增加可以存取的資料。安全原則預設就是寬容的。
RESTRICTIVE
指定將該原則建立為限制性原則。所有適用查詢的限制性原則將使用布林運算「AND」運算組合在一起。透過建立限制性原則,管理者可以減少可以存取的資料集合大小,因為必須為每條資料都會檢核所有限制性策略。
請注意,在使用限制性原則來減少存取權限之前,需要至少有一項寬鬆原則來授予對資料列的存取權限。如果只有限制性原則存在,則不能存取任何資料列。當同時存在寬鬆原則和限制性原則時,只有在至少通過一項寬鬆原則,並且通過所有限制性原則的情況下,才能取得資料。
command
該原則適用的指令。有效的選項是 ALL、SELECT、INSERT、UPDATE 和 DELETE。ALL 是預設值。請參閱下面有關如何應用這些選項的細節。
role_name
要適用該原則的角色。預設值是 PUBLIC,它將會把原則適用於所有角色。
using_expression
可以是任何的 SQL 條件表示式(回傳布林值)。 條件表示式不能包含任何彙總函數或窗函數。如果啟用了資料列的安全原則,則將此表示式加入到引用該資料表的查詢中。表示式回傳 true 的資料列將會是可見的。表示式回傳 false 或 null 的任何資料列對使用者來說都是不可見的(在 SELECT 中),並且不可用於資料更新(在 UPDATE 或 DELETE 中)。這樣的資料列被無聲地壓制;不會有錯誤的回報。
check_expression
為一 SQL 條件表示式(回傳布林值)。條件表示式不能包含任何彙總函數或視窗函數。如果啟用了資料列級安全性,此表示式將用於針對該資料表的 INSERT 和 UPDATE 查詢。只有表示式計算結果為 true 的資料列才會被允許操作。如果插入的任何資料列或更新後的任何資料列,其表示式計算結果為 false 或 null,則會引發錯誤。請注意,check_expression 是根據資料列的新內容進行評估,而不是原始內容。
ALL
將 ALL 用於安全原則中意味著它將適用於所有指令,而不管指令的類型如何。如果同時存在 ALL 原則及更具體的原則,則兩個原則都會適用。此外,如果僅定義了 USING 表示式,則該原則將同時適用於查詢類操作和更新類操作,兩種情況均使用 USING 表示式。
舉例來說,當執行 UPDATE 時,ALL 原則將同時適用於 UPDATE 所能選取做為更新對象的資料列(套用 USING 表示式),以及更新後的資料列,以檢查是否允許將它們寫回資料表(如果定義了 WITH CHECK 表示式則使用之,否則使用 USING 表示式)。如果 INSERT 或 UPDATE 指令嘗試將未通過 ALL 原則 WITH CHECK 表示式的資料列加入資料表,則整個指令將被中止。
SELECT
將 SELECT 用於原則意味著它將適用於 SELECT 查詢,以及任何在定義原則的關連上需要 SELECT 權限的情況。結果是,只有通過 SELECT 原則的資料列才會在 SELECT 查詢中回傳,而需要 SELECT 權限的查詢(如 UPDATE)也只能看到 SELECT 原則所允許的資料列。SELECT 原則不能有 WITH CHECK 表示式,因為它只適用於從關連中讀取資料的情況。
INSERT
在原則中使用 INSERT 意味著它將適用於 INSERT 指令。插入的資料列不通過此原則將導致原則違規錯誤,並且整個 INSERT 指令將被中止。INSERT 原則不能有 USING 表示式,因為它只適用於資料被增加到關連中的情況。
請注意,帶有 ON CONFLICT DO UPDATE 的 INSERT 只對於隨 INSERT 路徑追加到關連的資料列檢查 INSERT 原則的 WITH CHECK 表示式。
UPDATE
在安全原則中使用 UPDATE 意味著它將適用於 UPDATE、SELECT FOR UPDATE 和 SELECT FOR SHARE 指令,以及 INSERT 指令的輔助 ON CONFLICT DO UPDATE 子句。由於 UPDATE 涉及取得現有資料並用新的更新資料替換它,所以 UPDATE 原則同時接受 USING 表示式和 WITH CHECK 表示式。USING 表示式定義 UPDATE 命令將查看哪些資料進行操作,而 WITH CHECK 表示式則定義允許哪些修改後的資料儲回關連之中。
任何未通過 WITH CHECK 表示式的資料列都會導致錯誤,並且使得整個命令被中止。如果僅指定 USING 子句,則該子句將同時用於 USING 和 WITH CHECK 兩種情況。
通常,UPDATE 指令還需要從正在更新中的欄位(例如,在 WHERE 子句或 RETURNING 子句中,又或者在 SET 子句的右側的表示式中)讀取資料。在這種情況下,正在更新的關連也需要 SELECT 權限,除 UPDATE 原則外,還將適用適當的 SELECT 或 ALL 原則。因此,除了被授予通過 UPDATE 或 ALL 原則更新資料列的權限之外,用戶還必須能夠存取通過 SELECT 或 ALL 原則更新的資料列。
當 INSERT 指令具有輔助的 ON CONFLICT DO UPDATE 子句時,如果採用 UPDATE 執行路徑,則首先針對任何 UPDATE 原則的 USING 表示式檢查要更新的資料列,然後根據 WITH CHECK 表示式檢查即將更新的資料列。 但是請注意,與獨立的 UPDATE 指令不同,如果現有的資料列未通過 USING 表示式,則會引發錯誤(UPDATE 執行路徑永遠不會被默默地忽視)。
DELETE
將 DELETE 用於原則意味著它將適用於 DELETE 指令。只有通過此原則的資料列才能被 DELETE 指令刪除。因此,某些資料列可能透過 SELECT 是可見的,但若未通過 DELETE 原則的 USING 表示式,便無法刪除。
在大多數情況下,DELETE 指令還需要從正在刪除中的關連(例如,在 WHERE 子句或 RETURNING 子句中)的欄位讀取資料。在這種情況下,就還需要 SELECT 權限,所以除了 DELETE 原則外,還將適用適當的 SELECT 或 ALL 策略。因此,除了被授予通過 DELETE 或 ALL 原則刪除資料列的權限外,使用者還必須能夠存取通過 SELECT 或 ALL 原則刪除的資料列。
DELETE 原則不能有 WITH CHECK 表示式,因為它只適用於從關連中刪除資料的情況,所以沒有新的資料需要檢查。
| Command | SELECT/ALL policy USING expression | INSERT/ALL policy WITH CHECK expression | UPDATE/ALL policy USING expression | UPDATE/ALL policy WITH CHECK expression | DELETE/ALL policy USING expression |
| --- | --- | --- | --- | --- | --- |
| SELECT | Existing row | — | — | — | — |
| SELECT FOR UPDATE/SHARE | Existing row | — | Existing row | — | — |
| INSERT | — | New row | — | — | — |
| INSERT ... RETURNING | New row [a] | New row | — | — | — |
| UPDATE | Existing & new rows [a] | — | Existing row | New row | — |
| DELETE | Existing row [a] | — | — | — | Existing row |
| ON CONFLICT DO UPDATE | Existing & new rows | — | Existing row | New row | — |

[a] If read access is required to the existing or new row (for example, a WHERE or RETURNING clause that refers to columns from the relation).
當不同命令類型的多個原則適用於同一指令(例如,適用於 UPDATE 指令的 SELECT 和 UPDATE 原則)時,使用者必須同時具有這兩種類型指令的權限(例如,從關連中查詢資料列的權限以及允許可以更新它們)。因此將會使用 AND 運算將一種原則類型的表示式與其他類型原則的表示式組合在一起。
當相同指令類型的多個原則套用於同一個指令時,必須至少有一個 PERMISSIVE 原則授予的存取權限,而所有 RESTRICTIVE 原則都必須通過。也就是,所有的 PERMISSIVE 原則表示式均使用 OR 組合,而所有 RESTRICTIVE 原則表示式都使用 AND 進行組合,並使用 AND 組合其結果。 如果沒有 PERMISSIVE 原則,則存取將會被拒絕。
請注意,出於合併多個原則的目的,所有原則都被視為與正在套用的其他任何類型的原則具有相同的類型。
例如,在需要 SELECT 和 UPDATE 權限的 UPDATE 指令中,如果每種類型都有多個適用的原則,則它們將按如下方式組合:
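以下以示意方式(非實際 SQL 語法)呈現其組合結果:

```
expression from RESTRICTIVE SELECT/ALL policy 1
AND
expression from RESTRICTIVE SELECT/ALL policy 2
AND
(
  expression from PERMISSIVE SELECT/ALL policy 1
  OR
  expression from PERMISSIVE SELECT/ALL policy 2
)
AND
expression from RESTRICTIVE UPDATE/ALL policy 1
AND
(
  expression from PERMISSIVE UPDATE/ALL policy 1
  OR
  expression from PERMISSIVE UPDATE/ALL policy 2
)
```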
您必須是資料表的擁有者才能為其建立或變更安全原則。
雖然安全原則會套用於對資料庫中資料表的查詢,但系統在內部執行參考完整性檢查或驗證限制條件時並不會套用安全原則。這意味著存在間接方法可以確認某個值是否存在。其中一個例子是嘗試將重複值插入到主鍵欄位或具有唯一性限制的欄位中:如果插入失敗,使用者便可以推斷該值已經存在。(這個例子假設安全原則允許使用者插入他們不被允許看到的資料。)另一個例子是允許使用者對參考了另一個隱藏資料表的資料表進行插入:使用者可以在參考資料表中插入某個值,若插入成功,即表示該值存在於被參考的資料表中。這些問題可以透過仔細設計安全原則來解決,以防止使用者藉由插入、刪除或更新資料,間接得知他們原本無法看到的值;或者改用替代值(例如代理鍵)而非具有外部意義的鍵。
通常,系統會在執行使用者查詢中出現的條件之前,先套用由安全原則所施加的過濾條件,以防止受保護的資料被無意間暴露給可能不可信的使用者自訂函數。但是,被系統(或系統管理員)標記為 LEAKPROOF 的函數和運算子,因為被認定是可信任的,可能會在原則表示式之前執行。
由於原則表示式會直接加入到使用者的查詢中,它們將以執行整個查詢的使用者權限執行。因此,使用某項原則的使用者必須能夠存取表示式中所引用的任何資料表或函數,否則在嘗試查詢啟用了資料列級安全性的資料表時,將會收到權限被拒絕的錯誤。然而,這並不會改變檢視表(View)的運作方式。與普通查詢和檢視表一樣,針對檢視表所引用資料表的權限檢查與原則,將使用檢視表擁有者的權限,並套用適用於檢視表擁有者的原則。
更多討論和實際案例可以在 5.7 節中瞭解。
CREATE POLICY 屬於 PostgreSQL 延伸指令。
ALTER POLICY, DROP POLICY, ALTER TABLE
CREATE SCHEMA — define a new schema
CREATE SCHEMA
enters a new schema into the current database. The schema name must be distinct from the name of any existing schema in the current database.
A schema is essentially a namespace: it contains named objects (tables, data types, functions, and operators) whose names can duplicate those of other objects existing in other schemas. Named objects are accessed either by “qualifying” their names with the schema name as a prefix, or by setting a search path that includes the desired schema(s). A CREATE
command specifying an unqualified object name creates the object in the current schema (the one at the front of the search path, which can be determined with the function current_schema
).
Optionally, CREATE SCHEMA
can include subcommands to create objects within the new schema. The subcommands are treated essentially the same as separate commands issued after creating the schema, except that if the AUTHORIZATION
clause is used, all the created objects will be owned by that user.
schema_name
The name of a schema to be created. If this is omitted, the user_name
is used as the schema name. The name cannot begin with pg_
, as such names are reserved for system schemas.
user_name
The role name of the user who will own the new schema. If omitted, defaults to the user executing the command. To create a schema owned by another role, you must be a direct or indirect member of that role, or be a superuser.
schema_element
An SQL statement defining an object to be created within the schema. Currently, only CREATE TABLE
, CREATE VIEW
, CREATE INDEX
, CREATE SEQUENCE
, CREATE TRIGGER
and GRANT
are accepted as clauses within CREATE SCHEMA
. Other kinds of objects may be created in separate commands after the schema is created.
IF NOT EXISTS
Do nothing (except issuing a notice) if a schema with the same name already exists. schema_element
subcommands cannot be included when this option is used.
To create a schema, the invoking user must have the CREATE
privilege for the current database. (Of course, superusers bypass this check.)
Create a schema:
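For example (the schema name is illustrative):

```sql
CREATE SCHEMA myschema;
```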
Create a schema for user joe
; the schema will also be named joe
:
Create a schema named test
that will be owned by user joe
, unless there already is a schema named test
. (It does not matter whether joe
owns the pre-existing schema.)
Create a schema and create a table and view within it:
Notice that the individual subcommands do not end with semicolons.
The following is an equivalent way of accomplishing the same result:
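A sketch of both forms, with illustrative names:

```sql
-- Schema created together with its objects; the subcommands
-- are not terminated with semicolons.
CREATE SCHEMA hollywood
    CREATE TABLE films (title text, release date, awards text[])
    CREATE VIEW winners AS
        SELECT title, release FROM films WHERE awards IS NOT NULL;

-- Equivalent form using separate commands:
CREATE SCHEMA hollywood;
CREATE TABLE hollywood.films (title text, release date, awards text[]);
CREATE VIEW hollywood.winners AS
    SELECT title, release FROM hollywood.films WHERE awards IS NOT NULL;
```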
The SQL standard allows a DEFAULT CHARACTER SET
clause in CREATE SCHEMA
, as well as more subcommand types than are presently accepted by PostgreSQL.
The SQL standard specifies that the subcommands in CREATE SCHEMA
can appear in any order. The present PostgreSQL implementation does not handle all cases of forward references in subcommands; it might sometimes be necessary to reorder the subcommands in order to avoid forward references.
According to the SQL standard, the owner of a schema always owns all objects within it. PostgreSQL allows schemas to contain objects owned by users other than the schema owner. This can happen only if the schema owner grants the CREATE
privilege on their schema to someone else, or a superuser chooses to create objects in it.
The IF NOT EXISTS
option is a PostgreSQL extension.
CREATE RULE — 定義新的重寫規則
CREATE RULE 定義了套用於指定資料表或檢視表的新規則。CREATE OR REPLACE RULE 將建立新規則,或替換同一個資料表的同名現有規則。
PostgreSQL 規則系統允許人們定義要對資料庫資料表中的插入,更新或刪除的執行替代操作。粗略地說,當使用者給予資料表上的特定指令時,規則會使其執行其他指令。或者,INSTEAD 規則可以用另一個指令替換特定的指令,或者根本不執行指令。規則也用於實作 SQL 檢視表。重要的是要意識到規則實際上是命令轉換機制或巨集。轉換在指令執行開始之前發生。如果您確實希望為每個實體資料列獨立觸發操作,則可能需要使用觸發器,而不是規則。有關規則系統的更多訊息,請參閱。
目前,ON SELECT 規則必須是無條件的 INSTEAD 規則,並且其操作必須由單一 SELECT 指令組成。因此,ON SELECT 規則實際上會將資料表轉換為檢視表,其可見內容是規則的 SELECT 指令所回傳的資料列,而不是資料表中實際儲存的內容(如果有的話)。比起建立實際資料表並為其定義 ON SELECT 規則,使用 CREATE VIEW 指令被認為是更好的方式。
您可以透過定義 ON INSERT,ON UPDATE 和 ON DELETE 規則(或任何足以滿足目的的子集)來建立可更新檢視表的錯覺,以使用其他資料表上的適當更新替換檢視表上的更新操作。如果要支援 INSERT RETURNING 等,請務必在每個規則中加上適當的 RETURNING 子句。
如果您嘗試對複雜的檢視表更新使用條件規則,則會有一個問題:對於您希望在檢視表上允許的每個操作,必須有一個無條件的 INSTEAD 規則。 如果規則是有條件的,或者不是 INSTEAD,那麼系統仍將拒絕執行更新操作的嘗試,因為它認為在某些情況下它可能最終會嘗試在檢視表的虛擬資料表上執行操作。如果要處理條件規則中的所有有用情況,請加上無條件 DO INSTEAD NOTHING 規則以確保系統知道它永遠不會被呼叫去更新虛擬資料表。然後使條件規則非 INSTEAD;在套用它們的情況下,它們會加到預設的 INSTEAD NOTHING 操作。(但是,此方法目前不支援 RETURNING 查詢。)
注意 一個簡單到可自動更新的檢視表(請參閱 )不需要使用者建立的規則便可更新。雖然您仍然可以建立明確的規則,但自動更新轉換通常會優於規則。
值得考慮的另一個選擇是使用 INSTEAD OF 觸發器(請參閱 )代替規則。
name
要建立的規則名稱。這必須與同一個資料表的其他規則名稱不同。同一個資料表和相同事件類型的多個規則會按字母順序套用。
event
此事件是 SELECT,INSERT,UPDATE 或 DELETE 之一。請注意,包含 ON CONFLICT 子句的 INSERT 不能用於具有 INSERT 或 UPDATE 規則的資料表。請考慮使用可更新檢視表。
table_name
規則適用的資料表或檢視表名稱(可加上綱要名稱)。
condition
任何 SQL 條件表示式(回傳布林值)。條件表示式不能引用除 NEW 和 OLD 之外的任何資料表,也不能包含彙總函數。
INSTEAD
INSTEAD 表示應該執行此指令而不是原始指令。
ALSO
ALSO 表示除原始指令外還應該執行此命令。
如果既未指定 ALSO 也未指定 INSTEAD,則 ALSO 是預設行為。
command
組成規則操作的指令。有效指令是 SELECT,INSERT,UPDATE,DELETE 或 NOTIFY。
在條件和指令中,特殊資料表名稱 NEW 和 OLD 可用於引用資料表中的值。NEW 在 ON INSERT 和 ON UPDATE 規則中有效,用於引用要插入或更新的新資料列。OLD 在 ON UPDATE 和 ON DELETE 規則中有效,以引用正在更新或刪除的現有資料列。
您必須是資料表的擁有者才能為其建立或變更規則。
在檢視表中的 INSERT,UPDATE 或 DELETE 規則中,您可以加入一個發出檢視表欄位的 RETURNING 子句。如果規則分別由 INSERT RETURNING,UPDATE RETURNING 或 DELETE RETURNING 指令觸發,則此子句將用於計算輸出。當規則由沒有 RETURNING 的指令觸發時,將忽略規則的 RETURNING 子句。目前實作上只允許無條件的 INSTEAD 規則包含 RETURNING;此外,同一事件的所有規則中最多只能有一個 RETURNING 子句。(這可確保只有一個候選 RETURNING 子句用於計算結果。)如果任何可用規則中沒有 RETURNING 子句,則將拒絕對檢視表的 RETURNING 查詢。
注意避免循環規則非常重要。例如,雖然 PostgreSQL 接受以下兩個規則定義中,但由於規則的遞迴擴展,SELECT 指令會導致 PostgreSQL 回報錯誤:
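例如(資料表與規則名稱為示意):

```sql
-- 兩條互相引用的規則,形成循環
CREATE RULE "_RETURN" AS ON SELECT TO t1 DO INSTEAD SELECT * FROM t2;
CREATE RULE "_RETURN" AS ON SELECT TO t2 DO INSTEAD SELECT * FROM t1;

-- 此查詢將因規則的遞迴擴展而回報錯誤
SELECT * FROM t1;
```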
目前,如果規則操作包含 NOTIFY 指令的話,NOTIFY 指令將無條件執行。也就是即使沒有規則應該套用的任何資料,也會發出 NOTIFY。例如,在:
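例如下列示意的規則(資料表與規則名稱為假設):

```sql
CREATE RULE notify_me AS ON UPDATE TO mytable WHERE old.id = 42
    DO ALSO NOTIFY mytable;
```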
在 UPDATE 期間將發送一個 NOTIFY 事件,無論是否存在與條件 id = 42 相符的資料列。這是可能在未來版本中修補的實作限制。
CREATE RULE 是一個 PostgreSQL 延伸語法,整個查詢語句重寫系統也是。
CREATE SERVER — define a new foreign server
CREATE SERVER
defines a new foreign server. The user who defines the server becomes its owner.
A foreign server typically encapsulates connection information that a foreign-data wrapper uses to access an external data resource. Additional user-specific connection information may be specified by means of user mappings.
The server name must be unique within the database.
Creating a server requires USAGE
privilege on the foreign-data wrapper being used.
IF NOT EXISTS
Do not throw an error if a server with the same name already exists. A notice is issued in this case. Note that there is no guarantee that the existing server is anything like the one that would have been created.
server_name
The name of the foreign server to be created.
server_type
Optional server type, potentially useful to foreign-data wrappers.
server_version
Optional server version, potentially useful to foreign-data wrappers.
fdw_name
The name of the foreign-data wrapper that manages the server.
OPTIONS (
option
'
value
' [, ... ] )
This clause specifies the options for the server. The options typically define the connection details of the server, but the actual names and values are dependent on the server's foreign-data wrapper.
Create a server myserver that uses the foreign-data wrapper postgres_fdw:
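A sketch (host, database, and port values are illustrative):

```sql
CREATE SERVER myserver FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'foo', dbname 'foodb', port '5432');
```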
CREATE SERVER
conforms to ISO/IEC 9075-9 (SQL/MED).
CREATE SUBSCRIPTION — 定義一個新的訂閱
CREATE SUBSCRIPTION 為目前資料庫加上一個新的訂閱。訂閱名稱必須與資料庫中任何現有訂閱的名稱相異。
訂閱表示與發佈者的複寫連線。因此,此指令不僅可以在本地中增加定義,還可以在發佈者上建立複寫插槽。
將在運行此指令的交易事務提交時啟動邏輯複寫工作程序以複寫新訂閱的資料。
若要建立一個 subscription,必須具有 pg_create_subscription 角色的授權,以及對目前資料庫的 CREATE 授權。
有關訂閱和邏輯複寫完整的訊息,請參閱和。
subscription_name
新訂閱的名稱。
CONNECTION '
conninfo
'
PUBLICATION
publication_name
要訂閱的發佈者的發佈名稱。
WITH (
subscription_parameter
[= value
] [, ... ] )
此子句指定訂閱的選用參數。支援以下參數:
指定 CREATE SUBSCRIPTION 是否應該連線到發佈者。將此設定為 false 會將enabled、create_slot 和 copy_data 的預設值更改為 false。
不允許將 connect 設定為 false,卻將 enabled、create_slot 或 copy_data 設定為 true。
create_slot
(boolean
)
指定指令是否應在發佈者上建立複寫插槽。預設值為 true。
enabled
(boolean
)
指定訂閱是應該主動複寫,還是應該只是設定而不啟動。預設值為 true。
slot_name
(string
)
要使用的複寫插槽名稱。預設行為是使用訂閱名稱作為插槽名稱。
當 slot_name 設定為 NONE 時,將不會有與該訂閱關聯的複寫插槽。如果稍後手動建立複寫插槽,則可以使用此方法。此類訂閱還必須同時啟用並且將 create_slot 設定為 false。
binary
(boolean
)
When doing cross-version replication, it could be that the publisher has a binary send function for some data type, but the subscriber lacks a binary receive function for that type. In such a case, data transfer will fail, and the binary
option cannot be used.
If the publisher is a PostgreSQL version before 16, then any initial table synchronization will use text format even if binary = true
.
copy_data
(boolean
)
指定複寫開始後是否應複寫正在訂閱的發佈中的現有資料。預設值為 true。
streaming
(enum
)
Specifies whether to enable streaming of in-progress transactions for this subscription. The default value is off
, meaning all transactions are fully decoded on the publisher and only then sent to the subscriber as a whole.
If set to on
, the incoming changes are written to temporary files and then applied only after the transaction is committed on the publisher and received by the subscriber.
If set to parallel
, incoming changes are directly applied via one of the parallel apply workers, if available. If no parallel apply worker is free to handle streaming transactions then the changes are written to temporary files and applied after the transaction is committed. Note that if an error happens in a parallel apply worker, the finish LSN of the remote transaction might not be reported in the server log.
synchronous_commit
(enum
)
使用 off 進行邏輯複寫是安全的:如果訂閱戶因缺少同步而遺失事務,則資料將從發佈者重新發送。
執行同步邏輯複寫時,可能需要使用其他設定。邏輯複寫工作程序會向發佈者報告寫入與更新的位置,而使用同步複寫時,發佈者會等待實際的更新動作。這意味著在將訂閱用於同步複寫時,若將訂閱端的 synchronous_commit 設定為 off,可能會增加發佈伺服器上 COMMIT 的延遲。在這種情況下,將 synchronous_commit 設定為 local 或更高的值可能更有利。
two_phase
(boolean
)
Specifies whether two-phase commit is enabled for this subscription. The default is false
.
When two-phase commit is enabled, prepared transactions are sent to the subscriber at the time of PREPARE TRANSACTION
, and are processed as two-phase transactions on the subscriber too. Otherwise, prepared transactions are sent to the subscriber only when committed, and are then processed immediately by the subscriber.
disable_on_error
(boolean
)
Specifies whether the subscription should be automatically disabled if any errors are detected by subscription workers during data replication from the publisher. The default is false
.
password_required
(boolean
)
Specifies whether connections to the publisher made as a result of this subscription must use password authentication. This setting is ignored when the subscription is owned by a superuser. The default is true
. Only superusers can set this value to false
.
run_as_owner
(boolean
)
origin
(string
)
Specifies whether the subscription will request the publisher to only send changes that don't have an origin or send changes regardless of origin. Setting origin
to none
means that the subscription will request the publisher to only send changes that don't have an origin. Setting origin
to any
means that the publisher sends changes regardless of their origin. The default is any
.
When specifying a parameter of type boolean
, the =
value
part can be omitted, which is equivalent to specifying TRUE
.
建立複寫插槽時(預設行為),CREATE SUBSCRIPTION 不能在交易事務區塊內執行。
建立連線到同一資料庫叢集的訂閱(例如,在同一叢集中的資料庫之間進行複寫或在同一資料庫中進行複寫)只有在複寫插槽未作為同一指令的一部分建立時才會成功。否則,CREATE SUBSCRIPTION 呼叫將失敗。要使其順利運作,請單獨建立複寫插槽(使用函數 pg_create_logical_replication_slot,並指定外掛名稱 pgoutput),然後使用參數 create_slot = false 建立訂閱。這是一個可能在將來版本中解除的實作限制。
Subscriptions having several publications in which the same table has been published with different column lists are not supported.
When using a subscription parameter combination of copy_data = true
and origin = NONE
, the initial sync table data is copied directly from the publisher, meaning that knowledge of the true origin of that data is not possible. If the publisher also has subscriptions then the copied table data might have originated from further upstream. This scenario is detected and a WARNING is logged to the user, but the warning is only an indication of a potential problem; it is the user's responsibility to make the necessary checks to ensure the copied data origins are really as wanted or not.
To find which tables might potentially include non-local origins (due to other subscriptions created on the publisher) try this SQL query:
建立遠端伺服器的訂閱,將複寫 mypublication 和 insert_only 資料表,並在提交時立即開始複寫:
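例如,以下是一個示意(連線資訊為假設):

```sql
CREATE SUBSCRIPTION mysub
         CONNECTION 'host=192.168.1.50 port=5432 user=foo dbname=foodb'
        PUBLICATION mypublication, insert_only;
```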
建立對於遠端伺服器的訂閱,將複寫 insert_only 資料表,並且在稍後啟用之前不會開始複寫。
CREATE SUBSCRIPTION 是 PostgreSQL 的延伸功能。
版本:11
CREATE STATISTICS — 定義延伸統計資訊
CREATE STATISTICS 將建立一個新的延伸統計資訊物件,追蹤指定資料表、外部資料表或具體化檢視表的相關統計數據。統計資訊物件將在目前的資料庫中建立,並由發出此指令的使用者擁有。
如果使用了綱要名稱(例如,CREATE STATISTICS myschema.mystat ...),則會在指定的綱要中建立統計資訊物件。 否則,它將在目前的綱要中建立。統計資訊物件的名稱必須與同一綱要中任何其他統計資訊物件的名稱不同。
IF NOT EXISTS
如果已存在具有相同名稱的統計物件,請不要拋出錯誤。在這種情況下發出 NOTICE。請注意,此處僅考慮統計物件的名稱,而不是其定義的詳細內容。
statistics_name
要建立的統計資訊物件名稱(可選擇性加上綱要名稱)。
statistics_kind
column_name
計算統計資訊要涵蓋的資料表欄位名稱。必須至少提供兩個欄位名稱;不需要考慮欄位名稱的次序。
table_name
包含計算統計資訊欄位的資料表名稱(可選擇性加上綱要名稱)。
您必須是資料表的所有者才能建立讀取它的統計物件。但是,一旦建立之後,統計物件的所有權就獨立於基礎資料表。
建立資料表 t1,其中有兩個具有功能相依性的欄位,也就是知道第一個欄位中的值就足以決定另一個欄位中的值,然後在這些欄位上建立功能相依性統計資訊:
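以下是一個示意(資料表與資料內容皆為假設):

```sql
CREATE TABLE t1 (a int, b int);

-- 填入測試資料:知道 a 便足以決定 b
INSERT INTO t1 SELECT i/100, i/500
  FROM generate_series(1, 1000000) s(i);

-- 在 a、b 欄位上建立功能相依性統計資訊
CREATE STATISTICS s1 (dependencies) ON a, b FROM t1;
ANALYZE t1;
```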
如果沒有功能相依性統計,計劃程序將會假設兩個 WHERE 條件是獨立的,並且將它們的可能性相乘而得到非常小的資料列計數估計。有了這樣的統計數據,規劃程序就會瞭解到 WHERE 條件是多餘的,就不會低估資料列數量。
使用兩個完全相關的欄位(包含相同的資訊)以及在這些欄位上的 MCV 列表建立資料表 t2:
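以下是一個示意(資料表與資料內容皆為假設):

```sql
CREATE TABLE t2 (a int, b int);

-- 兩個欄位的內容完全相關
INSERT INTO t2 SELECT mod(i, 100), mod(i, 100)
  FROM generate_series(1, 1000000) s(i);

-- 在 a、b 欄位上建立 MCV 列表統計資訊
CREATE STATISTICS s2 (mcv) ON a, b FROM t2;
ANALYZE t2;
```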
MCV 列表為查詢計劃程序提供了資料表中時常出現的特定值組合的詳細資訊,以及未出現在資料表中的值組合之選擇性上限,從而使這兩種情況下都能產生更好的估計值。
SQL 標準中沒有 CREATE STATISTICS 指令。
CREATE TRANSFORM — 定義一個新的轉變
CREATE TRANSFORM 定義一個新的轉換。CREATE OR REPLACE TRANSFORM 將建立新的轉換,或替換現有定義。
Transform 指的是如何讓資料型別與程序語言之間相互轉換。例如,當使用 hstore 型別在 PL/Python 中撰寫函數時,PL/Python 並不具備如何在 Python 環境中呈現 hstore 值的內建方法。語言實作預設會退回使用文字表示法,但若關聯陣列或列表更合適時,這樣會很不方便。
轉換指定了兩個函數:
「from SQL」函數,用於將型別從 SQL 環境轉換為某個程序語言。將使用該語言撰寫的函數的參數呼叫此函數。
「to SQL」函數,用於將型別從某程序語言轉換到 SQL 環境。將使用該語言撰寫的函數呼叫此函數回傳值。
這兩個函數並非都必須提供。如果未指定,則必要時將使用語言特定的預設行為。(為了完全阻止某個方向的轉換,你也可以寫一個總是出錯的轉換函數。)
為了能夠建立轉換,您必須擁有該型別的 USAGE 權限,擁有該語言的 USAGE 權限,並且擁有對 from-SQL 和 to-SQL 函數的 EXECUTE 權限(如果已指定)。
type_name
轉換的資料型別名稱。
lang_name
轉換程序語言的名稱。
from_sql_function_name
[(argument_type
[, ...])]
用於將資料型別從 SQL 環境轉換為程序語言的函數名稱。它必須接受一個型別為 internal 的參數,且回傳型別為 internal。實際的參數值將是被轉換型別的值,函數在撰寫時可以將參數當作該型別處理。(但是,若函數沒有至少一個型別為 internal 的參數,則不允許宣告回傳 internal 的 SQL 層級函數。)實際的回傳值將是語言實作特定的內容。如果未指定參數列表,則該函數名稱在其綱要中必須是唯一的。
to_sql_function_name
[(argument_type
[, ...])]
用於將資料型別從程序語言轉換為 SQL 環境的函數名稱。它必須採用型別為 internal 的一個參數,並回傳作為轉換型別的型別。實際參數值將是特定於語言實作的內容。如果未指定參數列表,則函數名稱在其綱要中必須是唯一的。
要為型別 hstore 和語言 plpythonu 建立轉換,首先要建立型別和語言:
然後建立必要的函數:
最後建立轉換將它們連結在一起:
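以下是最後一步的示意;假設 hstore_to_plpython 與 plpython_to_hstore 兩個函數已事先以 C 建立:

```sql
CREATE TRANSFORM FOR hstore LANGUAGE plpythonu (
    FROM SQL WITH FUNCTION hstore_to_plpython(internal),
    TO SQL WITH FUNCTION plpython_to_hstore(internal)
);
```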
實際上,這些指令將被包含在延伸套件中。
contrib 包含許多提供轉換的延伸套件,可以作為真實範例。
這種形式的 CREATE TRANSFORM 是 PostgreSQL 延伸功能。SQL 標準中有一個 CREATE TRANSFORM 指令,但它用於使資料型別適應用戶端語言。 PostgreSQL 不支援這種用法。
CREATE TABLE AS — 從查詢結果來定義一個新資料表
CREATE TABLE AS 會建立一個資料表,並以 SELECT 指令産生的資料填入。資料表欄位的名稱與資料型別與 SELECT 的輸出欄位相關聯(不過你也可以透過明確給予新的欄位名稱列表來覆寫欄位名稱)。
CREATE TABLE AS 與建立檢視表具有一些相似之處,但實際上完全不同:它建立一個新的資料表並僅對該查詢進行一次性運算以填入新資料表。新資料表將不隨查詢來源資料表的後續變更而改變。相比之下,無論何時查詢,檢視資料表都會重新運算其所定義的 SELECT 語句。
GLOBAL
or LOCAL
忽略相容性。不推薦使用這個關鍵字;有關詳細訊息,請參閱 。
TEMPORARY
or TEMP
UNLOGGED
IF NOT EXISTS
table_name
要建立的資料表名稱(可以加上綱要名稱)。
column_name
新資料表中欄位的名稱。如果未提供欄位名稱,則從查詢的輸出欄位名稱中取得它們。
WITH (
storage_parameter
[= value
] [, ... ] )
WITH OIDS
WITHOUT OIDS
這些過時的語法分別等同於 WITH(OIDS)和 WITH(OIDS = FALSE)。如果您希望同時提供 OIDS 設定和儲存參數,則必須使用 WITH(...)語法;請參閱上個段落。
ON COMMIT
使用 ON COMMIT 可以控制交易事務區塊結尾時的臨時資料表行為。有三個選項是:
PRESERVE ROWS
交易結束時不會採取特殊行動。這是預設行為。
DELETE ROWS
臨時資料表中的所有資料列將在每個交易事務區塊的末尾被刪除。本質上,每次提交都會自動完成 TRUNCATE。
DROP
臨時資料表將在目前交易事務區塊的結尾被刪除。
TABLESPACE
tablespace_name
query
WITH [ NO ] DATA
此子句指定是否將查詢産生的資料複製到新資料表中。如果不是,則就只複製資料表結構。預設值是複製資料。
此指令在功能上類似於 SELECT INTO,但通常會優先使用這個,因為它不太可能與 SELECT INTO 語法的其他用法混淆。基本上,CREATE TABLE AS 的功能包含了 SELECT INTO 所提供的功能。
建立一個新的資料表 films_recent,其中只包含來自資料表 film 的最新項目:
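以下是一個示意(欄位名稱 date_prod 為假設):

```sql
CREATE TABLE films_recent AS
  SELECT * FROM films WHERE date_prod >= '2002-01-01';
```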
要完全複製資料表,也可以使用 TABLE 指令的簡短格式:
使用預備查詢語句(prepared statement)建立一個新的臨時資料表 films_recent,僅包含來自資料表 film 的最近項目。新資料表具有 OID,並將在 commit 時丟棄:
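以下是一個示意(欄位名稱 date_prod 為假設):

```sql
PREPARE recentfilms(date) AS
  SELECT * FROM films WHERE date_prod > $1;

-- 具有 OID 的臨時資料表,於交易提交時丟棄
CREATE TEMP TABLE films_recent WITH (OIDS) ON COMMIT DROP AS
  EXECUTE recentfilms('2002-01-01');
```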
CREATE TABLE AS 符合 SQL 標準。以下是非標準的延伸功能:
在標準中需要括住子查詢子句的括號;在 PostgreSQL 中,這些括號是選用的。
在標準中,WITH [NO] DATA 子句是必須的;在 PostgreSQL 中是選用的。
WITH 子句是一個 PostgreSQL 延伸功能;標準中既沒有儲存參數也沒有 OID。
PostgreSQL 資料表空間的概念並不是標準的一部分。因此,TABLESPACE 子句是一個延伸功能。
CREATE TABLESPACE — 定義新的資料表空間
CREATE TABLESPACE 註冊一個新的叢集範圍的資料表空間。資料表空間名稱必須與資料庫叢集中任何現有資料表空間的名稱不同。
資料表空間允許超級使用者為資料庫物件(如資料表和索引)的資料檔案可以駐留的檔案系統上定義備用位置。
具有適當權限的使用者可以將 tablespace_name 傳遞給 CREATE DATABASE、CREATE TABLE、CREATE INDEX 或 ADD CONSTRAINT,以將這些物件的資料檔案儲存在指定的資料表空間中。
資料表空間不能獨立於定義它的叢集使用,請參見。
tablespace_name
要建立的資料表空間名稱。該名稱不能以 pg_ 開頭,因為這類名稱保留給系統資料表空間使用。
user_name
將擁有資料表空間的使用者的名稱。如果省略,則預設為執行該命令的使用者。只有超級使用者可以建立資料表空間,但他們可以將資料表空間的所有權分配給非超級使用者。
directory
將用於資料表空間的目錄。該目錄應該是空的,並且必須由 PostgreSQL 的作業系統使用者所擁有。該目錄必須以絕對路徑指定。
tablespace_option
資料表空間僅在支援符號連結(symbolic links)的檔案系統上可用。
CREATE TABLESPACE 不能在交易事務內執行。
在 /data/dbs 建立一個資料表空間 dbspace:
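例如:

```sql
CREATE TABLESPACE dbspace LOCATION '/data/dbs';
```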
在 /data/indexes 處建立一個資料表空間索引空間,並指定 genevieve 為擁有者:
CREATE TABLESPACE 是一個 PostgreSQL 延伸功能。
版本:11
CREATE TRIGGER — 宣告一個新的觸發器
CREATE TRIGGER 建立一個新的觸發器。觸發器將與指定的資料表,檢視表或外部資料表關聯,並在對該表執行某些操作時執行指定的函數。
可以指定觸發器在嘗試對某行執行操作之前(在檢查限制條件並嘗試執行 INSERT,UPDATE 或 DELETE 之前);或者在操作完成後(在檢查限制條件並且 INSERT,UPDATE 或 DELETE 完成之後);又或者代替操作(在檢視表上插入,更新或刪除的情況下)。如果觸發器在事件之前或之後觸發,則觸發器可以跳過目前資料列的操作,或者更改正在插入的資料列(僅適用於 INSERT 和 UPDATE 操作)。如果觸發器在事件發生後觸發,則所有更改(包括其他觸發器的效果)都對觸發器都是「可見」。
對於操作修改的每一個資料列,都會呼叫標記為 FOR EACH ROW 的觸發器一次。例如,影響 10 行的 DELETE 將導致目標關連上的任何 ON DELETE 觸發器被分別呼叫 10 次,每次刪除則執行一次。相反,標記為 FOR EACH STATEMENT 的觸發器僅對任何給予操作的執行一次,無論其修改多少資料列(特別是,修改零個資料列的操作仍將驅使執行任何適用的 FOR EACH STATEMENT 觸發器)。
指定為 INSTEAD OF 觸發事件的觸發器必須標記為 FOR EACH ROW,且只能在檢視表上定義。檢視表上的 BEFORE 和 AFTER 觸發器則必須標記為 FOR EACH STATEMENT。
此外,觸發器可以定義為觸發 TRUNCATE,儘管只有 FOR EACH STATEMENT。
下表總結了可以在資料表,檢視表和外部資料表上使用哪些類型的觸發器:
此外,觸發器定義可以指定布林 WHEN 條件,將對其進行測試以查看是否應觸發觸發器。在資料列級觸發器中,WHEN 條件可以檢查資料列的欄位舊值和新值。語句級觸發器也可以具有 WHEN 條件,儘管該功能對它們沒有那麼有用,因為條件不能引用資料表中的任何值。
如果為同一事件定義了多個相同類型的觸發器,則按名稱的字母順序觸發它們。
REFERENCING 選項啟用轉換關連的集合,轉換關連是包含目前 SQL 語句插入,刪除或修改的所有資料列的子集。此功能允許觸發器查看語句的全域檢視圖,而不是一次只能查看一個資料列。此選項僅適用於非限制條件觸發器的 AFTER 觸發器;另外,如果觸發器是 UPDATE 觸發器,則它不能指定 column_name 列表。OLD TABLE 只能指定一次,並且只能用於可以在 UPDATE 或 DELETE上 觸發的觸發器;它建立一個轉換關係,其中包含語句更新或刪除的所有資料列的先前版本。類似地,NEW TABLE 只能指定一次,並且只能用於可以在 UPDATE 或 INSERT 上觸發的觸發器;它建立一個轉換關連,包含語句更新或插入的所有資料列的新版本。
SELECT 不會修改任何資料列,因此您無法建立 SELECT 觸發器。對於看似需要 SELECT 觸發器的問題,規則和檢視表或許能提供可行的解決方案。
name
The name to give the new trigger. This must be distinct from the name of any other trigger for the same table. The name cannot be schema-qualified — the trigger inherits the schema of its table. For a constraint trigger, this is also the name to use when modifying the trigger's behavior using SET CONSTRAINTS
.
BEFORE
AFTER
INSTEAD OF
Determines whether the function is called before, after, or instead of the event. A constraint trigger can only be specified as AFTER
.
event
One of INSERT
, UPDATE
, DELETE
, or TRUNCATE
; this specifies the event that will fire the trigger. Multiple events can be specified using OR
, except when transition relations are requested.
For UPDATE
events, it is possible to specify a list of columns using this syntax:
The trigger will only fire if at least one of the listed columns is mentioned as a target of the UPDATE
command.
INSTEAD OF UPDATE
events do not allow a list of columns. A column list cannot be specified when requesting transition relations, either.
table_name
The name (optionally schema-qualified) of the table, view, or foreign table the trigger is for.
referenced_table_name
The (possibly schema-qualified) name of another table referenced by the constraint. This option is used for foreign-key constraints and is not recommended for general use. This can only be specified for constraint triggers.
DEFERRABLE
NOT DEFERRABLE
INITIALLY IMMEDIATE
INITIALLY DEFERRED
REFERENCING
This keyword immediately precedes the declaration of one or two relation names that provide access to the transition relations of the triggering statement.
OLD TABLE
NEW TABLE
This clause indicates whether the following relation name is for the before-image transition relation or the after-image transition relation.
transition_relation_name
The (unqualified) name to be used within the trigger for this transition relation.
FOR EACH ROW
FOR EACH STATEMENT
This specifies whether the trigger procedure should be fired once for every row affected by the trigger event, or just once per SQL statement. If neither is specified, FOR EACH STATEMENT
is the default. Constraint triggers can only be specified FOR EACH ROW
.
condition
A Boolean expression that determines whether the trigger function will actually be executed. If WHEN
is specified, the function will only be called if the condition
returns true
. In FOR EACH ROW
triggers, the WHEN
condition can refer to columns of the old and/or new row values by writing OLD.
column_name
or NEW.
column_name
respectively. Of course, INSERT
triggers cannot refer to OLD
and DELETE
triggers cannot refer to NEW
.
INSTEAD OF
triggers do not support WHEN
conditions.
Currently, WHEN
expressions cannot contain subqueries.
Note that for constraint triggers, evaluation of the WHEN
condition is not deferred, but occurs immediately after the row update operation is performed. If the condition does not evaluate to true then the trigger is not queued for deferred execution.
function_name
A user-supplied function that is declared as taking no arguments and returning type trigger
, which is executed when the trigger fires.
arguments
An optional comma-separated list of arguments to be provided to the function when the trigger is executed. The arguments are literal string constants. Simple names and numeric constants can be written here, too, but they will all be converted to strings. Please check the description of the implementation language of the trigger function to find out how these arguments can be accessed within the function; it might be different from normal function arguments.
To create a trigger on a table, the user must have the TRIGGER
privilege on the table. The user must also have EXECUTE
privilege on the trigger function.
A column-specific trigger (one defined using the UPDATE OF
column_name
syntax) will fire when any of its columns are listed as targets in the UPDATE
command's SET
list. It is possible for a column's value to change even when the trigger is not fired, because changes made to the row's contents by BEFORE UPDATE
triggers are not considered. Conversely, a command such as UPDATE ... SET x = x ...
will fire a trigger on column x
, even though the column's value did not change.
In a BEFORE
trigger, the WHEN
condition is evaluated just before the function is or would be executed, so using WHEN
is not materially different from testing the same condition at the beginning of the trigger function. Note in particular that the NEW
row seen by the condition is the current value, as possibly modified by earlier triggers. Also, a BEFORE
trigger's WHEN
condition is not allowed to examine the system columns of the NEW
row (such as oid
), because those won't have been set yet.
In an AFTER
trigger, the WHEN
condition is evaluated just after the row update occurs, and it determines whether an event is queued to fire the trigger at the end of statement. So when an AFTER
trigger's WHEN
condition does not return true, it is not necessary to queue an event nor to re-fetch the row at end of statement. This can result in significant speedups in statements that modify many rows, if the trigger only needs to be fired for a few of the rows.
In some cases it is possible for a single SQL command to fire more than one kind of trigger. For instance an INSERT
with an ON CONFLICT DO UPDATE
clause may cause both insert and update operations, so it will fire both kinds of triggers as needed. The transition relations supplied to triggers are specific to their event type; thus an INSERT
trigger will see only the inserted rows, while an UPDATE
trigger will see only the updated rows.
Row updates or deletions caused by foreign-key enforcement actions, such as ON UPDATE CASCADE
or ON DELETE SET NULL
, are treated as part of the SQL command that caused them (note that such actions are never deferred). Relevant triggers on the affected table will be fired, so that this provides another way in which a SQL command might fire triggers not directly matching its type. In simple cases, triggers that request transition relations will see all changes caused in their table by a single original SQL command as a single transition relation. However, there are cases in which the presence of an AFTER ROW
trigger that requests transition relations will cause the foreign-key enforcement actions triggered by a single SQL command to be split into multiple steps, each with its own transition relation(s). In such cases, any statement-level triggers that are present will be fired once per creation of a transition relation set, ensuring that the triggers see each affected row in a transition relation once and only once.
Statement-level triggers on a view are fired only if the action on the view is handled by a row-level INSTEAD OF
trigger. If the action is handled by an INSTEAD
rule, then whatever statements are emitted by the rule are executed in place of the original statement naming the view, so that the triggers that will be fired are those on tables named in the replacement statements. Similarly, if the view is automatically updatable, then the action is handled by automatically rewriting the statement into an action on the view's base table, so that the base table's statement-level triggers are the ones that are fired.
Modifying a partitioned table or a table with inheritance children fires statement-level triggers attached to the explicitly named table, but not statement-level triggers for its partitions or child tables. In contrast, row-level triggers are fired on the rows in affected partitions or child tables, even if they are not explicitly named in the query. If a statement-level trigger has been defined with transition relations named by a REFERENCING
clause, then before and after images of rows are visible from all affected partitions or child tables. In the case of inheritance children, the row images include only columns that are present in the table that the trigger is attached to. Currently, row-level triggers with transition relations cannot be defined on partitions or inheritance child tables.
In PostgreSQL versions before 7.3, it was necessary to declare trigger functions as returning the placeholder type opaque
, rather than trigger
. To support loading of old dump files, CREATE TRIGGER
will accept a function declared as returning opaque
, but it will issue a notice and change the function's declared return type to trigger
.
每當要更新資料表 accounts 的資料列時,執行函數 check_account_update:
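以下是一個示意(觸發器名稱為假設,並假設 check_account_update 已建立):

```sql
CREATE TRIGGER check_update
    BEFORE UPDATE ON accounts
    FOR EACH ROW
    EXECUTE PROCEDURE check_account_update();
```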
同上,但只有在 UPDATE 指令中將欄位 balance 列為更新標的時,才執行該函數:
如果欄位 balance 實際上已變更其值,則此語法才會執行該函數:
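以下是一個示意(觸發器名稱為假設,並假設 check_account_update 已建立):

```sql
CREATE TRIGGER check_update
    BEFORE UPDATE ON accounts
    FOR EACH ROW
    WHEN (OLD.balance IS DISTINCT FROM NEW.balance)
    EXECUTE PROCEDURE check_account_update();
```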
呼叫函數來記錄 accounts 的更新,但僅在變更了某些內容時:
對每一個資料列執行函數 view_insert_row,資料列被插入到檢視表中時:
對每個語句執行函數 check_transfer_balances_to_zero,以確認轉帳的資料列相抵後淨額為零:
對每一個資料列執行 check_matching_pairs 函數以確認同時對相對應的資料列對進行變更(透過相同的語句):
PostgreSQL 中的 CREATE TRIGGER 語句只實作了 SQL 標準的一部份。 目前還缺少以下功能:
While transition table names for AFTER
triggers are specified using the REFERENCING
clause in the standard way, the row variables used in FOR EACH ROW
triggers may not be specified in a REFERENCING
clause. They are available in a manner that is dependent on the language in which the trigger function is written, but is fixed for any one language. Some languages effectively behave as though there is a REFERENCING
clause containing OLD ROW AS OLD NEW ROW AS NEW
.
The standard allows transition tables to be used with column-specific UPDATE
triggers, but then the set of rows that should be visible in the transition tables depends on the trigger's column list. This is not currently implemented by PostgreSQL.
PostgreSQL only allows the execution of a user-defined function for the triggered action. The standard allows the execution of a number of other SQL commands, such as CREATE TABLE
, as the triggered action. This limitation is not hard to work around by creating a user-defined function that executes the desired commands.
SQL specifies that multiple triggers should be fired in time-of-creation order. PostgreSQL uses name order, which was judged to be more convenient.
SQL specifies that BEFORE DELETE
triggers on cascaded deletes fire after the cascaded DELETE
completes. The PostgreSQL behavior is for BEFORE DELETE
to always fire before the delete action, even a cascading one. This is considered more consistent. There is also nonstandard behavior if BEFORE
triggers modify rows or prevent updates during an update that is caused by a referential action. This can lead to constraint violations or stored data that does not honor the referential constraint.
The ability to specify multiple actions for a single trigger using OR is a PostgreSQL extension of the SQL standard.
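The multiple-action form can be sketched as follows (the trigger, table, and function names here are illustrative, not from the original page):

```sql
-- One trigger fires on any of the three row-modifying events.
CREATE TRIGGER log_account_change
    BEFORE INSERT OR UPDATE OR DELETE ON accounts
    FOR EACH ROW
    EXECUTE FUNCTION log_account_change();
```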
The ability to fire triggers for TRUNCATE is a PostgreSQL extension of the SQL standard, as is the ability to define statement-level triggers on views.
CREATE CONSTRAINT TRIGGER is a PostgreSQL extension of the SQL standard.
[Fragment of an applicability table; only the cell values survive extraction: "New row", "Existing & new rows", "Existing row".] If read access is required to the existing or new row (for example, a WHERE or RETURNING clause that refers to columns from the relation).
When using the dblink module, a foreign server's name can be used as an argument of the dblink_connect function to indicate the connection parameters. It is necessary to have the USAGE privilege on the foreign server to be able to use it in this way.
See the dblink documentation for more details.
The connection string to the publisher. For details, see the documentation on connection strings.
connect (boolean)
Since no connection is made when this option is set to false, the tables are not subscribed, and so after you enable the subscription nothing will be replicated. To initiate replication, you must manually create the replication slot, enable the subscription, and refresh the subscription. See the documentation for examples.
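A sketch of that workflow, assuming a subscription named mysub and a publication named mypub (connection details deliberately elided):

```sql
CREATE SUBSCRIPTION mysub
    CONNECTION 'host=... dbname=...'
    PUBLICATION mypub
    WITH (connect = false);
-- On the publisher, create the slot manually, e.g.:
--   SELECT pg_create_logical_replication_slot('mysub', 'pgoutput');
ALTER SUBSCRIPTION mysub ENABLE;
ALTER SUBSCRIPTION mysub REFRESH PUBLICATION;
```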
Specifies whether the subscription will request the publisher to send the data in binary format (as opposed to text). The default is false
. Any initial table synchronization copy (see copy_data
) also uses the same format. Binary format can be faster than the text format, but it is less portable across machine architectures and PostgreSQL versions. Binary format is very data type specific; for example, it will not allow copying from a smallint
column to an integer
column, even though that would work fine in text format. Even when this option is enabled, only data types having binary send and receive functions will be transferred in binary. Note that the initial synchronization requires all data types to have binary send and receive functions, otherwise the synchronization will fail (see for more about send/receive functions).
If the publications contain WHERE clauses, it will affect what data is copied. Refer to the documentation for details.
See the documentation for details of how copy_data = true can interact with the origin parameter.
The value of this parameter overrides the corresponding server configuration setting. The default is off.
The implementation of two-phase commit requires that replication has successfully finished the initial table synchronization phase. So even when two_phase
is enabled for a subscription, the internal two-phase state remains temporarily “pending” until the initialization phase completes. See column subtwophasestate
of to know the actual two-phase state.
If true, all replication actions are performed as the subscription owner. If false, replication workers will perform actions on each table as the owner of that table. The latter configuration is generally much more secure; for details, see the documentation. The default is false.
See the documentation for details of how copy_data = true can interact with the origin parameter.
See the documentation for details on how to configure access control between the subscription and the publication instance.
If any table in the publication has a WHERE clause, rows for which the expression evaluates to false or null will not be published. If the subscription has several publications in which the same table has been published with different WHERE clauses, a row will be published if any of the expressions (referring to that publish operation) are satisfied. In the case of different WHERE clauses, if one of the publications has no WHERE clause (referring to that publish operation) or the publication is declared as FOR ALL TABLES or FOR TABLES IN SCHEMA, rows are always published regardless of the definition of the other expressions. If the subscriber is a PostgreSQL version before 15, then any row filtering is ignored during the initial data synchronization phase. For this case, the user might want to consider deleting any initially copied data that would be incompatible with subsequent filtering. Because initial data synchronization does not take into account the publish parameter when copying existing table data, some rows may be copied that would not be replicated using DML. See the documentation for examples.
Non-existent publications are allowed to be specified so that users can add them later. This means that a subscription can reference publications that do not yet exist.
The statistics kinds to be computed in this statistics object. Currently supported kinds are ndistinct, which enables n-distinct statistics, dependencies, which enables functional dependency statistics, and mcv, which enables most-common-values lists. If this clause is omitted, all supported statistics kinds are included in the statistics object. For more information, see the relevant documentation.
Use DROP TRANSFORM to remove transforms.
If specified, the table is created as a temporary table. See CREATE TABLE for details.
If specified, the table is created as an unlogged table. See CREATE TABLE for details.
Do not throw an error if a relation with the same name already exists. A notice is issued in this case. See CREATE TABLE for details.
This clause specifies optional storage parameters for the new table; see the storage parameter documentation for more information. The WITH clause can also include OIDS=TRUE (or just OIDS) to specify that rows of the new table should have OIDs (object identifiers) assigned to them, or OIDS=FALSE to specify that the rows should not have OIDs. See CREATE TABLE for more information.
tablespace_name is the name of the tablespace in which the new table is to be created. If not specified, default_tablespace is consulted, or temp_tablespaces if the table is temporary.
A SELECT, TABLE, or VALUES command, or an EXECUTE command that runs a prepared SELECT, TABLE, or VALUES query.
The CREATE TABLE AS command allows the user to explicitly specify whether OIDs should be included. If the presence of OIDs is not explicitly specified, the default_with_oids configuration variable is used.
PostgreSQL handles temporary tables in a way rather different from the standard; see CREATE TABLE for details.
A tablespace parameter to be set or reset. Currently, the only available parameters are seq_page_cost, random_page_cost, and effective_io_concurrency. Setting any of these values for a particular tablespace will override the planner's usual estimate of the cost of reading pages from tables in that tablespace, as established by the configuration parameters of the same name. This may be useful if one tablespace is located on a disk that is faster or slower than the rest of the I/O subsystem.
When the CONSTRAINT option is specified, this command creates a constraint trigger. This is the same as a regular trigger except that the timing of the trigger firing can be adjusted using SET CONSTRAINTS. Constraint triggers must be AFTER ROW triggers on plain tables (not foreign tables). They can be fired either at the end of the statement causing the triggering event, or at the end of the containing transaction; in the latter case they are said to be deferred. A pending deferred-trigger firing can also be forced to happen immediately by using SET CONSTRAINTS. Constraint triggers are expected to raise an exception when the constraints they implement are violated.
For more information about triggers, see the chapter on triggers.
The default timing of the trigger. See the documentation for details of these constraint options. This can only be specified for constraint triggers.
Use DROP TRIGGER to remove a trigger.
A complete example of a trigger function written in C can be found in the documentation.
When        Event                   Row-level                   Statement-level
BEFORE      INSERT/UPDATE/DELETE    Tables and foreign tables   Tables, views, and foreign tables
            TRUNCATE                —                           Tables
AFTER       INSERT/UPDATE/DELETE    Tables and foreign tables   Tables, views, and foreign tables
            TRUNCATE                —                           Tables
INSTEAD OF  INSERT/UPDATE/DELETE    Views                       —
            TRUNCATE                —                           —
DEALLOCATE — deallocate a prepared statement
DEALLOCATE is used to deallocate a previously prepared SQL statement. If you do not explicitly deallocate a prepared statement, it is deallocated when the session ends.
For more information on prepared statements, see PREPARE.
PREPARE
This key word is ignored.
name
The name of the prepared statement to deallocate.
ALL
Deallocate all prepared statements.
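A minimal usage sketch (the statement name, table, and query here are illustrative):

```sql
PREPARE fetch_film (integer) AS SELECT * FROM films WHERE id = $1;
EXECUTE fetch_film(1);
DEALLOCATE fetch_film;  -- free just this prepared statement
DEALLOCATE ALL;         -- or free every prepared statement in the session
```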
The SQL standard includes a DEALLOCATE statement, but it is only for use in embedded SQL.
CREATE VIEW — define a new view
CREATE VIEW defines a view of a query. The view is not physically materialized. Instead, the query is run every time the view is referenced in a query.
CREATE OR REPLACE VIEW is similar, but if a view of the same name already exists, it is replaced. The new query must generate the same columns that were generated by the existing view query (that is, the same column names in the same order and with the same data types), but it may add additional columns to the end of the list. The calculations giving rise to the output columns may be completely different.
If a schema name is given (for example, CREATE VIEW myschema.myview ...), then the view is created in the specified schema. Otherwise it is created in the current schema. Temporary views exist in a special schema, so a schema name cannot be given when creating a temporary view. The name of the view must be distinct from the name of any other view, table, sequence, index, or foreign table in the same schema.
TEMPORARY or TEMP
If specified, the view is created as a temporary view. Temporary views are automatically dropped at the end of the current session. Existing permanent relations with the same name are not visible to the current session while the temporary view exists, unless they are referenced with schema-qualified names.
If any of the tables referenced by the view are temporary, the view is created as a temporary view (whether TEMPORARY is specified or not).
RECURSIVE
Creates a recursive view. The syntax
is equivalent to
A view column name list must be specified for a recursive view.
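The equivalence described above can be sketched as follows (the original synopsis appears to have been dropped during extraction):

```sql
CREATE RECURSIVE VIEW [ schema . ] view_name (column_names) AS SELECT ...;
-- is equivalent to:
CREATE VIEW [ schema . ] view_name AS
    WITH RECURSIVE view_name (column_names) AS (SELECT ...)
    SELECT column_names FROM view_name;
```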
name
The name (optionally schema-qualified) of the view to be created.
column_name
An optional list of names to be used for columns of the view. If not given, the column names are deduced from the query.
WITH ( view_option_name [= view_option_value] [, ... ] )
This clause specifies optional parameters for a view; the following parameters are supported:
check_option (string)
This parameter may be either local or cascaded, and is equivalent to specifying WITH [ CASCADED | LOCAL ] CHECK OPTION (see below). This option can be changed on existing views using ALTER VIEW.
security_barrier (boolean)
This should be used if the view is intended to provide row-level security. See Section 41.5 for details.
security_invoker (boolean)
This option causes the underlying base relations to be checked against the privileges of the user of the view rather than the view owner. See the notes below for full details.
All of the above options can be changed on existing views using ALTER VIEW.
query
A SELECT or VALUES command which will provide the columns and rows of the view.
WITH [ CASCADED | LOCAL ] CHECK OPTION
This option controls the behavior of automatically updatable views. When this option is specified, INSERT and UPDATE commands on the view will be checked to ensure that new rows satisfy the view-defining condition (that is, the new rows are checked to ensure that they are visible through the view). If they are not, the update will be rejected. If the CHECK OPTION is not specified, INSERT and UPDATE commands on the view are allowed to create rows that are not visible through the view. The following check options are supported:
LOCAL
New rows are only checked against the conditions defined directly in the view itself. Any conditions defined on underlying base views are not checked (unless they also specify the CHECK OPTION).
CASCADED
New rows are checked against the conditions of the view and all underlying base views. If the CHECK OPTION is specified, and neither LOCAL nor CASCADED is specified, then CASCADED is assumed.
The CHECK OPTION may not be used with RECURSIVE views.
Note that the CHECK OPTION is only supported on views that are automatically updatable, and do not have INSTEAD OF triggers or INSTEAD rules. If an automatically updatable view is defined on top of a base view that has INSTEAD OF triggers, then the LOCAL CHECK OPTION may be used to check the conditions on the automatically updatable view, but the conditions on the base view with INSTEAD OF triggers will not be checked (a cascaded check option will not cascade down to a trigger-updatable view, and any check options defined directly on a trigger-updatable view will be ignored). If the view or any of its base relations has an INSTEAD rule that causes the INSERT or UPDATE command to be rewritten, then all check options will be ignored in the rewritten query, including any checks from automatically updatable views defined on top of the relation with the INSTEAD rule.
Use the DROP VIEW statement to remove views.
Be careful that the names and types of the view's columns will be assigned the way you want. For example:
is bad form, because the column name defaults to ?column?; also, the column data type defaults to text, which might not be what you wanted. Better style for a string literal in a view's result is something like:
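The two forms being contrasted look like this (the code block appears to have been dropped during extraction; reconstructed to match the surrounding description):

```sql
CREATE VIEW vista AS SELECT 'Hello World';                 -- bad form
CREATE VIEW vista AS SELECT text 'Hello World' AS hello;   -- better
```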
Access to tables referenced in the view is determined by the permissions of the view owner. In some cases, this can be used to provide secure but restricted access to the underlying tables. However, not all views are secure against tampering; see Section 41.5 for full details. Functions called in the view are treated the same as if they had been called directly from the query using the view. Therefore, the user of a view must have permissions to call all functions used by the view.
If the view has the security_invoker property set to true, access to the underlying base relations is instead determined by the permissions of the user executing the query, rather than the view owner. Thus, the user of a security invoker view must have the relevant permissions on the view and its underlying base relations.
If any of the underlying base relations is a security invoker view, it will be treated as if it had been accessed directly from the original query. Thus, a security invoker view will always check its underlying base relations using the permissions of the current user, even if it is accessed from a view without the security_invoker
property.
If any of the underlying base relations has row-level security enabled, then by default, the row-level security policies of the view owner are applied, and access to any additional relations referred to by those policies is determined by the permissions of the view owner. However, if the view has security_invoker
set to true
, then the policies and permissions of the invoking user are used instead, as if the base relations had been referenced directly from the query using the view.
Functions called in the view are treated the same as if they had been called directly from the query using the view. Therefore, the user of a view must have permissions to call all functions used by the view. Functions in the view are executed with the privileges of the user executing the query or the function owner, depending on whether the functions are defined as SECURITY INVOKER
or SECURITY DEFINER
. Thus, for example, calling CURRENT_USER
directly in a view will always return the invoking user, not the view owner. This is not affected by the view's security_invoker
setting, and so a view with security_invoker
set to false
is not equivalent to a SECURITY DEFINER
function and those concepts should not be confused.
The user creating or replacing a view must have USAGE
privileges on any schemas referred to in the view query, in order to look up the referenced objects in those schemas. Note, however, that this lookup only happens when the view is created or replaced. Therefore, the user of the view only requires the USAGE
privilege on the schema containing the view, not on the schemas referred to in the view query, even for a security invoker view.
When CREATE OR REPLACE VIEW is used on an existing view, only the view's defining SELECT rule is changed. Other view properties, including ownership, permissions, and non-SELECT rules, remain unchanged. You must own the view to replace it (this includes being a member of the owning role).
Simple views are automatically updatable: the system will allow INSERT, UPDATE, and DELETE statements to be used on the view in the same way as on a regular table. A view is automatically updatable if it satisfies all of the following conditions:
The view must have exactly one entry in its FROM list, which must be a table or another updatable view.
The view definition must not contain WITH, DISTINCT, GROUP BY, HAVING, LIMIT, or OFFSET clauses at the top level.
The view definition must not contain set operations (UNION, INTERSECT, or EXCEPT) at the top level.
The view's select list must not contain any aggregates, window functions, or set-returning functions.
An automatically updatable view can contain a mix of updatable and non-updatable columns. A column is updatable if it is a simple reference to an updatable column of the underlying base relation; otherwise the column is read-only, and an error will be raised if an INSERT or UPDATE statement attempts to assign a value to it.
If the view is automatically updatable, the system will convert any INSERT, UPDATE, or DELETE statement on the view into the corresponding statement on the underlying base relation. INSERT statements that have an ON CONFLICT UPDATE clause are fully supported.
If an automatically updatable view contains a WHERE condition, the condition restricts which rows of the base relation are available to be modified by UPDATE and DELETE statements on the view. However, an UPDATE is allowed to change a row so that it no longer satisfies the WHERE condition, and hence is no longer visible through the view. Similarly, an INSERT command can potentially insert base-relation rows that do not satisfy the WHERE condition and hence are not visible through the view (ON CONFLICT UPDATE may similarly affect an existing row not visible through the view). The CHECK OPTION may be used to prevent INSERT and UPDATE commands from creating such rows that are not visible through the view.
If an automatically updatable view is marked with the security_barrier property, then all of the view's WHERE conditions (and any conditions using operators that are marked LEAKPROOF) will always be evaluated before any conditions added by a user of the view. See Section 41.5 for full details. Note that, because of this, rows that are not ultimately returned (because they do not pass the user's WHERE conditions) may still end up being locked. EXPLAIN can be used to see which conditions are applied at the relation level (and therefore do not lock rows) and which are not.
A more complex view that does not satisfy all these conditions is read-only by default: the system will not allow an insert, update, or delete on the view. You can get the effect of an updatable view by creating INSTEAD OF triggers on the view, which must convert attempted inserts, etc. on the view into appropriate actions on other tables. For more information see CREATE TRIGGER. Another possibility is to create rules (see CREATE RULE), but in practice triggers are easier to understand and use correctly.
Note that the user performing the insert, update, or delete on the view must have the corresponding insert, update, or delete privilege on the view. In addition, the view's owner must have the relevant privileges on the underlying base relations, but the user performing the update does not need any permissions on the underlying base relations (see Section 41.5).
Create a view consisting of all comedy films:
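The SQL for this example (the code block appears to have been lost in extraction; reconstructed to match the surrounding description):

```sql
CREATE VIEW comedies AS
    SELECT *
    FROM films
    WHERE kind = 'Comedy';
```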
This will create a view containing the columns that are in the film table at the moment of view creation. Though * was used to create the view, columns added later to the table will not be part of the view.
Create a view with LOCAL CHECK OPTION:
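A sketch of the SQL for this example (the original code block appears to have been lost in extraction; the view name is illustrative):

```sql
CREATE VIEW universal_comedies AS
    SELECT *
    FROM comedies
    WHERE classification = 'U'
    WITH LOCAL CHECK OPTION;
```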
This will create a view based on the comedies view, showing only films with kind = 'Comedy' and classification = 'U'. Any attempt to INSERT or UPDATE a row in the view will be rejected if the new row does not have classification = 'U', but the film kind will not be checked.
Create a view with CASCADED CHECK OPTION:
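A sketch of the SQL for this example (the original code block appears to have been lost in extraction; the view name and classification value are illustrative):

```sql
CREATE VIEW pg_comedies AS
    SELECT *
    FROM comedies
    WHERE classification = 'PG'
    WITH CASCADED CHECK OPTION;
```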
This will create a view that checks both the kind and classification of new rows.
Create a view with a mix of updatable and non-updatable columns:
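A sketch of the SQL for this example (the original code block appears to have been lost in extraction; country_code_to_name and user_ratings are assumed helper objects implied by the surrounding text):

```sql
CREATE VIEW comedies AS
    SELECT f.*,
           country_code_to_name(f.country_code) AS country,
           (SELECT avg(r.rating)
            FROM user_ratings r
            WHERE r.film_id = f.id) AS avg_rating
    FROM films f
    WHERE f.kind = 'Comedy';
```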
This view will support INSERT, UPDATE, and DELETE. All the columns from the films table will be updatable, whereas the computed columns country and avg_rating will be read-only.
Create a recursive view consisting of the numbers from 1 to 100:
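A sketch of the SQL for this example (the original code block appears to have been lost in extraction):

```sql
CREATE RECURSIVE VIEW public.nums_1_100 (n) AS
    VALUES (1)
UNION ALL
    SELECT n + 1 FROM nums_1_100 WHERE n < 100;
```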
Notice that although the recursive view's name is schema-qualified in this CREATE, its internal self-reference is not schema-qualified. This is because the implicitly-created CTE's name cannot be schema-qualified.
CREATE OR REPLACE VIEW is a PostgreSQL language extension. So is the concept of a temporary view. The WITH ( ... ) clause is an extension as well.
DELETE — delete rows of a table
DELETE deletes rows that satisfy the WHERE clause from the specified table. If the WHERE clause is absent, the effect is to delete all rows in the table. The result is a valid, but empty, table.
TRUNCATE provides a faster mechanism to remove all rows from a table.
There are two ways to delete rows in a table using information contained in other tables in the database: using sub-selects, or specifying additional tables in the USING clause. Which technique is more appropriate depends on the specific circumstances.
The optional RETURNING clause causes DELETE to compute and return value(s) based on each row actually deleted. Any expression using the table's columns, and/or columns of other tables mentioned in USING, can be computed. The syntax of the RETURNING list is identical to that of the output list of SELECT.
You must have the DELETE privilege on the table to delete from it, as well as the SELECT privilege for any table in the USING clause or whose values are read in the condition.
with_query
The WITH clause allows you to specify one or more subqueries that can be referenced by name in the DELETE query. See Section 7.8 and SELECT for details.
table_name
The name (optionally schema-qualified) of the table to delete rows from. If ONLY is specified before the table name, matching rows are deleted from the named table only. If ONLY is not specified, matching rows are also deleted from any tables inheriting from the named table. Optionally, * can be specified after the table name to explicitly indicate that descendant tables are included.
alias
A substitute name for the target table. When an alias is provided, it completely hides the actual name of the table. For example, given DELETE FROM foo AS f, the remainder of the DELETE statement must refer to this table as f, not foo.
using_list
A list of table expressions, allowing columns from other tables to appear in the WHERE condition. This is similar to the list of tables that can be specified in the FROM clause of a SELECT statement; for example, an alias for the table name can be specified. Do not repeat the target table in the using_list, unless you wish to set up a self-join.
condition
An expression that returns a value of type boolean. Only rows for which this expression returns true will be deleted.
cursor_name
The name of the cursor to use in a WHERE CURRENT OF condition. The row to be deleted is the one most recently fetched from this cursor. The cursor must be a non-grouping query on the DELETE's target table. Note that WHERE CURRENT OF cannot be specified together with a Boolean condition. See DECLARE for more information about using cursors with WHERE CURRENT OF.
output_expression
An expression to be computed and returned by the DELETE command after each row is deleted. The expression can use any column names of the table named by table_name or table(s) listed in USING. Write * to return all columns.
output_name
A name to use for a returned column.
On successful completion, a DELETE command returns a command tag of the form DELETE count. The count is the number of rows deleted. Note that the number may be less than the number of rows that matched the condition when deletes were suppressed by a BEFORE DELETE trigger. If count is 0, no rows were deleted by the query (this is not considered an error).
If the DELETE command contains a RETURNING clause, the result will be similar to that of a SELECT statement containing the columns and values defined in the RETURNING list, computed over the row(s) deleted by the command.
PostgreSQL lets you reference columns of other tables in the WHERE condition by specifying the other tables in the USING clause. For example, to delete all films produced by a given producer, one can do:
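A sketch of the join-style form (the code block appears to have been lost in extraction; table and column names follow the surrounding text):

```sql
DELETE FROM films USING producers
    WHERE producer_id = producers.id AND producers.name = 'foo';
```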
What is essentially happening here is a join between films and producers, with all successfully joined films rows being marked for deletion. This syntax is not standard. A more standard way to do it is:
In some cases the join style is easier to write or faster to execute than the sub-select style.
Delete all films but musicals:
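A sketch of the SQL for this example (the original code block appears to have been lost in extraction):

```sql
DELETE FROM films WHERE kind <> 'Musical';
```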
Clear the table films:
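A sketch of the SQL for this example (the original code block appears to have been lost in extraction):

```sql
DELETE FROM films;
```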
Delete completed tasks, returning full details of the deleted rows:
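A sketch of the SQL for this example (the original code block appears to have been lost in extraction; the status value is illustrative):

```sql
DELETE FROM tasks WHERE status = 'DONE' RETURNING *;
```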
Delete the row of tasks on which the cursor c_tasks is currently positioned:
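A sketch of the SQL for this example (the original code block appears to have been lost in extraction):

```sql
DELETE FROM tasks WHERE CURRENT OF c_tasks;
```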
This command conforms to the SQL standard, except that the USING and RETURNING clauses are PostgreSQL extensions, as is the ability to use WITH with DELETE.
CREATE USER MAPPING — define a new mapping of a user to a foreign server
CREATE USER MAPPING defines a mapping of a user to a foreign server. A user mapping typically encapsulates connection information that a foreign-data wrapper uses together with the information encapsulated by a foreign server to access an external data resource.
The owner of a foreign server can create user mappings for that server for any user. Also, a user can create a user mapping for their own user name if USAGE privilege on the server has been granted to the user.
IF NOT EXISTS
Do not throw an error if a mapping of the given user to the given foreign server already exists. A notice is issued in this case. Note that there is no guarantee that the existing user mapping is anything like the one that would have been created.
user_name
The name of an existing user that is mapped to foreign server. CURRENT_USER and USER match the name of the current user. When PUBLIC is specified, a so-called public mapping is created that is used when no user-specific mapping is applicable.
server_name
The name of an existing server for which the user mapping is to be created.
OPTIONS ( option 'value' [, ... ] )
This clause specifies the options of the user mapping. The options typically define the actual user name and password of the mapping. Option names must be unique. The allowed option names and values are specific to the server's foreign-data wrapper.
Create a user mapping for user bob, server foo:
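A sketch of the SQL for this example (the original code block appears to have been lost in extraction; the option names are those typically understood by a foreign-data wrapper, and the password is illustrative):

```sql
CREATE USER MAPPING FOR bob SERVER foo
    OPTIONS (user 'bob', password 'secret');
```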
CREATE USER MAPPING conforms to ISO/IEC 9075-9 (SQL/MED).
Version: 11
DO — execute an anonymous code block
DO executes an anonymous code block, or in other words a transient anonymous function in a procedural language.
The code block is treated as though it were the body of a function with no parameters, returning void. It is parsed and executed a single time.
The optional LANGUAGE clause can be written either before or after the code block.
code
The procedural language code to be executed. This must be specified as a string literal, just as in CREATE FUNCTION. Use of a dollar-quoted literal is recommended.
lang_name
The name of the procedural language the code is written in. If omitted, the default is plpgsql.
The procedural language to be used must already have been installed into the current database by means of CREATE LANGUAGE. plpgsql is installed by default, but other languages are not.
The user must have USAGE privilege for the procedural language, or must be a superuser if the language is untrusted. This is the same privilege requirement as for creating a function in the language.
Grant all privileges on all views in schema public to role webuser:
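A sketch of the PL/pgSQL block for this example (the original code block appears to have been lost in extraction):

```sql
DO $$DECLARE r record;
BEGIN
    -- Loop over every view in schema public and grant privileges to webuser.
    FOR r IN SELECT table_schema, table_name FROM information_schema.tables
             WHERE table_type = 'VIEW' AND table_schema = 'public'
    LOOP
        EXECUTE 'GRANT ALL ON ' || quote_ident(r.table_schema) || '.'
                || quote_ident(r.table_name) || ' TO webuser';
    END LOOP;
END$$;
```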
There is no DO statement in the SQL standard.
Version: 11
CREATE TYPE — define a new data type
CREATE TYPE registers a new data type for use in the current database. The user who defines a type becomes its owner.
If a schema name is given, the type is created in the specified schema. Otherwise it is created in the current schema. The type name must be distinct from the name of any existing type or domain in the same schema. (Because tables have associated data types, the type name must also be distinct from the name of any existing table in the same schema.)
There are five forms of CREATE TYPE, as shown in the syntax synopsis above. They respectively create a composite type, an enum type, a range type, a base type, or a shell type. The first four of these are discussed in turn below. A shell type is simply a placeholder for a type to be defined later; it is created by issuing CREATE TYPE with no parameters except for the type name. Shell types are needed as forward references when creating range types and base types, as discussed in the sections below.
The first form of CREATE TYPE creates a composite type. A composite type is specified by a list of attribute names and data types. An attribute's collation can be specified too, if its data type is collatable. A composite type is essentially the same as the row type of a table, but using CREATE TYPE avoids the need to create an actual table when all that is wanted is to define a type. A stand-alone composite type is useful, for example, as the argument or return type of a function.
To be able to create a composite type, you must have USAGE privilege on all of its attribute types.
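For instance, a stand-alone composite type might be defined like this (the type and attribute names are illustrative):

```sql
CREATE TYPE complex AS (
    r double precision,
    i double precision
);
```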
The second form of CREATE TYPE creates an enumerated (enum) type, as described in Section 8.7. Enum types take a list of one or more quoted labels, each of which must be less than NAMEDATALEN characters long (64 characters in a standard PostgreSQL build). (It is possible to create an enumerated type with zero labels, but such a type cannot be used until at least one label is added using ALTER TYPE.)
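For instance (the type name and labels are illustrative):

```sql
CREATE TYPE mood AS ENUM ('sad', 'ok', 'happy');
```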
The third form of CREATE TYPE creates a new range type, as described in Section 8.17.
The range type's subtype can be any type with an associated b-tree operator class (used to determine the ordering of values for the range type). Normally the subtype's default b-tree operator class is used to determine ordering; to use a non-default operator class, specify its name with subtype_opclass. If the subtype is collatable, and you want to use a non-default collation in the range's ordering, specify the desired collation with the collation option.
The optional canonical function must take one argument of the range type being defined, and return a value of the same type. This is used, when applicable, to convert range values to a canonical form. See Section 8.17.8 for more information. Creating a canonical function is a bit tricky, since it must be defined before the range type can be declared. To do this, you must first create a shell type, which is a placeholder type that has no properties except a name and an owner. This is done by issuing the command CREATE TYPE name, with no additional parameters. Then the function can be declared using the shell type as argument and result, and finally the range type can be declared with the same name. This automatically replaces the shell type entry with a valid range type.
The optional subtype_diff function must take two values of the subtype as arguments, and return a double precision value representing the difference between the two given values. While this is optional, providing it allows much greater efficiency of GiST indexes on columns of the range type. See Section 8.17.8 for more information.
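For instance, a range type over float8 might be declared like this (the type name is illustrative; float8mi is the built-in float8 subtraction function):

```sql
CREATE TYPE float8_range AS RANGE (
    subtype = float8,
    subtype_diff = float8mi
);
```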
The fourth form of CREATE TYPE
creates a new base type (scalar type). To create a new base type, you must be a superuser. (This restriction is made because an erroneous type definition could confuse or even crash the server.)
The parameters can appear in any order, not only that illustrated above, and most are optional. You must register two or more functions (using CREATE FUNCTION
) before defining the type. The support functions input_function
and output_function
are required, while the functions receive_function
, send_function
, type_modifier_input_function
, type_modifier_output_function
and analyze_function
are optional. Generally these functions have to be coded in C or another low-level language.
The input_function
converts the type's external textual representation to the internal representation used by the operators and functions defined for the type. output_function
performs the reverse transformation. The input function can be declared as taking one argument of type cstring
, or as taking three arguments of types cstring
, oid
, integer
. The first argument is the input text as a C string, the second argument is the type's own OID (except for array types, which instead receive their element type's OID), and the third is the typmod
of the destination column, if known (-1 will be passed if not). The input function must return a value of the data type itself. Usually, an input function should be declared STRICT; if it is not, it will be called with a NULL first parameter when reading a NULL input value. The function must still return NULL in this case, unless it raises an error. (This case is mainly meant to support domain input functions, which might need to reject NULL inputs.) The output function must be declared as taking one argument of the new data type. The output function must return type cstring
. Output functions are not invoked for NULL values.
The optional receive_function
converts the type's external binary representation to the internal representation. If this function is not supplied, the type cannot participate in binary input. The binary representation should be chosen to be cheap to convert to internal form, while being reasonably portable. (For example, the standard integer data types use network byte order as the external binary representation, while the internal representation is in the machine's native byte order.) The receive function should perform adequate checking to ensure that the value is valid. The receive function can be declared as taking one argument of type internal
, or as taking three arguments of types internal
, oid
, integer
. The first argument is a pointer to a StringInfo
buffer holding the received byte string; the optional arguments are the same as for the text input function. The receive function must return a value of the data type itself. Usually, a receive function should be declared STRICT; if it is not, it will be called with a NULL first parameter when reading a NULL input value. The function must still return NULL in this case, unless it raises an error. (This case is mainly meant to support domain receive functions, which might need to reject NULL inputs.) Similarly, the optional send_function
converts from the internal representation to the external binary representation. If this function is not supplied, the type cannot participate in binary output. The send function must be declared as taking one argument of the new data type. The send function must return type bytea
. Send functions are not invoked for NULL values.
You should at this point be wondering how the input and output functions can be declared to have results or arguments of the new type, when they have to be created before the new type can be created. The answer is that the type should first be defined as a shell type, which is a placeholder type that has no properties except a name and an owner. This is done by issuing the command CREATE TYPE
name
, with no additional parameters. Then the C I/O functions can be defined referencing the shell type. Finally, CREATE TYPE
with a full definition replaces the shell entry with a complete, valid type definition, after which the new type can be used normally.
The optional type_modifier_input_function
and type_modifier_output_function
are needed if the type supports modifiers, that is optional constraints attached to a type declaration, such as char(5)
or numeric(30,2)
. PostgreSQL allows user-defined types to take one or more simple constants or identifiers as modifiers. However, this information must be capable of being packed into a single non-negative integer value for storage in the system catalogs. The type_modifier_input_function
is passed the declared modifier(s) in the form of a cstring
array. It must check the values for validity (throwing an error if they are wrong), and if they are correct, return a single non-negative integer
value that will be stored as the column “typmod”. Type modifiers will be rejected if the type does not have a type_modifier_input_function
. The type_modifier_output_function
converts the internal integer typmod value back to the correct form for user display. It must return a cstring
value that is the exact string to append to the type name; for example numeric
's function might return (30,2)
. It is allowed to omit the type_modifier_output_function
, in which case the default display format is just the stored typmod integer value enclosed in parentheses.
The optional analyze_function
performs type-specific statistics collection for columns of the data type. By default, ANALYZE
will attempt to gather statistics using the type's “equals” and “less-than” operators, if there is a default b-tree operator class for the type. For non-scalar types this behavior is likely to be unsuitable, so it can be overridden by specifying a custom analysis function. The analysis function must be declared to take a single argument of type internal
, and return a boolean
result. The detailed API for analysis functions appears in src/include/commands/vacuum.h
.
While the details of the new type's internal representation are only known to the I/O functions and other functions you create to work with the type, there are several properties of the internal representation that must be declared to PostgreSQL. Foremost of these is internallength
. Base data types can be fixed-length, in which case internallength
is a positive integer, or variable-length, indicated by setting internallength
to VARIABLE
. (Internally, this is represented by setting typlen
to -1.) The internal representation of all variable-length types must start with a 4-byte integer giving the total length of this value of the type. (Note that the length field is often encoded, as described in Section 68.2; it's unwise to access it directly.)
The optional flag PASSEDBYVALUE
indicates that values of this data type are passed by value, rather than by reference. Types passed by value must be fixed-length, and their internal representation cannot be larger than the size of the Datum
type (4 bytes on some machines, 8 bytes on others).
The alignment
parameter specifies the storage alignment required for the data type. The allowed values equate to alignment on 1, 2, 4, or 8 byte boundaries. Note that variable-length types must have an alignment of at least 4, since they necessarily contain an int4
as their first component.
The storage
parameter allows selection of storage strategies for variable-length data types. (Only plain
is allowed for fixed-length types.) plain
specifies that data of the type will always be stored in-line and not compressed. extended
specifies that the system will first try to compress a long data value, and will move the value out of the main table row if it's still too long. external
allows the value to be moved out of the main table, but the system will not try to compress it. main
allows compression, but discourages moving the value out of the main table. (Data items with this storage strategy might still be moved out of the main table if there is no other way to make a row fit, but they will be kept in the main table preferentially over extended
and external
items.)
All storage
values other than plain
imply that the functions of the data type can handle values that have been toasted, as described in Section 68.2 and Section 37.13.1. The specific other value given merely determines the default TOAST storage strategy for columns of a toastable data type; users can pick other strategies for individual columns using ALTER TABLE SET STORAGE
.
The like_type
parameter provides an alternative method for specifying the basic representation properties of a data type: copy them from some existing type. The values of internallength
, passedbyvalue
, alignment
, and storage
are copied from the named type. (It is possible, though usually undesirable, to override some of these values by specifying them along with the LIKE
clause.) Specifying representation this way is especially useful when the low-level implementation of the new type “piggybacks” on an existing type in some fashion.
The category
and preferred
parameters can be used to help control which implicit cast will be applied in ambiguous situations. Each data type belongs to a category named by a single ASCII character, and each type is either “preferred” or not within its category. The parser will prefer casting to preferred types (but only from other types within the same category) when this rule is helpful in resolving overloaded functions or operators. For more details see Chapter 10. For types that have no implicit casts to or from any other types, it is sufficient to leave these settings at the defaults. However, for a group of related types that have implicit casts, it is often helpful to mark them all as belonging to a category and select one or two of the “most general” types as being preferred within the category. The category
parameter is especially useful when adding a user-defined type to an existing built-in category, such as the numeric or string types. However, it is also possible to create new entirely-user-defined type categories. Select any ASCII character other than an upper-case letter to name such a category.
A default value can be specified, in case a user wants columns of the data type to default to something other than the null value. Specify the default with the DEFAULT
key word. (Such a default can be overridden by an explicit DEFAULT
clause attached to a particular column.)
To indicate that a type is an array, specify the type of the array elements using the ELEMENT
key word. For example, to define an array of 4-byte integers (int4
), specify ELEMENT = int4
. More details about array types appear below.
To indicate the delimiter to be used between values in the external representation of arrays of this type, delimiter
can be set to a specific character. The default delimiter is the comma (,
). Note that the delimiter is associated with the array element type, not the array type itself.
If the optional Boolean parameter collatable
is true, column definitions and expressions of the type may carry collation information through use of the COLLATE
clause. It is up to the implementations of the functions operating on the type to actually make use of the collation information; this does not happen automatically merely by marking the type collatable.
Whenever a user-defined type is created, PostgreSQL automatically creates an associated array type, whose name consists of the element type's name prepended with an underscore, truncated if necessary to keep it less than NAMEDATALEN characters long. (If the name so generated collides with an existing type name, the process is repeated until a non-colliding name is found.) This implicitly-created array type is variable length and uses the built-in input and output functions array_in and array_out. The array type tracks any changes in its element type's owner or schema, and is dropped if the element type is.
You might reasonably ask why there is an ELEMENT option, if the system makes the correct array type automatically. The only case where it's useful to use ELEMENT is when you are making a fixed-length type that happens to be internally an array of a number of identical things, and you want to allow these things to be accessed directly by subscripting, in addition to whatever operations you plan to provide for the type as a whole. For example, type point is represented as just two floating point numbers, which can be accessed using point[0] and point[1]. Note that this facility only works for fixed-length types whose internal form is exactly a sequence of identical fixed-length fields. A subscriptable variable-length type must have the generalized internal representation used by array_in and array_out. For historical reasons (i.e., this is clearly wrong but it's far too late to change it), subscripting of fixed-length array types starts from zero, rather than from one as for variable-length arrays.
name
The name (optionally schema-qualified) of a type to be created.
attribute_name
The name of an attribute (column) for the composite type.
data_type
The name of an existing data type to become a column of the composite type.
collation
The name of an existing collation to be associated with a column of a composite type, or with a range type.
label
A string literal representing the textual label associated with one value of an enum type.
subtype
The name of the element type that the range type will represent ranges of.
subtype_operator_class
The name of a b-tree operator class for the subtype.
canonical_function
The name of the canonicalization function for the range type.
subtype_diff_function
The name of a difference function for the subtype.
input_function
The name of a function that converts data from the type's external textual form to its internal form.
output_function
The name of a function that converts data from the type's internal form to its external textual form.
receive_function
The name of a function that converts data from the type's external binary form to its internal form.
send_function
The name of a function that converts data from the type's internal form to its external binary form.
type_modifier_input_function
The name of a function that converts an array of modifier(s) for the type into internal form.
type_modifier_output_function
The name of a function that converts the internal form of the type's modifier(s) to external textual form.
analyze_function
The name of a function that performs statistical analysis for the data type.
internallength
A numeric constant that specifies the length in bytes of the new type's internal representation. The default assumption is that it is variable-length.
alignment
The storage alignment requirement of the data type. If specified, it must be char, int2, int4, or double; the default is int4.
storage
The storage strategy for the data type. If specified, must be plain, external, extended, or main; the default is plain.
like_type
The name of an existing data type that the new type will have the same representation as. The values of internallength, passedbyvalue, alignment, and storage are copied from that type, unless overridden by explicit specification elsewhere in this CREATE TYPE command.
category
The category code (a single ASCII character) for this type. The default is 'U' for “user-defined type”. Other standard category codes can be found in Table 51.64. You may also choose other ASCII characters in order to create custom categories.
preferred
True if this type is a preferred type within its type category, else false. The default is false. Be very careful about creating a new preferred type within an existing type category, as this could cause surprising changes in behavior.
default
The default value for the data type. If this is omitted, the default is null.
element
The type being created is an array; this specifies the type of the array elements.
delimiter
The delimiter character to be used between values in arrays made of this type.
collatable
True if this type's operations can use collation information. The default is false.
Because there are no restrictions on use of a data type once it's been created, creating a base type or range type is tantamount to granting public execute permission on the functions mentioned in the type definition. This is usually not an issue for the sorts of functions that are useful in a type definition. But you might want to think twice before designing a type in a way that would require “secret” information to be used while converting it to or from external form.
Before PostgreSQL version 8.3, the name of a generated array type was always exactly the element type's name with one underscore character (_) prepended. (Type names were therefore restricted in length to one less character than other names.) While this is still usually the case, the array type name may vary from this in case of maximum-length names or collisions with user type names that begin with underscore. Writing code that depends on this convention is therefore deprecated. Instead, use pg_type.typarray to locate the array type associated with a given type.
It may be advisable to avoid using type and table names that begin with underscore. While the server will change generated array type names to avoid collisions with user-given names, there is still risk of confusion, particularly with old client software that may assume that type names beginning with underscores always represent arrays.
Before PostgreSQL version 8.2, the shell-type creation syntax CREATE TYPE name did not exist. The way to create a new base type was to create its input function first. In this approach, PostgreSQL will first see the name of the new data type as the return type of the input function. The shell type is implicitly created in this situation, and then it can be referenced in the definitions of the remaining I/O functions. This approach still works, but is deprecated and might be disallowed in some future release. Also, to avoid accidentally cluttering the catalogs with shell types as a result of simple typos in function definitions, a shell type will only be made this way when the input function is written in C.
In PostgreSQL versions before 7.3, it was customary to avoid creating a shell type at all, by replacing the functions' forward references to the type name with the placeholder pseudo-type opaque. The cstring arguments and results also had to be declared as opaque before 7.3. To support loading of old dump files, CREATE TYPE will accept I/O functions declared using opaque, but it will issue a notice and change the function declarations to use the correct types.
This example creates a composite type and uses it in a function definition:
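A minimal sketch of such a pair of statements (the names compfoo, getfoo, and the underlying table foo are illustrative assumptions):

```sql
-- A composite type with two fields
CREATE TYPE compfoo AS (f1 int, f2 text);

-- A set-returning function whose result type is the composite type;
-- assumes a table foo(fooid int, fooname text) already exists.
CREATE FUNCTION getfoo() RETURNS SETOF compfoo AS $$
    SELECT fooid, fooname FROM foo
$$ LANGUAGE SQL;
```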
This example creates an enumerated type and uses it in a table definition:
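For instance, along these lines (the names are illustrative):

```sql
CREATE TYPE bug_status AS ENUM ('new', 'open', 'closed');

CREATE TABLE bug (
    id serial,
    description text,
    status bug_status
);
```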
This example creates a range type:
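A sketch of a range type over float8, using the built-in float8mi subtraction function as the subtype difference function:

```sql
CREATE TYPE float8_range AS RANGE (
    subtype = float8,
    subtype_diff = float8mi
);
```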
This example creates the base data type box and then uses the type in a table definition:
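A sketch of the definition; the I/O functions my_box_in_function and my_box_out_function are assumed to have been created beforehand as C-language functions:

```sql
CREATE TYPE box (
    INTERNALLENGTH = 16,
    INPUT = my_box_in_function,
    OUTPUT = my_box_out_function
);

CREATE TABLE myboxes (
    id integer,
    description box
);
```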
If the internal structure of box were an array of four float4 elements, we might instead use:
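That is, the same definition with an ELEMENT clause added (same assumed I/O functions as above):

```sql
CREATE TYPE box (
    INTERNALLENGTH = 16,
    INPUT = my_box_in_function,
    OUTPUT = my_box_out_function,
    ELEMENT = float4
);
```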
which would allow a box value's component numbers to be accessed by subscripting. Otherwise the type behaves the same as before.
This example creates a large object type and uses it in a table definition:
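A sketch, assuming I/O functions named lo_filein and lo_fileout already exist:

```sql
CREATE TYPE bigobj (
    INPUT = lo_filein,
    OUTPUT = lo_fileout,
    INTERNALLENGTH = VARIABLE
);

CREATE TABLE big_objs (
    id integer,
    obj bigobj
);
```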
More examples, including suitable input and output functions, can be found in Section 37.13.
The first form of the CREATE TYPE command, which creates a composite type, conforms to the SQL standard. The other forms are PostgreSQL extensions. The CREATE TYPE statement in the SQL standard also defines other forms that are not implemented in PostgreSQL.
The ability to create a composite type with zero attributes is a PostgreSQL-specific deviation from the standard (analogous to the same case in CREATE TABLE).
CREATE USER — define a new database role
CREATE USER is now an alias for CREATE ROLE. The only difference is that when the command is spelled CREATE USER, LOGIN is assumed by default, whereas NOLOGIN is assumed when the command is spelled CREATE ROLE.
The CREATE USER statement is a PostgreSQL extension. The SQL standard leaves the definition of users to the implementation.
DROP DATABASE — remove a database
DROP DATABASE drops a database. It removes the catalog entries for the database and deletes the directory containing the data. It can only be executed by the database owner. Also, it cannot be executed while you or anyone else are connected to the target database. (Connect to postgres or any other database to issue this command.)
DROP DATABASE cannot be undone. Use it with care!
IF EXISTS
Do not throw an error if the database does not exist. A notice is issued in this case.
name
The name of the database to remove.
DROP DATABASE cannot be executed inside a transaction block.
This command cannot be executed while connected to the target database. Thus, it might be more convenient to use the program dropdb instead, which is a wrapper around this command.
There is no DROP DATABASE statement in the SQL standard.
DROP EXTENSION — remove an extension
DROP EXTENSION removes extensions from the database. Dropping an extension causes its component objects to be dropped as well.
You must own the extension to use DROP EXTENSION.
IF EXISTS
Do not throw an error if the extension does not exist. A notice is issued in this case.
name
The name of an installed extension.
CASCADE
Automatically drop objects that depend on the extension, and in turn all objects that depend on those objects (see Section 5.13).
RESTRICT
Refuse to drop the extension if any objects depend on it (other than its own member objects and other extensions listed in the same DROP command). This is the default.
To remove the extension hstore from the current database:
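That is:

```sql
DROP EXTENSION hstore;
```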
This command will fail if any of hstore's objects are in use in the database, for example if any tables have columns of the hstore type. Add the CASCADE option to forcibly remove those dependent objects as well.
DROP EXTENSION is a PostgreSQL extension.
DROP LANGUAGE — remove a procedural language
DROP LANGUAGE removes the definition of a previously registered procedural language. You must be a superuser or the owner of the language to use DROP LANGUAGE.
As of PostgreSQL 9.1, most procedural languages have been made into “extensions”, and should therefore be removed with DROP EXTENSION, not DROP LANGUAGE.
IF EXISTS
Do not throw an error if the language does not exist. A notice is issued in this case.
name
The name of an existing procedural language. For backward compatibility, the name can be enclosed by single quotes.
CASCADE
Automatically drop objects that depend on the language (such as functions written in the language), and in turn all objects that depend on those objects (see Section 5.13).
RESTRICT
Refuse to drop the language if any objects depend on it. This is the default.
This command removes the procedural language plsample:
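That is:

```sql
DROP LANGUAGE plsample;
```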
There is no DROP LANGUAGE statement in the SQL standard.
DROP INDEX — remove an index
DROP INDEX drops an existing index from the database system. To execute this command you must be the owner of the index.
CONCURRENTLY
Drop the index without locking out concurrent selects, inserts, updates, and deletes on the index's table. A normal DROP INDEX acquires an exclusive lock on the table, blocking other accesses until the index drop can be completed. With this option, the command instead waits until conflicting transactions have completed.
There are several caveats to be aware of when using this option. Only one index name can be specified, and the CASCADE option is not supported. (Thus, an index that supports a UNIQUE or PRIMARY KEY constraint cannot be dropped this way.) Also, regular DROP INDEX commands can be performed within a transaction block, but DROP INDEX CONCURRENTLY cannot.
IF EXISTS
Do not throw an error if the index does not exist. A notice is issued in this case.
name
The name (optionally schema-qualified) of an index to remove.
CASCADE
Automatically drop objects that depend on the index, and in turn all objects that depend on those objects (see Section 5.13).
RESTRICT
Refuse to drop the index if any objects depend on it. This is the default.
This command will remove the index title_idx:
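That is:

```sql
DROP INDEX title_idx;
```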
DROP INDEX is a PostgreSQL language extension. There are no provisions for indexes in the SQL standard.
DROP FUNCTION — remove a function
DROP FUNCTION removes the definition of an existing function. To execute this command the user must be the owner of the function. The argument types to the function must be specified, since several different functions can exist with the same name and different argument lists.
IF EXISTS
Do not throw an error if the function does not exist. A notice is issued in this case.
name
The name (optionally schema-qualified) of an existing function. If no argument list is specified, the name must be unique in its schema.
argmode
The mode of an argument: IN, OUT, INOUT, or VARIADIC. If omitted, the default is IN. Note that DROP FUNCTION does not actually pay any attention to OUT arguments, since only the input arguments are needed to determine the function's identity. So it is sufficient to list the IN, INOUT, and VARIADIC arguments.
argname
The name of an argument. Note that DROP FUNCTION does not actually pay any attention to argument names, since only the argument data types are needed to determine the function's identity.
argtype
The data type(s) of the function's arguments (optionally schema-qualified), if any.
CASCADE
Automatically drop objects that depend on the function (such as operators or triggers), and in turn all objects that depend on those objects (see Section 5.13).
RESTRICT
Refuse to drop the function if any objects depend on it. This is the default.
This command removes the square root function:
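That is (for a square root function taking an integer argument):

```sql
DROP FUNCTION sqrt(integer);
```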
Drop multiple functions in one command:
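For example (assuming both variants exist):

```sql
DROP FUNCTION sqrt(integer), sqrt(bigint);
```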
If the function name is unique in its schema, it can be referred to without an argument list:
Note that the form without parentheses is different from the form with an empty argument list: the former can refer to a function with any number of arguments, including zero, as long as the name is unique, whereas the latter refers specifically to a function with zero arguments.
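The two forms look like this (update_employee_salaries is an illustrative name):

```sql
-- Refers to the function of this name with any number of arguments,
-- as long as the name is unique in its schema:
DROP FUNCTION update_employee_salaries;

-- Refers specifically to a function with zero arguments:
DROP FUNCTION update_employee_salaries();
```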
This command conforms to the SQL standard, with these PostgreSQL extensions:
The standard only allows one function to be dropped per command.
The IF EXISTS option
The ability to specify argument modes and names
DROP ACCESS METHOD — remove an access method
DROP ACCESS METHOD removes an existing access method. Only superusers can drop access methods.
IF EXISTS
Do not throw an error if the access method does not exist. A notice is issued in this case.
name
The name of an existing access method.
CASCADE
Automatically drop objects that depend on the access method (such as operator classes, operator families, and indexes), and in turn all objects that depend on those objects (see Section 5.14).
RESTRICT
Refuse to drop the access method if any objects depend on it. This is the default.
Drop the access method heptree:
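That is:

```sql
DROP ACCESS METHOD heptree;
```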
DROP ACCESS METHOD is a PostgreSQL extension.
DROP MATERIALIZED VIEW — remove a materialized view
DROP MATERIALIZED VIEW drops an existing materialized view. To execute this command you must be the owner of the materialized view.
IF EXISTS
Do not throw an error if the materialized view does not exist. A notice is issued in this case.
name
The name (optionally schema-qualified) of the materialized view to remove.
CASCADE
Automatically drop objects that depend on the materialized view (such as other materialized views, or regular views), and in turn all objects that depend on those objects (see Section 5.13).
RESTRICT
Refuse to drop the materialized view if any objects depend on it. This is the default.
This command will remove the materialized view called order_summary:
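That is:

```sql
DROP MATERIALIZED VIEW order_summary;
```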
DROP MATERIALIZED VIEW is a PostgreSQL extension.
DROP ROLE — remove a database role
DROP ROLE removes the specified role(s). To drop a superuser role, you must be a superuser yourself; to drop non-superuser roles, you must have CREATEROLE privilege.
A role cannot be removed if it is still referenced in any database of the cluster; an error will be raised if so. Before dropping the role, you must drop all the objects it owns (or reassign their ownership) and revoke any privileges the role has been granted on other objects. The REASSIGN OWNED and DROP OWNED commands can be useful for this purpose.
However, it is not necessary to remove role memberships involving the role; DROP ROLE automatically revokes any memberships of the target role in other roles, and of other roles in the target role. The other roles are not dropped nor otherwise affected.
IF EXISTS
Do not throw an error if the role does not exist. A notice is issued in this case.
name
The name of the role to remove.
To remove a role:
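For example (jonathan being an illustrative role name):

```sql
DROP ROLE jonathan;
```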
The SQL standard defines DROP ROLE, but it allows only one role to be dropped at a time, and it specifies different privilege requirements than PostgreSQL uses.
DROP STATISTICS — remove extended statistics
DROP STATISTICS removes statistics object(s) from the database. Only the statistics object's owner, the schema owner, or a superuser can drop a statistics object.
IF EXISTS
Do not throw an error if the statistics object does not exist. A notice is issued in this case.
name
The name (optionally schema-qualified) of the statistics object to drop.
To destroy two statistics objects in different schemas, without failing if they don't exist:
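For example (the object names are illustrative):

```sql
DROP STATISTICS IF EXISTS
    accounting.users_uid_creation,
    public.grants_user_role;
```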
There is no DROP STATISTICS command in the SQL standard.
DROP POLICY — remove a row level security policy from a table
DROP POLICY removes the specified policy from the table. Note that if the last policy is removed for a table and the table still has row-level security enabled via ALTER TABLE, then the default-deny policy will be used. ALTER TABLE ... DISABLE ROW LEVEL SECURITY can be used to disable row-level security for a table, whether policies for the table exist or not.
IF EXISTS
Do not throw an error if the policy does not exist. A notice is issued in this case.
name
The name of the policy to drop.
table_name
The name (optionally schema-qualified) of the table that the policy is on.
CASCADE
RESTRICT
These key words do not have any effect, since there are no dependencies on policies.
To drop the policy called p1 on the table named my_table:
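That is:

```sql
DROP POLICY p1 ON my_table;
```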
DROP POLICY is a PostgreSQL extension.
DROP SEQUENCE — remove a sequence
DROP SEQUENCE removes sequence number generators. A sequence can only be dropped by its owner or a superuser.
IF EXISTS
Do not throw an error if the sequence does not exist. A notice is issued in this case.
name
The name (optionally schema-qualified) of a sequence.
CASCADE
Automatically drop objects that depend on the sequence, and in turn all objects that depend on those objects (see Section 5.13).
RESTRICT
Refuse to drop the sequence if any objects depend on it. This is the default.
To remove a sequence:
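For example, to drop a sequence named serial (an illustrative name):

```sql
DROP SEQUENCE serial;
```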
DROP SEQUENCE conforms to the SQL standard, except that the standard only allows one sequence to be dropped per command, and apart from the IF EXISTS option, which is a PostgreSQL extension.
DROP SCHEMA — remove a schema
DROP SCHEMA removes schemas from the database.
A schema can only be dropped by its owner or a superuser. Note that the owner can drop the schema (and thereby all contained objects) even if they do not own some of the objects within the schema.
IF EXISTS
Do not throw an error if the schema does not exist. A notice is issued in this case.
name
The name of a schema.
CASCADE
Automatically drop objects (tables, functions, etc.) that are contained in the schema, and in turn all objects that depend on those objects (see Section 5.13).
RESTRICT
Refuse to drop the schema if it contains any objects. This is the default.
Using the CASCADE option might make the command remove objects in other schemas besides the one(s) named.
To remove schema mystuff from the database, along with everything it contains:
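That is:

```sql
DROP SCHEMA mystuff CASCADE;
```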
DROP SCHEMA is fully conforming with the SQL standard, except that the standard only allows one schema to be dropped per command, and apart from the IF EXISTS option, which is a PostgreSQL extension.
DROP OWNED — remove database objects owned by a database role
DROP OWNED drops all the objects within the current database that are owned by one of the specified roles. Any privileges granted to the given roles on objects in the current database and on shared objects (databases, tablespaces) will also be revoked.
name
The name of a role whose objects will be dropped, and whose privileges will be revoked.
CASCADE
Automatically drop objects that depend on the affected objects, and in turn all objects that depend on those objects (see Section 5.13).
RESTRICT
Refuse to drop the objects owned by a role if any other database objects depend on one of the affected objects. This is the default.
DROP OWNED is often used to prepare for the removal of one or more roles. Because DROP OWNED only affects the objects in the current database, it is usually necessary to execute this command in each database that contains objects owned by a role that is to be removed.
Using the CASCADE option might make the command recurse to objects owned by other users.
Databases and tablespaces owned by the role(s) will not be removed.
The DROP OWNED command is a PostgreSQL extension.
DROP PUBLICATION — remove a publication
DROP PUBLICATION removes an existing publication from the database.
A publication can only be dropped by its owner or a superuser.
IF EXISTS
Do not throw an error if the publication does not exist. A notice is issued in this case.
name
The name of an existing publication.
CASCADE
RESTRICT
These key words do not have any effect, since there are no dependencies on publications.
Drop a publication:
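For example (mypublication being an illustrative name):

```sql
DROP PUBLICATION mypublication;
```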
DROP PUBLICATION is a PostgreSQL extension.
DROP SUBSCRIPTION — remove a subscription
DROP SUBSCRIPTION removes a subscription from the database cluster.
A subscription can only be dropped by a superuser.
DROP SUBSCRIPTION cannot be executed inside a transaction block if the subscription is associated with a replication slot. (You can use ALTER SUBSCRIPTION to unset the slot.)
name
The name of a subscription to be dropped.
CASCADE
RESTRICT
These key words do not have any effect, since there are no dependencies on subscriptions.
When dropping a subscription that is associated with a replication slot on the remote host (the normal state), DROP SUBSCRIPTION will connect to the remote host and try to drop the replication slot as part of its operation. This is necessary so that the resources allocated for the subscription on the remote host are released. If this fails, either because the remote host is not reachable or because the remote replication slot cannot be dropped or does not exist, the DROP SUBSCRIPTION command will fail. To proceed in this situation, disassociate the subscription from the replication slot by executing ALTER SUBSCRIPTION ... SET (slot_name = NONE). After that, DROP SUBSCRIPTION will no longer attempt any actions on a remote host. Note that if the remote replication slot still exists, it should be dropped manually; otherwise it will continue to reserve WAL and might eventually cause the disk to fill up.
If a subscription is associated with a replication slot, then DROP SUBSCRIPTION cannot be executed inside a transaction block.
Drop a subscription:
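For example (mysub being an illustrative name):

```sql
DROP SUBSCRIPTION mysub;
```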
DROP SUBSCRIPTION is a PostgreSQL extension.
The REASSIGN OWNED command is an alternative that reassigns the ownership of all the database objects owned by one or more roles. However, REASSIGN OWNED does not deal with privileges for other objects.
DROP TRANSFORM — remove a transform
DROP TRANSFORM removes a previously defined transform.
To be able to drop a transform, you must own the type and the language. These are the same privileges that are required to create a transform.
IF EXISTS
Do not throw an error if the transform does not exist. A notice is issued in this case.
type_name
The name of the data type of the transform.
lang_name
The name of the language of the transform.
CASCADE
Automatically drop objects that depend on the transform, and in turn all objects that depend on those objects (see Section 5.13).
RESTRICT
Refuse to drop the transform if any objects depend on it. This is the default.
To drop the transform for type hstore and language plpythonu:
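That is:

```sql
DROP TRANSFORM FOR hstore LANGUAGE plpythonu;
```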
This form of DROP TRANSFORM is a PostgreSQL extension. See CREATE TRANSFORM for details.
DROP TRIGGER — remove a trigger
DROP TRIGGER removes an existing trigger definition. To execute this command, the current user must be the owner of the table for which the trigger is defined.
IF EXISTS
Do not throw an error if the trigger does not exist. A notice is issued in this case.
name
The name of the trigger to remove.
table_name
The name (optionally schema-qualified) of the table for which the trigger is defined.
CASCADE
Automatically drop objects that depend on the trigger, and in turn all objects that depend on those objects (see Section 5.13).
RESTRICT
Refuse to drop the trigger if any objects depend on it. This is the default.
Destroy the trigger if_dist_exists on the table films:
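That is:

```sql
DROP TRIGGER if_dist_exists ON films;
```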
The DROP TRIGGER statement in PostgreSQL is incompatible with the SQL standard. In the SQL standard, trigger names are not local to tables, so the command is simply DROP TRIGGER name.
DROP TABLE — remove a table
DROP TABLE removes tables from the database. Only the table owner, the schema owner, and superusers can drop a table. To empty a table of rows without destroying the table, use DELETE or TRUNCATE.
DROP TABLE always removes any indexes, rules, triggers, and constraints that exist for the target table. However, to drop a table that is referenced by a view or a foreign-key constraint of another table, CASCADE must be specified. (CASCADE will remove a dependent view entirely, but in the foreign-key case it will only remove the foreign-key constraint, not the other table entirely.)
IF EXISTS
Do not throw an error if the table does not exist. A notice is issued in this case.
name
The name (optionally schema-qualified) of the table to drop.
CASCADE
Automatically drop objects that depend on the table (such as views), and in turn all objects that depend on those objects (see Section 5.13).
RESTRICT
Refuse to drop the table if any objects depend on it. This is the default.
To destroy two tables, films and distributors:
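That is:

```sql
DROP TABLE films, distributors;
```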
This command conforms to the SQL standard, except that the standard only allows one table to be dropped per command, and apart from the IF EXISTS option, which is a PostgreSQL extension.
DROP VIEW — remove a view
DROP VIEW drops an existing view. To execute this command you must be the owner of the view.
IF EXISTS
Do not throw an error if the view does not exist. A notice is issued in this case.
name
The name (optionally schema-qualified) of the view to remove.
CASCADE
Automatically drop objects that depend on the view (such as other views), and in turn all objects that depend on those objects (see Section 5.13).
RESTRICT
Refuse to drop the view if any objects depend on it. This is the default.
This command will remove the view called kinds:
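That is:

```sql
DROP VIEW kinds;
```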
This command conforms to the SQL standard, except that the standard only allows one view to be dropped per command, and apart from the IF EXISTS option, which is a PostgreSQL extension.
EXECUTE — execute a prepared statement
EXECUTE is used to execute a previously prepared statement. Since prepared statements only exist for the duration of a session, the prepared statement must have been created by a PREPARE statement executed earlier in the current session.
If the PREPARE statement that created the statement specified some parameters, a compatible set of parameters must be passed to the EXECUTE statement, or else an error is raised. Note that (unlike functions) prepared statements are not overloaded based on the type or number of their parameters; the name of a prepared statement must be unique within a database session.
For more information on the creation and usage of prepared statements, see PREPARE.
name
The name of the prepared statement to execute.
parameter
The actual value of a parameter to the prepared statement. This must be an expression yielding a value that is compatible with the data type of this parameter, as was determined when the prepared statement was created.
The command tag returned by EXECUTE is that of the prepared statement, and not EXECUTE.
Examples are given in the Examples section of the PREPARE documentation.
The SQL standard includes an EXECUTE statement, but it is only for use in embedded SQL. This version of the EXECUTE statement also uses a somewhat different syntax.
DROP TYPE — remove a data type
DROP TYPE removes a user-defined data type. Only the owner of a type can remove it.
IF EXISTS
Do not throw an error if the type does not exist. A notice is issued in this case.
name
The name (optionally schema-qualified) of the data type to remove.
CASCADE
Automatically drop objects that depend on the type (such as table columns, functions, and operators), and in turn all objects that depend on those objects (see Section 5.13).
RESTRICT
Refuse to drop the type if any objects depend on it. This is the default.
To remove the data type box:
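That is:

```sql
DROP TYPE box;
```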
This command is similar to the corresponding command in the SQL standard, apart from the IF EXISTS option, which is a PostgreSQL extension. But note that much of the CREATE TYPE command and the data type extension mechanisms in PostgreSQL differ from the SQL standard.
EXPLAIN — show the execution plan of a statement
This command displays the execution plan that the PostgreSQL planner generates for the supplied statement. The execution plan shows how the table(s) referenced by the statement will be scanned (by plain sequential scan, index scan, and so on) and, if multiple tables are referenced, what join algorithms will be used to bring together the required rows from each input table.
The most critical part of the display is the estimated statement execution cost, which is the planner's guess at how long it will take to run the statement (measured in cost units that are arbitrary, but conventionally mean disk page fetches). Actually two numbers are shown: the start-up cost before the first row can be returned, and the total cost to return all the rows. For most queries the total cost is what matters, but in contexts such as a subquery in EXISTS, the planner will choose the smallest start-up cost instead of the smallest total cost (since the executor will stop after getting one row, anyway). Also, if you limit the number of rows to return with a LIMIT clause, the planner makes an appropriate interpolation between the endpoint costs to estimate which plan is really the cheapest.
The ANALYZE option causes the statement to be actually executed, not only planned. Then actual run-time statistics are added to the display, including the total elapsed time expended within each plan node (in milliseconds) and the total number of rows it actually returned. This is useful for seeing whether the planner's estimates are close to reality.
Keep in mind that the statement is actually executed when the ANALYZE option is used. Although EXPLAIN will discard any output that a SELECT would return, other side effects of the statement will happen as usual. If you wish to use EXPLAIN ANALYZE on an INSERT, UPDATE, DELETE, CREATE TABLE AS, or EXECUTE statement without letting the command affect your data, use this approach:
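The usual pattern wraps the statement in a transaction that is then rolled back:

```sql
BEGIN;
EXPLAIN ANALYZE ...;  -- the data-modifying statement being examined
ROLLBACK;
```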
Only the ANALYZE and VERBOSE options can be specified, and only in that order, without surrounding the option list in parentheses. Prior to PostgreSQL 9.0, the unparenthesized syntax was the only one supported. It is expected that all new options will be supported only in the parenthesized syntax.
ANALYZE
Carry out the command and show actual run times and other statistics. This parameter defaults to FALSE.
VERBOSE
Display additional information regarding the plan. Specifically, include the output column list for each node in the plan tree, schema-qualify table and function names, always label variables in expressions with their range table alias, and always print the name of each trigger for which statistics are displayed. This parameter defaults to FALSE.
COSTS
Include information on the estimated startup and total cost of each plan node, as well as the estimated number of rows and the estimated width of each row. This parameter defaults to TRUE.
SETTINGS
Include information on configuration parameters. Specifically, include options affecting query planning with a value different from the built-in default value. This parameter defaults to FALSE.
BUFFERS
Include information on buffer usage. Specifically, include the number of shared blocks hit, read, dirtied, and written, the number of local blocks hit, read, dirtied, and written, and the number of temp blocks read and written. A hit means that a read was avoided because the block was found already in cache when needed. Shared blocks contain data from regular tables and indexes; local blocks contain data from temporary tables and indexes; while temp blocks contain short-term working data used in sorts, hashes, Materialize plan nodes, and similar cases. The number of blocks dirtied indicates the number of previously unmodified blocks that were changed by this query; while the number of blocks written indicates the number of previously-dirtied blocks evicted from cache by this backend during query processing. The number of blocks shown for an upper-level node includes those used by all its child nodes. In text format, only non-zero values are printed. This parameter may only be used when ANALYZE is also enabled. It defaults to FALSE.
WAL
Include information on WAL record generation. Specifically, include the number of records, the number of full page images (fpi), and the amount of WAL generated in bytes. In text format, only non-zero values are printed. This parameter may only be used when ANALYZE is also enabled. It defaults to FALSE.
TIMING
Include actual startup time and time spent in each node in the output. The overhead of repeatedly reading the system clock can slow down the query significantly on some systems, so it may be useful to set this parameter to FALSE when only actual row counts, and not exact times, are needed. Run time of the entire statement is always measured, even when node-level timing is turned off with this option. This parameter may only be used when ANALYZE is also enabled. It defaults to TRUE.
SUMMARY
Include summary information (e.g., totaled timing information) after the query plan. Summary information is included by default when ANALYZE is used, but otherwise is not included by default; it can be enabled with this option. Planning time in EXPLAIN EXECUTE includes the time required to fetch the plan from the cache and the time required for re-planning, if necessary.
FORMAT
Specify the output format, which can be TEXT, XML, JSON, or YAML. Non-text output contains the same information as the text output format, but is easier for programs to parse. This parameter defaults to TEXT.
boolean
Specifies whether the selected option should be turned on or off. You can write TRUE, ON, or 1 to enable the option, and FALSE, OFF, or 0 to disable it. The boolean value can also be omitted, in which case TRUE is assumed.
statement
Any SELECT, INSERT, UPDATE, DELETE, VALUES, EXECUTE, DECLARE, CREATE TABLE AS, or CREATE MATERIALIZED VIEW AS statement, whose execution plan you wish to see.
The command's result is a textual description of the plan selected for the statement, optionally annotated with execution statistics. Section 14.1 describes the information provided.
In order to allow the PostgreSQL query planner to make reasonably informed decisions when optimizing queries, the pg_statistic data should be up-to-date for all tables used in the query. Normally the autovacuum daemon will take care of that automatically. But if a table has recently had substantial changes in its contents, you might need to do a manual ANALYZE rather than wait for autovacuum to catch up with the changes.
In order to measure the run-time cost of each node in the execution plan, the current implementation of EXPLAIN ANALYZE adds profiling overhead to query execution. As a result, running EXPLAIN ANALYZE on a query can sometimes take significantly longer than executing the query normally. The amount of overhead depends on the nature of the query, as well as the platform being used. The worst case occurs for plan nodes that in themselves require very little time per execution, and on machines that have relatively slow operating system calls for obtaining the time of day.
To show the plan for a simple query on a table with a single integer column and 10000 rows:
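For example (foo being an illustrative table name; the cost figures in the output depend on the table's actual statistics):

```sql
EXPLAIN SELECT * FROM foo;
```

The resulting plan shows a single sequential-scan node with its estimated cost, row count, and row width.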
Here is the same query, with JSON output formatting:
If there is an index and we use a query with an indexable WHERE condition, EXPLAIN might show a different plan:
Here is the same query, but in YAML format:
The XML format is left as an exercise for the reader.
Here is the same plan with cost estimates suppressed:
Here is an example of a query plan for a query using an aggregate function:
Here is an example of using EXPLAIN EXECUTE to display the execution plan for a prepared query:
Of course, the specific numbers shown here depend on the actual contents of the tables involved. Also note that the numbers, and even the selected query strategy, might vary between PostgreSQL releases due to planner improvements. In addition, the ANALYZE command uses random sampling to estimate data statistics; therefore, it is possible for cost estimates to change after a fresh ANALYZE, even if the actual distribution of data in the table has not changed.
There is no EXPLAIN statement defined in the SQL standard.
LISTEN — listen for a notification
LISTEN registers the current session as a listener on the notification channel named channel. If the current session is already registered as a listener for this notification channel, nothing is done.
Whenever the command NOTIFY channel is invoked, either by this session or another one connected to the same database, all the sessions currently listening on that notification channel are notified, and each will in turn notify its connected client application.
A session can be unregistered for a given notification channel with the UNLISTEN command. A session's listen registrations are automatically cleared when the session ends.
The method a client application must use to detect notification events depends on which PostgreSQL application programming interface it uses. With the libpq library, the application issues LISTEN as an ordinary SQL command, and then must periodically call the function PQnotifies to find out whether any notification events have been received. Other interfaces such as libpgtcl provide higher-level methods for handling notify events; indeed, with libpgtcl the application programmer should not even issue LISTEN or UNLISTEN directly. See the documentation for the interface you are using for more details.
NOTIFY contains a more extensive discussion of the use of LISTEN and NOTIFY.
channel
The name of a notification channel (any identifier).
LISTEN takes effect at transaction commit. If LISTEN or UNLISTEN is executed within a transaction that later rolls back, the set of notification channels being listened to is unchanged.
A transaction that has executed LISTEN cannot be prepared for two-phase commit.
Configure and execute a listen/notify sequence from psql:
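A sketch of the sequence; after the NOTIFY commits, the listening session receives an asynchronous notification for channel virtual:

```sql
LISTEN virtual;
NOTIFY virtual;
```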
There is no LISTEN statement in the SQL standard.
INSERT — create new rows in a table
INSERT inserts new rows into a table. One can insert one or more rows specified by value expressions, or zero or more rows resulting from a query.
The target column names can be listed in any order. If no list of column names is given at all, the default is all the columns of the table in their declared order; or the first N column names, if there are only N columns supplied by the VALUES clause or query. The values supplied by the VALUES clause or query are associated with the explicit or implicit column list left-to-right.
Each column not present in the explicit or implicit column list will be filled with a default value, either its declared default value or null if there is none.
If the expression for any column is not of the correct data type, automatic type conversion will be attempted.
ON CONFLICT can be used to specify an alternative action to raising a unique constraint or exclusion constraint violation error. (See ON CONFLICT Clause below.)
The optional RETURNING clause causes INSERT to compute and return value(s) based on each row actually inserted (or updated, if an ON CONFLICT DO UPDATE clause was used). This is primarily useful for obtaining values that were supplied by defaults, such as a serial sequence number. However, any expression using the table's columns is allowed. The syntax of the RETURNING list is identical to that of the output list of SELECT. Only rows that were successfully inserted or updated will be returned. For example, if a row was locked but not updated because an ON CONFLICT DO UPDATE ... WHERE clause condition was not satisfied, the row will not be returned.
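A sketch of the common use, retrieving a sequence-generated key (table and column names are illustrative):

```sql
CREATE TABLE distributors (did serial PRIMARY KEY, dname text);

-- RETURNING reports the default-generated key of the new row:
INSERT INTO distributors (dname)
VALUES ('XYZ Widgets')
RETURNING did;
```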
You must have INSERT privilege on a table in order to insert into it. If ON CONFLICT DO UPDATE is present, UPDATE privilege on the table is also required.
If a column list is specified, you only need INSERT privilege on the listed columns. Similarly, when ON CONFLICT DO UPDATE is specified, you only need UPDATE privilege on the column(s) that are listed to be updated. However, ON CONFLICT DO UPDATE also requires SELECT privilege on any column whose values are read in the ON CONFLICT DO UPDATE expressions or condition.
Use of the RETURNING clause requires SELECT privilege on all columns mentioned in RETURNING. If you use the query clause to insert rows from a query, you of course need to have SELECT privilege on any table or column used in the query.
This section covers parameters that may be used when only inserting new rows. Parameters exclusively used with the ON CONFLICT clause are described separately.
with_query
The WITH clause allows you to specify one or more subqueries that can be referenced by name in the INSERT query. See Section 7.8 and SELECT for details.
It is possible for the query (SELECT statement) to also contain a WITH clause. In such a case both sets of with_query can be referenced within the query, but the second one takes precedence since it is more closely nested.
table_name
The name (optionally schema-qualified) of an existing table.
alias
A substitute name for table_name. When an alias is provided, it completely hides the actual name of the table. This is particularly useful when ON CONFLICT DO UPDATE targets a table named excluded, since that will otherwise be taken as the name of the special table representing rows proposed for insertion.
column_name
The name of a column in the table named by table_name. The column name can be qualified with a subfield name or array subscript, if needed. (Inserting into only some fields of a composite column leaves the other fields null.) When referencing a column with ON CONFLICT DO UPDATE, do not include the table's name in the specification of a target column. For example, INSERT INTO table_name ... ON CONFLICT DO UPDATE SET table_name.col = 1 is invalid (this follows the general behavior for UPDATE).
OVERRIDING SYSTEM VALUE
Without this clause, it is an error to specify an explicit value (other than DEFAULT) for an identity column defined as GENERATED ALWAYS. This clause overrides that restriction.
OVERRIDING USER VALUE
If this clause is specified, then any values supplied for identity columns defined as GENERATED BY DEFAULT are ignored and the default sequence-generated values are applied.
This clause is useful for example when copying values between tables. Writing INSERT INTO tbl2 OVERRIDING USER VALUE SELECT * FROM tbl1 will copy from tbl1 all columns that are not identity columns in tbl2, while values for the identity columns in tbl2 will be generated by the sequences associated with tbl2.
DEFAULT VALUES
All columns will be filled with their default values. (An OVERRIDING clause is not permitted in this form.)
expression
An expression or value to assign to the corresponding column.
DEFAULT
The corresponding column will be filled with its default value.
query
A query (SELECT statement) that supplies the rows to be inserted. Refer to the SELECT statement for a description of the syntax.
output_expression
An expression to be computed and returned by the INSERT command after each row is inserted or updated. The expression can use any column names of the table named by table_name. Write * to return all columns of the inserted or updated row(s).
output_name
A name to use for a returned column.
ON CONFLICT Clause
The optional ON CONFLICT clause specifies an alternative action to raising a unique violation or exclusion constraint violation error. For each individual row proposed for insertion, either the insertion proceeds, or, if an arbiter constraint or index specified by conflict_target is violated, the alternative conflict_action is taken. ON CONFLICT DO NOTHING simply avoids inserting a row as its alternative action. ON CONFLICT DO UPDATE updates the existing row that conflicts with the row proposed for insertion as its alternative action.
conflict_target can perform unique index inference. When performing inference, it consists of one or more index_column_name columns and/or index_expression expressions, and an optional index_predicate. All table_name unique indexes that, without regard to order, contain exactly the conflict_target-specified columns/expressions are inferred (chosen) as arbiter indexes. If an index_predicate is specified, it must, as a further requirement for inference, satisfy arbiter indexes. Note that this means a non-partial unique index (a unique index without a predicate) will be inferred (and thus used by ON CONFLICT) if such an index satisfying every other criteria is available. If an attempt at inference is unsuccessful, an error is raised.
ON CONFLICT DO UPDATE
guarantees an atomic INSERT
or UPDATE
outcome; provided there is no independent error, one of those two outcomes is guaranteed, even under high concurrency. This is also known as UPSERT — “UPDATE or INSERT”.
conflict_target
Specifies which conflicts ON CONFLICT
takes the alternative action on by choosing arbiter indexes. Either performs unique index inference, or names a constraint explicitly. For ON CONFLICT DO NOTHING
, it is optional to specify a conflict_target
; when omitted, conflicts with all usable constraints (and unique indexes) are handled. For ON CONFLICT DO UPDATE
, a conflict_target
must be provided.
conflict_action
conflict_action
specifies an alternative ON CONFLICT
action. It can be either DO NOTHING
, or a DO UPDATE
clause specifying the exact details of the UPDATE
action to be performed in case of a conflict. The SET
and WHERE
clauses in ON CONFLICT DO UPDATE
have access to the existing row using the table's name (or an alias), and to rows proposed for insertion using the special excluded
table. SELECT
privilege is required on any column in the target table where corresponding excluded
columns are read.
Note that the effects of all per-row BEFORE INSERT
triggers are reflected in excluded
values, since those effects may have contributed to the row being excluded from insertion.
index_column_name
The name of a table_name
column. Used to infer arbiter indexes. Follows CREATE INDEX
format. SELECT
privilege on index_column_name
is required.
index_expression
Similar to index_column_name
, but used to infer expressions on table_name
columns appearing within index definitions (not simple columns). Follows CREATE INDEX
format. SELECT
privilege on any column appearing within index_expression
is required.
collation
When specified, mandates that corresponding index_column_name
or index_expression
use a particular collation in order to be matched during inference. Typically this is omitted, as collations usually do not affect whether or not a constraint violation occurs. Follows CREATE INDEX
format.
opclass
When specified, mandates that corresponding index_column_name
or index_expression
use particular operator class in order to be matched during inference. Typically this is omitted, as the equality semantics are often equivalent across a type's operator classes anyway, or because it's sufficient to trust that the defined unique indexes have the pertinent definition of equality. Follows CREATE INDEX
format.
index_predicate
Used to allow inference of partial unique indexes. Any indexes that satisfy the predicate (which need not actually be partial indexes) can be inferred. Follows CREATE INDEX
format. SELECT
privilege on any column appearing within index_predicate
is required.
constraint_name
Explicitly specifies an arbiter constraint by name, rather than inferring a constraint or index.
condition
An expression that returns a value of type boolean
. Only rows for which this expression returns true
will be updated, although all rows will be locked when the ON CONFLICT DO UPDATE
action is taken. Note that condition
is evaluated last, after a conflict has been identified as a candidate to update.
Note that exclusion constraints are not supported as arbiters with ON CONFLICT DO UPDATE
. In all cases, only NOT DEFERRABLE
constraints and unique indexes are supported as arbiters.
INSERT
with an ON CONFLICT DO UPDATE
clause is a “deterministic” statement. This means that the command will not be allowed to affect any single existing row more than once; a cardinality violation error will be raised when this situation arises. Rows proposed for insertion should not duplicate each other in terms of attributes constrained by an arbiter index or constraint.
Note that it is currently not supported for the ON CONFLICT DO UPDATE
clause of an INSERT
applied to a partitioned table to update the partition key of a conflicting row such that it requires the row be moved to a new partition.
It is often preferable to use unique index inference rather than naming a constraint directly using ON CONFLICT ON CONSTRAINT
constraint_name
. Inference will continue to work correctly when the underlying index is replaced by another more or less equivalent index in an overlapping way, for example when using CREATE UNIQUE INDEX ... CONCURRENTLY
before dropping the index being replaced.
On successful completion, an INSERT
command returns a command tag of the form
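The tag has this form:

```
INSERT oid count
```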
The count
is the number of rows inserted or updated. oid
is always 0 (it used to be the OID assigned to the inserted row if count
was exactly one and the target table was declared WITH OIDS
and 0 otherwise, but creating a table WITH OIDS
is not supported anymore).
If the INSERT
command contains a RETURNING
clause, the result will be similar to that of a SELECT
statement containing the columns and values defined in the RETURNING
list, computed over the row(s) inserted or updated by the command.
If the specified table is a partitioned table, each row is routed to the appropriate partition and inserted into it. If the specified table is a partition, an error will occur if one of the input rows violates the partition constraint.
Insert a single row into table films
:
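A sketch of such a statement, assuming the films table has the columns code, title, did, date_prod, kind, and len (the exact table definition is an assumption here):

```sql
INSERT INTO films VALUES
    ('UA502', 'Bananas', 105, '1971-07-13', 'Comedy', '82 minutes');
```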
In this example, the len
column is omitted and therefore it will have the default value:
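For instance, listing every column except len (column names assumed as above):

```sql
INSERT INTO films (code, title, did, date_prod, kind)
    VALUES ('T_601', 'Yojimbo', 106, '1961-06-16', 'Drama');
```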
This example uses the DEFAULT
clause for the date columns rather than specifying a value:
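A sketch, writing DEFAULT in the position of the date_prod column (column layout assumed as in the earlier examples):

```sql
INSERT INTO films VALUES
    ('UA502', 'Bananas', 105, DEFAULT, 'Comedy', '82 minutes');
```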
To insert a row consisting entirely of default values:
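This takes no column list and no values at all:

```sql
INSERT INTO films DEFAULT VALUES;
```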
To insert multiple rows using the multirow VALUES
syntax:
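Each parenthesized row in the VALUES list becomes one inserted row; DEFAULT may appear in individual positions. A sketch with the assumed films columns:

```sql
INSERT INTO films (code, title, did, date_prod, kind) VALUES
    ('B6717', 'Tampopo', 110, '1985-02-10', 'Comedy'),
    ('HG120', 'The Dinner Game', 140, DEFAULT, 'Comedy');
```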
This example inserts some rows into table films
from a table tmp_films
with the same column layout as films
:
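Because tmp_films has the same column layout, a bare SELECT * supplies the rows; the date filter is an illustrative assumption:

```sql
INSERT INTO films SELECT * FROM tmp_films WHERE date_prod < '2004-05-07';
```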
This example inserts into array columns:
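A sketch using a hypothetical tictactoe table with an integer game column and a two-dimensional text-array board column:

```sql
-- Create an empty 3x3 board; the explicit subscripts are optional
INSERT INTO tictactoe (game, board[1:3][1:3])
    VALUES (1, '{{" "," "," "},{" "," "," "},{" "," "," "}}');
-- Same thing without subscripts
INSERT INTO tictactoe (game, board)
    VALUES (2, '{{X," "," "},{" ",O," "},{" ",X," "}}');
```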
Insert a single row into table distributors
, returning the sequence number generated by the DEFAULT
clause:
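A sketch assuming distributors has a serial or identity column did and a text column dname:

```sql
INSERT INTO distributors (did, dname)
    VALUES (DEFAULT, 'XYZ Widgets')
    RETURNING did;
```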
Increment the sales count of the salesperson who manages the account for Acme Corporation, and record the whole updated row along with current time in a log table:
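A sketch using a WITH clause; the employees, accounts, and employees_log tables and their columns are assumptions for illustration:

```sql
WITH upd AS (
  UPDATE employees SET sales_count = sales_count + 1
  WHERE id = (SELECT sales_person FROM accounts
              WHERE name = 'Acme Corporation')
  RETURNING *
)
INSERT INTO employees_log SELECT *, current_timestamp FROM upd;
```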
Insert or update new distributors as appropriate. Assumes a unique index has been defined that constrains values appearing in the did
column. Note that the special excluded
table is used to reference values originally proposed for insertion:
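A sketch of the upsert, with the distributors columns assumed as before; on conflict, the proposed dname from the excluded pseudo-table replaces the stored one:

```sql
INSERT INTO distributors (did, dname)
    VALUES (5, 'Gizmo Transglobal'), (6, 'Associated Computing, Inc')
    ON CONFLICT (did) DO UPDATE SET dname = EXCLUDED.dname;
```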
Insert a distributor, or do nothing for rows proposed for insertion when an existing, excluded row (a row with a matching constrained column or columns after before row insert triggers fire) exists. Example assumes a unique index has been defined that constrains values appearing in the did
column:
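A sketch (same assumed distributors table); conflicting rows are silently skipped:

```sql
INSERT INTO distributors (did, dname) VALUES (7, 'Redline GmbH')
    ON CONFLICT (did) DO NOTHING;
```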
Insert or update new distributors as appropriate. Example assumes a unique index has been defined that constrains values appearing in the did
column. WHERE
clause is used to limit the rows actually updated (any existing row not updated will still be locked, though):
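A sketch; the zipcode column on distributors is an assumption added for the WHERE condition:

```sql
INSERT INTO distributors AS d (did, dname)
    VALUES (8, 'Anvil Distribution')
    ON CONFLICT (did) DO UPDATE
    SET dname = EXCLUDED.dname || ' (formerly ' || d.dname || ')'
    WHERE d.zipcode <> '21201';
```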
Insert new distributor if possible; otherwise DO NOTHING
. Example assumes a unique index has been defined that constrains values appearing in the did
column on a subset of rows where the is_active
Boolean column evaluates to true
:
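A sketch; the index_predicate after the conflict target must match the partial unique index's WHERE clause (the is_active column is assumed):

```sql
INSERT INTO distributors (did, dname)
    VALUES (10, 'Conrad International')
    ON CONFLICT (did) WHERE is_active DO NOTHING;
```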
INSERT
conforms to the SQL standard, except that the RETURNING
clause is a PostgreSQL extension, as is the ability to use WITH
with INSERT
, and the ability to specify an alternative action with ON CONFLICT
. Also, the case in which a column name list is omitted, but not all the columns are filled from the VALUES
clause or query
, is disallowed by the standard.
The SQL standard specifies that OVERRIDING SYSTEM VALUE
can only be specified if an identity column that is generated always exists. PostgreSQL allows the clause in any case and ignores it if it is not applicable.
Possible limitations of the query
clause are documented under SELECT.
IMPORT FOREIGN SCHEMA — import table definitions from a foreign server
IMPORT FOREIGN SCHEMA creates foreign tables that represent tables existing on a foreign server. The new foreign tables will be owned by the user issuing the command, and are created with the correct column definitions and options to match the remote tables.
By default, all tables and views existing in a particular schema on the foreign server are imported. Optionally, the list of tables can be limited to a specified subset, or specific tables can be excluded. The new foreign tables are all created in the target schema, which must already exist.
To use IMPORT FOREIGN SCHEMA, the user must have USAGE privilege on the foreign server, as well as CREATE privilege on the target schema.
remote_schema
The remote schema to import from. The specific meaning of a remote schema depends on the foreign data wrapper in use.
LIMIT TO (
table_name
[, ...] )
Import only foreign tables matching one of the given table names. Other tables existing in the foreign schema will be ignored.
EXCEPT (
table_name
[, ...] )
Exclude specified foreign tables from the import. All tables existing in the foreign schema will be imported except the ones listed here.
server_name
The foreign server to import from.
local_schema
The schema in which the imported foreign tables will be created.
OPTIONS (
option
'value
' [, ...] )
Options to be used during the import. The allowed option names and values are specific to each foreign data wrapper.
Import table definitions from remote schema foreign_films on server film_server, creating the foreign tables in local schema films:
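A sketch of such a command (schema and server names as described in the text):

```sql
IMPORT FOREIGN SCHEMA foreign_films
    FROM SERVER film_server INTO films;
```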
As above, but import only the two tables actors and directors (if they exist):
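The same import restricted with LIMIT TO:

```sql
IMPORT FOREIGN SCHEMA foreign_films LIMIT TO (actors, directors)
    FROM SERVER film_server INTO films;
```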
The IMPORT FOREIGN SCHEMA command conforms to the SQL standard, except that the OPTIONS clause is a PostgreSQL extension.
See Also
LOAD — load a shared library file
This command loads a shared library file into the PostgreSQL server's address space. If the file has been loaded already, the command does nothing. Shared library files that contain C functions are automatically loaded whenever one of their functions is called. Therefore, an explicit LOAD is usually only needed to load a library that modifies the server's behavior through “hooks” rather than providing a set of functions.
The library file name is typically given as just a bare file name, which is sought in the server's library search path (set by dynamic_library_path). Alternatively it can be given as a full path name. In either case, the platform's standard shared library file name extension may be omitted. See Section 37.9.1 for more information on this topic.
Non-superusers can only apply LOAD to library files located in $libdir/plugins/; the specified file name must begin with that string. (It is the database administrator's responsibility to ensure that only “safe” libraries are installed there.)
LOAD
is a PostgreSQL extension.
PREPARE — prepare a statement for execution
PREPARE
creates a prepared statement. A prepared statement is a server-side object that can be used to optimize performance. When the PREPARE
statement is executed, the specified statement is parsed, analyzed, and rewritten. When an EXECUTE
command is subsequently issued, the prepared statement is planned and executed. This division of labor avoids repetitive parse analysis work, while allowing the execution plan to depend on the specific parameter values supplied.
Prepared statements can take parameters: values that are substituted into the statement when it is executed. When creating the prepared statement, refer to parameters by position, using $1
, $2
, etc. A corresponding list of parameter data types can optionally be specified. When a parameter's data type is not specified or is declared as unknown
, the type is inferred from the context in which the parameter is first referenced (if possible). When executing the statement, specify the actual values for these parameters in the EXECUTE
statement. Refer to EXECUTE for more information about that.
Prepared statements only last for the duration of the current database session. When the session ends, the prepared statement is forgotten, so it must be recreated before being used again. This also means that a single prepared statement cannot be used by multiple simultaneous database clients; however, each client can create their own prepared statement to use. Prepared statements can be manually cleaned up using the DEALLOCATE command.
Prepared statements potentially have the largest performance advantage when a single session is being used to execute a large number of similar statements. The performance difference will be particularly significant if the statements are complex to plan or rewrite, e.g., if the query involves a join of many tables or requires the application of several rules. If the statement is relatively simple to plan and rewrite but relatively expensive to execute, the performance advantage of prepared statements will be less noticeable.
name
An arbitrary name given to this particular prepared statement. It must be unique within a single session and is subsequently used to execute or deallocate a previously prepared statement.
data_type
The data type of a parameter to the prepared statement. If the data type of a particular parameter is unspecified or is specified as unknown
, it will be inferred from the context in which the parameter is first referenced. To refer to the parameters in the prepared statement itself, use $1
, $2
, etc.
statement
Any SELECT
, INSERT
, UPDATE
, DELETE
, or VALUES
statement.
A prepared statement can be executed with either a generic plan or a custom plan. A generic plan is the same across all executions, while a custom plan is generated for a specific execution using the parameter values given in that call. Use of a generic plan avoids planning overhead, but in some situations a custom plan will be much more efficient to execute because the planner can make use of knowledge of the parameter values. (Of course, if the prepared statement has no parameters, then this is moot and a generic plan is always used.)
By default (that is, when plan_cache_mode is set to auto
), the server will automatically choose whether to use a generic or custom plan for a prepared statement that has parameters. The current rule for this is that the first five executions are done with custom plans and the average estimated cost of those plans is calculated. Then a generic plan is created and its estimated cost is compared to the average custom-plan cost. Subsequent executions use the generic plan if its cost is not so much higher than the average custom-plan cost as to make repeated replanning seem preferable.
This heuristic can be overridden, forcing the server to use either generic or custom plans, by setting plan_cache_mode
to force_generic_plan
or force_custom_plan
respectively. This setting is primarily useful if the generic plan's cost estimate is badly off for some reason, allowing it to be chosen even though its actual cost is much more than that of a custom plan.
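For example, the heuristic can be disabled for the current session like this (a sketch; the setting can also be applied per-user or in postgresql.conf):

```sql
SET plan_cache_mode = force_custom_plan;
```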
To examine the query plan PostgreSQL is using for a prepared statement, use EXPLAIN, for example
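A sketch, where stmt is a hypothetical prepared statement taking one integer parameter:

```sql
EXPLAIN EXECUTE stmt(42);
```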
If a generic plan is in use, it will contain parameter symbols $
n
, while a custom plan will have the supplied parameter values substituted into it.
For more information on query planning and the statistics collected by PostgreSQL for that purpose, see the ANALYZE documentation.
Although the main point of a prepared statement is to avoid repeated parse analysis and planning of the statement, PostgreSQL will force re-analysis and re-planning of the statement before using it whenever database objects used in the statement have undergone definitional (DDL) changes since the previous use of the prepared statement. Also, if the value of search_path changes from one use to the next, the statement will be re-parsed using the new search_path
. (This latter behavior is new as of PostgreSQL 9.3.) These rules make use of a prepared statement semantically almost equivalent to re-submitting the same query text over and over, but with a performance benefit if no object definitions are changed, especially if the best plan remains the same across uses. An example of a case where the semantic equivalence is not perfect is that if the statement refers to a table by an unqualified name, and then a new table of the same name is created in a schema appearing earlier in the search_path
, no automatic re-parse will occur since no object used in the statement changed. However, if some other change forces a re-parse, the new table will be referenced in subsequent uses.
You can see all prepared statements available in the session by querying the pg_prepared_statements
system view.
Create a prepared statement for an INSERT
statement, and then execute it:
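A sketch assuming a hypothetical four-column table foo:

```sql
PREPARE fooplan (int, text, bool, numeric) AS
    INSERT INTO foo VALUES ($1, $2, $3, $4);
EXECUTE fooplan(1, 'Hunter Valley', 't', 200.00);
```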
Create a prepared statement for a SELECT
statement, and then execute it:
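A sketch assuming hypothetical users and logs tables; only the first parameter's type is declared:

```sql
PREPARE usrrptplan (int) AS
    SELECT * FROM users u, logs l
    WHERE u.usrid = $1 AND u.usrid = l.usrid AND l.date = $2;
EXECUTE usrrptplan(1, current_date);
```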
In this example, the data type of the second parameter is not specified, so it is inferred from the context in which $2
is used.
The SQL standard includes a PREPARE
statement, but it is only for use in embedded SQL. This version of the PREPARE
statement also uses a somewhat different syntax.
MERGE — conditionally INSERT, UPDATE, or DELETE rows
MERGE performs actions that modify rows in the target_table_name, using the data_source. MERGE provides a single SQL statement that can conditionally INSERT, UPDATE, or DELETE rows, a task that would otherwise require multiple procedural statements.
First, the MERGE
command performs a join from data_source
to target_table_name
producing zero or more candidate change rows. For each candidate change row, the status of MATCHED
or NOT MATCHED
is set just once, after which WHEN
clauses are evaluated in the order specified. For each candidate change row, the first clause to evaluate as true is executed. No more than one WHEN
clause is executed for any candidate change row.
MERGE
actions have the same effect as regular UPDATE
, INSERT
, or DELETE
commands of the same names. The syntax of those commands is different, notably that there is no WHERE
clause and no table name is specified. All actions refer to the target_table_name
, though modifications to other tables may be made using triggers.
When DO NOTHING
is specified, the source row is skipped. Since actions are evaluated in their specified order, DO NOTHING
can be handy to skip non-interesting source rows before more fine-grained handling.
There is no separate MERGE
privilege. If you specify an update action, you must have the UPDATE
privilege on the column(s) of the target_table_name
that are referred to in the SET
clause. If you specify an insert action, you must have the INSERT
privilege on the target_table_name
. If you specify a delete action, you must have the DELETE
privilege on the target_table_name
. Privileges are tested once at statement start and are checked whether or not particular WHEN
clauses are executed. You will require the SELECT
privilege on the data_source
and any column(s) of the target_table_name
referred to in a condition
.
MERGE
is not supported if the target_table_name
is a materialized view, foreign table, or if it has any rules defined on it.
target_table_name
The name (optionally schema-qualified) of the target table to merge into. If ONLY
is specified before the table name, matching rows are updated or deleted in the named table only. If ONLY
is not specified, matching rows are also updated or deleted in any tables inheriting from the named table. Optionally, *
can be specified after the table name to explicitly indicate that descendant tables are included. The ONLY
keyword and *
option do not affect insert actions, which always insert into the named table only.
target_alias
A substitute name for the target table. When an alias is provided, it completely hides the actual name of the table. For example, given MERGE INTO foo AS f
, the remainder of the MERGE
statement must refer to this table as f
not foo
.
source_table_name
The name (optionally schema-qualified) of the source table, view, or transition table. If ONLY
is specified before the table name, matching rows are included from the named table only. If ONLY
is not specified, matching rows are also included from any tables inheriting from the named table. Optionally, *
can be specified after the table name to explicitly indicate that descendant tables are included.
source_query
A query (SELECT
statement or VALUES
statement) that supplies the rows to be merged into the target_table_name
. Refer to the SELECT statement or VALUES statement for a description of the syntax.
source_alias
A substitute name for the data source. When an alias is provided, it completely hides the actual name of the table or the fact that a query was issued.
join_condition
join_condition
is an expression resulting in a value of type boolean
(similar to a WHERE
clause) that specifies which rows in the data_source
match rows in the target_table_name
.
Only columns from target_table_name
that attempt to match data_source
rows should appear in join_condition
. join_condition
subexpressions that only reference target_table_name
columns can affect which action is taken, often in surprising ways.
when_clause
At least one WHEN
clause is required.
If the WHEN
clause specifies WHEN MATCHED
and the candidate change row matches a row in the target_table_name
, the WHEN
clause is executed if the condition
is absent or it evaluates to true
.
Conversely, if the WHEN
clause specifies WHEN NOT MATCHED
and the candidate change row does not match a row in the target_table_name
, the WHEN
clause is executed if the condition
is absent or it evaluates to true
.
condition
An expression that returns a value of type boolean
. If this expression for a WHEN
clause returns true
, then the action for that clause is executed for that row.
A condition on a WHEN MATCHED
clause can refer to columns in both the source and the target relations. A condition on a WHEN NOT MATCHED
clause can only refer to columns from the source relation, since by definition there is no matching target row. Only the system attributes from the target table are accessible.
merge_insert
The specification of an INSERT
action that inserts one row into the target table. The target column names can be listed in any order. If no list of column names is given at all, the default is all the columns of the table in their declared order.
Each column not present in the explicit or implicit column list will be filled with a default value, either its declared default value or null if there is none.
If target_table_name
is a partitioned table, each row is routed to the appropriate partition and inserted into it. If target_table_name
is a partition, an error will occur if any input row violates the partition constraint.
Column names may not be specified more than once. INSERT
actions cannot contain sub-selects.
Only one VALUES
clause can be specified. The VALUES
clause can only refer to columns from the source relation, since by definition there is no matching target row.
merge_update
The specification of an UPDATE
action that updates the current row of the target_table_name
. Column names may not be specified more than once.
Neither a table name nor a WHERE
clause are allowed.
merge_delete
Specifies a DELETE
action that deletes the current row of the target_table_name
. Do not include the table name or any other clauses, as you would normally do with a DELETE command.
column_name
The name of a column in the target_table_name
. The column name can be qualified with a subfield name or array subscript, if needed. (Inserting into only some fields of a composite column leaves the other fields null.) Do not include the table's name in the specification of a target column.
OVERRIDING SYSTEM VALUE
Without this clause, it is an error to specify an explicit value (other than DEFAULT
) for an identity column defined as GENERATED ALWAYS
. This clause overrides that restriction.
OVERRIDING USER VALUE
If this clause is specified, then any values supplied for identity columns defined as GENERATED BY DEFAULT
are ignored and the default sequence-generated values are applied.
DEFAULT VALUES
All columns will be filled with their default values. (An OVERRIDING
clause is not permitted in this form.)
expression
An expression to assign to the column. If used in a WHEN MATCHED
clause, the expression can use values from the original row in the target table, and values from the data_source
row. If used in a WHEN NOT MATCHED
clause, the expression can use values from the data_source
.
DEFAULT
Set the column to its default value (which will be NULL
if no specific default expression has been assigned to it).
with_query
The WITH
clause allows you to specify one or more subqueries that can be referenced by name in the MERGE
query. See Section 7.8 and SELECT for details.
On successful completion, a MERGE
command returns a command tag of the form
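The tag has this form:

```
MERGE total_count
```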
The total_count
is the total number of rows changed (whether inserted, updated, or deleted). If total_count
is 0, no rows were changed in any way.
The following steps take place during the execution of MERGE
.
Perform any BEFORE STATEMENT
triggers for all actions specified, whether or not their WHEN
clauses match.
Perform a join from source to target table. The resulting query will be optimized normally and will produce a set of candidate change rows. For each candidate change row,
Evaluate whether each row is MATCHED
or NOT MATCHED
.
Test each WHEN
condition in the order specified until one returns true.
When a condition returns true, perform the following actions:
Perform any BEFORE ROW
triggers that fire for the action's event type.
Perform the specified action, invoking any check constraints on the target table.
Perform any AFTER ROW
triggers that fire for the action's event type.
Perform any AFTER STATEMENT
triggers for actions specified, whether or not they actually occur. This is similar to the behavior of an UPDATE
statement that modifies no rows.
In summary, statement triggers for an event type (say, INSERT
) will be fired whenever we specify an action of that kind. In contrast, row-level triggers will fire only for the specific event type being executed. So a MERGE
command might fire statement triggers for both UPDATE
and INSERT
, even though only UPDATE
row triggers were fired.
You should ensure that the join produces at most one candidate change row for each target row. In other words, a target row shouldn't join to more than one data source row. If it does, then only one of the candidate change rows will be used to modify the target row; later attempts to modify the row will cause an error. This can also occur if row triggers make changes to the target table and the rows so modified are then subsequently also modified by MERGE
. If the repeated action is an INSERT
, this will cause a uniqueness violation, while a repeated UPDATE
or DELETE
will cause a cardinality violation; the latter behavior is required by the SQL standard. This differs from historical PostgreSQL behavior of joins in UPDATE
and DELETE
statements where second and subsequent attempts to modify the same row are simply ignored.
If a WHEN
clause omits an AND
sub-clause, it becomes the final reachable clause of that kind (MATCHED
or NOT MATCHED
). If a later WHEN
clause of that kind is specified it would be provably unreachable and an error is raised. If no final reachable clause is specified of either kind, it is possible that no action will be taken for a candidate change row.
The order in which rows are generated from the data source is indeterminate by default. A source_query
can be used to specify a consistent ordering, if required, which might be needed to avoid deadlocks between concurrent transactions.
There is no RETURNING
clause with MERGE
. Actions of INSERT
, UPDATE
and DELETE
cannot contain RETURNING
or WITH
clauses.
When MERGE
is run concurrently with other commands that modify the target table, the usual transaction isolation rules apply; see Section 13.2 for an explanation on the behavior at each isolation level. You may also wish to consider using INSERT ... ON CONFLICT
as an alternative statement which offers the ability to run an UPDATE
if a concurrent INSERT
occurs. There are a variety of differences and restrictions between the two statement types and they are not interchangeable.
Perform maintenance on customer_accounts based on new recent_transactions:
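A sketch of such a statement; the columns customer_id, balance, and transaction_value are assumptions for illustration:

```sql
MERGE INTO customer_accounts ca
USING recent_transactions t
ON t.customer_id = ca.customer_id
WHEN MATCHED THEN
    UPDATE SET balance = balance + transaction_value
WHEN NOT MATCHED THEN
    INSERT (customer_id, balance)
    VALUES (t.customer_id, t.transaction_value);
```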
Notice that this would be exactly equivalent to the following statement, because the MATCHED result does not change during execution:
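One plausible form matching this description reorders the two WHEN clauses; a sketch with the same assumed column names (customer_id, balance, transaction_value):

```sql
MERGE INTO customer_accounts ca
USING recent_transactions t
ON t.customer_id = ca.customer_id
WHEN NOT MATCHED THEN
    INSERT (customer_id, balance)
    VALUES (t.customer_id, t.transaction_value)
WHEN MATCHED THEN
    UPDATE SET balance = balance + transaction_value;
```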
Attempt to insert a new stock item along with its stock count. If the item already exists, update the stock count of the existing item instead, and do not allow entries that have zero stock:
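A sketch assuming a wines table with columns winename and stock, and a wine_stock_changes table with winename and stock_delta (the table and column names are assumptions):

```sql
MERGE INTO wines w
USING wine_stock_changes s
ON s.winename = w.winename
WHEN NOT MATCHED AND s.stock_delta > 0 THEN
    INSERT VALUES (s.winename, s.stock_delta)
WHEN MATCHED AND w.stock + s.stock_delta > 0 THEN
    UPDATE SET stock = w.stock + s.stock_delta
WHEN MATCHED THEN
    DELETE;
```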
In this example, the wine_stock_changes table might be, for example, a temporary table recently loaded into the database.
This command conforms to the SQL standard.
The WITH clause and DO NOTHING
action are extensions to the SQL standard.
NOTIFY — generate a notification
The NOTIFY command sends a notification event together with an optional “payload” string to each client application that has previously executed LISTEN for the specified channel name in the current database. Notifications are visible to all users.
NOTIFY provides a simple interprocess communication mechanism for a collection of processes accessing the same PostgreSQL database. A payload string can be sent along with the notification, and higher-level mechanisms for passing structured data can be built by using tables in the database to pass additional data from notifier to listeners.
The information passed to the client for a notification event includes the name of the notification channel, the server process PID of the notifying session, and the payload string, which is an empty string if it has not been specified.
It is up to the database designer to define the channel names that will be used in a given database and what each one means. Commonly, the channel name is the same as the name of some table in the database, and the notify event essentially means, “I changed this table, take a look at it to see what's new”. But no such association is enforced by the NOTIFY and LISTEN commands. For example, a database designer could use several different channel names to signal different sorts of changes to a single table. Alternatively, the payload string could be used to differentiate various cases.
When NOTIFY is used to signal the occurrence of changes to a particular table, a useful programming technique is to put the NOTIFY in a statement trigger that is fired by table updates. In this way, notification happens automatically when the table is changed, and the application programmer cannot accidentally forget to do it.
NOTIFY interacts with SQL transactions in some important ways. First, if a NOTIFY is executed inside a transaction, the notify events are not delivered until and unless the transaction is committed. This is appropriate, since if the transaction is aborted, all the commands within it have had no effect, including NOTIFY; but it can be disconcerting if one is expecting the notification events to be delivered immediately. Second, if a listening session receives a notification signal while it is within a transaction, the notification event will not be delivered to its connected client until just after the transaction is completed (either committed or aborted). Again, the reasoning is that if a notification were delivered within a transaction that was later aborted, one would want the notification to be undone somehow; but the server cannot “take back” a notification once it has sent it to the client. So notification events are only delivered between transactions. The upshot of this is that applications using NOTIFY for real-time signaling should try to keep their transactions short.
If the same channel name is signaled multiple times from the same transaction with identical payload strings, the database server can decide to deliver a single notification only. On the other hand, notifications with distinct payload strings will always be delivered as distinct notifications. Similarly, notifications from different transactions will never get folded into one notification. Except for dropping later instances of duplicate notifications, NOTIFY guarantees that notifications from the same transaction get delivered in the order they were sent. It is also guaranteed that messages from different transactions are delivered in the order in which the transactions committed.
It is common for a client that executes NOTIFY to be listening on the same notification channel itself. In that case it will get back a notification event, just like all the other listening sessions. Depending on the application logic, this could result in useless work, for example, reading a database table to find the same updates that that session just wrote out. It is possible to avoid such extra work by noticing whether the notifying session's server process PID (supplied in the notification event message) is the same as one's own session's PID (available from libpq). When they are the same, the notification event is one's own work bouncing back, and can be ignored.
channel
Name of the notification channel to be signaled (any identifier).
payload
The “payload” string to be communicated along with the notification. This must be specified as a simple string literal. In the default configuration it must be shorter than 8000 bytes. (If binary data or large amounts of information need to be communicated, it's best to put it in a database table and send the key of the record.)
There is a queue that holds notifications that have been sent but not yet processed by all listening sessions. If this queue becomes full, transactions calling NOTIFY will fail at commit. The queue is quite large (8GB in a standard installation) and should be sufficiently sized for almost every use case. However, no cleanup can take place if a session executes LISTEN and then enters a transaction for a very long time. Once the queue is half full you will see warnings in the log file pointing you to the session that is preventing cleanup. In this case you should make sure that this session ends its current transaction so that cleanup can proceed.
The function pg_notification_queue_usage returns the fraction of the queue that is currently occupied by pending notifications. See Section 9.25 for more information.
A transaction that has executed NOTIFY cannot be prepared for two-phase commit.
To send a notification you can also use the function pg_notify(text, text). The function takes the channel name as the first argument and the payload as the second. The function is much easier to use than the NOTIFY command if you need to work with non-constant channel names and payloads.
Configure and execute a listen/notify sequence from psql:
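A sketch of such a psql session; the channel name virtual and the PID shown are arbitrary, and the server's responses are shown as comments:

```sql
LISTEN virtual;
NOTIFY virtual;
-- Asynchronous notification "virtual" received from server process with PID 8448.
NOTIFY virtual, 'This is the payload';
-- Asynchronous notification "virtual" with payload "This is the payload"
-- received from server process with PID 8448.
```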
There is no NOTIFY statement in the SQL standard.
REFRESH MATERIALIZED VIEW — replace the contents of a materialized view
REFRESH MATERIALIZED VIEW completely replaces the contents of a materialized view. The old contents are discarded. If WITH DATA is specified (the default), the backing query is executed to provide the new data, and the materialized view is left in a scannable state. If WITH NO DATA is specified, no new data is generated and the materialized view is left in an unscannable state.
CONCURRENTLY and WITH NO DATA may not be specified together.
CONCURRENTLY
Refresh the materialized view without locking out concurrent SELECTs on the materialized view. Without this option, a refresh that affects a lot of rows will tend to use fewer resources and complete more quickly, but could block other connections which are trying to read from the materialized view. This option may be faster in cases where a small number of rows are affected.
This option is only allowed if there is at least one UNIQUE index on the materialized view which uses only column names and includes all rows; that is, it must not be an expression index or include a WHERE clause.
This option may not be used when the materialized view is not already populated.
Even with this option, only one REFRESH at a time may run against any one materialized view.
name
The name (optionally schema-qualified) of the materialized view to refresh.
This command will replace the contents of the materialized view called order_summary using the query from the materialized view's definition, and leave it in a scannable state:
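The command itself:

```sql
REFRESH MATERIALIZED VIEW order_summary;
```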
This command will free storage associated with the materialized view annual_statistics_basis and leave it in an unscannable state:
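The command itself:

```sql
REFRESH MATERIALIZED VIEW annual_statistics_basis WITH NO DATA;
```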
REFRESH MATERIALIZED VIEW
is a PostgreSQL extension.
REASSIGN OWNED — change the ownership of database objects owned by a database role
REASSIGN OWNED
instructs the system to change the ownership of database objects owned by any of the old_roles
to new_role
.
old_role
The name of a role. The ownership of all the objects within the current database, and of all shared objects (databases, tablespaces), owned by this role will be reassigned to new_role.
new_role
The name of the role that will be made the new owner of the affected objects.
REASSIGN OWNED
is often used to prepare for the removal of one or more roles. Because REASSIGN OWNED
does not affect objects within other databases, it is usually necessary to execute this command in each database that contains objects owned by a role that is to be removed.
REASSIGN OWNED
requires privileges on both the source role(s) and the target role.
The DROP OWNED command is an alternative that simply drops all the database objects owned by one or more roles.
The REASSIGN OWNED
command does not affect any privileges granted to the old_roles
for objects that are not owned by them. Use DROP OWNED
to revoke such privileges.
The REASSIGN OWNED
command is a PostgreSQL extension.
REINDEX — rebuild indexes
REINDEX rebuilds an index using the data stored in the index's table, replacing the old copy of the index. There are several scenarios in which to use REINDEX:
An index has become corrupted, and no longer contains valid data. Although in theory this should never happen, in practice indexes can become corrupted due to software bugs or hardware failures. REINDEX provides a recovery method.
An index has become “bloated”, that is, it contains many empty or nearly-empty pages. This can occur with B-tree indexes in PostgreSQL under certain uncommon access patterns. REINDEX provides a way to reduce the space consumption of the index by writing a new version of the index without the dead pages. See the documentation on routine reindexing for more information.
You have altered a storage parameter (such as fillfactor) for an index, and wish to ensure that the change has taken full effect.
An index build with the CONCURRENTLY option failed, leaving an “invalid” index. Such indexes are useless, but it can be convenient to use REINDEX to rebuild them. Note that only REINDEX INDEX (on an individual index) is able to perform a concurrent build on an invalid index.
INDEX
Recreate the specified index.
TABLE
Recreate all indexes of the specified table. If the table has a secondary “TOAST” table, that is reindexed as well.
SCHEMA
Recreate all indexes of the specified schema. If a table of this schema has a secondary “TOAST” table, that is reindexed as well. Indexes on shared system catalogs are also processed. This form of REINDEX cannot be executed inside a transaction block.
DATABASE
Recreate all indexes within the current database. Indexes on shared system catalogs are also processed. This form of REINDEX cannot be executed inside a transaction block.
SYSTEM
Recreate all indexes on system catalogs within the current database. Indexes on shared system catalogs are included. Indexes on user tables are not processed. This form of REINDEX cannot be executed inside a transaction block.
name
The name of the specific index, table, or database to be reindexed. Index and table names can be schema-qualified. Presently, REINDEX DATABASE and REINDEX SYSTEM can only reindex the current database, so their parameter must match the current database's name.
CONCURRENTLY
For temporary tables, REINDEX is always non-concurrent, as no other session can access them, and a non-concurrent reindex is cheaper.
VERBOSE
Prints a progress report as each index is reindexed.
If you suspect corruption of an index on a user table, you can simply rebuild that index, or all indexes on the table, using REINDEX INDEX or REINDEX TABLE.
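A sketch of both forms; my_index and my_table are hypothetical names:

```sql
REINDEX INDEX my_index;
REINDEX TABLE my_table;
```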
Things are more difficult if you need to recover from corruption of an index on a system table. In this case it's important for the system to not have used any of the suspect indexes itself. (Indeed, in this sort of scenario you might find that server processes are crashing immediately at start-up, due to reliance on the corrupted indexes.) To recover safely, the server must be started with the -P option, which prevents it from using indexes for system catalog lookups.
Alternatively, a regular server session can be started with -P included in its command line options. The method for doing this varies across clients, but in all libpq-based clients, it is possible to set the PGOPTIONS environment variable to -P before starting the client. Note that while this method does not require locking out other clients, it might still be wise to prevent other users from connecting to the damaged database until repairs have been completed.
REINDEX is similar to a drop and recreate of the index in that the index contents are rebuilt from scratch. However, the locking considerations are rather different. REINDEX locks out writes but not reads of the index's parent table. It also takes an exclusive lock on the specific index being processed, which will block reads that attempt to use that index. In contrast, DROP INDEX momentarily takes an exclusive lock on the parent table, blocking both writes and reads. The subsequent CREATE INDEX locks out writes but not reads; since the index is not there, no read will attempt to use it, meaning that there will be no blocking, but reads might be forced into expensive sequential scans.
Reindexing a single index or table requires being the owner of that index or table. Reindexing a database requires being the owner of the database (note that the owner can therefore rebuild indexes of tables owned by other users). Of course, superusers can always reindex anything.
Reindexing a partitioned table's parent table, or a partitioned index, directly is not supported. Each individual partition can be reindexed separately instead.
Rebuilding an index can interfere with regular operation of a database. Normally PostgreSQL locks the table whose index is rebuilt against writes and performs the entire index build with a single scan of the table. Other transactions can still read the table, but if they try to insert, update, or delete rows in the table they will block until the index rebuild is finished. This could have a severe effect if the system is a live production database. Very large tables can take many hours to be indexed, and even for smaller tables, an index rebuild can lock out writers for periods that are unacceptably long for a production system.
PostgreSQL supports rebuilding indexes with minimum locking of writes. This method is invoked by specifying the CONCURRENTLY option of REINDEX. When this option is used, PostgreSQL must perform two scans of the table for each index that needs to be rebuilt, and must wait for termination of all existing transactions that could potentially use the index. This method requires more total work than a standard index rebuild and takes significantly longer to complete, as it needs to wait for unfinished transactions that might modify the index. However, since it allows normal operations to continue while the index is being rebuilt, this method is useful for rebuilding indexes in a production environment. Of course, the extra CPU, memory, and I/O load imposed by the index rebuild may slow down other operations.
The following steps occur in a concurrent reindex. Each step is run in a separate transaction. If there are multiple indexes to be rebuilt, then each step loops through all the indexes before moving to the next step.
A new temporary index definition is added to the catalog pg_index
. This definition will be used to replace the old index. A SHARE UPDATE EXCLUSIVE
lock at session level is taken on the indexes being reindexed as well as their associated tables to prevent any schema modification while processing.
A first pass to build the index is done for each new index. Once the index is built, its flag pg_index.indisready
is switched to “true” to make it ready for inserts, making it visible to other sessions once the transaction that performed the build is finished. This step is done in a separate transaction for each index.
Then a second pass is performed to add tuples that were added while the first pass was running. This step is also done in a separate transaction for each index.
All the constraints that refer to the index are changed to refer to the new index definition, and the names of the indexes are changed. At this point, pg_index.indisvalid
is switched to “true” for the new index and to “false” for the old, and a cache invalidation is done causing all sessions that referenced the old index to be invalidated.
The old indexes have pg_index.indisready
switched to “false” to prevent any new tuple insertions, after waiting for running queries that might reference the old index to complete.
The old indexes are dropped. The SHARE UPDATE EXCLUSIVE
session locks for the indexes and the table are released.
If a problem arises while rebuilding the indexes, such as a uniqueness violation in a unique index, the REINDEX command will fail but leave behind an "invalid" new index in addition to the pre-existing one. This index will be ignored for querying purposes because it might be incomplete; however, it will still consume update overhead. The psql \d
command will report such an index as INVALID:
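A sketch of such a report, assuming a hypothetical table tab with a column col whose concurrent index rebuild failed:

```
postgres=# \d tab
       Table "public.tab"
 Column |  Type   | Collation | Nullable | Default
--------+---------+-----------+----------+---------
 col    | integer |           |          |
Indexes:
    "idx" btree (col)
    "idx_ccnew" btree (col) INVALID
```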
The recommended recovery method in such cases is to drop the invalid index and try to perform REINDEX CONCURRENTLY again. The concurrent index created during the processing has a name ending in the suffix ccnew, or ccold if it is an old index definition which we failed to drop. Invalid indexes can be dropped using DROP INDEX, including invalid TOAST indexes.
Regular index builds permit other regular index builds on the same table to occur simultaneously, but only one concurrent index build can occur on a table at a time. In both cases, no other types of schema modification on the table are allowed meanwhile. Another difference is that a regular REINDEX TABLE or REINDEX INDEX command can be performed within a transaction block, but REINDEX CONCURRENTLY cannot.
REINDEX SYSTEM does not support CONCURRENTLY, since system catalogs cannot be reindexed concurrently.
Furthermore, indexes for exclusion constraints cannot be reindexed concurrently. If such an index is named directly in this command, an error is raised. If a table or database with exclusion constraint indexes is reindexed concurrently, those indexes will be skipped. (It is possible to reindex such indexes without the CONCURRENTLY option.)
Rebuild a single index:
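For example (the index name my_index is illustrative):

```sql
REINDEX INDEX my_index;
```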
Rebuild all the indexes on the table my_table:
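For example:

```sql
REINDEX TABLE my_table;
```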
Rebuild all indexes in a particular database, without trusting the system indexes to be valid already:
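For example (the database name broken_db is illustrative; the command must be issued while connected to that database):

```sql
REINDEX DATABASE broken_db;
```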
Rebuild indexes for a table, without blocking read and write operations on involved relations while reindexing is in progress:
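For example (the table name is illustrative):

```sql
REINDEX TABLE CONCURRENTLY my_broken_table;
```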
There is no REINDEX command in the SQL standard.
GRANT — define access privileges
The GRANT command has two basic variants: one that grants privileges on a database object (table, column, view, foreign table, sequence, database, foreign-data wrapper, foreign server, function, procedural language, schema, or tablespace), and one that grants membership in a role. These variants are similar in many ways, but they are different enough to be described separately.
GRANT on Database Objects
This variant of the GRANT command gives specific privileges on a database object to one or more roles. These privileges are added to those already granted, if any.
There is also an option to grant privileges on all objects of the same type within one or more schemas. This functionality is currently supported only for tables, sequences, and functions (but note that ALL TABLES is considered to include views and foreign tables).
The key word PUBLIC indicates that the privileges are to be granted to all roles, including those that might be created later. PUBLIC can be thought of as an implicitly defined group that always includes all roles. Any particular role will have the sum of privileges granted directly to it, privileges granted to any role it is presently a member of, and privileges granted to PUBLIC.
If WITH GRANT OPTION is specified, the recipient of the privilege can in turn grant it to others. Without a grant option, the recipient cannot do that. Grant options cannot be granted to PUBLIC.
There is no need to grant privileges to the owner of an object (usually the user that created it), as the owner has all privileges by default. (The owner could, however, choose to revoke some of their own privileges for safety.)
The right to drop an object, or to alter its definition in any way, is not treated as a grantable privilege; it is inherent in the owner, and cannot be granted or revoked. (However, a similar effect can be obtained by granting or revoking membership in the role that owns the object; see below.) The owner implicitly has all grant options for the object, too.
The possible privileges are:
SELECT
INSERT
UPDATE
DELETE
TRUNCATE
REFERENCES
TRIGGER
CREATE
CONNECT
TEMPORARY
EXECUTE
USAGE
TEMP
Alternative spelling for TEMPORARY.
ALL PRIVILEGES
Grant all of the privileges available for the object's type. The PRIVILEGES key word is optional in PostgreSQL, though it is required by strict SQL.
The FUNCTION
syntax works for plain functions, aggregate functions, and window functions, but not for procedures; use PROCEDURE
for those. Alternatively, use ROUTINE
to refer to a function, aggregate function, window function, or procedure regardless of its precise type.
There is also an option to grant privileges on all objects of the same type within one or more schemas. This functionality is currently supported only for tables, sequences, functions, and procedures. ALL TABLES
also affects views and foreign tables, just like the specific-object GRANT
command. ALL FUNCTIONS
also affects aggregate and window functions, but not procedures, again just like the specific-object GRANT
command. Use ALL ROUTINES
to include procedures.
This variant of the GRANT command grants membership in a role to one or more other roles. Membership in a role is significant because it conveys the privileges granted to a role to each of its members.
If WITH ADMIN OPTION is specified, the member can in turn grant membership in the role to others, and revoke membership in the role as well. Without the admin option, ordinary users cannot do that. A role is not considered to hold WITH ADMIN OPTION on itself, but it may grant or revoke membership in itself from a database session where the session user matches the role. Database superusers can grant or revoke membership in any role to anyone. Roles having CREATEROLE privilege can grant or revoke membership in any role that is not a superuser.
If GRANTED BY
is specified, the grant is recorded as having been done by the specified role. Only database superusers may use this option, except when it names the same role executing the command.
Unlike the case with privileges, membership in a role cannot be granted to PUBLIC
. Note also that this form of the command does not allow the noise word GROUP
in role_specification
.
The REVOKE command is used to revoke access privileges.
Since PostgreSQL 8.1, the concepts of users and groups have been unified into a single kind of entity called a role. It is therefore no longer necessary to use the keyword GROUP
to identify whether a grantee is a user or a group. GROUP
is still allowed in the command, but it is a noise word.
A user may perform SELECT
, INSERT
, etc. on a column if they hold that privilege for either the specific column or its whole table. Granting the privilege at the table level and then revoking it for one column will not do what one might wish: the table-level grant is unaffected by a column-level operation.
When a non-owner of an object attempts to GRANT
privileges on the object, the command will fail outright if the user has no privileges whatsoever on the object. As long as some privilege is available, the command will proceed, but it will grant only those privileges for which the user has grant options. The GRANT ALL PRIVILEGES
forms will issue a warning message if no grant options are held, while the other forms will issue a warning if grant options for any of the privileges specifically named in the command are not held. (In principle these statements apply to the object owner as well, but since the owner is always treated as holding all grant options, the cases can never occur.)
It should be noted that database superusers can access all objects regardless of object privilege settings. This is comparable to the rights of root
in a Unix system. As with root
, it's unwise to operate as a superuser except when absolutely necessary.
If a superuser chooses to issue a GRANT
or REVOKE
command, the command is performed as though it were issued by the owner of the affected object. In particular, privileges granted via such a command will appear to have been granted by the object owner. (For role membership, the membership appears to have been granted by the containing role itself.)
GRANT
and REVOKE
can also be done by a role that is not the owner of the affected object, but is a member of the role that owns the object, or is a member of a role that holds privileges WITH GRANT OPTION
on the object. In this case the privileges will be recorded as having been granted by the role that actually owns the object or holds the privileges WITH GRANT OPTION
. For example, if table t1
is owned by role g1
, of which role u1
is a member, then u1
can grant privileges on t1
to u2
, but those privileges will appear to have been granted directly by g1
. Any other member of role g1
could revoke them later.
If the role executing GRANT
holds the required privileges indirectly via more than one role membership path, it is unspecified which containing role will be recorded as having done the grant. In such cases it is best practice to use SET ROLE
to become the specific role you want to do the GRANT
as.
Granting permission on a table does not automatically extend permissions to any sequences used by the table, including sequences tied to SERIAL
columns. Permissions on sequences must be set separately.
Grant insert privilege to all users on table film:
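For example:

```sql
GRANT INSERT ON film TO PUBLIC;
```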
Grant all available privileges to user manuel on view kind:
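For example:

```sql
GRANT ALL PRIVILEGES ON kind TO manuel;
```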
Note that while the above will indeed grant all privileges if executed by a superuser or the owner of the view, when executed by someone else it will only grant those privileges for which that someone else has grant options.
Grant membership in role admin to user joe:
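For example:

```sql
GRANT admin TO joe;
```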
According to the SQL standard, the PRIVILEGES key word in ALL PRIVILEGES is required. The SQL standard does not support setting the privileges on more than one object per command.
PostgreSQL allows an object owner to revoke their own ordinary privileges: for example, a table owner can make the table read-only to themselves by revoking their own INSERT, UPDATE, DELETE, and TRUNCATE privileges. This is not possible according to the SQL standard. The reason is that PostgreSQL treats the owner's privileges as having been granted by the owner to themselves; therefore they can revoke them too. In the SQL standard, the owner's privileges are granted by an assumed entity "_SYSTEM". Not being "_SYSTEM", the owner cannot revoke these rights.
According to the SQL standard, grant options can be granted to PUBLIC; PostgreSQL only supports granting grant options to roles.
The SQL standard provides for a USAGE privilege on other kinds of objects: character sets, collations, translations.
In the SQL standard, sequences only have a USAGE privilege, which controls the use of the NEXT VALUE FOR expression, equivalent to the nextval function in PostgreSQL. The sequence privileges SELECT and UPDATE are PostgreSQL extensions. The application of the sequence USAGE privilege to the currval function is also a PostgreSQL extension (as is the function itself).
Privileges on databases, tablespaces, schemas, and languages are PostgreSQL extensions.
While the default index for future CLUSTER operations is retained, REFRESH MATERIALIZED VIEW does not order the generated rows based on this property. If you want the data to be ordered upon generation, you must use an ORDER BY clause in the view's query.
When this option is used, PostgreSQL will rebuild the index without taking any locks that prevent concurrent inserts, updates, or deletes on the table; whereas a standard index rebuild locks out writes (but not reads) on the table until it is done. There are several caveats to be aware of when using this option; see the notes on rebuilding indexes concurrently.
One way to do this is to shut down the server and start a single-user PostgreSQL server with the -P option included on its command line. Then, REINDEX DATABASE, REINDEX SYSTEM, REINDEX TABLE, or REINDEX INDEX can be issued, depending on how much you want to reconstruct. If in doubt, use REINDEX SYSTEM to select reconstruction of all system indexes in the database. Then quit the single-user server session and restart the regular server. See the postgres reference page for more information about how to interact with the single-user server interface.
PostgreSQL grants default privileges on some types of objects to PUBLIC. No privileges are granted to PUBLIC by default on tables, table columns, sequences, foreign data wrappers, foreign servers, large objects, schemas, or tablespaces. For other types of objects, the default privileges granted to PUBLIC are as follows: CONNECT and TEMPORARY (create temporary tables) privileges for databases; EXECUTE privilege for functions; and USAGE privilege for languages and data types (including domains). The object owner can, of course, revoke both default and expressly granted privileges. (For maximum security, issue the REVOKE in the same transaction that creates the object; then there is no window in which another user can use the object.) Also, these initial default privilege settings can be changed using the ALTER DEFAULT PRIVILEGES command.
The individual privileges are as defined in the documentation on privilege types.
See for more information about specific privilege types, as well as how to inspect objects' privileges.
ALTER POLICY — change the definition of a row-level security policy
ALTER POLICY changes the definition of an existing row-level security policy. Note that ALTER POLICY only allows the set of roles to which the policy applies and the USING and WITH CHECK expressions to be modified. To change other properties of a policy, such as the command to which it applies or whether it is permissive or restrictive, the policy must be dropped and recreated.
To use ALTER POLICY, you must own the table that the policy applies to.
In the second form of ALTER POLICY, the specified role list, using_expression, and check_expression are replaced independently. When one of those clauses is omitted, the corresponding part of the policy is unchanged.
name
The name of the existing policy to alter.
table_name
The name (optionally schema-qualified) of the table that the policy is on.
new_name
The new name for the policy.
role_name
The role(s) to which the policy applies. Multiple roles can be specified at one time. To apply the policy to all roles, use PUBLIC.
using_expression
The USING expression of the policy. See CREATE POLICY for details.
check_expression
The WITH CHECK expression of the policy. See CREATE POLICY for details.
ALTER POLICY is a PostgreSQL extension.
ALTER MATERIALIZED VIEW — change the definition of a materialized view
ALTER MATERIALIZED VIEW
changes various auxiliary properties of an existing materialized view.
You must own the materialized view to use ALTER MATERIALIZED VIEW
. To change a materialized view's schema, you must also have CREATE
privilege on the new schema. To alter the owner, you must also be a direct or indirect member of the new owning role, and that role must have CREATE
privilege on the materialized view's schema. (These restrictions enforce that altering the owner doesn't do anything you couldn't do by dropping and recreating the materialized view. However, a superuser can alter ownership of any view anyway.)
The DEPENDS ON EXTENSION
form marks the materialized view as dependent on an extension, such that the materialized view will automatically be dropped if the extension is dropped.
The statement subforms and actions available for ALTER MATERIALIZED VIEW
are a subset of those available for ALTER TABLE
, and have the same meaning when used for materialized views. See the descriptions for ALTER TABLE for details.
name
The name (optionally schema-qualified) of an existing materialized view.
column_name
Name of a new or existing column.
extension_name
The name of the extension that the materialized view is to depend on.
new_column_name
New name for an existing column.
new_owner
The user name of the new owner of the materialized view.
new_name
The new name for the materialized view.
new_schema
The new schema for the materialized view.
To rename the materialized view foo
to bar
:
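For example:

```sql
ALTER MATERIALIZED VIEW foo RENAME TO bar;
```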
ALTER MATERIALIZED VIEW
is a PostgreSQL extension.
CREATE MATERIALIZED VIEW, DROP MATERIALIZED VIEW, REFRESH MATERIALIZED VIEW
ALTER ROLE — change a database role
ALTER ROLE changes the attributes of a PostgreSQL role.
The first variant of this command listed in the synopsis can change many of the role attributes that can be specified in CREATE ROLE. (All the possible attributes are covered, except that there are no options for adding or removing memberships; use GRANT and REVOKE for that.) Attributes not mentioned in the command retain their previous settings. Database superusers can change any of these settings for any role. Roles having CREATEROLE privilege can change any of these settings, but only for non-superuser and non-replication roles. Ordinary roles can only change their own password.
The second variant changes the name of the role. Database superusers can rename any role. Roles having CREATEROLE privilege can rename non-superuser roles. The current session user cannot be renamed. (Connect as a different user if you need to do that.) Because MD5-encrypted passwords use the role name as cryptographic salt, renaming a role clears its password if the password is MD5-encrypted.
The remaining variants change a role's session default for a configuration variable, either for all databases or, when the IN DATABASE clause is specified, only for sessions in the named database. If ALL is specified instead of a role name, this changes the setting for all roles. Using ALL with IN DATABASE is effectively the same as using the command ALTER DATABASE ... SET ....
Whenever the role subsequently starts a new session, the specified value becomes the session default, overriding whatever setting is present in postgresql.conf or has been received from the postgres command line. This only happens at login time; executing SET ROLE or SET SESSION AUTHORIZATION does not cause new configuration values to be set. Settings set for all databases are overridden by database-specific settings attached to a role. Settings for specific databases or specific roles override settings for all roles.
Superusers can change anyone's session defaults. Roles having CREATEROLE privilege can change defaults for non-superuser roles. Ordinary roles can only set defaults for themselves. Certain configuration variables cannot be set this way, or can only be set if a superuser issues the command. Only superusers can change a setting for all roles in all databases.
name
The name of the role whose attributes are to be altered.
CURRENT_USER
Alter the current user instead of an explicitly identified role.
SESSION_USER
Alter the current session user instead of an explicitly identified role.
SUPERUSER
NOSUPERUSER
CREATEDB
NOCREATEDB
CREATEROLE
NOCREATEROLE
INHERIT
NOINHERIT
LOGIN
NOLOGIN
REPLICATION
NOREPLICATION
BYPASSRLS
NOBYPASSRLS
CONNECTION LIMIT
connlimit
[ ENCRYPTED
] PASSWORD
'password
'
PASSWORD NULL
VALID UNTIL
'timestamp
'
These clauses alter attributes originally set by CREATE ROLE. For more information, see the CREATE ROLE reference page.
new_name
The new name of the role.
database_name
The name of the database in which the configuration variable should be set.
configuration_parameter
value
Set this role's session default for the specified configuration parameter to the given value. If value is DEFAULT or, equivalently, RESET is used, the role-specific parameter setting is removed, so the role will inherit the system-wide default setting in new sessions. Use RESET ALL to clear all role-specific settings. SET FROM CURRENT saves the session's current value of the parameter as the role-specific value. If IN DATABASE is specified, the configuration parameter is set or removed for the given role and database only.
Role-specific parameter settings take effect only at login; SET ROLE and SET SESSION AUTHORIZATION do not process role-specific parameter settings.
See SET and Chapter 19 for more information about allowed parameter names and values.
Use CREATE ROLE to add new roles, and DROP ROLE to remove a role.
ALTER ROLE cannot change a role's memberships. Use GRANT and REVOKE to do that.
Caution must be exercised when specifying an unencrypted password with this command. The password will be transmitted to the server in cleartext, and it might also be logged in the client's command history or the server log. psql contains a command \password that can be used to change a role's password without exposing the cleartext password.
It is also possible to tie a session default to a specific database rather than to a role; see ALTER DATABASE. If there is a conflict, database-role-specific settings override role-specific ones, which in turn override database-specific ones.
Change a role's password:
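For example (the role name davide and the password value are illustrative):

```sql
ALTER ROLE davide WITH PASSWORD 'hu8jmn3';
```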
Remove a role's password:
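For example:

```sql
ALTER ROLE davide WITH PASSWORD NULL;
```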
Change a password expiration date, specifying that the password should expire at midday on 4th May 2015 using the time zone that is one hour ahead of UTC:
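For example:

```sql
ALTER ROLE chris VALID UNTIL 'May 4 12:00:00 2015 +1';
```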
Make a password valid forever:
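For example:

```sql
ALTER ROLE fred VALID UNTIL 'infinity';
```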
Give a role the ability to create other roles and new databases:
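For example:

```sql
ALTER ROLE miriam CREATEROLE CREATEDB;
```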
Give a role a non-default setting of the maintenance_work_mem parameter:
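For example (the value is in kilobytes):

```sql
ALTER ROLE worker_bee SET maintenance_work_mem = 100000;
```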
Give a role a non-default, database-specific setting of the client_min_messages parameter:
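For example (the database name devel is illustrative):

```sql
ALTER ROLE fred IN DATABASE devel SET client_min_messages = DEBUG;
```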
The ALTER ROLE statement is a PostgreSQL extension.
COMMIT PREPARED — commit a transaction that was earlier prepared for two-phase commit
COMMIT PREPARED
commits a transaction that is in prepared state.
transaction_id
The transaction identifier of the transaction that is to be committed.
To commit a prepared transaction, you must be either the same user that executed the transaction originally, or a superuser. But you do not have to be in the same session that executed the transaction.
This command cannot be executed inside a transaction block. The prepared transaction is committed immediately.
All currently available prepared transactions are listed in the system view.
Commit the transaction identified by the transaction identifier foobar
:
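For example:

```sql
COMMIT PREPARED 'foobar';
```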
COMMIT PREPARED
is a PostgreSQL extension. It is intended for use by external transaction management systems, some of which are covered by standards (such as X/Open XA), but the SQL side of those systems is not standardized.
CREATE CAST — define a new cast
CREATE CAST
defines a new cast. A cast specifies how to perform a conversion between two data types. For example,
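The statement being described (reconstructed from the sentence that follows) would be:

```sql
SELECT CAST(42 AS float8);
```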
converts the integer constant 42 to type float8
by invoking a previously specified function, in this case float8(int4)
. (If no suitable cast has been defined, the conversion fails.)
Two types can be binary coercible, which means that the conversion can be performed “for free” without invoking any function. This requires that corresponding values use the same internal representation. For instance, the types text
and varchar
are binary coercible both ways. Binary coercibility is not necessarily a symmetric relationship. For example, the cast from xml
to text
can be performed for free in the present implementation, but the reverse direction requires a function that performs at least a syntax check. (Two types that are binary coercible both ways are also referred to as binary compatible.)
You can define a cast as an I/O conversion cast by using the WITH INOUT
syntax. An I/O conversion cast is performed by invoking the output function of the source data type, and passing the resulting string to the input function of the target data type. In many common cases, this feature avoids the need to write a separate cast function for conversion. An I/O conversion cast acts the same as a regular function-based cast; only the implementation is different.
By default, a cast can be invoked only by an explicit cast request, that is an explicit CAST(
x
AS typename
) or x
::
typename
construct.
If the cast is marked AS ASSIGNMENT
then it can be invoked implicitly when assigning a value to a column of the target data type. For example, supposing that foo.f1
is a column of type text
, then:
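The statement in question (reconstructed from the surrounding description) would be:

```sql
INSERT INTO foo VALUES (42);
```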
will be allowed if the cast from type integer
to type text
is marked AS ASSIGNMENT
, otherwise not. (We generally use the term assignment cast to describe this kind of cast.)
If the cast is marked AS IMPLICIT
then it can be invoked implicitly in any context, whether assignment or internally in an expression. (We generally use the term implicit cast to describe this kind of cast.) For example, consider this query:
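The query under discussion (reconstructed from the analysis that follows) is:

```sql
SELECT 2 + 4.0;
```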
The parser initially marks the constants as being of type integer
and numeric
respectively. There is no integer
+
numeric
operator in the system catalogs, but there is a numeric
+
numeric
operator. The query will therefore succeed if a cast from integer
to numeric
is available and is marked AS IMPLICIT
— which in fact it is. The parser will apply the implicit cast and resolve the query as if it had been written
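That is, as if the query had been:

```sql
SELECT CAST(2 AS numeric) + 4.0;
```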
Now, the catalogs also provide a cast from numeric
to integer
. If that cast were marked AS IMPLICIT
— which it is not — then the parser would be faced with choosing between the above interpretation and the alternative of casting the numeric
constant to integer
and applying the integer
+
integer
operator. Lacking any knowledge of which choice to prefer, it would give up and declare the query ambiguous. The fact that only one of the two casts is implicit is the way in which we teach the parser to prefer resolution of a mixed numeric
-and-integer
expression as numeric
; there is no built-in knowledge about that.
It is wise to be conservative about marking casts as implicit. An overabundance of implicit casting paths can cause PostgreSQL to choose surprising interpretations of commands, or to be unable to resolve commands at all because there are multiple possible interpretations. A good rule of thumb is to make a cast implicitly invokable only for information-preserving transformations between types in the same general type category. For example, the cast from int2
to int4
can reasonably be implicit, but the cast from float8
to int4
should probably be assignment-only. Cross-type-category casts, such as text
to int4
, are best made explicit-only.
To be able to create a cast, you must own the source or the target data type and have USAGE
privilege on the other type. To create a binary-coercible cast, you must be superuser. (This restriction is made because an erroneous binary-coercible cast conversion can easily crash the server.)
source_type
The name of the source data type of the cast.
target_type
The name of the target data type of the cast.
function_name
[(argument_type
[, ...])]
The function used to perform the cast. The function name can be schema-qualified. If it is not, the function will be looked up in the schema search path. The function's result data type must match the target type of the cast. Its arguments are discussed below. If no argument list is specified, the function name must be unique in its schema.
WITHOUT FUNCTION
Indicates that the source type is binary-coercible to the target type, so no function is required to perform the cast.
WITH INOUT
Indicates that the cast is an I/O conversion cast, performed by invoking the output function of the source data type, and passing the resulting string to the input function of the target data type.
AS ASSIGNMENT
Indicates that the cast can be invoked implicitly in assignment contexts.
AS IMPLICIT
Indicates that the cast can be invoked implicitly in any context.
Cast implementation functions can have one to three arguments. The first argument type must be identical to or binary-coercible from the cast's source type. The second argument, if present, must be type integer
; it receives the type modifier associated with the destination type, or -1
if there is none. The third argument, if present, must be type boolean
; it receives true
if the cast is an explicit cast, false
otherwise. (Bizarrely, the SQL standard demands different behaviors for explicit and implicit casts in some cases. This argument is supplied for functions that must implement such casts. It is not recommended that you design your own data types so that this matters.)
The return type of a cast function must be identical to or binary-coercible to the cast's target type.
Ordinarily a cast must have different source and target data types. However, it is allowed to declare a cast with identical source and target types if it has a cast implementation function with more than one argument. This is used to represent type-specific length coercion functions in the system catalogs. The named function is used to coerce a value of the type to the type modifier value given by its second argument.
When a cast has different source and target types and a function that takes more than one argument, it supports converting from one type to another and applying a length coercion in a single step. When no such entry is available, coercion to a type that uses a type modifier involves two cast steps, one to convert between data types and a second to apply the modifier.
A cast to or from a domain type currently has no effect. Casting to or from a domain uses the casts associated with its underlying type.
Remember that if you want to be able to convert types both ways you need to declare casts both ways explicitly.
It is normally not necessary to create casts between user-defined types and the standard string types (text
, varchar
, and char(
n
), as well as user-defined types that are defined to be in the string category). PostgreSQL provides automatic I/O conversion casts for that. The automatic casts to string types are treated as assignment casts, while the automatic casts from string types are explicit-only. You can override this behavior by declaring your own cast to replace an automatic cast, but usually the only reason to do so is if you want the conversion to be more easily invokable than the standard assignment-only or explicit-only setting. Another possible reason is that you want the conversion to behave differently from the type's I/O function; but that is sufficiently surprising that you should think twice about whether it's a good idea. (A small number of the built-in types do indeed have different behaviors for conversions, mostly because of requirements of the SQL standard.)
While not required, it is recommended that you continue to follow this old convention of naming cast implementation functions after the target data type. Many users are used to being able to cast data types using a function-style notation, that is typename
(x
). This notation is in fact nothing more nor less than a call of the cast implementation function; it is not specially treated as a cast. If your conversion functions are not named to support this convention then you will have surprised users. Since PostgreSQL allows overloading of the same function name with different argument types, there is no difficulty in having multiple conversion functions from different types that all use the target type's name.
Actually the preceding paragraph is an oversimplification: there are two cases in which a function-call construct will be treated as a cast request without having matched it to an actual function. If a function call name
(x
) does not exactly match any existing function, but name
is the name of a data type and pg_cast
provides a binary-coercible cast to this type from the type of x
, then the call will be construed as a binary-coercible cast. This exception is made so that binary-coercible casts can be invoked using functional syntax, even though they lack any function. Likewise, if there is no pg_cast
entry but the cast would be to or from a string type, the call will be construed as an I/O conversion cast. This exception allows I/O conversion casts to be invoked using functional syntax.
There is also an exception to the exception: I/O conversion casts from composite types to string types cannot be invoked using functional syntax, but must be written in explicit cast syntax (either CAST
or ::
notation). This exception was added because after the introduction of automatically-provided I/O conversion casts, it was found too easy to accidentally invoke such a cast when a function or column reference was intended.
To create an assignment cast from type bigint
to type int4
using the function int4(bigint)
:
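A sketch of the statement:

```sql
CREATE CAST (bigint AS int4) WITH FUNCTION int4(bigint) AS ASSIGNMENT;
```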
(This cast is already predefined in the system.)
The CREATE CAST
command conforms to the SQL standard, except that SQL does not make provisions for binary-coercible types or extra arguments to implementation functions. AS IMPLICIT
is a PostgreSQL extension, too.
CREATE SEQUENCE — define a new sequence generator
CREATE SEQUENCE
creates a new sequence number generator. This involves creating and initializing a new special single-row table with the name name
. The generator will be owned by the user issuing the command.
If a schema name is given then the sequence is created in the specified schema. Otherwise it is created in the current schema. Temporary sequences exist in a special schema, so a schema name cannot be given when creating a temporary sequence. The sequence name must be distinct from the name of any other sequence, table, index, view, or foreign table in the same schema.
After a sequence is created, you use the functions nextval
, currval
, and setval
to operate on the sequence. These functions are documented in .
Although you cannot update a sequence directly, you can use a query like:
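A sketch, substituting the sequence's own name for name:

```sql
SELECT * FROM name;
```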
to examine the parameters and current state of a sequence. In particular, the last_value
field of the sequence shows the last value allocated by any session. (Of course, this value might be obsolete by the time it's printed, if other sessions are actively doing nextval
calls.)
TEMPORARY
or TEMP
If specified, the sequence object is created only for this session, and is automatically dropped on session exit. Existing permanent sequences with the same name are not visible (in this session) while the temporary sequence exists, unless they are referenced with schema-qualified names.
IF NOT EXISTS
Do not throw an error if a relation with the same name already exists. A notice is issued in this case. Note that there is no guarantee that the existing relation is anything like the sequence that would have been created - it might not even be a sequence.
name
The name (optionally schema-qualified) of the sequence to be created.
data_type
The optional clause AS
data_type
specifies the data type of the sequence. Valid types are smallint
, integer
, and bigint
. bigint
is the default. The data type determines the default minimum and maximum values of the sequence.
increment
The optional clause INCREMENT BY
increment
specifies which value is added to the current sequence value to create a new value. A positive value will make an ascending sequence, a negative one a descending sequence. The default value is 1.
minvalue
NO MINVALUE
The optional clause MINVALUE
minvalue
determines the minimum value a sequence can generate. If this clause is not supplied or NO MINVALUE
is specified, then defaults will be used. The default for an ascending sequence is 1. The default for a descending sequence is the minimum value of the data type.
maxvalue
NO MAXVALUE
The optional clause MAXVALUE
maxvalue
determines the maximum value for the sequence. If this clause is not supplied or NO MAXVALUE
is specified, then default values will be used. The default for an ascending sequence is the maximum value of the data type. The default for a descending sequence is -1.
start
The optional clause START WITH
start
allows the sequence to begin anywhere. The default starting value is minvalue
for ascending sequences and maxvalue
for descending ones.
cache
The optional clause CACHE
cache
specifies how many sequence numbers are to be preallocated and stored in memory for faster access. The minimum value is 1 (only one value can be generated at a time, i.e., no cache), and this is also the default.
CYCLE
NO CYCLE
The CYCLE
option allows the sequence to wrap around when the maxvalue
or minvalue
has been reached by an ascending or descending sequence respectively. If the limit is reached, the next number generated will be the minvalue
or maxvalue
, respectively.
If NO CYCLE is specified, any calls to nextval after the sequence has reached its maximum value will return an error. If neither CYCLE nor NO CYCLE is specified, NO CYCLE is the default.
OWNED BY table_name.column_name
OWNED BY NONE
The OWNED BY
option causes the sequence to be associated with a specific table column, such that if that column (or its whole table) is dropped, the sequence will be automatically dropped as well. The specified table must have the same owner and be in the same schema as the sequence. OWNED BY NONE
, the default, specifies that there is no such association.
Use DROP SEQUENCE
to remove a sequence.
Sequences are based on bigint
arithmetic, so the range cannot exceed the range of an eight-byte integer (-9223372036854775808 to 9223372036854775807).
Because nextval
and setval
calls are never rolled back, sequence objects cannot be used if “gapless” assignment of sequence numbers is needed. It is possible to build gapless assignment by using exclusive locking of a table containing a counter; but this solution is much more expensive than sequence objects, especially if many transactions need sequence numbers concurrently.
Unexpected results might be obtained if a cache
setting greater than one is used for a sequence object that will be used concurrently by multiple sessions. Each session will allocate and cache successive sequence values during one access to the sequence object and increase the sequence object's last_value
accordingly. Then, the next cache
-1 uses of nextval
within that session simply return the preallocated values without touching the sequence object. So, any numbers allocated but not used within a session will be lost when that session ends, resulting in “holes” in the sequence.
Furthermore, although multiple sessions are guaranteed to allocate distinct sequence values, the values might be generated out of sequence when all the sessions are considered. For example, with a cache
setting of 10, session A might reserve values 1..10 and return nextval
=1, then session B might reserve values 11..20 and return nextval
=11 before session A has generated nextval
=2. Thus, with a cache
setting of one it is safe to assume that nextval
values are generated sequentially; with a cache
setting greater than one you should only assume that the nextval
values are all distinct, not that they are generated purely sequentially. Also, last_value
will reflect the latest value reserved by any session, whether or not it has yet been returned by nextval
.
Another consideration is that a setval
executed on such a sequence will not be noticed by other sessions until they have used up any preallocated values they have cached.
Create an ascending sequence called serial
, starting at 101:
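The statement could look like:

```sql
CREATE SEQUENCE serial START 101;
```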
Select the next number from this sequence:
Select the next number from this sequence:
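Given the serial sequence above, which starts at 101, two successive calls return the start value and then the next value:

```sql
SELECT nextval('serial');

 nextval
---------
     101

SELECT nextval('serial');

 nextval
---------
     102
```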
Use this sequence in an INSERT
command:
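For instance, supplying the sequence value for a key column (distributors is a hypothetical table):

```sql
INSERT INTO distributors VALUES (nextval('serial'), 'nothing');
```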
Update the sequence value after a COPY FROM
:
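After bulk-loading rows that already contain key values, the sequence can be resynchronized in the same transaction (table, column, and file names are illustrative):

```sql
BEGIN;
COPY distributors FROM 'input_data';
SELECT setval('serial', max(id)) FROM distributors;
END;
```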
CREATE SEQUENCE
conforms to the SQL standard, with the following exceptions:
Obtaining the next value is done using the nextval()
function instead of the standard's NEXT VALUE FOR
expression.
The OWNED BY
clause is a PostgreSQL extension.
Version: 11
CREATE TABLE — define a new table
CREATE TABLE will create a new, initially empty table in the current database. The table will be owned by the user issuing the command.
If a schema name is given (for example, CREATE TABLE myschema.mytable ...), then the table is created in the specified schema. Otherwise it is created in the current schema. Temporary tables exist in a special schema, so a schema name cannot be given when creating a temporary table. The name of the table must be distinct from the name of any other table, sequence, index, view, or foreign table in the same schema.
CREATE TABLE also automatically creates a data type that represents the composite type corresponding to one row of the table. Therefore, tables cannot have the same name as any existing data type in the same schema. The optional constraint clauses specify constraints (tests) that new or updated rows must satisfy for an insert or update operation to succeed. A constraint is an SQL object that helps define the set of valid values in the table in various ways.
There are two ways to define constraints: table constraints and column constraints. A column constraint is defined as part of a column definition. A table constraint definition is not tied to a particular column, and it can encompass more than one column.
Every column constraint can also be written as a table constraint; a column constraint is only a notational convenience for use when the constraint affects only one column.
To be able to create a table, you must have USAGE privilege on all column types or the type in the OF clause, respectively.
TEMPORARY
or TEMP
If specified, the table is created as a temporary table. Temporary tables are automatically dropped at the end of a session, or optionally at the end of the current transaction (see ON COMMIT below). Existing permanent tables with the same name are not visible to the current session while the temporary table exists, unless they are referenced with schema-qualified names. Any indexes created on a temporary table are automatically temporary as well.
The autovacuum daemon cannot access temporary tables, and therefore cannot vacuum or analyze them. For this reason, appropriate vacuum and analyze operations should be performed via session SQL commands. For example, if a temporary table is going to be used in complex queries, it is wise to run ANALYZE on it after it is populated.
Optionally, GLOBAL or LOCAL can be written before TEMPORARY or TEMP. This presently makes no difference in PostgreSQL and is deprecated; see Compatibility.
UNLOGGED
If specified, the table is created as an unlogged table. Data written to unlogged tables is not written to the write-ahead log (see Chapter 29), which makes them considerably faster than ordinary tables. However, they are not crash-safe: an unlogged table is automatically truncated after a crash or unclean shutdown. The contents of an unlogged table are also not replicated to standby servers. Any indexes created on an unlogged table are automatically unlogged as well.
IF NOT EXISTS
Do not throw an error if a relation with the same name already exists. A NOTICE is issued in this case. Note that there is no guarantee that the existing relation is anything like the one that would have been created.
table_name
The name (optionally schema-qualified) of the table to be created.
OF
type_name
Creates a typed table, which takes its structure from the specified composite type (name optionally schema-qualified). A typed table is tied to its type; for example the table will be dropped if the type is dropped (with DROP TYPE ... CASCADE
).
When a typed table is created, then the data types of the columns are determined by the underlying composite type and are not specified by the CREATE TABLE
command. But the CREATE TABLE
command can add defaults and constraints to the table and can specify storage parameters.
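A minimal sketch of a typed table (hypothetical type and table names; the column list comes from the type, and only defaults and constraints are added here):

```sql
CREATE TYPE employee_type AS (name text, salary numeric);

-- Columns are determined by employee_type; CREATE TABLE may only
-- attach constraints and defaults to them.
CREATE TABLE employees OF employee_type (
    PRIMARY KEY (name),
    salary WITH OPTIONS DEFAULT 1000
);
```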
column_name
The name of a column to be created in the new table.
data_type
The data type of the column. This can include array specifiers. For more information on the data types supported by PostgreSQL, refer to Chapter 8.
COLLATE
collation
The COLLATE
clause assigns a collation to the column (which must be of a collatable data type). If not specified, the column data type's default collation is used.
INHERITS (
parent_table
[, ... ] )
The optional INHERITS
clause specifies a list of tables from which the new table automatically inherits all columns. Parent tables can be plain tables or foreign tables.
Use of INHERITS
creates a persistent relationship between the new child table and its parent table(s). Schema modifications to the parent(s) normally propagate to children as well, and by default the data of the child table is included in scans of the parent(s).
If the same column name exists in more than one parent table, an error is reported unless the data types of the columns match in each of the parent tables. If there is no conflict, then the duplicate columns are merged to form a single column in the new table. If the column name list of the new table contains a column name that is also inherited, the data type must likewise match the inherited column(s), and the column definitions are merged into one. If the new table explicitly specifies a default value for the column, this default overrides any defaults from inherited declarations of the column. Otherwise, any parents that specify default values for the column must all specify the same default, or an error will be reported.
CHECK
constraints are merged in essentially the same way as columns: if multiple parent tables and/or the new table definition contain identically-named CHECK
constraints, these constraints must all have the same check expression, or an error will be reported. Constraints having the same name and expression will be merged into one copy. A constraint marked NO INHERIT
in a parent will not be considered. Notice that an unnamed CHECK
constraint in the new table will never be merged, since a unique name will always be chosen for it.
Column STORAGE
settings are also copied from parent tables.
If a column in the parent table is an identity column, that property is not inherited. A column in the child table can be declared identity column if desired.
PARTITION BY { RANGE | LIST | HASH } ( {
column_name
| ( expression
) } [ opclass
] [, ...] )
The optional PARTITION BY
clause specifies a strategy of partitioning the table. The table thus created is called a partitioned table. The parenthesized list of columns or expressions forms the partition key for the table. When using range or hash partitioning, the partition key can include multiple columns or expressions (up to 32, but this limit can be altered when building PostgreSQL), but for list partitioning, the partition key must consist of a single column or expression.
Range and list partitioning require a btree operator class, while hash partitioning requires a hash operator class. If no operator class is specified explicitly, the default operator class of the appropriate type will be used; if no default operator class exists, an error will be raised. When hash partitioning is used, the operator class used must implement support function 2 (see Section 37.16.3 for details).
A partitioned table is divided into sub-tables (called partitions), which are created using separate CREATE TABLE
commands. The partitioned table is itself empty. A data row inserted into the table is routed to a partition based on the value of columns or expressions in the partition key. If no existing partition matches the values in the new row, an error will be reported.
Partitioned tables do not support EXCLUDE
constraints; however, you can define these constraints on individual partitions.
See Section 5.11 for more discussion on table partitioning.
PARTITION OF
parent_table
{ FOR VALUES partition_bound_spec
| DEFAULT }
Creates the table as a partition of the specified parent table. The table can be created either as a partition for specific values using FOR VALUES or as a default partition using DEFAULT. Any indexes, constraints, and user-defined row-level triggers that exist on the parent table are cloned on the new partition.
The partition_bound_spec
must correspond to the partitioning method and partition key of the parent table, and must not overlap with any existing partition of that parent. The form with IN
is used for list partitioning, the form with FROM
and TO
is used for range partitioning, and the form with WITH
is used for hash partitioning.
partition_bound_expr
is any variable-free expression (subqueries, window functions, aggregate functions, and set-returning functions are not allowed). Its data type must match the data type of the corresponding partition key column. The expression is evaluated once at table creation time, so it can even contain volatile expressions such as CURRENT_TIMESTAMP
.
When creating a list partition, NULL
can be specified to signify that the partition allows the partition key column to be null. However, there cannot be more than one such list partition for a given parent table. NULL
cannot be specified for range partitions.
When creating a range partition, the lower bound specified with FROM
is an inclusive bound, whereas the upper bound specified with TO
is an exclusive bound. That is, the values specified in the FROM
list are valid values of the corresponding partition key columns for this partition, whereas those in the TO
list are not. Note that this statement must be understood according to the rules of row-wise comparison (Section 9.23.5). For example, given PARTITION BY RANGE (x,y)
, a partition bound FROM (1, 2) TO (3, 4)
allows x=1
with any y>=2
, x=2
with any non-null y
, and x=3
with any y<4
.
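The two-column range bound just described can be written as follows (illustrative names):

```sql
CREATE TABLE measurements (x int, y int) PARTITION BY RANGE (x, y);

-- Row-wise comparison: this partition accepts x=1 with any y>=2,
-- x=2 with any non-null y, and x=3 with any y<4.
CREATE TABLE measurements_p1 PARTITION OF measurements
    FOR VALUES FROM (1, 2) TO (3, 4);
```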
The special values MINVALUE
and MAXVALUE
may be used when creating a range partition to indicate that there is no lower or upper bound on the column's value. For example, a partition defined using FROM (MINVALUE) TO (10)
allows any values less than 10, and a partition defined using FROM (10) TO (MAXVALUE)
allows any values greater than or equal to 10.
When creating a range partition involving more than one column, it can also make sense to use MAXVALUE
as part of the lower bound, and MINVALUE
as part of the upper bound. For example, a partition defined using FROM (0, MAXVALUE) TO (10, MAXVALUE)
allows any rows where the first partition key column is greater than 0 and less than or equal to 10. Similarly, a partition defined using FROM ('a', MINVALUE) TO ('b', MINVALUE)
allows any rows where the first partition key column starts with "a".
Note that if MINVALUE
or MAXVALUE
is used for one column of a partitioning bound, the same value must be used for all subsequent columns. For example, (10, MINVALUE, 0)
is not a valid bound; you should write (10, MINVALUE, MINVALUE)
.
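An open-ended partition of the kind described above might be declared like this (illustrative names):

```sql
CREATE TABLE events (ts timestamp) PARTITION BY RANGE (ts);

-- No lower bound: every timestamp before 2020-01-01 routes here.
CREATE TABLE events_old PARTITION OF events
    FOR VALUES FROM (MINVALUE) TO ('2020-01-01');
```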
Also note that some element types, such as timestamp
, have a notion of "infinity", which is just another value that can be stored. This is different from MINVALUE
and MAXVALUE
, which are not real values that can be stored, but rather they are ways of saying that the value is unbounded. MAXVALUE
can be thought of as being greater than any other value, including "infinity" and MINVALUE
as being less than any other value, including "minus infinity". Thus the range FROM ('infinity') TO (MAXVALUE)
is not an empty range; it allows precisely one value to be stored — "infinity".
If DEFAULT
is specified, the table will be created as the default partition of the parent table. This option is not available for hash-partitioned tables. A partition key value not fitting into any other partition of the given parent will be routed to the default partition.
When a table has an existing DEFAULT
partition and a new partition is added to it, the default partition must be scanned to verify that it does not contain any rows which properly belong in the new partition. If the default partition contains a large number of rows, this may be slow. The scan will be skipped if the default partition is a foreign table or if it has a constraint which proves that it cannot contain rows which should be placed in the new partition.
When creating a hash partition, a modulus and remainder must be specified. The modulus must be a positive integer, and the remainder must be a non-negative integer less than the modulus. Typically, when initially setting up a hash-partitioned table, you should choose a modulus equal to the number of partitions and assign every table the same modulus and a different remainder (see examples, below). However, it is not required that every partition have the same modulus, only that every modulus which occurs among the partitions of a hash-partitioned table is a factor of the next larger modulus. This allows the number of partitions to be increased incrementally without needing to move all the data at once. For example, suppose you have a hash-partitioned table with 8 partitions, each of which has modulus 8, but find it necessary to increase the number of partitions to 16. You can detach one of the modulus-8 partitions, create two new modulus-16 partitions covering the same portion of the key space (one with a remainder equal to the remainder of the detached partition, and the other with a remainder equal to that value plus 8), and repopulate them with data. You can then repeat this -- perhaps at a later time -- for each modulus-8 partition until none remain. While this may still involve a large amount of data movement at each step, it is still better than having to create a whole new table and move all the data at once.
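Setting up a hash-partitioned table in the recommended way, with the same modulus everywhere and one partition per remainder, might look like this (illustrative names):

```sql
CREATE TABLE orders (order_id int) PARTITION BY HASH (order_id);

CREATE TABLE orders_p0 PARTITION OF orders FOR VALUES WITH (MODULUS 4, REMAINDER 0);
CREATE TABLE orders_p1 PARTITION OF orders FOR VALUES WITH (MODULUS 4, REMAINDER 1);
CREATE TABLE orders_p2 PARTITION OF orders FOR VALUES WITH (MODULUS 4, REMAINDER 2);
CREATE TABLE orders_p3 PARTITION OF orders FOR VALUES WITH (MODULUS 4, REMAINDER 3);
```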
A partition must have the same column names and types as the partitioned table to which it belongs. Modifications to the column names or types of a partitioned table will automatically propagate to all partitions. CHECK
constraints will be inherited automatically by every partition, but an individual partition may specify additional CHECK
constraints; additional constraints with the same name and condition as in the parent will be merged with the parent constraint. Defaults may be specified separately for each partition. But note that a partition's default value is not applied when inserting a tuple through a partitioned table.
Rows inserted into a partitioned table will be automatically routed to the correct partition. If no suitable partition exists, an error will occur.
Operations such as TRUNCATE which normally affect a table and all of its inheritance children will cascade to all partitions, but may also be performed on an individual partition. Note that dropping a partition with DROP TABLE
requires taking an ACCESS EXCLUSIVE
lock on the parent table.
LIKE
source_table
[ like_option
... ]
The LIKE
clause specifies a table from which the new table automatically copies all column names, their data types, and their not-null constraints.
Unlike INHERITS
, the new table and original table are completely decoupled after creation is complete. Changes to the original table will not be applied to the new table, and it is not possible to include data of the new table in scans of the original table.
Also unlike INHERITS
, columns and constraints copied by LIKE
are not merged with similarly named columns and constraints. If the same name is specified explicitly or in another LIKE
clause, an error is signaled.
The optional like_option
clauses specify which additional properties of the original table to copy. Specifying INCLUDING
copies the property, specifying EXCLUDING
omits the property. EXCLUDING
is the default. If multiple specifications are made for the same kind of object, the last one is used. The available options are:
INCLUDING COMMENTS
Comments for the copied columns, constraints, and indexes will be copied. The default behavior is to exclude comments, resulting in the copied columns and constraints in the new table having no comments.
INCLUDING CONSTRAINTS
CHECK
constraints will be copied. No distinction is made between column constraints and table constraints. Not-null constraints are always copied to the new table.
INCLUDING DEFAULTS
Default expressions for the copied column definitions will be copied. Otherwise, default expressions are not copied, resulting in the copied columns in the new table having null defaults. Note that copying defaults that call database-modification functions, such as nextval
, may create a functional linkage between the original and new tables.
INCLUDING GENERATED
Any generation expressions of copied column definitions will be copied. By default, new columns will be regular base columns.
INCLUDING IDENTITY
Any identity specifications of copied column definitions will be copied. A new sequence is created for each identity column of the new table, separate from the sequences associated with the old table.
INCLUDING INDEXES
Indexes, PRIMARY KEY
, UNIQUE
, and EXCLUDE
constraints on the original table will be created on the new table. Names for the new indexes and constraints are chosen according to the default rules, regardless of how the originals were named. (This behavior avoids possible duplicate-name failures for the new indexes.)
INCLUDING STATISTICS
Extended statistics are copied to the new table.
INCLUDING STORAGE
STORAGE
settings for the copied column definitions will be copied. The default behavior is to exclude STORAGE
settings, resulting in the copied columns in the new table having type-specific default settings. For more on STORAGE
settings, see Section 68.2.
INCLUDING ALL
INCLUDING ALL
is an abbreviated form selecting all the available individual options. (It could be useful to write individual EXCLUDING
clauses after INCLUDING ALL
to select all but some specific options.)
The LIKE
clause can also be used to copy column definitions from views, foreign tables, or composite types. Inapplicable options (e.g., INCLUDING INDEXES
from a view) are ignored.
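For instance, copying every property of a table except its indexes (films is a hypothetical source table):

```sql
CREATE TABLE films_copy (LIKE films INCLUDING ALL EXCLUDING INDEXES);
```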
CONSTRAINT
constraint_name
An optional name for a column or table constraint. If the constraint is violated, the constraint name is present in error messages, so constraint names like col must be positive
can be used to communicate helpful constraint information to client applications. (Double-quotes are needed to specify constraint names that contain spaces.) If a constraint name is not specified, the system generates a name.
NOT NULL
The column is not allowed to contain null values.
NULL
The column is allowed to contain null values. This is the default.
This clause is only provided for compatibility with non-standard SQL databases. Its use is discouraged in new applications.
CHECK (
expression
) [ NO INHERIT ]
The CHECK
clause specifies an expression producing a Boolean result which new or updated rows must satisfy for an insert or update operation to succeed. Expressions evaluating to TRUE or UNKNOWN succeed. Should any row of an insert or update operation produce a FALSE result, an error exception is raised and the insert or update does not alter the database. A check constraint specified as a column constraint should reference that column's value only, while an expression appearing in a table constraint can reference multiple columns.
Currently, CHECK
expressions cannot contain subqueries nor refer to variables other than columns of the current row (see Section 5.4.1). The system column tableoid
may be referenced, but not any other system column.
A constraint marked with NO INHERIT
will not propagate to child tables.
When a table has multiple CHECK
constraints, they will be tested for each row in alphabetical order by name, after checking NOT NULL
constraints. (PostgreSQL versions before 9.5 did not honor any particular firing order for CHECK
constraints.)
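The column-level and table-level forms of CHECK, side by side (illustrative table):

```sql
CREATE TABLE products (
    price    numeric CHECK (price > 0),          -- column constraint
    discount numeric,
    CHECK (discount >= 0 AND price > discount)   -- table constraint
);
```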
DEFAULT
default_expr
The DEFAULT
clause assigns a default data value for the column whose column definition it appears within. The value is any variable-free expression (in particular, cross-references to other columns in the current table are not allowed). Subqueries are not allowed either. The data type of the default expression must match the data type of the column.
The default expression will be used in any insert operation that does not specify a value for the column. If there is no default for a column, then the default is null.
GENERATED ALWAYS AS (
generation_expr
) STORED
This clause creates the column as a generated column. The column cannot be written to, and when read the result of the specified expression will be returned.
The keyword STORED
is required to signify that the column will be computed on write and will be stored on disk.
The generation expression can refer to other columns in the table, but not other generated columns. Any functions and operators used must be immutable. References to other tables are not allowed.
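A minimal generated-column sketch (illustrative names; the expression references another column of the same table and uses only immutable operators):

```sql
CREATE TABLE people (
    height_cm numeric,
    -- Computed on write and stored on disk; cannot be written to directly.
    height_in numeric GENERATED ALWAYS AS (height_cm / 2.54) STORED
);
```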
GENERATED { ALWAYS | BY DEFAULT } AS IDENTITY [ (
sequence_options
) ]
This clause creates the column as an identity column. It will have an implicit sequence attached to it and the column in new rows will automatically have values from the sequence assigned to it.
The clauses ALWAYS
and BY DEFAULT
determine how the sequence value is given precedence over a user-specified value in an INSERT
statement. If ALWAYS
is specified, a user-specified value is only accepted if the INSERT
statement specifies OVERRIDING SYSTEM VALUE
. If BY DEFAULT
is specified, then the user-specified value takes precedence. See INSERT for details. (In the COPY
command, user-specified values are always used regardless of this setting.)
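The difference between the two variants can be sketched as follows (illustrative table):

```sql
CREATE TABLE items (
    id   int GENERATED ALWAYS AS IDENTITY,
    name text
);

-- Rejected: an ALWAYS identity column refuses user-supplied values.
INSERT INTO items (id, name) VALUES (100, 'widget');

-- Accepted: the override must be requested explicitly.
INSERT INTO items (id, name) OVERRIDING SYSTEM VALUE VALUES (100, 'widget');
```

With BY DEFAULT instead of ALWAYS, the first INSERT would succeed without the OVERRIDING clause.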
The optional sequence_options
clause can be used to override the options of the sequence. See CREATE SEQUENCE for details.UNIQUE
(column constraint)
UNIQUE (
column_name
[, ... ] ) [ INCLUDE ( column_name
[, ...]) ] (table constraint)
The UNIQUE
constraint specifies that a group of one or more columns of a table can contain only unique values. The behavior of the unique table constraint is the same as that for column constraints, with the additional capability to span multiple columns.
For the purpose of a unique constraint, null values are not considered equal.
Each unique table constraint must name a set of columns that is different from the set of columns named by any other unique or primary key constraint defined for the table. (Otherwise it would just be the same constraint listed twice.)
When establishing a unique constraint for a multi-level partition hierarchy, all the columns in the partition key of the target partitioned table, as well as those of all its descendant partitioned tables, must be included in the constraint definition.
Adding a unique constraint will automatically create a unique btree index on the column or group of columns used in the constraint. The optional clause INCLUDE
adds to that index one or more columns on which the uniqueness is not enforced. Note that although the constraint is not enforced on the included columns, it still depends on them. Consequently, some operations on these columns (e.g. DROP COLUMN
) can cause cascaded constraint and index deletion.
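A unique constraint with a non-enforced INCLUDE column might look like this (illustrative names):

```sql
CREATE TABLE bookings (
    room  int,
    day   date,
    price numeric,
    -- Uniqueness is enforced on (room, day); price is carried in the
    -- index only, e.g. to enable index-only scans.
    UNIQUE (room, day) INCLUDE (price)
);
```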
PRIMARY KEY
(column constraint)
PRIMARY KEY (
column_name
[, ... ] ) [ INCLUDE ( column_name
[, ...]) ] (table constraint)
The PRIMARY KEY
constraint specifies that a column or columns of a table can contain only unique (non-duplicate), nonnull values. Only one primary key can be specified for a table, whether as a column constraint or a table constraint.
The primary key constraint should name a set of columns that is different from the set of columns named by any unique constraint defined for the same table. (Otherwise, the unique constraint is redundant and will be discarded.)
PRIMARY KEY
enforces the same data constraints as a combination of UNIQUE
and NOT NULL
, but identifying a set of columns as the primary key also provides metadata about the design of the schema, since a primary key implies that other tables can rely on this set of columns as a unique identifier for rows.
PRIMARY KEY
constraints share the restrictions that UNIQUE
constraints have when placed on partitioned tables.
Adding a PRIMARY KEY
constraint will automatically create a unique btree index on the column or group of columns used in the constraint. The optional INCLUDE
clause allows a list of columns to be specified which will be included in the non-key portion of the index. Although uniqueness is not enforced on the included columns, the constraint still depends on them. Consequently, some operations on the included columns (e.g. DROP COLUMN
) can cause cascaded constraint and index deletion.
EXCLUDE [ USING
index_method
] ( exclude_element
WITH operator
[, ... ] ) index_parameters
[ WHERE ( predicate
) ]
The EXCLUDE
clause defines an exclusion constraint, which guarantees that if any two rows are compared on the specified column(s) or expression(s) using the specified operator(s), not all of these comparisons will return TRUE
. If all of the specified operators test for equality, this is equivalent to a UNIQUE
constraint, although an ordinary unique constraint will be faster. However, exclusion constraints can specify constraints that are more general than simple equality. For example, you can specify a constraint that no two rows in the table contain overlapping circles (see Section 8.8) by using the &&
operator.
Exclusion constraints are implemented using an index, so each specified operator must be associated with an appropriate operator class (see Section 11.10) for the index access method index_method
. The operators are required to be commutative. Each exclude_element
can optionally specify an operator class and/or ordering options; these are described fully under CREATE INDEX.
The access method must support amgettuple
(see Chapter 61); at present this means GIN cannot be used. Although it's allowed, there is little point in using B-tree or hash indexes with an exclusion constraint, because this does nothing that an ordinary unique constraint doesn't do better. So in practice the access method will always be GiST or SP-GiST.
The predicate
allows you to specify an exclusion constraint on a subset of the table; internally this creates a partial index. Note that parentheses are required around the predicate.
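The overlapping-circles case mentioned above can be written as:

```sql
CREATE TABLE circles (
    c circle,
    -- No two rows may contain overlapping circles.
    EXCLUDE USING gist (c WITH &&)
);
```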
REFERENCES
reftable
[ ( refcolumn
) ] [ MATCH matchtype
] [ ON DELETE referential_action
] [ ON UPDATE referential_action
] (column constraint)
FOREIGN KEY (
column_name
[, ... ] ) REFERENCES reftable
[ ( refcolumn
[, ... ] ) ] [ MATCH matchtype
] [ ON DELETE referential_action
] [ ON UPDATE referential_action
] (table constraint)
These clauses specify a foreign key constraint, which requires that a group of one or more columns of the new table must only contain values that match values in the referenced column(s) of some row of the referenced table. If the refcolumn
list is omitted, the primary key of the reftable
is used. The referenced columns must be the columns of a non-deferrable unique or primary key constraint in the referenced table. The user must have REFERENCES
permission on the referenced table (either the whole table, or the specific referenced columns). The addition of a foreign key constraint requires a SHARE ROW EXCLUSIVE
lock on the referenced table. Note that foreign key constraints cannot be defined between temporary tables and permanent tables.
A value inserted into the referencing column(s) is matched against the values of the referenced table and referenced columns using the given match type. There are three match types: MATCH FULL
, MATCH PARTIAL
, and MATCH SIMPLE
(which is the default). MATCH FULL
will not allow one column of a multicolumn foreign key to be null unless all foreign key columns are null; if they are all null, the row is not required to have a match in the referenced table. MATCH SIMPLE
allows any of the foreign key columns to be null; if any of them are null, the row is not required to have a match in the referenced table. MATCH PARTIAL
is not yet implemented. (Of course, NOT NULL
constraints can be applied to the referencing column(s) to prevent these cases from arising.)
In addition, when the data in the referenced columns is changed, certain actions are performed on the data in this table's columns. The ON DELETE
clause specifies the action to perform when a referenced row in the referenced table is being deleted. Likewise, the ON UPDATE
clause specifies the action to perform when a referenced column in the referenced table is being updated to a new value. If the row is updated, but the referenced column is not actually changed, no action is done. Referential actions other than the NO ACTION
check cannot be deferred, even if the constraint is declared deferrable. There are the following possible actions for each clause:
NO ACTION
Produce an error indicating that the deletion or update would create a foreign key constraint violation. If the constraint is deferred, this error will be produced at constraint check time if there still exist any referencing rows. This is the default action.
RESTRICT
Produce an error indicating that the deletion or update would create a foreign key constraint violation. This is the same as NO ACTION
except that the check is not deferrable.
CASCADE
Delete any rows referencing the deleted row, or update the values of the referencing column(s) to the new values of the referenced columns, respectively.
SET NULL
Set the referencing column(s) to null.
SET DEFAULT
Set the referencing column(s) to their default values. (There must be a row in the referenced table matching the default values, if they are not null, or the operation will fail.)
If the referenced column(s) are changed frequently, it might be wise to add an index to the referencing column(s) so that referential actions associated with the foreign key constraint can be performed more efficiently.
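A foreign key with referential actions might be declared like this (illustrative names):

```sql
CREATE TABLE customers (customer_id int PRIMARY KEY);

CREATE TABLE purchases (
    -- Deleting a customer deletes their purchases; updating a
    -- customer_id propagates the new value.
    customer_id int REFERENCES customers
                    ON DELETE CASCADE
                    ON UPDATE CASCADE,
    amount      numeric
);
```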
DEFERRABLE
NOT DEFERRABLE
This controls whether the constraint can be deferred. A constraint that is not deferrable will be checked immediately after every command. Checking of constraints that are deferrable can be postponed until the end of the transaction (using the SET CONSTRAINTS command). NOT DEFERRABLE
is the default. Currently, only UNIQUE
, PRIMARY KEY
, EXCLUDE
, and REFERENCES
(foreign key) constraints accept this clause. NOT NULL
and CHECK
constraints are not deferrable. Note that deferrable constraints cannot be used as conflict arbitrators in an INSERT
statement that includes an ON CONFLICT DO UPDATE
clause.
INITIALLY IMMEDIATE
INITIALLY DEFERRED
If a constraint is deferrable, this clause specifies the default time to check the constraint. If the constraint is INITIALLY IMMEDIATE
, it is checked after each statement. This is the default. If the constraint is INITIALLY DEFERRED
, it is checked only at the end of the transaction. The constraint check time can be altered with the SET CONSTRAINTS command.
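A sketch of deferring a constraint within a transaction (illustrative table; the shifted values may collide mid-statement, which a non-deferrable unique constraint could reject):

```sql
CREATE TABLE pairs (
    id int UNIQUE DEFERRABLE INITIALLY IMMEDIATE
);

BEGIN;
SET CONSTRAINTS ALL DEFERRED;
UPDATE pairs SET id = id + 1;  -- transient duplicates tolerated
COMMIT;                        -- constraint checked here
```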
USING
method
This optional clause specifies the table access method to use to store the contents for the new table; the method needs to be an access method of type TABLE
. See Chapter 60 for more information. If this option is not specified, the default table access method is chosen for the new table. See default_table_access_method for more information.
WITH (
storage_parameter
[= value
] [, ... ] )
This clause specifies optional storage parameters for a table or index; see Storage Parameters for more information. For backward-compatibility the WITH
clause for a table can also include OIDS=FALSE
to specify that rows of the new table should not contain OIDs (object identifiers); OIDS=TRUE
is not supported anymore.
WITHOUT OIDS
This is backward-compatible syntax for declaring a table WITHOUT OIDS
; creating a table WITH OIDS
is not supported anymore.
ON COMMIT
The behavior of temporary tables at the end of a transaction block can be controlled using ON COMMIT
. The three options are:
PRESERVE ROWS
No special action is taken at the ends of transactions. This is the default behavior.
DELETE ROWS
All rows in the temporary table will be deleted at the end of each transaction block. Essentially, an automatic TRUNCATE is done at each commit. When used on a partitioned table, this is not cascaded to its partitions.
DROP
The temporary table will be dropped at the end of the current transaction block. When used on a partitioned table, this action drops its partitions and when used on tables with inheritance children, it drops the dependent children.
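For example, a temporary scratch table that is emptied at each commit:

```sql
CREATE TEMPORARY TABLE session_scratch (
    v int
) ON COMMIT DELETE ROWS;
```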
TABLESPACE
tablespace_name
The tablespace_name
is the name of the tablespace in which the new table is to be created. If not specified, default_tablespace is consulted, or temp_tablespaces if the table is temporary. For partitioned tables, since no storage is required for the table itself, the tablespace specified overrides default_tablespace
as the default tablespace to use for any newly created partitions when no other tablespace is explicitly specified.
USING INDEX TABLESPACE
tablespace_name
This clause allows selection of the tablespace in which the index associated with a UNIQUE
, PRIMARY KEY
, or EXCLUDE
constraint will be created. If not specified, default_tablespace is consulted, or temp_tablespaces if the table is temporary.
The WITH clause can specify storage parameters for tables, and for indexes associated with a UNIQUE, PRIMARY KEY, or EXCLUDE constraint. Storage parameters for indexes are documented in CREATE INDEX. The storage parameters currently available for tables are listed below. For many of these parameters, as shown, there is an additional parameter with the same name prefixed with toast., which controls the behavior of the table's secondary TOAST table, if any (see Section 68.2 for more information about TOAST). If a table parameter value is set and the equivalent toast. parameter is not, the TOAST table will use the table's parameter value. Specifying these parameters for partitioned tables is not supported, but you may specify them for individual leaf partitions.
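As an illustration (the table and the particular parameter choices are hypothetical), table-level and toast.-prefixed parameters can be combined in one WITH clause:

```sql
CREATE TABLE measurements (
    logdate  date,
    reading  numeric
) WITH (fillfactor = 70,
        autovacuum_enabled = false,
        toast.autovacuum_enabled = false);
```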
fillfactor (integer)
The fillfactor for a table is a percentage between 10 and 100. 100 (complete packing) is the default. When a smaller fillfactor is specified, INSERT operations pack table pages only to the indicated percentage; the remaining space on each page is reserved for updating rows on that page. This gives UPDATE a chance to place the updated copy of a row on the same page as the original, which is more efficient than placing it on a different page. For a table whose entries are never updated, complete packing is the best choice, but in heavily updated tables smaller fillfactors are appropriate. This parameter cannot be set for TOAST tables.
toast_tuple_target (integer)
The toast_tuple_target specifies the minimum tuple length required before we try to move long column values into TOAST tables, and is also the target length we try to reduce the length below once toasting begins. This only affects columns marked as External or Extended and applies only to new tuples; there is no effect on existing rows. By default this parameter is set to allow at least 4 tuples per block, which with the default block size is 2040 bytes. Valid values are between 128 bytes and (block size - header), by default 8160 bytes. Changing this value may not be useful for very short or very long rows. Note that the default setting is often close to optimal, and it is possible that setting this parameter could have negative effects in some cases. This parameter cannot be set for TOAST tables.
parallel_workers (integer)
This sets the number of workers that should be used to assist a parallel scan of this table. If not set, the system will determine a value based on the relation size. The actual number of workers chosen by the planner, or by utility statements that use parallel scans, may be less, for example due to the setting of max_worker_processes.
autovacuum_enabled, toast.autovacuum_enabled (boolean)
Enables or disables the autovacuum daemon for a particular table. If true, the autovacuum daemon will perform automatic VACUUM and/or ANALYZE operations on this table following the rules discussed in Section 24.1.6. If false, this table will not be autovacuumed, except to prevent transaction ID wraparound. See Section 24.1.5 for more about wraparound prevention. Note that the autovacuum daemon does not run at all (except to prevent transaction ID wraparound) if the autovacuum parameter is false; setting individual tables' storage parameters does not override that. Therefore there is seldom much point in explicitly setting this storage parameter to true, only to false.
vacuum_index_cleanup, toast.vacuum_index_cleanup (boolean)
Enables or disables index cleanup when VACUUM is run on this table. The default value is true. Disabling index cleanup can speed up VACUUM very significantly, but may also lead to severely bloated indexes if table modifications are frequent. The INDEX_CLEANUP parameter of VACUUM, if specified, overrides the value of this option.
vacuum_truncate, toast.vacuum_truncate (boolean)
Enables or disables vacuum to try to truncate off any empty pages at the end of this table. The default value is true. If true, VACUUM and autovacuum do the truncation and the disk space for the truncated pages is returned to the operating system. Note that the truncation requires an ACCESS EXCLUSIVE lock on the table. The TRUNCATE parameter of VACUUM, if specified, overrides the value of this option.
autovacuum_vacuum_threshold, toast.autovacuum_vacuum_threshold (integer)
Per-table value for autovacuum_vacuum_threshold parameter.
autovacuum_vacuum_scale_factor, toast.autovacuum_vacuum_scale_factor (floating point)
Per-table value for autovacuum_vacuum_scale_factor parameter.
autovacuum_analyze_threshold (integer)
Per-table value for autovacuum_analyze_threshold parameter.
autovacuum_analyze_scale_factor (floating point)
Per-table value for autovacuum_analyze_scale_factor parameter.
autovacuum_vacuum_cost_delay, toast.autovacuum_vacuum_cost_delay (floating point)
Per-table value for autovacuum_vacuum_cost_delay parameter.
autovacuum_vacuum_cost_limit, toast.autovacuum_vacuum_cost_limit (integer)
Per-table value for autovacuum_vacuum_cost_limit parameter.
autovacuum_freeze_min_age, toast.autovacuum_freeze_min_age (integer)
Per-table value for vacuum_freeze_min_age parameter. Note that autovacuum will ignore per-table autovacuum_freeze_min_age parameters that are larger than half the system-wide autovacuum_freeze_max_age setting.
autovacuum_freeze_max_age, toast.autovacuum_freeze_max_age (integer)
Per-table value for autovacuum_freeze_max_age parameter. Note that autovacuum will ignore per-table autovacuum_freeze_max_age parameters that are larger than the system-wide setting (it can only be set smaller).
autovacuum_freeze_table_age, toast.autovacuum_freeze_table_age (integer)
Per-table value for vacuum_freeze_table_age parameter.
autovacuum_multixact_freeze_min_age, toast.autovacuum_multixact_freeze_min_age (integer)
Per-table value for vacuum_multixact_freeze_min_age parameter. Note that autovacuum will ignore per-table autovacuum_multixact_freeze_min_age parameters that are larger than half the system-wide autovacuum_multixact_freeze_max_age setting.
autovacuum_multixact_freeze_max_age, toast.autovacuum_multixact_freeze_max_age (integer)
Per-table value for autovacuum_multixact_freeze_max_age parameter. Note that autovacuum will ignore per-table autovacuum_multixact_freeze_max_age parameters that are larger than the system-wide setting (it can only be set smaller).
autovacuum_multixact_freeze_table_age, toast.autovacuum_multixact_freeze_table_age (integer)
Per-table value for vacuum_multixact_freeze_table_age parameter.
log_autovacuum_min_duration, toast.log_autovacuum_min_duration (integer)
Per-table value for log_autovacuum_min_duration parameter.
user_catalog_table (boolean)
Declare the table as an additional catalog table for purposes of logical replication. See Section 48.6.2 for details. This parameter cannot be set for TOAST tables.
PostgreSQL automatically creates an index for each unique constraint and primary key constraint to enforce uniqueness. Thus, it is not necessary to create an index explicitly for primary key columns. (See CREATE INDEX for more information.)
Unique constraints and primary keys are not inherited in the current implementation. This makes the combination of inheritance and unique constraints rather dysfunctional.
A table cannot have more than 1600 columns. (In practice, the effective limit is usually lower because of tuple-length constraints.)
Create table films and table distributors:
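A sketch of the two statements (the column definitions are illustrative, and the sequence serial is assumed to exist):

```sql
CREATE TABLE films (
    code        char(5) CONSTRAINT firstkey PRIMARY KEY,
    title       varchar(40) NOT NULL,
    did         integer NOT NULL,
    date_prod   date,
    kind        varchar(10),
    len         interval hour to minute
);

CREATE TABLE distributors (
    did     integer PRIMARY KEY DEFAULT nextval('serial'),
    name    varchar(40) NOT NULL CHECK (name <> '')
);
```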
Create a table with a 2-dimensional array:
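For example (the table name is illustrative):

```sql
CREATE TABLE array_int (
    vector  int[][]
);
```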
Define a unique table constraint for the table films. Unique table constraints can be defined on one or more columns of the table:
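One possible form, naming the constraint production (columns are illustrative):

```sql
CREATE TABLE films (
    code        char(5),
    title       varchar(40),
    did         integer,
    date_prod   date,
    kind        varchar(10),
    len         interval hour to minute,
    CONSTRAINT production UNIQUE(date_prod)
);
```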
Define a check column constraint:
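For example (names and the check condition are illustrative):

```sql
CREATE TABLE distributors (
    did     integer CHECK (did > 100),
    name    varchar(40)
);
```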
Define a check table constraint:
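A table constraint may reference several columns (names are illustrative):

```sql
CREATE TABLE distributors (
    did     integer,
    name    varchar(40),
    CONSTRAINT con1 CHECK (did > 100 AND name <> '')
);
```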
Define a primary key table constraint for the table films:
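A sketch with a two-column key (column definitions are illustrative):

```sql
CREATE TABLE films (
    code        char(5),
    title       varchar(40),
    did         integer,
    date_prod   date,
    kind        varchar(10),
    len         interval hour to minute,
    CONSTRAINT code_title PRIMARY KEY(code, title)
);
```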
Define a primary key constraint for table distributors. The following two examples are equivalent, the first using the table constraint syntax, the second the column constraint syntax:
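The two equivalent forms might look like this (shown independently; each would be run on its own):

```sql
CREATE TABLE distributors (
    did     integer,
    name    varchar(40),
    PRIMARY KEY(did)
);

CREATE TABLE distributors (
    did     integer PRIMARY KEY,
    name    varchar(40)
);
```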
Assign a literal constant default value for the column name, arrange for the default value of column did to be generated by selecting the next value of a sequence object, and make the default value of modtime be the time at which the row is inserted:
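A sketch of such a statement (the default value 'Luso Films' and the sequence distributors_serial are illustrative; the sequence is assumed to exist):

```sql
CREATE TABLE distributors (
    name      varchar(40) DEFAULT 'Luso Films',
    did       integer DEFAULT nextval('distributors_serial'),
    modtime   timestamp DEFAULT current_timestamp
);
```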
Define two NOT NULL column constraints on the table distributors, one of which is explicitly given a name:
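For example (the constraint name no_null is illustrative):

```sql
CREATE TABLE distributors (
    did     integer CONSTRAINT no_null NOT NULL,
    name    varchar(40) NOT NULL
);
```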
Define a unique constraint for the name column:
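Using the column constraint syntax:

```sql
CREATE TABLE distributors (
    did     integer,
    name    varchar(40) UNIQUE
);
```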
The same, specified as a table constraint:
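Using the table constraint syntax:

```sql
CREATE TABLE distributors (
    did     integer,
    name    varchar(40),
    UNIQUE(name)
);
```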
Create the same table, specifying 70% fill factor for both the table and its unique index:
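A sketch combining a table-level WITH clause with index storage parameters on the unique constraint:

```sql
CREATE TABLE distributors (
    did     integer,
    name    varchar(40),
    UNIQUE(name) WITH (fillfactor = 70)
)
WITH (fillfactor = 70);
```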
Create table circles with an exclusion constraint that prevents any two circles from overlapping:
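The exclusion constraint uses a GiST index and the overlap operator &&:

```sql
CREATE TABLE circles (
    c circle,
    EXCLUDE USING gist (c WITH &&)
);
```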
Create table cinemas in tablespace diskvol1:
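For example (column definitions are illustrative; the tablespace diskvol1 is assumed to exist):

```sql
CREATE TABLE cinemas (
    id       serial,
    name     text,
    location text
) TABLESPACE diskvol1;
```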
Create a composite type and a typed table:
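A sketch of a composite type and a typed table built on it (type and column names are illustrative):

```sql
CREATE TYPE employee_type AS (name text, salary numeric);

CREATE TABLE employees OF employee_type (
    PRIMARY KEY (name),
    salary WITH OPTIONS DEFAULT 1000
);
```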
Create a range partitioned table:
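For example (table and column names are illustrative):

```sql
CREATE TABLE measurement (
    logdate    date NOT NULL,
    peaktemp   int,
    unitsales  int
) PARTITION BY RANGE (logdate);
```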
Create a range partitioned table with multiple columns in the partition key:
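Partition key columns may also be expressions, as in this sketch:

```sql
CREATE TABLE measurement_year_month (
    logdate    date NOT NULL,
    peaktemp   int,
    unitsales  int
) PARTITION BY RANGE (EXTRACT(YEAR FROM logdate), EXTRACT(MONTH FROM logdate));
```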
Create a list partitioned table:
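For example, partitioning cities by the first letter of their name (names are illustrative):

```sql
CREATE TABLE cities (
    city_id     bigserial NOT NULL,
    name        text NOT NULL,
    population  bigint
) PARTITION BY LIST (left(lower(name), 1));
```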
Create a hash partitioned table:
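For example (names are illustrative):

```sql
CREATE TABLE orders (
    order_id  bigint NOT NULL,
    cust_id   bigint NOT NULL,
    status    text
) PARTITION BY HASH (order_id);
```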
Create partition of a range partitioned table:
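Assuming a parent table such as the measurement sketch above, a monthly partition might look like this:

```sql
CREATE TABLE measurement_y2016m07
    PARTITION OF measurement (
    unitsales DEFAULT 0
) FOR VALUES FROM ('2016-07-01') TO ('2016-08-01');
```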
Create a few partitions of a range partitioned table with multiple columns in the partition key:
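Assuming a parent such as the measurement_year_month sketch above, bounds on multiple key columns are written as tuples, and MINVALUE can be used for an open lower bound:

```sql
CREATE TABLE measurement_ym_older
    PARTITION OF measurement_year_month
    FOR VALUES FROM (MINVALUE, MINVALUE) TO (2016, 11);

CREATE TABLE measurement_ym_y2016m11
    PARTITION OF measurement_year_month
    FOR VALUES FROM (2016, 11) TO (2016, 12);

CREATE TABLE measurement_ym_y2016m12
    PARTITION OF measurement_year_month
    FOR VALUES FROM (2016, 12) TO (2017, 1);
```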
Create partition of a list partitioned table:
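Assuming a parent such as the cities sketch above, a partition can also add its own constraints:

```sql
CREATE TABLE cities_ab
    PARTITION OF cities (
    CONSTRAINT city_id_nonzero CHECK (city_id != 0)
) FOR VALUES IN ('a', 'b');
```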
Create partition of a list partitioned table that is itself further partitioned and then add a partition to it:
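A sketch of a sub-partitioned partition, again assuming the cities parent from above:

```sql
CREATE TABLE cities_ab
    PARTITION OF cities (
    CONSTRAINT city_id_nonzero CHECK (city_id != 0)
) FOR VALUES IN ('a', 'b') PARTITION BY RANGE (population);

CREATE TABLE cities_ab_10000_to_100000
    PARTITION OF cities_ab FOR VALUES FROM (10000) TO (100000);
```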
Create partitions of a hash partitioned table:
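Assuming the orders parent sketched above, hash partitions are specified by modulus and remainder:

```sql
CREATE TABLE orders_p1 PARTITION OF orders
    FOR VALUES WITH (MODULUS 4, REMAINDER 0);
CREATE TABLE orders_p2 PARTITION OF orders
    FOR VALUES WITH (MODULUS 4, REMAINDER 1);
CREATE TABLE orders_p3 PARTITION OF orders
    FOR VALUES WITH (MODULUS 4, REMAINDER 2);
CREATE TABLE orders_p4 PARTITION OF orders
    FOR VALUES WITH (MODULUS 4, REMAINDER 3);
```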
Create a default partition:
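Assuming the cities parent sketched above, a default partition catches rows matching no other partition:

```sql
CREATE TABLE cities_partdef
    PARTITION OF cities DEFAULT;
```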
The CREATE TABLE command conforms to the SQL standard, with exceptions listed below.
Although the syntax of CREATE TEMPORARY TABLE resembles that of the SQL standard, the effect is not the same. In the standard, temporary tables are defined just once and automatically exist (starting with empty contents) in every session that needs them. PostgreSQL instead requires each session to issue its own CREATE TEMPORARY TABLE command for each temporary table to be used. This allows different sessions to use the same temporary table name for different purposes, whereas the standard's approach constrains all instances of a given temporary table name to have the same table structure.
The standard's definition of the behavior of temporary tables is widely ignored. PostgreSQL's behavior on this point is similar to that of several other SQL databases.
The SQL standard also distinguishes between global and local temporary tables, where a local temporary table has a separate set of contents for each SQL module within each session, though its definition is still shared across sessions. Since PostgreSQL does not support SQL modules, this distinction is not relevant in PostgreSQL.
For compatibility's sake, PostgreSQL will accept the GLOBAL and LOCAL keywords in a temporary table declaration, but they currently have no effect. Use of these keywords is discouraged, since future versions of PostgreSQL might adopt a more standard-compliant interpretation of their meaning.
The ON COMMIT clause for temporary tables also resembles the SQL standard, but has some differences. If the ON COMMIT clause is omitted, SQL specifies that the default behavior is ON COMMIT DELETE ROWS. However, the default behavior in PostgreSQL is ON COMMIT PRESERVE ROWS. The ON COMMIT DROP option does not exist in SQL.
When a UNIQUE or PRIMARY KEY constraint is not deferrable, PostgreSQL checks for uniqueness immediately whenever a row is inserted or modified. The SQL standard says that uniqueness should be enforced only at the end of the statement; this makes a difference when, for example, a single command updates multiple key values. To obtain standard-compliant behavior, declare the constraint as DEFERRABLE but not deferred (i.e., INITIALLY IMMEDIATE). Be aware that this can be significantly slower than immediate uniqueness checking.
The SQL standard says that CHECK column constraints can only refer to the column they apply to; only CHECK table constraints can refer to multiple columns. PostgreSQL does not enforce this restriction; it treats column and table check constraints alike.
EXCLUDE Constraint
The EXCLUDE constraint type is a PostgreSQL extension.
NULL “Constraint”
The NULL “constraint” (actually a non-constraint) is a PostgreSQL extension to the SQL standard that is included for compatibility with some other database systems (and for symmetry with the NOT NULL constraint). Since it is the default for any column, its presence is simply noise.
The SQL standard says that table and domain constraints must have names that are unique across the schema containing the table or domain. PostgreSQL is laxer: it only requires constraint names to be unique across the constraints attached to a particular table or domain. However, this extra freedom does not exist for index-based constraints (UNIQUE, PRIMARY KEY, and EXCLUDE constraints), because the associated index is named the same as the constraint, and index names must be unique across all relations within the same schema.
Currently, PostgreSQL does not record names for NOT NULL constraints at all, so they are not subject to the uniqueness restriction. This might change in a future release.
Multiple inheritance via the INHERITS clause is a PostgreSQL language extension. SQL:1999 and later define single inheritance using a different syntax and different semantics. SQL:1999-style inheritance is not yet supported by PostgreSQL.
PostgreSQL allows a table of no columns to be created (for example, CREATE TABLE foo();). This is an extension from the SQL standard, which does not allow zero-column tables. Zero-column tables are not in themselves very useful, but disallowing them creates odd special cases for ALTER TABLE DROP COLUMN, so it seems cleaner to ignore this spec restriction.
PostgreSQL allows a table to have more than one identity column. The standard specifies that a table can have at most one identity column. This is relaxed mainly to give more flexibility for doing schema changes or migrations. Note that the INSERT command supports only one override clause that applies to the entire statement, so having multiple identity columns with different behaviors is not well supported.
The option STORED is not standard but is also used by other SQL implementations. The SQL standard does not specify the storage of generated columns.
LIKE Clause
While a LIKE clause exists in the SQL standard, many of the options that PostgreSQL accepts for it are not in the standard, and some of the standard's options are not implemented by PostgreSQL.
WITH Clause
The WITH clause is a PostgreSQL extension; storage parameters are not in the standard.
The PostgreSQL concept of tablespaces is not part of the standard. Hence, the clauses TABLESPACE and USING INDEX TABLESPACE are extensions.
Typed tables implement a subset of the SQL standard. According to the standard, a typed table has columns corresponding to the underlying composite type as well as one other column that is the “self-referencing column”. PostgreSQL does not support self-referencing columns explicitly.
PARTITION BY Clause
The PARTITION BY clause is a PostgreSQL extension.
PARTITION OF Clause
The PARTITION OF clause is a PostgreSQL extension.
ALTER TABLE, DROP TABLE, CREATE TABLE AS, CREATE TABLESPACE, CREATE TYPE
DROP TABLESPACE — remove a tablespace
DROP TABLESPACE removes a tablespace from the system.
A tablespace can only be dropped by its owner or a superuser. The tablespace must be empty of all database objects before it can be dropped. It is possible that objects in other databases still reside in the tablespace even if no objects in the current database are using it. Also, if the tablespace is listed in the temp_tablespaces setting of any active session, the DROP might fail due to temporary files residing in the tablespace.
IF EXISTS
Do not throw an error if the tablespace does not exist. A notice is issued in this case.
name
The name of a tablespace.
DROP TABLESPACE cannot be executed inside a transaction block.
To remove tablespace mystuff from the system:
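The statement is simply:

```sql
DROP TABLESPACE mystuff;
```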
DROP TABLESPACE is a PostgreSQL extension.
PREPARE TRANSACTION — prepare the current transaction for two-phase commit
PREPARE TRANSACTION prepares the current transaction for two-phase commit. After this command, the transaction is no longer associated with the current session; instead, its state is fully stored on disk, and there is a very high probability that it can be committed successfully, even if a database crash occurs before the commit is requested.
Once prepared, a transaction can later be committed or rolled back with COMMIT PREPARED or ROLLBACK PREPARED, respectively. Those commands can be issued from any session, not only the one that executed the original transaction.
From the point of view of the issuing session, PREPARE TRANSACTION is not unlike a ROLLBACK command: after executing it, there is no active current transaction, and the effects of the prepared transaction are no longer visible. (The effects will become visible again if the transaction is committed.)
If the PREPARE TRANSACTION command fails for any reason, it becomes a ROLLBACK: the current transaction is canceled.
transaction_id
An arbitrary identifier that later identifies this transaction for COMMIT PREPARED or ROLLBACK PREPARED. The identifier must be written as a string literal, and must be less than 200 bytes long. It must not be the same as the identifier used for any currently prepared transaction.
PREPARE TRANSACTION is not intended for use in applications or interactive sessions. Its purpose is to allow an external transaction manager to perform atomic global transactions across multiple databases or other transactional resources. Unless you're writing a transaction manager, you probably shouldn't be using PREPARE TRANSACTION.
This command must be used inside a transaction block. Use BEGIN to start one.
It is not currently allowed to PREPARE a transaction that has executed any operations involving temporary tables or the session's temporary namespace, created any cursors WITH HOLD, or executed LISTEN, UNLISTEN, or NOTIFY. Those features are too tightly tied to the current session to be useful in a transaction to be prepared.
If the transaction modified any run-time parameters with SET (without the LOCAL option), those effects persist after PREPARE TRANSACTION, and will not be affected by any later COMMIT PREPARED or ROLLBACK PREPARED. Thus, in this one respect PREPARE TRANSACTION acts more like COMMIT than ROLLBACK.
Prepare the current transaction for two-phase commit, using foobar as the transaction identifier:
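The statement is:

```sql
PREPARE TRANSACTION 'foobar';
```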
PREPARE TRANSACTION is a PostgreSQL extension. It is intended for use by external transaction management systems, some of which are covered by standards (such as X/Open XA), but the SQL side of those systems is not standardized.
All currently available prepared transactions are listed in the pg_prepared_xacts system view.
It is unwise to leave transactions in the prepared state for a long time. This will interfere with the ability of VACUUM to reclaim storage, and in extreme cases could cause the database to shut down to prevent transaction ID wraparound (see Section 24.1.5). Keep in mind also that the transaction continues to hold whatever locks it held. The intended usage of the feature is that a prepared transaction will normally be committed or rolled back as soon as an external transaction manager has verified that other databases are also prepared to commit.
If you have not set up an external transaction manager to track prepared transactions and ensure they get closed out promptly, it is best to keep the prepared-transaction feature disabled by setting max_prepared_transactions to zero. This will prevent accidental creation of prepared transactions that might then be forgotten and eventually cause problems.