The SQL standard states that “Within the definition of a ‘datetime literal’, the ‘datetime values’ are constrained by the natural rules for dates and times according to the Gregorian calendar”. PostgreSQL follows the SQL standard's lead by counting dates exclusively in the Gregorian calendar, even for years before that calendar was in use. This rule is known as the proleptic Gregorian calendar.
The Julian calendar was introduced by Julius Caesar in 45 BC. It was in common use in the Western world until the year 1582, when countries started changing to the Gregorian calendar. In the Julian calendar, the tropical year is approximated as 365 1/4 days = 365.25 days. This gives an error of about 1 day in 128 years.
The accumulating calendar error prompted Pope Gregory XIII to reform the calendar in accordance with instructions from the Council of Trent. In the Gregorian calendar, the tropical year is approximated as 365 + 97 / 400 days = 365.2425 days. Thus it takes approximately 3300 years for the tropical year to shift one day with respect to the Gregorian calendar.
The approximation 365+97/400 is achieved by having 97 leap years every 400 years, using the following rules:
Every year divisible by 4 is a leap year.
However, every year divisible by 100 is not a leap year.
However, every year divisible by 400 is a leap year after all.
So, 1700, 1800, 1900, 2100, and 2200 are not leap years. But 1600, 2000, and 2400 are leap years. By contrast, in the older Julian calendar all years divisible by 4 are leap years.
The papal bull of February 1582 decreed that 10 days should be dropped from October 1582 so that 15 October should follow immediately after 4 October. This was observed in Italy, Poland, Portugal, and Spain. Other Catholic countries followed shortly after, but Protestant countries were reluctant to change, and the Greek Orthodox countries didn't change until the start of the 20th century. The reform was observed by Great Britain and its dominions (including what is now the USA) in 1752. Thus 2 September 1752 was followed by 14 September 1752. This is why Unix systems have the cal program produce the following:
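A rough reconstruction of that output (the exact formatting varies between systems):

```
$ cal 9 1752
   September 1752
 S  M Tu  W Th  F  S
       1  2 14 15 16
17 18 19 20 21 22 23
24 25 26 27 28 29 30
```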
But, of course, this calendar is only valid for Great Britain and dominions, not other places. Since it would be difficult and confusing to try to track the actual calendars that were in use in various places at various times, PostgreSQL does not try, but rather follows the Gregorian calendar rules for all dates, even though this method is not historically accurate.
Different calendars have been developed in various parts of the world, many predating the Gregorian system. For example, the beginnings of the Chinese calendar can be traced back to the 14th century BC. Legend has it that the Emperor Huangdi invented that calendar in 2637 BC. The People's Republic of China uses the Gregorian calendar for civil purposes. The Chinese calendar is used for determining festivals.
The Julian Date system is another type of calendar, unrelated to the Julian calendar though it is confusingly named similarly to that calendar. The Julian Date system was invented by the French scholar Joseph Justus Scaliger (1540-1609) and probably takes its name from Scaliger's father, the Italian scholar Julius Caesar Scaliger (1484-1558). In the Julian Date system, each day has a sequential number, starting from JD 0 (which is sometimes called the Julian Date). JD 0 corresponds to 1 January 4713 BC in the Julian calendar, or 24 November 4714 BC in the Gregorian calendar. Julian Date counting is most often used by astronomers for labeling their nightly observations, and therefore a date runs from noon UTC to the next noon UTC, rather than from midnight to midnight: JD 0 designates the 24 hours from noon UTC on 24 November 4714 BC to noon UTC on 25 November 4714 BC.
Although PostgreSQL supports Julian Date notation for input and output of dates (and also uses Julian dates for some internal datetime calculations), it does not observe the nicety of having dates run from noon to noon. PostgreSQL treats a Julian Date as running from midnight to midnight.
Since timezone abbreviations are not well standardized, PostgreSQL provides a means to customize the set of abbreviations accepted by the server. The timezone_abbreviations run-time parameter determines the active set of abbreviations. While this parameter can be altered by any database user, the possible values for it are under the control of the database administrator; they are in fact names of configuration files stored in .../share/timezonesets/ of the installation directory. By adding or altering files in that directory, the administrator can set local policy for timezone abbreviations.
timezone_abbreviations can be set to any file name found in .../share/timezonesets/, as long as the file's name is entirely alphabetic. (The prohibition against non-alphabetic characters in timezone_abbreviations prevents reading files outside the intended directory, as well as reading editor backup files and other extraneous files.)
A timezone abbreviation file can contain blank lines and comments beginning with #. Non-comment lines must have one of the following formats:
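A sketch of those formats, reconstructed from the field descriptions that follow:

```
zone_abbreviation offset
zone_abbreviation offset D
zone_abbreviation time_zone_name
@INCLUDE file_name
@OVERRIDE
```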
A zone_abbreviation is simply the abbreviation being defined. An offset is an integer giving the equivalent offset in seconds from UTC, positive being east from Greenwich and negative being west. For example, -18000 would be five hours west of Greenwich, or North American east coast standard time. D indicates that the zone name represents local daylight-savings time rather than standard time.
Alternatively, a time_zone_name can be given, referencing a zone name defined in the IANA timezone database. The zone's definition is consulted to see whether the abbreviation is or has been in use in that zone, and if so, the appropriate meaning is used; that is, the meaning that was in use at the timestamp whose value is being determined, or the meaning in use immediately before that if it was not current at that time, or the oldest meaning if it was used only after that time. This behavior is essential for dealing with abbreviations whose meaning has varied historically. It is also allowed to define an abbreviation in terms of a zone name in which that abbreviation does not appear; using the abbreviation is then simply equivalent to writing out the zone name.
Tip: A plain integer offset is preferred when defining an abbreviation whose offset from UTC has never changed, since such abbreviations are much cheaper to process than those that require consulting a time zone definition.
The @INCLUDE syntax allows inclusion of another file in the .../share/timezonesets/ directory. Inclusion can be nested, to a limited depth.
The @OVERRIDE syntax indicates that subsequent entries in the file can override previous entries (typically, entries obtained from included files). Without this, conflicting definitions of the same timezone abbreviation are treated as an error.
In an unmodified installation, the file Default contains all the non-conflicting time zone abbreviations for most of the world. Additional files Australia and India are provided for those regions: these files first include the Default set and then add or modify abbreviations as needed.
For reference purposes, a standard installation also contains files Africa.txt, America.txt, etc., containing information about every time zone abbreviation known to be in use according to the IANA timezone database. The zone name definitions found in these files can be copied and pasted into a custom configuration file as needed. Note that because of the dot embedded in their names, these files cannot be referenced directly as timezone_abbreviations settings.
If an error occurs while reading the time zone abbreviation set, no new value is applied and the old set is kept. If the error occurs while starting the database, startup fails.
Time zone abbreviations defined in a configuration file override non-timezone meanings built into PostgreSQL. For example, the Australia configuration file defines SAT (South Australian Standard Time). When this file is active, SAT will not be recognized as an abbreviation for Saturday.
If you modify files in .../share/timezonesets/, it is up to you to make backups; a normal database dump will not include this directory.
Table C.1 lists all tokens that are key words in the SQL standard and in PostgreSQL 11devel. Background information can be found in Section 4.1.1. (For space reasons, only the latest two versions of the SQL standard, and SQL-92 for historical comparison, are included. The differences between those and the other intermediate standard versions are small.)
SQL distinguishes between reserved and non-reserved key words. According to the standard, reserved key words are the only real key words; they are never allowed as identifiers. Non-reserved key words only have a special meaning in particular contexts and can be used as identifiers in other contexts. Most non-reserved key words are actually the names of built-in tables and functions specified by SQL. The concept of non-reserved key words essentially only exists to declare that some predefined meaning is attached to a word in some contexts.
In the PostgreSQL parser life is a bit more complicated. There are several different classes of tokens ranging from those that can never be used as an identifier to those that have absolutely no special status in the parser as compared to an ordinary identifier. (The latter is usually the case for functions specified by SQL.) Even reserved key words are not completely reserved in PostgreSQL, but can be used as column labels (for example, SELECT 55 AS CHECK, even though CHECK is a reserved key word).
In Table C.1 in the column for PostgreSQL we classify as “non-reserved” those key words that are explicitly known to the parser but are allowed as column or table names. Some key words that are otherwise non-reserved cannot be used as function or data type names and are marked accordingly. (Most of these words represent built-in functions or data types with special syntax. The function or type is still available but it cannot be redefined by the user.) Labeled “reserved” are those tokens that are not allowed as column or table names. Some reserved key words are allowable as names for functions or data types; this is also shown in the table. If not so marked, a reserved key word is only allowed as an “AS” column label name.
As a general rule, if you get spurious parser errors for commands that contain any of the listed key words as an identifier you should try to quote the identifier to see if the problem goes away.
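For instance, a small sketch (the table name is deliberately a reserved key word):

```sql
-- Fails: ALL is a reserved key word, so it cannot be used unquoted as a table name.
-- CREATE TABLE all (x int);

-- Works: double-quoting makes it an ordinary identifier.
CREATE TABLE "all" (x int);
SELECT x FROM "all";
```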
It is important to understand before studying Table C.1 that the fact that a key word is not reserved in PostgreSQL does not mean that the feature related to the word is not implemented. Conversely, the presence of a key word does not indicate the existence of a feature.
Key Word
PostgreSQL
SQL:2011
SQL:2008
SQL-92
A
non-reserved
non-reserved
ABORT
non-reserved
ABS
reserved
reserved
ABSENT
non-reserved
non-reserved
ABSOLUTE
non-reserved
non-reserved
non-reserved
reserved
ACCESS
non-reserved
ACCORDING
non-reserved
non-reserved
ACTION
non-reserved
non-reserved
non-reserved
reserved
ADA
non-reserved
non-reserved
non-reserved
ADD
non-reserved
non-reserved
non-reserved
reserved
ADMIN
non-reserved
non-reserved
non-reserved
AFTER
non-reserved
non-reserved
non-reserved
AGGREGATE
non-reserved
ALL
reserved
reserved
reserved
reserved
ALLOCATE
reserved
reserved
reserved
ALSO
non-reserved
ALTER
non-reserved
reserved
reserved
reserved
ALWAYS
non-reserved
non-reserved
non-reserved
ANALYSE
reserved
ANALYZE
reserved
AND
reserved
reserved
reserved
reserved
ANY
reserved
reserved
reserved
reserved
ARE
reserved
reserved
reserved
ARRAY
reserved
reserved
reserved
ARRAY_AGG
reserved
reserved
ARRAY_MAX_CARDINALITY
reserved
AS
reserved
reserved
reserved
reserved
ASC
reserved
non-reserved
non-reserved
reserved
ASENSITIVE
reserved
reserved
ASSERTION
non-reserved
non-reserved
non-reserved
reserved
ASSIGNMENT
non-reserved
non-reserved
non-reserved
ASYMMETRIC
reserved
reserved
reserved
AT
non-reserved
reserved
reserved
reserved
ATOMIC
reserved
reserved
ATTACH
non-reserved
ATTRIBUTE
non-reserved
non-reserved
non-reserved
ATTRIBUTES
non-reserved
non-reserved
AUTHORIZATION
reserved (can be function or type)
reserved
reserved
reserved
AVG
reserved
reserved
reserved
BACKWARD
non-reserved
BASE64
non-reserved
non-reserved
BEFORE
non-reserved
non-reserved
non-reserved
BEGIN
non-reserved
reserved
reserved
reserved
BEGIN_FRAME
reserved
BEGIN_PARTITION
reserved
BERNOULLI
non-reserved
non-reserved
BETWEEN
non-reserved (cannot be function or type)
reserved
reserved
reserved
BIGINT
non-reserved (cannot be function or type)
reserved
reserved
BINARY
reserved (can be function or type)
reserved
reserved
BIT
non-reserved (cannot be function or type)
reserved
BIT_LENGTH
reserved
BLOB
reserved
reserved
BLOCKED
non-reserved
non-reserved
BOM
non-reserved
non-reserved
BOOLEAN
non-reserved (cannot be function or type)
reserved
reserved
BOTH
reserved
reserved
reserved
reserved
BREADTH
non-reserved
non-reserved
BY
non-reserved
reserved
reserved
reserved
C
non-reserved
non-reserved
non-reserved
CACHE
non-reserved
CALL
reserved
reserved
CALLED
non-reserved
reserved
reserved
CARDINALITY
reserved
reserved
CASCADE
non-reserved
non-reserved
non-reserved
reserved
CASCADED
non-reserved
reserved
reserved
reserved
CASE
reserved
reserved
reserved
reserved
CAST
reserved
reserved
reserved
reserved
CATALOG
non-reserved
non-reserved
non-reserved
reserved
CATALOG_NAME
non-reserved
non-reserved
non-reserved
CEIL
reserved
reserved
CEILING
reserved
reserved
CHAIN
non-reserved
non-reserved
non-reserved
CHAR
non-reserved (cannot be function or type)
reserved
reserved
reserved
CHARACTER
non-reserved (cannot be function or type)
reserved
reserved
reserved
CHARACTERISTICS
non-reserved
non-reserved
non-reserved
CHARACTERS
non-reserved
non-reserved
CHARACTER_LENGTH
reserved
reserved
reserved
CHARACTER_SET_CATALOG
non-reserved
non-reserved
non-reserved
CHARACTER_SET_NAME
non-reserved
non-reserved
non-reserved
CHARACTER_SET_SCHEMA
non-reserved
non-reserved
non-reserved
CHAR_LENGTH
reserved
reserved
reserved
CHECK
reserved
reserved
reserved
reserved
CHECKPOINT
non-reserved
CLASS
non-reserved
CLASS_ORIGIN
non-reserved
non-reserved
non-reserved
CLOB
reserved
reserved
CLOSE
non-reserved
reserved
reserved
reserved
CLUSTER
non-reserved
COALESCE
non-reserved (cannot be function or type)
reserved
reserved
reserved
COBOL
non-reserved
non-reserved
non-reserved
COLLATE
reserved
reserved
reserved
reserved
COLLATION
reserved (can be function or type)
non-reserved
non-reserved
reserved
COLLATION_CATALOG
non-reserved
non-reserved
non-reserved
COLLATION_NAME
non-reserved
non-reserved
non-reserved
COLLATION_SCHEMA
non-reserved
non-reserved
non-reserved
COLLECT
reserved
reserved
COLUMN
reserved
reserved
reserved
reserved
COLUMNS
non-reserved
non-reserved
non-reserved
COLUMN_NAME
non-reserved
non-reserved
non-reserved
COMMAND_FUNCTION
non-reserved
non-reserved
non-reserved
COMMAND_FUNCTION_CODE
non-reserved
non-reserved
COMMENT
non-reserved
COMMENTS
non-reserved
COMMIT
non-reserved
reserved
reserved
reserved
COMMITTED
non-reserved
non-reserved
non-reserved
non-reserved
CONCURRENTLY
reserved (can be function or type)
CONDITION
reserved
reserved
CONDITION_NUMBER
non-reserved
non-reserved
non-reserved
CONFIGURATION
non-reserved
CONFLICT
non-reserved
CONNECT
reserved
reserved
reserved
CONNECTION
non-reserved
non-reserved
non-reserved
reserved
CONNECTION_NAME
non-reserved
non-reserved
non-reserved
CONSTRAINT
reserved
reserved
reserved
reserved
CONSTRAINTS
non-reserved
non-reserved
non-reserved
reserved
CONSTRAINT_CATALOG
non-reserved
non-reserved
non-reserved
CONSTRAINT_NAME
non-reserved
non-reserved
non-reserved
CONSTRAINT_SCHEMA
non-reserved
non-reserved
non-reserved
CONSTRUCTOR
non-reserved
non-reserved
CONTAINS
reserved
non-reserved
CONTENT
non-reserved
non-reserved
non-reserved
CONTINUE
non-reserved
non-reserved
non-reserved
reserved
CONTROL
non-reserved
non-reserved
CONVERSION
non-reserved
CONVERT
reserved
reserved
reserved
COPY
non-reserved
CORR
reserved
reserved
CORRESPONDING
reserved
reserved
reserved
COST
non-reserved
COUNT
reserved
reserved
reserved
COVAR_POP
reserved
reserved
COVAR_SAMP
reserved
reserved
CREATE
reserved
reserved
reserved
reserved
CROSS
reserved (can be function or type)
reserved
reserved
reserved
CSV
non-reserved
CUBE
non-reserved
reserved
reserved
CUME_DIST
reserved
reserved
CURRENT
non-reserved
reserved
reserved
reserved
CURRENT_CATALOG
reserved
reserved
reserved
CURRENT_DATE
reserved
reserved
reserved
reserved
CURRENT_DEFAULT_TRANSFORM_GROUP
reserved
reserved
CURRENT_PATH
reserved
reserved
CURRENT_ROLE
reserved
reserved
reserved
CURRENT_ROW
reserved
CURRENT_SCHEMA
reserved (can be function or type)
reserved
reserved
CURRENT_TIME
reserved
reserved
reserved
reserved
CURRENT_TIMESTAMP
reserved
reserved
reserved
reserved
CURRENT_TRANSFORM_GROUP_FOR_TYPE
reserved
reserved
CURRENT_USER
reserved
reserved
reserved
reserved
CURSOR
non-reserved
reserved
reserved
reserved
CURSOR_NAME
non-reserved
non-reserved
non-reserved
CYCLE
non-reserved
reserved
reserved
DATA
non-reserved
non-reserved
non-reserved
non-reserved
Date/time input is interpreted using the following procedure (all delimiters are ASCII characters):
Break the input string into tokens such as strings, times, time zones, and numbers.
If a numeric token contains a colon (:), it is a time string, and everything it contains is part of the time.
If a numeric token contains a dash (-), a slash (/), or two or more dots (.), it is a date string, possibly containing a text month name. However, if a date delimiter has already been seen, the token is instead interpreted as a time zone name (for example, Asia/Taipei).
If the token consists entirely of digits, it may be an ISO 8601 concatenated date (for example, 19990113 for 13 January 1999) or time (for example, 141516 for 14:15:16).
If the token starts with a plus (+) or minus (-), it is a numeric time zone or a special field.
If the token is an alphabetic string, match it using the following rules:
Do a binary-search table lookup for the token as a time zone abbreviation.
If not found, search for special words (such as today), days of the week (such as Thursday), month names (such as January), or noise words (such as at, on).
If still not found, return an error.
If the token is made up of digits, decide as follows:
If there are eight or six digits and no other date information has been read yet, it is interpreted as a concatenated date (for example, 19990118 or 990118), in YYYYMMDD or YYMMDD format.
If it is three digits and a year has already been read, it is interpreted as the day of the year.
If it is four or six digits and a year has already been read, it is interpreted as a time, in HHMM or HHMMSS format.
If it is three or more digits and no date fields have been read yet, it is interpreted as the year. (This forces yy-mm-dd ordering for the remaining date fields.)
Otherwise, the date field ordering follows the DateStyle setting: mm-dd-yy, dd-mm-yy, or yy-mm-dd. An error is returned if a month or day name cannot be matched to a valid word.
If BC is specified, the value stored internally is the negated year plus one. (There is no year 0 in the Gregorian calendar, so numerically 1 BC is stored as year 0.)
Two-digit years without BC are automatically expanded to four digits: if the value is less than 70, 2000 is added; otherwise 1900 is added.
To specify Gregorian years AD 1 through 99, pad the year with zeros to four digits (for example, 0099 is AD 99).
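A brief SQL sketch of these rules (the comments describe how each literal is interpreted):

```sql
SELECT date '19990118';       -- eight digits: concatenated date YYYYMMDD, i.e. 1999-01-18
SELECT date '990118';         -- six digits: YYMMDD; year 99 >= 70, so 1900 is added: 1999-01-18
SELECT date '0099-01-18';     -- four-digit year with leading zeros: AD 99
SELECT date '0099-01-18 BC';  -- BC date; stored internally as year -98 (negated year plus one)
```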
PostgreSQL has built-in support for flexible date/time input. Date and time input strings are broken into fields in a predefined way, and each field may be interpreted as a number, ignored, or rejected. The parser also has built-in tables of common textual tokens, such as month names, days of the week, and time zone names.
This section describes how date/time input is interpreted, step by step, and also lists the commonly recognized text tokens.
This section attempts to outline to what extent PostgreSQL conforms to the current SQL standard. The following information is not a full statement of conformance, but it presents the main topics in as much detail as is both reasonable and useful for users.
The formal name of the SQL standard is ISO/IEC 9075 “Database Language SQL”. A revised version of the standard is released from time to time; the most recent update appearing in 2011. The 2011 version is referred to as ISO/IEC 9075:2011, or simply as SQL:2011. The versions prior to that were SQL:2008, SQL:2003, SQL:1999, and SQL-92. Each version replaces the previous one, so claims of conformance to earlier versions have no official merit. PostgreSQL development aims for conformance with the latest official version of the standard where such conformance does not contradict traditional features or common sense. Many of the features required by the SQL standard are supported, though sometimes with slightly differing syntax or function. Further moves towards conformance can be expected over time.
SQL-92 defined three feature sets for conformance: Entry, Intermediate, and Full. Most database management systems claiming SQL standard conformance were conforming at only the Entry level, since the entire set of features in the Intermediate and Full levels was either too voluminous or in conflict with legacy behaviors.
Starting with SQL:1999, the SQL standard defines a large set of individual features rather than the ineffectively broad three levels found in SQL-92. A large subset of these features represents the “Core” features, which every conforming SQL implementation must supply. The rest of the features are purely optional. Some optional features are grouped together to form “packages”, which SQL implementations can claim conformance to, thus claiming conformance to particular groups of features.
The standard versions beginning with SQL:2003 are also split into a number of parts. Each is known by a shorthand name. Note that these parts are not consecutively numbered.
ISO/IEC 9075-1 Framework (SQL/Framework)
ISO/IEC 9075-2 Foundation (SQL/Foundation)
ISO/IEC 9075-3 Call Level Interface (SQL/CLI)
ISO/IEC 9075-4 Persistent Stored Modules (SQL/PSM)
ISO/IEC 9075-9 Management of External Data (SQL/MED)
ISO/IEC 9075-10 Object Language Bindings (SQL/OLB)
ISO/IEC 9075-11 Information and Definition Schemas (SQL/Schemata)
ISO/IEC 9075-13 Routines and Types using the Java Language (SQL/JRT)
ISO/IEC 9075-14 XML-related specifications (SQL/XML)
The PostgreSQL core covers parts 1, 2, 9, 11, and 14. Part 3 is covered by the ODBC driver, and part 13 is covered by the PL/Java plug-in, but exact conformance is currently not being verified for these components. There are currently no implementations of parts 4 and 10 for PostgreSQL.
PostgreSQL supports most of the major features of SQL:2011. Out of 179 mandatory features required for full Core conformance, PostgreSQL conforms to at least 160. In addition, there is a long list of supported optional features. It might be worth noting that at the time of writing, no current version of any database management system claims full conformance to Core SQL:2011.
In the following two sections, we provide a list of those features that PostgreSQL supports, followed by a list of the features defined in SQL:2011 which are not yet supported in PostgreSQL. Both of these lists are approximate: There might be minor details that are nonconforming for a feature that is listed as supported, and large parts of an unsupported feature might in fact be implemented. The main body of the documentation always contains the most accurate information about what does and does not work.
Feature codes containing a hyphen are subfeatures. Therefore, if a particular subfeature is not supported, the main feature is listed as unsupported even if some other subfeatures are supported.
Table B.1. Month Names
Month
Abbreviations
January
Jan
February
Feb
March
Mar
April
Apr
May
June
Jun
July
Jul
August
Aug
September
Sep, Sept
October
Oct
November
Nov
December
Dec
Table B.2. Day of the Week Names
Day
Abbreviations
Sunday
Sun
Monday
Mon
Tuesday
Tue, Tues
Wednesday
Wed, Weds
Thursday
Thu, Thur, Thurs
Friday
Fri
Saturday
Sat
Table B.3. Date/Time Field Modifiers
Modifier
Description
AM
time before 12:00
AT
ignored
JULIAN, JD, J
next field is Julian Date
ON
ignored
PM
time after 12:00
T
next field is time
Release notes are not translated here; please refer to the official manual page: https://www.postgresql.org/docs/devel/static/release.html
The release notes contain the significant changes in each PostgreSQL release, with major features and migration issues listed at the top. The release notes do not contain changes that affect only a few users or changes that are internal and therefore not user-visible. For example, the optimizer is improved in almost every release, but the improvements are usually observed by users as simply faster queries.
A complete list of changes for each release can be obtained by viewing the Git logs for each release. The pgsql-committers
email list records all source code changes as well. There is also a web interface that shows changes to specific files.
The name appearing next to each item represents the major developer for that item. Of course all changes involve community discussion and patch review, so each item is truly a community effort.
All messages emitted by the PostgreSQL server are assigned five-character error codes that follow the SQL standard's conventions for SQLSTATE codes. Applications should base their behavior on the error code rather than on the textual error message. The error codes change only slightly across PostgreSQL releases, and their definitions are not affected by the localization of message text. Note that some of the error codes are defined by the SQL standard, while others have been defined additionally to cover situations the standard does not address.
By convention, the first two characters of an error code denote a class of errors, and the last three characters indicate a specific condition within that class. Thus, an application that does not recognize a particular error code can still infer roughly what happened from the error class.
Table A.1 below lists all the error codes defined in PostgreSQL 10. (Some are not actually used at present, but are still defined per the SQL standard.) Each error class has a "standard" error condition whose last three characters are 000; it is used only for error conditions within the class that do not have a more specific code assigned.
The name shown in the "Condition Name" column can be used in PL/pgSQL. Condition names can be written in either upper or lower case. (Note that PL/pgSQL does not support the warning classes 00, 01, and 02 as condition names.)
For some errors, the name of the database object involved (table, table column, data type, or constraint) is returned. For example, when a unique_violation error occurs, the name of the violated constraint is returned. Such names are supplied in separate fields of the error report, so applications need not try to extract them from the human-readable message text. As of PostgreSQL 9.3, complete coverage for this feature exists only for errors in SQLSTATE class 23 (integrity constraint violation), but this is likely to be expanded in the future.
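As a brief sketch of how these condition names can be used in PL/pgSQL exception handling (the accounts table is created here purely for illustration):

```sql
CREATE TABLE accounts (id int PRIMARY KEY);

DO $$
BEGIN
    INSERT INTO accounts (id) VALUES (1);
    INSERT INTO accounts (id) VALUES (1);  -- violates the primary key constraint
EXCEPTION
    WHEN unique_violation THEN             -- condition name for SQLSTATE 23505
        RAISE NOTICE 'duplicate id, ignoring';
END;
$$;
```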
Error Code
Condition Name
Class 00 — Successful Completion
00000
successful_completion
Class 01 — Warning
01000
warning
0100C
dynamic_result_sets_returned
01008
implicit_zero_bit_padding
01003
null_value_eliminated_in_set_function
01007
privilege_not_granted
01006
privilege_not_revoked
01004
string_data_right_truncation
01P01
deprecated_feature
Class 02 — No Data (this is also a warning class per the SQL standard)
02000
no_data
02001
no_additional_dynamic_result_sets_returned
Class 03 — SQL Statement Not Yet Complete
03000
sql_statement_not_yet_complete
Class 08 — Connection Exception
08000
connection_exception
08003
connection_does_not_exist
08006
connection_failure
08001
sqlclient_unable_to_establish_sqlconnection
08004
sqlserver_rejected_establishment_of_sqlconnection
08007
transaction_resolution_unknown
08P01
protocol_violation
Class 09 — Triggered Action Exception
09000
triggered_action_exception
Class 0A — Feature Not Supported
0A000
feature_not_supported
Class 0B — Invalid Transaction Initiation
0B000
invalid_transaction_initiation
Class 0F — Locator Exception
0F000
locator_exception
0F001
invalid_locator_specification
Class 0L — Invalid Grantor
0L000
invalid_grantor
0LP01
invalid_grant_operation
Class 0P — Invalid Role Specification
0P000
invalid_role_specification
Class 0Z — Diagnostics Exception
0Z000
diagnostics_exception
0Z002
stacked_diagnostics_accessed_without_active_handler
Class 20 — Case Not Found
20000
case_not_found
Class 21 — Cardinality Violation
21000
cardinality_violation
Class 22 — Data Exception
22000
data_exception
2202E
array_subscript_error
22021
character_not_in_repertoire
22008
datetime_field_overflow
22012
division_by_zero
22005
error_in_assignment
2200B
escape_character_conflict
22022
indicator_overflow
22015
interval_field_overflow
2201E
invalid_argument_for_logarithm
22014
invalid_argument_for_ntile_function
22016
invalid_argument_for_nth_value_function
2201F
invalid_argument_for_power_function
2201G
invalid_argument_for_width_bucket_function
22018
invalid_character_value_for_cast
22007
invalid_datetime_format
22019
invalid_escape_character
2200D
invalid_escape_octet
22025
invalid_escape_sequence
22P06
nonstandard_use_of_escape_character
22010
invalid_indicator_parameter_value
22023
invalid_parameter_value
2201B
invalid_regular_expression
2201W
invalid_row_count_in_limit_clause
2201X
invalid_row_count_in_result_offset_clause
2202H
invalid_tablesample_argument
2202G
invalid_tablesample_repeat
22009
invalid_time_zone_displacement_value
2200C
invalid_use_of_escape_character
2200G
most_specific_type_mismatch
22004
null_value_not_allowed
22002
null_value_no_indicator_parameter
22003
numeric_value_out_of_range
2200H
sequence_generator_limit_exceeded
22026
string_data_length_mismatch
22001
string_data_right_truncation
22011
substring_error
22027
trim_error
22024
unterminated_c_string
2200F
zero_length_character_string
22P01
floating_point_exception
22P02
invalid_text_representation
22P03
invalid_binary_representation
22P04
bad_copy_file_format
22P05
untranslatable_character
2200L
not_an_xml_document
2200M
invalid_xml_document
2200N
invalid_xml_content
2200S
invalid_xml_comment
2200T
invalid_xml_processing_instruction
Class 23 — Integrity Constraint Violation
23000
integrity_constraint_violation
23001
restrict_violation
23502
not_null_violation
23503
foreign_key_violation
23505
unique_violation
23514
check_violation
23P01
exclusion_violation
Class 24 — Invalid Cursor State
24000
invalid_cursor_state
Class 25 — Invalid Transaction State
25000
invalid_transaction_state
25001
active_sql_transaction
25002
branch_transaction_already_active
25008
held_cursor_requires_same_isolation_level
25003
inappropriate_access_mode_for_branch_transaction
25004
inappropriate_isolation_level_for_branch_transaction
25005
no_active_sql_transaction_for_branch_transaction
25006
read_only_sql_transaction
25007
schema_and_data_statement_mixing_not_supported
25P01
no_active_sql_transaction
25P02
in_failed_sql_transaction
25P03
idle_in_transaction_session_timeout
Class 26 — Invalid SQL Statement Name
26000
invalid_sql_statement_name
Class 27 — Triggered Data Change Violation
27000
triggered_data_change_violation
Class 28 — Invalid Authorization Specification
28000
invalid_authorization_specification
28P01
invalid_password
Class 2B — Dependent Privilege Descriptors Still Exist
2B000
dependent_privilege_descriptors_still_exist
2BP01
dependent_objects_still_exist
Class 2D — Invalid Transaction Termination
2D000
invalid_transaction_termination
Class 2F — SQL Routine Exception
2F000
sql_routine_exception
2F005
function_executed_no_return_statement
2F002
modifying_sql_data_not_permitted
2F003
prohibited_sql_statement_attempted
2F004
reading_sql_data_not_permitted
Class 34 — Invalid Cursor Name
34000
invalid_cursor_name
Class 38 — External Routine Exception
38000
external_routine_exception
38001
containing_sql_not_permitted
38002
modifying_sql_data_not_permitted
38003
prohibited_sql_statement_attempted
38004
reading_sql_data_not_permitted
Class 39 — External Routine Invocation Exception
39000
external_routine_invocation_exception
39001
invalid_sqlstate_returned
39004
null_value_not_allowed
39P01
trigger_protocol_violated
39P02
srf_protocol_violated
39P03
event_trigger_protocol_violated
Class 3B — Savepoint Exception
3B000
savepoint_exception
3B001
invalid_savepoint_specification
Class 3D — Invalid Catalog Name
3D000
invalid_catalog_name
Class 3F — Invalid Schema Name
3F000
invalid_schema_name
Class 40 — Transaction Rollback
40000
transaction_rollback
40002
transaction_integrity_constraint_violation
40001
serialization_failure
40003
statement_completion_unknown
40P01
deadlock_detected
Class 42 — Syntax Error or Access Rule Violation
42000
syntax_error_or_access_rule_violation
42601
syntax_error
42501
insufficient_privilege
42846
cannot_coerce
42803
grouping_error
42P20
windowing_error
42P19
invalid_recursion
42830
invalid_foreign_key
42602
invalid_name
42622
name_too_long
42939
reserved_name
42804
datatype_mismatch
42P18
indeterminate_datatype
42P21
collation_mismatch
42P22
indeterminate_collation
42809
wrong_object_type
428C9
generated_always
42703
undefined_column
42883
undefined_function
42P01
undefined_table
42P02
undefined_parameter
42704
undefined_object
42701
duplicate_column
42P03
duplicate_cursor
42P04
duplicate_database
42723
duplicate_function
42P05
duplicate_prepared_statement
42P06
duplicate_schema
42P07
duplicate_table
42712
duplicate_alias
42710
duplicate_object
42702
ambiguous_column
42725
ambiguous_function
42P08
ambiguous_parameter
42P09
ambiguous_alias
42P10
invalid_column_reference
42611
invalid_column_definition
42P11
invalid_cursor_definition
42P12
invalid_database_definition
42P13
invalid_function_definition
42P14
invalid_prepared_statement_definition
42P15
invalid_schema_definition
42P16
invalid_table_definition
42P17
invalid_object_definition
Class 44 — WITH CHECK OPTION Violation
44000
with_check_option_violation
Class 53 — Insufficient Resources
53000
insufficient_resources
53100
disk_full
53200
out_of_memory
53300
too_many_connections
53400
configuration_limit_exceeded
Class 54 — Program Limit Exceeded
54000
program_limit_exceeded
54001
statement_too_complex
54011
too_many_columns
54023
too_many_arguments
Class 55 — Object Not In Prerequisite State
55000
object_not_in_prerequisite_state
55006
object_in_use
55P02
cant_change_runtime_param
55P03
lock_not_available
55P04
unsafe_new_enum_value_usage
Class 57 — Operator Intervention
57000
operator_intervention
57014
query_canceled
57P01
admin_shutdown
57P02
crash_shutdown
57P03
cannot_connect_now
57P04
database_dropped
Class 58 — System Error (errors external to PostgreSQL itself)
58000
system_error
58030
io_error
58P01
undefined_file
58P02
duplicate_file
Class 72 — Snapshot Failure
72000
snapshot_too_old
Class F0 — Configuration File Error
F0000
config_file_error
F0001
lock_file_exists
Class HV — Foreign Data Wrapper Error (SQL/MED)
HV000
fdw_error
HV005
fdw_column_name_not_found
HV002
fdw_dynamic_parameter_value_needed
HV010
fdw_function_sequence_error
HV021
fdw_inconsistent_descriptor_information
HV024
fdw_invalid_attribute_value
HV007
fdw_invalid_column_name
HV008
fdw_invalid_column_number
HV004
fdw_invalid_data_type
HV006
fdw_invalid_data_type_descriptors
HV091
fdw_invalid_descriptor_field_identifier
HV00B
fdw_invalid_handle
HV00C
fdw_invalid_option_index
HV00D
fdw_invalid_option_name
HV090
fdw_invalid_string_length_or_buffer_length
HV00A
fdw_invalid_string_format
HV009
fdw_invalid_use_of_null_pointer
HV014
fdw_too_many_handles
HV001
fdw_out_of_memory
HV00P
fdw_no_schemas
HV00J
fdw_option_name_not_found
HV00K
fdw_reply_handle
HV00Q
fdw_schema_not_found
HV00R
fdw_table_not_found
HV00L
fdw_unable_to_create_execution
HV00M
fdw_unable_to_create_reply
HV00N
fdw_unable_to_establish_connection
Class P0 — PL/pgSQL Error
P0000
plpgsql_error
P0001
raise_exception
P0002
no_data_found
P0003
too_many_rows
P0004
assert_failure
Class XX — Internal Error
XX000
internal_error
XX001
data_corrupted
XX002
index_corrupted
The pg_visibility
module provides a means for examining the visibility map (VM) and page-level visibility information of a table. It also provides functions to check the integrity of a visibility map and to force it to be rebuilt.
Three different bits are used to store information about page-level visibility. The all-visible bit in the visibility map indicates that every tuple in the corresponding page of the relation is visible to every current and future transaction. The all-frozen bit in the visibility map indicates that every tuple in the page is frozen; that is, no future vacuum will need to modify the page until such time as a tuple is inserted, updated, deleted, or locked on that page. The page header's PD_ALL_VISIBLE
bit has the same meaning as the all-visible bit in the visibility map, but is stored within the data page itself rather than in a separate data structure. These two bits will normally agree, but the page's all-visible bit can sometimes be set while the visibility map bit is clear after a crash recovery. The reported values can also disagree because of a change that occurs after pg_visibility
examines the visibility map and before it examines the data page. Any event that causes data corruption can also cause these bits to disagree.
Functions that display information about PD_ALL_VISIBLE
bits are much more costly than those that only consult the visibility map, because they must read the relation's data blocks rather than only the (much smaller) visibility map. Functions that check the relation's data blocks are similarly expensive.
pg_visibility_map(relation regclass, blkno bigint, all_visible OUT boolean, all_frozen OUT boolean) returns record
Returns the all-visible and all-frozen bits in the visibility map for the given block of the given relation.
pg_visibility(relation regclass, blkno bigint, all_visible OUT boolean, all_frozen OUT boolean, pd_all_visible OUT boolean) returns record
Returns the all-visible and all-frozen bits in the visibility map for the given block of the given relation, plus the PD_ALL_VISIBLE
bit of that block.
pg_visibility_map(relation regclass, blkno OUT bigint, all_visible OUT boolean, all_frozen OUT boolean) returns setof record
Returns the all-visible and all-frozen bits in the visibility map for each block of the given relation.
pg_visibility(relation regclass, blkno OUT bigint, all_visible OUT boolean, all_frozen OUT boolean, pd_all_visible OUT boolean) returns setof record
Returns the all-visible and all-frozen bits in the visibility map for each block of the given relation, plus the PD_ALL_VISIBLE
bit of each block.
pg_visibility_map_summary(relation regclass, all_visible OUT bigint, all_frozen OUT bigint) returns record
Returns the number of all-visible pages and the number of all-frozen pages in the relation according to the visibility map.
pg_check_frozen(relation regclass, t_ctid OUT tid) returns setof tid
Returns the TIDs of non-frozen tuples stored in pages marked all-frozen in the visibility map. If this function returns a non-empty set of TIDs, the visibility map is corrupt.
pg_check_visible(relation regclass, t_ctid OUT tid) returns setof tid
Returns the TIDs of non-all-visible tuples stored in pages marked all-visible in the visibility map. If this function returns a non-empty set of TIDs, the visibility map is corrupt.
pg_truncate_visibility_map(relation regclass) returns void
Truncates the visibility map for the given relation. This function is useful if you believe that the visibility map for the relation is corrupt and wish to force rebuilding it. The first VACUUM
executed on the given relation after this function is executed will scan every page in the relation and rebuild the visibility map. (Until that is done, queries will treat the visibility map as containing all zeroes.)
By default, these functions are executable only by superusers and members of the pg_stat_scan_tables
role, with the exception of pg_truncate_visibility_map(relation regclass)
which can only be executed by superusers.
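As a quick sketch of how these functions might be used (the table name my_table and its contents are purely illustrative; the extension must be installed first):

```sql
CREATE EXTENSION pg_visibility;

CREATE TABLE my_table (id int);                        -- illustrative table
INSERT INTO my_table SELECT generate_series(1, 1000);
VACUUM my_table;                                       -- sets visibility map bits where possible

-- Summary of all-visible / all-frozen pages for the table.
SELECT * FROM pg_visibility_map_summary('my_table');

-- Per-block visibility map bits plus the page-level PD_ALL_VISIBLE bit.
SELECT * FROM pg_visibility('my_table') LIMIT 10;

-- Sanity checks: any rows returned indicate a corrupt visibility map.
SELECT * FROM pg_check_frozen('my_table');
SELECT * FROM pg_check_visible('my_table');
```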
This appendix and the previous one contain information regarding the modules that can be found in the contrib directory of the PostgreSQL distribution. See the previous appendix for more information about the contrib section in general and about the server extensions and plug-ins found in contrib specifically.
This appendix covers utility programs found in contrib
. Once installed, either from source or a packaging system, they are found in the bin
directory of the PostgreSQL installation and can be used like any other program.
dblink_connect — opens a persistent connection to a remote database
dblink_connect_u — opens a persistent connection to a remote database, insecurely
dblink_disconnect — closes a persistent connection to a remote database
dblink — executes a query in a remote database
dblink_exec — executes a command in a remote database
dblink_open — opens a cursor in a remote database
dblink_fetch — returns rows from an open cursor in a remote database
dblink_close — closes a cursor in a remote database
dblink_get_connections — returns the names of all open named dblink connections
dblink_error_message — gets last error message on the named connection
dblink_send_query — sends an async query to a remote database
dblink_is_busy — checks if connection is busy with an async query
dblink_get_notify — retrieve async notifications on a connection
dblink_get_result — gets an async query result
dblink_cancel_query — cancels any active query on the named connection
dblink_get_pkey — returns the positions and field names of a relation's primary key fields
dblink_build_sql_insert — builds an INSERT statement using a local tuple, replacing the primary key field values with alternative supplied values
dblink_build_sql_delete — builds a DELETE statement using supplied values for primary key field values
dblink_build_sql_update — builds an UPDATE statement using a local tuple, replacing the primary key field values with alternative supplied values
dblink is a module that supports connections to other PostgreSQL databases from within a database session.
dblink — executes a query in a remote database
dblink executes a query (usually a SELECT, but it can be any SQL statement that returns rows) in a remote database.
When two text arguments are given, the first one is first looked up as a persistent connection's name; if found, the command is executed on that connection. If not found, the first argument is treated as a connection info string as for dblink_connect, and the indicated connection is made just for the duration of this command.
connname
Name of the connection to use; omit this parameter to use the unnamed connection.
connstr
A connection info string, as previously described for dblink_connect.
sql
The SQL query that you wish to execute in the remote database, for example select * from foo.
fail_on_error
If true (the default when omitted) then an error thrown on the remote side of the connection causes an error to also be thrown locally. If false, the remote error is locally reported as a NOTICE, and the function returns no rows.
The function returns the row(s) produced by the query. Since dblink can be used with any query, it is declared to return record, rather than specifying any particular set of columns. This means that you must specify the expected set of columns in the calling query — otherwise PostgreSQL would not know what to expect. Here is an example:
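A sketch of such a call (the connection string is illustrative, and the dblink extension is assumed to be installed; the remote query reads from the pg_proc system catalog, matching the alias discussed below):

```sql
SELECT *
  FROM dblink('dbname=mydb options=-csearch_path=',
              'select proname, prosrc from pg_proc')
    AS t1(proname name, prosrc text)
  WHERE proname LIKE 'bytea%';
```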
The “alias” part of the FROM clause must specify the column names and types that the function will return. (Specifying column names in an alias is actually standard SQL syntax, but specifying column types is a PostgreSQL extension.) This allows the system to understand what * should expand to, and what proname in the WHERE clause refers to, in advance of trying to execute the function. At run time, an error will be thrown if the actual query result from the remote database does not have the same number of columns shown in the FROM clause. The column names need not match, however, and dblink does not insist on exact type matches either. It will succeed so long as the returned data strings are valid input for the column type declared in the FROM clause.
A convenient way to use dblink with predetermined queries is to create a view. This allows the column type information to be buried in the view, instead of having to spell it out in every query. For example,
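the following sketch (the connection string and view name are illustrative, continuing the hypothetical remote query above) shows the idea:

```sql
CREATE VIEW myremote_pg_proc AS
  SELECT *
    FROM dblink('dbname=mydb options=-csearch_path=',
                'select proname, prosrc from pg_proc')
      AS t1(proname name, prosrc text);

SELECT * FROM myremote_pg_proc WHERE proname LIKE 'bytea%';
```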
The auto_explain module provides a means for logging the execution plans of slow statements automatically, without having to run EXPLAIN by hand. This is especially helpful for tracking down un-optimized queries in large applications.
The module provides no SQL-accessible functions. To use it, simply load it into the server. You can load it into an individual session:
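For instance, the documented way to load it for just the current session:

```sql
LOAD 'auto_explain';
```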
(You must be a superuser to do that.) More typical usage is to preload it into some or all sessions by including auto_explain in session_preload_libraries or shared_preload_libraries in postgresql.conf. Then you can track unexpectedly slow queries no matter when they happen, though of course there is a price in overhead.
Several configuration parameters control the behavior of auto_explain. Note that the default behavior is to do nothing, so you must set at least auto_explain.log_min_duration if you want any results.
auto_explain.log_min_duration
(integer
)
auto_explain.log_min_duration is the minimum statement execution time, in milliseconds, that will cause the statement's plan to be logged. Setting this to zero logs all plans. -1 (the default) disables logging of plans. For example, if you set it to 250ms, all statements that run for 250ms or longer will be logged. Only superusers can change this setting.
auto_explain.log_analyze
(boolean
)
auto_explain.log_analyze causes EXPLAIN ANALYZE output, rather than just EXPLAIN output, to be printed when an execution plan is logged. This parameter is off by default. Only superusers can change this setting.
When this parameter is on, per-plan-node timing is performed for all statements executed, whether or not they run long enough to actually get logged. This can have an extremely negative impact on performance. Turning off auto_explain.log_timing reduces the performance cost, at the price of obtaining less information.
auto_explain.log_buffers
(boolean
)
auto_explain.log_buffers controls whether buffer usage statistics are printed when an execution plan is logged; it is equivalent to the BUFFERS option of EXPLAIN. This parameter has no effect unless auto_explain.log_analyze is enabled. It is off by default. Only superusers can change this setting.
auto_explain.log_timing
(boolean
)
auto_explain.log_timing controls whether per-node timing information is printed when an execution plan is logged; it is equivalent to the TIMING option of EXPLAIN. Because repeatedly reading the system clock can slow down queries significantly on some systems, it may be useful to set this parameter to off when only actual row counts, and not exact timings, are needed. This parameter has no effect unless auto_explain.log_analyze is enabled. It is on by default. Only superusers can change this setting.
auto_explain.log_triggers
(boolean
)
auto_explain.log_triggers causes trigger execution statistics to be included when an execution plan is logged. This parameter has no effect unless auto_explain.log_analyze is enabled. It is off by default. Only superusers can change this setting.
auto_explain.log_verbose
(boolean
)
auto_explain.log_verbose controls whether verbose details are printed when an execution plan is logged; it is equivalent to the VERBOSE option of EXPLAIN. This parameter is off by default. Only superusers can change this setting.
auto_explain.log_format
(enum
)
auto_explain.log_format selects the EXPLAIN output format to be used. The allowed values are text, xml, json, and yaml. The default is text. Only superusers can change this setting.
auto_explain.log_nested_statements
(boolean
)
auto_explain.log_nested_statements causes nested statements (statements executed inside a function) to be considered for logging. When it is off, only top-level query plans are logged. This parameter is off by default. Only superusers can change this setting.
auto_explain.sample_rate
(real
)
auto_explain.sample_rate causes auto_explain to explain only a fraction of the statements in each session. The default is 1, meaning explain all queries. In the case of nested statements, either all are explained or none are. Only superusers can change this setting.
In ordinary usage, these parameters are set in postgresql.conf, although superusers can alter them on the fly within their own sessions. Typical usage might be:
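A sketch of such settings in postgresql.conf (the values shown are illustrative):

```
# postgresql.conf
session_preload_libraries = 'auto_explain'
auto_explain.log_min_duration = '3s'
```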
This might produce log output containing, for example, the duration and EXPLAIN plan of any query that ran longer than the configured threshold.
This appendix and the next one contain information regarding the modules that can be found in the contrib directory of the PostgreSQL distribution. These include porting tools, analysis utilities, and plug-in features that are not part of the core PostgreSQL system, mainly because they address a limited audience or are too experimental to be part of the main source tree. This does not preclude their usefulness.
in the contrib directory of a configured source tree; or to build and install just one selected module, do the same in that module's subdirectory. Many of the modules have regression tests, which can be executed by running:
before installation or
once you have a PostgreSQL server running.
If you are using a pre-packaged version of PostgreSQL, these modules are typically made available as a separate subpackage, such as postgresql-contrib.
This command must be run by a database superuser. This registers the new SQL objects in the current database only, so you need to run this command in each database that you want the module's facilities to be available in. Alternatively, run it in database template1 so that the extension will be copied into subsequently-created databases by default.
Many modules allow you to install their objects in a schema of your choice. To do that, add SCHEMA schema_name to the CREATE EXTENSION command. By default, the objects will be placed in your current creation target schema, typically public.
If your database was brought forward by dump and reload from a pre-9.1 version of PostgreSQL, and you had been using the pre-9.1 version of the module in it, you should instead do
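A sketch of that form of the command (module_name is a placeholder; this FROM unpackaged syntax applied to the releases this page describes and was removed in later PostgreSQL versions):

```sql
CREATE EXTENSION module_name FROM unpackaged;
```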
Robert Haas
See also postgres_fdw, which provides roughly the same functionality using a more modern and standards-compliant infrastructure.
Takahiro Itagaki
This appendix covers extensions and other server plug-in modules found in contrib. The next appendix covers utility programs.
When building from the source distribution, these components are not built automatically, unless you build the "world" target. You can build and install all of them by running:
Many modules supply new user-defined functions, operators, or types. To make use of one of these modules, after you have installed the code you need to register the new SQL objects in the database system. In PostgreSQL 9.1 and later, this is done by executing a CREATE EXTENSION command. In a fresh database, you can simply do
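A minimal sketch, with module_name standing for the module to be installed:

```sql
CREATE EXTENSION module_name;
```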
This will update the pre-9.1 objects of the module into a proper extension object. Future updates to the module will be managed by ALTER EXTENSION. For more information about extension updates, see the extension documentation.
Note, however, that some of these modules are not "extensions" in this sense, but are loaded into the server in some other way, for instance by way of shared_preload_libraries. See the documentation of each module for details.
oid2name — resolve OIDs and file nodes in a PostgreSQL data directory
vacuumlo — remove orphaned large objects from a PostgreSQL database
This section covers PostgreSQL client applications in contrib. They can be run from anywhere, independent of where the database server resides. See also PostgreSQL Client Applications for information about client applications that are part of the core PostgreSQL distribution.
oid2name — resolve OIDs and file nodes in a PostgreSQL data directory
oid2name [option...]
oid2name is a utility program that helps administrators to examine the file structure used by PostgreSQL. To make use of it, you need to be familiar with the database file structure, which is described in Chapter 66.
The name “oid2name” is historical, and is actually rather misleading, since most of the time when you use it, you will really be concerned with tables' filenode numbers (which are the file names visible in the database directories). Be sure you understand the difference between table OIDs and table filenodes!
oid2name connects to a target database and extracts OID, filenode, and/or table name information. You can also have it show database OIDs or tablespace OIDs.
oid2name accepts the following command-line arguments:

-f filenode
    show info for table with filenode filenode
-i
    include indexes and sequences in the listing
-o oid
    show info for table with OID oid
-q
    omit headers (useful for scripting)
-s
    show tablespace OIDs
-S
    include system objects (those in information_schema, pg_toast and pg_catalog schemas)
-t tablename_pattern
    show info for table(s) matching tablename_pattern
-V
--version
    Print the oid2name version and exit.
-x
    display more information about each object shown: tablespace name, schema name, and OID
-?
--help
    Show help about oid2name command line arguments, and exit.

oid2name also accepts the following command-line arguments for connection parameters:

-d database
    database to connect to
-H host
    database server's host
-p port
    database server's port
-U username
    user name to connect as
-P password
    password (deprecated — putting this on the command line is a security hazard)
To display specific tables, select which tables to show by using -o, -f and/or -t. -o takes an OID, -f takes a filenode, and -t takes a table name (actually, it's a LIKE pattern, so you can use things like foo%). You can use as many of these options as you like, and the listing will include all objects matched by any of the options. But note that these options can only show objects in the database given by -d.

If you don't give any of -o, -f or -t, but do give -d, it will list all tables in the database named by -d. In this mode, the -S and -i options control what gets listed.

If you don't give -d either, it will show a listing of database OIDs. Alternatively you can give -s to get a tablespace listing.
oid2name requires a running database server with non-corrupt system catalogs. It is therefore of only limited use for recovering from catastrophic database corruption situations.
B. Palmer <bpalmer@crimelabs.net>
This section covers PostgreSQL server-related applications in contrib. They are typically run on the host where the database server resides. See also PostgreSQL Server Applications for information about server applications that are part of the core PostgreSQL distribution.
PostgreSQL is a complex software project, and managing the project is difficult. We have found that many enhancements to PostgreSQL can be developed more efficiently separately from the core project.
vacuumlo — remove orphaned large objects from a PostgreSQL database
vacuumlo [option...] dbname...
vacuumlo is a simple utility program that will remove any “orphaned” large objects from a PostgreSQL database. An orphaned large object (LO) is considered to be any LO whose OID does not appear in any oid
or lo
data column of the database.
If you use this, you may also be interested in the lo_manage
trigger in the lo module. lo_manage
is useful to try to avoid creating orphaned LOs in the first place.
All databases named on the command line are processed.
vacuumlo accepts the following command-line arguments:

-l limit
    Remove no more than limit large objects per transaction (default 1000). Since the server acquires a lock per LO removed, removing too many LOs in one transaction risks exceeding max_locks_per_transaction. Set the limit to zero if you want all removals done in a single transaction.
-n
    Don't remove anything, just show what would be done.
-v
    Write a lot of progress messages.
-V
--version
    Print the vacuumlo version and exit.
-?
--help
    Show help about vacuumlo command line arguments, and exit.

vacuumlo also accepts the following command-line arguments for connection parameters:

-h hostname
    Database server's host.
-p port
    Database server's port.
-U username
    User name to connect as.
-w
--no-password
    Never issue a password prompt. If the server requires password authentication and a password is not available by other means such as a .pgpass file, the connection attempt will fail. This option can be useful in batch jobs and scripts where no user is present to enter a password.
-W
    Force vacuumlo to prompt for a password before connecting to a database.
This option is never essential, since vacuumlo will automatically prompt for a password if the server demands password authentication. However, vacuumlo will waste a connection attempt finding out that the server wants a password. In some cases it is worth typing -W
to avoid the extra connection attempt.
vacuumlo works by the following method: First, vacuumlo builds a temporary table which contains all of the OIDs of the large objects in the selected database. It then scans through all columns in the database that are of type oid
or lo
, and removes matching entries from the temporary table. (Note: Only types with these names are considered; in particular, domains over them are not considered.) The remaining entries in the temporary table identify orphaned LOs. These are removed.
Peter Mount <peter@retep.org.uk>
pg_standby — supports the creation of a PostgreSQL warm standby server
pg_standby [option...] archivelocation nextwalfile walfilepath [restartwalfile]
pg_standby supports creation of a “warm standby” database server. It is designed to be a production-ready program, as well as a customizable template should you require specific modifications.
pg_standby is designed to be a waiting restore_command
, which is needed to turn a standard archive recovery into a warm standby operation. Other configuration is required as well, all of which is described in the main server manual (see Section 26.2).
To configure a standby server to use pg_standby, put this into its recovery.conf
configuration file:
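A sketch of the relevant setting (archiveDir is a placeholder, as explained next):

```
restore_command = 'pg_standby archiveDir %f %p %r'
```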
where archiveDir
is the directory from which WAL segment files should be restored.
If restartwalfile
is specified, normally by using the %r
macro, then all WAL files logically preceding this file will be removed from archivelocation
. This minimizes the number of files that need to be retained, while preserving crash-restart capability. Use of this parameter is appropriate if the archivelocation
is a transient staging area for this particular standby server, but not when the archivelocation
is intended as a long-term WAL archive area.
pg_standby assumes that archivelocation
is a directory readable by the server-owning user. If restartwalfile
(or -k
) is specified, the archivelocation
directory must be writable too.
There are two ways to fail over to a “warm standby” database server when the master server fails:

Smart Failover
    In smart failover, the server is brought up after applying all WAL files available in the archive. This results in zero data loss, even if the standby server has fallen behind, but if there is a lot of unapplied WAL it can be a long time before the standby server becomes ready. To trigger a smart failover, create a trigger file containing the word smart, or just create it and leave it empty.

Fast Failover
    In fast failover, the server is brought up immediately. Any WAL files in the archive that have not yet been applied will be ignored, and all transactions in those files are lost. To trigger a fast failover, create a trigger file and write the word fast into it. pg_standby can also be configured to execute a fast failover automatically if no new WAL file appears within a defined interval.
pg_standby accepts the following command-line arguments:

-c
    Use cp or copy command to restore WAL files from archive. This is the only supported behavior so this option is useless.
-d
    Print lots of debug logging output on stderr.
-k
    Remove files from archivelocation so that no more than this many WAL files before the current one are kept in the archive. Zero (the default) means not to remove any files from archivelocation. This parameter will be silently ignored if restartwalfile is specified, since that specification method is more accurate in determining the correct archive cut-off point. Use of this parameter is deprecated as of PostgreSQL 8.3; it is safer and more efficient to specify a restartwalfile parameter. A too small setting could result in removal of files that are still needed for a restart of the standby server, while a too large setting wastes archive space.
-r maxretries
    Set the maximum number of times to retry the copy command if it fails (default 3). After each failure, we wait for sleeptime * num_retries so that the wait time increases progressively. So by default, we will wait 5 secs, 10 secs, then 15 secs before reporting the failure back to the standby server. This will be interpreted as end of recovery and the standby will come up fully as a result.
-s sleeptime
    Set the number of seconds (up to 60, default 5) to sleep between tests to see if the WAL file to be restored is available in the archive yet. The default setting is not necessarily recommended; consult Section 26.2 for discussion.
-t triggerfile
    Specify a trigger file whose presence should cause failover. It is recommended that you use a structured file name to avoid confusion as to which server is being triggered when multiple servers exist on the same system; for example /tmp/pgsql.trigger.5432.
-V
--version
    Print the pg_standby version and exit.
-w maxwaittime
    Set the maximum number of seconds to wait for the next WAL file, after which a fast failover will be performed. A setting of zero (the default) means wait forever. The default setting is not necessarily recommended; consult Section 26.2 for discussion.
-?
--help
    Show help about pg_standby command line arguments, and exit.
pg_standby is designed to work with PostgreSQL 8.2 and later.
PostgreSQL 8.3 provides the %r
macro, which is designed to let pg_standby know the last file it needs to keep. With PostgreSQL 8.2, the -k
option must be used if archive cleanup is required. This option remains available in 8.3, but its use is deprecated.
PostgreSQL 8.4 provides the recovery_end_command
option. Without this option a leftover trigger file can be hazardous.
pg_standby is written in C and has easy-to-modify source code, with specifically designated sections to modify for your own needs.
On Linux or Unix systems, you might use:
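A sketch of such a recovery.conf setup (paths are placeholders; the options mirror the behavior listed below):

```
archive_command = 'cp %p /path/to/archive/%f'

restore_command = 'pg_standby -d -s 2 -t /tmp/pgsql.trigger.5442 /path/to/archive %f %p %r 2>>standby.log'
recovery_end_command = 'rm -f /tmp/pgsql.trigger.5442'
```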
where the archive directory is physically located on the standby server, so that the archive_command
is accessing it across NFS, but the files are local to the standby (enabling use of ln
). This will:
produce debugging output in standby.log
sleep for 2 seconds between checks for next WAL file availability
stop waiting only when a trigger file called /tmp/pgsql.trigger.5442
appears, and perform failover according to its content
remove the trigger file when recovery ends
remove no-longer-needed files from the archive directory
On Windows, you might use:
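A sketch for Windows (paths are placeholders; note the doubled backslashes in archive_command, as discussed next):

```
archive_command = 'copy %p C:\\path\\to\\archive\\%f'

restore_command = 'pg_standby -d -s 5 -t C:\pgsql.trigger.5442 C:\path\to\archive %f %p %r 2>>standby.log'
recovery_end_command = 'del C:\pgsql.trigger.5442'
```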
Note that backslashes need to be doubled in the archive_command
, but not in the restore_command
or recovery_end_command
. This will:
use the copy
command to restore WAL files from archive
produce debugging output in standby.log
sleep for 5 seconds between checks for next WAL file availability
stop waiting only when a trigger file called C:\pgsql.trigger.5442
appears, and perform failover according to its content
remove the trigger file when recovery ends
remove no-longer-needed files from the archive directory
The copy
command on Windows sets the final file size before the file is completely copied, which would ordinarily confuse pg_standby. Therefore pg_standby waits sleeptime
seconds once it sees the proper file size. GNUWin32's cp
sets the file size only after the file copy is complete.
Since the Windows example uses copy
at both ends, either or both servers might be accessing the archive directory across the network.
Simon Riggs <simon@2ndquadrant.com>
There are several administration tools available for PostgreSQL. The most popular is pgAdmin III, and there are several commercially available ones as well.
There are only two client interfaces included in the base PostgreSQL distribution:
libpq is included because it is the primary C language interface, and because many other client interfaces are built on top of it.
ECPG is included because it depends on the server-side SQL grammar, and is therefore sensitive to changes in PostgreSQL itself.
All other language interfaces are external projects and are distributed separately. Table H.1 includes a list of some of these projects. Note that some of these packages might not be released under the same license as PostgreSQL. For more information on each language interface, including licensing terms, refer to its website and documentation.
Table H.1. Externally Maintained Client Interfaces
The PostgreSQL source code is stored and managed using the Git version control system. A public mirror of the master repository is available; it is updated within a minute of any change to the master repository.
Our wiki has some discussion on working with Git.
Note that building PostgreSQL from the source repository requires reasonably up-to-date versions of bison, flex, and Perl. These tools are not needed to build from a distribution tarball, because the files that these tools are used to build are included in the tarball. Other tool requirements are the same as for a normal build from source.
With Git you will make a copy of the entire code repository on your local machine, so you will have access to all history and branches offline. This is the fastest and most flexible way to develop or test patches.
Git
You will need an installed version of Git, which you can get from the Git website. Many systems already have a recent version of Git installed by default, or available in their package distribution system.
To begin using the Git repository, make a clone of the official mirror:
This will copy the full repository to your local machine, so it may take a while to complete, especially if you have a slow Internet connection. The files will be placed in a new subdirectory postgresql
of your current directory.
The Git mirror can also be reached via the Git protocol. Just change the URL prefix to git
, as in:
Whenever you want to get the latest updates in the system, cd
into the repository, and run:
Git can do a lot more things than just fetch the source. For more information, consult the Git man pages, or see the Git website.
PostgreSQL includes several procedural languages with the base distribution: PL/pgSQL, PL/Tcl, PL/Perl, and PL/Python.
In addition, there are a number of procedural languages that are developed and maintained outside the core PostgreSQL distribution. Table H.2 lists some of these packages. Note that some of these projects might not be released under the same license as PostgreSQL. For more information on each procedural language, including licensing information, refer to its website and documentation.
Table H.2. Externally Maintained Procedural Languages
Name
Language
Comments
Website
DBD::Pg
Perl
Perl DBI driver
JDBC
Java
Type 4 JDBC driver
libpqxx
C++
New-style C++ interface
node-postgres
JavaScript
Node.js driver
Npgsql
.NET
.NET data provider
pgtcl
Tcl
pgtclng
Tcl
pq
Go
Pure Go driver for Go's database/sql
psqlODBC
ODBC
ODBC driver
psycopg
Python
DB API 2.0-compliant
Name
Language
Website
PL/Java
Java
PL/Lua
Lua
PL/R
R
PL/sh
Unix shell
PL/v8
JavaScript
PostgreSQL is designed to be easily extensible. For this reason, extensions loaded into the database can function just like features that are built in. The contrib/
directory shipped with the source code contains several extensions, which are described in Appendix F. Other extensions are developed independently, like PostGIS. Even PostgreSQL replication solutions can be developed externally. For example, Slony-I is a popular master/standby replication solution that is developed independently from the core project.
This page explains how the official PostgreSQL documentation is obtained and produced.
Because this translation is produced with GitBook, the documents below are only linked to the official sources and are not translated for now.
PostgreSQL currently provides documentation in four main formats:
Plain text, for pre-installation instructions.
HTML, for online browsing and reference.
PDF, for users who need a printable copy.
man pages, for quick reference while working on the system.
Some development documentation is also maintained alongside the PostgreSQL source code.
The HTML and man page documentation is installed with PostgreSQL; the PDF documentation is provided as a separate download.
This page lists abbreviations commonly used in this translation and in the official manual.
ANSI
API
ASCII
BKI
CA
CIDR
CPAN
CRL
CSV
CTE
CVE
DBA
DBI
DBMS
DDL
DML
DST
ECPG
ESQL
FAQ
FSM
GEQO
GIN
GiST
Git
GMT
GSSAPI
GUC
HBA
HOT
IEC
IEEE
IPC
ISO
ISSN
JDBC
LDAP
LSN
MSVC
MVCC
NLS
ODBC
OID
OLAP
OLTP
ORDBMS
PAM
PGSQL
PGXS
PID
PITR
PL
POSIX
RDBMS
RFC
SGML
SPI
SP-GiST
SQL
SRF
SSH
SSL
SSPI
SYSV
TCP/IP
TID
TOAST
TPC
URL
UTC
UTF
UTF8
UUID
WAL
XID
XML
DDL: Data Definition Language, SQL commands such as CREATE TABLE, ALTER USER
DML: Data Manipulation Language, SQL commands such as INSERT, UPDATE, DELETE
GUC: Grand Unified Configuration, the PostgreSQL subsystem that handles server configuration
LSN: Log Sequence Number
PITR: Point-In-Time Recovery (Continuous Archiving)