
CREATE TABLE

Version: 11

CREATE TABLE — define a new table

Syntax

CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] TABLE [ IF NOT EXISTS ] table_name ( [
  { column_name data_type [ COLLATE collation ] [ column_constraint [ ... ] ]
    | table_constraint
    | LIKE source_table [ like_option ... ] }
    [, ... ]
] )
[ INHERITS ( parent_table [, ... ] ) ]
[ PARTITION BY { RANGE | LIST | HASH } ( { column_name | ( expression ) } [ COLLATE collation ] [ opclass ] [, ... ] ) ]
[ USING method ]
[ WITH ( storage_parameter [= value] [, ... ] ) | WITHOUT OIDS ]
[ ON COMMIT { PRESERVE ROWS | DELETE ROWS | DROP } ]
[ TABLESPACE tablespace_name ]

CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] TABLE [ IF NOT EXISTS ] table_name
    OF type_name [ (
  { column_name [ WITH OPTIONS ] [ column_constraint [ ... ] ]
    | table_constraint }
    [, ... ]
) ]
[ PARTITION BY { RANGE | LIST | HASH } ( { column_name | ( expression ) } [ COLLATE collation ] [ opclass ] [, ... ] ) ]
[ USING method ]
[ WITH ( storage_parameter [= value] [, ... ] ) | WITHOUT OIDS ]
[ ON COMMIT { PRESERVE ROWS | DELETE ROWS | DROP } ]
[ TABLESPACE tablespace_name ]

CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] TABLE [ IF NOT EXISTS ] table_name
    PARTITION OF parent_table [ (
  { column_name [ WITH OPTIONS ] [ column_constraint [ ... ] ]
    | table_constraint }
    [, ... ]
) ] { FOR VALUES partition_bound_spec | DEFAULT }
[ PARTITION BY { RANGE | LIST | HASH } ( { column_name | ( expression ) } [ COLLATE collation ] [ opclass ] [, ... ] ) ]
[ USING method ]
[ WITH ( storage_parameter [= value] [, ... ] ) | WITHOUT OIDS ]
[ ON COMMIT { PRESERVE ROWS | DELETE ROWS | DROP } ]
[ TABLESPACE tablespace_name ]

where column_constraint is:

[ CONSTRAINT constraint_name ]
{ NOT NULL |
  NULL |
  CHECK ( expression ) [ NO INHERIT ] |
  DEFAULT default_expr |
  GENERATED ALWAYS AS ( generation_expr ) STORED |
  GENERATED { ALWAYS | BY DEFAULT } AS IDENTITY [ ( sequence_options ) ] |
  UNIQUE index_parameters |
  PRIMARY KEY index_parameters |
  REFERENCES reftable [ ( refcolumn ) ] [ MATCH FULL | MATCH PARTIAL | MATCH SIMPLE ]
    [ ON DELETE referential_action ] [ ON UPDATE referential_action ] }
[ DEFERRABLE | NOT DEFERRABLE ] [ INITIALLY DEFERRED | INITIALLY IMMEDIATE ]

and table_constraint is:

[ CONSTRAINT constraint_name ]
{ CHECK ( expression ) [ NO INHERIT ] |
  UNIQUE ( column_name [, ... ] ) index_parameters |
  PRIMARY KEY ( column_name [, ... ] ) index_parameters |
  EXCLUDE [ USING index_method ] ( exclude_element WITH operator [, ... ] ) index_parameters [ WHERE ( predicate ) ] |
  FOREIGN KEY ( column_name [, ... ] ) REFERENCES reftable [ ( refcolumn [, ... ] ) ]
    [ MATCH FULL | MATCH PARTIAL | MATCH SIMPLE ] [ ON DELETE referential_action ] [ ON UPDATE referential_action ] }
[ DEFERRABLE | NOT DEFERRABLE ] [ INITIALLY DEFERRED | INITIALLY IMMEDIATE ]

and like_option is:

{ INCLUDING | EXCLUDING } { COMMENTS | CONSTRAINTS | DEFAULTS | GENERATED | IDENTITY | INDEXES | STATISTICS | STORAGE | ALL }

and partition_bound_spec is:

IN ( partition_bound_expr [, ...] ) |
FROM ( { partition_bound_expr | MINVALUE | MAXVALUE } [, ...] )
  TO ( { partition_bound_expr | MINVALUE | MAXVALUE } [, ...] ) |
WITH ( MODULUS numeric_literal, REMAINDER numeric_literal )

index_parameters in UNIQUE, PRIMARY KEY, and EXCLUDE constraints are:

[ INCLUDE ( column_name [, ... ] ) ]
[ WITH ( storage_parameter [= value] [, ... ] ) ]
[ USING INDEX TABLESPACE tablespace_name ]

exclude_element in an EXCLUDE constraint is:

{ column_name | ( expression ) } [ opclass ] [ ASC | DESC ] [ NULLS { FIRST | LAST } ]

Description

CREATE TABLE creates a new, initially empty table in the current database. The table will be owned by the user issuing the command.

If a schema name is given (for example, CREATE TABLE myschema.mytable ...), then the table is created in the specified schema. Otherwise it is created in the current schema. Temporary tables exist in a special schema, so a schema name cannot be given when creating a temporary table. The name of the table must be distinct from the name of any other table, sequence, index, view, or foreign table in the same schema.

CREATE TABLE also automatically creates a data type that represents the composite type corresponding to one row of the table. Therefore, a table cannot have the same name as any existing data type in the same schema.

The optional constraint clauses specify constraints that new or updated rows must satisfy for an insert or update operation to succeed. A constraint is an SQL object that helps define the set of valid values in the table in various ways.

There are two ways to define constraints: table constraints and column constraints. A column constraint is defined as part of a column definition. A table constraint definition is not tied to a particular column, and it can encompass more than one column.

Every column constraint can also be written as a table constraint; a column constraint is only a notational convenience for use when the constraint only affects one column.

To be able to create a table, you must have USAGE privilege on all column types or the type in the OF clause, respectively.

Parameters

TEMPORARY or TEMP

If specified, the table is created as a temporary table. Temporary tables are automatically dropped at the end of the session, or optionally at the end of the current transaction (see ON COMMIT below). Existing permanent tables with the same name are not visible to the current session while the temporary table exists, unless they are referenced with schema-qualified names. Any indexes created on a temporary table are automatically temporary as well.
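
For example, a minimal sketch of a temporary table (the table and column names are illustrative only, not taken from this page):

CREATE TEMPORARY TABLE session_scratch (
    item_id  integer,
    note     text
);
-- session_scratch disappears automatically at the end of the session.
-- While it exists, a permanent table of the same name is only visible
-- via a schema-qualified reference, e.g. public.session_scratch.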

UNLOGGED

IF NOT EXISTS

Do not throw an error if a relation with the same name already exists. A notice is issued in this case. Note that there is no guarantee that the existing relation is anything like the one that would have been created.
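
As an illustrative sketch (films_catalog is a hypothetical name, not a table from this page):

CREATE TABLE IF NOT EXISTS films_catalog (
    code   char(5),
    title  varchar(40)
);
-- Executing the same statement a second time emits a NOTICE and leaves
-- the existing table unchanged; without IF NOT EXISTS it would be an error.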

table_name

The name (optionally schema-qualified) of the table to be created.

OF type_name

Creates a typed table, which takes its structure from the specified composite type (name optionally schema-qualified). A typed table is tied to its type; for example the table will be dropped if the type is dropped (with DROP TYPE ... CASCADE).

When a typed table is created, then the data types of the columns are determined by the underlying composite type and are not specified by the CREATE TABLE command. But the CREATE TABLE command can add defaults and constraints to the table and can specify storage parameters.

column_name

The name of a column to be created in the new table.

data_type

COLLATE collation

The COLLATE clause assigns a collation to the column (which must be of a collatable data type). If not specified, the column data type's default collation is used.

INHERITS ( parent_table [, ... ] )

The optional INHERITS clause specifies a list of tables from which the new table automatically inherits all columns. Parent tables can be plain tables or foreign tables.

Use of INHERITS creates a persistent relationship between the new child table and its parent table(s). Schema modifications to the parent(s) normally propagate to children as well, and by default the data of the child table is included in scans of the parent(s).

If the same column name exists in more than one parent table, an error is reported unless the data types of the columns match in each of the parent tables. If there is no conflict, then the duplicate columns are merged to form a single column in the new table. If the column name list of the new table contains a column name that is also inherited, the data type must likewise match the inherited column(s), and the column definitions are merged into one. If the new table explicitly specifies a default value for the column, this default overrides any defaults from inherited declarations of the column. Otherwise, any parents that specify default values for the column must all specify the same default, or an error will be reported.

CHECK constraints are merged in essentially the same way as columns: if multiple parent tables and/or the new table definition contain identically-named CHECK constraints, these constraints must all have the same check expression, or an error will be reported. Constraints having the same name and expression will be merged into one copy. A constraint marked NO INHERIT in a parent will not be considered. Notice that an unnamed CHECK constraint in the new table will never be merged, since a unique name will always be chosen for it.

Column STORAGE settings are also copied from parent tables.

If a column in the parent table is an identity column, that property is not inherited. A column in the child table can be declared identity column if desired.
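
A minimal inheritance sketch (the table names here are hypothetical, not taken from this page):

CREATE TABLE places (
    name        text,
    population  integer
);

CREATE TABLE capitals (
    state  char(2)
) INHERITS (places);
-- capitals has the columns name, population and state;
-- by default, scans of places also return the rows stored in capitals.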

PARTITION BY { RANGE | LIST | HASH } ( { column_name | ( expression ) } [ opclass ] [, ...] )

The optional PARTITION BY clause specifies a strategy of partitioning the table. The table thus created is called a partitioned table. The parenthesized list of columns or expressions forms the partition key for the table. When using range or hash partitioning, the partition key can include multiple columns or expressions (up to 32, but this limit can be altered when building PostgreSQL), but for list partitioning, the partition key must consist of a single column or expression.

A partitioned table is divided into sub-tables (called partitions), which are created using separate CREATE TABLE commands. The partitioned table is itself empty. A data row inserted into the table is routed to a partition based on the value of columns or expressions in the partition key. If no existing partition matches the values in the new row, an error will be reported.

Partitioned tables do not support EXCLUDE constraints; however, you can define these constraints on individual partitions.

PARTITION OF parent_table { FOR VALUES partition_bound_spec | DEFAULT }

Creates the table as a partition of the specified parent table. The table can be created either as a partition for specific values using FOR VALUES or as a default partition using DEFAULT. Any indexes, constraints, and user-defined row-level triggers that exist in the parent table are cloned on the new partition.

The partition_bound_spec must correspond to the partitioning method and partition key of the parent table, and must not overlap with any existing partition of that parent. The form with IN is used for list partitioning, the form with FROM and TO is used for range partitioning, and the form with WITH is used for hash partitioning.

partition_bound_expr is any variable-free expression (subqueries, window functions, aggregate functions, and set-returning functions are not allowed). Its data type must match the data type of the corresponding partition key column. The expression is evaluated once at table creation time, so it can even contain volatile expressions such as CURRENT_TIMESTAMP.

When creating a list partition, NULL can be specified to signify that the partition allows the partition key column to be null. However, there cannot be more than one such list partition for a given parent table. NULL cannot be specified for range partitions.

The special values MINVALUE and MAXVALUE may be used when creating a range partition to indicate that there is no lower or upper bound on the column's value. For example, a partition defined using FROM (MINVALUE) TO (10) allows any values less than 10, and a partition defined using FROM (10) TO (MAXVALUE) allows any values greater than or equal to 10.

When creating a range partition involving more than one column, it can also make sense to use MAXVALUE as part of the lower bound, and MINVALUE as part of the upper bound. For example, a partition defined using FROM (0, MAXVALUE) TO (10, MAXVALUE) allows any rows where the first partition key column is greater than 0 and less than or equal to 10. Similarly, a partition defined using FROM ('a', MINVALUE) TO ('b', MINVALUE) allows any rows where the first partition key column starts with "a".

Note that if MINVALUE or MAXVALUE is used for one column of a partitioning bound, the same value must be used for all subsequent columns. For example, (10, MINVALUE, 0) is not a valid bound; you should write (10, MINVALUE, MINVALUE).

Also note that some element types, such as timestamp, have a notion of "infinity", which is just another value that can be stored. This is different from MINVALUE and MAXVALUE, which are not real values that can be stored, but rather they are ways of saying that the value is unbounded. MAXVALUE can be thought of as being greater than any other value, including "infinity" and MINVALUE as being less than any other value, including "minus infinity". Thus the range FROM ('infinity') TO (MAXVALUE) is not an empty range; it allows precisely one value to be stored — "infinity".

If DEFAULT is specified, the table will be created as the default partition of the parent table. This option is not available for hash-partitioned tables. A partition key value not fitting into any other partition of the given parent will be routed to the default partition.

When a table has an existing DEFAULT partition and a new partition is added to it, the default partition must be scanned to verify that it does not contain any rows which properly belong in the new partition. If the default partition contains a large number of rows, this may be slow. The scan will be skipped if the default partition is a foreign table or if it has a constraint which proves that it cannot contain rows which should be placed in the new partition.

When creating a hash partition, a modulus and remainder must be specified. The modulus must be a positive integer, and the remainder must be a non-negative integer less than the modulus. Typically, when initially setting up a hash-partitioned table, you should choose a modulus equal to the number of partitions and assign every table the same modulus and a different remainder (see examples, below). However, it is not required that every partition have the same modulus, only that every modulus which occurs among the partitions of a hash-partitioned table is a factor of the next larger modulus. This allows the number of partitions to be increased incrementally without needing to move all the data at once. For example, suppose you have a hash-partitioned table with 8 partitions, each of which has modulus 8, but find it necessary to increase the number of partitions to 16. You can detach one of the modulus-8 partitions, create two new modulus-16 partitions covering the same portion of the key space (one with a remainder equal to the remainder of the detached partition, and the other with a remainder equal to that value plus 8), and repopulate them with data. You can then repeat this -- perhaps at a later time -- for each modulus-8 partition until none remain. While this may still involve a large amount of data movement at each step, it is still better than having to create a whole new table and move all the data at once.
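
A sketch of the incremental split described above, assuming a hash-partitioned table named accounts that currently has eight modulus-8 partitions (all names hypothetical):

-- Detach one modulus-8 partition ...
ALTER TABLE accounts DETACH PARTITION accounts_m8_r0;

-- ... create two modulus-16 partitions covering the same slice of the key space ...
CREATE TABLE accounts_m16_r0 PARTITION OF accounts
    FOR VALUES WITH (MODULUS 16, REMAINDER 0);
CREATE TABLE accounts_m16_r8 PARTITION OF accounts
    FOR VALUES WITH (MODULUS 16, REMAINDER 8);

-- ... then repopulate by re-inserting the detached rows, which are routed
-- to the new partitions automatically, and drop the old partition.
INSERT INTO accounts SELECT * FROM accounts_m8_r0;
DROP TABLE accounts_m8_r0;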

A partition must have the same column names and types as the partitioned table to which it belongs. Modifications to the column names or types of a partitioned table will automatically propagate to all partitions. CHECK constraints will be inherited automatically by every partition, but an individual partition may specify additional CHECK constraints; additional constraints with the same name and condition as in the parent will be merged with the parent constraint. Defaults may be specified separately for each partition. But note that a partition's default value is not applied when inserting a tuple through a partitioned table.

Rows inserted into a partitioned table will be automatically routed to the correct partition. If no suitable partition exists, an error will occur.

Operations such as TRUNCATE which normally affect a table and all of its inheritance children will cascade to all partitions, but may also be performed on an individual partition. Note that dropping a partition with DROP TABLE requires taking an ACCESS EXCLUSIVE lock on the parent table.

LIKE source_table [ like_option ... ]

The LIKE clause specifies a table from which the new table automatically copies all column names, their data types, and their not-null constraints.

Unlike INHERITS, the new table and original table are completely decoupled after creation is complete. Changes to the original table will not be applied to the new table, and it is not possible to include data of the new table in scans of the original table.

Also unlike INHERITS, columns and constraints copied by LIKE are not merged with similarly named columns and constraints. If the same name is specified explicitly or in another LIKE clause, an error is signaled.

The optional like_option clauses specify which additional properties of the original table to copy. Specifying INCLUDING copies the property, specifying EXCLUDING omits the property. EXCLUDING is the default. If multiple specifications are made for the same kind of object, the last one is used. The available options are:

INCLUDING COMMENTS

Comments for the copied columns, constraints, and indexes will be copied. The default behavior is to exclude comments, resulting in the copied columns and constraints in the new table having no comments.

INCLUDING CONSTRAINTS

CHECK constraints will be copied. No distinction is made between column constraints and table constraints. Not-null constraints are always copied to the new table.

INCLUDING DEFAULTS

Default expressions for the copied column definitions will be copied. Otherwise, default expressions are not copied, resulting in the copied columns in the new table having null defaults. Note that copying defaults that call database-modification functions, such as nextval, may create a functional linkage between the original and new tables.

INCLUDING GENERATED

Any generation expressions of copied column definitions will be copied. By default, new columns will be regular base columns.

INCLUDING IDENTITY

Any identity specifications of copied column definitions will be copied. A new sequence is created for each identity column of the new table, separate from the sequences associated with the old table.

INCLUDING INDEXES

Indexes, PRIMARY KEY, UNIQUE, and EXCLUDE constraints on the original table will be created on the new table. Names for the new indexes and constraints are chosen according to the default rules, regardless of how the originals were named. (This behavior avoids possible duplicate-name failures for the new indexes.)

INCLUDING STATISTICS

Extended statistics are copied to the new table.

INCLUDING STORAGE

INCLUDING ALL

INCLUDING ALL is an abbreviated form selecting all the available individual options. (It could be useful to write individual EXCLUDING clauses after INCLUDING ALL to select all but some specific options.)

The LIKE clause can also be used to copy column definitions from views, foreign tables, or composite types. Inapplicable options (e.g., INCLUDING INDEXES from a view) are ignored.
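
For instance, a sketch that copies the films table from the examples below, carrying over every property except its indexes (the new table name is made up):

CREATE TABLE films_copy (LIKE films INCLUDING ALL EXCLUDING INDEXES);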

CONSTRAINT constraint_name

An optional name for a column or table constraint. If the constraint is violated, the constraint name is present in error messages, so constraint names like "col must be positive" can be used to communicate helpful constraint information to client applications. (Double-quotes are needed to specify constraint names that contain spaces.) If a constraint name is not specified, the system generates a name.

NOT NULL

The column is not allowed to contain null values.

NULL

The column is allowed to contain null values. This is the default.

This clause is only provided for compatibility with non-standard SQL databases. Its use is discouraged in new applications.

CHECK ( expression ) [ NO INHERIT ]

The CHECK clause specifies an expression producing a Boolean result which new or updated rows must satisfy for an insert or update operation to succeed. Expressions evaluating to TRUE or UNKNOWN succeed. Should any row of an insert or update operation produce a FALSE result, an error exception is raised and the insert or update does not alter the database. A check constraint specified as a column constraint should reference that column's value only, while an expression appearing in a table constraint can reference multiple columns.

A constraint marked with NO INHERIT will not propagate to child tables.

When a table has multiple CHECK constraints, they will be tested for each row in alphabetical order by name, after checking NOT NULL constraints. (PostgreSQL versions before 9.5 did not honor any particular firing order for CHECK constraints.)

DEFAULT default_expr

The DEFAULT clause assigns a default data value for the column whose column definition it appears within. The value is any variable-free expression (in particular, cross-references to other columns in the current table are not allowed). Subqueries are not allowed either. The data type of the default expression must match the data type of the column.

The default expression will be used in any insert operation that does not specify a value for the column. If there is no default for a column, then the default is null.

GENERATED ALWAYS AS ( generation_expr ) STORED

This clause creates the column as a generated column. The column cannot be written to, and when read the result of the specified expression will be returned.

The keyword STORED is required to signify that the column will be computed on write and will be stored on disk.

The generation expression can refer to other columns in the table, but not other generated columns. Any functions and operators used must be immutable. References to other tables are not allowed.
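
A small sketch of a stored generated column (hypothetical table and column names):

CREATE TABLE people (
    height_cm  numeric,
    height_in  numeric GENERATED ALWAYS AS (height_cm / 2.54) STORED
);
-- height_in cannot be inserted or updated directly; it is recomputed
-- from height_cm whenever a row is written.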

GENERATED { ALWAYS | BY DEFAULT } AS IDENTITY [ ( sequence_options ) ]

This clause creates the column as an identity column. It will have an implicit sequence attached to it and the column in new rows will automatically have values from the sequence assigned to it.
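
For example, an identity column with explicit sequence options (names and values are illustrative only):

CREATE TABLE tickets (
    ticket_id  integer GENERATED ALWAYS AS IDENTITY (START WITH 1000 INCREMENT BY 1),
    subject    text
);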

UNIQUE ( column_name [, ... ] ) [ INCLUDE ( column_name [, ...]) ] (table constraint)

The UNIQUE constraint specifies that a group of one or more columns of a table can contain only unique values. The behavior of the unique table constraint is the same as that for column constraints, with the additional capability to span multiple columns.

For the purpose of a unique constraint, null values are not considered equal.

Each unique table constraint must name a set of columns that is different from the set of columns named by any other unique or primary key constraint defined for the table. (Otherwise it would just be the same constraint listed twice.)

When establishing a unique constraint for a multi-level partition hierarchy, all the columns in the partition key of the target partitioned table, as well as those of all its descendant partitioned tables, must be included in the constraint definition.

Adding a unique constraint will automatically create a unique btree index on the column or group of columns used in the constraint. The optional clause INCLUDE adds to that index one or more columns on which the uniqueness is not enforced. Note that although the constraint is not enforced on the included columns, it still depends on them. Consequently, some operations on these columns (e.g. DROP COLUMN) can cause cascaded constraint and index deletion.
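
A sketch of a unique table constraint with an INCLUDE clause (hypothetical names):

CREATE TABLE registrations (
    email      text,
    full_name  text,
    UNIQUE (email) INCLUDE (full_name)
);
-- Uniqueness is enforced only on email; full_name merely rides along in
-- the supporting index, so dropping full_name also drops the constraint.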

PRIMARY KEY (column constraint)
PRIMARY KEY ( column_name [, ... ] ) [ INCLUDE ( column_name [, ...]) ] (table constraint)

The PRIMARY KEY constraint specifies that a column or columns of a table can contain only unique (non-duplicate), nonnull values. Only one primary key can be specified for a table, whether as a column constraint or a table constraint.

The primary key constraint should name a set of columns that is different from the set of columns named by any unique constraint defined for the same table. (Otherwise, the unique constraint is redundant and will be discarded.)

PRIMARY KEY enforces the same data constraints as a combination of UNIQUE and NOT NULL, but identifying a set of columns as the primary key also provides metadata about the design of the schema, since a primary key implies that other tables can rely on this set of columns as a unique identifier for rows.

PRIMARY KEY constraints share the restrictions that UNIQUE constraints have when placed on partitioned tables.

Adding a PRIMARY KEY constraint will automatically create a unique btree index on the column or group of columns used in the constraint. The optional INCLUDE clause allows a list of columns to be specified which will be included in the non-key portion of the index. Although uniqueness is not enforced on the included columns, the constraint still depends on them. Consequently, some operations on the included columns (e.g. DROP COLUMN) can cause cascaded constraint and index deletion.

EXCLUDE [ USING index_method ] ( exclude_element WITH operator [, ... ] ) index_parameters [ WHERE ( predicate ) ]

The predicate allows you to specify an exclusion constraint on a subset of the table; internally this creates a partial index. Note that parentheses are required around the predicate.
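
For instance, a variant of the circles example shown further below that restricts the exclusion constraint to rows marked active (active_circles is a hypothetical table):

CREATE TABLE active_circles (
    c       circle,
    active  boolean DEFAULT true,
    EXCLUDE USING gist (c WITH &&) WHERE (active)
);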

REFERENCES reftable [ ( refcolumn ) ] [ MATCH matchtype ] [ ON DELETE referential_action ] [ ON UPDATE referential_action ] (column constraint)
FOREIGN KEY ( column_name [, ... ] ) REFERENCES reftable [ ( refcolumn [, ... ] ) ] [ MATCH matchtype ] [ ON DELETE referential_action ] [ ON UPDATE referential_action ] (table constraint)

These clauses specify a foreign key constraint, which requires that a group of one or more columns of the new table must only contain values that match values in the referenced column(s) of some row of the referenced table. If the refcolumn list is omitted, the primary key of the reftable is used. The referenced columns must be the columns of a non-deferrable unique or primary key constraint in the referenced table. The user must have REFERENCES permission on the referenced table (either the whole table, or the specific referenced columns). The addition of a foreign key constraint requires a SHARE ROW EXCLUSIVE lock on the referenced table. Note that foreign key constraints cannot be defined between temporary tables and permanent tables.

A value inserted into the referencing column(s) is matched against the values of the referenced table and referenced columns using the given match type. There are three match types: MATCH FULL, MATCH PARTIAL, and MATCH SIMPLE (which is the default). MATCH FULL will not allow one column of a multicolumn foreign key to be null unless all foreign key columns are null; if they are all null, the row is not required to have a match in the referenced table. MATCH SIMPLE allows any of the foreign key columns to be null; if any of them are null, the row is not required to have a match in the referenced table. MATCH PARTIAL is not yet implemented. (Of course, NOT NULL constraints can be applied to the referencing column(s) to prevent these cases from arising.)

In addition, when the data in the referenced columns is changed, certain actions are performed on the data in this table's columns. The ON DELETE clause specifies the action to perform when a referenced row in the referenced table is being deleted. Likewise, the ON UPDATE clause specifies the action to perform when a referenced column in the referenced table is being updated to a new value. If the row is updated, but the referenced column is not actually changed, no action is done. Referential actions other than the NO ACTION check cannot be deferred, even if the constraint is declared deferrable. There are the following possible actions for each clause:

NO ACTION

Produce an error indicating that the deletion or update would create a foreign key constraint violation. If the constraint is deferred, this error will be produced at constraint check time if there still exist any referencing rows. This is the default action.

RESTRICT

Produce an error indicating that the deletion or update would create a foreign key constraint violation. This is the same as NO ACTION except that the check is not deferrable.

CASCADE

Delete any rows referencing the deleted row, or update the values of the referencing column(s) to the new values of the referenced columns, respectively.

SET NULL

Set the referencing column(s) to null.

SET DEFAULT

Set the referencing column(s) to their default values. (There must be a row in the referenced table matching the default values, if they are not null, or the operation will fail.)

If the referenced column(s) are changed frequently, it might be wise to add an index to the referencing column(s) so that referential actions associated with the foreign key constraint can be performed more efficiently.
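
A hedged illustration of a foreign key with explicit referential actions (customers and invoices are made-up names, not tables from this page):

CREATE TABLE customers (
    cust_id  integer PRIMARY KEY,
    name     text
);

CREATE TABLE invoices (
    invoice_id  integer PRIMARY KEY,
    cust_id     integer REFERENCES customers (cust_id)
                    ON UPDATE CASCADE
                    ON DELETE RESTRICT,
    total       numeric
);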

DEFERRABLE NOT DEFERRABLE

INITIALLY IMMEDIATE INITIALLY DEFERRED

USING method

WITH ( storage_parameter [= value] [, ... ] )

WITHOUT OIDS

This is backward-compatible syntax for declaring a table WITHOUT OIDS; creating a table WITH OIDS is not supported anymore.

ON COMMIT

The behavior of temporary tables at the end of a transaction block can be controlled using ON COMMIT. The three options are:

PRESERVE ROWS

No special action is taken at the ends of transactions. This is the default behavior.

DELETE ROWS

DROP

The temporary table will be dropped at the end of the current transaction block. When used on a partitioned table, this action drops its partitions and when used on tables with inheritance children, it drops the dependent children.
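
For example, a transaction-scoped staging table (a sketch with hypothetical names):

BEGIN;
CREATE TEMPORARY TABLE staging_rows (
    payload  jsonb
) ON COMMIT DROP;
-- ... load and transform data through staging_rows ...
COMMIT;  -- staging_rows is dropped automatically here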

TABLESPACE tablespace_name

USING INDEX TABLESPACE tablespace_name

Storage Parameters

fillfactor (integer)

The fillfactor for a table is a percentage between 10 and 100. 100 (complete packing) is the default. When a smaller fillfactor is specified, INSERT operations pack table pages only to the indicated percentage; the remaining space on each page is reserved for updating rows on that page. This gives UPDATE a chance to place the updated copy of a row on the same page as the original, which is more efficient than placing it on a different page. For a table whose entries are never updated, complete packing is the best choice, but in heavily updated tables smaller fillfactors are appropriate. This parameter cannot be set for TOAST tables.

toast_tuple_target (integer)

toast_tuple_target specifies the minimum tuple length required before we try to move long column values into TOAST tables, and is also the target length we try to reduce the length below once toasting begins. This only affects columns marked as External or Extended, and applies only to new tuples; it has no effect on existing rows. By default this parameter is set so that at least 4 tuples fit per block, which with the default block size is 2040 bytes. Valid values are between 128 bytes and (block size - header), by default 8160 bytes. Changing this value may not be useful for very short or very long rows. Note that the default setting is often close to optimal, and it is possible that setting this parameter could have negative effects in some cases. This parameter cannot be set for TOAST tables.
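
For example, to start toasting eligible columns at a lower threshold than the default (documents is a hypothetical table name):

CREATE TABLE documents (
    doc_id  integer,
    body    text
) WITH (toast_tuple_target = 256);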

parallel_workers (integer)

autovacuum_enabled, toast.autovacuum_enabled (boolean)

vacuum_index_cleanup, toast.vacuum_index_cleanup (boolean)

vacuum_truncate, toast.vacuum_truncate (boolean)

autovacuum_vacuum_threshold, toast.autovacuum_vacuum_threshold (integer)

autovacuum_vacuum_scale_factor, toast.autovacuum_vacuum_scale_factor (floating point)

autovacuum_analyze_threshold (integer)

autovacuum_analyze_scale_factor (floating point)

autovacuum_vacuum_cost_delay, toast.autovacuum_vacuum_cost_delay (floating point)

autovacuum_vacuum_cost_limit, toast.autovacuum_vacuum_cost_limit (integer)

autovacuum_freeze_min_age, toast.autovacuum_freeze_min_age (integer)

autovacuum_freeze_max_age, toast.autovacuum_freeze_max_age (integer)

autovacuum_freeze_table_age, toast.autovacuum_freeze_table_age (integer)

autovacuum_multixact_freeze_min_age, toast.autovacuum_multixact_freeze_min_age (integer)

autovacuum_multixact_freeze_max_age, toast.autovacuum_multixact_freeze_max_age (integer)

autovacuum_multixact_freeze_table_age, toast.autovacuum_multixact_freeze_table_age (integer)

log_autovacuum_min_duration, toast.log_autovacuum_min_duration (integer)

user_catalog_table (boolean)

Notes

Unique constraints and primary keys are not inherited in the current implementation. This makes the combination of inheritance and unique constraints rather dysfunctional.

A table cannot have more than 1600 columns. (In practice, the effective limit is usually lower because of tuple-length constraints.)

Examples

Create table films and table distributors:

CREATE TABLE films (
    code        char(5) CONSTRAINT firstkey PRIMARY KEY,
    title       varchar(40) NOT NULL,
    did         integer NOT NULL,
    date_prod   date,
    kind        varchar(10),
    len         interval hour to minute
);

CREATE TABLE distributors (
     did    integer PRIMARY KEY GENERATED BY DEFAULT AS IDENTITY,
     name   varchar(40) NOT NULL CHECK (name <> '')
);

Create a table with a 2-dimensional array:

CREATE TABLE array_int (
    vector  int[][]
);

Define a unique table constraint for the table films. Unique table constraints can be defined on one or more columns of the table:

CREATE TABLE films (
    code        char(5),
    title       varchar(40),
    did         integer,
    date_prod   date,
    kind        varchar(10),
    len         interval hour to minute,
    CONSTRAINT production UNIQUE(date_prod)
);

Define a check column constraint:

CREATE TABLE distributors (
    did     integer CHECK (did > 100),
    name    varchar(40)
);

Define a check table constraint:

CREATE TABLE distributors (
    did     integer,
    name    varchar(40),
    CONSTRAINT con1 CHECK (did > 100 AND name <> '')
);

Define a primary key table constraint for the table films:

CREATE TABLE films (
    code        char(5),
    title       varchar(40),
    did         integer,
    date_prod   date,
    kind        varchar(10),
    len         interval hour to minute,
    CONSTRAINT code_title PRIMARY KEY(code,title)
);

Define a primary key constraint for table distributors. The following two examples are equivalent, the first using the table constraint syntax, the second the column constraint syntax:

CREATE TABLE distributors (
    did     integer,
    name    varchar(40),
    PRIMARY KEY(did)
);

CREATE TABLE distributors (
    did     integer PRIMARY KEY,
    name    varchar(40)
);

Assign a literal constant default value for the column name, arrange for the default value of column did to be generated by selecting the next value of a sequence object, and make the default value of modtime be the time at which the row is inserted:

CREATE TABLE distributors (
    name      varchar(40) DEFAULT 'Luso Films',
    did       integer DEFAULT nextval('distributors_serial'),
    modtime   timestamp DEFAULT current_timestamp
);

Define two NOT NULL column constraints on the table distributors, one of which is explicitly given a name:

CREATE TABLE distributors (
    did     integer CONSTRAINT no_null NOT NULL,
    name    varchar(40) NOT NULL
);

Define a unique constraint for the name column:

CREATE TABLE distributors (
    did     integer,
    name    varchar(40) UNIQUE
);

The same, specified as a table constraint:

CREATE TABLE distributors (
    did     integer,
    name    varchar(40),
    UNIQUE(name)
);

Create the same table, specifying 70% fill factor for both the table and its unique index:

CREATE TABLE distributors (
    did     integer,
    name    varchar(40),
    UNIQUE(name) WITH (fillfactor=70)
)
WITH (fillfactor=70);

Create table circles with an exclusion constraint that prevents any two circles from overlapping:

CREATE TABLE circles (
    c circle,
    EXCLUDE USING gist (c WITH &&)
);

Create table cinemas in tablespace diskvol1:

CREATE TABLE cinemas (
        id serial,
        name text,
        location text
) TABLESPACE diskvol1;

Create a composite type and a typed table:

CREATE TYPE employee_type AS (name text, salary numeric);

CREATE TABLE employees OF employee_type (
    PRIMARY KEY (name),
    salary WITH OPTIONS DEFAULT 1000
);

Create a range partitioned table:

CREATE TABLE measurement (
    logdate         date not null,
    peaktemp        int,
    unitsales       int
) PARTITION BY RANGE (logdate);

Create a range partitioned table with multiple columns in the partition key:

CREATE TABLE measurement_year_month (
    logdate         date not null,
    peaktemp        int,
    unitsales       int
) PARTITION BY RANGE (EXTRACT(YEAR FROM logdate), EXTRACT(MONTH FROM logdate));

Create a list partitioned table:

CREATE TABLE cities (
    city_id      bigserial not null,
    name         text not null,
    population   bigint
) PARTITION BY LIST (left(lower(name), 1));

Create a hash partitioned table:

CREATE TABLE orders (
    order_id     bigint not null,
    cust_id      bigint not null,
    status       text
) PARTITION BY HASH (order_id);

Create partition of a range partitioned table:

CREATE TABLE measurement_y2016m07
    PARTITION OF measurement (
    unitsales DEFAULT 0
) FOR VALUES FROM ('2016-07-01') TO ('2016-08-01');

Create a few partitions of a range partitioned table with multiple columns in the partition key:

CREATE TABLE measurement_ym_older
    PARTITION OF measurement_year_month
    FOR VALUES FROM (MINVALUE, MINVALUE) TO (2016, 11);

CREATE TABLE measurement_ym_y2016m11
    PARTITION OF measurement_year_month
    FOR VALUES FROM (2016, 11) TO (2016, 12);

CREATE TABLE measurement_ym_y2016m12
    PARTITION OF measurement_year_month
    FOR VALUES FROM (2016, 12) TO (2017, 01);

CREATE TABLE measurement_ym_y2017m01
    PARTITION OF measurement_year_month
    FOR VALUES FROM (2017, 01) TO (2017, 02);

Create partition of a list partitioned table:

CREATE TABLE cities_ab
    PARTITION OF cities (
    CONSTRAINT city_id_nonzero CHECK (city_id != 0)
) FOR VALUES IN ('a', 'b');

Create partition of a list partitioned table that is itself further partitioned and then add a partition to it:

CREATE TABLE cities_ab
    PARTITION OF cities (
    CONSTRAINT city_id_nonzero CHECK (city_id != 0)
) FOR VALUES IN ('a', 'b') PARTITION BY RANGE (population);

CREATE TABLE cities_ab_10000_to_100000
    PARTITION OF cities_ab FOR VALUES FROM (10000) TO (100000);

Create partitions of a hash partitioned table:

CREATE TABLE orders_p1 PARTITION OF orders
    FOR VALUES WITH (MODULUS 4, REMAINDER 0);
CREATE TABLE orders_p2 PARTITION OF orders
    FOR VALUES WITH (MODULUS 4, REMAINDER 1);
CREATE TABLE orders_p3 PARTITION OF orders
    FOR VALUES WITH (MODULUS 4, REMAINDER 2);
CREATE TABLE orders_p4 PARTITION OF orders
    FOR VALUES WITH (MODULUS 4, REMAINDER 3);

Create a default partition:

CREATE TABLE cities_partdef
    PARTITION OF cities DEFAULT;

Compatibility

The CREATE TABLE command conforms to the SQL standard, with exceptions listed below.

Temporary Tables

Although the syntax of CREATE TEMPORARY TABLE resembles that of the SQL standard, the effect is not the same. In the standard, temporary tables are defined just once and automatically exist (starting with empty contents) in every session that needs them. PostgreSQL instead requires each session to issue its own CREATE TEMPORARY TABLE command for each temporary table to be used. This allows different sessions to use the same temporary table name for different purposes, whereas the standard's approach constrains all instances of a given temporary table name to have the same table structure.

The standard's definition of the behavior of temporary tables is widely ignored. PostgreSQL's behavior on this point is similar to that of several other SQL databases.

The SQL standard also distinguishes between global and local temporary tables, where a local temporary table has a separate set of contents for each SQL module within each session, though its definition is still shared across sessions. Since PostgreSQL does not support SQL modules, this distinction is not relevant in PostgreSQL.

For compatibility's sake, PostgreSQL will accept the GLOBAL and LOCAL keywords in a temporary table declaration, but they currently have no effect. Use of these keywords is discouraged, since future versions of PostgreSQL might adopt a more standard-compliant interpretation of their meaning.

The ON COMMIT clause for temporary tables also resembles the SQL standard, but has some differences. If the ON COMMIT clause is omitted, SQL specifies that the default behavior is ON COMMIT DELETE ROWS. However, the default behavior in PostgreSQL is ON COMMIT PRESERVE ROWS. The ON COMMIT DROP option does not exist in SQL.
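
A short sketch of the difference (both table names are invented); the SQL-standard default has to be requested explicitly in PostgreSQL:

CREATE TEMPORARY TABLE session_scratch (
    note    text
);                                  -- PostgreSQL default: ON COMMIT PRESERVE ROWS

CREATE TEMPORARY TABLE session_log (
    note    text
) ON COMMIT DELETE ROWS;            -- emulates the SQL-standard default behavior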

Non-Deferred Uniqueness Constraints

When a UNIQUE or PRIMARY KEY constraint is not deferrable, PostgreSQL checks for uniqueness immediately whenever a row is inserted or modified. The SQL standard says that uniqueness should be enforced only at the end of the statement; this makes a difference when, for example, a single command updates multiple key values. To obtain standard-compliant behavior, declare the constraint as DEFERRABLE but not deferred (i.e., INITIALLY IMMEDIATE). Be aware that this can be significantly slower than immediate uniqueness checking.
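
For example, a minimal sketch of that work-around (the table and constraint names are invented):

CREATE TABLE queue_slots (
    slot    integer CONSTRAINT slot_unique UNIQUE DEFERRABLE INITIALLY IMMEDIATE
);

INSERT INTO queue_slots VALUES (1), (2), (3);

-- With a non-deferrable constraint this UPDATE could fail part-way through,
-- depending on physical row order; with DEFERRABLE INITIALLY IMMEDIATE the
-- uniqueness check runs once at the end of the statement, as the standard requires.
UPDATE queue_slots SET slot = slot + 1;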

Column Check Constraints

The SQL standard says that CHECK column constraints can only refer to the column they apply to; only CHECK table constraints can refer to multiple columns. PostgreSQL does not enforce this restriction; it treats column and table check constraints alike.
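
A minimal sketch (with an invented table): PostgreSQL accepts the following even though the CHECK is written as a column constraint on sale_price yet also refers to list_price; per the standard it would have to be written as a table constraint:

CREATE TABLE discounted_items (
    list_price  numeric,
    sale_price  numeric CHECK (sale_price <= list_price)   -- accepted by PostgreSQL, not portable
);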

EXCLUDE Constraint

The EXCLUDE constraint type is a PostgreSQL extension.

NULL “Constraint”

The NULL “constraint” (actually a non-constraint) is a PostgreSQL extension to the SQL standard that is included for compatibility with some other database systems (and for symmetry with the NOT NULL constraint). Since it is the default for any column, its presence is simply noise.
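
For example, in the hypothetical table below the explicit NULL adds nothing; both columns are nullable:

CREATE TABLE null_demo (
    a   integer NULL,       -- same as writing no constraint at all
    b   integer             -- nullable by default
);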

Constraint Naming

The SQL standard says that table and domain constraints must have names that are unique across the schema containing the table or domain. PostgreSQL is laxer: it only requires constraint names to be unique across the constraints attached to a particular table or domain. However, this extra freedom does not exist for index-based constraints (UNIQUE, PRIMARY KEY, and EXCLUDE constraints), because the associated index is named the same as the constraint, and index names must be unique across all relations within the same schema.

Currently, PostgreSQL does not record names for NOT NULL constraints at all, so they are not subject to the uniqueness restriction. This might change in a future release.

Inheritance

Multiple inheritance via the INHERITS clause is a PostgreSQL language extension. SQL:1999 and later define single inheritance using a different syntax and different semantics. SQL:1999-style inheritance is not yet supported by PostgreSQL.

Zero-Column Tables

PostgreSQL allows a table of no columns to be created (for example, CREATE TABLE foo();). This is an extension from the SQL standard, which does not allow zero-column tables. Zero-column tables are not in themselves very useful, but disallowing them creates odd special cases for ALTER TABLE DROP COLUMN, so it seems cleaner to ignore this spec restriction.

Multiple Identity Columns

PostgreSQL allows a table to have more than one identity column. The standard specifies that a table can have at most one identity column. This is relaxed mainly to give more flexibility for doing schema changes or migrations. Note that the INSERT command supports only one override clause that applies to the entire statement, so having multiple identity columns with different behaviors is not well supported.
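
For instance, PostgreSQL accepts a definition such as the following sketch (table and column names are invented); note that a single OVERRIDING clause in an INSERT applies to both identity columns at once:

CREATE TABLE id_migration (
    legacy_id   bigint GENERATED BY DEFAULT AS IDENTITY,
    new_id      bigint GENERATED ALWAYS AS IDENTITY,
    payload     text
);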

Generated Columns

The option STORED is not standard but is also used by other SQL implementations. The SQL standard does not specify the storage of generated columns.

LIKE Clause

While a LIKE clause exists in the SQL standard, many of the options that PostgreSQL accepts for it are not in the standard, and some of the standard's options are not implemented by PostgreSQL.

WITH Clause

The WITH clause is a PostgreSQL extension; storage parameters are not in the standard.

Tablespaces

The PostgreSQL concept of tablespaces is not part of the standard. Hence, the clauses TABLESPACE and USING INDEX TABLESPACE are extensions.

Typed Tables

Typed tables implement a subset of the SQL standard. According to the standard, a typed table has columns corresponding to the underlying composite type as well as one other column that is the “self-referencing column”. PostgreSQL does not support self-referencing columns explicitly.

PARTITION BY Clause

The PARTITION BY clause is a PostgreSQL extension.

PARTITION OF Clause

The PARTITION OF clause is a PostgreSQL extension.

See Also

ALTER TABLE, DROP TABLE, CREATE TABLE AS, CREATE TABLESPACE, CREATE TYPE

The autovacuum daemon cannot access temporary tables and therefore cannot vacuum or analyze them. For this reason, appropriate vacuum and analyze operations should be performed via session SQL commands. For example, if a temporary table is going to be used in complex queries, it is a good idea to run ANALYZE on the temporary table after it is populated.
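
For instance, a minimal sketch reusing the orders table from the examples above (the temporary table name is invented for illustration):

CREATE TEMPORARY TABLE open_orders (LIKE orders);

INSERT INTO open_orders
    SELECT * FROM orders WHERE status = 'open';

ANALYZE open_orders;    -- autovacuum cannot reach temporary tables, so collect statistics manually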

GLOBAL or LOCAL can optionally be written before TEMPORARY or TEMP. This currently makes no difference in PostgreSQL and is deprecated; see Compatibility.

If specified, the table is created as an unlogged table. Data written to unlogged tables is not written to the write-ahead log (see ), which makes them considerably faster than ordinary tables. However, they are not crash-safe: an unlogged table is automatically truncated after a crash or unclean shutdown. The contents of an unlogged table are also not replicated to standby servers. Any indexes created on an unlogged table are automatically unlogged as well.

The data type of the column. This can include array specifiers. For more information on the data types supported by PostgreSQL, refer to Chapter 8.

Range and list partitioning require a btree operator class, while hash partitioning requires a hash operator class. If no operator class is specified explicitly, the default operator class of the appropriate type will be used; if no default operator class exists, an error will be raised. When hash partitioning is used, the operator class used must implement support function 2 (see for details).

See Section 5.11 for more discussion on table partitioning.

When creating a range partition, the lower bound specified with FROM is an inclusive bound, whereas the upper bound specified with TO is an exclusive bound. That is, the values specified in the FROM list are valid values of the corresponding partition key columns for this partition, whereas those in the TO list are not. Note that this statement must be understood according to the rules of row-wise comparison (). For example, given PARTITION BY RANGE (x,y), a partition bound FROM (1, 2) TO (3, 4) allows x=1 with any y>=2, x=2 with any non-null y, and x=3 with any y<4.
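
As a sketch of that example (the table xy and its partition are hypothetical, created only to illustrate the bound rules):

CREATE TABLE xy (
    x   integer,
    y   integer
) PARTITION BY RANGE (x, y);

CREATE TABLE xy_1_2_to_3_4 PARTITION OF xy
    FOR VALUES FROM (1, 2) TO (3, 4);

INSERT INTO xy VALUES (1, 2);     -- accepted: the FROM bound is inclusive
INSERT INTO xy VALUES (2, -10);   -- accepted: x=2 with any non-null y
INSERT INTO xy VALUES (3, 3);     -- accepted: x=3 with y < 4
INSERT INTO xy VALUES (3, 4);     -- error: the TO bound is exclusive (no partition found for row)
INSERT INTO xy VALUES (1, 1);     -- error: below the lower bound (no partition found for row)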

STORAGE settings for the copied column definitions will be copied. The default behavior is to exclude STORAGE settings, resulting in the copied columns in the new table having type-specific default settings. For more on STORAGE settings, see .

Currently, CHECK expressions cannot contain subqueries nor refer to variables other than columns of the current row (see ). The system column tableoid may be referenced, but not any other system column.

The clauses ALWAYS and BY DEFAULT determine how the sequence value is given precedence over a user-specified value in an INSERT statement. If ALWAYS is specified, a user-specified value is only accepted if the INSERT statement specifies OVERRIDING SYSTEM VALUE. If BY DEFAULT is specified, then the user-specified value takes precedence. See INSERT for details. (In the COPY command, user-specified values are always used regardless of this setting.)
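
A short sketch of the ALWAYS behavior (the tickets table is invented for illustration):

CREATE TABLE tickets (
    ticket_no   integer GENERATED ALWAYS AS IDENTITY,
    subject     text
);

INSERT INTO tickets (subject) VALUES ('printer jam');        -- ticket_no is taken from the sequence

INSERT INTO tickets (ticket_no, subject)
    VALUES (1000, 'imported');                               -- error: the identity column is ALWAYS

INSERT INTO tickets (ticket_no, subject)
    OVERRIDING SYSTEM VALUE VALUES (1000, 'imported');       -- accepted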

The optional sequence_options clause can be used to override the options of the sequence. See CREATE SEQUENCE for details.

The EXCLUDE clause defines an exclusion constraint, which guarantees that if any two rows are compared on the specified column(s) or expression(s) using the specified operator(s), not all of these comparisons will return TRUE. If all of the specified operators test for equality, this is equivalent to a UNIQUE constraint, although an ordinary unique constraint will be faster. However, exclusion constraints can specify constraints that are more general than simple equality. For example, you can specify a constraint that no two rows in the table contain overlapping circles (see ) by using the && operator.

Exclusion constraints are implemented using an index, so each specified operator must be associated with an appropriate operator class (see ) for the index access method index_method. The operators are required to be commutative. Each exclude_element can optionally specify an operator class and/or ordering options; these are described fully under CREATE INDEX.

The access method must support amgettuple (see ); at present this means GIN cannot be used. Although it's allowed, there is little point in using B-tree or hash indexes with an exclusion constraint, because this does nothing that an ordinary unique constraint doesn't do better. So in practice the access method will always be GiST or SP-GiST.

This controls whether the constraint can be deferred. A constraint that is not deferrable will be checked immediately after every command. Checking of constraints that are deferrable can be postponed until the end of the transaction (using the SET CONSTRAINTS command). NOT DEFERRABLE is the default. Currently, only UNIQUE, PRIMARY KEY, EXCLUDE, and REFERENCES (foreign key) constraints accept this clause. NOT NULL and CHECK constraints are not deferrable. Note that deferrable constraints cannot be used as conflict arbitrators in an INSERT statement that includes an ON CONFLICT DO UPDATE clause.

If a constraint is deferrable, this clause specifies the default time to check the constraint. If the constraint is INITIALLY IMMEDIATE, it is checked after each statement. This is the default. If the constraint is INITIALLY DEFERRED, it is checked only at the end of the transaction. The constraint check time can be altered with the SET CONSTRAINTS command.
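
For example, the following sketch (table, column, and data invented for illustration) swaps two unique values across two statements, which only works because the check is deferred until commit:

CREATE TABLE seat_bookings (
    booking_id  integer PRIMARY KEY,
    seat_no     integer UNIQUE DEFERRABLE INITIALLY IMMEDIATE
);

INSERT INTO seat_bookings VALUES (1, 10), (2, 11);

BEGIN;
SET CONSTRAINTS ALL DEFERRED;                                 -- postpone the uniqueness check
UPDATE seat_bookings SET seat_no = 11 WHERE booking_id = 1;   -- seat 11 is duplicated for a moment
UPDATE seat_bookings SET seat_no = 10 WHERE booking_id = 2;   -- duplication resolved
COMMIT;                                                       -- the constraint is verified here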

This optional clause specifies the table access method to use to store the contents for the new table; the method needs to be an access method of type TABLE. See for more information. If this option is not specified, the default table access method is chosen for the new table. See default_table_access_method for more information.
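
For example (heap is the built-in default table access method, so spelling it out as below is equivalent to omitting the clause; the table name is invented):

CREATE TABLE events (
    id      bigint,
    note    text
) USING heap;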

This clause specifies optional storage parameters for a table or index; see for more information. For backward compatibility the WITH clause for a table can also include OIDS=FALSE to specify that rows of the new table should not contain OIDs (object identifiers); OIDS=TRUE is not supported anymore.

All rows in the temporary table will be deleted at the end of each transaction block. Essentially, an automatic TRUNCATE is done at each commit. When used on a partitioned table, this is not cascaded to its partitions.

The tablespace_name is the name of the tablespace in which the new table is to be created. If not specified, default_tablespace is consulted, or temp_tablespaces if the table is temporary. For partitioned tables, since no storage is required for the table itself, the tablespace specified overrides default_tablespace as the default tablespace to use for any newly created partitions when no other tablespace is explicitly specified.

This clause allows selection of the tablespace in which the index associated with a UNIQUE, PRIMARY KEY, or EXCLUDE constraint will be created. If not specified, default_tablespace is consulted, or temp_tablespaces if the table is temporary.

The WITH clause can specify storage parameters for tables, and for indexes associated with a UNIQUE, PRIMARY KEY, or EXCLUDE constraint. Storage parameters for indexes are documented in CREATE INDEX. The storage parameters currently available for tables are listed below. For many of these parameters, as shown, there is an additional parameter with the same name prefixed with toast., which controls the behavior of the table's secondary TOAST table, if any (see for more information about TOAST). If a table parameter value is set and the equivalent toast. parameter is not, the TOAST table will use the table's parameter value. Specifying these parameters for partitioned tables is not supported, but you may specify them for individual leaf partitions.
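
A brief sketch of the toast. prefix (the table name and parameter values are arbitrary illustrations):

CREATE TABLE archived_docs (
    id      bigint,
    body    text
) WITH (
    fillfactor = 90,
    autovacuum_enabled = false,        -- applies to the main table
    toast.autovacuum_enabled = false   -- applies to the table's TOAST table
);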

This sets the number of workers that should be used to assist a parallel scan of this table. If not set, the system will determine a value based on the relation size. The actual number of workers chosen by the planner, or by utility statements that use parallel scans, may be smaller, for example because of the setting of max_worker_processes.

Enables or disables the autovacuum daemon for a particular table. If true, the autovacuum daemon will perform automatic VACUUM and/or ANALYZE operations on this table following the rules discussed in . If false, this table will not be autovacuumed, except to prevent transaction ID wraparound. See for more about wraparound prevention. Note that the autovacuum daemon does not run at all (except to prevent transaction ID wraparound) if the autovacuum parameter is false; setting individual tables' storage parameters does not override that. Therefore there is seldom much point in explicitly setting this storage parameter to true, only to false.

Enables or disables index cleanup when VACUUM is run on this table. The default value is true. Disabling index cleanup can speed up VACUUM very significantly, but may also lead to severely bloated indexes if table modifications are frequent. The INDEX_CLEANUP parameter of VACUUM, if specified, overrides the value of this option.

Enables or disables vacuum to try to truncate off any empty pages at the end of this table. The default value is true. If true, VACUUM and autovacuum do the truncation and the disk space for the truncated pages is returned to the operating system. Note that the truncation requires ACCESS EXCLUSIVE lock on the table. The TRUNCATE parameter of VACUUM, if specified, overrides the value of this option.

Per-table value for autovacuum_vacuum_threshold parameter.

Per-table value for autovacuum_vacuum_scale_factor parameter.

Per-table value for autovacuum_analyze_threshold parameter.

Per-table value for autovacuum_analyze_scale_factor parameter.

Per-table value for autovacuum_vacuum_cost_delay parameter.

Per-table value for autovacuum_vacuum_cost_limit parameter.

Per-table value for vacuum_freeze_min_age parameter. Note that autovacuum will ignore per-table autovacuum_freeze_min_age parameters that are larger than half the system-wide autovacuum_freeze_max_age setting.

Per-table value for autovacuum_freeze_max_age parameter. Note that autovacuum will ignore per-table autovacuum_freeze_max_age parameters that are larger than the system-wide setting (it can only be set smaller).

Per-table value for vacuum_freeze_table_age parameter.

Per-table value for vacuum_multixact_freeze_min_age parameter. Note that autovacuum will ignore per-table autovacuum_multixact_freeze_min_age parameters that are larger than half the system-wide autovacuum_multixact_freeze_max_age setting.

Per-table value for autovacuum_multixact_freeze_max_age parameter. Note that autovacuum will ignore per-table autovacuum_multixact_freeze_max_age parameters that are larger than the system-wide setting (it can only be set smaller).

Per-table value for vacuum_multixact_freeze_table_age parameter.

Per-table value for log_autovacuum_min_duration parameter.

Declare the table as an additional catalog table for purposes of logical replication. See for details. This parameter cannot be set for TOAST tables.

PostgreSQL automatically creates an index for each unique constraint and primary key constraint to enforce uniqueness. Thus, it is not necessary to create an index explicitly for primary key columns. (See for more information.)
