65.3. Extensibility

SP-GiST offers an interface with a high level of abstraction, requiring the access method developer to implement only methods specific to a given data type. The SP-GiST core is responsible for efficient disk mapping and searching the tree structure. It also takes care of concurrency and logging considerations.

Leaf tuples of an SP-GiST tree contain values of the same data type as the indexed column. Leaf tuples at the root level will always contain the original indexed data value, but leaf tuples at lower levels might contain only a compressed representation, such as a suffix. In that case the operator class support functions must be able to reconstruct the original value using information accumulated from the inner tuples that are passed through to reach the leaf level.

Inner tuples are more complex, since they are branching points in the search tree. Each inner tuple contains a set of one or more nodes, which represent groups of similar leaf values. A node contains a downlink that leads either to another, lower-level inner tuple, or to a short list of leaf tuples that all lie on the same index page. Each node normally has a label that describes it; for example, in a radix tree the node label could be the next character of the string value. (Alternatively, an operator class can omit the node labels, if it works with a fixed set of nodes for all inner tuples; see Section 63.4.2.) Optionally, an inner tuple can have a prefix value that describes all its members. In a radix tree this could be the common prefix of the represented strings. The prefix value is not necessarily really a prefix, but can be any data needed by the operator class; for example, in a quad-tree it can store the central point that the four quadrants are measured with respect to. A quad-tree inner tuple would then also contain four nodes corresponding to the quadrants around this central point.

Some tree algorithms require knowledge of level (or depth) of the current tuple, so the SP-GiST core provides the possibility for operator classes to manage level counting while descending the tree. There is also support for incrementally reconstructing the represented value when that is needed, and for passing down additional data (called traverse values) during a tree descent.

Note

The SP-GiST core code takes care of null entries. Although SP-GiST indexes do store entries for nulls in indexed columns, this is hidden from the index operator class code: no null index entries or search conditions will ever be passed to the operator class methods. (It is assumed that SP-GiST operators are strict and so cannot succeed for null values.) Null values are therefore not discussed further here.

There are five user-defined methods that an index operator class for SP-GiST must provide. All five follow the convention of accepting two internal arguments, the first of which is a pointer to a C struct containing input values for the support method, while the second argument is a pointer to a C struct where output values must be placed. Four of the methods just return void, since all their results appear in the output struct; but leaf_consistent additionally returns a boolean result. The methods must not modify any fields of their input structs. In all cases, the output struct is initialized to zeroes before calling the user-defined method.

The five user-defined methods are:

config

Returns static information about the index implementation, including the data type OIDs of the prefix and node label data types.

The SQL declaration of the function must look like this:

CREATE FUNCTION my_config(internal, internal) RETURNS void ...

The first argument is a pointer to a spgConfigIn C struct, containing input data for the function. The second argument is a pointer to a spgConfigOut C struct, which the function must fill with result data.

typedef struct spgConfigIn
{
    Oid         attType;        /* Data type to be indexed */
} spgConfigIn;

typedef struct spgConfigOut
{
    Oid         prefixType;     /* Data type of inner-tuple prefixes */
    Oid         labelType;      /* Data type of inner-tuple node labels */
    bool        canReturnData;  /* Opclass can reconstruct original data */
    bool        longValuesOK;   /* Opclass can cope with values > 1 page */
} spgConfigOut;

attType is passed in order to support polymorphic index operator classes; for ordinary fixed-data-type operator classes, it will always have the same value and so can be ignored.

For operator classes that do not use prefixes, prefixType can be set to VOIDOID. Likewise, for operator classes that do not use node labels, labelType can be set to VOIDOID. canReturnData should be set true if the operator class is capable of reconstructing the originally-supplied index value. longValuesOK should be set true only when the attType is of variable length and the operator class is capable of segmenting long values by repeated suffixing (see Section 63.4.1).
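To make the calling convention concrete, the sketches in this section use a hypothetical trie-style operator class over int4, in which an inner tuple at level N routes a value through the node labeled with bits 2N and 2N+1 of its representation. The function names (my_config and so on) and the labeling scheme are illustrative assumptions, not something defined by the manual. A minimal config method for that layout could look like this:

#include "postgres.h"
#include "fmgr.h"
#include "access/spgist.h"
#include "catalog/pg_type.h"

PG_FUNCTION_INFO_V1(my_config);

Datum
my_config(PG_FUNCTION_ARGS)
{
    /* The spgConfigIn argument only matters for polymorphic operator classes. */
    spgConfigOut *cfg = (spgConfigOut *) PG_GETARG_POINTER(1);

    cfg->prefixType = VOIDOID;     /* this operator class stores no prefixes */
    cfg->labelType = INT2OID;      /* node labels are small integers (0..3) */
    cfg->canReturnData = true;     /* leaf tuples hold the original int4 value */
    cfg->longValuesOK = false;     /* int4 is fixed-length, never exceeds a page */
    PG_RETURN_VOID();
}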

choose

Chooses a method for inserting a new value into an inner tuple.

The SQL declaration of the function must look like this:

CREATE FUNCTION my_choose(internal, internal) RETURNS void ...

The first argument is a pointer to a spgChooseIn C struct, containing input data for the function. The second argument is a pointer to a spgChooseOut C struct, which the function must fill with result data.

typedef struct spgChooseIn
{
    Datum       datum;          /* original datum to be indexed */
    Datum       leafDatum;      /* current datum to be stored at leaf */
    int         level;          /* current level (counting from zero) */

    /* Data from current inner tuple */
    bool        allTheSame;     /* tuple is marked all-the-same? */
    bool        hasPrefix;      /* tuple has a prefix? */
    Datum       prefixDatum;    /* if so, the prefix value */
    int         nNodes;         /* number of nodes in the inner tuple */
    Datum      *nodeLabels;     /* node label values (NULL if none) */
} spgChooseIn;

typedef enum spgChooseResultType
{
    spgMatchNode = 1,           /* descend into existing node */
    spgAddNode,                 /* add a node to the inner tuple */
    spgSplitTuple               /* split inner tuple (change its prefix) */
} spgChooseResultType;

typedef struct spgChooseOut
{
    spgChooseResultType resultType;     /* action code, see above */
    union
    {
        struct                  /* results for spgMatchNode */
        {
            int         nodeN;      /* descend to this node (index from 0) */
            int         levelAdd;   /* increment level by this much */
            Datum       restDatum;  /* new leaf datum */
        }           matchNode;
        struct                  /* results for spgAddNode */
        {
            Datum       nodeLabel;  /* new node's label */
            int         nodeN;      /* where to insert it (index from 0) */
        }           addNode;
        struct                  /* results for spgSplitTuple */
        {
            /* Info to form new upper-level inner tuple with one child tuple */
            bool        prefixHasPrefix;    /* tuple should have a prefix? */
            Datum       prefixPrefixDatum;  /* if so, its value */
            int         prefixNNodes;       /* number of nodes */
            Datum      *prefixNodeLabels;   /* their labels (or NULL for
                                             * no labels) */
            int         childNodeN;         /* which node gets child tuple */

            /* Info to form new lower-level inner tuple with all old nodes */
            bool        postfixHasPrefix;   /* tuple should have a prefix? */
            Datum       postfixPrefixDatum; /* if so, its value */
        }           splitTuple;
    }           result;
} spgChooseOut;

datum is the original datum that was to be inserted into the index. leafDatum is initially the same as datum, but can change at lower levels of the tree if the choose or picksplit methods change it. When the insertion search reaches a leaf page, the current value of leafDatum is what will be stored in the newly created leaf tuple. level is the current inner tuple's level, starting at zero for the root level. allTheSame is true if the current inner tuple is marked as containing multiple equivalent nodes (see Section 63.4.3). hasPrefix is true if the current inner tuple contains a prefix; if so, prefixDatum is its value. nNodes is the number of child nodes contained in the inner tuple, and nodeLabels is an array of their label values, or NULL if there are no labels.

The choose function can determine either that the new value matches one of the existing child nodes, or that a new child node must be added, or that the new value is inconsistent with the tuple prefix and so the inner tuple must be split to create a less restrictive prefix.

If the new value matches one of the existing child nodes, set resultType to spgMatchNode. Set nodeN to the index (from zero) of that node in the node array. Set levelAdd to the increment in level caused by descending through that node, or leave it as zero if the operator class does not use levels. Set restDatum to equal datum if the operator class does not modify datums from one level to the next, or otherwise set it to the modified value to be used as leafDatum at the next level.

If a new child node must be added, set resultType to spgAddNode. Set nodeLabel to the label to be used for the new node, and set nodeN to the index (from zero) at which to insert the node in the node array. After the node has been added, the choose function will be called again with the modified inner tuple; that call should result in an spgMatchNode result.

If the new value is inconsistent with the tuple prefix, set resultType to spgSplitTuple. This action moves all the existing nodes into a new lower-level inner tuple, and replaces the existing inner tuple with a tuple having a single downlink pointing to the new lower-level inner tuple. Set prefixHasPrefix to indicate whether the new upper tuple should have a prefix, and if so set prefixPrefixDatum to the prefix value. This new prefix value must be sufficiently less restrictive than the original to accept the new value to be indexed. Set prefixNNodes to the number of nodes needed in the new tuple, and set prefixNodeLabels to a palloc'd array holding their labels, or to NULL if node labels are not required. Note that the total size of the new upper tuple must be no more than the total size of the tuple it is replacing; this constrains the lengths of the new prefix and new labels. Set childNodeN to the index (from zero) of the node that will downlink to the new lower-level inner tuple. Set postfixHasPrefix to indicate whether the new lower-level inner tuple should have a prefix, and if so set postfixPrefixDatum to the prefix value. The combination of these two prefixes and the downlink node's label (if any) must have the same meaning as the original prefix, because there is no opportunity to alter the node labels that are moved to the new lower-level tuple, nor to change any child index entries. After the node has been split, the choose function will be called again with the replacement inner tuple. That call may return an spgAddNode result, if no suitable node was created by the spgSplitTuple action. Eventually choose must return spgMatchNode to allow the insertion to descend to the next level.
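As a sketch of this decision logic for the hypothetical int4 trie introduced with the config example, the following choose function only ever returns spgMatchNode or spgAddNode; it never needs spgSplitTuple because the operator class stores no prefixes, and for brevity it does not handle allTheSame inner tuples, which a real operator class must.

#include "postgres.h"
#include "fmgr.h"
#include "access/spgist.h"

PG_FUNCTION_INFO_V1(my_choose);

/* Two bits of the value, selected by tree level; levels past 15 collapse to 0. */
static int16
my_node_label(int32 value, int level)
{
    if (level >= 16)
        return 0;
    return (int16) ((((uint32) value) >> (2 * level)) & 3);
}

Datum
my_choose(PG_FUNCTION_ARGS)
{
    spgChooseIn  *in = (spgChooseIn *) PG_GETARG_POINTER(0);
    spgChooseOut *out = (spgChooseOut *) PG_GETARG_POINTER(1);
    int16         label = my_node_label(DatumGetInt32(in->leafDatum), in->level);
    int           i;

    /* NOTE: a production opclass must also follow the special rules for
     * allTheSame inner tuples (see Section 63.4.3); this sketch ignores them. */
    for (i = 0; i < in->nNodes; i++)
    {
        if (DatumGetInt16(in->nodeLabels[i]) == label)
        {
            /* A node with the right label already exists: descend into it. */
            out->resultType = spgMatchNode;
            out->result.matchNode.nodeN = i;
            out->result.matchNode.levelAdd = 1;
            out->result.matchNode.restDatum = in->leafDatum;   /* value unchanged */
            PG_RETURN_VOID();
        }
    }

    /* No matching node yet: ask the core to add one; choose is then re-called. */
    out->resultType = spgAddNode;
    out->result.addNode.nodeLabel = Int16GetDatum(label);
    out->result.addNode.nodeN = in->nNodes;    /* append after the existing nodes */
    PG_RETURN_VOID();
}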

picksplit

Decides how to create a new inner tuple over a set of leaf tuples.

The SQL declaration of the function must look like this:

CREATE FUNCTION my_picksplit(internal, internal) RETURNS void ...

The first argument is a pointer to a spgPickSplitIn C struct, containing input data for the function. The second argument is a pointer to a spgPickSplitOut C struct, which the function must fill with result data.

typedef struct spgPickSplitIn
{
    int         nTuples;        /* number of leaf tuples */
    Datum      *datums;         /* their datums (array of length nTuples) */
    int         level;          /* current level (counting from zero) */
} spgPickSplitIn;

typedef struct spgPickSplitOut
{
    bool        hasPrefix;      /* new inner tuple should have a prefix? */
    Datum       prefixDatum;    /* if so, its value */

    int         nNodes;         /* number of nodes for new inner tuple */
    Datum      *nodeLabels;     /* their labels (or NULL for no labels) */

    int        *mapTuplesToNodes;   /* node index for each leaf tuple */
    Datum      *leafTupleDatums;    /* datum to store in each new leaf tuple */
} spgPickSplitOut;

nTuples is the number of leaf tuples provided. datums is an array of their datum values. level is the current level that all the leaf tuples share, which will become the level of the new inner tuple.

Set hasPrefix to indicate whether the new inner tuple should have a prefix, and if so set prefixDatum to the prefix value. Set nNodes to indicate the number of nodes that the new inner tuple will contain, and set nodeLabels to an array of their label values, or to NULL if node labels are not required. Set mapTuplesToNodes to an array that gives the index (from zero) of the node that each leaf tuple should be assigned to. Set leafTupleDatums to an array of the values to be stored in the new leaf tuples (these will be the same as the input datums if the operator class does not modify datums from one level to the next). Note that the picksplit function is responsible for palloc'ing the nodeLabels, mapTuplesToNodes, and leafTupleDatums arrays.

If more than one leaf tuple is supplied, it is expected that the picksplit function will classify them into more than one node; otherwise it is not possible to split the leaf tuples across multiple pages, which is the ultimate purpose of this operation. Therefore, if the picksplit function ends up placing all the leaf tuples in the same node, the core SP-GiST code will override that decision and generate an inner tuple in which the leaf tuples are assigned at random to several identically-labeled nodes. Such a tuple is marked allTheSame to signify that this has happened. The choose and inner_consistent functions must take suitable care with such inner tuples. See Section 63.4.3 for more information.

picksplit can be applied to a single leaf tuple only in the case that the config function set longValuesOK to true and a larger-than-a-page input value has been supplied. In this case the point of the operation is to strip off a prefix and produce a new, shorter leaf datum value. The call will be repeated until a leaf datum short enough to fit on a page has been produced. See Section 63.4.1 for more information.
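Continuing the same hypothetical int4 trie, a picksplit sketch can simply create four labeled nodes and route each supplied value by its label; the output arrays are palloc'd here, as the text above requires.

#include "postgres.h"
#include "fmgr.h"
#include "access/spgist.h"

PG_FUNCTION_INFO_V1(my_picksplit);

/* Same labeling rule as in the choose sketch. */
static int16
my_node_label(int32 value, int level)
{
    if (level >= 16)
        return 0;
    return (int16) ((((uint32) value) >> (2 * level)) & 3);
}

Datum
my_picksplit(PG_FUNCTION_ARGS)
{
    spgPickSplitIn  *in = (spgPickSplitIn *) PG_GETARG_POINTER(0);
    spgPickSplitOut *out = (spgPickSplitOut *) PG_GETARG_POINTER(1);
    int              i;

    out->hasPrefix = false;        /* this operator class does not use prefixes */

    /* Always create four nodes, labeled 0..3; picksplit must palloc the arrays. */
    out->nNodes = 4;
    out->nodeLabels = (Datum *) palloc(sizeof(Datum) * 4);
    for (i = 0; i < 4; i++)
        out->nodeLabels[i] = Int16GetDatum(i);

    /* Route each leaf value to a node by its label; datums are stored unchanged. */
    out->mapTuplesToNodes = (int *) palloc(sizeof(int) * in->nTuples);
    out->leafTupleDatums = (Datum *) palloc(sizeof(Datum) * in->nTuples);
    for (i = 0; i < in->nTuples; i++)
    {
        out->mapTuplesToNodes[i] = (int) my_node_label(DatumGetInt32(in->datums[i]),
                                                       in->level);
        out->leafTupleDatums[i] = in->datums[i];
    }
    PG_RETURN_VOID();
}

If every supplied value ends up with the same label (for example, a page full of duplicates), the core applies the allTheSame override described above.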

inner_consistent

Returns set of nodes (branches) to follow during tree search.

The SQL declaration of the function must look like this:

CREATE FUNCTION my_inner_consistent(internal, internal) RETURNS void ...

The first argument is a pointer to a spgInnerConsistentIn C struct, containing input data for the function. The second argument is a pointer to a spgInnerConsistentOut C struct, which the function must fill with result data.

typedef struct spgInnerConsistentIn
{
    ScanKey     scankeys;       /* array of operators and comparison values */
    int         nkeys;          /* length of array */

    Datum       reconstructedValue;     /* value reconstructed at parent */
    void       *traversalValue; /* opclass-specific traverse value */
    MemoryContext traversalMemoryContext;   /* put new traverse values here */
    int         level;          /* current level (counting from zero) */
    bool        returnData;     /* original data must be returned? */

    /* Data from current inner tuple */
    bool        allTheSame;     /* tuple is marked all-the-same? */
    bool        hasPrefix;      /* tuple has a prefix? */
    Datum       prefixDatum;    /* if so, the prefix value */
    int         nNodes;         /* number of nodes in the inner tuple */
    Datum      *nodeLabels;     /* node label values (NULL if none) */
} spgInnerConsistentIn;

typedef struct spgInnerConsistentOut
{
    int         nNodes;         /* number of child nodes to be visited */
    int        *nodeNumbers;    /* their indexes in the node array */
    int        *levelAdds;      /* increment level by this much for each */
    Datum      *reconstructedValues;    /* associated reconstructed values */
    void      **traversalValues;        /* opclass-specific traverse values */
} spgInnerConsistentOut;

The array scankeys, of length nkeys, describes the index search condition(s). These conditions are combined with AND — only index entries that satisfy all of them are interesting. (Note that nkeys = 0 implies that all index entries satisfy the query.) Usually the consistent function only cares about the sk_strategy and sk_argument fields of each array entry, which respectively give the indexable operator and comparison value. In particular it is not necessary to check sk_flags to see if the comparison value is NULL, because the SP-GiST core code will filter out such conditions. reconstructedValue is the value reconstructed for the parent tuple; it is (Datum) 0 at the root level or if the inner_consistent function did not provide a value at the parent level. traversalValue is a pointer to any traverse data passed down from the previous call of inner_consistent on the parent index tuple, or NULL at the root level. traversalMemoryContext is the memory context in which to store output traverse values (see below). level is the current inner tuple's level, starting at zero for the root level. returnData is true if reconstructed data is required for this query; this will only be so if the config function asserted canReturnData. allTheSame is true if the current inner tuple is marked “all-the-same”; in this case all the nodes have the same label (if any) and so either all or none of them match the query (see Section 63.4.3). hasPrefix is true if the current inner tuple contains a prefix; if so, prefixDatum is its value. nNodes is the number of child nodes contained in the inner tuple, and nodeLabels is an array of their label values, or NULL if the nodes do not have labels.

nNodes must be set to the number of child nodes that need to be visited by the search, and nodeNumbers must be set to an array of their indexes. If the operator class keeps track of levels, set levelAdds to an array of the level increments required when descending to each node to be visited. (Often these increments will be the same for all the nodes, but that's not necessarily so, so an array is used.) If value reconstruction is needed, set reconstructedValues to an array of the values reconstructed for each child node to be visited; otherwise, leave reconstructedValues as NULL. If it is desired to pass down additional out-of-band information (“traverse values”) to lower levels of the tree search, set traversalValues to an array of the appropriate traverse values, one for each child node to be visited; otherwise, leave traversalValues as NULL. Note that the inner_consistent function is responsible for palloc'ing the nodeNumbers, levelAdds, reconstructedValues, and traversalValues arrays in the current memory context. However, any output traverse values pointed to by the traversalValues array should be allocated in traversalMemoryContext. Each traverse value must be a single palloc'd chunk.
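For the same hypothetical trie, an inner_consistent sketch that supports only an equality operator (assumed here to be the opclass's only strategy) descends into just the node whose label matches the corresponding bits of the comparison value.

#include "postgres.h"
#include "fmgr.h"
#include "access/spgist.h"

PG_FUNCTION_INFO_V1(my_inner_consistent);

/* Same labeling rule as in the choose sketch. */
static int16
my_node_label(int32 value, int level)
{
    if (level >= 16)
        return 0;
    return (int16) ((((uint32) value) >> (2 * level)) & 3);
}

Datum
my_inner_consistent(PG_FUNCTION_ARGS)
{
    spgInnerConsistentIn  *in = (spgInnerConsistentIn *) PG_GETARG_POINTER(0);
    spgInnerConsistentOut *out = (spgInnerConsistentOut *) PG_GETARG_POINTER(1);
    int  i, j;

    /* Output arrays are palloc'd in the current (short-lived) memory context. */
    out->nNodes = 0;
    out->nodeNumbers = (int *) palloc(sizeof(int) * in->nNodes);
    out->levelAdds = (int *) palloc(sizeof(int) * in->nNodes);

    for (i = 0; i < in->nNodes; i++)
    {
        bool  visit = true;

        /* Scan keys are ANDed; with nkeys == 0 every node is visited. */
        for (j = 0; j < in->nkeys; j++)
        {
            int16 want = my_node_label(DatumGetInt32(in->scankeys[j].sk_argument),
                                       in->level);

            if (DatumGetInt16(in->nodeLabels[i]) != want)
            {
                visit = false;
                break;
            }
        }
        if (visit)
        {
            out->nodeNumbers[out->nNodes] = i;
            out->levelAdds[out->nNodes] = 1;
            out->nNodes++;
        }
    }
    /* reconstructedValues and traversalValues remain NULL: not needed here. */
    PG_RETURN_VOID();
}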

leaf_consistent

Returns true if a leaf tuple satisfies a query.

The SQL declaration of the function must look like this:

CREATE FUNCTION my_leaf_consistent(internal, internal) RETURNS bool ...

The first argument is a pointer to a spgLeafConsistentIn C struct, containing input data for the function. The second argument is a pointer to a spgLeafConsistentOut C struct, which the function must fill with result data.

typedef struct spgLeafConsistentIn
{
    ScanKey     scankeys;       /* array of operators and comparison values */
    int         nkeys;          /* length of array */

    Datum       reconstructedValue;     /* value reconstructed at parent */
    void       *traversalValue; /* opclass-specific traverse value */
    int         level;          /* current level (counting from zero) */
    bool        returnData;     /* original data must be returned? */

    Datum       leafDatum;      /* datum in leaf tuple */
} spgLeafConsistentIn;

typedef struct spgLeafConsistentOut
{
    Datum       leafValue;      /* reconstructed original data, if any */
    bool        recheck;        /* set true if operator must be rechecked */
} spgLeafConsistentOut;

The array scankeys, of length nkeys, describes the index search condition(s). These conditions are combined with AND — only index entries that satisfy all of them satisfy the query. (Note that nkeys = 0 implies that all index entries satisfy the query.) Usually the consistent function only cares about the sk_strategy and sk_argument fields of each array entry, which respectively give the indexable operator and comparison value. In particular it is not necessary to check sk_flags to see if the comparison value is NULL, because the SP-GiST core code will filter out such conditions. reconstructedValue is the value reconstructed for the parent tuple; it is (Datum) 0 at the root level or if the inner_consistent function did not provide a value at the parent level. traversalValue is a pointer to any traverse data passed down from the previous call of inner_consistent on the parent index tuple, or NULL at the root level. level is the current leaf tuple's level, starting at zero for the root level. returnData is true if reconstructed data is required for this query; this will only be so if the config function asserted canReturnData. leafDatum is the key value stored in the current leaf tuple.

The function must return true if the leaf tuple matches the query, or false if not. In the true case, if returnData is true then leafValue must be set to the value originally supplied to be indexed for this leaf tuple. Also, recheck may be set to true if the match is uncertain and so the operator(s) must be re-applied to the actual heap tuple to verify the match.
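Rounding out the hypothetical int4 trie, a leaf_consistent sketch for the assumed equality-only strategy compares the stored leaf value with each comparison value; matches are exact, so recheck is left false.

#include "postgres.h"
#include "fmgr.h"
#include "access/spgist.h"

PG_FUNCTION_INFO_V1(my_leaf_consistent);

Datum
my_leaf_consistent(PG_FUNCTION_ARGS)
{
    spgLeafConsistentIn  *in = (spgLeafConsistentIn *) PG_GETARG_POINTER(0);
    spgLeafConsistentOut *out = (spgLeafConsistentOut *) PG_GETARG_POINTER(1);
    bool  match = true;
    int   j;

    /* Leaf tuples store the original value, so it can be handed back directly. */
    out->leafValue = in->leafDatum;
    out->recheck = false;              /* equality comparisons here are exact */

    for (j = 0; j < in->nkeys; j++)
    {
        if (DatumGetInt32(in->leafDatum) !=
            DatumGetInt32(in->scankeys[j].sk_argument))
        {
            match = false;
            break;
        }
    }
    PG_RETURN_BOOL(match);
}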

All the SP-GiST support methods are normally called in a short-lived memory context; that is, CurrentMemoryContext will be reset after processing of each tuple. It is therefore not very important to worry about pfree'ing everything you palloc. (The config method is an exception: it should try to avoid leaking memory. But usually the config method need do nothing but assign constants into the passed parameter struct.)

If the indexed column is of a collatable data type, the index collation will be passed to all the support methods, using the standard PG_GET_COLLATION() mechanism.
