9.16. JSON Functions and Operators

This section describes:

  • functions and operators for processing and creating JSON data

  • the SQL/JSON path language

To learn more about the SQL/JSON standard, see [sqltr-19075-6]. For details on the JSON types supported by PostgreSQL, see Section 8.14.

9.16.1. Processing and Creating JSON Data

Table 9.44 shows the operators that are available for use with JSON data types (see Section 8.14).

Table 9.44. json and jsonb Operators

These operators all have variants for both the json and jsonb types. The field/element/path extraction operators return the same type as their left-hand input (either json or jsonb), except for those specified as returning text, which coerce the value to text. The field/element/path extraction operators return NULL, rather than failing, if the JSON input does not have the right structure to match the request; for example, if no such element exists. The field/element/path extraction operators that accept integer JSON array subscripts all support negative subscripting from the end of arrays.
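
For example, a few illustrative calls (the sample values are arbitrary; results are shown as comments):

SELECT '{"a": {"b": 1}}'::jsonb -> 'a';       -- {"b": 1}, same type as the input
SELECT '{"a": {"b": 1}}'::jsonb ->> 'a';      -- {"b": 1}, returned as text
SELECT '{"a": {"b": 1}}'::jsonb #>> '{a,b}';  -- 1, returned as text
SELECT '[1, 2, 3]'::jsonb -> -1;              -- 3 (negative index counts from the end)
SELECT '{"a": 1}'::jsonb -> 'b';              -- NULL (no such field, but no error)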

The standard comparison operators shown in Table 9.1 are available for jsonb, but not for json. They follow the ordering rules for B-tree operations outlined in Section 8.14.4.

Some further operators also exist only for jsonb, as shown in Table 9.45. Many of these operators can be indexed by jsonb operator classes. For a full description of jsonb containment and existence semantics, see Section 8.14.3. Section 8.14.4 describes how these operators can be used to effectively index jsonb data.

Table 9.45. Additional jsonb Operators

Note

The || operator concatenates the elements at the top level of each of its operands. It does not operate recursively. For example, if both operands are objects with a common key field name, the value of the field in the result will just be the value from the right hand operand.
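
A short sketch of this behavior:

SELECT '{"a": 1, "b": 2}'::jsonb || '{"b": 3, "c": 4}'::jsonb;
-- {"a": 1, "b": 3, "c": 4}   (the right-hand value of "b" wins; no deep merge)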

Note

The @? and @@ operators suppress the following errors: lacking object field or array element, unexpected JSON item type, and numeric errors. This behavior might be helpful while searching over JSON document collections of varying structure.
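
For example, when the queried documents do not all share the same structure, a path that finds nothing simply yields false rather than raising an error (shown here in the default lax mode):

SELECT '{"a": {"b": 1}}'::jsonb @? '$.a.b';   -- true
SELECT '{"c": 2}'::jsonb @? '$.a.b';          -- false, no error is raised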

Table 9.46 shows the functions that are available for creating json and jsonb values. (There are no jsonb equivalents of the row_to_json and array_to_json functions; however, the to_jsonb function supplies much the same functionality as these functions would.)

Table 9.46. JSON Creation Functions

Note

array_to_json and row_to_json have the same behavior as to_json except for offering a pretty-printing option. The behavior described for to_json likewise applies to each individual value converted by the other JSON creation functions.
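
For instance, a couple of simple calls (values are arbitrary; results are shown as comments):

SELECT to_json('Fred said "Hi."'::text);
-- "Fred said \"Hi.\""
SELECT json_build_object('name', 'Fred', 'age', 35);
-- {"name" : "Fred", "age" : 35}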

Note

The hstore extension has a cast from hstore to json, so that hstore values converted via the JSON creation functions will be represented as JSON objects, not as primitive string values.
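
A brief sketch, assuming the hstore extension is installed:

SELECT to_json('a => 1, b => t'::hstore);
-- {"a": "1", "b": "t"}   (hstore values are strings, so they remain strings in JSON)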

Table 9.47 shows the functions that are available for processing json and jsonb values.

Table 9.47. JSON Processing Functions

Note

Many of these functions and operators will convert Unicode escapes in JSON strings to the appropriate single character. This is a non-issue if the input is type jsonb, because the conversion was already done; but for json input, this may result in throwing an error, as noted in Section 8.14.
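
A small sketch, assuming a database using UTF8 encoding:

SELECT '{"name": "caf\u00e9"}'::jsonb ->> 'name';   -- café (the escape was resolved when the jsonb value was created)
SELECT '{"name": "caf\u00e9"}'::json  ->> 'name';   -- café (the escape is resolved by the ->> operator itself)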

Note

The functions json[b]_populate_record, json[b]_populate_recordset, json[b]_to_record and json[b]_to_recordset operate on a JSON object, or array of objects, and extract the values associated with keys whose names match column names of the output row type. Object fields that do not correspond to any output column name are ignored, and output columns that do not match any object field will be filled with nulls. To convert a JSON value to the SQL type of an output column, the following rules are applied in sequence:

  • A JSON null value is converted to a SQL null in all cases.

  • If the output column is of type json or jsonb, the JSON value is just reproduced exactly.

  • If the output column is a composite (row) type, and the JSON value is a JSON object, the fields of the object are converted to columns of the output row type by recursive application of these rules.

  • Likewise, if the output column is an array type and the JSON value is a JSON array, the elements of the JSON array are converted to elements of the output array by recursive application of these rules.

  • Otherwise, if the JSON value is a string literal, the contents of the string are fed to the input conversion function for the column's data type.

  • Otherwise, the ordinary text representation of the JSON value is fed to the input conversion function for the column's data type.

While the examples for these functions use constants, the typical use would be to reference a table in the FROM clause and use one of its json or jsonb columns as an argument to the function. Extracted key values can then be referenced in other parts of the query, like WHERE clauses and target lists. Extracting multiple values in this way can improve performance over extracting them separately with per-key operators.
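
A minimal sketch of that pattern, using a hypothetical orders table with a jsonb column named details:

SELECT o.id, r.customer, r.total
FROM orders AS o,
     jsonb_to_record(o.details) AS r(customer text, total numeric)
WHERE r.total > 100;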

Note

All the items of the path parameter of jsonb_set as well as jsonb_insert except the last item must be present in the target. If create_missing is false, all items of the path parameter of jsonb_set must be present. If these conditions are not met the target is returned unchanged.

If the last path item is an object key, it will be created if it is absent and given the new value. If the last path item is an array index, if it is positive the item to set is found by counting from the left, and if negative by counting from the right; -1 designates the rightmost element, and so on. If the item is out of the range -array_length .. array_length - 1, and create_missing is true, the new value is added at the beginning of the array if the item is negative, and at the end of the array if it is positive.
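
For example, the following calls illustrate this behavior (results are shown as comments):

SELECT jsonb_set('{"a": {"b": 1}}', '{a,b}', '2');
-- {"a": {"b": 2}}
SELECT jsonb_set('{"tags": ["x"]}', '{tags,99}', '"y"');
-- {"tags": ["x", "y"]}   (index out of range; create_missing defaults to true)
SELECT jsonb_insert('{"tags": ["x", "z"]}', '{tags,1}', '"y"');
-- {"tags": ["x", "y", "z"]}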

Note

The json_typeof function's null return value should not be confused with a SQL NULL. While calling json_typeof('null'::json) will return null, calling json_typeof(NULL::json) will return a SQL NULL.
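
For example:

SELECT json_typeof('null'::json);          -- the text value 'null'
SELECT json_typeof(NULL::json) IS NULL;    -- true: the result is a SQL NULL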

Note

If the argument to json_strip_nulls contains duplicate field names in any object, the result could be semantically somewhat different, depending on the order in which they occur. This is not an issue for jsonb_strip_nulls since jsonb values never have duplicate object field names.
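
A sketch of the difference (note the duplicate key):

SELECT json_strip_nulls('{"a": 1, "a": null}');
-- only the null-valued duplicate is removed; the non-null "a" remains
SELECT jsonb_strip_nulls('{"a": 1, "a": null}');
-- jsonb keeps only the last duplicate, so the whole field disappears: {}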

Note

The jsonb_path_exists, jsonb_path_match, jsonb_path_query, jsonb_path_query_array, and jsonb_path_query_first functions have optional vars and silent arguments.

If the vars argument is specified, it provides an object containing named variables to be substituted into a jsonpath expression.

If the silent argument is specified and has the true value, these functions suppress the same errors as the @? and @@ operators.
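
For example, passing bounds through the vars argument:

SELECT jsonb_path_query(
         '{"a": [1, 2, 3, 4, 5]}',
         '$.a[*] ? (@ >= $min && @ <= $max)',
         '{"min": 2, "max": 4}');
-- returns the values 2, 3 and 4, one per row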

See also Section 9.20 for the aggregate function json_agg which aggregates record values as JSON, and the aggregate function json_object_agg which aggregates pairs of values into a JSON object, and their jsonb equivalents, jsonb_agg and jsonb_object_agg.

9.16.2. The SQL/JSON Path Language

SQL/JSON path expressions specify the items to be retrieved from the JSON data, similar to XPath expressions used for SQL access to XML. In PostgreSQL, path expressions are implemented as the jsonpath data type and can use any elements described in Section 8.14.6.

JSON query functions and operators pass the provided path expression to the path engine for evaluation. If the expression matches the queried JSON data, the corresponding SQL/JSON item is returned. Path expressions are written in the SQL/JSON path language and can also include arithmetic expressions and functions. Query functions treat the provided expression as a text string, so it must be enclosed in single quotes.

A path expression consists of a sequence of elements allowed by the jsonpath data type. The path expression is evaluated from left to right, but you can use parentheses to change the order of operations. If the evaluation is successful, a sequence of SQL/JSON items (SQL/JSON sequence) is produced, and the evaluation result is returned to the JSON query function that completes the specified computation.

To refer to the JSON data to be queried (the context item), use the $ sign in the path expression. It can be followed by one or more accessor operators, which go down the JSON structure level by level to retrieve the content of the context item. Each operator that follows deals with the result of the previous evaluation step.

For example, suppose you have some JSON data from a GPS tracker that you would like to parse, such as:

{
  "track": {
    "segments": [
      {
        "location":   [ 47.763, 13.4034 ],
        "start time": "2018-10-14 10:05:14",
        "HR": 73
      },
      {
        "location":   [ 47.706, 13.2635 ],
        "start time": "2018-10-14 10:39:21",
        "HR": 135
      }
    ]
  }
}

To retrieve the available track segments, you need to use the .key accessor operator for all the preceding JSON objects:

'$.track.segments'

If the item to retrieve is an element of an array, you have to unnest this array using the [*] operator. For example, the following path will return location coordinates for all the available track segments:

'$.track.segments[*].location'

To return the coordinates of the first segment only, you can specify the corresponding subscript in the [] accessor operator. Note that the SQL/JSON arrays are 0-relative:

'$.track.segments[0].location'

The result of each path evaluation step can be processed by one or more jsonpath operators and methods listed in Section 9.16.2.3. Each method name must be preceded by a dot. For example, you can get an array size:

'$.track.segments.size()'

For more examples of using jsonpath operators and methods within path expressions, see Section 9.16.2.3.
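
To evaluate such a path expression from SQL, one of the jsonb_path_* functions described earlier can be used. A minimal sketch, assuming the GPS document above is stored in a jsonb column js of a hypothetical table trackers:

SELECT jsonb_path_query(js, '$.track.segments.size()')
FROM trackers;
-- 2 for the sample document above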

When defining the path, you can also use one or more filter expressions that work similar to the WHERE clause in SQL. A filter expression begins with a question mark and provides a condition in parentheses:

? (condition)

Filter expressions must be specified right after the path evaluation step to which they are applied. The result of this step is filtered to include only those items that satisfy the provided condition. SQL/JSON defines three-valued logic, so the condition can be true, false, or unknown. The unknown value plays the same role as SQL NULL and can be tested for with the is unknown predicate. Further path evaluation steps use only those items for which filter expressions return true.

Functions and operators that can be used in filter expressions are listed in Table 9.49. The path evaluation result to be filtered is denoted by the @ variable. To refer to a JSON element stored at a lower nesting level, add one or more accessor operators after @.

Suppose you would like to retrieve all heart rate values higher than 130. You can achieve this using the following expression:

'$.track.segments[*].HR ? (@ > 130)'

To get the start time of segments with such values instead, you have to filter out irrelevant segments before returning the start time, so the filter expression is applied to the previous step, and the path used in the condition is different:

'$.track.segments[*] ? (@.HR > 130)."start time"'

You can use several filter expressions on the same nesting level, if required. For example, the following expression selects all segments that contain locations with relevant coordinates and high heart rate values:

'$.track.segments[*] ? (@.location[1] < 13.4) ? (@.HR > 130)."start time"'

Using filter expressions at different nesting levels is also allowed. The following example first filters all segments by location, and then returns high heart rate values for these segments, if available:

'$.track.segments[*] ? (@.location[1] < 13.4).HR ? (@ > 130)'

You can also nest filter expressions within each other:

'$.track ? (exists(@.segments[*] ? (@.HR > 130))).segments.size()'

This expression returns the size of the track if it contains any segments with high heart rate values, or an empty sequence otherwise.
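
Run from SQL against the hypothetical trackers table used above, a filtered query might look like:

SELECT jsonb_path_query(js, '$.track.segments[*] ? (@.HR > 130)."start time"')
FROM trackers;
-- "2018-10-14 10:39:21" for the sample document above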

PostgreSQL's implementation of SQL/JSON path language has the following deviations from the SQL/JSON standard:

  • The .datetime() item method is not implemented yet, mainly because immutable jsonpath functions and operators cannot reference the session time zone, which is used in some datetime operations. Datetime support will be added to jsonpath in future versions of PostgreSQL.

  • A path expression can be a Boolean predicate, although the SQL/JSON standard allows predicates only in filters. This is necessary for implementation of the @@ operator. For example, the following jsonpath expression is valid in PostgreSQL:

    '$.track.segments[*].HR < 70'
  • There are minor differences in the interpretation of regular expression patterns used in like_regex filters, as described in Section 9.16.2.2.

9.16.2.1. Strict And Lax Modes

When you query JSON data, the path expression may not match the actual JSON data structure. An attempt to access a non-existent member of an object or element of an array results in a structural error. SQL/JSON path expressions have two modes of handling structural errors:

  • lax (default) — the path engine implicitly adapts the queried data to the specified path. Any remaining structural errors are suppressed and converted to empty SQL/JSON sequences.

  • strict — if a structural error occurs, an error is raised.

The lax mode facilitates matching of a JSON document structure and path expression if the JSON data does not conform to the expected schema. If an operand does not match the requirements of a particular operation, it can be automatically wrapped as an SQL/JSON array or unwrapped by converting its elements into an SQL/JSON sequence before performing this operation. Besides, comparison operators automatically unwrap their operands in the lax mode, so you can compare SQL/JSON arrays out-of-the-box. An array of size 1 is considered equal to its sole element. Automatic unwrapping is not performed only when:

  • The path expression contains type() or size() methods that return the type and the number of elements in the array, respectively.

  • The queried JSON data contain nested arrays. In this case, only the outermost array is unwrapped, while all the inner arrays remain unchanged. Thus, implicit unwrapping can only go one level down within each path evaluation step.

For example, when querying the GPS data listed above, you can abstract from the fact that it stores an array of segments when using the lax mode:

'lax $.track.segments.location'

In the strict mode, the specified path must exactly match the structure of the queried JSON document to return an SQL/JSON item, so using this path expression will cause an error. To get the same result as in the lax mode, you have to explicitly unwrap the segments array:

'strict $.track.segments[*].location'
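
For instance, with the hypothetical trackers table used above:

SELECT jsonb_path_query(js, 'lax $.track.segments.location')
FROM trackers;
-- returns both location arrays, one per row

SELECT jsonb_path_query(js, 'strict $.track.segments.location')
FROM trackers;
-- raises an error, because "segments" is an array, not an object

SELECT jsonb_path_query(js, 'strict $.track.segments[*].location')
FROM trackers;
-- returns both location arrays again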

9.16.2.2. Regular Expressions

SQL/JSON path expressions allow matching text to a regular expression with the like_regex filter. For example, the following SQL/JSON path query would case-insensitively match all strings in an array that start with an English vowel:

'$[*] ? (@ like_regex "^[aeiou]" flag "i")'

The optional flag string may include one or more of the characters i for case-insensitive match, m to allow ^ and $ to match at newlines, s to allow . to match a newline, and q to quote the whole pattern (reducing the behavior to a simple substring match).

The SQL/JSON standard borrows its definition for regular expressions from the LIKE_REGEX operator, which in turn uses the XQuery standard. PostgreSQL does not currently support the LIKE_REGEX operator. Therefore, the like_regex filter is implemented using the POSIX regular expression engine described in Section 9.7.3. This leads to various minor discrepancies from standard SQL/JSON behavior, which are cataloged in Section 9.7.3.8. Note, however, that the flag-letter incompatibilities described there do not apply to SQL/JSON, as it translates the XQuery flag letters to match what the POSIX engine expects.

Keep in mind that the pattern argument of like_regex is a JSON path string literal, written according to the rules given in Section 8.14.6. This means in particular that any backslashes you want to use in the regular expression must be doubled. For example, to match strings that contain only digits:

'$ ? (@ like_regex "^\\d+$")'
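
Used from SQL, such a filter might look like this (note the doubled backslashes in the SQL literal):

SELECT jsonb_path_exists('["apple", "banana", "42"]',
                         '$[*] ? (@ like_regex "^\\d+$")');
-- true, because "42" consists only of digits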

9.16.2.3. SQL/JSON Path Operators And Methods

Table 9.48 shows the operators and methods available in jsonpath. Table 9.49 shows the available filter expression elements.

Table 9.48. jsonpath Operators and Methods

Table 9.49. jsonpath Filter Expression Elements
