schemas
Creates, updates, deletes, gets or lists a schemas resource.
Overview
Name | schemas |
Type | Resource |
Id | snowflake.schema.schemas |
Fields
The following fields are returned by SELECT queries:
- list_schemas
- fetch_schema
Both list_schemas and fetch_schema return the Snowflake schema definition:
Name | Datatype | Description |
---|---|---|
name | string | A Snowflake object identifier. If the identifier contains spaces or special characters, the entire string must be enclosed in double quotes. Identifiers enclosed in double quotes are also case-sensitive. (pattern: ^"([^"]|"")+"|[a-zA-Z_][a-zA-Z0-9_$]*$, example: TEST_NAME) |
database_name | string | Database that the schema belongs to. |
budget | string | Budget that defines a monthly spending limit on the compute costs for a Snowflake account or a custom group of Snowflake objects. |
comment | string | Optional comment in which to store information related to the schema. |
created_on | string (date-time) | Date and time the schema was created. |
data_retention_time_in_days | integer | Number of days for which Time Travel actions (CLONE and UNDROP) can be performed on the schema, as well as specifying the default Time Travel retention time for all tables created in the schema. |
default_ddl_collation | string | Specifies a default collation specification for all tables added to the schema. You can override the default at the schema and individual table levels. |
dropped_on | string (date-time) | Date and time the schema was dropped. |
is_current | boolean | Current schema for the session. |
is_default | boolean | Default schema for a user. |
kind | string | Schema type, permanent (default) or transient. (default: PERMANENT) |
log_level | string | Severity level of messages that should be ingested and made available in the active event table. Currently, Snowflake supports only TRACE, DEBUG, INFO, WARN, ERROR, FATAL, and OFF. |
managed_access | boolean | Whether this schema is a managed access schema that centralizes privilege management with the schema owner. |
max_data_extension_time_in_days | integer | Maximum number of days for which Snowflake can extend the data retention period for tables in the schema to prevent streams on the tables from becoming stale. |
options | string | |
owner | string | Name of the role that owns the schema. |
owner_role_type | string | Type of role that owns the object, either ROLE or DATABASE_ROLE. |
pipe_execution_paused | boolean | Whether pipe execution is paused. |
retention_time | integer | Number of days that historical data is retained for Time Travel. |
serverless_task_max_statement_size | string | Specifies the maximum allowed warehouse size for the serverless task. Minimum XSMALL, Maximum XXLARGE. |
serverless_task_min_statement_size | string | Specifies the minimum allowed warehouse size for the serverless task. Minimum XSMALL, Maximum XXLARGE. |
suspend_task_after_num_failures | integer | Specifies the number of consecutive failed task runs after which the current task is suspended automatically. |
trace_level | string | How trace events are ingested into the event table. Currently, Snowflake supports only ALWAYS, ON_EVENT, and OFF. |
user_task_managed_initial_warehouse_size | string | Size of the compute resources to provision for the first run of the serverless task, before a task history is available for Snowflake to determine an ideal size. |
user_task_timeout_ms | integer | Time limit, in milliseconds, for a single run of the task before it times out. |
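The name pattern above allows both unquoted and double-quoted identifiers, and quoted identifiers are matched case-sensitively. As a minimal sketch, fetching a schema created as "My Schema" keeps the double quotes inside the string literal; MY_DB and myorg-myaccount are placeholder values:
SELECT
name,
owner,
created_on
FROM snowflake.schema.schemas
WHERE database_name = 'MY_DB' -- required
AND name = '"My Schema"' -- required; the quotes are part of the identifier
AND endpoint = 'myorg-myaccount'; -- required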
Methods
The following methods are available for this resource:
Name | Accessible by | Required Params | Optional Params | Description |
---|---|---|---|---|
list_schemas | select | database_name, endpoint | like, startsWith, showLimit, fromName, history | Lists the accessible schemas. |
fetch_schema | select | database_name, name, endpoint | | Fetches a schema. |
create_schema | insert | database_name, endpoint | createMode, kind | Creates a schema, with modifiers as query parameters. You must provide the full schema definition when creating a schema. |
create_or_alter_schema | replace | database_name, name, endpoint | kind | Creates a new, or alters an existing, schema. You must provide the full schema definition even when altering an existing schema. |
delete_schema | delete | database_name, name, endpoint | ifExists, restrict | Deletes the specified schema. If you enable the ifExists parameter, the operation succeeds even if the schema does not exist. Otherwise, a 404 failure is returned if the schema does not exist or if the drop is unsuccessful. |
clone_schema | exec | database_name, name, endpoint | createMode, kind, targetDatabase | Clones an existing schema, with modifiers as query parameters. You must provide the full schema definition when cloning an existing schema. |
undrop_schema | exec | database_name, name, endpoint | | Undrops a schema. |
Parameters
Parameters can be passed in the WHERE clause of a query. Check the Methods section to see which parameters are required or optional for each operation.
Name | Datatype | Description |
---|---|---|
database_name | string | Identifier (i.e. name) for the database to which the resource belongs. You can use the /api/v2/databases GET request to get a list of available databases. |
endpoint | string | Organization and Account Name (default: orgid-acctid) |
name | string | Identifier (i.e. name) for the resource. |
createMode | string | Query parameter allowing support for different modes of resource creation. Possible values include: - errorIfExists : Throws an error if you try to create a resource that already exists. - orReplace : Automatically replaces the existing resource with the current one. - ifNotExists : Creates a new resource when an alter is requested for a non-existent resource. |
fromName | string | Query parameter to enable fetching rows only following the first row whose object name matches the specified string. Case-sensitive and does not have to be the full name. |
history | boolean | Whether to include dropped schemas that have not yet been purged. Default: false . |
ifExists | boolean | Query parameter that specifies how to handle the request for a resource that does not exist: - true : The endpoint does not throw an error if the resource does not exist. It returns a 200 success response, but does not take any action on the resource. - false : The endpoint throws an error if the resource doesn't exist. |
kind | string | Type of schema to clone. Currently, Snowflake supports only transient and permanent (also represented by the empty string). |
like | string | Query parameter to filter the command output by resource name. Uses case-insensitive pattern matching, with support for SQL wildcard characters. |
restrict | boolean | Whether to drop the schema if foreign keys exist that reference any tables in the schema. - true : Return a warning about existing foreign key references and don't drop the schema. - false : Drop the schema and all objects in the database, including tables with primary or unique keys that are referenced by foreign keys in other tables. Default: false . |
showLimit | integer | Query parameter to limit the maximum number of rows returned by a command. |
startsWith | string | Query parameter to filter the command output based on the string of characters that appear at the beginning of the object name. Uses case-sensitive pattern matching. |
targetDatabase | string | Database of the newly created schema. Defaults to the source schema's database. |
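As an illustration of how the optional filters combine, the following list_schemas query (all identifiers are placeholder values) returns at most 10 schemas in MY_DB whose names start with SALES, including dropped schemas that have not yet been purged:
SELECT
name,
kind,
owner,
dropped_on
FROM snowflake.schema.schemas
WHERE database_name = 'MY_DB' -- required
AND endpoint = 'myorg-myaccount' -- required
AND startsWith = 'SALES'
AND showLimit = '10'
AND history = 'true';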
SELECT examples
- list_schemas
- fetch_schema
Lists the accessible schemas.
SELECT
name,
database_name,
budget,
comment,
created_on,
data_retention_time_in_days,
default_ddl_collation,
dropped_on,
is_current,
is_default,
kind,
log_level,
managed_access,
max_data_extension_time_in_days,
options,
owner,
owner_role_type,
pipe_execution_paused,
retention_time,
serverless_task_max_statement_size,
serverless_task_min_statement_size,
suspend_task_after_num_failures,
trace_level,
user_task_managed_initial_warehouse_size,
user_task_timeout_ms
FROM snowflake.schema.schemas
WHERE database_name = '{{ database_name }}' -- required
AND endpoint = '{{ endpoint }}' -- required
AND like = '{{ like }}'
AND startsWith = '{{ startsWith }}'
AND showLimit = '{{ showLimit }}'
AND fromName = '{{ fromName }}'
AND history = '{{ history }}';
Fetches a schema.
SELECT
name,
database_name,
budget,
comment,
created_on,
data_retention_time_in_days,
default_ddl_collation,
dropped_on,
is_current,
is_default,
kind,
log_level,
managed_access,
max_data_extension_time_in_days,
options,
owner,
owner_role_type,
pipe_execution_paused,
retention_time,
serverless_task_max_statement_size,
serverless_task_min_statement_size,
suspend_task_after_num_failures,
trace_level,
user_task_managed_initial_warehouse_size,
user_task_timeout_ms
FROM snowflake.schema.schemas
WHERE database_name = '{{ database_name }}' -- required
AND name = '{{ name }}' -- required
AND endpoint = '{{ endpoint }}'; -- required
INSERT examples
- create_schema
- Manifest
Creates a schema, with modifiers as query parameters. You must provide the full schema definition when creating a schema.
INSERT INTO snowflake.schema.schemas (
data__name,
data__kind,
data__comment,
data__managed_access,
data__data_retention_time_in_days,
data__default_ddl_collation,
data__log_level,
data__pipe_execution_paused,
data__max_data_extension_time_in_days,
data__suspend_task_after_num_failures,
data__trace_level,
data__user_task_managed_initial_warehouse_size,
data__user_task_timeout_ms,
data__serverless_task_min_statement_size,
data__serverless_task_max_statement_size,
database_name,
endpoint,
createMode,
kind
)
SELECT
'{{ name }}', --required
'{{ kind }}',
'{{ comment }}',
{{ managed_access }},
{{ data_retention_time_in_days }},
'{{ default_ddl_collation }}',
'{{ log_level }}',
{{ pipe_execution_paused }},
{{ max_data_extension_time_in_days }},
{{ suspend_task_after_num_failures }},
'{{ trace_level }}',
'{{ user_task_managed_initial_warehouse_size }}',
{{ user_task_timeout_ms }},
'{{ serverless_task_min_statement_size }}',
'{{ serverless_task_max_statement_size }}',
'{{ database_name }}',
'{{ endpoint }}',
'{{ createMode }}',
'{{ kind }}'
;
# Description fields are for documentation purposes
- name: schemas
props:
- name: database_name
value: string
description: Required parameter for the schemas resource.
- name: endpoint
value: string
description: Required parameter for the schemas resource.
- name: name
value: string
description: >
A Snowflake object identifier. If the identifier contains spaces or special characters, the entire string must be enclosed in double quotes. Identifiers enclosed in double quotes are also case-sensitive.
- name: kind
value: string
description: >
Schema type, permanent (default) or transient.
valid_values: ['PERMANENT', 'TRANSIENT']
default: PERMANENT
- name: comment
value: string
description: >
Optional comment in which to store information related to the schema.
- name: managed_access
value: boolean
description: >
Whether this schema is a managed access schema that centralizes privilege management with the schema owner.
default: false
- name: data_retention_time_in_days
value: integer
description: >
Number of days for which Time Travel actions (CLONE and UNDROP) can be performed on the schema, as well as specifying the default Time Travel retention time for all tables created in the schema
- name: default_ddl_collation
value: string
description: >
Specifies a default collation specification for all tables added to the schema. You can override the default at the schema and individual table levels.
- name: log_level
value: string
description: >
Severity level of messages that should be ingested and made available in the active event table. Currently, Snowflake supports only `TRACE`, `DEBUG`, `INFO`, `WARN`, `ERROR`, `FATAL` and `OFF`.
- name: pipe_execution_paused
value: boolean
description: >
Whether pipe execution is paused.
- name: max_data_extension_time_in_days
value: integer
description: >
Maximum number of days for which Snowflake can extend the data retention period for tables in the schema to prevent streams on the tables from becoming stale.
- name: suspend_task_after_num_failures
value: integer
description: >
Specifies the number of consecutive failed task runs after which the current task is suspended automatically.
- name: trace_level
value: string
description: >
How trace events are ingested into the event table. Currently, Snowflake supports only `ALWAYS`, `ON_EVENT`, and `OFF`.
- name: user_task_managed_initial_warehouse_size
value: string
description: >
Size of the compute resources to provision for the first run of the serverless task, before a task history is available for Snowflake to determine an ideal size.
- name: user_task_timeout_ms
value: integer
description: >
Time limit, in milliseconds, for a single run of the task before it times out.
- name: serverless_task_min_statement_size
value: string
description: >
Specifies the minimum allowed warehouse size for the serverless task. Minimum XSMALL, Maximum XXLARGE.
- name: serverless_task_max_statement_size
value: string
description: >
Specifies the maximum allowed warehouse size for the serverless task. Minimum XSMALL, Maximum XXLARGE.
- name: createMode
value: string
description: Query parameter allowing support for different modes of resource creation. Possible values include: - `errorIfExists`: Throws an error if you try to create a resource that already exists. - `orReplace`: Automatically replaces the existing resource with the current one. - `ifNotExists`: Creates a new resource when an alter is requested for a non-existent resource.
- name: kind
value: string
description: Type of schema to create. Currently, Snowflake supports only `transient` and `permanent` (also represented by the empty string).
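Putting the manifest into practice, a minimal create_schema call can supply just the required parameters plus the fields you want to set; everything else falls back to server-side defaults. The identifiers below are placeholder values:
INSERT INTO snowflake.schema.schemas (
data__name,
data__comment,
database_name,
endpoint,
createMode
)
SELECT
'REPORTING',
'Schema for reporting workloads',
'MY_DB',
'myorg-myaccount',
'ifNotExists'
;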
REPLACE examples
- create_or_alter_schema
Creates a new, or alters an existing, schema. You must provide the full schema definition even when altering an existing schema.
REPLACE snowflake.schema.schemas
SET
data__name = '{{ name }}',
data__kind = '{{ kind }}',
data__comment = '{{ comment }}',
data__managed_access = {{ managed_access }},
data__data_retention_time_in_days = {{ data_retention_time_in_days }},
data__default_ddl_collation = '{{ default_ddl_collation }}',
data__log_level = '{{ log_level }}',
data__pipe_execution_paused = {{ pipe_execution_paused }},
data__max_data_extension_time_in_days = {{ max_data_extension_time_in_days }},
data__suspend_task_after_num_failures = {{ suspend_task_after_num_failures }},
data__trace_level = '{{ trace_level }}',
data__user_task_managed_initial_warehouse_size = '{{ user_task_managed_initial_warehouse_size }}',
data__user_task_timeout_ms = {{ user_task_timeout_ms }},
data__serverless_task_min_statement_size = '{{ serverless_task_min_statement_size }}',
data__serverless_task_max_statement_size = '{{ serverless_task_max_statement_size }}'
WHERE
database_name = '{{ database_name }}' --required
AND name = '{{ name }}' --required
AND endpoint = '{{ endpoint }}' --required
AND data__name = '{{ name }}' --required
AND kind = '{{ kind }}';
DELETE examples
- delete_schema
Deletes the specified schema. If you enable the ifExists parameter, the operation succeeds even if the schema does not exist. Otherwise, a 404 failure is returned if the schema does not exist or if the drop is unsuccessful.
DELETE FROM snowflake.schema.schemas
WHERE database_name = '{{ database_name }}' --required
AND name = '{{ name }}' --required
AND endpoint = '{{ endpoint }}' --required
AND ifExists = '{{ ifExists }}'
AND restrict = '{{ restrict }}';
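For example, to drop a placeholder schema REPORTING from MY_DB without failing if it has already been removed, while refusing to drop it when foreign keys in other tables still reference its tables:
DELETE FROM snowflake.schema.schemas
WHERE database_name = 'MY_DB' --required
AND name = 'REPORTING' --required
AND endpoint = 'myorg-myaccount' --required
AND ifExists = 'true'
AND restrict = 'true';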
Lifecycle Methods
- clone_schema
- undrop_schema
Clones an existing schema, with modifiers as query parameters. You must provide the full schema definition when cloning an existing schema.
EXEC snowflake.schema.schemas.clone_schema
@database_name='{{ database_name }}', --required
@name='{{ name }}', --required
@endpoint='{{ endpoint }}', --required
@createMode='{{ createMode }}',
@kind='{{ kind }}',
@targetDatabase='{{ targetDatabase }}'
@@json=
'{
"name": "{{ name }}",
"kind": "{{ kind }}",
"comment": "{{ comment }}",
"managed_access": {{ managed_access }},
"data_retention_time_in_days": {{ data_retention_time_in_days }},
"default_ddl_collation": "{{ default_ddl_collation }}",
"log_level": "{{ log_level }}",
"pipe_execution_paused": {{ pipe_execution_paused }},
"max_data_extension_time_in_days": {{ max_data_extension_time_in_days }},
"suspend_task_after_num_failures": {{ suspend_task_after_num_failures }},
"trace_level": "{{ trace_level }}",
"user_task_managed_initial_warehouse_size": "{{ user_task_managed_initial_warehouse_size }}",
"user_task_timeout_ms": {{ user_task_timeout_ms }},
"serverless_task_min_statement_size": "{{ serverless_task_min_statement_size }}",
"serverless_task_max_statement_size": "{{ serverless_task_max_statement_size }}"
}';
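As a sketch with placeholder identifiers, cloning REPORTING from MY_DB into a development database might look like the following; the request body names the new schema, and any omitted optional fields are left at their defaults:
EXEC snowflake.schema.schemas.clone_schema
@database_name='MY_DB', --required
@name='REPORTING', --required
@endpoint='myorg-myaccount', --required
@createMode='orReplace',
@targetDatabase='MY_DB_DEV'
@@json=
'{
"name": "REPORTING_CLONE"
}';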
Undrops a schema.
EXEC snowflake.schema.schemas.undrop_schema
@database_name='{{ database_name }}', --required
@name='{{ name }}', --required
@endpoint='{{ endpoint }}'; --required