iceberg_tables
Creates, updates, deletes, gets or lists an `iceberg_tables` resource.
Overview
Property | Value |
---|---|
Name | iceberg_tables |
Type | Resource |
Id | snowflake.iceberg_table.iceberg_tables |
Fields
The following fields are returned by SELECT queries:
- list_iceberg_tables
- fetch_iceberg_table
Both methods return a Snowflake iceberg table with the following fields:
Name | Datatype | Description |
---|---|---|
name | string | Name of the iceberg table (pattern: ^"([^"]|"")+"|[a-zA-Z_][a-zA-Z0-9_$]*$) |
catalog_table_name | string | Name of the table as recognized by the catalog. |
database_name | string | Database in which the iceberg table is stored (pattern: ^"([^"]|"")+"|[a-zA-Z_][a-zA-Z0-9_$]*$) |
schema_name | string | Schema in which the iceberg table is stored (pattern: ^"([^"]|"")+"|[a-zA-Z_][a-zA-Z0-9_$]*$) |
auto_refresh | boolean | Specifies whether to automatically refresh the table metadata |
base_location | string | The path to a directory where Snowflake can write data and metadata files for the table. |
can_write_metadata | string | Signifies whether Snowflake can write metadata to the location specified by the file_path. |
catalog | string | Name of the catalog integration to use for iceberg tables |
catalog_namespace | string | Catalog namespace for the table. The namespace defined when the table was created. Otherwise, the default namespace associated with the catalog integration used by the table. If you’re syncing the table to Snowflake Open Catalog, the default is null. |
catalog_sync | string | Name of the catalog integration to sync this table |
change_tracking | boolean | True if change tracking is enabled, allowing streams and CHANGES to be used on the entity. |
cluster_by | array | Specifies one or more columns or column expressions in the table as the clustering key. |
columns | array | |
comment | string | User comment associated with an object in the dictionary. |
constraints | array | |
created_on | string (date-time) | Date and time when the iceberg table was created. |
data_retention_time_in_days | integer | Number of days to retain the old version of deleted or updated data. |
external_volume | string | Name of an external volume that will be used for persisted Iceberg metadata and data files. |
iceberg_table_type | string | Type of Iceberg table. UNMANAGED if the table is not managed by Snowflake. NOT ICEBERG otherwise. |
max_data_extension_time_in_days | integer | Maximum number of days to extend data retention beyond the retention period to prevent a stream becoming stale. |
metadata_file_path | string | Specifies the relative path of the Iceberg metadata file to use for column definitions. |
owner | string | Role that owns the iceberg table (pattern: ^"([^"]|"")+"|[a-zA-Z_][a-zA-Z0-9_$]*$) |
owner_role_type | string | The type of role that owns the iceberg table (pattern: ^"([^"]|"")+"|[a-zA-Z_][a-zA-Z0-9_$]*$) |
replace_invalid_characters | boolean | Specifies whether to replace invalid characters in the column names |
storage_serialization_policy | string | Storage serialization policy used for a managed Iceberg table. This includes encodings and compression. |
Methods
The following methods are available for this resource: `list_iceberg_tables`, `fetch_iceberg_table`, `create_snowflake_managed_iceberg_table`, and `drop_iceberg_table`, plus the lifecycle methods listed in the Lifecycle Methods section below.
Parameters
Parameters can be passed in the WHERE clause of a query. Check the Methods section to see which parameters are required or optional for each operation. A short worked example follows the table below.
Name | Datatype | Description |
---|---|---|
database_name | string | Identifier (i.e. name) for the database to which the resource belongs. You can use the /api/v2/databases GET request to get a list of available databases. |
endpoint | string | Organization and Account Name (default: orgid-acctid) |
name | string | Identifier (i.e. name) for the resource. |
query | string | The SQL select query to run to set up the table values (and possibly columns). |
schema_name | string | Identifier (i.e. name) for the schema to which the resource belongs. You can use the /api/v2/databases/{database}/schemas GET request to get a list of available schemas for the specified database. |
copyGrants | boolean | Query parameter to enable copy grants when creating the object. |
createMode | string | Query parameter allowing support for different modes of resource creation. Possible values include: - errorIfExists : Throws an error if you try to create a resource that already exists. - orReplace : Automatically replaces the existing resource with the current one. - ifNotExists : Creates a new resource when an alter is requested for a non-existent resource. |
deep | boolean | Optionally includes dependency information of the table. |
fromName | string | Query parameter to enable fetching rows only following the first row whose object name matches the specified string. Case-sensitive and does not have to be the full name. |
ifExists | boolean | Query parameter that specifies how to handle the request for a resource that does not exist: - true : The endpoint does not throw an error if the resource does not exist. It returns a 200 success response, but does not take any action on the resource. - false : The endpoint throws an error if the resource doesn't exist. |
like | string | Query parameter to filter the command output by resource name. Uses case-insensitive pattern matching, with support for SQL wildcard characters. |
showLimit | integer | Query parameter to limit the maximum number of rows returned by a command. |
startsWith | string | Query parameter to filter the command output based on the string of characters that appear at the beginning of the object name. Uses case-sensitive pattern matching. |
targetDatabase | string | Database of the newly created table. Defaults to the source table's database. |
targetSchema | string | Schema of the newly created table. Defaults to the source table's schema. |
type | string | Specifies whether the table can be dropped if foreign keys exist that reference the table. |
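For illustration, a filtered listing that combines several of these query parameters might look like the following sketch; the database, schema, endpoint, and pattern values are hypothetical:
SELECT
name,
owner,
created_on
FROM snowflake.iceberg_table.iceberg_tables
WHERE database_name = 'ANALYTICS_DB' -- hypothetical database
AND schema_name = 'PUBLIC' -- hypothetical schema
AND endpoint = 'myorg-myaccount' -- hypothetical organization-account identifier
AND like = 'SALES%' -- case-insensitive pattern filter on table name
AND showLimit = '10'; -- limit the number of rows returned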
SELECT examples
- list_iceberg_tables
- fetch_iceberg_table
Lists the Apache Iceberg™ tables for which you have access privileges.
SELECT
name,
catalog_table_name,
database_name,
schema_name,
auto_refresh,
base_location,
can_write_metadata,
catalog,
catalog_namespace,
catalog_sync,
change_tracking,
cluster_by,
columns,
comment,
constraints,
created_on,
data_retention_time_in_days,
external_volume,
iceberg_table_type,
max_data_extension_time_in_days,
metadata_file_path,
owner,
owner_role_type,
replace_invalid_characters,
storage_serialization_policy
FROM snowflake.iceberg_table.iceberg_tables
WHERE database_name = '{{ database_name }}' -- required
AND schema_name = '{{ schema_name }}' -- required
AND endpoint = '{{ endpoint }}' -- required
AND like = '{{ like }}'
AND startsWith = '{{ startsWith }}'
AND showLimit = '{{ showLimit }}'
AND fromName = '{{ fromName }}'
AND deep = '{{ deep }}';
Describe an iceberg table
SELECT
name,
catalog_table_name,
database_name,
schema_name,
auto_refresh,
base_location,
can_write_metadata,
catalog,
catalog_namespace,
catalog_sync,
change_tracking,
cluster_by,
columns,
comment,
constraints,
created_on,
data_retention_time_in_days,
external_volume,
iceberg_table_type,
max_data_extension_time_in_days,
metadata_file_path,
owner,
owner_role_type,
replace_invalid_characters,
storage_serialization_policy
FROM snowflake.iceberg_table.iceberg_tables
WHERE database_name = '{{ database_name }}' -- required
AND schema_name = '{{ schema_name }}' -- required
AND name = '{{ name }}' -- required
AND endpoint = '{{ endpoint }}' -- required
;
INSERT examples
- create_snowflake_managed_iceberg_table
- Manifest
Create a snowflake managed iceberg table (clone and undrop are separate subresources)
INSERT INTO snowflake.iceberg_table.iceberg_tables (
data__name,
data__comment,
data__change_tracking,
data__max_data_extension_time_in_days,
data__external_volume,
data__data_retention_time_in_days,
data__catalog_sync,
data__catalog,
data__storage_serialization_policy,
data__catalog_table_name,
data__catalog_namespace,
data__cluster_by,
data__columns,
data__base_location,
data__replace_invalid_characters,
data__metadata_file_path,
data__constraints,
database_name,
schema_name,
endpoint,
createMode,
copyGrants
)
SELECT
'{{ name }}', --required
'{{ comment }}',
{{ change_tracking }},
{{ max_data_extension_time_in_days }},
'{{ external_volume }}',
{{ data_retention_time_in_days }},
'{{ catalog_sync }}',
'{{ catalog }}',
'{{ storage_serialization_policy }}',
'{{ catalog_table_name }}',
'{{ catalog_namespace }}',
'{{ cluster_by }}',
'{{ columns }}',
'{{ base_location }}',
{{ replace_invalid_characters }},
'{{ metadata_file_path }}',
'{{ constraints }}',
'{{ database_name }}',
'{{ schema_name }}',
'{{ endpoint }}',
'{{ createMode }}',
'{{ copyGrants }}'
;
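For reference, a minimal Snowflake-managed table creation that supplies only a handful of the fields above might look like this sketch; every identifier, volume name, and endpoint value is hypothetical:
INSERT INTO snowflake.iceberg_table.iceberg_tables (
data__name,
data__external_volume,
data__base_location,
data__comment,
database_name,
schema_name,
endpoint,
createMode
)
SELECT
'CUSTOMER_EVENTS', -- hypothetical table name
'MY_EXT_VOLUME', -- hypothetical external volume for data and metadata files
'customer_events/', -- directory under the external volume for this table
'Managed Iceberg table for customer events',
'ANALYTICS_DB', -- hypothetical database
'PUBLIC', -- hypothetical schema
'myorg-myaccount', -- hypothetical organization-account identifier
'errorIfExists'; -- fail if the table already exists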
# Description fields are for documentation purposes
- name: iceberg_tables
props:
- name: database_name
value: string
description: Required parameter for the iceberg_tables resource.
- name: schema_name
value: string
description: Required parameter for the iceberg_tables resource.
- name: endpoint
value: string
description: Required parameter for the iceberg_tables resource.
- name: name
value: string
description: >
Name of the iceberg table
- name: comment
value: string
description: >
User comment associated with an object in the dictionary.
- name: change_tracking
value: boolean
description: >
True if change tracking is enabled, allowing streams and CHANGES to be used on the entity.
- name: max_data_extension_time_in_days
value: integer
description: >
Maximum number of days to extend data retention beyond the retention period to prevent a stream becoming stale.
- name: external_volume
value: string
description: >
Name of an external volume that will be used for persisted Iceberg metadata and data files.
- name: data_retention_time_in_days
value: integer
description: >
Number of days to retain the old version of deleted or updated data.
- name: catalog_sync
value: string
description: >
Name of the catalog integration to sync this table
- name: catalog
value: string
description: >
Name of the catalog integration to use for iceberg tables
- name: storage_serialization_policy
value: string
description: >
Storage serialization policy used for a managed Iceberg table. This includes encodings and compression.
valid_values: ['COMPATIBLE', 'OPTIMIZED']
- name: catalog_table_name
value: string
description: >
Name of the table as recognized by the catalog.
- name: catalog_namespace
value: string
description: >
Catalog namespace for the table. The namespace defined when the table was created. Otherwise, the default namespace associated with the catalog integration used by the table. If you’re syncing the table to Snowflake Open Catalog, the default is null.
- name: cluster_by
value: array
description: >
Specifies one or more columns or column expressions in the table as the clustering key.
- name: columns
value: array
- name: base_location
value: string
description: >
The path to a directory where Snowflake can write data and metadata files for the table.
- name: replace_invalid_characters
value: boolean
description: >
Specifies whether to replace invalid characters in the column names
- name: metadata_file_path
value: string
description: >
Specifies the relative path of the Iceberg metadata file to use for column definitions.
- name: constraints
value: array
- name: createMode
value: string
description: Query parameter allowing support for different modes of resource creation. Possible values include: - `errorIfExists`: Throws an error if you try to create a resource that already exists. - `orReplace`: Automatically replaces the existing resource with the current one. - `ifNotExists`: Creates a new resource when an alter is requested for a non-existent resource.
- name: copyGrants
value: boolean
description: Query parameter to enable copy grants when creating the object.
DELETE examples
- drop_iceberg_table
Drop an iceberg table
DELETE FROM snowflake.iceberg_table.iceberg_tables
WHERE database_name = '{{ database_name }}' --required
AND schema_name = '{{ schema_name }}' --required
AND name = '{{ name }}' --required
AND endpoint = '{{ endpoint }}' --required
AND ifExists = '{{ ifExists }}'
AND type = '{{ type }}';
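A concrete drop that succeeds silently when the table is absent might look like the following sketch; the identifiers are hypothetical:
DELETE FROM snowflake.iceberg_table.iceberg_tables
WHERE database_name = 'ANALYTICS_DB'
AND schema_name = 'PUBLIC'
AND name = 'CUSTOMER_EVENTS'
AND endpoint = 'myorg-myaccount'
AND ifExists = 'true'; -- do not raise an error if the table does not exist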
Lifecycle Methods
- create_snowflake_managed_iceberg_table_as_select
- create_unmanaged_iceberg_table_from_aws_glue_catalog
- create_unmanaged_iceberg_table_from_delta
- create_unmanaged_iceberg_table_from_iceberg_files
- create_unmanaged_iceberg_table_from_iceberg_rest
- resume_recluster_iceberg_table
- suspend_recluster_iceberg_table
- refresh_iceberg_table
- convert_to_managed_iceberg_table
- undrop_iceberg_table
- clone_snowflake_managed_iceberg_table
- create_snowflake_managed_iceberg_table_like
Create a snowflake managed iceberg table as select
EXEC snowflake.iceberg_table.iceberg_tables.create_snowflake_managed_iceberg_table_as_select
@database_name='{{ database_name }}', --required
@schema_name='{{ schema_name }}', --required
@query='{{ query }}', --required
@endpoint='{{ endpoint }}', --required
@createMode='{{ createMode }}',
@copyGrants={{ copyGrants }}
@@json=
'{
"name": "{{ name }}",
"columns": "{{ columns }}",
"external_volume": "{{ external_volume }}",
"cluster_by": "{{ cluster_by }}",
"base_location": "{{ base_location }}",
"comment": "{{ comment }}"
}';
Create an unmanaged iceberg table from AWS Glue catalog
EXEC snowflake.iceberg_table.iceberg_tables.create_unmanaged_iceberg_table_from_aws_glue_catalog
@database_name='{{ database_name }}', --required
@schema_name='{{ schema_name }}', --required
@endpoint='{{ endpoint }}', --required
@createMode='{{ createMode }}'
@@json=
'{
"name": "{{ name }}",
"external_volume": "{{ external_volume }}",
"catalog_table_name": "{{ catalog_table_name }}",
"catalog_namespace": "{{ catalog_namespace }}",
"replace_invalid_characters": {{ replace_invalid_characters }},
"auto_refresh": {{ auto_refresh }},
"catalog": "{{ catalog }}",
"comment": "{{ comment }}"
}';
Create an unmanaged iceberg table from Delta
EXEC snowflake.iceberg_table.iceberg_tables.create_unmanaged_iceberg_table_from_delta
@database_name='{{ database_name }}', --required
@schema_name='{{ schema_name }}', --required
@endpoint='{{ endpoint }}', --required
@createMode='{{ createMode }}'
@@json=
'{
"name": "{{ name }}",
"external_volume": "{{ external_volume }}",
"replace_invalid_characters": {{ replace_invalid_characters }},
"base_location": "{{ base_location }}",
"catalog": "{{ catalog }}",
"comment": "{{ comment }}"
}';
Create an unmanaged iceberg table from Iceberg files
EXEC snowflake.iceberg_table.iceberg_tables.create_unmanaged_iceberg_table_from_iceberg_files
@database_name='{{ database_name }}', --required
@schema_name='{{ schema_name }}', --required
@endpoint='{{ endpoint }}', --required
@createMode='{{ createMode }}'
@@json=
'{
"name": "{{ name }}",
"external_volume": "{{ external_volume }}",
"replace_invalid_characters": {{ replace_invalid_characters }},
"metadata_file_path": "{{ metadata_file_path }}",
"catalog": "{{ catalog }}",
"comment": "{{ comment }}"
}';
Create an unmanaged iceberg table from Iceberg REST
EXEC snowflake.iceberg_table.iceberg_tables.create_unmanaged_iceberg_table_from_iceberg_rest
@database_name='{{ database_name }}', --required
@schema_name='{{ schema_name }}', --required
@endpoint='{{ endpoint }}', --required
@createMode='{{ createMode }}'
@@json=
'{
"name": "{{ name }}",
"external_volume": "{{ external_volume }}",
"catalog_table_name": "{{ catalog_table_name }}",
"catalog_namespace": "{{ catalog_namespace }}",
"replace_invalid_characters": {{ replace_invalid_characters }},
"auto_refresh": {{ auto_refresh }},
"catalog": "{{ catalog }}",
"comment": "{{ comment }}"
}';
Resume recluster of an iceberg table (iceberg tables managed by an external catalog do not allow clustering)
EXEC snowflake.iceberg_table.iceberg_tables.resume_recluster_iceberg_table
@database_name='{{ database_name }}', --required
@schema_name='{{ schema_name }}', --required
@name='{{ name }}', --required
@endpoint='{{ endpoint }}', --required
@ifExists={{ ifExists }};
Suspend recluster of an iceberg table (iceberg tables managed by an external catalog do not allow clustering)
EXEC snowflake.iceberg_table.iceberg_tables.suspend_recluster_iceberg_table
@database_name='{{ database_name }}', --required
@schema_name='{{ schema_name }}', --required
@name='{{ name }}', --required
@endpoint='{{ endpoint }}', --required
@ifExists={{ ifExists }};
Refreshes the metadata for an Apache Iceberg table that uses an external Iceberg catalog
EXEC snowflake.iceberg_table.iceberg_tables.refresh_iceberg_table
@database_name='{{ database_name }}', --required
@schema_name='{{ schema_name }}', --required
@name='{{ name }}', --required
@endpoint='{{ endpoint }}', --required
@ifExists={{ ifExists }}
@@json=
'{
"metadata_file_relative_path": "{{ metadata_file_relative_path }}"
}';
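For example, a concrete refresh against a specific metadata file might look like the following sketch; the identifiers and the relative path are hypothetical:
EXEC snowflake.iceberg_table.iceberg_tables.refresh_iceberg_table
@database_name='ANALYTICS_DB',
@schema_name='PUBLIC',
@name='CUSTOMER_EVENTS',
@endpoint='myorg-myaccount',
@ifExists=true
@@json=
'{
"metadata_file_relative_path": "metadata/v2.metadata.json"
}';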
Convert unmanaged iceberg table to managed iceberg table
EXEC snowflake.iceberg_table.iceberg_tables.convert_to_managed_iceberg_table
@database_name='{{ database_name }}', --required
@schema_name='{{ schema_name }}', --required
@name='{{ name }}', --required
@endpoint='{{ endpoint }}', --required
@ifExists={{ ifExists }}
@@json=
'{
"base_location": "{{ base_location }}",
"storage_serialization_policy": "{{ storage_serialization_policy }}"
}';
Undrop an iceberg table
EXEC snowflake.iceberg_table.iceberg_tables.undrop_iceberg_table
@database_name='{{ database_name }}', --required
@schema_name='{{ schema_name }}', --required
@name='{{ name }}', --required
@endpoint='{{ endpoint }}' --required
;
Clone a snowflake managed iceberg table
EXEC snowflake.iceberg_table.iceberg_tables.clone_snowflake_managed_iceberg_table
@database_name='{{ database_name }}', --required
@schema_name='{{ schema_name }}', --required
@name='{{ name }}', --required
@endpoint='{{ endpoint }}', --required
@createMode='{{ createMode }}',
@copyGrants={{ copyGrants }},
@targetDatabase='{{ targetDatabase }}',
@targetSchema='{{ targetSchema }}'
@@json=
'{
"name": "{{ name }}",
"point_of_time": "{{ point_of_time }}"
}';
Creates a new table with the same column definitions as an existing table, but without copying data from the existing table.
EXEC snowflake.iceberg_table.iceberg_tables.create_snowflake_managed_iceberg_table_like
@database_name='{{ database_name }}', --required
@schema_name='{{ schema_name }}', --required
@name='{{ name }}', --required
@endpoint='{{ endpoint }}', --required
@createMode='{{ createMode }}',
@copyGrants={{ copyGrants }},
@targetDatabase='{{ targetDatabase }}',
@targetSchema='{{ targetSchema }}'
@@json=
'{
"name": "{{ name }}",
"cluster_by": "{{ cluster_by }}",
"external_volume": "{{ external_volume }}",
"base_location": "{{ base_location }}",
"comment": "{{ comment }}"
}';