You can query all tables in all datasets of a project with a single script. Note that this only works if all the datasets are in the same [region][1].
```sql
DECLARE table_list ARRAY<STRING>;
DECLARE query_tables STRING;
-- loop through all datasets
FOR var_dataset IN (
SELECT
catalog_name,
schema_name
FROM
`${project_id}`.`region-${region}`.INFORMATION_SCHEMA.SCHEMATA
)
DO
-- get all raw tables in the dataset
SET query_tables = FORMAT(
"""SELECT ARRAY_AGG(table_id), FROM `%s.%s`.__TABLES__;""",
var_dataset.catalog_name,
var_dataset.schema_name
);
EXECUTE IMMEDIATE query_tables INTO table_list;
-- loop through the tables inside of a dataset
FOR var_table IN (
WITH expression AS (
SELECT table_list
)
SELECT table_id FROM expression, UNNEST(table_list) AS table_id
)
DO
-- do whatever you want with each table
SELECT var_dataset.catalog_name, var_dataset.schema_name, var_table.table_id;
END FOR;
END FOR;
```
The script uses [FOR...IN][2] to loop through all datasets in the project, nested with
another [FOR...IN][2] to loop through the tables of each dataset.
Inside the outer loop, [EXECUTE IMMEDIATE][3] fetches all the tables of the current dataset. Note that `EXECUTE IMMEDIATE ... INTO` only allows a single row to be saved into a variable, so all of the returned table names are collapsed into one row with [ARRAY_AGG][4].
That single-row array is then turned back into a "table_expression" using a WITH statement and UNNEST.
A "table_expression" is the type of parameter that [FOR...IN][2] expects to loop through.
I used this pattern to dynamically create snapshots of all the tables in a project.
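For reference, the snapshot step could replace the inner `SELECT` in the script above. A hedged sketch using the `CREATE SNAPSHOT TABLE` DDL (the `_snapshot` suffix and the 7-day expiration are my assumptions; adapt them to your setup):

```sql
-- Build and run a CREATE SNAPSHOT TABLE statement for the current table.
-- Assumes the snapshot lives in the same dataset as the source table.
EXECUTE IMMEDIATE FORMAT(
  """CREATE SNAPSHOT TABLE IF NOT EXISTS `%s.%s.%s_snapshot`
  CLONE `%s.%s.%s`
  OPTIONS (expiration_timestamp = TIMESTAMP_ADD(CURRENT_TIMESTAMP(), INTERVAL 7 DAY));""",
  var_dataset.catalog_name, var_dataset.schema_name, var_table.table_id,
  var_dataset.catalog_name, var_dataset.schema_name, var_table.table_id
);
```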
One thing to consider is how many iterations such a loop can run. I recall reading a limit of around a million, but I can't find the number in the official documentation.
[1]: https://cloud.google.com/compute/docs/regions-zones
[2]: https://cloud.google.com/bigquery/docs/reference/standard-sql/procedural-language#for-in
[3]: https://cloud.google.com/bigquery/docs/reference/standard-sql/procedural-language#execute_immediate
[4]: https://cloud.google.com/bigquery/docs/reference/standard-sql/aggregate_functions#array_agg