Hi, is there a way to delete rows in BigQuery from a Python script? I tried looking in the documentation and searching for an example on the internet, but I could not find anything.
Something that looks like this:
table_id = "a.dataset.table" # Table ID for faulty_gla_entry
statement = """ DELETE FROM a.dataset.table where value = 2 """
client.delete(table_id, statement)
As @SergeyGeron stated, https://googleapis.dev/python/bigquery/latest/usage/index.html#bigquery-basics has some nice material.
I wrote something like this:
from google.cloud import bigquery
client = bigquery.Client()
query = """DELETE FROM a.dataset.table WHERE value = 4"""
query_job = client.query(query)
print(query_job.result())
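If you want to confirm how many rows the DELETE removed, the finished job also exposes num_dml_affected_rows; a small follow-up to the snippet above:
query_job.result()  # make sure the DML statement has finished
print(query_job.num_dml_affected_rows)  # number of rows removed by the DELETE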
Here you can see code and documentation for executing a query with Python. Below is an example using a DELETE statement.
from google.cloud import bigquery
client = bigquery.Client()
dml_statement = (
"Delete from dataset.Inventory where ID=5"
)
query_job = client.query(dml_statement) # API request
query_job.result() # Waits for statement to finish
How to build queries in BigQuery
Related
When I create a temp table via Python, an error is thrown:
400 Use of CREATE TEMPORARY TABLE requires a script or session
How can I create a session?
from google.colab import auth
from google.cloud import bigquery
from google.colab import data_table
client = bigquery.Client(project=project, location = location)
client.query('''
create temp table t_acquisted_users as
select *
from table_a
limit 10
''').result()
You can create a session through the BigQuery API by setting the create_session parameter in a job config, for example:
job_config=bigquery.QueryJobConfig(create_session=True)
More details in this excellent article:
https://dev.to/stack-labs/bigquery-transactions-over-multiple-queries-with-sessions-2ll5
That's how I fixed it quickly; awaiting a better answer from others.
# create a session
client0 = bigquery.Client(project=project, location=location)
job = client0.query(
    "SELECT 1;",  # a trivial query that can't fail, just to open the session
    job_config=bigquery.QueryJobConfig(create_session=True)
)
session_id = job.session_info.session_id
job.result()
# set default session
client = bigquery.Client(
    project=project, location=location,
    default_query_job_config=bigquery.QueryJobConfig(
        connection_properties=[
            bigquery.query.ConnectionProperty(
                key="session_id", value=session_id
            )
        ]
    )
)
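With that session set as the client's default, the temp-table statement from the question should now go through, e.g.:
# the CREATE TEMP TABLE statement from the question, now running inside the session
client.query('''
create temp table t_acquisted_users as
select *
from table_a
limit 10
''').result()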
I'm using the following Python code to delete some rows in a BigQuery table:
from google.cloud import bigquery
bigquery_client = bigquery.Client(project='my-project')
query_delete = f"""
delete from `my_table` where created_at >= '2021-01-01'
"""
print(query_delete)
# query_delete
job = bigquery_client.query(query)
job.result()
print("Deleted!")
However, the rows don't seem to be deleted when doing this from Python. What am I missing?
I think the below code snippet should work for you. You should pass query_delete instead of query:
from google.cloud import bigquery
bigquery_client = bigquery.Client(project='my-project')
query_delete = f"""
delete from `my_table` where created_at >= '2021-01-01'
"""
print(query_delete)
job = bigquery_client.query(query_delete)
job.result()
print("Deleted!")
Or you can try the below formatted query:
query_delete = (
"DELETE from my_table "
"WHERE created_at >= '2021-01-01'"
)
You probably need to enable standard SQL for your BigQuery queries.
https://stackoverflow.com/a/42831957/1683626
It should be the default, as noted in the official documentation, but maybe you changed it to legacy SQL.
In the Cloud Console and the client libraries, standard SQL is the default.
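If you want to be explicit about the dialect from Python instead of relying on the default, here is a minimal sketch using the table from the question and the use_legacy_sql option of the job config:
from google.cloud import bigquery

bigquery_client = bigquery.Client(project='my-project')
job_config = bigquery.QueryJobConfig(use_legacy_sql=False)  # force standard SQL
job = bigquery_client.query(
    "delete from `my_table` where created_at >= '2021-01-01'",
    job_config=job_config,
)
job.result()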
I'm trying to download data from a BigQuery public dataset and store it locally in a CSV file. When I add LIMIT 10 at the end of the query, my code works, but if not, I get an error that says:
Response too large to return. Consider setting allowLargeResults to true in your job configuration.
Thank you in advance!
Here is my code:
import pandas as pd
import pandas_gbq as gbq
import tqdm

def get_data(query, project_id):
    data = gbq.read_gbq(query, project_id=project_id, configuration={"allow_large_results": True})
    data.to_csv('blockchain.csv', header=True, index=False)

if __name__ == "__main__":
    query = """SELECT * FROM `bigquery-public-data.crypto_bitcoin.transactions` WHERE block_timestamp>='2017-09-1' and block_timestamp<'2017-10-1';"""
    project_id = "bitcoin-274091"
    get_data(query, project_id)
As mentioned by @Graham Polley, you may first consider saving the results of your source query to a BigQuery table and then extracting the data from that table to GCS. Due to current pandas_gbq library limitations, I would recommend using the google-cloud-bigquery package, the officially advised Python library for interacting with the BigQuery API.
In the following example, I use the bigquery.Client.query() method to trigger a query job with a job_config configuration and then invoke the bigquery.Client.extract_table() method to export the data to a GCS bucket:
from google.cloud import bigquery

client = bigquery.Client()

# fully qualified ID of the staging table that will hold the query results
# (placeholder; replace with your own project, dataset and table)
table_id = "project_id.dataset.table"

job_config = bigquery.QueryJobConfig(destination=table_id)
sql = """SELECT * FROM ..."""
query_job = client.query(sql, job_config=job_config)
query_job.result()

# export the staging table to a CSV file in GCS
gs_path = "gs://bucket/test.csv"
extract_job = client.extract_table(table_id, gs_path, location='US')
extract_job.result()
At the end you can delete the table containing the staging data.
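A minimal sketch of that cleanup step, reusing the table_id from the snippet above:
# drop the staging table once the extract has finished
client.delete_table(table_id)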
I'm running parameterised BigQuery queries inside a Flask app exactly as described in Google's docs.
I'm seeing some unexpected results, so I simply want to print the query to my terminal/console for debugging purposes. When I do this, I only see the query with the parameterised placeholders, not the values.
Does anyone know how to get a view of the query with the values being run?
For example:
query = "select * from dogs where breed = @dog_breed"
query_params = [
bigquery.ScalarQueryParameter("dog_breed", "STRING", "kokoni")
]
job_config = bigquery.QueryJobConfig()
job_config.query_parameters = query_params
print(query) # This will only print query as above, not with value 'kokoni'
query_job = client.query(
query,
job_config=job_config,
)
You could use the list_jobs method to retrieve the information from the Job class, like in the example below:
from google.cloud import bigquery
client = bigquery.Client()
# List the 3 most recent jobs in reverse chronological order.
# Omit the max_results parameter to list jobs from the past 6 months.
print("Last 3 jobs:")
for job in client.list_jobs(max_results=3): # API request(s)
print(job.query)
print(job.query_parameters)
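If you only need the values for a quick debug print, you can also splice them into the query text yourself; a rough sketch (for printing only, not for execution):
# debug-only: substitute parameter values into the query string for printing
debug_query = query
for param in query_params:
    debug_query = debug_query.replace("@" + param.name, repr(param.value))
print(debug_query)  # select * from dogs where breed = 'kokoni'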
This is the query that I have been running in BigQuery that I want to run in my Python script. How would I change this, or what do I have to add, for it to run in Python?
#standardSQL
SELECT
Serial,
MAX(createdAt) AS Latest_Use,
SUM(ConnectionTime/3600) as Total_Hours,
COUNT(DISTINCT DeviceID) AS Devices_Connected
FROM `dataworks-356fa.FirebaseArchive.testf`
WHERE Model = "BlueBox-pH"
GROUP BY Serial
ORDER BY Serial
LIMIT 1000;
From what I have been researching, it seems that I can't save this query as a permanent table using Python. Is that true? And if it is true, is it still possible to export a temporary table?
You need to use the BigQuery Python client lib, then something like this should get you up and running:
from google.cloud import bigquery
client = bigquery.Client(project='PROJECT_ID')
query = "SELECT...."
dataset = client.dataset('dataset')
table = dataset.table(name='table')
job = client.run_async_query('my-job', query)
job.destination = table
job.write_disposition= 'WRITE_TRUNCATE'
job.begin()
https://googlecloudplatform.github.io/google-cloud-python/stable/bigquery-usage.html
See the current BigQuery Python client tutorial.
Here is another way using a JSON file for the service account:
>>> from google.cloud import bigquery
>>>
>>> CREDS = 'test_service_account.json'
>>> client = bigquery.Client.from_service_account_json(json_credentials_path=CREDS)
>>> job = client.query('select * from dataset1.mytable')
>>> for row in job.result():
... print(row)
This is a good usage guide:
https://googleapis.github.io/google-cloud-python/latest/bigquery/usage/index.html
To simply run and write a query:
# from google.cloud import bigquery
# client = bigquery.Client()
# dataset_id = 'your_dataset_id'
job_config = bigquery.QueryJobConfig()
# Set the destination table
table_ref = client.dataset(dataset_id).table("your_table_id")
job_config.destination = table_ref
sql = """
SELECT corpus
FROM `bigquery-public-data.samples.shakespeare`
GROUP BY corpus;
"""
# Start the query, passing in the extra configuration.
query_job = client.query(
sql,
# Location must match that of the dataset(s) referenced in the query
# and of the destination table.
location="US",
job_config=job_config,
) # API request - starts the query
query_job.result() # Waits for the query to finish
print("Query results loaded to table {}".format(table_ref.path))
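If you also want to look at the written rows from Python, the client can read them back from the destination table; a small sketch using the same table_ref:
# read a few rows back from the destination table
for row in client.list_rows(table_ref, max_results=10):
    print(row.corpus)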
I personally prefer querying using pandas:
# BQ authentication
import pandas as pd
import pydata_google_auth

SCOPES = [
    'https://www.googleapis.com/auth/cloud-platform',
    'https://www.googleapis.com/auth/drive',
]

credentials = pydata_google_auth.get_user_credentials(
    SCOPES,
    # Set auth_local_webserver to True to have a slightly more convenient
    # authorization flow. Note, this doesn't work if you're running from a
    # notebook on a remote server, such as over SSH or with Google Colab.
    auth_local_webserver=True,
)

query = "SELECT * FROM my_table"
data = pd.read_gbq(query, project_id=MY_PROJECT_ID, credentials=credentials, dialect='standard')
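The result is a regular pandas DataFrame, so persisting it locally is a single call (the filename here is just an illustration):
data.to_csv('results.csv', index=False)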
The pythonbq package is very simple to use and a great place to start. It uses python-gbq.
To get started you would need to generate a BQ JSON key for external app access. You can generate your key here.
Your code would look something like:
from pythonbq import pythonbq
myProject=pythonbq(
bq_key_path='path/to/bq/key.json',
project_id='myGoogleProjectID'
)
SQL_CODE="""
SELECT
Serial,
MAX(createdAt) AS Latest_Use,
SUM(ConnectionTime/3600) as Total_Hours,
COUNT(DISTINCT DeviceID) AS Devices_Connected
FROM `dataworks-356fa.FirebaseArchive.testf`
WHERE Model = "BlueBox-pH"
GROUP BY Serial
ORDER BY Serial
LIMIT 1000;
"""
output=myProject.query(sql=SQL_CODE)