Can't append to an existing table. Fails silently - python

I'm trying to dump a pandas DataFrame into an existing Snowflake table (via a jupyter notebook).
When I run the code below no errors are raised, but no data is written to the destination SF table (df has ~800 rows).
import os

from sqlalchemy import create_engine
from snowflake.sqlalchemy import URL

sf_engine = create_engine(
    URL(
        user=os.environ['SF_PROD_EID'],
        password=os.environ['SF_PROD_PWD'],
        account=account,
        warehouse=warehouse,
        database=database,
    )
)
df.to_sql(
    "test_table",
    con=sf_engine,
    schema=schema,
    if_exists="append",
    index=False,
    chunksize=16000,
)
If I check the SF History, I can see that the queries apparently ran without issue.
If I pull the query from the SF History UI and run it manually in the Snowflake UI, the data shows up in the destination table.
If I try to use locopy I run into the same issue.
If the table does not exist beforehand, the same code above creates the table and loads the rows without a problem.
Here's where it gets weird. When I run the pd.to_sql command to append, then drop the destination table, and then issue a select count(*) from destination_table, a table with that name still exists and contains (only) the data I've been trying to write. I'm thinking it may be a case-sensitive table-naming situation?
Any insight is appreciated :)

Try adding role="<role>" and schema="<schema>" to the URL:
engine = create_engine(URL(
    account=os.getenv("SNOWFLAKE_ACCOUNT"),
    user=os.getenv("SNOWFLAKE_USER"),
    password=os.getenv("SNOWFLAKE_PASSWORD"),
    role="<role>",
    warehouse="<warehouse>",
    database="<database>",
    schema="<schema>"
))

Issue was due to how I set up the database connection and the case-sensitivity of the table name. Turns out that I was writing to a table called DB.SCHEMA."db.schema.test_table" (note that the db.schema slug becomes part of the table name). Don't be like me, kids. Use upper-case table names in Snowflake!
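For anyone hitting the same wall, here's a minimal sketch of the fix described above (the account/warehouse/database/schema placeholders and the sample frame are assumptions): keep database and schema in the connection URL and pass a bare, unqualified table name to to_sql, so the fully-qualified slug doesn't end up inside a quoted identifier.

import os
import pandas as pd
from sqlalchemy import create_engine
from snowflake.sqlalchemy import URL

sf_engine = create_engine(
    URL(
        user=os.environ["SF_PROD_EID"],
        password=os.environ["SF_PROD_PWD"],
        account="<account>",
        warehouse="<warehouse>",
        database="<database>",
        schema="<schema>",   # schema belongs here, not in the table name
    )
)

df = pd.DataFrame({"COL_A": [1, 2, 3]})  # hypothetical frame

df.to_sql(
    "test_table",        # bare name only -- no "db.schema." prefix
    con=sf_engine,
    if_exists="append",
    index=False,
)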

Related

write dataframe from jupyter notebook to snowflake without define table column type

I have a data frame in a Jupyter notebook. My objective is to import this df into Snowflake as a new table.
Is there any way to write a new table into Snowflake directly without defining any of the table columns' names and types?
I am using
import snowflake.connector as snow
from snowflake.connector.pandas_tools import write_pandas
from sqlalchemy import create_engine
import pandas as pd

connection = snow.connect(
    user='XXX',
    password='XXX',
    account='XXX',
    warehouse='COMPUTE_WH',
    database='SNOWPLOW',
    schema='DBT_WN'
)
df.to_sql('aaa', connection, index=False)
It ran into an error:
DatabaseError: Execution failed on sql 'SELECT name FROM sqlite_master WHERE type='table' AND name=?;': not all arguments converted during string formatting
Can anyone provide the sample code to fix this issue?
Here's one way to do it -- apologies in advance for my code formatting in SO combined with Python's spaces-vs-tabs "model". Check the tabs/spaces if you cut-n-paste ...
Because of the Snowflake security model, be sure to also specify the ROLE you are using in your connection parameters. (Often the default role is 'PUBLIC'.)
Since you already have SQLAlchemy in the mix, this idea doesn't use Snowflake's write_pandas, so it isn't a good answer for large dataframes. There are some odd behaviors with SQLAlchemy and Snowflake: make sure the dataframe column names are upper case, yet use a lowercase table name in the argument to to_sql().
def df2sf_alch(target_df, target_table):
    # create a sqlAlchemy connection object (sf_account_url is your Snowflake account URL)
    engine = create_engine(f"snowflake://{sf_account_url}",
                           creator=lambda: connection)
    # re/create table in Snowflake
    try:
        # sqlAlchemy creates the table based on a lower-case table name
        # and it works to have uppercase df column names
        target_df.to_sql(target_table.lower(), con=engine, if_exists='replace', index=False)
        print(f"Table {target_table.upper()} re/created")
    except Exception as e:
        print(f"Could not replace table {target_table.upper()}: {e}")
    nrows = connection.cursor().execute(f"select count(*) from {target_table}").fetchone()[0]
    print(f"Table {target_table.upper()} rows = {nrows}")
Note that this function needs to be changed to reflect the appropriate Snowflake account URL in order to create the SQLAlchemy connection object. Also, assuming the case-naming oddities are taken care of in the df, along with your already-defined connection, you'd call this function by simply passing the df and the name of the table, like df2sf_alch(my_df, 'MY_TABLE').
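For larger dataframes, here's a hedged sketch of the write_pandas route the question already imports (the table name and the auto_create_table flag are assumptions; auto_create_table requires a reasonably recent connector version):

import snowflake.connector as snow
from snowflake.connector.pandas_tools import write_pandas

# reuse the connection from the question
connection = snow.connect(user='XXX', password='XXX', account='XXX',
                          warehouse='COMPUTE_WH', database='SNOWPLOW', schema='DBT_WN')

# write_pandas bulk-loads the frame through a stage; on newer connector versions
# auto_create_table=True also creates the table from the dataframe's dtypes
success, nchunks, nrows, _ = write_pandas(connection, df, 'AAA', auto_create_table=True)
print(success, nchunks, nrows)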

Reading Data from Temp Table in Snowflake into Jupyter Notebook

I am trying to query data from Snowflake into a Jupyter Notebook. Since some columns were not present in the original table, I created a temporary table that has the required new columns. Unfortunately, due to work restrictions, I can't show the whole output here, but when I ran the CREATE TEMPORARY TABLE command I got the following output.
Table CUSTOMER_ACCOUNT_NEW successfully created.
Here is the query I used to make the TEMP table.
CREATE OR REPLACE TEMPORARY TABLE DATA_LAKE.CUSTOMER.CUSTOMER_ACCOUNT_NEW AS
SELECT ID,
VERIFICATION_PROFILE,
get_path(VERIFICATION_PROFILE,'identityMindMostRecentResults')::VARCHAR AS identitymind,
get_path(VERIFICATION_PROFILE,'identityMindMostRecentResults."mm:1"')::VARCHAR AS mm1,
get_path(VERIFICATION_PROFILE,'identityMindMostRecentResults."mm:2"')::VARCHAR AS mm2,
get_path(VERIFICATION_PROFILE,'identityMindMostRecentResults.res')::VARCHAR AS res,
get_path(VERIFICATION_PROFILE,'identityMindMostRecentResults."ss:1"')::VARCHAR AS sanctions,
get_path(VERIFICATION_PROFILE,'autoVerified.facts.account.riskScore')::VARCHAR AS riskscore,
get_path(VERIFICATION_PROFILE,'autoVerified.facts.giact.verificationResponse')::VARCHAR AS GIACT,
get_path(VERIFICATION_PROFILE,'autoVerified.facts.account.type')::VARCHAR AS acct_type,
get_path(VERIFICATION_PROFILE,'autoVerified.verified')::VARCHAR AS verified,
get_path(VERIFICATION_PROFILE,'bankInformationProvided')::VARCHAR AS Bank_info_given,
get_path(VERIFICATION_PROFILE,'businessInformationProvided')::VARCHAR AS Business_info_given,
get_path(VERIFICATION_PROFILE,'autoVerified.facts.account.industry.riskLevel')::VARCHAR AS industry_risk
FROM DATA_LAKE.CUSTOMER.CUSTOMER_ACCOUNT
WHERE DATEDIFF('day',TO_DATE(TIME_UPDATED),CURRENT_DATE())<=90
I would like to mention that VERIFICATION_PROFILE is a JSON blob, hence I had to use get_path to retrieve the values. Moreover, keys like mm:1 are originally in double quotes, so I used them as-is, and it works fine in Snowflake.
Then, using the Snowflake Python connector, I tried to run the following query:
import pandas as pd
import warnings
import snowflake.connector as sf
ctx = sf.connect(
    user='*****',
    password='*****',
    account='*******',
    warehouse='********',
    database='DATA_LAKE',
    schema='CUSTOMER'
)
#create cursor
curs = ctx.cursor()
sqlnew2 = "SELECT * \
FROM DATA_LAKE.CUSTOMER.CUSTOMER_ACCOUNT_NEW;"
curs.execute(sqlnew2)
df = curs.fetch_pandas_all()
Here curs is the cursor object created earlier. Then I got the following message:
ProgrammingError: 002003 (42S02): SQL compilation error:
Object 'DATA_LAKE.CUSTOMER.CUSTOMER_ACCOUNT_NEW' does not exist or not authorized.
May I know whether the Snowflake connector allows us to query data from temporary tables or not? Help/advice is greatly appreciated.
Temp tables only live as long as the session in which they were created:
Temporary tables can have a Time Travel retention period of 1 day; however, a temporary table is purged once the session (in which the table was created) ends so the actual retention period is for 24 hours or the remainder of the session, whichever is shorter.
You might want to use a transient table instead:
https://docs.snowflake.com/en/user-guide/tables-temp-transient.html#comparison-of-table-types
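As a rough sketch of that suggestion (the column list is trimmed and simply mirrors the question; whether a transient table fits your retention/cost needs is an assumption), you could create the table as TRANSIENT so it persists beyond the session and is then visible to the notebook:

# reuse the ctx connection from the question
curs = ctx.cursor()

# a transient table persists across sessions (no Fail-safe, unlike permanent tables)
curs.execute("""
    CREATE OR REPLACE TRANSIENT TABLE DATA_LAKE.CUSTOMER.CUSTOMER_ACCOUNT_NEW AS
    SELECT ID,
           get_path(VERIFICATION_PROFILE, 'autoVerified.verified')::VARCHAR AS verified
    FROM DATA_LAKE.CUSTOMER.CUSTOMER_ACCOUNT
    WHERE DATEDIFF('day', TO_DATE(TIME_UPDATED), CURRENT_DATE()) <= 90
""")

# now the table can be queried from this or any later session
curs.execute("SELECT * FROM DATA_LAKE.CUSTOMER.CUSTOMER_ACCOUNT_NEW")
df = curs.fetch_pandas_all()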

sqlalchemy to_sql if_exists = "replace" creates a table with different prefix

When I try to execute the following code, it drops the actual table in the database but doesn't recreate the same one. Instead of creating the table as dbo.TableName, it creates username.TableName.
import sqlalchemy

engine = sqlalchemy.create_engine(
    "mssql+pyodbc://server/dbname?driver=ODBC+Driver+13+for+SQL+Server")
df.to_sql("OTD_1_DELIVERY_TRACKING_F_IMPORT", con=engine, if_exists="replace", index=False)
Does anyone know how to fix this so it recreates the dbo.TableName table?
This is happening because the user has a default schema other than dbo. In very old versions of SQL Server each user had their own schema, and this is a vestige of that behavior.
So when that user runs
DROP TABLE OTD_1_DELIVERY_TRACKING_F_IMPORT
It looks in the user's default schema first, and not finding anything, then looks in the dbo schema, and drops that table. But when the user runs
CREATE TABLE OTD_1_DELIVERY_TRACKING_F_IMPORT ...
It's created in the user's default schema.
The easy fix is to change the user's DEFAULT_SCHEMA to dbo, e.g.
ALTER USER [SomeUser] WITH DEFAULT_SCHEMA = dbo;
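Alternatively (this is a suggestion of mine, not part of the answer above), you can pin the schema from the pandas side, since to_sql accepts a schema argument; a minimal sketch with the question's engine and a hypothetical frame:

import sqlalchemy
import pandas as pd

engine = sqlalchemy.create_engine(
    "mssql+pyodbc://server/dbname?driver=ODBC+Driver+13+for+SQL+Server")

df = pd.DataFrame({"col": [1, 2, 3]})  # hypothetical frame

# qualify the schema explicitly so both the DROP and the CREATE happen under dbo
df.to_sql("OTD_1_DELIVERY_TRACKING_F_IMPORT", con=engine,
          schema="dbo", if_exists="replace", index=False)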

Adding a column from an existing BQ table to another BQ table using Python

I am trying to experiment with creating new tables from existing BQ tables, all within Python. So far I've successfully created the table using some similar code, but now I want to add another column to it from another table, which I have not been successful with. I think the problem lies somewhere in my SQL code.
Basically what I want here is to add another column named "ip_address" and put all the info from another table into that column.
I've tried splitting up the two SQL statements and running them separately, and I've tried many different combinations of the commands (taking out CHAR, adding (32) after it, combining everything into one statement, etc.), and I still run into problems.
from google.cloud import bigquery

def alter(client, sql_alter, job_config, table_id):
    query_job = client.query(sql_alter, job_config=job_config)
    query_job.result()
    print(f'Query results appended to table {table_id}')

def main():
    client = bigquery.Client.from_service_account_json('my_json')
    table_id = 'ref.datasetid.tableid'
    job_config = bigquery.QueryJobConfig()
    sql_alter = """
    ALTER TABLE `ref.datasetid.tableid`
    ADD COLUMN ip_address CHAR;
    INSERT INTO `ref.datasetid.tableid` ip_address
    SELECT ip
    FROM `ref.datasetid.table2id`;
    """
    alter(client, sql_alter, job_config, table_id)

if __name__ == '__main__':
    main()
With this code, the current error is "400 Syntax error: Unexpected extra token INSERT at [4:9]". Also, do I have to keep referencing my table as ref.datasetid.tableid, or can I write just tableid? I've run into errors before it gets that far, so I'm still not sure. Still a beginner, so help is greatly appreciated!
BigQuery does not support ALTER TABLE or other DDL statements; take a look at Modifying table schemas, where you can find an example of how to add a new column when you append data to a table during a load job.
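For reference, here's a minimal sketch of the schema-update route that documentation page describes, using the Python client (the project/dataset/table IDs are the question's placeholders; populating the new column from the second table would still be a separate query or load job):

from google.cloud import bigquery

client = bigquery.Client.from_service_account_json('my_json')
table_id = 'ref.datasetid.tableid'  # placeholder from the question

table = client.get_table(table_id)

# append a new NULLABLE column to the existing schema (BigQuery has no CHAR type;
# STRING is the closest equivalent)
new_schema = list(table.schema)
new_schema.append(bigquery.SchemaField('ip_address', 'STRING'))
table.schema = new_schema

# push only the schema change back to BigQuery
client.update_table(table, ['schema'])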

Create multiple views in shell script or big query

When I am exporting data from MySQL to BigQuery, some data gets duplicated. As a way to fix this, I thought of creating views of these tables using ROW_NUMBER(), with the query shown below. The problem is that a lot of tables in my dataset have duplicates, and when I add new tables and export them to BigQuery they will likely have duplicated data too, so I don't want to write this kind of query every time I add a new table to my dataset (I want a view to be created the moment I export a new table). Is it possible to do this in a loop in the query (like 'for each table in my dataset, do this')? Is it possible in a shell script (when a table is exported to BigQuery, create a view for it)? Failing that, is it possible in Python?
SELECT
* EXCEPT (ROW_NUMBER)
FROM
(
SELECT
*, ROW_NUMBER() OVER (PARTITION BY id order by updated_at desc) ROW_NUMBER
FROM dataset1.table1
)
WHERE ROW_NUMBER = 1
It definitely can be done in Python.
I would recommend using the google-cloud-python library: https://github.com/GoogleCloudPlatform/google-cloud-python
So I think your script should be something like this:
from google.cloud import bigquery

client = bigquery.Client()
dataset_ref = client.dataset('dataset_name')
tables = list(client.list_tables(dataset_ref))

for tab in tables:
    view = bigquery.Table(dataset_ref.table("v_{}".format(tab.table_id)))
    view.view_query = "select * from `my-project.dataset_name.{}`".format(tab.table_id)
    # views use standard SQL by default; set view.view_use_legacy_sql = True for a legacy view
    client.create_table(view)
