Python and Pandas to query APIs and update a DB - python

I've been querying a few APIs with Python to individually create CSVs for a table.
Instead of recreating the table each time, I would like to update the existing table with any new API data.
At the moment, the way the query works, I have a table that looks like this.
From this I am taking the suburbs of each state and copying them into a CSV for each state.
Then, using this script, I clean them into a list (the API needs %20 for any spaces):
# example input: suburbs = ["want this", "want this (meh)", "this as well (nope)"]
dont_want = frozenset(["(meh)", "(nope)"])

suburb_cleaned = []
for urb in suburbs:
    cleaned_name = []
    name_parts = urb.split()
    for part in name_parts:
        if part in dont_want:
            continue
        cleaned_name.append(part)
    suburb_cleaned.append('%20'.join(cleaned_name))
Then I take the suburbs for each state and pass them to this API to return a CSV:
import time
import requests
import pandas as pd

timestr = time.strftime("%Y%m%d-%H%M%S")
Name = "price_data_NT" + timestr + ".csv"
url_price = "http://mwap.com/api"
string = 'gxg&state='

api_results = {}
n = 0
y = 2
for urbs in suburb_cleaned:
    url = url_price + urbs + string + "NT"
    print(url)
    print(urbs)
    request = requests.get(url)
    api_results[urbs] = pd.DataFrame(request.json())
    n = n + 1
    if n == y:
        # every second suburb, write out everything collected so far
        dfs = pd.concat(api_results).reset_index(level=1, drop=True).rename_axis(
            'key').reset_index().set_index(['key'])
        dfs.to_csv(Name, sep='\t', encoding='utf-8')
        y = y + 2
        continue
    print("made it through " + urbs)
    # print(request.json())
    # print(api_results)

dfs = pd.concat(api_results).reset_index(level=1, drop=True).rename_axis(
    'key').reset_index().set_index(['key'])
dfs.to_csv(Name, sep='\t', encoding='utf-8')
Then I add the states manually in Excel, and combine and clean the suburb names:
# use pd.concat
df = pd.concat([act, vic, nsw, SA, QLD, WA]).reset_index().set_index(['key']) \
       .rename_axis('suburb').reset_index().set_index(['state'])
# apply a lambda to clean the %20
f = lambda s: s.replace('%20', ' ')
df['suburb'] = df['suburb'].apply(f)
and then finally insert it into a DB:
from sqlalchemy import create_engine

engine = create_engine('mysql://username:password@localhost/dbname')
with engine.connect() as conn, conn.begin():
    df.to_sql('Price_historic', conn, if_exists='replace', index=False)
Leading to this sort of output.
Now, this is a heck of a process. I would love to simplify it and have the database update only the values that need to change from the API data, rather than having this much complexity in getting the data.
I would love some tips on achieving this goal. I'm thinking I could do an UPDATE on the MySQL database instead of an INSERT, or something along those lines? And with the querying of the API, I feel like I'm overcomplicating it.
Thanks!

I don't see any reason why you would be creating CSV files in this process. It sounds like you can just query the data and then load it into a MySQL table directly. You say that you are adding the states manually in Excel? Is that data not available through your prior API calls? If not, could you find that information and save it to a CSV, so you can automate that step by loading it into a table and having Python look up the values for you?
Generally, you wouldn't want to overwrite the MySQL table every time. When you have a table, you can identify the column or columns that uniquely identify a specific record, then create a UNIQUE INDEX for them. For example, if your street and price values designate a unique entry, then in MySQL you could run:
ALTER TABLE `Price_historic` ADD UNIQUE INDEX(street, price);
After this, your table will not allow duplicate records based on those values. Then, instead of creating a new table every time, you can insert your data into the existing table, with instructions to either update or ignore when you encounter a duplicate. For example:
# assumes something like `import MySQLdb as pdb` (or another DB-API driver) above
final_str = "INSERT INTO Price_historic (state, suburb, property_price_id, type, street, price, date) " \
            "VALUES (%s, %s, %s, %s, %s, %s, %s) " \
            "ON DUPLICATE KEY UPDATE " \
            "state = VALUES(state), date = VALUES(date)"

con = pdb.connect(db_host, db_user, db_pass, db_name)
with con:
    cur = con.cursor()
    cur.executemany(final_str, insert_list)
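Putting those pieces together without the CSV step, a rough, untested sketch of how the API loop from the question could feed the upsert directly might look like the following. The pymysql driver, the record keys (property_price_id, type, street, price, date) and the NT-only loop are assumptions for illustration; suburb_cleaned, url_price, string and the db_* variables refer to the earlier snippets.

# Sketch only: assumes the API returns a list of JSON records per suburb
# and that the UNIQUE INDEX from the ALTER TABLE above already exists.
import requests
import pymysql  # any DB-API MySQL driver with %s placeholders works similarly

upsert_sql = (
    "INSERT INTO Price_historic (state, suburb, property_price_id, type, street, price, date) "
    "VALUES (%s, %s, %s, %s, %s, %s, %s) "
    "ON DUPLICATE KEY UPDATE state = VALUES(state), date = VALUES(date)"
)

rows = []
for urb in suburb_cleaned:                      # list built earlier in the question
    resp = requests.get(url_price + urb + string + "NT")
    for rec in resp.json():                     # assumed record layout from the API
        rows.append(("NT", urb.replace('%20', ' '), rec["property_price_id"],
                     rec["type"], rec["street"], rec["price"], rec["date"]))

con = pymysql.connect(host=db_host, user=db_user, password=db_pass, database=db_name)
with con.cursor() as cur:
    cur.executemany(upsert_sql, rows)
con.commit()
con.close()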

If the setup you are building is something for the longer term, I would suggest running two different processes in parallel:
Process 1:
Query API 1, obtain the required data, and insert it into the DB table with a binary/bit flag recording that only API 1 has been called.
Process 2:
Run a query on the DB to obtain all records that still need API call 2, based on the binary/bit flag set in Process 1. For the corresponding data, run call 2 and update the data back into the DB table based on the primary key.
Database: I would suggest adding a primary key as well as a [bit flag][1] that gives the status of the different API calls. A bit flag also helps you
- double-check whether a specific API call has been made for a specific record or not;
- expand your project to additional API calls while still tracking the status of each call at the record level.
A rough sketch of this pattern is shown below the reference.
[1]: Bit flags: https://docs.oracle.com/cd/B28359_01/server.111/b28286/functions014.htm#SQLRF00612
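As a minimal sketch rather than a drop-in solution: the table name Price_historic, the id primary key, the api1_done/api2_done flag columns, extra_data and the call_api_2 callable are all assumed for illustration and are not from the question.

# Sketch of the two-process / bit-flag pattern described above, using pymysql.
import pymysql

def process_1(con, records):
    """Insert API 1 results with flags showing only API 1 has run.
    `records` is an iterable of (suburb, price) tuples (assumed shape)."""
    sql = ("INSERT INTO Price_historic (suburb, price, api1_done, api2_done) "
           "VALUES (%s, %s, 1, 0) "
           "ON DUPLICATE KEY UPDATE price = VALUES(price), api1_done = 1")
    with con.cursor() as cur:
        cur.executemany(sql, records)
    con.commit()

def process_2(con, call_api_2):
    """Pick up rows that still need API 2, call it, and update by primary key."""
    with con.cursor() as cur:
        cur.execute("SELECT id, suburb FROM Price_historic "
                    "WHERE api1_done = 1 AND api2_done = 0")
        for row_id, suburb in cur.fetchall():
            extra = call_api_2(suburb)          # hypothetical second API call
            cur.execute("UPDATE Price_historic "
                        "SET extra_data = %s, api2_done = 1 WHERE id = %s",
                        (extra, row_id))
    con.commit()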

Related

Python MySQL search entire database for value

I have a GUI interacting with my database, and the MySQL database has around 50 tables. I need to search each table for a value and return the field and key of the item in each table if it is found. I would like to search for partial matches, e.g. for a search value of "test", both "Protest" and "Test123" would be matches. Here is my attempt:
def searchdatabase(self, event):
    print('Searching...')
    self.connect_mysql()  # function to connect to the database
    d_tables = []
    results_list = []  # I will store results here
    s_string = "test"  # value I am searching for
    self.cursor.execute("USE db")  # select the database
    self.cursor.execute("SHOW TABLES")
    for (table_name,) in self.cursor:
        d_tables.append(table_name)
    # Loop through the tables, get the column names, and check if the value is in each column
    for table in d_tables:
        # Get the columns
        self.cursor.execute(f"SELECT * FROM `{table}` WHERE 1=0")
        field_names = [i[0] for i in self.cursor.description]
        # Find the value
        for f_name in field_names:
            print("RESULTS:", self.cursor.execute(f"SELECT * FROM `{table}` WHERE {f_name} LIKE {s_string}"))
        print(table)
I get an error on print("RESULTS:", self.cursor.execute(f"SELECT * FROM `{table}` WHERE {f_name} LIKE {s_string}"))
Exception: (1054, "Unknown column 'test' in 'where clause'")
I use a similar insert query that works fine, so I don't understand what the issue is.
ex. insert_query = (f"INSERT INTO `{source_tbl}` ({query_columns}) VALUES ({query_placeholders})")
It may be because of the single quotes you have missed around the value you are checking the columns for.
TRY:
print("RESULTS:", self.cursor.execute(f"SELECT * FROM `{table}` WHERE `{f_name}` LIKE '{s_string}'"))
Don’t insert user-provided data into SQL queries like this. It is begging for SQL injection attacks. Your database library will have a way of sending parameters to queries. Use that.
The whole design is fishy. Normally, there should be no need to look for a string across several columns of 50 different tables. Admittedly, sometimes you end up in these situations because of reasons outside your control.
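As a rough illustration of that advice (assuming a MySQL driver that uses %s placeholders, such as mysql-connector-python or PyMySQL), the value can be passed as a parameter, while the identifiers, which cannot be parameterized, are backtick-quoted:

# Sketch: bind the search value as a parameter; table/column names cannot be
# bound, so they are backtick-quoted instead. '%' wildcards give partial matches.
s_string = "test"
pattern = f"%{s_string}%"
for f_name in field_names:
    query = f"SELECT * FROM `{table}` WHERE `{f_name}` LIKE %s"
    self.cursor.execute(query, (pattern,))
    for row in self.cursor.fetchall():
        results_list.append((table, f_name, row))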

Python cx_Oracle Insert Into table with multiple columns automating the values (1:,2: ... 100:)

I am working on a script to read from an Oracle table with about 75 columns in one environment and load it into the same table definition in a different environment. Until now I have been using the cx_Oracle cur.execute() method with 'INSERT INTO TABLENAME VALUES (:1, :2, :3 .. :8)' and then loading the data with the cur.execute(sql, conn) method.
However, this table that I'm trying to load has 75+ columns, and writing (:1, :2 ... :75) would be tedious and, I'm guessing, not best practice.
Is there an automated way to loop over the number of columns and automatically fill in the VALUES() portion of the SQL query?
import getpass
import cx_Oracle

user = 'username'
password = getpass.getpass()   # `pass` is a reserved word in Python, so use another name

dsn_prod = cx_Oracle.makedsn(host, port, service_name='')
connection_prod = cx_Oracle.connect(user, password, dsn_prod)
cursor_prod = connection_prod.cursor()

dsn_dev = cx_Oracle.makedsn(host, port, service_name='')
connection_dev = cx_Oracle.connect(user, password, dsn_dev)
cursor_dev = connection_dev.cursor()

SQL_Read = """SELECT * FROM Table_name_Prod"""
cursor_prod.execute(SQL_Read)
for row in cursor_prod:
    # This part is ugly and tedious -- this is where I need help
    SQL_Load = "INSERT INTO TABLE_NAME_DEV VALUES (:1, :2, :3, :4 ... :75)"
    cursor_dev.execute(SQL_Load, row)

connection_dev.commit()
cursor_prod.close()
connection_prod.close()
You can do the following which should help not only in reducing code but also in improving performance:
connection_prod = cx_Oracle.connect(...)
cursor_prod = connection_prod.cursor()

# set the array size for the source cursor to some reasonable value;
# increasing this value reduces round-trips but increases memory usage
cursor_prod.arraysize = 500

connection_dev = cx_Oracle.connect(...)
cursor_dev = connection_dev.cursor()

cursor_prod.execute("select * from table_name_prod")
bind_names = ",".join(":" + str(i + 1)
                      for i in range(len(cursor_prod.description)))
sql_load = "insert into table_name_dev values (" + bind_names + ")"

while True:
    rows = cursor_prod.fetchmany()
    if not rows:
        break
    cursor_dev.executemany(sql_load, rows)
    # can call connection_dev.commit() here if you want to commit each batch
The use of cursor.executemany() will significantly help in terms of performance. Hope this helps you out!

Python Script for SQL Server - Update values with MERGE

I have this Python function which inserts into a SQL Server database. The script is such that every time it is rerun, it has to insert the same rows over again in addition to new rows. Eventually I will change this so that it only inserts new rows, but for now I have to work with some sort of update statement.
I'm aware that I can use MERGE in SQL Server to achieve something similar to MySQL's ON DUPLICATE KEY UPDATE, but I'm not exactly sure how it should be used. Any advice is welcome. Thanks!
def sqlInsrt(headers, values):
    # create a comma-separated string of the column names
    strheaders = ','.join(str(i) for i in headers)
    # create the ? placeholders for the INSERT clause
    placestr = ','.join('?' for i in headers)
    # create the col=? pairs for an UPDATE clause
    replacestr = ', '.join('{}=?'.format(h) for h in headers)
    # set up and execute the SQL query
    insert = "INSERT INTO " + part + " (" + strheaders + ") VALUES (" + placestr + ")"
    cursor.execute(insert, values)
    cnx.commit()
You should read the docs for MERGE.
Basically:
MERGE INTO TargetTable
USING SourceTable
    ON TargetTable.id = SourceTable.id
....
Then you can read the docs about using WHEN NOT MATCHED BY TARGET, etc.
So in your Python you would maybe swap out the table names and joins using parameters.
I wrote a script that solves the simplest case of merging two identically structured tables, where one contains new/updated data. This is useful for incremental data imports. You can expand it depending on your needs (e.g. if you need a type 2 SCD):
import pyodbc

def create_merge_query(
    stg_schema: str,
    stg_table: str,
    schema: str,
    table: str,
    primary_key: str,
    con: pyodbc.Connection,
) -> str:
    """
    Create a merge query for the simplest possible upsert scenario:
    - updating and inserting all fields
    - merging on a single column, which has the same name in both tables

    Args:
        stg_schema (str): The schema where the staging table is located.
        stg_table (str): The table with new/updated data.
        schema (str): The schema where the target table is located.
        table (str): The table to merge into.
        primary_key (str): The column on which to merge.
    """
    columns_query = f"""
    SELECT
        col.name
    FROM sys.tables AS tab
    INNER JOIN sys.columns AS col
        ON tab.object_id = col.object_id
    WHERE tab.name = '{table}'
        AND schema_name(tab.schema_id) = '{schema}'
    ORDER BY column_id;
    """
    columns_query_result = con.execute(columns_query)
    columns = [tup[0] for tup in columns_query_result]
    columns_stg_fqn = [f"stg.{col}" for col in columns]
    update_pairs = [f"existing.{col} = stg.{col}" for col in columns]
    merge_query = f"""
    MERGE INTO {schema}.{table} existing
    USING {stg_schema}.{stg_table} stg
    ON stg.{primary_key} = existing.{primary_key}
    WHEN MATCHED
        THEN UPDATE SET {", ".join(update_pairs)}
    WHEN NOT MATCHED
        THEN INSERT({", ".join(columns)})
        VALUES({", ".join(columns_stg_fqn)});
    """
    return merge_query
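For context, a call might look roughly like the following; the connection string, schema, table and key names are illustrative placeholders rather than values from the answer.

# Hypothetical usage of create_merge_query(); adjust names to your own environment.
import pyodbc

con = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;DATABASE=mydb;Trusted_Connection=yes"
)
query = create_merge_query(
    stg_schema="staging",
    stg_table="Price_historic",
    schema="dbo",
    table="Price_historic",
    primary_key="property_price_id",
    con=con,
)
con.execute(query)  # pyodbc convenience: creates a cursor and executes the MERGE
con.commit()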

Select single item in MYSQLdb - Python

I've been learning Python recently and have learned how to connect to a database and retrieve data from it using MySQLdb. However, all the examples show how to get multiple rows of data. I want to know how to retrieve only one row of data.
This is my current method.
cur.execute("SELECT number, name FROM myTable WHERE id='" + id + "'")
results = cur.fetchall()
number = 0
name = ""
for result in results:
    number = result['number']
    name = result['name']
It seems redundant to do for result in results: since I know there is only going to be one result.
How can I just get one row of data without using the for loop?
.fetchone() to the rescue:
result = cur.fetchone()
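As a small sketch of how that fits together (assuming a dictionary-style cursor like the one implied by the question's result['number'] access, and using a parameterized query rather than string concatenation):

# fetchone() returns a single row, or None if nothing matched
cur.execute("SELECT number, name FROM myTable WHERE id = %s", (id,))
row = cur.fetchone()
if row is not None:
    number = row['number']
    name = row['name']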
Use .pop():
if results:
    result = results.pop()
    number = result['number']
    name = result['name']

Update PostgreSQL database with daily stock prices in Python

So I found a great script over at QuantState that had a great walk-through on setting up my own securities database and loading in historical pricing information. However, I'm now trying to modify the script so that I can run it daily and have the latest stock quotes added.
I adjusted the initial data load to download just one week of historical data, but I've been having issues writing the SQL statement to check whether the row already exists before adding it. Can anyone help me out with this? Here's what I have so far:
def insert_daily_data_into_db(data_vendor_id, symbol_id, daily_data):
    """Takes a list of tuples of daily data and adds it to the
    database. Appends the vendor ID and symbol ID to the data.

    daily_data: List of tuples of the OHLC data (with
    adj_close and volume)"""
    # Create the time now
    now = datetime.datetime.utcnow()
    # Amend the data to include the vendor ID and symbol ID
    daily_data = [(data_vendor_id, symbol_id, d[0], now, now,
                   d[1], d[2], d[3], d[4], d[5], d[6]) for d in daily_data]
    # Create the insert strings
    column_str = """data_vendor_id, symbol_id, price_date, created_date,
                    last_updated_date, open_price, high_price, low_price,
                    close_price, volume, adj_close_price"""
    insert_str = ("%s, " * 11)[:-2]
    final_str = "INSERT INTO daily_price (%s) VALUES (%s) WHERE NOT EXISTS (SELECT 1 FROM daily_price WHERE symbol_id = symbol_id AND price_date = insert_str[2])" % (column_str, insert_str)
    # Using the postgres connection, carry out an INSERT INTO for every symbol
    with con:
        cur = con.cursor()
        cur.executemany(final_str, daily_data)
Some notes regarding your code above:
It's generally easier to defer to now() in pure SQL than to try in Python whenever possible. It avoids lots of potential pitfalls with timezones, library differences, etc.
If you construct a list of columns, you can dynamically generate a string of %s's based on its size, and don't need to hardcode the length into a repeated string which is then sliced.
Since it appears that insert_daily_data_into_db is meant to be called from within a loop on a per-stock basis, I don't believe you want to use executemany here, which would require a list of tuples and is very different semantically.
You were comparing symbol_id to itself in the subselect, instead of to a particular value (which means the condition is always true).
To prevent possible SQL injection, you should always pass the values used in the WHERE clause, including subselects, as query parameters.
Note: I'm assuming that you're using psycopg2 to access Postgres, and that the primary key for the table is a tuple of (symbol_id, price_date). If not, the code below would need to be tweaked at least a bit.
With those points in mind, try something like this (untested, since I don't have your data, db, etc. but it is syntactically valid Python):
def insert_daily_data_into_db(data_vendor_id, symbol_id, daily_data):
    """Takes a list of tuples of daily data and adds it to the
    database. Appends the vendor ID and symbol ID to the data.

    daily_data: List of tuples of the OHLC data (with
    adj_close and volume)"""
    column_list = ["data_vendor_id", "symbol_id", "price_date", "created_date",
                   "last_updated_date", "open_price", "high_price", "low_price",
                   "close_price", "volume", "adj_close_price"]
    insert_list = ['%s'] * len(column_list)
    values_tuple = (data_vendor_id, symbol_id, daily_data[0], 'now()', 'now()', daily_data[1],
                    daily_data[2], daily_data[3], daily_data[4], daily_data[5], daily_data[6])

    # INSERT ... SELECT ... WHERE NOT EXISTS: a plain INSERT ... VALUES cannot take a WHERE clause
    final_str = """INSERT INTO daily_price ({0})
                   SELECT {1}
                   WHERE NOT EXISTS (SELECT 1
                                     FROM daily_price
                                     WHERE symbol_id = %s
                                       AND price_date = %s)""".format(', '.join(column_list), ', '.join(insert_list))

    # Using the postgres connection, carry out an INSERT INTO for every symbol;
    # the subselect needs symbol_id and price_date appended to the parameter tuple
    with con:
        cur = con.cursor()
        cur.execute(final_str, values_tuple + (values_tuple[1], values_tuple[2]))
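Since the function is meant to be called per symbol inside a loop, usage might look roughly like this sketch; the connection details, the symbols list and the download_latest_week helper are assumptions for illustration, not part of the answer.

# Hypothetical driver loop around insert_daily_data_into_db().
import psycopg2

con = psycopg2.connect(host="localhost", dbname="securities_master",
                       user="user", password="password")

for symbol_id, ticker in symbols:              # e.g. [(1, "AAPL"), (2, "MSFT"), ...]
    daily_data = download_latest_week(ticker)  # hypothetical: yields (date, o, h, l, c, vol, adj) rows
    for row in daily_data:                     # one insert per day, matching the function's row layout
        insert_daily_data_into_db(data_vendor_id=1, symbol_id=symbol_id, daily_data=row)

con.close()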
