SQLite Find Name where datetime is between start and end - python

I have the following table in my database, which represents the shifts of a working day.
When a new product is added to the 'Products' table, I want to assign a shift to it based on its start_timestamp.
So when I insert into Products, the insert should take the start_timestamp, look in the ProductionPlan table, and find the row (ProductionPlan.name) where that timestamp falls between the shift's start and end timestamps.
That way I can assign a shift to the product.
I hope somebody can help me out with this!
Table ProductionPlan:

name     start_timestamp      end_timestamp
shift 1  2021-05-10T07:00:00  2021-05-10T11:00:00
shift 2  2021-05-10T11:00:00  2021-05-10T15:00:00
shift 3  2021-05-10T15:00:00  2021-05-10T19:00:00
shift 1  2021-05-11T07:00:00  2021-05-11T11:00:00
shift 2  2021-05-11T11:00:00  2021-05-11T15:00:00
shift 3  2021-05-11T15:00:00  2021-05-11T19:00:00
Table Products:

id  name     start_timestamp      end_timestamp        shift
1   Schroef  2021-05-10T08:09:05  2021-05-10T08:19:05
2   Bout     2021-05-10T08:20:08  2021-04-28T08:30:11
3   Schroef  2021-05-10T12:09:12  2021-04-28T12:30:15
I have the following code to insert into Products:
def insertNewProduct(self, log):
    """
    This function is used to insert a new product into the database.
    @param log : an object to log
    @return None.
    """
    debug("Class: SQLite, function: insertNewProduct")
    self.__openDB()
    timestampStart = datetime.fromtimestamp(int(log.startTime)).isoformat()
    queryToExecute = "INSERT INTO Products (name, start_timestamp) VALUES('{0}','{1}')".format(
        log.summary, timestampStart)
    self.cur.execute(queryToExecute)
    self.__closeDB()
    return self.cur.lastrowid
It's just a simple INSERT INTO, but I want to extend this query (or add another one) so that it also fills in the shift column.

You can use a SELECT inside an INSERT.
queryToExecute = """INSERT INTO Products (name, start_timestamp, shift)
                    SELECT :1, :2, name FROM ProductionPlan pp
                    WHERE :2 BETWEEN pp.start_timestamp AND pp.end_timestamp"""
self.cur.execute(queryToExecute, (log.summary, timestampStart))
In the above code I have used a parameterized query, because I hate inserting parameters as strings inside a query; that practice has been the cause of too many SQL injection attacks...
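Folded back into the method from the question, the whole thing could look like this (a sketch only; it assumes __openDB/__closeDB and debug behave as in the original, and that you still want lastrowid as the return value):

def insertNewProduct(self, log):
    """
    Insert a new product and assign its shift in one statement.
    @param log : an object to log
    @return the rowid of the inserted product.
    """
    debug("Class: SQLite, function: insertNewProduct")
    self.__openDB()
    timestampStart = datetime.fromtimestamp(int(log.startTime)).isoformat()
    # INSERT ... SELECT: the shift name is looked up from ProductionPlan
    # in the same statement that inserts the product.
    queryToExecute = """INSERT INTO Products (name, start_timestamp, shift)
                        SELECT :1, :2, name FROM ProductionPlan pp
                        WHERE :2 BETWEEN pp.start_timestamp AND pp.end_timestamp"""
    self.cur.execute(queryToExecute, (log.summary, timestampStart))
    self.__closeDB()
    return self.cur.lastrowid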

Related

Subtracting from two tables in MySQL

I have two tables in MySQL that store amounts, say amounts0 and amounts1. Each table stores a name and an amount. Let's say that in amounts0 we have 'James', 50 and 'Paul', 75. In the other table we have 'James', 25, 'Paul', 50 and 'James', 10. I want to write a result like the following to a text file:
What this should do: take the amount from the first table and subtract from it the sum of the amounts in the second table, grouped by name. For the data above: James: 50 - sum(25, 10) = 50 - 35, so 15.
Can someone help me create a query that does that with the 2 tables and then writes the result to the text file?
I think you can use this:
SELECT name,
       Abs(Sum(t1.amount) - t2.amount) AS amount
FROM table1 AS t1
JOIN (SELECT Sum(`amount`) AS amount,
             `name` AS t2name
      FROM table2
      GROUP BY `name`) AS t2
  ON t2.t2name = t1.name
GROUP BY name
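To then get the rows into a text file, you can run the query from Python and write each row out. A rough sketch, assuming the mysql-connector-python driver, the amounts0/amounts1 table names from the question, and placeholder connection details (t2.amount is also added to the GROUP BY so the query runs with ONLY_FULL_GROUP_BY enabled):

import mysql.connector  # assumed driver; any DB-API 2.0 connector works the same way

conn = mysql.connector.connect(host="localhost", user="user",
                               password="secret", database="mydb")
cur = conn.cursor()
cur.execute("""
    SELECT t1.name, Abs(Sum(t1.amount) - t2.amount) AS amount
    FROM amounts0 AS t1
    JOIN (SELECT Sum(amount) AS amount, name AS t2name
          FROM amounts1
          GROUP BY name) AS t2
      ON t2.t2name = t1.name
    GROUP BY t1.name, t2.amount
""")

# Write one "name: amount" line per row.
with open("result.txt", "w") as f:
    for name, amount in cur:
        f.write(f"{name}: {amount}\n")

cur.close()
conn.close()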

Python: cx_Oracle does not like how I am entering date

I am trying to do a simple select-all query in Python using the cx_Oracle module. When I do a select all for the first ten rows in a table, I am able to print out the output. However, when I do a select all for the first ten rows for a specific date in the table, all that gets printed out is an empty list: [].
Here is the select-all query that prints out all the results:
sql_query = "select * from table_name fetch first 10 rows only"
cur = db_eng.OpenCursor()
db_eng.ExecuteQuery(cur, sql_query)
result = db_eng.FetchResults(cur)
print(result)
The above query works and is able to print out the results.
Here is the query that I am having trouble with; this query works in SQL Developer:
sql_query = "select * from table_name where requested_time = '01-jul-2021' fetch first 10 rows only"
cur = db_eng.OpenCursor()
db_eng.ExecuteQuery(cur, sql_query)
result = db_eng.FetchResults(cur)
print(result)
I also tried this way where I define the date outside of the query.
specific_date = '01-jul-2021'
sql_query = "select * from table_name where requested_time = '{0}' fetch first 10 rows only".format(specific_date)
cur = db_eng.OpenCursor()
db_eng.ExecuteQuery(cur, sql_query)
result = db_eng.FetchResults(cur)
print(result)
Oracle dates have a time portion. The query
select * from table_name where requested_time = '01-jul-2021' fetch first 10 rows only
will only give you the rows for which the value of the column requested_time is exactly 01-jul-2021 00:00. Chances are that you have other rows for which there is a time portion as well.
To cut off the time portion there are several options. Note that I explicitly added a TO_DATE function to the date - you're assuming that the database expects a dd-mon-yyyy format and will successfully do the implicit conversion, but it's safer to let the database know.
TRUNC the column - this will remove the time portion:
SELECT *
FROM table_name
WHERE TRUNC(requested_time) = TO_DATE('01-jul-2021','DD-mon-YYYY')
FETCH FIRST 10 ROWS ONLY
Format the date column to the same format as the date you supplied and compare the resulting strings:
SELECT *
FROM table_name
WHERE TO_CHAR(requested_time,'DD-mon-YYYY') = '01-jul-2021'
FETCH FIRST 10 ROWS ONLY
Example:
pdb1--KOEN>create table test_tab(requested_time DATE);
Table TEST_TAB created.
pdb1--KOEN>BEGIN
2 INSERT INTO test_tab(requested_time) VALUES (TO_DATE('08-AUG-2021 00:00','DD-MON-YYYY HH24:MI'));
3 INSERT INTO test_tab(requested_time) VALUES (TO_DATE('08-AUG-2021 01:00','DD-MON-YYYY HH24:MI'));
4 INSERT INTO test_tab(requested_time) VALUES (TO_DATE('08-AUG-2021 02:10','DD-MON-YYYY HH24:MI'));
5 END;
6 /
PL/SQL procedure successfully completed.
pdb1--KOEN>SELECT COUNT(*) FROM test_tab WHERE requested_time = TO_DATE('08-AUG-2021','DD-MON-YYYY');
COUNT(*)
----------
1
-- only 1 row: the one with time 00:00. The other rows are ignored
pdb1--KOEN>SELECT COUNT(*) FROM test_tab WHERE TRUNC(requested_time) = TO_DATE('08-AUG-2021','DD-MON-YYYY');
-- all rows
COUNT(*)
----------
3
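From the Python side, you can sidestep the NLS format question entirely by binding a real datetime.date object instead of formatting a string into the query. A minimal cx_Oracle sketch; the credentials are placeholders, and I'm bypassing the db_eng wrapper from the question since I don't know what it does internally:

import datetime
import cx_Oracle

conn = cx_Oracle.connect("user", "password", "host/service")  # hypothetical credentials
cur = conn.cursor()

# Bind a date object: no string formats, no implicit conversion.
specific_date = datetime.date(2021, 7, 1)
cur.execute("""
    SELECT *
    FROM table_name
    WHERE TRUNC(requested_time) = :req_date
    FETCH FIRST 10 ROWS ONLY
""", req_date=specific_date)

print(cur.fetchall())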

Need help to merge two SQL tables when one contains IDs and the second contains names associated with those IDs

I have 2 tables in MySQL.
One has transactions; the important columns are the debit account ID and the credit account ID in each row. The second table contains the account name and a special number associated with each account ID. I want a SQL query that takes the data from the transactions table and attaches the matching account name and account number from the second table.
I tried doing it with two queries - one gets the transactions, the other gets the account details - and then iterating over the dataframe and assigning everything one by one, which doesn't seem like a good idea:
query = "SELECT tr_id, tr_date, description, dr_acc, cr_acc, amount, currency, currency_rate, document, comment FROM transactions WHERE " \
        "company_id = {} {} and deleted = 0 {} LIMIT {}, {}".format(
            company_id, filter, sort, sn, en)
df = ncon.getDF(query)
df.insert(4, 'dr_name', '')
df.insert(6, 'cr_name', '')
data = tuple(list(set(df['dr_acc'].values.tolist() + df['cr_acc'].values.tolist())))
query = "SELECT account_number, acc_id, account_name FROM tb_accounts WHERE company_id = {} and deleted = 0 and acc_id in {}".format(
    company_id, data)
df_accs = ncon.getDF(query)
for index, row in df_accs.iterrows():
    acc = str(row['acc_id'])
    ac = row['account_number']
    nm = row['account_name']
    indx = df.index[df['dr_acc'] == acc].tolist()
    df.at[indx, 'dr_acc'] = ac
    df.at[indx, 'dr_name'] = nm
    indx = df.index[df['cr_acc'] == acc].tolist()
    df.at[indx, 'cr_acc'] = ac
    df.at[indx, 'cr_name'] = nm
What you're looking for, I think, is a SQL JOIN statement.
Taking a crack at writing a query that might work based on your code:
query = '''
SELECT transactions.tr_id,
       transactions.tr_date,
       transactions.description,
       transactions.dr_acc,
       transactions.cr_acc,
       transactions.amount,
       transactions.currency,
       transactions.currency_rate,
       transactions.document,
       transactions.comment
FROM transactions INNER JOIN tb_accounts ON tb_accounts.acc_id = transactions.acc_id
WHERE
    transactions.company_id = {} AND
    tb_accounts.company_id = {} AND
    transactions.deleted = 0 AND
    tb_accounts.deleted = 0
ORDER BY transactions.tr_id
LIMIT 10;'''
The above query will, roughly, present query results with all the fields listed from the two tables for each pair of rows where the acc_id is the same.
NOTE, the query above will probably not have very good performance. SQL JOIN statements must be written with care, but I wrote it above in a way that's easy to understand, so as to illustrate the power of the JOIN.
You should, as a matter of habit, NEVER try to program something yourself when you could use a join instead. As long as you take care to write the join properly so that it can be efficient, the MySQL engine will beat your Python code for performance almost every time.
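One caveat on the sketch above: the {} slots are best turned into driver placeholders rather than filled in with str.format, for the injection reasons discussed elsewhere on this page. Hypothetical usage with a DB-API connection conn (and joining on dr_acc, since transactions has dr_acc/cr_acc rather than acc_id):

import pandas as pd

query = """
SELECT transactions.tr_id, transactions.tr_date, transactions.description,
       transactions.dr_acc, transactions.cr_acc, transactions.amount,
       transactions.currency, transactions.currency_rate,
       transactions.document, transactions.comment,
       tb_accounts.account_number, tb_accounts.account_name
FROM transactions
INNER JOIN tb_accounts ON tb_accounts.acc_id = transactions.dr_acc
WHERE transactions.company_id = %s
  AND tb_accounts.company_id = %s
  AND transactions.deleted = 0
  AND tb_accounts.deleted = 0
ORDER BY transactions.tr_id
LIMIT 10;
"""
# The driver escapes company_id; no string formatting inside the SQL.
df = pd.read_sql(query, conn, params=(company_id, company_id))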
Sort the two dataframes, then use merge to combine them:
df1 = df1.sort_values(['dr_acc'], ascending=True)
df2 = df2.sort_values(['acc_id'], ascending=True)
merge2df = pd.merge(df1, df2, how='outer',
                    left_on=['dr_acc'], right_on=['acc_id'])
I assumed df1 is the first query's data set and df2 is the second's.
The SQL query:
'''SELECT tr_id, tr_date,
       description,
       dr_acc, cr_acc,
       amount, currency,
       currency_rate,
       document,
       account_number, acc_id, account_name,
       comment
FROM transactions
LEFT JOIN tb_accounts ON transactions.dr_acc = tb_accounts.account_number'''
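Note that the transactions table carries both a debit and a credit account, so the lookup has to be merged twice, once per column. A rough sketch along the same lines:

import pandas as pd

# df2 is the account lookup from the second query: acc_id, account_number, account_name.
acc = df2[['acc_id', 'account_number', 'account_name']]

# Attach debit-account details, then credit-account details.
merged = (df1
          .merge(acc.add_prefix('dr_'), how='left',
                 left_on='dr_acc', right_on='dr_acc_id')
          .merge(acc.add_prefix('cr_'), how='left',
                 left_on='cr_acc', right_on='cr_acc_id'))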

Fast complex SQL queries on PostgreSQL database with Python

I have a large dataset with 50M+ records in a PostgreSQL database that requires massive calculations and inner joins.
Python is the tool of choice, with psycopg2.
Running the process with fetchmany of 20,000 records takes a couple of hours to finish.
The execution needs to take place sequentially: each record of the 50M needs to be fetched separately, then another query (in the example below) needs to run before a result is returned and saved in a separate table.
Indexes are properly configured on each table (5 tables in total) and the complex query (which returns a calculated value - example below) takes around 240 ms to return results (when the database is not under load).
Celery is used to take care of the database inserts of the calculated values into a separate table.
My question is about common strategies to reduce the overall running time and produce results/calculations faster.
In other words, what is an effective way to go through all the records, one by one, calculate the value of a field via a second query, then save the result?
UPDATE:
There is an important piece of information that I unintentionally missed mentioning while trying to obfuscate sensitive details. Sorry for that.
The original SELECT query calculates a value aggregated from different tables as follows:
SELECT CR.gg, (AX.b + BF.f)/CR.d AS calculated_field
FROM table_one CR
LEFT JOIN table_two AX ON AX.x = CR.x
LEFT JOIN table_three BF ON BF.x = CR.x
WHERE CR.gg = '123'
GROUP BY CR.gg;
PS: the SQL query was written by our experienced DBA, so I trust that it is optimised.
Don't loop over records, calling the DBMS repeatedly for every record.
Instead, let the DBMS process large chunks (preferably: all) of the data,
and let it spit out all the results.
Below is a snippet of my twitter-sucker (with a rather complex, ugly query):
def fetch_referred_tweets(self):
    self.curs = self.conn.cursor()
    tups = ()
    selrefd = """SELECT twx.id, twx.in_reply_to_id, twx.seq, twx.created_at
        FROM (
            SELECT tw1.id, tw1.in_reply_to_id, tw1.seq, tw1.created_at
            FROM tt_tweets tw1
            WHERE 1=1
            AND tw1.in_reply_to_id > 0
            AND tw1.is_retweet = False
            AND tw1.did_resolve = False
            AND NOT EXISTS ( SELECT * FROM tweets nx
                WHERE nx.id = tw1.in_reply_to_id)
            AND NOT EXISTS ( SELECT * FROM tt_tweets nx
                WHERE nx.id = tw1.in_reply_to_id)
            UNION ALL
            SELECT tw2.id, tw2.in_reply_to_id, tw2.seq, tw2.created_at
            FROM tweets tw2
            WHERE 1=1
            AND tw2.in_reply_to_id > 0
            AND tw2.is_retweet = False
            AND tw2.did_resolve = False
            AND NOT EXISTS ( SELECT * FROM tweets nx
                WHERE nx.id = tw2.in_reply_to_id)
            AND NOT EXISTS ( SELECT * FROM tt_tweets nx
                WHERE nx.id = tw2.in_reply_to_id)
            -- ORDER BY tw2.created_at DESC
        ) twx
        LIMIT %s;"""
    # -- AND tw.created_at < now() - '15 min':: interval
    # -- AND tw.created_at >= now() - '72 hour':: interval
    count = 0
    uniqs = 0
    self.curs.execute(selrefd, (quotum_referred_tweets, ))
    tups = self.curs.fetchmany(quotum_referred_tweets)
    for tup in tups:
        if tup is None:
            break
        print('%d -->> %d [seq=%d] datum=%s' % tup)
        self.resolve_list.append(tup[0])            # this tweet
        if tup[1] not in self.refetch_tweets:
            self.refetch_tweets[tup[1]] = [tup[0]]  # referred tweet
            uniqs += 1
        count += 1
    self.curs.close()
Note: your original query (the one with the er, ex and ef aliases) makes no sense:
you only select fields from the er table,
so the two LEFT JOINed tables could be omitted;
if ex and ef do contain multiple matching rows, the result set could be larger than just all the rows selected from er, resulting in duplicated er records;
there is a GROUP BY present, but no aggregates in the select list.
select er.gg, er.z, er.y
from table_one er
where er.gg = '123'
-- or:
-- where er.gg >= '123'
--   and er.gg <= '456'
ORDER BY er.gg, er.z, er.y -- or: some other ordering
;
Since you are doing a join in your query, the logical thing to do is to work around it: create what's known as a summary table. This summary table - residing in the database - will hold the final joined dataset, so in your Python code you just fetch/select data from it.
Another way is to use a materialized view.
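A minimal sketch of that idea with psycopg2, using the table names from the UPDATE above (the view name and the exact SELECT are assumptions; the point is that the join is computed once, inside the database):

import psycopg2

conn = psycopg2.connect("dbname=mydb")  # hypothetical DSN
with conn, conn.cursor() as cur:
    # Precompute the joined/calculated dataset once, inside the database.
    cur.execute("""
        CREATE MATERIALIZED VIEW IF NOT EXISTS calc_summary AS
        SELECT CR.gg, (AX.b + BF.f) / CR.d AS calculated_field
        FROM table_one CR
        LEFT JOIN table_two AX ON AX.x = CR.x
        LEFT JOIN table_three BF ON BF.x = CR.x
    """)

# After the base tables change, refresh instead of recomputing per record.
with conn, conn.cursor() as cur:
    cur.execute("REFRESH MATERIALIZED VIEW calc_summary")
    cur.execute("SELECT gg, calculated_field FROM calc_summary WHERE gg = %s", ('123',))
    print(cur.fetchall())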
I took @wildplasser's advice and moved the calculation operation inside the database as a function.
The result has been impressively efficient, to say the least, and the total run time dropped to minutes/~ an hour.
To recap:
Database records are no longer fetched in the sequence mentioned earlier;
calculations happen inside the database via a PostgreSQL function.
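For anyone landing here later, the in-database function can be as small as a wrapper around the original SELECT. This is only a guess at what the final version looked like (results_table is hypothetical, and conn is the psycopg2 connection from the sketch above):

with conn, conn.cursor() as cur:
    # Scalar SQL function around the calculation from the question.
    cur.execute("""
        CREATE OR REPLACE FUNCTION calc_field(p_gg text) RETURNS numeric AS $$
            SELECT (AX.b + BF.f) / CR.d
            FROM table_one CR
            LEFT JOIN table_two AX ON AX.x = CR.x
            LEFT JOIN table_three BF ON BF.x = CR.x
            WHERE CR.gg = p_gg
        $$ LANGUAGE sql STABLE
    """)
    # One set-based statement instead of 50M round trips.
    cur.execute("""
        INSERT INTO results_table (gg, calculated_field)
        SELECT DISTINCT gg, calc_field(gg) FROM table_one
    """)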

Update PostgreSQL database with daily stock prices in Python

So I found a great script over at QuantState that had a great walk-through on setting up my own securities database and loading in historical pricing information. However, I'm now trying to modify the script so that I can run it daily and have the latest stock quotes added.
I adjusted the initial data load to just download one week's worth of historicals, but I've been having issues with writing the SQL statement that checks whether the row already exists before adding it. Can anyone help me out with this? Here's what I have so far:
def insert_daily_data_into_db(data_vendor_id, symbol_id, daily_data):
    """Takes a list of tuples of daily data and adds it to the
    database. Appends the vendor ID and symbol ID to the data.

    daily_data: List of tuples of the OHLC data (with
    adj_close and volume)"""
    # Create the time now
    now = datetime.datetime.utcnow()

    # Amend the data to include the vendor ID and symbol ID
    daily_data = [(data_vendor_id, symbol_id, d[0], now, now,
                   d[1], d[2], d[3], d[4], d[5], d[6]) for d in daily_data]

    # Create the insert strings
    column_str = """data_vendor_id, symbol_id, price_date, created_date,
                 last_updated_date, open_price, high_price, low_price,
                 close_price, volume, adj_close_price"""
    insert_str = ("%s, " * 11)[:-2]
    final_str = "INSERT INTO daily_price (%s) VALUES (%s) WHERE NOT EXISTS (SELECT 1 FROM daily_price WHERE symbol_id = symbol_id AND price_date = insert_str[2])" % (column_str, insert_str)

    # Using the postgre connection, carry out an INSERT INTO for every symbol
    with con:
        cur = con.cursor()
        cur.executemany(final_str, daily_data)
Some notes regarding your code above:
It's generally easier to defer to now() in pure SQL than to construct timestamps in Python whenever possible. It avoids lots of potential pitfalls with timezones, library differences, etc.
If you construct a list of columns, you can dynamically generate a string of %s's based on its size, and don't need to hardcode the length into a repeated string which is then sliced.
Since it appears that insert_daily_data_into_db is meant to be called from within a loop on a per-stock basis, I don't believe you want to use executemany here, which would require a list of tuples and is very different semantically.
You were comparing symbol_id to itself in the sub-select, instead of to a particular value (which would make it always true).
To prevent possible SQL injection, you should always pass values in the WHERE clause as bound parameters, including in sub-selects.
Note: I'm assuming that you're using psycopg2 to access Postgres, and that the primary key for the table is a tuple of (symbol_id, price_date). If not, the code below would need to be tweaked at least a bit.
With those points in mind, try something like this (untested, since I don't have your data, db, etc. but it is syntactically valid Python):
def insert_daily_data_into_db(data_vendor_id, symbol_id, daily_data):
    """Takes a list of tuples of daily data and adds it to the
    database. Appends the vendor ID and symbol ID to the data.

    daily_data: List of tuples of the OHLC data (with
    adj_close and volume)"""
    column_list = ["data_vendor_id", "symbol_id", "price_date", "created_date",
                   "last_updated_date", "open_price", "high_price", "low_price",
                   "close_price", "volume", "adj_close_price"]
    insert_list = ['%s'] * len(column_list)
    # 'now' (rather than 'now()') is the special input string PostgreSQL
    # accepts for "current time" when bound as a parameter.
    values_tuple = (data_vendor_id, symbol_id, daily_data[0], 'now', 'now', daily_data[1],
                    daily_data[2], daily_data[3], daily_data[4], daily_data[5], daily_data[6])

    # INSERT ... SELECT is used because a plain INSERT ... VALUES
    # does not accept a WHERE clause.
    final_str = """INSERT INTO daily_price ({0})
                   SELECT {1}
                   WHERE NOT EXISTS (SELECT 1
                                     FROM daily_price
                                     WHERE symbol_id = %s
                                       AND price_date = %s)""".format(
        ', '.join(column_list), ', '.join(insert_list))

    # Using the postgres connection, carry out an INSERT INTO for every symbol;
    # execute() takes the statement plus a single sequence of parameters.
    with con:
        cur = con.cursor()
        cur.execute(final_str, values_tuple + (values_tuple[1], values_tuple[2]))
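As an aside (not part of the original answer): if daily_price has a unique constraint on (symbol_id, price_date) and the server is PostgreSQL 9.5 or newer, ON CONFLICT DO NOTHING expresses "insert if absent" more simply, without the race window of the NOT EXISTS check:

final_str = """INSERT INTO daily_price ({0})
               VALUES ({1})
               ON CONFLICT (symbol_id, price_date) DO NOTHING""".format(
    ', '.join(column_list), ', '.join(insert_list))

with con:
    cur = con.cursor()
    cur.execute(final_str, values_tuple)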
