I want to express a working raw SQL query in the ORM way. I need to select columns, and a count of one of those columns, from the joined tables only, not from the main table. How do I implement this with the ORM?
My raw query (which I usually run against SQL Server):
RAW SQL:
SELECT [barcode].[dbo].[Tbl_ProductionMaster].[ProductionName]
,[barcode].[dbo].[Tbl_barcode].[Size]
,COUNT([barcode].[dbo].[Tbl_barcode].[Size])
FROM [barcode].[dbo].[tbl_ProductionScan]
INNER JOIN [barcode].[dbo].[Tbl_barcode] ON [barcode].[dbo].[Tbl_barcode].[Serial_no] = [barcode].[dbo].[tbl_ProductionScan].[serial_no]
INNER JOIN [barcode].[dbo].[Tbl_ProductionMaster] ON [barcode].[dbo].[Tbl_ProductionMaster].[ProductionCode] = [barcode].[dbo].[Tbl_barcode].[Product_code]
WHERE [barcode].[dbo].[tbl_ProductionScan].[prod_date] BETWEEN '2021-08-01 08:00:00' AND '2021-08-25 08:00:00' AND [barcode].[dbo].[Tbl_ProductionMaster].[ProductionName] Like '%3780%black%'
GROUP BY [barcode].[dbo].[Tbl_ProductionMaster].[ProductionName], [barcode].[dbo].[Tbl_barcode].[Size]
What I managed to do in SQLAlchemy:
result = (
    session.query(ProductionMaster.article, Barcode.size, sa.func.count(Barcode.size))
    .join(Barcode, Barcode.serial_no == ProductionScan.serial_no)
    .join(ProductionMaster, ProductionMaster.prod_code == Barcode.prod_code)
    .filter(
        sa.and_(
            ProductionScan.date >= "2021-08-01 08:00:00",
            ProductionScan.date <= "2021-08-25 08:00:00",
            ProductionMaster.article.like("%3780%black%"),
        )
    )
    .group_by(ProductionMaster.article, Barcode.size)
    .all()
)
This runs into an error because the SQL it generates is not correct.
Error:
The multi-part identifier "tbl_ProductionScan.serial_no" could not be bound. (4104) (SQLExecDirectW)')
Raw SQL returned in the SQLAlchemy error:
[SQL: SELECT [Tbl_ProductionMaster].[ProductionName] AS [Tbl_ProductionMaster_ProductionName], [Tbl_barcode].[Size] AS [Tbl_barcode_Size], count([Tbl_barcode].[Size]) AS count_1
FROM [tbl_ProductionScan], [Tbl_ProductionMaster] JOIN [Tbl_barcode] ON [Tbl_barcode].serial_no = [tbl_ProductionScan].serial_no JOIN [Tbl_ProductionMaster] ON [Tbl_ProductionMaster].[ProductionCode] = [Tbl_barcode].[Product_Code]
WHERE [tbl_ProductionScan].prod_date >= ? AND [tbl_ProductionScan].prod_date <= ? AND [Tbl_ProductionMaster].[ProductionName] LIKE ? GROUP BY [Tbl_ProductionMaster].[ProductionName], [Tbl_barcode].[Size]]
In this SQLAlchemy query, how do I get rid of the extra table in the FROM clause? I only need tbl_ProductionScan there; all the other tables (Tbl_ProductionMaster, Tbl_barcode) should appear via the JOIN keyword only. That way the SQLAlchemy ORM query matches my actual raw query given at the top.
A note on the filtering first: make sure you convert your strings to dates, and check whether you are trying to filter on two string values (use or_) or just one:
from datetime import datetime

start_date = datetime.strptime("2021-08-01 08:00:00", "%Y-%m-%d %H:%M:%S")
end_date = datetime.strptime("2021-08-25 08:00:00", "%Y-%m-%d %H:%M:%S")
search = "%{}%{}%".format("3780", "black")
and then try to run this:
result = (
    session.query(ProductionScan, ProductionMaster, Barcode)
    .join(Barcode, Barcode.serial_no == ProductionScan.serial_no)
    .join(ProductionMaster, ProductionMaster.prod_code == Barcode.prod_code)
    .filter(
        sa.and_(
            ProductionScan.date >= start_date,
            ProductionScan.date <= end_date,
            ProductionMaster.article.like(search),
            # sa.or_(ProductionMaster.article.like(search1), ProductionMaster.article.like(search2))
        )
    )
    .filter(Barcode.serial_no.isnot(None))
    .with_entities(
        ProductionMaster.article.label('article'),
        Barcode.size.label('size'),
        sa.func.count(Barcode.size).label('count'),
    )
    .group_by(ProductionMaster.article, Barcode.size)
    .all()
)
with_entities: "Return a new Query replacing the SELECT list with the given entities."
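Alternatively, if you want to keep your original SELECT list and only control which table ends up alone in the FROM clause, Query.select_from() lets you pin the left side of the joins explicitly. A minimal sketch, assuming the same models and the start_date/end_date/search variables defined above:

result = (
    session.query(ProductionMaster.article, Barcode.size, sa.func.count(Barcode.size))
    # select_from() makes ProductionScan the leftmost FROM entity,
    # so the other tables appear only in the JOIN clauses.
    .select_from(ProductionScan)
    .join(Barcode, Barcode.serial_no == ProductionScan.serial_no)
    .join(ProductionMaster, ProductionMaster.prod_code == Barcode.prod_code)
    .filter(
        ProductionScan.date.between(start_date, end_date),
        ProductionMaster.article.like(search),
    )
    .group_by(ProductionMaster.article, Barcode.size)
    .all()
)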
Related
I have a relatively complex SQL statement that I want to execute with the SQLAlchemy ORM, but when I try to do so I always get the error NoSuchColumnError: "Could not locate column in row for column 'transaction_out.value'". My SQL statement looks as follows:
sql = """
Select
addresses.address,
transaction_out1.value As sent,
transaction_out1.transaction_id As sent_id,
transactions.block As block_sent,
transactions.time As time_sent,
transactions.txid As txid_sent,
"sent" as type
From
transaction_out INNER Join
transaction_out_address On transaction_out_address.transaction_out_id = transaction_out.id INNER Join
addresses On transaction_out_address.address_id = addresses.id INNER Join
transaction_in On transaction_in.transaction_out_id = transaction_out.id INNER Join
transactions On transaction_in.transaction_id = transactions.id INNER Join
transaction_out transaction_out1 On transaction_out1.transaction_id = transactions.id INNER Join
transactions transactions1 On transaction_out.transaction_id = transactions1.id
WHERE addresses.address=:address_string
UNION
Select
addresses.address,
transaction_out.value As received,
transaction_out.transaction_id As received_id,
transactions.block As received_block,
transactions.time As received_time,
transactions.txid As received_txid,
"received"
From
transaction_out LEFT Join
transaction_out_address On transaction_out_address.transaction_out_id = transaction_out.id LEFT Join
addresses On transaction_out_address.address_id = addresses.id LEFT Join
transaction_in On transaction_in.transaction_out_id = transaction_out.id LEFT Join
transactions On transaction_out.transaction_id = transactions.id
WHERE addresses.address=:address_string
"""
And I tried to execute the statement in the following way:
stmt = text(sql)
query = session.query(Address.address, TransactionOut.value, TransactionOut.id,
                      Block.height, Transaction.time, Transaction.txid).from_statement(
    stmt.bindparams(
        bindparam("address_string", value=address_string)
    )
)
I can execute the raw SQL statement with engine.execute() without any problems, but I need to do it with session.query() so I can use sqlalchemy-datatables. My database looks more or less like the one here: https://dba.stackexchange.com/questions/137791/blockchain-bitcoin-as-a-database/137800#137800.
What is the problem with the way I try to execute it?
The column aliases in the raw SQL are hiding the columns from the SQLAlchemy query. Either remove them, or alter the query to accommodate them:
query = session.query(Address.address,
                      TransactionOut.value.label('sent'),
                      TransactionOut.id.label('sent_id'),
                      Transaction.block.label('block_sent'),
                      Transaction.time.label('time_sent'),
                      Transaction.txid.label('txid_sent')).\
    from_statement(stmt).\
    params(address_string=address_string)
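With the labels in place, the columns the ORM selects line up with the aliases the raw SQL returns. A small usage sketch (assuming the models above); each row exposes the labelled columns by name:

for row in query:
    print(row.address, row.sent, row.sent_id, row.block_sent)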
I'm trying to understand the following code snippet:
import pandasql
data_sql = data[['account_id', 'id', 'date', 'amount']]
# data_sql is a table has the above columns
data_sql.loc[:, 'date_hist_min'] = data_sql.date.apply(lambda x: x + pd.DateOffset(months=-6))
# add one more column, 'date_hist_min', derived from the 'date' column minus 6 months
sqlcode = '''
SELECT t1.id,
t1.date,
t2.account_id as "account_id_hist",
t2.date as "date_hist",
t2.amount as "amount_hist"
FROM data_sql as t1 JOIN data_sql as t2
ON (cast(strftime('%s', t2.date) as integer) BETWEEN
(cast(strftime('%s', t1.date_hist_min) as integer))
AND (cast(strftime('%s', t1.date) as integer)))
AND (t1.{0} == t2.{0})
'''
# perform the SQL query on the table with sqlcode:
newdf = pandasql.sqldf(sqlcode.format(column), locals())
The code uses the Python pandasql package, which manipulates a dataframe as if it were a SQL table; you can treat the dataframe above as such a table. The definition of the table is in the comments.
What is the meaning of t1.{0} == t2.{0}? What does {0} stand for in this context?
sqlcode.format(column) is going to format the string and inject the column name wherever {0} appears.
The 0 means format will use its first positional argument.
print("This {1} a {0}".format("string", "is")) would print "This is a string".
I'm having a lot of trouble converting my SQL query to SQLAlchemy. I haven't been able to find any resources that do what I am trying to do.
The query I am trying to convert is:
SELECT
COALESCE(d.manager_name, e.name) AS name,
COALESCE(d.department_name, e.department_name) AS department
FROM employee e
LEFT JOIN department d ON e.id = d.id
WHERE e.date = '2018-11-05'
In SQLAlchemy I came up with:
query = self.session.query(
    func.coalesce(Department.manager_name, Employee.name),
    func.coalesce(Department.department_name, Employee.department_name),
).join(
    Department, Employee.id == Department.id,
).filter(
    Employee.date == '2018-11-05',
)
But I keep getting the error:
sqlalchemy.exc.InvalidRequestError: Can't join table/selectable 'Department' to itself.
Why?! The two statements should be identical!
Since Department is the leftmost item in your query, joins take place against it. To control what is considered the first – or the "left" – entity in the join use Query.select_from():
query = self.session.query(
    func.coalesce(Department.manager_name, Employee.name),
    func.coalesce(Department.department_name, Employee.department_name)).\
    select_from(Employee).\
    outerjoin(Department, Employee.id == Department.id).\
    filter(Employee.date == '2018-11-05')
This behaviour is also explained in the ORM tutorial under "Querying with Joins", and Query.join(): "Controlling what to Join From".
Your query construct was also using Query.join(), though the raw SQL had LEFT JOIN. In that case Query.outerjoin() or join(..., isouter=True) should be used.
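For completeness, the same LEFT JOIN can also be spelled with join(..., isouter=True); a minimal sketch under the same model assumptions, with .label() calls mirroring the column aliases in the raw SQL:

query = self.session.query(
    func.coalesce(Department.manager_name, Employee.name).label('name'),
    func.coalesce(Department.department_name, Employee.department_name).label('department'),
).select_from(Employee).join(
    Department, Employee.id == Department.id, isouter=True,  # isouter=True renders a LEFT OUTER JOIN
).filter(Employee.date == '2018-11-05')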
My original query is like
select table1.id, table1.value
from some_database.something table1
join some_set table2 on table2.code=table1.code
where table1.date_ >= :_startdate and table1.date_ <= :_enddate
which is saved in a string in Python. If I do
x = session.execute(script_str, {'_startdate': start_date, '_enddate': end_date})
then
x.fetchall()
will give me the table I want.
Now the situation is that table2 is no longer available to me in the Oracle database; instead it is available in my Python environment as a DataFrame. I wonder what the best way is to fetch the same result from the database in this case.
You can use the IN clause instead.
First remove the join from the script_str:
script_str = """
select table1.id, table1.value
from something table1
where table1.date_ >= :_startdate and table1.date_ <= :_enddate
"""
Then, get codes from dataframe:
codes = myDataFrame.code_column.values
Now we need to dynamically extend script_str and build the parameters for the query:
param_names = ['_code{}'.format(i) for i in range(len(codes))]
script_str += "AND table1.code IN ({})".format(
    ", ".join([":{}".format(p) for p in param_names])
)
Create dict with all parameters:
params = {
    '_startdate': start_date,
    '_enddate': end_date,
}
params.update(zip(param_names, codes))
And execute the query:
x = session.execute(script_str, params)
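On SQLAlchemy 1.2 and later you can avoid building the parameter names by hand with an "expanding" bind parameter, which renders a Python list as an IN clause for you. A sketch, assuming script_str has not already had the IN clause appended:

from sqlalchemy import text, bindparam

stmt = text(script_str + " and table1.code in :codes").bindparams(
    bindparam("codes", expanding=True)  # expands the list into IN (:codes_1, :codes_2, ...)
)
x = session.execute(stmt, {
    "_startdate": start_date,
    "_enddate": end_date,
    "codes": list(codes),
})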
I need a little help.
I have the following query and I'm curious how to represent it in terms of sqlalchemy.orm. Currently I'm executing it with session.execute. It's not critical for me, but I'm just curious. The thing I actually don't know is how to put a subquery in the FROM clause (a nested view) without doing any join.
select g_o.group_ from (
select distinct regexp_split_to_table(g.group_name, E',') group_
from (
select array_to_string(groups, ',') group_name
from company
where status='active'
and array_to_string(groups, ',') like :term
limit :limit
) g
) g_o
where g_o.group_ like :term
order by 1
limit :limit
I need this nested subquery because of a speed issue: without the limit in the innermost query, regexp_split_to_table starts to parse all of the data and applies the limit only after that. My table is huge and I cannot afford that.
If something is not clear, please ask; I'll do my best.
I presume this is PostgreSQL.
To create a subquery, use the subquery() method. The resulting object can be used as if it were a Table object. Here's how your query would look in SQLAlchemy:
subq1 = session.query(
    func.array_to_string(Company.groups, ',').label('group_name')
).filter(
    (Company.status == 'active') &
    (func.array_to_string(Company.groups, ',').like(term))
).limit(limit).subquery()

subq2 = session.query(
    func.regexp_split_to_table(subq1.c.group_name, ',')
    .distinct()
    .label('group')
).subquery()

q = session.query(subq2.c.group).\
    filter(subq2.c.group.like(term)).\
    order_by(subq2.c.group).\
    limit(limit)
However, you could avoid one subquery by using the unnest function instead of converting the array to a string with array_to_string and then splitting it with regexp_split_to_table:
subq = session.query(
    func.unnest(Company.groups).label('group')
).filter(
    (Company.status == 'active') &
    (func.array_to_string(Company.groups, ',').like(term))
).limit(limit).subquery()

q = session.query(subq.c.group.distinct()).\
    filter(subq.c.group.like(term)).\
    order_by(subq.c.group).\
    limit(limit)
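Either variant returns one-column rows, so the group names can be collected with a simple list comprehension (a small usage sketch):

# Each result row has a single column: the group name.
group_names = [row[0] for row in q]
print(group_names)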