I'm having a lot of trouble converting my SQL query to SQLAlchemy. I haven't been able to find any resources that do what I'm trying to do.
The query I am trying to convert is:
SELECT
COALESCE(d.manager_name, e.name) AS name,
COALESCE(d.department_name, e.department_name) AS department
FROM employee e
LEFT JOIN department d ON e.id = d.id
WHERE e.date = '2018-11-05'
In SQLAlchemy I came up with:
query = self.session.query(
    func.coalesce(Department.manager_name, Employee.name),
    func.coalesce(Department.department_name, Employee.department_name),
).join(
    Department,
    Employee.id == Department.id,
).filter(
    Employee.date == '2018-11-05',
)
But I keep getting the error:
sqlalchemy.exc.InvalidRequestError: Can't join table/selectable 'Department' to itself.
WHY?! The statements are exact!
Since Department is the leftmost item in your query, joins take place against it. To control what is considered the first – or the "left" – entity in the join, use Query.select_from():
query = self.session.query(
func.coalesce(Department.manager_name, Employee.name),
func.coalesce(Department.department_name, Employee.department_name)).\
select_from(Employee).\
outerjoin(Department, Employee.id == Department.id).\
filter(Employee.date == '2018-11-05')
This behaviour is also explained in the ORM tutorial under "Querying with Joins", and Query.join(): "Controlling what to Join From".
Your query construct was also using Query.join(), even though the raw SQL had a LEFT JOIN; in that case Query.outerjoin() or join(..., isouter=True) should be used.
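For reference, the same corrected query written with join(..., isouter=True) instead of outerjoin() produces the same LEFT JOIN:
query = self.session.query(
    func.coalesce(Department.manager_name, Employee.name),
    func.coalesce(Department.department_name, Employee.department_name)).\
    select_from(Employee).\
    join(Department, Employee.id == Department.id, isouter=True).\
    filter(Employee.date == '2018-11-05')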
Related
I want to run a working raw SQL query the ORM way, where I need to pick columns and a count of a column from the joined tables, not any columns from the main table. How do I implement this with the ORM?
My raw query (which I usually use in SQL Server):
RAW SQL:
SELECT [barcode].[dbo].[Tbl_ProductionMaster].[ProductionName]
,[barcode].[dbo].[Tbl_barcode].[Size]
,COUNT([barcode].[dbo].[Tbl_barcode].[Size])
FROM [barcode].[dbo].[tbl_ProductionScan]
INNER JOIN [barcode].[dbo].[Tbl_barcode] ON [barcode].[dbo].[Tbl_barcode].[Serial_no] = [barcode].[dbo].[tbl_ProductionScan].[serial_no]
INNER JOIN [barcode].[dbo].[Tbl_ProductionMaster] ON [barcode].[dbo].[Tbl_ProductionMaster].[ProductionCode] = [barcode].[dbo].[Tbl_barcode].[Product_code]
WHERE [barcode].[dbo].[tbl_ProductionScan].[prod_date] BETWEEN '2021-08-01 08:00:00' AND '2021-08-25 08:00:00' AND [barcode].[dbo].[Tbl_ProductionMaster].[ProductionName] Like '%3780%black%'
GROUP BY [barcode].[dbo].[Tbl_ProductionMaster].[ProductionName], [barcode].[dbo].[Tbl_barcode].[Size]
What I managed to do in SQLAlchemy:
result = (
    session.query(ProductionMaster.article, Barcode.size, sa.func.count(Barcode.size))
    .join(Barcode, Barcode.serial_no == ProductionScan.serial_no)
    .join(ProductionMaster, ProductionMaster.prod_code == Barcode.prod_code)
    .filter(
        sa.and_(
            ProductionScan.date >= "2021-08-01 08:00:00",
            ProductionScan.date <= "2021-08-25 08:00:00",
            ProductionMaster.article.like("%3780%black%"),
        )
    )
    .group_by(ProductionMaster.article, Barcode.size)
    .all()
)
This runs into an error because the SQL it generates is not correct.
Error:
The multi-part identifier "tbl_ProductionScan.serial_no" could not be bound. (4104) (SQLExecDirectW)')
Raw sql returned from sqlalchemy error:
[SQL: SELECT [Tbl_ProductionMaster].[ProductionName] AS [Tbl_ProductionMaster_ProductionName], [Tbl_barcode].[Size] AS [Tbl_barcode_Size], count([Tbl_barcode].[Size]) AS count_1
FROM [tbl_ProductionScan], [Tbl_ProductionMaster] JOIN [Tbl_barcode] ON [Tbl_barcode].serial_no = [tbl_ProductionScan].serial_no JOIN [Tbl_ProductionMaster] ON [Tbl_ProductionMaster].[ProductionCode] = [Tbl_barcode].[Product_Code]
WHERE [tbl_ProductionScan].prod_date >= ? AND [tbl_ProductionScan].prod_date <= ? AND [Tbl_ProductionMaster].[ProductionName] LIKE ? GROUP BY [Tbl_ProductionMaster].[ProductionName], [Tbl_barcode].[Size]]
In this SQLAlchemy query, how do I get rid of the extra comma-joined table in the FROM clause? I only want Tbl_ProductionScan in FROM; all the other tables (Tbl_ProductionMaster, Tbl_barcode) should appear in JOIN clauses only, so that the SQLAlchemy ORM query matches my actual raw query given at the top.
Regarding the filtering: make sure you convert your strings to dates, and check whether you are trying to filter on two string patterns (use or_) or just one:
from datetime import datetime

start_date = datetime.strptime("2021-08-01 08:00:00", "%Y-%m-%d %H:%M:%S")
end_date = datetime.strptime("2021-08-25 08:00:00", "%Y-%m-%d %H:%M:%S")
search = "%{}%{}%".format("3780", "black")
and then try to run this
result = (
    session.query(ProductionScan, ProductionMaster, Barcode)
    .join(Barcode, Barcode.serial_no == ProductionScan.serial_no)
    .join(ProductionMaster, ProductionMaster.prod_code == Barcode.prod_code)
    .filter(
        sa.and_(
            ProductionScan.date >= start_date,
            ProductionScan.date <= end_date,
            ProductionMaster.article.like(search),
            # sa.or_(ProductionMaster.article.like(search1), ProductionMaster.article.like(search2))
        )
    )
    .filter(Barcode.serial_no.isnot(None))
    .with_entities(
        ProductionMaster.article.label('article'),
        Barcode.size.label('size'),
        sa.func.count(Barcode.size).label('count'),
    )
    .group_by(ProductionMaster.article, Barcode.size)
    .all()
)
with_entities: Return a new Query replacing the SELECT list with the given entities.
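If you would rather keep the original three-column SELECT list instead of starting from the full entities, the select_from() technique from the first answer above should also work here. A minimal sketch, assuming the same model names and the start_date/end_date/search variables defined above:
result = (
    session.query(ProductionMaster.article, Barcode.size, sa.func.count(Barcode.size))
    .select_from(ProductionScan)  # force Tbl_ProductionScan to be the sole table in FROM
    .join(Barcode, Barcode.serial_no == ProductionScan.serial_no)
    .join(ProductionMaster, ProductionMaster.prod_code == Barcode.prod_code)
    .filter(
        ProductionScan.date >= start_date,
        ProductionScan.date <= end_date,
        ProductionMaster.article.like(search),
    )
    .group_by(ProductionMaster.article, Barcode.size)
    .all()
)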
I've been wrestling with what should be a simple conversion of a straightforward SQL query into an SQLAlchemy expression, and I just can't get the subquery to line up the way I intend. This is a single-table query of a "Comments" table; I want to find which users have made the most first comments:
SELECT user_id, count(*) AS count
FROM comments c
where c.date = (SELECT MIN(c2.date)
FROM comments c2
WHERE c2.post_id = c.post_id
)
GROUP BY user_id
ORDER BY count DESC
LIMIT 20;
I don't know how to write the subquery so that it refers to the outer query, and if I did, I wouldn't know how to assemble this into the outer query itself. (Using MySQL, which shouldn't matter.)
Well, after giving up for a while and then looking back at it, I came up with something that works. I'm sure there's a better way, but:
c2 = aliased(Comment)
firstdate = select([func.min(c2.date)]).\
where(c2.post_id == Comment.post_id).\
as_scalar() # or scalar_subquery(), in SQLA 1.4
users = session.query(
Comment.user_id, func.count('*').label('count')).\
filter(Comment.date == firstdate).\
group_by(Comment.user_id).\
order_by(desc('count')).\
limit(20)
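The subquery correlates to the outer query automatically because it references Comment.post_id; if you prefer to make that explicit, the same scalar subquery can be written with correlate() (a sketch; on older SQLAlchemy versions you may need to pass Comment.__table__ instead of the mapped class):
c2 = aliased(Comment)
firstdate = select([func.min(c2.date)]).\
    where(c2.post_id == Comment.post_id).\
    correlate(Comment).\
    as_scalar()  # explicit correlation; or scalar_subquery() in SQLA 1.4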
So I am trying to join a few tables with an outerjoin.
This is my code:
products = (
db.session.query(Offers, Products, Brand, Categories, ProductImages)
.outerjoin(
Offers,
Offers.product_id == Products.id,
Offers.brand_id == Brand.id,
Offers.category_id == Categories.id,
Offers.product_id == ProductImages.product_id,
)
.filter(and_(now >= Offers.start_date), (now <= Offers.end_date))
.order_by(Offers.product_name)
.all()
)
I am getting this error:
sqlalchemy.exc.InvalidRequestError: Don't know how to join to <class 'app.models.Offers'>; please use an ON clause to more clearly establish the left side of this join
But I assumed that by mentioning "Offers" at the beginning of the join I was stating a join ON the "Offers" table.
There are no relationships defined between the tables, and my tech lead has told me not to define them at the moment. How can I do the join without defining the relationships?
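As in the first answer above, the usual way to do this without relationships is to pin the left side with select_from() and give each outerjoin() its own single ON clause. A minimal sketch, assuming the models, db.session, and now variable from the question:
products = (
    db.session.query(Offers, Products, Brand, Categories, ProductImages)
    .select_from(Offers)  # make Offers the explicit left side of the join chain
    .outerjoin(Products, Offers.product_id == Products.id)
    .outerjoin(Brand, Offers.brand_id == Brand.id)
    .outerjoin(Categories, Offers.category_id == Categories.id)
    .outerjoin(ProductImages, Offers.product_id == ProductImages.product_id)
    .filter(and_(now >= Offers.start_date, now <= Offers.end_date))
    .order_by(Offers.product_name)
    .all()
)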
I have a relatively complex SQL statement that I want to execute with the SQLAlchemy ORM, but when I try to do so I always get the error {NoSuchColumnError}"Could not locate column in row for column 'transaction_out.value'". My SQL statement looks as follows:
sql = """
Select
addresses.address,
transaction_out1.value As sent,
transaction_out1.transaction_id As sent_id,
transactions.block As block_sent,
transactions.time As time_sent,
transactions.txid As txid_sent,
"sent" as type
From
transaction_out INNER Join
transaction_out_address On transaction_out_address.transaction_out_id = transaction_out.id INNER Join
addresses On transaction_out_address.address_id = addresses.id INNER Join
transaction_in On transaction_in.transaction_out_id = transaction_out.id INNER Join
transactions On transaction_in.transaction_id = transactions.id INNER Join
transaction_out transaction_out1 On transaction_out1.transaction_id = transactions.id INNER Join
transactions transactions1 On transaction_out.transaction_id = transactions1.id
WHERE addresses.address=:address_string
UNION
Select
addresses.address,
transaction_out.value As received,
transaction_out.transaction_id As received_id,
transactions.block As received_block,
transactions.time As received_time,
transactions.txid As received_txid,
"received"
From
transaction_out LEFT Join
transaction_out_address On transaction_out_address.transaction_out_id = transaction_out.id LEFT Join
addresses On transaction_out_address.address_id = addresses.id LEFT Join
transaction_in On transaction_in.transaction_out_id = transaction_out.id LEFT Join
transactions On transaction_out.transaction_id = transactions.id
WHERE addresses.address=:address_string
"""
And I tried to execute the statement in the following way:
query = session.query(Address.address, TransactionOut.value, TransactionOut.id, Block.height, Transaction.time, Transaction.txid).from_statement(
stmt.bindparams(
bindparam("address_string",
value=address_string)
))
I can execute the raw SQL statement with engine.execute() without any problems, but I need to do it with session.query() so I can use sqlalchemy-datatables. My database looks more or less like the one here: https://dba.stackexchange.com/questions/137791/blockchain-bitcoin-as-a-database/137800#137800.
What is the problem with the way I try to execute it?
The column aliases in the raw SQL are hiding the columns from the SQLAlchemy query. Either remove them, or alter the query to accommodate them:
query = session.query(Address.address,
TransactionOut.value.label('sent'),
TransactionOut.id.label('sent_id'),
Transaction.block.label('block_sent'),
Transaction.time.label('time_sent'),
Transaction.txid.label('txid_sent')).\
from_statement(stmt).\
params(address_string=address_string)
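For completeness, stmt in both snippets is presumably the raw UNION query wrapped in a text() construct, e.g.:
from sqlalchemy import text

stmt = text(sql)  # sql is the raw UNION statement shown above, with the :address_string bind parameter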
I need a little help.
I have the following query and I'm curious about how to represent it in terms of sqlalchemy.orm. Currently I'm executing it with session.execute. It's not critical for me, but I'm just curious. The thing I actually don't know is how to put a subquery in the FROM clause (a nested view) without doing any join.
select g_o.group_ from (
select distinct regexp_split_to_table(g.group_name, E',') group_
from (
select array_to_string(groups, ',') group_name
from company
where status='active'
and array_to_string(groups, ',') like :term
limit :limit
) g
) g_o
where g_o.group_ like :term
order by 1
limit :limit
I need this subquery because of a speed issue: without the limit in the innermost query, regexp_split_to_table starts to parse all the data and applies the limit only after that. My table is huge and I cannot afford that.
If something is not clear, please ask; I'll do my best to explain.
I presume this is PostgreSQL.
To create a subquery, use the subquery() method. The resulting object can be used as if it were a Table object. Here's how your query would look in SQLAlchemy:
subq1 = session.query(
func.array_to_string(Company.groups, ',').label('group_name')
).filter(
(Company.status == 'active') &
(func.array_to_string(Company.groups, ',').like(term))
).limit(limit).subquery()
subq2 = session.query(
func.regexp_split_to_table(subq1.c.group_name, ',')
.distinct()
.label('group')
).subquery()
q = session.query(subq2.c.group).\
filter(subq2.c.group.like(term)).\
order_by(subq2.c.group).\
limit(limit)
However, you could avoid one subquery by using the unnest function instead of converting the array to a string with array_to_string and then splitting it with regexp_split_to_table:
subq = session.query(
func.unnest(Company.groups).label('group')
).filter(
(Company.status == 'active') &
(func.array_to_string(Company.groups, ',').like(term))
).limit(limit).subquery()
q = session.query(subq.c.group.distinct()).\
filter(subq.c.group.like(term)).\
order_by(subq.c.group).\
limit(limit)