I am using Django with MySQL, and I have a model MyModel that contains some items whose region field is None. When I run this:
results = MyModel.objects.all()\
    .values('region')\
    .annotate(total=Count('region'))
it returns the grouping correctly, but one group comes back as {'None': 0}, which is incorrect, because there are items whose region field is None.
Now, if I were writing raw MySQL I could group this with:
select region, count(id) from model_table group by region;
which returns the result I want: | NULL | 5 |.
How can I achieve this from Django?
You actually already wrote the answer in your question. In SQL, COUNT(column) skips rows where that column is NULL, which is why the NULL group comes back as 0. If you want those rows counted as well, aggregate over another field that is never NULL, just like you did in your SQL statement where you used the 'id' column. So all you need to do is perform the COUNT on 'id'. Change your code to something like this:
results = MyModel.objects.all()\
    .values('region')\
    .annotate(total=Count('id'))
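A minimal sketch of the full call with the Count import included; the output values are illustrative only:
from django.db.models import Count

# Count('id') counts every row in each group, including rows whose region is NULL.
results = MyModel.objects.values('region').annotate(total=Count('id'))
# e.g. [{'region': None, 'total': 5}, {'region': 'west', 'total': 2}, ...]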
Related
Say I have a column of IDs that is declared as ids JSON NOT NULL using SQLAlchemy, and now I want to delete an ID from this column. I'd like to do several things at once:
query only the rows that have this specific ID
delete this ID from all rows it appears in
as a bonus, if possible, delete the row if the ID list is now empty.
For the query, something like this:
db.query(models.X).filter(id in list(models.X.ids)) should work.
Now, I'd rather avoid iterating over the results and sending an update request per row, as there can be multiple rows. Is there an elegant way to do this?
Thanks!
For the search-and-remove part you can use the json_remove function (one of SQLite's built-in JSON functions):
from sqlalchemy import func
db.query(models.X).update({'ids': func.json_remove(models.X.ids, f'$[{TARGET_ID}]')})
Here replace TARGET_ID with the targeted ID.
Now, this will update the rows 'silently' (whether or not this ID is present in the array).
If you want to check first whether the target ID is in the column, you can query all rows containing the target ID with a json_extract query (calling the .all() method) and then remove those IDs with an .update() call.
But this will cost you twice the number of queries (less performant).
For the delete part, you can use the json_array_length built-in function:
from sqlalchemy import func
db.query(models.X).filter(func.json_array_length(models.X.ids) == 0).delete()
FYI: I am not sure you can do both in one query, and even if it is possible, I would not, for reasons of clean syntax, logging and monitoring.
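Putting the two steps together, a minimal sketch (it keeps the same TARGET_ID placeholder in the JSON path as above; synchronize_session=False just tells SQLAlchemy not to try to sync in-memory objects for these bulk statements):
from sqlalchemy import func

# Remove the target entry from every row's ids array (bulk UPDATE).
db.query(models.X).update(
    {'ids': func.json_remove(models.X.ids, f'$[{TARGET_ID}]')},
    synchronize_session=False,
)
# Then drop the rows whose ids array is now empty (bulk DELETE).
db.query(models.X).filter(func.json_array_length(models.X.ids) == 0).delete(
    synchronize_session=False
)
db.commit()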
I have a search option whose values the user inputs from the front end:
search = {"Date":"2016-02-07","Status":"pass" }
Then I map those keys to the column names in the DB:
query_to_field_mapping = {"Date": "date","Status": "result"}
query = {query_to_field_mapping[key]: value for key, value in search.items()}
Now, the date column is a DateTimeField in the DB. When filtering, I am trying the following:
result = Customer.objects.filter(**query)
Here I am trying to filter on the date field and get the matching records, but with no success. How can I proceed? Any help will be awesome!
I tried another question from SO: How can I filter a date of a DateTimeField in Django?
It didn't solve my problem, because there the column names are passed one by one, whereas here I am passing them as a dictionary.
Your approach is the correct one. The reason it doesn't work is that you filter a datetime field for equality with a date string, and a date like 2016-02-07 (your query param) does not equal 2016-02-07T12:32:22 (a possible value in the DB).
To overcome this, use one of the field lookups from the question you linked. As a specific example, let's use the contains field lookup, like so:
query_to_field_mapping = {"Date": "date__contains","Status": "result"}
Thus, when the mapped query is passed to .filter(), it will look for the date within the datetime values, which is what you need.
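Putting it together with the code from the question, a minimal sketch (Customer and its date and result columns are the ones assumed in the question):
search = {"Date": "2016-02-07", "Status": "pass"}
query_to_field_mapping = {"Date": "date__contains", "Status": "result"}

# Translate the front-end keys into ORM lookups, then filter in a single call.
query = {query_to_field_mapping[key]: value for key, value in search.items()}
result = Customer.objects.filter(**query)  # any datetime on 2016-02-07 with result 'pass'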
I am querying two tables with SQLAlchemy, and I want to use the distinct feature on my query to get a unique set of customer IDs.
I have the following query:
orders[n] = DBSession.query(Order).\
    join(Customer).\
    filter(Order.oh_reqdate == date_q).\
    filter(Order.vehicle_id == vehicle.id).\
    order_by(Customer.id).\
    distinct(Customer.id).\
    order_by(asc(Order.position)).all()
If you can see what is going on here, I am querying the Order table for all orders out on a specific date, for a specific vehicle, and this works fine. However, some customers may have more than one order on a single date, so I am trying to filter the results to list each customer only once. This also works, but in order to do it I must first order the results by the column that the distinct() is applied to. I can add a second order_by for the column I actually want the results ordered by without causing a syntax error, but it gets ignored and the results are simply ordered by Customer.id.
I need to perform my query on the Order table and join to Customer (not the other way round) due to the way the foreign keys have been set up.
Is what I want to-do possible within one query? Or will I need to re-loop over my results to get the data I want in the right order?
You never need to "re-loop", if by that you mean loading the rows into Python. You probably want to produce a subquery and select from that, which you can achieve using query.from_self().order_by(asc(Order.position)). More specific scenarios can be handled with subquery().
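Applied to the query from the question, that would look roughly like this (a sketch only; from_self() wraps the DISTINCT query in a subquery so the outer order_by can use Order.position):
orders[n] = DBSession.query(Order).\
    join(Customer).\
    filter(Order.oh_reqdate == date_q).\
    filter(Order.vehicle_id == vehicle.id).\
    order_by(Customer.id).\
    distinct(Customer.id).\
    from_self().\
    order_by(asc(Order.position)).all()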
In this case I can't really tell what you're going for. If a customer has more than one Order with the requested vehicle id and date, you'll get two rows, one for each Order, and each Order row will refer to the Customer. What exactly do you want instead? Just the first Order row within each customer group? I'd do that like this:
highest_order = s.query(Order.customer_id, func.max(Order.position).label('position')).\
    filter(Order.oh_reqdate == date_q).\
    filter(Order.vehicle_id == vehicle.id).\
    group_by(Order.customer_id).\
    subquery()

s.query(Order).\
    join(Customer).\
    join(highest_order, highest_order.c.customer_id == Customer.id).\
    filter(Order.oh_reqdate == date_q).\
    filter(Order.vehicle_id == vehicle.id).\
    filter(Order.position == highest_order.c.position)
Is it possible to follow ForeignKey relationships backward for an entire QuerySet?
I mean something like this:
x = table1.objects.select_related().filter(name='foo')
x.table2.all()
when table1 has a ForeignKey to table2.
In
https://docs.djangoproject.com/en/1.2/topics/db/queries/#following-relationships-backward
I can see that it works only with get() and not with filter().
Thanks
You basically want to get a QuerySet of a different type than the data you start with.
class Kid(models.Model):
    mom = models.ForeignKey('Mom')
    name = models.CharField…

class Mom(models.Model):
    name = models.CharField…
Let's say you want to get all moms having any son named Johnny.
Mom.objects.filter(kid__name='Johnny')
Let's say you want to get all kids of any Lucy.
Kid.objects.filter(mom__name='Lucy')
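For completeness, following the relation backward from a single object, the pattern described in the docs you linked, uses the automatically created reverse manager (kid_set by default, unless a related_name is set):
lucy = Mom.objects.get(name='Lucy')
lucy.kid_set.all()  # every Kid whose mom is this particular Lucy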
You should be able to use something like:
for y in x:
    y.table2.all()
But you could also use get() with the unique values (which will be id, unless you have specified a different one) after finding them with a query.
So,
x = table1.objects.select_related().filter(name='foo')
for y in x:
    z = table1.objects.select_related().get(pk=y.id)
    z.table2.all()
Should also work.
You can also use values() to fetch specific values across a foreign key reference. With values() the SELECT query on the DB is reduced to only those columns, and the appropriate joins are done for you.
To re-use the example from Krzysztof Szularz:
johnny_moms = Kid.objects.filter(name='Johnny').values('mom__id', 'mom__name').distinct()
This will return a QuerySet of dictionaries of Mom attributes, using the Kid manager.
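As an illustration, the result has this shape (values are made up):
<QuerySet [{'mom__id': 1, 'mom__name': 'Lucy'}, {'mom__id': 4, 'mom__name': 'Anna'}]>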
Let's say I have 2 models, Document and Person. Document has a relationship to Person via the "owner" property. Now:
session.query(Document)\
    .options(joinedload('owner'))\
    .filter(Person.is_deleted != True)
will join the Person table twice. One Person table will be selected, and the duplicated one will be filtered, which is not exactly what I want, because this way the Document rows will not be filtered.
What can I do to apply the filter on the joined-loaded table/model?
You are right, the Person table will be used twice in the resulting SQL, but each occurrence serves a different purpose:
one is to apply the filter condition: filter(Person.is_deleted != True)
the other is to eager-load the relationship: options(joinedload('owner'))
But the reason your query returns wrong results is that your filter condition is not complete. To make it produce the right results, you also need to JOIN the two models:
qry = (session.query(Document).
    join(Document.owner).  # THIS IS IMPORTANT
    options(joinedload(Document.owner)).
    filter(Person.is_deleted != True)
)
This will return the correct rows, even though it will still have 2 references (JOINs) to the Person table. The real solution is to use contains_eager instead of joinedload:
qry = (session.query(Document).
    join(Document.owner).  # THIS IS STILL IMPORTANT
    options(contains_eager(Document.owner)).
    filter(Person.is_deleted != True)
)
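For reference, a minimal model setup that the snippets above assume could look like this (a sketch only; names other than owner and is_deleted are illustrative):
from sqlalchemy import Boolean, Column, ForeignKey, Integer
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship

Base = declarative_base()

class Person(Base):
    __tablename__ = 'person'
    id = Column(Integer, primary_key=True)
    is_deleted = Column(Boolean, default=False)

class Document(Base):
    __tablename__ = 'document'
    id = Column(Integer, primary_key=True)
    owner_id = Column(Integer, ForeignKey('person.id'))
    owner = relationship(Person)  # the "owner" relationship from the question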