I have the following query:
self.cursor.execute("SELECT platform_id_episode, title, from table WHERE asset_type='movie'")
Is there a way to get the number of results returned directly? Currently I am doing the inefficient:
r = self.cursor.fetchall()
num_results = len(r)
If you don't actually need the results,* don't ask MySQL for them; just use COUNT:**
self.cursor.execute("SELECT COUNT(*) FROM table WHERE asset_type='movie'")
Now, you'll get back one row, with one column, whose value is the number of rows your other query would have returned.
Notice that I ignored your specific columns and just did COUNT(*). A COUNT(platform_id_episode) would also be legal, but it means the number of found rows with non-NULL platform_id_episode values; COUNT(*) is the number of found rows, full stop.***
* If you do need the results… well, you have to call fetchall() or equivalent to get them, so I don't see the problem.
** If you've never used aggregate functions in SQL before, make sure to look over some of the examples on that page; you've probably never realized you can do things like that so simply (and efficiently).
*** If someone taught you "never use * in a SELECT", well, that's good advice, but it's not relevant here. The problem with SELECT * is that it spams all of the columns, in random order, across your result set, instead of the columns you actually need in the order you need. SELECT COUNT(*) doesn't do that.
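In code, reading that one-row, one-column result looks something like this (a sketch using the question's cursor, here just cursor):
cursor.execute("SELECT COUNT(*) FROM table WHERE asset_type='movie'")
(num_results,) = cursor.fetchone()  # one row, one column: the count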
Related
I have this query:
SELECT COUNT(DISTINCT Serial, DatumOrig, Glucose) FROM values;
I've tried to recreate it with SQLAlchemy this way:
session.query(Value.Serial, Value.DatumOrig, Value.Glucose).distinct().count()
But this translates to this:
SELECT count(*) AS count_1
FROM (SELECT DISTINCT
        values.`Serial` AS `values_Serial`,
        values.`DatumOrig` AS `values_DatumOrig`,
        values.`Glucose` AS `values_Glucose`
      FROM values) AS anon_1
This does not call the count function inline but wraps the SELECT DISTINCT in a subquery.
My question is: What are the different ways with SQLAlchemy to count a distinct select on multiple columns and what are they translating into?
Is there any solution which would translate into my original query? Is there any serious difference in performance or memory usage?
First off, I think that COUNT(DISTINCT) supporting more than one expression is a MySQL extension. You can sort of achieve the same in, for example, PostgreSQL with ROW values, but the behaviour is not the same regarding NULL: in MySQL, if any of the value expressions evaluates to NULL, the row does not qualify. That also leads to what is different between the two queries in the question:
If any of Serial, DatumOrig, or Glucose is NULL in the COUNT(DISTINCT) query, that row does not qualify, or in other words does not count.
COUNT(*) is the cardinality of the subquery anon_1, or in other words the count of its rows. SELECT DISTINCT Serial, DatumOrig, Glucose will include (distinct) rows with NULLs.
Looking at EXPLAIN output for the 2 queries it looks like the subquery causes MySQL to use a temporary table. That will likely cause a performance difference, especially if it is materialized on disk.
Producing the multi-valued COUNT(DISTINCT) query in SQLAlchemy is a bit tricky, because count() is a generic function and implemented closer to the SQL standard. It only accepts a single expression as its (optional) positional argument, and the same goes for distinct(). If all else fails, you can always revert to text() fragments, like in this case:
# NOTE: text() fragments are included in the query as is, so if the text originates
# from an untrusted source, the query cannot be trusted.
session.query(func.count(distinct(text("`Serial`, `DatumOrig`, `Glucose`")))).\
    select_from(Value).\
    scalar()
which is far from readable and maintainable code, but gets the job done for now. Another option is to write a custom construct that implements the MySQL extension, or to rewrite the query as you have attempted. One way to form a custom construct that produces the required SQL would be:
from itertools import count
from sqlalchemy import func, distinct as _distinct

def _comma_list(exprs):
    # NOTE: Magic number alert, the precedence value must be large enough to avoid
    # producing parentheses around the "comma list" when passed to distinct()
    ps = count(10 + len(exprs), -1)
    exprs = iter(exprs)
    cl = next(exprs)
    for p, e in zip(ps, exprs):
        cl = cl.op(',', precedence=p)(e)
    return cl

def distinct(*exprs):
    return _distinct(_comma_list(exprs))

session.query(func.count(distinct(
    Value.Serial, Value.DatumOrig, Value.Glucose))).scalar()
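To check what SQL this emits, you can compile the statement against the MySQL dialect; a quick sanity check, and the output shown in the comment is approximate:
from sqlalchemy.dialects import mysql

q = session.query(func.count(distinct(
    Value.Serial, Value.DatumOrig, Value.Glucose)))
print(q.statement.compile(dialect=mysql.dialect()))
# roughly: SELECT count(DISTINCT `Serial`, `DatumOrig`, `Glucose`) AS count_1 FROM `values`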
I'm trying to create a method where I can pass a parameter (a number) and have the query return that many rows. See below:
def get_data(i):
    for i in range(0, i):
        TNG = "SELECT DISTINCT hub, value, date_inserted FROM ZE_DATA.AESO_CSD_SUMMARY where opr_date >= trunc(sysdate) order by date_inserted desc fetch first i rows only"
Here i is a number. In the query's "fetch first i rows only" clause, I want it to fetch i rows.
Thoughts on the syntax?
Seems like you're looking for a limit argument. You didn't mention which SQL dialect you're using, and the syntax varies: LIMIT in MySQL and PostgreSQL, TOP in SQL Server, FETCH FIRST n ROWS ONLY in Oracle 12c+ and the SQL standard.
I'm also a little confused by the structure of that function; it seems like you may want to run the query once and iterate through the result set, rather than run the query i times. A sketch follows.
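A minimal sketch, assuming Oracle (the trunc(sysdate) and fetch-first syntax suggest it) and an existing DB-API cursor; the cursor variable is an assumption:
def get_data(cursor, n):
    query = """
        SELECT DISTINCT hub, value, date_inserted
        FROM ZE_DATA.AESO_CSD_SUMMARY
        WHERE opr_date >= trunc(sysdate)
        ORDER BY date_inserted DESC
        FETCH FIRST :n ROWS ONLY
    """
    cursor.execute(query, {"n": n})  # bind the row count instead of splicing it into the string
    return cursor.fetchall()

rows = get_data(cursor, 5)  # returns at most 5 rows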
I've got a weekly process which does a full replace operation on a few tables. The process is weekly since the data is large as a whole, but we also want daily/hourly delta updates, so the system stays more in sync with production.
When we update data, we create duplicate rows (updates of an existing row), which I want to get rid of. To achieve this, I've written a Python script which runs the following query on a table, inserting the results back into it:
QUERY = """#standardSQL
select {fields}
from (
select *
, max(record_insert_time) over (partition by id) as max_record_insert_time
from {client_name}_{environment}.{table} as a
)
where 1=1
and record_insert_time = max_record_insert_time"""
The {fields} variable is replaced with a list of all the table columns; I can't use * here because that would only work for one run (the next run would already have a field called max_record_insert_time, and that would cause an ambiguity issue).
Everything is working as expected, with one exception - some of the columns in the table are of RECORD datatype; despite not using aliases for them, and selecting their fully qualified name (e.g. record_name.child_name), when the output is written back into the table, the results are flattened. I've added the flattenResults: False config to my code, but this has not changed the outcome.
I would love to hear thoughts about how to resolve this issue using my existing plan, other methods of deduping, or other methods of handling delta updates altogether.
Perhaps you can use the following in the outer statement:
SELECT * EXCEPT (max_record_insert_time)
This should keep the exact record structure. (For more detailed documentation, see https://cloud.google.com/bigquery/docs/reference/standard-sql/query-syntax#select-except)
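Applied to the question's template, that might look like this (a sketch; the placeholders are the question's own, and the whole {fields} list becomes unnecessary):
QUERY = """#standardSQL
select * except (max_record_insert_time)
from (
  select *
    , max(record_insert_time) over (partition by id) as max_record_insert_time
  from {client_name}_{environment}.{table}
)
where record_insert_time = max_record_insert_time"""
Since EXCEPT removes only the helper column from the output, select * stays safe on every run and RECORD columns keep their structure.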
An alternative approach would be to include in {fields} only top-level columns, even if they are non-leaves, i.e. just record_name and not record_name.*. A quick sketch of that fields list follows.
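A minimal sketch of building such a {fields} list; the column names and format arguments here are hypothetical:
# Top-level columns only: the RECORD column is referenced as record_name,
# never as record_name.child_name, so its structure is preserved.
fields = ", ".join(["id", "record_insert_time", "record_name"])
sql = QUERY.format(fields=fields, client_name="client", environment="prod", table="events")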
The answer below is definitely not better than the straightforward SELECT * EXCEPT modifier, but I wanted to present an alternative version:
SELECT t.*
FROM (
  SELECT
    id,
    MAX(record_insert_time) AS max_record_insert_time,
    ARRAY_AGG(t) AS all_records_for_id
  FROM yourTable AS t
  GROUP BY id
), UNNEST(all_records_for_id) AS t
WHERE t.record_insert_time = max_record_insert_time
ORDER BY id
What the above query does: it first groups all records for each id into an array of the respective rows, along with the max value of insert_time. Then, for each id, it flattens all the (previously aggregated) rows and picks only those whose insert_time matches the max time. The result is as expected. No analytic function is involved, just simple aggregation, at the price of an extra UNNEST ...
Still, at least it's a different option :o)
On MySQL I would enter the following query, but running the same on Google BigQuery throws an error for the upper limit. How do I specify limits on a query? Say I have a query that returns 20 results and I want results between 5 and 10 only; how should I frame the query on Google BigQuery?
For example:
SELECT id,
COUNT(total) AS total
FROM ABC.data
GROUP BY id
ORDER BY total DESC
LIMIT 5,10;
If I only put "LIMIT 5" on the end of the query, I get the top 5 and if I put "LIMIT 10" I ge t the top 10, but what syntax do I use to get between 5 and 10.
Could someone please shed some light on this?
Any help is much appreciated.
Thanks and have a great day.
I would use window functions...
something like
select *
from (
  select id, total, row_number() over (order by total desc) as rnb
  from (
    select id, count(total) as total
    from ABC.data
    group by id
  )
)
where rnb >= 5 and rnb <= 10
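If it helps, here's a minimal sketch of running that windowed query from Python with the google-cloud-bigquery client (the client library choice and default credentials are assumptions on my part):
from google.cloud import bigquery

client = bigquery.Client()  # assumes default project and credentials
sql = """
select *
from (
  select id, total, row_number() over (order by total desc) as rnb
  from (
    select id, count(total) as total
    from ABC.data
    group by id
  )
)
where rnb >= 5 and rnb <= 10
"""
for row in client.query(sql).result():
    print(row["id"], row["total"])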
The windowing function answer is a good one, but I thought I'd give another option that involves how your result is fetched rather than how the query is run.
If you only need the first N rows, you can add a LIMIT N to your query. But if you don't need the first M rows, you can change how you fetch the results. If you're using the Java API, you can use the setStartIndex() method on either the TableData.list() or the Jobs.getQueryResults() call to only fetch rows starting from a particular index.
That question makes no sense for an ever-changing dataset. If you have a one-second delay between asking for the first 5 and the next 5, the data could have changed: its order is now different, and you will miss data or get duplicate results. So databases like Bigtable have a method for doing one query of the data and handing the result set back to you in groups. If that were the case, what you are looking for is called query cursors; I can't say this any better than their own example, so see the documentation on them.
But since you said the data does not change, fetch() will work just fine. fetch() has two options you will want to take note of: limit and offset. limit is the maximum number of results to return; if set to None, all available results will be retrieved. offset is how many results to skip.
Check out other options here: https://developers.google.com/appengine/docs/python/datastore/queryclass#Query_fetch
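A minimal sketch of those fetch() options, assuming the old App Engine db API and a hypothetical model named MyModel:
from google.appengine.ext import db

class MyModel(db.Model):  # hypothetical model
    total = db.IntegerProperty()

q = MyModel.all().order('-total')
rows = q.fetch(limit=5, offset=5)  # skip the first 5 results, return the next 5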
I have a general ledger table in my DB with the columns: member_id, is_credit and amount. I want to get the current balance of the member.
Ideally that can be done with two queries, where the first query has is_credit == True and the second is_credit == False, something close to:
credit_amount = session.query(func.sum(Funds.amount).label('Credit_Amount')).filter(Funds.member_id==member_id, Funds.is_credit==True).scalar()
debit_amount = session.query(func.sum(Funds.amount).label('Debit_Amount')).filter(Funds.member_id==member_id, Funds.is_credit==False).scalar()
balance = credit_amount - debit_amount
and then subtract the result. Is there a way to have the above run in one query to give the balance?
From the comments you state that hybrids are too advanced right now, so I will propose an easier but not as efficient solution (still, it's okay):
(session.query(Funds.is_credit, func.sum(Funds.amount).label('Debit_Amount')).
    filter(Funds.member_id==member_id).group_by(Funds.is_credit))
What will this do? You will receive a two-row result: one row has the credit, the other the debit, depending on the is_credit property of the row. The second column (Debit_Amount) holds the value. You then evaluate the two rows to get the result: only one query that fetches both values.
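Evaluating the two rows into a balance might look like this (a sketch using the question's model; a missing side defaults to 0):
from sqlalchemy import func

rows = (session.query(Funds.is_credit, func.sum(Funds.amount).label('Debit_Amount'))
        .filter(Funds.member_id == member_id)
        .group_by(Funds.is_credit)
        .all())
totals = {is_credit: amount or 0 for is_credit, amount in rows}
balance = totals.get(True, 0) - totals.get(False, 0)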
If you are unsure what group_by does, I recommend you read up on SQL before doing it in SQLAlchemy. SQLAlchemy offers very easy usage of SQL but it requires that you understand SQL as well. Thus, I recommend: First build a query in SQL and see that it does what you want - then translate it to SQLAlchemy and see that it does the same. Otherwise SQLAlchemy will often generate highly inefficient queries, because you asked for the wrong thing.