Django Model use of keyword IN

I am trying to figure out how to use the IN keyword in a Django model query.
I want to replace this:
db = database.connect()
c = db.cursor()
c.execute("SELECT MAX(Date) FROM `Requests` WHERE UserId = %%s AND VIN = %%s AND Success = 1 AND RptType in %s" % str(cls.SuperReportTypes), (userID, vin))
With this:
myrequests = Request.objects.filter(user=userID, vin = vin, report_type in cls.SuperReportTypes)
myrequests.aggregate(Max('Date'))
I get a:
SyntaxError: non-keyword arg after keyword arg (<console>, line 1)
When I remove the trailing "report_type in cls.SuperReportTypes" the query works properly.
I realize that I could filter the result set in Python after the query, but I was hoping to handle this in a way that lets MySQL do the work.

You are using the wrong syntax for in. Django uses the __in field lookup:
field__in=seq
https://docs.djangoproject.com/en/dev/ref/models/querysets/#in
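Applied to the query in the question, a sketch using the field names from the original code might look like this:

from django.db.models import Max

myrequests = Request.objects.filter(
    user=userID,
    vin=vin,
    report_type__in=cls.SuperReportTypes,  # __in becomes an SQL IN clause
)
latest = myrequests.aggregate(Max('Date'))  # runs MAX(Date) in the database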

Postgres: invalid input syntax for type date

I have created a database and I am trying to fetch data from it. I have a class Query, and inside the class there is a function that queries a table called forecasts. The function is as follows:
def forecast(self, provider: str, zone: str = 'Mainland'):
    self.date_start = date_start
    self.date_end = date_end
    self.df_forecasts = pd.DataFrame()
    fquery = """
    SELECT dp.name AS provider_name, lf.datetime_from AS date, fr.name AS run_name, lf.value AS value
    FROM load_forecasts lf
    INNER JOIN bidding_zones bz ON lf.zone_id = bz.zone_id
    INNER JOIN data_providers dp ON lf.provider_id = dp.provider_id
    INNER JOIN forecast_runs fr ON lf.run_id = fr.run_id
    WHERE bz.name = '{zone}'
    AND dp.name = '{provider}'
    AND date(lf.datetime_from) BETWEEN '{self.date_start}' AND '{self.date_end}'
    """
    df_forecasts = pd.read_sql_query(fquery, self.connection)
    return df_forecasts
In the script that I run, I call the Query class, giving it my inputs:
query = Query(date_start, date_end)
And then I call the function:
forecast_df = query.forecast(provider='Meteologica')
I run my script from the command line in the classic way:
python myscript.py '2022-11-10' '2022-11-18'
My script shows the error
sqlalchemy.exc.DataError: (psycopg2.errors.InvalidDatetimeFormat) invalid input syntax for type date: "{self.date_start}"
LINE 9: AND date(lf.datetime_from) BETWEEN '{self.date_start...
when I use this syntax, but when I manually type in the strings for date_start and date_end, it works.
I cannot find a way to solve the problem with sqlalchemy, so I opened a cursor with psycopg2.
# Returns the datetime, value, provider name and issue date of the forecasts in the load_forecasts table
# The date range is specified by the user when the class is called
def forecast(self, provider: str, zone: str = 'Mainland'):
    # Open a cursor to get the data
    cursor = self.connection.cursor()
    # Query to run
    query = """
    SELECT dp.name, lf.datetime_from, fr.name, lf.value, lf.issue_date
    FROM load_forecasts lf
    INNER JOIN bidding_zones bz ON lf.zone_id = bz.zone_id
    INNER JOIN data_providers dp ON lf.provider_id = dp.provider_id
    INNER JOIN forecast_runs fr ON lf.run_id = fr.run_id
    WHERE bz.name = %s
    AND dp.name = %s
    AND date(lf.datetime_from) BETWEEN %s AND %s
    """
    # Execute the query, fetch the data and close the cursor
    cursor.execute(query, (zone, provider, self.date_start, self.date_end))
    self.df_forecasts = cursor.fetchall()
    cursor.close()
    return self.df_forecasts
If anyone finds the answer with sqlalchemy, I would love to see it!
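For what it's worth, the same parameterized approach should also be possible through SQLAlchemy and pandas by using bound parameters instead of interpolating values into the string. A minimal sketch, assuming self.connection is a SQLAlchemy engine or connection (the column aliases are taken from the original query):

from sqlalchemy import text

fquery = text("""
    SELECT dp.name AS provider_name, lf.datetime_from AS date, fr.name AS run_name, lf.value AS value
    FROM load_forecasts lf
    INNER JOIN bidding_zones bz ON lf.zone_id = bz.zone_id
    INNER JOIN data_providers dp ON lf.provider_id = dp.provider_id
    INNER JOIN forecast_runs fr ON lf.run_id = fr.run_id
    WHERE bz.name = :zone
      AND dp.name = :provider
      AND date(lf.datetime_from) BETWEEN :date_start AND :date_end
""")

# pandas passes the params dict through to SQLAlchemy, which binds the values
df_forecasts = pd.read_sql_query(
    fquery,
    self.connection,
    params={"zone": zone, "provider": provider,
            "date_start": self.date_start, "date_end": self.date_end},
)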

Calling Argument in Python code

I have the following code, which requires a -p argument when called. However, how do I use the -p argument in the SQL query? I would also like to use the -p argument's text in the output file name.
#!/usr/bin/python
import argparse
import psycopg2
import csv

parser = argparse.ArgumentParser(description='insert the project ID as an argument')
parser.add_argument('-p', '--project_id', help='project_id to pull files from ERAPRO', required=True)
args = parser.parse_args()

conn = psycopg2.connect(database="XXX", user="XXX", password="XXX", host="XXX", port="5432")
cur = conn.cursor()
cur.execute("""SELECT project_analysis.project_accession,
analysis.analysis_accession, file.filename, file.file_md5, file.file_location
FROM project_analysis
LEFT JOIN analysis on project_analysis.analysis_accession = analysis.analysis_accession
LEFT JOIN analysis_file on analysis.analysis_accession = analysis_file.analysis_accession
LEFT JOIN file on analysis_file.file_id = file.file_id
WHERE project_accession = <INSERT -p ARGUMENT HERE> and analysis.hidden_in_eva = '0';""")
records = cur.fetchall()

with open('/nfs/production3/eva/user/gary/evapro_ftp/<INSERT -p ARGUMENT HERE>.csv', 'w') as f:
    writer = csv.writer(f, delimiter=',')
    for row in records:
        writer.writerow(row)

conn.close()
All help appreciated.
Thanks
First, assign your argument to a variable using the dest argument of add_argument(). Let's say we assign the input to the project_id variable.
This way we can reference it in the code.
parser.add_argument('-p', '--project_id',
                    help='project_id to pull files from ERAPRO',
                    required=True,
                    dest='project_id')  # notice the dest argument
cur.execute("""SELECT project_analysis.project_accession,
analysis.analysis_accession, file.filename, file.file_md5, file.file_location
FROM project_analysis
LEFT JOIN analysis on project_analysis.analysis_accession = analysis.analysis_accession
LEFT JOIN analysis_file on analysis.analysis_accession = analysis_file.analysis_accession
LEFT JOIN file on analysis_file.file_id = file.file_id
WHERE project_accession = %s and analysis.hidden_in_eva = '0';""", (args.project_id))
Notice the use of execute(' ... %s ...', (args.project_id,)): this passes the value referenced by project_id to the query as a bound parameter (the trailing comma makes it a one-element tuple).
After calling args = parser.parse_args() you can obtain the value of the arguments like this:
pid = args.project_id
Then you can use that value of pid in your code by using normal string substitution. However, it's better to use psycopg2's inbuilt method for passing parameters to SQL queries to prevent SQL injection.
Normal string substitution:
'hello world {}'.format(var_name)
psycopg2:
cur.execute('SELECT * from %s', (var_name,))
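Putting the two together for the code in the question, a rough sketch might look like this (only the output directory comes from the original post; the filename pattern is just one way to reuse the argument):

pid = args.project_id

# Bound parameter in the query: psycopg2 substitutes the value safely
cur.execute("""SELECT project_analysis.project_accession,
analysis.analysis_accession, file.filename, file.file_md5, file.file_location
FROM project_analysis
LEFT JOIN analysis on project_analysis.analysis_accession = analysis.analysis_accession
LEFT JOIN analysis_file on analysis.analysis_accession = analysis_file.analysis_accession
LEFT JOIN file on analysis_file.file_id = file.file_id
WHERE project_accession = %s and analysis.hidden_in_eva = '0';""", (pid,))
records = cur.fetchall()

# Plain string formatting for the output file name
with open('/nfs/production3/eva/user/gary/evapro_ftp/{}.csv'.format(pid), 'w') as f:
    writer = csv.writer(f, delimiter=',')
    for row in records:
        writer.writerow(row)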

MySQL Python connector does not commit

I have a function that is supposed to receive JSON and store the values in an RDS MySQL db.
def saveMetric(metrics):
    cnx = RDS_Connect()
    cursor = cnx.cursor()
    jsonMetrics = json.loads(metrics)
    #print type(jsonMetrics['Metrics'])

    # Every 2000 registries, the script will start overriding values
    persistance = 2000

    save_metrics_query = (
        "REPLACE INTO metrics "
        "SET metric_seq = (SELECT COALESCE(MAX(row_id), 0) %% %(persistance)d + 1 FROM metrics AS m), "
        "instance_id = \'%(instance_id)s\', "
        "service = \'%(service)s\' , "
        "metric_name = \'%(metric_name)s\', "
        "metric_value = %(metric_value)f"
    )

    for metric in jsonMetrics['Metrics']:
        formatData = {}
        formatData['persistance'] = persistance
        formatData['instance_id'] = arguments.dimensionValue
        formatData['service'] = jsonMetrics['Service']
        formatData['metric_name'] = metric
        formatData['metric_value'] = jsonMetrics['Metrics'][metric]

        print save_metrics_query % formatData

        try:
            cursor.execute(save_metrics_query, formatData, multi=True)
            logger('info', 'Metrics were saved successfully!')
            cnx.commit()
        except mysql.connector.Error as err:
            logger('error', "Something went wrong: %s" % err)

    cursor.close()
    cnx.close()
RDS_Connect() was already tested and it works just fine. The problem is that after running the function, the data is not saved to the DB. I think there is a problem with the commit, but I don't see any errors or warning messages. If I run the query manually, the data gets stored.
Here is the query that runs after parsing the json:
REPLACE INTO metrics SET metric_seq = (SELECT COALESCE(MAX(row_id), 0) % 2000 + 1 FROM metrics AS m), instance_id = 'i-03932937bd67622c4', service = 'AWS/EC2' , metric_name = 'CPUUtilization', metric_value = 0.670000
If it helps, this is the json that the function receives:
{
    "Metrics": {
        "CPUUtilization": 1.33,
        "NetworkIn": 46428.0,
        "NetworkOut": 38772.0
    },
    "Id": "i-03932937bd67622c4",
    "Service": "AWS/EC2"
}
I'd appreciate some help.
Regards!
UPDATE:
I found that the problem was related to the formatting codes in the query template.
I rewrote the function like this:
def saveMetric(metrics):
    cnx = RDS_Connect()
    jsonMetrics = json.loads(metrics)
    print json.dumps(jsonMetrics, indent=4)

    persistance = 2000
    row_id_query_template = "SELECT COALESCE(MAX(row_id), 0) % {} + 1 FROM metrics AS m"
    row_id_query = row_id_query_template.format(persistance)

    save_metrics_query = (
        "REPLACE INTO metrics "
        "SET metric_seq = (" + row_id_query + "),"
        "instance_id = %(instance_id)s,"
        "service = %(service)s,"
        "metric_name = %(metric_name)s,"
        "metric_value = %(metric_value)s"
    )

    for metric in jsonMetrics['Metrics']:
        formatData = {}
        formatData['instance_id'] = arguments.dimensionValue
        formatData['service'] = jsonMetrics['Service']
        formatData['metric_name'] = metric
        formatData['metric_value'] = jsonMetrics['Metrics'][metric]

        if arguments.verbose == True:
            print "Data: ", formatData
            print "Query Template: ", save_metrics_query.format(**formatData)

        try:
            cursor = cnx.cursor()
            cursor.execute(save_metrics_query, formatData)
            logger('info', 'Metrics were saved successfully!')
            cnx.commit()
            cursor.close()
        except mysql.connector.Error as err:
            logger('error', "Something went wrong: %s" % err)

    cnx.close()
As you can see, I format the SELECT outside. I believe the whole problem was due to this line:
"metric_value = %(metric_value)f"
I changed to:
"metric_value = %(metric_value)s"
and now it works. I think the format specifier was wrong, causing a syntax error (though I don't know why the exception was never thrown).
Thanks to everyone who took time to help me!
I haven't actually used MySQL, but the docs seem to indicate that calling cursor.execute with multi=True just returns an iterator. If that is true, then it wouldn't actually insert anything - you'd need to call .next() on the iterator (or just iterate over it) to actually insert the record.
It also goes on to advise against using parameters with multi=True:
If multi is set to True, execute() is able to execute multiple statements specified in the operation string. It returns an iterator that enables processing the result of each statement. However, using parameters does not work well in this case, and it is usually a good idea to execute each statement on its own.
tl;dr: remove that parameter, as the default is False.
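For illustration, a minimal sketch of both options described above, reusing the names from the question (several_statements is a hypothetical string containing more than one statement):

# Option 1 (recommended): drop multi=True and run each statement on its own
cursor.execute(save_metrics_query, formatData)
cnx.commit()

# Option 2: if multi=True is kept, nothing runs until the returned iterator
# is consumed, so it has to be iterated over
for result in cursor.execute(several_statements, multi=True):
    pass  # each result exposes the outcome of one statement
cnx.commit()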
This was the solution. I changed:
"metric_value = %(metric_value)f"
To:
"metric_value = %(metric_value)s"
While troubleshooting I found a syntax error in the SQL. Somehow the exception didn't show.

Auto format arguments to string

I am writing a function that takes some arguments and builds a SQL query string. Currently I assign each of the keyword arguments to the string by name, and I think there has to be an easier way to automatically format the expected keyword arguments into the string. Is it possible to automatically format (or assign) a keyword argument to the query string when the argument name matches?
Here is my current function:
def create_query(
    who_id,
    subject,
    description,
    due_date,
    what_id=None,
    owner_id=None,
    is_closed=False,
    is_priority=False,
    is_reminder=True,
):
    query_str = """
    'WhoId':'{who_id}',
    'Subject':'{subject}',
    'Description':'{description}',
    'ActivityDate':'{due_date}',
    'IsClosed':'{is_closed}',
    'IsPriority':'{is_priority}',
    'IsReminderSet':'{is_reminder}',
    """.format(
        who_id=who_id,
        subject=subject,          # Mapping each of these to the
        description=description,  # identically named variable
        due_date=due_date,        # seems dumb
        is_closed=is_closed,
        is_priority=is_priority,
        is_reminder=is_reminder,
    )
And I would like something that is more like this:
def create_query(**kwargs):
    query_string = """
    'WhoId':'{who_id}',
    'Subject':'{subject}',
    'Description':'{description}',
    ...
    """.format(
        for key in **kwargs:
            if key == << one of the {keyword} variables in query_string >>:
                key = << the matching variable in query_string >>
def create_query(**kwargs):
    query_string = """
    'WhoId':'{who_id}',
    'Subject':'{subject}',
    'Description':'{description}',
    ...
    """.format(**kwargs)
    print(query_string)
It can then be used like this:
create_query(who_id=123, subject="Boss", description="he's bossy",
will_be_ignored="really?")
Prints:
'WhoId':'123',
'Subject':'Boss',
'Description':'he's bossy',
...
Note the additional parameter will_be_ignored that I entered. It is simply ignored; you don't have to filter it out yourself.
But better not to build the query string yourself. Let the database connector handle that. For example, my example input says "he's bossy"; that value would break the query because it uses ' as the delimiter. Your database connector should allow you to give it queries with placeholders and values, and then properly replace the placeholders with the values.
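As a generic illustration of that advice (sqlite3 is used here only because it ships with Python; the original post does not say which backend the query targets, so the table and connection are hypothetical):

import sqlite3

conn = sqlite3.connect(':memory:')  # hypothetical connection for the illustration
conn.execute('CREATE TABLE tasks (WhoId, Subject, Description)')

# Placeholders (?) are filled in by the connector, so a quote inside a value
# such as "he's bossy" cannot break the statement
conn.execute(
    'INSERT INTO tasks (WhoId, Subject, Description) VALUES (?, ?, ?)',
    (123, 'Boss', "he's bossy"),
)
conn.commit()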
Alternative:
def create_query(**kwargs):
    parameters = (('WhoId', 'who_id'),
                  ('Subject', 'subject'),
                  ('Description', 'description'))
    query_string = ','.join(r"'{}':'{{{}}}'".format(name, kwargs[id])
                            for name, id in parameters
                            if id in kwargs)
    print(query_string)
create_query(who_id=123, description="he's bossy",
will_be_ignored="really?")
Prints:
'WhoId':'{123}','Description':'{he's bossy}'

Unicode sql query in cx_Oracle

I have the following:
ora_wet = oracle_connection()
cursor = ora_wet.cursor()
sqlQuery = u"SELECT * FROM web_cities WHERE cty_name = 'София'"
cursor.execute(sqlQuery)
sqlResult = cursor.fetchone()
When I do this I get the following error:
TypeError: expecting None or a string
on line 18, which is the cursor.execute(sqlQuery) call.
If I make the query non-unicode (without the u prefix) it goes through but returns nothing.
Edit, in reply to the first comment:
NLS_LANGUAGE is BULGARIAN,
NLS_CHARACTERSET is CL8MSWIN1251
language is Python...
yes there is a record with cty_name = 'София'
connection is just:
def oracle_connection():
return cx_Oracle.connect('user/pass@server')
ora_wet = oracle_connection()
