I run a SELECT query in MySQL and would like to fetch the records using Python.
client_start holds the result of the SELECT query:
client_start = con.execute(
    "SELECT DISTINCT `clients_agreements`.`date_start`, `buildings`.`id`, "
    "`buildings`.`street`, `buildings`.`street_nr`, `clients`.`building_id`, "
    "`clients_agreements`.`user_id` "
    "FROM `clients_agreements` "
    "LEFT JOIN `buildings` ON `clients_agreements`.`user_id` = `buildings`.`id` "
    "LEFT JOIN `clients` ON `clients`.`building_id` = `buildings`.`id` "
    "WHERE `date_start` = (CURRENT_DATE)")  # SELECT client query
for row in client_start:
    streets = row[2]
    house_nr = row[3]
    message = str(streets) + " / " + str(house_nr)
    msg = 'Subject: {0}\n\n{1}'.format(subject, message).encode('utf-8').strip()
When I print inside the loop I get two results (which is good), but when I print after the loop I get only one. How can I print all the results after the loop? I can't just use arrays, because I want to send the results in an email, and I want to send all the records.
As @roganjosh said, each time through the loop you're throwing away the previous contents of message.
You can address this in two ways:
Whatever you're doing with message -- printing, sending an email, writing to a file, etc. -- do it inside the loop, while you have the current value.
Or, if you need to collect all the values and then handle them all at once, save them in a list:
messages = []
for row in client_start:
    streets = row[2]
    house_nr = row[3]
    message = str(streets) + " / " + str(house_nr)
    messages.append(message)
# now handle the list of messages
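For example, once the loop has finished, the collected messages can be joined into a single email body. A minimal sketch with made-up addresses (the subject string is a stand-in):

```python
messages = ["Main St / 12", "Oak Ave / 7"]  # collected in the loop above

# Join all rows into one body, one address per line
body = "\n".join(messages)
msg = "Subject: {0}\n\n{1}".format("Today's agreements", body)
print(msg)
```

Because the joined body is a single string, it can be passed to the email-sending call once, after the loop.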
Beginner here.
I have the following circumstances.
A text file with each line containing a name.
A cassandra 3.5 database
A python script
The intention is to have the script read from the file one line (one name) at a time, and query Cassandra with that name.
FYI, everything works fine except for when I try to pass the value of the list to the query.
I currently have something like:
# ... driver import, datetime imports done above
# ...
with open(fname) as f:
    content = f.readlines()
# Loop for each line from the number of lines in the name list file
# num_of_lines is already set
for x in range(num_of_lines):
    tagname = str(content[x])
    rows = session.execute("""SELECT * FROM tablename where name = %s and date = %s order by time desc limit 1""", (tagname, startDay))
    for row in rows:
        print row.name + ", " + str(row.date)
Everything works fine if I remove the tagname list component and edit the query itself with a name value.
What am I doing wrong here?
Simply building on the answer from @Vinny above: format just substitutes the literal value, so you need to put quotes around it yourself.
for x in content:
    rows = session.execute("SELECT * FROM tablename where name ='{}' and date ='{}' order by time desc limit 1".format(x, startDay))
    for row in rows:
        print row.name + ", " + str(row.date)
You can simply iterate over content:
for x in content:
    rows = session.execute("SELECT * FROM tablename where name = {} and date = {} order by time desc limit 1".format(x, startDay))
    for row in rows:
        print row.name + ", " + str(row.date)
    ....
Also, you don't need triple quotes for the string; single quotes are good enough (triple quotes are for multi-line strings and docstrings in Python).
Note that this might end in a different error, but at least you will be iterating over the lines themselves instead of iterating over an index and reading lines.
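One likely culprit in the original question (an assumption, since the exact error isn't shown): readlines() keeps the trailing newline on every line, so the name bound into the query never matches the stored value. A minimal sketch of stripping it before use:

```python
lines = ["alice\n", "bob\n"]  # what readlines() typically returns

# Strip the trailing newline before binding the value into a query
names = [line.strip() for line in lines]
print(names)  # -> ['alice', 'bob']
```

Each cleaned name can then be passed as a bound parameter, which also avoids the quoting problem entirely.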
I have a text file that contains many different entries. What I'd like to do is take the first column, use each unique value as a key, and then store the second column as values. I actually have this working, sort of, but I'm looking for a better way to do this. Here is my example file:
account_check:"login/auth/broken"
adobe_air_installed:kb_base+"/"+app_name+"/Path"
adobe_air_installed:kb_base+"/"+app_name+"/Version"
adobe_audition_installed:'SMB/Adobe_Audition/'+version+'/Path'
adobe_audition_installed:'SMB/Adobe_Audition/'+version+'/ExePath'
Here is the code I'm using to parse my text file:
val_dict = {}
for row in creader:
    try:
        value = val_dict[row[0]]
        value += row[1] + ", "
    except KeyError:
        value = row[1] + ", "
    val_dict[row[0]] = value
for row in val_dict.items():
    values = row[1][:-1], row[0]
    cursor.execute("UPDATE 'plugins' SET 'sets_kb_item'= ? WHERE filename= ?", values)
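The try/except accumulation above can be written more compactly with collections.defaultdict, collecting values in a list and joining once at the end instead of managing trailing separators. A sketch on made-up rows:

```python
from collections import defaultdict

rows = [
    ("adobe_air_installed", 'kb_base+"/"+app_name+"/Path"'),
    ("adobe_air_installed", 'kb_base+"/"+app_name+"/Version"'),
]

# Every missing key starts as an empty list
val_dict = defaultdict(list)
for key, val in rows:
    val_dict[key].append(val)

# Join once at the end; no trailing comma to trim off
joined = {k: ", ".join(v) for k, v in val_dict.items()}
print(joined["adobe_air_installed"])
```

This keeps the per-key accumulation logic in one place and removes the need for the `[:-1]` slice later.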
And here is the code I use to query + format the data currently:
def kb_item(query):
    db = get_db()
    cur = db.execute("select * from plugins where sets_kb_item like ?", (query,))
    plugins = cur.fetchall()
    for item in plugins:
        for i in item['sets_kb_item'].split(','):
            print i.strip()
Here is the output:
kb_base+"/Installed"
kb_base+"/Path"
kb_base+"/Version"
It took me many tries, but I finally got the output the way I wanted it; however, I'm looking for critique. Is there a better way to do this? Could my entire for item in plugins ... print i.strip() block be done in one line and saved as a variable? I am very new to working with databases, and my Python skills could also use refreshing.
NOTE I'm using csvreader in this code because I originally had a .csv file - however I found it was just as easy to use the .txt file I was provided.
I am trying to use Python to generate a script that builds an unload command in Redshift. I am not an expert Python programmer. I need to generate all the columns for the unload list, and if a column has a specific name, I need to replace it with a function. The challenge I am facing is that a "," gets appended to the last item in the dictionary as well. Is there a way I can avoid the last comma? Any help would be appreciated.
import psycopg2
from psycopg2.extras import RealDictCursor

try:
    conn = psycopg2.connect("dbname='test' port='5439' user='scott' host='something.redshift.amazonaws.com' password='tiger'")
except:
    print "Unable to connect to the database"
conn.cursor_factory = RealDictCursor
cur = conn.cursor()
conn.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)
try:
    cur.execute("SELECT column from pg_table_def where schema_name='myschema' and table_name ='tab1' ")
except:
    print "Unable to execute select statement from the database!"
result = cur.fetchall()
print "unload mychema.tab1 (select "
for row in result:
    for key, value in row.items():
        print "%s," % (value)
print ") AWS Credentials here on..."
conn.close()
Use the join function on the list of values in each row:
print ",".join(row.values())
Briefly, the join function is called on a string which we can think of as the "glue", and takes a list of "pieces" as its argument. The result is a string of "pieces" "held together" by the "glue". Example:
>>> glue = "+"
>>> pieces = ["a", "b", "c"]
>>> glue.join(pieces)
'a+b+c'
(Since row.values() returns a list, you don't actually need a comprehension, so it's even simpler than I wrote it at first.)
In fact, this worked better:
columns = []
for row in result:
    if (row['column_name'] == 'mycol1') or (row['column_name'] == 'mycol2'):
        columns.append("func_sha1(" + row['column_name'] + "||" + salt + ")")
    else:
        columns.append(row['column_name'])
print selstr + ",".join(columns) + " ) TO s3://"
Thanks for your help, Jon
I'm still trying to get the hang of Python, so please bear with me. I have this bit of code that I'm using from a book. The book does not properly show the whitespace in the code, so the spacing is my best guess. This code is supposed to break the results of a MySQL query into a more readable format.
if form is True:
    columns_query = """DESCRIBE %s""" % (table)
    print columns_query
    columns_command = cursor.execute(columns_query)
    headers = cursor.fetchall()
    column_list = []
    for record in headers:
        column_list.append(record[0])
    output = ""
    for record in results:
        output = output + "===================================\n\n"
        for field_no in xrange(0, len(column_list)):
            output = output + column_list[field_no] + ": " + str(record[field_no]) + "\n"
        output = output + "\n"
When I try to run it, I get the following:
Traceback (most recent call last):
  File "odata_search.py", line 46, in <module>
    output = output + column_list[field_no] + ": " + str(record[field_no]) + "\n"
IndexError: tuple index out of range
It has something to do with the str(record[field_no]) portion of the code, but that's what it looks like in the book, so I'm not sure what else to try.
Clearly len(record) != len(column_list) (specifically, column_list is longer than record). Is there a reason you expect them to be the same length?
One "fix" would be something like:
for col, rec in zip(column_list, record):
    output += col + ": " + str(rec) + "\n"
instead of:
for field_no in xrange(0, len(column_list)):
    output = output + column_list[field_no] + ": " + str(record[field_no]) + "\n"
This will truncate the output at the shorter of column_list and record.
I would recommend using zip instead of range(0, len(...)) in any event; it's much more idiomatic.
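To illustrate the truncation behaviour mentioned above, with made-up column names and a deliberately short record:

```python
column_list = ["id", "name", "date"]
record = (1, "alice")  # shorter than column_list

# zip stops at the shorter sequence, so the extra column is simply ignored
pairs = list(zip(column_list, record))
print(pairs)  # -> [('id', 1), ('name', 'alice')]
```

With range(len(column_list)), the same mismatch instead raises IndexError when the loop reaches record[2], which is exactly the traceback in the question.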
The problem, it turns out, was both the whitespace and, more importantly, the MySQL query itself. I was pulling a list of rows from a single column instead of pulling all columns of each row, which is what the loops were written to concatenate. The number of records returned by the wrong query was not equal to the number of items in the list of column names. The code was also intended to be a loop within a loop, so the spacing I have at the top is wrong; it should look like it does below. I added a couple of lines before it to show the query I had to modify:
Old statement looked like this:

statement = """select %s from %s where %s like '%s' limit 10""" % (column, table, column, term)

Should look like this:

statement = """select * from %s where %s like '%s' limit 10""" % (table, column, term)
command = cursor.execute(statement)
results = cursor.fetchall()
column_list = []
for record in results:
    column_list.append(record[0])
Loop within a loop:
if form is True:
    columns_query = """DESCRIBE %s""" % (table)
    columns_command = cursor.execute(columns_query)
    headers = cursor.fetchall()
    column_list = []
    for record in headers:
        column_list.append(record[0])
    output = ""
    for record in results:
        output = output + "===================================\n\n"
        for field_no in xrange(0, len(column_list)):
            output = output + column_list[field_no] + ": " + str(record[field_no]) + "\n"
        output = output + "\n"
This is effectively how I'm using _mssql.
Everything works fine, even after i use fetch_array().
The problem is, when I iterate through fetch_array(), it takes over ten minutes for 6K rows to be written to the body of an email. This just isn't acceptable.
Is there a better way for me to do this?
EDIT: I haven't figured out the best way to paste Python code here, but below is the (badly formatted) code I'm using.
code for writing the email body:
results = mssql.fetch_array()
mssql.close()
resultTuple = results[0]
if resultTuple[1] == 0:
    emailBody = 'Zero Rows returned'
if test or resultTuple[1] != 0:
    i = 0
    columnTuples = resultTuple[0]
    listOfRows = resultTuple[2]
    columns = []
    emailBody = ''
    if test: print 'looping results'
    while i < len(columnTuples):
        columns.append(columnTuples[i][0])
        emailBody = emailBody + columns[i].center(20) + '\t'
        i = i + 1
    emailBody = emailBody + '\n'
    for rowTuple in listOfRows:  # row loop
        for x in range(0, len(rowTuple)):  # field loop
            emailBody = emailBody + str(rowTuple[x]).center(20) + '\t'
            if test: print x
        emailBody = emailBody + '\n'
Could you show us some code for iterating through the results and creating the email? If I had to make a wild guess, I would say you are probably violating this idiom: "Build strings as a list and use ''.join at the end." If you keep appending to a string as you go, you create a new (and progressively larger) string on each iteration, which runs in quadratic time.
(Turned this comment into an answer at the request of the OP. Will edit to more complete answer as needed.)
Edit:
Ok, my guess was correct. You'll want something like this:
email_lines = []
for row in result:
    line = do_something_with(row)
    email_lines.append(line)
email_body = '\n'.join(email_lines)
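As a concrete sketch of the same idiom, with do_something_with stood in for by a hypothetical format_row helper that mimics the .center(20) padding from the question:

```python
def format_row(row):
    # Hypothetical stand-in: pad each field to a fixed width, tab-separated
    return "\t".join(str(field).center(20) for field in row)

result = [(1, "alice"), (2, "bob")]  # made-up rows

# Accumulate lines in a list, then join exactly once at the end
email_lines = [format_row(row) for row in result]
email_body = "\n".join(email_lines)
print(email_body)
```

Each row's string is built once and the final join is a single linear pass, instead of rebuilding an ever-larger emailBody on every field.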