How do I convert a list into a string - Python

I have tried the .replace() and .strip() methods but have been unsuccessful. I am trying to print the list as a single string, with the items separated by commas.
Does anyone know a way to print it without the [] or the single quotes ''?
def get_format(header1):
    format_lookup = "SELECT ID, FormatName, HeaderRow, ColumnStatus, ColumnMobileID, ColumnVendorID, ColumnTechID, " \
                    "ColumnCallType, ColumnCallDate, ColumnCallTime, ColumnCallTo, ColumnQty, ColumnQtyLabel " \
                    "from dynamic_format WHERE HeaderRow=%s"
    header1 = (str(header1),)
    cursor = connection.cursor()
    cursor.execute(format_lookup, header1)
    record = cursor.fetchone()
    return record

I suppose I'll post my comment as an answer:
In [1]: header1 = ['ESN', 'ORIGINAL_QUANTITY', 'INVOICE_DATE']
In [2]: ", ".join(header1)
Out[2]: 'ESN, ORIGINAL_QUANTITY, INVOICE_DATE'
In [3]: print(", ".join(header1))
ESN, ORIGINAL_QUANTITY, INVOICE_DATE
The reason you're getting those errors is that header1 is a list object and .replace() is a string method.
@sbabtizied's answer is what you'd use if header1 were a string:
# a single string, what sbabti assumed you had
"['ESN', 'ORIGINAL_QUANTITY', 'INVOICE_DATE']"
# a list of strings, what you actually have
['ESN', 'ORIGINAL_QUANTITY', 'INVOICE_DATE']
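To make the distinction concrete, a minimal sketch using the list from the example above:
header1 = ['ESN', 'ORIGINAL_QUANTITY', 'INVOICE_DATE']
# header1.replace("'", "")   # AttributeError: 'list' object has no attribute 'replace'
print(", ".join(header1))    # ESN, ORIGINAL_QUANTITY, INVOICE_DATE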

Related

Python string parsing issues when saving SQL commands to file

I have dicts of data that I am looping through to form INSERT commands for PostgreSQL, where the top-level keys of each dict are the column names and the top-level values are the column values:
data = {
    'id': 45,
    'col1': "foo's",
    'col2': {'dict_key': 5}
}

columns = ', '.join(data.keys())
# replace single quotes with double quotes to form a JSON value for the psql column
data['col2'] = str(data['col2']).replace("'", '"')

with open("file.sql", "w") as f:
    command = "INSERT INTO table1({}) VALUES {};"
    f.write(command.format(columns, tuple(data.values())))
The problem is that the output of this is not formatted correctly for sql to execute. This is the output of the above:
INSERT INTO table1(id, col1, col2) VALUES (45, "foo's", '{"dict_key":5}');
The JSON field (col2) is formatted correctly, with single quotes around the value, but col1 keeps the double quotes when its string contains a single quote. This is a problem because PostgreSQL requires single quotes to identify TEXT input.
Is there a better way to parse data into psql insert commands?
Did you try using json.dumps() and repr()?
columns = ', '.join(data.keys())
data['col1'] = repr(data['col1'])
data['col2'] = json.dumps(data['col2'])
...
This appears to be a limitation (or rather an implementation detail) in Python, in how __repr__() is defined for strings (str).
Try this sample code out:
value = 'wont fix'; assert f'{value!r}' == "'wont fix'"
value = 'won\'t fix'; assert f'{value!r}' == '"won\'t fix"'
As can be seen, single quotes are preferred in the repr for strings, unless the string itself contains a single quote - in that case, double quotes are used to wrap the repr for the string.
A "quick and dirty" solution is to implement a custom string subclass, SQStr, which effectively overrides the default repr to always wrap a string with single quotes:
class SQStr(str):
    def __repr__(self):
        value = self.replace("'", r"\'")
        return f"'{value}'"
If you want to also support double-escaped single quotes like r"\\\'", then something like this:
class SQStr(str):
    def __repr__(self, _escaped_sq=r"\'", _tmp_symbol="|+*+|",
                 _default_repr=str.__repr__):
        if "'" in self:
            if _escaped_sq in self:
                value = (self
                         .replace(_escaped_sq, _tmp_symbol)
                         .replace("'", _escaped_sq)
                         .replace(_tmp_symbol, r"\\\'"))
            else:
                value = self.replace("'", _escaped_sq)
            return f"'{value}'"
        # else, the string doesn't contain single quotes, so we
        # can use the default str.__repr__()
        return _default_repr(self)
Now it appears to work as expected:
value = 'wont fix'; assert f'{SQStr(value)!r}' == "'wont fix'"
value = 'won\'t fix'; assert f'{SQStr(value)!r}' == r"'won\'t fix'"
# optional
value = r"won't fix, won\'t!"; assert f'{SQStr(value)!r}' == r"'won\'t fix, won\\\'t!'"

SQL query format

I have a list of strings that I need to pass to an SQL query.
listofinput = []
for i in input:
    listofinput.append(i)
if(len(listofinput)>1):
    listofinput = format(tuple(listofinput))
sql_query = f"""SELECT * FROM countries
    where
    name in {listofinput};
    """
This works when I have multiple values, but it fails when there is just one:
listofinput stays ['USA'] for a single value,
but becomes ('USA', 'Germany') for multiple values.
I also need to do this for thousands of inputs; what is the most optimized way to achieve this? name is an indexed column in my countries table.
You can just convert the list to a tuple and then, if the second-to-last character is a comma, remove it.
listofinput = format(tuple(input))
if listofinput[-2] == ",":
    listofinput = f"{listofinput[:-2]})"
sql_query = f"""SELECT * FROM countries
    where name in {listofinput};"""
Change if(len(listofinput)>1): to if(len(listofinput)>=1):
This might work.
Remove the condition if(len(listofinput)>1):.
If you don't convert to a tuple, your query ends up like this:
... where name in ['USA']
or
... where name in []
and in [...] is not acceptable in SQL; only in (...) is acceptable.
You can also remove format():
listofinput = tuple(listofinput)
Final Code:
listofinput = []
for i in input:
    listofinput.append(i)
listofinput = tuple(listofinput)
sql_query = f"""SELECT * FROM countries
    WHERE
    name IN {listofinput};
    """
Yes, a tuple with one element will require a trailing ','.
To circumvent your problem, maybe you can build the string yourself by changing your code to the below:
listofinput = []
for i in input:
    listofinput.append(i)
if(len(listofinput)>1):
    listofinput = format(tuple(listofinput))
else:
    listofinput = "('" + listofinput[0] + "')"  # quote the single value so the SQL literal stays valid
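To make the trailing-comma issue concrete, here is roughly what each form produces (using the example values from the question):
format(tuple(['USA', 'Germany']))   # "('USA', 'Germany')" -- valid inside an IN (...) clause
format(tuple(['USA']))              # "('USA',)" -- the trailing comma breaks the SQL
"('" + 'USA' + "')"                 # "('USA')" -- the string built for a single value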

Convert a single-column pyodbc row into a plain integer value

I am passing data to an SQL statement in Python; the relevant code is like this:
foo.execute("select test_dat_id from tbl_dat_test where sample_id =?",(n_value,))
myfoo = foo.fetchall()
zoo = "select result_value from tbl_dat_analyte where test_dat_id ="
for new in myfoo:
new1=str(new)
new2=float(new1)
var = zoo + new2
print(var)
foo.execute(var)
To make a long story short, myfoo is a list of SQL rows. I converted each entry to a string, which gives me a number wrapped in brackets with a space, like this: (964005, ).
I simply want it converted to an integer so it can be passed to the SQL statement. I believe there are easier ways to do this, but I really can't get it. Thanks.
A good way would be to use regular expressions:
import re
foo = "(100) "
foo = re.sub("[^0-9]", "", foo)
foo = int(foo)
This will remove all non-numeric characters, and convert the string to int.
If I've understood your need correctly, here's what you need to do:
string_with_number_and_whitespaces = '(100) ' # Also contains brackets
string_with_number_only = string_with_number_and_whitespaces.replace(' ', '').replace('(', '').replace(')', '')
number = int(string_with_number_only)
print(number) # Output 100
.fetchall() returns a list of pyodbc.Row objects. If you want the first (and only) element in the row, just extract it using index [0]:
>>> myfoo = [(1,),(2,)]
>>> for new in myfoo:
... myint = new[0]
... print(myint * 10)
...
10
20
>>>
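Applied to the loop in the question, that might look like the sketch below; it keeps the ? parameter style already used for the first query, and the cursor can be reused because fetchall() has already materialised the rows:
foo.execute("select test_dat_id from tbl_dat_test where sample_id = ?", (n_value,))
rows = foo.fetchall()
for row in rows:
    test_dat_id = row[0]  # plain integer, no string cleanup needed
    foo.execute("select result_value from tbl_dat_analyte where test_dat_id = ?", (test_dat_id,))
    print(foo.fetchall())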
Thank you very much for all your valuable answers; I am going to try them one by one. Actually, before I got your feedback, I got it working like this (I know it's a primitive solution, but it worked):
foo.execute("select test_dat_id from tbl_dat_test where sample_id =?",(n_value,))
myfoo = foo.fetchall()
myfoo2 = dict(enumerate(item[0:100] for item in myfoo))
v_value = list(myfoo2.values())[0:100]
zoo = "select result_value from tbl_dat_analyte where test_dat_id = "
for new in v_value:
new2=str(new)
new2=new2.replace(" ", "")
new2=new2.replace("(", "")
new2=new2.replace(",)", "")
var = zoo + new2
print(var)
foo100 = cnx.cursor()
foo100.execute(var)
myzoo = foo100.fetchall()
print('zoo is:')
print(myzoo)
c5a = my_sheet.cell(row= 21 , column = 3)
c5a.value = myzoo

When I query MySQL I get a result with \u3000 in it

I'm connecting to a MySQL database, using utf8mb4, with the following code:
def name():
    with conn.cursor() as cursor:
        sql = "select name from fake_user where id = 147951"
        cursor.execute(sql)
        interentname = cursor.fetchall()
        for i in interentname:
            i = str(i)
            new_name = i.strip("',)")
            new_name = new_name.strip("('")
            # return new_name.encode('utf8').decode('unicode_escape')
            return re.sub("[\u3000]", "", new_name)
print(name())
This keeps printing ♚\u3000\u3000 恏😊, and I want to get rid of the \u3000 part.
The above code doesn't remove the \u3000, though. Why is that?
interentname is a tuple; new_name is a str.
How do I decode this properly?
You are turning each row, a tuple, into a string representation:
for i in interentname:
i = str(i)
Don't do that. A tuple is a sequence of values, and for your specific query, there will be only a single value in it, the value for the name column. Index into the tuple to get the single value:
for row in interentname:
name = row[0]
You can also use tuple assignment:
for row in interentname:
name, = row
Note the comma after name there, it tells Python that row must be a sequence with one value and that that one value should be assigned to name. You can even do this in the for loop target:
for name, in interentname:
print(name)
interentname is a sequence of tuples, not just a single tuple, so each iteration, you get a value like:
>>> row = ('♚\u3000\u3000 恏😊',)
The \u3000 codepoints in there are U+3000 IDEOGRAPHIC SPACE characters, which Python will always echo as \uxxxx escapes when the string is represented (as anything will be inside the standard containers).
By turning a tuple into a string, you then capture the representation as a string:
>>> str(row)
"('♚\\u3000\\u3000 恏😊',)"
Python represents tuples using valid Python syntax, and uses valid Python syntax for strings too. But removing the tuple syntax from that output (so the "(' at the start and ',) at the end) does not give you the proper string value back.
Indexing the tuple object gives you the value in it:
>>> row[0]
'♚\u3000\u3000 恏😊'
>>> print(row[0])
♚   恏😊
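Putting that together, the original function might be reduced to something like this (a sketch; it uses fetchone() on the assumption that the id lookup returns a single row):
def name():
    with conn.cursor() as cursor:
        cursor.execute("select name from fake_user where id = 147951")
        row = cursor.fetchone()
        return row[0] if row else None

print(name())  # prints the real name, with actual ideographic spaces rather than \u3000 escapes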

Populating Insert Statements from a local file

I am trying to write user data from a file into a series of INSERT statements. I feel I am close but just missing one or two things. I am attempting to use .format(), but all I end up with are ?'s.
import time, json, sqlite3

def insertsfromfile(file):
    results = open(file).readlines()
    output = open('UserINSERTFile.txt', 'w')
    for rows in results:
        jsonobject = json.loads(rows)
        userid = jsonobject['user']['id']
        name = jsonobject['user']['name']
        screenname = jsonobject['user']['screen_name']
        description = jsonobject['user']['description']
        friendscount = jsonobject['user']['friends_count']
        insert = 'INSERT INTO Users VALUES (?,?,?,?,?)'.format(userid, name, screenname, description, friendscount)
        insert = insert[:-1] + ''
        output.write(insert)
    output.close()
Thanks
I figured it out after reviewing it. Essentially I was missing that I had to combine the attributes with my INSERT string using '+'. I also had to convert the variables with str() in case they were ints.
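For completeness, a sketch of what the described fix might look like (reconstructed from the description above; the final code was not posted):
insert = ("INSERT INTO Users VALUES (" + str(userid) + ", '" + str(name) + "', '" +
          str(screenname) + "', '" + str(description) + "', " + str(friendscount) + ");\n")
output.write(insert)
# note: this naive quoting breaks if a value contains a single quote; sqlite3's
# parameterized queries with ? placeholders avoid that problem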
