So I have looked around for days and tried various solutions to other people's issues that seem the same as my own, but unfortunately I have not made any progress. I am still very new to Python, tkinter, and MySQL databases. What I'm trying to do is have an inventory program; it's just that simple. You can log in and add, delete, or simply view what is in the database. I can add and delete items just fine, but I am having an issue pulling the data into the correct columns.
So far I have tried to pull each value as a variable and assign it accordingly, tried to pull one row at a time, and even switched to a new database, but have gotten no correct results. If I use sqlite3 it works fine, so is it because MySQL is on a server and not local? Anyway, I'd like some advice to point me in the right direction, so any help is much appreciated.
Edit:
tree.insert("", 1, "dirIso", text="ProductID")
for n, dirIso in enumerate(results,1):
list_of_column_values = [list(_dict.values())[0] for _dict in dirIso]
tree.insert('dirIso', n, text=list_of_column_values[0],
values=list_of_column_values[1:])
cursor.close()
conn.close()
This is what I have done; I am now getting a 'str' object has no attribute 'values' error. Do I need to change n? Or is it looking for the name inside of my database as values and not the columns?
The outcome I'm trying to get is for the table to display its respective data for each item.
Question: Treeview is placing MySQL data into one column
Edit your question to show three rows of results data if your results does not look like the following.
Assuming results is the following list of dicts:
results = [{'produkt_id': '1234', 'Name': 'John', 'Address': 'NewYork'},
           {'produkt_id': '5678', 'Name': 'Peter', 'Address': 'Boston'}]
You don't need a tree heading, as you want a table view.
# Dicts are unordered, therefore you need a list
# of fieldnames in your desired order
fieldnames = ['produkt_id', 'Name', 'Address']

# Loop over results
for n, _dict in enumerate(results, 1):
    # Create a list of values from this _dict
    _list = []
    for key in fieldnames:
        _list.append(_dict[key])

    # The first value goes to 'text',
    # all others go to 'values'
    tree.insert('', 'end', n, text=_list[0], values=_list[1:])
Output:
1234 John NewYork
5678 Peter Boston
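For reference, here is a minimal self-contained sketch of the same pattern with the window and Treeview setup included. The data and column names are the assumed example values from above, not the asker's actual database:

import tkinter as tk
from tkinter import ttk

results = [{'produkt_id': '1234', 'Name': 'John', 'Address': 'NewYork'},
           {'produkt_id': '5678', 'Name': 'Peter', 'Address': 'Boston'}]
fieldnames = ['produkt_id', 'Name', 'Address']

root = tk.Tk()
# One Treeview column per field after the first; the first field is
# displayed in the implicit tree column '#0' via the 'text' option.
tree = ttk.Treeview(root, columns=fieldnames[1:])
tree.pack(fill='both', expand=True)

for n, _dict in enumerate(results, 1):
    row = [_dict[key] for key in fieldnames]
    tree.insert('', 'end', n, text=row[0], values=row[1:])

root.mainloop()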
Related
I am doing a rather large loop to pull out multiple key and value pairs, which are then formed into multiple dictionaries. I want to eventually turn these into a dataframe. I am assuming I must first make a list out of them? The code looks like this:
data = {}
ls_dict = []
keys = [name]
values = [number]
for i in range(len(keys)):
    data[keys[i]] = values[i]
    ls_dict.append(data)
print(ls_dict)
This loop is inside another, larger loop; that is where the keys and values are coming from.
When I run the code, I get a load of separate dictionaries like so:
[{'name': number}]
[{'name': number}]
[{'name': number}]
But I was hoping to get them in a list like this:
[{'name': number}, {'name': number}, {'name': number}]
The plan was then to return that list out of the function and turn it into a dataframe with column headings "User" and "User Number".
Any ideas, first of all, why it's not producing a single list? And also, is there maybe a better way to make a dataframe out of the name and number I'm getting from my larger loop?
All help greatly appreciated.
Try:
final_list = [{key: value} for key, value in zip(keys, values)]
It looks like both keys and values always have only a single element; that's why the given for loop only performs one iteration. Could you maybe also show the outer loop?
A good way to turn this kind of data into dataframes might be to use the from_dict classmethod.
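For instance, a small sketch of that approach, assuming the outer loop collects the names and numbers into two parallel lists (names and numbers here are hypothetical, not the asker's variables):

import pandas as pd

# Hypothetical parallel lists collected by the outer loop
names = ['alice', 'bob', 'carol']
numbers = [101, 102, 103]

# from_dict builds one column per key, giving the desired headings
df = pd.DataFrame.from_dict({'User': names, 'User Number': numbers})
print(df)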
I'd like to use the ON DUPLICATE KEY UPDATE functionality provided by SQLAlchemy to upsert a bunch of records.
These records have been successfully inserted with Python using the following (where connection is an engine.connect() object and table is a Table object):
record_list = [{'col1': 'name1', 'col2': '2015-01-31', 'col3': 27.2},
               {'col1': 'name1', 'col2': '2016-01-31', 'col3': 25.2}]
query = insert(table)
results = connection.execute(query, record_list)
Looking at the docs at https://docs.sqlalchemy.org/en/13/dialects/mysql.html#insert-on-duplicate-key-update-upsert, as well as a number of SO questions (including the suggestion that it's possible in the comments on SQLAlchemy ON DUPLICATE KEY UPDATE), I've tried a number of different examples, but none that I could see address multiple records with the upsert statement using this method.
I'm trying something along the lines of
query = insert(table).values(record_list)
upsert_query = query.on_duplicate_key_update()
results = connection.execute(upsert_query)
but I either get the issue that .on_duplicate_key_update() requires arguments and can't be empty, or that the SQL syntax is wrong.
If anyone has successfully managed this and could help me with the code structure here, I'd really appreciate it.
I just ran into a similar problem and creating a dictionary out of query.inserted solved it for me.
query = insert(table).values(record_list)
update_dict = {x.name: x for x in query.inserted}
upsert_query = query.on_duplicate_key_update(update_dict)
@user12730260's answer is great, but has a little bug; the correct code is:
query = insert(table).values(record_list)  # each record is a dict
update_dict = {x.name: x for x in query.inserted}  # specify the columns to update; you can filter some of them out
upsert_query = query.on_duplicate_key_update(**update_dict)  # here's the modification: you should expand the columns dict
Your on_duplicate_key_update function requires arguments that define the data to be inserted in the update. Please have a look at the example in the documentation that you have already found.
insert().on_duplicate_key_update({"key": "value"})
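Putting the pieces above together, a minimal sketch of the multi-record upsert, where table and engine stand in for the question's Table and Engine objects:

from sqlalchemy.dialects.mysql import insert

record_list = [{'col1': 'name1', 'col2': '2015-01-31', 'col3': 27.2},
               {'col1': 'name1', 'col2': '2016-01-31', 'col3': 25.2}]

query = insert(table).values(record_list)
# query.inserted refers to the VALUES() of the incoming row, so on a
# key collision every column is updated to the newly inserted value.
update_dict = {x.name: x for x in query.inserted}
upsert_query = query.on_duplicate_key_update(**update_dict)

with engine.connect() as connection:
    results = connection.execute(upsert_query)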
This title may be a little misleading, but here is where I am at.
My Goal:
Take data from the JIRA API and push it to a PostgreSQL database (using psycopg2), BUT one of the columns' data needs to be changed by a find and replace, and then that list needs to be pushed to the database along with the other data that is unchanged from the API.
What I have currently:
So I do a find and replace by creating a list from the API data that needs to change:
data_change = list((item['status']) for item in data_table['issues'])
I then create a dictionary to map what needs to be changed:
severity = {
    'Blocker': 'Emergency',
    'Critical': 'High',
    'Major': 'High',
    'Moderate': 'Medium',
    'Minor': 'Low',
    'Trivial': 'Low'
}
Then I create a new list, assigned to the variable result, with all the data that needs to be entered into the database:
result = [severity.get(e, e) for e in data_change]
So now I need to take this list and push it to the database along with the other data.
def insert_into_table_epic(data_table):
    query = """
        INSERT INTO table (id, name, status)
        VALUES %s;
    """
    values = list((item['id'],
                   item['name'],
                   result) for item in data_table['issues'])
    extras.execute_values(cur, query, values)
    conn.commit()
The Problem:
The problem lies with this line here:
values = list((item['id'],
               item['name'],
               result) for item in data_table['issues'])
The API has 50 different 'issues', so this adds one value to each row, resulting in 50 rows. I am passing result as one of the values, and this won't work because that variable is itself a list, so the whole list is inserted for every row.
I was thinking of running a query against the database that does the find and replace after the data has been put in, but I would like to know if I can do it via this route:
API ---> list ---> change data ---> insert into db with the rest of the data taken from the API
The Question:
How do I change this variable result so that I can pass it through values without it inserting the whole list for every row?
You can do the conversions directly (without data_change and result):
values = list((item['id'],
               item['name'],
               severity.get(item['status'], item['status'])
              ) for item in data_table['issues'])
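A fuller sketch of the corrected function with the conversion inlined; the table name here is a placeholder, and cur/conn are assumed to be the question's cursor and connection:

from psycopg2 import extras

def insert_into_table_epic(data_table):
    query = """
        INSERT INTO my_table (id, name, status)
        VALUES %s;
    """
    # Map each issue's status through the severity dict inline,
    # falling back to the original value when there is no mapping
    values = [(item['id'],
               item['name'],
               severity.get(item['status'], item['status']))
              for item in data_table['issues']]
    extras.execute_values(cur, query, values)
    conn.commit()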
I want to shade every other column, excluding the 1st row/header, with grey. I read through the documentation for XlsxWriter and was unable to find any example of this; I also searched through the tag here and couldn't find anything.
Why not set it up as a conditional format?
http://xlsxwriter.readthedocs.org/example_conditional_format.html
You should just declare a condition like "if the cell's row number % 2 == 0".
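For example, a minimal sketch of that idea (the filename, range, and colour are assumptions); since the question asks about alternating columns, the formula uses COLUMN() rather than ROW():

import xlsxwriter

workbook = xlsxwriter.Workbook('shaded.xlsx')
worksheet = workbook.add_worksheet()

grey = workbook.add_format({'bg_color': '#d9d9d9'})

# Start the range at row 2 so the header row is excluded
worksheet.conditional_format('A2:H100', {'type': 'formula',
                                         'criteria': '=MOD(COLUMN(),2)=0',
                                         'format': grey})
workbook.close()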
I wanted to post the details of how I did this, and how I was able to do it dynamically. It's kinda hacky, but I'm new to Python and I just needed this to work for now.
xlsW = pd.ExcelWriter(finalReportFileName)
rptMatchingDoe.to_excel(xlsW, 'Room Counts Not Matching', index=False)
workbook = xlsW.book
rptMatchingSheet = xlsW.sheets['Room Counts Not Matching']
formatShadeRows = workbook.add_format({'bg_color': '#a9c9ff',
                                       'font_color': 'black'})
rptMatchingSheet.conditional_format('A1:' + xlsAlpha[rptMatchingDoeColCount] + matchingCount,
                                    {'type': 'formula',
                                     'criteria': '=MOD(ROW(),2) = 0',
                                     'format': formatShadeRows})
xlsW.save()
xlsAlpha is a list that contains the max number of columns my report could possibly have. My first three columns are always consistent, so I just set rptMatchingDoeColCount equal to 2, and then when I loop through the list to build my query I increment the count. The matchingCount variable is just a fetchone() result from a count(*) query on the view I'm pulling from in the database.
Eventually I think I will write a function to replace the hardcoded list assigned to xlsAlpha, so that it can handle a virtually unlimited number of columns.
If anyone has any suggestions on how I could improve this feel free to share.
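One possible replacement for the hardcoded xlsAlpha list is XlsxWriter's own utility for converting a zero-based column index to an Excel column letter, sketched here:

from xlsxwriter.utility import xl_col_to_name

# 0 -> 'A', 25 -> 'Z', 26 -> 'AA', ...
print(xl_col_to_name(0))   # A
print(xl_col_to_name(26))  # AA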
Hello, I am struggling with an algorithm that will calculate the distance travelled by an operator in a warehouse. It is calculated based on a picklist that contains a set of locations; I just set the distances between them. I have created a "mini" version of the algorithm with hand-typed input for a few locations, but I'm aiming for picklists with over 100k locations. I would like to read the variables "picklist" and "LOCA" (locations) from a CSV file. I have managed to do it with this code:
with open("movement warehouse.csv") as csv_file:
csv_reader = csv.reader(csv_file, delimiter=',')
next(csv_reader)
for row in csv_reader:
picklist, number, LOCA, ITEM, DATE = row
print(row)
it prints this:
['4403821', '10', 'E-11-GR', 'NSWH-828031C', '20-Jun-17']
['4403824', '10', 'I-15-BL', 'CISH-800-100174-01', '20-Jun-17']
['4403825', '10', 'I-02-ER', 'CISH-800-100175-01', '20-Jun-17']
['4403825', '20', 'G-21-FR', 'CISH-700-101709-01', '20-Jun-17']
(it's just part of it, but you get the idea)
So the first column is the picklist number and the third one is the location.
The others don't matter to me for now.
So finally my question:
How can I use a particular "cell" of data? For example, I'd like to check:
if first picklist == second picklist:  # that's possible
    distance += distance between location 'E-11-GR' and 'I-15-BL'
else:
    skip to next location  # or whatever
You get the idea? How do I go through the data row by row within a column (from the CSV file)?
PS:
I'm a beginner; I've been working in Python for 3 weeks now and this is my first post here, so go easy on me please :) and if this post is too messy let me know and I'll try to explain better.
You want to access the array indexes.
Every row you get will come in as an array:
my_row = ["1", "foo", "bar"]
you can get the first by saying:
my_row[0]
Arrays begin at zero, so bar would be at my_row[2].
If you ask for a term that's not in the array (my_row[5]) it'll throw an error, so make sure the data is there before you ask for it.
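A tiny sketch of that guard:

my_row = ["1", "foo", "bar"]

# Check the length before indexing to avoid an IndexError
if len(my_row) > 2:
    print(my_row[2])  # bar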
The structure of your code might look something like this:
all_rows = []
for row in csv_reader:
    all_rows.append(row)

# To get the first row:
all_rows[0]

# To get the first column of the first row:
all_rows[0][0]
I think there's an elegant way of solving this with dictionaries.
Assuming that picklist is our reference point(?)
from collections import defaultdict

# Sets up a dictionary with a list as the default value
master_dict = defaultdict(list)
for row in csv_reader:
    value = dict(zip(['number', 'LOCA', 'ITEM', 'DATE'], row[1:]))
    master_dict[row[0]].append(value)
Then you can iterate through the keys?
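Building on that, a hedged sketch of the distance calculation itself; get_distance is a hypothetical lookup (for example a dict keyed by pairs of locations) that is not part of the original code:

total_distance = 0
for picklist_id, rows in master_dict.items():
    locations = [r['LOCA'] for r in rows]
    # Sum the distance between each consecutive pair of
    # locations visited within the same picklist
    for a, b in zip(locations, locations[1:]):
        total_distance += get_distance(a, b)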