This is the simple code I want to execute. I want the values of a tkinter combobox to come from a sqlite3 database. It does the job, but it appends the first row's first-column value many times (once per row in the database). I want it to append the first column of each row of the database instead.
import sqlite3

con_caster = sqlite3.connect('caster.db')
con_sec = sqlite3.connect('section.db')
cur_caster = con_caster.cursor()
cur_sec = con_sec.cursor()  # note: the original overwrote con_sec with its own cursor
#cur_caster.execute("DROP TABLE IF EXISTS autos;") # use this line only if you want to overwrite existing table
cur_caster.execute("CREATE TABLE IF NOT EXISTS caster ('casterid' TEXT, 'casterradius' NUMBER);")
cur_caster.execute('SELECT casterid FROM caster')
caster_data = []
for row in cur_caster.fetchall():
    caster_data.append(row[0])
combo['values'] = caster_data
con_caster.commit()
And this is how I create the rows in the database:
def add_to_database():
    i = 0
    # add details to database
    caster_data.append(add_cid.get())
    cur_caster.execute("INSERT INTO caster VALUES (?, ?);", (caster_data[i], caster_data[i]))
    con_caster.commit()
    combo['values'] = caster_data
    i += 1
When I delete the database file and run the app, it shows:
When I add data, the combobox looks like:
After closing and opening the app again, the combobox looks like:
And the database also changes in the same way as the combobox.
Looking for help with this.
Thank you so much in advance.
I finally removed the bug. There was a problem with the data-addition method.
Every time I needed to add data to the database, I inserted the first element (caster_data[i], with i always 0) instead of the entry value.
The correct snippet is:
def add_to_database():
    caster_data.append(add_cid.get())
    cur_caster.execute("INSERT INTO caster VALUES (?, ?);", (add_cid.get(), add_cr.get()))
    con_caster.commit()
    combo['values'] = caster_data
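For reference, here is a minimal self-contained sketch of the whole pattern. It re-queries the table after each insert instead of maintaining a parallel list, so the combobox always mirrors the database. The widget names add_cid/add_cr and the table come from the snippets above; everything else is an assumption:

import sqlite3
import tkinter as tk
from tkinter import ttk

con = sqlite3.connect('caster.db')
cur = con.cursor()
cur.execute("CREATE TABLE IF NOT EXISTS caster (casterid TEXT, casterradius NUMBER);")

root = tk.Tk()
combo = ttk.Combobox(root)
combo.pack()
add_cid = tk.Entry(root)  # caster id entry
add_cid.pack()
add_cr = tk.Entry(root)   # caster radius entry
add_cr.pack()

def refresh_combo():
    # re-read the casterid column so the widget always matches the table
    cur.execute("SELECT casterid FROM caster")
    combo['values'] = [row[0] for row in cur.fetchall()]

def add_to_database():
    cur.execute("INSERT INTO caster VALUES (?, ?);", (add_cid.get(), add_cr.get()))
    con.commit()
    refresh_combo()

tk.Button(root, text="Add", command=add_to_database).pack()
refresh_combo()
root.mainloop()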
I have a GUI interacting with my database, and the MySQL database has around 50 tables. I need to search each table for a value and, if it is found, return the field and key of the item in each table. I would like to search for partial matches, e.g. for the search value "test", both "Protest" and "Test123" would be matches. Here is my attempt.
def searchdatabase(self, event):
    print('Searching...')
    self.connect_mysql()  # function to connect to the database
    d_tables = []
    results_list = []  # I will store results here
    s_string = "test"  # value I am searching for
    self.cursor.execute("USE db")  # select the database
    self.cursor.execute("SHOW TABLES")
    for (table_name,) in self.cursor:
        d_tables.append(table_name)
    # loop through the tables, get the column names, and check if the value is in each column
    for table in d_tables:
        # get the columns
        self.cursor.execute(f"SELECT * FROM `{table}` WHERE 1=0")
        field_names = [i[0] for i in self.cursor.description]
        # find the value
        for f_name in field_names:
            print("RESULTS:", self.cursor.execute(f"SELECT * FROM `{table}` WHERE {f_name} LIKE {s_string}"))
            print(table)
I get an error on print("RESULTS:", self.cursor.execute(f"SELECT * FROM `{table}` WHERE {f_name} LIKE {s_string}"))
Exception: (1054, "Unknown column 'test' in 'where clause'")
I use a similar insert query that works fine, so I do not understand what the issue is.
ex. insert_query = (f"INSERT INTO `{source_tbl}` ({query_columns}) VALUES ({query_placeholders})")
This may be because of the single quotes you have missed around the search value: MySQL parses the unquoted test as a column name, hence the "Unknown column 'test'" error.
Try:
print("RESULTS:", self.cursor.execute(f"SELECT * FROM `{table}` WHERE `{f_name}` LIKE '{s_string}'"))
Don’t insert user-provided data into SQL queries like this. It is begging for SQL injection attacks. Your database library will have a way of sending parameters to queries. Use that.
The whole design is fishy. Normally, there should be no need to look for a string across several columns of 50 different tables. Admittedly, sometimes you end up in these situations because of reasons outside your control.
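As a sketch of that advice, here is the inner search loop with the value bound as a parameter. It assumes a DB-API driver such as mysql.connector or pymysql, where %s is the placeholder style; table and column names cannot be parameters, so they are taken only from what the server itself reports:

s_string = "test"
pattern = f"%{s_string}%"  # partial match: matches "Protest", "Test123", ...
self.cursor.execute("SHOW TABLES")
tables = [t for (t,) in self.cursor.fetchall()]
for table in tables:
    # read the column names from an empty result set
    self.cursor.execute(f"SELECT * FROM `{table}` WHERE 1=0")
    field_names = [col[0] for col in self.cursor.description]
    for f_name in field_names:
        # the search value is bound by the driver, never interpolated
        self.cursor.execute(
            f"SELECT * FROM `{table}` WHERE `{f_name}` LIKE %s", (pattern,)
        )
        for row in self.cursor.fetchall():
            print("RESULTS:", table, f_name, row)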
I have created a sqlite3 database:
import sqlite3

def connect():
    conn = sqlite3.connect("todo.db")
    cur = conn.cursor()
    cur.execute("CREATE TABLE IF NOT EXISTS todo (id INTEGER PRIMARY KEY, tasks TEXT, typ TEXT, difficulty TEXT, frequency TEXT, deadline TEXT)")
    conn.commit()
    conn.close()
I fetch the data from it:
def view():
    conn = sqlite3.connect("todo.db")
    cur = conn.cursor()
    cur.execute("SELECT * FROM todo")
    rows = cur.fetchall()
    conn.close()
    return rows
and show them in a tkinter list widget:
def view_command():
    list1.delete(0, END)
    for row in tdo.view():
        row = str(row).strip("()")
        row = row.replace("'", "")
        row = row.strip()
        list1.insert(END, row)
It shows all elements of the rows as expected:
12, finish something, todo, difficult, often, 12.08.2020
Now what I'm trying to do is to show this row in the list without index:
finish something, todo, difficult, often, 12.08.2020
I tried doing this by not selecting the id from the database, and by excluding it with strip and similar functions. But then the id gets lost. I want to be able to select the row by its id to further work with it.
def get_selected_row(event):
    try:
        global selected_tuple
        index = list1.curselection()[0]
        selected_tuple = list1.get(index)
        selected_tuple = tuple(selected_tuple.split(","))
    except IndexError:
        pass  # nothing selected
So, to make it short: how can I get the id from the database, hide it in the list widget, but still be able to use it when selecting a row?
You can keep a mapping list that maps a listbox index to a database id.
The following example assumes that the database id is the first element of a row.
def view_command():
    list1.mapping = []
    list1.delete(0, END)
    for row in tdo.view():
        row = list(row)   # rows come back as tuples; pop needs a list
        db_id = row.pop(0)
        list1.mapping.append(db_id)
        row = str(tuple(row)).strip("()")
        row = row.replace("'", "")
        row = row.strip()
        list1.insert(END, row)
Now, given index x from the listbox, the database id would be list1.mapping[x].
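For example, the selection handler from the question could then look up the hidden id directly (a sketch; only list1 and the mapping above are assumed):

def get_selected_row(event):
    try:
        index = list1.curselection()[0]
        db_id = list1.mapping[index]  # the hidden database id
        # ... use db_id in UPDATE/DELETE queries ...
    except IndexError:
        pass  # nothing selected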
Instead of converting the list to a string and then removing the characters added by the conversion, why not just join the elements with a comma and a space?
For example:
row = ", ".join(str(v) for v in row)
I've been querying a few APIs with Python to individually create CSVs for a table.
Instead of recreating the table each time, I would like to update the existing table with any new API data.
At the moment, with the way the query works, I have a table that looks like this:
From this I am taking the suburbs of each state and copying them into a CSV for each state.
Then, using this script, I am cleaning them into a list (the API needs the %20 for any spaces):
"%20"
suburbs = ["want this", "want this (meh)", "this as well (nope)"]
suburb_cleaned = []
dont_want = frozenset(["(meh)", "(nope)"])
for urb in suburbs:
    cleaned_name = []
    name_parts = urb.split()
    for part in name_parts:
        if part in dont_want:
            continue
        cleaned_name.append(part)
    suburb_cleaned.append('%20'.join(cleaned_name))
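As a side note, the standard library can produce the %20 encoding directly, which would collapse the cleaning loop into one expression (a sketch using the same suburbs/dont_want sample data as above):

from urllib.parse import quote

suburb_cleaned = [
    quote(" ".join(part for part in urb.split() if part not in dont_want))
    for urb in suburbs
]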
Then I take the suburbs for each state and feed them into this API to return a CSV:
import time

import pandas as pd
import requests

timestr = time.strftime("%Y%m%d-%H%M%S")
Name = "price_data_NT" + timestr + ".csv"
url_price = "http://mwap.com/api"
string = 'gxg&state='
api_results = {}
n = 0
y = 2
for urbs in suburb_cleaned:
    url = url_price + urbs + string + "NT"
    print(url)
    print(urbs)
    request = requests.get(url)
    api_results[urbs] = pd.DataFrame(request.json())
    n = n + 1
    if n == y:
        # write an intermediate CSV every second suburb
        dfs = pd.concat(api_results).reset_index(level=1, drop=True).rename_axis(
            'key').reset_index().set_index(['key'])
        dfs.to_csv(Name, sep='\t', encoding='utf-8')
        y = y + 2
        continue
    print("made it through " + urbs)
# print(request.json())
# print(api_results)
dfs = pd.concat(api_results).reset_index(level=1, drop=True).rename_axis(
    'key').reset_index().set_index(['key'])
dfs.to_csv(Name, sep='\t', encoding='utf-8')
Then I add the states manually in Excel, and combine and clean the suburb names:
# use pd.concat
df = (pd.concat([act, vic, nsw, SA, QLD, WA])
        .reset_index().set_index(['key'])
        .rename_axis('suburb').reset_index().set_index(['state']))
# apply a lambda to clean the %20
f = lambda s: s.replace('%20', ' ')
df['suburb'] = df['suburb'].apply(f)
and then finally inserting it into a db
from sqlalchemy import create_engine

engine = create_engine('mysql://username:password@localhost/dbname')
with engine.connect() as conn, conn.begin():
    df.to_sql('Price_historic', conn, if_exists='replace', index=False)
Leading to this sort of output:
Now, this is a heck of a process. I would love to simplify it and have the database update only the values that are needed from the API, without this much complexity in getting the data.
I would love some helpful tips on achieving this goal. I'm thinking I could do an update on the MySQL database instead of an insert, or something like that? And with the querying of the API, I feel like I'm overcomplicating it.
Thanks!
I don't see any reason why you would be creating CSV files in this process. It sounds like you can just query the data and then load it into a MySql table directly. You say that you are adding the states manually in excel? Is that data not available through your prior api calls? If not, could you find that information and save it to a CSV, so you can automate that step by loading it into a table and having python look up the values for you?
Generally, you wouldn't want to overwrite the mysql table every time. When you have a table, you can identify the column or columns that uniquely identify a specific record, then create a UNIQUE INDEX for them. For example if your street and price values designate a unique entry, then in mysql you could run:
ALTER TABLE `Price_historic` ADD UNIQUE INDEX(street, price);
After this, your table will not allow duplicate records based on those values. Then, instead of creating a new table every time, you can insert your data into the existing table, with instructions to either update or ignore when you encounter a duplicate. For example:
final_str = "INSERT INTO Price_historic (state, suburb, property_price_id, type, street, price, date) " \
            "VALUES (%s, %s, %s, %s, %s, %s, %s) " \
            "ON DUPLICATE KEY UPDATE " \
            "state = VALUES(state), date = VALUES(date)"
import pymysql as pdb  # assuming a DB-API driver such as pymysql

con = pdb.connect(db_host, db_user, db_pass, db_name)
with con:
    cur = con.cursor()
    cur.executemany(final_str, insert_list)
    con.commit()
If the setup you are trying is something for the longer term, I would suggest running two different processes in parallel:
Process 1:
Query API 1, obtain the required data, and insert it into the DB table, with a binary/bit flag specifying that only API 1 has been called.
Process 2:
Run a query on the DB to obtain all records needed for API call 2, based on the binary/bit flag set in process 1. For the corresponding data, run call 2 and update the data back to the DB table based on the primary key.
Database: I would suggest adding a primary key as well as a bit flag (see https://docs.oracle.com/cd/B28359_01/server.111/b28286/functions014.htm#SQLRF00612) that gives the status of the different API calls. The bit flag also helps you
- in case you want to double-check whether a specific API call has been made for a specific record or not;
- to expand your project to additional API calls while still tracking the status of each call at the record level.
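A minimal sketch of that flag-driven flow, assuming the Price_historic table from the question plus hypothetical id and api2_done columns, and pymysql as the driver:

import pymysql

con = pymysql.connect(host="localhost", user="user", password="pw", database="dbname")
with con.cursor() as cur:
    # Process 2: pick up records API 1 has written but API 2 has not handled yet
    cur.execute("SELECT id, suburb FROM Price_historic WHERE api2_done = 0")
    for record_id, suburb in cur.fetchall():
        # ... call API 2 for this suburb and store its data ...
        cur.execute("UPDATE Price_historic SET api2_done = 1 WHERE id = %s",
                    (record_id,))
con.commit()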
I have a table in my SQL Server that is being updated every minute.
Currently, I get the data from my table using these lines of code:
import pymssql

conn = pymssql.connect(server, user, password, "tempdb")

def print_table():
    cursor = conn.cursor(as_dict=True)
    cursor.execute('SELECT * FROM EmotionDisturbances WHERE name=%s', 'John Doe')
    for row in cursor:
        # show the data:
        print("rate=%d, emotion=%s" % (row['rate'], row['emotion']))
    conn.close()
In my application, I run this function every 10 seconds.
How do I update the function so that I only print the last appended data from my table?
Thanks
Assuming you have an auto-incrementing index in column id, you'd do (SQL Server uses TOP rather than MySQL-style LIMIT):
SELECT TOP 1 * FROM EmotionDisturbances WHERE name = %s ORDER BY id DESC
EDIT: If you want all data that was added after a certain time, then you'll need to migrate your schema to have a created-date column if it doesn't have one already. Then you can do:
SELECT *
FROM EmotionDisturbances
WHERE name = %s AND created >= DATEADD(second, -10, GETDATE())
This would get all of the records created over the last 10 seconds, since you said this function runs every 10 seconds.
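Another common pattern is to remember the largest id already printed and fetch only newer rows on each pass. A sketch against the question's table, assuming the auto-incrementing id column from above and the existing pymssql connection:

last_id = 0

def print_new_rows():
    global last_id
    cursor = conn.cursor(as_dict=True)
    cursor.execute(
        'SELECT * FROM EmotionDisturbances WHERE name = %s AND id > %d ORDER BY id',
        ('John Doe', last_id))
    for row in cursor:
        print("rate=%d, emotion=%s" % (row['rate'], row['emotion']))
        last_id = row['id']  # only rows newer than this are fetched next time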
I've been looking around for a solution but can't seem to spot what I'm doing wrong. I have a MySQL database with a table with 2 columns: clock (contains a timestamp) and state (contains either 1 or 0). When a button on my breadboard is pressed (I am using a Raspberry Pi), a 1 is entered into the continually-updating database; if the button is not being pressed, a 0 is entered into the table. This works, as I have selected the table on the command line and the appropriate numbers of 1's and 0's are displayed.
In a separate Python file, the database and table are called and supposedly a sum of the state column is taken; if this is 1 or more, I know the button has been pressed in the last 30 seconds, as the table is cleared every 30 seconds.
The issue is that however many 1's and 0's there are, and even when the table is empty, the sum only returns 1. I am very new to Python, so this error could be embarrassingly small. Thanks for your help all the same!
Here is the entire python file for the sum:
import MySQLdb

conn = MySQLdb.connect('localhost', 'root', 'password', 'sensordb')
cur = conn.cursor()
but1 = cur.execute("SELECT SUM(state) FROM button1")
print "%d" % but1
You need to fetch the row as well. cursor.execute() returns the number of rows in the result set, which is always 1 for this single-row aggregate, and that is why you keep seeing 1.
cur.execute("SELECT SUM(state) FROM button1")
row = cur.fetchone()
while row is not None:
print(row)
row = cur.fetchone()
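If you only need the number itself, index into the fetched row; note that SUM() returns NULL (None in Python) on an empty table:

cur.execute("SELECT SUM(state) FROM button1")
row = cur.fetchone()
total = row[0] if row[0] is not None else 0  # guard against the empty-table case
print(total)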
You are nearly there....
After the cur.execute(....) you then need to read the output.
If you are on mysql.connector and send several statements at once, the multi=True form returns an iterator of results:
for result in cursor.execute("Your QUERY", multi=True):
    if result.with_rows:
        print("Rows produced by statement '{}':".format(
            result.statement))
        print(result.fetchall())
Hope this helps you....