I want to insert the values of the variables temp and hum into my table sensors in a MySQL database (db name: project). I am using Ubuntu 14.04, and the connection between Python and MySQL works, but I can't insert the variable values. Any help is welcome, and thanks.
This is my Python script:
from random import *
import socket
import sys
import mysql.connector
from datetime import datetime
temp = randrange(10, 31, 2)
hum = randrange(300, 701, 2)
print temp
print hum
conn=mysql.connector.connect(user='root',password='12345',host='localhost',database='projet');
mycursor=conn.cursor();
mycursor.execute("""INSERT INTO sensors VALUES('$datetime','$temp','$hum')""");
conn.commit()
mycursor.execute("SELECT * FROM sensors")
This is my table, where you can see the literal strings $datetime, $temp, and $hum instead of their values:
That's because you are not passing the variable values into the query. Pass them, together with the query, to the execute() method:
mycursor.execute("""
    INSERT INTO
        sensors
    VALUES
        ('$datetime', %s, %s)""", (temp, hum))
where %s are placeholders for the query parameters. Note that the database driver handles parameter type conversion and escaping automatically.
As for the $datetime, if you want to have a current date/datetime in this column, look into using NOW() or CURDATE(), see:
MySQL: Curdate() vs Now()
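The parameter-binding approach above can be sketched end to end. The example below is a minimal, self-contained version that uses the stdlib sqlite3 module so it runs without a MySQL server; sqlite uses ? where mysql.connector uses %s, and datetime('now') plays the role of MySQL's NOW(), but the binding idea is identical.

```python
import sqlite3
from random import randrange

# In-memory stand-in for the MySQL 'sensors' table.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE sensors (ts TEXT, temp INTEGER, hum INTEGER)")

temp = randrange(10, 31, 2)
hum = randrange(300, 701, 2)

# Bind the Python values as parameters; the driver handles quoting and escaping.
# datetime('now') stands in for MySQL's NOW().
cur.execute("INSERT INTO sensors VALUES (datetime('now'), ?, ?)", (temp, hum))
conn.commit()

cur.execute("SELECT temp, hum FROM sensors")
row = cur.fetchone()
print(row)  # the actual numbers, not the literal strings '$temp'/'$hum'
```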
But I still can't insert the variable values.
Assuming the date is a string and the rest are integers, try Python string formatting in the execute() call:
mycursor.execute("INSERT INTO sensors VALUES('%s', %d, %d)" % (date, temp, hum))
(Parameterized queries, as in the other answer, are safer against SQL injection.)
I am a beginner in programming, so please excuse me if the question is stupid.
See the code below. It takes two values (price, link) from a csv file named headphones_master_data.csv and writes the data into a MySQL table. When writing the data, the date is also written to the table.
There are 900 rows in the file, so in the writing part the my_cursor.execute(sql, val) call runs 900 times (once per row).
It got me thinking and I wanted to see if there are other ways to improve the data writing part. I came up with two ideas and they are as follows.
1 - Convert all the lists ( price, link) into a dictionary and write the dictionary. So the my_cursor.execute(sql, val) function is executed just once.
2 - Convert the lists into a data frame and write that into the database so the write happens just once.
Which method is the best one? Are there any drawbacks to writing the data only once? More importantly, am I thinking about the optimization correctly?
import time
import pandas as pd
import pymysql

data = pd.read_csv("headphones-master_data.csv")  # read the csv file into a DataFrame
link_list = data['Product_url'].tolist()  # take the url values and turn them into a list
price_list = data['Sale_price'].tolist()
crawled_date = time.strftime('%Y-%m-%d')  # generate a date string in a MySQL-compatible format

connection = pymysql.connect(host='localhost',
                             user='root',
                             password='passme123##$',
                             db='hpsize')  # connection object holding the database details
my_cursor = connection.cursor()  # cursor object to communicate with the database

for i in range(len(link_list)):
    link = link_list[i]
    price = price_list[i]
    sql = "INSERT INTO comparison (link, price, crawled_date) VALUES (%s, %s, %s)"  # query with three placeholders
    val = link, price, crawled_date  # the values to bind to the placeholders
    my_cursor.execute(sql, val)  # execute the insert for this row

connection.commit()  # commit to make the inserts permanent

my_cursor.execute("SELECT * FROM comparison")  # read the table back to verify the inserts
result = my_cursor.fetchall()
for i in result:
    print(i)

connection.close()
The best way in my opinion is to put the data into a DataFrame and then use the .to_sql method to save it to your MySQL database.
This method takes an argument (method='multi') which allows you to insert all the rows in the DataFrame in one go, very quickly. This works if your database supports multi-row inserts.
Read more here : https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_sql.html
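A minimal sketch of the to_sql approach, assuming pandas is installed. It uses an in-memory sqlite database so it runs anywhere; for MySQL you would instead pass a SQLAlchemy engine built from a URL such as "mysql+pymysql://user:pass@localhost/hpsize" (credentials hypothetical).

```python
import sqlite3
import pandas as pd

# Hypothetical scraped rows standing in for the csv columns.
df = pd.DataFrame({
    "link": ["http://example.com/a", "http://example.com/b"],
    "price": [199, 249],
    "crawled_date": ["2021-01-01", "2021-01-01"],
})

# In-memory sqlite stand-in for the MySQL connection.
conn = sqlite3.connect(":memory:")

# One call writes every row; method="multi" batches them into one INSERT.
df.to_sql("comparison", conn, index=False, method="multi")

count = conn.execute("SELECT COUNT(*) FROM comparison").fetchone()[0]
print(count)  # 2
```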
I'm trying to populate my time column which is a TIMESTAMP datatype with an INSERT command from my Python script. This is my current code for the insert:
usercount= ("INSERT INTO UserNum(Amount, Network) \
VALUES ('%s', '%s')" % \
(num_user, nets_id["name"]))
I haven't included a TIMESTAMP value in the INSERT, as I believe it gets generated automatically.
But when I look at the UserNum table, the time column is populated with values such as AAAAAAAAE5U=. Is there something I'm doing wrong?
Any help with this would be much appreciated!
You do not need to quote your arguments as strings; Python's %-formatting handles the conversion.
Try this:
usercount = "INSERT INTO UserNum(Amount, Network) VALUES (%s, %s)" % (num_user, nets_id["name"])
but be sure that the values match the column types in your DB.
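For the timestamp itself: the column only fills in automatically if it was declared with a default. A minimal sketch with the stdlib sqlite3 module (which also accepts DEFAULT CURRENT_TIMESTAMP, matching MySQL's `time TIMESTAMP DEFAULT CURRENT_TIMESTAMP`); the table and values here are illustrative, and parameter binding replaces %-formatting:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# The time column gets a value automatically because of its declared default.
cur.execute("""CREATE TABLE UserNum (
                   Amount  INTEGER,
                   Network TEXT,
                   time    TIMESTAMP DEFAULT CURRENT_TIMESTAMP)""")

# Bind parameters instead of %-formatting; omit the time column so the
# default kicks in.
cur.execute("INSERT INTO UserNum (Amount, Network) VALUES (?, ?)", (42, "net-1"))
conn.commit()

amount, network, ts = cur.execute("SELECT * FROM UserNum").fetchone()
print(amount, network, ts)
```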
I'm trying to insert one dynamic value along with a static value into a MySQL database using Python. When I execute the query with the dynamic value alone it works; see the query below.
cursor.execute("INSERT INTO rfid(tagname) VALUES (%s)", (txt))
But how do I execute a dynamic value along with a static value? I tried the following, but it's not working.
cursor.execute("INSERT INTO rfid(tagname,weight) VALUES (%s,%d)", (txt,30))
error:
execute
query = query % db.literal(args)
TypeError: %d format: a number is required, not str
But I did pass 30, which is a number.
Python version v2.7 and MySQL version is 5.6
If you're already hard-coding 30, why not just hard code it in the insert statement?
cursor.execute("INSERT INTO rfid(tagname, weight) VALUES (%s, 30)", (txt,))
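The underlying cause of the error is that MySQLdb only supports %s as a placeholder, whatever the column type; %d is not valid. So the 30 can also stay a bound parameter. A minimal sketch using the stdlib sqlite3 module so it runs anywhere (sqlite's placeholder is ? where MySQLdb's is %s; the table mirrors the question's rfid table, and the tag value is made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE rfid (tagname TEXT, weight INTEGER)")

txt = "TAG-001"  # hypothetical tag value

# One placeholder per value, regardless of type; with MySQLdb both would be %s.
# Note the parameters form a tuple (txt, 30) -- and a single value needs (txt,).
cur.execute("INSERT INTO rfid (tagname, weight) VALUES (?, ?)", (txt, 30))
conn.commit()

row = cur.execute("SELECT tagname, weight FROM rfid").fetchone()
print(row)  # ('TAG-001', 30)
```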
I am trying to import rows of a csv file into a mysql table, I am using Python to do this.
Here's a snippet of my mysql statement from my python script:
sql = """INSERT INTO tbl_celebrants(id, name, DATE_FORMAT(birthday,'%m/%d/%Y'), address) \
VALUES(%s , %s, %s, %s)"""
I am getting an error that says
ValueError: unsupported format character 'm' (0x6d) at index 60
The date format in my csv file is mm/dd/yyyy. I have tried using %% in the DATE_FORMAT ('%%m/%%d/%%Y') in my Python script, as suggested elsewhere on this site, but it did not work for me. I appreciate any help; many thanks in advance.
P.S.
Here's how I am executing the statement:
for row in reader:
    cursor = conn.cursor()
    sql = """INSERT INTO tbl_celebrants(id, name, DATE_FORMAT(birthday,'%%m/%%d/%%Y'),address) VALUES(%s,%s,%s,%s)"""
    cursor.execute(sql, row)
    cursor.execute("commit")
    cursor.close()
You're right that you should double every % character inside DATE_FORMAT() so it isn't interpreted as a prepared-statement placeholder. But you're using the DATE_FORMAT() function in the wrong place: it belongs inside the VALUES(...) clause, not the column list. To fix the issue, try this:
sql = """INSERT INTO tbl_celebrants(id, name, birthday, address)
VALUES(%s , %s, DATE_FORMAT(%s,'%%m/%%d/%%Y'), %s)"""
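One caveat: DATE_FORMAT() turns a date into a string. If the birthday column is a DATE and the csv holds mm/dd/yyyy strings, the parsing direction is MySQL's STR_TO_DATE(%s, '%%m/%%d/%%Y'); alternatively, parse client-side with datetime.strptime and bind an ISO date, which sidesteps the %-escaping problem entirely. A minimal sketch of the client-side route (the sample date is made up):

```python
from datetime import datetime

# csv supplies dates as mm/dd/yyyy strings
raw = "07/04/1990"

# Parse in Python, then hand the driver an ISO date it can bind directly.
birthday = datetime.strptime(raw, "%m/%d/%Y").date()
print(birthday.isoformat())  # 1990-07-04
```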
I have a strange problem that I'm having trouble both duplicating and solving.
I'm using the pyodbc library in Python to access a MS Access 2007 database. The script is basically just importing a csv file into Access, plus a few other tricks.
I am trying to first save a 'Gift Header', then get the auto-incremented id (GiftRef) that it is saved with, and use this value to save one or more associated 'Gift Details'.
Everything works exactly as it should - 90% of the time. The other 10% of the time Access seems to get stuck and repeatedly returns the same value for cur.execute("select last(GiftRef) from tblGiftHeader").
Once it gets stuck it returns this value for the duration of the script. It does not happen while processing a specific entry or at any specific time in the execution; it seems to happen completely at random.
Also, I know that it is returning the wrong value: the Gift Headers are being saved, and are being given new, unique IDs, but for whatever reason that value is not being returned correctly when called.
SQL = "insert into tblGiftHeader (PersonID, GiftDate, Initials, Total) VALUES "+ str(header_vals) + ""
cur.execute(SQL)
gift_ref = [s[0] for s in cur.execute("select last(GiftRef) from tblGiftHeader")][0]
cur.commit()
Any thoughts or insights would be appreciated.
In Access SQL the LAST() function does not necessarily return the most recently created AutoNumber value. (See here for details.)
What you want is to do a SELECT @@IDENTITY immediately after you commit your INSERT, like this:
import pyodbc
cnxn = pyodbc.connect('DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};DBQ=C:\\Users\\Public\\Database1.accdb;')
cursor = cnxn.cursor()
cursor.execute("INSERT INTO Clients (FirstName, LastName) VALUES (?, ?)", ['Mister', 'Gumby'])
cursor.commit()
cursor.execute("SELECT @@IDENTITY AS ID")
row = cursor.fetchone()
print row.ID
cnxn.close()
Yep! That seems to be a much more reliable way of getting the last id. I believe my initial code was based on the example at http://www.w3schools.com/sql/sql_func_last.asp, which I suppose I took out of context.
Thanks for the assist! Here is the updated version of my original code (with connection string):
MDB = 'C:\\Users\\Public\\database.mdb'
DRV = '{Microsoft Access Driver (*.mdb)}'
conn = pyodbc.connect('DRIVER={};DBQ={}'.format(DRV,MDB))
curs = conn.cursor()
SQL = "insert into tblGiftHeader (PersonID, GiftDate, Initials, Total) VALUES "+ str(header_vals) + ""
curs.execute(SQL)
curs.commit()
curs.execute("SELECT @@IDENTITY AS ID")
row = curs.fetchone()
gift_ref = row.ID
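The insert-then-fetch-the-new-id pattern works the same way in other databases. A minimal sketch with the stdlib sqlite3 module, where cursor.lastrowid plays the role of Access's SELECT @@IDENTITY (the table is a simplified stand-in for tblGiftHeader, and the values are made up); like @@IDENTITY, it is scoped to the current connection, so unlike LAST() it cannot pick up another session's row:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
curs = conn.cursor()
curs.execute("""CREATE TABLE tblGiftHeader (
                    GiftRef  INTEGER PRIMARY KEY AUTOINCREMENT,
                    PersonID INTEGER,
                    Total    REAL)""")

# Insert a header, then immediately read back the id the database generated.
curs.execute("INSERT INTO tblGiftHeader (PersonID, Total) VALUES (?, ?)", (7, 25.0))
gift_ref = curs.lastrowid          # sqlite's analogue of SELECT @@IDENTITY
conn.commit()

curs.execute("INSERT INTO tblGiftHeader (PersonID, Total) VALUES (?, ?)", (8, 40.0))
gift_ref2 = curs.lastrowid

print(gift_ref, gift_ref2)  # 1 2
```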