I want to INSERT the dict that an API returns into my db; so far I can insert one item at a time.
This is my code:
import json
import requests
import psycopg2

def my_func():
    response = requests.get("https://path/to/api/")
    data = response.json()
    while data['next'] is not None:
        response = requests.get(data['next'])
        data = response.json()
        for item in data['results']:
            try:
                connection = psycopg2.connect(user="user",
                                              password="user",
                                              host="127.0.0.1",
                                              port="5432",
                                              database="mydb")
                cursor = connection.cursor()
                postgres_insert_query = """ INSERT INTO table_items (NAME) VALUES (%s)"""
                record_to_insert = item['name']
                cursor.execute(postgres_insert_query, (record_to_insert,))
                connection.commit()
                count = cursor.rowcount
                print(count, "success")
            except (Exception, psycopg2.Error) as error:
                if connection:
                    print("error", error)
            finally:
                if connection:
                    cursor.close()
                    connection.close()

my_func()
So, this one is working, but if, for example, I want to insert not just name but also address, weight, and cost_per_unit into table_items, then I change these lines of code:
postgres_insert_query = 'INSERT INTO table_items (NAME, ADDRESS, WEIGHT, COST_PER_UNIT) VALUES (%s,%s,%s,%s)'
record_to_insert = (item['name']['address']['weight']['cost_per_unit'])
Then it will throw:
Failed to insert record into table_items table string indices must be integers
PostgreSQL connection is closed
I mean, the first version with just one field works perfectly, but I need to insert into the other three fields every time. Any ideas?
You have to fix the syntax when you access the item's attributes to define the parameters, and also change the object you pass to the parameterized query, since record_to_insert is already a tuple:
postgres_insert_query = """ INSERT INTO table_items
(NAME, ADDRESS, WEIGHT, COST_PER_UNIT) VALUES (%s,%s,%s,%s)"""
record_to_insert = (item['name'],
item['address'],
item['weight'],
item['cost_per_unit'])
cursor.execute(postgres_insert_query, record_to_insert) # you can pass the tuple directly
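As a side note, opening a new connection for every row is expensive. A minimal sketch of a batch variant using executemany(), assuming the same table, credentials, and item keys as above:

import psycopg2
import requests

# Hypothetical endpoint and credentials, mirroring the question's setup.
connection = psycopg2.connect(user="user", password="user",
                              host="127.0.0.1", port="5432",
                              database="mydb")
cursor = connection.cursor()

response = requests.get("https://path/to/api/")
data = response.json()

insert_query = """INSERT INTO table_items
                  (NAME, ADDRESS, WEIGHT, COST_PER_UNIT)
                  VALUES (%s, %s, %s, %s)"""

# One tuple per row, sent to the server in a single call.
records = [(item['name'], item['address'],
            item['weight'], item['cost_per_unit'])
           for item in data['results']]
cursor.executemany(insert_query, records)
connection.commit()

cursor.close()
connection.close()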
import mysql.connector

def add_features_to_db(stockname, timeframe, date, feature):
    try:
        conn = mysql.connector.connect(
            user='root', password='', host='localhost', database='fx003')
        cursor = conn.cursor()
        dbtable = stockname + timeframe
        mySql_insert_query = """INSERT INTO `%s` (date, trend) VALUES ( `%s`, `%s` )"""
        record = (dbtable, date, feature)
        cursor.execute(mySql_insert_query, record)
        conn.commit()
        print("Record inserted successfully")
    except mysql.connector.Error as error:
        print("Failed to insert into MySQL table {}".format(error))
    finally:
        if conn.is_connected():
            cursor.close()
            conn.close()
            print("MySQL connection is closed")

add_features_to_db("aud-cad", "_30mins", "2021-09-24 21:00:00", "Short")
I have the code above and it's giving me the error below:
Failed to insert into MySQL table 1146 (42S02): Table 'fx003.'aud-cad_30mins'' doesn't exist
The aud-cad_30mins table does exist, and an insert query like the one below does its job:
mySql_insert_query = """INSERT INTO aud-cad_30mins (date, trend) VALUES ( "2021-09-24 21:00:00","Short" )"""
So when I try to use variables in the query, it gives the error. Why is the table name getting unwanted quotes? I checked several tutorials but couldn't find a solution. Any ideas?
The table name should be hardcoded in the query string instead of sitting there as a %s placeholder, which is meant for the values to be inserted. So if you have the table name in a variable, you can splice it in via format() before calling cursor.execute():
dbtable = stockname + timeframe
mySql_insert_query = """INSERT INTO {} (date, trend) VALUES ( %s, %s )""".format(dbtable)
See the examples in the docs.
Edit: as Bill mentioned in the comments, don't add backticks around the %s placeholders.
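One caveat: splicing a variable into the SQL string this way is unsafe if the table name can ever come from user input. A minimal sketch that checks the name against an allow-list first (the allow-list contents here are hypothetical):

# Hypothetical set of tables this job is permitted to touch.
ALLOWED_TABLES = {"aud-cad_30mins", "aud-cad_1h"}

dbtable = stockname + timeframe
if dbtable not in ALLOWED_TABLES:
    raise ValueError("unexpected table name: {}".format(dbtable))

# The identifier is now known-safe; the values still go through placeholders.
mySql_insert_query = "INSERT INTO `{}` (date, trend) VALUES (%s, %s)".format(dbtable)
cursor.execute(mySql_insert_query, (date, feature))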
I'm trying to insert data from a csv file into tables I previously created using sqlalchemy in Python. However, when I run the following code I get an error saying that not all arguments were converted during string formatting.
Could you help me identify my error and how to fix it?
# Importing the csv input file
df = pd.read_csv('APAN5310_HW6_DATA.csv')
print(df)
print(df.columns)
df.dtypes

# Splitting the data for the first table
first_name = df['first_name']
last_name = df['last_name']
email = df['email']
df[['cell_phone', 'home_phone']] = df.cell_and_home_phones.str.split(";", expand=True)
cell_phone = df['cell_phone']
home_phone = df['home_phone']
consumer_list = [first_name, last_name, email, cell_phone, home_phone]

import psycopg2

def bulkInsert(records):
    try:
        connection = psycopg2.connect(user="postgres",
                                      password="123",
                                      host="localhost",
                                      port="5432",
                                      database="Pharmacy")
        cursor = connection.cursor()
        sql_insert_query = """ INSERT INTO consumer (consumer_list)
                               VALUES (%s,%s,%s,%s,%s) """
        # executemany() to insert multiple rows
        result = cursor.executemany(sql_insert_query, records)
        connection.commit()
        print(cursor.rowcount, "Record inserted successfully into consumer table")
    except (Exception, psycopg2.Error) as error:
        print("Failed inserting record into consumer table {}".format(error))
    finally:
        # closing database connection.
        if connection:
            cursor.close()
            connection.close()
            print("PostgreSQL connection is closed")

records = consumer_list
bulkInsert(records)
The error message I get:
"Failed inserting record into consumer table not all arguments converted during string formatting
PostgreSQL connection is closed"
@Javier, you have to name all the target columns, as @mechanical_meat said:
""" INSERT INTO consumer (column1, column2, column3, column4, column5)
VALUES (%s, %s, %s, %s, %s) """
But I suppose you've figured that out by now.
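For completeness: besides naming the columns, executemany() expects a sequence of row tuples, whereas consumer_list above is a list of whole pandas columns. A minimal sketch of building row-shaped records from the DataFrame (column names taken from the question):

# One tuple per row: (first_name, last_name, email, cell_phone, home_phone)
records = list(df[['first_name', 'last_name', 'email',
                   'cell_phone', 'home_phone']]
               .itertuples(index=False, name=None))

sql_insert_query = """ INSERT INTO consumer
                       (first_name, last_name, email, cell_phone, home_phone)
                       VALUES (%s,%s,%s,%s,%s) """
cursor.executemany(sql_insert_query, records)
connection.commit()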
My goal is to parse the API via pagination, store the result as a JSON feed, and then send it off to the MySQL DB. Once stored, I want to check whether any new rows have been added; if so, empty the table and re-insert everything (maybe not the best approach?). However, for some strange reason nothing is being stored in the MySQL DB anymore, and my prints aren't working. Any thoughts on what I messed up?
PYTHON
import json
import sys

import MySQLdb  # used below but missing from the original imports
import requests

def dbconnect():
    try:
        db = MySQLdb.connect(
            host='localhost',
            user='root',
            passwd='',
            db='watch',
        )
    except Exception as e:
        sys.exit("Can't connect to database")
    return db

# init db
db = dbconnect()
cursor = db.cursor()

# Start getting all entries
def get_all_cracked_entries():
    # results will be appended to this list
    all_time_entries = []
    # loop through all pages and collect each JSON response
    for page in range(1, 4):
        url = "https://api.watch.com/api?page=" + str(page)
        response = requests.get(url=url).json()
        all_time_entries.append(response)
        for product in response:
            print("id:", product["_id"])
            print("title:", product["title"])
            print("slug:", product["slug"])
            print("releaseDate:", product["releaseDate"])
            cursor.execute("INSERT INTO jsondump (id, title, slug, releaseDate) VALUES (%s,%s,%s,%s)",
                           (product["_id"], product["title"], product["slug"], product["releaseDate"]))
            db.commit()

    # Check row count
    cursor.execute("SELECT * FROM `jsondump`")
    cursor.fetchall()
    rc = cursor.rowcount
    print("%d" % rc)
    if rc > rc + 1:  # note: this comparison can never be true
        rs = cursor.fetchall()
    else:
        cursor.execute("TRUNCATE TABLE jsondump")
        for product in response:
            print("id:", product["_id"])
            print("title:", product["title"])
            print("slug:", product["slug"])
            print("releaseDate:", product["releaseDate"])
            print('---')
            db = dbconnect()
            cursor = db.cursor()
            # fixed: the original had three %s placeholders for four values and a stray bracket
            cursor.execute("INSERT INTO jsondump (id, title, slug, releaseDate) VALUES (%s,%s,%s,%s)",
                           (product["_id"], product["title"], product["slug"], product["releaseDate"]))
            db.commit()
    cursor.close()

    # prettify JSON
    data = json.dumps(all_time_entries, sort_keys=True, indent=0)
    return data
SAMPLE JSON

[{
    "_id": "xxxxxxx",
    "releaseDate": "2020-02-13T21:00:00-03:00",
    "slug": "table-manners",
    "title": "Table Manners"
}]
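For the refresh-everything flow described above (compare counts, then truncate and re-insert), a minimal sketch with a single connection and executemany(), assuming the same jsondump schema and product fields as in the question:

def refresh_jsondump(db, products):
    # Replace the table contents whenever the upstream count changed.
    cursor = db.cursor()
    cursor.execute("SELECT COUNT(*) FROM jsondump")
    (existing,) = cursor.fetchone()

    if len(products) != existing:
        cursor.execute("TRUNCATE TABLE jsondump")
        rows = [(p["_id"], p["title"], p["slug"], p["releaseDate"])
                for p in products]
        cursor.executemany(
            "INSERT INTO jsondump (id, title, slug, releaseDate) "
            "VALUES (%s, %s, %s, %s)", rows)
        db.commit()
    cursor.close()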
I'm trying to add some data to a database using Python, but I'm unable to get the auto-increment primary key of the last inserted record.
I've checked similar questions here and here, but it hasn't worked.
My code is as follows:
def insert_vehicles_to_db(vehicle):
    conn = db_connection()
    cur = conn.cursor()
    if vehicle_not_exists(vehicle, conn, cur):
        try:
            insert_vehicle(vehicle, conn, cur)
        except Exception as e:
            pass
    else:
        pass
    conn.close()
Then it goes to the insert_vehicle function. In that function I want:
to add a new vehicle to the database
to add a new price to the vehicle_price table
for the previous step, the primary key of the last vehicle inserted into the vehicles table
The function insert_vehicle is as follows:
def insert_vehicle(vehicle, conn, cur):
    try:
        query = "INSERT INTO vehicles (reference, data, price, reference_url, timestamp) VALUES (%s, %s, %s, %s, %s);"
        cur.execute(query, (vehicle['reference'], "", vehicle['price'], vehicle['reference_url'], datetime.datetime.now()))
        # I tried vehicle_id = cur.lastrowid here; it always gives me 0
        insert_vehicle_price(vehicle['price'], vehicle_id, conn, cur)
        conn.commit()
    except Exception as e:
        # TODO Handle error
        pass
And insert_vehicle_price looks as follows:
def insert_vehicle_price(price, vehicle_id, conn, cur):
    # Here I need the correct vehicle_id to be able to insert a new record
    # into the `vehicle_price` table
    pass
Any idea how to solve it?
In case your primary key is the ID, you can use cursor.lastrowid to get the ID of the last row inserted on that cursor object, or connection.insert_id() to get the ID from the last insert on that connection.
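A minimal sketch of how that plugs into the functions above, assuming a MySQL driver where lastrowid is populated right after execute() (the vehicle_price columns are assumptions):

def insert_vehicle(vehicle, conn, cur):
    query = ("INSERT INTO vehicles (reference, data, price, reference_url, timestamp) "
             "VALUES (%s, %s, %s, %s, %s);")
    cur.execute(query, (vehicle['reference'], "", vehicle['price'],
                        vehicle['reference_url'], datetime.datetime.now()))
    vehicle_id = cur.lastrowid  # auto-increment ID generated by the INSERT above
    insert_vehicle_price(vehicle['price'], vehicle_id, conn, cur)
    conn.commit()

def insert_vehicle_price(price, vehicle_id, conn, cur):
    cur.execute("INSERT INTO vehicle_price (vehicle_id, price) VALUES (%s, %s)",
                (vehicle_id, price))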
I just want to select from or insert into MySQL using Python 3.2 and mysql.connector.
import mysql.connector

filename = "t1.15231.0337.mod35.hdf"

try:
    cnx = mysql.connector.connect(user='root', password='', database='etl')
    cursor = cnx.cursor()
    cursor.execute('SELECT * FROM hdf_file WHERE NAMA_FILE = %s', filename)
    rows = cursor.fetchall()
    if rows == []:
        insert_hdf = cursor.execute('INSERT INTO hdf_file VALUES(%s,null,NOW(),null,null,NOW())', filename)
        cursor.execute(insert_hdf)
        cnx.commit()
    cursor.close()
    cnx.close()
except mysql.connector.Error as err:
    print("Something went wrong: {}".format(err))
but it said: unknown column 'filename' in where clause
I have tried something like this:
cursor.execute('SELECT * FROM hdf_file WHERE NAMA_FILE = filename')
but I got the same error...
When using cursor.execute() with parameterised queries, the query arguments are passed as a sequence (e.g. list, tuple), or as a dictionary if using named parameters. Your code passes only the string filename.
Your queries could be written like this:
cursor.execute('SELECT * FROM hdf_file WHERE NAMA_FILE = %s', (filename,))
Here the tuple (filename,) is passed to execute(). Similarly for the insert query:
cursor.execute('INSERT INTO hdf_file VALUES (%s, null, NOW(), null, null, NOW())',
               (filename,))
execute() will return None, so there is no use storing the result in the insert_hdf variable. It also makes no sense, and will cause an error, to then call cursor.execute(insert_hdf).
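Putting it together, a minimal corrected sketch of the select-then-insert flow, under the same schema assumptions as the question:

import mysql.connector

filename = "t1.15231.0337.mod35.hdf"

try:
    cnx = mysql.connector.connect(user='root', password='', database='etl')
    cursor = cnx.cursor()

    # Parameters always travel as a sequence, even for a single value.
    cursor.execute('SELECT * FROM hdf_file WHERE NAMA_FILE = %s', (filename,))
    if cursor.fetchall() == []:
        cursor.execute('INSERT INTO hdf_file VALUES (%s, null, NOW(), null, null, NOW())',
                       (filename,))
        cnx.commit()

    cursor.close()
    cnx.close()
except mysql.connector.Error as err:
    print("Something went wrong: {}".format(err))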