I'm using SQL Server, Python, pypyodbc.
The tables I have are:
tbl_User: id, owner
tbl_UserPhone: id, number, user_id
id is the primary key of tbl_User, and user_id is the foreign key in tbl_UserPhone that references it.
I'm trying to insert two different phones for the same user_id using pypyodbc.
This is one of the things I tried that did not work:
cursor = connection.cursor()
SQLCommand = ("INSERT INTO tbl_UserPhones"
              "(id,number,user_id)"
              " VALUES (?,?,?)")
values = [userphone_index, user_phone, "((SELECT id from tbl_User where id = %d))" % user_id_index]
cursor.execute(SQLCommand, values)
cursor.commit()
Based on your comments, you have an identity column in tbl_UserPhones; judging by the column names, it's the id column.
The exception you get is very clear: you can't insert data into an identity column without explicitly setting IDENTITY_INSERT to ON before your insert statement. Generally, messing around with identity columns is bad practice; it's better to let SQL Server use its built-in capabilities and handle the insert into the identity column automatically.
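For completeness, the override the error message hints at looks like this (a sketch only; IDENTITY_INSERT is session-scoped and per-table, and you should rarely need it):
cursor.execute("SET IDENTITY_INSERT tbl_UserPhones ON")
cursor.execute("INSERT INTO tbl_UserPhones (id, number, user_id) VALUES (?,?,?)",
               [userphone_index, user_phone, user_id_index])
cursor.execute("SET IDENTITY_INSERT tbl_UserPhones OFF")
cursor.commit()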
You need to change your insert statement to not include the id column:
Instead of
SQLCommand = ("INSERT INTO tbl_UserPhones"
"(id,number,user_id)"
" VALUES (?,?,?)")
values = [userphone_index, user_phone,"((SELECT id from tbl_User where id = %d))" % user_id_index]
try this:
SQLCommand = ("INSERT INTO tbl_UserPhones"
"(number,user_id)"
" VALUES (?,?)")
values = [user_phone,"((SELECT id from tbl_User where id = %d))" % user_id_index]
SQLCommand = ("INSERT INTO tbl_UserPhones"
"(id,number,user_id)"
" VALUES (?,?,?)")
user_sqlCommand = cursor.execute("(SELECT id FROM tbl_User WHERE id = %d)" % user_index).fetchone()[0]
values = [userphone_index, user_phone, user_sqlCommand]
This was the solution.
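If you later need the id that SQL Server generated, you can have the insert return it directly (a sketch using the OUTPUT clause; this works with pyodbc-style drivers, though treat its behaviour under pypyodbc specifically as an assumption):
cursor.execute("INSERT INTO tbl_UserPhones (number, user_id) OUTPUT INSERTED.id VALUES (?,?)",
               [user_phone, user_id_index])
new_phone_id = cursor.fetchone()[0]  # the identity value just assigned
cursor.commit()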
Related
I need to extract results from a Redshift database based on an IF condition written in Python.
Suppose I have a table with CustomerID, Shipment Number, Invoice Station, etc. as columns in a Redshift table. I want to get all the records from the Redshift table if the customer ID exists, where the customer ID is checked against user input.
TABLE NAME = ShipmentInfo
COLUMNS = CustomerID, BillNumber, Invoicing Station, Charges, etc.
Python
import psycopg2

con = psycopg2.connect(dbname='datamodel', host='123',
                       port='5439', user='bce', password='Ciz')
cur = con.cursor()
HWB = input("Enter the House Bill Number : ")
#if CustomerID = HWB:
cur.execute("""SELECT source_system, file_nbr, file_date, CustomerID
               FROM public.shipment_info where CustomerID = $HWB""")
results = cur.fetchall()
cur.close()
con.close()
print(results)
Consider parameterization of the user input value (else you risk the infamous Bobby Tables).
# PREPARED STATEMENT WITH PLACEHOLDER
sql = """SELECT source_system, file_nbr, file_date, CustomerID
FROM public.shipment_info
WHERE CustomerID = %s
"""
# BIND PARAM VALUES WITH TUPLE OF ONE-ITEM
cur.execute(sql, (HWB,))
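The trailing comma matters: (HWB,) is a one-item tuple, whereas (HWB) is just HWB in parentheses, so the driver would not receive a sequence of parameters. psycopg2 then escapes the bound value safely instead of splicing the raw string into the SQL.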
I am trying to update some values in a database. The user can choose the row that should be changed. The input from the user, however, is a string. When I pass this to the MySQL connector in Python, it gives an error because of the apostrophes. The code I have so far is:
import mysql.connector

conn = mysql.connector.connect(user=dbUser, password=dbPasswd, host=dbHost, database=dbName)
cursor = conn.cursor()
cursor.execute("""UPDATE Search SET %s = %s WHERE searchID = %s""", ('maxPrice', 300, 10,))
I get this error
mysql.connector.errors.ProgrammingError: 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ''maxPrice' = 300 WHERE searchID = 10' at line 1
How do I get rid of the apostrophes? I think they are causing the problem.
As noted, you can't prepare a statement with a placeholder for a field (column) name.
Perhaps the safest way is to allow only those fields that are expected, e.g.
#!/usr/bin/python
import os
import mysql.connector

conn = mysql.connector.connect(user=os.environ.get('USER'),
                               host='localhost',
                               database='sandbox',
                               unix_socket='/var/run/mysqld/mysqld.sock')
cur = conn.cursor(dictionary=True)

# Look up the table's real column names from the information schema
query = """SELECT column_name
           FROM information_schema.columns
           WHERE table_schema = DATABASE()
           AND table_name = 'Search'
        """
cur.execute(query)
fields = [x['column_name'] for x in cur.fetchall()]

user_input = ['maxPrice', 300, 10]

# Interpolate the validated column name; bind the values as parameters
if user_input[0] in fields:
    sql = "UPDATE Search SET {} = %s WHERE id = %s".format(user_input[0])
    cur.execute(sql, tuple(user_input[1:]))
    print(cur.statement)
Prints:
UPDATE Search SET maxPrice = 300 WHERE id = 10
Where:
mysql> show create table Search\G
*************************** 1. row ***************************
Search
CREATE TABLE `Search` (
`id` int(11) DEFAULT NULL,
`maxPrice` float DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=latin1
A column name is not a parameter. Put the column name maxPrice directly into your SQL.
cursor.execute("""UPDATE Search SET maxPrice = %s WHERE searchID = %s""", (300, 10))
If you want to use the same code with different column names, you would have to modify the string itself.
sql = "UPDATE Search SET {} = %s WHERE searchID = %s".format('maxPrice')
cursor.execute(sql, (300,10))
But bear in mind that this is not safe from injection the way parameters are, so make sure your column name is not a user-input string or anything like that.
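If the column name itself has to come from user input, a minimal guard is to check it against a fixed whitelist before formatting it in. A sketch (the allowed set here is hypothetical):
ALLOWED_COLUMNS = {'maxPrice', 'minPrice'}  # hypothetical set of updatable columns

def update_search(cursor, column, value, search_id):
    # Refuse anything that is not a known, expected column name
    if column not in ALLOWED_COLUMNS:
        raise ValueError('unexpected column name: {!r}'.format(column))
    sql = 'UPDATE Search SET {} = %s WHERE searchID = %s'.format(column)
    cursor.execute(sql, (value, search_id))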
You cannot do it like that. You need to place the column name in the string before you call cursor.execute; column names cannot be substituted as bound parameters in cursor.execute.
Something like this would work:
sql = "UPDATE Search SET {} = %s WHERE searchID = %s".format('maxPrice')
cursor.execute(sql, (300, 10,))
You cannot dynamically bind object (e.g., column) names, only values. If that's the logic you're trying to achieve, you'd have to resort to string manipulation/formatting (with all the risks of SQL-injection attacks that come with it). E.g.:
sql = """UPDATE Search SET {} = %s WHERE searchID = %s""".format('maxPrice')
cursor.execute(sql, (300, 10,))
I have a pandas DataFrame which I'm trying to insert into a MySQL table, using MySQLdb and to_sql. The table has allocationid as an auto-incrementing primary key. I will want to do this daily, deleting the day's previous data from the MySQL table and reinserting updated data from the DataFrame. Hence I would like the primary key to auto-increment automatically (I won't be using it down the line, but may want to refer to it).
The code is:
columns = ('date','tradeid','accountid','amount')
splitInput = pd.DataFrame(columns = columns)
splitInput['accountid'] = newHFfile['acctID']
splitInput['tradeid'] = newHFfile['Ref']
splitInput['amount'] = newHFfile['AMOUNT1']
splitInput['date'] = newHFfile['Trade Date']
db = MySQLdb.connect(host="(hostIP)", port=3306, user="user", passwd="(passwd)", db="(database)")
cursor = db.cursor()
query = """delete from splittrades where date = """ + runymdformat + """ """
cursor.execute(query)
db.commit()
splitInput.to_sql(con = db, name = 'splittrades',if_exists = 'append',flavor = 'mysql',index = False)
db.commit()
db.close()
The problem is that without adding a column for primary key, I get 'OperationalError: (1364, "Field 'allocationid' doesn't have a default value")'
If I add a primary key column and leave it blank, null, I get OperationalError: (1366, "Incorrect integer value: '' for column 'allocationid' at row 1")
If I use 1 or 0 in the allocationid column, I get a duplicate-key error message.
MySQL usually auto-increments the primary key if you don't specify it - is there a way I can make this work from Python?
PS: I am not a Python expert, so please treat me gently. Thanks :-)
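One likely cause, assuming the allocationid column was created without AUTO_INCREMENT: MySQL only fills the key in for you when the column is declared auto-incrementing. A sketch of the fix (the exact column definition is an assumption about your schema):
cursor = db.cursor()
# Make MySQL generate allocationid itself; the column must be (part of) a key
cursor.execute("ALTER TABLE splittrades MODIFY allocationid INT NOT NULL AUTO_INCREMENT")
db.commit()
After that, to_sql can keep omitting allocationid and each inserted row gets a fresh value.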
from sqlite3 import connect  # needed for connect("products.db") below

def makeProductTable():
    """This creates a database with a blank table."""
    with connect("products.db") as db:
        cursor = db.cursor()
        cursor.execute("""
            CREATE TABLE Product(
            ProductID integer,
            GTIN integer,
            Description string,
            StockLevel integer,
            Primary Key(ProductID));""")
        db.commit()
def editStockLevel():
    with connect("products.db") as db:
        cursor = db.cursor()
        Product_ID = input("Please enter the id of the product you would like to change: ")
        Stock_Update = input("Please enter the new stock level: ")
        sql = "update product set StockLevel = ('Stock_Update') where ProductID = ('Product_ID');"
        cursor.execute(sql)
        db.commit()
        return "Stock Level Updated."
The first function is used to make the table and shows my column titles; the second function is meant to update a specific value in the table.
But when this is run, the inputs are accepted, yet when I show all the products in the table the stock level value doesn't change.
So I think the problem has something to do with the cursor.execute(sql) line.
Or something like this?
cur.execute("UPDATE Product set StockLevel = ? where ProductID = ?",(Stock_Update,Product_ID))
Yes; you're passing the literal strings 'Stock_Update' and 'Product_ID' instead of the values returned from your input calls. You need to use parameters in the statement and pass them to the execute call (sqlite3 uses ? placeholders):
sql = "update product set StockLevel = ? where ProductID = ?;"
cursor.execute(sql, (Stock_Update, Product_ID))
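Putting it together, a corrected editStockLevel might look like this (a sketch; note the inputs are still strings, so convert the stock level if you want it stored as an integer):
def editStockLevel():
    with connect("products.db") as db:
        cursor = db.cursor()
        Product_ID = input("Please enter the id of the product you would like to change: ")
        Stock_Update = input("Please enter the new stock level: ")
        # Bind the actual input values instead of quoting variable names
        cursor.execute("UPDATE Product SET StockLevel = ? WHERE ProductID = ?",
                       (int(Stock_Update), Product_ID))
        db.commit()
        return "Stock Level Updated."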
[Using Python3.x]
The basic idea is that I have to run a first query to pull a long list of IDs (text, about a million of them) and use those IDs in an IN() clause of a WHERE statement in another query. I'm using Python string formatting to make this happen, and it works well if the number of IDs is small - say 100k - but gives me an error (pyodbc.Error: ('08S01', '[08S01] [MySQL][ODBC 5.2(a) Driver][mysqld-5.5.31-MariaDB-log]MySQL server has gone away (2006) (SQLExecDirectW)')) when the set is indeed about a million IDs long.
I tried to read into it a bit and think it might have something to do with the default(?) limits set by SQLite. I am also wondering if I'm approaching this in the right way at all.
Here's my code:
Step 1: Getting the IDs
import sqlite3 as lite  # needed for lite.connect below

def get_device_ids(con_str, query, tb_name):
    local_con = lite.connect('temp.db')
    local_cur = local_con.cursor()
    local_cur.execute("DROP TABLE IF EXISTS {};".format(tb_name))
    local_cur.execute("CREATE TABLE {} (id TEXT PRIMARY KEY, \
                       lang TEXT, first_date DATETIME);".format(tb_name))
    data = create_external_con(con_str, query)  # helper defined elsewhere
    device_id_set = set()
    with local_con:
        for row in data:
            device_id_set.update([row[0]])
            local_cur.execute("INSERT INTO srv(id, lang, \
                               first_date) VALUES (?,?,?);", (row))
        lid = local_cur.lastrowid
    print("Number of rows inserted into SRV: {}".format(lid))
    return device_id_set
Step 2: Generating the query with 'dynamic' IN() clause
def gen_queries(ids):
    ids_list = ', '.join("'" + id_ + "'" for id_ in ids)
    query = """
    SELECT e.id,
           e.field2,
           e.field3
    FROM table e
    WHERE e.id IN ({})
    """.format(ids_list)
    return query
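For a sense of scale: a million quoted IDs joined this way easily yields a statement tens of megabytes long, which is consistent with the "MySQL server has gone away" error; that error commonly means the statement exceeded the MySQL server's max_allowed_packet (the limit belongs to MySQL, not SQLite).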
Step 3: Using that query in another INSERT query
This is where things go wrong
def get_data(con_str, query, tb_name):
    local_con = lite.connect('temp.db')
    local_cur = local_con.cursor()
    local_cur.execute("DROP TABLE IF EXISTS {};".format(tb_name))
    local_cur.execute("CREATE TABLE {} (id TEXT, field1 INTEGER, \
                       field2 TEXT, field3 TEXT, field4 INTEGER, \
                       PRIMARY KEY(id, field1));".format(tb_name))
    data = create_external_con(con_str, query)  # <== THIS IS WHERE THAT QUERY IS INSERTED
    device_id_set = set()
    with local_con:
        for row in data:
            device_id_set.add(row[1])  # add, not update: update(row[1]) would insert each character
            local_cur.execute("INSERT INTO table2(id, field1, field2, field3, \
                               field4) VALUES (?,?,?,?,?);", (row))
        lid = local_cur.lastrowid
    print("Number of rows inserted into table2: {}".format(lid))
Any help is very much appreciated!
Edit
This is probably the right solution to my problem; however, when I try to use "SET SESSION max_allowed_packet=104857600" I get the error: SESSION variable 'max_allowed_packet' is read-only. Use SET GLOBAL to assign the value (1621). When I then try to change SESSION to GLOBAL, I get an access denied message.
Insert the IDs into a (temporary) table in the same database, and then use:
... WHERE e.ID IN (SELECT ID FROM TempTable)
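A sketch of that approach, assuming the external MySQL connection exposes a pyodbc-style cursor (the names here are illustrative, not from the question):
def query_with_temp_table(ext_cur, ids):
    # Stage the IDs server-side instead of inlining a huge IN() list
    ext_cur.execute("DROP TEMPORARY TABLE IF EXISTS tmp_ids")
    ext_cur.execute("CREATE TEMPORARY TABLE tmp_ids (id VARCHAR(64) PRIMARY KEY)")
    ext_cur.executemany("INSERT INTO tmp_ids (id) VALUES (?)",
                        [(id_,) for id_ in ids])
    ext_cur.execute("""SELECT e.id, e.field2, e.field3
                       FROM table e
                       WHERE e.id IN (SELECT id FROM tmp_ids)""")
    return ext_cur.fetchall()
The temporary table lives only for the session, so nothing needs cleaning up afterwards.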