Inserting a file to MS SQL Server through Python

I am quite new to programming. I have written the following code by researching on Stack Overflow and other sites. I am trying to upload a CSV file to MS SQL Server. Every time I run this it connects, and then a message pops up: 'Previous SQL was not a query'. I am not sure how to tackle this. Any suggestions and help will be appreciated.
import pyodbc
import _csv

source_path = r'C:\Users\user\Documents\QA Canvas\module2\Module 2 Challenge\UFO_Merged.csv'
source_expand = open(source_path, 'r')
details = source_expand.readlines
print('Connecting...')
try:
    conn = pyodbc.connect(r'DRIVER={ODBC Driver 13 for SQL Server};'
                          r'SERVER=FAHIM\SQLEXPRESS;'
                          r'DATABASE=Ash;'
                          r'Trusted_Connection=yes')
    print('Connected')
    cur = conn.cursor()
    print('Cursor established')
    sqlquery = """
    IF EXISTS
    (
        SELECT TABLE_NAME, TABLE_SCHEMA FROM INFORMATION_SCHEMA.TABLES
        WHERE TABLE_NAME = 'UFO_MERGED' AND TABLE_SCHEMA = 'dbo'
    )
    BEGIN
        DROP TABLE [dbo].[UFO_MERGED]
    END
    CREATE TABLE [dbo].[UFO_MERGED]
    ( [ID] smallint
     ,[COMMENTS] varchar(max)
     ,[FIRST OCCURANCE] datetime
     ,[CITY] varchar(60)
     ,[COUNTRY] varchar(20)
     ,[SHAPE] varchar(20)
     ,[SPEED] smallint
     ,[SECOND OCCURANCE] datetime
     ,PRIMARY KEY(id)
    ) ON [PRIMARY]
    """
    result = cur.execute(sqlquery).fetchall()
    for row in result:
        print(row)
    print("{} rows returned".format(len(result)))
    sqlstr = """
    Insert into [dbo].[UFO_Merged] values ('()','()','()','()','()','()','()','()')
    """
    for row in details[1:]:
        row_data = row.split(',')
        sqlquery = sqlstr.format(row_data[0], row_data[1], row_data[2], row_data[3],
                                 row_data[4], row_data[5], row_data[6], row_data[7])
        result = cur.execute(sqlquery)
    conn.commit()
    conn.close()
except Exception as inst:
    if inst.args[0] == '08001':
        print("Cannot connect to the server")
    elif inst.args[0] == '28000':
        print("Login failed - check connection string")
    else:
        print(inst)

Well, make sure the SQL works first, before you try to introduce other technologies (Python, R, C#, etc.) on top of it. The SQL looks a little funky, but I'm not a SQL expert, so I can't say for sure, and I don't have time to recreate your setup on my machine. Maybe you can try with something a bit less complex, get that working, and then graduate to something more advanced. Does the following work for you?
import pyodbc

user = 'sa'
password = 'PC#1234'
database = 'climate'
port = '1433'
TDS_Version = '8.0'
server = '192.168.1.103'
driver = 'FreeTDS'
con_string = 'UID=%s;PWD=%s;DATABASE=%s;PORT=%s;TDS=%s;SERVER=%s;driver=%s' % (
    user, password, database, port, TDS_Version, server, driver)
cnxn = pyodbc.connect(con_string)
cursor = cnxn.cursor()
cursor.execute("INSERT INTO mytable(name,address) VALUES (?,?)", ('thavasi', 'mumbai'))
cnxn.commit()
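For the CSV loop in the original question, a parameterized insert is the safer pattern; string-formatting values into SQL breaks as soon as a field contains a quote or comma. A minimal sketch, assuming the UFO_MERGED table and the details list from the question (note that readlines needs parentheses to actually return the lines):
insert_sql = "INSERT INTO [dbo].[UFO_MERGED] VALUES (?,?,?,?,?,?,?,?)"
details = source_expand.readlines()
for row in details[1:]:
    row_data = row.rstrip('\n').split(',')
    cur.execute(insert_sql, row_data[:8])
conn.commit()
pyodbc substitutes the ? markers itself and quotes each value. Also, only call fetchall() after a statement that returns rows (a SELECT); calling it after DDL or an INSERT is what raises 'Previous SQL was not a query'.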

Related

How to commit stored procedure execution by using pyodbc

I am trying to execute a stored procedure using pyodbc in Databricks. After executing the SP I try to commit the connection, but the commit is not happening. Here is my code; please help me out with this issue.
import pyodbc

#### Connecting to Azure SQL
def db_connection():
    try:
        username = "starsusername"
        password = "password-db"
        server = "server-name"
        database_name = "db-name2"
        port = "db-port"
        conn = pyodbc.connect('Driver={ODBC Driver 17 for SQL Server};SERVER=tcp:' + server + ',' + port + ';DATABASE=' + database_name + ';UID=' + username + ';PWD=' + password)
        cursor = conn.cursor()
        return cursor, conn
    except Exception as e:
        print("Failed to connect to Azure SQL: \n" + str(e))

cursor, conn = db_connection()
# conn.autocommit = True
cursor.execute("delete from db.table_name")
cursor.execute("insert into db.table_name(BUSINESS_DATE) values('2021-10-02')")
cursor.execute("exec db.SP_NAME '20211023'")
conn.commit()
conn.close()
Here I am committing the connection after the SP execution, but the deletion and insertion are not happening at all. I also tried cursor.execute("SET NOCOUNT ON; exec db.SP_NAME '20211023'"), but that does not work either.
Thanks in advance.
If you check the pyodbc documentation, you will find that:
To call a stored procedure right now, pass the call to the execute method using either a format your database recognizes or using the ODBC call escape format. The ODBC driver will then reformat the call for you to match the given database.
Note that after the connection is set up, setting conn.autocommit = True before calling your SP will help. By default it is False.
Executing the Stored Procedure
You will be able to execute your stored procedure if you follow the code snippet below.
cursor = conn.cursor()
conn.autocommit = True
executesp = """EXEC yourstoredprocedure """
cursor.execute(executesp)
conn.commit()
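Alternatively, the ODBC call escape format mentioned in the documentation quote above can pass the parameter separately (the SP name and argument here are taken from the question):
cursor.execute("{CALL db.SP_NAME (?)}", ('20211023',))
conn.commit()
The driver reformats the escaped call into whatever syntax the target database expects.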
Delete the Records in SQL Server
You can delete records as shown in the example below.
...#just an example
cursor.execute('''
DELETE FROM product
WHERE product_id in (5,6)
''')
conn.commit()
Don’t forget to add conn.commit() at the end of the code, to ensure that the command gets executed.
Insert record in SQL Server
The snippet below shows how to do the same.
...#just an example
cursor.execute("INSERT INTO EMP (EMPNO, ENAME, JOB, MGR) VALUES (535, 'Scott', 'Manager', 545)")
conn.commit()
I suggest you read the following documents for more information:
Delete Record Documentation
Insert Record Documentation

Is there a way to insert dataframe into mysql using pymysql?

I have this:
import sys
import pymysql
import pymysql.cursors

host = "localhost"
port = 3306
user = "db"
password = 'pass'
db = 'test'
charset = 'utf8mb4'
cursorclass = pymysql.cursors.DictCursor

try:
    connection = pymysql.connect(host=host, port=port, user=user, password=password,
                                 db=db, charset=charset, cursorclass=cursorclass)
    Executor = connection.cursor()
except Exception as e:
    print(e)
    sys.exit()
I tried using pandas' to_sql(), but it replaces the values in the table with the latest ones. I want to insert the values into the table using pandas, but I want to avoid duplicate entries: if a row already exists it should just be skipped.
It might be possible to pickle the dataframe and insert it into a table under a column of type BLOB. If you go this way, you'd have to unpickle the result returned by mysqld.
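As a sketch of that idea (the blobs table and its doc column are hypothetical, not from the question):
import pickle

payload = pickle.dumps(df)  # serialize the whole dataframe to bytes
curs = connection.cursor()
curs.execute("INSERT INTO blobs (doc) VALUES (%s)", (payload,))
connection.commit()

# reading it back:
curs.execute("SELECT doc FROM blobs")
df_roundtrip = pickle.loads(curs.fetchone()['doc'])
Note the cursor here is a DictCursor (as configured in the question), so rows come back as dicts.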
EDIT: I see what you are trying to do now. Here is a possible solution. Let me know if it works!
# assume you have declared df and connection
records = df.to_dict(orient='records')
columns = ', '.join(records[0].keys())
placeholders = ', '.join(['%s'] * len(records[0]))
sql = "INSERT INTO mytable ({0}) VALUES ({1})".format(columns, placeholders)
curs = connection.cursor()
for record in records:
    try:
        curs.execute(sql, list(record.values()))
    except Exception:
        break  # handle/research the error
curs.close()
connection.commit()
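To actually skip duplicates rather than stop at the first error (which is what the question asks for), MySQL's INSERT IGNORE variant can be used, assuming the table has a PRIMARY KEY or UNIQUE constraint on the relevant columns:
sql = "INSERT IGNORE INTO mytable ({0}) VALUES ({1})".format(columns, placeholders)
Rows that would violate the key constraint are then silently skipped instead of raising an error.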

How can I sort a database by date?

I am creating a Python app that will store my homework in a database (managed with PhpMyAdmin). Here comes my problem:
At the moment, I store every entry with an ID (1, 2, 3, 4...), a date (23/06/2018...), and a task (read one chapter of a book). Now I would like to sort them by date, because when I check what I have to do, I would prefer to see first the tasks that are due soonest. For example:
If I have two tasks, one for 25/07/2018 and the other for 11/07/2018, I would like to see the 11/07/2018 one first, no matter that it was added later than the 25/07/2018 one. I am using Python (3.6), pymysql and PhpMyAdmin to manage the database.
I had an idea to get this working: maybe I could run a Python script every 2 hours that sorts all the elements in the database, but I have no clue how to do it.
Below is the code that enters the values into the database and then shows them all.
def dba():
    connection = pymysql.connect(host='localhost',
                                 user='root',
                                 password='Adminhost123..',
                                 db='deuresc',
                                 charset='utf8mb4',
                                 cursorclass=pymysql.cursors.DictCursor)
    try:
        with connection.cursor() as cursor:
            # Create a new record
            sql = "INSERT INTO `deures` (`data`, `tasca`) VALUES (%s, %s)"
            cursor.execute(sql, (data, tasca))
        # connection is not autocommit by default, so you must commit to save your changes
        connection.commit()
        with connection.cursor() as cursor:
            # Read back the record that was just inserted
            sql = "SELECT * FROM `deures` WHERE `data`=%s"
            cursor.execute(sql, (data,))
            resultat = cursor.fetchone()
            print('Has introduït: ' + str(resultat))
    finally:
        connection.close()

def dbb():
    connection = pymysql.connect(host='localhost',
                                 user='root',
                                 password='Adminhost123..',
                                 db='deuresc',
                                 charset='utf8mb4',
                                 cursorclass=pymysql.cursors.DictCursor)
    try:
        with connection.cursor() as cursor:
            # Read all records
            sql = "SELECT * FROM `deures`"
            cursor.execute(sql)
            resultat = cursor.fetchall()
            for i in resultat:
                print(i)
    finally:
        connection.close()
Can someone help?
You don't sort the database. You sort the results of the query when you ask for data. So in your dbb function you should do:
SELECT * FROM `deures` ORDER BY `data`
assuming that data is the field with the date.
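Applied to the dbb function from the question, only the SELECT changes:
sql = "SELECT * FROM `deures` ORDER BY `data`"
One caveat (a general MySQL point, not specific to this question): ORDER BY sorts chronologically only if `data` is a DATE or DATETIME column. If the dates are stored as text in DD/MM/YYYY form, they are compared as strings; in that case either store them as DATE, or sort with STR_TO_DATE:
sql = "SELECT * FROM `deures` ORDER BY STR_TO_DATE(`data`, '%d/%m/%Y')"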

Python mysql memory leak in insertion

I'm inserting millions of rows into MySQL using Python 3, and I found that the memory usage keeps growing until it finally reaches 64GB. I tried to diagnose the problem, and here is a reproduction: say I have 100 CSV files. Each file contains 50000 rows and I want to insert them into the database. Here is a sample code:
import mysql.connector

insert_sql = "INSERT INTO table (Value) VALUES (%s)"
for i in range(100):
    cnx = mysql.connector.connect(user='root', password='password', host='127.0.0.1', database='database')
    cursor = cnx.cursor()
    # Insert 50000 rows here
    for j in range(50000):
        cursor.execute(insert_sql, (j,))
    cnx.commit()
    cursor.close()
    cnx.close()
    print('Finished processing one file')
print('All done')
The database contains only 1 table with 2 columns:
CREATE TABLE `table` (
  `Id` int(11) NOT NULL AUTO_INCREMENT,
  `Value` int(11) NOT NULL,
  PRIMARY KEY (`Id`)
)
Environment: Mac OS Sierra; Python 3.6.x; MySQL 8.0.1; mysql-connector-python 8.0.11
I understand that the memory should grow before committing because the changes are buffered. But I supposed it would decrease after committing. However, it doesn't. Since in my real application I have thousands of files of 100MB each, my memory will blow up.
Did I do anything wrong here? (I'm new to database) How can I keep the memory usage under control? Any suggestion will be appreciated!
Edit: I also tried the following code according to the comments and answers but it still doesn't work:
import mysql.connector

insert_sql = "INSERT INTO table (Value) VALUES (%s)"
for i in range(100):
    cnx = mysql.connector.connect(user='root', password='password', host='127.0.0.1', database='database')
    cursor = cnx.cursor()
    params = [(j,) for j in range(50000)]
    # If I don't execute the following insertion, the memory is stable.
    cursor.executemany(insert_sql, params)
    cnx.commit()
    cursor.close()
    del cursor
    cnx.close()
    del cnx
    print('Finished processing one file')
print('All done')
Try batch execution; the loop of single-row inserts might be the problem.
You can use executemany:
cursor.executemany("INSERT INTO table (Value) VALUES (%s)",
                   [('a',), ('b',)])
or build one big INSERT statement with all the values you want at the same time.
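If memory is the concern, a chunked variant of the question's edited code keeps the parameter list small while still batching (the chunk size of 1000 is an arbitrary choice):
chunk = 1000
params = [(j,) for j in range(50000)]
for start in range(0, len(params), chunk):
    cursor.executemany(insert_sql, params[start:start + chunk])
    cnx.commit()
Each executemany call then only holds one chunk of parameters at a time.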

sqlite3: storing received data in a Python chat

This is simple chat code in Python. I want to keep receiving data and store the IP, host, and message each time, but as it is, it only records once, not every time. How can I resolve this?
(I am using SQLite3.)
while True:
    data = conn.recv(1024)
    cur = con.cursor()
    cur.execute("CREATE TABLE amo(IP INT, data TEXT)")
    cur.execute("INSERT INTO amo VALUES(?,?)", (HOST, data))
Duarte, you have very nearly answered your own question: you are re-creating the table on each loop iteration. Separate out your logic:
# establish your database connection and create the table, if it does not already exist
... (create your db connection here) ...
cur = con.cursor()
cur.execute("CREATE TABLE IF NOT EXISTS amo(IP INT, data TEXT)")

# open your chat connection, and store the data
while True:
    ... (chat data) ...
    cur.execute("INSERT INTO amo VALUES (?, ?)", (HOST, data))
    con.commit()
You only have to create the table once; after that it lives in the sqlite3 db. You can establish a connection to the db at the start of your script and manipulate the data in the db to your heart's content after that.
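As a side note (standard sqlite3 behaviour, not something the original answer mentions): the connection object can be used as a context manager, so each insert is committed automatically on success:
while True:
    ... (chat data) ...
    with con:
        con.execute("INSERT INTO amo VALUES (?, ?)", (HOST, data))
Connection.execute is a shortcut that creates the cursor for you, and leaving the with block commits the transaction (or rolls it back if an exception occurred).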
