Python MySQL CREATE and INSERT in one query - python

I am using Python to connect to MySQL in XAMPP. I am creating a string and passing it to a new process through a queue. The string contains a MySQL query which, on execution, will create a table and insert into the table. The following is my code:
from MySQLdb import connect
from os import _exit
from multiprocessing import Process, Queue

q = Queue()

def pkt():
    conn = connect(user='root', passwd='257911.', host='localhost',
                   unix_socket="/opt/lampp/var/mysql/mysql.sock")
    cursor = conn.cursor()
    conn.select_db("try")
    while True:
        y = q.get()
        if y == "exit":
            break
        else:
            cursor.execute(y)
            conn.commit()
    cursor.close()
    conn.close()
    _exit(0)

if __name__ == "__main__":
    a = Process(target=pkt)
    a.start()
    query = "CREATE TABLE hello(id varchar(10) NOT NULL,name varchar(20)); INSERT INTO hello(id,name) VALUES('1234','sujata'); "
    q.put(query)
    q.put("exit")
Upon executing the code, I get the following error:
Traceback (most recent call last):
File "/usr/lib64/python2.7/multiprocessing/process.py", line 258, in _bootstrap
self.run()
File "/usr/lib64/python2.7/multiprocessing/process.py", line 114, in run
self._target(*self._args, **self._kwargs)
File "try.py", line 16, in pkt
conn.commit()
ProgrammingError: (2014, "Commands out of sync; you can't run this command now")
I get the same error when inserting into multiple tables in one query. Is it not possible to combine CREATE and INSERT in one statement?
Thanks.

No, you cannot send multiple semicolon-separated queries in the same string. This is because MySQLdb is a wrapper around _mysql which makes it compatible with the Python DB API interface (read more about the API here). The API works via PREPARE/EXECUTE statements, and according to this,
The text must represent a single statement, not multiple statements.
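A minimal workaround sketch under that constraint, reusing the connection details and query text from the question: split the combined string on ';' and execute each statement separately. The split is naive and only safe here because none of the SQL literals contain a semicolon.

from MySQLdb import connect

conn = connect(user='root', passwd='257911.', host='localhost',
               unix_socket="/opt/lampp/var/mysql/mysql.sock")
conn.select_db("try")
cursor = conn.cursor()

query = ("CREATE TABLE hello(id varchar(10) NOT NULL,name varchar(20)); "
         "INSERT INTO hello(id,name) VALUES('1234','sujata');")

# execute one statement at a time instead of the whole multi-statement string
for statement in (s.strip() for s in query.split(';')):
    if statement:
        cursor.execute(statement)
conn.commit()
cursor.close()
conn.close()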

I needed to close the cursor after execution. This works for me:
from MySQLdb import connect
from os import _exit
from multiprocessing import Process, Queue

q = Queue()

def pkt():
    conn = connect(user='root', host='localhost',
                   unix_socket="/home/mysql/mysql/mysql.sock")
    cursor = conn.cursor()
    conn.select_db("try")
    while True:
        y = q.get()
        if y == "exit":
            break
        else:
            cursor.execute(y)
            # closing the cursor discards the pending result sets left over
            # from the multi-statement string before commit() is issued
            cursor.close()
            conn.commit()
    conn.close()
    _exit(0)

if __name__ == "__main__":
    a = Process(target=pkt)
    a.start()
    query = "CREATE TABLE hello(id varchar(10) NOT NULL,name varchar(20)); INSERT INTO hello(id,name) VALUES('1234','sujata'); "
    q.put(query)
    q.put("exit")
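An alternative sketch, under the assumption that closing the cursor works here because MySQLdb silently discards the remaining result sets when a cursor is closed: explicitly draining them with nextset() should bring the connection back in sync without giving up the cursor. Only the else branch of the worker loop changes:

        else:
            cursor.execute(y)
            # drain any extra result sets produced by the multi-statement string
            while cursor.nextset():
                pass
            conn.commit()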

Related

mysql-connector-python InterfaceError: Failed getting warnings when executing a query with multiple statements with get_warnings=True

Using Python 3.7, I execute a query against a MySQL database, with multiple statements, with get_warnings enabled:
import mysql.connector

cnx = mysql.connector.connect(host='xxx',
                              user='xxx',
                              password='xxx',
                              database='xxx',
                              use_pure=False,
                              get_warnings=True)

# Test 1, works:
cur = cnx.cursor()
cur.execute('SELECT "a"+1')
for row in cur:
    print(row)
print(cur.fetchwarnings())
cur.close()

# Test 2, InterfaceError:
cur = cnx.cursor()
for rs in cur.execute('SELECT "a"+1; SELECT 2', multi=True):
    for row in rs:
        print(row)
    print(rs.fetchwarnings())
The first test executes a single statement, iterates over the cursor, fetches data, and finally prints warnings. Output as expected:
(1.0,)
[('Warning', 1292, "Truncated incorrect DOUBLE value: 'a'")]
The second test (you can remove the first test altogether) will execute print(row) once, and then an exception occurs. Output:
Traceback (most recent call last):
File "C:\Program Files\Python37\lib\site-packages\mysql\connector\connection_cext.py", line 472, in cmd_query
raw_as_string=raw_as_string)
_mysql_connector.MySQLInterfaceError: Commands out of sync; you can't run this command now
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Program Files\Python37\lib\site-packages\mysql\connector\cursor_cext.py", line 138, in _fetch_warnings
_ = self._cnx.cmd_query("SHOW WARNINGS")
File "C:\Program Files\Python37\lib\site-packages\mysql\connector\connection_cext.py", line 475, in cmd_query
sqlstate=exc.sqlstate)
mysql.connector.errors.DatabaseError: 2014 (HY000): Commands out of sync; you can't run this command now
During handling of the above exception, another exception occurred:
....etc....
Did anyone encounter the same problem? How did you solve it? What am I doing wrong? Could this be a bug in the connector?
Other things I've tried:
If you set get_warnings to False, no error happens and fetchwarnings() returns None.
If you remove the problem from the SQL code, no error happens and fetchwarnings() returns None.
use_pure can be True or False; the only difference is a slightly different traceback.
Using fetchall() instead of "for row in rs" gives the same result.
Many other variations give the same error.
System:
Connector version is mysql-connector-python-8.0.17 but 8.0.16 has the same issue.
Python 3.7.3 (v3.7.3:ef4ec6ed12, Mar 25 2019, 22:22:05) [MSC v.1916 64 bit (AMD64)] on win32
MySQL 5.7
The "Commands out of sync" is because MySQL client interface calls are performed in a wrong order. This is not a bug in the connector. This is expected behavior.
Executing that first SELECT returns a MySQL resultset.
Before the client issues another statement that returns a MySQL resultset, we have to do something with the resultset that has already been returned. That is, there need to be calls to either mysql_use_result and mysql_free_result, or a call to mysql_store_result. Once the client does that, it can execute another SQL statement that returns a result.
(Note that the execution of the MySQL SHOW WARNINGS statement returns a MySQL resultset.)
Again, this is expected behavior, as documented here:
https://dev.mysql.com/doc/refman/8.0/en/commands-out-of-sync.html
The references to mysql_free_result, mysql_store_result and mysql_use_result aren't specific to a Python interface; these reference the underlying library routines in the MySQL client code. e.g. https://dev.mysql.com/doc/refman/8.0/en/mysql-use-result.html
FOLLOWUP
I suspect the author of the MySQL Python connector didn't anticipate this use case, or if it was anticipated, the observed behavior was judged to be correct.
As far as avoiding the problem, I would avoid the use of multi=True and do a separate execute for each SQL statement. Following the same pattern as in Test 1, we can add an outer loop to iterate through the SQL statements:
# Test 1.2
sqls = ['SELECT "a"+1', 'SELECT 2', ]
for sql in sqls:
    cur = cnx.cursor()
    cur.execute(sql)
    for row in cur:
        print(row)
    print(cur.fetchwarnings())
    cur.close()
Another option would be to avoid the call to fetchwarnings. That call is what causes the SHOW WARNINGS statement to be executed (it is only issued after first verifying that the warning count is greater than zero). We can issue SHOW WARNINGS statements separately and loop through their results as if they were the return from a SELECT.
# Test 1.3
cur = cnx.cursor()
for rs in cur.execute('SELECT "a"+1; SHOW WARNINGS; SELECT 2; SHOW WARNINGS', multi=True):
    for row in rs:
        print(row)
cur.close()

AttributeError: module 'odbc' has no attribute 'connect' - python with pydev

I am very new to Python and I just can't seem to find an answer to this error. When I run the code below I get the error:
AttributeError: module 'odbc' has no attribute 'connect'
However, the error only shows in Eclipse. There's no problem if I run it via the command line. I am running Python 3.5. What am I doing wrong?
try:
    import pyodbc
except ImportError:
    import odbc as pyodbc

# Specifying the ODBC driver, server name, database, etc. directly
cnxn = pyodbc.connect('DRIVER={SQL Server};SERVER=PXLstr,17;DATABASE=Dept_MR;UID=guest;PWD=password')
The suggestion to remove the try...except block did not work for me. Now the actual import is throwing the error as below:
Traceback (most recent call last):
File "C:\Users\a\workspace\TestPyProject\src\helloworld.py", line 2, in <module>
import pyodbc
File "C:\Users\a\AppData\Local\Continuum\Anaconda3\Lib\site-packages\sqlalchemy\dialects\mssql\pyodbc.py", line 105, in <module>
from .base import MSExecutionContext, MSDialect, VARBINARY
I do have pyodbc installed, and the import and connect work fine from the command line on Windows.
Thank you.
The problem here is that the pyodbc module is not importing in your try / except block. I would highly recommend not putting import statements in try blocks. First, you would want to make sure you have pyodbc installed (pip install pyodbc), preferably in a virtualenv, then you can do something like this:
import pyodbc

cnxn = pyodbc.connect('DRIVER={SQL Server};SERVER=PXLstr,17;DATABASE=Dept_MR;UID=guest;PWD=password')
cursor = cnxn.cursor()
cursor.execute('SELECT 1')
for row in cursor.fetchall():
    print(row)
If you're running on Windows (it appears so, given the DRIVER= parameter), take a look at virtualenvwrapper-win for managing Windows Python virtual environments: https://pypi.python.org/pypi/virtualenvwrapper-win
Good luck!
Flipper's answer helped to establish that the problem was a reference to an incorrect library in the External Libraries list in Eclipse. After fixing it, the issue was resolved.
What is the name of your Python file? If you inadvertently named it 'pyodbc.py', you will get that error, because the script tries to import itself instead of the intended pyodbc module.
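A quick sketch for checking whether your own file is shadowing the real module: print where Python actually imported pyodbc from.

import pyodbc
# should point into site-packages, not your own project folder
print(pyodbc.__file__)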
Here is a solution: simply install and use 'pypyodbc' instead of 'pyodbc'.
My tested example is below. Change DRIVER, SERVER_NAME and DATABASE_NAME to your own values, and supply your own records. Good luck!
import sys
import pypyodbc as odbc

records = [
    ['x', 'Movie', '2020-01-09', 2020],
    ['y', 'TV Show', None, 2019]
]

DRIVER = 'ODBC Driver 11 for SQL Server'
SERVER_NAME = '(LocalDB)\MSSQLLocalDB'
DATABASE_NAME = 'D:\ASPNET\SHOJA.IR\SHOJA.IR\APP_DATA\DATABASE3.MDF'

conn_string = f"""
    Driver={{{DRIVER}}};
    Server={SERVER_NAME};
    Database={DATABASE_NAME};
    Trust_Connection=yes;
"""

try:
    conn = odbc.connect(conn_string)
except Exception as e:
    print(e)
    print('task is terminated')
    sys.exit()
else:
    cursor = conn.cursor()

insert_statement = """
    INSERT INTO NetflixMovies
    VALUES (?, ?, ?, ?)
"""

try:
    for record in records:
        print(record)
        cursor.execute(insert_statement, record)
except Exception as e:
    cursor.rollback()
    print(e.value)
    print('transaction rolled back')
else:
    print('records inserted successfully')
    cursor.commit()
    cursor.close()
finally:
    if conn.connected == 1:
        print('connection closed')
        conn.close()

Use cx_Oracle and multiprocessing to query data concurrently

All,
I am trying to access and process a large chunk of data from an Oracle database, so I used the multiprocessing module to spawn 50 processes to access the database. To avoid opening 50 physical connections, I tried to use session pooling from cx_Oracle. The code looks like the below, but I always get an unpickling error. I know cx_Oracle has pickling issues, but I thought I could get around them by using a global variable. Could anyone help?
import sys
import cx_Oracle
import os
from multiprocessing import Pool

# Read a list of ids from the input file
def ReadList(inputFile):
    ............

def GetText(applId):
    global sPool
    connection = sPool.acquire()
    cur = connection.cursor()
    cur.prepare('Some Query')
    cur.execute(None, appl_id=applId)
    result = cur.fetchone()
    title = result[0]
    abstract = result[2].read()
    sa = result[3].read()
    cur.close()
    sPool.release(connection)
    return (title, abstract, sa)

if __name__ == '__main__':
    inputFile = sys.argv[1]
    ids = ReadList(inputFile)
    dsn = cx_Oracle.makedsn('xxx', ...)
    sPool = cx_Oracle.SessionPool(....., min=1, max=10, increment=1)
    pool = Pool(10)
    results = pool.map(GetText, ids)
Exception in thread Thread-2:
Traceback (most recent call last):
File "/usr/lib/python2.6/threading.py", line 525, in __bootstrap_inner
self.run()
File "/usr/lib/python2.6/threading.py", line 477, in run
self.__target(*self.__args, **self.__kwargs)
File "/usr/lib/python2.6/multiprocessing/pool.py", line 282, in _handle_results
task = get()
UnpicklingError: NEWOBJ class argument has NULL tp_new
How are you expecting 50 processes to use the same, intra-process-managed DB connection (pool)?!
First of all, your code results in the error "NameError: global name 'sPool' is not defined", therefore sPool = cx_Oracle.SessionPool(....., min=1, max=10, increment=1) must be placed above def GetText(applId):
For me, this code starts to work properly after changing from multiprocessing import Pool to from multiprocessing.dummy import Pool and adding the parameter threaded=True to the cx_Oracle.SessionPool call, i.e. sPool = cx_Oracle.SessionPool(....., min=1, max=10, increment=1, threaded=True)
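A minimal sketch combining both changes. The credentials, DSN, table and query are hypothetical placeholders standing in for the values elided in the question:

import sys
import cx_Oracle
from multiprocessing.dummy import Pool   # thread pool instead of process pool

# placeholder connection details -- replace with real ones
dsn = cx_Oracle.makedsn('dbhost.example.com', 1521, 'orcl')
sPool = cx_Oracle.SessionPool('user', 'password', dsn,        # defined before GetText
                              min=1, max=10, increment=1,     # so the worker sees it
                              threaded=True)

def GetText(applId):
    connection = sPool.acquire()
    try:
        cur = connection.cursor()
        # hypothetical query standing in for 'Some Query'
        cur.execute('SELECT title FROM some_table WHERE appl_id = :appl_id',
                    appl_id=applId)
        row = cur.fetchone()
        cur.close()
        return row
    finally:
        sPool.release(connection)

if __name__ == '__main__':
    ids = [line.strip() for line in open(sys.argv[1])]
    pool = Pool(10)
    results = pool.map(GetText, ids)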

Error while importing file into DB2 from python script

I am getting the below error while trying to import a ^-delimited file into a DB2 database using Python 2.4.3.
Error:
Traceback (most recent call last):
File "C:\Python25\Usefulscripts\order.py", line 89, in <module>
load_order_stack()
File "C:\Python25\Usefulscripts\order.py", line 75, in load_order_stack
conn2.execute(importTmp)
ProgrammingError: ('42601', '[42601] [IBM][CLI Driver][DB2/LINUXX8664] SQL0104N An unexpected token "orders_extract"
was found following "import from ".
Code:
import pyodbc

def load_order_stack():
    try:
        conn2 = pyodbc.connect('DSN=db2Database;UID=ueserid;PWD=password')
        importTmp = ("import from orders_extract of del modified by coldel0x5E"
                     "insert_update into test.ORDERS_Table (ORDER_ID,item,price);")
        conn2.execute(importTmp)
        conn2.commit()
IMPORT is not an SQL statement. It is a DB2 Command Line Processor (CLP) command and as such can only be run by the said CLP.
There is an SQL interface to some CLP commands via calls to the ADMIN_CMD() stored procedure, please check the manual: IMPORT using ADMIN_CMD
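A minimal sketch of going through ADMIN_CMD() instead, reusing the DSN from the question. The file path is an assumption: ADMIN_CMD runs the IMPORT on the server, so it must be a path visible to the DB2 server, not the client.

import pyodbc

conn = pyodbc.connect('DSN=db2Database;UID=ueserid;PWD=password')
cursor = conn.cursor()

# the CLP command text is passed to ADMIN_CMD as a single string argument
import_cmd = ("IMPORT FROM /tmp/orders_extract.del OF DEL MODIFIED BY COLDEL0x5E "
              "INSERT_UPDATE INTO test.ORDERS_Table (ORDER_ID, item, price)")
cursor.execute("CALL SYSPROC.ADMIN_CMD(?)", [import_cmd])
conn.commit()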
You also have the option of reading the file line by line and inserting into your database. This will definitely be slower than any native import operation. Assuming the file is named input.txt and your delimited file structure is:
ORDER_ID^item^price
1^'bat'^50.00
2^'ball'^25.00
Code:
import csv
import pyodbc

connection = pyodbc.connect('DSN=db2Database;UID=ueserid;PWD=password')
cursor = connection.cursor()

with open('input.txt', 'rb') as f:
    rows = csv.reader(f, delimiter='^')
    # get column names from header in first line
    columns = ','.join(next(rows))
    for row in rows:
        # build sql with placeholders for insert
        placeholders = ','.join('?' * len(row))
        sql = 'insert into test.ORDERS_Table ({}) values ({});'.format(columns, placeholders)
        # execute parameterized database insert
        cursor.execute(sql, row)
cursor.commit()
Play around with commit() placement, you probably want to commit in batches to improve performance.
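For example, a minimal sketch of batching, assuming the same cursor, columns and rows objects as above and an arbitrary illustrative batch size of 1000:

batch_size = 1000
for i, row in enumerate(rows, start=1):
    placeholders = ','.join('?' * len(row))
    sql = 'insert into test.ORDERS_Table ({}) values ({});'.format(columns, placeholders)
    cursor.execute(sql, row)
    if i % batch_size == 0:
        cursor.commit()   # flush a completed batch
cursor.commit()           # commit whatever remains of the last batch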

Psycopg2 concurrency issue

I am trying to share a psycopg2 connection between multiple threads. As mentioned in the docs, I am doing that by creating new cursor objects from the shared connection whenever I use it in a new thread.
def delete(conn):
    while True:
        conn.commit()

def test(conn):
    cur = conn.cursor()
    thread.start_new_thread(delete, (conn,))
    i = 1
    while True:
        cur.execute("INSERT INTO mas(taru,s) values (2,%s)", (i,))
        print i
        i = i + 1
        conn.commit()
After running, I get output like:
1
2
...
98
99
Traceback (most recent call last):
File "postgres_test_send.py", line 44, in <module>
cur.execute("INSERT INTO mas(taru,s) values (2,%s)",(i,))
psycopg2.InternalError: SET TRANSACTION ISOLATION LEVEL must be called before any query
What's going on here?
The bug is not present in the most recent psycopg2 versions: it was probably fixed in 2.4.2.
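A minimal sketch, assuming the fix landed in 2.4.2 as stated above, for checking which psycopg2 version is installed before digging further (the version string in the comment is only illustrative):

import psycopg2

print(psycopg2.__version__)   # e.g. '2.9.9 (dt dec pq3 ext lob64)'
major, minor, patch = (int(p) for p in psycopg2.__version__.split()[0].split('.')[:3])
if (major, minor, patch) < (2, 4, 2):
    print('psycopg2 is older than 2.4.2; consider upgrading: pip install --upgrade psycopg2')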
