This is somewhat related to my earlier question:
Reading A Big File With Python
The problem there was runtime, so I was advised to use an sqlite3 database, which cut the time down to milliseconds, and I am very happy with that. The only problem I have now is connecting to the different database files in the same folder. All the database files have the same tables.
The code I am using reads only the first database and doesn't seem to check the others.
The expected output is: when the teacher enters a student ID, the program returns the related records if they are found in the database table.
My code is something like this. I am sure I am doing something wrong; pardon me if it's a silly mistake, as I am using sqlite3 for the first time.
# other code above, not related to this part
databases = []
directory = "./Databases"
for filename in os.listdir(directory):
    flname = os.path.join(directory, filename)
    databases.append(flname)

for database in databases:
    conn = sqlite3.connect(database)
    conn.text_factory = str
    cur = conn.cursor()
    sqlqry = "SELECT * FROM tbl_1 WHERE std_ID='%s';" % (studentID)
    try:
        c = cur.execute(sqlqry)
        data = c.fetchall()
        for i in data:
            print "[INFO] RECORD FOUND"
            print "[INFO] STUDENT ID: " + i[1]
            print "[INFO] STUDENT NAME: " + i[2]
            # and some other info
        conn.close()
    except sqlite3.Error as e:
        print "[INFO] " + str(e)
Thanks for any guidance.
@Whiskey, sometimes it helps to break the problem down into a minimal example and see if that works, or where it breaks. Since you are able to see the database names being printed as they are opened, my guess would be a problem with the query, or possibly with the data in the db, even though the records seem to be there. When you say it doesn't find the record you're looking for, does it print out nothing, or does it print the "[INFO]" line in your exception handler?
I put together the following minimal example, and it seems to work as far as my understanding of your problem goes. My only other piece of advice to add to everyone else's would be to parameterize your query rather than using the raw input directly, to make your app a little more secure. Hope it helps:
import os, sqlite3

"""
Create the test databases:

sqlite3 Databases/test_db1.db
sqlite> CREATE TABLE foo ( id INTEGER NOT NULL, name VARCHAR(100), PRIMARY KEY (id) );
sqlite>

sqlite3 Databases/test_db2.db
sqlite> CREATE TABLE foo ( id INTEGER NOT NULL, name VARCHAR(100), PRIMARY KEY (id) );
sqlite> INSERT INTO foo VALUES (2, 'world');
"""

databases = []
student_id = 2
directory = "./Databases"
for filename in os.listdir(directory):
    flname = os.path.join(directory, filename)
    databases.append(flname)

for database in databases:
    try:
        with sqlite3.connect(database) as conn:
            conn.text_factory = str
            cur = conn.cursor()
            sqlqry = "SELECT * FROM foo WHERE id=:1;"
            c = cur.execute(sqlqry, [student_id])
            for row in c.fetchall():
                print "-- found: %s=%s" % (row[0], row[1])
    except sqlite3.Error, err:
        print "[INFO] %s" % err
Related
I am inserting JSON data into a MySQL database.
I am parsing the JSON and then inserting it into a MySQL db using the Python connector.
Through trial and error, I can see the error is associated with this piece of code:
for steps in result['routes'][0]['legs'][0]['steps']:
    query = ('SELECT leg_no FROM leg_data WHERE travel_mode = %s AND Orig_lat = %s AND Orig_lng = %s AND Dest_lat = %s AND Dest_lng = %s AND time_stamp = %s')
    if steps['travel_mode'] == "pub_tran":
        travel_mode = steps['travel_mode']
        Orig_lat = steps['var_1']['dep']['lat']
        Orig_lng = steps['var_1']['dep']['lng']
        Dest_lat = steps['var_1']['arr']['lat']
        Dest_lng = steps['var_1']['arr']['lng']
        time_stamp = leg['_sent_time_stamp']
    if steps['travel_mode'] == "a_pied":
        query = ('SELECT leg_no FROM leg_data WHERE travel_mode = %s AND Orig_lat = %s AND Orig_lng = %s AND Dest_lat = %s AND Dest_lng = %s AND time_stamp = %s')
        travel_mode = steps['travel_mode']
        Orig_lat = steps['var_2']['lat']
        Orig_lng = steps['var_2']['lng']
        Dest_lat = steps['var_2']['lat']
        Dest_lng = steps['var_2']['lng']
        time_stamp = leg['_sent_time_stamp']
    cursor.execute(query, (travel_mode, Orig_lat, Orig_lng, Dest_lat, Dest_lng, time_stamp))
    leg_no = cursor.fetchone()[0]
    print(leg_no)
I have inserted the higher-level details and am now searching the database to associate this lower-level information with its parent. The only way to find this unique value is to search via the origin and destination coordinates together with the time_stamp. I believe the logic is sound, and by printing the leg_no immediately after this section I can see values which at first inspection appear to be correct.
However, when added to the rest of the code, it causes subsequent sections, where more data is inserted using the cursor, to fail with this error:
raise errors.InternalError("Unread result found.")
mysql.connector.errors.InternalError: Unread result found.
The issue seems similar to MySQL Unread Result with Python.
Is the query too complex and in need of splitting, or is there another issue?
If the query is indeed too complex, can anyone advise how best to split it?
EDIT: As per @Gord's help, I've tried to dump any unread results:
cursor.execute(query, (leg_travel_mode, leg_Orig_lat, leg_Orig_lng, leg_Dest_lat, leg_Dest_lng))
leg_no = cursor.fetchone()[0]
try:
    cursor.fetchall()
except mysql.connector.errors.InterfaceError as ie:
    if ie.msg == 'No result set to fetch from.':
        pass
    else:
        raise
cursor.execute(query, (leg_travel_mode, leg_Orig_lat, leg_Orig_lng, leg_Dest_lat, leg_Dest_lng, time_stamp))
But I still get:
raise errors.InternalError("Unread result found.")
mysql.connector.errors.InternalError: Unread result found.
[Finished in 3.3s with exit code 1]
scratches head
EDIT 2: When I print ie.msg, I get:
No result set to fetch from
All that was required was for buffered to be set to true!
cursor = cnx.cursor(buffered=True)
The reason is that without a buffered cursor, the results are loaded "lazily", meaning that fetchone actually only fetches one row from the full result set of the query. When you use the same cursor again, it will complain that you still have n-1 results (where n is the number of rows in the result set) waiting to be fetched. With a buffered cursor, however, the connector fetches ALL rows behind the scenes, and you just take one from the connector, so the MySQL db won't complain.
I was able to recreate your issue. MySQL Connector/Python apparently doesn't like it if you retrieve multiple rows and don't fetch them all before closing the cursor or using it to retrieve some other stuff. For example:
import mysql.connector
cnxn = mysql.connector.connect(
    host='127.0.0.1',
    user='root',
    password='whatever',
    database='mydb')
crsr = cnxn.cursor()
crsr.execute("DROP TABLE IF EXISTS pytest")
crsr.execute("""
CREATE TABLE pytest (
    id INT(11) NOT NULL AUTO_INCREMENT,
    firstname VARCHAR(20),
    PRIMARY KEY (id)
    )
""")
crsr.execute("INSERT INTO pytest (firstname) VALUES ('Gord')")
crsr.execute("INSERT INTO pytest (firstname) VALUES ('Anne')")
cnxn.commit()
crsr.execute("SELECT firstname FROM pytest")
fname = crsr.fetchone()[0]
print(fname)
crsr.execute("SELECT firstname FROM pytest") # InternalError: Unread result found.
If you only expect (or care about) one row, then you can put a LIMIT on your query:
crsr.execute("SELECT firstname FROM pytest LIMIT 0, 1")
fname = crsr.fetchone()[0]
print(fname)
crsr.execute("SELECT firstname FROM pytest") # OK now
Or you can use fetchall() to get rid of any unread results after you have finished working with the rows you retrieved:
crsr.execute("SELECT firstname FROM pytest")
fname = crsr.fetchone()[0]
print(fname)
try:
    crsr.fetchall()  # fetch (and discard) remaining rows
except mysql.connector.errors.InterfaceError as ie:
    if ie.msg == 'No result set to fetch from.':
        # no problem, we were just at the end of the result set
        pass
    else:
        raise
crsr.execute("SELECT firstname FROM pytest") # OK now
cursor.reset() is really what you want.
fetchall() is not good because you may end up moving unnecessary data from the database to your client.
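A minimal sketch of that approach, reusing the pytest table and cnxn connection from the example above:
crsr = cnxn.cursor()
crsr.execute("SELECT firstname FROM pytest")
fname = crsr.fetchone()[0]  # read only the first row
crsr.reset()  # discard the unread remainder of the result set
crsr.execute("SELECT firstname FROM pytest")  # OK now, no "Unread result found."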
The problem is about the buffer: perhaps you disconnected from the previous MySQL connection and now it cannot perform the next statement. There are two ways to give a buffer to the cursor. First, only to the particular cursor, using the following command:
import mysql.connector
cnx = mysql.connector.connect()
# Only this particular cursor will buffer results
cursor = cnx.cursor(buffered=True)
Alternatively, you could enable buffer for any cursor you use:
import mysql.connector
# All cursors created from cnx2 will be buffered by default
cnx2 = mysql.connector.connect(buffered=True)
cursor = cnx2.cursor()
In case you were disconnected from MySQL, the latter works for you.
Enjoy coding!
If you want to get only one result from a query, and want to reuse the same connection for other queries afterwards, limit your SQL SELECT to one row by adding LIMIT 1 at the end of the query,
e.g. "SELECT field FROM table WHERE x=1 LIMIT 1;"
This method is also faster than using buffered=True.
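A quick sketch of the same idea with MySQL Connector/Python (the connection, table, and column names here are placeholders):
cursor = cnx.cursor()  # an ordinary, unbuffered cursor
cursor.execute("SELECT field FROM table_x WHERE x = 1 LIMIT 1")
row = cursor.fetchone()  # at most one row is produced, so nothing is left unread
cursor.execute("SELECT field FROM table_y")  # safe to reuse the same cursor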
Set the consume_results argument on the connect() method to True.
cnx = mysql.connector.connect(
    host="localhost",
    user="user",
    password="password",
    database="database",
    consume_results=True
)
Now, instead of throwing an exception, it basically does a fetchall().
Unfortunately, this is still slow if you have a lot of unread rows.
There is also the possibility that your connection to MySQL Workbench has been disconnected. Establish the connection again, then call
cursor.reset()
and then create the tables and load the entries. This solved the problem for me.
Would setting the cursor within the for loop, executing it, and then closing it again in the loop help?
Like:
for steps in result['routes'][0]['legs'][0]['steps']:
    cursor = cnx.cursor()
    ....
    leg_no = cursor.fetchone()[0]
    cursor.close()
    print(leg_no)
I want to use sqlite3 to deal with data on Ubuntu with Python, but I keep failing with errors. The database-related code is as follows:
sqlite = "%s.db" % name
#connect to the database
conn = sqlite3.connect(sqlite)
print "Opened database successfully"
c = conn.cursor()
#set default separator to "\t" in database
c.execute(".separator "\t"")
print "Set separator of database successfully"
#create table data_node
c.execute('''create table data_node(Time int,Node Text,CurSize int,SizeVar int,VarRate real,Evil int);''')
print "Table data_node created successfully"
node_info = "%s%s.txt" % (name,'-PIT-node')
c.execute(".import %\"s\" data_node") % node_info
print "Import to data_node successfully"
#create table data_face
data_info = "%s%s.txt" % (name,'-PIT-face')
c.execute('''create table data_face(Time int,Node Text,TotalEntry real,FaceId int,FaceEntry real,Evil int);''')
c.execute(".import \"%s\" data_face") % face_info
#get the final table : PIT_node
c.execute('''create table node_temp as select FIRST.Time,FIRST.Node,ROUND(FIRST.PacketsRaw/SECOND.PacketsRaw,4) as SatisRatio from tracer_temp FIRST,tracer_temp SECOND WHERE FIRST.Time=SECOND.Time AND FIRST.Node=SECOND.Node AND FIRST.Type='InData' AND SECOND.Type='OutInterests';''')
c.execute('''create table PIT_node as select A.Time,A.Node,B.SatisRatio,A.CurSize,A.SizeVar,A.VarRate,A.Evil from data_node A,node_temp B WHERE A.Time=B.Time AND A.Node=B.Node;''')
#get the final table : PIT_face
c.execute('''create table face_temp as select FIRST.Time,FIRST.Node,FIRST.FaceId,ROUND(FIRST.PacketsRaw/SECOND.PacketsRaw,4) as SatisRatio,SECOND.Packets from data_tracer FIRST,data_tracer SECOND WHERE FIRST.Time=SECOND.Time AND FIRST.Node=SECOND.Node AND FIRST.FaceId=SECOND.FaceId AND FIRST.Type='OutData' AND SECOND.Type='InInterests';''')
c.execute('''create table PIT_face as select A.Time,A.Node,A.FaceId,B.SatisRatio,B.Packets,ROUND(A.FaceEntry/A.TotalEntry,4),A.Evil from data_face as A,face_temp as B WHERE A.Time=B.Time AND A.Node=B.Node AND A.FaceId = B.FaceId;''')
conn.commit()
conn.close()
These SQL commands are right. When I run the code, it always shows sqlite3.OperationalError: near ".": syntax error. So how should I change my code, and are there other errors in other commands, such as create table?
You have many problems in your code as posted, but the one you're asking about is:
c.execute(".separator "\t"")
This isn't valid Python syntax. But even if you fix that, it's not valid SQL.
The "dot-commands" are special commands to the sqlite3 command-line shell. It intercepts them and uses them to configure itself. They mean nothing to the actual database and cannot be used from Python.
Most of them don't make any sense outside that shell anyway. For example, you're trying to set the column separator here. But the database doesn't return strings, it returns row objects (similar to lists), so there is nowhere for a separator to be used. If you want to print the rows out with tab separators, you have to do that in your own print statements.
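For example, something along these lines, reusing the cursor c from your code:
for row in c.execute("SELECT * FROM data_node"):
    print("\t".join(str(col) for col in row))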
So, the simple fix is to remove all of those dot-commands.
However, there is a catch: at least one of those dot-commands actually does something:
c.execute(".import %\"s\" data_node") % node_info
You will have to replace that with valid calls to the library that do the same thing as the .import dot-command. Read what it does, and it should be easy to understand. (You basically want to open the file, parse the columns for each row, and do an executemany on an INSERT with the rows, as sketched below.)
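A rough sketch of that replacement, reusing c, conn, and node_info from your code, and assuming the .txt file is tab-separated with exactly the six columns of data_node (the file layout is a guess from the question):
import csv
with open(node_info) as f:  # e.g. the "<name>-PIT-node.txt" file
    reader = csv.reader(f, delimiter="\t")  # plays the role of .separator "\t"
    rows = [tuple(r) for r in reader]
c.executemany("INSERT INTO data_node VALUES (?, ?, ?, ?, ?, ?)", rows)
conn.commit()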
I wanted to start using databases in Python. I chose PostgreSQL for the database. I have already created several databases, but now I simply want to check whether a database exists with Python. For this I read this answer: Checking if a postgresql table exists under python (and probably Psycopg2), and tried to use its solution:
import sys
import psycopg2

con = None
try:
    con = psycopg2.connect(database="testdb", user="test", password="abcd")
    cur = con.cursor()
    cur.execute("SELECT exists(SELECT * from information_schema.testdb)")
    ver = cur.fetchone()[0]
    print ver
except psycopg2.DatabaseError, e:
    print "Error %s" % e
    sys.exit(1)
finally:
    if con:
        con.close()
But unfortunately, I only get the output:
Error relation "information_schema.testdb" does not exist
LINE 1: SELECT exists(SELECT * from information_schema.testdb)
Am I doing something wrong, or did I miss something?
Your question confuses me a little, because you say you want to check whether a database exists, but you look in the information_schema.tables view. That view would tell you whether a table exists in the currently open database. If you want to check whether a database exists, assuming you have access to the 'postgres' database, you could do:
import sys
import psycopg2, psycopg2.extras

dbname = 'db_to_check_for_existance'
con = None
try:
    con = psycopg2.connect(database="postgres", user="postgres")
    cur = con.cursor(cursor_factory=psycopg2.extras.DictCursor)
    cur.execute("select * from pg_database where datname = %(dname)s", {'dname': dbname})
    answer = cur.fetchall()
    if len(answer) > 0:
        print "Database {} exists".format(dbname)
    else:
        print "Database {} does NOT exist".format(dbname)
except Exception, e:
    print "Error %s" % e
    sys.exit(1)
finally:
    if con:
        con.close()
What is happening here is that you are looking in the system table pg_database. The column datname contains each of the database names. Your code would supply db_to_check_for_existance as the name of the database you want to check for. For example, you could replace that value with 'postgres' and you would get the 'exists' answer; if you replace the value with 'aardvark' you would probably get the 'does NOT exist' report.
If you're trying to see if a database exists:
curs.execute("SELECT exists(SELECT 1 from pg_catalog.pg_database where datname = %s)", ('mydb',))
It sounds like you may be confused by the difference between a database and a table.
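For contrast, a sketch of the table-level check the original query seems to have been aiming at (the table name 'mytable' is a placeholder):
curs.execute("SELECT exists(SELECT 1 FROM information_schema.tables WHERE table_name = %s)", ('mytable',))
print(curs.fetchone()[0])  # True if a table with that name exists in the current database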
The code below hangs (even Ctrl-C won't stop it; I have to close the terminal) when it tries to create the second table, and I'm not sure why. The first table is created successfully (I can see it in psql with \dt cyanobacteria.*). A simple solution would be to rename the table, but I'm trying to restore someone else's code to working order, and I'd have to go through it changing lots of things. And he had it working once, so it ought to work for me!
I've created a database called 'genomes', a user called 'genomes_admin', and a schema called 'cyanobacteria'. Then I try to make some tables:
#!/usr/bin/python
import psycopg2
psql = "dbname='genomes' user='genomes_admin'"
schm = 'cyanobacteria'
conn = psycopg2.connect(psql)
cur = conn.cursor()
cur.execute('''SET search_path TO %s''', (schm,))
conn.commit()
cur.execute('''CREATE TABLE IF NOT EXISTS testnm(blah text, length int) ''')
print 'created testnm'
conn.commit()
print 'committed'
cur.execute('''CREATE TABLE IF NOT EXISTS genomes(blah text, length int) ''') # hangs here
print 'created genomes' # this line never executes
conn.commit()
print 'committed'
cur.close()
conn.close()
I have created a table using this CREATE command:
CREATE TABLE test_table(id INT PRIMARY KEY, name VARCHAR(50), price INT)
I want to insert into this table values that are already stored in variables:
bookdb=# name = 'algorithms'
bookdb-# price = 500
bookdb-# INSERT INTO test_table VALUES(1,'name',price);
I get the following error:
ERROR: syntax error at or near "name"
LINE 1: name = 'algorithms'
Can anyone point out the mistake and propose a solution for the above?
Thanks in advance.
Edit:
import psycopg2
import file_content

try:
    conn = psycopg2.connect(database='bookdb', user='v22')
    cur = conn.cursor()
    cur.execute("DROP TABLE IF EXISTS book_details")
    cur.execute("CREATE TABLE book_details(id INT PRIMARY KEY,name VARCHAR(50),price INT)")
    cur.execute("INSERT INTO book_details VALUES(1,'name',price)")
    conn.commit()
except:
    print "unable to connect to db"
I have used the above code to insert values into the table. The variables name and price, containing the values to be inserted, are available in the file_content Python file, which I have imported. A normal INSERT statement takes values written out by hand, but I want my code to take the values stored in those variables.
SQL does not support the concept of variables.
To use variables, you must use a programming language such as Java, C, or Xojo. One such language is PL/pgSQL, which you can think of as a superset of SQL. PL/pgSQL is often bundled as part of Postgres installers, but not always.
I suggest you read some basic tutorials on SQL.
See this similar question: How do you use script variables in PostgreSQL?
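That said, since the code is already calling Postgres from Python, psycopg2 can bind Python variables for you, so SQL-level variables aren't needed. A minimal sketch, with connection details copied from the question and example values assumed:
import psycopg2
name = 'algorithms'  # e.g. values imported from file_content
price = 500
conn = psycopg2.connect(database='bookdb', user='v22')
cur = conn.cursor()
# psycopg2 fills in the %s placeholders safely; no manual quoting needed
cur.execute("INSERT INTO test_table VALUES (%s, %s, %s)", (1, name, price))
conn.commit()
conn.close()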
I don't have Postgres installed here, but you can try this:
import psycopg2
import file_content

try:
    conn = psycopg2.connect(database='bookdb', user='v22')
    cur = conn.cursor()
    cur.execute("DROP TABLE IF EXISTS book_details")
    cur.execute("CREATE TABLE book_details(id INT PRIMARY KEY,name VARCHAR(50),price INT)")
    # name and price live in the imported file_content module
    cur.execute("INSERT INTO book_details VALUES(1, '%s', %s)" % (file_content.name, file_content.price))
    conn.commit()
except:
    print "unable to connect to db"
If you are using the psql console:
\set name 'algo'
\set price 10
insert into test_table values (1, :'name', :price)
\g