I have a Python script that establishes a database connection and runs an insert statement. Below is the code I have written. The program runs without errors, but when I open MySQL and run a SELECT statement, there is no response from the database; it stays stuck for a long time.
import MySQLdb
dsn_database = "project"
dsn_hostname = "localhost"
dsn_port = 3306
dsn_uid = "root"
dsn_pwd = "pwd"
conn = MySQLdb.connect(host=dsn_hostname, port=dsn_port, user=dsn_uid, passwd=dsn_pwd, db=dsn_database)
conn.query("""DROP TABLE IF EXISTS cars""")
conn.query("""CREATE TABLE cars(Id INTEGER PRIMARY KEY, Name VARCHAR(20), Price INT)""")
conn.query("""INSERT INTO cars VALUES(1,'Audi',52642)""")
I have created a Python program (with Python 3 and the mysql.connector library) that updates the value of a column in a MySQL DB. When I run SELECT * FROM table_name in Python, the value appears to have changed, but when I run the same command in MySQL Workbench it shows me the table with no changes applied.
Here is my code:
import mysql.connector

db = mysql.connector.connect(
    host = "IP address",
    user = "user",
    passwd = "password"
)
mycursor = db.cursor()
mycursor.execute("USE db_name;")
mycursor.execute('UPDATE table_name SET column = value WHERE condition;')
mycursor.execute('SELECT * FROM table_name;')
print(mycursor.fetchone())
As I mentioned, when I run SELECT * FROM table_name in Python it looks like the changes have been applied, but when I run it in MySQL Workbench no changes show up. Does anybody know what the problem is?
Try adding db.commit() before print(mycursor.fetchone()):
db.commit()
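In context, a short sketch reusing the placeholder statements from the question (placing the commit directly after the UPDATE works just as well):
mycursor.execute('UPDATE table_name SET column = value WHERE condition;')
mycursor.execute('SELECT * FROM table_name;')
db.commit()  # persist the UPDATE so other clients such as MySQL Workbench can see it
print(mycursor.fetchone())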
I'm trying to figure out why I can't access a particular table in a PostgreSQL database using psycopg2. I am running PostgreSQL 11.5.
If I do this, I can connect to the database in question and read all the tables in it:
import psycopg2
try:
    connection = psycopg2.connect(user = "postgres", # psycopg2.connect() creates a connection to the PostgreSQL database instance
                                  password = "battlebot",
                                  host = "127.0.0.1",
                                  port = "5432",
                                  database = "BRE_2019")
    cursor = connection.cursor() # creates a cursor object which lets us execute PostgreSQL commands from Python
    # List the tables in the public schema
    cursor.execute("""SELECT table_name FROM information_schema.tables
                      WHERE table_schema = 'public'""")
    for table in cursor.fetchall():
        print(table)
except (Exception, psycopg2.Error) as error:
    print("Error while connecting to PostgreSQL: ", error)
The results look like this:
('geography_columns',)
('geometry_columns',)
('spatial_ref_sys',)
('raster_columns',)
('raster_overviews',)
('nc_avery_parcels_poly',)
('Zone5e',)
('AllResidential2019',)
#....etc....
The table I am interested in is the last one, 'AllResidential2019'.
So I try to connect to it and print the contents by doing the following:
try:
    connection = psycopg2.connect(user = "postgres",
                                  # psycopg2.connect() creates a connection to the PostgreSQL database instance
                                  password = "battlebot",
                                  host = "127.0.0.1",
                                  port = "5432",
                                  database = "BRE_2019")
    cursor = connection.cursor() # creates a cursor object which lets us execute PostgreSQL commands from Python
    cursor.execute("SELECT * FROM AllResidential2019;") # executes a database query; takes the SQL string as a parameter
    record = cursor.fetchall() # returns all result rows as a list
    print(record)
except (Exception, psycopg2.Error) as error:
    print("Error while connecting to PostgreSQL: ", error)
And I get the following error:
Error while connecting to PostgreSQL: relation "allresidential2019" does not exist
LINE 1: SELECT * FROM AllResidential2019;
However, I can successfully connect and get results when querying a table in another database I have (this works, and the results are the data in that table):
try:
    connection = psycopg2.connect(user = "postgres", # psycopg2.connect() creates a connection to the PostgreSQL database instance
                                  password = "battlebot",
                                  host = "127.0.0.1",
                                  port = "5432",
                                  database = "ClimbingWeatherApp") # different database name
    cursor = connection.cursor()
    cursor.execute("SELECT * FROM climbing_area_info ;")
    record = cursor.fetchall()
    print(record)
except (Exception, psycopg2.Error) as error:
    print("Error while connecting to PostgreSQL: ", error)
I can't figure out why I can retrieve information from one table but not the other, using exactly the same code (except that the names are changed). I am also not sure how to troubleshoot this. Can anyone offer suggestions?
Your table name is case-sensitive and you have to enclose it in double quotes:
SELECT * FROM "AllResidential2019";
In Python program it may look like this:
cursor.execute('SELECT * FROM "AllResidential2019"')
or you can use psycopg2's specialized sql module for SQL string composition:
from psycopg2 import sql
# ...
cursor.execute(sql.SQL("SELECT * FROM {}").format(sql.Identifier('AllResidential2019')))
Note that case-sensitive Postgres identifiers (i.e. names of tables, columns, views, functions, etc.) unnecessarily complicate simple matters. I would advise you not to use them.
Likely, the reason for your issue is Postgres' quoting rules, which adhere to the ANSI SQL standard regarding double-quoted identifiers. When creating the table, you likely quoted its name:
CREATE TABLE "AllResidential2019" (
...
)
Because the quoted name contains at least one capital letter, it is case-sensitive, and you must always quote it when referencing the table. Do remember: single and double quotes have different meanings in SQL, as opposed to being mostly interchangeable in Python.
SELECT * FROM "AllResidential2019"
DELETE FROM "AllResidential2019" ...
ALTER TABLE "AllResidential2019" ...
It is often recommended, if your table, column, or other identifier does not contain special characters, spaces, or reserved words, to always use lower case or no quotes:
CREATE TABLE "allresidential2019" (
...
)
CREATE TABLE AllResidential2019 (
...
)
Doing so, any combination of upper and lower case will work:
SELECT * FROM ALLRESIDENTIAL2019
SELECT * FROM aLlrEsIdEnTiAl2019
SELECT * FROM "allresidential2019"
See further reading on the subject:
Omitting the double quote to do query on PostgreSQL
PostgreSQL naming conventions
Postgres Docs - 4.1.1. Identifiers and Key Words
Don’t use double quotes in PostgreSQL
What is the difference between single and double quotes in SQL?
I was facing the same error in Ubuntu. But in my case, I had accidentally added the tables to the wrong database, which was in turn owned by the root postgres user instead of the new postgres user that I had created for my Flask app.
I'm using a SQL file to create and populate the tables. This is the command I used to create the tables from the .sql file; it allows you to specify the owner of the tables as well as the database in which they should be created:
sudo -u postgres psql -U my_user -d my_database -f file.sql -h localhost
You will then be prompted for my_user's password.
sudo -u postgres is only necessary if you are running this from a terminal as the root user. It basically runs the psql ... command as the postgres user.
I am writing to and reading from a SQL database with try/except blocks. The thought behind the try/except is that if, for some reason, the internet is down or we cannot connect to the server, we will write the SQL statements locally to a text file and later use those statements to update the table. That being said, the try/except only seems to work if there is a connection to the server. We have a table BAR in the DB database on server FOO:
import pyodbc

try:
    conn = pyodbc.connect('DRIVER={SQL Server};SERVER=FOO;DATABASE=DB;UID=user;PWD=password')
    cursor = conn.cursor()
    cursor.execute("UPDATE BAR SET Date = '"+time+"' WHERE ID = "+ID)
    conn.commit()
except:
    f = open("vistorlog.txt", "a")
    f.write("UPDATE BAR SET Date = '"+time+"' WHERE ID = "+ID+"\n")
    f.close()
The only instance where this try/except works is when there is an issue with the SQL statement itself, e.g. "UPDATE BARS..." fails because there is no table named BARS. If I change the server to FOOS (or, in a real-life scenario, unplug the ethernet cord and leave the table/server names legitimate), the try/except doesn't work: the program freezes with no error.
I have looked at similar questions but nothing has worked for me so far
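One likely explanation: an unreachable server is not an immediate error, because the ODBC driver keeps waiting until its login timeout expires, which looks like a freeze rather than an exception. A possible mitigation, sketched below under that assumption, is to cap the wait with pyodbc's timeout keyword (it sets the ODBC login timeout in seconds); the connection string and file name are taken from the question, and the helper name is hypothetical.
import pyodbc

def log_or_run_update(time, ID):
    sql = "UPDATE BAR SET Date = '" + time + "' WHERE ID = " + ID
    try:
        # timeout=5 caps the login wait at 5 seconds instead of the driver default
        conn = pyodbc.connect('DRIVER={SQL Server};SERVER=FOO;DATABASE=DB;'
                              'UID=user;PWD=password', timeout=5)
        cursor = conn.cursor()
        cursor.execute(sql)
        conn.commit()
        conn.close()
    except Exception:
        # server unreachable or statement failed: queue the statement locally
        with open("vistorlog.txt", "a") as f:
            f.write(sql + "\n")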
So here it is. I want to update my table through a Python script, using the cx_Oracle module. I can execute a SELECT query, but whenever I try to execute an UPDATE query, my program just hangs (freezes). I realize that I need to call con.commit() after cursor.execute() when updating a table, but my code never gets that far. I have added a code snippet below that I am using to debug.
Any suggestions??
Code
import cx_Oracle
def getConnection():
ip = '127.0.0.1'
port = 1521
service_name = 'ORCLCDB.localdomain'
username = 'username'
password = 'password'
dsn = cx_Oracle.makedsn(ip, port, service_name=service_name) # (CONNECT_DATA=(SERVICE_NAME=ORCLCDB.localdomain)))
return cx_Oracle.connect(username, password, dsn) # connection
def debugging():
con = getConnection()
print(con)
cur = con.cursor()
print('Updating')
cur.execute('UPDATE EMPLOYEE SET LATITUDE = 53.540943 WHERE EMPLOYEEID = 1')
print('committing')
con.commit()
con.close()
print('done')
debugging()
Here is the corresponding output:
<cx_Oracle.Connection to username#(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=127.0.0.1)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=ORCLCDB.localdomain)))>
Updating
Solution
After a bit of poking around, I found the underlying cause! I had made changes to the table using Oracle SQL Developer but had not committed them, so that session still held a lock on the affected rows; when the Python script tried to change the same table, it froze waiting for the lock. To avoid the freeze, I committed my changes in Oracle SQL Developer before running the Python script, and it worked fine!
Do you have any way to look inside the database? To understand whether the problem is in the Python program or not, we need to check v$session in the database to see whether something is blocked.
select sid, event, last_call_et, status from v$session where sid = xxx
where xxx is the SID of the session opened by the Python program.
By the way, I would commit explicitly right after the cursor's execute call:
cur.execute('UPDATE EMPLOYEE SET LATITUDE = 53.540943 WHERE EMPLOYEEID = 1')
con.commit()
Hope it helps
Best
I'm writing a Python script which uses a MySQL DB. I want to secure the MySQL connection so that the database will not be accessible to local users. Is there a good way to accomplish this?
#!/usr/bin/python
import MySQLdb
# Open database connection
db = MySQLdb.connect("localhost","username","password","db")
# prepare a cursor object using cursor() method
cursor = db.cursor()
# Prepare SQL statement to CREATE a table in the database.
sql = """CREATE TABLE IF NOT EXISTS users(id INTEGER PRIMARY KEY, userid VARCHAR(30),
         activity_id VARCHAR(30), date_time VARCHAR(30), screenshot_filename VARCHAR(255),
         screenshot_md5 VARCHAR(255), num_clicks INT, num_of_mouse_movements INT, num_pause INT);"""
try:
    # Execute the SQL command
    cursor.execute(sql)
    # Commit your changes in the database
    db.commit()
except:
    # Rollback in case there is any error
    db.rollback()
# disconnect from server
db.close()
MySQL supports SSL connections, similar to HTTPS (the secure connection between a web server and a web browser). This keeps your data from being readable by other users on the network. The client code needs to be modified to make the secure connection, as follows (excerpted from http://www.mysqlperformanceblog.com/2013/06/22/setting-up-mysql-ssl-and-secure-connections/):
[root@centos6 ~]# cat mysql-ssl.py
#!/usr/bin/env python
import MySQLdb
ssl = {'cert': '/etc/mysql-ssl/client-cert.pem', 'key': '/etc/mysql-ssl/client-key.pem'}
conn = MySQLdb.connect(host='127.0.0.1', user='ssluser', passwd='pass', ssl=ssl)
cursor = conn.cursor()
cursor.execute('SHOW STATUS like "Ssl_cipher"')
print cursor.fetchone()
The standard way to do this is to have the web/application server access the database server over a private (local) network, rather than over a public network (Internet).
If the database server is on the same machine as the web/application server, you can host the database server on the loopback IP address (127.0.0.1), which is only directly accessible from the same machine.