Unable to truncate table in postgres - python

I am trying to truncate a table in my Postgres DB using a python script:
conn = get_psql_conn()
cursor = conn.cursor()
cursor.execute("""TRUNCATE table table_name;""")
cursor.close()
conn.close()
Nothing happens.
The script finishes quickly, no error is raised.
The table still has its rows.
I was able to execute other queries with no problem using the same setup.
I'd appreciate it if anyone can point out my mistake here!
Thanks
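A likely cause, given the same pattern in the answers below: psycopg2 does not autocommit by default, so the TRUNCATE is rolled back when the connection closes without a commit(). A minimal sketch of that fix, reusing the same get_psql_conn() helper:
conn = get_psql_conn()
cursor = conn.cursor()
cursor.execute("""TRUNCATE TABLE table_name;""")
conn.commit()    # persist the TRUNCATE; without this psycopg2 rolls it back on close
cursor.close()
conn.close()
Alternatively, setting conn.autocommit = True right after connecting makes every statement commit immediately.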

Related

PostgreSQL : All queries running slow except in PgAdmin or Dbeaver

I tried to run a Python program that queries our database.
Unfortunately, every query I run with psycopg2 is very, very slow.
As an example, you can see in the picture that the same query took 47 ms in DBeaver and takes more than 3 minutes in Python!
In the past I tried to move from DBeaver to the Oracle client, but all my queries there were so slow that I decided to stay on DBeaver.
But scripting and querying the database is a requirement for my project.
Here is an example of the table I am querying, "bex":
ID  Name    code  code_acr
1   Paris   PAR   PAR
2   Dijon   DIJ   DIJ
3   Brest   BRS   BRT
4   Toulon  TLN   TLN
Here is the code I am using in Python:
import psycopg2

conn = None
try:
    conn = psycopg2.connect(
        host="xxxxx.sogate-pacy.xxxxxx.fr",
        dbname="xxxxxx",
        user="xxxxxxx",
        password="<xxxxx>",
        port="5432",
        options="-c search_path=xxx",
        sslmode="disable"
    )
    cursor = conn.cursor()
    postgreSQL_select_Query = "SELECT * FROM bex"
    cursor.execute(postgreSQL_select_Query)
    ouvrage = cursor.fetchone()
    print("Print each row and its column values")
    print(cursor.fetchone())
except (Exception, psycopg2.Error) as error:
    print("Error while fetching data from PostgreSQL", error)
finally:
    # closing database connection.
    if conn:
        cursor.close()
        conn.close()
        print("PostgreSQL connection is closed")
I tried to make a Python script to get data from the database.
Note that this table has only 10 rows in total, and this happens even if I do a SELECT that returns only one row.
Your Python code fetches the entire bex table into your Python process's memory, then processes the first row and throws the rest away. pgAdmin 4 and DBeaver, by contrast, both use cursors (or something equivalent to them) to fetch only a small number of rows until you do something that calls for more. You can use a psycopg2 "named cursor" to get the same behavior in your own Python code as you get with pgAdmin 4.
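A minimal sketch of that approach, reusing the connection settings above (the cursor name and batch size are arbitrary):
import psycopg2

conn = psycopg2.connect(
    host="xxxxx.sogate-pacy.xxxxxx.fr",
    dbname="xxxxxx",
    user="xxxxxxx",
    password="<xxxxx>",
    options="-c search_path=xxx",
)

# A named cursor is a server-side cursor: rows are fetched from the server
# in batches of itersize instead of being loaded into client memory all at once.
cursor = conn.cursor(name="bex_cursor")
cursor.itersize = 100              # rows per round trip (arbitrary)
cursor.execute("SELECT * FROM bex")

print(cursor.fetchone())           # only a small batch has been transferred

cursor.close()
conn.close()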

Bulk Insert into SQL Server with Python not working

I'm attempting to bulk insert a csv into a table in SQL server. The catch is, the data doesn't match the columns of the destination table. The destination table has several audit columns that are not found in the source file. The solution I found for this is to insert into a view instead. The code is pretty simple:
from sqlalchemy import create_engine
engine = create_engine('mssql+pyodbc://[DNS]')
conn = engine.connect()
sql = "BULK INSERT [table view] FROM '[source file path]' WITH (FIELDTERMINATOR = ',',ROWTERMINATOR = '\n')"
conn.execute(sql)
conn.close()
When I run the SQL statement inside of SSMS it works perfectly. When I try to execute it from inside a Python script, the script runs but no data winds up in the table. What am I missing?
Update: It turns out bulk inserting into a normal table doesn't work either.
Before closing the connection, you need to call commit() or the SQL actions will be rolled back on connection close.
conn.commit()
conn.close()
It turns out that instead of using SQLAlchemy, I had to use pypyodbc. Not sure why this worked and the other way didn't. Example code found here: How to Speed up with Bulk Insert to MS Server from Python with Pyodbc from CSV
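A minimal sketch of that route, keeping the placeholders from the question ([DNS], [table view], [source file path]) and adding the explicit commit so the bulk insert is not rolled back; pypyodbc exposes the same DB-API interface as pyodbc:
import pypyodbc

conn = pypyodbc.connect("DSN=[DNS]")
cursor = conn.cursor()

sql = ("BULK INSERT [table view] FROM '[source file path]' "
       "WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\\n')")
cursor.execute(sql)
conn.commit()   # without this, the inserted rows are discarded when the connection closes

cursor.close()
conn.close()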
This works for me, after checking the SQLAlchemy transactions reference. I don't explicitly call conn.commit(), as
The block managed by each .begin() method has the behavior such that the transaction is committed when the block completes.
with engine.begin() as conn:
conn.execute(sql_bulk_insert)
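Putting it together, a sketch of the original bulk insert inside such a block (in SQLAlchemy 1.4+ the raw SQL string should be wrapped in text()):
from sqlalchemy import create_engine, text

engine = create_engine('mssql+pyodbc://[DNS]')

sql_bulk_insert = text(
    "BULK INSERT [table view] FROM '[source file path]' "
    "WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\\n')"
)

# engine.begin() opens a transaction and commits it when the block
# exits without an exception (it rolls back otherwise).
with engine.begin() as conn:
    conn.execute(sql_bulk_insert)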

Can't create table lock in mssql from python

I'm using ceODBC to connect to sql-server 2014 from a centos 6 box from python 2.7.9.
In a critical part of our code, after inserting rows into a table, I want to double-check that all rows have arrived safely. I want to do this because sometimes an error happens but ceODBC does not raise it, and the table ends up empty.
To make sure that no other part of the code does any inserts between inserting the data and running the count query, I want to lock the table. This is where I have my problem. There is a built-in sp_getapplock in SQL Server, but when I do the following:
import ceodbc
conn = # Make connection here
cursor = conn.cursor()
cursor.execute("declare #result int; exec #result = sp_getapplock #Resource='Dim_Date', #LockMode='Exclusive'; select #result").fetchall()
The result is sometimes 0 and sometimes -999, but the table is never locked for other connections.
Does anyone know what I'm doing wrong?
(I added the pyodbc tag because I think the two drivers are similar.)
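For what it's worth, sp_getapplock defaults to @LockOwner='Transaction', so it returns -999 when called with no transaction open (for example, with autocommit on) and releases the lock as soon as the transaction ends. A sketch with pyodbc (per the tag), holding the lock until the final commit; the DSN is a placeholder:
import pyodbc

conn = pyodbc.connect("DSN=mydsn", autocommit=False)   # driver keeps an open transaction
cursor = conn.cursor()

# Take an exclusive app lock owned by the current transaction.
result = cursor.execute(
    "set nocount on; "
    "declare @result int; "
    "exec @result = sp_getapplock @Resource='Dim_Date', @LockMode='Exclusive', @LockOwner='Transaction'; "
    "select @result"
).fetchone()[0]
print("sp_getapplock returned", result)   # >= 0 means the lock was granted

# ... insert rows and run the COUNT(*) check here, on this same connection ...

conn.commit()   # committing (or rolling back) releases the app lock
conn.close()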

How to disable query cache with mysql.connector

I'm connecting to MySQL from my Kivy application.
import mysql.connector
con = mysql.connector.Connect(host='XXX', port=XXX, user='XXX', password='XXX', database='XXX')
cur = con.cursor()
db = cur.execute("""SELECT SQL_NO_CACHE * FROM abc""")
data = cur.fetchall()
print (data)
After inserting into or deleting from table abc from another connection, I run the same query in Python, but the data is not updated.
I added "SET SESSION query_cache_type = OFF;" before the select query, but it didn't work. Someone said a "select NOW() ..." query is not cacheable, but that didn't work either. What should I do?
I solved this by adding the following after fetchall():
con.commit()
Calling the same select query without doing a commit won't update the results.
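A minimal end-to-end sketch of that pattern (host, credentials and database name are placeholders):
import mysql.connector

con = mysql.connector.connect(host='XXX', port=3306, user='XXX',
                              password='XXX', database='XXX')
cur = con.cursor()

cur.execute("SELECT * FROM abc")
print(cur.fetchall())

# End the read transaction; the next SELECT starts a fresh snapshot
# and therefore sees rows committed by other connections in the meantime.
con.commit()

cur.execute("SELECT * FROM abc")
print(cur.fetchall())

cur.close()
con.close()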
The solution is to use one of the following.
Once, right after connecting:
con.autocommit = True
Or, after each select query:
con.commit()
With autocommit enabled, there is a commit after each query.
Otherwise, subsequent selects will keep returning the same stale result.
This error seems to be Bug #42197 related to Query cache and auto-commit in MySQL. The status is won't fix!
In a few months, this should be irrelevant because MySQL 8.0 is dropping Query Cache.
I encountered the same problem that was solved above and used the same method:
conn.commit()
I also found that different DBMSs behave differently; not all of them show this connection-level caching behavior.
Alternatively, try this:
conn.autocommit = True
This will auto-commit after each of your select queries.
The MySQL query cache is flushed when tables are modified, so it wouldn't have that effect. It's impossible to say without seeing the rest of your code, but it's most likely that your INSERT / DELETE query is failing to run.

MySQLdb.cursors.Cursor.execute does not work

I have done the following:
import MySQLdb as mdb
con = mdb.connect(hostname, username, password, dbname)
cur = con.cursor()
count = cur.execute(query)
cur.close()
con.close()
I have two queries; when I execute them in the mysql console, I can view the results.
But when I run the same through Python, one query works and the other does not.
I am sure it is not a problem with MySQL, the query, or the Python code. I suspect the cur.execute(query) function.
Has anyone come across a similar situation? Any solutions?
Use conn.commit() after execution to commit/finish insertion- and deletion-based changes.
I have two queries, I execute them in the mysql console I can view the results.
But I only see one query:
import MySQLdb as mdb
con = mdb.connect(hostname, username, password, dbname)
cur = con.cursor()
count = cur.execute(query)
cur.close()
con.close()
My guess is that query contains both queries separated by a semicolon and is an INSERT statement? You probably need to use executemany().
See Executing several SQL queries with MySQLdb
On the other hand, if both of your queries are SELECT statements (you say "I see the result"), I'm not sure you can fetch both results from only one call to execute(). I would consider that bad style, anyway.
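A minimal sketch of the executemany() approach with MySQLdb, assuming the workload is really one parameterized INSERT run with several parameter tuples (the table and column names here are made up):
import MySQLdb as mdb

con = mdb.connect(hostname, username, password, dbname)
cur = con.cursor()

# One parameterized statement, executed once per parameter tuple.
cur.executemany(
    "INSERT INTO mytable (col1, col2) VALUES (%s, %s)",
    [("a", 1), ("b", 2)],
)
con.commit()   # make the inserts permanent

cur.close()
con.close()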
This is a function and the query is passed to this function. When I execute one query after the other, I don't get the result for a few queries. There is no problem with the queries themselves, because I have cross-checked them in the mysql console.
As you clarified your question in a comment, I'm posting another answer -- a completely different approach.
Are you connected to your DB in autocommit mode? If not, for changes to be permanently applied you have to COMMIT them. In normal circumstances you shouldn't create a new connection for each request; that puts excessive load on the DB server for almost nothing:
# Open a connection once
con = mdb.connect(hostname, username, password, dbname)

# Do that *for each query*:
cur = con.cursor()
try:
    count = cur.execute(query)
    con.commit()  # don't forget to commit the transaction
    print "DONE:", query  # for "debug" -- in a real app you might have an except clause here instead
finally:
    cur.close()  # close anyway

# Do that *for each query*:
cur = con.cursor()
try:
    count = cur.execute(query)
    con.commit()  # don't forget to commit the transaction
    print "DONE:", query  # for "debug" -- in a real app you might have an except clause here instead
finally:
    cur.close()  # close anyway

# Close *the* connection
con.close()
The above code was typed directly into SO. Please forgive typos and other basic syntax errors, but that's the spirit of it.
One last word: while typing, I was wondering how you deal with exceptions. By any chance, could the MySQLdb error be silently ignored at some upper level of your program?
Use executemany(); this will update multiple rows of a column in one query:
cursor.executemany("UPDATE `table` SET `col1` = %s WHERE `col2` = %s",
                   [(col1_val1, col2_val1), (col1_val2, col2_val2)])
Also commit to the database to see the changes:
conn.commit()
