Closing an SQLObject Connection - python

Is it possible to manually close an SQLObject Connection once it has been opened? I am trying to delete a database file once it has been used, but it seems that the open connection to the database file is stopping me from doing so.
For example:
from sqlobject import *
import os
# Create and open connection to a database file.
sqlhub.processConnection = connectionForURI('sqlite:path_to_db')
SomeObject.createTable()
# ...
# Delete database when finished.
os.remove('path_to_db')
Gives the following error:
WindowsError: [Error 32] The process cannot access the file because
it is being used by another process: 'path_to_db'

It seems that just calling .close() on the database connection does the trick:
from sqlobject import *
import os
# Create and open connection to a database file.
sqlhub.processConnection = connectionForURI('sqlite:path_to_db')
#do something with connection
pass
#close connection
sqlhub.processConnection.close()
#delete database
os.remove('path_to_db')
I could only find a little documentation on the close method, but it's fair to say you can treat the connection like any other file object. I don't have much experience with SQLObject, though, and in the interpreter you can still remove the db right after the processConnection assignment, without closing it, so who knows.
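To make the cleanup robust even when the intermediate work raises, a try/finally variant may help; this is a minimal sketch, assuming the same 'path_to_db' placeholder as above:
from sqlobject import connectionForURI, sqlhub
import os

# Create and open connection to a database file.
sqlhub.processConnection = connectionForURI('sqlite:path_to_db')
try:
    pass  # work with the database here
finally:
    # Ensure the file handle is released before deleting the file.
    sqlhub.processConnection.close()
    os.remove('path_to_db')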

Related

How to backup Peewee database (SqliteQueueDatabase) programmatically?

I'm using Peewee in one of my projects. Specifically, I'm using SqliteQueueDatabase and I need to create a backup (i.e. another *.db file) without stopping my application. I saw that there are two methods that could work for me (backup and backup_to_file) but they're methods of CSqliteExtDatabase, and SqliteQueueDatabase is a subclass of SqliteExtDatabase. I've found solutions to manually create a dump of the file, but I need a *.db file (not a *.csv file, for example). Couldn't find any similar question or relevant answer.
Thanks!
You can just import the backup_to_file() helper from playhouse._sqlite_ext and pass it your connection and a filename:
from playhouse.sqliteq import SqliteQueueDatabase
from playhouse._sqlite_ext import backup_to_file

db = SqliteQueueDatabase('...')
conn = db.connection()  # get the underlying pysqlite connection
backup_to_file(conn, 'dest.db')
Also, if you're using pysqlite3, then there are also backup methods available on the connection itself.
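For reference, the standard-library sqlite3 module also exposes a backup API on the connection itself in Python 3.7+; a minimal sketch with placeholder file names:
import sqlite3

src = sqlite3.connect('source.db')
dst = sqlite3.connect('dest.db')
with dst:
    src.backup(dst)  # copies the whole database; safe even while src is in use
src.close()
dst.close()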

Code works in a Jupyter notebook but not as a .py script

Simplified example of my code, please ignore syntax errors:
import numpy as np
import pandas as pd
import pymysql.cursors
from datetime import date, datetime
connection = pymysql.connect(host='HOST',  # placeholder credentials
                             user='USER',
                             password='PASSWORD',
                             db='DB',
                             cursorclass=pymysql.cursors.DictCursor)
df1 = pd.read_sql('...', connection)
df2 = pd.read_sql('...', connection)
df3 = pd.read_sql('...', connection)
np.where(a == 1, b, c)
df1 = df1.append([df2, df3])
path = r'C:\Users\\'
df1.to_csv(path + 'a.csv')
In a Jupyter notebook it outputs the CSV file like it's supposed to. However, if I download it as a .py file and run it with python, it will only output a CSV the first time I run it after restarting my computer. Other times it will just run and nothing happens. Why this is happening is blowing my mind.
I think you have added the path wrongly. If you change your path to df.to_csv(path + '\a.csv') then it will be correct.
It's hard to say without knowing what your actual code is, but one thought is that the connection you have to your DB is never closed, and is somehow locking the DB so you are unable to make another connection.
The first connection would end, of course, when you restart your computer.
To see if this is an issue, you could use the MySQL command SHOW PROCESSLIST that would list out the different connections for you; if, after running the script the first time, one of the processes is still the same connection from your machine you just made, that could be the issue. Here's the docs on the command: https://dev.mysql.com/doc/refman/8.0/en/show-processlist.html
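A quick way to run that check from Python itself, reusing pymysql (credentials are placeholders):
import pymysql.cursors

conn = pymysql.connect(host='HOST', user='USER', password='PASSWORD',
                       cursorclass=pymysql.cursors.DictCursor)
with conn.cursor() as cur:
    cur.execute("SHOW PROCESSLIST")
    for row in cur.fetchall():
        print(row)  # look for lingering connections from your machine
conn.close()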
Alternatively, you could wrap the DB connection code in a try/except block with some print statements for good measure, to ascertain whether or not that's an issue, like so:
try:
    print("Right before connection")
    connection = pymysql.connect(host='HOST',  # placeholder credentials
                                 user='USER',
                                 password='PASSWORD',
                                 db='DB',
                                 cursorclass=pymysql.cursors.DictCursor)
    print("Right after connection")
except Exception as e:
    print("The Exception is: {}".format(str(e)))
Also, you should most definitely print the objects that you're trying to write to CSV, to see if they're still valid the second time around (i.e. make sure you've actually populated those variables and they're not just None).
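And if a lingering connection does turn out to be the culprit, the simplest fix is to close it explicitly once the DataFrames are loaded; a sketch (the query string is a placeholder):
try:
    df1 = pd.read_sql('SELECT ...', connection)
    # ... further queries ...
finally:
    connection.close()  # release the server-side connection even on error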

UPDATE statement on Access database fails silently under pyodbc

I have a problem with a simple UPDATE statement. I wrote a Python tool which creates a lot of UPDATE statements, and after creating them I want to execute them on my Access database, but it doesn't work. This is one statement, for example:
UPDATE FCL_B_COVERSHEET_A SET BRANCH = 0 WHERE OBJ_ID = '1220140910132011062005';
The statement syntax is not the problem. I tested it and it works.
This next code snippet shows the initialization for the connect object.
import pyodbc

strInputPathMDB = "C:\\Test.mdb"
DRV = '{Microsoft Access Driver (*.mdb)}'
con = pyodbc.connect('Driver={0};Dbq={1};Uid={2};Pwd={3};'.format(DRV, strInputPathMDB, "administrator", ""))
After that I wrote a method which executes one SQL statement:
def executeSQLStatement(conConnection, strSQL):
    arcpy.AddMessage(strSQL)
    cursor = conConnection.cursor()
    cursor.execute(strSQL)
    conConnection.commit()
and if I execute this code everything seems to work - no error message or anything like that - but the data is not updated, and I don't know what I'm doing wrong ...
for strSQL in sqlStateArray:
    executeSQLStatement(con, strSQL)
con.close()
I hope you understand what my problem is. Thanks for your help.
Chris
The issue here was that the .mdb file was in the root folder of the C: drive. Root folders often restrict normal users to read-only access so the database file was being opened as read-only. Moving the .mdb file to a public folder solved the problem.
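One way to catch this kind of silent no-op earlier is to check pyodbc's cursor.rowcount after each execute (rowcount reporting can vary by driver, so treat this as a sketch):
def executeSQLStatement(conConnection, strSQL):
    cursor = conConnection.cursor()
    cursor.execute(strSQL)
    if cursor.rowcount == 0:
        # An UPDATE that matched no rows, or a read-only database, shows up here.
        print('No rows affected: {}'.format(strSQL))
    conConnection.commit()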

SQLite Insert command in Python script Doesn't work on web

I'm trying to use an SQLite insert operation in a Python script. It works when I execute it manually on the command line, but when I try to access it on the web it won't insert anything into the database. Here is my function:
import sqlite3

def insertdb(unique_id, number_of_days):
    conn = sqlite3.connect('database.db')
    print "Opened database successfully"
    conn.execute("INSERT INTO IDENT (ID_NUM,DAYS_LEFT) VALUES (?,?)", (unique_id, number_of_days))
    conn.commit()
    print "Records created successfully"
    conn.close()
When it is executed on the web, it only shows the output "Opened database successfully" but does not seem to insert the value into the database. What am I missing? Is this a server configuration issue? I have checked the database file's write permissions and they are correctly set.
The problem is almost certainly that you're trying to create or open a database named database.db in whatever happens to be the current working directory, and one of the following is true:
The database exists and you don't have permission to write to it. So, everything works until you try to do something that requires write access (like committing an INSERT).
The database exists, and you have permission to write to it, but you don't have permission to create new files in the directory. So, everything works until sqlite needs to create a temporary file (which it almost always will for executing an INSERT).
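To rule out the working-directory problem, one option is to build an absolute path anchored next to the script instead of relying on the cwd; a minimal sketch:
import os
import sqlite3

# Resolve database.db relative to this script, not the process's cwd.
DB_PATH = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'database.db')
conn = sqlite3.connect(DB_PATH)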
Meanwhile, you don't mention what web server/container/etc. you're using, but apparently you have it configured to just swallow all errors silently, which is a really, really bad idea for any debugging. Configure it to report the errors in some way. Otherwise, you will never figure out what's going on with anything that goes wrong.
If you don't have control over the server configuration, you can at least wrap all your code in a try/except and manually log exceptions to some file you have write access to (ideally via the logging module, or just open and write if worst comes to worst).
Or, you can just do that with dumb print statements, as you're already doing:
def insertdb(unique_id, number_of_days):
    conn = sqlite3.connect('database.db')
    print "Opened database successfully"
    try:
        conn.execute("INSERT INTO IDENT (ID_NUM,DAYS_LEFT) VALUES (?,?)", (unique_id, number_of_days))
        conn.commit()
        print "Records created successfully"
    except Exception as e:
        print e  # or, better, traceback.print_exc() (after import traceback)
    conn.close()
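If you do have somewhere writable, a logging-based variant of the same function might look like this (the log path is an assumption and must be writable by the web server's user):
import logging
import sqlite3

logging.basicConfig(filename='/tmp/insertdb.log', level=logging.DEBUG)  # assumed writable path

def insertdb(unique_id, number_of_days):
    conn = sqlite3.connect('database.db')
    try:
        conn.execute("INSERT INTO IDENT (ID_NUM,DAYS_LEFT) VALUES (?,?)",
                     (unique_id, number_of_days))
        conn.commit()
        logging.info("Records created successfully")
    except Exception:
        logging.exception("Insert failed")  # logs the full traceback
    finally:
        conn.close()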

zc.lockfile.LockError in ZODB

I am trying to use ZODB 3.10.2 on my web server which is running Debian and Python 2.7.1. It seems like every time I try to access the same database from 2 different processes, I get a mysterious exception. I tried accessing a database from an interactive Python session and everything seemed to work fine:
>>> import ZODB
>>> from ZODB.FileStorage import FileStorage
>>> storage = FileStorage("test.db")
>>>
But then I tried the same series of commands from another session running at the same time and it didn't seem to work:
>>> import ZODB
>>> from ZODB.FileStorage import FileStorage
>>> storage = FileStorage("test.db")
No handlers could be found for logger "zc.lockfile"
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python2.7/site-packages/ZODB3-3.10.2-py2.7-linux-x86_64.egg/ZODB/FileStorage/FileStorage.py", line 125, in __init__
self._lock_file = LockFile(file_name + '.lock')
File "/usr/local/lib/python2.7/site-packages/zc.lockfile-1.0.0-py2.7.egg/zc/lockfile/__init__.py", line 76, in __init__
_lock_file(fp)
File "/usr/local/lib/python2.7/site-packages/zc.lockfile-1.0.0-py2.7.egg/zc/lockfile/__init__.py", line 59, in _lock_file
raise LockError("Couldn't lock %r" % file.name)
zc.lockfile.LockError: Couldn't lock 'test.db.lock'
>>>
Why is this happening? What can be done about it?
The ZODB does not support multi-process access. This is why you get the lock error; the ZODB file storage has been locked by one process to prevent other processes from altering it.
There are several ways around this. The easiest option is to use ZEO. ZEO extends the ZODB machinery to provide access to objects over a network, and you can easily configure your ZODB to access a ZEO server instead of a local FileStorage file:
<zodb>
  <zeoclient>
    server localhost:9100
  </zeoclient>
</zodb>
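Equivalently, from Python code, a client connection for the ZODB 3.x API might look like this (a sketch; the server address is assumed to match the config above):
from ZEO.ClientStorage import ClientStorage
from ZODB import DB
import transaction

storage = ClientStorage(('localhost', 9100))  # talks to the ZEO server
db = DB(storage)
conn = db.open()
root = conn.root()
root['key'] = 'value'  # illustrative write
transaction.commit()
db.close()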
Another option is to use RelStorage, which stores the ZODB data in a relational database. RelStorage supports PostgreSQL, Oracle and MySQL backends. RelStorage takes care of concurrent access from different ZODB clients. Here is an example configuration:
<zodb>
  <relstorage>
    <postgresql>
      # The dsn is optional, as are each of the parameters in the dsn.
      dsn dbname='zodb' user='username' host='localhost' password='pass'
    </postgresql>
  </relstorage>
</zodb>
RelStorage requires more up-front setup work but can outperform ZEO in many scenarios.
You cannot access the same database file from two processes at the same time (which is obvious); that's why you get this error. If you need to perform actions on the same data.fs file from two or more processes, use ZEO.
## Let the program run only once (if it's already running, don't run it again).
## Run the program, open the form.
import sys
import zc.lockfile

try:
    # Acquire an exclusive lock file; raises LockError if another instance holds it.
    lock = zc.lockfile.LockFile('lock', content_template='{pid};{hostname}')
    if __name__ == '__main__':
        mainForm()
except zc.lockfile.LockError:
    sys.exit()

## Thanks to zc.lockfile:
## https://pypi.org/project/zc.lockfile/#detailed-documentation
