I wrote a Python script which initializes an empty database if it doesn't exist.
import os
if not os.path.exists('Database'):
    os.makedirs('Database')
    os.system('sqlite3 Database/testDB.db ";"')
# rest of the script...
Can I do this in a more Pythonic fashion, with a try-except, or is this kind of code acceptable?
I think you can do it like this:
import sqlite3
conn = sqlite3.connect('Database/testDB.db')
This should connect to your database and create it if it doesn't exist. I'm not sure it's the most Pythonic way, but it does use the sqlite3 module instead of the sqlite3 command.
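For example, a minimal sketch that keeps the directory check from the question but swaps the os.system call for the sqlite3 module (note that sqlite3.connect creates the file, but not a missing parent directory):
import os
import sqlite3

# sqlite3.connect creates the file if needed, but won't create the
# 'Database' folder, so make sure the directory exists first
os.makedirs('Database', exist_ok=True)
conn = sqlite3.connect('Database/testDB.db')
conn.close()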
Making it Pythonic: create a sqlite3 database if it doesn't exist?
The most Pythonic way to do this is to use the context manager:
import sqlite3
# if we error, we rollback automatically, else commit!
with sqlite3.connect('/Temp/testDB.db') as conn:
    cursor = conn.cursor()
    cursor.execute('SELECT SQLITE_VERSION()')
    data = cursor.fetchone()
    print('SQLite version:', data)
In a python shell this echoes for me:
<sqlite3.Cursor object at 0x0CCAD4D0>
SQLite version: (u'3.5.9',)
To ensure you have a tempfile path that works across platforms, use tempfile.gettempdir:
import tempfile
with sqlite3.connect(tempfile.gettempdir() + '/testDB.db') as conn:
    ...
Create directory path, database file and table
Here is a recipe to create the directory path, database file and table
when necessary. If these already exist, the script will overwrite nothing and simply use what is at hand.
import os
import sqlite3
data_path = './really/deep/data/path/'
filename = 'whatever'
os.makedirs(data_path, exist_ok=True)
db = sqlite3.connect(data_path + filename + '.sqlite3')
db.execute('CREATE TABLE IF NOT EXISTS TableName (id INTEGER PRIMARY KEY, quantity INTEGER)')
db.close()
sqlite3.connect will attempt to create a database if it doesn't exist, so one way to tell whether one already exists is to try to open the file and catch the IOError. Then, to create a blank database, just connect using the sqlite3 module.
import sqlite3
try:
    open('idonotexist')
    print('Database already exists!')
except IOError as e:
    if e.errno == 2:  # No such file or directory
        blank_db = sqlite3.connect('idonotexist')
        print('Blank database created')
    else:  # permission denied or something else?
        print(e)
Of course, you may still have to do something with os.makedirs depending on whether the directory structure already exists.
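For instance, a small guard (the directory name here is purely illustrative) before connecting:
import os

db_dir = 'some/dir'                 # illustrative directory
os.makedirs(db_dir, exist_ok=True)  # no-op if the structure already exists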
Related
I have some Python code that copies a SQLite DB across SFTP. However, it is a highly active DB, so much of the time I run into a malformed copy. I'm thinking of these possible options, but I don't know how to implement them because I am newer to Python.
Alternate method of getting the SQLite DB copied?
Maybe there is a way to query the SQLite file from the device? Not sure if that would work, since SQLite is more of a local DB; I'm not sure how I can query it remotely like I could with MySQL etc...
Create a loop? I could call the function again in the exception, but I'm not sure how to retry the rest of the code.
Also, I'm thinking the malformed DB issue could possibly occur in other sections? Maybe I need to run a PRAGMA quick_check? (A sketch of that idea follows the code below.)
This is commonly what I am seeing... The other catch is: why am I seeing it as often as I am? If I load the SQLite file from my main machine, it runs the queries fine.
(venv) dulanic#mediaserver:/opt/python_scripts/rpi$ cd /opt/python_scripts/rpi ; /usr/bin/env /opt/python_scripts/rpi/venv/bin/python /home/dulanic/.vscode-server/extensions/ms-python.python-2021.2.636928669/pythonFiles/lib/python/debugpy/launcher 37599 -- /opt/python_scripts/rpi/rpdb.py
An error occurred: database disk image is malformed
This is my current code:
#!/usr/bin/env python3
import psycopg2, sqlite3, sys, paramiko, os, socket, time

scpuser = os.getenv('scpuser')
scppw = os.getenv('scppw')
sqdb = os.getenv('sqdb')
sqlike = os.getenv('sqlike')
pgdb = os.getenv('pgdb')
pguser = os.getenv('pguser')
pgpswd = os.getenv('pgpswd')
pghost = os.getenv('pghost')
pgport = os.getenv('pgport')
pgschema = os.getenv('pgschema')

database = r"./pihole.db"
pihole = socket.gethostbyname('pi.hole')
tabnames = []
tabgrab = ''

def pullsqlite():
    sftp.get('/etc/pihole/pihole-FTL.db', 'pihole.db')
    sftp.close()

# SFTP pull config
ssh_client = paramiko.SSHClient()
ssh_client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh_client.connect(hostname=pihole, username=scpuser, password=scppw)
sftp = ssh_client.open_sftp()

# Pull SQLite
pullsqlite()

# Load sqlite tables to list
consq = sqlite3.connect(sqdb)
cursq = consq.cursor()
cursq.execute(f"SELECT name FROM sqlite_master WHERE type='table' AND name in ({sqlike})")
tabgrab = cursq.fetchall()

# postgres connection
conpg = psycopg2.connect(database=pgdb, user=pguser, password=pgpswd,
                         host=pghost, port=pgport)

# Load data to postgres from sqlite
for item in tabgrab:
    tabnames.append(item[0])

start = time.perf_counter()
for table in tabnames:
    curpg = conpg.cursor()
    if table == 'queries':
        curpg.execute(f"SELECT max(id) FROM {table};")
        max_id = curpg.fetchone()[0]
        cursq.execute(f"SELECT * FROM {table} where id > {max_id};")
    else:
        cursq.execute(f"SELECT * FROM {table};")
    try:
        rows = cursq.fetchall()
    except sqlite3.Error as e:
        print("An error occurred:", e.args[0])
    colcount = len(rows[0])
    pholder = ('%s,' * colcount)[:-1]
    try:
        curpg.execute(f"SET search_path TO {pgschema};")
        curpg.executemany(f"INSERT INTO {table} VALUES ({pholder}) ON CONFLICT DO NOTHING;", rows)
        conpg.commit()
        print(f'Inserted {len(rows)} rows into {table}')
    except psycopg2.DatabaseError as e:
        print(f'Error {e}')
        sys.exit(1)

if 'start' in locals():
    elapsed = time.perf_counter() - start
    print(f'Time {elapsed:0.4}')

consq.close()
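A minimal sketch of the quick_check/retry idea mentioned in the question (not from an answer in the thread): the function name pull_until_ok and the retry count are illustrative, and pull stands for a callable that fetches a fresh copy of the file (the pullsqlite() above would need to reopen the sftp session on each call, since it closes it).
import sqlite3

def pull_until_ok(pull, db_path, retries=3):
    for attempt in range(1, retries + 1):
        pull()
        con = sqlite3.connect(db_path)
        try:
            # PRAGMA quick_check returns the single row 'ok' when the copy is sound
            result = con.execute('PRAGMA quick_check;').fetchone()[0]
        except sqlite3.DatabaseError:
            result = 'malformed'
        finally:
            con.close()
        if result == 'ok':
            return True
        print(f'Copy malformed on attempt {attempt}, retrying...')
    return False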
import sqlite3
conn = sqlite3.connect("test.db")
cursor = conn.cursor()
It should create the database, but it does not. Any help?
This code will create an sqlite db file called "test.db" in the same directory you are running your script from.
For example, if you have your python file in:
/home/user/python_code/mycode.py
And you run it from:
/home/user/
With:
python python_code/mycode.py # or python3
It will create an "empty" sqlite db file at
/home/user/test.db
If you can't find the test.db file, make sure you pass it the full path of where you want it to be located.
i.e.
conn = sqlite3.connect("/full/path/to/location/you/want/test.db")
I had the same problem: my .db file wasn't appearing because I forgot to add test.db at the end of the path; see line 2 below.
import sqlite3
databaseFile = "/home/user/test.db" #don't forget the test.db
conn = sqlite3.connect(databaseFile)
cursor = conn.cursor()
I suspect the DB will not be created on disk until you create at least one table in it. Just calling conn.cursor() is not sufficient.
The console sqlite3 utility behaves this way, too.
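If that is the case, a minimal sketch (the table name kv is just an arbitrary example) that should make the file show up on disk:
import sqlite3

conn = sqlite3.connect("test.db")
# creating any table (kv is an arbitrary example) forces a write to disk
conn.execute("CREATE TABLE IF NOT EXISTS kv (key TEXT PRIMARY KEY, value TEXT)")
conn.commit()
conn.close()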
I have a Pyramid / SQLAlchemy / MySQL Python app.
When I execute a raw SQL INSERT query, nothing gets written to the DB.
When using the ORM, however, I can write to the DB. I read the docs, read up on the ZopeTransactionExtension, and read a good deal of SO questions, all to no avail.
What hasn't worked so far:
transaction.commit() - nothing is written to the DB. I do realize this statement is necessary with ZopeTransactionExtension but it just doesn't do the magic here.
dbsession().commit - doesn't work since I'm using ZopeTransactionExtension
dbsession().close() - nothing written
dbsession().flush() - nothing written
mark_changed(session) -
File "/home/dev/.virtualenvs/sc/local/lib/python2.7/site-packages/zope/sqlalchemy/datamanager.py", line 198, in join_transaction
if session.twophase:
AttributeError: 'scoped_session' object has no attribute 'twophase'
What has worked but is not acceptable because it doesn't use scoped_session:
engine.execute(...)
I'm looking for how to execute raw SQL with a scoped_session (dbsession() in my code)
Here is my SQLAlchemy setup (models/__init__.py)
def dbsession():
    assert (_dbsession is not None)
    return _dbsession

def init_engines(settings, _testing_workarounds=False):
    import zope.sqlalchemy
    extension = zope.sqlalchemy.ZopeTransactionExtension()
    global _dbsession
    _dbsession = scoped_session(
        sessionmaker(
            autoflush=True,
            expire_on_commit=False,
            extension=extension,
        )
    )
    engine = engine_from_config(settings, 'sqlalchemy.')
    _dbsession.configure(bind=engine)
Here is a Python script I wrote to isolate the problem. It resembles the real-world environment where the problem occurs. All I want is to make the script below insert the data into the DB:
# -*- coding: utf-8 -*-
import sys
import transaction
from pyramid.paster import setup_logging, get_appsettings
from sc.models import init_engines, dbsession
from sqlalchemy.sql.expression import text
def __main__():
    if len(sys.argv) < 2:
        raise RuntimeError()
    config_uri = sys.argv[1]
    setup_logging(config_uri)
    aa = init_engines(get_appsettings(config_uri))

    session = dbsession()
    session.execute(text("""INSERT INTO
        operations (description, generated_description)
        VALUES ('hello2', 'world');"""))
    print list(session.execute("""SELECT * from operations""").fetchall())  # prints inserted data

    transaction.commit()
    print list(session.execute("""SELECT * from operations""").fetchall())  # doesn't print inserted data

if __name__ == '__main__':
    __main__()
What is interesting, if I do:
session = dbsession()
session.execute(text("""INSERT INTO
    operations (description, generated_description)
    VALUES ('hello2', 'world');"""))
op = Operation(generated_description='aa', description='oo')
session.add(op)
then the first print outputs the raw SQL inserted row ('hello2' 'world'), and the second print prints both rows, and in fact both rows are inserted into the DB.
I cannot comprehend why using an ORM insert alongside raw SQL "fixes" it.
I really need to be able to call execute() on a scoped_session to insert data into the DB using raw SQL. Any advice?
It has been a while since I mixed raw SQL with SQLAlchemy, but whenever you mix them, you need to be aware of what happens behind the scenes with the ORM. First, check the autocommit flag. If the Zope transaction is not configured correctly, the ORM insert might be triggering a commit.
Actually, after looking at the zope.sqlalchemy docs, it seems manual execute statements need an extra step. From their README:
By default, zope.sqlalchemy puts sessions in an 'active' state when they are first used. ORM write operations automatically move the session into a 'changed' state. This avoids unnecessary database commits. Sometimes it is necessary to interact with the database directly through SQL. It is not possible to guess whether such an operation is a read or a write. Therefore we must manually mark the session as changed when manual SQL statements write to the DB.
>>> session = Session()
>>> conn = session.connection()
>>> users = Base.metadata.tables['test_users']
>>> conn.execute(users.update(users.c.name=='bob'), name='ben')
<sqlalchemy.engine...ResultProxy object at ...>
>>> from zope.sqlalchemy import mark_changed
>>> mark_changed(session)
>>> transaction.commit()
>>> session = Session()
>>> str(session.query(User).all()[0].name)
'ben'
>>> transaction.abort()
It seems you aren't doing that, and so the transaction.commit does nothing.
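Applied to the script in the question, a minimal sketch of that extra step might look like this (it only adds the mark_changed call before the commit; everything else is taken from the question):
from zope.sqlalchemy import mark_changed
from sqlalchemy.sql.expression import text
import transaction

session = dbsession()
session.execute(text("""INSERT INTO
    operations (description, generated_description)
    VALUES ('hello2', 'world');"""))
mark_changed(session)   # tell zope.sqlalchemy this session has pending writes
transaction.commit()    # the data manager now actually commits the INSERT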
While writing a script to convert raw data for MySQL import, I have so far worked with a temporary text file, which I later imported manually using the LOAD DATA INFILE... command.
Now I have included the import command in the Python script:
import mysql.connector

db = mysql.connector.connect(user='root', password='root',
                             host='localhost',
                             database='myDB')
cursor = db.cursor()
query = """
LOAD DATA INFILE 'temp.txt' INTO TABLE myDB.values
FIELDS TERMINATED BY ',' LINES TERMINATED BY ';';
"""
cursor.execute(query)
cursor.close()
db.commit()
db.close()
This works, but temp.txt has to be in the database directory, which isn't suitable for my needs.
The next approach is dropping the file and committing the rows directly:
import mysql.connector
from datetime import datetime

db = mysql.connector.connect(user='root', password='root',
                             host='localhost',
                             database='myDB')
sql = "INSERT INTO values(`timestamp`,`id`,`value`,`status`) VALUES(%s,%s,%s,%s)"
cursor = db.cursor()
for line in lines:
    mode, year, julian, time, *values = line.split(",")
    del values[5]
    date = datetime.strptime(year + julian, "%Y%j").strftime("%Y-%m-%d")
    time = datetime.strptime(time.rjust(4, "0"), "%H%M").strftime("%H:%M:%S")
    timestamp = "%s %s" % (date, time)
    for i, value in enumerate(values[:20], 1):
        args = (timestamp, str(i + 28), value, mode)
        cursor.execute(sql, args)
db.commit()
This works as well, but takes around four times as long, which is too much. (The same for loop was used in the first version to generate temp.txt.)
My conclusion is that I need a file and the LOAD DATA INFILE command to be fast enough. To be free to place the text file anywhere, the LOCAL option seems useful. But with MySQL Connector (1.1.7) there is the known error:
mysql.connector.errors.ProgrammingError: 1148 (42000): The used command is not allowed with this MySQL version
So far I've seen that using MySQLdb instead of MySQL Connector can be a workaround. Activity on MySQLdb, however, seems low, and Python 3.3 support will probably never come.
Is LOAD DATA LOCAL INFILE the way to go, and if so, is there a working connector for Python 3.3 available?
EDIT: After development the database will run on a server, script on a client.
I may have missed something important, but can't you just specify the full filename in the first chunk of code?
LOAD DATA INFILE '/full/path/to/temp.txt'
Note the path must be a path on the server.
To use LOAD DATA LOCAL INFILE with any file the client can access, you have to set the LOCAL_FILES client flag when creating the connection:
import mysql.connector
from mysql.connector.constants import ClientFlag
db = mysql.connector.connect(client_flags=[ClientFlag.LOCAL_FILES], <other arguments>)
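A rough end-to-end sketch of that approach (the client-side path is a placeholder, and the MySQL server also needs local_infile enabled for LOCAL loads to be accepted):
import mysql.connector
from mysql.connector.constants import ClientFlag

# LOCAL_FILES lets the client send a file that lives on the client machine
db = mysql.connector.connect(user='root', password='root',
                             host='localhost', database='myDB',
                             client_flags=[ClientFlag.LOCAL_FILES])
cursor = db.cursor()
cursor.execute("""
    LOAD DATA LOCAL INFILE '/path/on/client/temp.txt' INTO TABLE myDB.values
    FIELDS TERMINATED BY ',' LINES TERMINATED BY ';';
""")
db.commit()
cursor.close()
db.close()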
I have been trying to create a SQLite database using one Python file and access data from it using another, but I keep getting an error. I have 2 files, main.py and file2.py.
main.py:
import sqlite3, os
conn = sqlite3.connect(':memory:')
queryCurs = conn.cursor()
def createTable():
    queryCurs.execute('''CREATE TABLE test(id INTEGER PRIMARY KEY, name TEXT)''')

def addInitial(name):
    queryCurs.execute('''INSERT INTO test(name) VALUES (?)''', (name,))

createTable()
addInitial("John")
conn.commit()
os.system('file2.py')
and here is the code in file2.py
import sqlite3, os, time
conn = sqlite3.connect(':memory:')
queryCurs = conn.cursor()
queryCurs.execute('SELECT name FROM test WHERE id=1')
for i in queryCurs:
    for j in i:
        name = j
print name
conn.commit()
I receive the error: OperationalError: no such table: test
Each connect call creates its own in-memory database.
To share the same in-memory database, create a single connection and share that Python object in both modules.
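For example, a minimal sketch of sharing one connection: the function name query_name and running file2 via an import instead of os.system are illustrative choices, not from the question.
# file2.py
def query_name(conn):
    cur = conn.cursor()
    cur.execute('SELECT name FROM test WHERE id=1')
    row = cur.fetchone()
    print(row[0] if row else None)

# main.py
import sqlite3
import file2

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE test(id INTEGER PRIMARY KEY, name TEXT)')
conn.execute('INSERT INTO test(name) VALUES (?)', ('John',))
conn.commit()
file2.query_name(conn)  # same connection object, so the table is visible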