How to back up a Peewee database (SqliteQueueDatabase) programmatically? - python

I'm using Peewee in one of my projects. Specifically, I'm using SqliteQueueDatabase and I need to create a backup (i.e. another *.db file) without stopping my application. I saw that there are two methods that could work for me (backup and backup_to_file), but they're methods of CSqliteExtDatabase, and SqliteQueueDatabase is a subclass of SqliteExtDatabase. I've found solutions for manually creating a dump of the file, but I need a *.db file (not a *.csv file, for example). I couldn't find any similar question or relevant answer.
Thanks!

You can just import the backup_to_file() helper from playhouse._sqlite_ext and pass it your connection and a filename:
from playhouse.sqliteq import SqliteQueueDatabase
from playhouse._sqlite_ext import backup_to_file

db = SqliteQueueDatabase('...')
conn = db.connection()  # get the underlying pysqlite connection
backup_to_file(conn, 'dest.db')
Also, if you're using pysqlite3, there are backup methods available on the connection itself.
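If you're on Python 3.7 or newer, the standard library's sqlite3 module exposes the same online-backup facility via Connection.backup(), so you don't strictly need the playhouse helper. A minimal sketch, with hypothetical file names:
import sqlite3

# Both connections are plain pysqlite connections; backup() uses SQLite's
# online backup API, so the source database stays usable while it copies.
src = sqlite3.connect('app.db')       # hypothetical path to the live database
dst = sqlite3.connect('backup.db')    # destination *.db file
with dst:
    src.backup(dst)
dst.close()
src.close()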

Related

Testing a Postgres DB in Python

I don't understand how to test my repositories.
I want to be sure that I really saved an object with all of its parameters into the database, and that when I execute my SQL statement I really receive what I'm supposed to.
But I cannot put "CREATE TABLE test_table" in the setUp method of a unittest case, because it would be created multiple times (tests of the same test case are run in parallel). So as long as I create two methods in the same class which need to work on the same table, it won't work (the table names clash).
Likewise, I cannot put "CREATE TABLE test_table" in setUpModule, because now the table is created once, but since tests are run in parallel, nothing prevents the same object from being inserted multiple times into my table, which breaks the uniqueness constraint on some field.
Likewise, I cannot "CREATE SCHEMA some_random_schema_name" in every method, because I would need to globally "SET search_path TO ..." for a given database, so every method run in parallel would be affected.
The only way I see is to "CREATE DATABASE" for each test, with a unique name, and establish an individual connection to each database. This looks extremely wasteful. Is there a better way?
Also, I cannot use SQLite in memory because I need to test PostgreSQL.
The best solution for this is to use the testing.postgresql module. This fires up a db in user-space, then deletes it again at the end of the run. You can put the following in a unittest suite - either in setUp, setUpClass or setUpModule - depending on what persistence you want:
import testing.postgresql

def setUp(self):
    self.postgresql = testing.postgresql.Postgresql(port=7654)
    # Get the url to connect to with psycopg2 or equivalent
    print(self.postgresql.url())

def tearDown(self):
    self.postgresql.stop()
If you want the database to persist between/after tests, you can run it with the base_dir option to set a directory, which will prevent its removal after shutdown:
name = "testdb"
port = "5678"
path = "/tmp/my_test_db"
testing.postgresql.Postgresql(name=name, port=port, base_dir=path)
Outside of testing it can also be used as a context manager, where it will automatically clean up and shut down when the with block is exited:
with testing.postgresql.Postgresql(port=7654) as psql:
    # do something here, e.g. connect with psycopg2 via psql.url()
    pass
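For completeness, here is a minimal sketch of a full test case wired up with psycopg2; the table and assertions are made up for illustration, but the Postgresql()/dsn() usage follows the module's documented pattern:
import unittest
import psycopg2
import testing.postgresql

class RepositoryTest(unittest.TestCase):
    def setUp(self):
        # Each test gets its own throwaway server on a free port,
        # so parallel tests cannot clash on tables or constraints.
        self.postgresql = testing.postgresql.Postgresql()
        self.conn = psycopg2.connect(**self.postgresql.dsn())

    def tearDown(self):
        self.conn.close()
        self.postgresql.stop()

    def test_insert_and_read_back(self):
        cur = self.conn.cursor()
        cur.execute("CREATE TABLE test_table (id serial PRIMARY KEY, name text UNIQUE)")
        cur.execute("INSERT INTO test_table (name) VALUES (%s)", ("alice",))
        self.conn.commit()
        cur.execute("SELECT name FROM test_table WHERE name = %s", ("alice",))
        self.assertEqual(cur.fetchone()[0], "alice")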

UPDATE statement on Access database fails silently under pyodbc

I have a problem with a simple UPDATE statement. I wrote a Python tool which creates a lot of UPDATE statements, and after creating them I want to execute them on my Access database, but it doesn't work. This is one statement, for example:
UPDATE FCL_B_COVERSHEET_A SET BRANCH = 0 WHERE OBJ_ID = '1220140910132011062005';
The statement syntax is not the problem. I tested it and it works.
This next code snippet shows the initialization of the connection object.
import pyodbc

strInputPathMDB = "C:\\Test.mdb"
DRV = '{Microsoft Access Driver (*.mdb)}'
con = pyodbc.connect('Driver={0};Dbq={1};Uid={2};Pwd={3};'.format(DRV, strInputPathMDB, "administrator", ""))
After that I wrote a method which executes one SQL statement:
def executeSQLStatement(conConnection, strSQL):
    arcpy.AddMessage(strSQL)
    cursor = conConnection.cursor()
    cursor.execute(strSQL)
    conConnection.commit()
and if I execute this code everything seems to work - no error message or anything like that - but the data is not updated and I don't know what I'm doing wrong:
for strSQL in sqlStateArray:
    executeSQLStatement(con, strSQL)
con.close()
I hope you understand what my problem is. Thanks for your help.
Chris
The issue here was that the .mdb file was in the root folder of the C: drive. Root folders often restrict normal users to read-only access so the database file was being opened as read-only. Moving the .mdb file to a public folder solved the problem.
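If you hit something like this again, a cheap sanity check is to look at cursor.rowcount after the execute; a sketch of the method above with that check added (the warning text is just illustrative):
def executeSQLStatement(conConnection, strSQL):
    cursor = conConnection.cursor()
    cursor.execute(strSQL)
    # rowcount is the number of rows the UPDATE affected;
    # 0 means the statement ran but changed nothing.
    if cursor.rowcount == 0:
        print("Warning: no rows affected: %s" % strSQL)
    conConnection.commit()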

Using Python Boto with AWS Support API

I've used boto to interact with S3 with no problems, but now I'm attempting to connect to the AWS Support API to pull back info on open tickets, Trusted Advisor results, etc. It seems that the boto library has a different connect method for each AWS service? For example, with S3 it is:
conn = S3Connection(AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY)
According to the boto docs, the following should work to connect to AWS Support API:
>>> from boto.support.connection import SupportConnection
>>> conn = SupportConnection('<aws access key>', '<aws secret key>')
However, there are a few problems I see after digging through the source code. First, boto.support.connection doesn't actually exist. boto.connection does, but it doesn't contain a class SupportConnection. boto.support.layer1 exists, and DOES have the class SupportConnection, but it doesn't accept key arguments as the docs suggest. Instead it takes one argument - an AWSQueryConnection object. That class is defined in boto.connection. AWSQueryConnection takes one argument - an AWSAuthConnection object, a class also defined in boto.connection. Lastly, AWSAuthConnection takes a generic object, with requirements defined in __init__ as:
class AWSAuthConnection(object):
    def __init__(self, host, aws_access_key_id=None,
                 aws_secret_access_key=None,
                 is_secure=True, port=None, proxy=None, proxy_port=None,
                 proxy_user=None, proxy_pass=None, debug=0,
                 https_connection_factory=None, path='/',
                 provider='aws', security_token=None,
                 suppress_consec_slashes=True,
                 validate_certs=True, profile_name=None):
So, for kicks, I tried creating an AWSAuthConnection by passing keys, followed by AWSQueryConnection(awsauth), followed by SupportConnection(awsquery), with no luck. This was inside a script.
Last item of interest is that, with my keys defined in a .boto file in my home directory, and running python interpreter from the command line, I can make a direct import and call to SupportConnection() (no arguments) and it works. It clearly is picking up my keys from the .boto file and consuming them but I haven't analyzed every line of source code to understand how, and frankly, I'm hoping to avoid doing that.
Long story short, I'm hoping someone has some familiarity with boto and connecting to AWS APIs other than S3 (the bulk of the material that exists via Google) to help me troubleshoot further.
This should work:
import boto.support
conn = boto.support.connect_to_region('us-east-1')
This assumes you have credentials in your boto config file or in an IAM Role. If you want to pass explicit credentials, do this:
import boto.support
conn = boto.support.connect_to_region('us-east-1', aws_access_key_id="<access key>", aws_secret_access_key="<secret key>")
This basic incantation should work for all services in all regions. Just import the correct module (e.g. boto.support or boto.ec2 or boto.s3 or whatever) and then call its connect_to_region method, supplying the name of the region you want as a parameter.
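Once you have the connection, the methods of boto.support.layer1.SupportConnection map onto the Support API operations; for example, listing open cases (the response is a dict parsed from the service's JSON, and the key names below follow the AWS DescribeCases response):
import boto.support

conn = boto.support.connect_to_region('us-east-1')

# describe_cases() returns open support cases by default.
response = conn.describe_cases()
for case in response.get('cases', []):
    print(case['caseId'], case['subject'])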

Closing an SQLObject Connection

Is it possible to manually close an SQLObject Connection once it has been opened? I am trying to delete a database file once it has been used, but it seems that the open connection to the database file is stopping me from doing so.
For example:
from sqlobject import *
import os
# Create and open connection to a database file.
sqlhub.processConnection = connectionForURI('sqlite:path_to_db')
SomeObject.createTable()
# ...
# Delete database when finished.
os.remove('path_to_db')
Gives the following error:
WindowsError: [Error 32] The process cannot access the file because
it is being used by another process: 'path_to_db'
It seems that just calling .close() on the database connection does the trick:
from sqlobject import *
import os
# Create and open connection to a database file.
sqlhub.processConnection = connectionForURI('sqlite:path_to_db')
#do something with connection
pass
#close connection
sqlhub.processConnection.close()
#delete database
os.remove('path_to_db')
I could only find a little bit on the close method here, but it's fair to say you can treat it like any other file object. I don't have much experience with sqlobject though, and in the interpreter, you can still remove the db right after the processConnection assignment, without closing it, so who knows.
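If you want the close-before-delete ordering to hold even when something raises midway, a try/finally sketch along the same lines:
from sqlobject import sqlhub, connectionForURI
import os

sqlhub.processConnection = connectionForURI('sqlite:path_to_db')
try:
    pass  # do something with the connection
finally:
    # Close first so Windows releases its handle on the file,
    # then the delete can succeed.
    sqlhub.processConnection.close()
    os.remove('path_to_db')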

Tornado. Django-like testrunner and test database

I like Django unit tests because they create and drop a test database on each run.
What ways exist to create a test database for Tornado?
UPD: I'm interested in PostgreSQL test-database creation on test runs.
I found the easiest way is just to use a SQL dump for the test database. Create a database, populate it with fixtures, and write it out to file. Simply call load_fixtures before you run your tests (or whenever you want to reset the DB). This method can certainly be improved, but it's been good enough for my needs.
import os
import unittest2
import tornado.database

settings = dict(
    db_host="127.0.0.1:5432",
    db_name="testdb",
    db_user="testdb",
    db_password="secret",
    db_fixtures_file=os.path.join(os.path.dirname(os.path.abspath(__file__)), 'fixtures.sql'),
)

def load_fixtures():
    """Fixtures are stored in an SQL dump; reloading them resets the database."""
    # psql reads the password from the PGPASSWORD environment variable;
    # its --password flag only forces a prompt and takes no value.
    os.system("PGPASSWORD=%s psql %s --username=%s < %s" % (
        settings['db_password'], settings['db_name'],
        settings['db_user'], settings['db_fixtures_file']))
    return tornado.database.Connection(
        host=settings['db_host'], database=settings['db_name'],
        user=settings['db_user'], password=settings['db_password'])
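A hypothetical test case built on top of it, resetting the data before each test (the table and query are made up for illustration):
class RepositoryTest(unittest2.TestCase):
    def setUp(self):
        # Reloading the dump resets the data, so each test starts clean.
        self.db = load_fixtures()

    def tearDown(self):
        self.db.close()

    def test_fixtures_loaded(self):
        rows = self.db.query("SELECT * FROM some_table")
        self.assertTrue(len(rows) > 0)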
