If I want to test my application against an empty MySQL database each time my application's test suite is run, how can I start up a server as a non-root user that refers to an empty MySQL database (not saved anywhere permanent, or saved only under /tmp)?
My application is in Python, and I'm using unittest on Ubuntu 9.10.
Do you want --datadir (which relocates just the data) or --basedir (which points at the whole MySQL installation)?
You can try the BLACKHOLE and MEMORY storage engines in MySQL.
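If it helps, here is a minimal sketch of a unittest fixture that launches a throwaway mysqld against a temporary datadir. It assumes MySQL 5.7+, where mysqld --initialize-insecure exists (older versions, such as the 5.1 that shipped with Ubuntu 9.10, used mysql_install_db instead), and that mysqld is on your PATH:

import shutil, subprocess, tempfile, time, unittest

class MySQLTestCase(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # throwaway datadir under /tmp; deleted again in tearDownClass
        cls.datadir = tempfile.mkdtemp(prefix="mysql-test-", dir="/tmp")
        cls.socket = cls.datadir + "/mysql.sock"
        # create the system tables in the empty datadir
        subprocess.check_call(["mysqld", "--initialize-insecure",
                               "--datadir=" + cls.datadir])
        cls.server = subprocess.Popen(["mysqld", "--datadir=" + cls.datadir,
                                       "--socket=" + cls.socket,
                                       "--skip-networking"])
        time.sleep(2)  # crude readiness wait; polling for the socket file is more robust

    @classmethod
    def tearDownClass(cls):
        cls.server.terminate()
        cls.server.wait()
        shutil.rmtree(cls.datadir)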
I have a little Django app that uses PyMongo and MongoDB.
If I write (or update) something in the database, I have to restart the server for it to show up on the web page. I'm running with 'python manage.py runserver'.
I switched to the Django dummy cache, but that didn't help.
Every database action is inside a 'with MongoClient' statement.
I figured it out. I was reading the data into the django_tables2 class variables, so it was never refreshed...
Bangs forehead on desk...
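For anyone hitting the same wall, here is a minimal sketch of the pitfall; fetch_records(), the column, and the template name are hypothetical stand-ins:

import django_tables2 as tables
from django.shortcuts import render

def fetch_records():
    ...  # query MongoDB via PyMongo here and return a list of dicts

class RecordTable(tables.Table):
    name = tables.Column()

# BAD: evaluated once at import time, so the page never shows new data
stale_table = RecordTable(fetch_records())

# GOOD: build the table per request so every page load re-queries MongoDB
def record_list(request):
    return render(request, "records.html", {"table": RecordTable(fetch_records())})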
I'm creating a GUI in Python to manipulate stored records, and I have the MySQL script to set up the database and enter all the information. How do I get from the MySQL script to the .db file so that Python can access and manipulate it?
.db files are SQLite databases most of the time. What you are trying to do is convert a dumped MySQL database into an SQLite database. This is not trivial, as the two SQL dialects are not fully compatible. If the input is simple enough, you can try running each statement of it through an SQLite connection in your Python script. If it uses more complex features, you may want to actually connect to a (populated) MySQL database and fetch the data from there, inserting it into a local SQLite file.
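As a rough sketch of that second approach, assuming a MySQLdb-style driver and a hypothetical records table with id and name columns:

import sqlite3
import MySQLdb  # any DB-API 2.0 MySQL driver (e.g. pymysql) works the same way

mysql = MySQLdb.connect(host="localhost", user="user", passwd="pass", db="mydb")
lite = sqlite3.connect("records.db")  # the .db file your GUI will open

lite.execute("CREATE TABLE IF NOT EXISTS records (id INTEGER PRIMARY KEY, name TEXT)")
cur = mysql.cursor()
cur.execute("SELECT id, name FROM records")
lite.executemany("INSERT INTO records (id, name) VALUES (?, ?)", cur.fetchall())
lite.commit()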
I am writing a REST API using flask_restful and managing the MySQL db using flask-sqlalchemy. I would like to know what the best practice is for loading existing data into a table when the app starts.
I am currently calling the db.create_all() method within an endpoint with the @app.before_first_request decorator. I would like to then fill one of the created tables with existing data from a CSV file. Should the code that pushes the data be in a separate script or within the function?
Thanks!
I would separate loading initial database data from application initialization: in my experience the initial data doesn't change often, loading it can take some time if the file is bigger, and you usually don't need to reload it into the database each time the application starts.
I think you will almost certainly need database migrations at some point in your application's development, so I would suggest setting up Flask-Migrate to handle that and running its upgrade method on application creation (in the create_app method, if you are using the Flask application factory pattern). This will save you some headache later, when you introduce migrations on a database that has already been populated with real data via db.create_all().
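To illustrate, here is a minimal sketch of wiring Flask-Migrate into an application factory; create_app, the database URI, and the module layout are assumptions, not taken from your question:

from flask import Flask
from flask_sqlalchemy import SQLAlchemy
from flask_migrate import Migrate, upgrade

db = SQLAlchemy()
migrate = Migrate()

def create_app():
    app = Flask(__name__)
    app.config["SQLALCHEMY_DATABASE_URI"] = "mysql://user:pass@localhost/mydb"  # placeholder
    db.init_app(app)
    migrate.init_app(app, db)
    with app.app_context():
        upgrade()  # apply any pending migrations instead of calling db.create_all()
    return app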
And for populating the database with seed data I would go with the Flask CLI or Flask-Script. In one of my recent projects I used Flask-Script and created a separate manage.py file which, among other application management methods, contained an initial data seeding method that looked something like this:
@manager.command
def seed():
    "Load initial data into database."
    db.session.add(...)
    db.session.commit()
And it was run on demand with the following command:
python manage.py seed
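Since your initial data comes from a CSV file, the seed command could read it with the csv module; users.csv, the User model, and the column names below are hypothetical stand-ins:

import csv

@manager.command
def seed():
    "Load initial data into database from a CSV file."
    with open("users.csv", newline="") as f:
        for row in csv.DictReader(f):
            db.session.add(User(name=row["name"], email=row["email"]))
    db.session.commit()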
My database on Amazon currently has only a little data in it (I am making a web app, but it is still in development), and I am looking to delete it, make changes to the schema, and put it back up again. The past few times I have done this, I have completely recreated my Elastic Beanstalk app, but it seems like there should be a better way. On my local machine, I take the following steps:
"dropdb databasename" and then "createdb databasename"
python manage.py makemigrations
python manage.py migrate
Is there something like this that I can do on Amazon to delete my database and put it back online again without deleting the entire application? When I tried just deleting the RDS instance a while ago and making a new one, I had problems with Elastic Beanstalk.
The easiest way to accomplish this is to SSH into one of your EC2 instances that has access to the RDS DB, and then connect to the DB from there. Make sure that your Python scripts can read your app configuration to access the configured DB, or add arguments for the DB hostname. To drop and create your DB, you just add the necessary connection arguments. For example:
$ dropdb -h <RDS endpoint> -U <user> -W ebdb
$ createdb -h <RDS endpoint> -U <user> -W ebdb
You can also create an RDS snapshot when the DB is empty, and use the RDS instance actions Restore to Point in Time or Migrate Latest Snapshot.
I had the same problem and came up with a workaround. In your Python code, just add and run the following method when deploying your app the next time:
FOR SQLALCHEMY 2.0 AND LATER
from sqlalchemy import create_engine, text

tables = ["table1_name", "table2_name"]  # the names of the tables you want to delete
engine = create_engine("sqlite:///example.db")  # here you create your engine

def delete_tables(tables):
    for table in tables:
        sql = text(f"DROP TABLE IF EXISTS {table} CASCADE;")  # CASCADE deletes the tables even if they have connections to other tables
        with engine.connect() as connection:
            with connection.begin():
                connection.execute(sql)

delete_tables(tables)  # Comment this line out after running it once.
FOR SQLALCHEMY BEFORE VERSION 2 (I guess)
def delete_tables(tables):
    for table in tables:
        engine.execute(f"DROP TABLE IF EXISTS {table} CASCADE;")

delete_tables(tables)  # Comment this line out after running it once.
After you have deployed and run this code once, all your tables will be deleted.
IMPORTANT: Delete or comment out this code afterwards, otherwise you will delete all your tables every time you deploy your code to AWS.
I have a PostgreSQL schema that resides in a schema.sql file that gets run each time a database connection is initiated in Python. It looks something like:
CREATE TABLE IF NOT EXISTS users (
id SERIAL PRIMARY KEY,
facebook_id TEXT NOT NULL,
name TEXT NOT NULL,
access_token TEXT,
created TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT NOW()
);
The app is deployed on Heroku, using their PostgreSQL and everything works as expected.
Now, what if I want to change the structure of my users table a bit? What is the easiest and best way to do this? I thought of writing an ALTER... line in schema.sql for each change I want to make to the database, but I don't think this is the best approach, since after some time the schema file will be full of ALTERs and it will slow down my app.
What's the recommended way to deploy changes made to a database?
Running a hard-coded script on each connection is not a great way to handle schema management.
You need to either manage the schema manually, or use a full-fledged tool that keeps a schema version identifier in the database, checks it, and applies upgrade scripts in order until the database matches the latest version the application knows about. Rails calls this "migrations", and it kind of works. If you're using Django, it has schema management too.
If you're not using a framework like that, I suggest just writing your own schema upgrade scripts. Add a "schema_version" table with a single row. SELECT it when the app first starts after a redeploy, and if it's lower than the current version the app knows about, apply the update script(s) in order, e.g. schema_1_to_2, schema_2_to_3, etc.
I don't recommend doing this on connect; do it on app start, or better, as a special maintenance command. If you do it on every connection, you'll have multiple connections trying to make the same changes and you'll end up with duplicated columns and all sorts of other mess.
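For reference, a minimal sketch of that hand-rolled approach using psycopg2; the script names follow the convention above, and the schema_version table layout is an assumption:

import psycopg2

UPGRADES = [("schema_1_to_2.sql", 2), ("schema_2_to_3.sql", 3)]  # applied in order

def migrate(dsn):
    conn = psycopg2.connect(dsn)
    with conn, conn.cursor() as cur:  # commits on success, rolls back on error
        cur.execute("CREATE TABLE IF NOT EXISTS schema_version (version INTEGER NOT NULL)")
        cur.execute("SELECT version FROM schema_version")
        row = cur.fetchone()
        current = row[0] if row else 1
        if row is None:
            cur.execute("INSERT INTO schema_version VALUES (1)")
        for script, version in UPGRADES:
            if version > current:
                with open(script) as f:
                    cur.execute(f.read())
                cur.execute("UPDATE schema_version SET version = %s", (version,))
    conn.close()

Run it once at app start (or as a deploy step), not on every connection.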
I support several Django apps on Heroku with Postgres. I just connect via pgAdmin and run my scripts when changes are required. I don't see any need for running a script every time a connection is made.