I have flask-migrate (version 1.8.0) working well with a sqlite database in a development environment. Now I would like to migrate our data to MySQL and maintain all of our migration history (so it stays in sync with our Flask-SQLAlchemy models in our git repository).
I created an empty MySQL database, and after changing my SQLALCHEMY_DATABASE_URI, I tried running:
python manage.py db upgrade
That resulted in an error about not being able to drop the table migrate_version. (Which makes sense, since this is a new database, although sqlite actually contains the table 'alembic_version' not 'migrate_version'.)
So, I tried to initialize this new database:
python manage.py db init
Now I get an error: "Directory migrations already exists".
I can rename that folder and re-run the command with no problem, but then I lose all of my previous migrations. I think we would have the same issues when we also transition to our test and production environments.
I've seen in the docs Flask-Migrate has multiple database support, but I think that looks to be more for maintaining multiple databases in a single development environment. Is there a way to have Flask-Migrate track changes across multiple development environments?
To address the real issue in the OP's question, you need to use the --directory flag to initialize a migrations directory specific to each environment's database.
From the flask-migrate documentation:
All commands also take a --directory DIRECTORY option that points to
the directory containing the migration scripts. If this argument is
omitted the directory used is migrations.
So:
flask db init --directory=[DIRECTORY NAME]
Flask-Migrate itself has no memory of your database, so when running migration commands with flask db, it will reference the specified migrations directory (by default, when the --directory flag is not used, this is called 'migrations').
flask db migrate --directory=[DIRECTORY_NAME]
etc.
It goes without saying that the flask command will reference the application context as configured by your config file or environment variables.
I typically create a migration directory for each environment with an explicit reference to the environment: e.g. development and staging, with something like 'migrations_dev' and 'migrations_stg'.
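If you'd rather not pass --directory on every command, Flask-Migrate's Migrate() constructor also accepts a directory argument, so each environment's configuration can pick its own folder. Here is a minimal sketch; the APP_SETTINGS and MIGRATIONS_DIR config keys are my own names for illustration, not part of Flask-Migrate:

import os

from flask import Flask
from flask_migrate import Migrate
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
# APP_SETTINGS and MIGRATIONS_DIR are assumptions for this sketch.
app.config.from_object(os.environ.get('APP_SETTINGS', 'config.DevelopmentConfig'))

db = SQLAlchemy(app)

# Point Flask-Migrate at an environment-specific scripts folder,
# e.g. 'migrations_dev' or 'migrations_stg'.
migrate = Migrate(app, db, directory=app.config.get('MIGRATIONS_DIR', 'migrations'))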
Hope this is helpful.
Here are the steps I took to transition from SQLite to MySQL and maintain all the migration history. I highly suspect there is a better way to do this, but it worked for me.
Initialize the new, blank database using another folder for your "new" migrations
python manage.py db init -d tmp
Create a migration
python manage.py db migrate -d tmp -m "Bring MySQL up to date"
Apply the migration
python manage.py db upgrade -d tmp
Now, you can delete the "tmp" migrations folder. You no longer need it. Find the HEAD migration. Look for 'Rev: your_revision_num (head)'
python manage.py db show
Run an update statement against your MySQL database
update alembic_version set version_num = 'your_revision_num'
Now your MySQL database schema should match your old SQLite schema and you'll have your full migration history as well.
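If you'd rather run that update from Python instead of the MySQL client, here is a minimal sketch using SQLAlchemy; the connection URI, the pymysql driver, and the revision id are placeholders you'd substitute with your own:

from sqlalchemy import create_engine, text

# Placeholder URI and revision id -- substitute your real MySQL connection
# details and the revision reported as '(head)' by the migration history.
engine = create_engine('mysql+pymysql://user:password@localhost/mydb')

with engine.begin() as conn:
    conn.execute(
        text("UPDATE alembic_version SET version_num = :rev"),
        {"rev": "your_revision_num"},
    )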
The migrate_version table is used to track migrations by the sqlalchemy-migrate package. Alembic, the package used by Flask-Migrate, uses an alembic_version table, as you know.
So my guess is that the MySQL database you want to use has been used before by an application that was under sqlalchemy-migrate control.
I recommend that you delete the MySQL database and make a brand new one.
I also had the same need on my side. I wanted to reproduce the command that exists in the laravel framework to make a migration in different environments:
php artisan migrate --env prod
With this kind of command, you can run a migration in different environments. I have not found a direct equivalent in Flask.
A "flask db upgrade --env prod" command does not exist; in particular, there is no --env argument.
As a workaround, I created a command that allows me to change the environment:
flask env --name prod
That command is a custom flask command that will copy the content of the .env.prod file to .env.
This way, the application is in the prod environment. And we can then launch the migration command on the prod environment.
How to use the custom env command to migrate to different environments?
To start the migration in the staging environment, just run these two commands:
flask env --name staging
flask db upgrade
Then if you want to start the migration in the production environment, just run these two commands:
flask env --name prod
flask db upgrade
How to create the custom command flask env?
First, you need to know how to create a custom command in Flask. Just follow the official Flask documentation.
Here is the content of my custom command which allows to change the environment:
import os
import shutil

import click
from flask.cli import with_appcontext


@click.command('env')
@click.option("--name", is_flag=False, flag_value="Flag", default="")
@with_appcontext
def env(name):
    # With no --name value, print the currently active environment
    if name == '':
        print(os.getenv('FLASK_ENV'))
        return
    if name not in ['dev', 'prod', 'local', 'staging']:
        print('That env does not exist')
        return
    # Overwrite .env with the selected environment file, e.g. .env.prod
    shutil.copyfile('.env.' + name, '.env')
    return
In my setup, I have 4 environments: local, dev, staging, prod.
And I have 4 corresponding .env files: .env.local, .env.staging, .env.prod, .env.dev
The custom flask env command simply copies the contents of the selected environment file into the .env file that the Flask application loads at start-up.
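For completeness, the command still has to be registered on the application before flask env is available on the command line. A minimal sketch, assuming an application factory and a commands module (both names are my own):

from flask import Flask


def create_app():
    app = Flask(__name__)

    # Register the custom command so 'flask env --name prod' is available.
    from .commands import env
    app.cli.add_command(env)

    return app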
I have a Flask + React application that is running on Debian 11 via Nginx and Gunicorn. In development, everything works great as it uses SQLAlchemy + SQLite to manage data queries.
In production, my .env file includes the connection details to the PostgreSQL database. After that is when it gets weird (at least for me, but this may be something that people commonly run into that my hours on Google just didn't turn up):
When I installed the app on production and set the .env file, I performed the flask db upgrade, and it wrote to the PostgreSQL database (confirmed tables exist).
When I ran the command line command to create an admin user in the new environment, it created my user in PostgreSQL on the users table with my admin flag.
When I go into flask shell I can import db from the app (which is just an instantiation of SQLAlchemy) and import User from the AUTH API. Once those are imported, I can run User.query.all() and it will return all users from the PostgreSQL table. I've even ensured there is a unique user in that table by manually creating it in the DB, to validate that it doesn't get created in two systems.
When I use curl to hit the API to log in, it says that the users table is not found and references that it tried to query SQLite.
To summarize, I cannot figure out why the command line/shell interfaces correctly pull in the PostgreSQL connection but hitting the API falls back to SQLite. I'm not even sure where to start debugging. Even in the os_env call in the main app that says "pull from the env or fall back to development", I made the fallback = production.
All commands are executed in the venv. Gunicorn is running within the same venv, validated by tailing the logs that Supervisor collects for Gunicorn.
I am happy to provide any code that might be needed, but I am unsure what is and is not relevant. If it helps, the original base was built off this boilerplate; we have just expanded the API calls and models, defined a PostgreSQL connection string in production, and left the SQLite connection string in development. The operation of the app is otherwise exactly the same: https://github.com/a-luna/flask-api-tutorial/tree/part-6
I finally found the answer.
When you launch Gunicorn, it ignores your .env file and any environment variables you may have set in the Shell. Even when your app specifically loads the .env file, Gunicorn still ignores it.
There are a variety of solutions but, since I was using Supervisor and also had a large number of variables to load, using the --env flag on Gunicorn was not an option.
Instead, add this to the gunicorn executable script itself. Since I was using a virtualenv and had installed Gunicorn via pip, my gunicorn command was running from ./project-root/venv/bin/gunicorn.
Modify that file as so:
At the top where your imports are, you will want to add:
import os
from dotenv import load_dotenv
Then, anywhere before you actually load the app (I put mine right after all of the imports), add this block of code where I have two environment files called .env and .flaskenv:
for env_file in ('.env', '.flaskenv'):
    env = os.path.join(os.getcwd(), env_file)
    if os.path.exists(env):
        load_dotenv(env)
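For reference, the whole modified entry point ends up looking roughly like this; the shebang path and the boilerplate pip generates can differ slightly from machine to machine, so treat it as a sketch rather than an exact file:

#!/srv/project-root/venv/bin/python
import os
import re
import sys

from dotenv import load_dotenv
from gunicorn.app.wsgiapp import run

# Load the env files before Gunicorn imports the Flask app.
for env_file in ('.env', '.flaskenv'):
    env = os.path.join(os.getcwd(), env_file)
    if os.path.exists(env):
        load_dotenv(env)

if __name__ == '__main__':
    sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])
    sys.exit(run())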
I have a Django app in a Github repo. Through a Github action, it is deployed as a Python app in Azure.
In the Azure portal:
1- In "Configuration > Application" settings, I've defined POST_BUILD_COMMAND as
python manage.py makemigrations && python manage.py migrate
as described in Configure a Linux Python app for Azure App Service.
2- I have configured a deployment slot and a production slot. It offers a 'Swap' option, to push the live app in the deployment slot to production.
However, I'm under the impression that doing that doesn't run the POST_BUILD_COMMAND command for the production Django app, leaving the database unaltered - which means that the production frontend gets the new fields/updates, but the migrations don't occur.
What's the best way to perform the migrations in production?
Would the correct way be to set "Configurations > General settings > Startup Command" to 'python manage.py makemigrations && python manage.py migrate'? Would that work?
The best way to perform the migration is with a YAML pipeline configuration.
When moving to production, be careful that the libraries the app requires are installed, and make sure the venv folder is ignored so it is not migrated along with the code.
Create a dev environment with live data and add the database environment variables in the Azure portal.
Once the database environment variables are set, the dev environment can work against live data.
Create the production environment repo; instead of creating the connection from GitHub again, branch it off the dev repo.
Make the dev repo the parent repo of the prod repo, using the same methodology as in the documentation you followed.
Configure the pipeline based on the environment variables.
Create an automation procedure to update the production environment on an interval basis.
python manage.py makemigrations && python manage.py migrate
Use the above command to perform the migrations in dev. Use the Azure portal terminal to create the YAML files and change the parameters; this can also be done from VS Code, which is the better method.
The document you linked is the right flow reference for the procedure:
GitHub -> Azure Dev Repo -> Backup -> Create Prod Repo -> Migrate from Dev to Prod -> Exclude the ignored (venv) folder
This is my model.
class Otp(models.Model):
    user = models.OneToOneField('CustomUser', on_delete=models.CASCADE)
    onetime = models.CharField(max_length=25, default=calculated, unique=True)
    link = models.UUIDField(default=uuid.uuid4, unique=True)
    created_at = models.DateTimeField(auto_now_add=True)
I just added the column 'link'. It works great in my development environment.
On my server, I go into the venv and run 'makemigrations', which sees the changes. Then 'migrate', and everything is applied.
Now when I access a page that uses the table, I get 'column does not exist'.
I go into postgres and check the table. No changes were applied. The new column is missing.
I try adding and removing back and forth. The migrations run flawlessly, but there are no changes to the table.
I even dropped the table and re-ran migrations to create it. It says there are no changes.
I'm starting to think the migrations are being applied to another database. But how can I check this? It should not be another one; I only have one.
EDIT with update on my investigation.
My database connection is set in production.py for postgres; development.py uses sqlite.
So I checked the sqlite db on my server, and sure enough, the migrations are being applied to sqlite, not postgres.
How come it's choosing sqlite all of a sudden?
This is how I apply changes.
1. Make changes in models.py for app
2. Activate venv to be able to run migrations
3. Run 'python manage.py makemigrations APPNAME'
4. Run 'python manage.py migrate APPNAME'
So what is making it run the migrations against sqlite and not postgres?
Your description does suggest that you are interfacing with a different database than you expect on your server. I'd recommend checking the DATABASES setting in your Django settings file. The credentials you provide there will likely need to be different in your production and development environments.
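A quick way to see which database a given settings module actually resolves to on the server is a tiny check script like the one below; the settings path myapp.production is an assumption, substitute your own:

# check_db.py -- run it with the same user/environment your web server uses
import os

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myapp.production")

import django
django.setup()

from django.conf import settings

default = settings.DATABASES["default"]
print(default["ENGINE"])       # e.g. django.db.backends.postgresql
print(default.get("NAME"))     # database name or sqlite file path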
In response to your most recent edit, make sure that you are running your Django instance on your server using your production settings. Based on what you have described, I assume your wsgi.py file includes a line resembling
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myapp.development")
Django itself has no notion of development vs production settings. It only knows to load the setting module that you tell it to.
When running in your production environment, you need to explicitly specify the use of your "production settings". This can be accomplished most easily by setting the environment variable DJANGO_SETTINGS_MODULE to the Python path of your settings module. Do note that this must be accomplished by the web server that runs your Django app. Simply sshing in and running export DJANGO_SETTINGS_MODULE=myapp.production won't cut it.
A common practice is to configure your wsgi.py endpoint to default to the production settings, and to explicitly opt for the development settings on your local machine, e.g. via
$ ./manage.py --settings=myapp.development_settings <command>
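A minimal sketch of such a wsgi.py, assuming the settings modules are called myapp.production and myapp.development:

# wsgi.py -- the web server imports this, so default to production settings
import os

from django.core.wsgi import get_wsgi_application

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myapp.production")

application = get_wsgi_application()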
I am trying to script a series of examples where the reader incrementally builds a web application. The first stage takes place with Mezzanine's default configuration, using the built-in SQLite:
sudo pip install mezzanine
sudo -u mezzanine python manage.py createdb
After the initial examples are complete, I want to switch the existing setup to a mysql backend. If that is too complex, I at least want to re-create the built-in examples that come with Mezzanine on the new backend, but Mezzanine won't allow re-running createdb
CommandError: Database already created, you probably want the migrate command
This seems like something that should be incredibly simple, yet I can't seem to get it quite right (and migrate alone does not do the trick). Google and the official docs aren't helping either.
Steps I am taking: first, I create a MySQL database on Amazon RDS. Then, I set appropriate configuration for it in myapp/local_settings (I am sure these steps are correct). Then:
sudo apt install python-mysqldb
sudo -u mezzanine python /srv/mezzanine/manage.py migrate
but then:
Running migrations:
No migrations to apply.
What am I missing?
The Mezzanine project is based on Django, the Python framework.
Unless you encounter a Mezzanine-specific problem, most issues can be solved by figuring out how it's done the Django way.
Migrations are just Django's way of referring to alterations & amendments within the DB, i.e., the schema (because apps evolve & databases are metamorphic).
In order to actually migrate the data, however, you could:
Export the contents from the current database, eg:
./manage.py dumpdata > db-dump-in-json.json
./manage.py dumpdata --format=xml > db-dump-in-xml.xml
This may crash if there is too much data or not enough memory; in that case, use native DB tools to get the dump.
Create and add the settings for the new DB in settings.py:
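For example, a minimal sketch of a MySQL entry (the database name, credentials and RDS host are placeholders):

# settings.py (or local_settings.py) -- placeholder credentials
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'mydatabase',
        'USER': 'myuser',
        'PASSWORD': 'mypassword',
        'HOST': 'mydb.xxxxxxxx.us-east-1.rds.amazonaws.com',
        'PORT': '3306',
    }
}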
Create the tables and set them up (based on your models) on the newly defined DB:
./manage.py makemigrations
./manage.py migrate
createdb = syncdb (create) + migrate (set) combined
And reload the exported data there:
./manage.py loaddata db-dump-in-json.json
So I have my Django app running and I just added South. I performed some migrations which worked fine locally, but I am seeing some database errors on my Heroku version. I'd like to view the current schema for my database both locally and on Heroku so I can compare and see exactly what is different. Is there an easy way to do this from the command line, or a better way to debug this?
From the command line you should be able to do heroku pg:psql to connect directly via PSQL to your database and from in there \dt will show you your tables and \d <tablename> will show you your table schema.
Locally, Django provides a management command that will launch you into your db's shell.
python manage.py dbshell
Django also provides a management command that will display the SQL for any app you have configured in your project, regardless of the database manager (SQLite, MySQL, etc.) that you're using:
python manage.py sqlall <app name>
Try it! It could be useful!