I have a database on Heroku I'm trying to copy to my local machine.
I created a backup of the database by doing:
heroku pgbackups:capture
This creates a dump file of the database, which I downloaded by creating a URL link to it:
heroku pgbackups:url b004
But now I have a dump file and don't really know what to do with it. I tried
pg_restore
to restore the database but I don't know where that information went. I basically want to create a .db file out of this dump file. Is that possible?
Ultimately my end goal is to access this database -- so if another method of copying the db is better, I'm fine with that.
Heroku does not allow you to use SQLite files, as it has a read-only file system. But you can use Django to dump the data from Heroku into a JSON file via the dumpdata command, and then import that into your local dev environment.
Because it can be difficult to run commands that generate files on the web server using heroku run, I suggest you instead install django-smuggler, which makes this operation a point-and-click affair in the admin.
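If you go the django-smuggler route, the setup is only a couple of lines; the snippets below are a sketch based on its README, so adapt the URL prefixes and app list to your project:

# settings.py: register the app
INSTALLED_APPS = [
    # ... your existing apps ...
    'smuggler',
]

# urls.py: smuggler's URLs must be included before the admin's
from django.contrib import admin
from django.urls import include, path

urlpatterns = [
    path('admin/', include('smuggler.urls')),
    path('admin/', admin.site.urls),
]

After that, dump and load actions are available from the admin, so you can download the data as JSON from the Heroku app and upload it into your local database.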
First of all, you should install PostgreSQL on your local machine.
PostgreSQL cheat sheet for django beginners
Then, import your dump file with the pg_restore command:
pg_restore -d yournewdb -U yournewuser --role=yournewuser /tmp/b001.dump
That's all; your data is now cloned from your Heroku app.
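If it helps, the full sequence on the local machine looks roughly like this; the database name, role and dump path are the placeholders from the command above, and depending on your toolbelt version the newer heroku pg:backups commands may apply instead:

# download the dump that pgbackups:url points at
curl -o /tmp/b001.dump "$(heroku pgbackups:url b004)"

# create an empty role and database to restore into
createuser yournewuser
createdb -O yournewuser yournewdb

# load the dump; --no-owner avoids errors if Heroku's roles don't exist locally
pg_restore --no-owner -d yournewdb -U yournewuser /tmp/b001.dump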
I have a Flask + React application that is running on Debian 11 via Nginx and Gunicorn. In development, everything works great, as it uses SQLAlchemy + SQLite to manage data queries.
In production, my .env file includes the connection details to the PostgreSQL database. After that is when it gets weird (at least for me, but this may be something that people commonly run into that my hours on Google just didn't turn up):
When I installed the app on production and set the .env file, I performed the flask db upgrade, and it wrote to the PostgreSQL database (confirmed tables exist).
When I ran the command line command to create an admin user in the new environment, it created my user in PostgreSQL on the users table with my admin flag.
When I go into flask shell I can import db from the app (which is just an instantiation of SQLAlchemy) and import User from the AUTH API. Once those are imported, I can run User.query.all() and it will return all users from the PostgreSQL table. I've even ensured there is a unique user in that table by manually creating it in the DB to validate that it doesn't get created in two systems.
When I use curl to hit the API to log in, it says that the users table is not found and shows that it tried to query SQLite.
To summarize, I cannot figure out why command line/shell interfaces correctly pull in the PostgreSQL connection but hitting the API falls back to SQLite. I'm not even sure where to start debugging... even in the environment lookup in the main app that says, "pull from the env or fall back to development," I made the fallback production.
All commands are executed in the venv. Gunicorn is running within the same venv, which I validated by tailing the logs that Supervisor collects for Gunicorn.
I am happy to provide any code that might be needed, but I am unsure what is and is not relevant. If it helps, the original base was built off of this boilerplate, and we have just expanded the API calls and models, defined a connection string to PostgreSQL in production, and left the SQLite connection string in development; the operation of the app works exactly the same: https://github.com/a-luna/flask-api-tutorial/tree/part-6
I finally found the answer.
When you launch Gunicorn, it ignores your .env file and any environment variables you may have set in the Shell. Even when your app specifically loads the .env file, Gunicorn still ignores it.
There are a variety of solutions but, since I was using Supervisor and also had a large number of variables to load, using the --env flag on Gunicorn was not an option.
Instead, add this to your Gunicorn file itself. Since I was using a virtualenv and had installed Gunicorn via pip, my gunicorn command was running from ./project-root/venv/bin/gunicorn.
Modify that file as so:
At the top where your imports are, you will want to add:
import os
from dotenv import load_dotenv
Then, anywhere before you actually load the app (I put mine right after all of the imports), add this block of code where I have two environment files called .env and .flaskenv:
# load each env file from the directory Gunicorn was started in, if it exists
for env_file in ('.env', '.flaskenv'):
    env = os.path.join(os.getcwd(), env_file)
    if os.path.exists(env):
        load_dotenv(env)
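For completeness, this is roughly how the app-side config can read those variables once they are loaded; the variable names (FLASK_ENV, DATABASE_URL) and the fallback URIs below are assumptions sketched from the boilerplate, not the exact code from the question:

import os

# sketch of a config module: choose the database from the environment,
# falling back to SQLite only when explicitly in development
ENV = os.environ.get('FLASK_ENV', 'production')
SQLALCHEMY_DATABASE_URI = os.environ.get(
    'DATABASE_URL',
    'sqlite:///app_dev.db' if ENV == 'development'
    else 'postgresql://localhost/app_prod',
)

With the load_dotenv block above in the gunicorn script, DATABASE_URL is present by the time this module is imported, so the SQLite fallback is never hit under Gunicorn.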
I changed the database from SQLite3 to PostgreSQL in my Django project. Is it possible to store my new database in the GitHub repository so that after cloning and running the command
python manage.py runserver
the project starts with the whole database?
You cannot commit the database as such; instead, you can create fixtures. Whenever someone clones the project, they can simply load those fixtures to populate the database.
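As a rough sketch (the fixture path and flags are just one common choice), you would dump the data into a fixture file that gets committed to the repository, then load it after cloning:

# on the machine that already has the data: write a fixture into the repo
python manage.py dumpdata --natural-foreign --natural-primary --indent 2 > fixtures/initial_data.json

# after cloning, on a fresh machine
python manage.py migrate
python manage.py loaddata fixtures/initial_data.json
python manage.py runserver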
So I have created a website with Flask and uploaded it to Heroku.
The website has a database which is called test.db
In cmd I did
heroku vim -a my-app;
So now when I do dir, I get all my files, including test.db. But how do I open test.db? I tried open and read test.db, but neither of those seems to work. Do you know a command to open this database file?
Copy it to your local machine via heroku ps:copy filename. Open it on the local machine as usual.
Though in general you shouldn't rely on a database that is written to the file system, because on Heroku the dyno file system is ephemeral and your files are wiped at least once a day.
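Once the file is on your machine, and assuming test.db is a SQLite file (the usual case for a small Flask app), Python's built-in sqlite3 module is enough to look inside it; the query below just lists the tables:

import sqlite3

# open the copied file and list the tables it contains
conn = sqlite3.connect('test.db')
tables = conn.execute("SELECT name FROM sqlite_master WHERE type='table'").fetchall()
print([name for (name,) in tables])
conn.close()

The sqlite3 command-line client (sqlite3 test.db) works just as well if you have it installed.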
I am a newbie to Python and Django, but this time I need a fast solution. I've got a problem with the hosting where my Django application is deployed, so I need to migrate to another server, but I have no SSH or Telnet access to the server, only an FTP connection. I need to export data from the Django database. I wanted to write a script and put it somewhere in the Django application for data export, but when I put my modification on the server, the behavior does not change (as if nothing changed). Also, when I remove .pyc files from Django (for example views.pyc) there are no changes, and when I remove a .py file (for example views.py) nothing changes either.
As far as I have read about Django, it is possible that the server is running with the "--noreload" option.
So the question is: is there any possible way to dump the database only via FTP and Django/Python?
(remote connection via MySQL is disabled)
FTP stands for "file transfer protocol", not for "remote shell", so no, you cannot use FTP to execute a command / program / script / whatever. But why don't you just ask your hosting provider how to get a dump of your data?
I tried to create a SQLAlchemy project in Pyramid, and when I run the server, I get this error:
Pyramid is having a problem using your SQL database. The problem
might be caused by one of the following things:
1. You may need to run the "initialize_MyProject_db" script
to initialize your database tables. Check your virtual
environment's "bin" directory for this script and try to run it.
2. Your database server may not be running. Check that the
database server referred to by the "sqlalchemy.url" setting in
your "development.ini" file is running.
After you fix the problem, please restart the Pyramid application to
try it again.
When I check my development.ini file, the SQLite database is configured like this:
sqlalchemy.url = sqlite:///%(here)s/MyProject.sqlite
What needs to be changed here to configure it correctly?
I am running on a Linux box.
You need to create a database in SQLite, Postgres, or any other backend you prefer. Then go to the development.ini file, edit sqlalchemy.url = sqlite:///%(here)s/MyProject.sqlite to point at your database, and run the initialize_MyProject_db development.ini command. If you are using MySQL, that line should be
sqlalchemy.url = mysql://username:password@host/dbname
Just trying Pyramid for the first time, I faced the same problem; after many command combinations, I got the solution.
Run from the project root, the command:
initialize_tutorial_db development.ini
Info taken from Wiki2 SQLAlchemy tutorial
It says right there in the first point - you need to run initialize_MyProject_db development.ini to create the database.
If that's not the case, please post the log from running the server.