So I have a MySQL database for my Django project, which I think is located in the default directory (wherever that is). I want to move my Django project to DigitalOcean; how do I move that database along with it? The my.cnf file is located in the same directory as the settings.py file for the Django project. I am using macOS.
In short: how do I copy my database files into my Django project directory and reconfigure MySQL to use that path?
Compare the MySQL versions of both servers. Ensure the new version isn't older than the current one. Also check that it's not two or more major versions ahead of the current one.
Ensure MySQL is stopped.
Copy/move the entire datadir to the new location. There should be no files in this new location that weren't in the original.
In the my.cnf file set datadir to the new location.
Copy the my.cnf file to the location where MySQL expects to read it on the new server. Look at the top of the output of mysqld --help --verbose if you're unsure where that is.
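For illustration, the whole sequence might look something like this on a Linux server (the paths and service commands are examples and will differ per system; on macOS you might use mysql.server stop instead of systemctl):

    # Stop MySQL first
    sudo systemctl stop mysql

    # Copy the entire data directory, preserving ownership and permissions
    sudo rsync -av /var/lib/mysql/ /srv/mysql-data/
    sudo chown -R mysql:mysql /srv/mysql-data

    # In my.cnf, under the [mysqld] section:
    #   datadir = /srv/mysql-data

    sudo systemctl start mysql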
There are many things to consider here:
On DigitalOcean, will you have the MySQL server running on the same machine as the Django server, i.e. just like your local machine?
If yes, you might not have to change anything in the Django settings (as long as you keep the user and password the same as locally).
If no, you will need to change HOST in the Django database settings to the IP of that server (there are many things involved here...)
Regarding the data transfer, the best practice is to dump the MySQL database and import it into the new one. There are many guides available on the web, including one from DigitalOcean.
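A minimal sketch of that dump-and-import flow (database name, user and server address are placeholders):

    # On the old machine: dump the database
    mysqldump -u myuser -p mydatabase > mydatabase.sql

    # Copy the dump to the new server
    scp mydatabase.sql user@your-droplet-ip:~

    # On the new server: create the database and load the dump
    mysql -u myuser -p -e "CREATE DATABASE mydatabase"
    mysql -u myuser -p mydatabase < mydatabase.sql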
I am working on a Django-based application whose location on my disk is home/user/Documents/project/application. The application takes some values from the user and writes them into a file located in a folder under the project directory, i.e. home/user/Documents/project/folder/file. While running the development server with python manage.py runserver, everything worked fine. However, after deployment, application/views.py, which accesses the file via open('folder/path','w'), can no longer find it, because the working directory defaults to /var/www when the app is deployed on an Apache server using mod_wsgi.
Now, I am not putting the folder into /var/www because it is not good practice to put any Python code there: it might become readable by clients, which is a major security threat. Please let me know how I can point the deployed application at the correct file for reading and writing.
The real solution is to install your data files in /srv/data/myapp or some such, so that you can give the webserver user the correct permissions to only those directories. Whether you choose to put your code in /var/www or not is a separate question, but I would suggest putting at least your wsgi file there (and, of course, specifying your DocumentRoot correctly).
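Independent of where the files live, the usual fix for the open('folder/path','w') problem is to build an absolute path from the project root instead of relying on the working directory. A sketch, assuming the BASE_DIR variable that startproject generates (DATA_FILE is a made-up name for this example):

    # settings.py
    import os
    BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
    DATA_FILE = os.path.join(BASE_DIR, 'folder', 'file')

    # views.py
    from django.conf import settings

    def save_values(values):
        # Absolute path, so it works the same under runserver and mod_wsgi
        with open(settings.DATA_FILE, 'w') as f:
            f.write(values)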
We are developing a B2B application with Django. For each client, we launch a new virtual server machine and a database, so each client has a separate installation of our application. (We do this because, by the nature of our application, one client may require heavy use of resources at certain times, and we do not want one client's state to affect the others.)
Each of these installations is bound to a central repository. If we update the application code, when we push to the master branch, all installations detect this, pull the latest version of the code and restart the application.
If we update the database schema, on the other hand, we currently need to run migrations manually by connecting to each DB instance one by one (the settings.py file reads the database settings from an external file which is not in the repo; we add this file manually upon installation).
Can we automate this process? i.e. given a list of databases, is it possible to run migrations on these databases with a single command?
"If we update the application code, when we push to the master branch, all installations detect this, pull the latest version of the code and restart the application."
I assume you have some sort of automation in place to pull the code and restart the web server. You can just add the migration step to this automation process. Each server's settings.py would read the database details from the external file and run the migration for you.
So the flow should be something like:
Pull the code
Migrate
Collect static files
Restart the web server
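As a sketch, the script each installation runs on a push to master could simply grow a migrate step (the restart command is an example; use whatever your servers actually use):

    #!/bin/sh
    set -e
    git pull origin master
    python manage.py migrate --noinput        # picks up DB details via settings.py
    python manage.py collectstatic --noinput
    sudo service apache2 restart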
First, I'd look really hard for a way to launch a script on the client side that does what masnun suggests.
Second, if that does not work, then I'd try the following:
On your local machine, configure all client databases in the DATABASES settings variable
Make sure you can connect to all the client databases, this may need some fiddling
Then run the manage.py migrate command with the extra flag --database=mydatabase (where "mydatabase" is the alias provided in the configuration) for EACH client database
I have not tried this, but I don't see why it wouldn't work ...
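For example, a small script along these lines could loop over every alias in DATABASES and migrate each one (myproject.settings is a placeholder for your settings module, and this is untested against any real multi-client setup):

    # migrate_all.py -- a sketch, not a drop-in solution
    import os
    import django

    os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myproject.settings')
    django.setup()

    from django.conf import settings
    from django.core.management import call_command

    for alias in settings.DATABASES:
        print('Migrating %s ...' % alias)
        call_command('migrate', database=alias, interactive=False)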
This is a bit embarrassing, but I'm a Django noob and I couldn't find a simple solution to this:
I have written a Django app in a local VM that I now want to deploy to a "production" server. App works like a charm locally.
Now my IT colleague has set up the server with Django and that also works fine. I can open it via the Web and I get the usual "Congratulations on your first Django-powered page". I can also log into the admin interface. The project has been created.
This is a very low-key mini project and I'm not too familiar with git, so we've decided to just push files via FTP. (And I want to stick with that if at all possible.) So I uploaded the app folder into the project folder and also adjusted the project's settings.py and urls.py.
However, nothing seems to be happening on the server's end. The welcome page is the same, the app does not show up in the admin interface and the URLs won't be resolved as hoped.
Any suggestions what I should have done / done differently?
You need to restart Apache, or whatever is running your Django project. Your .py files are loaded into memory when the server process starts, so changes to them aren't picked up until it is restarted.
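For example (the exact commands depend on how your colleague set the server up):

    sudo service apache2 restart

    # With mod_wsgi in daemon mode, touching the WSGI script file is
    # enough to make it reload the code on the next request:
    touch /path/to/project/wsgi.py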
"Any suggestions what I should have done / done differently?"
You should be using git/Jenkins/proper deployment techniques. I know you said you've decided not to, but you're going to miss out on important things like being able to keep track of changes and run unit tests.
I have a small team of 5 and I need help finding resources or advice on how to have one centralized MySQL database. We all work remotely and host all the files in one GitHub repo to centralize everything, so it would be a pain to have 5 local databases on different computers, all with the same information, as we work on scripting logic from the models etc. that will be the same.
github_cm_dev: the main directory inside the GitHub repo
venv: the virtual environment (so anyone on the team can activate and use the environment easily)
indieitude-project/indieitude directory: has all of my Django applications and files
indieitude-project/indieitude/indieitude directory: has all the config files, i.e. settings.py, urls.py etc.
Any advice or thoughts on the matter?
Normally there is one master database -- the production DB. There are three general options here:
1) If the data isn't too sensitive and a dev doesn't need every last change, you can simply let each dev take a dump file from production and apply it to their machine.
2) If the data is sensitive (e.g. w/ user passwords), you can create a "dummy" version of the DB and use that as the canonical data for local dev and for testing.
3) If you need every change on each user's machine, then you can preload the entries you need with Django fixture files; see https://docs.djangoproject.com/en/dev/howto/initial-data/. You can create a fixture file from a DB pretty easily. A fixture file is just text, so you can put it in your git repo.
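For option 3, the round trip might look like this ("myapp" is a placeholder app name):

    # Export the entries you need from an existing DB into a fixture file
    python manage.py dumpdata myapp --indent=2 > myapp/fixtures/initial_data.json

    # Each dev loads it after pulling the repo
    python manage.py loaddata initial_data.json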
I'm writing a web application in Python (on Apache server on a Linux system) that needs to connect to a Postgres database. It therefore needs a valid password for the database server. It seems rather unsatisfactory to hard code the password in my Python files.
I did wonder about using a .pgpass file, but it would need to belong to the www-data user, right? By default, there is no /home/www-data directory, which is where I would have expected to store the .pgpass file. Can I just create such a directory and store the .pgpass file there? And if not, then what is the "correct" way to enable my Python scripts to connect to the database?
No matter what approach you use, other apps running as www-data will be able to read your password and log in to the database as you. Using peer auth won't help you out; it'll still trust all apps running under www-data.
If you want your application to be able to isolate its data from other applications, you'll need to run it under a separate user ID. The main approaches to this are:
Use the apache suexec module to run scripts as a separate user;
Use fast-cgi (fcgi) or scgi to run the cgi as a different user; or
Have the app run its own minimal HTTP server and have Apache reverse proxy for it
Of these, by far the best option is usually to use scgi/fcgi. It lets you easily run your app as a different unix user but avoids the complexity and overhead of reverse proxying.
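For reference, if you do end up using a .pgpass file, it goes in the home directory of whatever user the app runs as, and contains one line per connection:

    # ~/.pgpass (these values are examples)
    # format: hostname:port:database:username:password
    localhost:5432:myappdb:myappuser:s3cret

It must also be locked down with chmod 0600 ~/.pgpass, or libpq will ignore it.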
Install the application and its config files in their own directory, separate from the static files directory, and readable only by the application user.
Run the application as a separate user by using the WSGIDaemonProcess directive.
All of that, and much more, is clearly described on the mod_wsgi site, in the Quick Configuration Guide, Configuration Guidelines and Configuration Directives.
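A minimal sketch of such a configuration (the user, group and paths are examples, not prescriptions):

    WSGIDaemonProcess myapp user=myappuser group=myappgroup python-path=/srv/myapp
    WSGIProcessGroup myapp
    WSGIScriptAlias / /srv/myapp/myproject/wsgi.py

    <Directory /srv/myapp/myproject>
        <Files wsgi.py>
            Require all granted
        </Files>
    </Directory>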