Deploying to a live server for an existing Django application. It's a very old site that has not been updated in 3+ years.
I was hired on contract to bring it up to date, which included upgrading the Django version to be current. This broke many things on the site that had to be repaired.
I did a test deployment and it went fine. Now it is time for deployment to live and I am having some issues....
First thing I was going to do is keep a log of the current Django version on the server, in case of any issues we can roll back. I tried opening a Python prompt and importing Django to find the version number, and it said Django was not found.
I was looking further and found the version in a pip requirements.txt file.
Then I decided to update the actual Django version on the server. The update went through smoothly. Then I checked the live site, and everything was unchanged (with the old files still in place). Most of the site should have been broken. It was not recognizing any changes to Django.
I am assuming the reason for this might be that the last contractor used virtualenv? That would explain why it is not recognizing Django, and why the Django updates are not doing anything to the live site.
That is the only reason I could come up with to explain this issue: since there is a pip requirements.txt file, he likely installed Django with pip, which means Python should recognize the path to Django.
So then I was going to try to find the source path for the virtualenv with the command lsvirtualenv. But when I do that, even that gives me a "command not found" error.
My only guess is that this was an older version of virtualenv that does not have this command? If that is not the case, I'm not sure what is going on.
Any advice for how I find the information I need to update the package versions on this server with the tools I have access to?
Create your own virtualenv
If all fails, just recreate the virtualenv from the requirements.txt and go from there
Find out how the old app was being launched
If you insist on finding the old one, IMO the most direct way is to find how the production Django app is being run. Look for bash scripts that start it, supervisor entries, etc.
If you find how it starts, then you can pinpoint the environment it is launched in (e.g. which virtualenv)
Find the virtualenv by searching for common files
Other than that, you can use the find or locate commands to search for files we know to exist in a virtualenv, like lib/pythonX.Y/site-packages, bin/activate or bin/python.
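A sketch of that search; the search roots here are guesses, so widen or narrow them for your server:

```shell
# Look for virtualenv landmarks under common deployment locations
# (the roots are assumptions; add or remove paths as appropriate):
find /home /opt /srv /var/www -path '*/bin/activate' 2>/dev/null

# locate is faster if its database is current; filter out system Pythons:
locate bin/activate 2>/dev/null | grep -v '^/usr'
```

Each hit's parent-of-parent directory is a virtualenv root you can activate and inspect.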
Why not start by checking what processes are actually running, and with what command line, using ps auxf or something of the sort. Then you know if it's nginx+uwsgi or django-devserver or what, and maybe even see the virtualenv path, if it's being launched very manually. Then, look at the config file of the server you find.
Alternatively, look around, using netstat -taupen for example, to see which processes are listening on which ports. This makes even more sense if there's a reverse proxy like nginx running, and you want to know what it's proxying to.
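Concretely, something like this (the grep pattern and the PID are placeholders to adapt):

```shell
# Full process tree with command lines; a virtualenv usually shows up
# right in the interpreter path (e.g. /home/app/env/bin/python):
ps auxfww | grep -iE 'python|uwsgi|gunicorn' | grep -v grep

# Which process is listening on which port (ss is the modern netstat):
netstat -taupen 2>/dev/null || ss -tlpn

# Once you have a PID, its environment often names the virtualenv
# (replace 1234 with the real PID):
tr '\0' '\n' < /proc/1234/environ | grep -i -e virtual -e path
```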
The requirements.txt I'd ignore completely. You'll get the same, but correct information from the virtualenv once you activate it and run pip freeze. The file's superfluous at best, and misleading at worst.
Btw, if this old contractor compiled and installed a custom Python, (s)he might not even have used a virtualenv, while still avoiding the system libraries and PYTHONPATH. Unlikely, but possible.
Related
After running
python manage.py dumpdata > data.json
I get this ModuleNotFoundError: No module named 'allauth'
and if I comment out allauth in the installed apps I still get another error on another package that might have data
You need to install it in your environment using pip. You should also add it to your requirements.txt file.
If you get more of the same errors, continue to install the missing packages.
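For the allauth case concretely (the venv path is a placeholder; activate whichever environment the project actually runs in):

```shell
# Activate the project's environment first; the 'allauth' import is
# provided by the django-allauth distribution on PyPI:
source /path/to/venv/bin/activate
pip install django-allauth
pip freeze > requirements.txt   # record the dependency for next time
python manage.py dumpdata > data.json
```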
As per our comments, it turns out that whilst yes, the package was not installed, the ultimate issue was quite specific to your use case, which was that you had not activated the correct environment for the project.
Please note this should not obscure the cause of the error, which was that the package was not installed.
For bonus points, one can use Docker instead of virtual environments; one massive benefit of this is that Docker is compatible with pretty much any development host operating system.
I won't dig too deep into that subject, but rather allow you to investigate and post new questions relating to your new use case if you decide to go that way.
We have a relatively large number of machines in a cluster that run a certain Python-based software for computation in an academic research environment. In order to keep the code base up to date, we are using a build server which makes the current code base available in a directory each time we update a dedicated deployment tag on our Mercurial server. Each machine part of the cluster runs a daily rsync script that just synchronises with the deployment directory on the build server (if there's anything to sync) and restart the process if the code base was updated.
Now this approach I find a bit dated and a slight overkill, and would like to optimise in the following way:
Get rid of the build server as all it actually does is clone the latest code base that has a certain tag attached - it doesn't actually compile or do any additional checks (such as testing) on the code base at all. This would also reduce some pain for us as it'd be one less server to maintain and worry about.
Instead of having the build server, I would like to pull straight from our Mercurial server, which hosts the code already. This would remove the need to duplicate the code base each time we update the deployment tag.
Now I had a bit of a read before on how to install / deploy Python-based software with pip (e.g., How to point pip at a Mercurial branch?). It seems to be the right choice as it supports installing packages straight from a code repository. However, I ran into a few problems that I would need help with. The requirements I have are as follows:
Use Mercurial as a source.
Automated background process to update and install into a custom directory on the file system.
Only pull and update from the repository if there is a new version available.
The following command seems to almost do what I need:
pip install -e "hg+https://authkey:anypw@mymercurialserver.hostname.com/Code/package@deployment#egg=package" --upgrade --src ~/proj
It pulls the package from the Mercurial server, picks the code base with the tag "deployment" and installs it into proj inside the user's home directory.
The problem, however, is that regardless of whether there is an update available or not, pip always uninstalls package and reinstalls it. This makes it difficult to decide whether the process needs to be restarted or not if nothing actually changed. In addition, pip always gets stuck with the message that hg clone in ./proj/yarely exists with URL... and asks me: What to do? (s)witch, (i)gnore, (w)ipe, (b)ackup. Now this is not ideal, as (1) it would be an automated process without a user prompt, and (2) it should only pull the repository if there was an update in the first place, to reduce traffic on the network and not overload our Mercurial server. I believe that in this case a pull instead of a clone, if there was already a local copy of the repository, would be more appropriate and could potentially solve the problem.
I wasn't able to find an elegant and nice solution to this problem. Does anyone have a pointer or suggestion how this could be achieved?
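One possible workaround, sketched under the assumption that the cron job can call hg directly instead of going through pip: keep one persistent clone, pull into it, and only reinstall and restart when the deployment tag actually moved. The repository URL and paths below are placeholders.

```shell
REPO=~/proj/package
REPO_URL='https://authkey:anypw@mymercurialserver.hostname.com/Code/package'

# First run: clone once; later runs reuse the same working copy.
[ -d "$REPO/.hg" ] || hg clone -u deployment "$REPO_URL" "$REPO"

cd "$REPO"
before=$(hg identify --id)
hg pull -q                      # pull, don't re-clone
hg update -q -r deployment      # move to wherever the tag now points
after=$(hg identify --id)

if [ "$before" != "$after" ]; then
    pip install -e "$REPO"      # reinstall only on a real change
    # restart the computation process here
fi
```

Because the working-copy revision is compared before and after the pull, the script also answers the "do I need to restart?" question for free.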
When I build my python project, tox is taking a really long time to set up the environment. I've traced the slowness to the fact that when it's trying to set up the environment in .tox, it's trying to access an old PyPI repo that my company has since decommissioned in favor of a new one (which works fine). Most stuff I do tries to access this old repo many times (~100), waiting a second or two to time out on each connection.
I've done everything I can think of to cleanse my machine of links to the old repository, but it's still lingering somewhere. I've tried:
Uninstalling & re-installing both tox & pip
Blowing away ~/.pip
grepping through my project folders for the url of the old repo
grepping through in /Library/Frameworks/ for the url of the old repo (this is on OSX Yosemite 10.10.5)
No dice on any of them.
Where else should I look? Is there some obscure nook or cranny I haven't thought of? Where does pip look for repos by default if you don't explicitly specify one?
Can it be tox settings? A couple of tox parameters seem like they could be relevant. If you don't keep your tox files in the same place as the project itself, you could have missed them while grepping.
Found it through some painstaking println debugging & reading through the source on github. It was in ~/Library/Application Support/pip.
For anyone finding this in the future, pip has some default directories it looks in.
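For future readers, these are the usual places to check; the exact set depends on the pip version and platform:

```shell
# Per-user config locations pip has used over the years:
cat "$HOME/Library/Application Support/pip/pip.conf" 2>/dev/null  # macOS
cat "$HOME/.config/pip/pip.conf" 2>/dev/null                      # Linux / newer pips
cat "$HOME/.pip/pip.conf" 2>/dev/null                             # legacy

# Site-wide config and environment variables can also set an index URL:
cat "/Library/Application Support/pip/pip.conf" 2>/dev/null
env | grep -i -e PIP_INDEX -e PIP_EXTRA

# Recent pip versions can list every file they actually consult:
pip config list -v
```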
At work we are using Trac on several internal wikis and an external wiki. Recently we found the need for a new plugin. After going through a few tutorials, we went to install a plugin to make sure it would work. It didn't. We've been trying to figure it out. Below I will list the steps and various things I did while trying to get it to work.
1) I went to the trac-hacks website and downloaded their hello world plugin; I figured I couldn't make a mistake using their code.
2) I compiled and made an egg using python setup.py bdist_egg on the machine where trac is installed, to make sure it's the same Python version being used.
3) I then copied it over to /directory/where/trac/is/plugins/ folder and chmod 755 the file egg file.
4) I then restarted httpd, being unable to find a better way of restarting Trac, so this may be where my problem is. It didn't work, so I deleted the egg folder in plugins.
5) Uploaded it via trac->administration->plugins and restarted httpd again. Nothing.
6) I realized I had to edit the trac.ini file, added helloworld.* = enabled under [components], and restarted the web server.
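For reference, steps 2-6 collapse to something like this; the environment path, file owner, and restart command are guesses for your setup:

```shell
# Build with the same Python that runs Trac:
python setup.py bdist_egg

# Only the .egg file goes into the plugins folder:
cp dist/TracHelloWorld-*.egg /path/to/trac/env/plugins/
chown apache:apache /path/to/trac/env/plugins/TracHelloWorld-*.egg

# Enable the component in /path/to/trac/env/conf/trac.ini:
#   [components]
#   helloworld.* = enabled

apachectl restart    # restarting the webserver restarts Trac
```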
It's quite possible it's me but any help would be greatly appreciated!
It's the helloworld plugin from trac-hacks; essentially it displays hello world and there's a button. There are no error messages provided, hence why googling was hard.
I'm assuming that it's using root, and that's the user I built it with. I will look into seeing if it's anybody else; just taking a quick look, though, I don't see anything else that could be using it. I only copied the egg file to the plugins folder; I set up another folder elsewhere, built it, and cp'd to the plugins folder. I'm glad to know I was doing that right, because looking up documentation on how to restart Trac turns up practically nothing; they just say restart Trac or restart Apache. I will look into the logs later on tomorrow. Thanks for the replies! Also, we are using Trac 0.12.1.
So after looking at the log files, it seems that it doesn't even load the plugin; I can't find anywhere that says it's loading, or any errors with it. Now we have a few Trac sites for various projects, and one of the sites already has plugins installed, so I went there and put the test plugin there and checked the logs, and it didn't work either. So I'm just going to conclude it's the plugin or something we already have in place, and it's not me. I believe I'm going to try and make one and test it. Thanks for the help!
It sounds like you built the egg correctly. After you copy it into your plugins folder, change the file's owner and group (I'm assuming you're on Linux since you mentioned chmod) to match the account that your webserver uses. I'm not sure if that's strictly necessary, but it's what's always worked for me.
I may be misreading your #4, but it sounds like you copied the whole egg folder to your plugins directory. Only the .egg file needs to be copied over, it's a self-contained package. I don't think Trac looks for .egg files in subdirectories.
Restarting your webserver is the easiest way to restart Trac. Actually, I'm not aware of any other way to do it.
When it comes to plugin problems, Trac's log is usually a very good source of information. I recommend setting Trac's log level to DEBUG, then shut down the web server. Clear out the contents of Trac's log file, then start the web server and make a copy of Trac's log file after the server has completely come back online. Do this process twice: once with the plugin installed and once without it installed. The difference in the logfiles should give you a good indication of what the problem is. Once you get accustomed to what your logs normally look like, you'll be able to read the log in place without clearing it out and generating two versions of it.
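That log-diff procedure, as commands (the environment path and webserver restart commands are placeholders):

```shell
ENV=/path/to/trac/env

# First make sure trac.ini has log_type = file and log_level = DEBUG
# under its [logging] section, then:
apachectl stop
: > "$ENV/log/trac.log"      # clear the log
apachectl start
sleep 10                     # let Trac come fully back up
cp "$ENV/log/trac.log" /tmp/trac-with-plugin.log

# ...remove the egg from $ENV/plugins, repeat the steps above into
# /tmp/trac-without-plugin.log, then compare:
diff /tmp/trac-without-plugin.log /tmp/trac-with-plugin.log
```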
By the way, what Trac version are you using?
Check the Trac version and downloaded plugin
Instead of python setup.py bdist_egg, try python setup.py install.
Quite an old thread, but since I ran into the same problem at one point:
Make sure you build the .egg with the same Python version that you use to run Trac with! Backwards compatibility between Python versions does not matter here, because Trac reads information about the Python version out of the .egg file before it even loads it, to make sure it is compatible.
(Micro version numbers should not matter, so you should be able to run a .egg with Python 2.7.10 when it was built with 2.7.3, but not when it was built with 2.6.x. Look at the version number that is written into the .egg file name.)
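A quick way to check for that mismatch, since the build Python's version is part of the egg's file name:

```shell
# Egg names carry the build Python version, e.g. TracHelloWorld-1.0-py2.7.egg
# (the plugins path is a placeholder):
ls /path/to/trac/env/plugins/*.egg

# Compare with the interpreter the webserver runs Trac under:
python -c 'import sys; print(sys.version)'
```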
Just curious how people are deploying their Django projects in combination with virtualenv
More specifically, how do you keep your production virtualenv's synched correctly with your development machine?
I use git for scm but I don't have my virtualenv inside the git repo - should I, or is it best to use the pip freeze and then re-create the environment on the server using the freeze output? (If you do this, could you please describe the steps - I am finding very little good documentation on the unfreezing process - is something like pip install -r freeze_output.txt possible?)
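Yes, pip install -r is exactly the unfreezing half of that round trip. A minimal sketch (the env path is a placeholder):

```shell
# On the development machine, inside the activated virtualenv:
pip freeze > requirements.txt
# commit requirements.txt to git, then on the server:

virtualenv /path/to/env
source /path/to/env/bin/activate
pip install -r requirements.txt    # recreates the frozen package set
```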
I just set something like this up at work using pip, Fabric and git. The flow is basically like this, and borrows heavily from this script:
In our source tree, we maintain a requirements.txt file. We'll maintain this manually.
When we do a new release, the Fabric script creates an archive based on whatever treeish we pass it.
Fabric will find the SHA for what we're deploying with git log -1 --format=format:%h TREEISH. That gives us SHA_OF_THE_RELEASE
Fabric will get the last SHA for our requirements file with git log -1 --format=format:%h SHA_OF_THE_RELEASE requirements.txt. This spits out the short version of the hash, like 1d02afc which is the SHA for that file for this particular release.
The Fabric script will then look into a directory where our virtualenvs are stored on the remote host(s).
If there is not a directory named 1d02afc, a new virtualenv is created and setup with pip install -E /path/to/venv/1d02afc -r /path/to/requirements.txt
If there is an existing path/to/venv/1d02afc, nothing is done
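The flow above, sketched as plain shell. TREEISH and the venv root are placeholders; note that pip's old -E flag created the env and installed in one step, while modern pip needs the two steps shown:

```shell
RELEASE_SHA=$(git log -1 --format=format:%h TREEISH)
REQ_SHA=$(git log -1 --format=format:%h "$RELEASE_SHA" -- requirements.txt)

VENV=/path/to/venvs/$REQ_SHA
if [ ! -d "$VENV" ]; then
    virtualenv "$VENV"                          # requirements hash is new
    "$VENV/bin/pip" install -r requirements.txt
fi
# an existing $VENV means requirements.txt hasn't changed: nothing to do
```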
The little magic part of this is passing whatever tree-ish you want to git, and having it do the packaging (from Fabric). By using git archive my-branch, git archive 1d02afc or whatever else, I'm guaranteed to get the right packages installed on my remote machines.
I went this route since I really didn't want to have extra virtualenvs floating around if the packages hadn't changed between releases. I also don't like the idea of having the actual packages I depend on in my own source tree.
I use this bootstrap.py: http://github.com/ccnmtl/ccnmtldjango/blob/master/ccnmtldjango/template/bootstrap.py
which expects a directory called 'requirements' that looks something like this: http://github.com/ccnmtl/ccnmtldjango/tree/master/ccnmtldjango/template/requirements/
There's an apps.txt, a libs.txt (which apps.txt includes -- I just like to keep Django apps separate from other Python modules) and a src directory which contains the actual tarballs.
When ./bootstrap.py is run, it creates the virtualenv (wiping a previous one if it exists) and installs everything from requirements/apps.txt into it. I do not ever install anything into the virtualenv otherwise. If I want to include a new library, I put the tarball into requirements/src/, add a line to one of the textfiles and re-run ./bootstrap.py.
bootstrap.py and requirements get checked into version control (also a copy of pip.py so I don't even have to have that installed system-wide anywhere). The virtualenv itself isn't. The scripts that I have that push out to production run ./bootstrap.py on the production server each time I push. (bootstrap.py also goes to some lengths to ensure that it's sticking to Python 2.5 since that's what we have on the production servers (Ubuntu Hardy) and my dev machine (Ubuntu Karmic) defaults to Python 2.6 if you're not careful)