When I build my Python project, tox takes a really long time to set up its environment. I've traced the slowness to the fact that when it sets up the environment in .tox, it tries to access an old PyPI repo that my company has since decommissioned in favor of a new one (which works fine). Most things I do try to access this old repo many times (~100), waiting a second or two to time out on each connection.
I've done everything I can think of to cleanse my machine of links to the old repository, but it's still lingering somewhere. I've tried:
Uninstalling & re-installing both tox & pip
Blowing away ~/.pip
grepping through my project folders for the URL of the old repo
grepping through /Library/Frameworks/ for the URL of the old repo (this is on OS X Yosemite 10.10.5)
No dice on any of them.
Where else should I look? Is there some obscure nook or cranny I haven't thought of? Where does pip look for repos by default if you don't explicitly specify one?
Could it be tox settings? This and this parameter seem like they could be relevant. If you don't keep your tox files in the same place as the project itself, you may have missed them while grepping.
Found it through some painstaking print debugging and reading through the source on GitHub. It was in ~/Library/Application Support/pip.
For anyone finding this in the future: pip reads configuration from several default locations, not just ~/.pip.
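On macOS, the usual places to check for a pip.conf are:
~/Library/Application Support/pip/pip.conf (per-user)
~/.config/pip/pip.conf (per-user fallback)
~/.pip/pip.conf (legacy per-user location)
/Library/Application Support/pip/pip.conf (site-wide)
$VIRTUAL_ENV/pip.conf (per-virtualenv)
pip also honors the PIP_CONFIG_FILE environment variable, and index settings can come from environment variables such as PIP_INDEX_URL, so those are worth checking too.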
I am trying to install GeoDjango, which is turning out to be much harder than I thought. After I installed OSGeo4W on my 64-bit Windows 10 system, I set everything up in the settings.py file, but now I get this error:
FileNotFoundError: Could not find module 'C:\OSGeo4W\bin\gdal304.dll' (or one of its dependencies). Try using the full path with constructor syntax.
I also set the GDAL_LIBRARY_PATH but it just won't work.
GDAL_LIBRARY_PATH = "C:\\OSGeo4W\\bin\\gdal304.dll"
The gdal304.dll file is definitely present in my C:\OSGeo4W\bin directory.
My Python is on version 3.10.6
Django is on version 4.1
I've been trying to solve this by myself for a week, but I'm slowly running out of ideas on what to do.
I ran into this problem too when I updated my old GeoDjango setup today.
You could use a Docker image as suggested by the others, but I prefer a native solution, since I don't want to spin up Docker every time I start coding.
Your solution is in the brackets: (or one of its dependencies)
You can look up the transitive dependencies of gdal304.dll. There are several tools for this (see here). I'm using the MinGW shell that comes integrated with Git for Windows, which has ldd installed; this should be the case for any (newer) Git installation on Windows.
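For example, from Git Bash something like this lists what gdal304.dll links against (the path assumes the default OSGeo4W install location):
cd /c/OSGeo4W/bin
ldd gdal304.dll
Any line ending in "not found" is a dependency that neither Windows nor OSGeo4W is currently providing.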
In the ldd output, some dependencies are already fulfilled by your operating system; the others have to be fulfilled by OSGeo4W. If you compare that output with your bin directory from OSGeo4W, you will see the problem: one of the DLLs that gdal304.dll needs is not there.
Sadly, simply renaming another DLL does not do the trick. I was lucky and had not yet deleted my old OSGeo4W version; in the old files I then found the necessary DLL.
So, long story short: You need the jpeg.dll file.
There are sites like "windll.com" or "dll-files.com", but I would not recommend using them; I don't trust these sites. You could instead install something like MSYS2, Cygwin, or even MSVC, install the libjpeg-turbo library, and then copy over the necessary DLL file.
This is also suggested on the official Site for libjpeg-turbo: https://libjpeg-turbo.org/Documentation/OfficialBinaries
This seems like a lot of work for someone who just wants the DLL file, but then again: never download a library blindly from the Internet and load it into your application. These libraries could do anything!
Deploying to a live server for an existing Django application. It's a very old site that has not been updated in 3+ years.
I was hired on contract to bring it up to date, which included upgrading the Django version to be current. This broke many things on the site that had to be repaired.
I did a test deployment and it went fine. Now it is time to deploy to live, and I am having some issues...
The first thing I wanted to do was keep a log of the current Django version on the server, in case of any issues we need to roll back. I tried opening a Python prompt and importing Django to find the version number, and it said Django was not found.
I was looking further and found the version in a pip requirements.txt file.
Then I decided to update the actual Django version on the server. The update went through smoothly. Then I checked the live site, and everything was unchanged (with the old files still in place). Most of the site should have been broken; it was not recognizing any changes in Django.
I am assuming the reason for this might be that the last contractor used virtualenv, and that's why Python is not recognizing Django and the Django update is not doing anything to the live site?
That is the only explanation I could come up with: since there is a pip requirements.txt file, he likely installed Django with pip, which means Python should recognize the path to Django.
So then I tried to find the source path for the virtualenv with the command lsvirtualenv. But even that gives me a "command not found" error.
My only guess is that this was an older version of virtualenv that does not have this command? If that is not the case, I'm not sure what is going on.
Any advice for how I find the information I need to update the package versions on this server with the tools I have access to?
Create your own virtualenv
If all else fails, just recreate the virtualenv from the requirements.txt and go from there, for example:
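A minimal sketch (the path is a placeholder; use whatever location your deployment expects):
virtualenv /srv/app/venv
/srv/app/venv/bin/pip install -r requirements.txt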
Find out how the old app was being launched
If you insist on finding the old one, IMO the most direct way is to find out how the production Django app is being run. Look for bash scripts that start it, supervisor entries, etc.
If you find how it starts, then you can pinpoint the environment it is launched in (e.g. which virtualenv)
Find the virtualenv by searching for common files
Other than that, you can use the find or locate commands to search for files we know to exist in a virtualenv, like lib/pythonX.Y/site-packages, bin/activate, or bin/python, for example:
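Something along these lines (scanning / can take a while on a big filesystem, and locate only works if its database is populated):
find / -path "*/bin/activate" 2>/dev/null
locate bin/activate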
Why not start by checking what processes are actually running, and with what command line, using ps auxf or something of the sort? Then you know whether it's nginx+uwsgi or django-devserver or whatever, and you may even see the virtualenv path if it's being launched very manually. Then look at the config file of the server you find.
Alternatively, look around with netstat -taupen, for example, to see which processes are listening on which ports. That makes even more sense if there's a reverse proxy like nginx running and you want to know what it's proxying to.
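For example (the process names are just illustrations; grep for whatever your stack uses):
ps auxf | grep -i -E "gunicorn|uwsgi|runserver"
sudo netstat -taupen | grep LISTEN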
The requirements.txt I'd ignore completely. You'll get the same, but correct, information from the virtualenv once you activate it and run pip freeze. The file is superfluous at best, and misleading at worst.
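Once you've located the virtualenv (the path below is a placeholder), that looks like:
source /path/to/venv/bin/activate
pip freeze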
Btw, if this old contractor compiled and installed a custom Python, (s)he might not even have used a virtualenv, while still avoiding the system libraries and PYTHONPATH. Unlikely, but possible.
I have a number of Python "script suites" (as I call them) which I would like to make easy to install for my colleagues. I have looked into pip, and that seems really nice, but under that model (as I understand it) I would have to publish a static version and update it on every change.
As it happens, I am going to be adding and changing a lot of stuff in my script suites along the way, and whenever someone installs one, I would like them to get the newest version. With pip, that means that on every commit to my repository, I would also have to re-submit a package to the PyPI index. That's a lot of unnecessary work.
Is there any way to provide an easy cross-platform installation (via pip or otherwise) which pulls the files directly from my github repo?
I'm not sure if I understand your problem entirely, but you might want to use pip's editable installs [1].
Here's a brief example: in this artificial example, let's suppose you want to use git as your VCS.
git clone url_to_myrepo.git path/to/local_repository
pip install [--user] -e path/to/local_repository
The installation of the package will reflect the state of your local repository. Therefore there is no need to reinstall the package with pip when the remote repository gets updated. Whenever you pull changes to your local repository, the installation will be up-to-date as well.
[1] http://pip.readthedocs.org/en/latest/reference/pip_install.html#editable-installs
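If your colleagues shouldn't need a local clone at all, pip can also install directly from a remote git URL, which avoids publishing anything to PyPI (user/repo is a placeholder for your own repository):
pip install git+https://github.com/user/repo.git
You can pin a branch or tag with a suffix like @master if you need to.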
At work we are using Trac on several internal wikis and an external wiki. Recently we found the need for a new plugin. After going through a few tutorials, we went to install a plugin to make sure it would work. It didn't. We've been trying to figure out why. Below I will list the steps and various things I did while trying to get it to work.
1) I went to the trac-hacks website and downloaded their hello world plugin; I figured I couldn't make a mistake using their code.
2) I compiled and made an egg using python setup.py bdist_egg on the machine where Trac is installed, to make sure it's the same Python version being used.
3) I then copied it over to the /directory/where/trac/is/plugins/ folder and chmod 755 the egg file.
4) I then restarted httpd (I was unable to find a better way of restarting Trac, so this may be where my problem is). It didn't work, so I deleted the egg folder in plugins.
5) Uploaded it via trac -> administration -> plugins and restarted httpd again. Nothing.
6) I realized I had to edit the trac.ini file, so I added helloworld.* = enabled under the [components] section (see the snippet below) and restarted the web server.
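For reference, the relevant trac.ini snippet is:
[components]
helloworld.* = enabled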
It's quite possible it's me, but any help would be greatly appreciated!
It's the hello world plugin from trac-hacks; essentially it displays "hello world" and there's a button. There are no error messages provided, hence why googling was hard.
I'm assuming that it's running as root, since that's the user I built it with; I will look into whether it's anybody else, though from a quick look I don't see anything else that could be using it. I only copied the egg file to the plugins folder; I built it in a separate folder elsewhere and cp'd it to the plugins folder. I'm glad to know I was doing that right, because looking up documentation on how to restart Trac turns up practically nothing; they just say restart Trac or restart Apache. I will look into the logs tomorrow. Thanks for the replies! Also, we are using Trac 0.12.1.
So after looking at the log files, it seems that Trac doesn't even load the plugin; I can't find anything that says it's loading, or any errors related to it. We have a few Trac sites for various projects, and one of them already has plugins installed, so I went there, put the test plugin in place, and checked the logs; it didn't work either. So I'm going to conclude it's the plugin or something we already have in place, and not me. I believe I'm going to try to make one myself and test it. Thanks for the help!
It sounds like you built the egg correctly. After you copy it into your plugins folder, change the file's owner and group (I'm assuming you're on Linux since you mentioned chmod) to match the account that your webserver uses. I'm not sure if that's strictly necessary, but it's what's always worked for me.
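For example, if your httpd runs as the apache user and group (adjust the names and the path to your system):
chown apache:apache /directory/where/trac/is/plugins/*.egg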
I may be misreading your #4, but it sounds like you copied the whole egg folder to your plugins directory. Only the .egg file needs to be copied over; it's a self-contained package. I don't think Trac looks for .egg files in subdirectories.
Restarting your webserver is the easiest way to restart Trac. Actually, I'm not aware of any other way to do it.
When it comes to plugin problems, Trac's log is usually a very good source of information. I recommend setting Trac's log level to DEBUG, then shut down the web server. Clear out the contents of Trac's log file, then start the web server and make a copy of Trac's log file after the server has completely come back online. Do this process twice: once with the plugin installed and once without it installed. The difference in the logfiles should give you a good indication of what the problem is. Once you get accustomed to what your logs normally look like, you'll be able to read the log in place without clearing it out and generating two versions of it.
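In trac.ini that looks something like:
[logging]
log_type = file
log_level = DEBUG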
By the way, what Trac version are you using?
Check that the Trac version and the downloaded plugin are compatible with each other.
Instead of python setup.py bdist_egg, try python setup.py install.
Quite an old thread, but since I ran into the same problem at one point:
Make sure you build the .egg with the same Python version that you use to run Trac with! Backwards compatibility between Python versions does not matter here, because Trac reads information about the Python version out of the .egg file before it even loads it, to make sure it is compatible.
(Patch-level version differences should not matter, so you should be able to run a .egg with Python 2.7.10 when it was built with 2.7.3, but not when it was built with 2.6.x. Look at the version number that is written into the .egg file name.)
I currently have git and virtualenv set up in a way which exactly suits my needs and, so far, hasn't caused any problems. However, I'm aware that my setup is non-standard, and I'm wondering if anyone more familiar with virtualenv's internals can point out if, and where, it's likely to go wrong.
My setup
My virtualenv is inside my git repository, but git is set to ignore the bin and include directories and everything in lib except for the site-packages directory.
More precisely, my .gitignore file looks like this:
*.pyc
# Ignore all the virtualenv stuff except the actual packages
# themselves
/bin
/include
/lib/python*/*
!/lib/python*/site-packages
# Ignore easyinstall and setuptools
/lib/python*/site-packages/easy-install.pth
/lib/python*/site-packages/setuptools.pth
/lib/python*/site-packages/setuptools-*
/lib/python*/site-packages/pip-*
With this arrangement I -- and anyone else working on a checkout of the project -- can use virtualenv and pip as normal but with the following advantages:
If anyone updates or installs a package and pushes their changes, anyone else who pulls those changes automatically gets the update: they don't need to notice that a requirements.txt file has changed or do any post-receive hook magic.
There are no network dependencies: all the code to make the application work lives in the git repository.
I'm aware that this only works with pure-Python packages, but that's all I'm concerned with at the moment.
Does anyone know of any other problems with this approach that I should be aware of?
This is an interesting question. I think the other two answers (thus far) raise good specific points. Clearly you've thought this through and have arrived at a solution you like, but I'll note that there does seem to be a philosophical split here among virtualenv users.
One camp, to which I'd guess you belong, feels that the local VE is part of the project (i.e. it should be under version control). The other feels that the VE should essentially be treated as a development artifact -- that requirements.txt should be part of the project repo, but that you should be able to blow away and recreate the VE as needed.
I just mention this because when I first saw this distinction made, it helped shape my thinking about virtualenv. (I'm in the second camp, FWIW, because it seems simpler and cleaner to me, but that's not to say that being in the first camp is wrong for your particular project.)
If you have any -e items in your requirements.txt (in other words, editable dependencies as described in the requirements file format that use the git version control system), you will most likely run into issues when your src directory is committed. If you just add /src to your .gitignore then you can avoid committing it, but often the installation just adds a pointer in site-packages to this location, so you need it!
In /lib/python2.7/site-packages in my virtualenv I have absolute paths; for example, in Django.egg-link I have /Users/henry/.virtualenv/mowapp/src/django (this is a Mac).
Check to see if you have any absolute paths checked into git, I think that is a major problem with this approach.
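A quick way to audit for this (the /Users/ pattern assumes macOS home directories; use /home/ on Linux):
git grep -n "/Users/" -- lib/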