I work on two servers, and on one server .pydistutils.cfg looks like:
install_scripts=~/opt_old/bin
install_data=~/opt_old/share
install_lib=~/usr/lib/python2.6/site-packages
I think this creates problems with pip and with linking libraries against local versions of Python.
On the second server the file does not exist, and I don't have any issues.
Why do we need this file and why is PYTHONPATH not sufficient?
I installed a local version of Python and renamed .pydistutils.cfg out of the way; nothing broke, so it seems the file is not that important.
A pretty good write-up is here:
http://bouktin.blogspot.com/2012/04/configure-pydistutilscfg-python.html
I don't immediately see a reason why an average developer should use it; it seems a bit kludgy to me. Perhaps it makes sense if you build your own distro, target Docker or a similar distribution system, or target an embedded system?
Here's a super-simple usage example:
https://github.com/amolenaar/gaphor/wiki/Custom-Python-Installation-Location
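To make the distinction concrete: PYTHONPATH only affects where the interpreter looks for modules at import time, whereas .pydistutils.cfg changes where distutils-based installs put files in the first place (which is also how it can interfere with pip installs that go through distutils). A minimal sketch to inspect both, assuming Python 2.7 or later, where the sysconfig module is available:

import sys
import sysconfig

# Entries from PYTHONPATH end up here; this is purely the runtime search path.
print(sys.path)

# Default install targets; these are what the install_lib / install_scripts /
# install_data keys in .pydistutils.cfg override at install time.
paths = sysconfig.get_paths()
print(paths["purelib"])   # default for install_lib
print(paths["scripts"])   # default for install_scripts
print(paths["data"])      # default for install_data

So PYTHONPATH alone cannot redirect an installation; it only helps Python find packages that are already installed somewhere.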
Related
I am trying to install GeoDjango, which turns out to be much harder than I thought. After I installed OSGeo4W on my 64-bit Windows 10 system, I set everything up in the settings.py file, but now I get this error:
FileNotFoundError: Could not find module 'C:\OSGeo4W\bin\gdal304.dll' (or one of its dependencies). Try using the full path with constructor syntax.
I also set GDAL_LIBRARY_PATH, but it just won't work:
GDAL_LIBRARY_PATH = "C:\\OSGeo4W\\bin\\gdal304.dll"
The gdal304.dll file is definitely present in my C:\OSGeo4W\bin directory.
My Python is on version 3.10.6
Django is on version 4.1
I have been trying to solve this by myself for a week, but I am slowly running out of ideas.
I ran into this problem too when I updated my old GeoDjango setup today.
You could use a Docker image, as suggested by others, but I prefer a native solution, since I don't want to spin up Docker every time I start coding.
The clue is in the parentheses: (or one of its dependencies)
You can look up the transitive dependencies of gdal304.dll; there are several tools for this. I am using the MinGW shell that comes integrated with Git, which has ldd installed (this should be the case for any newer Git installation on Windows), so something like ldd /c/OSGeo4W/bin/gdal304.dll lists every DLL that gdal304.dll links against.
Some of those dependencies are already fulfilled by your operating system; the others have to be fulfilled by OSGeo4W. If you compare the ldd output with your bin directory from OSGeo4W, you will see the problem.
Sadly, a simple renaming does not do the trick. I was lucky and had not yet deleted my old OSGeo4W version; among the old files I then found the necessary DLL.
So, long story short: You need the jpeg.dll file.
There are sites like "windll.com" or "dll-files.com", but I would not recommend using them; I don't trust these sites. You could instead install something like MSYS2, Cygwin, or even MSVC, install the libjpeg-turbo library, and then copy the necessary DLL file from there.
This is also suggested on the official site for libjpeg-turbo: https://libjpeg-turbo.org/Documentation/OfficialBinaries
That seems like a lot of work for someone who just wants the DLL file, but then again: never download a library blindly from the Internet and load it into your application. These libraries could do anything!
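Whichever way you obtain jpeg.dll, you can verify the result from Python itself. A minimal load test, assuming the paths from the question; note that since Python 3.8, Windows no longer searches PATH for a DLL's dependencies, so the directory has to be registered explicitly:

import ctypes
import os

# Python 3.8+ on Windows resolves dependent DLLs only from registered
# directories (or next to the DLL itself), not from PATH.
os.add_dll_directory(r"C:\OSGeo4W\bin")

# Raises FileNotFoundError if gdal304.dll or one of its transitive
# dependencies (e.g. jpeg.dll) is still missing.
ctypes.WinDLL(r"C:\OSGeo4W\bin\gdal304.dll")
print("GDAL loaded fine")

If this script succeeds but Django still fails, the problem lies elsewhere in your settings.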
So I was trying to modify an existing library, and instead of doing it the smart way and using pip install -e, I just installed the library and then swapped in my modified files for whatever changes I wanted. For example, if I had:
Library A/
---doSomethingA.py
---otherFiles.py
I just deleted doSomethingA.py and replaced it with my version of doSomethingA.py. Theoretically, I figured, because I'm editing the file locally, the library should still work as planned, with whatever extra functionality I want.
HOWEVER... it's basically going crazy. While I can see my edited changes in the file, when I run the library it's obviously not running that file. I did things like:
commenting out the whole file (it still runs somehow)
actually uninstalling the library, yet a script that uses doSomethingA.py still runs (i.e. something like import libraryA works on JupyterHub, but not in the PuTTY terminal...?)
I've obviously come to the conclusion that it's not running the file it says it is (and trust me, I've checked the path of the file about 10 times).
My question is:
How is this possible? In what places would Python store another copy of the file?
I've also deleted the __pycache__ directory, but I can't think of anything else to do. Is my best option just to give up and create a new virtual environment?
I understand that you are running on JupyterHub.
This means that your Python runs remotely on the server, and the framework takes care of syncing your local project (but not the installed libraries).
The Python on the server is not aware of your local changes.
As a temporary mitigation, you can copy the installed library into your project root.
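One way to verify this is to ask Python which file it actually imported. A small diagnostic, assuming the package from the question is called libraryA:

import sys
import importlib

import libraryA

# The file Python really loaded. If this is not the file you edited,
# you are importing a different copy (another sys.path entry, another
# environment, or the remote server's install).
print(libraryA.__file__)

# The full search path; earlier entries shadow later ones.
print(sys.path)

# In a long-running kernel (e.g. on JupyterHub), re-import after editing:
importlib.reload(libraryA)

Run this in both JupyterHub and the PuTTY session; if the two print different paths, you have found your two copies.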
Can anyone explain how the relationship between LD_LIBRARY_PATH and the lib-dynload directory works in Python on a Unix machine?
The reason I ask is that at my place of employment we have a network install of Python shared across several Unix machines (don't ask why; it's a bunch of political oddities). It works fine on most of the systems, which are older, but on the newer systems it runs into problems when people try to use the tkinter framework (since those machines have newer versions of the underlying libraries installed).
I did some poking around, and inside the lib-dynload directory there is a compiled extension module that seems to just tell Python which shared library to use for tkinter.
Doing some fiddling, I found a way to bypass the problem: placing the new versions of the library at the front of the user's LD_LIBRARY_PATH seems to solve it. I assume it works because the dynamic linker finds this version of the library before the version referenced from the lib-dynload folder, but the same change breaks things if you attempt it on one of the older machines. Either way, this is really an inelegant solution.
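A less fragile starting point is to pin down exactly which extension module and shared libraries are involved. A small diagnostic sketch, assuming Python 3 naming (on Python 2 the top-level module is Tkinter):

import _tkinter

# The compiled extension that lives in lib-dynload. Running
#   ldd <this path>
# from a shell shows which libtcl/libtk the dynamic linker resolves,
# and therefore what your LD_LIBRARY_PATH change is overriding.
print(_tkinter.__file__)

import tkinter
print(tkinter.TkVersion)  # the Tcl/Tk version actually picked up

Comparing this output between an old and a new machine should show why the LD_LIBRARY_PATH trick helps on one and breaks on the other.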
The Story
After cleaning up my Dreamhost shared server's home folder from all the cruft accumulated over time, I decided to start afresh and compile/reinstall Python.
All tutorials and snippets I found seemed overly simplistic, assuming (or ignoring) a bunch of dependencies needed by Python to compile all modules correctly. So, starting from http://andrew.io/weblog/2010/02/installing-python-2-6-virtualenv-and-virtualenvwrapper-on-dreamhost/ (so far the best guide I found), I decided to write a set-and-forget Bash script to automate this painful process, including along the way a bunch of other things I am planning to use.
The Script
I am hosting the script on http://bitbucket.org/tmslnz/python-dreamhost-batch/src/
The TODOs
So far it runs fine, and does all it needs to do in about 900 seconds, giving me at the end of the process a fully functional Python / Mercurial / etc... setup without even needing to log out and back in.
I thought this might be of use to others too, but there are a few things I think it's missing, and I am not quite sure how to approach them, what the best way to do them is, or whether they make any sense at all.
Check for errors and break
Check for minor version bumps of the packages and give warnings
Check for known dependencies
Use arguments to install only some of the packages instead of commenting out lines
Organise the code in a manner that's easy to update
Optionally make the installers and compiling silent, with error logging to file
Failproof .bashrc modification, to prevent breaking SSH logins and having to log back in via FTP to fix them
EDIT: The implied question is: can anyone, more bashful than me, offer general advice on the worthiness of the above points or highlight any problems they see with this approach? (see my answer to Ry4an's comment below)
The Gist
I am no UNIX, Bash, or compiler expert, and this has been built iteratively, by trial and error. It is somewhat heading in the direction of apt-get (well, 1% of it...), but since Dreamhost and others obviously cannot give root access on shared servers, this looks to me like a potentially very useful workaround, particularly with some community work involved.
One way to streamline this would be to make it work with one of: capistrano/fabric, puppet/chef, jhbuild, or buildout+minitage (and a lot of cmmi tasks). There are some opportunities for factoring in common code, especially with something more high-level than bash. You will run into bootstrapping issues, however, so maybe leave good enough alone.
If you want to look into userland package managers, there is autopackage (bootstraps well), nix (quickstart), and stow (simple but helps with isolation).
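To illustrate what "something more high-level than bash" can buy you, here is a hypothetical Python sketch of the driver shape; argument-based package selection and stop-on-error behavior (two of the TODOs above) come almost for free. The package names and build commands are made up for illustration:

import argparse
import subprocess

# Hypothetical package table; real entries would carry versions, URLs,
# dependencies, and so on.
PACKAGES = {
    "python": ["./build-python.sh"],
    "mercurial": ["./build-mercurial.sh"],
}

def run(cmd):
    # check=True makes any failing step raise CalledProcessError,
    # so the run stops instead of ploughing on after an error.
    subprocess.run(cmd, check=True)

def main():
    parser = argparse.ArgumentParser(description="Build selected packages")
    parser.add_argument("packages", nargs="*", default=list(PACKAGES),
                        help="packages to build (default: all)")
    args = parser.parse_args()
    for name in args.packages:
        run(PACKAGES[name])  # KeyError on an unknown name; fine for a sketch

if __name__ == "__main__":
    main()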
Honestly, I would just build packages with a name prefix for all of the pieces and have them install under /opt so that they're out of the way. That way it only takes the download time and a bit of install time to do.
I am going to be building a Pylons-based web application. For this purpose, I'd like to build a minimal Linux platform, upon which I would then install the necessary packages such as Python and Pylons, and other necessary dependencies. The other reason to keep it minimal is because this machine will be virtual, probably over KVM, and will eventually be replicated in some cloud environment.
What would you use to do this? I am thinking of using Fedora 10's AOS iso, but would love to understand all my options.
I really like JeOS ("Just enough OS"), which is a minimal distribution of the Ubuntu Server Edition.
If you want to be able to remove all the cruft but still be using a ‘mainstream’ distro rather than one cut down to aim at tiny devices, look at Slackware. You can happily remove stuff as low-level as sysvinit, cron and so on, without collapsing into dependency hell. And nothing in it relies on Perl or Python, so you can easily remove them (and install whichever version of Python your app prefers to use).
For this purpose, I'd like to build a minimal Linux platform...
So why not try ArchLinux (www.archlinux.org)?
You can also use virtualenv, with Pylons installed in it.
debootstrap is your friend.
Damn Small Linux? Slax?
If you are serious about the virtual appliance idea, take a look at the newly released VMware Studio. It was built exactly for trimming down a system (only Linux for now, afaik) so that it provides only enough base to run your application.
VMware is going (a bit more) open by pushing an open virtual appliance format (OVF), so at some point in the future you might be able to run the result on other virtualization platforms too.
Use debootstrap, or use kickstart to bootstrap your FC domains. Other methods of bootstrapping an RPM-based distro exist as well, such as Steve Kemp's rinse utility, which replaces rpmstrap.
Or, you could just grab something at jailtime to use as a base.
If that fails, download everything you need from source, build and install it with a /mydist prefix (including libc, etc.), and test it via chroot.
I've been building templates for Xen for years... it's actually turned into a very fun hobby :)