Way to differentiate Linux system packages from requirements.txt? - python

I've been given a deep learning model developed on Linux. I am using Windows 10.
While trying to replicate it on my local (Windows) machine, I ran into problems installing the requirements.
requirements.txt looks something like this:
apturl==0.5.2
asn1crypto==0.24.0
bleach==2.1.2
Brlapi==0.6.6
certifi==2020.11.8
I know that apturl and Brlapi are Linux packages, so I take them out of the requirements.txt file and install them in the Dockerfile with the following command:
RUN apt update && apt install -y htop python3-dev wget \
apturl==0.5.2 \
Brlapi==0.6.6
Since requirements.txt contains lots of packages and I do not know which of them are Linux packages, is there an easy way to separate them? Right now I run the install, and when an error occurs for a certain package I google it, and if it belongs to Linux I move it to the Dockerfile install command.
Am I just supposed to know which packages belong to which system and separate them on my own?
Thanks in advance!

apt is used for managing system packages on Debian-based operating systems. Although it can install some Python packages if they are packaged and available as Debian packages, it is generally not used for this purpose.
The tool commonly used for managing Python packages is pip, and a requirements.txt file is usually used to specify the Python package dependencies. If you have Python 3 installed on your local system, try python3 -m pip install -r requirements.txt or pip3 install -r requirements.txt (replace python3/pip3 with python/pip if required).
TL;DR: The packages in requirements.txt are Python packages, not system packages, so your OS shouldn't matter here as long as you have Python installed.
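If you want a rough, automated first pass at splitting the file, one option is to check each pinned name against PyPI and flag the entries that are not published there; those are usually modules that were installed via apt on the original Linux machine and only ended up in the pip freeze output. This is just a sketch, assuming simple name==version lines and access to the PyPI JSON API; anything it flags still deserves a manual check, since some names exist both as Debian and PyPI packages:
import urllib.error
import urllib.request

with open("requirements.txt") as f:
    for line in f:
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        name = line.split("==")[0]
        url = "https://pypi.org/pypi/{}/json".format(name)
        try:
            urllib.request.urlopen(url)        # 200: published on PyPI
            print("pip-installable :", line)
        except urllib.error.HTTPError:
            # 404: not on PyPI, most likely a system (apt) package
            print("not on PyPI     :", line)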

Related

How to create a common environment for teamwork in Python

I would like to create a virtual environment for my team. My team works in different places and everyone has their own environment, which causes a lot of problems: everyone has a different version of the libraries (Python, Robot Framework).
I thought about:
1. creating one common environment (I used virtualenv),
2. installing the prepared libraries (Python and Robot Framework) with one pip install ... command,
3. keeping the prepared libraries in a git repository so that everyone can modify them and change the library versions.
I have the first and third parts done, but I have a problem with the second: how do I create such a package of libraries so that it can be installed with one pip install command?
Should I create an environment locally, install all the libraries in it, and push them to git? Or should I package the project via setuptools (into a tar.gz)?
Unfortunately, I cannot find the answer to this question, and it seems to me that none of the above solutions is optimal.
The easiest way of doing this is to create a text file listing all the libraries you are using, with the command:
pip freeze > requirements.txt
This will create a file listing all the installed packages with their versions. To install them, ask every team member to place that requirements file in their project and run:
pip install -r requirements.txt
With pip, you can also download your dependencies as files. These will be .tar.gz, .whl or .zip files. Note that this can get complicated if your team uses multiple operating systems.
Here is an example which downloads the dependencies into a directory named "dependencies"; you can push this directory to git along with the requirements file.
pip freeze > req.txt
pip download -r req.txt -d dependencies
When someone clones your repository, they can install the dependencies offline with the following command.
pip install --no-index --find-links=dependencies -r req.txt
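If the multiple-OS caveat above matters for your team, pip download can also fetch wheels for a platform other than the one you are running on. This is only a sketch; the platform tag and Python version below are just examples and must match what your teammates actually run:
pip download -r req.txt -d dependencies --only-binary=:all: --platform manylinux2014_x86_64 --python-version 39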

Install python packages offline on server

I want to install some packages on a server which does not have access to the internet, so I have to download the packages and send them to the server. But I do not know how to install them there.
Download all the packages you need and send them to the server where you need to install them. It doesn't matter whether they have a *.whl or *.tar.gz extension. Then install them one by one using pip:
pip install path/to/package
or:
python -m pip install path/to/package
The second option is useful if you have multiple interpreters on the server (e.g. python2 and python3, or multiple versions of either of them). In such a case, replace python with the one you want to use, e.g.:
python3 -m pip install path/to/package
If you have a lot of packages, you can list them in a requirements file as you would normally do when you have access to the internet. But instead of putting the names of the packages into the file, put the paths to the downloaded packages (one path per line). When you have the file, install all the packages by typing:
python -m pip install -r requirements.txt
In the requirements file you can also mix different types of packages (*.whl and *.tar.gz). The only thing to take care of is to download the builds that match your target platform (64-bit packages for a 64-bit platform, the right Python version, etc.).
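For illustration, such a requirements file might look like the lines below; the file names are only placeholders for whatever you actually downloaded:
packages/six-1.15.0-py2.py3-none-any.whl
packages/requests-2.25.0-py2.py3-none-any.whl
packages/some_package-1.0.0.tar.gz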
You can find more information regarding pip install in its documentation.
You can either download the packages from the website and run python setup.py install, or you can run pip install on a local file or directory, such as:
pip install path/to/tar/ball
https://pip.pypa.io/en/stable/reference/pip_install/#usage
Download the wheel packages from https://www.lfd.uci.edu/~gohlke/pythonlibs/. You can install a .whl package with pip install package.whl; refer to "installing wheels using pip" for more.
Download the package from the website, extract the tarball, and run python setup.py install.
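For a source tarball, the steps would look something like this (the archive name is only a placeholder):
tar xzf some_package-1.0.0.tar.gz
cd some_package-1.0.0
python setup.py install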

Python: PIL/_imaging.so: invalid ELF header

I'm using a virtualenv to run Python 2.7 on my local machine and everything works as expected. When I transfer "site-packages" to my production server, I get the following error:
PIL/_imaging.so: invalid ELF header
This happens with the Pillow 2.5.3 PyPI package.
I am running OS X, while my production server is running Debian. I suspect the OS difference might be causing the issue, but I'm not sure. I have no idea how to fix this. Can anyone help?
Note: I cannot install packages directly on my production server, so I have to upload them to use them.
In your current virtual environment, execute the following command
pip freeze > requirements.txt
Copy this requirements.txt file to your server.
Create a new virtual environment (delete the one you were using before).
Activate the virtual environment and then type pip install -r requirements.txt
Now the libraries will be installed and built correctly for the server's platform.
If you see errors for PIL, execute the following commands:
sudo apt-get install build-essential python-dev
sudo apt-get build-dep python-imaging
Virtual environments are for isolating Python on your current machine; they are not for creating portable environments. The benefit is being able to work with different versions of Python packages without modifying the system Python installation.
Using virtual environments does not require super-user permissions, so you can install packages even if you are not "root".
It does, however, require internet access, as packages are downloaded from the web. If your server does not have access to the internet, then back on your Mac, do the following from your virtual environment:
pip install basket
This will install basket, a small utility that lets you download packages without installing them. It is great for keeping a local archive of packages that you can move to other machines.
Once it's installed, follow these steps as listed in the documentation:
basket init
pip freeze > requirements.txt
awk -F'==' '{print $1}' requirements.txt | basket download
This will download all the packages from your requirements.txt file into ~/.basket.
Next, copy this directory to your server and then run the following command from your virtual environment:
pip install --no-index -f /path/to/basket -r requirements.txt

pip install from a file only if needed

I have a packages file (dependencies.conf) for pip listing a bunch of packages that my app needs:
argparse==1.2.1
Cython==0.20.2
...
In my build process, I download all packages using:
pip install --download=build/modules -r conf/dependencies.conf
Then, in the deployment process, I want to install these files only if the installed version is different from what I need, and in the correct dependency order.
I'm currently using the following:
for f in modules/*; do pip install -I $f; done
But this is wrong, since it doesn't check the installed version (-I is there to force a downgrade if needed) and it doesn't handle the dependency order.
Is there a simple way to do that? (I'm basically trying to update the packages on machines that don't have an internet connection.)
Get the installed version using pip, with a command like:
pip freeze | grep Jinja2
Jinja2==2.6
as explained in "Find which version of a package is installed with pip".
Then compare this with the version you need, and run pip install with the appropriate version if necessary.
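If you want to automate that comparison, a small script can read the pinned requirements, compare each pin against what is currently installed, and only call pip for the entries that differ, letting pip pull the files from your already-downloaded directory so the dependency order is handled for you. This is only a sketch: it assumes Python 3.8+ (for importlib.metadata), simple name==version lines, and that the downloaded files sit in a local modules/ directory as in your loop:
import subprocess
import sys
from importlib.metadata import version, PackageNotFoundError

def ensure(requirement, find_links="modules"):
    # requirement is a pinned spec such as "Cython==0.20.2"
    name, _, wanted = requirement.partition("==")
    try:
        installed = version(name)
    except PackageNotFoundError:
        installed = None
    if installed == wanted:
        return  # already at the pinned version, nothing to do
    # let pip resolve dependencies from the downloaded files only
    subprocess.check_call([
        sys.executable, "-m", "pip", "install",
        "--no-index", "--find-links", find_links,
        requirement,
    ])

if __name__ == "__main__":
    with open("conf/dependencies.conf") as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#"):
                ensure(line)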

How can I download a PyPI package for pip installation at a later date?

I have a pip requirements file with pinned package versions. I need to install PyPI packages on a system without a direct internet connection. How can I easily download the packages I need now, for pip installation later, without visiting each package page myself?
The pip documentation has a good example of fast and local installs:
$ pip install --download <DIR> -r requirements.txt
$ pip install --no-index --find-links=[file://]<DIR> -r requirements.txt
This is mostly for speed, but if you do the first step on an internet-connected machine, copy the files to the other machine, and then run the second step, you should be able to install all of your requirements without an internet connection.
To add to ford's answer, in order to get around cross-platform issues one can do the following instead:
On the machine with internet access do:
$ pip install --download <DIR> -r requirements.txt
$ pip install --download <DIR> -r requirements.txt --no-use-wheel
This will download the available wheels for the packages in case the wheels are cross-platform, but it will also download the source so the packages can be built on any platform in case a wheel doesn't work on the target system.
Then, as ford has suggested, after moving the download directory from the machine with internet access to the other machine, do:
$ pip install --no-index --find-links=[file://]<DIR> -r requirements.txt
I can't guarantee this will work in every case, but it worked for me when trying to download a package and its dependencies on a Windows machine with internet access to install on a CentOS machine without internet access. There may also be other factors to consider if using different versions of Python on each machine (in my case I had Python 3.4 on both).
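Note that in recent pip versions the commands above have changed: pip install --download was removed in favour of pip download, and --no-use-wheel was replaced by --no-binary. If I have the current syntax right, the equivalent steps today would be roughly:
$ pip download -r requirements.txt -d <DIR>
$ pip download -r requirements.txt -d <DIR> --no-binary :all:
$ pip install --no-index --find-links=<DIR> -r requirements.txt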
