Installing and Using Python Modules in a Docker Container - python

I am new to using docker containers. I have successfully created a new docker container but am now having trouble installing and then using python modules in it.
I entered into the already running container with the following command:
$ docker exec -it lizzie /bin/bash
This worked. I also managed to install the module of interest to me with the following command:
$ pip install pysnmp
I cloned my git repository, entered the local repo, and attempted to run a script that utilized the module pysnmp. The following error was returned:
ImportError: No module named pysnmp
I reinstalled the module to ensure that it had installed correctly; all requirements were already satisfied. The only two items currently in the container are a file called "anaconda-ks.cfg", which I can't open, and the repo. I feel like this has something to do with the path the module was installed to, but I'm not sure where I should be installing it or how to do so. Thanks in advance for the help!
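A likely cause is that pip installed the module for a different interpreter than the one running the script. A minimal check from inside the container (container name lizzie as above; pysnmp is the module from the error):
$ docker exec -it lizzie /bin/bash
# compare the interpreter that will run the script with the one pip installs for
$ which python && python --version
$ pip --version
# installing through the interpreter itself removes the ambiguity
$ python -m pip install pysnmp
$ python -c "import pysnmp; print(pysnmp.__file__)"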

Related

GCP instance installed libraries not working

I installed some Python packages in my GCP instance; python_speech_features was one of them. When I run pip list, it shows that the package is installed, but when I try to access it in my code, it says no module named python_speech_features. I have attached screenshots of the error and of the installed packages.
I tried to replicate a scenario like the one you currently have; however, I was able to use the python_speech_features library with this example on a GCE instance without issues. This is the procedure I followed:
Create a GCE instance and connect to it.
Create a virtual environment with this command: virtualenv -p python3 env
Activate the virtualenv with this command: source env/bin/activate
Install the following libraries:
numpy==1.18.5
python-speech-features==0.5
scipy==1.4.1
Download the example file example.wav
Run the code: python example.py
I recommend trying this procedure to ensure that the library import works correctly.
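Collected into a single shell session, the procedure above looks like this (example.py and example.wav are the files referenced in the steps; versions are the ones given above):
# inside the GCE instance
virtualenv -p python3 env
source env/bin/activate
pip install numpy==1.18.5 python-speech-features==0.5 scipy==1.4.1
# run the downloaded example against the downloaded audio file
python example.py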

Project downloaded from Github setup errors

I apologize ahead of time if these questions are very basic. I'm pretty new to sourcing Python code from GitHub. The repository I am attempting to use is from a study: https://github.com/malllabiisc/ConfGCN. So far, what I have tried is downloading the code as a zip. Then, I followed the instructions from GitHub and downloaded Ubuntu to run the shell file setup.sh. However, I am running into errors: after running sudo bash setup.sh in Ubuntu, it gives me this error:
Install python dependencies
setup.sh: line 11: pip: command not found
I have checked out the respective files this references. It calls for:
echo "Install python dependencies"
pip install -r requirements.txt
The requirements.txt file lists a variety of Python packages that I have already installed inside a venv in PyCharm. It specifically calls for:
numpy==1.16.0
tensorflow==1.12.1
scipy==1.2.0
networkx==2.2
Previous lines in setup.sh run perfectly fine in terms of updating files included in the folder. Another question I have is, in general, how to set up a Python package. I am currently using PyCharm CE 2020 and I've attempted creating a Python package inside my workspace. I noticed that it auto-generates an __init__.py file. How can I integrate my downloads from GitHub into my PyCharm project?
There is no reason to run setup.sh as root, because it is just supposed to install some packages, which does not require sudo access (running it under sudo is also a likely cause of the pip: command not found error, since sudo typically resets PATH). You can simply create a virtual environment and run setup.sh. To set up the environment, just run:
$ virtualenv -p /usr/bin/python3.6 myenv # Create an environment
$ source myenv/bin/activate # Load the environment
(myenv) $ ./setup.sh
Once the environment is ready, you should be able to run the code. You can make PyCharm use that environment for executing the code.
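As a quick sanity check that the environment is active and that pip now resolves inside it (the paths shown are typical for this setup, not guaranteed):
(myenv) $ which python pip
/home/user/myenv/bin/python
/home/user/myenv/bin/pip
# pip now installs into myenv, so setup.sh's pip install -r requirements.txt will work
(myenv) $ pip --version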

ModuleNotFoundError: No module named 'pymongo' with Docker and Airflow

I'm currently using Docker with puckel/docker-airflow to run Airflow.
I installed pymongo successfully, but when importing pymongo it still fails to find the module.
I added the lines below to the Dockerfile, above the other RUN commands, before rebuilding.
1st attempt
RUN pip install pymongo
2nd attempt
RUN pip install pymongo -U
I built them with
docker build --rm -t puckel/docker-airflow .
pymongo does install successfully, but when I run the webserver with a simple DAG import I still get the error:
File "/usr/local/lib/python3.6/site-packages/airflow/contrib/hooks/mongo_hook.py", line 22, in <module>
from pymongo import MongoClient
ModuleNotFoundError: No module named 'pymongo'
I solved it by copying my requirements.txt file to the root.
In fact, puckel/docker-airflow's Dockerfile executes entrypoint.sh, which pip-installs the packages from /requirements.txt if that file exists, so we can be sure our packages are installed.
You can add this to the Dockerfile:
COPY ./requirements.txt /requirements.txt
Or
in docker-compose.yml, add a volume to your container:
volumes:
- ./requirements.txt:/requirements.txt
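A minimal sketch combining these pieces (the service name webserver is an assumption; use whatever your compose file defines):
# requirements.txt, at the project root
pymongo

# docker-compose.yml fragment
services:
  webserver:
    image: puckel/docker-airflow
    volumes:
      - ./requirements.txt:/requirements.txt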
I ran into this same symptom. I fixed it by adding && pip install pymongo \ to puckel/airflow's Dockerfile, near the other pip install commands, and rebuilding the image.
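As a sketch of that edit (the surrounding lines are illustrative placeholders, not the actual contents of the Dockerfile; AIRFLOW_VERSION stands in for whatever the file defines):
RUN set -ex \
    # existing pip install line(s) from the Dockerfile
    && pip install apache-airflow==${AIRFLOW_VERSION} \
    # the added line
    && pip install pymongo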
Here's what I tried that did not fix the problem:
Adding pymongo to requirements.txt and mounting the file. I verified that the module was loaded as expected, via log messages during docker-compose startup and by connecting to my worker and webserver instances and seeing that the module was available in the Python environment using help("modules"), but the module was still not available to my Airflow DAGs.
Adding --build-arg PYTHON_DEPS="pymongo" as a parameter to my docker build command. (Note: for modules other than pymongo this step fixed module-not-found errors, but not for pymongo. In fact, I did not see any log record of pymongo being installed during docker build when I set this.)
Could you try
RUN pip3 install pymongo
and report back. This can happen if you have multiple versions of Python; pip3 makes sure you are installing the module for Python 3.x.
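An equivalent form that ties the install to a specific interpreter even more explicitly (a general pip recommendation, not from the original answer):
RUN python3 -m pip install pymongo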
When you built the puckel/Airflow Docker image, did you add mongo to AIRFLOW_DEPS in your build arguments?
e.g. docker build --rm --build-arg AIRFLOW_DEPS="mongo" -t puckel/docker-airflow .
I had a similar experience with the MySQL hook and solved it.
My approach is to check whether the module can be imported in a plain Python environment first.
Sometimes the package you installed is not the one Airflow wants.
For your case, you can check with the following steps.
1. Jump into the Docker container:
docker exec -it <container_name> /bin/bash
2. Launch Python (assuming you use a Python 3.x version):
python
3. Check the module in the Python environment:
import pymongo
# other test script if you want to check
If you face an error, please solve it in the Python environment first and then go back to Airflow.
=======================================================
I just double-checked the Airflow source code on GitHub and realized that MongoDB is not a default hook in the original source code.
In that case, you might need to go further into the pymongo package to study how to install and compile it and its related dependencies.

How can I use pip to install Python packages into my Divio Docker project?

I'm used to using pip to install Python packages into my Django projects' virtual environments.
When I am working with a Divio Docker project locally, this does not work.
There are two things you need to be aware of when installing Python packages into a Docker project:
the package must be installed in the correct environment
if you want to use the installed package in the future, it needs to be installed in a more permanent way
The details below describe using a Divio project, but the principle will be similar for other Docker installations.
Installation in the correct environment
To use pip on the command line to install a Python package into a Dockerised project, you need to be using pip inside the Docker environment, that is, inside the container.
It's not enough to be in the directory where you have access to the project's files. In this respect, it's similar to using a virtual environment - you need to have the virtualenv activated. (Otherwise, your package will be installed not in the virtual environment, but in your own host environment.)
To activate a virtual environment, you'd run something like source bin/activate inside it.
To install a package within a Divio web container:
# start a bash prompt inside the project
docker-compose run --rm web bash
# install the package in the usual way
pip install rsa
rsa is now installed and available to use.
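To confirm that the install landed in the container's environment, a quick check from the same bash session:
# still inside the web container
python -c "import rsa; print(rsa.__file__)"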
More permanent installation
So far however, the package will only be installed and available in that particular container. As soon as you exit the bash shell, the container will disappear. The next time you launch a web container, you will not find the rsa package there. That's because the container is launched each time from its image.
In order to have the package remain installed, you will need to include it in the image.
A Divio project includes a requirements.in file, listing Python packages that will be included in the image.
Add a new line containing rsa to the end of that file. Then run:
docker-compose build web
This will rebuild the Docker image. Next time you launch a container with (for example) docker-compose run --rm web bash, it will include that Python package.
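Put together, a minimal sketch of the permanent route, run from the project directory on the host:
# add the package to the image's requirements
echo "rsa" >> requirements.in
# rebuild so the package is baked into the image
docker-compose build web
# verify in a fresh container: the package persists because it is in the image
docker-compose run --rm web python -c "import rsa"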
(The Divio Developer Handbook has some additional guidance on using pip.)
Note: I am a member of the Divio team. This question is one that we see quite regularly via our support channels.

AWS Lambda with Zappa fails on "import module 'handler': No module named 'werkzeug' "

After deploying my Python application with Zappa and visiting the AWS link, I see an error page.
When I checked the logs, I found the source of the error: import module 'handler': No module named 'werkzeug'
I then decided to pip install -r requirements.txt to ensure that it's installed inside my virtual environment (which it is):
Requirement already satisfied: Werkzeug==0.12 in ./flaskapi/lib/python3.6/site-packages (from -r requirements.txt (line 41))
Something is going wrong when it's uploaded to AWS. I'm not sure if this is the core issue, but I did notice that the package name in the logs is different from the one in the requirements.txt file. The package name in the logs doesn't start with a capitalized 'W', while the one in requirements.txt does. Other than that, I'm not sure what I'm doing wrong.
Any and all help is appreciated.
I solved this issue by upgrading to Python 3.7. I would recommend starting a new virtualenv configured to use Python 3.7.
If you do not have Python 3.7 on your system, you will need to install it. This site is the one I used; it works on AWS Cloud9 too:
installing python 3.7
virtualenv env -p python3.7
source ./env/bin/activate
python --version
# output should be "Python 3.7.X"
Then continue setting up your app as normal.
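After recreating the virtualenv, reinstall your dependencies into it and redeploy so the rebuilt environment is what gets packaged (the stage name dev is an assumption; use your own):
pip install -r requirements.txt
zappa update dev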
