run mitmproxy-node inside of a Docker container? - python

Please note that the Docker container is run inside a Jenkins pipeline. I am trying to run a specific npm package, mitmproxy-node, in a Docker container; it has a dependency on the Python mitmproxy package. I think that I need to have the Python dependency inside the Node container so that the Node code can find and instantiate mitmproxy at runtime (it is turned on via a runtime process.env flag).
How do I construct/fix a Dockerfile to build the container so that the Node container, where the test.runner exists, can find and use the Python mitmproxy code?
I have something like this:
FROM node:14.15-buster
COPY package*.json ./
COPY .npmrc .npmrc
# is it here? RUN apt-get install python ?
RUN npm install
COPY . .
CMD ("npm", "test")
When trying to instantiate mitmproxy, it throws an error
Error in beforeSession:
Error: mitmdump, which is an executable that ships with mitmproxy, is not on your PATH.
Please ensure that you can run mitmdump --version successfully from your command line.
I am pretty new when it comes to Docker so your help is appreciated.

Here's the full Dockerfile that got me to where I am going.
FROM node:14.15.0-buster
WORKDIR /usr/src/app
RUN apt-get update || : && apt-get install python3 -y -V
ENV PATH="${PATH}:/usr/bin/python3"
RUN apt-get install python3-pip -y -V
RUN pip3 install mitmproxy
ENV PATH="${PATH}:/usr/bin/mitmproxy"
ARG NPM_TOKEN
COPY package*.json ./
COPY .npmrc .
RUN npm install
COPY . .
CMD ["npm", "test"]

Related

How to install python libraries in docker file on ubuntu?

I want to create a Docker image (Docker version 20.10.20) that contains Python libraries from a requirement.txt file with 50 libraries. How can I proceed without running into root user permission issues? Here is the file:
FROM ubuntu:latest
RUN apt update
RUN apt install python -y
WORKDIR /Destop/DS
# COPY requirement.txt ./
# RUN pip install -r requirement.txt
# it contains only pandas==1.5.1
COPY script2.py ./
CMD ["python3", "./script2.py"]
It failed at the requirement.txt command.
Error: it takes a lot of time while creating the image, because it asks for root permission.
For me, the only problem in your Dockerfile is the line RUN apt install python -y. It errors with Package 'python' has no installation candidate.
That is expected, since python refers to version 2.x of Python, which is deprecated and no longer present in the default Ubuntu repositories.
Changing your Dockerfile to use Python version 3.x worked fine for me.
FROM ubuntu:latest
RUN apt update
RUN apt install python3 python3-pip -y
WORKDIR /Destop/DS
COPY requirement.txt ./
RUN pip3 install -r requirement.txt
COPY script2.py ./
CMD ["python3", "./script2.py"]
To test I used requirement.txt
pandas==1.5.1
and script2.py
import pandas as pd
print(pd.__version__)
With this, building the Docker image and running a container from it executed successfully:
docker build -t myimage .
docker run --rm myimage
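If the concern about root permissions is pip modifying the system Python (newer ubuntu:latest releases even refuse system-wide pip installs), one possible variant, which is an assumption on my part rather than part of the answer above, is to install the requirements into a virtual environment. Whether a pinned version like pandas==1.5.1 still has wheels for that Ubuntu release's Python is a separate question.
FROM ubuntu:latest
RUN apt update && apt install -y python3 python3-venv
# Create a virtual environment so pip never touches the system Python.
RUN python3 -m venv /opt/venv
# Putting the venv first on PATH makes its python3/pip the defaults.
ENV PATH="/opt/venv/bin:$PATH"
WORKDIR /Destop/DS
COPY requirement.txt ./
RUN pip install -r requirement.txt
COPY script2.py ./
CMD ["python3", "./script2.py"]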

Installing private pip package inside docker container

I am trying to create a Docker container for a FastAPI application.
This application is going to use a private pip package hosted on GitHub.
During local development, I used the following command to install the dependency:
pip install git+https://<ACCESS_TOKEN>:x-oauth-basic@github.com/username/projectname
I tried the same approach inside the Dockerfile, however without success:
FROM python:3.9
WORKDIR /code
COPY ./requirements.txt /code/requirements.txt
ARG ACCESS_TOKEN=default_value
RUN /usr/local/bin/python -m pip install --upgrade pip
RUN echo "pip install git+https://${ACCESS_TOKEN}:x-oauth-basic#github.com/username/projectname"
RUN pip install --no-cache-dir --upgrade -r requirements.txt
COPY . /code
CMD ["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "8080"]
docker build --build-arg ACCESS_TOKEN=access_token_value .
The container builds without errors and during the build process I can see that the token is passed correctly.
However, after running the container with docker run <containerid> I get the following error:
ModuleNotFoundError: No module named 'projectname'
Has anyone tried such a thing before?
Is it the correct approach?
If I am not mistaken, you could run your pip command without the echo:
RUN pip install git+https://${ACCESS_TOKEN}:x-oauth-basic@github.com/username/projectname
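Applied to the Dockerfile from the question, the change is simply to drop the echo so the install actually runs (same placeholder username/projectname as above; note that a token passed as a build ARG can end up in the image history):
FROM python:3.9
WORKDIR /code
COPY ./requirements.txt /code/requirements.txt
ARG ACCESS_TOKEN=default_value
RUN /usr/local/bin/python -m pip install --upgrade pip
# Run the install directly instead of echoing the command string.
RUN pip install git+https://${ACCESS_TOKEN}:x-oauth-basic@github.com/username/projectname
RUN pip install --no-cache-dir --upgrade -r requirements.txt
COPY . /code
CMD ["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "8080"]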

Python script with numpy in dockerfile on Raspberry PI - libf77blas.so.3 error

I want to run a Docker container on my Raspberry PI 2 with a Python script that uses numpy. For this I have the following Dockerfile:
FROM python:3.7
COPY numpy_script.py /
RUN pip install numpy
CMD ["python", "numpy_script.py"]
But when I want to import numpy, I get the error message that libf77blas.so.3 was not found.
I have also tried to install numpy with a wheel from www.piwheels.org, but the same error occurs.
A Google search revealed that I need to install liblapack3. How do I need to modify my Dockerfile for this?
Inspired by the answer of om-ha, this worked for me:
FROM python:3.7
COPY numpy_script.py /
RUN apt-get update \
&& apt-get -y install libatlas-base-dev \
&& pip install numpy
CMD ["python", "numpy_script.py"]
Working Dockerfile
# Python image (debian-based)
FROM python:3.7
# Create working directory
WORKDIR /app
# Copy project files
COPY numpy_script.py numpy_script.py
# RUN command to update packages & install dependencies for the project
RUN apt-get update \
&& apt-get install -y \
&& pip install numpy
# Commands to run within the container
CMD ["python", "numpy_script.py"]
Explanation
You have an extra trailing \ in your Dockerfile; this is actually used for multi-line shell commands. You can see it used in action in my answer above. Beware that the last shell command in the chain (in this case pip) does not need a trailing \, a mishap that was present in the code you showed (illustrated after this list).
You should probably use a working directory via WORKDIR /app
Run apt-get update just to be sure everything is up-to-date.
It's recommended to group multiple shell commands within ONE RUN directive using &&. See best practices and this discussion.
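To illustrate the trailing-backslash point (a reconstructed example, not the exact code from the question):
# Problematic: the stray "\" after the last shell command continues the RUN
# instruction onto the next line, so CMD is swallowed into the shell command
# and the build fails (or Docker warns about an empty continuation line).
RUN apt-get update \
 && pip install numpy \
CMD ["python", "numpy_script.py"]

# Correct: no continuation after the final command in the chain.
RUN apt-get update \
 && pip install numpy
CMD ["python", "numpy_script.py"]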
Resources sorted by order of use in this dockerfile
FROM
WORKDIR
COPY
RUN
CMD

How do I setup only python 2.7 in a docker container?

I have a Node app, and in one use case I call a Python script from Node using python-shell. I am trying to set up this app on Docker and my Dockerfile looks something like this:
FROM debian:latest
# replace shell with bash so we can source files
RUN rm /bin/sh && ln -s /bin/bash /bin/sh
# update the repository sources list
# and install dependencies
RUN apt-get update \
&& apt-get install -y curl \
&& apt-get -y autoclean
# nvm environment variables
ENV NVM_DIR /usr/local/nvm
ENV NODE_VERSION 10.15.3
# install nvm
# https://github.com/creationix/nvm#install-script
RUN curl --silent -o- https://raw.githubusercontent.com/creationix/nvm/v0.31.2/install.sh | bash
# install node and npm
RUN source $NVM_DIR/nvm.sh \
&& nvm install $NODE_VERSION \
&& nvm alias default $NODE_VERSION \
&& nvm use default
# add node and npm to path so the commands are available
ENV NODE_PATH $NVM_DIR/v$NODE_VERSION/lib/node_modules
ENV PATH $NVM_DIR/versions/node/v$NODE_VERSION/bin:$PATH
# confirm installation
RUN node -v
RUN npm -v
RUN apt-get -y install python2.7
COPY package.json .
RUN npm install
COPY . .
CMD ["npm","run","start"]
After building and running this container, when I try to invoke the use case where the Python script gets called from Node, I get this error:
null
events.js:174
throw er; // Unhandled 'error' event
^
Error: spawn /usr/lib/python2.7 EACCES
at Process.ChildProcess._handle.onexit (internal/child_process.js:240:19)
at onErrorNT (internal/child_process.js:415:16)
at process._tickCallback (internal/process/next_tick.js:63:19)
Emitted 'error' event at:
at Process.ChildProcess._handle.onexit (internal/child_process.js:246:12)
at onErrorNT (internal/child_process.js:415:16)
at process._tickCallback (internal/process/next_tick.js:63:19)
npm ERR! code ELIFECYCLE
npm ERR! errno 1
Any help on setting up just Python 2.7 in a Docker container?
You can use the Python base image:
FROM python:2.7
This base image will have Python pre-configured, so you don't need to install Python separately. Hope it helps.
Here is the list of available images.
For quick reference please check
https://blog.realkinetic.com/building-minimal-docker-containers-for-python-applications-37d0272c52f3
You can use "FROM python:2.7" , the base image.
FROM python:2.7
WORKDIR /usr/src/app
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD [ "python", "./your-daemon-or-script.py" ]
Please find some examples in the documentation.

How to fix a dependency error in Docker?

I found a base Firefox standalone image. I am trying to run a script using Selenium with geckodriver inside a Docker container. I've tried to install requirements from the Dockerfile, but I get ModuleNotFoundError: No module named 'selenium'.
This is my Dockerfile:
FROM selenium/node-firefox:3.141.59-iron
# Set buffered environment variable
ENV PYTHONUNBUFFERED 1
# Set the working directory to /app
USER root
RUN mkdir /app
WORKDIR /app
EXPOSE 80
# Install packages needed for crontab and selenium
RUN apt-get update && apt-get install -y sudo libpulse0 pulseaudio software-properties-common libappindicator1 fonts-liberation python-pip virtualenv
RUN apt-get install binutils libproj-dev gdal-bin cron nano -y
# RUN virtualenv -p /usr/bin/python3.6 venv36
# RUN . venv36/bin/activate
# Install any needed packages specified in requirements.txt
ADD requirements.txt /app/
RUN pip install -r requirements.txt
ADD . /app/
ENTRYPOINT ["/bin/bash"]
I am expecting my script to run. I'm not sure why it is using Python 2.7 in the interactive shell; I thought the Selenium Docker image came with 3.6 and Selenium already installed.
Your container comes with both python (Python 2) and python3; it's just that python defaults to the 2.7 instance. You can change this behavior by issuing:
RUN alias python=python3
in your Dockerfile
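Alternatively, if the script is meant to run under Python 3, a rough sketch, and an assumption rather than part of this answer, is to install pip for Python 3 and call it explicitly so the packages land in the interpreter that actually runs the script (your_script.py is a placeholder name):
FROM selenium/node-firefox:3.141.59-iron
ENV PYTHONUNBUFFERED 1
USER root
WORKDIR /app
# Install pip for Python 3; the image's bare `python` is 2.7.
RUN apt-get update && apt-get install -y python3-pip
ADD requirements.txt /app/
# Use pip3 explicitly so selenium is installed for Python 3.
RUN pip3 install -r requirements.txt
ADD . /app/
CMD ["python3", "your_script.py"]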
