I'm a bit new to Docker and I'm messing around with it. I currently have a server running on port 5000 in another container. This server is run with Express and uses JavaScript. I'm trying to send requests to that server with Python. I tried using both localhost:5000 and 127.0.0.1:5000, but neither of these seems to work. What can I do? I noticed that if I run the Python code without Docker it works perfectly fine.
Python Dockerfile:
FROM ubuntu:latest
RUN apt update
RUN apt install python3 -y
RUN apt-get install -y python3-pip
WORKDIR /usr/app/src
COPY . .
RUN pip install discum
RUN pip install python-dotenv
EXPOSE 5000
CMD ["python3", "./src/index.py"]
JavaScript Dockerfile:
FROM node:latest
WORKDIR /usr/src/app
COPY package.json ./
RUN npm install
COPY . .
CMD ["npm", "start"]
You could create a network between the two containers using --net; have a look at this answer: How to get Docker containers to talk to each other while running on my local host?
Another way, and my preferred way, is to use docker-compose and create networks between your containers.
Using the service name and port is always the best approach.
So if you have a compose file like the one below, you could use the URL http://client:5000
version: "3.8"
services:
  client:
    image: blah
    ports:
      - "5000:5000"
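Inside the compose network, Docker's embedded DNS resolves the service name, so a Python container can target http://client:5000 directly. A minimal sketch (the service name client and the port are taken from the snippet above; from outside the compose network the name simply won't resolve):

```python
from urllib.request import urlopen
from urllib.error import URLError

def service_url(name: str, port: int) -> str:
    """Build the in-network URL for a compose service."""
    return f"http://{name}:{port}"

url = service_url("client", 5000)  # resolves only inside the compose network
print(url)

try:
    with urlopen(url, timeout=5) as resp:
        print(resp.status)
except URLError as err:
    # Outside the compose network (e.g. on the host), name resolution fails.
    print("not reachable from here:", err.reason)
```

The same pattern works for any service in the file: the service key in docker-compose.yml is the hostname, and the container-side port (not the host-mapped one) is what other containers connect to.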
I'm trying to create a Docker container which runs a Python http.server listening on port 8000. This is my Dockerfile:
FROM ubuntu:20.04 AS focal
WORKDIR /usr/src/server
COPY . .
RUN apt update && apt-get install -y build-essential python python3 zip net-tools iptables sudo curl
CMD ["python3", "-m http.server", "8000"]
First, I successfully built the image: docker build -t py_server .
Then I tried to run the image as a container: docker run --rm -p 8000:8000 py_server
But the following error was thrown:
/usr/bin/python3: Error while finding module specification for ' http.server' (ModuleNotFoundError: No module named ' http')
Not sure why python3 wasn't able to find the http module when specified with CMD in the Dockerfile. I tested whether python3 in the container has http.server by directly executing the command using bash on the py_server image, and it worked:
$ docker run -it --rm -p 8000:8000 py_server bash
root@d3426b37cf2e:/usr/src/cs435_mp1_server# python3 -m http.server 8000
Serving HTTP on 0.0.0.0 port 8000 (http://0.0.0.0:8000/) ...
I'm very confused about this.
I just noticed that there was a space in front of "http.server" which made python3 unable to find the module.
I edited the CMD line on my Dockerfile and it is working now:
CMD ["python3", "-m", "http.server", "8000"]
The space crept in because exec-form CMD passes each JSON array element as exactly one argument: "-m http.server" was handed to python3 as a single argument, so Python looked for a module literally named " http.server", leading space and all.
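The failure can be reproduced without Docker, since Python's module lookup uses the name verbatim and a leading space becomes part of it. A quick sketch:

```python
import importlib

# Exec-form CMD passes each JSON array element as exactly one argument, so
# ["python3", "-m http.server", "8000"] hands python3 the single argument
# "-m http.server". Python then looks for a module literally named
# " http.server" (leading space included).
importlib.import_module("http.server")  # the real module: imports fine

try:
    importlib.import_module(" http.server")  # what the broken CMD asked for
except ModuleNotFoundError as err:
    print(err)  # No module named ' http'
```

This is why splitting the flag and the module name into separate array elements, as in the corrected CMD, fixes it.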
I've installed and configured docker (as per documentation) and I am trying to build a flask application using tiangolo/uwsgi-nginx-flask:python3.8. I've built a hello-world application, and have tested it locally by running python manage.py and the application runs successfully. Link to full Code-File.
Dockerfile:
FROM tiangolo/uwsgi-nginx-flask:python3.8
ENV INSTALL_PATH /usr/src/helloworld
RUN mkdir -p $INSTALL_PATH
# install net-tools
RUN apt-get update \
&& DEBIAN_FRONTEND=noninteractive apt-get install -y \
net-tools \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
# set working directory
WORKDIR $INSTALL_PATH
# setup flask environment
# install all requirements
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
# copy all files and folder to docker
COPY . .
# run the application in docker environment
CMD [ "python", "./manage.py" ]
I built the application with docker build --tag hello-world:test . and ran it successfully with docker run -d -p 5000:5000 hello-world:test.
However, I'm unable to open the application at localhost:5000, 0.0.0.0:5000, or any other port. The application is running, as I can see from the CLI, but from the browser the page is not reachable.
A related question suggests checking the IP address:
docker inspect -f "{{ .NetworkSettings.Networks.nat.IPAddress }}" hungry_engelbart
>> <no value>
Found another solution at this link, but docker-machine is currently deprecated.
I'm new to docker, but I have tried to run the same thing following this tutorial, but faced similar issues.
Finally, I was able to solve this. I had to configure a new inbound rule under Windows Firewall > Advanced Settings > Inbound Rules > New Inbound Rule. Create a rule that allows a range of local IP addresses, which in my case was 192.168.0.1 to 192.168.0.100. Finally, you need to run the application on 0.0.0.0, as pointed out by @tentative in the comments. :)
I am trying to develop a Pyramid application using a Docker container. I built a Docker image with the Dockerfile below.
FROM ubuntu
RUN apt-get -y update
RUN apt-get -y install python3.6 python3.6-dev libssl-dev wget git python3-pip libmysqlclient-dev
WORKDIR /application
COPY . /application
RUN pip3 install -e .
EXPOSE 6543
This is my docker-compose file:
version: '3'
services:
  webserver:
    ports:
      - "6543:6543"
    build:
      context: .
      dockerfile: Dockerfile-development
    volumes:
      - .:/application
    command: pserve development.ini --reload
The Docker image is created successfully, but when I run docker-compose up and browse to localhost:6543 it shows "This site can't be reached". When I run it locally with pserve development.ini it works fine. I connected to the container interactively and ran pserve development.ini, which shows:
Starting server in PID 18.
Serving on http://localhost:6543
But when I browse to the URL from Chrome, it is not working.
You need to listen on all network interfaces. In your development.ini file, use:
listen = *:6543
You should get a log which says:
Serving on http://0.0.0.0:6543
Then try to access it from your host machine using localhost:6543.
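The underlying distinction is which interface the socket binds to: 127.0.0.1 only accepts connections from inside the container itself, while 0.0.0.0 accepts connections on every interface, which is what Docker's port mapping needs. A stdlib sketch of the same idea (port 0 asks the OS for a free port, just to keep the example self-contained):

```python
import socket

# Bind to all interfaces, as "listen = *:6543" does for pserve.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("0.0.0.0", 0))  # port 0: let the OS pick a free port
server.listen()

host, port = server.getsockname()
print(f"listening on {host}:{port}")  # reachable through a -p mapping

# By contrast, binding ("127.0.0.1", port) would only accept connections
# originating inside the same network namespace, i.e. the container itself,
# which is exactly the "Serving on http://localhost:6543" symptom above.
server.close()
```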
I have pulled jenkins container from docker hub like this:
docker pull jenkins
The container runs and I can access the Jenkins UI at:
http://localhost:8080
My question is:
If I want to create a Jenkins job that pulls from a GitHub repo and runs some Python tests from one of that repo's test files, how can I install extra packages such as virtualenvwrapper, pip, pytest, nose, selenium, etc.?
It appears that the Docker container does not share any files with the local host file system.
How can I install such packages in this running container?
Thanks
You will need to install all your dependencies at container build time.
You can base your own Dockerfile on the jenkins image and put your custom packages there. Your Dockerfile can look like:
FROM jenkins:latest
MAINTAINER Becks
RUN apt-get update && apt-get install -y {space-delimited list of packages}
Then, you can do something like...
docker build -t jenkins-docker --file Dockerfile .
docker run -it -d --name=jenkins-docker jenkins-docker
I might not have written all the syntax correctly, but this is basically what you need to do. If you want the run step to spin up Jenkins, follow along with what they do in the existing Dockerfile here and add the relevant sections to your Dockerfile, including the RUN steps that start Jenkins.
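For the specific packages named in the question, the derived Dockerfile might look like the sketch below. The exact apt package names are an assumption, since they depend on the Debian release the jenkins base image is built on:

```dockerfile
FROM jenkins:latest
USER root
# System-level Python tooling (package names assumed for the Debian base)
RUN apt-get update \
    && apt-get install -y python python-pip \
    && rm -rf /var/lib/apt/lists/*
# Test dependencies for the Jenkins jobs
RUN pip install virtualenvwrapper pytest nose selenium
# Drop back to the unprivileged user the base image expects
USER jenkins
```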
Came across this page, which approaches a similar problem, although it also mounts the Docker socket inside the container to connect one container to another. Since it's an external link, here's the relevant Dockerfile from there:
FROM jenkins:1.596
USER root
RUN apt-get update \
&& apt-get install -y sudo \
&& rm -rf /var/lib/apt/lists/*
RUN echo "jenkins ALL=NOPASSWD: ALL" >> /etc/sudoers
USER jenkins
COPY plugins.txt /usr/share/jenkins/plugins.txt
RUN /usr/local/bin/plugins.sh /usr/share/jenkins/plugins.txt
And this is how you can spin it up.
docker build -t myjenk .
...
Successfully built 471fc0d22bff
$ docker run -d -v /var/run/docker.sock:/var/run/docker.sock \
-v $(which docker):/usr/bin/docker -p 8080:8080 myjenk
I strongly suggest going through that post. It's pretty awesome.
I want to write a simple Python application and put it in a Docker container with a Dockerfile. My Dockerfile is:
FROM ubuntu:saucy
# Install required packages
RUN apt-get update
RUN DEBIAN_FRONTEND=noninteractive apt-get -y install python
RUN DEBIAN_FRONTEND=noninteractive apt-get -y install mysql-server python-mysqldb
# Add our python app code to the image
RUN mkdir -p /app
ADD . /app
WORKDIR /app
# Set the default command to execute
CMD ["python", "main.py"]
In my python application I only want to connect to the database. main.py look something like this:
import MySQLdb as db
connection = db.connect(
host='localhost',
port=3306,
user='root',
passwd='password',
)
When I built the Docker image with:
docker build -t myapp .
and run docker image with:
docker run -i myapp
I got this error:
_mysql_exceptions.OperationalError: (2002, "Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)")
What is the problem?
The problem is that you've never started the database - you need to explicitly start services in most Docker images. But if you want to run two processes in Docker (the DB and your python program), things get a little more complex. You either have to use a process manager like supervisor, or be a bit cleverer in your start-up script.
To see what I mean, create the following script, and call it cmd.sh:
#!/bin/bash
mysqld &
python main.py
Add it to the Dockerfile:
FROM ubuntu:saucy
# Install required packages
RUN apt-get update
RUN DEBIAN_FRONTEND=noninteractive apt-get -y install python
RUN DEBIAN_FRONTEND=noninteractive apt-get -y install mysql-server python-mysqldb
# Add our python app code to the image
RUN mkdir -p /app
ADD . /app
WORKDIR /app
# Set the default command to execute
COPY cmd.sh /cmd.sh
RUN chmod +x /cmd.sh
CMD ["/cmd.sh"]
Now build and try again. (Apologies if this doesn't work, it's off the top of my head and I haven't tested it).
Note that this is not a good solution; mysql will not be getting signals proxied to it, so probably won't shutdown properly when the container stops. You could fix this by using a process manager like supervisor, but the easiest and best solution is to use separate containers. You can find stock containers for mysql and also for python, which would save you a lot of trouble. To do this:
Take the mysql installation stuff out of the Dockerfile
Change localhost in your python code to mysql or whatever you want to call your MySQL container.
Start a MySQL container with something like docker run -d --name mysql -e MYSQL_ROOT_PASSWORD=password mysql
Start your container linked to the mysql container, e.g.: docker run --link mysql:mysql myapp
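With the containers split, the only change needed on the Python side is the connection host. A sketch of the top of main.py, reading the host from the environment so the same file also runs outside Docker (the DB_HOST / DB_PORT variable names and the credentials are illustrative assumptions, not part of the original code):

```python
import os

# "mysql" is the linked container's name; override with DB_HOST when
# running outside Docker (e.g. DB_HOST=127.0.0.1 python main.py).
DB_HOST = os.environ.get("DB_HOST", "mysql")
DB_PORT = int(os.environ.get("DB_PORT", "3306"))

print(f"connecting to {DB_HOST}:{DB_PORT}")

# The actual connection, unchanged apart from the host:
# import MySQLdb as db
# connection = db.connect(host=DB_HOST, port=DB_PORT,
#                         user='root', passwd='password')
```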