I've managed to deploy a Django app inside a docker container on my personal Mac using localhost with Apache. For this, I use docker-compose with the build and up commands. I'm trying to run the same Django app on a CentOS server using a docker image generated on my local machine. Apache is also running on the server on port 90.
docker run -it -d --hostname xxx.xxx.xxx -p 9090:9090 --name test idOfImage
How can I access this container with Apache using the hostname and port number in the URL? Any help would be greatly appreciated. Thanks.
From other containers, the best way to access this container is to attach both to the same network and use the container's --name as a DNS name together with the internal port (the second port from the -p option; the -p option isn't strictly required for this case). From outside a container, or from other hosts, use the host's IP address or DNS name and the published port (the first port from the -p option).
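A minimal sketch of both paths, reusing the names from the question; the network name mynet and your-server-hostname are placeholders:
docker network create mynet
docker run -d --network mynet --name test -p 9090:9090 idOfImage
# from another container on the same network, use the container name and the internal port:
docker run --rm --network mynet alpine wget -qO- http://test:9090/
# from the CentOS host or any other machine, use the host's address and the published port:
curl http://your-server-hostname:9090/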
The docker run --hostname option isn't especially useful; about the only time you'd want to specify it is if you have some magic licensed software that only runs if it has a particular hostname.
Avoid localhost in a Docker context, except for the very specific case where you know you're running a process on the host system outside a container and you're trying to access a container's published port or some other service running on the host. Don't use "localhost" as a generic term; it has a very specific, context-dependent meaning (every process believes it's running "on localhost").
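As a small illustration of that context dependence, assuming the docker run command from the question with -p 9090:9090: on the CentOS host itself,
curl http://localhost:9090/
reaches the published port, while inside any other container localhost refers to that container itself, not to the host or to the Django app.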
Related
I am running a PostgreSQL database container with the following command:
docker run --name db -p 5432:5432 -e POSTGRES_USER=postgres -e POSTGRES_PASSWORD=postgres -v pg
Of course I have changed the 'localhost' to 'db' since I am trying to connect with this container.
When I try to connect to the container's database I get the following error:
psycopg2.OperationalError: could not translate host name "db" to address: Name or service not known
I can't use Docker Compose in this context (I know how to run it, though).
What else do I need to add to my docker command so that I can connect from Python?
Of course I have changed the 'localhost' to 'db' since I am trying to connect with this container.
No, you don't: your docker run command publishes port 5432 to the host machine, as stated by the flag -p 5432:5432.
So if you are trying to connect to the container from your host machine, you will use the host localhost.
I think you are confusing plain Docker with Docker networking, where multiple containers try to communicate with each other, as is the case with docker-compose.
In the case of docker-compose, when you have multiple services running, they can communicate with each other using the container names as hostnames. Similarly, if you have a network between Docker containers, they can communicate with each other using the container name as the host.
So if this were docker-compose, with the database running in one container and your app in another, then you would replace localhost with db.
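A minimal docker-compose.yml sketch of that situation (the app service and its build context here are hypothetical):
version: '3'
services:
  db:
    image: postgres
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
  app:
    build: .
    depends_on:
      - db
    # inside this container, the connection host is "db", not "localhost"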
Hope that clarifies things
If your Python program is running on the Docker host, then you don't want to "of course" change localhost to db in your connection string, since (a) Docker doesn't change your host's DNS settings and (b) you're using -p to publish the service running on port 5432 to the host on port 5432.
You would only use the name db from another Docker container running in the same Docker network.
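For example, with that -p 5432:5432 mapping in place, a connection from the host itself would look like this (assuming the psql client is installed on the host):
psql -h localhost -p 5432 -U postgres
The name db, by contrast, only resolves from another container attached to the same user-defined Docker network.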
Currently, my Docker image runs as expected when run with the following line inside VSCode's CLI.
docker run -it -d -p 5000:5000 flaskapp
This allows me to open http://localhost:5000/ and access it. However, if I run it from Docker Desktop, I cannot access it via localhost. In my Dockerfile, I have made sure to include EXPOSE 5000.
docker build -t flaskapp:latest .
How do I run a Docker image inside Docker Desktop or on EC2 with the -p flag?
We need to specify the host as 0.0.0.0 in app.run().
E.g.: app.run(host="0.0.0.0")
Then add an inbound rule to the EC2 instance's security group to expose the port.
Use the EC2 instance's IP with the port number to access it.
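If you prefer the CLI over the console, the inbound rule can be added roughly like this (the security group ID here is a placeholder):
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 5000 --cidr 0.0.0.0/0
The container's port still needs to be published on the instance, e.g. docker run -d -p 5000:5000 flaskapp.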
I am trying to teach myself how to deploy a dash application on AWS.
I have created a folder 'DashboardImage' on my mac that contains a Dockerfile, README.md, requirements.txt and an app folder that contains my python dash app 'dashboard.py'.
My Dockerfile looks like this:
I go into the DashboardImage folder and run
docker build -t conjoint_dashboard .
It built successfully and if I run docker images I can see the details of the image.
When I try
docker run conjoint_dashboard
The terminal tells me Dash is running on http://0.0.0.0:8050/ but it is not connecting.
I can't understand why.
You need to publish the port when running the container; update the mapping according to your port, e.g. if your application listens on port 8050 then:
docker run -p 8050:8050 conjoint_dashboard
where -p = publish; the first port is the HOST port and the second is the CONTAINER port.
You can also update your Dockerfile:
FROM continuumio/miniconda3
...
EXPOSE 8050/tcp
...
The EXPOSE instruction doesn't actually publish the port. It functions as a type of documentation between the person who builds the image and the person who runs the container, about which ports are intended to be published.
To actually publish the port when running the container, use the -p flag on docker run to publish and map one or more ports, or the -P flag to publish all exposed ports and map them to high-order ports.
By default, EXPOSE assumes TCP. You can also specify UDP:
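EXPOSE 80/udp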
You need to publish the port; see: https://docs.docker.com/engine/reference/commandline/run/#publish-or-expose-port--p---expose
$ docker run -p 127.0.0.1:80:8080/tcp ubuntu bash
This binds port 8080 of the container to TCP port 80 on 127.0.0.1 of the host machine. You can also specify udp and sctp ports. The Docker User Guide explains in detail how to manipulate ports in Docker.
(I'm having the inverse of the usual problem: a port that was never exposed is reachable.)
In my case I have 2 containers on the same network. One is an Alpine-based Python image running a Flask app; the other is a bare-bones Ubuntu 18.04. The services are initialised basically like this:
docker-compose.yml:
version: '3'
services:
  pythonflask:
    build: someDockerfile # from python:3.6-alpine
    restart: unless-stopped
  ubuntucontainer:
    build: someOtherDockerfile # from ubuntu:18.04
    depends_on:
      - pythonflask
    restart: unless-stopped
The Python Flask app runs on port 5000.
Notice the lack of expose: - 5000 in the docker-compose.yml file.
The problem is that I'm able to get a correct response when cURLing http://pythonflask:5000 from inside ubuntucontainer.
Steps:
$ docker exec -it ubuntucontainer /bin/bash
...and then within the container...
root@ubuntucontainer:/# curl http://pythonflask:5000/
...correctly returns my response from the Flask app.
However from my machine running docker:
$ curl http://localhost:5000/
Doesn't return anything (as expected).
As I test different ports, they get automatically exposed each time. What is doing this?
Connectivity between containers is achieved by placing the containers on the same Docker network and communicating over the container IP and port (rather than the host published port). So what does expose do then?
Expose is documentation
Expose in docker is used by image creators to document the expected port that the application will listen on inside the container. With the exception of some tools and a flag in docker that uses this metadata documentation, it is not used to control access between containers or modify docker's networking. Applications may be reconfigured at runtime to listen to a different port, and you can connect to ports that have not been exposed.
For DNS lookups between containers, the network needs to be user created, not one of the default networks from docker (e.g. DNS is not enabled in the default bridge network named "bridge"). With DNS, you can lookup the container name, service name (from a compose file), and any network aliases created for that container on that network.
The other half of the equation is "publish" in docker. This creates a mapping from the host to the container to allow external access. It is implemented with a proxy process that runs on the host and forwards new connections. Because of the implementation, you can publish a port on the host even if the container is not listening on the port, though you would receive an error when trying to connect to that port in that scenario.
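A small sketch of those two points, DNS on a user-created network and connecting to a port that was never exposed (the network and container names here are made up):
docker network create testnet
docker run -d --name web --network testnet python:3.6-alpine python -m http.server 8000
docker run --rm --network testnet alpine wget -qO- http://web:8000/
Port 8000 is neither EXPOSEd by the image nor published with -p, yet the second container reaches it by the name web.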
The lack of expose: ... just means that no additional port is exposed by the services you defined in your docker-compose.yml.
Within the images you use, there are still exposed ports, which are reachable from within the network that docker-compose creates automatically.
That is why you can reach one container from within another. In addition, every container can be reached via its service name from the docker-compose.yml on that internal network.
You should not be able to access Flask from your host (http://localhost:5000) without publishing the port.
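If you did want it reachable from the host, a hedged sketch of the change would be to publish the port on the service in the compose file from the question:
  pythonflask:
    build: someDockerfile # from python:3.6-alpine
    restart: unless-stopped
    ports:
      - "5000:5000"
After that, curl http://localhost:5000/ from the host should respond.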
I currently have a Linux Debian VM set up through Google Cloud Platform. I have docker installed and would like to start running application containers within it.
I'm following the documentation on Docker's website under "Running a web application in Docker". I download the image and run it with no issue. I then run $ sudo docker ps and get the port, which is 0.0.0.0:32768->5000/tcp.
I then try to browse to the website at http://"MyExternalVMIP":32768 but the application doesn't come up. Am I missing something?
First, test to see if your service works at all. To do this, from the VM itself, run:
wget http://localhost:32768
or
curl http://localhost:32768
If that works, that means the service is operating properly, so let's move further with the debugging.
There may be two firewalls that are blocking external access to your docker process:
the VM's OS firewall
Google Compute Engine firewall
You can see if you're affected by the first issue by accessing the URL from the VM itself and from another VM on the same GCE network (use the VM name in the URL, not the external IP):
wget http://[vm-name]:32768
To fix the first issue, you would have to either open up the single port (recommended):
iptables -I INPUT -p tcp -s 0.0.0.0/0 --dport 32768 -j ACCEPT
or disable the firewall entirely, e.g., by stopping iptables (not recommended).
If, after fixing this, you can access the URL from another host on the same GCE network, but still can't access it from outside of Google Compute Engine, you're affected by the second issue. To fix it, you will need to open the port in the GCE firewall; this can also be done via the web UI in the Developers Console.
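For reference, the GCE firewall rule can also be created from the command line with gcloud, roughly like this (the rule name is arbitrary):
gcloud compute firewall-rules create allow-docker-32768 --allow tcp:32768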
Create an entry in your local SSH config file as below, with a specific local-forward port. In my case it's an example of YARN's IP, which I want to access in a browser.
Host hadoop
    HostName <External-IP>
    User <Local-machine-username>
    IdentityFile ~/.ssh/<private-key-for-above-user>
    LocalForward 8089 <Internal-IP>:8088
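With that entry in place, usage would look roughly like this:
ssh hadoop
While the session is open, browsing to http://localhost:8089 on your local machine forwards the traffic to <Internal-IP>:8088.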