I have a GAE application with a bunch of people working on it, so to save everyone the trouble of setting up all the dependencies, I was hoping to let them run the GAE development server in a Docker container.
My dockerfile ends with:
CMD dev_appserver.py app_localhost.yaml
And my docker-compose is like:
version: '3'
services:
  my_image:
    build: ./my_image
    image: my_image
    ports:
      - "8080:8080"
      - "8000:8000"
    volumes:
      - ./my_image:/usr/src/
Building this works fine, and running it with docker-compose up also seems to work fine: it prints friendly output saying that the default module is accessible on port 8080 and all that good stuff.
But if I access localhost:8080 via Chrome I get ERR_SOCKET_NOT_CONNECTED. If I try to curl it I get curl: (56) Recv failure: Connection reset by peer.
It all runs fine and is accessible when I run it outside the container.
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3a2ae48f1f66 waxed_backend_image "/bin/sh -c 'dev_a..." 9 hours ago Up 8 hours 0.0.0.0:8000->8000/tcp, 0.0.0.0:8080->8080/tcp dockerpygae_waxed_backend_1
Here's a possibly related problem I have: making requests to localhost from inside a Docker container. It seems that every time I try to communicate with the GAE development server in any Docker-related way, things start to go horribly wrong.
I changed this:
CMD dev_appserver.py app_localhost.yaml
To this:
CMD dev_appserver.py --host 0.0.0.0 app_localhost.yaml
And now it works fine.
Although I don't know why it worked. I'd still appreciate an answer that is more correct than this one.
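My best guess at why, for anyone who finds this later: dev_appserver.py binds to localhost (127.0.0.1) by default, which inside the container only accepts connections from the container itself. The ports published by docker-compose ("8080:8080") forward traffic in over the container's bridge network interface, where nothing is listening, so the connection gets reset. --host 0.0.0.0 makes the server listen on all interfaces. A minimal sketch of the difference with plain Python sockets (nothing GAE-specific, port numbers arbitrary):
import socket

# Listening on 127.0.0.1 only accepts connections that originate inside the
# container; traffic forwarded by Docker's port mapping never reaches it.
loopback_only = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
loopback_only.bind(('127.0.0.1', 8080))
loopback_only.listen()

# Listening on 0.0.0.0 accepts connections on every interface, including the
# bridge interface that the published port forwards to.
all_interfaces = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
all_interfaces.bind(('0.0.0.0', 8081))
all_interfaces.listen()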
Related
I am new to the Docker world and I have some issues regarding how to connect two Docker services together.
I am using https://memgraph.com/ as my database, and when I run it locally I run it like this:
docker run -it -p 7687:7687 -p 3000:3000 memgraph/memgraph-platform
I wrote my program, which connects to the database using mgclient, and when I run it locally everything works fine.
Now I am trying to put it inside a Docker container and run it using docker-compose.yaml.
My docker-compose.yaml is:
version: "3.5"
services:
memgraph:
image: memgraph/memgraph-platform:2.1.0
container_name: memgraph_container
restart: unless-stopped
ports:
- "7687:7687"
- "3000:3000"
my_app:
image: memgraph_docker
container_name: something
restart: unless-stopped
command: python main.py
And when I try to run it with this command:
docker-compose up
I am getting an error regarding the connection to the server. Could anyone tell me what I am missing regarding the docker-compose.yaml?
How does your my_app connect to the database?
Are you using a connection string of the form localhost:7687 (or perhaps localhost:3000)? This would work locally because you are publishing (--publish=7687:7687 --publish=3000:3000) the container's ports 7687 and 3000 to the host's ports (using the same port numbers).
NOTE You can remap ports when you docker run. For example, you could --publish=9999:7687 and then you would need to use port 9999 on your localhost to access the container's port 7687.
When you combine the 2 containers using Docker Compose, each container is given a name that matches the service name. In this case, your Memgraph database is called memgraph (matching the service name).
Using Docker Compose, localhost takes on a different meaning. From my_app, localhost is my_app itself. So, using localhost under Docker Compose, my_app would be trying to connect to itself, not the database.
Under Docker Compose, my_app (the name of your app's service) needs to refer to Memgraph by its service name (memgraph). The ports are unchanged: still 7687 and 3000 (whichever is correct).
NOTE The ports statement in your Docker Compose config is possibly redundant unless you want to be able to access the database from your (local) host (which you may want for debugging). From a best-practice standpoint, once my_app is able to access the database correctly, you don't need to expose the database's ports to the host.
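For example, a minimal sketch of the connection from my_app, assuming it uses mgclient as described (adjust to however your code actually connects):
import mgclient

# Under Docker Compose, the host is the service name "memgraph";
# inside the Compose network the port is still 7687.
conn = mgclient.connect(host='memgraph', port=7687)
cursor = conn.cursor()
cursor.execute('RETURN 1')
print(cursor.fetchone())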
Update
It is good practice to externalize configuration from your app so that you can configure it dynamically. An easy way to do this is to use environment variables.
For example:
main.py:
import os

import mgclient  # assuming mgclient, as in your question

conn = mgclient.connect(
    host=os.getenv("HOST", "localhost"),
    # environment variables are strings; the port needs to be an int
    port=int(os.getenv("PORT", "7687")),
)
Then, when you run under e.g. Docker, you need to set these values:
docker run ... --env=HOST="localhost" --env=PORT="7687" ...
And under Docker Compose, you can:
version: "3.5"
services:
memgraph:
image: memgraph/memgraph-platform:2.1.0
container_name: memgraph_container
restart: unless-stopped
my_app:
image: memgraph_docker
container_name: something
restart: unless-stopped
command: python main.py
environment:
HOST: memgraph
PORT: 7687
I have some problems running Django on an ECS task.
I want to have a Django webapp running on an ECS task and accessible to the world.
Here are the symptoms:
When I run an ECS task with Django's python manage.py runserver 0.0.0.0:8000 as the entry point for my container, I get a connection refused response.
When I run the task with Gunicorn using gunicorn --bind 0.0.0.0:8000 my-project.wsgi, I get an empty response (no data).
I don't see any logs in CloudWatch, and I can't find any server logs when I SSH into the ECS instance.
Here are some of my settings related to that kind of issue:
I have set my ECS instance security group's inbound rules to All TCP | TCP | 0 - 65535 | 0.0.0.0/0 to be sure it's not a firewall problem. And I can confirm that, because I can run a Ruby on Rails server on the same ECS instance perfectly.
In my container task definition I set one port mapping to 80:8000 and another to 8000:8000.
In my settings.py, I have set ALLOWED_HOSTS = ["*"] and DEBUG = False.
Locally my server runs perfectly on the same Docker image when doing docker run -it -p 8000:8000 my-image gunicorn --bind=0.0.0.0:8000 wsgi, or the same with manage.py runserver.
Here is my Dockerfile for the Gunicorn web server:
FROM python:3.6
WORKDIR /usr/src/my-django-project
COPY my-django-project .
RUN pip install -r requirements.txt
EXPOSE 8000
CMD ["gunicorn","--bind","0.0.0.0:8000","wsgi"]
# CMD ["python","manage.py", "runserver", "0.0.0.0:8000"]
Any help would be appreciated!
To help you debug:
What is the status of the task when you try to access your webapp?
Figure out which instance the task is running on, and run docker ps on that ECS instance to find the running container.
If you can see the container running on the instance, try accessing your webapp directly on the server with a command like curl http://localhost:8000 or wget.
If your container is not running, try docker ps -a to see which one has just stopped, and check its output with docker logs -f <container>.
With this approach you cut out all the AWS firewall settings, so you can see whether the container itself is configured correctly. I think it will help you track down the issue more easily.
Once you've confirmed the container is running fine and you can reach it via localhost, then you can work on the security group inbound/outbound filters.
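One more thing that sometimes explains "no logs": gunicorn's access log is disabled by default, so a working container can still look silent in CloudWatch. A sketch of the Dockerfile CMD with both the access log and the error log sent to stdout/stderr (standard gunicorn flags) so the task's log driver can capture them:
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "--access-logfile", "-", "--error-logfile", "-", "wsgi"]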
I have previously been working with Docker, using services to run a website made with Django.
Now I would like to know how I should create a Docker setup to just run Python scripts, without a web server or any other website-related services.
An example of the normal Docker setup I am used to working with is:
version: '2'
services:
  nginx:
    image: nginx:latest
    container_name: nz01
    ports:
      - "8001:8000"
    volumes:
      - ./src:/src
      - ./config/nginx:/etc/nginx/conf.d
    depends_on:
      - web
  web:
    build: .
    container_name: dz01
    depends_on:
      - db
    volumes:
      - ./src:/src
    expose:
      - "8000"
  db:
    image: postgres:latest
    container_name: pz01
    ports:
      - "5433:5432"
    volumes:
      - postgres_database:/var/lib/postgresql/data:Z
volumes:
  postgres_database:
    external: true
What should the docker-compose.yml file look like?
Simply remove everything from your Dockerfile that has nothing to do with your script and start with something simple, like
FROM python:3
ADD my_script.py /
CMD [ "python", "./my_script.py" ]
You do not need Docker Compose to containerize a single Python script.
The example is taken from this simple tutorial about containerizing Python applications: https://runnable.com/docker/python/dockerize-your-python-application
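To build and run it (the my-script image tag is just an example name):
docker build -t my-script .
docker run --rm my-script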
You can easily overwrite the command specified in the Dockerfile (via CMD) when starting a container from the image. Just append the desired command to your docker run command, e.g.:
docker run IMAGE python /path/to/script.py
You can easily run Python interactively without even having to build an image:
docker run -it python
If you want to have access to some code you have written within the container, simply change that to:
docker run -it -v /path/to/code:/app python
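And if you want to run a specific script from the mounted directory instead of an interactive interpreter, something like this should work (the path and script name are placeholders):
docker run -it --rm -v /path/to/code:/app -w /app python python my_script.py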
Making a Dockerfile is unnecessary for this simple application.
Most Linux distributions come with Python preinstalled. Using Docker here adds significant complexity and I'd pretty strongly advise against Docker just to run a simple script. You can use a virtual environment to isolate a particular Python package's dependencies from the rest of the system.
(There is a pretty consistent stream of SO questions around getting filesystem permissions and user IDs right for scripts that principally want to interact with the host system. Also remember that running docker anything implies root-equivalent permissions. If you don't want Docker's filesystem and user namespace isolation, IMHO it's easier to just not use Docker where it doesn't make sense.)
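For example, a minimal virtual-environment setup (the requirements.txt and script name are placeholders):
python3 -m venv venv                # create an isolated environment in ./venv
. venv/bin/activate                 # activate it in the current shell
pip install -r requirements.txt    # install only this script's dependencies
python my_script.py                # run the script against those dependencies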
I am working on a local Django web server at http://localhost:8000, which works fine.
Meanwhile I need ngrok to do the port forwarding (ngrok http 8000), which works fine too.
Then I want to put ngrok, postgres, redis, maildev, etc. all in Docker containers; all the others work fine, except ngrok.
ngrok fails to connect to localhost:8000.
I understand why, I suppose: ngrok is running on a separate 'server', and localhost on that server does not have a web server running.
I am wondering how I can fix it.
In my docker-compose file I tried
network_mode: "host", but it does not work (macOS).
I tried to use host.docker.internal, but as I am a free-plan user, ngrok does not allow me to specify a hostname.
Any help is appreciated! Thanks.
Here is my docker-compose file:
ngrok:
  image: wernight/ngrok
  ports:
    - '4040:4040'
  environment:
    - NGROK_PORT=8000
    - NGROK_AUTH=${NGROK_AUTH_TOKEN}
  network_mode: "host"
UPDATE:
Stripe has a new tool, stripe-cli, which can do the same thing.
Just do as below:
stripe-cli:
  image: stripe/stripe-cli
  command: listen --api-key $STRIPE_SECRET_KEY
    --load-from-webhooks-api
    --forward-to host.docker.internal:8000/api/webhook/
I ended up getting rid of ngrok and using Serveo instead to solve the problem.
Here is the code, in case anyone runs into the same problem:
serveo:
  image: taichunmin/serveo
  tty: true
  stdin_open: true
  command: "ssh -o ServerAliveInterval=60 -R 80:host.docker.internal:8000 -o \"StrictHostKeyChecking no\" serveo.net"
I was able to get it to work by doing the following:
Instruct Django to bind to 0.0.0.0:8000 (all interfaces) with the following command: python manage.py runserver 0.0.0.0:8000
Instruct ngrok to connect to the web docker service in my docker compose file by passing in web:8000 as the NGROK_PORT environment variable.
I've pasted truncated versions of my settings below.
docker-compose.yml:
version: '3.7'
services:
  ngrok:
    image: wernight/ngrok
    depends_on:
      - web
    env_file:
      - ./ngrok/.env
    ports:
      - 4040:4040
  web:
    build:
      context: ./app
      dockerfile: Dockerfile.dev
    command: python manage.py runserver 0.0.0.0:8000
    env_file:
      - ./app/django-project/settings/.env
    ports:
      - 8000:8000
    volumes:
      - ./app/:/app/
And here is the env file referenced above (i.e. ./ngrok/.env):
NGROK_AUTH=your-auth-token-here
NGROK_DEBUG=1
NGROK_PORT=web:8000
NGROK_SUBDOMAIN=(optional)-your-subdomain-here
You can leave out the subdomain and auth fields. I figured this out by looking through their Docker entrypoint script.
I'm trying to learn how to use Docker and am having some trouble. I'm using a docker-compose.yaml file to run a Python script that connects to a MySQL container, and I'm trying to use ddtrace to send traces to Datadog. I'm using the following image from this GitHub page from Datadog:
ddagent:
  image: datadog/docker-dd-agent
  environment:
    - DD_BIND_HOST=0.0.0.0
    - DD_API_KEY=invalid_key_but_this_is_fine
  ports:
    - "127.0.0.1:8126:8126"
And my docker-compose.yaml looks like
version: "3"
services:
ddtrace-test:
build: .
volumes:
- ".:/app"
links:
- ddagent
ddagent:
image: datadog/docker-dd-agent
environment:
- DD_BIND_HOST=0.0.0.0
- DD_API_KEY=<my key>
ports:
- "127.0.0.1:8126:8126"
So then I'm running the command docker-compose run --rm ddtrace-test python test.py, where test.py looks like
from ddtrace import tracer

@tracer.wrap('test', 'test')
def foo():
    print('running foo')

foo()
And when I run the command, I'm returned with
Starting service---reprocess_ddagent_1 ... done
foo
cannot send spans to localhost:8126: [Errno 99] Cannot assign requested address
I'm not sure what this error means. When I use my key and run it locally instead of through the Docker image, it works fine. What could be going wrong here?
Containers are about isolation, so inside a container "localhost" means that container itself; ddtrace-test therefore cannot find ddagent on its own localhost. You have two ways to fix that:
Put network_mode: host in ddtrace-test so it binds to the host's network interface, skipping network isolation.
Change ddtrace-test to use the "ddagent" host instead of localhost, since in docker-compose services can be reached by their service names (see the sketch below).
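A sketch of option 2, assuming a ddtrace version that still supports tracer.configure(hostname=..., port=...); newer releases prefer setting the DD_AGENT_HOST and DD_TRACE_AGENT_PORT environment variables on the ddtrace-test service instead:
from ddtrace import tracer

# Point the tracer at the ddagent service on the Compose network instead of
# localhost inside the ddtrace-test container.
tracer.configure(hostname='ddagent', port=8126)

@tracer.wrap('test', 'test')
def foo():
    print('running foo')

foo()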