ngrok in Docker cannot connect to Django development server

I am working on a local Django web server at http://localhost:8000, which works fine.
Meanwhile I need ngrok to do the port forwarding (ngrok http 8000), which works fine too.
Then I wanted to put ngrok, Postgres, Redis, MailDev, etc. all in Docker containers. Everything else works fine, except ngrok: it fails to connect to localhost:8000.
I understand why: ngrok is running in a separate 'server', and localhost on that server does not have a web server running.
I am wondering how I can fix it.
I tried network_mode: "host" in my docker-compose file, but it does not work (macOS).
I tried to use host.docker.internal, but as a free-plan user, ngrok does not allow me to specify a hostname.
Any help is appreciated! Thanks.
Here is my docker-compose file:
ngrok:
  image: wernight/ngrok
  ports:
    - '4040:4040'
  environment:
    - NGROK_PORT=8000
    - NGROK_AUTH=${NGROK_AUTH_TOKEN}
  network_mode: "host"

UPDATE:
Stripe has a new tool, stripe-cli, which can do the same thing. Just do as below:
stripe-cli:
  image: stripe/stripe-cli
  command: listen --api-key $STRIPE_SECRET_KEY
    --load-from-webhooks-api
    --forward-to host.docker.internal:8000/api/webhook/
I ended up getting rid of ngrok and using Serveo instead to solve the problem. Here is the code, in case anyone runs into the same problem:
serveo:
  image: taichunmin/serveo
  tty: true
  stdin_open: true
  command: "ssh -o ServerAliveInterval=60 -R 80:host.docker.internal:8000 -o \"StrictHostKeyChecking no\" serveo.net"

I was able to get it to work by doing the following:
Instruct Django to bind to all interfaces on port 8000 with the following command: python manage.py runserver 0.0.0.0:8000
Instruct ngrok to connect to the web Docker service in my docker-compose file by passing web:8000 as the NGROK_PORT environment variable.
I've pasted truncated versions of my settings below.
docker-compose.yml:
version: '3.7'
services:
  ngrok:
    image: wernight/ngrok
    depends_on:
      - web
    env_file:
      - ./ngrok/.env
    ports:
      - 4040:4040
  web:
    build:
      context: ./app
      dockerfile: Dockerfile.dev
    command: python manage.py runserver 0.0.0.0:8000
    env_file:
      - ./app/django-project/settings/.env
    ports:
      - 8000:8000
    volumes:
      - ./app/:/app/
And here is the env file referenced above (i.e. ./ngrok/.env):
NGROK_AUTH=your-auth-token-here
NGROK_DEBUG=1
NGROK_PORT=web:8000
NGROK_SUBDOMAIN=(optional)-your-subdomain-here
You can leave out the subdomain and auth fields. I figured this out by looking through their Docker entrypoint script.
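Once the stack is up, ngrok's web inspector (the reason port 4040 is published above) shows the tunnel status and the public URL:

docker-compose up -d
# then open http://localhost:4040 in a browser to find the public tunnel URL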

Related

Django project on AWS not updating code after git pull

I am deploying a Django project on AWS. I am running Postgres, Redis, and Nginx, as well as my project, in Docker there.
Everything is working fine, but when I change something on my local machine, push the changes to git, and then pull them on the AWS instance, the code changes and the files are updated, but the changes do not show up on the website. Only the static files update automatically (I guess because of Nginx). Here is my docker-compose config:
version: '3.9'
services:
  redis:
    image: redis
    command: redis-server
    ports:
      - "6379:6379"
  postgres:
    image: postgres
    environment:
      - POSTGRES_USER=
      - POSTGRES_PASSWORD=
      - POSTGRES_DB=
    ports:
      - "5432:5432"
  web:
    image: image_name
    build: .
    restart: always
    command: gunicorn project.wsgi:application --bind 0.0.0.0:8000
    env_file:
      - envs/.env.prod
    ports:
      - "8000:8000"
    volumes:
      - ./staticfiles/:/tmp/project/staticfiles
    depends_on:
      - postgres
      - redis
  nginx:
    image: nginx
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./staticfiles:/home/app/web/staticfiles
      - ./nginx/conf.d:/etc/nginx/conf.d
      - ./nginx/logs:/var/log/nginx
      - ./certbot/www:/var/www/certbot/:ro
      - ./certbot/conf/:/etc/nginx/ssl/:ro
    depends_on:
      - web
Can you please tell me what to do?
I tried deleting everything from Docker and running compose up again, but nothing happened.
I looked all over here, but I still don't understand. Restarting the instance does not work either. I tried clearing the Redis cache, because I have template caching, and still nothing.
After updating the code on the EC2 instance, you need to build a new web Docker image from that new code. If you just restart things, docker-compose will keep picking up the last image you built.
You need to run the following sequence of commands (on the EC2 instance):
docker-compose build web
docker-compose up -d
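If you prefer a single step, docker-compose up also accepts a --build flag that rebuilds the image before starting the containers:

docker-compose up -d --build web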
You are seeing the static files change immediately, without rebuilding the Docker image, because you are mapping those files in via a Docker volume.
I found the issue... it was because I had template caching.
If I clear the cache and do what @MarkB suggested, everything updates.
I don't understand why this happens, since I tried flushing the whole Redis cache after changes, but I guess it solves my issue.
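For reference, one quick way to clear Django's configured cache from the host (a sketch, assuming the template cache goes through Django's CACHES setting and the service is called web as above):

docker-compose exec web python manage.py shell -c "from django.core.cache import cache; cache.clear()"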

How to connect 2 docker components within the same docker-compose.yaml

I am new to the Docker world and I have some issues regarding how to connect 2 Docker services together.
I am using https://memgraph.com/ as my database, and when I run it locally I run it like this:
docker run -it -p 7687:7687 -p 3000:3000 memgraph/memgraph-platform
I wrote a program that connects to the database using mgclient, and when I run it locally everything works fine.
Now I am trying to put it inside a Docker container and run it using docker-compose.yaml.
My docker-compose.yaml is:
version: "3.5"
services:
memgraph:
image: memgraph/memgraph-platform:2.1.0
container_name: memgraph_container
restart: unless-stopped
ports:
- "7687:7687"
- "3000:3000"
my_app:
image: memgraph_docker
container_name: something
restart: unless-stopped
command: python main.py
When I try to run it with this command:
docker-compose up
I get an error regarding the connection to the server. Could anyone tell me what I am missing in the docker-compose.yaml?
How does your my_app connect to the database?
Are you using a connection string of the form localhost:7687 (or perhaps localhost:3000)? That works locally because you are publishing (--publish=7687:7687 --publish=3000:3000) the container's ports 7687 and 3000 to the same ports on the host.
NOTE: You can remap ports when you docker run. For example, with --publish=9999:7687 you would need to use port 9999 on your localhost to access the container's port 7687.
When you combine the 2 containers using Docker Compose, each container is given a name that matches the service name. In this case, your Memgraph database is called memgraph (matching the service name).
Under Docker Compose, localhost takes on a different meaning: from my_app, localhost is my_app. So, using localhost under Docker Compose, my_app would try connecting to itself, not to the database.
Under Docker Compose, my_app (the name of your app) needs to refer to Memgraph by its service name (memgraph). The port is unchanged: 7687 (or 3000, whichever is correct).
NOTE: The ports statement in your Docker Compose config is possibly redundant unless you want to be able to access the database from your (local)host (which you may want for debugging). From a best-practice standpoint, once my_app is able to access the database correctly, you don't need to expose the database's ports to the host.
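Concretely, the only change in the app is the host used in the connection call. A sketch, assuming mgclient as mentioned in the question:

import mgclient

# 'memgraph' is the Compose service name; Docker's internal DNS resolves it
# to the database container
conn = mgclient.connect(host="memgraph", port=7687)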
Update
It is good practice to externalize configuration from your app, so that you can configure it dynamically. An easy way to do this is to use environment variables.
For example:
main.py:
import os

from mgclient import connect  # the client library used in the question

conn = connect(
    host=os.getenv("HOST"),
    port=int(os.getenv("PORT", "7687")),  # mgclient expects an integer port
)
Then, when you run under e.g. Docker, you need to set these values:
docker run ... --env=HOST="localhost" --env=PORT="7687" ...
And under Docker Compose, you can:
version: "3.5"
services:
memgraph:
image: memgraph/memgraph-platform:2.1.0
container_name: memgraph_container
restart: unless-stopped
my_app:
image: memgraph_docker
container_name: something
restart: unless-stopped
command: python main.py
environment:
HOST: memgraph
PORT: 7687

Django shell_plus: How to access Jupyter notebook in Docker Container

I am trying to access a Jupyter notebook created with the shell_plus command from django-extensions in a Docker container.
docker-compose -f local.yml run --rm django python manage.py shell_plus --notebook
My configuration is based on the answers of @RobM and @Mark Chackerian to this Stack Overflow question, i.e. I installed and configured a custom kernel, and my Django app's config file has the constant NOTEBOOK_ARGUMENTS set to:
NOTEBOOK_ARGUMENTS = [
    '--ip', '0.0.0.0',
    '--port', '8888',
    '--allow-root',
    '--no-browser',
]
I can see the container starting successfully in the logs:
[I 12:58:54.877 NotebookApp] The Jupyter Notebook is running at:
[I 12:58:54.877 NotebookApp] http://10d56bab37fc:8888/?token=b2678617ff4dcac7245d236b6302e57ba83a71cb6ea558c6
[I 12:58:54.877 NotebookApp] or http://127.0.0.1:8888/?token=b2678617ff4dcac7245d236b6302e57ba83a71cb6ea558c6
But I can't open the URL. I have forwarded port 8888 in my docker-compose file, tried to use localhost instead of 127.0.0.1, and also tried to use the container's IP, all without success.
It feels like I am missing the obvious here… Any help is appreciated.
For the record, as of 2020 I managed to get a working Django setup with PostgreSQL in docker-compose:
development.py (settings.py)
import os  # needed for the os.environ assignment below

INSTALLED_APPS += [
    "django_extensions",
]

SHELL_PLUS = "ipython"
SHELL_PLUS_PRINT_SQL = True

NOTEBOOK_ARGUMENTS = [
    "--ip",
    "0.0.0.0",
    "--port",
    "8888",
    "--allow-root",
    "--no-browser",
]

IPYTHON_ARGUMENTS = [
    "--ext",
    "django_extensions.management.notebook_extension",
    "--debug",
]

IPYTHON_KERNEL_DISPLAY_NAME = "Django Shell-Plus"

SHELL_PLUS_POST_IMPORTS = [  # extra things to import in notebook
    ("module1.submodule", ("func1", "func2", "class1", "etc")),
    ("module2.submodule", ("func1", "func2", "class1", "etc")),
]

os.environ["DJANGO_ALLOW_ASYNC_UNSAFE"] = "true"  # only use in development
requirements.txt
django-extensions
jupyter
notebook
Werkzeug # needed for runserver_plus
...
docker-compose.yml
version: "3"
services:
db:
image: postgres:13
environment:
- POSTGRES_HOST_AUTH_METHOD=trust
restart: always
ports:
- "5432:5432"
volumes:
- postgres_data:/var/lib/postgresql/data/
web:
build: .
environment:
- DJANGO_SETTINGS_MODULE=settings.development
command:
- scripts/startup.sh
volumes:
- ...
ports:
- "8000:8000" # webserver
- "8888:8888" # ipython notebook
depends_on:
- db
volumes:
postgres_data:
From your host terminal, run this command:
docker-compose exec web python manage.py shell_plus --notebook
Finally, navigate to http://localhost:8888/?token=<xxxx> in your host's web browser.
Got it to work, though why it does so is beyond me. Exposing the ports in the docker-compose run command did the trick:
docker-compose -f local.yml run --rm -p 8888:8888 django python manage.py shell_plus --notebook
I was under the impression that exposing ports in my local.yml would also open them in containers started by run.
The compose run command does not expose the defined service ports by default. From the documentation at https://docs.docker.com/compose/reference/run/:
The [...] difference is that the docker-compose run command does not create any of the ports specified in the service configuration. This prevents port collisions with already-open ports. If you do want the service's ports to be created and mapped to the host, specify the --service-ports flag:
docker-compose run --service-ports web python manage.py shell
You will therefore need to run:
docker-compose -f local.yml run --rm --service-ports django python manage.py shell_plus --notebook
It might also be that the default port 8888 is already in use by a local Jupyter server (e.g. one spun up by VS Code's Jupyter integration). I therefore usually map to a different port in the NOTEBOOK_ARGUMENTS list in settings.py. (In this case the port mapping in the compose file needs to be adjusted as well, of course, and there must not be another container with the same service definition running in the background, as it might also occupy the port.)
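For example, a sketch that moves the notebook to port 8889 (an arbitrary free port) to avoid the clash:

# settings.py
NOTEBOOK_ARGUMENTS = [
    "--ip", "0.0.0.0",
    "--port", "8889",  # anything other than the locally occupied 8888
    "--allow-root",
    "--no-browser",
]

with the compose port mapping adjusted to match:

ports:
  - "8889:8889"  # must match the --port value above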
If you want to run the Jupyter notebook as a separate service:
jupyter_notebook:
  build:
    context: .
    dockerfile: docker/dev/web/Dockerfile
  command: python manage.py shell_plus --notebook
  depends_on:
    - web
  ports:
    - 8888:8888 # ipython notebook
  env_file:
    - .env
Afterwards, run:
docker-compose logs -f jupyter_notebook
and you will find the access token in the logs.

docker compose application unable to see redis server

I'm trying to bring up 3 apps using Docker: a Flask web app, a Redis server, and a Celery app that communicates with the Flask one via Redis.
The first 2 seem to come up without any issues, but the Celery app fails with this error:
celery_1 exited with code 1
My docker-compose.yml file looks like this:
version: '2'
services:
  redis:
    image: "redis:alpine"
  web:
    build: .
    ports:
      - "7998:7998"
    command: "gunicorn -b 0.0.0.0:7998 --log-level TRACE common_apps:app"
  celery:
    build: .
    command: "celery -A common_apps.celery_app worker"
If I cut the Celery part out and launch it individually, the error message I get is about not being able to find the Redis host (but that may be because the hostname redis is only resolvable within the docker-compose network).
Any thoughts on what's broken here?
Thanks a lot.
The problem was that the .env file was being ignored. After I moved the values from the .env file into the docker-compose.yml file, the application started working as intended.
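A sketch of what that can look like with the broker address inlined in the compose file (CELERY_BROKER_URL is an assumed variable name here; use whatever key the app actually reads):

celery:
  build: .
  command: "celery -A common_apps.celery_app worker"
  environment:
    - CELERY_BROKER_URL=redis://redis:6379/0  # 'redis' is the Compose service name
  depends_on:
    - redis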

Cannot see my web application using Docker container

I am trying to test my web application using a Docker container, but I am not able to see it when I try to access it through my browser.
The docker-compose file looks like:
version: '2'
services:
  db:
    image: postgres
    volumes:
      - ~/pgdata:/var/lib/postgresql/data/pgdata
    environment:
      POSTGRES_PASSWORD: "dbpassword"
      PGDATA: "/var/lib/postgresql/data/pgdata"
    ports:
      - "5432:5432"
  web:
    build:
      context: .
      dockerfile: Dockerfile-web
    ports:
      - "5000:5000"
    volumes:
      - ./web:/web
    depends_on:
      - db
  backend:
    build:
      context: .
      dockerfile: Dockerfile-backend
    volumes:
      - ./backend:/backend
    depends_on:
      - db
The Dockerfile-web looks like:
FROM python

ADD web/requirements.txt /web/requirements.txt
ADD web/bower.json /web/bower.json
WORKDIR /web
RUN \
    wget https://nodejs.org/dist/v4.4.7/node-v4.4.7-linux-x64.tar.xz && \
    tar xJf node-*.tar.xz -C /usr/local --strip-components=1 && \
    rm -f node-*.tar.xz
RUN npm install -g bower
RUN bower install --allow-root
RUN pip install -r requirements.txt
# ENV (rather than RUN export) so the variable persists into the running container
ENV MYFLASKAPP_SECRET=makethewebsite
CMD python manage.py server
The IP of my docker machine is:
docker-machine ip
192.168.99.100
But when I try
http://192.168.99.100:5000/
in my browser, it just says that the site cannot be reached. It seems like the connection is being refused.
When I ping my database in the browser, I can see the database respond in a log:
http://192.168.99.100:5432/
So I tried wget inside the container and got
$ docker exec 3bb5246a0623 wget http://localhost:5000/
--2016-07-23 05:25:16-- http://localhost:5000/
Resolving localhost (localhost)... ::1, 127.0.0.1
Connecting to localhost (localhost)|::1|:5000... failed: Connection refused.
Connecting to localhost (localhost)|127.0.0.1|:5000... connected.
HTTP request sent, awaiting response... 200 OK
Length: 34771 (34K) [text/html]
Saving to: ‘index.html.1’
0K .......... .......... .......... ... 100% 5.37M=0.006s
2016-07-23 05:25:16 (5.37 MB/s) - ‘index.html.1’ saved [34771/34771]
Does anyone know how I can get my web application to show up in my browser?
I had to enable external visibility for my Flask application. You can see it here: Can't connect to Flask web service, connection refused
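In short, the Flask development server binds to 127.0.0.1 by default, which is unreachable from outside the container even when the port is published. A minimal sketch of the fix (how the project's manage.py server command starts Flask is not shown in the question, so the run call here is an assumption):

from flask import Flask

app = Flask(__name__)

if __name__ == "__main__":
    # 0.0.0.0 binds all interfaces, so the published port 5000 is reachable
    # from outside the container; the default 127.0.0.1 is container-local
    app.run(host="0.0.0.0", port=5000)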
