I have set up SSH port forwarding that listens on port 2000 on my local machine, so the service, a Neo4j server, is reachable at localhost:2000. Locally, I can connect to it from Neo4j Desktop with bolt://localhost:2000 and browse the data.
But I am not able to connect to the host with bolt://localhost:2000 from inside a Docker container.
I looked at the answers here
From inside of a Docker container, how do I connect to the localhost of the machine?
I added extra_hosts in my docker-compose.yml:
flask_service:
  build:
    context: ./project
    dockerfile: my_dockerfile
  container_name: flask_container
  stdin_open: true
  tty: true
  ports:
    - 5000:5000
  extra_hosts:
    - "myhost:175.1.344.136"
175.1.344.136 being my host IP.
Inside the container I have used both bolt://175.1.344.136:2000 and bolt://myhost:2000, but neither connects. I would also like to know which is the right form: bolt://175.1.344.136:2000 or bolt://myhost:2000.
I get this error:
2021-05-25T10:53:55+0000.060 [INFO] neo4j_proxy.__init__:74 (8:Thread-11) - NEO4J endpoint: bolt://myhost:2000
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/neobolt/direct.py", line 831, in _connect
s.connect(resolved_address)
ConnectionRefusedError: [Errno 111] Connection refused
I am using macOS. Please help me resolve this.
Thanks in advance!
You need to use the container service name neo4j and the default bolt port 7687 when connecting from inside the same Docker host:
bolt://neo4j:7687
version: "3.8"
services:
neo4j:
image: neo4j:3.4.4-enterprise
ports:
- 7474:7474
- 7687:7687
my_service:
build:
context: .
environment:
DATABASE_URL: "bolt://neo4j:7687"
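For completeness, a minimal connection sketch with the official Neo4j Python driver, assuming the compose file above (the neo4j/password credentials are placeholders, not values from the question):

from neo4j import GraphDatabase

# Use the compose service name "neo4j" as the host; the credentials here
# are placeholders for whatever the Neo4j instance is configured with.
driver = GraphDatabase.driver("bolt://neo4j:7687", auth=("neo4j", "password"))
with driver.session() as session:
    print(session.run("RETURN 1 AS ok").single()["ok"])
driver.close()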
Related
I have a Mosquitto MQTT broker running in Docker, started from a Docker Compose file. Now I am trying to connect to the broker: it was working locally, but when I try to connect from another Docker container it does not work, although I have changed the host/broker address from localhost to the Compose service name. How can I make it work?
Here is what I have tried.
Docker Compose (edited):
version: '3.5'
services:
  db:
    image: postgres
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    volumes:
      - pgdatapg:/var/lib/postgresql/data
    ports:
      - "5432:5432"
    networks:
      - postgres
    restart: unless-stopped
  mosquitto:
    image: eclipse-mosquitto
    networks:
      - postgres
    ports:
      - "1883:1883"
    volumes:
      - ./conf:/mosquitto/conf
      - ./data:/mosquitto/data
      - ./log:/mosquitto/log
  app:
    restart: always
    build: .
    depends_on:
      - db
    networks:
      - postgres
networks:
  postgres:
    driver: bridge
volumes:
  pgdatapg:
and part of my Python code:
from paho.mqtt import client as mqtt_client

broker = "mosquitto"
port = 1883
topic = "py/mqtt/test"
client_id = "py-mqtt-test"  # placeholder id; not defined in the original snippet

def connect_mqtt():
    def on_connect(client, userdata, flags, rc):
        if rc == 0:
            print("Connected to MQTT Broker!")
        else:
            print("Failed to connect, return code %d\n", rc)

    client = mqtt_client.Client(client_id)
    client.on_connect = on_connect
    client.connect(broker, port)
    return client
Here is the conf file
persistence true
persistence_location /mosquitto/data/
log_dest file /mosquitto/log/mosquitto.log
listener 1883
## Authentication ##
allow_anonymous false
password_file /mosquitto/conf/mosquitto.conf
I am getting the following error:
| ConnectionRefusedError: [Errno 111] Connection refused
When running with Docker Compose, the containers started as services are by default placed on a dedicated bridge network named after the project (which defaults to the name of the directory holding the docker-compose.yml file), e.g. a network called foo_default.
https://docs.docker.com/compose/networking/
Services are only accessible from other containers connected to the same network (and from the host via whatever ports are published).
So if you only have mosquitto in the docker-compose.yml, no other containers will be able to connect to it. If you include the container your Python code runs in as a service in the same compose file, it will be able to connect.
You can also change the networks containers connect to in the compose file.
https://docs.docker.com/compose/networking/#specify-custom-networks
EDIT:
You have forced the mosquitto service to use network_mode: host, so it is not on the same postgres network as app. Containers can be on multiple networks, but mosquitto must not be bound to the host network for all of this to work.
EDIT2:
You are also not setting a username/password in app even though you have required authentication in mosquitto.conf, and you are pointing password_file at the config file itself, which just won't work. I suggest you remove the last line of mosquitto.conf and set allow_anonymous true.
P.S. I suspect the mosquitto container currently isn't actually starting at all, due to that last line of the config file.
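If you do keep authentication enabled, the client also has to send credentials, which the code in the question never does. A minimal sketch with paho-mqtt (the username, password and client id are placeholders; a real password file would be created with mosquitto_passwd and referenced by password_file):

from paho.mqtt import client as mqtt_client

# "mosquitto" is the compose service name; user/password are placeholders
# matching whatever was added to the broker's password file.
client = mqtt_client.Client("py-mqtt-test")
client.username_pw_set("my_user", "my_password")
client.connect("mosquitto", 1883)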
I am trying to run a Selenium script written in Python inside a Docker container via Selenium Grid. Unfortunately I can't manage to configure the remote webdriver.
This is the Docker Compose file:
version: "3"
services:
chrome:
image: selenium/node-chrome:4.1.3-20220327
shm_size: 2gb
depends_on:
- selenium-hub
environment:
- SE_EVENT_BUS_HOST=selenium-hub
- SE_EVENT_BUS_PUBLISH_PORT=4442
- SE_EVENT_BUS_SUBSCRIBE_PORT=4443
firefox:
image: selenium/node-firefox:4.1.3-20220327
shm_size: 2gb
depends_on:
- selenium-hub
environment:
- SE_EVENT_BUS_HOST=selenium-hub
- SE_EVENT_BUS_PUBLISH_PORT=4442
- SE_EVENT_BUS_SUBSCRIBE_PORT=4443
selenium-hub:
image: selenium/hub:4.1.3-20220327
container_name: selenium-hub
ports:
- "4444:4444"
python-script:
build: .
This is the webdriver setup within the Python code:
from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities

driver = webdriver.Remote(
    desired_capabilities=DesiredCapabilities.FIREFOX,
    command_executor="http://localhost:4444/wd/hub"
)
It works when I run the Python script locally with these settings. But as soon as I start it inside a Docker container, I get the following error, among others:
urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=4444): Max retries exceeded with url: /session (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f7b85c41780>: Failed to establish a new connection: [Errno 111] Connection refused'))
I'm totally new to Docker and also quite new to programming itself, so any help would be very welcome.
Thank you!
TL;DR: Try this:
driver = webdriver.Remote(
    desired_capabilities=DesiredCapabilities.FIREFOX,
    command_executor="http://selenium-hub:4444/wd/hub"
)
The reason it works locally from VS Code is that localhost points to your local machine. Your Docker container has its own idea of what localhost means: when the code runs inside the container, localhost refers to that container.
Is that container listening on that port? Probably not, and that is why it doesn't work. Docker has its own network stack!
What you want to contact is the other container, selenium-hub. In Docker, the service name (or container name) becomes the hostname, but this only works from within the Docker network. (Docker Compose creates a default network for you if you don't specify one.)
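One more pitfall worth hedging against: depends_on (and python-script has none here at all) only orders container startup, so the script may reach the hub before it accepts sessions, which also surfaces as Connection refused. A minimal retry sketch under that assumption:

import time
from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities

def create_driver(retries=10, delay=3):
    # The hub can take a while to start; retry instead of failing immediately.
    for attempt in range(retries):
        try:
            return webdriver.Remote(
                desired_capabilities=DesiredCapabilities.FIREFOX,
                command_executor="http://selenium-hub:4444/wd/hub",
            )
        except Exception:
            if attempt == retries - 1:
                raise
            time.sleep(delay)

driver = create_driver()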
I am trying to run integration tests (in Python) which depend on MySQL. Currently they depend on MySQL running locally, but I want them to depend on a MySQL instance running in Docker.
Contents of Dockerfile:
FROM continuumio/anaconda3:4.3.1
WORKDIR /opt/workdir
ADD . /opt/workdir
RUN python setup.py install
Contents of Docker Compose:
version: '2'
services:
  mysql:
    image: mysql:5.6
    container_name: test_mysql_container
    environment:
      - MYSQL_ROOT_PASSWORD=test
      - MYSQL_DATABASE=My_Database
      - MYSQL_USER=my_user
      - MYSQL_PASSWORD=my_password
    volumes:
      - db_data:/var/lib/mysql
    restart: always
    expose:
      - "3306"
  my_common_package:
    image: my_common_package
    depends_on:
      - mysql
    restart: always
    links:
      - mysql
volumes:
  db_data:
Now, I try to run the tests in my package using:
docker-compose run my_common_package python testsql.py
and I receive the error
pymysql.err.OperationalError: (2003, "Can't connect to MySQL server on
'localhost' ([Errno 99] Cannot assign requested address)")
docker-compose will by default create a virtual network where all the containers/services in the compose file can reach each other by IP address. By using links, depends_on or network aliases they can reach each other by hostname. In your case the hostname is the service name, but this can be overridden. (see: docs)
Your script in the my_common_package container/service should then connect to mysql on port 3306, according to your setup (not to localhost on port 3306).
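For example, a minimal pymysql sketch using the credentials from the compose file above; note the host is the service name, not localhost (outside of Docker, with the port published, you would use 127.0.0.1 instead):

import pymysql

# Host is the compose service name; user/password/database come from the
# environment section of the mysql service above.
connection = pymysql.connect(
    host="mysql",
    port=3306,
    user="my_user",
    password="my_password",
    database="My_Database",
)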
Also note that using expose is only necessary if the Dockerfile for the service doesn't have an EXPOSE statement. The standard mysql image already does this.
If you want to map a container port to localhost you need to use ports, but only do this if it's necessary.
services:
  mysql:
    image: mysql:5.6
    container_name: test_mysql_container
    environment:
      - MYSQL_ROOT_PASSWORD=test
      - MYSQL_DATABASE=My_Database
      - MYSQL_USER=my_user
      - MYSQL_PASSWORD=my_password
    volumes:
      - db_data:/var/lib/mysql
    ports:
      - "3306:3306"
Here we are saying that port 3306 in the mysql container should be mapped to localhost on port 3306.
Now you can connect to mysql using localhost:3306 outside of docker. For example you can try to run your testsql.py locally (NOT in a container).
Container to container communication will always happen using the host name of each container. Think of containers as virtual machines.
You can even find the network docker-compose created using docker network list:
NETWORK ID     NAME                DRIVER    SCOPE
1b1a54630639   myproject_default   bridge    local
82498fd930bb   bridge              bridge    local
.. then use docker network inspect <id> to look at the details.
The IP addresses assigned to containers can be fairly random, so the only viable way for container-to-container communication is using hostnames.
This question already has an answer here:
Connect to a PostgreSQL database on a Docker container
I am fairly new to Docker and I do not know what's causing my Python script not to run in Docker.
Here is how I am creating my docker-compose.yml:
version: "3.6"
services:
app :
build: ./app/
db:
build: ./database/
Here is the error:
File "/usr/local/lib/python3.9/site-packages/psycopg2/__init__.py", line 127, in connect
app_1 | conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
app_1 | sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) could not connect to server: Connection refused
app_1 | Is the server running on host "127.0.0.1" and accepting
app_1 | TCP/IP connections on port 5432?
Upon running docker-compose ps:
Name Command State Ports
------------------------------------------------------------------------------------
542132_app_final_db_1 docker-entrypoint.sh postgres Up 5432/tcp
app_1 python abc ... Exit 1
How do I solve it? Please help. I am fairly new to Docker/Docker-compose. Thanks!
I suppose you did not configure the Docker network as a host network (where containers live on the same network interface as the host machine). You either need to connect the containers via a link in the docker-compose.yml file, or you need to put the containers into a custom network (not the default one). I just read this is the default, so you might not need the link.
Furthermore, you need your Python app to connect to the hostname of the database, i.e. the linked name.
In your case you need to take the following measures for the standard solution:
Configure connectivity between app and db
version: "3.6"
services:
app :
build: ./app/
links:
- db:database
db:
build: ./database/
Configure a proper DB connection in the app.
In your app you need to connect to database:5432, not to localhost:5432.
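For example, a minimal SQLAlchemy/psycopg2 sketch (the user, password and database name are placeholders, since whatever ./database/ configures is not shown in the question):

from sqlalchemy import create_engine, text

# "database" is the link alias from the compose file above; the credentials
# and database name are placeholders.
engine = create_engine("postgresql+psycopg2://postgres:postgres@database:5432/postgres")
with engine.connect() as conn:
    print(conn.execute(text("SELECT 1")).scalar())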
Alternate solutions
use networks to define a custom network, so you have DNS resolution and can use db:5432 without the link
use the host network (docker run --network host); in Compose this is network_mode: host
See https://docs.docker.com/compose/networking/
I am working on a local Django web server at http://localhost:8000, which works fine.
Meanwhile I need ngrok to do the port forwarding, ngrok http 8000, which works fine too.
Then I wanted to put ngrok, postgres, redis, maildev, etc. all in Docker containers; all the others work fine, except ngrok.
ngrok fails to connect to localhost:8000.
I understand why: I suppose it is because ngrok is running on a separate 'server', and localhost on that server has no web server running.
I am wondering how I can fix it.
I tried network_mode: "host" in my docker-compose file, but it is not working (macOS).
I tried to use host.docker.internal, but as I am a free-plan user, ngrok does not allow me to specify a hostname.
Any help is appreciated! Thanks.
Here is my docker-compose file:
ngrok:
  image: wernight/ngrok
  ports:
    - '4040:4040'
  environment:
    - NGROK_PORT=8000
    - NGROK_AUTH=${NGROK_AUTH_TOKEN}
  network_mode: "host"
UPDATE:
Stripe has a new tool, stripe-cli, which can do the same thing.
Just do as below:
stripe-cli:
  image: stripe/stripe-cli
  command: >
    listen --api-key $STRIPE_SECRET_KEY
    --load-from-webhooks-api
    --forward-to host.docker.internal:8000/api/webhook/
I ended up getting rid of ngrok and using Serveo instead to solve the problem.
Here is the code, in case anyone runs into the same problem:
serveo:
  image: taichunmin/serveo
  tty: true
  stdin_open: true
  command: "ssh -o ServerAliveInterval=60 -R 80:host.docker.internal:8000 -o \"StrictHostKeyChecking no\" serveo.net"
I was able to get it to work by doing the following:
Instruct Django to bind to all interfaces on port 8000, so other containers can reach it, with the following command: python manage.py runserver 0.0.0.0:8000
Instruct ngrok to connect to the web Docker service in my docker-compose file by passing in web:8000 as the NGROK_PORT environment variable.
I've pasted truncated versions of my settings below.
docker-compose.yml:
version: '3.7'
services:
ngrok:
image: wernight/ngrok
depends_on:
- web
env_file:
- ./ngrok/.env
ports:
- 4040:4040
web:
build:
context: ./app
dockerfile: Dockerfile.dev
command: python manage.py runserver 0.0.0.0:8000
env_file:
- ./app/django-project/settings/.env
ports:
- 8000:8000
volumes:
- ./app/:/app/
And here is the env file referenced above (i.e. ./ngrok/.env):
NGROK_AUTH=your-auth-token-here
NGROK_DEBUG=1
NGROK_PORT=web:8000
NGROK_SUBDOMAIN=(optional)-your-subdomain-here
You can leave out the subdomain and auth fields. I figured this out by looking through their Docker entrypoint.