Execute Selenium Python Script within Docker

I am trying to run a Selenium script written in Python inside a Docker container via Selenium Grid. Unfortunately I can't manage to configure the remote webdriver.
This is the Docker Compose file:
version: "3"
services:
chrome:
image: selenium/node-chrome:4.1.3-20220327
shm_size: 2gb
depends_on:
- selenium-hub
environment:
- SE_EVENT_BUS_HOST=selenium-hub
- SE_EVENT_BUS_PUBLISH_PORT=4442
- SE_EVENT_BUS_SUBSCRIBE_PORT=4443
firefox:
image: selenium/node-firefox:4.1.3-20220327
shm_size: 2gb
depends_on:
- selenium-hub
environment:
- SE_EVENT_BUS_HOST=selenium-hub
- SE_EVENT_BUS_PUBLISH_PORT=4442
- SE_EVENT_BUS_SUBSCRIBE_PORT=4443
selenium-hub:
image: selenium/hub:4.1.3-20220327
container_name: selenium-hub
ports:
- "4444:4444"
python-script:
build: .
This is the webdriver setup within the Python code:
driver = webdriver.Remote(
    desired_capabilities=DesiredCapabilities.FIREFOX,
    command_executor="http://localhost:4444/wd/hub"
)
It works when I run the Python script locally with these settings, but as soon as I start it inside a Docker container, I get the following error, among others:
urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=4444): Max retries exceeded with url: /session (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f7b85c41780>: Failed to establish a new connection: [Errno 111] Connection refused'))
I'm totally new to docker and also quite new to programming itself, so help would be very very nice.
Thank you!

TL;DR: Try this:
driver = webdriver.Remote(
    desired_capabilities=DesiredCapabilities.FIREFOX,
    command_executor="http://selenium-hub:4444/wd/hub"
)
The reason it works locally from VS Code is that localhost points to your local machine. Your Docker container has its own idea of what localhost means: when the code runs inside the container, localhost refers to that container.
Is that container listening on port 4444? Probably not, and this is why it doesn't work. Docker has its own network stack!
What you want to reach is the other container, "selenium-hub". In Docker, the service name (or container name) becomes the hostname, but this only works from within the Docker network. (Docker Compose creates a default network for you if you don't specify one.)
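As a side note, Selenium 4 deprecates desired_capabilities in favor of browser options. A minimal sketch of the same connection in that style (assuming the selenium-hub service from the compose file above):
from selenium import webdriver

# "selenium-hub" resolves via Docker's embedded DNS on the compose network
options = webdriver.FirefoxOptions()
driver = webdriver.Remote(
    command_executor="http://selenium-hub:4444/wd/hub",
    options=options,
)
driver.get("https://example.com")
print(driver.title)
driver.quit()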

Related

How to connect to remote Selenium drivers within the same docker-compose?

While running Selenium via Python and Docker, I ran into:
urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='chromedriver', port=4444): Max retries exceeded with url: /wd/hub/session (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fc2de559bb0>: Failed to establish a new connection: [Errno 111] Connection refused'))
My connection looks like this:
self.driver = webdriver.Remote(
    command_executor='http://chromedriver:4444/wd/hub',
    options=options
)
The docker-compose file looks like this:
...
chromedriver:
  image: selenium/standalone-chrome
  ports:
    - "4444:4444"
  hostname: chromedriver
  shm_size: 2g
runner:
  image: "kevoooo/twitchfarm-runner:latest"
  entrypoint: "python3 /py-scripts/main.py"
  healthcheck:
    test: python3 /py-scripts/main.py
    interval: 30s
    timeout: 10s
    retries: 5
  environment:
    - DISPLAY=127.0.0.1
    - USER=uname
    - PASS=pass
    - 2FA_KEY=key
  volumes:
    - "chrome-data:/saves/google-chrome"
  depends_on:
    - chromedriver
...
Thanks in advance!
According to https://docs.docker.com/compose/startup-order/, depends_on only controls the order of service startup; it does not know when a container is ready.
Readiness is ultimately defined by the client: it is up to the application that uses the containerized service to handle service disruption (whether caused by the service not having started completely, or by any fault after it has started).
So your sleep-based solution below makes sense, although it is rough. A better choice would be a script that polls an endpoint until success is returned and only then proceeds with your test code.
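A minimal polling sketch, assuming the chromedriver service name from the compose file above and the status endpoint that Selenium standalone exposes at /wd/hub/status:
import json
import time
import urllib.request

def wait_for_selenium(url="http://chromedriver:4444/wd/hub/status", timeout=60):
    # Poll the status endpoint until it reports ready, or give up.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url) as resp:
                if json.load(resp).get("value", {}).get("ready"):
                    return
        except OSError:
            pass  # not reachable yet; keep waiting
        time.sleep(1)
    raise TimeoutError(f"Selenium not ready at {url} after {timeout}s")

wait_for_selenium()
# ...now it is safe to create the Remote webdriver session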
I solved it by changing the entrypoint of "runner" to:
entrypoint: bash -c "sleep 10 && python3 /py-scripts/main.py"
I had thought that was handled by the depends_on clause.
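For completeness: newer Compose versions (the Compose Specification, or file format 2.1+) can gate startup on a healthcheck instead of a fixed sleep. A sketch, assuming curl is available in the standalone-chrome image:
chromedriver:
  image: selenium/standalone-chrome
  healthcheck:
    test: ["CMD", "curl", "-sf", "http://localhost:4444/wd/hub/status"]
    interval: 5s
    timeout: 5s
    retries: 10
runner:
  depends_on:
    chromedriver:
      condition: service_healthy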

Can't connect python (Flask) with Mysql container using docker-compose [duplicate]

I am trying to run integration tests (in Python) which depend on MySQL. Currently they depend on MySQL running locally, but I want them to depend on a MySQL instance running in Docker.
Contents of Dockerfile:
FROM continuumio/anaconda3:4.3.1
WORKDIR /opt/workdir
ADD . /opt/workdir
RUN python setup.py install
Contents of Docker Compose:
version: '2'
services:
mysql:
image: mysql:5.6
container_name: test_mysql_container
environment:
- MYSQL_ROOT_PASSWORD=test
- MYSQL_DATABASE=My_Database
- MYSQL_USER=my_user
- MYSQL_PASSWORD=my_password
volumes:
- db_data:/var/lib/mysql
restart: always
expose:
- "3306"
my_common_package:
image: my_common_package
depends_on:
- mysql
restart: always
links:
- mysql
volumes:
db_data:
Now, I try to run the tests in my package using:
docker-compose run my_common_package python testsql.py
and I receive the error
pymysql.err.OperationalError: (2003, "Can't connect to MySQL server on 'localhost' ([Errno 99] Cannot assign requested address)")
docker-compose will by default create a virtual network where all the containers/services in the compose file can reach each other by IP address. By using links, depends_on, or network aliases, they can also reach each other by hostname. In your case the hostname is the service name, but this can be overridden (see: docs).
Your script in the my_common_package container/service should therefore connect to mysql on port 3306 according to your setup (not localhost on port 3306).
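For illustration, a minimal connection sketch matching the compose file above (using pymysql, to match the error in the question):
import pymysql

# "mysql" is the compose service name; Docker's embedded DNS resolves it
# to the mysql container's IP on the compose network.
connection = pymysql.connect(
    host="mysql",
    port=3306,
    user="my_user",
    password="my_password",
    database="My_Database",
)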
Also note that using expose is only necessary if the Dockerfile for the service doesn't have an EXPOSE statement. The standard mysql image already does this.
If you want to map a container port to localhost you need to use ports, but only do this if it's necessary:
services:
  mysql:
    image: mysql:5.6
    container_name: test_mysql_container
    environment:
      - MYSQL_ROOT_PASSWORD=test
      - MYSQL_DATABASE=My_Database
      - MYSQL_USER=my_user
      - MYSQL_PASSWORD=my_password
    volumes:
      - db_data:/var/lib/mysql
    ports:
      - "3306:3306"
Here we are saying that port 3306 in the mysql container should be mapped to port 3306 on localhost.
Now you can connect to mysql using localhost:3306 outside of Docker. For example, you can try running your testsql.py locally (NOT in a container).
Container-to-container communication will always happen using the hostname of each container. Think of containers as virtual machines.
You can even find the network docker-compose created using docker network list:
1b1a54630639 myproject_default bridge local
82498fd930bb bridge bridge local
Then use docker network inspect <id> to look at the details.
The IP addresses assigned to containers can be fairly random, so the only viable way for container-to-container communication is via hostnames.

Connect with service running on host from docker container

I have done SSH port forwarding which listens on port 2000 locally, so the service, a Neo4j server, is reachable at localhost:2000. On my local machine I am able to connect to Neo4j with Neo4j Desktop by giving bolt://localhost:2000 and can look at the data.
But I am not able to connect to the host via bolt://localhost:2000 from inside a Docker container.
I looked at the answers here:
From inside of a Docker container, how do I connect to the localhost of the machine?
I added extra_hosts to my docker-compose.yml:
flask_service:
  build:
    context: ./project
    dockerfile: my_dockerfile
  container_name: flask_container
  stdin_open: true
  tty: true
  ports:
    - 5000:5000
  extra_hosts:
    - "myhost:175.1.344.136"
175.1.344.136 being my host IP.
I have used both bolt://175.1.344.136:2000 and bolt://myhost:2000 inside the container, but neither connects. I would also like to know which is the right form: bolt://175.1.344.136:2000 or bolt://myhost:2000.
I get this error:
2021-05-25T10:53:55+0000.060 [INFO] neo4j_proxy.__init__:74 (8:Thread-11) - NEO4J endpoint: bolt://myhost:2000
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/neobolt/direct.py", line 831, in _connect
    s.connect(resolved_address)
ConnectionRefusedError: [Errno 111] Connection refused
I am using macOS. Please help me resolve this.
Thanks in advance!
You need to use the container service name neo4j and the default Bolt port 7687 when connecting from within the same Docker network:
bolt://neo4j:7687
version: "3.8"
services:
neo4j:
image: neo4j:3.4.4-enterprise
ports:
- 7474:7474
- 7687:7687
my_service:
build:
context: .
environment:
DATABASE_URL: "bolt://neo4j:7687"
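For illustration, a minimal connection sketch from inside my_service, using the official neo4j Python driver (the driver choice and credentials here are assumptions, not from the question):
from neo4j import GraphDatabase

# "neo4j" is the compose service name; 7687 is the default Bolt port.
driver = GraphDatabase.driver("bolt://neo4j:7687", auth=("neo4j", "password"))
with driver.session() as session:
    print(session.run("RETURN 1 AS ok").single()["ok"])
driver.close()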

ngrok in docker cannot connect to Django development server

I am working on a local Django web server, http://localhost:8000, which works fine.
Meanwhile I need ngrok to do the port forwarding, ngrok http 8000, which works fine too.
Then I want to put ngrok, postgres, redis, maildev, etc. all in Docker containers. Everything else works fine, except ngrok: it fails to connect to localhost:8000.
I understand why, I suppose: ngrok is running on a separate 'server', and localhost on that server does not have a web server running.
I am wondering how I can fix it.
I tried network_mode: "host" in my docker-compose file, but it is not working (macOS).
I tried to use host.docker.internal, but as a free-plan user, ngrok does not allow me to specify a hostname.
Any help is appreciated! Thanks.
Here is my docker-compose file:
ngrok:
  image: wernight/ngrok
  ports:
    - '4040:4040'
  environment:
    - NGROK_PORT=8000
    - NGROK_AUTH=${NGROK_AUTH_TOKEN}
  network_mode: "host"
UPDATE:
Stripe has a new tool, stripe-cli, which can do the same thing. Just do as below:
stripe-cli:
  image: stripe/stripe-cli
  command: listen --api-key $STRIPE_SECRET_KEY
    --load-from-webhooks-api
    --forward-to host.docker.internal:8000/api/webhook/
I ended up getting rid of ngrok, using serveo instead to solve the problem.
Here is the code, in case anyone runs into the same problem:
serveo:
  image: taichunmin/serveo
  tty: true
  stdin_open: true
  command: "ssh -o ServerAliveInterval=60 -R 80:host.docker.internal:8000 -o \"StrictHostKeyChecking no\" serveo.net"
I was able to get it to work by doing the following:
Instruct Django to bind to all interfaces, not just localhost, with the following command: python manage.py runserver 0.0.0.0:8000 (otherwise other containers cannot reach it).
Instruct ngrok to connect to the web Docker service in my docker compose file by passing in web:8000 as the NGROK_PORT environment variable.
I've pasted truncated versions of my settings below.
docker-compose.yml:
version: '3.7'
services:
  ngrok:
    image: wernight/ngrok
    depends_on:
      - web
    env_file:
      - ./ngrok/.env
    ports:
      - 4040:4040
  web:
    build:
      context: ./app
      dockerfile: Dockerfile.dev
    command: python manage.py runserver 0.0.0.0:8000
    env_file:
      - ./app/django-project/settings/.env
    ports:
      - 8000:8000
    volumes:
      - ./app/:/app/
And here is the env file referenced above (i.e. ./ngrok/.env):
NGROK_AUTH=your-auth-token-here
NGROK_DEBUG=1
NGROK_PORT=web:8000
NGROK_SUBDOMAIN=(optional)-your-subdomain-here
You can leave out the subdomain and auth fields. I figured this out by looking through their Docker entrypoint.

Sending ddtrace from docker

I'm trying to learn how to use Docker and am having some trouble. I'm using a docker-compose.yaml file to run a Python script that connects to a MySQL container, and I'm trying to use ddtrace to send traces to Datadog. I'm using the following image from Datadog's GitHub page:
ddagent:
  image: datadog/docker-dd-agent
  environment:
    - DD_BIND_HOST=0.0.0.0
    - DD_API_KEY=invalid_key_but_this_is_fine
  ports:
    - "127.0.0.1:8126:8126"
And my docker-compose.yaml looks like:
version: "3"
services:
ddtrace-test:
build: .
volumes:
- ".:/app"
links:
- ddagent
ddagent:
image: datadog/docker-dd-agent
environment:
- DD_BIND_HOST=0.0.0.0
- DD_API_KEY=<my key>
ports:
- "127.0.0.1:8126:8126"
So then I'm running the command docker-compose run --rm ddtrace-test python test.py, where test.py looks like:
from ddtrace import tracer

@tracer.wrap('test', 'test')
def foo():
    print('running foo')

foo()
And when I run the command, I get:
Starting service---reprocess_ddagent_1 ... done
foo
cannot send spans to localhost:8126: [Errno 99] Cannot assign requested address
I'm not sure what this error means. When I use my key and run locally instead of through a Docker image, it works fine. What could be going wrong here?
Containers are about isolation, so inside a container "localhost" means the container itself; ddtrace-test therefore cannot find ddagent via localhost. You have two ways to fix that:
Put network_mode: host in ddtrace-test so it binds to the host's network interface, skipping network isolation.
Change ddtrace-test to use the "ddagent" host instead of localhost, since in docker-compose services can be reached using their names.
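For the second option, a minimal sketch (tracer.configure is how older ddtrace versions point at a non-local agent; newer versions also honor the DD_AGENT_HOST environment variable):
from ddtrace import tracer

# "ddagent" is the compose service name; 8126 is the trace agent's port.
tracer.configure(hostname='ddagent', port=8126)

@tracer.wrap('test', 'test')
def foo():
    print('running foo')

foo()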
