I recently developed a Python script to retrieve medical images from Open-I. The script worked locally, but now that I have put it in a Docker container it errors when I try to retrieve an image. The output in the console only says: error. Can someone please explain what I am doing wrong?
import urllib
urllib.urlretrieve('https://openi.nlm.nih.gov/'+record['imgLarge'], path+str(filename))
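One likely culprit, hedged since the question does not show a traceback: urllib.urlretrieve only exists in Python 2; if the container image runs Python 3, the call raises AttributeError (which a bare except could reduce to just "error"). The Python 3 spelling, with record, path and filename as in the snippet above:

import urllib.request

url = 'https://openi.nlm.nih.gov/' + record['imgLarge']
urllib.request.urlretrieve(url, path + str(filename))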
Docker compose file:
version: '3.2'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
    tty: true
    networks:
      - retriever
  kafka:
    image: wurstmeister/kafka:latest
    ports:
      - target: 9094
        published: 9094
        protocol: tcp
        mode: host
    environment:
      HOSTNAME_COMMAND: "docker info | grep ^Name: | cut -d' ' -f 2"
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_HOST_NAME: kafka
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: INSIDE://:9092,OUTSIDE://_{HOSTNAME_COMMAND}:9094
      KAFKA_LISTENERS: INSIDE://:9092,OUTSIDE://:9094
      KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE
      KAFKA_CREATE_TOPICS: "rontgen_images"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - retriever
    tty: true
  python:
    build:
      context: .
      dockerfile: DockerfilePython
    tty: true
    networks:
      - retriever
networks:
  retriever:
    driver: bridge
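Given that compose file, one quick check is which Python the container actually runs, since a Python 2 script under a Python 3 image fails in exactly this way; a sketch using the python service name from above:

docker-compose run --rm python python -c "import sys; print(sys.version)"

If this prints a 3.x version while the script was written against Python 2's urllib, that mismatch is the problem.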
Related
I'm trying to build my services; however, when I use docker-compose up --build I receive the following:
ERROR [internal] load metadata for docker.io/library/python:3.10-alpine 0.9s
=> [auth] library/python:pull token for registry-1.docker.io 0.0s
[internal] load metadata for docker.io/library/python:3.10-alpine:
failed to solve with frontend dockerfile.v0: failed to create LLB definition: no match for platform in manifest sha256:075028375723487287022732372384723874283782348237837: not found
This issue appeared after I updated my Mac to Ventura 13.0.1; from what I gathered, it appears to be the OS causing it. I have two services (database and api): the db can spin up, but my api returns the issue above.
I've tried:
docker-compose up --build
docker pull docker.io/library/python:3.10-alpine
docker ps -a
making sure the Docker daemon was running
clearing my images
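One more check that can help with "no match for platform in manifest" (a sketch, not something tried above): listing the platforms the tag actually provides, e.g. with buildx:

docker buildx imagetools inspect python:3.10-alpine

The output enumerates the manifest's platform entries (linux/amd64, linux/arm64/v8, ...), which shows whether the platform the build requests exists at all.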
Dockerfile
FROM python:3.10-alpine
docker-compose.yml
version: '3.9'
services:
  my_db:
    image: postgres:14-alpine
    container_name: my_db
    environment:
      POSTGRES_USER: "${DB_USERNAME}"
      POSTGRES_PASSWORD: "${DB_PASSWORD}"
      POSTGRES_DB: "${DB_NAME}"
    ports:
      - "127.0.0.1:5432:${DB_PORT}"
    volumes:
      - ./database:/var/lib/postgresql/data
    deploy:
      mode: global
      resources:
        limits:
          cpus: '1'
          memory: 128M
    pid: isolated
    security_opt:
      - no-new-privileges:true
    cap_drop:
      - NET_ADMIN
      - SYS_ADMIN
    networks:
      - db
  my_api:
    build:
      context: .
      dockerfile: Dockerfile
    depends_on:
      posts_db:
        condition: service_healthy
    container_name: my_api
    platform: linux/amd64/v8
    environment:
      CURRENT_ENVIRONMENT: local_docker
      DB_HOST: "${DB_HOST}"
      DB_PORT: "${DB_PORT}"
      DB_NAME: "${DB_NAME}"
      DB_USERNAME: "${DB_USERNAME}"
      DB_PASSWORD: "${DB_PASSWORD}"
    security_opt:
      - no-new-privileges:true
    deploy:
      mode: global
      endpoint_mode: vip
      resources:
        limits:
          cpus: '1'
          memory: 128M
    ports:
      - "127.0.0.1:8000:8000"
    volumes:
      - /app/database
      - ./:/app
    networks:
      - api
      - db
networks:
  api:
  db:
The answer I was searching for: adding --platform=linux/amd64 to my Dockerfile, as follows:
FROM --platform=linux/amd64 python:3.10-alpine
This issue was encountered after I updated my macOS to Ventura 13.0.1.
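If editing the Dockerfile is undesirable, the same pin can be expressed in the compose file itself; a sketch, assuming a Compose version that supports the service-level platform key:

services:
  my_api:
    build:
      context: .
      dockerfile: Dockerfile
    platform: linux/amd64  # rather than linux/amd64/v8

(The v8 variant exists for linux/arm64, not linux/amd64, which is consistent with the "no match for platform in manifest" message above.)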
I was hoping to get some insight into what I am missing; I am currently trying to run a docker-compose config with Python (walrus as the db wrapper) and a Redis image, but I keep receiving the same error:
redis.exceptions.ConnectionError: Error -2 connecting to redis://redis:6379. Name or service not known.
I tried different solutions from Stack Overflow to fix this, but nothing is working.
Here is the related docker-compose config:
version: '3.3'
services:
  redis:
    image: redis:latest
    container_name: redis
    ports:
      - "6379:6379"
    command: ["redis-server"]
    entrypoint: redis-server --appendonly yes
  consumers:
    build: ./consumers
    container_name: consumers
    environment:
      - REDIS_HOST=redis://redis
    command: "./run.sh"
    depends_on:
      - redis
    restart: always
    tty: true
networks:
  default:
    driver: bridge
Dockerfile:
FROM python:3.10
WORKDIR /consumers
# Copy Dependencies
COPY requirements.txt requirements.txt
COPY run.sh .
# Install Dependencies
RUN pip install -r requirements.txt
COPY . .
ENV REDIS_HOST=redis://redis
RUN chmod a+x run.sh
# Run consumer.py via the run.sh entrypoint script
CMD ["./run.sh"]
And the connection from Python to Redis using walrus:
rdb = Database(host=os.getenv("REDIS_HOST", "localhost"), port=6379)
Locally, without Docker, the setup works fine. Any direction in this case would be really appreciated. Thank you.
The following configuration made it work: I removed the entrypoint, created a new custom network, and exposed the port. REDIS_HOST was changed to redis, i.e. the container name. All of these together made it work; while trying only one of them, the problem persisted.
version: '3.5'
services:
  redis:
    image: redis:latest
    container_name: redis
    ports:
      - "6379:6379"
    expose:
      - 6379:6379
    command: ["redis-server"]
    networks:
      - connections
  consumers-g1:
    build: ./consumers
    container_name: consumers-g1
    environment:
      - REDIS_HOST=redis
    command: "./run.sh"
    expose:
      - 6379:6379
    networks:
      - connections
    restart: always
    tty: true
networks:
  connections:
    name: connections
    driver: bridge
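The piece that most likely mattered, hedged since walrus's Database wraps the redis-py client, whose host parameter expects a bare hostname: with REDIS_HOST=redis://redis the client attempts a DNS lookup of the literal string redis://redis, which is precisely the "Error -2 ... Name or service not known" above. A sketch of the two forms:

import os
from walrus import Database

# Works inside the compose network: "redis" is the service/container name
# and resolves via Docker's embedded DNS.
rdb = Database(host=os.getenv("REDIS_HOST", "redis"), port=6379)

# Fails: "redis://redis" is a URL, not a hostname, so resolution raises
# "Error -2 connecting to redis://redis:6379. Name or service not known."
# rdb = Database(host="redis://redis", port=6379)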
I am running Elasticsearch from docker-compose and I want to connect Python to it and test it. The docker-compose file has the following form:
version: '3.2'
services:
  elasticsearch:
    container_name: elasticsearch
    build:
      context: elasticsearch/
      args:
        ELK_VERSION: $ELK_VERSION
    volumes:
      - type: bind
        source: ./elasticsearch/config/elasticsearch.yml
        target: /usr/share/elasticsearch/config/elasticsearch.yml
        read_only: true
      - type: volume
        source: elasticsearch
        target: /usr/share/elasticsearch/data
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xmx256m -Xms256m"
      ELASTIC_PASSWORD: changeme
    links:
      - kibana
    networks:
      - elk
  logstash:
    container_name: logstash
    build:
      context: logstash/
      args:
        ELK_VERSION: $ELK_VERSION
    volumes:
      - type: bind
        source: ./logstash/config/logstash.yml
        target: /usr/share/logstash/config/logstash.yml
        read_only: true
      - type: bind
        source: ./logstash/pipeline
        target: /usr/share/logstash/pipeline
        read_only: true
    ports:
      - "5000:5000"
      - "9600:9600"
    expose:
      - "5044"
    networks:
      - elk
    depends_on:
      - elasticsearch
  kibana:
    container_name: kibana
    build:
      context: kibana/
      args:
        ELK_VERSION: $ELK_VERSION
    volumes:
      - type: bind
        source: ./kibana/config/kibana.yml
        target: /usr/share/kibana/config/kibana.yml
        read_only: true
    ports:
      - "5601:5601"
    networks:
      - elk
  app:
    container_name: app
    build: ./app
    volumes:
      - ./app/:/usr/src/app
      - /usr/src/app/node_modules/ # keep node_modules empty in the container
    command: npm start
    ports:
      - "3000:3000"
    networks:
      - elk
  nginx:
    container_name: nginx
    build: ./nginx
    volumes:
      - ./nginx/config:/etc/nginx/conf.d
      - ./nginx/log:/var/log/nginx
    ports:
      - "80:80"
      - "443:443"
    links:
      - app:app
    depends_on:
      - app
    networks:
      - elk
  filebeat:
    container_name: filebeat
    build: ./filebeat
    entrypoint: "filebeat -e -strict.perms=false"
    volumes:
      - ./filebeat/config/filebeat.yml:/usr/share/filebeat/filebeat.yml
      - ./nginx/log:/var/log/nginx
    networks:
      - elk
    depends_on:
      - app
      - nginx
      - logstash
      - elasticsearch
      - kibana
    links:
      - logstash
networks:
  elk:
    driver: bridge
volumes:
  elasticsearch:
In a Jupyter notebook I am using this very simple code to connect to Elasticsearch and test it:
from elasticsearch import Elasticsearch
es = Elasticsearch(['http://elasticsearch:9200/'])
if not es.ping():
    raise ValueError("Connection failed")
However, the result is ValueError: Connection failed. Is there a problem reaching localhost from outside Docker? I have also tried replacing elasticsearch with localhost in es = Elasticsearch(['http://elasticsearch:9200/']), but that failed as well.
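A couple of hedged checks, assuming the notebook runs on the host rather than in one of these containers: the hostname elasticsearch only resolves inside the compose elk network, so from the host the published port on localhost is the one to use, and since the compose file sets ELASTIC_PASSWORD, ping() can also fail on authentication alone. A sketch for the elasticsearch-py 7.x client:

from elasticsearch import Elasticsearch

# From the host: use the published port, and pass the credentials implied
# by ELASTIC_PASSWORD: changeme in the compose file (user "elastic").
es = Elasticsearch(['http://localhost:9200/'], http_auth=('elastic', 'changeme'))
if not es.ping():
    raise ValueError("Connection failed")

If the notebook itself runs in a container, that container has to be attached to the elk network for http://elasticsearch:9200/ to resolve.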
I am going to lose my mind: I've been stuck on this for two days and can't figure out why Python keeps using 127.0.0.1 instead of the host I have specified. My docker-compose snippet is:
# Use root/example as user/password credentials
version: '3.1'
services:
  mongo_service:
    image: mongo
    #command: --default-authentication-plugin=mysql_native_password
    command: mongo
    restart: always
    ports:
      - '27017:27017'
  cinemas_api:
    container_name: cinemas_api
    hostname: cinemas_api
    build:
      context: ./cinemas_api
      dockerfile: Dockerfile
    links:
      - mongo_service
    ports:
      - 5000:5000
    expose:
      - '5000'
    depends_on:
      - mongo_service
  booking_api:
    container_name: booking_api
    hostname: booking_api
    build:
      context: ./booking_api
      dockerfile: Dockerfile
    ports:
      - 5050:5000
    depends_on:
      - mongo_service
networks:
  db_net:
    external: true
#docker-compose -f docker-compose.yaml up --build
Then, in cinemas.py, I try to connect:
client = MongoClient('mongodb://mongo_service:27017/test')
However, I get the error
Error: couldn't connect to server 127.0.0.1:27017, connection attempt failed: SocketException: Error connecting to 127.0.0.1:27017 ::
caused by :: Connection refused :
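A hedged reading of the quoted error: it looks like the mongo shell's connection failure rather than PyMongo's. The compose file overrides the image's default command with command: mongo, which launches the interactive shell instead of the mongod server; the shell then tries 127.0.0.1:27017 inside its own container and is refused. A sketch of the service without that override (everything else taken from the compose file above):

services:
  mongo_service:
    image: mongo
    # no command: override — the image's default entrypoint starts mongod
    restart: always
    ports:
      - '27017:27017'

With mongod actually listening, MongoClient('mongodb://mongo_service:27017/test') should resolve the service name, since cinemas_api and mongo_service share the default network (neither is attached to db_net).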
I am trying to connect a Redis container to a Python app container using an environment variable. I passed the password as an environment variable, but it does not connect; if I hard-code the password instead of using the environment variable, it works fine. Otherwise it raises redis.exceptions.ConnectionError.
version: "3.7"
services:
nginx_app:
image: nginx:latest
depends_on:
- flask_app
volumes:
- ./default.conf:/etc/nginx/conf.d/default.conf
ports:
- 8090:80
networks:
- my_project_network
flask_app:
build:
context: .
dockerfile: Dockerfile
expose:
- 5000
environment:
- PASSWORD=pass123a
depends_on:
- redis_app
networks:
- my_project_network
redis_app:
image: redis:latest
command: redis-server --requirepass ${PASSWORD} --appendonly yes
environment:
- PASSWORD=pass123a
volumes:
- ./redis-vol:/data
expose:
- 6379
networks:
- my_project_network
networks:
my_project_network:
index.py
from flask import Flask
from redis import Redis
import os

app = Flask(__name__)
redis = Redis(host='redis_app', port=6379, password=os.getenv('PASSWORD'))

@app.route('/')
def hello():
    redis.incr('hits')
    return 'Hello World! I have been seen %s times.' % redis.get('hits')

if __name__ == "__main__":
    app.run(host="0.0.0.0", debug=True)
Update your docker-compose.yaml.
The environment key is a list of strings.
docker-compose interpolates ${ENV} at parse time; the value of ENV is loaded from the shell or from a .env file, so if it is not set there, the container starts Redis with an empty --requirepass.
Use:
command: redis-server --requirepass $PASSWORD --appendonly yes
Instead of:
command: redis-server --requirepass ${PASSWORD} --appendonly yes
You can verify the environment variable inside your container with:
docker-compose run --rm flask_app printenv | grep PASSWORD
That should return:
PASSWORD=pass123a
A docker-compose example for environment variables can be found in the Compose documentation on variable substitution.
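For the interpolation to supply a value at all, PASSWORD must be defined where docker-compose can see it, i.e. in the shell that runs it or in a .env file next to docker-compose.yaml — a minimal sketch (the .env file is hypothetical, not shown in the question):

# .env — placed in the same directory as docker-compose.yaml
PASSWORD=pass123a

Note that Compose substitutes both $PASSWORD and ${PASSWORD} from that file at parse time; to pass a literal dollar sign through to the container instead, the Compose escape is a doubled dollar ($$PASSWORD).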
It looks like you missed passing the environment variable to your Redis container. Try this:
version: "3.7"
services:
nginx_app:
image: nginx:latest
#LOCAL IMAGE
depends_on:
- flask_app
volumes:
- ./default.conf:/etc/nginx/conf.d/default.conf
ports:
- 8082:80
networks:
- my_project_network
flask_app:
build:
context: .
dockerfile: Dockerfile
expose:
- 5000
environment:
- PASSWORD=pass123a
depends_on:
- redis_app
networks:
- my_project_network
redis_app:
image: redis:latest
command: redis-server --requirepass ${PASSWORD} --appendonly yes
environment:
- PASSWORD=pass123a
volumes:
- ./redis-vol:/data
expose:
- 6379
networks:
- my_project_network
networks:
my_project_network: