I have a Waitress API that I am trying to make public in a Docker container, but I am receiving the following error:
return self.socket.bind(addr)
fleettracker_1 | OSError: [Errno 99] Cannot assign requested address
Here is my docker-compose.yml:
version: '3'
services:
  fleettracker:
    build: ./fleettracker
    ports:
      - "5000:5017"
    links:
      - db
    networks:
      - fullstack
  db:
    image: mysql
    ports:
      - "3306:3306"
networks:
  fullstack:
    driver: bridge
Here is my Dockerfile:
FROM python:3
WORKDIR /usr/src/app
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD [ "python", "waitress API.py" ]
and here is my Waitress API main:
if __name__ == "__main__":
    waitress.serve(app=app, host="Some IP", port=Someport)
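Errno 99 usually means the process tried to bind an IP address that the container's network namespace does not own; inside Docker, the server should bind 0.0.0.0 (all interfaces) and let the ports: mapping publish it. A minimal sketch of the distinction, assuming the container port 5017 from the compose file above (the probe function and the TEST-NET address are illustrative, not from the question):

```python
import socket

def can_bind(host: str) -> bool:
    """Try to bind a TCP socket to `host` on an OS-chosen free port."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind((host, 0))  # port 0: let the OS pick a free port
        return True
    except OSError:
        # e.g. OSError: [Errno 99] Cannot assign requested address
        return False
    finally:
        s.close()

print(can_bind("0.0.0.0"))      # all interfaces: bindable anywhere
print(can_bind("203.0.113.7"))  # an address this machine does not own

# The equivalent fix for the question's code would be:
# waitress.serve(app=app, host="0.0.0.0", port=5017)
```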
I am facing the error mentioned below; could you please help me resolve it?
invalid volume specification: '/run/desktop/mnt/host/d/Master/Projects/python_task/image: /app:rw': invalid mount config for type "bind": invalid mount path: ' /app' mount path must be absolute
My file structure can be seen in the attached image.
Docker-Compose:
services:
  psql_crxdb:
    image: postgres:13
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=password
      - POSTGRES_DB=dvd_rental
    volumes:
      - "./dvdrental_data:/var/lib/postgresql/data:rw"
    ports:
      - "5432:5432"
  pgadmin:
    image: dpage/pgadmin4
    environment:
      - PGADMIN_DEFAULT_EMAIL=admin@admin.com
      - PGADMIN_DEFAULT_PASSWORD=root
    ports:
      - "8080:80"
  analytics:
    build:
      context: main
    environment:
      POSTGRESQL_CS: 'postgresql+psycopg2://postgres:password@psql_crxdb:5432/dvd_rental'
    depends_on:
      - psql_crxdb
    command: ["python", "./main.py"]
  pythontask:
    build:
      context: python_task
    command: ["python", "./circle.py"]
    volumes:
      - "./python_task/image: /app"
Dockerfile:
FROM python:3.9
RUN apt-get install wget
RUN pip install Pillow datetime
WORKDIR /app
COPY circle.py circle.py
ENTRYPOINT ["python", "circle.py"]
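The error message quoted above shows the offending mount path as ' /app' with a leading space, which comes from the space after the colon in the pythontask volume mapping. A sketch of the same mapping without the space, assuming everything else stays as above:

```yaml
    volumes:
      - "./python_task/image:/app"
```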
I can't connect my Python app to Postgres; everything runs on Docker. This is my Dockerfile:
FROM python:3.8
RUN mkdir /app
WORKDIR /app
ADD . /app/
ADD requirements.txt requirements.txt
RUN apt update -y
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
My docker-compose
version: '3'
services:
  db:
    image: postgres:13.4-alpine
    environment:
      POSTGRES_PASSWORD: secret
      POSTGRES_HOST_AUTH_METHOD: trust
    env_file:
      - .env
    ports:
      - "5432:5432"
    volumes:
      - ./database/init.sql:/docker-entrypoint-initdb.d/init.sql
  app:
    build: .
    restart: always
    depends_on:
      - db
    stdin_open: true
    tty: true
    env_file:
      - .env
and my .env file
DB_NAME=database_dev
DB_USER=postgres
DB_PASSWORD=secret
DB_HOST=localhost
DB_PORT=5432
I'm trying to connect with SQLAlchemy, and this is the error
sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) could not translate host name "{hostname}" to address: Name or service not known
Edit: adding my Python code for the connection; the env variables I use come from the .env file:
class DatabaseManager:
    def __init__(self):
        self.db_url = 'postgresql+psycopg2://{user}:{password}@{hostname}/{database_name}'
        self.db_url.format(
            user=os.environ['DB_USER'],
            password=os.environ['DB_PASSWORD'],
            hostname=os.environ['DB_HOST'],
            database_name=os.environ['DB_NAME']
        )
        self.engine = create_engine(self.db_url)

    def make_session(self):
        self.Session = sessionmaker(bind=self.engine)
        self.session = self.Session()

    def add_data(self, data):
        self.session.add(data)
        self.session.commit()

    def close(self):
        self.session.close()
According to your edit, your DB_HOST variable is not correct: here, localhost means localhost inside the Python container. Your Python instance (app) should point to the hostname of your db service.
Because docker-compose allows referring to services by their name, you can simply put this in your env file:
DB_HOST=db
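Note also that the traceback complains about the literal hostname "{hostname}": str.format returns a new string rather than modifying the template in place, so the formatted URL in the question's __init__ is discarded. A minimal sketch combining both fixes (the env values below are hypothetical stand-ins for the .env file):

```python
import os

# Hypothetical values standing in for the .env file
os.environ["DB_USER"] = "postgres"
os.environ["DB_PASSWORD"] = "secret"
os.environ["DB_HOST"] = "db"  # the compose service name, not localhost
os.environ["DB_NAME"] = "database_dev"

def build_db_url() -> str:
    # Use the RETURN VALUE of format(); it does not mutate the template
    template = "postgresql+psycopg2://{user}:{password}@{hostname}/{database_name}"
    return template.format(
        user=os.environ["DB_USER"],
        password=os.environ["DB_PASSWORD"],
        hostname=os.environ["DB_HOST"],
        database_name=os.environ["DB_NAME"],
    )

print(build_db_url())
# postgresql+psycopg2://postgres:secret@db/database_dev
```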
I run two different apps in containers:
Django App
Flask App
The Django app ran just fine. I configured my Flask app as follows.
This is my docker-compose.yml:
version: '3.8'
services:
  backend:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - 8001:5000
    volumes:
      - .:/app
    depends_on:
      - db
  db:
    image: mysql:5.7.22
    restart: always
    environment:
      MYSQL_DATABASE: main
      MYSQL_USER: username
      MYSQL_PASSWORD: pwd
      MYSQL_ROOT_PASSWORD: pwd
    volumes:
      - .dbdata:/var/lib/mysql
    ports:
      - 33067:3306
This is my Dockerfile:
FROM python:3.8
ENV PYTHONUNBUFFERED 1
WORKDIR /app
COPY requirements.txt /app/requirements.txt
RUN pip install -r requirements.txt
COPY . /app
CMD python main.py
Problem: when I run docker-compose up, the following error occurs:
backend_1 | python: can't open file 'manage.py': [Errno 2] No such file or directory
I don't know why it tries to open the manage.py file, since this is a Flask app and not a Django app. I need your help. Thanks in advance.
I'm not sure how this will work for you, but I resolved this by changing my docker-compose.yml to have a command parameter, so it looks like this:
version: '3.8'
services:
  backend:
    build:
      context: .
      dockerfile: Dockerfile
    command: 'python main.py'  # << Updated Line >>
    ports:
      - 8001:5000
    volumes:
      - .:/app
    depends_on:
      - db
  db:
    image: mysql:5.7.22
    restart: always
    environment:
      MYSQL_DATABASE: main
      MYSQL_USER: username
      MYSQL_PASSWORD: pwd
      MYSQL_ROOT_PASSWORD: pwd
    volumes:
      - .dbdata:/var/lib/mysql
    ports:
      - 33067:3306
I started a new Flask app (but this time in a virtualenv) and my problem was fixed :)
I have a Flask app that I want to host behind nginx via my Docker Compose file, but when I do, it gives me an Internal Server Error.
Here are some important files:
docker-compose.yml
version: "3.8"
services:
  th3pl4gu3:
    container_name: portfolix
    build: ./
    networks:
      - portfolix_net
    ports:
      - 8084:8084
    restart: always
  server:
    image: nginx:1.17.10
    container_name: nginx
    depends_on:
      - th3pl4gu3
    volumes:
      - ./reverse_proxy/nginx.conf:/etc/nginx/nginx.conf
    ports:
      - 80:80
    networks:
      - portfolix_net
networks:
  portfolix_net:
    name: portfolix_network
    driver: bridge
nginx.conf:
server {
    listen 80;

    location / {
        include uwsgi_params;
        uwsgi_pass th3pl4gu3:8084;
    }
}
Flask Dockerfile
# Using python 3.8 in Alpine
FROM python:3.8-alpine3.11
# Set the working directory to /app
WORKDIR /app
# Copy the current directory contents into the container at /app
ADD . /app
# Dependencies for uWSGI
RUN apk add python3-dev build-base linux-headers pcre-dev && pip install -r requirements.txt && apk update
# In case bash is needed
#RUN apk add --no-cache bash
# Tell the port number the container should expose
EXPOSE 8084
# Run the command
ENTRYPOINT ["uwsgi", "app.ini"]
app.ini
[uwsgi]
module = run:app
master = true
processes = 5
http-socket = 0.0.0.0:8084
chmod-socket = 660
vacuum = true
die-on-term = true
Now when I run this docker-compose file without the nginx service it works, but I want it to run behind the nginx server. Any idea why I am getting the Internal Server Error?
I was able to solve it with the following docker-compose:
version: "3.8"
services:
  th3pl4gu3:
    container_name: portfolix
    build: ./
    networks:
      - portfolix_net
    expose:
      - 8084
    restart: always
  server:
    image: nginx:1.17.10
    container_name: nginx
    depends_on:
      - th3pl4gu3
    volumes:
      - ./reverse_proxy/nginx.conf:/etc/nginx/nginx.conf
    ports:
      - 8084:80
    networks:
      - portfolix_net
networks:
  portfolix_net:
    name: portfolix_network
    driver: bridge
The issue was with my 8084:8084 port mapping.
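If the Internal Server Error persists behind nginx, one thing worth checking (an observation about the configs above, not part of the original answer): nginx's uwsgi_pass speaks the binary uwsgi protocol, while http-socket in app.ini serves plain HTTP, so the two sides would not understand each other. A sketch of a matching app.ini, assuming everything else stays as above:

```ini
[uwsgi]
module = run:app
master = true
processes = 5
; `socket` speaks the uwsgi protocol that nginx's uwsgi_pass expects
socket = 0.0.0.0:8084
chmod-socket = 660
vacuum = true
die-on-term = true
```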
This might sound simple, but I have this problem.
I have two Docker containers running: one for my front end and the other for my backend services.
These are the Dockerfiles for both services.
front-end Dockerfile :
# Use an official node runtime as a parent image
FROM node:8
WORKDIR /app
# Install dependencies
COPY package.json /app
RUN npm install --silent
# Add rest of the client code
COPY . /app
EXPOSE 3000
CMD npm start
backend Dockerfile :
FROM python:3.7.7
WORKDIR /usr/src/app
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY server.py /usr/src/app
COPY . /usr/src/app
EXPOSE 8083
# CMD ["python3", "-m", "http.server", "8080"]
CMD ["python3", "./server.py"]
I am building the images with the docker-compose.yaml below:
version: "3.2"
services:
  frontend:
    build: ./frontend
    ports:
      - 80:3000
    depends_on:
      - backend
  backend:
    build: ./backends/banuka
    ports:
      - 8080:8083
How can I make these two services update whenever there is a change to the front end or back end?
I found this repo, which is a boilerplate for React, python-flask and PostgreSQL and says it has hot reloading enabled for both the React frontend and the python-flask backend, but I couldn't find anything related to that. Can someone help me?
repo link
What I want is: after every code change, the container should be up to date automatically!
Try this in your docker-compose.yml
version: "3.2"
services:
  frontend:
    build: ./frontend
    environment:
      CHOKIDAR_USEPOLLING: "true"
    volumes:
      - /app/node_modules
      - ./frontend:/app
    ports:
      - 80:3000
    depends_on:
      - backend
  backend:
    build: ./backends/banuka
    environment:
      CHOKIDAR_USEPOLLING: "true"
    volumes:
      - ./backends/banuka:/app
    ports:
      - 8080:8083
Basically, you need that chokidar environment variable to enable hot reloading, and you need the volume bindings so that the code on your machine is mirrored into the container. See if this works.
If you are mapping your react container's port to a different port:
ports:
  - "30000:3000"
you may need to tell the WebSocketClient to look at the correct port:
environment:
  - CHOKIDAR_USEPOLLING=true # create-ui-app <= 5.x
  - WATCHPACK_POLLING=true # create-ui-app >= 5.x
  - FAST_REFRESH=false
  - WDS_SOCKET_PORT=30000 # The mapped port on your host machine
See related issue:
https://github.com/facebook/create-react-app/issues/11779