How to Hot-Reload in ReactJS Docker - python

This might sound simple, but I have this problem.
I have two Docker containers running: one for my front-end and the other for my backend services.
These are the Dockerfiles for both services.
Front-end Dockerfile:
# Use an official node runtime as a parent image
FROM node:8
WORKDIR /app
# Install dependencies
COPY package.json /app
RUN npm install --silent
# Add rest of the client code
COPY . /app
EXPOSE 3000
CMD npm start
Backend Dockerfile:
FROM python:3.7.7
WORKDIR /usr/src/app
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY server.py /usr/src/app
COPY . /usr/src/app
EXPOSE 8083
# CMD ["python3", "-m", "http.server", "8080"]
CMD ["python3", "./server.py"]
I am building images with the docker-compose.yaml as below:
version: "3.2"
services:
frontend:
build: ./frontend
ports:
- 80:3000
depends_on:
- backend
backend:
build: ./backends/banuka
ports:
- 8080:8083
How can I make these two services update whenever there is a change to the front-end or back-end?
I found this repo, which is a boilerplate for ReactJS, Python-Flask and PostgreSQL, and which says it has hot reloading enabled for both the ReactJS frontend and the Python-Flask backend. But I couldn't find anything related to that. Can someone help me?
repo link
What I want is: after every code change the container should be up-to-date automatically!

Try this in your docker-compose.yml
version: "3.2"
services:
frontend:
build: ./frontend
environment:
CHOKIDAR_USEPOLLING: "true"
volumes:
- /app/node_modules
- ./frontend:/app
ports:
- 80:3000
depends_on:
- backend
backend:
build: ./backends/banuka
environment:
CHOKIDAR_USEPOLLING: "true"
volumes:
- ./backends/banuka:/app
ports:
- 8080:8083
Basically you need that chokidar environment variable to enable hot reloading, and you need the volume bindings so the code on your machine is shared with the code in the container. See if this works.
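Note that CHOKIDAR_USEPOLLING only affects the webpack dev server on the React side. For the Python service, the bind mount keeps the files in sync, but the server process itself must also restart on change. If your backend is Flask (as in the boilerplate you linked), a minimal sketch of a server.py that reloads might look like this (the route and app name are placeholders, not taken from your code):

from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "hello from the backend"

if __name__ == "__main__":
    # debug=True turns on the Werkzeug reloader, which restarts the server
    # whenever a file in the bind-mounted source tree changes
    app.run(host="0.0.0.0", port=8083, debug=True)

If the backend isn't Flask, a file watcher such as watchdog's watchmedo auto-restart can wrap the CMD to the same effect.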

If you are mapping your react container's port to a different port:
ports:
  - "30000:3000"
you may need to tell the WebSocketClient to look at the correct port:
environment:
  - CHOKIDAR_USEPOLLING=true  # create-react-app <= 5.x
  - WATCHPACK_POLLING=true    # create-react-app >= 5.x
  - FAST_REFRESH=false
  - WDS_SOCKET_PORT=30000     # the mapped port on your host machine
See related issue:
https://github.com/facebook/create-react-app/issues/11779

Related

Docker is taking wrong settings file when creating image

I have a Django application where my settings are placed in a folder named settings. Inside this folder I have __init__.py, base.py, deployment.py and production.py.
My wsgi.py looks like this:
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myapp_settings.settings.production")
application = get_wsgi_application()
My Dockerfile:
FROM python:3.8
ENV PYTHONUNBUFFERED 1
ENV PYTHONDONTWRITEBYTECODE 1
RUN mkdir /code
COPY . /code/
WORKDIR /code
RUN pip install --no-cache-dir git+https://github.com/ByteInternet/pip-install-privates.git@master#egg=pip-install-privates
RUN pip install --upgrade pip
RUN pip_install_privates --token {GITHUB-TOKEN} /code/requirements.txt
RUN playwright install --with-deps chromium
RUN playwright install-deps
RUN touch /code/logs/celery.log
RUN chmod +x /code/logs/celery.log
EXPOSE 80
My docker-compose file:
version: '3'
services:
  app:
    container_name: myapp_django_app
    build:
      context: ./backend
      dockerfile: Dockerfile
    restart: always
    command: gunicorn myapp_settings.wsgi:application --bind 0.0.0.0:80
    networks:
      - myapp_default
    ports:
      - "80:80"
    env_file:
      - ./.env
Problem
Every time I create the image, Docker takes the settings from development.py instead of production.py. I tried to change my settings using this command:
set DJANGO_SETTINGS_MODULE=myapp_settings.settings.production
It works fine when using conda/venv and I am able to switch to production mode; however, when creating the Docker image it does not take the production.py file into consideration at all.
Question
Is there anything else I should be aware of that causes issues like this and how can I fix it?
YES, there is something else you need to check:
When you run your Docker container you can specify environment variables.
If you declare the environment variable DJANGO_SETTINGS_MODULE=myapp_settings.development, it will override what you specified inside wsgi.py!
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myapp_settings.settings.production")
The code above basically means: declare "myapp_settings.settings.production" as the default, but if the environment variable DJANGO_SETTINGS_MODULE is already declared, take the value of that variable instead.
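You can see this behaviour outside of Django with plain Python (illustrative only):

import os

# Simulate docker-compose passing an environment variable into the container:
os.environ["DJANGO_SETTINGS_MODULE"] = "myapp_settings.settings.development"

# setdefault() only assigns the value when the variable is not already set,
# so the value coming from the environment wins:
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myapp_settings.settings.production")
print(os.environ["DJANGO_SETTINGS_MODULE"])  # -> myapp_settings.settings.development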
Edit 1
Maybe you can try specifying the environment variable inside your docker-compose file:
version: '3'
services:
  app:
    environment:
      - DJANGO_SETTINGS_MODULE=myapp_settings.settings.production
    container_name: myapp_django_app
    build:
      context: ./backend
      dockerfile: Dockerfile
    restart: always
    command: gunicorn myapp_settings.wsgi:application --bind 0.0.0.0:80
    networks:
      - myapp_default
    ports:
      - "80:80"
    env_file:
      - ./.env
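If it still ends up on the development settings, it is worth checking which value the running container actually receives (for example from the env_file). Assuming the service name app from above, something like:

docker-compose exec app env | grep DJANGO
docker-compose exec app python -c "from django.conf import settings; print(settings.SETTINGS_MODULE)"

The first command shows the raw environment variable; the second shows the settings module Django actually resolved.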

How to run a command in docker-compose after a service run?

I have searched but I couldn't find a solution for my problem. My docker-compose.yml file is as below.
version: '2.1'
services:
  mongo:
    image: mongo_db
    build: mongo_image
    container_name: my_mongodb
    restart: always
    networks:
      - isolated_network
    ports:
      - "27017"
    environment:
      - MONGO_INITDB_ROOT_USERNAME=root
      - MONGO_INITDB_ROOT_PASSWORD=root_pw
    entrypoint: ["python3", "/tmp/script/get_api_to_mongodb.py", "&"]
networks:
  isolated_network:
So here I use a custom Dockerfile, which looks like this:
FROM mongo:latest
RUN apt-get update -y
RUN apt-get install python3-pip -y
RUN pip3 install requests
RUN pip3 install pymongo
RUN apt-get clean -y
RUN mkdir -p /tmp/script
COPY get_api_to_mongodb.py /tmp/script/get_api_to_mongodb.py
#CMD ["python3","/tmp/script/get_api_to_mongodb.py","&"]
Here I want to create a container that has MongoDB, and after the container is created I collect data using an API and send it to MongoDB. But when I run the Python script, MongoDB is not yet initialized. So I need to run my script after the container is created and right after MongoDB has initialized. Thanks in advance.
You should run this script as a separate container. It's not "part of the database", like an extension or plugin, but rather an ordinary client process that happens to connect to the database and that you want to run relatively early on. In general, if you're thinking about trying to launch a background process in a container, it's often a better approach to run foreground processes in two separate containers.
This setup means you can use a simpler Dockerfile that starts from an image with Python preinstalled:
FROM python:3.10
RUN pip install requests pymongo
WORKDIR /app
COPY get_api_to_mongodb.py .
CMD ["./get_api_to_mongodb.py"]
Then in your Compose setup, declare this as a second container alongside the first one. Since the script is in its own image, you can use the unmodified mongo image.
version: '2.4'
services:
  mongo:
    image: mongo:latest
    restart: always
    ports:
      - "27017"
    environment:
      - MONGO_INITDB_ROOT_USERNAME=root
      - MONGO_INITDB_ROOT_PASSWORD=root_pw
  loader:
    build: .
    restart: on-failure
    depends_on:
      - mongo
    # environment:
    #   - MONGO_HOST=mongo
    #   - MONGO_USERNAME=root
    #   - MONGO_PASSWORD=root_pw
Note that the loader will re-run every time you run docker-compose up -d. You also may have to wait for the database to do its initialization before you can run the loader process; see Docker Compose wait for container X before starting Y.
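For the waiting part, one option is to make the loader itself retry until MongoDB accepts connections. A rough sketch, assuming pymongo and the credentials/service name from the compose file above:

import time
from pymongo import MongoClient
from pymongo.errors import PyMongoError

def wait_for_mongo(uri="mongodb://root:root_pw@mongo:27017/", attempts=30):
    # Ping the server repeatedly, sleeping between attempts, until it responds
    for _ in range(attempts):
        try:
            client = MongoClient(uri, serverSelectionTimeoutMS=2000)
            client.admin.command("ping")
            return client
        except PyMongoError:
            time.sleep(2)
    raise RuntimeError("MongoDB did not become ready in time")

client = wait_for_mongo()
# ... fetch data from the API and insert it into MongoDB here ...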
It's likely you have an existing Compose service for your real application:
version: '2.4'
services:
  mongo: { ... }
  app:
    build: .
    ...
If that image contains the loader script, then you can docker-compose run it. This launches a new temporary container, using most of the attributes from the Compose service declaration, but you provide an alternate command: and the ports: are ignored.
docker-compose run app ./get_api_to_mongodb.py
One might ideally like a workflow where first the database container starts; then once it's accepting requests, run the loader script as a temporary container; then once that's completed start the main application server. This is mostly beyond Compose's capabilities, though you can probably get close with a combination of extended depends_on: declarations and a healthcheck: for the database.
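Roughly, a healthcheck-gated ordering could look like this (untested sketch; the mongosh command assumes a recent mongo image, older images ship the mongo shell instead):

version: '2.4'
services:
  mongo:
    image: mongo:latest
    healthcheck:
      test: ["CMD", "mongosh", "--quiet", "--eval", "db.adminCommand('ping')"]
      interval: 5s
      timeout: 5s
      retries: 12
  loader:
    build: .
    depends_on:
      mongo:
        condition: service_healthy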

How to use docker buildx bake to build docker compose containers for both linux/armv7 and linux/amd64

I have developed a Raspberry Pi app in Python that uses Redis as its local cache, so naturally I turned to Docker Compose to define all my services, i.e. Redis and my app. I am using a Docker Hub private repository to host my container. But I do not get how to use the docker buildx bake command to target the linux/armv7 platform, as the --platform flag is not part of bake.
All the examples that the Docker team has shown use the simple docker buildx command, which cannot be run for compose files.
My docker-compose.yml file is defined as:
version: '3.0'
services:
  redis:
    image: redis:alpine
  app:
    image: dockerhub/repository
    build: gateway
    restart: always
Dockerfile:
# set base image (Host OS)
FROM python:3.8-slim
# set the working directory in the container
WORKDIR /run
# copy the dependencies file to the working directory
COPY requirements.txt .
# install dependencies
RUN pip install -r requirements.txt
# copy the content of the local src directory to the working directory
COPY src/ .
# command to run on container start
CMD [ "python", "-u", "run.py" ]
Any help would be much appreciated. Thanks
You can supply the platforms parameter under the x-bake key, as shown below (reference: https://docs.docker.com/engine/reference/commandline/buildx_bake/).
# docker-compose.yml
services:
  addon:
    image: ct-addon:bar
    build:
      context: .
      dockerfile: ./Dockerfile
      args:
        CT_ECR: foo
        CT_TAG: bar
      x-bake:
        tags:
          - ct-addon:foo
          - ct-addon:alp
        platforms:
          - linux/amd64
          - linux/arm64
        cache-from:
          - user/app:cache
          - type=local,src=path/to/cache
        cache-to: type=local,dest=path/to/cache
        pull: true
  aws:
    image: ct-fake-aws:bar
    build:
      dockerfile: ./aws.Dockerfile
      args:
        CT_ECR: foo
        CT_TAG: bar
      x-bake:
        secret:
          - id=mysecret,src=./secret
          - id=mysecret2,src=./secret2
        platforms: linux/arm64
        output: type=docker
        no-cache: true
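For a Raspberry Pi target you would list linux/arm/v7 (and/or linux/amd64) under platforms instead of linux/arm64. With that in your docker-compose.yml, you can then build every service with bake; roughly, assuming a buildx builder that supports cross-building:

docker buildx create --use --name multiarch   # one-time: create and select a multi-platform builder
docker buildx bake -f docker-compose.yml --push

--push sends the multi-arch images to your registry, since a multi-platform result generally cannot be loaded into the local image store.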

How to install and start CouchDB server in a Docker image of a Web Application?

I made a Docker image of a web application which is built on Python, and my web application needs a CouchDB server to start before running the program. Can anyone please tell me how I can install and run a CouchDB server in the Dockerfile of this web application? My Dockerfile is given below:
FROM python:2.7.15-alpine3.7
RUN mkdir /home/WebDocker
ADD ./Webpage1 /home/WebDocker/Webpage1
ADD ./requirements.txt /home/WebDocker/requirements.txt
WORKDIR /home/WebDocker
RUN pip install -r /home/WebDocker/requirements.txt
RUN apk update && \
    apk upgrade && \
    apk add bash vim sudo
EXPOSE 8080
ENTRYPOINT ["/bin/bash"]
Welcome to SO! I solved it by using Docker Compose to run a separate CouchDB container and a separate Python container. The relevant part of the configuration file docker-compose.yml looks like this:
# This helps to avoid routing conflicts within virtual machines:
networks:
  default:
    ipam:
      driver: default
      config:
        - subnet: 192.168.112.0/24
# The CouchDB data is kept in a docker volume:
volumes:
  couchdb_data:
services:
  # The container couchServer uses the Dockerfile from the subdirectory CouchDB-DIR
  # and it has the hostname 'couchServer':
  couchServer:
    build:
      context: .
      dockerfile: CouchDB-DIR/Dockerfile
    ports:
      - "5984:5984"
    volumes:
      - type: volume
        source: couchdb_data
        target: /opt/couchdb/data
        read_only: False
      - type: bind        # a host path needs a bind mount, not a named volume
        source: ${DOCKER_VOLUMES_BASEPATH}/couchdb_log
        target: /var/log/couchdb
        read_only: False
    tty: true
    environment:
      - COUCHDB_PASSWORD=__secret__
      - COUCHDB_USER=admin
  python_app:
    build:
      context: .
      dockerfile: ./Python_DIR/Dockerfile
    ...
In the Docker subnet, CouchDB can be accessed at http://couchServer:5984 from the Python container. To ensure that the CouchDB data is not lost when restarting the container, it is kept in a separate Docker volume couchdb_data.
Use the environment variable DOCKER_VOLUMES_BASEPATH to determine in which directory CouchDB writes its logs. It can be defined in a .env file.
The networks section is only necessary if you have routing problems.
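As a quick sanity check from the python_app container, something like this should reach CouchDB by its service name (illustrative only, using the credentials from the compose file):

import requests

resp = requests.get(
    "http://couchServer:5984/_up",   # the service name resolves inside the Compose network
    auth=("admin", "__secret__"),
    timeout=5,
)
print(resp.status_code, resp.json())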

Run Python script in one docker-compose container from another

I want to run a Python script in a Docker container from another Docker container within a docker-compose environment; here is an abstraction of the docker-compose.yml file:
app:
  build: ./app
  volumes:
    - ./app:/app
    - /var/run/docker.sock:/var/run/docker.sock
  links:
    - container1
    - container2
    - python_container
  ports:
    - "13000:3000"
  working_dir: /app
  command: npm install
  entrypoint: /entrypoint.sh
container1:
  image: container1:version
  ports:
    - "3005:3005"
  volumes:
    - ./volume:/volume
container2:
  image: container2:version
  ports:
    - "3004:3004"
python_container:
  image: some_image
  volumes:
    - ./scripts_volume:/scripts_volume
Is it possible to run a Python script in the python_container from within the app container? I have a Node application that needs to run Python scripts, for which we created a Docker 'runtime' container with all dependencies prebuilt.
I already tried mounting the Docker socket; however, when I try to run
docker-compose run python_container python scriptname.py
it says:
Can't find a suitable configuration file in this directory or any parent. Are you in the right directory?
If it is possible, what is the best approach?
Regards
