I've been trying to create a Flask API that executes simple shell scripts, e.g. session = Popen(['echo server_info.js | node {}'.format(cmd)], shell=True, stdout=PIPE, stderr=PIPE). This worked very well, but when I dockerized the application the script stopped running and returned this error: b'/bin/sh: 1: /path: No such file or directory'.
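An error of the form /bin/sh: 1: /path: No such file or directory usually means the interpreter or the script path simply does not exist inside the container, even though it exists on the host. A minimal sketch (run_script and the example paths are illustrative, not from the original code) that resolves the interpreter up front and fails with a clear message when it is missing:

```python
import shutil
import subprocess

def run_script(interpreter: str, script_path: str) -> str:
    """Resolve the interpreter to an absolute path and run the script
    without shell=True, so a missing binary fails with a clear error."""
    exe = shutil.which(interpreter)
    if exe is None:
        raise FileNotFoundError(
            f"'{interpreter}' is not on PATH inside this container")
    result = subprocess.run([exe, script_path],
                            capture_output=True, text=True, check=True)
    return result.stdout

# e.g. run_script("node", "/usr/src/app/server_info.js")
```

If this raises FileNotFoundError inside the container but not on the host, the base image is missing the binary (or the script was not copied to the path you expect).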
Also, I use Swagger with a Flask blueprint. The dockerized version serves the Swagger UI but does not pick up any change I make to the swagger.json file. The code:
SWAGGER_URL = '/swagger'
API_URL = '/static/swagger.json'
SWAGGERUI_BLUEPRINT = get_swaggerui_blueprint(
    SWAGGER_URL,
    API_URL,
    config={
        'app_name': "NAME"
    }
)
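On the swagger.json issue: COPY snapshots files into the image at build time, so edits to swagger.json on the host never reach an already-built container. During development you can bind-mount the static directory over the baked-in copy; a sketch, assuming the image name and the WORKDIR /usr/src/app layout from the Dockerfile (adjust both to your setup):

```shell
# Development only: host edits to static/swagger.json show up without rebuilding
docker run -p 5000:5000 -v "$(pwd)/static:/usr/src/app/static:ro" my-flask-image

# Otherwise, rebuild after each change to swagger.json:
docker build -t my-flask-image . && docker run -p 5000:5000 my-flask-image
```

Browsers also cache swagger.json aggressively, so a hard refresh may be needed even after a rebuild.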
And the Dockerfile:
FROM python:3.7
RUN mkdir /usr/src/app
WORKDIR /usr/src/app
COPY . /usr/src/app
RUN pip3 install --trusted-host pypi.python.org -r requirements.txt
EXPOSE 5000
ENTRYPOINT ["python3", "/usr/src/app/app.py"]
Any suggestions?
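One likely culprit: the python:3.7 base image does not ship Node.js, so a command piped to node will fail inside the container even though it works on the host. A hedged sketch of the same Dockerfile with Node added (package name assumes the Debian base of python:3.7; you may want a pinned NodeSource install instead):

```dockerfile
FROM python:3.7
# python:3.7 is Debian-based; install Node.js so `node` exists in the container
RUN apt-get update && apt-get install -y --no-install-recommends nodejs \
    && rm -rf /var/lib/apt/lists/*
WORKDIR /usr/src/app
COPY . /usr/src/app
RUN pip3 install --trusted-host pypi.python.org -r requirements.txt
EXPOSE 5000
ENTRYPOINT ["python3", "/usr/src/app/app.py"]
```

Also double-check that any path your script references (the /path in the error) actually exists inside the image, e.g. with docker run --rm -it <image> ls /path.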
I've created a simple Kafka app that sends a message to a topic. It works perfectly when I run it in my local environment, but when I move it to a Docker container it cannot connect to the broker. I think the problem is in the container network settings, but I cannot figure it out.
App code:
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers='xxxxxxxxxx.mdb.yandexcloud.net:9091',
    security_protocol="SASL_SSL",
    sasl_mechanism="SCRAM-SHA-512",
    sasl_plain_password='xxxxxxxxxx',
    sasl_plain_username='xxxxxxxxxx',
    ssl_cafile="YandexCA.crt",
    api_version=(0, 11, 5))

producer.send('test_topic', b'test message')
producer.flush()
producer.close()
Dockerfile:
# For more information, please refer to https://aka.ms/vscode-docker-python
FROM python:3.10
# Keeps Python from generating .pyc files in the container
ENV PYTHONDONTWRITEBYTECODE=1
# Turns off buffering for easier container logging
ENV PYTHONUNBUFFERED=1
# Install pip requirements
COPY requirements.txt .
RUN python -m pip install -r requirements.txt
WORKDIR /app
COPY . /app
# Creates a non-root user with an explicit UID and adds permission to access the /app folder
# For more info, please refer to https://aka.ms/vscode-docker-python-configure-containers
RUN adduser -u 5678 --disabled-password --gecos "" appuser && chown -R appuser /app
USER appuser
# During debugging, this entry point will be overridden. For more information, please refer to https://aka.ms/vscode-docker-python-debug
CMD ["python", "app.py"]
So it runs perfectly in the terminal but fails in Docker. What can cause this?
So the problem was in the password: it contained characters that needed escaping, like:
ENV PASS=xxxxx\6xxxxx
When set via environment variables it worked correctly, but when set in the Dockerfile it got escaped. So in the Dockerfile I set it like this:
ENV PASS="xxxxx\6xxxxx"
And everything started working.
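An alternative that sidesteps Dockerfile escaping entirely is to not bake the password into the image at all and inject it at run time. A sketch, assuming an illustrative variable name KAFKA_PASS (not from the original code):

```python
import os

def get_kafka_password(var: str = "KAFKA_PASS") -> str:
    """Read the broker password from the environment, so any escaping is
    handled by the shell that sets it rather than by Dockerfile parsing."""
    value = os.environ.get(var)
    if value is None:
        raise RuntimeError(
            f"{var} is not set; pass it with `docker run -e {var}=...`")
    return value
```

Then start the container with docker run -e KAFKA_PASS='xxxxx\6xxxxx' ... (single quotes keep the backslash literal). This also keeps the secret out of the image layers.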
So I'm trying to run my FastAPI Python app in a Docker container. I chose python:3.9 as a base image and everything seemed to work until I decided to integrate my SSL cert files into the container.
Dockerfile:
FROM python:3.9
WORKDIR /app
COPY requirements.txt requirements.txt
RUN pip3 install -r requirements.txt
RUN mkdir -p /app/SSL
VOLUME /etc/letsencrypt/live/soulforger.net/:/app/SSL/
COPY . .
CMD [ "uvicorn", "core:app", "--host", "0.0.0.0", "--port", "8000", "--ssl-keyfile", "/app/SSL/privkey.pem", "--ssl-certfile", "/app/SSL/cert.pem" ]
EXPOSE 8000
Docker run command: sudo docker run -p 33665:8000 -v /etc/letsencrypt/live/soulforger.net/:/app/SSL --name soulforger_api -d 24aea28ce756
Now the problem is that the directory I'm mapping is only accessible as the root user. When I exec into the container, the files are there, but I can't cat /app/SSL/cert.pem. Since I can cat everything else without a problem, I assume it's some sort of permissions problem when mapping the dir into the container. Does anybody have an idea what can cause this issue?
Solution:
After a lot of digging I found out what the problem is. For anyone who happens upon this post and also uses Let's Encrypt: the files within /etc/letsencrypt/live/some.domain/ are only symlinks to files in another directory. If you want to mount your server's SSL certificates into your containers, you have to mount the entire /etc/letsencrypt/ dir in order to have access to the files referenced by the links. All props go to this answer.
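This failure mode can be detected from inside the container by resolving the symlink. A small sketch (check_mounted_cert is an illustrative helper, not part of the original code):

```python
import os

def check_mounted_cert(path: str) -> str:
    """Resolve a certificate path; a dangling symlink is the classic symptom
    of bind-mounting only /etc/letsencrypt/live/<domain>/ instead of the
    whole /etc/letsencrypt/ tree."""
    real = os.path.realpath(path)
    if not os.path.exists(real):
        raise FileNotFoundError(
            f"{path} resolves to {real}, which is not present; "
            "mount all of /etc/letsencrypt/ into the container")
    return real
```

If the function raises, the link target (usually under /etc/letsencrypt/archive/) is outside the mounted subtree.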
You can change the user in the Dockerfile. Try adding USER root to your Dockerfile.
Hopefully it will be helpful.
FROM python:3.9
USER root
WORKDIR /app
COPY requirements.txt requirements.txt
RUN pip3 install -r requirements.txt
RUN mkdir -p /app/SSL
VOLUME /etc/letsencrypt/live/soulforger.net/:/app/SSL/
COPY . .
CMD [ "uvicorn", "core:app", "--host", "0.0.0.0", "--port", "8000", "--ssl-keyfile", "/app/SSL/privkey.pem", "--ssl-certfile", "/app/SSL/cert.pem" ]
EXPOSE 8000
I am trying to deploy to Cloud Run using its --command flag option (see https://cloud.google.com/sdk/gcloud/reference/run/deploy), but it fails any time I try it. I do not know whether I am misunderstanding how to use it or whether something is happening on Google Cloud's side.
My Dockerfile looks like the following:
FROM python:3.10-slim
ENV PYTHONUNBUFFERED True
ENV APP_HOME /app
WORKDIR $APP_HOME
COPY . ./
RUN pip install --no-cache-dir -r requirements.txt
ENTRYPOINT ["python"]
CMD ["main.py"]
And I am deploying it with:
gcloud run deploy \
$SERVICE \
--image $IMAGE \
--allow-unauthenticated \
--memory $MEMORY \
--concurrency $CONCURRENCY \
--cpu $CPU \
--platform managed \
--region $REGION \
--command "main2.py"
The logs are as follows:
X Deploying... Internal error occurred while performing container health check.
- Creating Revision...
. Routing traffic...
✓ Setting IAM Policy...
Aborted by user.
ERROR: (gcloud.run.deploy) Aborted by user.
I also tried using only CMD in the Dockerfile (replacing the 2 last lines with CMD python main.py) and using --command "python main2.py", without success. I want to use the same Docker image but be able to deploy to run main.py (the default in the Dockerfile) or main2.py.
Note that if the --command flag is omitted it is deployed successfully and the app works.
The test code is at https://github.com/charlielito/cloud-run-test
The Python code is just a dummy Flask server. main2.py is the same; for testing purposes I just changed the response string.
import os

from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello_world():
    name = os.environ.get("NAME", "World")
    return "Hello {}!".format(name)

if __name__ == "__main__":
    app.run(debug=True, host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))
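A likely explanation, based on how the flags are documented: on Cloud Run, --command replaces the image's ENTRYPOINT and --args replaces its CMD. So --command "main2.py" makes the container try to exec main2.py as a binary (with no python in front), which would explain the failed container health check. A hedged sketch of invocations that keep python as the executable:

```shell
# Keep the image's ENTRYPOINT ["python"] and override only the CMD:
gcloud run deploy $SERVICE --image $IMAGE --args "main2.py"

# Or override both explicitly:
gcloud run deploy $SERVICE --image $IMAGE --command "python" --args "main2.py"
```

With the Dockerfile above, omitting --command entirely and passing only --args should switch between main.py (the default CMD) and main2.py.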
I am building a Python container for the first time using VS Code and WSL2. Here is my sample Python code. It runs fine in VS Code interactive mode because it is picking up my default AWS credentials.
import boto3

s3BucketName = 'test-log-files'
s3 = boto3.resource('s3')

def s3move():
    try:
        s3.Object(s3BucketName, "destination/Changes.xlsx").copy_from(
            CopySource=(s3BucketName + "/source/Changes.xlsx"))
        s3.Object(s3BucketName, "source/Changes.xlsx").delete()
        print("Transfer Complete")
    except Exception as e:
        print("Transfer failed:", e)

if __name__ == "__main__":
    s3move()
The Dockerfile built by VS Code:
# For more information, please refer to https://aka.ms/vscode-docker-python
FROM python:3.8-slim-buster
# Keeps Python from generating .pyc files in the container
ENV PYTHONDONTWRITEBYTECODE=1
# Turns off buffering for easier container logging
ENV PYTHONUNBUFFERED=1
# Install pip requirements
COPY requirements.txt .
RUN python -m pip install -r requirements.txt
RUN pip install boto3
WORKDIR /app
COPY . /app
# Switching to a non-root user, please refer to https://aka.ms/vscode-docker-python-user-rights
RUN useradd appuser && chown -R appuser /app
USER appuser
# During debugging, this entry point will be overridden. For more information, please refer to https://aka.ms/vscode-docker-python-debug
CMD ["python", "S3MoveFiles/S3MoveFiles.py"]
I would like to test this using a Docker container, and it seems I have to pass the AWS credentials to the container. While there are other, probably more secure, ways, I wanted to test the method of mounting the credentials as a volume in the docker run command.
docker run -v ~/.aws/credentials:/appuser/home/.aws/credentials:ro image_id
I get the "Transfer failed" message in the terminal window in VS Code. What am I doing wrong here? I checked several articles but couldn't find any hints. I am not logged in as root.
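One thing to check: boto3 looks for the shared credentials file at $HOME/.aws/credentials, and for a user created as appuser that home is /home/appuser, not /appuser/home as in the mount path above. It can also be pointed at an explicit file via the AWS_SHARED_CREDENTIALS_FILE environment variable. A sketch of the resolution order (credentials_file_path is an illustrative helper):

```python
import os

def credentials_file_path() -> str:
    """Where boto3 will look for the shared credentials file:
    AWS_SHARED_CREDENTIALS_FILE if set, else ~/.aws/credentials."""
    return os.environ.get(
        "AWS_SHARED_CREDENTIALS_FILE",
        os.path.join(os.path.expanduser("~"), ".aws", "credentials"))
```

So the mount would likely need to be -v ~/.aws:/home/appuser/.aws:ro, or alternatively -v ~/.aws/credentials:/creds:ro with -e AWS_SHARED_CREDENTIALS_FILE=/creds.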
I really just want to pass an argument via docker run
My Dockerfile:
FROM python:3
# set a directory for the app
WORKDIR /usr/src/app
# copy all the files to the container
COPY . .
# install dependencies
RUN pip install --no-cache-dir -r requirements.txt
# tell the port number the container should expose
EXPOSE 5000
# run the command
CMD ["python", "./app.py"]
My python file:
import sys
print(sys.argv)
I tried:
docker run myimage foo
I got an error:
flask-app git:(master) ✗ docker run myimage foo
docker: Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused "exec: \"foo\": executable file not found in $PATH": unknown.
ERRO[0000] error waiting for container: context canceled
When you write foo at the end of your docker run command, you overwrite the whole command. Therefore instead of
python app.py
you call
foo
Proper way of calling your script with arguments is:
docker run myimage python app.py foo
Alternatively you may use ENTRYPOINT instead of CMD; then your docker run command may contain just foo after the image name.
Dockerfile:
FROM python:3
# set a directory for the app
WORKDIR /usr/src/app
# copy all the files to the container
COPY app.py .
# run the command
ENTRYPOINT ["python", "./app.py"]
calling it:
docker run myimage foo
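ENTRYPOINT and CMD can also be combined, with CMD supplying default arguments that anything written after the image name replaces. A sketch extending the Dockerfile above (default-arg is an illustrative placeholder):

```dockerfile
FROM python:3
WORKDIR /usr/src/app
COPY app.py .
ENTRYPOINT ["python", "./app.py"]
# Default argument; `docker run myimage foo` replaces it with "foo"
CMD ["default-arg"]
```

With this, docker run myimage prints ['./app.py', 'default-arg'] and docker run myimage foo prints ['./app.py', 'foo'].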