I am trying to deploy to Cloud Run using its --command flag (see https://cloud.google.com/sdk/gcloud/reference/run/deploy), but it fails every time I try it. I don't know whether I am misunderstanding how to use the flag or whether something is going wrong on Google Cloud's side.
My Dockerfile looks like the following:
FROM python:3.10-slim
ENV PYTHONUNBUFFERED True
ENV APP_HOME /app
WORKDIR $APP_HOME
COPY . ./
RUN pip install --no-cache-dir -r requirements.txt
ENTRYPOINT ["python"]
CMD ["main.py"]
And I am deploying it with:
gcloud run deploy \
$SERVICE \
--image $IMAGE \
--allow-unauthenticated \
--memory $MEMORY \
--concurrency $CONCURRENCY \
--cpu $CPU \
--platform managed \
--region $REGION \
--command "main2.py"
The logs are as follows:
X Deploying... Internal error occurred while performing container health check.
- Creating Revision...
. Routing traffic...
✓ Setting IAM Policy...
Aborted by user.
ERROR: (gcloud.run.deploy) Aborted by user.
I also tried using only CMD in the Dockerfile (replacing the last two lines with CMD python main.py) and using --command "python main2.py", without success. I want to use the same Docker image but be able to deploy it running either main.py (the default in the Dockerfile) or main2.py.
Note that if the --command flag is omitted, the service deploys successfully and the app works.
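For reference, in gcloud run deploy the --command flag overrides the image's ENTRYPOINT and --args overrides its CMD. With ENTRYPOINT ["python"] in the Dockerfile, --command "main2.py" therefore asks Cloud Run to execute main2.py itself as the binary, which cannot work. A hedged sketch of two invocations that should match the intent (same $SERVICE/$IMAGE variables as above; other flags omitted for brevity):
# Keep the python entrypoint from the image and override only the script (CMD):
gcloud run deploy $SERVICE --image $IMAGE --platform managed --region $REGION \
    --args "main2.py"
# Or override both the entrypoint and its arguments explicitly:
gcloud run deploy $SERVICE --image $IMAGE --platform managed --region $REGION \
    --command "python" --args "main2.py"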
The test code is at https://github.com/charlielito/cloud-run-test
The Python code is just a dummy Flask server. main2.py is the same, for testing purposes; only the response string is changed.
import os
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello_world():
    name = os.environ.get("NAME", "World")
    return "Hello {}!".format(name)

if __name__ == "__main__":
    app.run(debug=True, host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))
Related
I've been trying to create a Flask API that executes simple shell scripts, e.g. session = Popen(['echo "server_info.js" | node {}'.format(cmd)], shell=True, stdout=PIPE, stderr=PIPE). This worked very well, but when I dockerized the application the script stopped running and returned this error: b'/bin/sh: 1: /path No such file or directory'.
Also, I use Swagger and a blueprint for the Flask app. The dockerized version shows the Swagger UI but does not pick up any change I make in the swagger.json file. The code:
SWAGGER_URL = '/swagger'
API_URL = '/static/swagger.json'
SWAGGERUI_BLUEPRINT = get_swaggerui_blueprint(
    SWAGGER_URL,
    API_URL,
    config={
        'app_name': "NAME"
    }
)
And the Dockerfile:
FROM python:3.7
RUN mkdir /usr/src/app
WORKDIR /usr/src/app
COPY . /usr/src/app
RUN pip3 install --trusted-host pypi.python.org -r requirements.txt
EXPOSE 5000
ENTRYPOINT ["python3", "/usr/src/app/app.py"]
Any suggestions?
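Two hedged observations, going only by what is shown above. The /bin/sh: 1: /path No such file or directory error usually means the path embedded in the shell command does not exist inside the container: the container holds only what the Dockerfile COPYs into /usr/src/app, so any host-specific absolute path has to be rewritten. A minimal sketch (the server_info.js location is an assumption for illustration, not from the original code):
import os
from subprocess import PIPE, Popen

# Everything COPYied by the Dockerfile lives under /usr/src/app inside
# the container, so build paths relative to that.
script = os.path.join("/usr/src/app", "server_info.js")  # hypothetical location
if not os.path.exists(script):
    raise FileNotFoundError(script + " was not copied into the image")

# With shell=True, pass the command as a single string.
session = Popen("node {}".format(script), shell=True, stdout=PIPE, stderr=PIPE)
out, err = session.communicate()
print(out.decode(), err.decode())
As for swagger.json not updating: COPY bakes the file into the image at build time, so edits on the host only show up after a rebuild (docker build), or if you bind-mount the static folder at run time, e.g. docker run -v "$(pwd)/static:/usr/src/app/static" ... (mount path assumed from the Dockerfile above).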
I am deploying a web scraping microservice in a Docker container. I've used Scrapy, and I am exposing an API endpoint using FastAPI that executes the crawler command.
I've created a Docker container using Ubuntu as the base and installed all the required dependencies. Then I use docker exec container_name bash as an entry point to run the FastAPI server command. But how do I run the server as a background job?
I've tried building from the FastAPI docker image (tiangolo/uvicorn-gunicorn-fastapi:python3.6) but it fails to start.
I used the tiangolo/uvicorn-gunicorn-fastapi:python3.6 image, installed my web scraping dependencies in it, set the environment variables, and changed the working directory to the folder from which the scrapy crawl mybot command can be executed.
The issue I was facing with this solution earlier is a response timeout, because I run the scrapy crawl mybot command as an OS process using os.popen('scrapy crawl mybot') inside the API function, log the output, and then return the response. I know it's not the right way - I will try to run it as a background job - but it's a workaround for now.
Dockerfile:
FROM tiangolo/uvicorn-gunicorn-fastapi:python3.6
# Install dependencies:
COPY requirements.txt .
RUN pip3 install -r requirements.txt
ENV POSTGRESQL_HOST=localhost
ENV POSTGRESQL_PORT=5432
ENV POSTGRESQL_DB=pg
ENV POSTGRESQL_USER=pg
ENV POSTGRESQL_PWD=pwd
ENV LC_ALL=C.UTF-8
ENV LANG=C.UTF-8
COPY ./app /app
WORKDIR "/app"
FastAPI Endpoint:
import datetime
import os

from fastapi import FastAPI

app = FastAPI()

@app.get("/scraper/crawlWeb")
async def scrapy_crawl_web(bot_name):
    current_time = datetime.datetime.now()
    start = current_time.timestamp()
    print("--START JOB at " + str(current_time))
    stream = os.popen(
        'scrapy crawl %s 2>&1 & echo "$! `date`" >> ./scrapy_pid.txt' % bot_name)
    output = stream.read()
    print(output)
    current_time = datetime.datetime.now()
    end = current_time.timestamp()
    print("--END JOB at " + str(current_time))
    return "Crawler job took %s minutes and closed at %s" % ((end - start) / 60.00, str(current_time))
I am building a Python container for the first time using VS Code and WSL2. Here is my sample Python code. It runs fine in VS Code's interactive mode because it picks up my default AWS credentials.
import boto3

s3BucketName = 'test-log-files'
s3 = boto3.resource('s3')

def s3move():
    try:
        s3.Object(s3BucketName, "destination/Changes.xlsx").copy_from(
            CopySource=(s3BucketName + "/source/Changes.xlsx"))
        s3.Object(s3BucketName, "source/Changes.xlsx").delete()
        print("Transfer Complete")
    except Exception as e:
        # A bare except would hide the real error (e.g. a credentials failure)
        print("Transfer failed:", e)

if __name__ == "__main__":
    s3move()
The Dockerfile built by VS Code:
# For more information, please refer to https://aka.ms/vscode-docker-python
FROM python:3.8-slim-buster
# Keeps Python from generating .pyc files in the container
ENV PYTHONDONTWRITEBYTECODE=1
# Turns off buffering for easier container logging
ENV PYTHONUNBUFFERED=1
# Install pip requirements
COPY requirements.txt .
RUN python -m pip install -r requirements.txt
RUN pip install boto3
WORKDIR /app
COPY . /app
# Switching to a non-root user, please refer to https://aka.ms/vscode-docker-python-user-rights
RUN useradd appuser && chown -R appuser /app
USER appuser
# During debugging, this entry point will be overridden. For more information, please refer to https://aka.ms/vscode-docker-python-debug
CMD ["python", "S3MoveFiles/S3MoveFiles.py"]
I would like to test this using a Docker container, and it seems I have to pass the AWS credentials to the container. While there are other, probably more secure, ways to do this, I wanted to test the approach of mounting the credentials file as a read-only volume in the docker run command.
docker run -v ~/.aws/credentials:/appuser/home/.aws/credentials:ro image_id
I get the "Transfer failed" message in the Terminal window in VS Code. What am I doing wrong here? Checked several articles but couldn't find any hints. I am not logged in as root.
I'm following a Python/TDD/Docker tutorial by TestDriven.io.
I built a custom image and I want to test it, but I can't get it working (I think - I'm a noob with Docker and Python, so please be patient).
This is the image: registry.gitlab.com/sineverba/warehouse:latest. It works because I deployed to Heroku with success.
I don't want to use docker-compose for testing the final image, so I tried:
docker network create -d bridge flask-tdd-net
export DATABASE_TEST_URL=postgres://postgres:postgres@flask-tdd-net:5432/users_dev
docker run -d --name app -e "PORT=8765" -p 5002:8765 --network=flask-tdd-net registry.gitlab.com/sineverba/warehouse:latest
docker run -d --name db -e POSTGRES_USER=postgres -e POSTGRES_PASSWORD=postgres -e POSTGRES_DB=users_dev -p 5432:5432 --network=flask-tdd-net postgres:12-alpine
I can run a simple
docker exec app python -V
and get the version, for example.
But when I launch
docker exec app python -m pytest "project/tests"
I get (trimmed down; full log here: https://pastebin.com/tYjn65ys)
self = <[AttributeError("'NoneType' object has no attribute 'drivername'") raised in repr()] SQLAlchemy object at 0x7fc74676e7f0>
app = <Flask 'project'>, sa_url = None, options = {}
def apply_driver_hacks(self, app, sa_url, options):
"""This method is called before engine creation and used to inject
driver specific hacks into the options. The `options` parameter is
a dictionary of keyword arguments that will then be used to call
the :func:`sqlalchemy.create_engine` function.
The default implementation provides some saner defaults for things
like pool sizes for MySQL and sqlite. Also it injects the setting of
`SQLALCHEMY_NATIVE_UNICODE`.
"""
> if sa_url.drivername.startswith('mysql'):
E AttributeError: 'NoneType' object has no attribute 'drivername'
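A hedged reading of that traceback: sa_url is None because Flask-SQLAlchemy builds the engine URL from the app's SQLALCHEMY_DATABASE_URI config, which in this project is presumably populated from an environment variable; if the variable is absent inside the container, the config ends up None. Roughly (config key per Flask-SQLAlchemy; the env var name comes from the commands above):
import os

# If DATABASE_TEST_URL is not set *inside the container*, this is None,
# and Flask-SQLAlchemy later fails with
# "'NoneType' object has no attribute 'drivername'".
app.config["SQLALCHEMY_DATABASE_URI"] = os.environ.get("DATABASE_TEST_URL")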
I also tried (after stopping and removing the containers and recreating the DBs)
export DATABASE_TEST_URL=postgres://postgres:postgres@db:5432/users
That is, changing the database name from users_dev to users.
Full repo link: https://github.com/sineverba/flask-tdd-docker/tree/add-gitlab-warehouse
Thank you in advance!
Edit
I changed the env var because the db hostname was wrong. These are the new commands, but I got the same error. I also tried exporting both env vars, without success.
docker network create -d bridge flask-tdd-net
export DATABASE_TEST_URL=postgres://postgres:postgres@db:5432/users
export DATABASE_URL=postgres://postgres:postgres@db:5432/users
docker run -d --name app -e "PORT=8765" -p 5002:8765 --network=flask-tdd-net registry.gitlab.com/sineverba/warehouse:latest
docker run -d --name db -e POSTGRES_USER=postgres -e POSTGRES_PASSWORD=postgres -e POSTGRES_DB=users -p 5432:5432 --network=flask-tdd-net postgres:12-alpine
docker exec app python -m pytest "project/tests"
docker container stop app && docker container rm app && docker container stop db && docker container rm db
Starting example
This is the testdriven.io example, from the GitLab integration (which I don't want to use). The only env var exported for the app is DATABASE_TEST_URL:
image: docker:stable

stages:
  - build
  - test

variables:
  IMAGE: ${CI_REGISTRY}/${CI_PROJECT_NAMESPACE}/${CI_PROJECT_NAME}

build:
  stage: build
  services:
    - docker:dind
  variables:
    DOCKER_DRIVER: overlay2
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_JOB_TOKEN $CI_REGISTRY
    - docker pull $IMAGE:latest || true
    - docker build
      --cache-from $IMAGE:latest
      --tag $IMAGE:latest
      --file ./Dockerfile.prod
      "."
    - docker push $IMAGE:latest

test:
  stage: test
  image: $IMAGE:latest
  services:
    - postgres:latest
  variables:
    POSTGRES_DB: users
    POSTGRES_USER: runner
    POSTGRES_PASSWORD: runner
    DATABASE_TEST_URL: postgres://runner:runner@postgres:5432/users
  script:
    - pytest "project/tests" -p no:warnings
    - flake8 project
    - black project --check
    - isort project/**/*.py --check-only
Solved
The problem was that the variable has to be passed to the container in the docker run command itself (an export in the host shell is not visible inside the container):
docker run -d --name app -e "PORT=8765" -p 5002:8765 -e "DATABASE_TEST_URL=postgres://postgres:postgres@db:5432/users" --network=flask-tdd-net registry.gitlab.com/sineverba/warehouse:latest
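As a side note, docker run can also forward a variable already exported in the host shell by naming it without a value (standard docker run -e behaviour), which keeps the credential out of the command line:
export DATABASE_TEST_URL=postgres://postgres:postgres@db:5432/users
docker run -d --name app -e "PORT=8765" -p 5002:8765 \
  -e DATABASE_TEST_URL \
  --network=flask-tdd-net registry.gitlab.com/sineverba/warehouse:latest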
I would like to make a Flask API run in Docker with a conda environment.
It seems that I can install the conda environment from the .yml file, but I can't run the app when I do docker run.
I just get errors about files that do not exist:
exec source activate flask_env && python app.py failed: No such file or directory
The flask API is based on a simple example:
from flask import Flask, jsonify

app = Flask(__name__)

@app.route('/', methods=['GET'])
def hello_world():
    return jsonify({'message': 'Hello World'})

@app.route('/test', methods=['GET'])
def test():
    return jsonify({'test': 'test'})

if __name__ == "__main__":
    app.run(debug=True)  # remember to set debug to False
The Dockerfile is:
FROM continuumio/miniconda3:latest
WORKDIR /app
# Install myapp requirements
COPY environment.yml /app/environment.yml
RUN conda config --add channels conda-forge \
&& conda env create -n myapp -f environment.yml \
&& rm -rf /opt/conda/pkgs/*
# Copy all files afterwards to avoid rebuilding the conda env each time
COPY . /app/
# activate the myapp environment
ENV PATH /opt/conda/envs/myapp/bin:$PATH
# Launch the API
CMD [ "source activate flask_env && python app.py" ]
And the environment file is:
name: myapp
channels:
- conda-forge
- defaults
dependencies:
- flask=1.0.2
- python=3.7.3
I tried a lot of things but I can't make it work. Did I miss something?
Thank you
See this:
The CMD instruction has three forms:
CMD ["executable","param1","param2"] (exec form, this is the preferred form)
CMD ["param1","param2"] (as default parameters to ENTRYPOINT)
CMD command param1 param2 (shell form)
Here, your CMD is used as parameters to the base image's ENTRYPOINT (see this), so you have to use the second form:
CMD ["param1","param2"] (as default parameters to ENTRYPOINT)
But your command has && in it, so you need a shell even in the JSON (exec) form. For your case it could be the following, FYI:
CMD ["bash", "-c", "source activate flask_env && python app.py"]