I recently started using streamlit, which is definitely an awesome library for dashboarding and visualizing machine learning applications.
However, my deployment workflow is currently Docker and Heroku, and I couldn't find simple documentation on how to deploy a streamlit app hosted within a Docker container onto Heroku. So I wanted to document the simple approach I found here.
After a little bit of research and playing around with the code, this is the simplest way I found that works:
Create a .streamlit folder where a config.toml file will live.
Within config.toml, write the following:
[browser]
serverAddress = '0.0.0.0'
Build your Dockerfile with whatever you need in it; at the end, simply add this command:
CMD streamlit run --server.port $PORT app.py
For example, here is my complete Dockerfile, based on the code example that streamlit currently provides:
FROM continuumio/miniconda3
WORKDIR /home/app
# system tools (-y keeps the build non-interactive)
RUN apt-get update
RUN apt-get install -y nano unzip
RUN apt-get install -y curl
# install the Deta CLI
RUN curl -fsSL https://get.deta.dev/cli.sh | sh
# Python dependencies
RUN pip install boto3 pandas gunicorn streamlit
COPY . /home/app
# Heroku sets $PORT at runtime; bind Streamlit to it
CMD streamlit run --server.port $PORT app.py
In development, simply run your container with a PORT environment variable and a port mapping, keeping in mind that all options must come before the image name:
docker run -it -p HOST_PORT:CONTAINER_PORT -e PORT=CONTAINER_PORT MY_DOCKER_IMAGE
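For example, with the image tagged my-streamlit-app (a made-up name) and Streamlit's default port 8501, that could look like this:
docker build -t my-streamlit-app .
docker run -it -p 8501:8501 -e PORT=8501 my-streamlit-app
The app should then be reachable at http://localhost:8501.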
If everything works correctly locally, then you can follow this tutorial to deploy your container to Heroku:
Deploy to Heroku
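For reference, deploying via the Heroku container registry boils down to roughly these CLI commands (the app name below is just a placeholder):
heroku login
heroku container:login
heroku create my-streamlit-app
heroku container:push web -a my-streamlit-app
heroku container:release web -a my-streamlit-app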
Related
I've created a Python API using Flask. If I try to run it on my local Windows desktop, it works perfectly using the code below:
I'm trying to put this API inside a Docker container and call it using the same script as above. Below is the Dockerfile:
FROM python:3
RUN pip install flask
RUN pip install flask_restful
RUN pip install sympy
WORKDIR /app
COPY . .
EXPOSE 8080
CMD ["python", "app/main.py"]
I'm running this container using this: docker run -p 8080:8080 searchitens
But I don't know exactly what I have to change in my test script to call this API. I'm getting this response:
Can anyone help me?
I've tried to EXPOSE port 8080 and also to modify the test script with base = 'http://127.0.0.1:8080/' and base = 'http://localhost:8080/'.
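A detail worth checking (the script itself isn't shown above) is how the Flask app binds inside the container: the published port only works if the app listens on 0.0.0.0 rather than the default 127.0.0.1, roughly like this at the end of app/main.py:
# hypothetical ending of app/main.py; the key point is host='0.0.0.0'
if __name__ == '__main__':
    app.run(host='0.0.0.0', port=8080)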
I have a Docker application that I build and run with:
docker build -t swagger_server .
docker run -p 8080:8080 swagger_server
The Dockerfile looks like this:
FROM python:3-alpine
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY requirements.txt /usr/src/app/
RUN pip3 install --no-cache-dir -r requirements.txt
COPY . /usr/src/app
EXPOSE 8080
ENTRYPOINT ["python3"]
CMD ["-m", "swagger_server"]
This in and of itself is fairly simple, but I'm struggling with deploying this Dockerfile to Heroku. I have connected Heroku to auto-deploy on every push, but haven't configured anything beyond that so far. It builds and runs the application successfully, but I think it only runs the Python application without exposing any ports.
Heroku has a documentation page on their website; however, I don't understand how to specify ports or build tags in heroku.yml.
To give some more context: I want to deploy a Python/Flask application that was auto-generated by swagger-codegen. I can access the API locally, whether I run it within a conda environment or with Docker.
Can somebody explain to me how that should work?
When you deploy with Docker on Heroku, an EXPOSE instruction in the Dockerfile won't be respected; the port to be exposed is determined automatically by Heroku.
Your app must listen on $PORT (an environment variable set by Heroku).
Another thing to note: when you start the swagger server, you must allow traffic from all IPs, otherwise it will only be reachable on localhost (and notice that this localhost is the container itself).
import os
app.run(host="0.0.0.0", port=int(os.getenv("PORT", 8080)))  # cast to int; the default is only for local runs
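As for heroku.yml itself: you don't specify ports there at all. A minimal heroku.yml for a Dockerfile in the repository root looks roughly like this:
build:
  docker:
    web: Dockerfile
The app's stack also has to be set to container (heroku stack:set container) so that Heroku builds from heroku.yml instead of a buildpack.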
I'm developing some Python microservices with gRPC, and I'm using Docker for the Cassandra database and the microservices. Is there a way to set up reload-on-change within docker-compose?
I'm guessing that first I need the code mounted as a volume, but I don't see a way to reload a gRPC server on change the way, for example, Flask does.
We use watchdog[watchmedo] with our gRPC services and Docker.
Install watchdog, or add it to your requirements.txt file:
python -m pip install "watchdog[watchmedo]"
Then, in your docker-compose.yml, add watchmedo auto-restart --recursive --pattern="*.py" --directory="/usr/src/app/" python -- -m app as the command for your container, where --directory is the directory your app lives in inside the Docker container and python -- -m app launches the module that starts your gRPC server. In this example the file that starts the server is called app.py:
app:
  build:
    context: ./app/
    dockerfile: ./Dockerfile
    target: app
  command: watchmedo auto-restart --recursive --pattern="*.py" --directory="/usr/src/app/" python -- -m app
  volumes:
    - ./app/:/usr/src/app/
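Note that watchmedo has to be available inside the container as well, not just on the host, so watchdog[watchmedo] should be part of the image, for example via the requirements.txt the Dockerfile installs (the file layout here is an assumption):
# requirements.txt installed inside the image (hypothetical)
watchdog[watchmedo]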
I'm trying to create a simple web application container on Ubuntu (WSL2) with the help of Docker. So I've built my container by creating a my-simple-webapp folder, and within that folder I've created the Dockerfile and app.py files:
Dockerfile
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y python python-pip
RUN pip install flask
COPY app.py /opt/
ENTRYPOINT FLASK_APP=/opt/app.py flask run --host=0.0.0.0 --port=8080
app.py
import os
from flask import Flask

app = Flask(__name__)

@app.route("/")
def main():
    return "Welcome!"

@app.route('/how are you')
def hello():
    return 'I am good, how about you?'

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
When I run the command docker build ./my-simple-webapp, it works without error. However, when I use my browser to connect to my container by typing 172.17.0.2:8080, 0.0.0.0:8080 or localhost:8080, the connection times out.
Resource: https://github.com/mmumshad/simple-webapp-flask
If all you run is docker build... then you still need to start your container with docker run....
You can open the docker dashboard (in your Windows tray) to see if your container is actually running.
To actually run your app you need to start a container. First, build the image:
docker build -t simple-webapp-flask .
Then start a container from the image, mapping port 8080 on your host to port 8080 in the container:
docker run -p 8080:8080 simple-webapp-flask
If you want to deploy your Flask application, you need to choose one of the following options:
https://flask.palletsprojects.com/en/1.1.x/deploying/
The way you are trying to do it can be used only for development purposes.
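For instance, one of the options listed there is a standalone WSGI server such as gunicorn; a rough sketch of the relevant Dockerfile lines (assuming the Flask instance in /opt/app.py is named app, as in the code above) could look like this:
# sketch only: swap the development server for gunicorn
RUN pip install gunicorn
WORKDIR /opt
CMD ["gunicorn", "--bind", "0.0.0.0:8080", "app:app"]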
I am creating a RESTful API using Python, Flask and Docker. I have already created the image and run the container.
FROM python:2.7
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
ENTRYPOINT ["python"]
CMD ["app.py"]
When I run docker run -p 5000:5000 flaskrestful and go to localhost:5000, I get the expected response:
{'hello': 'world'}
After editing the method that returns the JSON above, nothing changes. I want the server in the Docker container to automatically reload after the project files change on the host machine.
Is there any way to do that? I have tried with volumes, but to edit anything inside I need root privileges, and I want to avoid that.
All I needed to do was run the container with several flags:
docker run -it --name containerName --mount type=bind,source=host_directory,target=container_directory -p host_port:container_port image_name
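For the Flask project above, that could look like the following; the container name is a placeholder, and /app is the bind-mount target because it is the WORKDIR in the Dockerfile:
docker run -it --name flaskrestful-dev \
  --mount type=bind,source="$(pwd)",target=/app \
  -p 5000:5000 flaskrestful
Keep in mind that the Flask development server only restarts on file changes when its reloader is enabled (for example app.run(debug=True) or FLASK_DEBUG=1); the bind mount only makes the edited files visible inside the container.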