Python web application with OpenCV in Heroku

I am building a web application that uses OpenCV in its back end. I built the application on Ubuntu (and tried it on Windows, too) and it works fine. Currently, I am trying to configure OpenCV to work on Heroku. Since OpenCV cannot be installed with pip, I read about using Heroku buildpacks, which allow the server environment to be customized.
The following is my attempt to test two of the OpenCV buildpacks.
I built a simple web server with Flask that tries to import OpenCV:
# hello.py
import os
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    text = ''
    try:
        import cv2  # succeeds only if the OpenCV buildpack did its job
        text = 'success'
    except ImportError:
        text = 'fail'
    return text + ' to load openCV'

if __name__ == "__main__":
    port = int(os.environ.get("PORT", 5000))
    app.run(host='0.0.0.0', port=port)
The above code should return either success or fail, depending on whether OpenCV can be loaded.
Then I configured Heroku to use the heroku multi buildpack by running the following command:
heroku buildpacks:set https://github.com/ddollar/heroku-buildpack-multi
In the .buildpacks file (required by the multi buildpack) I listed the https://github.com/heroku/heroku-buildpack-python and https://github.com/slobdell/heroku-buildpack-python-opencv-scipy buildpacks.
The first one compiles a Python application and installs other modules (e.g., Flask) through pip. The second buildpack is the one that is supposed to provide OpenCV.
After all this, the application still did not work: I got Heroku's "Application error" page (screenshot omitted).
I tried another buildpack (https://github.com/diogojc/heroku-buildpack-python-opencv-scipy) but got the same result.
My questions are:
What is wrong with the steps I followed?
How should I use OpenCV within my application on Heroku? Is an import statement enough, or do I need some other commands?

I was able to install it by doing the following:
cd /path/to/your/dir && git init
heroku create MYAPP (start from scratch)
heroku config:add BUILDPACK_URL=https://github.com/ddollar/heroku-buildpack-multi.git --app MYAPP
Create a .buildpacks file that contains:
https://github.com/heroku/heroku-buildpack-python
https://github.com/diogojc/heroku-buildpack-python-opencv-scipy#cedar14
git add . && git commit -m 'MESSAGE' && git push heroku master
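For reference, a minimal sketch of the other files this push assumes are present in the repository; the Procfile entry and the bare Flask line are assumptions, not part of the original answer:

requirements.txt:
Flask

Procfile:
web: python hello.py

(The hello.py from the question already reads $PORT itself, so running it directly from the Procfile is enough for a test.)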

For anyone seeing this today and having the same issue: switch opencv-python in your requirements.txt to opencv-python-headless. The headless build does not depend on the GUI shared libraries that are missing on Heroku, which sidesteps the problematic library file.
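A minimal sketch of the corresponding requirements.txt change; the other packages are placeholders for whatever your app already pins:

Flask
gunicorn
opencv-python-headless   # replaces opencv-python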

The following steps should solve the OpenCV problem you are facing.
Add the apt buildpack by pasting https://github.com/heroku/heroku-buildpack-apt into the buildpacks section of the dashboard (screenshot: Dashboard -> Settings -> Add buildpack).
Then add an Aptfile to the base directory of your GitHub repository containing:
libsm6
libxrender1
libfontconfig1
libice6
(one library per line; see the example GitHub link)
Now build and deploy and you are ready to go!
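If you prefer the CLI to the dashboard, a rough equivalent sketch (it assumes the app already has a Heroku git remote; the buildpack order shown, apt first and Python second, is an assumption):

heroku buildpacks:add --index 1 https://github.com/heroku/heroku-buildpack-apt
heroku buildpacks:add heroku/python
git add Aptfile && git commit -m "Add Aptfile with OpenCV system libraries"
git push heroku master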

Related

How to run python code from linux via a docker containing a specific python version

I have a Linux server on which I want to be able to run some Python scripts. To do so, I created a Docker image of Python (3.6.8) with the specific dependencies needed to run my code.
I am new to the Linux command line and need help writing a command that runs a given Python script using my Docker image (Python 3.6.8).
My server's directory structure looks like this (listing omitted):
My Docker image is named geomatique_python and is located in docker_image.
As for the structure of the code itself, I am starting from scratch and am looking for hints and advice.
Thanks
I'm very much for an "all the things in Docker" approach. Regarding your mention of having specific versions set in stone, the declarative nature of Docker is great for that. You can extend an official Python Docker image with your libraries, then bind-mount your script folders into the container when you run it. A minimal project might look like:
.
├── app.py
└── Dockerfile
My app.py is a simple requests script:
#!/usr/bin/env python3
import requests

r = requests.get('https://api.github.com')
if r.status_code == 200:
    print("HTTP {}".format(r.status_code))
My Dockerfile contains the runtime dependencies for my app:
FROM python:3.6-slim
RUN python3 -m pip install requests
Note: I'm extending the official python image in this example.
After building the Docker image (i.e. docker build --rm -t so:57697538 .) you can run a container from it, bind-mounting the directory that contains the scripts and executing one of them:
docker run --rm -it -v ${PWD}:/src --entrypoint python3 so:57697538 /src/app.py
Admittedly, for Python, virtualenv / virtualenvwrapper can be convenient; however, they are very much Python-only, whereas Docker is language agnostic.
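Applied to the question, a hedged sketch of running one of the server's scripts with the already-built image (the image name geomatique_python comes from the question; the script path and file name are placeholders):

docker run --rm -v /path/to/scripts:/src --entrypoint python3 geomatique_python /src/my_script.py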

How to create an environment/application on AWS Elastic Beanstalk for a Python 2.7.14 program using the AWS EB console?

I am completely new to AWS, and to macOS too. However, I am trying to create simple Python apps on AWS Elastic Beanstalk. I got their default demo app (the one offered as an option while creating an environment/application) working. I followed this tutorial, and the code works locally (Python 2.7.14).
But when I upload this application.py and requirements.txt via a zip, the deployed application shows health 'Ok' but returns 'Internal server error' when I load the application URL.
I don't know how to debug, or even what to debug, as the code is quite straightforward; it is most likely an environment incompatibility issue.
So I'm looking everywhere for how to make my AWS EB environment Python 2.7 instead of the Python 3.4 it gives me, for both the preconfigured Docker Python and the preconfigured Python platforms.
I am confused. How do I make my AWS EB environment/application use Python 2.7.14, the version I use locally and that works well?
For reference, here is my code:
from flask import Flask

# print a nice greeting.
def say_hello(username="World"):
    return '<p>Hello %s!</p>\n' % username

# some bits of text for the page.
header_text = '''
    <html>\n<head> <title>EB Flask Test</title> </head>\n<body>'''
instructions = '''
    <p><em>Hint</em>: This is a RESTful web service! Append a username
    to the URL (for example: <code>/Thelonious</code>) to say hello to
    someone specific.</p>\n'''
home_link = '<p>Back</p>\n'
footer_text = '</body>\n</html>'

# EB looks for an 'application' callable by default.
application = Flask(__name__)

# add a rule for the index page.
application.add_url_rule('/', 'index', (lambda: header_text +
    say_hello() + instructions + footer_text))

# add a rule when the page is accessed with a name appended to the site URL.
application.add_url_rule('/<username>', 'hello', (lambda username:
    header_text + say_hello(username) + home_link + footer_text))

# run the app.
if __name__ == "__main__":
    # Setting debug to True enables debug output. This line should be
    # removed before deploying a production app.
    application.debug = True
    application.run()
requirements.txt
click==6.7
Flask==1.0.2
itsdangerous==0.24
Jinja2==2.10
MarkupSafe==1.0
Werkzeug==0.14.1
I'm running into path issues while installing the AWS EB CLI, so please restrict your answers to using the console. Thank you.
If you need a different platform version for your application, you need to specify it using the mechanism Elastic Beanstalk provides, i.e. the config.yml file. Create a config.yml file with your environment/platform requirements and place it under the .elasticbeanstalk folder at the base of your application.
You can see the config.yml setup and format here
And there are many supported platforms.
"how do I make my aws eb environment/application a python2.7.14"
Based on the above document, you need to specify the following in the config.yml file:
global:
  default_platform: 64bit Amazon Linux 2018.03 v2.7.1 running Python 2.7
Different config files help customize the EB config/environment. More here.
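For reference, a minimal sketch of a full .elasticbeanstalk/config.yml; the application name, environment name, and region below are placeholders, not values from the question:

branch-defaults:
  default:
    environment: my-flask-app-env
global:
  application_name: my-flask-app
  default_platform: 64bit Amazon Linux 2018.03 v2.7.1 running Python 2.7
  default_region: us-east-1
  sc: git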

Serving static files from a Python Alpine image: no /etc/mime.types file found

I have a Python Django application that I want to deploy through docker-compose. I used the blog post called "A Production-ready Dockerfile for Your Python/Django App" to set up my files.
That blog post, however, assumes you use a third party to host your static files. Since this isn't the case for me, I changed the CMD command from:
CMD ["/venv/bin/uwsgi", "--http-auto-chunked", "--http-keepalive"]
to:
CMD ["/venv/bin/uwsgi", "--http-auto-chunked", "--http-keepalive", "--static-map", "/static=/code/base/static"]
This works more or less; however, I now receive the following warning when I start my container:
backend_1 | !!! no /etc/mime.types file found !!!
This makes my solution unworkable, since all files are interpreted as text/plain. Is there a simple solution to fix this?
You need the mailcap package in your Alpine container. Add the line below to your Dockerfile:
RUN apk add mailcap
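In context, a sketch of where that line sits in the Dockerfile from the blog post; the base image tag and the surrounding steps are placeholders, only the apk line is the actual fix:

FROM python:3-alpine

# mailcap provides /etc/mime.types, which uWSGI needs for --static-map
RUN apk add --no-cache mailcap

# ... the rest of your existing build steps (venv, pip install, COPY, etc.)
CMD ["/venv/bin/uwsgi", "--http-auto-chunked", "--http-keepalive", "--static-map", "/static=/code/base/static"]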

How do I download the code for a specific google cloud "service"

This doc shows the command to download the source of an app I have in App Engine:
appcfg.py -A [YOUR_APP_ID] -V [YOUR_APP_VERSION] download_app [OUTPUT_DIR]
That's fine, but I also have services that I deployed. Using this command I only seem to be able to download the "default" service. I also deployed "myservice01" and "myservice02" to App Engine in my GCP project. How do I specify which service's code to download?
I tried this command as suggested:
appcfg.py -A [YOUR_APP_ID] -M [YOUR_MODULE] -V [YOUR_APP_VERSION] download_app [OUTPUT_DIR]
It didn't fail, but this is the output I got (and it didn't download anything):
01:30 AM Host: appengine.google.com
01:30 AM Fetching file list...
01:30 AM Fetching files...
Now, as a test, I tried it with the name of a module I know doesn't exist, and I got this error:
Error 400: --- begin server output ---
Version ... of Module ... does not exist.
So I at least know it's successfully finding the module and version, but it doesn't seem to want to download them.
Also specify the module (services used to be called modules):
-M MODULE, --module=MODULE
Set the module, overriding the module value from
app.yaml.
So something like:
appcfg.py -A [YOUR_APP_ID] -M [YOUR_MODULE] -V [YOUR_APP_VERSION] download_app [OUTPUT_DIR]
Side note: YOUR_APP_VERSION should really read YOUR_MODULE_VERSION :)
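For example, with one of the service names from the question (the app ID, version, and output directory are placeholders):

appcfg.py -A my-gcp-project -M myservice01 -V v1 download_app ./myservice01-src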
Of course, the answer assumes the app code downloads were not permanently disabled from the Console's GAE App Settings page:
Permanently prohibit code downloads
Once this is set, no one, including yourself, will ever be able to
download the code for this application using the appcfg download_app
command.

Heroku Python tutorial won't run locally on Windows

I'm trying to run the Python Heroku tutorial and it won't work on Windows. This is from this repository.
I posted this previously, but I have since been able to get a more descriptive error message. It should be said that I've installed Postgres.
Furthermore, I can't run it locally using the method described in the Git repository: both the createdb and foreman commands fail, despite foreman being installed.
django.core.exceptions.ImproperlyConfigured: 'django_postgrespool' isn't an available database backend.
Try using 'django.db.backends.XXX', where XXX is one of:
u'base', u'mysql', u'oracle', u'postgresql_psycopg2', u'sqlite3'
Error was: DLL load failed: The specified module could not be found.
Looks like Python does not know what django-postgrespool is.
Perhaps it did not install properly; check the output of pip install -r requirements.txt.
DATABASES['default']['ENGINE'] = 'django_postgrespool'
in your settings.py is what this is referring to. I still don't know why it causes a problem; I've installed psycopg2 and even tried pip install django-postgrespool. It worked once I commented out:
DATABASES['default'] = dj_database_url.config()
DATABASES['default']['ENGINE'] = 'django_postgrespool'
This lets me run the app locally using:
heroku local web -f Procfile.windows
I was following the same tutorial, and what worked for me was changing this line in the settings.py file:
# Enable Connection Pooling (if desired)
DATABASES['default']['ENGINE'] = 'django_postgrespool'
To this:
# Enable Connection Pooling (if desired)
DATABASES['default']['ENGINE'] = 'django.db.backends.postgresql_psycopg2'
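Put together, the relevant part of settings.py ends up looking roughly like this; a sketch based on the tutorial's settings, which already define DATABASES and use dj_database_url:

import dj_database_url

# Parse DATABASE_URL from the environment (Heroku sets it in production).
DATABASES['default'] = dj_database_url.config()

# Use the stock psycopg2 backend instead of django_postgrespool,
# whose DLL fails to load on Windows.
DATABASES['default']['ENGINE'] = 'django.db.backends.postgresql_psycopg2'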
