I am working on Windows 10 Pro with Git Bash and Docker Desktop. I have a project that runs a Flask application in Docker through Gunicorn.
The entrypoint in the Dockerfile:
ENTRYPOINT ["gunicorn", "-b", ":8080", "main.py"]
When I run the command below:
docker run -p 127.0.0.1:80:8080 jwt-api-test
it shows this error:
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/gunicorn/util.py", line 358, in import_app
mod = importlib.import_module(module)
File "/usr/local/lib/python3.7/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 962, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'main.py'; 'main' is not a package
If I am right, it is related to Gunicorn, which isn't available on Windows.
After googling, it seems WSL is an option. In fact, WSL is already enabled and running for Docker Desktop; info as below:
wsl.exe --list --all --verbose
NAME STATE VERSION
* docker-desktop-data Running 2
docker-desktop Running 2
When I clicked wsl.exe and tried to open Bash, it didn't work: no error, just nothing happened. I also tried Shift + Restart as some instructions suggested, but that didn't work either.
May I ask for your help in getting this Flask application to work? Thanks.
Edited: The structure of main.py:
import os
from flask import Flask

JWT_SECRET = os.environ.get('JWT_SECRET', 'abc123abc1234')
LOG_LEVEL = os.environ.get('LOG_LEVEL', 'INFO')

LOG = _logger()  # _logger() is defined elsewhere in main.py
LOG.debug("Starting with log level: %s" % LOG_LEVEL)

APP = Flask(__name__)

if __name__ == '__main__':
    APP.run(host='127.0.0.1', port=8080, debug=True)
The Dockerfile:
FROM python:stretch
COPY . /app
WORKDIR /app
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
ENTRYPOINT ["gunicorn", "-b", ":8080", "main.py"]
Assuming that the rest of your setup is correct (relative paths, ports, Dockerfile, etc.), the problem is most likely how you pass main.py to gunicorn.
Gunicorn expects the WSGI application in module:variable form, not a filename, so in your case replace "main.py" in your ENTRYPOINT with "main:APP" (see the docs).
Apart from that: even once the container runs, you may find you cannot reach your API. In that case, change your gunicorn binding to "0.0.0.0:8080" in your ENTRYPOINT so the server listens on all interfaces inside the container, not just localhost.
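A minimal sketch of the corrected line, assuming the Flask object in main.py is the module-level APP variable shown in the excerpt above:
ENTRYPOINT ["gunicorn", "-b", "0.0.0.0:8080", "main:APP"]
Here main:APP means "import the module main and serve its APP object"; passing a filename like main.py makes gunicorn try to import a module literally named main.py, which is exactly the ModuleNotFoundError in the traceback.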
Related
I have a Flask API where I do the following:
from flask import Flask, request
import os
import json
import paramiko
import subprocess

app = Flask(__name__)

@app.route("/")
def hello():
    return "Service up in K8S!"

@app.route("/get", methods=['GET'])
def get_ano():
    print("Test liveness")
    return "Pod is alive !"

@app.route("/run", methods=['POST'])
def run_dump_generation():
    rules_str = request.headers.get('database')
    print(rules_str)
    postgres_bin = r"/usr/bin/"
    dump_file = "database_dump.sql"
    os.environ['PGPASSWORD'] = 'XXXXX'
    print('Before dump generation')
    with open(dump_file, "w") as f:
        result = subprocess.call([
            os.path.join(postgres_bin, "pg_dump"),
            "-Fp",
            "-d", "XX",
            "-U", "XX",
            "-h", "XX",
            "-p", "XX"
        ], stdout=f)
    print('After dump generation')

    # Connect to SFTP server
    transport = paramiko.Transport(("X", X))
    transport.connect(username="X", password="X")
    sftp = transport.open_sftp_client()
    remote_file = '/data/database_dump.sql'
    sftp.put(dump_file, remote_file)
    print("---- SFTP object ----", sftp)

    # Close the SFTP connection
    sftp.close()
    transport.close()
    print("---- Dump generated ! ----")
    return "Dump generated and loaded to SFTP"

if __name__ == "__main__":
    app.run(host='0.0.0.0', debug=True)
When I run the app I get the error: AttributeError: partially initialized module 'paramiko' has no attribute 'Transport' (most likely due to a circular import). So the problem comes from paramiko, but I don't understand the error. It doesn't work in Kubernetes but worked in a virtual env that I have locally.
I used the following Dockerfile. Maybe I made a mistake in the installations.
FROM python:3.8-alpine
USER root
WORKDIR /dump_generator_api
COPY requirements.txt ./
RUN python3 -m pip install --upgrade pip
RUN apk add --no-cache --update python3-dev gcc libc-dev libffi-dev && pip3 install --no-cache-dir -r requirements.txt
RUN apk add postgresql
ADD . /dump_generator_api
EXPOSE 5000
CMD ["python", "/dump_generator_api/app.py"]
Here is the requirements.txt:
Flask==2.0.1
paramiko==3.0.0
I don't know where I went wrong. I don't think this is a filename problem:
Mode LastWriteTime Length Name
---- ------------- ------ ----
d----- 08/02/2023 12:59 certs
d----- 08/02/2023 17:52 __pycache__
-a---- 08/02/2023 17:55 2011 app.py
-a---- 08/02/2023 15:45 0 database_dump.sql
-a---- 08/02/2023 17:55 2557 Dockerfile
-a---- 08/02/2023 16:39 1659 flask-postgresql-dump.yml
-a---- 08/02/2023 17:55 300 requirements.txt
Precise error:
Traceback (most recent call last):
File "c:/Users/X/dump_generator_api - Copie (2)/paramiko.py", line 3, in <module>
paramiko = importlib.import_module("paramiko")
File "C:\Users\X\AppData\Local\Programs\Python\Python38-32\lib\importlib\__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 783, in exec_module
File "c:\Users\X\dump_generator_api - Copie (2)\paramiko.py", line 6, in <module>
transport = paramiko.Transport("x", x)
AttributeError: partially initialized module 'paramiko' has no attribute 'Transport' (most likely due to a circular import)
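Note the top frame of that traceback: c:/Users/X/dump_generator_api - Copie (2)/paramiko.py. A local file named paramiko.py shadows the installed library, and importing it produces exactly this "partially initialized module" error. A minimal sketch of the failure mode (the filename is the point, not the contents; the host and port are placeholders):
# paramiko.py -- because of its name, this file gets imported instead of the library
import paramiko  # resolves to this very file, which is only half-initialized

transport = paramiko.Transport(("host", 22))  # AttributeError: partially initialized module
Renaming the local file (and removing any stale paramiko .pyc under __pycache__) lets the real library import again.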
UPDATE / SOLUTION
Per Sytech's answer: I did not realize that the build ran on Ubuntu, which has all the packages, but when Azure deploys to a Linux container, the needed packages are missing.
As in other questions/answers, just add these installs to a startup script that Azure will use, e.g.:
#!/bin/bash
apt-get update
apt-get install tk --yes
python manage.py wait_for_db
python manage.py migrate
gunicorn --bind=0.0.0.0 --timeout 600 app.wsgi --access-logfile '-' --error-logfile '-' &
celery -A app worker -l info --uid=1
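For anyone setting this up fresh, App Service only runs such a script if it is configured as the app's startup command, which can be set in the portal or via the Azure CLI, e.g. (a sketch; the resource group and app name are placeholders):
az webapp config set --resource-group <resource-group> --name <app-name> --startup-file "startup.sh"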
Original Post:
When Azure builds and deploys a Python 3.9 Django/Django REST web app, it fails at startup.
The error in question (full logs below):
2022-03-08T21:13:30.385999188Z File "/tmp/8da0147da65ec79/core/models.py", line 1, in <module>
2022-03-08T21:13:30.386659422Z from tkinter import CASCADE
2022-03-08T21:13:30.387587669Z File "/opt/python/3.9.7/lib/python3.9/tkinter/__init__.py", line 37, in <module>
2022-03-08T21:13:30.387993189Z import _tkinter # If this fails your Python may not be configured for Tk
2022-03-08T21:13:30.388227101Z ImportError: libtk8.6.so: cannot open shared object file: No such file or directory
I have come across other answers saying to make sure tkinter is installed with sudo apt-get install python3-tk, which I have added to the deployment YML file.
It still fails, though. Reverting to the previous code deploys successfully, and the only feature added to the application since then is Celery. I am not sure whether that has anything to do with it.
Am I installing tk/tkinter at the wrong point in the sequence?
When I revert to the previous code and get a successful build/deploy, I SSH into the container, open the Python shell, and try to import the tkinter module manually:
Traceback (most recent call last):
File "<console>", line 1, in <module>
File "/opt/python/3.9.7/lib/python3.9/tkinter/__init__.py", line 37, in <module>
import _tkinter # If this fails your Python may not be configured for Tk
ImportError: libtk8.6.so: cannot open shared object file: No such file or directory
It errors out as expected.
When I run apt-get update && apt-get install python3-tk --yes manually in the container and then go back to the Python shell, there is no error importing tkinter.
This leads me to believe something is not installing in the right place (the virtualenv?), or is being overwritten in the build process.
build:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v2
    - name: Set up Python version
      uses: actions/setup-python@v1
      with:
        python-version: "3.9"
    - name: Create and start virtual environment
      run: |
        python -m venv venv
        source venv/bin/activate
    - name: Install TK dependency
      run: |
        sudo apt-get update
        sudo apt-get install python3-tk
    - name: Install dependencies
      run: pip install -r requirements.txt
    - name: Upload artifact for deployment jobs
      uses: actions/upload-artifact@v2
      with:
        name: python-app
        path: |
          .
          !venv/
App log output below...
2022-03-08T21:13:27.830330743Z Updated PYTHONPATH to ':/opt/startup/code_profiler:/tmp/8da0147da65ec79/antenv/lib/python3.9/site-packages'
2022-03-08T21:13:30.370903021Z Traceback (most recent call last):
2022-03-08T21:13:30.371872470Z File "/tmp/8da0147da65ec79/manage.py", line 22, in <module>
2022-03-08T21:13:30.372648510Z main()
2022-03-08T21:13:30.373176037Z File "/tmp/8da0147da65ec79/manage.py", line 18, in main
2022-03-08T21:13:30.373892773Z execute_from_command_line(sys.argv)
2022-03-08T21:13:30.374862922Z File "/tmp/8da0147da65ec79/antenv/lib/python3.9/site-packages/django/core/management/__init__.py", line 446, in execute_from_command_line
2022-03-08T21:13:30.374880323Z utility.execute()
2022-03-08T21:13:30.378586012Z File "/tmp/8da0147da65ec79/antenv/lib/python3.9/site-packages/django/core/management/__init__.py", line 420, in execute
2022-03-08T21:13:30.378603012Z django.setup()
2022-03-08T21:13:30.378607713Z File "/tmp/8da0147da65ec79/antenv/lib/python3.9/site-packages/django/__init__.py", line 24, in setup
2022-03-08T21:13:30.378612113Z apps.populate(settings.INSTALLED_APPS)
2022-03-08T21:13:30.378679216Z File "/tmp/8da0147da65ec79/antenv/lib/python3.9/site-packages/django/apps/registry.py", line 116, in populate
2022-03-08T21:13:30.378689817Z app_config.import_models()
2022-03-08T21:13:30.378694417Z File "/tmp/8da0147da65ec79/antenv/lib/python3.9/site-packages/django/apps/config.py", line 304, in import_models
2022-03-08T21:13:30.379003533Z self.models_module = import_module(models_module_name)
2022-03-08T21:13:30.381756173Z File "/opt/python/3.9.7/lib/python3.9/importlib/__init__.py", line 127, in import_module
2022-03-08T21:13:30.383257849Z return _bootstrap._gcd_import(name[level:], package, level)
2022-03-08T21:13:30.383423757Z File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
2022-03-08T21:13:30.383857479Z File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
2022-03-08T21:13:30.384148694Z File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
2022-03-08T21:13:30.384836329Z File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
2022-03-08T21:13:30.384850030Z File "<frozen importlib._bootstrap_external>", line 850, in exec_module
2022-03-08T21:13:30.385281052Z File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
2022-03-08T21:13:30.385999188Z File "/tmp/8da0147da65ec79/core/models.py", line 1, in <module>
2022-03-08T21:13:30.386659422Z from tkinter import CASCADE
2022-03-08T21:13:30.387587669Z File "/opt/python/3.9.7/lib/python3.9/tkinter/__init__.py", line 37, in <module>
2022-03-08T21:13:30.387993189Z import _tkinter # If this fails your Python may not be configured for Tk
2022-03-08T21:13:30.388227101Z ImportError: libtk8.6.so: cannot open shared object file: No such file or directory
2022-03-08T21:13:36.193Z ERROR - Container <container_name>_0_fd6a978c for site <container_name> has exited, failing site start
Tkinter is already included in the ubuntu-latest image. No particular setup is needed.
jobs:
  verify-tkinter:
    name: verify-tkinter
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Set up Python version
        uses: actions/setup-python@v1
        with:
          python-version: "3.9"
      - name: show tk version
        run: |
          python -c "import tkinter;print(tkinter.TkVersion)"
If this error is occurring after deployment, you need to install tkinter in your deployment environment, which is separate from GitHub Actions runs.
If your server is running Ubuntu 20, make sure the tk package is installed, which provides the needed libtk8.6.so file:
apt install -y tk
I came across this error because of a simple mistake: the IDE had added "from turtle import up" to my .py file and I didn't notice. Since turtle imports tkinter, that one line triggers the same libtk8.6.so error.
This is such a basic question, I'm sorry. I installed django-parsley with Poetry (poetry add django-parsley). It's clearly listed in my pyproject.toml file.
In my Django project, in forms.py, I have a line of code that imports a module from parsley: from parsley.decorators import parsleyfy
However, when I try to run python manage.py runserver, I get the following error:
from parsley.decorators import parsleyfy
ModuleNotFoundError: No module named 'parsley'
I also tried adding 'parsley' to my INSTALLED_APPS in settings.py. That gives me this error (which is maybe due to not adding it globally with pip install?):
...some more errors...
File "C:\Program Files\Python310\lib\importlib\__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1004, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'parsley'
What do I need to do to be able to import it in a python file in my project?
I figured it out. It's actually a VSCode issue: normally, VSCode automatically identifies the right virtual environment for the project (in this case, Poetry's auto-created project-specific venv).
However, in this project it didn't switch over. To fix it, I ran the Python: Select Interpreter command and pointed the interpreter at the right project's venv. VSCode then recognized the site-packages folder and the import worked normally.
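If the right environment doesn't appear in the picker, Poetry can print the venv's location so you can select it manually (a sketch, assuming a default Poetry setup):
# From the project directory: print the path of the venv Poetry created
poetry env info --path
Paste that path into Python: Select Interpreter > "Enter interpreter path..." and VSCode will use it.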
I'm using Flask to build a project hosted on OVH. Unfortunately it doesn't work.
Here is my app.py:
from flask import Flask, render_template, request, make_response

app = Flask(__name__)

@app.route('/')
@app.route('/test')
def test():
    return render_template('test.html')

if __name__ == '__main__':
    app.run(debug=True, host='0.0.0.0')
My requirement.txt:
click==7.1.2
Flask==1.1.4
itsdangerous==1.1.0
Jinja2==2.11.3
MarkupSafe==1.1.1
Werkzeug==1.0.1
My tree structure:
www
-templates
--- index.html
-requirement.txt
-my_py3_env
---pyvenv.cfg
---lib
-----python3.5
-------site-packages
---------flask
---bin
-app.py
-__pycache__
However, I get this output:
Traceback (most recent call last):
File "/usr/share/passenger/helper-scripts/wsgi-loader.py", line 369, in <module>
app_module = load_app()
File "/usr/share/passenger/helper-scripts/wsgi-loader.py", line 76, in load_app
return imp.load_source('passenger_wsgi', startup_file)
File "/usr/lib/python3.5/imp.py", line 172, in load_source
module = _load(spec)
File "<frozen importlib._bootstrap>", line 693, in _load
File "<frozen importlib._bootstrap>", line 673, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 673, in exec_module
File "<frozen importlib._bootstrap>", line 222, in _call_with_frames_removed
File "/home/deposec/www/app.py", line 2, in <module>
import flask
ImportError: No module named 'flask'
Does anyone know why?
EDIT: I have added the following .platform.app.yaml:
name: app
type: python:3.5

web:
  commands:
    start: "gunicorn -b $PORT project.wsgi:application"
  locations:
    "/":
      root: ""
      passthru: true
      allow: false
    "/static":
      root: "static/"
      allow: true

hooks:
  build: |
    pip install -r requirements.txt
    pip install -e .
    pip install gunicorn

mounts:
  tmp:
    source: local
    source_path: tmp
  logs:
    source: local
    source_path: logs

disk: 512
However I still get No module named 'flask'... Do I also need a wsgi.py somewhere?
The documentation you are pointing to is for the OVHcloud "Web PaaS powered by Platform.sh" offer, not the Cloud Web one. They are two different products.
This means your .platform.app.yaml is ignored on Cloud Web.
To install your Python dependencies on Cloud Web, the only available documentation is here, and it seems to be available only in French.
You need to connect to your Cloud Web instance through SSH to run your pip install command.
It looks like:
# 1 - Connect to your Cloud Web through SSH
# You can find these infos in OVH Manager > Web > Your cloudweb > "FTP - SSH"
ssh <cloudweb_username>@sshcloud.cluster024.hosting.ovh.net -p <your port>
# 2 - Setup a Python virtualenv
pip3 install --user virtualenv
export PATH=$PATH:~/.local/bin
echo "export PATH=$PATH:~/.local/bin" >> ~/.profile
# If "www" is your root dir, otherwise adjust it:
cd www/
virtualenv venv
source venv/bin/activate
pip install -r requirements.txt
I am trying to containerize my Airflow setup. I've been tasked with keeping the environment the same, just moving it into a Docker container. We currently have Airflow and all our dependencies installed within an Anaconda environment, so I created a custom Docker image that installs Anaconda and creates my environment. The problem is that our current environment uses systemd services to start Airflow, whereas Docker needs to run it via the airflow CLI ("airflow webserver/scheduler/worker"), and when I run it that way I get an error after starting the scheduler.
Our DAGs require a custom repo that helps communicate with our database servers. Within that repo we use pathlib to get the path of a config file and pass it to configparser.
Basically like this:
import configparser
from pathlib import Path
config = configparser.ConfigParser()
p = Path(__file__)
p = p.parent
config_file_name = 'comms.conf'
config.read(p.joinpath('config', config_file_name))
This throws the following error for all my DAGs in Airflow:
Broken DAG: [/opt/airflow/dags/example_folder/example_dag.py] 'PosixPath' object is not iterable
On the command line the error is:
[2021-01-11 19:53:13,868] {dagbag.py:259} ERROR - Failed to import: /opt/airflow/dags/example_folder/example_dag.py
Traceback (most recent call last):
File "/opt/anaconda3/envs/airflow/lib/python3.7/site-packages/airflow/models/dagbag.py", line 256, in process_file
m = imp.load_source(mod_name, filepath)
File "/opt/anaconda3/envs/airflow/lib/python3.7/imp.py", line 172, in load_source
module = _load(spec)
File "<frozen importlib._bootstrap>", line 696, in _load
File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 728, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/opt/airflow/example_folder/example_dag.py", line 8, in <module>
dag = Dag()
File "/opt/airflow/dags/util/dag_base.py", line 27, in __init__
self.comms = get_comms(Variable.get('environment'))
File "/opt/airflow/repository/repo_folder/custom_script.py", line 56, in get_comms
config = get_config('comms.conf')
File "/opt/airflow/repository/repo_folder/custom_script.py", line 39, in get_config
config.read(p.joinpath('config', config_file_name))
File "/opt/anaconda3/envs/airflow/lib/python3.7/site-packages/backports/configparser/__init__.py", line 702, in read
for filename in filenames:
TypeError: 'PosixPath' object is not iterable
I was able to replicate this behavior outside of the Docker container, so I don't think Docker has anything to do with it. It must be a difference between how Airflow runs as a systemd service and how it runs via the CLI?
Here is my airflow service file that works:
[Unit]
Description=Airflow webserver daemon
After=network.target postgresql.service mysql.service redis.service rabbitmq-server.service
Wants=postgresql.service mysql.service redis.service rabbitmq-server.service
[Service]
EnvironmentFile=/etc/sysconfig/airflow
User=airflow
Group=airflow
Type=simple
ExecStart=/opt/anaconda3/envs/airflow/bin/airflow webserver
Restart=on-failure
RestartSec=5s
PrivateTmp=true
[Install]
WantedBy=multi-user.target
Here is the Airflow environment file that I'm using within the service file. Note that I needed to export these env variables locally to get Airflow to run up to this point in the CLI. Also note that the custom repos live in the /opt/airflow directory.
AIRFLOW_CONFIG=/opt/airflow/airflow.cfg
AIRFLOW_HOME=/opt/airflow
PATH=/bin:/opt/anaconda3/envs/airflow/bin:/opt/airflow/etl:/opt/airflow:$PATH
PYTHONPATH=/opt/airflow/etl:/opt/airflow:$PYTHONPATH
My airflow config is default, other than the following changes:
executor = CeleryExecutor
sql_alchemy_conn = postgresql+psycopg2://airflow:airflow@192.168.x.x:5432/airflow
load_examples = False
logging_level = WARN
broker_url = amqp://guest:guest@127.0.0.1:5672/
result_backend = db+postgresql://airflow:airflow@192.168.x.x:5432/airflow
catchup_by_default = False
The installed configparser version is configparser==3.5.3.
My conda environment uses Python 3.7 and the Airflow version is 1.10.14. It's running on a CentOS 7 server. If anyone has any ideas that could help, I would appreciate it!
Edit: If I change the line config.read(p.joinpath('config', config_file_name)) to point directly to the config, like config.read('/opt/airflow/repository/repo_folder/config/comms.conf'), it works fine. So it has something to do with how configparser handles the pathlib output? But it doesn't have a problem with this when Airflow runs via the systemd service?
Edit 2: I can also wrap the pathlib object in str() and it works: config.read(str(p.joinpath('config', config_file_name))). I just want to know why this works fine with the systemd service... I'm afraid other things are going to break.
The path to the config file is computed wrongly. That is because of the following lines:
# filename: custom_script.py
p = p.parent
confpath = p.joinpath('config', config_file_name)
confpath evaluates to /opt/airflow/repository/repo_folder/config/comms.conf.
The path you shared, where the configuration file lies, is /opt/airflow/repository/repo_folder/conn.conf.
You need to resolve the config file relative to repo_folder by constructing its path from the folder custom_script.py is in:
# filename: custom_script.py
from os.path import dirname
from pathlib import Path

p = Path(dirname(__file__))
p = p.parent
confpath = p.joinpath(config_file_name)
I was able to fix this issue by uninstalling configparser and installing a newer version:
configparser==5.0.1
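For context, this matches the traceback above: in the old backports configparser 3.5.x, read() only special-cases plain strings, so a single PosixPath falls through to the "for filename in filenames" loop and fails, while newer versions (such as the 5.0.1 above, like the modern stdlib module) accept path-like objects directly. A version-proof sketch, assuming the config layout from the question:
import configparser
from pathlib import Path

config = configparser.ConfigParser()
conf_path = Path(__file__).parent.joinpath('config', 'comms.conf')

# Wrapping in str() works on both the old backport and the stdlib module,
# because read() then only ever sees a plain string path.
config.read(str(conf_path))
As for the systemd vs. CLI difference: that would only happen if the two runs resolve different site-packages on sys.path, so it is worth checking which configparser each one actually imports.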