I'm trying to deploy my Flask app in a Docker container. When I run the app on my local machine, the log file is written and updated normally. After deploying it in Docker, the app runs normally but the log file never appears.
This is my Flask code:
Python version: 3.7, Flask version: 1.1.2
from elasticapm.contrib.flask import ElasticAPM
from elasticapm.handlers.logging import LoggingHandler
from flask import Flask, jsonify, request
import logging
from logging.handlers import TimedRotatingFileHandler

formatter = logging.Formatter("[%(asctime)s - %(levelname)s] %(message)s")
handler = TimedRotatingFileHandler('./log/apm-test.log', when='midnight', interval=1, encoding='utf8')
handler.suffix = '%Y_%m_%d'
handler.setFormatter(formatter)

logger = logging.getLogger()
logger.setLevel(logging.INFO)
logger.addHandler(handler)

server_domain = '0.0.0.0'
app = Flask(__name__)
app.config['ELASTIC_APM'] = {
    # Set required service name. Allowed characters:
    # a-z, A-Z, 0-9, -, _, and space
    'SERVICE_NAME': 'Master Node',
    # Use if APM Server requires a token
    'SECRET_TOKEN': '',
    # Set custom APM Server URL (default: http://localhost:8200)
    'ELASTIC_APM_SERVER_URL': 'http://elastic-agent:8200',
    'ENVIRONMENT': 'production'
}
apm = ElasticAPM(app)

@app.route('/', methods=['POST'])
def index():
    app.logger.info("info log -- POST / 200")
    return 'Hello!'

@app.route('/route1', methods=['GET'])
def passdata():
    a = {'test01': 1, 'test02': 2}
    app.logger.info("info log -- GET /route1 200")
    return jsonify(a)

@app.route('/route2', methods=['GET'])
def postdata():
    b = {'test003': 3, 'test004': 4}
    app.logger.info('info: "get /route2 HTTP1.1" --200')
    return jsonify(b)

if __name__ == '__main__':
    apm_handler = LoggingHandler(client=apm.client)
    apm_handler.setLevel(logging.INFO)
    app.logger.addHandler(apm_handler)
    app.run(host=server_domain, port=5000)
This is my Dockerfile and command:
Docker Desktop (Windows) version: 4.13
Dockerfile
FROM python:3.7-slim
COPY ./apm-test.py /usr/share/apm-test.py
RUN pip install elastic-apm flask blinker
ENV ELASTIC_APM_SERVER_URL='http://elastic-agent:8200'
EXPOSE 5000
CMD ["python", "./apm-test.py"]
Docker command
docker run -d --name apm-client00 apm-client00
I tried mounting the log folder (and even the exact log file) from my local machine into the same relative path inside the container that the script writes to locally, but the result is the same.
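One possible culprit is the relative ./log path combined with the container's working directory: the python:3.7-slim image does not appear to set a WORKDIR, so ./log resolves against /, and TimedRotatingFileHandler does not create a missing directory. A minimal sketch of one way to make the location explicit (the WORKDIR, the mkdir step, and the host ./log folder are assumptions, not taken from the original setup):

FROM python:3.7-slim
WORKDIR /usr/share
COPY ./apm-test.py /usr/share/apm-test.py
RUN pip install elastic-apm flask blinker && mkdir -p /usr/share/log
ENV ELASTIC_APM_SERVER_URL='http://elastic-agent:8200'
EXPOSE 5000
CMD ["python", "./apm-test.py"]

With the log directory fixed inside the image, a bind mount can then expose it on the host:

docker run -d --name apm-client00 -v "$(pwd)/log:/usr/share/log" apm-client00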
I am seeing some weird behaviour in an API I am currently developing with PyCharm.
I have already developed many REST APIs with Python FastAPI successfully, but this time I am facing some very weird behaviour in my application.
To test locally, I added the following lines to the main.py script, which work fine for every other application I have developed:
if __name__ == "__main__":
    logger.info("starting server with dev configuration")
    uvicorn.run(app, host="127.0.0.1", port=5000)
The code runs when I set a breakpoint on the uvicorn.run(...) line and then resume from there.
The code also runs when I build the Docker image and run it in Docker with the following Dockerfile:
FROM python:3.10-slim
...
EXPOSE 80
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "80", "--workers", "1"]
But when I run it normally, uvicorn does not seem to boot, as none of the messages I would expect are logged:
INFO: Started server process [6828]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://127.0.0.1:5000 (Press CTRL+C to quit)
The problem appeared after I added the Azure Application Insights handler.
Configuration of application insights:
import logging

from fastapi import Request
from opencensus.ext.azure.trace_exporter import AzureExporter
from opencensus.trace import attributes_helper, samplers
from opencensus.trace.span import SpanKind
from opencensus.trace.tracer import Tracer
from starlette.types import ASGIApp

HTTP_HOST = attributes_helper.COMMON_ATTRIBUTES["HTTP_HOST"]
HTTP_METHOD = attributes_helper.COMMON_ATTRIBUTES["HTTP_METHOD"]
HTTP_PATH = attributes_helper.COMMON_ATTRIBUTES["HTTP_PATH"]
HTTP_ROUTE = attributes_helper.COMMON_ATTRIBUTES["HTTP_ROUTE"]
HTTP_URL = attributes_helper.COMMON_ATTRIBUTES["HTTP_URL"]
HTTP_STATUS_CODE = attributes_helper.COMMON_ATTRIBUTES["HTTP_STATUS_CODE"]
STACKTRACE = attributes_helper.COMMON_ATTRIBUTES["STACKTRACE"]

_logger = logging.getLogger(__name__)

class AzureApplicationInsightsMiddleware:
    def __init__(self, app: ASGIApp, azure_exporter: AzureExporter) -> None:
        self._app = app
        self._sampler = samplers.AlwaysOnSampler()
        self._exporter = azure_exporter

    async def __call__(self, request: Request, call_next):
        tracer = Tracer(exporter=self._exporter, sampler=self._sampler)
        with tracer.span("main") as span:
            span.span_kind = SpanKind.SERVER

            response = await call_next(request)  # does not seem to return a response

            span.name = "[{}]{}".format(request.method, request.url)
            tracer.add_attribute_to_current_span(HTTP_HOST, request.url.hostname)
            tracer.add_attribute_to_current_span(HTTP_METHOD, request.method)
            tracer.add_attribute_to_current_span(HTTP_PATH, request.url.path)
            tracer.add_attribute_to_current_span(HTTP_URL, str(request.url))
            try:
                tracer.add_attribute_to_current_span(HTTP_STATUS_CODE, response.status_code)
            except Exception:
                tracer.add_attribute_to_current_span(HTTP_STATUS_CODE, "500")
        return response
Added as middleware:
from fastapi import FastAPI
from opencensus.ext.azure.trace_exporter import AzureExporter

from app.startup.middleware.application_insights import AzureApplicationInsightsMiddleware
from app.startup.middleware.cors import add_cors
from config import app_config

def create_middleware(app: FastAPI) -> FastAPI:
    if app_config.appinsights_instrumentation_key is not None:
        azure_exporter = AzureExporter(connection_string=app_config.appinsights_instrumentation_key)
        app.middleware("http")(AzureApplicationInsightsMiddleware(app=app, azure_exporter=azure_exporter))
    app = add_cors(app)
    return app
Added handler to logger:
import logging
import sys
from logging import StreamHandler

from opencensus.ext.azure.log_exporter import AzureLogHandler

logger = logging.getLogger()
logger.setLevel(logging.DEBUG)

formatter = logging.Formatter(fmt="%(asctime)s %(levelname)-8s %(name)-15s %(message)s",
                              datefmt="%Y-%m-%d %H:%M:%S")

handler = StreamHandler(stream=sys.stdout)
handler.setLevel(logging.DEBUG)
handler.setFormatter(formatter)
logger.addHandler(handler)

# connection_string is defined elsewhere in the application's configuration
handler = AzureLogHandler(connection_string=connection_string)
handler.setLevel(logging.INFO)
handler.setFormatter(formatter)
logger.addHandler(handler)
I use this standard template, which works for all my other applications, but this time it obviously does not. I wonder whether there might be an issue with some of the installed modules listed in requirements.txt:
fastapi
uvicorn
azure-servicebus
pymongo[srv]
pandas
numpy
jinja2
svgwrite
matplotlib
Has anyone faced the same problem yet?
Update: I changed the host to localhost; the uvicorn server still does not boot.
I have an app that converts audio files to text, using Flask and flask-socketio. It works perfectly when I run it with python run.py, but when I run it with gunicorn -k eventlet -b 0.0.0.0:5000 run:app, it stops at the point in audio.py where the Google Speech-to-Text API is called.
This is the current code.
run.py:
from ats import socketio, app, db

if __name__ == '__main__':
    db.create_all()
    socketio.run(app, host='0.0.0.0', port=5001, debug=True)
__init__.py
import logging, json

from flask import Flask, jsonify, render_template, request
from flask_socketio import SocketIO, emit, send
from flask_cors import CORS
from flask_sqlalchemy import SQLAlchemy
from flask_marshmallow import Marshmallow

app = Flask(__name__, instance_relative_config=True, static_folder="templates/static", template_folder="templates")

# Create db instance
db = SQLAlchemy(app)
ma = Marshmallow(app)

@app.route('/')
def index():
    return render_template('index.html')

# import models
from ats import models

# set up CORS
CORS(app)

socketio = SocketIO(app, cors_allowed_origins='*', async_mode='eventlet')

# import blueprints
from ats.product.product import product_blueprint

# register blueprints
app.register_blueprint(product_blueprint, url_prefix='/api/product')

from ats import error_handlers
product.py
import os
import math
import eventlet
from os.path import join

from flask import Blueprint, request, jsonify, abort
from ats.utils import audio as AUDIO

product_blueprint = Blueprint('product', __name__)

@product_blueprint.route('/add', methods=['POST'])
def addProduct():
    try:
        data = request.form
        foldername = data['name']
        scriptFile = request.files['script']
        audioFile = request.files['audio']

        # save the script and audio file to uploads folder
        FILE.createFolder(foldername)
        FILE.save(foldername, scriptFile)
        FILE.save(foldername, audioFile)

        # list the files in the uploads
        audioFiles = FILE.getAudioFileList(foldername)
        fileCount = len(audioFiles)
        currentFile = 1

        # ============ speech to text =============
        for file in audioFiles:
            recognizedText = AUDIO.convert(foldername, file)

            # save to database
            newAudio = {
                'name': file,
                'recognizedText': recognizedText,
                'length': duration,
            }
            Audio.add(newAudio)

            # emit event to update the client about the progress
            percent = math.floor((currentFile / float(fileCount)) * 100)
            emit('upload_progress', {'data': percent}, room=data['sid'], namespace='/')
            eventlet.sleep()

            currentFile += 1

        # Delete the files in uploads folder
        FILE.delete(foldername)

        return jsonify({'data': None, 'message': 'Product was added.', 'success': True}), 200
    except Exception as e:
        abort(500, str(e))
audio.py
import io
import os

from ats import app

# Imports the Google Cloud client library
from google.cloud import speech
from google.cloud.speech import enums
from google.cloud.speech import types

# Instantiates a client
client = speech.SpeechClient()

def convert(foldername, filename):
    try:
        file = os.path.join(app.config['UPLOAD_FOLDER'], foldername, filename)

        # Loads the audio into memory
        with io.open(file, 'rb') as audio_file:
            content = audio_file.read()

        audio = types.RecognitionAudio(content=content)
        config = types.RecognitionConfig(
            encoding=enums.RecognitionConfig.AudioEncoding.LINEAR16,
            sample_rate_hertz=48000,
            language_code='ja-JP')

        # Call speech recognition on the audio file
        response = client.recognize(config, audio)  # The code stops here, which results in a worker timeout in gunicorn
        return response
    except Exception as e:
        raise e
I've been searching for a solution for almost a week but I still couldn't find one. Thank you for your help, guys.
When you run your application directly with python run.py, no timeout is applied: the application takes whatever time it needs to process. However, when you run it under Gunicorn, the default timeout is 30 seconds, which means you will get a timeout error if your application does not respond within 30 seconds. To avoid this, increase Gunicorn's default timeout by adding --timeout <time-in-seconds>.
The following command sets the timeout to 10 minutes:
gunicorn -k eventlet -b 0.0.0.0:5000 --timeout 600 run:app
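Equivalently, if you prefer a config file over command-line flags, the same settings can live in a small Gunicorn config module (a sketch; the file name gunicorn.conf.py is conventional, and worker_class, bind, and timeout are standard Gunicorn settings):

# gunicorn.conf.py -- equivalent of: gunicorn -k eventlet -b 0.0.0.0:5000 --timeout 600 run:app
worker_class = "eventlet"
bind = "0.0.0.0:5000"
timeout = 600  # seconds; the default is 30

Start it with gunicorn -c gunicorn.conf.py run:app.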
It's working now, running under uWSGI instead of Gunicorn. Here are the uWSGI config, the systemd service, and the nginx configuration.
ats.ini
[uwsgi]
module = wsgi:app
master = true
processes = 1
socket = ats.sock
chmod-socket = 660
vacuum = true
die-on-term = true
/etc/systemd/system/ats.service
[Unit]
Description=uWSGI instance to serve ats
After=network.target
[Service]
User=ubuntu
Group=www-data
WorkingDirectory=/home/user/ats
Environment="PATH=/home/user/ats/env/bin"
ExecStart=/home/user/ats/env/bin/uwsgi --ini ats.ini --gevent 100
[Install]
WantedBy=multi-user.target
nginx
server {
    listen 80;
    server_name <ip_address or domain>;

    access_log /var/log/nginx/access.log;

    location / {
        include uwsgi_params;
        uwsgi_pass unix:/home/user/ats/ats.sock;
        proxy_set_header Connection "Upgrade";
        client_max_body_size 200M;
    }

    location /socket.io {
        include uwsgi_params;
        uwsgi_pass unix:/home/user/ats/ats.sock;
        proxy_set_header Connection "Upgrade";
    }
}
Thank you guys
The google-cloud Python libraries have a conflict with gevent. I found out from this thread that, in order for them to work together, you need to add the following at the beginning of __init__.py:
from gevent import monkey
monkey.patch_all()
import grpc.experimental.gevent as grpc_gevent
grpc_gevent.init_gevent()
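Note that ordering likely matters here: monkey.patch_all() rewrites stdlib modules such as socket and ssl, so these lines presumably need to run before Flask, SocketIO, or the google-cloud client are imported. A sketch of the top of __init__.py under that assumption:

# These two patches must come before any other imports that touch sockets/ssl.
from gevent import monkey
monkey.patch_all()

import grpc.experimental.gevent as grpc_gevent
grpc_gevent.init_gevent()

# ... the module's regular imports (Flask, SocketIO, etc.) follow here.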
I hit this problem today too, and finally found that the bug was caused by a proxy setting. At first, I had set my proxy to an empty string:
os.environ['http_proxy'] = ""
os.environ['https_proxy'] = ""
and I got a timeout error in the request. After I commented out those lines, it worked:
# os.environ['http_proxy'] = ""
# os.environ['https_proxy'] = ""
I think this is not an error about Gunicorn's default timeout setting; it is about the system proxy setting.
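As a follow-up sketch (an assumption about intent, not taken from the answer): if the goal was to disable a proxy, deleting the variables may be safer than assigning empty strings, since some HTTP stacks treat an empty value differently from an unset one:

import os

# Remove the proxy settings entirely rather than setting them to "".
# os.environ.pop() is a no-op (returning the default) if the key is absent.
os.environ.pop('http_proxy', None)
os.environ.pop('https_proxy', None)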
I have written a Flask App as follows:
import logging
from flask import Flask, jsonify
from mongo import Mongo

app = Flask(__name__)
app.config.from_object("config.ProductionConfig")

# routes to a particular change and patch number
@app.route("/<project>/")
def home(project):
    app.logger.info('testing info log')
    Mongo_obj = Mongo(ip=app.config["DB_HOST"], port=app.config["DB_PORT"],
                      username=app.config["DB_USERNAME"],
                      .....................

if __name__ == '__main__':
    app.run(host='0.0.0.0', port='5000')
Now, the problem I face is that when I look at the logs of the Flask application, all I see is the following:
* Serving Flask app "service" (lazy loading)
* Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
* Debug mode: off
* Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
172.16.40.189 - - [01/Oct/2019 14:44:29] "GET /abc HTTP/1.1" 200 -
172.16.11.231 - - [01/Oct/2019 14:44:29] "GET /abc HTTP/1.1" 200 -
............
Is there something specific that needs to be done in order to see the log message? Do I need to run the Flask App in debug mode?
When not in debug mode, the default log level is WARNING; that's why you don't see your logs. If you'd like your logs to include INFO-level messages, you must set that level on the logger, e.g.:
app = Flask(__name__)
app.logger.setLevel(logging.INFO)
This is how I set log levels in our app. I use a create_app function to build a Flask app with error handling, logging, and other necessary configuration. Here is the snippet:
from flask import Flask
import logging

def create_app(test_config=None):
    # create and configure the app
    app = Flask(__name__)
    app.logger.setLevel(logging.ERROR)

    # Test log levels
    app.logger.debug("debug log info")
    app.logger.info("Info log information")
    app.logger.warning("Warning log info")
    app.logger.error("Error log info")
    app.logger.critical("Critical log info")

    return app

app = create_app()
The output shows that only ERROR-level and more severe logs are visible:
* Restarting with stat
[2022-12-12 17:38:39,375] ERROR in __init__: Error log info
[2022-12-12 17:38:39,375] CRITICAL in __init__: Critical log info
I am running a Python Flask app on an Amazon EC2 Linux instance.
My Python app looks like this:
application.py
#!flask/bin/python
from flask import Flask

application = Flask(__name__)

@application.route('/', methods=['GET', 'POST'])
def index():
    return '{"Output":"Hello World"}'

if __name__ == '__main__':
    application.run(host='0.0.0.0', port=80, debug=False)
My supervisor config looks like this:
supervisor.conf
[program:flaskapplication]
command = /home/ec2-user/myapp/venv/bin/python /home/ec2-user/myapp/application.py
stdout_logfile = /var/log/watcher-stdout.log
stdout_logfile_maxbytes = 10MB
stdout_logfile_backups = 5
stderr_logfile = /var/log/watcher-stderr.log
stderr_logfile_maxbytes = 10MB
stderr_logfile_backups = 5
When I do the following command:
supervisorctl -c supervisor.conf
I get the following response:
00:00:00 /home/ec2-user/myapp/venv/bin/python2.7 /home/ec2-user/myapp/venv/bin/supervisord -c supervisor.conf
But when I hit the Amazon instance's address, nothing is displayed; I get a "server not responding" page. What am I doing wrong?
I think you probably need an end-point:
@application.route('/say_hi', methods=['POST'])
I realized that port 80 is not, by default, an allowed inbound port in the EC2 security group. Once I added port 80 as an allowed inbound rule, I was able to reach the application.
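For reference, a sketch of adding that inbound rule with the AWS CLI (the security-group ID is a placeholder, and opening port 80 to 0.0.0.0/0 exposes it to the entire internet):

aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 80 \
    --cidr 0.0.0.0/0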
I'm trying to add logging to a web application which uses Flask.
When hosted using the built-in server (i.e. python3 server.py), logging works. When hosted using Gunicorn, the log file is not created.
The simplest code which reproduces the problem is this one:
#!/usr/bin/env python
import logging
from flask import Flask

flaskApp = Flask(__name__)

@flaskApp.route('/')
def index():
    flaskApp.logger.info('Log message')
    print('Direct output')
    return 'Hello World\n'

if __name__ == "__main__":
    logHandler = logging.FileHandler('/var/log/demo/app.log')
    logHandler.setLevel(logging.INFO)
    flaskApp.logger.addHandler(logHandler)
    flaskApp.logger.setLevel(logging.INFO)
    flaskApp.run()
The application is called using:
gunicorn server:flaskApp -b :80 -w 4 \
    --access-logfile /var/log/demo/access.log \
    --error-logfile /var/log/demo/error.log
When doing a request to the home page of the site, the following happens:
I receive the expected HTTP 200 "Hello World\n" in response.
There is a trace of the request in /var/log/demo/access.log.
/var/log/demo/error.log stays the same (there are just the boot events).
There is the "Direct output" line in the terminal.
There is no '/var/log/demo/app.log'. If I create the file prior to launching the application, the file is not modified.
Note that:
The directory /var/log/demo can be accessed (read, write, execute) by everyone, so this is not the permissions issue.
If I add a StreamHandler as a second handler, there is still no trace of the "Log message" entry, either in the terminal or in Gunicorn's log files.
Gunicorn is installed using pip3 install gunicorn, so there shouldn't be any mismatch with Python versions.
What's happening?
This approach works for me: import the Python logging module and add Gunicorn's error handlers to your app's logger. Your logger will then log into the Gunicorn error log file:
import logging
from flask import Flask

app = Flask(__name__)

gunicorn_error_logger = logging.getLogger('gunicorn.error')
app.logger.handlers.extend(gunicorn_error_logger.handlers)
app.logger.setLevel(logging.DEBUG)
app.logger.debug('this will show in the log')
My Gunicorn startup script is configured to output log entries to a file like so:
gunicorn main:app \
--workers 4 \
--bind 0.0.0.0:9000 \
--log-file /app/logs/gunicorn.log \
--log-level DEBUG \
--reload
When you use python3 server.py you are running the server.py script.
When you use gunicorn server:flaskApp ... you are running the Gunicorn startup script, which then imports the module server and looks for the variable flaskApp in that module.
Since server.py is being imported, the __name__ variable will contain "server", not "__main__", and therefore your log handler setup code is not run.
You could simply move the log handler setup code outside of the if __name__ == "__main__": block, as sketched below. But make sure you keep flaskApp.run() in there, since you do not want that to run when Gunicorn imports server.
More about: What does if __name__ == "__main__": do?
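A minimal sketch of the restructured server.py, based on the question's code:

#!/usr/bin/env python
import logging
from flask import Flask

flaskApp = Flask(__name__)

# Handler setup now runs at import time, so it takes effect both under
# Gunicorn and when the script is run directly.
logHandler = logging.FileHandler('/var/log/demo/app.log')
logHandler.setLevel(logging.INFO)
flaskApp.logger.addHandler(logHandler)
flaskApp.logger.setLevel(logging.INFO)

@flaskApp.route('/')
def index():
    flaskApp.logger.info('Log message')
    return 'Hello World\n'

if __name__ == "__main__":
    # Only the development-server call stays guarded.
    flaskApp.run()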
There are a couple of reasons behind this: Gunicorn has its own loggers, and it controls the log level through that mechanism. A fix for this would be to add app.logger.setLevel(logging.DEBUG).
But what's the problem with this approach? Well, first off, the level is hard-coded into the application itself. Yes, we could refactor it out into an environment variable, but then we have two different log levels: one for the Flask application and a totally separate one for Gunicorn, which is set through the --log-level parameter (values like "debug", "info", "warning", "error", and "critical").
A great solution to solve this problem is the following snippet:
import logging

from flask import Flask, jsonify

app = Flask(__name__)

@app.route('/')
def default_route():
    """Default route"""
    app.logger.debug('this is a DEBUG message')
    app.logger.info('this is an INFO message')
    app.logger.warning('this is a WARNING message')
    app.logger.error('this is an ERROR message')
    app.logger.critical('this is a CRITICAL message')
    return jsonify('hello world')

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=8000, debug=True)
else:
    gunicorn_logger = logging.getLogger('gunicorn.error')
    app.logger.handlers = gunicorn_logger.handlers
    app.logger.setLevel(gunicorn_logger.level)
Reference: the code and explanation are taken from here.