uvicorn suppresses python's syslog for gelf driver - python

I have a Docker container that logs via the gelf driver over UDP to a logging instance -- all fine!
The container is based on Ubuntu 18.04, where rsyslog runs as a service; that works well.
Inside the container a FastAPI application runs under the uvicorn web server. It also works perfectly, and uvicorn's messages reach the logging instance.
Here is what is not working, although it usually works in non-FastAPI Python projects: I use Python's syslog module to log additional information.
The app with syslog looks like this (I created a minimal example to debug this myself):
from fastapi import FastAPI
import syslog

syslog.openlog(facility=syslog.LOG_LOCAL0)
app = FastAPI()
syslog.syslog(syslog.LOG_INFO, 'startup done')

@app.get("/")
async def root():
    syslog.syslog(syslog.LOG_INFO, 'get hello')
    return {"message": "Hello World"}
The logs at the logging instance don't show the syslog messages. Only uvicorn's messages:
INFO: Started server process [21]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:80 (Press CTRL+C to quit)
INFO: 172.17.0.1:35346 - "GET / HTTP/1.1" 200 OK
For further debugging I checked rsyslog's log file, and it contains the syslog messages:
Dec 23 17:21:39 /uvicorn: startup done
Dec 23 17:21:50 /uvicorn: get hello
and here is the rsyslog configuration at /etc/rsyslog.d
local0.* {
action(type="omfile" file="/var/log/test.log" fileOwner="syslog" fileGroup="syslog" fileCreateMode="0640")
stop
}
What am I missing here?
Why is gelf ignoring rsyslog?
What do I need to understand about uvicorn concerning syslog?
or what can I do?
Thanks

The problem results from the gelf driver ignoring syslog: the driver only captures the container's stdout/stderr, and messages sent to syslog go to rsyslog instead. A simple print in Python is recognized, though.
Solution: I built myself a log function in Python that logs to syslog and also prints whatever I want to log.
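A minimal sketch of such a helper (my own illustration, not the asker's exact code): it writes each message to syslog for rsyslog and also prints it to stdout, which is the stream the gelf driver reads.

```python
import syslog

syslog.openlog(facility=syslog.LOG_LOCAL0)

def log(message, level=syslog.LOG_INFO):
    """Send message to syslog (for rsyslog) and also print it,
    so the container's stdout reaches Docker's gelf driver."""
    syslog.syslog(level, message)
    print(message, flush=True)
```

`flush=True` matters here: without it, stdout may be block-buffered when not attached to a TTY, and log lines can lag behind the events they describe.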

Related

Python AWS Apprunner Service Failing Health Check

New to App Runner and trying to get a vanilla Python application to deploy successfully, but it keeps failing the TCP health checks. The following are some relevant portions of the service and App Runner console logs:
Service Logs:
2023-02-18T15:20:20.856-05:00 [Build] Step 5/5 : EXPOSE 80
2023-02-18T15:20:20.856-05:00 [Build] ---> Running in abcxyz
2023-02-18T15:20:20.856-05:00 [Build] Removing intermediate container abcxyz
2023-02-18T15:20:20.856-05:00 [Build] ---> f3701b7ee4cf
2023-02-18T15:20:20.856-05:00 [Build] Successfully built abcxyz
2023-02-18T15:20:20.856-05:00 [Build] Successfully tagged application-image:latest
2023-02-18T15:30:49.152-05:00 [AppRunner] Failed to deploy your application source code.
Console:
02-18-2023 03:30:49 PM [AppRunner] Deployment with ID : 123456789 failed. Failure reason : Health check failed.
02-18-2023 03:30:38 PM [AppRunner] Health check failed on port '80'. Check your configured port number. For more information, read the application logs.
02-18-2023 03:24:21 PM [AppRunner] Performing health check on port '80'.
02-18-2023 03:24:11 PM [AppRunner] Provisioning instances and deploying image for privately accessible service.
02-18-2023 03:23:59 PM [AppRunner] Successfully built source code.
My app is a vanilla, non-networked Python application into which I've added a SimpleHTTPRequestHandler served by a TCPServer configured to run on a separate thread, as follows:
import socketserver
import threading
import time
from http.server import SimpleHTTPRequestHandler

# handler for server
class HealthCheckHandler(SimpleHTTPRequestHandler):
    def do_GET(self) -> None:
        self.send_response(code=200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write("""<html><body>hello, client.</body></html>""".encode('utf-8'))

# runs the server
def run_healthcheck_server():
    with socketserver.TCPServer(("127.0.0.1", 80), HealthCheckHandler) as httpd:
        print("Fielding health check requests on: 80")
        httpd.serve_forever()

# dummy
def my_app_logic():
    while True:
        print('hello, server.')
        time.sleep(5)

# wrapper to run primary application logic AND the TCPServer
def main():
    # run the server in a thread
    threading.Thread(target=run_healthcheck_server).start()
    # run my app
    my_app_logic()

if __name__ == "__main__":
    main()
This works fine on my local machine and I see "hello, client." in my browser when going to 127.0.0.1 and a stream of hello, server. messages every 5 seconds in my console.
I don't know much about networking and the only reason I'm incorporating this into the app is to facilitate the AWS HealthCheck which I can't disable in the AppRunner service. I am trying to understand if the issue is how I'm trying to handle TCP requests from AWS' Health Checker or if it's something else on the Apprunner side.
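One plausible cause, offered as an assumption since no accepted answer appears here: the server above binds to 127.0.0.1, so only loopback clients can connect; an external health checker is refused even though local browsing works. Binding to all interfaces would look like this (port 8765 is just a placeholder for the test):

```python
import socketserver
from http.server import BaseHTTPRequestHandler

class HealthCheckHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

# An empty host ("") binds all interfaces, so non-loopback clients
# such as an external health checker can reach the server.
with socketserver.TCPServer(("", 8765), HealthCheckHandler) as httpd:
    print("listening on", httpd.server_address)
    # httpd.serve_forever()  # blocks; run in a thread in the real app
```

Locally, binding to 127.0.0.1 and connecting from the same machine always works, which is consistent with the "works on my machine, fails in App Runner" symptom.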

Loguru stops logging flask exceptions in production

I am using Loguru to handle the logging in my Flask REST API. When testing the app locally, it logs exactly as expected. When I deploy the app to my Linux server running Apache, the logging stops. I can run the app manually on the server using python app.py and the logging works again, but that just spins up the development server.
from flask import Flask
from loguru import logger
import logging
import os

class InterceptHandler(logging.Handler):
    def emit(self, record):
        # Retrieve the context where the logging call occurred;
        # this happens to be in the 6th frame upward
        logger_opt = logger.opt(depth=6, exception=record.exc_info)
        logger_opt.log(record.levelno, record.getMessage())

# create the Flask application
app = Flask(__name__)

logger.add(
    'logs/events.log',
    level='DEBUG',
    format='{time} {level} {message}',
    backtrace=True,
    rotation='5 MB',
    retention=9
)

app.logger.addHandler(InterceptHandler())
logging.basicConfig(handlers=[InterceptHandler()], level=20)

if __name__ == '__main__':
    app.run(debug=False)
Figured out the issue. Under the Werkzeug dev server, the relative path logs/events.log resolved as expected. When the application was deployed to the Apache server, the logs that would have been placed there were rerouted in with the Apache server logs.
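One way to make the sink location deterministic, sketched here under the assumption that the working directory was the culprit: compute an absolute path before handing it to logger.add() instead of the relative 'logs/events.log'. APP_LOG_DIR is a hypothetical environment variable used only for this example.

```python
import os
import tempfile

# Sketch (assumption): resolve the sink to an absolute path once, so the
# log location does not depend on the server's working directory, which
# differs between the Werkzeug dev server and Apache.
base = os.environ.get("APP_LOG_DIR", tempfile.gettempdir())
log_path = os.path.join(os.path.abspath(base), "events.log")
# with Loguru, pass this absolute path: logger.add(log_path, rotation="5 MB", ...)
print(log_path)
```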

Accessing the Python flask request context for writing to Apache error log

WSGI passes its request context into Flask. How do I access it?
E.g., to write to the webserver error log from a Flask app that isn't using WSGI, I write to stderr:
sys.stderr.write(str(msg) + "\n")
For this type of logging output, WSGI for Python requires that I use the request-context stream environ['wsgi.errors']. See the WSGI debugging docs.
How do I do this?
If using mod_wsgi you can just use print(): output is captured and sent to the Apache error logs. With a new enough mod_wsgi version, output produced inside the context of a request is even associated with Apache's per-request logging, which means you can tag it in the Apache logs (by setting the Apache log format with the Apache request ID if necessary) and associate it with any other Apache logging for the same request.
In general, all WSGI servers capture print() output and send it to their logs. The only exceptions tend to be WSGI over CGI/FastCGI/SCGI. So wsgi.errors is not specifically needed, and it is actually rare for code to even use it.
Access the WSGI request context via request.environ
Logging to the webserver error log for both WSGI and non-WSGI configs:
import sys
from flask import request

def log(msg):
    out_stream = sys.stderr
    # See http://modwsgi.readthedocs.io/en/develop/user-guides/debugging-techniques.html
    if 'wsgi.errors' in request.environ:
        out_stream = request.environ['wsgi.errors']
    out_stream.write(str(msg) + "\n")

How to get MQTT client to work on Appengine (python)

Trying to run a simple MQTT client (not the broker) on a paid GAE app. However, the on_connect callback never occurs in the following:
worker.py
import webapp2
import paho.mqtt.client as paho

class WorkerHandler(webapp2.RequestHandler):
    def on_subscribe(self, client, userdata, mid, granted_qos):
        print("Subscribed: " + str(mid) + " " + str(granted_qos))

    def on_message(self, client, userdata, msg):
        print(msg.topic + " " + str(msg.qos) + " " + str(msg.payload))

    def on_connect(self, client, userdata, flags, rc):
        client.subscribe("$SYS/#")
        print "Subscribed to wildcard"

    def get(self):
        client = paho.Client()
        client.on_connect = self.on_connect
        client.on_subscribe = self.on_subscribe
        client.on_message = self.on_message
        client.connect("iot.eclipse.org")
        print "connected to broker"
        client.loop_forever()

app = webapp2.WSGIApplication([
    (r'/_ah/start', WorkerHandler),
])
In the dev environment it fails silently with just a message after a minute or so
INFO 2017-04-04 01:51:40,958 module.py:813] worker: "GET /_ah/start HTTP/1.1" 500 220
INFO 2017-04-04 01:51:41,869 module.py:1759] New instance for module "worker" serving on: http://localhost:8080
connected to broker
WARNING 2017-04-04 01:52:10,860 module.py:1927] All instances may not have restarted
This is configured as a "backend"/service and the yaml looks like this:
worker.yaml
service: worker
runtime: python27
api_version: 1
threadsafe: true
instance_class: B8
manual_scaling:
  instances: 1
handlers:
# If a service has an _ah/start handler, it should be listed first.
- url: /_ah/start
  script: worker.app
Note: In the dev environment, socket.py is being imported directly from python install .../2.7/lib/python2.7/socket.py
You're attempting to run a standalone script as your GAE app worker service. It won't work.
Your worker.py needs to contain a WSGI application called app to match your worker.yaml configuration.
From the script row in the Handlers element table:
A script: directive must be a python import path, for example, package.module.app, that points to a WSGI application. The last component of a script: directive using a Python module path is the name of a global variable in the module: that variable must be a WSGI app, and is usually called app by convention.
The error you get most likely indicates that the attempt to start the worker's module WSGI app fails.
After your update to bring back the WSGI app the reason for the error message becomes even clearer: the WorkerHandler.get() method doesn't respond to the /_ah/start request, because it's stuck in client.loop_forever().
From Startup:
Each service instance is created in response to a start request, which is an empty HTTP GET request to /_ah/start. App Engine sends this request to bring an instance into existence; users cannot send a request to /_ah/start. Manual and basic scaling instances must respond to the start request before they can handle another request.
...
When an instance responds to the /_ah/start request with an HTTP status code of 200–299 or 404, it is considered to have successfully started and can handle additional requests. Otherwise, App Engine terminates the instance. Manual scaling instances are restarted immediately, while basic scaling instances are restarted only when needed for serving traffic.
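A sketch of the usual fix, offered as an assumption rather than part of the answer above: paho provides client.loop_start() and client.loop_stop(), which run the network loop in a background thread instead of blocking like loop_forever(), so the /_ah/start handler can return a 200. The pattern, shown with a stdlib stand-in for the paho loop:

```python
import threading
import time

# Hypothetical stand-in for paho's network loop: the point is that the
# loop runs in a background thread so the /_ah/start handler can return.
def network_loop(stop_event):
    while not stop_event.is_set():
        time.sleep(0.01)  # stands in for client.loop() doing network I/O

stop = threading.Event()
t = threading.Thread(target=network_loop, args=(stop,))
t.daemon = True
t.start()          # analogous to client.loop_start()
# ... the GET handler can now write an HTTP 200 response and return ...
stop.set()         # analogous to client.loop_stop()
t.join(timeout=1)
```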

flask with uwsgi nginx gateway timeout

I have set up a Flask application to use uWSGI and nginx.
I followed tutorials on the internet, but I have the following issue.
The controller.py file contains the following functions:
api_module = Blueprint('cassandra_api', __name__, url_prefix="/api")

@api_module.route('/', methods=['GET', 'POST'])
def home():
    return "c"
the above works great when trying
myip/api/
but the following doesn't work at all
@api_module.route("/fault_prone_snippets/", methods=['GET'])
def get_fault_prone_snippets():
    # code to connect with the Cassandra DB and read GET parameters
When I visit
myip/api/fault_prone_snippets/
with or without GET parameters, no code is executed, I see no error message, and after a minute I get a gateway timeout. The problem is that when I run Flask locally it works great. Using the Cassandra driver from the Python console in my dev environment also connects with no error. How can I debug this kind of setup when it works locally but not in production?
As you're running behind nginx, it may help to raise the keepalive_timeout in the http section of your nginx.conf, and/or the proxy_send_timeout and proxy_read_timeout parameters in the location section.
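A sketch of the directives involved (timeout values are placeholders to tune for your workload):

```nginx
http {
    keepalive_timeout 75s;

    server {
        location /api/ {
            proxy_send_timeout 300s;   # time allowed for sending the request upstream
            proxy_read_timeout 300s;   # time allowed waiting for the upstream response
            # uwsgi_read_timeout 300s; # use this instead if proxying with uwsgi_pass
        }
    }
}
```

Note that if nginx talks to uWSGI via uwsgi_pass rather than proxy_pass, the uwsgi_* timeout directives apply instead of the proxy_* ones.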
