How to get MQTT client to work on Appengine (python) - python

Trying to run a simple MQTT client (not the broker) on a paid GAE app. However, the on_connect callback never fires in the following:
worker.py
import webapp2
import paho.mqtt.client as paho

class WorkerHandler(webapp2.RequestHandler):
    def on_subscribe(self, client, userdata, mid, granted_qos):
        print("Subscribed: " + str(mid) + " " + str(granted_qos))

    def on_message(self, client, userdata, msg):
        print(msg.topic + " " + str(msg.qos) + " " + str(msg.payload))

    def on_connect(self, client, userdata, flags, rc):
        client.subscribe("$SYS/#")
        print "Subscribed to wildcard"

    def get(self):
        client = paho.Client()
        client.on_connect = self.on_connect
        client.on_subscribe = self.on_subscribe
        client.on_message = self.on_message
        client.connect("iot.eclipse.org")
        print "connected to broker"
        client.loop_forever()

app = webapp2.WSGIApplication([
    (r'/_ah/start', WorkerHandler),
])
In the dev environment it fails silently, with just this message after a minute or so:
INFO 2017-04-04 01:51:40,958 module.py:813] worker: "GET /_ah/start HTTP/1.1" 500 220
INFO 2017-04-04 01:51:41,869 module.py:1759] New instance for module "worker" serving on: http://localhost:8080
connected to broker
WARNING 2017-04-04 01:52:10,860 module.py:1927] All instances may not have restarted
This is configured as a "backend"/service and the yaml looks like this:
worker.yaml
service: worker
runtime: python27
api_version: 1
threadsafe: true
instance_class: B8
manual_scaling:
  instances: 1
handlers:
# If a service has an _ah/start handler, it should be listed first.
- url: /_ah/start
  script: worker.app
Note: In the dev environment, socket.py is being imported directly from python install .../2.7/lib/python2.7/socket.py

You're attempting to run a standalone script as your GAE app worker service. It won't work.
Your worker.py needs to contain a WSGI application called app to match your worker.yaml configuration.
From the script row in the Handlers element table:
A script: directive must be a python import path, for example, package.module.app, that points to a WSGI application. The last component of a script: directive using a Python module path is the name of a global variable in the module: that variable must be a WSGI app, and is usually called app by convention.
The error you get most likely indicates that the attempt to start the worker's module WSGI app fails.
After your update brought back the WSGI app, the reason for the error message becomes even clearer: the WorkerHandler.get() method never responds to the /_ah/start request because it is stuck in client.loop_forever().
From Startup:
Each service instance is created in response to a start request, which is an empty HTTP GET request to /_ah/start. App Engine sends this request to bring an instance into existence; users cannot send a request to /_ah/start. Manual and basic scaling instances must respond to the start request before they can handle another request.
...
When an instance responds to the /_ah/start request with an HTTP status code of 200–299 or 404, it is considered to have successfully started and can handle additional requests. Otherwise, App Engine terminates the instance. Manual scaling instances are restarted immediately, while basic scaling instances are restarted only when needed for serving traffic.

Related

Python AWS Apprunner Service Failing Health Check

New to App Runner and trying to get a vanilla Python application to deploy successfully, but it keeps failing the TCP health checks. The following are some relevant portions of the service and App Runner console logs:
Service Logs:
2023-02-18T15:20:20.856-05:00 [Build] Step 5/5 : EXPOSE 80
2023-02-18T15:20:20.856-05:00 [Build] ---> Running in abcxyz
2023-02-18T15:20:20.856-05:00 [Build] Removing intermediate container abcxyz
2023-02-18T15:20:20.856-05:00 [Build] ---> f3701b7ee4cf
2023-02-18T15:20:20.856-05:00 [Build] Successfully built abcxyz
2023-02-18T15:20:20.856-05:00 [Build] Successfully tagged application-image:latest
2023-02-18T15:30:49.152-05:00 [AppRunner] Failed to deploy your application source code.
Console:
02-18-2023 03:30:49 PM [AppRunner] Deployment with ID : 123456789 failed. Failure reason : Health check failed.
02-18-2023 03:30:38 PM [AppRunner] Health check failed on port '80'. Check your configured port number. For more information, read the application logs.
02-18-2023 03:24:21 PM [AppRunner] Performing health check on port '80'.
02-18-2023 03:24:11 PM [AppRunner] Provisioning instances and deploying image for privately accessible service.
02-18-2023 03:23:59 PM [AppRunner] Successfully built source code.
My app is a vanilla, non-networked Python application into which I've added a SimpleHTTPRequestHandler running on a TCPServer, configured to run as a separate thread as follows:
import socketserver
import threading
import time
from http.server import SimpleHTTPRequestHandler

# handler for server
class HealthCheckHandler(SimpleHTTPRequestHandler):
    def do_GET(self) -> None:
        self.send_response(code=200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write("""<html><body>hello, client.</body></html>""".encode('utf-8'))

# runs the server
def run_healthcheck_server():
    with socketserver.TCPServer(("127.0.0.1", 80), HealthCheckHandler) as httpd:
        print("Fielding health check requests on: 80")
        httpd.serve_forever()

# dummy
def my_app_logic():
    while True:
        print('hello, server.')
        time.sleep(5)

# wrapper to run primary application logic AND TCPServer
def main():
    # run the server in a thread
    threading.Thread(target=run_healthcheck_server).start()
    # run my app
    my_app_logic()

if __name__ == "__main__":
    main()
This works fine on my local machine and I see "hello, client." in my browser when going to 127.0.0.1 and a stream of hello, server. messages every 5 seconds in my console.
I don't know much about networking and the only reason I'm incorporating this into the app is to facilitate the AWS HealthCheck which I can't disable in the AppRunner service. I am trying to understand if the issue is how I'm trying to handle TCP requests from AWS' Health Checker or if it's something else on the Apprunner side.

flask server with ssl_context freezes if it receives http request

I'm trying to create a simple flask server that redirects any http requests to https. I've created a certificate and key file and registered a before_request hook to see if the request is secure and redirect appropriately, following the advice in this SO answer.
The flask server responds to https requests as expected. However, when I send an http request, the before_request hook never gets called and the server hangs forever. If I send the http request from the browser, I see an "ERR_EMPTY_RESPONSE". The server doesn't even respond to https requests afterwards. No logs are printed either.
Running the app with gunicorn didn't help either. The only difference was that gunicorn is able to detect that the worker is frozen and eventually kills and replaces it. I've also tried using flask-talisman, with the same results.
Below is the code I'm running
### server.py
from flask import Flask, request, redirect

def verify_https():
    if not request.is_secure:
        url = request.url.replace("http://", "https://", 1)
        return redirect(url, 301)

def create_flask_app():
    app = Flask(__name__)
    app.before_request(verify_https)
    app.add_url_rule('/', 'root', lambda: "Hello World")
    return app

if __name__ == '__main__':
    app = create_flask_app()
    app.run(
        host="0.0.0.0",
        port=5000,
        ssl_context=('server.crt', 'server.key')
    )
Running it with either python3.8 server.py or gunicorn --keyfile 'server.key' --certfile 'server.crt' --bind '0.0.0.0:5000' 'server:create_flask_app()' and opening a browser window to localhost:5000 causes the server to hang.
About the "freeze": it isn't one. Flask and gunicorn can serve only one variant of connection per port, so the server isn't frozen; your browser cancelled the request and the worker is simply idling.
I think it is better to use a dedicated web server such as Nginx in front if you want to redirect HTTP to HTTPS; I would recommend that to you.
But it's possible to trigger your verify_https function if you run multiple instances of gunicorn at the same time.
I took your example, generated a certificate, and then ran this command in my console (it starts one instance as a background job, so the two can also be run in two separate terminals):
gunicorn --bind '0.0.0.0:80' 'server:create_flask_app()' & gunicorn --certfile server.crt --keyfile server.key --bind '0.0.0.0:443' 'server:create_flask_app()'
Now Chrome goes to the secure page as expected.
Typically servers don't listen for both http and https on the same port. I have a similar requirement for my personal portfolio, but I use nginx to forward http requests (port 80) to https (port 443) and then the https server passes it off to my uwsgi backend, which listens on port 3031. That's probably more complex than you need, but a possible solution. If you go that route I would recommend letsencrypt for your certificate needs. It will set up the certificates AND the nginx.conf for you.
If you don't want to go the full nginx/apache route I think your easiest solution is the one suggested here on that same thread that you linked.

uvicorn suppresses python's syslog for gelf driver

I have a docker container whose logs are shipped with the gelf logging driver to a logging instance via udp -- all fine!
The container is based on Ubuntu 18 where rsyslog is running as a service, which works well.
Inside the container is a FastAPI application running with uvicorn webserver. It also works perfectly and uvicorn is perfectly logging to the logging instance.
Here is what is not working, though it usually works in non-FastAPI Python projects: I use Python's syslog module to log more stuff.
The app with syslog looks like this (I created an easy example to debug for myself):
from fastapi import FastAPI
import syslog

syslog.openlog(facility=syslog.LOG_LOCAL0)
app = FastAPI()
syslog.syslog(syslog.LOG_INFO, 'startup done')

@app.get("/")
async def root():
    syslog.syslog(syslog.LOG_INFO, 'get hello')
    return {"message": "Hello World"}
The logs at the logging instance don't show the syslog messages. Only uvicorn's messages:
INFO: Started server process [21]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:80 (Press CTRL+C to quit)
INFO: 172.17.0.1:35346 - "GET / HTTP/1.1" 200 OK
For further debugging I checked rsyslog's log file, and it contains the syslog messages:
Dec 23 17:21:39 /uvicorn: startup done
Dec 23 17:21:50 /uvicorn: get hello
and here is the rsyslog configuration at /etc/rsyslog.d
local0.* {
action(type="omfile" file="/var/log/test.log" fileOwner="syslog" fileGroup="syslog" fileCreateMode="0640")
stop
}
What am I missing here?
Why is gelf ignoring rsyslog?
What do I need to understand about uvicorn concerning syslog?
Or what else can I do?
Thanks
The problem is where the gelf driver reads from, not uvicorn: the driver only captures the container's stdout/stderr, so messages written via syslog() (which go to /dev/log and on to rsyslog) never reach it, while a simple print in Python is picked up.
Solution: I built myself a small log function in Python which both logs to syslog and prints whatever I want to log.

printing URL parameters of a HTTP request using Python + Nginx + uWSGI

I have used this link and successfully run a Python script using uWSGI, although I just followed the doc line by line.
I have a GPS device which is sending data to a remote server. The device's documentation says it connects to the server using TCP, which I assumed would be HTTP, since a simple device like a GPS tracker would not be able to do HTTPS (I hope I am right here). I have configured my Nginx server to forward all incoming HTTP requests to a Python script for processing via uWSGI.
What I want to do is simply print the URL or query string on the HTML page. As I don't have control over the device side (I can only configure the device to send data to an IP + port), I have no clue how the data is coming in. Below is my access log:
[23/Jan/2016:01:50:32 +0530] "(009591810720BP05000009591810720160122A1254.6449N07738.5244E000.0202007129.7200000000L00000008)" 400 172 "-" "-" "-"
Now I have looked at this link on how to get the URL parameter values, but I don't have a clue what the parameters are here.
I tried to modify my wsgi.py file as:
import requests

r = requests.get("http://localhost.com/")
# or r = requests.get("http://localhost.com/?") as i am directly routing the
# incoming http request to the python script, and the incoming HTTP request
# might not have any parameter, just data
text1 = r.status_code

def application(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/html')])
    return ["<h1 style='color:blue'>Hello There shailendra! %s </h1>" % (text1)]
but when I restarted nginx, I got an internal server error. Can someone help me understand what I am doing wrong here? (Literally, I have no clue about the parameters of the application function. I tried to read this link, but what I got from it is that the environ argument carries many CGI environment variables.)
Can someone please help me figure out what I am doing wrong, and point me to a doc or resource?
Thanks.
Why are you using localhost ".com"?
Since you are running the webserver on the same machine, you should change the line to
r = requests.get("http://localhost/")
Also, move the lines below out of wsgi.py and put them in testServerConnection.py:
import requests

r = requests.get("http://localhost/")
# or r = requests.get("http://localhost/?") as i am directly routing the
# incoming http request to the python script, and the incoming HTTP request
# might not have any parameter, just data
text1 = r.status_code
Start NGINX, and you also might have to run (I am not sure how uWSGI is set up with nginx):
uwsgi --socket 0.0.0.0:8080 --protocol=http -w wsgi
Run testServerConnection.py to send a test request to the localhost webserver and print the response.
I got the answer to my question. Basically, to process a TCP request, you need to open a socket and accept the TCP connection on the specific port you configured on the hardware:
import SocketServer

class MyTCPHandler(SocketServer.BaseRequestHandler):
    def handle(self):
        # self.request is the TCP socket connected to the client
        self.data = self.request.recv(1024).strip()
        print "{} wrote:".format(self.client_address[0])
        # data which is received
        print self.data

if __name__ == "__main__":
    # replace IP by your server IP
    HOST, PORT = <IP of the server>, 8000
    # Create the server, binding to the host on port 8000
    server = SocketServer.TCPServer((HOST, PORT), MyTCPHandler)
    # Activate the server; this will keep running until you
    # interrupt the program with Ctrl-C
    server.serve_forever()
After you get the data, you can do anything with it. As my data format was documented in the GPS datasheet, I was able to parse the string and extract the lat and long from it.

Combining websockets and WSGI in a python app

I'm working on a scientific experiment where about two dozen test persons play a turn-based game with/against each other. Right now, it's a Python web app with a WSGI interface. I'd like to augment the usability with websockets: When all players have finished their turns, I'd like to notify all clients to update their status. Right now, everyone has to either wait for the turn timeout, or continually reload and wait for the "turn is still in progress" error message not to appear again (busy waiting, effectively).
I read through multiple websocket libraries' documentation and I understand how websockets work, but I'm not sure about the architecture for mixing WSGI and websockets: Can I have a websockets and a WSGI server in the same process (and if so, how, using really any websockets library) and just call my_websocket.send_message() from a WSGI handler, or should I have a separate websockets server and do some IPC? Or should I not mix them at all?
edit, 6 months later: I ended up starting a separate websockets server process (using Autobahn) instead of integrating it with the WSGI server. The reason was that it's much easier and cleaner to separate the two, and talking to the websockets server from the WSGI process (server-to-server communication) was straightforward and worked on the first attempt using websocket-client.
Here is an example that does what you want:
https://github.com/tavendo/AutobahnPython/tree/master/examples/twisted/websocket/echo_wsgi
It runs a WSGI web app (Flask-based in this case, but it can be anything WSGI-conforming) plus a WebSocket server under one server and one port.
You can send WS messages from within web handlers. Autobahn also provides PubSub on top of WebSocket, which greatly simplifies sending notifications (via WampServerProtocol.dispatch) in cases like yours.
http://autobahn.ws/python
Disclosure: I am author of Autobahn and work for Tavendo.
but I'm not sure about the architecture for mixing WSGI and websockets
I made a library for exactly this: WSocket.
Simple WSGI HTTP + Websocket Server, Framework, Middleware And App. It includes:
- Server (WSGI) included - works with any WSGI framework
- Middleware - adds Websocket support for any WSGI framework
- Framework - simple Websocket WSGI web application framework
- App - event-based app for Websocket communication
- Handler - adds Websocket support to wsgiref (Python's built-in WSGI server)
- Client - coming soon...
When an external server is used, some clients (like Firefox) require an HTTP 1.1 server for the Middleware, Framework, and App.
Common features:
- only a single file, less than 1000 lines
- websocket subprotocols supported
- websocket message compression supported (used if the client asks for it)
- receives and sends ping and pong messages (with an automatic pong sender)
- receives and sends binary or text messages
- works for messages with or without a mask
- closing messages supported
- auto and manual close
Example using the bottle web framework and WSocket middleware:

from bottle import request, Bottle
from wsocket import WSocketApp, WebSocketError, logger, run
from time import sleep

logger.setLevel(10)  # for debugging
bottle = Bottle()
app = WSocketApp(bottle)
# app = WSocketApp(bottle, "WAMP")

@bottle.route("/")
def handle_websocket():
    wsock = request.environ.get("wsgi.websocket")
    if not wsock:
        return "Hello World!"
    while True:
        try:
            message = wsock.receive()
            if message is not None:
                print("participator : " + message)
                wsock.send("you : " + message)
                sleep(2)
                wsock.send("you : " + message)
        except WebSocketError:
            break

run(app)
