Non-blocking Tornado instance on OpenShift? - python

I am trying to create a Tornado web server. The quickstart gave me a standard Tornado project, but according to the documentation this configuration is blocking.
I am new to non-blocking Python.
I have this wsgi file that lives in the root folder of my PaaS server:
#!/usr/bin/env python
import os
import imp
import sys
#
# Below for testing only
#
if __name__ == '__main__':
    ip = 'localhost'
    port = 8051
    zapp = imp.load_source('application', 'wsgi/application')

    from wsgiref.simple_server import make_server
    httpd = make_server(ip, port, zapp.application)
    httpd.serve_forever()
This is the main handler file:
#!/usr/bin/env python
import tornado.web

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        self.render('index.html')
And the application folder contains this:
# Put your handlers here.
import tornado.wsgi
from .handlers import MainHandler

handlers = [(r'/', MainHandler)]
application = tornado.wsgi.WSGIApplication(handlers, **settings)
The documentation says that in WSGI mode asynchronous methods are not supported, and OpenShift uses WSGI to deploy Python applications.
Is it possible to configure a Python application on OpenShift to be fully non-blocking? I have, however, seen a project that seemed to work.

If you are talking about OpenShift V2 (not V3 which uses Kubernetes/Docker), then you need to use the app.py file as described in:
http://blog.dscpl.com.au/2015/08/running-async-web-applications-under.html
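For reference, here is a minimal sketch of what such an app.py could look like, assuming (per the linked post) that the v2 Python cartridge executes app.py at the repository root and exposes the bind address through the OPENSHIFT_PYTHON_IP and OPENSHIFT_PYTHON_PORT environment variables:
#!/usr/bin/env python
# Sketch only: assumes OpenShift v2 runs this app.py directly,
# so Tornado can own the event loop instead of being wrapped in WSGI.
import os

import tornado.httpserver
import tornado.ioloop
import tornado.web


class MainHandler(tornado.web.RequestHandler):
    def get(self):
        self.render('index.html')


application = tornado.web.Application([(r'/', MainHandler)])

if __name__ == '__main__':
    # OpenShift v2 publishes the bind address via environment variables;
    # fall back to localhost:8051 for local testing.
    ip = os.environ.get('OPENSHIFT_PYTHON_IP', '127.0.0.1')
    port = int(os.environ.get('OPENSHIFT_PYTHON_PORT', 8051))
    tornado.httpserver.HTTPServer(application).listen(port, ip)
    tornado.ioloop.IOLoop.instance().start()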

Enable access control on simple HTTP server

I have the following shell script for a very simple HTTP server:
#!/bin/sh
echo "Serving at http://localhost:3000"
python -m SimpleHTTPServer 3000
I was wondering how I can enable or add a CORS header like Access-Control-Allow-Origin: * to this server?
Unfortunately, the simple HTTP server really is so simple that it does not allow any customization, especially not for the headers it sends. You can however create a simple HTTP server yourself, reusing most of SimpleHTTPRequestHandler, and just add the desired header.
For that, simply create a file simple-cors-http-server.py (or whatever) and, depending on the Python version you are using, put one of the following snippets inside.
Then you can do python simple-cors-http-server.py and it will launch your modified server which will set the CORS header for every response.
With the shebang at the top, make the file executable and put it into your PATH, and you can just run it using simple-cors-http-server.py too.
Python 3 solution
Python 3 uses SimpleHTTPRequestHandler and HTTPServer from the http.server module to run the server:
#!/usr/bin/env python3
from http.server import HTTPServer, SimpleHTTPRequestHandler, test
import sys

class CORSRequestHandler(SimpleHTTPRequestHandler):
    def end_headers(self):
        self.send_header('Access-Control-Allow-Origin', '*')
        SimpleHTTPRequestHandler.end_headers(self)

if __name__ == '__main__':
    test(CORSRequestHandler, HTTPServer, port=int(sys.argv[1]) if len(sys.argv) > 1 else 8000)
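To verify the header is actually being sent, a quick check from a second terminal might look like this (a sketch, assuming the server above is running on port 8000):
#!/usr/bin/env python3
# Sketch: request the directory listing and print the CORS header
# added by CORSRequestHandler.
from urllib.request import urlopen

with urlopen('http://localhost:8000/') as response:
    print(response.headers.get('Access-Control-Allow-Origin'))  # expected output: *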
Python 2 solution
Python 2 uses SimpleHTTPServer.SimpleHTTPRequestHandler and the BaseHTTPServer module to run the server:
#!/usr/bin/env python2
from SimpleHTTPServer import SimpleHTTPRequestHandler
import BaseHTTPServer

class CORSRequestHandler(SimpleHTTPRequestHandler):
    def end_headers(self):
        self.send_header('Access-Control-Allow-Origin', '*')
        SimpleHTTPRequestHandler.end_headers(self)

if __name__ == '__main__':
    BaseHTTPServer.test(CORSRequestHandler, BaseHTTPServer.HTTPServer)
Python 2 & 3 solution
If you need compatibility for both Python 3 and Python 2, you could use this polyglot script that works in both versions. It first tries to import from the Python 3 locations, and otherwise falls back to Python 2:
#!/usr/bin/env python
try:
    # Python 3
    from http.server import HTTPServer, SimpleHTTPRequestHandler, test as test_orig
    import sys

    def test(*args):
        test_orig(*args, port=int(sys.argv[1]) if len(sys.argv) > 1 else 8000)
except ImportError:  # Python 2
    from BaseHTTPServer import HTTPServer, test
    from SimpleHTTPServer import SimpleHTTPRequestHandler

class CORSRequestHandler(SimpleHTTPRequestHandler):
    def end_headers(self):
        self.send_header('Access-Control-Allow-Origin', '*')
        SimpleHTTPRequestHandler.end_headers(self)

if __name__ == '__main__':
    test(CORSRequestHandler, HTTPServer)
Try an alternative like http-server
As SimpleHTTPServer is not really the kind of server you deploy to production, I'm assuming here that you don't care that much about which tool you use, as long as it does the job of exposing your files at http://localhost:3000 with CORS headers from a simple command line:
# install (it requires nodejs/npm)
npm install http-server -g
# run
http-server -p 3000 --cors
Need HTTPS?
If you need HTTPS locally, you can also try caddy or certbot.
Edit 2022: my favorite solution is now serve, used internally by Next.js.
Just run npx serve --cors
Some related tools you might find useful:
ngrok: when running ngrok http 3000, it creates a URL https://$random.ngrok.com that permits anyone to access your http://localhost:3000 server. It can expose to the world what runs locally on your computer (including local backends/APIs).
localtunnel: almost the same as ngrok
now: when running now, it uploads your static assets online and deploys them to https://$random.now.sh. They remain online forever unless you decide otherwise. Deployment is fast (except the first one) thanks to diffing. Now is suitable for production frontend/SPA code deployment. It can also deploy Docker and Node.js apps. It is not really free, but they have a free plan.
I had the same problem and came to this solution:
class Handler(SimpleHTTPRequestHandler):
    def send_response(self, *args, **kwargs):
        SimpleHTTPRequestHandler.send_response(self, *args, **kwargs)
        self.send_header('Access-Control-Allow-Origin', '*')
I simply created a new class inheriting from SimpleHTTPRequestHandler that only changes the send_response method.
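That class alone doesn't start anything; a minimal sketch of how it might be served (assuming Python 3, where the handler lives in http.server, and port 3000 as in the question):
#!/usr/bin/env python3
# Sketch: serve the current directory with the send_response override above.
from http.server import HTTPServer, SimpleHTTPRequestHandler

class Handler(SimpleHTTPRequestHandler):
    def send_response(self, *args, **kwargs):
        SimpleHTTPRequestHandler.send_response(self, *args, **kwargs)
        self.send_header('Access-Control-Allow-Origin', '*')

if __name__ == '__main__':
    # Bind to all interfaces on port 3000 and serve until interrupted.
    HTTPServer(('', 3000), Handler).serve_forever()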
Try https://github.com/zk4/livehttp. It supports CORS.
python3 -m pip install livehttp
Go to your folder and run livehttp. That's all.
http://localhost:5000
You'll need to provide your own implementations of do_GET() (and do_HEAD() if you choose to support HEAD operations). Something like this:
from SimpleHTTPServer import SimpleHTTPRequestHandler

class MyHTTPServer(SimpleHTTPRequestHandler):
    allowed_hosts = (('127.0.0.1', 80),)

    def do_GET(self):
        if self.client_address not in self.allowed_hosts:
            self.send_response(401, 'request not allowed')
            self.end_headers()
        else:
            SimpleHTTPRequestHandler.do_GET(self)
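The handler above still needs a server to run it; a minimal sketch of starting one (assuming Python 2 to match the SimpleHTTPServer import, with port 3000 chosen arbitrarily):
import BaseHTTPServer

if __name__ == '__main__':
    # Dispatch incoming requests on port 3000 to the restricted handler.
    BaseHTTPServer.HTTPServer(('', 3000), MyHTTPServer).serve_forever()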
My working code:
self.send_response(200)
self.send_header("Access-Control-Allow-Origin", "*")
self.end_headers()
self.wfile.write(bytes(json.dumps(answ), 'utf-8'))
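For context, here is a sketch of where such a snippet could live, with a hypothetical JSONHandler class and answ payload (assuming Python 3 and a JSON response):
#!/usr/bin/env python3
# Sketch: a handler whose do_GET returns a JSON body with a CORS header.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


class JSONHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        answ = {'status': 'ok'}  # hypothetical payload
        self.send_response(200)
        self.send_header("Access-Control-Allow-Origin", "*")
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(bytes(json.dumps(answ), 'utf-8'))


if __name__ == '__main__':
    HTTPServer(('', 8000), JSONHandler).serve_forever()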

Trouble Sending iOS push notifications from Google App Engine

I'm working on my first iOS app that uses push notifications. I have a Python script that lets me send push notifications from my machine, but I'm unable to get this working with the Google App Engine Launcher.
When I run this on GAE I get nothing - no errors and no push notifications. What am I doing wrong? I know the code for sending the actual notification is working properly but I'm not able to duplicate this on Google's servers.
Here is the script I'm trying to run with GAE Launcher:
import os
import cgi
import webapp2
from google.appengine.ext.webapp.util import run_wsgi_app
import ssl
import json
import socket
import struct
import binascii

TOKEN = 'my_app_token'
PAYLOAD = {'aps': {'alert': 'Push!', 'sound': 'default'}}

class APNStest(webapp2.RequestHandler):
    def send_push(token, payload):
        # Your certificate file
        cert = 'ck.pem'
        # APNS development server
        apns_address = ('gateway.sandbox.push.apple.com', 2195)
        # Use a socket to connect to APNS over SSL
        s = socket.socket()
        sock = ssl.wrap_socket(s, ssl_version=ssl.PROTOCOL_SSLv3, certfile=cert)
        sock.connect(apns_address)
        # Generate a notification packet
        token = binascii.unhexlify(token)
        fmt = '!cH32sH{0:d}s'.format(len(payload))
        cmd = '\x00'
        message = struct.pack(fmt, cmd, len(token), token, len(payload), payload)
        sock.write(message)
        sock.close()

    send_push(TOKEN, json.dumps(PAYLOAD))

application = webapp2.WSGIApplication([
    ('/apns', APNStest)
], debug=True)

def main():
    run_wsgi_app(application)

if __name__ == "__main__":
    main()
So the solution was very simple as I expected. I had enabled billing for the project on cloud.google.com but needed to have billing enabled at appengine.google.com as well. Stupid mistake that set me back 2 days.

How to deploy CherryPy on pythonanywhere.com

I have a python app developed on Flask. Everything works fine offline, I tried deploying on CherryPy successfully too. Now, I'm trying to deploy the same on www.pythonanywhere.com.
Here's the deploy.py I use for deploying the Flask app on CherryPy
from cherrypy import wsgiserver
from appname import app

def initiate():
    app_list = wsgiserver.WSGIPathInfoDispatcher({'/appname': app})
    server = wsgiserver.CherryPyWSGIServer(('http://username.pythonanywhere.com/'), app_list)
    try:
        server.start()
    except KeyboardInterrupt:
        server.stop()

print "Server initiated..."
initiate()
print "Ended"
I created a "manual configuration" app on pythonanywhere.com.
Here's the configuration file (username_pythonanywhere_com_wsgi.py):
import sys

path = '/home/username/appname'
if path not in sys.path:
    sys.path.append(path)

import deploy
deploy.initiate()
Now I'm pretty sure that it "almost worked", because in the server logs I could see my "Server initiated..." message.
2013-09-27 09:57:16 +0000 username.pythonanywhere.com - *** Operational MODE: single process ***
Server initiated...
Now the problem: when I try to view my app at username.pythonanywhere.com/about, it times out.
I believe this is caused by an incorrect port being given while starting the CherryPy server (in deploy.py).
Could anyone please tell how I can properly initiate the CherryPy server?
Joe Doherty is right. You want something more like this in your wsgi file:
import sys
sys.path = [<path to your web app>] + sys.path

from cherrypy._cpwsgi import CPWSGIApp
from cherrypy._cptree import Application
from <your_web_app> import <your web app class>

config_path = '<path to your cherrypy config>'

application = CPWSGIApp(
    Application(<your web app class>(), '', config=config_path)
)
I stuck everything that should be based on your particular app in <>s.
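For illustration, here is the same template filled in with hypothetical names (appname for the package, Root for the CherryPy application class, and a made-up path and config file), just to show the shape of the finished wsgi file:
import sys
sys.path = ['/home/username/appname'] + sys.path  # hypothetical project path

from cherrypy._cpwsgi import CPWSGIApp
from cherrypy._cptree import Application
from appname import Root  # hypothetical module and application class

config_path = '/home/username/appname/app.conf'  # hypothetical config file

# PythonAnywhere's WSGI layer looks for a module-level `application` object.
application = CPWSGIApp(
    Application(Root(), '', config=config_path)
)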

Python CGIHTTPServer Default Directories

I've got the following minimal code for a CGI-handling HTTP server, derived from several examples on the inner-tubes:
#!/usr/bin/env python
import BaseHTTPServer
import CGIHTTPServer
import cgitb
cgitb.enable()  # Error reporting

server = BaseHTTPServer.HTTPServer
handler = CGIHTTPServer.CGIHTTPRequestHandler
server_address = ("", 8000)
handler.cgi_directories = [""]

httpd = server(server_address, handler)
httpd.serve_forever()
Yet, when I execute the script and try to run a test script in the same directory via CGI using http://localhost:8000/test.py, I see the text of the script rather than the results of the execution.
Permissions are all set correctly, and the test script itself is not the problem (as I can run it fine using python -m CGIHTTPServer, when the script resides in cgi-bin). I suspect the problem has something to do with the default CGI directories.
How can I get the script to execute?
My suspicions were correct. The examples from which this code is derived showed the wrong way to set the default directory to be the same directory in which the server script resides. To set the default directory in this way, use:
handler.cgi_directories = ["/"]
Caution: This opens up potentially huge security holes if you're not behind any kind of a firewall. This is only an instructive example. Use only with extreme care.
The solution doesn't seem to work (at least for me) if .cgi_directories requires multiple layers of subdirectories (['/db/cgi-bin'], for instance). Subclassing the server and changing the is_cgi def seemed to work. Here's what I added/substituted in your script:
from CGIHTTPServer import _url_collapse_path

class MyCGIHTTPServer(CGIHTTPServer.CGIHTTPRequestHandler):
    def is_cgi(self):
        collapsed_path = _url_collapse_path(self.path)
        for path in self.cgi_directories:
            if path in collapsed_path:
                dir_sep_index = collapsed_path.rfind(path) + len(path)
                head, tail = collapsed_path[:dir_sep_index], collapsed_path[dir_sep_index + 1:]
                self.cgi_info = head, tail
                return True
        return False

server = BaseHTTPServer.HTTPServer
handler = MyCGIHTTPServer
Here is how you would make every .py file on the server a cgi file (you probably don't want that for production/a public server ;):
import BaseHTTPServer
import CGIHTTPServer
import cgitb; cgitb.enable()

server = BaseHTTPServer.HTTPServer

# Treat everything as a cgi file, i.e.
# `handler.cgi_directories = ["*"]`, but that is not defined, so we override is_cgi instead:
class Handler(CGIHTTPServer.CGIHTTPRequestHandler):
    def is_cgi(self):
        self.cgi_info = '', self.path[1:]
        return True

httpd = server(("", 9006), Handler)
httpd.serve_forever()

Django as middleware in a tornado app

I am trying to get tornadio (socket.io for Python) to work with Django. Is there a way to do something like this in Tornado (running Django as middleware), or can I access tornadio from within Django (uncommenting the second application definition routes straight to Django):
#!/usr/bin/env python
import os
import tornado.httpserver
import tornado.ioloop
import tornado.web
import tornado.wsgi
import sys

sys.path.append('..')

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        self.write("Hello, world")

os.environ['DJANGO_SETTINGS_MODULE'] = 'settings'
import django.core.handlers.wsgi
wsgi = django.core.handlers.wsgi.WSGIHandler()
django_container = tornado.wsgi.WSGIContainer(wsgi)

application = tornado.web.Application([
    (r"/", MainHandler),
    django_container
])
# application = django_container

tornado.httpserver.HTTPServer(application).listen(8888)
tornado.ioloop.IOLoop.instance().start()
I would look at using this project to help:
https://github.com/koblas/django-on-tornado
It is an integration of Tornado and Django that allows you to do this:
python manage.py runtornado --reload 8888
Included is a sample chat service built using django and tornado.
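Separately from that project, a common pattern is to let Tornado serve its own handlers and hand everything else to the Django WSGI app via tornado.web.FallbackHandler. A minimal sketch (the route layout and the settings module name are assumptions, not part of django-on-tornado):
#!/usr/bin/env python
# Sketch: native Tornado routes first, everything else handled by Django over WSGI.
import os

import tornado.httpserver
import tornado.ioloop
import tornado.web
import tornado.wsgi

os.environ['DJANGO_SETTINGS_MODULE'] = 'settings'  # assumed settings module
import django.core.handlers.wsgi

django_app = tornado.wsgi.WSGIContainer(django.core.handlers.wsgi.WSGIHandler())

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        self.write("Hello from Tornado")

application = tornado.web.Application([
    (r"/tornado", MainHandler),                                        # served by Tornado itself
    (r".*", tornado.web.FallbackHandler, dict(fallback=django_app)),   # everything else -> Django
])

if __name__ == '__main__':
    tornado.httpserver.HTTPServer(application).listen(8888)
    tornado.ioloop.IOLoop.instance().start()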
