Can Python CGI scripts read and write session data? If so, how? Is there a high-level API, or must I roll my own classes?
There's no "session" in CGI. If you're using raw CGI, you must roll your own session-handling code.
Basically, sessions work by creating a unique cookie value, sending it in a response header to the client, and then checking for that cookie on every subsequent request. Store the session data somewhere on the server (memory, database, disk) and use the cookie value as a key to retrieve it on every request the client makes.
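To make that concrete, here is a minimal sketch of hand-rolled session handling for a raw CGI script. The cookie name, storage directory, and pickle format are all arbitrary choices for illustration; a real implementation needs locking, expiry, and stricter validation:
import os
import pickle
import uuid

SESSION_DIR = "/tmp/sessions"  # assumed location for the server-side store

def get_session_id():
    # Look for our session cookie in the request headers CGI gives us.
    for part in os.environ.get("HTTP_COOKIE", "").split(";"):
        name, _, value = part.strip().partition("=")
        if name == "sid" and value.isalnum():  # reject anything suspicious
            return value
    return None

def load_session(sid):
    path = os.path.join(SESSION_DIR, sid)
    if os.path.exists(path):
        with open(path, "rb") as f:
            return pickle.load(f)
    return {}

def save_session(sid, data):
    if not os.path.isdir(SESSION_DIR):
        os.makedirs(SESSION_DIR)
    with open(os.path.join(SESSION_DIR, sid), "wb") as f:
        pickle.dump(data, f)

sid = get_session_id()
new_session = sid is None
if new_session:
    sid = uuid.uuid4().hex  # the "unique cookie value"
session = load_session(sid)
session["visits"] = session.get("visits", 0) + 1
save_session(sid, session)

# Emit headers: set the cookie on the first visit, then the body.
if new_session:
    print "Set-Cookie: sid=%s" % sid
print "Content-Type: text/html"
print
print "You have visited %d time(s)." % session["visits"]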
However, CGI is not how you develop web applications in Python these days. Use WSGI. Use a web framework.
Here's a quick example using CherryPy. cherrypy.tools.sessions is a CherryPy tool that handles setting/retrieving the cookie and associating it with data automatically:
import cherrypy

class HelloSessionWorld(object):

    @cherrypy.tools.sessions()
    def index(self):
        if 'data' in cherrypy.session:
            return "You have a cookie! It says: %r" % cherrypy.session['data']
        else:
            return "You don't have a cookie. <a href='getcookie'>Get one</a>."
    index.exposed = True

    @cherrypy.tools.sessions()
    def getcookie(self):
        cherrypy.session['data'] = 'Hello World'
        return "Done. Please <a href='..'>return</a> to see it"
    getcookie.exposed = True

application = cherrypy.tree.mount(HelloSessionWorld(), '/')

if __name__ == '__main__':
    cherrypy.quickstart(application)
Note that this code is a WSGI application, in the sense that you can publish it to any WSGI-enabled web server (Apache has mod_wsgi). Also, CherryPy has its own WSGI server, so you can just run the code with python and it will start serving on http://localhost:8080/
My 'low-cost' web hosting plan doesn't permit using WSGI; the mod_wsgi Apache module can't be used because it is a shared Apache server. So I am developing my own class.
To avoid starting from zero, I am experimenting with the session class implementation available on this site: http://cgi.tutorial.codepoint.net/a-session-class
Related
I am having some problems understanding how an application written with python jsonrpc2 is related to a WSGI application.
I have a JSON-RPC test application in a file called greeting.py.
It is a simple test case:
def hello(name=None, greeting=None):
    # Build the greeting string
    result = "From jsonrpc you have: {greeting} , {name}".format(greeting=greeting, name=name)
    # print result  # (uncomment to also print it to stdout)
    # You can basically now return the string result
    return result
Using the jsonrpc2 module, I am able to POST JSON to this function, which then returns a JSON response.
Sample POST:
self.call_values_dict_webpost = dict(zip(["jsonrpc","method","id","params"],["2.0","greeting.hello","2",["Hari","Hello"]]))
Response returned as json:
u"jsonrpc": u"2.0", u"id": u"2", u"result": u"From jsonrpc you have: Hello , Hari"
I start the server with an entry point defined in the jsonrpc2 module, which essentially does the following:
from jsonrpc2 import JsonRpcApplication
from wsgiref.simple_server import make_server

app = JsonRpcApplication()
app.rpc.add_module("greeting")
host, port = "localhost", 8080  # whichever host/port you serve on
httpd = make_server(host, port, app)
httpd.serve_forever()
I can currently run this jsonrpc2 server as a standalone "web app" and test it appropriately.
I want to understand how to go from this simple function-based web app to a WSGI web app that reads and writes JSON, without using a web framework such as Flask or Django (which I know some of).
Is there a simple conceptual step that makes my function above compatible with a WSGI "callable", or am I just better off using Flask or Django to receive a JSON POST and write the JSON response?
I don't know that particular module, but it looks like your app object is the WSGI application. All that code does is instantiate the app, then create a server for it via wsgiref. So instead of doing that, just point your real WSGI server (Apache/mod_wsgi, gunicorn, or whatever) at that app object in exactly the same way you would serve Flask or Django.
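For example, if those two setup lines live in a module of their own, any WSGI server can import the resulting callable (the file name and port here are just example choices):
# greeting_wsgi.py -- exposes a module-level WSGI callable
from jsonrpc2 import JsonRpcApplication

app = JsonRpcApplication()
app.rpc.add_module("greeting")
Then, e.g., gunicorn can serve it with:
gunicorn greeting_wsgi:app --bind 127.0.0.1:8000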
GitHub offers to send post-receive hooks to a URL of your choice when there's activity on your repo.
I want to write a small Python command-line/background application (i.e. no GUI or webapp) running on my computer (later on a NAS), which continually listens for those incoming POST requests and, once a POST is received from GitHub, processes the JSON information contained within. Processing the JSON as soon as I have it is no problem.
The POST can come from a small number of IPs given by GitHub; I plan/hope to specify a port on my computer where it should get sent.
The problem is, I don't know enough about web technologies to deal with the vast number of options you find when searching: do I use Django, Requests, sockets, Flask, microframeworks...? I don't know what most of the terms involved mean, and most options sound like they offer too much/are too big to solve my problem. I'm simply overwhelmed and don't know where to start.
Most tutorials about POST/GET I could find seem to be concerned with either sending data or directly requesting it from a website, but not with continually listening for it.
I feel the problem is not really a difficult one and will boil down to a couple of lines once I know where to go/how to do it. Can anybody offer pointers/tutorials/examples/sample code?
First things first: the web is request-response based. Something will request your URL, and you will respond accordingly. Your server application will be continuously listening on a port; you don't have to worry about that part.
Here is a simple version in Flask (my micro framework of choice):
from flask import Flask, request
import json

app = Flask(__name__)

@app.route('/', methods=['POST'])
def foo():
    data = json.loads(request.data)
    print "New commit by: {}".format(data['commits'][0]['author']['name'])
    return "OK"

if __name__ == '__main__':
    app.run()
Here is a sample run, using the example from github:
Running the server (the above code is saved in sample.py):
burhan@lenux:~$ python sample.py
* Running on http://127.0.0.1:5000/
Here is a request to the server, basically what github will do:
burhan@lenux:~$ http POST http://127.0.0.1:5000 < sample.json
HTTP/1.0 200 OK
Content-Length: 2
Content-Type: text/html; charset=utf-8
Date: Sun, 27 Jan 2013 19:07:56 GMT
Server: Werkzeug/0.8.3 Python/2.7.3
OK # <-- this is the response the client gets
Here is the output at the server:
New commit by: Chris Wanstrath
127.0.0.1 - - [27/Jan/2013 22:07:56] "POST / HTTP/1.1" 200 -
Here's a basic web.py example for receiving data via POST and doing something with it (in this case, just printing it to stdout):
import web

urls = ('/.*', 'hooks')
app = web.application(urls, globals())

class hooks:
    def POST(self):
        data = web.data()
        print
        print 'DATA RECEIVED:'
        print data
        print
        return 'OK'

if __name__ == '__main__':
    app.run()
I POSTed some data to it using hurl.it (after forwarding 8080 on my router), and saw the following output:
$ python hooks.py
http://0.0.0.0:8080/
DATA RECEIVED:
test=thisisatest&test2=25
50.19.170.198:33407 - - [27/Jan/2013 10:18:37] "HTTP/1.1 POST /hooks" - 200 OK
You should be able to swap out the print statements for your JSON processing.
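For instance, assuming the hook delivers GitHub's classic form-encoded payload field containing the JSON (an assumption about the hook format), the handler could become:
import json
import web

class hooks:
    def POST(self):
        # Classic GitHub post-receive hooks send form data with a "payload" field.
        payload = json.loads(web.input().payload)
        print 'New commit by:', payload['commits'][0]['author']['name']
        return 'OK'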
To specify the port number, call the script with an extra argument:
$ python hooks.py 1234
I would use:
https://github.com/carlos-jenkins/python-github-webhooks
You can configure a web server to use it, or, if you just need a process running without a web server, you can launch the integrated server:
python webhooks.py
This will allow you to do everything you said you need. It nevertheless requires a bit of setup in your repository and in your hooks.
Late to the party and shameless self-promotion, sorry.
If you are using Flask, here's some very minimal code to listen for webhooks:
from flask import Flask, request, Response

app = Flask(__name__)

@app.route('/webhook', methods=['POST'])
def respond():
    print(request.json)  # Handle webhook request here
    return Response(status=200)
And the same example using Django:
import json

from django.http import HttpResponse
from django.views.decorators.http import require_POST

@require_POST
def example(request):
    # Django requests have no .json attribute; parse the body instead.
    payload = json.loads(request.body)
    print(payload)  # Handle webhook request here
    return HttpResponse('Hello, world. This is the webhook response.')
If you need more information, here's a great tutorial on how to listen for webhooks with Python.
If you're looking to watch for changes in any repo...
1. If you own the repo that you want to watch
In your repo page, go to Settings
click Webhooks, then Add webhook (top right)
give it your IP/endpoint and set everything up to your liking
use any server to get notified
2. Not your Repo
take the url you want, e.g. https://github.com/fire17/gd-xo/
add /commits/master.atom to the end, such as:
https://github.com/fire17/gd-xo/commits/master.atom
Use any library you want to get that page's content, for example:
import requests
response = requests.get("https://github.com/fire17/gd-xo/commits/master.atom").text
Filter out the keys you want, for example the <updated> element:
response.split("<updated>")[1].split("</updated>")[0]
# '2021-08-06T19:01:53Z'
Make a loop that checks this every so often; if the string has changed, you can initiate a clone/pull or do whatever you like, as in the sketch below.
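A minimal polling sketch along those lines (the feed URL and the 60-second interval are example choices):
import time
import requests

FEED = "https://github.com/fire17/gd-xo/commits/master.atom"

def latest_update(url):
    # Return the first <updated> timestamp in the Atom feed.
    text = requests.get(url).text
    return text.split("<updated>")[1].split("</updated>")[0]

last_seen = latest_update(FEED)
while True:
    time.sleep(60)  # check every minute
    current = latest_update(FEED)
    if current != last_seen:
        last_seen = current
        print("Repo changed at %s -- clone/pull or whatever you like" % current)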
How do I make an authenticated request from a Python script to App Engine? I have found lots of different methods on the web, but none work.
E.g., "How do you access an authenticated Google App Engine service from a (non-web) python client?" doesn't work; the request returns the login page.
That post is old, maybe something changed since then.
Has anyone got a nice wrapped object to do this?
Answered my own question:
import sys

from google.appengine.tools import appengine_rpc

use_production = True

if use_production:
    base_url = 'myapp.appspot.com'
else:
    base_url = 'localhost:8080'

def passwdFunc():
    return ('user@gmail.com', 'password')

def main(argv):
    rpcServer = appengine_rpc.HttpRpcServer(base_url,
                                            passwdFunc,
                                            None,
                                            'myapp',
                                            save_cookies=True,
                                            secure=use_production)
    # Makes the actual call; I guess it is the same for POST and GET?
    blah = rpcServer.Send('/some_path/')
    print blah

if __name__ == '__main__':
    main(sys.argv)
You can see one example of a non-web authenticated python client making requests in the Python client library used to process GAE Pull Queues at https://developers.google.com/appengine/docs/python/taskqueue/overview-pull#Using_the_Task_Queue_REST_API_with_the_Python_Google_API_Library
How do I get App Engine to generate the URL of the server it is currently running on?
If the application is running on the development server, it should return
http://localhost:8080/
and if it is running on Google's servers, it should return
http://application-name.appspot.com
You can get the URL that was used to make the current request from within your webapp handler via self.request.url, or you can piece it together using the self.request.environ dict (which you can read about in the WebOb docs; the request object inherits from WebOb's Request).
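For instance, in a (hypothetical) webapp handler:
from google.appengine.ext import webapp

class WhereAmIHandler(webapp.RequestHandler):
    def get(self):
        # Full URL of the current request, e.g. http://localhost:8080/whereami
        self.response.out.write(self.request.url)
        # Or rebuild just the base URL from the WSGI environ:
        scheme = self.request.environ['wsgi.url_scheme']
        host = self.request.environ['HTTP_HOST']
        self.response.out.write('%s://%s/' % (scheme, host))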
You can't "get the url for the server" itself, as many urls could be used to point to the same instance.
If your aim is really just to discover whether you are in development or production, then use:
'Development' in os.environ['SERVER_SOFTWARE']
Here is an alternative answer.
from google.appengine.api import app_identity
server_url = app_identity.get_default_version_hostname()
On the dev appserver this would show:
localhost:8080
and on appengine
your_app_id.appspot.com
If you're using webapp2 as your framework, chances are you're already using URI routing in your web application:
http://webapp2.readthedocs.io/en/latest/guide/routing.html
app = webapp2.WSGIApplication([
    webapp2.Route('/', handler=HomeHandler, name='home'),
])
When building URIs with webapp2.uri_for(), just pass the _full=True argument to generate an absolute URI, including the current domain, port, and protocol according to the current runtime environment.
uri = uri_for('home')
# /
uri = uri_for('home', _full=True)
# http://localhost:8080/
# http://application-name.appspot.com/
# https://application-name.appspot.com/
# http://your-custom-domain.com/
This function can be used in your Python code or directly from the templating engine (if you register it); very handy.
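For instance, with webapp2_extras.jinja2 the registration can go through the app config ('globals' is part of that module's default config; the rest is a sketch reusing the route from above):
config = {
    'webapp2_extras.jinja2': {
        # Lets templates call {{ uri_for('home', _full=True) }} directly.
        'globals': {'uri_for': webapp2.uri_for},
    },
}
app = webapp2.WSGIApplication([
    webapp2.Route('/', handler=HomeHandler, name='home'),
], config=config)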
Check webapp2.Router.build() in the API reference for a complete explanation of the parameters used to build URIs.
Given: when a user requests /foo on my server, I send the following HTTP response (without closing the connection):
Content-Type: multipart/x-mixed-replace; boundary=-----------------------
-----------------------
Content-Type: text/html
foo
When the user goes to /bar (which will send 204 No Content so the view doesn't change), I want to send the following data in the initial response.
-----------------------
Content-Type: text/html
bar
How would I get the second request to trigger this from the initial response? I'm planning on possibly creating a fancy [engines that support multipart/x-mixed-replace (currently only Gecko)]-only email webapp that does server-push and Ajax effects without any JavaScript, just for fun.
No complete answer, but:
In your question, you're describing a Comet-style architecture. Regarding support of Comet-style techniques in Python/WSGI, there is a StackOverflow question, which talks about various Python servers with support for long-running requests a la Comet.
Also interesting is this mail thread in the Python Web-SIG: "Could WSGI handle Asynchronous response?". In May 2008, there was a broad discussion in the Web-SIG about the topic of asynchronous requests in WSGI.
A recent development is evserver, a lightweight WSGI server, which implements the Asynchronous WSGI extension proposed by Christopher Stawarz in the Web-SIG in May 2008.
Finally, the Tornado web server supports non-blocking asynchronous requests. It has a chat example application using long polling, which has similarities with your requirements.
If the problem is to pass some command from the /bar application to the /foo application, and you are using a servlet-like approach (the Python code is loaded once, not once per request as in CGI), you can just change some class property of the /foo application from /bar and react to the change in the /foo instance (by checking the property's state).
Obviously, the /foo application should not return right after the first request, but should instead yield content line by line.
Though this is just theory; I have not tried it myself.
I have created a small example (just for fun, you know :))
import threading

num = 0
cond = threading.Condition()

def app(environ, start_response):
    global num
    # Bump the shared counter and wake up all waiting generators.
    cond.acquire()
    num += 1
    cond.notifyAll()
    cond.release()
    start_response("200 OK", [("Content-Type", "multipart/x-mixed-replace; boundary=xxx")])
    while True:
        n = num
        s = "--xxx\r\nContent-Type: text/html\r\n\r\n%s\n" % n
        yield s
        # Wait for num to change:
        cond.acquire()
        while num == n:
            cond.wait()
        cond.release()

from cherrypy.wsgiserver import CherryPyWSGIServer

server = CherryPyWSGIServer(("0.0.0.0", 3000), app)
try:
    server.start()
except KeyboardInterrupt:
    server.stop()

# Now whenever you visit http://127.0.0.1:3000/, the number increases.
# It also automatically increases in all previously opened windows/tabs.
The idea of a shared variable and thread synchronization (using a condition variable object) relies on the fact that the WSGI server provided by CherryPyWSGIServer is threaded.
Not sure if this is quite what you're looking for, but there is a fairly old way of doing server push using the MIME content type multipart/x-mixed-replace.
Basically, you compose the response as a MIME object with content type multipart/x-mixed-replace and send the first "version" of the document down. The browser will keep the socket open.
Then, as the server decides to push more data, a new "version" of the document gets sent from the server, and the browser intelligently replaces the content (within whatever frame/iframe contains it).
This was an early way of doing webcams, where the server would send down (push) image after image, and the browser would just keep replacing the image in the document over and over. This is also a way of doing a "Loading..." message over a single HTTP request.