How do I receive GitHub webhooks in Python?

GitHub offers to send post-receive hooks to a URL of your choice when there's activity on your repo.
I want to write a small Python command-line/background (i.e. no GUI or webapp) application running on my computer (later on a NAS), which continually listens for those incoming POST requests, and once a POST is received from GitHub, processes the JSON information contained within. Processing the JSON as soon as I have it is no problem.
The POST can come from a small number of IPs given by GitHub; I plan/hope to specify a port on my computer where it should get sent.
The problem is, I don't know enough about web technologies to deal with the vast number of options you find when searching: do I use Django, Requests, sockets, Flask, microframeworks...? I don't know what most of the terms involved mean, and most sound like they offer too much/are too big to solve my problem; I'm simply overwhelmed and don't know where to start.
Most tutorials about POST/GET I could find seem to be concerned with either sending or directly requesting data from a website, but not with continually listening for it.
I feel the problem is not really a difficult one, and will boil down to a couple of lines, once I know where to go/how to do it. Can anybody offer pointers/tutorials/examples/sample code?

First thing is, the web is request-response based. Something will send a request to your URL, and you will respond accordingly. Your server application will be continuously listening on a port; that part you don't have to worry about.
Here is a version in Flask (my microframework of choice):
from flask import Flask, request
import json

app = Flask(__name__)

@app.route('/', methods=['POST'])
def foo():
    data = json.loads(request.data)
    print "New commit by: {}".format(data['commits'][0]['author']['name'])
    return "OK"

if __name__ == '__main__':
    app.run()
Here is a sample run, using the example from github:
Running the server (the above code is saved in sample.py):
burhan@lenux:~$ python sample.py
* Running on http://127.0.0.1:5000/
Here is a request to the server, basically what github will do:
burhan@lenux:~$ http POST http://127.0.0.1:5000 < sample.json
HTTP/1.0 200 OK
Content-Length: 2
Content-Type: text/html; charset=utf-8
Date: Sun, 27 Jan 2013 19:07:56 GMT
Server: Werkzeug/0.8.3 Python/2.7.3
OK # <-- this is the response the client gets
Here is the output at the server:
New commit by: Chris Wanstrath
127.0.0.1 - - [27/Jan/2013 22:07:56] "POST / HTTP/1.1" 200 -
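The original poster also mentioned restricting which sources may hit the endpoint. Beyond filtering by GitHub's published IPs, you can set a secret on the hook and verify GitHub's signature on each delivery. A minimal sketch of the handler above with that check added (assumes a secret has been configured on the hook; the X-Hub-Signature-256 header and the sha256= prefix are per GitHub's webhook documentation, and the secret value here is a placeholder):
import hashlib
import hmac
from flask import Flask, request, abort

app = Flask(__name__)
WEBHOOK_SECRET = b'change-me'  # placeholder: must match the secret configured on the hook

@app.route('/', methods=['POST'])
def foo():
    # GitHub sends an HMAC-SHA256 of the raw body in X-Hub-Signature-256 as "sha256=<hexdigest>"
    expected = 'sha256=' + hmac.new(WEBHOOK_SECRET, request.data, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, request.headers.get('X-Hub-Signature-256', '')):
        abort(403)
    data = request.get_json()
    print("New commit by: {}".format(data['commits'][0]['author']['name']))
    return "OK"

if __name__ == '__main__':
    app.run()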

Here's a basic web.py example for receiving data via POST and doing something with it (in this case, just printing it to stdout):
import web

urls = ('/.*', 'hooks')
app = web.application(urls, globals())

class hooks:
    def POST(self):
        data = web.data()
        print
        print 'DATA RECEIVED:'
        print data
        print
        return 'OK'

if __name__ == '__main__':
    app.run()
I POSTed some data to it using hurl.it (after forwarding 8080 on my router), and saw the following output:
$ python hooks.py
http://0.0.0.0:8080/
DATA RECEIVED:
test=thisisatest&test2=25
50.19.170.198:33407 - - [27/Jan/2013 10:18:37] "HTTP/1.1 POST /hooks" - 200 OK
You should be able to swap out the print statements for your JSON processing.
To specify the port number, call the script with an extra argument:
$ python hooks.py 1234
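For instance, here is a sketch of the same web.py handler with the print statements swapped for JSON processing, assuming the hook is configured to deliver JSON and the payload has the shape shown in the Flask answer above:
import json
import web

urls = ('/.*', 'hooks')
app = web.application(urls, globals())

class hooks:
    def POST(self):
        # web.data() returns the raw POST body; parse it as GitHub's push payload
        payload = json.loads(web.data())
        print('New commit by: {}'.format(payload['commits'][0]['author']['name']))
        return 'OK'

if __name__ == '__main__':
    app.run()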

I would use:
https://github.com/carlos-jenkins/python-github-webhooks
You can configure a web server to use it, or if you just need a process running there without a web server you can launch the integrated server:
python webhooks.py
This will allow you to do everything you said you need. Nevertheless, it requires a bit of setup in your repository and in your hooks.
Late to the party and shameless self-promotion, sorry.

If you are using Flask, here's some very minimal code to listen for webhooks:
from flask import Flask, request, Response

app = Flask(__name__)

@app.route('/webhook', methods=['POST'])
def respond():
    print(request.json)  # Handle webhook request here
    return Response(status=200)
And the same example using Django:
import json

from django.http import HttpResponse
from django.views.decorators.http import require_POST

@require_POST
def example(request):
    print(json.loads(request.body))  # Handle webhook request here; Django has no request.json
    return HttpResponse('Hello, world. This is the webhook response.')
If you need more information, here's a great tutorial on how to listen for webhooks with Python.

If you're looking to watch for changes in any repo...
1. If you own the repo that you want to watch
In your repo page, go to Settings
click Webhooks, then add a new webhook (top right)
give it your IP/endpoint and set everything up to your liking
use any server to get notified
2. Not your Repo
take the URL you want, e.g. https://github.com/fire17/gd-xo/
add /commits/master.atom to the end, such as:
https://github.com/fire17/gd-xo/commits/master.atom
Use any library you want to get that page's content and filter out the keys you want, for example the <updated> element:
>>> response = requests.get("https://github.com/fire17/gd-xo/commits/master.atom").text
>>> response.split("<updated>")[1].split("</updated>")[0]
'2021-08-06T19:01:53Z'
make a loop that checks this every so often and, if this string has changed, initiate a clone/pull or do whatever you like (see the sketch below)
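A minimal polling sketch of that idea, assuming the requests library is installed (the repo URL and the 60-second interval are just examples):
import time
import requests

FEED_URL = "https://github.com/fire17/gd-xo/commits/master.atom"

def latest_update(url):
    # Crude extraction of the first <updated> timestamp from the atom feed
    text = requests.get(url).text
    return text.split("<updated>")[1].split("</updated>")[0]

last_seen = latest_update(FEED_URL)
while True:
    time.sleep(60)  # check every minute
    current = latest_update(FEED_URL)
    if current != last_seen:
        last_seen = current
        print("Repo changed at " + current)  # trigger a clone/pull or whatever you like here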

Related

Trading view alerts to trigger market order through python and Oanda's API

I'm trying to trigger a Python module (a market order for Oanda) using webhooks (from TradingView).
Similar to this:
1) https://www.youtube.com/watch?v=88kRDKvAWMY&feature=youtu.be
and this:
2) https://github.com/Robswc/tradingview-webhooks-bot
But my broker is Oanda, so I'm using Python to place the trade. This link has more information:
https://github.com/hootnot/oanda-api-v20
The flow is webhook -> ngrok -> Python. When a webhook is sent (while the script is running), ngrok shows a 500 internal server error, saying the server encountered an internal error and was unable to complete the request: either the server is overloaded or there is an error in the application.
This is what my script says when it's running (see picture): it first prints some output related to the market order, then:
[running script picture]
One thing I noticed is that after the debug line it doesn't say Running on... (so maybe my Flask server is not active?).
Here is the Python script:
from flask import Flask, request
import market_orders

# Create Flask object called app.
app = Flask(__name__)

# Create root to easily let us know its on/working.
@app.route('/')
def root():
    return 'online'

@app.route('/webhook', methods=['POST'])
def webhook():
    if request.method == 'POST':
        # Parse the string data from tradingview into a python dict
        print(market_orders.myfucn())
    else:
        print('do nothing')

if __name__ == '__main__':
    app.run()
Let me know if there is any other information that would be helpful.
Thanks for your help.
I fixed it!!!! Google FTW
The first thing I learned was how to make my module a Flask server. I followed these websites to figure this out:
This link helped me set up the Flask file in a virtual environment. I also moved my Oanda modules to this new folder, opened the ngrok app from this folder via the command window, and ran the module from within the command window using flask run.
https://topherpedersen.blog/2019/12/28/how-to-setup-a-new-flask-app-on-a-mac/
This link showed me how to set the FLASK_APP and FLASK_ENV environment variables:
Flask not displaying http address when I run it
Then I fixed the internal server error by adding return 'okay' after print('do nothing') in my script. I learned this from:
Flask Value error view function did not return a response
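Putting those fixes together, the corrected script looks roughly like this (a sketch based on the question's code; market_orders.myfucn() is the poster's own module, and the key change is the added return):
from flask import Flask, request
import market_orders

app = Flask(__name__)

@app.route('/')
def root():
    return 'online'

@app.route('/webhook', methods=['POST'])
def webhook():
    if request.method == 'POST':
        # Parse the string data from tradingview into a python dict
        print(market_orders.myfucn())
    else:
        print('do nothing')
    return 'okay'  # returning a response fixes the 500 internal server error

if __name__ == '__main__':
    app.run()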

Implementing Google Directory API users watch with Python

I'm having some trouble understanding and implementing the Google Directory API's users watch function and push notification system (https://developers.google.com/admin-sdk/reports/v1/guides/push#creating-notification-channels) in my Python GAE app. What I'm trying to achieve is that any user (admin) who uses my app would be able to watch user changes within his own domain.
I've verified the domain I want to use for notifications and implemented the watch request as follows:
directoryauthdecorator = OAuth2Decorator(
    approval_prompt='force',
    client_id='my_client_id',
    client_secret='my_client_secret',
    callback_path='/oauth2callback',
    scope=['https://www.googleapis.com/auth/admin.directory.user'])

class PushNotifications(webapp.RequestHandler):
    @directoryauthdecorator.oauth_required
    def get(self):
        auth_http = directoryauthdecorator.http()
        service = build("admin", "directory_v1", http=auth_http)
        uu_id = str(uuid.uuid4())
        param = {}
        param['customer'] = 'my_customer'
        param['event'] = 'add'
        param['body'] = {'type': 'web_hook', 'id': uu_id, 'address': 'https://my-domain.com/pushNotifications'}
        watchUsers = service.users().watch(**param).execute()

application = webapp.WSGIApplication(
    [
        ('/pushNotifications', PushNotifications),
        (directoryauthdecorator.callback_path, directoryauthdecorator.callback_handler())],
    debug=True)
Now, the receiving part is what I don't understand. When I add a user on my domain and check the app's request logs I see some activity, but there's no usable data. How should I approach this part?
Any help would be appreciated. Thanks.
The problem
It seems like there's been some confusion in implementing the handler. Your handler actually sets up the notifications channel by sending a POST request to the Reports API endpoint. As the docs say:
To set up a notification channel for messages about changes to a particular resource, send a POST request to the watch method for the resource.
source
You should only need to send this request one time to set up the channel, and the "address" parameter should be the URL on your app that will receive the notifications.
Also, it's not clear what is happening with the following code:
param={}
param['customer']='my_customer'
param['event']='add'
Are you just breaking the code in order to post it here? Or is it actually written that way in the file? You should actually preserve, as much as possible, the code that your app is running so that we can reason about it.
The solution
It seems from the docs you linked, in the "Receiving Notifications" section, that you should have code at the "address" you specified that inspects the POST request body and headers of each notification push, and then does something with that data (like store it in BigQuery or send an email to the admin, etc.).
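For the receiving side, a rough sketch of what the handler at the "address" URL could look like on GAE, extending the PushNotifications class from the question (the X-Goog-* header names come from the push notification docs; logging is just a stand-in for real processing):
import logging
from google.appengine.ext import webapp

class PushNotifications(webapp.RequestHandler):
    # ...existing get() that sets up the watch channel...

    def post(self):
        # Google delivers each notification as a POST to the 'address' URL,
        # with metadata in X-Goog-* headers and the changed resource in the body.
        channel_id = self.request.headers.get('X-Goog-Channel-ID')
        resource_state = self.request.headers.get('X-Goog-Resource-State')  # e.g. 'sync', then change events
        logging.info('Notification on channel %s, state %s', channel_id, resource_state)
        logging.info('Body: %s', self.request.body)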
Managed to figure it out. In the App Engine logs I noticed that each time I make a "watched" change on my domain, I get a POST request from Google's API, but with a 302 code. This was because I had login: required configured in my app.yaml for the script handling the requests, so the POST was being redirected to the login page instead of the processing script.

Instagram Real-time API can't see my server

I'm testing the Instagram Real-time API with Python and Flask, and every time I get this response from the Instagram server:
{
    "meta": {
        "error_type": "APISubscriptionError",
        "code": 400,
        "error_message": "Unable to reach callback URL \"http:\/\/my_callback_url:8543\/instagram\"."
    }
}
The request:
curl -F 'client_id=my_client_id...' \
-F 'client_secret=my_client_secret...' \
-F 'object=tag' \
-F 'aspect=media' \
-F 'object_id=fox' \
-F 'callback_url=http://my_callback_url:8543/instagram' \
https://api.instagram.com/v1/subscriptions/
And this is the code of the Flask server:
from flask import Flask
from flask import request
from instagram import subscriptions

app = Flask(__name__)

CLIENT_ID = "my_client_id..."
CLIENT_SECRET = "my_client_secret..."

def process_tag_update(update):
    print 'Received a push: ', update

reactor = subscriptions.SubscriptionsReactor()
reactor.register_callback(subscriptions.SubscriptionType.TAG, process_tag_update)

@app.route('/instagram', methods=['GET'])
def handshake():
    # GET method is used when validating the endpoint as per the PubSubHubbub protocol
    mode = request.values.get('hub.mode')
    challenge = request.values.get('hub.challenge')
    verify_token = request.values.get('hub.verify_token')
    if challenge:
        return challenge
    return 'It is not a valid challenge'

@app.route('/instagram', methods=['POST'])
def callback():
    # POST method is used for the event notifications
    x_hub_signature = request.headers.get('X-Hub-Signature')
    raw_response = request.data
    try:
        reactor.process(CLIENT_SECRET, raw_response, x_hub_signature)
    except subscriptions.SubscriptionVerifyError:
        print 'Signature mismatch'
    return 'done'

def server():
    """ Main server, will allow us to make it wsgi'able """
    app.run(host='0.0.0.0', port=8543, debug=True)

if __name__ == '__main__':
    server()
The machine has a public IP and the port is open to everyone. I can reach the URL from other networks.
Why can't Instagram reach my URL? Is there a blacklist or something like that?
Update 1
I have tested the same code with several frameworks and WSGI servers (Django, Flask, Node.js, Gunicorn, Apache) and different responses on the GET/POST endpoint, and I always get the same 400 error message.
I have also checked the packets received on my network interface with Wireshark and I get the expected results for calls from any network, but I don't see any packets when I make the subscription request.
So... Is this a bug? Could be my IP in any blacklist for some reason?
I had exactly the same issue. It worked when I accidentally restarted the router and got a different IP. It does seem it could be an IP issue, and the Unable to reach callback URL... message is not really helpful in this case.
I agree, there are plenty of AWS servers answering for that API and some are not working. Ping api.instagram.com and you'll see you get multiple different IPs for that domain name: there is DNS round robin, so you are not reaching the same server every time.
I've found one server (IP : 52.6.133.72) which seems to be working for subscription and have configured my server to use that one (by editing the /etc/hosts file). Not a reliable solution ... but it's working.
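For reference, that workaround amounts to adding a single line like this to /etc/hosts (the IP is the one mentioned above and may stop working at any time):
52.6.133.72    api.instagram.com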

Sending a flask request within a flask request

I am implementing an endpoint in my Flask application that receives a collection of HTTP requests, and returns a collection of the corresponding HTTP responses. In order to accomplish this, I need my endpoint to call other endpoints in order to construct the result. However, because Flask is blocking while processing the original request, it cannot process the nested requests and the application gets deadlocked.
Is there any way to issue a request within a request in flask in a way that doesn't result in a deadlock?
I included a segment of my code which I believe should be enough to illustrate the problem without overwhelming you. If you would like to see more of it please let me know and I'll share.
from requests import Session, Request

def split(request):
    multipart = request.stream.read()
    boundary = request.content_type.split(';')[1]
    prefix = ' boundary"'
    suffix = '"'
    delimiter = '--%s' % boundary[len(prefix)+1:-len(suffix)]
    subrequests = [s.lstrip() for s in multipart.split(delimiter)]
    for sub in subrequests:
        status_line, _, more_lines = sub.partition('\n')
        method, path, version = status_line.split()
        headers, _, body = more_lines.partition('\n\n')
        url = 'http://localhost:3000' + path
        yield Request(method, url, headers=headers, data=body)

@app.route('/batch', methods=["GET", "POST"])
def batch():
    subrequests = split(request)
    session = Session()
    responses = []
    for sub in subrequests:
        responses.append(session.send(sub.prepare()))  # Deadlock!
There are two candidate solutions that I considered which I found to be unsatisfactory:
Don't issue a full request. Instead, just call the function that is mapped to the endpoint of interest (url_for). I am unsatisfied by this approach because the nested requests have their own headers and cookies, which it neglects. Furthermore, code in the 'before_request' and 'after_request' handlers won't get called automatically.
Run multiple instances of the application. This will solve the problem, but expose my service to a pretty simple DoS attack. If I have X instances running, All an attacker would need to do is to hit my service with X different requests to cause a deadlock.
Thank you.
Knowing that the internal Flask server is not production-ready and should be used only for development, pass the threaded=True parameter to app.run:
app.run(debug=True, threaded=True)
This happens because you're using the Flask dev server, which is not for production use.
In a production environment you would use an application server (uWSGI, Gunicorn, Tornado, ...), with or without a web server layer (NGINX, Apache, ...) in front, to proxy/balance connections to the workers, which protects (not completely, but acceptably in many environments) against DoS attacks.
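For example, a minimal Gunicorn invocation with a few worker processes, which NGINX or Apache could then proxy to (the module and app names here are placeholders for your own):
$ gunicorn --workers 4 --bind 127.0.0.1:8000 myapp:app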

Pushing data once a URL is requested

Say that when a user requests /foo on my server, I send the following HTTP response (without closing the connection):
Content-Type: multipart/x-mixed-replace; boundary=-----------------------
-----------------------
Content-Type: text/html
foo
When the user goes to /bar (which will send 204 No Content so the view doesn't change), I want to send the following data in the initial response.
-----------------------
Content-Type: text/html
bar
How would I get the second request to trigger this from the initial response? I'm planning on possibly creating a fancy [engines that support multipart/x-mixed-replace (currently only Gecko)]-only email webapp that does server-push and Ajax effects without any JavaScript, just for fun.
No complete answer, but:
In your question, you're describing a Comet-style architecture. Regarding support of Comet-style techniques in Python/WSGI, there is a StackOverflow question, which talks about various Python servers with support for long-running requests a la Comet.
Also interesting is this mail thread in the Python Web-SIG: "Could WSGI handle Asynchronous response?". In May 2008, there was a broad discussion in the Web-SIG about the topic of asynchronous requests in WSGI.
A recent development is evserver, a lightweight WSGI server, which implements the Asynchronous WSGI extension proposed by Christopher Stawarz in the Web-SIG in May 2008.
Finally, the Tornado web server supports non-blocking asynchronous requests. It has a chat example application using long polling, which has similarities with your requirements.
If the problem is to pass some command from the /bar application to the /foo application and you are using some servlet-like approach (the Python code is loaded once and not for each request as in CGI), you can just change some class property of the /foo application and react to the change in the /foo instance (by checking the property state).
Obviously the /foo application should not return right after the first request; instead it should yield content line by line.
Though this is just theory, I have not tried it myself.
I have created a small example (just for fun, you know :))
import threading

num = 0
cond = threading.Condition()

def app(environ, start_response):
    global num
    cond.acquire()
    num += 1
    cond.notifyAll()
    cond.release()
    start_response("200 OK", [("Content-Type", "multipart/x-mixed-replace; boundary=xxx")])
    while True:
        n = num
        s = "--xxx\r\nContent-Type: text/html\r\n\r\n%s\n" % n
        yield s
        # wait for num change:
        cond.acquire()
        while num == n:
            cond.wait()
        cond.release()

from cherrypy.wsgiserver import CherryPyWSGIServer
server = CherryPyWSGIServer(("0.0.0.0", 3000), app)
try:
    server.start()
except KeyboardInterrupt:
    server.stop()

# Now whenever you visit http://127.0.0.1:3000/, the number increases.
# It also automatically increases in all previously opened windows/tabs.
The idea of a shared variable and thread synchronization (using a condition variable object) is based on the fact that the WSGI server provided by CherryPyWSGIServer is threaded.
Not sure if this is quite what you're looking for, but there is a fairly old way of doing server push using the MIME content type multipart/x-mixed-replace.
Basically you compose the response as a mime object with content type multipart/x-mixed-replace, and send the first "version" of a document down. The browser will keep the socket open.
Then as the server decides to push more data, a new "version" of the document gets sent from the server, and the browser will intelligently replace (within whatever frame/iframe contains the content) the content.
This was an early way of doing webcams, where the server would send down (push) image after image, and the browser would just keep replacing the image in the document over and over. This is also a way of doing a "Loading..." message over a single HTTP request.
