Implementing a PubSubHubbub subscriber in Python

I have a bunch of Google Alerts set up as RSS feeds that update in real time. What I want is to be able to store the new data the RSS feeds send out in a database.
After looking around, I found that Google and Superfeedr both offer hubs that do most of the work for you; however, they both require a callback URL (obviously). I do have an Apache server running on the machine I'm working from, and it already has Python enabled, so I can run Python scripts on the server. However, at the moment it's only accessible from within my LAN.
My real question is: what do I do next? I know that in PHP you would just have a callback file that handles requests, but I'm lost as to what to do in Python. Would I write a script and give the Google/Superfeedr services a URL to that script? What would be in the script? Are specific imports needed?
Also, I just read that if you use XMPP you don't need a callback URL. How does that work?

For the local LAN problem, the most commonly used solution is to use tunneling solutions like Passageway. They will temporarily expose a local port of your machine to the "outer" web.
Now, as for implementation, it's fairly easy to set things up. Python is similar to PHP in the sense that you'll have to write a script that listens for network connections and then handles the HTTP requests you get from Superfeedr or Google. (It looks like you're not familiar with Python; why not stick to PHP then?)
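For illustration, a minimal sketch of what such a callback script could look like, here using Flask (the route, the port, and the database step are placeholders; any WSGI framework behind your Apache server would do):

from flask import Flask, request

app = Flask(__name__)

@app.route("/callback", methods=["GET", "POST"])
def callback():
    if request.method == "GET":
        # Subscription verification: the hub sends hub.challenge and expects it echoed back.
        return request.args.get("hub.challenge", ""), 200
    # Content notification: the hub POSTs the new feed entries (Atom/RSS) in the body.
    feed_xml = request.data
    # ...parse feed_xml (e.g. with feedparser) and insert the new entries into your database...
    return "", 204

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)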
Finally, XMPP is a feature that only we (Superfeedr) offer. It solves the problem of exposing local ports because it works from behind the firewall.

Related

How to communicate between a NodeJS graphql-apollo server and a Python project?

Since this stack is new to me, I figured it wouldn't hurt to ask the community for input.
I'm exploring a new type of project and have been brainstorming how/if it's possible to communicate between an apollo-graphql server (NodeJS) and a client Python program. I have a web development background and I intend to keep my app and server in Typescript/Node.
However ...
As a project and exercise, I would like to play around with connecting a watering system via the GPIO package in Python + Raspberry Pi and then send the updates to my apollo-graphql server. Conceptually, with a traditional RESTful API it would be straightforward ... However, with GraphQL things are a bit more nebulous to me. It would be nice to build out hardware-connected services in various languages as appropriate and not be locked into having everything in TypeScript/Node; to that end, Python is the most widely used language within the Raspberry Pi community.
Has anyone done this before, or something similar, and have insights and experience to share?
edit: since posting I became more familiar with GraphQL and played around with it in Postman. It simply has an endpoint that I can POST queries to (and, I assume, mutations and everything else), and the request is handled by my NodeJS apollo-graphQL server. I ended up using this Python package to query against my Node API and it seems to work well.
Your server is Node and your client is Python. The server doesn't care at all whether your client is Python, Java, or C#; it only wants to see GraphQL requests.
I suggest using a Python-based GraphQL client package such as python-graphql-client.
You're right that in the end everything looks like a POST. Using an actual client might make life a little bit easier for you though.
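For illustration, the bare-POST approach with the requests library looks roughly like this (the endpoint URL and the mutation name are made up for the watering example; use whatever your schema actually defines):

import requests

GRAPHQL_URL = "http://localhost:4000/graphql"  # assumed Apollo server endpoint

# Hypothetical mutation; replace with one from your own schema.
query = """
mutation ReportMoisture($level: Float!) {
  reportMoisture(level: $level) { ok }
}
"""

resp = requests.post(GRAPHQL_URL,
                     json={"query": query, "variables": {"level": 0.42}},
                     timeout=5)
resp.raise_for_status()
print(resp.json())  # Apollo returns {"data": ...} and, on failure, an "errors" list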
You can try calling your GraphQL service using a RESTful wrapper: https://graphql.org/blog/rest-api-graphql-wrapper/. There's a similar question asked before: link.
Secondly, instead of calling the server directly, you could also try using a message broker in the middle. In short: the Raspberry Pi sends messages to a queue that is polled by your server. There are several solutions available, such as RabbitMQ.
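A rough sketch of the broker idea using the pika client (the queue name and payload are made up; a consumer on or near the Node server would read from the same queue):

import json
import pika

# Publish one reading to a queue; a separate consumer drains it on the server side.
connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="sensor_updates", durable=True)  # queue name is made up
channel.basic_publish(
    exchange="",
    routing_key="sensor_updates",
    body=json.dumps({"device": "raspi-1", "moisture": 0.42}),
    properties=pika.BasicProperties(delivery_mode=2),  # mark the message persistent
)
connection.close()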

How to store incoming UDP messages in database and make accessible via REST API?

I am working on a project whereby I have a couple of remote IoT devices that send messages via UDP. I am looking to make a server that can receive this constant flow of UDP messages and can store them in a database. Additionally I would like to make a (REST) API which allows the information from this database to be accessed (every 15/30 minutes or so) from other applications.
Does anyone have any suggestions for how to do this (preferably in python)?
So far I am able to do the following (in python):
I know how to make a UDP client and server, and send messages between them using "socket". This link provided a useful explanation.
I know how to create a Flask server, store random data in a database using SQLAlchemy, and make the database content available via an API that can be accessed via Postman. This link showed me how.
What I am not able to do:
Tying everything together is where the problem arises. Specifically, I don't know how to combine the above methods so that everything works at the same time (in the same loop, so to speak). Both Flask and the UDP server run their own loops and listen (for events?), so I don't see how those processes would work simultaneously.
One thing I was considering is to run the UDP server + database insertion in one terminal, and the Flask/API server from another terminal. That would mean the database is being opened and accessed by multiple programs at the same time. Is that possible? It would be like opening a single Excel sheet multiple times (which is not permitted, I would think).
I also came across this library, which allows you to combine Flask with Flask-Sockets, but that doesn't seem to support UDP as far as I understand.
Many thanks!
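One way to tie these together (a rough sketch only, assuming SQLite and made-up port, table, and file names) is to run the UDP listener in a background thread inside the same process as the Flask app, so both loops coexist:

import socket
import sqlite3
import threading
from flask import Flask, jsonify

DB_PATH = "messages.db"   # assumed database file
UDP_PORT = 9999           # assumed port the IoT devices send to

app = Flask(__name__)

def init_db():
    conn = sqlite3.connect(DB_PATH)
    conn.execute("CREATE TABLE IF NOT EXISTS messages (sender TEXT, payload TEXT)")
    conn.commit()
    conn.close()

def udp_listener():
    # Background thread: receive UDP datagrams and insert each one into SQLite.
    # A fresh connection per insert keeps sqlite3 happy across threads.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", UDP_PORT))
    while True:
        data, addr = sock.recvfrom(4096)
        conn = sqlite3.connect(DB_PATH)
        conn.execute("INSERT INTO messages (sender, payload) VALUES (?, ?)",
                     (addr[0], data.decode(errors="replace")))
        conn.commit()
        conn.close()

@app.route("/messages")
def messages():
    conn = sqlite3.connect(DB_PATH)
    rows = conn.execute("SELECT sender, payload FROM messages").fetchall()
    conn.close()
    return jsonify(messages=[{"sender": s, "payload": p} for s, p in rows])

if __name__ == "__main__":
    init_db()
    threading.Thread(target=udp_listener, daemon=True).start()
    app.run(port=5000)

As for the two-terminal variant: SQLite does allow the same database file to be opened by several processes (readers run concurrently; a writer briefly takes a lock), so it is not quite like the Excel analogy, but the single-process sketch above sidesteps the question.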

Serve dynamic data to many clients

I am writing a client-server type application. The server side gathers constantly changing data from other hardware and then needs to pass it to multiple clients (say about 10) for display. The server data gathering program will be written in Python 3.4 and run on Debian. The clients will be built with VB Winforms on .net framework 4 running on Windows.
I had the idea to run a lightweight web server on the server-side and use system.net.webclient.downloadstring calls on the client side to receive it. This is so that all the multi-threading async stuff is done for me by the web server.
Questions:
Does this seem like a good approach?
Having my data gathering program write a text file for the web server to serve seems unnecessary. Is there a way to have the data in memory and have the server just serve that so there is no disk file intermediary? Setting up a ramdisk was one solution I thought of but this seems like overkill.
How will the web server deal with the data being frequently updated, say, once a second? Do webservers deal with this elegantly or is there a chance the file will be served whilst it is being written to?
Thanks.
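On question 2: the web server does not need a file at all; a handler can serve whatever is currently in memory. A minimal sketch with the standard library (port and field names are made up), using a lock so a response never sees a half-written update, which also covers question 3:

import json
import threading
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

latest = {}                 # most recent hardware readings, held in memory only
lock = threading.Lock()     # guards access so a reply never sees a half-written update

def gather():
    # Stand-in for the real data-gathering loop; updates the in-memory snapshot once a second.
    n = 0
    while True:
        with lock:
            latest["reading"] = n
            latest["timestamp"] = time.time()
        n += 1
        time.sleep(1)

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        with lock:
            body = json.dumps(latest).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    threading.Thread(target=gather, daemon=True).start()
    HTTPServer(("0.0.0.0", 8000), Handler).serve_forever()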
1) I am not very familiar with Python, but for the .net application you will likely want to push change notifications to it, rather than pull. The system.net.webclient.downloadstring is a request (pull). As I am not a Python developer I cannot assist in that.
3) As you are requesting (pulling) data, it is possible to run into read/write conflicts if the file is being updated while it is being read. Even if that does not happen, your data may be out of date as soon as you read it. This can be an acceptable problem; it just depends on how critical your data is.
This is why I would do a push notification rather than a pull. If done correctly, this can keep data synced and avoid some timing issues.
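To make the push idea concrete on the Python side, here is a rough sketch (port and message format are made up) of a plain TCP server that keeps a list of connected display clients and sends them the current data once a second; the VB client would just keep the connection open and read lines:

import json
import socket
import threading
import time

clients = []                  # sockets of connected display clients
clients_lock = threading.Lock()
latest = {"value": 0}         # placeholder for the data gathered from hardware

def accept_clients(server):
    # Accept display clients and remember their sockets.
    while True:
        conn, _ = server.accept()
        with clients_lock:
            clients.append(conn)

def push_loop():
    # Every second, push the current in-memory data to every connected client.
    while True:
        payload = (json.dumps(latest) + "\n").encode()
        with clients_lock:
            for conn in clients[:]:
                try:
                    conn.sendall(payload)
                except OSError:       # client went away
                    clients.remove(conn)
        time.sleep(1)

if __name__ == "__main__":
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("0.0.0.0", 6000))
    server.listen(5)
    threading.Thread(target=accept_clients, args=(server,), daemon=True).start()
    push_loop()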

Running web.py as a service on linux

I've used web.py to create a web service that returns results in json.
I run it on my local box as python scriptname.py 8888
However, I now want to run it on a linux box.
How can I run it as a service on the linux box?
update
After the answers it seems like the question isn't right. I am aware of the deployment process, frameworks, and the webserver. Maybe the following back story will help:
I had a small Python script that takes a file as input and, based on some logic, splits that file up. I wanted to use this script with a web front end I already have in place (Grails). I wanted to call it from the Grails application, but did not want to do so by executing a command line, so I wrapped the Python script as a web service, which takes in two parameters and returns, in JSON, the number of split files. This web service will ONLY be used by my Grails front end and nothing else.
So, I simply wish to run this little web.py service so that it can respond to my Grails front end.
Please correct me if I'm wrong, but would I still need nginx and the like after the above? This script sounds trivial, but eventually I will be adding more logic to it, so I wanted it as a web service which can be consumed by a web front end.
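For reference, a minimal sketch of what such a web.py service might look like (the parameter names and the splitting logic are placeholders for your existing code):

import json
import web

urls = ("/split", "Split")

class Split:
    def GET(self):
        # Hypothetical parameters: the file to split and the chunk size.
        params = web.input(filename="", chunk_size="1000")
        count = split_file(params.filename, int(params.chunk_size))
        web.header("Content-Type", "application/json")
        return json.dumps({"files": count})

def split_file(filename, chunk_size):
    # Placeholder for the real splitting logic.
    return 0

if __name__ == "__main__":
    app = web.application(urls, globals())
    app.run()   # "python scriptname.py 8888" makes web.py listen on port 8888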
In general, there are two parts to this.
The "remote and event-based" part: a service used remotely over the network needs a certain set of skills: it must be able to accept (multiple) connections, read requests, process them, reply, speak at least basic TCP/HTTP, handle dead connections, and, if it's on more than a small private LAN, be robust (think DoS) and maybe also perform some kind of authentication.
If your script is willing to take care of all of this, then it's ready to open its own port and listen. I'm not sure if web.py provides all of these facilities.
Then there's the other part, "daemonization", when you want to run the server unattended: running at boot, running under the right user, not blocking your parent (ssh, init script or whatever), not having ttys open but maybe logging somewhere...
Servers like nginx and Apache are built for this, and provide interfaces like mod_python or WSGI, so that much simpler applications can hand off as much of the above as possible.
So the answer would be: yes, you still need nginx or the like, unless:
you can implement all of that yourself in Python, or
you are using the script on localhost only and are willing to take some risk of instability.
In that case you can probably do it on your own.
try this
nohup python scriptname.py 8888 >/dev/null 2>&1 &
it will keep running in the background after you log out (a crude way to daemonize it)

Decentralized networking in Python - How?

I want to write a Python script that will check the user's local network for other instances of the script currently running.
For the purposes of this question, let's say that I'm writing an application that runs solely via the command line, and will just update the screen when another instance of the application is "found" on the local network. Sample output below:
$ python question.py
Thanks for running ThisApp! You are 192.168.1.101.
Found 192.168.1.102 running this application.
Found 192.168.1.104 running this application.
What libraries/projects exist to help facilitate something like this?
One way to do this would be for the application in question to broadcast UDP packets, with your application receiving them from the different nodes and then displaying them. The Twisted networking framework provides facilities for doing such a job. The documentation provides some simple examples, too.
Well, you could write something using the socket module. You would have to have two programs though: a server on the user's local computer, and then a client program that interfaces with the server. The server would also use the select module to listen for multiple connections. You would then have a client program that sends something to the server when it is run, or whenever you want it to. The server could then print out which connections it is maintaining, including details such as the IP address.
This is documented extremely well at this link (in more depth than you need, but it will explain it to you as it did to me): http://ilab.cs.byu.edu/python/
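A bare-bones sketch of that server idea using socket and select (the port and message format are made up):

import select
import socket

HOST, PORT = "", 50007   # made-up port

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind((HOST, PORT))
server.listen(5)

sockets = [server]
peers = {}   # socket -> IP address of a peer running the app

while True:
    readable, _, _ = select.select(sockets, [], [])
    for s in readable:
        if s is server:
            conn, (ip, _) = server.accept()
            sockets.append(conn)
            peers[conn] = ip
            print("Found %s running this application." % ip)
        else:
            data = s.recv(1024)
            if not data:              # peer disconnected
                print("%s stopped running this application." % peers[s])
                sockets.remove(s)
                del peers[s]
                s.close()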
You can try UDP broadcast; I found an example here: http://vizible.wordpress.com/2009/01/31/python-broadcast-udp/
You can have a server-based solution: a central server where clients register themselves, and query for other clients being registered. A server framework like Twisted can help here.
In a peer-to-peer setting, push technologies like UDP broadcasts can be used, where each client puts out a heartbeat packet every so often on the network for others to receive. Basic modules like socket would help with that.
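A small sketch of that heartbeat idea with the socket module (the port and payload are made up; note it will also report your own IP unless you filter it out):

import socket
import threading
import time

PORT = 54545  # made-up port; any free UDP port works

def announce():
    # Broadcast a heartbeat every few seconds so other instances can find us.
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    while True:
        s.sendto(b"ThisApp-heartbeat", ("<broadcast>", PORT))
        time.sleep(5)

def listen():
    # Receive heartbeats from other instances on the LAN and print their addresses.
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("", PORT))
    while True:
        data, (ip, _) = s.recvfrom(1024)
        if data == b"ThisApp-heartbeat":
            print("Found %s running this application." % ip)

if __name__ == "__main__":
    threading.Thread(target=announce, daemon=True).start()
    listen()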
Alternatively, you could go for a pull approach, where the interested peer has to discover the others actively. This is probably the least straightforward. For one, you need to scan the network, i.e. find out which IPs belong to the local network and go through them. Then you would need to contact each IP in turn. If your program opens a TCP port, you could try to connect to it and find out whether your program is running there. If you want your program to be completely ignorant of these queries, you might need to open an SSH connection to the remote IP and scan the process list for your program. All this might involve various modules and libraries. One you might want to look at is execnet.
