I am trying to hack together a Python script to monitor ongoing downloads in Chrome and shut down my PC automatically after the download process closes. I know little JavaScript and am considering using the PyJs library, if required.
1) Is this the best approach? I don't need the app to be portable, just working.
2) How would you identify the download process?
3) How would you monitor the download progress? Apparently the Chrome API doesn't provide a specific function for it.
Nice question, maybe because I can relate to the need to automate the shutdown. ;)
I just googled. There happens to be an experimental API, but only for the dev channel as of now. I am not on the dev channel to try that out, so I just hope I am pointing you in the right direction.
One approach would be:
Have a Python HTTP server listening on some port XYZ
Add a permission for the URL http://localhost:XYZ/ to your extension
In your extension, you could use:
chrome.downloads.search(query, function (arrayOfDownloadItem){ .. })
where query is an instance of DownloadQuery whose state property is set to in_progress.
You could then check the length of arrayOfDownloadItem.
If it's zero, create a new XMLHttpRequest to your HTTP server endpoint, and then let the server shut down your machine (a rough sketch of that server follows).
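For the Python end, a rough sketch of such a server (port 8888 and the exact shutdown commands are assumptions, so adjust for your OS; on Python 2 the module is BaseHTTPServer):

import os
import platform
from http.server import BaseHTTPRequestHandler, HTTPServer

class ShutdownHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Reply first so the extension's XMLHttpRequest completes cleanly.
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"shutting down")
        # Pick a shutdown command for the current OS (with a short grace period).
        if platform.system() == "Windows":
            os.system("shutdown /s /t 60")
        else:
            os.system("shutdown -h +1")

if __name__ == "__main__":
    HTTPServer(("localhost", 8888), ShutdownHandler).serve_forever()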
HTH
I am developing a web application with Tornado and have encountered the following problem:
I can't run more than 6 instances of my application in one browser, probably because each instance creates a websocket connection to the Tornado server. I use the standard WebSocketHandler class. The connections close properly, i.e. if I close the 6th tab, then I am able to open another application tab.
Is there any way to circumvent it? I will provide any additional information if needed.
EDIT: Connection information (I have 6 identical tabs here, 7th won't load):
Are you sure the limitation is not in the browser? I've seen the same issue (long-polling requests, 7th or 8th won't load), but opening the URL in another browser or at another location works fine.
Edit: each browser does indeed have a limit on simultaneous persistent connections per server, as well as a global limit. See this question, and especially this response, which has more up-to-date values.
I have a bunch of Google alerts set up as rss feeds that update in real time. What I want is to be able to store the new data the rss feed is sending out in a database.
After looking around I found that Google and Superfeedr both offer hubs that do most of the work for you; however, they both require a callback URL (obviously). I do have an Apache server running on the machine I'm working on; it already has Python enabled, so I can run Python scripts on my server. However, at the moment it's only accessible from within my LAN.
My real question is: what do I do next? I know that in PHP you would just have a callback file that handles requests, but I'm lost as to what to do in Python. Would I write a script and give the Google/Superfeedr services a URL to that script? What would be in the script? Any specific imports needed?
Also, I just read that if you use XMPP you don't need a callback url. How does that work?
For the local LAN problem, the most commonly used solution is a tunneling service like Passageway. It will temporarily expose a local port of your machine to the "outer" web.
Now, as for implementation, it's fairly easy to set things up. Python is similar to PHP in the sense that you'll have to write a script that listens for network connections and then handles the HTTP requests you're getting from Superfeedr or Google. (It looks like you're not familiar with Python; why not stick to PHP then?)
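For example, a bare-bones callback endpoint in Python could look roughly like this (standard library only; the port, the hub.challenge verification step, and the storage helper are assumptions based on how PubSubHubbub-style hubs behave):

from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

class CallbackHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The hub verifies the subscription by asking us to echo hub.challenge.
        params = parse_qs(urlparse(self.path).query)
        challenge = params.get("hub.challenge", [""])[0]
        self.send_response(200)
        self.end_headers()
        self.wfile.write(challenge.encode())

    def do_POST(self):
        # New feed entries arrive as the request body (Atom/RSS or JSON).
        length = int(self.headers.get("Content-Length", 0))
        payload = self.rfile.read(length)
        save_to_database(payload)  # hypothetical helper that stores the entries
        self.send_response(200)
        self.end_headers()

def save_to_database(payload):
    print("received", len(payload), "bytes")  # replace with real storage

if __name__ == "__main__":
    HTTPServer(("", 8080), CallbackHandler).serve_forever()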
Finally, XMPP is a feature that only we (Superfeedr) offer. It solves the problem of exposing local ports because it works from behind the firewall.
I've used web.py to create a web service that returns results in json.
I run it on my local box as python scriptname.py 8888
However, I now want to run it on a linux box.
How can I run it as a service on the linux box?
update
After the answers it seems like the question isn't right. I am aware of the deployment process, frameworks, and the webserver. Maybe the following back story will help:
I had a small Python script that takes a file as input and, based on some logic, splits that file up. I wanted to use this script with a web front end I already have in place (Grails). I wanted to call it from the Grails application but did not want to do so by executing a command line, so I wrapped the Python script as a web service, which takes in two parameters and returns, in JSON, the number of split files. This web service will ONLY be used by my Grails front end and nothing else.
So, I simply wish to run this little web.py service so that it can respond to my Grails front end.
Please correct me if I'm wrong, but would I still need nginx and the like after the above? This script sounds trivial, but eventually I will be adding more logic to it, so I wanted it as a web service which can be consumed by a web front end.
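For reference, a minimal web.py service of that shape might look roughly like this (the endpoint name, parameter names, and splitting logic are placeholders, not my actual script):

import json
import web

urls = ("/split", "SplitService")
app = web.application(urls, globals())

class SplitService:
    def GET(self):
        # The two inputs are placeholders for whatever the real script needs.
        params = web.input(path="", chunk_size="0")
        count = split_file(params.path, int(params.chunk_size))
        web.header("Content-Type", "application/json")
        return json.dumps({"files": count})

def split_file(path, chunk_size):
    # Stand-in for the real splitting logic.
    return 0

if __name__ == "__main__":
    # Run as: python scriptname.py 8888
    app.run()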
In general, there are two parts of this.
The "remote and event-based" part: Service used remotely over network needs certain set of skills: to be able to accept (multiple) connections, read requests, process, reply, speak at least basic TCP/HTTP, handle dead connections, and if it's more than small private LAN, it needs to be robust (think DoS) and maybe also perform some kind of authentication.
If your script is willing to take care of all of this, then it's ready to open its own port and listen. I'm not sure if web.py provides all of these facilities.
Then there's the other part, "daemonization", when you want to run the server unattended: running at boot, running under the right user, not blocking your parent (ssh, init script or whatever), not having ttys open but maybe logging somewhere...
Servers like nginx and Apache are built for this, and provide interfaces like mod_python or WSGI, so that much simpler applications can give up as much of the above as possible.
So the answer would be: yes, you still need Nginx or the like, unless:
you can implement it yourself in Python,
or you are using the script on localhost only and are willing to take some risk of instability.
In that case you can probably do it on your own.
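If you do put it behind Apache or nginx, web.py can hand the app over as a WSGI callable, so the real server takes care of the daemon and network duties. A sketch (the URL mapping and class name are assumptions):

import web

urls = ("/split", "SplitService")

class SplitService:
    def GET(self):
        return '{"files": 0}'  # placeholder response

app = web.application(urls, globals())

if __name__ == "__main__":
    # Stand-alone mode, e.g. python scriptname.py 8888
    app.run()
else:
    # A WSGI server (mod_wsgi, uWSGI, gunicorn) imports this module
    # and looks for `application`.
    application = app.wsgifunc()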
Try this:
nohup python scriptname.py 8888 >/dev/null 2>&1 &
It will keep running in the background after you log out (not a true daemon, but often good enough).
I am building desktop software with a Python backend and a web interface. Currently, I have to start the backend, then open up the interface in the browser. If I accidentally refresh the page, that clears everything! What I'd like to do is start the application and have a fullscreen browser window appear (using Chrome); that shouldn't be difficult. I have two questions:
Can refresh be disabled?
Is it possible to hook into closing my program when the web UI is closed?
Update:
What I'm looking for is more like this: geckofx. A way to embed a Chrome webpage in a desktop app, except I'm using Python rather than C#.
Your first question is a dup of disable f5 and browser refresh using javascript.
For your second question, it depends on what kind of application you're building, but generally, the answer is no.
If you can rely on the JS to catch the close and, e.g., send a "quit" message to the service before closing, that's easy. However, there are plenty of cases where that won't work, because the client has no way to catch the close.
If, on the other hand, you can rely on a continuous connection between the client and the service—e.g., using a WebSocket—you can just quit when the connection goes down. But usually, you can't.
Finally, if you're writing something specifically for one platform, you may be able to use OS-level information to handle the cases that JS can't. (For example, on OS X, you can attach a listener to the default NSDistributedNotificationCenter and be notified whenever Chrome quits.) But generally, you can't rely on this, and even when you can, it may still not cover 100% of the cases.
So, you need to use the same tricks that every "real" web app uses for managing sessions. For example, the client can send a keepalive every 5 minutes, and the server can quit if it doesn't get any requests for 5 minutes. (You can still have an explicit "quit" command, you just can't rely on always receiving it.) If you want more information on ways to do this, there are probably 300 questions on SO about it.
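A rough sketch of the server side of that keepalive idea (the 5-minute window, the threading approach, and the touch() helper are assumptions, not part of any particular framework):

import os
import threading
import time

KEEPALIVE_TIMEOUT = 5 * 60  # seconds
last_seen = time.time()

def touch():
    # Call this from every request handler, including the keepalive endpoint.
    global last_seen
    last_seen = time.time()

def watchdog():
    while True:
        time.sleep(10)
        if time.time() - last_seen > KEEPALIVE_TIMEOUT:
            os._exit(0)  # no requests for too long; assume the UI was closed

threading.Thread(target=watchdog, daemon=True).start()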
Instead of embedding Chrome, you could embed only WebKit (I don't know about Windows, but on Mac and Linux it is easy).
The application logic seems to be on the server side, with the browser used only as the interface. If that is the case, you can put an onbeforeunload handler on the body tag and call a JS function that sends an AJAX request telling the server to die.
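From Python, one way to do the embedding is with Qt's WebKit binding (a sketch assuming PyQt4 with QtWebKit is installed; PySide or the newer QtWebEngine would look similar, and the backend port is an assumption):

import sys
from PyQt4.QtCore import QUrl
from PyQt4.QtGui import QApplication
from PyQt4.QtWebKit import QWebView

app = QApplication(sys.argv)
view = QWebView()
view.load(QUrl("http://localhost:8080/"))  # the local Python backend
view.showFullScreen()
app.exec_()
# Once the window is closed, exec_() returns and you can tell the backend to quit here.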
I have a working web application in Python that downloads a file onto the web server upon a user's request. This works fine for small file downloads, but when the user requests a larger file, the connection times out. So I think I need to process the download in the background, but I'm not sure what tool is most suitable for this. Celery seems right, but I don't really want the download to be queued (it must start immediately). What would you suggest?
Timeout duration is up to you; you could just make it longer.
Anyway, there are plenty of Flash or AJAX uploaders out there; there is nothing you can do on the server side only, AFAIK.