I want to make a Python HTTP server that runs on my machine. It will listen for me to send it URLs, and when it receives one it will download that URL to the computer.
Example:
PC1 has Downloader.py running with a simple interface, an input and a submit button.
I send "https://mega.nz/file/XXXXXXXX#XXXXXXXXXX_XXXXXXXXXXXXX-XXXXX-XXX" to PC1 from my phone.
Downloader.py receives the Mega URL and starts downloading the file in the background.
Ideally this would be non-blocking, so if I send two links one after the other it wouldn't wait for the first to finish before downloading the second.
I already have the download functions but I'm not sure how to put it together with an HTTP Server.
subprocess? threading? multiprocessing?
Thanks.
If you're familiar with Python, this can easily be done with Django and Celery.
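If you'd rather skip a framework, here is a minimal sketch of the same non-blocking idea using only the standard library (http.server plus one thread per download). The port and the download_file function are assumptions; download_file stands in for the existing download code:

    import threading
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.parse import parse_qs

    def download_file(url):
        # Hypothetical stand-in for the existing download function (e.g. a mega.nz downloader).
        print("downloading", url)

    class DownloadHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Serve the simple interface: an input and a submit button.
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.end_headers()
            self.wfile.write(b'<form method="post"><input name="url"><input type="submit"></form>')

        def do_POST(self):
            length = int(self.headers.get("Content-Length", 0))
            url = parse_qs(self.rfile.read(length).decode())["url"][0]
            # Hand the download off to a thread so the server can accept the next link immediately.
            threading.Thread(target=download_file, args=(url,), daemon=True).start()
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"queued: " + url.encode())

    HTTPServer(("0.0.0.0", 8000), DownloadHandler).serve_forever()

With threads handling the downloads, a second submitted link starts immediately instead of waiting for the first to finish.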
I have a server and built an API for it so I can push patch files to my server. However, now when I update some batch files on the server, I always have to stop the server and then run it again to see the changes. I was wondering what I can do so that my server restarts itself.
Yes, you can.
Make the request to your API send JSON like {'do': 'refresh_server'};
then just call exit() and run the file again using the os module.
Edit: this solution is for Windows.
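As an illustrative sketch of that restart step (the handler name and payload shape are assumptions, not the poster's code; subprocess.Popen works on both Windows and Linux):

    import subprocess
    import sys

    def handle_command(payload):
        # Hypothetical handler for a request body like {'do': 'refresh_server'}.
        if payload.get("do") == "refresh_server":
            # Start a fresh copy of this script, then let the current process exit.
            # (os.execv(sys.executable, [sys.executable] + sys.argv) would instead
            # replace the running process in place.)
            subprocess.Popen([sys.executable] + sys.argv)
            sys.exit(0)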
I am new to Python and work on Slackware Linux with Python 3.4.3. I prefer simple, no-dependency solutions within a single Python script.
I am building a daemonized server program (A) which I need to access through both a regular shell CLI and GUIs in my web browser: it serves various files, uses a corresponding database, and updates a Firefox tab through Python's webbrowser module. Currently, I access process (A) via the CLI or a threaded network socket. This all started to work in a localhost scenario with all processes running on one machine.
Now, it turns out that the WebSocket protocol would make my setup dramatically simpler and would cut out the traditional flows that use Apache and complex frameworks as middlemen.
1st central question: How do I access daemon (A) with WebSockets from the CLI? I thought about firing up a non-daemon version of my server program, now called (B), and sending a program call to its counterpart (A) via the WebSocket protocol. This would make process (B) a WebSocket CLIENT and process (A) a WebSocket SERVER. Is such communication possible at all today?
2nd question: Which template solution is best suited for this scenario and works with Python 3.4.3? I started to play with Pithikos' very sleek python-websocket-server template (see https://github.com/Pithikos/python-websocket-server), but I am unable to use it as the CLIENT (initiating the network call) to call its SERVER equivalent (receiving the call while residing in a daemonized process).
Problem 'solved': I gave up on the zero-dependency, zero-library idea:
pip install websockets
https://websockets.readthedocs.io
It works like a charm. The WebSocket server sits in the daemon process and receives and processes WebSocket client calls that come from the CLI processes and from the HTML GUIs.
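For reference, a minimal sketch of both sides with the websockets package, written against a current Python/websockets release (the API for 3.4.3-era versions differs slightly; see the docs above). The host, port, and messages are made up:

    import asyncio
    import sys
    import websockets  # pip install websockets

    async def handler(websocket):
        # Runs inside the daemon (A): receive a command from a CLI client and reply.
        async for message in websocket:
            await websocket.send("daemon processed: " + message)

    async def serve_forever():
        async with websockets.serve(handler, "localhost", 8765):
            await asyncio.Future()  # keep the daemon's WebSocket server running

    async def send_command(command):
        # The CLI side (B) acts as the WebSocket client.
        async with websockets.connect("ws://localhost:8765") as ws:
            await ws.send(command)
            print(await ws.recv())

    if __name__ == "__main__":
        if len(sys.argv) > 1:
            asyncio.run(send_command(" ".join(sys.argv[1:])))  # client mode
        else:
            asyncio.run(serve_forever())  # server mode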
I have a basic Django development web server (server1) and another server (server2) which has a Python script that does some scientific calculations. Assume that server1 has the necessary authentication in place to run the script on server2. All I want to do is click a button on the Django website, which would run the Python script sitting on server2.
The ideas I have so far are:
use some kind of SSH library to run the script and get the response
have a REST API set up on server2 to run the script
I'm not sure if the above ideas would work; please share your insight, and if possible a simple example would be appreciated.
More info: server1 and server2 have to be two separate servers; server1 is a web server, while server2 can be any Linux virtual machine. Also, the response from server2 has to be sent back to server1.
After reading through and trying out various forum suggestions and spending solid time on Google, I've settled on paramiko. It does exactly what I wanted and works like a charm for now.
When a button is clicked on my website running on server1, I make a request to run a Python script. The script uses paramiko to open an SSH connection to server2, runs the necessary command, and writes the output to a plain-text file. That file is then rendered back through a Django form as the response.
This looks a little dirty for now, and there are more things to look into, like what happens if the command takes a very long time to execute or errors out for some reason. I haven't spent time figuring out answers to all those questions yet, but eventually will.
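For anyone looking for the shape of that paramiko call, a rough sketch (the host, user, key path, and command are placeholders, not the poster's actual values):

    import paramiko

    def run_remote_script(host, user, key_path, command):
        # Open an SSH connection to server2, run one command, return its output.
        client = paramiko.SSHClient()
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        client.connect(hostname=host, username=user, key_filename=key_path)
        try:
            stdin, stdout, stderr = client.exec_command(command)
            return stdout.read().decode(), stderr.read().decode()
        finally:
            client.close()

    out, err = run_remote_script("server2.example.com", "deploy",
                                 "/home/deploy/.ssh/id_rsa",
                                 "python /opt/scripts/calculate.py")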
There is no reason why server1 cannot run something like Selenium or PhantomJS itself to navigate to your website on server2 and click a button there, which then uses something like Python's subprocess module to execute a program on server2.
I have large video files (~100GB) that are local on my machine. I have a non-local website where I enter information about the video file. In addition, I need to get the checksum of the video file (and I do not want to manually trigger the script locally and copy and paste the value). To get a checksum of the video file, I have a script I can run as $ checksum.py <video file>.
How would I trigger the local script through the web interface? In other words, I want to be able to enter the PATH of the video file and click Submit on the web app, and it will locally trigger the script, and (after the checksum has finished), insert that value into the web app/database. What would be the best way to do this?
You cannot trigger anything unless your local script is continuously listening for some kind of data feed (like a fixed URL serving an XML/JSON feed of paths), which is, IMHO, over-complicating your system.
You could also use a Java applet run locally instead of a remote website, but you'd have to sign it to be able to read local files, and it might not be what you're looking for.
Think of it: it's all about security. Would you like any web server to trigger scripts in your local machine? I certainly wouldn't.
IMHO the best solution is to trigger the script manually which will send the data to your web server.
In general, browsers run in a sandbox that has very limited access to the OS. In particular, you can't run shell scripts from a browser. As I see it, you have two options:
Adapt your checksum.py script to send the checksum info directly to your website using urllib2 calls, or pipe the results to a "curl" command. (No browser involved; see the sketch after this list.)
Rewrite checksum.py as JavaScript using the FileReader class. This will probably be convoluted, slow, and won't work in Internet Explorer.
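A rough sketch of the first option: hash the file locally in chunks and POST the result to a (hypothetical) endpoint on the web app, here with Python 3's urllib.request (urllib2 fills the same role on Python 2):

    import hashlib
    import json
    import sys
    import urllib.request

    def file_checksum(path, chunk_size=1 << 20):
        # Hash in chunks so ~100GB files never need to fit in memory.
        digest = hashlib.sha256()
        with open(path, "rb") as fh:
            for chunk in iter(lambda: fh.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def post_checksum(path, endpoint):
        # Send the result to the web app; the endpoint URL below is an assumption.
        payload = json.dumps({"path": path, "checksum": file_checksum(path)}).encode()
        req = urllib.request.Request(endpoint, data=payload,
                                     headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req) as resp:
            return resp.status

    if __name__ == "__main__":
        post_checksum(sys.argv[1], "https://example.com/api/checksums")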
I'm about to finish a jQuery plugin that provides a progress bar for file uploads using Ajax and a Python CGI script.
My plugin is inspired by this piece of software.
The idea is to read (with the Python CGI script) the raw POST data on the server while the browser is uploading, parse it, write the file to disk in chunks, and at the same time use Ajax to ask the server about the currently written file size.
The plugin works on my server and on some other web hosting servers. But on Mochahost (using Apache) it does not work properly. What that server does is call my Python CGI script only after the upload is completely done, not while the file is uploading (as expected). I verified the moment of the call by adding, at the very beginning of the Python script, a command that writes a test file. That command should fire as soon as Python is called and does not involve reading sys.stdin.
Even on that server the Python CGI script works (almost) as intended: it reads sys.stdin in chunks, parses it, and writes the file to disk. The problem is timing. The server does not provide Python with the input stream in real time; instead, after the full POST from the browser has been received, it calls Python and populates sys.stdin with the raw posted data that has already been saved somewhere.
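For context, this is roughly the shape of the chunked read described above (a simplified sketch: multipart boundary parsing is omitted and the destination path is made up):

    #!/usr/bin/env python
    import os
    import sys

    CHUNK = 64 * 1024
    content_length = int(os.environ.get("CONTENT_LENGTH", 0))
    upload_path = "/tmp/upload.part"  # placeholder destination

    stream = getattr(sys.stdin, "buffer", sys.stdin)  # bytes on Python 3, sys.stdin on Python 2
    read_so_far = 0
    with open(upload_path, "wb") as out:
        while read_so_far < content_length:
            chunk = stream.read(min(CHUNK, content_length - read_so_far))
            if not chunk:
                break
            out.write(chunk)
            out.flush()  # keep the partial size visible to the Ajax polling request
            read_so_far += len(chunk)

    sys.stdout.write("Content-Type: text/plain\r\n\r\n")
    sys.stdout.write("received %d bytes\n" % read_so_far)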
I don't know if it is relevant, but I've noticed that their Apache version is 2.0.x and SERVER_PROTOCOL is HTTP/1.0. On other web hosting servers SERVER_PROTOCOL is HTTP/1.1.
Can anybody say what could make a server, or specifically an Apache server, call a CGI script only after the upload is finished and not when the browser initiated the request?