Calculation Server Architecture - python

I have a Rails app. This app takes parameters from users and sends them to what I think would be called a slave application server, which runs heavy calculations in Python. This slave server is located on the same physical box and runs Python's "SimpleHTTPServer" (just a basic web server).
I set up the slave to receive commands through POST requests and run the calculations. Is it appropriate for the Python server to receive these requests through GET/POST even though it is on the same box, or should I be using another protocol?
Note: I have looked into rubypython and other direct connectors, but I need a separate app server to run the calculations.
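For what it's worth, plain HTTP over loopback is a common choice for exactly this setup. Below is a minimal sketch of the Python side, assuming Python 3's http.server (the SimpleHTTPServer successor) and a made-up JSON request/response shape; SimpleHTTPServer itself only serves GET, so a custom handler is needed for POST:

# calc_server.py - minimal sketch (hypothetical endpoint and payload)
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class CalcHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get('Content-Length', 0))
        params = json.loads(self.rfile.read(length))
        result = {'answer': params.get('x', 0) ** 2}  # stand-in for the heavy calculation
        body = json.dumps(result).encode()
        self.send_response(200)
        self.send_header('Content-Type', 'application/json')
        self.send_header('Content-Length', str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# bind to 127.0.0.1 so only local processes (e.g. the Rails app) can reach it
HTTPServer(('127.0.0.1', 8000), CalcHandler).serve_forever()

Binding to 127.0.0.1 keeps the socket from ever leaving the box, so HTTP adds little overhead here while preserving the option of moving the calculation server to another machine later.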

Related

VirtualBox Flask Server / Ubuntu 16.04 32-bit / Server stops responding periodically

I'm not sure where to post this, since I don't really know whether this is an issue with Flask, VirtualBox (6.0), or Ubuntu (16.04 LTS).
Intro
My company installs Ubuntu VMs for our clients that run a local Flask web server on their system. I know Flask is not meant to be used for production, but each individual server handles about 15 users max, and usually only one or two at a time. So far it's been working well for dozens of clients over several months.
Problem
We have a couple of clients where the application becomes inaccessible in users' browsers after a certain period of time. I can SSH to the VM and see that the Flask service still seems to be running without issue. But users are no longer able to connect to the IP address and port of the VM via their browser (e.g. http://<vm_ip>:<port_number>) like they could before.
Restarting the service doesn't resolve the issue, but restarting the entire VM does resolve it.
The Flask application runs as a service, but all the service does is trigger a bash script that runs python run.py.
The VM network adapter is set to "NAT" and it serves the Flask app on the same IP address as the host machine. When the app becomes inaccessible, I'm not able to connect to the application on the host machine using either http://<vm_ip>:<port_number> or http://localhost:<port_number>. We have this same configuration on many client systems and it works without issue for all other clients.
Nothing should change in the application configuration when the VM restarts. All networking and port settings are static. So something is clearly getting hung up. Since the application output doesn't show anything off and the Flask service seems to restart fine internally, I'm guessing the issue is with Ubuntu or VirtualBox (or something on the client machine/network).
Question
I'm wondering what would cause a Flask server that runs very reliably in most cases to periodically stop responding after a certain amount of time, even though the VM is accessible and Flask appears to be running fine internally.
I apologize for the lack of details and logs; our access to the client machine is limited. I will try to get my hands on these.

Best way to run task on Windows from web server on Linux

I have a task written in Python with a 1-2 minute run time that I want to run on demand. The requests would come in very small volumes from a Django server on Linux. The return would be a file.
Usually I'd use a queue system like Celery. But this task can only be run on Windows.
What is the best way to make this happen?
Remotely execute the task by establishing an SSH session?
Still use Celery, going through a lot of workarounds to get it to work on Windows (seems messy)?
I can think of five solutions that do not require SSH.
I haven't covered authentication here; you should implement something appropriate for whichever solution you choose.
Solution 1:
write a simple Flask/Django app for the Windows server that runs the task and returns the response
from your Linux server, send a request to the Windows app and read the data back
the Linux server can make this call from a Celery task, so you wouldn't have to block on the 2-3 minute wait (a sketch of this solution follows the list)
Solution 2:
write a simple Flask/Django app for the Windows server that kicks off a Celery task in the background
this app should return the URL of the result file
the Celery task writes the result to a file
serve this file with nginx (or a Windows-based static file server; I don't know Windows)
send a request from the Linux server to Windows to fetch the result (if the file doesn't exist yet, the result is not ready)
Solution 3:
write a simple Flask/Django app for the Windows server that kicks off a Celery task in the background
this app returns a random ID for the given request
from your Linux server, poll the Windows app with that task ID
when the task is finished, the Windows app returns the result
Solution 4:
write a simple Flask/Django app for the Windows server that kicks off a Celery task in the background
add an endpoint to your Linux Django app for uploading data
when the Windows app finishes processing, it uploads the data to the Linux Django app
Solution 5:
like solution 4, but the endpoint on your Linux Django app doesn't receive the data; it only sets a boolean marking the task as done
once the task is done, the Linux server sends a request to the Windows server to get the data (this request contains the task ID)
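Here is a minimal sketch of Solution 1, assuming Flask on the Windows box and the requests library on the Linux side; run_the_task, the /run-task endpoint, port 5000, and the windows-host name are all placeholders:

# windows_task_server.py - runs on the Windows machine (sketch)
from flask import Flask, request, send_file

app = Flask(__name__)

@app.route('/run-task', methods=['POST'])
def run_task():
    params = request.get_json()
    output_path = run_the_task(params)  # your existing 1-2 minute task, returning a file path
    return send_file(output_path)

app.run(host='0.0.0.0', port=5000)

# linux_client.py - called from inside a Celery task so the Django request isn't blocked (sketch)
import requests

resp = requests.post('http://windows-host:5000/run-task',
                     json={'param': 'value'}, timeout=300)
with open('result.bin', 'wb') as f:
    f.write(resp.content)

Solutions 2-5 reuse the same two pieces; they differ only in whether the Linux side polls for the result or the Windows side pushes it back.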

How to make two different Python scripts run on the same port?

Currently I'm developing web services in Python using web.py that serve different functions. Something like
BigQueryService.py
LogicImplementer.py
PostgreService.py
Each service works perfectly when running on the local machine. After deploying on the server, it returns a module error because I'm referring to another Python script.
Since we have to run all the services on the same port, I pasted all the scripts into a single file named Engine.py and made it work using the command
$ nohup python Engine.py 8080 &
Is there any better way to structure the service in web.py? Or is there a way to run all the individual scripts on the same port?
If each service creates its own listener/server socket on the port, then the answer is no. You will need the equivalent of an app server that has a single server port and distributes each incoming request to the relevant app (running on the server), typically based on the relative path: e.g. http://myserver.net:8080/bqs paths get passed to your BigQueryService, /li to LogicImplementer, /pgs to PostgreService. Flask will do something like this, and I'm sure other web service frameworks will too. The server handles all the communication, passes each request to the right app (e.g. bqs), and sends the response back to the client.
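web.py can do this dispatch itself through its URL tuple, so Engine.py can stay a thin router over the three modules instead of one pasted-together file. A sketch, assuming each module exposes a class of the same name with GET/POST methods:

# Engine.py - one listener, path-based dispatch (sketch)
import web
from BigQueryService import BigQueryService
from LogicImplementer import LogicImplementer
from PostgreService import PostgreService

urls = (
    '/bqs', 'BigQueryService',
    '/li',  'LogicImplementer',
    '/pgs', 'PostgreService',
)

if __name__ == '__main__':
    app = web.application(urls, globals())
    app.run()  # started the same way: nohup python Engine.py 8080 &

The module error on the server is likely a separate import-path problem; making sure the three modules sit next to Engine.py (or on PYTHONPATH) should let this layout work the way it did locally.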

Using client and server websockets in same python script

I am new to Python and work on Slackware Linux with Python 3.4.3. I prefer simple, no-dependency solutions within one single Python script.
I am building a daemonized server program (A) which I need to access through both a regular shell CLI and GUIs in my web browser: it serves various files, uses a corresponding database, and updates a Firefox tab through Python's webbrowser module. Currently, I access process (A) via the CLI or a threaded network socket. This all started to work in a localhost scenario with all processes running on one machine.
Now, it turns out that the WebSocket protocol would make my setup dramatically simpler and cut out the traditional flows that use Apache and complex frameworks as middlemen.
1st central question: how do I access daemon (A) with WebSockets from the CLI? I thought about firing up a non-daemon version of my server program, now called (B), and sending a call to its (A) counterpart via the WebSocket protocol. This would make process (B) a WebSocket CLIENT and process (A) a WebSocket SERVER. Is such communication possible at all today?
2nd question: which is the best-suited template solution for this scenario that works with Python 3.4.3? I started to play with Pithikos' very sleek python-websocket-server template (see https://github.com/Pithikos/python-websocket-server) but I am unable to use it as a CLIENT (initiating the network call) to call its SERVER equivalent (receiving the call while residing in a daemonized process).
Problem 'solved': I gave up on the zero-dependency, zero-library idea:
pip install websockets
https://websockets.readthedocs.io
It works like a charm. The WebSocket server sits in the daemon process and receives and processes WebSocket client calls that come from the CLI processes and from the HTML GUIs.
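For reference, a minimal sketch of that arrangement with the websockets library. This uses modern async/await syntax (Python 3.7+, asyncio.run); on 3.4 the same structure is written with @asyncio.coroutine and yield from, and depending on the websockets version the handler may take an extra path argument:

# daemon (A): WebSocket server (sketch)
import asyncio
import websockets

async def handler(websocket):
    async for message in websocket:       # each CLI/GUI call arrives as a message
        await websocket.send('ack: ' + message)

async def main():
    async with websockets.serve(handler, 'localhost', 8765):
        await asyncio.Future()            # run forever

asyncio.run(main())

# CLI process (B): WebSocket client (sketch)
import asyncio
import websockets

async def call(message):
    async with websockets.connect('ws://localhost:8765') as ws:
        await ws.send(message)
        return await ws.recv()

print(asyncio.run(call('status')))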

Communicate with long-running Python program

I have a long-running Python program on a server that already listens for messages on one serial port and forwards them out another serial port.
What do I need to do to allow that program to accept data from a web server (that ultimately gets that data from a web browser on a laptop)?
The options I've seen so far are:
Flask: the solution at "Communicating with python program running on server" doesn't seem to work for me, because (I may be doing this wrong) the long-running Python program can't seem to grab port 80, I guess because the web server is already running on port 80 (serving other pages).
Have a CGI script write the data to a file, and have the long-running script read the data from that file. I'm a little reluctant to do this on a system where flash wear-out may be a concern.
Somehow (?) convert the long-running script to a FastCGI script that includes everything it used to do plus new code to accept data from the web server.
Somehow (?) convert the long-running script to a WSGI script that includes everything it used to do plus new code to accept data from the web server.
Write a brief web script that the web server starts up, which communicates with the long-running script using asynchat / asyncore / sockets / twisted. These seem designed for communication between two different computers, and so seem like overkill when a long-running Python script and a web server (perhaps with a short-lived CGI or FastCGI script between them) run on the same machine.
Perhaps some other option?
Is there a standard "pythonic" way for a web server to hand off data to a Python program that is already up and running (rather than the much more common case of a web server starting a Python program and handing data to that freshly started program)?
(Details that I suspect aren't relevant: my server runs Lighttpd on Ubuntu Linux running on a Beaglebone Black).
(Perhaps this question should be moved to https://softwareengineering.stackexchange.com/ ?)
You could set up your Python process to listen on any other port (e.g. 8091), then configure your web server to forward certain (or all) requests to that port using a reverse proxy (ProxyPass). Example for Apache:
<VirtualHost yourdomain.for.python.thread>
    ServerName localhost
    ServerAdmin webmaster@example.com
    ProxyRequests Off
    ProxyPass / http://127.0.0.1:8091/
</VirtualHost>
I've done this before to quickly get a Django server in development mode showing pages through a web server. If you actually want to serve HTML content this way, it is not the most efficient way to go.
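Behind such a proxy, the long-running program just needs a small HTTP listener on the side port. A hedged sketch: run the listener in a background thread and hand commands to the existing serial loop through a queue (the handler, port, and queue handoff are made up for illustration):

# sketch: the long-running program grows an HTTP listener on a side port
import queue
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

commands = queue.Queue()  # handoff point to the existing serial loop

class WebHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get('Content-Length', 0))
        commands.put(self.rfile.read(length))  # main loop picks this up
        self.send_response(204)                # accepted, no response body
        self.end_headers()

server = HTTPServer(('127.0.0.1', 8091), WebHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

while True:
    # existing serial-port forwarding loop, now also draining web commands
    try:
        cmd = commands.get(timeout=0.1)
        # ... write cmd out the serial port ...
    except queue.Empty:
        pass

Since the question mentions Lighttpd rather than Apache, the same idea applies there via its mod_proxy (proxy.server) directive pointing at 127.0.0.1:8091.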
