We can run any Python script by doing:
python main.py
Is it possible to do the same if the script is a FastAPI application?
Something like:
python main.py GET /login.html
To call a GET method that returns a login.html page.
If not, how could I start a FastAPI application without using Uvicorn or another web server?
I would like to be able to run the script only when necessary.
Thanks
FastAPI is designed to let you BUILD APIs which are then queried by an HTTP client, not to query those APIs directly yourself; however, technically I believe you could.
When you start the script you could start the FastAPI app in another process running in the background, then send a request to it.
import subprocess
import threading
import time
import requests

url = "http://localhost:8000/some_path"

# launch uvicorn as a subprocess in a background thread, discarding its output
thread = threading.Thread(
    target=lambda: subprocess.check_output(["uvicorn", "main:app"], stderr=subprocess.DEVNULL)
)
thread.start()

time.sleep(2)  # crude wait for the server to come up before querying it

response = requests.get(url)
# do something with the response...

thread.join()
Obviously this snippet has MUCH room for improvement; for example, the thread will never actually end unless something goes wrong. This is just a minimal example.
This method has the clear drawback of starting the API each time you want to run the command. A better approach would be to emulate applications such as Docker, in which you start up a local server daemon which you then ping from the command-line app.
This means the API would be running for much longer in the background, but these APIs are typically fairly light and you shouldn't notice any hit to your computer's performance. This also provides the benefit of multiple users being able to run the command at the same time.
If you used the previous method you may run into situations where user A sends a GET request, starting up the server and taking hold of the configured host/port combination. When user B tries to run the same command just after, they will find themselves unable to start the server and perform the request.
This will also allow you to eventually move the API to an external server with minimal effort down the line. All you would need to do is change the base url of the requests.
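For illustration, the command-line side could then be as simple as the sketch below; the host, port and script name are assumptions, not anything FastAPI prescribes:

# cli.py - hypothetical command-line client for the daemonized FastAPI app
import sys
import requests

BASE_URL = "http://localhost:8000"  # assumed host/port of the running daemon

def main():
    # usage: python cli.py GET /login.html
    method, path = sys.argv[1], sys.argv[2]
    response = requests.request(method, BASE_URL + path)
    print(response.status_code)
    print(response.text)

if __name__ == "__main__":
    main()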
TL;DR: Run the FastAPI application as a daemon, and query the local server from the command-line program instead.
Related
Context
I am working at an escape game company.
We currently have a Windows app that controls the games:
It runs a big loop that checks the state of all the sensors (via queries to the PC's serial port), makes decisions and sends commands to that same serial port.
It has a GUI where the game master can monitor the status of the game and send manual commands to bypass some game logic when needed.
It works very well, but for stability reasons, update nightmares, etc., we want to move away from Windows for that specific application. We want to run all this on Linux.
The project
The ideal thing would be a system where the PC that runs the game is headless and the escape room software is remotely controlled through a web interface. This is better than the current situation, where the operators have to take remote control of the game PC using Windows Remote Desktop.
I would like to have some kind of RESTful API that can be queried by some JS webpages to display the state of the system and send commands to it.
I have the constraint of doing the server part in Python.
But I don't know how to approach such a system.
On the one hand, I will have software that controls real-world things and will, obviously, manage only one single game at a given time. Basically a big, non-blocking, always-running loop.
On the other hand, I will have a REST API to send commands to the running game.
If I look at web frameworks such as Flask, they provide RESTful APIs but are designed to handle multiple connections at the same time and keep them basically isolated from each other.
I don't see how I would make that web part talk to the game system part.
As you can guess, I am not an expert at all, and I would like to keep the system as simple as possible so it stays manageable and understandable.
What would be the best (in terms of simplicity) approach here?
I thought of having two apps, one that runs the game and one for the web server, exchanging commands and status through some sort of inter-process communication. But it looks complicated.
The dream would be to have a sort of background task within the Flask framework that runs the game, sends the serial port requests and follows the game scripts. At the same time, when REST requests are received, the request's callback function would have access to the variables of the background task to gather the status of the game and reply accordingly.
But I have no idea how to do that. I don't even know what keywords to Google to get an idea of how to do it. Is there a pattern here common enough to be supported by basic frameworks? Or am I reinventing the wheel?
To run a permanent background task in the same process as a Flask application, use a threading.Thread running a function with an infinite loop. Communicate through a queue.Queue which is thread-safe.
Note: if scaling past a single process, this would create multiple, separate control tasks which probably isn't desired. Scaling requires an external database or queue and a task framework such as Celery.
Example (based on Flask quickstart and basic thread usage):
from flask import Flask
from queue import Queue, Empty
from threading import Thread
from time import sleep

app = Flask(__name__)
commands = Queue()

def game_loop():
    while True:
        try:
            command = commands.get_nowait()
            print(command)
        except Empty:
            pass
        sleep(5)  # TODO poll other things

Thread(target=game_loop, daemon=True).start()

# Literally the Flask quickstart but pushing to the queue
@app.route("/")
def hello_world():
    commands.put_nowait({'action': 'something'})
    return "<p>Hello, World!</p>"
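To get status back out of the loop (the other half of the question), a minimal sketch would be to have the loop update a shared structure guarded by a lock and expose it through a second route added to the example above; the names below are made up for illustration:

from threading import Lock

status_lock = Lock()
game_status = {"door": "closed", "puzzle_1": "unsolved"}

def set_status(key, value):
    # called from inside game_loop whenever the game state changes
    with status_lock:
        game_status[key] = value

@app.route("/status")
def get_status():
    with status_lock:
        return dict(game_status)  # Flask serializes a dict return value to JSON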
I want to have a class running on my server that holds a connected Bluetooth socket and continuously checks for incoming data, which can then be interpreted. In principle the class structure would look like this:
Interpreter:
-> connect (initializes the class and starts the loop)
-> loop (runs continuously in the background)
-> disconnect (stops the loop)
This class should be instantiated at some point and then run continuously in the background; from time to time an HTTP request might need data from the attributes of the class, but it should run on its own.
I don't know how to accomplish this. I don't want a full description of how to do it, but I would like to know where to start, e.g. what this kind of process is called.
Django on its own doesn't support any background processes - everything is request-response cycle based.
I don't know if what you're trying to do even has a dedicated name. But it is most certainly possible. Just don't tie yourself to Django with this solution.
The way I would accomplish this is to run a separate Python process that is responsible for keeping the connection to the device and, upon request, returning the required data in some way.
The only difficulty you'd have is deciding how to communicate with that process from Django. Since, as I said, Django is request based, that secondary app needs to expose its data to your Django app - it could do any of the following:
Expose a dead-simple HTTP REST API
Expose a UNIX socket that simply returns data immediately after connection (see the sketch below)
Continuously dump data to some file/database/mmap/queue that Django can read
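A minimal sketch of the UNIX-socket option, with made-up names and a fake device read standing in for the real Bluetooth loop:

import os
import socket
import threading
import time

SOCKET_PATH = "/tmp/interpreter.sock"  # hypothetical path
latest_data = b'{"value": null}'

def device_loop():
    # stand-in for the Bluetooth read loop; updates the shared reading
    global latest_data
    while True:
        latest_data = b'{"value": 42}'
        time.sleep(1)

threading.Thread(target=device_loop, daemon=True).start()

if os.path.exists(SOCKET_PATH):
    os.remove(SOCKET_PATH)
server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind(SOCKET_PATH)
server.listen(1)
while True:
    conn, _ = server.accept()
    conn.sendall(latest_data)  # return the latest reading immediately, then close
    conn.close()

On the Django side, a view would simply connect to the same socket, read the bytes, and include them in its response.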
I am trying to set up a Python script that I can have running all the time and then use an HTTP command to activate an action in the script, so that when I type a command like this into a web browser:
http://localhost:port/open
The script executes a piece of code.
The idea is that I will run this script on a computer on my network and activate the code remotely from elsewhere on the network.
I know this is possible with other programming languages as I've seen it before, but I can't find any documentation on how to do it in python.
Is there an easy way to do this in Python or do I need to look into other languages?
First, you need to select a web framework. I recommend using Flask, since it is lightweight and really easy to get started with.
We begin by initializing your app and setting a route. your_open_func() (in the code below), which is decorated with the @app.route("/open") decorator, will be triggered and run when you send a request to that particular URL (for example http://127.0.0.1:5000/open).
As Flask's website says: flask is fun. The very first example (with minor modifications) from there suits your needs:
from flask import Flask

app = Flask(__name__)

@app.route("/open")
def your_open_func():
    # Do your stuff right here.
    return 'ok'  # Remember to return or a ValueError will be raised.
In order to run your app, app.run() is usually enough, but in your case you want other computers on your network to be able to access the app, so you should call the run() method like so: app.run(host="0.0.0.0").
By passing that parameter you are making the server publicly available.
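For example, at the bottom of the same script (the port number is just an illustration; Flask defaults to 5000):

if __name__ == "__main__":
    # host="0.0.0.0" makes the server reachable from other machines on the network
    app.run(host="0.0.0.0", port=5000)

Other machines on the network can then trigger the code via http://<this-machine's-ip>:5000/open.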
Imagine that I've written a Celery task and deployed the code to the server; however, when I want to send the task to the server, I need to reuse the code written before.
So my question is: are there any methods to separate the code between server and client?
Try a web server like Flask that forwards requests to the Celery workers. Or try a server that reads from a queue (SQS, AMQP, ...) and does the same.
No matter the solution you choose, you end up with 2 services: the celery worker itself and the "server" that calls the celery tasks. They both share the same code but are launched with different command lines.
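As a rough sketch of that layout (module, broker URL and task names here are made up): both services import the same module, the worker is started with the celery CLI, and the calling service just enqueues tasks:

# tasks.py - shared by both services
from celery import Celery

app = Celery("tasks", broker="redis://localhost:6379/0")

@app.task
def long_running_job(payload):
    ...  # the actual work, executed only on the worker

# Worker service:  celery -A tasks worker
# Caller service:  from tasks import long_running_job
#                  long_running_job.delay({"some": "payload"})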
Alternatively, if the task code is small enough, you could just import the git repository in your code and call it from there.
I am working on a django web app that has functions (say, for example, sync_files()) that take a long time to return. When I use gevent, my app does not block when sync_files() runs and other clients can connect and interact with the webapp just fine.
My goal is to have the webapp responsive to other clients and not block. I do not expect a zillion users to connect to my webapp (perhaps max 20 connections), and I do not want to set this up to become the next twitter. My app is running on a vps, so I need something light weight.
So in my case listed above, is it redundant to use celery when I am using gevent? Is there a specific advantage to using celery? I prefer not to use celery since it is yet another service that will be running on my machine.
edit: found out that celery can run the worker pool on gevent. I think I am a little more unsure about the relationship between gevent & celery.
In short, you do need Celery.
Even if you use gevent and have concurrency, the problem becomes request timeouts. Let's say your task takes 10 minutes to run, while the typical request timeout is around a minute. If you trigger the task directly within a view, the server will start processing it, but after about a minute the client (browser) will probably drop the connection, since it will think the server is offline. As a result, your data can become corrupt, since you have no guarantee of what will happen when the connection closes.

Celery solves this because it triggers a background process which handles the task independently of the view. The user gets the view response right away, and at the same time the server starts processing the task. That is the correct pattern for any scenario that requires a lot of processing.
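A rough sketch of that pattern, with made-up task and view names (sync_files here is just a stand-in for the long-running function from the question):

# tasks.py
from celery import shared_task

@shared_task
def sync_files():
    ...  # the long-running work runs in the worker, not the web process

# views.py
from django.http import JsonResponse
from .tasks import sync_files

def start_sync(request):
    sync_files.delay()  # enqueue the task and return immediately
    return JsonResponse({"status": "sync started"})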