Running a background task continuously in Django - python

I am running a server in Django which takes values continuously. The function I use runs a forever loop, so when I call it, it never returns.
My problem: I want to take values continuously from the server and use them afterwards wherever I want.
I tried threading. What I thought I could do is make a background task that keeps feeding the database, and when I need the values I can read them from it. But I don't know how to do this.
ip = "192.168.1.15"
port = 5005
def eeg_handler(unused_addr, args, ch1, ch2, ch3, ch4, ch5):
a.append(ch1)
print(a)
from pythonosc import osc_server, dispatcher
dispatcher = dispatcher.Dispatcher()
dispatcher.map("/muse/eeg", eeg_handler, "EEG")
server = osc_server.ThreadingOSCUDPServer(
(ip, port), dispatcher)
# print("Serving on {}".format(server.server_address))
server.serve_forever()

You can create a management command.
With a management command you can access your database in the same way you access it through Django.
You can then schedule this command from cron, or you can let it run forever, because it will not block your application.
There is also another guide to writing management commands.
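For illustration, a minimal sketch of such a command might look like this (the app name "myapp", the Reading model, and the read_value() placeholder are assumptions, not something from the question):
import time

from django.core.management.base import BaseCommand

from myapp.models import Reading  # hypothetical model that stores one value


class Command(BaseCommand):
    help = "Continuously collect values and store them in the database"

    def handle(self, *args, **options):
        while True:
            value = self.read_value()            # your data source goes here
            Reading.objects.create(value=value)
            time.sleep(1)                        # avoid a busy loop

    def read_value(self):
        # Placeholder: replace with the code that actually reads from the
        # OSC server / sensor shown in the question.
        return 0.0
You would save this as myapp/management/commands/collect_values.py and start it with python manage.py collect_values.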

You can use django-background-tasks, a database-backed work queue for Django. You can follow their installation instructions here.
A sample background task for your case would be:
from background_task import background

@background(schedule=60)
def feed_database(some_parameter):
    # feed your database here
    # you can also pass a parameter to this function
    pass
All you need to do is call feed_database from regular code to activate your background task; this will create a Task object, store it in the database, and run the function after 60 seconds.
In your case you want to run this function indefinitely, so you can do something like this:
feed_database(some_parameter, repeat=60, repeat_until=None)
This will run your function every 60 seconds, indefinitely.
They also provide a Django management command to process the queued tasks (if you don't want to start your task from your code): python manage.py process_tasks.

Related

Is it possible to run a FastAPI app from the command line?

We can run any Python script by doing:
python main.py
Is it possible to do the same if the script is a FastAPI application?
Something like:
python main.py GET /login.html
To call a GET method that returns a login.html page.
If not, how could I start a FastAPI application without using Uvicorn or another web server?
I would like to be able to run the script only when necessary.
Thanks
FastAPI is designed to let you build APIs that can be queried with an HTTP client, not to query those APIs from the command line directly; however, technically I believe you could.
When you start the script you could start the FastAPI app in another process running in the background, then send a request to it.
import subprocess
import threading
import time

import requests

url = "http://localhost:8000/some_path"  # uvicorn's default port

# launch uvicorn as a subprocess from a background thread
# (its output is captured by check_output)
thread = threading.Thread(target=lambda: subprocess.check_output(["uvicorn", "main:app"]))
thread.start()

time.sleep(1)  # crude wait for the server to come up

response = requests.get(url)
# do something with the response...

thread.join()
Obviously this snippet has much room for improvement; for example, the thread will never actually end unless something goes wrong. It is just a minimal example.
This method has the clear drawback of starting the API each time you want to run the command. A better approach would be to emulate applications such as Docker, where you start a local server daemon and then ping it from the command line app.
This means the API runs in the background for much longer, but these APIs are typically fairly light and you shouldn't notice any hit to your computer's performance. It also lets multiple users run the command at the same time.
With the first method you may run into situations where user A sends a GET request, starting the server and taking hold of the configured host/port combination; when user B tries to run the same command just after, they will be unable to start the server and perform the request.
This approach also lets you eventually move the API to an external server with minimal effort down the line: all you would need to change is the base URL of the requests.
TL;DR: run the FastAPI application as a daemon, and query the local server from the command line program instead.
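As a rough sketch of that daemon approach (assuming the FastAPI app is already running locally, e.g. started once with uvicorn main:app on its default port; the URL and argument handling here are only illustrative):
import sys

import requests

BASE_URL = "http://localhost:8000"  # assumed host/port of the running app


def main():
    # usage: python cli.py GET /login.html
    method, path = sys.argv[1], sys.argv[2]
    response = requests.request(method, BASE_URL + path)
    print(response.status_code)
    print(response.text)


if __name__ == "__main__":
    main()
Moving the API to another machine later would then only require changing BASE_URL.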

How to use a value from an rpyc client, called from a Progress script, in that script

I'm trying to use an rpyc server via a Progress script, and have the script do different tasks based on values it gets from the client.
I'm using an rpyc server to automate some tasks on demand from users, and I'm trying to implement the client in a Progress script this way:
1. The Progress script launches.
2. The Progress script calls the rpyc client via cmd, to run a function that checks whether the server is live and returns some value indicating whether it is (the exact kind of indication doesn't really matter to me; I guess single characters like 0 for live and 1 for not live would be preferable).
3. Based on the value returned in the last step, the script either notifies the user that the server is down and quits, or proceeds to the rest of the code.
The part I'm struggling with is stage 2: how to call the client in a way that stores the value it returns, and how to actually return that value properly to the script.
I thought about using the -param option, but couldn't figure out how to use it in my scenario, where the value I'm trying to return goes to a script that is already mid-run, rather than calling another Progress script with the value.
The code of the client I use for checking whether the server is up is:
import rpyc

def client_check():
    c = rpyc.connect(host, 18812)

if __name__ == "__main__":
    try:
        client_check()
    except:
        # some_method_of_transferring_the_indication #
        pass
As mentioned, for the Progress script I didn't really manage to figure out the right way to call the client and store a value the way I'm trying to.
I guess I could make the server create a file that serves as an indicator of its status, and check for that file at the start of the script, but I don't know if that's the right way to do it, and I'd prefer to avoid it if possible.
I am guessing you are saying that you shell out from the Progress script to run your rpyc script as an external process?
In that case, something along these lines will read the first line of output from that rpyc script:
define variable result as character no-undo format "x(30)".
input through value( "myrpycScript arg1 arg2" ). /* this runs your rpyc script */
import unformatted result. /* this reads the result */
input close.
display result.
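On the Python side, a hedged sketch of what the rpyc client could print so the Progress snippet above can pick it up (the host, port, and the single-character convention are assumptions based on the question):
import sys

import rpyc

HOST = "localhost"   # replace with your rpyc server's host
PORT = 18812


def client_check():
    try:
        conn = rpyc.connect(HOST, PORT)
        conn.close()
        return True
    except Exception:
        return False


if __name__ == "__main__":
    # print a single line: "0" if the server is live, "1" if it is not,
    # which `import unformatted result` in the Progress script will read
    print("0" if client_check() else "1")
    sys.exit(0)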

(Django) Running an asynchronous server task continuously in the background

I want a class running on my server that holds a connected Bluetooth socket and continuously checks for incoming data, which can then be interpreted. In principle the class structure would look like this:
Interpreter:
-> connect (initializes the class and starts the loop)
-> loop (runs continuously in the background)
-> disconnect (stops the loop)
This class should be initiated at some point and then run continuously in the background; from time to time an HTTP request might need data from the attributes of the class, but it should run on its own.
I don't know how to accomplish this and don't want a full description of how to do it, but I would like to know where to start, i.e. what this kind of process is called.
Django on its own doesn't support any background processes - everything is based on the request-response cycle.
I don't know if what you're trying to do even has a dedicated name, but it's most certainly possible. Just don't tie this solution to Django.
The way I would accomplish this is to run a separate Python process that is responsible for keeping the connection to the device and, upon request, returning the required data in some way.
The only difficulty is deciding how to communicate with that process from Django. Since, as I said, Django is request based, the secondary app could expose data to your Django app in any of the following ways (a rough sketch of the first option follows the list):
Expose a dead-simple HTTP Rest API
Expose a UNIX socket that returns data immediately after connection
Continuously dump data to some file/database/mmap/queue that Django could read
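For illustration, here is a rough sketch of the first option using only the standard library (the port, the payload format, and read_from_device() are assumptions; the real version would hold your Bluetooth connection instead):
import json
import threading
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

latest = {"value": None}  # most recent reading, shared with the HTTP handler


def read_from_device():
    # Placeholder for the real Bluetooth read; returns a dummy value here.
    return time.time()


def reader_loop():
    # runs continuously in the background, like the "loop" in the question
    while True:
        latest["value"] = read_from_device()
        time.sleep(0.1)


class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps(latest).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)


if __name__ == "__main__":
    threading.Thread(target=reader_loop, daemon=True).start()
    HTTPServer(("localhost", 8123), Handler).serve_forever()
A Django view could then fetch the latest reading with a plain HTTP request to http://localhost:8123/ whenever it needs the data.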

Use python to shut down instance script runs on

I am running machine learning scripts that take a long time to finish. I want to run them on AWS on a faster processor and stop the instance when they finish.
Can boto be used within the running script to stop its own instance? Is there a simpler way?
If your EC2 instance is running Linux, you can simply issue a halt or shutdown command to stop your EC2 instance. This lets you shut down the instance without requiring IAM permissions.
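For example, a minimal sketch of calling the shutdown command from the end of the training script (this assumes the script can run the command with sudo rights on the instance):
import subprocess


def stop_this_instance():
    # issues an OS-level shutdown; on an EBS-backed instance this stops
    # (rather than terminates) the instance by default
    subprocess.call(["sudo", "shutdown", "-h", "now"])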
See Creating a Connection for how to create a connection. I've never tried this before, so use caution. Also make sure the instance is EBS-backed; otherwise it will be terminated when you stop it.
import boto.ec2
import boto.utils
conn = boto.ec2.connect_to_region("us-east-1") # or your region
# Get the current instance's id
my_id = boto.utils.get_instance_metadata()['instance-id']
conn.stop_instances(instance_ids=[my_id])

How to write an endless-loop crawler in Python?

EDITED:
I have a crawler.py that crawls certain sites every 10 minutes and sends me some emails about those sites. The crawler is ready and working locally.
How can I adjust it so that the following two things happen:
It runs in an endless loop on the hosting I upload it to?
Sometimes I will be able to stop it (e.g. for debugging).
At first, I thought of using an endless loop, e.g.:
crawler.py:
import time

while True:
    doCrawling()           # existing crawl-and-email logic
    time.sleep(10 * 60)    # 10 minutes
However, according to the answers I got below, this is impossible, since hosting providers kill processes after a while (for the sake of the question, let's assume processes are killed every 30 minutes). Therefore, my endless-loop process would be killed at some point.
I have therefore thought of a different solution:
Let's assume that my crawler is located at "www.example.com\crawler.py" and each time it is accessed, it executes the function run():
run():
    doCrawling()
    sleep(10 minutes)
    call URL "www.example.com\crawler.py"
Thus, there will be no endless loop. In fact, every time my crawler runs, it also accesses the URL, which executes the same crawler again. So there is no endless loop and no long-running process, yet my crawler keeps operating forever.
Will my idea work?
Are there any hidden drawbacks I haven't thought of?
Thanks!
As you stated in the comments, you are running on a public shared server like GoDaddy. Therefore cron is not available there and long-running scripts are usually forbidden - your process would be killed even if you were using sleep.
Therefore, the only solution I see is to use an external server over which you have control, which connects to your public server and runs the script every 10 minutes. One option is to use cron on your local machine to connect with wget or curl to a specific page on your host. **
Maybe you can find online services that allow running a script periodically and use those, but I don't know of any.
** Bonus: you can get the results directly as the response, without having to send yourself an email.
Update
So, in your updated question you propose to use your script to call itself with an HTTP request. I thought of that before, but didn't consider it in my previous answer because I believed it wouldn't work (in general).
My concern was: will the server kill a script if the HTTP connection requesting it is closed before the script terminates?
In other words: if you open yoursite.com/script.py and it takes 60 seconds to run, and you close the connection with the server after 10 seconds, will the script run to its regular end?
I thought the answer was obviously "no, the script will be killed", which would make the method useless, because you would have to guarantee that a script calling itself via an HTTP request stays alive longer than the called script. A little experiment with Flask proved me wrong:
from flask import Flask
import time

app = Flask(__name__)

@app.route('/')
def hello_world():
    print('Script started...')
    time.sleep(5)
    print('5 seconds passed...')
    time.sleep(5)
    print('Script finished')
    return 'Script finished'

if __name__ == '__main__':
    app.run()
If I run this script, make an HTTP request to localhost:5000, and close the connection after 2 seconds, the script continues to run until the end and the messages are still printed.
Therefore, with Flask, if you can make an asynchronous request to yourself, you should be able to have an "infinite loop" script.
I don't know the behavior on other servers, though; you should test it.
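If you want to try it, here is a rough sketch of the self-triggering request (the URL is an assumption; whether the triggered run survives the closed connection is exactly what you need to test on your host):
import threading

import requests

CRAWLER_URL = "http://www.example.com/crawler.py"  # assumed public endpoint


def trigger_next_run():
    def fire():
        try:
            # a short timeout is enough: we only need the request to reach
            # the server, not to wait for the next crawl to finish
            requests.get(CRAWLER_URL, timeout=2)
        except requests.RequestException:
            pass  # a timeout here is expected and harmless

    threading.Thread(target=fire, daemon=True).start()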
Control
Assuming your server allows you to make a GET request and keeps the script running even if the connection is closed, you still have a few things to take care of: your script has to run fast enough to complete within the maximum time the server allows, and to make it run every 10 minutes with a maximum allowance of 1 minute you have to count calls and only crawl on every 10th one.
In addition, this mechanism has to be controllable, because you cannot interrupt it for debugging as you requested - at least not directly.
Therefore, I suggest you use files (a rough sketch follows below):
Use a file to split your crawling into smaller steps, each able to finish in less than one minute, and then continue from where you left off when the script is called again.
Use a file to count how many times the script has been called before actually doing the crawling. This is necessary if, for example, the script is allowed to live 90 seconds but you want to crawl every 10 hours.
Use a file to control the script: store a boolean flag that you can use to stop the recursion mechanism if you need to.
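A hedged sketch of that file-based control (the file names and the call threshold are assumptions):
import os

STOP_FLAG = "stop.flag"       # create this file to stop the recursion
COUNTER_FILE = "counter.txt"  # counts calls between actual crawls
CALLS_PER_CRAWL = 10          # e.g. called every minute, crawl on every 10th call


def should_crawl():
    if os.path.exists(STOP_FLAG):
        return False  # someone asked the crawler to stop
    count = 0
    if os.path.exists(COUNTER_FILE):
        with open(COUNTER_FILE) as f:
            count = int(f.read() or 0)
    count += 1
    if count >= CALLS_PER_CRAWL:
        count = 0  # reset and crawl this time
    with open(COUNTER_FILE, "w") as f:
        f.write(str(count))
    return count == 0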
If you're using Linux you should just set up a cron job for your script. Info: http://code.tutsplus.com/tutorials/scheduling-tasks-with-cron-jobs--net-8800
If you are running Linux I would set up an upstart script (http://upstart.ubuntu.com/getting-started.html) to turn it into a service.
It offers a lot of advantages like:
- Starting at system boot
- Auto restart on crashes
- Manageable: service mycrawler restart
...
Or, if you would prefer to have it run every 10 minutes, forget about the endless loop and set up a cron job: http://en.wikipedia.org/wiki/Cron
