I have a Next.js index page that resizes an image. To do this, I have an api/run.js that executes a Python script and returns the result.
However, the Python script is resource intensive and takes about 3 minutes to return the result, so I want requests to run consecutively (one at a time), not concurrently.
My goal is to be able to access the same webpage from multiple devices and be placed in the same queue.
How can I achieve this?
I tried using WebSockets and a MySQL database, but I realized that someone has to be active on the website for it to work correctly; otherwise the queue will not work.
I am trying to automate a task using a Python script and Snap7 to connect a Windows PC to a Siemens S7-1200 PLC.
The script itself processes an image and then saves it, and it works as it should. My next step is to add the automation: I want this script to run whenever the PLC performs a specific action. After performing that action, the PLC outputs a specific signal, '1', for 3 seconds.
My first idea was to connect to the PLC in my Python script via Snap7 (this already works) and have a while loop that constantly checks this signal, with an if statement: if the signal is '1', the rest of the Python script runs (take the image, process it, etc.).
I wonder whether this idea is sound or whether there is an easier solution. With the while loop, the script would be active all the time, and 'all the time' here really means 24/7. Is there a way to run the script only after the PLC has performed the action, so it doesn't have to stay active constantly?
In other words: what is the easiest way to run a Python script on a Windows PC when a connected PLC performs a specific action?
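For reference, a minimal sketch of the polling idea described above, using the python-snap7 client; the IP address, DB number, bit offsets, and the take_and_process_image() helper are placeholders to replace with the actual setup:

import time

import snap7
from snap7.util import get_bool

client = snap7.client.Client()
client.connect('192.168.0.1', 0, 1)       # placeholder IP; rack 0, slot 1

while True:
    data = client.db_read(1, 0, 1)        # read 1 byte from DB1 at offset 0 (placeholders)
    if get_bool(data, 0, 0):              # the PLC set the signal bit to '1'
        take_and_process_image()          # hypothetical: your existing image routine
        time.sleep(3)                     # skip the remainder of the 3-second pulse
    time.sleep(0.5)                       # poll twice a second instead of busy-waiting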
I am trying to schedule some tasks using Python...
Here is the whole project:
I have online classes on Zoom which I want to record automatically (I can't wake up on time). I have the invite link to the meeting; the time and date of the meeting are mentioned in it.
I have written a Python script to extract the message. Let's call this script A.
I have also written a script that clicks on the link so that the Zoom meeting opens. Let's call this script B.
I need to run the scripts in this order:
I manually run script A at, let's say, 12:00 AM. By that time, the teachers will have sent the invite link message.
Based on the extracted information, I want to automatically run script B and start the OBS recording. A way to end the recording after the meeting has ended would also be appreciated (or I could just record for 1 hour).
I just need a way to automatically start recording with the OBS screen recorder at the time mentioned in the message (possibly in script B itself).
How do I go about it?
You could use the Windows Task Scheduler.
Alternatively: script A extracts the start time of the online class, sleeps until the correct time (using time.sleep()), and then uses subprocess.Popen to start script B at the appropriate moment.
For example, simply starting script B and discarding any output (assuming it resides in the current working directory):
import subprocess

# Launch script B without waiting for it; its output is discarded.
subprocess.Popen(
    ['python3', 'scriptB.py'],
    stdout=subprocess.DEVNULL,
    stderr=subprocess.DEVNULL,
)
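The sleep-until-start part of script A could look like this (a minimal sketch; it assumes the meeting time has already been parsed out of the invite message into a datetime):

import time
from datetime import datetime

# Hypothetical: in script A this would come from parsing the invite message.
start_time = datetime(2024, 5, 10, 9, 0)

# Seconds remaining until the meeting; clamp at 0 if it has already started.
remaining = (start_time - datetime.now()).total_seconds()
time.sleep(max(0.0, remaining))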
I currently have a simple HTML page with three text input boxes and a button, running on Node.js. I am able to send values from the HTML page to the Python script as arguments (via sys.argv) when executing the script through Node.js as a child process.
The Python script keeps generating values. Right now I simply print those values, but can I grab them and send them back to the webpage every second until the script stops running, after about 3 minutes? If so, how do I grab them?
I want to use Node.js because I want to use the pdfmake package (https://www.npmjs.com/package/pdfmake) from npm to generate reports of those values.
If you're already executing the Python script as a child process, simply capture the STDOUT stream from the process.
https://nodejs.org/api/child_process.html
https://nodejs.org/api/child_process.html#child_process_subprocess_stdout
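One caveat on the Python side: when stdout is a pipe (as it is under child_process), Python block-buffers its output, so the Node.js parent may see nothing until the script exits. Flushing each line avoids that. A minimal sketch of the generating script (the values and the 1-second cadence are illustrative):

import time

for i in range(180):                 # roughly 3 minutes of output
    print(f'value {i}', flush=True)  # flush so the parent sees each line immediately
    time.sleep(1)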
I do not have comment privileges yet, and this may not be an exact answer, but have you looked into WebSockets? It looks like you should be able to emit the data from the Python code to the port on which your web page is hosted.
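If you try that route, here is a minimal sketch of the Python side using the python-socketio client; it assumes a Socket.IO server is listening at the given address, which is not part of the original setup:

import socketio

sio = socketio.Client()
sio.connect('http://localhost:3000')   # placeholder: wherever your page's Socket.IO server runs

def publish(value):
    # Push each generated value to the browser as it is produced.
    sio.emit('value', {'value': value})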
EDITED:
I have a crawler.py that crawls certain sites every 10 minutes and sends me some emails about those sites. The crawler is ready and working locally.
How can I adjust it so that the following two things happen:
It runs in an endless loop on the hosting I'll upload it to.
I can stop it now and then (e.g. for debugging).
At first, I thought of using an endless loop, e.g.
crawler.py:
import time

while True:
    doCrawling()          # crawl the sites and send the emails
    time.sleep(10 * 60)   # sleep for 10 minutes
However, according to the answers I got below, this is impossible, since hosting providers kill processes after a while (for the sake of the question, let's assume processes are killed every 30 minutes). Therefore, my endless-loop process would be killed at some point.
Therefore, I have thought of a different solution:
Let's assume that my crawler is located at www.example.com/crawler.py and that each time this URL is accessed, it executes the function run():
import time
import requests

def run():
    doCrawling()                                         # crawl the sites and send the emails
    time.sleep(10 * 60)                                  # wait 10 minutes
    requests.get('http://www.example.com/crawler.py')    # trigger the next run
Thus, there will be no endless loop. In fact, every time my crawler runs it also accesses the URL, which executes the same crawler again. So there is no endless loop and no long-running process, and yet my crawler will continue operating forever.
Will my idea work?
Are there any hidden drawbacks I haven't thought of?
Thanks!
As you stated in the comments, you are running on a public shared host like GoDaddy, so cron is not available there and long-running scripts are usually forbidden: your process would be killed even if you used sleep.
Therefore, the only solution I see is to use an external server which you control to connect to your public server and run the script every 10 minutes. One option is a cron job on your local machine that fetches a specific page on your host with wget or curl. **
Maybe you can find online services that allow running a script periodically and use one of those, but I know of none.
** Bonus: you can get the results directly in the response, without having to send yourself an email.
Update
So, in your updated question you propose to use your script to call itself with an HTTP request. I thought of it before, but I didn't consider it in my previous answer because I believed it wouldn't work (in general).
My concern is: will the server kill a script if the HTTP connection requesting it is closed before the script terminates?
In other words: if you open yoursite.com/script.py and it takes 60 seconds to run, and you close the connection with the server after 10 seconds, will the script run to its regular end?
I thought the answer was obviously "no, the script will be killed", which would make the method useless: you would have to guarantee that a script calling itself via an HTTP request stays alive longer than the script it calls. I did a little experiment using Flask, and it proved me wrong:
import time

from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello_world():
    print('Script started...')
    time.sleep(5)
    print('5 seconds passed...')
    time.sleep(5)
    print('Script finished')
    return 'Script finished'

if __name__ == '__main__':
    app.run()
If I run this script, make an HTTP request to localhost:5000, and close the connection after 2 seconds, the script continues to run until the end and the messages are still printed.
Therefore, with Flask, if you can make an asynchronous request to yourself, you should be able to have an "infinite loop" script.
I don't know the behavior on other servers, though; you should test it.
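For the asynchronous self-request itself, a minimal sketch (assuming the Flask route above; the 2-second timeout mirrors the experiment of closing the connection early, and requests is an extra dependency not in the original code):

import threading
import requests

def call_self():
    # Fire the request, then drop the connection after 2 seconds; per the
    # experiment above, the server keeps running the handler anyway.
    try:
        requests.get('http://localhost:5000/', timeout=2)
    except requests.exceptions.Timeout:
        pass

threading.Thread(target=call_self, daemon=True).start()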
Control
Assuming your server allows a GET request to keep the script running even after the connection is closed, there are a few things to take care of. Your script still has to run fast enough to complete within the server's maximum time allowance. And to make your script run every 10 minutes when each invocation is allowed at most 1 minute, you have to count calls: only every 10th call actually crawls.
In addition, this mechanism has to be controlled, because you cannot interrupt it for debugging as you requested; at least, not directly.
Therefore, I suggest you use files (see the sketch after this list):
Use a file to split your crawling into smaller steps, each able to finish in less than one minute, and continue from where you left off when the script is called again.
Use a file to count how many times the script has been called before actually doing the crawling. This is necessary if, for example, the script is allowed to live for 90 seconds but you want to crawl every 10 hours.
Use a file to control the script: store a boolean flag that you use to stop the recursion mechanism if you need to.
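A minimal sketch of the counter and stop-flag files (the file names and the 10-call threshold are illustrative, not from the original setup):

import os

COUNTER_FILE = 'call_count.txt'   # hypothetical file names
STOP_FILE = 'stop.flag'
CALLS_PER_CRAWL = 10              # crawl only on every 10th call

def should_crawl():
    # Stop flag: create this file by hand to break the recursion for debugging.
    if os.path.exists(STOP_FILE):
        return False
    count = 0
    if os.path.exists(COUNTER_FILE):
        with open(COUNTER_FILE) as f:
            count = int(f.read() or 0)
    count += 1
    with open(COUNTER_FILE, 'w') as f:
        f.write(str(count))
    return count % CALLS_PER_CRAWL == 0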
If you're using Linux, you can simply set up a cron job for your script. Info: http://code.tutsplus.com/tutorials/scheduling-tasks-with-cron-jobs--net-8800
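For example, a crontab entry that runs the crawler every 10 minutes (the interpreter and script paths are placeholders):

*/10 * * * * /usr/bin/python3 /path/to/crawler.py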
If you are running Linux, I would set up an Upstart script (http://upstart.ubuntu.com/getting-started.html) to turn it into a service.
It offers a lot of advantages like:
- Starting at system boot
- Automatic restart on crashes
- Manageability: service mycrawler restart
...
Or, if you would prefer to have it run every 10 minutes, forget about the endless loop and set up a cron job: http://en.wikipedia.org/wiki/Cron