Trigger action when manually stopping a Python script

I was searching for quite some time but I was unable to find a simple solution.
I have a python script that runs indefinitely and saves some files when a condition is met (enough data gathered). Is there a way to terminate the execution of the script and trigger a function that would save the gathered (but yet unsaved) data?
Every time I have to do something (say, shut down the computer), I must manually stop (terminate) the script (in PyCharm), and I lose the part of the data that is not yet saved.
Edit: Thanks to @Alexander, I was able to solve this. A simple example that outlines the solution:
import atexit
import time

@atexit.register
def on_close():
    print('success')  # save my data here

while True:
    print('a')
    time.sleep(2)
Now when clicking the Stop button, the on_close function is executed and I am able to save my data.

Use the atexit module. It is part of the Python standard library.
import atexit

@atexit.register
def on_close():
    ...  # do something
atexit.register(func, *args, **kwargs)
Register func as a function to be executed at termination. Any optional arguments that are to be passed to func must be passed as arguments to register(). It is possible to register the same function and arguments more than once.
This function returns func, which makes it possible to use it as a decorator.
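For instance, to hand the not-yet-saved data to the exit handler, the arguments can be passed to register() directly. A minimal sketch; save_data, the file name, and the records list are illustrative, not from the original question:

import atexit

def save_data(path, records):
    # Illustrative: flush the gathered records to disk at interpreter exit.
    with open(path, 'w') as f:
        f.write('\n'.join(records))

records = ['row 1', 'row 2']
atexit.register(save_data, 'results.txt', records)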

Related

Class is calling Method immediately when trying to set up a running schedule. Didn't happen with a function

I am using schedule (https://schedule.readthedocs.io/en/stable/), a pretty easy scheduling library. I just turned my function into a class named Processing, with a process method that takes two inputs, df and df1.
When I run the line below, it immediately calls the method and runs it, whereas when it was a function it simply set up the schedule; then I would call the schedule1 function and it would run the schedule. I'm quite confused as to what's going on, as this is my first foray into classes.
schedule.every().day.at("14:45").do(Processing.process(df, df1))

def schedule1():
    while True:
        try:
            schedule.run_pending()
            time.sleep(1)
            print('Schedule Running')
        except KeyboardInterrupt:
            break
It is not the scheduling library that calls your method immediately, but you ;)
You call Processing.process(df, df1) yourself and pass its result to the .do method.
As stated in the documentation for the schedule.Job.do method, you should use schedule.every().day.at("14:45").do(Processing.process, df, df1) instead.
This passes the method you want to call, and the arguments for that method, to the job; schedule then invokes it only when the job is due.
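A minimal runnable sketch of the corrected pattern (the Processing class and the df/df1 values here are placeholders standing in for the asker's objects):

import time
import schedule

class Processing:
    @staticmethod
    def process(df, df1):
        print('processing', df, df1)

df, df1 = 'frame 1', 'frame 2'  # placeholder data

# Pass the callable and its arguments separately; schedule stores them
# and calls Processing.process(df, df1) when the job comes due.
schedule.every().day.at("14:45").do(Processing.process, df, df1)

while True:
    schedule.run_pending()
    time.sleep(1)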

Timeout uncontrolled overridden function in Python

There have been some questions discussing this but none have the set of constraints that I have so maybe someone will come with a good idea.
Basically I need to set-up a timeout for a Python function under the following constraints:
Cross-platform (i.e. no signal.alarm)
Not Python 3 (I can assume Python >= 2.7.9)
Only the function needs to be timed-out, can't just exit the entire program.
I have absolutely no control over the called function, i.e. it's a callback using an abstract interface (using derived classes and overrides). Other people will be writing these callback functions and the assumption is that they're idiots.
Example code:
class AbstractInterface(object):
    def Callback(self):
        # This will be overridden by derived classes.
        # Assume the implementation cannot be controlled or modified.
        pass

...

def RunCallbacks(listofcallbacks):
    # This is the function I can control and modify
    for cb in listofcallbacks:
        # The following call should not be allowed to execute
        # for more than X seconds. If it does, the callback should
        # be terminated, but not the entire iteration.
        cb.Callback()
Any ideas will be greatly appreciated.
Other people will be writing these callback functions and the assumption is that they're idiots.
You really shouldn't execute code from people you consider 'idiots'.
However, I came up with one possibility, shown below (only tested in Python 3, but it should work in Python 2 with minor modifications).
Warning: This runs every callback in a new process, which is terminated after the specified timeout.
from multiprocessing import Process
import time

def callback(i):
    while True:
        print("I'm process {}.".format(i))
        time.sleep(1)

if __name__ == '__main__':
    for i in range(1, 11):
        p = Process(target=callback, args=(i,))
        p.start()
        time.sleep(2)  # Timeout
        p.terminate()
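If most callbacks finish quickly, a variant of the same idea avoids always sleeping for the full timeout by waiting on the process with Process.join (a sketch along the same lines, not part of the original answer):

from multiprocessing import Process

def run_with_timeout(cb, timeout=2):
    # Run one callback in its own process and hard-stop it if it overruns.
    p = Process(target=cb)
    p.start()
    p.join(timeout)      # returns as soon as the process exits, or after the timeout
    if p.is_alive():
        p.terminate()    # kill a runaway callback
        p.join()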

Python 'print' in a C++ based threading model

I am designing a Python app that calls a C++ DLL; I have posted the interaction between my DLL and Python 3.4 here. But now I need to do some streaming processing involving a threading-based model, and my callback function seems to put all the prints in a queue: only when the streaming has ended is all the information printed.
def callbackU(OutList, ConList, nB):
    for i in range(nB):
        out_list_item = cast(OutList[i], c_char_p).value
        print("{}\t{}".format(ConList[i], out_list_item))
    return 0
I have tried the following approaches, but they all seem to behave the same way:
from threading import Lock

print_lock = Lock()

def save_print(*args, **kwargs):
    with print_lock:
        print(*args, **kwargs)

def callbackU(OutList, ConList, nB):
    for i in range(nB):
        out_list_item = cast(OutList[i], c_char_p).value
        save_print(out_list_item)
    return 0
and:
import sys

def callbackU(OutList, ConList, nB):
    for i in range(nB):
        a = cast(OutList[i], c_char_p).value
        sys.stdout.write(a)
        sys.stdout.flush()
    return 0
I would like my callback to print its message when it is called, not when the whole process ends.
I found what the problem was: I am using a thread-based process that has to stay alive for an indefinite time before being ended. In C++ I was using getchar() to wait until the process should end; when I pressed the Enter key, the process jumped to the releasing part. I also tried 0.5-second sleep()s in a while loop until a set time had passed, to test whether that could help, but it didn't. Both methods behaved the same way in my Python application: the values I needed to receive as a stream were first put in a queue, and only once the process ended were those values printed.
The solution was to make two functions: one to initialize the thread-based model, and one to end the process. That way I needed neither getchar() nor sleep(). This works pretty well for me. Thanks for your attention!
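A sketch of what that split can look like from the Python side (the DLL name and the StartStreaming/StopStreaming exports are hypothetical, standing in for the poster's actual functions):

from ctypes import CDLL

mydll = CDLL('./streaming.dll')  # hypothetical DLL

mydll.StartStreaming()  # starts the thread-based model and returns immediately
# ... the callback now prints as data arrives, since nothing blocks inside C++ ...
input('Press Enter to stop streaming...')
mydll.StopStreaming()   # runs the releasing part, replacing the old getchar()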

How to run and stop an infinite loop in a python thread

I need to run a (series of) infinite loops that must be able to check an externally set condition to terminate. I thought the threading module would allow that, but my efforts so far have failed. Here is an example of what I am trying to do:
import threading

class Looping(object):
    def __init__(self):
        self.isRunning = True

    def runForever(self):
        while self.isRunning == True:
            "do stuff here"

l = Looping()
t = threading.Thread(target = l.runForever())
t.start()
l.isRunning = False
I would have expected t.start() to run in a separate thread, with l's attributes still accessible. This is not what happens. I tried the snippet above in the Python shell (IPython). Execution starts immediately upon instantiation of t, and it blocks any further input.
There is obviously something I am not getting right about the threading module.
Any suggestion on how to solve the problem?
You are calling runForever too early. Use target = l.runForever, without the parentheses.
A function call is not evaluated until after its arguments are. When you write l.runForever(), the function is called right then, before the thread is even created. By passing just l.runForever, you pass the function object itself, which the threading machinery can then call when it is ready. The point is that you don't actually want to call runForever yourself; you just want to tell the threading code that runForever is what it should call later.
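A corrected version of the snippet above (the sleep is only there so the loop has visible work to do):

import threading
import time

class Looping(object):
    def __init__(self):
        self.isRunning = True

    def runForever(self):
        while self.isRunning:
            time.sleep(0.1)  # "do stuff here"

l = Looping()
t = threading.Thread(target=l.runForever)  # no parentheses: pass the function itself
t.start()
l.isRunning = False  # runForever sees the flag change and exits
t.join()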

Is there a way to "nice" a method of a Python script

My scripts have multiple components, and only some pieces need to be nice-d, i.e. run at low priority.
Is there a way to nice only one method of a Python script, or do I need to break it into several processes?
I am using Linux, if that matters.
You could write a decorator that renices the running process on entry and exit:
import os
import functools

def low_priority(f):
    @functools.wraps(f)
    def reniced(*args, **kwargs):
        os.nice(5)  # lower this process's priority on entry
        try:
            return f(*args, **kwargs)
        finally:
            os.nice(-5)  # restore the priority on exit
    return reniced
Then you can use it this way:
@low_priority
def test():
    pass  # Or whatever you want to do.
Disclaimers:
Works on my machine; I am not sure how universal os.nice is.
Whether it works may depend on your OS/distribution, or on being root (raising the priority back with os.nice(-5) typically requires privileges).
nice is set per process. Behaviour with multiple threads per process will likely not be sane, and may crash.
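To verify the decorator actually changes the priority, the current niceness can be inspected with os.getpriority (Unix-only, Python 3.3+; as the disclaimers note, the os.nice(-5) on exit needs privileges, so run this as root or expect a PermissionError):

import os

@low_priority
def show_niceness():
    # PRIO_PROCESS with pid 0 means "this process".
    print('niceness inside:', os.getpriority(os.PRIO_PROCESS, 0))

show_niceness()
print('niceness after:', os.getpriority(os.PRIO_PROCESS, 0))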
