How to decouple a Python function as a separate process? [duplicate]

This question already has answers here:
Python spawn off a child subprocess, detach, and exit
(2 answers)
Closed 3 years ago.
For example, I've got a function foo and a caller function. I need foo to go to the background, set a lock file, set everything up, and remove the lock file. The caller must call foo and exit. I was thinking about the subprocess module, but as far as I can see it can't do what I need. python-daemon seems promising, but I don't need it to run forever as a daemon.

Might want to look at threading:
import threading

def foo(my_args):
    # do something here
    pass

def caller(some_args):
    # do some stuff
    foo_thread = threading.Thread(target=foo, args=(some_args,))  # args must be a tuple
    foo_thread.start()
    # continue doing stuff

caller(some_args)  # pass whatever foo needs

You can daemonize your function in a thread, e.g.:
import threading
import time

def worker(snooze):
    print(f'snoozing {snooze} seconds')
    time.sleep(snooze)

if __name__ == '__main__':
    task = threading.Thread(name='daemonize_worker', target=worker, args=(5,))
    task.daemon = True
    task.start()
This runs worker with a 5-second snooze in the background as a daemon thread.
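If a genuinely separate process that outlives the caller is needed (as the question asks), one option that stays within the standard library is launching a detached child with subprocess. A minimal sketch, assuming a hypothetical worker.py that wraps foo and handles the lock file (start_new_session is POSIX-only):
import subprocess
import sys

# Start worker.py in its own session so it keeps running after the caller exits.
subprocess.Popen(
    [sys.executable, "worker.py"],
    start_new_session=True,
    stdout=subprocess.DEVNULL,
    stderr=subprocess.DEVNULL,
)
The caller can return or exit immediately after Popen; the child is not waited on.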


Is there a way to set title/name of a thread in Python? [duplicate]

This question already has answers here:
Python thread name doesn't show up on ps or htop
(7 answers)
Closed 2 years ago.
I would like to set the title of a thread (the title seen in ps or top) in Python, in order to make it visible to process tracer programs. All the threads of a process always show up as python, or as the script's file name when /usr/bin/python is the shebang and the script is invoked as ./script.
Now, I want to change each thread's name. I have a simple script with 4 threads (including the main thread), and I use threading to start them.
Is there a way I could achieve this without having to install third-party packages? Any guidance is appreciated.
Try this:
import threading

def your_function(arg1, arg2, argn):
    # do stuff
    pass

new_thread = threading.Thread(target=your_function, args=(arg1, arg2, argn))
new_thread.name = 'your name'
new_thread.start()
Here new_thread.name is the name you set.
Just do the following:
t = threading.Thread(name='my_thread')
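Note that the name set this way is only visible inside Python (in logging, for example); ps and top will not show it. On Linux, the OS-level thread name can be set without third-party packages through prctl via ctypes. A rough sketch, assuming Linux (PR_SET_NAME renames only the calling thread and truncates the name to 15 characters):
import ctypes
import threading

def set_os_thread_name(name):
    # PR_SET_NAME (option 15) renames the calling OS thread;
    # this is the name that ps/top/htop can display.
    libc = ctypes.CDLL("libc.so.6", use_errno=True)
    libc.prctl(15, name.encode(), 0, 0, 0)

def worker():
    set_os_thread_name("my-worker")  # must be called from inside the thread
    # ... thread work ...

threading.Thread(target=worker, name="my-worker").start()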

Python Threading timer with package [duplicate]

This question already has answers here:
How to repeatedly execute a function every x seconds?
(22 answers)
Closed 6 years ago.
I've been reading up on threading and tried implementing it in my code; however, I'm not sure whether the way I'm doing it is best practice.
My code simply imports a self-written package which pulls weather data, and runs it again every 60 seconds thereafter.
I plan on running multiple data-gathering packages at once, once I have worked out a good code technique.
from package.weather import weatherapi
import threading

def update():
    weatherapi()
    threading.Timer(60, update).start()

update()
Firstly, it just seems messy, and if I wanted more packages running in a thread, I'd need to create another update function.
Secondly, I'm not able to kill my process.
If anyone has any suggestions, it would be greatly appreciated.
This is a really bad use of threading.Timer. You're constantly starting new threads, when you just want one thread to do something regularly. This code is equivalent:
from package.weather import weatherapi
import threading
import time

def update():
    while True:
        weatherapi()
        time.sleep(60)

WEATHER_THREAD = threading.Thread(target=update)
WEATHER_THREAD.daemon = True  # main can exit while thread is still running
WEATHER_THREAD.start()
Since threads all use the same namespace, you can also make do with just one function.
UPDATE_CALLABLES = [weatherapi]  # add new functions to have them called by update

def update():
    while True:
        for func in UPDATE_CALLABLES:
            func()
        time.sleep(60)
Note that UPDATE_CALLABLES can also be appended while the Thread is already running.
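To address the second point (not being able to kill the process), a common pattern is to control the loop with a threading.Event rather than relying on the daemon flag alone. A sketch along the same lines as the code above:
import threading

STOP = threading.Event()

def update():
    while not STOP.is_set():
        for func in UPDATE_CALLABLES:
            func()
        STOP.wait(60)  # returns early if STOP is set during the pause

thread = threading.Thread(target=update, daemon=True)
thread.start()
# ... later, to shut down cleanly:
STOP.set()
thread.join()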
A class like this does what you want:
import threading

class Interval:
    def __init__(self):
        self.api = []
        self.interval = 60
        self.timer = self  # placeholder so stop() works before the first timer exists

    def set_api(self, api):
        self.api = api

    def set_interval(self, interval):
        self.interval = interval

    def cancel(self):
        pass  # makes the placeholder's cancel() a no-op

    def stop(self):
        self.timer.cancel()

    def update(self):
        for api in self.api:
            api()
        self.timer = threading.Timer(self.interval, self.update)
        self.timer.start()  # keep a reference to the Timer so stop() can cancel it
# Create instance and start with default parameters
interval = Interval()
interval.update()
# Later on change the list of items to call
interval.set_api([thisApi, thatApi])
# Later on still change the interval between calls
interval.set_interval(30)
# When you have had enough, cancel the timer
interval.stop()
Note that it still creates a new thread for each interval timed, but you can change the list of calls made at any time and stop it repeating at any time.

Python time.sleep lock process

I want to create a multi-process app. Here is a sample:
import threading
import time
from logs import LOG

def start_first():
    LOG.log("First thread has started")
    time.sleep(1000)

def start_second():
    LOG.log("second thread has started")

if __name__ == '__main__':
    ### call birthday daemon
    first_thread = threading.Thread(target=start_first())
    ### call billing daemon
    second_thread = threading.Thread(target=start_second())
    ### starting all daemons
    first_thread.start()
    second_thread.start()
In this code the second thread does not work. I guess that after calling the sleep function inside first_thread, the main process is put to sleep. I found this post, but there sleep was used inside a class. When I ran that answer, all I got was Process finished with exit code 0. Could anybody explain to me where I made the mistake?
I am using Python 3.* on Windows.
When creating your thread you are actually invoking the functions when trying to set the target for the Thread instead of passing a function to it. This means when you try to create the first_thread you are actually calling start_first which includes the very long sleep. I imagine you then get frustrated that you don't see the output from the second thread and kill it, right?
Remove the parens from your target= statements and you will get what you want
first_thread = threading.Thread(target=start_first)
second_thread = threading.Thread(target=start_second)
first_thread.start()
second_thread.start()
will do what you are trying to do.

How to run a script forever? [duplicate]

This question already has answers here:
Best way to implement a non-blocking wait?
(5 answers)
Closed 1 year ago.
I need to run my Python program forever in an infinite loop.
Currently I am running it like this -
#!/usr/bin/python
import time
# some python code that I want
# to keep on running
# Is this the right way to run the python program forever?
# And do I even need this time.sleep call?
while True:
    time.sleep(5)
Is there any better way of doing it? Or do I even need time.sleep call?
Any thoughts?
Yes, you can use a while True: loop that never breaks to run Python code continually.
However, you will need to put the code you want to run continually inside the loop:
#!/usr/bin/python
while True:
    # some python code that I want
    # to keep on running
    pass
Also, time.sleep is used to suspend the operation of a script for a period of time. So, since you want yours to run continually, I don't see why you would use it.
How about this one?
import signal
signal.pause()
This will let your program sleep until it receives a signal from some other process (or itself, in another thread), letting it know it is time to do something.
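A minimal sketch of that idea (Unix only, since signal.pause is not available on Windows; the handler and the choice of SIGUSR1 are just illustrative):
import signal

def handle_usr1(signum, frame):
    print("got SIGUSR1, time to do some work")

signal.signal(signal.SIGUSR1, handle_usr1)

while True:
    signal.pause()  # sleeps with no CPU use until any signal arrives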
I know this is an old thread, but why has no one mentioned this?
#!/usr/bin/python3
import asyncio

loop = asyncio.get_event_loop()
try:
    loop.run_forever()
finally:
    loop.close()
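On newer Python versions, where calling asyncio.get_event_loop() outside a running loop is deprecated, a roughly equivalent sketch would be:
#!/usr/bin/python3
import asyncio

async def main():
    # wait on an Event that is never set: blocks forever with almost no CPU use
    await asyncio.Event().wait()

asyncio.run(main())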
sleep is a good way to avoid overloading the CPU.
Not sure if it's really clever, but I usually use:
from time import sleep

while not sleep(5):
    # code to execute
    pass
The sleep function always returns None, so the condition is always true.
Here is the complete syntax:
#!/usr/bin/python3
import time

def your_function():
    print("Hello, World")

while True:
    your_function()
    time.sleep(10)  # pause for 10 seconds between calls
For OSes that support select:
import select
# your code
select.select([], [], [])
I have a small script, interruptableLoop.py, that runs code at an interval (default 1 sec), prints a progress message to the screen while it's running, and traps an interrupt signal that you can send with CTL-C:
#!/usr/bin/python3
from interruptableLoop import InterruptableLoop

loop = InterruptableLoop(intervalSecs=1)  # redundant argument
while loop.ShouldContinue():
    # some python code that I want
    # to keep on running
    pass
When you run the script and then interrupt it, you see this output (the periods are printed on every pass of the loop):
[py36]$ ./interruptexample.py
CTL-C to stop (or $kill -s SIGINT pid)
......^C
Exiting at 2018-07-28 14:58:40.359331
interruptableLoop.py:
"""
Use to create a permanent loop that can be stopped ...
... from the same terminal where the process was started and is running in the foreground:
    CTL-C
... from the same user account but through a different terminal:
    $ kill -2 <pid>
    or $ kill -s SIGINT <pid>
"""
import signal
import time
from datetime import datetime as dtt

__all__ = ["InterruptableLoop", ]

class InterruptableLoop:
    def __init__(self, intervalSecs=1, printStatus=True):
        self.intervalSecs = intervalSecs
        self.shouldContinue = True
        self.printStatus = printStatus
        self.interrupted = False
        if self.printStatus:
            print("CTL-C to stop\t(or $kill -s SIGINT pid)")
        signal.signal(signal.SIGINT, self._StopRunning)
        signal.signal(signal.SIGQUIT, self._Abort)
        signal.signal(signal.SIGTERM, self._Abort)

    def _StopRunning(self, signal, frame):
        self.shouldContinue = False

    def _Abort(self, signal, frame):
        raise

    def ShouldContinue(self):
        time.sleep(self.intervalSecs)
        if self.shouldContinue and self.printStatus:
            print(".", end="", flush=True)
        elif not self.shouldContinue and self.printStatus:
            print("Exiting at ", dtt.now())
        return self.shouldContinue
If you mean run as a service, then you can use any REST framework:
from flask import Flask

class A:
    def one(port):
        app = Flask(__name__)
        app.run(port=port)
Call it:
A.one(port=1001)
It will keep listening on port 1001:
* Running on http://127.0.0.1:1001/ (Press CTRL+C to quit)

How to exit the entire application from a Python thread?

How can I exit my entire Python application from one of its threads? sys.exit() only terminates the thread in which it is called, so that is no help.
I would not like to use an os.kill() solution, as this isn't very clean.
Short answer: use os._exit.
Long answer with example:
I yanked and slightly modified a simple threading example from a tutorial on DevShed:
import threading, sys, os

theVar = 1

class MyThread(threading.Thread):
    def run(self):
        global theVar
        print('This is thread ' + str(theVar) + ' speaking.')
        print('Hello and good bye.')
        theVar = theVar + 1
        if theVar == 4:
            #sys.exit(1)
            os._exit(1)
        print('(done)')

for x in range(7):
    MyThread().start()
If you keep sys.exit(1) commented out, the script will die after the third thread prints out. If you use sys.exit(1) and comment out os._exit(1), the third thread does not print (done), and the program runs through all seven threads.
os._exit "should normally only be used in the child process after a fork()" -- and a separate thread is close enough to that for your purpose. Also note that there are several enumerated values listed right after os._exit in that manual page, and you should prefer those as arguments to os._exit instead of simple numbers like I used in the example above.
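For example, a couple of those named values (available on Unix) look like this:
import os

# Prefer the named exit codes over bare numbers where they exist (POSIX only):
os._exit(os.EX_SOFTWARE)  # internal software error
# os._exit(os.EX_OK)      # successful termination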
If all your threads except the main one are daemons, the best approach is generally thread.interrupt_main() -- any thread can use it to raise a KeyboardInterrupt in the main thread, which can normally lead to a reasonably clean exit from the main thread (including finalizers in the main thread getting called, etc.).
Of course, if this results in some non-daemon thread keeping the whole process alive, you need to followup with os._exit as Mark recommends -- but I'd see that as the last resort (kind of like a kill -9;-) because it terminates things quite brusquely (finalizers not run, including try/finally blocks, with blocks, atexit functions, etc).
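A small sketch of that approach on Python 3, where the function lives in the _thread module (the 5-second watchdog below is only an illustration):
import _thread
import threading
import time

def watchdog():
    time.sleep(5)
    _thread.interrupt_main()  # raises KeyboardInterrupt in the main thread

threading.Thread(target=watchdog, daemon=True).start()

try:
    while True:
        time.sleep(1)  # the main thread's normal work loop
except KeyboardInterrupt:
    print("interrupted by the watchdog, exiting cleanly")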
Using thread.interrupt_main() may not help in some situations. KeyboardInterrupt is often used in command line applications to exit the current command or to clear the input line.
In addition, os._exit will kill the process immediately without running any finally blocks in your code, which may be dangerous (files and connections will not be closed for example).
The solution I've found is to register a signal handler in the main thread that raises a custom exception. Use the background thread to fire the signal.
import signal
import os
import threading
import time

class ExitCommand(Exception):
    pass

def signal_handler(signal, frame):
    raise ExitCommand()

def thread_job():
    time.sleep(5)
    os.kill(os.getpid(), signal.SIGUSR1)

signal.signal(signal.SIGUSR1, signal_handler)
threading.Thread(target=thread_job).start()  # thread will fire in 5 seconds

try:
    while True:
        user_input = input('Blocked by input loop ')
        # do something with 'user_input'
except ExitCommand:
    pass
finally:
    print('finally will still run')
Related questions:
Why does sys.exit() not exit when called inside a thread in Python?
Python: How to quit CLI when stuck in blocking raw_input?
The easiest way to exit the whole program is to terminate it using its process id (pid):
import os
import psutil
current_system_pid = os.getpid()
ThisSystem = psutil.Process(current_system_pid)
ThisSystem.terminate()
To install psutil: pip install psutil
For Linux you can use os.kill() and pass the current process ID together with the SIGINT signal to start the steps to exit the app.
import os
import signal

os.kill(os.getpid(), signal.SIGINT)
