I have this Python click CLI design (wificli.py). At the end of command execution, it prints only the respective command's message.
For example, python3.7 wificli.py png prints only png, and python3.7 wificli.py terminal prints only terminal.
As shown below, I was expecting it to also print End of start function and End of main function, but it does not. The idea is to do resource cleanup in one place rather than at each command's exit point.
import click

@click.group()
@click.option('--ssid', help='WiFi network name.')
@click.option('--security', type=click.Choice(['WEP', 'WPA', '']))
@click.option('--password', help='WiFi password.')
@click.pass_context
def main(ctx, ssid: str, security: str = '', password: str = ''):
    ctx.obj['ssid'] = ssid
    ctx.obj['security'] = security
    ctx.obj['password'] = password

@main.command()
@click.pass_context
def terminal(ctx):
    print('terminal')

@main.command()
@click.option('--filename', help='full path to the png file')
@click.pass_context
def png(ctx, filename, scale: int = 10):
    print('png')

def start():
    main(obj={})
    print('End of start function')

if __name__ == '__main__':
    start()
    print('End of main function')
As you've not asked a specific question, I can only post what worked for me and the reasoning behind it; if this is not what you are looking for, I apologize in advance.
@main.resultcallback()
def process_result(result, **kwargs):
    print('End of start function')
    click.get_current_context().obj['callback']()

def start():
    main(obj={'callback': lambda: print('End of main function')})
So, resultcallback seems to be the suggested way of handling the termination of the group and the invoked command. In our case it prints End of start function because, at that point, the invoked command has finished executing, so we are wrapping up before terminating main. Then it retrieves the callback passed in via the context and executes it.
I am not sure if this is the idiomatic way of doing it, but it seems to have the intended behaviour.
For the result callback, a similar question was answered here
As to what exactly is causing this behaviour — and this is only a guess based on some quick experimentation with placing yield in the group or the command — I suspect some kind of thread/process is spawned to handle the execution of the group and its command.
Hope this helps!
click's main() always raises a SystemExit. Quoting the documentation:
This will always terminate the application after a call. If this is not wanted, SystemExit needs to be caught.
In your example, change start() to:
def start():
    try:
        main(obj={})
    except SystemExit as err:
        # re-raise unless main() finished without an error
        if err.code:
            raise
    print('End of start function')
See the click docs here, and also this answer here.
# Main runtime call at the bottom of your code
start(standalone_mode=False)
# or e.g.
main(standalone_mode=False)
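The SystemExit behaviour can be demonstrated without click at all; a minimal stdlib-only sketch, where main() is a hypothetical stand-in for the click group (click calls sys.exit() when the group finishes):

```python
import sys

def main():
    # stand-in for a click group's main(): click calls sys.exit() when done
    print('command ran')
    sys.exit(0)

def start():
    try:
        main()
    except SystemExit as err:
        # err.code is 0 or None on success; re-raise genuine failures
        if err.code:
            raise
    print('End of start function')

start()
```

Because the zero exit status is swallowed, both the command output and the trailing print are reached.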
Related
When I press Ctrl+C, the call jumps into signal_handler as expected, but the greenlets are not killed; they continue running.
# signal handler to process after catch ctrl+c command
def signal_handler(signum, frame):
    print("Inside Signal Handler")
    gevent.sleep(10)
    print("Signal Handler After sleep")
    gevent.joinall(maingreenlet)
    gevent.killall(maingreenlet,block=True,timeout=10)
    gevent.kill(block=True)
    sys.exit(0)

def main():
    signal.signal(signal.SIGINT, signal_handler) // Catching Ctrl+C
    try:
        maingreenlet = [] // Creating a list of greenlets
        while True:
            for key,profileval in profile.items():
                maingreenlet.append(gevent.spawn(key,profileval)) # appending all grrenlets to list
            gevent.sleep(0)
    except (Error) as e:
        log.exception(e)
        raise

if __name__ == "__main__":
    main()
The main reason your code is not working is that the variable maingreenlet is defined inside the main function, so it is out of scope for the signal_handler function which tries to access it. Your code should throw an error like this:
NameError: global name 'maingreenlet' is not defined
If you were to move the line maingreenlet = [] into the global scope, i.e. anywhere outside of the two def blocks, the greenlets should get killed without problem.
Of course, that's after you fix the other issues in your code, like using // instead of # to start comments, or calling gevent.kill with the wrong parameter (you didn't specify your gevent version, but I assume the current 1.3.7). Actually, that call is redundant after you call gevent.killall.
Learn to use a Python debugger like pdb or rpdb2 to help you debug your code. It'll save you precious time in the long run.
The BDFL posted in 2003 an article about how to write a Python main function. His example is this:
import sys
import getopt

class Usage(Exception):
    def __init__(self, msg):
        self.msg = msg

def main(argv=None):
    if argv is None:
        argv = sys.argv
    try:
        try:
            opts, args = getopt.getopt(argv[1:], "h", ["help"])
        except getopt.error, msg:
            raise Usage(msg)
        # more code, unchanged
    except Usage, err:
        print >>sys.stderr, err.msg
        print >>sys.stderr, "for help use --help"
        return 2

if __name__ == "__main__":
    sys.exit(main())
The reason for the optional argument argv to main() is, "We change main() to take an optional argv argument, which allows us to call it from the interactive Python prompt."
He explains the last line of his code like this:
Now the sys.exit() calls are annoying: when main() calls sys.exit(),
your interactive Python interpreter will exit! The remedy is to let
main()'s return value specify the exit status. Thus, the code at the
very end becomes
if __name__ == "__main__":
sys.exit(main())
and the calls to sys.exit(n) inside main() all become return n.
However, when I run Guido's code in a Spyder console, it kills the interpreter. What am I missing here? Is the intention that I only import modules that have this type of main(), never just executing them with execfile or runfile? That's not how I tend to do interactive development, especially given that it would require me to remember to switch back and forth between import foo and reload(foo).
I know I can catch the SystemExit from getopt or try to use some black magic to detect whether Python is running interactively, but I assume neither of those is the BDFL's intent.
Your options are to not use execfile or to pass in a different __name__ value as a global:
execfile('test.py', {'__name__': 'test'})
The default is to run the file as a script, which means that __name__ is set to __main__.
The article you cite only applies to import.
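On Python 3, where execfile is gone, the runpy module gives the same control over __name__; a sketch (the temp-file dance is only there to make it self-contained):

```python
import os
import runpy
import tempfile

# a script whose __main__ block would exit the interpreter if it ran
src = 'if __name__ == "__main__":\n    raise SystemExit(2)\nvalue = 42\n'
with tempfile.NamedTemporaryFile('w', suffix='.py', delete=False) as f:
    f.write(src)
    path = f.name

# run_name != "__main__" keeps the main guard from firing
ns = runpy.run_path(path, run_name='test')
os.unlink(path)
print(ns['value'])  # 42
```

runpy.run_path returns the resulting module globals, so the script's definitions stay inspectable, much like after an import.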
Another way to handle it, which I alluded to briefly in the question, is to try to detect if you're in an interactive context. I don't believe this can be done portably, but here it is in case it's helpful to someone:
import os, sys, logging

if __name__ == "__main__":
    if 'PYTHONSTARTUP' in os.environ:
        try:
            main()  # Or whatever you want to do here
        except SystemExit as se:
            logging.exception("")
    else:
        sys.exit(main())
I've been struggling with an issue for some time now.
I'm building a little script which uses a main loop. This is a process that needs some attention from the users. The user responds to the steps, and then some magic happens with the use of some functions.
Besides this, I want to spawn another process which monitors the computer system for specific events, like pressing specific keys. If these events occur, it launches the same functions as when the user enters the right values.
So I need to make two processes:
-The main loop (which allows user interaction)
-The background "event scanner", which searches for specific events and then reacts on it.
I try this by launching a main loop and a daemon multiprocessing process. The problem is that when I launch the background process, it starts, but after that the main loop does not launch.
I simplified everything a little to make it clearer:
import multiprocessing, sys, time

def main_loop():
    while 1:
        input = input('What kind of food do you like?')
        print(input)

def test():
    while 1:
        time.sleep(1)
        print('this should run in the background')

if __name__ == '__main__':
    try:
        print('hello!')
        mProcess = multiprocessing.Process(target=test())
        mProcess.daemon = True
        mProcess.start()
        # after starting, the main loop does not start, while it prints out the test loop fine
        main_loop()
    except:
        sys.exit(0)
You should do
mProcess = multiprocessing.Process(target=test)
instead of
mProcess = multiprocessing.Process(target=test())
Your code actually calls test in the parent process, and that call never returns.
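A corrected sketch of the difference, shortened so it terminates (the function object, not a call, is what Process needs):

```python
import multiprocessing
import time

def test():
    # the child's work; in the question this loop never ended
    time.sleep(0.1)
    print('this should run in the background')

def run_demo():
    # pass the function object itself; target=test() would run test()
    # in the parent and block there before Process is even constructed
    mProcess = multiprocessing.Process(target=test)
    mProcess.start()
    print('main loop can run now')  # the parent continues immediately
    mProcess.join()
    return mProcess.exitcode

if __name__ == '__main__':
    run_demo()
```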
You can use lock-based synchronization to get better control over your program's flow. Curiously, the input function raises an EOFError, but I'm sure you can find a workaround.
import multiprocessing, sys, time

def main_loop(l):
    time.sleep(4)
    l.acquire()
    # raises an EOFError, I don't know why
    #_input = input('What kind of food do you like?')
    print(" raw input at 4 sec ")
    l.release()
    return

def test(l):
    i = 0
    while i < 8:
        time.sleep(1)
        l.acquire()
        print('this should run in the background : ', i+1, 'sec')
        l.release()
        i += 1
    return

if __name__ == '__main__':
    lock = multiprocessing.Lock()
    #try:
    print('hello!')
    mProcess = multiprocessing.Process(target=test, args=(lock,)).start()
    inputProcess = multiprocessing.Process(target=main_loop, args=(lock,)).start()
    #except:
    #    sys.exit(0)
Suppose you are working with some bodgy piece of code which you can't trust: is there a way to run it safely without losing control of your script?
An example might be a function which only works some of the time and might fail randomly or spectacularly. How could you retry until it works? I tried some hacking with the threading module but had trouble killing a hung thread neatly.
#!/usr/bin/env python
import os
import sys
import random

def unreliable_code():
    def ok():
        return "it worked!!"
    def fail():
        return "it didn't work"
    def crash():
        1/0
    def hang():
        while True:
            pass
    def bye():
        os._exit(0)
    return random.choice([ok, fail, crash, hang, bye])()

result = None
while result != "it worked!!":
    # ???
To be safe against exceptions, use try/except (but I guess you know that).
To be safe against hanging code (an endless loop), the only way I know is running the code in another process. The parent process can kill this child process in case it does not terminate soon enough.
To be safe against nasty code (doing things it shall not do), have a look at http://pypi.python.org/pypi/RestrictedPython .
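The child-process approach can be sketched with stdlib multiprocessing alone (hang here is a stand-in for the untrusted code):

```python
import multiprocessing
import time

def hang():
    # stand-in for untrusted code that never returns
    while True:
        time.sleep(0.1)

def run_with_timeout(fn, timeout):
    # run fn in a child process and kill it if it overruns
    p = multiprocessing.Process(target=fn)
    p.start()
    p.join(timeout)
    if p.is_alive():
        p.terminate()  # the parent stays in control
        p.join()
        return False   # did not finish in time
    return True

if __name__ == '__main__':
    print(run_with_timeout(hang, 0.5))  # False
```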
You can try running it in a sandbox.
In your real application, can you switch to multiprocessing? Because it seems that what you're asking could be done with multiprocessing + threading.Timer + try/except.
Take a look at this:
from multiprocessing import Process, Queue, queues
from threading import Timer

class SafeProcess(Process):
    def __init__(self, queue, *args, **kwargs):
        self.queue = queue
        super().__init__(*args, **kwargs)

    def run(self):
        print('Running')
        try:
            result = self._target(*self._args, **self._kwargs)
            self.queue.put_nowait(result)
        except:
            print('Exception')

result = None
while result != 'it worked!!':
    q = Queue()
    p = SafeProcess(q, target=unreliable_code)
    p.start()
    t = Timer(1, p.terminate)  # in case it should hang
    t.start()
    p.join()
    t.cancel()
    try:
        result = q.get_nowait()
    except queues.Empty:
        print('Empty')
    print(result)
That in one (lucky) case gave me:
Running
Empty
None
Running
it worked!!
In your code sample you have 4 out of 5 chances of getting an error, so you might also spawn a pool or something to improve your chances of a correct result.
In python when running scripts is there a way to stop the console window from closing after spitting out the traceback?
You can register a top-level exception handler that keeps the application alive when an unhandled exception occurs:
import sys

def show_exception_and_exit(exc_type, exc_value, tb):
    import traceback
    traceback.print_exception(exc_type, exc_value, tb)
    raw_input("Press key to exit.")
    sys.exit(-1)

sys.excepthook = show_exception_and_exit
This is especially useful if you have exceptions occurring inside event handlers that are called from C code, which often do not propagate the errors.
If you're doing this on Windows, you can prefix the target of your shortcut with:
C:\WINDOWS\system32\cmd.exe /K <command>
This will prevent the window from closing when the command exits.
try:
    #do some stuff
    1/0 #stuff that generated the exception
except Exception as ex:
    print ex
    raw_input()
On UNIX systems (Windows has already been covered above...) you can change the interpreter argument to include the -i flag:
#!/usr/bin/python -i
From the man page:
-i
When a script is passed as first argument or the -c option is used, enter interactive mode after executing the script or the command. It does not read the $PYTHONSTARTUP file. This can be useful to inspect global variables or a stack trace when a script raises an exception.
You could have a second script, which imports/runs your main code. This script would catch all exceptions, and print a traceback (then wait for user input before ending)
Assuming your code is structured using the if __name__ == "__main__": main() idiom..
def myfunction(arg):
    pass

class Myclass():
    pass

def main():
    c = Myclass()
    myfunction(c)

if __name__ == "__main__":
    main()
..and the file is named "myscriptname.py" (obviously that can be changed), the following will work
from myscriptname import main as myscript_main

try:
    myscript_main()
except Exception, errormsg:
    print "Script errored!"
    print "Error message: %s" % errormsg
    print "Traceback:"
    import traceback
    traceback.print_exc()
    print "Press return to exit.."
    raw_input()
(Note that raw_input() has been replaced by input() in Python 3)
If you don't have a main() function, you would put the import statement in the try: block:
try:
    import myscriptname
except [...]
A better solution, one that requires no extra wrapper-scripts, is to run the script either from IDLE, or the command line..
On Windows, go to Start > Run, enter cmd and enter. Then enter something like..
cd "\Path\To Your\ Script\"
\Python\bin\python.exe myscriptname.py
(If you installed Python into C:\Python\)
On Linux/Mac OS X it's a bit easier, you just run cd /home/your/script/ then python myscriptname.py
The easiest way would be to use IDLE: launch IDLE, open the script, and click the run button (F5 or Ctrl+F5, I think). When the script exits, the window will not close automatically, so you can see any errors.
Also, as Chris Thornhill suggested, on Windows you can create a shortcut to your script and, in its Properties, prefix the target with..
C:\WINDOWS\system32\cmd.exe /K [existing command]
From http://www.computerhope.com/cmd.htm:
/K command - Executes the specified command and continues running.
On Windows, instead of double-clicking the .py file, you can drag it into an already open CMD window and then hit Enter. It stays open after an exception.
If you are using Windows, you could do this:
import os
#code here
os.system('pause')
Take a look at the answer to this question: How to find exit code or reason when atexit callback is called in Python?
You can just copy this ExitHooks class, customize your own goodbye function, and register it with atexit.
import atexit
import sys, os

class ExitHooks(object):
    def __init__(self):
        self.exit_code = None
        self.exception = None

    def hook(self):
        self._orig_exit = sys.exit
        sys.exit = self.exit
        sys.excepthook = self.exc_handler

    def exit(self, code=0):
        self.exit_code = code
        self._orig_exit(code)

    def exc_handler(self, exc_type, exc, *args):
        self.exception = exc

hooks = ExitHooks()
hooks.hook()

def goodbye():
    if not (hooks.exit_code is None and hooks.exception is None):
        os.system('pause')
        # input("\nPress Enter key to exit.")

atexit.register(goodbye)
Your question is not very clear, but I assume the Python interpreter exits (and therefore the calling console window closes) when an exception happens.
You need to modify your Python application to catch the exception and print it without exiting the interpreter. One way to do that is to print "Press ENTER to exit" and then read some input from the console window, effectively waiting for the user to press Enter.
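A minimal sketch of that approach (the failing division stands in for your application code; input() is guarded so the sketch also runs non-interactively):

```python
import sys
import traceback

def app():
    # stand-in for the real application code
    return 1 / 0

def run_and_wait():
    try:
        app()
    except Exception:
        traceback.print_exc()
        # keep the console window open until the user presses Enter,
        # but only when actually attached to a terminal
        if sys.stdin.isatty():
            input('Press ENTER to exit')
        return False  # an exception happened
    return True

if __name__ == '__main__':
    run_and_wait()
```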