I'm trying to develop a script that will connect to our switch and do some tasks.
In this script I have a main function that calls a second function. To the second function I pass a list of switches, and Python connects to them one by one.
The second function calls a third function, where the script runs some tests. If one of these tests fails, I want the entire script to stop.
The problem is that I have tried return, exit(), raise SystemExit, and os.exit, but the script doesn't stop; it just jumps to the next switch and carries on.
Anyone knows how can I close my entire script from a function?
Best regards.
You can use
import sys
sys.exit()
or
raise SystemExit()
The parameters can be used to pass messages. If you are also dealing with loops, break also works really well.
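For instance, here is a minimal sketch (the switch names and test logic are made up for illustration) showing sys.exit() called from the innermost function and unwinding the whole call chain. Note that if any caller wraps the call in a bare except:, that handler will also catch SystemExit and the loop will keep going, which is the usual reason sys.exit() appears not to work.
import sys

def run_tests(switch):                    # third function
    if switch == "bad-switch":
        print("test failed on %s, stopping the whole script" % switch)
        sys.exit(1)                       # raises SystemExit, unwinds every caller

def connect_switches(switches):           # second function
    for switch in switches:
        run_tests(switch)

def main():
    connect_switches(["switch-a", "bad-switch", "switch-b"])
    print("never reached when a test fails")

main()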
Try using sys.exit(). It will raise a SystemExit exception and close the program.
Thanks for your reply.
I already tried sys.exit, raise, and others in the third function, but nothing worked. What I did instead: the third function returns a pass or fail status. The second function checks that return value, and if it is a failure it calls sys.exit(). When I do this in the second function the script stops like we want. Now it's working fine. This is probably the worst way to do it, but it worked.
Best regards.
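For reference, a rough sketch of the workaround described above (the switch names and the test itself are stand-ins): the third function reports pass/fail, and the second function exits on the first failure.
import sys

def run_tests(switch):                     # third function: report pass/fail instead of exiting
    return switch != "bad-switch"          # stand-in for the real tests

def connect_switches(switches):            # second function: stop everything on the first failure
    for switch in switches:
        if not run_tests(switch):
            print("tests failed on %s, stopping the script" % switch)
            sys.exit(1)

connect_switches(["switch-a", "bad-switch", "switch-b"])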
I'm not sure exactly how to ask what I'm asking, and I don't know if any sample code would really be relevant here, so if clarification is necessary just ask.
I have a non-trivial program in Python that does a couple of things. It reads from some SQL Server database tables, executes some DDL in SQL Server (both of these with pyodbc), analyzes some data, and has a GUI to orchestrate everything so users in the future besides me can automate the process.
Everything is functioning as it should, but obviously I don't expect future users to always play by the rules. I can explicitly indicate what input is wrong (i.e. fields left empty), but there are quite a few things that can go wrong. Wrapping everything in try/except is out of the question because it causes a few issues in the web of things happening in my program, some of which are embedded in the resources I'm using, and I feel it's probably not good form to have such blocks everywhere anyway.
That being said, I'm wondering if there's a way to cause an event (likely just a dialog box saying that an error occurred), so that a user without a view of the output window would know something had gone wrong. It would be nice if I could also grab the error message itself, but that isn't necessary. I'm alright with the error still occurring so long as the program can keep going if the issue is corrected.
I'm not sure if this is possible, or if it is possible what form it would take, like something that monitors the output or listens for errors, but I'm open to all information about this.
Thank you
You can wrap your code with a try/except block and name the Exception to print it in the dialog, for example:
try:
    # your code
except Exception as e:
    print(e)  # change this to show the message in your dialog
This way you will not use try/except many times in different places and you will catch basically any Exception. The other way is to raise custom exceptions for each error and catch them with except.
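As a concrete sketch of that idea, assuming a tkinter GUI (swap in the dialog call of whatever toolkit you actually use), the exception text is shown in a message box instead of only being printed:
import tkinter as tk
from tkinter import messagebox

def run_job():
    # placeholder for the database / DDL / analysis work
    raise RuntimeError("could not reach SQL Server")

def on_run_clicked():
    try:
        run_job()
    except Exception as e:
        messagebox.showerror("Error", str(e))   # show the error in a dialog

root = tk.Tk()
tk.Button(root, text="Run", command=on_run_clicked).pack(padx=20, pady=20)
root.mainloop()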
If you still don't want to use try/except at all, maybe start a thread to keep checking for certain variables (through a loop) and whenever you want to start an error event you just set that variable to True and the thread will start the error dialog. For instance:
import time
import threading

test1 = False
test2 = False

def errorCheck():
    while True:
        if test1:
            pass  # start error dialog for error1
        if test2:
            pass  # start error dialog for error2
        time.sleep(0.1)

if __name__ == '__main__':
    t = threading.Thread(target=errorCheck)
    t.daemon = True
    t.start()
    # start your app
However, I recommend using try/except instead.
I run a Python Discord bot. I import some modules and have some events. Now and then, the script seems to get killed for some unknown reason, maybe because of an error/exception or a connection issue. I'm no Python expert, but I managed to get my bot working pretty well; I just don't exactly understand how it works under the hood (since the program does nothing besides waiting for events). Either way, I'd like it to restart automatically after it stops.
I use Windows 10 and start my program either by double-clicking it or through pythonw.exe if I don't want the window. What would be the best approach to verify that my program is still running (it doesn't have to be instant, the check could be done every X minutes)? I thought of using a batch file or another Python script, but I have no idea how to do such a thing.
Thanks for your help.
You can write another Python script (B) that launches your original script (A) using Popen from subprocess. In script B, wait for script A to finish; if A exits with an error code, relaunch it from B.
Here is an example of python_code_B.py:
import subprocess

filename = 'my_python_code_A.py'

while True:
    # be careful with '.wait()': it blocks until the child process exits,
    # and it returns the child's exit code
    returncode = subprocess.Popen('python ' + filename, shell=True).wait()
    # if 'my_python_code_A.py' exited with an error, repeat the loop and
    # restart it; otherwise break out of the loop
    if returncode != 0:
        continue
    else:
        break
This will generally work well on Unix and Windows systems; it was tested on Windows 7/10 with the latest code update.
Also, please run python_code_B.py from a real terminal, meaning a command prompt or shell, not from inside IDLE.
For the problem you describe, I prefer either using a subprocess call to rerun the Python script, or using try blocks.
This might be helpful to you.
Check this sample try block:
try:
    import xyz  # suppose this module does not exist, or that some error occurs here
except:
    pass  # carry on with the next line of code
I am trying to create a Python script that, on the click of a button, opens another Python script and closes itself, plus some return mechanism in the second script to get back to the original one. Hope you can help.
Thanks.
Since your question is very vague, here's a somewhat vague answer:
First, think about whether you really need to do this at all. Why can't the first script just import the second script as a module and call some function on it?
But let's assume you've got a good answer for that, and you really do need to "close" and run the other script, where by "close" you mean "make your GUI invisible".
import subprocess
import sys

def handle_button_click(button):
    button.parent_window().hide()
    subprocess.call([sys.executable, '/path/to/other/script.py'])
    button.parent_window().show()
This will hide the window, run the other script, then show the window again when the other script is finished. It's generally a very bad idea to do something slow and blocking in the middle of an event handler, but in this case, because we're hiding our whole UI anyway, you can get away with it.
A smarter solution would involve some kind of signal that either the second script sends, or that a watcher thread sends. For example:
def run_other_script_with_gui_hidden(window):
    gui_library.do_on_main_thread(window.hide)
    subprocess.call([sys.executable, '/path/to/other/script.py'])
    gui_library.do_on_main_thread(window.show)

def handle_button_click(button):
    t = threading.Thread(target=run_other_script_with_gui_hidden,
                         args=(button.parent_window(),))
    t.daemon = True
    t.start()
Obviously you have to replace things like button.parent_window(), window.hide(), gui_library.do_on_main_thread, etc. with the appropriate code for your chosen window library.
If you'd prefer to have the first script actually exit, and the second script re-launch it, you can do that, but it's tricky. You don't want to launch the second script as a child process, but as a sibling. Ideally, you want it to just take over your own process. Except that you need to shut down your GUI before doing that, unless your OS will do that automatically (basically, Windows will, Unix will not). Look at the os.exec family, but you'll really need to understand how these things work in Unix to do it right. Unless you want the two scripts to be tightly coupled together, you probably want to pass the second script, on the command line, the exact right arguments to re-launch the first one (basically, pass it your whole sys.argv after any other parameters).
As an alternative, you can use execfile to run the second script within your existing interpreter instance, and then have the second script execfile you back. This has similar, but not identical, issues to the exec solution.
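If you do go the exec route described above, a rough sketch looks like this (assuming the GUI has already been shut down; the script path is the same placeholder used earlier, and the argument handling is only illustrative):
import os
import sys

def hand_off_to_other_script():
    # replace the current process with the second script, passing along
    # this script's sys.argv so the second script can re-launch it later
    os.execv(sys.executable,
             [sys.executable, '/path/to/other/script.py'] + sys.argv)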
As an alternative to using pdb, I would have a use for the Python continue statement in interactive mode, after a Ctrl-C during a script invocation with python -i. That way, say at a raw_input('continue->') prompt in my script, I could break out, inspect or modify things, and go right back to the raw_input prompt (or whatever code raised an exception) with a continue command. The break command outside of a loop could also be repurposed for symmetry, but I'd have less use for that. Before submitting a PEP for this, I'd like some feedback from the Python community.
It might be possible to do something similar using just a PYTHONSTARTUP script and the inspect module, but if so I haven't figured it out yet.
Ctrl-C raised a KeyboardInterrupt exception in your script. Since you didn't catch that exception, the program terminated; only then does the interactive prompt appear.
You can't continue because your program is already over. Pressing Ctrl-C just raised an exception; the program didn't pause at that exact place. The exception propagated all the way up, and the program finished.
There's no way to know where you want to continue to. For that you need a real debugger.
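If all you really need is to get back to the prompt after poking around, one approximation is to catch the KeyboardInterrupt yourself and drop into pdb, whose c command then resumes the script. A small sketch, using the Python 2 raw_input prompt from the question:
import pdb

def prompt():
    while True:
        try:
            return raw_input('continue-> ')   # Python 2, as in the question
        except KeyboardInterrupt:
            # inspect or modify state here, then type 'c' to go back to the prompt
            pdb.set_trace()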
I'm using the PyDev for Eclipse plugin, and I'm trying to set a break point in some code that gets run in a background thread. The break point never gets hit even though the code is executing. Here's a small example:
import thread

def go(count):
    print 'count is %d.' % count # set break point here

print 'calling from main thread:'
go(13)
print 'calling from bg thread:'
thread.start_new_thread(go, (23,))
raw_input('press enter to quit.')
The break point in that example gets hit when it's called on the main thread, but not when it's called from a background thread. Is there anything I can do, or is that a limitation of the PyDev debugger?
Update
Thanks for the work arounds. I submitted a PyDev feature request, and it has been completed. It should be released with version 1.6.0. Thanks, PyDev team!
The problem is that there's no API in the thread module to know when a thread starts.
What you can do in your example is set the debugger trace function yourself (as Alex pointed out), as in the code below. If you're not in the remote debugger, the pydevd.connected = True line is currently required (I'll change PyDev so that this is no longer needed). You may want to add a try..except ImportError around the pydevd import, since it will fail if you're not running in the debugger.
def go(count):
    import pydevd
    pydevd.connected = True
    pydevd.settrace(suspend=False)
    print 'count is %d.' % count # set break point here
Now, on second thought, I think PyDev could replace the start_new_thread function in the thread module with its own version, which sets up the debugger and then calls the original function. I just did that and it seems to be working, so if you use the nightly build that will be available in a few hours (which will become the future 1.6.0), it should work without doing anything special.
The underlying issue is with sys.settrace, the low-level Python function used to perform all tracing and debugging. As the docs say:
    The function is thread-specific; for a debugger to support multiple threads, it must be registered using settrace() for each thread being debugged.
I believe that when you set a breakpoint in PyDev, the resulting settrace call is always happening on the main thread (I have not looked at PyDev recently so they may have added some way to work around that, but I don't recall any from the time when I did look).
A workaround you might implement yourself is, in your main thread after the breakpoint has been set, to use sys.gettrace to get PyDev's trace function, save it in a global variable, and make sure in all threads of interest to call sys.settrace with that global variable as the argument -- a tad cumbersome (more so for threads that already exist at the time the breakpoint is set!), but I can't think of any simpler alternative.
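A rough sketch of that workaround, under the assumption that the breakpoint has already been set on the main thread so sys.gettrace() returns PyDev's trace function (the worker function here is just an illustration):
import sys
import threading

# on the main thread, after PyDev has installed its trace function
pydev_trace = sys.gettrace()

def worker(count):
    if pydev_trace is not None:
        sys.settrace(pydev_trace)    # register the debugger's trace function in this thread
    print 'count is %d.' % count     # breakpoints in here should now be hit

threading.Thread(target=worker, args=(23,)).start()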
On this question, I found a way to start the command-line debugger:
import pdb; pdb.set_trace()
It's not as easy to use as the Eclipse debugger, but it's better than nothing.
For me this worked, following one of Fabio's posts: after setting the trace with setTrace("000.000.000.000") (where the zeros are the IP of the computer running Eclipse/PyDev), call
threading.settrace(pydevd.GetGlobalDebugger().trace_dispatch)