Here I'm reading and comparing values from the two logs using 'for' loops. The problem is that I'm not able to continue to the next TC after the sys.exit() call. Let me know if more clarification is required.
import re
import sys

f = open('/tmp/ftplog', 'r')
for line in f:
    m = re.findall(r"5\d+", line)
    #print m
    fd = open('/tmp/tellog', 'r')
    for line in fd:
        n = re.findall(r"5\d+", line)
        #print n
        if m == n:
            print "passed"
            sys.exit()
####TC-02####
def tc02(ipadrr, login, password, ftpipaddr, ftplogin, ftppassword, ftpfilename):
    try:
        telconn2 = pexpect.spawn(ipadrr)
You can add hooks that will be executed on exit using atexit. http://docs.python.org/2/library/atexit.html?highlight=atexit#atexit
However, needing to do this in a simple script is usually a sign your logic is wrong. Do you really need to exit? Could you throw an exception instead? break? return? For example, try putting the logic in a function that returns when it is done, and have the calling code do something with the returned result.
sys.exit actually throws a SystemExit exception, which you can catch, but you really shouldn't. Restructure your program so you don't have to.
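If you do want an on-exit hook, here is a minimal sketch of the atexit approach (the hook function is just an illustration, not part of the original script):
import atexit

def report_done():
    # runs when the interpreter exits, including after sys.exit()
    print("log comparison finished")

atexit.register(report_done)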
Try this:
print "passed"
return  # instead of sys.exit, use return (this requires the comparison to live inside a function)
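A sketch of that, with the comparison wrapped in its own function (compare_logs is just an illustrative name, not from the original code):
import re

def compare_logs(ftplog, tellog):
    # return True as soon as a matching 5xx code is found,
    # instead of killing the whole script with sys.exit()
    with open(ftplog) as f:
        for line in f:
            m = re.findall(r"5\d+", line)
            if not m:
                continue
            with open(tellog) as fd:
                for other in fd:
                    if m == re.findall(r"5\d+", other):
                        return True
    return False

if compare_logs('/tmp/ftplog', '/tmp/tellog'):
    print("passed")
# execution simply continues to the next TC here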
If it's really needed, one way to do this is to call the same script as a subprocess with different parameters just before you exit, like this:
import subprocess
p = subprocess.Popen("exec your_script.py -parameters parameter", stdout=subprocess.PIPE, shell=True)
Then you can add some checks for a specific parameter that you will provide and execute only this part of your code (e.g. the tc02() function that you need).
Just keep in mind that once you call the script as a subprocess, you won't be able to stop it from the console with Ctrl+C, since that will kill the parent process but not the child processes. In order to kill everything you need to call a method like this:
p.kill()
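A rough sketch of that parameter check at the top of the script (the --testcase flag name is an assumption, not from the original question):
import sys

# run only the requested test case when the script is re-invoked
if '--testcase' in sys.argv:
    which = sys.argv[sys.argv.index('--testcase') + 1]
    if which == 'tc02':
        tc02(ipadrr, login, password, ftpipaddr, ftplogin,
             ftppassword, ftpfilename)  # arguments come from your own setup
    sys.exit()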
I'm trying to restart a subprocess if it crashes, but for some reason this loop just doesn't work. I've been wondering if that's even possible?
import multiprocessing
import subprocess

def dont_stop(conv):
    try:
        subprocess.call(['python', 'main.py', str(conv)])
    except:
        dont_stop(conv)

if __name__ == '__main__':
    proc = []
    for conv in range(3, 8):
        p = multiprocessing.Process(name=f'p{conv}', target=dont_stop, args=(conv,))
        p.start()
        proc.append(p)
    for p in proc:
        p.join()
The subprocess.call function doesn't raise an exception if the program it is running exits in a non-standard way. All it does is return the "return code" from the process you told it to run. That's usually 0 for a process that exits normally, and some other value for a program that crashes (the specific meanings of non-zero values vary between programs and OSs).
Here's a simple solution that replaces your recursive code with a loop that checks the return value of the subprocess:
def dont_stop(conv):
    retval = 1
    while retval != 0:  # a return value of zero indicates a normal exit
        retval = subprocess.call(['python', 'main.py', str(conv)])
An alternative approach is to stop using subprocess.call and use subprocess.check_call instead. That function checks the return code and raises an exception if it's not zero. While often that's what we'd prefer, it's actually a bit uglier here.
def dont_stop(conv):
    while True:
        try:
            subprocess.check_call(['python', 'main.py', str(conv)])
            break
        except subprocess.CalledProcessError:
            # do logging here?
            pass
Since the program you're running is also a Python program, you might consider importing it, rather than running it in a separate interpreter. That might let your dont_stop function directly interact with the main.py code, such as catching and logging exceptions. The details of that are much too dependent on the design of main.py and what it's supposed to be doing though, so I'm not going to show any suggested code for this approach.
I am programming in Python, implementing a shell on Linux. I am trying to run standard Unix commands using os.execvp(). I need to keep asking the user for commands, so I have used an infinite while loop. However, the infinite while loop doesn't work. I have tried searching online, but there isn't much available for Python. Any help would be appreciated. Thanks
This is the code I have written so far:
import os
import shlex

def word_list(line):
    """Break the line into shell words."""
    lexer = shlex.shlex(line, posix=True)
    lexer.whitespace_split = False
    lexer.wordchars += '#$+-,./?#^='
    args = list(lexer)
    return args

def main():
    while True:
        line = input('psh>')
        split_line = word_list(line)
        if len(split_line) == 1:
            print(os.execvp(split_line[0], [" "]))
        else:
            print(os.execvp(split_line[0], split_line))

if __name__ == "__main__":
    main()
So when I run this and enter "ls", I get the output "HelloWorld.py" (which is correct) followed by "Process finished with exit code 0". However, I don't get the "psh>" prompt waiting for the next command. No exceptions are thrown when I run this code.
Your code does not work because it uses os.execvp. os.execvp completely replaces the current process image with the program being executed; your running process becomes the ls.
To execute a subprocess use the aptly named subprocess module.
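A minimal sketch of the question's loop using subprocess instead (reusing word_list from the question):
import subprocess

def main():
    while True:
        line = input('psh> ')
        split_line = word_list(line)
        if not split_line:
            continue
        # runs the command in a child process and waits for it to finish,
        # so the loop comes back around and prompts again
        subprocess.call(split_line)

if __name__ == "__main__":
    main()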
If the (ill-advised) programming exercise requires you to use os.execvp anyway, then you need to:
# warning, never do this at home!
pid = os.fork()
if not pid:
    os.execvp(cmdline[0], cmdline)  # in child: cmdline is the list of command words
else:
    os.waitpid(pid, 0)              # in parent: wait for the child to finish
os.fork returns twice: it returns the pid of the child in the parent process, and zero in the child process.
If you want it to run like a shell, you are looking for os.fork(). Call it before you call os.execvp() and it will create a child process. os.fork() returns the process id: if it is 0, you are in the child process and can call os.execvp(); otherwise you are in the parent and carry on, which keeps the while loop running. The parent can either wait for the child to complete with os.wait(), or continue without waiting back to the start of the while loop. The pseudo code on page 2 of this link should help: https://www.cs.auckland.ac.nz/courses/compsci340s2c/assignments/A1/A1.pdf
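Putting that together with the question's loop, a hedged sketch (POSIX only, reusing word_list from the question):
import os

def main():
    while True:
        args = word_list(input('psh> '))
        if not args:
            continue
        pid = os.fork()
        if pid == 0:
            # child: replace this process with the requested command
            os.execvp(args[0], args)
        else:
            # parent: wait for the child, then loop back to the prompt
            os.waitpid(pid, 0)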
I'm trying to come up with a workaround for this Python bug when calling subprocess. I figured the way to go is to use os.system in combination with os.waitpid. To test this I wrote the code below. system_call_test.py writes the pid and lots of text to the file f. But calling os.waitpid() always gives me this error: OSError: [Errno 10] No child processes. So I'm having a hard time checking whether this construct works properly. How can I ensure that the script waits for the termination of the other? I'm on Windows XP / Python 2.7.
import os
f = r'D:\temp\called.txt'
s = os.system('C:\Python27\python.exe D:\python_spullen\system_call_test.py')
with open(f, 'r') as f_in:
i = f_in.readline()[-4:]
print i
rr = os.waitpid(int(i),0)
print rr
os.system returns the exit code of the process, so by the time s above is populated the process has already exited, and os.waitpid has nothing to wait on.
system() is a combination of fork() + exec() + waitpid(), so you should not call waitpid() again yourself.
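In other words, by the time os.system returns the child has already exited; checking the returned status is all that's left to do, e.g.:
import os

status = os.system(r'C:\Python27\python.exe D:\python_spullen\system_call_test.py')
# os.system only returns after the child has exited,
# so there is no pid left to wait on
if status != 0:
    print('system_call_test.py exited with status %d' % status)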
I've always been a heavy user of Notepad2, as it is fast, feature-rich, and supports syntax highlighting. Recently I've been using it for Python.
My problem: when I finish editing a Python source file and try to launch it, the console window closes before I can see the output. Is there any way to make the results wait so that I can read them, short of using an input() or time-delay function? Otherwise I'd have to use IDLE, since its output window stays open for you to read.
(My apologies if this question is a silly one, but I'm very new at Python and programming in general.)
If you don't want to use raw_input() or input() you could log your output (stdout, stderr) to a file or files.
You could either use the logging module, or just redirect sys.stdout and sys.stderr.
I would suggest using a combination of the logging and traceback modules if you want to log errors with their stack trace.
Something like this maybe:
import logging, traceback

logging.basicConfig(filename=r'C:\Temp\log.txt', level=logging.DEBUG)

try:
    # do some stuff
    logging.debug('I did some stuff!')
except SomeException:
    logging.error(traceback.format_exc())
Here's an example of redirecting stdout and stderr:
import sys

if __name__ == '__main__':
    save_out = sys.stdout  # save the original stdout so you can put it back later
    out_file = open(r'C:\Temp\out.txt', 'w')
    sys.stdout = out_file

    save_err = sys.stderr
    err_file = open(r'C:\Temp\err.txt', 'w')
    sys.stderr = err_file

    main()  # call your main function

    sys.stdout = save_out  # set stdout back to its original object
    sys.stderr = save_err
    out_file.close()
    err_file.close()
I'm going to point out that this is not the easiest or most straightforward way to go.
This is a "problem" with Notepad2, not Python itself.
Unless you want to use input()/sleep (or any other blocking function) in your scripts, I think you have to turn to the settings in Notepad2 and see what that has to offer.
You could start it in the command window, e.g.:
c:\tmp\python>main.py
Adding raw_input() (or input() in Python 3) at the end of your script will let you freeze it until Enter is pressed, but it's not a good thing to do.
You can add a call to raw_input() to the end of your script in order to make it wait until you press Enter.
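For example, as the very last line of the script:
raw_input('Press Enter to exit...')  # keeps the console window open until Enter is pressed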
I'm writing an IRC bot in Python; due to its alpha nature, it will likely hit unexpected errors and exit.
What techniques can I use to make the program run again?
You can use sys.exit() to indicate that the program exited abnormally (generally, 1 is returned in case of error).
Your Python script could look something like this:
import sys

def main():
    # ...
    pass

if __name__ == '__main__':
    try:
        main()
    except Exception as e:
        print >> sys.stderr, e
        sys.exit(1)
    else:
        sys.exit()
You could call main() again in case of error, but the program might not be in a state where it can work correctly again.
It may be safer to launch the program in a new process instead.
So you could write a script which invokes the Python script, gets its return value when it finishes, and relaunches it if the return value is different from 0 (which is what sys.exit() uses as return value by default).
This may look something like this:
import subprocess

command = 'thescript'
args = ['arg1', 'arg2']

while True:
    ret_code = subprocess.call([command] + args)
    if ret_code == 0:
        break
You can create a wrapper using subprocess (http://docs.python.org/library/subprocess.html) which will spawn your application as a child process and track its execution.
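A hedged sketch of such a wrapper (the script name bot.py is an assumption):
import subprocess

# spawn the bot as a child process and keep an eye on it
while True:
    child = subprocess.Popen(['python', 'bot.py'])  # bot.py is a placeholder name
    ret = child.wait()      # block until the child exits
    if ret == 0:
        break               # clean exit, stop the wrapper
    # non-zero exit: fall through and spawn a fresh instance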
The easiest way is to catch errors, and when you do catch them, close the old instance of the program and open a new one.
Note that this will not always work (e.g. in cases where the program stops working without throwing an error).
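A minimal sketch of that catch-and-restart idea, assuming the bot's entry point is a hypothetical run_bot() function:
while True:
    try:
        run_bot()   # the bot's main loop (hypothetical entry point)
        break       # returned normally, stop retrying
    except Exception as exc:
        print('bot crashed: %s, restarting' % exc)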