Python Script Calling Make and Other Utilities

I have a python script that invokes the following command:
# make
After make, it also invokes three other programs. Is there a standard way of telling whether the make command was successful or not? Right now the script continues whether make succeeds or fails; I want to raise an error when make was not possible.
Can anyone give me direction with this?

The return value of the Popen object's poll() and wait() methods is the return code of the process. Check whether it's non-zero.

Look at the exit code of make. If you are using the Python 2 commands module (removed in Python 3; use subprocess instead), you can get the status code easily: 0 means success, non-zero means some problem.

import os

status = os.system("make")
if status == 0:
    print("make succeeded")
else:
    raise RuntimeError("make failed with status %d" % status)

Use subprocess.check_call(). That way you don't have to check the return code yourself - an Exception will be thrown if the return code was non-zero.
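A minimal sketch of that approach (the helper name `run_or_raise` is mine, not part of subprocess; substitute the real make invocation):

```python
import subprocess

def run_or_raise(cmd):
    """Run cmd and raise CalledProcessError if it exits non-zero."""
    subprocess.check_call(cmd)

# In the script from the question, something like
#   run_or_raise(["make"])
# would stop execution before the three follow-up programs run.
```

The exception carries the exit status in its `returncode` attribute, so you can still inspect it if you catch the error.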

Related

Is it possible to automatically re-run a Python script each time it returns an error?

I want to automatically re-run my Python code when it returns an error. Is that possible, and how can I do it?
I have a script that I have to run again each time it fails. The error does not affect the result, so re-running is not a problem.
When you hit an error, the program will typically halt.
That's why error handlers exist: to keep that from happening.
Look at some already-answered examples here.
So, let's say you want the user to input integer values without the program halting: you could use a loop and break out of it once the entered value is an integer (i.e. no error is thrown):
while True:
    try:
        i = int(input("Integer: "))
        break
    except ValueError:
        print("Error")
You get the idea.
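The same pattern scales up to re-running a whole job. A sketch (the function name and attempt limit are my own choices, not from the question):

```python
def run_with_retries(work, max_attempts=3):
    """Call work() repeatedly until it succeeds or attempts run out."""
    for attempt in range(1, max_attempts + 1):
        try:
            return work()
        except Exception as exc:
            print("Attempt %d failed: %s" % (attempt, exc))
    raise RuntimeError("All %d attempts failed" % max_attempts)
```

Wrap the body of your script in a function and pass it in; the loop re-runs it on any error and only gives up after the limit.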

How to immediately restart the test in case of a crash?

I have a test that may crash if the server is currently unavailable.
Can you tell me how I can immediately restart the same test if it crashes (without waiting for the other tests to complete)? And have it recognized as really failed only if it fails a second time.
I saw a solution using the pytest-rerunfailures library, but it has to be installed separately, which doesn't quite suit me.
I can also do this with a try-except construction, but it seems to me there should be a more convenient solution.
With try-except, my solution looks like this:
def test_first():
    try:
        assert 1 == 0  # The main part of the function
    except AssertionError:
        assert 1 == 0  # In case of failure, just repeat the same code

Watch Log for "Success" or "Failed" string using Python

I'm trying to do everything I would have previously used Bash for in Python, in order to finally learn the language. However, this problem has me stumped, and I haven't been able to find any solutions fitting my use-case.
There is an element of trying to run before I can walk here, though, so I'm looking for some direction.
Here's the issue:
I have a Python script that starts a separate program that creates and writes to a log file.
I want to watch that log file and print "Successful Run" if the script detects the "Success" string in the log, and "Failed Run" if the "Failed" string is found instead. The underlying process generally takes about 10 seconds to get to the stage where it writes "Success" or "Failed" to the log file. The two strings never appear at the same time; it's either a success or a failure, never both.
I've been attempting to do this with a while loop, so I can continue to watch the log file until the string appears, and then exit when it does. I have got it working for just one string, but I'm unsure how to accommodate the other string.
Here's the code I'm running.
import sys

log_path = "test.log"
success = "Success"
failure = "Failed"

with open(log_path) as log:
    while success != True:
        if success in log.read():
            print("Process Successfully Completed")
            sys.exit()
Thanks to the pointers above from alaniwi and David, I've actually managed to get it to work, using the following code. So I must have been quite close originally.
I've wrapped it all in a while True, put the log.read() into a variable, and added an elif. Definitely interested in any pointers on whether this is the most Pythonic way to do it though? So please critique if need be.
while True:
    with open(log_path) as log:
        read_log = log.read()
    if success in read_log:
        print("Process Successfully Completed")
        sys.exit()
    elif failure in read_log:
        print("Failed")
        sys.exit()
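One possible refinement (a sketch; the function name, poll interval, and timeout are illustrative, not from the question): factor the loop into a function that sleeps between reads instead of spinning, and returns the outcome rather than calling sys.exit(), which makes it easier to reuse and test:

```python
import time

def watch_log(log_path, success="Success", failure="Failed",
              interval=0.5, timeout=30):
    """Poll log_path until success or failure appears; return True/False.

    Raises TimeoutError if neither string shows up within timeout seconds.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        with open(log_path) as log:
            contents = log.read()
        if success in contents:
            return True
        if failure in contents:
            return False
        time.sleep(interval)
    raise TimeoutError("Neither marker appeared in %s" % log_path)
```

The caller can then decide what to print, or how to exit, based on the boolean.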

Python Unusual error with multiline code in IDLE Shell

I was testing some code on IDLE for Python which I haven't used in a while and stumbled on an unusual error.
I was attempting to run this simple code:
for i in range(10):
    print(i)
print('Done')
I recall that the shell works on a line by line basis, so here is what I did first:
>>> for i in range(10):
print(i)
print('Done')
This resulted in an indentation error (IDLE's error dialog, not shown here).
I tried another way, as it might be that the next statement needed to be at the start perhaps, like this:
>>> for i in range(10):
    print(i)
print('Done')
But this, oddly, gave a syntax error.
The way IDLE handles this seems quite odd to me.
Take Note:
I am actually testing a much more complex program and didn't want to create a small Python file for a short test. After all, isn't IDLE's shell meant for short tests anyway?
Why is multi-line coding causing this issue? Thanks.
Just hit Return once or twice after the print(i), until you get the >>> prompt again. Then you can type the print('Done'). What's going on is that Python is waiting for you to tell it that you're done working inside that for block, and you do that by hitting Return.
(You'll see, though, that the for loop is executed right away.)

Executing commands at breakpoint in gdb via python interface

Not a duplicate of this question, as I'm working through the python interface to gdb.
This one is similar but does not have an answer.
I'm extending gdb.Breakpoint in Python so that it writes certain registers to a file and then jumps to an address: at 0x4021ee, I want to write stuff to a file, then jump to 0x4021f3.
However, nothing in command is ever getting executed.
import gdb

class DebugPrintingBreakpoint(gdb.Breakpoint):
    def __init__(self, spec, command):
        super(DebugPrintingBreakpoint, self).__init__(spec, gdb.BP_BREAKPOINT, internal=False)
        self.command = command

    def stop(self):
        with open('tracer', 'a') as f:
            # parse_and_eval returns a gdb.Value; convert it before chr()
            f.write(chr(int(gdb.parse_and_eval("$rbx")) ^ 0x71))
        return False

gdb.execute("start")
DebugPrintingBreakpoint("*0x4021ee", "jump *0x4021f3")
gdb.execute("continue")
If I explicitly add gdb.execute(self.command) to the end of stop(), I get Python Exception <class 'gdb.error'>: Cannot execute this command while the selected thread is running.
Anyone have a working example of command lists with breakpoints in python gdb?
A couple options to try:
Use gdb.post_event from stop() to run the desired command later. I believe you'll need to return True from your function then call continue from your event.
Create a normal breakpoint and listen to events.stop to check if your breakpoint was hit.
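The first option might look roughly like this (an untested sketch that only runs inside gdb; the class name `JumpBreakpoint` is mine, and the behaviour of jump from a posted event should be verified against your gdb version):

```python
import gdb

class JumpBreakpoint(gdb.Breakpoint):
    """Breakpoint that defers a command until gdb has fully stopped."""

    def __init__(self, spec, command):
        super(JumpBreakpoint, self).__init__(spec, gdb.BP_BREAKPOINT, internal=False)
        self.command = command

    def stop(self):
        # Schedule the command to run once the stop has been reported;
        # executing it directly here raises "Cannot execute this command
        # while the selected thread is running".
        gdb.post_event(lambda: gdb.execute(self.command))
        return True  # actually stop, so the posted event finds a stopped inferior

JumpBreakpoint("*0x4021ee", "jump *0x4021f3")
```

Here the jump itself resumes execution, playing the role of the continue mentioned above.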
The Breakpoint.stop method is called when, in gdb terms, the inferior is still "executing". Partly this is a bookkeeping oddity -- of course the inferior isn't really executing, it is stopped while gdb does a bit of breakpoint-related processing. Internally it is more like gdb hasn't yet decided to report the stop to other interested parties inside gdb. This funny state is what lets stop work so nicely vis-à-vis next and other execution commands.
Some commands in gdb can't be invoked while the inferior is running, like jump, as you've found.
One thing you could try -- I have never tried this and don't know if it would work -- would be to assign to the PC in your stop method. This might do the right thing; but of course you should know that the documentation warns against doing weird stuff like this.
Failing that I think the only approach is to fall back to using commands to attach the jump to the breakpoint. This has the drawback that it will interfere with next.
One final way would be to patch the running code to insert a jump or just a sequence of nops.
