I have a Python project with tests spread across different folders and files. I want to write a script which executes all the tests for the project and outputs some results. I want it to be a Ruby script (because I know Ruby better than Python and, for now, I enjoy it more), and my plan is to capture the output from the tests, parse it with Ruby, and print something like "48 tests run in total, all ok" instead of the output from Python.
Long story short: I want a way to get the output of python test_something.py into a variable or a file, with nothing from it appearing on the screen.
Here are my attempts:
tests = Dir.glob("**/test_*")
wd = Dir.pwd
output = ''
tests.each do |test|
  Dir.chdir(File.dirname(test))
  # output += `python #{File.basename(test)}`
  # system("python #{File.basename(test)} >> f.txt")
  Dir.chdir(wd)
end
I tried both of the commented approaches, but both print the result to the terminal anyway: with the first one the output variable stays empty, and with the second one the file is created but is empty as well :(
Any ideas? Thank you very much in advance! :)
The test framework may have sent the results to STDERR. Try using Open3.capture3 to capture standard error as well:
require 'open3'
...
stdout, stderr, status = Open3.capture3(%{python "#{File.basename(test)}"})
and write the standard output and standard error to the destination:
File.write("f.txt", stdout + stderr)
You may check status.success? to verify that the external command was invoked correctly. Note, however, that the test framework may return a non-zero exit code when tests fail; in that case, check stderr for the actual error output.
Use Open3.capture2 as below:
output, _ = Open3.capture2("python #{File.basename(test)}")
To write the output to a file, do as below:
File.write("f.txt", output)
I'm trying to capture the results of my logging so that I can diff them at the end and verify the results are as expected in a Robot test. I've tried adding the following:
stdout=/path/to/file, however that only seems to capture Python print() statements and doesn't pick up anything from my loggers. I was wondering, if I do the following:
Test Case
Start Process python ../Scripts/test.py
How do I get the logs produced by test.py in a separate file?
You can always run the process in a shell and just redirect the output of the command, like so:
Process.Start Process python3 ../Scripts/test.py > ../Scripts/test.log shell=yes alias=test
I would also suggest using ${CURDIR}; that way you can execute your robot file from different locations and it will still work.
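One caveat worth adding, sketched below: Python's logging module writes to sys.stderr by default, so a plain > redirect (or Robot's stdout=/path/to/file) only catches print() output. Either redirect stderr too (e.g. append 2>&1 to the shell command) or, if you can touch test.py, point the logging handler at a file. A minimal sketch of the latter; the file name and logger name are placeholders of mine:

import logging

# By default logging sends records to sys.stderr, which is why a stdout
# redirect never sees them. basicConfig(filename=...) sends them to a file.
logging.basicConfig(filename="test.log", level=logging.DEBUG)
log = logging.getLogger("test")

log.info("this line ends up in test.log, not on stdout")
print("only this line would appear in a stdout=/path/to/file redirect")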
I am trying to execute this bash command in Python and evaluate its output; based on the output, I want to show my own defined messages.
The bash command I am trying to execute from Python is:
kubectl get pods --field-selector=status.phase=Failed -n kube-system
Everything works well; the only problem I am having is this: the command outputs No resources found, meaning there are no resources matching the given criteria (the status.phase=Failed selector), and that is fine, but it prints straight to the terminal. What I want is to print my own defined output when the actual output is No resources found, but I can't do that, because by then the message has already been printed to the terminal. I can't use the status code to check either: it is always 0 whether resources are found or not (which is correct, since the command executed successfully). Is there a way to capture the output while executing the bash command, and to print my own defined output based on a condition on it?
Here is my code snippet:
import subprocess

def subprocess_execute_arr(self):
    output = subprocess.call(self)  # note: call() returns the exit code, not the text
    return output

cmd_failed = ["kubectl", "get", "pods", "--field-selector=status.phase=Failed", "-n", "kube-system"]
failed = execute.subprocess_execute_arr(cmd_failed)  # this is where it prints the output in the terminal
output:
No resources found.
PS: The output here is not an error; the command executed correctly, but I don't want this output printed.
Any idea how to solve this?
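A minimal sketch of one way to do it, assuming Python 3.7+ (for subprocess.run's capture_output flag). subprocess.call only returns the exit code, which is why failed is always 0; capturing the streams keeps the text off the terminal so you can test it. Whether kubectl prints "No resources found" on stdout or stderr can vary by version, so both streams are checked here:

import subprocess

cmd_failed = ["kubectl", "get", "pods",
              "--field-selector=status.phase=Failed", "-n", "kube-system"]

# capture_output=True keeps both streams out of the terminal entirely.
result = subprocess.run(cmd_failed, capture_output=True, text=True)
combined = (result.stdout + result.stderr).strip()

if "No resources found" in combined:
    print("my own defined output: no failed pods in kube-system")
else:
    print(combined)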
I am a student studying C right now, and I have been trying to write test programs for my peers.
I decided to write my test program in Python, knowing that all of the students have the same IDE and interpreter version.
I decided to work with subprocess.Popen.
I unpack a tar file that my C program is stored in using a tar command, then I compile the needed files using a gcc command, and then I run tests using input stored in text files, comparing the user's output to the expected outputs in corresponding text files.
When I run my C executable through the cmd, it works flawlessly and no errors appear.
But when I try to run it through subprocess.Popen, I get a cygwin error in addition to the expected output, causing my testing program to report a test failure.
I use this function, supplying it with a list of strings as the arguments for the command.
import subprocess

def run_cmd_command(arguments_list):
    """
    Runs the given command through the command line and returns information
    about the command.
    :param arguments_list: a list representing the command to run
    :return: a tuple containing the return code, the output of the command and
        all the errors.
    """
    process = subprocess.Popen(arguments_list, stdin=subprocess.PIPE,
                               stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                               universal_newlines=True)
    output, errors = process.communicate()
    return process.returncode, output, errors
The test program works properly when supplied with valid inputs.
On the other hand, when the inputs are invalid, the C program is supposed to print the string "Invalid Input\n" to stderr, but what it prints is:
Invalid Input
0 [main] TreeAnalyzer 1780 cygwin_exception::open_stackdumpfile: Dumping stack trace to TreeAnalyzer.exe.stackdump
I would like to supply you with more information, but I'm not sure what to supply, since I don't understand where the problem may be coming from.
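For what it's worth, that stack-dump line is printed by the cygwin runtime when the executable itself crashes, so a hedged workaround while investigating is to filter that chatter out of stderr and use the return code to tell a crash from a clean Invalid Input exit. A minimal sketch (the filtering rule is my own assumption, not anything the cygwin docs prescribe):

import subprocess

def run_cmd_command_filtered(arguments_list):
    process = subprocess.Popen(arguments_list, stdin=subprocess.PIPE,
                               stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                               universal_newlines=True)
    output, errors = process.communicate()
    # Drop the cygwin stack-dump chatter, keep the program's own messages.
    errors = "\n".join(line for line in errors.splitlines()
                       if "stackdump" not in line)
    # On POSIX-like systems a negative return code means the process died
    # from a signal, i.e. it crashed rather than exiting normally.
    return process.returncode, output, errors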
My file structure looks like this:
runner.py
scripts/
  something_a/
    main.py
    other_file.py
  something_b/
    main.py
    anythingelse.py
  something_c/
    main.py
  ...
runner.py should look at all the folders in scripts/ and run the main.py located in each one.
Right now I'm achieving this through subprocess.check_output. It works, but some of these scripts take a long time to run and I don't get to see any progress; everything is printed only after the process has finished.
I'm hoping to find a solution that allows two things to be done somewhat easily:
1) Stream the output instead of getting it all at the end
2) Not prohibit running multiple scripts at once
Is this possible? A lot of the solutions I've seen for running a Python script from another require knowing the other script's name/location. I can also enforce that all the main.py files have a specific function, if that helps.
You could use Popen to loop through each script and write its output to its own log file while it runs. Then you could read from these files in real time, while each one is being populated. :)
How you would want to translate the output into a more readable format is a little bit trickier. You could create another script which reads these log files and decides how that information should be presented back in an understandable manner.
""" Use the same command as you would do for check_output """
cmd = ''
for filename in scriptList:
log = filename + ".log"
with io.open(filename, mode=log) as out:
subprocess.Popen(cmd, stdout=out, stderr=out)
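If you want the progress live in the terminal as well, here is a minimal sketch going one step further, assuming Python 3.7+ and the folder layout from the question (the tag format is my own choice): start every main.py with Popen and read each one's stdout line by line from a thread, so slow scripts don't block the others and several can run at once.

import subprocess
import sys
import threading
from pathlib import Path

def stream(name, proc):
    # Echo each line as soon as it arrives, tagged with the script's folder name.
    for line in proc.stdout:
        sys.stdout.write("[%s] %s" % (name, line))
    proc.wait()

threads = []
for main in Path("scripts").glob("*/main.py"):
    proc = subprocess.Popen([sys.executable, str(main)],
                            stdout=subprocess.PIPE,
                            stderr=subprocess.STDOUT,
                            text=True)
    t = threading.Thread(target=stream, args=(main.parent.name, proc))
    t.start()
    threads.append(t)

for t in threads:
    t.join()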
I am using the following call for executing the 'aspell' command on some strings in Python:
r, w, e = popen2.popen3("echo " + str(m[i]) + " | aspell -l")
I want to test the success of the call by looking at the stdout file object r: if there is no output, the command was successful.
What is the best way to test that in Python?
Thanks in advance.
Best is to use the subprocess module of the standard Python library; popen2 is old and not recommended.
Anyway, in your code, if r.read(1): is a fast way to test whether there's any content in r (if you don't care what that content might specifically be).
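For instance, a minimal sketch of the subprocess equivalent, feeding the word on stdin instead of through echo (which also avoids shell-quoting problems with str(m[i])); it assumes Python 3.7+ and aspell's list mode, where empty output means nothing was misspelled:

import subprocess

word = "exampel"  # placeholder for str(m[i])

# "aspell list" reads text on stdin and prints only the misspelled words.
result = subprocess.run(["aspell", "list"], input=word,
                        capture_output=True, text=True)
if not result.stdout:
    print("spelled correctly")
else:
    print("misspelled:", result.stdout.split())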
Why don't you use aspell -a?
You could use subprocess as indicated by Alex, but keep the pipe open. Follow the directions for using the pipe API of aspell, and it should be pretty efficient.
The upside is that you won't have to check for an empty line. You can always read from stdout, knowing that you will get a response. This takes care of a lot of problematic race conditions.
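A minimal sketch of that pipe-mode conversation, assuming Python 3.7+. Per aspell's documented protocol, each input line is answered by one result line per word plus a terminating blank line, and prefixing the data line with "^" keeps it from being read as a command:

import subprocess

aspell = subprocess.Popen(["aspell", "-a"], stdin=subprocess.PIPE,
                          stdout=subprocess.PIPE, text=True, bufsize=1)
aspell.stdout.readline()              # consume the version banner line

def check(word):
    # One word per call; "^" escapes the data line, per the -a protocol.
    aspell.stdin.write("^" + word + "\n")
    aspell.stdin.flush()
    result = aspell.stdout.readline().strip()
    aspell.stdout.readline()          # consume the blank line ending the batch
    return result.startswith("*")     # "*" = correct; "&"/"#" = misspelled

print(check("hello"))    # True
print(check("helllo"))   # False

aspell.stdin.close()
aspell.wait()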