Capture/redirect console output to file - python

So, I believe this issue applies to any Python file that generates logs on the console.
I am trying to capture the output generated while running a behave test (all code is in Python 3.x).
Or, more precisely: I am trying to capture the live console output to a file for a particular function of interest in the Python code.
My aim is to capture the console logs printed (on stdout) the moment execution hits a particular function in the Python code.
I usually run behave data_base_scenarios.feature (without quotes) on Ubuntu 18.04.
I would like to capture the complete output, exactly as it is directed to the console, to a file.
On StackOverflow, after searching for a while, I tried some of the methods described here: Capturing stdout within the same process in Python. I also found this: https://capturer.readthedocs.io/en/latest/
Unfortunately, I don't see anything captured.
I have taken care to set up the behave environment to generate the logs. For example, all of these flags are appropriately and explicitly set to generate output:
context.config.stdout_capture = True, context.config.log_capture = True.
What am I missing in the behave environment?
The behave framework also provides a variable within "context": "context.stdout_capture". Unfortunately, it contains nothing.
In short, behave prints the following on the console:
Captured logging:
INFO:database.system.status: MyDatabase is online.
INFO:database.system.status: MyDatabase is now offline.
INFO:database.system.status: MyDatabase has now initiated.
I just want to dump the above "Captured Logging" console output to a file for analysis.
How can I do it? Please let me know.

So it depends on how you're doing it, but if you're using behave_main, then you can do something like the following:
from behave.__main__ import main as behave_main
args = ['--outfile=/path/to/your/log/directory/output.log']
behave_main(args)
If you're running your tests via the command line, then use the -o, or --outfile, flag and append your log name.
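If you need everything behave writes to stdout (not just the report that --outfile redirects), another option is to wrap the call in contextlib.redirect_stdout. This is only a minimal sketch; the log file name is made up, and the feature file name is taken from the question:
import contextlib
from behave.__main__ import main as behave_main

# capture everything behave_main() prints to stdout into a file
with open("behave_console.log", "w") as log_file:
    with contextlib.redirect_stdout(log_file):
        behave_main(["data_base_scenarios.feature"])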

Related

Automating Configuration commands that require input from the user using the Fabric module

I am currently in the process of developing Python code that connects to a remote Brocade switch using the Fabric module and issues some configuration commands. The problem I am facing is with commands that require input from the user (i.e. yes/no).
I read several posts that advise using Fabric's native settings method as well as wexpect, but none have been successful.
I checked the following links but none were able to help with my code:
how to handle interactive shell in fabric python
How to answer to prompts automatically with python fabric?
Python fabric respond to prompts in output
Below is an example of the command output that needs to be automated:
DS300B_Autobook:admin> cfgsave
You are about to save the Defined zoning configuration. This
action will only save the changes on Defined configuration.
If the update includes changes to one or more traffic isolation
zones, you must issue the 'cfgenable' command for the changes
to take effect.
Do you want to save the Defined zoning configuration only? (yes, y, no, n): [no]
The code that I have written for this is shown below (I tried to make the prompt match the command's output exactly):
with settings(prompts={"DS300B_Autobook:admin> cfgsave\n"
                       "You are about to save the Defined zoning configuration. This\n"
                       "action will only save the changes on Defined configuration.\n"
                       "If the update includes changes to one or more traffic isolation\n"
                       "zones, you must issue the 'cfgenable' command for the changes\n"
                       "to take effect.\n"
                       "Do you want to save the Defined zoning configuration only? (yes, y, no, n): [no] ": "yes"}):
    c.run('cfgsave')
If there is a way to have it display the output of the command to the screen and prompt me to provide the input, that would also be a reasonable solution.
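For what it's worth, here is a minimal sketch of an alternative that is often used when c.run() comes from Fabric 2.x: an invoke Responder watcher that matches only the trailing prompt line rather than the whole banner. The host string is a placeholder and the whole setup is an assumption, not the asker's code:
from fabric import Connection
from invoke import Responder

c = Connection("admin@DS300B_Autobook")  # placeholder host string

# match just the final prompt line, not the whole multi-line banner
save_prompt = Responder(
    pattern=r"Do you want to save the Defined zoning configuration only\?.*\[no\]",
    response="yes\n",
)

c.run("cfgsave", watchers=[save_prompt], pty=True)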

How to send command in separate python window

Searching isn't pulling up anything useful, so perhaps my verbiage is wrong.
I have a Python application that I didn't write which takes user input and performs tasks based on the input. The other script, which I did write, watches the serial traffic for a specific match condition. Both scripts run in different windows. What I want to do is: if I get a match condition in my script, output a command to the other script. Is there a way to do this with Python? I am working in Windows and want to send the output to a different window.
Since you can start the script within your script, you can just follow the instructions in this link: Read from the terminal in Python
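As a rough sketch of that idea (the script name and command below are placeholders, not from the original application): launch the other application as a child process and write commands to its stdin when your watcher sees a match:
import subprocess

# start the other application as a child process with a pipe to its stdin
other_app = subprocess.Popen(
    ["python", "other_app.py"],   # placeholder for the application you didn't write
    stdin=subprocess.PIPE,
    text=True,
)

def send_command(command):
    other_app.stdin.write(command + "\n")  # the other script reads this from its stdin
    other_app.stdin.flush()

send_command("do_the_thing")               # placeholder command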
old answer:
I assume you can modify the code in the application you didn't write. If so, you can tell the code to "print" what it's putting on the window to a file, and your other code could constantly monitor that file.
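A minimal sketch of that file-monitoring idea, assuming the modified application appends its output to a file (the file name and match text below are placeholders):
import time

def follow(path):
    # yield lines as they are appended to the file, like "tail -f"
    with open(path, "r") as f:
        f.seek(0, 2)              # start at the end of the file
        while True:
            line = f.readline()
            if not line:
                time.sleep(0.5)   # nothing new yet
                continue
            yield line.rstrip("\n")

for line in follow("app_output.txt"):
    if "MATCH CONDITION" in line:
        print("match seen - react here (e.g. send a command to the other script)")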

Multiple terminal handling in python

I have a Python application which I want to repurpose as a multi-terminal handler. I want each object to have its own terminal, separated from the rest, each running its own instance, exactly like when I run two or more separate terminals in Linux (/bin/sh or /bin/bash).
Sample (just logic, not code):
first_terminal = terminalInstance()
second_terminal = terminalInstance()
first_result = first_terminal.doSomething("command")
second_result = second_terminal.doSomething("command")
I actually need each terminal to grab stdin & stdout in a virtual environment and control them; this is why they must be separate. Is this possible within Python's range? I've seen a lot of code handling a single terminal, but how do you do it with multiple terminals?
PS: I don't want to include while loops (if possible), since I want scalability from dealing with 2 terminals up to as many as my system can handle. Is it possible to control them by reference, giving each terminal a reference and then calling that object and issuing a command?
The pexpect module (https://pypi.python.org/pypi/pexpect/), among others, allows you to launch programs via a pseudo-tty, which "allows your script to spawn a child application and control it as if a human were typing commands."
You can easily spawn multiple commands, each running in a separate pseudo-tty and represented by a separate object, and you can interact with each object separately. There is a lot of flexibility as to when/how you interact. You can send input to them, and read their output, either blocking or non-blocking, and incorporating timeouts and alternative outputs.
Here's a trivial session example (run bash, have it execute an "ls" command, gather the first line of output):
import pexpect
x = pexpect.spawn("/bin/bash", encoding="utf-8")  # decode the pty output as text
x.sendline("ls")
x.expect("\n")   # end of echoed command
x.expect("\n")   # end of first line of output
print(x.before)  # print first line of output
Note that you'll receive all the output from the terminal, typically including an echoed copy of every character you send to it. If running something like a shell, you might also need to set the shell prompt (or determine the shell prompt in use) and use that in parsing the output (i.e. in finding the end of each command's output).
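As a sketch of the "one object per terminal" idea from the question, pexpect also ships a replwrap helper that hides the prompt handling; the names below mirror the question's pseudo-code:
from pexpect import replwrap

# each call spawns its own bash on its own pseudo-tty
first_terminal = replwrap.bash()
second_terminal = replwrap.bash()

first_result = first_terminal.run_command("ls")
second_result = second_terminal.run_command("pwd")

print(first_result)
print(second_result)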

How to see the previous printed values in Python console

I am printing a set of values to the console, and this is a program which runs for more than an hour. There were runtime warnings in red during the run. However, when I scroll up to see them, they don't appear, as the program is printing new values very fast as it runs.
Is there any way for me to display only the runtime warnings, or to see all of the values printed previously (since they are warnings, they do not stop the program from running)?
I don't know how you are "printing some set of values" in your console, but if you are using the Python logging module and your "warnings" are emitted at the WARN level (all your other output is INFO, DEBUG, etc.), you can set the logger to only output WARN and above (ERROR and CRITICAL).
import logging
logger = logging.getLogger('spam_application')
logger.setLevel(logging.WARN)
See more examples in the logging cookbook.
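Building on that snippet, you could also attach a FileHandler so that WARN-and-above records survive the console scrollback; this is just a sketch, and the file name is made up:
import logging

logger = logging.getLogger('spam_application')
logger.setLevel(logging.WARN)

# also write warnings (and above) to a file for later analysis
file_handler = logging.FileHandler('warnings.log')
file_handler.setLevel(logging.WARN)
file_handler.setFormatter(logging.Formatter('%(asctime)s %(levelname)s %(message)s'))
logger.addHandler(file_handler)

logger.warning("this line also ends up in warnings.log")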
Another option is to increase the scrollback buffer of your terminal (I don't know which one you're using, so I can't give exact instructions) so that it stores more lines of output.
Finally, you could pipe the output to grep (if using a *NIX system), looking for "warning" or similar; since warnings are usually written to stderr, redirect it into the pipe as well:
python your_script.py 2>&1 | grep -i warning

Python fabric put statistics

When I put a file on a remote server (using put()), is there any way I can see the upload information or statistics printed to the stdout file descriptor?
There's no such way according to the documentation. You could however try the project tools.
There's also the option to play with Fabric's local function, but that of course breaks the whole host concept.
There's also no way to make Fabric more verbose than the default (except for debugging). This makes sense, because Fabric doesn't really work with the terminal escape codes needed to delete lines again; displaying transfer statistics would print way too many lines. This would actually be a nice feature - detecting line deletions within Fabric and applying them (just throwing the idea out for a potential pull request).
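As a rough sketch of the local workaround mentioned above (Fabric 1-style API; the host string is a placeholder): shell out to rsync locally so its own progress output reaches your terminal:
from fabric.api import local

def put_with_progress(src, remote_path, host="user@remote-host"):
    # rsync prints per-file progress and transfer statistics to stdout
    local("rsync --progress {} {}:{}".format(src, host, remote_path))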
