How to see the previous printed values in the Python console

I am printing a set of values to the console from a program that runs for more than an hour. There were run-time warnings shown in red during the run; however, when I scroll up to look at them, they are gone, because the program is running very fast and keeps displaying new values.
Is there any way for me to display only the run-time warnings, or to see all the values printed previously? (Since they are warnings, they do not stop the program from running.)

I don't know how you are "printing some set of values" in your console, but if you are using the Python logging module and your warnings are logged at the WARNING level (with everything else at INFO, DEBUG, etc.), you can set the logger to only output messages at WARNING and above (ERROR and CRITICAL):
import logging
logger = logging.getLogger('spam_application')
logger.setLevel(logging.WARN)  # suppress everything below WARNING
See more examples in the logging cookbook.
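As a fuller sketch (the logger name and messages here are placeholders, not from your program), attaching a console handler makes the filtering visible:

import logging

logger = logging.getLogger('spam_application')  # placeholder name
logger.setLevel(logging.WARN)                   # drop INFO and DEBUG

handler = logging.StreamHandler()               # writes to stderr
handler.setFormatter(logging.Formatter('%(levelname)s:%(name)s:%(message)s'))
logger.addHandler(handler)

logger.info('progress value 42')    # suppressed: below WARNING
logger.warning('something is off')  # shown: WARNING and above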
Another option is to increase the scrollback history of your terminal (I don't know which one you're using, so I can't give exact instructions) so that it stores more lines of output.
Finally, you could pipe the output through grep (on a *NIX system), looking for "warning" or similar (warnings are usually written to stderr, hence the 2>&1):
python your_script.py 2>&1 | grep -i warning


Capture/redirect console output to file

So, I believe this issue applies to any Python program that writes logs to the console.
I am trying to capture the output generated while running a behave test (all code is in Python 3.x).
Or more precisely: I am trying to capture the live console output to a file for a particular function of interest in the Python code.
My aim is to capture the console logs printed (on stdout) the moment execution hits that function.
I usually run behave data_base_scenarios.feature, without quotes, on Ubuntu 18.04.
I would like to capture the complete output, exactly as it is directed to the console, to a file.
After searching StackOverflow for a while, I tried some of the methods described here: Capturing stdout within the same process in Python. I also found this: https://capturer.readthedocs.io/en/latest/
Unfortunately, I don't see anything captured.
I have taken care to set up the behave environment to generate the logs. For example, these flags are all explicitly set to generate output:
context.config.stdout_capture = True, context.config.log_capture = True.
What am I missing in the behave environment?
The behave framework also provides a variable within "context": context.stdout_capture. Unfortunately, it contains nothing.
In short, behave prints this on the console:
Captured logging:
INFO:database.system.status: MyDatabase is online.
INFO:database.system.status: MyDatabase is now offline.
INFO:database.system.status: MyDatabase has now initiated.
I just want to dump the above "Captured logging" console output to a file for analysis.
How can I do it? Please let me know.
So it depends on how you're running it, but if you're using behave_main, then you can do something like the following:
from behave.__main__ import main as behave_main
args = ['--outfile=/path/to/your/log/directory/output.log']
behave_main(args)
If you're running your tests via the command line, then use the -o (or --outfile) flag and pass the name of your log file.
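For example, with the feature file from the question (the log path is just a placeholder):
behave -o /path/to/your/log/directory/output.log data_base_scenarios.feature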

Logging in Python. Can someone explain?

This is directly from the Python 2.7.13 documentation.
It says:
import logging
logging.warning('Watch out!') # will print a message to the console
logging.info('I told you so') # will not print anything.
If you type these lines into a script and run it, you will see:
WARNING:root:Watch out!
The INFO message doesn't appear because the default level is WARNING.
I don't really understand why the INFO message does not appear. What does it mean that the default level is WARNING?
The default log level is the minimum severity a message must have to appear in your terminal.
That is to say, if your default log level is WARNING, you will see messages at level WARNING and above: ERROR and CRITICAL messages also appear, but INFO and DEBUG do not.
If you lowered the level, you would see the other messages too. That isn't done very often, since logs get more verbose and noisier, but it is useful when you need to do some deep debugging.
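A minimal sketch of lowering the level (note that basicConfig only takes effect if it is called before the first logging call):

import logging

logging.basicConfig(level=logging.DEBUG)  # show DEBUG and everything above

logging.warning('Watch out!')  # prints WARNING:root:Watch out!
logging.info('I told you so')  # now prints INFO:root:I told you so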

Where to run a Python file on a remote Debian server

I have written a Python script that is designed to run forever. I load the script into a folder that I made on my remote server, which is running Debian Wheezy 7.0. The code runs, but it only runs for three to four hours and then just stops, and I have no log information about why it stopped. When I come back and check the running processes, it's not there. Is this a problem with where I am running the Python file from? The script simply has a while loop and writes to an external CSV file. The file runs from /var/pythonscript, a custom folder that I made. I receive no error, and the only way I know how long the code ran is the timestamp on the CSV file. I run the .py file by SSHing to the server and running sudo python scriptname. I would also like to know the best place in the Debian directory tree to run Python files from, and any limitations concerning that. Any help would be much appreciated.
Basically you're stuffed.
Your problem is:
You have a script which produces no error messages, no logging, and no other diagnostic information beyond a single timestamp on an output file.
Something has gone wrong.
In this case, you have no means of finding out what the issue was. I suggest one of the following:
Add logging or diagnostic information to the script.
Contact the developer of the script and get them to find a way of determining the issue.
If you can do neither 1 nor 2, delete the evidently worthless script and consider an alternative way of doing your task.
Now, if the script does have logging or other diagnostic data, but you delete or throw them away, then that's your problem: stop discarding this useful information.
EDIT (following comment).
At a basic level, you should print to either stdout, or to stderr, that alone will give you a huge amount of information. Just things like, "Discovered 314 records, we need to save 240 records", "Opened file name X.csv, Open file succeeded (or failed, as the case may be)", "Error: whatever", "Saved 2315 records to CSV". You should be able to determine if those numbers make sense. (There were 314 records, but it determined 240 of them should be saved, yet it saved 2315? What went wrong!? Time for more logging or investigation!)
Ideally, though, you should take a look at the logging module in Python, as that will let you log stack traces effectively and show line numbers, the function you're logging from, and the like. Using the logging module allows you to specify logging levels (e.g. DEBUG, INFO, WARN, ERROR) and to filter them or redirect them to a file or the console as you choose, without changing the logging statements themselves.
When you have a problem (a crash, or whatever), you'll be able to identify roughly where the error occurred, giving you the information either to increase the logging in that area or to reason out what must have happened (though you should then add enough logging that the log tells you what happened clearly and unambiguously).
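As a rough sketch of that advice (the logger name, file path, and messages are placeholders, not from the original script): the format string includes the function name and line number, and logger.exception records the full stack trace when something blows up.

import logging

log = logging.getLogger('myscript')  # placeholder name
log.setLevel(logging.DEBUG)

fmt = logging.Formatter('%(asctime)s %(levelname)s %(funcName)s:%(lineno)d %(message)s')
for handler in (logging.StreamHandler(),                            # console
                logging.FileHandler('/var/pythonscript/run.log')):  # placeholder path
    handler.setFormatter(fmt)
    log.addHandler(handler)

def save_records(records):
    log.info('Discovered %d records', len(records))
    try:
        # ... write the records to the CSV file here ...
        log.info('Saved %d records to CSV', len(records))
    except Exception:
        log.exception('Saving records failed')  # logs the full stack trace
        raise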

Realtime output redirection

Currently I am redirecting a script to a log file with the following command:
python /usr/home/scripts/myscript.py 2>&1 | tee /usr/home/logs/mylogfile.log
This seems to work, but it does not write to the file as soon as there is a print statement; rather, it waits until it has a group of lines to flush. I want the console and the log file to be written to simultaneously. How can this be done with output redirection? Note that running the script on the console alone prints everything when it should, whereas doing a tail -f on the logfile is not smooth, since it receives about 50 lines at a time. Any suggestions?
It sounds like stdio buffering is actually the culprit: Python block-buffers stdout when it is connected to a pipe rather than a terminal, which is why you see output as expected on the console but in bursts when piping through tee.
You could look at this post for potential solutions to turn that buffering off (for example, running the script with python -u): https://unix.stackexchange.com/questions/25372/turn-off-buffering-in-pipe
But I would recommend doing it entirely within Python, so you have more direct control: instead of printing to stdout, use the logging module.
This gives you additional flexibility: multiple logging levels, the ability to attach several destinations to one logger centrally (i.e. stdout and a file, one which can rotate by size if you'd like, with logging.handlers.RotatingFileHandler), and you aren't subject to external buffering of the pipe.
More info: https://docs.python.org/2/howto/logging.html
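A minimal sketch of that approach, reusing the log path from the question (the logger name and rotation sizes are arbitrary): each record goes to the console and to a size-rotated file at the same time, and the handlers flush per record, so tail -f stays smooth.

import logging
from logging.handlers import RotatingFileHandler

log = logging.getLogger('myscript')  # placeholder name
log.setLevel(logging.INFO)

fmt = logging.Formatter('%(asctime)s %(levelname)s %(message)s')

console = logging.StreamHandler()  # to the terminal
console.setFormatter(fmt)
log.addHandler(console)

# Rotate at roughly 1 MB, keeping 5 old files; the path is from the question.
rotating = RotatingFileHandler('/usr/home/logs/mylogfile.log',
                               maxBytes=1000000, backupCount=5)
rotating.setFormatter(fmt)
log.addHandler(rotating)

log.info('appears on the console and in the log file immediately')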

How to resume iPython 0.13.2 session with logging

I've been using iPython (0.13.2) more frequently lately, and the logging seems like a great feature -- if I could get it to work right.
Currently, I'm able to start logging and specify a log file either with ipython --logfile ~/path/fake.log, or in the middle of an iPython session with the magic command %logstart ~/path/fake.log.
However, I can't seem to resume the session from the logfile, which seems to defeat the purpose in part. I've scoured Google and SO, but none of the recommended solutions here at SO or in the docs seem to work quite right.
I have tried:
from Bash
ipython -log ~/path/fake.log (source, result: [TerminalIPythonApp] Unrecognized flag: '-log')
ipython -logplay ~/path/fake.log (source and numerous others, result: [TerminalIPythonApp] Unrecognized flag: '-logplay')
ipython --logfile=~/path/fake.log (source, result: new log started, variables from previous session undefined)
from iPython
%logstart ~/path/fake.log append (source, result: old log continued but not loaded, variables from previous session undefined)
Two that are partially working (in that they try to import the logfile) but don't seem to be intended for this purpose are:
from bash: ipython -i ~/path/fake.log (source, result: if there were no errors in the session, it imports and works. If there were any errors, it is not imported and the variables are still unavailable. Logging is not resumed.)
from ipython: %run ~/path/fake.log (no source, just guessed and tried. Result: same as above. It runs the file if there were no errors, and the variables are GTG. If there were errors, it does not work. It does not resume logging.)
Is there any way in iPython 0.13.2 to resume a session that effectively "starts where you left off"? Thanks for any help you can provide.
All of these save/restore mechanisms work by saving the interactions to a .py file and then running that file during the restore. If an error such as an undefined variable occurred in the original session, replaying it raises the same Python error and the restore halts there, but it does restore the values stored up to the error condition.
To avoid storing error conditions, use the suggestion from the chosen answer of How to save a Python interactive session?:
%save my_session_name 1-4 6 9
This saves the commands in In[1] through In[4], skips In[5], saves In[6], skips In[7] and In[8], and saves In[9]. This way you avoid the offending interactions.
Restore the session later:
%run my_session_name.py
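Putting the whole round trip together (paths and session name as above; note that restoring does not restart logging by itself, so %logstart ... append is issued again afterwards):

%save my_session_name 1-4 6 9       # keep only the error-free inputs
# later, in a fresh IPython session:
%run my_session_name.py             # replay them to restore the variables
%logstart ~/path/fake.log append    # resume logging to the same file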
