I've been using IPython (0.13.2) more frequently lately, and logging seems like a great feature -- if I could get it to work right.
Currently, I'm able to start and specify a log file either through ipython --logfile ~/path/fake.log, or in the middle of an IPython session with the magic command %logstart ~/path/fake.log.
However, I can't seem to resume a session from the logfile, which partly defeats the purpose. I've scoured Google and SO, but none of the solutions recommended here on SO or in the docs seem to work quite right.
I have tried:
from Bash
ipython -log ~/path/fake.log (source, result: [TerminalIPythonApp] Unrecognized flag: '-log')
ipython -logplay ~/path/fake.log (source and numerous others, result: [TerminalIPythonApp] Unrecognized flag: '-logplay')
ipython --logfile=~/path/fake.log (source, result: new log started, variables from previous session undefined)
from iPython
%logstart ~/path/fake.log append (source, result: old log continued but not loaded, variables from previous session undefined)
Two that partially work (in that they try to import the logfile) but don't seem to be intended for this purpose are:
from bash: ipython -i ~/path/fake.log (source; result: if the session contained no errors, the log imports and the variables are available. If there were any errors, the import fails and the variables remain unavailable. Logging is not resumed.)
from IPython: %run ~/path/fake.log (no source, just guessed and tried. Result: same as above -- runs the file and restores the variables if the session contained no errors; fails otherwise. Does not resume logging.)
Is there any way in IPython 0.13.2 to resume a session that effectively "starts where you left off"? Thanks for any help you can provide.
All of these save/restore approaches work by saving the interactions as a .py file and then running that file during restore. If the session contained an error such as an undefined variable, replay raises the same error and halts there, but it does restore the values stored up to the error condition.
To avoid replaying the error conditions, use the suggestion from the accepted answer of How to save a Python interactive session? :
%save my_session_name 1-4 6 9
This saves the commands in In[1] through In[4], skips In[5], saves In[6], skips In[7] and In[8], and saves In[9]. That way you avoid the offending interactions.
Restore the session later:
%run my_session_name.py
I'm currently doing some work on a server (Ubuntu) without admin rights or contact with the administrator. When using help(command) in the Python command line, I get an error.
Here's an example:
>>> help(someCommand)
/bin/sh: most: command not found
So, this error indicates that the most pager is not currently installed. However, the server I'm working on does have the "more" and "less" pagers installed. How can I change the default pager configuration for this Python utility?
This one is annoyingly difficult to research, but I think I found it.
The built-in help generates its messages using the standard library pydoc module (the module is also intended to be usable as a standalone script). In that documentation, we find:
When printing output to the console, pydoc attempts to paginate the output for easier reading. If the PAGER environment variable is set, pydoc will use its value as a pagination program.
So, presumably, that's been set to most on your system. Assuming it won't break anything else on your system, just unset or change it. (It still pages without a value set - even on Windows. I assume it has a built-in fallback.)
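Since pydoc reads PAGER from the environment at call time, you can also override it for a single process before invoking help. A minimal sketch, assuming less is on your PATH:

```python
import os

# pydoc consults the PAGER environment variable when choosing a pager,
# so overriding it here affects subsequent help(...) calls in this process
os.environ["PAGER"] = "less"

# help(str) would now page through `less` when stdout is a terminal;
# with redirected output, pydoc falls back to printing directly
```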
You can make a custom most script that just invokes less (or even more).
The steps would be:
Set up a script called most, the contents of which are:
#!/bin/sh
less "$@"  # pass all of the script's arguments through to less
Put that script in a location that is on your PATH
Then most filename should just run less on that file, and that is the command that gets invoked from within your Python interpreter.
To be honest though, I'd just use Karl's approach.
You can view the various pager options in the source code (the getpager function). That function can be replaced to return whatever pager is desired. For example:
import pydoc

# getpager() normally returns the system pager; replace it so help() always pipes through less
pydoc.getpager = lambda: lambda text: pydoc.pipepager(text, 'less')
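The nested lambda can be hard to read; a slightly more explicit version of the same override (pager_via_less is just an illustrative name) would be:

```python
import pydoc

def pager_via_less(text):
    # replacement pager: pipe the help text through `less`
    pydoc.pipepager(text, "less")

# getpager() must return a function that accepts the text to display
pydoc.getpager = lambda: pager_via_less
```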
I believe this issue applies to any Python program that logs to the console.
I am trying to capture the output generated while running a behave test (all code is in Python 3.x).
Or more precisely: I am trying to capture the live console output to a file for a particular function of interest in the Python code.
My aim is to capture the console logs printed (on stdout) the moment a particular function in the Python code is hit.
I usually run behave data_base_scenarios.feature (no quotes) on Ubuntu 18.04.
I would like to capture the complete output, exactly as it is directed to the console, to a file.
On StackOverflow, after searching for a while, I tried some of the methods described here: Capturing stdout within the same process in Python. I also found this: https://capturer.readthedocs.io/en/latest/
Unfortunately, I don't see anything captured.
I have taken care to set up the behave environment to generate the logs. For example, these flags are explicitly set to generate output:
context.config.stdout_capture = True, context.config.log_capture = True.
What am I missing in the behave environment?
Behave also provides a variable within "context": context.stdout_capture. But unfortunately it contains nothing.
In short, Behave prints this on the console:
Captured logging:
INFO:database.system.status: MyDatabase is online.
INFO:database.system.status: MyDatabase is now offline.
INFO:database.system.status: MyDatabase has now initiated.
I just want to dump the above "Captured Logging" console output to a file for analysis.
How can I do it? Please let me know.
So it depends on how you're doing it, but if you're using behave_main, then you can do something like the following:
from behave.__main__ import main as behave_main

# pass the same flags you would give on the command line
args = ['--outfile=/path/to/your/log/directory/output.log']
behave_main(args)
If you're running your tests via the command line, then pass the -o (or --outfile) flag followed by your log file name.
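If you would rather capture from inside Python than via a flag, you can also temporarily redirect stdout around the call. A minimal sketch, where run_suite() is a hypothetical stand-in for behave_main(args):

```python
import contextlib
import io

def run_suite():
    # stand-in for behave_main(args); prints the kind of output behave shows
    print("Captured logging:")
    print("INFO:database.system.status: MyDatabase is online.")

buf = io.StringIO()
with contextlib.redirect_stdout(buf):  # everything printed lands in buf
    run_suite()

with open("captured.log", "w") as f:   # dump the capture for later analysis
    f.write(buf.getvalue())
```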
I know at some point I may want to "upgrade", but for now I just want the old look & feel back.
I know IPython is very configurable but I'm not having much luck finding the correct settings.
(This only refers to IPython's interactive terminal, by the way)
Start by creating a default IPython profile. From your shell, type:
$ ipython profile create
That should give output similar to this, assuming the username someuser:
[ProfileCreate] Generating default config file: u'/home/someuser/.ipython/profile_default/ipython_config.py'
Open the newly created file with your favorite text editor, in the above example shown as: /home/someuser/.ipython/profile_default/ipython_config.py
Add the following lines at the bottom:
c.TerminalInteractiveShell.colors = 'NoColor'
c.TerminalInteractiveShell.display_completions = 'readlinelike'
Save the file.
That should be it. The changes will be active in new IPython sessions.
I am printing a set of values to the console in a program which runs for more than an hour. There were runtime warnings, in red, during the run. However, when I scroll up to see them, they no longer appear, because the program is printing new values very quickly as it runs.
Is there any way to display only the runtime warnings, or to see all of the values printed previously? (Since they are warnings, they don't stop the program from running.)
I don't know how you are "printing some set of values" to your console, but if you are using the Python logging module and your warnings are logged at the WARN level (and everything else at INFO, DEBUG, etc.), you can set the logger to only output WARN and above (including ERROR and CRITICAL):
import logging

logger = logging.getLogger('spam_application')
logger.setLevel(logging.WARN)  # only WARN, ERROR and CRITICAL will be emitted
See more examples in the logging cookbook.
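If you also want the warnings preserved somewhere that terminal scroll-back can't lose them, you can attach a file handler at the WARN level. A minimal sketch (the logger name and file name are just examples):

```python
import logging

logger = logging.getLogger("spam_application")
logger.setLevel(logging.WARN)          # drop INFO and DEBUG records

handler = logging.FileHandler("warnings.log", mode="w")
handler.setLevel(logging.WARN)         # only WARN and above reach the file
logger.addHandler(handler)

logger.info("a routine value")         # filtered out by the logger level
logger.warning("a runtime warning")    # kept in warnings.log
```

This way the warnings survive the run even if the console history is long gone.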
Another option is to increase the scroll-back history of your terminal (I don't know which one you're using, so I can't give exact instructions) so that it stores more lines of output.
Finally, you could pipe the output to grep (on a *NIX system), looking for "warning" or similar. Note that warnings are often written to stderr, so redirect it into the pipe:
python your_script.py 2>&1 | grep -i warning
I'm writing a command-line interface in Python. It uses the readline module to provide command history and completion.
While everything works fine in interactive mode, I'd like to run automated tests on the completion feature. My naive first try involved using a file for standard input:
my_app < command.file
The command file contained a tab, in the hopes that it would invoke the completion feature. No luck. What's the right way to do the testing?
For this I would use Pexpect (Python version of Expect). The readline library needs to be speaking to a terminal to do interactive tab-completion and such—it can't do this if it is only getting one-way input from a redirected file.
Pexpect works for this because it creates a pseudo terminal, which consists of two parts: the slave, where the program you are testing runs, and the master, where the Python pexpect code runs. The pexpect code emulates the human running the test program. It is responsible for sending characters to the slave, including characters such as newline and tab, and reacting to the expected output (this is where the phrase "expect" comes from).
See the program ftp.py from the examples directory for a good example of how you would control your test program from within expect. Here is a sample of the code:
import pexpect

# drive an interactive ftp session, reacting to each prompt as a human would
child = pexpect.spawn('ftp ftp.openbsd.org')
child.expect('(?i)name .*: ')
child.sendline('anonymous')
child.expect('(?i)password')
child.sendline('pexpect#sourceforge.net')
child.expect('ftp> ')
rlcompleter might accomplish what you want.
From the documentation:
The rlcompleter module is designed for use with Python’s interactive mode. A user can add the following lines to his or her initialization file (identified by the PYTHONSTARTUP environment variable) to get automatic Tab completion:
try:
    import readline
except ImportError:
    print "Module readline not available."
else:
    import rlcompleter
    readline.parse_and_bind("tab: complete")
https://docs.python.org/2/library/rlcompleter.html
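Since the snippet above is from the Python 2 documentation, here is the same thing for Python 3 (print becomes a function; note that recent Python 3 interactive interpreters already enable this completion by default):

```python
try:
    import readline
except ImportError:
    # readline is unavailable on some platforms (e.g. stock Windows builds)
    print("Module readline not available.")
else:
    import rlcompleter
    readline.parse_and_bind("tab: complete")
```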
Check out ScriptTest:
from scripttest import TestFileEnvironment

env = TestFileEnvironment('./scratch')

def test_script():
    env.reset()
    # filename is assumed to be defined elsewhere in the test module
    result = env.run('do_awesome_thing testfile --with extra_win --file %s' % filename)
And play around with passing the arguments as you please.
You can try using Sikuli to test the end-user interaction with your application.
However, this is complete overkill: it requires a lot of extra dependencies, runs slowly, and will fail if the terminal font or colors change. But you will still be able to test the actual user interaction.
The documentation homepage links to a slideshow and a FAQ question about writing tests using Sikuli.