Testing programs that read from sys.stdin - python

I am playing with some programming challenges that will check the submission by:
python my_submission < in.txt > out.txt
While writing my submission, I want to read some cases/numbers/whatever from in.txt to see what is happening. Currently I am doing that like this:
import sys
file = open('in.txt')
sys.stdin = file
for line in sys.stdin:
    case1 = line.split()
    some_function(case1)
So when I run my python program (hit Cmd+B) in Sublime Text, I can see whether I manage to read the input correctly, process one test case correctly, etc. Then I just comment out the 2nd and 3rd lines when my program should be submitted to the judge.
I was just wondering: is this the "preferred workflow" for dealing with this? Do pro programmers write some kind of unit test template function to do this?

The preferred workflow is to let the shell do the redirection so you don't have to change the program code all the time.
But your IDE (Sublime Text) doesn't allow you to specify such arguments, so it limits your options.
Solutions/workarounds:
Start the program from a shell. This means you need to switch between the terminal window and Sublime all the time.
Write a second program which runs the first and sets up the input redirection (see the first sketch after this list). This way, you just need to switch tabs in Sublime.
Instead of reading from stdin directly, use the fileinput module (second sketch below). See How do you read from stdin in Python? This will allow you to write proper unit tests for your code. You can then use the Python Unittest Helper plugin for Sublime.
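A minimal sketch of the wrapper approach (Python 3.5+ for subprocess.run), mimicking the judge's python my_submission < in.txt redirection; run_with_input.py and my_submission.py are hypothetical names:
import subprocess

# run_with_input.py: run the solution with in.txt connected to its stdin,
# so the solution itself never has to touch sys.stdin.
with open('in.txt') as infile:
    subprocess.run(['python', 'my_submission.py'], stdin=infile, check=True)
And a sketch of the fileinput variant: fileinput.input() reads from the files named on the command line, falling back to stdin when none are given, so the same code works when you pass in.txt as an argument in Sublime and when the judge redirects stdin:
import fileinput

for line in fileinput.input():
    case1 = line.split()
    some_function(case1)    # some_function as in the question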

Related

Displaying all results of execution

When I execute a python program, the results start to appear quickly and I can't read them all. They just flush over my screen.
When the execution ends, I can no longer see the first output, because the terminal display space is limited.
How can I save the output, so I can read all of it?
You have a few options here.
Add a breakpoint and learn how to use the debugger. Once you add the command import pdb; pdb.set_trace() at a specific point, the code will stop there when you execute it. This will take some learning, so look up what pdb is online; personally, I prefer ipdb instead.
Save the output to a file (python file.py > filename.txt) and read it afterwards. Bonus: before you ask yourself "where are my outputs?", note that > does not capture stderr: https://askubuntu.com/questions/625224/how-to-redirect-stderr-to-a-file
(More advanced) Your code is spitting out too much garbage output. You can remove some of the code or use Python logging filters (a sketch follows below).
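A minimal sketch of the logging-filter idea, assuming the noisy messages share a recognizable substring; NoiseFilter and 'heartbeat' are hypothetical:
import logging

class NoiseFilter(logging.Filter):
    def filter(self, record):
        # Drop any record whose message contains the noisy substring.
        return 'heartbeat' not in record.getMessage()

logger = logging.getLogger('app')
handler = logging.StreamHandler()
handler.addFilter(NoiseFilter())
logger.addHandler(handler)

logger.warning('heartbeat ping')    # suppressed by the filter
logger.warning('something broke')   # still shown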
This may be platform dependent.
On Linux you can also pipe your program output into your favorite pager (less for example) if you don't want to write it to a file.
python file.py | less

How to send command in separate python window

Searching isn't pulling up anything useful so perhaps my verbiage is wrong.
I have a python application that I didn't write which takes user input and performs tasks based on that input. The other script, which I did write, watches the serial traffic for a specific match condition. Both scripts run in different windows. What I want is this: when my script detects a match condition, it should send a command to the other script. Is there a way to do this with python? I am working in Windows and want to send the output to a different window.
Since you can start the other script from within your own, you can just follow the instructions in this link: Read from the terminal in Python. A sketch of the idea follows the old answer below.
old answer:
I assume you can modify the code in the application you didn't write. If so, you can tell the code to "print" what it's putting on the window to a file, and your other code could constantly monitor that file.
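A minimal sketch of the stdin idea, assuming the other application reads commands from its standard input; other_app.py and the command string are hypothetical placeholders:
import subprocess

# Launch the application to be controlled and keep a pipe to its stdin.
proc = subprocess.Popen(['python', 'other_app.py'], stdin=subprocess.PIPE)

def send_command(cmd):
    # Write one command line and flush so the other process sees it now.
    proc.stdin.write((cmd + '\n').encode())
    proc.stdin.flush()

# Called when the serial watcher hits its match condition:
send_command('do_the_thing')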

Write and save a file with nano using subprocess

How can I write/append to a file by calling nano using subprocess and get it saved automatically? For example, I have a file and I want to open it and append something at the end of it, so I write:
>>> import tempfile
>>> file = tempfile.NamedTemporaryFile(mode='a')
>>> example = file.name
>>> file.close()
>>> import subprocess
>>> subprocess.call(['nano', example])
Now, once the last line gets executed, the file is opened and I can write anything, then save it by hitting Ctrl+O and Ctrl+X.
Instead, I want to send the input through a stdin PIPE and have the file saved by itself, i.e. some mechanism that hits Ctrl+O and Ctrl+X automatically.
Can anyone help me solve this issue?
A ctrl-O is just a character, same as any other. You can send it by writing '\x0f' (or, in Python 3, b'\x0f').
However, that probably isn't going to do you any good. Most programs that provide an interactive GUI in the terminal, like nano, cannot be driven by stdin. They need to take control of the terminal, and to do that, they will either check that stdin isatty and then tcsetattr it, or just open /dev/tty.
You can deal with this by creating a pseudo-terminal with os.openpty, os.forkpty, or pty.
But it's often easier to use a library like pexpect to deal with interactive programs, GUI or otherwise.
And it's even easier to not try to drive an interactive program in the first place. For example, unlike nano, ed is designed to be driven in "batch mode" by a script (see the sketch below), and sed even more so.
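For instance, a minimal sketch of driving ed in batch mode from Python (3.7+ for text=True); example.txt and the appended text are hypothetical, and the file is assumed to exist:
import subprocess

# ed script: $a appends after the last line, '.' ends the inserted text,
# w writes the file, q quits. -s suppresses ed's diagnostics.
script = '$a\nappended line\n.\nw\nq\n'
subprocess.run(['ed', '-s', 'example.txt'], input=script, text=True, check=True)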
And it's even easier to not try to drive a program at all when you're trying to do something that can be just as easily done directly in Python. The easiest way to append something to a file is to open it in 'a' mode and write to it. No need for an external program at all. For example:
new_line = input('What do you want to add?')
with open(fname, 'a') as f:
    f.write(new_line)
If the only reason you were using nano is because you needed something to sudo… there's really no reason for that. You can sudo anything else—like sed, or another Python script—just as easily. Using nano is just making things harder for yourself for absolutely no reason.
The big question here is: why do you have a file that's not writable by your Python script, but which you want arbitrary remote users to be able to append to? That sounds like a very bad system design. You make files non-writable because you want to restrict normal users from modifying them; if you want your Python script to be able to modify it on behalf of your remote users, why isn't it owned by the same user that the script runs as?
In the (unlikely) event that you still find that you need to control nano or some other interactive program from a Python process, I'm going to suggest the same thing here that I suggested for this question: Using python subprocess.call() to launch an ncurses process ...
... don't use subprocess for controlling curses/full-screen interactive processes. Use pexpect. That's what it's for.
(On the other hand, I also agree with the many comments here regarding better ways to work around the permissions issue. Write some sort of script (in Python, bash, sed or whatever) which can be run under sudo and which can make the in-place edits or appendices to your data file directly.)
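If you really do need to drive nano, a hedged sketch with pexpect might look like this; the prompt text nano prints after Ctrl+O is an assumption and varies between versions, and example.txt is a placeholder:
import pexpect

child = pexpect.spawn('nano example.txt')
child.send('some text typed into the buffer')
child.send('\x0f')                    # Ctrl+O: write out
child.expect('File Name to Write')    # assumed prompt text
child.send('\r')                      # accept the suggested file name
child.send('\x18')                    # Ctrl+X: exit
child.expect(pexpect.EOF)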

How to test Python readline completion?

I'm writing a command-line interface in Python. It uses the readline module to provide command history and completion.
While everything works fine in interactive mode, I'd like to run automated tests on the completion feature. My naive first try involved using a file for standard input:
my_app < command.file
The command file contained a tab, in the hopes that it would invoke the completion feature. No luck. What's the right way to do the testing?
For this I would use Pexpect (Python version of Expect). The readline library needs to be speaking to a terminal to do interactive tab-completion and such—it can't do this if it is only getting one-way input from a redirected file.
Pexpect works for this because it creates a pseudo terminal, which consists of two parts: the slave, where the program you are testing runs, and the master, where the Python pexpect code runs. The pexpect code emulates the human running the test program. It is responsible for sending characters to the slave, including characters such as newline and tab, and reacting to the expected output (this is where the phrase "expect" comes from).
See the program ftp.py from the examples directory for a good example of how you would control your test program from within expect. Here is a sample of the code:
child = pexpect.spawn('ftp ftp.openbsd.org')
child.expect('(?i)name .*: ')
child.sendline('anonymous')
child.expect('(?i)password')
child.sendline('pexpect#sourceforge.net')
child.expect('ftp> ')
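Applied to the question, a hedged sketch might look like this; my_app.py, the '> ' prompt, and the completed command 'help' are all assumptions about your application:
import pexpect

child = pexpect.spawn('python my_app.py')
child.expect('> ')       # wait for the assumed prompt
child.send('he\t')       # type a prefix, then Tab to trigger completion
child.expect('help')     # readline should have expanded it to 'help'
child.sendline('')
child.close()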
rlcompleter might accomplish what you want
From the documentation:
The rlcompleter module is designed for use with Python’s interactive mode. A user can add the following lines to his or her initialization file (identified by the PYTHONSTARTUP environment variable) to get automatic Tab completion:
try:
    import readline
except ImportError:
    print "Module readline not available."
else:
    import rlcompleter
    readline.parse_and_bind("tab: complete")
https://docs.python.org/2/library/rlcompleter.html
Check out ScriptTest:
from scripttest import TestFileEnvironment
env = TestFileEnvironment('./scratch')
def test_script():
    env.reset()
    result = env.run('do_awesome_thing testfile --with extra_win --file %s' % filename)
And play around with passing the arguments as you please.
You can try using Sikuli to test the end-user interaction with your application.
However, this is complete overkill: it requires a lot of extra dependencies, will work slowly, and will fail if the terminal font/colors change. But, still, you will be able to test actual user interaction.
The documentation homepage links to a slideshow and a FAQ question about writing tests using Sikuli.

Can I save a text file in python without closing it?

I am writing a program in which I would like to be able to view a log file before the program is complete. I have noticed that, in python (2.7 and 3), file.write() does not save the file; file.close() does. I don't want to create a million little log files with unique names, but I would like to be able to view the updated log file before the program is finished. How can I do this?
Now, to be clear I am scripting using Ansys Workbench (trying to batch some CFX runs). Here's a link to a tutorial that shows what I'm talking about. They appear to have wrapped python, and by running the script I can send commands to the various modules. When the script is running there is no console onscreen and it appears to be eating all of the print statements, so the only way I can report what's happening is via a file. Also, I don't want to bring a console window up because eventually I will just run the program in batch mode (no interface). But the simulations take a long time to run and I can't wait for the program to finish before checking on what's happening.
You would need this:
import os

file.flush()
# Typically the line above would do; the call below ensures the data
# actually reaches the disk.
os.fsync(file.fileno())
Check this: http://docs.python.org/2/library/stdtypes.html#file.flush
file.flush()
Flush the internal buffer, like stdio's fflush(). This may be a no-op on some file-like objects.
Note flush() does not necessarily write the file’s data to disk. Use flush() followed by os.fsync() to ensure this behavior.
EDITED: See this question for detailed explanations: what exactly the python's file.flush() is doing?
Does file.flush() after each write help? - Hannu
This will write the file to disk immediately:
import os

file.flush()
os.fsync(file.fileno())
According to the documentation https://docs.python.org/2/library/os.html#os.fsync
Force write of file with filedescriptor fd to disk. On Unix, this calls the native fsync() function; on Windows, the MS _commit() function.
If you’re starting with a Python file object f, first do f.flush(), and then do os.fsync(f.fileno()), to ensure that all internal buffers associated with f are written to disk.
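Putting it together for the logging use case, a minimal sketch; run.log and the message text are hypothetical:
import os

log = open('run.log', 'a')

def log_line(msg):
    log.write(msg + '\n')
    log.flush()                # push Python's buffer to the OS
    os.fsync(log.fileno())     # ask the OS to commit it to disk

log_line('simulation step 1 complete')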
