Detect STDIN file and prevent user input in Python

I want to write a command-line Python program that can be called in a Windows cmd.exe prompt using the STDIN syntax and to print help text to STDOUT if an input file is not provided.
The STDIN redirection syntax is different from argument syntax, and is necessary for the program to be a drop-in replacement:
my_program.py < input.txt
Here's what I have so far:
import sys

# Reopen stdout with '\n' newlines instead of the '\r\n' Windows default
stdout = open(sys.__stdout__.fileno(),
              mode=sys.__stdout__.mode,
              buffering=1,
              encoding=sys.__stdout__.encoding,
              errors=sys.__stdout__.errors,
              newline='\n',
              closefd=False)

def main(args):
    lines = sys.stdin.read()
    lines = lines.replace('\r\n', '\n').replace('\t', ' ')
    stdout.write(lines)

if __name__ == '__main__':
    main(sys.argv)
I cannot figure out how to detect whether a file was provided on STDIN, and how to avoid prompting for user input if it wasn't. sys.argv doesn't contain the STDIN redirection. I could wrap the read in a thread, wait up to some time limit for file access, and decide that a file probably wasn't provided, but I wanted to see if there's a better way. I searched SO for this question but was unable to find an answer that avoids a timer.

test.py:
import sys

if sys.__stdin__.isatty():
    print("stdin from console")
else:
    print("stdin not from console")
execution:
> test.py
stdin from console
> test.py <input.txt
stdin not from console

The operator you are using will read a file and provide the contents of that file on stdin for your process. This means there is no way for your script to tell whether it is being fed the contents of a file, or whether there is a really fast typist at the keyboard entering the exact same series of keystrokes that matches the file contents.
By the time your script accesses the data, it's just a stream of characters; the fact that it came from a file is known only to the command-line interface you used to write the redirection.
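Putting the isatty() check together with the original program, a minimal sketch (the convert and run names are mine, and run returns an exit code so the branch is testable):

```python
import sys

USAGE = 'usage: my_program.py < input.txt'

def convert(text):
    # Normalize Windows line endings and turn tabs into spaces
    return text.replace('\r\n', '\n').replace('\t', ' ')

def run(stdin=None):
    stdin = stdin or sys.stdin
    if stdin.isatty():
        # Interactive console: nothing was redirected in, so print
        # help instead of blocking while waiting for typed input.
        print(USAGE)
        return 1
    sys.stdout.write(convert(stdin.read()))
    return 0

# In the real script: sys.exit(run())
```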

Related

How to read a print out statement from another program with python?

I have an algorithm written in C++ that prints a cout debug statement to the terminal window, and I would like to read that printout with Python without it being piped/written to a file or returned as a value.
Python orchestrates how the individual C++ algorithms are called while the data is kept on the heap rather than written to disk. Below is an example of a similar situation:
+-------------- terminal window-----------------+
(c++)runNewAlgo: Debug printouts on
(c++)runNewAlgo: closing pipes and exiting
(c++)runNewAlgo: There are 5 objects of interest found
( PYTHON LINE READS THE PRINT OUT STATEMENT)
(python)main.py: Starting the next processing node, calling algorithm
(c++)newProcessNode: Node does work
+---------------------------------------------------+
Say the line of interest is "there are 5 objects of interest" and the code will be inserted before the python call. I've tried to use sys.stdout and subprocess.Popen() but I'm struggling here.
Your easiest path would probably be to invoke your C++ program from inside your Python script.
More details here: How to call an external program in python and retrieve the output and return code?
You can use stdout from the returned process and read it line-by-line. The key is to pass stdout=subprocess.PIPE so that the output is sent to a pipe instead of being printed to your terminal (via sys.stdout).
Since you're printing human-readable text from your C++ program, you can also pass encoding='utf-8' as well to automatically decode each line using utf-8 encoding; otherwise, raw bytes will be returned.
import subprocess

proc = subprocess.Popen(['/path/to/your/c++/program'],
                        stdout=subprocess.PIPE, encoding='utf-8')
for line in proc.stdout:
    do_something_with(line)
    print(line, end='')  # if you also want to see each line printed
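If the line of interest follows a fixed format like the sample above, one way to act on it is to match each line of proc.stdout with a regular expression; the pattern below assumes the exact wording shown in the mock terminal:

```python
import re

# Pattern assuming the exact wording from the sample printout:
# "(c++)runNewAlgo: There are 5 objects of interest found"
COUNT_RE = re.compile(r'There are (\d+) objects of interest')

def objects_of_interest(line):
    """Return the reported count, or None if this line doesn't carry it."""
    m = COUNT_RE.search(line)
    return int(m.group(1)) if m else None
```

Feed each line from proc.stdout through it and act when it returns a number.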

How can I save the os commands outputs in a text file? [duplicate]

This question already has answers here:
Save output of os.system to text file
(4 answers)
Closed 2 years ago.
I'm trying to write a script which runs OS commands (Linux) and saves their output in a text file. But when I run this code, the output of the command is not saved in the text file.
#!/usr/bin/python
import sys
import os

target = raw_input('Enter the website : ')
ping_it = os.system('ping ' + target)
string_it = str(ping_it)
with open("Output.txt", "w+") as fo:
    fo.write(string_it)
After running the script, the only thing I find in Output.txt is the number 2.
Welcome to Stackoverflow.
The main issue here is that os.system is not designed to produce the output from the command - it simply runs it, and the process sends its output to whatever it inherits from its parent (your program).
To capture output it's easiest to use the subprocess module, which allows you to capture the process's outputs.
Here's a fairly simple program that will get you started:
import subprocess
target = 'google.com'
ping_it = subprocess.Popen(['ping', '-c', '4', target],  # -c 4: stop after four packets
                           stdout=subprocess.PIPE,
                           stderr=subprocess.PIPE)
out, err = ping_it.communicate()
with open("Output.txt", "w+") as fo:
    fo.write(out.decode())
If you want to read output as it is produced rather than waiting for the subprocess to terminate you can use a single subprocess.PIPE channel and read from that, which is conveniently expressed in forms like this:
from subprocess import Popen, PIPE

with Popen(["ping", "-c", "4", "google.com"], stdout=PIPE) as proc:
    print(proc.stdout.read())
In this example I chose to give the command as a list of arguments rather than as a simple string. This avoids having to join arguments into a string if they are already in list form.
Note that when interacting with subprocesses in this way it's possible for the subprocess to get in a blocked state because either stdout or stderr has filled up its output buffer space. If your program then tries to read from the other channel that will create a deadlock, where each process is waiting for the other to do something. To avoid this you can make stderr a temporary file, then verify after subprocess completion that the file contains nothing of significance (and, ideally, remove it).
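On Python 3.7+, subprocess.run handles the capture-and-wait pattern (reading both channels, so neither can fill up and block) in one call; a sketch, where run_to_file is an illustrative helper name:

```python
import subprocess

def run_to_file(cmd, path='Output.txt'):
    # capture_output=True collects stdout and stderr via pipes;
    # text=True decodes them to str instead of returning raw bytes.
    result = subprocess.run(cmd, capture_output=True, text=True)
    with open(path, 'w') as fo:
        fo.write(result.stdout)
    return result.returncode

# e.g. run_to_file(['ping', '-c', '4', 'google.com'])
```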
From the docs, you can use os.popen to assign the output of any command to a variable.
import os

target = raw_input('Enter the website : ')
output = os.popen('ping ' + target).read()  # saving the output
with open('output.txt', 'w+') as f:
    f.write(output)
Exactly what are you trying to save in the file? You did save the return value of os.system, which is nothing more than the final status of the execution; that is exactly what the documentation says the command returns.
If you want the output of the ping command, you need to capture it rather than rely on os.system. The simple way is to add shell redirection:
os.system('ping ' + target + ' > Output.txt 2>&1')
If you feel a need to pass the results through Python, use a separate process and receive the command results; see here.
You can also spawn a separate process and examine the results as they are produced, line by line. You don't seem to need that, but just in case, see my own question here.

Read from stdin or input file with argparse

I'd like to use argparse to read from either stdin or an input file. In other words:
If an input file is given, read that.
If not, read from stdin only if it's not the terminal. (i.e. a file is being piped in)
If neither of these criteria are satisfied, signal to argparse that the inputs aren't correct.
I'm asking for behavior similar to what's described in this question, but I want argparse to recognize no file as a failed input.
Using the information from the question you linked to, what about using sys.stdin.isatty() to check whether your program is being run as part of a pipeline? If it isn't, read from the input file; otherwise read from stdin. If the input file does not exist or stdin is empty, raise an error.
Hope that helped.
I would recommend just settings nargs='?' and then handling the case of a Nonetype separately. According to the official documentation, "FileType objects understand the pseudo-argument '-' and automatically convert this into sys.stdin for readable FileType objects and sys.stdout for writable FileType objects". So just give it a dash if you want stdin.
Example
import argparse
import sys

parser = argparse.ArgumentParser()
parser.add_argument('inputfile', nargs='?', type=argparse.FileType('r'))
args = parser.parse_args()
if not args.inputfile:
    sys.exit("Please provide an input file, or pipe it via stdin")
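Combining the '-' pseudo-argument with an isatty() check gives roughly the behavior the question asks for; a sketch (build_parser, get_input, and the error wording are illustrative):

```python
import argparse
import sys

def build_parser():
    parser = argparse.ArgumentParser()
    # '-' is argparse's pseudo-argument for stdin; making it the
    # default means "no file named" falls back to sys.stdin.
    parser.add_argument('inputfile', nargs='?',
                        type=argparse.FileType('r'), default='-')
    return parser

def get_input(argv=None):
    parser = build_parser()
    args = parser.parse_args(argv)
    if args.inputfile is sys.stdin and sys.stdin.isatty():
        # Nothing piped in and no file given: treat it as a usage error
        parser.error('expected an input file or piped stdin')
    return args.inputfile
```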

Using textfile as stdin in python under windows 7

I'm a win7-user.
I accidentally read about redirections (like command1 < infile > outfile) in *nix systems, and then I discovered that something similar can be done in Windows (link). Python can apparently also do something like this with pipes(?) or stdin/stdout(?).
I do not understand how this happens in Windows, so I have a question.
I use some kind of proprietary windows-program (.exe). This program is able to append data to a file.
For simplicity, let's assume that it is the equivalent of something like
from time import ctime, sleep

while True:
    f = open('textfile.txt', 'a')
    f.write(repr(ctime()) + '\n')
    f.close()
    sleep(100)
The question:
Can I use this file (textfile.txt) as stdin?
I mean that the script (while it runs) should continuously (not just once) handle all new data, i.e. in a never-ending cycle:
The program (.exe) writes something.
Python script captures the data and processes.
Could you please write how to do this in python, or maybe in win cmd/.bat or somehow else.
This is insanely cool thing. I want to learn how to do it! :D
If I am reading your question correctly then you want to pipe output from one command to another.
This is normally done as such:
cmd1 | cmd2
However, you say that your program only writes to files. I would double-check the documentation to see whether there isn't a way to get the command to write to stdout instead of a file.
If this is not possible then you can create what is known as a named pipe. It appears as a file on your filesystem, but is really just a buffer of data that can be written to and read from (the data is a stream and can only be read once). This means the program reading it will not finish until the program writing to the pipe stops writing and closes the "file". I don't have experience with named pipes on Windows, so you'll need to ask a new question for that. One downside of pipes is that they have a limited buffer size: if no program is reading from the pipe, then once the buffer is full the writing program cannot continue and will wait indefinitely until something starts reading from the pipe.
An alternative is that on Unix there is a program called tail which can be set up to continuously monitor a file for changes and output any data as it is appended to the file (with a short delay).
tail --follow=textfile.txt --retry | mycmd
# wait for data to be appended to the file and output new data to mycmd
cmd1 >> textfile.txt # append output to file
One thing to note about this is that tail won't stop just because the first command has stopped writing to the file. tail will continue to listen to changes on that file forever or until mycmd stops listening to tail, or until tail is killed (or "sigint-ed").
This question has various answers on how to get a version of tail onto a windows machine.
import sys

sys.stdin = open('textfile.txt', 'r')
for line in sys.stdin:
    process(line)
If the program writes to textfile.txt, you can't change that to redirect to stdin of your Python script unless you recompile the program to do so.
If you were to edit the program, you'd need to make it write to stdout, rather than a file on the filesystem. That way you can use the redirection operators to feed it into your Python script (in your case the | operator).
Assuming you can't do that, you could write a program that polls for changes on the text file, and consumes only the newly written data, by keeping track of how much it read the last time it was updated.
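A minimal sketch of that polling idea, a hand-rolled tail --follow (the follow name and its parameters are mine; max_idle_polls exists only so the loop can terminate):

```python
import time

def follow(path, process_line, poll_interval=0.5, max_idle_polls=None,
           from_start=False):
    # Read lines as they are appended; stop after max_idle_polls empty
    # reads in a row (None means run forever, like tail --follow).
    with open(path, 'r') as f:
        if not from_start:
            f.seek(0, 2)  # skip data already in the file
        idle = 0
        while True:
            line = f.readline()
            if line:
                idle = 0
                process_line(line)
            else:
                idle += 1
                if max_idle_polls is not None and idle >= max_idle_polls:
                    return
                time.sleep(poll_interval)
```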
When you use < to redirect the contents of a file to a Python script, that script receives the data on its stdin stream.
Simply read from sys.stdin to get that data:
import sys

for line in sys.stdin:
    ...  # do something with line

Python subprocess to call Unix commands, a question about how output is stored

I am writing a python script that reads a line/string, calls Unix, uses grep to search a query file for lines that contain the string, and then prints the results.
from subprocess import call

for line in infilelines:
    output = call(["grep", line, "path/to/query/file"])
    print output
    print line
When I look at my results printed to the screen, I will get a list of matching strings from the query file, but I will also get "1" and "0" integers as output, and line is never printed to the screen. I expect to get the lines from the query file that match my string, followed by the string that I used in my search.
call returns the process return code.
If using Python 2.7, use check_output.
from subprocess import check_output
output = check_output(["grep", line, "path/to/query/file"])
If using anything before that, use communicate.
import subprocess
process = subprocess.Popen(["grep", line, "path/to/query/file"], stdout=subprocess.PIPE)
output = process.communicate()[0]
This will open a pipe for stdout that you can read with communicate. If you want stderr too, you need to add "stderr=subprocess.PIPE" too.
This will return the full output. If you want to parse it into separate lines, use split.
output.split('\n')
I believe Python takes care of line-ending conversions for you, but since you're using grep I'm going to assume you're on Unix where the line-ending is \n anyway.
http://docs.python.org/library/subprocess.html#subprocess.check_output
The following code works with Python 2.5–2.7 (the commands module was removed in Python 3; use subprocess there):
from commands import getoutput
output = getoutput('grep %s path/to/query/file' % line)
output_list = output.splitlines()
Why would you want to execute a call to external grep when Python itself can do it? This is extra overhead, and your code will then depend on grep being installed. This is how you do a simple grep in Python with the "in" operator.
query = open("/path/to/query/file").readlines()
query = [i.rstrip() for i in query]

f = open("file")
for line in f:
    if line.rstrip() in query:
        print line.rstrip()
f.close()
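The same idea packaged as a reusable function, matching fixed substrings the way grep -F does (grep_lines is an illustrative name):

```python
def grep_lines(pattern, path):
    # Substring match against each line, like a fixed-string grep
    with open(path) as f:
        return [line.rstrip('\n') for line in f if pattern in line]
```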
