Python - stdin - how to recognize the source of the input?

I am looking for a way to determine whether stdin input comes from another application via piping or not.
Let's say that I have a program that either accepts input data from piped stdin (when you pipe it from another application - grep, tail, ...) or uses a default data file. I don't want the user to have to fill in the data manually when prompted just because no stdin was piped.
My simple code example looks like this:
from sys import stdin
for line in stdin:
    print line
When I run the script using this:
echo "data" | python example.py
I get
data
and the script ends.
While if I run the script in the following way,
python example.py
it prompts the user to fill in the input and waits.
Therefore I am looking for something like the following to avoid the prompt when no data is piped.
from sys import stdin
if is_stdin_piped():
    for line in stdin:
        print line
else:
    print "default"
Is something like this possible? Thanks

If you use input redirection, the standard input is not connected to a terminal like it usually is. You can check if a file descriptor is connected to a terminal or not with the isatty function:
import os
def is_stdin_piped():
    return not os.isatty(0)
For extra safety use sys.stdin.fileno() instead of 0.
Update: To check if stdin is redirected from a file (rather than another program, such as an IDE or a shell pipeline) you can use fstat:
import os, stat
def is_stdin_a_file():
    status = os.fstat(0)
    return stat.S_ISREG(status.st_mode)
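Putting the two together, a minimal sketch of the behaviour asked for in the question (read piped data when present, otherwise fall back to a default data file; default.txt is just a placeholder name):
import os
import sys

def is_stdin_piped():
    # stdin counts as "piped" whenever it is not attached to a terminal
    return not os.isatty(sys.stdin.fileno())

if is_stdin_piped():
    lines = sys.stdin.readlines()
else:
    # placeholder default data file
    with open("default.txt") as f:
        lines = f.readlines()

for line in lines:
    print(line.rstrip("\n"))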

Related

How can I take piped input from another application AND take user input after?

import sys
stdin_input = sys.stdin.read()
print(f"Info loaded from stdin: {stdin_input}")
user_input = input("User input goes here: ")
Error received:
C:\>echo "hello" | python winput.py
Info loaded from stdin: "hello"
User input goes here: Traceback (most recent call last):
  File "C:\winput.py", line 6, in <module>
    user_input = input("User input goes here: ")
EOFError: EOF when reading a line
I've recently learned this is because sys.stdin is connected to the pipe (a FIFO), which is left exhausted after reading.
I can make it work on CentOS by adding sys.stdin = open("/dev/tty") after stdin_input = sys.stdin.read() based on this question, but this doesn't work for Windows.
Preferably rather than identifying the OS and assigning a new value to sys.stdin accordingly, I'd rather approach it dynamically. Is there a way to identify what the equivalent of /dev/tty would be in every case, without necessarily having to know /dev/tty or the equivalent is for the specific OS?
Edit:
The reason for the sys.stdin.read() is to take in JSON input piped from another application. I also have an option to read the JSON data from a file, but being able to use the piped data is very convenient. Once the data is received, I'd like to get user input separately.
I'm currently working around my problem with the following:
if os.name == "posix":
    sys.stdin = open("/dev/tty")
elif os.name == "nt":
    sys.stdin = open("con")
else:
    raise RuntimeError(
        f"Error trying to assign to sys.stdin due to unknown os {os.name}"
    )
This may very well work in all cases, but it would still be preferable to determine dynamically what /dev/tty or con or the equivalent for the OS is. If that's not possible and my workaround is the best solution, I'm okay with that.
Since you're using Bash, you can avoid this problem by using process substitution, which is like a pipe, but delivered via a temporary filename argument instead of via stdin.
That would look like:
winput.py <(another-application)
Then in your Python script, receive the argument and handle it accordingly:
import json
import sys
with open(sys.argv[1]) as f:
    d = json.load(f)
print(d)
user_input = input("User input goes here: ")
print('User input:', user_input)
(sys.argv is just used for demo. In a real script I'd use argparse.)
Example run:
$ tmp.py <(echo '{"someKey": "someValue"}')
{'someKey': 'someValue'}
User input goes here: 6
User input: 6
The other massive advantage of this is that it works seamlessly with actual filenames, for example:
$ cat test.json
{"foo": "bar"}
$ tmp.py test.json
{'foo': 'bar'}
User input goes here: x
User input: x
So your real issue is that sys.stdin can be only one of two things:
Connected to the typed input from the terminal
Connected to some file-like object that is not the terminal (actual file, pipe, whatever)
It doesn't matter that you consumed all of sys.stdin by doing sys.stdin.read(); once sys.stdin was redirected to some file-system object, you lost the ability to read from the terminal via sys.stdin.
In practice, I'd strongly suggest not trying to do this. Use argparse and accept whatever you were considering accepting via input from the command line, and avoid the whole problem (in practice, I basically never see real production code that's not a REPL of some sort dynamically interacting with the user via stdin/stdout; for non-REPL cases, sys.stdin is basically always either unused or piped from a file/program, because writing clean user-interaction code like this is a pain, and it's a pain for the user to have to type their responses without making mistakes). The input that might come from a file or stdin can be handled by passing type=argparse.FileType() to the add_argument call in question; the user can then opt to pass either a file name or - (where - means "read from stdin"), leaving your code looking like:
parser = argparse.ArgumentParser('Program description here')
parser.add_argument('inputfile', type=argparse.FileType(), help='Description here; pass "-" to read from stdin')
parser.add_argument('-c', '--cmd', action='append', help='User commands to execute after processing input file')
args = parser.parse_args()
with args.inputfile as f:
    data = f.read()
for cmd in args.cmd:
    ...  # Do stuff based on cmd
The user can then do:
otherprogram_that_generates_data | myprogram.py - -c 'command 1' -c 'command 2'
or:
myprogram.py file_containing_data -c 'command 1' -c 'command 2'
or (on shells with process substitution, like bash, as an alternative to the first use case):
myprogram.py <(otherprogram_that_generates_data) -c 'command 1' -c 'command 2'
and it works either way.
If you must do this, your existing solution is really the only reasonable one, but you can make it a little cleaner by factoring it out and only making the path dynamic, not the whole code path:
import contextlib
import os
import sys
TTYNAMES = {"posix": "/dev/tty", "nt": "con"}
@contextlib.contextmanager
def stdin_from_terminal():
    try:
        ttyname = TTYNAMES[os.name]
    except KeyError:
        raise OSError(f"{os.name} does not support manually reading from the terminal")
    with open(ttyname) as tty:
        sys.stdin, oldstdin = tty, sys.stdin
        try:
            yield
        finally:
            sys.stdin = oldstdin
This will probably die with an OSError subclass on the open call if run without a connected terminal, e.g. when launched with pythonw on Windows (another reason not to use this design), or launched in non-terminal ways on UNIX-likes, but that's better than silently misbehaving.
You'd use it with just:
with stdin_from_terminal():
    user_input = input("User input goes here: ")
and it would restore the original sys.stdin automatically when the with block is exited.
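Tying this back to the original winput.py example, a sketch of the full flow might look like this (it assumes the stdin_from_terminal helper defined above):
import json
import sys

# Read the piped JSON first, while sys.stdin is still the pipe
stdin_input = sys.stdin.read()
data = json.loads(stdin_input)
print(f"Info loaded from stdin: {data}")

# Then reattach sys.stdin to the terminal before prompting
with stdin_from_terminal():
    user_input = input("User input goes here: ")
print("User input:", user_input)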

Python subprocess.run C Program not working

I am trying to write the code to run a C executable using Python.
The C program can be run in the terminal just by calling ./myprogram, and it will show a selection menu, as shown below:
1. Login
2. Register
Now, using Python and subprocess, I wrote the following code:
import subprocess
subprocess.run(["./myprogram"])
The Python program runs but shows nothing (no errors either!). Any ideas why this is happening?
When I tried:
import subprocess
subprocess.run(["ls"])
All the files in that particular directory are shown, so I assume this part is right.
You have to open the subprocess like this:
import subprocess
cmd = subprocess.Popen(['./myprogram'], stdin=subprocess.PIPE)
This means that cmd will have a .stdin you can write to; print by default sends output to your Python script's stdout, which has no connection with the subprocess' stdin. So do that:
cmd.stdin.write(b'1\n') # tell myprogram to select 1 (bytes, since the pipe is binary by default)
and then quite probably you should:
cmd.stdin.flush() # don't let your input stay in in-memory-buffers
or
cmd.stdin.close() # if you're done with writing to the subprocess.
PS If your Python script is a long-running process on a *nix system and you notice your subprocess has ended but is still displayed as a Z (zombie) process, please check that answer.
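Putting those pieces together, a minimal sketch (assuming ./myprogram reads its menu choice from stdin) might be:
import subprocess

# Start the C program with its stdin connected to a pipe we can write to
cmd = subprocess.Popen(['./myprogram'], stdin=subprocess.PIPE)

cmd.stdin.write(b'1\n')   # select option 1 in the menu
cmd.stdin.flush()         # push it out of Python's in-memory buffers

cmd.stdin.close()         # signal that no more input is coming
cmd.wait()                # reap the process so it doesn't linger as a zombie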
Maybe flush stdout?
print("", flush=True, end="")

Python redirect output after os._exit

I have a thread which is supposed to close the whole application, so I use os._exit(1). But I also want to redirect the output from my program to a file, and the output file ends up empty. Simple example:
import os
print('something')
os._exit(1)
Running program with:
python myprogram.py > output.txt
Is there any way to do this?
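A minimal sketch of one workaround, assuming the empty file is caused by os._exit() skipping the interpreter's normal shutdown (so buffered stdout is never flushed when it is redirected to a file):
import os
import sys

print('something')
sys.stdout.flush()  # os._exit() does not flush stdio buffers, so flush manually first
os._exit(1)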

grab serial input line and move them to a shell script

I am trying to grab a UART line and pass this string to a shell script:
#!/usr/bin/env python
import os
import serial
ser = serial.Serial('/dev/ttyAMA0', 4800)
while True:
    try:
        state = ser.readline()
        print(state)
    except:
        pass
So, "state" should now be given to a shell script,
like: myscript.sh "This is the serial input..."
but how can I do this?
print(os.system('myscript.sh ').ser.readline())
doesn't work.
Just simple string concatenation passed to the os.system function.
import os
os.system("myscript.sh " + ser.readline())
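Note that with pyserial on Python 3, ser.readline() returns bytes, so a sketch that decodes the line and quotes it before handing it to the shell might look like:
import os
import shlex
import serial

ser = serial.Serial('/dev/ttyAMA0', 4800)
line = ser.readline().decode('utf-8', errors='replace').strip()
# quoting keeps stray shell metacharacters in the serial data from being interpreted
os.system("myscript.sh " + shlex.quote(line))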
If myscript can continuously read additional input, you have a much more efficient pipeline.
from subprocess import Popen, PIPE
sink = Popen(['myscript.sh'], stdin=PIPE, stdout=PIPE)
while True:
    sink.stdin.write(ser.readline())  # stream each line into the long-running script
    sink.stdin.flush()
If you have to start a new myscript.sh for every input line, (you'll really want to rethink your design, but) you can, of course:
while True:
    subprocess.check_call(['myscript.sh', ser.readline()])
Notice how in both cases we avoid a pesky shell.
There are different ways to combine two strings (namely "./myscript.sh" and ser.readline()), which will then give you the full command to be run by os.system. E.g. strings can be arguments of the str.format method:
os.system('myscript.sh {}'.format(ser.readline()))
Also you can just add two strings:
os.system('myscript.sh '+ser.readline())
I am not sure what you want to achieve with the print statement. A better way to handle the call and input/output of your code would be to switch from os to the subprocess module.
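A minimal sketch of that subprocess-based approach (assuming myscript.sh takes the serial line as its first argument):
import subprocess
import serial

ser = serial.Serial('/dev/ttyAMA0', 4800)
while True:
    line = ser.readline().decode('utf-8', errors='replace').strip()
    # the line is passed as a single argument; no shell is involved
    subprocess.run(['./myscript.sh', line], check=False)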

How to run and control a commandline program from python?

I have a Python script which produces an output file. I need to feed this output file to a command-line program. Is there any way I could call the command-line program from Python and control it so it processes the file?
I tried to run this code
import os
import subprocess
import sys
proc = subprocess.Popen(["program.exe"], stdin=subprocess.PIPE)
proc.communicate(input=sys.argv[1]) #here the filename should be entered
proc.communicate(input=sys.argv[2]) #choice 1
proc.communicate(input=sys.argv[3]) #choice 2
Is there any way I could enter the input coming from the command line? Also, though the cmd program opens, the interface flickers after I run the code.
Thanks.
Note: platform is windows
Have a look at http://docs.python.org/library/subprocess.html. It's the current way to go when starting external programs. There are many examples, and you have to check yourself which one fits your needs best.
You could do os.system(somestr), which lets you execute somestr as a command on the command line. However, this has been scrutinized over time for being insecure, etc (will post a link as soon as I find it).
As a result, it has been conventionally replaced with subprocess.Popen
Hope this helps
Depending on how much control you need, you might find it easier to use pexpect, which makes parsing the output of the program rather easy and can also easily be used to talk to the program's stdin. Check out the website, they have some nice examples.
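A minimal pexpect sketch, assuming the program prompts for a filename and then a menu choice (the prompt strings here are made up, and note that pexpect.spawn is POSIX-only, so it won't help directly on Windows):
import pexpect

# Spawn the program and drive it through its prompts
child = pexpect.spawn('./program')
child.expect('filename')       # wait for the (assumed) filename prompt
child.sendline('output.dat')   # answer it
child.expect('choice')         # wait for the (assumed) menu prompt
child.sendline('1')
child.expect(pexpect.EOF)      # let the program finish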
If your target program is expecting the input on STDIN, you can redirect using a pipe:
python myfile.py | someprogram
As I just answered another question regarding subprocess, there is a better alternative!
Please have a look at the great Python library sh; it is a full-fledged subprocess interface for Python that allows you to call any program as if it were a function and, more importantly, it's pleasingly Pythonic.
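A short sketch of what that looks like with sh (note that sh is POSIX-only, so it won't cover the Windows platform from the question; the program name and the input fed to it are placeholders):
import sh

# Wrap the external program as a callable and feed its stdin via the _in keyword
myprogram = sh.Command('./myprogram')
output = myprogram(_in='inputfile.txt\n1\n2\n')
print(output)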
Besides redirecting the data stream with pipes, you can also process a command line such as:
mycode.py -o outputfile inputfilename.txt
You must import sys
import sys
and in your main function:
ii = 1
infile = None
outfile = None
# let's process the command line
while ii < len(sys.argv):
    arg = sys.argv[ii]
    if arg == '-o':
        ii = ii + 1
        outfile = sys.argv[ii]
    else:
        infile = arg
    ii = ii + 1
Of course, you can add some file checking, etc...
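A short sketch of how the parsed names might then be used (the processing step is just a placeholder):
if infile is None:
    sys.exit("no input file given")

with open(infile) as f_in:
    data = f_in.read()

result = data.upper()  # placeholder for the real processing

if outfile is not None:
    with open(outfile, 'w') as f_out:
        f_out.write(result)
else:
    sys.stdout.write(result)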
