prompt question generator
class SynthesisPromptGenerator:
    def wait_key(self):
        ''' Wait for a key press on the console and return it. '''
        result = None
        for singlePrompt in ["questionCat", "questionDog"]:
            try:
                result = raw_input(singlePrompt)
                print 'input is: ', result
            except IOError:
                pass
        return result
I have a PromptGenerator that generates multiple terminal prompt questions; after the first question is answered, the second pops up, like so:
questionCat
(and wait for keyboard input)
questionDog
(and wait for keyboard input)
my goal is to automatically and dynamically answer to the questions
class PromptResponder:
    def respond(self):
        generator = SynthesisPromptGenerator()
        child = pexpect.spawn(generator.wait_key())
        child.expect("\*Cat\*")
        child.sendline("yes")
        child.expect("\*Dog\*")
        child.sendline("no")
        child.expect(pexpect.EOF)

if __name__ == "__main__":
    responder = PromptResponder()
    responder.respond()
if the prompt question contains Cat then answer yes
if the prompt question contains Dog then answer no
So it comes to:
how to get the prompt string from terminal and filter based on it?
how to answer multiple prompt questions in python?
I did some searching, but most of the questions I found are about shell scripts (echo yes | ./script), not about doing it in Python.
thank you very much
As suggested in the comments, use pexpect.
See pexpect on github, the official docs and this handy python for beginners walkthrough on pexpect.
For example, let's say this is your x.sh file:
#!/bin/bash
echo -n "Continue? [Y/N]: "
read answer
if [ "$answer" != "${answer#[Yy]}" ]; then
    echo -n "continuing.."
else
    echo -n "exiting.."
fi
You can do this:
import os, sys
import pexpect
# It's probably cleaner to use an absolute path here
# I just didn't want to include my directories..
# This will run x.sh from your current directory.
child = pexpect.spawn(os.path.join(os.getcwd(),'x.sh'))
child.logfile = sys.stdout
# Note I have to escape characters here because
# expect processes regular expressions.
child.expect("Continue\? \[Y/N\]: ")
child.sendline("Y")
child.expect("continuing..")
child.expect(pexpect.EOF)
print(child.before)
Result of the python script:
Continue? [Y/N]: Y
Y
continuing..
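Back to the original Cat/Dog case: since each of those questions just reads a line from stdin, the keyword-to-answer filtering can also be sketched with only the stdlib. The inline child process below is a hypothetical stand-in for the real generator, used only so the sketch is self-contained:

```python
import subprocess
import sys

def answer_for(prompt):
    """Map a prompt to an answer: contains 'Cat' -> yes, 'Dog' -> no."""
    if "Cat" in prompt:
        return "yes"
    if "Dog" in prompt:
        return "no"
    raise ValueError("no rule for prompt: %r" % prompt)

# Stand-in child: prints each question on its own line and echoes the reply.
CHILD = (
    "import sys\n"
    "for q in ['questionCat', 'questionDog']:\n"
    "    print(q); sys.stdout.flush()\n"
    "    print('got ' + sys.stdin.readline().strip()); sys.stdout.flush()\n"
)

proc = subprocess.Popen([sys.executable, "-c", CHILD], text=True,
                        stdin=subprocess.PIPE, stdout=subprocess.PIPE)
for _ in range(2):
    prompt = proc.stdout.readline().strip()      # read the question
    proc.stdin.write(answer_for(prompt) + "\n")  # send the matching answer
    proc.stdin.flush()
    print(prompt, "->", proc.stdout.readline().strip())
proc.stdin.close()
proc.wait()
```

This only works because the stand-in's prompts end with a newline; prompts that sit on an unterminated line are exactly where pexpect earns its keep.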
Although I have to say that it's a bit unusual to use pexpect with a bash script if you have the ability to edit it. It would be simpler to edit the script so that it no longer prompts:
#!/bin/bash
echo -n "Continue? [Y/N]: "
answer=y
if [ "$answer" != "${answer#[Yy]}" ]; then
    echo "continuing.."
else
    echo "exiting.."
fi
Then you're free to just use subprocess to execute it.
import os
import subprocess
subprocess.call(os.path.join(os.getcwd(),"x.sh"))
Or if you want the output as a variable:
import os
import subprocess
p = subprocess.Popen(os.path.join(os.getcwd(),"x.sh"), stdout=subprocess.PIPE)
out, error = p.communicate()
print(out)
I realise this might not be possible for you but it's worth noting.
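On recent Pythons (3.7+), subprocess.run is the simpler spelling of the Popen/communicate pattern above. A sketch, using an inline shell command as a stand-in for x.sh:

```python
import subprocess

# Stand-in for the non-prompting x.sh: just prints "continuing.."
result = subprocess.run(["sh", "-c", "printf continuing.."],
                        capture_output=True, text=True)
print(result.stdout)  # the captured output, already a str
```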
import sys
stdin_input = sys.stdin.read()
print(f"Info loaded from stdin: {stdin_input}")
user_input = input("User input goes here: ")
Error received:
C:\>echo "hello" | python winput.py
Info loaded from stdin: "hello"
User input goes here: Traceback (most recent call last):
File "C:\winput.py", line 6, in <module>
user_input = input("User input goes here: ")
EOFError: EOF when reading a line
I've recently learned this is because sys.stdin is being used for FIFO, which leaves it closed after reading.
I can make it work on CentOS by adding sys.stdin = open("/dev/tty") after stdin_input = sys.stdin.read() based on this question, but this doesn't work for Windows.
Preferably rather than identifying the OS and assigning a new value to sys.stdin accordingly, I'd rather approach it dynamically. Is there a way to identify what the equivalent of /dev/tty would be in every case, without necessarily having to know /dev/tty or the equivalent is for the specific OS?
Edit:
The reason for the sys.stdin.read() is to take in JSON input piped from another application. I also have an option to read the JSON data from a file, but being able to use the piped data is very convenient. Once the data is received, I'd like to get user input separately.
I'm currently working around my problem with the following:
if os.name == "posix":
    sys.stdin = open("/dev/tty")
elif os.name == "nt":
    sys.stdin = open("con")
else:
    raise RuntimeError(
        f"Error trying to assign to sys.stdin due to unknown os {os.name}"
    )
This may very well work in all cases but it would still be preferable to know what /dev/tty or con or whatever the equivalent is for the OS is dynamically. If it's not possible and my workaround is the best solution, I'm okay with that.
Since you're using Bash, you can avoid this problem by using process substitution, which is like a pipe, but delivered via a temporary filename argument instead of via stdin.
That would look like:
winput.py <(another-application)
Then in your Python script, receive the argument and handle it accordingly:
import json
import sys
with open(sys.argv[1]) as f:
d = json.load(f)
print(d)
user_input = input("User input goes here: ")
print('User input:', user_input)
(sys.argv is just used for demo. In a real script I'd use argparse.)
Example run:
$ tmp.py <(echo '{"someKey": "someValue"}')
{'someKey': 'someValue'}
User input goes here: 6
User input: 6
The other massive advantage of this is that it works seamlessly with actual filenames, for example:
$ cat test.json
{"foo": "bar"}
$ tmp.py test.json
{'foo': 'bar'}
User input goes here: x
User input: x
So your real issue is that sys.stdin can be only one of two things:
Connected to the typed input from the terminal
Connected to some file-like object that is not the terminal (actual file, pipe, whatever)
It doesn't matter that you consumed all of sys.stdin by doing sys.stdin.read(); once sys.stdin was redirected to some file-system object, you lost the ability to read from the terminal via sys.stdin.
In practice, I'd strongly suggest not trying to do this. Use argparse and accept whatever you were considering reading via input() on the command line instead, avoiding the whole problem. I basically never see real production code (other than REPLs of some sort) dynamically interacting with the user via stdin/stdout; outside REPL cases, sys.stdin is almost always either unused or piped from a file or program, because writing clean user-interaction code like this is a pain, and it's a pain for the user to have to type their responses without making mistakes. The input that might come from a file or stdin can be handled by passing type=argparse.FileType() to the add_argument call in question; the user can then pass either a file name or - (where - means "read from stdin"), leaving your code looking like:
parser = argparse.ArgumentParser('Program description here')
parser.add_argument('inputfile', type=argparse.FileType(), help='Description here; pass "-" to read from stdin')
parser.add_argument('-c', '--cmd', action='append', help='User commands to execute after processing input file')
args = parser.parse_args()

with args.inputfile as f:
    data = f.read()

for cmd in args.cmd:
    pass  # Do stuff based on cmd
The user can then do:
otherprogram_that_generates_data | myprogram.py - -c 'command 1' -c 'command 2'
or:
myprogram.py file_containing_data -c 'command 1' -c 'command 2'
or (on shells with process substitution, like bash, as an alternative to the first use case):
myprogram.py <(otherprogram_that_generates_data) -c 'command 1' -c 'command 2'
and it works either way.
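A self-contained sketch of the FileType '-' behavior, feeding parse_args a list instead of the real command line and substituting an in-memory buffer for stdin so the example is reproducible:

```python
import argparse
import io
import sys

parser = argparse.ArgumentParser(description='FileType demo')
parser.add_argument('inputfile', type=argparse.FileType(),
                    help='pass "-" to read from stdin')

# Simulate `prog -` with stdin coming from an in-memory buffer.
sys.stdin = io.StringIO('{"someKey": "someValue"}\n')
args = parser.parse_args(['-'])
data = args.inputfile.read()
print(data.strip())
```

argparse.FileType() in read mode returns sys.stdin when handed "-", which is what makes the file-or-stdin duality work with no extra code.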
If you must do this, your existing solution is really the only reasonable one, but you can make it a little cleaner by factoring it out and making only the path dynamic, not the whole code path:
import contextlib
import os
import sys

TTYNAMES = {"posix": "/dev/tty", "nt": "con"}

@contextlib.contextmanager
def stdin_from_terminal():
    try:
        ttyname = TTYNAMES[os.name]
    except KeyError:
        raise OSError(f"{os.name} does not support manually reading from the terminal")
    with open(ttyname) as tty:
        sys.stdin, oldstdin = tty, sys.stdin
        try:
            yield
        finally:
            sys.stdin = oldstdin
This will probably die with an OSError subclass on the open call if run without a connected terminal, e.g. when launched with pythonw on Windows (another reason not to use this design), or launched in non-terminal ways on UNIX-likes, but that's better than silently misbehaving.
You'd use it with just:
with stdin_from_terminal():
    user_input = input("User input goes here: ")
and it would restore the original sys.stdin automatically when the with block is exited.
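For the POSIX half of that mapping there is actually a dynamic answer: os.ctermid() returns the path of the controlling terminal (typically /dev/tty). The stdlib has no Windows counterpart, so a small fallback is still needed; a sketch:

```python
import os

def terminal_path():
    """Best-effort path of the controlling terminal on this OS."""
    if hasattr(os, 'ctermid'):  # POSIX only; usually returns '/dev/tty'
        return os.ctermid()
    if os.name == 'nt':         # Windows console device name
        return 'con'
    raise OSError('no known terminal device for os ' + os.name)

print(terminal_path())
```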
I have a strange problem with auto-running my Python application. As everybody knows, to run this kind of app I need the command:
python app_script.py
Now I try to run this app from crontab, using one simple script to ensure the app isn't already running. If it isn't, the script starts it.
#!/bin/bash
pidof appstart.py >/dev/null
if [[ $? -ne 0 ]] ; then
    python /path_to_my_app/appstart.py &
fi
The bad side of this approach is that pidof only matches the first word of the command column in the ps aux table, which in this example will always be python, skipping the script name (appstart.py). So when I run another Python-based app, the check will fail... Does anybody know how to check this in a proper way?
This might be a question better suited for Unix & Linux Stack Exchange.
However, it's common to use pgrep instead of pidof for applications like yours:
$ pidof appstart.py # nope
$ pidof python # works, but it can be different python
16795
$ pgrep appstart.py # nope, it would match just 'python', too
$ pgrep -f appstart.py # -f is for 'full', it searches the whole commandline (so it finds appstart.py)
16795
From man pgrep: The pattern is normally only matched against the process name. When -f is set, the full command line is used.
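If you'd rather do the same full-commandline match from Python itself, here is a Linux-only sketch that scans /proc directly (the pattern is illustrative):

```python
import os

def pids_matching(pattern):
    """Return PIDs (other than our own) whose full command line contains
    `pattern`. Linux-only: relies on /proc/<pid>/cmdline."""
    own = str(os.getpid())
    found = []
    for pid in filter(str.isdigit, os.listdir('/proc')):
        if pid == own:
            continue
        try:
            with open('/proc/%s/cmdline' % pid, 'rb') as f:
                # cmdline is NUL-separated; join with spaces for matching
                cmdline = f.read().replace(b'\0', b' ').decode(errors='replace')
        except OSError:
            continue  # process exited or is unreadable
        if pattern in cmdline:
            found.append(int(pid))
    return found

print(pids_matching('appstart.py'))
```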
Maybe you should instead check for a PID file created by your application? This will also let you track different instances of the same script if needed. Something like this:
#!/usr/bin/env python3
import os
import sys
import atexit

PID_file = "/tmp/app_script.pid"
PID = str(os.getpid())

if os.path.isfile(PID_file):
    sys.exit('{} already exists!'.format(PID_file))
open(PID_file, 'w').write(PID)

def cleanup():
    os.remove(PID_file)

atexit.register(cleanup)

# DO YOUR STUFF HERE
After that you'll be able to check if file exists, and if it exists you'll be able to retrieve PID of your script.
[ -f /tmp/app_script.pid ] && ps up $(cat /tmp/app_script.pid) >/dev/null && echo "Started" || echo "Not Started"
You could also do the whole thing in Python, without the bash script around it, by creating a pidfile somewhere writable.
import os
import sys

pidpath = os.path.abspath('/tmp/myapp.pid')

def myfunc():
    """
    Your logic goes here
    """
    return

if __name__ == '__main__':
    # check for existing pidfile and fail if true
    if os.path.exists(pidpath):
        print('Script already running.')
        sys.exit(1)
    else:
        # otherwise write current pid to file
        with open(pidpath, 'w') as _f:
            _f.write(str(os.getpid()))
        try:
            # call your function
            myfunc()
        except Exception as e:
            print('Exception: {}'.format(e))
            sys.exit(1)
        finally:
            # clean up after yourself whether or not something broke
            # (the finally block runs even after sys.exit above)
            os.remove(pidpath)
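Both pidfile snippets have a small check-then-create race: two instances starting at the same moment can both pass the existence check. os.open with O_CREAT | O_EXCL makes the creation atomic. A sketch, with a hypothetical pidfile path:

```python
import os

pidpath = '/tmp/myapp_demo.pid'  # hypothetical path for this sketch

def acquire_pidfile(path):
    """Atomically create `path` holding our PID; False if it already exists."""
    try:
        # O_EXCL makes open fail if the file exists, with no race window
        fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    except FileExistsError:
        return False
    with os.fdopen(fd, 'w') as f:
        f.write(str(os.getpid()))
    return True

if acquire_pidfile(pidpath):
    try:
        pass  # your logic goes here
    finally:
        os.remove(pidpath)
else:
    print('Script already running.')
```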
I'm trying to understand some basic shell scripting. I have a script.sh, did the chmod, and was messing around with some pretty easy print statements by executing ./script.sh
Now how could I launch the shell displaying a prompt that includes the current working directory, and said prompt should accept a line of input and display a prompt each time?
To sum up the tools I understand so far: os.getcwd(), sys.stdin.readlines(), subprocess.Popen(['ls'], stdout=subprocess.PIPE)
Here is what I have so far.
#!/usr/bin/env python
import os
import sys
import subprocess

proc = subprocess.Popen(['ls'], stdout=subprocess.PIPE)
cwd = os.getcwd()

while True:
    user_input = raw_input(str(cwd) + " >> ")
    if user_input == 'ls':
        print proc
    if not foo:
        sys.exit()
So this seems to work, at least the command-prompt part; the exiting doesn't.
If you want to prompt the user, then you probably don't want to be using sys.stdin.readlines() as there isn't really an easy way to put your prompt in after each line. Instead, use input() (or raw_input() on Python 2).
user_input = input("My prompt text> ")
Then the user's input will be stored in a string in user_input. Put that in a while loop, and you can have it repeatedly display, like a regular command prompt.
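Putting that together with the question's ls case: the sketch below prints the command's captured output instead of the Popen object, and uses an explicit exit word so the loop can end. Command lines are fed from a list here so the example is reproducible; swap in input() for interactive use:

```python
import os
import subprocess

def repl(lines):
    """Tiny prompt loop over an iterable of command lines."""
    cwd = os.getcwd()
    transcript = []
    for line in lines:
        transcript.append(cwd + " >> " + line)
        if line == 'exit':
            break
        elif line == 'ls':
            # run ls fresh each time and capture its output as text
            out = subprocess.run(['ls'], capture_output=True, text=True).stdout
            transcript.append(out)
        else:
            transcript.append('unknown command: ' + line)
    return transcript

for entry in repl(['ls', 'frobnicate', 'exit']):
    print(entry)
```

Note the original code ran ls once, up front; running it inside the loop means the listing is current each time the user asks.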
I want to fork a new process in a script, but how to interactive with the subprocess in a new terminal?
For example:
#python
import os

a = 'a'
b = 'b'
if os.fork():
    print a
    a = input('a?')
    print 'a:', a
else:
    print b
    b = input('b?')
    print 'b:', b
The script should print a/b and ask for a new value, but these two processes share the same terminal, which makes the output confusing.
How can I open a new terminal and let the subprocess run in the new terminal?
I've thought about to use subprocess.Popen('gnome-terminal',shell=True) and communicate with the new terminal. But gnome-terminal will open bash on default, how can i open a terminal only for input and output?
It's probably bad practice to open a new terminal like that from a command-line application, but gnome-terminal has an -e flag; e.g. gnome-terminal -e python will open a Python interpreter.
I finally implemented it in a (maybe ugly) way.
Inspired by https://unix.stackexchange.com/questions/256480/how-do-i-run-a-command-in-a-new-terminal-window-in-the-same-process-as-the-origi
I've solved most of the problem:
#python
import sys, os, subprocess

a = 'a'
b = 'b'
if os.fork():
    print a
    a = raw_input('a?')
    print 'a:', a
else:
    p = subprocess.Popen("xterm -e 'tty >&3; exec sleep 99999999' 3>&1",
                         shell=True, stdout=subprocess.PIPE, stdin=subprocess.PIPE)
    tty_path = p.stdout.readline().strip()
    tty = open(tty_path, 'r+')
    sys.stdout = tty
    sys.stderr = tty
    sys.stdin = tty
    print b
    b = raw_input('b?')
    print 'b:', b
The only problem is that the prompt 'b?' still shows in the former terminal. So the new question is: where does the prompt belong?
That aside, another way to work around the prompt problem:

_r_i = raw_input
def raw_input(prompt):
    print prompt,
    return _r_i('')
I'm a little strange and, mad... I know...
I am executing a script which prompts for 2 values one after the other. I want to pass the values from the script itself as I want to automate this.
Using the subprocess module, I can easily pass one value:
suppression_output = subprocess.Popen(cmd_suppression, shell=True,
                                      stdin=subprocess.PIPE,
                                      stdout=subprocess.PIPE).communicate('y')[0]
But passing the 2nd value does not seem to work. If I do something like this:
suppression_output = subprocess.Popen(cmd_suppression, shell=True,
                                      stdin=subprocess.PIPE,
                                      stdout=subprocess.PIPE).communicate('y/r/npassword')[0]
You should use \n for the new line instead of /r/n -> 'y\npassword'
As your question is not clear, I assumed you have a program which behaves somewhat like this Python script; let's call it script1.py:
import getpass
import sys

firstanswer = raw_input("Do you wish to continue?")
if firstanswer != "y":
    sys.exit(0)  # leave program
secondanswer = raw_input("Enter your secret password:\n")
#secondanswer = getpass.getpass("Enter your secret password:\n")
print "Password was entered successfully"
#do useful stuff here...
print "I should not print it out, but what the heck: " + secondanswer
It asks for confirmation ("y"), then wants you to enter a password. After that it does "something useful", finally prints the password, and then exits.
Now, to get the first program run by a second script, script2.py, it has to look somewhat like this:

import subprocess

cmd_suppression = "python ./script1.py"
process = subprocess.Popen(cmd_suppression, shell=True,
                           stdin=subprocess.PIPE, stdout=subprocess.PIPE)
response = process.communicate("y\npassword")
print response[0]
The output of script2.py:
$ python ./script2.py
Do you wish to continue?Enter your secret password:
Password was entered successfully
I should not print it out, but what the heck: password
A problem will most likely appear if the program uses a special method to read the password securely, i.e. if it uses the line I commented out in script1.py:
secondanswer=getpass.getpass("Enter your secret password:\n")
This case tells you that it is probably not a good idea anyway to pass a password via a script.
Also keep in mind that calling subprocess.Popen with shell=True is generally a bad idea too. Use shell=False and provide the command as a list of arguments instead:

cmd_suppression = ["python", "./script1.py"]
process = subprocess.Popen(cmd_suppression, shell=False,
                           stdin=subprocess.PIPE, stdout=subprocess.PIPE)

This is mentioned a dozen times in the subprocess documentation.
Try os.linesep:
import os
from subprocess import Popen, PIPE
p = Popen(args, stdin=PIPE, stdout=PIPE)
output = p.communicate(os.linesep.join(['the first input', 'the 2nd']))[0]
rc = p.returncode
In Python 3.4+, you could use check_output():
import os
from subprocess import check_output
input_values = os.linesep.join(['the first input', 'the 2nd']).encode()
output = check_output(args, input=input_values)
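Here is a runnable sketch of the input= pattern, with an inline two-prompt child standing in for the real script. Note the prompts land in the captured output too, because input() writes them to stdout:

```python
import sys
from subprocess import check_output

# Inline stand-in for a script that asks two questions.
CHILD = (
    "first = input('first? ')\n"
    "second = input('second? ')\n"
    "print('got', first, 'and', second)\n"
)

answers = b"yes\nsecret\n"  # one line per prompt
out = check_output([sys.executable, "-c", CHILD], input=answers).decode()
print(out)
```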
Note: the child script might ask for a password directly from the terminal without using subprocess' stdin/stdout. In that case, you might need pexpect, or pty modules. See Q: Why not just use a pipe (popen())?
import os
from pexpect import run  # $ pip install pexpect

nl = os.linesep
output, rc = run(command, events={'nodes.*:': 'y' + nl, 'password:': 'test123' + nl},
                 withexitstatus=1)