Cannot Launch Interactive Program While Piping to Script in Python

I have a python script that needs to call the defined $EDITOR or $VISUAL. When the Python script is called alone, I am able to launch the $EDITOR without a hitch, but the moment I pipe something to the Python script, the $EDITOR is unable to launch. Right now, I am using nano, which shows
Received SIGHUP or SIGTERM
every time. It appears to be the same issue described here.
sinister:Programming [1313]$ echo "import os;os.system('nano')" > "sample.py"
sinister:Programming [1314]$ python sample.py
# nano is successfully launched here.
sinister:Programming [1315]$ echo "It dies here." | python sample.py
Received SIGHUP or SIGTERM
Buffer written to nano.save.1
EDIT: Clarification; inside the program, I am not piping to the editor. The code is as follows:
editorprocess = subprocess.Popen([editor or "vi", temppath])
editorreturncode = os.waitpid(editorprocess.pid, 0)[1]

When you pipe something to a process, the pipe is connected to that process's standard input. This means your terminal input won't be connected to the editor. Most editors also check whether their standard input is a terminal (isatty); a pipe isn't, so they'll refuse to start. In the case of nano, this appears to make it exit with the message you included:
% echo | nano
Received SIGHUP or SIGTERM
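You can observe the same check from Python itself; a minimal sketch (not part of the original question) using sys.stdin.isatty():
import sys

if sys.stdin.isatty():
    print("stdin is a terminal; an editor could run here")
else:
    print("stdin is a pipe or file; most editors will refuse to start")
Running echo hi | python check.py takes the second branch, just as nano does.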
You'll need to provide the input to your Python script in another way, such as via a file, if you want to be able to pass its standard input to a terminal-based editor.
Now that you've clarified that you don't want the Python process's stdin attached to the editor, you can modify your code as follows:
editorprocess = subprocess.Popen([editor or "vi", temppath],
                                 stdin=open('/dev/tty', 'r'))
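For context, a fuller sketch of the same fix; the temppath value and the $VISUAL/$EDITOR lookup are illustrative assumptions, not part of the original code:
import os
import subprocess

editor = os.environ.get("VISUAL") or os.environ.get("EDITOR")
temppath = "/tmp/example.txt"   # hypothetical temp file for illustration

# Give the editor the terminal as stdin even if our own stdin is a pipe.
with open("/dev/tty", "r") as tty:
    editorprocess = subprocess.Popen([editor or "vi", temppath], stdin=tty)
    editorreturncode = editorprocess.wait()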

The specific case of find -type f | vidir - is handled here:
foreach my $item (@ARGV) {
    if ($item eq "-") {
        push @dir, map { chomp; $_ } <STDIN>;
        close STDIN;
        open(STDIN, "/dev/tty") || die "reopen: $!\n";
    }
}
You can re-create this behavior in Python, as well:
#!/usr/bin/python
import os
import sys

# Close the piped stdin and replace fd 0 with the controlling terminal,
# so the editor gets real keyboard input.
sys.stdin.close()
o = os.open("/dev/tty", os.O_RDONLY)
os.dup2(o, 0)
os.system('vim')
Of course, this closes the standard input file descriptor, so if you intend to read from it again after starting the editor, you should duplicate its file descriptor before closing it.
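A sketch of that duplication, assuming you still want the piped data after the editor exits:
import os

piped_fd = os.dup(0)                   # keep the piped stdin alive on another fd
tty_fd = os.open("/dev/tty", os.O_RDONLY)
os.dup2(tty_fd, 0)                     # the editor now sees the terminal on fd 0
os.system('vim')
os.dup2(piped_fd, 0)                   # restore the pipe as stdin
remaining = os.read(piped_fd, 4096)    # hypothetical: consume leftover piped data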

Related

Bypass a command subprocess that asks for user input? (password) [duplicate]

I'm working with a piece of scientific software called Chimera. For some of the code downstream of this question, it requires that I use Python 2.7.
I want to call a process, give that process some input, read its output, give it more input based on that, etc.
I've used Popen to open the process, process.stdin.write to pass standard input, but then I've gotten stuck trying to get output while the process is still running. process.communicate() stops the process, process.stdout.readline() seems to keep me in an infinite loop.
Here's a simplified example of what I'd like to do:
Let's say I have a bash script called exampleInput.sh.
#!/bin/bash
# exampleInput.sh
# Read a number from the input
read -p 'Enter a number: ' num
# Multiply the number by 5
ans1=$( expr $num \* 5 )
# Give the user the multiplied number
echo $ans1
# Ask the user whether they want to keep going
read -p 'Based on the previous output, would you like to continue? ' doContinue
if [ $doContinue == "yes" ]
then
    echo "Okay, moving on..."
    # [...] more code here [...]
else
    exit 0
fi
Interacting with this through the command line, I'd run the script, type in "5" and then, if it returned "25", I'd type "yes" and, if not, I would type "no".
I want to run a python script where I pass exampleInput.sh "5" and, if it gives me "25" back, then I pass "yes".
So far, this is as close as I can get:
#!/home/user/miniconda3/bin/python2
# talk_with_example_input.py
import subprocess
process = subprocess.Popen(["./exampleInput.sh"],
                           stdin=subprocess.PIPE,
                           stdout=subprocess.PIPE)
process.stdin.write("5")
answer = process.communicate()[0]
if answer == "25":
    process.stdin.write("yes")
    ## I'd like to print the STDOUT here, but the process is already terminated
But that fails, of course, because after `process.communicate()`, my process isn't running anymore.
(Just in case/FYI): Actual problem
Chimera is usually a GUI-based application used to examine protein structure. If you run chimera --nogui, it'll open up a prompt and take input.
I often need to know what chimera outputs before I run my next command. For example, I will often try to generate a protein surface and, if Chimera can't generate a surface, it doesn't break--it just says so through STDOUT. So, in my python script, while I'm looping through many proteins to analyze, I need to check STDOUT to know whether to continue analysis on that protein.
In other use cases, I'll run lots of commands through Chimera to clean up a protein first, and then I'll want to run lots of separate commands to get different pieces of data, and use that data to decide whether to run other commands. I could get the data, close the subprocess, and then run another process, but that would require re-running all of those cleaning up commands each time.
Anyways, those are some of the real-world reasons why I want to be able to push STDIN to a subprocess, read the STDOUT, and still be able to push more STDIN.
Thanks for your time!
You don't need process.communicate in your example.
Simply read and write using process.stdin.write and process.stdout.read. Also make sure to send a newline, otherwise read won't return. And when you read from stdin, you also have to handle the newlines coming from echo.
Note: process.stdout.read will block until EOF.
# talk_with_example_input.py
import subprocess

process = subprocess.Popen(["./exampleInput.sh"],
                           stdin=subprocess.PIPE,
                           stdout=subprocess.PIPE)
process.stdin.write("5\n")
stdout = process.stdout.readline()
print(stdout)
if stdout == "25\n":
    process.stdin.write("yes\n")
    print(process.stdout.readline())
$ python2 test.py
25
Okay, moving on...
Update
When communicating with a program this way, you have to pay special attention to what the application is actually writing. It is best to analyze the output with a hex dump:
$ chimera --nogui 2>&1 | hexdump -C
Please note that readline [1] only reads to the next newline (\n). In your case you have to call readline at least four times to get that first block of output.
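That is, something along these lines, reusing the process object from above (four reads is an assumption based on the banner shown in the hexdump):
for _ in range(4):   # estimate: one readline per banner line
    print(process.stdout.readline())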
If you just want to read everything until the subprocess stops printing, you have to read byte by byte and implement a timeout. Sadly, neither read nor readline provides such a timeout mechanism. This is probably because the underlying read syscall [2] (Linux) does not provide one either.
On Linux we can write a single-threaded read_with_timeout() using poll / select. For an example see [3].
from select import epoll, EPOLLIN

def read_with_timeout(fd, timeout__s):
    """Reads from fd until there is no new data for at least timeout__s seconds.

    This only works on linux > 2.5.44.
    """
    buf = []
    e = epoll()
    e.register(fd, EPOLLIN)
    while True:
        ret = e.poll(timeout__s)
        if not ret or ret[0][1] != EPOLLIN:
            break
        buf.append(fd.read(1))
    return ''.join(buf)
In case you need a reliable way to read non blocking under Windows and Linux, this answer might be helpful.
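As a sketch of that portable approach (a background reader thread feeding a queue; the enqueue_output helper name and the 1.5-second timeout are illustrative, not from a specific library):
import subprocess
import threading
from Queue import Queue, Empty   # queue.Queue / queue.Empty on Python 3

def enqueue_output(stream, queue):
    # Blocking reads are fine here because they happen off the main thread.
    for line in iter(stream.readline, ''):
        queue.put(line)
    stream.close()

process = subprocess.Popen(["./prog.sh"], stdout=subprocess.PIPE)
q = Queue()
t = threading.Thread(target=enqueue_output, args=(process.stdout, q))
t.daemon = True
t.start()

try:
    line = q.get(timeout=1.5)   # give up if no output for 1.5 seconds
except Empty:
    line = None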
[1] from the python 2 docs:
readline(limit=-1)
Read and return one line from the stream. If limit is specified, at most limit bytes will be read.
The line terminator is always b'\n' for binary files; for text files, the newline argument to open() can be used to select the line terminator(s) recognized.
[2] from man 2 read:
#include <unistd.h>
ssize_t read(int fd, void *buf, size_t count);
[3] example
$ tree
.
├── prog.py
└── prog.sh
prog.sh
#!/usr/bin/env bash
for i in $(seq 3); do
    echo "${RANDOM}"
    sleep 1
done
sleep 3
echo "${RANDOM}"
prog.py
# prog.py
import subprocess
from select import epoll, EPOLLIN

def read_with_timeout(fd, timeout__s):
    """Reads from fd until there is no new data for at least timeout__s seconds.

    This only works on linux > 2.5.44.
    """
    buf = []
    e = epoll()
    e.register(fd, EPOLLIN)
    while True:
        ret = e.poll(timeout__s)
        if not ret or ret[0][1] != EPOLLIN:
            break
        buf.append(fd.read(1))
    return ''.join(buf)

process = subprocess.Popen(
    ["./prog.sh"],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE
)
print(read_with_timeout(process.stdout, 1.5))
print('-----')
print(read_with_timeout(process.stdout, 3))
$ python2 prog.py
6194
14508
11293
-----
10506

Run script to send in-game Terraria server commands

Last week I installed a Terraria 1.3.5.3 server on Ubuntu 18.04, for playing online with friends. This server should be powered on 24/7, without any GUI, only accessed via SSH on the internal LAN.
My friends asked me if there is a way for them to control the server, e.g. send a message, via the internal in-game chat, so I thought I'd use a special character ($) in front of the desired command ('$say something' or '$save', for instance) and a python program that reads the terminal output via a pipe, interprets the command and sends it back with a bash command.
I followed these instructions to install the server:
https://www.linode.com/docs/game-servers/host-a-terraria-server-on-your-linode
And configured my router to forward a dedicated port to the Terraria server.
All was working fine, but I really struggled to make python send a command via the "terrariad" bash script described in the link above.
Here is the code used to send a command, in python:
import subprocess
subprocess.Popen("terrariad save", shell=True)
This works fine, but if I try to input a string with a space:
import subprocess
subprocess.Popen("terrariad \"say something\"", shell=True)
it stops the command at the space character and outputs this on the terminal:
: say
Instead of the desired:
: say something
<Server>something
What could I do to solve this problem?
I tried so many things but I get the same result.
P.S. If I send the command manually in the SSH PuTTY terminal, it works!
Edit 1:
I abandoned the python solution; for now I'll try it with bash instead, which seems a more logical way to do this.
Edit 2:
I found that the "terrariad" script expects just one argument, but Popen is splitting my argument into two no matter which method I use, as my input string has one space char in the middle. Like this:
Expected:
terrariad "say\ something"
$1 = "say something"
But I get this of python Popen:
subprocess.Popen("terrariad \"say something\"", shell=True)
$1 = "say
$2 = something"
No matter whether I pass it as a list:
subprocess.Popen(["terrariad", "say something"])
$1 = "say
$2 = something"
or put a \ before the space char, it always splits the argument when it reaches a space char.
Edit 3:
Looking in the bash script I could understand what is going on when I send a command... Basically it uses the command "stuff", from the screen program, to send characters to the terraria screen session:
screen -S terraria -X stuff $send
$send is a printf command:
send="`printf \"$*\r\"`"
And it seems to me that if I run the bash file from Python, it has a different result than running it from the command line. How is this possible? Is this a bug or a bad implementation of the function?
Thanks!
I finally came up with a solution to this, using pipes instead of the Popen approach.
It seems to me that Popen isn't the best way to run bash scripts, as described in How to do multiple arguments with Python Popen?, the link that SiHa sent in the comments (Thanks!):
"However, using Python as a wrapper for many system commands is not really a good idea. At the very least, you should be breaking up your commands into separate Popens, so that non-zero exits can be handled adequately. In reality, this script seems like it'd be much better suited as a shell script.".
So I came up with a solution using a fifo file:
First, create a fifo to be used as a pipe, in the desired directory (for instance, /samba/terraria/config):
mkfifo cmdOutput
*/samba/terraria - this is a directory I created so I can easily edit the scripts and save and load maps to the server from another computer; it is shared with samba (https://linuxize.com/post/how-to-install-and-configure-samba-on-ubuntu-18-04/)
Then I created a python script to read the screen output and write to the pipe file (I know, there are probably other ways to do this):
import os

# Open the fifo the server reads its commands from.
outputFile = os.open("/samba/terraria/config/cmdOutput", os.O_WRONLY)
print("python script has started!")
while 1:
    line = input()          # one line of the server's console output
    print(line)
    cmdPosition = line.find("&")
    if cmdPosition != -1:   # a chat line containing the command prefix
        cmdText = line[cmdPosition + 1:]
        os.write(outputFile, bytes(cmdText + "\r\r", 'utf-8'))
        os.write(outputFile, bytes("say Command executed!!!\r\r", 'utf-8'))
Then I edited the terraria.service file to call this script, piped from the TerrariaServer, and redirected the errors to another file:
ExecStart=/usr/bin/screen -dmS terraria /bin/bash -c "/opt/terraria/TerrariaServer.bin.x86_64 -config /samba/terraria/config/serverconfig.txt < /samba/terraria/config/cmdOutput 2>/samba/terraria/config/errorLog.txt | python3 /samba/terraria/scripts/allowCommands.py"
*/samba/terraria/scripts/allowCommands.py - where my script is.
**/samba/terraria/config/errorLog.txt - saves a log of errors to a file.
Now I can send commands like 'noon' or 'dawn' so I can change the in-game time, save the world and back it up with the samba server before boss fights, do other stuff if I have some time XD, and have the terminal showing what is going on with the server.

How to execute a set of commands after sudo using a python script

I am trying to automate a deployment process using python. During deployment I do "dzdo su - sysid" first and then perform the deployment process. But I am not able to handle this part in python. I have done a similar thing in shell, where I used the following piece of code,
#!/bin/bash
psh su - sysid << EOF
. /users/home/sysid/.bashrc
./deployment.sh
EOF
this handles the execution of deployment.sh very well. It does the sudo and then executes the script with the sysid id.
I am trying to do similar thing using python but I am not able to find any alternative to << EOF in python.
I am using subprocess.Popen to execute the dzdo part. It does the dzdo, but when I try to execute the next command, e.g. "ls -l", that command does not get executed with the sysid id; instead, I have to exit from the sysid session, and as soon as I exit, it executes "ls -l" in my home directory, which is of no use. Can someone please help me with this?
And one more thing: in this case I am not calling any deployment.sh, but I will call commands like cp, rm, mkdir, etc.
The text between << EOF and EOF in your shell script example will be written to the standard input of the psh process. So you have to redirect the standard input of your Popen instance and write the data either directly into the stdin file of your instance or use the communicate() method:
#!/usr/bin/env python
# coding: utf8
from __future__ import absolute_import, division, print_function
from subprocess import Popen, PIPE

SHELL_COMMANDS = r'''\
. /users/home/sysid/.bashrc
./deployment.sh
'''

def main():
    process = Popen(['psh', 'su', '-', 'sysid'], stdin=PIPE)
    process.communicate(SHELL_COMMANDS)

if __name__ == '__main__':
    main()
If you need the process' standard output and/or standard error, then you need to pipe those too and work with the return value of the communicate() call.
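A sketch of that variant, reusing the SHELL_COMMANDS string from the example above:
from subprocess import Popen, PIPE

process = Popen(['psh', 'su', '-', 'sysid'],
                stdin=PIPE, stdout=PIPE, stderr=PIPE)
stdout, stderr = process.communicate(SHELL_COMMANDS)
print(stdout)
print(stderr)
print(process.returncode)   # non-zero exit indicates a failure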

Signals are not always seen by all children in a process group

I have a problem with the way signals are propagated within a process group. Here is my situation and an explanation of the problem:
I have an application that is launched by a shell script (with a su). This shell script is itself launched by a python application using subprocess.Popen.
I call os.setpgrp as a preexec_fn and have verified using ps that the bash script, the su command and the final application all have the same pgid.
Now when I send signal USR1 to the bash script (the leader of the process group), sometimes the application sees this signal, and sometimes not. I can't figure out why I have this random behavior (the signal is seen by the app about 50% of the time).
Here is the example code I am testing against:
Python launcher :
#!/usr/bin/env python
import os
import subprocess

p = subprocess.Popen( ["path/to/bash/script"], stdout=…, stderr=…, preexec_fn=os.setpgrp )
# loop to write stdout and stderr of the subprocesses to a file
# note that I use fcntl.fcntl(p.stdXXX.fileno(), fcntl.F_SETFL, os.O_NONBLOCK)
p.wait()
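For reference, this is how the python application side can send the signal to the whole group (a sketch; the original code for this step isn't shown in the question):
import os
import signal

# p is the Popen object from the launcher above; because of
# preexec_fn=os.setpgrp, p.pid is also the process-group id, so this
# delivers SIGUSR1 to bash, su and the ruby app at once.
os.killpg(p.pid, signal.SIGUSR1)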
Bash script :
#!/bin/bash
set -e
set -u
cd /usr/local/share/gios/exchange-manager
CONF=/etc/exchange-manager.conf
[ -f $CONF ] && . $CONF
su exchange-manager -p -c "ruby /path/to/ruby/app"
Ruby application :
#!/usr/bin/env ruby
Signal.trap("USR1") do
puts "Received SIGUSR1"
exit
end
while true do
sleep 1
end
So when I send the signal to the bash wrapper (from a terminal or from the python application), sometimes the ruby application sees the signal and sometimes not. I don't think it's a logging issue, as I have tried replacing the puts with a method that writes directly to a different file.
Do you guys have any idea what could be the root cause of my problem and how to fix it ?
Your signal handler is doing too much. If you exit from within the signal handler, you are not sure that your buffers are properly flushed; in other words, you may not be exiting your program gracefully. Also be careful of new signals being received while the program is already inside a signal handler.
Try to modify your Ruby source to exit the program from the main loop as soon as an "exit" flag is set, and don't exit from the signal handler itself.
Your Ruby application becomes:
#!/usr/bin/env ruby
$done = false

Signal.trap("USR1") do
  $done = true
end

until $done do
  sleep 1
end
puts "** graceful exit"
Which should be much safer.
For real programs, you may consider using a Mutex to protect your flag variable.
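For comparison, the same flag pattern in Python (a sketch; in CPython, handlers registered with signal.signal run on the main thread between bytecode instructions, so a plain flag works here):
import signal
import time

done = False

def handle_usr1(signum, frame):
    global done
    done = True   # only set a flag; the real work happens in the main loop

signal.signal(signal.SIGUSR1, handle_usr1)

while not done:
    time.sleep(1)
print("** graceful exit")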

Run a python script in perl

I have two scripts, a python script and a perl script.
How can I make the perl script run the python script first and then run itself?
Something like this should work:
system("python", "/my/script.py") == 0 or die "Python script returned error $?";
If you need to capture the output of the Python script:
open(my $py, "-|", "python2 /my/script.py") or die "Cannot run Python script: $!";
while (<$py>) {
    # do something with the input
}
close($py);
This works similarly with the "|-" mode if you want to provide input to the subprocess instead.
The best way is to execute the python script at the system level using IPC::Open3. This will keep things safer and more readable in your code than using system().
You can easily execute system commands, and read from and write to them, with IPC::Open3 like so:
use strict;
use IPC::Open3 ();
use IO::Handle (); # not required but good for portability

# Placeholders; under "use strict" these must be declared somewhere.
my $python_binary    = 'python';
my $python_file_path = '/my/script.py';

my $write_handle = IO::Handle->new();
my $read_handle  = IO::Handle->new();
my $pid = IPC::Open3::open3($write_handle, $read_handle, '>&STDERR',
                            $python_binary . ' ' . $python_file_path);
if (!$pid) { function_that_records_errors("Error"); }

# write to the python process first...
print $write_handle 'Something to write to python process';
close($write_handle); # signal EOF so the slurping read below can finish

# ...then read multi-line data from the process:
local $/;
my $read_data = readline($read_handle);

waitpid($pid, 0); # wait for child process to close before continuing
This will create a forked process to run the python code. This means that should the python code fail, you can recover and continue with your program.
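For completeness, the Python side of such a pipe is ordinary stdin/stdout; a minimal sketch of a hypothetical script.py the Perl code above could talk to:
#!/usr/bin/env python2
import sys

for line in sys.stdin:
    # hypothetical processing: echo the line back upper-cased
    sys.stdout.write(line.upper())
    sys.stdout.flush()   # flush so the Perl side sees the reply promptly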
It may be simpler to run both scripts from a shell script, and use pipes (assuming that you're in a Unix environment) if you need to pass the results from one program to the other.
