Running a python script via " | bash " with sys.stdin.readline() - python

I am doing something that requires running a script via curl xxx | bash.
I created a Python script using sys.stdin.readline() to test this:
test.py (python3):
import sys

def read_input():
    input = sys.stdin.readline().rstrip()
    print(input)

read_input()
It works when run directly with python3 test.py (I type test, it echoes test):
test
test
But if I use echo 'python3 test.py' | bash, it does not stop to let me type anything.
Any tips?

When you pipe with |, you are redirecting the output from the first command into the input of the second. This means standard input of the second command doesn't connect to the terminal, and therefore cannot read keyboard input. The form curl xxx | bash therefore only functions for non-interactive scripts. This is in no way specific to Python.
You could in principle work around this by saving the input descriptor under another number, but it does get quite complex:
$ ( echo 'exec <&3 3<&- ; echo script starts ; read hello ; echo you entered $hello ; exit' | bash ) 3<&0
script starts
something
you entered something
Here I used () to create a subshell, in which stdin is duplicated on file descriptor 3 using 3<&0, and the script generated in the pipeline both renames that back as stdin with exec <&3 3<&- and exits to prevent further commands from being read from the restored stdin. This has side effects, such as descriptor 3 being open for the echo command.
Since the main reason to use curl address | bash in the first place is to keep the command simple, this is not what you're after. Besides, the pipe prevents you from handling anything that goes wrong during the download; your script could be interrupted anywhere. A traditional download-then-run isn't much worse:
curl -O http://somewhere/somefile.py && python somefile.py
In comparison, this saves somefile.py to your filesystem. There are downsides to that, like requiring a writable filesystem and replacing that particular filename. On the upside, if anything goes wrong it stops there and doesn't run a corrupted script, due to &&.
One final possibility if the script you're downloading fits within the command line might be to put it there rather than in a pipe:
python -c "$(curl $url)"
This has the same weakness regarding interrupted downloads, and additionally places the script contents on a command line, which is generally public information (consider ps ax output). But if you're just downloading the script with curl, the information on how to get it was likely public too. As this doesn't redirect stdin, it might be the answer to your immediate question.
In general, I recommend not running any scripts straight off the internet without verification, which is exactly what this curl something | bash command line does. It's way too vulnerable to hijacking, as there's no verification involved at any step. It's better to use a package repository which checks signatures, such as apt.
Another method to get access to a terminal on Linux is via the device /dev/tty. This method is used for instance when ssh asks for a password. It might also be possible to reopen stdout or stderr for input, as in ( exec < /dev/null ; read foo <&2 ; echo $foo ).
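In Python specifically, the same workaround is available: if stdin is occupied by the pipe carrying the script, the program can still prompt the user by opening /dev/tty directly. Here is a minimal sketch, assuming a controlling terminal actually exists (it won't in truly headless environments):
import sys

def read_from_terminal(prompt="Enter something: "):
    # stdin is the pipe feeding the script to bash, so bypass it and
    # talk to the controlling terminal instead.
    with open("/dev/tty", "r") as tty_in, open("/dev/tty", "w") as tty_out:
        tty_out.write(prompt)
        tty_out.flush()
        return tty_in.readline().rstrip()

if __name__ == "__main__":
    print("you entered", read_from_terminal())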


Launching subprocesses on resource limited machine

Edit:
The original intent of this question was to find a way to launch an interactive ssh session via a Python script. I'd tried subprocess.call() before and had gotten a Killed response before anything was output onto the terminal. I just assumed this was an issue/limitation with the subprocess module rather than an issue somewhere else. This was found not to be the case when I ran the script on a non-resource-limited machine, where it worked fine.
This then turned the question into: How can I run an interactive ssh session with whatever resource limitations were preventing it from running?
Shoutout to Charles Duffy, who was a huge help in diagnosing all of this.
Below is the original question:
Background:
So I have a script that is currently written in bash. It parses the output of a few console functions and then opens up an ssh session based on those parsed outputs.
It currently works fine, but I'd like to expand its capabilities a bit by adding some flag arguments to it. I've worked with argparse before and thoroughly enjoyed it. I tried to do some flag work in bash, and let's just say it leaves much to be desired.
The Actual Question:
Is it possible to have Python do things in a console and then put the user in that console?
Something like using subprocess to run a series of commands in the currently viewed console? This is in contrast to how subprocess normally runs, where it runs the commands and then shuts the intermediate console down.
Specific Example because I'm not sure if what I'm describing makes sense:
So here's a basic run down of the functionality I was wanting:
Run a python script
Have that script run some console command and parse the output
Run the following command:
ssh -t $correctnode "cd /local_scratch/pbs.$jobid; bash -l"
This command will ssh to the $correctnode, change directory, and then leave a bash window in that node open.
I already know how to do parts 1 and 2. It's part three that I can't figure out. Any help would be appreciated.
Edit: Unlike this question, I am not simply trying to run a command. I'm trying to display a shell that is created by a command. Specifically, I want to display a bash shell created through an ssh command.
Context For Readers
The OP is operating on a very resource-constrained (particularly, it appears, process-constrained) jumphost box, where starting an ssh process as a subprocess of Python goes over a relevant limit (on the number of processes, perhaps?).
Approach A: Replacing The Python Interpreter With Your Interactive Process
Using the exec*() family of system calls causes your original process to no longer be in memory (unlike the fork()+exec*() combination used to start a subprocess while leaving the parent process running), so it doesn't count against the account's limits.
import argparse
import os

try:
    from shlex import quote  # Python 3
except ImportError:
    from pipes import quote  # Python 2

parser = argparse.ArgumentParser()
parser.add_argument('node')
parser.add_argument('jobid')
args = parser.parse_args()

# Quote the jobid so it cannot inject commands into the remote shell.
remote_cmd_str = 'cd /local_scratch/pbs.%s && exec bash -i' % (quote(args.jobid),)
local_cmd = [
    '/usr/bin/env', 'ssh', '-tt', args.node, remote_cmd_str
]

# execv() replaces the Python interpreter in place; no extra process is forked.
os.execv("/usr/bin/env", local_cmd)
Approach B: Generating Shell Commands From Python
If we use Python to generate a shell command, the shell can invoke that command after the Python process has exited, so we stay under the externally enforced process limit.
First, a slightly more robust approach at generating eval-able output:
import argparse

try:
    from shlex import quote  # Python 3
except ImportError:
    from pipes import quote  # Python 2

parser = argparse.ArgumentParser()
parser.add_argument('node')
parser.add_argument('jobid')
args = parser.parse_args()

# Inner quoting layer: for the remote shell started by ssh.
remoteCmd = ['cd', '/local_scratch/pbs.%s' % (args.jobid,)]
remoteCmdStr = ' '.join(quote(x) for x in remoteCmd) + ' && bash -l'

# Outer quoting layer: for the local shell that evals this output.
cmd = ['ssh', '-t', args.node, remoteCmdStr]
print(' '.join(quote(x) for x in cmd))
To run this from a shell, if the above is named as genSshCmd:
#!/bin/sh
eval "$(genSshCmd "$#")"
Note that there are two separate layers of quoting here: One for the local shell running eval, and the second for the remote shell started by SSH. This is critical -- you don't want a jobid of $(rm -rf ~) to actually invoke rm.
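For illustration, here's what quote() does with a hostile jobid; the value below is made up purely for demonstration:
from shlex import quote

malicious_jobid = '$(rm -rf ~)'
# quote() wraps the value in single quotes, so the remote shell sees
# literal text instead of a command substitution.
print('cd /local_scratch/pbs.%s && exec bash -i' % quote(malicious_jobid))
# prints: cd /local_scratch/pbs.'$(rm -rf ~)' && exec bash -i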
This is in no way a real answer, just an illustration of my comment.
Let's say you have a Python script, test.py:
import argparse

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('myarg', nargs="*")
    args = parser.parse_args()
    print("echo Hello world! My arguments are: " + " ".join(args.myarg))
So, you create a bash wrapper around it, test.sh:
set -e
$(python test.py $*)
and this is what you get:
$ bash test.sh
Hello world! My arguments are:
$ bash test.sh one two
Hello world! My arguments are: one two
What is going on here:
The Python script does not execute commands. Instead, it outputs the commands the bash script will run (echo in this example). In your case, the last command would be ssh blabla.
bash executes the output of the Python script (the $(...) part), passing on all its arguments (the $* part).
You can use argparse inside the Python script; if anything is wrong with the arguments, the error message is written to stderr and is not executed by bash, and the bash script stops because of the set -e flag.

Python: alternate for stdin() when running headless?

I am passing data from hcitool and hcidump, via sed and awk then piped into python, which then reads from stdin. This works well when run from the command line. It works equally well when put into a shell program and called from the command line.
However, when I call that shell program via cron on startup and run headless, the python program executes, but nothing ever flows from sed | awk into python.
I have read thin wisps of information that stdin may not flow through when running headless, but haven't found anything concrete.
What am I missing?
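For context, a rough sketch of the consuming end of such a pipeline; the producing commands are the asker's, and the consumer below is only an assumption about what the Python side roughly looks like:
import sys

# Fed by something like: hcitool ... | hcidump ... | sed ... | awk ... | python3 consumer.py
for line in sys.stdin:
    line = line.strip()
    if line:
        print("received:", line)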

Add a delay while curl is downloading python script and then pipe to execute script?

I just created a rickroll prank to play on friends and family. I want to be able to download the file from github using a curl command, which works. My issue is that when I use a pipe to try to execute the script, python runs as soon as curl starts, before the file has finished downloading.
This is the command I am trying to run:
curl -L -O https://raw.githubusercontent.com/krish-penumarty/RickRollPrank/master/rickroll.py | python rickroll.py
I have tried to run it using the sleep command as well, but haven't had any luck.
(curl -L -O https://raw.githubusercontent.com/krish-penumarty/RickRollPrank/master/rickroll.py; sleep 10) | python rickroll.py
Expanding on my comment.
There are several ways to chain commands using most shell languages (here I assume sh / bash dialect).
The most basic: ; will just run each command sequentially, starting the next one as the previous one completes.
Conditional chaining, && works as ; but aborts the chain as soon as a command returns an error (any non-0 return code).
Conditional chaining, || works as && but aborts the chain as soon as a command succeeds (returns 0).
What you tried to do here is neither of those, it's piping. Triggered by |, it causes commands on its sides to be run at once, with the standard output of the left-hand one being fed into the standard input of the right-hand one.
Your second example doesn't work either, because it causes two sequences to be run in parallel:
First sequence is the curl, followed by a sleep once it finishes.
Second sequence is the python command, run simultaneously with anything written by the first sequence redirected as its input.
So to fix it, use command1 && command2: curl -L -O https://raw.githubusercontent.com/krish-penumarty/RickRollPrank/master/rickroll.py && python rickroll.py will run curl, wait for it to complete, and only run python if curl succeeded.
And again, you can use your example to show how harmful it can be to run commands one doesn't fully understand. Have your script write "All your files have been deleted" in red; it can be good for educating people on that subject.

In Python: how can I get the exit status of the previous process run from Bash (i.e. "$?")?

I have a Python script that should report success or failure of the previous command. Currently, I'm doing
command && myscript "Success" || myscript "Failed"
What I would like instead is to be able to run the commands unlinked, as in:
command; myscript
And have myscript retrieve $?, i.e. the exit status. I know I could run:
command; myscript $?
But what I'm really looking for is a Python way to retrieve $? from Python.
How can I do it?
Since this is a strange request, let me clarify where it comes from. I have a Python script that uses the pushover API to send a notification to my phone.
When I run a long process, I run it as process && notify "success" || notify "failure". But sometimes I forget to do this and just run the process. At this point, I'd like to type "notify" on the command line while the process is still running, and have it pick up the exit status and notify me once it finishes.
Of course I could also implement the pushover API call in bash, but now it's become a question of figuring out how to do it in Python.
This may not be possible, because of how a script (of any language, not just Python) gets executed. Try this: create a shell script called retcode.sh with the following contents:
#!/bin/bash
echo $?
Make it executable, then try the following at a shell prompt:
foo # Or any other non-existent command
echo $? # prints 127
foo
./retcode.sh # prints 0
I'd have to double-check this, but it seems that all scripts, not just Python scripts, are run in a separate process that doesn't have access to the exit code of the previous command run by the "parent" shell process. Which may explain why Python doesn't (as far as I can tell) give you a way to retrieve the exit code of the previous command like bash's $? — because it would always be 0, no matter what.
I believe your approach of doing command; myscript $? is the best way to achieve what you're trying to do: let the Python script know about the exit code of the previous step.
Update: After seeing your clarification of what you're trying to do, I think your best bet will be to modify your notify script just a little, so that it can take an option like -s or --status (using argparse to make parsing options easier, of course) and send a message based on the status code ("success" if the code was 0, "failure NN" if it was anything else, where NN is the actual code). Then when you type command without your little && notify "success" || notify "failure" trick, while it's still running you can type notify -s $? and get the notification you're looking for, since that will be the next thing that your shell runs after command returns.
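A minimal sketch of that suggestion, assuming a hypothetical notify() helper that wraps the pushover API call (not shown here):
import argparse

def notify(message):
    # Hypothetical stand-in for the actual pushover API call.
    print("NOTIFY:", message)

parser = argparse.ArgumentParser()
parser.add_argument('-s', '--status', type=int, required=True,
                    help='exit code of the previous command, i.e. $?')
args = parser.parse_args()

notify('success' if args.status == 0 else 'failure %d' % args.status)
Invoked as notify -s $?, it reports on whatever the shell ran last.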
false; export ret=$?; ./myscript2.py
myscript2.py:
#!/usr/bin/python
import os
print(os.environ['ret'])
Output:
1
It is clearly not possible: the exit value of a process is only accessible to its parent, and no shells I know offer an API to allow the next process to retrieve it.
IMHO, what is closer to your need would be:
process
myscript $?
That way, you can do it even if you started you process without thinking about notification.
You could also make the script able to run a process, get the exit code, and do its notification, or (depending on options given on the command line) use an exit code given as a parameter. For example, you could have:
myscript process: runs process and does notifications
myscript -c $?: only does notifications
The argparse module can make that easy.
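A rough sketch of the first of those forms, where myscript runs the process itself and reads the exit code via subprocess; notify() here is again a hypothetical stand-in for the actual notification call:
import argparse
import subprocess
import sys

def notify(message):
    # Hypothetical stand-in for the real notification (e.g. pushover) call.
    print("NOTIFY:", message)

parser = argparse.ArgumentParser()
parser.add_argument('command', nargs=argparse.REMAINDER,
                    help='command to run and report on')
args = parser.parse_args()
if not args.command:
    parser.error('no command given')

# Run the wrapped command; its exit status is then available directly.
code = subprocess.call(args.command)
notify('success' if code == 0 else 'failure (exit code %d)' % code)
sys.exit(code)
The -c $? form can then be added as a second option alongside this, as described above.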
But what I'm really looking for is a Python way to retrieve $? from Python.
If you ran them as separate processes, then it's clearly impossible.
An independent process cannot know how another process ended.
You can 1) store the result in a file, 2) emit something from command and pipe it to the script, 3) run the command from myscript, and so on... but you need some kind of communication.
The simplest way, of course is command; myscript $?

Run shell script using fabric and piping script text to shell's stdin

Is there a way to execute a multi-line shell script by piping it to the remote shell's standard input in fabric? Or must I always write it to the remote filesystem, then run it, then delete it? I like sending to stdin as it avoids the temporary file. If there's no fabric API (and it seems like there is not based on my research), presumably I can just use the ssh module directly. Basically I wish fabric.api.run was not limited to a 1-line command that gets passed to the shell as a command line argument, but instead would take a full multi-line script and write it to the remote shell's standard input.
To clarify I want the fabric equivalent of this command line:
ssh somehost /bin/sh < /tmp/test.sh
Except in Python the script source code wouldn't come from a file on the local filesystem; it would just be a multi-line string in memory. Note that this is a single logical operation and there is no temporary file on the remote side, meaning unexpected failures and crashes don't leave orphan files. If there were such an option in fabric (which is what I'm asking about), there would be no temporary file on either side and this would require only a single ssh operation.
You could use Fabric operations. You can use
fabric.operations.put(local_path, remote_path, use_sudo=False, mirror_local_mode=False, mode=None)
to copy the script file to the remote path and then execute it.
or,
you could use fabric.operations.open_shell, but that will only work for a series of simple commands; for scripting involving logical flow, it is better to use the put operation and execute the script just as you would on a local server.
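A rough sketch of that put-then-run pattern with Fabric 1.x's API (the remote path below is a placeholder); put() accepts a file-like object, so the script doesn't have to exist as a local file either:
from io import BytesIO
from fabric.api import put, run

SCRIPT = b"""#!/bin/sh
set -e
echo hello
echo from a multi-line script
"""

def deploy_and_run():
    # Upload the in-memory script, mark it executable, run it, clean up.
    put(BytesIO(SCRIPT), '/tmp/myscript.sh', mode=0o755)
    run('/tmp/myscript.sh')
    run('rm -f /tmp/myscript.sh')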
If the script is in a file, you can read it, then pass its content to run. Rewriting Jasper's example:
from fabric.api import run
script_file = open('myscript.sh')
run(script_file.read())
script_file.close()
For what it's worth, this works perfectly fine. It uses Python's multi-line string notation ''' and bash's line continuation (\ before a newline). You can use semicolons to separate independent commands, or chaining operators like &&:
run('''echo hello;\
echo testing;\
echo is this thing on?;''')
run('''echo hello && \
echo testing && \
echo is this thing on?''')
Here's the output I get:
[root@192.168.59.103:49300] run: echo hello; echo testing; echo is this thing on?;
[root@192.168.59.103:49300] out: hello
[root@192.168.59.103:49300] out: testing
[root@192.168.59.103:49300] out: is this thing on?
[root@192.168.59.103:49300] out:
[root@192.168.59.103:49300] run: echo hello && echo testing && echo is this thing on?
[root@192.168.59.103:49300] out: hello
[root@192.168.59.103:49300] out: testing
[root@192.168.59.103:49300] out: is this thing on?
[root@192.168.59.103:49300] out:
There is no built-in way to do that. You could program it though:
from fabric.api import run
scriptfilehandle = open('myscript.sh')
for line in scriptfilehandle:
    run(line)
