Python subprocess: fails to read "|" in shell command

I'm trying to use a Go binary as well as some shell tools in a Python script. It's a chained command using |; in summary, the command looks like this:
address = "http://line.me"
commando = f"echo {address} | /root/go/bin/crawler | grep -E --line-buffered '^200'"
The code above is just a demonstration; the actual code reads address from a wordlist. My first try used os.system, and it failed:
read = os.system(commando)
print(read)
It turns out os.system doesn't capture any of the standard streams, so I had to use subprocess:
commando = subprocess.Popen(commando, shell=True,
                            stdin=subprocess.PIPE,
                            stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE)
commandos = commando.stdout.read() + commando.stderr.read()
print(commandos)
Passing shell=True triggers:
b'/bin/sh: 1: Syntax error: "|" unexpected\n'
Through more reading, it could be that sh can't handle | or that I need to use bash. Is there any alternative? I have been trying to add a shebang line to the commando variable:
#!/bin/bash
Still no luck...

Try this way:
subprocess.call(commando, shell=True)
The following code worked for me:
subprocess.call('ls | grep x | grep y', shell=True)

Fixed by explicitly invoking bash:
['bash', '-c', commando]
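A minimal sketch of that fix, assuming bash is available. The asker's /root/go/bin/crawler only exists on their machine, so a `cat` stands in for it here:

```python
import subprocess

# Stand-in pipeline: `cat` replaces the asker's /root/go/bin/crawler binary.
commando = "echo '200 http://line.me' | cat | grep -E --line-buffered '^200'"

# Running the string through ['bash', '-c', ...] sidesteps the /bin/sh error.
out = subprocess.check_output(["bash", "-c", commando])
print(out.decode().strip())  # prints: 200 http://line.me
```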

Related

Python subprocess() reading file in bash

I have a shell command for a file as given below:
filename="/4_illumina/gt_seq/gt_seq_proccessor/200804_MN01111_0025_A000H35TCJ/fastq_files/raw_data/200804_MN01111_0025_A000H35TCJ.demultiplex.log"
assembled_reads=$(cat $filename | grep -i " Assembled reads ...................:" | grep -v "Assembled reads file...............:")
Now I am trying to run this within a python environment using subprocess as:
task = subprocess.Popen("cat $filename | grep -i " Assembled reads ...................:" | grep -v "Assembled reads file...............:"", shell=True, stdout=subprocess.PIPE)
p_stdout = task.stdout.read()
print (p_stdout)
This is not working because I am not able to pass the filename variable from Python to the shell, and there is probably a syntax error in the way I have written the grep command.
Any suggestions ?
This code seems to solve your problem with no external tools required.
filename="/4_illumina/gt_seq/gt_seq_proccessor/200804_MN01111_0025_A000H35TCJ/fastq_files/raw_data/200804_MN01111_0025_A000H35TCJ.demultiplex.log"
for line in open(filename):
    if "Assembled reads" in line and "Assembled reads file" not in line:
        print(line.rstrip())
I would consider doing all the reading and searching in Python, and maybe rethink what you want to achieve. However:
In a shell:
$ export filename=/tmp/x-output.GOtV
In Python (note the access to $filename and the mixed quotes in the command; I also use a simplified grep command to keep things short):
import os
import subprocess
tmp = subprocess.Popen(f"cat {os.environ['filename']} | grep -i 'x'", shell=True, stdout=subprocess.PIPE)
data = tmp.stdout.read()
print(data)
Though it works, this solution is not what I would consider clean code.
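An alternative sketch that skips the environment-variable detour: interpolate the Python variable into the command string and let shlex.quote guard the path. The log file and its contents here are hypothetical stand-ins for the demultiplex log:

```python
import os
import shlex
import subprocess
import tempfile

# Hypothetical stand-in for the demultiplex log
with tempfile.NamedTemporaryFile("w", suffix=".log", delete=False) as fh:
    fh.write("Assembled reads ...................: 123\n")
    fh.write("Assembled reads file...............: out.fastq\n")
    filename = fh.name

# Interpolate the Python variable directly; shlex.quote protects against
# spaces or shell metacharacters in the path.
cmd = f"grep -i 'Assembled reads' {shlex.quote(filename)} | grep -v 'Assembled reads file'"
out = subprocess.check_output(cmd, shell=True).decode()
os.unlink(filename)
print(out.strip())  # prints: Assembled reads ...................: 123
```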

Executing bash profile aliases from python script

I am aware that many similar questions have been posted here, but none of them seems to work in my case. I have a few commands in my bash profile, like the ones below:
export HEADAS=/Users/heasoft/x86_64-apple-darwin18.7.0
alias heainit=". $HEADAS/headas-init.sh"
. $HEADAS/headas-init.sh
export SAS_DIR=/Users/sas-Darwin-16.7.0-64/xmmsas
alias sas=". $SAS_DIR/setsas.sh"
alias sit='source ~/.bash_profile'
in which I created an alias to run them consecutively: alias prep1='sit; heainit; sas'. This works just fine when I execute it on the command line. But I want to put it in a Python script and run it from there. I am running Python (v 3.7.4). So, as suggested here, I tried
import subprocess
command = "prep1"
process = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=None, shell=True)
output = process.communicate()
print(output[0].decode())
But I get an error saying command not found. I tried to export it in my bash profile but got an error stating -bash: export: prep1: not a function
I also tried the method suggested here, but still nothing. Related to this, I couldn't even run a shell command like the one below in Python:
epatplot set=evli.FTZ plotfile="pn_filtered_pat.ps" 2>&1 | tee pn_filtered_pat.txt
Here is my Python script attempt
command = "epatplot set=evli.FTZ plotfile="pn_filtered_pat.ps" 2>&1 | tee pn_filtered_pat.txt"
process = subprocess.Popen(command.split(), stdout=subprocess.PIPE)
output, error = process.communicate()
I get SyntaxError: invalid syntax. I know where this syntax error is arising from, but I don't know how to fix it.
I am a beginner in python so I appreciate any help/guidance.
Please see this answer: https://askubuntu.com/a/98791/1100014
The recommendation is to convert your aliases to bash functions and then export them with -f to be available in subshells.
When you call Popen, execute "bash -c <functionname>".
As for your last script attempt, you have a conflict in quotation marks. Replace the outer quotes with single quotes like this:
command = 'epatplot set=evli.FTZ plotfile="pn_filtered_pat.ps" 2>&1 | tee pn_filtered_pat.txt'
process = subprocess.Popen(command.split(), stdout=subprocess.PIPE)
output, error = process.communicate()
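A self-contained sketch of the exported-function approach. The function is defined inline here so the example can run on its own; in the answer's setup it would come from ~/.bash_profile after `export -f prep1`, and the function body shown is a placeholder:

```python
import subprocess

# The function is defined inline to keep the sketch self-contained; in the
# answer's setup it would come from the profile after `export -f prep1`.
script = 'prep1() { echo "environment ready"; }; prep1'
result = subprocess.run(["bash", "-c", script],
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE)
print(result.stdout.decode().strip())  # prints: environment ready
```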

Unix command in python with &

I want to execute the command below in Python and get its output in a variable.
I tried the Popen and system methods, but they do not grep the particular content like 0 or 1, and Python also throws an error for the & character.
Can anyone suggest how I can prepare the command? I am using Python 2.4.
"symtest host port USDSYM | & egrep -e '^0:' -e '^1:' "
If you are using Popen from subprocess, you can pass your own pipe and then read the pipe.
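For illustration, a sketch of that idea in Python 3 syntax (on the asker's Python 2.4, only the print call would differ). csh's `|&` pipes stdout and stderr together, which subprocess expresses as stderr=subprocess.STDOUT; the symtest command is replaced by a stand-in that writes to both streams:

```python
import subprocess

# Stand-in for `symtest host port USDSYM`: a command writing to both streams.
# stderr=subprocess.STDOUT merges stderr into stdout, like csh's `|&`.
p = subprocess.Popen("echo '0: ok'; echo '1: warn' 1>&2",
                     shell=True,
                     stdout=subprocess.PIPE,
                     stderr=subprocess.STDOUT)
out, _ = p.communicate()
matches = [line for line in out.decode().splitlines()
           if line.startswith(("0:", "1:"))]
print(matches)  # prints: ['0: ok', '1: warn']
```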

Filter output of a process with `grep` while keeping the return value

I think this is not a Python question but in order to provide the context I'll tell, what exactly I'm doing.
I run a command on a remote machine using ssh -t <host> <command> like this:
if os.system('ssh -t some_machine [ -d /some/directory ]') != 0:
    do_something()
(note: [ -d /some/directory ] is only an example. Could be replaced by any command which returns 0 in case everything went fine)
Unfortunately ssh prints "Connection to some_machine close." every time I run it.
Stupidly I tried to run ssh -t some_machine <command> | grep -v "Connection" but this returns the result of grep of course.
So in short: in Python, I'd like to run a process via ssh and evaluate its return value while filtering away some unwanted output.
Edit: this question suggests something like
<command> | grep -v "bla"; return ${PIPESTATUS[0]}
Indeed this might be an approach, but it seems to work with bash only; at least in zsh, PIPESTATUS seems not to be defined.
Use subprocess, and connect the two commands in Python rather than a shell pipeline.
from subprocess import Popen, PIPE, call
p1 = Popen(["ssh", "-t", "some_machine", "test", "-d", "/some/directory"],
           stdout=PIPE)
if call(["grep", "-v", "Connection"], stdin=p1.stdout) != 0:
    # use p1.returncode for the exit status of ssh
    do_something()
Taking this a step further, try to avoid running external programs when unnecessary. You can examine the output of ssh directly in Python without using grep; for example, using the re library to examine the data read from p1.stdout yourself. You can also use a library like Paramiko to connect to the remote host instead of shelling out to run ssh.
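A runnable sketch of that last point, with a local command standing in for the ssh call; the "Connection ..." line and the exit status are fabricated for the demo:

```python
import subprocess

# Local stand-in for the ssh call: prints a "Connection ..." line and exits
# with a nonzero status (both fabricated for the demo).
p = subprocess.Popen(["sh", "-c",
                      "echo 'real output'; echo 'Connection to host closed.'; exit 3"],
                     stdout=subprocess.PIPE)
out, _ = p.communicate()  # waits, so p.returncode is set afterwards
filtered = [line for line in out.decode().splitlines()
            if "Connection" not in line]
print(filtered, p.returncode)  # prints: ['real output'] 3
```

The filtering happens in Python, so the process's own exit status stays available on p.returncode, with no PIPESTATUS trickery needed.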

Running interactive commands in docker in Python subprocess

When I use docker run in interactive mode I am able to run the commands I want to test some python stuff.
root@pydock:~# docker run -i -t dockerfile/python /bin/bash
[ root@197306c1b256:/data ]$ python -c "print 'hi there'"
hi there
[ root@197306c1b256:/data ]$ exit
exit
root@pydock:~#
I want to automate this from Python using the subprocess module, so I wrote this:
run_this = "print('hi')"
random_name = ''.join(random.SystemRandom().choice(string.ascii_uppercase + string.digits) for _ in range(20))
command = 'docker run -i -t --name="%s" dockerfile/python /bin/bash' % random_name
subprocess.call([command],shell=True,stderr=subprocess.STDOUT)
command = 'cat <<\'PYSTUFF\' | timeout 0.5 python | head -n 500000 \n%s\nPYSTUFF' % run_this
output = subprocess.check_output([command],shell=True,stderr=subprocess.STDOUT)
command = 'exit'
subprocess.call([command],shell=True,stderr=subprocess.STDOUT)
command = 'docker ps -a | grep "%s" | awk "{print $1}" | xargs --no-run-if-empty docker rm -f' % random_name
subprocess.call([command],shell=True,stderr=subprocess.STDOUT)
This is supposed to create the container, run the Python command in the container, then exit and remove the container. It does all this, except the command is run on the host machine and not in the docker container. I guess docker is switching shells or something like that. How do I run a Python subprocess from the new shell?
It looks like you are expecting the second command cat <<... to send input to the first command. But the two subprocess commands have nothing to do with each other, so this doesn't work.
Python's subprocess library, and the popen command that underlies it, offer a way to get a pipe to stdin of the process. This way, you can send in the commands you want directly from Python and don't have to attempt to get another subprocess to talk to it.
So, something like:
from subprocess import Popen, PIPE
p = Popen('docker run -i --name="%s" dockerfile/python /bin/bash' % random_name,
          shell=True, stdin=PIPE)
p.communicate("cat <<'PYSTUFF' | timeout 0.5 python | head -n 500000\n%s\nPYSTUFF\n" % run_this)
(I'm not a Python expert; apologies for errors in string-forming. Adapted from this answer)
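A reduced sketch of the stdin-pipe idea, with /bin/sh standing in for the container's shell since docker may not be available where this runs (it also assumes a python3 interpreter is on the PATH):

```python
import subprocess

# /bin/sh plays the container's shell; commands are fed to the one process
# through its stdin instead of via separate subprocess calls.
p = subprocess.Popen(["/bin/sh"],
                     stdin=subprocess.PIPE,
                     stdout=subprocess.PIPE)
out, _ = p.communicate(b"python3 -c \"print('hi there')\"\n")
print(out.decode().strip())  # prints: hi there
```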
You actually need to spawn a new child in the new shell you are opening. So after the docker container is created, run docker enter, or try the same operation with pexpect instead of subprocess. pexpect spawns a new child, and that way you can send commands.
