I am aware that many similar questions have been posted here, but none of them seems to work in my case. I have a few commands in my bash profile, like the ones below:
export HEADAS=/Users/heasoft/x86_64-apple-darwin18.7.0
alias heainit=". $HEADAS/headas-init.sh"
. $HEADAS/headas-init.sh
export SAS_DIR=/Users/sas-Darwin-16.7.0-64/xmmsas
alias sas=". $SAS_DIR/setsas.sh"
alias sit='source ~/.bash_profile'
in which I created an alias to run them consecutively: alias prep1='sit; heainit; sas'. This works just fine when I execute it on the command line, but I want to put it in a Python script and run it from there. I am running Python (v 3.7.4). So, as suggested here, I tried
import subprocess
command = "prep1"
process = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=None, shell=True)
output = process.communicate()
print(output[0].decode())
But I get an error saying command not found. I tried to export it in my bash profile but got an error stating -bash: export: prep1: not a function
I also tried the method suggested here, but still nothing. Related to this, I couldn't even run a shell command like the one below in Python:
epatplot set=evli.FTZ plotfile="pn_filtered_pat.ps" 2>&1 | tee pn_filtered_pat.txt
Here is my Python script attempt
command = "epatplot set=evli.FTZ plotfile="pn_filtered_pat.ps" 2>&1 | tee pn_filtered_pat.txt"
process = subprocess.Popen(command.split(), stdout=subprocess.PIPE)
output, error = process.communicate()
I get SyntaxError: invalid syntax. I know where this syntax error is arising from, but I don't know how to fix it.
I am a beginner in Python, so I appreciate any help/guidance.
Please see this answer: https://askubuntu.com/a/98791/1100014
The recommendation is to convert your aliases to bash functions and then export them with -f so they are available in subshells.
When you call Popen, execute "bash -c <functionname>", as sketched below.
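A minimal sketch of that approach, assuming the Python script is launched from a shell in which the function has already been defined and exported (prep1 below is the function form of the alias, with the body taken from the question):
import subprocess

# In ~/.bash_profile, define and export the function:
#   prep1() { source ~/.bash_profile; . $HEADAS/headas-init.sh; . $SAS_DIR/setsas.sh; }
#   export -f prep1
process = subprocess.Popen(["bash", "-c", "prep1"], stdout=subprocess.PIPE)
output, _ = process.communicate()
print(output.decode())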
As for your last script attempt, you have a conflict in quotation marks: the double quotes around pn_filtered_pat.ps terminate the outer string. Replace the outer quotes with single quotes. Also, since the command contains a redirection and a pipe, it must be handed to a shell as a single string rather than split into a list:
command = 'epatplot set=evli.FTZ plotfile="pn_filtered_pat.ps" 2>&1 | tee pn_filtered_pat.txt'
process = subprocess.Popen(command, stdout=subprocess.PIPE, shell=True)
output, error = process.communicate()
Related
I use Python 3.10.7 and I am trying to get the Python interpreter to run this command:
rg mysearchterm /home/user/stuff
This command, when I run it directly in bash, successfully runs ripgrep and searches the directory /home/user/stuff (recursively) for the term mysearchterm. However, I'm trying to do this programmatically with Python's subprocess.Popen() and I am running into issues:
from subprocess import Popen, PIPE
proc1 = Popen(["rg", "term", "/home/user/stuff", "--no-filename"],stdout=PIPE,shell=True)
proc2 = Popen(["wc","-l"],stdin=proc1.stdin,stdout=PIPE,shell=True)
#Note: I've also tried it like below:
proc1 = Popen(f"rg term /home/user/stuff --no-filename",stdout=PIPE,shell=True)
proc2 = Popen("wc -l",stdin=proc1.stdin,stdout=PIPE,shell=True)
result, _ = proc2.communicate()
print(result.decode())
What happened here was bizarre to me; I got an error (from rg itself) which says:
error: The following required arguments were not provided:
<PATTERN>
So, using my debugging/tracing skills, I looked at the process chain and saw that the Python interpreter itself was running:
python3 1921496 953810 0 /usr/bin/python3 ./debug_script.py
sh 1921497 1921496 0 /bin/sh -c rg term /home/user/stuff --no-filename
sh 1921498 1921496 0 /bin/sh -c wc -l
So my next thought was to run that manually in bash, which led to the same error. However, when I run /bin/sh -c "rg term /home/user/stuff --no-filename" in bash, with the double quotes, the command works; but when I try to do the same programmatically in Popen() it again doesn't work, even when I try to escape the quotes with \. This time, I get errors about unexpected EOF.
As for the behavior when shell=True is specified, the Python documentation says:
If args is a sequence, the first item specifies the command string, and any additional items will be treated as additional arguments to the shell itself. That is to say, Popen does the equivalent of:
Popen(['/bin/sh', '-c', args[0], args[1], ...])
Then your command invocation is equivalent to:
/bin/sh -c "rg" "term" "/home/user/stuff" ...
where no arguments are fed to rg.
You need to pass the command as a string (not a list) or just drop shell=True.
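For example, here is a minimal corrected sketch that drops shell=True; note that the snippets in the question also wired stdin=proc1.stdin, whereas the second process has to read from proc1.stdout:
from subprocess import Popen, PIPE

proc1 = Popen(["rg", "term", "/home/user/stuff", "--no-filename"], stdout=PIPE)
proc2 = Popen(["wc", "-l"], stdin=proc1.stdout, stdout=PIPE)
proc1.stdout.close()  # let rg receive SIGPIPE if wc exits first
result, _ = proc2.communicate()
print(result.decode())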
I would like to run these multiline shell commands:
echo 'a=?'
read a
echo "a=$a"
from a Python script, using the subprocess.call() method.
I wrote this, in test.py file:
import shlex, subprocess
args = ["echo", 'a=?',"read", "a", "echo", "a=$a"]
subprocess.call(args)
and when I execute it, I get this output in the terminal:
Armonicus#MyMacs-iMac MyNewFolder % python test.py
a=? read a echo a=$a
which is not even close to what I expect.
Can I have some support from anyone, please?
There are a couple of issues with your approach here.
First, if what you're trying to do is prompt the user for input from the command line, then you can use Python builtins instead of a subprocess:
a = input('a=?')
print(a)
If you do want to call a subprocess with multiple commands, you need to either make separate calls for each command, or invoke a shell and execute the commands within it. For example:
subprocess.call('echo "a=?"; read a; echo "a=$a"', shell=True)
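Equivalently, since the shell treats newlines as command separators, you can pass the whole script as one multiline string (a sketch of the same three commands):
import subprocess

script = """
echo 'a=?'
read a
echo "a=$a"
"""
subprocess.call(script, shell=True)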
I have a Python script which wraps a repo command.
import subprocess

def processing(repocmd):
    process = subprocess.Popen(repocmd,
                               stdout=subprocess.PIPE, stderr=None, shell=True)
    process.communicate()

processing(commandforrepo)
In this particular case, I am passing a repocmd that compares two branches and prints out the differences:
"repo forall $(repo forall -c 'echo $REPO_PROJECT')\
-c 'git log --abbrev-commit --pretty=oneline --no-merges \
--cherry-pick --left-only HEAD...$REPO_RREV'"
I attempted to run the script from the terminal, but the command did not get executed. However, when the same command is issued directly in the terminal, it produces a list of differences between the two branches.
Any clue as to what is missing?
Warning: most of the standard output for Git commands is written on... stderr.
See here for why: informative messages are on stderr only.
So make sure to parse stderr, not stdout.
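A minimal sketch of that change, reusing the processing() helper from the question; the only difference is that stderr is piped instead of discarded:
import subprocess

def processing(repocmd):
    process = subprocess.Popen(repocmd, stdout=subprocess.PIPE,
                               stderr=subprocess.PIPE, shell=True)
    out, err = process.communicate()
    print(err.decode())  # git's informative output usually lands here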
The subprocess.Popen() function has an "env" parameter, but it doesn't seem to have the desired effect with sudo. This is what I get when I do this in the interactive Python shell:
import subprocess
env={"CVS_RSH":"ssh"}
command = "sudo -u user cvs -d user@1.8.7.2:/usr/local/ncvs co file.py"
p = subprocess.Popen(command, stdout=subprocess.PIPE,
stderr=subprocess.PIPE,env=env,shell=True)
(command_output, error_output) = p.communicate()
p.wait()
1
>>> error_output
b'cvs [checkout aborted]: cannot exec rsh: No such file or directory\ncvs [checkout aborted]: end of file from server (consult above messages if any)\n'
The message is distracting, so let me explain. I'm forced to use ancient CVS and the environment variable tells it to use ssh to connect to the server, rather than the default which sadly is rsh. It also needs an environment variable called CVS_ROOT, but fortunately there's a "-d" option for that, but none for the CVS_RSH that I know of.
Interestingly enough, if I do:
command = "sudo -u user echo $CVS_RSH"
env={"CVS_RSH":"something_else"}
p = subprocess.Popen(command, stdout=subprocess.PIPE,
stderr=subprocess.PIPE,env=env,shell=True)
(command_output, error_output) = p.communicate()
p.wait()
0
>>> command_output
b'something_else\n'
Maybe this worked because echo wasn't actually started as a child process? Is it possible to pass an environment to a process executed as another user with sudo?
This doesn't seem possible using the env parameter. The solution seems to be to just pass the environment as I was doing on the shell, for example:
command = "sudo -u user CVS_RSH=ssh CVSROOT=:ext:user@2.8.7.2:/usr/local/ncvs cvs co dir/file.py"
p = subprocess.Popen(command, stdout=subprocess.PIPE,
stderr=subprocess.PIPE,env=env,shell=True)
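Alternatively, a sufficiently recent sudo (1.8.21 or later) can forward selected variables with --preserve-env=VAR, provided the sudoers policy permits it; a sketch under that assumption:
import subprocess

env = {"CVS_RSH": "ssh"}
# --preserve-env=CVS_RSH requires sudo 1.8.21+ and a permissive sudoers policy
command = "sudo --preserve-env=CVS_RSH -u user cvs -d user@1.8.7.2:/usr/local/ncvs co file.py"
p = subprocess.Popen(command, stdout=subprocess.PIPE,
                     stderr=subprocess.PIPE, env=env, shell=True)
command_output, error_output = p.communicate()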
The weird thing is, if I do this in a Python CGI script, I can see:
cvs [checkout aborted]: cannot exec ssh: Permission denied
cvs [checkout aborted]: end of file from server (consult above messages if any)
But if I try it in the interactive Python shell, it goes past this, so it must be another weird issue (the user does have permission to ssh), unrelated to this question.
I am trying to execute this command using Python:
findSyntax = "find . -maxdepth 2 -name '.config' | cpio -updm ../test1/"
subprocess.Popen(findSyntax.split(' '))
But this command just would not work. When I execute it, it starts listing all the files (not just .config) under the . directory, beyond maxdepth 2... which is a long list.
What am I missing here? Can someone point it out? Thanks.
NOTE: I've tried running subprocess.run as well, with the same results. I was able to get just the find part working using the os.system() command.
EDIT: I just wanted to clarify that this command copies the files found, with the exact directory structure intact, to the new location (creating subdirectories if necessary). I've tried this command in a bash terminal, and it works fine, but I couldn't get it to work with Python.
EDIT2: So, the whole command works with os.system(), but I couldn't figure out how to make it work with subprocess. os.system() is supposed to be deprecated, so I would be very interested in figuring out the solution using subprocess instead.
Please look at this good answer; this one also helps.
But in essence, you can't use your above subprocess command with a pipe.
Let's run through a simple example: getting all the .py files in the current directory (ls | grep py).
This is broken:
import subprocess
subprocess.call(['ls', '|', 'grep', 'py'])
Because subprocess runs only one process at a time; by piping you are really creating two processes.
The simple but limited (platform-dependent) way is to use os.system:
import os
os.system('ls | grep py')
This literally just passes a shell command to the system to execute.
However, you should do it with subprocess by defining your pipes:
# Get all files and pass the stdout to a pipe
p1 = subprocess.Popen(['ls'], stdout=subprocess.PIPE)
# then pass that pipe to another process as stdin and do part 2
output = subprocess.check_output(['grep', 'py'], stdin=p1.stdout)
print(output)
So, a copy paste for your example:
import subprocess
# Without a shell, quotes are not stripped, so pass the find pattern
# as a plain list element rather than splitting a quoted string:
p1 = subprocess.Popen(["find", ".", "-maxdepth", "2", "-name", ".config"], stdout=subprocess.PIPE)
output = subprocess.check_output("cpio -updm ../test1/".split(), stdin=p1.stdout)
Or with os:
os.system("find . -maxdepth 2 -name '.config' | cpio -updm ../test1/")
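On Python 3.5+, subprocess.run() with shell=True is the usual replacement for os.system(); a minimal sketch of the same pipeline:
import subprocess

subprocess.run("find . -maxdepth 2 -name '.config' | cpio -updm ../test1/",
               shell=True, check=True)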