I am trying to pass shell commands into one of my Python scripts so that I can run them against hosts. However, when I pass them on the command line, the $ is stripped out for some reason. How can I avoid this?
I set up the parser like this:
parser = argparse.ArgumentParser(description='check size', add_help=True)
parser.add_argument('-c', "--cmd", dest="cmd",type=str, help="Command to run Ex: df -h /boot/|grep -i boot|awk '{print \$4}' 2>/dev/null ")
Ex:
This works with a backslash:
python3.4 disk_file_check.py -e env -op chksize -c "df -h /boot/|grep -i boot|awk {'print \$4'} 2>/dev/null"
Anybody can use this script, and all they should have to pass is the command they want to run, so I want it to work like this:
python3.4 disk_file_check.py -e env -op chksize -c "df -h /boot/|grep -i boot|awk {'print $4'} 2>/dev/null"
But when I do this, the string turns into:
df -h /boot/|grep -i boot|awk '{print }' 2>/dev/null
As you can see, the $ is cut out, which gives me the wrong result.
Is there any setting in argparse that can handle this, or should I try a different way? Thanks.
Thanks. I think I will not pass this through the command line, since the shell expands it. Thanks for pointing that out.
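The stripping happens in the calling shell before Python ever sees the argument: inside double quotes, bash expands $4 (an unset positional parameter here) to an empty string, while single quotes leave it alone. A minimal sketch of the difference, assuming bash is available:

```python
import subprocess

# Inside double quotes bash expands $4 (unset, so it vanishes);
# inside single quotes it stays literal.
double = subprocess.check_output(
    ['bash', '-c', 'echo "awk {print $4}"']).decode().strip()
single = subprocess.check_output(
    ['bash', '-c', "echo 'awk {print $4}'"]).decode().strip()
print(double)  # awk {print }
print(single)  # awk {print $4}
```

So the fix is on the caller's side: single-quote the argument (or keep escaping the $). argparse never sees the dollar sign and cannot restore it.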
Related
I have a command.list file containing command-line parameters for my Python script my_script.py, which takes 3 parameters.
One line of it looks like:
<path1> <path2> -sc 4
It looks like this does not work, because the parameters need to be split?
cat command.list | xargs -I {} python3 my_script.py {}
How do I split the string into parameters and pass them to the Python script?
What about cat command.list | xargs -L 1 python3 my_script.py? This will pass one line (-L 1) at a time to your script.
The documentation of -I from man xargs
-I replace-str
Replace occurrences of replace-str in the initial-arguments with names read from standard input. Also, unquoted blanks do not terminate input items; instead the separator is the newline character. Implies -x and -L 1.
What you want is
xargs -L1 python3 my_script.py
By the way: cat is not necessary. Use one of the following commands instead:
< command.list xargs -L1 python3 my_script.py
xargs -a command.list -L1 python3 my_script.py
Not sure what you are trying to do with xargs -I {} python3 my_script.py {} there.
But are you looking for:
$ cat file
<path1> <path2> -sc 4
....
<path1n> <path2n> -sc 4
$ while read -r path1 path2 opt val; do python3 my_script.py "$path1" "$path2" "$opt" "$val"; done < file
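If you would rather do the splitting in Python than with xargs, shlex.split breaks a line into arguments the way a shell would, honouring quoting. A sketch, assuming each line of command.list holds the arguments for one invocation (the commented loop shows how the hypothetical my_script.py from the question would be driven):

```python
import shlex
import subprocess

line = '<path1> <path2> -sc 4'   # one line from command.list
args = shlex.split(line)         # split into separate parameters
print(args)  # ['<path1>', '<path2>', '-sc', '4']

# The full loop would then be:
# with open('command.list') as f:
#     for raw in f:
#         if raw.strip():
#             subprocess.run(['python3', 'my_script.py'] + shlex.split(raw))
```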
I am trying to execute a bash script from Python code. The bash script has some grep commands in a pipe inside a for loop. When I run the bash script itself it gives no errors, but when I call it from the Python code it prints: grep: write error.
The command that I call in python is:
subprocess.call("./change_names.sh",shell=True)
The bash script is:
#!/usr/bin/env bash
for file in *.bam;do new_file=`samtools view -h $file | grep -P '\tSM:' | head -n 1 | sed 's/.\+SM:\(.\+\)/\1/' | sed 's/\t.\+//'`;rename s/$file/$new_file.bam/ $file;done
What am I missing?
You should not use shell=True when you are running a simple command that doesn't require the shell for anything on the command line.
subprocess.call(["./change_names.sh"])
There are multiple problems in the shell script. Here is a commented refactoring.
#!/usr/bin/env bash
for file in *.bam; do
    # Use modern command substitution syntax; fix quoting
    new_file=$(samtools view -h "$file" |
        grep -P '\tSM:' |
        # refactor to a single sed script
        sed -n 's/.\+SM:\([^\t]\+\).*/\1/p;q')
    # Fix quoting some more; don't use rename
    mv "$file" "$new_file.bam"
done
grep -P doesn't seem to be necessary or useful here, but without an example of what the input looks like, I'm hesitant to refactor that into the sed script too. I hope I have guessed correctly what your sed version does with the \+ and \t escapes which aren't entirely portable.
This will still produce a warning that you are not reading all of the output from grep in some circumstances. A better solution is probably to refactor even more of this into your Python script.
import glob
import os
import subprocess

for file in glob.glob('*.bam'):
    # check_output returns bytes; decode before splitting into lines
    header = subprocess.check_output(['samtools', 'view', '-h', file]).decode()
    for line in header.split('\n'):
        if '\tSM:' in line:
            # take the text after the last 'SM:' up to the next tab
            dest = line.split('SM:')[-1].split('\t')[0] + '.bam'
            os.rename(file, dest)
            break
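For what it's worth, the grep: write error is typically a SIGPIPE symptom: head exits after one line, and if the child process inherits Python's "ignore SIGPIPE" disposition (it does on Python 2; Python 3's subprocess restores default handlers by default), grep reports a write error instead of dying silently. A sketch of forcing the default handler back on, using a stand-in pipeline rather than the actual script (POSIX only, since it relies on preexec_fn and SIGPIPE):

```python
import signal
import subprocess

# Restore the default SIGPIPE handler in the child before it execs the shell,
# so early pipe closure kills the writer silently instead of raising a write error.
status = subprocess.call(
    "yes hello | head -n 1 > /dev/null",  # head closes the pipe early, like the grep|head case
    shell=True,
    preexec_fn=lambda: signal.signal(signal.SIGPIPE, signal.SIG_DFL),
)
print(status)  # 0
```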
Hi, try the modification below, which should fix your issue:
for file in *.bam;do new_file=`unbuffer samtools view -h $file | grep -P '\tSM:' | head -n 1 | sed 's/.\+SM:\(.\+\)/\1/' | sed 's/\t.\+//'`;rename s/$file/$new_file.bam/ $file;done
Or else try redirecting standard error to /dev/null, like below:
for file in *.bam;do new_file=`samtools view -h $file 2>/dev/null | grep -P '\tSM:' | head -n 1 | sed 's/.\+SM:\(.\+\)/\1/' | sed 's/\t.\+//'`;rename s/$file/$new_file.bam/ $file;done
Your actual issue is with the command samtools view -h $file. When you run the script from Python, you should provide the full path, like below:
/fullpath/samtools view -h $file
I had a bash script that generated a username, password, and SSH key for a user.
The part that creates the SSH key:
su $user -c "ssh-keygen -f /home/$user/.ssh/id_rsa -t rsa -b 4096 -N ''"
How can I do the same in Python with os.system? I tried this:
os.system('su %s -c "ssh-keygen -f /home/%s/.ssh/id_rsa -t rsa -b 4096 -N ''"', user)
TypeError: system() takes at most 1 argument (2 given)
Also I tried:
os.system('su user -c "ssh-keygen -f /home/user/.ssh/id_rsa -t rsa -b 4096 -N ''"')
Of course, it doesn't work either.
Build the command string with ordinary string formatting before handing it to os.system; for instance:
import os
user = 'joe'
ssh_dir = "/home/{}/.ssh/id_rsa".format(user)
os.system("ssh-keygen -f {} -t rsa -b 4096 -N ''".format(ssh_dir))
os.system is very close to a bash command line because it uses an underlying shell (like its cousin subprocess.call when used with shell=True).
In your case, there is little benefit in using subprocess, since your command itself runs another command, so you cannot fully rely on subprocess's argument quoting.
Pass the exact command; the only change needed is to escape the single quotes, otherwise Python sees them as string end + string start (your string is already delimited by single quotes) and they are eliminated.
Check this simpler example:
>>> 'hello '' world'
'hello world'
>>> 'hello \'\' world'
"hello '' world"
That's a kind of worst case, where you cannot use either double or single quotes to protect the string because you're already using the other kind inside. In that case, escape the quotes with \:
os.system('su $user -c "ssh-keygen -f /home/$user/.ssh/id_rsa -t rsa -b 4096 -N \'\'"')
Use the subprocess module:
import subprocess
username = 'user'
result, err = subprocess.Popen(
'su %s -c "ssh-keygen -f /home/%s/.ssh/id_rsa -t rsa -b 4096 -N \'\'"' % (username, username),
stdout=subprocess.PIPE,
shell=True
).communicate()
if err:
print('Something went wrong')
else:
print(result)
Edit: this is the 'fast' way to do it, but you shouldn't use shell=True if you can't control the input, since it allows arbitrary code execution, as said here.
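A quoting-free alternative is to skip the shell entirely and pass an argument list, so the empty passphrase is just an ordinary string element and no escaping is needed. A sketch (the username joe is a placeholder, and the call itself is commented out since su needs root):

```python
import subprocess

user = 'joe'  # placeholder username
argv = [
    'su', user, '-c',
    "ssh-keygen -f /home/{}/.ssh/id_rsa -t rsa -b 4096 -N ''".format(user),
]
print(argv[-1])  # ssh-keygen -f /home/joe/.ssh/id_rsa -t rsa -b 4096 -N ''
# subprocess.check_call(argv)  # would actually run it; requires root
```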
I've been wrestling with solutions from "How do I use sudo to redirect output to a location I don't have permission to write to?" and "append line to /etc/hosts file with shell script" with no luck.
I want to "append 10.10.10.10 puppetmaster" at the end of /etc/hosts. (Oracle/Red-Hat linux).
Been trying variations of:
subprocess.call("sudo -s", shell=True)
subprocess.call('sudo sh -c" "10.10.10.10 puppetmaster" >> /etc/hosts"', shell=True)
subprocess.call(" sed -i '10.10.10.10 puppetmaster' /etc/hosts", shell=True)
But the /etc/hosts file remains unchanged.
Can someone please point out what I'm doing wrong?
Simply use dd:
subprocess.Popen(['sudo', 'dd', 'if=/dev/stdin',
                  'of=/etc/hosts', 'conv=notrunc', 'oflag=append'],
                 stdin=subprocess.PIPE).communicate(b"10.10.10.10 puppetmaster\n")
You can do it in python quite easily once you run the script with sudo:
with open("/etc/hosts","a") as f:
f.write('10.10.10.10 puppetmaster\n')
Opening the file with mode "a" appends to it.
The problem you are facing lies in the scope of the sudo.
The code you tried runs sudo with the arguments sh and -c" "10.10.10.10 puppetmaster". The redirection done by the >> operator, however, is performed by the surrounding shell, of course with its own (non-root) permissions.
To achieve the effect you want, try starting a shell using sudo which then is given the command:
sudo bash -c 'echo "10.10.10.10 puppetmaster" >> /etc/hosts'
This will do the trick because the bash you started with sudo has superuser permissions and thus will not fail when it tries to perform the output redirection with >>.
To do this from within Python, use this:
subprocess.call("""sudo bash -c 'echo "10.10.10.10 puppetmaster" >> /etc/hosts'""", shell=True)
But of course, if you already run your Python script with superuser permissions (start it with sudo), none of this is necessary and the redirection can be done directly (without the additional sudo in the call):
subprocess.call('echo "10.10.10.10 puppetmaster" >> /etc/hosts', shell=True)
If you weren't escalating privileges for the entire script, I'd recommend the following:
p = subprocess.Popen(['sudo', 'tee', '-a', '/etc/hosts'],
stdin=subprocess.PIPE, stdout=subprocess.DEVNULL)
p.stdin.write(b'10.10.10.10 puppetmaster\n')
p.stdin.close()
p.wait()
Then you can write arbitrary content to the process's stdin (p.stdin).
I'm trying to execute an rsync command via subprocess and Popen. Everything is OK until I add the rsh subcommand, at which point things go wrong.
from subprocess import Popen
args = ['-avz', '--rsh="ssh -C -p 22 -i /home/bond/.ssh/test"', 'bond@localhost:/home/bond/Bureau', '/home/bond/data/user/bond/backups/']
p = Popen(['rsync'] + args, shell=False)
print p.wait()
#just printing generated command:
print ' '.join(['rsync']+args)
I've tried escaping the '--rsh="ssh -C -p 22 -i /home/bond/.ssh/test"' argument in many ways, but it seems that's not the problem.
I'm getting the error
rsync: Failed to exec ssh -C -p 22 -i /home/bond/.ssh/test: No such file or directory (2)
If I copy/paste the args that I printed into a terminal, the command executes correctly.
Thanks.
What happens if you use '--rsh=ssh -C -p 22 -i /home/bond/.ssh/test' instead (I removed the double quotes)?
I suspect that this should work. What happens when you cut/paste your line into the command line is that your shell sees the double quotes and removes them, but uses them to prevent -C, -p, etc. from being interpreted as separate arguments. When you call subprocess.Popen with a list, you have already partitioned the arguments without the help of the shell, so you no longer need the quotes to preserve where the arguments should be split.
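In list form that looks like the following: each element reaches rsync as exactly one argument, so the embedded spaces in the --rsh value survive without any extra quotes (the actual call is commented out since it needs the real hosts and key):

```python
import subprocess

args = ['rsync', '-avz',
        '--rsh=ssh -C -p 22 -i /home/bond/.ssh/test',  # one argument, no inner quotes
        'bond@localhost:/home/bond/Bureau',
        '/home/bond/data/user/bond/backups/']
print(args[2])  # --rsh=ssh -C -p 22 -i /home/bond/.ssh/test
# subprocess.call(args)  # would run the transfer for real
```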
Having the same problem, I googled this issue extensively. It would seem you simply cannot pass arguments to ssh with subprocess this way. Ultimately, I wrote a shell script to run the rsync command, which I could pass arguments to via subprocess.call(['rsyncscript', src, dest, sshkey]). The shell script was: /usr/bin/rsync -az -e "ssh -i $3" $1 $2
This fixed the problem.