How can I execute a bash command containing '&' with Popen - python

I want to run hcitool lescan --duplicates & hcidump -R using Popen. However, Popen does not seem to treat the & the way bash does, and the command fails with "lescan: too many arguments".
Am I doing something incorrect?

Popen does not interpret shell metacharacters like & by default, so you need to pass shell=True to make this work. Note that if the command string includes content from external sources (e.g. users' files or user input), this can be dangerous.
See the frequently used arguments section of the documentation for details.
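For example, a minimal sketch (assuming hcitool and hcidump are installed, and accepting the caveat above about untrusted input):
import subprocess
# shell=True hands the whole string to /bin/sh, so '&' backgrounds
# hcitool while hcidump runs in the foreground, just as in bash
proc = subprocess.Popen("hcitool lescan --duplicates & hcidump -R", shell=True)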

Related

Executing python module through popen from a python script/shell

I have a python module that is executed by the following command:
python3 -m moduleName args
I'm trying to execute it from a script using subprocess.Popen.
subprocess.Popen(command, shell=True, text=True, stdout=subprocess.PIPE)
The subprocess documentation recommends passing a sequence rather than a string. So when I pass the command below as the argument,
command = ['python3','-m','moduleName','args']
I end up getting an interactive Python shell instead of the module being executed. If I pass the command as a string instead, things work as expected. I'm not able to find documentation or references for this behavior.
Can someone please help throw some light into this behavior?
What would be the best way to make this work?
Thanks!
This behavior is caused by the shell=True option. When Popen runs in shell mode (under POSIX), the command is appended to the shell command after a "-c" option (subprocess.py, Python 3.9):
args = [unix_shell, "-c"] + args
When the list of arguments is expanded, the first argument after '-c' (in your case, 'python3') is treated as the command string for '-c'. The remaining arguments are interpreted by the shell itself rather than by Python; -m, for example, activates job control in bash, as described in the bash manual.
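Concretely, the list form from the question expands to roughly this argv (a reconstruction for illustration):
['/bin/sh', '-c', 'python3', '-m', 'moduleName', 'args']
so the shell runs a bare interactive python3 as its '-c' command string, -m is parsed as a shell option, and moduleName and args merely become the shell's positional parameters.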
The solution is to either
pass the command as a single string, as you did, or
not set the shell option for Popen, which is a good idea anyway, as it is lighter on resources and avoids pitfalls like the one you encountered (see the sketch below).
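A minimal sketch of the second option, mirroring the names from the question (moduleName and args are placeholders):
import subprocess
# without shell=True the list goes straight to python3,
# so -m is interpreted by Python rather than by the shell
proc = subprocess.Popen(['python3', '-m', 'moduleName', 'args'],
                        text=True, stdout=subprocess.PIPE)
output, _ = proc.communicate()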

How to use subprocess.call to run a Windows program

I'm trying to run a psql command in a Python script, with the subprocess command.
I use a Windows environment and the psql command aims to restore a database located in a remote Linux server.
The snippet is this one:
import os, sys
import subprocess
subprocess.call('psql -h ip_remote_server -p port -U user-d database -n schema --file="C:\Docs\script.sql"')
This does not work; the console reports that the specified file can't be found.
Any help would be greatly appreciated!
Thanks!
Yeah, your problem is definitely your paths. I went through the hassle of installing Python on Windows 10 and created these scripts:
example.bat
@echo off
echo This is a stand-in for your program
echo arg1 = %1
echo arg2 = %2
example.py
import subprocess
subprocess.call("C:\\Users\\bogus\\example.bat example arguments")
Console
C:\Users\bogus>python example.py
This is a stand-in for your program
arg1 = example
arg2 = arguments
As you can see, you do not need to pass shell=True, or split your command into a list.
If you look closely at the documentation for subprocess.call, you will see this (emphasis added):
The arguments shown above are merely some common ones. The full function signature is the same as that of the Popen constructor - this function passes all supplied arguments other than timeout directly through to that interface.
If you look closely at the documentation for subprocess.Popen, you will see this (emphasis added):
On Windows, if args is a sequence, it will be converted to a string in a manner described in Converting an argument sequence to a string on Windows. This is because the underlying CreateProcess() operates on strings.
Any advice about splitting your arguments into a list, or passing shell=True, only applies to POSIX, with one exception:
The only time you need to specify shell=True on Windows is when the command you wish to execute is built into the shell (e.g. dir or copy). You do not need shell=True to run a batch file or console-based executable.
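Applied to the question, a sketch of the corrected call (server details are placeholders): it adds the missing space in -U user -d database and uses a raw string so the backslashes in the Windows path survive:
import subprocess
# r'...' keeps C:\Docs\script.sql intact; shell=True is not needed
# because psql is a console executable, not a shell built-in
subprocess.call(r'psql -h ip_remote_server -p port -U user -d database -n schema --file="C:\Docs\script.sql"')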

Python subprocess stdout truncated by env variable $COLUMNS

I am calling a bash script in python (3.4.3) using subprocess:
import subprocess as sp
res = sp.check_output("myscript", shell=True)
and myscript contains a line:
ps -ef | egrep somecommand
It was not giving the same result as when myscript is called directly in a bash shell. After much tinkering, I realized that when myscript is called from Python, the stdout of ps -ef is truncated to the current $COLUMNS value of the shell window before being piped to egrep. To me, this is crazy: simply resizing the shell window can change the command's results!
I managed to "solve" the problem by passing env argument to the subprocess call to specify a wide enough COLUMNS:
res = sp.check_output("myscript", shell=True, env={'COLUMNS':'100'})
However, this looks very dirty to me and I don't understand why the truncation only happens in python subprocess but not in a bash shell. Frankly I'm amazed that this behavior isn't documented in the official python doc unless it's in fact a bug -- I am using python 3.4.3. What is the proper way of avoiding this strange behavior?
You should use -ww; from man ps:
-w
Wide output. Use this option twice for unlimited width.
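A sketch inlining the fixed pipeline on the Python side (somecommand is a placeholder from the question):
import subprocess as sp
# -ww makes ps use unlimited width, so $COLUMNS no longer matters
res = sp.check_output("ps -efww | egrep somecommand", shell=True)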

Invoking C compiler using Python subprocess command

I am trying to compile a C program using Python and want to give it input using the "<" operator, but it's not working as expected.
If I compile the C program and run it by giving input though a file it works; for example
./a.out <inp.txt works
But when I try to do the same thing from a Python script, it does not quite work as expected.
For example:
import subprocess
subprocess.call(["gcc","a.c","-o","x"])
subprocess.call(["./x"])
and
import subprocess
subprocess.call(["gcc","a.c","-o","x"])
subprocess.call(["./x","<inp.txt"])
Both scripts ask for input through the terminal, but I think the second one should read from the file. Why do both programs behave the same?
To complement @Jonathan Leffler's and @alastair's helpful answers:
Assuming you control the string you're passing to the shell for execution, I see nothing wrong with using the shell for convenience. [1]
subprocess.call() has an optional Boolean shell parameter, which causes the command to be passed to the shell, enabling I/O redirection, referencing environment variables, ...:
subprocess.call("./x <inp.txt", shell = True)
Note how the entire command line is passed as a single string rather than an array of arguments.
[1]
Avoid use of the shell in the following cases:
If your Python code must run on platforms other than Unix-like ones, such as Windows.
If performance is paramount.
If you find yourself "outsourcing" tasks better handled on the Python side.
If you're concerned about lack of predictability of the shell environment (as @alastair is):
subprocess.call with shell = True always creates non-interactive non-login instances of /bin/sh - note that it is NOT the user's default shell that is used.
sh does NOT read initialization files for non-interactive non-login shells (neither system-wide nor user-specific ones).
Note that even on platforms where sh is bash in disguise, bash will act this way when invoked as sh.
Every shell instance created with subprocess.call with shell = True is its own world, and its environment is neither influenced by previous shell instances nor does it influence later ones.
However, the shell instances created do inherit the environment of the python process itself:
If you started your Python program from an interactive shell, then that shell's environment is inherited. Note that this only pertains to the current working directory and environment variables, and NOT to aliases, shell functions, and shell variables.
Generally, that's a feature, given that Python (CPython) itself is designed to be controllable via environment variables (for 2.x, see https://docs.python.org/2/using/cmdline.html#environment-variables; for 3.x, see https://docs.python.org/3/using/cmdline.html#environment-variables).
If needed, you can supply your own environment to the shell via the env parameter; note, however, that you'll have to supply the entire environment in that event, potentially including variables such as USER and HOME, if needed; simple example, defining $PATH explicitly:
subprocess.call('echo $PATH', shell=True, env={'PATH': '/sbin:/bin:/usr/bin'})
The shell does I/O redirection for a process. Based on what you're saying, the subprocess module does not do I/O redirection like that. To demonstrate, run:
subprocess.call(["sh","-c", "./x <inp.txt"])
That runs the shell and should redirect the I/O. With your code, your program ./x is being given an argument <inp.txt which it is ignoring.
NB: the alternative call to subprocess.call is purely for diagnostic purposes, not a recommended solution. The recommended solution involves reading the (Python 2) subprocess module documentation (or the Python 3 documentation for it) to find out how to do the redirection using the module.
import subprocess
# connect the file directly to the child's stdin; no shell involved
i_file = open("inp.txt")
subprocess.call("./x", stdin=i_file)
i_file.close()
If your script is about to exit so you don't have to worry about wasted file descriptors, you can compress that to:
import subprocess
subprocess.call("./x", stdin=open("inp.txt"))
By default, the subprocess module does not pass the arguments to the shell. Why? Because running commands via the shell is dangerous; unless they're correctly quoted and escaped (which is complicated), it is often possible to convince programs that do this kind of thing to run unwanted and unexpected shell commands.
Using the shell for this would be wrong anyway. If you want to take input from a particular file, you can use subprocess.Popen, setting the stdin argument to a file descriptor for the file inp.txt (you can get the file descriptor by calling fileno() on a Python file object).
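A sketch of that approach (note that stdin also accepts the file object itself, so an explicit fileno() call isn't required):
import subprocess
# the open file becomes the child's stdin at the OS level;
# no shell is involved in the redirection
with open("inp.txt") as i_file:
    proc = subprocess.Popen(["./x"], stdin=i_file)
    proc.wait()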

python script argument misinterpreted in Hudson Execute Shell step

When I run my python script in the shell terminal, it works
sudo myscript.py --version=22 --base=252 --hosts="{'hostA':[1],'hostB':[22]}"
But when I run in Hudson and Jenkins, using Execute Shell step, somehow, the string --hosts="{'hostA':[1],'hostB':[22]}" is interpreted as
sudo myscript.py --version=22 --base=252 '--hosts="{'hostA':[1],'hostB':[22]}"'
How do we overcome this so that our script runs in Jenkins and Hudson?
Thank you.
Sincerely
It looks like you're encountering a battle-of-the-quoted-strings type situation due to your use of quotes directly and the fact that Jenkins is shelling out from a generated temp shell script.
I find the best thing to do with Jenkins is to create a bash script that wraps the commands you want to run (and you can also have it do any other environment-related setup you may want to have it do, such as source a config bash script that sets up other env vars).
You can have it accept arguments that may vary, which can be passed to it from the Jenkins config. Any interpolation then happens within the script; you're just passing strings. (In particular, in this case, the hosts arg will be "{'hostA':[1],'hostB':[22]}", which is passed to the shell script and then interpolated there, with the double quotes re-included.)
So, to that end, say you have a jenkins_run.sh script that runs a command like this:
myscript.py --version=$VERSION --base=$BASE --hosts="$HOSTS"
where the variables are passed in as arguments and assigned beforehand (you could use the positional parameters $1, $2, et al. directly if you want).
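For illustration, a sketch of such a wrapper, with the variable names from the example above (adjust paths and argument order to your job):
#!/bin/bash
# jenkins_run.sh -- assign positional parameters to named variables
VERSION=$1
BASE=$2
HOSTS=$3
myscript.py --version=$VERSION --base=$BASE --hosts="$HOSTS"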
I would also be cautious about using sudo in conjunction with a Jenkins run, since that could end up prompting for input. I would instead recommend setting the permissions on the script such that the user under which Jenkins runs can simply execute it.
