Executing a Python module through Popen from a Python script/shell - python

I have a python module that is executed by the following command:
python3 -m moduleName args
I'm trying to execute it from a script using subprocess.Popen:
subprocess.Popen(command, shell=True, text=True, stdout=subprocess.PIPE)
The subprocess documentation recommends passing a sequence rather than a string, so I tried passing the command as the list below:
command = ['python3','-m','moduleName','args']
With this I end up getting an interactive Python shell instead of the module being executed. If I pass the command as a string, things work as expected. I haven't been able to find documentation or references for this behavior.
Can someone please help throw some light into this behavior?
What would be the best way to make this work?
Thanks!

This behavior is caused by the shell=True option. When Popen runs in shell mode (under POSIX), the command is appended to the shell command after a "-c" option (subprocess.py, Python 3.9):
args = [unix_shell, "-c"] + args
When that argument list is expanded, only the first element after '-c' (in your case, 'python3') is taken as the command string for the shell to run. The remaining elements ('-m', 'moduleName', 'args') become the shell's positional parameters ($0, $1, ...), not arguments to python3, so the shell runs a bare python3 and you land in the interactive interpreter.
The solution is to either
pass the command as a single string, as you did, or
not set the shell option for Popen, which is a good idea anyway: it is lighter on resources and avoids pitfalls like the one you encountered.
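The difference can be sketched with a stand-in command (`python3 -c 'print(42)'` replaces `python3 -m moduleName args` here, since moduleName is specific to your project):

```python
import subprocess

# Option 1: shell=True with a single string -- the whole command line
# is handed to /bin/sh -c as one argument.
proc = subprocess.Popen("python3 -c 'print(42)'",
                        shell=True, text=True, stdout=subprocess.PIPE)
out, _ = proc.communicate()
print(out.strip())  # 42

# Option 2 (preferred): a sequence without shell=True -- each list
# element becomes one argv entry; no shell is involved at all.
proc = subprocess.Popen(["python3", "-c", "print(42)"],
                        text=True, stdout=subprocess.PIPE)
out, _ = proc.communicate()
print(out.strip())  # 42
```

Both print 42; with a list plus shell=True, only the first element would reach the shell as a command.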

Related

How to use subprocess.call to run a Windows program

I'm trying to run a psql command in a Python script, with the subprocess command.
I use a Windows environment and the psql command aims to restore a database located in a remote Linux server.
The snippet is this one:
import os, sys
import subprocess
subprocess.call('psql -h ip_remote_server -p port -U user-d database -n schema --file="C:\Docs\script.sql"')
This does not work and the console tells that the specified file can't be found.
Any help would be greatly appreciated !
Thanks !
Yeah, your problem is definitely your paths. I went through the hassle of installing Python on Windows 10 and created these scripts:
example.bat
@echo off
echo This is a stand-in for your program
echo arg1 = %1
echo arg2 = %2
example.py
import subprocess
subprocess.call("C:\\Users\\bogus\\example.bat example arguments")
Console
C:\Users\bogus>python example.py
This is a stand-in for your program
arg1 = example
arg2 = arguments
As you can see, you do not need to pass shell=True, or split your command into a list.
If you look closely at the documentation for subprocess.call, you will see this (emphasis added):
The arguments shown above are merely some common ones. The full function signature is the same as that of the Popen constructor - this function passes all supplied arguments other than timeout directly through to that interface.
If you look closely at the documentation for subprocess.Popen, you will see this (emphasis added):
On Windows, if args is a sequence, it will be converted to a string in a manner described in Converting an argument sequence to a string on Windows. This is because the underlying CreateProcess() operates on strings.
Any advice about splitting your arguments into a list, or passing shell=True, only applies to POSIX, with one exception:
The only time you need to specify shell=True on Windows is when the command you wish to execute is built into the shell (e.g. dir or copy). You do not need shell=True to run a batch file or console-based executable.
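A sketch of the corrected call (the host, port, user, and database names are the asker's placeholders): a raw string keeps the backslashes in the Windows path from being read as escape sequences such as \t or \n, and the space that appears to be missing between the -U value and -d is restored:

```python
import subprocess

# Raw string: backslashes in the Windows path are passed through intact.
# Also note "-U user -d database" -- the original had "user-d" run together.
cmd = r'psql -h ip_remote_server -p port -U user -d database -n schema --file="C:\Docs\script.sql"'
print(cmd)
# subprocess.call(cmd)  # on Windows the string goes to CreateProcess as-is
```

Doubled backslashes ("C:\\Docs\\script.sql") in an ordinary string work just as well.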

How can I execute a bash command containing '&' with Popen

I want to run hcitool lescan --duplicates & hcidump -R using Popen. However, Popen does not seem to honor the & (the way it works in bash scripting) and fails with the error "lescan: too many arguments".
Am I doing something incorrect?
Popen does not interpret shell metacharacters like & by default. So, you need to pass shell=True to get it to work. Note that if you're including strings from external sources (e.g. user's files, or user input), then this can be dangerous.
See the frequently used arguments section of the documentation for details.
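A minimal sketch of the shell=True form, with `sleep 1` and `echo done` standing in for hcitool/hcidump (which need Bluetooth hardware and root privileges):

```python
import subprocess

# shell=True hands the whole line to /bin/sh, which understands "&"
# (run the first command in the background, then run the second).
proc = subprocess.Popen("sleep 1 & echo done",
                        shell=True, text=True, stdout=subprocess.PIPE)
out, _ = proc.communicate()
print(out.strip())  # done
```

An alternative that avoids the shell entirely is to start each command with its own Popen call and manage them from Python.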

Is there a way to change the SHELL program called by os.system()?

I'm in an embedded environment where /bin/sh doesn't exist, so when I call posix.system() or os.system() it always returns -1.
Because of that environment, the subprocess module isn't available. I can't create pipes either.
I'm using NaCl as glibc.
Setting the SHELL environment variable doesn't seem to work (it still tries to open /bin/sh).
So is there a way to change the shell process invoked by os.system()?
No, you can't change what shell os.system() uses, because that function makes a call to the system() C function, and that function has the shell hardcoded to /bin/sh.
Use the subprocess.call() function instead, and set the executable argument to the shell you want to use:
subprocess.call("command", shell=True, executable='/bin/bash')
From the Popen() documentation (which underlies all subprocess functionality):
On Unix with shell=True, the shell defaults to /bin/sh. If args is a string, the string specifies the command to execute through the shell.
and
If shell=True, on Unix the executable argument specifies a replacement shell for the default /bin/sh.
If you can't use subprocess and you can't use pipes, you'd be limited to the os.spawn*() functions; set mode to os.P_WAIT to wait for the exit code:
retval = os.spawnl(os.P_WAIT, '/bin/bash', '-c', 'command')
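The executable= replacement can be verified with $0, which inside a shell=True command names the shell actually running it (this sketch assumes /bin/bash exists on the system):

```python
import subprocess

# With shell=True, subprocess builds ['/bin/sh', '-c', command] and then
# swaps in executable= as the program to run, so $0 reports bash here.
out = subprocess.check_output("echo $0", shell=True,
                              executable="/bin/bash", text=True)
print(out.strip())  # /bin/bash
```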

Python subprocess stdout truncated by env variable $COLUMNS

I am calling a bash script in python (3.4.3) using subprocess:
import subprocess as sp
res = sp.check_output("myscript", shell=True)
and myscript contains a line:
ps -ef | egrep somecommand
It was not giving the same result as when myscript is directly called in a bash shell window. After much tinkering, I realized that when myscript is called in python, the stdout of "ps -ef" was truncated by the current $COLUMNS value of the shell window before being piped to "egrep". To me, this is crazy as simply by resizing the shell window, the command can give different results!
I managed to "solve" the problem by passing env argument to the subprocess call to specify a wide enough COLUMNS:
res = sp.check_output("myscript", shell=True, env={'COLUMNS':'100'})
However, this looks very dirty to me and I don't understand why the truncation only happens in python subprocess but not in a bash shell. Frankly I'm amazed that this behavior isn't documented in the official python doc unless it's in fact a bug -- I am using python 3.4.3. What is the proper way of avoiding this strange behavior?
You should use -ww; from man ps:
-w
Wide output. Use this option twice for unlimited width.
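Applied to the original call, this might look like the sketch below, with the egrep step done in Python so a no-match result doesn't raise an error (check_output raises on grep's non-zero exit code):

```python
import subprocess as sp

# -ww makes ps ignore the terminal width (and $COLUMNS) entirely,
# so the command lines are never truncated.
res = sp.check_output(["ps", "-efww"], text=True)
matches = [line for line in res.splitlines() if "somecommand" in line]
```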

How to run bash commands inside of a Python script [duplicate]

This question already has answers here:
Running Bash commands in Python
(11 answers)
Closed 2 years ago.
I am trying to run both Python and bash commands in a bash script.
In the bash script, I want to execute some bash commands enclosed by a Python loop:
#!/bin/bash
python << END
for i in range(1000):
    # execute some bash command, such as echoing i
END
How can I do this?
Use subprocess, e.g.:
import subprocess
# ...
subprocess.call(["echo", str(i)])  # every element of the list must be a string
There is another function like subprocess.call: subprocess.check_call. It behaves exactly like call, except that it raises an exception if the executed command returns a non-zero exit code. This is usually the desirable behaviour in scripts and utilities.
subprocess.check_output behaves the same as check_call, but returns the standard output of the program.
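A quick sketch of the difference, using the standard `true` and `false` utilities:

```python
import subprocess

# check_call returns 0 on success and raises CalledProcessError on a
# non-zero exit code, so failures cannot pass silently.
subprocess.check_call(["true"])   # exit code 0: returns normally

try:
    subprocess.check_call(["false"])  # exit code 1
except subprocess.CalledProcessError as err:
    print("command failed with code", err.returncode)  # 1
```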
If you do not need shell features (such as variable expansion, wildcards, ...), never use shell=True (shell=False is the default). If you use shell=True then shell escaping is your job with these functions and they're a security hole if passed unvalidated user input.
The same is true of os.system() -- it is a frequent source of security issues. Don't use it.
Look into the subprocess module. It provides the Popen constructor and some wrapper functions like call.
If you need to check the output (retrieve the result string):
output = subprocess.check_output(args ....)
If you want to wait for execution to end before proceeding:
exitcode = subprocess.call(args ....)
If you need more functionality like setting environment variables, use the underlying Popen constructor:
subprocess.Popen(args ...)
Remember that subprocess is the higher-level module; it is meant to replace legacy functions from the os module.
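For instance, setting environment variables with the Popen constructor might be sketched like this (note that env= replaces the child's whole environment, so start from a copy of os.environ; GREETING is just an illustrative variable):

```python
import os
import subprocess

# Copy the current environment and add one variable for the child.
child_env = dict(os.environ, GREETING="hello")
proc = subprocess.Popen(
    ["python3", "-c", "import os; print(os.environ['GREETING'])"],
    env=child_env, text=True, stdout=subprocess.PIPE)
out, _ = proc.communicate()
print(out.strip())  # hello
```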
I used this when running from my IDE (PyCharm).
import subprocess
subprocess.check_call('mybashcommand', shell=True)
