So, I thought it would be cool if I could get my dev environment up and running in one fell swoop with some Python magic: various DBs, a web server, etc.
However, every variation of the following that I have tried seems to fail with 'file not found'.
p2 = Popen(["exec", "/path/to/redis/server"], stdin=p1.stdout, stdout=PIPE)
output = p2.communicate()[0]
Running the command directly from the shell (i.e. exec /path/to/redis/server) works just fine. Strangely enough, a simple command such as uptime works fine.
Any clues as to what is going on? Also, whilst we are on the topic, is multiprocessing the thing to use when I want to run many of these external processes in parallel?
Thanks
exec is a builtin command in bash, not an executable. The 'file not found' error probably comes from exec itself not being found in the $PATH.
I would try omitting "exec" in the Popen call.
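A minimal sketch of the corrected call (keeping the question's placeholder path, and assuming p1 is an earlier Popen feeding the server's stdin):
from subprocess import Popen, PIPE

# Run the redis server binary directly; "exec" is a shell builtin,
# not a file Popen can execute.
p2 = Popen(["/path/to/redis/server"], stdin=p1.stdout, stdout=PIPE)
output = p2.communicate()[0]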
Related
I have a Python script which runs bash scripts via the subprocess library. I need to collect stdout and stderr into files, so I have a wrapper like:
import subprocess

def execute_shell_script(stage_name, script):
    subprocess.check_output('{} &>logs/{}'.format(script, stage_name), shell=True)
And it works correctly when I launch my Python script on a Mac. But if I launch it in a Docker container (FROM ubuntu:18.04), I can't see any log files. I can fix it if I use bash -c 'command &>log_file' instead of just command &>log_file inside subprocess.check_output(...). But it looks like too much magic.
I thought about the default shell for the user which launches the Python script (it's root), but cat /etc/passwd shows root ... /bin/bash.
It would be nice if someone could explain to me what happened. And maybe I can add some lines to the Dockerfile so the same Python script works both inside and outside the Docker container?
As the OP reported in a comment that this fixed their problem, I'm posting it as an answer so they can accept it.
Using check_output when you don't expect any output is weird, and requiring shell=True here is misdirected. (What actually happened: with shell=True, subprocess runs the command through /bin/sh, which on ubuntu:18.04 is dash, not bash; &> is a bash-only operator, so dash parses it as "run in the background, then truncate the file", and the output never lands in the log. On macOS, /bin/sh is bash, which is why it worked there.) You want
import os
import subprocess

with open(os.path.join('logs', stage_name), 'w') as output:
    subprocess.run([script], stdout=output, stderr=output)
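If you would rather keep the shell redirection syntax, a minimal alternative sketch is to point shell=True at bash explicitly via the documented executable parameter, so the bash-only &> operator is understood:
import subprocess

# Run the command through bash instead of the default /bin/sh,
# which is dash on ubuntu:18.04 and does not support &>.
subprocess.check_output('{} &>logs/{}'.format(script, stage_name),
                        shell=True, executable='/bin/bash')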
I'm trying to write my own shell script in Python for SSH to call (using the SSH command= parameter in authorized_keys files). Currently I'm simply calling the original SSH command (it is set as an environment variable by SSH prior to the script being called). However, I always end up with a git error about the repository hanging up unexpectedly.
My Python code is literally:
#!/usr/bin/python
import os
import subprocess
if os.environ.get('SSH_ORIGINAL_COMMAND') is not None:
    subprocess.Popen(os.environ['SSH_ORIGINAL_COMMAND'], shell=True)
else:
    print 'who the *heck* do you think you are?'
Please let me know what is preventing the git command from successfully allowing the system to work. For reference, the command that is being called on the server when a client calls git push is git-receive-pack /path/to/repo.git.
Regarding the Python code shown above, I have tried using shell=True and shell=False (correctly passing the command as a list when False), and neither works correctly.
Thank you!
Found the solution!
You'll need to call the communicate() method of the subprocess object created by Popen call.
proc = subprocess.Popen(args, shell=False)
proc.communicate()
I'm not entirely sure why, but I think it has to do with the communicate() method also allowing data to be exchanged via stdin. I thought the process would automatically accept input since I didn't override the input stream anywhere, but perhaps a manual call to communicate() is needed to kick things off... hopefully someone can weigh in here!
You also can't use stdout=subprocess.PIPE, as it will cause the command to hang. (Most likely because nothing ever reads the pipe, so its buffer fills up and git blocks on its next write.) Hopefully this at least helps someone in the future!
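Putting both fixes together, an untested sketch of the full wrapper from the question:
#!/usr/bin/python
import os
import subprocess

cmd = os.environ.get('SSH_ORIGINAL_COMMAND')
if cmd is not None:
    # Leave stdin/stdout connected to the SSH session so git's pack
    # protocol can flow through, and wait for the command to finish.
    proc = subprocess.Popen(cmd, shell=True)
    proc.communicate()
else:
    print 'who the *heck* do you think you are?'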
Before I start, some notes on my environment: Python 2.7.14 installed through Miniconda, macOS 10.13.3.
The Problem
I'm trying to write a data processing pipeline (call it analyze.py) in Python which calls several different programs. One of those programs, EMAN2, uses its own Python environment to run (located, say, at ~/EMAN2/bin/python). From the command line, a call to EMAN2 might look something like this:
~/EMAN2/bin/e2buildstacks.py <args>
where e2buildstacks.py specifies its own Python environment using the standard #!/Users/<username>/EMAN2/bin/python formulation at the top of the file (note: there are a lot of these different .py files to run; I'm just using one name as an example).
In python, I'm using a subprocess.Popen construction for these calls. A very simple example would be:
import subprocess
stacks_cmd = '/Users/<username>/EMAN2/bin/e2buildstacks.py <args>'
process = subprocess.Popen(stacks_cmd, shell=True, stdout=subprocess.PIPE)
stacks_output, stacks_error = process.communicate()
Where I've replaced the actual args and username, for simplicity.
If this is all I'm doing, it works fine. I can run python analyze.py from the command line and it runs EMAN2 without problem.
However, this pipeline is getting wrapped up in a GUI (wxPython-based), so I have to use pythonw to run my program. When I try to do:
pythonw analyze.py
It can't run the EMAN2 scripts correctly, I get an error:
Traceback (most recent call last):
File "/Users/<username>/EMAN2/bin/e2buildstacks.py", line 34, in <module>
from EMAN2 import *
ImportError: No module named EMAN2
This indicates to me that it's using my miniconda python rather than the ~/EMAN2/bin/python to run the script.
(Note: if anyone uses EMAN2, I can provide you with the full argument list and the input files. But rest assured that's not the issue; I can run the command I'm using just fine from the command line.)
What I've tried
I've tried several different things to fix this problem. The simplest was specifying the python to be used in the command:
import subprocess
stacks_cmd = '/Users/<username>/EMAN2/bin/python /Users/<username>/EMAN2/bin/e2buildstacks.py <args>'
process = subprocess.Popen(stacks_cmd, shell=True, stdout=subprocess.PIPE)
stacks_output, stacks_error = process.communicate()
That didn't work, and fails with the same error.
I've tried using Popen in shell=False mode:
import subprocess
stacks_cmd = ['/Users/<username>/EMAN2/bin/python', '/Users/<username>/EMAN2/bin/e2buildstacks.py', <args>]
process = subprocess.Popen(stacks_cmd, shell=False, stdout=subprocess.PIPE)
stacks_output, stacks_error = process.communicate()
or
import subprocess
stacks_cmd = ['/Users/<username>/EMAN2/bin/e2buildstacks.py', <args>]
process = subprocess.Popen(stacks_cmd, shell=False, stdout=subprocess.PIPE)
stacks_output, stacks_error = process.communicate()
Both of those fail with the same error.
I've tried specifying the executable:
import subprocess
stacks_cmd = ['/Users/<username>/EMAN2/bin/e2buildstacks.py', <args>]
process = subprocess.Popen(stacks_cmd, shell=False, stdout=subprocess.PIPE, executable='/Users/<username>/EMAN2/bin/python')
stacks_output, stacks_error = process.communicate()
This one gives a different error:
Unknown option: --
usage: /Users/<username>/EMAN2/bin/e2buildstacks.py [option] ... [-c cmd | -m mod | file | -] [arg] ...
Try `python -h' for more information.
This makes me think that somehow the arguments aren't getting passed correctly to the script, but are rather being passed to Python itself. (That is in fact what executable does: it replaces the binary that runs, but the original argument list is passed through unchanged, so the script path ends up as argv[0], which is ignored, and the EMAN2 python tries to parse the remaining arguments as its own options.)
I've tried setting the environment variable:
import os
my_env = os.environ.copy()
my_env['PATH'] = '/Users/<username>/EMAN2/bin:'+my_env['PATH']
my_env['PYTHONPATH'] = '/Users/<username>/EMAN2/bin'
And then passing that to subprocess.Popen in any of the above commands as env=my_env. This fails with the same errors.
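For concreteness, a sketch of one such failing call with the environment attached (same hypothetical paths as above):
import os
import subprocess

# Put the EMAN2 environment first on PATH/PYTHONPATH before spawning.
my_env = os.environ.copy()
my_env['PATH'] = '/Users/<username>/EMAN2/bin:' + my_env['PATH']
my_env['PYTHONPATH'] = '/Users/<username>/EMAN2/bin'

process = subprocess.Popen(stacks_cmd, shell=True, stdout=subprocess.PIPE,
                           env=my_env)
stacks_output, stacks_error = process.communicate()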
At this point, I'm pretty much out of ideas and hoping someone can help. Again, this is only happening with pythonw, not python.
Things I've read looking for solutions
I couldn't find anything on Stack Overflow that quite matched this problem. Things I've looked at:
subprocess a program using another version of python
Never answered successfully.
Module cannot be found when using "pythonw" (instead of "python") to run an application
The problem is not which python/pythonw I'm using; both come from Miniconda.
Python subprocess is running a different version of Python
This is kind of the exact opposite of my problem, so not much use.
https://github.com/ContinuumIO/anaconda-issues/issues/199
This suggests that the Miniconda pythonw is a bit of a hack that can cause various problems, but doesn't directly offer a solution (let's say that using another version of Python is strongly discouraged).
Python subprocess/Popen with a modified environment
Python Script not running in crontab calling pysaunter
These led me to trying to modify the environment variables.
Final note
Changing the python distribution being used is possible, but very much not the ideal solution. I'm writing something that will be run in several different environments (including linux and windows, which I haven't tested on yet) and by people who aren't me, so I need a more bulletproof solution.
Any help folks can provide is much appreciated.
I am pulling my hair out here. I am spawning a process whose output I need to capture in Python.
When I run the command in the cmd window it runs fine, but when I try to run it via Python the terminal hangs.
p = subprocess.Popen(startcmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
(out, err) = p.communicate()
Where startcmd is a string which when printed in the Python console looks like this:
"C:/Program Files/GRASS GIS 7.2.1/grass72.bat" --version
If I copy and paste this into a Windows cmd, it shows the version information and returns control to the command prompt about a second later, but in Python it freezes up.
I should point out that if I replace the startcmd string with something like "dir" or even "python --version", it works fine!
Additional: I have tried shell=True, this has the same result.
Additional: I have tried sending the cmd and arguments through as an array as suggested in an answer below given that shell=False, but this also hangs the same.
Additional: I have added the GRASS path to the system PATH, so that now I can simply call grass72 --version in the cmd window to get a result, however this also still freezes in Python but works fine in cmd.
Additional: I have created a basic .bat file to test if .bat files run ok via Python, here is what I created:
@echo off
title Test Batch Script
echo I should see this message
This runs fine both in cmd, and in Python.
Problem found but not solved!
So, I'm running the script which spawns the process via subprocess.Popen under Python 3.6. The .bat file which is spawned launches a Python script using a version of Python (based on 2.7) which comes shipped with GRASS:
%GRASS_PYTHON% "\BLAH\BLAH\grass72.py"
What is interesting is that if I launch the subprocess.Popen script with Python 2.7, it works fine. Aha, you may think: solved! But this doesn't solve my problem, because I really need Python 3.6 to be launching the process. Also, why does it matter what version of Python launches the batch file? The new Python script which is spawned is launched with Python 2.7 anyway.
Since I started redirecting stdout, I can see that there is an error when I use Python 3.6 to launch the process:
File "C:\ProgramData\Anaconda3\lib\site.py", line 177
file=sys.stderr)
^
SyntaxError: invalid syntax
Notice it's reverting to Anaconda3! Even though it is launched using the python.exe from 2.7!
I experienced the same issue with Python 3.6 and 3.7 on Windows hanging for subprocess calls:
p = subprocess.Popen(startcmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
(out, err) = p.communicate()
Upon closer investigation I noticed this occurs only if the process writes more than about 4 KB (4096 bytes) of output, which might explain why your short script does not reproduce this.
A workaround I found is using tempfile in the standard library:
import subprocess
import tempfile
import time

# Write to temporary files because pipe redirection seems broken
with tempfile.NamedTemporaryFile(mode="w+") as tmp_out, \
        tempfile.NamedTemporaryFile(mode="w+") as tmp_err:
    p = subprocess.Popen(startcmd, stdout=tmp_out, stderr=tmp_err,
                         universal_newlines=True)
    # `run` waits for the command to complete; `Popen` lets the Python
    # program continue, so poll until the process exits
    while p.poll() is None:
        time.sleep(.1)
    # The file cursor sits after the last write; rewind to read the output
    tmp_out.seek(0)
    tmp_err.seek(0)
    out = tmp_out.read()
    err = tmp_err.read()
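As an aside, if the pipe issue doesn't bite in your environment, the Python 3.7+ equivalent is a single subprocess.run call with capture_output:
import subprocess

# run() blocks until the command finishes and captures both streams.
result = subprocess.run(startcmd, capture_output=True, universal_newlines=True)
out, err = result.stdout, result.stderr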
You don't specify shell=True in your arguments to Popen. The recommended usage in that case is to specify a sequence of arguments instead of a string. So you should set startcmd equal to ["C:/Program Files/GRASS GIS 7.2.1/grass72.bat", "--version"].
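That is, something like:
startcmd = ["C:/Program Files/GRASS GIS 7.2.1/grass72.bat", "--version"]
p = subprocess.Popen(startcmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
(out, err) = p.communicate()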
Try this:
p = subprocess.Popen(startcmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)
When I launch a PowerShell script from Python, the delay seems to be approximately 45s, and I cannot figure out why.
I'm trying to run a PowerShell script (accessing some APIs only available to PowerShell) from a Python script.
I've tried a lot of permutations, and all incur ~45 second delay compared to just running the script from a command prompt, using an identical command line.
For example - sample.ps1 might say:
echo foo
And runner.py might say:
import subprocess
p = subprocess.Popen([POWERSHELL, '-File', 'sample.ps1'], stdout=subprocess.PIPE)
d = p.stdout.read()
Running the .ps1 script directly is fast, running it via runner.py (Python 2.7, 32bit on a 64bit machine) incurs 45 second delay.
The exact same thing occurs if I use os.system, or Twisted's built-in process tools. So I suspect it's some subtle interaction between the Python interpreter and the PowerShell interpreter, possibly related to creation of console windows, or handling of the stdin/stdout/stderr streams (which I know don't "really exist" in the same way on Windows).
I do not see any such delays; it is pretty snappy (that will also depend on what your script actually does). Try using call:
from subprocess import call
call(["powershell", "sample.ps1"])
PowerShell loads your user's profile by default. Use the -NoProfile argument to turn that behavior off:
import subprocess
p = subprocess.Popen([POWERSHELL, '-NoProfile', '-File', 'sample.ps1'], stdout=subprocess.PIPE)
d = p.stdout.read()