I have a Python script which runs bash scripts via the subprocess library. I need to collect stdout and stderr into files, so I have a wrapper like:
def execute_shell_script(stage_name, script):
    subprocess.check_output('{} &>logs/{}'.format(script, stage_name), shell=True)
And it works correctly when I launch my Python script on a Mac. But if I launch it in a Docker container (FROM ubuntu:18.04) I can't see any log files. I can fix it if I use bash -c 'command &>log_file' instead of just command &>log_file inside subprocess.check_output(...). But it looks like too much magic.
I thought about the default shell of the user who launches the Python script (it's root), but cat /etc/passwd shows root ... /bin/bash.
It would be nice if someone could explain to me what happened. And maybe I can add some lines to the Dockerfile so the same Python script works inside and outside the Docker container?
As the OP reported in a comment that this fixed their problem, I'm posting it as an answer so they can accept it.
Using check_output when you don't expect any output is weird, and requiring shell=True here is misdirected. (It also explains the "magic": with shell=True, subprocess runs /bin/sh, which on Ubuntu is dash, not bash. dash doesn't understand the bash redirection &>; it parses command &>file as "run command in the background, then truncate file", so the log file ends up empty. Your bash -c workaround simply switched the interpreter to one that supports &>.) You want
with open(os.path.join('logs', stage_name), 'w') as output:
    subprocess.run([script], stdout=output, stderr=output)
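Put together, a shell-free version of the wrapper might look like this (the run_logged name and the logs/ directory creation are illustrative additions, not from the original question):

```python
import os
import subprocess

def run_logged(stage_name, command):
    """Run `command` (a list of arguments) and send both stdout and
    stderr to logs/<stage_name>, without involving any shell."""
    os.makedirs('logs', exist_ok=True)
    log_path = os.path.join('logs', stage_name)
    with open(log_path, 'w') as log:
        # check=True raises CalledProcessError on a non-zero exit,
        # mirroring check_output's behaviour.
        subprocess.run(command, stdout=log, stderr=log, check=True)
    return log_path

log_file = run_logged('demo_stage', ['echo', 'hello from the script'])
```

Because no shell is involved, this behaves identically on macOS and in the Ubuntu container.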
Related
I wrote a Python script that calls a docker-compose exec command via subprocess.run. The script works as intended when I run it manually from the terminal; however, I haven't gotten it to work as a cron job yet.
The subprocess part looks like this:
subprocess.run(command, shell=True, capture_output=True, text=True, cwd='/home/ubuntu/directory_of_the_script/')
and the command itself is something like
docker-compose exec container_name python3 folder/scripts/another_script.py -parameter
I assume it has something to do with the paths, but all the reading and googling I did recently didn't get me to fix the issue.
Can anyone help me out?
Thanks!
I added cwd='/home/ubuntu/directory_of_the_script/' to the subprocess call hoping to fix the path issues.
Also, cd /path/to/script in the cron job didn't fix the thing.
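For what it's worth, a frequent cause of exactly this symptom (works in a terminal, fails under cron) is cron's minimal environment: PATH typically contains only /usr/bin:/bin, so a bare docker-compose may not resolve at all. One way to guard against that is to look up the binary explicitly; the fallback path below is a guess, not something from the question:

```python
import shutil

# Under cron, PATH is minimal, so resolve the binary explicitly.
# The fallback location is an assumption; adjust it to your system.
compose = shutil.which('docker-compose') or '/usr/local/bin/docker-compose'
command = [compose, 'exec', 'container_name', 'python3',
           'folder/scripts/another_script.py', '-parameter']
# subprocess.run(command, capture_output=True, text=True,
#                cwd='/home/ubuntu/directory_of_the_script/')
```

Logging result.stderr from the cron run would also show whether the failure is "command not found" or something inside the container.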
I have this Python script:
import os
os.popen(f'docker run --rm -t --link mosquitto-server ruimarinho/mosquitto mosquitto_pub -h mosquitto-server -t "test_topic" -m "{message}"')
Now, this script works as expected and the docker command is executed, but every time I run this line the following error appears in the terminal:
write /dev/stdout: broken pipe
Can someone please help me get rid of it? I did search for a solution, but every post is about Docker only and no already-posted solution works for me.
Use os.system() if you're not going to read the pipes opened by os.popen(). Using os.system() will not redirect output, though. If you need that, you could try e.g. subprocess.check_output() (which reads them for you).
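As a sketch of the check_output route (echo stands in for the docker run command from the question, since the exact invocation depends on your setup):

```python
import subprocess

# check_output reads the child's stdout for you, so nothing is left
# unread in the pipe when the process exits -- which is what triggers
# the "broken pipe" message with an abandoned os.popen() handle.
output = subprocess.check_output(['echo', 'message published'], text=True)
print(output.strip())
```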
I want to open a terminal from a Python script (not one marked as executable; I'm actually running python3 myscript.py), have the terminal run commands, and then keep the terminal open and let the user type commands into it.
EDIT (as suggested): I am primarily needing this for Linux (I'm using Xubuntu, Ubuntu and stuff like that). It would be really nice to know Windows 7/8 and Mac methods, too, since I'd like a cross-platform solution in the long-run. Input for any system would be appreciated, however.
Just so people know some useful stuff pertaining to this, here's some code that may be difficult to come up with without some research. This doesn't allow user-input, but it does keep the window open. The code is specifically for Linux:
import subprocess, shlex
myFilePathString = "/home/asdf asdf/file.py"
params = shlex.split('x-terminal-emulator -e bash -c "python3 \'' + myFilePathString + '\'; echo \'(Press any key to exit the terminal emulator.)\'; read -n 1 -s"')
subprocess.call(params)
To open it with the Python interpreter running afterward, which is about as good as, if not better than, what I'm looking for, try this:
import subprocess, shlex
myFilePathString = "/home/asdf asdf/file.py"
params = shlex.split('x-terminal-emulator -e bash -c "python3 -i \'' + myFilePathString + '\'"')
subprocess.call(params)
I say these examples may take some time to come up with because passing parameters to bash, which is itself being opened within another command, can be problematic without taking a few steps. Plus, you need to know to use quotes in the right places, or else, for example, if there's a space in your file path, you'll have problems and might not know why.
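One way to take the guesswork out of the nested quoting is shlex.quote, which escapes each layer safely; a small sketch (the command is only built and split back here, not executed):

```python
import shlex

path = "/home/asdf asdf/file.py"  # note the space in the path
# Quote the path for the inner python3 invocation, then quote the
# whole inner command for bash -c.
inner = "python3 {}".format(shlex.quote(path))
cmd = "x-terminal-emulator -e bash -c {}".format(shlex.quote(inner))

# Splitting the result back shows the layers survived intact.
parts = shlex.split(cmd)
```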
EDIT: For clarity (and part of the answer), I found out that there's a standard way to do this in Windows:
cmd /K [whatever your commands are]
So, if you don't know what I mean try that and see what happens. Here's the URL where I found the information: http://ss64.com/nt/cmd.html
I'm using Python to call someone's program:
print cmd
os.system(cmd)
This following is the output of the print command, which shows cmd calls sclite with a few parameters and then redirects the output to dump.
C:/travel/sctk-2.4.0/bin/sclite -r C:/travel/tempRef.txt -h C:/travel/tempTrans.txt -i spu_id > C:/travel/dump
When I run the command in cygwin, dump contains the desired output. When I open Python in cygwin and use os.system(cmd) there, dump contains the desired output. If I run my Python script from cygwin, dump contains the desired output. When I run my Python script in Eclipse, dump contains nothing, i.e., the file is created but nothing is written to it.
I've tried the same with subprocess.call(cmd, shell=True) with the same results: running the script in Eclipse results in an empty file, while the others work fine. I'm guessing there's something wrong with Eclipse/PyDev, but I'm not sure what.
One workaround to this problem could be using Popen --
from subprocess import Popen
cmd = "C:/travel/sctk-2.4.0/bin/sclite -r C:/travel/tempRef.txt -h C:/travel/tempTrans.txt -i spu_id"
with open('C:/travel/dump', 'w') as f:
    p = Popen(cmd.split(), stdout=f)
    p.wait()
But that still doesn't explain your strange behavior in Eclipse...
I am developing a FUSE filesystem in Python. The problem is that after mounting the filesystem I have no access to stdin/stdout/stderr from my FUSE script. I don't see anything, not even tracebacks. I am trying to launch pdb like this:
import pdb
pdb.Pdb(None, open('pdb.in', 'r'), open('pdb.out', 'w')).set_trace()
This works, but it is very inconvenient. I want to make pdb.in and pdb.out fifo files but don't know how to connect them correctly. Ideally I want to type commands and see output in one terminal, but I would be happy even with two terminals (put commands into one and see output in the other). Questions:
1) Is there a better/other way to run pdb without stdin/stdout?
2) How can I redirect stdin to the pdb.in fifo (everything I type must go to pdb.in)? How can I redirect pdb.out to stdout (I had strange errors with "cat pdb.out", but maybe I don't understand something)?
OK. Exactly what I want has been done in http://pypi.python.org/pypi/rpdb/0.1.1 .
Before starting the Python app:
mkfifo pdb.in
mkfifo pdb.out
Then, when pdb is called, you can interact with it using these two cat commands, one running in the background:
cat pdb.out & cat > pdb.in
Note that readline support does not work (i.e., the up arrow).
I just ran into a similar issue in a much simpler use case: debugging a simple Python program run from the command line with a file piped into sys.stdin, meaning there was no way to use the console for pdb.
I ended up solving it by using wdb.
Quick rundown for my use case. In the shell, install both the wdb server and the wdb client:
pip install wdb.server wdb
Now launch the wdb server with:
wdb.server.py
Now you can navigate to localhost:1984 with your browser and see an interface listing all Python programs running. The wdb project page above has instructions on what you can do if you want to debug any of these running programs.
As for a program under your control, you can debug it from the start with:
wdb myscript.py --script=args < and/stdin/redirection
Or, in your code, you can do:
import wdb; wdb.set_trace()
This will pop up an interface in your browser (if local) showing the traced program.
Or you can navigate to the wdb.server.py port to see all ongoing debugging sessions on top of the list of running Python programs, which you can then use to access the specific debugging session you want.
Notice that the commands for navigating the code during the trace are different from the standard pdb ones; for example, to step into a function you use .s instead of s, and to step over you use .n instead of n. See the wdb README in the link above for details.