`write /dev/stdout: broken pipe` when calling docker run from Python

I have the following Python script:
import os
os.popen(f'docker run --rm -t --link mosquitto-server ruimarinho/mosquitto mosquitto_pub -h mosquitto-server -t "test_topic" -m "{message}"')
Now, this script works as expected and the docker command is executed, but every time I run this line the following error appears in the terminal:
write /dev/stdout: broken pipe
Can someone please help me get rid of it? I searched for a solution, but every post I found is about Docker itself, and none of the suggested fixes work for me.

Use os.system() if you're not going to read the pipes opened by os.popen(); the error appears because the pipe's read end is discarded while the command is still writing to it. os.system() will not capture output, though. If you need the output, try e.g. subprocess.check_output(), which reads the pipes for you.
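For example, a minimal sketch of the check_output route (the image, link, and topic come from the question; message is assumed to be defined earlier in the script, and passing the arguments as a list also sidesteps shell quoting of its contents):
import subprocess

# Same docker run as in the question, expressed as an argument list.
# `message` is assumed to be defined by the surrounding script.
output = subprocess.check_output([
    'docker', 'run', '--rm', '-t', '--link', 'mosquitto-server',
    'ruimarinho/mosquitto', 'mosquitto_pub',
    '-h', 'mosquitto-server', '-t', 'test_topic', '-m', message,
])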

Related

python subprocess.run via crontab

I wrote a Python script that calls a docker-compose exec command via subprocess.run. The script works as intended when I run it manually from the terminal; however, I haven't gotten it to work as a cron job yet.
The subprocess part looks like this:
subprocess.run(command, shell=True, capture_output=True, text=True, cwd='/home/ubuntu/directory_of_the_script/')
and the command itself is something like
docker-compose exec container_name python3 folder/scripts/another_script.py -parameter
I assume it has something to do with the paths, but all the reading and googling I did recently didn't get me to fix the issue.
Can anyone help me out?
Thanks!
I added cwd='/home/ubuntu/directory_of_the_script/' to the subprocess call hoping to fix the path issues.
A cd /path/to/script in the cronjob didn't fix it either.
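Two things bite most cron jobs like this: cron runs with a minimal PATH (so docker-compose may simply not be found) and without a TTY (docker-compose exec allocates one by default). A sketch worth trying, assuming docker-compose lives at /usr/local/bin/docker-compose; -T disables pseudo-TTY allocation:
import subprocess

# Use an absolute path to docker-compose and -T (no TTY), since cron
# provides neither a full PATH nor a terminal.
command = ('/usr/local/bin/docker-compose exec -T container_name '
           'python3 folder/scripts/another_script.py -parameter')
result = subprocess.run(command, shell=True, capture_output=True, text=True,
                        cwd='/home/ubuntu/directory_of_the_script/')
print(result.stdout)
print(result.stderr)  # cron failures usually show up here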

python: get the stdout of a docker process launched with subprocess.Popen (--progress not available anymore)

I'm using Popen to launch a docker-compose process:
import subprocess

print(subprocess.Popen('docker-compose -f docker-compose_dev.yml up', shell=True,
                       stdout=subprocess.PIPE).stdout.read())
I get this output:
Use 'docker scan' to run Snyk tests against images to find vulnerabilities and learn how to fix them
b''
Creating network "wagtail-react-blog_spa_network" with the default driver
Creating wagtail-react-blog_backend_1 ... done
Creating wagtail-react-blog_frontend_1 ... done
Creating wagtail-react-blog_nginx_1 ... done
But I don't get the output of the command running inside the docker container, which usually appears when I run this command in the terminal. Do you know how I can retrieve it? In another script I use subprocess.check_call to run an sh script and everything is printed, but I would like to avoid writing an sh script.
Thanks
EDIT: it used to work with the --progress flag, but I don't see it in the docs anymore and I don't get any output. I tried with stdout.read and stderr.read but it's the same.
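One way to see the logs as they are produced (a sketch; docker-compose writes much of its own status output to stderr, so merging the two streams and reading line by line avoids losing either, and avoids the single blocking .read() that only returns once the process exits):
import subprocess

proc = subprocess.Popen(
    'docker-compose -f docker-compose_dev.yml up',
    shell=True,
    stdout=subprocess.PIPE,
    stderr=subprocess.STDOUT,  # fold compose's status messages into stdout
    text=True,
)
for line in proc.stdout:  # stream each line as it arrives
    print(line, end='')
proc.wait()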

strange problem with running bash scripts from python in docker

I have a Python script which runs bash scripts via the subprocess library. I need to collect stdout and stderr into files, so I have a wrapper like:
def execute_shell_script(stage_name, script):
    subprocess.check_output('{} &>logs/{}'.format(script, stage_name), shell=True)
And it works correctly when I launch my Python script on a Mac. But if I launch it in a docker container (FROM ubuntu:18.04) I can't see any log files. I can fix it if I use bash -c 'command &>log_file' instead of just command &>log_file inside subprocess.check_output(...). But it looks like too much magic.
I thought about the default shell for the user which launches the Python script (it's root), but cat /etc/passwd shows root ... /bin/bash.
It would be nice if someone could explain to me what happened. And maybe I can add some lines to the Dockerfile so the same Python script works both inside and outside the container?
Since the OP reported in a comment that this fixed their problem, I'm posting it as an answer so they can accept it.
Using check_output when you don't expect any output is weird, and requiring shell=True here is misdirected. (The shell dependency most likely explains the Mac/Docker difference: with shell=True Python runs /bin/sh, which on Ubuntu 18.04 is dash. The &> redirection is a bashism that dash parses as "background the command, then truncate the file", whereas macOS's /bin/sh is bash, where &> works.) You want
import os
import subprocess

with open(os.path.join('logs', stage_name), 'w') as output:
    subprocess.run([script], stdout=output, stderr=output)
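A quick way to see the dash/bash difference for yourself (a sketch; /tmp/out is an arbitrary illustrative path):
import subprocess

# Under dash (Ubuntu's /bin/sh), '&>' parses as '&' plus '>': echo is
# backgrounded and /tmp/out is merely truncated, so the file ends up empty.
subprocess.run('echo hello &>/tmp/out', shell=True)

# Under bash, '&>' redirects both stdout and stderr, so 'hello' lands in the file.
subprocess.run(['bash', '-c', 'echo hello &>/tmp/out'])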

"Command not found" when using python for shell scripting

I have this python script:
#!/usr/bin/python
print 'hi'
I'm trying to send this script as a job to be executed on a computing cluster. I'm sending it with qsub like this: qsub myscript.py
Before running it I executed the following:
chmod +x myscript.py
However when I open the output file I find this:
Warning: no access to tty (Bad file descriptor).
Thus no job control in this shell.
And when I open the error file I find this:
print: Command not found.
So what's wrong?!
Edit: I followed the instructions in this question
It looks like qsub isn't reading your shebang line, so it is simply executing your script with the default shell.
This answer provides a few options on how to deal with this, depending on your system: How can I use qsub with Python from the command line?
An option is to set the interpreter to python like so:
qsub -S /usr/bin/python myscript.py
I am quite sure there is an alternate way to do this without the -S option and have SGE execute the code based on the interpreter in the shebang; however, this solution might be enough for your needs.
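One such alternative, as a sketch (assuming an SGE-style scheduler that honors #$ directives embedded in the submitted script; the directive mirrors the -S flag above):
#!/usr/bin/python
#$ -S /usr/bin/python
print 'hi'
With the directive in place, a plain qsub myscript.py should pick the right interpreter.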
Also, concerning this output:
Warning: no access to tty (Bad file descriptor).
Thus no job control in this shell.
It seems safe to ignore this:
http://www.linuxquestions.org/questions/linux-software-2/warning-no-access-to-tty-bad-file-descriptor-702671/
EDIT:
Also works:
qsub <<< "./myscript.py"
qsub <<< "python ./myscript.py"

Python os.system() in Eclipse

I'm using Python to call someone's program:
print cmd
os.system(cmd)
The following is the output of the print command, which shows that cmd calls sclite with a few parameters and then redirects the output to dump:
C:/travel/sctk-2.4.0/bin/sclite -r C:/travel/tempRef.txt -h C:/travel/tempTrans.txt -i spu_id > C:/travel/dump
When I run the command in cygwin, dump contains the desired output. When I open Python in cygwin and use os.system(cmd) there, dump contains the desired output. If I run my Python script from cygwin, dump contains the desired output. When I run my Python script in Eclipse, dump contains nothing, i.e., the file is created but nothing is written to it.
I've tried the same with subprocess.call(cmd, shell=True) with the same results: running the script in Eclipse results in an empty file while the others work fine. I'm guessing there's something wrong with Eclipse/PyDev, but I'm not sure what.
One workaround to this problem could be using Popen:
from subprocess import Popen

cmd = "C:/travel/sctk-2.4.0/bin/sclite -r C:/travel/tempRef.txt -h C:/travel/tempTrans.txt -i spu_id"
with open('C:/travel/dump', 'w') as f:
    p = Popen(cmd.split(), stdout=f)
    p.wait()  # block until sclite exits so the file is flushed and complete
But that still doesn't explain your strange behavior in Eclipse...
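If the file still comes out empty, it's worth ruling out a failing child process; a variant using subprocess.call, which blocks until the program exits and returns its exit status (paths as in the question):
from subprocess import call

cmd = "C:/travel/sctk-2.4.0/bin/sclite -r C:/travel/tempRef.txt -h C:/travel/tempTrans.txt -i spu_id"
with open('C:/travel/dump', 'w') as f:
    rc = call(cmd.split(), stdout=f)  # call() waits for sclite to finish
assert rc == 0, rc  # a non-zero status means sclite itself failed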
