I wrote a Python script that calls a docker-compose exec command via subprocess.run. The script works as intended when I run it manually from the terminal, but I haven't been able to get it to work as a cron job yet.
The subprocess part looks like this:
subprocess.run(command, shell=True, capture_output=True, text=True, cwd='/home/ubuntu/directory_of_the_script/')
and the command itself is something like
docker-compose exec container_name python3 folder/scripts/another_script.py -parameter
I assume it has something to do with the paths, but all the reading and googling I did recently hasn't helped me fix the issue.
Can anyone help me out?
Thanks!
I added cwd='/home/ubuntu/directory_of_the_script/' to the subprocess call hoping to fix the path issues.
Adding cd /path/to/script to the cron job didn't fix it either.
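A common culprit in this situation (an assumption, since the crontab itself isn't shown) is cron's stripped-down environment rather than the Python code: cron's PATH is usually just /usr/bin:/bin, so a bare docker-compose may not be found, and docker-compose exec tries to allocate a TTY, which fails when there is no terminal. A minimal sketch with everything made absolute and TTY allocation disabled:

import subprocess

# Sketch only: the docker-compose path is an assumption; check yours with `which docker-compose`.
# '-T' tells docker-compose exec not to allocate a pseudo-TTY, which is unavailable under cron.
command = ('/usr/local/bin/docker-compose exec -T container_name '
           'python3 folder/scripts/another_script.py -parameter')

result = subprocess.run(command, shell=True, capture_output=True, text=True,
                        cwd='/home/ubuntu/directory_of_the_script/')
print(result.returncode, result.stdout, result.stderr)

Redirecting the cron job's own output to a file (e.g. appending >> /tmp/cron.log 2>&1 to the crontab line) then shows whether the failure is "command not found" or something inside compose.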
I have this Python script:
import os
os.popen(f'docker run --rm -t --link mosquitto-server ruimarinho/mosquitto mosquitto_pub -h mosquitto-server -t "test_topic" -m "{message}"')
Now, this script works as expected and the docker command is executed, but every time I run this line the following error appears in the terminal:
write /dev/stdout: broken pipe
Can someone please help me get rid of it? I searched for a solution, but every post is about Docker itself and none of the already posted solutions work for me.
Use os.system() if you're not going to read the pipe opened by os.popen(). Using os.system() will not capture the output, though; if you need that, you could try e.g. subprocess.check_output() (which reads the pipe for you).
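For completeness, a minimal sketch of both options applied to the command from the question (the message value is just a placeholder):

import os
import subprocess

message = "hello"  # placeholder payload for the sketch

cmd = ('docker run --rm -t --link mosquitto-server ruimarinho/mosquitto '
       f'mosquitto_pub -h mosquitto-server -t "test_topic" -m "{message}"')

os.system(cmd)  # fire-and-forget: output goes straight to the terminal, nothing is captured

output = subprocess.check_output(cmd, shell=True, text=True)  # captures stdout, so no unread pipe is left behind
print(output)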
I have a Python script which runs bash scripts via the subprocess library. I need to collect stdout and stderr into files, so I have a wrapper like:
def execute_chell_script(stage_name, script):
    subprocess.check_output('{} &>logs/{}'.format(script, stage_name), shell=True)
It works correctly when I launch my Python script on a Mac, but if I launch it in a Docker container (FROM ubuntu:18.04) I can't see any log files. I can fix it by using bash -c 'command &>log_file' instead of just command &>log_file inside subprocess.check_output(...), but that looks like too much magic.
I thought it might be the default shell of the user that launches the Python script (it's root), but cat /etc/passwd shows root ... /bin/bash.
It would be nice if someone could explain what is happening. And maybe I can add some lines to the Dockerfile so that the same Python script works both inside and outside the container?
As the OP reported in a comment that this fixed their problem, I'm posting it as an answer so they can accept it.
Using check_output when you don't expect any output is weird, and requiring shell=True here is misdirected. You want
with open(os.path.join('logs', stage_name), 'w') as output:
    subprocess.run([script], stdout=output, stderr=output)
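As to why the original version behaved differently inside the container (my reading, not part of the original answer): with shell=True, subprocess runs the command string through /bin/sh, which on Ubuntu is dash rather than bash, and dash does not treat &> as a redirection operator; macOS's /bin/sh does understand it, which is why the Mac worked and why wrapping the command in bash -c also worked. A tiny demonstration of the difference:

import subprocess

# Under /bin/sh on Ubuntu (dash), '&>' is parsed as '&' followed by '>', so out_sh.txt is
# created empty and the output still goes to the terminal; under bash both streams go to the file.
subprocess.run('echo hello &> out_sh.txt', shell=True)
subprocess.run(['bash', '-c', 'echo hello &> out_bash.txt'])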
I need to run a command from Python code and I tried both os.system and subprocess, but neither worked for some reason. Here's my code:
@app.route('/run-script')
def run_script():
    subprocess.call('python3.6 GoReport.py --id 31-33 --format word', cwd="working_dir", shell=True)
    return flask.render_template('results.html', **locals())
Running this command directly from the terminal works as it should. Trying to reproduce it from the Python interpreter on the command line works like a charm as well. However, it doesn't work when I use Flask. What's the reason for this?
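One way to see what is actually going wrong under Flask (a suggestion, not from the original thread) is to stop depending on the server process's working directory and interpreter lookup, and to log the child's stderr:

import os
import subprocess
import sys

BASE_DIR = os.path.dirname(os.path.abspath(__file__))

@app.route('/run-script')
def run_script():
    # Assumptions: GoReport.py sits in a 'working_dir' folder next to this file, and the interpreter
    # Flask runs under is acceptable; if it must be python3.6, use that interpreter's absolute path instead.
    result = subprocess.run([sys.executable, 'GoReport.py', '--id', '31-33', '--format', 'word'],
                            cwd=os.path.join(BASE_DIR, 'working_dir'),
                            capture_output=True, text=True)
    app.logger.info('GoReport exited %s: %s', result.returncode, result.stderr)
    return flask.render_template('results.html', **locals())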
So I've managed to edit my code to import the module instead of using subprocess and os.system. Thanks @tripleee for the explanation!
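For readers wondering what "import the module instead" can look like: a rough sketch, with the entry-point name and parameters entirely hypothetical since GoReport.py's API isn't shown in the thread:

import sys
sys.path.insert(0, 'working_dir')  # make GoReport importable from its folder

import GoReport  # assumes GoReport.py can be imported without running its CLI code

@app.route('/run-script')
def run_script():
    GoReport.generate(ids='31-33', fmt='word')  # hypothetical function and parameter names
    return flask.render_template('results.html', **locals())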
I have this python script:
#!/usr/bin/python
print 'hi'
I'm trying to send this script as a job to be executed on a computing cluster. I'm sending it with qsub like this: qsub myscript.py
Before running it I executed the following:
chmod +x myscript.py
However when I open the output file I find this:
Warning: no access to tty (Bad file descriptor).
Thus no job control in this shell.
And when I open the error file I find this:
print: Command not found.
So what's wrong?!
Edit: I followed the instructions in this question
It looks like qsub isn't reading your shebang line, so it is simply executing your script using the shell.
This answer provides a few options on how to deal with this, depending on your system: How can I use qsub with Python from the command line?
An option is to set the interpreter to python like so:
qsub -S /usr/bin/python myscript.py
I am quite sure there is an alternative way to do this without the -S option and have SGE execute the script based on the interpreter in its shebang line; however, this solution might be enough for your needs.
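For that shebang-based route, one possibility (assuming the cluster is SGE/Grid Engine with embedded directives enabled) is to put the interpreter choice into the script itself as a '#$' directive, so a plain qsub myscript.py picks it up:

#!/usr/bin/python
#$ -S /usr/bin/python
# The '#$' line above is an embedded qsub directive, equivalent to passing -S on the command line.
print 'hi'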
Also, concerning this output:
Warning: no access to tty (Bad file descriptor).
Thus no job control in this shell.
It seems safe to ignore this:
http://www.linuxquestions.org/questions/linux-software-2/warning-no-access-to-tty-bad-file-descriptor-702671/
EDIT:
These also work:
qsub <<< "./myscript.py"
qsub <<< "python ./myscript.py"
I'm using Python to call someone's program:
print cmd
os.system(cmd)
The following is the output of the print statement, which shows that cmd calls sclite with a few parameters and then redirects the output to dump:
C:/travel/sctk-2.4.0/bin/sclite -r C:/travel/tempRef.txt -h C:/travel/tempTrans.txt -i spu_id > C:/travel/dump
When I run the command in cygwin, dump contains the desired output. When I open Python in cygwin and use os.system(cmd) there, dump contains the desired output. If I run my Python script from cygwin, dump contains the desired output. When I run my Python script in Eclipse, dump contains nothing, i.e., the file is created but nothing is written to it.
I've tried the same with subprocess(cmd, shell=True) with the same results: running the script in Eclipse results in an empty file while the others work fine. I'm guessing there's something wrong with Eclipse/PyDev, but I'm not sure what.
One workaround for this problem could be using Popen:
from subprocess import Popen

cmd = "C:/travel/sctk-2.4.0/bin/sclite -r C:/travel/tempRef.txt -h C:/travel/tempTrans.txt -i spu_id"
with open('C:/travel/dump', 'w') as f:
    p = Popen(cmd.split(), stdout=f)
    p.wait()  # make sure sclite has finished before the file is closed
But that still doesn't explain your strange behavior in Eclipse...
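If anyone wants to chase the Eclipse difference further, a small diagnostic (just a suggestion, not part of the original answer) is to print what the script inherits when launched from Eclipse/PyDev, since the working directory and PATH there often differ from a Cygwin shell:

import os

print(os.getcwd())                         # working directory the script was launched in
print(os.environ.get('PATH', ''))          # PATH that child processes will inherit
print(os.environ.get('SHELL', 'not set'))  # login shell, if any, visible to the process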