Python os.system() in Eclipse

I'm using Python to call someone's program:
import os

print cmd  # cmd is the sclite command string shown below
os.system(cmd)
The following is the output of the print statement, which shows that cmd calls sclite with a few parameters and then redirects the output to dump:
C:/travel/sctk-2.4.0/bin/sclite -r C:/travel/tempRef.txt -h C:/travel/tempTrans.txt -i spu_id > C:/travel/dump
When I run the command in cygwin, dump contains the desired output. When I open Python in cygwin and use os.system(cmd) there, dump contains the desired output. If I run my Python script from cygwin, dump contains the desired output. When I run my Python script in Eclipse, dump contains nothing, i.e., the file is created but nothing is written to it.
I've tried the same with subprocess.call(cmd, shell=True) with the same results: running the script in Eclipse produces an empty file while the others work fine. I'm guessing there's something wrong with Eclipse/PyDev, but I'm not sure what.

One workaround for this problem could be using Popen:
from subprocess import Popen

cmd = "C:/travel/sctk-2.4.0/bin/sclite -r C:/travel/tempRef.txt -h C:/travel/tempTrans.txt -i spu_id"
f = open('C:/travel/dump', 'w')
p = Popen(cmd.split(), stdout=f)  # let Python redirect stdout to the file
p.wait()                          # wait for sclite to finish writing
f.close()
But that still doesn't explain your strange behavior in Eclipse...
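As a variation on the workaround above, subprocess.call blocks until the command finishes and returns its exit code, which is worth checking when the output file comes back empty. A minimal sketch using the same paths as the question:
from subprocess import call

cmd = "C:/travel/sctk-2.4.0/bin/sclite -r C:/travel/tempRef.txt -h C:/travel/tempTrans.txt -i spu_id"

# Let Python handle the redirection instead of the shell; call() waits
# for sclite to exit, so dump is complete when the script moves on.
with open('C:/travel/dump', 'w') as f:
    ret = call(cmd.split(), stdout=f)
print ret  # a non-zero exit code usually means sclite itself failed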

Related

Running script file through subprocess

I'm attempting to run a Linux script through Python's subprocess module. Below is the subprocess command:
import subprocess

result = subprocess.run(['/dir/scripts/0_texts.sh'], shell=True)
print(result)
Here is the 0_texts.sh script file:
cd /dir/TXTs
pylanguagetool text_0.txt > comments_0.txt
The subprocess command executes the script file, writing a new comments_0.txt in the correct directory. However, there's an error in the execution: comments_0.txt contains the error "input file is required", and the subprocess result returns returncode=2. When I run pylanguagetool text_0.txt > comments_0.txt directly in the terminal, the command executes properly and comments_0.txt is written from the proper input file, text_0.txt.
Any suggestions on what I'm missing?
There is some ambiguity here in that it's not obvious which shell is run each time 0_texts.sh is invoked, and whether it has the values you expect for environment variables like PATH, which could result in a different copy of pylanguagetool running than when you call it at the command line.
First I'd suggest removing the shell=True option in subprocess.run, which only involves another, potentially different shell here. Next I would change subprocess.run(['/dir/scripts/0_texts.sh']) to subprocess.run(['bash', '/dir/scripts/0_texts.sh']) (or whichever shell you want to run, probably bash or dash) to remove that source of ambiguity. Finally, you can try using type pylanguagetool in the script, invoking pylanguagetool with its full path, or calling bash /dir/scripts/0_texts.sh from your terminal to debug the situation further.
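Putting those suggestions together, a minimal sketch (the script path is the one from the question; capture_output needs Python 3.7+, otherwise pass stdout=subprocess.PIPE and stderr=subprocess.PIPE):
import subprocess

# Run the script with an explicit interpreter instead of shell=True,
# so there is no doubt about which shell parses it.
result = subprocess.run(
    ['bash', '/dir/scripts/0_texts.sh'],
    capture_output=True,
    text=True,
)
print(result.returncode)
print(result.stderr)  # any error text the script itself prints ends up here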
A bigger-picture issue is that pylanguagetool is a Python library, so you're almost certainly better off calling its functions directly from your original Python script instead of using a shell script as an intermediary.

`write /dev/stdout: broken pipe` when calling docker run from python

I have the following Python script:
import os
os.popen(f'docker run --rm -t --link mosquitto-server ruimarinho/mosquitto mosquitto_pub -h mosquitto-server -t "test_topic" -m "{message}"')
Now, this script works as expected and the docker command is executed, but every time I run this line the following error appears in the terminal:
write /dev/stdout: broken pipe
Can someone please help me get rid of it? I did search for a solution, but every post is about Docker only and none of the already posted solutions work for me.
Use os.system() if you're not going to read the pipe opened by os.popen(). os.system() will not capture the output, though; if you need it, you could try e.g. subprocess.check_output(), which reads the pipe for you.
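For example, a rough sketch using the same docker invocation as the question (the message value here is just a placeholder for whatever you were interpolating into your f-string):
import subprocess

message = 'hello'  # placeholder for the variable from your f-string
cmd = ['docker', 'run', '--rm', '-t', '--link', 'mosquitto-server',
       'ruimarinho/mosquitto', 'mosquitto_pub',
       '-h', 'mosquitto-server', '-t', 'test_topic', '-m', message]

# If you don't need the output, just run it and let it write to the terminal:
subprocess.call(cmd)

# If you do need the output, read it instead of leaving the pipe unread:
output = subprocess.check_output(cmd)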

strange problem with running bash-scripts from python in docker

I have a Python script which runs bash scripts via the subprocess library. I need to collect stdout and stderr into files, so I have a wrapper like:
import subprocess

def execute_chell_script(stage_name, script):
    subprocess.check_output('{} &>logs/{}'.format(script, stage_name), shell=True)
And it works correctly when I launch my Python script on a Mac. But if I launch it in a docker container (FROM ubuntu:18.04), I can't see any log files. I can fix it if I use bash -c 'command &>log_file' instead of just command &>log_file inside subprocess.check_output(...), but it looks like too much magic.
I thought about the default shell of the user who launches the Python script (it's root), but cat /etc/passwd shows root ... /bin/bash.
It would be nice if someone explained to me what happened. And maybe I can add some lines to the Dockerfile so the same Python script works inside and outside the docker container?
As the OP reported in a comment that this fixed their problem, I'm posting it as an answer so they can accept it.
Using check_output when you don't expect any output is odd, and requiring shell=True here is misdirected. It also explains what happened: with shell=True the command is run by /bin/sh, which in ubuntu:18.04 is dash, not bash. The &> redirection is a bash extension, so dash parses command &>logs/... as "run command in the background, then truncate the log file", which is why the files are created but stay empty. On a Mac, /bin/sh is bash, so the same line works there, and prefixing the command with bash -c works for the same reason. You want
with open(os.path.join('logs', stage_name), 'w') as output:
    subprocess.run([script], stdout=output, stderr=output)
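If you do want to keep the shell-level redirection, spelling out bash removes the ambiguity; this is essentially your own bash -c workaround made explicit (the wrapper name below is just for illustration):
import subprocess

def execute_shell_script_via_bash(stage_name, script):
    # Hand the command line to bash explicitly, so '&>' is treated as a
    # redirection regardless of what /bin/sh points to in the image.
    subprocess.check_call(['bash', '-c', '{} &>logs/{}'.format(script, stage_name)])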

passing the output of one .py script to another in GDB

How do I pass the output of one .py script, which I executed in GDB using the command below:
(gdb) source myfilename.py
to another .py script myfilename2.py
E.g., on running myfilename.py in GDB I get the results below:
(gdb) source myfilename.py
(gdb) print $function("input")
$1 = someoutput
If I want to pass $1 as input to another .py script, how do I do it?
Thanks for the help.
I'm not sure if you can pass one Python program's output to another in GDB, but you can certainly pass the output of a command like this:
(gdb) run "`python -c 'print "\xff\xff\xff\xff"'`"
Maybe you can try passing the second .py file instead of the command, so that its output is passed into the first .py.
You cannot use the contents of the GDB value $1 as your program's input directly, since the program launch is performed by the shell, not by GDB.
You can store your script's output in a file, launch GDB on your executable, and use the command
(gdb) run < outputfile
Or use a named pipe, as suggested in this answer.
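A minimal sketch of the file-based approach (the function body and file name are made up for illustration, since the real myfilename.py isn't shown):
# myfilename.py, sourced inside GDB with: (gdb) source myfilename.py
# function() stands in for whatever your script actually computes.
def function(arg):
    return "someoutput"

# Write the result to a file so it can later be fed to the program with:
# (gdb) run < /tmp/gdb_input.txt
with open('/tmp/gdb_input.txt', 'w') as f:
    f.write(function("input"))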

"Command not found" when using python for shell scripting

I have this python script:
#!/usr/bin/python
print 'hi'
I'm trying to send this script as a job to be executed on a computing cluster. I'm sending it with qsub like this: qsub myscript.py
Before running it I executed the following:
chmod +x myscript.py
However when I open the output file I find this:
Warning: no access to tty (Bad file descriptor).
Thus no job control in this shell.
And when I open the error file I find this:
print: Command not found.
So what's wrong?!
Edit: I followed the instructions in this question
It looks like qsub isn't reading your shebang line, so it is simply executing your script using the shell.
This answer provides a few options on how to deal with this, depending on your system: How can I use qsub with Python from the command line?
One option is to set the interpreter to Python like so:
qsub -S /usr/bin/python myscript.py
I am quite sure there is an alternative way to do this without the -S option and have SGE execute the code based on the interpreter in the shebang; however, this solution might be enough for your needs.
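If your cluster is SGE or a close relative, another common pattern is to embed the interpreter as a #$ directive inside the script itself (a sketch; directive handling can vary between schedulers):
#!/usr/bin/python
#$ -S /usr/bin/python
# qsub reads lines starting with '#$' as submission options, so the job
# runs under Python even though the shebang itself is ignored.
print 'hi'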
Also, concerning this output:
Warning: no access to tty (Bad file descriptor).
Thus no job control in this shell.
It seems safe to ignore this:
http://www.linuxquestions.org/questions/linux-software-2/warning-no-access-to-tty-bad-file-descriptor-702671/
EDIT:
Also works:
qsub <<< "./myscript.py"
qsub <<< "python ./myscript.py"
