I have this python script:
#!/usr/bin/python
print 'hi'
I'm trying to send this script as a job to be executed on a computing cluster. I'm sending it with qsub like this: qsub myscript.py
Before running it I executed the following:
chmod +x myscript.py
However when I open the output file I find this:
Warning: no access to tty (Bad file descriptor).
Thus no job control in this shell.
And when I open the error file I find this:
print: Command not found.
So what's wrong?!
Edit: I followed the instructions in this question
It looks like qsub isn't reading your shebang line, so it is simply executing your script with its default shell, which doesn't understand the Python print statement.
This answer provides a few options on how to deal with this, depending on your system: How can I use qsub with Python from the command line?
An option is to set the interpreter to python like so:
qsub -S /usr/bin/python myscript.py
I am quite sure there is an alternate way to do this without the -S option and have SGE execute the code based on the interpreter in the shebang; however, this solution might be enough for your needs.
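One such alternative (assuming SGE, and that your cluster is configured to honor embedded directives) is to put the interpreter choice inside the script itself with a `#$` directive line, so that a plain `qsub myscript.py` works:

```python
#!/usr/bin/python
# The '#$' line below is an SGE embedded directive, equivalent to passing
# -S /usr/bin/python on the qsub command line.
#$ -S /usr/bin/python

print('hi')  # the parenthesized form works in both Python 2 and 3
```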
Also, concerning this output:
Warning: no access to tty (Bad file descriptor).
Thus no job control in this shell.
It seems safe to ignore this:
http://www.linuxquestions.org/questions/linux-software-2/warning-no-access-to-tty-bad-file-descriptor-702671/
EDIT:
Also works:
qsub <<< "./myscript.py"
qsub <<< "python ./myscript.py"
I'm attempting to run a Linux script through Python's subprocess module. Below is the subprocess command:
result = subprocess.run(['/dir/scripts/0_texts.sh'], shell=True)
print(result)
Here is the 0_texts.sh script file:
cd /dir/TXTs
pylanguagetool text_0.txt > comments_0.txt
The subprocess command executes the script file, writing a new comments_0.txt in the correct directory. However, there's an error in the execution: comments_0.txt contains the error "input file is required", and the subprocess result returns returncode=2. When I run pylanguagetool text_0.txt > comments_0.txt directly in the terminal, the command executes properly, writing comments_0.txt from the proper input file text_0.txt.
Any suggestions on what I'm missing?
There is some ambiguity here in that it's not obvious which shell is run each time 0_texts.sh is invoked, and whether it has the values you expect of environment variables like PATH, which could result in a different copy of pylanguagetool running from when you call it at the command line.
First I'd suggest removing the shell=True option in subprocess.run, which is only involving another, potentially different shell here. Next I would change subprocess.run(['/dir/scripts/0_texts.sh']) to subprocess.run(['bash', '/dir/scripts/0_texts.sh']) (or whichever shell you wanted to run, probably bash or dash) to remove that source of ambiguity. Finally, you can try using type pylanguagetool in the script, invoking pylanguagetool with its full path, or calling bash /dir/scripts/0_texts.sh from your terminal to debug the situation further.
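A minimal, self-contained sketch of those first two suggestions (the script body here is a throwaway stand-in, not the real 0_texts.sh):

```python
import os
import subprocess
import tempfile

# Write a tiny stand-in shell script to a temporary file.
with tempfile.NamedTemporaryFile('w', suffix='.sh', delete=False) as f:
    f.write('echo "ran from $0"\n')
    path = f.name

# Invoke it through an explicit interpreter instead of shell=True,
# removing any ambiguity about which shell parses the script.
result = subprocess.run(['bash', path], capture_output=True, text=True)
os.unlink(path)
```

Because the interpreter is named explicitly, there is no question about which shell ran the script, and `capture_output` lets you inspect what it actually printed alongside `result.returncode`.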
A bigger-picture issue is, pyLanguageTool is a Python library, so you're almost certainly going to be better off calling its functions from your original Python script directly instead of using a shell script as an intermediary.
I am new to Stack Overflow and bash/sh commands. I ran into the following issue when trying to execute a shell command in 2 different ways:
Executing the script from the Bash CLI
Executing the script from a Python IDE
The script is as follows:
For /R C:\\Users\\userid\\Desktop\\my-test\\src\\api-explorer\\ %G IN (*.json) do widdershins "%G" -o "%G".md
The intent of the script is to recursively convert a number of Swagger.json files to Markdown files using a conversion tool called Widdershins.
The script runs fine when executing it from Python like this:
import subprocess

def convertSwaggerToMarkdown():
    cmd = 'For /R C:\\Users\\userid\\Desktop\\my-test\\src\\api-explorer\\ %G IN (*.json) do widdershins "%G" -o "%G".md'
    subprocess.run(cmd, shell=True)
Where it fails is when I try to execute the script in Bash directly. I've tried the recommendations from other users who encountered a similar error, which suggest adding either #!/bin/bash or #!/bin/sh at the beginning of the script, but when doing this the command does not execute and also does not produce any error.
I also tried suggestions to add " " around the (*.json), since this appears to be where the issue resides. Since the script executes in Python when shell=True, I'm certain there is a syntax error I am overlooking, and that I need a better understanding of how bash and sh scripts work.
In Bash, this is what it looks like:
Syntax Error Unexpected Token
What am I missing here?
That script is not Bash; it is Windows cmd.exe (batch) syntax, which fills a similar role on Windows. That is also why it works from Python with shell=True: on Windows, the shell subprocess uses is cmd.exe. The equivalent Bash syntax would be
for g in *.json; do
    widdershins "$g" -o "$g.md"
done
If you want the command to be recursive, best option is to use the find command, e.g.
find . -name '*.json' -type f -exec widdershins {} -o {}.md \;
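The recursive matching that find does can also be sketched in Python with pathlib (the widdershins call itself is omitted; a subprocess.run call would go where the print is):

```python
import tempfile
from pathlib import Path

# Build a small throwaway tree to demonstrate recursive *.json matching.
root = Path(tempfile.mkdtemp())
(root / 'sub').mkdir()
(root / 'sub' / 'api.json').write_text('{}')
(root / 'readme.txt').write_text('not json')

# rglob('*.json') descends into subdirectories, like find or cmd's For /R.
matches = sorted(root.rglob('*.json'))
for p in matches:
    print(p, '->', str(p) + '.md')  # widdershins "$p" -o "$p.md" would run here
```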
Or, as an alternative, run the original command under cmd.exe instead of Bash, since that is the shell it was written for.
Your original command should work happily if executed directly from the cmd prompt - although the double-\ are unnecessary.
If you are executing that command as a line in a *.bat file, then each %G needs to have the % doubled %%G as G here is a metavariable. Again, the double-\ are unnecessary.
I have a Python script which runs bash scripts via the subprocess library. I need to collect stdout and stderr to files, so I have a wrapper like:
def execute_chell_script(stage_name, script):
    subprocess.check_output('{} &>logs/{}'.format(script, stage_name), shell=True)
And it works correctly when I launch my Python script on a Mac. But if I launch it in a Docker container (FROM ubuntu:18.04) I can't see any log files. I can fix it if I use bash -c 'command &>log_file' instead of just command &>log_file inside subprocess.check_output(...). But it looks like too much magic.
I thought about the default shell for the user which launches the Python script (it's root), but cat /etc/passwd shows root ... /bin/bash.
It would be nice if someone explained to me what happened. And maybe I can add some lines to the Dockerfile to use the same Python script inside and outside the Docker container?
Since the OP reported in a comment that this fixed their problem, I'm posting it as an answer so they can accept it.
Using check_output when you don't expect any output is weird, and requiring shell=True here is misdirected. With shell=True, subprocess runs the command through /bin/sh, which in the Ubuntu image is dash rather than bash, and dash does not understand the bash-only &> redirection; that is why prefixing bash -c fixes it, and why it works on a Mac, where /bin/sh is bash. You want
with open(os.path.join('logs', stage_name), 'w') as output:
    subprocess.run([script], stdout=output, stderr=output)
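As a self-contained illustration (using echo as a stand-in for the real script), redirecting at the Python level removes any dependence on which /bin/sh the container has:

```python
import os
import subprocess
import tempfile

logdir = tempfile.mkdtemp()
logfile = os.path.join(logdir, 'stage_name')

# Redirect stdout and stderr from Python itself; no shell, no '&>' needed.
with open(logfile, 'w') as output:
    subprocess.run(['echo', 'step finished'], stdout=output, stderr=output)

with open(logfile) as f:
    contents = f.read()
```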
Simple, maybe not so simple issue. How can I run GDB with the results of a script?
What I mean is that instead of saying:
run arg1
You would say:
run "python generatingscript.py"
which would output the args. I'm aware I could find a way to do this by sleeping the program after running it with args from the command line, but it would be really darn convenient if there was a way to do this kind of thing directly in gdb.
Just for context, my use case is a situation where I'm writing crash test cases that look like long strings of hex data. Putting them in by hand at the command line isn't the most convenient thing.
You can use gdb with the --args option like this: gdb --args your_program $(python generatingscript.py).
You could also use bash to generate an output file
$ python generatingscript.py > testinput
then in gdb
gdb$ run < testinput
This works on my Ubuntu machine, using "<<<" (a here-string) instead of "<":
(gdb) r <<< $(python exploit.py )
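For the crash-test use case, generatingscript.py might look something like this (the payload bytes are a made-up example, not a real test case):

```python
import sys

# Build a long, repetitive input of the kind that is painful to type by hand.
payload = b'\x41' * 64 + b'\xde\xad\xbe\xef'

if __name__ == '__main__':
    # Write raw bytes so that 'run < testinput' feeds them to the program verbatim.
    sys.stdout.buffer.write(payload)
```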
How can I create functions and software for my Linux server? Let me explain this in a bit more detail.
So for my Linux server, which I access with my SSH client, I have made a few Python scripts that work fine, but what I really want is to have these Python scripts active all the time, such that I can execute functions I've created in the script (such as "def time(): ...") just by typing "time" into the command line, rather than starting up the script with ./script-name.py and then typing "time". Do I need to install my Python files into the system in some way?
I struggled searching Google because I didn't fully understand what to search, and results that came up weren't really related to my request. I did find the cmd Python module and learned how to create cmd interpreters, however, in order for me to access the commands I defined in the cmd interpreter, I had to first start the script, which as I explained above, not what I am looking for.
How can I make script-level Python functions callable from the Linux command line?
If you're using Python, you'll still need to fire up the interpreter, but you can make that happen automatically.
Start by making your script executable. Run this command in the shell:
chmod +x script-name.py
ls -l script-name.py
The output of ls should look something like this (note the xs in the left-hand column):
-rwxr-xr-x 1 me me 4 Jan 14 11:02 script-name.py
Now add an interpreter directive line at the top of your script file - this tells the shell to run Python to interpret your script:
#!/usr/bin/python
Then add code at the end of the file that calls your function:
if __name__ == '__main__':
    time()
The if statement checks to see if this is the file that is being executed. It means you can still import your module from another Python file without the time() function being automatically called, if you want.
Finally, you need to put your script in the executable path.
mkdir -p $HOME/bin
mv script-name.py $HOME/bin/
export PATH=$HOME/bin:$PATH
Now you should be able to run script-name.py, and you'll see the output of the time() function. You can rename your file to whatever you like; you can even remove the .py extension.
Additional things you can do:
Use the argparse module to add command line arguments, e.g. so you can run script-name.py time to execute the time() function
Put the script in a system-wide path, like /usr/local/bin, or
Add the export PATH=$HOME/bin:$PATH line to your .bashrc so that it happens by default when you log in
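The argparse dispatch mentioned above might look like this (time() here is a hypothetical stand-in printing the current time; parse_args is given an explicit list so the sketch is self-contained, whereas a real script would let it read sys.argv):

```python
import argparse
import datetime

def time():
    print(datetime.datetime.now().isoformat())

parser = argparse.ArgumentParser(description='Sketch of script-name.py')
parser.add_argument('command', choices=['time'])

# A real script would call parser.parse_args() with no argument to read
# sys.argv; here the arguments are passed explicitly for illustration.
args = parser.parse_args(['time'])
if args.command == 'time':
    time()
```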
The answer above is by far more complete and more informative than mine. I just wanted to offer a quick and dirty alternative.
echo "alias time='<your script> time'" >> ~/.bashrc
bash
Like I said, quick and dirty.