How come python allows bash scripts to be run without being executable?

I have a simple bash script that runs "apt update". I tried to call it via Python like this, and it runs even though I didn't chmod +x update.sh.
def updateUsingBash(self):
    p = QtCore.QProcess()
    p.finished.connect(self.onFinished)
    p.start('sh', ['update.sh'])
    p.waitForFinished(-1)

def onFinished(self, exit_code, exit_status):
    print "script finished with exit code :", exit_code

You did not execute update.sh. You executed sh passing it update.sh as an argument. That made sh interpret update.sh as a shell script.
By the way, note that sh is not exactly like bash.

This is exactly the same as how python foo.py runs foo.py without the latter being marked executable -- it's simply a data file containing script text, and the thing being executed is either python or sh, respectively.
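To see the same thing from Python, a minimal sketch (assumes a POSIX system with sh on the PATH):

```python
import os
import stat
import subprocess
import tempfile

# Write a tiny shell script WITHOUT setting its execute bit.
with tempfile.NamedTemporaryFile("w", suffix=".sh", delete=False) as f:
    f.write("echo hello\n")
    path = f.name

executable = bool(os.stat(path).st_mode & stat.S_IXUSR)
print(executable)            # False: no chmod +x was done

# Handing the file to sh still works, because sh is what gets executed;
# the file is just data that sh reads.
out = subprocess.check_output(["sh", path])
print(out)                   # b'hello\n'
os.unlink(path)
```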

Related

Change directory from python script for calling shell

I would like to build a python script which can manipulate the state of its calling bash shell, especially its working directory to begin with.
With os.chdir or os.system("cd ..") you can only change the interpreter's own working directory; how can I apply the change to the script's caller?
Thanks for any hint!
You can't do that directly from python, as a child process can never change the environment of its parent process.
But you can create a shell script that you source from your shell, i.e. it runs in the same process, and in that script, you'll call python and use its output as the name of the directory to cd to:
/home/choroba $ cat 1.sh
cd "$(python -c 'print ".."')"
/home/choroba $ . 1.sh
/home $
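The same pattern scales to arbitrary logic on the Python side: the script only prints the target directory, and the sourced wrapper does the cd. Here pickdir.py is a hypothetical file name:

```python
# pickdir.py -- prints the directory the *calling* shell should cd into.
# A sourced shell wrapper then runs:  cd "$(python pickdir.py)"
import os

target = os.path.expanduser("~")   # any logic can choose the target here
print(target)
```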

"source" to set PATH to bitbake with shell=True having no effect in Python

Below is the shell script:
source /proj/common/tools/repo/etc/profile.d/repo.sh
repo project init $branch
repo project sync
source poky/fnc-init-build-env build
bitbake -g $image
I am trying to convert the shell script into a python script:
a = subprocess.call("source /proj/common/tools/repo/etc/profile.d/repo.sh", shell=True)
b = subprocess.call("repo project init " + branch, shell=True)
b2 = subprocess.call("repo project sync", shell=True)
c = subprocess.call("source poky/fnc-init-build-env build", shell=True)
os.chdir("poky/build")
d = subprocess.call("bitbake -g " + image, shell=True)
But I am getting the following error:
/bin/sh: bitbake: command not found
How do I resolve this in Python?
You must add bitbake to the PATH:
set Path=%path%;PathOfBitbake
Run it in a Windows command prompt, then retry.
The problem is that you run subprocess.call(something, shell=True) several times and assume that variables set in the first call are still present in the later calls, but each call uses a shell that is independent of the earlier ones.
I would put the commands in a shell script and then run it with a single subprocess.call command. There seems to be no real point in converting it line by line into python by just running shell commands with the subprocess module.
If repo and bitbake are python programs, it could make sense to import the relevant modules from them and call the corresponding python functions instead of the shell commands provided by their main methods.
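The independence of the shells is easy to demonstrate (DEMO_FLAG is just an illustrative variable name):

```python
import subprocess

# Set a variable in one shell...
subprocess.call("DEMO_FLAG=bar", shell=True)
# ...then look for it in a second, independent shell:
rc = subprocess.call('[ "$DEMO_FLAG" = bar ]', shell=True)
print(rc)   # non-zero: the second shell never saw DEMO_FLAG
```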
When using shell=True, the first list element is a script to run, and subsequent arguments are passed to that script.
Each subprocess.Popen() invocation starts a single shell; state configured in one isn't carried through to others, so the source command is useless unless you run the commands that depend on it within the same invocation.
script = '''
branch=$1; shift  # pop first argument off the list, assign to variable named branch
source /proj/common/tools/repo/etc/profile.d/repo.sh || exit
repo project init "$branch" || exit
repo project sync || exit
source poky/fnc-init-build-env build || exit
exec "$@"  # use remaining arguments to form the command to run
'''
subprocess.call([
    "bash", "-c", script,    # start bash explicitly: /bin/sh does not support "source"
    "_",                     # becomes $0 inside the shell
    branch,                  # becomes $1, which is assigned to branch and shifted off
    "bitbake", "-g", image,  # these arguments become "$@" after the shift
])
Note the || exits -- you should generally have those on any command where you don't explicitly intend to ignore failures.
You need to add the directory containing bitbake to the PATH environment variable, so it is found from your python script:
os.environ["PATH"] += os.pathsep + your_path_there

How to check if a script is being called from terminal or from another script

I am writing a python script and I want to execute some code only if the script is run directly from the terminal, not from another script.
How can I do this in Ubuntu without using any extra command-line arguments?
The answer here doesn't work:
Determine if the program is called from a script in Python
Here's my directory structure
home
|-testpython.py
|-script.sh
script.sh contains
./testpython.py
When I run ./script.sh I want one thing to happen.
When I run ./testpython.py directly from the terminal, without going through script.sh, I want something else to happen.
How do I detect the difference between the two ways of calling it? Getting the parent process name returns "bash" in both cases.
I recommend using command-line arguments.
script.sh
./testpython.py --from-script
testpython.py
import sys

if "--from-script" in sys.argv:
    pass  # From script
else:
    pass  # Not from script
You should probably be using command-line arguments instead, but this is doable. Simply check if the current process is the process group leader:
$ sh -c 'echo shell $$; python3 -c "import os; print(os.getpid.__name__, os.getpid()); print(os.getpgid.__name__, os.getpgid(0)); print(os.getsid.__name__, os.getsid(0))"'
shell 17873
getpid 17874
getpgid 17873
getsid 17122
Here, sh is the process group leader, and python3 is a process in that group because it is forked from sh.
Note that all processes in a pipeline are in the same process group and the leftmost is the leader.
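In Python, that check boils down to comparing the process id with the process-group id. Treat it as a heuristic sketch, not a guarantee: as noted above, every process in a pipeline shares the leader's group.

```python
import os

def is_group_leader():
    # True when this process leads its own process group, which is
    # typical for a command launched directly from an interactive shell,
    # and False for a process forked by a script.
    return os.getpid() == os.getpgid(0)

print(is_group_leader())
```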

Open new gnome-terminal and run command

I'm trying to write a script that opens a new terminal then runs a separate python script from that terminal.
I've tried:
os.system("gnome-terminal 'python f.py'")
and
p = Popen("/usr/bin/gnome-terminal", stdin=PIPE)
p.communicate("python f.py")
but both methods only open a new terminal and do not run f.py. How would I go about opening the terminal AND running a separate script?
Edit:
I would like to open a new terminal window because f.py is a simple server that runs serve_forever(). I'd like the original terminal window to stay "free" to run other commands.
Like most terminals, gnome terminal needs options to execute commands:
gnome-terminal [-e, --command=STRING] [-x, --execute]
You probably need to add the -x option:
-x, --execute
Execute the remainder of the command line inside the terminal.
so:
os.system("gnome-terminal -x python f.py")
Note that this will not run your process in the background unless you add & to the command line.
The communicate attempt would need a newline in the input and could work in principle, but complex programs like terminals don't "like" having their stdin redirected; it's like using an interactive tool backwards.
And again, communicate blocks until the process terminates. Using p.stdin.write("python f.py\n") instead, to hand the command to the terminal without blocking, could work, but even then it's unlikely to.
So it seems that you don't even need python do to what you want. You just need to run
python f.py &
in a shell.
As of GNOME Terminal 3.24.2 (using VTE 0.48.4 +GNUTLS -PCRE2):
Option “-x” is deprecated and might be removed in a later version of gnome-terminal.
Use “-- ” to terminate the options and put the command line to execute after it.
Thus the preferred syntax appears to be
gnome-terminal -- echo hello
rather than
gnome-terminal -x echo hello
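Translated to Python, a sketch (f.py stands in for your script; the shutil.which guard only keeps the example harmless on systems without GNOME Terminal):

```python
import shutil
import subprocess

# "--" ends gnome-terminal's own option parsing; everything after it
# is the command line to run inside the new terminal window.
cmd = ["gnome-terminal", "--", "python3", "f.py"]
if shutil.which("gnome-terminal"):   # only launch when it is installed
    subprocess.Popen(cmd)
```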
Here is a complete example of how you would call an executable python file with subprocess.call, using argparse to properly parse the input.
The target process will print the given input.
Your python file to be called:
import argparse
parser = argparse.ArgumentParser()
parser.add_argument("--file", help="Just A test", dest='myfile')
args = parser.parse_args()
print args.myfile
Your calling python file:
from subprocess import call
#call(["python","/users/dev/python/sandboxArgParse.py", "--file", "abcd.txt"])
call(["gnome-terminal", "-e", "python /users/dev/python/sandboxArgParse.py --file abcd.txt"])
Just for information:
You probably don't need python calling another python script to run a terminal window with a process, but could do as follows:
gnome-terminal -e "python /yourfile.py -f yourTestfile.txt"
The following code will open a new terminal and execute the process:
process = subprocess.Popen(
    "sudo gnome-terminal -x python f.py",
    stdout=subprocess.PIPE,
    stderr=None,
    shell=True
)
I am running a uWS server with this. In my case Popen didn't help (even though it ran the executable, it still couldn't communicate with the client: the socket connection was broken), but this works. Also, they now recommend using "--" instead of "-e".
subprocess.call(['gnome-terminal', "--", "python3", "server_deployment.py"])
#server_deployment.py
def run():
execution_cmd = "./my_executable arg1 arg2 dll_1 dll_2"
os.system(execution_cmd)
run()

R equivalent to `python -i`

Typing python -i file.py at the command line runs file.py and then drops into the python terminal preserving the run environment.
https://docs.python.org/3/using/cmdline.html
Is there an equivalent in R?
I may be misinterpreting what python -i file.py does, but try:
From inside R, at the terminal, you can do:
source('file.R')
and it will run file.R, with the global environment reflecting what was done in file.R
If you're trying to run from the command line, review this post
