I would like to simplify running Python scripts from within the Python shell. In Python 2, you could just use execfile(path). But in Python 3 it's harder to remember:
exec(open(path).read())
So I want a function to run a script, as simple as run(path). I can do this from the Python shell:
def run(filename):
    source = open(filename).read()
    code = compile(source, filename, 'exec')
    exec(code)
Then I can just type in run(path). This works great, and now I want to simplify things by defining the run function every time I launch Python 3.
I'd like to configure my ~/.zshenv with a zsh alias or function (say, py) that launches Python and tells it to define the run function. That's where I'm stumped. What would such a zsh command look like? I've tried and failed with things like:
py () {
    python -c "\
    def run(filename): \
        source = open(filename).read() \
        code = compile(source, filename, 'exec') \
        exec(code)" \
}
But that fails miserably:
% py
File "<string>", line 1
def run(filename): source = open(filename).read() code = compile(source, filename, 'exec') exec(code)
IndentationError: unexpected indent
%
And even if it were to work, it would drop back out of the Python shell once the function was defined. Obviously I don't know what I'm doing here. Any pointers?
Also… please don't assume I have asked the right question. Usually on StackOverflow we try to avoid second-guessing posters' assumptions. But go ahead and second-guess mine if there's a better way to get Python to always define a run function when it is launched.
If you need this function only for interactive shells, you can write it in a file and then run python -i file_with_function.py. The -i option will tell the interpreter to drop into an interactive session after whatever is in the file_with_function.py file runs.
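A sketch of what that file could contain, reusing the run function from the question:

# file_with_function.py
def run(filename):
    source = open(filename).read()
    code = compile(source, filename, 'exec')
    exec(code)

Launching with python -i file_with_function.py then drops you into the interpreter with run already defined.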
If you want it for any .py file that you will run non-interactively then you can do one of the following:
Create a package that contains your run function and install it into your interpreter. There is a detailed guide in the Python docs (https://packaging.python.org/tutorials/packaging-projects/).
Add the directory containing a .py file with your function to the PYTHONPATH environment variable and import it from there, as sketched below.
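A minimal sketch of the second option (the file name mytools.py and the directory ~/pylib are made-up examples):

# ~/pylib/mytools.py
def run(filename):
    exec(compile(open(filename).read(), filename, 'exec'))

# with  export PYTHONPATH="$HOME/pylib:$PYTHONPATH"  set in your shell
# startup file, any interpreter session can then do:
#   from mytools import run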
In the command you are passing to Python (using python -c), you start the function definition with a couple of spaces. Spaces at the start of a line are significant in Python. You would get the same error if you opened a Python shell and wrote
    def foo:
with several spaces in front: Python responds with IndentationError: unexpected indent.
In addition, your use of backslash characters makes all the linefeeds disappear, so that you end up defining the complete function on a single line. This is also invalid in Python, so even if you fixed the initial spaces, you would still get SyntaxError: invalid syntax.
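Both failure modes can be reproduced directly from a Python prompt (a small demonstration using exec):

# leading spaces on the first line of the source
try:
    exec("  def run(filename): pass")
except IndentationError as e:
    print(e)  # unexpected indent

# all newlines collapsed onto one line, as the backslashes do
try:
    exec("def run(filename): source = 1 code = 2")
except SyntaxError as e:
    print(e)  # invalid syntax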
Note that you can use the -i option of Python to load initial definitions from a file and then stay in the interactive interpreter; the PYTHONSTARTUP environment variable serves the same purpose for every interactive session. You can do a
python -h
to get a list of the valid command line options.
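A sketch of the PYTHONSTARTUP route (the file name ~/.pythonstartup is an arbitrary choice; Python runs whatever file the variable points to at the start of every interactive session):

# ~/.pythonstartup  (activated by: export PYTHONSTARTUP="$HOME/.pythonstartup")
def run(filename):
    exec(compile(open(filename).read(), filename, 'exec'))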
I am struggling to execute a shell script from a Python program. The actual issue is that the script is a profile-loading script, which is run manually as:
. /path/to/file
The script can't be run as a normal sh script, because the calling programs load some configuration from it, so it must be sourced as . /path/to/file
How can I integrate this into my Python script? I am using the subprocess.Popen command to run the script, but as said, the only way it works is when sourced as . /path/to/file, so I am not getting the right result.
Without knowledge of the precise reason the script needs to be sourced, this is slightly speculative.
The fundamental problem is this: How do I get a source command to take effect outside the shell script?
Let's say your sourced file does something like
export fnord="value"
This cannot (usefully) be run in a subshell (as a normally executed script would be), because the environment variable and its value will be lost when the script terminates. The solution is to source (aka .) this snippet from an already running shell; then the value stays in that shell's environment until that shell terminates.
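The effect is easy to demonstrate from Python itself (a small sketch; sh and the variable name fnord are just the running example):

import os
import subprocess

# the child shell sets and exports the variable, but its environment
# dies with it; the parent process never sees the value
subprocess.call(['sh', '-c', 'export fnord="value"'])
print(os.environ.get('fnord'))  # -> None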
But Python is not a shell, and there is no general way for Python to execute arbitrary shell script code, short of reimplementing the shell in Python. You can reimplement a small subset of the shell's functionality with something like
import os

with open('/path/to/file') as shell_source:
    lines = shell_source.readlines()

for line in lines:
    if line.strip().startswith('export '):
        # split "export var=value" into the name and the raw value
        var, value = line.strip()[7:].split('=', 1)
        # strip one level of simple quoting
        if value.startswith('"'):
            value = value.strip('"')
        elif value.startswith("'"):
            value = value.strip("'")
        os.environ[var] = value
with some very strict restrictions (let's not say naïve assumptions) on the allowable shell script syntax in the file. But what if the file contains something other than a series of variable assignments, or an assignment uses something more complex than a trivially quoted string for its value? (Even the export might or might not be there. Its significance is to make the variable visible to subprocesses of the current shell; maybe that is not wanted or required? Also, export variable=value is not portable; proper Bourne shell script syntax would use variable=value; export variable or one of the many variations.)
If you know what exactly your Python script needs from the shell script, maybe do something like
import os
import subprocess

# source the script in a subshell, then print just the value we need
r = subprocess.run('. /path/to/file; printf "%s\n" "$somevariable"',
                   shell=True, capture_output=True, text=True)
# printf appends a newline, so the value is the second-to-last field
os.environ['somevariable'] = r.stdout.split('\n')[-2]
to source the entire script in a subshell, then print to standard output the part you actually need, and capture that from your Python script (and assign it to an environment variable if that's what you eventually need to accomplish).
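If you need several values, a variant of the same idea is to source the file and then dump the child shell's entire resulting environment in one go (a sketch; env -0 is a GNU coreutils extension that separates entries with NUL bytes so values containing newlines survive, and only exported variables will appear):

import os
import subprocess

out = subprocess.run('. /path/to/file; env -0',
                     shell=True, capture_output=True, text=True).stdout
for entry in out.split('\0'):
    if '=' in entry:
        name, value = entry.split('=', 1)
        os.environ[name] = value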
My perl script is at path:
a/perl/perlScript.pl
my python script is at path:
a/python/pythonScript.py
pythonScript.py gets an argument from stdin, and returns the result to stdout. From perlScript.pl, I want to run pythonScript.py with the argument hi passed on stdin, and save the result in some variable. That's what I tried:
my $ret = `../python/pythonScript.py < hi`;
but I got the following error:
The system cannot find the path specified.
Can you explain why the path can't be found?
The qx operator (backticks) starts a shell (sh), in which prog < input syntax expects a file named input from which it will read lines and feed them to the program prog. But you want the python script to receive on its STDIN the string hi instead, not lines of a file named hi.
One way is to directly do that, my $ret = qx(echo "hi" | python_script).
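For reference, a stdin-reading script of the kind the question describes might look like this (a sketch; the real pythonScript.py is not shown in the question):

#!/usr/bin/env python
import sys

# read the whole of stdin, answer on stdout
data = sys.stdin.read().strip()
print("got: " + data)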
But I'd suggest to consider using modules for this. Here is a simple example with IPC::Run3
use warnings;
use strict;
use feature 'say';
use IPC::Run3;
my @cmd = ('program', 'arg1', 'arg2');
my $in = "hi";
run3 \@cmd, \$in, \my $out;
say "script's stdout: $out";
The program is the path to your script if it is executable, or perhaps python script.py. This will be run by system, so the output is obtained once that completes, which is consistent with the attempt in the question. See the documentation for the module's operation.
This module is intended to be simple while it can "satisfy 99% of the need for using system, qx, and open3 [...]". For far more power and control see IPC::Run.
You're getting this error because you're using shell redirection instead of just passing an argument
../python/pythonScript.py < hi
tells your shell to read input from a file called hi in the current directory, rather than using it as an argument. What you mean to do is
my $ret = `../python/pythonScript.py hi`;
Which correctly executes your python script with the hi argument, and returns the result to the variable $ret.
Some of the other answers assume that hi must be passed as a command-line parameter to the Python script, but the asker says it comes from stdin.
Thus:
my $ret = `echo "hi" | ../python/pythonScript.py`;
To launch your external script you can do
system "python ../python/pythonScript.py hi";
and then in your python script
import sys

def yourFct(a):
    ...

if __name__ == "__main__":
    yourFct(sys.argv[1])
you can find more information on the Python part here
I have seen plenty examples of running a python script from inside a bash script and either passing in variables as arguments or using export to give the child shell access, I am trying to do the opposite here though.
I am running a python script and have a separate file, let's call it myGlobalVariables.bash
myGlobalVariables.bash:
foo_1="var1"
foo_2="var2"
foo_3="var3"
My python script needs to use these variables.
For a very simple example:
myPythonScript.py:
print "foo_1: {}".format(foo_1)
Is there a way I can import them directly? Also, I do not want to alter the bash script if possible since it is a common file referenced many times elsewhere.
If your .bash file is formatted as you indicated - you might be able to just import it direct as a Python module via the imp module.
import imp
bash_module = imp.load_source("bash_module", "/path/to/myGlobalVariables.bash")
print bash_module.foo_1
You can also use os.environ:
Bash:
#!/bin/bash
# export makes the variable visible to child processes,
# such as a Python interpreter started from this shell
export testtest=one
Python:
#!/usr/bin/python
import os
os.environ['testtest'] # 'one'
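If the variables file itself must stay untouched (no export lines added), one option is to have bash source it with auto-export enabled and then start Python from that same shell (a sketch; set -a marks every variable assigned after it for export):

import subprocess

# bash sources the unmodified file with auto-export on, then starts Python;
# myPythonScript.py can then read foo_1 etc. from os.environ
subprocess.call(['bash', '-c',
                 'set -a; . ./myGlobalVariables.bash; python myPythonScript.py'])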
I am very new to python, so I would welcome suggestions for more idiomatic ways to do this, but the following code uses bash itself to tell us which values get set by first calling bash with an empty environment (env -i bash) to tell us what variables are set as a baseline, then I call it again and tell bash to source your "variables" file, and then tell us what variables are now set. After removing some false-positives and an apparently-blank line, I loop through the "additional" output, looking for variables that were not in the baseline. Newly-seen variables get split (carefully) and put into the bash dictionary. I've left here (but commented-out) my previous idea for using exec to set the variables natively in python, but I ran into quoting/escaping issues, so I switched gears to using a dict.
If the exact call (path, etc.) to your "variables" file is different from mine, then you'll need to change all of the instances of that value: in the subprocess.check_output() call and in the list.remove() calls.
Here's the sample variable file I was using, just to demonstrate some of the things that could happen:
foo_1="var1"
foo_2="var2"
foo_3="var3"
if [[ -z $foo_3 ]]; then
  foo_4="test"
else
  foo_4="testing"
fi
foo_5="O'Neil"
foo_6='I love" quotes'
foo_7="embedded
newline"
... and here's the python script:
#!/usr/bin/env python
import subprocess

output = subprocess.check_output(['env', '-i', 'bash', '-c', 'set'])
baseline = output.split("\n")

output = subprocess.check_output(['env', '-i', 'bash', '-c', '. myGlobalVariables.bash; set'])
additional = output.split("\n")

# these get set when ". myGlobal..." runs and so are false positives
additional.remove("BASH_EXECUTION_STRING='. myGlobalVariables.bash; set'")
additional.remove('PIPESTATUS=([0]="0")')
additional.remove('_=myGlobalVariables.bash')
# I get an empty item at the end (blank line from subprocess?)
additional.remove('')

bash = {}
for assign in additional:
    if not assign in baseline:
        name, value = assign.split("=", 1)
        bash[name] = value
        #exec(name + '="' + value + '"')

print "New values:"
for key in bash:
    print "Key: ", key, " = ", bash[key]
Another way to do it:
Inspired by Marat's answer, I came up with this two-stage hack. Start with a python program, let's call it "stage 1", which uses subprocess to call bash to source the variable file, as my above answer does, but it then tells bash to export all of the variables, and then exec the rest of your python program, which is in "stage 2".
Stage 1 python program:
#!/usr/bin/env python
import subprocess

status = subprocess.call(
    ['bash', '-c',
     '. myGlobalVariables.bash; export $(compgen -v); exec ./stage2.py'
    ])
Stage 2 python program:
#!/usr/bin/env python
# anything you want! for example,
import os
for key in os.environ:
    print key, " = ", os.environ[key]
As stated in @theorifice's answer, the trick here may be that such a formatted file can be interpreted both as bash and as Python code. But that answer is outdated: the imp module is deprecated in favour of importlib.
As your file has extension other than ".py", you can use the following approach:
from importlib.util import spec_from_loader, module_from_spec
from importlib.machinery import SourceFileLoader
spec = spec_from_loader("foobar", SourceFileLoader("foobar", "myGlobalVariables.bash"))
foobar = module_from_spec(spec)
spec.loader.exec_module(foobar)
I do not completely understand how this code works (in particular what the two "foobar" arguments are for), but it worked for me. Found it here.
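With the simple three-assignment file from the question, the loaded module then behaves like any other (a quick check):

print(foobar.foo_1)  # -> var1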
In a linux terminal typing
python script.py
will run script.py and exit the Python console. But what if I just want to run part of the script and leave the console open? For example, run script.py until line 15 and leave the console open for further scripting. How would I do this?
Let's say it's possible, then with the console still open and script.py run until line 15, can I then from inside the console call line fragments from other py files?
...something like
python script.py 15 #(opens script and runs lines 1-15 and leaves console open)
Then having the console open, I would like to run lines 25-42 from anotherscript.py
>15 lines of python code run from script.py
> run('anotherscript.py', lines = 25-42)
> print "I'm so happy the console is still open so I can script some more"
I'm so happy the console is still open so I can script some more
>
Your best bet might be pdb, the Python debugger. You can start your script under pdb, set a breakpoint on line 15, and then run your script.
python -m pdb script.py
b 15 # <-- Set breakpoint on line 15
c # "continue" -> run your program
# will break on line 15
You can then inspect your variables and call functions. Since Python 3.2, you can also use the interact command inside pdb to get a regular Python shell at the current execution point!
If that fits your bill and you also like IPython, you can check out IPdb, which is a bit nicer than normal pdb, and drops you into an IPython shell with interact.
if you want to run script.py from line a to line b, simply use this bash snippet:
cat script.py | head -{b} | tail -{b-a+1} | python -i
replace {b} and {b-a+1} with their values
You could use the python -i option to leave the console open at the end of the script.
It lets your script run until it exits, and you can then examine variables, call any function and any Python code, including importing and using other modules.
Of course your script needs to exit first, either at the end or, if your goal is to debug that part of the script, you could add a sys.exit() or os._exit() call where you want it to stop (such as your line 15).
For instance:
import os

print "Script starting"
a = 1
def f(x):
    return x
print "Exiting on line 8"
os._exit(0)  # to avoid the standard SystemExit exception
print "Code continuing"
Usage example:
python -i test_exit.py
Script starting
Exiting on line 8
>>> print a
1
>>> f(4)
4
>>>
You cannot do that directly, but you can do something similar from inside the Python console (or IDLE) with exec:
just open your favorite Python console
load the wanted lines into a string and exec them:
script = 'script.py'
begline, endline = 24, 41  # zero-based indices, i.e. lines 25 to 42
txt = ''
with open(script) as sc:
    for i, line in enumerate(sc):
        if i >= begline and i <= endline:
            txt = txt + line
exec(txt)
You can even write your own partial script runner based on that code ...
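Such a runner could look roughly like this (a sketch based on the code above; the name run_lines and its 1-based, inclusive line convention are choices for illustration, and the caveats in the EDIT below apply in full):

def run_lines(path, first, last):
    """Execute lines first..last (1-based, inclusive) of a Python file."""
    with open(path) as f:
        txt = ''.join(f.readlines()[first - 1:last])
    exec(compile(txt, path, 'exec'), globals())

# e.g. run_lines('anotherscript.py', 25, 42)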
EDIT
I must admit that the above answer alone really deserved to be downvoted. It is technically correct and probably the one that most closely meets what you asked for. But I should have warned you that it is bad practice. Relying on line numbers to load pieces of source files is error prone, and you should avoid it unless you really know what you are doing and why you are doing it that way. The Python debugger at least allows you to control which lines you are about to execute.
If you really have to use this solution, be sure to always print and double check the lines that you are about to execute. IMHO it is always both simpler and safer to copy and paste lines into an IDE like IDLE, which is packaged with any standard Python installation.
I'm trying to pass a python command from R (on Windows x64, RStudio) to a python script via the command prompt. It works if I type directly into cmd, but not if I do it via R using the R function system(). The format is (this is EXACTLY how I would write it in the Windows cmd shell/prompt):
python C:/some/path/script <C:/some/input.file> C:/some/output.file
This works in the cmd prompt, and runs the script with the input file (in <>) and gives the output file. I thought that in R I could do:
system('python C:/some/path/script <C:/some/input.file> C:/some/output.file')
But this gives an error from python about
error: unparsable arguments: ['<C:/some/input.file>', 'C:/some/output.file']
It seems as if R or Windows interprets the white space differently than if I simply wrote (or copy-pasted) the line into the cmd prompt. How can I do this?
From ?system
This interface has become rather complicated over the years: see
system2 for a more portable and flexible interface which is
recommended for new code.
system2 accepts a parameter args for the arguments of your command; it also has stdin and stdout parameters that take file names, which can stand in for the < and > redirections of the original command line.
So you can try:
system2('python', c('C:\\some\\path\\script', 'C:\\some\\input.file', 'C:\\some\\output.file'))
On Windows:
The R documentation is not really clear on this point (or maybe it's just me); anyway, it seems that on Windows the suggested approach is to use shell(), which is less raw than system and system2, and seems to work better with redirection operators (like < or >).
shell ('python C:\\some\\path\\script < C:\\some\\input.file > C:\\some\\output.file')
So what this command does is:
Call python.
Tell python to execute the script C:\some\path\script. Here we need to escape each '\' as '\\'.
Pass some input to the script using the '<' operator and the input file.
Redirect the output (using '>') to the output file.