I have three Python files; let's call them master.py, slave1.py and slave2.py. Now slave1.py and slave2.py do not contain any functions, but are required to do two different things using the same input (say the variable inp).
What I'd like to do is call both slave programs from master, and specify the one input variable inp in master, so I don't have to specify it twice. Also, this lets me control the outputs of both slaves from the one master program, etc.
I'd like to keep the code of both slave1.py and slave2.py separate, so I can debug them individually if required, but when I try to do
#! /usr/bin/python
# This is master.py
import slave1
import slave2
inp = #some input
both slave1 and slave2 run before I can change the input. As I understand it, the way python imports modules is to execute them first. But is there some way to delay executing them so I can specify the common input? Or any other way to specify the input for both files from one place?
EDIT: slave1 and slave2 perform two different simulations given a particular initial condition. Since the two produce the same kind of output, I'd like to display it in a similar manner, as well as have control over which files the simulated data is written to. So I figured importing both of them into a master file was the easiest way to do that.
Write the code in your slave modules as functions, import the functions, then call them from master with whatever input you need. If you need to keep more stateful information, consider constructing an object.
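For the stateful case, a minimal sketch (the class name Simulation and its methods are hypothetical, standing in for whatever the slave actually does):
# slave1.py
class Simulation:
    def __init__(self, inp):
        self.inp = inp  # the shared initial condition

    def run(self):
        # the code that currently runs at import time goes here
        print("simulating with", self.inp)
master.py would then create one object per slave and drive them both: sim = slave1.Simulation(inp); sim.run().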
You can do imports at any time:
inp = #some input
import slave1
import slave2
Note that this is generally considered bad design - you would be better off making the modules contain a function, rather than just having it happen when you import the module.
It looks like the architecture of your program is not really optimal. I think you have two files that execute immediately when you run them with python slave1.py. That is nice for scripting, but when you import them you run into trouble, as you have experienced.
Best is to wrap the code in your slave files in a function (as suggested by @sr2222) and call these explicitly from master.py:
slave1.py / slave2.py:
def run(inp):
    # your code
master.py:
import slave1, slave2
inp = "foo"
slave1.run(inp)
slave2.run(inp)
If you still want to be able to run the slaves independently you could add something like this at the end:
if __name__ == "__main__":
inp = "foo"
run(inp)
I would like to use data from a file called simdata.txt to run a simulation that I wrote in Python. I would like to ensure that when the user executes my program from the command line that the data is only being fed through the sys.stdin stream.
That is, I would like my program to be ran like this:
python3 simulation.py < simdata.txt
as opposed to this (fed through sys.argv as opposed to sys.stdin):
python3 simulation.py simdata.txt
How can I ensure that the user will always execute the program the first way as opposed to the second way? Usually I enforce this rule using this:
if len(sys.argv) > 2:
    print("This is not how you use the program. Example of use:")
    print("python3 simulation.py < simdata.txt")
    exit(1)
But this seems problematic since I may want to add extra flags to my program that would change the behavior of the simulation. That is, maybe I would want to do this:
python3 simulation.py < simdata.txt --plot_data
Is there a better way to ensure that the simdata.txt is only fed through stdin without compromising my ability to add extra program flags?
Working with the argparse module turned out to be the better answer. That way I didn't have to impose such strict requirements on the input.
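For example, a minimal sketch of that approach (the --plot_data flag is the one from the question; the simulation itself is elided):
import sys
import argparse

parser = argparse.ArgumentParser(description="Run the simulation on data fed via stdin.")
parser.add_argument("--plot_data", action="store_true", help="plot the simulated data")
args = parser.parse_args()  # rejects stray positional arguments like simdata.txt

data = sys.stdin.read()  # the file arrives here via the < redirection
# ... run the simulation, plotting if args.plot_data is set ...
Because no positional arguments are declared, python3 simulation.py simdata.txt fails with an "unrecognized arguments" error, while python3 simulation.py < simdata.txt --plot_data works as intended.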
There is a Python script start_test.py.
There is a second Python script simple_test.py.
# pseudo code:
start_test.py --calls--> subprocess(python.exe simple_test.py, args_simple_test[])
The Python interpreter for both scripts is the same. So instead of opening a new instance, I want to run simple_test.py directly from start_test.py. I need to preserve the sys.argv environment. A nice-to-have would be to actually enter the following code section in simple_test.py:
# file: simple_test.py
if __name__ == '__main__':
    some_test_function()
Most important is that the approach should be a universal one, not depending on the content of simple_test.py.
This setup would provide two benefits:
The call is much less resource intensive
The whole stack of simple_test.py can be debugged with PyCharm
So, how do I execute the call of a python script, from a python script, without starting a new subprocess?
"Executing a script" is a somewhat blurry term.
Typically the if __name__ == "__main__": part does the argument (sys.argv) decoding and then calls a worker function with explicit parameters. For clarity: it should not do anything else, since anything done there can't be reused without creating a new process, which causes all the overhead you are trying to avoid.
You simply bypass that and call this implementing routine directly.
So you end up with start_test.py containing something like:
from simple_test import worker
# ...
worker(typed_arg1, typed_arg2)
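For that to work, simple_test.py keeps all argument decoding inside the guard; a sketch, where worker and its two parameters are placeholders for whatever simple_test.py actually does:
# file: simple_test.py
import sys

def worker(arg1, arg2):
    # the actual test logic, callable in-process from other scripts
    pass

if __name__ == '__main__':
    # decode sys.argv here and nowhere else
    worker(sys.argv[1], sys.argv[2])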
Please excuse what I know is an incredibly basic question that I have nevertheless been unable to resolve on my own.
I'm trying to switch over my data analysis from Matlab to Python, and I'm struggling with something very basic: in Matlab, I write a function in the editor, and to use that function I simply call it from the command line, or within other functions. The function that I compose in the matlab editor is given a name at the function definition line, and it's generally best for the function name to match the .m file name to avoid confusion.
I don't understand how functions differ in Python, because I have not been successful translating the same approach there.
For instance, if I write a function in the Python editor (I'm using Python 2.7 and Spyder), simply saving the .py file and calling it by its name from the Python terminal does not work. I get a "function not defined" error. However, if I execute the function within Spyder's editor (using the "run file" button), not only does the code execute properly, from that point on the function is also call-able directly from the terminal.
So...what am I doing wrong? I fully appreciate that using Python isn't going to be identical to Matlab in every way, but it seems that what I'm trying to do isn't unreasonable. I simply want to be able to write functions and call them from the python command line, without having to run each and every one through the editor first. I'm sure my mistake here must be very simple, yet doing quite a lot of reading online hasn't led me to an answer.
Thanks for any information!
If you want to use functions defined in a particular file in Python you need to "import" that file first. This is similar to running the code in that file. Matlab doesn't require you to do this because it searches for files with a matching name and automagically reads in the code for you.
For example,
myFunction.py is a file containing
def myAdd(a, b):
    return a + b
In order to access this function from the Python command line or another file I would type
from myFunction import myAdd
And then during this session I can type
myAdd(1, 2)
There are a couple of ways of using import, see here.
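Another common form is to import the module itself and qualify the call (sticking with the myFunction.py example above):
import myFunction

myFunction.myAdd(1, 2)  # the same function, accessed through the module name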
You need to add a check for __main__ to your Python script:
def myFunction():
    pass

if __name__ == "__main__":
    myFunction()
Then you can run your script from the terminal like this:
python myscript.py
Also, if your function is in another file, you need to import it:
from myFunctions import myFunction
myFunction()
Python doesn't have MATLAB's "one function per file" limitation. You can have as many functions as you want in a given file, and all of them can be accessed from the command line or from other functions.
Python also doesn't follow MATLAB's practice of always automatically making every function it can find usable all the time, which tends to lead to function name collisions (two functions with the same name).
Instead, Python uses the concept of a "module". A module is just a file (your .py file). That file can have zero or more functions, zero or more variables, and zero or more classes. When you want to use something from that file, you just import it.
So say you have a file 'mystuff.py':
X = 1
Y = 2

def myfunc1(a, b):
    # do something
    pass

def myfunc2(c, d):
    # do something
    pass
And you want to use it, you can just type import mystuff. You can then access any of the variables or functions in mystuff. To call myfunc2, you can just do mystuff.myfunc2(z, w).
What basically happens is that when you type import mystuff, Python just executes the code in the file and makes all the resulting variables available as mystuff.<varname>, where <varname> is the name of the variable. Unlike in MATLAB, Python treats functions like any other variable, so they are accessed the same way. The same is true of classes.
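For instance, continuing with the hypothetical mystuff.py above:
import mystuff

f = mystuff.myfunc2  # a function is an ordinary value...
f(3, 4)              # ...so it can be bound to a new name and called like the original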
There are other ways to import, too, such as from mystuff import myfunc.
You run Python programs by running them with:
python program.py
I have a Python file containing code that I developed. During its execution I type several characters at the keyboard at different stages of the program. Also, during the execution, I need to close a Notepad session which comes up when my program executes the command subprocess.call(["notepad", filename]). Having said that, I would like to run this code several times with inputs which change according to the case, and I was wondering if there is an automatic way to do that. Assuming that my code is called 'mainfile.py', I tried the following command combinations:
import sys
sys.argv=['arg1']
execfile('mainfile.py')
and
import sys
import subprocess
subprocess.call([sys.executable,'mainfile.py','test'])
But it does not seem to work, at least for the first argument. Also, as the second input should close the Notepad session, do you know how to pass that?
Maybe have a look at this https://stackoverflow.com/a/20052978/4244387
It's not clear what you are trying to do, though; the result you want to accomplish seems to be just opening Notepad for the sake of saving a file.
The subprocess.call() you have is the proper way to execute your script and pass it arguments.
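If you also need to supply the characters you normally type at the keyboard, one option (an assumption about your setup, since your code doesn't show how the input is read) is to pipe them to the child's stdin:
import sys
import subprocess

# hypothetical: feed the keystrokes "y", then "n", to mainfile.py automatically
proc = subprocess.Popen([sys.executable, 'mainfile.py', 'test'],
                        stdin=subprocess.PIPE)
proc.communicate('y\nn\n')  # on Python 3, pass bytes: b'y\nn\n'
This only works if mainfile.py reads from standard input (e.g. raw_input); it won't close the Notepad window for you.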
As far as launching notepad goes, you could do something like this:
notepad = subprocess.Popen(['notepad', filename])
# do other stuff ...
notepad.terminate() # terminate running session
Sorry for the beginner question, but I can't figure out cProfile (I'm really new to Python)
I can run it via my terminal with:
python -m cProfile myscript.py
But I need to run it on a webserver, so I'd like to put the command within the script it will look at. How would I do this? I've seen stuff using terms like __init__ and __main__, but I don't really understand what those are.
I know this is simple, I'm just still trying to learn everything and I know there's someone who will know this.
Thanks in advance! I appreciate it.
I think you've been seeing ideas like this:
if __name__ == "__main__":
# do something if this script is invoked
# as python scriptname. Otherwise, gets ignored.
What happens is that when you call python on a script, that file gets an attribute __name__ set to "__main__" if it is the file being directly invoked by the python executable. Otherwise (when the file is imported rather than run directly), __name__ is set to the module's name instead.
Now, you can use this trick on your scripts if you need to, for example, assuming you have:
def somescriptfunc():
    # does something
    pass

if __name__ == "__main__":
    # do something if this script is invoked
    # as python scriptname. Otherwise, gets ignored.
    import cProfile
    cProfile.run('somescriptfunc()')
This changes your script. When imported, its member functions, classes etc can be used as normal. When run from the command-line, it profiles itself.
Is this what you're looking for?
From the comments I've gathered more is perhaps needed, so here goes:
If you're running a script from CGI, chances are it is of the form:
# do some stuff to extract the parameters
# do something with the parameters
# return the response.
When I say abstract it out, I mean you can do this:
def do_something_with_parameters(param1, param2):
    pass

if __name__ == "__main__":
    import cProfile
    cProfile.run("do_something_with_parameters(param1='sometestvalue')")
Put that file on your Python path. When run by itself, it will profile the function you want profiled.
Now, for your CGI script, create a script that does:
from {insert name of script from above here} import do_something_with_parameters
# do something to determine parameter values
# do something with them *via the function*:
do_something_with_parameters(param1=..., param2=...)
# return something
So your cgi script just becomes a little wrapper for your function (which it is anyway) and your function is now self-testing.
You can then profile the function using made up values on your desktop, away from the production server.
There are probably neater ways to achieve this, but it would work.