I want to pass a constant in a C preprocessor style but with a Python script.
This constant is already declared with AC_DEFINE in my configure.ac file and used in my C program, and now I need to pass it to a Python script too.
I tried with a custom target in my Makefile.am that uses a sed call to preprocess a specific symbol in my Python script, but that feels like dirty coding to me.
How can I achieve this?
Create a config.py.in with contents like MYVAR = '''@MYVAR@''' and add it to AC_CONFIG_FILES in your configure.ac. You can then import config in your other Python scripts.
This fulfills much the same function as config.h does for C programs.
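For example (the value 42 is just a placeholder), a config.py.in could contain:

# generated from config.py.in by configure
MYVAR = '''@MYVAR@'''

with AC_SUBST([MYVAR], [42]) and AC_CONFIG_FILES([config.py]) added to configure.ac. Note that the @MYVAR@ substitution is driven by AC_SUBST; a constant that is only declared with AC_DEFINE needs an AC_SUBST as well. After running configure, any script can do:

import config
print(config.MYVAR)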
Is it possible to have a Makefile grabbing arguments from either a config.ini or config.yml file?
Let's consider this example: we have a Python main.py file which is written as a CLI. Now, we do not want users to be filling in arguments to a Python CLI in the terminal, so we have an example config.ini file with the arguments:
PYTHON FILE:
import typer

def say_name(name: str):
    print('running the code')
    print(f'Hello there {name}')

if __name__ == "__main__":
    typer.run(say_name)
config.ini FILE:
[argument]
name = person
Makefile FILE:
run_code:
	python main.py ${config.ini.argument.name}
Is it possible to have a project infrastructure like this?
I am aware that the spaCy project does exactly this. However, I would like to do something like this even outside an NLP project, without the need of using spaCy.
You need to find, or write, a tool which will read in your .ini file and generate a set of makefile variables from it. I don't know where you would find such a thing, but it's probably not hard to write one using a Python module that parses .ini files (see the sketch at the end of this answer).
Suppose you have a script ini2make that will do this, so that if you run:
ini2make config.ini
it will write to stdout makefile variable assignment lines like this:
config.ini.argument.name = person
config.ini.argument.email = person@somewhere.org
etc. Then you can integrate this into your makefile very easily (here I'm assuming you're using GNU make) through use of GNU make's automatic include file regeneration:
include config.ini.mk
config.ini.mk: config.ini
	ini2make $< > $@
Done. Now whenever config.ini.mk doesn't exist or config.ini has been changed since config.ini.mk was last generated, make will run the ini2make script to update it then re-execute itself automatically to read the new values.
Then you can use variables that were generated, like $(config.ini.argument.name) etc.
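If you do end up writing ini2make yourself, a minimal sketch using Python's configparser could look like this (the script name and the output naming scheme are just the ones assumed above):

#!/usr/bin/env python
# ini2make: print one makefile variable assignment per key in an .ini file
import sys
import configparser

config = configparser.ConfigParser()
config.read(sys.argv[1])
for section in config.sections():
    for key, value in config.items(section):
        print('%s.%s.%s = %s' % (sys.argv[1], section, key, value))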
Please excuse what I know is an incredibly basic question that I have nevertheless been unable to resolve on my own.
I'm trying to switch over my data analysis from Matlab to Python, and I'm struggling with something very basic: in Matlab, I write a function in the editor, and to use that function I simply call it from the command line, or within other functions. The function that I compose in the Matlab editor is given a name at the function definition line, and it's generally best for the function name to match the .m file name to avoid confusion.
I don't understand how functions differ in Python, because I have not been successful translating the same approach there.
For instance, if I write a function in the Python editor (I'm using Python 2.7 and Spyder), simply saving the .py file and calling it by its name from the Python terminal does not work. I get a "function not defined" error. However, if I execute the function within Spyder's editor (using the "run file" button), not only does the code execute properly, from that point on the function is also callable directly from the terminal.
So...what am I doing wrong? I fully appreciate that using Python isn't going to be identical to Matlab in every way, but it seems that what I'm trying to do isn't unreasonable. I simply want to be able to write functions and call them from the python command line, without having to run each and every one through the editor first. I'm sure my mistake here must be very simple, yet doing quite a lot of reading online hasn't led me to an answer.
Thanks for any information!
If you want to use functions defined in a particular file in Python you need to "import" that file first. This is similar to running the code in that file. Matlab doesn't require you to do this because it searches for files with a matching name and automagically reads in the code for you.
For example,
myFunction.py is a file containing
def myAdd(a, b):
    return a + b
In order to access this function from the Python command line or another file I would type
from myFunction import myAdd
And then during this session I can type
myAdd(1, 2)
There are a couple of ways of using import; see the Python documentation on modules for details.
You need to add a check for __main__ to your Python script:
def myFunction():
    pass

if __name__ == "__main__":
    myFunction()
then you can run your script from the terminal like this:
python myscript.py
Also, if your function is in another file, you need to import it:
from myFunctions import myFunction
myFunction()
Python doesn't have MATLAB's "one function per file" limitation. You can have as many functions as you want in a given file, and all of them can be accessed from the command line or from other functions.
Python also doesn't follow MATLAB's practice of always automatically making every function it can find usable all the time, which tends to lead to function name collisions (two functions with the same name).
Instead, Python uses the concept of a "module". A module is just a file (your .py file). That file can have zero or more functions, zero or more variables, and zero or more classes. When you want to use something from that file, you just import it.
So say you have a file 'mystuff.py':
X = 1
Y = 2

def myfunc1(a, b):
    # do something
    pass

def myfunc2(c, d):
    # do something
    pass
When you want to use it, you can just type import mystuff. You can then access any of the variables or functions in mystuff. To call myfunc2, you can just do mystuff.myfunc2(z, w).
What basically happens is that when you type import mystuff, it just executes the code in the file, and makes all the variables that result available from mystuff.<varname>, where <varname> is the name of the variable. Unlike in MATLAB, Python functions are treated like any other variable, so they can be accessed just like any other variable. The same is true with classes.
There are other ways to import, too, such as from mystuff import myfunc.
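For instance, with the mystuff.py above saved in your working directory, a session could look like this (the argument values are arbitrary):

import mystuff

print(mystuff.X)       # 1
mystuff.myfunc2(3, 4)  # calls the function defined in mystuff.py

from mystuff import myfunc1
myfunc1(1, 2)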
You run Python programs with
python program.py
How would the output of one script be passed as the input to another? For example if a.py outputs format.xml then how would a.py call b.py and pass it the argument format.xml? I think it's supposed to work like piping done on the command line.
I've been hired by a bunch of scientists with domain-specific knowledge, but sometimes their computer programming requirements don't make sense. There's a long chain of "modules" and my boss is really adamant about 1 module being 1 Python script, with the output of one module being the input of the next. I'm very new to Python, but if this design pattern rings a bell to anyone, let me know.
Worse yet, the project is to be converted to executable format (using py2exe), and there still has to be the same number of executable files as .py files.
The pattern makes sense in some cases, but for me it's when you want to be able to run each module as a self-contained executable.
I.e. should you want to use the script from within FORTRAN or a similar language, the easiest way is to build the Python module into an executable and then call it from FORTRAN.
That would not mean that one module is per definition one Python file, just that it only has one entry point and is in fact executable.
The one-module-per-script rule could be there to make it easier to locate the code, or to make it easy to mail a module to someone for code inspection or peer review (done often in scientific communities).
So the requirements may be a mix of technical and social ones.
Anyway back to the problem.
Anyway, back to the problem: I would use the subprocess module to call the next module (with close_fds set to True):
If close_fds is true, all file descriptors except 0, 1 and 2 will be closed before the child process is executed. (Unix only). Or, on Windows, if close_fds is true then no handles will be inherited by the child process. Note that on Windows, you cannot set close_fds to true and also redirect the standard handles by setting stdin, stdout or stderr.
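For the example in the question, a.py could hand off to b.py with something like this (a sketch; the script names and the format.xml argument come from the question):

import subprocess

# a.py has just written format.xml; start the next module with it.
# close_fds=True keeps a.py's other file descriptors out of the child.
subprocess.check_call(['python', 'b.py', 'format.xml'], close_fds=True)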
To emulate a | b shell pipeline in Python:
#!/usr/bin/env python
from subprocess import check_call
check_call('a | b', shell=True)
The a program writes to its stdout stream and knows nothing about the b program; the b program reads from its stdin and knows nothing about the a program.
A more flexible approach is to define functions and classes in the a.py and b.py modules that work with objects, and to implement the command-line interface that produces/consumes xml in terms of these functions in an if __name__ == "__main__" block, e.g., a.py:
#!/usr/bin/env python
import sys
import xml.etree.ElementTree as etree
def items():
    yield {'name': 'a'}
    yield {'name': 'b'}

def main():
    parent = etree.Element("items")
    for item in items():
        etree.SubElement(parent, 'item', attrib=item)
    etree.ElementTree(parent).write(sys.stdout)  # set encoding="unicode" on Python 3

if __name__ == "__main__":
    main()
This allows you to avoid unnecessary serialization to xml / deserialization from xml when the scripts are not called from the command line:
#!/usr/bin/env python
import a, b
for item in a.items():
    b.consume(item)
Note: item can be an arbitrary Python object such as dict or an instance of a custom class.
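For completeness, a b.py counterpart might look like this (a sketch; the consume function only has to accept the dicts that a.items() yields, and main reads the xml that a.py writes):

#!/usr/bin/env python
import sys
import xml.etree.ElementTree as etree

def consume(item):
    print(item['name'])

def main():
    # parse the xml produced by a.py from stdin and feed each item to consume()
    tree = etree.parse(sys.stdin)
    for element in tree.iter('item'):
        consume(dict(element.attrib))

if __name__ == "__main__":
    main()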
I have an OpenModelica model made with OMEdit. In order to get a concrete example I designed a simple model, myGain, in which a constant block feeds a gain block.
Now I would like to run the model in Python. I can do this by using OMPython. After importing OMPython and loading the files I use the following command to run the simulation:
result = OMPython.execute("simulate(myGain, numberOfIntervals=2, outputFormat=\"mat\")")
The simulation now runs and the results are written to a file.
Now I would like to run the same model but with a different parameter for the constant block.
How can I do this?
Since the parameter is compiled into the model it should not be possible to change it. So what I need is a model in which the constant is exposed as a variable. Is it possible to call the model from Python and set the variable "a" to a specific value?
With the command OMPython.execute("simulate(...)") I can specify some simulation settings like "numberOfIntervals" or "outputFormat", but nothing more.
You can send more flags to the simulate command. For example simflags to override parameters. See https://openmodelica.org/index.php/forum/topic?id=1011 for some details.
You can also use the buildModel(...) command followed by system("./ModelName -overrideFile ...") to avoid re-translation and re-compilation, or, with some minor scripting, to run parallel parameter sweeps. If you use Linux or OSX it should be easy to call OMPython to create the executable and then call it yourself. On Windows you need to set up some environment variables for it to work as expected.
I believe you are looking for the setParameterValue command. You can read about it here: https://openmodelica.org/download/OMC_API-HowTo.pdf
Basically you would add a line similar to OMPython.execute("setParameterValue(myGain, a, 20)") to your python script before the line where you run the simulation, so long as a is a parameter in your model.
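Put together with the simulate call from the question, the relevant part of the script would be:

OMPython.execute("setParameterValue(myGain, a, 20)")
result = OMPython.execute("simulate(myGain, numberOfIntervals=2, outputFormat=\"mat\")")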
Create a new folder in Windows.
In this folder create 2 new files, file1.py and file2.bat.
The file1.py content is:
import os
import sys

# adjust these paths to match your OpenModelica installation
sys.path.insert(0, r"C:\OpenModelica1.11.0-32bit\share\omc\scripts\PythonInterface")
from OMPython import OMCSession
sys.path.insert(0, r"C:\OpenModelica1.11.0-32bit\lib\python")

os.environ['USER'] = 'stefanache'
omc = OMCSession()
omc.sendExpression("loadModel(Modelica)")
omc.sendExpression("loadFile(getInstallationDirectoryPath() + \"/share/doc/omc/testmodels/BouncingBall.mo\")")
omc.sendExpression("instantiateModel(BouncingBall)")
omc.sendExpression("simulate(BouncingBall)")
omc.sendExpression("plot(h)")
the file2.bat content is:
@echo off
python file1.py
pause
then click on file2.bat... and please be patient!
The plotted result window will appear.
I know it can be achieved via the command line, but I need to pass at least 10 variables, and handling that on the command line would mean too much programming, since these variables may or may not be passed.
Actually I have built an application half in VB (for the GUI) and half in Python (for the script). I need to pass variables to Python, similar to its keyword arguments, i.e., x = val1, y = val2. Is there any way to achieve this?
If you are using Python <2.7 I would suggest optparse.
optparse is deprecated though, and in 2.7 you should use argparse
It makes passing named parameters a breeze.
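A minimal sketch with argparse, where every option is optional so that each variable may or may not be passed (the option names x and y are just examples):

import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--x', default=None)
parser.add_argument('--y', default=None)
args = parser.parse_args()

# called as: thepyscript.py --x val1 --y val2  (either option may be omitted)
if args.x is not None:
    print('x = %s' % args.x)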
You can do something fun like calling it as
thepyscript.py "x = 12,y = 'hello world', z = 'jam'"
and inside your script, parse it like this:

import sys

stuff = sys.argv[1].split(',')
for item in stuff:
    exec(item)  # or eval(item) depending on how complex you get

# Exec can be a lot of fun :) In fact with this approach you could potentially
# send functions to your script.
# If this is more than you need, then I'd stick with argparse/optparse.
Since you're working on windows with VB, it's worth mentioning that IronPython might be one option. Since both VB and IronPython can interact through .NET, you could wrap up your script in an assembly and expose a function which you call with the required arguments.
Have you taken a look at the getopt module? It's designed to make working with command line options easier. See also the examples at Dive Into Python.
If you are working with Python 2.7 (and not lower), then you can also have a look at the argparse module, which should make it even easier.
If your script is not called too often, you can use a configuration file.
The .ini style is easily readable by ConfigParser:
[Section_1]
foo1=1
foo2=2
foo3=5
...
[Section_2]
bar1=1
bar2=2
bar3=3
...
If you have a serious amount of variables, it might be the right way to go.
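Reading it back takes only a few lines (Python 2 spelling, to match the ConfigParser name above; the filename is an assumption):

import ConfigParser  # the module is called 'configparser' in Python 3

config = ConfigParser.ConfigParser()
config.read('settings.ini')
foo1 = config.getint('Section_1', 'foo1')
bar2 = config.getint('Section_2', 'bar2')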
What do you think about creating a Python script that sets these variables from the GUI side? When starting the Python app you just run this script first and you have your vars.
Execfile is one way to do that: have the GUI side write a small Python file that assigns the variables, and read it in with execfile.
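A sketch of that approach (Python 2 only, since execfile was removed in Python 3; the filename and variable names are assumptions):

# settings.py, written out by the GUI side, might contain:
#   x = 12
#   y = 'hello world'

settings = {}
execfile('settings.py', settings)  # executes the file; its assignments land in the dict
print(settings['x'])  # -> 12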