Is it possible to have a Makefile grabbing arguments from either a config.ini or config.yml file?
Let's consider this example: we have a Python main.py file which is written as a CLI. Now, we do not want users to be filling in arguments to a Python CLI in the terminal, so we have an example config.ini file with the arguments:
PYTHON FILE:
import typer

def say_name(name: str):
    print('running the code')
    print(f'Hello there {name}')

if __name__ == "__main__":
    typer.run(say_name)
config.ini FILE:
[argument]
name = person
Makefile FILE:
run_code:
	python main.py ${config.ini.argument.name}
Is it possible to have a project infrastructure like this?
I am aware that spaCy projects do exactly this. However, I would like to do something like this even outside of NLP projects, without the need to use spaCy.
You need to find, or write, a tool which will read in your .ini file and generate a set of makefile variables from it. I don't know where you would find such a thing but it's probably not hard to write one using a python module that parses .ini files.
Suppose you have a script ini2make that will do this, so that if you run:
ini2make config.ini
it will write to stdout makefile variable assignment lines like this:
config.ini.argument.name = person
config.ini.argument.email = person@somewhere.org
etc. Then you can integrate this into your makefile very easily (here I'm assuming you're using GNU make) through use of GNU make's automatic include file regeneration:
include config.ini.mk
config.ini.mk: config.ini
	ini2make $< > $@
Done. Now, whenever config.ini.mk doesn't exist or config.ini has been changed since config.ini.mk was last generated, make will run the ini2make script to update it and then re-execute itself automatically to read the new values.
Then you can use variables that were generated, like $(config.ini.argument.name) etc.
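For illustration, here is a minimal sketch of what such an ini2make script could look like, using Python's standard configparser module (the variable-naming scheme is just the one assumed above):

import configparser
import sys

def main(path):
    # Print one makefile variable assignment per key in the .ini file,
    # e.g. config.ini.argument.name = person
    parser = configparser.ConfigParser()
    parser.read(path)
    for section in parser.sections():
        for key, value in parser.items(section):
            print(f"{path}.{section}.{key} = {value}")

if __name__ == "__main__":
    main(sys.argv[1])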
I have a Python script which, at the beginning, takes a string variable, let's say "element_name", from the user, builds some sub-folders based on this string, and moves some output files created by the code into those folders.
On the other hand, I have a bash script in which some code should run inside the sub-folders made by the Python code.
Any help on how to introduce those folders in bash? How do I pass the "element_name" from Python to bash?
In the Python code "a.py" I tried
first = subprocess.Popen(['/bin/echo', element_name], stdout=subprocess.PIPE)
second = subprocess.Popen(['bash', 'path/to/script', '--args'], stdin=first.stdout)
and then in bash
source a.py
echo $element_name
but it doesn't work.
It's not clear from your question what is in your scripts, but I guess
subprocess.run(['/bin/bash', 'path/to/script', '--args', element_name])
is doing what you intend to do, passing the value of element_name to the script as an argument.
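For completeness, a rough sketch of the Python side under that assumption (the script path is the placeholder from the question, and element_name stands for whatever value your code collected from the user); inside the bash script the value then arrives as the first positional parameter, "$1":

import subprocess

element_name = "some_element"  # value obtained from the user earlier in a.py

# Pass the value as a command-line argument; the bash script can read it
# as "$1" (e.g. FOLDER="$1") and cd into the matching sub-folder.
subprocess.run(['/bin/bash', 'path/to/script', element_name], check=True)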
I found a way. What I did was to pass the argument in a bash file and source this bash file from my main bash file. Now everything works well.
My file structure looks like this:
runner.py
scripts/
    something_a/
        main.py
        other_file.py
    something_b/
        main.py
        anythingelse.py
    something_c/
        main.py
    ...
runner.py should look at all folders in scripts/ and run the main.py located there.
Right now I'm achieving this through subprocess.check_output. It works, but some of these scripts take a long time to run and I don't get to see any progress; it prints everything after the process has finished.
I'm hoping to find a solution that allows for 2 things to be done somewhat easily:
1) Stream the output instead of getting it all at the end
2) Not prohibit running multiple scripts at once
Is this possible? A lot of the solutions I've seen for running a Python script from another require knowledge of the other script's name/location. I can also enforce that all the main.py files have a specific function, if that helps.
You could use Popen to loop through each script and write its output to a separate log file. Then you could read from these files in real time, while each one is being populated. :)
How you would want to translate the output into a more readable format is a little trickier. You could create another script which reads these log files and decides how you'd like this information presented back in an understandable manner.
""" Use the same command as you would do for check_output """
cmd = ''
for filename in scriptList:
log = filename + ".log"
with io.open(filename, mode=log) as out:
subprocess.Popen(cmd, stdout=out, stderr=out)
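A rough sketch of how one of those log files could then be followed while the script is still running (the log path here is just an assumed example; the loop keeps following the file until you stop it):

import time

def follow(logfile):
    # Yield new lines appended to logfile, similar to `tail -f`.
    with open(logfile) as f:
        while True:
            line = f.readline()
            if line:
                yield line
            else:
                time.sleep(0.5)

# Print everything one of the running scripts writes, as it appears.
for line in follow("scripts/something_a/main.py.log"):
    print(line, end="")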
Note: Wait, before you mark my question as a duplicate, please read it completely.
I want to run a Python file using another.
I have tried using runpy, os.system & subprocess. The problem with subprocess and os.system is that they fail on systems which have both python2 and python3 installed if I just run with python. If I run with python3, it fails for people having only a single installation.
The problem with runpy is that it does not work according to my needs.
The following is my directory structure:
test\
    average\
        average.py
        average_test.py
    many similar directories like average...
    run_tests.py
The content of average.py is
def average(*args):
    # Do something
The content of average_test.py
from average import average

def average_test():
    assert average(1, 2, 3) == 2
Now if I use runpy.run_path, it throws an ImportError saying average is not a module. os.system and subprocess.call work perfectly, but I hope my "testing_framework" will be used by many, so I can't use those two functions. Isn't there any other way to do it? I have researched the whole of SO and Google but didn't find a solution.
Also, sys.path.append/insert will not help, as I can't tell my "users" to add this to every file of theirs.
Is there no easy way to do it? I mean, pytest accomplishes this, so there must be a way.
Thank you moderators for reading my question.
EDIT: I forgot to mention that I want the code in the if __name__ == '__main__' block to be run too, and I have also tried a snippet from another SO answer, which fails as well. The snippet was
def exec_full(filepath):
    global_namespace = {
        "__file__": filepath,
        "__name__": "__main__",
    }
    with open(filepath, 'rb') as file:
        exec(compile(file.read(), filepath, 'exec'), global_namespace)
Please note that the directory structure was just an example; the user may have a different code/directory structure.
NOTE: I found the answer. I needed to do subprocess.call([sys.executable, file_path]). sys.executable returns the path of the Python executable for the currently running interpreter.
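As an illustration of that approach, a rough sketch of what run_tests.py could do (the glob pattern matching the *_test.py naming above is an assumption):

import glob
import subprocess
import sys

# Launch every *_test.py found in the sub-directories with the same
# interpreter that is running this script, avoiding python2/python3 mix-ups.
for test_file in glob.glob("**/*_test.py", recursive=True):
    print(f"Running {test_file}")
    subprocess.call([sys.executable, test_file])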
Create an empty __init__.py in the average folder
And then try to import
from average import average
it will work like a charm :)
test\
    average\
        average.py
        average_test.py
        __init__.py
    many similar directories like average...
    run_tests.py
I've been given a piece of code which is a physical model (filename 'agnsim.py') and some instructions to run it which I'm confused by.
The instructions say that I should import the code using
import agnsim as agn
and then to run the model with
ed = agn.Wilp(dens=3., incr=0.2, drac=2.0)
The argument above in Wilp will configure the run.
My question is, how do I actually run this? Do I create a separate .py file that contains these two lines of code?
I've only ever run simple Python programs before, using e.g. python file.py
You should literally just open up the python console and type those two lines in.
$ python
>>> import agnsim as agn
>>> ed = agn.Wilp(dens=3., incr=0.2, drac=2.0)
Make sure that agnsim.py is located in the same folder from which you start python. E.g. if you're in "My Documents" and agnsim.py is in My Documents, you should cd to "My Documents" and then start python there (from the command line).
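Alternatively, since you asked about a separate .py file: you could put those same two lines into a small script (the file name below is arbitrary) and run it with python run_model.py, again assuming agnsim.py sits in the same folder:

# run_model.py -- assumes agnsim.py is in the same directory
import agnsim as agn

ed = agn.Wilp(dens=3., incr=0.2, drac=2.0)
print(ed)  # inspect whatever Wilp returns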
Some background (not mandatory, but might be nice to know): I am writing a Python command-line module which is a wrapper around latexdiff. It basically replaces all \cite{ref1, ref2, ...} commands in LaTeX files with written-out and properly formatted references before passing the files to latexdiff, so that latexdiff will properly mark changes to references in the text (otherwise, it treats the whole \cite{...} command as a single "word"). All the code is currently in a single file which can be run with python -m latexdiff-cite, and I have not yet decided how to package or distribute it. To make the script useful for anybody else, the citation formatting needs to be configurable. I have implemented an optional command-line argument -c CONFIGFILE to allow the user to point to their own JSON config file (a default file resides in the module folder and is loaded if the argument is not used).
Current implementation: My single-file command-line Python module currently parses command-line arguments in if __name__ == '__main__', and loads the config file (specified by the user in -c CONFIGFILE) here before running the main function of the program. The config variable is thus available in the entire module and all is well. However, I'm considering publishing to PyPI by following this guide which seems to require me to put the command-line parsing in a main() function, which means the config variable will not be available to the other functions unless passed down as arguments to where it's needed. This "passing down by arguments" method seems a little cluttered to me.
Question: Is there a more pythonic way to set some configuration globals in a module or otherwise accomplish what I'm trying to? (I don't want to rely on 3rd party modules.) Am I perhaps completely off the tracks in some fundamental way?
One way to do it is to have the configurations defined in a class or a simple dict:
import json

class Config(object):
    setting1 = "default_value"
    setting2 = "default_value"

    @staticmethod
    def load_config(json_file):
        """Load settings from a JSON config file."""
        with open(json_file) as f:
            config = json.load(f)
        for k, v in config.items():
            setattr(Config, k, v)
Then your application can access the settings via this class: Config.setting1 ...
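A small usage sketch (the file name and the settings accessed are just the assumed examples from above):

# Load user-supplied overrides once, e.g. from the -c CONFIGFILE argument
Config.load_config("my_config.json")

# Anywhere else in the module, read settings without passing them around
print(Config.setting1)
print(Config.setting2)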