Configuration file for Python

Is there any standard for Python configuration files? What I would like to have is a separate document from my script which would hold the script's options. For example...
Test Options
Random_AOI = 1
Random_ReadMode = 1
These would then become a list within the python script, such as...
test_options(random_aoi, random_readmode)
Would I have to use regular expressions to scan the document, or is there an easier way of doing this?

Yes - there's ConfigParser, which parses .ini files.
There's a handy introduction to it here.

The "standard" config file formats for Python are INI (which you parse/write with the ConfigParser module -- http://docs.python.org/2/library/configparser.html) and JSON (http://docs.python.org/2/library/json.html).
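For the options shown in the question, a minimal sketch using the Python 3 configparser module might look like the following (note that INI files need a section header, which the question's snippet lacks; in practice you would call parser.read("options.ini") on a real file instead of read_string):

```python
from configparser import ConfigParser

# INI text matching the question's options, with a section header added.
config_text = """
[Test Options]
Random_AOI = 1
Random_ReadMode = 1
"""

parser = ConfigParser()
parser.read_string(config_text)

# Option names are case-insensitive by default.
random_aoi = parser.getint("Test Options", "Random_AOI")
random_readmode = parser.getint("Test Options", "Random_ReadMode")
print(random_aoi, random_readmode)  # -> 1 1
```

The getint() accessor converts the stored string to an int for you; getboolean() and getfloat() exist as well.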

Related

A format to specify search pattern for folders and files within them

I am using a testing framework to test designs written in VHDL. In order for this to work, a Python script creates several "libraries" and then adds files in these libraries. Finally the simulator program is invoked, it starts up, compiles all the files into the specified libraries and then runs the tests.
I want to change the way we specify which "libraries" to create and where to take each library's files from. I think it should be possible to write the description of these things in JSON and then let a Python script process it. That way, the same Python script can be used for all projects and I don't have to worry about someone not knowing Python.
The main issue is deciding how to express the information in the JSON file. The JSON file shall contain entries for the library name and then the locations of source files. The fundamental problem is how to express these things using some kind of pattern, like a glob or regular expression:
Pattern for name of folder to search
Pattern for name of subfolders to search
Express if all subfolders should be searched in a folder or not
What subfolders to exclude from search
This would express something like e.g "files in folder A but not its subfolders, folder B and its subfolders but not subfolder X in folder B"
Then we come to the pattern for the actual file names. The file name pattern shall follow the folder pattern. If the same file pattern applies to multiple folders, then after multiple lines of folder name patterns, the file name pattern applying to all of them shall occur only once.
Pattern for name of file to add into library.
Pattern for name of file to exclude from library.
This would express something like e.g "all files ending with ".vhd" but no files that have "_bb_inst.vhd" in their name and do not add p.vhd and q.vhd"
Finally, the Python script parsing the files should be able to detect conflicts in the rules, e.g. a folder is specified for search and exclusion at the same time, the same files are added into multiple libraries, etc. This will of course be done within the Python script.
Now my question is: does a well-defined, pre-existing method to define something like what I have described here already exist? The only reason to choose JSON is that Python has packages to traverse JSON files.
Have you looked at the glob library?
For your more tricky use cases you could specify in/out lists using glob patterns.
For example
import glob
inlist_pattern = "/some/path/on_yoursystem/*.vhd"
outlist_pattern = "/some/path/on_yoursystem/*_bb_inst.vhd"
filtered_files = set(glob.glob(inlist_pattern)) - set(glob.glob(outlist_pattern))
And other set operations allow you to perform more interesting in/out operations.
To do recursive scans, try amending your patterns accordingly:
inlist_pattern = "/some/path/on_yoursystem/**/*.vhd"
outlist_pattern = "/some/path/on_yoursystem/**/*_bb_inst.vhd"
list_of_all_vhds_in_sub_dirs = glob.glob(inlist_pattern, recursive=True)
With the recursive=True keyword argument, the scan starts at the specified point in the path, and the ** notation matches the current folder plus zero or more levels of subfolders, returning the files that match the overall pattern.
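Tying this back to the JSON description in the question: there is no pre-existing standard for exactly this, but one hypothetical layout (all key names here are invented for illustration) could pair include/exclude glob patterns per library, to be resolved later with glob.glob(..., recursive=True) and set subtraction as shown above:

```python
import json

# Hypothetical library description; the patterns follow Python's glob syntax.
description = json.loads("""
{
  "libraries": {
    "my_lib": {
      "include": ["src/A/*.vhd", "src/B/**/*.vhd"],
      "exclude": ["src/B/X/**/*.vhd", "**/*_bb_inst.vhd", "**/p.vhd", "**/q.vhd"]
    }
  }
}
""")

for name, spec in description["libraries"].items():
    print(name, "adds", spec["include"], "minus", spec["exclude"])
```

Conflict checks (the same file landing in two libraries, a folder both searched and excluded) can then be done on the resolved file sets rather than on the patterns themselves.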

Read and modify the code of a Python script from another Python script

Is there any package in python that can read a python script and give the ability to modify it? Something like the following:
my_script: PythonScript = read_script("my_script.py")
list_of_functions: [PythonFunction] = my_script.functions
for func in list_of_functions:
    print(func.name)
    print(func.body)
list_of_functions[0].name = "new_function_name"
my_script.functions = list_of_functions
So again, what I am looking for is a package that can read a Python script and provide the ability to modify it, not necessarily in the same way as my example. I have a lot of scripts, and I am looking for a way to apply a fix to all of them without using find-and-replace in an IDE or reading them as plain text files.
Is there some way of traversing a Python script from Python code? I do not know what keywords I should use for a proper search, either.
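For reference, the standard-library ast module supports exactly this kind of traversal and rewriting (third-party tools such as libcst or RedBaron additionally preserve comments and formatting, which ast discards). A minimal sketch, assuming Python 3.9+ for ast.unparse; the function names are invented:

```python
import ast

source = """
def old_function_name():
    return 42
"""

tree = ast.parse(source)
for node in ast.walk(tree):
    if isinstance(node, ast.FunctionDef):
        print(node.name)               # inspect, as in the question's example
        node.name = "new_function_name"  # modify the tree in place

new_source = ast.unparse(tree)  # Python 3.9+: turn the tree back into code
print(new_source)
```

To fix up many scripts, you would read each file, parse it, transform the tree, unparse, and write the result back.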

How to pass all files (with given name patterns) to python program in PyCharm using the parameters field in run/debug configurations?

I have a bunch of .html files in a directory that I am reading into a python program using PyCharm. I am using the (*) star operator in the following way in the parameters field of the run/debug configuration dialog box in PyCharm:
*.html
, but this doesn't work. I get the following error:
IOError: [Errno 2] No such file or directory: '*.html'
at the line where I open the file to read into my program. I think it's reading "*.html" literally as a file name. I'd appreciate your help in learning how to use the star operator in this case.
Addendum:
I'm pretty new to Python and Pycharm. I'm running my script using the following configuration options:
Now, I've tried different variations of parameters here, like '*.html', "*.html", and just *.html. I also tried glob.glob('*.html'), but the code takes it literally and thinks the file name itself is "glob.glob('*.html')", then throws an error. I think this is more of a PyCharm issue than a bash or Python one. I guess what I want is to make PyCharm pass all the files of the directory through that parameters field in the picture. Is there some way to tell PyCharm NOT to take the string of parameters literally?
The way the files are being handled is by running a for loop through the sys.argv list and calling a function on each file. The function simply uses the open() method to read the contents of the file into a string so I can pull patterns out of the text. Hope that fleshes out the problem a bit better.
Filename expansion is a feature of bash. So if you call your Python script from the Linux command line, it will work, just as if you had typed out all of the filenames as arguments to your script. PyCharm doesn't have this feature, so you will have to do it yourself in your Python script using a glob.
import glob
import sys
files = glob.glob(sys.argv[-1])
To keep compatibility between bash and pycharm, you can use something like this:
import glob
globs = ['*.html', '*.css', 'script.js']
files = []
for g in globs:
    files.extend(glob.glob(g))
I have multiple arguments so this is what I did to allow for compatibility:
I have an argparse argument that returns an array of image file names. I check it as follows.
images = args["images"]
if len(images) == 1 and '*' in images[0]:
    import glob
    images = glob.glob(images[0])

Create a .xml file with the same name as a .txt file after conversion using ElementTree in Python

I am working on a project in which I have the task of converting a text file to an XML file. The restriction is that the file names should be the same. I use the ElementTree module of Python for this, but when writing the tree I use the tree.write() method, in which I have to explicitly hardcode the XML filename myself.
I want to know whether there is any procedure to automatically create the .xml file with the same name as the text file.
The sh module provides great flexibility and may help with what you want to do, if I understand the question correctly. I have shown an example in the following:
import sh
name = "filename.xml"
new_name = name[:-3] + "txt"  # "filename.txt"
sh.touch(new_name)
From there you can open the created file and write directly to it.
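Without shelling out at all, the standard library can derive the output name directly from the input name; a sketch using pathlib together with ElementTree (the input name "data.txt" is hypothetical, and the output is written to a throwaway temp directory here):

```python
import tempfile
import xml.etree.ElementTree as ET
from pathlib import Path

txt_path = Path("data.txt")              # hypothetical input filename
xml_path = txt_path.with_suffix(".xml")  # same stem, .xml extension

root = ET.Element("root")
root.text = "contents converted from the text file"
out_file = Path(tempfile.mkdtemp()) / xml_path.name
ET.ElementTree(root).write(out_file)     # no hardcoded filename needed
print(xml_path.name)  # -> data.xml
```

os.path.splitext(name)[0] + ".xml" achieves the same renaming if you prefer plain strings over pathlib.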

How to use Python to programmatically generate part of Sphinx documentation?

I am using Sphinx to generate the documentation for a project of mine.
In this project, I describe a list of available commands in a yaml file which, once loaded, results in a dictionary in the form {command-name : command-description} for example:
commands = {"copy" : "Copy the highlighted text in the clipboard",
            "paste" : "Paste the clipboard text to cursor location",
            ...}
What I would like to know is whether there is a method in Sphinx to load the YAML file during the make html cycle, translate the Python dictionary into some reStructuredText format (e.g. a definition list) and include it in my HTML output.
I would expect my .rst file to look like:
Available commands
==================
The commands available in bla-bla-bla...
.. magic-directive-that-execute-python-code::
:maybe python code or name of python file here:
and to be converted internally to:
Available commands
==================
The commands available in bla-bla-bla...
copy
    Copy the highlighted text in the clipboard
paste
    Paste the clipboard text to cursor location
before being translated to HTML.
In the end I found a way to achieve what I wanted. Here's the how-to:
Create a Python script (let's call it generate-includes.py) that will generate the reStructuredText and save it in the myrst.inc file. (In my example, this would be the script loading and parsing the YAML, but that is irrelevant here.) Make sure this file is executable!
Use the include directive in your main .rst document of your documentation, in the point where you want your dynamically-generated documentation to be inserted:
.. include:: myrst.inc
Modify the sphinx Makefile in order to generate the required .inc files at build time:
myrst.inc:
	./generate-includes.py

html: myrst.inc
	...(other stuff here)
Build your documentation normally with make html.
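As a concrete sketch of the generate-includes.py step: the commands dict below stands in for the parsed YAML, the output filename comes from the steps above, and the helper name is invented. The indented body lines are what make reST render the output as a definition list.

```python
#!/usr/bin/env python
# Stand-in for the dictionary loaded from the YAML file.
commands = {"copy": "Copy the highlighted text in the clipboard",
            "paste": "Paste the clipboard text to cursor location"}

def to_definition_list(cmds):
    """Render a {name: description} dict as a reST definition list."""
    lines = []
    for name, desc in sorted(cmds.items()):
        lines.append(name)
        lines.append("    " + desc)  # indented line -> definition body
    return "\n".join(lines) + "\n"

if __name__ == "__main__":
    with open("myrst.inc", "w") as f:
        f.write(to_definition_list(commands))
```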
An improvement based on Michael's code and the built-in include directive:
import sys
from os.path import basename

try:
    from StringIO import StringIO
except ImportError:
    from io import StringIO

from docutils.parsers.rst import Directive
from docutils import nodes, statemachine

class ExecDirective(Directive):
    """Execute the specified python code and insert the output into the document"""
    has_content = True

    def run(self):
        oldStdout, sys.stdout = sys.stdout, StringIO()

        tab_width = self.options.get('tab-width', self.state.document.settings.tab_width)
        source = self.state_machine.input_lines.source(self.lineno - self.state_machine.input_offset - 1)

        try:
            exec('\n'.join(self.content))
            text = sys.stdout.getvalue()
            lines = statemachine.string2lines(text, tab_width, convert_whitespace=True)
            self.state_machine.insert_input(lines, source)
            return []
        except Exception:
            return [nodes.error(None, nodes.paragraph(text = "Unable to execute python code at %s:%d:" % (basename(source), self.lineno)), nodes.paragraph(text = str(sys.exc_info()[1])))]
        finally:
            sys.stdout = oldStdout

def setup(app):
    app.add_directive('exec', ExecDirective)
This one imports the output earlier so that it goes straight through the parser. It also works in Python 3.
I needed the same thing, so I threw together a new directive that seems to work (I know nothing about custom Sphinx directives, but it's worked so far):
import sys
from os.path import basename
from StringIO import StringIO

from sphinx.util.compat import Directive
from docutils import nodes

class ExecDirective(Directive):
    """Execute the specified python code and insert the output into the document"""
    has_content = True

    def run(self):
        oldStdout, sys.stdout = sys.stdout, StringIO()
        try:
            exec '\n'.join(self.content)
            return [nodes.paragraph(text = sys.stdout.getvalue())]
        except Exception, e:
            return [nodes.error(None, nodes.paragraph(text = "Unable to execute python code at %s:%d:" % (basename(self.src), self.srcline)), nodes.paragraph(text = str(e)))]
        finally:
            sys.stdout = oldStdout

def setup(app):
    app.add_directive('exec', ExecDirective)
It's used as follows:
.. exec::
    print "Python code!"
    print "This text will show up in the document"
Sphinx doesn't have anything built-in to do what you like. You can either create a custom directive to process your files or generate the reStructuredText in a separate step and include the resulting reStructuredText file using the include directive.
I know this question is old, but maybe someone else will find it useful as well.
It sounds like you don't actually need to execute any python code, but you just need to reformat the contents of your file. In that case you might want to look at sphinx-jinja (https://pypi.python.org/pypi/sphinx-jinja).
You can load your YAML file in the conf.py:
jinja_contexts = yaml.load(yourFileHere)
Then you can use jinja templating to write out the contents and have them treated as reST input.
Sphinx does support custom extensions, and that would probably be the best way to do this: http://sphinx.pocoo.org/ext/tutorial.html.
Not quite the answer you're after, but perhaps a close approximation: yaml2rst. It's a converter from YAML to reST. It doesn't do anything explicitly fancy with the YAML itself, but it looks for comment lines (starting with #) and pulls them out into reST chunks, with the YAML going into code-blocks. This allows for a sort of literate YAML.
Also, the syntax-highlighted YAML is quite readable (heck, it's YAML, not JSON!).
