edit linux telegraf.conf with python

I am using Telegraf as a measuring/monitoring tool in my tests. I need to edit the Telegraf configuration automatically, since all tests are executed automatically.
Currently I am using re for configuring it; this is the process:
Read the whole file content.
Use regex to find and edit the required plugin/property.
Write the whole changed content to the file.
But I'm searching for a library, if one exists, like ConfigParser or reconfigure, that handles the configuration as an object rather than as raw text.
I tried ConfigParser.ConfigParser, ConfigParser.RawConfigParser and ConfigParser.SafeConfigParser; all return:
ConfigParser.ParsingError: File contains parsing errors: /etc/telegraf/telegraf.conf
The reconfigure library has specific configuration classes, each of which targets a particular type of Linux config file (e.g. FSTabConfig, ResolvConfig and some other types), but it doesn't contain a class for Telegraf configs.
Does anyone have an option in mind?
EDIT 1:
I tried the configobj library (as @KevinC suggested), but it loads nothing:
>>> import configobj
>>> c = configobj.ConfigObj('/home/zeinab/Desktop/config-modification/telegraf.conf', list_values=False)
>>> c
ConfigObj({})
Using list_values=True returns the same results.

You can use toml
Configuration file
[[inputs.ping]]
## Hosts to send ping packets to.
urls = ["example.org"]
method = "exec"
Usage
import toml
conf = toml.load("/etc/telegraf/telegraf.conf")
conf.get("inputs")
Output
{'ping': [{'urls': ['example.org'], 'method': 'exec'}]}
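If you also need to write the changes back, the same library can dump the structure again. A minimal sketch (note that round-tripping this way drops the comments from the original file, so it is not a drop-in config editor):

import toml

conf = toml.load("/etc/telegraf/telegraf.conf")
# change the ping URLs in place (keys follow the example config above)
conf["inputs"]["ping"][0]["urls"] = ["example.com"]
with open("/etc/telegraf/telegraf.conf", "w") as f:
    toml.dump(conf, f)  # comments and formatting are not preserved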

You can use configobj, but you have to specify list_values=False:
c = configobj.ConfigObj('/etc/telegraf/telegraf.conf', list_values=False)

Related

How to get a list of warnings from sphinx compilation

I am developing a sphinx based collaborative writing tool. Users access the web application (developed in python/Flask) to write a book in sphinx and compile it to pdf.
I have learned that in order to compile a sphinx documentation from within python I should use
import sphinx
result = sphinx.build_main(['-c', 'path/to/conf',
                            'path/to/source/', 'path/to/out'])
So far so good.
Now my users want the app to show them their syntax mistakes. But the output (result in the example above) only gives me the exit code.
So, how do I get a list of warnings from the build process?
Perhaps I am being too ambitious, but since sphinx is a python tool, I was expecting to have a nice pythonic interface with the tool. For example, the output of sphinx.build_main could be a very rich object with warnings, line numbers...
On a related note, the argument to the method sphinx.build_main looks just like a wrapper to the command line interface.
sphinx.build_main() calls sphinx.cmdline.main(), which in turn creates a sphinx.application.Sphinx object. You could create such an object directly (instead of "making system calls within python"). Use something like this:
import os
from sphinx.application import Sphinx
# Main arguments
srcdir = "/path/to/source"
confdir = srcdir
builddir = os.path.join(srcdir, "_build")
doctreedir = os.path.join(builddir, "doctrees")
builder = "html"
# Write warning messages to a file (instead of stderr)
warning = open("/path/to/warnings.txt", "w")
# Create the Sphinx application object
app = Sphinx(srcdir, confdir, builddir, doctreedir, builder,
             warning=warning)
# Run the build
app.build()
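If you want the warnings as a Python list rather than in a file, a small variation on the snippet above should work, since warning only needs to be a file-like object (a sketch, not tested against every Sphinx version):

import io

# srcdir, confdir, builddir, doctreedir and builder as defined above
warning_stream = io.StringIO()
app = Sphinx(srcdir, confdir, builddir, doctreedir, builder,
             warning=warning_stream)
app.build()
# each warning or error is one line in the captured stream
warnings = warning_stream.getvalue().splitlines()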
Assuming you used sphinx-quickstart to generate your initial Sphinx documentation set with a makefile, you can use make to build the docs, which in turn uses the Sphinx tool sphinx-build. You can pass the -w <file> option to sphinx-build to write warnings and errors to a file as well as stderr.
Note that options passed through the command line override any other options set in the makefile and conf.py.

Pythonic way to set module-wide settings from external file

Some background (not mandatory, but might be nice to know): I am writing a Python command-line module which is a wrapper around latexdiff. It basically replaces all \cite{ref1, ref2, ...} commands in LaTeX files with written-out and properly formatted references before passing the files to latexdiff, so that latexdiff will properly mark changes to references in the text (otherwise, it treats the whole \cite{...} command as a single "word"). All the code is currently in a single file which can be run with python -m latexdiff-cite, and I have not yet decided how to package or distribute it. To make the script useful for anybody else, the citation formatting needs to be configurable. I have implemented an optional command-line argument -c CONFIGFILE to allow the user to point to their own JSON config file (a default file resides in the module folder and is loaded if the argument is not used).
Current implementation: My single-file command-line Python module currently parses command-line arguments in if __name__ == '__main__', and loads the config file (specified by the user in -c CONFIGFILE) here before running the main function of the program. The config variable is thus available in the entire module and all is well. However, I'm considering publishing to PyPI by following this guide which seems to require me to put the command-line parsing in a main() function, which means the config variable will not be available to the other functions unless passed down as arguments to where it's needed. This "passing down by arguments" method seems a little cluttered to me.
Question: Is there a more pythonic way to set some configuration globals in a module or otherwise accomplish what I'm trying to? (I don't want to rely on 3rd party modules.) Am I perhaps completely off the tracks in some fundamental way?
One way to do it is to have the configurations defined in a class or a simple dict:
import json

class Config(object):
    setting1 = "default_value"
    setting2 = "default_value"

    @staticmethod
    def load_config(json_file):
        """Load settings from a JSON config file."""
        with open(json_file) as f:
            config = json.load(f)
        for k, v in config.items():
            setattr(Config, k, v)
Then your application can access the settings via this class: Config.setting1 ...
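For example, a hypothetical startup sequence (the file name is just an illustration):

# load user-provided settings at startup, overriding the class defaults
Config.load_config("my_config.json")
print(Config.setting1)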

asciidoc fails when called via python subprocess

Here is my code:
#!/usr/bin/python
import subprocess
asciidoc_file_name = '/tmp/redoc_2013-06-25_12:52:19.txt'
asciidoc_call = ["asciidoc","-b docbook45",asciidoc_file_name]
print asciidoc_call
subprocess.call(asciidoc_call)
And here is the output:
labamba@lambada:~$ ./debug.py
['asciidoc', '-b docbook45', '/tmp/redoc_2013-06-25_12:52:19.txt']
asciidoc: FAILED: missing backend conf file: docbook45.conf
labamba@lambada:~$ asciidoc -b docbook45 /tmp/redoc_2013-06-25_12\:52\:19.txt
labamba@lambada:~$ file /tmp/redoc_2013-06-25_12\:52\:19.xml
/tmp/redoc_2013-06-25_12:52:19.xml: XML document text
labamba@lambada:~$ file /etc/asciidoc/docbook45.conf
/etc/asciidoc/docbook45.conf: HTML document, ASCII text, with very long lines
When called via python subprocess, asciidoc complains about a missing config file. When called on the command line, everything is fine, and the config file is there. Can anyone make sense out of this? I'm lost.
Try this:
asciidoc_call = ["asciidoc","-b", "docbook45", asciidoc_file_name]
The other call would invoke asciidoc with "-b docbook45" as one single option, which won't work.
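If you prefer to keep the command as a single string, you can let shlex split it into arguments for you; a small sketch using the file name from the question:

import shlex
import subprocess

command = "asciidoc -b docbook45 /tmp/redoc_2013-06-25_12:52:19.txt"
# shlex.split produces ['asciidoc', '-b', 'docbook45', '/tmp/redoc_2013-06-25_12:52:19.txt']
subprocess.call(shlex.split(command))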
The question is old... Anyway, asciidoc is implemented in Python and it also includes asciidocapi.py, which can be used as a module from your Python program. The module docstring says:
asciidocapi - AsciiDoc API wrapper class.
The AsciiDocAPI class provides an API for executing asciidoc. Minimal example
compiles `mydoc.txt` to `mydoc.html`:
import asciidocapi
asciidoc = asciidocapi.AsciiDocAPI()
asciidoc.execute('mydoc.txt')
- Full documentation in asciidocapi.txt.
- See the doctests below for more examples.
To simplify, it implements the AsciiDocAPI class which, when initialized, searches for the asciidoc script and imports it behind the scenes as a module. This way you can use it more naturally from Python, and you can avoid using subprocess.call().
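Applied to the original question, something along these lines should produce the DocBook output; I am going by the documented execute() signature (infile, outfile=None, backend=None), so double-check it against asciidocapi.txt:

import asciidocapi

asciidoc = asciidocapi.AsciiDocAPI()
# the backend is passed directly instead of the "-b docbook45" command-line option
asciidoc.execute('mydoc.txt', backend='docbook45')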

Can I configure mercurial hooks like some extensions are configured in the hgrc file?

I know how to specify which hooks are run when. What I want to know is if it is possible to pass config into the hook via the hgrc file. Extensions can do this, e.g.
[extensions]
someextension = something
[someextension]
some.config = 1
some.other.config = True
I want to be able to do something similar for hooks, e.g.
[hooks]
changegroup.mail_someone = python:something
[changegroup.mail_someone]
to_address = some.email.address@somewhere.com
Is something like this possible? Searching for a way to do this hasn't turned up anything useful... If it is possible, how do I go about reading in the config in my (Python in-process) hook handler?
Let me answer for both hook types:
An in-process hook would use ui.config and the related methods to read the config values:
address = ui.config('changegroup.mail_someone', 'to_address')
You can also use ui.configbool and ui.configlist to read Booleans and lists, respectively.
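Putting that together, an in-process changegroup hook might look roughly like this (the function and section names mirror the question; the 'enabled' flag is a hypothetical extra option to show configbool, and the actual mail sending is left out):

def mail_someone(ui, repo, **kwargs):
    to_address = ui.config('changegroup.mail_someone', 'to_address')
    enabled = ui.configbool('changegroup.mail_someone', 'enabled', True)
    if enabled and to_address:
        ui.status('would mail %s\n' % to_address)
    return False  # a falsy return value means the hook succeeded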
An external hook can use hg showconfig to extract the configuration value:
$ hg showconfig changegroup.mail_someone.to_address
That will return some.email.address@somewhere.com on stdout. You can use
$ hg showconfig changegroup.mail_someone
to see all settings in that particular section.

What's the official way of storing settings for Python programs?

Django uses real Python files for settings, Trac uses a .ini file, and some other pieces of software use XML files to hold this information.
Is any one of these approaches blessed by Guido and/or the Python community more than the others?
Depends on the predominant intended audience.
If it is programmers who change the file anyway, just use Python files like settings.py.
If it is end users, then think about .ini files.
As many have said, there is no "offical" way. There are, however, many choices. There was a talk at PyCon this year about many of the available options.
Don't know if this can be considered "official", but it is in the standard library: 14.2. ConfigParser — Configuration file parser.
This is, obviously, not a universal solution, though. Just use whatever feels most appropriate to the task, without any unnecessary complexity (and, especially, without Turing-completeness! Think about automatic or GUI configurators).
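A minimal sketch (using the Python 3 name configparser; the file name, section and option names here are made up):

import configparser

config = configparser.ConfigParser()
config.read("settings.ini")
host = config.get("database", "host", fallback="localhost")
port = config.getint("database", "port", fallback=5432)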
I use a shelf ( http://docs.python.org/library/shelve.html ):
import shelve

shelf = shelve.open(filename)  # filename is the path to the shelf file
shelf["users"] = ["David", "Abraham"]
shelf.sync()  # save to disk
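Reading the values back later works the same way (assuming the same filename and key as above):

import shelve

shelf = shelve.open(filename)
users = shelf["users"]  # -> ["David", "Abraham"]
shelf.close()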
Just one more option: PyQt. Qt has a platform-independent way of storing settings with the QSettings class. Under the hood, on Windows it uses the registry, and on Linux it stores the settings in a hidden conf file. QSettings works very well and is pretty seamless.
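A minimal sketch with PyQt5 (the organization/application names and the key are just examples):

from PyQt5.QtCore import QSettings

settings = QSettings("MyOrg", "MyApp")
settings.setValue("window/width", 1024)
# the second argument is the default returned when the key is missing
width = int(settings.value("window/width", 800))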
There is no blessed solution as far as I know. There is no right or wrong way to store app settings either; XML, JSON, or any other type of file is fine as long as you are comfortable with it. For Python I personally use pypref; it's very easy, cross-platform and straightforward.
pypref is very useful as one can store static and dynamic settings and preferences ...
from pypref import Preferences
# create singleton preferences instance
pref = Preferences(filename="preferences_test.py")
# create preferences dict
pdict = {'preference 1': 1, 12345: 'I am a number'}
# set preferences. This would automatically create preferences_test.py
# in your home directory. Go and check it.
pref.set_preferences(pdict)
# let's update the preferences. This would automatically update the
# preferences_test.py file, you can verify that.
pref.update_preferences({'preference 1': 2})
# let's get some preferences. This would return the value of the preference if
# it is defined, or the default value if it is not.
print pref.get('preference 1')
# In some cases we must use raw strings. This is most likely needed when
# working with paths on Windows systems or when a preference includes
# special characters. That's how to do it ...
pref.update_preferences({'my path': " r'C:\Users\Me\Desktop' "})
# Sometimes preferences need to change dynamically or to be evaluated at run time.
# This can also be done by using the dynamic property. In this example a password
# generator preference is set using the uuid module. The dynamic dictionary
# must include the names of all modules that must be imported upon evaluating
# a dynamic preference
pre = {'password generator': "str(uuid.uuid1())"}
dyn = {'password generator': ['uuid',]}
pref.update_preferences(preferences=pre, dynamic=dyn)
# let's pull the 'password generator' preference twice and notice how the
# passwords differ on every pull
print pref.get('password generator')
print pref.get('password generator')
# those preferences can be accessed later. Let's simulate that by creating
# another Preferences instance, which will automatically detect the
# existence of a preferences file and connect to it
newPref = Preferences(filename="preferences_test.py")
# let's print 'my path' preference
print newPref.get('my path')
I am not sure that there is an 'official' way (it is not mentioned in the Zen of Python :)). I tend to use the ConfigParser module myself, and I think you will find that it is pretty common. I prefer it over the Python-file approach because you can write back to it and dynamically reload if you want.
One of the easiest ways is to use the json module.
Save the file as config.json with the details shown below.
Saving data in the json file:
{
    "john": {
        "number": "948075049",
        "password": "thisisit"
    }
}
Reading from json file:
import json

# open the config.json file
with open('config.json') as f:
    mydata = json.load(f)

# now mydata is a Python dictionary
print("username is", mydata.get('john').get('number'), "password is", mydata.get('john').get('password'))
It depends largely on how complicated your configuration is. If you're doing a simple key-value mapping and you want the capability to edit the settings with a text editor, I think ConfigParser is the way to go.
If your settings are complicated and include lists and nested data structures, I'd use XML or JSON and create a configuration editor.
For really complicated things where the end user isn't expected to change the settings much, or is more trusted, just create a set of Python classes and evaluate a Python script to get the configuration.
For web applications I like using OS environment variables: os.environ.get('CONFIG_OPTION')
This works especially well for settings that vary between deploys. You can read more about the rationale behind using env vars here: http://www.12factor.net/config
Of course, this only works for read-only values because changes to the environment are usually not persistent. But if you don't need write access they are a very good solution.
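For example (the variable names are hypothetical; the second argument to os.environ.get is the fallback used when the variable is unset):

import os

debug = os.environ.get('DEBUG', 'false').lower() == 'true'
database_url = os.environ.get('DATABASE_URL', 'sqlite:///local.db')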
It is more a matter of convenience. There is no official way per se. But using XML files would make sense, as they can be manipulated by various other applications/libraries.
Not an official one but this way works well for all my Python projects.
pip install python-settings
Docs here: https://github.com/charlsagente/python-settings
You need a settings.py file with all your defined constants like:
# settings.py
DATABASE_HOST = '10.0.0.1'
Then you need to either set an env variable (export SETTINGS_MODULE=settings) or manually call the configure method:
# something_else.py
from python_settings import settings
from . import settings as my_local_settings
settings.configure(my_local_settings) # configure() receives a python module
The utility also supports lazy initialization for objects that are heavy to load, so your Python project starts faster, since a setting is only evaluated when it is needed:
# settings.py
from python_settings import LazySetting
from my_awesome_library import HeavyInitializationClass # Heavy to initialize object
LAZY_INITIALIZATION = LazySetting(HeavyInitializationClass, "127.0.0.1:4222")
# LazySetting(Class, *args, **kwargs)
Just configure once and then use your variables wherever needed:
# my_awesome_file.py
from python_settings import settings
print(settings.DATABASE_HOST) # Will print '10.0.0.1'
Why would Guido bless something that is outside his scope? No, there is nothing in particular that is blessed.
