My goal is quite simple, but I couldn't find it in the configobj guide.
When I run my code I want it to write to a file without erasing what is already in it.
Every time I run this, it should write underneath what's already in the file.
This is my current code, which erases/overwrites what's already inside dasd.ini:
from configobj import ConfigObj
config = ConfigObj()
config.filename = "dasd.ini"
#
config['hey'] = "value1"
config['test'] = "value2"
#
config['another'] = {}  # the subsection must exist before nested keys are assigned
config['another']['them'] = "value4"
#
config.write()
This would be remarkably simpler if ConfigObj accepted a file-like object instead of a file name. This is the solution I offered in the comments:
import tempfile
from configobj import ConfigObj

with tempfile.NamedTemporaryFile() as t1, tempfile.NamedTemporaryFile() as t2, open('dasd.ini', 'w') as fyle:
    config = ConfigObj()
    config.filename = t1.file.name
    config['hey'] = "value1"
    config['test'] = "value2"
    config['another'] = {}  # the subsection must exist before nested keys are assigned
    config['another']['them'] = "value4"
    config.write()
    do_your_thing_with_(t2)
    t1.seek(0)
    t2.seek(0)
    fyle.write(t2.read())
    fyle.write(t1.read())
If I understand your question correctly, doing what you want is a very simple change. Use the following syntax to create your initial config object. This reads in keys and values from the existing file.
config = ConfigObj("dasd.ini")
Then you can add new settings or change the existing ones as in your example code.
config['hey'] = "value1"
config['test'] = "value2"
After you write it out using config.write(), you'll find that your dasd.ini file contains the original and new keys/values merged. It also preserves any comments you had in your original ini file, with new keys/values added to the end of each section.
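For completeness, here is the whole read-modify-write cycle as a minimal sketch (it assumes dasd.ini may not exist yet and that the 'another' subsection should be created when missing):

from configobj import ConfigObj

# Reads existing keys/values if dasd.ini exists; starts empty otherwise.
config = ConfigObj("dasd.ini")

config['hey'] = "value1"
config['test'] = "value2"

# Create the subsection before assigning nested keys.
if 'another' not in config:
    config['another'] = {}
config['another']['them'] = "value4"

# Writes the merged result back, preserving existing comments.
config.write()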
Check out this link, I found it to be quite helpful: An Introduction to ConfigObj
Try this. You have to read all the keys and values of the section first, if the section already exists, and then write the whole section's data back:
# -*- coding: cp950 -*-
import configobj
import os
#-------------------------------------------------------------------------
# _readINI(ini_file, szSection, szKey)
# read KeyValue from a ini file
# return True/False, KeyValue
#-------------------------------------------------------------------------
def _readINI(ini_file, szSection, szKey=None):
    ret = None
    keyvalue = None
    if os.path.exists(ini_file):
        try:
            config = configobj.ConfigObj(ini_file, encoding='UTF8')
            if not szKey == None:
                keyvalue = config[szSection][szKey]
            else:
                keyvalue = config[szSection]
            ret = True
            print keyvalue
        except Exception, e:
            ret = False
    return ret, keyvalue
#-------------------------------------------------------------------------
# _writeINI(ini_file, szSection, szKey, szKeyValue):
# write key value into a ini file
# return True/False
# You have to read all keys and values of the section if the section existed already
# and then write the whole section data
#-------------------------------------------------------------------------
def _writeINI(ini_file, szSection, szKey, szKeyValue):
    ret = False
    try:
        ret_section = _readINI(ini_file, szSection)
        if not os.path.exists(ini_file):
            # create a new ini file with cfg header comment
            CreateNewIniFile(ini_file)
        config = configobj.ConfigObj(ini_file, encoding='UTF8')
        if ret_section[1] == None:
            config[szSection] = {}
        else:
            config[szSection] = ret_section[1]
        config[szSection][szKey] = szKeyValue
        config.write()
        ret = True
    except Exception, e:
        print str(e)
    return ret
#-------------------------------------------------------------------------
# CreateNewIniFile(ini_file)
# create a new ini with header comment
# return True/False
#-------------------------------------------------------------------------
def CreateNewIniFile(ini_file):
    ret = False
    try:
        if not os.path.exists(ini_file):
            f = open(ini_file, 'w+')
            f.write('########################################################\n')
            f.write('# Configuration File for Parallel Settings of Moldex3D #\n')
            f.write('# Please Do Not Modify This File                       #\n')
            f.write('########################################################\n')
            f.write('\n\n')
            f.close()
            ret = True
    except Exception, e:
        print e
    return ret
#----------------------------------------------------------------------
if __name__ == "__main__":
    path = 'D:\\settings.cfg'
    _writeINI(path, 'szSection', 'szKey', u'kdk12341 他dkdk')
    _writeINI(path, 'szSection', 'szKey-1', u'kdk123412dk')
    _writeINI(path, 'szSection', 'szKey-2', u'kfffk')
    _writeINI(path, 'szSection', 'szKey-3', u'dhhhhhhhhhhhh')
    _writeINI(path, 'szSection-333', 'ccc', u'555')
    #_writeINI(path, 'szSection-222', '', u'')
    print _readINI(path, 'szSection', 'szKey-2')
    print _readINI(path, 'szSection-222')
    #CreateNewIniFile(path)
Related
I have a Python Lambda function that I inherited which searches for and reports on installed packages on EC2 instances. It pulls this information from SSM Inventory, where the results are output to an S3 bucket. Until now, all of the installed packages have had specific names. Now we need to report on Palo Alto Cortex XDR. The issue I'm facing is that this product includes the version number in its name, and we have different versions installed. If I use the exact name (i.e. Cortex XDR 7.8.1.11343) I get reporting on that particular version but not the others. I want to use a wildcard to do this. I import regex (import re) on line 7 and then change line 71 to xdr=line['Cortex*']), but it gives me the following error. I'm a bit new to Python and coding, so any explanation of what I'm doing wrong would be helpful.
File "/var/task/SoeSoftwareCompliance/RequiredSoftwareEmail.py", line 72, in build_html
xdr=line['Cortex*'])
import configparser
import logging
import csv
import json
from jinja2 import Template
import boto3
import re
# config
config = configparser.ConfigParser()
config.read("config.ini")
# logging
logger = logging.getLogger()
logger.setLevel(logging.INFO)
# #TODO
# refactor common_csv_header so that we use one with variable
# so that we write all content to one template file.
def build_html(account=None,
               ses_email_address=None,
               recipient_email=None):
    """
    :param recipient_email:
    :param ses_email_address:
    :param account:
    """
    account_id = account["id"]
    account_alias = account["alias"]
    linux_ec2s = []
    windows_ec2s = []
    ec2s_not_in_ssm = []
    excluded_ec2s = []
    # linux ec2s html
    with open(f"/tmp/{account_id}_linux_ec2s_required_software_report.csv", "r") as fp:
        lines = csv.DictReader(fp)
        for line in lines:
            if line["platform-type"] == "Linux":
                item = dict(id=line['instance-id'],
                            name=line['instance-name'],
                            ip=line['ip-address'],
                            ssm=line['amazon-ssm-agent'],
                            cw=line['amazon-cloudwatch-agent'],
                            ch=line['cloudhealth-agent'])
                # skip compliant linux ec2s where all values are found
                compliance_status = not all(item.values())
                if compliance_status:
                    linux_ec2s.append(item)
    # windows ec2s html
    with open(f"/tmp/{account_id}_windows_ec2s_required_software_report.csv", "r") as fp:
        lines = csv.DictReader(fp)
        for line in lines:
            if line["platform-type"] == "Windows":
                item = dict(id=line['instance-id'],
                            name=line['instance-name'],
                            ip=line['ip-address'],
                            ssm=line['Amazon SSM Agent'],
                            cw=line['Amazon CloudWatch Agent'],
                            ch=line['CloudHealth Agent'],
                            mav=line['McAfee VirusScan Enterprise'],
                            trx=line['Trellix Agent'],
                            xdr=line['Cortex*'])
                # skip compliant windows ec2s where all values are found
                compliance_status = not all(item.values())
                if compliance_status:
                    windows_ec2s.append(item)
    # ec2s not found in ssm
    with open(f"/tmp/{account_id}_ec2s_not_in_ssm.csv", "r") as fp:
        lines = csv.DictReader(fp)
        for line in lines:
            item = dict(name=line['instance-name'],
                        id=line['instance-id'],
                        ip=line['ip-address'],
                        pg=line['patch-group'])
            ec2s_not_in_ssm.append(item)
    # display or hide excluded ec2s from report
    display_excluded_ec2s_in_report = json.loads(config.get("settings", "display_excluded_ec2s_in_report"))
    if display_excluded_ec2s_in_report == "true":
        with open(f"/tmp/{account_id}_excluded_from_compliance.csv", "r") as fp:
            lines = csv.DictReader(fp)
            for line in lines:
                item = dict(id=line['instance-id'],
                            name=line['instance-name'],
                            pg=line['patch-group'])
                excluded_ec2s.append(item)
    # pass data to html template
    with open('templates/email.html') as file:
        template = Template(file.read())
    # pass parameters to template renderer
    html = template.render(
        linux_ec2s=linux_ec2s,
        windows_ec2s=windows_ec2s,
        ec2s_not_in_ssm=ec2s_not_in_ssm,
        excluded_ec2s=excluded_ec2s,
        account_id=account_id,
        account_alias=account_alias)
    # consolidated html with multiple tables
    tables_html_code = html
    client = boto3.client('ses')
    client.send_email(
        Destination={
            'ToAddresses': [recipient_email],
        },
        Message={
            'Body': {
                'Html':
                    {'Data': tables_html_code}
            },
            'Subject': {
                'Charset': 'UTF-8',
                'Data': f'SOE | Software Compliance | {account_alias}',
            },
        },
        Source=ses_email_address,
    )
    print(tables_html_code)
If I understand your problem correctly, you are getting a KeyError exception because Python does not support wildcards out of the box. A csv.DictReader creates a standard Python dictionary for each row in the csv, and Python's dictionary is just an associative array without pattern matching.
You can implement this with a regex, though. If you have a dictionary line and you don't know the full name of the key you are looking for, you can solve it with the re.search function.
line = {'Cortex XDR 7.8.1.11343': 'Some value you are looking for'}
val = next(v for k, v in line.items() if re.search('Cortex.+', k))
print(val) # 'Some value you are looking for'
Be aware that this assumes the line dictionary contains at least one key matching the 'Cortex.+' pattern, and it returns the first match. You would have to refactor this a bit to change that behaviour.
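For instance, a small helper like the one below avoids a StopIteration when no column matches and returns a default instead (a sketch: the helper name and the empty-string default are my own choices, not part of the original code):

def get_wildcard(line, pattern, default=""):
    # Return the first value whose column name matches the regex, or the default.
    return next((v for k, v in line.items() if re.search(pattern, k)), default)

# ...then, inside the Windows loop:
#     xdr=get_wildcard(line, r"Cortex.+")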
1. import os is missing from the code.
2. def build_html(account=None, ...): when account is passed as None, the error below is thrown at account["id"] and account["alias"].
Ex:
Traceback (most recent call last):
  File "C:\Users\pro\Documents\project\pywilds.py", line 134, in <module>
    build_html(account=None)
  File "C:\Users\pro\Documents\project\pywilds.py", line 33, in build_html
    account_id = account["id"]
TypeError: 'NoneType' object is not subscriptable
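A minimal guard for point 2 could look like this (a sketch; how a missing account should be handled is up to you):

def build_html(account=None, ses_email_address=None, recipient_email=None):
    # Fail fast with a clear message instead of a TypeError deeper in the function.
    if account is None:
        raise ValueError("build_html() needs an account dict with 'id' and 'alias' keys")
    account_id = account["id"]
    account_alias = account["alias"]
    # ... rest of the function unchanged ...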
I hope it helps..
I am using ConfigParser to write some modifications to a configuration file. Basically, what I am doing is:
retrieve my urls from an API
write them to my config file
But after the edit, I noticed that the file format has changed.
Before the edit:
[global_tags]
[agent]
  interval = "10s"
  round_interval = true
  metric_batch_size = 10000
[[inputs.cpu]]
  percpu = true
  totalcpu = true
[[inputs.prometheus]]
  urls= []
  interval = "140s"
  [inputs.prometheus.tags]
    exp = "exp"
After the edit:
[global_tags]
[agent]
interval = "10s"
round_interval = true
metric_batch_size = 10000
[[inputs.cpu]
percpu = true
totalcpu = true
[[inputs.prometheus]
interval = "140s"
response_timeout = "120s"
[inputs.prometheus.tags]
exp = "snmp"
The indentation changed and all the comments that were in the file have been deleted. My code:
edit = configparser.ConfigParser(strict=False, allow_no_value=True, empty_lines_in_values=False)
edit.read("file.conf")
edit.set("[section]", "urls", str(urls))
print(edit)
# Write changes back to file
with open('file.conf', 'w') as configfile:
    edit.write(configfile)
I have already tried SafeConfigParser and RawConfigParser, but it doesn't work.
When I do a print(edit.sections()), here is what I get: ['global_tags', 'agent', '[inputs.cpu', '[inputs.prometheus', 'inputs.prometheus.tags']
Can anyone help, please?
Here's an example of a "filter" parser that retains all other formatting but changes the urls line in the agent section if it comes across it:
import io

def filter_config(stream, item_filter):
    """
    Filter a "config" file stream.
    :param stream: Text stream to read from.
    :param item_filter: Filter function; takes a section and a line and returns a filtered line.
    :return: Yields (possibly) filtered lines.
    """
    current_section = None
    for line in stream:
        stripped_line = line.strip()
        if stripped_line.startswith('['):
            current_section = stripped_line.strip('[]')
        elif not stripped_line.startswith("#") and " = " in stripped_line:
            line = item_filter(current_section, line)
        yield line
def urls_filter(section, line):
    if section == "agent" and line.strip().startswith("urls = "):
        start, sep, end = line.partition(" = ")
        return start + sep + "hi there...\n"  # keep the trailing newline
    return line
# Could be a disk file, just using `io.StringIO()` for self-containedness here
config_file = io.StringIO("""
[global_tags]
[agent]
interval = "10s"
round_interval = true
metric_batch_size = 10000
# HELLO! THIS IS A COMMENT!
metric_buffer_limit = 100000
urls = ""
[other]
urls = can't touch this!!!
""")

for line in filter_config(config_file, urls_filter):
    print(line, end="")
The output is
[global_tags]
[agent]
interval = "10s"
round_interval = true
metric_batch_size = 10000
# HELLO! THIS IS A COMMENT!
metric_buffer_limit = 100000
urls = hi there...
[other]
urls = can't touch this!!!
so you can see all comments and (mis-)indentation was preserved.
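If the goal is to update the real file on disk rather than the io.StringIO example, a small wrapper along these lines could work (a sketch; the file name and the replacement value are placeholders):

def set_agent_urls(path, new_value):
    # Rewrite only the "urls" line of the [agent] section, leaving everything else untouched.
    def agent_urls_filter(section, line):
        if section == "agent" and line.strip().startswith("urls"):
            return f"urls = {new_value}\n"
        return line

    with open(path) as f:
        new_lines = list(filter_config(f, agent_urls_filter))
    with open(path, "w") as f:
        f.writelines(new_lines)

set_agent_urls("file.conf", '["http://example:9100/metrics"]')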
The problem is that you're passing brackets with the section name, which is unnecessary:
edit.set("[section]", "urls", str(urls))
See this example from the documentation:
import configparser
config = configparser.RawConfigParser()
# Please note that using RawConfigParser's set functions, you can assign
# non-string values to keys internally, but will receive an error when
# attempting to write to a file or when you get it in non-raw mode. Setting
# values using the mapping protocol or ConfigParser's set() does not allow
# such assignments to take place.
config.add_section('Section1')
config.set('Section1', 'an_int', '15')
config.set('Section1', 'a_bool', 'true')
config.set('Section1', 'a_float', '3.1415')
config.set('Section1', 'baz', 'fun')
config.set('Section1', 'bar', 'Python')
config.set('Section1', 'foo', '%(bar)s is %(baz)s!')
# Writing our configuration file to 'example.cfg'
with open('example.cfg', 'w') as configfile:
    config.write(configfile)
But, anyway, it won't preserve the indentation, nor will it support nested sections. You could try the YAML format, which does allow indentation to separate nested sections, but it won't keep the exact same indentation when saving; then again, do you really need it to be exactly the same? There are various configuration formats out there; you should study them to see which fits your case best.
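As a rough sketch of that YAML suggestion (this assumes the configuration were kept as YAML in the first place; the file name and key layout below are made up for illustration):

import yaml  # PyYAML

with open("config.yaml") as f:
    cfg = yaml.safe_load(f)

# Nested sections are just nested dicts/lists in YAML.
cfg["inputs"]["prometheus"]["urls"] = ["http://example:9100/metrics"]

with open("config.yaml", "w") as f:
    yaml.safe_dump(cfg, f, default_flow_style=False)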
There is some Python code which reads fields from an xls file.
The script works, but there are problems when there are empty fields in the file.
The script does not read the field if the file has an empty field.
My code, for example (here the NORD field is empty):
from msexcel8com import *

def convert(dsIn, dsOut):
    import sys
    sys.setdefaultencoding("utf-8")
    import msexcel8com
    xlsApp = msexcel8com.Application()
    xlsApp.Workbooks.Open(unicode(dsIn["PATH_TO_XLS"]))
    xlsWorkbook = xlsApp.Workbooks.Item(1)
    xlsWorksheet = xlsWorkbook.Worksheets.Item(1)
    xlsWorksheet.Cells.SpecialCells(11, None).Activate()
    rowsCount = xlsApp.ActiveCell.Row
    import msxml2
    dsOut.clear()
    outXML = msxml2.DOMDocument()
    RootNode = outXML.createElement("MSG")
    RootNode.setAttribute("FORMAT", "IMPORT_LN")
    ChildNodes = outXML.appendChild(RootNode)
    i, k, c = 1, 1, 2
    while i < rowsCount:
        i = i + 1
        if k > c:
            k = 0
            dsOut.append()
            dsOut["XML_OUT"] = unicode.encode(outXML.xml, "utf-8")
            outXML = msxml2.DOMDocument()
            RootNode = outXML.createElement("MSG")
            RootNode.setAttribute("FORMAT", "IMPORT_LN")
            ChildNodes = outXML.appendChild(RootNode)
        try:
            TMPNode = outXML.createElement("CLIENT")
            TMPNode.setAttribute("NCODE", xlsWorksheet.Cells.Item(i, 1).Value)
            TMPNode.setAttribute("NORD", xlsWorksheet.Cells.Item(i, 2).Value)
            ChildNodes.appendChild(TMPNode)
            k = k + 1
        except Exception as e:
            print(e)
    dsOut.append()
    dsOut["XML_OUT"] = unicode.encode(outXML.xml, "utf-8")
    try:
        xlsApp.Workbooks.Close()
    except Exception as e:
        print(e)
    try:
        xlsApp.Quit()
    except Exception as e:
        print(e)
How can I make sure that even if there is an empty field, it is returned as null along with the rest of the values?
I couldn't resist the temptation of writing this without all that Excel automation.
Assuming an Excel file that looks something like this called so59715137.xls:
import xlrd  # assuming it's an .xls, not .xlsx
import xml.etree.ElementTree as et

def read_rows(xls_filename, column_labels):
    book = xlrd.open_workbook(xls_filename)
    sh = book.sheet_by_index(0)
    for rx in range(sh.nrows):
        yield dict(zip(column_labels, sh.row_values(rx)))

def convert(xls_filename):
    xml_root = et.Element("MSG", {"FORMAT": "IMPORT_LN"})
    for row in read_rows(xls_filename, ("NCODE", "NORD")):
        print(row)  # for debugging
        if row.get("NCODE") and row.get("NORD"):  # both attributes must be truthy
            et.SubElement(xml_root, "CLIENT", attrib=row)  # just use the dict as attributes
    return et.tostring(xml_root, encoding="unicode")

xml_content = convert("so59715137.xls")
print("-------------------------------")
print(xml_content)
# TODO: write to file
outputs (with debugging output included, so you see it reads but elides the rows that are missing data)
{'NCODE': 'foo', 'NORD': 'faa'}
{'NCODE': 'blep', 'NORD': 'blop'}
{'NCODE': 'missing', 'NORD': ''}
{'NCODE': '', 'NORD': 'other-missing'}
-------------------------------
<MSG FORMAT="IMPORT_LN"><CLIENT NCODE="foo" NORD="faa" /><CLIENT NCODE="blep" NORD="blop" /></MSG>
From there on out, it's easy to read/write your dsIn/dsOut structures.
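For example, a hypothetical adapter that keeps the original convert(dsIn, dsOut) entry point could look like this (a sketch: it assumes dsIn/dsOut behave as in the original script, and it emits everything as a single XML_OUT chunk instead of batching rows):

def convert_ds(dsIn, dsOut):
    # Reuse read_rows()/convert() from above and write one XML document to dsOut.
    dsOut.clear()
    dsOut.append()
    dsOut["XML_OUT"] = convert(dsIn["PATH_TO_XLS"])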
I have a file of constant variables that I need to query and I am not sure how to go about it.
I have a database query which is returning user names and I need to find the matching user name in the file of constant variables.
The file looks like this:
SALES_MANAGER_01 = {"user_name": "BO01", "password": "password", "attend_password": "BO001",
                    "csm_password": "SM001", "employee_num": "BOSM001"}
There is just a bunch of users just like the one above.
My function looks like this:
#attr("user_test")
def test_get_user_for_login(self):
application_code = 'BO'
user_from_view = self.select_user_for_login(application_code=application_code)
users = [d['USER'] for d in user_from_view]
user_with_ent = choice(users)
user_wo_ent = user_with_ent[-4:]
password = ""
global_users = dir(gum)
for item in global_users:
if user_wo_ent not in item.__getattr__("user_name"):
user_with_ent = choice(users)
user_wo_ent = user_with_ent[-4:]
else:
password = item.__getattr__("password")
print(user_wo_ent, password)
global_users = dir(gum) is my file of constants. So I know I am doing something wrong, since I am getting an attribute error, AttributeError: 'str' object has no attribute '__getattr__'; I am just not sure how to go about resolving it.
You should reverse your looping as you want to compare each item to your match condition. Also, you have a dictionary, so use it to do some heavy lifting.
You need to add some imports
import re
from ast import literal_eval
I've changed the dir(gum) bit to be this function.
def get_global_users(filename):
    gusers = {}  # create a global users dict
    p_key = re.compile(ur'\b\w*\b')  # regex to get the first part, e.g. SALES_MANAGER_01
    p_value = re.compile(ur'\{.*\}')  # regex to grab everything in {}
    with open(filename) as f:  # open the file and work through it
        for line in f:  # for each line
            gum_key = p_key.match(line)  # pull out the key
            gum_value = p_value.search(line)  # pull out the value
            ''' Here is the real action: update the dictionary
            with the match of gum_key and the match of gum_value. '''
            gusers[gum_key.group()] = literal_eval(gum_value.group())
    return gusers  # return the dictionary
The bottom of your existing code is replaced with this.
global_users = get_global_users(gum)  # assign return to global_users
for key, value in global_users.iteritems():  # walk through all key, value pairs
    if value['user_name'] != user_wo_ent:
        user_with_ent = choice(users)
        user_wo_ent = user_with_ent[-4:]
    else:
        password = value['password']
So a very simple answer was to get the dir() of the constants file and then parse over it like so:
global_users = dir(gum)
for item in global_users:
    o = gum.__dict__[item]
    if type(o) is not dict:
        continue
    if gum.__dict__[item].get("user_name") == user_wo_ent:
        print(user_wo_ent, o.get("password"))
    else:
        print("User was not in global_user_mappings")
I was able to find the answer by doing the following:
def get_user_for_login(application_code='BO'):
    user_from_view = BaseServiceTest().select_user_for_login(application_code=application_code)
    users = [d['USER'] for d in user_from_view]
    user_with_ent = choice(users)
    user_wo_ent = user_with_ent[4:]
    global_users = dir(gum)
    user_dict = {'user_name': '', 'password': ''}
    for item in global_users:
        o = gum.__dict__[item]
        if type(o) is not dict:
            continue
        if user_wo_ent == o.get("user_name"):
            user_dict['user_name'] = user_wo_ent
            user_dict['password'] = o.get("password")
    return user_dict
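Called like this, it hands back the matching pair (a small usage sketch; gum and BaseServiceTest come from the surrounding test code):

user = get_user_for_login(application_code='BO')
print(user['user_name'], user['password'])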
I have a config file:
[local]
variable1 : val1 ;#comment1
variable2 : val2 ;#comment2
Code like this reads only the value of the key:
import ConfigParser

class Config(object):
    def __init__(self):
        self.config = ConfigParser.ConfigParser()
        self.config.read('config.py')

    def get_path(self):
        return self.config.get('local', 'variable1')

if __name__ == '__main__':
    c = Config()
    print c.get_path()
But I also want to read the comment present along with the value; any suggestions in this regard would be very helpful.
Alas, this is not easily done in the general case. Comments are supposed to be ignored by the parser.
In your specific case it is easy, because # only serves as a comment character if it begins a line. So variable1's value will be "val1 #comment1". I suppose you use something like this, only less brittle:
val1_line = c.get('local', 'var1')
val1, comment = val1_line.split(' #')
If the value of a 'comment' is needed, it is probably not a proper comment. Consider adding explicit keys for the 'comments', like this:
[local]
var1: 108.5j
var1_comment: remember, the flux capacitor capacitance is imaginary!
Your only solution is to write another ConfigParser subclass, overriding the _read() method. In your ConfigParser you should delete all the checks for comment removal. This is a dangerous solution, but it should work.
class ValuesWithCommentsConfigParser(ConfigParser.ConfigParser):
    def _read(self, fp, fpname):
        from ConfigParser import DEFAULTSECT, MissingSectionHeaderError, ParsingError

        cursect = None    # None, or a dictionary
        optname = None
        lineno = 0
        e = None          # None, or an exception
        while True:
            line = fp.readline()
            if not line:
                break
            lineno = lineno + 1
            # comment or blank line?
            if line.strip() == '' or line[0] in '#;':
                continue
            if line.split(None, 1)[0].lower() == 'rem' and line[0] in "rR":
                # no leading whitespace
                continue
            # continuation line?
            if line[0].isspace() and cursect is not None and optname:
                value = line.strip()
                if value:
                    cursect[optname].append(value)
            # a section header or option header?
            else:
                # is it a section header?
                mo = self.SECTCRE.match(line)
                if mo:
                    sectname = mo.group('header')
                    if sectname in self._sections:
                        cursect = self._sections[sectname]
                    elif sectname == DEFAULTSECT:
                        cursect = self._defaults
                    else:
                        cursect = self._dict()
                        cursect['__name__'] = sectname
                        self._sections[sectname] = cursect
                    # So sections can't start with a continuation line
                    optname = None
                # no section header in the file?
                elif cursect is None:
                    raise MissingSectionHeaderError(fpname, lineno, line)
                # an option line?
                else:
                    mo = self._optcre.match(line)
                    if mo:
                        optname, vi, optval = mo.group('option', 'vi', 'value')
                        optname = self.optionxform(optname.rstrip())
                        # This check is fine because the OPTCRE cannot
                        # match if it would set optval to None
                        if optval is not None:
                            optval = optval.strip()
                            # allow empty values
                            if optval == '""':
                                optval = ''
                            cursect[optname] = [optval]
                        else:
                            # valueless option handling
                            cursect[optname] = optval
                    else:
                        # a non-fatal parsing error occurred. set up the
                        # exception but keep going. the exception will be
                        # raised at the end of the file and will contain a
                        # list of all bogus lines
                        if not e:
                            e = ParsingError(fpname)
                        e.append(lineno, repr(line))
        # if any parsing errors occurred, raise an exception
        if e:
            raise e
        # join the multi-line values collected while reading
        all_sections = [self._defaults]
        all_sections.extend(self._sections.values())
        for options in all_sections:
            for name, val in options.items():
                if isinstance(val, list):
                    options[name] = '\n'.join(val)
In the ValuesWithCommentsConfigParser I fixed some imports and deleted the appropriate sections of code.
Using the same config.ini from my previous answer, I can prove the previous code is correct.
config = ValuesWithCommentsConfigParser()
config.read('config.ini')
assert config.get('local', 'variable1') == 'value1 ; comment1'
assert config.get('local', 'variable2') == 'value2 # comment2'
According to the ConfigParser module documentation:
Configuration files may include comments, prefixed by specific
characters (# and ;). Comments may appear on their own in an otherwise
empty line, or may be entered in lines holding values or section
names. In the latter case, they need to be preceded by a whitespace
character to be recognized as a comment. (For backwards compatibility,
only ; starts an inline comment, while # does not.)
If you want to read the "comment" with the value, you can omit the whitespace before the ; character or use the #. But in this case the strings comment1 and comment2 become part of the value and are not considered comments any more.
A better approach would be to use a different property name, such as variable1_comment, or to define another section in the configuration dedicated to comments:
[local]
variable1 = value1
[comments]
variable1 = comment1
The first solution requires you to generate a new key from an existing one (i.e. compute variable1_comment from variable1); the other allows you to use the same key, targeting different sections of the configuration file.
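Reading the pair back with the dedicated-section layout could then look like this (a sketch, matching the [local]/[comments] example above and the Python 2 ConfigParser used in the rest of the answer):

import ConfigParser

config = ConfigParser.ConfigParser()
config.read('config.ini')

value = config.get('local', 'variable1')       # 'value1'
comment = config.get('comments', 'variable1')  # 'comment1'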
As of Python 2.7.2, it is always possible to read a comment along with the value if you use the # character. As the docs say, this is for backwards compatibility. The following code should run smoothly:
config = ConfigParser.ConfigParser()
config.read('config.ini')
assert config.get('local', 'variable1') == 'value1'
assert config.get('local', 'variable2') == 'value2 # comment2'
for the following config.ini file:
[local]
variable1 = value1 ; comment1
variable2 = value2 # comment2
If you adopt this solution, remember to manually parse the result of get() for values and comments.
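That manual parsing can be as simple as this (a sketch; it assumes the value itself never contains a # character):

raw = config.get('local', 'variable2')   # 'value2 # comment2'
value, _, comment = raw.partition('#')
value, comment = value.strip(), comment.strip()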
According to the manuals:
Lines beginning with '#' or ';' are ignored and may be used to provide comments.
So the value of variable1 is "val1 #comment1"; the comment is part of the value.
You can also check your config to see whether you put a newline (Enter) before your comment.
In case anyone comes along afterwards: my situation was that I needed to read in an .ini file generated by a Pascal application, which didn't care about # or ; starting the keys.
For example, the .ini file would look like this:
[KEYTYPE PATTERNS]
##-######=CLAIM
Python's configparser would skip that key/value pair. I needed to modify the configparser to not treat # as a comment:
config = configparser.ConfigParser(comment_prefixes="")
config.read("thefile")
I'm sure I could set comment_prefixes to whatever Pascal uses for comments, but I didn't see any, so I set it to an empty string.
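With comments disabled like that, the Pascal-style key reads back like any other option (a small sketch reusing the section and key from the example above):

import configparser

config = configparser.ConfigParser(comment_prefixes="")
config.read("thefile")
print(config.get("KEYTYPE PATTERNS", "##-######"))  # -> CLAIM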