execute_script throws no exception but file not created - python

Trying to create a function to streamline some automation.
When the javascript is called with the arguments directly it works perfectly and the file is created.
browser.execute_script("HAR.clear({token: \"abcd\"})")
browser.find_element_by_link_text("B").click()
browser.execute_script("HAR.triggerExport({token: \"abcd\", fileName: \"name_of_file\"}).then(result => {})")
When I try to pass it as a variable, there are no errors but the HAR file is not created.
Call:
simple_find("B",'\\"name_of_file\\"')
Function:
def simple_find(element, filename):
    browser.execute_script("HAR.clear({token: \"abcd\"})")
    browser.find_element_by_link_text(element).click()
    options = '{token: \\"abcd\\", fileName: '+filename+'}'
    ret = browser.execute_script("HAR.triggerExport(arguments[0]).then(result => {});return arguments[0]", options)
    print ret
I added the return piece to help debug what is being passed and here is the output:
C:>python firefox-Manage.py
{token: \"abcd\", fileName: \"name_of_file\"}
It looks exactly like the earlier call, except that the file is not created. What am I missing?
java version is: 1.8.0_66
selenium version is: 2.48.2
python version is: 2.7.10
thx

The options object you build in Python is malformed: it is a string, not a JavaScript object, and there's no reason to surround the values with \\":
options = '{token: \\"abcd\\", fileName: '+filename+'}'
My guess would be you want to pass a dictionary directly to selenium instead of a string:
options = {'token': "abcd", 'fileName': filename}
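Selenium JSON-serializes arguments to execute_script, so a Python dict arrives in the page as a real JavaScript object, while the string version arrives as an opaque string. A quick way to see the difference in plain Python, no browser needed (the filename value below is just illustrative):

```python
import json

filename = "name_of_file"

# What the question built: a string that merely looks like an object literal.
options_as_string = '{token: \\"abcd\\", fileName: \\"' + filename + '\\"}'

# What HAR.triggerExport needs: structured data Selenium can serialize.
options_as_dict = {'token': 'abcd', 'fileName': filename}

# The dict survives a JSON round trip as structured data.
print(json.loads(json.dumps(options_as_dict))['fileName'])  # name_of_file
```

With the dict, the call from the question becomes browser.execute_script("HAR.triggerExport(arguments[0]).then(result => {})", options).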

Related

Add Python Script as a file to AWS SSM Document (YAML)

I'm trying to write a script for a SystemsManager Automation document and would like to keep the Python code in a separate file so it's easy to invoke on my local machine. Complex scripts can also be tested using a tool such as unittest.
Example YAML syntax from my SSM Automation:
mainSteps:
  - name: RunTestScript
    action: aws:executeScript
    inputs:
      Runtime: python3.8
      Handler: handler
      InputPayload:
        Action: "Create" # This is just a random payload for now; I'm only testing
      Script: |-
        "${script}" # My script should be injected here :)
Note: Yes, I've written the script directly in YAML and it works fine. But I'd like to achieve something similar to:
locals {
  script = file("${path.module}/automations/test.py")
}

resource "aws_ssm_document" "aws_test_script" {
  name            = "test-script"
  document_format = "YAML"
  document_type   = "Automation"
  content = templatefile("${path.module}/automations/test.yaml", {
    ssm_automation_assume_role = aws_iam_role.ssm_automation.arn
    script                     = local.script
  })
}
My console shows that the file is being read correctly. Terraform plan output:
+ "def handler(event, context):
+ print(event)
+ import boto3
+ iam = boto3.client('iam')
+ response = iam.get_role(
+ RoleName='test-role'
+ )
+ print(response)"
Notice how the indentation is broken? My .py file has the correct indentation.
I suspect one of the Terraform functions or YAML operators I'm using is breaking the indentation, which is critical for a language such as Python.
If I go ahead and Terraform apply I receive:
Error: Error updating SSM document: InvalidDocumentContent: YAML not well-formed. at Line: 30, Column: 1
I tried changing the last line in my YAML to be Script: "${script}" and I can Terraform Plan and Apply fine, but the Python script is on a single line and fails when executing the automation in AWS.
I've also tried using indent(4, local.script) without success.
Keen to hear/see what ideas and solutions you may have.
Thanks
I noticed my plan output had quotes around the code, so I tried a Python multiline string (""") and it continued to fail; I had assumed SSM was smart enough to strip quotes it doesn't want.
Anyway, the mistake was adding quotes around the template variable:
# Mistake
Script: |-
  "${script}" # My script should be injected here :)
Solution
Script: |-
  ${script}
After that, now that it worked, I tried removing the indent() function I was using and got the "YAML not well-formed" error back.
So using:
locals {
  script = indent(8, file("${path.module}/automations/test.py"))
}

resource "aws_ssm_document" "test_script" {
  name            = "test-script"
  document_format = "YAML"
  document_type   = "Automation"
  content = templatefile("${path.module}/automations/test.yaml", {
    ssm_automation_assume_role = aws_iam_role.ssm_automation.arn
    script                     = local.script
  })
}
Works great with:
mainSteps:
  - name: RunDMSRolesScript
    action: aws:executeScript
    inputs:
      Runtime: python3.8
      Handler: handler
      InputPayload:
        Action: "create"
      Script: |-
        ${script}
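The effect of Terraform's indent(8, ...) can be mimicked with Python's textwrap.indent, which shows why the block scalar stays well-formed: every script line must sit to the right of the Script: |- key. (A sketch; the handler snippet is just the example script, and note that textwrap.indent prefixes every line while Terraform's indent() skips the first one.)

```python
import textwrap

script = "def handler(event, context):\n    print(event)"

# Push the whole script 8 spaces right so it nests under "Script: |-" in YAML.
indented = textwrap.indent(script, " " * 8)
yaml_step = "      Script: |-\n" + indented
print(yaml_step)
```

Without the extra indentation, the second script line would sit at column 0 and break the YAML document, which is exactly the "YAML not well-formed" error above.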
If it helps anyone, this is what my SSM script looks like in the AWS UI when it runs without errors: formatted correctly. AWS seems to wrap it in quotes itself, but when I provided quotes in my YAML template they turned into \", which would have broken the script since it became a string literal.
"def handler(event, context):
    print(event)
    import boto3
    iam = boto3.client('iam')
    response = iam.get_role(
        RoleName='test-role'
    )
    print(response)"
That's one way to lose 3 hours!

Python error: Nonetype object not subscriptable, when loading JSON into variables

I have a program where I am reading in a JSON file and executing some SQL based on parameters specified in the file. The load_json_file() method loads the JSON file into a Python object first (not seen here, but it works correctly).
The issue is with the piece of the code here:
class TestAutomation:
    def __init__(self):
        self.load_json_file()

    # connect to Teradata and load session to be used for execution
    def connection(self):
        con = self.load_json_file()
        cfg_dsn = con['config']['dsn']
        cfg_usr = con['config']['username']
        cfg_pwd = con['config']['password']
        udaExec = teradata.UdaExec(appName="DataAnalysis", version="1.0", logConsole=False)
        session = udaExec.connect(method="odbc", dsn=cfg_dsn, username=cfg_usr, password=cfg_pwd)
        return session
The __init__ method first loads the JSON file, and then I store that in con. I am getting an error, though, that reads:
cfg_dsn = con['config']['dsn']
E TypeError: 'NoneType' object is not subscriptable
The JSON file looks like this:
{
    "config": {
        "src": "C:/Dev\\path",
        "dsn": "XYZ",
        "sheet_name": "test",
        "out_file_prefix": "C:/Dev\\test\\OutputFile_",
        "password": "pw123",
        "username": "user123",
        "start_table": "11",
        "end_table": "26",
        "skip_table": "1,13,17",
        "spot_check_table": "77"
    }
}
the load_json_file() is defined like this:
def load_json_file(self):
    if os.path.isfile(os.path.dirname(os.path.realpath(sys.argv[0])) + '\dwconfig.json'):
        with open('dwconfig.json') as json_data_file:
            cfg_data = json.load(json_data_file)
        return cfg_data
Any ideas why I am seeing the error?
The problem is that you check whether the configuration file exists and only then read it; if it doesn't exist, your function returns None. The check itself is wrong in several ways: os.path.realpath(sys.argv[0]) can return an incorrect value, for instance if the command is run with just its base name and found through the system path ($0 returns the full path in bash, but not in Python or C). That's not how you get the directory of the current script. (Also, afterwards you do with open('dwconfig.json') as json_data_file:, which is now just the file name without the full path, wrong again.)
I would skip this test but compute the config file path properly. And if the file doesn't exist, let the program crash instead of returning None that will crash later.
def load_json_file(self):
    with open(os.path.join(os.path.dirname(__file__), 'dwconfig.json')) as json_data_file:
        cfg_data = json.load(json_data_file)
    return cfg_data
So... in cfg_dsn = con['config']['dsn'], something is set to None.
You could be safe and write it like:
(con or {}).get('config', {}).get('dsn')
or make your data correct.
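A runnable sketch of that defensive lookup: if any level is missing or None, you get None back instead of a TypeError (the sample values mirror the question's config).

```python
# What load_json_file() returns when the isfile() check fails:
con = None
dsn = (con or {}).get('config', {}).get('dsn')
print(dsn)  # None, instead of "'NoneType' object is not subscriptable"

# With real data, the same expression returns the value:
con = {'config': {'dsn': 'XYZ'}}
dsn = (con or {}).get('config', {}).get('dsn')
print(dsn)  # XYZ
```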

pyYAML, expected NodeEvent, but got DocumentEndEvent

I'm trying to dump a custom object that is a kind of list of objects. So I overrode the to_yaml method of the YAMLObject class, from which my class inherits:
@classmethod
def to_yaml(cls, dumper, data):
    """ This method defines how to save this class to a yml file """
    passage_list = []
    for passage in data:
        passage_dict = {
            'satellite': passage.satellite.name,
            'ground_station': passage.ground_station.name,
            'aos': passage.aos,
            'los': passage.los,
            'tca': passage.tca,
        }
        passage_list.append(passage_dict)
    passage_list_dict = {
        'passages': passage_list
    }
    return dumper.represent(passage_list_dict)
When I call the yaml.dump method, the output file is created correctly with the correct data:
if save_to_file:
    with open(save_to_file, 'w') as f:
        yaml.dump(all_passages, f, default_flow_style=False)
but at the end of the execution I get an EmitterError: expected NodeEvent, but got DocumentEndEvent()
I believe it's related to the YAML document not being closed correctly, because when I was debugging my code I was getting save_to_file files that were missing the newline at the end of the document. Could it be? Or is it something else?
Your code does not work because dumper.represent doesn't return anything. You want to use dumper.represent_data instead.
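A minimal sketch of the fix (the Passage class and its fields here are simplified stand-ins for the question's objects): to_yaml must return the node built by represent_data, whereas dumper.represent() emits a whole document itself and returns None, which is what leaves the emitter without a node and raises the error.

```python
import yaml

class Passage(yaml.YAMLObject):
    yaml_tag = '!Passage'

    def __init__(self, satellite, aos):
        self.satellite = satellite
        self.aos = aos

    @classmethod
    def to_yaml(cls, dumper, data):
        # represent_data returns a representation *node* for the dumper to
        # emit; represent() would emit a document and return None.
        return dumper.represent_data({'satellite': data.satellite,
                                      'aos': data.aos})

print(yaml.dump(Passage('sat-1', 'aos-value'), default_flow_style=False))
```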

passing a python object into a casperjs script iterating over the object and returning a result object to python

I'm new to programming in languages more suited to the web, but I have programmed in VBA for Excel.
What I would like to do is:
pass a list (in python) to a casper.js script.
Inside the casperjs script I would like to iterate over the python object (a list of search terms)
In the casper script I would like to query google for search terms
Once queried I would like to store the results of these queries in an array, which I concatenate together while iterating over the python object.
Then once I have searched for all the search-terms and found results I would like to return the RESULTS array to python, so I can further manipulate the data.
QUESTION --> I'm not sure how to write the python function to pass an object to casper.
QUESTION --> I'm also not sure how to write the casper function to pass an javascript object back to python.
Here is my python code.
import os
import subprocess

scriptType = 'casperScript.js'

APP_ROOT = os.path.dirname(os.path.realpath(__file__))
PHANTOM = r'\casperjs\bin\casperjs'  # raw string, otherwise \b is an escape sequence
SCRIPT = os.path.join(APP_ROOT, 'test.js')

params = [PHANTOM, SCRIPT]
subprocess.check_output(params)
JS code:
var casper = require('casper').create();

casper.start('http://google.com/', function() {
    this.echo(this.getTitle());
});

casper.run();
Could you use JSON to send the data to the script and then decode it when you get it back?
Python:
json_str = json.dumps(stuff)  # Turn object into a string to pass to JS
(Note: don't name the variable json, or it will shadow the json module.)
Load a json file into python:
with open(location + '/module_status.json') as data_file:
    data = json.load(data_file)
Deserialize a json string to an object in python
Javascript:
arr = JSON.parse(json); // Turn a JSON string into a JS array
json = JSON.stringify(arr); // Turn an array into a JSON string ready to send to python
You can use two temporary files, one for input and the other for output of the casperjs script. woverton's answer is ok, but lacks a little detail. It is better to explicitly dump your JSON into a file than trying to parse the console messages from casperjs as they can be interleaved with debug strings and such.
In python:
import tempfile
import json
import os
import subprocess

APP_ROOT = os.path.dirname(os.path.realpath(__file__))
PHANTOM = r'\casperjs\bin\casperjs'
SCRIPT = os.path.join(APP_ROOT, 'test.js')

input = tempfile.NamedTemporaryFile(mode="w", delete=False)
output = tempfile.NamedTemporaryFile(mode="r", delete=False)

yourObj = {"someKey": "someData"}
yourJSON = json.dumps(yourObj)
input.file.write(yourJSON)

# you need to close the temporary input and output files
# because casperjs does operations on them
input.file.close()
output.file.close()

print "yourJSON", yourJSON

# pass only the file names
params = [PHANTOM, SCRIPT, input.name, output.name]
subprocess.check_output(params)

# you need to open the temporary output file again
output = open(output.name, "r")
yourReturnedJSON = json.load(output)
print "returned", yourReturnedJSON
output.close()
Because the temporary files are created with delete=False, they are not removed automatically; delete them with os.unlink(input.name) and os.unlink(output.name) when you are done.
In casperjs:
var casper = require('casper').create();
var fs = require("fs");
var input = casper.cli.raw.get(0);
var output = casper.cli.raw.get(1);
input = JSON.parse(fs.read(input));
input.casper = "done"; // add another property
input = JSON.stringify(input);
fs.write(output, input, "w"); // input written to output
casper.exit();
The casperjs script isn't doing anything useful. It just writes the inputfile to the output file with an added property.
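The same temp-file round trip can be exercised without casperjs by substituting a tiny child process; the Python one-liner below is a hypothetical stand-in for the casperjs script (read JSON from the first file, add a property, write it to the second file):

```python
import json
import os
import subprocess
import sys
import tempfile

# Stand-in for the casperjs script.
child = ("import json,sys;"
         "d=json.load(open(sys.argv[1]));"
         "d['casper']='done';"
         "json.dump(d, open(sys.argv[2], 'w'))")

inp = tempfile.NamedTemporaryFile(mode="w", delete=False)
json.dump({"someKey": "someData"}, inp)
inp.close()
out = tempfile.NamedTemporaryFile(mode="r", delete=False)
out.close()

# Pass only the file names, as in the answer above.
subprocess.check_call([sys.executable, "-c", child, inp.name, out.name])

with open(out.name) as f:
    result = json.load(f)
print(result["casper"])  # done

# delete=False means we clean up ourselves.
os.unlink(inp.name)
os.unlink(out.name)
```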

Properties file in python (similar to Java Properties)

Given the following format (.properties or .ini):
propertyName1=propertyValue1
propertyName2=propertyValue2
...
propertyNameN=propertyValueN
For Java there is the Properties class that offers functionality to parse / interact with the above format.
Is there something similar in python's standard library (2.x) ?
If not, what other alternatives do I have ?
I was able to get this to work with ConfigParser; no one showed any examples of how to do this, so here is a simple Python reader of a property file and an example of the property file. Note that the extension is still .properties, but I had to add a section header similar to what you see in .ini files, a bit of a bastardization, but it works.
The python file: PythonPropertyReader.py
#!/usr/bin/python
import ConfigParser
config = ConfigParser.RawConfigParser()
config.read('ConfigFile.properties')
print config.get('DatabaseSection', 'database.dbname')
The property file: ConfigFile.properties
[DatabaseSection]
database.dbname=unitTest
database.user=root
database.password=
For more functionality, read: https://docs.python.org/2/library/configparser.html
For .ini files there is the configparser module that provides a format compatible with .ini files.
However, there's nothing available for parsing complete .properties files; when I have to do that, I simply use Jython (I'm talking about scripting).
I know that this is a very old question, but I need it just now and I decided to implement my own solution, a pure python solution, that covers most uses cases (not all):
def load_properties(filepath, sep='=', comment_char='#'):
    """
    Read the file passed as parameter as a properties file.
    """
    props = {}
    with open(filepath, "rt") as f:
        for line in f:
            l = line.strip()
            if l and not l.startswith(comment_char):
                key_value = l.split(sep)
                key = key_value[0].strip()
                value = sep.join(key_value[1:]).strip().strip('"')
                props[key] = value
    return props
You can change the sep to ':' to parse files with format:
key : value
The code parses correctly lines like:
url = "http://my-host.com"
name = Paul = Pablo
# This comment line will be ignored
You'll get a dict with:
{"url": "http://my-host.com", "name": "Paul = Pablo" }
A java properties file is often valid python code as well. You could rename your myconfig.properties file to myconfig.py. Then just import your file, like this
import myconfig
and access the properties directly
print myconfig.propertyName1
if you don't have multi line properties and a very simple need, a few lines of code can solve it for you:
File t.properties:
a=b
c=d
e=f
Python code:
with open("t.properties") as f:
    l = [line.split("=") for line in f.readlines()]
    d = {key.strip(): value.strip() for key, value in l}
If you have an option of file formats I suggest using .ini and Python's ConfigParser as mentioned. If you need compatibility with Java .properties files I have written a library for it called jprops. We were using pyjavaproperties, but after encountering various limitations I ended up implementing my own. It has full support for the .properties format, including unicode support and better support for escape sequences. Jprops can also parse any file-like object while pyjavaproperties only works with real files on disk.
This is not exactly properties but Python does have a nice library for parsing configuration files. Also see this recipe: A python replacement for java.util.Properties.
I have used this; this library is very useful:
from pyjavaproperties import Properties
p = Properties()
p.load(open('test.properties'))
p.list()
print(p)
print(p.items())
print(p['name3'])
p['name3'] = 'changed = value'
Here is link to my project: https://sourceforge.net/projects/pyproperties/. It is a library with methods for working with *.properties files for Python 3.x.
But it is not based on java.util.Properties
This is a one-to-one replacement of java.util.Properties.
From the doc:
def __parse(self, lines):
    """ Parse a list of lines and create
    an internal property dictionary """

    # Every line in the file must consist of either a comment
    # or a key-value pair. A key-value pair is a line consisting
    # of a key which is a combination of non-white space characters
    # The separator character between key-value pairs is a '=',
    # ':' or a whitespace character not including the newline.
    # If the '=' or ':' characters are found, in the line, even
    # keys containing whitespace chars are allowed.
    # A line with only a key according to the rules above is also
    # fine. In such case, the value is considered as the empty string.
    # In order to include characters '=' or ':' in a key or value,
    # they have to be properly escaped using the backslash character.
    # Some examples of valid key-value pairs:
    #
    # key     value
    # key=value
    # key:value
    # key     value1,value2,value3
    # key     value1,value2,value3 \
    #         value4, value5
    # key
    # This key= this value
    # key = value1 value2 value3
    # Any line that starts with a '#' is considered a comment
    # and skipped. Also any trailing or preceding whitespaces
    # are removed from the key/value.
    # This is a line parser. It parses the contents line by line.
You can use a file-like object in ConfigParser.RawConfigParser.readfp defined here -> https://docs.python.org/2/library/configparser.html#ConfigParser.RawConfigParser.readfp
Define a class that overrides readline that adds a section name before the actual contents of your properties file.
I've packaged it into the class that returns a dict of all the properties defined.
import ConfigParser

class PropertiesReader(object):
    def __init__(self, properties_file_name):
        self.name = properties_file_name
        self.main_section = 'main'
        # Add dummy section on top
        self.lines = [ '[%s]\n' % self.main_section ]
        with open(properties_file_name) as f:
            self.lines.extend(f.readlines())
        # This makes sure that the iterator in readfp stops
        self.lines.append('')

    def readline(self):
        return self.lines.pop(0)

    def read_properties(self):
        config = ConfigParser.RawConfigParser()
        # Without the next line the property names will be lowercased
        config.optionxform = str
        config.readfp(self)
        return dict(config.items(self.main_section))

if __name__ == '__main__':
    print PropertiesReader('/path/to/file.properties').read_properties()
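On Python 3 the same trick needs no file-like wrapper: the module is named configparser, readfp is gone, and read_string can take the dummy section plus the file contents directly. A sketch with an inline properties string (the database keys are just sample data):

```python
import configparser

props_text = """\
database.dbname=unitTest
database.user=root
"""

config = configparser.RawConfigParser()
config.optionxform = str  # keep property-name case
config.read_string('[main]\n' + props_text)
print(dict(config.items('main')))  # {'database.dbname': 'unitTest', 'database.user': 'root'}
```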
If you need to read all values from a section in properties file in a simple manner:
Your config.properties file layout :
[SECTION_NAME]
key1 = value1
key2 = value2
Your code:
import configparser
config = configparser.RawConfigParser()
config.read('path_to_config.properties file')
details_dict = dict(config.items('SECTION_NAME'))
This will give you a dictionary where keys are same as in config file and their corresponding values.
details_dict is :
{'key1':'value1', 'key2':'value2'}
Now to get key1's value :
details_dict['key1']
Putting it all in a method which reads that section from the config file only once (the first time the method is called during a program run):
def get_config_dict():
    if not hasattr(get_config_dict, 'config_dict'):
        get_config_dict.config_dict = dict(config.items('SECTION_NAME'))
    return get_config_dict.config_dict
Now call the above function and get the required key's value :
config_details = get_config_dict()
key_1_value = config_details['key1']
-------------------------------------------------------------
Extending the approach mentioned above, reading section by section automatically and then accessing by section name followed by key name.
def get_config_section():
    if not hasattr(get_config_section, 'section_dict'):
        get_config_section.section_dict = dict()
        for section in config.sections():
            get_config_section.section_dict[section] = dict(config.items(section))
    return get_config_section.section_dict
To access:
config_dict = get_config_section()
port = config_dict['DB']['port']
(here 'DB' is a section name in config file
and 'port' is a key under section 'DB'.)
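The function-attribute memoization above can also be expressed with functools.lru_cache. A self-contained sketch (the [DB] section and its port value are invented for the demo, and the config is read from a string rather than a file):

```python
import configparser
import functools

config = configparser.RawConfigParser()
config.read_string("[DB]\nport = 5432\nhost = localhost\n")

@functools.lru_cache(maxsize=None)
def get_config_section():
    # The body runs only on the first call; later calls return the cached dict.
    return {section: dict(config.items(section)) for section in config.sections()}

print(get_config_section()['DB']['port'])  # 5432
```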
Create a dictionary in your Python module, store everything in it, and access it; for example:
props = {
    'portalPath': 'www.xyx.com',
    'elementID': 'submit',
}
Now to access it you can simply do:
submitButton = driver.find_element_by_id(props['elementID'])
(Don't name the dictionary dict, as that shadows the built-in.)
My Java ini files didn't have section headers and I wanted a dict as a result, so I simply injected an "[ini]" section and let the default config library do its job.
Take the version.ini file of the Eclipse IDE .metadata directory as an example:
#Mon Dec 20 07:35:29 CET 2021
org.eclipse.core.runtime=2
org.eclipse.platform=4.19.0.v20210303-1800
# 'injected' ini section
[ini]
#Mon Dec 20 07:35:29 CET 2021
org.eclipse.core.runtime=2
org.eclipse.platform=4.19.0.v20210303-1800
The result is converted to a dict:
from configparser import ConfigParser

@staticmethod
def readPropertyFile(path):
    # https://stackoverflow.com/questions/3595363/properties-file-in-python-similar-to-java-properties
    config = ConfigParser()
    s_config = open(path, 'r').read()
    s_config = "[ini]\n%s" % s_config
    # https://stackoverflow.com/a/36841741/1497139
    config.read_string(s_config)
    items = config.items('ini')
    itemDict = {}
    for key, value in items:
        itemDict[key] = value
    return itemDict
This is what I'm doing in my project: I just create another .py file called properties.py which includes all the common variables/properties I use in the project, and in any file that needs to refer to these variables I put:
from properties import * (or anything you need)
I used this method to keep SVN at peace when I was changing dev locations frequently and some common variables were quite relative to the local environment. It works fine for me, but I'm not sure this method would be suggested for a formal dev environment etc.
import json

with open('test.json') as f:
    x = json.load(f)
print(x)
Contents of test.json:
{"host": "127.0.0.1", "user": "jms"}
I have created a Python module that is almost similar to the Properties class of Java (actually it is like the PropertyPlaceholderConfigurer in Spring, which lets you use ${variable-reference} to refer to an already defined property).
EDIT: You may install this package by running the command below (currently tested for Python 3).
pip install property
The project is hosted on GitHub
Example : ( Detailed documentation can be found here )
Let's say you have the following properties defined in my_file.properties file
foo = I am awesome
bar = ${chocolate}-bar
chocolate = fudge
Code to load the above properties
from properties.p import Property
prop = Property()
# Simply load it into a dictionary
dic_prop = prop.load_property_files('my_file.properties')
The two lines of code below show how to use a Python list comprehension to load a 'java style' property file.
split_properties = [line.split("=") for line in open('<path_to_property_file>')]
properties = {key: value for key, value in split_properties}
Please have a look at below post for details
https://ilearnonlinesite.wordpress.com/2017/07/24/reading-property-file-in-python-using-comprehension-and-generators/
You can use the parameter fromfile_prefix_chars with argparse to read from a config file, as below:
temp.py
parser = argparse.ArgumentParser(fromfile_prefix_chars='#')
parser.add_argument('--a')
parser.add_argument('--b')
args = parser.parse_args()
print(args.a)
print(args.b)
config file
--a
hello
--b
hello dear
Run command
python temp.py "#config"
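A self-contained version of the above, with the option file written to a temporary path instead of a file literally named config (same '#' prefix char and the same option values as the example):

```python
import argparse
import os
import tempfile

# Write the option file: one token per line, exactly as in the example above.
with tempfile.NamedTemporaryFile('w', suffix='.cfg', delete=False) as f:
    f.write('--a\nhello\n--b\nhello dear\n')

parser = argparse.ArgumentParser(fromfile_prefix_chars='#')
parser.add_argument('--a')
parser.add_argument('--b')
# An argument starting with the prefix char is replaced by the file's contents.
args = parser.parse_args(['#' + f.name])
print(args.a)  # hello
print(args.b)  # hello dear
os.unlink(f.name)
```

On the shell the '#' must be quoted (as in the example's "#config"), since it otherwise starts a comment.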
You could use - https://pypi.org/project/property/
eg - my_file.properties
foo = I am awesome
bar = ${chocolate}-bar
chocolate = fudge
long = a very long property that is described in the property file which takes up \
multiple lines can be defined by the escape character as it is done here
url=example.com/api?auth_token=xyz
user_dir=${HOME}/test
unresolved = ${HOME}/files/${id}/${bar}/
fname_template = /opt/myapp/{arch}/ext/{objid}.dat
Code
from properties.p import Property
## set use_env to evaluate properties from shell / os environment
prop = Property(use_env = True)
dic_prop = prop.load_property_files('my_file.properties')
## Read multiple files
## dic_prop = prop.load_property_files('file1', 'file2')
print(dic_prop)
# Output
# {'foo': 'I am awesome', 'bar': 'fudge-bar', 'chocolate': 'fudge',
# 'long': 'a very long property that is described in the property file which takes up multiple lines
# can be defined by the escape character as it is done here', 'url': 'example.com/api?auth_token=xyz',
# 'user_dir': '/home/user/test',
# 'unresolved': '/home/user/files/${id}/fudge-bar/',
# 'fname_template': '/opt/myapp/{arch}/ext/{objid}.dat'}
I did this using ConfigParser as follows. The code assumes that there is a file called config.prop in the same directory where BaseTest is placed:
config.prop
[CredentialSection]
app.name=MyAppName
BaseTest.py:
import unittest
import ConfigParser

class BaseTest(unittest.TestCase):
    def setUp(self):
        __SECTION = 'CredentialSection'
        config = ConfigParser.ConfigParser()
        config.readfp(open('config.prop'))
        self.__app_name = config.get(__SECTION, 'app.name')

    def test1(self):
        print self.__app_name  # This should print: MyAppName
This is what I had written to parse a properties file and export the values as environment variables; it skips comments and non key-value lines. I added switches to specify:
-h or --help  print usage summary
-c            specify the char that identifies a comment
-s            separator between key and value in the prop file
and the properties file that needs to be parsed, e.g.:
python EnvParamSet.py -c # -s = env.properties
import pipes
import sys, getopt
import os.path

class Parsing:
    def __init__(self, seprator, commentChar, propFile):
        self.seprator = seprator
        self.commentChar = commentChar
        self.propFile = propFile

    def parseProp(self):
        prop = open(self.propFile, 'rU')
        for line in prop:
            if line.startswith(self.commentChar) == False and line.find(self.seprator) != -1:
                keyValue = line.split(self.seprator)
                key = keyValue[0].strip()
                value = keyValue[1].strip()
                print("export %s=%s" % (str(key), pipes.quote(str(value))))

def main(argv):
    seprator = '='
    comment = '#'
    if len(argv) == 0:
        print "Please specify a properties file to be parsed"
        sys.exit()
    propFile = argv[-1]
    try:
        opts, args = getopt.getopt(argv, "hs:c:f:", ["help", "seprator=", "comment=", "file="])
    except getopt.GetoptError, e:
        print str(e)
        print " possible arguments -s <key value separator> -c <comment char> <file> \n Try -h or --help "
        sys.exit(2)
    if os.path.isfile(args[0]) == False:
        print "File doesn't exist"
        sys.exit()
    for opt, arg in opts:
        if opt in ("-h", "--help"):
            print " -h or --help print usage summary \n -c specify char that identifies comment \n -s separator between key and value in prop file \n specify file "
            sys.exit()
        elif opt in ("-s", "--seprator"):
            seprator = arg
        elif opt in ("-c", "--comment"):
            comment = arg
    p = Parsing(seprator, comment, propFile)
    p.parseProp()

if __name__ == "__main__":
    main(sys.argv[1:])
Lightbend has released the Typesafe Config library, which parses properties files and also some JSON-based extensions. Lightbend's library is only for the JVM, but it seems to be widely adopted and there are now ports in many languages, including Python: https://github.com/chimpler/pyhocon
You can use the following function, which is a modified version of @mvallebr's code. It respects properties-file comments, ignores empty new lines, and allows retrieving a single key's value.
def getProperties(propertiesFile="/home/memin/.config/customMemin/conf.properties", key=''):
    """
    Reads a .properties file and returns the key value pairs as a dictionary.
    If a key is specified, then it will return its value alone.
    """
    with open(propertiesFile) as f:
        l = [line.strip().split("=") for line in f.readlines() if not line.startswith('#') and line.strip()]
        d = {key.strip(): value.strip() for key, value in l}

    if key:
        return d[key]
    else:
        return d
This works for me:
from pyjavaproperties import Properties
p = Properties()
p.load(open('test.properties'))
p.list()
print p
print p.items()
print p['name3']
I followed the configparser approach and it worked quite well for me. I created one PropertyReader file and used the config parser there to read the property corresponding to each section.
**Used Python 2.7
Content of PropertyReader.py file:
#!/usr/bin/python
import ConfigParser

class PropertyReader:
    def readProperty(self, strSection, strKey):
        config = ConfigParser.RawConfigParser()
        config.read('ConfigFile.properties')
        strValue = config.get(strSection, strKey)
        print "Value captured for "+strKey+" :"+strValue
        return strValue
Content of read schema file:
from PropertyReader import *

class ReadSchema:
    print PropertyReader().readProperty('source1_section','source_name1')
    print PropertyReader().readProperty('source2_section','sn2_sc1_tb')
Content of .properties file:
[source1_section]
source_name1:module1
sn1_schema:schema1,schema2,schema3
sn1_sc1_tb:employee,department,location
sn1_sc2_tb:student,college,country
[source2_section]
source_name1:module2
sn2_schema:schema4,schema5,schema6
sn2_sc1_tb:employee,department,location
sn2_sc2_tb:student,college,country
You can try the python-dotenv library. This library reads key-value pairs from a .env file (so not exactly a .properties file) and can set them as environment variables.
Here's a sample usage from the official documentation:
from dotenv import load_dotenv
load_dotenv() # take environment variables from .env.
# Code of your application, which uses environment variables (e.g. from `os.environ` or
# `os.getenv`) as if they came from the actual environment.
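Under the hood the idea is simple; here is a minimal stdlib sketch of the KEY=VALUE loading (the DEMO_API_TOKEN key is invented for the demo, and the real library additionally handles quoting, "export" prefixes, and variable interpolation):

```python
import os

def load_env_lines(lines):
    """Set KEY=VALUE pairs as environment variables, skipping comments."""
    for line in lines:
        line = line.strip()
        if not line or line.startswith('#') or '=' not in line:
            continue
        key, _, value = line.partition('=')
        # setdefault mirrors load_dotenv's default of not overriding existing vars
        os.environ.setdefault(key.strip(), value.strip())

load_env_lines(['# comment line', 'DEMO_API_TOKEN=abc123'])
print(os.environ['DEMO_API_TOKEN'])  # abc123
```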
