dynamically creating output folder based on timestamp - python

I used to use a runner.py with my pytest framework in order to get around the bug with combining markers and string params, e.g.:
-k 'foo' -m 'bar'
I also used the runner to get the test run's starting timestamp and create an output folder output/<timestamp>/ to which I write my logs and the html report, save any screenshots, etc.
runner.py excerpt:
import sys
import time

import pytest

import utils  # project helper module

timestamp = time.strftime('%y%m%d-%H%M%S')

# the following are used by conftest.py
output_path = utils.generate_output_path(timestamp)
folder = utils.create_output_folder(output_path)

def main():
    args = sys.argv[1:]
    args.append('-v')
    args.append('--timestamp=%s' % timestamp)
    args.append('--output_target=%s' % folder)
    args.append('--html=%s/results.html' % folder)
    # 'plgns' (the plugin list) is defined elsewhere in the original runner.py
    pytest.main(args, plugins=plgns)

if __name__ == "__main__":
    main()
I want to lose the runner.py and use straight CLI args and fixtures/hooks, without manually passing in timestamp, output_target, or the html report path, but I have so far been unable to find a way to change that config, for example by modifying config.args.
How can I dynamically set timestamp, output_target, and the html path so that pytest uses them during initialization?

Here's what I did:
In my pytest.ini I added a default command line option for the html report, so that there is a configuration attribute I can modify:
[pytest]
addopts = --html=output/report.html
In my conftest.py, I added this pytest_configure() hook implementation:
import logging
import time
from pathlib import Path

import utils  # project helper module

logger = logging.getLogger(__name__)

def pytest_configure(config):
    """
    Set up the output folder, logfile, and html report file;
    this has to be done right after the command line options
    are parsed, because we need to rewrite the pytest-html path.
    :param config: pytest Config object
    :return: None
    """
    # set the timestamp for the start of this test run
    timestamp = time.strftime('%y%m%d-%H%M%S')

    # create the output folder for this test run
    output_path = utils.generate_output_path(timestamp)
    folder = utils.create_output_folder(output_path)

    # start logging
    filename = '%s/log.txt' % output_path
    logging.basicConfig(filename=filename,
                        level=logging.INFO,
                        format='%(asctime)s %(name)s.py::%(funcName)s() [%(levelname)s] %(message)s')

    initial_report_path = Path(config.option.htmlpath)
    report_path_parts = list(initial_report_path.parts)
    logger.info('Output folder created at "%s".' % folder)
    logger.info('Logger started and writing to "%s".' % filename)

    # insert the timestamp
    output_index = report_path_parts.index('output')
    report_path_parts.insert(output_index + 1, timestamp)

    # deal with doubled slashes
    new_report_path = Path('/'.join(report_path_parts).replace('//', '/'))

    # update the pytest-html path
    config.option.htmlpath = new_report_path
    logger.info('HTML test report will be created at "%s".' % config.option.htmlpath)
This is logged as:
2018-03-03 14:07:39,632 welkin.tests.conftest.py::pytest_configure() [INFO] HTML test report will be created at "output/180303-140739/report.html".
The html report is written to the appropriate output/<timestamp>/ folder. That's the desired result.
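For completeness, here's a minimal sketch of what the two utils helpers could look like; the question doesn't show the originals, so the bodies below are assumptions:

import os

def generate_output_path(timestamp):
    # hypothetical: build 'output/<timestamp>' relative to the project root
    return os.path.join('output', timestamp)

def create_output_folder(output_path):
    # hypothetical: create the folder (and any parents) if missing
    os.makedirs(output_path, exist_ok=True)
    return output_path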

Related

Log4j2 configuration cannot read the filename I have passed through in a python script

So I had to change my Log4j configuration into a Log4j2 configuration recently. Previously, to get the filename I wanted, I passed through the variable of the filename created by the Python script as ${logfilename}. The original Log4j configuration and a section of the Python script are shown below.
My Log4j2 configuration currently looks like the following in terms of setting the filename. However, I have tried using ${sys:logfilename}, ${main:logfilename}, ${env:logfilename} and more, to no avail.
I know the rolling file configuration works, as it is working on another part of the application where the name of the log file is static and does not change. The issue seems to be that the Log4j2 configuration file just isn't getting the name of the file from the Python script.
Original Log4j Configuration:
# Logging level and destination:
log4j.rootLogger=INFO, FILE
# Define the file appender:
log4j.appender.FILE=org.apache.log4j.rolling.RollingFileAppender
# Define the layout for file appender:
log4j.appender.FILE.layout=org.apache.log4j.PatternLayout
log4j.appender.FILE.layout.conversionPattern=%d{ISO8601} %-5p [%t] %c{2} -%X{runId} %m%n
# Appender log roll policy:
log4j.appender.FILE.RollingPolicy=org.apache.log4j.rolling.TimeBasedRollingPolicy
log4j.appender.FILE.RollingPolicy.ActiveFileName=${logfilename}
log4j.appender.FILE.RollingPolicy.FileNamePattern=${logfilename}.%d{yyyyMMdd}.gz
Log4j2 Configuration:
name = PropertiesConfig
# Logging level and destination and define file appender:
rootLogger.level = INFO
property.filename = ${main:\--logfilename}
appenders = FILE, console
# Define the layout for file appender:
appender.console.type = Console
appender.console.name = stdout
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = %d{ISO8601} %-5p [%t] %c{2} -%X{runId} %m%n
# Appender log roll policy:
appender.FILE.type = RollingFile
appender.FILE.name = File
appender.FILE.fileName = ${filename}
appender.FILE.filePattern = ${filename}.%d{yyyy-MM-dd}.gz
appender.FILE.layout.type = PatternLayout
appender.FILE.layout.pattern = %d{yyyy-MM-dd HH:mm:ss} %c{1} [%p] %m%n
appender.FILE.policies.type = Policies
appender.FILE.policies.time.type = TimeBasedTriggeringPolicy
appender.FILE.policies.time.interval = 1
appender.FILE.policies.time.modulate = true
Python Script:
count = 1

# Search for running nodes:
for process in psutil.process_iter(["cmdline"]):
    # Count the number of grid node processes found and increase count each time:
    if grid_node in process.info["cmdline"]:
        count += 1

# Variables:
log_home_directory = "/home/user/logs"
log_file_name = f"node_{count}.log"
log_4j_path = log_home_directory + log_file_name
command = (
    "nohup java"
    + f"-Dlogfilename={log_4j_path}"
)
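No accepted answer is included in this excerpt, but two details in the script stand out: the string concatenation produces "nohup java-Dlogfilename=..." (no space) and "/home/user/logsnode_1.log" (no slash), and a -D system property is read in Log4j2 with the ${sys:...} lookup. A hedged sketch of how the launch side could be written so that ${sys:logfilename} can resolve (the jar name below is hypothetical):

import subprocess

log_home_directory = "/home/user/logs"
log_file_name = "node_1.log"  # hypothetical value for illustration
log_4j_path = f"{log_home_directory}/{log_file_name}"

# Building the command as an argument list avoids the fused-token problem;
# the property then reaches the JVM, and the Log4j2 config can read it with
#   property.filename = ${sys:logfilename}
command = [
    "nohup",
    "java",
    f"-Dlogfilename={log_4j_path}",
    "-jar", "grid-node.jar",  # hypothetical jar
]
subprocess.Popen(command)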

Pytest testing functions with file read (hardcoded file path)

Can't wrap my mind around how to test my function which searches for files.
My current tree:
project
- etls
-- app.py
-- pipeline_1
--- pipeline_1.yaml
-- pipeline_2
--- pipeline_2.yaml
- tests
-- etl
--- unit
---- tests_appfile.py
- conftest.py
In my tests_appfile.py I would like to test a function from app.py.
The function is:
def get_yaml_path(job_name: str) -> str:
    list_of_paths = list(Path("etls").rglob(f"{job_name}.yaml"))
    if len(list_of_paths) > 1:
        raise ValueError(
            f"Number of paths > 1 (actual value: {len(list_of_paths)}). Can't decide which pipeline to run"
        )
    elif len(list_of_paths) == 0:
        raise ValueError(f"There are no YAML files for {job_name}")
    else:
        return str(list_of_paths[0])
So, I run app.py with the param job_name, and the function has to find the specific YAML for that job.
I want to test it, and the main caveat here is that 'etls' is a hardcoded path. My ideas are:
- create a fixture that creates fake folders and YAMLs for the test
- change the working directory to pytest's tmp_path during the tests and create the 'etls' folder and files there
Which approach is more efficient, considering I will need these YAMLs for other tests, and how do I implement it?
Found the answer:
Created the fixture
@pytest.fixture
def test_appfile_yaml_files(tmp_path):
    directory = tmp_path / "etls/test_pipeline"
    directory.mkdir(parents=True)
    file_path = directory / "test_pipeline.yaml"
    file_path.write_text("this is temp yaml file")
    return file_path
And then monkeypatched the working dir:

def test_get_yaml_path(test_appfile_yaml_files, tmp_path, monkeypatch):
    monkeypatch.chdir(tmp_path)
    assert get_yaml_path("test_pipeline") == "etls/test_pipeline/test_pipeline.yaml"
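Since the fixture builds a real tree under tmp_path, it can be reused for the error cases as well. A hedged sketch of one such test (the duplicate-pipeline case below is an assumption, not from the original answer):

import pytest

def test_get_yaml_path_duplicate(test_appfile_yaml_files, tmp_path, monkeypatch):
    monkeypatch.chdir(tmp_path)
    # create a second YAML with the same name in another folder,
    # which should trigger the "Number of paths > 1" ValueError
    other = tmp_path / "etls/other_pipeline"
    other.mkdir(parents=True)
    (other / "test_pipeline.yaml").write_text("duplicate yaml")
    with pytest.raises(ValueError):
        get_yaml_path("test_pipeline")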

Generate pytest-html report per module in a session

I understand that the HTML report is written in the pytest_sessionfinish hookimpl (src), so if I run multiple modules in one session, it creates only one report.
However, I want one report for each module run.
With my conftest.py containing:
@pytest.hookimpl(tryfirst=True)
def pytest_configure(config):
    script_name = os.path.splitext(os.path.basename(lib_programname.get_path_executed_script()))[0]
    if not os.path.exists('reports'):
        os.makedirs('reports')
    config.option.htmlpath = 'reports/' + 'report_' + script_name + ".html"
    config.option.self_contained_html = True
If I run pytest .\test_One.py .\test_Two.py, it generates only one html file, report_test_One.html, containing both modules' results.
However, is there something I could add to my conftest.py so that it creates a report for each module run instead of one per session, i.e. report_test_One.html and report_test_Two.html?
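No answer is included in this excerpt, but since pytest-html writes its report at session finish, one common workaround is to run one session per module. A minimal sketch of a wrapper along those lines (an assumption, not from the thread):

import pathlib

import pytest

def run_per_module(test_dir="."):
    for module in sorted(pathlib.Path(test_dir).glob("test_*.py")):
        report = f"reports/report_{module.stem}.html"
        # each pytest.main() call is its own session, so pytest-html
        # writes one report per module at that session's finish
        pytest.main([str(module), f"--html={report}", "--self-contained-html"])

if __name__ == "__main__":
    run_per_module()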

Testing Click application prompting with multiple inputs

I've written a small command line application using Click and Python, which I am now trying to write tests for (mainly for the purpose of learning how to test Click apps so I can move on to testing the one I'm actually developing).
Here's a function I'm trying to test:
@click.group()
def main():
    pass

@main.command()
@click.option('--folder', '-f', prompt="What do you want to name the folder? (No spaces please)")
def create_folder(folder):
    while True:
        if " " in folder:
            click.echo("Please enter a name with no spaces.")
            folder = click.prompt("What do you want to name the folder?", type=str)
        if folder in os.listdir():
            click.echo("This folder already exists.")
            folder = click.prompt("Please choose a different name for the folder")
        else:
            break
    os.mkdir(folder)
    click.echo("Your folder has been created!")
I'm trying to test this using the built-in testing from Click (http://click.pocoo.org/6/testing/ and http://click.pocoo.org/6/api/#testing for more details) as well as pytest. This works well in the case where I test an acceptable folder name (i.e. one that does not have spaces and that does not already exist). See below:
import clicky  # the module we're testing
import os
import sys

import click
import pytest
from click.testing import CliRunner

runner = CliRunner()
folder = "myfolder"
folder_not = "my folder"
question_create = "What do you want to name the folder? (No spaces please): "
echoed = "\nYour folder has been created!\n"

def test_create_folder():
    with runner.isolated_filesystem():
        result = runner.invoke(clicky.create_folder, input=folder)
        assert folder in os.listdir()
        assert result.output == question_create + folder + echoed
I now want to test this function in the case where I provide a folder name that's not allowed, such as one with spaces, and then, after the prompt telling me I can't have spaces, provide an acceptable folder name. However, I cannot figure out how to make the Click runner accept more than one input value, and that's the only way I can think of to make this work. I'm also open to using unittest mocking, but I'm not sure how to integrate that into the way Click does tests, which, other than this problem, has worked wonderfully so far. Here's my attempt with multiple inputs:
def test_create_folder_not():
    with runner.isolated_filesystem():
        # here I try to provide more than one input
        result = runner.invoke(clicky.create_folder, input=[folder_not, folder])
        assert result.output == (question_create + folder_not
                                 + "\nPlease enter a name with no spaces.\n"
                                 + "What do you want to name the folder?: "
                                 + folder + echoed)
I tried to provide multiple inputs by putting them in a list, like what I've seen done with mocking, but I get this error:
'list' object has no attribute 'encode'
Any thoughts on this would be greatly appreciated!
To provide more than one input to the test runner, you can simply join() the inputs together with a \n like so:
Code:
result = runner.invoke(clicky.create_folder,
                       input='\n'.join([folder_not, folder]))
Result:
============================= test session starts =============================
platform win32 -- Python 3.6.3, pytest-3.3.2, py-1.5.2, pluggy-0.6.0
rootdir: \src\testcode, inifile:
collected 2 items
testit.py .. [100%]
========================== 2 passed in 0.53 seconds ===========================
Alternate implementation using click.ParamType
However, for prompting for the desired folder-name properties, I would suggest using a ParamType. Click offers a ParamType that can be subclassed and then passed to click.option(); the format checking and re-prompting can then be taken care of by Click.
You can subclass a ParamType like:

import os

import click

class NoSpacesFolder(click.types.StringParamType):
    def convert(self, value, param, ctx):
        folder = super(NoSpacesFolder, self).convert(value, param, ctx)
        if ' ' in folder:
            # self.fail() raises itself, so no explicit raise is needed
            self.fail("No spaces allowed in selection '%s'." % value,
                      param, ctx)
        if folder in os.listdir():
            self.fail("This folder already exists.\n"
                      "Please choose another name.", param, ctx)
        return folder
Using ParamType:
To use the custom ParamType, pass it to click.option() like:

@main.command()
@click.option(
    '--folder', '-f', type=NoSpacesFolder(),
    prompt="What do you want to name the folder? (No spaces please)")
def create_folder(folder):
    os.mkdir(folder)
    click.echo("Your folder has been created!")
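With this in place, invalid prompt input makes Click print the failure message and re-prompt automatically, so the multi-input test reduces to joining the attempts. A hedged sketch reusing the runner and name constants from the test module above (the exact error text is not asserted, since it depends on Click's formatting):

def test_create_folder_with_retry():
    with runner.isolated_filesystem():
        # first input has a space and is rejected by NoSpacesFolder;
        # Click re-prompts, and the second input succeeds
        result = runner.invoke(clicky.create_folder,
                               input='\n'.join([folder_not, folder]))
        assert result.exit_code == 0
        assert folder in os.listdir()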

How to implement a single logger in Python for two scripts, one defining the workflow and one containing all functions?

I have two Python scripts, workflow.py and task.py. workflow.py defines the workflow for task.py, and thus it only has a main() in which it instantiates the class from task.py. The structure of workflow.py is as follows:
workflow.py

import os
import sys
from optparse import OptionParser
from optparse import OptionGroup

from task import *

def main():
    dir = os.path.dirname(os.path.abspath(sys.argv[0]))
    try:
        parser = parse_options(OptionParser(), dir)
        (options, args) = parser.parse_args()
        print("everything working fine in workflow")
    except SystemExit:
        return 0
    task_object = task_C(options.source_dir, options.target, options.build_dir)
    print("workflow is reached")
    task_object.another_funct()
    # the following 3 lines apply once set_logger is defined
    log = task_object.set_logger()
    log.info("workflow is reached")
    log.info("more info if required")

def parse_options(parser, dir):
    group = OptionGroup(parser, "Prepare")
    group.add_option("-t", "--target", action="store", dest="target",
                     help="Specifies the target.")
    group.add_option("-s", "--Source", action="store", dest="source_dir",
                     help="Specifies the source dir.")
    group.add_option("-b", "--build", action="store", dest="build_dir",
                     help="Specifies the build dir.")
    parser.add_option_group(group)
    return parser
task.py

import logging

class task_C():
    def __init__(self, source_dir, target, build_dir):
        self.target = self.settargetplatform(target)
        self.source_dir = self.setsourcedir(source_dir)
        self.build_dir = self.setbuilddir(build_dir)

    def settargetplatform(self, target):
        ...  # sets target dir

    def setsourcedir(self, source_dir):
        ...  # sets source_dir

    def setbuilddir(self, build_dir):
        ...  # sets build_dir

    def another_funct(self):
        print("inside the another funct")
        print("some useful info")
        print("...")
        # the following part, after adding set_logger, uses the logger instead
        log = self.set_logger()
        log.info("inside the another funct")
        log.info("some useful info")
        log.info("...")

    def set_logger(self):
        logging.basicConfig()
        l = logging.getLogger('root')
        l.setLevel(logging.DEBUG)
        formatter = logging.Formatter(' %(levelname)s : %(message)s')
        fileHandler = logging.FileHandler(self.build_dir + "/root.log", mode='w')
        fileHandler.setFormatter(formatter)
        l.addHandler(fileHandler)
Now, as shown above, the task.py constructor is invoked inside the workflow, and there are various print statements in both scripts. I would like to have a logger instead of the print statements, and for that purpose I want to place the log file inside the "build_dir" location. But that location is set inside task.py, and I don't want to add another function inside workflow.py to retrieve 'build_dir'. I added the set_logger() function inside task.py, as you can see above, which could serve my purpose, but the log I am getting contains all NULL NULL NULL... and so on. So, how can I have one log containing all print statements from these two scripts, and what improvements do I need to make?
Actually that can be done, but the point is that in that case the log location
has to be defined in workflow.py, and I don't want to define the location
there, as it is already defined in task.py. In workflow.py I don't want to
define the same location for the logger which is already set in task.py.
As per your above comment:
You can call your set_logger() in workflow.py and pass the result to task.py, i.e. have the following lines in workflow.py:

task_object = task_C(options.source_dir, options.target, options.build_dir)
log = task_object.set_logger()

For any call to task methods, pass the logger (the methods must accept it as a param), for example:

task_object.another_funct(log=log)
For logging not working properly - add return l at the end of set_logger() in task.py
I think I would define logger inside workflow.py and pass it as a parameter into the constructor of task_C. That would seem to be the easiest solution.
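A minimal sketch of that second suggestion, reusing the question's names (the exact handler setup is an assumption modeled on the set_logger() shown above):

import logging

# in workflow.py: configure the logger once, right after options are parsed,
# since build_dir is already known there from the command line
def make_logger(build_dir):
    logger = logging.getLogger('root')
    logger.setLevel(logging.DEBUG)
    handler = logging.FileHandler(build_dir + "/root.log", mode='w')
    handler.setFormatter(logging.Formatter(' %(levelname)s : %(message)s'))
    logger.addHandler(handler)
    return logger

# main() would then do:
#     log = make_logger(options.build_dir)
#     task_object = task_C(options.source_dir, options.target,
#                          options.build_dir, logger=log)
# and task_C would store it for use in its methods:
#     def __init__(self, source_dir, target, build_dir, logger):
#         self.log = logger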
