Accessing pytest assert message in finalizer - python

I am trying to generate a custom report with pytest. In the case of a failure, I want to access the assertion message pytest generates, inside the finalizer of a global autouse fixture in conftest.py. I am able to access the status of the test, but not the error message.
I would like to access the message in a fixture like the following:
@pytest.fixture(scope='function', autouse=True)
def logFunctionLevel(request):
    start = int(time.time() * 1000)

    def fin():
        stop = int(time.time() * 1000)
        fo = open("/Users/mahesh.nayak/Desktop/logs/test1.log", "a")
        fo.write(request.cls.__name__ + "." + request.function.__name__
                 + " " + str(start) + " " + str(stop) + "\n")
        fo.close()

    request.addfinalizer(fin)
Any help accessing the exception message is appreciated. Thanks.
Edit: The answer by Bruno did help. Adding the lines below printed the assertion messages:
l = str(report.longrepr)
fo.write(l)

I'm not sure you can access the exception message from a fixture, but you can implement a custom pytest_runtest_logreport hook (untested):
def pytest_runtest_logreport(report):
    fo = open("/Users/mahesh.nayak/Desktop/logs/test1.log", "a")
    fo.write('%s (duration: %s)\n' % (report.nodeid, report.duration))
    fo.close()
Hope that helps.
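Combining this hook with the OP's edit, a minimal sketch that also records the assertion text via report.longrepr might look like the following (the log path is a placeholder; only failed "call" phases are written, since longrepr is empty for passing reports):

```python
# conftest.py -- sketch: log nodeid, duration and the failure text
LOG_PATH = "test1.log"  # placeholder; point this wherever you collect logs

def pytest_runtest_logreport(report):
    # the hook fires for setup/call/teardown; keep only failed "call" phases,
    # where report.longrepr holds the assertion/traceback text pytest prints
    if report.when == "call" and report.failed:
        with open(LOG_PATH, "a") as fo:
            fo.write("%s (duration: %s)\n%s\n"
                     % (report.nodeid, report.duration, report.longrepr))
```

Because the hook only reads plain attributes, it can be exercised with a stub object carrying nodeid, duration, when, failed, and longrepr.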

To access the assert message from a fixture you can follow the documentation here:
https://docs.pytest.org/en/latest/example/simple.html#making-test-result-information-available-in-fixtures
For your question, that makes it look something like this (untested code):
# content of conftest.py
import pytest

@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_runtest_makereport(item, call):
    # execute all other hooks to obtain the report object
    outcome = yield
    rep = outcome.get_result()
    # set a report attribute for each phase of a call, which can
    # be "setup", "call", "teardown"
    setattr(item, "rep_" + rep.when, rep)

@pytest.fixture
def something(request):
    yield
    # request.node is an "item" because we use the default
    # "function" scope
    error_message = ""
    if request.node.rep_setup.failed:
        print("setting up a test failed!", request.node.nodeid)
        report = request.node.rep_setup
        error_message = report.longreprtext
    else:
        report = request.node.rep_call
        if report.failed:
            print("executing test failed", request.node.nodeid)
            error_message = report.longreprtext
    fo = open("/Users/mahesh.nayak/Desktop/logs/test1.log", "a")
    if error_message:
        fo.write('%s (duration: %s) - ERROR - %s \n'
                 % (report.nodeid, report.duration, error_message))
    else:
        fo.write('%s (duration: %s) - PASSED \n' % (report.nodeid, report.duration))
    fo.close()
The main difference from Bruno Oliveira's answer is that pytest_runtest_logreport is called once for each phase of the test (setup, call, teardown), whereas the fixture runs only once, at the end of the test.


How to apply changes to ontology saved on SQLite Database?

Every time I create a new instance in my ontology, something goes wrong if I try to read from the same database again.
PS: these are all parts of different views in Django.
This is how I am adding instances to my ontology:
# OWLREADY2
try:
    myworld = World(filename='backup.db', exclusive=False)
    kiposcrum = myworld.get_ontology(os.path.dirname(__file__) + '/kipo.owl').load()
except:
    print("Error opening ontology")

# Sync
# --------------------------------------------------------------------------
sync_reasoner()
seed = str(time.time())
id_unico = faz_id(seed)
try:
    with kiposcrum:
        # here I am creating my instance, these are all strings I got from the user
        kiposcrum[input_classe](input_nome + id_unico)
        if input_observacao != "":
            kiposcrum[input_nome + id_unico].Observacao.append(input_observacao)
        sync_reasoner()
    status = "OK!"
    myworld.close()
    myworld.save()
except:
    print("Mistakes were made!")
    status = "Error!"
    input_nome = "Mistakes were made!"
    input_classe = "Mistakes were made!"
finally:
    print(input_nome + " " + id_unico)
    print(input_classe)
    print(status)
This is how I am reading from it:
# OWLREADY2
try:
    myworld = World(filename='backup.db', exclusive=False)
    kiposcrum = myworld.get_ontology(os.path.dirname(__file__) + '/kipo_fialho.owl').load()
except:
    print("Error")

sync_reasoner()
try:
    with kiposcrum:
        num_inst = 0
        # gets a list of properties given an instance informed by the user
        propriedades = kiposcrum[instancia].get_properties()
        num_prop = len(propriedades)
    myworld.close()
I am 100% able to read from my ontology, but If I try to create an instance and then try to read the database again, something goes wrong.
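One thing that stands out in the save path above is that myworld.close() runs before myworld.save(), so the save is attempted against an already-closed SQLite-backed world; swapping the two calls (save first, then close) is worth trying. The general failure mode is easy to reproduce with plain sqlite3 (a stand-in sketch to illustrate the ordering problem, not owlready2 itself):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")
conn.close()

# any operation after close() fails, much like saving a closed World would
try:
    conn.execute("INSERT INTO t VALUES (1)")
    closed_error = False
except sqlite3.ProgrammingError:
    closed_error = True
```

The same write succeeds if it is issued before close(), which is why "save, then close" is the order to try in the owlready2 code.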

Capture assertion message in Pytest

I am trying to capture the return value of a PyTest. I am running these tests programmatically, and I want to return relevant information when the test fails.
I thought I could perhaps return the value of kernel as follows such that I can print that information later when listing failed tests:
def test_eval(test_input, expected):
    kernel = os.system("uname -r")
    assert eval(test_input) == expected, kernel
This doesn't work. When I later loop through the TestReports that are generated, there is no way to access any return information. The only information available in the TestReport is the name of the test and a True/False.
For example one of the test reports looks as follows:
<TestReport 'test_simulation.py::test_host_has_correct_kernel_version[simulation-host]' when='call' outcome='failed'>
Is there a way to return a value after the assert fails, back to the TestReport? I have tried doing this with PyTest plugins but have been unsuccessful.
Here is the code I am using to run the tests programmatically. You can see where I am trying to access the return value.
import pytest
from util import bcolors

class Plugin:
    def __init__(self):
        self.passed_tests = set()
        self.skipped_tests = set()
        self.failed_tests = set()
        self.unknown_tests = set()

    def pytest_runtest_logreport(self, report):
        print(report)
        if report.passed:
            self.passed_tests.add(report)
        elif report.skipped:
            self.skipped_tests.add(report)
        elif report.failed:
            self.failed_tests.add(report)
        else:
            self.unknown_tests.add(report)

if __name__ == "__main__":
    plugin = Plugin()
    pytest.main(["-s", "-p", "no:terminal"], plugins=[plugin])
    for passed in plugin.passed_tests:
        result = passed.nodeid
        print(bcolors.OKGREEN + "[OK]\t" + bcolors.ENDC + result)
    for skipped in plugin.skipped_tests:
        result = skipped.nodeid
        print(bcolors.OKBLUE + "[SKIPPED]\t" + bcolors.ENDC + result)
    for failed in plugin.failed_tests:
        result = failed.nodeid
        print(bcolors.FAIL + "[FAIL]\t" + bcolors.ENDC + result)
    for unknown in plugin.unknown_tests:
        result = unknown.nodeid
        print(bcolors.FAIL + "[FAIL]\t" + bcolors.ENDC + result)
The goal is to be able to print out "extra context information" when printing the FAILED tests, so that there is information immediately available to help debug why the test is failing.
You can extract failure details from the raised AssertionError in the custom pytest_exception_interact hookimpl. Example:
# conftest.py

def pytest_exception_interact(node, call, report):
    # the assertion message should be parsed here,
    # because pytest rewrites assert statements in bytecode
    message = call.excinfo.value.args[0]
    lines = message.split()
    kernel = lines[0]
    report.sections.append((
        'Kernels reported in assert failures:',
        f'{report.nodeid} reported {kernel}'
    ))
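A note on the parsing step: call.excinfo.value.args[0] holds the whole assertion message, including the introspection lines that pytest's assertion rewriting appends, so split()[0] picks out just the kernel string, assuming it contains no whitespace. A small illustration (the message string here is a hand-written example of that shape):

```python
# illustrative shape of args[0] for the failing test below
message = "5.5.15-200.fc31.x86_64\n\nassert 0 == 1\n +0\n -1"
kernel = message.split()[0]  # first whitespace-separated token
```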
Running a test module
import subprocess

def test_bacon():
    assert True

def test_eggs():
    kernel = subprocess.run(
        ["uname", "-r"],
        stdout=subprocess.PIPE,
        text=True
    ).stdout
    assert 0 == 1, kernel
yields:
test_spam.py::test_bacon PASSED                                          [ 50%]
test_spam.py::test_eggs FAILED                                           [100%]

=================================== FAILURES ===================================
__________________________________ test_eggs __________________________________

    def test_eggs():
        kernel = subprocess.run(
            ["uname", "-r"],
            stdout=subprocess.PIPE,
            text=True
        ).stdout
>       assert 0 == 1, kernel
E       AssertionError: 5.5.15-200.fc31.x86_64
E
E       assert 0 == 1
E         +0
E         -1

test_spam.py:12: AssertionError
--------------------- Kernels reported in assert failures: ---------------------
test_spam.py::test_eggs reported 5.5.15-200.fc31.x86_64
=========================== short test summary info ============================
FAILED test_spam.py::test_eggs - AssertionError: 5.5.15-200.fc31.x86_64
========================= 1 failed, 1 passed in 0.05s ==========================

Getting all the test cases inside the test folder using Pyral api

I have used the pyral API to connect to Rally:
rally = Rally(server, apikey=api_key, workspace=workspace, project=project)
Then:
testfolders = rally.get('TestFolder', fetch=True, query=query_criteria)
I need to extract all the test cases defined in Rally as:
TF*****
-TC1
-TC2
-TC3
I need to get all the test cases from the FormattedID returned by testfolders.
for tc in testfolders:
    print(tc.Name)
    print(tc.FormattedID)  # this represents the TF***
    tc_link = tc._ref  # URL for the TCs: https://rally1.rallydev.com/slm/webservice
    query_criteria = 'TestSets = "%s"' % tp_name
    testfolders = rally.get('TestFolder', fetch=True, query=query_criteria)
I have tried different combinations to extract the test cases.
The tc._ref is actually
https://rally1.rallydev.com/slm/webservice/v2.0/TestFolder/XXXXX/TestCases
and when I open it in a browser it gives the entire test case list.
I need to extract that test case list.
I tried accessing the data with urllib in Python, but it reported unauthorized access. I think that within the same session authorization shouldn't be an issue, since the Rally object is already instantiated.
Any help on this will be much appreciated!
Thanks in advance.
rally = Rally(server, apikey=api_key, workspace=workspace, project=project)
tp_name = "specific test case "
query_criteria = 'FormattedID = "%s"' % tp_name
testfolders = rally.get('TestFolder', fetch=True, query=query_criteria)
for tc in testfolders:
    print(tc.Name)
    print(tc.FormattedID)
    query_criteria = 'TestSets = "%s"' % tc._ref
    response = rally.get('TestCase', fetch=True, query=query_criteria)
    print(response)
422 Could not read: could not read all instances of class com.f4tech.slm.domain.TestCase for class TestCase
It's not clear what you are trying to achieve, but take a look at the code below. It will probably solve your issue:
query = 'FormattedID = %s'
test_folder_req = rally.get('TestFolder', fetch=True, projectScopeDown=True,
                            query=query % folder_id)  # folder_id == "TF111"
if test_folder_req.resultCount == 0:
    print('Rally returned nothing for folder: %s' % folder_id)
    sys.exit(1)
test_folder = test_folder_req.next()
test_cases = test_folder.TestCases
print('Start working with %s' % folder_id)
print('Test Folder %s contains %s TestCases' % (folder_id, len(test_cases)))
for test_case in test_cases:
    print(test_case.FormattedID)
    print(test_case.Name)

Python Module snmpSessionBaseClass: Where to download

I am trying to import the snmpSessionBaseClass python module in a script I am running, but I do not have the module installed and I can't seem to find where to download it. Does anyone know the pip or yum command to download and install this module? Thanks!
import netsnmp

sys.path.insert(1, os.path.join(sys.path[0], os.pardir))
from snmpSessionBaseClass import add_common_options, get_common_options, verify_host, get_data
from pynag.Plugins import PluginHelper, ok, critical
from pynag.Plugins import PluginHelper,ok,critical
The following code needs to be added to a file called snmpSessionBaseClass.py, and that file needs to be placed in a directory that is on Python's path.
#!/usr/bin/env python
# Copyright (C) 2016 rsmuc <rsmuc@mailbox.org>
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with health_monitoring_plugins.  If not, see <http://www.gnu.org/licenses/>.

import pynag
import netsnmp
import os
import sys

dev_null = os.open(os.devnull, os.O_WRONLY)
tmp_stdout = os.dup(sys.stdout.fileno())

def dev_null_wrapper(func, *a, **kwargs):
    """
    Temporarily swap stdout with /dev/null, and execute the given function
    while stdout goes to /dev/null.
    This is useful because netsnmp writes to stdout and disturbs the Icinga
    result in some cases.
    """
    os.dup2(dev_null, sys.stdout.fileno())
    return_object = func(*a, **kwargs)
    sys.stdout.flush()
    os.dup2(tmp_stdout, sys.stdout.fileno())
    return return_object

def add_common_options(helper):
    # Define the common command line parameters
    helper.parser.add_option('-H', help="Hostname or ip address", dest="hostname")
    helper.parser.add_option('-C', '--community', dest='community',
                             help='SNMP community of the SNMP service on target host.',
                             default='public')
    helper.parser.add_option('-V', '--snmpversion', dest='version',
                             help='SNMP version. (1 or 2)', default=2, type='int')

def get_common_options(helper):
    # get the common options
    host = helper.options.hostname
    version = helper.options.version
    community = helper.options.community
    return host, version, community

def verify_host(host, helper):
    if host == "" or host is None:
        helper.exit(summary="Hostname must be specified",
                    exit_code=pynag.Plugins.unknown,
                    perfdata='')
    netsnmp_session = dev_null_wrapper(netsnmp.Session,
                                       DestHost=helper.options.hostname,
                                       Community=helper.options.community,
                                       Version=helper.options.version)
    try:
        # Works around lacking error handling in netsnmp package.
        if netsnmp_session.sess_ptr == 0:
            helper.exit(summary="SNMP connection failed",
                        exit_code=pynag.Plugins.unknown,
                        perfdata='')
    except ValueError as error:
        helper.exit(summary=str(error),
                    exit_code=pynag.Plugins.unknown,
                    perfdata='')

# make a snmp get; if it fails (or returns nothing) exit the plugin
def get_data(session, oid, helper, empty_allowed=False):
    var = netsnmp.Varbind(oid)
    varl = netsnmp.VarList(var)
    data = session.get(varl)
    value = data[0]
    if value is None:
        helper.exit(summary="snmpget failed - no data for host "
                            + session.DestHost + " OID: " + oid,
                    exit_code=pynag.Plugins.unknown,
                    perfdata='')
    if not empty_allowed and not value:
        helper.exit(summary="snmpget failed - no data for host "
                            + session.DestHost + " OID: " + oid,
                    exit_code=pynag.Plugins.unknown,
                    perfdata='')
    return value

# make a snmp get, but do not exit the plugin if it returns nothing
# be careful! This function does not exit the plugin if the snmp get fails!
def attempt_get_data(session, oid):
    var = netsnmp.Varbind(oid)
    varl = netsnmp.VarList(var)
    data = session.get(varl)
    value = data[0]
    return value

# make a snmp walk; if it fails (or returns nothing) exit the plugin
def walk_data(session, oid, helper):
    tag = []
    var = netsnmp.Varbind(oid)
    varl = netsnmp.VarList(var)
    data = list(session.walk(varl))
    if len(data) == 0:
        helper.exit(summary="snmpwalk failed - no data for host "
                            + session.DestHost + " OID: " + oid,
                    exit_code=pynag.Plugins.unknown,
                    perfdata='')
    for x in range(0, len(data)):
        tag.append(varl[x].tag)
    return data, tag

# make a snmp walk, but do not exit the plugin if it returns nothing
# be careful! This function does not exit the plugin if the snmp walk fails!
def attempt_walk_data(session, oid):
    tag = []
    var = netsnmp.Varbind(oid)
    varl = netsnmp.VarList(var)
    data = list(session.walk(varl))
    for x in range(0, len(data)):
        tag.append(varl[x].tag)
    return data, tag

def state_summary(value, name, state_list, helper, ok_value='ok', info=None):
    """
    Always add the status to the long output; if the status is not ok (or ok_value),
    show it in the summary and set the status to critical.
    """
    # translate the value (integer) we receive to a human readable value
    # (e.g. ok, critical etc.) with the given state_list
    state_value = state_list[int(value)]
    summary_output = ''
    long_output = ''
    if not info:
        info = ''
    if state_value != ok_value:
        summary_output += ('%s status: %s %s ' % (name, state_value, info))
        helper.status(pynag.Plugins.critical)
    long_output += ('%s status: %s %s\n' % (name, state_value, info))
    return (summary_output, long_output)

def add_output(summary_output, long_output, helper):
    """
    If the summary output is empty we don't add it as summary, otherwise we
    would have empty spaces (e.g.: '. . . . .') in our summary report.
    """
    if summary_output != '':
        helper.add_summary(summary_output)
    helper.add_long_output(long_output)

No handlers could be found for logger "git.remote"

So I am getting the issue using gitpython:
No handlers could be found for logger "git.remote"
My code:
print repo_object.remote()  # origin
print repo_object.active_branch  # master
print repo_object.active_branch.name  # master
refspec = repo_object.active_branch.name + ":refs/for/" + repo_object.active_branch.name
print refspec  # master:refs/for/master
print "Push " + refspec  # Push master:refs/for/master
print remote.push(refspec)  # [<git.remote.PushInfo object at 0x....>]
remote.push(refspec)
pushed_repos.append(repo_name)
print pushed_repos  # my project repo
My goal is to push a submodule newly added (created/updated) files to gerrit.
Edit: Request from #Arount
Ok, so I have a function called
del_proj_gerrit(pushed_repos, commit_message, repo_path, repo_name, push)
The arguments have the following values:
pushed_repos # []
commit_message # just a commit message
repo_name_path # path to my local repo
repo_name # name of the project repo
push # A value of True is set here
Inside my del_proj_gerrit function I have:
add_all_files("./")
if push == True:
    commit_and_push(pushed_repos, commit_message, repo_path, repo_name)
Inside my commit_and_push function I have:
repo_path_new = repo_path + repo_name
repo_object = Repo(repo_path_new)  # <git.Repo "C\localpath\.git\modules\repo-project">
if commit_command_line(commit_message, repo_object, repo_name):
    # inside the if-statement you have the code I posted before
Inside the commit_command_line function I have:
cmd = "git commit -m \"%s\"" % commit_message
errmsg = "Failed command:\n" + cmd
success = True
try:
    execute_shell_process(cmd, errmsg, True, True)
except CommonProcessError as e:
    print "Error during commit for repo '" + repo_name + "' (ret code " + \
        str(e.returncode) + ")."
    print "Assuming that there were no changes since the last commit => continue"
    success = False
return success
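Unrelated to the logging warning, but worth noting: building the commit command by string interpolation breaks as soon as the message itself contains double quotes. If execute_shell_process (the OP's helper; its signature is assumed here) can take an argument list instead of a single string, passing the message as a separate list element sidesteps shell quoting entirely. A minimal sketch:

```python
def build_commit_cmd(commit_message):
    # list form: each element is passed as-is to git, so quotes,
    # spaces and $ characters in the message need no escaping
    return ["git", "commit", "-m", commit_message]

cmd = build_commit_cmd('fix "edge case" in parser')
```

The resulting list can be handed to subprocess.call or similar without shell=True.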
GitPython uses Python's standard logging module, so you just have to configure it. Per the docs, the minimal configuration is:
import logging
logging.basicConfig(level=logging.INFO)
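The warning appears because no handler is attached anywhere on the path from the "git.remote" logger up to the root. basicConfig fixes that by attaching a StreamHandler to the root logger, after which GitPython's messages are emitted normally (a minimal sketch; the log message is illustrative):

```python
import logging

# attaches a StreamHandler to the root logger, which silences
# 'No handlers could be found for logger "git.remote"'
logging.basicConfig(level=logging.INFO)
logging.getLogger("git.remote").info("remote operations will now be logged")
```

If you only want to suppress the warning without seeing GitPython's output, attaching a logging.NullHandler to the "git.remote" logger works as well.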
