My question is actually a design problem. I am using Python + Selenium for automation, with PyUnit as the unit test framework. I have a sheet in an Excel workbook with two columns, TestCaseID and Run. TestCaseID holds the test case ID and Run holds either Y or N, indicating whether that test case should be run. What I am trying to do is read a particular test case ID from this sheet and check its Run value. If it is Y, the test case is executed by the unit framework; otherwise it is not run.
Here is an excerpt of a test case that I have written:
class Test_ID_94017(unittest.TestCase):
    ex = Excel()

    def setUp(self):
        self.ex.setUpASheetInExcel('Select_Test_Cases_To_Run')
        if self.ex.getTestCaseRunStatusFromExcel("94017") == "Y":
            self.driver = Browser().createBrowserDriver()
            self.driver.maximize_window()
            self.driver.implicitly_wait(15)
            self.ex.setUpASheetInExcel('Login')
        else:
            return

    def test_CCPD_Regression001_PER_Custom_Check(self):
        # do something
The definition of the getTestCaseRunStatusFromExcel(testCaseId) method is:

def getTestCaseRunStatusFromExcel(self, testCaseId):
    # scan the TestCaseID column until the ID is found, then return its Run flag
    i = 1
    while self.workSheet.cell_value(i, 0) != testCaseId:
        i += 1
    return self.workSheet.cell_value(i, 1)
Here are the problems I am facing:
How should I add a condition to my existing code so that the test case executes only when the flag is Y? Should I put an if condition in the setUp method of the test case class, as I have done in the code above?
Is my way of iterating over the rows of the Excel sheet column until I find my test case ID, and then reading its corresponding Run value (Y or N), correct?
Please help!
I implemented a similar design for a Selenium + TestNG suite. After reading the execution flags from Excel, I create the testng.xml dynamically and run that XML; TestNG provides features for creating and executing the XML dynamically. I am not sure whether PyUnit has anything similar, but I hope this gives you some ideas.
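In plain PyUnit/unittest, a comparable effect can be had by skipping at runtime. A minimal sketch, reusing the Excel and Browser helpers from the question (everything else here is an assumption, not a verified solution):

import unittest

class Test_ID_94017(unittest.TestCase):
    ex = Excel()  # helper class from the question

    def setUp(self):
        self.ex.setUpASheetInExcel('Select_Test_Cases_To_Run')
        if self.ex.getTestCaseRunStatusFromExcel("94017") != "Y":
            # Marks the test as skipped in the report instead of silently passing it.
            self.skipTest("Run flag for 94017 is not 'Y' in the Excel sheet")
        self.driver = Browser().createBrowserDriver()
        self.driver.maximize_window()
        self.driver.implicitly_wait(15)
        self.ex.setUpASheetInExcel('Login')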
I have an initial_data command in my project. When I try to create my test database and run this command with "py manage.py initial_data", it does not work, because the command creates data in the main database rather than the test database; actually, I can't even tell which one is the test database.
So I gave up on using this command and looked for another way to solve my problem: I decided to create my initial data manually.
The initial_data command creates the data that my other tables require.
Why do I need to create this data? Because I have more than 500 tests, and these tests depend on that other test data.
So I have these tests:
class BankServiceTest(TestCase):
    def setUp(self):
        self.bank_model = models.Bank
        self.country_model = models.Country

    def test_create_bank_object(self):
        self.bank_model.objects.create(name='bank')
        re1 = self.bank_model.objects.get(name='bank')
        self.assertEqual(re1.name, 'bank')

    def test_create_country_object(self):
        self.country_model.objects.create(name='country', code=100)
        bank = self.bank_model.objects.get(name='bank')
        re1 = self.country_model.objects.get(code=100)
        self.assertEqual(re1.name, 'country')
After running "test_create_bank_object", I want the data created in the first test function to still be there in the second one; in effect this would do the job of the "initial_data" command.
So what is the right approach for this problem?
I tried unittest.TestCase, but it is not working for me: it creates the test data in the main database, which I do not want (or maybe I am just using it incorrectly).
Basically, I need my data to persist while the tests are running.
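For reference, one common Django pattern here (only a sketch, assuming the same models module as in the snippet above) is setUpTestData, which creates shared rows once per test class; every test then sees those rows and is rolled back to that state rather than to an empty database:

from django.test import TestCase
from myapp import models  # assumption: wherever Bank and Country live

class BankServiceTest(TestCase):
    @classmethod
    def setUpTestData(cls):
        # Created once for the whole class, visible to every test method.
        cls.bank = models.Bank.objects.create(name='bank')
        cls.country = models.Country.objects.create(name='country', code=100)

    def test_bank_object_exists(self):
        self.assertEqual(models.Bank.objects.get(name='bank').name, 'bank')

    def test_country_object_sees_bank(self):
        self.assertTrue(models.Bank.objects.filter(name='bank').exists())
        self.assertEqual(models.Country.objects.get(code=100).name, 'country')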
I might be guilty of using pytest in a way I'm not supposed to, but let's say I want to generate and display some short message on successful test completion. Like I'm developing a compression algorithm, and instead of "PASSED" I'd like to see "SQUEEZED BY 65.43%" or something like that. Is this even possible? Where should I start with the customization, or maybe there's a plugin I might use?
I've stumbled upon pytest-custom-report, but it provides only static messages, that are set up before tests run. That's not what I need.
I might be guilty of using pytest in a way I'm not supposed to
Not at all - this is exactly the kind of use case the pytest plugin system is supposed to solve.
To answer your actual question: it's not clear where the percentage value comes from. Assuming it is returned by a function squeeze(), I would first store the percentage in the test, for example using the record_property fixture:
from mylib import squeeze

def test_spam(record_property):
    value = squeeze()
    record_property('x', value)
    ...
To display the stored percentage value, add a custom pytest_report_teststatus hookimpl in a conftest.py in your project or tests root directory:
# conftest.py

def pytest_report_teststatus(report, config):
    if report.when == 'call' and report.passed:
        percentage = dict(report.user_properties).get('x', float("nan"))
        short_outcome = f'{percentage * 100}%'
        long_outcome = f'SQUEEZED BY {percentage * 100}%'
        return report.outcome, short_outcome, long_outcome
Now running test_spam in default output mode yields
test_spam.py 10.0% [100%]
Running in verbose mode yields
test_spam.py::test_spam SQUEEZED BY 10.0% [100%]
Note: Using flask_sqlalchemy here
I'm working on adding versioning to multiple services on the same DB. To make sure it works, I'm adding unit tests that confirm I get an error (for this case my error should be StaleDataError). For other services in other languages, I pulled the same object twice from the DB, updated one instance, saved it, updated the other instance, then tried to save that as well.
However, because SQLAlchemy adds a fake-cache layer between the DB and the service, when I update the first object it automatically updates the other object I hold in memory. Does anyone have a way around this? I created a second session (that solution had worked in other languages) but SQLAlchemy knows not to hold the same object in two sessions.
I was able to manually test it by putting time.sleep() halfway through the test and manually changing data in the DB, but I'd like a way to test this using just the unit code.
Example code:
def test_optimistic_locking(self):
    c = Customer(formal_name='John', id=1)
    db.session.add(c)
    db.session.flush()

    cust = Customer.query.filter_by(id=1).first()
    db.session.expire(cust)

    same_cust = Customer.query.filter_by(id=1).first()
    db.session.expire(same_cust)

    same_cust.formal_name = 'Tim'
    db.session.add(same_cust)
    db.session.flush()
    db.session.expire(same_cust)

    cust.formal_name = 'Jon'
    db.session.add(cust)
    with self.assertRaises(StaleDataError):
        db.session.flush()
    db.session.rollback()
It actually is possible; you need to create two separate sessions. See the unit tests of SQLAlchemy itself for inspiration. Here's a code snippet from one of our unit tests, written with pytest:
def test_article__versioning(connection, db_session: Session):
    article = ProductSheetFactory(title="Old Title", version=1)
    db_session.refresh(article)
    assert article.version == 1

    db_session2 = Session(bind=connection)
    article2 = db_session2.query(ProductSheet).get(article.id)
    assert article2.version == 1

    article.title = "New Title"
    article.version += 1
    db_session.commit()
    assert article.version == 2

    with pytest.raises(sqlalchemy.orm.exc.StaleDataError):
        article2.title = "Yet another title"
        assert article2.version == 1
        article2.version += 1
        db_session2.commit()
Hope that helps. Note that we use "version_id_generator": False in the model, that's why we increment the version ourselves. See the docs for details.
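For context, the corresponding mapping looks roughly like this. This is only a sketch: the table and column names are assumptions, while version_id_col and "version_id_generator": False are the documented SQLAlchemy mapper options referred to above:

from sqlalchemy import Column, Integer, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class ProductSheet(Base):
    __tablename__ = 'product_sheet'

    id = Column(Integer, primary_key=True)
    title = Column(String)
    version = Column(Integer, nullable=False)

    # Tell SQLAlchemy which column carries the version, but bump it manually
    # in application code instead of letting the ORM generate new values.
    __mapper_args__ = {
        "version_id_col": version,
        "version_id_generator": False,
    }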
For anyone who comes across this question, my current hypothesis is that it can't be done. SQLAlchemy is incredibly powerful and, given that the functionality is so good that we can't test this line, we should trust that it works as expected.
currently I am able to:
start CANoe application
load a CANoe configuration file
load a test setup file
def load_test_setup(self, canoe_test_setup_file: str = None) -> None:
    logger.info(
        f'Loading CANoe test setup file <{canoe_test_setup_file}>.')
    if self.measurement.Running:
        logger.info(
            'Simulation is currently running, so new test setup could not be loaded!')
        return
    self.test_setup.TestEnvironments.Add(canoe_test_setup_file)
    test_environment = self.test_setup.TestEnvironments.Item(1)
    logger.info(f'Loaded test environment is <{test_environment.Name}>.')
How can I access the XML Test Module loaded with the test setup (tse) file and select tests to be executed?
The second-to-last line in your snippet is most probably causing the issue.
I have been trying to fix this issue for quite some time now and finally found the solution.
Somehow, when you execute the line self.test_setup.TestEnvironments.Item(1),
win32com creates an object of type TestSetupItem, which doesn't have the necessary properties or methods to access the test cases. Instead we want objects of the collection types TestSetupFolders or TestModules. win32com creates an object of type TestSetupItem even though I have a single XML Test Module (called AutomationTestSeq) in the Test Environment.
There are three possible solutions that I found.
1. Manually clearing the generated cache before each run.
Using win32com.client.DispatchWithEvents or win32com.client.gencache.EnsureDispatch generates a bunch of python files that describe CANoe's object model.
If you have used either of those before, TestEnvironments.Item(1) will always return a TestSetupItem instead of an object of the more appropriate type.
To remove the cache you need to delete the C:\Users\{username}\AppData\Local\Temp\gen_py\{python version} folder.
Doing this every time is of course not very practical.
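If you want to script that cleanup, a small sketch along these lines can be run before dispatching CANoe (the path handling is an assumption; adjust it to your environment):

import os
import shutil
import tempfile

# Remove win32com's generated cache so the next dispatch regenerates it cleanly.
gen_py_dir = os.path.join(tempfile.gettempdir(), "gen_py")
shutil.rmtree(gen_py_dir, ignore_errors=True)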
2. Force win32com to always use dynamic dispatch.
You can do this by using:
canoe = win32com.client.dynamic.Dispatch("CANoe.Application")
Any objects you create using canoe from now on, will be dynamically dispatched.
Forcing dynamic dispatch is easier than manually clearing the cache folder every time, and it has always given me good results. But it will not give you any insight into the objects: you won't be able to see the acceptable properties and methods for them.
3. Typecast TestSetupItem to TestSetupFolders or TestModules.
This carries the risk that if you typecast incorrectly, you will get unexpected results, but it has worked well for me so far.
In short: win32.CastTo(test_env, "ITestEnvironment2"). This will ensure that you are using the recommended object hierarchy as per CANoe technical reference.
Note that you will also have to typecast TestSequenceItem to TestCase to be able to access test case verdict and enable/disable test cases.
Below is a decent example script.
"""Execute XML Test Cases without a pass verdict"""
import sys
from time import sleep
import win32com.client as win32
CANoe = win32.DispatchWithEvents("CANoe.Application")
CANoe.Open("canoe.cfg")
test_env = CANoe.Configuration.TestSetup.TestEnvironments.Item('Test Environment')
# Cast required since test_env is originally of type <ITestEnvironment>
test_env = win32.CastTo(test_env, "ITestEnvironment2")
# Get the XML TestModule (type <TSTestModule>) in the test setup
test_module = test_env.TestModules.Item('AutomationTestSeq')
# {.Sequence} property returns a collection of <TestCases> or <TestGroup>
# or <TestSequenceItem> which is more generic
seq = test_module.Sequence
for i in range(1, seq.Count+1):
# Cast from <ITestSequenceItem> to <ITestCase> to access {.Verdict}
# and the {.Enabled} property
tc = win32.CastTo(seq.Item(i), "ITestCase")
if tc.Verdict != 1: # Verdict 1 is pass
tc.Enabled = True
print(f"Enabling Test Case {tc.Ident} with verdict {tc.Verdict}")
else:
tc.Enabled = False
print(f"Disabling Test Case {tc.Ident} since it has already passed")
CANoe.Measurement.Start()
sleep(5) # Sleep because measurement start is not instantaneous
test_module.Start()
sleep(1)
Just continue what you have done.
The TestEnvironment contains the TestModules. Each TestModule contains a TestSequence which in turn contains the TestCases.
Keep in mind that you cannot start individual TestCases, only the TestModule. But you can enable and disable individual TestCases before execution by using the COM API.
(Typing this off the top of my head, it might not work 100%.)
test_module = test_environment.TestModules.Item(1)  # or 2, or whatever

test_sequence = test_module.Sequence

for i in range(1, test_sequence.Count + 1):
    test_case = test_sequence.Item(i)
    if ...:
        test_case.Enabled = False  # or True

test_module.Start()
You have to keep in mind that a TestSequence can also contain other TestSequences (i.e. a TestGroup). This depends on how your TestModule is set up. If so, you have to take care of that in your loop and descend into these TestGroups while searching for your TestCase of interest.
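A rough sketch of such a descent is shown below; it assumes that a nested group exposes its own Sequence collection, and the ITestCase cast follows the object model from the answer above, so treat it as a starting point rather than a verified implementation:

import win32com.client as win32

def walk_sequence(sequence, action):
    """Apply `action` to every test case, descending into nested groups."""
    for i in range(1, sequence.Count + 1):
        item = sequence.Item(i)
        nested = getattr(item, "Sequence", None)  # assumption: groups expose a nested Sequence
        if nested is not None:
            walk_sequence(nested, action)
        else:
            action(win32.CastTo(item, "ITestCase"))

# Example: disable every test case in the module, however deeply it is nested.
# walk_sequence(test_module.Sequence, lambda tc: setattr(tc, "Enabled", False))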
I'm mildly confused. I'm testing a Django application with Python's unittest library. After my tests had been running with 100% success for some minutes, an error suddenly appeared. OK, I thought, I must have just added some stupid syntax error. I started looking at the test and then at my code, and then I tried printing out the results that are compared with assertEqual just before the comparison. Suddenly, if I do that, the test passes!!! :o
Why is this? Has anyone experienced this before? I swear, the only change I made was adding a print statement inside my test function. I'll post the function before and after.
Before (Fails)
def test_swap_conditionals(self):
    """
    Test conditional template keys
    """
    testStr = "My email is: {?email}"
    swapStr = self.t.swap(testStr)

    # With email
    self.assertEqual(swapStr, "My email is: john#baconfactory.com")

    # Without email
    self.t.template_values = {"phone": "00458493"}
    swapStr = self.t.swap(testStr)
    self.assertEqual(swapStr, "My email is: ")
After (Success)
def test_swap_conditionals(self):
    """
    Test conditional template keys
    """
    testStr = "My email is: {?email}"
    swapStr = self.t.swap(testStr)
    print(swapStr)  # diff here

    # With email
    self.assertEqual(swapStr, "My email is: john#baconfactory.com")

    # Without email
    self.t.template_values = {"phone": "00458493"}
    swapStr = self.t.swap(testStr)
    self.assertEqual(swapStr, "My email is: ")
It looks like there is some external cause.
What you can check:
Rerun the test several times under the same conditions. Is it always failing, always passing, or flipping between the two? A flaky test might be caused by timing issues (although that is unlikely here).
Put the test in its own class, so there are no side effects from other unit tests.
If the test passes when it is in its own test class, the failure is a side effect of:
other unit tests
startup/teardown functionality
OK, well, this is embarrassing, but it was completely my fault. The swap function was collecting every conditional template variable on the line and then iterating over that list while comparing against only one template key at a time, so it either missed keys it actually had, or it got lucky and happened to hit the right key.
Example
line: "This is my {?email}, and this is my {?phone}"
finds: [{?email}, {?phone}]
iterates over [{?email}, {?phone}]
1. {?email}
   key being compared = phone : '00549684'
   The function has phone as a key but completely disregards it and does not swap it out, because at this point it is only holding {?email}, so it simply returns "".
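For illustration, a fixed swap only needs to look each placeholder it finds up in template_values; here is a minimal sketch (the regex and function signature are assumptions, not my actual code):

import re

def swap(template_values, text):
    # Replace every {?key} with its value when the key is known, otherwise with "".
    def replace(match):
        return template_values.get(match.group(1), "")
    return re.sub(r"\{\?(\w+)\}", replace, text)

# swap({"phone": "00549684"}, "This is my {?email}, and this is my {?phone}")
# -> "This is my , and this is my 00549684"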
I'm sincerely sorry to have wasted your time here. Thanks for the good answers. I am refactoring the code for the better now, and definitely taking a coffee break :D