I am trying to run unit tests for some IronPython code (Python running on the .NET/C# runtime). I am hitting the error below when running the unit tests. The odd thing is that the framework runs fine for my other Python test code. The framework loads tests using unittest.defaultTestLoader.loadTestsFromName and then calls the test runner.
One reason I suspect the code may not be running is a memory exception, as this Python code uses a lot more data structures than the other code for which the tests run successfully.
I have tried changing defaultTestLoader to loadTestsFromModule and loadTestsFromTestCase, but I still get the same error.
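For reference, the loading pattern in the framework looks roughly like this (a minimal sketch; "my_tests" is a placeholder for the real test module name):

    import unittest

    # Sketch of the loading pattern described above; "my_tests" stands in
    # for the actual test module name used by the framework.
    suite = unittest.defaultTestLoader.loadTestsFromName("my_tests")
    runner = unittest.TextTestRunner(verbosity=2)
    result = runner.run(suite)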
Can someone please help me find a way around this?
[Image: Error on CMD when running tests]
[Image: Test framework code used to run the tests]
I faced the same problem in a Dynamo for Revit application.
As I understand it, there is some memory limit on the number of named variables.
I started to use a dict instead of individual variables. That helped me, but it is not the best way out.
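The change looks roughly like this (a minimal sketch; the variable names are made up for illustration):

    # Before: many separate named variables (hypothetical names).
    # wall_height = 3000
    # wall_width = 200
    # wall_offset = 50

    # After: a single dict holding the same values.
    params = {
        "wall_height": 3000,
        "wall_width": 200,
        "wall_offset": 50,
    }
    print(params["wall_height"])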
I am really new to Jenkins and Python, so when I initially researched this problem there was a limit to my understanding. I want to write a Python script and have it run on Jenkins as part of some automated testing. My script interacts with an API and hence imports the 'requests' module. It works fine using the Python interpreter on my local machine, but I have had issues when trying the Jenkins Python script builder, so I am looking for a way around this.
As I mentioned, I have looked around the internet for solutions, but as my knowledge of this topic is limited I have found it difficult to understand some of the ideas mentioned on the web. One lead I have found relates to the use of virtual environments on Jenkins, but as it's something I've never used, I have struggled to implement it. I have installed the ShiningPanda plugin on Jenkins, but I am unsure how to use it.
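Roughly, what I am aiming for in a build step is something like this (a sketch of an "Execute shell" step; my_api_script.py is a placeholder name):

    # Hypothetical Jenkins "Execute shell" build step: create an isolated
    # virtual environment, install the dependency, then run the script.
    python -m venv venv
    . venv/bin/activate
    pip install requests
    python my_api_script.py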
Any help given is greatly appreciated :)
Thanks
I recently moved my Jenkins CI setup to a new computer. Everything works fine, apart from the fact that the build timings are erratic, to say the least. For example, note the timing discrepancy here:
[Image: Jenkins test result with inconsistent timings]
Sometimes the timing situation seems to improve or change when restarting the server.
I am running Jenkins 2.89.2 on Win10. Moreover, I have not seen other people complain about this, so it might not be a bug but rather an issue with my configuration...
Any suggestions?
Thanks!
Update: after some digging through the underlying JUnit XML reports, and a look at my old setup, it seems the issue has in fact been around for a while and relates (I now believe) only to the folder-in-build, multi-module view.
I'm trying to run a Python script using MATLAB's built-in py interface. It's pretty simple, but I'm running into some difficulty trying to debug an error in my code (which runs fine when tested in my Python IDE but crashes when run through MATLAB).
The issue is that MATLAB seems to be caching the module the first time I call a function, and I can't figure out how to get it to recognize changes to the module without restarting MATLAB. Is anyone aware of a way to avoid this issue?
This is the first limitation listed on the MATLAB documentation's Limitations to Python Support page:
"Editing and reloading a Python® module in the same MATLAB session. To use an updated module, restart MATLAB."
Sorry. That said, that page might help you figure out what the issue is, as there are other limitations that might be coming into play. You might also find their page about troubleshooting Python errors useful.
I've started working on a project with loads of unused legacy code in it. I was wondering if it might be possible to use a tool like coverage in combination with a crawler (like the django-test-utils one) to help me locate code which isn't getting hit, which we could then mark with deprecation warnings. I realise something like this won't be foolproof, but I thought it might help.
I've tried running coverage.py with the Django debug server, but it doesn't work correctly (it seems to profile only the runserver machinery rather than my views, etc.).
We're improving our test coverage all the time but there's a way to go and I thought there might be a quicker way.
Any thoughts?
Thanks.
You can run the development server under coverage if you use the --noreload switch:
    coverage run ./manage.py runserver --noreload
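Once you've exercised the site, you can get the results with coverage.py's standard reporting commands:

    # Terminal summary with missing line numbers, then an HTML report.
    coverage report -m
    coverage html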
pylint is a great tool for static code analysis (among other things, it will detect unused imports, variables, and arguments):
http://nedbatchelder.com/blog/200806/pylint.html
http://www.doughellmann.com/articles/pythonmagazine/completely-different/2008-03-linters/index.html
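For hunting unused code specifically, you can restrict pylint to just the relevant checks (W0611, W0612 and W0613 are pylint's message codes for unused imports, variables and arguments; the project path is a placeholder):

    # Run only the "unused" checks across the project.
    pylint --disable=all --enable=W0611,W0612,W0613 myproject/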
I have a python script which generates some reports based on a DB.
I am testing the script using Java DbUnit tests which call the Python script.
My question is: how can I verify code coverage for the Python script while I am running the DbUnit tests?
Coverage.py has an API that you can use to start and stop coverage measurement as you need.
I'm not sure how you are invoking your Python code from your Java code, but once you are in Python, you can use coverage.py to measure Python execution and then get reports on the results.
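A minimal sketch of that API (class and method names as in the coverage.py docs; the wrapped function is a placeholder for your report-generation code):

    import coverage

    def generate_reports():
        # Placeholder for the real report-generation logic.
        print("generating reports...")

    cov = coverage.Coverage()  # older releases spelled this coverage.coverage()
    cov.start()
    generate_reports()         # only code run between start/stop is measured
    cov.stop()
    cov.save()
    cov.report()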
Drop me a line if you need help.
I don't know how you can check for inter-language unit test coverage. You will have to tweak the framework yourself to achieve something like this.
That said, IMHO this is the wrong approach to take, for various reasons.
Crossing languages disqualifies the tests from being described as "unit" tests. These are functional tests, and thereby shouldn't care about code coverage (see @Ned's comment below).
If you must (unit) test the Python code, then I suggest you do it in Python. This will also solve your problem of checking test coverage.
If you do want to write functional tests, it would be a good idea to keep the Python code coverage checks away from Java. This reduces the coupling between the Java and Python code. Tests are code, after all, and it is usually a good idea to reduce coupling between parts.