Jenkins CI timings erratic - python

I recently moved my Jenkins CI setup to a new computer. Everything works fine, apart from the fact that the build timings are erratic, to say the least. For example, note the timing discrepancy here:
[Screenshot: Jenkins test result with inconsistent timings]
Sometimes the timing situation seems to improve or change when restarting the server.
I am running Jenkins 2.89.2 on Windows 10. I have not seen other people complain about this issue, so it might not be a bug but rather a problem with my configuration...
Any suggestions?
Thanks!
Update: after some digging through the underlying JUnit XML reports and looking at my old setup, it seems the issue has in fact been around for a while and relates (I now believe) only to the folder-in-build, multi-module view.
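For anyone who wants to cross-check the numbers the same way, here is a minimal sketch (the report path is a placeholder) that sums the per-test durations in a JUnit XML file and compares them with the suite-level time Jenkins displays:

```
import xml.etree.ElementTree as ET

# Placeholder path; point this at one of the JUnit reports Jenkins archives.
tree = ET.parse("reports/TEST-results.xml")
root = tree.getroot()  # may be <testsuite> or a wrapping <testsuites>

suite_time = float(root.get("time") or 0.0)
case_total = sum(float(tc.get("time") or 0.0) for tc in root.iter("testcase"))

print("suite-level time attribute:       %.3f s" % suite_time)
print("sum of individual testcase times: %.3f s" % case_total)
```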

Related

IronPython Microsoft.Scripting.MutableTuple Exception

I am trying to run unit tests for some IronPython code (Python with C#). I am facing the error below when running the unit tests. The weird thing is that the framework runs fine for tests of other Python code. The framework uses unittest.defaultTestLoader.loadTestsFromName and then calls the test runner.
One reason I suspect the code may not be running is a memory exception, as this Python code uses a lot more data structures than the other code for which the tests run successfully.
I have tried changing the loader call to loadTestsFromModule and loadTestsFromTestCase, but I still get the same error.
Can someone please help me find a solution to this?
[Screenshot: error shown on the command line when running the tests]
[Screenshot: test framework code used to run the tests]
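The framework code itself is only shown as a screenshot above; a loadTestsFromName-based runner of the kind described usually looks something like this (the dotted test name is a placeholder, not taken from the original framework):

```
import unittest

# Placeholder dotted name; load the tests and hand them to a text runner.
suite = unittest.defaultTestLoader.loadTestsFromName("tests.test_module")
result = unittest.TextTestRunner(verbosity=2).run(suite)
print("tests run: %d, failures: %d" % (result.testsRun, len(result.failures)))
```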
I faced the same problem in a Dynamo for Revit application.
As I understand it, the number of named variables has some memory limit.
I started to use a dict instead of separate variables. It helped me, but it is not the best way out.
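A toy illustration of that workaround (the names are made up): rather than declaring many separate locals, keep the values in one dict so far fewer variables are defined.

```
# Instead of many separate local variables (one slot each), keep values in one dict.
values = {}
values["width"] = 10.0
values["height"] = 2.5
values["area"] = values["width"] * values["height"]
print(values["area"])
```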

Python - Difference between SonarQube vs pylint

I'm evaluating test framework, lint and code coverage options for a new Python project I'll be working on.
I've chosen pytest for the testing needs. After reading a bunch of resources, I'm confused about when to use SonarQube, SonarLint, pylint and coverage.py.
Are SonarLint and pylint comparable? When would I use SonarQube?
I need to be able to use this in a Jenkins build. Thanks for helping!
SonarLint and pylint are comparable, in a way.
SonarLint is a code linter and pylint is too. I haven't used SonarLint, but it seems to analyze the code a bit more deeply than pylint does. From my experience, pylint only follows a set of rules (which you can modify, by the way), while SonarLint goes a bit further and analyzes the inner workings of your code. They are both static analysis tools, however.
SonarQube, on the other hand, does a bit more. SonarQube is a CI/CD tool that runs static linters, but it also shows you code smells and does security analysis. All of what I'm saying is based purely on their website.
If you would like to run CI/CD workflows or scripts, you would use SonarQube, but for local coding, SonarLint is enough. Pylint is the traditional way, though.
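As a small illustration of the rule tweaking mentioned above, pylint messages can also be suppressed inline in the code itself (the variable here is just an example):

```
# Suppress a single pylint message for this line only.
x = 1  # pylint: disable=invalid-name
```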
Nicholas has a great summary of pylint vs SonarLint.
(Personally, I use SonarLint.)
Although the question is older, I thought I'd answer the other part of your question in case anyone else has the same one; the internet being eternal and all.
Coverage.py, as it sounds, runs code coverage for your package. SonarQube then takes the report that coverage.py produces and formats it the way the Sonar team decided was necessary. Coverage.py is needed if you want code coverage in SonarQube; however, if you just want the code smells from SonarQube, it is not needed.
You were also asking about when to use SonarQube, coverage.py, and Jenkins.
In Jenkins, you would create a pipeline with several stages. Something along the following lines:
Check out the code (automatically done by Jenkins as the first step)
Build the code as it is intended to be used by the user/developer
Run the unit tests
Run coverage.py (a sketch follows below)
Run SonarQube
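For the coverage.py step, here is a minimal sketch of producing an XML report that SonarQube's Python analyzer can pick up (the package name, test path, and output file are placeholders; the corresponding SonarQube setting is typically sonar.python.coverage.reportPaths):

```
import coverage
import pytest

# Measure coverage of "my_package" while pytest runs the suite in-process.
cov = coverage.Coverage(source=["my_package"])
cov.start()
exit_code = pytest.main(["tests/"])
cov.stop()
cov.save()

# Write the XML report that SonarQube reads.
cov.xml_report(outfile="coverage.xml")
print("pytest exit code:", exit_code)
```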

Kernel crashes when increasing iterations

I am running a Python script using Spyder 2.3.9. I have a fairly large script, and when running it with 300x600 iterations (a loop inside another loop), everything appears to work fine and takes approximately 40 minutes. But when I increase the number to 500x600 iterations, after 2 hours the output yields:
It seems the kernel died unexpectedly. Use 'Restart kernel' to continue using this console.
I've been trying to go through the code but don't see anything that might be causing this in particular. I am using Python 2.7.12 64-bit, Qt 4.8.7, PyQt4 (API v2) 4.11.4 (Anaconda2-4.0.0-MacOSX-x86_64).
I'm not entirely sure what additional information is pertinent, but if you have any suggestions or questions, I'd be happy to read them.
https://github.com/spyder-ide/spyder/issues/3114
It seems this issue has already been opened on their GitHub repository and should be addressed soon, given the repo's track record.
Some possible solutions:
It may be helpful, if possible, to modify your script for faster convergence. Very often, for most practical purposes, the incremental value of iterations beyond a certain point is negligible (see the sketch after this list of suggestions).
An upgrade or downgrade of the Spyder environment may help.
Check your local firewall for blocked connections to 127.0.0.1 from pythonw.exe.
If nothing works, try using Spyder on Ubuntu.
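Here is a minimal sketch of that first suggestion, with a toy computation standing in for the real inner loop:

```
# Toy stand-in for the real per-iteration computation.
def update(i, acc):
    return acc + 1.0 / (i + 1) ** 2

acc, prev = 0.0, None
for i in range(500 * 600):
    acc = update(i, acc)
    if prev is not None and abs(acc - prev) < 1e-12:
        break  # further iterations change nothing measurable
    prev = acc
print("stopped after", i + 1, "iterations, value:", acc)
```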

Error in `/usr/bin/python': double free or corruption (out): 0x00007f7c3c017260

I'm developing a website in Python using the (excellent) Flask framework. In the backend code I use APScheduler to run some cron-like jobs every minute, and I use NumPy to calculate some standard deviations. I don't know whether the use of these modules matters, but I thought I'd better mention them since I guess they might be the most likely cause.
Anyway, in the middle of operation, Python itself seemed to crash, giving the following:
*** Error in `/usr/bin/python': double free or corruption (out): 0x00007f7c3c017260 ***
I might be wrong, but as far as I know, this is pretty serious. So my question is: what could cause this, and how can I get more information about a crash like this? What does the (out) mean? I can't really reproduce this, but it has happened 4 times in about 5 months now. I'm running the standard Python 2.7 on Ubuntu Server 14.04.
I searched around and found a couple of discussions about similar crashes, in which one thing keeps coming back: concurrency seems to be related somehow (which is why I mentioned the use of APScheduler).
If anybody has any idea how I could debug this or what could possibly be the cause of this; all tips are welcome!
I had a similar issue.
I had an unused dependency: spacy == 1.6.0
Removing it solved the issue.
(Maybe upgrading the spacy version could also work.)
spacy is written in Cython, an optimising static compiler for Python, so the crash might be related to some memory-allocation bug in the spacy implementation.
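On the "how can I get more information about a crash like this?" part of the question: Python's faulthandler module can dump every thread's traceback when the process is killed by a fatal signal such as SIGABRT, which glibc raises for "double free or corruption". It ships with Python 3.3+ and is available as a backport package on PyPI for Python 2.7. A minimal sketch, with a placeholder log path:

```
import faulthandler

# Placeholder log path; keep the file object alive for the process lifetime.
crash_log = open("/tmp/python-crash.log", "w")
faulthandler.enable(file=crash_log, all_threads=True)
```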

Fabric: How can I unit test my fabfile?

In the previous project I was working on, our fabfile got out of control. While the rest of our project was well-tested, we didn't write a single test for our fabfile. Refactoring was scary, and we weren't confident a fabric command would work how we expected until we ran the command.
I'm starting a new project, and I'd like to make sure our fabfile is well-tested from the beginning. Obey the Testing Goat has a great article discussing some possible strategies, yet it has more questions than answers. Using fabtest is a possibility, although it seems to be dead.
Has anyone successfully unit tested their fabfile? If so, how?
Run your fabfile task in a Docker instance.
Use docker diff to verify that the right files were changed by the fabfile.
This is still quite a bit of work, but it allows testing without excessive Fabfile modifications.
Have you tried python-vagrant? It seems to do the same thing that fabtest does, but it includes some Fabric demos and is still used and maintained.
The slides - mentioned by Henrik Andersson - from back then are available here
Robin Kåveland Hansen replied to me:
There are some examples there of the types of refactoring we did in order to keep our fabric code well-tested.
In general, I would say the best advice is to avoid low-level code such as shell commands inside higher-level code that makes decisions about what to run, e.g. isolate effect-full code from code that makes decisions.
Branching increases the number of test cases that you need, and it's a lot more effort to write good test cases for code that changes state on some server.
At the time, we used mock to mock out fabric and write test cases for branch-less code that has side effects on the server, so the code + tests would look a lot like this:
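Here is a minimal illustrative sketch of that pattern (the module, task, and shell command are hypothetical, not the original code), assuming Fabric 1.x-style fabric.api.run and unittest.mock:

```
# deploy_tasks.py (hypothetical effect-full, branch-less task)
from fabric.api import run

def restart_app():
    run("sudo systemctl restart myapp")

# test_deploy_tasks.py (the test only asserts which command was issued)
from unittest import mock
import deploy_tasks

def test_restart_app_issues_expected_command():
    with mock.patch("deploy_tasks.run") as fake_run:
        deploy_tasks.restart_app()
        fake_run.assert_called_once_with("sudo systemctl restart myapp")
```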
Obviously this has the weakness that it won't pick up bugs in the shell commands themselves. My experience is that this is rarely the cause of serious problems, though.
Another option using mock would be to use the following idea to run the tests locally on your machine instead of remotely.
Maybe the most robust approach is to run the tests in vagrant, but that has the disadvantage of requiring lots of setup and has a tendency to make the tests slower.
I think it's important to have fast tests, because then you can run them all the time and they give you a really nice feedback-loop.
The deploy-script I've written for my current employer has ~150 test cases and runs in less than 0.5 seconds, so the deploy-script will actually do a self-test before deploying.
This ensures that it is tested on each developer machine all the time, which has picked up a good few bugs, for example in cases where Linux and macOS behave differently.
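A minimal sketch of that self-test idea, assuming the deploy script's tests live in a discoverable directory (the name here is a placeholder):

```
import sys
import unittest

# Placeholder directory containing the deploy script's own tests.
suite = unittest.defaultTestLoader.discover("deploy_tests")
result = unittest.TextTestRunner(verbosity=0).run(suite)
if not result.wasSuccessful():
    sys.exit("refusing to deploy: self-test failed")
# ...continue with the actual deploy from here...
```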
