At work, we already have a custom script that performs regressions for us.
I have been tasked with creating a wrapper script that will ultimately invoke our pre-existing script in a custom fashion and do a bunch of housekeeping in and around the executed tests.
Essentially, what I need to provide is:
The ability to invoke simulations using custom switches, paths, etc.
Some form of regression tracking (a history of pass/fail results)
A basic web interface to view regression results
Perhaps an automated email when something breaks
Now nothing I have mentioned is new and I am sure there are a lot of ways to approach this.
What I am looking for is some suggestions as to a good way to go about this.
Some solutions that come to mind:
Build a custom Python script (most effort)
Extend Python's built-in unit testing framework (unittest), subclassing where necessary.
This is really the crux of my question. Is this a good solution?
Use some other framework?
Jenkins? I have not used Jenkins, but I've heard good things about this tool. Any thoughts on whether it would suit here?
Thanks for your help!
Some of you may ask why I don't extend the base simulation script directly.
Well, it's a few thousand line Perl monster!
I don't speak Perl, nor do I have any intention to start!
So I've been self-learning Python through this app and I've gotten through the course it offers. I feel like I have a good grasp of the absolute basics of the language, but I still have no idea how to actually use it to solve real problems, i.e. do actually useful things. I feel like I have a bunch of building blocks but I don't know how to put them together.
So enter my job. I'm a student that works part time and we do weekly schedule requests by paper. It's a giant pain for both us and the supervisors who do the actual scheduling. I was thinking it might be cool to write something to automate the process. It's not so much about the end product though, I'm not being paid enough to write software for my employer. I've just heard everywhere that actually working on projects is the best way to learn and I'm taking this real problem as an opportunity to practice.
So my question is, where should I go from my bare bones knowledge of Python to be able to write something like this? Do I read books? Learn about fancy algorithms? Find something similar and browse its source code? Thanks!
First, you need to figure out what you want your program to do, specifically. Programs are ways to automate processes, and you cannot automate something until you know what the process is. You are trying to solve a real-world problem, so the first thing you need to do is figure out how this problem is solved today (without your program).
When you try to figure this out, you should not be thinking about how to actually code it in Python, but rather about what steps need to be taken (manually) to achieve what you want. Exactly what would you do if you were doing the scheduling by hand?
Often, people find it useful to describe what a system is actually supposed to do via use cases. A use case describes a single function, feature, or process from the perspective of a user of your system. Of course, you don't need to formally write use cases for your system, but it might help to think of what the system should be doing from a user's perspective before thinking of the technical details to achieve this.
Once you have concrete features, processes, or steps the program needs to support, you can start digging into how to actually implement them in Python.
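As a concrete (entirely invented) illustration of turning manual steps into code, the shift-request process might start as two functions that mirror what people already do on paper:

```python
# All names below are invented for illustration: one use case --
# "an employee requests a shift and a supervisor approves it" --
# written directly as the manual steps it replaces.

def submit_request(schedule, employee, shift):
    """Step 1: the employee hands in a request for a shift."""
    schedule.setdefault(shift, []).append({"employee": employee, "approved": False})

def approve_request(schedule, employee, shift):
    """Step 2: the supervisor marks that request as approved."""
    for request in schedule.get(shift, []):
        if request["employee"] == employee:
            request["approved"] = True

schedule = {}
submit_request(schedule, "alice", "mon-morning")
approve_request(schedule, "alice", "mon-morning")
```

Once the steps exist as functions, questions like "where does the schedule get stored?" and "how do users enter requests?" become separate, smaller problems.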
I have a system, let's say a product firmware that has to be tested on its embedded platform. For this I can access the platform using a C library and I will need to control some instruments (function generators, multi-meters, oscilloscopes) to get some measurements.
To be more specific regarding my application, let's imagine I would like to check that the firmware of a parallel robot is working perfectly. Apart from the independent unit tests inside the firmware source code, I will need to do more tests that involve physical feedback. I can, for instance, measure the pressure of an arm against a sensor or measure the back-EMF voltage. Some of these tests are critical (meaning the overall testing procedure fails with no second chance). Others are not simply pass/fail: they can raise a warning and the test can continue.
Because some routines are implemented in low-level C, they are part of the API that I am addressing from MATLAB/Python. These routines may fail, and I will have to catch the error code in a try/except block.
At a different level, my test itself can also break in some way. If a failure occurs, I would like to log the complete traceback: which test unit, which class instance, which method, and which API function.
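To illustrate the error-code point in Python, here is a minimal sketch. The `checked` helper and the "0 means success" convention are assumptions, not part of any real API; the idea is just to convert a C return code into an exception whose full traceback can be logged:

```python
import logging

logger = logging.getLogger("testbench")

class ApiError(RuntimeError):
    """Raised when a low-level C routine reports failure via its return code."""

def checked(func, *args):
    """Call a C-API routine (e.g. one loaded with ctypes) and turn a
    non-zero error code into an exception.  The 0-means-success
    convention is an assumption -- adapt it to your API."""
    code = func(*args)
    if code != 0:
        raise ApiError(f"{getattr(func, '__name__', func)} returned error code {code}")
    return code

# In a test, the full traceback can then be captured in one line:
try:
    checked(lambda: 7)          # stand-in for a failing DLL routine
except ApiError:
    logger.exception("low-level API call failed")
```

`logger.exception` records the message together with the active traceback, which covers the "which method, which API function" requirement at the Python level.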
I have found two technologies that seem very well suited for this purpose: MATLAB and Python. With both I can access C DLLs, plot graphs, and control instruments over a VISA port.
To write the series of tests in Python, I can use a combination of unittest, HTMLTestRunner, and logging.
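A minimal sketch of that combination, with the instrument reading stubbed out and all limits invented, showing a non-critical warning alongside a critical assertion:

```python
import logging
import unittest

log = logging.getLogger("bench")

class BackEmfTest(unittest.TestCase):
    """Sketch only: instrument I/O is stubbed out and the limits are invented."""

    def read_back_emf(self):
        return 1.9              # would really come from the VISA instrument

    def test_back_emf(self):
        voltage = self.read_back_emf()
        if voltage > 1.5:       # non-critical: warn and keep going
            log.warning("back-EMF high: %.2f V", voltage)
        self.assertLess(voltage, 3.0, "critical limit exceeded")  # critical: fails the test

if __name__ == "__main__":
    unittest.main()   # swap in HTMLTestRunner here for an HTML report
```

Critical checks map onto assertions (the test stops and is recorded as a failure); non-critical ones map onto log records, which the logging module can route to files or reports.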
On the MATLAB side, the existing packages seem very poor. I have found log4m, and MATLAB 2015 provides native functions for unit testing. However, my feeling is that Python is a bit better suited to what I am trying to do.
I have to say I already have plenty of MATLAB licenses, and money is not an issue in this case.
I recently discovered that MATLAB and Python both offer interfaces to talk to each other. Also, I've read that these two technologies are becoming very popular for testing purposes.
I would like to find solid arguments and concrete examples that will help me choose the right technology.
My current feeling is that MATLAB will not give me any help automating my test bench with, for instance, external hooks into Git/ClearCase repositories. Building an HTML report and getting good traceback information is not easy to achieve.
Is it possible in MATLAB to get good traceback information, a nice logger across my modules, and test classes that can be triggered by an external script?
It's tough to say without gathering more details on your specific use case and learning more about how you are setting up your environment. Just looking at what you are using in Python, we have the unit test framework, a logging framework, and an HTML test runner.
As you say, in MATLAB we have the unit test framework, and it has logging features of its own (1, 2) -- can those be used? There is not (yet) an HTML report, but you can:
Use the XMLPlugin to produce JUnit-style XML and apply an XSLT to create HTML from it.
Plug into a CI system using either JUnit or TAP; these systems typically have reporting capabilities. An example of doing this with Jenkins is here and here.
Ultimately, the CI system links show how to run your tests automatically, and I can tell you that there has been, and continues to be, a very large investment in the diagnostic (i.e. traceback) information provided by the unit test framework. The logging features in the framework are catered to test logging rather than inter-module logging, so I am not sure whether that will work for what you would like, but there is also log4m, as you say.
We have a large, interactive R program that we would like to interface with Shiny. There is a small Python program we would also like to create an interface for alongside it. There are no dependencies between the two sets of code, but as a research institute we'd like to provide a common interface for the two programs, which might be accessed by the same users. What is a good way to go about it? Is it better to consolidate under Python/Django and use rpy2, or to make system calls to the Python program through R's Shiny interface? Are there better alternatives or recommended practices?
Django would be overkill.
rpy2 is a good option for small modules containing simpler methods.
Flask is another good option on the Python side. Programmers can transmit files or even build simple web interfaces. I prefer this method. Tell your students/colleagues to define fixed APIs and a response format [JSON/XML], and even a new scholar won't have to spend time thinking about how to make it work -- just tell them the APIs and they can work with them, just like Alchemy etc. interfaces.
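A sketch of that "fixed API and response format" idea using only the stdlib; the `api_response` name and the envelope fields are invented, and the commented-out Flask route merely shows where it would plug in:

```python
import json

def api_response(status, payload):
    """The fixed JSON envelope every endpoint agrees on (illustrative names)."""
    return json.dumps({"status": status, "data": payload})

# A Flask view would then just return this envelope, e.g.:
#
#   @app.route("/run")
#   def run():
#       return api_response("ok", run_python_model())
```

With the envelope fixed up front, the Shiny side only ever parses one shape of response, no matter which Python endpoint it calls.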
Shiny is a good option for building web interfaces on the R side. A quick tutorial that works: http://shiny.rstudio.com/tutorial/lesson2/
I am going to write a configuration tool for my Ubuntu-based system. Next, I would like to write frontends (text, GUI, and web). But this is the most complicated project I have wanted to write, and I am not sure about the general architecture I should use.
Currently I have functions and classes for changing the system config, but these functions will probably grow and change. #Abki gave me advice on how to write an interface for the frontends. I am going to make base classes for this interface, but I don't know how to connect it with the backend and then with the frontends. Probably I should use design patterns like facade, wrapper, or something else.
It looks like this (without the interface_to_backend layer): [diagram omitted]
I don't care about the UI and the functions to change the system config right now. But I don't know how to write the middle layer so it would be easy to connect it with the rest and extend its functionality in the future.
I need general ideas, design patterns, and advice on how to implement this in Python.
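One common shape for that middle layer is a facade. This is a hedged sketch with invented names (`ConfigFacade`, `set_hostname`), using an in-memory backend so it runs without touching the system:

```python
from abc import ABC, abstractmethod

class Backend(ABC):
    """The existing low-level config functions sit behind this interface."""
    @abstractmethod
    def set_hostname(self, name):
        ...

class InMemoryBackend(Backend):
    """Stand-in backend so the sketch runs without touching the system."""
    def __init__(self):
        self.state = {}
    def set_hostname(self, name):
        self.state["hostname"] = name

class ConfigFacade:
    """The middle layer: one stable entry point that every frontend
    (text, GUI, web) calls, so backend changes never leak into the UIs."""
    def __init__(self, backend):
        self._backend = backend
    def rename_machine(self, name):
        if not name:
            raise ValueError("hostname must not be empty")
        self._backend.set_hostname(name)

backend = InMemoryBackend()
ConfigFacade(backend).rename_machine("myhost")
```

Because the frontends only know `ConfigFacade`, the backend functions can grow and change freely, and a fake backend like `InMemoryBackend` makes the whole stack testable.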
I'm not sure this is entirely appropriate for SO, but I'm intrigued, so I'll bite. As a Rubyist I can't help much with the Python, but here is some opinion on patterns from my experience.
My initial suggestion is that you should review a few of the contenders out there. Specifically, I'd look at cfengine, chef, and bcfg2. They each tell a different story, but to summarise:
Chef has a lovely DSL syntax but is let down by a complicated architecture
bcfg2 is written in Python but seems to have an annoying tendency to use XML :(
cfengine has the strongest theoretical underpinnings, in promise theory (which is very interesting, BTW), but is C-based.
Wikipedia also provides a pretty impressive list of configuration management tools that you will find useful.
In regard to designing your own tool I'd suggest there are three principles you want to pursue:
Simplicity, the simpler you make this the better. Simplicity in terms of scope, configuration, and use are all important.
Traceability, you'll need a single way to store data; you need to be able to trace choices as they are made and not trample other people's changes (especially in a team environment).
Security, most configuration management tools need root privileges at some point, so you need to make sure that users can trust the code they're running.
You could use Fabric with Python as described in the article Ubuntu Server Setup with Python Fabric
The Wikipedia article at Comparison of open source configuration management software has several other tools that use Python to do this.
I like the approach taken by SALT.
If you write the GUI, text/CLI, and Web interfaces using Python, they can all use the same Python module. That way a change in one interface transparently affects the others. Plus all of those are in Python's area of strength.
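A minimal sketch of that layout, with an invented `set_timezone` operation: the core module holds the logic, and the text frontend stays a thin wrapper (a GUI or web frontend would wrap the same core function):

```python
# core.py -- the single shared module (the operation is a made-up example)
def set_timezone(config, tz):
    config["timezone"] = tz
    return config

# cli.py -- the text frontend is just argument parsing; the GUI and web
# frontends would be equally thin wrappers around the same core module.
import argparse

def cli(argv, config):
    parser = argparse.ArgumentParser(prog="sysconfig")
    parser.add_argument("--timezone", required=True)
    args = parser.parse_args(argv)
    return set_timezone(config, args.timezone)
```

A bug fix or new feature lands once in the core module, and every interface picks it up automatically.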
We have around 250 identical Linux servers which run a business-critical web application for a bank. We do a lot of scripting work, but now I want to centralize it in one location -- that is, run it on one server and deploy it to many. I know you must be thinking this is an easy task that can be done with a shell script, but we would need to create many different scripts to do our work.
I know Python has a big library and this should be possible, but I don't know how. In short, I need all the scripts in one file, and based on an argument it should execute the right one.
For example, in a Python program we have functions which we can combine to produce different results.
So could you please let me know how to go about it?
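One common way to get "all scripts in one file, dispatched by argument" is a task table. The task names and functions below are invented examples:

```python
import argparse

# Each former shell script becomes a function; these two are invented examples.
def check_disk(args):
    return "disk checked"

def rotate_logs(args):
    return "logs rotated"

TASKS = {"check-disk": check_disk, "rotate-logs": rotate_logs}

def main(argv):
    parser = argparse.ArgumentParser(
        description="single entry point for all admin tasks")
    parser.add_argument("task", choices=TASKS)
    args = parser.parse_args(argv)
    return TASKS[args.task](args)
```

Running `python admin.py check-disk` then executes the matching function on one host; a remote-execution tool can push the same file to all 250 servers and invoke it with the desired argument.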
This is a very general question, so I'll respond with two different frameworks that are made using Python to facilitate bulk system administration tasks.
func - Func is part of the Fedora project and so is specialized to their architecture. If your hosts are all RedHat/CentOS-based, this is the solution for you.
fabric - Fabric is more generic but does generally the same thing at a high level. If your environment is heterogeneous (full of different types of systems/distributions), this is probably what you want.
You could also try any of the distributed computing packages. Pyro is one of them that might interest you.