Is there a testing framework (preferably Python) that executes tests, monitors their progress (failed/passed/timeout), and controls VMware? Thanks
I am trying to do some automated functional testing in VMware using AutoIt scripts. The VMs are controlled by a little Python script on the host machine (it deploys test files into the VMs, executes them, and collects the result data). But it now looks like a lot of work if I want this script to be able to manage and execute a series of test cases.
Thanks a lot!
Cheers,
Zhe
There are lots of continuous integration tools that may do what you want.
One implemented in Python that may fit your needs is Buildbot - it can manage running builds and tests across multiple machines and consolidate the results.
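To give a rough idea of what this looks like in practice, here is a minimal, incomplete sketch of a Buildbot master configuration that runs a test driver on one worker. The worker name, password, and test command are placeholders, and a real config also needs schedulers and a few other settings; check the Buildbot docs for your version:

    # master.cfg -- minimal sketch for recent Buildbot versions (buildbot.plugins API)
    from buildbot.plugins import steps, util, worker

    c = BuildmasterConfig = {}

    # One worker; in a VMware setup this could be an agent running inside a VM.
    c['workers'] = [worker.Worker("vm-worker-1", "secret")]  # placeholder name/password

    # A build factory that simply runs the existing test driver script.
    factory = util.BuildFactory()
    factory.addStep(steps.ShellCommand(command=["python", "run_tests.py"]))  # placeholder

    c['builders'] = [
        util.BuilderConfig(name="functional-tests",
                           workernames=["vm-worker-1"],
                           factory=factory),
    ]

Buildbot then tracks pass/fail/timeout per step and shows the consolidated results in its web UI.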
I have a Python script that consumes an Azure queue, and I would like to scale this easily inside Azure infrastructure. I'm looking for the easiest solution possible to
run the Python script in an environment that is as managed as possible
have a centralized way to see the scripts running and their output, and easily scale the number of scripts running through a GUI or something very easy to use
I'm looking at Docker at the moment, but this seems very complicated for the extremely simple task I'm trying to achieve. What possible approaches are there to do this? An added bonus would be if I could scale with the number of items on the queue, but it is fine if we'd just be able to manually control the amount of parallelism.
You should have a look at Azure Web Apps, which also supports Python.
This would be a managed and scalable environment, and it also supports background tasks (WebJobs) with central logging.
Azure Web Apps also offer a free plan for development and testing.
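For the queue-consuming part, a continuous WebJob is essentially just a long-running script, and its stdout ends up in the central WebJobs logs. A minimal sketch, assuming the azure-storage-queue package; the queue name and connection-string setting are placeholders:

    # run.py -- sketch of a continuous WebJob polling an Azure Storage queue
    import os
    import time
    from azure.storage.queue import QueueClient  # assumes the azure-storage-queue package

    queue = QueueClient.from_connection_string(
        os.environ["AZURE_STORAGE_CONNECTION_STRING"],  # set in the Web App's settings
        queue_name="tasks",                             # placeholder queue name
    )

    while True:
        for message in queue.receive_messages():
            print("processing:", message.content)  # visible in the WebJob log
            # ... do the actual work here ...
            queue.delete_message(message)
        time.sleep(5)  # back off briefly when the queue is empty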
In my experience, CoreOS on Azure can satisfy your needs. You can refer to the doc https://azure.microsoft.com/en-us/documentation/articles/virtual-machines-linux-coreos-how-to/ to find out how to get started.
CoreOS is a Linux distribution designed for running Docker containers, and you can access it remotely via an SSH client such as PuTTY. To use Docker, you can search for a Docker tutorial via Bing and quickly learn enough basic usage to run Python scripts.
Sounds to me like you are describing something like a micro-services architecture. From that perspective, Docker is a great choice. I recommend you consider using an orchestration framework such as Apache Mesos or Docker Swarm which will allow you to run your containers on a cluster of VMs with the ability to easily scale, deploy new versions, rollback and implement load balancing. The schedulers Mesos supports (Marathon and Chronos) also have a Web UI. I believe you can also implement some kind of triggered scaling like you describe but that will probably not be off the shelf.
This does seem like a bit of a learning curve, but I think it is worth it, especially once you start considering the complexities of deploying new versions (with possible rollbacks), monitoring failures, and even integrating things like Jenkins and continuous delivery.
For Azure, an easy way to deploy and configure a Mesos or Swarm cluster is by using Azure Container Service (ACS) which does all the hard work of configuring the cluster for you. Find additional info here: https://azure.microsoft.com/en-us/documentation/articles/container-service-intro/
I am running Python scripts on the Pi using another voice-recognition Python script at the moment. I now also want to run these scripts from the internet. From a little bit of research, one way could be to set up a small web server on the Pi, such as lighttpd, and create a database on it. Then I would create another small script which periodically checks a value in the database; this value can be modified over the internet. Depending on the value, I would either use the voice-recognition script or use the other values in the database to run the Python scripts.
My question is: is this method efficient, or is there a simpler way to do this? I am fairly competent at Python, but I am totally new to web servers and databases. However, I do not mind spending time learning how to use them.
Thanks in advance!
One route that I personally chose was to configure the Pi for use as a LAMP stack (Linux, Apache, MySQL, Python). Some great instructions can be found here: http://www.wikihow.com/Make-a-Raspberry-Pi-Web-Server
If this is overkill, have you considered using cron jobs to automate your Python scripts? You could then set up times at which your two scripts would run, and with a little inter-process communication you would have two entities that are aware of each other. http://www.thesitewizard.com/general/set-cron-job.shtml
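For example, crontab entries along these lines (the paths and times are hypothetical) would run the two scripts on a schedule and log their output:

    # Edit with `crontab -e`. Run the voice-recognition script every 5 minutes:
    */5 * * * * /usr/bin/python3 /home/pi/voice_recognition.py >> /home/pi/voice.log 2>&1
    # Run the other script once a day at 07:00:
    0 7 * * * /usr/bin/python3 /home/pi/other_task.py >> /home/pi/other.log 2>&1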
My current project has a policy of 100% code coverage from its unit tests. Our continuous integration service will not allow developers to push code without 100% coverage.
As the project has grown, so has the time to run the full test suite. While developers typically run a subset of tests relevant to the code they are changing, they will usually do one final full run before submitting to CI, and the CI server itself also runs the full test suite.
Unit tests by their nature are highly parallelizable, as they are self-contained and stateless from test to test. They return only two pieces of information: pass/fail and the lines of code covered. A map/reduce solution seems like it would work very well.
Are there any Python testing frameworks that will run tests across a cluster of machines with code coverage and combine the results when finished?
I don't know of any testing framework that will run tests distributed across a group of machines, but nose has support for parallelizing tests on the same machine using multiprocessing.
At minimum, that might be a good place to start for creating a distributed testing framework.
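For example, nose's multiprocess plugin lets you spread a run over several worker processes on one machine (the numbers here are arbitrary):

    nosetests --processes=4 --process-timeout=120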
I don't think there is a framework that exactly matches your needs.
I know py.test has the xdist plugin, which adds distributed test executors. You could build your CI infrastructure on top of it.
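As a rough illustration (the host, user, and package names are placeholders; check the xdist docs for the exact options in your version):

    # Run the suite in 4 local subprocesses:
    py.test -n 4

    # Distribute tests to a remote machine over SSH, rsyncing the package first:
    py.test -d --tx ssh=user@testhost//python=python3 --rsyncdir mypkg mypkg

Combining the coverage data from the workers would then be a separate step; coverage.py can merge data files from separate runs with its combine command.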
Not exactly what you are looking for, but the closest thing I can recall from the Hadoop groups is using JUnit for testing with Hadoop. Here is the mail. As mentioned in the mail, search for the gridunit papers.
Unit testing with Hadoop in a distributed way is very interesting. Any frameworks around this would be very useful, but developing one shouldn't be very difficult. If you're interested, let me know.
I have a Python program that performs several independent and time-consuming processes. The Python code is essentially an automator that calls into several batch files via Popen.
The program currently takes several hours to run, so I'd like to split it up across multiple machines. How can I split the tasks and process them in parallel with Python over an intranet?
There are many Python parallelisation frameworks out there. Just two of the options:
The parallel computing facilities of IPython
The parallelisation framework jug
For remote execution you could use execnet. Do you have to distribute the data too?
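To make the execnet suggestion concrete, here is a minimal sketch that pushes one batch file to each remote host and collects the exit codes (the SSH specs and batch file names are hypothetical):

    import execnet

    def run_batch(channel):
        """Runs on the remote host: receive one command, run it, send back the exit code."""
        import subprocess
        cmd = channel.receive()
        channel.send(subprocess.call(cmd, shell=True))

    # Hypothetical hosts and jobs; each gateway starts a Python interpreter over SSH.
    hosts = ["ssh=user@host1", "ssh=user@host2"]
    commands = ["run_job_a.bat", "run_job_b.bat"]

    channels = []
    for spec, cmd in zip(hosts, commands):
        gw = execnet.makegateway(spec)
        ch = gw.remote_exec(run_batch)
        ch.send(cmd)
        channels.append(ch)

    for ch in channels:
        print("exit code:", ch.receive())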
I might suggest STAF. It's advertised as a software testing framework, yet it allows for distribution of activities across multiple PCs (and multiple platforms). You can run scripts, copy data, and easily communicate between your multiple sessions. Best of all, it's fairly easy to integrate with already existing scripts.
I have a Rails server which will need to run a Python script in the background. I know that I can run it the way I run terminal commands in Ruby, but what is the performance like? Is it better to use a Python framework instead of Rails? Are there better ways (optimization-wise) to run Python scripts on a Rails server?
If you mean that it needs to run periodically, just set it up as a cron job; there are no special performance characteristics to worry about there.
If you mean that it needs to run when pages are requested from your Ruby website, then simply running the script each time won't perform well as it needs to fire up the Python interpreter over and over again.
If the Python script is large but is only called from a relatively small number of page requests, you might be able to get away with this; sometimes it's not worth the time to optimise a slow operation that isn't called often.
If the bulk of your website is based around the functionality of the Python script, then yes, you are probably better off switching to a Python web framework and loading it as a module.
If the Python script isn't very big, then you are probably better off rewriting it in Ruby.
Worst case scenario is that the script is big and used often, but doesn't make up enough of your website to justify switching to Python. In that case, I'd consider wrapping the Python in a daemon that Ruby can talk to in the background.
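A minimal sketch of that last idea: a small Python TCP server that imports the heavy code once and answers requests over a local socket, so Ruby only pays a socket round-trip per call instead of an interpreter start-up (the module and function names are hypothetical):

    import socketserver
    import my_heavy_script  # hypothetical: the existing Python code, imported once at start-up

    class Handler(socketserver.StreamRequestHandler):
        def handle(self):
            request = self.rfile.readline().strip().decode()
            result = my_heavy_script.process(request)  # hypothetical entry point
            self.wfile.write((result + "\n").encode())

    if __name__ == "__main__":
        # Ruby connects to 127.0.0.1:9000 and exchanges one line per request.
        with socketserver.TCPServer(("127.0.0.1", 9000), Handler) as server:
            server.serve_forever()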
You will incur the cost of starting Python each time you run it from Ruby. The cost would be the same in a Python framework, unless you could use the Python script as a library instead.
You could set up a daemon in Rails to execute the Python script.
http://railscasts.com/episodes/129-custom-daemon - Tutorial for setting up daemons in Rails