I am trying to write a custom step that will run a few commands and return a pass or fail based on certain conditions.
So far I have been able to subclass ShellCommand, so I can execute a shell command on the slave. The next step is to write something that executes not just one shell command but several, analyzes their results, and acts accordingly.
I have not been successful in this endeavor; subclassing ShellCommand only lets me run a single command.
I have found that ShellCommand uses buildstep.RemoteCommand and RemoteShellCommand, but my attempts to subclass buildstep.BuildStep have been unsuccessful.
The objective is to run a finite number of Python or shell commands (without writing a shell script and calling it from Python; I was already able to accomplish that), analyze the results of these operations, and from that decide whether the step passes or fails and what gets logged.
So far this is what I have:
class myclass(buildstep.BuildStep):
    def __init__(self, **kwargs):
        buildstep.BuildStep.__init__(self, **kwargs)

    def start(self):
        cmd = buildstep.RemoteShellCommand({'command': "ls -la"})
        self.setupEnvironment(cmd)
        d = self.runCommand(cmd)
        return d
This will run, but I get an error on the RemoteShellCommand line, saying:
exceptions.TypeError: __init__() takes at least 3 arguments (2 given)
I've tried with both RemoteCommand and RemoteShellCommand, and the result is the same.
Checking __init__ for both, I can't see 3 arguments, just the command, so I am not really sure what is wrong. I even tried to use **kwargs, but I get an error saying that kwargs is not defined (a blog post had an example using kwargs, so I tried it, but it didn't work either).
This is the original documentation for RemoteShellCommand:
[Original Buildbot API documentation][1]
Do you know where I could find an example that actually shows how to accomplish this, or at least how RemoteCommand/RemoteShellCommand is meant to be used? The original docs are a mess, and Google returns only a few results that are even more obscure than the docs themselves.
Any suggestion is welcome; I've been running in circles for the past three days and have no idea where to look next.
One way to have a ShellCommand execute multiple commands is to pass it a script instead of a single command, as this example shows:
class CoverageReport(ShellCommand):
    def __init__(self):
        self.coverage_path = '/var/www/coverage'
        command = '''rm -rf htmlcov
coverage html
cp htmlcov/* %s/''' % self.coverage_path
        description = 'Generating coverage report'
        workdir = 'build/python'
        self.url = "https://XXXXXXXX/coverage"
        ShellCommand.__init__(self, command=command, description=description, workdir=workdir)
This ShellCommand does three things:
Remove the old report
Generate a new report
Copy the report to the www folder
The return value will be that of the equivalent bash script; you can play with the shell's exit status to make the step return whatever you want.
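If you really do want a single custom step that runs several commands and analyzes each result itself, here is a minimal, untested sketch against the 0.8-era buildbot API used in the question. Note that RemoteShellCommand takes a workdir in addition to the command, which is what the "takes at least 3 arguments (2 given)" error is hinting at; the workdir and command list below are placeholders:

from twisted.internet import defer
from buildbot.process import buildstep
from buildbot.status.results import SUCCESS, FAILURE

class MultiCommandStep(buildstep.BuildStep):
    def start(self):
        d = self.runCommands()
        d.addErrback(self.failed)

    @defer.inlineCallbacks
    def runCommands(self):
        # Run each command in turn and decide pass/fail from its exit status.
        for command in [["ls", "-la"], ["python", "--version"]]:
            cmd = buildstep.RemoteShellCommand(workdir="build", command=command)
            yield self.runCommand(cmd)
            if cmd.rc != 0:
                self.finished(FAILURE)
                return
        self.finished(SUCCESS)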
The other option is to add multiple steps to your build.
Related
So I have discovered the Python extension mechanism for SPSS, and everything works fine: I have created some scripts, included them in the extensions folder, and they work. However, now I have created a couple of scripts that require arguments; I thought I could just follow the same method, but I guess not.
def Run(args):
    import spss
    def testing_p(variables):
        all_variables = [spss.GetVariableName(i) for i in range(spss.GetVariableCount())]
        variable_nr = [all_variables.index(i) for i in variables]
        print all_variables
        print variable_nr
With the following .xml-file:
<Command xmlns="http://xml.spss.com/extension" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" Name="testing_p" Language="Python">
</Command>
However, this keeps throwing the following error when calling testing_p(['my_var', 'my_var2']):
Warnings
This command should specify a valid subcommand at the beginning.
Execution of this command stops.
I cannot wrap my head around this, because everything works fine when it is not put in the extensions folder and I only do:
BEGIN PROGRAM.
import spss
def testing_p(variables):
    all_variables = [spss.GetVariableName(i) for i in range(spss.GetVariableCount())]
    variable_nr = [all_variables.index(i) for i in variables]
    print all_variables
    print variable_nr
END PROGRAM.
For an extension, which can be written in Python, R, or Java, you need to create a syntax specification containing the command name, any subcommands, and the arguments and argument types you want. Here is a picture of the start of one (SPSSINC_TURF, which is installed with Statistics).
This will guide the Statistics parser in checking the user input. It also then calls the Run function with a complicated structure containing the user input. You can use the functions in the extension module to map that to your Python variables and do further validation. Here is a picture of the start of the Run function for SPSSINC TURF.
Finally, if the syntax is valid, your Run function calls the worker function to do something useful, mapping all the parameters to the specified arguments by calling
processcmd(oobj, args, superturf, vardict=spssaux.VariableDict())
which was imported from extension.py.
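For the testing_p command in the question, a minimal Run function might look roughly like this. It is a sketch assuming the extension module that ships with Statistics; the Template arguments (keyword name, ktype, and so on) are assumptions you would adapt to the keywords declared in your XML specification:

from extension import Template, Syntax, processcmd
import spss

def testing_p(variables):
    all_variables = [spss.GetVariableName(i) for i in range(spss.GetVariableCount())]
    variable_nr = [all_variables.index(i) for i in variables]
    print all_variables
    print variable_nr

def Run(args):
    # Describe the expected syntax and map the parsed input onto testing_p.
    oobj = Syntax([
        Template("VARIABLES", subc="", ktype="existingvarlist",
                 var="variables", islist=True)])
    args = args[args.keys()[0]]  # unwrap the outer command-name dictionary
    processcmd(oobj, args, testing_p)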
Look at the doc for extensions in the help system, and look at some of the extensions installed with Statistics for examples.
Finally, here is a slide from one of my presentations summarizing the flow from user input to results.
For example, I have file1.robot and file2.robot, and each has ${var} as a variable. Can I pass 2 different values to this same ${var} on the command line? Something like pabot -v var:one:two file1.robot file2.robot, where -v var:one:two would follow the order of the robot files; not by name, but by the order in which they were introduced on the command line?
This solution is not 100% what you've asked for, but maybe you can make it work.
Pabot's readme file mentions something about a shared set of variables and acquiring a set for each running process. The documentation was a bit unclear to me, but if you try the following example, you'll see for yourself. It's basically a pool of variables: each process can get a set of variables from it, and when it's done with the set, it can return it to the pool.
Create your value set, valueset.dat:
[Set1]
USERNAME=user1
PASSWORD=password1
[Set2]
USERNAME=user2
PASSWORD=password2
Create suite1.robot and suite2.robot. I've created 2 suites that are exactly the same; I just wanted to try running 2 suites in parallel.
*** Settings ***
Library           pabot.PabotLib

*** Test Cases ***
Foobar
    ${valuesetname}=    Acquire Value Set
    Log    ${valuesetname}
    ${username}=    Get Value From Set    username
    Log    ${username}
    # Release Value Set
And then run the command pabot --pabotlib --resourcefile valueset.dat tests. If you check the html report, you'll see that one suite used Set1 and the other used Set2.
Hope this helps.
Cheers!
Another way is to use multiple argument files, one containing the first value for ${var} and the other containing the second.
This will execute the same test suite for both argument files.
pabot --argumentfile1 varone.args --argumentfile2 vartwo.args file.robot
=>
file.robot executed with varone.args
file.robot executed with vartwo.args
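Each argument file is just a list of Robot Framework command-line options, one per line. For example (file names and values are just for illustration), varone.args could contain:
--variable var:one
and vartwo.args:
--variable var:two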
Looking at the documentation for SBWatchpoint at http://lldb.llvm.org/python_reference/index.html, I do not see a method for assigning a python callback function for when a watchpoint is triggered.
Is there a way to do this with the Python API?
There is a
watchpoint command add
command that supports doing that
watchpoint command add [-e <boolean>] [-s <none>] [-F <python-function>] <watchpt-id>
If you have an SBWatchpoint, you can query for its ID, and then craft an appropriate command line to pass down to SBDebugger.HandleCommand
You will need your Python module to contain the script function you want executed, and pass it by qualified name on the command line. For instance, if you have
# myfile.py
def callback(wp_no):
    # stuff
    # more stuff
    pass

mywatchpoint = ...
debugger.HandleCommand("watchpoint command add -F myfile.callback %s" % mywatchpoint.GetID())
would be the way to tell LLDB about your callback.
Currently, there is no way to pass Python functions directly to LLDB API calls.
There is no reason why that is impossible, but it is a little tricky to get right in a world where multiple scripting languages could coexist, and given the lack of a viable alternative strategy, there's not much pressure to get it working.
I've tried really hard to find this, but no luck; I'm sure it's possible, I just can't find an example or figure out the syntax for myself.
I want to use fabric as a library.
I want 2 sets of hosts.
I want to reuse the same functions for these different sets of hosts (and so cannot use the @roles decorator on said functions).
So I think I need:
from fabric.api import execute, run, env

NODES = ['192.168.56.141', '192.168.56.152']
env.roledefs = {'head': ['localhost'], 'nodes': NODES}
env.user('r4space')

def testfunc():
    run('touch ./fred.txt')

execute(testfunc(), <somehow specify 'head' or 'nodes' as my hosts list and user>)
I've tried a whole range of syntax (hosts=NODES, -H NODES, user='r4space', and much more), but I either get a syntax error or Fabric's "No hosts found. Please specify (single)" prompt.
If it makes a difference, ultimately my function defs would be in a separate file that I import into main where hosts etc are defined and execute is called.
Thanks for any help!
You have some errors in your code.
env.user('r4space') is wrong. Should be env.user = 'r4space'
When you use execute, the first parameter should be a callable. You have passed the return value of calling testfunc instead.
I think if you fix the last line, it will work:
execute(testfunc, hosts=NODES)
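If you also want to select a host set by role name rather than by explicit list, Fabric 1.x's execute honors a roles keyword as well. A small sketch using the roledefs from the question:

from fabric.api import execute, run, env

env.roledefs = {'head': ['localhost'], 'nodes': ['192.168.56.141', '192.168.56.152']}
env.user = 'r4space'

def testfunc():
    run('touch ./fred.txt')

# Pass an explicit host list...
execute(testfunc, hosts=env.roledefs['nodes'])
# ...or select a role by name
execute(testfunc, roles=['head'])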
When generating Python wrappers with SWIG, the wrapper classes in the generated Python file do not have an explicit self parameter; for example, see below:
class PySwigIterator(_object):
    def value(*args): return _spatiotemporalnmf.PySwigIterator_value(*args)
    def incr(*args): return _spatiotemporalnmf.PySwigIterator_incr(*args)
    def decr(*args): return _spatiotemporalnmf.PySwigIterator_decr(*args)
    def distance(*args): return _spatiotemporalnmf.PySwigIterator_distance(*args)
I am developing with the Eclipse plugin PyDev. PyDev always shows an error when it detects a method without an explicit self parameter. I am aware of two ways to get rid of the errors: first, disable error checking for the whole project in the PyDev preferences; second, add a #@NoSelf comment to every line with an error. I don't want to use the first one, because I still want error warnings for my non-SWIG-generated files. Obviously the second one is also not very good, because I would have to do it by hand, and every time I regenerate the file, all the #@NoSelf comments would be gone.
My Question now is, is there a better way to achieve this?
Thanks
According to the documentation, any file with the comment
#@PydevCodeAnalysisIgnore
inside will not be analyzed.
Therefore, you just need to add it to all SWIG-generated files, and you should be OK. It is just one place to change, and you could even write a very small processor that adds it automatically.
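A minimal sketch of such a processor (the script and module names are hypothetical; run it on the generated file after each swig invocation, e.g. python add_ignore.py spatiotemporalnmf.py):

import sys

MARKER = "#@PydevCodeAnalysisIgnore\n"

def add_ignore(path):
    # Prepend the PyDev ignore comment unless it is already present.
    with open(path) as f:
        content = f.read()
    if not content.startswith(MARKER):
        with open(path, 'w') as f:
            f.write(MARKER + content)

if __name__ == '__main__':
    add_ignore(sys.argv[1])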