I am invoking Robot Framework on a folder with a command like the following:
robot --name MyTestSuite --variablefile lib/global_variables.py --variable TARGET_TYPE:FOO --variable IMAGE_TYPE:BAR --prerunmodifier MyCustomModifier.py ./tests
MyCustomModifier.py contains a simple SuiteVisitor class, which includes/excludes tags and does a few other things based on some of the variable values set.
How do I access TARGET_TYPE and IMAGE_TYPE in that class? The method shown here does not work, because I want access to the variables before tests start executing, and therefore I get a RobotNotRunningError with the message Cannot access execution context.
After finding this issue report, I tried downgrading to version 2.9.1, but nothing changed.
None of the public APIs seem to provide this information, but debugging the main code does reveal an alternative way of obtaining it. Note that this example code works with version 3.0.2 but may not work in the future, as these are internal functions subject to change. That said, I do think the approach will remain valid.
As Robot Framework is an application, it obtains the command-line arguments through its main function, run_cli (when running from the command line). Those arguments come from the system itself and can be obtained in any Python script via:
import sys
cli_args = sys.argv[1:]
Robot Framework has a function that interprets the command-line argument list and turns it into a more readable object:
from robot.run import RobotFramework
import sys
options, arguments = RobotFramework().parse_arguments(sys.argv[1:])
The options variable is a dictionary of the parsed options; the values given with --variable end up under its 'variable' key as a list of NAME:VALUE strings, while arguments holds the positional arguments (the test data paths). An example:
options['variable'][1] == 'IMAGE_TYPE:BAR'
This should allow you to access the information you need.
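For example, a prerunmodifier could parse the variables in its constructor, before any execution context exists. A minimal sketch (untested, written against 3.0.2; parse_arguments is an internal API and the key handling is my assumption):
# MyCustomModifier.py
import sys
from robot.api import SuiteVisitor
from robot.run import RobotFramework

class MyCustomModifier(SuiteVisitor):
    def __init__(self):
        # Re-parse the same command line Robot Framework itself received.
        options, arguments = RobotFramework().parse_arguments(sys.argv[1:])
        # --variable values arrive as 'NAME:value' strings.
        variables = dict(v.split(':', 1) for v in options['variable'])
        self.target_type = variables.get('TARGET_TYPE')
        self.image_type = variables.get('IMAGE_TYPE')

    def start_suite(self, suite):
        pass  # adjust tags here based on self.target_type / self.image_type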
I am trying to develop a test framework for our desktop applications which uses click-bot-like functions.
My goal was to enable non-programmers to write small scripts which could work as a test script. So my idea is to structure the test scripts by files like:
tests-folder
| -> 01-first-test.py
| -> 02-second-test.py
| ... etc
| -> fixture.py
And then just execute these scripts in alphabetical order. However, I also wanted to have fixtures which would define functions, classes, and variables and make them available to the different scripts without the scripts having to import the fixture explicitly. If that works, I could also use that approach for two or more directory levels.
I could get it working-ish with some hacking around, but I am not entirely convinced. I have a test_sequence.py which looks like this:
from pathlib import Path
from copy import deepcopy

from my_module.test import Failure

def run_test_sequence(test_dir: str):
    error_occurred = False
    fixture = {
        'some_pre_defined_variable': 'this is available in all scripts and fixtures',
        'directory_name': test_dir,
    }

    # Check if fixture.py exists and load that first
    fixture_file = Path(test_dir) / 'fixture.py'
    if fixture_file.exists():
        with open(fixture_file.absolute(), 'r') as code:
            exec(code.read(), fixture, fixture)

    # Go over all files in test sequence directory and execute them
    for test_file in sorted(Path(test_dir).iterdir()):
        if test_file.name == 'fixture.py':
            continue

        # Make a deepcopy, so scripts cannot influence one another
        fixture_copy = deepcopy(fixture)
        fixture_copy.update({
            'some_other_variable': 'this is available in all scripts but not in fixture',
        })

        try:
            with open(test_file.absolute(), 'r') as code:
                exec(code.read(), fixture_copy, fixture_copy)
        except Failure:
            error_occurred = True

    return error_occurred
This iterates over all files in the directory tests-folder and executes them in order (with fixture.py first). It also makes the local variables, functions, and classes from fixture.py available to each test script.
A test script could then just be arbitrary code that will be executed, and if it raises my custom Failure exception, this will be noted as a failed test.
The whole sequence is started with a script that does:
from my_module.test_sequence import run_test_sequence

if __name__ == '__main__':
    exit(run_test_sequence('tests-folder'))
This mostly works.
What it cannot do, and what leaves me unsatisfied with this approach:
I cannot debug the scripts themselves. Since the code is loaded as a string and then interpreted, breakpoints inside the test scripts are not recognized.
Calling fixture functions behaves weirdly. When I define a function in fixture.py like:
from my_hello_module import print_hello

def printer():
    print_hello()
I receive a message during execution that print_hello is undefined, because the variables/modules/etc. in the scope surrounding printer are lost.
Stacktraces are useless. On failure it shows the stacktrace, but of course it only shows my line which says `exec(...)` and the insides of that function, and none of the code that has been loaded.
I am sure there are other drawbacks that I have not found yet, but these are the most annoying ones.
I also tried to find a solution through __import__ but I couldn't get it to inject my custom locals or globals into the imported script.
Is there a solution that I am too inexperienced to find, or another built-in Python function that actually does what I am trying to do? Or is there no way to achieve this, and should I rather have each test script import the fixture and file/directory names itself? I want those scripts to have as few dependencies and as little pythony code as possible. Ideally they are just:
from my_module.test import *
click(x, y, LEFT)
write('admin')
press('tab')
write('password')
press('enter')
if text_on_screen('Login successful'):
    succeed('Test successful')
else:
    fail('Could not login')
Additional note: I think I had the debugger working when I still used execfile, but that is not available in Python 3 environments.
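For what it's worth, the debugging and stacktrace problems stem from exec seeing only an anonymous string. A minimal sketch (untested; exec_script is a name I made up) that compiles the source with its real file name, so tracebacks point at the script file and debuggers can map breakpoints to it:
from pathlib import Path

def exec_script(test_file: Path, namespace: dict):
    source = test_file.read_text()
    # Compiling with the real file name makes tracebacks reference it.
    code = compile(source, str(test_file.absolute()), 'exec')
    exec(code, namespace, namespace)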
I am developing a Sphinx-based collaborative writing tool. Users access the web application (developed in Python/Flask) to write a book in Sphinx and compile it to PDF.
I have learned that in order to compile Sphinx documentation from within Python I should use:
import sphinx
result = sphinx.build_main(['-c', 'path/to/conf',
                            'path/to/source/', 'path/to/out'])
So far so good.
Now my users want the app to show them their syntax mistakes. But the output (result in the example above) only gives me the exit code.
So, how do I get a list of warnings from the build process?
Perhaps I am being too ambitious, but since Sphinx is a Python tool, I was expecting it to have a nice Pythonic interface. For example, the output of sphinx.build_main could be a very rich object with warnings, line numbers...
On a related note, the arguments to sphinx.build_main look just like a wrapper around the command-line interface.
sphinx.build_main() calls sphinx.cmdline.main(), which in turn creates a sphinx.application.Sphinx object. You could create such an object directly (instead of "making system calls within python"). Use something like this:
import os
from sphinx.application import Sphinx
# Main arguments
srcdir = "/path/to/source"
confdir = srcdir
builddir = os.path.join(srcdir, "_build")
doctreedir = os.path.join(builddir, "doctrees")
builder = "html"
# Write warning messages to a file (instead of stderr)
warning = open("/path/to/warnings.txt", "w")
# Create the Sphinx application object
app = Sphinx(srcdir, confdir, builddir, doctreedir, builder,
             warning=warning)
# Run the build
app.build()
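If you would rather not go through a file, the warning parameter accepts any file-like object, so you can capture the warnings in memory instead (a sketch based on the example above):
from io import StringIO

warning_stream = StringIO()
app = Sphinx(srcdir, confdir, builddir, doctreedir, builder,
             warning=warning_stream)
app.build()
# Each warning is one line, e.g. "path/file.rst:12: WARNING: ..."
warnings = warning_stream.getvalue().splitlines()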
Assuming you used sphinx-quickstart to generate your initial Sphinx documentation set with a Makefile, you can use make to build the docs, which in turn uses the Sphinx tool sphinx-build. You can pass the -w <file> option to sphinx-build to write warnings and errors to a file as well as to stderr.
Note that options passed through the command line override any other options set in the makefile and conf.py.
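For example, a direct invocation could look like this (paths are placeholders):
sphinx-build -b html -w warnings.txt path/to/source path/to/out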
So I have discovered the Python extension for SPSS, and everything works fine. I have created some scripts, included them in the extensions folder, and they work fine. However, I have now created a couple of scripts that require arguments; I thought I could just follow the same method, but I guess not.
def Run(args):
    import spss

    def testing_p(variables):
        all_variables = [spss.GetVariableName(i) for i in range(spss.GetVariableCount())]
        variable_nr = [all_variables.index(i) for i in variables]
        print all_variables
        print variable_nr
With the following .xml-file:
<Command xmlns="http://xml.spss.com/extension" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" Name="testing_p" Language="Python">
</Command>
However, this keeps throwing the following error when calling testing_p(['my_var', 'my_var2']):
Warnings
This command should specify a valid subcommand at the beginning.
Execution of this command stops.
I cannot wrap my head around this, because everything works fine when it is not put in the extensions folder and I am only doing:
BEGIN PROGRAM.
import spss
def testing_p(variables):
    all_variables = [spss.GetVariableName(i) for i in range(spss.GetVariableCount())]
    variable_nr = [all_variables.index(i) for i in variables]
    print all_variables
    print variable_nr
END PROGRAM.
For an extension, which can be written in Python, R, or Java, you need to create a syntax specification containing the command name, any subcommands, and the arguments and argument types you want. Here is a picture of the start of one (SPSSINC_TURF, which is installed with Statistics).
This will guide the Statistics parser in checking the user input. It also then calls the Run function with a complicated structure containing the user input. You can use the functions in the extensions module to map that to your Python variables and do further validation. Here is a picture of the start of the Run function for SPSSINC TURF.
Finally, if the syntax is valid, your Run function calls the worker function to do something useful, mapping all the parameters to the specified arguments by calling
processcmd(oobj, args, superturf, vardict=spssaux.VariableDict())
which was imported from extensions.py.
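Applied to the testing_p example above, a Run function following that pattern might look roughly like this (a sketch, untested; the Template arguments and the args unwrapping are assumptions based on the question's function and the standard extension module pattern):
from extension import Template, Syntax, processcmd
import spss

def testing_p(variables):
    all_variables = [spss.GetVariableName(i) for i in range(spss.GetVariableCount())]
    variable_nr = [all_variables.index(i) for i in variables]
    print(all_variables)
    print(variable_nr)

def Run(args):
    # Declare a VARIABLES keyword that accepts a list of existing variables.
    oobj = Syntax([
        Template("VARIABLES", subc="", ktype="existingvarlist",
                 var="variables", islist=True)])
    # Unwrap the outer structure Statistics passes in and dispatch.
    args = args[list(args.keys())[0]]
    processcmd(oobj, args, testing_p)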
Look at the doc for extensions in the help system, and look at some of the extensions installed with Statistics for examples.
Finally, here is a slide from one of my presentations summarizing the flow from user input to results.
Looking at the documentation for SBWatchpoint at http://lldb.llvm.org/python_reference/index.html, I do not see a method for assigning a Python callback function for when a watchpoint is triggered.
Is there a way to do this with the Python API?
There is a watchpoint command add command that supports doing that:
watchpoint command add [-e <boolean>] [-s <none>] [-F <python-function>] <watchpt-id>
If you have an SBWatchpoint, you can query for its ID and then craft an appropriate command line to pass down to SBDebugger.HandleCommand. You will need your Python module to contain the script function you want executed, and to pass it by qualified name on the command line. For instance, if you have:
# myfile.py
def callback(wp_no):
    # stuff
    # more stuff

then

mywatchpoint = ...
debugger.HandleCommand("watchpoint command add -F myfile.callback %s"
                       % mywatchpoint.GetID())

would be the way to tell LLDB about your callback.
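Putting the pieces together, a rough sketch (untested; value, debugger, and myfile.callback are assumed to already exist) might be:
import lldb

error = lldb.SBError()
# Watch the value for writes (resolve location, don't watch reads, watch writes).
watchpoint = value.Watch(True, False, True, error)
if error.Success():
    debugger.HandleCommand(
        "watchpoint command add -F myfile.callback %d" % watchpoint.GetID())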
Currently, there is no way to pass Python functions directly to LLDB API calls.
There is no reason why that is impossible, but it is a little tricky to get right in a world where multiple scripting languages could coexist, and given the lack of a viable alternative strategy, there's not much pressure to get it working.
Some background (not mandatory, but might be nice to know): I am writing a Python command-line module which is a wrapper around latexdiff. It basically replaces all \cite{ref1, ref2, ...} commands in LaTeX files with written-out and properly formatted references before passing the files to latexdiff, so that latexdiff will properly mark changes to references in the text (otherwise, it treats the whole \cite{...} command as a single "word"). All the code is currently in a single file which can be run with python -m latexdiff-cite, and I have not yet decided how to package or distribute it. To make the script useful for anybody else, the citation formatting needs to be configurable. I have implemented an optional command-line argument -c CONFIGFILE to allow the user to point to their own JSON config file (a default file resides in the module folder and is loaded if the argument is not used).
Current implementation: My single-file command-line Python module currently parses command-line arguments in if __name__ == '__main__', and loads the config file (specified by the user in -c CONFIGFILE) here before running the main function of the program. The config variable is thus available in the entire module and all is well. However, I'm considering publishing to PyPI by following this guide which seems to require me to put the command-line parsing in a main() function, which means the config variable will not be available to the other functions unless passed down as arguments to where it's needed. This "passing down by arguments" method seems a little cluttered to me.
Question: Is there a more Pythonic way to set some configuration globals in a module, or otherwise accomplish what I'm trying to do? (I don't want to rely on third-party modules.) Am I perhaps completely off the tracks in some fundamental way?
One way to do it is to have the configurations defined in a class or a simple dict:
import json

class Config(object):
    setting1 = "default_value"
    setting2 = "default_value"

    @staticmethod
    def load_config(json_file):
        """Load settings from a JSON config file."""
        with open(json_file) as f:
            config = json.load(f)
        for k, v in config.items():
            setattr(Config, k, v)
Then your application can access the settings via this class: Config.setting1 ...
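Usage would then look something like this (the file name is illustrative):
Config.load_config('settings.json')
print(Config.setting1)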