I am attempting to use the API to build and run test suites, and I've noticed that the log.html produced when using the API is subtly different from the one produced by the shell invocation.
The log when using the command-line invocation (screenshot):
The log when run through a minimal Python script (screenshot):
The minimal python script:
import robot.api
import robot.result
from robot.conf.settings import RobotSettings

options = {
    "output": "./results.xml",
    "log": "./log.html",
    "report": "./report.html",
    "loglevel": "TRACE"
}

settings = RobotSettings(options)
suite = robot.api.TestSuiteBuilder().build(".")
result = suite.run(settings=settings)

result_writer = robot.api.ResultWriter(result)
result_writer.write_results(settings=settings.get_rebot_settings())
As can be seen, the log is missing a lot of keyword detail. How can I get this back?
edit: The minimal file after changing it to the correct method:
import robot.api
import robot.result
from robot.conf.settings import RobotSettings

options = {
    "output": "./results.xml",
    "log": "./log.html",
    "report": "./report.html",
    "loglevel": "TRACE",
    "rpa": False
}

settings = RobotSettings(options)
suite = robot.api.TestSuiteBuilder().build(".")
result = suite.run(settings=settings)

result_writer = robot.api.ResultWriter("./results.xml")
result_writer.write_results(settings=settings.get_rebot_settings())
Are you sure you don't have two different Robot Framework packages active on your system, one externally on your path and another as a package within your venv?
Try
robot --version
and check if the version in your cmd line is the same as the one running the script.
You should also try to upgrade to the latest version, which typically solves such errors.
I have also found a similar issue on this thread, which states:
The problem is that the result object you get from suite.run(settings) doesn't contain all information about test execution. The reason for this behavior is saving memory during test execution. All information about the execution is in the generated output.xml file, and what you need to do is pass a path to it to the ResultWriter class instead of the result object. This ought to be documented somewhere in the API docs but I'm not sure about it.
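In other words, a minimal sketch of the fix, using the file names from the script above and rebot-style keyword options in place of a settings object (that variant is my assumption, not the thread's exact code):

from robot.api import ResultWriter

# Build log.html/report.html from the output.xml on disk, which contains the
# full keyword-level detail that the in-memory result object drops.
ResultWriter("./results.xml").write_results(
    report="./report.html",
    log="./log.html",
    loglevel="TRACE",
)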
Related
I'm trying to learn how to use variables from Jenkins in Python scripts. I've already learned that I need to call the variables, but I'm not sure how to implement them in the case of using os.path.join().
I'm not a developer; I'm a technical writer. This code was written by somebody else. I'm just trying to adapt the Jenkins scripts so they are parameterized so we don't have to modify the Python scripts for every release.
I'm using inline Jenkins Python scripts inside a Jenkins job. The Jenkins string parameters are "BranchID" and "BranchIDShort". I've looked through many questions that talk about how you have to establish the variables in the Python script, but in the case of os.path.join(), I'm not sure what to do.
Here is the original code. I added the part where we establish the variables from the Jenkins parameters, but I don't know how to use them in the os.path.join() function.
# Delete previous builds.
import os
import shutil
BranchID = os.getenv("BranchID")
BranchIDshort = os.getenv("BranchIDshort")
print "Delete any output from a previous build."
if os.path.exists(os.path.join("C:\\Doc192CS", "Output")):
    shutil.rmtree(os.path.join("C:\\Doc192CS", "Output"))
I expect output like: c:\Doc192CS\Output
I am afraid that if I do the following code:
if os.path.exists(os.path.join("C:\\Doc",BranchIDshort,"CS", "Output")):
    shutil.rmtree(os.path.join("C:\\Doc",BranchIDshort,"CS", "Output"))
I'll get: c:\Doc\192\CS\Output.
Is there a way to use the BranchIDshort variable in this context to get the output c:\Doc192CS\Output?
User @Adonis gave the correct solution as a comment. Here is what he said:
Indeed you're right. What you would want to do is rather:
os.path.exists(os.path.join("C:\\","Doc{}CS".format(BranchIDshort),"Output"))
(in short use a format string for the 2nd argument)
So the complete corrected code is:
import os
import shutil
BranchID = os.getenv("BranchID")
BranchIDshort = os.getenv("BranchIDshort")
print "Delete any output from a previous build."
if os.path.exists(os.path.join("C:\\Doc{}CS".format(BranchIDshort), "Output")):
    shutil.rmtree(os.path.join("C:\\Doc{}CS".format(BranchIDshort), "Output"))
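One optional refinement, which is purely my own suggestion rather than part of @Adonis's answer: fail loudly if the Jenkins parameter is missing, so you don't silently build a path like C:\DocNoneCS:

import os
import shutil

BranchIDshort = os.getenv("BranchIDshort")
if BranchIDshort is None:
    # Assumption: you would rather abort than delete the wrong directory.
    raise SystemExit("BranchIDshort environment variable is not set")

target = os.path.join("C:\\Doc{}CS".format(BranchIDshort), "Output")
print "Delete any output from a previous build."
if os.path.exists(target):
    shutil.rmtree(target)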
Thank you, @Adonis!
I'm new to Python testing, so don't hesitate to point out anything obvious.
Basically I want to do some RESTful tests using Python, and found the httpretty and sure libraries, which look really nice.
I have a python file containing:
#!/usr/bin/python
from sure import expect
import requests, httpretty

@httpretty.activate
def RestTest():
    httpretty.register_uri(httpretty.GET, "http://localhost:8090/test.json",
                           body='{"status": "ok"}',
                           content_type="application/json")

    response = requests.get("http://localhost:8090/test.json")

    expect(response.json()).to.equal({"status": "ok"})
This is basically the same as the example code provided at https://github.com/gabrielfalcao/HTTPretty.
My question is: how do I simply run this test to see it either pass or fail? I tried just executing it using ./pythonFile but that doesn't work.
If your test is implemented as a Python function, then of course simply trying to execute the file isn't going to run the test: nothing in that file actually calls RestTest.
You need some sort of test framework that will call your tests and collate the results.
One such solution is python-nose, which will look for methods named test_* and run them. So if you were to rename RestTest to test_rest, you could run:
$ nosetests myfile.py
.
----------------------------------------------------------------------
Ran 1 test in 0.012s
OK
The nosetests command has a variety of options that control which tests are run, how errors are handled and reported, and more.
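For example, two commonly used flags (assuming nose is installed):
$ nosetests -v myfile.py     # verbose: print each test name as it runs
$ nosetests --pdb myfile.py  # drop into the debugger when a test errors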
Python includes similar functionality in the standard unittest module (the newer features from Python 2.7 are also available for earlier versions as a backport called unittest2). You could modify your code to take advantage of unittest like this:
#!/usr/bin/python
from sure import expect
import requests, httpretty
import unittest

class RestTest(unittest.TestCase):
    @httpretty.activate
    def test_rest(self):
        httpretty.register_uri(httpretty.GET, "http://localhost:8090/test.json",
                               body='{"status": "ok"}',
                               content_type="application/json")

        response = requests.get("http://localhost:8090/test.json")
        expect(response.json()).to.equal({"status": "ok"})

if __name__ == '__main__':
    unittest.main()
Running your file would now provide output similar to what we saw with nosetests:
$ python myfile.py
.
----------------------------------------------------------------------
Ran 1 test in 0.012s
OK
Have you tried calling your method? Or does the decorator mean you don't have to call it explicitly?
If I call your method, it seems to work. If I change the value on one side of the expect, it properly complains about the values not matching.
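For completeness, a minimal sketch of running the original function with no test framework at all (this assumes the decorator line is @httpretty.activate, so the fake HTTP layer is active during the call):

if __name__ == "__main__":
    RestTest()                   # sure raises an AssertionError if the expectation fails
    print("RestTest passed")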
I am new to JMeter. My HTTP Request sampler call looks like this:
Path= /image/**image_id**/list/
Header = "Key" : "Key_Value"
The key value is generated by calling a Python script, which uses the image_id to generate a unique key.
Before each sampler I want to generate the key using the Python script and pass it as a header to the next HTTP Request sampler.
I know I have to use some kind of PreProcessor to do that. Can anyone help me do this using a PreProcessor in JMeter?
I believe that Beanshell PreProcessor is what you're looking for.
Example Beanshell code will look as follows:
import java.io.BufferedReader;
import java.io.InputStreamReader;
Runtime r = Runtime.getRuntime();
Process p = r.exec("/usr/bin/python /path/to/your/script.py");
p.waitFor();
BufferedReader b = new BufferedReader(new InputStreamReader(p.getInputStream()));
String line = "";
StringBuilder response = new StringBuilder();
while ((line = b.readLine()) != null) {
    response.append(line);
}
b.close();
vars.put("ID",response.toString());
The code above will execute the Python script and put its output into the ID variable.
You will be able to refer to it in your HTTP Request as
/image/${ID}/list/
See How to use BeanShell: JMeter's favorite built-in component guide for more information on Beanshell scripting in Apache JMeter and a kind of Beanshell cookbook.
You can also put your request under a Transaction Controller to exclude the PreProcessor execution time from the load report.
A possible solution posted by Eugene Kazakov here:
The JSR223 sampler gives you a good way to write and execute some code: just put jython.jar into the /lib directory, choose jython in the "Language" pop-up menu, and write your code in this sampler.
Sadly there is a bug in Jython, but there are some suggestions on the page.
More here.
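To illustrate the idea, here is a minimal Jython sketch for a JSR223 PreProcessor; the variable name image_id and the use of hashlib are illustrative assumptions, so substitute your real key-generation logic:

# 'vars' is the JMeterVariables object JMeter exposes to JSR223 scripts.
import hashlib

image_id = str(vars.get("image_id"))         # assumed to be set earlier in the test plan
key = hashlib.sha256(image_id).hexdigest()   # stand-in for your real key generation
vars.put("Key", key)                         # usable later as ${Key} in a Header Manager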
You can use a BSF PreProcessor.
First download the Jython library and save it to your JMeter lib directory.
On your HTTP sampler add a BSF PreProcessor, choose Jython as the language, and perform whatever magic you need to obtain the id; as an example I used this one:
import random
randImageString = ""
for i in range(16):
    randImageString = randImageString + chr(random.randint(ord('A'), ord('Z')))
vars.put("randimage", randImageString)
Note the vars.put("randimage", randImageString), which makes the variable available to JMeter later.
Now on your test you can use ${randimage} when you need it:
Now every request will be different, changing with the value the Python script puts into randimage.
I have a simple app which requires a many-to-many relationship to be configured as part of its set-up. For example, the app requires a list of repository URLs, a list of users and for each user, a subset of the repository URLs.
I first thought of using a config.py file similar to the following:
repositories = {
    'repo1': 'http://svn.example.com/repo1/',
    'repo2': 'http://svn.example.com/repo2/',
    'repo3': 'http://svn.example.com/repo3/',
}

user_repository_mapping = {
    'person_A': ['repo1', 'repo3'],
    'person_B': ['repo2'],
    'person_C': ['repo1', 'repo2']
}
which I could import. But this is quite messy, as the config file lives outside my Python path, and I would rather use a standard configuration approach such as INI files or YAML.
Is there an elegant way of configuring a relationship such as this without importing a Python module directly?
I would store the config in JSON format. For example:
cfg = """
{
"repositories": {
"repo1": "http://svn.example.com/repo1/",
"repo2": "http://svn.example.com/repo2/",
"repo3": "http://svn.example.com/repo3/"
},
"user_repository_mapping": {
"person_A": ["repo1", "repo3"],
"person_B": ["repo2"],
"person_C": ["repo1", "repo2"]
}
}
"""
import simplejson as json
config = json.loads(cfg)
person = "person_A"
repos = [config['repositories'][r] for r in config['user_repository_mapping'][person]]
print repos
If you like the idea of representing structure by indentation (like in Python), then YAML will be perfect for you. If you don't want to rely on whitespace and prefer explicit syntax, then it's better to go with JSON. Both are easy to understand and popular, which means there are Python libraries for them.
An additional advantage is that, in contrast to using standard Python code, you can be sure that your configuration file contains only data and no arbitrary code that will get executed.
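For comparison, a hedged sketch of the same data loaded from YAML (this assumes a third-party parser such as PyYAML is installed):

import yaml  # third-party: PyYAML

cfg = """
repositories:
  repo1: http://svn.example.com/repo1/
  repo2: http://svn.example.com/repo2/
  repo3: http://svn.example.com/repo3/
user_repository_mapping:
  person_A: [repo1, repo3]
  person_B: [repo2]
  person_C: [repo1, repo2]
"""

config = yaml.safe_load(cfg)
print config['user_repository_mapping']['person_A']   # ['repo1', 'repo3']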
The tactic I use is to put the whole application in a class, and then instead of having an importable config file, allow the user to pass in configuration to the constructor. Or, in more complicated cases they could even subclass the application class to add members or change behaviours. Although this does require a little knowledge of Python syntax in order to configure the app, it's not really that difficult, and much more flexible than the ini/markup config file approach.
So for your example you could have an invoke script outside the pythonpath looking like:
#!/usr/bin/env python
import someapplication

class MySomeApplication(someapplication.Application):
    repositories = {
        'repo1': 'http://svn.example.com/repo1/',
        'repo2': 'http://svn.example.com/repo2/',
        'repo3': 'http://svn.example.com/repo3/',
    }

    user_repository_mapping = {
        'person_A': ['repo1', 'repo3'],
        'person_B': ['repo2'],
        'person_C': ['repo1', 'repo2']
    }

MySomeApplication().run()
Then, to have a second configuration that they can swap in or even run at the same time, you simply copy the invoke script and change the settings in it.
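To make the pattern concrete, here is a sketch of what a someapplication.Application base class could look like; the class and attribute names simply mirror the invoke script above and are otherwise an assumption:

# Hypothetical base class illustrating the pattern; replace the body of run()
# with your real application logic.
class Application(object):
    repositories = {}
    user_repository_mapping = {}

    def run(self):
        for user, repo_names in self.user_repository_mapping.items():
            urls = [self.repositories[name] for name in repo_names]
            print "%s -> %s" % (user, urls)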
I've written a setup.py script for py2exe, generated an executable for my python GUI application and I have a whole bunch of files in the dist directory, including the app, w9xopen.exe and MSVCR71.dll. When I try to run the application, I get an error message that just says "see the logfile for details". The only problem is, the log file is empty.
The closest thing to an error I've seen is "The following modules appear to be missing", but I'm not using any of those modules as far as I know (especially since they seem to be for databases I'm not using), and digging around on Google suggests that these are relatively benign warnings.
I've written and packaged a console application as well as a wxPython one with py2exe, and both applications compiled and ran successfully. I am using a new Python toolkit called dabo, which in turn makes use of wxPython modules, so I can't figure out what I'm doing wrong. Where do I start investigating the problem, since the log file obviously hasn't been very useful?
Edit 1:
The Python version is 2.5 and py2exe is 0.6.8. There were no significant build errors. The only one was the bit about "The following modules appear to be missing...", which were non-critical errors, since the packages listed were ones I was definitely not using and shouldn't stop the execution of the app either. Running the executable produced a logfile which was completely empty. Previously it had an error about locales which I've since fixed, but clearly something is wrong, as the executable wasn't running. The setup.py file is based quite heavily on the original setup.py generated by running their "app wizard" and looking at the example that Ed Leafe and some others posted. Yes, I have a log file and it's not printing anything for me to use, which is why I'm asking if there's any other troubleshooting avenue I've missed which will help me find out what's going on.
I have even written a bare-bones test application which simply produces a bare-bones GUI: an empty frame with some default menu options. The code I wrote is only 3 lines; the rest is in the 3rd-party toolkit. Again, that compiled into an exe (as did my original app) but simply did not run. There was no error output in the runtime log file either.
Edit 2:
It turns out that switching from "windows" to "console" for initial debugging purposes was insightful. I've now got a basic running test app and on to compiling the real app!
The test app:
import dabo
app = dabo.dApp()
app.start()
The setup.py for the test app:
import os
import sys
import glob
from distutils.core import setup
import py2exe
import dabo.icons

daboDir = os.path.split(dabo.__file__)[0]

# Find the location of the dabo icons:
iconDir = os.path.split(dabo.icons.__file__)[0]
iconSubDirs = []

def getIconSubDir(arg, dirname, fnames):
    if ".svn" not in dirname and dirname[-1] != "\\":
        icons = glob.glob(os.path.join(dirname, "*.png"))
        if icons:
            subdir = (os.path.join("resources", dirname[len(arg)+1:]), icons)
            iconSubDirs.append(subdir)

os.path.walk(iconDir, getIconSubDir, iconDir)

# locales:
localeDir = "%s%slocale" % (daboDir, os.sep)
locales = []

def getLocales(arg, dirname, fnames):
    if ".svn" not in dirname and dirname[-1] != "\\":
        mo_files = tuple(glob.glob(os.path.join(dirname, "*.mo")))
        if mo_files:
            subdir = os.path.join("dabo.locale", dirname[len(arg)+1:])
            locales.append((subdir, mo_files))

os.path.walk(localeDir, getLocales, localeDir)

data_files = [("resources", glob.glob(os.path.join(iconDir, "*.ico"))),
              ("resources", glob.glob("resources/*"))]
data_files.extend(iconSubDirs)
data_files.extend(locales)

setup(name="basicApp",
      version='0.01',
      description="Test Dabo Application",
      options={"py2exe": {
          "compressed": 1, "optimize": 2, "bundle_files": 1,
          "excludes": ["Tkconstants", "Tkinter", "tcl",
                       "_imagingtk", "PIL._imagingtk",
                       "ImageTk", "PIL.ImageTk", "FixTk", "kinterbasdb",
                       "MySQLdb", 'Numeric', 'OpenGL.GL', 'OpenGL.GLUT',
                       'dbGadfly', 'email.Generator',
                       'email.Iterators', 'email.Utils', 'kinterbasdb',
                       'numarray', 'pymssql', 'pysqlite2', 'wx.BitmapFromImage'],
          "includes": ["encodings", "locale", "wx.gizmos", "wx.lib.calendar"]}},
      zipfile=None,
      windows=[{'script': 'basicApp.py'}],
      data_files=data_files
      )
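For reference, the switch from "windows" to "console" mentioned in Edit 2 amounts to swapping one keyword argument in the setup() call; a minimal sketch of a debugging variant (options trimmed down, so treat it as an illustration rather than the exact build script):

from distutils.core import setup
import py2exe

# Build a console exe while debugging so tracebacks appear in the console
# instead of disappearing into the (empty) py2exe logfile.
setup(name="basicApp",
      console=[{'script': 'basicApp.py'}],   # instead of windows=[{'script': 'basicApp.py'}]
      options={"py2exe": {"bundle_files": 1}},
      zipfile=None)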
You may need to fix log handling first; this URL may help.
Later you may look for an answer here.
My answer is very general because you didn't give any more specific info (like the py2exe/Python version, the py2exe log, or other 3rd-party libraries used).
See http://www.wxpython.org/docs/api/wx.App-class.html for wxPython's App class initializer. If you want to run the app from a console and have stderr printed there, then supply False for the redirect argument. Otherwise, if you just want a window to pop up, set redirect to True and filename to None.
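A minimal sketch of what that looks like with a plain wx.App subclass (dabo's dApp wraps wxPython underneath, so the same redirect behaviour applies, though how dabo exposes the flag is not shown here):

import wx

class MyApp(wx.App):
    def OnInit(self):
        frame = wx.Frame(None, title="Debug build")
        frame.Show()
        return True

# redirect=False: stdout/stderr go to the console you launched from.
# redirect=True with filename=None: output pops up in its own window instead.
app = MyApp(redirect=False)
app.MainLoop()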