I am new to JMeter. My HTTP Request sampler call looks like this:
Path = /image/**image_id**/list/
Header = "Key" : "Key_Value"
The key value is generated by calling a Python script which uses the image_id to generate a unique key.
Before each sampler I want to generate the key using the Python script and pass it as a header to the next HTTP Request sampler.
I know I have to use some kind of preprocessor to do that. Can anyone help me do it using a preprocessor in JMeter?
I believe that Beanshell PreProcessor is what you're looking for.
Example Beanshell code looks as follows:
import java.io.BufferedReader;
import java.io.InputStreamReader;

// Launch the Python script as an external process and wait for it to finish.
// Append arguments (e.g. the image id) to the command line if your script needs them.
Runtime r = Runtime.getRuntime();
Process p = r.exec("/usr/bin/python /path/to/your/script.py");
p.waitFor();

// Read everything the script printed to stdout
BufferedReader b = new BufferedReader(new InputStreamReader(p.getInputStream()));
String line = "";
StringBuilder response = new StringBuilder();
while ((line = b.readLine()) != null) {
    response.append(line);
}
b.close();

// Expose the script's output to the test plan as ${ID}
vars.put("ID", response.toString());
The code above will execute the Python script and put its response into the ID variable.
You will be able to refer to it in your HTTP Request as
/image/${ID}/list/
See the How to use BeanShell: JMeter's favorite built-in component guide for more information on Beanshell scripting in Apache JMeter, and for a kind of Beanshell cookbook.
You can also put your request under a Transaction Controller to exclude the PreProcessor execution time from the load report.
A possible solution posted by Eugene Kazakov here:
The JSR223 sampler gives you a good way to write and execute some code:
just put jython.jar into the /lib directory, choose Jython in the "Language"
pop-up menu, and write your code in this sampler.
Sadly there is a bug in Jython, but there are some suggestions on the page.
More here.
You can use a BSF PreProcessor.
First download the Jython library and save it to your JMeter's lib directory.
On your HTTP sampler, add a BSF PreProcessor and choose Jython as the language. Then perform whatever magic you need to obtain the id; as an example I used this one:
import random

randImageString = ""
for i in range(16):
    randImageString = randImageString + chr(random.randint(ord('A'), ord('Z')))
vars.put("randimage", randImageString)
Note the vars.put("randimage", randImageString), which makes the variable available to JMeter later.
Now in your test you can use ${randimage} wherever you need it.
Every request will now be different, changing with the value the Python script put into randimage.
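Since the original question needs the generated key sent as a request header rather than in the path, here is a minimal Jython sketch along the same lines; it assumes a JSR223/BSF PreProcessor attached to the HTTP sampler, a Header Manager already present on that sampler, a JMeter variable image_id set earlier in the plan, and a hypothetical make_key() stand-in for the real keygen script:

# Jython PreProcessor sketch -- make_key() is a hypothetical placeholder
# for your real key-generation logic (or a subprocess call to script.py).
from org.apache.jmeter.protocol.http.control import Header

def make_key(image_id):
    return 'key-' + image_id  # stand-in only

image_id = vars.get('image_id')        # e.g. supplied by a CSV Data Set Config
vars.put('ID', make_key(image_id))     # usable as ${ID} in the request path
# Attach the key as the "Key" header on the upcoming sampler
# (assumes the sampler has a Header Manager attached)
sampler.getHeaderManager().add(Header('Key', vars.get('ID')))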
New to coding here. I am trying to make an application that calls a number and gives a set of instructions; this part is easy. After the call hangs up, I would like to call again and give a different set of instructions. While testing whether this is possible, I am only calling myself and playing DTMF tones so I can hear that it is functioning as I need. I am trying to pass the instructions to TwiML as a variable so I don't have to write multiple functions to perform similar instructions. However, XML doesn't take variables like that. I know the code I have included is completely wrong, but is there a method to perform the action I am trying to get?
def dial_numbers(code):
    client.calls.create(to=numberToCall, from_=TWILIO_PHONE_NUMBER,
                        twiml='<Response> <Play digits=code></Play> </Response>')

if __name__ == "__main__":
    dial_numbers("1234")
    dial_numbers("2222")
As I understand the question: you need to define a function that sends Twilio instructions to the call.
In order to play digit tones, you need to import Play and VoiceResponse from twilio.twiml.voice_response and create the XML command with them.
EASY WAY: then you build a URL for the Twilio echo Twimlet service containing your XML, and pass it as the URL to the call-creation function.
HARD WAY: the alternative is to use the Flask or FastAPI framework as a web server and create a public link via a tunnelling/DDNS service like ngrok; if you are interested, there is an official manual.
Try this one:
def dial_numbers(number_to_call, number_from, digit_code):
    from twilio.twiml.voice_response import Play, VoiceResponse  # Import the response module
    import urllib.parse  # Import urllib to build the URL for the new XML file

    response = VoiceResponse()                         # Create a VoiceResponse instance
    response.play('', digits=digit_code)               # Create the XML string for the digit code
    url_of_xml = "http://twimlets.com/echo?Twiml="     # Use the twimlet echo service to serve the XML
    string_to_add = urllib.parse.quote(str(response))  # URL-encode the XML code
    url_of_xml = url_of_xml + string_to_add            # Append our XML code to the service URL
    client.calls.create(to=number_to_call, from_=number_from, url=url_of_xml)  # Make the call

dial_numbers(number_to_call=numberToCall, number_from=TWILIO_PHONE_NUMBER, digit_code="1234")
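Alternatively, since your original code already passes twiml= directly, a minimal sketch is to build the TwiML with VoiceResponse and interpolate the digits instead of hard-coding them. This assumes the same client, numberToCall and TWILIO_PHONE_NUMBER as above, and a twilio-python version that accepts the twiml keyword:

from twilio.twiml.voice_response import VoiceResponse

def dial_numbers(code):
    response = VoiceResponse()
    response.play('', digits=code)  # renders <Play digits="1234"/>
    client.calls.create(to=numberToCall, from_=TWILIO_PHONE_NUMBER,
                        twiml=str(response))

dial_numbers("1234")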
I have a simple SQL file that I'd like to read and execute using a Python Script Snap in SnapLogic. I created an expression library file to reference the Redshift account and have included it as a parameter in the pipeline.
I have the code below from another post. Is there a way to reference the pipeline parameter to connect to the Redshift database, read the uploaded SQL file and execute the commands?
fd = open('shared/PythonExecuteTest.sql', 'r')
sqlFile = fd.read()
fd.close()

sqlCommands = sqlFile.split(';')
for command in sqlCommands:
    try:
        c.execute(command)
    except OperationalError, msg:
        print "Command skipped: ", msg
You can access pipeline parameters in scripts using $_.
Let's say, you have a pipeline parameter executionId. Then to access it in the script you can do $_executionId.
Following is a test pipeline.
With the following pipeline parameter.
Following is the test data.
Following is the script:
# Import the interface required by the Script snap.
from com.snaplogic.scripting.language import ScriptHook
import java.util

class TransformScript(ScriptHook):
    def __init__(self, input, output, error, log):
        self.input = input
        self.output = output
        self.error = error
        self.log = log

    # The "execute()" method is called once when the pipeline is started
    # and allowed to process its inputs or just send data to its outputs.
    def execute(self):
        self.log.info("Executing Transform script")
        while self.input.hasNext():
            try:
                # Read the next document, wrap it in a map and write out the wrapper
                in_doc = self.input.next()
                wrapper = java.util.HashMap()
                wrapper['output'] = in_doc
                wrapper['output']['executionId'] = $_executionId
                self.output.write(in_doc, wrapper)
            except Exception as e:
                errWrapper = {
                    'errMsg' : str(e.args)
                }
                self.log.error("Error in python script")
                self.error.write(errWrapper)
        self.log.info("Finished executing the Transform script")

# The Script Snap will look for a ScriptHook object in the "hook"
# variable. The snap will then call the hook's "execute" method.
hook = TransformScript(input, output, error, log)
Output:
Here, you can see that the executionId was read from the pipeline parameters.
Note: Accessing pipeline parameters from scripts is a valid scenario, but accessing other external systems from a script is complicated (because you would need to load the required libraries) and not recommended. Use the snaps provided by SnapLogic to access external systems. Also, if you want to use other libraries inside scripts, try sticking to JavaScript instead of Python, because there are a lot of open-source CDNs that you can use in your scripts.
Also, you can't access a configured expression library directly from the script. If you need some logic in the script, keep it in the script and not somewhere else. And there is no point in accessing account names in the script (or in mappers) because, even if you know the account name, you can't directly use the credentials/configurations stored in that account; that is handled by SnapLogic. Use the provided snaps and mappers as much as possible.
Update #1
You can't access the account directly. Accounts are managed and used internally by the snaps; you can only create and set accounts through the Accounts tab of the relevant snap.
Avoid using the Script snap as much as possible, especially if you can do the same thing using normal snaps.
Update #2
The simplest solution to this requirement would be as follows:
1. Read the file using a File Reader snap.
2. Split the content on ;.
3. Execute each SQL command using the Generic JDBC Execute snap.
Hi All,
I have a requirement where the client application is expected to send data through a REST API endpoint provided by us. The client application is expected to send the data as query parameters.
curl -L -X POST "http://endpointurl:port/data?result=PASS&c=ADD_RECORD&attribute1=test1&attribute2=test2&attribute3=test3&attribute4=test4&signature=62759010b083d8fcf6e7ec18e6582bc07789d6dda17efbff6f474635c63db6afcbb3a0c25cf0d4c5bb1ba0ab772124edb9ba064d1530c2848fc160546263c86a2ba0cc26dd0073bb6344a1abb7475bcb1cd9f1c2b6af750db043a3da807ca356ab2d0959719dfff28af16246ce242a71d9fc99e5c383edfa90f6426568e1b6e9f8510871e40a05f6debaa6d9eee72eb9f6e0691ec625b1b24bb49cb3840940e7f83a13cdc0022e4a8ac35866f9b74418dcbeb232962113ad765cce334f431108866753c767098c363f97c056fa5f377b04094436629e9ede71b3074766c5b7492e4d7d5f4f52af0bee1683af68bb70f3cda4fef78cf5f98ce8765fd5d0e12280"
Along with all the elements/columns of the actual data (result, attribute1, attribute2, attribute3 and attribute4), the client application also sends one additional parameter called signature, which is created by hashing and signing with the client-side private key, based on the key parameters (not all the query parameters):
Signature: echo -n 'attribute1attribute3' | openssl sha1 -sign id_rsa -hex
The client application also provided the public key file for us to validate the signature against the actual data before we process the records.
I am using Apache NiFi on HDP with the below high-level flow. I have used some other processors for validations and to allow other HTTP requests.
HandleHttpRequest --> AttributesToJSON --> RouteOnAttribute --> JoltTransformJSON --> ReplaceText --> PutKafka --> HandleHttpResponse
Basically, I am extracting all the http.query.param values from the posted data, and if parameter c=ADD_RECORD, I am concatenating the key attributes (attribute1, attribute3), which is the actual data that should be verified against the signature value.
I tried to go through the HashContent processor with SHA1, but the hash value that I get is very small, and it is not derived from the client-provided public key.
I also tried looking at Python scripts using the Crypto package, but I have not been able to verify the signature against the actual data. On top of that, I am not sure how I can call a Python script inside NiFi.
Below are the commands that I can use manually to validate the signature against the data:
echo -n 'attribute1attribute3' | openssl sha1 -sign id_rsa -hex>signature.hex
xxd -r -p signature.hex > signature.bin
echo -n 'attribute1attribute3'>keyattribute.txt
openssl dgst -sha1 -verify /tmp/test.pub -signature signature.bin keyattribute.txt
The last command uses keyattribute.txt and signature.bin to verify the digital signature. But in my actual requirement, I would be getting all of this data as query parameters.
I need help and insights with respect to the following.
Can HashContent be used to generate the signature based on the public key file? If so, I think we can use RouteOnAttribute to verify the signature against the actual value and take the necessary actions.
Any guidance on achieving this with a Python/Groovy/Jython script, and ideas on how it can be called within the NiFi pipeline?
Is there any possibility of building a custom processor to meet this requirement?
Appreciate any help on this.
Thanks,
Vish
====================================
Hi All,
In addition to my earlier query, I could finally get the Python script up and running. It takes three arguments:
the pub.key file location
the signature value from the hexdump on the client side
the actual concatenated fields of the key columns on which the signature was generated
and displays whether the signature matches or fails.
from __future__ import print_function, unicode_literals
import sys
import binascii
from os import path
from Crypto.PublicKey import RSA
from Crypto.Signature import PKCS1_v1_5
from Crypto.Hash import SHA

pubfile = sys.argv[1]   # public key file location
sig_hex = sys.argv[2]   # signature as a hex string
data = sys.argv[3]      # concatenated key attributes that were signed

if not path.isfile(pubfile):
    sys.stderr.write('public key file not found\n')
    sys.exit(1)

def verifier(pubkey, sig, data):
    rsakey = RSA.importKey(pubkey)   # use the argument, not a global
    signer = PKCS1_v1_5.new(rsakey)
    digest = SHA.new()
    digest.update(data)
    return signer.verify(digest, sig)

with open(pubfile, 'rb') as f:       # the filename argument, not the literal "pubfile"
    key = f.read()

sig = binascii.unhexlify(sig_hex.strip())  # hex string -> raw signature bytes

if verifier(key, sig, data):
    print("Verified OK")
else:
    print("Verification Failure")
Now I need to know how this can be called within NiFi. How can I pass the flow file attributes as arguments to the script (ExecuteScript processor)? And how can I get the verification status message as an additional attribute on the flow file?
Any help is greatly appreciated.
Thanks,
Vish
============================================================
You can do this in a few ways. Since you already have a working series of shell commands, you can use the ExecuteStreamCommand processor to execute the series of commands (or wrap them in a shell script), streaming the incoming flowfile content to STDIN and STDOUT to the outgoing flowfile content.
If you prefer, you can use ExecuteScript or InvokeScriptedProcessor to run the script you've written. I would recommend switching to Groovy, as it is handled much better in current Apache NiFi releases; Python (actually Jython) is much slower and does not have access to native libraries.
Finally, you could write a custom processor to do this if it's a repeated task you'll need in future flows. There are many guides for writing a custom processor available online.
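To make the ExecuteScript route concrete, here is a minimal Jython sketch. The attribute names signature and keyattributes are assumptions (use whatever your HandleHttpRequest/RouteOnAttribute steps actually set), and it assumes the verifier() function from the script above is pasted into the same script body:

# ExecuteScript (Jython) sketch: session and REL_SUCCESS are provided by NiFi.
# "signature" and "keyattributes" are hypothetical attribute names.
import binascii

flowFile = session.get()
if flowFile is not None:
    sig_hex = flowFile.getAttribute('signature')
    data = flowFile.getAttribute('keyattributes')
    with open('/tmp/test.pub', 'rb') as f:
        key = f.read()
    ok = verifier(key, binascii.unhexlify(sig_hex.strip()), data)
    # Expose the result as a new attribute for RouteOnAttribute downstream
    flowFile = session.putAttribute(flowFile, 'signature.valid', str(ok))
    session.transfer(flowFile, REL_SUCCESS)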
I use Python 3.4 and Visual Studio 2010.
I'm embedding Python using the C API to give the user some scripting capabilities for processing his data. I call Python functions defined by the user from my C++ code; for example, I call a specific function like Apply() that the user has to define in a Python file.
Suppose the user has a file test.py where he has defined a function Apply() that processes some data.
All I have to do is import his module and get a "pointer" to his Python function from the C++ side:
PySys_SetPath(file_info.absolutePath().toUtf8().data());
m_module = PyImport_ImportModule(module_name.toUtf8().data());
if (m_module)
{
m_apply_function = PyObject_GetAttrString(m_module, "Apply");
m_main_dict = PyModule_GetDict(m_module);
}
So far, so good. But if the user modifies his script, the new version of his function is never taken into account; I have to restart my program to make it work... I read somewhere that I need to reload the module and get new pointers to the functions, but PyImport_ReloadModule returns NULL with an "Import error".
// .... code ....
// Reload the module
m_module = PyImport_ReloadModule(m_module);
Any ideas?
Best regards,
Poukill
The answer was found in the comments of my first post (thank you J.F. Sebastian): the path passed to PySys_SetPath also has to contain the PYTHONPATH (the standard library location). In my case, that is the reason why PyImport_ReloadModule was failing. Note that after a successful reload you must also fetch the function pointers again (e.g. with PyObject_GetAttrString on the reloaded module), because the old ones still refer to the old module object.
QString sys_path = file_info.absolutePath() + ";" + "C:\\Python34\\Lib";
PySys_SetPath(UTF8ToWide(sys_path.toUtf8().data()));
m_module = PyImport_ReloadModule(m_module); // Ok !
I have a simple app which requires a many-to-many relationship to be configured as part of its setup. For example, the app requires a list of repository URLs, a list of users, and for each user a subset of the repository URLs.
I first thought of using a config.py file similar to the following:
repositories = {
    'repo1': 'http://svn.example.com/repo1/',
    'repo2': 'http://svn.example.com/repo2/',
    'repo3': 'http://svn.example.com/repo3/',
}

user_repository_mapping = {
    'person_A': ['repo1', 'repo3'],
    'person_B': ['repo2'],
    'person_C': ['repo1', 'repo2']
}
which I could import. But this is quite messy, as the config file lives outside my python-path, and I would rather use a standard configuration approach such as INI files or YAML.
Is there an elegant way of configuring a relationship such as this without importing a Python file directly?
I would store the config in JSON format. For example:
cfg = """
{
"repositories": {
"repo1": "http://svn.example.com/repo1/",
"repo2": "http://svn.example.com/repo2/",
"repo3": "http://svn.example.com/repo3/"
},
"user_repository_mapping": {
"person_A": ["repo1", "repo3"],
"person_B": ["repo2"],
"person_C": ["repo1", "repo2"]
}
}
"""
import simplejson as json  # the stdlib json module works the same way

config = json.loads(cfg)
person = "person_A"
repos = [config['repositories'][r] for r in config['user_repository_mapping'][person]]
print(repos)
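In practice you would keep the JSON in its own file instead of an inline string; a minimal sketch (config.json is an assumed filename):

import json  # stdlib; simplejson works the same way

with open('config.json') as f:  # hypothetical config file location
    config = json.load(f)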
If you like the idea of representing structure by indentation (like in Python), then YAML will be perfect for you. If you don't want to rely on whitespace and prefer explicit syntax, then you'd better go with JSON. Both are easy to understand and popular, which means there are Python libraries out there for both.
An additional advantage is that, in contrast to standard Python code, you can be sure that your configuration file contains only data and no arbitrary code that will get executed.
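For comparison, here is the same configuration as YAML, loaded with the third-party PyYAML package (an assumption; install it separately). safe_load gives you the data-only guarantee mentioned above:

import yaml  # PyYAML, assumed installed

cfg = """
repositories:
  repo1: http://svn.example.com/repo1/
  repo2: http://svn.example.com/repo2/
  repo3: http://svn.example.com/repo3/
user_repository_mapping:
  person_A: [repo1, repo3]
  person_B: [repo2]
  person_C: [repo1, repo2]
"""

config = yaml.safe_load(cfg)
print(config['user_repository_mapping']['person_A'])  # ['repo1', 'repo3']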
The tactic I use is to put the whole application in a class and, instead of having an importable config file, allow the user to pass configuration to the constructor. In more complicated cases they could even subclass the application class to add members or change behaviours. Although this requires a little knowledge of Python syntax in order to configure the app, it's not really that difficult, and it is much more flexible than the INI/markup config-file approach.
So for your example, you could have an invoke-script outside the pythonpath looking like:
#!/usr/bin/env python
import someapplication

class MySomeApplication(someapplication.Application):
    repositories = {
        'repo1': 'http://svn.example.com/repo1/',
        'repo2': 'http://svn.example.com/repo2/',
        'repo3': 'http://svn.example.com/repo3/',
    }

    user_repository_mapping = {
        'person_A': ['repo1', 'repo3'],
        'person_B': ['repo2'],
        'person_C': ['repo1', 'repo2']
    }

MySomeApplication().run()
Then, to have a second configuration that they can swap out or even run at the same time, they simply copy the invoke-script and change the settings in it.