I am exploring building an API for my application; in the browser developer tools I can see the payload below:
-X POST -H "Content-Type:application/json" -d '{ "action": "DeviceManagementRouter", "method": "addMaintWindow", "data": [{"uid": "/zport/dmd/Devices/Server/Microsoft/Windows/10.10.10.10", "durationDays":"1", "durationHours":"00", "durationMinutes":"00", "enabled":"True", "name":"Test", "repeat":"Never", "startDate":"08/15/2018", "startHours":"09", "startMinutes":"50", "startProductionState":"300" } ], "type": "rpc", "tid": 1}'
I see the error below:
{"uuid": "a74b6e27-c9af-402a-acd0-bd9c4254736e", "action": "DeviceManagementRouter", "result": {"msg": "TypeError: addMaintWindow() got an unexpected keyword argument 'startDate'", "type": "exception", "success": false}, "tid": 1, "type": "rpc", "method": "addMaintWindow"}
The router code is at the URL below:
https://zenossapiclient.readthedocs.io/en/latest/_modules/zenossapi/routers/devicemanagement.html
Assuming this is the actual Python code, then if you want to accept an arbitrary set of parameters in Python you should use either *args or **kwargs (keyword arguments). In your case **kwargs seems more appropriate.
def addMaintWindow(self, **kwargs):
    """
    Adds a new Maintenance Window.
    """
    _name = kwargs["name"]
    # Alternatively, _name = kwargs.pop("name", default_value) is more
    # defensive and fail-safe: popping the argument off means it cannot be
    # misused if you pass **kwargs on to the next function.
    facade = self._getFacade()
    facade.addMaintWindow(**kwargs)
    return DirectResponse.succeed(msg="Maintenance Window %s added successfully." % _name)
Here is a good answer about how to use them, and a more general one is here. Read the general one first if you are not familiar with them.
This should get you past the error at this stage, but you will need to do the same for facade.addMaintWindow; if it is not owned by you, make sure you pass in the correct named arguments.
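For illustration, here is a minimal sketch (standalone Python with made-up function names, not the Zenoss code) of why the original error occurs and how **kwargs absorbs extra keyword arguments:

def add_window_fixed(name, durationDays):
    # Fixed signature: calling this with startDate=... raises
    # "TypeError: add_window_fixed() got an unexpected keyword argument 'startDate'"
    return name, durationDays

def add_window_kwargs(**kwargs):
    # **kwargs accepts any keyword arguments and collects them into a dict,
    # so extra fields such as startDate no longer raise a TypeError.
    name = kwargs.pop("name", None)
    return name, kwargs

print(add_window_kwargs(name="Test", startDate="08/15/2018", startHours="09"))
# ('Test', {'startDate': '08/15/2018', 'startHours': '09'})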
Hello, I'm trying to figure out how to use an Azure Functions binding to extract data from a trigger and use it in a Cosmos DB input binding for filtering. I'm using a Service Bus message as the trigger. The Service Bus binding expressions listed in the Azure Functions documentation always return null for me (even though inside the function the same parameter has a value set).
This is what I've been trying so far (example below); I want to extract ApplicationProperties.zoneId set inside a message.
function.json
"bindings": [
{
"name": "msg",
"type": "serviceBusTrigger",
"direction": "in",
"queueName": "myqueue",
"connection": "ServiceBusConnection"
},
{
"name": "documents",
"type": "cosmosDB",
"direction": "in",
"databaseName": "test_db",
"collectionName": "items",
"sqlQuery": "SELECT * from c where c.zoneId = {ApplicationProperties.zoneId}",
"connectionStringSetting": "CosmosDBConnection"
}]
For testing I'm sending a test message to the Service Bus:
def send_single_message(sender):
    message = ServiceBusMessage(
        "woah_a_test",
        correlation_id="1",
        subject="az-fcn",
    )
    message.application_properties = {"zoneId": 1}
    sender.send_messages(message)
    print("Sent a single message")
It seems quite strange to me that I could not access any of the trigger parameters in function.json (I've also tried CorrelationId and Subject), even though inside the triggered function those parameters have the correct values.
I'm aware that I can bypass this issue by filtering inside the function code instead of using the input binding, but I'm curious why those parameters do not resolve to the expected value there. Is there any way to debug it?
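As a point of comparison, here is a minimal sketch of the in-code workaround mentioned above (the function body and names are assumptions, and it assumes the azure-functions ServiceBusMessage type exposes application_properties):

import azure.functions as func

def main(msg: func.ServiceBusMessage) -> None:
    # Custom properties set via message.application_properties on the sender
    # side surface here; depending on the SDK version the keys may come back
    # as bytes rather than str.
    props = msg.application_properties or {}
    zone_id = props.get("zoneId", props.get(b"zoneId"))
    print(f"zoneId read inside the function: {zone_id}")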
I am attempting to create an "authorizer dependency" for use in FastAPI that accepts a scope parameter, and then checks whether a given auth token grants the user access to the resource specified as a path parameter.
The problem is the way (and the order, if any) in which FastAPI seems to evaluate these dependencies.
For this case, I would find it most ideal if the path parameter in the path operation function (resource_id) were evaluated first, raising an exception and immediately "aborting" if the user attempted to access an invalid resource path.
I guess it's not surprising that the path operation decorator dependency is evaluated first, though, so I've also repeated the same path parameter validation there (which is also needed for permission checking against the resource).
What surprises me more is that both of them are evaluated and included in a list of validation errors. Wouldn't it make more sense for it to raise a single exception on the first dependency that is not fulfilled, instead of continuing to evaluate all of them?
Since the authorize dependency does not always require a resource_id, I've also attempted to make it optional, but since this dependency is evaluated first, it still attempts to authorize the user against a (possibly) invalid resource_id.
Is there any way to instruct FastAPI to fail on the first "failed" dependency, to control their order, or any better design to solve this problem?
from fastapi import FastAPI, Depends, status, Path
from fastapi.security import HTTPBearer, HTTPAuthorizationCredentials

app = FastAPI()

http_bearer = HTTPBearer(scheme_name="Token")


def authorize(scope: str):
    def _check_permission(
        resource_id: str,  # = Path(None, regex=r"^[0-9a-f-]+$"),
        authorization: HTTPAuthorizationCredentials = Depends(http_bearer),
    ):
        # check credentials and permission `scope` on `resource_id`
        return True

    return _check_permission


@app.get(
    "/{resource_id}",
    dependencies=[Depends(authorize("read"))],
    status_code=status.HTTP_200_OK,
)
def get_resource(
    resource_id: str = Path(..., regex=r"^[0-9a-f-]+$"),
):
    return f"access granted: {resource_id}"
{
  "detail": [
    {
      "loc": [
        "path",
        "resource_id"
      ],
      "msg": "string does not match regex \"^[0-9a-f-]+$\"",
      "type": "value_error.str.regex",
      "ctx": {
        "pattern": "^[0-9a-f-]+$"
      }
    },
    {
      "loc": [
        "path",
        "resource_id"
      ],
      "msg": "string does not match regex \"^[0-9a-f-]+$\"",
      "type": "value_error.str.regex",
      "ctx": {
        "pattern": "^[0-9a-f-]+$"
      }
    }
  ]
}
I have a simple step function launching a Lambda, and I am looking for a way to pass parameters (event / context) to each of several consecutive tasks. My step function looks like this:
{
  "Comment": "A Hello World example of the Amazon States Language using an AWS Lambda function",
  "StartAt": "HelloWorld",
  "States": {
    "HelloWorld": {
      "Type": "Task",
      "Parameters": {
        "TableName": "table_example"
      },
      "Resource": "arn:aws:lambda:ap-southeast-2:XXXXXXX:function:fields_sync",
      "End": true
    }
  }
}
In the Lambda, written in Python, I am using a simple handler:
def lambda_handler(event, context):
    # ...
The event and context look like this (checking the logs):
START RequestId: f58140b8-9f04-47d7-9285-510b0357b4c2 Version: $LATEST
I cannot find a way to pass parameters to this Lambda and use them in the script. Essentially, what I am trying to do is run the same Lambda while passing it a few different values as parameters.
Could anyone please point me in the right direction?
Based on what you said ("looking for a way to pass parameters (event / context) to each of several consecutive tasks"), I assume that you want to pass non-static values to the Lambdas.
There are two ways to pass arguments through the state machine: via InputPath and via Parameters. For the differences, please look here.
If you do not have any static values that you want to pass to the Lambda, I would do the following: pass all parameters to the step function in JSON format.
Input JSON for state machine
{
  "foo": 123,
  "bar": ["a", "b", "c"],
  "car": {
    "cdr": true
  },
  "TableName": "table_example"
}
In the step function you pass the entire JSON explicitly to the Lambda using "InputPath": "$", except for the first step, where it is passed implicitly. For more about the $ path syntax, please look here. You also need to take care of the task result, using one of several approaches with ResultPath. For most cases the safest solution is to keep the task result in a dedicated variable: "ResultPath": "$.taskresult".
{
  "Comment": "A Hello World example of the Amazon States Language using an AWS Lambda function",
  "StartAt": "HelloWorld",
  "States": {
    "HelloWorld": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:ap-southeast-2:XXXXXXX:function:fields_sync",
      "Next": "HelloWorld2"
    },
    "HelloWorld2": {
      "Type": "Task",
      "InputPath": "$",
      "ResultPath": "$.taskresult",
      "Resource": "arn:aws:lambda:ap-southeast-2:XXXXXXX:function:fields_sync_2",
      "End": true
    }
  }
}
In the Lambda this arrives as the event variable and can be accessed as a Python dictionary:
def lambda_handler(event, context):
    table_example = event["TableName"]
    a = event["bar"][0]
    cdr_value = event["car"]["cdr"]
    # "taskresult" does not exist as an event key in the Lambda triggered by
    # the first state; in all subsequent states it holds the task result of
    # the last executed state.
    taskresult = event["taskresult"]
With this approach you can use multiple step functions and different Lambdas and still keep both of them clean and small, by moving all the logic into the Lambdas.
It is also easier to debug, because the event variable will be the same in all Lambdas, so with a simple print(event) you can see all the parameters needed for the entire state machine and spot what possibly went wrong.
I came across this too. Apparently, when Resource is set to a Lambda ARN (for example "arn:aws:lambda:ap-southeast-2:XXXXXXX:function:fields_sync"), you can't use Parameters to specify the input; instead the state's input is passed to the Lambda (possibly the input to your state machine, if there is no step before it).
To pass the function input via Parameters, you can specify Resource as "arn:aws:states:::lambda:invoke" and provide your FunctionName in the Parameters section:
{
  "StartAt": "HelloWorld",
  "States": {
    "HelloWorld": {
      "Type": "Task",
      "Resource": "arn:aws:states:::lambda:invoke",
      "Parameters": {
        "FunctionName": "YOUR_FUNCTION_NAME",
        "Payload": {
          "SOMEPAYLOAD": "YOUR PAYLOAD"
        }
      },
      "End": true
    }
  }
}
You can find the documentation for invoking Lambda functions here: https://docs.aws.amazon.com/step-functions/latest/dg/connect-lambda.html
You can also potentially use InputPath, or include elements from your step function's state: https://docs.aws.amazon.com/step-functions/latest/dg/input-output-inputpath-params.html
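As a rough sketch of the receiving side (the handler body is my own illustration, not from the question): with arn:aws:states:::lambda:invoke, the Lambda receives the Payload object as its event.

def lambda_handler(event, context):
    # With the state definition above, event is the "Payload" object,
    # i.e. {"SOMEPAYLOAD": "YOUR PAYLOAD"}.
    some_payload = event["SOMEPAYLOAD"]
    return {"received": some_payload}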
For some reason, specifying the Lambda function ARN directly in Resource doesn't work with Parameters.
The following workaround uses only the ASL definition: you just create a Pass step before it with the parameters, and its output is used as the input of the next step (your HelloWorld step with the Lambda call):
{
  "Comment": "A Hello World example of the Amazon States Language using an AWS Lambda function",
  "StartAt": "HelloParams",
  "States": {
    "HelloParams": {
      "Type": "Pass",
      "Parameters": {
        "TableName": "table_example"
      },
      "Next": "HelloWorld"
    },
    "HelloWorld": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:ap-southeast-2:XXXXXXX:function:fields_sync",
      "End": true
    }
  }
}
There is a workaround in another answer that tells you to make the previous step a Lambda function, but that is not needed for simple cases. Context values can also be mapped, for example the current timestamp:
"HelloParams": {
"Type": "Pass",
"Parameters": {
"TableName": "table_example",
"Now": "$$.State.EnteredTime"
},
"Next": "HelloWorld"
},
Also, InputPath and ResultPath can be used to prevent overwriting the values from previous steps. For example:
{
  "Comment": "A Hello World example of the Amazon States Language using an AWS Lambda function",
  "StartAt": "HelloParams",
  "States": {
    "HelloParams": {
      "Type": "Pass",
      "Parameters": {
        "TableName": "table_example"
      },
      "ResultPath": "$.hello_prms",
      "Next": "HelloWorld"
    },
    "HelloWorld": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:ap-southeast-2:XXXXXXX:function:fields_sync",
      "InputPath": "$.hello_prms",
      "ResultPath": "$.hello_result",
      "End": true
    }
  }
}
That saves the parameters under hello_prms (so that you can reuse them in other steps) and saves the result of the execution under hello_result, without clobbering values from previous steps (in case you add any).
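A quick sketch of the corresponding handler (the body is my own illustration): with "InputPath": "$.hello_prms", the Lambda only sees the contents of that path as its event.

def lambda_handler(event, context):
    # With "InputPath": "$.hello_prms", event is just the Pass state's output,
    # e.g. {"TableName": "table_example"}.
    table_name = event["TableName"]
    return {"synced_table": table_name}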
Like Milan mentioned in his comment, you can pass data on to a Lambda function from a Step Functions state.
In the Lambda function you'll need to read the event contents.
import json

def lambda_handler(event, context):
    TableName = event['TableName']
I found that DRF's return value can vary across different occasions, so I want to make sure all my JSON responses have "code", "message", and other values nested inside, in order to keep my APIs consistent.
For example:
Success
{"code": 1, "status": "success", "message": "", "data": [{"id": 1, "name": "John Doe", "email": "johndoe#gmail.com"}]}
Error
{"code": -1, "status": "error", "message":"Something went wrong","data": [] }
The response will always have "code", "status", "message", and "data" inside, whatever the result turns out to be.
After looking it up on Google I couldn't find any workaround for DRF, so I suppose everybody is redefining the APIViews or mixins (get, put, post, etc.) to control the response. But I am not very sure whether responses should vary that widely without a fixed pattern, or whether DRF's default JSON responses can be adopted directly in production.
Hope to have some advice from you guys.
Thanks.
You can use HTTP status codes to indicate errors and success, so you don't have to explicitly specify error codes and response codes in the body. In DRF, status codes are defined in the module rest_framework.status, and you should use them when forming the response.
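A minimal sketch of that suggestion (the view and data are made up, not from the question): return the payload directly and let the HTTP status carry the success or error signal.

from rest_framework import status
from rest_framework.response import Response
from rest_framework.views import APIView


class UserList(APIView):
    def get(self, request):
        # Success: 200 plus the data itself, no wrapper object needed.
        data = [{"id": 1, "name": "John Doe", "email": "johndoe@gmail.com"}]
        return Response(data, status=status.HTTP_200_OK)

    def post(self, request):
        # Error: a 4xx status code replaces a custom "code": -1 field.
        return Response(
            {"detail": "Something went wrong"},
            status=status.HTTP_400_BAD_REQUEST,
        )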
I am creating an imposter process using Mountebank and want to record the request and response. To create an HTTP imposter I used the following curl command, as described in the documentation:
curl -i -X POST -H 'Content-Type: application/json' http://127.0.0.1:2525/imposters --data '{
  "port": 6568,
  "protocol": "http",
  "name": "proxyAlways",
  "stubs": [
    {
      "responses": [
        {
          "proxy": {
            "to": "http://localhost:8000",
            "mode": "proxyAlways",
            "predicateGenerators": [
              {
                "matches": {
                  "method": true,
                  "path": true,
                  "query": true
                }
              }
            ]
          }
        }
      ]
    }
  ]
}'
I have another server running at http://localhost:8000, which the imposter on port 6568 proxies all incoming requests to.
Output of my server now:
mb
info: [mb:2525] mountebank v1.6.0-beta.1102 now taking orders - point your browser to http://localhost:2525 for help
info: [mb:2525] POST /imposters
info: [http:6568 proxyAlways] Open for business...
info: [http:6568 proxyAlways] ::ffff:127.0.0.1:55488 => GET /
I want to record all the requests and responses going through, but I am unable to do that right now. When I run curl -i -X GET -H 'Content-Type: application/json' http://127.0.0.1:6568/ it gives me a response, but how do I store it?
Also, can anyone explain the meaning of
save off the response in a new stub in front of the proxy response
(from this Mountebank documentation)?
How to store proxy results
The short answer is that mountebank is already storing it. You can verify that by looking at the output of curl http://localhost:2525/imposters/6568. The real question is: how do you replay the stored responses?
The common usage scenario with mountebank proxies is that you record the proxy responses on one running instance of mb, save off the results, and then start the next instance of mb with those saved responses. The way you would do that is to have the system under test talk to the service you're trying to stub out via the mountebank proxy, under whatever conditions you need, and then save off the responses (and their request predicates) by sending an HTTP GET or DELETE to http://localhost:2525/imposters/6568?removeProxies=true&replayable=true. You feed the JSON body of that response into the next mb instance, either through the REST API, or by saving it to disk and starting mountebank with a command like mb --configfile savedProxyResults.json. At that point, mountebank provides the exact same responses to the requests without connecting to the downstream service.
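As a rough sketch of that save step (the requests library and file name are my choices; the ports come from the question), fetching all imposters in replayable form and writing them to a config file:

import requests

# Fetch the recorded stubs in replayable form, with the proxies stripped out,
# so the saved file responds from the recordings alone.
resp = requests.get(
    "http://localhost:2525/imposters",
    params={"replayable": "true", "removeProxies": "true"},
)

with open("savedProxyResults.json", "w") as f:
    f.write(resp.text)

# Next run: mb --configfile savedProxyResults.json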
Proxies create new stubs
Your last question revolves around understanding how the proxyAlways mode works. The default proxyOnce mode means that the first time a mountebank proxy sees a request that uniquely satisfies a predicate, it queries the downstream service and saves the response. The next time it sees a request that satisfies exactly the same predicates, it avoids the downstream call and simply returns the saved result; it only proxies downstream once for the same request. The proxyAlways mode, on the other hand, always sends the request downstream, and saves a list of responses for the same request.
To make this clear, in the example you copied we care about the method, path, and query fields on the request, so if we see two requests with exactly the same combination of those three fields, we need to know whether we should send the saved response back or continue to proxy. Imagine we first sent:
GET /test?q=elephants
The method is GET, the path is /test, and the query is q=elephants. Since this is the first request, we send it to the downstream server, which returns a body of:
No results
That will be true regardless of which proxy mode you set mountebank to, since it has to query downstream at least once. Now suppose, while we're thinking about it, the downstream service added an elephant, and then our system under test makes the same call:
GET /test?q=elephants
If we're in proxyOnce mode, the fact that the elephant was added to the real service simply won't matter; we'll continue to return our saved response:
No results
You'd see the same behavior if you shut the mountebank process down and restarted it as described above. In the config file you saved, you'd see something like this (simplifying a bit):
"stubs": [
{
"predicates": [{
"deepEquals': {
"method": "GET",
"path": "/test",
"query": { "q": "elephants" }
}
}],
"responses": [
{
"is": {
"body": "No results"
}
}
]
}
]
There's only the one stub. If, on the other hand, we use proxyAlways, then the second call to GET /test?q=elephants would yield the new elephant:
1. Jumbo reporting for duty!
This is important, because if we shut down the mountebank process and restart it, now our tests can rely on the fact that we'll cycle through both responses:
"stubs": [
{
"predicates": [{
"deepEquals': {
"method": "GET",
"path": "/test",
"query": { "q": "elephants" }
}
}],
"responses": [
{
"is": {
"body": "No results"
}
},
{
"is": {
"body": "1. Jumbo reporting for duty!"
}
}
]
}
]