Updating Zapier Storage programmatically - Python

I have been working with the Zapier Storage API through the store.zapier.com endpoint and have been successful at setting and retrieving values. However, I have recently found a need to store more complex information that I would like to update over time.
The data I am storing at the moment looks like the following:
{
    "task_id_1": {"google_id": "google_id_1", "due_on": "2018-10-24T17:00:00.000Z"},
    "task_id_2": {"google_id": "google_id_2", "due_on": "2018-10-23T20:00:00.000Z"},
    "task_id_3": {"google_id": "google_id_3", "due_on": "2018-10-25T21:00:00.000Z"}
}
What I would like to do is update the "due_on" child value of any arbitrary task_id_n without having to delete and re-add it. Reading the API information at store.zapier.com, I see you can send a PATCH request combined with a specific action to have finer control over the stored data. I attempted to use a PATCH request with the "set_child_value" action as follows:
def update_child(self, parent_key, child_key, child_value):
    header = self.generate_header()
    data = {
        "action": "set_child_value",
        "data": {
            "key": parent_key,
            "value": {child_key: child_value}
        }
    }
    result = requests.patch(self.URL, headers=header, json=data)
    return result
When I send this request, Zapier responds with a 200 status code but the storage is not updated. Any ideas what I might be missing?

Zapier Store doesn't seem to validate the request body beyond the "action" and "data" fields.
When you make a request with the "data" field set to an array, you trigger a validation error that describes the schema for the data field (what a way to find documentation for an API!).
In the request body, the data field schema for the "set_child_value" action is:
{
    "action": {
        "enum": [
            "delete",
            "increment_by",
            "set_child_value",
            "list_pop",
            "set_value_if",
            "remove_child_value",
            "list_push"
        ]
    },
    "data": {
        "key": {
            "type": "object"
        },
        "values": {
            "type": "object"
        }
    }
}
Note that it's "values" and not "value".
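Based on that schema, a corrected PATCH body would presumably use "values" instead of "value" (an untested sketch; the exact shape Zapier expects here is inferred from the validation error above):

def update_child(self, parent_key, child_key, child_value):
    header = self.generate_header()
    data = {
        "action": "set_child_value",
        "data": {
            "key": parent_key,                     # parent record to modify
            "values": {child_key: child_value}     # note "values", plural
        }
    }
    return requests.patch(self.URL, headers=header, json=data)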

I was able to update specific child values by changing my request from a PATCH to a PUT. I had to do away with the data structure of:
data = {
    "action": "set_child_value",
    "data": {
        "key": parent_key,
        "value": {child_key: child_value}
    }
}
and instead send it along as:
data = {
    parent_key: {child_key: child_value}
}
My updated request looks like:
def update_child(self, parent_key, child_key, child_value):
    header = self.generate_header()
    data = {
        parent_key: {child_key: child_value}
    }
    result = requests.put(self.URL, headers=header, json=data)
    return result
I never really resolved the issue with the PATCH method I was attempting before; it does work for other Zapier storage actions such as "pop_from_list" and "push_to_list". Anyhow, this is a suitable solution for anyone who runs into the same problem.
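For example, updating the due date of one task from the sample data above would look something like this (store is a hypothetical instance of the class defining update_child, and the values are made up):

# Set a new "due_on" child value for task_id_2
response = store.update_child("task_id_2", "due_on", "2018-10-26T17:00:00.000Z")
print(response.status_code)  # expect 200 on success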

Get context data from Lambda Authorizer (API Gateway)

I'm using an AWS Lambda authorizer to secure an API Gateway. The authorizer Lambda function is written in Python using this blueprint from AWS (https://github.com/awslabs/aws-apigateway-lambda-authorizer-blueprints/blob/master/blueprints/python/api-gateway-authorizer-python.py).
I added this code to the blueprint:
if event['authorizationToken'] == 'allow':
    policy.allowAllMethods()
else:
    policy.denyAllMethods()

# Finally, build the policy
authResponse = policy.build()

# new! -- add additional key-value pairs associated with the authenticated principal
# these are made available by APIGW like so: $context.authorizer.<key>
# additional context is cached
context = {
    'key': 'somevalue',  # $context.authorizer.key -> value
    'number': 1,
    'bool': True
}
# context['arr'] = ['foo'] <- this is invalid, APIGW will not accept it
# context['obj'] = {'foo':'bar'} <- also invalid
authResponse['context'] = context

return authResponse
However, in the Lambda function attached to the route I cannot find the context values from the authorizer. How can I get the values from context['key']?
The solution is to use Mapping Templates on the Integration Request. If you look at the route pipeline you will see that before reaching the Lambda function there is an "Integration Request" section (and also an "Integration Response").
In the Integration Request you have the option to edit the input to the Lambda function via Mapping Templates.
So, I created a new Mapping Template (using the "When there are no templates defined" option).
For the Content-Type, use application/json,
and in the actual template use something like
#set($inputRoot = $input.path('$'))
{
    "key": "$context.authorizer.key"
}
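With this non-proxy setup, the template output becomes the event that the Lambda function receives, so the handler can read the value directly. A minimal sketch, assuming the template above:

def lambda_handler(event, context):
    # The mapping template output is delivered as the Lambda event payload
    authorizer_key = event.get('key')  # populated from $context.authorizer.key
    return {'receivedKey': authorizer_key}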
Attention: the mapping template above will replace the original request body that would otherwise be passed to the integration. That data is available in $inputRoot, and you can add it back using this format:
{
    "key": "$context.authorizer.key",
    "originalkey": $inputRoot.originalkey
}
With the help of the accepted answer I came up with this:
## See http://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-mapping-template-reference.html
## This template will pass through all parameters including path, querystring, header, stage variables, and context through to the integration endpoint via the body/payload
#set($inputRoot = $input.path('$'))
#set($authorizer = $context.authorizer)
#set($allParams = $input.params())
{
#foreach($key in $inputRoot.keySet())
    "$key" : "$util.escapeJavaScript($inputRoot.get($key))"
    #if($foreach.hasNext),#end
#end,
    "context" : {
        "params" : {
#foreach($type in $allParams.keySet())
#set($params = $allParams.get($type))
            "$type" : {
#foreach($paramName in $params.keySet())
                "$paramName" : "$util.escapeJavaScript($params.get($paramName))"
                #if($foreach.hasNext),#end
#end
            }
            #if($foreach.hasNext),#end
#end
        },
        "stage-variables" : {
#foreach($key in $stageVariables.keySet())
            "$key" : "$util.escapeJavaScript($stageVariables.get($key))"
            #if($foreach.hasNext),#end
#end
        },
#foreach($key in $context.keySet())
        "$key" : "$util.escapeJavaScript($context.get($key))"
        #if($foreach.hasNext),#end
#end,
        "authorizer": {
#foreach($key in $authorizer.keySet())
            "$key" : "$util.escapeJavaScript($authorizer.get($key))"
            #if($foreach.hasNext),#end
#end
        }
    }
}
Edit: after tweaking around in API Gateway I've found out about the "Use Lambda Proxy integration" toggle, which adds extra parameters to the event object.
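With Lambda Proxy integration enabled, the authorizer context is delivered under event['requestContext']['authorizer'], so no mapping template is needed. A minimal sketch:

import json

def lambda_handler(event, context):
    # With proxy integration, API Gateway places the authorizer context here
    authorizer_context = event.get('requestContext', {}).get('authorizer', {})
    key = authorizer_context.get('key')
    return {
        'statusCode': 200,
        'body': json.dumps({'key': key})
    }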

How to select a particular value/attribute in JSON data via Python?

I have some JSON data which I want to load and inspect in Python. I know Python has a few different ways to handle JSON. If I want to see what the author name is in the following JSON data, how can I directly select the value of name inside author, without having to iterate, even if there are multiple topic/blog entries in the data?
{
    "topic": {
        "language": "JSON"
    },
    "blog": [
        {
            "author": {
                "name": "coder"
            }
        }
    ]
}
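For reference, direct access to the nested key with the standard json module would look something like this (a minimal sketch assuming the structure above; 'data.json' is a hypothetical file name, and indexing blog[0] selects a single entry rather than iterating over all of them):

import json

# Load the JSON shown above from a file
with open('data.json') as f:
    data = json.load(f)

# Drill straight into the nested keys
author_name = data["blog"][0]["author"]["name"]
print(author_name)  # -> "coder"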

How to validate a JSON request body before sending a PUT request in Python

When I send a PUT request to my API endpoint from Python with a JSON request body, the server receives an empty request body, because the payload sometimes contains special characters that are not valid in JSON.
How can I sanitize my JSON before sending my request?
I've tried serializing and re-parsing the JSON before sending my request:
profile = json.loads(json.dumps(profile))
My example invalid json is:
{
    "url": "https://www.example.com/edmund-chand/",
    "name": "Edmund Chand",
    "current_location": "FrankfurtAmMainArea, Germany",
    "education": [],
    "skills": []
}
and My expected validated json should be:
{
    "url": "https://www.example.com/edmund-chand/",
    "name": "Edmund Chand",
    "current_location": "Frankfurt Am Main Area, Germany",
    "education": [],
    "skills": []
}
If you're looking for something quick to sanitize JSON data for a limited set of fields, i.e. current_location, you can try something like the following:
def sanitize(profile):
    profile['current_location'] = ', '.join([val.strip() for val in profile['current_location'].split(',')])
    return profile

profile = sanitize(profile)
The idea here is that you would write code to sanitize each field in that function and send it to your API, or raise an exception if it is invalid, etc.
For more robust validation, you can consider using the jsonschema package. More details here.
With that package you can validate strings and JSON schemas more flexibly.
Example taken from the package README:
from jsonschema import validate

# A sample schema, like what we'd get from json.load()
schema = {
    "type": "object",
    "properties": {
        "url": {"type": "string", "format": "uri"},
        "current_location": {"type": "string", "maxLength": 25, "pattern": "your_regex_pattern"},
    },
}

# If no exception is raised by validate(), the instance is valid.
validate(instance=profile, schema=schema)
You can find more info and the types of validation available for strings here.
Thank you @Rithin for your solution, but it seems tightly coupled to one field of the JSON.
I found a solution that replaces it with the example code below, which works for any field:
profile = json.loads(json.dumps(profile).replace("\t", " "))
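If the payload may contain other control characters besides tabs, a slightly broader variant of the same idea (an assumption on my part, not part of the original answer) is to clean every string value before serializing:

def sanitize_profile(value):
    # Recursively replace characters below U+0020 (tabs, newlines, etc.) with spaces
    if isinstance(value, dict):
        return {k: sanitize_profile(v) for k, v in value.items()}
    if isinstance(value, list):
        return [sanitize_profile(v) for v in value]
    if isinstance(value, str):
        return ''.join(ch if ch >= ' ' else ' ' for ch in value)
    return value

profile = sanitize_profile(profile)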

How to save dictionary data in a list and return that list in Python?

I have a JSON file; I need to retrieve data from it and then insert it into another API.
Workflow: External Feed -> Parsing -> Insert into Another API
Coding part:
Function defined in the Parsing class.
def parsed_items(self):
    self.get_response()
    items = self.soup.find_all('item')
    self.payload = []
    for item in items:
        self.payload.append({'title': item.find('title').text,
                             'description': item.find('description').text,
                             'status': '3'})
    return self.payload
Function defined in the main class to consume the values from this function.
for items in parser.parsed_items():
    response2 = requests.request('POST', settings.BASE_URL,
                                 json=(items['title'], items['description'], items['status']),
                                 headers=headers())
Sample of JSON:
{ Data:
    {
        "title": "ipsum",
        "description": "lorem"
    }
    {
        "title": "ipsum1",
        "description": "lorem1"
    }
    {
        "title": "ipsum2",
        "description": "lorem2"
    }
    {
        "title": "ipsum3",
        "description": "lorem3"
    }
}
Error:
{"errors":[{"status":"400","source":"non_field_errors","detail":"Invalid data. Expected a dictionary, but got list."}]}
I need to know:
Q1: What is the best way to handle such scenarios? Please point to any tutorial that could be helpful here.
Q2: How do I retrieve the list of values from the payload? Is there any example you can refer me to?
Q3: How can the list returned by parsed_items() be converted into a dictionary and passed into the request as the value of the json parameter?
I need to fetch the values of "title" and "description" from the JSON and POST them to the local API. (Note: the local API is authenticated successfully.)
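For what it's worth, the 400 error above ("Expected a dictionary, but got list") suggests the API wants a JSON object per request, so passing each parsed item as a dictionary to the json parameter would presumably look like this (a sketch, assuming the field names match what the local API accepts):

for item in parser.parsed_items():
    payload = {
        'title': item['title'],
        'description': item['description'],
        'status': item['status'],
    }
    response = requests.post(settings.BASE_URL, json=payload, headers=headers())
    response.raise_for_status()  # surface any 4xx/5xx responses early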

Can one get multiple results from one API call in IBM Watson?

I am using Python to program a script for IBM Watson's Personality Insights service. I am using the results as training data for a Machine Learning project.
Since the service is so limited (100 calls/month), is it possible to get multiple personality insights with only one API call?
Jeff is correct about the API limit: you are not limited to 100 API calls/month; that is just the number of free calls you get per month.
However, to answer your question: yes, it is possible to compute multiple profiles in one call. If you are using application/json as the Content-Type, you will notice you include a userid field for each content item. You can include content from different authors (userids); you just cannot get the output as JSON, since the JSON output only supports a single author. Use the CSV API instead and you will get multiple rows, one corresponding to each author in the input.
Here is sample code that may help:
import requests, json
data = { "contentItems" : [
    {
        "userid" : "user1",
        "id" : "uuid1.1",
        "contenttype" : "text/plain",
        "language" : "en",
        "created" : 1393264847000,
        "content": "some text"
    },
    {
        "userid" : "user1",
        "id" : "uuid1.2",
        "contenttype" : "text/plain",
        "language" : "en",
        "created" : 1393263869000,
        "content": "even more"
    },
    {
        "userid" : "user2",
        "id" : "uuid2",
        "contenttype" : "text/plain",
        "language" : "en",
        "created" : 1394826985000,
        "content": "this is a different author"
    }
] }
response = requests.post(
    "https://gateway.watsonplatform.net/personality-insights" +
    "/api/v2/profile",  # Or append: "?headers=True",
    auth=("API_USERID", "API_PASSWORD"),
    headers={"Content-Type": "application/json", "Accept": "text/csv"},
    data=json.dumps(data)
)
print("HTTP %d:\n%s" % (response.status_code, response.content))
A few notes on this code:
Running this exact code will get an HTTP 400, since it does not meet the minimum text requirements: you need to replace the content fields with your own, much longer, text.
Multiple content items can belong to the same author: note that the first two above belong to user1 and the last one to user2.
If you omit the Accept: "text/csv" header, it will default to the JSON API and return HTTP 400: "multiple authors found". Remember to use the CSV API for multiple authors.
This way you can batch several authors into a single API call. Keep in mind you need to stay under the request size limit (currently 20 MB), so you just need to be a little more careful.
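To split the CSV response back into one profile per author, something like the following could work (a sketch; the exact column layout depends on the service, and appending "?headers=True" as noted in the code adds a header row):

import csv
import io

# Parse the CSV body; with "?headers=True" the first row holds column names
rows = list(csv.reader(io.StringIO(response.content.decode('utf-8'))))
header, author_rows = rows[0], rows[1:]
for row in author_rows:
    print(dict(zip(header, row)))  # one dictionary per author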
You are not limited to 100 API calls a month; it is just that beyond 100 you have to pay for them.
