Handle missing key from dict - python

I have this script which I use to pull in some data from an API call.
# list of each API url to use
link = []
# for every device id, create a new url in the link list
for i in deviceIDList:
    link.append('https://website/v2/accounts/accountid/devices/' + i)
# create a list with all the different requests
deviceReq = []
for i in link:
    deviceReq.append(requests.get(i, headers=headers).json())
# write to a txt file
with open('masterSheet.txt', 'x') as f:
    for i in deviceReq:
        devices = [i['data']]
        for x in devices:
            models = [x['provision']]
            for data in models:
                sheet = (data['endpoint_model'] + " ", x['name'])
                f.write(str(sheet) + "\n")
Some devices do not have the provision key.
Here is what the sample data looks like for a device that is different.
Let's say I want to grab the device_type value instead if the provision key is non-existent.
"data": {
"sip": {
"username": "xxxxxxxxxxxxxxxx",
"password": "xxxxxxxxxxxxxxxx",
"expire_seconds": xxxxxxxxxxxxxxxx,
"invite_format": "xxxxxxxxxxxxxxxx",
"method": "xxxxxxxxxxxxxxxx",
"route": "xxxxxxxxxxxxxxxx"
},
"device_type": "msteams",
"enabled": xxxxxxxxxxxxxxxx,
"suppress_unregister_notifications": xxxxxxxxxxxxxxxx,
"owner_id": "xxxxxxxxxxxxxxxx",
"name": "xxxxxxxxxxxxxxxx",
}
How do I cater for missing keys?

You can use .get(key, default_value) to get a value from a dict; if the key is not present it returns the default instead of raising a KeyError, like this:
provision = x.get('provision', None)
if provision is None:
    provision = x.get('device_type')
models = [provision]
Or, if you prefer, you can do the same on one line without the extra if or assignment (though some people might find it more difficult to read and understand):
models = [x.get('provision', x.get('device_type'))]
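Applied to the original script, the write loop might then look like the sketch below. One assumption worth flagging: when provision is missing, endpoint_model is presumably missing too, so the sketch writes the plain device_type string in place of the model.
with open('masterSheet.txt', 'x') as f:
    for req in deviceReq:
        device = req['data']
        provision = device.get('provision')
        if provision is not None:
            model = provision['endpoint_model']
        else:
            # assumed fallback: use the device_type string when there is no provision dict
            model = device.get('device_type', 'unknown')
        f.write(str((model + " ", device['name'])) + "\n")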


Appending tags in OCI using SDK

I already have a namespace created and defined tags applied to my resources. When I try adding new tags to a resource, the old tags get deleted.
I would like to keep the old tags and return them along with the new tags. Please help me with how I can achieve this.
Get volume details from a specific compartment:
import oci

config = oci.config.from_file("~/.oci/config")
core_client = oci.core.BlockstorageClient(config)
get_volume_response = core_client.get_volume(
    volume_id="ocid1.test.oc1..<unique_ID>EXAMPLE-volumeId-Value")
# Get the data from the response
print(get_volume_response.data)
Output:
{
    "availability_domain": "eto:PHX-AD-1",
    "compartment_id": "ocid1.compartment.oc1..aaaaaaaapmj",
    "defined_tags": {
        "OMCS": {
            "CREATOR": "xyz#gmail.com"
        },
        "Oracle-Tags": {
            "CreatedBy": "xyz#gmail.com",
            "CreatedOn": "2022-07-5T08:29:24.865Z"
        }
    },
    "display_name": "test_VG",
    "freeform_tags": {},
    "id": "ocid1.volumegroup.oc1.phx.abced",
    "is_hydrated": null,
    "lifecycle_state": "AVAILABLE",
    "size_in_gbs": 100,
    "size_in_mbs": 102400,
    "source_details": {
        "type": "volumeIds",
        "volume_ids": [
            "ocid1.volume.oc1.phx.xyz"
        ]
    }
}
I want the API below to update the tag along with the old data.
Old tag:
"defined_tags": {
    "OMCS": {
        "CREATOR": "xyz#gmail.com"
    },
    "Oracle-Tags": {
        "CreatedBy": "xyz#gmail.com",
        "CreatedOn": "2022-07-5T08:29:24.865Z"
    }
}
import oci

config = oci.config.from_file("~/.oci/config")
core_client = oci.core.BlockstorageClient(config)
update_volume_response = core_client.update_volume(
    volume_id="ocid1.test.oc1..<unique_ID>EXAMPLE-volumeId-Value",
    update_volume_details=oci.core.models.UpdateVolumeDetails(
        defined_tags={
            'OMCS': {
                'INSTANCE': 'TEST',
                'COMPONENT': 'temp1.mt.exy.vcn.com'
            }
        },
        display_name="TEMPMT01"))
print(update_volume_response.data)
I also tried the following, but got an AttributeError:
for tag in get_volume_response.data:
    def_tag.appened(tag.defined_tags)
return (def_tag)
Please help with how I can append to the defined_tags.
Tags are defined as dicts in OCI, so adding to them works the same way as adding keys to any Python dict.
Below is code for updating the defined_tags on a Block Volume in OCI:
import oci
from oci.config import from_file

configAPI = from_file()  # config file is read from the user's home location, i.e. ~/.oci/config
core_client = oci.core.BlockstorageClient(configAPI)
get_volume_response = core_client.get_volume(
    volume_id="ocid1.volume.oc1.ap-hyderabad-1.ameen")
# Get the data from the response
volume_details = get_volume_response.data
defined_tags = getattr(volume_details, "defined_tags")
freeform_tags = getattr(volume_details, "freeform_tags")
# Add new tags as required. As defined_tags is a dict, adding a new key/value pair works like below.
# If there are multiple tags to add, use the dict's update() method instead.
defined_tags["OMCS"]["INSTANCE"] = "TEST"
defined_tags["OMCS"]["COMPONENT"] = "temp1.mt.exy.vcn.com"
# Send back both the merged defined_tags and the original freeform_tags in the update
update_volume_response = core_client.update_volume(
    volume_id="ocid1.volume.oc1.ap-hyderabad-1.ameen",
    update_volume_details=oci.core.models.UpdateVolumeDetails(
        defined_tags=defined_tags,
        freeform_tags=freeform_tags))
print(update_volume_response.data)
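If several tags need to be added at once, the dict update() mentioned in the comment above might look like this (a sketch; the keys are just the placeholders from the example):
defined_tags["OMCS"].update({
    "INSTANCE": "TEST",
    "COMPONENT": "temp1.mt.exy.vcn.com"
})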

How can I check the value of a parameter?

I want to write a program that will save information from an API as a JSON file. The API has an 'exchangeId' parameter. When I save information from the API, I want to keep only those results where the 'exchangeId' values differ and there is more than one of them. How can I do that? Please give me a hand.
My Code:
exchangeIds = {102, 311, 200, 302, 521, 433, 482, 406, 42, 400}
for pair in json_data["marketPairs"]:
    if (id := pair.get("exchangeId")):
        if id in exchangeIds:
            json_data["marketPairs"].append(pair)
            exchangeIds.remove(id)
            pairs.append({
                "exchange_name": pair["exchangeName"],
                "market_url": pair["marketUrl"],
                "price": pair["price"],
                "last_update": pair["lastUpdated"],
                "exchange_id": pair["exchangeId"]
            })
out_object["name_of_coin"] = json_data["name"]
out_object["marketPairs"] = pairs
out_object["pairs"] = json_data["numMarketPairs"]
name = json_data["name"]
Example of exchangeIds output that I don't need:
{200}  # only one id left in exchangeIds
Example of JSON output:
{
"name_of_coin": "Pax Dollar",
"marketPairs": [
{
"exchange_name": "Bitrue",
"market_url": "https://www.bitrue.com/trade/usdp_usdt",
"price": 1.0000617355334473,
"last_update": "2021-12-24T16:39:09.000Z",
"exchange_id": 433
},
{
"exchange_name": "Hotbit",
"market_url": "https://www.hotbit.io/exchange?symbol=USDP_USDT",
"price": 0.964348817699553,
"last_update": "2021-12-24T16:39:08.000Z",
"exchange_id": 400
}
],
"pairs": 22
} #this one of exapmle that I need, because there are two id
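One way to express that check (a sketch under the question's own names, not a definitive answer; it assumes pairs and out_object are built as in the code above) is to count the distinct ids after the loop:
seen_ids = {p["exchange_id"] for p in pairs}
if len(seen_ids) > 1:
    # more than one distinct exchangeId: keep this coin's output
    out_object["marketPairs"] = pairs
else:
    # a single id such as {200}: skip saving this coin
    out_object = None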

Inconsistent results using Marketo API - can't find campaign ID

I am using the python library marketo-rest-api to pull data from Marketo. I am just pulling one day to attempt to connect the dots from activities to campaigns. I am making the following calls:
print('Getting Campaigns')
with open(marketoCampaignsFile, 'w') as fcamp:
    campaigns = mc.execute(method='get_multiple_campaigns', id=None, name=None,
                           programName=None, workspaceName=None, batchSize=None)
    for campaign in campaigns:
        jsonString = json.dumps(campaign)
        fcamp.write(jsonString)
fcamp.close()
print('Getting Activities...')
activitiesFile = 'c:\\users\\mark\\marketocsv\\emailActivities.2016-07-26.json'
with open(activitiesFile, 'w', newline='') as fopen:
    for activities in mc.execute(method='get_lead_activities_yield',
                                 activityTypeIds=['6', '7', '8', '9', '10'],
                                 nextPageToken=None, sinceDatetime='2016-07-26',
                                 untilDatetime='2016-07-27', batchSize=None,
                                 listId=None, leadIds=None):
        for item in activities:
            jsonString = json.dumps(item)
            fopen.write(jsonString + '\n')
fopen.close()
What I have found is that the campaign IDs in the activities file do not match any of the campaign IDs in the campaign file. Does anyone know why this might be? I need campaign attributes in order to filter the specific activities that I need. Thanks.
The activity types that you are downloading don't include the Campaign ID; they provide the Email ID instead.
So Jep was right. I did finally find the Email ID: it's called primaryAttributeValueId, and you can link it back to the Email ID provided by Marketo. I never did find the campaign ID, but I can get to the campaign through the email. Here's the full JSON from one of the requests:
{
    "primaryAttributeValue": "2016-07-Email-To-Customers",
    "activityDate": "2016-07-26T19:05:41Z",
    "attributes": [
        {
            "value": "0",
            "name": "Choice Number"
        },
        {
            "value": "43182",
            "name": "Step ID"
        }
    ],
    "primaryAttributeValueId": 17030,
    "leadId": 115345,
    "id": 393962103,
    "activityTypeId": 7,
    "campaignId": 15937
}
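To group the saved activities by that Email ID, a minimal sketch (assuming the one-JSON-object-per-line activities file written above and the standard json module):
import json

email_activities = {}
with open(activitiesFile) as fin:
    for line in fin:
        activity = json.loads(line)
        # primaryAttributeValueId carries the Email ID for these activity types
        email_activities.setdefault(activity['primaryAttributeValueId'], []).append(activity)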

Elasticsearch/Python - Re-index data after changing the mappings?

I'm a little stuck on how to re-index data in Elasticsearch after a mapping or a data type has been changed.
According to the Elasticsearch docs:
Pull the documents in from your old index, using a scrolled search and index them into the new index using the bulk API. Many of the client APIs provide a reindex() method which will do all of this for you. Once you are done, you can delete the old index.
This is my old mapping
{
    "test-index2": {
        "mappings": {
            "business": {
                "properties": {
                    "address": {
                        "type": "nested",
                        "properties": {
                            "country": {
                                "type": "string"
                            },
                            "full_address": {
                                "type": "string"
                            }
                        }
                    }
                }
            }
        }
    }
}
New index mapping; I'm changing full_address -> location_address:
{
    "test-index2": {
        "mappings": {
            "business": {
                "properties": {
                    "address": {
                        "type": "nested",
                        "properties": {
                            "country": {
                                "type": "string"
                            },
                            "location_address": {
                                "type": "string"
                            }
                        }
                    }
                }
            }
        }
    }
}
I'm using the python client for elasticsearch
https://elasticsearch-py.readthedocs.org/en/master/helpers.html#elasticsearch.helpers.reindex
from elasticsearch import Elasticsearch
from elasticsearch.helpers import reindex

es = Elasticsearch(["es.node1"])
reindex(es, "source_index", "target_index")
However, this only transfers the data from one index to another.
How can I use it to change the mappings/data types for my case above?
It's straightforward if you use the scan & scroll and bulk helpers already implemented in the Python client of Elasticsearch:
First, fetch all the documents with the scan & scroll method.
Then loop through and make the necessary modifications to each document.
Finally, insert the modified documents into the new index using the bulk API.
from elasticsearch import Elasticsearch, helpers

es = Elasticsearch()
# Use the scan & scroll helper to fetch all documents from your old index
res = helpers.scan(es, query={
    "query": {
        "match_all": {}
    },
    "size": 1000
}, index="old_index")
new_insert_data = []
# Change the mapping and anything else by looping through all your documents
for x in res:
    x['_index'] = 'new_index'
    # Change "address" to "location_address"
    x['_source']['location_address'] = x['_source']['address']
    del x['_source']['address']
    # The _score field is not needed for indexing
    del x['_score']
    # Add the modified document to the list
    new_insert_data.append(x)
print(new_insert_data)
# Use the bulk helper to insert the list of modified documents into the new index
helpers.bulk(es, new_insert_data)
es.indices.refresh(index="new_index")
The reindex() API simply "moves" documents from one index to another. There is no way it can detect/infer that the field name full_address in documents of the old index should be location_address in documents in the new index. I doubt there is any API provided by standard Elasticsearch clients that can do what you desire. The only way I can think of achieving this is through additional custom logic on the client side which maintains a dictionary of field names from old index to new index and then read documents from old index and indexes the corresponding document to the new index with new field names obtained from the field name dictionary.
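A minimal sketch of that client-side approach, reusing the scan and bulk helpers from the answer above; field_map is a hypothetical old-name to new-name dictionary, and only top-level fields are renamed (the nested address case from the mappings above would need the same treatment one level down):
from elasticsearch import Elasticsearch, helpers

es = Elasticsearch()
field_map = {"full_address": "location_address"}  # hypothetical rename dictionary

def renamed_docs():
    for hit in helpers.scan(es, index="old_index", query={"query": {"match_all": {}}}):
        # rename fields according to the dictionary, keep everything else as-is
        src = {field_map.get(k, k): v for k, v in hit["_source"].items()}
        yield {"_index": "new_index", "_type": hit["_type"], "_id": hit["_id"], "_source": src}

helpers.bulk(es, renamed_docs())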
After updating the mapping, this can be done by updating the existing documents using the bulk API.
POST /_bulk
{"update":{"_id":"59519","_type":"asset","_index":"assets"}}
{"doc":{"facility_id":491},"detect_noop":false}
Note: 'detect_noop' controls whether a no-op update is detected.
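For reference, a rough equivalent through the Python client (a sketch; the index, type, id, and field values are just those from the REST example above):
es.update(index="assets", doc_type="asset", id="59519",
          body={"doc": {"facility_id": 491}, "detect_noop": False})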

need to list all friends with facebook.py

I use facebook.py from:
https://github.com/pythonforfacebook/facebook-sdk
My problem is that I don't know how to use the next URL from graph.get_object("me/friends"):
graph = facebook.GraphAPI(access_token)
friends = graph.get_object("me/friends")
If you type /me/friends into the Graph API Explorer, you'll see that it returns a JSON response, which is just a combination of dictionaries and lists nested inside one another.
For example, the output could be:
{
    "data": [
        {
            "name": "Foo",
            "id": "1"
        },
        {
            "name": "Bar",
            "id": "2"
        }
    ],
    "paging": {
        "next": "some_link"
    }
}
This JSON file is already converted to a Python dictionary/list. In the outer dictionary, the key data maps to a list of dictionaries, which contain information about your friends.
So to print your friends list:
graph = facebook.GraphAPI(access_token)
friends = graph.get_object("me/friends")
for friend in friends['data']:
    print "{0} has id {1}".format(friend['name'].encode('utf-8'), friend['id'])
The .encode('utf-8') is to properly print out special characters.
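To follow the next link the question actually asks about, a minimal sketch (assuming the requests library, since the next value is a complete URL):
import requests

friends = graph.get_object("me/friends")
while True:
    for friend in friends['data']:
        print("{0} has id {1}".format(friend['name'].encode('utf-8'), friend['id']))
    next_url = friends.get('paging', {}).get('next')
    if not next_url:
        break
    # the next link is a full URL, so it can be fetched directly
    friends = requests.get(next_url).json()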
The above answer is misleading: Facebook has since stopped the Graph API from returning a user's friends list unless the friends have also installed the app.
See:
graph = facebook.GraphAPI(token)
friends = graph.get_object("me/friends")
if friends['data']:
    for friend in friends['data']:
        print("{0} has id {1}".format(friend['name'].encode('utf-8'), friend['id']))
else:
    print('NO FRIENDS LIST')
