KeyError Issue when decoding JSON - python

I am making a Telegram bot using Python 3 on a Raspberry Pi, and I use the requests library for HTTP requests.
I wrote this code, which should answer the &start command:
import requests as rq

updateURL = "https://api.telegram.org/bot925438333:AAGEr3pf3c4Fz91sL79mwJ6aGYm-Y6BM7_4/getUpdates"
while True:
    r = rq.post(url=updateURL)
    data = r.json()
    messageArray = data['result']
    lastMsgID = len(messageArray) - 1
    lastMsgData = messageArray[lastMsgID]
    lastMsgSenderID = lastMsgData['message']['from']['id']
    lastMsgUsername = lastMsgData['message']['from']['username']
    lastMsgText = lastMsgData["message"]["text"]
    lastMsgChatType = lastMsgData['message']['chat']['type']
    if lastMsgChatType == "group":
        lastMsgGroupID = lastMsgData['message']['chat']['id']
    if lastMsgText == "&start":
        if lastMsgChatType == "private":
            URL = "https://api.telegram.org/bot925438333:AAGEr3pf3c4Fz91sL79mwJ6aGYm-Y6BM7_4/sendMessage"
            chatText = "Witamy w KozelBot"
            chatID = lastMsgSenderID
            Params = {"chat_id": chatID, "text": chatText}
            rs = rq.get(url=URL, params=Params)
        if lastMsgChatType == "group":
            URL = "https://api.telegram.org/bot925438333:AAGEr3pf3c4Fz91sL79mwJ6aGYm-Y6BM7_4/sendMessage"
            chatText = "Witamy w KozelBot"
            chatID = lastMsgGroupID
            Params = {"chat_id": chatID, "text": chatText}
            rs = rq.get(url=URL, params=Params)
but the code outputs an error:
File "/home/pi/telegramResponse.py", line 16, in
lastMsgText = lastMsgData["message"]["text"]
KeyError: 'text'
I don't know how to solve this problem because this fragment is working fine in my other scripts!
Please help!

The reason was simple!
The last message the program finds doesn't contain any text, because it was a new-user notification. The KeyError occurred simply because that last message has no ['text'] field.
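A minimal sketch (not from the original post) of how to guard against such updates: read the text with dict.get() and skip the update when it is missing.

# inside the `while True:` polling loop shown above
lastMsgText = lastMsgData['message'].get('text')
if lastMsgText is None:
    # service messages (e.g. new-user notifications) carry no 'text' field
    continue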

Related

Using a variable from a dictionary in a loop to attach to an API call

I'm calling a LinkedIn API with the code below and it does what I want.
However, when I use almost identical code inside a loop, it returns a TypeError:
File "C:\Users\pchmurzynski\OneDrive - Centiq Ltd\Documents\Python\mergedreqs.py", line 54, in <module>
auth_headers = headers(access_token)
TypeError: 'dict' object is not callable
It has a problem with this line (which again, works fine outside of the loop):
headers = headers(access_token)
I tried changing it to
headers = headers.get(access_token)
or
headers = headers[access_token]
EDIT:
I have also tried this, with the same error:
auth_headers = headers(access_token)
But it didn't help. What am I doing wrong? Why does the dictionary work fine outside of the loop but not inside it, and what should I do to make it work?
What I am hoping to achieve is to get a list, which I can save as JSON, with share statistics for each ID from the "shids" list. That can be done with individual requests - one link for one ID,
(f'https://api.linkedin.com/v2/organizationalEntityShareStatistics?q=organizationalEntity&organizationalEntity=urn%3Ali%3Aorganization%3A77487&ugcPosts=List(urn%3Ali%3AugcPost%3A{shid})
or a request with a list of IDs.
(f'https://api.linkedin.com/v2/organizationalEntityShareStatistics?q=organizationalEntity&organizationalEntity=urn%3Ali%3Aorganization%3A77487&ugcPosts=List(urn%3Ali%3AugcPost%3A{shid},urn%3Ali%3AugcPost%3A{shid2},...,urn%3Ali%3AugcPost%3A{shidx})
Updated Code thanks to your comments.
shlink = ("https://api.linkedin.com/v2/organizationalEntityShareStatistics?q=organizationalEntity&organizationalEntity=urn%3Ali%3Aorganization%3A77487&shares=List(urn%3Ali%3Ashare%3A{})")

# loop through the list of share ids and make an API request for each of them
shares = []
token = auth(credentials)    # authenticate the API
headers = fheaders(token)    # make the headers to attach to the API call
for shid in shids:
    # create a request link for each share id
    r = shlink.format(shid)
    # call the API
    res = requests.get(r, headers=auth_headers)
    share_stats = res.json()
    # append the response to the shares list
    shares.append(share_stats["elements"])
works fine outside the loop
Because in the loop, you re-define the variable. Print statements added to show it:
from liapiauth import auth, headers  # one type

for ...:
    ...
    print(type(headers))
    headers = headers(access_token)  # now set to another type
    print(type(headers))
Lesson learned - don't overwrite your imports.
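A small illustrative sketch (the make_headers alias is my own naming, not from the answer): aliasing the import keeps the helper callable even after you store its result.

from liapiauth import auth, headers as make_headers

auth_headers = make_headers(access_token)  # the imported function is never shadowed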
Some refactors: your auth token isn't changing, so don't build it inside the loop, and you can use one method for all LinkedIn API queries.
from liapiauth import auth, headers
import requests

API_PREFIX = 'https://api.linkedin.com/v2'
SHARES_ENDPOINT_FMT = '/organizationalEntityShareStatistics?q=organizationalEntity&organizationalEntity=urn%3Ali%3Aorganization%3A77487&shares=List(urn%3Ali%3Ashare%3A{}'

def get_linkedin_response(endpoint, headers):
    return requests.get(API_PREFIX + endpoint, headers=headers)

def main(access_token=None):
    if access_token is None:
        raise ValueError('Access-Token not defined')
    auth_headers = headers(access_token)
    shares = []
    for shid in shids:
        endpoint = SHARES_ENDPOINT_FMT.format(shid)
        resp = get_linkedin_response(endpoint, auth_headers)
        if resp.status_code // 100 == 2:
            share_stats = resp.json()
            shares.append(share_stats[1])
            # TODO: extract your data here
            idlist = [el["id"] for el in shares_list["elements"]]

if __name__ == '__main__':
    credentials = 'credentials.json'
    main(auth(credentials))

reading response returns error python sdk OCI

I am trying to read and pass the response of work requests in OCI for my compartment.
import oci
import configparser
import json
from oci.work_requests import WorkRequestClient

DEFAULT_CONFIG = "~/.oci/config"
DEFAULT_PROFILE = "DEFAULT"
config_file = "config.json"
ab = []

def config_file_parser(config_file):
    config = configparser.ConfigParser()
    config.read(config_file)
    profile = config.sections()
    for config_profile in profile:
        func1 = get_work_request(file=config_file, profile_name=config_profile)
        get_print_details(func1)

def get_work_request(file=DEFAULT_CONFIG, profile_name=DEFAULT_PROFILE):
    global oci_config, identity_client, work_request_client
    oci_config = oci.config.from_file(file, profile_name=profile_name)
    identity_client = oci.identity.identity_client.IdentityClient(oci_config)
    core_client = oci.core.ComputeClient(oci_config)
    work_request_client = WorkRequestClient(oci_config)
    work_requests = work_request_client.list_work_requests(oci_config["compartment"]).data
    print("{} Work Requests found.".format(len(work_requests)))
    return work_requests

def get_print_details(workrequest_id):
    resp = work_request_client.get_work_request(','.join([str(i["id"]) for i in workrequest_id]))
    wrDetails = resp.data
    print()
    print()
    print('=' * 90)
    print('Work Request Details: {}'.format(workrequest_id))
    print('=' * 90)
    print("{}".format(wrDetails))
    print()

if __name__ == "__main__":
    config_file_parser(config_file)
But while executing work_request_client.get_work_request I get TypeError: 'WorkRequestSummary' object is not subscriptable. I have tried multiple times, including converting the object to JSON, but the error remains. Any way to solve this, or any leads, would be great.
I don't think get_work_request supports passing in multiple work request ids. You'd need to call get_work_request individually for each work request id.
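A minimal sketch of that approach (not from the original answer): list_work_requests returns WorkRequestSummary models, so read the id as an attribute rather than subscripting, and fetch each work request one at a time.

work_requests = work_request_client.list_work_requests(oci_config["compartment"]).data
for wr in work_requests:
    # each wr is a WorkRequestSummary; use wr.id, not wr["id"]
    resp = work_request_client.get_work_request(wr.id)
    print(resp.data)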

Custom Image creation in IBM Cloud failing from COS

I am working on creating a custom image in IBM Cloud using Python. I have a very simple, straightforward piece of code that just creates the image, and it fails.
As far as I can tell, I am passing the relevant, correct details for all the parameters.
Still, I get an error which is not very descriptive:
ERROR:root:Please check whether the resource you are requesting exists.
Traceback (most recent call last):
File "/Users/deepali.mittal/GITHUB/dcoa/python/build/dmittal/virtual-env36/lib/python3.6/site-packages/ibm_cloud_sdk_core/base_service.py", line 246, in send
response.status_code, http_response=response)
ibm_cloud_sdk_core.api_exception.ApiException: Error: Please check whether the resource you are requesting exists., Code: 400
Process finished with exit code 0
This is not related to a resource missing in COS; if it cannot find the image in COS, it gives a different error.
Code:
from ibm_vpc import VpcV1 as vpc_client
from ibm_cloud_sdk_core import ApiException
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator
from boto3 import client as boto3_client
import logging

#logging.basicConfig(level=logging.DEBUG)

SOURCE_OBJECT_PATH = 'cos://us-south/de-images-dmittal/abc.qcow2'
RESOURCE_GROUP_ID = '1234'
OPERATING_SYSTEM = 'ubuntu-16-amd64'

def create_ssm_client():
    ssm_client = boto3_client("ssm", region_name="us-west-2")
    return ssm_client

def retrieve_ibm_config(ssm_client):
    params = ["/ibm/service-key"]
    response = ssm_client.get_parameters(Names=params, WithDecryption=True)
    try:
        api_key = response["Parameters"][0]["Value"]
    except (ValueError, IndexError):
        raise RuntimeError(
            f"Required SSM parameters not retrieved. "
            f'Required parameters are: {params}.'
        )
    return api_key

def create_authenticator(api_key):
    authenticator = IAMAuthenticator(api_key)
    return authenticator

def create_ibm_client(authenticator):
    ibm_client = vpc_client('2021-05-28', authenticator=authenticator)
    return ibm_client

def create_image_prototype():
    image_file_prototype_model = {'href': SOURCE_OBJECT_PATH}
    operating_system_identity_model = {'name': OPERATING_SYSTEM}
    resource_group_identity_model = {'id': RESOURCE_GROUP_ID}
    image_prototype_model = {
        'name': 'my-image',
        #'resource_group': resource_group_identity_model,
        'file': image_file_prototype_model,
        'operating_system': operating_system_identity_model
    }
    image_prototype = image_prototype_model
    return image_prototype

def create_image():
    ssm_client = create_ssm_client()
    api_key = retrieve_ibm_config(ssm_client)
    authenticator = create_authenticator(api_key)
    ibm_client = create_ibm_client(authenticator)
    image_prototype = create_image_prototype()
    try:
        #images = ibm_client.list_images()
        #print(vpc)
        #ibm_client.set_service_url('https://us-south.iaas.cloud.ibm.com/v1')
        response = ibm_client.create_image(image_prototype)
        print(response)
    except ApiException as e:
        print("Failed")

if __name__ == "__main__":
    create_image()
The issue was with IAM permissions. After fixing them it worked; the error shown was not relevant, so it took time to figure out.
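A small debugging sketch (my own addition, assuming the ibm_cloud_sdk_core ApiException shown in the traceback): printing its code and message makes permission problems easier to spot than a bare "Failed".

try:
    response = ibm_client.create_image(create_image_prototype())
    print(response)
except ApiException as e:
    # ApiException carries the HTTP status and the service's error message
    print("Failed: status {}, message: {}".format(e.code, e.message))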

How can I get Google Calendar API status_code in Python when get list events?

I am trying to use the Google Calendar API:
events_result = service.events().list(calendarId=calendarId,
                                       timeMax=now,
                                       alwaysIncludeEmail=True,
                                       maxResults=100, singleEvents=True,
                                       orderBy='startTime').execute()
Everything is OK when I have permission to access the calendarId, but I get errors when I don't have permission for that calendarId.
I built an autoload.py function with the schedule Python package to load events every 10 minutes; this function stops whenever an error occurs, and I have to use an SSH terminal to restart autoload.py manually.
So I want to know:
How can I get the status_code so that, for example, if it is 404, Python will pass over that calendar?
Answer:
You can use a try/except block within a loop to go through all your calendars, and skip over accesses which throw an error.
Code Example:
To get the error code, make sure to import json:
import json
and then you can get the error code out of the Exception:
calendarIds = ["calendar ID 1", "calendar ID 2", "calendar Id 3", "etc"]

for i in calendarIds:
    try:
        events_result = service.events().list(calendarId=i,
                                               timeMax=now,
                                               alwaysIncludeEmail=True,
                                               maxResults=100, singleEvents=True,
                                               orderBy='startTime').execute()
    except Exception as e:
        print(json.loads(e.content)['error']['code'])
        continue
Further Reading:
Python Try Except - w3schools
Python For Loops - w3schools
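A hedged variant of the same idea (assuming the errors raised are googleapiclient.errors.HttpError, which is not stated in the question): catching HttpError specifically exposes the HTTP status directly via e.resp.status, in addition to the JSON body in e.content.

import json
from googleapiclient.errors import HttpError

try:
    events_result = service.events().list(calendarId=calendarId).execute()
except HttpError as e:
    print(e.resp.status)            # e.g. 404
    print(json.loads(e.content))    # full error body returned by the API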
Thanks to #Rafa Guillermo; I have uploaded the full code of the autoload.py program below, but I also wanted to know how to get the response JSON or status_code for a Google API request.
The solution:
try:
    # code here
except Exception as e:
    continue
import schedule
import time
from datetime import datetime
import dir
import sqlite3
from project.function import cmsCalendar as cal

db_file = str(dir.dir) + '/admin.sqlite'

def get_list_shop_from_db(db_file):
    cur = sqlite3.connect(db_file).cursor()
    query = cur.execute('SELECT * FROM Shop')
    colname = [d[0] for d in query.description]
    result_list = [dict(zip(colname, r)) for r in query.fetchall()]
    cur.close()
    cur.connection.close()
    return result_list

def auto_load_google_database(list_shop, calendarError=False):
    shopId = 0
    for shop in list_shop:
        try:
            shopId = shopId + 1
            print("dang ghi vao shop", shopId)
            service = cal.service_build()
            shop_step_time_db = list_shop[shopId]['shop_step_time']
            shop_duration_db = list_shop[shopId]['shop_duration']
            slot_available = list_shop[shopId]['shop_slots']
            slot_available = int(slot_available)
            workers = list_shop[shopId]['shop_workers']
            workers = int(workers)
            calendarId = list_shop[shopId]['shop_calendarId']
            if slot_available > workers:
                a = workers
            else:
                a = slot_available
            if shop_duration_db == None:
                shop_duration_db = '30'
            if shop_step_time_db == None:
                shop_step_time_db = '15'
            shop_duration = int(shop_duration_db)
            shop_step_time = int(shop_step_time_db)
            shop_start_time = list_shop[shopId]['shop_start_time']
            shop_start_time = datetime.strptime(shop_start_time, "%H:%M:%S.%f").time()
            shop_end_time = list_shop[shopId]['shop_end_time']
            shop_end_time = datetime.strptime(shop_end_time, "%H:%M:%S.%f").time()
            # capacity for each time slot, taken from the WorkShop.js JSON file
            booking_status = cal.auto_load_listtimes(service, shopId, calendarId, shop_step_time, shop_duration, a,
                                                     shop_start_time,
                                                     shop_end_time)
        except Exception as e:
            continue

def main():
    list_shop = get_list_shop_from_db(db_file)
    auto_load_google_database(list_shop)

if __name__ == '__main__':
    main()
    schedule.every(5).minutes.do(main)
    while True:
        # Checks whether a scheduled task
        # is pending to run or not
        schedule.run_pending()
        time.sleep(1)

Nested JSON Values cause "TypeError: Object of type 'int64' is not JSON serializable"

Would love some help here. For full context, this is my first "purposeful" Python script. Prior to this I've only dabbled a bit and am honestly still learning, so maybe I jumped in a bit too early here.
Long story short, I've been running all over fixing various type mismatches and general indentation issues (dear lord, Python isn't forgiving on this).
I think I'm about finished but have a few last issues, most of which seem to come from the same section. This script is just meant to take a CSV file that has 3 columns and use that to send requests based on the first column (either iOS or Android). The problem is when I'm creating the body to send...
Here's the code (a few tokens omitted for postability):
#!/usr/bin/python
# -*- coding: utf-8 -*-
import requests
import json
import pandas as pd
from tqdm import tqdm
from datetime import *
import uuid
import warnings
from math import isnan
import time

## throttling based on AF's 80 requests per 2 minutes rule
def throttle():
    i = 0
    while i <= 3:
        print("PAUSED FOR THROTTLING!" + "\n" + str(3 - i) + " minutes remaining")
        time.sleep(60)
        i = i + 1
        print(i)
    return 0

## function for reformatting the dates
def date():
    d = datetime.utcnow()  # <-- get time in UTC
    d = d.isoformat('T') + 'Z'
    t = d.split('.')
    t = t[0] + 'Z'
    return str(t)

## function for dealing with Android requests
def android_request(madv_id, mtime, muuid, android_app, token, endpoint):
    headers = {'Content-Type': 'application/json', 'Accept': 'application/json'}
    params = {'api_token': token}
    subject_identities = {
        "identity_format": "raw",
        "identity_type": "android_advertising_id",
        "identity_value": madv_id
    }
    body = {
        'subject_request_id': muuid,
        'subject_request_type': 'erasure',
        'submitted_time': mtime,
        'subject_identities': dict(subject_identities),
        'property_id': android_app
    }
    body = json.dumps(body)
    res = requests.request('POST', endpoint, headers=headers,
                           data=body, params=params)
    print("android " + res.text)

## function for dealing with iOS requests
def ios_request(midfa, mtime, muuid, ios_app, token, endpoint):
    headers = {'Content-Type': 'application/json',
               'Accept': 'application/json'}
    params = {'api_token': token}
    subject_identities = {
        'identity_format': 'raw',
        'identity_type': 'ios_advertising_id',
        'identity_value': midfa,
    }
    body = {
        'subject_request_id': muuid,
        'subject_request_type': 'erasure',
        'submitted_time': mtime,
        'subject_identities': list(subject_identities),
        'property_id': ios_app,
    }
    body = json.dumps(body)
    res = requests.request('POST', endpoint, headers=headers, data=body, params=params)
    print("ios " + res.text)

## main run function. Determines whether it is an iOS or Android request and sends it if the user is not a LAT user
def run(output, mdf, is_test):
    # assigning variables to the columns I need from the file
    print('Sending requests! Stand by...')
    platform = mdf.platform
    device = mdf.device_id
    if is_test == "y":
        ios = 'id000000000'
        android = 'com.tacos.okay'
        token = 'OMMITTED_FOR_STACKOVERFLOW_Q'
        endpoint = 'https://hq1.appsflyer.com/gdpr/stub'
    else:
        ios = 'id000000000'
        android = 'com.tacos.best'
        token = 'OMMITTED_FOR_STACKOVERFLOW_Q'
        endpoint = 'https://hq1.appsflyer.com/gdpr/opengdpr_requests'
    for position in tqdm(range(len(device))):
        if position % 80 == 0 and position != 0:
            throttle()
        else:
            req_id = str(uuid.uuid4())
            timestamp = str(date())
            if platform[position] == 'android' and device[position] != '':
                android_request(device[position], timestamp, req_id, android, token, endpoint)
                mdf['subject_request_id'][position] = req_id
            if platform[position] == 'ios' and device[position] != '':
                ios_request(device[position], timestamp, req_id, ios, token, endpoint)
                mdf['subject_request_id'][position] = req_id
            if 'LAT' in platform[position]:
                mdf['subject_request_id'][position] = 'null'
                mdf['error status'][position] = 'Limit Ad Tracking Users Unsupported. Device ID Required'
    mdf.to_csv(output, sep=',', index=False, header=True)
    # mdf.close()
    print('\nDONE. Please see ' + output
          + ' for the subject_request_id and/or error messages\n')

## takes the CSV given by the user and makes a copy of it for us to use
def read(mname):
    orig_csv = pd.read_csv(mname)
    mdf = orig_csv.copy()
    # Check that both dataframes are actually the same
    # print(pd.DataFrame.equals(orig_csv, mdf))
    return mdf

## just used to create the renamed file with _LOG.csv
def rename(mname):
    msuffix = '_LOG.csv'
    i = mname.split('.')
    i = i[0] + msuffix
    return i

## adds relevant columns to the log file
def logs_csv(out, df):
    mdf = df
    mdf['subject_request_id'] = ''
    mdf['error status'] = ''
    mdf['device_id'].fillna('')
    mdf.to_csv(out, sep=',', index=None, header=True)
    return mdf

## solely for reading in the file name from the user; creates a string out of the filename
def readin_name():
    mprefix = input('FILE NAME: ')
    msuffix = '.csv'
    mname = str(mprefix + msuffix)
    print('\n' + 'Reading in file: ' + mname)
    return mname

def start():
    print('\nWelcome to GDPR STREAMLINE')
    # blue = OpenFile()
    testing = input('Is this a test? (y/n) : ')
    # return a CSV
    name = readin_name()
    import_csv = read(name)
    output_name = rename(name)
    output_file = logs_csv(output_name, import_csv)
    run(output_name, output_file, testing)
    # print("FILE PATH:" + blue)

## to disable all warnings in console logs
warnings.filterwarnings('ignore')
start()
And here's the error stacktrace:
Reading in file: test.csv
Sending requests! Stand by...
0%| | 0/384 [00:00<?, ?it/s]
Traceback (most recent call last):
File "a_GDPR_delete.py", line 199, in <module>
start()
File "a_GDPR_delete.py", line 191, in start
run( output_name, output_file, testing)
File "a_GDPR_delete.py", line 114, in run
android_request(device[position], timestamp, req_id, android, token, endpoint)
File "a_GDPR_delete.py", line 57, in android_request
body = json.dumps(body)
File "/Users/joseph/anaconda3/lib/python3.6/json/__init__.py", line 231, in dumps
return _default_encoder.encode(obj)
File "/Users/joseph/anaconda3/lib/python3.6/json/encoder.py", line 199, in encode
chunks = self.iterencode(o, _one_shot=True)
File "/Users/joseph/anaconda3/lib/python3.6/json/encoder.py", line 257, in iterencode
return _iterencode(o, 0)
File "/Users/joseph/anaconda3/lib/python3.6/json/encoder.py", line 180, in default
o.__class__.__name__)
TypeError: Object of type 'int64' is not JSON serializable
TL;DR:
I'm getting a TypeError when calling json.dumps on a dict that contains another nested dict. I've confirmed that the nested part is the problem, because if I remove the "subject_identities" section this compiles and works... but the API I'm using NEEDS those values, so this doesn't actually do anything without that section.
Here's the relevant code again (in the version I first used, which WAS working previously):
def android(madv_id, mtime, muuid):
    headers = {
        "Content-Type": "application/json",
        "Accept": "application/json"
    }
    params = {
        "api_token": "OMMITTED_FOR_STACKOVERFLOW_Q"
    }
    body = {
        "subject_request_id": muuid,  # muuid,
        "subject_request_type": "erasure",
        "submitted_time": mtime,
        "subject_identities": [
            {"identity_type": "android_advertising_id",
             "identity_value": madv_id,
             "identity_format": "raw"}
        ],
        "property_id": "com.tacos.best"
    }
    body = json.dumps(body)
    res = requests.request("POST",
                           "https://hq1.appsflyer.com/gdpr/opengdpr_requests",
                           headers=headers, data=body, params=params)
I get the feeling I'm close to this working. I had a much simpler version early on that worked, but I rewrote this to be more dynamic and use fewer hard-coded values (so that I can eventually apply it to any app I'm working with, and not only the two it was made for).
Please be nice, I'm entirely new to Python and also just rusty on coding in general (thus trying to do projects like this one).
You can check for numpy dtypes like so:
if hasattr(obj, 'dtype'):
    obj = obj.item()
This will convert it to the closest equivalent data type
EDIT:
Apparently np.nan is JSON serializable so I've removed that catch from my answer
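A hedged sketch of how that check can be wired in (the np_safe helper is my own naming, not from the answer): passing it as the default callback to json.dumps converts any stray numpy scalar, such as int64, before serialization.

import json
import numpy as np

def np_safe(obj):
    # called by json.dumps only for objects it cannot serialize itself
    if hasattr(obj, 'dtype'):
        return obj.item()
    raise TypeError("Object of type {} is not JSON serializable".format(type(obj).__name__))

body = {'count': np.int64(7)}
print(json.dumps(body, default=np_safe))  # prints: {"count": 7}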
Thanks to everyone for helping so quickly here. Apparently I was deceived by the error message, as the fix from #juanpa.arrivillaga did the job with one adjustment.
The corrected code was in these parts:
android_request(str(device[position]), timestamp, req_id, android, token, endpoint)
and here:
ios_request(str(device[position]), timestamp, req_id, ios, token, endpoint)
I had to cast to string, apparently, even though these values are not originally integers and tend to look like this instead: ab12ab12-12ab-34cd-56ef-1234abcd5678
