I am getting this error while executing an AWS Lambda function (Python 3.7) that looks up QuickSight dashboard versions. Thanks in advance!
errorMessage: "Unable to marshal response: Object of type datetime is not JSON serializable",
errorType : "Runtime.MarshalError"
Code-
import boto3
import time
import sys

client = boto3.client('quicksight')

def lambda_handler(event, context):
    response = client.list_dashboard_versions(AwsAccountId='11111', DashboardId='2222', MaxResults=10)
    return response
A quick fix could be:
import boto3
import time
import sys
import json

client = boto3.client('quicksight')

def lambda_handler(event, context):
    response = client.list_dashboard_versions(AwsAccountId='11111', DashboardId='2222', MaxResults=10)
    return json.dumps(response, default=str)
Looking at https://boto3.amazonaws.com/v1/documentation/api/1.14.8/reference/services/quicksight.html#QuickSight.Client.list_dashboard_versions the return looks like this -
{
    'DashboardVersionSummaryList': [
        {
            'Arn': 'string',
            'CreatedTime': datetime(2015, 1, 1),
            'VersionNumber': 123,
            'Status': 'CREATION_IN_PROGRESS'|'CREATION_SUCCESSFUL'|'CREATION_FAILED'|'UPDATE_IN_PROGRESS'|'UPDATE_SUCCESSFUL'|'UPDATE_FAILED',
            'SourceEntityArn': 'string',
            'Description': 'string'
        },
    ],
    'NextToken': 'string',
    'Status': 123,
    'RequestId': 'string'
}
As you can see, CreatedTime is returned as a datetime. If you want to return this as JSON, you should transform that value.
I was struggling with this today with a method that also returns a datetime.
In my example, 'JoinedTimestamp': datetime(2015, 1, 1) resulted in the same "Unable to marshal response" error.
If you don't need the timestamp value, you might as well remove it from the response:
for account in list_accounts_response["Accounts"]:
    if "JoinedTimestamp" in account:
        del account["JoinedTimestamp"]
To follow up on Joseph Lane's answer, transforming this value could be something along the lines of:
for account in list_accounts_response["Accounts"]:
    if "JoinedTimestamp" in account:
        account["JoinedTimestamp"] = str(account["JoinedTimestamp"])
Related
I am not sure what is happening with the Facebook Ads API, but it has started throwing the error below. Yesterday everything was fine. The error only occurs for a few accounts: of the live_accounts below, the first account works, but the second one throws the error.
Error:
raise fb_response.error()
facebook_business.exceptions.FacebookRequestError:
Message: Call was not successful
Method: GET
Path: https://graph.facebook.com/v9.0/act_25XX93XXX763XXX/insights
Params: {'time_range': '{"since":"2021-04-24","until":"2021-04-24"}', 'breakdowns': '["publisher_platform","platform_position"]', 'action_breakdowns': '["action_type"]', 'level': 'ad', 'time_increment': 1, 'limit': 1, 'fields': 'adset_name,ad_name,campaign_name,account_name,impressions,account_currency,video_p25_watched_actions,video_p50_watched_actions,video_p75_watched_actions,video_p100_watched_actions,inline_link_clicks,spend,actions,action_values'}
Status: 500
Response:
{
    "error": {
        "code": 1,
        "message": "Please reduce the amount of data you're asking for, then retry your request"
    }
}
Here is my code
from facebook_business.api import FacebookAdsApi
from facebook_business.adobjects.adsinsights import AdsInsights
from facebook_business.adobjects.adaccount import AdAccount
import pandas as pd
from facebook_business.adobjects.user import User
from datetime import date
from datetime import timedelta
from google.cloud import storage
import os

start = '2021-04-24'
end = '2021-04-24'

FacebookAdsApi.init(my_app_id, my_app_secret, my_access_token, api_version='v9.0')
me = User(fbid="me")
my_account = me.get_ad_accounts()
account_list = pd.DataFrame(my_account)

appended_data = []
live_accounts = ['21XXX478XXX279X', '25XX93XXX763XXX']

for i in live_accounts:
    print(i)
    act = AdAccount('act_{}'.format(i))
    async_job = act.get_insights(params={'time_range': {'since': start, 'until': end},
                                         'breakdowns': ['publisher_platform', 'platform_position'],
                                         'action_breakdowns': ['action_type'], 'level': 'ad', 'time_increment': 1,
                                         'limit': 1,
                                         },
                                 # is_async=True,
                                 fields=[AdsInsights.Field.adset_name,
                                         AdsInsights.Field.ad_name,
                                         AdsInsights.Field.campaign_name,
                                         AdsInsights.Field.account_name,
                                         AdsInsights.Field.impressions,
                                         AdsInsights.Field.account_currency,
                                         AdsInsights.Field.video_p25_watched_actions,
                                         AdsInsights.Field.video_p50_watched_actions,
                                         AdsInsights.Field.video_p75_watched_actions,
                                         AdsInsights.Field.video_p100_watched_actions,
                                         AdsInsights.Field.inline_link_clicks,
                                         AdsInsights.Field.spend,
                                         AdsInsights.Field.actions,
                                         AdsInsights.Field.action_values,
                                         ])
    results = []
    for item in async_job:
        print(item, type(item), async_job)
        data = dict(item)
        results.append(data)
I have tried passing is_async=True to the get_insights method, but in return it only gives me 8 rows, which doesn't seem right.
Please help.
Try going down to the ad level first, and then call Ad(<ad_id>).get_insights().
When you request 'level': 'ad' from the AdAccount level with many fields, it throws this error.
Example code:
res = []
for i in live_accounts:
    print(i)
    act = AdAccount('act_{}'.format(i))
    ads = act.get_ads(params={'time_range': {'since': start, 'until': end}})
    for ad in ads:
        ad_ins = ad.get_insights(params={'time_range': {'since': start, 'until': end},
                                         'breakdowns': ['publisher_platform', 'platform_position'],
                                         'action_breakdowns': ['action_type'],
                                         'time_increment': 1,
                                         'limit': 500
                                         },
                                 fields=[AdsInsights.Field.adset_name,
                                         AdsInsights.Field.ad_name,
                                         AdsInsights.Field.campaign_name,
                                         AdsInsights.Field.account_name,
                                         AdsInsights.Field.impressions,
                                         AdsInsights.Field.account_currency,
                                         AdsInsights.Field.video_p25_watched_actions,
                                         AdsInsights.Field.video_p50_watched_actions,
                                         AdsInsights.Field.video_p75_watched_actions,
                                         AdsInsights.Field.video_p100_watched_actions,
                                         AdsInsights.Field.inline_link_clicks,
                                         AdsInsights.Field.spend,
                                         AdsInsights.Field.actions,
                                         AdsInsights.Field.action_values,
                                         ])
        res.append(ad_ins)
An alternative is to use the async API, as in this example: https://github.com/facebook/facebook-python-business-sdk/blob/master/examples/async.py
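Roughly, the async flow from that example can be sketched as follows. This is an adaptation rather than the exact code from that file, and it reuses the start and end variables from the question; the AdReportRun field names and the get_result parameters should be checked against your SDK version:
import time
from facebook_business.adobjects.adaccount import AdAccount
from facebook_business.adobjects.adreportrun import AdReportRun
from facebook_business.adobjects.adsinsights import AdsInsights

act = AdAccount('act_<AD_ACCOUNT_ID>')  # placeholder account id
params = {'time_range': {'since': start, 'until': end}, 'level': 'ad'}
fields = [AdsInsights.Field.ad_name, AdsInsights.Field.impressions, AdsInsights.Field.spend]

# Start the report as an async job, then poll until it finishes.
async_job = act.get_insights(params=params, fields=fields, is_async=True)
async_job.api_get()
while async_job[AdReportRun.Field.async_percent_completion] < 100:
    time.sleep(1)
    async_job.api_get()

# get_result() returns a cursor over the completed report's rows.
for row in async_job.get_result(params={'limit': 500}):
    print(dict(row))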
I have the below code, and want to get it to return a dataframe properly. The polling logic works, but the dataframe doesn't seem to get created/returned. Right now it just returns None when called.
import boto3
import pandas as pd
import io
import re
import time
AK='mykey'
SAK='mysecret'
params = {
    'region': 'us-west-2',
    'database': 'default',
    'bucket': 'my-bucket',
    'path': 'dailyreport',
    'query': 'SELECT * FROM v_daily_report LIMIT 100'
}
session = boto3.Session(aws_access_key_id=AK,aws_secret_access_key=SAK)
def athena_query(client, params):
    response = client.start_query_execution(
        QueryString=params["query"],
        QueryExecutionContext={
            'Database': params['database']
        },
        ResultConfiguration={
            'OutputLocation': 's3://' + params['bucket'] + '/' + params['path']
        }
    )
    return response

def athena_to_s3(session, params, max_execution=5):
    client = session.client('athena', region_name=params["region"])
    execution = athena_query(client, params)
    execution_id = execution['QueryExecutionId']
    df = poll_status(execution_id, client)
    return df

def poll_status(_id, client):
    '''
    poll query status
    '''
    result = client.get_query_execution(
        QueryExecutionId=_id
    )
    state = result['QueryExecution']['Status']['State']

    if state == 'SUCCEEDED':
        print(state)
        print(str(result))
        s3_key = 's3://' + params['bucket'] + '/' + params['path'] + '/' + _id + '.csv'
        print(s3_key)
        df = pd.read_csv(s3_key)
        return df
    elif state == 'QUEUED':
        print(state)
        print(str(result))
        time.sleep(1)
        poll_status(_id, client)
    elif state == 'RUNNING':
        print(state)
        print(str(result))
        time.sleep(1)
        poll_status(_id, client)
    elif state == 'FAILED':
        return result
    else:
        print(state)
        raise Exception
df_data = athena_to_s3(session, params)
print(df_data)
I plan to move the dataframe load out of the polling function, but just trying to get it to work as is right now.
I recommend taking a look at AWS Wrangler instead of the traditional boto3 Athena API. It is a newer, more specific interface to data services in AWS, including Athena queries, and it provides more functionality.
import awswrangler as wr
df = wr.pandas.read_sql_athena(
    sql="select * from table",
    database="database"
)
Thanks to RagePwn's comment, it is also worth checking PyAthena as an alternative to boto3 for querying Athena.
If it is returning None, then it is because state == 'FAILED'. You need to investigate the reason it failed, which may be in 'StateChangeReason'.
{
    'QueryExecution': {
        'QueryExecutionId': 'string',
        'Query': 'string',
        'StatementType': 'DDL'|'DML'|'UTILITY',
        'ResultConfiguration': {
            'OutputLocation': 'string',
            'EncryptionConfiguration': {
                'EncryptionOption': 'SSE_S3'|'SSE_KMS'|'CSE_KMS',
                'KmsKey': 'string'
            }
        },
        'QueryExecutionContext': {
            'Database': 'string'
        },
        'Status': {
            'State': 'QUEUED'|'RUNNING'|'SUCCEEDED'|'FAILED'|'CANCELLED',
            'StateChangeReason': 'string',
            'SubmissionDateTime': datetime(2015, 1, 1),
            'CompletionDateTime': datetime(2015, 1, 1)
        },
        'Statistics': {
            'EngineExecutionTimeInMillis': 123,
            'DataScannedInBytes': 123,
            'DataManifestLocation': 'string',
            'TotalExecutionTimeInMillis': 123,
            'QueryQueueTimeInMillis': 123,
            'QueryPlanningTimeInMillis': 123,
            'ServiceProcessingTimeInMillis': 123
        },
        'WorkGroup': 'string'
    }
}
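For example, a minimal sketch for surfacing that reason (assuming result is the get_query_execution response shown above, as returned by poll_status on failure) could be:
status = result['QueryExecution']['Status']
if status['State'] == 'FAILED':
    # StateChangeReason typically carries the Athena error message
    print('Query failed:', status.get('StateChangeReason', 'no reason provided'))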
Just to elaborate on RagePwn's answer about using PyAthena - that's what I ultimately did as well. For some reason AWS Wrangler choked on me and couldn't handle the JSON that was being returned from S3. Here's the code snippet that worked for me, based on PyAthena's PyPI page:
import os
from pyathena import connect
from pyathena.util import as_pandas
aws_access_key_id = os.getenv('ATHENA_ACCESS_KEY')
aws_secret_access_key = os.getenv('ATHENA_SECRET_KEY')
region_name = os.getenv('ATHENA_REGION_NAME')
staging_bucket_dir = os.getenv('ATHENA_STAGING_BUCKET')
cursor = connect(aws_access_key_id=aws_access_key_id,
                 aws_secret_access_key=aws_secret_access_key,
                 region_name=region_name,
                 s3_staging_dir=staging_bucket_dir,
                 ).cursor()
cursor.execute(sql)
df = as_pandas(cursor)
The above assumes you have defined the following environment variables:
ATHENA_ACCESS_KEY: the AWS access key id for your AWS account
ATHENA_SECRET_KEY: the AWS secret key
ATHENA_REGION_NAME: the AWS region name
ATHENA_STAGING_BUCKET: a bucket in the same account that has the correct access settings (explanation of which is outside the scope of this answer)
What is the proper way to handle response classes in Flask-RESTplus?
I am experimenting with a simple GET request seen below:
i_throughput = api.model('Throughput', {
    'date': fields.String,
    'value': fields.String
})

i_server = api.model('Server', {
    'sessionId': fields.String,
    'throughput': fields.Nested(i_throughput)
})
@api.route('/servers')
class Server(Resource):
    @api.marshal_with(i_server)
    def get(self):
        servers = mongo.db.servers.find()
        data = []
        for x in servers:
            data.append(x)
        return data
I want to return my data as part of a response object that looks like this:
{
    status: // some boolean value
    message: // some custom response message
    error: // if there is an error store it here
    trace: // if there is some stack trace dump throw it in here
    data: // what was retrieved from DB
}
I am new to Python in general and new to Flask/Flask-RESTplus. There are a lot of tutorials and a lot of information out there. One of my biggest problems is that I'm not sure exactly what to search for to get the information I need. Also, how does this work with marshalling? If anyone can post good documentation or examples of excellent APIs, it would be greatly appreciated.
https://blog.miguelgrinberg.com/post/customizing-the-flask-response-class
from flask import Flask, Response, jsonify

app = Flask(__name__)

class CustomResponse(Response):
    @classmethod
    def force_type(cls, rv, environ=None):
        if isinstance(rv, dict):
            rv = jsonify(rv)
        return super(CustomResponse, cls).force_type(rv, environ)

app.response_class = CustomResponse

@app.route('/hello', methods=['GET', 'POST'])
def hello():
    return {'status': 200, 'message': 'custom_message',
            'error': 'error_message', 'trace': 'trace_message',
            'data': 'input_data'}
result
import requests
response = requests.get('http://localhost:5000/hello')
print(response.text)
{
    "data": "input_data",
    "error": "error_message",
    "message": "custom_message",
    "status": 200,
    "trace": "trace_message"
}
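To tie this back to marshalling: another possible approach (only a sketch, reusing the i_server model and Mongo query from the question rather than an official Flask-RESTplus pattern) is to skip @api.marshal_with and call marshal() yourself, so the marshalled records become the data field of the envelope:
import traceback
from flask_restplus import Resource, marshal

@api.route('/servers')
class Server(Resource):
    def get(self):
        try:
            servers = [x for x in mongo.db.servers.find()]
            # marshal() applies the i_server model to each record
            return {'status': True,
                    'message': 'servers retrieved',
                    'error': None,
                    'trace': None,
                    'data': marshal(servers, i_server)}
        except Exception as e:
            return {'status': False,
                    'message': 'failed to retrieve servers',
                    'error': str(e),
                    'trace': traceback.format_exc(),
                    'data': None}, 500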
I'm new to Python unit testing, and I want to mock calls to the boto3 3rd party library. Here's my stripped down code:
real_code.py:
import boto3

def get_variable(var_name):
    return boto3.client('ssm').get_parameter(Name=var_name)['Parameter']['Value']
test_real_code.py:
import unittest
from datetime import datetime
from unittest.mock import patch

import real_code

class TestRealCode(unittest.TestCase):
    @patch('patching_config.boto3.client')
    def test_get_variable(self, mock_boto_client):
        response = {
            'Parameter': {
                'Name': 'MyTestParameterName',
                'Type': 'String',
                'Value': 'myValue',
                'Version': 123,
                'Selector': 'asdf',
                'SourceResult': 'asdf',
                'LastModifiedDate': datetime(2019, 7, 16),
                'ARN': 'asdf'
            }
        }
        mock_boto_client.get_variable.return_value = response
        result_value = real_code.get_variable("MyTestParameterName")
        self.assertEqual("myValue", result_value)
When I run it, the test fails with:
Expected :myValue
Actual :<MagicMock name='client().get_parameter().__getitem__().__getitem__()' id='2040071816528'>
What am I doing wrong? I thought by setting mock_boto_client.get_variable.return_value = response it would mock out the call and return my canned response instead. I don't understand why I am getting a MagicMock object instead of the return value I tried to set. I'd like to set up my test so that when the call to get_parameter is made with specific parameters, the mock returns the canned response I specified in the test.
There are two issues with your test code. The first is that when your mock object mock_boto_client is called, it returns a new mock object. This means that the object get_parameter() is being called on is different from the one you are attempting to set a return value on. You can have it return itself with the following:
mock_boto_client.return_value = mock_boto_client
You can also use a different mock object:
foo = MagicMock()
mock_boto_client.return_value = foo
The second issue is that you are mocking the wrong method call: mock_boto_client.get_variable.return_value should be mock_boto_client.get_parameter.return_value. Here is the test updated and working:
import unittest
from datetime import datetime
from unittest.mock import patch

import real_code

class TestRealCode(unittest.TestCase):
    @patch('boto3.client')
    def test_get_variable(self, mock_boto_client):
        response = {
            'Parameter': {
                'Name': 'MyTestParameterName',
                'Type': 'String',
                'Value': 'myValue',
                'Version': 123,
                'Selector': 'asdf',
                'SourceResult': 'asdf',
                'LastModifiedDate': datetime(2019, 7, 16),
                'ARN': 'asdf'
            }
        }
        mock_boto_client.return_value = mock_boto_client
        mock_boto_client.get_parameter.return_value = response
        result_value = real_code.get_variable("MyTestParameterName")
        self.assertEqual("myValue", result_value)
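As a side note on the design choice: because attribute access on a mock's return_value auto-creates nested mocks, the same effect is often written without making the mock return itself. A small sketch of that variant, using the same response dict inside the test method above:
mock_boto_client.return_value.get_parameter.return_value = response
result_value = real_code.get_variable("MyTestParameterName")
self.assertEqual("myValue", result_value)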
I'm trying to access the Poloniex API using requests.
The returnBalances code works, but the returnTradeHistory code does not.
The returnTradeHistory is commented out in the example.
Data is returned for returnBalances but not for returnTradeHistory.
I know the whole APIKey and secret code is working because I am getting accurate returnBalances data.
So why is returnTradeHistory not working?
from time import time
import urllib.parse
import hashlib
import hmac
import requests
import json
APIKey=b"stuff goes in here"
secret=b"stuff goes in here"
url = "https://poloniex.com/tradingApi"
# this works and returns data
payload = {
    'command': 'returnBalances',
    'nonce': int(time() * 1000),
}

# this does not work and does not return data
#payload = {
#    'command': 'returnTradeHistory',
#    'currencyPair': 'BTC_MAID',
#    'nonce': int(time() * 1000),
#}

paybytes = urllib.parse.urlencode(payload).encode('utf8')
sign = hmac.new(secret, paybytes, hashlib.sha512).hexdigest()

headers = {
    'Content-Type': 'application/x-www-form-urlencoded',
    'Key': APIKey,
    'Sign': sign,
}
r = requests.post(url, data=paybytes, headers=headers)
fulldata=r.content
data = json.loads(fulldata)
print(data)
According to the official poloniex API documentation:
returnTradeHistory
Returns the past 200 trades for a given market, or up to 50,000 trades
between a range specified in UNIX timestamps by the "start" and "end"
GET parameters [...]
so it is required to specify the start and end parameters,
e.g.: https://poloniex.com/public?command=returnTradeHistory&currencyPair=BTC_NXT&start=1410158341&end=1410499372
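Applied to the trading-API payload from the question, that might look like the sketch below; the time range here is a placeholder (last 24 hours), and it is worth confirming in the docs that the private returnTradeHistory call accepts the same start/end parameters as the public one:
payload = {
    'command': 'returnTradeHistory',
    'currencyPair': 'BTC_MAID',
    'start': int(time()) - 86400,  # placeholder: trades from the last 24 hours
    'end': int(time()),
    'nonce': int(time() * 1000),
}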