Alexa will not play audio from a stream using Lambda and Python

I cannot for the life of me understand why this isn't working.
Here's my lambda function
def lambda_handler(event, context):
    url = "https://prod-65-19-131-166.wostreaming.net/kindred-wcmifmmp3-128"
    return build_audio_response(url)

def build_audio_response(url):
    return {
        "version": "1.01",
        "response": {
            "directives": [
                {
                    "type": "AudioPlayer.Play",
                    "playBehavior": "ENQUEUE",
                    "audioItem": {
                        "stream": {
                            "token": "sdfsdfsdfsdfsdf3ew234235wtetgdsfgew3534tg",
                            "url": url,
                            "offsetInMilliseconds": 0
                        }
                    }
                }
            ],
            "shouldEndSession": True
        }
    }
When I run the test in the dev portal, I get a response as expected, but it's missing the directives.
{
    "version": "1.01",
    "response": {
        "shouldEndSession": true
    },
    "sessionAttributes": {}
}
Alexa just says "There was a problem with the requested skill's response."
Well, I think it's because the directives aren't making it over. But I've tested the stream, and it works. It's HTTPS. There's a token. What am I missing?

That response from Alexa means that the skill has returned an invalid response that Alexa doesn't know how to parse.
If you haven't already, you should check your CloudWatch logs for the Lambda function to see if any errors are arising there: https://console.aws.amazon.com/cloudwatch/home?region=us-east-1#
To the best of my knowledge, the developer portal still doesn't display directives, so you don't want to test there. From the developer portal Alexa skill test page:
Note: Service Simulator does not currently support testing audio player directives and customer account linking.
If no errors are found in CloudWatch, what you can do to debug further is copy the Service Request from that page and use it as a custom test for your Lambda function. On the Lambda page, click the Actions drop-down, select Configure Test Event, and paste the request from the developer portal into it. That will give you a better picture of the response you're returning to Alexa. If you can't figure this out, add that response here and we'll try to puzzle things out a bit more.
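For reference, here is a sketch of a response that follows the documented AudioPlayer.Play shape (an illustration, not a confirmed fix for this question). Two details worth checking against the docs: the response format version is the string "1.0", and playBehavior ENQUEUE requires an expectedPreviousToken, which REPLACE_ALL does not:
def build_audio_response(url):
    # Minimal sketch of the documented AudioPlayer.Play response shape.
    return {
        "version": "1.0",  # documented response-format version string
        "response": {
            "directives": [
                {
                    "type": "AudioPlayer.Play",
                    # REPLACE_ALL starts playback immediately; ENQUEUE would
                    # also require an "expectedPreviousToken" field.
                    "playBehavior": "REPLACE_ALL",
                    "audioItem": {
                        "stream": {
                            "token": "an-opaque-token",  # placeholder
                            "url": url,                  # must be HTTPS
                            "offsetInMilliseconds": 0
                        }
                    }
                }
            ],
            "shouldEndSession": True
        }
    }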

Related

Azure Analysis Services REST API: 401 Unauthorized. "Authentication failed."

I'm trying to run a data partition refresh (POST) following this Azure documentation: https://learn.microsoft.com/en-us/azure/analysis-services/analysis-services-async-refresh
With either POST or GET I get 401 Unauthorized (even when the service is off!).
I got the token from Azure AD (ServicePrincipalCredentials).
I added the service principal as an Analysis Services admin (https://learn.microsoft.com/en-us/azure/analysis-services/analysis-services-server-admins).
I gave it the Owner role in the Analysis Services IAM.
The same authentication worked with the Analysis Services management REST API (https://learn.microsoft.com/en-us/rest/api/analysisservices/operations/list), returning response code 200.
My Python code:
from azure.common.credentials import ServicePrincipalCredentials
import requests

credentials = ServicePrincipalCredentials(client_id="ad_client_id",
                                          secret="ad_secret",
                                          tenant="ad_tenant")
token = credentials.token

url = "https://westeurope.asazure.windows.net/servers/{my_server}/models/{my_model}/refreshes"
test_refresh = {
    "Type": "Full",
    "CommitMode": "transactional",
    "MaxParallelism": 1,
    "RetryCount": 1,
    "Objects": [
        {
            "table": "my_table",
            "partition": "my_partition"
        }
    ]
}
header = {'Content-Type': 'application/json',
          'Authorization': "Bearer {}".format(token['access_token'])}
r = requests.post(url=url, headers=header, data=test_refresh)

import json
print(json.dumps(r.json(), indent=" "))
The response I got:
{
    "code": "Unauthorized",
    "subCode": 0,
    "message": "Authentication failed.",
    "timeStamp": "2019-05-22T13:39:03.0322998Z",
    "httpStatusCode": 401,
    "details": [
        {
            "code": "RootActivityId",
            "message": "aab22348-9ba7-42c9-a317-fbc231832f75"
        }
    ]
}
I'm at a loss; could you please help me clear this up?
Finally I resolved the issue.
I had the wrong token. The API expects an OAuth 2.0 authentication token (the Azure Analysis Services REST API documentation isn't very clear about how to get one).
For those who run into the same issue, here is how to get one.
from adal import AuthenticationContext

authority = "https://login.windows.net/{AD_tenant_ID}"
auth_context = AuthenticationContext(authority)
oauth_token = auth_context.acquire_token_with_client_credentials(
    resource="https://westeurope.asazure.windows.net",
    client_id=AD_client_id,
    client_secret=AD_client_secret)
token = oauth_token['accessToken']
Documentation about this:
https://learn.microsoft.com/en-us/python/api/adal/adal.authentication_context.authenticationcontext?view=azure-python#acquire-token-with-client-credentials-resource--client-id--client-secret-
https://github.com/AzureAD/azure-activedirectory-library-for-python/wiki/ADAL-basics
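Putting both pieces together, the full flow would look roughly like the sketch below (tenant, app registration, server, and model names are placeholders; note that requests' json= parameter is used so the body is actually serialized as JSON):
import requests
from adal import AuthenticationContext

# Placeholders: substitute your own tenant, app registration, server, and model.
AD_tenant_ID = "my-tenant-id"
AD_client_id = "my-client-id"
AD_client_secret = "my-client-secret"
server, model = "my_server", "my_model"

# Acquire an OAuth 2.0 token whose audience is Azure Analysis Services.
auth_context = AuthenticationContext("https://login.windows.net/" + AD_tenant_ID)
oauth_token = auth_context.acquire_token_with_client_credentials(
    resource="https://westeurope.asazure.windows.net",
    client_id=AD_client_id,
    client_secret=AD_client_secret)

# POST the refresh; requests' json= serializes the body and sets Content-Type.
url = ("https://westeurope.asazure.windows.net/servers/{}/models/{}/refreshes"
       .format(server, model))
headers = {"Authorization": "Bearer {}".format(oauth_token["accessToken"])}
test_refresh = {
    "Type": "Full",
    "CommitMode": "transactional",
    "MaxParallelism": 1,
    "RetryCount": 1,
    "Objects": [{"table": "my_table", "partition": "my_partition"}],
}
r = requests.post(url, headers=headers, json=test_refresh)
print(r.status_code, r.text)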
Most likely your token is not right.
Have you tried validating your token? Use something like http://calebb.net/
I see some examples of ServicePrincipalCredentials that stipulate the context or resource like this:
credentials = ServicePrincipalCredentials(
    tenant=options['tenant_id'],
    client_id=options['script_service_principal_client_id'],
    secret=options['script_service_principal_secret'],
    resource='https://graph.windows.net')
Good samples here:
https://www.programcreek.com/python/example/103446/azure.common.credentials.ServicePrincipalCredentials
I think the solution is to try a couple more things that make sense and follow the error details.
You need a token whose resource (audience) is set to https://*.asazure.windows.net
For token validation I like https://jwt.io
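If you'd rather keep using ServicePrincipalCredentials, presumably the same fix can be expressed by passing that audience as the resource argument (a guess combining this answer with the previous one; untested):
credentials = ServicePrincipalCredentials(
    tenant="my-tenant-id",        # placeholders throughout
    client_id="my-client-id",
    secret="my-client-secret",
    resource="https://*.asazure.windows.net")  # token audience for AAS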
Also, if you want to automate this properly, you have two options: either Logic Apps or Azure Data Factory. I have detailed posts on both if you want to check them out:
https://marczak.io/posts/2019/06/logic-apps-refresh-analysis-services/
https://marczak.io/posts/2019/06/logic-app-vs-data-factory-for-aas-refresh/

How to pass variables as context to IBM Cloud Watson Assistant with V2?

I am trying to use the new API version V2 for IBM Cloud Watson Assistant. Instead of sending a message to a workspace, I need to send a message to an assistant. The context structure now has global and skill-related sections.
How would my app pass in values as context variables? Where in the structure would they need to be placed? I am using the Python SDK.
I am interested in sending information as part of client dialog actions.
Based on testing the Python SDK and the V2 API using a tool, I came to the following conclusion: context is provided by the assistant if it is requested as part of the input options.
"context": {
"skills": {
"main skill": {
"user_defined": {
"topic": "some chatbot talk",
"skip_user_input": true
}
}
},
"global": {
"system": {
"turn_count": 2
}
}
}
To pass values back from my client / app to the assistant, I could use the context parameter. However, in contrast to the V1 API, I needed to place the key / value pairs "down below" in the user_defined part:
context['skills']['main skill']['user_defined'].update({'mydateOUT':'2018-10-08'})
The above is a code snippet from this sample file for a client action. With that placement of my context variables everything works and I can implement client actions using the API Version 2.
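A minimal sketch of the whole round trip with the Python SDK, assuming the current ibm-watson package (v4+, which authenticates via IAMAuthenticator; older releases differ) and placeholder credentials and IDs. The options.return_context flag is what asks the assistant to send the context back, as described above:
from ibm_watson import AssistantV2
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

# Placeholders: supply your own API key, service URL, and assistant ID.
assistant = AssistantV2(
    version='2019-02-28',
    authenticator=IAMAuthenticator('my-api-key'))
assistant.set_service_url('https://api.us-south.assistant.watson.cloud.ibm.com')

session = assistant.create_session(assistant_id='my-assistant-id').get_result()

response = assistant.message(
    assistant_id='my-assistant-id',
    session_id=session['session_id'],
    input={'message_type': 'text',
           'text': 'Hello',
           # ask the assistant to return the context with the reply
           'options': {'return_context': True}},
    # user_defined values go under the skill name, as shown above
    context={'skills': {'main skill': {'user_defined': {'mydateOUT': '2018-10-08'}}}}
).get_result()

print(response['context']['skills']['main skill']['user_defined'])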

(Fine Uploader) Error uploading file to S3 using an external EC2 server as signature handler

I've been struggling for a couple of days trying to get this working, so now I thought I'd ask for some help.
I'm not able to sign headers and I don't know how to proceed. I have followed this guide in every detail: https://blog.fineuploader.com/fine-uploader-s3-upload-directly-to-amazon-s3-from-your-browser-3d9dcdcc0f33#sign-policy
This is my setup:
Javascript to handle upload:
var uploader = new qq.s3.FineUploader({
    element: document.getElementById("uploader"),
    debug: true,
    request: {
        endpoint: 'my-bucket.s3-accelerate.amazonaws.com',
        accessKey: 'my-access-key'
    },
    signature: {
        endpoint: 'my-external-signature-server'
    },
    uploadSuccess: {
        endpoint: 'my-external-signature-server/s3/success'
    },
    iframeSupport: {
        localBlankPagePath: '/success.html'
    },
    validation: {
        allowedExtensions: ["jpeg", "jpg", "png"],
        acceptFiles: "image/jpeg, image/png",
        sizeLimit: 10000000,
        itemLimit: 1
    },
    retry: {
        enableAuto: true // defaults to false
    },
    paste: {
        targetElement: document,
        promptForName: true
    }
});
Server setup:
Python Flask environment set up according to the GitHub example: https://github.com/FineUploader/server-examples/tree/master/python/flask-fine-uploader-s3. Do I need to do more, such as create my own policy documents, or is everything included in the app.py file?
The app is running and publicly available at its URL. This is the URL I link to in the signature endpoint above.
I get the following error when I try to upload an image:
GET https://cdn.shopify.com/s/files/1/2116/3741/t/1/assets/edit.gif 404 ()
OPTIONS https://my-external-signature-server/ net::ERR_CONNECTION_REFUSED
s3.fine-uploader.js?11308611504760701622:165 [Fine Uploader 5.15.5] POST request for 0 has failed - response code 0
[Fine Uploader 5.15.5] Received an empty or invalid response from the server!
[Fine Uploader 5.15.5] Policy signing failed. Received an empty or invalid response from the server!
my-external-signature-server is of course a stand-in name.
I hope someone has an idea of what could be wrong.
I'll gladly provide more information if necessary.

Extracting BigQuery Data From a Shared Dataset

Is it possible to extract data (to Google Cloud Storage) from a shared dataset (where I only have view permissions) using the client APIs (Python)?
I can do this manually using the web browser, but cannot get it to work using the APIs.
I have created a project (MyProject) and a service account for MyProject to use as credentials when creating the service using the API. This account has view permissions on a shared dataset (MySharedDataset) and write permissions on my google cloud storage bucket. If I attempt to run a job in my own project to extract data from the shared project:
job_data = {
    'jobReference': {
        'projectId': myProjectId,
        'jobId': str(uuid.uuid4())
    },
    'configuration': {
        'extract': {
            'sourceTable': {
                'projectId': sharedProjectId,
                'datasetId': sharedDatasetId,
                'tableId': sharedTableId,
            },
            'destinationUris': [cloud_storage_path],
            'destinationFormat': 'AVRO'
        }
    }
}
I get the error:
googleapiclient.errors.HttpError: https://www.googleapis.com/bigquery/v2/projects/sharedProjectId/jobs?alt=json returned "Value 'myProjectId' in content does not agree with value 'sharedProjectId'. This can happen when a value set through a parameter is inconsistent with a value set in the request.">
Using the sharedProjectId in both the jobReference and sourceTable I get:
googleapiclient.errors.HttpError: https://www.googleapis.com/bigquery/v2/projects/sharedProjectId/jobs?alt=json returned "Access Denied: Job myJobId: The user myServiceAccountEmail does not have permission to run a job in project sharedProjectId">
Using myProjectId for both, the job immediately comes back with a status of 'DONE' and no errors, but nothing has been exported; my GCS bucket is empty.
If this is indeed not possible using the API, is there another method/tool that can be used to automate the extraction of data from a shared dataset?
* UPDATE *
This works fine using the API explorer running under my GA login. In my code I use the following method:
service.jobs().insert(projectId=myProjectId, body=job_data).execute()
and removed the jobReference object containing the projectId:
job_data = {
    'configuration': {
        'extract': {
            'sourceTable': {
                'projectId': sharedProjectId,
                'datasetId': sharedDatasetId,
                'tableId': sharedTableId,
            },
            'destinationUris': [cloud_storage_path],
            'destinationFormat': 'AVRO'
        }
    }
}
but this returns the error:
Access Denied: Table sharedProjectId:sharedDatasetId.sharedTableId: The user 'serviceAccountEmail' does not have permission to export a table in dataset sharedProjectId:sharedDatasetId
My service account is now an owner on the shared dataset and has edit permissions on MyProject. Where else do permissions need to be set, or is it possible to use the Python API with my GA login credentials rather than the service account?
* UPDATE *
Finally got it to work. How? Make sure the service account has permissions to view the dataset (and if you don't have access to check this yourself and someone tells you that it does, ask them to double check/send you a screenshot!)
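For completeness, a sketch of the flow that ended up working, as I understand the thread (placeholder IDs and key file; using google-api-python-client with explicit service-account credentials is an assumption based on the service.jobs().insert() call above):
from google.oauth2 import service_account
from googleapiclient.discovery import build

# Placeholders: substitute your own key file, project/dataset/table IDs, and bucket path.
myProjectId = 'my-project'
sharedProjectId = 'shared-project'
sharedDatasetId = 'shared_dataset'
sharedTableId = 'shared_table'
cloud_storage_path = 'gs://myBucket/myFile.avro'

credentials = service_account.Credentials.from_service_account_file(
    'my-service-account.json',
    scopes=['https://www.googleapis.com/auth/bigquery',
            'https://www.googleapis.com/auth/devstorage.read_write'])
service = build('bigquery', 'v2', credentials=credentials)

job_data = {
    'configuration': {
        'extract': {
            'sourceTable': {
                'projectId': sharedProjectId,  # dataset the account can view
                'datasetId': sharedDatasetId,
                'tableId': sharedTableId,
            },
            'destinationUris': [cloud_storage_path],
            'destinationFormat': 'AVRO'
        }
    }
}

# The job runs in *my* project; the source table merely has to be readable.
service.jobs().insert(projectId=myProjectId, body=job_data).execute()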
After trying to reproduce the issue, I was running into the parse errors. I did, however, play around with the API on the Developer Console [2] and it worked.
What I did notice is that the request code below has a different format than the documentation on the website, as it has single quotes instead of double quotes.
Here is the code that I ran to get it to work.
{
    'configuration': {
        'extract': {
            'sourceTable': {
                'projectId': "sharedProjectID",
                'datasetId': "sharedDataSetID",
                'tableId': "sharedTableID"
            },
            'destinationUri': "gs://myBucket/myFile.csv"
        }
    }
}
HTTP Request
POST https://www.googleapis.com/bigquery/v2/projects/myProjectId/jobs
If you are still running into problems, you can try the jobs.insert API on the website [2] or the bq command-line tool [3].
The following command can do the same thing:
bq extract sharedProjectId:sharedDataSetId.sharedTableId gs://myBucket/myFile.csv
Hope this helps.
[2] https://cloud.google.com/bigquery/docs/reference/v2/jobs/insert
[3] https://cloud.google.com/bigquery/bq-command-line-tool

Error while getting degree connection between two users

I am facing a problem while fetching the degree of connection between two LinkedIn users. I am sending a request to
https://api.linkedin.com/v1/people::(~,id=<other person's linkedin id>):(relation-to-viewer:(distance))?format=json&oauth2_access_token=<user's access token>
Sometimes I get a correct response:
{
    "_total": 2,
    "values": [
        {
            "_key": "~",
            "relationToViewer": {"distance": 0}
        },
        {
            "_key": "id=x1XPVjdXkb",
            "relationToViewer": {"distance": 2}
        }
    ]
}
while most of the time I get an erroneous response:
{
    "_total": 1,
    "values": [
        {
            "_key": "~",
            "relationToViewer": {"distance": 0}
        }
    ]
}
I have gone through the LinkedIn API's profile fields and I believe that I am using the API correctly. I am not sure what's wrong here. Please help.
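For reference, the request can be issued from Python along these lines (a sketch with a placeholder member ID and access token, mirroring the URL above):
import requests

member_id = 'x1XPVjdXkb'          # placeholder: the other person's LinkedIn ID
access_token = 'my-access-token'  # placeholder: the viewing user's OAuth2 token

url = ('https://api.linkedin.com/v1/people::(~,id=' + member_id + ')'
       ':(relation-to-viewer:(distance))')
r = requests.get(url, params={'format': 'json',
                              'oauth2_access_token': access_token})
for entry in r.json().get('values', []):
    print(entry['_key'], entry['relationToViewer']['distance'])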
After posting it on the LinkedIn forum, I got this response:
The behavior you're seeing where you only get yourself back from your call falls in line with what I'd expect to see if the member ID you're asking for isn't legitimate. If the member ID exists, but isn't in ~'s network, you should get a -1 distance back, not nothing at all, as you are seeing. However if you put in a completely invalid member ID, only information about ~ will be returned from the call.
This was indeed the problem. The Android client and the iOS client had different API keys, and both were using the same backend to fetch the degree of connection. Using the same API key for both clients resolved the issue.
