IBM Watson Conversation no output in Python

I have followed the sample examples and created a workspace using IBM Watson Conversation.
I am using Python and have also followed the API documentation to send text input to Watson. However, there is no output, unlike in the example.
# Watson Conversation service
import json
from watson_developer_cloud import ConversationV1 as Cv

conversation = Cv(username='XXXX', password='XXXX', version='2017-02-03')

# obtain workspace id
workspace_id = 'Your-ID'

context = {}
response = conversation.message(
    workspace_id=workspace_id,
    message_input={'text': 'hi'},
    context=context)
print(json.dumps(response, indent=2))
Here is the output of json.dumps():
{
  "output": {
    "text": [
      "hello there, how can i help you?"
    ],
    "nodes_visited": [
      "node_1_1487754696521"
    ],
    "log_messages": []
  },
  "intents": [
    {
      "intent": "hello",
      "confidence": 0.99
    }
  ],
  "entities": [],
  "input": {
    "text": "hi"
  },
  "context": {
    "system": {
      "_node_output_map": {
        "node_1_1487754696521": [0]
      },
      "dialog_turn_counter": 1,
      "dialog_stack": [
        {
          "dialog_node": "root"
        }
      ],
      "dialog_request_counter": 1
    },
    "conversation_id": "b2940af7-73c4-4ca8-81d6-363d18637e8e"
  },
  "alternate_intents": false
}
I have tried the test bot in the workspace and it works. However, there is no output here in Python. Does anyone know what is wrong?

Okay,
I suspect that there can only be one output. Following the sample, I had added a conversation_start response that is supposed to trigger at every conversation start:
"output": { "text": [ "hello there, how can i help you?" ] }
I went to my workspace and deleted it. Now it works.
tl;dr: it only allows one output.
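A minimal sketch, assuming the response dict shown above, for reading the reply text directly instead of dumping the whole response:
# Sketch only: `response` is the dict returned by conversation.message().
for line in response.get('output', {}).get('text', []):
    print(line)

# Pass the returned context back on the next turn to keep the dialog state.
context = response.get('context', {})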

Related

How to transfer files with Google Workspace Admin SDK (Python)

I am trying to write a program that transfers users' Drive and Docs files from one user to another. It looks like I can do it using the Data Transfer API documentation.
I created the data transfer object, which looks like this:
datatransfer = {
    'kind': 'admin#datatransfer#DataTransfer',
    'oldOwnerUserId': 'somenumberhere',
    'newOwnderUserId': 'adifferentnumberhere',
    'applicationDataTransfers': [
        {
            'applicationId': '55656082996',  # the app id for Drive and Docs
            'applicationTransferParams': [
                {
                    'key': 'PRIVACY_LEVEL',
                    'value': [
                        {
                            'PRIVATE',
                            'SHARED'
                        }
                    ]
                }
            ]
        }
    ]
}
I have some code here for handling OAuth, and then I bind the service with:
service = build('admin', 'datatransfer_v1', credentials=creds)
Then I attempt to call insert() with:
results = service.transfers().insert(body=datatransfer).execute()
and I get back an error saying 'missing required field: resource'.
I tried nesting all of this inside a field called resource and I get the same message.
I tried passing in just a JSON structure that looked like {'resource': 'test'} and I get the same message.
So I tried using the "Try this method" live tool on the documentation website.
If I pass in no arguments at all, or just the old and new user, I get the same message: 'missing required nested field: resource'.
If I put in 'id': '55656082996' with any other arguments, it just returns error code 500, backend error.
I tried manually adding a field named "resource" to the live tool and it says "property 'resource' does not exist in object specification".
I finally got this to work. If anyone else is struggling with this and stumbles on this: "applicationId" is a number, not a string. Also, the error message is misleading; there is no nested field called "resource". This is what worked for me:
datatransfer = {
    "newOwnerUserId": "SomeNumber",
    "oldOwnerUserId": "SomeOtherNumber",
    "kind": "admin#datatransfer#DataTransfer",
    "applicationDataTransfers": [
        {
            "applicationId": 55656082996,
            "applicationTransferParams": [
                {
                    "key": "PRIVACY_LEVEL"
                },
                {
                    "value": [
                        "{PRIVATE, SHARED}"
                    ]
                }
            ]
        }
    ]
}
service = build('admin', 'datatransfer_v1', credentials=creds)
results = service.transfers().insert(body=datatransfer).execute()
print(results)
To get the user IDs, I first use the Directory API to query all suspended users and take their IDs from that. Then I pass each ID into this to transfer their files to another user before deleting them.
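A minimal sketch of that Directory API lookup (assuming the same creds object, the my_customer alias, and the admin.directory.user.readonly scope) might look like this:
from googleapiclient.discovery import build

# Sketch only: list suspended users with the Admin SDK Directory API.
directory = build('admin', 'directory_v1', credentials=creds)
suspended = directory.users().list(
    customer='my_customer',
    query='isSuspended=true',
    maxResults=100
).execute()

suspended_ids = [user['id'] for user in suspended.get('users', [])]
print(suspended_ids)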

Using Facebook Graph API Pages Search

I am playing with the Facebook Graph API. I am interested in getting results like the Pages search.
I can look for places like this:
search?q=audi&type=place
and I get JSON with the data I want, like website, location, etc.
Example:
{
  "data": [
    {
      "category": "Automotive Parts Store",
      "category_list": [
        {
          "id": "139492049448901",
          "name": "Automotive Parts Store"
        }
      ],
      "location": {
        "city": "Belgrade",
        "country": "Serbia",
        "latitude": 44.8206,
        "longitude": 20.4622,
        "zip": "11000"
      },
But when I want to search Pages and get public Page data in JSON with a request like this:
search?q=audi&type=page
this is the JSON I get:
{
  "data": [
  ]
}
What's happening here? Why can't I do the search, and is there a way to achieve this?
I have been reading the Facebook API documentation for two days now and I can't find anything relevant.
Python code that I am using (token removed):
import facebook

token = "////////"
graph = facebook.GraphAPI(access_token=token, version="2.7")
events = graph.request("/search?type=page&q=cafe&fields=about,fan_count,website")
print(events)
Use the pages/search endpoint.
Here is the API endpoint; use it with a valid access token:
https://graph.facebook.com/v3.2/pages/search?q=m&fields=id,is_eligible_for_branded_content,is_unclaimed,link,location,name,verification_status
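A minimal Python sketch of that call, assuming a valid user access token and the requests library:
import requests

# Sketch only: call the Graph API pages/search endpoint directly.
# ACCESS_TOKEN is assumed to be a valid user access token.
ACCESS_TOKEN = "..."
resp = requests.get(
    "https://graph.facebook.com/v3.2/pages/search",
    params={
        "q": "cafe",
        "fields": "id,name,link,location,verification_status",
        "access_token": ACCESS_TOKEN,
    },
)
resp.raise_for_status()
print(resp.json())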

How do I get the name of the inputBlob that triggered my Azure Function with Python?

I have an Azure Function which is triggered by a file being put into blob storage, and I was wondering how (if possible) to get the name of the blob (file) which triggered the function. I have tried doing:
fileObject = os.environ['inputBlob']
message = "Python script processed input blob '{0}'".format(fileObject.fileName)
and
fileObject = os.environ['inputBlob']
message = "Python script processed input blob '{0}'".format(fileObject.name)
but neither of these worked; they both resulted in errors. Can I get some help with this or some suggestions?
Thanks
The blob name can be captured via the function.json and provided as binding data. See the {filename} token below.
function.json is language agnostic and works in all languages.
See the documentation at https://learn.microsoft.com/en-us/azure/azure-functions/functions-triggers-bindings for details.
{
  "bindings": [
    {
      "name": "image",
      "type": "blobTrigger",
      "path": "sample-images/{filename}",
      "direction": "in",
      "connection": "MyStorageConnection"
    },
    {
      "name": "imageSmall",
      "type": "blob",
      "path": "sample-images-sm/{filename}",
      "direction": "out",
      "connection": "MyStorageConnection"
    }
  ]
}
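A minimal Python sketch that could sit behind the bindings above (the parameter names image and imageSmall come from that function.json; the copy logic is only illustrative):
import logging
import os
import azure.functions as func

def main(image: func.InputStream, imageSmall: func.Out[bytes]) -> None:
    # image.name is the full blob path, e.g. "sample-images/photo.png"
    blob_name = os.path.basename(image.name)
    logging.info("Processing blob: %s", blob_name)

    # The {filename} token captured by the trigger path is reused in the
    # output binding path, so setting the output writes a blob with the
    # same name into sample-images-sm.
    imageSmall.set(image.read())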
If you want to get the file name of the file that triggered your function, you can do that:
Use {name} in function.json:
{
  "bindings": [
    {
      "name": "myblob",
      "type": "blobTrigger",
      "path": "MyBlobPath/{name}",
      "direction": "in",
      "connection": "MyStorageConnection"
    }
  ]
}
The function will be triggered by changes in your blob storage.
Get the name of the file that triggered the function in Python (__init__.py):
import azure.functions as func

def main(myblob: func.InputStream):
    filename = myblob.name
This will give you the name of the file that triggered your function.
There is no information in your description about which trigger you used. But fortunately, there is a sample project, yokawasa/azure-functions-python-samples on GitHub, for Azure Functions using Python, which includes many samples using different triggers such as a queue trigger or a blob trigger. I think it's very helpful for you, and you can refer to these samples to write your own function to satisfy your needs.
Hope it helps.
Getting the name of the inputBlob is not currently possible with Python Azure Functions. There are open issues about it in the azure-webjobs-sdk and azure-webjobs-sdk-script repos on GitHub:
https://github.com/Azure/azure-webjobs-sdk/issues/1090
https://github.com/Azure/azure-webjobs-sdk-script/issues/1339
Unfortunately it's still not possible.
In Python, you can do:
import azure.functions as func
import os

def main(blobin: func.InputStream):
    filename = os.path.basename(blobin.name)

adding new dict to json output python

Very new to JSON, so please forgive me if I am using the wrong terms here. Anyway, I am trying to create a JSON file every x minutes with updated Twitter info. Now I know I could just use the Twitter API and grab what I need, but I am wanting to mess around a bit. My problem is getting a new key/dict (or whatever it is called) for each new item added in a for statement.
What I'm trying to get:
[
  {
    "name": "thename",
    "date": "thedate",
    "text": "thetext"
  },  <--- trying to get that little comma
  {
    "name": "thename",
    "date": "thedate",
    "text": "thetext"
  }
]
Now I am getting the data that I want, but not that comma. It outputs everything, just not the way it should be, missing that one little character that makes it valid.
[
  {
    "name": "thename",
    "date": "thedate",
    "text": "thetext"
  }  <--- I get this instead for each new object
  {
    "name": "thename",
    "date": "thedate",
    "text": "thetext"
  }
]
Here is just a snippet of the code, as the rest should be self-explanatory since it's just Twitter OAuth stuff.
for player in users:
    statuses = api.GetUserTimeline(screen_name=player, count=10)
    print "["
    for s in statuses:
        print json.dumps({'timestamp': s.created_at, 'username': s.user.name, 'status': s.text})
    print "]"
Also, is there a better way to do the [ ] at the start and end? Because I know that is ugly and way improper, I bet XD. Like I said, I'm a newbie on JSON/Python stuff, but it's a good learning experience.
Instead of trying to build the JSON array yourself, you should just let the JSON encoder do that for you too. To do that, you just need to pass a list of objects to json.dumps. So instead of printing each JSON object on its own, just collect them in a list, and dump that:
allstatuses = []
for player in users:
    statuses = api.GetUserTimeline(screen_name=player, count=10)
    for s in statuses:
        allstatuses.append({'timestamp': s.created_at, 'username': s.user.name, 'status': s.text})
print(json.dumps(allstatuses))
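Since the goal is a JSON file refreshed every x minutes, a small follow-up sketch (the file name is an assumption) could dump the same list straight to disk instead of printing it:
import json

# Sketch only: write the collected statuses to a file instead of printing.
# "statuses.json" is a placeholder name; run this from whatever scheduler
# (cron, a timer loop, etc.) fires every x minutes.
with open("statuses.json", "w") as f:
    json.dump(allstatuses, f, indent=2)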

Locale parameter on Facebook issue

I have written this Python code to retrieve all Facebook pages that are written in Arabic:
import facebook  # pip install facebook-sdk
import json
import codecs
from prettytable import PrettyTable
from collections import Counter

# A helper function to pretty-print Python objects as JSON
def pp(o):
    print json.dumps(o, indent=1)

# Create a connection to the Graph API with your access token
ACCESS_TOKEN = ''  # my access token
g = facebook.GraphAPI(ACCESS_TOKEN)

s = g.request('search', {'q': '&',
                         'type': 'page',
                         'limit': 5000,
                         'locale': 'ar_AR'})
pp(s)
The locale parameter should return all pages written in Arabic. However, as the output below shows, I get results that contain English. What am I doing incorrectly?
{
  "paging": {
    "data": [
      {
        "category": "\u0628\u0636\u0627\u0626\u0639 \u0627\u0644\u0628\u064a\u0639 \u0628\u0627\u0644\u062a\u062c\u0632\u0626\u0629 \u0648\u0628\u0636\u0627\u0626\u0639 \u0627\u0644\u0645\u0633\u062a\u0647\u0644\u0643\u064a\u0646",
        "name": "Stop & Shop",
        "category_list": [
          {
            "id": "169207329791658",
            "name": "\u0645\u062d\u0644 \u0628\u0642\u0627\u0644\u0629"
          }
        ],
        "id": "170000993071234"
      },
      {
        "category": "\u0628\u0636\u0627\u0626\u0639 \u0627\u0644\u0628\u064a\u0639 \u0628\u0627\u0644\u062a\u062c\u0632\u0626\u0629 \u0648\u0628\u0636\u0627\u0626\u0639 \u0627\u0644\u0645\u0633\u062a\u0647\u0644\u0643\u064a\u0646",
        "name": "C&A",
        "category_list": [
          {
            "id": "186230924744328",
            "name": "\u0645\u062a\u062c\u0631 \u0645\u0644\u0627\u0628\u0633"
          }
        ],
        "id": "109345009145382"
      },
Your query is 100% correct and should only return Arabic results. Unfortunately, this is a known Facebook Graph Search API bug; it seems to flip back and forth between working and not working.
See the discussions at https://developers.facebook.com/bugs/294623187324442 and https://developers.facebook.com/bugs/409365862525282
I had similar issues working with the Facebook Graph API; it never seems to work quite right.
