I'm building a simple chatbot with Watson. I have a Python script. Assume the script is this for simplicity:
```python
x = 5
x
```
And in Watson I want it to return:
result is 5
However, I'm not sure how to interact with Python. My research suggested it has something to do with NodeJS and JSON, but I couldn't find any example or tutorial that suits my requirements.
Could someone point me to the right course of action, or to any documentation?
The data between Watson Assistant and a client application is exchanged as JSON-formatted data. The service itself has a REST API, and you can use it from any programming language or with command-line tools. For Python, there is even an SDK.
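To give a flavor, here is a minimal round trip with the ibm-watson Python SDK; the API key, service URL, and assistant ID are placeholders, and the exact client class depends on your SDK version:

```python
# Minimal sketch using the ibm-watson package (AssistantV2).
# All credentials and IDs below are placeholders.
from ibm_watson import AssistantV2
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

authenticator = IAMAuthenticator("your-api-key")
assistant = AssistantV2(version="2021-06-14", authenticator=authenticator)
assistant.set_service_url("https://api.us-south.assistant.watson.cloud.ibm.com")

# Open a session and send one user message.
session = assistant.create_session(assistant_id="your-assistant-id").get_result()
response = assistant.message(
    assistant_id="your-assistant-id",
    session_id=session["session_id"],
    input={"message_type": "text", "text": "Hello"},
).get_result()
print(response["output"]["generic"][0]["text"])
```

Note that Watson does not execute your Python script; your client runs the script and exchanges the computed value with the dialog as JSON (for example, as a context variable).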
There are some samples written in Python. I recommend my own code :). There is a tool I wrote to interact with Watson Assistant / Watson Conversation (blog entry is here). Another Python sample is what I called EgoBot (blog is here). It shows how you can even change the dialog itself from within the chatbot. Basically, you can tell the bot to learn stuff. The examples should get you started.
I have built an NLP engine in Python for a domain-specific language. It takes in raw text and extracts semantics and entities. I have also built the conversation's state management using a socket-based method. Now this needs to be pushed into Teams, and I have understood that this can't be done directly (via a Teams outgoing webhook) due to security compliance, so I have to use Azure Cloud. I have been going through the MSFT Bot Builder Framework, but this is not what I want. I need either:
Teams to act like the client.py that I currently have in the socket method, reusing the state management I currently have, or
somehow get Teams to send a POST message to Python (Flask) and then manage the state there, which I have no clue how to approach (a rough sketch of what I mean is below). How should this be done? There seem to be so many steps involved, from Azure to the MSFT Bot Builder Framework. However, I don't need the bots from MSFT; I already have my own bot, which I want to invoke in Python.
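To make option 2 concrete, the rough shape I have in mind is this; handle_message stands in for my existing engine, and the payload fields are only a guess at what a Teams/Bot Framework activity would contain:

```python
from flask import Flask, request, jsonify

# Placeholder for my existing NLP engine / state manager.
from my_bot import handle_message  # hypothetical module

app = Flask(__name__)

@app.route("/api/messages", methods=["POST"])
def messages():
    # The activity shape below is only a guess at what Teams would POST.
    activity = request.get_json()
    conversation_id = activity.get("conversation", {}).get("id")
    user_text = activity.get("text", "")
    reply = handle_message(conversation_id, user_text)  # my own state management
    return jsonify({"type": "message", "text": reply})

if __name__ == "__main__":
    app.run(port=3978)
```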
Does anyone have experience with any of the above approaches?
Thanks
This is my first post ever, so I hope it is alright.
I am working on a Raspberry Pi Zero W, and I am trying to make a live speech-to-text transcriber. From my research, I thought I needed the SpeechRecognition module, and using it I ended up writing a program that does what I need with the Google speech-to-text recognizer; it does the job, just not live.
I think that to make it transcribe live, I need to use IBM Watson Speech to Text with something called WebSockets.
I cannot seem to find much information about those two together, let alone any code. If any of you have experience with live speech-to-text transcription using this or any other approach in Python, I would really appreciate a pointer in the right direction, and any code would be fantastic.
Google has a live speech-to-text transcription API, and they provide source code to get you started. Check this GitHub page. It listens to your microphone and sends you the text version of whatever you are saying in real time.
It is example software that works right out of the box. All you need to do is run it with GOOGLE_APPLICATION_CREDENTIALS set in your environment variables.
If you have used it before, you should already have a billing account set up. If not, please do so here.
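The core pattern in that sample is to stream microphone chunks to the API and print results as they arrive. A condensed sketch (exact signatures depend on your google-cloud-speech version):

```python
# Condensed from the pattern in Google's streaming-microphone sample.
# Requires google-cloud-speech and pyaudio, plus GOOGLE_APPLICATION_CREDENTIALS.
import pyaudio
from google.cloud import speech

RATE = 16000
CHUNK = RATE // 10  # 100 ms of audio per request

def mic_chunks():
    """Yield raw 16-bit mono PCM chunks from the default microphone."""
    pa = pyaudio.PyAudio()
    stream = pa.open(format=pyaudio.paInt16, channels=1, rate=RATE,
                     input=True, frames_per_buffer=CHUNK)
    try:
        while True:
            yield stream.read(CHUNK)
    finally:
        stream.stop_stream()
        stream.close()
        pa.terminate()

client = speech.SpeechClient()
streaming_config = speech.StreamingRecognitionConfig(
    config=speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
        sample_rate_hertz=RATE,
        language_code="en-US",
    ),
    interim_results=True,  # partial results are what makes it feel live
)

requests = (speech.StreamingRecognizeRequest(audio_content=chunk)
            for chunk in mic_chunks())
for response in client.streaming_recognize(streaming_config, requests):
    for result in response.results:
        print(result.alternatives[0].transcript)
```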
I am developing a chatbot using DialogFlow as my natural language processing handler and Python as my client.
My application aims to talk with a human in a Python environment (I am currently using a Jupyter Notebook): it sends the request to DialogFlow, gets the response, then computes the data using some Python libraries and shows the results to the user.
All the process described above is already working.
Now I must find a way to let people use my chatbot online.
Here is my problem: I don't know how to model this.
I think I should put my chatbot in a webpage and make it communicate with my Python application hosted on a server.
Has anybody made something similar?
Given your current architecture, you'll have to do the following:
Write a client for your chatbot in HTML and JavaScript
Write a server in Python that contains your application logic and makes the API calls to Dialogflow
This is a pretty normal architecture for a web application. Given that you're using Python, you might find Flask or Django helpful.
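For example, a minimal Flask endpoint that proxies browser messages to Dialogflow could look like the sketch below; the project ID is a placeholder, and the request body shape with session_id and message fields is my own convention (assuming the google-cloud-dialogflow package):

```python
# Minimal sketch: Flask server that forwards chat messages to Dialogflow.
from flask import Flask, request, jsonify
from google.cloud import dialogflow

PROJECT_ID = "your-gcp-project-id"  # placeholder

app = Flask(__name__)
session_client = dialogflow.SessionsClient()

@app.route("/chat", methods=["POST"])
def chat():
    payload = request.get_json()  # expects {"session_id": ..., "message": ...}
    session = session_client.session_path(PROJECT_ID, payload["session_id"])
    query_input = dialogflow.QueryInput(
        text=dialogflow.TextInput(text=payload["message"], language_code="en-US")
    )
    result = session_client.detect_intent(
        request={"session": session, "query_input": query_input}
    ).query_result
    # This is where your own Python calculations on the result would go.
    return jsonify({"reply": result.fulfillment_text})

if __name__ == "__main__":
    app.run()
```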
There should be plenty of samples out there that can help you figure out what to do; I just found this blog post that demonstrates how to build a simple chat client/server with Flask and websockets.
If you're willing to change your architecture so that the user interacts directly with Dialogflow, with all of your application logic living in the Dialogflow fulfillment webhook, you can make use of Dialogflow's Web Demo integration, which provides a pre-built chat widget you can embed into an HTML page.
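In that setup, the fulfillment webhook is just an HTTPS endpoint that receives the query result and returns JSON. A minimal sketch, assuming the Dialogflow ES v2 request shape:

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/webhook", methods=["POST"])
def webhook():
    req = request.get_json()
    params = req["queryResult"]["parameters"]
    # Run your Python calculations here, then answer the user.
    answer = "Computed result for {}".format(params)
    return jsonify({"fulfillmentText": answer})
```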
I am using the Python libraries from the Assistant SDK for speech recognition via gRPC. I have the speech recognized and returned as a string by reading resp.result.spoken_request_text in \googlesamples\assistant\__main__.py, and I have the answer as an audio stream from the Assistant API via resp.audio_out.audio_data, also in \googlesamples\assistant\__main__.py.
I would like to know whether it is possible to get the answer from the service as a string as well (hoping it is available in the service definition, or that it could be included), and how I could access or request the answer as a string.
Thanks in advance.
Currently (Assistant SDK Developer Preview 1), there is no direct way to do this. You can probably feed the audio stream into a Speech-to-Text system, but that really starts getting silly.
When I spoke to the engineers about this at Google I/O, they indicated that there are some technical complications on their end, but they understand the use cases. They need to see questions like this to know that people want the feature.
Hopefully it will make it into an upcoming Developer Preview.
Update: for google.assistant.embedded.v1alpha2, the Assistant SDK includes the field supplemental_display_text, which is meant to expose the assistant's response as text that aids the user's understanding or can be displayed on screens, thereby making the text available to the developer. See the Google Assistant documentation.
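In the sample's gRPC response loop, reading it looks roughly like this (a sketch based on the v1alpha2 bindings; variable names follow the pushtotalk sample):

```python
# Sketch: inside the v1alpha2 Assist response loop of the sample code.
for resp in assistant.Assist(iter_assist_requests(), DEADLINE_SECS):
    if resp.dialog_state_out.supplemental_display_text:
        print("Assistant text:", resp.dialog_state_out.supplemental_display_text)
```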
I'm looking to write a new application in Ruby/Python which uses a feed from Bloomberg, and I am stuck trying to find any documentation for using (or even setting up) the Bloomberg Server API with either of these languages.
Does anyone have any good links to tutorials for this or maybe some boilerplate code to get set up? Or is it best to just stick to the three main supported languages?
The Bloomberg Open API (BLPAPI) v3.5 release now includes a native Python SDK.
http://www.openbloomberg.com/2012/11/21/open-api-blpapi-v3-5-x-released/
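For a taste of what the Python SDK looks like, a minimal reference-data request goes roughly like this; the host/port, security, and field are placeholders, and it assumes a Server API (or Desktop API) endpoint on localhost:8194:

```python
import blpapi

# Connect to a local Bloomberg Server/Desktop API endpoint (placeholder host/port).
options = blpapi.SessionOptions()
options.setServerHost("localhost")
options.setServerPort(8194)

session = blpapi.Session(options)
if not session.start():
    raise RuntimeError("failed to start session")
if not session.openService("//blp/refdata"):
    raise RuntimeError("failed to open //blp/refdata")

service = session.getService("//blp/refdata")
request = service.createRequest("ReferenceDataRequest")
request.getElement("securities").appendValue("IBM US Equity")
request.getElement("fields").appendValue("PX_LAST")
session.sendRequest(request)

# Drain events until the final RESPONSE arrives.
while True:
    event = session.nextEvent(500)
    for msg in event:
        print(msg)
    if event.eventType() == blpapi.Event.RESPONSE:
        break
session.stop()
```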
Did you check out some questions on SO about this? They might help you:
Bloomberg API request timing out
Asynchronous data through Bloomberg's new data API (COM v3) with Python?
Resolver is a spreadsheet implementation in IronPython and has very good integration with the Bloomberg API:
http://www.resolversystems.com/documentation/apidocs/MarketData_Bloomberg.html
Here is a simple client access API which I wrote with the help of the links mentioned above, as well as some others. Not everything is implemented, but it is a good start:
https://github.com/bpsmith/pybbg