Azure loganalytics Python SDK always throws MissingApiVersionParameter exception

I am trying to follow the official Microsoft docs to get logs from my resource via Azure Monitor, but I never succeed.
My code is below:
from azure.loganalytics import LogAnalyticsDataClient
from azure.common.client_factory import get_client_from_cli_profile
from azure.loganalytics.models import QueryBody
log_client = get_client_from_cli_profile(LogAnalyticsDataClient)
myWorkSpaceId = '1234567890...'
result = log_client.query(myWorkSpaceId, QueryBody(**{'query': 'Heartbeat| limit 50'}))
And I always get an exception like the one below:
result = log_client.query(myWorkSpaceId, QueryBody(**{'query': 'Heartbeat| limit 50'}))
File ".../lib/python2.7/site-packages/azure/loganalytics/log_analytics_data_client.py", line 121, in query
raise models.ErrorResponseException(self._deserialize, response)
azure.loganalytics.models.error_response.ErrorResponseException: (MissingApiVersionParameter) The api-version query parameter (?api-version=) is required for all requests
I traced into the library code in /azure/loganalytics/log_analytics_data_client.py and dumped the URL and parameters used for the query:
print(url, query_parameters, header_parameters, body_content)
request = self._client.post(url, query_parameters)
response = self._client.send(request, header_parameters, body_content, stream=False, **operation_config)
The output of the URL and query information is below. There is no api-version anywhere in it, and I suspect this is why I get the exception:
('https://management.azure.com/workspaces/1234567890.../query', {}, {'Content-Type': 'application/json; charset=utf-8'}, {'query': 'Heartbeat| limit 50'})
My azure SDK version is 4.0.0, and my azure-loganalytics library version is v0.1.0, running on Ubuntu.
Has anyone run into the same issue, or does anyone know how to fix this?
Thanks.
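For reference while debugging, the same query can be sent straight to the Log Analytics REST endpoint, which makes the expected URL shape visible. A minimal sketch using requests, assuming you already have an AAD bearer token valid for https://api.loganalytics.io (the token value is a placeholder):
import requests

# Placeholder: an AAD bearer token for the https://api.loganalytics.io resource.
token = 'YOUR_AAD_BEARER_TOKEN'
workspace_id = '1234567890...'

# The v1 REST endpoint takes the query in a JSON body; no api-version
# query parameter is needed on this host.
resp = requests.post(
    'https://api.loganalytics.io/v1/workspaces/%s/query' % workspace_id,
    json={'query': 'Heartbeat| limit 50'},
    headers={'Authorization': 'Bearer ' + token},
)
print(resp.status_code)
print(resp.json())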

Related

Resource not found on Azure Cognitive Services

I have created an Azure Cognitive Services resource following tutorial 1.
Then I created the environment and ran the following code (from tutorial 2):
# Import required modules.
from azure.cognitiveservices.search.websearch import WebSearchAPI
from azure.cognitiveservices.search.websearch.models import SafeSearch
from msrest.authentication import CognitiveServicesCredentials
# Replace with your subscription key.
subscription_key = "YOUR_SUBSCRIPTION_KEY"
# Instantiate the client and replace with your endpoint.
client = WebSearchAPI(CognitiveServicesCredentials(subscription_key), base_url = "YOUR_ENDPOINT")
# Make a request. Replace Yosemite if you'd like.
web_data = client.web.search(query="Yosemite")
print("\r\nSearched for Query# \" Yosemite \"")
However, it seems the generated subscription key and endpoint are not correctly read by the script, since I get the following error:
File "azu_scrapper.py", line 17, in
web_data = client.web.search(query="Yosemite") File "/home/user/.local/share/virtualenvs/linkedin-CHSAGU1d/lib/python3.7/site-packages/azure/cognitiveservices/search/websearch/operations/web_operations.py",
line 365, in search
raise models.ErrorResponseException(self._deserialize, response) azure.cognitiveservices.search.websearch.models.error_response_py3.ErrorResponseException:
Operation returned an invalid status code 'Resource Not Found'
Any idea why it is not working?
The base_url value should be:
https://<your endpoint>/bing/v7.0
I have tested this on my side and it works for me.
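For completeness, a minimal sketch of the corrected instantiation (YOUR_SUBSCRIPTION_KEY and YOUR_ENDPOINT are placeholders to replace with your own values):
from azure.cognitiveservices.search.websearch import WebSearchAPI
from msrest.authentication import CognitiveServicesCredentials

subscription_key = "YOUR_SUBSCRIPTION_KEY"
# Note the /bing/v7.0 suffix appended to the resource endpoint.
client = WebSearchAPI(
    CognitiveServicesCredentials(subscription_key),
    base_url="https://YOUR_ENDPOINT/bing/v7.0",
)
web_data = client.web.search(query="Yosemite")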

How to make an API call in Python?

I have a Tika server running on my machine and I call its API from the terminal, which works fine; I am able to extract text from images and PDFs. But I want to make the API call from my Python application.
curl -T price.xls http://localhost:9998/tika --header "Accept: text/plain"
Above is the API call I have to make. I can run it in my terminal and it works fine, but how do I implement it in a Python application? I have installed and tried requests.
# Assumed imports for this snippet: os/join from the standard library,
# requests, and TikaApp from the tika-app-python client.
import os
from os.path import join
import requests
from tikapp import TikaApp

API_URL = 'http://localhost:9998/tika'
APP_ROOT = os.path.dirname(os.path.abspath(__file__))
tika_client = TikaApp(file_jar=join(APP_ROOT, '../tika-app-1.19.jar'))
data = {
    "url": join(APP_ROOT, '../static/image/a.pdf')
}
response = requests.put(API_URL, data)
print(response.content)
Any help will be appreciated. Thank you :)
Error output:
INFO tika (application/x-www-form-urlencoded)
WARN tika: Text extraction failed
org.apache.tika.exception.TikaException: Unexpected RuntimeException from org.apache.tika.server.resource.TikaResource$1#475b0e2
at org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:282)
at org.apache.tika.parser.AutoDetectParser.parse(AutoDetectParser.java:143)
at org.apache.tika.server.resource.TikaResource.parse(TikaResource.java:402)
at org.apache.tika.server.resource.TikaResource$5.write(TikaResource.java:513)
at org.apache.cxf.jaxrs.provider.BinaryDataProvider.writeTo(BinaryDataProvider.java:177)
at org.apache.cxf.jaxrs.utils.JAXRSUtils.writeMessageBody(JAXRSUtils.java:1391)
at org.apache.cxf.jaxrs.interceptor.JAXRSOutInterceptor.serializeMessage(JAXRSOutInterceptor.java:246)
at org.apache.cxf.jaxrs.interceptor.JAXRSOutInterceptor.processResponse(JAXRSOutInterceptor.java:122)
at org.apache.cxf.jaxrs.interceptor.JAXRSOutInterceptor.handleMessage(JAXRSOutInterceptor.java:84)
at org.apache.cxf.phase.PhaseInterceptorChain.doIntercept(PhaseInterceptorChain.java:308)
at org.apache.cxf.interceptor.OutgoingChainInterceptor.handleMessage(OutgoingChainInterceptor.java:90)
at org.apache.cxf.phase.PhaseInterceptorChain.doIntercept(PhaseInterceptorChain.java:308)
at org.apache.cxf.transport.ChainInitiationObserver.onMessage(ChainInitiationObserver.java:121)
at org.apache.cxf.transport.http.AbstractHTTPDestination.invoke(AbstractHTTPDestination.java:267)
at org.apache.cxf.transport.http_jetty.JettyHTTPDestination.doService(JettyHTTPDestination.java:247)
at org.apache.cxf.transport.http_jetty.JettyHTTPHandler.handle(JettyHTTPHandler.java:79)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1317)
at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:205)
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1219)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)
at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:219)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
at org.eclipse.jetty.server.Server.handle(Server.java:531)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:352)
at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:260)
at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:281)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:102)
at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:118)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:333)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:310)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:168)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:126)
at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:366)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:762)
at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:680)
at java.lang.Thread.run(Thread.java:748)
Caused by: javax.ws.rs.WebApplicationException: HTTP 415 Unsupported Media Type
at org.apache.tika.server.resource.TikaResource$1.parse(TikaResource.java:128)
at org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:280)
... 37 more
ERROR Problem with writing the data, class org.apache.tika.server.resource.TikaResource$5, ContentType: text/plain
You need to define the data (payload) and the headers:
url = 'http://localhost:9998/tika/......'
headers = {"Accept": "text/plain"}
response = requests.put(url, data=data, headers=headers)
Have a glance at this: Making a request to a RESTful API using Python.
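For the original curl call, a fuller sketch: stream the file bytes in the request body (which is what curl -T does) instead of form-encoding a path, since Tika parses the uploaded bytes rather than a "url" field:
import requests

API_URL = 'http://localhost:9998/tika'

# Upload the raw file bytes, mirroring `curl -T price.xls`.
with open('price.xls', 'rb') as f:
    response = requests.put(
        API_URL,
        data=f,                            # file bytes as the request body
        headers={'Accept': 'text/plain'},  # ask Tika for plain-text output
    )
print(response.text)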

Azure module on webservice

I am trying to publish a machine learning model as an Azure web service using Python. I am able to deploy the code successfully, but when I try to call it through the URL, it throws an error saying the 'azure' module doesn't exist. The code retrieves a TFIDF model from the container (blob) and uses it to predict a new value. The error clearly says the azure package is missing when running on the web service, and I am not sure how to fix it. Here is the code:
For deployment:
from azureml import services
from azure.storage.blob import BlobService
import pandas as pd  # assumed import: pd.read_csv is used below

#services.publish('7c94eb2d9e4c01cbe7ce1063','f78QWNcOXHt9J+Qt1GMzgdEt+m3NXby9JL`npT7XX8ZAGdRZIX/NZ4lL2CkRkGQ==')
#services.types(res=unicode)
#services.returns(str)
def TechBot(res):
    from azure.storage.blob import BlobService
    from gensim.similarities import SparseMatrixSimilarity, MatrixSimilarity, Similarity
    blob_service = BlobService(account_name='tfidf', account_key='RU4R/NIVPsPOoR0bgiJMtosHJMbK1+AVHG0sJCHT6jIdKPRz3cIMYTsrQ5BBD5SELKHUXgBHNmvsIlhEdqUCzw==')
    blob_service.get_blob_to_path('techbot',"2014.csv","df")
    df = pd.read_csv("df")
    doct = res
To access the URL, I used the Python code from service.azureml.net:
import urllib2
import json
import requests
data = {
    "Inputs": {
        "input1": [
            {
                'res': "wifi wnable",
            }
        ],
    },
    "GlobalParameters": {
    }
}
body = str.encode(json.dumps(data))
#proxies = {"http":"http://%s" % proxy}
url = 'http://ussouthcentral.services.azureml.net/workspaces/7c94eb2de26a45399e4c01cbe7ce1063/services/11943e537e0741beb466cd91f738d073/execute?api-version=2.0&format=swagger'
api_key = '8fH9kp67pEt3C6XK9sXDLbyYl5cBNEwYg9VY92xvkxNd+cd2w46sF1ckC3jqrL/m8joV7o3rsTRUydkzRGDYig==' # Replace this with the API key for the web service
headers = {'Content-Type':'application/json', 'Authorization':('Bearer '+ api_key)}
#proxy_support = urllib2.ProxyHandler(proxies)
#opener = urllib2.build_opener(proxy_support, urllib2.HTTPHandler(debuglevel=1))
#urllib2.install_opener(opener)
req = urllib2.Request(url, body, headers)
try:
    response = urllib2.urlopen(req, timeout=60)
    result = response.read()
    print(result)
except urllib2.HTTPError, error:
    print("The request failed with status code: " + str(error.code))
    # Print the headers - they include the request ID and the timestamp, which are useful for debugging the failure
    print(error.info())
    print(json.loads(error.read()))
The string 'res' is what gets predicted at the end. As I said, it runs perfectly fine if I run it as-is in Python by calling the azure module; the problem happens when I access the URL.
Any help is appreciated. Please let me know if you need more information (I only showcased half of my code).
I tried to reproduce the issue via Postman, and I got the error information below, as you said:
{
    "error": {
        "code": "ModuleExecutionError",
        "message": "Module execution encountered an error.",
        "details": [
            {
                "code": "85",
                "target": "Execute Python Script RRS",
                "message": "Error 0085: The following error occurred during script evaluation, please view the output log for more information:\r\n---------- Start of error message from Python interpreter ----------\r\nCaught exception while executing function: Traceback (most recent call last):\n File \"\\server\\InvokePy.py\", line 120, in executeScript\n outframe = mod.azureml_main(*inframes)\n File \"\\temp\\1280677032.py\", line 1094, in azureml_main\n File \"<ipython-input-15-bd03d199b8d9>\", line 6, in TechBot_2\nImportError: No module named azure\n\r\n\r\n---------- End of error message from Python interpreter ----------"
            }
        ]
    }
}
According to the error code 0085 and the message ImportError: No module named azure, I think the issue is caused by importing the Python module azure-storage. There is a similar SO thread, Access Azure blob storage from within an Azure ML experiment, which hit the same issue. I think you can refer to its answer and try to use the HTTP protocol instead of HTTPS in your code to resolve the issue, as in client = BlobService(STORAGE_ACCOUNT, STORAGE_KEY, protocol="http").
Hope it helps. If you have any concerns or updates, please feel free to let me know.
Update: Using HTTP protocol for BlobService
from azureml import services
from azure.storage.blob import BlobService
import pandas as pd  # assumed import: pd.read_csv is used below

#services.publish('7c94eb2d9e4c01cbe7ce1063','f78QWNcOXHt9J+Qt1GMzgdEt+m3NXby9JL`npT7XX8ZAGdRZIX/NZ4lL2CkRkGQ==')
#services.types(res=unicode)
#services.returns(str)
def TechBot(res):
    from azure.storage.blob import BlobService
    from gensim.similarities import SparseMatrixSimilarity, MatrixSimilarity, Similarity
    # Begin: Update code
    # Using `HTTP` protocol for BlobService
    blob_service = BlobService(account_name='tfidf',
                               account_key='RU4R/NIVPsPOoR0bgiJMtosHJMbK1+AVHG0sJCHT6jIdKPRz3cIMYTsrQ5BBD5SELKHUXgBHNmvsIlhEdqUCzw==',
                               protocol='http')
    # End
    blob_service.get_blob_to_path('techbot',"2014.csv","df")
    df = pd.read_csv("df")
    doct = res

geopy openmapquest for Python throws GeocoderInsufficientPrivileges error

I am running the following:
import geopy
geolocator = geopy.geocoders.OpenMapQuest(api_key='my_key_here')
location1 = geolocator.geocode('Madrid')
where my_key_here is my consumer key for MapQuest, and I get the following error:
GeocoderInsufficientPrivileges: HTTP Error 403: Forbidden
Not sure what I am doing wrong.
Thanks!
I've tried the same thing with the same result. After checking the library, I found that the error refers to the line where the request is built, and it seems the API key is not transmitted. If you pass no key in the init statement, api_key defaults to ''; so I tried changing line 66 of my own copy of https://github.com/geopy/geopy/blob/master/geopy/geocoders/openmapquest.py to use my key.
Still no success! The key itself works; I've tested it by calling the URL that the library also calls:
http://open.mapquestapi.com/nominatim/v1/search.php?key="MY_KEY"&format=json&json_callback=renderBasicSearchNarrative&q=westminster+abbey
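To run the same key check from Python rather than the browser, a small sketch against the same endpoint (MY_KEY is a placeholder; json_callback is omitted so the response is plain JSON):
import requests

resp = requests.get(
    'http://open.mapquestapi.com/nominatim/v1/search.php',
    params={'key': 'MY_KEY', 'format': 'json', 'q': 'westminster abbey'},
)
print(resp.status_code)
print(resp.json())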
No idea why this isn't working…
Cheers, kg
I made slight progress fixing this one. I was able to get the query written correctly, but it's the JSON parsing that has me stumped. I know the URL is being built correctly (I checked it in the browser and it returned a JSON object). Maybe someone knows how to parse the returned JSON object to get it to finally work.
Anyway, I had to go into the openmapquest.py source code and, starting from line 66, make the following modifications:
self.api_key = api_key
self.api = "http://www.mapquestapi.com/geocoding/v1/address?"

def geocode(self, query, exactly_one=True, timeout=None):  # pylint: disable=W0221
    """
    Geocode a location query.

    :param string query: The address or query you wish to geocode.
    :param bool exactly_one: Return one result or a list of results, if
        available.
    :param int timeout: Time, in seconds, to wait for the geocoding service
        to respond before raising a :class:`geopy.exc.GeocoderTimedOut`
        exception. Set this only if you wish to override, on this call
        only, the value set during the geocoder's initialization.

    .. versionadded:: 0.97
    """
    params = {
        'key': self.api_key,
        'location': self.format_string % query
    }
    if exactly_one:
        params['maxResults'] = 1
    url = "&".join((self.api, urlencode(params)))
    print url  # Print the URL just to make sure it's produced correctly
Now the task remains to get the _parse_json function working.
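The remaining piece might look roughly like this; a hedged sketch of a _parse_json for the MapQuest geocoding v1 response. The results/locations/latLng key names come from MapQuest's documented response shape; this is not geopy's actual implementation:
import json

def _parse_json(page, exactly_one=True):
    # Assumed response shape:
    # {"results": [{"locations": [{"latLng": {"lat": ..., "lng": ...}, ...}]}]}
    doc = json.loads(page)
    locations = doc['results'][0]['locations']

    def parse_place(place):
        latlng = place['latLng']
        address = place.get('street') or place.get('adminArea5', '')
        return (address, (latlng['lat'], latlng['lng']))

    if exactly_one:
        return parse_place(locations[0])
    return [parse_place(place) for place in locations]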

Using Python for Facebook to search

I'm trying to use Python to search the Facebook Graph for Pages. When I use the Graph API Explorer on the Facebook webpage and I enter:
search?q=aquafresh&type=page
I get the results I'm looking for. When I do the same in Python (after installing the PythonForFacebook module):
post = graph.get_object("search?q=aquafresh&type=post")
I get:
facebook.GraphAPIError: Unsupported get request. Please read the Graph API documentation at https://developers.facebook.com/docs/graph-api
I believe I'm properly authenticating with the token; I'm using the same one as on the webpage, and it works there. I'm also able to do basic queries in Python (e.g., querying for "me" works fine).
I figured out a way to do this that doesn't involve the Facebook SDK module. Instead, call the Graph API directly with the Requests library:
import requests
token = "your_token"
query = "your_query"
requests.get("https://graph.facebook.com/search?access_token=" + token + "&q=" + query + "&type=page")
You can directly use the search method of GraphAPI:
import facebook
TOKEN = "" # Your GraphAPI token here.
graph = facebook.GraphAPI(access_token=TOKEN, version="2.7")
posts = graph.search(q="aquafresh", type="post")
print posts['data']
Instead of:
$ pip install facebook-sdk
use this:
pip install -e git+https://github.com/mobolic/facebook-sdk.git#egg=facebook-sdk
What works for me is:
graph.request('search', {'q': 'aquafresh', 'type': 'page'})
That’s not exactly what you need, but when I try to search for posts (rather than pages) containing “aquafresh,” I get:
In [13]: graph.request('search', {'q': 'aquafresh', 'type': 'post'})
---------------------------------------------------------------------------
GraphAPIError Traceback (most recent call last)
<ipython-input-13-9aa008df54ba> in <module>()
----> 1 graph.request('search', {'q': 'aquafresh', 'type': 'post'})
/home/telofy/.buildout/eggs/facebook_sdk-0.4.0-py2.7.egg/facebook.pyc in request(self, path, args, post_args)
296 except urllib2.HTTPError, e:
297 response = _parse_json(e.read())
--> 298 raise GraphAPIError(response)
299 except TypeError:
300 # Timeout support for Python <2.6
GraphAPIError: (#11) Post search has been deprecated
(I wonder why deprecation doesn’t result in a mere warning.)
The request method seems to be missing from the documentation, but it’s documented in the code.
This should work:
import facebook

aToken = "your access token"
graph = facebook.GraphAPI(access_token=aToken, version="2.10")
data = graph.search(
    q='aquafresh',
    type='post'
)
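To inspect the results, you can then walk the standard Graph API data envelope (field names beyond id vary by object type, so treat anything else as an assumption):
for item in data['data']:
    print(item['id'])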
