I have created an Azure Cognitive Services resource following tutorial 1.
Then I created the environment and ran the following code (from tutorial 2):
# Import required modules.
from azure.cognitiveservices.search.websearch import WebSearchAPI
from azure.cognitiveservices.search.websearch.models import SafeSearch
from msrest.authentication import CognitiveServicesCredentials
# Replace with your subscription key.
subscription_key = "YOUR_SUBSCRIPTION_KEY"
# Instantiate the client and replace with your endpoint.
client = WebSearchAPI(CognitiveServicesCredentials(subscription_key), base_url = "YOUR_ENDPOINT")
# Make a request. Replace Yosemite if you'd like.
web_data = client.web.search(query="Yosemite")
print("\r\nSearched for Query# \" Yosemite \"")
However, it seems the generated subscription key and endpoint are not correctly read by the script, since I get the following error:
File "azu_scrapper.py", line 17, in
web_data = client.web.search(query="Yosemite") File "/home/user/.local/share/virtualenvs/linkedin-CHSAGU1d/lib/python3.7/site-packages/azure/cognitiveservices/search/websearch/operations/web_operations.py",
line 365, in search
raise models.ErrorResponseException(self._deserialize, response) azure.cognitiveservices.search.websearch.models.error_response_py3.ErrorResponseException:
Operation returned an invalid status code 'Resource Not Found'
Any idea why it is not working?
The base_url value should be:
https://<your endpoint>/bing/v7.0
I have tested this on my side and it works for me.
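For example, a minimal sketch of the corrected instantiation (the endpoint value is a placeholder; use the endpoint shown on your resource's overview page in the Azure portal):

from azure.cognitiveservices.search.websearch import WebSearchAPI
from msrest.authentication import CognitiveServicesCredentials

subscription_key = "YOUR_SUBSCRIPTION_KEY"
endpoint = "YOUR_ENDPOINT"  # e.g. https://api.cognitive.microsoft.com

# Append the /bing/v7.0 path to the endpoint when building the client.
client = WebSearchAPI(CognitiveServicesCredentials(subscription_key),
                      base_url=endpoint + "/bing/v7.0")
web_data = client.web.search(query="Yosemite")
if web_data.web_pages:
    print(web_data.web_pages.value[0].name)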
I am currently working on a script that accesses details about an Azure Virtual Machine. This is the code that I have so far:
"""
Instantiate the ComputeManagementClient with the appropriate credentials.
#return ComputeManagementClient object
"""
def get_access_to_virtual_machine():
subscription_id = key.SUBSCRIPTION_ID
credentials = DefaultAzureCredential(authority = AzureAuthorityHosts.AZURE_GOVERNMENT,
exclude_environment_credential = True,
exclude_managed_identity_credential = True,
exclude_shared_token_cache_credential = True)
client = KeyClient(vault_url = key.VAULT_URL, credential = credentials)
compute_client = ComputeManagementClient(credentials, subscription_id)
return compute_client
"""
Check to see if Azure Virtual Machine exists and the state of the virtual machine.
"""
def get_azure_vm(resource_group_name, virtual_machine_name):
compute_client = get_access_to_virtual_machine()
vm_data = compute_client.virtual_machines.get(resource_group_name,
virtual_machine_name,
expand = 'instanceView')
return vm_data
When trying to run get_azure_vm(key.RESOURCE_GROUP, key.VIRTUAL_MACHINE_NAME) which I am certain does have the correct credentials, I get the following error output (note that I replaced the actual subscription ID with 'xxxx' for now):
Traceback (most recent call last):
File "/Users/shilpakancharla/Documents/function_app/WeedsMediaUploadTrigger/event_process.py", line 62, in <module>
vm_data = get_azure_vm(key.RESOURCE_GROUP, key.VIRTUAL_MACHINE_NAME)
File "<decorator-gen-2>", line 2, in get_azure_vm
File "/usr/local/lib/python3.9/site-packages/retry/api.py", line 73, in retry_decorator
return __retry_internal(partial(f, *args, **kwargs), exceptions, tries, delay, max_delay, backoff, jitter,
File "/usr/local/lib/python3.9/site-packages/retry/api.py", line 33, in __retry_internal
return f()
File "/Users/shilpakancharla/Documents/function_app/WeedsMediaUploadTrigger/event_process.py", line 55, in get_azure_vm
vm_data = compute_client.virtual_machines.get(resource_group_name,
File "/usr/local/lib/python3.9/site-packages/azure/mgmt/compute/v2019_12_01/operations/_virtual_machines_operations.py", line 641, in get
map_error(status_code=response.status_code, response=response, error_map=error_map)
File "/usr/local/lib/python3.9/site-packages/azure/core/exceptions.py", line 102, in map_error
raise error
azure.core.exceptions.ResourceNotFoundError: (SubscriptionNotFound) The subscription 'xxxx' could not be found.
Code: SubscriptionNotFound
Message: The subscription 'xxxx' could not be found.
I am using the beta preview version of azure.mgmt.compute, which was installed with pip install azure-mgmt-compute==17.0.0b1. Note that I am also using an Azure Government account. Is there a way to solve this error? I have also tried using ServicePrincipalCredentials and get_azure_credentials(), but ran into different errors; I was recommended to use DefaultAzureCredential and the key vault by a coworker.
There is no problem with the code; it works fine on my side. The error message shows the reason:
azure.core.exceptions.ResourceNotFoundError: (SubscriptionNotFound)
The subscription 'xxxx' could not be found. Code: SubscriptionNotFound
Message: The subscription 'xxxx' could not be found.
It seems you are running the Python code on your local machine. I recommend that you log in with the Azure CLI first, and then check that the subscription ID you used in your Python code is correct.
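As a quick check from Python, here is a minimal sketch (assuming the azure-identity and azure-mgmt-resource packages) that lists the subscriptions your credential can actually see; the Azure Government endpoint and scope below are assumptions based on your use of AzureAuthorityHosts.AZURE_GOVERNMENT:

from azure.identity import AzureAuthorityHosts, DefaultAzureCredential
from azure.mgmt.resource import SubscriptionClient

credential = DefaultAzureCredential(authority=AzureAuthorityHosts.AZURE_GOVERNMENT)
# Azure Government uses a different ARM endpoint and token scope than the public cloud.
subscription_client = SubscriptionClient(
    credential,
    base_url="https://management.usgovcloudapi.net",
    credential_scopes=["https://management.usgovcloudapi.net/.default"])
# If the subscription ID from your config is not in this list, the credential
# is pointed at the wrong cloud or tenant, which would explain SubscriptionNotFound.
for sub in subscription_client.subscriptions.list():
    print(sub.subscription_id, sub.display_name)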
I am trying to follow the official MS doc to get logs from my resource via Azure Monitor Logs, but I never succeed.
My code is like below.
from azure.loganalytics import LogAnalyticsDataClient
from azure.common.client_factory import get_client_from_cli_profile
from azure.loganalytics.models import QueryBody
log_client = get_client_from_cli_profile(LogAnalyticsDataClient)
myWorkSpaceId = '1234567890...'
result = log_client.query(myWorkSpaceId, QueryBody(**{'query': 'Heartbeat| limit 50'}))
And I always get an exception like the one below:
result = log_client.query(myWorkSpaceId, QueryBody(**{'query': 'Heartbeat| limit 50'}))
File ".../lib/python2.7/site-packages/azure/loganalytics/log_analytics_data_client.py", line 121, in query
raise models.ErrorResponseException(self._deserialize, response)
azure.loganalytics.models.error_response.ErrorResponseException: (MissingApiVersionParameter) The api-version query parameter (?api-version=) is required for all requests
I traced into the library code in /azure/loganalytics/log_analytics_data_client.py and dumped the URL string used for the query, like below:
print(url, query_parameters, header_parameters, body_content)
request = self._client.post(url, query_parameters)
response = self._client.send(request, header_parameters, body_content, stream=False, **operation_config)
The output of the URL and query information is below; it looks like there is no api-version information in it, and I suspect this is why I get the exception:
('https://management.azure.com/workspaces/1234567890.../query', {}, {'Content-Type': 'application/json; charset=utf-8'}, {'query': 'Heartbeat| limit 50'})
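To test that hypothesis, one could replay the same POST manually with an explicit api-version; this is only a sketch, and both the api-version value and the bearer token placeholder are assumptions:

import requests

url = 'https://management.azure.com/workspaces/1234567890.../query'
# '2017-01-01-preview' is a guess at an accepted version; replace <AAD_TOKEN>
# with a valid Azure AD bearer token before running.
resp = requests.post(url,
                     params={'api-version': '2017-01-01-preview'},
                     headers={'Authorization': 'Bearer <AAD_TOKEN>'},
                     json={'query': 'Heartbeat| limit 50'})
print(resp.status_code, resp.text)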
My Azure SDK version is 4.0.0 and my azure-loganalytics library version is v0.1.0, running on Ubuntu.
Does anyone run into the same issue, or know how to fix this?
Thanks.
I'm trying to access a csv file in my Watson Data Platform catalog. I used the code generation functionality from my DSX notebook: Insert to code > Insert StreamingBody object.
The generated code was:
import os
import types
import pandas as pd
import boto3
def __iter__(self): return 0
# #hidden_cell
# The following code accesses a file in your IBM Cloud Object Storage. It includes your credentials.
# You might want to remove those credentials before you share your notebook.
os.environ['AWS_ACCESS_KEY_ID'] = '******'
os.environ['AWS_SECRET_ACCESS_KEY'] = '******'
endpoint = 's3-api.us-geo.objectstorage.softlayer.net'
bucket = 'catalog-test'
cos_12345 = boto3.resource('s3', endpoint_url=endpoint)
body = cos_12345.Object(bucket,'my.csv').get()['Body']
# add missing __iter__ method so pandas accepts body as file-like object
if not hasattr(body, "__iter__"): body.__iter__ = types.MethodType(__iter__, body)
df_data_2 = pd.read_csv(body)
df_data_2.head()
When I try to run this code, I get:
/usr/local/src/conda3_runtime.v27/4.1.1/lib/python3.5/site-packages/botocore/endpoint.py in create_endpoint(self, service_model, region_name, endpoint_url, verify, response_parser_factory, timeout, max_pool_connections)
270 if not is_valid_endpoint_url(endpoint_url):
271
--> 272 raise ValueError("Invalid endpoint: %s" % endpoint_url)
273 return Endpoint(
274 endpoint_url,
ValueError: Invalid endpoint: s3-api.us-geo.objectstorage.service.networklayer.com
What is strange is that if I generate the code for the SparkSession setup instead, the same endpoint is used, but the Spark code runs ok.
How can I fix this issue?
I'm presuming the same issue will be encountered for the other SoftLayer endpoints, so I'm listing them here as well to ensure this question also applies to the other SoftLayer locations:
s3-api.us-geo.objectstorage.softlayer.net
s3-api.dal-us-geo.objectstorage.softlayer.net
s3-api.sjc-us-geo.objectstorage.softlayer.net
s3-api.wdc-us-geo.objectstorage.softlayer.net
s3.us-south.objectstorage.softlayer.net
s3.us-east.objectstorage.softlayer.net
s3.eu-geo.objectstorage.softlayer.net
s3.ams-eu-geo.objectstorage.softlayer.net
s3.fra-eu-geo.objectstorage.softlayer.net
s3.mil-eu-geo.objectstorage.softlayer.net
s3.eu-gb.objectstorage.softlayer.net
The solution was to prefix the endpoint with https://, changing from this:
endpoint = 's3-api.us-geo.objectstorage.softlayer.net'
to this:
endpoint = 'https://s3-api.us-geo.objectstorage.softlayer.net'
For IBM Cloud Object Storage, it should be import ibm_boto3 rather than import boto3. The original boto3 is for accessing AWS, which uses different authentication; perhaps the two also interpret the endpoint value differently.
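For illustration, a minimal sketch of the ibm_boto3 variant with HMAC credentials; the endpoint, bucket, and object names are taken from the question above, and the exact parameter set is an assumption about your service instance:

import ibm_boto3
import pandas as pd

# HMAC-style credentials, mirroring the generated snippet; note the https:// prefix.
cos = ibm_boto3.resource(
    's3',
    aws_access_key_id='******',
    aws_secret_access_key='******',
    endpoint_url='https://s3-api.us-geo.objectstorage.softlayer.net')

body = cos.Object('catalog-test', 'my.csv').get()['Body']
df = pd.read_csv(body)
print(df.head())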
I am trying to publish a machine learning model as an Azure web service using Python. I am able to deploy the code successfully, but when I try to call it through the URL, it throws an error saying the 'azure' module doesn't exist. The code basically retrieves a TFIDF model from the container (blob) and uses it to predict the new value. The error clearly says the azure package is missing when running on the web service, and I am not sure how to fix it. Here goes the code:
For deployment:
from azureml import services
from azure.storage.blob import BlobService
import pandas as pd

#services.publish('7c94eb2d9e4c01cbe7ce1063','f78QWNcOXHt9J+Qt1GMzgdEt+m3NXby9JL`npT7XX8ZAGdRZIX/NZ4lL2CkRkGQ==')
#services.types(res=unicode)
#services.returns(str)
def TechBot(res):
    from azure.storage.blob import BlobService
    from gensim.similarities import SparseMatrixSimilarity, MatrixSimilarity, Similarity
    blob_service = BlobService(account_name='tfidf', account_key='RU4R/NIVPsPOoR0bgiJMtosHJMbK1+AVHG0sJCHT6jIdKPRz3cIMYTsrQ5BBD5SELKHUXgBHNmvsIlhEdqUCzw==')
    blob_service.get_blob_to_path('techbot',"2014.csv","df")
    df = pd.read_csv("df")
    doct = res
To access the URL, I used the Python code from service.azureml.net:
import urllib2
import json
import requests

data = {
    "Inputs": {
        "input1":
        [
            {
                'res': "wifi wnable",
            }
        ],
    },
    "GlobalParameters": {
    }
}

body = str.encode(json.dumps(data))
#proxies = {"http":"http://%s" % proxy}
url = 'http://ussouthcentral.services.azureml.net/workspaces/7c94eb2de26a45399e4c01cbe7ce1063/services/11943e537e0741beb466cd91f738d073/execute?api-version=2.0&format=swagger'
api_key = '8fH9kp67pEt3C6XK9sXDLbyYl5cBNEwYg9VY92xvkxNd+cd2w46sF1ckC3jqrL/m8joV7o3rsTRUydkzRGDYig==' # Replace this with the API key for the web service
headers = {'Content-Type':'application/json', 'Authorization':('Bearer '+ api_key)}

#proxy_support = urllib2.ProxyHandler(proxies)
#opener = urllib2.build_opener(proxy_support, urllib2.HTTPHandler(debuglevel=1))
#urllib2.install_opener(opener)

req = urllib2.Request(url, body, headers)

try:
    response = urllib2.urlopen(req, timeout=60)
    result = response.read()
    print(result)
except urllib2.HTTPError, error:
    print("The request failed with status code: " + str(error.code))
    # Print the headers - they include the request ID and the timestamp, which are useful for debugging the failure
    print(error.info())
    print(json.loads(error.read()))
The string 'res' is what will be predicted at the end. As I said, it runs perfectly fine if I run it as-is in Python by calling the azure module; the problem happens when I access the URL.
Any help is appreciated. Please let me know if you need more information (I only showcased half of my code).
I tried to reproduce the issue via Postman, and I got the error information below, as you said:
{
    "error": {
        "code": "ModuleExecutionError",
        "message": "Module execution encountered an error.",
        "details": [
            {
                "code": "85",
                "target": "Execute Python Script RRS",
                "message": "Error 0085: The following error occurred during script evaluation, please view the output log for more information:\r\n---------- Start of error message from Python interpreter ----------\r\nCaught exception while executing function: Traceback (most recent call last):\n File \"\\server\\InvokePy.py\", line 120, in executeScript\n outframe = mod.azureml_main(*inframes)\n File \"\\temp\\1280677032.py\", line 1094, in azureml_main\n File \"<ipython-input-15-bd03d199b8d9>\", line 6, in TechBot_2\nImportError: No module named azure\n\r\n\r\n---------- End of error message from Python interpreter ----------"
            }
        ]
    }
}
According to the error code 0085 and the message ImportError: No module named azure, I think the issue was caused by importing the Python module azure-storage. There is a similar SO thread, Access Azure blob storage from within an Azure ML experiment, which hit the same issue. Following its answer, you can try to use the HTTP protocol instead of HTTPS in your code to resolve the issue, as in client = BlobService(STORAGE_ACCOUNT, STORAGE_KEY, protocol="http").
Hope it helps. If you have any concerns or updates, please feel free to let me know.
Update: Using HTTP protocol for BlobService
from azureml import services
from azure.storage.blob import BlobService
import pandas as pd

#services.publish('7c94eb2d9e4c01cbe7ce1063','f78QWNcOXHt9J+Qt1GMzgdEt+m3NXby9JL`npT7XX8ZAGdRZIX/NZ4lL2CkRkGQ==')
#services.types(res=unicode)
#services.returns(str)
def TechBot(res):
    from azure.storage.blob import BlobService
    from gensim.similarities import SparseMatrixSimilarity, MatrixSimilarity, Similarity
    # Begin: Update code
    # Using `HTTP` protocol for BlobService
    blob_service = BlobService(account_name='tfidf',
                               account_key='RU4R/NIVPsPOoR0bgiJMtosHJMbK1+AVHG0sJCHT6jIdKPRz3cIMYTsrQ5BBD5SELKHUXgBHNmvsIlhEdqUCzw==',
                               protocol='http')
    # End
    blob_service.get_blob_to_path('techbot',"2014.csv","df")
    df = pd.read_csv("df")
    doct = res
Just getting started on the AdWords API; for some reason I can't seem to connect at all.
The code below, straight from the tutorial, throws this error:
Traceback (most recent call last):
File "<pyshell#12>", line 1, in <module>
client = AdWordsClient(path=os.path.join('Users', 'ravinthambapillai', 'Google Drive', 'client_secrets.json'))
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/adspygoogle/adwords/AdWordsClient.py", line 151, in __init__
self._headers = self.__LoadAuthCredentials()
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/adspygoogle/adwords/AdWordsClient.py", line 223, in __LoadAuthCredentials
return super(AdWordsClient, self)._LoadAuthCredentials()
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/adspygoogle/common/Client.py", line 94, in _LoadAuthCredentials
raise ValidationError(msg)
**ValidationError: Authentication data is missing.**
import os

from adspygoogle.adwords.AdWordsClient import AdWordsClient
from adspygoogle.common import Utils

client = AdWordsClient(path=os.path.join('Users', 'this-user', 'this-folder', 'client_secrets.json'))
It looks like there are two issues. First, try removing the last path element: as far as I recall, the path parameter expects a directory that contains the authentication pickle, logs, etc. This approach requires that you already have a valid auth_token.pkl.
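For instance, a sketch of that first fix (the directory layout is an assumption; the path elements come from your traceback):

import os
from adspygoogle.adwords.AdWordsClient import AdWordsClient

# Pass the directory that holds auth_token.pkl (and logs), not the JSON file itself.
client = AdWordsClient(path=os.path.join('/Users', 'ravinthambapillai', 'Google Drive'))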
Second, it appears that you're using OAuth2 for authentication (I'm guessing from the client_secrets.json file). For this to work, you'll need to use the oauth2client library and provide an oauth2credentials instance in the headers parameter to AdWordsClient.
The following is straight from the file examples/adspygoogle/adwords/v201302/misc/use_oauth2.py in the client distribution and should give you an idea how it works:
# We're using the oauth2client library:
# http://code.google.com/p/google-api-python-client/downloads/list
flow = OAuth2WebServerFlow(
    client_id=oauth2_client_id,
    client_secret=oauth2_client_secret,
    # Scope is the server address with '/api/adwords' appended.
    scope='https://adwords.google.com/api/adwords',
    user_agent='oauth2 code example')

# Get the authorization URL to direct the user to.
authorize_url = flow.step1_get_authorize_url()
print ('Log in to your AdWords account and open the following URL: \n%s\n' %
       authorize_url)
print 'After approving the token enter the verification code (if specified).'
code = raw_input('Code: ').strip()

credential = None
try:
    credential = flow.step2_exchange(code)
except FlowExchangeError, e:
    sys.exit('Authentication has failed: %s' % e)

# Create the AdWordsUser and set the OAuth2 credentials.
client = AdWordsClient(headers={
    'developerToken': '%s++USD' % email,
    'clientCustomerId': client_customer_id,
    'userAgent': 'OAuth2 Example',
    'oauth2credentials': credential
})
I am not familiar with the AdWordsClient API, but are you sure your path is correct?
Your current join produces a relative path; do you need an absolute one?
>>> import os
>>> os.path.join('Users', 'this-user')
'Users/this-user'
For testing, you could hardcode the absolute path to make sure it is not a path issue.
I would also make sure that client_secrets.json exists and that it is readable by the user executing Python.
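For example, a quick sanity check along those lines (the absolute path shown is the one from your traceback, and assumed):

import os

path = '/Users/ravinthambapillai/Google Drive/client_secrets.json'
# Verify the file exists and is readable by the user running Python.
print(os.path.exists(path), os.access(path, os.R_OK))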