Azure ML: Upload File to Step Run's Output - Authentication Error - python

During a PythonScriptStep in an Azure ML pipeline, I'm saving a model as a joblib pickle dump to a directory in a blob container in Azure Blob Storage, which I created during the setup of the Azure ML workspace. Afterwards I'm trying to upload this model file to the step run's output directory using
Run.upload_file(name, path_or_stream)
(for the function's documentation, see https://learn.microsoft.com/en-us/python/api/azureml-core/azureml.core.run(class)?view=azure-ml-py#upload-file-name--path-or-stream--datastore-name-none-)
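For reference, the relevant part of the step script looks roughly like this (a minimal sketch; the model object, file names and paths are placeholders, not the exact code):
import joblib
from azureml.core import Run
from sklearn.linear_model import LinearRegression

# Minimal sketch of the failing pattern (model and paths are placeholders)
run = Run.get_context()

model = LinearRegression().fit([[0.0], [1.0]], [0.0, 1.0])  # stand-in for the real model
model_path = "Model.pkl"                                     # local file in the step's working directory
joblib.dump(model, model_path)

# Upload the saved file into the step run's outputs -> this call raises the error below
run.upload_file(name="outputs/models/Model.pkl", path_or_stream=model_path)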
Some time ago, when I created the script using azureml-sdk version 1.18.0, everything worked fine. Now I've updated the script's functionality and upgraded the azureml-sdk to version 1.33.0 in the process, and the upload function runs into the following error:
Traceback (most recent call last):
File "/opt/miniconda/lib/python3.7/site-packages/azureml/_file_utils/upload.py", line 64, in upload_blob_from_stream
validate_content=True)
File "/opt/miniconda/lib/python3.7/site-packages/azureml/_restclient/clientbase.py", line 93, in execute_func_with_reset
return ClientBase._execute_func_internal(backoff, retries, module_logger, func, reset_func, *args, **kwargs)
File "/opt/miniconda/lib/python3.7/site-packages/azureml/_restclient/clientbase.py", line 367, in _execute_func_internal
left_retry = cls._handle_retry(back_off, left_retry, total_retry, error, logger, func)
File "/opt/miniconda/lib/python3.7/site-packages/azureml/_restclient/clientbase.py", line 399, in _handle_retry
raise error
File "/opt/miniconda/lib/python3.7/site-packages/azureml/_restclient/clientbase.py", line 358, in _execute_func_internal
response = func(*args, **kwargs)
File "/opt/miniconda/lib/python3.7/site-packages/azureml/_vendor/azure_storage/blob/blockblobservice.py", line 614, in create_blob_from_stream
initialization_vector=iv
File "/opt/miniconda/lib/python3.7/site-packages/azureml/_vendor/azure_storage/blob/_upload_chunking.py", line 98, in _upload_blob_chunks
range_ids = [f.result() for f in futures]
File "/opt/miniconda/lib/python3.7/site-packages/azureml/_vendor/azure_storage/blob/_upload_chunking.py", line 98, in <listcomp>
range_ids = [f.result() for f in futures]
File "/opt/miniconda/lib/python3.7/concurrent/futures/_base.py", line 435, in result
return self.__get_result()
File "/opt/miniconda/lib/python3.7/concurrent/futures/_base.py", line 384, in __get_result
raise self._exception
File "/opt/miniconda/lib/python3.7/concurrent/futures/thread.py", line 57, in run
result = self.fn(*self.args, **self.kwargs)
File "/opt/miniconda/lib/python3.7/site-packages/azureml/_vendor/azure_storage/blob/_upload_chunking.py", line 210, in process_chunk
return self._upload_chunk_with_progress(chunk_offset, chunk_bytes)
File "/opt/miniconda/lib/python3.7/site-packages/azureml/_vendor/azure_storage/blob/_upload_chunking.py", line 224, in _upload_chunk_with_progress
range_id = self._upload_chunk(chunk_offset, chunk_data)
File "/opt/miniconda/lib/python3.7/site-packages/azureml/_vendor/azure_storage/blob/_upload_chunking.py", line 269, in _upload_chunk
timeout=self.timeout,
File "/opt/miniconda/lib/python3.7/site-packages/azureml/_vendor/azure_storage/blob/blockblobservice.py", line 1013, in _put_block
self._perform_request(request)
File "/opt/miniconda/lib/python3.7/site-packages/azureml/_vendor/azure_storage/common/storageclient.py", line 432, in _perform_request
raise ex
File "/opt/miniconda/lib/python3.7/site-packages/azureml/_vendor/azure_storage/common/storageclient.py", line 357, in _perform_request
raise ex
File "/opt/miniconda/lib/python3.7/site-packages/azureml/_vendor/azure_storage/common/storageclient.py", line 343, in _perform_request
HTTPError(response.status, response.message, response.headers, response.body))
File "/opt/miniconda/lib/python3.7/site-packages/azureml/_vendor/azure_storage/common/_error.py", line 115, in _http_error_handler
raise ex
azure.common.AzureHttpError: Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature. ErrorCode: AuthenticationFailed
<?xml version="1.0" encoding="utf-8"?><Error><Code>AuthenticationFailed</Code><Message>Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.
RequestId:5d4e1b7e-c01e-0070-0d47-9bf8a0000000
Time:2021-08-27T13:30:02.2685991Z</Message><AuthenticationErrorDetail>Signature did not match. String to sign used was rcw
2021-08-27T13:19:56Z
2021-08-28T13:29:56Z
/blob/mystorage/azureml/ExperimentRun/dcid.98d11a7b-2aac-4bc0-bd64-bb4d72e0e0be/outputs/models/Model.pkl
2019-07-07
b
</AuthenticationErrorDetail></Error>
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/mnt/batch/tasks/shared/LS_root/jobs/.../azureml-setup/context_manager_injector.py", line 243, in execute_with_context
runpy.run_path(sys.argv[0], globals(), run_name="__main__")
File "/opt/miniconda/lib/python3.7/runpy.py", line 263, in run_path
pkg_name=pkg_name, script_name=fname)
File "/opt/miniconda/lib/python3.7/runpy.py", line 96, in _run_module_code
mod_name, mod_spec, pkg_name, script_name)
File "/opt/miniconda/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "401_AML_Pipeline_Time_Series_Model_Training_Azure_ML_CPU.py", line 318, in <module>
main()
File "401_AML_Pipeline_Time_Series_Model_Training_Azure_ML_CPU.py", line 286, in main
path_or_stream=model_path)
File "/opt/miniconda/lib/python3.7/site-packages/azureml/core/run.py", line 53, in wrapped
return func(self, *args, **kwargs)
File "/opt/miniconda/lib/python3.7/site-packages/azureml/core/run.py", line 1989, in upload_file
datastore_name=datastore_name)
File "/opt/miniconda/lib/python3.7/site-packages/azureml/_restclient/artifacts_client.py", line 114, in upload_artifact
return self.upload_artifact_from_path(artifact, *args, **kwargs)
File "/opt/miniconda/lib/python3.7/site-packages/azureml/_restclient/artifacts_client.py", line 107, in upload_artifact_from_path
return self.upload_artifact_from_stream(stream, *args, **kwargs)
File "/opt/miniconda/lib/python3.7/site-packages/azureml/_restclient/artifacts_client.py", line 99, in upload_artifact_from_stream
content_type=content_type, session=session)
File "/opt/miniconda/lib/python3.7/site-packages/azureml/_restclient/artifacts_client.py", line 88, in upload_stream_to_existing_artifact
timeout=TIMEOUT, backoff=BACKOFF_START, retries=RETRY_LIMIT)
File "/opt/miniconda/lib/python3.7/site-packages/azureml/_file_utils/upload.py", line 71, in upload_blob_from_stream
raise AzureMLException._with_error(azureml_error, inner_exception=e)
azureml._common.exceptions.AzureMLException: AzureMLException:
Message: Encountered authorization error while uploading to blob storage. Please check the storage account attached to your workspace. Make sure that the current user is authorized to access the storage account and that the request is not blocked by a firewall, virtual network, or other security setting.
StorageAccount: mystorage
ContainerName: azureml
StatusCode: 403
InnerException Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature. ErrorCode: AuthenticationFailed
<?xml version="1.0" encoding="utf-8"?><Error><Code>AuthenticationFailed</Code><Message>Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.
RequestId:5d4e1b7e-c01e-0070-0d47-9bf8a0000000
Time:2021-08-27T13:30:02.2685991Z</Message><AuthenticationErrorDetail>Signature did not match. String to sign used was rcw
2021-08-27T13:19:56Z
2021-08-28T13:29:56Z
/blob/mystorage/azureml/ExperimentRun/dcid.98d11a7b-2aac-4bc0-bd64-bb4d72e0e0be/outputs/models/Model.pkl
2019-07-07
b
</AuthenticationErrorDetail></Error>
ErrorResponse
{
"error": {
"code": "UserError",
"message": "Encountered authorization error while uploading to blob storage. Please check the storage account attached to your workspace. Make sure that the current user is authorized to access the storage account and that the request is not blocked by a firewall, virtual network, or other security setting.\n\tStorageAccount: mystorage\n\tContainerName: azureml\n\tStatusCode: 403",
"inner_error": {
"code": "Auth",
"inner_error": {
"code": "Authorization"
}
}
}
}
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "401_AML_Pipeline_Time_Series_Model_Training_Azure_ML_CPU.py", line 318, in <module>
main()
File "401_AML_Pipeline_Time_Series_Model_Training_Azure_ML_CPU.py", line 286, in main
path_or_stream=model_path)
File "/opt/miniconda/lib/python3.7/site-packages/azureml/core/run.py", line 53, in wrapped
return func(self, *args, **kwargs)
File "/opt/miniconda/lib/python3.7/site-packages/azureml/core/run.py", line 1989, in upload_file
datastore_name=datastore_name)
File "/opt/miniconda/lib/python3.7/site-packages/azureml/_restclient/artifacts_client.py", line 114, in upload_artifact
return self.upload_artifact_from_path(artifact, *args, **kwargs)
File "/opt/miniconda/lib/python3.7/site-packages/azureml/_restclient/artifacts_client.py", line 107, in upload_artifact_from_path
return self.upload_artifact_from_stream(stream, *args, **kwargs)
File "/opt/miniconda/lib/python3.7/site-packages/azureml/_restclient/artifacts_client.py", line 99, in upload_artifact_from_stream
content_type=content_type, session=session)
File "/opt/miniconda/lib/python3.7/site-packages/azureml/_restclient/artifacts_client.py", line 88, in upload_stream_to_existing_artifact
timeout=TIMEOUT, backoff=BACKOFF_START, retries=RETRY_LIMIT)
File "/opt/miniconda/lib/python3.7/site-packages/azureml/_file_utils/upload.py", line 71, in upload_blob_from_stream
raise AzureMLException._with_error(azureml_error, inner_exception=e)
UserScriptException: UserScriptException:
Message: Encountered authorization error while uploading to blob storage. Please check the storage account attached to your workspace. Make sure that the current user is authorized to access the storage account and that the request is not blocked by a firewall, virtual network, or other security setting.
StorageAccount: mystorage
ContainerName: azureml
StatusCode: 403
InnerException AzureMLException:
Message: Encountered authorization error while uploading to blob storage. Please check the storage account attached to your workspace. Make sure that the current user is authorized to access the storage account and that the request is not blocked by a firewall, virtual network, or other security setting.
StorageAccount: mystorage
ContainerName: azureml
StatusCode: 403
InnerException Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature. ErrorCode: AuthenticationFailed
<?xml version="1.0" encoding="utf-8"?><Error><Code>AuthenticationFailed</Code><Message>Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.
RequestId:5d4e1b7e-c01e-0070-0d47-9bf8a0000000
Time:2021-08-27T13:30:02.2685991Z</Message><AuthenticationErrorDetail>Signature did not match. String to sign used was rcw
2021-08-27T13:19:56Z
2021-08-28T13:29:56Z
/blob/mystorage/azureml/ExperimentRun/dcid.98d11a7b-2aac-4bc0-bd64-bb4d72e0e0be/outputs/models/Model.pkl
2019-07-07
b
</AuthenticationErrorDetail></Error>
ErrorResponse
{
"error": {
"code": "UserError",
"message": "Encountered authorization error while uploading to blob storage. Please check the storage account attached to your workspace. Make sure that the current user is authorized to access the storage account and that the request is not blocked by a firewall, virtual network, or other security setting.\n\tStorageAccount: verovisionstorage\n\tContainerName: azureml\n\tStatusCode: 403",
"inner_error": {
"code": "Auth",
"inner_error": {
"code": "Authorization"
}
}
}
}
ErrorResponse
{
"error": {
"code": "UserError",
"message": "Encountered authorization error while uploading to blob storage. Please check the storage account attached to your workspace. Make sure that the current user is authorized to access the storage account and that the request is not blocked by a firewall, virtual network, or other security setting.\n\tStorageAccount: mystorage\n\tContainerName: azureml\n\tStatusCode: 403"
}
}
As far as I can tell from the code of the azureml.core.Run class and the subsequent function calls, the Run object tries to upload the file to the step run's output directory using SAS token authentication (which fails). This documentation article is linked in the code (but I don't know if it relates to the issue): https://learn.microsoft.com/en-us/rest/api/storageservices/create-service-sas#service-sas-example
Did anybody encounter this error as well and knows what causes it or how it can be resolved?
Best,
Jonas

We've seen this before, it's annoying. I think the answer is to go to the datastores page of the AML Studio UI and manually re-enter the storage account key.
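If you'd rather do it from the SDK than the Studio UI, re-registering the workspace's blob datastore with the current account key should have the same effect (a hedged sketch; the datastore, container and account names are placeholders for your setup):
from azureml.core import Workspace, Datastore

ws = Workspace.from_config()

# Re-register the blob datastore with the current storage account key (names are placeholders)
Datastore.register_azure_blob_container(
    workspace=ws,
    datastore_name="workspaceblobstore",
    container_name="azureml",
    account_name="mystorage",
    account_key="<current-storage-account-key>",
    overwrite=True,
)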

Related

My GitLab server responds with a 500 error but still receives and uploads the files that are sent

So I have a very strange problem. I am working on an application that uses a GitLab instance (version 15.5.1) running in the background on my CentOS server. I am using the API, the python-gitlab library, and my Flask app. I was making a little change to my function, adding a base64 decoder; before this everything worked just fine. So after this change I sent one request with Postman to see if it works.
And this is where the problem starts: the function works, it decodes the base64 and sends it to the server, where it is saved in the repo. BUT the server responds with 500 INTERNAL SERVER ERROR in Postman.
ERROR FROM TERMINAL:
[2023-02-17 10:07:45,597] ERROR in app: Exception on /data [POST]
Traceback (most recent call last):
File "exceptions.py", line 337, in wrapped_f
return f(*args, **kwargs)
File "mixins.py", line 246, in list
obj = self.gitlab.http_list(path, **data)
File "client.py", line 939, in http_list
return list(GitlabList(self, url, query_data, **kwargs))
File "client.py", line 1231, in __next__
return self.next()
File "client.py", line 1242, in next
self._query(self._next_url, **self._kwargs)
File "/client.py", line 1154, in _query
result = self._gl.http_request("get", url, query_data=query_data, **kwargs)
File "client.py", line 798, in http_request
raise gitlab.exceptions.GitlabHttpError(
gitlab.exceptions.GitlabHttpError: 404: HERE STARTS LONG HTML FILE
MY FUNCTION: type and text are JSON parameters; the JSON file is below this code
# Imports shown for completeness; gl (the python-gitlab client), config and
# compared_type are assumed to be defined elsewhere in the module.
import base64
import uuid

def pushFile(type, text):
    # Decode the base64-encoded payload from the request
    decoded_text = base64.b64decode(text)
    project_id = config.REPO_ID
    project = gl.projects.get(project_id)
    # RANDOM ID so each uploaded file gets a unique name
    uni_id = uuid.uuid1()
    f = project.files.create({'file_path': f'{compared_type}_RULES/{type}_{uni_id}.txt',
                              'branch': 'main',
                              'content': f'{decoded_text}',
                              'author_email': 'test@example.com',
                              'author_name': 'yourname',
                              'commit_message': 'Create testfile'})
JSON:
{
    "type": "radar",
    "text": "dGVzdHRlc3R0ZXN0dGVzdHRlc3R0ZXN0dGVzdHRlc3R0ZXN0dGVzdHRlc3R0ZXN0dGVzdHRlc3R0ZXN0dGVzdHRlc3R0ZXN0dGVzdHRlc3R0ZXN0dGVzdA=="
}
So I tried to:
Restart the GitLab instance
Delete the base64 decoder
But nothing helped; I still get the 500 error, yet the files are still uploaded. Does someone have any idea what might be wrong?

NoCredentialsError: Unable to locate credentials in Hugging Face Library

So I am not using Hugging Face a lot for my AI, but I've discovered that you can train your AI with it, so I tried to use my machine to train it, but I kept getting this error:
PS C:\Users\gboss\OneDrive\Bureau\Ai training> & C:/Users/gboss/AppData/Local/Programs/Python/Python310/python.exe "c:/Users/gboss/OneDrive/Bureau/Ai training/AiTraining.py"
Traceback (most recent call last):
File "c:\Users\gboss\OneDrive\Bureau\Ai training\AiTraining.py", line 8, in <module>
role = iam_client.get_role(RoleName='{IAM_ROLE_WITH_SAGEMAKER_PERMISSIONS}')['Role']['Arn']
File "C:\Users\gboss\AppData\Local\Programs\Python\Python310\lib\site-packages\botocore\client.py", line 514, in _api_call
return self._make_api_call(operation_name, kwargs)
File "C:\Users\gboss\AppData\Local\Programs\Python\Python310\lib\site-packages\botocore\client.py", line 921, in _make_api_call
http, parsed_response = self._make_request(
File "C:\Users\gboss\AppData\Local\Programs\Python\Python310\lib\site-packages\botocore\client.py", line 944, in _make_request
return self._endpoint.make_request(operation_model, request_dict)
File "C:\Users\gboss\AppData\Local\Programs\Python\Python310\lib\site-packages\botocore\endpoint.py", line 119, in make_request
return self._send_request(request_dict, operation_model)
File "C:\Users\gboss\AppData\Local\Programs\Python\Python310\lib\site-packages\botocore\endpoint.py", line 198, in _send_request
request = self.create_request(request_dict, operation_model)
File "C:\Users\gboss\AppData\Local\Programs\Python\Python310\lib\site-packages\botocore\endpoint.py", line 134, in create_request
self._event_emitter.emit(
File "C:\Users\gboss\AppData\Local\Programs\Python\Python310\lib\site-packages\botocore\hooks.py", line 412, in emit
return self._emitter.emit(aliased_event_name, **kwargs)
File "C:\Users\gboss\AppData\Local\Programs\Python\Python310\lib\site-packages\botocore\hooks.py", line 256, in emit
return self._emit(event_name, kwargs)
File "C:\Users\gboss\AppData\Local\Programs\Python\Python310\lib\site-packages\botocore\hooks.py", line 239, in _emit
response = handler(**kwargs)
File "C:\Users\gboss\AppData\Local\Programs\Python\Python310\lib\site-packages\botocore\signers.py", line 105, in handler
return self.sign(operation_name, request)
File "C:\Users\gboss\AppData\Local\Programs\Python\Python310\lib\site-packages\botocore\signers.py", line 189, in sign
auth.add_auth(request)
File "C:\Users\gboss\AppData\Local\Programs\Python\Python310\lib\site-packages\botocore\auth.py", line 418, in add_auth
raise NoCredentialsError()
botocore.exceptions.NoCredentialsError: Unable to locate credentials
and let's say that I can't find out why, because I don't use Hugging Face a lot.
The code:
import sagemaker
import boto3
from sagemaker.huggingface import HuggingFace

# gets role for executing training job
iam_client = boto3.client('iam')
role = iam_client.get_role(RoleName='{IAM_ROLE_WITH_SAGEMAKER_PERMISSIONS}')['Role']['Arn']

hyperparameters = {
    'model_name_or_path': 'ZipperXYZ/DialoGPT-medium-TheWorldMachineExpressive2',
    'output_dir': '/opt/ml/model'
    # add your remaining hyperparameters
    # more info here https://github.com/huggingface/transformers/tree/v4.17.0/examples/pytorch/language-modeling
}

# git configuration to download our fine-tuning script
git_config = {'repo': 'https://github.com/huggingface/transformers.git', 'branch': 'v4.17.0'}

# creates Hugging Face estimator
huggingface_estimator = HuggingFace(
    entry_point='run_clm.py',
    source_dir='./examples/pytorch/language-modeling',
    instance_type='ml.p3.2xlarge',
    instance_count=1,
    role=role,
    git_config=git_config,
    transformers_version='4.17.0',
    pytorch_version='1.10.2',
    py_version='py38',
    hyperparameters=hyperparameters
)

# starting the train job
huggingface_estimator.fit()
You would first need to configure your credentials; it's not an error in your code. Follow this thread.
TL;DR:
NoCredentialsError: Unable to locate credentials
Reply: Yes, let me explain this, it is a bit complicated to get started. The IAM role which you have created will be used inside the SageMaker training job/inference, meaning this role is used to, e.g., download your data from S3 and is needed to start the underlying machine.
The error NoCredentialsError: Unable to locate credentials is shown when you don't have credentials configured on your machine. You need to run aws configure to set up IAM credentials (a user) on your machine. You need these credentials to start your training job.
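If running aws configure isn't an option, an alternative sketch is to hand the credentials to boto3 explicitly when creating a session (the key values and role name below are placeholders, not verified settings):
import boto3

# Placeholder credentials; normally these come from `aws configure` or environment variables
session = boto3.Session(
    aws_access_key_id="<YOUR_ACCESS_KEY_ID>",
    aws_secret_access_key="<YOUR_SECRET_ACCESS_KEY>",
    region_name="us-east-1",
)

iam_client = session.client("iam")
role = iam_client.get_role(RoleName="<IAM_ROLE_WITH_SAGEMAKER_PERMISSIONS>")["Role"]["Arn"]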

client.get_bucket() fails, but only from cloud dataflow (compute engine)

The application has been working normally; now, on a re-deploy, Google Storage is giving strange errors.
MissingSchema: Invalid URL 'None/storage/v1/b/my-bucket-name?projection=noAcl': No schema supplied. Perhaps you meant http://None/storage/v1/b/my-bucket-name?projection=noAcl?
File "/usr/local/lib/python2.7/dist-packages/lib/file_store.py", line 11, in __init__
self.bucket = self.client.get_bucket(parts[0])
File "/usr/local/lib/python2.7/dist-packages/google/cloud/storage/client.py", line 301, in get_bucket
bucket.reload(client=self)
File "/usr/local/lib/python2.7/dist-packages/google/cloud/storage/_helpers.py", line 130, in reload
_target_object=self,
File "/usr/local/lib/python2.7/dist-packages/google/cloud/_http.py", line 392, in api_request
target_object=_target_object,
File "/usr/local/lib/python2.7/dist-packages/google/cloud/_http.py", line 269, in _make_request
return self._do_request(method, url, headers, data, target_object)
File "/usr/local/lib/python2.7/dist-packages/google/cloud/_http.py", line 298, in _do_request
return self.http.request(url=url, method=method, headers=headers, data=data)
File "/usr/local/lib/python2.7/dist-packages/google/auth/transport/requests.py", line 208, in request
method, url, data=data, headers=request_headers, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 519, in request
prep = self.prepare_request(req)
File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 462, in prepare_request
hooks=merge_hooks(request.hooks, self.hooks),
File "/usr/local/lib/python2.7/dist-packages/requests/models.py", line 313, in prepare
self.prepare_url(url, params)
File "/usr/local/lib/python2.7/dist-packages/requests/models.py", line 387, in prepare_url
raise MissingSchema(error)
MissingSchema: Invalid URL 'None/storage/v1/b/my-bucket-name?projection=noAcl': No schema supplied. Perhaps you meant http://None/storage/v1/b/my-bucket-name?projection=noAcl? [while running 'generatedPtransform-51']
The code causing the error is below. I can run it locally using the same service account and it works, no error. I am using $env:GOOGLE_APPLICATION_CREDENTIALS to export my service account credentials at deploy time. All other services are working normally.
# My test is:
# fs = FileStore("gs://my-bucket-name/models/", "development", "general")
class FileStore():
    # modelPath - must be a gs:// style google storage resource path containing everything but the file extension
    def __init__(self, modelPath, env, modelName):
        from google.cloud import storage
        parts = modelPath[5:].split('/', 1)
        self.client = storage.Client()
        self.bucket = self.client.get_bucket(parts[0])  # <- error here
Why would google core client fail to build a URL? Based on 'None/storage/v1/b/my-bucket-name?projection=noAcl', the missing part of the URL should be something like "https://www.googleapis.com".
This error is apparently caused by a mismatch between google_cloud_storage and google_cloud_core. I had specified google_cloud_core >= 1.0.3 in my setup.py, but when I looked at the Docker image on the compute VM I found it had an earlier version.
After rebuilding my venv from setup.py I also had to run:
C:\Python27\python.exe -m pipenv install google-cloud-core>=1.0.3 --skip-lock
Then I was able to deploy and the application started working again.
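For reference, a hedged sketch of the kind of setup.py pin that avoids the mismatch on the worker image (the project name and the google-cloud-storage version are assumptions, not taken from the answer):
from setuptools import setup, find_packages

setup(
    name="my_dataflow_pipeline",          # placeholder project name
    version="0.1.0",
    packages=find_packages(),
    install_requires=[
        # Pin both packages so the worker installs compatible versions
        "google-cloud-core>=1.0.3",
        "google-cloud-storage>=1.16.0",   # assumed compatible release
    ],
)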

Speech-to-text: google.api_core.exceptions.PermissionDenied: 403

I am trying to use the Google speech-to-text service, according to https://googleapis.github.io/google-cloud-python/latest/speech/index.html
I have created a project, uploaded audio to gs:// cloud storage, added permissions, and downloaded a JSON file named My First Project-7bb85a480131.json. https://console.cloud.google.com/storage/browser/mybucket?project=my-project
This is my file:
import os
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "/home/joo/Документы/LocalRepository/robotze/My First Project-7bb85a480131.json"

from google.cloud import speech

client = speech.SpeechClient()
audio = speech.types.RecognitionAudio(
    uri='gs://zaudio/audio.mp3')
config = speech.types.RecognitionConfig(
    encoding=speech.enums.RecognitionConfig.AudioEncoding.LINEAR16,
    language_code='ru-RU',
    sample_rate_hertz=44100)

operation = client.long_running_recognize(config=config, audio=audio)
op_result = operation.result()

for result in op_result.results:
    for alternative in result.alternatives:
        print('=' * 20)
        print(alternative.transcript)
        print(alternative.confidence)
Issue: I got
google.api_core.exceptions.PermissionDenied: 403 my-service-account@my-project.iam.gserviceaccount.com does not have storage.objects.get access to mybucket/audio.mp3.
Full traceback
/home/joo/anaconda3/bin/python /home/joo/Документы/LocalRepository/robotze/speech-to-text-googlecloud.py
Traceback (most recent call last):
File "/home/joo/anaconda3/lib/python3.6/site-packages/google/api_core/grpc_helpers.py", line 57, in error_remapped_callable
return callable_(*args, **kwargs)
File "/home/joo/anaconda3/lib/python3.6/site-packages/grpc/_channel.py", line 565, in __call__
return _end_unary_response_blocking(state, call, False, None)
File "/home/joo/anaconda3/lib/python3.6/site-packages/grpc/_channel.py", line 467, in _end_unary_response_blocking
raise _Rendezvous(state, None, None, deadline)
grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with:
status = StatusCode.PERMISSION_DENIED
details = "my-service-account#my-project.iam.gserviceaccount.com does not have storage.objects.get access to mybucket/audio.mp3."
debug_error_string = "{"created":"#1565253582.126380437","description":"Error received from peer ipv4:74.125.131.95:443","file":"src/core/lib/surface/call.cc","file_line":1052,"grpc_message":"my-service-account#my-project.iam.gserviceaccount.com does not have storage.objects.get access to mybucket/audio.mp3.","grpc_status":7}"
>
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/joo/Документы/LocalRepository/robotze/speech-to-text-googlecloud.py", line 46, in <module>
operation = client.long_running_recognize(config=config, audio=audio)
File "/home/joo/anaconda3/lib/python3.6/site-packages/google/cloud/speech_v1/gapic/speech_client.py", line 341, in long_running_recognize
request, retry=retry, timeout=timeout, metadata=metadata
File "/home/joo/anaconda3/lib/python3.6/site-packages/google/api_core/gapic_v1/method.py", line 143, in __call__
return wrapped_func(*args, **kwargs)
File "/home/joo/anaconda3/lib/python3.6/site-packages/google/api_core/retry.py", line 273, in retry_wrapped_func
on_error=on_error,
File "/home/joo/anaconda3/lib/python3.6/site-packages/google/api_core/retry.py", line 182, in retry_target
return target()
File "/home/joo/anaconda3/lib/python3.6/site-packages/google/api_core/timeout.py", line 214, in func_with_timeout
return func(*args, **kwargs)
File "/home/joo/anaconda3/lib/python3.6/site-packages/google/api_core/grpc_helpers.py", line 59, in error_remapped_callable
six.raise_from(exceptions.from_grpc_error(exc), exc)
File "<string>", line 3, in raise_from
google.api_core.exceptions.PermissionDenied: 403 my-service-account@my-project.iam.gserviceaccount.com does not have storage.objects.get access to mybucket/audio.mp3.
Process finished with exit code 1
What I tried: gcloud auth application-default login - the login in the browser works, but I still get the 403 error.
From what I can see in your logs, you are able to authenticate your service account inside your code (you are currently authenticating with: starting-account-*******-239919.iam.gserviceaccount.com); however, that service account doesn't have "storage.objects.get" permission on the object "zaudio/audio.mp3".
So you can either:
A.- Give the proper permissions to that service account (maybe the role "storage.objectViewer" on that bucket would be enough, but you could also assign the role "storage.admin" so it has more control over that bucket and others).
B.- Authenticate using another service account that has the proper permissions.
I resolved the following issue:
"google.api_core.exceptions.PermissionDenied: 403 my-service-account@my-project.iam.gserviceaccount.com does not have storage.objects.get access to mybucket/audio.mp3."
To resolve this issue: go to your bucket, click the three dots, choose "Edit permissions", set Entity to "User", set Name to your service account's email (in this case, my-service-account@my-project.iam.gserviceaccount.com), and set Access to "Reader". Save and try again. This should resolve the issue. Regardless of whether you created the bucket, you have to do this step to explicitly set the permission. Hopefully this is useful.
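The same grant can also be made programmatically; here is a hedged sketch using the google-cloud-storage IAM API (bucket and service account names are placeholders taken from the redacted error message):
from google.cloud import storage

client = storage.Client()
bucket = client.bucket("mybucket")

# Grant the service account read access to objects in the bucket
policy = bucket.get_iam_policy(requested_policy_version=3)
policy.bindings.append({
    "role": "roles/storage.objectViewer",
    "members": {"serviceAccount:my-service-account@my-project.iam.gserviceaccount.com"},
})
bucket.set_iam_policy(policy)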

App Engine to GCS | Uploading Issue

I have some files that I'm receiving from the Evernote API (via getResource) and writing to Google Cloud Storage with the following code:
gcs_file = gcs.open(filename, 'w', content_type=res.mime,
                    retry_params=write_retry_params)
# Retrieve the binary data and write to GCS
resource_file = note_store.getResource(res.guid, True, False, False, False)
gcs_file.write(resource_file.data.body)
gcs_file.close()
It still works for some types of documents. But for certain documents, GCS throws this in the logs:
Unable to fetch URL: https://storage.googleapis.com/evernoteresources/5db799f1-c03c-4056-812a-6d77bad55261/Sleep Away.mp3
and
Got exception while contacting GCS. Will retry in 0.11 seconds.
There doesn't seem to be any pattern to these errors. It happens with documents, sounds, pictures, whatever - some of these document types work and some don't. It isn't due to size either (since some small ones work and some large ones do).
Any ideas?
Here's the full stack trace, though I'm not sure it will help.
Encountered unexpected error from ProtoRPC method implementation: TimeoutError (('Request to Google Cloud Storage timed out.', DownloadError('Unable to fetch URL: https://storage.googleapis.com/evernoteresources/78413585-2266-4426-b08c-71d6c224f266/Evernote Snapshot 20130512 124546.jpg',)))
Traceback (most recent call last):
File "/python27_runtime/python27_lib/versions/1/protorpc/wsgi/service.py", line 181, in protorpc_service_app
response = method(instance, request)
File "/python27_runtime/python27_lib/versions/1/google/appengine/ext/endpoints/api_config.py", line 972, in invoke_remote
return remote_method(service_instance, request)
File "/python27_runtime/python27_lib/versions/1/protorpc/remote.py", line 412, in invoke_remote_method
response = method(service_instance, request)
File "/base/data/home/apps/s~quinector/2a.368528733040360018/endpoints.py", line 61, in get_note_details
url = tools.registerResource(note_store, req.note_guid, r)
File "/base/data/home/apps/s~quinector/2a.368528733040360018/GlobalUtilities.py", line 109, in registerResource
retry_params=write_retry_params)
File "/base/data/home/apps/s~quinector/2a.368528733040360018/cloudstorage/cloudstorage_api.py", line 69, in open
return storage_api.StreamingBuffer(api, filename, content_type, options)
File "/base/data/home/apps/s~quinector/2a.368528733040360018/cloudstorage/storage_api.py", line 526, in __init__
status, headers, _ = self._api.post_object(path, headers=headers)
File "/base/data/home/apps/s~quinector/2a.368528733040360018/cloudstorage/rest_api.py", line 41, in sync_wrapper
return future.get_result()
File "/python27_runtime/python27_lib/versions/1/google/appengine/ext/ndb/tasklets.py", line 325, in get_result
self.check_success()
File "/python27_runtime/python27_lib/versions/1/google/appengine/ext/ndb/tasklets.py", line 368, in _help_tasklet_along
value = gen.throw(exc.__class__, exc, tb)
File "/base/data/home/apps/s~quinector/2a.368528733040360018/cloudstorage/storage_api.py", line 84, in do_request_async
'Request to Google Cloud Storage timed out.', e)
TimeoutError: ('Request to Google Cloud Storage timed out.', DownloadError('Unable to fetch URL: https://storage.googleapis.com/evernoteresources/78413585-2266-4426-b08c-71d6c224f266/Evernote Snapshot 20130512 124546.jpg',))
This is a bug in the gcs client code. It should properly handle the filename. The fact that it is using an HTTP request to GCS should be "hidden". This will be fixed soon. Thanks!
Note that if you quote the filename yourself to work around this bug, the filename will be double-quoted after the fix. Sorry.
Thank you Brian! The problem was the spaces in the filenames. I just used urllib2.quote() to get those out of there and it works like a charm.
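A minimal sketch of that workaround, assuming the Python 2 GAE cloudstorage client from the question (the object name comes from the error above; the content type and payload are placeholders):
import urllib2
import cloudstorage as gcs

# The object name from the failing URL contains a space, which breaks the URL fetch
raw_name = '/evernoteresources/5db799f1-c03c-4056-812a-6d77bad55261/Sleep Away.mp3'
safe_name = urllib2.quote(raw_name)  # encodes the space as %20, keeps the slashes

gcs_file = gcs.open(safe_name, 'w', content_type='audio/mpeg')
gcs_file.write('example payload')
gcs_file.close()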
