For my current Python project I'm using the Microsoft Azure SDK for Python.
I want to copy a specific blob from one container path to another and have already tested some of the options described here.
Overall they basically "work", but unfortunately the new_blob.start_copy_from_url(source_blob_url) command always leads to an error: ErrorCode:CannotVerifyCopySource.
Is someone getting the same error message here, or has an idea, how to solve it?
I also tried appending a SAS token to the source_blob_url, but it still doesn't work. I have the feeling that there is some connection to the access levels of the storage account, but so far I haven't been able to figure it out. Hopefully someone here can help me.
As you have mentioned, you might be receiving this error due to insufficient permissions on the SAS token.
The difference in my code was that I used the blob storage SAS token from the Azure portal, instead of generating it directly for the blob client with the Azure function.
In order to allow access to certain areas of your storage account, a SAS is generated with a set of parameters such as permissions (read/write), allowed services, allowed resource types, start and expiry date/time, allowed IP addresses, etc.
You don't always need to generate the token directly for the blob client with the Azure function; you can generate one from the portal too, as long as you grant the required permissions.
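For what it's worth, here is a minimal sketch of that approach with the azure-storage-blob v12 package: generate a read SAS scoped to the source blob and pass the resulting URL to start_copy_from_url. The account, container and blob names are placeholders, and it assumes you have the storage account key at hand:
from datetime import datetime, timedelta
from azure.storage.blob import BlobServiceClient, BlobSasPermissions, generate_blob_sas

# Placeholder account, container and blob names
account_name = 'mystorageaccount'
account_key = '<account-key>'
source_container = 'source-container'
dest_container = 'dest-container'
blob_name = 'my/blob.wav'

service = BlobServiceClient(
    account_url=f'https://{account_name}.blob.core.windows.net',
    credential=account_key)

# Read-only SAS scoped to the source blob, valid for one hour
sas = generate_blob_sas(
    account_name=account_name,
    container_name=source_container,
    blob_name=blob_name,
    account_key=account_key,
    permission=BlobSasPermissions(read=True),
    expiry=datetime.utcnow() + timedelta(hours=1))

source_blob_url = (f'https://{account_name}.blob.core.windows.net/'
                   f'{source_container}/{blob_name}?{sas}')

new_blob = service.get_blob_client(container=dest_container, blob=blob_name)
new_blob.start_copy_from_url(source_blob_url)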
REFERENCES: Grant limited access to Azure Storage resources using SAS - MSFT Document
Related
I'm trying to use a service account and the Google Drive API to automate sharing a folder, and I want to be able to set the expirationTime property.
I've found previous threads that mention setting it in a permissions.update() call, but whatever I try I get the same generic error: 'Expiration dates cannot be set on this item.'
I've validated that I'm passing the correct date format, because I've even shared manually from my Drive account and then used permissions.list() to get the expirationTime from the returned data.
I've also tried creating a folder in my user Drive, making my service account an editor, and then sharing that folder via the API, but I get the same problem.
Is there something that prevents a service account being able to set this property?
To note: I haven't enabled domain-wide delegation and tried impersonating yet.
Sample code:
update_body = {
    'role': 'reader',
    'expirationTime': '2023-03-13T23:59:59.000Z'
}
driveadmin.permissions().update(
    fileId='<idhere>', permissionId='<idhere>', body=update_body).execute()
Checking the documentation for the feature, it seems that it's only available to paid Google Workspace subscriptions, as mentioned in the Google Workspace updates blog. You are most likely getting the error 'Expiration dates can't be set on this item' because the service account is treated as a regular Gmail account, and you can see in the availability section of the update that this feature is not available for that type of account:
If you perform impersonation with your Google Workspace user, I'm pretty sure that you won't receive the error, as long as you have one of the subscriptions in which the feature is enabled. You can check more information about how to perform impersonation and domain-wide delegation here.
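If you do go the impersonation route, a minimal sketch looks roughly like this (assuming domain-wide delegation is configured; the key file path and the impersonated Workspace user are placeholders):
from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ['https://www.googleapis.com/auth/drive']

# Placeholder key file and Workspace user to impersonate
credentials = service_account.Credentials.from_service_account_file(
    'service-account.json', scopes=SCOPES).with_subject('user@yourdomain.com')
drive = build('drive', 'v3', credentials=credentials)

update_body = {
    'role': 'reader',
    'expirationTime': '2023-03-13T23:59:59.000Z'
}
drive.permissions().update(
    fileId='<idhere>', permissionId='<idhere>', body=update_body).execute()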
I have created an Azure Function which is triggered when a new file is added to my Blob Storage. This part works well!
BUT now I would like to start the Azure "Speech-to-Text" service using the API. So I try to create the URI leading to my new blob and then add it to the API call. To do so I created a SAS token (from the Azure portal) and appended it to my new blob path.
https://myblobstorage...../my/new/blob.wav?[SAS Token generated]
By doing so I get an error which says:
Authentication failed: Invalid URI
What am I missing here ?
N.B.: When I manually generate the SAS token from Azure Storage Explorer, everything works well. Also, my token was not expired in my test.
Thank you for your help !
You might have generated the SAS token with the wrong allowed resource types.
Make sure the Object option is checked under "Allowed resource types".
Here is the reason from the docs:
Service (s): Access to service-level APIs (e.g., Get/Set Service Properties, Get Service Stats, List Containers/Queues/Tables/Shares)
Container (c): Access to container-level APIs (e.g., Create/Delete Container, Create/Delete Queue, Create/Delete Table, Create/Delete Share, List Blobs/Files and Directories)
Object (o): Access to object-level APIs for blobs, queue messages, table entities, and files (e.g., Put Blob, Query Entity, Get Messages, Create File, etc.)
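If you prefer to generate the token in code rather than in the portal, a rough sketch with the azure-storage-blob package would look like this (the account name and key are placeholders):
from datetime import datetime, timedelta
from azure.storage.blob import generate_account_sas, ResourceTypes, AccountSasPermissions

# Placeholder account name and key
sas_token = generate_account_sas(
    account_name='myblobstorage',
    account_key='<account-key>',
    resource_types=ResourceTypes(object=True),  # the "Object (o)" level described above
    permission=AccountSasPermissions(read=True),
    expiry=datetime.utcnow() + timedelta(hours=1))

blob_uri = 'https://myblobstorage.blob.core.windows.net/my/new/blob.wav?' + sas_token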
Within a framework I am building some functions that run on the main FaaS providers (AWS, GCP, Azure, Alibaba Cloud). The main function is essentially an elif based on an environment variable deciding which function to call ("do stuff on AWS", "do stuff on GCP", etc.). The functions essentially just read from the appropriate database (AWS -> DynamoDB, GCP -> Firestore, Azure -> Cosmos DB).
When uploading my zip to Google Cloud Functions through their web portal, I get the following error:
Function failed on loading user code. Error message: You must specify a region.
I'm concerned it has something to do with my Pipfile.lock and a clash with the AWS dependencies, but I'm not sure. I cannot find anywhere online where someone has had this error message with GCP (certainly not through the online console); I only see results for this error with AWS.
My requirements.txt file is simply:
google-cloud-firestore==1.4.0
The Pipfile.lock contains the Google requirements, but doesn't state a region anywhere. However, when using the GCP console, it automatically uploads to us-central1.
Found the answer in a Google Groups thread. If anyone else has this problem, it's because you're importing boto and uploading to GCP. GCP says it's boto's fault. So you can either split up your code so that you only bring in the necessary GCP files, or wrap your imports in ifs based on environment variables, as sketched below.
The response from the gcp Product Manager was "Hi all -- closing this out. Turns out this wasn't an issue in Cloud Functions/gcloud. The error was one emitted by the boto library: "You must specify a region.". This was confusing because the concept of region applies to AWS and GCP. We're making a tweak to our error message so that this should hopefully be a little more obvious in the future."
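As a rough sketch of the import-wrapping approach suggested above (the environment variable name and the table/collection names are made up for illustration):
import os

PROVIDER = os.environ.get('CLOUD_PROVIDER', 'gcp')  # hypothetical variable name

# Only import the SDK for the provider we are actually running on,
# so boto never gets pulled in on GCP.
if PROVIDER == 'aws':
    import boto3
elif PROVIDER == 'gcp':
    from google.cloud import firestore

def handler(event, context):
    if PROVIDER == 'aws':
        table = boto3.resource('dynamodb').Table('items')
        return table.get_item(Key={'id': event['id']})
    elif PROVIDER == 'gcp':
        db = firestore.Client()
        return db.collection('items').document(event['id']).get().to_dict()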
I'm writing a bit of Python using the Google Cloud API to translate some text.
I have set up billing on my account and it says it's active (with some credit added for the free trial). I created an application_default_credentials.json file with -
gcloud auth application-default login
Which asked me to log in to my account (I logged into the same account I set billing up on).
I then used -
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "/home/theo/.config/gcloud/application_default_credentials.json"
at the start of my python script. For the coding I followed these samples here - https://github.com/GoogleCloudPlatform/python-docs-samples/tree/master/translate/cloud-client
Yesterday the API wouldn't work and I would receive "daily limit exceeded", even though I had not used it yet. Eventually I gave up and decided to sleep on it.
Tried again today and it was working, without having to do anything. Ah great, I thought, it must just have taken a while to update my billing information.
But I've since translated a few things, maybe 10,000 characters, and I'm already receiving the same error message.
I did create a "Project" in the Cloud Console and have an API key from there. I'm not entirely sure how to use it, because the documentation I linked above just uses the JSON credentials file. From what I've read online, using the JSON file is recommended over using a key now.
Any ideas about what I need to do?
Thanks.
Solved by creating a service account key at https://console.cloud.google.com/apis/credentials/serviceaccountkey instead of the credentials created with the gcloud auth command.
After I referenced the generated JSON file from that page, it started working.
More info here - https://cloud.google.com/docs/authentication/getting-started
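For reference, a minimal sketch of that setup (the key file path is a placeholder, and it assumes the google-cloud-translate client library):
import os
from google.cloud import translate_v2 as translate

# Point GOOGLE_APPLICATION_CREDENTIALS at the service account key file
# downloaded from the credentials page (placeholder path)
os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = '/home/theo/service-account-key.json'

client = translate.Client()
result = client.translate('Hello, world', target_language='de')
print(result['translatedText'])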
I get an error when trying to deallocate a virtual machine with the Python SDK for Azure.
Basically I try something like:
from azure.common.credentials import ServicePrincipalCredentials
from azure.mgmt.compute import ComputeManagementClient
from pprint import pprint

credentials = ServicePrincipalCredentials(client_id, secret, tenant=tenant)
compute_client = ComputeManagementClient(credentials, subscription_id, '2015-05-01-preview')
result = compute_client.virtual_machines.deallocate(resource_group_name, vm_name)
pprint(result.result())
-> exception:
msrestazure.azure_exceptions.CloudError: Azure Error: AuthorizationFailed
Message: The client '<some client UUID>' with object id '<same client UUID>' does not have authorization to perform action 'Microsoft.Compute/virtualMachines/deallocate/action' over scope '/subscriptions/<our subscription UUID>/resourceGroups/<resource-group>/providers/Microsoft.Compute/virtualMachines/<our-machine>'.
What I don't understand is that the error message contains an unknown client UUID that I have not used in the credentials.
Python is version 2.7.13 and the SDK version was from yesterday.
What I guess I need is an application registration, which I did create to get the information for the credentials. I am not quite sure which exact permission(s) I need to assign to the application in IAM. When adding an access entry I can only pick existing users, but not an application.
So is there any programmatic way to find out which permissions are required for an action and which permissions our client application has?
Thanks!
As @GauravMantri and @LaurentMazuel said, the issue was caused by not assigning a role/permission to the service principal. I had answered another SO thread, Cannot list image publishers from Azure java SDK, which is similar to yours.
There are two ways to resolve the issue: using the Azure CLI, or doing these operations in the Azure portal. Please see the details of my linked answer for the first, and I describe the second (older) way below.
As for finding out these permissions programmatically, you can refer to the REST API Role Definition List to get all the role definitions that are applicable at a scope and above, or use the Azure Python SDK authorization management client to do it in code via authorization_client.role_definitions.list(scope), as sketched below.
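A minimal sketch of that call with the azure-mgmt-authorization package (credentials, subscription and scope are placeholders, and attribute names can differ slightly between SDK versions):
from azure.common.credentials import ServicePrincipalCredentials
from azure.mgmt.authorization import AuthorizationManagementClient

# Placeholder service principal credentials and subscription
credentials = ServicePrincipalCredentials(client_id='...', secret='...', tenant='...')
authorization_client = AuthorizationManagementClient(credentials, '<subscription-id>')

scope = '/subscriptions/<subscription-id>/resourceGroups/<resource-group>'
for role in authorization_client.role_definitions.list(scope):
    print(role.role_name)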
Hope it helps.
Thank you all for your answers! The best recipe for creating an application and registering it with the right role (Virtual Machine Contributor) is indeed presented at https://learn.microsoft.com/en-us/azure/azure-resource-manager/resource-group-create-service-principal-portal
The main issue I had was a bug when adding a role within IAM: I use Add, select "Virtual Machine Contributor", and with "Select" I am presented with a list of users, but not the application that I created for this purpose. However, entering the first few letters of the name of my application gives a filtered output that does include my application. Registration is then finished and things can proceed.