When I run the code below, which is taken from: https://developers.google.com/drive/api/v3/manage-shareddrives#python
# Create a new drive
test_drive_metadata = {'name': 'Test Drive'}
request_id = str(uuid.uuid4())
test_drive = self.service.drives().create(
    body=test_drive_metadata,
    requestId=request_id,
    fields='id'
).execute()
I get "The user does not have sufficient permissions for this file." This does not happen if I create files, if I list shared drives or anything else. There are no other required scopes other than ['https://www.googleapis.com/auth/drive'].
It should be noted that I am using a service account. Are service accounts not allowed to create shared drives? This is not documented anywhere as far as I am aware if this is the case.
There doesn't seem to be any explicit documentation of this limitation, but considering that Service Accounts (without DWD and impersonation) are meant to manage application data rather than user data, it makes sense that they cannot be used to manage data that is shared with regular users.
Also, the use of Service Accounts to manage shared documents seems to be advised against, according to the official documentation:
Using the service account as a common owner to create many shared documents can have severe performance implications.
On the other hand, since Service Accounts have certain limitations compared to regular accounts (for example, Event creation in Calendar), this could well be one of those limitations.
In any case, in order to make sure that's the case, I'd suggest you report this behaviour in this Issue Tracker component.
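In the meantime, if domain-wide delegation is already configured in your domain, a minimal sketch of impersonating a Workspace user before creating the shared drive could look like this (the key file path and the admin email are placeholders, not values from the question):
import uuid
from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ['https://www.googleapis.com/auth/drive']

# Load the service account key and impersonate a real Workspace user (requires DWD).
credentials = service_account.Credentials.from_service_account_file(
    'service-account.json', scopes=SCOPES)  # placeholder path
delegated = credentials.with_subject('admin@yourdomain.com')  # placeholder user
service = build('drive', 'v3', credentials=delegated)

# Create the shared drive on behalf of the impersonated user.
test_drive = service.drives().create(
    body={'name': 'Test Drive'},
    requestId=str(uuid.uuid4()),
    fields='id'
).execute()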
Reference:
Drive API: Perform G Suite Domain-Wide Delegation of Authority
Using OAuth 2.0 for Server to Server Applications
I'm trying to use a service account and the Google Drive API to automate sharing a folder, and I want to be able to set the expirationTime property.
I've found previous threads that mention setting it in a permissions.update() call, but whatever I try I get the same generic error: 'Expiration dates cannot be set on this item.'.
I've validated that I'm passing the correct date format, because I've even shared manually from my Drive account and then used permissions.list() to get the expirationTime from the returned data.
I've also tried creating a folder in my user drive, making my service account an editor, and then trying to share that folder via the API, but I get the same problem.
Is there something that prevents a service account being able to set this property?
To note - I haven't enabled domain-wide delegation and tried impersonating yet.
Sample code:
update_body = {
    'role': 'reader',
    'expirationTime': '2023-03-13T23:59:59.000Z'
}
driveadmin.permissions().update(
    fileId='<idhere>', permissionId='<idhere>', body=update_body).execute()
Checking the documentation for the feature, it seems that it's only available in paid Google Workspace subscriptions, as mentioned in the Google Workspace updates blog. You are most likely getting the error Expiration dates can't be set on this item because the service account is treated as a regular Gmail account, and the availability section of the update shows that this feature is not available for that type of account.
If you impersonate your Google Workspace user, I'm pretty sure you won't receive the error, as long as you have one of the subscriptions in which the feature is enabled. You can find more information about how to perform impersonation and domain-wide delegation here.
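For reference, a minimal sketch of the same call made with impersonation via domain-wide delegation (the key file path and user email are placeholders):
from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ['https://www.googleapis.com/auth/drive']

# Impersonate a Workspace user on a subscription where expiration dates are available.
credentials = service_account.Credentials.from_service_account_file(
    'service-account.json', scopes=SCOPES)  # placeholder path
delegated = credentials.with_subject('user@yourdomain.com')  # placeholder user
driveadmin = build('drive', 'v3', credentials=delegated)

update_body = {
    'role': 'reader',
    'expirationTime': '2023-03-13T23:59:59.000Z'
}
driveadmin.permissions().update(
    fileId='<idhere>', permissionId='<idhere>', body=update_body).execute()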
For my current Python project I'm using the Microsoft Azure SDK for Python.
I want to copy a specific blob from one container path to another, and I have already tested some options, described here.
Overall they are basically "working", but unfortunately the new_blob.start_copy_from_url(source_blob_url) command always leads to an error: ErrorCode:CannotVerifyCopySource.
Is someone getting the same error message here, or does anyone have an idea how to solve it?
I also tried passing the source_blob_url with a SAS token, but it still doesn't work. I have the feeling that this is somehow connected to the access levels of the storage account, but so far I haven't been able to figure it out. Hopefully someone here can help me.
As you have mentioned, you might be receiving this error due to the permissions carried by the SAS token you are including.
The difference from my code was that I used the blob storage SAS token from the Azure portal, instead of generating it directly for the blob client with the Azure function.
In order to allow access to certain areas of your storage account, a SAS is generated with a set of restrictions such as read/write permissions, allowed services, resource types, start and expiry date/time, allowed IP addresses, and so on.
It's not that you always need to generate it directly for the blob client in code; you can also generate one from the portal, as long as you grant the required permissions.
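For comparison, here is a rough sketch of generating a read SAS for the source blob in code and using it as the copy source (the account name, key, and container/blob names are placeholders, not values from the question):
from datetime import datetime, timedelta
from azure.storage.blob import (
    BlobServiceClient, BlobSasPermissions, generate_blob_sas)

account_name = "mystorageaccount"   # placeholder
account_key = "<account-key>"       # placeholder
source_container = "source-container"
source_blob = "folder/file.txt"

# A SAS with read permission is enough for the copy source.
sas = generate_blob_sas(
    account_name=account_name,
    container_name=source_container,
    blob_name=source_blob,
    account_key=account_key,
    permission=BlobSasPermissions(read=True),
    expiry=datetime.utcnow() + timedelta(hours=1),
)
source_blob_url = (
    f"https://{account_name}.blob.core.windows.net/"
    f"{source_container}/{source_blob}?{sas}"
)

service = BlobServiceClient(
    f"https://{account_name}.blob.core.windows.net", credential=account_key)
new_blob = service.get_blob_client("dest-container", "folder/file.txt")
new_blob.start_copy_from_url(source_blob_url)
If the SAS is generated from the portal instead, it needs at least the Read permission with the Blob service and Object resource type selected; otherwise the copy source cannot be verified.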
REFERENCES: Grant limited access to Azure Storage resources using SAS - MSFT Document
I'm developing a Cloud Run Service that accesses different Google APIs using a service account's secrets file with the following python 3 code:
from google.oauth2 import service_account
credentials = service_account.Credentials.from_service_account_file(SECRETS_FILE_PATH, scopes=SCOPES)
In order to deploy it, I upload the secrets file during the build/deploy process (via gcloud builds submit and gcloud run deploy commands).
How can I avoid uploading the secrets file like this?
Edit 1:
I think it is important to note that I need to impersonate user accounts from GSuite/Workspace (with domain wide delegation). The way I deal with this is by using the above credentials followed by:
delegated_credentials = credentials.with_subject(USER_EMAIL)
Using Secret Manager might help you, as you can manage your multiple secrets without storing them as files, as you are doing right now. I would recommend you take a look at this article here, so you can get more information on how to use it with Cloud Run and improve the way you manage your secrets.
In addition to that, as clarified in this similar case here, you have two options: use the default service account that comes with it, or deploy another one with the Service Admin role. This way, you won't need to specify keys with variables, as clarified by a Google developer in this specific answer.
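As a rough sketch (the project and secret names below are placeholders), reading such a secret from the service at runtime could look like this:
from google.cloud import secretmanager

client = secretmanager.SecretManagerServiceClient()
# Both "my-project" and "my-api-secret" are placeholders.
name = "projects/my-project/secrets/my-api-secret/versions/latest"
response = client.access_secret_version(request={"name": name})
secret_value = response.payload.data.decode("UTF-8")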
To improve security, the best way is to never use a service account key file, either locally or on GCP (I wrote an article on this). To achieve this, Google Cloud services have an automatically loaded service account, either the default one or, when possible, a custom one.
On Cloud Run, the default service account is the Compute Engine default service account (I recommend never using it; it has the Editor role on the project, which is far too broad!), or you can specify the service account to use with the --service-account= parameter.
Then, in your code, simply use the ADC mechanism (Application Default Credentials) to get your credentials, like this in Python:
import google.auth
credentials, project_id = google.auth.default(scopes=SCOPES)
I've found one way to solve the problem.
First, as suggested by guillaume blaquiere's answer, I used the google.auth ADC mechanism:
import google.auth
credentials, project_id = google.auth.default(scopes=SCOPES)
However, as I need to impersonate GSuite (now Workspace) accounts, this method is not enough on its own, because the credentials object it returns does not have the with_subject method. This led me to this similar post and specific answer, which shows a way to convert google.auth credentials into the Credentials object returned by service_account.Credentials.from_service_account_file. There was one problem with that solution: an authentication scope seemed to be missing.
All I had to do was add the https://www.googleapis.com/auth/cloud-platform scope in the following places:
The SCOPES variable in the code
Google Admin > Security > API Controls > Set client ID and scope for the service account I am deploying with
The OAuth consent screen of my project
After that, my Cloud Run service had access to credentials that were able to impersonate users' accounts without using key files.
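For completeness, a rough sketch of that conversion, based on the linked answer rather than my exact code (the Workspace user below is a placeholder, and the service account needs permission to call the IAM signBlob API on itself):
import google.auth
from google.auth import iam
from google.auth.transport import requests
from google.oauth2 import service_account

TOKEN_URI = 'https://oauth2.googleapis.com/token'
SCOPES = ['https://www.googleapis.com/auth/drive',
          'https://www.googleapis.com/auth/cloud-platform']

# Default credentials of the Cloud Run service account (no key file involved).
source_credentials, _ = google.auth.default(scopes=SCOPES)
source_credentials.refresh(requests.Request())

# Sign with the IAM signBlob API on behalf of the service account, then build
# service_account.Credentials, which do support with_subject()/subject.
signer = iam.Signer(requests.Request(), source_credentials,
                    source_credentials.service_account_email)
delegated_credentials = service_account.Credentials(
    signer,
    source_credentials.service_account_email,
    TOKEN_URI,
    scopes=SCOPES,
    subject='user@yourdomain.com')  # placeholder Workspace user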
I have a number of REST APIs from a software program (ex: Tradingview)
I would like to store the API credentials (e.g. keys, secrets) safely.
I had thought about placing them in a database table, but I am not fond of storing them there in clear text.
I already know about using OS Environment Variables:
[... snip ...]
import os
import sys
import logging
[... snip ...]
LD_API_KEY = os.getenv("BINANCE_APIKEY")
LD_API_SECRET = os.getenv("BINANCE_API_SECRET")
where the keys are stored in a file - but, as mentioned before, I have a number of API keys.
Just leaving them on a server in clear text - even though the file is hidden - does not sit well with me.
Is there any other way to store API Keys?
There are a number of articles on this topic, a quick web search for "Storing API keys" will net you some interesting and informative reads, so I'll just talk about my experience here.
Really, it's all up to preference, the requirements of your project, and the level of security you need. I personally have run through a few solutions. Here's how my project has evolved over time.
Each key stored in environment variables
Simple enough, just had to use os.environ for every key. This very quickly became a management headache, especially when deploying to multiple environments, or setting up an environment for a new project contributor.
All keys stored in a local file
This started as just a file outside source control with an environment variable pointing to the file. I started with a simple JSON file in the following structure.
[
    {
        "name": "Service Name",
        "date": "1970-01-01", // to track rotations
        "key": "1234abcd",
        "secret_key": "abcd1234"
    }
]
This evolved into a class that accessed this file for me and returned the desired key so I didn't have to repeat json.load() or import os in every script that accessed APIs. This got a little more complex when I started needing to store OAuth tokens.
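As an illustration only, a stripped-down version of that kind of class could look like this (the environment variable name is made up for the example):
import json
import os

class KeyStore:
    """Loads API keys from a JSON file shaped like the example above
    (strict JSON would not allow the // comment shown there)."""

    def __init__(self, path=None):
        # API_KEYS_FILE is a made-up environment variable pointing to the file
        # kept outside source control.
        self.path = path or os.environ['API_KEYS_FILE']
        with open(self.path) as f:
            self._entries = {entry['name']: entry for entry in json.load(f)}

    def key(self, service_name):
        return self._entries[service_name]['key']

    def secret(self, service_name):
        return self._entries[service_name]['secret_key']

# Usage:
# store = KeyStore()
# binance_key = store.key('Binance')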
I eventually moved this file to a private, encrypted (git-secret), local-only git repo so team members could also use the keys in their environments.
Use a secret management service
The push to remote work forced me to create a system for remote API key access and management. My team debated a number of solutions, but we eventually fell on AWS Secrets Manager. The aforementioned custom class was pointed at AWS instead of a local file, and we gained a significant increase in security and flexibility over the local-only solution.
There are a number of cloud-based secret management solutions, but my project is already heavily AWS-integrated, so this made the most sense given the costs and constraints. This also means that each team member now only needs to have AWS permissions and use their account's AWS API key for access.
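As a minimal sketch (the region, secret name, and JSON field names are placeholders), fetching a key with boto3 looks roughly like this:
import json
import boto3

client = boto3.client('secretsmanager', region_name='us-east-1')  # placeholder region
response = client.get_secret_value(SecretId='prod/binance/api')   # placeholder secret name
secret = json.loads(response['SecretString'])
api_key = secret['key']           # placeholder field names
api_secret = secret['secret_key']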
I want to programmatically get all the actions a user is allowed to perform across AWS services.
I've tried to fiddle with simulate_principal_policy, but it seems this method expects a list of all actions, and I don't want to maintain a hard-coded list.
I also tried to call it with iam:*, for example, and got a generic 'implicitDeny' response, so I know the user is not permitted all the actions, but I need finer granularity than that.
Any ideas as to how do I get the action list dynamically?
Thanks!
To start with, there is no programmatic way to retrieve all possible actions (regardless of whether they are permitted to use an action).
You would need to construct a list of possible actions before checking the security. As an example, the boto3 SDK for Python contains an internal list of commands that it uses to validate commands before sending them to AWS.
Once you have a particular action, you could use the Policy Simulator API to validate whether a given user would be allowed to make that API call. This is much easier than attempting to parse the various Allow and Deny permissions associated with a given user.
However, a call might be denied based upon the specific parameters of the call. For example, a user might have permissions to terminate any Amazon EC2 instance that has a particular tag, but cannot terminate all instances. To correctly test this, an InstanceId would need to be provided to the simulation.
Also, permissions might be restricted by IP Address and even Time of Day. Thus, while a user would have permission to call an Action, where and when they do it will have an impact on whether the Action is permitted.
Bottom line: It ain't easy! AWS will validate permissions at the time of the call. Use the Policy Simulator to obtain similar validation results.
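For example, a rough sketch of simulating a single action for a specific user and instance (the ARNs and action name are placeholders):
import boto3

iam = boto3.client('iam', region_name='us-east-1')
response = iam.simulate_principal_policy(
    PolicySourceArn='arn:aws:iam::123456789012:user/example-user',  # placeholder ARN
    ActionNames=['ec2:TerminateInstances'],
    ResourceArns=['arn:aws:ec2:us-east-1:123456789012:instance/i-0123456789abcdef0'],
)
for result in response['EvaluationResults']:
    print(result['EvalActionName'], result['EvalDecision'])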
I am surprised no one has answered this question correctly. Here is code that uses boto3 that addresses the OP's question directly:
import boto3
session = boto3.Session(region_name='us-east-1')
for service in session.get_available_services():
    service_client = session.client(service)
    print(service)
    print(service_client.meta.service_model.operation_names)
IAM, however, is a special case as it won't be listed in the get_available_services() call above:
iam = session.client('iam')
print('iam')
print(iam.meta.service_model.operation_names)
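If the end goal is a list of allowed actions, one possible (and slow) approach is to feed those operation names into the Policy Simulator in batches. A rough sketch follows; note that the endpoint prefix used below only approximates the IAM action prefix and does not match for every service, and the user ARN is a placeholder:
import boto3

session = boto3.Session(region_name='us-east-1')
iam = session.client('iam')
user_arn = 'arn:aws:iam::123456789012:user/example-user'  # placeholder ARN

for service in session.get_available_services():
    model = session.client(service).meta.service_model
    prefix = model.metadata.get('endpointPrefix', service)
    actions = [f'{prefix}:{op}' for op in model.operation_names]
    # Keep each simulation request small by batching the action names.
    for i in range(0, len(actions), 100):
        results = iam.simulate_principal_policy(
            PolicySourceArn=user_arn,
            ActionNames=actions[i:i + 100],
        )['EvaluationResults']
        for result in results:
            if result['EvalDecision'] == 'allowed':
                print(result['EvalActionName'])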