How to solve "Insufficient Permission" for userUsageReport with Google API? - python

I'm trying to write a Python script that will check if a user account has got two-step verification enabled.
As a starting point, I'm using the quickstart script provided on https://developers.google.com/admin-sdk/reports/v1/quickstart/python. I've followed the instructions and the sample code works as expected.
I then add the following line after the example code:
results = service.userUsageReport().get(userKey='john.doe@example.com', date='2016-08-02', parameters='accounts:is_2sv_enrolled').execute()
but I get "Insufficient Permission" returned.
Just to make it clear, I do replace "john.doe@example.com" with an email address that is valid for my organisation :).
I've double-checked the credentials used and, indeed, if I use the web-based API Explorer with the same account being used to run the script, it works.
I don't understand why the call to activities().list() is working but userUsageReport().get() isn't.

I've solved this.
userUsageReport requires the usage scope to be added, specifically:
https://www.googleapis.com/auth/admin.reports.usage.readonly
The quickstart only references the audit scope:
https://www.googleapis.com/auth/admin.reports.audit.readonly
which is why I was getting the error.
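For reference, here is a minimal sketch of what the change looks like, assuming the quickstart's newer google-auth flow (the credentials file name and the example email are placeholders); after changing SCOPES you need to delete the previously stored token so the OAuth consent flow runs again and grants the new scope:

from google_auth_oauthlib.flow import InstalledAppFlow
from googleapiclient.discovery import build

# Both scopes are needed: the audit scope for activities().list() and the
# usage scope for userUsageReport().get().
SCOPES = [
    'https://www.googleapis.com/auth/admin.reports.audit.readonly',
    'https://www.googleapis.com/auth/admin.reports.usage.readonly',
]

flow = InstalledAppFlow.from_client_secrets_file('credentials.json', SCOPES)
creds = flow.run_local_server(port=0)
service = build('admin', 'reports_v1', credentials=creds)

results = service.userUsageReport().get(
    userKey='john.doe@example.com',  # a real user in your domain
    date='2016-08-02',
    parameters='accounts:is_2sv_enrolled',
).execute()
print(results)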

Related

PyPI package testing: 403 Forbidden from https://test.pypi.org/legacy/

I am just trying to create a Python package on TestPyPI for some testing (using twine), following this official guide. I am using a token, which succeeded exactly one time (username __token__, password is the token itself). Then I made a change and wanted to repeat the process, but it doesn't work anymore.
This seems to be a common issue (there is a related post about it), but I couldn't fix it so far. I'm on Windows 10; this is what I tried...
different ways of pasting into the console and different consoles (so that's not the issue)
using a .pypirc file for authentication details
a new token
a new account
a new email
also directly inserting username and password into the twine command (which should be avoided, I guess)
And I'm running out of ideas. Any clue?
I was struggling with the same issue. The tutorial the OP referred to is, in my opinion, not explicit enough, because there are two approaches that can easily be mixed up (as I was doing; then again, I am a newbie following the tutorial for a reason ;) ).
Solution:
Assuming a ~/.pypirc file has been created, you can either
use credentials (i.e., the username and password you use to log in to the website):
[testpypi]
username = <your username>
password = <your password>
or use the API token created on the website:
[testpypi]
username = __token__
password = pypi-<Rest of token>
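For completeness, a full ~/.pypirc for the token approach might look like the following; the repository URL is the standard TestPyPI upload endpoint from the question title, and the token value is a placeholder:
[distutils]
index-servers =
    testpypi

[testpypi]
repository = https://test.pypi.org/legacy/
username = __token__
password = pypi-<Rest of token>
The upload command is then twine upload --repository testpypi dist/*, which should pick up the [testpypi] section without prompting for credentials.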
Hope that helps others following the tutorial.

"Function failed on loading user code. Error message: You must specify a region." when uploading function through GCF online console

Within a framework I am building some functions that run on the main FaaS providers (AWS, GCP, Azure, Alicloud). The main function is essentially an elif chain based on an environment variable that decides which function to call ("do stuff on aws", "do stuff on gcp", etc.). The functions essentially just read from the appropriate database (AWS -> DynamoDB, GCP -> Firestore, Azure -> Cosmos).
When uploading my zip to Google Cloud Functions through their web portal, I get the following error:
Function failed on loading user code. Error message: You must specify a region.
I'm concerned it has something to do with my Pipfile.lock and a clash with the AWS dependencies. Not sure though. I cannot find anywhere online where someone has had this error message with GCP (certainly not through using the online console), and only see results for this error with AWS.
My requirements.txt file is simply:
google-cloud-firestore==1.4.0
The Pipfile.lock contains the Google requirements but doesn't state a region anywhere. However, when using the GCP console, it automatically uploads to us-central1.
Found the answer in a Google Groups thread. If anyone else has this problem, it's because you're importing boto while uploading to GCP. GCP says it's boto's fault. So you can either split up your code so that you only bring in the necessary GCP files, or wrap your imports in ifs based on environment variables.
The response from the gcp Product Manager was "Hi all -- closing this out. Turns out this wasn't an issue in Cloud Functions/gcloud. The error was one emitted by the boto library: "You must specify a region.". This was confusing because the concept of region applies to AWS and GCP. We're making a tweak to our error message so that this should hopefully be a little more obvious in the future."
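A minimal sketch of the import-guarding approach, assuming an environment variable such as CLOUD_PROVIDER selects the platform (that variable name, the table name, and the collection name are just placeholders):

import os

provider = os.environ.get("CLOUD_PROVIDER", "gcp")

# Only import the SDK for the platform we are actually running on, so that
# boto/boto3 is never loaded (and never asks for a region) inside Cloud Functions.
if provider == "aws":
    import boto3

    def read_record(key):
        table = boto3.resource("dynamodb").Table("my-table")
        return table.get_item(Key={"id": key})
elif provider == "gcp":
    from google.cloud import firestore

    def read_record(key):
        doc = firestore.Client().collection("my-collection").document(key).get()
        return doc.to_dict()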

Getting HTTP error 403 - invalid access token while trying to access cluster through Azure databricks

I'm trying to access an Azure Databricks Spark cluster with a Python script that takes a token as input (generated via the Databricks user settings) and calls a GET method to get the details of the cluster along with the cluster-id.
Below is the code snippet. As shown, I have created a cluster in the southcentralus region.
import requests

headers = {"Authorization": "Bearer dapiad************************"}
# The cluster ID is redacted here; substitute the real one.
data = requests.get("https://southcentralus.azuredatabricks.net/api/2.0/clusters/get?cluster_id=**************", headers=headers).text
print(data)
The expected result should give the full details of the cluster, e.g.
{"cluster_id":"0128-******","spark_context_id":3850138716505089853,"cluster_name":"abcdxyz","spark_version":"5.1.x-scala2.11","spark_conf":{"spark.databricks.delta.preview.enabled":"true"},"node_type_id" and so on .....}
The above code works when I execute it on Google Colaboratory, whereas the same code does not work in my local IDE, i.e. IDLE. It gives an HTTP 403 error stating the following:
Problem accessing /api/2.0/clusters/get. Reason:
Invalid access token.
Can anyone help me resolve the issue? I'm stuck on this part and not able to access the cluster through APIs.
It could be due to an encoding issue when you pass the secret. Please look into this issue and how to resolve it. Even though the resolution they give is for AWS, it could be similar for Azure as well. Your secret might contain a "/", which you have to replace.
There is a known problem in the last update related to the '+'
character in secret keys. In particular, we no longer support escaping
'+' into '%2B', which some URL-encoding libraries do.
The current best-practice way of encoding your AWS secret key is
simply
secretKey.replace("/","%2F")
A sample Python snippet is given below:
New_Secret_key = "MySecret/".replace("/","%2F")
https://forums.databricks.com/questions/6590/s3serviceexception-raised-when-accessing-via-mount.html
https://forums.databricks.com/questions/6621/responsecode403-responsemessageforbidden.html
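For what it's worth, a slightly more idiomatic Python 3 version of the original call lets requests build and encode the query string itself; the host, token, and cluster ID below are placeholders:

import requests

DATABRICKS_HOST = "https://southcentralus.azuredatabricks.net"
TOKEN = "dapiXXXXXXXXXXXXXXXX"   # personal access token from user settings
CLUSTER_ID = "0128-XXXXXX"

response = requests.get(
    DATABRICKS_HOST + "/api/2.0/clusters/get",
    headers={"Authorization": "Bearer " + TOKEN},
    params={"cluster_id": CLUSTER_ID},  # requests URL-encodes this for us
)
response.raise_for_status()
print(response.json())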

Google cloud translate API - "Daily Limit Exceeded"

I'm writing a bit of Python using the Google Cloud API to translate some text.
I have set up billing on my account and it says it's active (with some credit added for the free trial). I created an application_default_credentials.json file with -
gcloud auth application-default login
Which asked me to log in to my account (I logged into the same account I set billing up on).
I then used -
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "/home/theo/.config/gcloud/application_default_credentials.json"
at the start of my python script. For the coding I followed these samples here - https://github.com/GoogleCloudPlatform/python-docs-samples/tree/master/translate/cloud-client
Yesterday the API wouldn't work and I would receive "daily limit exceeded" even though I had not used it yet. Eventually I gave up and decided to sleep on it.
I tried again today and it was working, without my having to do anything. Ah great, I thought, it must just have taken a while to update my billing information.
But I've since translated a few things, maybe 10,000 characters, and I'm already receiving the same error message.
I did create a "Project" on the Cloud Console and have an API key from there. I'm not entirely sure how to use it, because the documentation I linked above just uses the JSON credentials file. From what I've read online, using the JSON file is recommended over using a key now.
Any ideas about what I need to do?
Thanks.
Solved by creating a service account key at https://console.cloud.google.com/apis/credentials/serviceaccountkey instead of the credentials created with the gcloud auth command.
After I referenced the JSON file generated from that page, it started working.
More info here - https://cloud.google.com/docs/authentication/getting-started
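A minimal sketch of using that service account key with the translate client, assuming the google-cloud-translate library and an example path for the downloaded key file:

import os

# Point the client library at the service-account key downloaded from the
# credentials page; the path is just an example.
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "/home/theo/.config/gcloud/service-account-key.json"

from google.cloud import translate_v2 as translate

client = translate.Client()
result = client.translate("Hello, world", target_language="fr")
print(result["translatedText"])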

Sending an order to oanda

I want to send an order to OANDA to make a transaction. I use an IPython notebook to run my code; this is my code:
import oandapy
from datetime import datetime, timedelta

trade_expire = datetime.now() + timedelta(days=1)
trade_expire = trade_expire.isoformat("T") + "Z"
oanda = oandapy.API(environment='practice', access_token='XXXX....')
account_id = xxxxxxx  # redacted account number
response = oanda.create_order(account_id, instrument='USD_EUR', units=1000, side='buy',
                              type='limit', price=1.105, expire=trade_expire)
But the error is:
OandaError: OANDA API returned error code 4 (The access token provided does
not allow this request to be made)
How can I solve this problem?
I had the same problem, but when sending orders via curl commands.
The problem has to do with which API you are using from which account.
I notice in your Python code it says "practice," so you'll want to make sure the API token you generated is from within your practice account. Live accounts and practice accounts each use their own API tokens, and your commands will need to match.
You might also check elsewhere in your Python code, where it actually contacts OANDA's server.
For example, when using curl, a live account uses
"https://api-fxtrade.oanda.com/v3/accounts/<ACCOUNT>/orders"
and a practice account uses
"https://api-fxpractice.oanda.com/v3/accounts/<ACCOUNT>/orders"
Using your API token generated on your live account in a practice account will produce the error you're asking about.
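The same host split can be written in Python as well; here is a minimal sketch against the v3 REST API using requests, with a placeholder account ID and token:

import requests

ENVIRONMENT = "practice"            # or "live"; must match where the token was generated
ACCOUNT_ID = "101-001-1234567-001"  # placeholder
ACCESS_TOKEN = "XXXX...."           # placeholder

# Pick the host that matches the account the token was generated in.
HOSTS = {
    "practice": "https://api-fxpractice.oanda.com",
    "live": "https://api-fxtrade.oanda.com",
}

url = HOSTS[ENVIRONMENT] + "/v3/accounts/" + ACCOUNT_ID + "/orders"
response = requests.get(url, headers={"Authorization": "Bearer " + ACCESS_TOKEN})
print(response.status_code, response.json())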
