I'm trying to fetch the last 5 minutes of CPU utilization for a GCP SQL instance, but I'm getting the following error. Please help me resolve it. The service account I'm using has full read permission over Monitoring.
My script:
from google.cloud import monitoring_v3
from google.cloud.monitoring_v3 import query
from google.oauth2 import service_account

credential_file = "/home/user/my-first-project.json"
GCP_SCOPES = [
    'https://www.googleapis.com/auth/sqlservice.admin',
    "https://www.googleapis.com/auth/logging.read",
    "https://www.googleapis.com/auth/cloud-platform",
    "https://www.googleapis.com/auth/monitoring",
    "https://www.googleapis.com/auth/monitoring.read"
]
gcp_credential = service_account.Credentials.from_service_account_file(credential_file, scopes=GCP_SCOPES)
client = monitoring_v3.MetricServiceClient(credentials=gcp_credential)
project_name = f"projects/my-first-project"
cpu_query = query.Query(client,
                        project=project_name,
                        metric_type='cloudsql.googleapis.com/database/cpu/utilization',
                        minutes=5)
next(cpu_query.iter())
Error:
File "$PATH\grpc_helpers.py", line 75, in error_remapped_callable
six.raise_from(exceptions.from_grpc_error(exc), exc)
File "<string>", line 3, in raise_from
google.api_core.exceptions.PermissionDenied: 403 Permission monitoring.timeSeries.list denied (or the resource may not exist).
Removing the "projects/" prefix worked. In other words, passing the bare project ID instead of the full resource name fixed it:
cpu_query = query.Query(
    client,
    project="my-first-project",
    metric_type='cloudsql.googleapis.com/database/cpu/utilization',
    minutes=5
)
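To read the samples back, here is a minimal sketch of iterating the query results (CPU utilization is a double-valued metric, so each point exposes value.double_value):

for time_series in cpu_query:
    for point in time_series.points:
        # each point carries a time interval and a typed value
        print(point.interval.end_time, point.value.double_value)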
Related
I have an issue running a query using the BigQuery client for Python, even though the service account permissions are set correctly. I need to pass the project name argument to the client because I have to do some cross-project ETL from within one procedure, so the finished solution will have a different client for each project.
I am running the following in a Docker container.
# SETUP
from google.cloud import bigquery
from google.oauth2 import service_account
import os
# path to environment default credential - a service account with correct permissions
GOOGLE_APPLICATION_CREDENTIALS = os.getenv("GOOGLE_APPLICATION_CREDENTIALS")
# project name
bq_read_project = os.getenv("BQ_READ_PROJECT") # same value as `myproject`
# query to run
select_statement = """
SELECT id
FROM `myproject.mydataset.mytable`
LIMIT 100
"""
# credentials with an explicit BigQuery scope
credentials = service_account.Credentials.from_service_account_file(
    GOOGLE_APPLICATION_CREDENTIALS,
    scopes=["https://www.googleapis.com/auth/bigquery"]
)
Then I run the following tests:
client1 = bigquery.Client()
read1 = client1.query(select_statement).to_dataframe()
read1
>>> id
0 5189340288253952
1 5557517098680320
2 5580633619300352
3 6208829494657024
4 4896003350069248
...
This demonstrates that the service account the client picks up from the environment is working. The project is contained in the query and doesn't need to be specified in the client. Next I try the following:
client2 = bigquery.Client(credentials=credentials)
read2 = client2.query(select_statement).to_dataframe()
read2
>>> id
0 5189340288253952
1 5557517098680320
2 5580633619300352
3 6208829494657024
4 4896003350069248
...
This demonstrates that adding the credential explicitly still works correctly. The credentials and permissions are not in doubt. Next I try passing the project as an argument:
# project only
client3 = bigquery.Client(project=bq_read_project)
read3 = client3.query(select_statement).to_dataframe()
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/google/api_core/grpc_helpers.py", line 66, in error_remapped_callable
return callable_(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/grpc/_channel.py", line 946, in __call__
return _end_unary_response_blocking(state, call, False, None)
File "/usr/local/lib/python3.9/site-packages/grpc/_channel.py", line 849, in _end_unary_response_blocking
raise _InactiveRpcError(state)
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
status = StatusCode.PERMISSION_DENIED
details = "request failed: the user does not have 'bigquery.readsessions.create' permission for 'projects/myproject'"
debug_error_string = "{"created":"#1664544244.277229969","description":"Error received from peer ipv4:172.217.16.74:443","file":"src/core/lib/surface/call.cc","file_line":1074,"grpc_message":"request failed: the user does not have 'bigquery.readsessions.create' permission for 'projects/myproject'","grpc_status":7}"
Passing the project by name breaks the call. Next I try supplying both the project and the explicit credentials that worked before:
client4 = bigquery.Client(project=bq_read_project, credentials=credentials)
read4 = client4.query(select_statement).to_dataframe()
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/google/api_core/grpc_helpers.py", line 66, in error_remapped_callable
return callable_(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/grpc/_channel.py", line 946, in __call__
return _end_unary_response_blocking(state, call, False, None)
File "/usr/local/lib/python3.9/site-packages/grpc/_channel.py", line 849, in _end_unary_response_blocking
raise _InactiveRpcError(state)
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
status = StatusCode.PERMISSION_DENIED
details = "request failed: the user does not have 'bigquery.readsessions.create' permission for 'projects/myproject'"
debug_error_string = "{"created":"#1664544418.915063405","description":"Error received from peer ipv4:172.217.16.74:443","file":"src/core/lib/surface/call.cc","file_line":1074,"grpc_message":"request failed: the user does not have 'bigquery.readsessions.create' permission for 'projects/myproject'","grpc_status":7}"
Same problem, despite explicitly providing the credential that worked before.
We can see that the credential definitely has access to the project and can connect to it and run a query when the project is only given in the query. However, when the project is passed as an argument to the client, everything breaks.
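One observation that may narrow this down: the permission named in the error, bigquery.readsessions.create, belongs to the BigQuery Storage Read API, which to_dataframe() uses by default to download results. A sketch of a test to see whether the failure is confined to that API (assuming google-cloud-bigquery >= 1.24, which exposes the create_bqstorage_client flag):

# fall back to the plain REST download path instead of the Storage Read API
client5 = bigquery.Client(project=bq_read_project, credentials=credentials)
read5 = client5.query(select_statement).to_dataframe(create_bqstorage_client=False)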
I am currently working on a script that accesses details about an Azure Virtual Machine. This is the code that I have so far:
"""
Instantiate the ComputeManagementClient with the appropriate credentials.
#return ComputeManagementClient object
"""
def get_access_to_virtual_machine():
subscription_id = key.SUBSCRIPTION_ID
credentials = DefaultAzureCredential(authority = AzureAuthorityHosts.AZURE_GOVERNMENT,
exclude_environment_credential = True,
exclude_managed_identity_credential = True,
exclude_shared_token_cache_credential = True)
client = KeyClient(vault_url = key.VAULT_URL, credential = credentials)
compute_client = ComputeManagementClient(credentials, subscription_id)
return compute_client
"""
Check to see if Azure Virtual Machine exists and the state of the virtual machine.
"""
def get_azure_vm(resource_group_name, virtual_machine_name):
compute_client = get_access_to_virtual_machine()
vm_data = compute_client.virtual_machines.get(resource_group_name,
virtual_machine_name,
expand = 'instanceView')
return vm_data
When I run get_azure_vm(key.RESOURCE_GROUP, key.VIRTUAL_MACHINE_NAME), which I am certain has the correct credentials, I get the following error output (note that I replaced the actual subscription ID with 'xxxx'):
Traceback (most recent call last):
File "/Users/shilpakancharla/Documents/function_app/WeedsMediaUploadTrigger/event_process.py", line 62, in <module>
vm_data = get_azure_vm(key.RESOURCE_GROUP, key.VIRTUAL_MACHINE_NAME)
File "<decorator-gen-2>", line 2, in get_azure_vm
File "/usr/local/lib/python3.9/site-packages/retry/api.py", line 73, in retry_decorator
return __retry_internal(partial(f, *args, **kwargs), exceptions, tries, delay, max_delay, backoff, jitter,
File "/usr/local/lib/python3.9/site-packages/retry/api.py", line 33, in __retry_internal
return f()
File "/Users/shilpakancharla/Documents/function_app/WeedsMediaUploadTrigger/event_process.py", line 55, in get_azure_vm
vm_data = compute_client.virtual_machines.get(resource_group_name,
File "/usr/local/lib/python3.9/site-packages/azure/mgmt/compute/v2019_12_01/operations/_virtual_machines_operations.py", line 641, in get
map_error(status_code=response.status_code, response=response, error_map=error_map)
File "/usr/local/lib/python3.9/site-packages/azure/core/exceptions.py", line 102, in map_error
raise error
azure.core.exceptions.ResourceNotFoundError: (SubscriptionNotFound) The subscription 'xxxx' could not be found.
Code: SubscriptionNotFound
Message: The subscription 'xxxx' could not be found.
I am using the beta preview version of azure-mgmt-compute, which was installed with pip install azure-mgmt-compute==17.0.0b1. Note that I am also using an Azure Government account. Is there a way to solve this error? I have also tried using ServicePrincipalCredentials and get_azure_credentials() but ran into different errors; I was recommended to use DefaultAzureCredential and the key vault by a coworker.
There is no problem with the code; it works fine on my side. The error message shows the reason:
azure.core.exceptions.ResourceNotFoundError: (SubscriptionNotFound)
The subscription 'xxxx' could not be found. Code: SubscriptionNotFound
Message: The subscription 'xxxx' could not be found.
It seems you are running the Python code on your local machine. I recommend logging in with the Azure CLI first and then checking that the subscription ID used in your Python code is correct.
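For example, a quick check along those lines (a sketch assuming the Azure CLI is installed; the cloud has to be switched first because this is an Azure Government account):

az cloud set --name AzureUSGovernment    # point the CLI at Azure Government
az login                                 # sign in interactively
az account show --query id -o tsv        # compare this ID against key.SUBSCRIPTION_ID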
I am trying to use my own configuration file with pyngrok, but I don't understand why it is not detected. My project has to be run with sudo, so for some users ngrok does not find the configuration file in the root user's home directory. That is why I want to keep my own configuration file inside my project directory. Here is my code:
from pyngrok import ngrok, conf
import os
from pathlib import Path
import re

def ngrok_start():
    file = Path(".config/ngrok.yml")
    if file.exists():
        os.system("kill -9 $(pgrep ngrok)")
        ngrok.DEFAULT_CONFIG_PATH = ".config/ngrok.yml"
        ngrok.connect(443, "tcp")
        while True:
            ngrok_tunnels = ngrok.get_tunnels()
            url = ngrok_tunnels[0].public_url
            if re.match("tcp://[0-9]*.tcp.ngrok.io:[0-9]*", url) is not None:
                print("Ngrok TCP: " + url)
                break
With my code I get the following error:
Traceback (most recent call last):
File "ngrok.py", line 27, in <module>
ngrok_start ()
File "ngrok.py", line 20, in ngrok_start
ngrok.connect (443, "tcp")
File "/usr/local/lib/python2.7/dist-packages/pyngrok/ngrok.py", line 181, in connect
timeout = pyngrok_config.request_timeout))
File "/usr/local/lib/python2.7/dist-packages/pyngrok/ngrok.py", line 321, in api_request
status_code, e.msg, e.hdrs, response_data)
pyngrok.exception.PyngrokNgrokHTTPError: ngrok client exception, API returned 502: {"error_code": 103, "status_code": 502, "msg": "failed to start tunnel", "details": {"err": "TCP tunnels are only available after you sign up.\nSign up at: https://ngrok.com/signup\n\nIf you have already signed up, make sure your authtoken is installed.\nYour authtoken is available on your dashboard: https://dashboard.ngrok.com/auth/your-authtoken\r\n\r\nERR_NGROK_302\r\n"}}
No handlers could be found for logger "pyngrok.process"
My script is in the lib folder and my configuration file is in .config/ngrok.yml, which contains my token, but it is not detected. I hope you can help me, thanks.
Per the docs, the DEFAULT_CONFIG_PATH variable lives in the conf module, not the ngrok module, so change ngrok.DEFAULT_CONFIG_PATH = ... to conf.get_default().config_path = ....
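Applied to the code in the question, a minimal sketch:

from pyngrok import ngrok, conf

# point pyngrok at the project-local config before connecting
conf.get_default().config_path = ".config/ngrok.yml"
ngrok.connect(443, "tcp")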
I have created an Azure Cognitive Services resource following tutorial 1.
Then I have created the environment and run the following code (from tutorial 2):
# Import required modules.
from azure.cognitiveservices.search.websearch import WebSearchAPI
from azure.cognitiveservices.search.websearch.models import SafeSearch
from msrest.authentication import CognitiveServicesCredentials
# Replace with your subscription key.
subscription_key = "YOUR_SUBSCRIPTION_KEY"
# Instantiate the client and replace with your endpoint.
client = WebSearchAPI(CognitiveServicesCredentials(subscription_key), base_url = "YOUR_ENDPOINT")
# Make a request. Replace Yosemite if you'd like.
web_data = client.web.search(query="Yosemite")
print("\r\nSearched for Query# \" Yosemite \"")
However, it seems the generated subscription key and endpoint are not correctly read by the script, since I get the following error:
File "azu_scrapper.py", line 17, in
web_data = client.web.search(query="Yosemite") File "/home/user/.local/share/virtualenvs/linkedin-CHSAGU1d/lib/python3.7/site-packages/azure/cognitiveservices/search/websearch/operations/web_operations.py",
line 365, in search
raise models.ErrorResponseException(self._deserialize, response) azure.cognitiveservices.search.websearch.models.error_response_py3.ErrorResponseException:
Operation returned an invalid status code 'Resource Not Found'
Any idea why it is not working?
The base_url value should be:
https://<your endpoint>/bing/v7.0
I have tested it on my side and it works for me.
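Applied to the snippet in the question, the client construction becomes (a sketch; YOUR_ENDPOINT is still a placeholder for your real endpoint host):

client = WebSearchAPI(
    CognitiveServicesCredentials(subscription_key),
    base_url="https://YOUR_ENDPOINT/bing/v7.0"
)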
Just getting started on the AdWords API; for some reason I can't seem to connect at all.
The code below, straight from the tutorial, throws the following error:
Traceback (most recent call last):
File "<pyshell#12>", line 1, in <module>
client = AdWordsClient(path=os.path.join('Users', 'ravinthambapillai', 'Google Drive', 'client_secrets.json'))
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/adspygoogle/adwords/AdWordsClient.py", line 151, in __init__
self._headers = self.__LoadAuthCredentials()
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/adspygoogle/adwords/AdWordsClient.py", line 223, in __LoadAuthCredentials
return super(AdWordsClient, self)._LoadAuthCredentials()
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/adspygoogle/common/Client.py", line 94, in _LoadAuthCredentials
raise ValidationError(msg)
**ValidationError: Authentication data is missing.**
import os

from adspygoogle.adwords.AdWordsClient import AdWordsClient
from adspygoogle.common import Utils

client = AdWordsClient(path=os.path.join('Users', 'this-user', 'this-folder', 'client_secrets.json'))
It looks like there are two issues. First, try removing the last path element; as far as I recall, the path parameter expects a directory that contains the authentication pickle, logs, etc. This approach requires that you already have a valid auth_token.pkl.
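A hypothetical sketch of that directory form, reusing the placeholder folders from the question:

client = AdWordsClient(path=os.path.join('Users', 'this-user', 'this-folder'))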
Second, it appears that you're using OAuth2 for authentication (I'm guessing by the client_secrets.json file). For this to work, you'll need to use the oauth2client library and provide an oauth2credentials instance in the headers parameter to AdWordsClient.
The following is straight from the file examples/adspygoogle/adwords/v201302/misc/use_oauth2.py in the client distribution and should give you an idea how it works:
# We're using the oauth2client library:
# http://code.google.com/p/google-api-python-client/downloads/list
flow = OAuth2WebServerFlow(
    client_id=oauth2_client_id,
    client_secret=oauth2_client_secret,
    # Scope is the server address with '/api/adwords' appended.
    scope='https://adwords.google.com/api/adwords',
    user_agent='oauth2 code example')

# Get the authorization URL to direct the user to.
authorize_url = flow.step1_get_authorize_url()
print ('Log in to your AdWords account and open the following URL: \n%s\n' %
       authorize_url)
print 'After approving the token enter the verification code (if specified).'
code = raw_input('Code: ').strip()

credential = None
try:
    credential = flow.step2_exchange(code)
except FlowExchangeError, e:
    sys.exit('Authentication has failed: %s' % e)

# Create the AdWordsUser and set the OAuth2 credentials.
client = AdWordsClient(headers={
    'developerToken': '%s++USD' % email,
    'clientCustomerId': client_customer_id,
    'userAgent': 'OAuth2 Example',
    'oauth2credentials': credential
})
I am not familiar with the AdWordsClient API, but are you sure your path is correct?
Your current join produces a relative path; do you need an absolute one?
>>> import os
>>> os.path.join('Users', 'this-user')
'Users/this-user'
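Anchoring the join at the filesystem root yields an absolute path instead:

>>> os.path.join(os.sep, 'Users', 'this-user')
'/Users/this-user'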
For testing, you could hardcode the absolute path to make sure it is not a path issue.
I would also make sure that client_secrets.json exists and that it is readable by the user executing Python.