I was authenticating with an internal IdP and then using the SAML assertion to assume a role via the boto3 STS client. The interaction with the IdP was fine and I was able to generate the SAML assertion after successful authentication, but when I tried to create the STS client with client = boto3.client('sts'), botocore threw an Invalid header value error.
The error response was coming from our egress proxy server.
File "/usr/local/lib/python3.8/dist-packages/aws_authentication/credentials.py", line 219, in decode_saml_assertion
client = boto3.client('sts')
File "/usr/local/lib/python3.8/dist-packages/boto3/__init__.py", line 93, in client
return _get_default_session().client(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/boto3/session.py", line 258, in client
return self._session.create_client(
File "/usr/local/lib/python3.8/dist-packages/botocore/session.py", line 826, in create_client
credentials = self.get_credentials()
File "/usr/local/lib/python3.8/dist-packages/botocore/session.py", line 430, in get_credentials
self._credentials = self._components.get_component(
File "/usr/local/lib/python3.8/dist-packages/botocore/credentials.py", line 1975, in load_credentials
creds = provider.load()
File "/usr/local/lib/python3.8/dist-packages/botocore/credentials.py", line 1028, in load
metadata = fetcher.retrieve_iam_role_credentials()
File "/usr/local/lib/python3.8/dist-packages/botocore/utils.py", line 486, in retrieve_iam_role_credentials
role_name = self._get_iam_role(token)
File "/usr/local/lib/python3.8/dist-packages/botocore/utils.py", line 518, in _get_iam_role
return self._get_request(
File "/usr/local/lib/python3.8/dist-packages/botocore/utils.py", line 427, in _get_request
response = self._session.send(request.prepare())
File "/usr/local/lib/python3.8/dist-packages/botocore/httpsession.py", line 356, in send
raise HTTPClientError(error=e)
botocore.exceptions.HTTPClientError: An HTTP Client raised an unhandled exception: Invalid header value b'---- proxy error response ----'
This issue occurred because the _fetch_metadata_token function in the botocore package connects to http://169.254.169.254/latest/api/token to fetch the instance metadata token.
To connect to 169.254.169.254 successfully, I had to add it to no_proxy so that the egress proxy server doesn't intercept the connection:
no_proxy=localhost,169.254.169.254
After adding the metadata endpoint 169.254.169.254 to no_proxy, I was able to connect to STS and create the client.
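For anyone hitting the same thing, here is a minimal sketch of the workaround in code; the role and principal ARNs are placeholders, and the assertion value stands in for whatever your IdP flow returns:

import os
import boto3

# Bypass the egress proxy for the EC2 instance metadata service,
# which botocore queries while building the default session.
# This must be set before the client is created.
os.environ["no_proxy"] = "localhost,169.254.169.254"

client = boto3.client("sts")

# Base64-encoded SAML assertion from the IdP login (placeholder).
assertion = "<base64-encoded SAML assertion>"

response = client.assume_role_with_saml(
    RoleArn="arn:aws:iam::123456789012:role/MyRole",               # placeholder
    PrincipalArn="arn:aws:iam::123456789012:saml-provider/MyIdP",  # placeholder
    SAMLAssertion=assertion,
)
credentials = response["Credentials"]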
I need to write a Python script that checks for incoming e-mail in a shared mailbox hosted on Office 365.
I am using Python exchangelib 4.7.3:
Code
#!/usr/bin/env python3
import logging

from exchangelib import Credentials, Account, Configuration, DELEGATE


def list_mails():
    credentials = Credentials('user@company.com', 'SecretPassword')
    config = Configuration(server='outlook.office365.com', credentials=credentials)
    account = Account(primary_smtp_address='sharedmailbox@company.com', config=config,
                      autodiscover=False, access_type=DELEGATE)
    for item in account.inbox.all().order_by('-datetime_received')[:100]:
        print(item.subject, item.sender, item.datetime_received)


def main():
    list_mails()


if __name__ == "__main__":
    logging.basicConfig(level=logging.DEBUG)
    main()
Issue
Regardless of my different attempts, the following error keeps showing up:
DEBUG:exchangelib.protocol:No retry: no fail-fast policy
DEBUG:exchangelib.protocol:Server outlook.office365.com: Retiring session 87355
DEBUG:exchangelib.protocol:Server outlook.office365.com: Created session 82489
DEBUG:exchangelib.protocol:Server outlook.office365.com: Releasing session 82489
Traceback (most recent call last):
File "/Users/test/Code/./orderalert.py", line 24, in <module>
main()
File "/Users/test/Code/./orderalert.py", line 20, in main
list_mails()
File "/Userstest/Code/./orderalert.py", line 11, in list_mails
account = Account(primary_smtp_address='sharedmailbox#company.com', config=config, autodiscover=False, access_type=DELEGATE)
File "/opt/homebrew/lib/python3.9/site-packages/exchangelib/account.py", line 204, in __init__
self.version = self.protocol.version.copy()
File "/opt/homebrew/lib/python3.9/site-packages/exchangelib/protocol.py", line 483, in version
self.config.version = Version.guess(self, api_version_hint=self._api_version_hint)
File "/opt/homebrew/lib/python3.9/site-packages/exchangelib/version.py", line 233, in guess
list(ResolveNames(protocol=protocol).call(unresolved_entries=[name]))
File "/opt/homebrew/lib/python3.9/site-packages/exchangelib/services/common.py", line 187, in _elems_to_objs
for elem in elems:
File "/opt/homebrew/lib/python3.9/site-packages/exchangelib/services/common.py", line 245, in _chunked_get_elements
yield from self._get_elements(payload=payload_func(chunk, **kwargs))
File "/opt/homebrew/lib/python3.9/site-packages/exchangelib/services/common.py", line 265, in _get_elements
yield from self._response_generator(payload=payload)
File "/opt/homebrew/lib/python3.9/site-packages/exchangelib/services/common.py", line 227, in _response_generator
response = self._get_response_xml(payload=payload)
File "/opt/homebrew/lib/python3.9/site-packages/exchangelib/services/common.py", line 343, in _get_response_xml
r = self._get_response(payload=payload, api_version=api_version)
File "/opt/homebrew/lib/python3.9/site-packages/exchangelib/services/common.py", line 297, in _get_response
r, session = post_ratelimited(
File "/opt/homebrew/lib/python3.9/site-packages/exchangelib/util.py", line 917, in post_ratelimited
protocol.retry_policy.raise_response_errors(r) # Always raises an exception
File "/opt/homebrew/lib/python3.9/site-packages/exchangelib/protocol.py", line 688, in raise_response_errors
raise UnauthorizedError(f"Invalid credentials for {response.url}")
exchangelib.errors.UnauthorizedError: Invalid credentials for https://outlook.office365.com/EWS/Exchange.asmx
DEBUG:exchangelib.protocol:Server outlook.office365.com: Closing sessions
Tried solutions (no luck with any of them)
The user@company.com credentials are valid; I can log in through the web interface (no MFA or other policy enforced).
I have tried autodiscovery:
account = Account('sharedmailbox@company.com', credentials=credentials, autodiscover=True, access_type=DELEGATE)
Disabled TLS validation when connecting
As suggested, added an "on" subdomain (on.company.com) for the e-mail addresses
Tested the overall setup for autodiscovery on testconnectivity.microsoft.com
On the Office 365 Azure AD admin web interface, I can see a successful login performed by the script.
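One more avenue worth checking that isn't in the list above: Microsoft has retired Basic authentication for Exchange Online, and a blocked Basic login can look exactly like this (the sign-in appears in Azure AD, yet EWS returns 401). A rough sketch of the OAuth2 client-credentials approach exchangelib supports follows, assuming an Azure AD app registration with EWS application permissions; the client ID, secret, and tenant ID are placeholders:

from exchangelib import (Account, Configuration, IMPERSONATION, Identity,
                         OAUTH2, OAuth2Credentials)

# All three values below are placeholders for your app registration.
credentials = OAuth2Credentials(
    client_id='<app client id>',
    client_secret='<app client secret>',
    tenant_id='<tenant id>',
    identity=Identity(primary_smtp_address='sharedmailbox@company.com'))
config = Configuration(server='outlook.office365.com',
                       credentials=credentials, auth_type=OAUTH2)
# App permissions act on behalf of the mailbox, hence IMPERSONATION.
account = Account(primary_smtp_address='sharedmailbox@company.com',
                  config=config, autodiscover=False, access_type=IMPERSONATION)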
I started working with the Prefect Orchestration tool.
My goal is to set up a server managing my automation on different other PCs and servers.
I do not fully understand the architecture of Prefect yet (with all these Agents, etc.), but I managed to start a server on a remote Ubuntu environment.
To access the UI remotely, I created a config.toml and added the following lines:
[server]
endpoint = "<IPofserver>:4200/graphql"
[server.ui]
apollo_url = "http://<IPofserver>:4200/graphql"
[server.telemetry]
enabled = false
The telemetry section just disables sending usage analytics to Prefect.
Afterwards it was possible to access the UI from another PC and also to start an agent on another PC with:
prefect agent local start --api "http://<IPofserver>:4200/graphql"
But how can I deploy flows now? I cannot find an option to set their API endpoint like I did for the agent.
Even if I try to register a flow on the machine where the server itself is running, I get the following error message:
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.10/dist-packages/prefect/core/flow.py", line 1726, in register
    registered_flow = client.register(
  File "/usr/local/lib/python3.10/dist-packages/prefect/client/client.py", line 831, in register
    project = self.graphql(query_project).data.project  # type: ignore
  File "/usr/local/lib/python3.10/dist-packages/prefect/client/client.py", line 443, in graphql
    result = self.post(
  File "/usr/local/lib/python3.10/dist-packages/prefect/client/client.py", line 398, in post
    response = self._request(
  File "/usr/local/lib/python3.10/dist-packages/prefect/client/client.py", line 633, in _request
    response = self._send_request(
  File "/usr/local/lib/python3.10/dist-packages/prefect/client/client.py", line 497, in _send_request
    response = session.post(
  File "/usr/local/lib/python3.10/dist-packages/requests/sessions.py", line 635, in post
    return self.request("POST", url, data=data, json=json, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/requests/sessions.py", line 587, in request
    resp = self.send(prep, **send_kwargs)
  File "/usr/local/lib/python3.10/dist-packages/requests/sessions.py", line 695, in send
    adapter = self.get_adapter(url=request.url)
  File "/usr/local/lib/python3.10/dist-packages/requests/sessions.py", line 792, in get_adapter
    raise InvalidSchema(f"No connection adapters were found for {url!r}")
requests.exceptions.InvalidSchema: No connection adapters were found for ':4200/graphql'
Example code used:
import prefect
from prefect import task, Flow

@task
def say_hello():
    logger = prefect.context.get("logger")
    logger.info("Hello, Cloud!")

with Flow("hello-flow") as flow:
    say_hello()

# Register the flow under the "Test" project
flow.register(project_name="Test")
If you are getting started with Prefect, I'd recommend using Prefect 2.0 - check this documentation page on getting started and this one about the underlying architecture.
If you still need help with Prefect Server and Prefect 1.0, check this extensive troubleshooting guide and if that doesn't help, send us a message on Slack, and we'll try to help you there.
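As a side note on the traceback above: requests raises InvalidSchema("No connection adapters were found ...") when the URL it is handed has no http:// or https:// scheme, and the ':4200/graphql' in the message suggests the client picked up an endpoint with neither scheme nor host. If you stay on Prefect 1.0/Server, a config.toml along these lines should avoid that; this is a sketch assuming your server really listens on <IPofserver>:4200:

[server]
endpoint = "http://<IPofserver>:4200/graphql"

[server.ui]
apollo_url = "http://<IPofserver>:4200/graphql"

[server.telemetry]
enabled = false

Also make sure the client targets Prefect Server rather than Prefect Cloud by running prefect backend server once on the machine that registers the flows.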
During a PythonScriptStep in an Azure ML Pipeline, I'm saving a model as joblib pickle dump to a directory in a Blob Container in the Azure Blob Storage which I've created during the setup of the Azure ML Workspace. Afterwards I'm trying to upload this model file to the step run's output directory using
Run.upload_file (name, path_or_stream)
(for the function's documentation, see https://learn.microsoft.com/en-us/python/api/azureml-core/azureml.core.run(class)?view=azure-ml-py#upload-file-name--path-or-stream--datastore-name-none-)
Some time ago, when I created the script using azureml-sdk version 1.18.0, everything worked fine. Now I've updated the script's functionality and upgraded the azureml-sdk to version 1.33.0 in the process, and the upload function runs into the following error:
Traceback (most recent call last):
File "/opt/miniconda/lib/python3.7/site-packages/azureml/_file_utils/upload.py", line 64, in upload_blob_from_stream
validate_content=True)
File "/opt/miniconda/lib/python3.7/site-packages/azureml/_restclient/clientbase.py", line 93, in execute_func_with_reset
return ClientBase._execute_func_internal(backoff, retries, module_logger, func, reset_func, *args, **kwargs)
File "/opt/miniconda/lib/python3.7/site-packages/azureml/_restclient/clientbase.py", line 367, in _execute_func_internal
left_retry = cls._handle_retry(back_off, left_retry, total_retry, error, logger, func)
File "/opt/miniconda/lib/python3.7/site-packages/azureml/_restclient/clientbase.py", line 399, in _handle_retry
raise error
File "/opt/miniconda/lib/python3.7/site-packages/azureml/_restclient/clientbase.py", line 358, in _execute_func_internal
response = func(*args, **kwargs)
File "/opt/miniconda/lib/python3.7/site-packages/azureml/_vendor/azure_storage/blob/blockblobservice.py", line 614, in create_blob_from_stream
initialization_vector=iv
File "/opt/miniconda/lib/python3.7/site-packages/azureml/_vendor/azure_storage/blob/_upload_chunking.py", line 98, in _upload_blob_chunks
range_ids = [f.result() for f in futures]
File "/opt/miniconda/lib/python3.7/site-packages/azureml/_vendor/azure_storage/blob/_upload_chunking.py", line 98, in <listcomp>
range_ids = [f.result() for f in futures]
File "/opt/miniconda/lib/python3.7/concurrent/futures/_base.py", line 435, in result
return self.__get_result()
File "/opt/miniconda/lib/python3.7/concurrent/futures/_base.py", line 384, in __get_result
raise self._exception
File "/opt/miniconda/lib/python3.7/concurrent/futures/thread.py", line 57, in run
result = self.fn(*self.args, **self.kwargs)
File "/opt/miniconda/lib/python3.7/site-packages/azureml/_vendor/azure_storage/blob/_upload_chunking.py", line 210, in process_chunk
return self._upload_chunk_with_progress(chunk_offset, chunk_bytes)
File "/opt/miniconda/lib/python3.7/site-packages/azureml/_vendor/azure_storage/blob/_upload_chunking.py", line 224, in _upload_chunk_with_progress
range_id = self._upload_chunk(chunk_offset, chunk_data)
File "/opt/miniconda/lib/python3.7/site-packages/azureml/_vendor/azure_storage/blob/_upload_chunking.py", line 269, in _upload_chunk
timeout=self.timeout,
File "/opt/miniconda/lib/python3.7/site-packages/azureml/_vendor/azure_storage/blob/blockblobservice.py", line 1013, in _put_block
self._perform_request(request)
File "/opt/miniconda/lib/python3.7/site-packages/azureml/_vendor/azure_storage/common/storageclient.py", line 432, in _perform_request
raise ex
File "/opt/miniconda/lib/python3.7/site-packages/azureml/_vendor/azure_storage/common/storageclient.py", line 357, in _perform_request
raise ex
File "/opt/miniconda/lib/python3.7/site-packages/azureml/_vendor/azure_storage/common/storageclient.py", line 343, in _perform_request
HTTPError(response.status, response.message, response.headers, response.body))
File "/opt/miniconda/lib/python3.7/site-packages/azureml/_vendor/azure_storage/common/_error.py", line 115, in _http_error_handler
raise ex
azure.common.AzureHttpError: Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature. ErrorCode: AuthenticationFailed
<?xml version="1.0" encoding="utf-8"?><Error><Code>AuthenticationFailed</Code><Message>Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.
RequestId:5d4e1b7e-c01e-0070-0d47-9bf8a0000000
Time:2021-08-27T13:30:02.2685991Z</Message><AuthenticationErrorDetail>Signature did not match. String to sign used was rcw
2021-08-27T13:19:56Z
2021-08-28T13:29:56Z
/blob/mystorage/azureml/ExperimentRun/dcid.98d11a7b-2aac-4bc0-bd64-bb4d72e0e0be/outputs/models/Model.pkl
2019-07-07
b
</AuthenticationErrorDetail></Error>
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/mnt/batch/tasks/shared/LS_root/jobs/.../azureml-setup/context_manager_injector.py", line 243, in execute_with_context
runpy.run_path(sys.argv[0], globals(), run_name="__main__")
File "/opt/miniconda/lib/python3.7/runpy.py", line 263, in run_path
pkg_name=pkg_name, script_name=fname)
File "/opt/miniconda/lib/python3.7/runpy.py", line 96, in _run_module_code
mod_name, mod_spec, pkg_name, script_name)
File "/opt/miniconda/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "401_AML_Pipeline_Time_Series_Model_Training_Azure_ML_CPU.py", line 318, in <module>
main()
File "401_AML_Pipeline_Time_Series_Model_Training_Azure_ML_CPU.py", line 286, in main
path_or_stream=model_path)
File "/opt/miniconda/lib/python3.7/site-packages/azureml/core/run.py", line 53, in wrapped
return func(self, *args, **kwargs)
File "/opt/miniconda/lib/python3.7/site-packages/azureml/core/run.py", line 1989, in upload_file
datastore_name=datastore_name)
File "/opt/miniconda/lib/python3.7/site-packages/azureml/_restclient/artifacts_client.py", line 114, in upload_artifact
return self.upload_artifact_from_path(artifact, *args, **kwargs)
File "/opt/miniconda/lib/python3.7/site-packages/azureml/_restclient/artifacts_client.py", line 107, in upload_artifact_from_path
return self.upload_artifact_from_stream(stream, *args, **kwargs)
File "/opt/miniconda/lib/python3.7/site-packages/azureml/_restclient/artifacts_client.py", line 99, in upload_artifact_from_stream
content_type=content_type, session=session)
File "/opt/miniconda/lib/python3.7/site-packages/azureml/_restclient/artifacts_client.py", line 88, in upload_stream_to_existing_artifact
timeout=TIMEOUT, backoff=BACKOFF_START, retries=RETRY_LIMIT)
File "/opt/miniconda/lib/python3.7/site-packages/azureml/_file_utils/upload.py", line 71, in upload_blob_from_stream
raise AzureMLException._with_error(azureml_error, inner_exception=e)
azureml._common.exceptions.AzureMLException: AzureMLException:
Message: Encountered authorization error while uploading to blob storage. Please check the storage account attached to your workspace. Make sure that the current user is authorized to access the storage account and that the request is not blocked by a firewall, virtual network, or other security setting.
StorageAccount: mystorage
ContainerName: azureml
StatusCode: 403
InnerException Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature. ErrorCode: AuthenticationFailed
<?xml version="1.0" encoding="utf-8"?><Error><Code>AuthenticationFailed</Code><Message>Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.
RequestId:5d4e1b7e-c01e-0070-0d47-9bf8a0000000
Time:2021-08-27T13:30:02.2685991Z</Message><AuthenticationErrorDetail>Signature did not match. String to sign used was rcw
2021-08-27T13:19:56Z
2021-08-28T13:29:56Z
/blob/mystorage/azureml/ExperimentRun/dcid.98d11a7b-2aac-4bc0-bd64-bb4d72e0e0be/outputs/models/Model.pkl
2019-07-07
b
</AuthenticationErrorDetail></Error>
ErrorResponse
{
"error": {
"code": "UserError",
"message": "Encountered authorization error while uploading to blob storage. Please check the storage account attached to your workspace. Make sure that the current user is authorized to access the storage account and that the request is not blocked by a firewall, virtual network, or other security setting.\n\tStorageAccount: mystorage\n\tContainerName: azureml\n\tStatusCode: 403",
"inner_error": {
"code": "Auth",
"inner_error": {
"code": "Authorization"
}
}
}
}
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "401_AML_Pipeline_Time_Series_Model_Training_Azure_ML_CPU.py", line 318, in <module>
main()
File "401_AML_Pipeline_Time_Series_Model_Training_Azure_ML_CPU.py", line 286, in main
path_or_stream=model_path)
File "/opt/miniconda/lib/python3.7/site-packages/azureml/core/run.py", line 53, in wrapped
return func(self, *args, **kwargs)
File "/opt/miniconda/lib/python3.7/site-packages/azureml/core/run.py", line 1989, in upload_file
datastore_name=datastore_name)
File "/opt/miniconda/lib/python3.7/site-packages/azureml/_restclient/artifacts_client.py", line 114, in upload_artifact
return self.upload_artifact_from_path(artifact, *args, **kwargs)
File "/opt/miniconda/lib/python3.7/site-packages/azureml/_restclient/artifacts_client.py", line 107, in upload_artifact_from_path
return self.upload_artifact_from_stream(stream, *args, **kwargs)
File "/opt/miniconda/lib/python3.7/site-packages/azureml/_restclient/artifacts_client.py", line 99, in upload_artifact_from_stream
content_type=content_type, session=session)
File "/opt/miniconda/lib/python3.7/site-packages/azureml/_restclient/artifacts_client.py", line 88, in upload_stream_to_existing_artifact
timeout=TIMEOUT, backoff=BACKOFF_START, retries=RETRY_LIMIT)
File "/opt/miniconda/lib/python3.7/site-packages/azureml/_file_utils/upload.py", line 71, in upload_blob_from_stream
raise AzureMLException._with_error(azureml_error, inner_exception=e)
UserScriptException: UserScriptException:
Message: Encountered authorization error while uploading to blob storage. Please check the storage account attached to your workspace. Make sure that the current user is authorized to access the storage account and that the request is not blocked by a firewall, virtual network, or other security setting.
StorageAccount: mystorage
ContainerName: azureml
StatusCode: 403
InnerException AzureMLException:
Message: Encountered authorization error while uploading to blob storage. Please check the storage account attached to your workspace. Make sure that the current user is authorized to access the storage account and that the request is not blocked by a firewall, virtual network, or other security setting.
StorageAccount: mystorage
ContainerName: azureml
StatusCode: 403
InnerException Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature. ErrorCode: AuthenticationFailed
<?xml version="1.0" encoding="utf-8"?><Error><Code>AuthenticationFailed</Code><Message>Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.
RequestId:5d4e1b7e-c01e-0070-0d47-9bf8a0000000
Time:2021-08-27T13:30:02.2685991Z</Message><AuthenticationErrorDetail>Signature did not match. String to sign used was rcw
2021-08-27T13:19:56Z
2021-08-28T13:29:56Z
/blob/mystorage/azureml/ExperimentRun/dcid.98d11a7b-2aac-4bc0-bd64-bb4d72e0e0be/outputs/models/Model.pkl
2019-07-07
b
</AuthenticationErrorDetail></Error>
ErrorResponse
{
"error": {
"code": "UserError",
"message": "Encountered authorization error while uploading to blob storage. Please check the storage account attached to your workspace. Make sure that the current user is authorized to access the storage account and that the request is not blocked by a firewall, virtual network, or other security setting.\n\tStorageAccount: verovisionstorage\n\tContainerName: azureml\n\tStatusCode: 403",
"inner_error": {
"code": "Auth",
"inner_error": {
"code": "Authorization"
}
}
}
}
ErrorResponse
{
"error": {
"code": "UserError",
"message": "Encountered authorization error while uploading to blob storage. Please check the storage account attached to your workspace. Make sure that the current user is authorized to access the storage account and that the request is not blocked by a firewall, virtual network, or other security setting.\n\tStorageAccount: mystorage\n\tContainerName: azureml\n\tStatusCode: 403"
}
}
As far as I can tell from the code of the azureml.core.Run class and the subsequent function calls, the Run object tries to upload the file to the step run's output directory using SAS token authentication (which fails). This documentation article is linked in the code (but I don't know whether it relates to the issue): https://learn.microsoft.com/en-us/rest/api/storageservices/create-service-sas#service-sas-example
Did anybody encounter this error as well and knows what causes it or how it can be resolved?
Best,
Jonas
We've seen this before; it's annoying. I think the answer is to go to the Datastores page of the AML Studio UI and manually enter the storage account key again.
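If you'd rather do that from code than from the Studio UI, a minimal sketch using azureml-core is below; the datastore name, account name, and key are placeholders for your own values:

from azureml.core import Workspace, Datastore

ws = Workspace.from_config()  # assumes a config.json for your workspace

# Re-register the workspace blob datastore with the current account key,
# replacing the stale credentials behind the 403 AuthenticationFailed.
Datastore.register_azure_blob_container(
    workspace=ws,
    datastore_name="workspaceblobstore",  # placeholder: your datastore's name
    container_name="azureml",
    account_name="mystorage",             # placeholder: your storage account
    account_key="<current storage account key>",  # placeholder
    overwrite=True,
)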
I would like to use the Python kubernetes-client to connect to my AKS cluster API.
To do that, I tried to use the example given by Kubernetes:
from kubernetes import client, config

config.load_kube_config()

v1 = client.CoreV1Api()
print("Listing pods with their IPs:")
ret = v1.list_pod_for_all_namespaces(watch=False)
for i in ret.items:
    print("%s\t%s\t%s" % (i.status.pod_ip, i.metadata.namespace, i.metadata.name))
It is supposed to load my local kubeconfig and get a pod list, but I get the following error:
Traceback (most recent call last):
  File "test.py", line 4, in <module>
    config.load_kube_config()
  File "/Users//works/test-kube-api-python/env/lib/python2.7/site-packages/kubernetes/config/kube_config.py", line 661, in load_kube_config
    loader.load_and_set(config)
  File "/Users//works/test-kube-api-python/env/lib/python2.7/site-packages/kubernetes/config/kube_config.py", line 469, in load_and_set
    self._load_authentication()
  File "/Users//works/test-kube-api-python/env/lib/python2.7/site-packages/kubernetes/config/kube_config.py", line 203, in _load_authentication
    if self._load_auth_provider_token():
  File "/Users//works/test-kube-api-python/env/lib/python2.7/site-packages/kubernetes/config/kube_config.py", line 221, in _load_auth_provider_token
    return self._load_azure_token(provider)
  File "/Users//works/test-kube-api-python/env/lib/python2.7/site-packages/kubernetes/config/kube_config.py", line 233, in _load_azure_token
    self._refresh_azure_token(provider['config'])
  File "/Users//works/test-kube-api-python/env/lib/python2.7/site-packages/kubernetes/config/kube_config.py", line 253, in _refresh_azure_token
    refresh_token, client_id, '00000002-0000-0000-c000-000000000000')
  File "/Users//works/test-kube-api-python/env/lib/python2.7/site-packages/adal/authentication_context.py", line 236, in acquire_token_with_refresh_token
    return self._acquire_token(token_func)
  File "/Users//works/test-kube-api-python/env/lib/python2.7/site-packages/adal/authentication_context.py", line 128, in _acquire_token
    return token_func(self)
  File "/Users//works/test-kube-api-python/env/lib/python2.7/site-packages/adal/authentication_context.py", line 234, in token_func
    return token_request.get_token_with_refresh_token(refresh_token, client_secret)
  File "/Users//works/test-kube-api-python/env/lib/python2.7/site-packages/adal/token_request.py", line 343, in get_token_with_refresh_token
    return self._get_token_with_refresh_token(refresh_token, None, client_secret)
  File "/Users//works/test-kube-api-python/env/lib/python2.7/site-packages/adal/token_request.py", line 340, in _get_token_with_refresh_token
    return self._oauth_get_token(oauth_parameters)
  File "/Users//works/test-kube-api-python/env/lib/python2.7/site-packages/adal/token_request.py", line 112, in _oauth_get_token
    return client.get_token(oauth_parameters)
  File "/Users//works/test-kube-api-python/env/lib/python2.7/site-packages/adal/oauth2_client.py", line 291, in get_token
    raise AdalError(return_error_string, error_response)
adal.adal_error.AdalError: Get Token request returned http error: 400
and server response:
{"error":"invalid_grant","error_description":"AADSTS65001: The user or administrator has not consented to use the application with ID '' named 'Kubernetes AD Client'. Send an interactive authorization request for this user and resource.\r\nTrace ID: \r\nCorrelation ID: \r\nTimestamp: 2019-10-14 12:32:35Z","error_codes":[65001],"timestamp":"2019-10-14 12:32:35Z","trace_id":"","correlation_id":"","suberror":"consent_required"}
I really don't understand why it doesn't work.
When I use kubectl, everything works fine.
I read some docs but I'm not sure I understand the ADAL error.
Thanks for your help
Log in as a tenant admin to https://portal.azure.com
Open the registration for your app under Azure Active Directory > App registrations
Go to Settings, then Required Permissions
Press the Grant Permissions button
If you are not a tenant admin, you cannot grant admin consent
From https://github.com/Azure-Samples/active-directory-angularjs-singlepageapp-dotnet-webapi/issues/19
This is a good post where you can find a snippet to authenticate to AKS:
from azure.identity import AzureCliCredential
from azure.mgmt.resource import ResourceManagementClient
from azure.mgmt.containerservice import ContainerServiceClient

credential = AzureCliCredential()
subscription_id = "XXXXX"
resource_group = 'MY-RG'

resource_client = ResourceManagementClient(credential, subscription_id)
container_client = ContainerServiceClient(credential, subscription_id)

resource_list = resource_client.resources.list_by_resource_group(resource_group)
Note: You need to install the respective Azure Python SDK libraries.
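To go from listing resources to actually calling the Kubernetes API, one option is to pull the cluster's kubeconfig through the management client and hand it to the kubernetes package. A rough sketch, assuming a cluster named 'MY-AKS' in resource group 'MY-RG' (both placeholders) and a kubernetes client recent enough to have load_kube_config_from_dict:

import yaml
from azure.identity import AzureCliCredential
from azure.mgmt.containerservice import ContainerServiceClient
from kubernetes import client, config

credential = AzureCliCredential()
container_client = ContainerServiceClient(credential, "XXXXX")  # subscription id

# Fetch the user kubeconfig for the cluster (names are placeholders).
creds = container_client.managed_clusters.list_cluster_user_credentials(
    "MY-RG", "MY-AKS")
kubeconfig = yaml.safe_load(creds.kubeconfigs[0].value.decode("utf-8"))

# Load the fetched kubeconfig instead of the local ~/.kube/config.
config.load_kube_config_from_dict(kubeconfig)
v1 = client.CoreV1Api()
for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    print(pod.metadata.namespace, pod.metadata.name)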
I want to perform some very easy tasks on BigQuery via a Python script. I found this package, which does not work well. Indeed, when I try this code:
from bigquery import get_client
project_id = 'txxxxxxxxxxxxxxxxxx9'
# Service account email address as listed in the Google Developers Console.
service_account = '7xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.apps.googleusercontent.com'
# PKCS12 or PEM key provided by Google.
key = '/home/fxxxxxxxxxxxx/Dropbox/access_keys/google_storage/xxxxxxxxxxxxxxxxxxxxx.pem'
client = get_client(project_id, service_account=service_account, private_key_file=key, readonly=True)
# Fetch the table schema.
results = client.get_table_schema('newdataset', 'newtable2')
print(results)
I get this error:
/home/xxxxxx/anaconda3/envs/snakes/bin/python2.7 /home/xxxxxx/Dropbox/Prog/bigQuery_daily_import/src/main.py
Traceback (most recent call last):
File "/home/xxxxxx/Dropbox/Prog/bigQuery_daily_import/src/main.py", line 9, in <module>
client = get_client(project_id, service_account=service_account, private_key_file=key, readonly=True)
File "/home/xxxxxx/anaconda3/envs/snakes/lib/python2.7/site-packages/bigquery/client.py", line 83, in get_client
readonly=readonly)
File "/home/xxxxxx/anaconda3/envs/snakes/lib/python2.7/site-packages/bigquery/client.py", line 101, in _get_bq_service
service = build('bigquery', 'v2', http=http)
File "/home/xxxxxx/anaconda3/envs/snakes/lib/python2.7/site-packages/oauth2client/util.py", line 142, in positional_wrapper
return wrapped(*args, **kwargs)
File "/home/xxxxxx/anaconda3/envs/snakes/lib/python2.7/site-packages/googleapiclient/discovery.py", line 196, in build
cache)
File "/home/xxxxxx/anaconda3/envs/snakes/lib/python2.7/site-packages/googleapiclient/discovery.py", line 242, in _retrieve_discovery_doc
resp, content = http.request(actual_url)
File "/home/xxxxxx/anaconda3/envs/snakes/lib/python2.7/site-packages/oauth2client/client.py", line 565, in new_request
self._refresh(request_orig)
File "/home/xxxxxx/anaconda3/envs/snakes/lib/python2.7/site-packages/oauth2client/client.py", line 835, in _refresh
self._do_refresh_request(http_request)
File "/home/xxxxxx/anaconda3/envs/snakes/lib/python2.7/site-packages/oauth2client/client.py", line 862, in _do_refresh_request
body = self._generate_refresh_request_body()
File "/home/xxxxxx/anaconda3/envs/snakes/lib/python2.7/site-packages/oauth2client/client.py", line 1541, in _generate_refresh_request_body
assertion = self._generate_assertion()
File "/home/xxxxxx/anaconda3/envs/snakes/lib/python2.7/site-packages/oauth2client/client.py", line 1670, in _generate_assertion
private_key, self.private_key_password), payload)
File "/home/xxxxxx/anaconda3/envs/snakes/lib/python2.7/site-packages/oauth2client/_pycrypto_crypt.py", line 121, in from_string
pkey = RSA.importKey(parsed_pem_key)
File "/home/xxxxxx/anaconda3/envs/snakes/lib/python2.7/site-packages/Crypto/PublicKey/RSA.py", line 665, in importKey
return self._importKeyDER(der)
File "/home/xxxxxx/anaconda3/envs/snakes/lib/python2.7/site-packages/Crypto/PublicKey/RSA.py", line 588, in _importKeyDER
raise ValueError("RSA key format is not supported")
ValueError: RSA key format is not supported
Process finished with exit code 1
My question: is there a Python tutorial that shows how to communicate easily with BigQuery: importing a dataset from Google Storage or S3, querying something, and exporting the result to Google Storage?
A lot depends on your environment, and once you've figured that out, everything should be super simple. The only problem I see in the error log you pasted is authentication.
Python pandas has had support for BigQuery for a while:
http://pandas.pydata.org/pandas-docs/stable/generated/pandas.io.gbq.read_gbq.html
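For a quick taste, here's a minimal sketch using pandas; it assumes a reasonably recent pandas with the pandas-gbq dependency installed, and "your-project-id" is a placeholder for your own billing project:

import pandas as pd

# Runs the query in BigQuery and returns the result as a DataFrame.
# On first use, pandas walks you through an OAuth flow in the browser.
df = pd.read_gbq(
    "SELECT word, word_count "
    "FROM `bigquery-public-data.samples.shakespeare` LIMIT 5",
    project_id="your-project-id")  # placeholder
print(df)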
And I did a video with the creators of the module:
https://www.youtube.com/watch?v=gLeTDUMb7HY
Now, the simplest and fastest way these days to launch a Jupyter notebook with all of the Google Cloud goodies you mention is our new Google Datalab project:
https://cloud.google.com/datalab/
The only Datalab caveat is that it runs on cloud servers, but if you want a fully managed Jupyter/IPython environment that is totally secure, persistent, and ready to handle BigQuery, storage, etc., try it out.
Meanwhile, if you are writing a web application, look at how other web applications solve this task.
For example, re:dash code to connect to BigQuery:
https://github.com/EverythingMe/redash/blob/master/redash/query_runner/big_query.py