This code retrieves the buckets of an Amazon S3-compatible storage service (not AWS itself, but the Zadara-compatible cloud storage) and it works:
import boto3
from botocore.client import Config

session = boto3.session.Session()
s3_client = session.client(
    service_name='s3',
    region_name='IT',
    aws_access_key_id='xyz',
    aws_secret_access_key='abcedf',
    endpoint_url='https://nothing.com:443',
    config=Config(signature_version='s3v4'),
)

print('Buckets')
boto3.set_stream_logger(name='botocore')
print(s3_client.list_buckets())
I am trying to use the same approach to access S3 via C# and the AWS SDK, but I always get the error "The request signature we calculated does not match the signature you provided. Check your key and signing method.".
AmazonS3Config config = new AmazonS3Config();
config.AuthenticationServiceName = "s3";
config.ServiceURL = "https://nothing.com:443";
config.SignatureVersion = "s3v4";
config.AuthenticationRegion = "it";
AmazonS3Client client = new AmazonS3Client(
    "xyz",
    "abcdef",
    config);

ListBucketsResponse r = await client.ListBucketsAsync();
What can I do? Why is it not working? I haven't been able to find a solution.
I also tried to trace debug information:
Python
boto3.set_stream_logger(name='botocore')
C#
AWSConfigs.LoggingConfig.LogResponses = ResponseLoggingOption.Always;
AWSConfigs.LoggingConfig.LogMetrics = true;
AWSConfigs.LoggingConfig.LogTo = Amazon.LoggingOptions.SystemDiagnostics;
AWSConfigs.AddTraceListener("Amazon", new System.Diagnostics.ConsoleTraceListener());
but for C# it does not log the whole request.
Any suggestions?
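Since the Python client works, one way to narrow the mismatch down is to dump the exact request boto3 signs and compare it header by header (Host, x-amz-content-sha256, Authorization, path style) with whatever the C# SDK sends. A minimal sketch using botocore's event hooks, reusing the placeholder endpoint and keys from above:
import boto3
from botocore.client import Config

def dump_request(request, **kwargs):
    # 'before-send' fires after signing, so the Authorization header is final
    print(request.method, request.url)
    for name, value in request.headers.items():
        print(name, ':', value)

session = boto3.session.Session()
s3_client = session.client(
    service_name='s3',
    region_name='IT',
    aws_access_key_id='xyz',
    aws_secret_access_key='abcedf',
    endpoint_url='https://nothing.com:443',
    config=Config(signature_version='s3v4'),
)

# Print every outgoing S3 request (method, URL, signed headers) for comparison
# with the C# trace
s3_client.meta.events.register('before-send.s3', dump_request)
s3_client.list_buckets()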
A FastAPI-based API written in Python has been deployed as an Azure App Service. The API needs to read and write data from Cosmos DB, and I attempted to use a managed identity for this purpose, but encountered an error stating "Unrecognized credential type".
These are the key steps I took towards that goal:
Step One: I used Terraform to configure the managed identity for the Azure App Service and assigned the 'Contributor' role to the identity so that it can read and write data in Cosmos DB. The role assignment is carried out in the file where the Azure App Service is provisioned.
resource "azurerm_linux_web_app" "this" {
name = var.appname
location = var.location
resource_group_name = var.rg_name
service_plan_id = azurerm_service_plan.this.id
app_settings = {
"PROD" = false
"DOCKER_ENABLE_CI" = true
"DOCKER_REGISTRY_SERVER_URL" = data.azurerm_container_registry.this.login_server
"WEBSITE_HTTPLOGGING_RETENTION_DAYS" = "30"
"WEBSITE_ENABLE_APP_SERVICE_STORAGE" = false
}
lifecycle {
ignore_changes = [
app_settings["WEBSITE_HTTPLOGGING_RETENTION_DAYS"]
]
}
https_only = true
identity {
type = "SystemAssigned"
}
data "azurerm_cosmosdb_account" "this" {
name = var.cosmosdb_account_name
resource_group_name = var.cosmosdb_resource_group_name
}
// built-in role that allow the app-service to read and write to an Azure Cosmos DB
resource "azurerm_role_assignment" "cosmosdbContributor" {
scope = data.azurerm_cosmosdb_account.this.id
principal_id = azurerm_linux_web_app.this.identity.0.principal_id
role_definition_name = "Contributor"
}
Step Two: I used the managed identity library to fetch the necessary credentials in the Python code.
from azure.identity import ManagedIdentityCredential
from azure.cosmos.cosmos_client import CosmosClient
client = CosmosClient(get_endpoint(), credential=ManagedIdentityCredential())
client = self._get_or_create_client()
database = client.get_database_client(DB_NAME)
container = database.get_container_client(CONTAINER_NAME)
container.query_items(query)
I received the following error when running the code locally and from Azure (the error can be viewed from the Log stream of the Azure App Service):
raise TypeError(
TypeError: Unrecognized credential type. Please supply the master key as str, or a dictionary or resource tokens, or a list of permissions.
Any help or discussion is welcome.
If you are using the Python SDK, you can do this directly; check the sample here:
from azure.identity import ClientSecretCredential
from azure.cosmos import CosmosClient

aad_credentials = ClientSecretCredential(
    tenant_id="<azure-ad-tenant-id>",
    client_id="<client-application-id>",
    client_secret="<client-application-secret>")

client = CosmosClient("<account-endpoint>", aad_credentials)
I am working on deploying resources in Azure using Python, based on provided templates. As a starting point I am working with https://github.com/Azure-Samples/Hybrid-Resource-Manager-Python-Template-Deployment
Using it as is, I am having an issue at the beginning of the deployment (the deploy function in deployer.py):
def deploy(self, template, parameters):
    """Deploy the template to a resource group."""
    self.client.resource_groups.create_or_update(
        self.resourceGroup,
        {
            'location': os.environ['AZURE_RESOURCE_LOCATION']
        }
    )
The error message is:
Message='ServicePrincipalCredentials' object has no attribute 'get_token'
The statement is correct: ServicePrincipalCredentials has no get_token attribute, although it does have token. Could this be caused by an outdated version?
Based on the constructor code, the error may be in the credentials creation or the client creation (see the markers below):
def __init__(self, subscription_id, resource_group, pub_ssh_key_path='~/id_rsa.pub'):
    mystack_cloud = get_cloud_from_metadata_endpoint(
        os.environ['ARM_ENDPOINT'])
    # This may be an error, as subscription_id is already provided as a parameter
    subscription_id = os.environ['AZURE_SUBSCRIPTION_ID']
    credentials = ServicePrincipalCredentials(
        client_id=os.environ['AZURE_CLIENT_ID'],
        secret=os.environ['AZURE_CLIENT_SECRET'],
        tenant=os.environ['AZURE_TENANT_ID'],
        cloud_environment=mystack_cloud
    )  # <-- here

    self.subscription_id = subscription_id
    self.resource_group = resource_group
    self.dns_label_prefix = self.name_generator.haikunate()

    pub_ssh_key_path = os.path.expanduser(pub_ssh_key_path)
    # Will raise if the file does not exist or permissions are insufficient
    with open(pub_ssh_key_path, 'r') as pub_ssh_file_fd:
        self.pub_ssh_key = pub_ssh_file_fd.read()

    self.credentials = credentials
    self.client = ResourceManagementClient(self.credentials, self.subscription_id,
                                           base_url=mystack_cloud.endpoints.resource_manager)  # <-- here
Do you know how I can fix this?
After struggling a little, I found a solution. Just replace
credentials = ServicePrincipalCredentials(
    client_id=os.environ['AZURE_CLIENT_ID'],
    secret=os.environ['AZURE_CLIENT_SECRET'],
    tenant=os.environ['AZURE_TENANT_ID'],
    cloud_environment=mystack_cloud
)
with
self.credentials = DefaultAzureCredential()
The final code looks like:
from azure.identity import DefaultAzureCredential

def __init__(self, subscriptionId, resourceGroup):
    endpoints = get_cloud_from_metadata_endpoint(os.environ.get("ARM_ENDPOINT"))

    self.subscriptionId = subscriptionId
    self.resourceGroup = resourceGroup
    self.credentials = DefaultAzureCredential()
    self.client = ResourceManagementClient(self.credentials, self.subscriptionId,
                                           base_url=endpoints.endpoints.resource_manager)
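Worth noting: DefaultAzureCredential still authenticates as the same service principal, because its EnvironmentCredential step reads AZURE_CLIENT_ID, AZURE_TENANT_ID and AZURE_CLIENT_SECRET, which the code already sets. If you prefer to keep the credentials explicit, a rough sketch with azure.identity's ClientSecretCredential (same environment variables as in the question; Azure Stack authority configuration is not shown) would be:
import os
from azure.identity import ClientSecretCredential
from azure.mgmt.resource import ResourceManagementClient

# 'endpoints' comes from the same get_cloud_from_metadata_endpoint call as above
credentials = ClientSecretCredential(
    tenant_id=os.environ['AZURE_TENANT_ID'],
    client_id=os.environ['AZURE_CLIENT_ID'],
    client_secret=os.environ['AZURE_CLIENT_SECRET'],
)
client = ResourceManagementClient(
    credentials,
    os.environ['AZURE_SUBSCRIPTION_ID'],
    base_url=endpoints.endpoints.resource_manager,
)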
I'm working with an internal S3 service (not the AWS one). When I provide hard-coded credentials, region and endpoint_url, boto3 seems to ignore them.
I came to that conclusion because it tries to go out to the internet (using a public AWS endpoint URL instead of the internal one I provided) and fails with the proxy error below. It should not go out to the internet at all, since it is an internal S3 service:
botocore.exceptions.ProxyConnectionError: Failed to connect to proxy URL: "http://my_company_proxy"
Here is my code:
import io
import os
import boto3
import pandas as pd
# Method 1 : Client #########################################
s3_client = boto3.client(
    's3',
    region_name='EU-WEST-1',
    aws_access_key_id='xxx',
    aws_secret_access_key='zzz',
    endpoint_url='https://my_company_enpoint_url'
)
# ==> at this point no error, but I don't know the value of endpoint_url
# Read bucket
bucket = "bkt-udt-arch"
file_name = "banking.csv"
print("debug 1") # printed OK
obj = s3_client.get_object(Bucket=bucket, Key=file_name)
# program stops here:
# botocore.exceptions.ProxyConnectionError: Failed to connect to proxy URL: "http://my_company_proxy"
print("debug 2")  # not printed
initial_df = pd.read_csv(obj['Body'])  # 'Body' is a key word
print("debug 3")
# Method 2 : Resource #########################################
# use third party object storage
s3 = boto3.resource(
    's3',
    endpoint_url='https://my_company_enpoint_url',
    aws_access_key_id='xxx',
    aws_secret_access_key='zzz',
    region_name='EU-WEST-1'
)
print("debug 4")  # printed OK if method 1 is commented out

# Print out bucket names
for bucket in s3.buckets.all():
    print(bucket.name)
Thank you for the review
It was indeed a proxy problem: when the http_proxy env variable is disabled, it works fine.
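For reference, a minimal sketch of doing that from inside the script, assuming the proxy is only configured through the standard environment variables; adding the internal host to NO_PROXY instead is a gentler alternative that keeps the proxy for real internet traffic:
import os
import boto3

# Drop the corporate proxy settings for this process only, so boto3 talks to the
# internal endpoint directly (placeholders below are the ones from the question).
for var in ("HTTP_PROXY", "HTTPS_PROXY", "http_proxy", "https_proxy"):
    os.environ.pop(var, None)

s3 = boto3.resource(
    's3',
    endpoint_url='https://my_company_enpoint_url',
    aws_access_key_id='xxx',
    aws_secret_access_key='zzz',
    region_name='EU-WEST-1'
)
for bucket in s3.buckets.all():
    print(bucket.name)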
My question is about code to extract a table from BigQuery and save it as a JSON file.
I made my code mostly by following the gcloud tutorials in their documentation.
I couldn't set my credentials implicitly, so I did it explicitly, pointing to my JSON key file. But it seems that it doesn't quite get the "Client" object via the path I took.
If anyone could clarify how this whole implicit and explicit credential business works, that would help me a lot too!
I am using Python 2.7 and PyCharm. The code is as follows:
from gcloud import bigquery
from google.cloud import storage

def bigquery_get_rows():
    json_key = "path/to/my/json_file.json"
    storage_client = storage.Client.from_service_account_json(json_key)
    print("\nGot the client\n")

    # Make an authenticated API request
    buckets = list(storage_client.list_buckets())
    print(buckets)
    print(storage_client)

    # Setting up the environment
    bucket_name = 'my_bucket/name'
    print(bucket_name)
    destination_uri = 'gs://{}/{}'.format(bucket_name, 'my_table_json_name.json')
    print(destination_uri)

    #dataset_ref = client.dataset('samples', project='my_project_name')
    dataset_ref = storage_client.dataset('my_dataset_name', project='my_project_id')
    print(dataset_ref)
    table_ref = dataset_ref.table('my_table_to_be_extracted_name')
    print(table_ref)

    job_config = bigquery.job.ExtractJobConfig()
    job_config.destination_format = (
        bigquery.DestinationFormat.NEWLINE_DELIMITED_JSON)
    extract_job = client.extract_table(
        table_ref, destination_uri, job_config=job_config)  # API request
    extract_job.result()  # Waits for job to complete.

bigquery_get_rows()
You are using the wrong client object: you are trying to use the GCS client to work with BigQuery.
Instead of
dataset_ref = storage_client.dataset('my_dataset_name', project='my_project_id')
it should be:
bq_client = bigquery.Client.from_service_account_json(
    'path/to/service_account.json')
dataset_ref = bq_client.dataset('my_dataset_name', project='my_project_id')
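For completeness, a minimal sketch of the whole extract done with the BigQuery client only (the key path, project, dataset, table and bucket names are the placeholders from the question, and it assumes a google-cloud-bigquery version contemporary with the question, where client.dataset() is still available):
from google.cloud import bigquery

bq_client = bigquery.Client.from_service_account_json('path/to/my/json_file.json')

# Reference the source table and the GCS destination for the export
dataset_ref = bq_client.dataset('my_dataset_name', project='my_project_id')
table_ref = dataset_ref.table('my_table_to_be_extracted_name')
destination_uri = 'gs://{}/{}'.format('my_bucket_name', 'my_table_json_name.json')

# Ask for newline-delimited JSON and wait for the extract job to finish
job_config = bigquery.job.ExtractJobConfig()
job_config.destination_format = bigquery.DestinationFormat.NEWLINE_DELIMITED_JSON
extract_job = bq_client.extract_table(table_ref, destination_uri, job_config=job_config)
extract_job.result()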
I'm using Python 2.6 and the Google API client library, which I am trying to use to get authenticated access to the email settings:
import httplib2
from oauth2client import client
from apiclient import discovery

f = file(SERVICE_ACCOUNT_PKCS12_FILE_PATH, 'rb')
key = f.read()
f.close()
credentials = client.SignedJwtAssertionCredentials(SERVICE_ACCOUNT_EMAIL, key, scope='https://apps-apis.google.com/a/feeds/emailsettings/2.0/', sub=user_email)
http = httplib2.Http()
http = credentials.authorize(http)
return discovery.build('email-settings', 'v2', http=http)
When I execute this code, I get the following error:
UnknownApiNameOrVersion: name: email-settings version: v2
What are the API name and version for Email Settings v2?
Is it possible to use it with a service account?
Regards
I found the solution for getting email settings using a service account with OAuth2.
Here is an example:
SERVICE_ACCOUNT_EMAIL = ''
SERVICE_ACCOUNT_PKCS12_FILE_PATH = ''
EMAIL_SETTING_URI = "https://apps-apis.google.com/a/feeds/emailsettings/2.0/%s/%s/%s"

def fctEmailSettings():
    user_email = "user@mail.com"
    f = file(SERVICE_ACCOUNT_PKCS12_FILE_PATH, 'rb')
    key = f.read()
    f.close()
    credentials = client.SignedJwtAssertionCredentials(
        SERVICE_ACCOUNT_EMAIL, key,
        scope='https://apps-apis.google.com/a/feeds/emailsettings/2.0/',
        sub=user_email)
    auth2token = OAuth2TokenFromCredentials(credentials)
    ESclient = EmailSettingsClient(domain='domain.com')
    auth2token.authorize(ESclient)
    username = 'username'
    setting = 'forwarding'
    uri = ESclient.MakeEmailSettingsUri(username, setting)
    entry = ESclient.get_entry(uri=uri, desired_class=GS.gdata.apps.emailsettings.data.EmailSettingsEntry)
It appears that the emailsettings API is not available via the Discovery API. The APIs Discovery Service returns details of an API: what methods are available, etc.
See the following issue raised against the PHP client library:
https://github.com/google/google-api-php-client/issues/246
I'm unclear why emailsettings is not available via the Discovery API, or whether there are plans to add it. Really, it feels like a lot of these systems and libraries are unmaintained.
The deprecated gdata client library does have support. Try the following example, which I can confirm works OK.
https://code.google.com/p/gdata-python-client/source/browse/samples/apps/emailsettings_example.py
In case you have multiple entry points in your app that need to access the EmailSettings API, here's a re-usable function that returns a "client" object:
import gdata.gauth
import gdata.apps.emailsettings.data
from gdata.apps.emailsettings.client import EmailSettingsClient
from oauth2client.client import SignedJwtAssertionCredentials

def google_get_emailsettings_credentials():
    '''
    Google's EmailSettings API is not yet service-based, so delegation data
    has to be accessed differently from our other Google functions.
    TODO: Refactor when the API is updated.
    '''
    with open(settings.GOOGLE_PATH_TO_KEYFILE) as f:
        private_key = f.read()

    client = EmailSettingsClient(domain='example.com')
    credentials = SignedJwtAssertionCredentials(
        settings.GOOGLE_CLIENT_EMAIL,
        private_key,
        scope='https://apps-apis.google.com/a/feeds/emailsettings/2.0/',
        sub=settings.GOOGLE_SUB_USER)
    auth2token = gdata.gauth.OAuth2TokenFromCredentials(credentials)
    auth2token.authorize(client)
    return client
It can then be called from elsewhere, e.g. to reach the DelegationFeed:
client = google_get_emailsettings_credentials()
uri = client.MakeEmailSettingsUri(username, 'delegation')
delegates_xml = client.get_entry(
    uri=uri,
    desired_class=gdata.apps.emailsettings.data.EmailSettingsDelegationFeed)