How do I fix a KeyError with Azure Cosmos DB? - Python

here is my code:
endpoint = os.environ["https://davidjohns.documents.azure.com/"]
DATABASE_NAME = "cosmicworks"
CONTAINER_NAME = "products"
credential = DefaultAzureCredential()
client = CosmosClient(url=endpoint, credential=credential)
And here is the error I am receiving:
endpoint = os.environ["https://davidjohns.documents.azure.com/"]
File "/Users/davidjohns/opt/miniconda3/lib/python3.9/os.py", line 679, in __getitem__
raise KeyError(key) from None
KeyError: 'https://davidjohns.documents.azure.com/'
Thanks for your help!

I tried this in my environment and got the results below.
Initially I ran the same code and got the same error:
raise KeyError(key) from None
KeyError: 'COSMOS_ENDPOINT'
The above error means that COSMOS_ENDPOINT is not set in your environment variables; note that os.environ must be indexed with the variable's name, not its value (the account URL).
To set the environment variable, run the command below in your terminal (this is PowerShell syntax; on macOS/Linux use export COSMOS_ENDPOINT="..." instead):
$env:COSMOS_ENDPOINT = "https://cosmos-account.documents.azure.com:443/"
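Once the variable is set, reading it by name works; here is a minimal sketch (assuming you used the name COSMOS_ENDPOINT above):
import os

# os.environ is keyed by the variable's name, not its value, which is why
# indexing it with the account URL raised KeyError.
endpoint = os.environ.get("COSMOS_ENDPOINT")
if endpoint is None:
    raise RuntimeError("Set the COSMOS_ENDPOINT environment variable before running")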
I used the sample code below to create a container using the environment variable.
Code:
from azure.cosmos import CosmosClient, PartitionKey
import os

ENDPOINT = os.environ["COSMOS_ENDPOINT"]
KEY = os.environ["COSMOS_KEY"]

client = CosmosClient(ENDPOINT, credential=KEY)
database = client.get_database_client("database987")
partition_key = PartitionKey(path="/Name")
container = database.create_container(id="test", partition_key=partition_key)
print("Container created!!!")
The above code ran and created the container successfully.
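If you prefer to keep the DefaultAzureCredential approach from your question instead of an account key, a rough sketch would look like the following (it assumes COSMOS_ENDPOINT is set and that the signed-in identity has been granted a Cosmos DB data-plane role):
import os
from azure.cosmos import CosmosClient
from azure.identity import DefaultAzureCredential

# Read the account URL from the environment and authenticate with Azure AD.
endpoint = os.environ["COSMOS_ENDPOINT"]
credential = DefaultAzureCredential()
client = CosmosClient(url=endpoint, credential=credential)
database = client.get_database_client("cosmicworks")
container = database.get_container_client("products")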
Reference:
Quickstart - Azure Cosmos DB for NoSQL client library for Python | Microsoft Learn

Related

EnrichClient elasticsearch python API

I'm trying to use the Python API for the Elasticsearch client in order to execute an existing enrich policy.
In the API documentation there is an example with the elasticsearch.client.EnrichClient class, but when I try to run a Python script with it I get the following error:
File "/home/ubuntu/.local/lib/python3.6/site-packages/elasticsearch/client/utils.py", line 206, in transport return self.client.transport
AttributeError: 'list' object has no attribute 'transport'
The Elasticsearch call I'm trying to run is: es.execute_policy("overall_scoring_policy")
Is there anything I'm missing with this type of client?
I was having a similar issue and managed to resolve it. The EnrichClient constructor expects an Elasticsearch client object rather than a list of hosts, which is likely why passing a host list raises 'list' object has no attribute 'transport'. Here is a working sample using the EnrichClient where I execute a policy:
#!/usr/bin/python3
from elasticsearch import client
from elasticsearch import Elasticsearch
# Configure variables for your environment
elasticUrl = 'https://cluster.contoso.foo:9200/'
requestTimeout = 60 # Request timeout in seconds
policyName = "Your_EnrichPolicy_Name"
apiId = "redactedId"
apiKey = "redactedKey"
# Create the Python Elasticsearch client
es = Elasticsearch(
    elasticUrl,
    api_key=(apiId, apiKey),
    request_timeout=requestTimeout,
    retry_on_timeout=True,
    max_retries=5
)
# Create the EnrichClient object using our Elasticsearch client object from above
enrichClient = client.EnrichClient(es)
# Execute the request and wait for completion
r = enrichClient.execute_policy(name=policyName, wait_for_completion=True)
# Print the response
print(str(r))
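For what it's worth, recent versions of elasticsearch-py (7.x and later) also expose the enrich APIs as a namespace on the main client, so a separate EnrichClient instance isn't strictly required. A sketch assuming the same connection details as above:
from elasticsearch import Elasticsearch

# Same hypothetical cluster URL and API key as in the sample above.
es = Elasticsearch(
    "https://cluster.contoso.foo:9200/",
    api_key=("redactedId", "redactedKey"),
)

# The enrich namespace wraps the same _enrich endpoints as EnrichClient.
r = es.enrich.execute_policy(name="Your_EnrichPolicy_Name", wait_for_completion=True)
print(r)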

'HTTP headers is not in the correct format' error when creating Azure Container using Python SDK

I'm trying to create an Azure blob container using Python SDK with the below code. I'm getting an 'ErrorCode:InvalidHeaderValue' in the response.
I'm using the 'ConnectionString' from the 'Access Keys' section of the storage account in the Azure portal. I don't think the connection is the issue, since this line works fine: blob_service_client = BlobServiceClient.from_connection_string(connection_string).
I used a clean venv for this and below are the library versions
azure-core==1.10.0
azure-storage-blob==12.7.1
import os
import yaml
from azure.storage.blob import ContainerClient, BlobServiceClient
def load_config():
    dir_root = os.path.dirname(os.path.abspath(__file__))
    with open(dir_root + "/config.yaml", "r") as yamlfile:
        return yaml.load(yamlfile, Loader=yaml.FullLoader)
config = load_config()
connection_string = config['azure_storage_connectionstring']
blob_service_client = BlobServiceClient.from_connection_string(connection_string)
blob_service_client.create_container('testing')
Traceback (most recent call last):
File "/Users/anojshrestha/Documents/codes/gen2lake/project_azure/lib/python3.7/site-packages/azure/storage/blob/_container_client.py", line 292, in create_container
**kwargs)
File "/Users/anojshrestha/Documents/codes/gen2lake/project_azure/lib/python3.7/site-packages/azure/storage/blob/_generated/operations/_container_operations.py", line 134, in create
raise HttpResponseError(response=response, model=error)
azure.core.exceptions.HttpResponseError: Operation returned an invalid status 'The value for one of the HTTP headers is not in the correct format.'
During handling of the above exception, another exception occurred:
.......
azure.core.exceptions.HttpResponseError: The value for one of the HTTP headers is not in the correct format.
RequestId:5X-601e-XXXX00ab-5368-f0c05f000000
Time:2021-01-22T02:43:22.3983063Z
ErrorCode:InvalidHeaderValue
Error:None
HeaderName:x-ms-version
HeaderValue:2020-04-08
You do not need to reinstall. You can get around this issue by setting your api_version variable when instantiating any of the clients.
For example:
blob = BlobServiceClient(
    account_url="https://MY_BLOB_STORAGE.blob.core.windows.net",
    credential="MY_PRIMARY_KEY",
    api_version="2019-12-12",  # or api_version="2020-02-10"
)
https://github.com/Azure/azure-sdk-for-python/issues/16193
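Since the question builds its client from a connection string rather than an account URL, the same workaround should apply there as well; a sketch, assuming from_connection_string forwards the api_version keyword to the client (it accepts the same keyword arguments as the constructor):
from azure.storage.blob import BlobServiceClient

# connection_string loaded from config.yaml as in the question.
blob_service_client = BlobServiceClient.from_connection_string(
    connection_string,
    api_version="2019-12-12",  # pin a service version the account accepts
)
blob_service_client.create_container("testing")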
As mentioned by @kopaczew, reverting azure-storage-blob to version 12.6.0 fixed my issue. It does seem to be some sort of bug in the latest azure-storage-blob library. Unfortunately, the issue is not limited to the create_container call. Answering my own question in case this helps someone else in a similar situation.
How I fixed my issue:
Removed my previous venv (reinstalling into the existing environment caused issues when importing the azure library)
Created a new venv
pip install azure-storage-blob==12.6.0

boto3 lambda script to shutdown RDS not working

I'm just starting out with boto3 and Lambda and was trying to run the below function via PyCharm.
import boto3
client = boto3.client('rds')
response = client.stop_db_instance(
    DBInstanceIdentifier='dummy-mysql-rds'
)
But I receive the below error:
botocore.errorfactory.DBInstanceNotFoundFault: An error occurred (DBInstanceNotFound) when calling the StopDBInstance operation: DBInstance dummy-mysql-rds not found.
Do you know what may be causing this?
For the record, I have the AWS Toolkit installed for PyCharm and can run simple functions to list and describe EC2 instances, and my AWS profile has admin access.
By explicitly defining the profile name, the below function now works via PyCharm. Thank you @OleksiiDonoha for your help in getting this resolved.
import boto3

rds = boto3.setup_default_session(profile_name='dev')
client = boto3.client('rds')
response = client.stop_db_instance(
    DBInstanceIdentifier='dev-mysql-rds'
)
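An equivalent, slightly more explicit approach is to build a dedicated Session instead of mutating the default one (boto3.setup_default_session returns None, so assigning its result to a variable doesn't do anything useful); a minimal sketch using the same 'dev' profile:
import boto3

# Named profile from ~/.aws/credentials; pass region_name as well if the
# instance lives in a different region than the profile's default.
session = boto3.Session(profile_name='dev')
client = session.client('rds')
response = client.stop_db_instance(
    DBInstanceIdentifier='dev-mysql-rds'
)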

Azure function apps : [Errno 30] Read-only file system

I'm developing an API using Azure Function Apps. The API works fine locally (using localhost). However, after publishing to the Function App, I'm getting this error:
[Errno 30] Read-only file system
This error started after I moved the connection logic into a function so that a new connection is established every time the API is called. The data comes from an Azure Blob Storage container.
The code:
DBConnection.py:
import os, uuid
from azure.storage.blob import BlockBlobService, AppendBlobService
from datetime import datetime
import pandas as pd
import dask.dataframe as dd
import logging
def BlobConnection():
    try:
        print("Connecting...")
        # Establish connection
        container_name = 'somecontainer'
        blob_name = 'some_name.csv'
        file_path = 'somepath'
        account_name = 'XXXXXX'
        account_key = 'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX'
        blobService = BlockBlobService(account_name=account_name, account_key=account_key)
        blobService.get_blob_to_path(container_name, blob_name, file_path)
        df = dd.read_csv(file_path, dtype={'Bearing': 'int64', 'Speed': 'int64'})
        df = df.compute()
        return df
    except Exception as ex:
        print('Unable to connect!')
        print('Exception:')
        print(ex)
You are probably running from a package or ZIP deployment.
If so, when your code runs, the following line tries to save the blob to the local file system and can't. If you update it to use get_blob_to_bytes or get_blob_to_stream, you should be fine.
blobService.get_blob_to_path(container_name, blob_name, file_path)
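As a rough sketch (assuming the same account, container and blob names as in the question, and the legacy BlockBlobService SDK it already uses), reading the blob into memory instead of onto disk could look like this:
from io import BytesIO
import pandas as pd
from azure.storage.blob import BlockBlobService

# Download the blob contents into memory; nothing is written to the
# function app's read-only file system.
blob_service = BlockBlobService(account_name=account_name, account_key=account_key)
blob = blob_service.get_blob_to_bytes(container_name, blob_name)
df = pd.read_csv(BytesIO(blob.content), dtype={'Bearing': 'int64', 'Speed': 'int64'})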
From https://stackoverflow.com/questions/53630773/how-to-disable-read-only-mode-in-azure-function-app:
Part 1 - Disabling read-only mode
You'll likely find if you're using the latest tools that your function app is in run-from-package mode, which means it's reading the files directly from the uploaded ZIP and so there's no way to edit it. You can turn that off by deleting the WEBSITE_RUN_FROM_ZIP or WEBSITE_RUN_FROM_PACKAGE application setting in the portal. Note this will clear your function app until the next time you publish.
If your tools are a little older, or if you've deployed using the latest tools but with func azure functionapp publish my-app-name --nozip then you can use the App Service Editor in Platform Features in the portal to edit the function.json files and remove the "generatedBy" setting, which will stop them being read-only.

get secrets from enterprise vault using python

I am trying to get secrets (user ID/password) from an enterprise Vault. When I read the user ID and password manually, I log in to Vault via Okta, select a namespace, and then get the secrets by navigating to the proper path.
I want to do this programmatically but I am not sure where to start. I found the "hvac" package, which is useful for Vault login.
Can anyone here show how to log in to Vault and then fetch secrets from it? Consider that the application will be running on an AWS EC2 machine. The application has access to the AWS STS service and AWS Cognito.
I am using the below code and running it from an EC2 instance:
import logging
import requests
from requests.exceptions import RequestException
import hvac
logger = logging.getLogger(__name__)
EC2_METADATA_URL_BASE = 'http://169.254.169.254'
def load_aws_ec2_role_iam_credentials(role_name, metadata_url_base=EC2_METADATA_URL_BASE):
    metadata_pkcs7_url = '{base}/latest/meta-data/iam/security-credentials/{role}'.format(
        base=metadata_url_base,
        role=role_name,
    )
    logger.debug("load_aws_ec2_role_iam_credentials connecting to %s" % metadata_pkcs7_url)
    response = requests.get(url=metadata_pkcs7_url)
    response.raise_for_status()
    security_credentials = response.json()
    return security_credentials

credentials = load_aws_ec2_role_iam_credentials('my_ec2_role')
a = credentials['AccessKeyId']
b = credentials['SecretAccessKey']
c = credentials['Token']

client = hvac.Client(
    url='http://vault.mycompany.net/ui/vault/secrets?namespace=namespace1',
    token=c
)
print(client.is_authenticated())

list_response = client.secrets.kv.v2.list_secrets(
    path='path'
)
print(list_response['data'])
I get the response "True" and then this error:
Traceback (most recent call last):
File "3.py", line 44, in <module>
print(list_response['data'])
TypeError: 'Response' object is not subscriptable
Can anyone tell me what I am doing wrong? And what should the URL be if my enterprise Vault has a namespace called "namespace1"?
