I need to create a new user in Azure DevOps using the Python client library for the Azure DevOps REST API.
I wrote the following code:
from azure.devops.connection import Connection
from azure.devops.v5_0.graph.models import GraphUserCreationContext
from msrest.authentication import BasicAuthentication
credentials = BasicAuthentication('', personal_access_token)
connection = Connection(base_url=organization_url, creds=credentials)
graph_client = connection.clients_v5_0.get_graph_client()
addAADUserContext = GraphUserCreationContext("anaya.john@mydomain.com")
print(addAADUserContext)
resp = graph_client.create_user(addAADUserContext)
print(resp)
I get the output:
{'additional_properties': {}, 'storage_key': 'anaya.john@dynactionize.onmicrosoft.com'}
And an error occurs while calling the create_user method:
azure.devops.exceptions.AzureDevOpsServiceError: VS860015: Must have exactly one of originId or principalName set.
Actually, what I should pass to the create_user function is a GraphUserPrincipalNameCreationContext.
I found a .NET sample that does this in a function named AddRemoveAADUserByUPN():
https://github.com/microsoft/azure-devops-dotnet-samples/blob/master/ClientLibrary/Samples/Graph/UsersSample.cs
GraphUserPrincipalNameCreationContext is an interface in this sample, but Python doesn't support interfaces.
So how can I implement this in Python?
Some classes, like GraphUserPrincipalNameCreationContext, aren't currently available in the Python client API; the team is working on it. You can track the issue here in the GitHub repo:
https://github.com/microsoft/azure-devops-python-api/issues/176
You can use the User Entitlements - Add REST API for Azure DevOps instead of its Graph API. You can use the following Python client for this purpose:
https://github.com/microsoft/azure-devops-python-api/tree/dev/azure-devops/azure/devops/v5_0/member_entitlement_management
You can refer to the sample given in the following question to see how to use the mentioned Python client:
Unable to deserialize to object: type, KeyError: ' key: int; value: str '
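For illustration, here is a minimal sketch of adding a user through that client. This is a sketch, assuming the v5_0 member entitlement models; the license type ("express" corresponds to Basic) and the principal name are placeholders to adjust for your organization:
from azure.devops.connection import Connection
from azure.devops.v5_0.member_entitlement_management.models import (
    AccessLevel, GraphUser, UserEntitlement)
from msrest.authentication import BasicAuthentication

credentials = BasicAuthentication('', personal_access_token)
connection = Connection(base_url=organization_url, creds=credentials)

# Use the member entitlement management client instead of the graph client
entitlement_client = connection.clients_v5_0.get_member_entitlement_management_client()

# The user is identified by principal name (UPN), so no originId is needed
entitlement = UserEntitlement(
    user=GraphUser(principal_name="anaya.john@mydomain.com", subject_kind="user"),
    access_level=AccessLevel(account_license_type="express"))

resp = entitlement_client.add_user_entitlement(entitlement)
print(resp)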
I'm trying to set the environment variable from a dict but am getting an error when connecting.
import os
from airflow.models import Variable

# The Airflow variable contains the JSON dict with the service account credentials
service_account = Variable.get('google_cloud_credentials')
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = str(service_account)
error
PermissionDeniedError: Error executing an HTTP request: HTTP response code 403 with body '<?xml version='1.0' encoding='UTF-8'?><Error><Code>AccessDenied</Code><Message>Access denied.</Message><Details>Anonymous caller does not have storage.objects.get access to the Google Cloud Storage object.</Details></Error>'
When reading, if I instead point to a file, there are no issues:
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "/file/path/service_account.json"
I'm wondering: is there a way to convert the dict object to an os-path-like object? I don't want to store the JSON file on the container, and the Airflow/Google documentation isn't clear at all.
Python's io.StringIO class lets you create a file-like object backed by a string, but that won't help here because the consumer of this environment variable is expecting a file path, not a file-like object. I don't think it's possible to do what you're trying to do. Is there a reason you don't want to just put the credentials in a file?
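That said, if the concern is only about baking a key file into the image, one workaround is to write the variable out to a temporary file at runtime and point the environment variable at that path. A minimal sketch, using the question's google_cloud_credentials variable:
import os
import tempfile

from airflow.models import Variable

# Fetch the JSON key from the Airflow variable (as in the question)
service_account = Variable.get('google_cloud_credentials')

# Write it to a temp file so the consumer still receives a file path
with tempfile.NamedTemporaryFile(mode='w', suffix='.json', delete=False) as f:
    f.write(str(service_account))

os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = f.name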
There is a way to do it, but the Google documentation is terrible. So I wrote a GitHub gist to document the recipe that I and a colleague (Imee Cuison) developed to use the key securely. Sample code below:
import json
from google.oauth2.service_account import Credentials
from google.cloud import secretmanager

def access_secret(project_id: str, secret_id: str, version_id: str = "latest") -> str:
    """Return the secret in string format"""
    # Create the Secret Manager client.
    client = secretmanager.SecretManagerServiceClient()
    # Build the resource name of the secret version.
    name = f"projects/{project_id}/secrets/{secret_id}/versions/{version_id}"
    # Access the secret version.
    response = client.access_secret_version(name=name)
    # Return the decoded payload.
    return response.payload.data.decode('UTF-8')

def get_credentials_from_token(token: str) -> Credentials:
    """Given a service-account JSON key string, return a Credentials object"""
    credential_dict = json.loads(token)
    return Credentials.from_service_account_info(credential_dict)

credentials_secret = access_secret("my_project", "my_secret")
creds = get_credentials_from_token(credentials_secret)
# And now you can use the `creds` Credentials object to authenticate to an API
Putting the service account into the repository is not good practice. As a best practice, use authentication propagated from the default Google auth within your application.
For instance, on Google Kubernetes Engine you can use the following Python code:
import google.auth
import google.auth.transport.requests
from google.cloud.container_v1 import ClusterManagerClient

# Pick up the application-default credentials propagated to the workload
credentials, project = google.auth.default(
    scopes=['https://www.googleapis.com/auth/cloud-platform'])
credentials.refresh(google.auth.transport.requests.Request())
cluster_manager = ClusterManagerClient(credentials=credentials)
I wrote an Azure Function that runs Python 3 to simply turn on an Azure VM.
The function app has a system-assigned managed identity that I've given the VM Contributor role. To have the function use the managed identity, I am using the DefaultAzureCredential() class.
The error I am getting is:
Exception: AttributeError: 'DefaultAzureCredential' object has no attribute 'signed_session'
I've done tons of research and can't seem to find the solution.
Here is the code that is related:
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

credentials = DefaultAzureCredential()
compute_client = ComputeManagementClient(credentials, subscription_id)

# Starting the VM
print('\nStarting VM ' + VM_NAME)
vm_start = compute_client.virtual_machines.start(RG_NAME, VM_NAME)
vm_start.wait()
You'll have to forgive me, I'm quite new to Python, but very interested in learning.
EDIT May 2022:
As of May 2022, all SDKs have been re-released with native support for azure-identity. If you still encounter this error with a given SDK on its latest version, please open an issue asking for a re-release of that SDK here: https://github.com/Azure/azure-sdk-for-python/issues
Original response:
This is addressed here: https://learn.microsoft.com/en-us/azure/developer/python/azure-sdk-authenticate?tabs=cmd
Search "Using DefaultAzureCredential with SDK management libraries" on this page and it will take you to the section that covers your problem in more detail.
In a nutshell....
They updated the DefaultAzureCredential class and it no longer has a 'signed_session' attribute. The newest versions of the management libraries have been updated to handle this. As mentioned in another answer, update your azure-cli library to ensure you have the latest. However, not all of the management libraries have been updated yet. For the time being, you can use this wrapper created by a member of the Azure SDK engineering team:
https://gist.github.com/lmazuel/cc683d82ea1d7b40208de7c9fc8de59d
# Wrap credentials from azure-identity to be compatible with SDKs that need msrestazure or azure.common.credentials
# Needs msrest >= 0.6.0
# See also https://pypi.org/project/azure-identity/
from msrest.authentication import BasicTokenAuthentication
from azure.core.pipeline.policies import BearerTokenCredentialPolicy
from azure.core.pipeline import PipelineRequest, PipelineContext
from azure.core.pipeline.transport import HttpRequest
from azure.identity import DefaultAzureCredential

class CredentialWrapper(BasicTokenAuthentication):
    def __init__(self, credential=None, resource_id="https://management.azure.com/.default", **kwargs):
        """Wrap any azure-identity credential to work with SDKs that need azure.common.credentials/msrestazure.

        Default resource is ARM (syntax of endpoint v2).
        :param credential: Any azure-identity credential (DefaultAzureCredential by default)
        :param str resource_id: The scope to use to get the token (default ARM)
        """
        super(CredentialWrapper, self).__init__(None)
        if credential is None:
            credential = DefaultAzureCredential()
        self._policy = BearerTokenCredentialPolicy(credential, resource_id, **kwargs)

    def _make_request(self):
        return PipelineRequest(
            HttpRequest("CredentialWrapper", "https://fakeurl"),
            PipelineContext(None),
        )

    def set_token(self):
        """Ask the azure-core BearerTokenCredentialPolicy policy to get a token.

        Using the policy gives us for free the caching system of azure-core.
        We could make this code simpler by using a private method, but by definition
        I can't assure they will be there forever, so we mock a fake call to the policy
        to extract the token, using 100% public API."""
        request = self._make_request()
        self._policy.on_request(request)
        # Read the Authorization header and take the part after "Bearer "
        token = request.http_request.headers["Authorization"].split(" ", 1)[1]
        self.token = {"access_token": token}

    def signed_session(self, session=None):
        self.set_token()
        return super(CredentialWrapper, self).signed_session(session)

if __name__ == "__main__":
    import os
    credentials = CredentialWrapper()
    subscription_id = os.environ.get("AZURE_SUBSCRIPTION_ID", "<subscription_id>")

    from azure.mgmt.resource import ResourceManagementClient
    client = ResourceManagementClient(credentials, subscription_id)
    for rg in client.resource_groups.list():
        print(rg.name)
Your code would then look like:
from azure.mgmt.compute import ComputeManagementClient
from cred_wrapper import CredentialWrapper

credentials = CredentialWrapper()
compute_client = ComputeManagementClient(credentials, subscription_id)

# Starting the VM
print('\nStarting VM ' + VM_NAME)
vm_start = compute_client.virtual_machines.start(RG_NAME, VM_NAME)
vm_start.wait()
You should be cooking from there!
Looks like it's fixed if you use the preview version of azure-mgmt-compute (17.0.0b1).
Another gotcha with the version bump: they renamed the start function from start to begin_start.
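A minimal sketch of the renamed call (the operation now returns a poller):
# azure-mgmt-compute >= 17.0.0b1 renames start() to begin_start()
vm_poller = compute_client.virtual_machines.begin_start(RG_NAME, VM_NAME)
vm_poller.wait()  # block until the VM has started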
Hope this helps somebody!
I ran into this issue and cannot upgrade the Azure management libraries in question. However, the wrapper does not work as-is; it leads to another error:
self._token = self._credential.get_token(*self._scopes)
AttributeError: 'CredentialWrapper' object has no attribute 'get_token'
To get around this I had to pass through the get_token call in the CredentialWrapper class:
class CredentialWrapper(BasicTokenAuthentication):
    def __init__(...):
        ...
        # Set credential to instance variable
        self.credential = credential
        ...

    def get_token(self, *scopes, **kwargs):
        # Pass get_token call to credential
        return self.credential.get_token(*scopes, **kwargs)
For reference the library versions I'm using are:
azure-common==1.1.25
azure-core==1.9.0
azure-identity==1.5.0
azure-mgmt-compute==17.0.0
azure-mgmt-core==1.2.2
azure-mgmt-network==16.0.0
azure-mgmt-resource==10.2.0
msrestazure==0.6.4
I faced a similar (signed_session) issue while working with Azure NSGs and fixed it. There are two likely causes.
1. A mismatch between Azure library versions.
For me, the combination of the two libraries below works:
azure-identity==1.6.1 and azure-mgmt-network==19.0.0
2. You may be importing the wrong library.
I was working with NSGs: I installed the library called "azure-mgmt", imported the "NetworkManagementClient" class, and hit the "signed_session" issue. I then uninstalled the "azure-mgmt" library, installed "azure-mgmt-network==19.0.0" instead, and now it works fine.
To use "from azure.mgmt.network import NetworkManagementClient", install the "azure-mgmt-network==19.0.0" library, not "azure-mgmt".
Check both of the above; one of them may help you.
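For reference, a minimal sketch with that working combination (subscription_id is a placeholder):
# Assumes azure-identity==1.6.1 and azure-mgmt-network==19.0.0 are installed
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

credential = DefaultAzureCredential()
network_client = NetworkManagementClient(credential, subscription_id)

# List all NSGs in the subscription to verify the credential works
for nsg in network_client.network_security_groups.list_all():
    print(nsg.name)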
I'm a beginner with Amazon Web Services.
I have the below Lambda Python function:
import sys
import logging
import pymysql
import json

rds_host = ".amazonaws.com"
name = "name"
password = "123"
db_name = "db"
port = 3306

def save_events(event):
    result = []
    conn = pymysql.connect(host=rds_host, user=name, passwd=password, db=db_name,
                           connect_timeout=30)
    with conn.cursor(pymysql.cursors.DictCursor) as cur:
        cur.execute("select * from bodyPart")
        result = cur.fetchall()
    print("Data from RDS...")
    print(result)
    bodyparts = json.dumps(result)
    bodyParts = bodyparts.replace("\"", "'")
    return bodyParts

def lambda_handler(event, context):
    # Return the query result so API Gateway can send it back as the response
    return save_events(event)
Using the above function, I'm sending JSON to the client through API Gateway. Now suppose the user selects an item from the list and sends it back as JSON: where will I receive the HTTP request, and how should I process it?
I'd just like to add some information to @Harsh Manvar's answer.
The easiest way, I think, is to use
api-gateway-proxy-integration-lambda
Currently API Gateway supports AWS Lambda very well; you can pass the request body (JSON) to your Lambda function via event['body'].
I use it every day in my hobby project (a Slack command bot; it is harder there because you need to map from application/x-www-form-urlencoded to JSON through a mapping template).
For you I think it is simple, because you are using only JSON for the request and response. The key is to select the integration type Lambda Function.
You can take some quick tutorials on Medium.com for more detail; I only link the docs from Amazon.
@mohith: Hi, I just put together a simple approach for you; you can see it here.
First you need to create an API (see the docs above), then link it to your Lambda function. Because you only use JSON, you need to check the box named Use Lambda Proxy integration, like this:
Then you need to deploy it!
Then in your function you can handle your code; in my case, I return the entire event that is passed to my function, like this:
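For example, a minimal sketch of such an echo handler (the statusCode/headers/body shape is what proxy integration expects; the request body arrives as a raw JSON string in event['body']):
import json

def lambda_handler(event, context):
    # With Lambda proxy integration the request body is a raw JSON string
    body = json.loads(event.get('body') or '{}')
    # Echo everything back so you can inspect what API Gateway passed in
    return {
        'statusCode': 200,
        'headers': {'Content-Type': 'application/json'},
        'body': json.dumps({'received': body, 'path': event.get('path')}),
    }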
Finally you can POST to your endpoint; I used Postman in my case:
I hope you get my idea. Once you have successfully deployed your API, you can do anything with it from your front end.
I also suggest you research CloudWatch. When you work with API Gateway, Lambda, etc., it is a Swiss army knife; you cannot live without it, and it makes tracing and debugging your code very easy.
Please do not hesitate to ask me anything.
You can use the AWS service called API Gateway; it will give you an endpoint for HTTP API requests.
The API Gateway connects to your Lambda, and you can pass HTTP requests through to the Lambda.
Here is some info about creating a REST API on Lambda; you can check it out: https://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-create-api.html
AWS also provides examples for Lambda GET and POST requests; you just have to edit the code and it will automatically create the API Gateway. You can use them as a reference.
From the Lambda console > Create function > choose AWS Serverless Application Repository > in the search bar type "get" and search > api-lambda-dynamodb > it will take a value from the user and process it in Lambda.
Here is the link where you can check the examples directly: https://console.aws.amazon.com/lambda/home?region=us-east-1#/create?tab=serverlessApps
I have created an API Gateway API from my existing API using the Boto3 import command.
apiClient = boto3.client('apigateway', awsregion)
api_response = apiClient.import_rest_api(
    failOnWarnings=True,
    body=open('apifileswagger.json', 'rb').read()
)
But I can't modify the integration request. I tried the following Boto3 command:
apiClient = boto3.client('apigateway', awsregion)
api_response = apiClient.put_integration(
    restApiId=apiName,
    resourceId='/api/v1/hub',
    httpMethod='GET',
    integrationHttpMethod='GET',
    type='AWS',
    uri='arn:aws:lambda:us-east-1:141697213513:function:test-lambda',
)
But I got an error like this:
Unexpected error: An error occurred () when calling the PutIntegration operation:
I need to change the Lambda function region and name using a Boto3 command. Is that possible?
If it is possible, what is the actual issue with this command?
In the put_integration() call listed above, your restApiId and resourceId look incorrect. Here's what you should do.
After importing your rest API, check to see if it is available by calling your apiClient's get_rest_apis(). If the API was imported correctly, you should see it listed in the response along with the API's ID (which is generated by AWS). Capture this ID for future operations.
Next, you'll need to look at all of the resources associated with this API by calling your apiClient's get_resources(). Capture the resource ID for the resource you wish to modify.
Using the API ID and resource ID, check to see if an integration config exists by calling your apiClient's get_integration(). If it does exist you can modify the integration request by calling update_integration(); if it does not exist, you need to create a new integration by calling put_integration() and passing the integration request as a parameter.
Here's an example of how that might look in code:
# Import API
api_response1 = apiClient.import_rest_api(failOnWarnings=True, body=open('apifileswagger.json', 'rb').read())
print(api_response1)

# Get API ID
api_response2 = apiClient.get_rest_apis()
for endpoint in api_response2['items']:
    if endpoint['name'] == "YOUR_API_NAME":
        api_ID = endpoint['id']

# Get Resource ID
api_response3 = apiClient.get_resources(restApiId=api_ID)
for resource in api_response3['items']:
    if resource['path'] == "YOUR_PATH":
        resource_ID = resource['id']

# Check for Existing Integrations
api_response4 = apiClient.get_integration(restApiId=api_ID, resourceId=resource_ID, httpMethod='GET')
print(api_response4)

# Create Integration with Request
integration_request = {'application/json': '{\r\n    "body" : $input.json(\'$\'),\r\n}'}
api_response5 = apiClient.put_integration(restApiId=api_ID, resourceId=resource_ID, httpMethod='GET', type='AWS',
                                          integrationHttpMethod='GET', uri="YOUR_LAMBDA_URI", requestTemplates=integration_request)
print(api_response5)
All the methods listed above are explained in the Boto3 Documentation found here.
As with most API Gateway updates to API definitions, in order to update an integration request, you have to do a PATCH and pass a body with a patch document using the expected format. See documentation here
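A hedged sketch of that patch through Boto3's update_integration(), which takes the patch document as patchOperations (the path and value here are hypothetical):
api_response6 = apiClient.update_integration(
    restApiId=api_ID,
    resourceId=resource_ID,
    httpMethod='GET',
    patchOperations=[
        # Standard API Gateway patch-document format: op / path / value
        {'op': 'replace', 'path': '/uri', 'value': 'YOUR_NEW_LAMBDA_URI'},
    ],
)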
I manage a couple of clusters on GKE (presently; this will increase) and up till now have been OK launching things manually as needed. I've started working on my own API that can take in requests to spin up new resources on demand for a specific cluster, but in order to make it scalable I need to do something more dynamic than switching between clusters with each request. I have found a link for a Google API Python client that can supposedly access GKE:
https://developers.google.com/api-client-library/python/apis/container/v1#system-requirements
I've also found several other clients (specifically, the one I was looking at closely was the Node.js client from GoDaddy) that can access Kubernetes:
https://github.com/godaddy/kubernetes-client
The Google API client doesn't appear to be documented for use with GKE/kubectl commands, and the GoDaddy kubernetes-client has to access a single cluster master but can't reach one on GKE (without a kubectl proxy enabled first). So my question is: how does one manage Kubernetes on GKE programmatically, without using the command-line utilities, in either Node.js or Python?
I know this question is a couple of years old, but hopefully this helps someone. Newer GKE APIs are available for Node.js here: https://cloud.google.com/nodejs/docs/reference/container/0.3.x/
See list of container APIs here: https://developers.google.com/apis-explorer/#p/container/v1/
Once connected via the API, you can access cluster details, which includes the connectivity information for connecting to the master with standard API calls.
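The Python equivalent is similar. A minimal sketch with the google-cloud-container client (the project, location, and cluster names are placeholders, and the exact method surface varies between client versions):
from google.cloud.container_v1 import ClusterManagerClient

# Uses application-default credentials; see google.auth.default()
client = ClusterManagerClient()

# Fetch cluster details, including the master endpoint and CA certificate
name = "projects/my-project/locations/us-central1-a/clusters/my-cluster"
cluster = client.get_cluster(name=name)
print(cluster.endpoint)                            # master endpoint IP
print(cluster.master_auth.cluster_ca_certificate)  # base64-encoded CA cert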
I just posted an article on Medium with an example of how to do this.
The first part of the article outlines how to set up the service account, roles, and credentials and load them as environment variables. Once that's done, you can run the following Python:
from kubernetes import client
import base64
from tempfile import NamedTemporaryFile
import os
import yaml
from os import path

def main():
    try:
        host_url = os.environ["HOST_URL"]
        cacert = os.environ["CACERT"]
        token = os.environ["TOKEN"]

        # Set the configuration
        configuration = client.Configuration()
        with NamedTemporaryFile(delete=False) as cert:
            cert.write(base64.b64decode(cacert))
            configuration.ssl_ca_cert = cert.name
        configuration.host = host_url
        configuration.verify_ssl = True
        configuration.debug = False
        configuration.api_key = {"authorization": "Bearer " + token}
        client.Configuration.set_default(configuration)

        # Prepare all the required properties in order to run the create_namespaced_job method
        # https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/BatchV1Api.md#create_namespaced_job
        v1 = client.BatchV1Api()
        with open(path.join(path.dirname(__file__), "job.yaml")) as f:
            body = yaml.safe_load(f)
            v1.create_namespaced_job(namespace="default", body=body, pretty=True)

        return 'Job created successfully', 200
    except Exception as e:
        return str(e), 500

if __name__ == '__main__':
    main()