Loading device tokens into SNS in bulk using Boto3 - Python

I need to upload new device tokens to AWS SNS, and would rather do it in batches than one token at a time.
According to the AWS documentation this is supported by their API, and an example is given for the Java SDK using a "bulkupload package".
The problem is that I wrote everything in Python and I can't find any reference to this feature in the Boto3 documentation.
Do you know of a way to do this in Python (not necessarily using Boto)? Or am I doomed to either uploading tokens one by one or rewriting everything in Java?
Thanks!
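For context, the one-at-a-time approach I'm trying to avoid looks like this (the platform application ARN and tokens are placeholders):

import boto3

sns = boto3.client("sns")
tokens = ["device-token-1", "device-token-2"]  # placeholder tokens

for token in tokens:
    endpoint = sns.create_platform_endpoint(
        PlatformApplicationArn="arn:aws:sns:us-east-1:123456789012:app/GCM/MyApp",
        Token=token,
    )
    print(endpoint["EndpointArn"])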

Related

How to remove a Cloud Armor security policy from a backend service using Python

I'm creating a few GCP Cloud Armor policies across multiple projects using the Python client library and attaching them to several backend services with the .set_security_policy() method.
I know you can do this using the console / gcloud, but I need to automate it in Python.
I've tried the .update() method in google-cloud-compute, but that did not work out:
from google.cloud import compute, compute_v1

client = compute.BackendServicesClient()
backend_service_resource = compute_v1.types.BackendService(security_policy="")
client.update(
    project='project_id',
    backend_service='backend_service',
    backend_service_resource=backend_service_resource,
)
The error I got when running the above code is:
google.api_core.exceptions.BadRequest: 400 PUT https://compute.googleapis.com/compute/v1/projects/<project-id>/global/backendServices/<backend-name>: Invalid value for field 'resource.loadBalancingScheme': 'INVALID_LOAD_BALANCING_SCHEME'. Cannot change load balancing scheme.
When I specify loadBalancingScheme, the same error occurs for another resource value. At run time I won't have all of the backend service's metadata, and some of it might not be set in the first place.
This is for anyone who has similar issues in the future. I was originally going to call the gcloud commands from Python using os.system(), as @giles-roberts recommended, but then I stumbled across a proper way to do this using the client libraries.
You simply use the same .set_security_policy() method that set the security policy in the first place, but this time pass the policy as None. This is not obvious, since the documentation says the security policy name has to be a string, and it does not accept an empty string either.
from google.cloud import compute, compute_v1

client = compute.BackendServicesClient()
resource = compute_v1.types.SecurityPolicyReference(security_policy=None)
operation = client.set_security_policy(  # returns a long-running operation
    project='<project_id>',
    backend_service='<backend_service>',
    security_policy_reference_resource=resource,
)

How to specify the class of an AWS client in Python?

In Java, for instance, there is a class that represents the SageMaker client: AmazonSageMakerClient. I couldn't find the equivalent for Python.
I was hoping to be able to do something like:
from sagemaker import SageMakerClient
client: SageMakerClient = boto3.client("sagemaker")
I looked into the library code and docs, but I couldn't find any reference to such a class containing the defined methods for that client. In fact, I couldn't find any classes for AWS clients like S3, SQS, etc. Are those hidden somewhere, or am I missing something obvious?
In boto3, there are basically two levels of objects available:
A low-level client
Higher-level resource objects, like the ones you are asking about
Take a look at S3, and you will see that in addition to the Client object there are also richer object types like Bucket.
It would seem that SageMaker doesn't (yet) have this second level of abstraction available.
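For example, with S3 both levels exist (a quick sketch; the bucket name is a placeholder):

import boto3

client = boto3.client("s3")            # low-level client, returns plain dicts
resource = boto3.resource("s3")        # higher-level resource objects
bucket = resource.Bucket("my-bucket")  # a rich object with its own methods
print(bucket.name)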
To be more productive, and to work with Python classes rather than JSON, use the SageMaker Python SDK whenever possible rather than the Boto3 clients.
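A minimal sketch of the SDK (pip install sagemaker); the Session wraps the Boto3 clients, and the SDK's estimators and models are proper Python objects:

import sagemaker

session = sagemaker.Session()
print(session.default_bucket())  # the SDK-managed default S3 bucket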
With Boto3 you have several SageMaker clients (as @anon correctly said):
SageMaker - Most of SageMaker features
SageMakerRuntime - Invoking endpoints
SageMaker* - Other misc SageMaker features like feature store, edge manager, ...
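A minimal sketch of how those separate clients are created; "sagemaker" and "sagemaker-runtime" are the actual boto3 service names, and the feature store client is shown as one example of the misc clients:

import boto3

sagemaker = boto3.client("sagemaker")                        # control plane: jobs, models, endpoints
runtime = boto3.client("sagemaker-runtime")                  # invoking deployed endpoints
features = boto3.client("sagemaker-featurestore-runtime")    # feature store reads/writes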
The boto3-stubs library can help with this.
Install using the instructions for your IDE on the package page, and then install the specific type annotations for SageMaker:
pip install 'boto3-stubs[sagemaker]'
You should be able to see type hints for the client object (type: SageMakerClient).
import boto3
client = boto3.client('sagemaker')
If you need to add hints yourself:
from mypy_boto3_sagemaker import SageMakerClient
def my_func(client: SageMakerClient):
    client.create_algorithm(...)

AWS MediaConvert Docs are obnoxious and unclear

I have been messing with AWS MediaConvert using boto3 for Python, and I find the docs incredibly confusing.
There are so many settings.
https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/mediaconvert.html
and Amazon does an absolutely terrible job of labeling what is necessary for a basic job.
What would be the correct JSON for a simple job:
taking a video-with-audio file and turning it into a CMAF file,
and taking an audio-only file and turning it into a CMAF file?
I am trying to establish the baseline use of this technology, and there is so much extra that I can't tell what I absolutely need from the extra settings for specific use cases.
The solution is to build the job in the MediaConvert UI in the AWS console, use the "Copy JSON" button to save it, and then use that JSON.
Never mind trying to create it yourself. Unless you like pain.
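A minimal sketch of submitting the exported JSON with boto3, assuming the console's "Copy JSON" output is saved as job.json and you already have a MediaConvert IAM role:

import json
import boto3

# MediaConvert uses an account-specific endpoint, so look it up first.
endpoint = boto3.client("mediaconvert").describe_endpoints()["Endpoints"][0]["Url"]
client = boto3.client("mediaconvert", endpoint_url=endpoint)

with open("job.json") as f:
    job = json.load(f)

# The exported JSON carries the whole job; Settings and Role are what create_job needs.
response = client.create_job(Role=job["Role"], Settings=job["Settings"])
print(response["Job"]["Id"])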

Automatically extract AWS keys using Python

I have been given access to AWS, and I currently get credentials manually, i.e. I copy the access_key_id, secret_access_key, and session_token, which expire every hour. I am using these credentials to extract information from Route53. I want to fetch the access_key_id, secret_access_key, and session_token automatically instead of copying them into the script by hand. Is there a way to automate this?
The process of refreshing tokens is documented here:
Auto-refresh AWS Tokens Using IAM Role and boto3
It is poorly covered in the boto documentation, but this could help.
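If the temporary credentials come from an IAM role, a minimal sketch with STS looks like this (the role ARN is a placeholder):

import boto3

sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/route53-reader",
    RoleSessionName="route53-session",
)["Credentials"]

# Build a session from the temporary credentials and use it for Route53.
session = boto3.Session(
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
print(session.client("route53").list_hosted_zones()["HostedZones"])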

Azure Python SDK to get usage details - UsageDetailsOperations Class

I am new to Python. I need to get the usage details using the Python SDK. I am able to do this with the Usage Details API, but not with the SDK.
I am trying to use the azure.mgmt.consumption.operations.UsageDetailsOperations class. The official docs for UsageDetailsOperations
https://learn.microsoft.com/en-us/python/api/azure-mgmt-consumption/azure.mgmt.consumption.operations.usage_details_operations.usagedetailsoperations?view=azure-python#list-by-billing-period
specify four parameters to create the object: client (a client for service requests), config (the service client configuration), serializer (an object model serializer), and deserializer (an object model deserializer).
Of these parameters, I only have the client.
I need help understanding how to get the other three, or whether there is another way to create the UsageDetailsOperations object.
Or is there another approach to getting the usage details?
Thanks!
This class is not designed to be created manually; you need to create a consumption client, which will have a usage_details attribute that is the class in question, instantiated correctly.
There are unfortunately no samples for consumption yet, but creating the client is similar to creating any other client (see Network client creation, for instance).
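A minimal sketch of creating the client and reaching usage details, assuming a recent azure-mgmt-consumption where list() takes a scope; the subscription ID is a placeholder:

from azure.identity import DefaultAzureCredential
from azure.mgmt.consumption import ConsumptionManagementClient

subscription_id = "<subscription-id>"
client = ConsumptionManagementClient(DefaultAzureCredential(), subscription_id)

# usage_details is the UsageDetailsOperations instance, already wired up.
for detail in client.usage_details.list(scope=f"/subscriptions/{subscription_id}"):
    print(detail.name)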
For consumption, what might help is the tests, since they give some idea of scenarios:
https://github.com/Azure/azure-sdk-for-python/blob/fd643a0/sdk/consumption/azure-mgmt-consumption/tests/test_mgmt_consumption.py
If you're new to Azure and Python, you might want to do this quickstart:
https://learn.microsoft.com/en-us/azure/python/python-sdk-azure-get-started
Feel free to open an issue in the main Python repo, asking for more documentation about this client (this will help prioritize it):
https://github.com/Azure/azure-sdk-for-python/issues
(I work at Microsoft on the Python SDK team.)
