How can I dynamically observe/change the limits of an Auto Scaling group? - python

I want to modify the minimum/maximum/desired number of instances of an Auto Scaling group, and also check whether any instance from this group is currently running, all dynamically using the AWS SDK for Python. How can I do it?
I'm unable to find this in the documentation.

I will help you by pointing out where you can find information about using Auto Scaling with the AWS SDK for Python. Refer to the AWS SDK Code Examples Code Library.
That documentation should be your reference point when you want to learn how to perform tasks using a given AWS SDK.
See:
https://docs.aws.amazon.com/code-library/latest/ug/auto-scaling_example_auto-scaling_Scenario_GroupsAndInstances_section.html

First, verify that your system clock is in sync with AWS (a large clock skew makes signed API requests fail):
sudo ntpdate pool.ntp.org
Read configuration:
import boto3

client = boto3.client('autoscaling')
response = client.describe_auto_scaling_groups(
    AutoScalingGroupNames=[
        'autoscaling_group_name',
    ]
)
group = response['AutoScalingGroups'][0]
print(group['MinSize'], group['MaxSize'], group['DesiredCapacity'], group['Instances'])
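To answer the "is any instance on" part of the question: you can filter the Instances list of the same response on LifecycleState. A minimal sketch (the helper name and sample data are my own; the response shape follows the describe_auto_scaling_groups documentation):

```python
def in_service_instance_ids(describe_response):
    """Return the IDs of instances that are currently InService."""
    group = describe_response['AutoScalingGroups'][0]
    return [
        inst['InstanceId']
        for inst in group['Instances']
        if inst['LifecycleState'] == 'InService'
    ]

# Example against a response shaped like the one printed above:
sample = {
    'AutoScalingGroups': [{
        'MinSize': 1, 'MaxSize': 3, 'DesiredCapacity': 2,
        'Instances': [
            {'InstanceId': 'i-0aaa', 'LifecycleState': 'InService'},
            {'InstanceId': 'i-0bbb', 'LifecycleState': 'Terminating'},
        ],
    }]
}
print(in_service_instance_ids(sample))  # ['i-0aaa']
```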
Set desired/min/max:
response = client.update_auto_scaling_group(
    AutoScalingGroupName='autoscaling_group_name',
    MinSize=123,
    MaxSize=123,
    DesiredCapacity=123,
)
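The API rejects inconsistent sizes, so it can be worth validating that MinSize <= DesiredCapacity <= MaxSize locally before calling update_auto_scaling_group. A small sketch (the helper name is my own):

```python
def validate_capacity(min_size, max_size, desired):
    """Raise ValueError unless 0 <= min_size <= desired <= max_size."""
    if not (0 <= min_size <= desired <= max_size):
        raise ValueError(
            f"inconsistent sizes: min={min_size}, desired={desired}, max={max_size}"
        )

validate_capacity(1, 5, 3)    # fine, no exception
# validate_capacity(5, 1, 3)  # would raise ValueError
```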

Related

Azure via Python API - Set Storage Account Property - Allow blob public access To Disabled

I'm trying to use Python 3 to set the "Allow Blob public access" property in Azure.
I didn't find any information on the net on how to do this via Python.
I did find a solution via PowerShell: https://learn.microsoft.com/en-us/azure/storage/blobs/anonymous-read-access-configure?tabs=powershell
I'm looking for a solution for Python 3...
Thanks!
The "Allow Blob public access" feature was newly added in the latest Python SDK, azure-mgmt-storage 16.0.0.
When using this feature, you need to add this line to your code:
from azure.mgmt.storage.v2019_06_01.models import StorageAccountUpdateParameters
Here is an example; it works on my side:
from azure.identity import ClientSecretCredential
from azure.mgmt.storage import StorageManagementClient
from azure.mgmt.storage.v2019_06_01.models import StorageAccountUpdateParameters

subscription_id = "xxxxxxxx"
creds = ClientSecretCredential(
    tenant_id="xxxxxxxx",
    client_id="xxxxxxxx",
    client_secret="xxxxxxx"
)
resource_group_name = "xxxxx"
storage_account_name = "xxxx"

storage_client = StorageManagementClient(creds, subscription_id)

# Set the allow_blob_public_access setting here
p1 = StorageAccountUpdateParameters(allow_blob_public_access=False)
# Then use the update method to apply it
storage_client.storage_accounts.update(resource_group_name, storage_account_name, p1)
I haven't tried this myself, but looking at the Python Storage Management SDK and the REST API this should be possible.
Look here for an example on how to create a new storage account using the Python SDK. As you can see, the request body seems to be pretty much exactly what gets passed on to the underlying REST API.
That API does support the optional parameter properties.allowBlobPublicAccess so you should be able to add that directly in python as well.

Which API should I use to access the V2beta2PodsMetricStatus model of kubernetes-client in Python?

I am developing an application in Python where I need to consume the current values of some metrics (e.g. CPU, memory) for some pods. I can get this info through an API (/apis/metrics.k8s.io/v1beta1/pods), but I am trying to use the Kubernetes client to access these metrics from within my Python app.
I see that the V2beta2PodsMetricStatus model includes the information I need, but I cannot find the API endpoint I should use to reach this model.
Any help or alternative option would be very welcome, since I have been stuck on this issue for several days.
Thank you very much for your time.
I finally got the metrics by executing the relevant kubectl command directly. I do not like this solution, but it works. I hope to be able to use kubernetes-client instead in the near future.
import json
import subprocess

p = subprocess.getoutput('kubectl get --raw /apis/metrics.k8s.io/v1beta1/namespaces/default/pods')
ret_metrics = json.loads(p)
items = ret_metrics['items']
for item in items:
    print(item['metadata']['name'])
You could use the call_api method of the ApiClient (kubernetes.client.ApiClient(), after loading your kube config) as below to call an API that is not part of the core Kubernetes API:
ret_metrics = api_client.call_api('/apis/metrics.k8s.io/v1beta1/pods', 'GET', auth_settings=['BearerToken'], response_type='json', _preload_content=False)
response = ret_metrics[0].data.decode('utf-8')
There is an open issue to support this via the V2beta2PodsMetricStatus model.
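Whichever way you fetch the raw JSON, parsing the per-container usage values is the same. A sketch, assuming the metrics.k8s.io/v1beta1 payload shape (CPU quantities like "12000000n", memory like "20480Ki"); the helper name and sample payload are my own:

```python
import json

def parse_cpu_nanocores(quantity):
    """Convert a Kubernetes CPU quantity string to nanocores (n/u/m suffixes)."""
    if quantity.endswith('n'):
        return int(quantity[:-1])
    if quantity.endswith('u'):
        return int(quantity[:-1]) * 1_000
    if quantity.endswith('m'):
        return int(quantity[:-1]) * 1_000_000
    return int(quantity) * 1_000_000_000  # whole cores

# Sample payload shaped like the metrics API response:
raw = json.dumps({
    "items": [{
        "metadata": {"name": "my-pod"},
        "containers": [
            {"name": "app", "usage": {"cpu": "12000000n", "memory": "20480Ki"}},
        ],
    }]
})

for item in json.loads(raw)["items"]:
    for container in item["containers"]:
        nanocores = parse_cpu_nanocores(container["usage"]["cpu"])
        print(item["metadata"]["name"], container["name"], nanocores)
# → my-pod app 12000000
```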

How to instantiate an AWS Linux instance using the Python API?

1) Instantiate an AWS Linux micro instance using the AWS Python API (include authentication to AWS)
2) Update the instance with tags: customer=ACME, environment=PROD
3) Assign a security group to the instance
To program in Python on AWS, you should use the boto3 library.
You will need to do the following:
supply credentials to the library (link)
create an EC2 client (link)
use the EC2 client to launch EC2 instances using run_instances (link)
You can specify both tags and security groups in the run_instances call. Additionally, the boto3 documentation provides some Amazon EC2 examples that will help.
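As a sketch of what that run_instances call could look like with both tags and a security group, here are the keyword arguments built as a dict (the AMI ID, key pair name, and security group ID are placeholders, not real values):

```python
# Parameters for boto3's EC2 run_instances call. With boto3 installed and
# credentials configured, you would launch the instance with:
#   boto3.client('ec2').run_instances(**params)
params = {
    'ImageId': 'ami-xxxxxxxx',            # placeholder: an Amazon Linux AMI ID
    'InstanceType': 't2.micro',
    'MinCount': 1,
    'MaxCount': 1,
    'KeyName': 'my-key-pair',             # placeholder
    'SecurityGroupIds': ['sg-xxxxxxxx'],  # placeholder
    'TagSpecifications': [{
        'ResourceType': 'instance',
        'Tags': [
            {'Key': 'customer', 'Value': 'ACME'},
            {'Key': 'environment', 'Value': 'PROD'},
        ],
    }],
}
print(params['TagSpecifications'][0]['Tags'])
```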
Maybe you want to look at this project:
https://github.com/nchammas/flintrock
This is a Hadoop and Apache Spark clustering project, but it can inspire you.
It actually has many of the features you want, like security groups and filtering by tag name. Just look around the code.

How do I list Security Groups of current Instance in AWS EC2?

EDIT: Removed BOTO from the question title as it's not needed.
Is there a way to find the security groups of an EC2 instance using Python and possibly Boto?
I can only find docs about creating or removing security groups, but I want to see which security groups have been added to my current EC2 instance.
To list the security groups of the current instance, you don't need Boto/Boto3. Make use of the instance metadata service:
import os

sgs = os.popen("curl -s http://169.254.169.254/latest/meta-data/security-groups").read()
print(sgs)
You can check it from the instance itself by executing the command below:
curl http://169.254.169.254/latest/meta-data/security-groups
or via the AWS CLI (note that without filters this lists all security groups in the region, not just those attached to the current instance):
aws ec2 describe-security-groups

How to access the EMR master private IP address using pure Python / boto

I've searched on this site and Google but have not been able to get an answer for this.
I have code running from an EC2 instance which creates and manages EMR clusters using boto.
I can use this framework to get the flow_id (or cluster_id, not sure which is the right name for it); it starts with "j-" and has a fixed number of characters identifying the cluster.
Using the framework I can establish an EMR or EC2 connection, but for the life of me I cannot do the following using boto:
aws emr --list-clusters --cluster-id=j-ASDFGHJKL | json '["instances"].[0].["privateipaddress"]
**The above is a little fudged; I cannot remember the json format, what the json command is, or what arguments it wants, but it is CLI nonetheless.
I've pprint.pprint()'ed and inspected the connections with inspect.getmembers(), getting the conn for the specific cluster_id, but I have yet to see this field/var/attribute, with or without method calls.
I've been up and down Amazon and boto; how do they do it, like
here?
In the test:
def test_list_instances(self):  # line 317
    ...
    self.assertEqual(response.instances[0].privateipaddress, '10.0.0.60')
    ...
P.S. I've tried this, but Python complains that the "instances" property is not iterable or index-accessible, and a couple of other things I tried failed too, including inspecting it.
BTW, I can access the publicDNSaddress from here, and many other things, just not the private IP...
Please tell me if I messed up somewhere and where I can find the answer; I'm currently using an ugly fix via subprocess!
If you are asking how to get the master IP of an EMR cluster, the commands below will work:
list_instance_resp = boto3.client('emr', region_name='us-east-1').list_instances(ClusterId='j-XXXXXXX')
print(list_instance_resp['Instances'][-1]['PrivateIpAddress'])
Check your version of boto using
pip show boto
My guess is you're using version 2.24 or earlier, as that version did not parse instance information; see https://github.com/boto/boto/blob/2.24.0/tests/unit/emr/test_connection.py#L117
compared to
https://github.com/boto/boto/blob/2.25.0/tests/unit/emr/test_connection.py#L313
If you upgrade your version of boto to 2.25 or newer, you'll be able to do the following:
from boto.emr.connection import EmrConnection

conn = EmrConnection(<your aws key>, <your aws secret key>)
jobid = 'j-XXXXXXXXXXXXXX'  # your job id
response = conn.list_instances(jobid)
for instance in response.instances:
    print(instance.privateipaddress)
You just need to query the master instances from the master instance group with the help of the EMR cluster ID. If you have more than one master, you can parse the boto3 output and take the IP of the first listed master.
Your boto3 execution environment should have access to describe an EMR cluster and its instance groups. Here is the relevant snippet:
emr_list_instance_rep = boto_client_emr.list_instances(
    ClusterId=cluster_id,
    InstanceGroupTypes=[
        'MASTER',
    ],
    InstanceStates=[
        'RUNNING',
    ]
)
return emr_list_instance_rep["Instances"][0]["PrivateIpAddress"]
You can find the full boto3 script and its explanation here https://scriptcrunch.com/script-retrieve-aws-emr-master-ip/
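The "parse the boto3 output" step mentioned above can be sketched against a sample response (the helper name and sample data are my own; the response shape follows boto3's list_instances documentation):

```python
def master_private_ip(list_instances_response):
    """Return the private IP of the first master instance in the response."""
    instances = list_instances_response['Instances']
    if not instances:
        raise ValueError('no master instances in response')
    return instances[0]['PrivateIpAddress']

# Sample response as returned when filtering on the MASTER instance group:
sample = {
    'Instances': [
        {'Id': 'ci-1', 'PrivateIpAddress': '10.0.0.60'},
        {'Id': 'ci-2', 'PrivateIpAddress': '10.0.0.61'},
    ]
}
print(master_private_ip(sample))  # 10.0.0.60
```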
