Alternative way for Terraform S3 backend to use something like variables - Python

Variables are not supported in the S3 backend, so I need an alternative way to do this. Can anyone suggest one? I have looked around online: some say Terragrunt, some say Python, some say workspaces/environments. We built a dev environment for clients; from the app they enter the details, e.g. for EC2 they enter the count, AMI, and type. All of that works, but the backend state file is the issue: since it does not support variables, I need to change the bucket name and path every time. Can someone please explain the structure and share sample code to resolve this? Thanks in advance. #23208

You don't need a different backend for every client; you need a different tfstate for each one. You can pass the state key at init time with terraform init -reconfigure -backend-config="key=<client>", where <client> is an identifier you set for your clients.
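As an illustration of the "Python wrapper" idea mentioned in the question, here is a minimal sketch that re-initializes the backend with a per-client state key; the bucket name and key pattern are placeholders, not something from the original post:

import subprocess

def init_backend(client_id: str) -> None:
    # Partial backend configuration: the bucket and key are supplied at init
    # time instead of being hard-coded in the backend block.
    subprocess.run(
        [
            "terraform", "init", "-reconfigure",
            "-backend-config=bucket=my-terraform-states",  # hypothetical bucket
            f"-backend-config=key=clients/{client_id}/terraform.tfstate",
        ],
        check=True,
    )

init_backend("acme-dev")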

Related

How can I be sure that a library like Pandas is not sending my API key secrets somewhere outside my local machine?

Let's say:
I have my Python code in main.py and I am using Pandas.
I am storing my API key (to some Azure service) in a Windows environment variable (variable name = "AZURE_KEY", variable value = "abc123abc").
I import this API key in main.py using azure_key = os.environ.get("AZURE_KEY").
Question:
How can I be sure that the Pandas library hasn't sent azure_key's value somewhere outside my local system?
Possible approach:
I know one way is to go through the entire Pandas module files and read the source code to see if anything fishy is happening, but such an approach is not feasible.
Note:
Pandas is just an example for this question. I want to use an API key within Streamlit code. Hence, please treat this question as agnostic to the library.
For a production system (on a server), you could use a firewall to filter outgoing connections.
For a development system (your machine), you could add restrictions to the "API key" account (e.g. only access test data, only access the systems you really need, etc.).

AWS CDK Python Create Resources conditionally

I would like to create resources depending on a parameter's value. How can I achieve that?
Example:
vpc_create = core.CfnParameter(stack, "createVPC")
condition = core.CfnCondition(
    stack,
    "testeCondition",
    expression=core.Fn.condition_equals(vpc_create, True),
)
vpc = ec2.Vpc(stack, "MyVpc", max_azs=3)
How do I add the condition to the VPC resource so that the VPC is created only if the parameter is true?
I think I need to get the underlying CloudFormation resource, something like this:
vpc.node.default_child  # I think this returns an object of the ec2.CfnVPC class, but I'm stuck here.
Thanks
Conditional resource creation, and a lot of other flexibility, is possible using context data. AWS itself recommends context over parameters:
In general, we recommend against using AWS CloudFormation parameters with the AWS CDK. Unlike context values or environment variables, the usual way to pass values into your AWS CDK apps without hard-coding them, parameter values are not available at synthesis time, and thus cannot be easily used in other parts of your AWS CDK app, particularly for control flow.
Please read in full at: https://docs.aws.amazon.com/cdk/latest/guide/parameters.html
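For illustration, here is a minimal sketch of the context-based approach (construct names are placeholders and the imports assume the CDK v1 style used in the question); the VPC is only created when the createVPC context value is passed as "true":

from aws_cdk import core
from aws_cdk import aws_ec2 as ec2

app = core.App()
stack = core.Stack(app, "MyStack")

# Supplied on the command line: cdk synth -c createVPC=true
create_vpc = app.node.try_get_context("createVPC")

# Context values are available at synthesis time, so ordinary Python
# control flow decides whether the resource is defined at all.
if create_vpc == "true":
    vpc = ec2.Vpc(stack, "MyVpc", max_azs=3)

app.synth()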

Azure Python SDK to get usage details - UsageDetailsOperations Class

I am new to Python.
I need to get the usage details using the Python SDK.
I am able to do this using the Usage Details API, but I am unable to do so using the SDK.
I am trying to use the azure.mgmt.consumption.operations.UsageDetailsOperations class. The official docs for UsageDetailsOperations
https://learn.microsoft.com/en-us/python/api/azure-mgmt-consumption/azure.mgmt.consumption.operations.usage_details_operations.usagedetailsoperations?view=azure-python#list-by-billing-period
specify four parameters for creating the object
(i.e. client: a client for service requests, config: the configuration of the service client,
serializer: an object model serializer, deserializer: an object model deserializer).
Out of these parameters I only have the client.
I need help understanding how to get the other three parameters, or is there another way to create the UsageDetailsOperations object?
Or is there any other approach to get the usage details?
Thanks!
This class is not designed to be created manually; you need to create a consumption client, which has a usage_details attribute that is the class in question, instantiated correctly.
There are unfortunately no samples for consumption yet, but creating the client is similar to creating any other client (see Network client creation, for instance).
For consumption, what might help is the tests, since they give some idea of scenarios:
https://github.com/Azure/azure-sdk-for-python/blob/fd643a0/sdk/consumption/azure-mgmt-consumption/tests/test_mgmt_consumption.py
If you're new to Azure and Python, you might want to do this quickstart:
https://learn.microsoft.com/en-us/azure/python/python-sdk-azure-get-started
Feel free to open an issue in the main Python repo, asking for more documentation about this client (this will help prioritize it):
https://github.com/Azure/azure-sdk-for-python/issues
(I work at Microsoft on the Python SDK team.)
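For concreteness, here is a hedged sketch of creating the client and listing usage details; the credentials and subscription ID are placeholders, and the attribute and field names may vary between SDK versions:

from azure.common.credentials import ServicePrincipalCredentials
from azure.mgmt.consumption import ConsumptionManagementClient

credentials = ServicePrincipalCredentials(
    client_id="<client-id>",
    secret="<client-secret>",
    tenant="<tenant-id>",
)
client = ConsumptionManagementClient(credentials, "<subscription-id>")

# The client instantiates UsageDetailsOperations for you; just iterate the results.
for detail in client.usage_details.list():
    print(detail.instance_name, detail.pretax_cost)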

Obtain EC2 instance IP from AWS lambda function and use requests library

I am writing a lambda function on Amazon AWS Lambda. It accesses the URL of an EC2 instance, on which I am running a web REST API. The lambda function is triggered by Alexa and is coded in the Python language (python3.x).
Currently, I have hard coded the URL of the EC2 instance in the lambda function and successfully ran the Alexa skill.
I want the lambda function to automatically obtain the IP of the EC2 instance, which changes whenever I start the instance. That way I don't have to go into the code and hard-code the URL each time I start the EC2 instance.
I stumbled upon a similar question on SO, but it was unanswered. However, there was a reply which indicated updating IAM roles. I have already created IAM roles for other purposes before, but I am still not used to it.
Is this possible? Will it require managing of security groups of the EC2 instance?
Do I need to set some permissions/configurations/settings? How can the lambda code achieve this?
Additionally, I pip installed the requests library on my system and tried uploading a .zip file with this structure:
REST.zip/
    requests library folder
    index.py
I am currently using the urllib library.
When I upload my code as a zip file (I currently edit the code inline), Lambda can't even access the index.py file to run the code.
You could do it using boto3, but I would advise against that architecture. A better approach would be to use a load balancer (even if you only have one instance), and then use the CNAME record of the load balancer in your application (this will not change for as long as the LB exists).
An even better way, if you have access to your own domain name, would be to create a CNAME record and point it to the address of the load balancer. Then you can happily use the DNS name in your Lambda function without fear that it would ever change.
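For completeness, here is a minimal sketch of the boto3 route the answer advises against; the Name tag value is hypothetical, and the Lambda execution role would need ec2:DescribeInstances permission:

import boto3

ec2 = boto3.client("ec2")

# Look up the running instance by a (hypothetical) Name tag and read its public IP.
response = ec2.describe_instances(
    Filters=[
        {"Name": "tag:Name", "Values": ["my-api-server"]},
        {"Name": "instance-state-name", "Values": ["running"]},
    ]
)
instance = response["Reservations"][0]["Instances"][0]
url = f"http://{instance['PublicIpAddress']}/api"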

Discovering peer instances in Azure Virtual Machine Scale Set

Problem: Given N instances launched as part of a VMSS, I would like my application code on each Azure instance to discover the IP addresses of the other peer instances. How do I do this?
The overall intent is to cluster the instances so as to provide active-passive HA or keep the configuration in sync.
It seems there is some support for REST API-based querying: https://learn.microsoft.com/en-us/rest/api/virtualmachinescalesets/
I would like to know any other ways to do it, e.g. the Python SDK or the instance metadata URL.
The REST API you mentioned has a Python SDK, the "azure-mgmt-compute" client:
https://learn.microsoft.com/python/api/azure.mgmt.compute.compute.computemanagementclient
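As a hedged sketch of the SDK route (this one uses the companion azure-mgmt-network package, since listing scale-set NICs lives there; all names below are placeholders):

from azure.common.credentials import ServicePrincipalCredentials
from azure.mgmt.network import NetworkManagementClient

credentials = ServicePrincipalCredentials(
    client_id="<client-id>", secret="<client-secret>", tenant="<tenant-id>"
)
network_client = NetworkManagementClient(credentials, "<subscription-id>")

# List every NIC attached to the scale set and print its private IP addresses.
nics = network_client.network_interfaces.list_virtual_machine_scale_set_network_interfaces(
    "<resource-group>", "<vmss-name>"
)
for nic in nics:
    for ip_config in nic.ip_configurations:
        print(nic.name, ip_config.private_ip_address)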
One way to do this would be to use instance metadata. Right now instance metadata only shows information about the VM it's running on, e.g.
curl -H Metadata:true "http://169.254.169.254/metadata/instance/compute?api-version=2017-03-01"
{"compute":
{"location":"westcentralus","name":"imdsvmss_0","offer":"UbuntuServer","osType":"Linux","platformFaultDomain":"0","platformUpdateDomain":"0",
"publisher":"Canonical","sku":"16.04-LTS","version":"16.04.201703300","vmId":"e850e4fa-0fcf-423b-9aed-6095228c0bfc","vmSize":"Standard_D1_V2"},
"network":{"interface":[{"ipv4":{"ipaddress":[{"ipaddress":"10.0.0.4","publicip":"52.161.25.104"}],"subnet":[{"address":"10.0.0.0","dnsservers":[],"prefix":"24"}]},
"ipv6":{"ipaddress":[]},"mac":"000D3AF8BECE"}]}}
You could do something like have each VM send its info to a listener on VM #0 or to an external service, or you could combine this with Azure Files and have each VM write to a common share. There's an Azure template proof of concept here which outputs information from each VM to an Azure Files share: https://github.com/Azure/azure-quickstart-templates/tree/master/201-vmss-azure-files-linux - every VM has a mountpoint which contains info written by every VM.
