AWS route table export detail to CSV - python

I need to export my route table details to CSV. The goal is to load the CSV into a graph DB. What I need is the RouteTableId, CIDR block, gateway, and associated subnet list as a CSV table. I would like to automate this as much as I can, since I have several VPCs to consolidate data for.
1- I cannot find a boto3 query that provides this depth of detail.
2- AWS Config likewise does not go to this depth.
3- The AWS CLI will get me the detail, but as nested JSON, and there I lose the ease of automation.
Am I missing something here, or does AWS not expose this detail for automation?

In the AWS EC2 CLI, describe-route-tables (link) can fetch the RouteTableId, CIDR block, gateway, and associated subnet list. If you use the CLI, the automation amounts to pulling those four values out of the JSON, converting them into rows, and writing them to a CSV.
The boto3 equivalent uses the RouteTable and RouteTableAssociation data. The sketch below shows one way to do this in Python.
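A minimal sketch, assuming the default AWS profile/region and a hypothetical output file name; routes whose destination is not an IPv4 CIDR are skipped for brevity:

```python
import csv
import boto3

ec2 = boto3.client("ec2")  # region/credentials come from your AWS config

rows = []
# Paginate in case you have many route tables across your VPCs
paginator = ec2.get_paginator("describe_route_tables")
for page in paginator.paginate():
    for rt in page["RouteTables"]:
        rt_id = rt["RouteTableId"]
        # Subnets explicitly associated with this route table
        subnets = ";".join(
            a["SubnetId"] for a in rt.get("Associations", []) if "SubnetId" in a
        )
        for route in rt.get("Routes", []):
            cidr = route.get("DestinationCidrBlock", "")
            # The "gateway" can appear under different keys depending on the target type
            gateway = (
                route.get("GatewayId")
                or route.get("NatGatewayId")
                or route.get("TransitGatewayId")
                or ""
            )
            rows.append([rt_id, cidr, gateway, subnets])

with open("route_tables.csv", "w", newline="") as f:  # placeholder file name
    writer = csv.writer(f)
    writer.writerow(["RouteTableId", "CIDR", "Gateway", "Subnets"])
    writer.writerows(rows)
```

If you want one CSV per VPC, you can pass Filters=[{"Name": "vpc-id", "Values": [...]}] to the paginator and run the loop per VPC.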

Related

Python data ingestion (starting with an API "GET" call, landing in an AWS S3 bucket): how to manage the username/password/API key and a token that expires in a short time window

The data source is a SaaS server's API endpoints, and the aim is to use Python to move the data into an AWS S3 bucket (with the boto3 library).
API access is granted via an authorized username/password combination and a unique API key.
Each API call must first obtain a token before further data can be fetched.
I have 2 questions:
How should I manage the secrets above: save them to a config file (*.ini, *.json, *.yaml) or store them in AWS Secrets Manager?
The token is a bit challenging. The basic approach is to fetch a new token per endpoint and then make the API call,
but that ends up as too many pipelines (e.g. if 100 endpoints are needed for downstream business needs),
meaning I would need to craft 100 pipelines from a universal template, repeated 100 times.
I am new to the Python programming world, so feel free to comment and share any use case.
Much appreciated!
I searched and read these examples:
[saving-from-api-to-s3-bucket/74648533]
Saving from API to S3 bucket
and
"how-to-write-a-file-or-data-to-an-s3-object-using-boto3"
How to write a file or data to an S3 object using boto3
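For the second link, writing in-memory data to an S3 object with boto3 comes down to put_object; a minimal sketch, with the bucket name and object key as placeholders:

```python
import json
import boto3

s3 = boto3.client("s3")

payload = {"example": "data pulled from the SaaS API"}  # placeholder payload

# put_object writes bytes directly to an S3 key; no temp file needed
s3.put_object(
    Bucket="my-ingestion-bucket",           # placeholder bucket name
    Key="saas/endpoint-1/2024-01-01.json",  # placeholder object key
    Body=json.dumps(payload).encode("utf-8"),
    ContentType="application/json",
)
```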
I found this helpful:
# python-decouple summary: store parameters in .ini or .env files;
# a few options for managing (hiding) sensitive info:
a. IAM role
b. Store secrets using **Parameter Store**
c. Store secrets using **Secrets Manager** - the method currently recommended by AWS
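If you go the Secrets Manager route, retrieval from Python is a single boto3 call; a minimal sketch, assuming a hypothetical secret named saas/api-credentials stored as a JSON string:

```python
import json
import boto3

def get_api_credentials(secret_name="saas/api-credentials"):
    """Fetch the username/password/api-key stored as a JSON secret."""
    client = boto3.client("secretsmanager")
    response = client.get_secret_value(SecretId=secret_name)
    return json.loads(response["SecretString"])

creds = get_api_credentials()
# creds["username"], creds["password"], creds["api_key"] can then be used once
# to request a short-lived token and reuse it across endpoints until it expires.
```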

automatically extract aws keys using python

I have been given AWS access, but I obtain credentials manually, i.e. I copy the access_key_id, secret_access_key and session_token, which expire every hour. I am using these credentials to extract information from Route 53. I want to automate fetching the access_key_id, secret_access_key and session_token instead of manually copying them into the script. Is there any way to automate this?
The process of refreshing tokens is documented here:
Auto-refresh AWS Tokens Using IAM Role and boto3
It is poorly covered in the boto3 documentation, but this should help. A sketch of the idea is below.
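A minimal sketch of the underlying idea, assuming there is an IAM role you are permitted to assume (the role ARN below is a placeholder): boto3's STS client returns fresh temporary credentials, so nothing has to be copied by hand.

```python
import boto3

def route53_client_via_assumed_role(
    role_arn="arn:aws:iam::123456789012:role/Route53Reader",  # placeholder role ARN
):
    """Assume an IAM role and build a Route 53 client from the temporary credentials."""
    sts = boto3.client("sts")
    resp = sts.assume_role(RoleArn=role_arn, RoleSessionName="route53-automation")
    creds = resp["Credentials"]  # AccessKeyId, SecretAccessKey, SessionToken, Expiration
    return boto3.client(
        "route53",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )

client = route53_client_via_assumed_role()
zones = client.list_hosted_zones()
```

Re-running assume_role (or using a session that refreshes automatically, as in the linked article) replaces the hourly copy-paste step.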

Elasticsearch Data Insertion with Python

I'm brand new to using the Elastic Stack so excuse my lack of knowledge on the subject. I'm running the Elastic Stack on a Windows 10, corporate work computer. I have Git Bash installed for a bash cli, and I can successfully launch the entire Elastic Stack. My task is to take log data that is stored in one of our databases and display it on a Kibana dashboard.
From what my team and I have reasoned, I don't need to use Logstash because the database that the logs are sent to is effectively our 'log stash', so to use the Logstash service would be redundant. I found this nifty diagram
on freecodecamp, and from what I gather, Logstash is just the intermediary for log retrieval across different services. So instead of using Logstash, since the log data is already in a database, I could just do something like this:
USER ---> KIBANA <---> ELASTICSEARCH <--- My Python Script <--- [DATABASE]
My Python script successfully calls our database and retrieves the data, and it has a function that molds the data into a dict object (as I understand it, Elasticsearch takes data in JSON format).
Now I want to insert all of that data into Elasticsearch - I've been reading the Elastic docs, and there's a lot of talk about indexing that isn't really indexing, and I haven't found any API calls I can use to plug the data right into Elasticsearch. All of the documentation I've found so far concerns the use of Logstash, but since I'm not using Logstash, I'm kind of at a loss here.
If there's anyone who can help me out and point me in the right direction I'd appreciate it. Thanks
-Dan
You ingest data into Elasticsearch using the Index API; it is basically a request using the PUT method.
To do that with Python you can use elasticsearch-py, the official Python client for Elasticsearch.
But sometimes what you need is easier to do with Logstash, since it can extract the data from your database, format it with its many filters, and send it to Elasticsearch.
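A minimal sketch with elasticsearch-py, assuming a local cluster at http://localhost:9200 and an index name of your choosing (the keyword for the document body is document= in the 8.x client, body= in older 7.x clients):

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # adjust host/auth for your cluster

# One log record already molded into a dict by your script
log_entry = {
    "timestamp": "2023-01-01T12:00:00",
    "level": "ERROR",
    "message": "example log line pulled from the database",
}

# Index (i.e. insert) the document; Elasticsearch creates the index if it doesn't exist
es.index(index="app-logs", document=log_entry)
```

For many records at once, the bulk helper in the same package (elasticsearch.helpers.bulk) is usually much faster than indexing one document at a time.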

Easiest way to retrieve snapshot time stamp and ID from AWS in python?

I am trying to automate passing copies of snapshot backups to different regions using this code (AWS Lambda - Copy EC2 Snapshot automatically between regions?).
I have tried using boto3 library in python but I keep getting this error:
EndpointConnectionError: Could not connect to the endpoint URL: "https://ec2.us-east-1a.amazonaws.com/"
Using this code:
client = boto3.client('ec2')
client.describe_snapshots(OwnerIds=['self'])
I have ensured that my config file has the right security keys. Not sure what else I can do to retrieve the pieces of information I need.
My config file was like this:
region=us-east-1a
I changed it to:
region=us-east-1
us-east-1a is an Availability Zone, not a region; the same applies to other regions (for example, region=us-east-2c should be region=us-east-2).
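With the region fixed, pulling the snapshot ID and timestamp is straightforward; a minimal sketch, with the region name as an example value:

```python
import boto3

# Region names have no trailing letter; us-east-1a is an Availability Zone
ec2 = boto3.client("ec2", region_name="us-east-1")

snapshots = ec2.describe_snapshots(OwnerIds=["self"])["Snapshots"]
for snap in snapshots:
    # StartTime is a datetime; SnapshotId is what you pass to copy_snapshot
    print(snap["SnapshotId"], snap["StartTime"].isoformat())
```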

Combining many log files in Amazon S3 and reading them in locally

I have a log file being stored in Amazon S3 every 10 minutes. I am trying to access weeks' and months' worth of these log files and read them into Python.
I have used boto to open and read every key and append all the logs together, but it's way too slow. I am looking for an alternative solution. Do you have any suggestions?
There is no functionality on Amazon S3 to combine or manipulate files.
I would recommend using the AWS Command-Line Interface (CLI) to synchronize files to a local directory using the aws s3 sync command. This can copy files in parallel and supports multi-part transfer for large files.
Running that command regularly brings down a copy of the new files; your app can then combine them locally rather quickly (see the sketch below).
If you do this from an Amazon EC2 instance, there is no charge for data transfer. If you download to a computer via the Internet, then Data Transfer charges apply.
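Once aws s3 sync has mirrored the bucket prefix to a local directory, concatenating the pieces is a quick local operation; a minimal sketch, with the bucket and local paths as placeholders:

```python
from pathlib import Path

# Directory populated by: aws s3 sync s3://my-log-bucket/logs/ ./logs/
log_dir = Path("./logs")        # placeholder local directory
combined = Path("combined.log")

with combined.open("w") as out:
    # Sort by name so the 10-minute chunks stay in chronological order
    for piece in sorted(log_dir.glob("**/*.log")):
        out.write(piece.read_text())
```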
Your first problem is that your naive solution is probably only using a single connection and isn't making full use of your network bandwidth. You can try to roll your own multi-threading support, but it's probably better to experiment with existing clients that already do this (s4cmd, aws-cli, s3gof3r).
Once you're making full use of your bandwidth, there are then some further tricks you can use to boost your transfer speed to S3.
Tip 1 of this SumoLogic article has some good info on these first two areas of optimization.
Also, note that you'll need to modify your key layout if you hope to consistently get above 100 requests per second.
Given a year's worth of this log file is only ~50k objects, a multi-connection client on a fast ec2 instance should be workable. However, if that's not cutting it, the next step up is to use EMR. For instance, you can use S3DistCP to concatenate your log chunks into larger objects that should be faster to pull down. (Or see this AWS Big Data blog post for some crazy overengineering) Alternatively, you can do your log processing in EMR with something like mrjob.
Finally, there's also Amazon's new Athena product that allows you to query data stored in S3 and may be appropriate for your needs.
