How do I write AWS CLI commands in Python?

I am new to AWS as well as Python.
In the AWS CLI, the below command works perfectly fine:
aws cloudformation package --template-file sam.yaml --output-template-file output-sam.yaml --s3-bucket <<bucket_Name>>
The goal is to create an automated Python script that runs the above command. I tried to Google it, but none of the solutions work for me.
Below is the solution I have tried, but it does not upload the artifacts to the S3 bucket.
test.py file:
import subprocess
command= ["aws","cloudformation","package","--template-file","sam.yaml","--output-template-file","output-sam.yaml","--s3-bucket","<<bucket_name>>"]
print(subprocess.check_output(command, stderr=subprocess.STDOUT))

It can easily be done using the os library; the simplest way of doing it is shown in the code below.
import os
os.system("aws cloudformation package --template-file sam.yaml --output-template-file output-sam.yaml --s3-bucket <<bucket_name>>")
However, subprocess can be used for slightly more complicated tasks, as in the sketch below.
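A minimal sketch of the same command with subprocess.run, assuming the AWS CLI is installed and on PATH and that <<bucket_name>> is replaced with a real bucket:
import subprocess

# Run the packaging command and capture its output;
# check=True raises CalledProcessError if the CLI exits non-zero.
result = subprocess.run(
    ["aws", "cloudformation", "package",
     "--template-file", "sam.yaml",
     "--output-template-file", "output-sam.yaml",
     "--s3-bucket", "<<bucket_name>>"],
    capture_output=True, text=True, check=True)
print(result.stdout)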
You can also check out the boto3 library for such tasks. Boto3 is the AWS SDK for Python.

You can check how this aws-cli command is implemented, as it's all in Python already. Basically, aws cloudformation package uploads the template to S3, so you can do the same with boto3, as mentioned in the comments.
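As a rough sketch of that idea with boto3 (the bucket name below is a placeholder; note the real command also rewrites local artifact paths in the template, which this sketch does not do):
import boto3

# Hedged sketch: upload the local template file to S3, roughly what
# `aws cloudformation package` does for each local artifact it finds.
s3 = boto3.client("s3")
s3.upload_file("sam.yaml", "my-artifact-bucket", "sam.yaml")  # placeholder bucket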

Related

Can I run Maven dependency:tree in AWS Lambda via Python?

I have a question about Python, Maven, and AWS Lambda. Basically, I am trying to build dependency trees for my repos using the terminal command
mvn dependency:tree
This command is run via Python using the os library, i.e.
import os
os.system('mvn dependency:tree')
Now comes the issue: I need to run this on AWS Lambda.
Being aware that AWS Lambda is serverless and that the layers of each Lambda can only total 250 MB: 1) is it possible to run terminal commands via Lambda without spinning up any sort of server? And 2) Maven usually needs to be installed on a system, so is it possible, or even viable, to run Maven on AWS Lambda?
Any input will be appreciated.
Thanks
is it possible to run terminal commands via lambda without spinning up any sort of server?
Yes, you can run terminal commands in a Lambda function.
maven usually needs to be installed on a system, thus is it possible, or even viable, to run maven on AWS Lambda?
You can create a custom Lambda container image that includes dependencies.
Additional AWS Blog Post: https://aws.amazon.com/blogs/aws/new-for-aws-lambda-container-image-support/
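As a rough sketch under those assumptions (mvn and a JDK baked into the container image; the pom.xml path is illustrative), the Python side of such a Lambda might look like:
import subprocess

def handler(event, context):
    # Shell out to Maven inside the Lambda execution environment; this only
    # works if mvn and a JDK are present in the container image.
    result = subprocess.run(
        ["mvn", "dependency:tree", "-f", "/var/task/pom.xml"],  # illustrative path
        capture_output=True, text=True)
    return {"returncode": result.returncode, "output": result.stdout}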

Boto3 - Copying File from S3 to Lightsail

I have a requirement to copy a file as soon as it lands in the S3 bucket, so I enabled an S3 trigger on a Lambda function. But I am not sure how to copy the content to a Lightsail directory using AWS Lambda. I looked into the documentation, but I don't see any solution using Python (Boto3).
I can only see the FTP solution. Is there any other way to dump the file into Lightsail from S3?
Thanks.
Generally, for EC2 instances you use SSM Run Command for that.
In this solution, your Lambda function sends a command (e.g. download some files from S3) to the instance by means of SSM Run Command.
For this to work, you need to install the SSM Agent on your Lightsail instance. The following link describes the installation process:
Configuring SSM Agent on an Amazon Lightsail Instance.
The link also gives an example of sending commands to the instance:
How to Run Shell Commands on a Managed Instance
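A hedged sketch of the Lambda side with boto3 (the instance ID and target path are placeholders; the Lightsail instance must already be registered with SSM as described in the links above):
import boto3

ssm = boto3.client("ssm")

def handler(event, context):
    # Pull the bucket/key of the newly created object from the S3 trigger event
    # and ask the SSM agent on the instance to copy it down with the AWS CLI.
    bucket = event["Records"][0]["s3"]["bucket"]["name"]
    key = event["Records"][0]["s3"]["object"]["key"]
    ssm.send_command(
        InstanceIds=["mi-0123456789abcdef0"],  # placeholder managed-instance ID
        DocumentName="AWS-RunShellScript",
        Parameters={"commands": ["aws s3 cp s3://{}/{} /home/ubuntu/".format(bucket, key)]})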

Google Cloud Functions - how do I authenticate to AWS S3 bucket?

I am trying to get a Google Cloud Function in Python 3.7 to take a file from Google Cloud Storage and upload it into AWS S3. On the command line, I would authenticate with awscli and then use the gsutil cp command to copy the file across. I have translated this process into Python as:
import subprocess
def GCS_to_s3(arg1, arg2):
    subprocess.call(["aws configure set aws_access_key_id AKIA********"], shell=True)
    subprocess.call(["aws configure set aws_secret_access_key EgkjntEFFDVej"], shell=True)
    subprocess.call(["gsutil cp gs://test_bucket/gcs_test.csv s3://mybucket"], shell=True)
The requirements.txt is:
awscli
google-cloud-storage
This function deploys and runs successfully, but the output does not show up in the S3 bucket.
What would be the best way of writing such a function?
You'll probably want to use the boto3 Python package instead, since the command-line AWS tools aren't available or installable in Cloud Functions. There are a number of ways to configure credentials as well.
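A minimal sketch of that approach, assuming AWS credentials are provided via environment variables rather than hard-coded, and reusing the bucket/object names from the question:
import boto3
from google.cloud import storage

def GCS_to_s3(arg1, arg2):
    # Download the object from Cloud Storage into memory...
    blob = storage.Client().bucket("test_bucket").blob("gcs_test.csv")
    data = blob.download_as_bytes()
    # ...then upload it to S3; boto3 picks up AWS_ACCESS_KEY_ID /
    # AWS_SECRET_ACCESS_KEY from the function's environment variables.
    boto3.client("s3").put_object(Bucket="mybucket", Key="gcs_test.csv", Body=data)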

CSV file from AWS S3 into PostgreSQL on Amazon RDS using Python

Status:
I have created new tables in a PostgreSQL database on Amazon RDS.
I have uploaded a CSV file into a bucket on Amazon S3.
Via a Lambda function I have connected to the Amazon S3 bucket and Amazon RDS.
I can read the CSV file via the following code:
import csv, io, boto3

s3 = boto3.resource('s3')
client = boto3.client('s3', aws_access_key_id=Access_Key, aws_secret_access_key=Secret_Access_Key)
buf = io.BytesIO()
s3.Object('bucketname', 'filename.csv').download_fileobj(buf)
buf.seek(0)
while True:
    line = buf.readline()
    if not line:
        break
    print(line)
Problem:
I can't import necessary Python libraries, e.g. psycopg2, openpyxl, etc.
When I tried to import psycopg2 with
import psycopg2
I got this error:
Unable to import module 'myfilemane': No module named 'psycopg2._psycopg'
At first I did not import the module "psycopg2._psycopg" but "psycopg2"; I don't know where the suffix '_psycopg' comes from.
Secondly, I followed all the steps in the documentation:
https://docs.aws.amazon.com/lambda/latest/dg/lambda-python-how-to-create-deployment-package.html (1. create a directory. 2. Save all of your Python source files (the .py files) at the root level of this directory. 3. Install any libraries using pip at the root level of the directory. 4. Zip the content of the project-dir directory)
And I have also read this documentation:
https://docs.aws.amazon.com/lambda/latest/dg/vpc-rds-deployment-pkg.html
The same applies to other modules or libraries, e.g. openpyxl etc.; I am always told "No module named 'OneNameThatIHaveNotImported'".
So does anyone have an idea, or know another way to edit the CSV file on S3 via a Lambda function and import the edited version into the RDS database?
Thanks for the help in advance!
The answer thread this SO answer references will put you on the right path. Basically, you need to create the deployment package on an EC2 instance that matches the Linux image the AWS Lambda function runs on. Better yet, you can deploy Lambda functions through the AWS CLI from the same staging EC2 instance where you created your deployment package.
You can also use precompiled Lambda packages if you want an out-of-the-box fix: https://github.com/jkehler/awslambda-psycopg2 or, more generally, https://github.com/Miserlou/lambda-packages
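Once psycopg2 imports cleanly (for example from one of the precompiled packages above), a rough sketch of the Lambda flow might look like this; the RDS endpoint, credentials, table, and column names below are placeholders:
import csv
import io
import boto3
import psycopg2

def handler(event, context):
    # Read the CSV from S3 into memory.
    obj = boto3.client("s3").get_object(Bucket="bucketname", Key="filename.csv")
    rows = csv.reader(io.StringIO(obj["Body"].read().decode("utf-8")))
    # Insert each row into the PostgreSQL table on RDS.
    conn = psycopg2.connect(host="my-rds-endpoint", dbname="mydb",
                            user="myuser", password="mypassword")  # placeholders
    with conn, conn.cursor() as cur:
        for row in rows:
            cur.execute("INSERT INTO mytable (col1, col2) VALUES (%s, %s)", row)
    conn.close()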

Uploading to S3 using Python Requests

I'd like to upload XMLs directly to S3 without the use of modules like boto, boto3, or tinys3.
So far I have written:
import requests

url = "https://my-test-s3.s3.amazonaws.com"
with open(xml_file, 'rb') as data:
    requests.put(url, data=data)
I've gone ahead and set the AllowedOrigin on my S3 bucket to accept my server's address.
This does not error when running; however, it also does not seem to be uploading anything.
Any help would be appreciated. I'd like to (a) get the thing to upload and (b) figure out how to apply the AWSAccessKey and AWSSecretAccessKey to the request.
If you want to upload XMLs directly to S3 without the use of modules like boto, boto3, or tinys3, I would recommend using the awscli:
pip install awscli
aws configure # enter your AWSAccessKey and AWSSecretAccessKey credentials
The AWSAccessKey and AWSSecretAccessKey will be stored permanently inside the ~/.aws folder after running aws configure.
And then you can upload files using Python:
import os
os.system("aws s3 cp {0} s3://your_bucket_name/{1}".format(file_path, file_name))
Docs are here.
You need to install the awscli following this documentation.
Then, in a command-line shell, execute aws configure and follow the instructions.
To upload a file, it's much easier using boto3:
import boto3
s3 = boto3.resource('s3')
s3.meta.client.upload_file(xml_file, 'yourbucket', 'yours3filepath')
Alternatively, you can use the aws s3 cp command combined with Python's subprocess:
import subprocess
subprocess.call(["aws", "s3", "cp", xml_file, "yours3destination"])
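If you want to keep the requests.put call from the original question, one hedged alternative is to have boto3 (or any other signing-capable tool) generate a presigned PUT URL once and then upload with plain requests; the bucket and key below are placeholders:
import boto3
import requests

# Generate a presigned URL that embeds the signature, so the PUT request
# itself carries no separate AWS credentials.
url = boto3.client("s3").generate_presigned_url(
    "put_object",
    Params={"Bucket": "my-test-s3", "Key": "myfile.xml"},  # placeholders
    ExpiresIn=3600)

with open("myfile.xml", "rb") as data:
    requests.put(url, data=data)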
