Append String in a file Using Python Boto 3 - python

I am writing a Lambda function in Python. I need to collect a list of AMIs that have a specified tag key-value pair and write it to an S3 bucket as a JSON file. My code is below:
import boto3
import json

client = boto3.client('ec2')

def lambda_handler(event, context):
    response = client.describe_images(Owners=['self'])
    versions = response['Images']
    for x in range(len(versions)):
        if {'Key': 'product', 'Value': 'code'} in response['Images'][x]['Tags']:
            ImageId = versions[x]['ImageId']
            print(ImageId)
            s3 = boto3.resource('s3')
            obj = s3.Object('my-ami-bucketforelk', 'hello.json')
            obj.put(Body=json.dumps(ImageId))
My Lambda works as expected except for one thing: my output keeps being overwritten, so I am only able to write one AMI ID at a time.
Can somebody help me resolve this issue?

You're writing the object to S3 for each and every image ID. Instead, accumulate the image IDs in a list, and then upload that to S3 at the end. For example:
import json
import boto3

ec2 = boto3.client('ec2')
s3 = boto3.resource('s3')

def lambda_handler(event, context):
    response = ec2.describe_images(Owners=['self'])
    versions = response['Images']
    images = []
    for x in range(len(versions)):
        if {'Key': 'product', 'Value': 'code'} in response['Images'][x]['Tags']:
            ImageId = versions[x]['ImageId']
            images.append(ImageId)
    obj = s3.Object('my-ami-bucketforelk', 'hello.json')
    obj.put(Body=json.dumps(images))
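A related option, sketched here reusing the same tag key/value and bucket name as the question, is to let EC2 do the tag filtering itself via the Filters parameter of describe_images, so Python never has to inspect each image's Tags list:
import json
import boto3

ec2 = boto3.client('ec2')
s3 = boto3.resource('s3')

def lambda_handler(event, context):
    # Ask EC2 for only the images that carry the product=code tag.
    response = ec2.describe_images(
        Owners=['self'],
        Filters=[{'Name': 'tag:product', 'Values': ['code']}]
    )
    images = [image['ImageId'] for image in response['Images']]
    s3.Object('my-ami-bucketforelk', 'hello.json').put(Body=json.dumps(images))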

Related

S3 folder to folder file copy using Lambda [duplicate]

I am trying to copy multiple files from one S3 bucket to another using a Lambda function, but it only copies two files to the destination bucket.
Here is my code:
# using python and boto3
import json
import boto3

s3_client = boto3.client('s3')

def lambda_handler(event, context):
    source_bucket_name = event['Records'][0]['s3']['bucket']['name']
    file_name = event['Records'][0]['s3']['object']['key']
    destination_bucket_name = 'nishantnkd'
    copy_object = {'Bucket': source_bucket_name, 'Key': file_name}
    s3_client.copy_object(CopySource=copy_object,
                          Bucket=destination_bucket_name, Key=file_name)
    return {'statusCode': 3000,
            'body': json.dumps('File has been Successfully Copied')}
I presume that the Amazon S3 bucket is configured to trigger the AWS Lambda function when a new object is created.
When the Lambda function is triggered, it is possible that multiple event records are sent to the function. Therefore, it should loop through the event records like this:
# using python and boto3
import json
import urllib.parse
import boto3

s3_client = boto3.client('s3')

def lambda_handler(event, context):
    for record in event['Records']:  # This loop added
        source_bucket_name = record['s3']['bucket']['name']
        file_name = urllib.parse.unquote_plus(record['s3']['object']['key'])  # Note this change too
        destination_bucket_name = 'nishantnkd'
        copy_object = {'Bucket': source_bucket_name, 'Key': file_name}
        s3_client.copy_object(CopySource=copy_object, Bucket=destination_bucket_name, Key=file_name)
    return {'statusCode': 3000,
            'body': json.dumps('File has been Successfully Copied')}
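The urllib.parse.unquote_plus call matters because object keys in S3 event notifications arrive URL-encoded, so spaces show up as + characters. A small illustration with a made-up key:
import urllib.parse

# Keys in S3 event notifications are URL-encoded; spaces arrive as '+'.
encoded_key = 'reports/monthly+sales+2021.csv'   # hypothetical encoded key
print(urllib.parse.unquote_plus(encoded_key))    # -> reports/monthly sales 2021.csv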

How to retrieve AWS S3 objects URL using python

I need to write a Lambda function that retrieves an S3 object URL for object preview. I came across this solution, but I have a question about it. In my case, I would like to retrieve the URL of any object in my S3 bucket, hence there is no key name. How can I retrieve the URL of any future objects stored in my S3 bucket?
bucket_name = 'aaa'
aws_region = boto3.session.Session().region_name
object_key = 'aaa.png'
s3_url = f"https://{bucket_name}.s3.{aws_region}.amazonaws.com/{object_key}"
return {
    'statusCode': 200,
    'body': json.dumps({'s3_url': s3_url})
}
You have some examples here. But what exactly would you like to do? What do you mean by future objects? You can put a creation event on your bucket that will trigger your Lambda each time a new object is uploaded into that bucket.
import boto3

def lambda_handler(event, context):
    print(event)
    bucket = event['Records'][0]['s3']['bucket']['name']
    key = event['Records'][0]['s3']['object']['key']
    s3 = boto3.client('s3')
    obj = s3.get_object(
        Bucket=bucket,
        Key=key
    )
    print(obj['Body'].read().decode('utf-8'))
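If the bucket is private, a presigned URL is a common way to hand back a temporary link for any object instead of building the public-style URL by hand. A minimal sketch, assuming the same S3-triggered event and that a one-hour expiry is acceptable:
import json
import boto3

s3 = boto3.client('s3')

def lambda_handler(event, context):
    bucket = event['Records'][0]['s3']['bucket']['name']
    key = event['Records'][0]['s3']['object']['key']
    # Presigned URLs grant time-limited access without making the bucket public.
    url = s3.generate_presigned_url(
        'get_object',
        Params={'Bucket': bucket, 'Key': key},
        ExpiresIn=3600,  # assumption: one hour is enough
    )
    return {'statusCode': 200, 'body': json.dumps({'s3_url': url})}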

How to read a csv file from S3 using dask and mentioning my access key and secret?

I am trying to read a CSV file from S3 using dask, but I am getting the following error. Can anyone please correct me if I'm doing anything wrong here?
aws_access_key_id = 'xxxx'
aws_secret_access_key = 'xxxx'
df = dd.read_csv('s3://{bucket}/{file_key.csv}', storage_options = {'key': aws_access_key_id, 'secret': aws_secret_access_key})
Error I am facing:
TypeError: sequence item 0: expected str instance, tuple found
You can use boto3 to create an S3 connection with an access key ID and secret access key.
import io
import boto3
import pandas as pd
import dask.dataframe as dd

s3_client = boto3.client('s3')
response = s3_client.get_object(Bucket=bucket, Key=s3_key)
file = response["Body"].read()
# dask's read_csv expects a path or glob, so load the bytes with pandas first
df = dd.from_pandas(pd.read_csv(io.BytesIO(file)), npartitions=1)
Note: export the keys as environment variables and read them with os.environ instead of hard-coding them.
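As a rough sketch of that note: the keys can be read back with os.environ, and boto3 also picks up the standard AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY variables on its own:
import os
import boto3

# boto3 resolves these environment variables automatically; passing them
# explicitly is only shown here to make the os.environ usage visible.
s3_client = boto3.client(
    's3',
    aws_access_key_id=os.environ['AWS_ACCESS_KEY_ID'],
    aws_secret_access_key=os.environ['AWS_SECRET_ACCESS_KEY'],
)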
I used boto3, and this is similar to reading a CSV from S3 using pandas. This worked for me!
import boto3
import dask
import dask.dataframe as dd
df = dd.read_csv('s3://*****.csv', storage_options = {'key': 'XXXX', 'secret': 'XXXX'}, assume_missing=True)
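Since dask reads S3 through s3fs, the usual AWS credential chain applies here as well: with the keys exported as environment variables (or stored in ~/.aws/credentials), storage_options can usually be dropped. A sketch with a placeholder path:
import dask.dataframe as dd

# With AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY exported in the environment,
# s3fs resolves credentials itself and no keys need to be hard-coded.
df = dd.read_csv('s3://my-bucket/path/to/*.csv', assume_missing=True)  # placeholder path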

boto3 python - list objects

While trying to list objects with a prefix, the return is fetching only one object in my Lambda. Not sure what is missing.
import boto3

s3 = boto3.resource('s3')

def lambda_handler(event, context):
    try:
        ## Bucket to use
        bucket = s3.Bucket(mybucket)
        ## List objects within a given prefix
        for obj in bucket.objects.filter(Prefix='output/group1'):
            print(obj.key)
It's hard to know what the exact problem is when we can't see a valid function or any returned errors. This code works without issue for me:
import boto3

s3 = boto3.resource('s3')

def lambda_handler(event, context):
    bucket = s3.Bucket('your-bucket-name')
    for obj in bucket.objects.filter(Prefix='output/group1'):
        print(obj.key)

lambda_handler('event', 'context')
Make sure the 'output/group1' prefix actually contains more than one object to return.
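If the prefix may hold more than 1,000 objects, the client API with a paginator is another way to be sure nothing is cut off (the resource's objects.filter already paginates for you); an equivalent sketch with a placeholder bucket name:
import boto3

s3_client = boto3.client('s3')
paginator = s3_client.get_paginator('list_objects_v2')

# Iterate over every page of results for the prefix.
for page in paginator.paginate(Bucket='your-bucket-name', Prefix='output/group1'):
    for item in page.get('Contents', []):
        print(item['Key'])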

Get a specific file from s3 bucket (boto3)

So I have file.csv in my bucket 'test'. I'm creating a new session and I want to download the contents of this file:
session = boto3.Session(
    aws_access_key_id=KEY,
    aws_secret_access_key=SECRET_KEY
)
s3 = session.resource('s3')
obj = s3.Bucket('test').objects.filter(Prefix='file.csv')
This returns a collection, but is there a way to fetch the file directly? Without any loops, I want to do something like:
s3.Bucket('test').objects.get(key='file.csv')
I could achieve the same result without passing credentials like this:
s3 = boto3.client('s3')
obj = s3.get_object(Bucket='test', Key='file.csv')
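For completeness, get_object returns the data as a streaming Body, so the contents of file.csv can be read straight from the response; a short sketch:
import boto3

s3 = boto3.client('s3')
obj = s3.get_object(Bucket='test', Key='file.csv')
# The Body field is a streaming object; read() returns the raw bytes.
contents = obj['Body'].read().decode('utf-8')
print(contents)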
If you take a look at the client method:
import boto3
s3_client = boto3.client('s3')
s3_client.download_file('mybucket', 'hello.txt', '/tmp/hello.txt')
and the resource method:
import boto3
s3 = boto3.resource('s3')
s3.meta.client.download_file('mybucket', 'hello.txt', '/tmp/hello.txt')
you'll notice that you can convert from the resource to the client with meta.client.
So, combine it with your code to get:
session = boto3.Session(aws_access_key_id=KEY, aws_secret_access_key=SECRET_KEY)
s3 = session.resource('s3')
obj = s3.meta.client.download_file('mybucket', 'hello.txt', '/tmp/hello.txt')
I like mpu.aws.s3_download, but I'm biased ;-)
It does it like this:
import os
import boto3

def s3_download(bucket_name, key, destination, profile_name, exists_strategy='raise'):
    session = boto3.Session(profile_name=profile_name)
    s3 = session.resource('s3')
    if os.path.isfile(destination):
        if exists_strategy == 'raise':
            raise RuntimeError('File \'{}\' already exists.'
                               .format(destination))
        elif exists_strategy == 'abort':
            return
    s3.Bucket(bucket_name).download_file(key, destination)
For authentication, I recommend using environment variables. See boto3: Configuring Credentials for details.
You can use the following boto3 method:
download_file(Bucket, Key, Filename, ExtraArgs=None, Callback=None, Config=None)
import boto3

s3 = boto3.resource('s3')
s3.meta.client.download_file('mybucket', 'hello.txt', '/tmp/hello.txt')
Find more details here: download_file()
