How to execute bash commands in AWS Lambda - Python

I am trying to schedule a job in AWS Lambda where I get data from a JSON API. I want to transfer the JSON file to Amazon S3 every time. I have set up the S3 bucket and the AWS Lambda function with the proper IAM roles. I am writing the AWS Lambda function in Python. The code works fine on an EC2 instance, but it does not transfer the file to S3 when I put it in AWS Lambda.
import os

def lambda_handler(event, context):
    # change the working directory to /tmp (the only writable path in Lambda)
    os.chdir("/tmp")
    print("loading function")
    # download the JSON file to /tmp
    os.system("wget https://jsonplaceholder.typicode.com/posts/1 -P /tmp")
    # use the AWS CLI to transfer the file to Amazon S3
    os.system("aws s3 sync . s3://targetbucket")
I am new to AWS Lambda. I am not getting any errors, but it is not giving me the expected output.

AWS Lambda does not include the AWS CLI by default.
You can either create a deployment package that bundles the AWS CLI, or use the Python boto3 library:
import os
import boto3

s3client = boto3.client('s3')
for filename in os.listdir('/tmp'):  # assuming there will not be any sub-directories
    fpath = os.path.join('/tmp', filename)
    if os.path.isfile(fpath):
        s3client.upload_file(fpath, 'targetbucket', filename)
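Putting both steps together, a minimal sketch of the whole handler could look like this, using urllib from the standard library instead of shelling out to wget (the URL is the one from the question; the bucket name and object key are placeholders):

import urllib.request
import boto3

s3client = boto3.client('s3')

def lambda_handler(event, context):
    # /tmp is the only writable directory in Lambda
    local_path = '/tmp/posts_1.json'
    # download the JSON payload (URL from the question)
    urllib.request.urlretrieve('https://jsonplaceholder.typicode.com/posts/1', local_path)
    # upload it to S3 (bucket name and key are placeholders)
    s3client.upload_file(local_path, 'targetbucket', 'posts_1.json')
    return {'uploaded': 'posts_1.json'}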

Related

How to Upload to AWS S3 with Transfer Acceleration using Python BOTO

How can I pass a Transfer Acceleration endpoint-url to the boto upload_file function while trying to upload a file to S3?
My current code is:
s3.Bucket(BUCKET).upload_file(filetoupload, pathnfilename, ExtraArgs={'ACL':'public-read'})
You would provide it when you create your resource object:
s3 = boto3.resource('s3', endpoint_url="the_endpoint")
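For Transfer Acceleration specifically, a sketch that should work is to point the client at the accelerate endpoint through the botocore config rather than hard-coding a URL (this assumes acceleration is already enabled on the bucket; BUCKET, filetoupload and pathnfilename are the names from the question):

import boto3
from botocore.config import Config

# assumes Transfer Acceleration is enabled on the bucket
s3 = boto3.resource('s3', config=Config(s3={'use_accelerate_endpoint': True}))
s3.Bucket(BUCKET).upload_file(filetoupload, pathnfilename, ExtraArgs={'ACL': 'public-read'})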

No Credentials Error - Using boto3 and aws s3 bucket

I am using Python and a Jupyter notebook to read files from an AWS S3 bucket, and I am getting the error 'NoCredentialsError: Unable to locate credentials' when running the following code:
import boto3

s3 = boto3.resource('s3')
bucket = s3.Bucket('my-bucket')
for obj in bucket.objects.all():
    key = obj.key
    body = obj.get()['Body'].read()
I believe I need to put my access key somewhere, but I am not sure where. Thank you!
If you have the AWS CLI installed, just run a simple aws configure. Then you will be good to go.
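If running aws configure is not an option (for example in a hosted notebook), boto3 can also take credentials explicitly; a minimal sketch with placeholder keys:

import boto3

# placeholder credentials; prefer environment variables
# (AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY) or ~/.aws/credentials in practice
session = boto3.Session(
    aws_access_key_id='YOUR_ACCESS_KEY_ID',
    aws_secret_access_key='YOUR_SECRET_ACCESS_KEY',
)
s3 = session.resource('s3')
bucket = s3.Bucket('my-bucket')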

Python (Boto/tinys3) Upload file to AWS S3 bucket subdirectory

conn = tinys3.Connection(S3_ACCESS_KEY,S3_SECRET_KEY)
f = open('sample.zip','rb')
conn.upload('sample.zip',f,bucketname)
I can upload the file to my bucket (test) via the code above, but I want to upload it directly to test/images/example. I am open to moving over to boto, but I can't seem to import boto.s3 in my environment.
I have looked through How to upload a file to directory in S3 bucket using boto but none of the tinys3 examples show this.
import boto3
client = boto3.client('s3', region_name='ap-southeast-2')
client.upload_file('/tmp/foo.txt', 'my-bucket', 'test/images/example/foo.txt')
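S3 has no real subdirectories; the "folder" is just a prefix on the object key, so the same idea should carry over to tinys3 by putting the prefix in the key. A sketch based on the connection code from the question (untested; 'test' is assumed to be the bucket name):

import tinys3

conn = tinys3.Connection(S3_ACCESS_KEY, S3_SECRET_KEY)
with open('sample.zip', 'rb') as f:
    # the key prefix acts as the subdirectory
    conn.upload('images/example/sample.zip', f, 'test')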
The following worked for me
from boto3.s3.transfer import S3Transfer
from boto3 import client

client_obj = client('s3',
                    aws_access_key_id='my_aws_access_key_id',
                    aws_secret_access_key='my_aws_secret_access_key')
transfer = S3Transfer(client_obj)
transfer.upload_file(src_file,
                     'my_s3_bucket_name',
                     dst_file,
                     extra_args={'ContentType': "application/zip"})

AWS S3 download file from Flask

I have created a small app that should download a file from AWS S3.
I can download the data correctly this way:
s3_client = boto3.resource('s3')
req = s3_client.meta.client.download_file(bucket, ob_key, dest)
but if I add this function to a Flask route, it does not work anymore. I get this error:
ClientError: An error occurred (400) when calling the HeadObject operation: Bad Request
I'm not able to figure out why it does not work inside the route. Any idea?
That is related to your AWS region. Pass the region name as an additional parameter.
Try it on your local machine, using
aws s3 cp s3://bucket-name/file.png file.png --region us-east-1
If you are able to download the file using this command, then it should work fine from your API also.
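In boto3, that would mean passing the region when creating the resource; a small sketch (the region name is only an example, use the region your bucket actually lives in):

import boto3

# region name is an example; use the bucket's actual region
s3_client = boto3.resource('s3', region_name='us-east-1')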
The problem was that with Flask I needed to declare s3_client as a global variable instead of creating it inside the function.
Now it works perfectly!
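For reference, a minimal sketch of that arrangement, with the client created once at module level; the route, bucket, key and destination path are placeholders:

import boto3
from flask import Flask

app = Flask(__name__)

# create the S3 client once, at module level, instead of inside the route
s3_client = boto3.resource('s3')

@app.route('/download')
def download():
    # bucket, key and destination path are placeholders
    s3_client.meta.client.download_file('my-bucket', 'my/object/key', '/tmp/dest.file')
    return 'done'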

How to write Airflow error logs into an S3 bucket using Python

When my Airflow DAG fails, I get the error logs at the path
"/home/ec2-user/software/airflow/logs/dagtest_dag/trigger_load/2019-10-10T06:01:33.342433+00:00/1.log"
How do I get these logs into an S3 bucket?
Configure this as a cron job.
import boto3

s3 = boto3.client('s3')
# make sure your client is authenticated
with open('path/to/your/logs.log', 'rb') as data:
    s3.upload_fileobj(data, 'bucketname', 'path/to/your/logs/in/s3.log')
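To cover the whole Airflow log tree rather than a single file, the cron script could walk the directory and mirror the relative paths as S3 keys; a sketch (the log root comes from the question, the bucket name and key prefix are placeholders):

import os
import boto3

LOG_ROOT = '/home/ec2-user/software/airflow/logs'  # path from the question
BUCKET = 'bucketname'                              # placeholder bucket name

s3 = boto3.client('s3')

for dirpath, _, filenames in os.walk(LOG_ROOT):
    for name in filenames:
        local_path = os.path.join(dirpath, name)
        # keep the dag/task/timestamp structure as the S3 key
        key = os.path.join('airflow-logs', os.path.relpath(local_path, LOG_ROOT))
        s3.upload_file(local_path, BUCKET, key)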
