Boto3 - Copying File from S3 to Lightsail - python

I have a requirement to copy a file as soon as it lands in the S3 bucket, so I enabled an S3 trigger on a Lambda function. But I am not sure how to copy the content to a Lightsail directory using AWS Lambda. I looked into the documentation, but I don't see any solution using Python - Boto3.
I can only see the FTP solution. Is there any other way to dump the file into Lightsail from S3?
Thanks.

Generally, for EC2 instances you would use SSM Run Command for that.
In this solution, your Lambda function sends a command (e.g. download some files from S3) to the instance by means of SSM Run Command.
For this to work, you need to install the SSM Agent on your Lightsail instance. The following link describes the installation process:
Configuring SSM Agent on an Amazon Lightsail Instance.
The link also gives an example of sending commands to the instance:
How to Run Shell Commands on a Managed Instance
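Once the agent is registered, a rough sketch of the Lambda side with Boto3 might look like this (the managed-instance ID and destination path are placeholders, and it assumes the AWS CLI with S3 read access is available on the instance):

import boto3

ssm = boto3.client("ssm")

def lambda_handler(event, context):
    # Pull the bucket and key of the object that just landed from the S3 event
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = record["s3"]["object"]["key"]

    # Ask SSM Run Command to copy the object down onto the Lightsail instance.
    # "mi-0123456789abcdef0" is a placeholder managed-instance ID.
    ssm.send_command(
        InstanceIds=["mi-0123456789abcdef0"],
        DocumentName="AWS-RunShellScript",
        Parameters={
            "commands": ["aws s3 cp s3://{}/{} /home/ubuntu/incoming/".format(bucket, key)]
        },
    )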

Related

How to update an AWS CloudWatch Canary zip file

I created an AWS CloudWatch Canary through the Import from S3 option, because I had a packaged Python script and directory. That went well, and it works as expected. However, I cannot find how to update the script package for the existing canary.
There has to be an easier way than creating a new canary every time I update the zip. Any thoughts?
If I upload a new zip to S3, the canary does not automatically read it in again; it continues to run the old code. If I go to edit, there is no Upload option, just the parsed-out handler function. And if I make any changes to the handler function, it times out when saving ("Error: Request Too Long").
I still don't see a way to do this in the UI, but I am able to do it from the command line. I now package up my scripts into a zip, upload the new zip to S3 (aws s3 cp python.zip s3://path/python.zip), and then run:
aws synthetics update-canary --name test-canary --code '{"S3Bucket": "path", "S3Key": "python.zip", "Handler": "test-canary.handler"}'
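If you prefer to drive the same update from Python rather than the CLI, a minimal Boto3 sketch (reusing the placeholder bucket, key, and handler names from the command above) could be:

import boto3

synthetics = boto3.client("synthetics")

# Point the existing canary at the freshly uploaded zip in S3
synthetics.update_canary(
    Name="test-canary",
    Code={
        "S3Bucket": "path",
        "S3Key": "python.zip",
        "Handler": "test-canary.handler",
    },
)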

How do I write AWS CLI commands in Python

I am new to AWS as well as Python.
In the AWS CLI, the command below works perfectly fine:
aws cloudformation package --template-file sam.yaml --output-template-file output-sam.yaml --s3-bucket <<bucket_name>>
The goal is to create an automated Python script which will run the above command. I tried to google it, but none of the solutions works for me.
Below is the solution I have tried, but it is not able to upload the artifacts to the S3 bucket.
test.py file:
import subprocess
command= ["aws","cloudformation","package","--template-file","sam.yaml","--output-template-file","output-sam.yaml","--s3-bucket","<<bucket_name>>"]
print(subprocess.check_output(command, stderr=subprocess.STDOUT))
It can easily be done using the os library. The simplest way of doing it is shown below.
import os
os.system("aws cloudformation package --template-file sam.yaml --output-template-file output-sam.yaml --s3-bucket <<bucket_name>>")
However, subprocess can be used for slightly more complicated tasks.
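For example, a hedged sketch using subprocess.run, reusing the same command as above, which captures the CLI's output and raises if the command fails:

import subprocess

command = "aws cloudformation package --template-file sam.yaml --output-template-file output-sam.yaml --s3-bucket <<bucket_name>>"
# check=True raises CalledProcessError when the CLI exits with a non-zero status
result = subprocess.run(command.split(), capture_output=True, text=True, check=True)
print(result.stdout)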
You can also check out the boto3 library for such tasks. Boto3 is the AWS SDK for Python.
You can check how this AWS CLI command is implemented, since the CLI itself is written in Python. Basically, aws cloudformation package uploads the local artifacts referenced by the template to S3, so you can do the same with boto3 as mentioned in the comments.
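As a rough illustration of the Boto3 route, the upload step that aws cloudformation package performs boils down to putting each local artifact into the bucket; the file name and key below are hypothetical:

import boto3

s3 = boto3.client("s3")

# Upload a locally built artifact, much like `aws cloudformation package` does
s3.upload_file("lambda-code.zip", "<<bucket_name>>", "artifacts/lambda-code.zip")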

How to download file from website to S3 bucket without having to download to local machine

I'm trying to download a dataset from a website. However, all the files I want to download add up to about 100 GB, and I don't want to download them to my local machine and then upload them to S3. Is there a way to download directly to an S3 bucket? Or do you have to use EC2, and if so, could somebody give brief instructions on how to do this? Thanks
S3's put_object() method supports a Body parameter that accepts bytes (or a file object):
Python example:
response = client.put_object(
    Body=b'bytes'|file,
    Bucket='string',
    Key='string',
)
So to download the file, in Python you'd use requests.get(), or in .NET you'd use either HttpWebRequest or WebClient, and then upload the content as a byte array so you never need to save it locally. It can all be done in memory.
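A minimal Python sketch of that in-memory approach, assuming a hypothetical dataset URL and bucket name (for a 100 GB dataset you would stream each file rather than hold it all in memory):

import boto3
import requests

s3 = boto3.client("s3")

# Hypothetical source URL and destination bucket/key
url = "https://example.com/dataset/part-001.csv"
resp = requests.get(url, stream=True)
resp.raise_for_status()

# upload_fileobj streams the response body to S3 without writing it to disk
s3.upload_fileobj(resp.raw, "my-bucket", "dataset/part-001.csv")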
Or do you have to use ec2
An EC2 instance is just a VM in the cloud; you can do this task (download 100 GB into S3) programmatically from your desktop PC or laptop. Simply open a command window or a terminal and type:
aws configure
Enter an IAM user's credentials and use the AWS CLI, or use an AWS SDK like the Python example above. You can give the S3 bucket a policy document that allows access to the IAM user. With this approach the data still passes through your local machine.
If you want to run this on an EC2 instance and avoid downloading everything to your local PC, modify the role assigned to the EC2 instance and give it Put privileges on S3. This will be the easiest and most secure option. If you use the in-memory bytes approach, all the data is still downloaded, but it won't be saved to disk.
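If you go the bucket-policy route mentioned above, a hedged sketch of attaching one with Boto3 (the account ID, user name, and bucket name are placeholders) could look like:

import json
import boto3

s3 = boto3.client("s3")

# Allow a specific IAM user to put objects into the bucket (placeholder ARNs)
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::123456789012:user/uploader"},
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::my-bucket/*",
        }
    ],
}

s3.put_bucket_policy(Bucket="my-bucket", Policy=json.dumps(policy))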

Google Cloud Functions - how do I authenticate to AWS S3 bucket?

I am trying to get a Google Cloud Function in Python 3.7 to take a file from Google Cloud Storage and upload it into AWS S3. On the command line, I would authenticate with awscli and then use the gsutil cp command to copy the file across. I have translated this process into Python as:
import subprocess

def GCS_to_s3(arg1, arg2):
    subprocess.call(["aws configure set aws_access_key_id AKIA********"], shell=True)
    subprocess.call(["aws configure set aws_secret_access_key EgkjntEFFDVej"], shell=True)
    subprocess.call(["gsutil cp gs://test_bucket/gcs_test.csv s3://mybucket"], shell=True)
The requirements.txt is:
awscli
google-cloud-storage
This function deploys and runs successfully, but the output does not show up in the S3 bucket.
What would be the best way of writing such a function?
You'll probably want to use the boto3 Python package instead, since the command-line AWS tools aren't available or installable for Cloud Functions. There's a number of ways to configure credentials as well.
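A rough sketch of that approach, assuming the AWS credentials are provided to the function as environment variables and reusing the bucket and object names from the question as placeholders:

import boto3
from google.cloud import storage

def GCS_to_s3(event, context):
    # Read the object out of Google Cloud Storage
    gcs_client = storage.Client()
    blob = gcs_client.bucket("test_bucket").blob("gcs_test.csv")
    data = blob.download_as_bytes()

    # Boto3 picks up AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY from the environment
    s3 = boto3.client("s3")
    s3.put_object(Bucket="mybucket", Key="gcs_test.csv", Body=data)

The requirements.txt would then list boto3 and google-cloud-storage instead of awscli.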

Uploading to S3 using Python Requests

I'd like to upload XML files directly to S3 without the use of modules like boto, boto3, or tinys3.
So far I have written:
import requests

url = "https://my-test-s3.s3.amazonaws.com"
with open(xml_file, 'rb') as data:
    requests.put(url, data=data)
and I've gone ahead and set the AllowedOrigin on my S3 bucket to accept my server's address.
This does not error when running; however, it also does not seem to be uploading anything.
Any help would be appreciated. I'd like to (a) get the thing to upload, and (b) figure out how to apply the AWSAccessKey and AWSSecretAccessKey to the request.
If you want to upload XML files directly to S3 without the use of modules like boto, boto3, or tinys3, I would recommend using awscli:
pip install awscli
aws configure # enter your AWSAccessKey and AWSSecretAccessKey credentials
The AWSAccessKey and AWSSecretAccessKey will be stored permanently inside the ~/.aws folder after you run aws configure.
And then you can upload files using python:
os.system("aws s3 cp {0} s3://your_bucket_name/{1}".format(file_path, file_name))
Docs are here.
You need to install awscli following this documentation.
Then, in a command-line shell, execute aws configure and follow the instructions.
To upload a file, it's much easier using boto3:
import boto3
s3 = boto3.resource('s3')
s3.meta.client.upload_file(xml_file, 'yourbucket', 'yours3filepath')
Alternatively, you can use the aws s3 cp command combined with Python's subprocess module:
subprocess.call(["aws", "s3", "cp", xml_file, "yours3destination"])
