I created an AWS CloudWatch Canary through the Import from S3 option, because I had a packaged Python script and directory. That went well, and it works as expected. However, I cannot find out how to update the script package for the existing Canary.
There has to be an easier way than creating a new canary every time I update the zip, any thoughts?
If I upload a new zip to S3, the canary does not automatically pick it up; it continues to run the old code. If I go to edit the canary, there is no upload option, just the parsed-out handler function. And if I make any changes to the handler function, it times out when saving ("Error: Request Too Long").
I still don't see a way to do this in the UI, but I am able to do it from the command line. I now package up my scripts into a zip, upload the zip to S3 (aws s3 cp python.zip s3://path/python.zip), and then run:
aws synthetics update-canary --name test-canary --code '{"S3Bucket": "path", "S3Key":"python.zip", "Handler": "test-canary.handler"}'
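The same update can be done from Python with boto3's synthetics client; a minimal sketch, mirroring the bucket, key, and handler from the CLI call above:
import boto3

synthetics = boto3.client('synthetics')
synthetics.update_canary(
    Name='test-canary',
    Code={
        'S3Bucket': 'path',
        'S3Key': 'python.zip',
        'Handler': 'test-canary.handler',
    },
)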
We are developing a product running on an RPi CM3+ module (Buster OS). It is Wi-Fi connected and runs a Python Flask server.
The code (server.py) is run from a locally cloned git repository. We have a page on the server where the idea is that the user can check whether there are any remote updates by clicking a button on the Flask UI.
The issue is that when we do a git pull, the server.py file size becomes 0 kB, and when we subsequently try to interact with git we get:
error: object file .git/objects/3b/1f1939a417f8bbb15e759c899911cc664b5488 is empty
fatal: loose object 3b1f1939a417f8bbb15e759c899911cc664b5488 (stored in .git/objects/3b/1f1939a417f8bbb15e759c899911cc664b5488) is corrupt
We have to remove and re-clone the repository.
Is this expected behaviour for performing a git pull on running code? What procedure should we be following to achieve a method of remote code update for our product?
I solved this using a custom workflow. I took the advice of @torek and moved away from using git on the device, as it was corrupting itself and my code.
I took a three-step approach:
Step 1. I created an API which allows the application to check whether a new version exists. For this I used AWS Lambda and AWS API Gateway. I host the new code as a .zip file in an AWS S3 bucket.
Step 2. Since my application is in Python, if the API returns true, stating that a new version exists, then I use:
import boto3
s3_client = boto3.client('s3')
# download_file(bucket_name, object_key, local_destination_path)
s3_client.download_file('bucket', '<code_folder_in_bucket>', '<download_dir_on_device>')
Step 3. I created a bash service that runs on start-up (systemd) and checks whether a .zip file exists in a specific directory. If it does, it unzips it and overwrites the application folder with the new code, deleting the .zip file afterwards for a clean start next time. The next service in the start-up sequence then launches my application code. A rough Python sketch of this check is shown below.
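For illustration, a rough Python equivalent of that start-up check (the actual implementation is a bash service; the paths below are placeholders):
import os
import zipfile

UPDATE_ZIP = '/home/pi/updates/app.zip'   # placeholder: where Step 2 saves the new code
APP_DIR = '/home/pi/app'                  # placeholder: directory the Flask server runs from

def apply_pending_update():
    if not os.path.exists(UPDATE_ZIP):
        return  # no pending update, boot normally
    # Unzip over the application folder, overwriting the old code
    with zipfile.ZipFile(UPDATE_ZIP) as archive:
        archive.extractall(APP_DIR)
    # Delete the zip afterwards for a clean start next time
    os.remove(UPDATE_ZIP)

if __name__ == '__main__':
    apply_pending_update()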
This works very reliably.
I have a requirement to copy a file as soon as it lands in the S3 bucket, so I enabled an S3 trigger to a Lambda function. But I am not sure how to copy the content to a Lightsail directory using AWS Lambda. I looked into the documentation but I don't see any solution using Python/Boto3.
I can only see the FTP solution. Is there any other way to dump the file into Lightsail from S3?
Thanks.
Generally, for EC2 instances you use SSM Run Command for that.
In this solution, your Lambda function sends a command (e.g. download some files from S3) to the instance by means of SSM Run Command.
For this to work, you need to install the SSM Agent on your Lightsail instance. The following link describes the installation process:
Configuring SSM Agent on an Amazon Lightsail Instance.
The link also gives an example of sending commands to the instance:
How to Run Shell Commands on a Managed Instance
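Once the agent is installed and the instance is registered as a managed instance, your Lambda can send such a command with boto3. This is only a rough sketch (the managed-instance ID, bucket, key, and target path are placeholders, and it assumes the AWS CLI is available on the instance):
import boto3

ssm = boto3.client('ssm')

def handler(event, context):
    # Triggered by the S3 upload; pull the bucket/key out of the event
    record = event['Records'][0]['s3']
    bucket = record['bucket']['name']
    key = record['object']['key']

    # Tell the Lightsail instance to copy the new object down from S3
    ssm.send_command(
        InstanceIds=['mi-0123456789abcdef0'],   # placeholder managed-instance ID
        DocumentName='AWS-RunShellScript',
        Parameters={
            'commands': [f'aws s3 cp s3://{bucket}/{key} /home/ubuntu/incoming/']
        },
    )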
This question already exists:
How do I provide a zip file for download from an AWS EC2 instance running Ubuntu?
So I am supposed to create an application that uploads files (images) to an AWS EC2 instance running Linux. I used Flask to create the upload logic and a simple HTML webpage to upload the files.
After this I am supposed to generate thumbnails and web-optimized images from these images, store them in a directory, and then provide the "web-optimized" image directory for download. How do I achieve this?
I have asked this previously here. I have pasted the code that I am using on that thread as well.
So my questions are:
Is it a good idea to use pscp to transfer files to the EC2 instance?
Is it a good idea to use paramiko to SSH into the remote instance in order to invoke a shell script that does the image processing (thumbnail and web-optimized image generation using Python)?
How do I get the compressed images via a simple download button on the client/host computer?
Thanks.
I am not entirely sure whether one of your constraints is that you have to use an EC2 instance only. If not, I would recommend something like the following if you want a robust solution.
Upload the files to an AWS S3 bucket; it is much cheaper to hold all the original images there. Let us call this bucket images_original.
Now you can write a Lambda function to monitor the bucket; every time a new upload happens, process the image to create thumbnails etc. for different resolutions, and upload them to their own buckets (images_thumbnails, images_640_480, etc.).
To transform the images, I don't think you should be SSHing into a remote instance to run the transform. It's better to use a native Python library, such as Pillow, inside the Lambda itself.
On the client, you can simply access the respective file in the bucket, e.g. if you want a thumbnail, you can access thumbnail_bucket_url/file_name.jpeg, etc.
Additionally, you can have a service that hands out signed URLs from S3.
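For example, a small sketch of handing out a time-limited signed URL with boto3 (the bucket and key are placeholders):
import boto3

s3 = boto3.client('s3')

# Presigned URL the client can use to fetch the thumbnail directly for one hour
url = s3.generate_presigned_url(
    'get_object',
    Params={'Bucket': 'images_thumbnails', 'Key': 'file_name.jpeg'},
    ExpiresIn=3600,
)
print(url)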
To learn how to monitor an S3 bucket using a Lambda you can refer to THIS.
One important thing to remember while creating the Lambda: all dependent libraries have to be uploaded along with it as a zip file,
e.g. pip install Pillow -t <package_directory>
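To make the idea concrete, here is a minimal sketch of such an S3-triggered Lambda handler (bucket names and the thumbnail size are assumptions, not tested code):
import io

import boto3
from PIL import Image

s3 = boto3.client('s3')
THUMBNAIL_BUCKET = 'images_thumbnails'  # placeholder target bucket

def handler(event, context):
    for record in event['Records']:
        src_bucket = record['s3']['bucket']['name']
        key = record['s3']['object']['key']

        # Read the uploaded image straight from S3 into memory
        body = s3.get_object(Bucket=src_bucket, Key=key)['Body'].read()
        image = Image.open(io.BytesIO(body))

        # Resize in place; 128x128 is an arbitrary thumbnail size
        image.thumbnail((128, 128))
        buffer = io.BytesIO()
        image.save(buffer, format=image.format or 'JPEG')

        # Write the thumbnail to the thumbnail bucket under the same key
        s3.put_object(Bucket=THUMBNAIL_BUCKET, Key=key, Body=buffer.getvalue())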
Currently, I know of two ways to download a file from a bucket to your computer
1) Manually go to the bucket and click Download
2) Use gsutil
Is there a way to do this programmatically in a Google Cloud Function? You can't execute gsutil in a Python script. I also tried this:
with open("C:\", "wb") as file_obj:
blob.download_to_file(file_obj)
but I believe that looks for a directory on Google Cloud. Please help!
The code you are using is for downloading files to the local machine, which is covered in this document; however, as you mention, it would look for that local path on the machine executing the Cloud Function.
I would suggest creating a cron job that downloads the files to your local machine through gsutil; doing this through a Cloud Function is not going to be possible.
Hope you find this useful.
You can't directly achieve what you want. The Cloud Function would have to be able to reach your local computer to copy the file onto it.
The common way to send a file to a computer is the FTP protocol. So install an FTP server on your computer, then set up your function to read from your bucket and send the file to your FTP server (you have to get your public IP and be sure that the firewall rules/routers are configured for this, ...). It's not the easiest way.
gsutil with the rsync command works perfectly. Use a scheduled task on Windows if you want to run it periodically.
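If you would rather stay in Python than call gsutil, roughly the same thing can be done with the google-cloud-storage client, run on your own machine (the bucket name, prefix, and destination folder are placeholders):
import os

from google.cloud import storage

client = storage.Client()

destination = r'C:\downloads'  # local folder on the machine running the script
for blob in client.list_blobs('my-bucket', prefix='exports/'):
    if blob.name.endswith('/'):
        continue  # skip folder placeholders
    target = os.path.join(destination, os.path.basename(blob.name))
    blob.download_to_filename(target)  # writes to your local disk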
I'm trying to download a dataset from a website. However, all the files I want to download add up to about 100 GB, which I don't want to download to my local machine and then upload to S3. Is there a way to download directly to an S3 bucket? Or do you have to use EC2, and if so, could somebody give brief instructions on how to do this? Thanks
S3's put_object() method supports a Body parameter that takes bytes (or a file-like object):
Python example:
import boto3

client = boto3.client('s3')
response = client.put_object(
    Body=b'bytes',   # raw bytes, or pass an open file-like object instead
    Bucket='string',
    Key='string',
)
So if you download the file, in Python you'd use the requests.get() method (or in .NET either HttpWebRequest or WebClient), and then upload the content as a byte array, so you never need to save it locally. It can all be done in memory.
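In Python that could look roughly like the sketch below (the URL, bucket, and key are placeholders); upload_fileobj streams the HTTP body into S3 in chunks, so nothing is written to disk:
import boto3
import requests

s3 = boto3.client('s3')

url = 'https://example.com/large-dataset-file'  # placeholder source URL
with requests.get(url, stream=True) as response:
    response.raise_for_status()
    response.raw.decode_content = True  # transparently handle gzip/deflate
    # Stream the response body directly into the S3 object
    s3.upload_fileobj(response.raw, 'my-bucket', 'datasets/large-dataset-file')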
Or do you have to use ec2
An EC2 instance is just a VM in the cloud; you can do this task (downloading 100 GB to S3) programmatically from your desktop PC/laptop. Simply open a command window or a terminal and type:
aws configure
Put in an IAM user's credentials and use the AWS CLI, or use an AWS SDK like the Python example above. You can give the S3 bucket a policy document that allows access to that IAM user. This will pull everything down through your local machine.
If you want to run this on an EC2 instance and avoid pulling everything through your local PC, modify the role assigned to the EC2 instance and give it Put privileges on S3. This will be the easiest and most secure option. If you use the in-memory, bytes approach, it will still download all the data, but it won't save it to disk.