creating a CSV file and then attaching to an email using boto3 - python

I have a report I created. It basically pulls data, manipulates it, then sends the report to an S3 bucket. What I would like to know is how I can pull that CSV from the S3 bucket and email it out. I send it to S3 for long-term retention initially.
other code
..
..
..
copy_source = {'Bucket': target, 'Key': 'mycsv.csv' }
s3client.copy_object(CopySource = copy_source, Bucket = target, Key = dated_file )
s3client.delete_object(Bucket = target, Key = 'generic.csv')
I would like to attach the CSV located in the S3 bucket to the email using boto3, but something goes wrong. Is it possible?
Let's say target = s3://mys3bucket
UPDATE: I have found a solution using the boto3 call get_object.
The following will send the email and attach the CSV to it:
import boto3
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
from email.mime.application import MIMEApplication

msg = MIMEMultipart()
msg["To"] = "randal1981@gmail.com"
msg["From"] = "randal1981@gmail.com"

new_body = "The following EC2 servers are up and running"
text_part = MIMEText(new_body, _subtype="html")
msg.attach(text_part)

filename = 'generic.csv'

# Pull the CSV out of S3. Note that Bucket expects the bare bucket name
# ('mys3bucket'), not the s3:// URI.
s3_client = boto3.client('s3', 'us-west-1')
s3_object = s3_client.get_object(Bucket=target, Key=filename)
body = s3_object['Body'].read()

# Attach the CSV to the message
part = MIMEApplication(body, filename)
part.add_header("Content-Disposition", 'attachment', filename=filename)
msg.attach(part)

# Send the raw message through SES
ses_aws_client = boto3.client('ses', 'us-west-1')
ses_aws_client.send_raw_email(RawMessage={"Data": msg.as_bytes()})

I posted in the edit of the original question how I was able to actually send the attachment via email. What I was trying to understand was how to pull the attachment from S3; I did not explain myself very well. I found a solution using the boto3 call get_object, and it worked well with my code. I hope it can help someone else.

Related

How do I directly save images from a url to my aws bucket (python / boto3)? [duplicate]


AWS presigned URLS location constraint is incompatible for the region specific endpoint this request was sent to

I am using a Lambda to create a pre-signed URL to download files that land in an S3 bucket.
The code works and I get a URL, but when trying to access it I get:
af-south-1 location constraint is incompatible for the region-specific endpoint this request was sent to.
Both the bucket and the Lambda are in the same region.
I'm at a loss as to what is actually happening; any ideas or solutions would be greatly appreciated.
My code is below:
import json
import boto3
import boto3.session

def lambda_handler(event, context):
    session = boto3.session.Session(region_name='af-south-1')
    s3 = session.client('s3')
    for record in event['Records']:
        bucket = record['s3']['bucket']['name']
        key = record['s3']['object']['key']
        url = s3.generate_presigned_url(ClientMethod='get_object',
                                        Params={'Bucket': bucket,
                                                'Key': key},
                                        ExpiresIn=400)
        print(url)
Set endpoint_url='https://s3.af-south-1.amazonaws.com' while generating the S3 client:
s3_client = session.client('s3',
                           region_name='af-south-1',
                           endpoint_url='https://s3.af-south-1.amazonaws.com')
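With the endpoint pinned, the rest of the handler is unchanged. A minimal sketch, using placeholder bucket and key names:

import boto3.session

session = boto3.session.Session(region_name='af-south-1')
s3_client = session.client(
    's3',
    region_name='af-south-1',
    endpoint_url='https://s3.af-south-1.amazonaws.com')

# 'bucket-name' and 'key-name' are placeholders
url = s3_client.generate_presigned_url(
    ClientMethod='get_object',
    Params={'Bucket': 'bucket-name', 'Key': 'key-name'},
    ExpiresIn=400)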
Could you please try using the boto3 client directly rather than via the session, and generate the pre-signed URL:
import boto3

# Get the service client.
s3 = boto3.client('s3', region_name='af-south-1')

# Generate the URL to get 'key-name' from 'bucket-name'
url = s3.generate_presigned_url(
    ClientMethod='get_object',
    Params={
        'Bucket': 'bucket-name',
        'Key': 'key-name'
    }
)
You could also have a look at these 1 & 2, which deal with the same issue.
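The signature version can also matter here: regions launched more recently, such as af-south-1, only accept Signature Version 4, and presigned URLs signed with an older version fail with region errors. A sketch of pinning it via botocore's Config (an alternative not shown in the answers above):

import boto3
from botocore.config import Config

# Force SigV4 signing; af-south-1 rejects older signature versions.
s3 = boto3.client(
    's3',
    region_name='af-south-1',
    config=Config(signature_version='s3v4'))

url = s3.generate_presigned_url(
    ClientMethod='get_object',
    Params={'Bucket': 'bucket-name', 'Key': 'key-name'},
    ExpiresIn=400)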

How attach a dataframe as excel in email and send that attachment from python?

I have a file which is generated and stored inside an S3 bucket in AWS.
I want to send this S3 file as an Excel attachment in an email sent from Python. How can I do it?
I was able to send the email without the attachment successfully.
My code:
import os
import boto3
import pandas as pd
from sagemaker import get_execution_role

role = get_execution_role()
bucket = 'cotydata'
data_key = 'modeloutput' + '.csv'
data_location = 's3://{}/{}'.format(bucket, data_key)
output_key = 'UI_' + input_date
output_bucket = 'model-output-ui'
# Insert weather api key here
api_key = 'key'

# read modeloutput and prepare dataframe
modeloutput = pd.read_csv(data_location)
df = modeloutput.to_excel("Output.xls")  # note: to_excel() returns None, so 'filename = df' below holds no path

# import necessary packages
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
import smtplib

# create message object instance
msg = MIMEMultipart()
password = "password"
msg['From'] = "riskradar@gmail.com"
msg['To'] = "abc@gmail.com"
msg['Subject'] = "Message"

filename = df
f = file(filename)
attachment = MIMEText(f.read(), 'xls')
msg.attach(attachment)

# attach file to message body
server = smtplib.SMTP('smtp.gmail.com:587')
server.starttls()
# Login Credentials for sending the mail
server.login(msg['From'], password)
server.sendmail(msg['From'], msg['To'], msg.as_string())
Any help is appreciated
There is no need to generate a pandas dataframe out of the xls file.
from email.mime.base import MIMEBase
from email.encoders import encode_base64

msg = MIMEMultipart('mixed')
# open() needs a local path; if data_location is an s3:// URI, download the
# object first (e.g. with s3.download_file).
fp = open(data_location, 'rb')
record = MIMEBase('application', 'octet-stream')
record.set_payload(fp.read())
encode_base64(record)
record.add_header('Content-Disposition', 'attachment',
                  filename="what_ever_header_you_want.xls")
msg.attach(record)
And the rest is as you wrote it; that should do the trick.
If you'd like to send a dataframe as an Excel file, you can use the to_excel() method and pass the file's absolute path to the block of code above (or write the workbook into memory, as sketched below).
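For reference, here is a minimal sketch of that in-memory variant, assuming pandas with an Excel writer engine such as openpyxl is installed; the DataFrame contents and attachment filename are illustrative:

from io import BytesIO
from email.mime.base import MIMEBase
from email.encoders import encode_base64
import pandas as pd

df = pd.DataFrame({'col1': [1, 2], 'col2': [3, 4]})  # illustrative data

buffer = BytesIO()
df.to_excel(buffer, index=False)  # write the workbook into memory
buffer.seek(0)

record = MIMEBase('application', 'octet-stream')
record.set_payload(buffer.read())
encode_base64(record)
record.add_header('Content-Disposition', 'attachment',
                  filename='report.xlsx')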

Encode CSV file for Sendgrid's Email API

I am attempting to build a client reporting engine using R/Python and Sendgrid's Email API. I can send emails, but the last thing that I need to do is attach a client's CSV report.
I have attempted a number of approaches, including base64-encoding the file and writing the string to disk from R to Python, but no luck. That said, it seems like I am getting stuck on this error:
TypeError: Object of type 'bytes' is not JSON serializable
My code to get there is:
import base64

with open('raw/test-report.csv', 'rb') as fd:
    b64data = base64.b64encode(fd.read())
attachment = Attachment()
attachment.content = b64data
attachment.filename = "your-lead-report.csv"
mail.add_attachment(attachment)
What is confusing is that if I simply replace b64data with the line
attachment.content = 'TG9yZW0gaXBzdW0gZG9sb3Igc2l0IGFtZXQsIGNvbnNlY3RldHVyIGFkaXBpc2NpbmcgZWxpdC4gQ3JhcyBwdW12'
an email is sent with an attachment.
For reference, I have been using:
https://github.com/sendgrid/sendgrid-python
and
kitchen sink tutorial
and haven't had any issues until this final step in my project.
Any help will be greatly appreciated. It's worth noting that my strength is in R, but I usually can hack things together in python with the help of the internets.
You need to convert b64data to a regular string before assigning it to attachment.content. Sendgrid builds a JSON payload for the request, so it does not know how to serialize the value assigned to attachment.content, which in this case is a bytestring:
str(b64data, 'utf-8')
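Applied to the snippet from the question, the fix is a single decode (equivalent to str(b64data, 'utf-8')); Attachment and mail are the objects already set up in the question:

import base64

with open('raw/test-report.csv', 'rb') as fd:
    b64data = base64.b64encode(fd.read())

attachment = Attachment()
attachment.content = b64data.decode('utf-8')  # plain str, now JSON-serializable
attachment.filename = "your-lead-report.csv"
mail.add_attachment(attachment)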
References:
https://github.com/sendgrid/sendgrid-python/blob/master/USAGE.md#post-mailsend
Here's how to attach an in memory CSV to a sendgrid email:
import base64
from sendgrid import SendGridAPIClient
from sendgrid.helpers.mail import (
    Mail, Attachment, FileContent, FileName,
    FileType, Disposition)
from io import BytesIO
import pandas as pd

def send_email_with_csv(from_address, to_address, subject, content, dataframe, filename, filetype):
    message = Mail(
        from_email=from_address,
        to_emails=to_address,
        subject=subject,
        html_content=content)

    # Create buffered csv (writing to a binary buffer needs a recent pandas;
    # use StringIO on older versions)
    buffer = BytesIO()
    dataframe.to_csv(buffer)
    buffer.seek(0)
    data = buffer.read()
    encoded = base64.b64encode(data).decode()

    attachment = Attachment()
    attachment.file_content = FileContent(encoded)
    attachment.file_type = FileType(filetype)
    attachment.file_name = FileName(filename)
    attachment.disposition = Disposition('attachment')
    message.attachment = attachment

    try:
        sendgrid_client = SendGridAPIClient('API_KEY')
        response = sendgrid_client.send(message)
        print(response.status_code)
        print(response.body)
        print(response.headers)
    except Exception as e:
        print(e)

df = pd.DataFrame(data={'col1': [1, 2], 'col2': [3, 4]})
send_email_with_csv(from_address='sender@sender.com',
                    to_address='example@recipient.com',
                    subject='Sending with Twilio SendGrid is Fun',
                    content='<strong>and easy to do anywhere, even with Python</strong>',
                    dataframe=df,
                    filename='spreadsheet.csv',
                    filetype='text/csv')

Upload image available at public URL to S3 using boto

I'm working in a Python web environment and I can simply upload a file from the filesystem to S3 using boto's key.set_contents_from_filename(path/to/file). However, I'd like to upload an image that is already on the web (say https://pbs.twimg.com/media/A9h_htACIAAaCf6.jpg:large).
Should I somehow download the image to the filesystem, and then upload it to S3 using boto as usual, then delete the image?
What would be ideal is if there is a way to get boto's key.set_contents_from_file or some other command that would accept a URL and nicely stream the image to S3 without having to explicitly download a file copy to my server.
def upload(url):
    try:
        conn = boto.connect_s3(settings.AWS_ACCESS_KEY_ID, settings.AWS_SECRET_ACCESS_KEY)
        bucket_name = settings.AWS_STORAGE_BUCKET_NAME
        bucket = conn.get_bucket(bucket_name)
        k = Key(bucket)
        k.key = "test"
        k.set_contents_from_file(url)
        k.make_public()
        return "Success?"
    except Exception, e:
        return e
Using set_contents_from_file, as above, I get a "string object has no attribute 'tell'" error, because the method expects a file-like object rather than a URL string. Using set_contents_from_filename with the URL, I get a No such file or directory error. The boto storage documentation leaves off at uploading local files and does not mention uploading files stored remotely.
Here is how I did it with requests; the key is to set stream=True when initially making the request, and to upload to S3 using the upload_fileobj() method:
import requests
import boto3
url = "https://upload.wikimedia.org/wikipedia/en/a/a9/Example.jpg"
r = requests.get(url, stream=True)
session = boto3.Session()
s3 = session.resource('s3')
bucket_name = 'your-bucket-name'
key = 'your-key-name' # key is the name of file on your bucket
bucket = s3.Bucket(bucket_name)
bucket.upload_fileobj(r.raw, key)
OK, from @garnaat, it doesn't sound like S3 currently allows uploads by URL. I managed to upload remote images to S3 by reading them into memory only. This works:
def upload(url):
    try:
        conn = boto.connect_s3(settings.AWS_ACCESS_KEY_ID, settings.AWS_SECRET_ACCESS_KEY)
        bucket_name = settings.AWS_STORAGE_BUCKET_NAME
        bucket = conn.get_bucket(bucket_name)
        k = Key(bucket)
        k.key = url.split('/')[::-1][0]  # In my situation, ids at the end are unique
        file_object = urllib2.urlopen(url)  # 'Like' a file object
        fp = StringIO.StringIO(file_object.read())  # Wrap object
        k.set_contents_from_file(fp)
        return "Success"
    except Exception, e:
        return e
Also thanks to How can I create a GzipFile instance from the “file-like object” that urllib.urlopen() returns?
For a 2017-relevant answer to this question which uses the official 'boto3' package (instead of the old 'boto' package from the original answer):
Python 3.5
If you're on a clean Python install, pip install both packages first:
pip install boto3
pip install requests
import boto3
import requests
# Uses the creds in ~/.aws/credentials
s3 = boto3.resource('s3')
bucket_name_to_upload_image_to = 'photos'
s3_image_filename = 'test_s3_image.png'
internet_image_url = 'https://docs.python.org/3.7/_static/py.png'
# Do this as a quick and easy check to make sure your S3 access is OK
good_to_go = False
for bucket in s3.buckets.all():
    if bucket.name == bucket_name_to_upload_image_to:
        print('Good to go. Found the bucket to upload the image into.')
        good_to_go = True

if not good_to_go:
    print('Not seeing your s3 bucket, might want to double check permissions in IAM')
# Given an Internet-accessible URL, download the image and upload it to S3,
# without needing to persist the image to disk locally
req_for_image = requests.get(internet_image_url, stream=True)
file_object_from_req = req_for_image.raw
req_data = file_object_from_req.read()
# Do the actual upload to s3
s3.Bucket(bucket_name_to_upload_image_to).put_object(Key=s3_image_filename, Body=req_data)
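A small design note: put_object sends the body in a single request, so the bytes are read fully into memory first, while upload_fileobj uses boto3's managed transfer and can stream from a file-like object instead. A sketch of the streaming variant, reusing the variable names defined above:

# Streaming variant of the same upload; re-requests because the raw stream
# above was already consumed by read().
req_for_image = requests.get(internet_image_url, stream=True)
s3.Bucket(bucket_name_to_upload_image_to).upload_fileobj(
    req_for_image.raw, s3_image_filename)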
Unfortunately, there really isn't any way to do this. At least not at the moment. We could add a method to boto, say set_contents_from_url, but that method would still have to download the file to the local machine and then upload it. It might still be a convenient method but it wouldn't save you anything.
In order to do what you really want to do, we would need to have some capability on the S3 service itself that would allow us to pass it the URL and have it store the URL to a bucket for us. That sounds like a pretty useful feature. You might want to post that to the S3 forums.
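For illustration only, such a set_contents_from_url helper would just be a download-then-upload wrapper; a hypothetical sketch (this method does not exist in boto):

import urllib2
import StringIO

def set_contents_from_url(key, url):
    # Hypothetical helper -- NOT part of boto. It still has to pull the
    # bytes down locally before uploading them.
    data = urllib2.urlopen(url).read()
    key.set_contents_from_file(StringIO.StringIO(data))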
A simple three-line implementation that works in a Lambda out of the box:
import boto3
import requests

s3_object = boto3.resource('s3').Object(bucket_name, object_key)

with requests.get(url, stream=True) as r:
    s3_object.put(Body=r.content)
The source for the .get part comes straight from the requests documentation. Note that r.content still loads the whole body into memory; stream=True only defers the download until the content is accessed.
import boto3
import requests
from io import BytesIO

def send_image_to_s3(url, name):
    print("sending image")
    bucket_name = 'XXX'
    AWS_SECRET_ACCESS_KEY = "XXX"
    AWS_ACCESS_KEY_ID = "XXX"
    s3 = boto3.client('s3', aws_access_key_id=AWS_ACCESS_KEY_ID,
                      aws_secret_access_key=AWS_SECRET_ACCESS_KEY)
    response = requests.get(url)
    img = BytesIO(response.content)
    file_name = f'path/{name}'
    print('sending {}'.format(file_name))
    r = s3.upload_fileobj(img, bucket_name, file_name)
    s3_path = 'path/' + name
    return s3_path
I have tried the following with boto3 and it works for me:
import boto3
import contextlib
import requests
from io import BytesIO

s3 = boto3.resource('s3')
s3Client = boto3.client('s3')

for bucket in s3.buckets.all():
    print(bucket.name)

url = "#resource url"

with contextlib.closing(requests.get(url, stream=True, verify=False)) as response:
    # Set up file stream from response content.
    fp = BytesIO(response.content)
    # Upload data to S3
    s3Client.upload_fileobj(fp, 'aws-books', 'reviews_Electronics_5.json.gz')
Using the boto3 upload_fileobj method, you can stream a file to an S3 bucket, without saving to disk. Here is my function:
import boto3
import StringIO
import contextlib
import requests

def upload(url):
    # Get the service client
    s3 = boto3.client('s3')
    # Remember to set stream=True.
    with contextlib.closing(requests.get(url, stream=True, verify=False)) as response:
        # Set up file stream from response content.
        fp = StringIO.StringIO(response.content)
        # Upload data to S3
        s3.upload_fileobj(fp, 'my-bucket', 'my-dir/' + url.split('/')[-1])
S3 doesn't seem to support remote upload as of now. You may use the class below for uploading an image to S3. The upload method first downloads the image and keeps it in memory until it gets uploaded. To be able to connect to S3, install the AWS CLI with pip install awscli, then enter your credentials with aws configure:
import urllib3
import uuid
import boto3
from pathlib import Path
from io import BytesIO
from errors import custom_exceptions as cex

BUCKET_NAME = "xxx.yyy.zzz"
POSTERS_BASE_PATH = "assets/wallcontent"
CLOUDFRONT_BASE_URL = "https://xxx.cloudfront.net/"

class S3(object):
    def __init__(self):
        self.client = boto3.client('s3')
        self.bucket_name = BUCKET_NAME
        self.posters_base_path = POSTERS_BASE_PATH

    def __download_image(self, url):
        manager = urllib3.PoolManager()
        try:
            res = manager.request('GET', url)
        except Exception:
            print("Could not download the image from URL: ", url)
            raise cex.ImageDownloadFailed
        return BytesIO(res.data)  # any file-like object that implements read()

    def upload_image(self, url):
        try:
            image_file = self.__download_image(url)
        except cex.ImageDownloadFailed:
            raise cex.ImageUploadFailed

        extension = Path(url).suffix
        id = uuid.uuid1().hex + extension
        final_path = self.posters_base_path + "/" + id

        try:
            self.client.upload_fileobj(image_file,
                                       self.bucket_name,
                                       final_path)
        except Exception:
            print("Image Upload Error for URL: ", url)
            raise cex.ImageUploadFailed

        return CLOUDFRONT_BASE_URL + id
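Usage is then a one-liner (assuming the errors.custom_exceptions module from the snippet exists; the image URL is illustrative):

s3 = S3()
cdn_url = s3.upload_image('https://example.com/poster.jpg')  # returns the CloudFront URL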
import boto
from boto.s3.key import Key
from boto.s3.connection import OrdinaryCallingFormat
from urllib import urlopen

def upload_images_s3(img_url):
    try:
        connection = boto.connect_s3('access_key', 'secret_key', calling_format=OrdinaryCallingFormat())
        bucket = connection.get_bucket('boto-demo-1519388451')
        file_obj = Key(bucket)
        file_obj.key = img_url.split('/')[::-1][0]
        fp = urlopen(img_url)
        result = file_obj.set_contents_from_string(fp.read())
    except Exception, e:
        return e
