My question is inspired by a previous SO question about this topic: uploading and saving DataFrames as CSV files in Amazon Web Services (AWS) S3. Using Python 3, I would like to use s3.upload_fileobj – multi-part uploads – to make the data transfer to S3 faster. When I run the code in the accepted answer, I receive the error: "TypeError: a bytes-like object is required, not 'str'".
The answer has recently been upvoted several times, so I think there must be a way to run this code without error in Python 3.
Please find the code below. For ease, let's use a simple DataFrame; in reality this DataFrame is much bigger (around 500 MB).
import pandas as pd
import io
import boto3
df = pd.DataFrame({'A':[1,2,3], 'B':[6,7,8]})
The code is the following; for convenience, I turned it into a function:
def upload_file(dataframe, bucket, key):
    """dat=DataFrame, bucket=bucket name in AWS S3, key=key name in AWS S3"""
    s3 = boto3.client('s3')
    csv_buffer = io.BytesIO()
    dataframe.to_csv(csv_buffer, compression='gzip')
    s3.upload_fileobj(csv_buffer, bucket, key)

upload_file(df, 'your-bucket', 'your-key')
Thank you very much for your advice!
Going off this reference, it seems you'll need to wrap a gzip.GzipFile object around your BytesIO which will then perform the compression for you.
import io
import gzip

buffer = io.BytesIO()
with gzip.GzipFile(fileobj=buffer, mode="wb") as f:
    f.write(df.to_csv().encode())

buffer.seek(0)
s3.upload_fileobj(buffer, bucket, key)
Minimal Verifiable Example
import io
import gzip
import zlib

import pandas as pd

# Encode
df = pd.DataFrame({'A':[1,2,3], 'B':[6,7,8]})
buffer = io.BytesIO()
with gzip.GzipFile(fileobj=buffer, mode="wb") as f:
    f.write(df.to_csv().encode())

buffer.getvalue()
# b'\x1f\x8b\x08\x00\xf0\x0b\x11]\x02\xff\xd3q\xd4q\xe22\xd01\xd41\xe32\xd41\xd21\xe72\xd21\xd6\xb1\xe0\x02\x00Td\xc2\xf5\x17\x00\x00\x00'

# Decode
print(zlib.decompress(buffer.getvalue(), 16 + zlib.MAX_WBITS).decode())
# ,A,B
# 0,1,6
# 1,2,7
# 2,3,8
The only thing you need is a TextIOWrapper, as to_csv expects a text (str) stream while upload_fileobj expects a bytes stream:
def upload_file(dataframe, bucket, key):
    """dat=DataFrame, bucket=bucket name in AWS S3, key=key name in AWS S3"""
    s3 = boto3.client('s3')
    csv_buffer = io.BytesIO()
    w = io.TextIOWrapper(csv_buffer)
    dataframe.to_csv(w, compression='gzip')
    w.seek(0)
    s3.upload_fileobj(csv_buffer, bucket, key)
And the code uploads fine
$ cat test.csv
,A,B
0,1,6
1,2,7
2,3,8
You could try something like this.
import pandas as pd
import io
import boto3

df = pd.DataFrame({'A':[1,2,3], 'B':[6,7,8]})

def upload_file(dataframe, bucket, key):
    """dat=DataFrame, bucket=bucket name in AWS S3, key=key name in AWS S3"""
    s3 = boto3.client('s3')
    csv_buffer = io.StringIO()
    dataframe.to_csv(csv_buffer, compression='gzip')
    csv_buffer.seek(0)
    s3.upload_fileobj(csv_buffer, bucket, key)

upload_file(df, 'your-bucket', 'your-key')
Related
I'm trying to write a pandas dataframe as a pickle file into an S3 bucket in AWS. I know that I can write the dataframe new_df as a CSV to an S3 bucket as follows:
import boto3
from io import StringIO

bucket='mybucket'
key='path'
csv_buffer = StringIO()
s3_resource = boto3.resource('s3')
new_df.to_csv(csv_buffer, index=False)
s3_resource.Object(bucket, key).put(Body=csv_buffer.getvalue())
I've tried using the same code as above with to_pickle() but with no success.
Further to your answer, you don't need to convert to CSV.
The pickle.dumps method returns a bytes object; see here: https://docs.python.org/3/library/pickle.html
import boto3
import pickle
bucket='your_bucket_name'
key='your_pickle_filename.pkl'
pickle_byte_obj = pickle.dumps([var1, var2, ..., varn])
s3_resource = boto3.resource('s3')
s3_resource.Object(bucket,key).put(Body=pickle_byte_obj)
I've found the solution: you need to use BytesIO for the buffer with pickle files instead of StringIO (which is for CSV files).
import io
import boto3
pickle_buffer = io.BytesIO()
s3_resource = boto3.resource('s3')
new_df.to_pickle(pickle_buffer)
s3_resource.Object(bucket, key).put(Body=pickle_buffer.getvalue())
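To read the pickle back, the reverse works with get() and pickle.loads; a minimal sketch reusing the same placeholder bucket and key as the question:
import pickle

import boto3

bucket = 'mybucket'
key = 'path'

s3_resource = boto3.resource('s3')

# Fetch the raw pickle bytes from S3 and unpickle them back into a DataFrame
body = s3_resource.Object(bucket, key).get()['Body'].read()
new_df = pickle.loads(body)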
This worked for me with pandas 0.23.4 and boto3 1.7.80:
import boto3

bucket='your_bucket_name'
key='your_pickle_filename.pkl'
s3_resource = boto3.resource('s3')
new_df.to_pickle(key)
s3_resource.Object(bucket, key).put(Body=open(key, 'rb'))
This solution (using s3fs) worked perfectly and elegantly for my team:
import s3fs
from pickle import dump
fs = s3fs.S3FileSystem(anon=False)
bucket = 'bucket1'
key = 'your_pickle_filename.pkl'
dump(data, fs.open(f's3://{bucket}/{key}', 'wb'))
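To load it back later, a minimal sketch under the same s3fs setup (same placeholder bucket and key as above):
from pickle import load

import s3fs

fs = s3fs.S3FileSystem(anon=False)
bucket = 'bucket1'
key = 'your_pickle_filename.pkl'

# Open the S3 object in binary mode and unpickle it
with fs.open(f's3://{bucket}/{key}', 'rb') as f:
    data = load(f)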
This adds some clarification to a previous answer:
import pandas as pd
import boto3

# make df
df = pd.DataFrame({'col1': [1, 2, 3]})
# bucket name
str_bucket = 'bucket_name'
# filename
str_key_file = 'df.pkl'
# bucket path
str_key_bucket = f'dir_1/dir2/{str_key_file}'
# write df to local pkl file
df.to_pickle(str_key_file)
# put object into s3
boto3.resource('s3').Object(str_bucket, str_key_bucket).put(Body=open(str_key_file, 'rb'))
From the just-released book 'Time Series Analysis with Python' by Tarek Atwan, I learned this method:
import pandas as pd

df = pd.DataFrame(...)
df.to_pickle('s3://mybucket/pklfile.bz2',
             storage_options={
                 'key': AWS_ACCESS_KEY,
                 'secret': AWS_SECRET_KEY
             })
which I believe is more pythonic.
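Reading it back works the same way; a minimal sketch assuming pandas >= 1.2 (where storage_options was added to the pickle readers), with the same placeholder credentials as above:
import pandas as pd

# AWS_ACCESS_KEY / AWS_SECRET_KEY are the same placeholders as in the snippet above
df = pd.read_pickle('s3://mybucket/pklfile.bz2',
                    storage_options={
                        'key': AWS_ACCESS_KEY,
                        'secret': AWS_SECRET_KEY
                    })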
I've found the best solution - just upgrade pandas and also install s3fs:
pip install s3fs==2022.8.2
pip install pandas==1.1.5

bucket, key = 'mybucket', 'path'
df.to_pickle(f"s3://{bucket}/{key}.pkl.gz", compression='gzip')
I'm trying to load a large CSV (~5 GB) into pandas from an S3 bucket.
The following is the code I tried for a small CSV of 1.4 kB:
import boto3
import pandas as pd
from io import StringIO

client = boto3.client('s3')
obj = client.get_object(Bucket='grocery', Key='stores.csv')
body = obj['Body']
csv_string = body.read().decode('utf-8')
df = pd.read_csv(StringIO(csv_string))
This works well for a small CSV, but my requirement of loading a 5 GB CSV into a pandas DataFrame cannot be met this way (probably due to memory constraints when the whole CSV is loaded via StringIO).
I also tried the code below
s3 = boto3.client('s3')
obj = s3.get_object(Bucket='bucket', Key='key')
df = pd.read_csv(obj['Body'])
but this gives the error below.
ValueError: Invalid file path or buffer object type: <class 'botocore.response.StreamingBody'>
Any help to resolve this error is much appreciated.
I know this is quite late but here is an answer:
import boto3
import pandas as pd

bucket='sagemaker-dileepa' # Or whatever you called your bucket
data_key = 'data/stores.csv' # Where the file is within your bucket
data_location = 's3://{}/{}'.format(bucket, data_key)
df = pd.read_csv(data_location)
I found that copying the data "locally" to the notebook's file system first makes reading the file much faster.
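A minimal sketch of that copy-locally-first approach, using the same placeholder bucket and key; for a very large CSV you could also pass chunksize so pandas streams the local file instead of loading it all at once (process_chunk below is a hypothetical per-chunk handler):
import boto3
import pandas as pd

s3 = boto3.client('s3')

# Copy the object from S3 to local disk first
s3.download_file('sagemaker-dileepa', 'data/stores.csv', '/tmp/stores.csv')

# Read it in one go...
df = pd.read_csv('/tmp/stores.csv')

# ...or stream it in chunks to keep memory bounded
for chunk in pd.read_csv('/tmp/stores.csv', chunksize=100_000):
    process_chunk(chunk)  # hypothetical per-chunk processing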
This code writes JSON to a file in S3.
What I wanted to achieve is: instead of opening the data.json file and writing it to S3 (as sample.json),
how do I pass the JSON directly and write it to a file in S3?
import boto3
s3 = boto3.resource('s3', aws_access_key_id='aws_key', aws_secret_access_key='aws_sec_key')
s3.Object('mybucket', 'sample.json').put(Body=open('data.json', 'rb'))
I'm not sure if I get the question right. You just want to write JSON data to a file using Boto3? The following code writes a Python dictionary to a JSON file.
import json
import boto3

s3 = boto3.resource('s3')
s3object = s3.Object('your-bucket-name', 'your_file.json')

s3object.put(
    Body=(bytes(json.dumps(json_data).encode('UTF-8')))
)
I don't know if anyone is still following this thread, but I was attempting to upload a JSON to S3 using the method above, which didn't quite work for me. Boto and S3 might have changed since 2018, but this achieved the results for me:
import json
import boto3

s3 = boto3.client('s3')
json_object = 'your_json_object here'
s3.put_object(
    Body=json.dumps(json_object),
    Bucket='your_bucket_name',
    Key='your_key_here'
)
Amazon S3 is an object store (a file store, in reality). The primary operations are PUT and GET. You cannot append data to an existing object in S3; you can only replace the entire object.
For a list of available operations you can perform on S3, see this link:
http://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectOps.html
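In practice, "appending" to an S3 object therefore means reading it, modifying it in memory, and putting the whole thing back; a minimal sketch with hypothetical bucket and key names:
import json

import boto3

s3 = boto3.resource('s3')
obj = s3.Object('mybucket', 'sample.json')  # hypothetical bucket/key

# Read the current contents and modify them in memory...
data = json.loads(obj.get()['Body'].read())
data['new_field'] = 'new_value'

# ...then replace the entire object with the updated version
obj.put(Body=json.dumps(data).encode('UTF-8'))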
An alternative to Joseph McCombs' answer can be achieved using s3fs.
import json

from s3fs import S3FileSystem

json_object = {'test': 3.14}
path_to_s3_object = 's3://your-s3-bucket/your_json_filename.json'

s3 = S3FileSystem()
with s3.open(path_to_s3_object, 'w') as file:
    json.dump(json_object, file)
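Reading the JSON back is symmetric, opening the same S3 path in text mode:
import json

from s3fs import S3FileSystem

s3 = S3FileSystem()
path_to_s3_object = 's3://your-s3-bucket/your_json_filename.json'

# Open the S3 object as a text file and parse the JSON
with s3.open(path_to_s3_object, 'r') as file:
    json_object = json.load(file)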
I have a hacky way of achieving this using boto3 (1.4.4), pyarrow (0.4.1) and pandas (0.20.3).
First, I can read a single parquet file locally like this:
import pyarrow.parquet as pq
path = 'parquet/part-r-00000-1e638be4-e31f-498a-a359-47d017a0059c.gz.parquet'
table = pq.read_table(path)
df = table.to_pandas()
I can also read a directory of parquet files locally like this:
import pyarrow.parquet as pq
dataset = pq.ParquetDataset('parquet/')
table = dataset.read()
df = table.to_pandas()
Both work like a charm. Now I want to achieve the same remotely with files stored in a S3 bucket. I was hoping that something like this would work:
dataset = pq.ParquetDataset('s3n://dsn/to/my/bucket')
But it does not:
OSError: Passed non-file path: s3n://dsn/to/my/bucket
After reading pyarrow's documentation thoroughly, this does not seem possible at the moment. So I came out with the following solution:
Reading a single file from S3 and getting a pandas dataframe:
import io
import boto3
import pyarrow.parquet as pq
buffer = io.BytesIO()
s3 = boto3.resource('s3')
s3_object = s3.Object('bucket-name', 'key/to/parquet/file.gz.parquet')
s3_object.download_fileobj(buffer)
table = pq.read_table(buffer)
df = table.to_pandas()
And here is my hacky, not-so-optimized solution to create a pandas dataframe from an S3 folder path:
import io
import boto3
import pandas as pd
import pyarrow.parquet as pq

bucket_name = 'bucket-name'

def download_s3_parquet_file(s3, bucket, key):
    buffer = io.BytesIO()
    s3.Object(bucket, key).download_fileobj(buffer)
    return buffer

client = boto3.client('s3')
s3 = boto3.resource('s3')
objects_dict = client.list_objects_v2(Bucket=bucket_name, Prefix='my/folder/prefix')
s3_keys = [item['Key'] for item in objects_dict['Contents'] if item['Key'].endswith('.parquet')]
buffers = [download_s3_parquet_file(s3, bucket_name, key) for key in s3_keys]
dfs = [pq.read_table(buffer).to_pandas() for buffer in buffers]
df = pd.concat(dfs, ignore_index=True)
Is there a better way to achieve this? Maybe some kind of connector for pandas using pyarrow? I would like to avoid using pyspark, but if there is no other solution, then I would take it.
You should use the s3fs module as proposed by yjk21. However, as a result of calling ParquetDataset you'll get a pyarrow.parquet.ParquetDataset object. To get the pandas DataFrame you'll want to apply .read_pandas().to_pandas() to it:
import pyarrow.parquet as pq
import s3fs
s3 = s3fs.S3FileSystem()
pandas_dataframe = pq.ParquetDataset('s3://your-bucket/', filesystem=s3).read_pandas().to_pandas()
Thanks! Your question actually tells me a lot. This is how I do it now with pandas (0.21.1), which will call pyarrow, and boto3 (1.3.1).
import boto3
import io
import pandas as pd

# Read single parquet file from S3
def pd_read_s3_parquet(key, bucket, s3_client=None, **args):
    if s3_client is None:
        s3_client = boto3.client('s3')
    obj = s3_client.get_object(Bucket=bucket, Key=key)
    return pd.read_parquet(io.BytesIO(obj['Body'].read()), **args)

# Read multiple parquets from a folder on S3 generated by spark
def pd_read_s3_multiple_parquets(filepath, bucket, s3=None,
                                 s3_client=None, verbose=False, **args):
    if not filepath.endswith('/'):
        filepath = filepath + '/'  # Add '/' to the end
    if s3_client is None:
        s3_client = boto3.client('s3')
    if s3 is None:
        s3 = boto3.resource('s3')
    s3_keys = [item.key for item in s3.Bucket(bucket).objects.filter(Prefix=filepath)
               if item.key.endswith('.parquet')]
    if not s3_keys:
        print('No parquet found in', bucket, filepath)
    elif verbose:
        print('Load parquets:')
        for p in s3_keys:
            print(p)
    dfs = [pd_read_s3_parquet(key, bucket=bucket, s3_client=s3_client, **args)
           for key in s3_keys]
    return pd.concat(dfs, ignore_index=True)
Then you can read multiple parquets under a folder from S3 by
df = pd_read_s3_multiple_parquets('path/to/folder', 'my_bucket')
(One can simplify this code a lot I guess.)
It can be done using boto3 as well, without calling pyarrow directly (pandas' read_parquet still needs a parquet engine such as pyarrow or fastparquet installed):
import boto3
import io
import pandas as pd

# Read the parquet file
buffer = io.BytesIO()
s3 = boto3.resource('s3')
s3_object = s3.Object('bucket_name', 'key')
s3_object.download_fileobj(buffer)
df = pd.read_parquet(buffer)
print(df.head())
Probably the easiest way to read parquet data on the cloud into dataframes is to use dask.dataframe in this way:
import dask.dataframe as dd
df = dd.read_parquet('s3://bucket/path/to/data-*.parq')
dask.dataframe can read from Google Cloud Storage, Amazon S3, Hadoop file system and more!
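Note that dask dataframes are lazy; if you ultimately want a plain in-memory pandas DataFrame, call compute() on the result:
import dask.dataframe as dd

ddf = dd.read_parquet('s3://bucket/path/to/data-*.parq')

# Materialize the lazy dask dataframe as a regular pandas DataFrame
df = ddf.compute()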
Provided you have the right package setup
$ pip install pandas==1.1.0 pyarrow==1.0.0 s3fs==0.4.2
and your AWS shared config and credentials files configured appropriately
you can use pandas right away:
import pandas as pd
df = pd.read_parquet("s3://bucket/key.parquet")
In case of having multiple AWS profiles you may also need to set
$ export AWS_DEFAULT_PROFILE=profile_under_which_the_bucket_is_accessible
so you can access your bucket.
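As an alternative to the environment variable, newer pandas versions (1.2 and later, where storage_options was added) can pass the profile through storage_options, which is forwarded to s3fs; the profile name below is just the placeholder from above:
import pandas as pd

# storage_options are handed through to s3fs.S3FileSystem
df = pd.read_parquet(
    "s3://bucket/key.parquet",
    storage_options={"profile": "profile_under_which_the_bucket_is_accessible"},
)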
If you are open to also using AWS Data Wrangler:
import awswrangler as wr
df = wr.s3.read_parquet(path="s3://...")
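The same library can also write a DataFrame back to S3 as parquet; a minimal sketch with a placeholder path:
import awswrangler as wr
import pandas as pd

df = pd.DataFrame({'A': [1, 2, 3]})

# Write the DataFrame to S3 as parquet (placeholder path)
wr.s3.to_parquet(df=df, path="s3://bucket/prefix/my_file.parquet")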
You can use s3fs from dask which implements a filesystem interface for s3. Then you can use the filesystem argument of ParquetDataset like so:
import s3fs
s3 = s3fs.S3FileSystem()
dataset = pq.ParquetDataset('s3n://dsn/to/my/bucket', filesystem=s3)
Using pre-signed URLs
import dask.dataframe as dd
import s3fs

s3 = s3fs.S3FileSystem(key='your_key', secret='your_secret', client_kwargs={"endpoint_url": 'your_end_point'})
df = dd.read_parquet(s3.url('your_bucket' + 'your_filepath', expires=3600, client_method='get_object'))
I tried the solution from @oya163 and it works, but only after a little bit of change:
import boto3
import io
import pandas as pd

# Read the parquet file
buffer = io.BytesIO()
s3 = boto3.resource('s3', aws_access_key_id='123', aws_secret_access_key='456')
s3_object = s3.Object('bucket_name', 'myoutput.parquet')
s3_object.download_fileobj(buffer)
df = pd.read_parquet(buffer)
print(df.head())
I'm trying to do a "hello world" with new boto3 client for AWS.
The use-case I have is fairly simple: get object from S3 and save it to the file.
In boto 2.X I would do it like this:
import boto
key = boto.connect_s3().get_bucket('foo').get_key('foo')
key.get_contents_to_filename('/tmp/foo')
In boto3, I can't find a clean way to do the same thing, so I'm manually iterating over the "Streaming" object:
import boto3

key = boto3.resource('s3').Object('fooo', 'docker/my-image.tar.gz').get()
with open('/tmp/my-image.tar.gz', 'wb') as f:
    chunk = key['Body'].read(1024*8)
    while chunk:
        f.write(chunk)
        chunk = key['Body'].read(1024*8)
or
import boto3

key = boto3.resource('s3').Object('fooo', 'docker/my-image.tar.gz').get()
with open('/tmp/my-image.tar.gz', 'wb') as f:
    for chunk in iter(lambda: key['Body'].read(4096), b''):
        f.write(chunk)
And it works fine. I was wondering: is there any "native" boto3 function that will do the same task?
There is a customization that went into Boto3 recently which helps with this (among other things). It is currently exposed on the low-level S3 client, and can be used like this:
import boto3

s3_client = boto3.client('s3')
open('hello.txt', 'w').write('Hello, world!')

# Upload the file to S3
s3_client.upload_file('hello.txt', 'MyBucket', 'hello-remote.txt')

# Download the file from S3
s3_client.download_file('MyBucket', 'hello-remote.txt', 'hello2.txt')
print(open('hello2.txt').read())
These functions will automatically handle reading/writing files as well as doing multipart uploads in parallel for large files.
Note that s3_client.download_file won't create a directory; you can create one beforehand with pathlib.Path('/path/to/file.txt').parent.mkdir(parents=True, exist_ok=True).
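If you need to tune the multipart behavior mentioned above (size threshold, part size, concurrency), both upload_file and download_file accept a Config argument; a minimal sketch with assumed values:
import boto3
from boto3.s3.transfer import TransferConfig

s3_client = boto3.client('s3')

# Example thresholds; tune to your file sizes and bandwidth
config = TransferConfig(
    multipart_threshold=64 * 1024 * 1024,  # use multipart above 64 MB
    multipart_chunksize=16 * 1024 * 1024,  # 16 MB parts
    max_concurrency=8,                     # parallel threads
)

s3_client.upload_file('hello.txt', 'MyBucket', 'hello-remote.txt', Config=config)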
boto3 now has a nicer interface than the client:
resource = boto3.resource('s3')
my_bucket = resource.Bucket('MyBucket')
my_bucket.download_file(key, local_filename)
This by itself isn't tremendously better than the client in the accepted answer (although the docs say that it does a better job retrying uploads and downloads on failure) but considering that resources are generally more ergonomic (for example, the s3 bucket and object resources are nicer than the client methods) this does allow you to stay at the resource layer without having to drop down.
Resources generally can be created in the same way as clients, and they take all or most of the same arguments and just forward them to their internal clients.
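The mirror-image upload is available at the same layer; a short sketch with the same placeholder names:
import boto3

resource = boto3.resource('s3')
my_bucket = resource.Bucket('MyBucket')

# Upload a local file to the bucket under the given key
my_bucket.upload_file('hello.txt', 'hello-remote.txt')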
For those of you who would like to simulate the set_contents_from_string-like boto2 methods, you can try:
import boto3
from cStringIO import StringIO
s3c = boto3.client('s3')
contents = 'My string to save to S3 object'
target_bucket = 'hello-world.by.vor'
target_file = 'data/hello.txt'
fake_handle = StringIO(contents)
# notice if you do fake_handle.read() it reads like a file handle
s3c.put_object(Bucket=target_bucket, Key=target_file, Body=fake_handle.read())
For Python 3:
In Python 3, both the StringIO and cStringIO modules are gone. Use the StringIO import like:
from io import StringIO
To support both versions:
try:
    from StringIO import StringIO
except ImportError:
    from io import StringIO
# Preface: File is json with contents: {'name': 'Android', 'status': 'ERROR'}
import io
import json

import boto3

s3 = boto3.resource('s3')
obj = s3.Object('my-bucket', 'key-to-file.json')
data = io.BytesIO()
obj.download_fileobj(data)

# data now holds the object's bytes; convert them to a dict:
new_dict = json.loads(data.getvalue().decode("utf-8"))
print(new_dict['status'])
# Should print "ERROR"
Note: I'm assuming you have configured authentication separately. The code below downloads a single object from the S3 bucket.
import boto3
#initiate s3 client
s3 = boto3.resource('s3')
#Download object to the file
s3.Bucket('mybucket').download_file('hello.txt', '/tmp/hello.txt')
If you wish to download a version of a file, you need to use get_object.
import boto3

bucket = 'bucketName'
prefix = 'path/to/file/'
filename = 'fileName.ext'

s3c = boto3.client('s3')
s3r = boto3.resource('s3')

if __name__ == '__main__':
    for version in s3r.Bucket(bucket).object_versions.filter(Prefix=prefix + filename):
        file = version.get()
        version_id = file.get('VersionId')
        obj = s3c.get_object(
            Bucket=bucket,
            Key=prefix + filename,
            VersionId=version_id,
        )
        with open(f"{filename}.{version_id}", 'wb') as f:
            for chunk in obj['Body'].iter_chunks(chunk_size=4096):
                f.write(chunk)
Ref: https://botocore.amazonaws.com/v1/documentation/api/latest/reference/response.html
When you want to download a file with a configuration different from the default one, feel free to use either mpu.aws.s3_download(s3path, destination) directly or the copy-pasted code below:
import os

import boto3


def s3_download(source, destination,
                exists_strategy='raise',
                profile_name=None):
    """
    Copy a file from an S3 source to a local destination.

    Parameters
    ----------
    source : str
        Path starting with s3://, e.g. 's3://bucket-name/key/foo.bar'
    destination : str
    exists_strategy : {'raise', 'replace', 'abort'}
        What is done when the destination already exists?
    profile_name : str, optional
        AWS profile

    Raises
    ------
    botocore.exceptions.NoCredentialsError
        Botocore is not able to find your credentials. Either specify
        profile_name or add the environment variables AWS_ACCESS_KEY_ID,
        AWS_SECRET_ACCESS_KEY and AWS_SESSION_TOKEN.
        See https://boto3.readthedocs.io/en/latest/guide/configuration.html
    """
    exists_strategies = ['raise', 'replace', 'abort']
    if exists_strategy not in exists_strategies:
        raise ValueError('exists_strategy \'{}\' is not in {}'
                         .format(exists_strategy, exists_strategies))
    session = boto3.Session(profile_name=profile_name)
    s3 = session.resource('s3')
    bucket_name, key = _s3_path_split(source)
    if os.path.isfile(destination):
        if exists_strategy == 'raise':
            raise RuntimeError('File \'{}\' already exists.'
                               .format(destination))
        elif exists_strategy == 'abort':
            return
    s3.Bucket(bucket_name).download_file(key, destination)
from collections import namedtuple

S3Path = namedtuple("S3Path", ["bucket_name", "key"])


def _s3_path_split(s3_path):
    """
    Split an S3 path into bucket and key.

    Parameters
    ----------
    s3_path : str

    Returns
    -------
    splitted : (str, str)
        (bucket, key)

    Examples
    --------
    >>> _s3_path_split('s3://my-bucket/foo/bar.jpg')
    S3Path(bucket_name='my-bucket', key='foo/bar.jpg')
    """
    if not s3_path.startswith("s3://"):
        raise ValueError(
            "s3_path is expected to start with 's3://', "
            "but was {}".format(s3_path)
        )
    bucket_key = s3_path[len("s3://"):]
    bucket_name, key = bucket_key.split("/", 1)
    return S3Path(bucket_name, key)
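For completeness, a usage sketch of the helper above (bucket and key are placeholders):
# Download, refusing to overwrite an existing local file (the default behaviour)
s3_download('s3://my-bucket/foo/bar.jpg', '/tmp/bar.jpg')

# Or silently skip the download if the local file already exists
s3_download('s3://my-bucket/foo/bar.jpg', '/tmp/bar.jpg', exists_strategy='abort')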