Convert .txt to .csv with AWS Lambda function - python

I am trying to create a Lambda function that is triggered whenever a .txt file is dropped into the private bucket 'testinput', converts that pipe-delimited .txt file to a .csv file, and then puts the converted file into the private bucket 'testoutput'.
I created a Lambda function using the code below and set up the S3 trigger. I followed the steps in this blog so that I could import pandas. However, when I dropped a file 'pipedelimitedtest' into 'testinput', nothing happened.
This leads me to believe that there is something wrong with the code I am using for converting the .txt file to a .csv file.
If someone could provide insight into what specifically is wrong with my code, or if there is a simpler way to address this problem, I would greatly appreciate it.
UPDATE: I changed the script based on the helpful feedback from Marcin and added a pandas layer to the function. However, when I test the function, I am now getting the error: Unable to import module 'lambda_function': No module named 'pandas'.
import boto3
import io
import pandas as pd
import json

def lambda_handler(event, context):
    s3 = boto3.client('s3')
    obj = s3.get_object(Bucket='lambdatestinput985', Key='Pipedelimitedtest.txt')
    df = pd.read_csv(io.BytesIO(obj['Body'].read()))
    csv_buffer = StringIO()
    df.to_csv(csv_buffer)
    s3_resource = boto3.resource('s3')
    s3_resource.Object(Bucket, 'df.csv').put(Body=csv_buffer.getvalue())
    return {
        'statusCode': 200,
        'body': json.dumps('Hello from S3 events Lambda!')
    }

Your code is not a valid Lambda function. You need a Lambda handler, which is the entry point to your function.
You also need to bundle pandas in your Lambda deployment package, or use a pandas layer, to make the pandas library available to your function.
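For reference, here is a minimal sketch of what a working handler might look like once pandas is available through a layer or the deployment package. It takes the bucket and key from the S3 event instead of hard-coding them, reads the pipe-delimited file, and writes the converted CSV to the 'testoutput' bucket from the question; the output key naming is an assumption.

import io
import json
import urllib.parse

import boto3
import pandas as pd  # provided by a Lambda layer or bundled in the deployment package

s3 = boto3.client('s3')

def lambda_handler(event, context):
    # Take the bucket and key from the S3 event rather than hard-coding them
    bucket = event['Records'][0]['s3']['bucket']['name']
    key = urllib.parse.unquote_plus(event['Records'][0]['s3']['object']['key'])

    # Read the pipe-delimited .txt file into a DataFrame
    obj = s3.get_object(Bucket=bucket, Key=key)
    df = pd.read_csv(io.BytesIO(obj['Body'].read()), sep='|')

    # Write the DataFrame out as a comma-separated file to the output bucket
    csv_buffer = io.StringIO()
    df.to_csv(csv_buffer, index=False)
    target_key = key.rsplit('.', 1)[0] + '.csv'  # assumed naming convention
    s3.put_object(Bucket='testoutput', Key=target_key, Body=csv_buffer.getvalue().encode('utf-8'))

    return {
        'statusCode': 200,
        'body': json.dumps(f'Converted {key} to {target_key}')
    }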

Related

How to access AWS S3 data using boto3

I am fairly new to both S3 and boto3. I am trying to read in some data in the following format:
https://blahblah.s3.amazonaws.com/data1.csv
https://blahblah.s3.amazonaws.com/data2.csv
https://blahblah.s3.amazonaws.com/data3.csv
I am importing boto3, and it seems like I would need to do something like:
import boto3
s3 = boto3.client('s3')
However, what should I do after creating this client if I want to read in all the files separately in memory (I am not supposed to download the data locally)? Ideally, I would like to read each CSV file into a separate pandas DataFrame (which I know how to do once I know how to access the S3 data).
Please understand I'm fairly new to both boto3 and S3, so I don't even know where to begin.
You have two options, both of which you've already mentioned:
Downloading the file locally using download_file
s3.download_file(
    "<bucket-name>",
    "<key-of-file>",
    "<local-path-where-file-will-be-downloaded>"
)
See download_file
Loading the file contents into memory using get_object
response = s3.get_object(Bucket="<bucket-name>", Key="<key-of-file>")
contentBody = response.get("Body")
# You need to read the content as it is a Stream
content = contentBody.read()
See get_object
Either approach is fine, and you can just choose whichever one fits your scenario better.
Try this:
import boto3
s3 = boto3.resource('s3')
obj = s3.Object(<<bucketname>>, <<itemname>>)
body = obj.get()['Body'].read()
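If the goal is one DataFrame per CSV without downloading anything to disk, a sketch along these lines should work; the bucket name 'blahblah' is inferred from the URLs in the question, so adjust as needed.

import io

import boto3
import pandas as pd

s3 = boto3.client('s3')
bucket = 'blahblah'  # bucket name inferred from the URLs in the question

# Read every .csv object in the bucket into its own DataFrame, keyed by object key
frames = {}
paginator = s3.get_paginator('list_objects_v2')
for page in paginator.paginate(Bucket=bucket):
    for item in page.get('Contents', []):
        key = item['Key']
        if key.endswith('.csv'):
            body = s3.get_object(Bucket=bucket, Key=key)['Body'].read()
            frames[key] = pd.read_csv(io.BytesIO(body))

print({key: df.shape for key, df in frames.items()})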

How can I read from a CSV file from an S3 bucket, apply certain if-statements to it, and write a new updated CSV file and place it in the S3 bucket?

I'm having trouble writing a new CSV file to an S3 bucket. I want to be able to read a CSV file that I have in an S3 bucket, and if one of the values in the CSV meets a certain requirement, change it to a different value. I've read that it's not possible to edit an S3 object, so I need to create a new one every time. In short, I want to create a new, updated CSV file from another CSV file in an S3 bucket, with the changes applied.
I'm trying to use DictWriter and DictReader, but I always run into issues with DictWriter. I can read the CSV file properly, but when I try to update it, I hit a myriad of significantly different issues from DictWriter. Here is my code:
# Function to be pasted into AWS Lambda.
# Accesses the S3 bucket, opens the CSV file, and receives the response line by line.

# To be able to access S3 buckets and the objects within the bucket
import boto3
# To be able to read the CSV by using DictReader
import csv

# Lambda script that extracts, transforms, and loads data from S3 bucket 'testing-bucket-1042' and CSV file 'Insurance.csv'
def lambda_handler(event, context):
    s3 = boto3.resource('s3')
    bucket = s3.Bucket('testing-bucket-1042')
    obj = bucket.Object(key='Insurance.csv')
    response = obj.get()
    lines = response['Body'].read().decode('utf-8').split()
    reader = csv.DictReader(lines)

    with open("s3://testing-bucket-1042/Insurance.csv", newline='') as csvfile:
        reader = csv.DictReader(csvfile)
        fieldnames = ['county', 'eq_site_limit']
        writer = csv.DictWriter(lines, fieldnames=fieldnames)
        for row in reader:
            writer.writeheader()
            if row['county'] == "CLAY":  # if the row is under the column 'county' and contains the string "CLAY"
                writer.writerow({'county': 'CHANGED'})
            if row['eq_site_limit'] == "0":  # if the row is under the column 'eq_site_limit' and contains the string "0"
                writer.writerow({'eq_site_limit': '9000'})
Right now, the error that I am getting is that the path I use when attempting to open the CSV, "s3://testing-bucket-1042/Insurance.csv", is said to not exist.
The error says
"errorMessage": "[Errno 2] No such file or directory: 's3://testing-bucket-1042/Insurance.csv'",
"errorType": "FileNotFoundError"
What would be the correct way to use DictWriter, if at all?
First of all, s3:// is not a regular (file) protocol, which is why you get that error message. It is good that you stated your intentions.
Okay, I refactored your code:
import codecs
import boto3
# To be able to read the CSV by using DictReader
import csv
from io import StringIO

# Lambda script that extracts, transforms, and loads data from S3 bucket 'testing-bucket-1042' and CSV file 'Insurance.csv'
def lambda_handler(event, context):
    s3 = boto3.resource('s3')
    bucket = s3.Bucket('testing-bucket-1042')
    obj = bucket.Object(key='Insurance.csv')

    # Stream and decode the CSV directly from S3, then parse it with DictReader
    stream = codecs.getreader('utf-8')(obj.get()['Body'])
    lines = list(csv.DictReader(stream))
    ### now you have your object there

    csv_buffer = StringIO()
    out = csv.DictWriter(csv_buffer, fieldnames=['county', 'eq_site_limit'])
    for row in lines:
        if row['county'] == "CLAY":
            out.writerow({'county': 'CHANGED'})
        if row['eq_site_limit'] == "0":
            out.writerow({'eq_site_limit': '9000'})

    ### now write the content to some different bucket/key
    s3client = boto3.client('s3')
    s3client.put_object(Body=csv_buffer.getvalue().encode('utf-8'),
                        Bucket=...targetbucket, Key=...targetkey)
I hope that this works. Basically there are a few tricks:
use codecs to stream the CSV data directly from the S3 bucket
use StringIO to create an in-memory stream that csv.DictWriter can write to
when you are finished, one way to "upload" your content is through the S3 client's put_object method (as documented by AWS)
To logically separate AWS code from business logic, I normally recommend this approach:
Download the object from Amazon S3 to the /tmp directory
Perform desired business logic (read file, write file)
Upload the resulting file to Amazon S3
Using download_file() and upload_file() avoids having to worry about in-memory streams. It means you can take logic that normally operates on files (e.g. on your own computer) and apply it to files obtained from S3.
It comes down to personal preference.
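A minimal sketch of that pattern, assuming the bucket and file names from the question and writing the result back under a new key 'Insurance_updated.csv':

import csv

import boto3

s3 = boto3.client('s3')

def transform_csv(src_path, dst_path):
    # Business logic on plain local files (the CLAY / eq_site_limit rules from the question)
    with open(src_path, newline='') as src, open(dst_path, 'w', newline='') as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
        writer.writeheader()
        for row in reader:
            if row['county'] == 'CLAY':
                row['county'] = 'CHANGED'
            if row['eq_site_limit'] == '0':
                row['eq_site_limit'] = '9000'
            writer.writerow(row)

def lambda_handler(event, context):
    # 1. Download the object from Amazon S3 to Lambda's writable /tmp directory
    s3.download_file('testing-bucket-1042', 'Insurance.csv', '/tmp/Insurance.csv')
    # 2. Perform the business logic on the local file
    transform_csv('/tmp/Insurance.csv', '/tmp/Insurance_updated.csv')
    # 3. Upload the resulting file back to Amazon S3 under a new key
    s3.upload_file('/tmp/Insurance_updated.csv', 'testing-bucket-1042', 'Insurance_updated.csv')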
You can use the streaming functionality of the AWS CLI's S3 commands to make changes on the fly. It is well suited to text-manipulation tools such as awk and sed.
Example:
aws s3 cp s3://bucketname/file.csv - | sed 's/foo/bar/g' | aws s3 cp - s3://bucketname/new-file.csv
AWS Docs: https://docs.aws.amazon.com/cli/latest/reference/s3/cp.html

AWS Lambda: read csv file dimensions from an s3 bucket with Python without using Pandas or CSV package

Good afternoon. I am hoping that someone can help me with this issue.
I have multiple CSV files sitting in an S3 folder. I would like to use Python, without pandas or the csv package (because AWS Lambda has very limited packages available and there is a size restriction), to loop through the files sitting in the S3 bucket and read each CSV's dimensions (number of rows and number of columns).
For example, my S3 folder contains two CSV files (1.csv and 2.csv).
My code should run through the specified S3 folder, count the rows and columns in 1.csv and 2.csv, and put the results in a new CSV file. I greatly appreciate your help! I can do this using the pandas package (thank god for pandas), but AWS Lambda has restrictions that limit what I can use.
AWS Lambda uses Python 3.7.
If you can access your S3 resources in your Lambda function, then basically do this to check the rows:
def lambda_handler(event, context):
    import boto3 as bt3

    s3 = bt3.client('s3')
    csv1_data = s3.get_object(Bucket='the_s3_bucket', Key='1.csv')
    csv2_data = s3.get_object(Bucket='the_s3_bucket', Key='2.csv')

    contents_1 = csv1_data['Body'].read()
    contents_2 = csv2_data['Body'].read()

    rows1 = contents_1.split()
    rows2 = contents_2.split()

    return len(rows1), len(rows2)
It should work directly; if not, please let me know. BTW, hard-coding the bucket and file names into the function as I did in the sample is not a good idea at all.
Regards.
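To extend that to the full requirement (rows and columns for every CSV under a folder, written out to a new CSV) without pandas or the csv module, something like the following sketch should work; the bucket name, prefix, and output key are assumptions.

import boto3

s3 = boto3.client('s3')
bucket = 'the_s3_bucket'   # assumed bucket name
prefix = 'input/'          # assumed folder (prefix) holding the CSV files

def lambda_handler(event, context):
    summary_lines = ['file,rows,columns']
    for item in s3.list_objects_v2(Bucket=bucket, Prefix=prefix).get('Contents', []):
        key = item['Key']
        if not key.endswith('.csv'):
            continue
        body = s3.get_object(Bucket=bucket, Key=key)['Body'].read().decode('utf-8')
        rows = [line for line in body.splitlines() if line.strip()]
        columns = rows[0].count(',') + 1 if rows else 0  # naive count; ignores quoted commas
        summary_lines.append(f'{key},{len(rows)},{columns}')

    # Write the collected dimensions to a new CSV object
    s3.put_object(Bucket=bucket, Key='dimensions.csv',
                  Body='\n'.join(summary_lines).encode('utf-8'))
    return {'statusCode': 200}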

Reading Data From Cloud Storage Via Cloud Functions

I am trying to do a quick proof of concept for building a data processing pipeline in Python. To do this, I want to build a Google Cloud Function that will be triggered when certain .csv files are dropped into Cloud Storage.
I followed along this Google Functions Python tutorial and while the sample code does trigger the Function to create some simple logs when a file is dropped, I am really stuck on what call I have to make to actually read the contents of the data. I tried to search for an SDK/API guidance document but I have not been able to find it.
In case this is relevant, once I process the .csv, I want to be able to add some data that I extract from it into GCP's Pub/Sub.
The function does not actually receive the contents of the file, just some metadata about it.
You'll want to use the google-cloud-storage client. See the "Downloading Objects" guide for more details.
Putting that together with the tutorial you're using, you get a function like:
from google.cloud import storage

storage_client = storage.Client()

def hello_gcs_generic(data, context):
    bucket = storage_client.get_bucket(data['bucket'])
    blob = bucket.blob(data['name'])
    contents = blob.download_as_string()
    # Process the file contents, etc...
This is an alternative solution using pandas:
Cloud Function Code:
import pandas as pd

def GCSDataRead(event, context):
    bucketName = event['bucket']
    blobName = event['name']
    fileName = "gs://" + bucketName + "/" + blobName
    dataFrame = pd.read_csv(fileName, sep=",")
    print(dataFrame)
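Since the question also mentions pushing extracted data into Pub/Sub, here is a rough sketch of that step using the google-cloud-pubsub client; the project ID 'my-project' and topic name 'csv-rows' are placeholders, and google-cloud-pubsub would need to be listed in requirements.txt.

import json

import pandas as pd
from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path('my-project', 'csv-rows')  # hypothetical project/topic

def GCSDataToPubSub(event, context):
    # Read the uploaded CSV straight into a DataFrame
    data_frame = pd.read_csv("gs://" + event['bucket'] + "/" + event['name'])

    # Publish each row as a JSON message; Pub/Sub payloads must be bytes
    for record in data_frame.to_dict(orient='records'):
        publisher.publish(topic_path, data=json.dumps(record).encode('utf-8'))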

Azure blob storage to JSON in azure function using SDK

I am trying to create a timer-triggered Azure Function that takes data from Blob Storage, aggregates it, and puts the aggregates in Cosmos DB. I previously tried using the bindings in Azure Functions to use the blob as input, which I was informed was incorrect (see this thread: Azure functions python no value for named parameter).
I am now using the SDK and am running into the following problem:
import sys, os.path
sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), 'myenv/Lib/site-packages')))

import json
import pandas as pd
from azure.storage.blob import BlockBlobService

data = BlockBlobService(account_name='accountname', account_key='accountkey')
container_name = ('container')
generator = data.list_blobs(container_name)

for blob in generator:
    print("{}".format(blob.name))
    json = json.loads(data.get_blob_to_text('container', open(blob.name)))
    df = pd.io.json.json_normalize(json)
    print(df)
This results in an error:
IOError: [Errno 2] No such file or directory: 'test.json'
I realize this might be an absolute-path issue, but I'm not sure how that works with Azure Storage. Any ideas on how to circumvent this?
Made it "work" by doing the following:
for blob in generator:
    loader = data.get_blob_to_text('kvaedevdystreamanablob', blob.name, if_modified_since=delta)
    json = json.loads(loader.content)
This works for ONE JSON file, i.e. I only had one in storage, but when more are added I get this error:
ValueError: Expecting object: line 1 column 21907 (char 21906)
This happens even if I add if_modified_since so as to only take in one blob. I will update if I figure something out. Help is always welcome.
Another update: my data is coming in through Stream Analytics and then down to the blob. I had selected that the data should come in as arrays, which is why the error is occurring. When the stream is terminated, the blob doesn't immediately append ] to the end of the JSON file, so the JSON isn't valid. I will now try using line-by-line output in Stream Analytics instead of arrays.
Figured it out. In the end it was quite a simple fix:
I had to make sure each JSON entry in the blob was less than 1024 characters, or it would create a new line, which made reading the lines problematic.
The code that iterates through each blob file, reads it, and adds it to a list is as follows:
data = BlockBlobService(account_name='accname', account_key='key')
generator = data.list_blobs('collection')
dataloaded = []

for blob in generator:
    loader = data.get_blob_to_text('collection', blob.name)
    trackerstatusobjects = loader.content.split('\n')
    for trackerstatusobject in trackerstatusobjects:
        dataloaded.append(json.loads(trackerstatusobject))
From this you can load it into a DataFrame and do whatever you want :)
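For example, a short sketch of that last step, assuming pandas is available in the environment:

import pandas as pd

# Flatten the list of parsed JSON objects collected above into a DataFrame
df = pd.io.json.json_normalize(dataloaded)
print(df.head())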
Hope this helps if someone stumbles upon a similar problem.
