Formatting a single row as CSV - python

I'm creating a script to convert a whole lot of data into CSV format. It runs on Google AppEngine using the mapreduce API, which is only relevant in that it means each row of data is formatted and output separately, in a callback function.
I want to take advantage of the logic that already exists in the csv module to convert my data into the correct format, but because the CSV writer expects a file-like object, I'm having to instantiate a StringIO for each row, write the row to the object, then return the content of the object, each time.
This seems silly, and I'm wondering if there is any way to access the internal CSV formatting logic of the csv module without the writing part.
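(For reference, the per-row approach described above looks roughly like this; a sketch, not the asker's actual code, with io.StringIO standing in for the Python 2 StringIO that App Engine used at the time.)

import csv
import io

def format_row(row):
    # A fresh buffer per row: write the row, then read the text back out.
    buf = io.StringIO()
    csv.writer(buf).writerow(row)
    return buf.getvalue()

print(format_row(['a', 'b,c', 'd"e']))  # e.g. a,"b,c","d""e" plus line terminator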

The csv module wraps the _csv module, which is written in C. You could grab the source for it and modify it to not require the file-like object, but poking around in the module, I don't see any clear way to do it without recompiling.

One option could be to use your own "file-like" object. Actually, csv.writer only requires the object to have a write method, so:
import csv

class PseudoFile(object):
    def write(self, string):
        # Do whatever with your string
        pass

csv.writer(PseudoFile()).writerow(row)
You're skipping a couple of steps in there, but maybe it's just what you want.
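For what it's worth, the pseudo-file idea could be fleshed out into a reusable row formatter along these lines (a minimal sketch; the CsvRowFormatter name is my own, not part of the original answer):

import csv

class CsvRowFormatter(object):
    """File-like object that captures whatever csv.writer writes to it."""
    def __init__(self):
        self._chunks = []
        self.writer = csv.writer(self)

    def write(self, string):
        self._chunks.append(string)

    def format_row(self, row):
        # Clear the buffer, format one row, and hand back the text.
        del self._chunks[:]
        self.writer.writerow(row)
        return ''.join(self._chunks)

formatter = CsvRowFormatter()
print(formatter.format_row(['a', 'b,c']))  # a,"b,c" plus the line terminator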

Related

Method to read DBF files efficiently python

I have the following method to open DBF files, but it performs very slowly. I'm looking to open large DBF files, so I need to make my method more efficient. Thanks.
from collections import OrderedDict

import pandas as pd
from dbfread import DBF

def lectura_tablas(nombre_tabla):
    table_name = nombre_tabla
    table_ventas = DBF(f'{table_name}', load=True, ignore_missing_memofile=True)
    table_new = []
    for x in range(0, len(table_ventas.records)):
        table_new.append(OrderedDict(table_ventas.records[x]))
    dataframe = pd.DataFrame(table_new)
    return dataframe
Your code ought to work. The slowness might be caused by a "Schlemiel the Painter" problem in the loop (the fact that you use load=True makes this unlikely, but it's the only possibility I see). Try rewriting it like this:
for record in table_ventas:
    table_new.append(record)
This ought to use the standard iterator on the DBF object, which ought to already return an OrderedDict.
Other than that, you might try converting the DBF into some other format that is more efficiently accessed and see whether the overall time improves.
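Putting that together, a minimal sketch of the rewritten function (assuming the dbfread package, whose DBF objects are iterable and yield one OrderedDict per record):

import pandas as pd
from dbfread import DBF

def lectura_tablas(nombre_tabla):
    # Iterating the DBF object yields one OrderedDict per record,
    # which pandas can consume directly.
    table_ventas = DBF(nombre_tabla, load=True, ignore_missing_memofile=True)
    return pd.DataFrame(iter(table_ventas))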

Python: Iteratively Writing to Excel Files

Python 2.7. I'm using xlsxwriter.
Let's say I have myDict = {1: 'One', 2: 'Two', 3: 'Three'}
I need to perform some transformation on the value and write the result to a spreadsheet.
So I write a function to create a new file and put some headers in there and do formatting, but don't close it so I can write further with my next function.
Then I write another function for transforming my dict values and writing them to the worksheet.
I'm a noob when it comes to classes so please forgive me if this looks silly.
import xlsxwriter

class ReadWriteSpreadsheet(object):
    def __init__(self, outputFile=None, writeWorkbook=None, writeWorksheet=None):
        self.outputFile = outputFile
        self.writeWorksheet = writeWorksheet
        self.writeWorkbook = writeWorkbook

    # This function works fine
    def setup_new_spreadsheet(self):
        self.writeWorkbook = xlsxwriter.Workbook(self.outputFile)
        self.writeWorksheet = self.writeWorkbook.add_worksheet('My Worksheet')
        self.writeWorksheet.write('A1', 'TEST')

    # This one does not
    def write_data(self):
        # Forget iterating through the dict for now
        self.writeWorksheet.write('A5', myDict[1])

x = ReadWriteSpreadsheet(outputFile='test.xlsx')
x.setup_new_spreadsheet()
x.write_data()
I get:
Exception Exception: Exception('Exception caught in workbook destructor. Explicit close() may be required for workbook.',) in <bound method Workbook.__del__ of <xlsxwriter.workbook.Workbook object at 0x00000000023FDF28>> ignored
The docs say this error is due to not closing the workbook, but if I close it then I can't write to it further...
How do I structure this class so that the workbook and worksheet from setup_new_spreadsheet() is able to be written to by write_data()?
The exception mentioned in your question is triggered when Python realises you will not need your Workbook any more in the rest of your code and therefore decides to delete it from memory (garbage collection). When doing so, it realises you haven't closed the workbook yet, so your Excel spreadsheet has not been persisted to disk at all (that only happens on close, I assume), and it raises that exception.
If you had another method close on your class that did self.writeWorkbook.close(), and made sure to call it last, you would not have that error.
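A minimal sketch of that (a compact rewrite of the question's class; write_data takes the value to write rather than reading the global myDict):

import xlsxwriter

class ReadWriteSpreadsheet(object):
    def __init__(self, outputFile=None):
        self.outputFile = outputFile
        self.writeWorkbook = None
        self.writeWorksheet = None

    def setup_new_spreadsheet(self):
        self.writeWorkbook = xlsxwriter.Workbook(self.outputFile)
        self.writeWorksheet = self.writeWorkbook.add_worksheet('My Worksheet')
        self.writeWorksheet.write('A1', 'TEST')

    def write_data(self, value):
        self.writeWorksheet.write('A5', value)

    def close(self):
        # Persist the workbook to disk; call exactly once, after all writes.
        self.writeWorkbook.close()

x = ReadWriteSpreadsheet(outputFile='test.xlsx')
x.setup_new_spreadsheet()
x.write_data('One')
x.close()  # no destructor warning now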
When you do ReadWriteSpreadsheet() you get a new instance of the class you've defined. That new instance doesn't have any knowledge of any workbooks that were set up in a different instance.
It looks like what you want to do is get a single instance, and then issue the methods on that one instance:
x = ReadWriteSpreadsheet(outputFile='test.xlsx')
x.setup_new_spreadsheet()
x.write_data()
To address your new concern:
The docs say this error is due to not closing the workbook, but if I close it then I can't write to it further...
Yes, that's true, you can't write to it further. That is one of the fundamental properties of Excel files. At the level we're working with here, there's no such thing as "appending" or "updating" an Excel file. Even the Excel program itself cannot do it. You only have two viable approaches:
Keep all data in memory and only commit to disk at the very end.
Reopen the file, reading the data into memory; modify the in-memory data; and write all the in-memory data back out to a new disk file (which can have the same name as the original if you want to overwrite).
The second approach requires using a package that can read Excel files. The main choices there are xlrd and OpenPyXL. The latter will handle both reading and writing, so if you use that one, you don't need XlsxWriter.
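A minimal sketch of the second approach using openpyxl (assuming test.xlsx already exists on disk):

from openpyxl import load_workbook

# Read the existing file into memory, modify it there, then rewrite it.
wb = load_workbook('test.xlsx')
ws = wb.active
ws['A5'] = 'One'       # the "update" happens purely in memory
wb.save('test.xlsx')   # writes the whole file back out to disk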

Python Storing Data

I have a list in my program and a function to append to it. Unfortunately, when you close the program the item you added goes away and the list reverts to its original state. Is there any way I can store the data so that when the user reopens the program the list is intact?
You can try the pickle module to store in-memory data on disk. Here is an example:
store data:
import pickle
dataset = ['hello','test']
outputFile = 'test.data'
fw = open(outputFile, 'wb')
pickle.dump(dataset, fw)
fw.close()
load data:
import pickle
inputFile = 'test.data'
fd = open(inputFile, 'rb')
dataset = pickle.load(fd)
print dataset
You can save the data to a database (e.g. SQLite) or to a .txt file. For example:
with open("mylist.txt","w") as f: #in write mode
f.write("{}".format(mylist))
Your list goes into the format() function. This makes a .txt file named mylist.txt and saves your list data into it.
After that, when you want to access your data again, you can do:
with open("mylist.txt") as f: #in read mode, not in write mode, careful
rd=f.readlines()
print (rd)
The built-in pickle module provides some basic functionality for serialization, which is a term for turning arbitrary objects into something suitable to be written to disk. Check out the docs for Python 2 or Python 3.
Pickle isn't very robust though, and for more complex data you'll likely want to look into a database module like the built-in sqlite3 or a full-fledged object-relational mapping (ORM) like SQLAlchemy.
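As an illustration, a minimal sqlite3 sketch for persisting a list of strings (the table and file names are arbitrary):

import sqlite3

items = ['hello', 'test']

conn = sqlite3.connect('mylist.db')
conn.execute('CREATE TABLE IF NOT EXISTS items (value TEXT)')
conn.executemany('INSERT INTO items (value) VALUES (?)', [(i,) for i in items])
conn.commit()

# Later, in another run of the program:
restored = [row[0] for row in conn.execute('SELECT value FROM items')]
conn.close()
print(restored)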
For storing big data, the HDF5 library is suitable. In Python it is available through h5py.
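A minimal sketch of that (assuming numeric data, which is what HDF5 handles best):

import h5py
import numpy as np

# Write a large array to disk once...
with h5py.File('data.h5', 'w') as f:
    f.create_dataset('values', data=np.arange(1_000_000))

# ...then later read back only the slice you need.
with h5py.File('data.h5', 'r') as f:
    first_ten = f['values'][:10]
print(first_ten)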

python gnupg timestamp

I have noticed when using python-gnupg that if I sign some data and save the signed data to a file using pickle, lots of data gets saved along with the signed data. One of these things is a timestamp in Unix time; for example, the following lines are part of a timestamp:
p24
sS'timestamp'
p25
V1347364912
The documentation does not mention any of this, which makes me a little confused. After loading the file with pickle, I can't see any mention of the timestamp or any way to return the value. But if pickle is saving it, it must be part of the Python object. Does this mean there is a way I can get to this information in Python? I would also like to use this data, which I could maybe do by reading the file itself, but I am looking for a cleaner way to do it using the gnupg module.
gnupg isn't very well documented, but if you inspect it you will see there are attributes besides the ones normally used...
# core
import inspect
import pickle
import datetime

# 3rd party
import gnupg

def depickle():
    """Pull and depickle our signed data."""
    f = open('pickle.txt', 'r')
    signed_data = pickle.load(f)
    f.close()
    return signed_data

# depickle our signed data
signed_data = depickle()

# inspect the object
for key, value in inspect.getmembers(signed_data):
    print key
One of them is your timestamp... aptly named timestamp. Now that you know it you can use it easily enough...
# use the attribute now that we know it
print signed_data.timestamp
# make it pretty
print datetime.datetime.fromtimestamp(float(signed_data.timestamp))
That felt long winded but I thought this discussion would benefit from documenting the use of inspect to identify the undocumented attributes instead of just saying "use signed_data.timestamp".
I have found that some fields of the python-gnupg Sign and Verify classes are not described in the documentation. You will have to look at the python-gnupg source: [PYTHONDIR]/Lib/site-packages/gnupg.py. There is a Sign class with a handle_status() method that fills in all the variables/fields connected with the signature, including the timestamp field.

Is there a memory efficient and fast way to load big JSON files?

I have some JSON files of 500 MB each.
If I use the "trivial" json.load() to load the content all at once, it will consume a lot of memory.
Is there a way to read the file partially? If it were a line-delimited text file, I would be able to iterate over the lines. I am looking for an analogy to that.
There was a duplicate to this question that had a better answer. See https://stackoverflow.com/a/10382359/1623645, which suggests ijson.
Update:
I tried it out, and ijson is to JSON what SAX is to XML. For instance, you can do this:
import ijson
for prefix, the_type, value in ijson.parse(open(json_file_name)):
    print prefix, the_type, value
where prefix is a dot-separated index in the JSON tree (what happens if your key names have dots in them? I guess that would be bad for JavaScript, too...), the_type describes a SAX-like event, one of 'null', 'boolean', 'number', 'string', 'map_key', 'start_map', 'end_map', 'start_array', 'end_array', and value is the value of the object, or None if the_type is an event like starting/ending a map/array.
The project has some docstrings, but not enough global documentation. I had to dig into ijson/common.py to find what I was looking for.
So the problem is not that each file is too big, but that there are too many of them, and they seem to be adding up in memory. Python's garbage collector should be fine, unless you are keeping around references you don't need. It's hard to tell exactly what's happening without any further information, but some things you can try:
Modularize your code. Do something like:
for json_file in list_of_files:
    process_file(json_file)
If you write process_file() in such a way that it doesn't rely on any global state, and doesn't change any global state, the garbage collector should be able to do its job.
Deal with each file in a separate process. Instead of parsing all the JSON files at once, write a program that parses just one, and pass each one in from a shell script, or from another Python process that calls your script via subprocess.Popen. This is a little less elegant, but if nothing else works, it will ensure that you're not holding on to stale data from one file to the next; a sketch follows below.
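(A minimal sketch of that second option; the parse_one.py script name and the file list are hypothetical.)

import subprocess

list_of_files = ['a.json', 'b.json']  # hypothetical file names

for json_file in list_of_files:
    # Each file is parsed in a child process, so all memory it used
    # is returned to the OS when that process exits.
    subprocess.check_call(['python', 'parse_one.py', json_file])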
Hope this helps.
Yes.
You can use jsonstreamer, a SAX-like push parser that I have written, which will allow you to parse arbitrarily sized chunks. You can get it here; check out the README for examples. It's fast because it uses the C yajl library.
It can be done by using ijson. The working of ijson has been very well explained by Jim Pivarski in the answer above. The code below will read a file and print each JSON object from the list. For example, the file content is as below:
[{"name": "rantidine", "drug": {"type": "tablet", "content_type": "solid"}},
{"name": "nicip", "drug": {"type": "capsule", "content_type": "solid"}}]
You can print every element of the array using the below method
import ijson

def extract_json(filename):
    with open(filename, 'rb') as input_file:
        jsonobj = ijson.items(input_file, 'item')
        jsons = (o for o in jsonobj)
        for j in jsons:
            print(j)
Note: 'item' is the prefix ijson uses for each element of a top-level array.
If you want to access only specific JSON objects based on a condition, you can do it in the following way:
def extract_tabtype(filename):
    with open(filename, 'rb') as input_file:
        # 'item.drug' matches the "drug" key in the sample data above
        objects = ijson.items(input_file, 'item.drug')
        tabtype = (o for o in objects if o['type'] == 'tablet')
        for prop in tabtype:
            print(prop)
This will print only those records whose drug type is tablet.
On your mention of running out of memory, I must question whether you're actually managing memory. Are you using the del keyword to remove your old object before trying to read a new one? Python should never silently retain something in memory if you remove it.
Update
See the other answers for advice.
Original answer from 2010, now outdated
Short answer: no.
Properly dividing a json file would take intimate knowledge of the json object graph to get right.
However, if you have this knowledge, then you could implement a file-like object that wraps the json file and spits out proper chunks.
For instance, if you know that your json file is a single array of objects, you could create a generator that wraps the json file and returns chunks of the array.
You would have to do some string content parsing to get the chunking of the json file right.
I don't know what generates your json content. If possible, I would consider generating a number of managable files, instead of one huge file.
Another idea is to try loading it into a document-store database like MongoDB.
It deals with large blobs of JSON well, although you might run into the same problem loading the JSON - avoid that by loading the files one at a time.
If this path works for you, then you can interact with the JSON data via their client and potentially not have to hold the entire blob in memory.
http://www.mongodb.org/
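(A rough sketch with pymongo; the database and collection names are arbitrary, and the files are still loaded one at a time so only one blob is in memory at once.)

import json
from pymongo import MongoClient

client = MongoClient()          # assumes a local mongod is running
coll = client.mydb.documents

for name in ['a.json', 'b.json']:   # hypothetical file names
    with open(name) as f:
        coll.insert_one(json.load(f))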
"the garbage collector should free the memory"
Correct.
Since it doesn't, something else is wrong. Generally, the problem with infinite memory growth is global variables.
Remove all global variables.
Make all module-level code into smaller functions.
In addition to @codeape:
I would try writing a custom JSON parser to help you figure out the structure of the JSON blob you are dealing with. Print out the key names only, etc. Make a hierarchical tree and decide (yourself) how you can chunk it. This way you can do what @codeape suggests - break the file up into smaller chunks, etc.
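(A rough sketch of that exploration using ijson, which was mentioned above; the file name is hypothetical.)

import ijson

# Print each distinct (prefix, key) pair once to get a picture of the tree.
seen = set()
with open('big.json', 'rb') as f:
    for prefix, event, value in ijson.parse(f):
        if event == 'map_key' and (prefix, value) not in seen:
            seen.add((prefix, value))
            print(prefix or '<root>', '->', value)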
You can convert the JSON file to a CSV file, processing it incrementally, one parse event at a time:
import ijson
import csv
def convert_json(file_path):
    did_write_headers = False
    headers = []
    row = []
    iterable_json = ijson.parse(open(file_path, 'r'))
    with open(file_path + '.csv', 'w') as csv_file:
        csv_writer = csv.writer(csv_file, delimiter=',', quotechar='"',
                                quoting=csv.QUOTE_MINIMAL)
        for prefix, event, value in iterable_json:
            if event == 'end_map':
                if not did_write_headers:
                    csv_writer.writerow(headers)
                    did_write_headers = True
                csv_writer.writerow(row)
                row = []
            if event == 'map_key' and not did_write_headers:
                headers.append(value)
            if event == 'string':
                row.append(value)
Simply using json.load() will take a lot of time. Instead, if the file holds one JSON object per line, you can load the data line by line as key/value pairs into a dictionary, append each dictionary to a final dictionary, and convert that to a pandas DataFrame, which will help with further analysis.
import json
import pandas as pd

def get_data():
    with open('Your_json_file_name', 'r') as f:
        for line in f:
            yield line

data = get_data()
data_dict = {}
for i, line in enumerate(data):
    each = {}
    # k and v are the key and value pair
    for k, v in json.loads(line).items():
        each[f'{k}'] = f'{v}'
    data_dict[i] = each

# data_dict gives you the data in DataFrame (table) form, but it will be
# transposed, so finally transpose the DataFrame:
Data = pd.DataFrame(data_dict)
Data_1 = Data.T
