Is there a way to set a buffer of '0' when using the Pandas dataframe.to_csv()? I looked through the documentation and it appears to not allow that as an argument. Am I overlooking something?
Edit: I am asking because I am outputting dataframes which range in size from several hundred to many thousands of rows (always with the same 7 columns), and a later process that eventually examines the file is occasionally failing because sometimes it isn't finished being written.
I could of course introduce a delay (of 3-5 minutes), but I'd rather not arbitrarily slow down my code if I don't have to - I'd rather force the code to wait for the output to finish before moving on, and when writing files with open() it's nice to be able to set a buffer value of '0'.
If I'm understanding your question correctly, you could implement the following. This snippet passes a StringIO instance as the first argument for to_csv, and calls seek(0):
from io import StringIO  # on Python 2, use: import StringIO; StringIO.StringIO()

#### your code here... assuming something like:
#### import pandas as pd
#### data = {"key1": ["value1"]}
#### dataframe = pd.DataFrame(data)

buffer = StringIO()
dataframe.to_csv(buffer)
buffer.seek(0)
output = buffer.getvalue()
buffer.close()
You could then manipulate output however you choose.
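For instance, if the goal is simply to be sure the file is completely on disk before another process reads it, you could write the string yourself and flush it explicitly (a sketch; the output path is a placeholder):
import os

# Hypothetical destination path
with open("output.csv", "w", newline="") as f:
    f.write(output)
    f.flush()             # push Python's userspace buffer to the OS
    os.fsync(f.fileno())  # ask the OS to commit it to disk before moving on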
I'm currently using the following line to read Excel files
df = pd.read_excel(f"myfile.xlsx")
The problem is the enormous slowdown that occurs when I use data from this Excel file, for example in functions. I think this occurs because I'm not reading the file via a context manager. Is there a way of combining a 'with' statement with the pandas 'read' command so the code runs more smoothly? Sorry that this is vague, I'm just learning about context managers.
Edit: Here is an example of a piece of code that does not run...
import pandas as pd
import numpy as np
def fetch_excel(x):
    df_x = pd.read_excel(f"D00{x}_balance.xlsx")
    return df_x

T = np.zeros(3000)
for i in range(0, 3000):
    T[i] = fetch_excel(1).iloc[i+18, 0]

print(fetch_excel(1).iloc[0,0])
...or it takes more than 5 minutes, which seems excessive to me. Either way, I can't work with a delay like that. If I comment out the for loop, this does work.
Usually the key reason to use standard context managers for reading in files is convenience of closing and opening the underlying file descriptor. You can create context managers to do anything you'd like, though. They're just functions.
Unfortunately they aren't likely to solve the problem of slow loading times reading in your excel file.
You are accessing the HDD, opening, reading and converting the SAME file D001_balance.xlsx 3000 times to access a single piece of data - different row each time from 18 to 3017. This is pointless as the data is all in the DataFrame after one reading. Just use:
df_x = pd.read_excel("D001_balance.xlsx")

T = np.zeros(3000)
for i in range(0, 3000):
    T[i] = df_x.iloc[i+18, 0]

print(df_x.iloc[0,0])
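As an optional further simplification, the rows you need are contiguous, so the loop can be replaced with a single slice:
# Rows 18..3017 of the first column, pulled out in one vectorized step
T = df_x.iloc[18:3018, 0].to_numpy()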
I downloaded IBM's Airline Reporting Carrier On-Time Performance Dataset; the uncompressed CSV is 84 GB. I want to run an analysis, similar to Flying high with Vaex, with the vaex library.
I tried to convert the CSV to an HDF5 file to make it readable for the vaex library:
import time
import vaex

start = time.time()
df = vaex.from_csv(r"D:\airline.csv", convert=True, chunk_size=1000000)
end = time.time()
print("Time:", (end - start), "Seconds")
I always get an error when running the code:
RuntimeError: Dirty entry flush destroy failed (file write failed: time = Fri Sep 30 17:58:55 2022
, filename = 'D:\airline.csv_chunk_8.hdf5', file descriptor = 7, errno = 22, error message = 'Invalid argument', buf = 0000021EA8C6B128, total write size = 2040, bytes this sub-write = 2040, bytes actually written = 18446744073709551615, offset = 221133661).
Second run, I get this error:
RuntimeError: Unable to flush file's cached information (file write failed: time = Fri Sep 30 20:18:19 2022
, filename = 'D:\airline.csv_chunk_18.hdf5', file descriptor = 7, errno = 22, error message = 'Invalid argument', buf = 000002504659B828, total write size = 2048, bytes this sub-write = 2048, bytes actually written = 18446744073709551615, offset = 348515307)
Is there an alternative way to convert the CSV to hdf5 without Python? For example, a downloadable software which can do this job?
I'm not familiar with vaex, so can't help with usage and functions. However, I can read error messages. :-)
It reports "bytes actually written" as a huge number (18_446_744_073_709_551_615, which is 2**64 - 1, i.e. -1 interpreted as an unsigned integer and a strong hint that the underlying write failed), far larger than the 84 GB CSV. Some possible explanations:
you ran out of disk,
you ran out of memory, or
you hit some other error
To diagnose, try testing with a small csv file and see if vaex.from_csv() works as expected. I suggest the lax_to_jfk.csv file.
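A minimal check along those lines might look like this (assuming lax_to_jfk.csv is in the working directory):
import vaex

# If this small conversion succeeds, the basic pipeline works and the
# problem is specific to the 84 GB file or the environment.
df_small = vaex.from_csv("lax_to_jfk.csv", convert=True)
print(df_small.head(5))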
Regarding your question "is there an alternative way to convert a CSV to HDF5?": why not use Python?
Are you more comfortable with other languages? If so, you can install HDF5 and write your code with their C or Fortran API.
OTOH, if you are familiar with Python, there are other packages you can use to read the CSV file and create the HDF5 file.
Python packages to read the CSV
Personally, I like NumPy's genfromtxt() to read the CSV. (You can also use loadtxt(), if you don't have missing values and don't need the field names.) However, I think you will run into memory problems reading an 84 GB file. That said, you can use the skip_header and max_rows parameters with genfromtxt() to read and load a subset of lines. Alternately, you can use csv.DictReader(), which reads a line at a time; that avoids memory issues, but it could be very slow loading the HDF5 file.
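As a rough sketch of the chunked genfromtxt idea (the file name and chunk size are placeholders; note that each call rescans the file from the start, and string field widths are fixed by whatever the first chunk contained):
import numpy as np

chunk = 1_000_000  # rows per chunk (an assumption; tune to available memory)

# First chunk: read the header plus the first `chunk` data rows
first = np.genfromtxt('airline.csv', delimiter=',', dtype=None,
                      names=True, encoding='bytes', max_rows=chunk)

# Next chunk: skip the header and the rows already read, reuse the dtype
second = np.genfromtxt('airline.csv', delimiter=',', dtype=first.dtype,
                       encoding='bytes', skip_header=1 + chunk,
                       max_rows=chunk)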
Python packages to create the HDF5 file
I have used both h5py and pytables (aka tables) to create and read HDF5 files. Once you load the CSV data to a NumPy array, it's a snap to create the HDF5 dataset.
Here is a very simple example that reads the lax_to_jfk.csv data and loads it into an HDF5 file.
import numpy as np
import h5py

csv_name = 'lax_to_jfk'
rec_arr = np.genfromtxt(csv_name + '.csv', delimiter=',',
                        dtype=None, names=True, encoding='bytes')

with h5py.File(csv_name + '.h5', 'w') as h5f:
    h5f.create_dataset(csv_name, data=rec_arr)
Update:
After posting this example, I decided to test with a larger file (airline_2m.csv). It's 861 MB and has 2M rows. I discovered the code above doesn't work. However, it's not because of the number of rows; the problem is the columns (field names). Turns out the data isn't as clean: there are 109 field names on row 1, and some rows have 111 columns of data. As a result, the auto-generated dtype doesn't have a matching field. While investigating this, I also discovered that many rows only have values for the first 56 fields. In other words, fields 57-111 are not very useful. One solution is to add the usecols=() parameter. The code below reflects this modification and works with this test file. (I have not tried testing with your large file airline.csv. Given its size, you will likely need to read and load incrementally.)
csv_name = 'airline_2m'
rec_arr = np.genfromtxt(csv_name + '.csv', delimiter=',',
                        dtype=None, names=True, encoding='bytes',
                        usecols=range(56))

with h5py.File(csv_name + '.h5', 'w') as h5f:
    h5f.create_dataset(csv_name, data=rec_arr)
I tried reproducing your example. I believe the problem you are facing is quite common when dealing with CSVs. The schema is not known.
Sometimes there are "mixed types" and pandas (used underneath vaex's read_csv or from_csv) casts those columns as dtype object.
Vaex does not really support such mixed dtypes, and requires each column to be of a single uniform type (kind of like a database).
So how do you get around this? Well, the best way I can think of is to use the dtype argument to explicitly specify the types of all columns (or those that you suspect or know to have mixed types). I know this file has 100+ columns and that's annoying... but that is also kind of the price to pay when using a format such as CSV...
Another thing I noticed is the encoding: using pure pandas.read_csv failed at some point because of encoding and required adding encoding="ISO-8859-1". This is also supported by vaex.open (since the args are just passed down to pandas).
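A rough sketch combining both points above (the column names in the dtype dict are placeholders rather than the real field names; the extra kwargs are forwarded to pandas.read_csv):
import vaex

# Hypothetical columns that hold mixed values; force them to strings
dtypes = {"TailNum": str, "CancellationCode": str}

df = vaex.from_csv(r"D:\airline.csv", convert=True, chunk_size=1_000_000,
                   dtype=dtypes, encoding="ISO-8859-1")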
In fact, if you want to do manually what vaex.open does automatically for you (given that this CSV file might not be as clean as one would hope), do something like the following (this is pseudo code, but I hope close to the real thing):
# Pseudo code: `file` is the path to the CSV and `dtype` is a dict mapping
# column names to types, as discussed above.
# Iterate over the file in chunks
for i, df_tmp in enumerate(pd.read_csv(file, chunksize=11_000_000,
                                       encoding="ISO-8859-1", dtype=dtype)):
    # Assert or check or do whatever needs doing to ensure column types are as they should be

    # Pass the data to vaex (this does not take extra RAM):
    df_vaex = vaex.from_pandas(df_tmp)
    # Export this chunk into HDF5
    df_vaex.export_hdf5(f'chunk_{i}.hdf5')

# When the above loop finishes, just concat and export the data to a single
# file if needed (gives some performance benefit).
df = vaex.open('chunk*.hdf5')
df.export_hdf5('converted.hdf5', progress='rich')
I've seen a potentially much better/faster way of doing this with vaex, but it is not released yet (I saw it in the code repo on GitHub), so I will not go into it; but if you can install from source and want me to elaborate further, feel free to drop a comment.
Hope this at least gives some ideas on how to move forward.
EDIT:
In the last couple of versions of vaex-core, vaex.open() opens CSV files lazily, so you can just export to hdf5/arrow directly and it will do it in one go. Check the docs for more details: https://vaex.io/docs/guides/io.html#Text-based-file-formats
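With that newer behaviour, the whole conversion might reduce to something like this (a sketch, assuming a recent vaex version):
import vaex

df = vaex.open(r"D:\airline.csv")             # lazy, does not load the full CSV
df.export_hdf5(r"D:\airline.hdf5", progress='rich')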
I've got JSON files with a total size of 3 GB. I need to parse some data from them into a Pandas DataFrame. I've already made it a bit faster with a custom library to parse the JSON, but it is still too slow. It also works in only one thread, which is a problem too. How can I make it faster? The main problem is that it starts at 60 it/s, but by the 50000th iteration the speed drops to 5 it/s; RAM is still not fully used, so that is not the problem. Here is an example of what I am doing:
import tqdm
import ujson
import pandas as pd

# df_train is assumed to exist already as a DataFrame being filled in
df_train = pd.DataFrame()

with open('data/train.jsonlines') as fin:
    for line in tqdm.tqdm_notebook(fin):
        record = ujson.loads(line)
        for target in record['damage_targets']:
            df_train.loc[record['id'], 'target_{}'.format(target)] = record["damage_targets"][target]
This follows from a discussion with piRSquared here, where I found that read_csv seems to have its own type inference methods that appear to be broader in their ability to obtain the correct type. It also appears to be more fault-tolerant in the case of missing data, electing for NaN instead of throwing ValueError as its default behaviour.
There's a lot of cases where the inferred datatypes are perfectly acceptable for my work but this functionality doesn't seem to be exposed when instantiating a DataFrame, or anywhere else in the API that I can find, meaning that I have to manually deal with dtypes unnecessarily. This can be tedious if you have hundreds of columns. The closest I can find is convert_objects() but it doesn't handle the bools in this case. The alternative I could use is to dump to disk and read it back in, which is grossly inefficient.
The below example illustrates the default behaviour of read_csv vs. the default behaviour of the conventional methods for setting dtype (correct in V 0.20.3). Is there a way to access the type inference of read_csv without dumping to disk? More generally, is there a reason why read_csv behaves like this?
Example:
import numpy as np
import pandas as pd
import csv
data = [['string_boolean', 'numeric', 'numeric_missing'],
        ['FALSE', 23, 50],
        ['TRUE', 19, 12],
        ['FALSE', 4.8, '']]

with open('my_csv.csv', 'w') as outfile:
    writer = csv.writer(outfile)
    writer.writerows(data)
# Reading from CSV
df = pd.read_csv('my_csv.csv')
print(df.string_boolean.dtype) # Automatically converted to bool
print(df.numeric.dtype) # Float, as expected
print(df.numeric_missing.dtype) # Float, doesn't care about empty string
# Creating directly from list without supplying datatypes
df2 = pd.DataFrame(data[1:], columns=data[0])
df2.string_boolean = df2.string_boolean.astype(bool) # Doesn't work - ValueError
df2.numeric_missing = df2.numeric_missing.astype(np.float64) # Doesn't work
# Creating but forcing dtype doesn't work
df3 = pd.DataFrame(data[1:], columns=data[0],
                   dtype=[bool, np.float64, np.float64])
# The working method
df4 = pd.DataFrame(data[1:], columns=data[0])
df4.string_boolean = df4.string_boolean.map({'TRUE': True, 'FALSE': False})
df4.numeric_missing = pd.to_numeric(df4.numeric_missing)
One solution is to use a StringIO object. The only difference is that it keeps all the data in memory, instead of writing to disk and reading back in.
Code is as follows (note: Python 3!):
import numpy as np
import pandas as pd
import csv
from io import StringIO
data = [['string_boolean', 'numeric', 'numeric_missing'],
        ['FALSE', 23, 50],
        ['TRUE', 19, 12],
        ['FALSE', 4.8, '']]

with StringIO() as fobj:
    writer = csv.writer(fobj)
    writer.writerows(data)
    fobj.seek(0)
    df = pd.read_csv(fobj)

print(df.head(3))
print(df.string_boolean.dtype)   # Automatically converted to bool
print(df.numeric.dtype)          # Float, as expected
print(df.numeric_missing.dtype)  # Float, doesn't care about empty string
The with StringIO() as fobj isn't really necessary: fobj = StringIO() will work just as well. But since the context manager will close the StringIO() object outside its scope, the df = pd.read_csv(fobj) has to be inside it.
Note also the fobj.seek(0), which is another necessity: in your solution, closing and reopening the file automatically sets the file pointer back to the start of the file, whereas here you have to rewind the buffer yourself.
A note on Python 2 vs Python 3
I actually tried to make the above code Python 2/3 compatible. That became a mess, because of the following: Python 2 has an io module, just like Python 3, whose StringIO class makes everything unicode (also in Python 2; in Python 3 it is, of course, the default).
That is great, except that the csv writer module in Python 2 is not unicode compatible.
Thus, the alternative is to use the (older) Python 2 (c)StringIO module, for example as follows:
try:
    from cStringIO import StringIO
except ModuleNotFoundError:  # Python 3
    from io import StringIO
and things will be plain text in Python 2, and unicode in Python 3.
Except that now, cStringIO.StringIO does not have a context manager, and the with statement will fail. As I mentioned, it is not really necessary, but I was keeping things as close as possible to your original code.
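One possible workaround, if you do want the with statement there, is contextlib.closing, which simply calls close() on exit:
from contextlib import closing

with closing(StringIO()) as fobj:
    writer = csv.writer(fobj)
    writer.writerows(data)
    fobj.seek(0)
    df = pd.read_csv(fobj)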
In other words, I could not find a nice way to stay close to the original code without ridiculous hacks.
I've also looked at avoiding the CSV writer completely, which leads to:
text = '\n'.join(','.join(str(item).strip("'") for item in items)
                 for items in data)

with StringIO(text) as fobj:
    df = pd.read_csv(fobj)
which is perhaps neater (though a bit less clear), and Python 2/3 compatible. (I don't expect it to work for everything that the csv module can handle, but here it works fine.)
Why can't pd.DataFrame(...) do the conversion?
Here, I can only speculate.
I would think the reasoning is that when the input are Python objects (dicts, lists), the input is known, and in hands of the programmer. Therefore, it is unlikely, perhaps even illogical, that that input would contain strings such as 'FALSE' or ''. Instead, it would normally contain the objects False and np.nan (or math.nan), since the programmer would already have taken care of the (string) translation.
Whereas for a file (CSV or other), the input can be anything: your colleague might send an Excel CSV file, or someone else sends you a Gnumeric CSV file. I don't know how standardised CSV files are, but you'd probably need some code to allow for exceptions, and overall for the conversion of the strings to Python (NumPy) format.
So in that sense, it is actually illogical to expect pd.DataFrame(...) to accept just anything: instead, it should accept something that is properly formatted.
You might argue for a convenience method that takes a list like yours, but a list is not a CSV file (which is just a bunch of characters, including newlines). Plus, I expect pd.read_csv has the option to read files in chunks (perhaps even line by line), which becomes harder if you feed it a string with newlines instead: you can't really read that line by line, since you would have to split it on newlines and keep all the lines in memory, and you already have the full string in memory somewhere, instead of on disk. But I digress.
Besides, the StringIO approach takes just a few lines to perform precisely this trick.
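Wrapped up as a small helper (just a sketch of that same trick; the function name is made up for illustration):
import csv
from io import StringIO
import pandas as pd

def read_records(rows):
    """Round-trip a list of rows (first row = header) through read_csv
    so that its type inference is applied."""
    buf = StringIO()
    csv.writer(buf).writerows(rows)
    buf.seek(0)
    return pd.read_csv(buf)

# `data` is the list of rows from the example above
df = read_records(data)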
I am trying to create a dask.dataframe from a bunch of large CSV files (currently 12 files, 8-10 million lines and 50 columns each). A few of them might fit together into my system memory but all of them at once definitely will not, hence the use of dask instead of regular pandas.
Since reading each csv file involves some extra work (adding columns with data from the file path), I tried creating the dask.dataframe from a list of delayed objects, similar to this example.
This is my code:
import dask.dataframe as dd
from dask.delayed import delayed
import os
import pandas as pd
def read_file_to_dataframe(file_path):
    df = pd.read_csv(file_path)
    df['some_extra_column'] = 'some_extra_value'
    return df

if __name__ == '__main__':
    path = '/path/to/my/files'
    delayed_collection = list()
    for rootdir, subdirs, files in os.walk(path):
        for filename in files:
            if filename.endswith('.csv'):
                file_path = os.path.join(rootdir, filename)
                delayed_reader = delayed(read_file_to_dataframe)(file_path)
                delayed_collection.append(delayed_reader)

    df = dd.from_delayed(delayed_collection)
    print(df.compute())
When starting this script (Python 3.4, dask 0.12.0), it runs for a couple of minutes while my system memory constantly fills up. When it is fully used, everything starts lagging and it runs for some more minutes, then it crashes with killed or MemoryError.
I thought the whole point of dask.dataframe was to be able to operate on larger-than-memory dataframes that span over multiple files on disk, so what am I doing wrong here?
edit: Reading the files instead with df = dd.read_csv(path + '/*.csv') seems to work fine as far as I can see. However, this does not allow me to alter each single dataframe with additional data from the file path.
edit #2:
Following MRocklin's answer, I tried reading my data with dask's read_bytes() method, using the single-threaded scheduler, and doing both in combination.
Still, even when reading chunks of 100MB in single-threaded mode on a laptop with 8GB of memory, my process gets killed sooner or later. Running the code stated below on a bunch of small files (around 1MB each) of similar shape works fine though.
Any ideas what I am doing wrong here?
import dask
from dask.bytes import read_bytes
import dask.dataframe as dd
from dask.delayed import delayed
from io import BytesIO
import pandas as pd
def create_df_from_bytesio(bytesio):
    df = pd.read_csv(bytesio)
    return df

def create_bytesio_from_bytes(block):
    bytesio = BytesIO(block)
    return bytesio

path = '/path/to/my/files/*.csv'

sample, blocks = read_bytes(path, delimiter=b'\n', blocksize=1024*1024*100)

delayed_collection = list()
for datafile in blocks:
    for block in datafile:
        bytesio = delayed(create_bytesio_from_bytes)(block)
        df = delayed(create_df_from_bytesio)(bytesio)
        delayed_collection.append(df)

dask_df = dd.from_delayed(delayed_collection)
print(dask_df.compute(get=dask.async.get_sync))
If each of your files is large then a few concurrent calls to read_file_to_dataframe might be flooding memory before Dask ever gets a chance to be clever.
Dask tries to operate in low memory by running functions in an order such that it can delete intermediate results quickly. However if the results of just a few functions can fill up memory then Dask may never have a chance to delete things. For example if each of your functions produced a 2GB dataframe and if you had eight threads running at once, then your functions might produce 16GB of data before Dask's scheduling policies can kick in.
Some options
Use dask.bytes.read_bytes
The reason why read_csv works is that it chunks up large CSV files into many ~100MB blocks of bytes (see the blocksize= keyword argument). You could do this too, although it's tricky because you need to always break on an endline.
The dask.bytes.read_bytes function can help you here. It can convert a single path into a list of delayed objects, each corresponding to a byte range of that file that starts and stops cleanly on a delimiter. You would then put these bytes into an io.BytesIO (standard library) and call pandas.read_csv on that. Beware that you'll also have to handle headers and such. The docstring to that function is extensive and should provide more help.
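A rough sketch of that route (paths and blocksize are placeholders, and note that the header line of each file is still left inside its first block here, which you would want to drop):
from io import BytesIO

import dask.dataframe as dd
from dask.bytes import read_bytes
from dask.delayed import delayed
import pandas as pd

def block_to_frame(block, columns):
    # Each block starts and stops on a newline but carries no header row
    return pd.read_csv(BytesIO(block), names=columns)

sample, blocks = read_bytes('/path/to/my/files/*.csv',
                            delimiter=b'\n', blocksize=100 * 2**20)

# Recover the column names from the sampled start of the first file
columns = pd.read_csv(BytesIO(sample), nrows=0).columns.tolist()

parts = [delayed(block_to_frame)(block, columns)
         for datafile in blocks
         for block in datafile]
ddf = dd.from_delayed(parts)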
Use a single thread
In the example above everything would be fine if we didn't have the 8x multiplier from parallelism. I suspect that if you only ran a single function at once that things would probably pipeline without ever reaching your memory limit. You can set dask to use only a single thread with the following line
dask.set_options(get=dask.async.get_sync)
Note: For Dask versions >= 0.15, you need to use dask.local.get_sync instead.
Make sure that results fit in memory (response to edit 2)
If you make a dask.dataframe and then compute it immediately
ddf = dd.read_csv(...)
df = ddf.compute()
You're loading in all of the data into a Pandas dataframe, which will eventually blow up memory. Instead it's better to operate on the Dask dataframe and only compute on small results.
# result = df.compute() # large result fills memory
result = df.groupby(...).column.mean().compute() # small result
Convert to a different format
CSV is a pervasive and pragmatic format, but also has some flaws. You might consider a data format like HDF5 or Parquet.
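For example, a one-off conversion to Parquet with dask itself could look like this (paths are placeholders; a Parquet engine such as pyarrow or fastparquet must be installed):
import dask.dataframe as dd

ddf = dd.read_csv('/path/to/my/files/*.csv')
ddf.to_parquet('/path/to/my/files/parquet/')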