I am writing data-harvesting code in Python. I'd like to produce a data-frame file that would be as easy to import into R as possible. I have full control over what my Python code produces, and I'd like to avoid unnecessary data processing on the R side, such as converting columns into factor/numeric vectors and such. Also, if possible, I'd like to make importing that data as easy as possible on the R side, preferably by calling a single function with a single argument: the file name.
How should I store data into a file to make this possible?
You can write data to CSV using Python's csv module (http://docs.python.org/2/library/csv.html); then it's a simple matter of using read.csv in R (see ?read.csv).
When you read data into R using read.csv, unless you specify otherwise, character strings will be converted to factors and numeric fields to numeric vectors. Empty values will be converted to NA.
The first thing you should do after importing some data is to look at str() of it (see ?str) to ensure the classes of the data contained within meet your expectations. Many times I have made the mistake of mixing a character value into a numeric field and ended up with a factor instead of a numeric vector.
One thing to note is that you may have to set your own NA strings. For example, if you have "-", ".", or some other character denoting a blank, you'll need to use the na.strings argument to read.csv (which can accept a vector of strings, e.g. c("-", ".")).
If you have date fields, you will need to convert them properly; R does not necessarily recognize dates or times without you specifying what they are (see ?as.Date).
If you know in advance what each column is going to be, you can specify the classes using colClasses.
A thorough read through ?read.csv will provide you with more detailed information, but I've outlined the common issues above.
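On the Python side, a minimal sketch of producing such a file with the csv module might look like this (the column names and values are placeholders, not anything from your harvester):

    import csv

    # Hypothetical rows produced by the harvester; names and values are placeholders.
    rows = [
        {"id": 1, "name": "alpha", "value": 3.14, "measured_on": "2013-01-15"},
        {"id": 2, "name": "beta", "value": 2.72, "measured_on": "2013-01-16"},
    ]

    with open("harvest.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["id", "name", "value", "measured_on"])
        writer.writeheader()    # the header row becomes the column names in R
        writer.writerows(rows)  # leave cells empty for missing values so they import as NA

On the R side this should then reduce to a single read.csv("harvest.csv") call, optionally with colClasses supplied to skip the type guessing.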
Brandon's suggestion of using CSV is great if your data isn't enormous, and particularly if it doesn't contain a whole honking lot of floating-point values, for which the CSV format is extremely inefficient.
An option that handles huge datasets a little better might be to construct an equivalent DataFrame in pandas, use its facilities to dump it to HDF5, and then open it in R that way. See this question for an example of that.
This other approach feels like overkill, but you could also transfer the dataframe directly in memory to R using pandas' experimental R interface and then save it from R directly.
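For the HDF5 route, the pandas side could be as small as this sketch (the file name and key are placeholders, and it needs the PyTables package installed):

    import pandas as pd

    # df stands in for whatever your harvester produced.
    df = pd.DataFrame({"id": [1, 2, 3], "value": [0.1, 0.2, 0.3]})

    # Dump to HDF5; the key names the dataset inside the file.
    df.to_hdf("harvest.h5", key="harvest", mode="w")

On the R side, packages such as rhdf5 or hdf5r can read the file back, at the cost of an extra dependency compared with plain read.csv.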
Related
I have a few Pandas dataframes with several million rows each. The dataframes have columns containing JSON objects, each with 100+ fields. I have a set of 24 functions that run sequentially on the dataframes, process the JSON (for example, compute some string distance between two fields in the JSON) and return a JSON with some new fields added. After all 24 functions execute, I get a final JSON which is then usable for my purposes.
I am wondering what the best ways are to speed up performance on this dataset. A few things I have considered and read up on:
It is tricky to vectorize because many operations are not as straightforward as "subtract this column's values from another column's values".
I have read some of the Pandas documentation, and a few of the options it suggests are Cython (it may be tricky to convert the string edit distance to Cython, especially since I am using an external Python package) and Numba/JIT (but this is said to be best for numerical computations only).
Possibly controlling the number of threads could be an option. The 24 functions can mostly operate without any dependencies on each other.
You are asking for advice, and this is not the best site for general advice, but I will nevertheless try to point a few things out.
The ideas you have already considered are not going to be helpful: neither Cython, Numba, nor threading is going to address the main problem, which is that the format of your data is not conducive to fast operations on it.
I suggest that you first "unpack" the JSONs that you store in the column(s?) of your dataframe. Preferably, each field of the JSON (mandatory or optional; deal with empty values at this stage) ends up being a column of the dataframe. If there are nested dictionaries, you may want to consider splitting the dataframe (particularly if the 24 functions work separately on separate nested JSON dicts). Alternatively, you should strive to flatten the JSONs. A sketch of this follows the steps below.
Convert to the data format that gives you the best performance. JSON stores all data in textual form; numbers are best kept in their binary format. You can do that column-wise on the columns you suspect should be converted, using df['col'].astype(...) (this works on the whole dataframe too).
Update the 24 functions to operate not on the JSON strings stored in the dataframe but on the columns of the dataframe.
Recombine the JSONs for storage (I assume you need them in this format). At this stage the implicit conversion from numbers back to strings will occur.
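A rough sketch of the unpacking, conversion, and recombination steps, assuming the JSON strings sit in a single column called json_col with a couple of illustrative fields (all names here are placeholders, not your actual schema):

    import json
    import pandas as pd

    # Placeholder input: one column of JSON strings.
    df = pd.DataFrame({"json_col": [
        '{"a": "1.5", "b": "x", "nested": {"c": "2"}}',
        '{"a": "3.0", "b": "y", "nested": {"c": "4"}}',
    ]})

    # Unpack: parse the strings and flatten nested dicts into ordinary columns.
    unpacked = pd.json_normalize(df["json_col"].map(json.loads).tolist())

    # Convert: move textual numbers into binary dtypes, column by column.
    unpacked["a"] = unpacked["a"].astype("float64")
    unpacked["nested.c"] = unpacked["nested.c"].astype("int64")

    # ... the 24 functions would operate on these columns here ...

    # Recombine: turn the rows back into JSON strings (numbers become text again).
    df["json_col"] = unpacked.apply(lambda row: row.to_json(), axis=1)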
Given the level of detail you provided in the question, these suggestions are necessarily brief. Should you have more detailed questions on any of the points above, it would be best to ask a maximally simple question about each of them (preferably containing a self-sufficient MWE).
I hope this is a good question, if I should post this as an issue on the PyPolars GitHub instead, please let me know.
I have a quite large parquet file where some columns contain binary data.
These columns are not interesting to me right now, so it is fine for me that PyPolars does not support the Binary datatype so far (this is how I understand it, at least; my question would be irrelevant if that were not the case!), but I would like to make full use of the query optimization by lazily reading the file with .scan_parquet() instead of .read_parquet().
Currently .scan_parquet() gives me the following error:
pyo3_runtime.PanicException: Arrow datatype Binary not supported by Polars. You probably need to activate that data-type feature.
and I don't know of a way to "activate that data-type feature".
So my workaround is to use .read_parquet() and specify in advance which columns I want to use so that it never attempts to read the Binary ones.
The problem is that I am doing exploratory data analysis and there is a large number of columns, so for one it is annoying to have to specify a long list of columns (basically ~150 minus the two that cause the issue), and it is also inefficient to read all of these columns every time when I only need some small subset (it is even more annoying to keep editing that list whenever I, for example, add some filter).
It would be ideal if I could use .scan_parquet and let the query optimizer figure out that it only needs to read the (unproblematic) columns that I actually need.
Is there a better way of doing things that I am not seeing?
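For reference, the workaround described above looks roughly like this (the file path and column names are made-up placeholders), compared with the lazy version I would like to write:

    import polars as pl

    # Current workaround: eager read, listing every non-Binary column by hand.
    wanted = ["col_a", "col_b", "col_c"]  # in reality ~150 names minus the two Binary ones
    df = pl.read_parquet("data.parquet", columns=wanted)

    # What I would like to do instead, letting the optimizer prune the columns:
    # lf = pl.scan_parquet("data.parquet")
    # df = lf.select(["col_a", "col_b"]).filter(pl.col("col_a") > 0).collect()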
I have multiple Excel files with different sheets in each file. These files have been made by people, so each one has a different format, a different number of columns, and a different structure for representing the data.
For example, in one sheet the dataframe/table starts at the 8th row, second column. In another it starts at row 122, etc.
What I want to retrieve is something common to all these Excel files: variable names and their associated information.
However, I don't know how I could possibly retrieve all this information without needing to parse each individual file, which is not an option because there are lots of these files with lots of sheets in each one.
I have been thinking about using regex as well as edit distance between words, but I don't know if that is the best option.
Any help is appreciated.
I will divide my answer into what I think you can do now, and suggestions for the future (if feasible).
An attempt to "solve" the problem you have with existing files.
Without regularity in your input files (such as at least a common name in the columns), I think what you're describing is among the best solutions. Having said that, perhaps a "fancier" similarity metric between column names would be more useful than regular expressions.
If you believe that there will be some regularity in the column names, you could look at string distances such as the Hamming distance or the Levenshtein distance, and use a threshold on the distance that works for you. As an example, say you have a function d(a: str, b: str) -> float that calculates a distance between column names; you could then do something like this:
    # this variable is a small sample of "expected" column names
    plausible_columns = [
        'interesting column',
        'interesting',
        'interesting-column',
        'interesting_column',
    ]

    for f in excel_files:
        # process the file until you find columns
        # I'm assuming you can put the column names into
        # a variable `columns` here.
        for c in columns:
            for p in plausible_columns:
                if d(c, p) < threshold:
                    # do something to process the column,
                    # add to a pandas DataFrame, calculate the mean,
                    # etc.
                    ...
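One concrete possibility for d, using nothing outside the standard library (my own assumption here; any string-distance package would work just as well), is to turn difflib's similarity ratio into a distance:

    from difflib import SequenceMatcher

    def d(a: str, b: str) -> float:
        # SequenceMatcher returns a similarity in [0, 1]; invert it into a distance.
        return 1.0 - SequenceMatcher(None, a.lower(), b.lower()).ratio()

    threshold = 0.3  # purely illustrative; tune it against your own column names
    print(d("Interesting Column", "interesting_column"))  # ~0.06, well under the threshold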
If the data itself can tell you something about whether you should process it (such as having a particular distribution, or being in a particular range), you can use such features to decide whether to use that column or not. Even better, you can combine several of these characteristics to make a finer decision.
Having said this, I don't think a fully automated solution exists without inspecting some of the data manually and studying the distribution of the data, the variability in the names of the columns, etc.
For the future
Even with fancy methods to calculate features and some data analysis on the data you have right now, I think it would be impossible to guarantee that you will always get the data you need (by the very nature of the problem). A reasonable way to solve this, in my opinion (and if it is feasible in whatever context you're working in), is to impose a stricter format at the data-generation end (I suppose this is a manual process, with people entering data into Excel directly). I would argue that the best solution is to get rid of the problem at the root: create a unified form or Excel sheet template and distribute it to the people who will fill it with data, so that the data can be ingested automatically while minimizing the risk of errors.
I am very, very new to Python and still learning my way around. I am trying to process some data, and I have a very big raw_data.csv file that reads as follows:
ARB1,k_abc,t_def,s_ghi,1.321
ARB2,ref,k_jkl,t_mno,s_pqr,0.31
ARB3,k_jkl,t_mno,s_pqr,qrs,0.132
ARB4,sql,k_jkl,t_mno,s_pqr,ets,0.023
I want to append this data to an existing all_data.csv, and it should look like:
ARB1,k_abc,t_def,s_ghi,1.321
ARB2,k_jkl,t_mno,s_pqr,0.31
ARB3,k_jkl,t_mno,s_pqr,0.132
ARB4,k_jkl,t_mno,s_pqr,0.023
As you can see, the code has to detect partial strings and numbers and rearrange them in an orderly way (by excluding the cells that don't have them). I was trying to use the csv module with very little luck. Can anyone help please?
You can parse this using pandas.read_csv. Alternatively, if you don't want to use pandas, I would recommend simply reading the data in a line at a time and splitting on commas using Python's string operations. You can build a 2-D list that you populate row by row as you read in more data.
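If you stay with the csv module, a minimal sketch could look like the one below. It assumes the rule implied by your example: keep the first cell (the ARB identifier), any cell starting with k_, t_ or s_, and the trailing number, and drop everything else (file names taken from your question):

    import csv

    PREFIXES = ("k_", "t_", "s_")

    def is_number(cell):
        try:
            float(cell)
            return True
        except ValueError:
            return False

    with open("raw_data.csv", newline="") as src, \
         open("all_data.csv", "a", newline="") as dst:
        writer = csv.writer(dst)
        for row in csv.reader(src):
            # keep the identifier, the k_/t_/s_ cells and the numeric value, in that order
            cleaned = [row[0]]
            cleaned += [cell for cell in row[1:] if cell.startswith(PREFIXES)]
            cleaned += [cell for cell in row[1:] if is_number(cell)]
            writer.writerow(cleaned)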
I have a large dataset I need to read into a pandas dataframe.
It contains a lot of categorical data consisting of some rather long strings.
Trying to use the pandas read_sql_query method, I can't seem to specify which columns should be treated as categorical data.
This means I run into memory issues.
I have a background in R, where I can specify things like "strings as factors", meaning you can have long strings with a small memory footprint since they are indexed as integers in R. Can't I do the same in Python/pandas?
I would like to do it as I read the data from the database, not after. Converting a string column to category in pandas is easy once you have it in a dataframe, but that is not what I'm looking for.
I understand that I could simply encode the data in the database, but I would like to avoid that.
I'm afraid that currently encoding on the DB side (this can be done using a JOIN with a mapping table) is the only viable option.
There were a few similar feature requests:
https://github.com/pandas-dev/pandas/issues/17862
https://github.com/pandas-dev/pandas/issues/13049
https://github.com/pandas-dev/pandas/issues/6798
https://github.com/pandas-dev/pandas/issues/17560
Reading the data in chunks and converting each chunk to the category dtype might be tricky, as one would need to unify the categories across all chunks (see the sketch below).
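For completeness, a sketch of that chunked approach (the connection, query, and column name are placeholders); it relies on pandas.api.types.union_categoricals to line up the per-chunk categories before concatenating:

    import pandas as pd
    from pandas.api.types import union_categoricals

    engine = ...  # placeholder: an existing SQLAlchemy engine or DBAPI connection

    chunks = []
    for chunk in pd.read_sql_query("SELECT * FROM big_table", con=engine, chunksize=100_000):
        chunk["long_string_col"] = chunk["long_string_col"].astype("category")
        chunks.append(chunk)

    # Each chunk gets its own category set, so unify them first; otherwise
    # pd.concat silently falls back to object dtype for that column.
    union = union_categoricals([c["long_string_col"] for c in chunks])
    for c in chunks:
        c["long_string_col"] = pd.Categorical(c["long_string_col"], categories=union.categories)

    df = pd.concat(chunks, ignore_index=True)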