Alternative to seaborn pairplot to visualize dataframe? - python

I have a fairly large pandas data frame (4000, 103), and for smaller dataframes I love using pairplot to visually spot patterns in my data. But for my larger dataset the same command runs for over an hour with no output.
Is there an alternative tool that produces the same outcome, or a way to speed up the command? I tried pandas' sample method to reduce the dataset, but it still takes over an hour with no output.
dfSample = myData.sample(100) # make dataset smaller
sns.pairplot(dfSample, diag_kind="hist")

You should sample from the columns as well: pairplot draws one subplot for every pair of columns, so 103 columns mean over 10,000 subplots no matter how few rows you keep. Replace your first line with
dfSample = myData.sample(10, axis=1)
and live happy.

Using Matplotlib with Dask

Let's say we have a pandas dataframe pd and a dask dataframe dd. When I want to plot the pandas one with matplotlib, I can do it easily:
fig, ax = plt.subplots()
ax.bar(pd["series1"], pd["series2"])
fig.savefig(path)
However, when I try to do the same with the dask dataframe, I get type errors such as:
TypeError: Cannot interpret 'string[python]' as a data type
(string[python] is just an example; whatever dtype your dd["series1"] has will appear here.)
So my question is: what is the proper way to use matplotlib with dask, and is it even a good idea to combine the two libraries?
One motivation to use dask instead of pandas is the size of the data. As such, swapping pandas DataFrame with dask DataFrame might not be feasible. Imagine a scatter plot, this might work well with 10K points, but if the dask dataframe is a billion rows, a plain matplotlib scatter is probably a bad idea (datashader is a more appropriate tool).
Some graphical representations are less sensitive to the size of the data, e.g. normalized bar chart should work well, as long as the number of categories does not scale with the data. In this case the easiest solution is to use dask to compute the statistics of interest before plotting them using pandas.
To summarise: I would consider the nature of the chart, figure out the best tool/representation, and if it's something that can/should be done with matplotlib, I would run the computations on the dask DataFrame to get the reduced result as a pandas dataframe and proceed with matplotlib.
SultanOrazbayev's answer is still spot on; here is one elaborating on the datashader option (which hvPlot calls under the hood).
Don't use Matplotlib, use hvPlot!
If you wish to plot the data while it's still large, I recommend using hvPlot, as it can natively handle dask dataframes. It also automatically provides interactivity.
Example
import numpy as np
import dask
import hvplot.dask
# Create Dask DataFrame with normally distributed data
df = dask.datasets.timeseries()
df['x'] = df['x'].map_partitions(lambda x: np.random.randn(len(x)))
df['y'] = df['y'].map_partitions(lambda x: np.random.randn(len(x)))
# Plot
df.hvplot.scatter(x='x', y='y', rasterize=True)

Data analysis : compare two datasets for devising useful features for population segmentation

Say I have two pandas dataframes, one containing data for general population and one containing the same data for a target group.
I assume this is a very common use case for population segmentation. My first idea for exploring the data would be some visualization, using e.g. a seaborn FacetGrid, barplot, or scatterplot, to get a general idea of the trends and differences.
However, I found that this operation is not as straightforward as I thought, since seaborn is made to analyze one dataset rather than compare two.
I found this SO answer which provides a solution. But I am wondering how people would go about it if the dataframe were huge and a concat operation were not possible?
Datashader does not seem to provide such features, as far as I have seen?
Thanks for any ideas on how to go about such a task.
I would use the Dask library when data is too big for pandas. Dask is a big-data tool that deliberately mirrors the pandas API, so it supports many of the same operations, including concat. I found dask easy enough to use and am using it for a couple of projects with dozens of columns and tens of millions of rows.

Visualization workflow of large Pandas datasets

I am handling datasets of several GB, which I process in parallel with the multiprocessing library.
It takes a lot of time, but that makes sense.
Once I have the resultant dataset, I need to plot it.
In this particular case, through matplotlib, I generate my stacked bar chart with:
plot = df.plot(kind='bar',stacked=True)
fig = plot.get_figure()
fig.savefig('plot.pdf', bbox_inches='tight')
At this point, for large datasets, it is simply unmanageable. This method runs sequentially, so it does not matter how many cores you have.
The generated plot is saved in a pdf, which in turn, is also really heavy, and slow to open.
Is there any alternative workflow to generate lighter plots?
So far, I've tried dropping alternate rows from the original dataset (a process that can be repeated several times until the dataset is more manageable). This is done with:
df = df.iloc[::2]
Let's say that it sort of works. However, I'd like to know if there are other approaches.
How do you handle visualization at this scale?
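The decimation step above generalises: instead of halving repeatedly with df.iloc[::2], the stride can be derived once from a target row count. A sketch on made-up data:

```python
import numpy as np
import pandas as pd

# Made-up large result set standing in for the real data
df = pd.DataFrame(np.random.default_rng(1).random((1_000_000, 3)),
                  columns=["a", "b", "c"])

# One stride computed from a target size replaces repeated df.iloc[::2]
target_rows = 500
stride = max(1, len(df) // target_rows)
small = df.iloc[::stride]
```

Saving the figure as a PNG instead of a PDF also helps: a PDF stores every bar as a vector object, so its size grows with the data, while a rasterised PNG does not.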

Plotting an Apache DataFrame

Is there any way to plot an Apache DataFrame? I figured out how to do it by converting it to a pandas dataframe, but that takes a lot of time and is not my goal.
In particular, the goal is to plot a map from an Apache DataFrame without conversion to a pandas DataFrame.
By plotting I mean using a library such as matplotlib or plotly to draw a graph or something similar.
Any ideas?
Thanks!
Do you mean plotting a Spark DataFrame?
In that case, you could do something like this, with yourDF as your DataFrame:
yourDF.show(100, truncate=False)
This prints your dataframe's structure and values (in this case, the first 100 rows) to the log, much as pandas would. With truncate=False you tell Spark to show the full contents of each cell instead of a truncated version.
EDIT: to plot directly from a dataframe, check the plotly library, or the
display(dataframe)
function, documented here.

Plot specifying column by name, upper case issue

I'm learning how to plot things (CSV files) in Python, using import matplotlib.pyplot as plt.
Column1;Column2;Column3;
1;4;6;
2;2;6;
3;3;8;
4;1;1;
5;4;2;
I can plot the one above with plt.plotfile('test0.csv', (0, 1), delimiter=';'), which gives me the figure below.
Do you see the axis labels, column1 and column2? They are in lower case in the figure, but in the data file they begin with upper case.
Also, I tried plt.plotfile('test0.csv', ('Column1', 'Column2'), delimiter=';'), which does not work.
So it seems matplotlib.pyplot works only with lowercase names :(
Putting this issue together with this other one, I guess it's time to try something else.
As I am pretty new to plotting in Python, I would like to ask: Where should I go from here, to get a little more than what matplotlib.pyplot provides?
Should I go to pandas?
You are mixing up two things here.
Matplotlib is designed for plotting data. It is not designed for managing data.
Pandas is designed for data analysis. Even if you were using pandas, you would still need to plot the data. How? Well, probably using matplotlib!
Independently of what you're doing, think of it as a three step process:
Data acquisition / data read-in
Data processing
Data representation / plotting
plt.plotfile() is a convenience function, which you can use if you don't need step 2 at all. But it surely has its limitations.
Methods to read in data (not a complete list, of course) include pure python open, python's csv reader or similar, numpy / scipy, pandas, etc.
Depending on what you want to do with your data, you can already choose a suitable input method: numpy for large numerical data sets, pandas for datasets which include qualitative data or rely heavily on cross-correlations, etc.
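As a sketch of the pandas route the question asks about: read_csv handles the semicolon delimiter and keeps the column names' original case. The data below is the question's test0.csv, inlined to keep the example self-contained:

```python
import io
import pandas as pd
import matplotlib
matplotlib.use("Agg")  # headless backend
import matplotlib.pyplot as plt

# The question's file, inlined; the trailing ';' yields an empty extra column
csv_text = """Column1;Column2;Column3;
1;4;6;
2;2;6;
3;3;8;
4;1;1;
5;4;2;
"""
df = pd.read_csv(io.StringIO(csv_text), sep=";").dropna(axis=1, how="all")

# Column names keep their case, so the labels read "Column1", "Column2"
fig, ax = plt.subplots()
ax.plot(df["Column1"], df["Column2"])
ax.set_xlabel("Column1")
ax.set_ylabel("Column2")
fig.savefig("test0.png")
```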
