Plotting multiple time series from pandas dataframe - python

I have a pandas dataframe loaded from file in the following format:
ID,Date,Time,Value1,Value2,Value3,Value4
0063,04/21/2020,11:22:55,0.0347,0.41,1440,10.5
0064,04/21/2020,11:22:56,0.0355,0.41,1440,10.4
...
9849,04/22/2020,10:46:19,0.058,1.05,1460,10.6
I have tried multiple methods of plotting a line graph of each value against date/time, or a single figure with multiple subplots, with limited success. I am hoping someone with more experience has an elegant solution to try, as opposed to my blind swinging. Note that the dataset may have large gaps in time between days.
Thanks!

Parsing dates during the import of the pandas dataframe turned out to be my biggest issue. Once I added parse_dates to the pd.read_csv call, I was able to define the datetime column and plot with matplotlib as expected.
import pandas as pd

df = pd.read_csv(input_text, parse_dates=[["Date", "Time"]])  # combines "Date" and "Time" into one "Date_Time" column
dt = df["Date_Time"]

Related

Pandas Data frames and sorting values

I am having a difficult time with this homework assignment and am not sure where I went wrong. I have tried several things, and believe my issue lies in sort_values or maybe in the groupby command.
The issue is that I want to display graph data from the year 2007 only (using pandas and plotly in a Jupyter notebook for my class). I mostly have the graph I want, but cannot get it to display the data correctly. It simply isn't filtering out the other years, or taking data from the specific dates requested.
import pandas as pd
import plotly.express as px
df = pd.read_csv('Data/Country_Data.csv')
print(df.shape)
df.head(2)
df_Q1 = df.query("year == '2007'")
print(df_Q1.shape)
df_Q1.head()
This is where the issue begins: it prints a table with only header information, i.e. all the column names but none of the data. Later on it displays a graph of what I assume is the most recent death data rather than the year 2007 as specified.
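A likely cause, for anyone hitting the same symptom: if the year column was read in as integers, comparing it against the string '2007' matches nothing, so the query returns an empty frame. A minimal check and fix might look like this (file and column names taken from the code above):

import pandas as pd

df = pd.read_csv('Data/Country_Data.csv')
print(df['year'].dtype)           # if this prints int64, the string comparison returns no rows

df_Q1 = df.query("year == 2007")  # compare against an integer, not the string '2007'
print(df_Q1.shape)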

Why doesn't Seaborn relplot print datetime values on the x-axis?

I'm trying to solve a Kaggle competition to deepen my data science knowledge. I'm dealing with an issue in the seaborn library: I'm trying to plot the distribution of a feature along the date axis, but the relplot function is not able to print the datetime values. On the output I see a big black box instead of values.
Here there is my code, for plotting:
rainfall_types = list(auser.iloc[:, 1:])  # column names after the first (Date) column
grid = sns.relplot(x='Date', y=rainfall_types[0], kind="line", data=auser)
grid.fig.autofmt_xdate()
(Screenshots: the seaborn relplot output and the head of the dataset.)
I found the error. Practically, when you use pandas.read_csv(dataset), any datetime columns are parsed as object dtype, and Python reads those values as 'str' (strings). So when you go to plot them, matplotlib is not able to show them correctly.
To avoid this behaviour, you should convert the datetime values into datetime objects by using:
df = pandas.read_csv(dataset, parse_dates=['Column_Date'])
In this way we indicate to the pandas library that there is a date column identified by the key 'Column_Date' and that it has to be converted into datetime objects.
If you want, you can also use the date column as the index of your dataframe, to speed up analysis along the time axis. To do that, add the argument index_col='Column_Date' to your read_csv call.
I hope you will find it helpful.
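Putting it together, a minimal sketch (the file name is a placeholder, and 'Date' stands in for your date column):

import pandas as pd
import seaborn as sns

auser = pd.read_csv("auser.csv", parse_dates=["Date"])  # "Date" is now datetime64, not str
rainfall_types = list(auser.columns[1:])

grid = sns.relplot(x="Date", y=rainfall_types[0], kind="line", data=auser)
grid.fig.autofmt_xdate()  # rotated, readable date ticks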

Pandas Large CSV

A continuation of a previous post. Previously, I had help creating a new column in a dataframe using pandas, where each value represents a factorized or unique value based on another column's value. This worked on a test case, but I am having trouble applying the same process to much larger log and htm files. I have 12 log files (one for each month); after combining them, I get a 17 GB file to work with, and I want to factorize each and every username in it. I have been looking into using Dask; however, I can't replicate the sort and factorize functionality I need on a Dask dataframe. Would it be better to try to use Dask, continue with pandas, or try a MySQL database to manipulate a 17 GB file?
import pandas as pd
import numpy as np
#import dask.dataframe as pf
df = pd.read_csv('example2.csv', header=0, dtype='unicode')
df_count = df['fruit'].value_counts()
df.sort_values(['fruit'], ascending=True, inplace=True)
# sort the column 'fruit'
df.reset_index(drop=True, inplace=True)
f, u = pd.factorize(df.fruit.values)
n = np.core.defchararray.add('Fruit', f.astype(str))
df = df.assign(NewCol=n)
#print(df)
df.to_csv('output.csv')
Would it be better to try to use Dask, continue with pandas, or try a MySQL database to manipulate a 17 GB file?
The answer to this question depends on a great many things and is probably too general to get a good answer on Stack Overflow.
However, there are a few particular questions you bring up that are easier to answer:
How do I factorize a column?
The easy way here is to categorize a column:
df = df.categorize(columns=['fruit'])
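If you then want the factor codes as a column (mirroring the pd.factorize step above), one way, assuming the categorize call has already run, is to use the categorical codes:

# after categorize the categories are known, so .cat.codes is available
df['NewCol'] = 'Fruit' + df['fruit'].cat.codes.astype(str)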
How do I sort unique values within a column?
You can always set the column as the index, which will cause a sort. However, beware that sorting in a distributed setting can be quite expensive.
If you want to sort a column with a small number of distinct values, though, you might find the unique values, sort those in memory, and then join them back onto the dataframe. Something like the following might work:
unique_fruit = df.fruit.drop_duplicates().compute() # this is now a pandas series
unique_fruit = unique_fruit.sort_values()
numbers = pd.Series(unique_fruit.index, index=unique_fruit.values, name='fruit_id')  # 'fruit_id' avoids a name clash with the existing 'fruit' column
df = df.merge(numbers.to_frame(), left_on='fruit', right_index=True)  # adds a 'fruit_id' column

Read Directory of Timeseries CSV data efficiently with Dask DataFrame and Pandas

I have a directory of timeseries data stored as CSV files, one file per day. How do I load and process it efficiently with Dask DataFrame?
Disclaimer: I maintain Dask. This question occurs often enough in other channels that I decided to add a question here on StackOverflow to which I can point people in the future.
Simple Solution
If you just want to get something working quickly, then simple use of dask.dataframe.read_csv with a globstring for the path should suffice:
import dask.dataframe as dd
df = dd.read_csv('2000-*.csv')
Keyword arguments
The dask.dataframe.read_csv function supports most of the pandas.read_csv keyword arguments, so you might want to tweak things a bit.
df = dd.read_csv('2000-*.csv', parse_dates=['timestamp'])
Set the index
Many operations, like groupbys, joins, and index lookups, can be more efficient if the target column is the index. For example, if the timestamp column is made the index, then you can quickly look up the values for a particular range, or join efficiently with another dataframe along time. The savings here can easily be 10x.
The naive way to do this is to use the set_index method
df2 = df.set_index('timestamp')
However, if you know that your new index column is sorted, then you can make this much faster by passing the sorted=True keyword argument
df2 = df.set_index('timestamp', sorted=True)
Divisions
In the above case we still pass through the data once to find good breakpoints. However, if your data is already nicely segmented (such as one file per day), then you can give these division values to set_index to avoid this initial pass (which can be costly for a large amount of CSV data).
import pandas as pd
divisions = tuple(pd.date_range(start='2000', end='2001', freq='1D'))
df2 = df.set_index('timestamp', sorted=True, divisions=divisions)
This solution correctly and cheaply sets the timestamp column as the index (allowing for efficient computations in the future).
Convert to another format
CSV is a pervasive and convenient format. However, it is also very slow. Other formats, like Parquet, may be of interest to you; they can easily be 10x to 100x faster.
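For example, a one-time conversion might look like this (file names are placeholders, and a parquet engine such as pyarrow is assumed to be installed):

df = dd.read_csv('2000-*.csv', parse_dates=['timestamp'])
df = df.set_index('timestamp', sorted=True)
df.to_parquet('2000.parquet')  # write once

df = dd.read_parquet('2000.parquet')  # every later run reads the fast format, index preserved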

Plotly date formatting issue for pandas dataframe

I am using the Plotly python API to upload a pandas dataframe that contains a date column (which is generated via pandas date_range function).
If I look at the pandas dataframe locally, the date formatting is as I'd expect, i.e. YYYY-MM-DD. However, when I view it in Plotly I see it in the form YYYY-MM-DD HH:MM:SS. I really don't need this level of precision, and such a wide column causes formatting issues when I try to fit in all the other columns I want.
Is there a way to prevent Plotly from re-formatting the pandas dataframe?
A basic example of my current approach looks like:
import plotly.plotly as py
from plotly.tools import FigureFactory as FF
import pandas as pd
dates = pd.date_range('2016-01-01', '2016-02-01', freq='D')
df = pd.DataFrame(dates)
table = FF.create_table(df)
py.plot(table, filename='example table')
It turned out that this problem wasn't solvable at the time: Plotly simply treated datetimes that way.
This has since been updated (fixed); you can read more here.
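For older versions, one workaround at the dataframe level was to format the dates as strings before building the table, so there is nothing for Plotly to re-format (a sketch based on the example above, at day precision only):

import pandas as pd

dates = pd.date_range('2016-01-01', '2016-02-01', freq='D')
df = pd.DataFrame({'Date': dates.strftime('%Y-%m-%d')})  # plain strings instead of datetimes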
