I am still getting the hang of pandas, and I am attempting to join two data frames using merge. I have read the CSVs into two data frames (named dropData and deosData in the code below). Both data frames have the column ‘Date_Time’, which is parsed from the Date and Time information to create a unique id for each entry. The deosData file is an entire year’s worth of observations that I am trying to match up with the corresponding entries in dropData.
CSV files:
deosData: https://www.dropbox.com/s/3rr7hf7jzrmxdke/inputDeos.csv?dl=0
dropData: https://www.dropbox.com/s/z9mv4xccjzlsyif/inputDrop.csv?dl=0
I have gone through the documentation for the merge function and have tried the following code in various iterations. So far I have only been able to produce either a blank data frame with the correct header row, or the two data frames merged on the default 0 to (N-1) integer index:
My code:
import pandas as pd
import numpy as np
import os
from matplotlib import pyplot as plt
#read in CSV to dataframe
dropData=pd.read_csv("inputDrop.csv", header=0, index_col=None)
deosData=pd.read_csv("inputDeos.csv", header=0, index_col=None)
#merge dataframes into a single frame ("merged" avoids shadowing pd.merge)
merged = pd.merge(dropData, deosData, how='inner', on='Date_Time')
#comment out during debugging
#merged.to_csv('output.csv', sep=',', header=True, index=False)
#check merge dataframe creation
print(merged.head(1))
After searching on SE and the docs, I have tried resetting the index, ignoring the index columns, copying the ‘Date_Time’ column as a separate index and merging on the new column, and using ‘on=None’, ‘left_on’ and ‘right_on’ permutations of ‘Date_Time’, all to no avail. I have checked the column data types: ‘Date_Time’ in both data frames is dtype object. I do not know if this is the source of the error, since the only issues I could find while searching revolved around matching different dtypes to each other.
What I am looking to do is have the two data frames merge where the two 'Date_Time' columns intersect. For example:
Date_Time,Volume(Max),Volume(Sum),Volume(Min),Volume(Mean),Diameter(Count),Diameter(Max),Diameter(Sum),Diameter(Min),Diameter(Mean),Depth(Sum),Velocity(Max),Velocity(Sum),Velocity(Min),Velocity(Mean), Air Temperature (deg. C), Relative humidity (%), Wind Speed (m.s-1), Wind Direction (deg.), Wind Gust Speed (5) (m.s-1), Barometric Pressure (mbar), Gage Precipitation (5) (mm)
9/1/2014 0:00,2.266188524,2.989272461,0.052464219,0.332141385,9,1.629668,5.972978,0.464467,0.663664222,0.003736591,2.288401,16.889656,1.495487,1.876628444,22.5,99,0,216.1,0.4,1016.2,0
Any help would be greatly appreciated.
You need to parse_dates when reading the csv files, so that the Date_Time columns in both dataframes are pd.Timestamp objects instead of raw strings. (If you look at your csv files, one is in ISO format YYYY-MM-DD HH:MM:SS whereas the other is in MM/DD/YYYY HH:MM, so the raw strings can never match.) Try the following code:
#read in CSV to dataframe
dropData = pd.read_csv("inputDrop.csv", header=0, index_col=None, parse_dates=['Date_Time'])
deosData = pd.read_csv("inputDeos.csv", header=0, index_col=None, parse_dates=['Date_Time'])
and then do your merge.
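With both columns parsed to datetimes, the merge call from your own code should line up the keys, for example:
merged = pd.merge(dropData, deosData, how='inner', on='Date_Time')
print(merged.head(1))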
You can use join, but you first need to set the index:
dropData=pd.read_csv('.../inputDrop.csv', header=0, index_col='Date_Time', parse_dates=True)
deosData=pd.read_csv('.../inputDeos.csv', header=0, index_col='Date_Time', parse_dates=True)
dropData.join(deosData)
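Note that join aligns on the index and defaults to a left join, so once ‘Date_Time’ is the index of both frames no explicit key is needed; if you only want the timestamps present in both files, pass how='inner':
merged = dropData.join(deosData, how='inner')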
Related
I have many .csv files with two columns: one with timestamps and the other with values. The data is sampled at a resolution of seconds. What I would like to do is
read all files
set index on time column
resample on hours
save to new files (parquet, hdf,...)
1) Only dask
I tried to use dask's read_csv.
import dask.dataframe as dd
import pandas as pd
df = dd.read_csv(
    "../data_*.csv",
    parse_dates=[0],
    date_parser=lambda x: pd.to_datetime(float(x)),
)
So far that's fine. The problem is that I cannot call df.resample("min").mean() directly, because the index of the dask data frame is not properly set.
After calling df.reset_index().set_index("timestamp") it works - BUT I cannot afford to do this because it is expensive.
2) Workaround with pandas and hdf files
Another approach was to save all csv files to hdf files using pandas. In this case the pandas dataframes were already indexed by time.
df = dd.read_hdf("/data_01.hdf", key="data")
# This doesn't work directly
# df = df.resample("min").mean()
# Error: "Can only resample dataframes with known divisions"
df = df.reset_index().set_index("timestamp") # expensive! :-(
df = df.resample("min").mean() # works!
Of course this works but it would be extremely expensive on dd.read_hdf("/data_*.hdf", key="data").
How can I directly read timeseries data in dask that it is properly partitioned and indexed?
Do you have any tips or suggestions?
Example data:
import dask
df = dask.datasets.timeseries()
df.to_hdf("dask.hdf", "data")
# Doesn't work!
# dd.read_hdf("dask.hdf", key="data").resample("min").mean()
# Works!
dd.read_hdf("dask.hdf", key="data").reset_index().set_index("timestamp").resample(
"min"
).mean()
Can you try something like:
pd.read_csv('data.csv', index_col='timestamp', parse_dates=['timestamp']) \
    .resample('T').mean().to_csv('data_1min.csv')  # 'T' = minute frequency; or to_hdf(...)
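If the timestamps are already sorted within and across the files (so that the glob order matches time order), a cheaper option inside dask itself is set_index with sorted=True, which computes the divisions without a full shuffle. A minimal sketch, assuming the ‘timestamp’ column and float epoch format from the question:
import dask.dataframe as dd
import pandas as pd

# Read all CSVs lazily; column 0 holds the float epoch timestamps.
df = dd.read_csv(
    "../data_*.csv",
    parse_dates=[0],
    date_parser=lambda x: pd.to_datetime(float(x)),
)

# sorted=True tells dask the data is already ordered by this column,
# so it can set the index and divisions without an expensive shuffle.
df = df.set_index("timestamp", sorted=True)

df.resample("min").mean().to_parquet("data_1min.parquet")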
I am trying to manipulate data from an excel file, however it has merged headings for columns. I managed to transform them in pandas. Please see the example of the original data below.
So I transformed it to this format.
And my final goal is to get the format below and plot brand items and their sales quantities and prices over the period, however I don't know how to access info in a multiindex dataframe. Could you please suggest something? Thanks.
My code:
import pandas as pd
df = pd.read_excel('path.xls', sheet_name='data', header=[0, 1])  # two header rows -> multiindex columns
a = df.columns.get_level_values(0).to_series()
b = a.mask(a.str.startswith('Unnamed')).fillna('')  # blank out the auto-generated 'Unnamed: ...' labels
df.columns = [b, df.columns.get_level_values(1)]
df.drop(0, inplace=True)
Try pandas groupby or pivot_table. A pivot table takes index, columns, values and aggfunc arguments, and is really nice for summarizing data.
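A minimal sketch with hypothetical column names (Brand, Period, Sales - the original sheet isn't shown):
import pandas as pd

# Hypothetical flat data after collapsing the merged headers.
df = pd.DataFrame({
    'Brand':  ['A', 'A', 'B', 'B'],
    'Period': ['Q1', 'Q2', 'Q1', 'Q2'],
    'Sales':  [100, 150, 80, 120],
})

# index = rows, columns = key to spread across, values = cell contents, aggfunc = how to combine.
summary = df.pivot_table(index='Brand', columns='Period', values='Sales', aggfunc='sum')
print(summary)
For a frame that still has multiindex columns, a single column can be selected with a tuple of (level0, level1) labels, e.g. df[('Sales', 'Quantity')] (hypothetical labels).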
I'm trying to aggregate several csv files into one.
After the aggregation I notice that pandas performs a float conversion even if I state that the datatype should be a string. It works for most cells; however, in some cases pandas adds many trailing zeros or nines, as floating-point representation would. The main goal is to preserve the raw data, as I just want to stack the files on top of each other into one file.
How do I prevent pandas from adding the float digits and instead preserve the input data?
Raw data from column
Column - Sales
-3.145
-111
-1.418
-3.453
Generated Output in csv
Column - Sales
-3.1450000000000004
-111.0
-1.418
-3.4529999999999997
Expected output
Column - Sales
-3.145
-111
-1.418
-3.453
Code:
import glob
import pandas as pd

li = []
for filename in glob.glob('*.csv'):  # hypothetical pattern; loop over the files to aggregate
    df = pd.read_csv(filename, sep=';', dtype={'Sales': str})
    li.append(df)
frame = pd.concat(li, axis=0, ignore_index=True)
frame.to_csv("sales.csv", mode='a', index=False)
I wonder how to save a new pandas Series into a csv file in a different column. Suppose I have two csv files which both contain a column ‘A’. I have applied a mathematical function to them and then created a new variable ‘B’.
For example:
data = pd.read_csv('filepath')
data['B'] = data['A']*10
# and add the values of data.B into a list with B_list.append(data.B)
This will continue until all of the rows of the first and second csv files have been read.
I would like to save column B from both csv files into a new spreadsheet.
For example I need this result:
column1 (from csv1)    column2 (from csv2)
data.B.value           data.B.value
By using this code:
pd.DataFrame(np.array(B_list)).T.to_csv('file.csv', index=False, header=None)
I won't get my preferred result.
Since each column in a pandas DataFrame is a pandas Series, your B_list is actually a list of pandas Series, which you can pass to the DataFrame() constructor and then transpose (or, as @jezrael shows, do a horizontal merge with pd.concat(..., axis=1)):
finaldf = pd.DataFrame(B_list).T
finaldf.to_csv('output.csv', index=False, header=None)
And should the csvs have different numbers of rows, the unequal series are filled with NaNs at the corresponding rows.
I think you need to concat the column from data1 with the column from data2 first:
df = pd.concat(B_list, axis=1)
df.to_csv('file.csv', index=False, header=None)
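Putting the pieces together, a minimal end-to-end sketch (assuming two hypothetical input files file1.csv and file2.csv that each have a column ‘A’):
import pandas as pd

B_list = []
for path in ['file1.csv', 'file2.csv']:  # hypothetical file names
    data = pd.read_csv(path)
    data['B'] = data['A'] * 10
    B_list.append(data['B'])

# axis=1 places each Series in its own column; shorter columns are padded with NaN.
pd.concat(B_list, axis=1).to_csv('file.csv', index=False, header=False)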
I am trying to perform some arithmetic operations in Python Pandas and merge the result into one of the files.
Path_1: File_1.csv, File_2.csv, ....
This path has several files whose time columns are supposed to increase from file to file, with the following columns:
File_1.csv   | File_2.csv
Nos,12:00:00 | Nos,12:30:00
123,1451     | 485,5464
656,4544     | 456,4865
853,5484     | 658,4584
Path_2: Master_1.csv
Nos,00:00:00
123,2000
485,1500
656,1000
853,2500
456,4500
658,5000
I am trying to read the n .csv files from Path_1 and compare each file's col[1] header time with the last column's time in Master_1.csv.
If Master_1.csv does not have that time, it should create a new column with the time from the Path_1 .csv file and fill in the values matched on col['Nos'], taking the difference with col[1] of Master_1.csv.
If the column with that time from the Path_1 file is already present, it should look up col['Nos'] and replace the NANs with the subtracted values for those col['Nos'].
i.e.
Expected Output in Master_1.csv
Nos,00:00:00,12:00:00,12:30:00
123,2000,549,NAN
485,1500,NAN,3964
656,1000,3544,NAN
853,2500,2984,NAN
456,4500,NAN,365
658,5000,NAN,-416
I can understand the arithmetic calculations, but I am not able to loop with respect to Nos and the time series. I have tried to put some code together and to work around the looping, but need help in that context. Thanks.
import os
import pandas as pd
import numpy as np

path_1 = '/'
path_2 = '/'
# the time series column is different in every file, e.g. 12:00, 12:30, 17:30 etc.
# (rough pseudocode - File_1.csv stands in for each file from path_1)
df_1 = pd.read_csv(os.path.join(path_1, 'File_1.csv'), index_col=None, names=['Nos', 'timeseries'])
df_2 = pd.read_csv('master_1.csv', index_col=None, names=['Nos', '00:00:00'])  # 00:00:00 time series
for Nos in df_1 and df_2:
    df_1['Nos'] = df_2['Nos']
    new_tseries = df_2['00:00:00'] - df_1['timeseries']
# new_tseries is the dynamic time series that every .csv file from path_1 will have
merged = pd.concat([df_2, new_tseries], axis=1)
You can do it in three steps:
Read your csv's into a list of dataframes
Merge the dataframes together (equivalent to a SQL left join or an Excel VLOOKUP)
Calculate your derived columns using a vectorized subtraction.
Here's some code you could try:
#read dataframes into a list
import glob
import pandas as pd

L = []
for fname in glob.glob(path_1 + '*.csv'):
    L.append(pd.read_csv(fname))

#read master dataframe, and merge in the other dataframes on 'Nos'
df_2 = pd.read_csv('master_1.csv')
for df in L:
    df_2 = pd.merge(df_2, df, on='Nos', how='left')

#for each merged time column, calculate the difference with the master column
time_cols = [c for c in df_2.columns if c not in ('Nos', '00:00:00')]
df_2[time_cols] = df_2[time_cols].sub(df_2['00:00:00'], axis=0)
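To get the result back into Master_1.csv as in the expected output, a final write (not in the original answer, but following the question's stated goal) would be:
df_2.to_csv('master_1.csv', index=False)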