I've got a dataframe df_a with id information:
unique_id lacet_number
15 5570613 TLA-0138365
24 5025490 EMP-0138757
36 4354431 DXN-0025343
and another dataframe df_b, with the same number of rows that I know correspond to the rows in df_a:
latitude longitude
0 -93.193560 31.217029
1 -93.948082 35.360874
2 -103.131508 37.787609
What I want to do is simply concatenate the two horizontally (similar to cbind in R) and get:
unique_id lacet_number latitude longitude
0 5570613 TLA-0138365 -93.193560 31.217029
1 5025490 EMP-0138757 -93.948082 35.360874
2 4354431 DXN-0025343 -103.131508 37.787609
What I have tried:
df_c = pd.concat([df_a, df_b], axis=1)
which gives me an outer join.
unique_id lacet_number latitude longitude
0 NaN NaN -93.193560 31.217029
1 NaN NaN -93.948082 35.360874
2 NaN NaN -103.131508 37.787609
15 5570613 TLA-0138365 NaN NaN
24 5025490 EMP-0138757 NaN NaN
36 4354431 DXN-0025343 NaN NaN
The problem is that the indices of the two dataframes do not match. I read the documentation for pandas.concat and saw that there is an ignore_index option, but that only applies to the concatenation axis (in my case, the columns), so it is not the right choice here. So my question is: is there a simple way to achieve this?
If you're sure the rows correspond, then to avoid alignment on the mismatched index values just call reset_index(drop=True) on df_a; this resets its index to start from 0:
df_c = pd.concat([df_a.reset_index(drop=True), df_b], axis=1)
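If df_b's index might not already be the default 0-based RangeIndex, a safer variant (just a sketch of the same idea) resets both frames before concatenating:
df_c = pd.concat([df_a.reset_index(drop=True), df_b.reset_index(drop=True)], axis=1)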
DataFrame.join
While concat is fine, it's simpler to join:
C = A.join(B)
This still assumes aligned indexes, so reset_index as needed. In OP's example, B's index is already default, so we only need to reset A:
C = A.reset_index(drop=True).join(B)
# unique_id lacet_number latitude longitude
# 0 5570613 TLA-0138365 -93.193560 31.217029
# 1 5025490 EMP-0138757 -93.948082 35.360874
# 2 4354431 DXN-0025343 -103.131508 37.787609
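As an aside (hedged, since the frames here don't share any column names): if A and B did have overlapping columns, join would need a suffix to disambiguate them, e.g.
C = A.reset_index(drop=True).join(B, rsuffix='_b')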
You can use set_axis to make the index labels of one frame match the other's, and then concatenate horizontally or join. Unlike reset_index, this method preserves the index labels of one of the dataframes (here, df_b's).
joined_df = pd.concat([df_a.set_axis(df_b.index), df_b], axis=1)
# or using `join`
joined_df = df_a.set_axis(df_b.index).join(df_b)
I have the following df:
Item Service Damage Type Price
A Fast 3.5 1 15.48403728
A Slow 3.5 1 17.41954194
B Fast 5 1 19.3550466
B Slow 5 1 21.29055126
C Fast 5.5 1 23.22605592
and so on
I want to turn this into this format:
Item Damage Type Price_Fast Price_slow
So the first row would be:
Item Damage Type Price_Fast Price_slow
A 3.5 1 15.4840.. 17.41954...
I tried:
df.pivot(index=['Item', 'Damage', 'Type'],columns='Service', values='Price')
but it threw this error:
ValueError: Length of passed values is 2340, index implies 3
To get exactly the dataframe layout you want, use
dfData = dfRaw.pivot_table(index=['Item', 'Damage', 'Type'],columns='Service', values='Price')
as @CJR suggested, followed by
dfData.reset_index(inplace=True)
to flatten the dataframe, and
dfData.rename(columns={'Fast': 'Price_fast'}, inplace=True)
dfData.rename(columns={'Slow': 'Price_slow'}, inplace=True)
to get your desired column names.
Then use
dfData.columns = dfData.columns.values
to get rid of the leftover column index name ('Service'), and you are done. (Thanks to @Akaisteph7 for pointing out that I was not quite done with my previous solution.)
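Putting those steps together, a self-contained sketch (the sample data below is made up to mirror the question):
import pandas as pd

dfRaw = pd.DataFrame({
    'Item': ['A', 'A', 'B', 'B', 'C'],
    'Service': ['Fast', 'Slow', 'Fast', 'Slow', 'Fast'],
    'Damage': [3.5, 3.5, 5, 5, 5.5],
    'Type': [1, 1, 1, 1, 1],
    'Price': [15.48403728, 17.41954194, 19.3550466, 21.29055126, 23.22605592],
})

# pivot, flatten the index back into columns, rename, drop the 'Service' columns name
dfData = dfRaw.pivot_table(index=['Item', 'Damage', 'Type'], columns='Service', values='Price')
dfData.reset_index(inplace=True)
dfData.rename(columns={'Fast': 'Price_fast', 'Slow': 'Price_slow'}, inplace=True)
dfData.columns = dfData.columns.values
print(dfData)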
You can do it with the following code:
# You should use pivot_table as it handles multiple column pivoting and duplicates aggregation
df2 = df.pivot_table(index=['Item', 'Damage', 'Type'], columns='Service', values='Price')
# Make the pivot indexes back into columns
df2.reset_index(inplace=True)
# Change the columns' names
df2.rename(columns=lambda x: "Price_"+x if x in ["Fast", "Slow"] else x, inplace=True)
# Remove the unneeded column Index name
df2.columns = df2.columns.values
print(df2)
Output:
Item Damage Type Price_Fast Price_Slow
0 A 3.5 1 15.484037 17.419542
1 B 5.0 1 19.355047 21.290551
2 C 5.5 1 23.226056 NaN
I want to fill the NaNs in my dataframe with the mean of the previous two values in the same row (i.e. the two columns to the left). There are only a few rows with missing values, in the 2010-19 columns.
I tried using bfill and ffill, but they only take the single previous or next row/column value to fill the NaN.
My example data set has 7 columns as below:
X 1990-2000 2000-2010 2010-19 1990-2000 2000-2010 2010-19
Hyderabad 10 20 NaN 1 3 NaN
The output I want:
X 1990-2000 2000-2010 2010-19 1990-2000 2000-2010 2010-19
Hyderabad 10 20 15 1 3 2
To use fillna row-wise in this way, an easy solution is to provide a pandas Series as the argument to fillna. This will replace NaN values according to the index.
Since the column names contain duplicates, the code below uses column positions. Assuming a dataframe called df:
col_indices = [3, 6]
for i in col_indices:
    # row-wise mean of the two columns immediately to the left
    means = df.iloc[:, [i-1, i-2]].mean(axis=1)
    # assign back rather than calling fillna(..., inplace=True) on the slice,
    # which may operate on a copy and leave df unchanged
    df.iloc[:, i] = df.iloc[:, i].fillna(means)
This will fill the NaN values with the mean of the two columns to the left of each column in col_indices.
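A self-contained sketch using the single sample row from the question (the column positions are assumed to be as above):
import numpy as np
import pandas as pd

df = pd.DataFrame(
    [['Hyderabad', 10, 20, np.nan, 1, 3, np.nan]],
    columns=['X', '1990-2000', '2000-2010', '2010-19', '1990-2000', '2000-2010', '2010-19'],
)

for i in [3, 6]:
    means = df.iloc[:, [i-1, i-2]].mean(axis=1)
    df.iloc[:, i] = df.iloc[:, i].fillna(means)

print(df)  # the two 2010-19 cells are now 15.0 and 2.0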
I have two dataframes built with pandas in Python.
import numpy as np
import pandas as pd

df1 = pd.DataFrame({'Index': ['0','1','2'], 'number':[3,'dd',1], 'people':[3,'s',3]})
df1 = df1.set_index('Index')
df2 = pd.DataFrame({'Index': ['0','1','2'], 'quantity':[3,2,'hi'], 'persons':[1,5,np.nan]})
I would like to sum the columns element-wise, based on Index. The columns do not have the same names and may contain strings (I have in fact 50 columns in each df). I want to treat nan as 0. The result should look like:
df3
Index column 1 column 2
0 6 4
1 nan nan
2 nan nan
I was wondering how could this be done.
Note:
A double while or for loop would surely do the trick, it is just not very elegant...
indices = 0
while indices < len(df1.index):
    columna = 0
    while columna < numbercolumns:
        df3.iloc[indices, columna] = df1.iloc[indices, columna] + df2.iloc[indices, columna]
        columna += 1
    indices += 1
Thank you.
You can try concatenating both dataframes and then summing within each index group:
# make the column names match so the two frames stack on top of each other
df1.columns = df.columns
# coerce non-numeric entries to NaN
df1.people = pd.to_numeric(df1.people, errors='coerce')
pd.concat([df, df1]).groupby('Index').sum()
Out:
number people
Index
A 8 5.0
B 2 2.0
C 2 5.0
F 3 3.0
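Applied to the frames from the question, a sketch of the same idea (strings are coerced to NaN, and the groupby sum then skips them, which matches the "treat nan as 0" requirement):
import numpy as np
import pandas as pd

df1 = pd.DataFrame({'Index': ['0','1','2'], 'number':[3,'dd',1], 'people':[3,'s',3]}).set_index('Index')
df2 = pd.DataFrame({'Index': ['0','1','2'], 'quantity':[3,2,'hi'], 'persons':[1,5,np.nan]}).set_index('Index')

# line the columns up so the frames stack, then coerce strings to NaN
df2.columns = df1.columns
df1 = df1.apply(pd.to_numeric, errors='coerce')
df2 = df2.apply(pd.to_numeric, errors='coerce')

df3 = pd.concat([df1, df2]).groupby('Index').sum()
print(df3)
#        number  people
# Index
# 0         6.0     4.0
# 1         2.0     5.0
# 2         1.0     3.0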
This is my original dataframe.
This is my second dataframe containing one column.
I want to append the column of the second dataframe to the original dataframe at the end. The indices are different for the two dataframes. I did it like this:
df1['RESULT'] = df2['RESULT']
It doesn't raise an error and the column is added, but all its values are NaN. How do I add the column with its values?
Assuming your dataframes are the same length, you can assign RESULT_df['RESULT'].values to your original dataframe. This way, you don't have to worry about index alignment.
# pre 0.24
feature_file_df['RESULT'] = RESULT_df['RESULT'].values
# >= 0.24
feature_file_df['RESULT'] = RESULT_df['RESULT'].to_numpy()
Minimal Code Sample
df
A B
0 -1.202564 2.786483
1 0.180380 0.259736
2 -0.295206 1.175316
3 1.683482 0.927719
4 -0.199904 1.077655
df2
C
11 -0.140670
12 1.496007
13 0.263425
14 -0.557958
15 -0.018375
Let's try direct assignment first.
df['C'] = df2['C']
df
A B C
0 -1.202564 2.786483 NaN
1 0.180380 0.259736 NaN
2 -0.295206 1.175316 NaN
3 1.683482 0.927719 NaN
4 -0.199904 1.077655 NaN
Now, assign the array returned by .values (or .to_numpy() for pandas >= 0.24). These return a NumPy array, which has no index to align on.
df2['C'].values
array([-0.141, 1.496, 0.263, -0.558, -0.018])
df['C'] = df2['C'].values
df
A B C
0 -1.202564 2.786483 -0.140670
1 0.180380 0.259736 1.496007
2 -0.295206 1.175316 0.263425
3 1.683482 0.927719 -0.557958
4 -0.199904 1.077655 -0.018375
You can also call set_axis() to change the index of a dataframe or column. So if the lengths are the same, you can use set_axis() to coerce the index of one dataframe to match the other's.
df1['A'] = df2['A'].set_axis(df1.index)
If you get a SettingWithCopyWarning, you can silence it by creating a copy instead, using either join() or assign().
df1 = df1.join(df2['A'].set_axis(df1.index))
# or
df1 = df1.assign(new_col = df2['A'].set_axis(df1.index))
set_axis() is especially useful if you want to add multiple columns from another dataframe. You can just call join() after calling it on the new dataframe.
df1 = df1.join(df2[['A', 'B', 'C']].set_axis(df1.index))
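A small self-contained sketch of this pattern (the column names and sample values below are made up for illustration):
import pandas as pd

df1 = pd.DataFrame({'X': [10, 20, 30]}, index=[0, 1, 2])
df2 = pd.DataFrame({'A': [1, 2, 3], 'B': [4, 5, 6], 'C': [7, 8, 9]}, index=[11, 12, 13])

# relabel df2's index to match df1's, then join column-wise
out = df1.join(df2[['A', 'B', 'C']].set_axis(df1.index))
print(out)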
I am merging two data frames using pandas.merge. Even after specifying the how='left' option, I found that the merged data frame has more rows than the original. Why does this happen?
panel = pd.read_csv(file1, encoding ='cp932')
before_len = len(panel)
prof_2000 = pd.read_csv(file2, encoding ='cp932').drop_duplicates()
temp_2000 = pd.merge(panel, prof_2000, left_on='Candidate_u', right_on="name2", how="left")
after_len = len(temp_2000)
print(before_len, after_len)
> 12661 13915
This sounds like there is more than one row in the right frame whose 'name2' matches a given key from the left. Using how='left' with pandas.DataFrame.merge() only means that:
left: use only keys from left frame
However, the actual number of rows in the result object is not necessarily going to be the same as the number of rows in the left object.
Example:
In [359]: df_1
Out[359]:
A B
0 a AAA
1 b BBA
2 c CCF
and then another DF that looks like this (notice that there is more than one entry for the key 'a'):
In [360]: df_3
Out[360]:
key value
0 a 1
1 a 2
2 b 3
3 a 4
If I merge these two on left.A, here's what happens:
In [361]: df_1.merge(df_3, how='left', left_on='A', right_on='key')
Out[361]:
A B key value
0 a AAA a 1.0
1 a AAA a 2.0
2 a AAA a 4.0
3 b BBA b 3.0
4 c CCF NaN NaN
This happened even though I merged with how='left': as you can see above, there was simply more than one right-hand row to merge for key 'a', and the resulting pd.DataFrame has in fact more rows than the pd.DataFrame on the left.
I hope this helps!
The problem of rows multiplying after a merge() (of any kind, e.g. 'inner' or 'left') is usually caused by duplicates in either of the key columns, so we need to drop them first:
left_df.drop_duplicates(subset=left_key, inplace=True)
right_df.drop_duplicates(subset=right_key, inplace=True)
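To check whether duplicate keys really are the culprit before dropping anything, a quick sketch (left_df/right_df and the key names stand in for your own frames):
# count duplicated key values on each side
print(left_df[left_key].duplicated().sum())
print(right_df[right_key].duplicated().sum())

# or let merge itself verify that each left key matches at most one right row
merged = left_df.merge(right_df, left_on=left_key, right_on=right_key,
                       how='left', validate='many_to_one')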
If you do not have any duplicates, as covered in the answer above, you should double-check the key values themselves. In my case, I discovered that the key entries were written inconsistently between df1 and df2, and I solved the problem by copying the column across directly:
df1["col1"] = df2["col2"]