This question already has answers here:
Pandas Merging 101
I am new to Python. I want to replace all the values in the 'Starting' column of df_2 with the corresponding 'Station' values from df_1. I did it with a for loop, but how can I perform this task in a simpler way?
df_1:
ID Station
0 1 Satose
1 2 Forlango
2 3 poterio
.
.
df_2:
Rail_Number Starting Ending
AABDD 3 44433
DLRAKA 1 45232
MiGOMu 2 18756
.
.
I have answered a similar question here:
Replace a value in a dataframe with a value from another dataframe
Step 1: Convert both columns from df_1 into a dictionary:
d = dict(zip(df_1.ID, df_1.Station))
Step 2: Now map this dictionary over the 'Starting' column of df_2:
df_2.Starting = df_2.Starting.map(d)
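Put together, a minimal runnable sketch (the sample data is reconstructed from the question):

```python
import pandas as pd

# Sample frames mirroring the question.
df_1 = pd.DataFrame({'ID': [1, 2, 3],
                     'Station': ['Satose', 'Forlango', 'poterio']})
df_2 = pd.DataFrame({'Rail_Number': ['AABDD', 'DLRAKA', 'MiGOMu'],
                     'Starting': [3, 1, 2],
                     'Ending': [44433, 45232, 18756]})

# Build the ID -> Station lookup, then map it over the 'Starting' column.
d = dict(zip(df_1.ID, df_1.Station))
df_2['Starting'] = df_2['Starting'].map(d)
```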
This question already has answers here:
How can I pivot a dataframe?
Pivoting a Pandas Dataframe containing strings - 'No numeric types to aggregate' error
Pandas question:
If I have this dataframe:
Member Value Group
1      a     AC
1      c     AC
1      d     DF
2      b     AC
2      e     DF
which I would like to transform (using pivot?) into a DataFrame showing occurrences of the individual elements of each group, like:
x  AC  DF
1  ac  d
2  b   e
I run into "Index contains duplicate values, cannot reshape" if I try:
pivot(index='Member', columns=['Group'], values='Value')
I feel confused over something seemingly trivial. Can somebody help?
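For reference, one way around the duplicate-index error is to aggregate the duplicates instead of pivoting them, e.g. with `pivot_table` and a string-join aggregator (a sketch, assuming the joined output shown above is the goal):

```python
import pandas as pd

df = pd.DataFrame({'Member': [1, 1, 1, 2, 2],
                   'Value': ['a', 'c', 'd', 'b', 'e'],
                   'Group': ['AC', 'AC', 'DF', 'AC', 'DF']})

# pivot() fails here because (Member, Group) pairs repeat;
# pivot_table() lets us say how to combine the duplicates.
out = df.pivot_table(index='Member', columns='Group',
                     values='Value', aggfunc=','.join)
```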
This question already has answers here:
Pandas: how to merge two dataframes on a column by keeping the information of the first one?
I turned a JSON file into a dataframe, but I am unsure how to map a certain value from the JSON dataframe onto the existing dataframe I have.
df1 = # (the 2nd column doesn't matter, it's just there)
category_id  tags
1            a
1            a
10           b
10           c
40           d
df2 (json) =
id  title
1   film
2   music
3   travel
4   cooking
5   dance
I would like to make a new column in df1 that maps the titles from df2 onto df1 by category_id. I am sorry, I am new to Python programming. I know I could hard-code the dictionary and key values and go from there, but I was wondering whether there is an easier way to do this with Python/pandas.
You can use pandas.Series.map() which maps values of Series according to input correspondence.
df1['title'] = df1['category_id'].map(df2.set_index('id')['title'])
# print(df1)
   category_id tags  title
0            1    a   film
1            1    a   film
2           10    b    NaN
3           10    c    NaN
4           40    d    NaN
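As a self-contained sketch (frames reconstructed from the question; note that ids 10 and 40 have no match in df2, hence NaN):

```python
import pandas as pd

df1 = pd.DataFrame({'category_id': [1, 1, 10, 10, 40],
                    'tags': ['a', 'a', 'b', 'c', 'd']})
df2 = pd.DataFrame({'id': [1, 2, 3, 4, 5],
                    'title': ['film', 'music', 'travel', 'cooking', 'dance']})

# Turn df2 into an id -> title Series, then map it over category_id.
df1['title'] = df1['category_id'].map(df2.set_index('id')['title'])
```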
This question already has answers here:
Pandas, Pivot table from 2 columns with values being a count of one of those columns
Most efficient way to melt dataframe with a ton of possible values pandas
How to form a pivot table on two categorical columns and count for each index?
I am trying to transform the rows and count the occurrences of the values, grouped by the id.
Dataframe:
id value
A cake
A cookie
B cookie
B cookie
C cake
C cake
C cookie
expected:
id cake cookie
A 1 1
B 0 2
C 2 1
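One concise way to get this shape (a sketch using the question's data) is `pd.crosstab`, which counts co-occurrences of two columns:

```python
import pandas as pd

df = pd.DataFrame({'id': ['A', 'A', 'B', 'B', 'C', 'C', 'C'],
                   'value': ['cake', 'cookie', 'cookie', 'cookie',
                             'cake', 'cake', 'cookie']})

# Count how often each value occurs per id.
out = pd.crosstab(df['id'], df['value'])
```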
This question already has answers here:
Pandas groupby with delimiter join
Concatenate strings from several rows using Pandas groupby
Given a Pandas Dataframe df, with column names 'Session', and 'List':
Can I group together the 'List' values for the same values of 'Session'?
My Approach
I've tried solving the problem by creating a new dataframe and iterating through the rows of the initial dataframe while maintaining a session counter that I increment when I see that the session has changed.
If it hasn't changed, I append the List value that corresponds to that row's value, followed by a comma.
Whenever the session changes, I use strip to get rid of the trailing (extra) comma.
Initial DataFrame
Session List
0 1 a
1 1 b
2 1 c
3 2 d
4 2 e
5 3 f
Required DataFrame
Session List
0 1 a,b,c
1 2 d,e
2 3 f
Can someone suggest something more efficient or simple?
Thank you in advance.
Use groupby with agg and reset_index:
>>> df.groupby('Session')['List'].agg(','.join).reset_index()
Session List
0 1 a,b,c
1 2 d,e
2 3 f
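As a self-contained sketch (the sample frame is taken from the question):

```python
import pandas as pd

df = pd.DataFrame({'Session': [1, 1, 1, 2, 2, 3],
                   'List': ['a', 'b', 'c', 'd', 'e', 'f']})

# Join the List values within each Session, then restore Session as a column.
out = df.groupby('Session')['List'].agg(','.join).reset_index()
```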
This question already has answers here:
Pandas dataframe: truncate string fields
I have a dataframe with some columns having large sentences.
How do I truncate the columns to say 50 characters max?
current df:
a b c
I like data science 1 2
new truncated df for ONLY column a:
a b c
I like data 1 2
(The above is an example sentence I made up)
For a specific column:
df['a'] = df['a'].str[:50]
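For instance, a small sketch (made-up data; strings longer than 50 characters get cut, shorter ones are left alone):

```python
import pandas as pd

df = pd.DataFrame({'a': ['x' * 80, 'short'], 'b': [1, 2], 'c': [3, 4]})

# Keep at most the first 50 characters of column 'a'.
df['a'] = df['a'].str[:50]
```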