Unique Value Index from two fields - python

I'm new to pandas and python, and could definitely use some help.
I have the code below, which almost does what I want. It creates dummy variables for the unique values in one field and indexes them by the unique combinations of values in two other fields.
What I would like is only one row for each unique combination of the index fields. Right now I get multiple rows for, say, 'asset subs end dt' = 10/30/2008 and 'reseller csn' = 55008 if the dummy variable comes up 3 times. I would rather have a single row for that combination of index field values, with a 3 in the dummy variable column.
Code:
import pandas as pd

df = data  # `data` is the source DataFrame loaded earlier
df = df.set_index(['ASSET_SUBS_END_DT', 'RESELLER_CSN'])
dummies = pd.get_dummies(df['EXPERTISE'])  # one column per unique EXPERTISE value

something like:
df.groupby(level=[0, 1]).EXPERTISE.count()
When you do this groupby, everything with the same index is grouped together. Assuming your data in EXPERTISE is not null, you will get back a Series with unique index values and the count for each index. Try it out for yourself, play around with the results, and see how it can be combined with your existing DataFrame to get the final result you want.
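For the single-row-per-combination result described in the question, one approach is to group the dummies themselves and sum them. A minimal sketch of that idea, using hypothetical values for the three columns:

import pandas as pd

# Hypothetical sample standing in for `data`
data = pd.DataFrame({
    'ASSET_SUBS_END_DT': ['10/30/2008'] * 3 + ['11/30/2008'],
    'RESELLER_CSN': [55008, 55008, 55008, 55009],
    'EXPERTISE': ['Network', 'Network', 'Network', 'Storage'],
})

df = data.set_index(['ASSET_SUBS_END_DT', 'RESELLER_CSN'])

# Build the dummies, then sum them within each unique index combination;
# a 3 in a dummy column means that value appeared three times for the pair.
dummies = pd.get_dummies(df['EXPERTISE']).groupby(level=[0, 1]).sum()
print(dummies)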

Related

How to convert a pandas dataframe with non-unique indexes into one with unique indexes?

I created a dataframe with some previous operations, but when I query a column with an index (for example, df['order_number'][0]), multiple rows/records come back as output.
The screenshot shows the difference in length between the unique indexes and all indexes of the dataframe.
It looks like the rows kept their original index when you merged/joined the dataframes. Try:
df.reset_index()
Could you show a df.head() for example? Usually when you consume a data source, if you set the index arg to True, each row will be assigned a unique numerical index.
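A minimal sketch of the symptom and the reset_index fix, using a hypothetical order_number column:

import pandas as pd

# Two frames whose default indexes overlap, as happens after a concat/merge
a = pd.DataFrame({'order_number': [101, 102]})
b = pd.DataFrame({'order_number': [103, 104]})
df = pd.concat([a, b])          # index is 0, 1, 0, 1 - not unique

print(df['order_number'][0])    # returns two rows: 101 and 103

df = df.reset_index(drop=True)  # rebuild a unique 0..n-1 index
print(df['order_number'][0])    # now a single value: 101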

How to make new dataframe from existing dataframe with unique rows values of one column and corresponding row values from other columns?

I have a dataframe 'raw' that looks like this -
It has many rows with duplicate values in each column.
I want to make a new dataframe 'new_df' which has the unique customer_code values and the corresponding market_code.
The new_df should look like this -
It sounds like you simply want to create a DataFrame with unique customer_code which also shows market_code. Here's a way to do it:
df = df[['customer_code','market_code']].drop_duplicates('customer_code')
Output:
customer_code market_code
0 Cus001 Mark001
1 Cus003 Mark003
3 Cus004 Mark003
4 Cus005 Mark004
The part reading df[['customer_code','market_code']] gives us a DataFrame containing only the two columns of interest, and drop_duplicates('customer_code') eliminates all but the first occurrence of each duplicate value in the customer_code column (you could instead keep the last occurrence by passing the keep='last' argument).
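A runnable version of that idea, with hypothetical data shaped to match the output above:

import pandas as pd

# Hypothetical 'raw' data with duplicate customer_code values
raw = pd.DataFrame({
    'customer_code': ['Cus001', 'Cus003', 'Cus001', 'Cus004', 'Cus005'],
    'market_code':   ['Mark001', 'Mark003', 'Mark002', 'Mark003', 'Mark004'],
})

# Keep the first occurrence of each customer_code (the default) ...
new_df = raw[['customer_code', 'market_code']].drop_duplicates('customer_code')

# ... or keep the last occurrence instead
new_df_last = raw[['customer_code', 'market_code']].drop_duplicates(
    'customer_code', keep='last')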

EDA for loop on multiple columns of dataframe in Python

Just a random question. If there's a dataframe, df, from the Boston Homes dataset, and I'm trying to do EDA on a few of the columns, stored in a variable feature_cols, which I could use afterwards to check for NA values, how would one go about this? I have the following, which is throwing an error:
This is what I was hoping to try to do after the above:
Any feedback would be greatly appreciated. Thanks in advance.
There are two problems in your pictures. The first is a KeyError: if you want to access a subset of columns of a dataframe, you need to pass the column names in a list, not a tuple, so the first line should be
feature_cols = df[['RM','ZN','B']]
However, this will return a dataframe with three columns, and what you wrote in the for loop will not work on it. We usually iterate over the rows of a dataframe, not its columns; instead of a loop, you can use the one-liner:
df.isna().sum()
This will print the name of every column of the dataframe along with the count of missing values in each. Of course, if you want to check only a subset of columns, you can replace df with df[list_of_column_names].
To access multiple columns, you only need to store the column names in a list, for example
feature_cols = ['RM','ZN','B']
and then access them as
x = df[feature_cols]
Now to iterate on columns of df, you can use
for column in df[feature_cols]:
    print(df[column])  # or anything
As per your updated comment, if your end goal is to see only the null counts, you can achieve this without looping, e.g.
df[feature_cols].info(verbose=True, null_counts=True)
(in pandas >= 1.2 the null_counts argument has been renamed to show_counts)
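Putting the two answers together, a minimal sketch, assuming the Boston Housing columns RM, ZN, and B:

import pandas as pd

# Hypothetical stand-in for the Boston Homes dataframe
df = pd.DataFrame({
    'RM': [6.5, None, 7.1],
    'ZN': [18.0, 0.0, None],
    'B':  [396.9, 392.8, 394.6],
})

feature_cols = ['RM', 'ZN', 'B']   # column names in a list, not a tuple

# Null counts per column, no loop needed
print(df[feature_cols].isna().sum())

# Or iterate column by column when per-column processing is needed
for column in feature_cols:
    print(column, df[column].isna().sum())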

Drop duplicates from a pandas dataframe based on all columns starting from the third one

I have a dataframe with 50+ columns, and the first 2 are unique IDs. For some reason, rows with different IDs can have exactly the same data from the third column onward.
What I want to achieve is to delete the duplicates from the dataframe based on all columns starting from the third one. If there is more than one row with different IDs but the same data from the third column onward, it doesn't matter which row we keep; it can be the first or the last one, whichever is easier.
I am fairly new to pandas, what I tried is something like this:
df.drop_duplicates(subset=df.iloc[2:], keep="last")
df.drop_duplicates expects column labels as the subset argument, but df.iloc[2:] selects rows, not columns, so try this:
df.drop_duplicates(subset=df.columns[2:], keep="last")
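A quick sketch of the difference, with a hypothetical frame whose first two columns are IDs:

import pandas as pd

df = pd.DataFrame({
    'id_a': [1, 2, 3],
    'id_b': ['x', 'y', 'z'],
    'col3': [10, 10, 20],
    'col4': ['a', 'a', 'b'],
})

# df.iloc[2:] slices ROWS from position 2 onward - not what subset expects.
# df.columns[2:] is the sequence of column LABELS from the third one on.
deduped = df.drop_duplicates(subset=df.columns[2:], keep="last")
print(deduped)  # rows with ids (2, 'y') and (3, 'z') remain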

Pandas list down unique values in column and assign it to separate columns

I have the following table:
I want to create a new data frame, or a column in the same data frame, where the unique values are listed, e.g.
I used following code:
data.groupby('EMAIL')['Classification'].transform('nunique')
But it is giving me the number of unique values (for CLASSIFICATION, it is 2).
However, I want the values themselves in list format, so that at the end I can remove the duplicate rows and keep a single row for each unique email id. Please advise on this.
Thanks!
For performance, use a set for the unique values and pass it to a lambda function in GroupBy.agg; note the order may differ from the original:
df = data.groupby('EMAIL').agg(lambda x: ','.join(set(x))).reset_index()
To keep the same order as the original, use the dict.fromkeys trick:
f = lambda x: ','.join(dict.fromkeys(x).keys())
df = data.groupby('EMAIL').agg(f).reset_index()
Use df.groupby with as_index=False together with agg:
data.groupby('EMAIL',as_index=False).agg(lambda x: ','.join(x.unique()))
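A minimal sketch comparing the approaches, with hypothetical EMAIL/CLASSIFICATION data:

import pandas as pd

data = pd.DataFrame({
    'EMAIL': ['a@x.com', 'a@x.com', 'a@x.com', 'b@x.com'],
    'CLASSIFICATION': ['Gold', 'Silver', 'Gold', 'Gold'],
})

# set-based join: fast, but the order within each group is arbitrary
out1 = data.groupby('EMAIL').agg(lambda x: ','.join(set(x))).reset_index()

# dict.fromkeys preserves first-seen order while still dropping duplicates
f = lambda x: ','.join(dict.fromkeys(x).keys())
out2 = data.groupby('EMAIL').agg(f).reset_index()

# x.unique() also preserves order; as_index=False avoids the reset_index call
out3 = data.groupby('EMAIL', as_index=False).agg(lambda x: ','.join(x.unique()))
print(out3)  # a@x.com -> 'Gold,Silver'; b@x.com -> 'Gold'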
