I'd like to plot my DataFrame. I started with this DataFrame:
id | project | categories | rating
1  | a       | A          | 1
1  | a       | B          | 1
1  | a       | C          | 2
1  | b       | A          | 1
1  | b       | B          | 1
2  | c       | A          | 1
2  | c       | B          | 2
I used this code:
import pandas as pd
df = pd.DataFrame(...)
(df.groupby('id').project.nunique().reset_index()
.merge(pd.crosstab(df.id, df.categories).reset_index()))
and now got this DataFrame:
id | project | A | B | C
1  | 2       | 2 | 2 | 1
2  | 1       | 1 | 1 | 0
Now I'd like to plot the DataFrame. I want to show whether the number of projects depends on how many categories are affected, or on which categories are affected. I know how to visualize DataFrames, but after the crosstab and merge it does not work as usual.
I reproduced your data with the code below:
import pandas as pd
df = pd.DataFrame({'id': [1, 1, 1, 1, 1, 2, 2],
                   'project': ['a', 'a', 'a', 'b', 'b', 'c', 'c'],
                   'categories': ['A', 'B', 'C', 'A', 'B', 'A', 'B'],
                   'rating': [1, 1, 2, 1, 1, 1, 2]})
Now the data looks like this:
categories id project rating
0 A 1 a 1
1 B 1 a 1
2 C 1 a 2
3 A 1 b 1
4 B 1 b 1
5 A 2 c 1
6 B 2 c 2
If you want to plot 'category count' as a function of 'project count', it looks like this:
import matplotlib.pyplot as plt
# these lines are your code from the question
df2 = (df.groupby('id').project.nunique().reset_index()
         .merge(pd.crosstab(df.id, df.categories).reset_index()))
plt.scatter(df2.project, df2.A, label='A', alpha=0.5)
plt.scatter(df2.project, df2.B, label='B', alpha=0.5)
plt.scatter(df2.project, df2.C, label='C', alpha=0.5)
plt.xlabel('project count')
plt.ylabel('category count')
plt.legend()
plt.show()
And you will get a scatter plot of each category count against the project count.
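If there were many category columns, hard-coding one plt.scatter call per column would get tedious. A minimal sketch of the same plot via melt, continuing from the df2 and plt defined above (column names taken from df2):
# Reshape df2 to long form: one row per (id, project, category, count).
long_df = df2.melt(id_vars=['id', 'project'], var_name='category', value_name='count')
# One scatter series per category, exactly as in the explicit version above.
for cat, grp in long_df.groupby('category'):
    plt.scatter(grp['project'], grp['count'], label=cat, alpha=0.5)
plt.xlabel('project count')
plt.ylabel('category count')
plt.legend()
plt.show()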
I have a DataFrame with column names 'a', 'b', 'c'.
#Input
import pandas as pd
list_of_dicts = [
    {'a': 0, 'b': 4, 'c': 3},
    {'a': 1, 'b': 1, 'c': 2},
    {'a': 0, 'b': 0, 'c': 0},
    {'a': 1, 'b': 0, 'c': 3},
    {'a': 2, 'b': 1, 'c': 0}
]
df = pd.DataFrame(list_of_dicts)
#Input DataFrame
   | a | b | c |
---|---|---|---|
 0 | 0 | 4 | 3 |
 1 | 1 | 1 | 2 |
 2 | 0 | 0 | 0 |
 3 | 1 | 0 | 3 |
 4 | 2 | 1 | 0 |
I want to reduce the wide DataFrame to one column, where each column name appears as a value, repeated as many times as the corresponding cell value. The operation must be done row-wise.
#Output
   | Values |
---|--------|
 0 | b      |
 1 | b      |
 2 | b      |
 3 | b      |
 4 | c      |
 5 | c      |
 6 | c      |
 7 | a      |
 8 | b      |
 9 | c      |
10 | c      |
11 | a      |
12 | c      |
13 | c      |
14 | c      |
15 | a      |
16 | a      |
17 | b      |
Explanation:
Row 0 in the input DataFrame has 4 'b' and 3 'c', so the first seven elements of the output DataFrame are bbbbccc.
Row 1 similarly has 1 'a', 1 'b' and 2 'c', so the output will have abcc as the next 4 elements.
Row 2 has 0's across, so it is skipped entirely.
The order of the output is very important. For example, the first row has 4 'b' and 3 'c', so the output must begin with bbbbccc because column 'b' comes before column 'c'. The operation must be row-wise, from left to right.
I'm trying to find an efficient way to accomplish this, as the real dataset is too big for me to compute naively. Please provide a Python 3 solution.
Stack the data (you could melt as well, as sketched below) and drop rows where the count is zero. Finally, use numpy.repeat to build a new array, and construct your new DataFrame from that.
import numpy as np

reshape = df.stack().droplevel(0).loc[lambda x: x != 0]
pd.DataFrame(np.repeat(reshape.index, reshape), columns=['values'])
values
0 b
1 b
2 b
3 b
4 c
5 c
6 c
7 a
8 b
9 c
10 c
11 a
12 c
13 c
14 c
15 a
16 a
17 b
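The melt route mentioned above works too, but melt stacks column by column, so the row-major order the question requires has to be restored by sorting on the original row index first. A sketch (sorting on 'variable' matches the left-to-right column order here only because the columns happen to be alphabetical):
import numpy as np
import pandas as pd

m = (df.reset_index()
       .melt(id_vars='index')
       .loc[lambda x: x['value'] != 0]
       .sort_values(['index', 'variable']))
pd.DataFrame(np.repeat(m['variable'].to_numpy(), m['value'].to_numpy()),
             columns=['values'])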
I don't think pandas buys you anything in this process; with a large amount of data you especially don't want to read it all into memory and reprocess it into another large data structure.
import csv

with open('input.csv', 'r') as fh:
    reader = csv.DictReader(fh)
    for row in reader:
        for key in reader.fieldnames:  # DictReader exposes the header row as fieldnames
            value = int(row[key])
            for i in range(value):
                print(key)
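If the repeated values should land in a file rather than on stdout, the same streaming approach applies; a sketch with hypothetical file names:
import csv

with open('input.csv', 'r') as src, open('output.txt', 'w') as dst:
    reader = csv.DictReader(src)
    for row in reader:
        for key in reader.fieldnames:
            # Write the column name once per unit of its count, one value per line.
            for _ in range(int(row[key])):
                dst.write(key + '\n')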
I have the following DataFrame with multiple columns and rows.
A | B | C | D | E |....
2 | b | c | NaN | 1 |
3 | c | b | NaN | 0 |
4 | b | b | NaN | 1 |
.
.
.
Is there a way, using Python, to add Excel formulas for some columns of an output Excel file, in the manner shown below?
For instance, I want the output to be something like this:
=SUM(A0:A2) | | | | =SUM(E0:E2)
A | B | C | D | E
0 2 | b | c | =IF(B0=C0, "Yes", "No") | 1
1 3 | c | b | =IF(B1=C1, "Yes", "No") | 0
2 4 | b | b | =IF(B2=C2, "Yes", "No") | 1
.
.
.
Final output,
9 | | | | 2
A | B | C | D | E
0 2 | b | c | No | 1
1 3 | c | b | No | 0
2 4 | b | b | Yes | 1
.
.
.
I want to add formulas to the final output Excel file, so that if any column values change in that file, the dependent columns update in real time. For instance:
15 | | | | 3
A | B | C | D | E
0 2 | b | b | Yes | 1
1 9 | c | b | No | 1
2 4 | b | b | Yes | 1
.
.
.
If I change, for instance, A1 from 3 to 9, the column sum changes to 15; if I change C0 from "c" to "b", the corresponding row value D0 changes from "No" to "Yes"; the same goes for column E.
I know the xlsxwriter library can write formulas, but I cannot figure out how to add them in the manner stated in the example above.
Any help would be really appreciated; thanks in advance!
You're best off writing any formulas you wish to keep via xlsxwriter, not pandas.
You would use pandas if you only wanted to export the computed result; since you want to preserve the formulas, add them when you write the spreadsheet.
The code below writes the DataFrame and the formulas out to an xlsx file called test.xlsx.
import xlsxwriter
import pandas as pd
from numpy import nan

data = [[2, 'b', 'c', nan, 1], [3, 'c', 'b', nan, 0], [4, 'b', 'b', nan, 1]]
df = pd.DataFrame(data=data, columns=['A', 'B', 'C', 'D', 'E'])

## Send values to a list so we can iterate over it, matching row to column in each formula ##
values = df.values.tolist()

## Create workbook ##
workbook = xlsxwriter.Workbook('test.xlsx')
worksheet = workbook.add_worksheet()

row = 0
col = 0

## Iterate over the data extracted from the df, generating the cell formula for 'D' on each iteration ##
for line in values:
    d = f'=IF(B{row + 1}=C{row + 1}, "Yes", "No")'
    a, b, c, _, e = line
    ## Write the cells into the spreadsheet ##
    worksheet.write(row, col, a)
    worksheet.write(row, col + 1, b)
    worksheet.write(row, col + 2, c)
    worksheet.write(row, col + 3, d)
    worksheet.write(row, col + 4, e)
    row += 1

## Write the column totals to the bottom row, using the row counter as the stop point ##
worksheet.write(row, 0, f'=SUM(A1:A{row})')
worksheet.write(row, 4, f'=SUM(E1:E{row})')
workbook.close()
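If you would rather let pandas write the bulk of the data and only patch in the formulas afterwards, xlsxwriter can also be used as the pandas Excel engine. A sketch under that assumption (the sheet name 'Sheet1' is arbitrary):
import pandas as pd
from numpy import nan

data = [[2, 'b', 'c', nan, 1], [3, 'c', 'b', nan, 0], [4, 'b', 'b', nan, 1]]
df = pd.DataFrame(data=data, columns=['A', 'B', 'C', 'D', 'E'])

with pd.ExcelWriter('test.xlsx', engine='xlsxwriter') as writer:
    # pandas writes the header into Excel row 1 and the data into rows 2 onward.
    df.to_excel(writer, sheet_name='Sheet1', index=False)
    worksheet = writer.sheets['Sheet1']
    n = len(df)
    for r in range(n):
        # write_formula takes 0-based coordinates; the formula text uses 1-based Excel rows.
        worksheet.write_formula(r + 1, 3, f'=IF(B{r + 2}=C{r + 2}, "Yes", "No")')
    worksheet.write_formula(n + 1, 0, f'=SUM(A2:A{n + 1})')
    worksheet.write_formula(n + 1, 4, f'=SUM(E2:E{n + 1})')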
My dataset looks like below:
+--------+----------+-----------+--------------------+
|        | FST_NAME | LAST_NAME | EMAIL_ADDR         |
+--------+----------+-----------+--------------------+
| ROW_ID |          |           |                    |
| 1-123  | Will     | Smith     | will.smith@abc.com |
| 1-124  | Dan      | Brown     | dan.brown@xyz.com  |
| 1-125  | Will     | Smith     | will.smith@abc.com |
| 1-126  | Dan      | Brown     | dan.brown@xyz.com  |
| 1-127  | Tom      | Cruise    | tom.cruise@abc.com |
| 1-128  | Will     | Smith     | will.smith@abc.com |
+--------+----------+-----------+--------------------+
I am trying to count duplicate rows, keeping the first record, and to store all the duplicated row indexes in a column.
I tried the line below. It gives me the count, but I am unable to collect the duplicated indexes.
df.groupby(df.columns.tolist(), as_index=False).size()
How can I get the duplicated row index?
Try:
df.reset_index().groupby(df.columns.tolist())["index"].agg(list).reset_index()
To get exactly what you want:
res = (df.reset_index()
         .groupby(df.columns.tolist())["index"]
         .agg(list)
         .reset_index()
         .rename(columns={"index": "duplicated"}))
res.index = res["duplicated"].str[0].tolist()
res["duplicated"] = res["duplicated"].str[1:]
Outputs (dummy data):
#original df:
a b
a1 x 4
a2 y 3
b6 z 2
c7 x 4
d x 4
x y 3
#transformed one:
a b duplicated
a1 x 4 [c7, d]
a2 y 3 [x]
b6 z 2 []
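If groups without any duplicates should be dropped from the result, a small follow-up sketch, continuing from res above:
res = res[res["duplicated"].str.len() > 0]  # keep only rows that actually had duplicates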
Not a very efficient way, but it can be used as a solution:
df2 = df.drop_duplicates()
This will result in df2 =
Name1 Name2
0 Will Smith
1 Dan Brown
4 Tom Cruise
Now,
lis = []
for i in df2.iterrows():
    lis.append(i[0])
This will make lis = [0, 1, 4]. All the indexes from 0 to len(df) - 1 that are not in lis are the indexes of duplicate rows.
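The loop can be avoided: df2.index already holds the kept labels, and df.duplicated() flags the dropped rows directly. A sketch, continuing from the df and df2 above:
lis = list(df2.index)                # same as the loop above
dup_idx = df.index[df.duplicated()]  # indexes of the rows dropped as duplicates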
For df like:
FST_NAME L_NAME email
0 w s ws
1 d b db
2 w s ws
3 z z zz
Group the index values into lists:
import pandas as pd
df = pd.DataFrame({'FST_NAME': ['w', 'd', 'w', 'z'], 'L_NAME': ['s', 'b', 's', 'z'], 'email': ['ws', 'db', 'ws', 'zz']})
df = (df.groupby(df.columns.tolist())
        .apply(lambda grp: pd.Series({'duplicated': list(grp.index)})))
Output:
duplicated
FST_NAME L_NAME email
d b db [1]
w s ws [0, 2]
z z zz [3]
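If a plain dict mapping each unique row to its indexes is enough, the groups attribute of the groupby object gives it in one step; a sketch:
import pandas as pd

df = pd.DataFrame({'FST_NAME': ['w', 'd', 'w', 'z'],
                   'L_NAME': ['s', 'b', 's', 'z'],
                   'email': ['ws', 'db', 'ws', 'zz']})
# Maps each unique (FST_NAME, L_NAME, email) tuple to the row labels holding it,
# roughly: {('d', 'b', 'db'): [1], ('w', 's', 'ws'): [0, 2], ('z', 'z', 'zz'): [3]}
print(df.groupby(df.columns.tolist()).groups)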
A while ago I asked this question, but it does not cover the case where two merged groups share a common category.
There I wanted to merge the categories A and B into AB. What if I have categories A, B, C and I want to merge A, B into AB and B, C into BC?
Suppose I have the data:
+---+---+
| X | Y |
+---+---+
| A | D |
| B | D |
| B | E |
| B | D |
| A | E |
| C | D |
| C | E |
| B | E |
+---+---+
I want the cross-tab to look like:
+--------+---+---+
| X/Y | D | E |
+--------+---+---+
| A or B | 3 | 3 |
| B or C | 3 | 2 |
| C | 1 | 1 |
+--------+---+---+
I think you can build the crosstab over all unique values first, and then sum rows by selecting the relevant categories from the index:
df = pd.crosstab(df.X, df.Y)
df.loc['A or B'] = df.loc[['A','B']].sum()
df.loc['B or C'] = df.loc[['C','B']].sum()
df = df.drop(['A','B'])
print (df)
Y D E
X
C 1 1
A or B 3 3
B or C 3 3
EDIT: A general solution is not easy, because it is necessary to repeat groups under a new name, like:
df1 = df[df['X'] == 'B'].assign(X = 'B or C')
df2 = df[df['X'] == 'C']
df = pd.concat([df, df1], ignore_index=True)
df['X'] = df['X'].replace({'A':'A or B', 'B': 'A or B', 'C': 'B or C'})
df = pd.concat([df, df2], ignore_index=True)
df = pd.crosstab(df.X, df.Y)
print (df)
Y D E
X
A or B 3 3
B or C 3 3
C 1 1
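If the groups are known up front, the row-summing idea generalizes to a mapping from output label to member categories. A sketch (the groups dict is an assumption; overlaps are fine because each output row is summed independently):
import pandas as pd

df = pd.DataFrame({'X': ['A', 'B', 'B', 'B', 'A', 'C', 'C', 'B'],
                   'Y': ['D', 'D', 'E', 'D', 'E', 'D', 'E', 'E']})
groups = {'A or B': ['A', 'B'], 'B or C': ['B', 'C'], 'C': ['C']}

ct = pd.crosstab(df.X, df.Y)
# Sum the member rows of the plain crosstab for each output label.
out = pd.DataFrame({label: ct.loc[members].sum()
                    for label, members in groups.items()}).T
print(out)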
I am still in a learning phase with Python and wanted to know how to roll up the data and count the duplicate rows in a column called Count.
The DataFrame structure is as follows:
Col1 | Value
A    | 1
B    | 1
A    | 1
B    | 1
C    | 3
C    | 3
C    | 3
C    | 3
My result should be as follows:
Col1 | Value | Count
A    | 1     | 2
B    | 1     | 2
C    | 3     | 4
>>> df2 = df.groupby(['Col1', 'Value']).size().reset_index()
>>> df2.columns = ['Col1', 'Value', 'Count']
>>> df2
Col1 Value Count
0 A 1 2
1 B 1 2
2 C 3 4
Roman Pekar's fine answer is correct for this case. However, I saw it after trying to write a solution for the general case stated in the text of your question, not just the example with specific column names. So, for the general case, consider:
df.groupby([df[c] for c in df.columns]).size().reset_index().rename(columns={0: 'Count'})
For example:
import pandas as pd
df = pd.DataFrame({'Col1': ['a', 'a', 'a', 'b', 'c'], 'Value': [1, 2, 1, 3, 2]})
>>> df.groupby([df[c] for c in df.columns]).size().reset_index().rename(columns={0: 'Count'})
Col1 Value Count
0 a 1 2
1 a 2 1
2 b 3 1
3 c 2 1
You can also try:
df.groupby('Col1')['Value'].value_counts().reset_index(name='Count')
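On newer pandas (1.1+), DataFrame.value_counts counts unique rows directly, so the same result can be had in one step; a sketch:
import pandas as pd

df = pd.DataFrame({'Col1': ['A', 'B', 'A', 'B', 'C', 'C', 'C', 'C'],
                   'Value': [1, 1, 1, 1, 3, 3, 3, 3]})
# sort=False keeps the groups in first-seen order instead of by descending count.
print(df.value_counts(sort=False).reset_index(name='Count'))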