Replace the last value of each row by the penultimate - python

I have a pivot table that looks like this:
And I need to replace the last (non-empty) value of each row with the value before it.
for row in cohort_pivot2.iterrows():
    a = [i for i in row[1].values if i][-1]
    b = [i for i in row[1].values if i][-2]
But I can't think of any way to do the substitution.
I have not added the steps to replicate the pivot table because it's generated from confidential information and the process would take up a lot of space, but if necessary I can generate dummy data and replicate the process.
I would greatly appreciate any help.

Here is a solution using apply, which converts each row to a list and then converts the result back to a DataFrame:
import pandas as pd

df = pd.DataFrame([[1, 2, 3, None],
                   [10, 20, None, None],
                   [100, None, None, None]]).fillna('')
length = df.shape[1]  # number of columns
# using list addition
ll = df.apply(lambda x: list(x)[:list(x).index('') - 1] + [list(x)[list(x).index('') - 2]] + [''] * (length - list(x).index('')), 1)
# using list comprehension
ll = df.apply(lambda x: [el if idx != list(x).index('') - 1 else list(x)[list(x).index('') - 2] for idx, el in enumerate(list(x))], 1)
results in
pd.DataFrame.from_records(ll)
#     0   1  2  3
# 0   1   2  2
# 1  10  10
# 2
Please note that the last row becomes entirely empty because there was no previous element with which to fill it. Also note that I used empty strings as the null markers; I did that because of the automatic type inference in pandas, which converts None to np.nan when a column has a float dtype.
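If you prefer to keep NaN as the null marker instead of empty strings, here is a minimal sketch of the same idea (an assumption about your data, not part of the approach above): find the position of the last non-NaN value in each row and overwrite it with its left neighbour.
import numpy as np
import pandas as pd

df = pd.DataFrame([[1, 2, 3, None],
                   [10, 20, None, None],
                   [100, None, None, None]])

def shift_last(row):
    # label of the last non-NaN value in this row (None if the row is all NaN)
    last = row.last_valid_index()
    if last is not None:
        pos = row.index.get_loc(last)
        # copy the previous value over the last one; blank it out if there is no previous value
        row.iloc[pos] = row.iloc[pos - 1] if pos > 0 else np.nan
    return row

result = df.apply(shift_last, axis=1)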

Related

Count the number of elements in a list where the list contains the empty string

I'm having difficulties counting the number of elements in a list within a DataFrame's column. My problem comes from the fact that, after importing my input csv file, the rows that are supposed to contain an empty list [] are actually parsed as lists containing the empty string [""]. Here's a reproducible example to make things clearer:
import pandas as pd
df = pd.DataFrame({"ID": [1, 2, 3], "NETWORK": [[""], ["OPE", "GSR", "REP"], ["MER"]]})
print(df)
   ID          NETWORK
0   1               []
1   2  [OPE, GSR, REP]
2   3            [MER]
Even though one might think that the list in the row where ID = 1 is empty, it isn't: it actually contains the empty string [""], which took me a long time to figure out.
So whatever standard method I try to use to calculate the number of elements within each list, I get a wrong value of 1 for the lists that are supposed to be empty:
df["COUNT"] = df["NETWORK"].str.len()
print(df)
   ID          NETWORK  COUNT
0   1               []      1
1   2  [OPE, GSR, REP]      3
2   3            [MER]      1
I searched and tried a lot of things before posting here but I couldn't find a solution to what seems to be a very simple problem. I should also note that I'm looking for a solution that doesn't require me to modify my original input file nor modify the way I'm importing it.
You just need to write a custom apply function that ignores the empty string '':
df['COUNT'] = df['NETWORK'].apply(lambda x: sum(1 for w in x if w!=''))
Another way:
df['NETWORK'].apply(lambda x: len([y for y in x if y]))
Using apply is probably more straightforward. Alternatively, explode, filter, then group by count.
_s = df['NETWORK'].explode()
_s = _s[_s != '']
df['count'] = _s.groupby(level=0).count()
This yields (assuming ID has been set as the index):
            NETWORK  count
ID
1                []    NaN
2   [OPE, GSR, REP]    3.0
3             [MER]    1.0
Fill NA with zeroes if needed.
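For example, a one-line sketch (using the lowercase 'count' column created just above; the cast keeps the counts as integers):
df['count'] = df['count'].fillna(0).astype(int)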
df["COUNT"] = df["NETWORK"].apply(lambda x: len(x))
Use a lambda function on each row and, in the lambda, return the length of the list. Note that this counts the empty string as an element, so combine it with a filter (as in the answers above) if those rows should count as 0.

How to vectorize a for loop on a pandas DataFrame?

I am working with data of about 200,000 rows. In one column of the DataFrame some of the values are an empty list, while most of them are lists with several values; here is a picture:
What I want to do is replace each empty list with this one
[[close*0.95, close*0.94]]
where close is the close value in that row of the table. The for loop that I use is this one:
for i in range(1, len(data3.index)):
    close = data3.close[data3.index == data3.index[i]].values[0]
    sell_list = data3.sell[data3.index == data3.index[i]].values[0]
    buy_list = data3.buy[data3.index == data3.index[i]].values[0]
    if len(sell_list) == 0:
        data3.loc[data3.index[i], "sell"].append([[close*1.05, close*1.06]])
    if len(buy_list) == 0:
        data3.loc[data3.index[i], "buy"].append([[close*0.95, close*0.94]])
I tried to make it work with multithreading, but since I need to read the whole table before the next step I can't split the data. I hope you can help me build some kind of lambda function to apply to the DataFrame, or something similar; I am not very skilled at this. Thanks for reading!
The expected output for the "buy" column of a row with an empty list should be [[[11554, 11566]]].
Example data:
import pandas as pd
df = pd.DataFrame({'close': [11763, 21763, 31763], 'buy': [[], [[[21763, 21767]]], []]})

   close                  buy
0  11763                   []
1  21763   [[[21763, 21767]]]
2  31763                   []
You could do it like this:
# Create mask (a bit faster than df['buy'].apply(len) == 0).
# Assumes there are no NaNs in the column. If you have NaNs, use the apply version shown below.
m = [len(l) == 0 for l in df['buy'].tolist()]
# Create triple nested lists and assign.
df.loc[m, 'buy'] = list(df.loc[m, ['close', 'close']].mul([0.95, 0.94]).to_numpy()[:, None][:, None])
print(df)
Result:
   close                                  buy
0  11763            [[[11174.85, 11057.22]]]
1  21763                  [[[21763, 21767]]]
2  31763  [[[30174.85, 29857.219999999998]]]
Some explanation:
m is a boolean mask that selects the rows of the DataFrame with an empty list in the 'buy' column:
m = [len(l) == 0 for l in df['buy'].tolist()]
# Or (a bit slower)
# "Apply the len() function to all lists in the column.
m = df['buy'].apply(len) == 0
print(m)
0     True
1    False
2     True
Name: buy, dtype: bool
We can use this mask to select where to calculate the values.
df.loc[m, ['close', 'close']].mul([0.95, 0.94]) duplicates the 'close' column and calculates the vectorised product of all the (close, close) pairs with (0.95, 0.94) to obtain (close*0.95, close*0.94) in each row of the resulting array.
[:, None][:, None] is just a trick to create two additional axes on the resulting array. This is required since you want triple nested lists ([[[]]]).
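To see what those extra axes do, here is a small sketch using the df and mask m from above; the shapes are the point, not the values:
vals = df.loc[m, ['close', 'close']].mul([0.95, 0.94]).to_numpy()
print(vals.shape)                    # (2, 2): one (close*0.95, close*0.94) pair per masked row
print(vals[:, None][:, None].shape)  # (2, 1, 1, 2): two extra length-1 axes per row
# list(...) of the reshaped array yields one (1, 1, 2) block per row, i.e. a triple-nested [[[a, b]]]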

Pandas Dataframe grouping / combining columns?

I'm new to Pandas, and I'm having a horrible time figuring out datasets.
I have a csv file I've read in using pandas.read_csv, dogData, that looks as follows:
The column names are dog breeds, the first line [0] refers to the size of the dogs, and beyond that there's a bunch of numerical values. The very first column has a string description that I need to keep, but it isn't relevant to the question. The last column for each size category contains a separate "Average" value. (Note that pandas changed the "Average" columns to "Average.1", "Average.2" and so on, to take care of them not being unique.)
Basically, I want to "group" by the first row, so all "small" dog values will be averaged except the "small" average column, and so on. The result would look something like this:
The existing "Average" columns should not be included in the new average being calculated. The existing "Average" columns for each size don't need to be altered at all. All "small" breed values should be averaged, all "medium" breed values should be averaged, and so on (actual file is much larger then the sample I showed here).
There's no guarantee the breeds won't be altered, and no guarantee the "sizes" will remain the same / always be included ("Small" could be left out, for example).
EDIT: After Joe Ferndz's comment, I've updated my code and have something slightly closer to working, but the actual adding of the columns is still giving me trouble.
import pandas as pd

dogData = pd.read_csv("dogdata.csv", header=[0, 1])
dogData.columns = dogData.columns.map("_".join)
totalVal = ""
count = 0
for col in dogData:
    if "Unnamed" in col:
        continue  # to skip the starting columns
    if "Average" not in col:
        totalVal += dogData[col]
        count += 1
    else:
        # this is where I'd calculate the average, then reset count and totalVal;
        # right now, because the addition isn't working, I haven't figured that out
        break
print(totalVal)
Now, this code is technically getting the correct values... but it won't let me add them numerically (hence why totalVal is a string right now). It gives me a string of the correct numbers concatenated together, but won't let me convert them to floats to actually add them.
I've tried doing float(dogData[col]) for the totalVal addition line - it gives me a TypeError: cannot convert the series to <class 'float'>.
I've tried keeping it as a string, putting "," between the numbers, then doing totalVal.split(",") to separate them, then converting and adding... but obviously that doesn't work either, because AttributeError: 'Series' object has no attribute 'split'.
These errors make sense to me and I understand why it's happening, but I don't know what the correct method for doing this is. dogData[col] gives me all the values for every row at once, which is what I want, but I don't know how to then store that and add it in the next iteration of the loop.
Here's a copy/pastable sample of data:
,Corgi,Yorkie,Pug,Average,Average,Dalmation,German Shepherd,Average,Great Dane,Average
,Small,Small,Small,Small,Medium,Large,Large,Large,Very Large,Very Large
Words,1,3,3,3,2.4,3,5,7,7,7
Words1,2,2,4,4,2.2,4,4,6,8,8
Words2,2,1,5,3,2.5,5,3,8,9,6
Words3,1,4,4,2,2.7,6,6,5,6,9
You have to do a few tricks to get this to work.
Step 1: Read the csv file and use the first two rows as the header. This creates a MultiIndex column list.
Step 2: Join the two levels together with, say, an _.
Step 3: Rename the specific columns as per your requirement, e.g. S-Average, M-Average, ....
Step 4: Find out which columns are a dog name plus Small.
Step 5: Compute the value for Small. Per your requirement, sum(columns with Small) / count(columns with Small).
Steps 6, 7: Do the same for Large.
Steps 8, 9: Do the same for Very Large.
This will give you the final result. If you want the columns to be in a specific order, you can change the order.
Step 10: Change the column order of the dataframe.
import pandas as pd

df = pd.read_csv('abc.txt', header=[0, 1], index_col=0)
df.columns = df.columns.map('_'.join)
df.rename(columns={'Average_Small': 'S-Average',
                   'Average_Medium': 'M-Average',
                   'Average_Large': 'L-Average',
                   'Average_Very Large': 'Very L-Average'}, inplace=True)

idx = [i for i, x in enumerate(df.columns) if x.endswith('_Small')]
if idx:
    df['Small'] = ((df.iloc[:, idx].sum(axis=1)) / len(idx)).round(2)
    df.drop(df.columns[idx], axis=1, inplace=True)

idx = [i for i, x in enumerate(df.columns) if x.endswith('_Large')]
if idx:
    df['Large'] = ((df.iloc[:, idx].sum(axis=1)) / len(idx)).round(2)
    df.drop(df.columns[idx], axis=1, inplace=True)

idx = [i for i, x in enumerate(df.columns) if x.endswith('_Very Large')]
if idx:
    df['Very_Large'] = ((df.iloc[:, idx].sum(axis=1)) / len(idx)).round(2)
    df.drop(df.columns[idx], axis=1, inplace=True)

df = df[['Small', 'S-Average', 'M-Average', 'L-Average', 'Very L-Average', 'Large', 'Very_Large']]
print(df)
The output of this will be:
        Small  S-Average  M-Average  ...  Very L-Average  Large  Very_Large
Words    2.33          3        2.4  ...               7    4.0         7.0
Words1   2.67          4        2.2  ...               8    4.0         8.0
Words2   2.67          3        2.5  ...               6    4.0         9.0
Words3   3.00          2        2.7  ...               9    6.0         6.0
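As an aside on the accumulation error in the question (float(dogData[col]) fails because a whole Series cannot be cast to a single float), here is a minimal sketch of just the numeric accumulation; it deliberately ignores the size grouping, which the answer above handles:
totalVal = 0
count = 0
for col in dogData:
    if "Unnamed" in col or "Average" in col:
        continue  # skip the label column and the existing Average columns
    totalVal = totalVal + dogData[col].astype(float)  # element-wise Series addition
    count += 1
print(totalVal / count)  # per-row average over all accumulated breed columns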

assigning values to first three rows of every group

I'm trying to code the following logic in pandas: for the first three rows of every group, I want to create a variable that has the value 1 (1st row), 2 (2nd row), 3 (3rd row). I'm doing it like below. In the code below I'm not creating a new variable because I don't know how to do that, so I'm replacing a variable that's already present in the data set. Though my code doesn't throw an error, it's giving me very strange results.
def func(i):
    data.loc[data.groupby('ID').nth(i).index, 'date'] = i

func(1)
Any suggestions?
Thanks in Advance.
If you don't have a duplicated index, you can create a row id within each group, filter out ids larger than 3, and then assign the result back to the data frame:
data['date'] = (data.groupby('ID').cumcount() + 1)[lambda x: x <= 3]
This gives the first three rows of each ID the values 1, 2, 3; rows beyond the third will have NaN values.
data = pd.DataFrame({"ID":[1,1,1,1,2,2,3,3,3]})
data['date'] = (data.groupby('ID').cumcount() + 1)[lambda x: x <= 3]
data
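For reference, the resulting frame looks like this; the fourth row of ID 1 falls outside the first three rows of its group, so it gets NaN:
   ID  date
0   1   1.0
1   1   2.0
2   1   3.0
3   1   NaN
4   2   1.0
5   2   2.0
6   3   1.0
7   3   2.0
8   3   3.0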

Pandas - Add column containing metadata about the row

I want to add a column to a Dataframe that will contain a number derived from the number of NaN values in the row, specifically: one less than the number of non-NaN values in the row.
I tried:
for index, row in df.iterrows():
    count = row.value_counts()
    val = sum(count) - 1
    df['Num Hits'] = val
Which returns an error:
-c:4: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_index,col_indexer] = value instead
and puts the first val value into every cell of the new column. I've tried reading about .loc and indexing in the Pandas documentation and failed to make sense of it. I gather that .loc wants a row_index and a column_index, but I don't know whether these are pre-defined in every dataframe and I just have to specify them somehow, or whether I need to "set" an index on the dataframe before telling the loop where to place the new value, val.
You can totally do it in a vectorized way without using a loop, which is likely to be faster than the loop version:
In [89]:
print df
          0         1         2         3
0  0.835396  0.330275  0.786579  0.493567
1  0.751678  0.299354  0.050638  0.483490
2  0.559348  0.106477  0.807911  0.883195
3  0.250296  0.281871  0.439523  0.117846
4  0.480055  0.269579  0.282295  0.170642

In [90]:
# number of valid numbers - 1
df.apply(lambda x: np.isfinite(x).sum() - 1, axis=1)

Out[90]:
0    3
1    3
2    3
3    3
4    3
dtype: int64
@DSM brought up a good point that the above solution is still not fully vectorized. A fully vectorized form is simply (~df.isnull()).sum(axis=1) - 1.
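For instance, a small sketch (assuming a frame of random floats like the one above):
import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.rand(5, 4))
# count the non-null cells in each row, then subtract one
df['Num Hits'] = (~df.isnull()).sum(axis=1) - 1
print(df)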
You can use the index variable that you define as part of the for loop as the row_index that .loc is looking for:
for index, row in df.iterrows():
    count = row.value_counts()
    val = sum(count) - 1
    df.loc[index, 'Num Hits'] = val
