Using .loc or .iloc instead of .ix - python

I am using python 3.6.
I have a pandas.core.frame.DataFrame and would like to filter the entire DataFrame based on if the column called "Closed Date" is not null. In other words, if it is null in the "Closed Date" column, then remove the whole row from the DataFrame.
My code right now is the following:
data = raw_data.ix[raw_data['Closed Date'].notnull()]
Though it gets the job done, I get a warning message saying the following:
C:\ProgramData\Anaconda3\lib\site-packages\ipykernel_launcher.py:1: DeprecationWarning:
.ix is deprecated. Please use
.loc for label based indexing or
.iloc for positional indexing
I tried this code:
data1 = raw_data.loc[raw_data.notnull(), 'Closed Date']
But get this error:
ValueError: Cannot index with multidimensional key
How do I fix this? Any suggestions?

This should work for you:
data1 = raw_data.loc[raw_data['Closed Date'].notnull()]
.ix was very similar to the current .loc (which is why the correct .loc syntax is equivalent to what you were originally doing with .ix). The difference, according to this detailed answer, is: "ix usually tries to behave like loc but falls back to behaving like iloc if a label is not present in the index".
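To make that distinction concrete, here is a minimal sketch on a made-up frame (toy data, not the question's raw_data):
import pandas as pd
df = pd.DataFrame({'a': [10, 20, 30]}, index=[2, 3, 4])
df.loc[2]   # label-based: the row whose index label is 2 (a == 10)
df.iloc[2]  # position-based: the third row (a == 30)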
Example:
Taking this dataframe as an example (let's call it raw_data):
Closed Date x
0 1.0 1.0
1 2.0 2.0
2 3.0 NaN
3 NaN 3.0
4 4.0 4.0
raw_data.notnull() returns this DataFrame:
Closed Date x
0 True True
1 True True
2 True False
3 False True
4 True True
You can't index using .loc based on a dataframe of boolean values. However, when you do raw_data['Closed Date'].notnull(), you end up with a Series:
0 True
1 True
2 True
3 False
4 True
This Series can be passed to .loc as a sort of "boolean filter" to apply to your dataframe.
Alternate Solution
As pointed out by John Clemens, the same can be achieved with raw_data.dropna(subset=['Closed Date']). The documentation for the .dropna method outlines how this can be more flexible in some situations (for instance, it allows dropping rows or columns in which any or all values are NaN via the how argument).
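For instance, a minimal sketch of the dropna route (using the same hypothetical raw_data as above):
# Equivalent to the .loc filter: keep only rows with a non-null 'Closed Date'
data1 = raw_data.dropna(subset=['Closed Date'])
# The how argument controls whether a row must have any or all values NaN to be dropped
rows_all_nan_removed = raw_data.dropna(how='all')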


Why does the groupby command in Pandas produce non-existent ids?

I use the pandas groupby command on my dataframe as:
df.groupby('courier_id').type_of_vehicle.size()
but this code produces some 'courier_id' values that are not in my dataframe:
courier_id
00aecd42-472f-11ec-94e0-77812be296a5 4
011da6a6-eb0b-11ec-97e1-179dc13cdf87 1
0140f63c-02e0-11ed-b314-9b2e7e4f7e5c 1
0188d572-7228-11ec-ab3b-07d470cb404d 7
01cef7ba-e32e-11ec-bb21-67c7079055d4 0
..
c98fc418-7b51-11ec-a81c-77139d6dd889 0
d98a4b9a-d056-11ec-9e3c-0b80c11ec04b 1
dae54c80-d1f8-11ec-bbb0-b71d7b2c4e1a 1
f7925664-0ac1-11ed-ab40-df16023f78cb 0
f857cb84-371c-11ec-9af6-ffeaeea4b0f1 4
Name: type_of_vehicle, Length: 268, dtype: int64
I checked it with: '01cef7ba-e32e-11ec-bb21-67c7079055d4' in df.courier_id.values and the result was False.
I used df.groupby('courier_id').get_group('01cef7ba-e32e-11ec-bb21-67c7079055d4') and it raised a KeyError, but when I iterate over the groups with a for loop, it returns an empty DataFrame.
Note: when I slice my dataframe as new_df = df[['courier_id', 'type_of_vehicle']] the result becomes correct!
If you provide some reproducible code/data, it would be appreciated; that way we can give you the best possible answer.
However, I think the problem is due to the following:
When you use the function groupby(), the original courier_id becomes the new index of the transformed DataFrame. Try to use .reset_index() and your problem should be solved.
df.groupby('courier_id').type_of_vehicle.size().reset_index()
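As a rough sketch of the effect on a toy frame (made-up data, not the asker's):
import pandas as pd
toy = pd.DataFrame({'courier_id': ['a', 'a', 'b'],
                    'type_of_vehicle': ['bike', 'car', 'bike']})
counts = toy.groupby('courier_id').type_of_vehicle.size().reset_index()
# courier_id is an ordinary column again instead of the index
print(counts)
#   courier_id  type_of_vehicle
# 0          a                2
# 1          b                1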

Trying to compare two values in a pandas dataframe for max value

I've got a pandas dataframe, and I'm trying to fill a new column in the dataframe, which takes the maximum value of two values situated in another column of the dataframe, iteratively. I'm trying to build a loop to do this, and save time with computation as I realise I could probably do it with more lines of code.
for x in ((jac_input.index)):
jac_output['Max Load'][x] = jac_input[['load'][x],['load'][x+1]].max()
However, I keep getting this error during the comparison
IndexError: list index out of range
Any ideas as to where I'm going wrong here? Any help would be appreciated!
Many things are wrong with your current code.
When you do ['abc'][x], x can only take the value 0, and this will return 'abc' because you are indexing a plain Python list. That is not at all what you expect it to do (I imagine you meant to index the Series).
For your code to be valid, you should do something like:
jac_input = pd.DataFrame({'load': [1,0,3,2,5,4]})
for x in jac_input.index:
    print(jac_input['load'].loc[x:x+1].max())
output:
1
3
3
5
5
4
Also, when assigning, if you use jac_output['Max Load'][x] = ... you will likely encounter a SettingWithCopyWarning. You should rather use loc: jac_output.loc[x, 'Max Load'] = ...
But you do not need all that; use vectorized code instead!
You can perform rolling on the reversed dataframe:
jac_output['Max Load'] = jac_input['load'][::-1].rolling(2, min_periods=1).max()[::-1]
Or using concat:
jac_output['Max Load'] = pd.concat([jac_input['load'], jac_input['load'].shift(-1)], axis=1).max(1)
output (without assignment):
0 1.0
1 3.0
2 3.0
3 5.0
4 5.0
5 4.0
dtype: float64

Pandas - find occurrence within a subset

I'm stripping values from unformatted summary sheets in a for loop, and I need to dynamically find the index location of a string value after the occurrence of another specific string value. I used this question as my starting point. Example dataframe:
import pandas as pd
df = pd.DataFrame([['Small'],['Total',4],['Medium'],['Total',12],['Large'],['Total',7]])
>>>df
0 1
0 Small NaN
1 Total 4.0
2 Medium NaN
3 Total 12.0
4 Large NaN
5 Total 7.0
Say I want to find the 'Total' after 'Medium.' I can find the location of 'Medium' with the following:
MedInd = df[df.iloc[:,0]=='Medium'].first_valid_index()
>>>MedInd
2
After this, I run into issues placing a subset limitation on the query:
>>>MedTotal = df[df.iloc[MedInd:,0]=='Total'].first_valid_index()
IndexingError: Unalignable boolean Series provided as indexer (index of the boolean Series and of the indexed object do not match).
Still very new to programming and could use some direction with this error. Searching the error itself it seems like it's an issue of the ordering in which I should define the subset, but I've been unable to fix it thus far. Any assistance would be greatly appreciated.
EDIT:
So I ended up resolving this by moving the subset limitation to the front, outside the first_valid_index clause as follows (suggestion obtained from this reddit comment):
MedTotal = df.iloc[MedInd:][df.iloc[:,0]=='Total'].first_valid_index()
This does throw the following warning:
UserWarning: Boolean Series key will be reindexed to match DataFrame index.
But the output was as desired, which was just the index number for the value being sought.
I don't know if this will always produce desired results given the warning, so I'll continue to scan the answers for other solutions.
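One way to avoid that warning (a sketch reusing the df and MedInd from above, not verified against the original data) is to build the boolean mask from the already-sliced frame so that both share the same index:
subset = df.iloc[MedInd:]
# mask and subset now have matching indexes, so no reindexing is needed
MedTotal = subset[subset.iloc[:, 0] == 'Total'].first_valid_index()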
You may want to use shift:
df[df.iloc[:,0].shift().eq('Medium') & df.iloc[:,0].eq('Total')]
Output:
0 1
3 Total 12.0
This would work
def find_idx(df, first_str, second_str):
    first_idx = df[0].eq(first_str).idxmax()
    rest_of_df = df.iloc[first_idx:]
    return rest_of_df[0].eq(second_str).idxmax()
find_idx(df, 'Medium', 'Total')

faster replacement of -1 and 0 to NaNs in column for a large dataset

'azdias' is the dataframe holding my main dataset, and its metadata (feature summary) lives in the dataframe 'feat_info'. For every column, 'feat_info' lists the values that should be treated as NaN.
Ex: column1 has [-1,0] listed as NaN values, so my job is to find the -1 and 0 values in column1 and replace them with NaN.
azdias dataframe:
feat_info dataframe:
I have tried the following in a Jupyter notebook.
def NAFunc(x, miss_unknown_list):
    x_output = x
    for i in miss_unknown_list:
        try:
            miss_unknown_value = float(i)
        except ValueError:
            miss_unknown_value = i
        if x == miss_unknown_value:
            x_output = np.nan
            break
    return x_output

for cols in azdias.columns.tolist():
    NAList = feat_info[feat_info.attribute == cols]['missing_or_unknown'].values[0]
    azdias[cols] = azdias[cols].apply(lambda x: NAFunc(x, NAList))
Question 1: I am trying to impute NaN values, but my code is very slow. I wish to speed up the execution.
I have attached samples of both dataframes:
azdias_sample
AGER_TYP ALTERSKATEGORIE_GROB ANREDE_KZ CJT_GESAMTTYP FINANZ_MINIMALIST
0 -1 2 1 2.0 3
1 -1 1 2 5.0 1
2 -1 3 2 3.0 1
3 2 4 2 2.0 4
4 -1 3 1 5.0 4
feat_info_sample
attribute information_level type missing_or_unknown
AGER_TYP person categorical [-1,0]
ALTERSKATEGORIE_GROB person ordinal [-1,0,9]
ANREDE_KZ person categorical [-1,0]
CJT_GESAMTTYP person categorical [0]
FINANZ_MINIMALIST person ordinal [-1]
If the azdias dataset is obtained from read_csv or similar IO functions, the na_values keyword argument can be used to specify column-specific missing-value representations, so that the returned data frame already has NaN values in place from the very beginning. Sample code is shown below.
import pandas as pd
from ast import literal_eval
feat_info.set_index("attribute", inplace=True)
# A more concise but less efficient alternative is
# na_dict = feat_info["missing_or_unknown"].apply(literal_eval).to_dict()
na_dict = {attr: literal_eval(val) for attr, val in feat_info["missing_or_unknown"].items()}
df_azdias = pd.read_csv("azdias.csv", na_values=na_dict)
As for the data type, there is no built-in NaN representation for integer data types. Hence a float data type is needed. If the missing values are imputed using fillna, the downcast argument can be specified to make the returned series or data frame have an appropriate data type.
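For example, a minimal sketch of that downcast behaviour on a toy Series (note that newer pandas versions deprecate the downcast argument):
import numpy as np
import pandas as pd
s = pd.Series([1, np.nan, 3])            # float64, because NaN forces a float dtype
filled = s.fillna(0, downcast='infer')   # 'infer' downcasts back to int64 once no NaN remains
print(filled.dtype)                      # int64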
Try using the DataFrame's replace method. How about this?
for c in azdias.columns.tolist():
    replace_list = feat_info[feat_info['attribute'] == c]['missing_or_unknown'].values
    azdias[c] = azdias[c].replace(to_replace=list(replace_list), value=np.nan)
A couple things I'm not sure about without being able to execute your code:
In your example, you used .values[0]. Don't you want all the values?
I'm not sure if it's necessary to do to_replace=list(replace_list), it may work to just use to_replace=replace_list.
In general, I recommend thinking to yourself "surely Pandas has a function to do this for me." Often, they do. For performance with Pandas generally, avoid looping over and setting things. Vectorized methods tend to be much faster.

how to apply custom function to each row of pandas dataframe

I have the following example:
import pandas as pd
import numpy as np
df = pd.DataFrame([(0,2,5), (2,4,None),(7,-5,4), (1,None,None)])
def clean(series):
    start = np.min(list(series.index[pd.isnull(series)]))
    end = len(series)
    series[start:] = series[start-1]
    return series
My objective is to obtain a dataframe in which each row that contains a None value is filled in with the last available numerical value.
So, for example, running this function on just the 3rd row of the dataframe, I would produce the following:
row = df.ix[3]
test = clean(row)
test
0 1.0
1 1.0
2 1.0
Name: 3, dtype: float64
I cannot get this to work using the .apply() method, i.e. df.apply(clean, axis=1).
I should mention that this is a toy example - the custom function I would write in the real case is more dynamic in how it fills the values - so I am not looking for basic utilities like .ffill or .fillna.
The apply method didn't work because, when a row is completely filled, your clean function does not know where to start from: series.index[pd.isnull(series)] is an empty array for such a series.
So use a condition before altering series data i.e
def clean(series):
    # Creating a copy for the sake of safety
    series = series.copy()
    # Alter the series only if there is a None value
    if pd.isnull(series).any():
        start = np.min(list(series.index[pd.isnull(series)]))
        # For a completely filled row,
        # series.index[pd.isnull(series)] would return
        # Int64Index([], dtype='int64')
        end = len(series)
        series[start:] = series[start-1]
    return series
df.apply(clean,1)
Output :
0 1 2
0 0.0 2.0 5.0
1 2.0 4.0 4.0
2 7.0 -5.0 4.0
3 1.0 1.0 1.0
Hope this clarifies why apply didn't work. I also suggest taking the built-in methods into consideration for cleaning the data, rather than writing functions from scratch.
First, here is the code that solves your toy problem, though it isn't what you ultimately want:
df.ffill(axis=1)
Next, I tried testing your code:
df.apply(clean,axis=1)
#...start = np.min(list(series.index[pd.isnull(series)]))...
#=>ValueError: ('zero-size array to reduction operation minimum
# which has no identity', 'occurred at index 0')
To understand the situation, test with a lambda function:
df.apply(lambda series:list(series.index[pd.isnull(series)]),axis=1)
0 []
1 [2]
2 []
3 [1, 2]
dtype: object
And the following expression raises the same ValueError:
import numpy as np
np.min([])
In conclusion, pandas.apply() works well, but the clean function doesn't.
Could you use something like fillna with backfill? I think this might be more efficient, if backfill meets your scenario.
i.e.
df.fillna(method='backfill')
However, this assumes np.nan values in the cells.
https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.fillna.html
