How to use np.where to create a new column using previous rows? - python

I'm fairly new to Python, and I've been migrating my Excel workbook to pandas because Excel cannot handle hundreds of thousands of rows.
I have a table that looks like this in excel:
Where column A and B are inputs and column C is output.
The formula for column C is
=IF(B2="new",A2,C3)
If 'Status' is equal to "new" the result will be the value in column A
And if 'Status' is not equal to "new" the result will be the previous row of C
I tried doing it with np.where and .shift(-1) using this code:
df['Previous'] = np.where(df['Status']=='new', df['Count'], df['Previous'].shift(-1))
but I receive this error:
KeyError: 'Previous'
It seems that I need to define the column 'Previous' first.
I tried searching Stack Overflow, but most of the related solutions are based on more complex problems, and I was not able to map them onto my simple one.
df.columns looks like
Index(['Count', 'Status'], dtype='object')
This is the result of my code once run.

Since you are creating the new column Previous, it is not yet defined at the moment you reference it inside the np.where() call, hence the error.
Also, your formula is not really taking a "previous" value: the first row has no previous value at all, and even while processing the 2nd and 3rd rows the value is still unknown until we reach the 4th row.
So the solution needs to write a temporary placeholder while processing rows whose value is still unknown, and fill those placeholders in afterwards once the real values appear. Here we can use np.nan as the placeholder and then back-fill with .bfill(). Backward fill is the right direction because the rows with index 0, 1, 2 take their value from the row with index 3.
To solve it, you can try the following:
df['Previous'] = np.where(df['Status']=='new', df['Count'], np.nan)
df['Previous'] = df['Previous'].bfill().astype(int)
print(df)
Count Status Previous
0 4 old 1
1 3 old 1
2 2 old 1
3 1 new 1
4 40 old 10
5 30 old 10
6 20 old 10
7 10 new 10
8 400 old 100
9 300 old 100
10 200 old 100
11 100 new 100
Here I assumed the dtype of column Count is integer. If it is of string type, leave out the .astype(int) in the code above.
Alternatively, you can also do it in one step using .where() on column Count, instead of np.where() as follows:
df['Previous'] = df['Count'].where(df['Status'] =='new').bfill().astype(int)
print(df)
Count Status Previous
0 4 old 1
1 3 old 1
2 2 old 1
3 1 new 1
4 40 old 10
5 30 old 10
6 20 old 10
7 10 new 10
8 400 old 100
9 300 old 100
10 200 old 100
11 100 new 100
Similarly, no need to use .astype(int) in the code above if column Count is of string type.
.where() is documented as: "Replace values where the condition is False", which amounts to "retain values where the condition is True". So where the condition is True, the original Count values are kept. What gets used where the condition is False? The official documentation answers that with the 2nd parameter, shown as other=nan: when the condition is False, the value passed as other (if any) is used, and it defaults to nan when omitted. Since we don't pass a 2nd parameter here, nan fills the False rows, which has the same effect as passing np.nan for the False branch of the np.where() call.
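To see the equivalence concretely, here is a small runnable sketch; the toy frame mirrors the question's data, and the side-by-side comparison is mine:

```python
import numpy as np
import pandas as pd

# Toy frame mirroring the question: a 'Count' value and a 'Status' flag.
df = pd.DataFrame({'Count': [4, 3, 2, 1],
                   'Status': ['old', 'old', 'old', 'new']})

# np.where builds a plain array; Series.where keeps the pandas index.
a = np.where(df['Status'] == 'new', df['Count'], np.nan)
b = df['Count'].where(df['Status'] == 'new')  # 'other' defaults to NaN

print(a)           # [nan nan nan  1.]
print(b.tolist())  # [nan, nan, nan, 1.0]

# Back-fill the NaNs and cast, exactly as in the answer above.
print(b.bfill().astype(int).tolist())  # [1, 1, 1, 1]
```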

Related

Select rows from Dataframe with variable number of conditions

I'm trying to write a function that takes as inputs a DataFrame with a column 'timestamp' and a list of tuples. Every tuple will contain a beginning and end time.
What I want to do is to "split" the dataframe in two new ones, where the first contains the rows for which the timestamp value is not contained between the extremes of any tuple, and the other is just the complementary.
The number of filter tuples is not known a priori though.
df = pd.DataFrame({'timestamp':[0,1,2,5,6,7,11,22,33,100], 'x':[1,2,3,4,5,6,7,8,9,1]})
filt = [(1,4), (10,40)]
left, removed = func(df, filt)
This should give me two dataframes
left: with rows with timestamp [0,5,6,7,100]
removed: with rows with timestamp [1,2,11,22,33]
I believe the right approach is to write a custom function that can be used as a filter, and then call it somehow to filter/mask the dataframe, but I could not find a proper example of how to implement this.
Check:
out = df[~pd.concat([df.timestamp.between(*x) for x in filt], axis=1).any(axis=1)]
Out[175]:
timestamp x
0 0 1
3 5 4
4 6 5
5 7 6
9 100 1
Can't you use filtering with .isin():
left,removed = df[df['timestamp'].isin([0,5,6,7,100])],df[df['timestamp'].isin([1,2,11,22,33])]
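For a version that works for any list of tuples, here is one possible sketch of the requested helper; the name func and the (left, removed) return order come from the question, while the implementation itself is just one way to do it:

```python
import pandas as pd

def func(df, filt):
    # One boolean column per (start, end) tuple, OR-ed together row-wise;
    # between() is inclusive on both ends.
    mask = pd.concat([df['timestamp'].between(lo, hi) for lo, hi in filt],
                     axis=1).any(axis=1)
    return df[~mask], df[mask]

df = pd.DataFrame({'timestamp': [0, 1, 2, 5, 6, 7, 11, 22, 33, 100],
                   'x': [1, 2, 3, 4, 5, 6, 7, 8, 9, 1]})
filt = [(1, 4), (10, 40)]
left, removed = func(df, filt)
print(left['timestamp'].tolist())     # [0, 5, 6, 7, 100]
print(removed['timestamp'].tolist())  # [1, 2, 11, 22, 33]
```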

How to delete the following row if duplicated data found in excel using python?

Does anyone know how to delete the following rows if duplicated data is found in Excel, using Python?
Here is my input data (there are only 2 columns for input data):
col_1 col_2
1 number 2.37
2 number 2.8
3 number 3.4
4 number
5 number
6 number
7 number 2.62
8 number 3.1
9 number 2.6
If duplicated data is found, the rest of the rows should be deleted, starting from where the duplication begins. In the input above, lines 4 to 6 contain duplicated data, so once that is detected, everything from line 4 to the end of the column (line 9) should be deleted.
Therefore, the output should be like this (there are only 2 columns for output data):
col_1 col_2
1 number 2.37
2 number 2.8
3 number 3.4
Here is my code (but it does not seem to achieve my objective):
df = pd.read_excel(path_to_the_file)
df = df[~df.col_1.str.match('number')]
df.to_excel(path_to_the_file)
Any help will be appreciated, thanks!!
df = pd.read_excel(path_to_the_file)
# col_1 carries the row number, so take it from the first duplicated row
index = df[df.duplicated(['col_2'])].values[0][0]
df2 = df.iloc[:index - 2]
print(df2)
output:
  col_1    col_2
0 1 number 2.37
1 2 number 2.8
2 3 number 3.4
The case you are asking about (regardless of which column is which or what 'duplicated data' means) needs a loop, because slicing is not an iterative operation. What your question describes does depend on data found at some point ("following row"), so the slice should be applied once you find where the data is duplicated (if it is at all).
The Series.str.match() function returns a boolean Series telling you, row by row, whether the match happens. Summing it coerces the booleans to int and gives the total number of matches. If there is more than one match (the current line plus at least one other), and only then, you can slice the df from there onwards. The first row cannot be a duplicate (it is the first, so it needs no check).
The specific whereabouts of what you are trying to achieve you can work out from this example.
import pandas as pd

for idx, row in df[1:].iterrows():
    if df.col_2.str.match(df.loc[idx].col_2).sum() > 1:
        print("found at", idx)
        df = df[:idx]
        break
Bear in mind that this is just an out-of-the-box example; break should rarely be used, and you should prefer elif and explicitly define the behaviour of your code.
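For reference, a loop-free cut is also possible. The sketch below assumes "duplicated" means col_2 repeats the value on the row directly above it (blank cells counting as repeats); the column names come from the question, and the sample data is reconstructed:

```python
import numpy as np
import pandas as pd

# Reconstructed sample: blank cells in col_2 become NaN when read from Excel.
df = pd.DataFrame({'col_1': ['number'] * 9,
                   'col_2': [2.37, 2.8, 3.4, np.nan, np.nan, np.nan,
                             2.62, 3.1, 2.6]})

prev = df['col_2'].shift()
# A row "repeats" if it equals the previous value, or both are NaN
# (NaN == NaN is False, so blanks need their own check).
repeat = df['col_2'].eq(prev) | (df['col_2'].isna() & prev.isna())

if repeat.any():
    # idxmax() gives the label of the first True; with the default
    # RangeIndex that is also the position. Cut one row earlier so the
    # whole duplicated run goes, not just its second element.
    cut = repeat.idxmax() - 1
    df = df.iloc[:cut]

print(df)  # keeps only the first three rows
```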

Pandas: add a new column with one single value at the last row of a dataframe

My request is simple but I am stuck and do not know why... my dataframe looks like this:
price time
0 1 3
1 3 6
2 4 7
What I need to do is to add a new column mkt with only one value equal to 10 at the last row (in my example index = 2). What I have tried:
df.mkt=''
df.mkt.loc[-1] =10
But when I look at my dataframe again, the last row is not updated... I know the answer must be simple, but I am stuck. Any idea? Thanks!
You can use the at function, keeping in mind that .at (like .loc) takes a label, not a position, so use the last index label rather than -1:
df.mkt = ''
df.at[df.index[-1], 'mkt'] = 10
Or build the whole column in one step:
df['mkt'] = [''] * (len(df) - 1) + [10]
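As a runnable sketch of the label-versus-position pitfall, with the data reconstructed from the question:

```python
import pandas as pd

df = pd.DataFrame({'price': [1, 3, 4], 'time': [3, 6, 7]})
df['mkt'] = ''

# .loc/.at index by label, so -1 would create a brand-new row labelled -1
# instead of touching the last row. Use the last index label instead:
df.at[df.index[-1], 'mkt'] = 10
print(df['mkt'].tolist())  # ['', '', 10]
```

An equivalent positional spelling is df.iloc[-1, df.columns.get_loc('mkt')] = 10.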

Finding rows with highest means in dataframe

I am trying to find the rows, in a very large dataframe, with the highest mean.
Reason: I scan something with laser trackers and used a "higher" point as reference to where the scan starts. I am trying to find the object placed, through out my data.
I have calculated the mean of each row with:
base = df.mean(axis=1)
base.columns = ['index','Mean']
Here is an example of the mean for each row:
0 4.407498
1 4.463597
2 4.611886
3 4.710751
4 4.742491
5 4.580945
This seems to work fine, except that it adds an index column, and gives out columns with an index of type float64.
I then tried this to locate the rows with highest mean:
moy = base.loc[base.reset_index().groupby(['index'])['Mean'].idxmax()]
This gives out this:
index Mean
0 0 4.407498
1 1 4.463597
2 2 4.611886
3 3 4.710751
4 4 4.742491
5 5 4.580945
But it only re-indexes (I now have 3 columns instead of two) and does nothing else. It still shows all rows.
Here is one way without using groupby. Note that df.mean(axis=1) returns a Series, so sort it by its values directly:
moy = base.sort_values().tail(1)
It looks as though your data is a string or single column with a space in between your two numbers. Suggest splitting the column into two and/or using something similar to below to set the index to your specific column of interest.
import pandas as pd
df = pd.read_csv('testdata.txt', names=["Index", "Mean"], sep=r"\s+")
df = df.set_index("Index")
print(df)
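If the goal is just the single row with the highest mean, the groupby can be skipped entirely; a short sketch with made-up data:

```python
import pandas as pd

# Two measurement columns; df.mean(axis=1) gives one mean per row,
# indexed by the row label, so idxmax() points straight at the winner.
df = pd.DataFrame({'a': [4.0, 4.5, 4.8], 'b': [4.8, 4.4, 4.7]})
base = df.mean(axis=1)

best_row = base.idxmax()   # label of the row with the highest mean
print(best_row)            # 2
print(base.max())
```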

Filling missing time values in a multi-indexed dataframe

Problem and what I want
I have a data file that comprises time series read asynchronously from multiple sensors. Basically for every data element in my file, I have a sensor ID and time at which it was read, but I do not always have all sensors for every time, and read times may not be evenly spaced. Something like:
ID,time,data
0,0,1
1,0,2
2,0,3
0,1,4
2,1,5 # skip some sensors for some time steps
0,2,6
2,2,7
2,3,8
1,5,9 # skip some time steps
2,5,10
Important note: the actual time column is of datetime type.
What I want is to be able to zero-order hold (forward fill) values for every sensor for any time steps where that sensor does not exist, and either set to zero or back fill any sensors that are not read at the earliest time steps. What I want is a dataframe that looks like it was read from:
ID,time,data
0,0,1
1,0,2
2,0,3
0,1,4
1,1,2 # ID 1 hold value from time step 0
2,1,5
0,2,6
1,2,2 # ID 1 still holding
2,2,7
0,3,6 # ID 0 holding
1,3,2 # ID 1 still holding
2,3,8
0,5,6 # ID 0 still holding, can skip totally missing time steps
1,5,9 # ID 1 finally updates
2,5,10
Pandas attempts so far
I initialize my dataframe and set my indices:
df = pd.read_csv(filename, dtype=int)
df.set_index(['ID', 'time'], inplace=True)
I try to mess with things like:
filled = df.reindex(method='ffill')
or the like with various values passed to the index keyword argument like df.index, ['time'], etc. This always either throws an error because I passed an invalid keyword argument, or does nothing visible to the dataframe. I think it is not recognizing that the data I am looking for is "missing".
I also tried:
df.update(df.groupby(level=0).ffill())
or level=1 based on Multi-Indexed fillna in Pandas, but I get no visible change to the dataframe again, I think because I don't have anything currently where I want my values to go.
Numpy attempt so far
I have had some luck with numpy and non-integer indexing using something like:
data = [np.array(df.loc[level].data) for level in df.index.levels[0]]
shapes = [arr.shape for arr in data]
print(shapes)
# [(3,), (2,), (5,)]
data = [np.array([arr[i] for i in np.linspace(0, arr.shape[0]-1, num=max(shapes)[0])]) for arr in data]
print([arr.shape for arr in data])
# [(5,), (5,), (5,)]
But this has two problems:
It takes me out of the pandas world, and I now have to manually maintain my sensor IDs, time index, etc. along with my feature vector (the actual data column is not just one column but a ton of values from a sensor suite).
Given the number of columns and the size of the actual dataset, this is going to be clunky and inelegant to implement on my real example. I would prefer a way of doing it in pandas.
The application
Ultimately this is just the data-cleaning step for training recurrent neural network, where for each time step I will need to feed a feature vector that always has the same structure (one set of measurements for each sensor ID for each time step).
Thank you for your help!
Here is one way, using reindex and a categorical time column:
df.time = pd.Categorical(df.time, categories=[0, 1, 2, 3, 4, 5])
new_df = df.groupby('time', as_index=False).apply(lambda x: x.set_index('ID').reindex([0, 1, 2])).reset_index()
new_df['data'] = new_df.groupby('ID')['data'].ffill()
new_df.drop(columns='time').rename(columns={'level_0': 'time'})
Out[311]:
time ID data
0 0 0 1.0
1 0 1 2.0
2 0 2 3.0
3 1 0 4.0
4 1 1 2.0
5 1 2 5.0
6 2 0 6.0
7 2 1 2.0
8 2 2 7.0
9 3 0 6.0
10 3 1 2.0
11 3 2 8.0
12 4 0 6.0
13 4 1 2.0
14 4 2 8.0
15 5 0 6.0
16 5 1 9.0
17 5 2 10.0
You can keep a dictionary of the last reading for each sensor. You'll have to pick some initial value; the most logical choice is probably to back-fill the earliest reading to earlier times. Once you've populated your last_reading dictionary, you can sort all the readings by time, update the dictionary for each reading, and then fill in rows from the dictionary. After your last_reading dictionary is initialized:
last_time = readings[0][time]
for reading in readings:
    if reading[time] > last_time:
        for ID in ID_list:
            df.loc[last_time, ID] = last_reading[ID]
        last_time = reading[time]
    last_reading[reading[ID]] = reading[data]

# The loop above doesn't write out the final time step,
# so handle that separately:
for ID in ID_list:
    df.loc[last_time, ID] = last_reading[ID]
This assumes that you have only one reading for each time/sensor pair, and that readings is a list of dictionaries sorted by time. It also assumes that df has the different sensors as columns and the different times as index. Adjust the code as necessary if otherwise. You can also probably optimize it a bit more by updating a whole row at once instead of using a for loop, but I didn't want to deal with making sure I had the pandas syntax right.
Looking at the application, though, you might want to have each cell in the dataframe be not a number but a tuple of last value and time it was read, so replace last_reading[reading[ID]] = reading[data] with
last_reading[reading[ID]] = [reading[data],reading[time]]. Your neural net can then decide how to weight data based on how old it is.
I got this to work with the following, which I think is pretty general for any case like this where the time index for which you want to fill values is the second in a multi-index with two indices:
# Remove duplicate time indices (happens some in the dataset, pandas freaks out).
df = df[~df.index.duplicated(keep='first')]
# Unstack the dataframe and fill values per serial number forward, backward.
df = df.unstack(level=0)
df.update(df.ffill()) # first ZOH forward
df.update(df.bfill()) # now back fill values that are not seen at the beginning
# Restack the dataframe and re-order the indices.
df = df.stack(level=1)
df = df.swaplevel()
This gets me what I want, although I would love to be able to keep the duplicate time entries if anybody knows of a good way to do this.
You could also use df.update(df.fillna(0)) instead of backfilling if starting unseen values at zero is preferable for a particular application.
I put the above code block in a function called clean_df that takes the dataframe as argument and returns the cleaned dataframe.
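For completeness, here is the whole pipeline as a self-contained sketch on the question's sample data. Integer times stand in for the real datetimes, and plain ffill()/bfill() replace the update() calls, which is a slight simplification:

```python
import io
import pandas as pd

# The question's sample readings, as they would be read from the file.
csv = io.StringIO("""ID,time,data
0,0,1
1,0,2
2,0,3
0,1,4
2,1,5
0,2,6
2,2,7
2,3,8
1,5,9
2,5,10
""")
df = pd.read_csv(csv).set_index(['ID', 'time'])
df = df[~df.index.duplicated(keep='first')]

wide = df.unstack(level=0)   # one column per sensor ID, rows indexed by time
wide = wide.ffill().bfill()  # hold last reading forward, back-fill the start

df = wide.stack(level=1).swaplevel().sort_index()
print(df)  # ID 1 now holds its time-0 reading through times 1-3
```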
