Slicing pandas raw dataframe (prior to re-organizing the data) - python

This is my very first post but I'll do my best to make it relevant.
I have a dataframe of stock prices freshly imported with the DataReader, from Morningstar. It looks like this:
print df.head()
Close High Low Open Volume Symbol
Symbol Date
AAPL 2018-03-01 175.00 179.775 172.66 178.54 48801970 AAPL
2018-03-02 176.21 176.300 172.45 172.80 38453950 AAPL
2018-03-05 176.82 177.740 174.52 175.21 28401366 AAPL
2018-03-06 176.67 178.250 176.13 177.91 23788506 AAPL
2018-03-07 175.03 175.850 174.27 174.94 31703462 AAPL
I want to refer to specific cells in the dataframe, especially the values in the last row for a given stock. There are 255 rows.
Please note that the dataframe is a concatenation of multiple DataReader fetches. I made it from code found on StackOverflow with slight updates and changes:
rawdata = []  # list that collects the per-ticker dataframes
for ticker in tickers:
    fetched = web.DataReader(ticker, "morningstar", start='3/1/2018', end='4/15/2018')  # bloody month/day/year
    fetched['Symbol'] = ticker  # add a symbol column
    rawdata.append(fetched)
stocks = pd.concat(rawdata)  # concatenate all the dfs
Now
print df[255:]
returns the last row with column names, and
print df[255:].values
returns the values of the last row.
But
print df[-1]
returns an error.
I will need to refer to the last row after updating the dataframe, without knowing how many rows there are by then. Why can't I do df[-1]?
I've looked around and found techniques with "iloc" notably, but I'm trying to keep this very simple for the moment.
I've also looked for questions about slicing. But
print df[255:['Close']]
returns the error "unhashable type" - although there already is a column named 'Close'.
Is it because my dataframe is not properly indexed? Or because it is not a csv yet?
I know how to work on indexes, and also how to write to csv. And I will definitely have to organize the data in a better way at some stage. But I don't understand why I cannot call the last row or slice for a specific cell with the current format of my data.
Thanks for your kind attention

You need to be a bit careful when slicing DataFrames with [].
When you pass a single label, [] looks it up among the columns. When you write df[-1] you get KeyError: -1 because your df doesn't have any column labeled -1.
If you want to slice the last row, you need to either use slice syntax with a colon, df[-1:], or, if you want to be super safe, use .iloc.
Hopefully this illustrates this a bit more. I've included a column labeled -1 just to show you what df[-1] will actually do.
import pandas as pd

df = pd.DataFrame({'value': [-2, -1, 0, 1, 2],
                   'name': ['a', 'b', 'c', 'd', 'e'],
                   -1: [1, 2, 3, 4, 5]})
#   value name  -1
#0     -2    a   1
#1     -1    b   2
#2      0    c   3
#3      1    d   4
#4      2    e   5

df[-1]
#0    1
#1    2
#2    3
#3    4
#4    5
#Name: -1, dtype: int64

df[-1:]  # or df.iloc[-1:]
#   value name  -1
#4      2    e   5
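Tying this back to your stock dataframe, here is a minimal sketch (assuming the concatenated frame is called stocks, as in the snippet above, with its (Symbol, Date) MultiIndex) of grabbing the last row, a single cell, or the last row for one ticker without hard-coding the row count:
last_row = stocks.iloc[-1]                # last row of the whole frame, whatever its length
last_close = stocks.iloc[-1]['Close']     # one cell from that row
aapl_last = stocks.loc['AAPL'].iloc[-1]   # last row for a given symbol of the MultiIndex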

Related

How to delete the following row if duplicated data found in excel using python?

Does anyone know how to delete the following rows if duplicated data is found in an Excel file using Python?
Here is my input data (there are only 2 columns for input data):
col_1 col_2
1 number 2.37
2 number 2.8
3 number 3.4
4 number
5 number
6 number
7 number 2.62
8 number 3.1
9 number 2.6
If duplicated data is found, the rest of the rows should be deleted starting from the duplicated data. In this case, the input data above shows that lines 4 to 6 are duplicated data; once that duplication is detected, everything from line 4 until the end of the column (which is line 9) should be deleted.
Therefore, the output should be like this (there are only 2 columns for output data):
col_1 col_2
1 number 2.37
2 number 2.8
3 number 3.4
Here is my code (but it does not seem to achieve my objective):
df = pd.read_excel(path_to_the_file)
df = df[~df.col_1.str.match('number')]
df.to_excel(path_to_the_file)
Any help will be appreciated, thanks!!
df = pd.read_excel(path_to_the_file)
index = df[df.duplicated(['col_2'])].values[0][0]
df2 = df.iloc[:index - 2]
print(df2)
output:
  col_1  col_2
0 1 number 2.37
1 2 number 2.8
2 3 number 3.4
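For comparison, here is a more defensive sketch of the same cut-off idea (my own variant, not the answer above; it assumes "duplicated" means a col_2 value that occurs more than once anywhere in the column):
is_repeated = df['col_2'].duplicated(keep=False)   # True for every row whose col_2 value occurs more than once
if is_repeated.any():
    first_bad = int(is_repeated.values.argmax())   # position of the first row involved in a duplication
    df = df.iloc[:first_bad]                       # keep only the rows before it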
The case you are asking about (independent of which column is which or what "duplicated data" means exactly) needs a loop, because slicing is not an iterative operation: what you describe depends on where duplicated data is first found (the "following row"). So the slicing should be applied once you have located the point at which the data is duplicated (if it is at all).
The str.match() function returns a boolean Series telling you, row by row, whether the pattern matches. Summing that Series counts the matches (the booleans are coerced to int). If there is more than one match (the current row plus at least one other), and only then, you can slice the df up to that point. The first row cannot be a duplicate (it is the first and needs no check).
The specific whereabouts of what you are trying to achieve you can work out from this example.
import pandas as pd

for idx, row in df[1:].iterrows():
    if df.col_2.str.match(df.loc[idx].col_2).sum() > 1:
        print("found at", idx)
        df = df[:idx]
        break
Bear in mind that this is just an out-of-the-box example; rather than relying on break, you should explicitly define the stopping behaviour of your code.

Finding rows with highest means in dataframe

I am trying to find the rows, in a very large dataframe, with the highest mean.
Reason: I scan something with laser trackers and use a "higher" point as a reference for where the scan starts. I am trying to find that placed object throughout my data.
I have calculated the mean of each row with:
base = df.mean(axis=1)
base.columns = ['index','Mean']
Here is an example of the mean for each row:
0 4.407498
1 4.463597
2 4.611886
3 4.710751
4 4.742491
5 4.580945
This seems to work fine, except that it adds an index column, and gives out columns with an index of type float64.
I then tried this to locate the rows with highest mean:
moy = base.loc[base.reset_index().groupby(['index'])['Mean'].idxmax()]
This gives out this:
index Mean
0 0 4.407498
1 1 4.463597
2 2 4.611886
3 3 4.710751
4 4 4.742491
5 5 4.580945
But it only re-indexes (I now have 3 columns instead of two) and does nothing else. It still shows all rows.
Here is one way without using groupby:
moy = base.sort_values('Mean').tail(1)
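If all you need is the location of the row with the highest mean, a minimal alternative sketch (assuming df is the original frame of scan values) is idxmax on the row means, or nlargest for the top few:
row_means = df.mean(axis=1)      # one mean per row, original index preserved
best_row = row_means.idxmax()    # index label of the row with the highest mean
top5 = row_means.nlargest(5)     # the five highest row means, with their labels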
It looks as though your data is a string or a single column with a space between your two numbers. I suggest splitting the column into two and/or using something similar to the code below to set the index to your specific column of interest.
import pandas as pd
df = pd.read_csv('testdata.txt', names=["Index", "Mean"], delimiter=r"\s+")
df = df.set_index("Index")
print(df)

Filling missing time values in a multi-indexed dataframe

Problem and what I want
I have a data file that comprises time series read asynchronously from multiple sensors. Basically for every data element in my file, I have a sensor ID and time at which it was read, but I do not always have all sensors for every time, and read times may not be evenly spaced. Something like:
ID,time,data
0,0,1
1,0,2
2,0,3
0,1,4
2,1,5 # skip some sensors for some time steps
0,2,6
2,2,7
2,3,8
1,5,9 # skip some time steps
2,5,10
Important note: the actual time column is of datetime type.
What I want is to be able to zero-order hold (forward fill) values for every sensor for any time steps where that sensor does not exist, and either set to zero or back fill any sensors that are not read at the earliest time steps. What I want is a dataframe that looks like it was read from:
ID,time,data
0,0,1
1,0,2
2,0,3
0,1,4
1,1,2 # ID 1 hold value from time step 0
2,1,5
0,2,6
1,2,2 # ID 1 still holding
2,2,7
0,3,6 # ID 0 holding
1,3,2 # ID 1 still holding
2,3,8
0,5,6 # ID 0 still holding, can skip totally missing time steps
1,5,9 # ID 1 finally updates
2,5,10
Pandas attempts so far
I initialize my dataframe and set my indices:
df = pd.read_csv(filename, dtype=int)
df.set_index(['ID', 'time'], inplace=True)
I try to mess with things like:
filled = df.reindex(method='ffill')
or the like with various values passed to the index keyword argument like df.index, ['time'], etc. This always either throws an error because I passed an invalid keyword argument, or does nothing visible to the dataframe. I think it is not recognizing that the data I am looking for is "missing".
I also tried:
df.update(df.groupby(level=0).ffill())
or level=1 based on Multi-Indexed fillna in Pandas, but I get no visible change to the dataframe again, I think because I don't have anything currently where I want my values to go.
Numpy attempt so far
I have had some luck with numpy and non-integer indexing using something like:
data = [np.array(df.loc[level].data) for level in df.index.levels[0]]
shapes = [arr.shape for arr in data]
print(shapes)
# [(3,), (2,), (5,)]
data = [np.array([arr[i] for i in np.linspace(0, arr.shape[0]-1, num=max(shapes)[0])]) for arr in data]
print([arr.shape for arr in data])
# [(5,), (5,), (5,)]
But this has two problems:
It takes me out of the pandas world, and I now have to manually maintain my sensor IDs, time index, etc. along with my feature vector (the actual data column is not just one column but a ton of values from a sensor suite).
Given the number of columns and the size of the actual dataset, this is going to be clunky and inelegant to implement on my real example. I would prefer a way of doing it in pandas.
The application
Ultimately this is just the data-cleaning step for training a recurrent neural network, where for each time step I will need to feed a feature vector that always has the same structure (one set of measurements for each sensor ID for each time step).
Thank you for your help!
Here is one way, by using reindex and category:
df.time = pd.Categorical(df.time, categories=[0, 1, 2, 3, 4, 5])  # astype('category', categories=...) in older pandas
new_df = df.groupby('time', as_index=False).apply(lambda x: x.set_index('ID').reindex([0, 1, 2])).reset_index()
new_df['data'] = new_df.groupby('ID')['data'].ffill()
new_df.drop(columns='time').rename(columns={'level_0': 'time'})
Out[311]:
time ID data
0 0 0 1.0
1 0 1 2.0
2 0 2 3.0
3 1 0 4.0
4 1 1 2.0
5 1 2 5.0
6 2 0 6.0
7 2 1 2.0
8 2 2 7.0
9 3 0 6.0
10 3 1 2.0
11 3 2 8.0
12 4 0 6.0
13 4 1 2.0
14 4 2 8.0
15 5 0 6.0
16 5 1 9.0
17 5 2 10.0
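An alternative sketch of the same idea (my own variant, assuming the raw frame still has ID, time and data as ordinary columns and that time runs over 0-5 as in the toy data): pivot to one column per sensor, fill, then stack back.
wide = df.pivot(index='time', columns='ID', values='data')   # one column per sensor
wide = wide.reindex(range(6))                                # insert completely missing time steps
wide = wide.ffill().bfill()                                  # hold values forward, back-fill the start
long_again = wide.stack().rename('data').reset_index()       # back to time/ID/data rows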
You can keep a dictionary of the last reading for each sensor. You'll have to pick some initial value; the most logical choice is probably to back-fill the earliest reading to earlier times. Once you've populated your last_reading dictionary, you can just sort all the readings by time, update the dictionary for each reading, and then fill in rows according to the dictionary. So after you have your last_reading dictionary initialized:
last_time = readings[0]['time']
for reading in readings:
    if reading['time'] > last_time:
        for ID in ID_list:
            df.loc[last_time, ID] = last_reading[ID]
        last_time = reading['time']
    last_reading[reading['ID']] = reading['data']
# the above loop doesn't write out the final time step,
# so you'll have to handle that separately
for ID in ID_list:
    df.loc[last_time, ID] = last_reading[ID]
This assumes that you have only one reading for each time/sensor pair, and that readings is a list of dictionaries (with 'time', 'ID' and 'data' keys) sorted by time. It also assumes that df has the different sensors as columns and the different times as index. Adjust the code as necessary if otherwise. You can also probably optimize it a bit more by updating a whole row at once instead of using a for loop, but I didn't want to deal with making sure I had the pandas syntax right.
Looking at the application, though, you might want each cell in the dataframe to hold not a number but a tuple of the last value and the time it was read, so replace last_reading[reading['ID']] = reading['data'] with last_reading[reading['ID']] = [reading['data'], reading['time']]. Your neural net can then decide how to weight data based on how old it is.
I got this to work with the following, which I think is pretty general for any case like this where the time index you want to fill is the second level of a two-level MultiIndex:
# Remove duplicate time indices (happens some in the dataset, pandas freaks out).
df = df[~df.index.duplicated(keep='first')]
# Unstack the dataframe and fill values per serial number forward, backward.
df = df.unstack(level=0)
df.update(df.ffill()) # first ZOH forward
df.update(df.bfill()) # now back fill values that are not seen at the beginning
# Restack the dataframe and re-order the indices.
df = df.stack(level=1)
df = df.swaplevel()
This gets me what I want, although I would love to be able to keep the duplicate time entries if anybody knows of a good way to do this.
You could also use df.update(df.fillna(0)) instead of backfilling if starting unseen values at zero is preferable for a particular application.
I put the above code block in a function called clean_df that takes the dataframe as argument and returns the cleaned dataframe.
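For completeness, a minimal sketch of such a wrapper (clean_df is the name mentioned above; the body is simply the steps from the block, so treat it as an outline rather than a tested function):
def clean_df(df):
    # forward/backward fill per-sensor gaps in an (ID, time) multi-indexed dataframe
    df = df[~df.index.duplicated(keep='first')]  # drop duplicate (ID, time) entries
    df = df.unstack(level=0)                     # one column block per sensor ID
    df.update(df.ffill())                        # zero-order hold forward
    df.update(df.bfill())                        # back-fill values unseen at the start
    df = df.stack(level=1)                       # back to the original layout
    return df.swaplevel()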

python pandas - map using 2 columns as reference

I have 2 txt files I'd like to read into python: 1) A map file, 2) A data file. I'd like to have a lookup table or dictionary read the values from TWO COLUMNS of one, and determine which value to put in the 3rd column using something like the pandas.map function. The real map file is ~700,000 lines, and the real data file is ~10 million lines.
Toy Dataframe (or I could recreate as a dictionary) - Map
Chr Position Name
1 1000 SNPA
1 2000 SNPB
2 1000 SNPC
2 2000 SNPD
Toy Dataframe - Data File
Chr Position
1 1000
1 2000
2 1000
2 2001
Resulting final table:
Chr Position Name
1 1000 SNPA
1 2000 SNPB
2 1000 SNPC
2 2001 NaN
I found several questions about this with only one column lookup: Adding a new pandas column with mapped value from a dictionary. But can't seem to find a way to use 2 columns. I'm also open to other packages that may handle genomic data.
As a bonus second question, it'd also be nice if there was a way to map the 3rd column if it was within a certain amount of the mapped value. In other words, row 4 of the resulting table above would map to SNPD, as it's only 1 away. But I'd be happy to just get the solution for the above.
I would do it this way:
Read your map data so that the first two columns become the index:
dfm = pd.read_csv('/path/to/map.csv', delim_whitespace=True, index_col=[0,1])
change delim_whitespace=True to sep=',' if you have , as a delimiter
read up your DF (setting the same index):
df = pd.read_csv('/path/to/data.csv', delim_whitespace=True, index_col=[0,1])
join your DFs:
df.join(dfm)
Output:
In [147]: df.join(dfm)
Out[147]:
Name
Chr Position
1 1000 SNPA
2000 SNPB
2 1000 SNPC
2001 NaN
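An alternative sketch (my own, not part of the answer above) that avoids setting an index: read Chr and Position as ordinary columns in both frames and do a left merge on the pair.
dfm = pd.read_csv('/path/to/map.csv', delim_whitespace=True)
df = pd.read_csv('/path/to/data.csv', delim_whitespace=True)
result = df.merge(dfm, on=['Chr', 'Position'], how='left')  # unmatched rows get NaN in Name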
PS for the bonus question try something like this
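For that bonus question, one possibility (purely my own sketch, not necessarily what the answer above had in mind) is pandas.merge_asof, which matches on the nearest Position within a tolerance while grouping by Chr; both frames must be sorted by Position and carry Chr and Position as regular columns.
nearest = pd.merge_asof(
    df.sort_values('Position'),
    dfm.sort_values('Position'),
    on='Position', by='Chr',
    direction='nearest', tolerance=1)  # row (2, 2001) now picks up SNPD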

Pandas.DataFrame - find the oldest date for which a value is available

I have a pandas.DataFrame object containing 2 time series. One series is much shorter than the other.
I want to determine the earliest date for which data is available in the shorter series, and remove the data in the 2 columns before that date.
What is the most pythonic way to do that?
(I apologize that I don't really follow the SO guideline for submitting questions)
Here is a fragment of my dataframe:
osr go
Date
1990-08-17 NaN 239.75
1990-08-20 NaN 251.50
1990-08-21 352.00 265.00
1990-08-22 353.25 274.25
1990-08-23 351.75 290.25
In this case, I want to get rid of all rows before 1990-08-21 (I should add that there may be NaNs in one of the columns for more recent dates).
You can reverse the series with df['osr'][::-1], use idxmax on isnull() to find the last date that is still NaN, and then take the subset of df after that date:
print df
# osr go
#Date
#1990-08-17 NaN 239.75
#1990-08-20 NaN 251.50
#1990-08-21 352.00 265.00
#1990-08-22 353.25 274.25
#1990-08-23 351.75 290.25
s = df['osr'][::-1]
print s
#Date
#1990-08-23 351.75
#1990-08-22 353.25
#1990-08-21 352.00
#1990-08-20 NaN
#1990-08-17 NaN
#Name: osr, dtype: float64
maxnull = s.isnull().idxmax()
print maxnull
#1990-08-20 00:00:00
print df[df.index > maxnull]
# osr go
#Date
#1990-08-21 352.00 265.00
#1990-08-22 353.25 274.25
#1990-08-23 351.75 290.25
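A shorter route to the same subset (a sketch of my own, assuming osr is the shorter column as in the fragment above) is first_valid_index, which returns the first index label with a non-NaN value:
start = df['osr'].first_valid_index()  # 1990-08-21 in the fragment above
df = df.loc[start:]                    # keep that date and everything after it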
EDIT: New answer based upon comments/edits
It sounds like the data is sequential and once you have lines that don't have data you want to throw them out. This can be done easily with dropna.
df = df.dropna()
This answer assumes that once you are past the bad rows, they stay good. It also works if you don't care about dropping rows in the middle; it depends on how sequential you need to be. If the data needs to be sequential and your input is well formed, jezrael's answer is good.
Original answer
You haven't given much here by way of structure in your dataframe, so I am going to make assumptions. I'll assume you have many columns, two of which, time_series_1 and time_series_2, are the ones you referred to in your question, and that this is all stored in df.
First we can find the shorter series (the one with fewer non-NaN values) by just using
shorter_col = df['time_series_1'] if df['time_series_1'].count() < df['time_series_2'].count() else df['time_series_2']
Now we want the first date for which that series has data
remove_date = shorter_col.first_valid_index()
Now we want to remove the rows before that date
df = df.loc[remove_date:]
