I am trying to use get_loc to locate the current date and then return the 10 rows above the current date from the data, but I keep getting a KeyError.
Here is my dataframe, posting_df5:
Posting_date rooms Origin Rooms booked ADR Revenue
0 2019-03-31 1 1 1 156.000000 156.000000
1 2019-04-01 13 13 13 160.720577 2089.367500
2 2019-04-02 15 15 15 167.409167 2511.137500
3 2019-04-03 21 21 21 166.967405 3506.315500
4 2019-04-04 37 37 37 162.384909 6008.241643
5 2019-04-05 52 52 52 202.150721 10511.837476
6 2019-04-06 49 49 49 199.611887 9780.982476
7 2019-04-07 44 44 44 182.233171 8018.259527
8 2019-04-08 50 50 50 187.228192 9361.409623
9 2019-04-09 37 37 37 177.654422 6573.213623
10 2019-04-10 31 31 31 184.138208 5708.284456
I tried doing the following:
idx = posting_df7.index.get_loc('2019-04-05')
posting_df7 = posting_df5.iloc[idx - 5 : idx + 5]
But I received the following error:
indexer = self._get_level_indexer(key, level=level)
File "/usr/local/lib/python3.7/site-packages/pandas/core/indexes/multi.py", line 2939, in _get_level_indexer
code = level_index.get_loc(key)
File "/usr/local/lib/python3.7/site-packages/pandas/core/indexes/base.py", line 2899, in get_loc
return self._engine.get_loc(self._maybe_cast_indexer(key))
File "pandas/_libs/index.pyx", line 107, in pandas._libs.index.IndexEngine.get_loc
File "pandas/_libs/index.pyx", line 128, in pandas._libs.index.IndexEngine.get_loc
File "pandas/_libs/index_class_helper.pxi", line 91, in pandas._libs.index.Int64Engine._check_type
KeyError: '2019-04-05'
So I then tried setting Posting_date as the index before using get_loc, but it didn't work either:
rooms Origin Rooms booked ADR Revenue
Posting_date
0 2019-03-31 1 1 1 156.000000 156.000000
1 2019-04-01 13 13 13 160.720577 2089.367500
2 2019-04-02 15 15 15 167.409167 2511.137500
3 2019-04-03 21 21 21 166.967405 3506.315500
4 2019-04-04 37 37 37 162.384909 6008.241643
5 2019-04-05 52 52 52 202.150721 10511.837476
6 2019-04-06 49 49 49 199.611887 9780.982476
7 2019-04-07 44 44 44 182.233171 8018.259527
8 2019-04-08 50 50 50 187.228192 9361.409623
9 2019-04-09 37 37 37 177.654422 6573.213623
Then I used the same get_loc function, but the same error appeared. How can I select the rows based on the date required?
Thanks
Here is a different approach.
Because iloc and get_loc can be tricky, this solution uses boolean masking to return the rows relative to a given date, then uses the head() function to return the number of rows you require.
import pandas as pd
PATH = '/home/user/Desktop/so/room_rev.csv'
# Read in data from a CSV.
df = pd.read_csv(PATH)
# Convert the date column to a `datetime` format.
df['Posting_date'] = pd.to_datetime(df['Posting_date'], format='%Y-%m-%d')
# Sort based on date.
df = df.sort_values('Posting_date')
Original Dataset:
Posting_date rooms Origin Rooms booked ADR Revenue
0 2019-03-31 1 1 1 156.000000 156.000000
1 2019-04-01 13 13 13 160.720577 2089.367500
2 2019-04-02 15 15 15 167.409167 2511.137500
3 2019-04-03 21 21 21 166.967405 3506.315500
4 2019-04-04 37 37 37 162.384909 6008.241643
5 2019-04-05 52 52 52 202.150721 10511.837476
6 2019-04-06 49 49 49 199.611887 9780.982476
7 2019-04-07 44 44 44 182.233171 8018.259527
8 2019-04-08 50 50 50 187.228192 9361.409623
9 2019-04-09 37 37 37 177.654422 6573.213623
10 2019-04-10 31 31 31 184.138208 5708.284456
Solution:
Replace the value in the head() function with the number of rows you want to return. Note: There is also a tail() function for the inverse.
df[df['Posting_date'] > '2019-04-05'].head(3)
Output:
Posting_date rooms Origin Rooms booked ADR Revenue
6 2019-04-06 49 49 49 199.611887 9780.982476
7 2019-04-07 44 44 44 182.233171 8018.259527
8 2019-04-08 50 50 50 187.228192 9361.409623
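Since the original question asks for the rows before a given date, the same masking pattern works in the other direction with < and tail() (a small sketch reusing the df above):
# 10 rows immediately before 2019-04-05
df[df['Posting_date'] < '2019-04-05'].tail(10)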
I have a text file which has a number of integer values like this.
20180701 20180707 52 11 1 2 4 1 0 0 10 7 1 3 1 0 4 5 2
20180708 20180714 266 8 19 3 2 9 7 25 20 17 12 9 9 27 34 54 11
20180715 20180721 654 52 34 31 20 16 12 25 84 31 38 37 38 69 66 87 14
20180722 201807281017 110 72 46 52 29 29 22 204 41 46 51 57 67 82 92 17
20180729 201808041106 276 37 11 87 20 10 8 284 54 54 72 38 49 41 53 12
20180805 20180811 624 78 19 15 55 16 8 9 172 15 31 35 38 47 29 36 21
20180812 20180818 488 63 17 7 26 10 9 7 116 17 14 39 31 34 27 64 7
20180819 20180825 91 4 7 0 4 5 1 3 16 3 4 5 10 10 7 11 1
20180826 20180901 49 2 2 1 0 4 0 1 2 0 1 4 8 2 6 6 10
I have to make a file by merging several files like this, but you can see a problem with this data.
In lines 4 and 5, the first values, 1017 and 1106, sit right next to the period index and cause a problem.
Whenever I try to read these two lines, I get the following result.
The first value after the index columns is not recognized as a value itself; it gets merged into the index.
In [14]: fw.iloc[80,:]
Out[14]:
3 72.0
4 46.0
5 52.0
6 29.0
7 29.0
8 22.0
9 204.0
10 41.0
11 46.0
12 51.0
13 57.0
14 67.0
15 82.0
16 92.0
17 17.0
18 NaN
Name: (20180722, 201807281017), dtype: float64
I tried to correct it with indexing but failed.
The desired result is:
In [14]: fw.iloc[80,:]
Out[14]:
2 1017.0
3 110.0
4 72.0
5 46.0
6 52.0
7 29.0
8 29.0
9 22.0
10 204.0
11 41.0
12 46.0
13 51.0
14 57.0
15 67.0
16 82.0
17 92.0
18 17.0
Name: (20180722, 201807281017), dtype: float64
How can I solve this problem?
Additionally, this is the code I used to read the file:
fw = pd.read_csv('warm_patient.txt', index_col=[0,1], header=None, delim_whitespace=True)
A better fit for this would be pandas.read_fwf. For your example:
df = pd.read_fwf(filename, index_col=[0,1], header=None, widths=2*[10]+17*[4])
I don't know if the column widths can be inferred for all your data or need to be hardcoded.
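If the widths do need to be hardcoded, one way to sanity-check them is to read the file and inspect the result (a sketch, reusing the call above with the file name from the question):
import pandas as pd
df = pd.read_fwf('warm_patient.txt', index_col=[0, 1], header=None,
                 widths=2 * [10] + 17 * [4])
print(df.head())  # verify that both index levels and the first value column parsed as expected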
One possibility would be to construct the dataframe manually; this way we can parse the text by splitting the values every 4 characters.
from textwrap import wrap
import pandas as pd

def read_file(f_name):
    data = []
    with open(f_name) as f:
        for line in f.readlines():
            idx1 = line[0:8]
            idx2 = line[10:18]
            points = map(lambda x: int(x.replace(" ", "")), wrap(line.rstrip()[18:], 4))
            data.append([idx1, idx2, *points])
    return pd.DataFrame(data).set_index([0, 1])
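For example (a sketch, assuming the file name from the question and that the slice offsets above match your file's actual layout):
fw = read_file('warm_patient.txt')
print(fw.head())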
It could be made more efficient (particularly if this is a very long text file), but here's one solution.
fw = pd.read_csv('test.txt', header=None, delim_whitespace=True)
# Rows whose last column is NaN are the ones where the second index and the first value ran together.
for i in fw[pd.isna(fw.iloc[:, -1])].index:
    num_str = str(fw.iat[i, 1])
    # Split e.g. 201807281017 into the 8-digit index (20180728) and the 4-digit value (1017).
    a, b = map(int, [num_str[:-4], num_str[-4:]])
    # Shift the remaining values one column to the right and rebuild the first three fields.
    fw.iloc[i, 3:] = fw.iloc[i, 2:-1]
    fw.iloc[i, :3] = [fw.iat[i, 0], a, b]
fw = fw.set_index([0, 1])
The result of print(fw) from there is
2 3 4 5 6 7 8 9 10 11 12 13 14 15 \
0 1
20180701 20180707 52 11 1 2 4 1 0 0 10 7 1 3 1 0
20180708 20180714 266 8 19 3 2 9 7 25 20 17 12 9 9 27
20180715 20180721 654 52 34 31 20 16 12 25 84 31 38 37 38 69
20180722 20180728 1017 110 72 46 52 29 29 22 204 41 46 51 57 67
20180729 20180804 1106 276 37 11 87 20 10 8 284 54 54 72 38 49
20180805 20180811 624 78 19 15 55 16 8 9 172 15 31 35 38 47
20180812 20180818 488 63 17 7 26 10 9 7 116 17 14 39 31 34
20180819 20180825 91 4 7 0 4 5 1 3 16 3 4 5 10 10
20180826 20180901 49 2 2 1 0 4 0 1 2 0 1 4 8 2
16 17 18
0 1
20180701 20180707 4 5 2.0
20180708 20180714 34 54 11.0
20180715 20180721 66 87 14.0
20180722 20180728 82 92 17.0
20180729 20180804 41 53 12.0
20180805 20180811 29 36 21.0
20180812 20180818 27 64 7.0
20180819 20180825 7 11 1.0
20180826 20180901 6 6 10.0
Here's the result of the print after applying your initial solution of fw = pd.read_csv('test.txt', index_col=[0,1], header=None, delim_whitespace=True) for comparison.
2 3 4 5 6 7 8 9 10 11 12 13 14 \
0 1
20180701 20180707 52 11 1 2 4 1 0 0 10 7 1 3 1
20180708 20180714 266 8 19 3 2 9 7 25 20 17 12 9 9
20180715 20180721 654 52 34 31 20 16 12 25 84 31 38 37 38
20180722 201807281017 110 72 46 52 29 29 22 204 41 46 51 57 67
20180729 201808041106 276 37 11 87 20 10 8 284 54 54 72 38 49
20180805 20180811 624 78 19 15 55 16 8 9 172 15 31 35 38
20180812 20180818 488 63 17 7 26 10 9 7 116 17 14 39 31
20180819 20180825 91 4 7 0 4 5 1 3 16 3 4 5 10
20180826 20180901 49 2 2 1 0 4 0 1 2 0 1 4 8
15 16 17 18
0 1
20180701 20180707 0 4 5 2.0
20180708 20180714 27 34 54 11.0
20180715 20180721 69 66 87 14.0
20180722 201807281017 82 92 17 NaN
20180729 201808041106 41 53 12 NaN
20180805 20180811 47 29 36 21.0
20180812 20180818 34 27 64 7.0
20180819 20180825 10 7 11 1.0
20180826 20180901 2 6 6 10.0
Hello, I have a pandas dataframe that I want to clean. Here is an example:
IDBILL  IDBUYER  BILL  DATE
001     768787   45    1897-07-24
002     768787   30    1897-07-24
005     786545   45    1897-08-19
008     657676   89    1989-09-23
009     657676   42    1989-09-23
010     657676   18    1989-09-23
012     657676   51    1990-03-10
016     892354   73    1990-03-10
018     892354   48    1765-02-14
I want to delete the highest bills (and keep the lowest) when the bills are made on the same day, by the same IDBUYER, and their bill IDs follow each other.
To get this:
IDBILL  IDBUYER  BILL  DATE
002     768787   30    1897-07-24
005     786545   45    1897-08-19
010     657676   18    1989-09-23
012     657676   51    1990-03-10
016     892354   73    1990-03-10
018     892354   48    1765-02-14
Thank you in advance
First, convert the 'DATE' column to datetime dtype by using the to_datetime() method:
df['DATE'] = pd.to_datetime(df['DATE'])
Then try the groupby() method:
result=df.groupby(['IDBUYER',df['DATE'].dt.day],as_index=False)[['IDBILL','BILL','DATE']].min()
OR
result=df.groupby(['DATE', 'IDBUYER'], sort=False)[['IDBILL','BILL']].min().reset_index()
Output of result:
IDBUYER IDBILL BILL DATE
0 657676 12 51 1990-03-10
1 657676 8 18 1989-09-23
2 768787 1 30 1897-07-24
3 786545 5 45 1897-08-19
4 892354 16 73 1990-03-10
5 892354 18 48 1765-02-14
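If you also want the result back in the original column order and sorted by IDBILL, as in the desired output, a small follow-up (sketch):
result = result[['IDBILL', 'IDBUYER', 'BILL', 'DATE']].sort_values('IDBILL')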
You could try this to keep only the row with the minimum BILL within each run of consecutive IDBILL values (a follow-up group):
df['follow_up'] = df['IDBILL'].ne(df['IDBILL'].shift()+1).cumsum()
m = df.groupby(['IDBUYER', 'follow_up', df['DATE']])['BILL'].idxmin()
df.loc[sorted(m)]
# IDBILL IDBUYER BILL DATE follow_up
# 1 2 768787 30 1897-07-24 1
# 2 5 786545 45 1897-08-19 2
# 5 10 657676 18 1989-09-23 3
# 6 12 657676 51 1990-03-10 4
# 7 16 892354 73 1990-03-10 5
# 8 18 892354 48 1765-02-14 6
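If you don't want the helper column in the final result, you can drop it afterwards (sketch):
df.loc[sorted(m)].drop(columns='follow_up')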
I have a timeseries dataframe which has over 11000 observations. Unfortunately the datetime column got corrupted when stored in .csv format. The date portion (Y/M/D) went missing and I am left with only the time as shown below in the first 50 observations of the dataframe.
I know that the same values in the sequence of left out Time portion of the corrupted date_time column correspond to a specific date. For example all observations with the date_time value "10:27.9" correspond to a specific date and all observations with the value "45:05.8" correspond to some other date (here previous date).
Given this, how can I reconstruct the original datetime column (in Y/M/D H:M:S format), assuming the first set of rows belongs to 15 April 2021, the second set to 14 April 2021, and so on, going back one day for each new set? As I am not sure what 10:27.9 represents (I guess it is in S:M:H format), it does not matter what values I get for the H:M:S portion as long as the date is correct.
Appreciate inputs.
D Date_Time
0 349 10:27.9
1 20 10:27.9
2 66 10:27.9
3 29 10:27.9
4 14 10:27.9
5 112 10:27.9
6 104 10:27.9
7 22 10:27.9
8 135 10:27.9
9 33 10:27.9
10 81 10:27.9
11 53 10:27.9
12 2 10:27.9
13 9 10:27.9
14 18 10:27.9
15 24 10:27.9
16 50 10:27.9
17 1 10:27.9
18 28 10:27.9
19 4 10:27.9
20 9 10:27.9
21 11 10:27.9
22 5 10:27.9
23 1 10:27.9
24 0 10:27.9
25 3 10:27.9
26 0 10:27.9
27 0 10:27.9
28 0 10:27.9
29 0 10:27.9
30 0 10:27.9
31 0 10:27.9
32 0 10:27.9
33 0 10:27.9
34 2 10:27.9
35 0 10:27.9
36 278 45:05.8
37 22 45:05.8
38 38 45:05.8
39 25 45:05.8
40 18 45:05.8
41 104 45:05.8
42 67 45:05.8
43 24 45:05.8
44 120 45:05.8
45 29 45:05.8
46 73 45:05.8
47 51 45:05.8
48 3 45:05.8
49 8 45:05.8
50 18 45:05.8
Create a reverse date_range() starting at 2021-04-15 and then map() the current Date_Time values.
Note that this does not preserve the times, but that was acceptable if I understood the comments correctly.
keys = df.Date_Time.unique()
values = pd.date_range('2021-04-15', periods=keys.size, freq='-1D')
mapping = dict(zip(keys, values))
df.Date_Time = df.Date_Time.map(mapping)
# D Date_Time
# 0 349 2021-04-15
# 1 20 2021-04-15
# 2 66 2021-04-15
# ...
# 48 3 2021-04-14
# 49 8 2021-04-14
# 50 18 2021-04-14
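An equivalent way to build the mapping from the original Date_Time strings, assuming the groups appear in reverse chronological order as described, is pd.factorize (a sketch):
codes, _ = pd.factorize(df.Date_Time)  # 0 for the first group, 1 for the second, ...
df.Date_Time = pd.Timestamp('2021-04-15') - pd.to_timedelta(codes, unit='D')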
I need to create two new Pandas columns using the logic and value from the previous row.
I have the following data:
Day Vol Price Income Outgoing
1 499 75
2 3233 90
3 1812 70
4 2407 97
5 3474 82
6 1057 53
7 2031 68
8 304 78
9 1339 62
10 2847 57
11 3767 93
12 1096 83
13 3899 88
14 4090 63
15 3249 52
16 1478 52
17 4926 75
18 1209 52
19 1982 90
20 4499 93
My challenge is to come up with logic where both the Income and Outgoing columns (which are currently empty) receive the value of (Vol * Price).
The Income column should carry this value when the previous day's "Price" value is lower than the present one. The Outgoing column should carry this value when the previous day's "Price" value is higher than the present one. The rest of the Income and Outgoing cells should just have NaNs. If the Price is unchanged, then that day's row is to be dropped.
The entire logic should start with the (n + 1)th day: the first row should be skipped and the logic should apply from row 2 onwards.
I have tried using shift in my code, for example:
if sample_data['Price'].shift(1) < sample_data['Price'].shift(2):
    sample_data['Income'] = sample_data['Vol'] * sample_data['Price']
else:
    sample_data['Outgoing'] = sample_data['Vol'] * sample_data['Price']
But it isn't working.
I feel there must be a simpler, more comprehensive way to go about this. Could someone please help?
Update (The final output should look like this):
For day 16, the data is deleted because we have two identical prices for days 15 and 16.
I'd calculate the product and the mask separately, and then update the cols:
In [11]: vol_price = df["Vol"] * df["Price"]
In [12]: incoming = df["Price"].diff() < 0
In [13]: df.loc[incoming, "Income"] = vol_price
In [14]: df.loc[~incoming, "Outgoing"] = vol_price
In [15]: df
Out[15]:
Day Vol Price Income Outgoing
0 1 499 75 NaN 37425.0
1 2 3233 90 NaN 290970.0
2 3 1812 70 126840.0 NaN
3 4 2407 97 NaN 233479.0
4 5 3474 82 284868.0 NaN
5 6 1057 53 56021.0 NaN
6 7 2031 68 NaN 138108.0
7 8 304 78 NaN 23712.0
8 9 1339 62 83018.0 NaN
9 10 2847 57 162279.0 NaN
10 11 3767 93 NaN 350331.0
11 12 1096 83 90968.0 NaN
12 13 3899 88 NaN 343112.0
13 14 4090 63 257670.0 NaN
14 15 3249 52 168948.0 NaN
15 16 1478 52 NaN 76856.0
16 17 4926 75 NaN 369450.0
17 18 1209 52 62868.0 NaN
18 19 1982 90 NaN 178380.0
19 20 4499 93 NaN 418407.0
Or, if it should be the other way around:
In [21]: incoming = df["Price"].diff() > 0
In [22]: df.loc[incoming, "Income"] = vol_price
In [23]: df.loc[~incoming, "Outgoing"] = vol_price
In [24]: df
Out[24]:
Day Vol Price Income Outgoing
0 1 499 75 NaN 37425.0
1 2 3233 90 290970.0 NaN
2 3 1812 70 NaN 126840.0
3 4 2407 97 233479.0 NaN
4 5 3474 82 NaN 284868.0
5 6 1057 53 NaN 56021.0
6 7 2031 68 138108.0 NaN
7 8 304 78 23712.0 NaN
8 9 1339 62 NaN 83018.0
9 10 2847 57 NaN 162279.0
10 11 3767 93 350331.0 NaN
11 12 1096 83 NaN 90968.0
12 13 3899 88 343112.0 NaN
13 14 4090 63 NaN 257670.0
14 15 3249 52 NaN 168948.0
15 16 1478 52 NaN 76856.0
16 17 4926 75 369450.0 NaN
17 18 1209 52 NaN 62868.0
18 19 1982 90 178380.0 NaN
19 20 4499 93 418407.0 NaN
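The question also asks to drop a day whose price is unchanged from the previous day (day 16 here); a sketch for that, filtering before building the masks:
unchanged = df["Price"].diff() == 0
df = df[~unchanged].copy()  # removes day 16, where the price repeats
vol_price = df["Vol"] * df["Price"]
incoming = df["Price"].diff() > 0
df.loc[incoming, "Income"] = vol_price
df.loc[~incoming, "Outgoing"] = vol_price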
I'm working with the following dataset with hourly counts (df):
The dataframe has 8784 rows (for the year 2016, hourly).
I'd like to see if there are daily trends (e.g. whether there is an increase in the morning hours). For this I'd like to create a plot that has the hour of the day (from 0 to 24) on the x-axis and the number of cyclists on the y-axis (something like the picture below from http://ofdataandscience.blogspot.co.uk/2013/03/capital-bikeshare-time-series-clustering.html).
I experimented with different ways of pivot, resample and set_index and plotting with matplotlib, without success. In other words, I couldn't find a way to sum up every observation at a certain hour and then plot those sums for each weekday.
Any ideas how to do this? Thanks in advance!
I think you can use groupby by hour and weekday, aggregate with sum (or maybe mean), and finally reshape with unstack and plot with DataFrame.plot:
df.groupby([df['Date'].dt.hour, 'weekday'])['Cyclists'].sum().unstack().plot()
Solution with pivot_table:
df1 = df.pivot_table(index=df['Date'].dt.hour,
                     columns='weekday',
                     values='Cyclists',
                     aggfunc='sum')
df1.plot()
Sample:
import numpy as np
import pandas as pd

N = 200
np.random.seed(100)
rng = pd.date_range('2016-01-01', periods=N, freq='H')
df = pd.DataFrame({'Date': rng, 'Cyclists': np.random.randint(100, size=N)})
df['weekday'] = df['Date'].dt.weekday_name
print (df.head())
Cyclists Date weekday
0 8 2016-01-01 00:00:00 Friday
1 24 2016-01-01 01:00:00 Friday
2 67 2016-01-01 02:00:00 Friday
3 87 2016-01-01 03:00:00 Friday
4 79 2016-01-01 04:00:00 Friday
print (df.groupby([df['Date'].dt.hour, 'weekday'])['Cyclists'].sum().unstack())
weekday Friday Monday Saturday Sunday Thursday Tuesday Wednesday
Date
0 102 91 120 53 95 86 21
1 102 83 100 27 20 94 25
2 121 53 105 56 10 98 54
3 164 78 54 30 8 42 6
4 163 0 43 48 89 84 37
5 49 13 150 47 72 95 58
6 24 57 32 39 30 76 39
7 127 76 128 38 12 33 94
8 72 3 59 44 18 58 51
9 138 70 67 18 93 42 30
10 77 3 7 64 92 22 66
11 159 84 49 56 44 0 24
12 156 79 47 34 57 55 55
13 42 10 65 53 0 98 17
14 116 87 61 74 73 19 45
15 106 60 14 17 54 53 89
16 22 3 55 72 92 68 45
17 154 48 71 13 66 62 35
18 60 52 80 30 16 50 16
19 79 43 2 17 5 68 12
20 11 36 94 53 51 35 86
21 180 5 19 68 90 23 82
22 103 71 98 50 34 9 67
23 92 38 63 91 67 48 92
df.groupby([df['Date'].dt.hour, 'weekday'])['Cyclists'].sum().unstack().plot()
EDIT:
You can also convert weekday to categorical for correct sorting of the columns by day-of-week names:
names = [ 'Monday', 'Tuesday', 'Wednesday', 'Thursday','Friday', 'Saturday', 'Sunday']
df['weekday'] = df['weekday'].astype('category', categories=names, ordered=True)
df.groupby([df['Date'].dt.hour, 'weekday'])['Cyclists'].sum().unstack().plot()
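Note that this astype signature (and dt.weekday_name used in the sample above) has been removed in newer pandas versions; on recent versions an equivalent would be something like this sketch, using CategoricalDtype and dt.day_name():
from pandas.api.types import CategoricalDtype
names = ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday']
df['weekday'] = df['Date'].dt.day_name().astype(CategoricalDtype(categories=names, ordered=True))
df.groupby([df['Date'].dt.hour, 'weekday'])['Cyclists'].sum().unstack().plot()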