I have been collecting Twitter data for a couple of days now and, among other things, I need to analyze how content propagates. I created a list of timestamps at which users were interested in content and imported the Twitter timestamps into a pandas DataFrame with the column name 'timestamps'. It looks like this:
0 Sat Dec 14 05:13:28 +0000 2013
1 Sat Dec 14 05:21:12 +0000 2013
2 Sat Dec 14 05:23:10 +0000 2013
3 Sat Dec 14 05:27:54 +0000 2013
4 Sat Dec 14 05:37:43 +0000 2013
5 Sat Dec 14 05:39:38 +0000 2013
6 Sat Dec 14 05:41:39 +0000 2013
7 Sat Dec 14 05:43:46 +0000 2013
8 Sat Dec 14 05:44:50 +0000 2013
9 Sat Dec 14 05:47:33 +0000 2013
10 Sat Dec 14 05:49:29 +0000 2013
11 Sat Dec 14 05:55:03 +0000 2013
12 Sat Dec 14 05:59:09 +0000 2013
13 Sat Dec 14 05:59:45 +0000 2013
14 Sat Dec 14 06:17:19 +0000 2013
etc. What I want to do is resample every 10 minutes and count how many users were interested in the content in each time frame. My problem is that I have no clue how to process the timestamps I imported from Twitter. Should I use regular expressions, or is there a better approach? I would appreciate some pointers. Thanks!
That's Twitter's created_at format, not ISO 8601, but pd.to_datetime can still parse it:
>>> df[:2]
timestamp
0 Sat Dec 14 05:13:28 +0000 2013
1 Sat Dec 14 05:21:12 +0000 2013
>>> df['timestamp'] = pd.to_datetime(df['timestamp'])
>>> df[:2]
timestamp
0 2013-12-14 05:13:28
1 2013-12-14 05:21:12
To resample, make the timestamp the index and use resample:
>>> df.index = df['timestamp']
>>> df.resample('20Min').count()
                     timestamp
timestamp
2013-12-14 05:00:00          1
2013-12-14 05:20:00          5
2013-12-14 05:40:00          8
2013-12-14 06:00:00          1
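If you are on a recent pandas, here is a minimal end-to-end sketch (the three-row sample frame and the explicit format string are my additions) that parses Twitter's created_at layout and counts events in the 10-minute bins the question actually asked for:

import pandas as pd

# Hypothetical sample in the question's format
df = pd.DataFrame({'timestamps': ['Sat Dec 14 05:13:28 +0000 2013',
                                  'Sat Dec 14 05:21:12 +0000 2013',
                                  'Sat Dec 14 05:23:10 +0000 2013']})

# An explicit format avoids per-row inference; this pattern matches
# Twitter's created_at layout.
df['timestamps'] = pd.to_datetime(df['timestamps'],
                                  format='%a %b %d %H:%M:%S %z %Y')

# Count events in 10-minute bins; on='timestamps' avoids mutating the index.
counts = df.resample('10min', on='timestamps').size()
print(counts)

Note the result is timezone-aware (UTC) because of the +0000 offset.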
I have dates as below:
date
0 Today, 12 Mar
1 Tomorrow, 13 Mar
2 Tomorrow, 13 Mar
3 Tomorrow, 13 Mar
4 Tomorrow, 13 Mar
5 14 Mar 2021
6 14 Mar 2021
7 14 Mar 2021
8 14 Mar 2021
9 15 Mar 2021
How do I parse it as datetime in pandas?
Your date column contains 'Today' and 'Tomorrow', which are not valid datetime tokens, so first replace them with the year (assuming the year is fixed, i.e. 2021):
df['date']=df['date'].str.replace('Today','2021')
df['date']=df['date'].str.replace('Tomorrow','2021')
Now just use the to_datetime() method:
df['date']=pd.to_datetime(df['date'])
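The replacement trick above assumes a fixed year. If you instead want Today/Tomorrow resolved relative to an actual date, a small sketch (the parse_relative helper and the hard-coded anchor date are mine, purely illustrative):

import pandas as pd

df = pd.DataFrame({'date': ['Today, 12 Mar', 'Tomorrow, 13 Mar', '14 Mar 2021']})

# Anchor date is an assumption; in live code use pd.Timestamp.now().normalize()
today = pd.Timestamp('2021-03-12')

def parse_relative(s):
    if s.startswith('Today'):
        return today
    if s.startswith('Tomorrow'):
        return today + pd.Timedelta(days=1)
    return pd.to_datetime(s)

df['date'] = df['date'].apply(parse_relative)
print(df)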
I'm trying to sort grouped data using pandas.
My code :
df = pd.read_csv("./data3.txt")
grouped = df.groupby(['cust','year','month'])['price'].count()
print(grouped)
My data:
cust,year,month,price
astor,2015,Jan,100
astor,2015,Jan,122
astor,2015,Feb,200
astor,2016,Feb,234
astor,2016,Feb,135
astor,2016,Mar,169
astor,2017,Mar,321
astor,2017,Apr,245
tor,2015,Jan,100
tor,2015,Feb,122
tor,2015,Feb,200
tor,2016,Mar,234
tor,2016,Apr,135
tor,2016,May,169
tor,2017,Mar,321
tor,2017,Apr,245
This is my result.
cust year month
astor 2015 Feb 1
Jan 2
2016 Feb 2
Mar 1
2017 Apr 1
Mar 1
tor 2015 Feb 2
Jan 1
2016 Apr 1
Mar 1
May 1
2017 Apr 1
Mar 1
How do I get the output sorted by month?
Add parameter sort=False to groupby:
grouped = df.groupby(['cust','year','month'], sort=False)['price'].count()
print (grouped)
cust year month
astor 2015 Jan 2
Feb 1
2016 Feb 2
Mar 1
2017 Mar 1
Apr 1
tor 2015 Jan 1
Feb 2
2016 Mar 1
Apr 1
May 1
2017 Mar 1
Apr 1
Name: price, dtype: int64
If the first solution is not possible, convert the months to datetimes and then convert back at the end:
df['month'] = pd.to_datetime(df['month'], format='%b')
f = lambda x: x.strftime('%b')
grouped = df.groupby(['cust','year','month'])['price'].count().rename(f, level=2)
print (grouped)
cust year month
astor 2015 Jan 2
Feb 1
2016 Feb 2
Mar 1
2017 Mar 1
Apr 1
tor 2015 Jan 1
Feb 2
2016 Mar 1
Apr 1
May 1
2017 Mar 1
Apr 1
Name: price, dtype: int64
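A third idiom, not in the answers above but common for this problem: make month an ordered categorical so groupby sorts it chronologically without the datetime round-trip. A sketch (the inline three-row CSV is just for illustration):

import pandas as pd
from io import StringIO

data = """cust,year,month,price
astor,2015,Jan,100
astor,2015,Jan,122
astor,2015,Feb,200"""
df = pd.read_csv(StringIO(data))

months = ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun',
          'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec']
df['month'] = pd.Categorical(df['month'], categories=months, ordered=True)

# observed=True drops month categories that never occur for a cust/year pair
grouped = df.groupby(['cust', 'year', 'month'], observed=True)['price'].count()
print(grouped)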
df2 = pd.DataFrame({'person_id':[11,11,11,11,11,12,12,13,13,14,14,14,14],
'admit_date':['01/01/2011','01/01/2009','12/31/2013','12/31/2017','04/03/2014','08/04/2016',
'03/05/2014','02/07/2011','08/08/2016','12/31/2017','05/01/2011','05/21/2014','07/12/2016']})
df2 = df2.melt('person_id', value_name='dates')
df2['dates'] = pd.to_datetime(df2['dates'])
What I would like to do is
a) Exclude/filter out records from the data frame if a subject has both Dec 31st and Jan 1st in its records. Note that the year doesn't matter.
If a subject has either Dec 31st or Jan 1st (but not both), we leave them as is.
But if they have both Dec 31st and Jan 1st, we remove one of them (either Dec 31st or Jan 1st). Note they could have multiple entries with the same date as well, like person_id = 11.
I could only come up with the below:
df2_new = df2['dates'] != '2017-12-31'  # but this only excludes the exact date 2017-12-31. How can I compare month/day and ignore the year?
df2[df2_new]
My expected output is as described below.
For person_id = 11, we drop 12-31 because it had both 12-31 and 01-01 in their records whereas for person_id = 14, we don't drop 12-31 because it has only 12-31 in its records.
We drop 12-31 only when both 12-31 and 01-01 appear in a person's records.
Use:
import numpy as np

s = df2['dates'].dt.strftime('%m-%d')
m1 = s.eq('01-01').groupby(df2['person_id']).transform('any')
m2 = s.eq('12-31').groupby(df2['person_id']).transform('any')
m3 = np.select([m1 & m2, m1 | m2], [s.ne('12-31'), True], default=True)
df3 = df2[m3]
Result:
# print(df3)
person_id variable dates
0 11 admit_date 2011-01-01
1 11 admit_date 2009-01-01
4 11 admit_date 2014-04-03
5 12 admit_date 2016-08-04
6 12 admit_date 2014-03-05
7 13 admit_date 2011-02-07
8 13 admit_date 2016-08-08
9 14 admit_date 2017-12-31
10 14 admit_date 2011-05-01
11 14 admit_date 2014-05-21
12 14 admit_date 2016-07-12
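For what it's worth, the np.select can be collapsed: the only rows that should be dropped are 12-31 rows belonging to a person who has both dates. A compact equivalent sketch (the three-row frame is my own, just for illustration):

import pandas as pd

df2 = pd.DataFrame({'person_id': [11, 11, 14],
                    'dates': pd.to_datetime(['2011-01-01', '2013-12-31',
                                             '2017-12-31'])})

s = df2['dates'].dt.strftime('%m-%d')
m1 = s.eq('01-01').groupby(df2['person_id']).transform('any')
m2 = s.eq('12-31').groupby(df2['person_id']).transform('any')

# Drop a 12-31 row only when the same person also has a 01-01 row.
df3 = df2[~(m1 & m2 & s.eq('12-31'))]
print(df3)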
Another way:
Coerce the dates to day-month strings.
Create a temp column where 31 Dec is converted to 01 Jan.
Drop duplicates by person id, variable and the temp column, keeping the first, then drop the temp column.
df2['dates'] = df2['dates'].dt.strftime('%d %b')
df2 = (df2.assign(check=np.where(df2.dates == '31 Dec', '01 Jan', df2.dates))
          .drop_duplicates(['person_id', 'variable', 'check'], keep='first')
          .drop(columns=['check']))
person_id variable dates
0 11 admit_date 01 Jan
4 11 admit_date 03 Apr
5 12 admit_date 04 Aug
6 12 admit_date 05 Mar
7 13 admit_date 07 Feb
8 13 admit_date 08 Aug
9 14 admit_date 31 Dec
10 14 admit_date 01 May
11 14 admit_date 21 May
12 14 admit_date 12 Jul
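One caveat with this variant: it overwrites dates with strings. A sketch that keeps the datetimes by building the day-month key outside the frame instead (the three-row frame is again just illustrative):

import numpy as np
import pandas as pd

df2 = pd.DataFrame({'person_id': [11, 11, 14],
                    'variable': ['admit_date'] * 3,
                    'dates': pd.to_datetime(['2011-01-01', '2013-12-31',
                                             '2017-12-31'])})

# Day-month key with 31 Dec folded into 01 Jan, kept out of 'dates' itself
key = df2['dates'].dt.strftime('%d %b')
df2_out = (df2.assign(check=np.where(key == '31 Dec', '01 Jan', key))
              .drop_duplicates(['person_id', 'variable', 'check'], keep='first')
              .drop(columns=['check']))
print(df2_out)  # 'dates' is still datetime64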
I have a dataframe with a column that looks like this
Other via Other on 17 Jan 2019
Other via Other on 17 Jan 2019
Interview via E-mail on 14 Dec 2018
Rejected via E-mail on 15 Jan 2019
Rejected via E-mail on 15 Jan 2019
Rejected via E-mail on 15 Jan 2019
Rejected via E-mail on 15 Jan 2019
Interview via E-mail on 14 Jan 2019
Rejected via Website on 12 Jan 2019
Is it possible to split this column into two: one with whatever comes before the "via" and the other with whatever comes after the "on"? Thank you!
Use str.extract with a raw-string pattern:
df[['col1', 'col2']] = df.col.str.extract(r'(.*)\svia.*on\s(.*)', expand=True)
col1 col2
0 Other 17 Jan 2019
1 Other 17 Jan 2019
2 Interview 14 Dec 2018
3 Rejected 15 Jan 2019
4 Rejected 15 Jan 2019
5 Rejected 15 Jan 2019
6 Rejected 15 Jan 2019
7 Interview 14 Jan 2019
8 Rejected 12 Jan 2019
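Since col2 is date text, you may want to parse it afterwards; a small follow-up sketch (the format string is my assumption based on the data shown):

import pandas as pd

df = pd.DataFrame({'col': ['Other via Other on 17 Jan 2019',
                           'Rejected via Website on 12 Jan 2019']})

# Rows that don't match the pattern come back as NaN in both columns.
df[['col1', 'col2']] = df.col.str.extract(r'(.*)\svia.*on\s(.*)')

# Turn the extracted date text into real datetimes.
df['col2'] = pd.to_datetime(df['col2'], format='%d %b %Y')
print(df.dtypes)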
You can pretty much use split() as df.col.str.split('via|on', expand=True)[[0,2]]:
Let's walk through it.
Reproducing Your DataFrame:
>>> df
col
0 Other via Other on 17 Jan 2019
1 Other via Other on 17 Jan 2019
2 Interview via E-mail on 14 Dec 2018
3 Rejected via E-mail on 15 Jan 2019
4 Rejected via E-mail on 15 Jan 2019
5 Rejected via E-mail on 15 Jan 2019
6 Rejected via E-mail on 15 Jan 2019
7 Interview via E-mail on 14 Jan 2019
8 Rejected via Website on 12 Jan 2019
First we split the whole column on the strings via and on, which breaks col into three columns 0, 1 and 2: column 0 holds what comes before via, column 2 holds what comes after on, and column 1 holds the middle part, which we don't need.
So we keep only columns 0 and 2 as follows.
>>> df.col.str.split('via|on',expand=True)[[0,2]]
0 2
0 Other 17 Jan 2019
1 Other 17 Jan 2019
2 Interview 14 Dec 2018
3 Rejected 15 Jan 2019
4 Rejected 15 Jan 2019
5 Rejected 15 Jan 2019
6 Rejected 15 Jan 2019
7 Interview 14 Jan 2019
8 Rejected 12 Jan 2019
Better yet, assign the result to a new dataframe and then rename the columns:
newdf = df.col.str.split('via|on',expand=True)[[0,2]]
newdf.rename(columns={0: 'col1', 2: 'col2'}, inplace=True)
print(newdf)
col1 col2
0 Other 17 Jan 2019
1 Other 17 Jan 2019
2 Interview 14 Dec 2018
3 Rejected 15 Jan 2019
4 Rejected 15 Jan 2019
5 Rejected 15 Jan 2019
6 Rejected 15 Jan 2019
7 Interview 14 Jan 2019
8 Rejected 12 Jan 2019
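One thing the split-based answer glosses over: splitting on the bare words via and on leaves surrounding spaces in the pieces and could also fire on those letter sequences inside longer words. A slightly safer sketch, splitting on space-padded separators (regex=True requires pandas >= 1.4; the two-row frame is my own):

import pandas as pd

df = pd.DataFrame({'col': ['Other via Other on 17 Jan 2019',
                           'Interview via E-mail on 14 Dec 2018']})

# ' via ' / ' on ' (with spaces) keep the pieces clean and avoid matching
# 'via'/'on' inside other words.
newdf = df.col.str.split(r' via | on ', expand=True, regex=True)[[0, 2]]
newdf.columns = ['col1', 'col2']
print(newdf)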
I have the following dataframe:
Year Month Booked
0 2016 Aug 55999.0
6 2017 Aug 60862.0
1 2016 Jul 54062.0
7 2017 Jul 58417.0
2 2016 Jun 42044.0
8 2017 Jun 48767.0
3 2016 May 39676.0
9 2017 May 40986.0
4 2016 Oct 39593.0
10 2017 Oct 41439.0
5 2016 Sep 49677.0
11 2017 Sep 53969.0
I want to obtain the percentage change with respect to the same month from last year. I have tried the following code:
df['pct_ch'] = df.groupby(['Month','Year'])['Booked'].pct_change()
but I get the following, which is not at all what I want:
Year Month Booked pct_ch
0 2016 Aug 55999.0 NaN
6 2017 Aug 60862.0 0.086841
1 2016 Jul 54062.0 -0.111728
7 2017 Jul 58417.0 0.080556
2 2016 Jun 42044.0 -0.280278
8 2017 Jun 48767.0 0.159904
3 2016 May 39676.0 -0.186417
9 2017 May 40986.0 0.033017
4 2016 Oct 39593.0 -0.033987
10 2017 Oct 41439.0 0.046624
5 2016 Sep 49677.0 0.198798
11 2017 Sep 53969.0 0.086398
Do not group by Year, otherwise you won't get, for instance, Aug 2017 and Aug 2016 in the same group. Also, use transform to broadcast the results back to the original indices.
Try:
df['pct_ch'] = df.groupby(['Month'])['Booked'].transform(lambda s: s.pct_change())
Year Month Booked pct_ch
0 2016 Aug 55999.0 NaN
6 2017 Aug 60862.0 0.086841
1 2016 Jul 54062.0 NaN
7 2017 Jul 58417.0 0.080556
2 2016 Jun 42044.0 NaN
8 2017 Jun 48767.0 0.159904
3 2016 May 39676.0 NaN
9 2017 May 40986.0 0.033017
4 2016 Oct 39593.0 NaN
10 2017 Oct 41439.0 0.046624
5 2016 Sep 49677.0 NaN
11 2017 Sep 53969.0 0.086398
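As an aside, groupby objects expose pct_change directly, so the transform(lambda ...) can be shortened; just remember pct_change is order-sensitive, so sort by Year within each Month first. A sketch with a trimmed-down frame of my own:

import pandas as pd

df = pd.DataFrame({'Year': [2016, 2017, 2016, 2017],
                   'Month': ['Aug', 'Aug', 'Jul', 'Jul'],
                   'Booked': [55999.0, 60862.0, 54062.0, 58417.0]})

# pct_change compares each row with the previous row in its group,
# so make sure each Month group is sorted chronologically.
df = df.sort_values(['Month', 'Year'])
df['pct_ch'] = df.groupby('Month')['Booked'].pct_change()
print(df)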