I have a dataframe that looks like the one below.
Day | Price
12-05-2015 | 73
12-06-2015 | 68
11-07-2015 | 77
10-08-2015 | 54
I would like to subtract the price on one Day from the corresponding price 30 days later. To shift the days, I've used data.loc[data['Day'] + timedelta(days=30)], but this obviously overflows near the final dates in my dataframe (there is no row 30 days ahead of the last ones). Is there a way to subtract the prices without iterating over all the rows in the dataframe?
If it helps, my desired output is something like the following.
Start_Day | Price
12-05-2015 | -5
11-07-2015 | -23
You can use the df.diff() function:
http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.diff.html
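For example, a minimal sketch of how diff could produce the desired output, assuming the rows are already sorted by Day and spaced roughly 30 days apart as in the sample (the df construction here just mirrors your sample data):

import pandas as pd

df = pd.DataFrame({
    'Day': pd.to_datetime(['12-05-2015', '12-06-2015', '11-07-2015', '10-08-2015'], dayfirst=True),
    'Price': [73, 68, 77, 54],
})

# diff() gives each price minus the previous price; shift(-1) aligns that
# difference with the start day instead of the later day.
out = pd.DataFrame({'Start_Day': df['Day'], 'Price': df['Price'].diff().shift(-1)}).dropna()

Note this produces a row for every consecutive pair of dates, so it only matches your expected output exactly if each row really is about 30 days after the previous one.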
I'm trying something I've never done before and I'm in need of some help.
Basically, I need to filter sections of a pandas dataframe, transpose each filtered section, and then concatenate every resulting section together.
Here's a representation of my dataframe:
df:
id | text_field | text_value
1 | Date | 2021-06-23
1 | Hour | 10:50
2 | Position | City
2 | Position | Countryside
3 | Date | 2021-06-22
3 | Hour | 10:45
I can then use some filtering method to isolate parts of my data:
df.groupby('id').filter(lambda x: True)
test = df.query(' id == 1 ')
test = test[["text_field","text_value"]]
test_t = test.set_index("text_field").T
test_t:
text_field | Date | Hour
text_value | 2021-06-23 | 10:50
If I repeat the process for the rows with id == 3 and then concatenate the result with test_t, I'll have the following:
text_field | Date | Hour
text_value | 2021-06-23 | 10:50
text_value | 2021-06-22 | 10:45
I'm aware that performing this with the rows where id == 2 will give me other columns, and that's alright too; it's what I want as well.
What I can't figure out is how to do this for every "id" in my dataframe. I wasn't able to create a function or for loop that works. Can somebody help me?
To summarize:
1 - I need to separate my dataframe into sections according to the values in the "id" column
2 - After that, I need to remove the "id" column and transpose the result
3 - I need to concatenate every resulting dataframe into one big dataframe
You can use pivot_table:
df.pivot_table(
index='id', columns='text_field', values='text_value', aggfunc='first')
Output:
text_field Date Hour Position
id
1 2021-06-23 10:50 NaN
2 NaN NaN City
3 2021-06-22 10:45 NaN
It's not exactly clear how you want to deal with repeating values, though; it would be great to have some description of that (id == 2 would make a good example).
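If you would rather keep the literal group-transpose-concatenate flow from the question, here is a minimal sketch under that assumption; the '_1' suffix I append to repeated fields (the two Position rows for id == 2) is my own naming choice:

import pandas as pd

df = pd.DataFrame({
    'id': [1, 1, 2, 2, 3, 3],
    'text_field': ['Date', 'Hour', 'Position', 'Position', 'Date', 'Hour'],
    'text_value': ['2021-06-23', '10:50', 'City', 'Countryside', '2021-06-22', '10:45'],
})

pieces = []
for _, g in df.groupby('id'):
    g = g.copy()
    # Disambiguate repeated fields within one id so the transpose has unique columns.
    dup = g.groupby('text_field').cumcount()
    g['text_field'] = g['text_field'].where(dup == 0, g['text_field'] + '_' + dup.astype(str))
    pieces.append(g.set_index('text_field')[['text_value']].T)
result = pd.concat(pieces, ignore_index=True)

pd.concat then aligns the differing columns of each piece and fills the gaps with NaN.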
Update: If you want to ignore the ids and simply concatenate all the values:
pd.DataFrame(df.groupby('text_field')['text_value'].apply(list).to_dict())
Output:
Date Hour Position
0 2021-06-23 10:50 City
1 2021-06-22 10:45 Countryside
I have such a dataframe df:
time | Score | weekday
01-01-21 12:00 | 1 | Friday
01-01-21 24:00 | 33 | Friday
02-01-21 12:00 | 12 | Saturday
02-01-21 24:00 | 9 | Saturday
03-01-21 12:00 | 11 | Sunday
03-01-21 24:00 | 8 | Sunday
I now want to get the correlation between columns Score and weekday.
I did the following to get it:
s_corr = df.weekday.str.get_dummies().corrwith(df['Score'])
print (s_corr)
I am now wondering if this is the correct way of doing it. Or would it be better to first create a new dataframe in which the rows are summed for each day via the time column, and then use the code above to get the correlation between Score and weekday? Or are there maybe other suggestions for improvement?
I have used numpy.corrcoef before for getting correlations between continuous and categorical variables. You can try it and see if it works for you.
I first created dummies for the categorical variables:
import pandas as pd

df_dummies = pd.get_dummies(df['weekday'], drop_first=True)  # one indicator column per weekday, minus the first
df_new = pd.concat([df['Score'], df_dummies], axis=1)
I then converted the DataFrame with the dummies to a numpy array and applied corrcoef to it as follows:
import numpy as np

df_arr = df_new.to_numpy()
corr_matrix = np.corrcoef(df_arr.T)  # transpose so each column becomes a corrcoef variable
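To read the matrix more easily, you could wrap it back into a labeled DataFrame (a small, optional addition on top of the code above):

# Label rows/columns with the original column names and pull out the Score row.
corr_df = pd.DataFrame(corr_matrix, index=df_new.columns, columns=df_new.columns)
print(corr_df['Score'])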
I have a DataFrame that keeps track the item_id with its price over time period:
item_id | item_price | day
1 | 10 | 09-02-2000 # DD-MM-YYYY format
2 | 24 | 10-02-2000
1 | 10 | 20-02-2000
...
As you can see, the price of item 1 does not change over time. How do I select all of the items whose item_price does not change over time? I tried groupby(), but it does not seem to work right.
EDIT: the desired output is all the item_id values whose price does not change over time. For example: item_id_list = [1, ...]. It can be a list or a DataFrame Series.
Here you go:
df.groupby('item_id').item_price.nunique()
And you keep the ones with one unique price.
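For example, a minimal sketch that builds the item_id_list from the question:

counts = df.groupby('item_id').item_price.nunique()
# Keep only the items with exactly one unique price, i.e. the price never changed.
item_id_list = counts[counts == 1].index.tolist()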
I have data over the timespan of over a year. I am interested in grouping the data by week, and getting the slope of two variables by week. Here is what the data looks like:
Date | Total_Sales | Products
2015-12-30 07:42:50 | 2900 | 24
2015-12-30 09:10:10 | 3400 | 20
2016-02-07 07:07:07 | 5400 | 25
2016-02-07 07:08:08 | 1000 | 64
So ideally I would like to perform a linear regression on total_sales and products for each week of this data and record the slope. This works when each week is represented in the data, but I have problems when some weeks are skipped. I know I could do this by turning the date into a week number, but I feel the result would be skewed because there is over a year's worth of data (week numbers would repeat across years).
Here is the code I have so far:
df['Date']=pd.to_datetime(vals['EventDate']) - pd.to_timedelta(7,unit='d')
df.groupby(pd.Grouper(key='Week', freq='W-MON')).apply(lambda v: linregress(v.Total_Sales, v.Products)[0]).reset_index()
However, I get the following error:
ValueError: Inputs must not be empty.
I expect the output to look like this:
Date | Slope
2015-12-28 | -0.008
2016-02-01 | -0.008
I assume this is happening because Python is unable to group properly and is unable to recognise the datetime as a key, as the Date column has varying timestamps too.
Try the following code. It worked for me:
from datetime import timedelta
from scipy import stats
import pandas as pd

df['Date'] = pd.to_datetime(df['Date'])  #### converts the Date column to pandas datetime
df['daysoffset'] = df['Date'].apply(lambda x: x.weekday())
#### weekday() returns the day of the week as an integer, where Monday is 0 and Sunday is 6.
df['week_start'] = df.apply(lambda x: x['Date'].date() - timedelta(days=x['daysoffset']), axis=1)
#### x['Date'].date() removes the timestamp and keeps only the date;
#### this line assigns the date of the preceding Monday to the 'week_start' column.
df.groupby('week_start').apply(lambda v: stats.linregress(v.Total_Sales, v.Products)[0]).reset_index()
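If you also want the column named as in your expected output, a small addition (the Slope label is my own choice):

slopes = df.groupby('week_start').apply(lambda v: stats.linregress(v.Total_Sales, v.Products)[0]).reset_index(name='Slope')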
I have a dataset like this:
Policy | Customer | Employee | CoveragDate | LapseDate
123 | 1234 | 1234 | 2011-06-01 | 2015-12-31
124 | 1234 | 1234 | 2016-01-01 | ?
125 | 1234 | 1234 | 2011-06-01 | 2012-01-01
124 | 5678 | 5555 | 2014-01-01 | ?
I'm trying to iterate through each policy for each employee of each customer (a customer can have many employees, an employee can have multiple policies) and compare the covered date against the lapse date for a particular employee. If the covered date and lapse date are within 5 days, I'd like to add that policy to a results list.
So, expected output would be:
Policy | Customer | Employee
123 | 1234 | 1234
because policy 123's lapse date was within 5 days of policy 124's covered date.
So far, I've used this code:
import pandas
import datetime

# Pull in data from query
wd = pandas.read_csv('DATA')
wd = wd.set_index('Policy#')
wd = wd.rename(columns={'Policy#': 'Policy'})

Resultlist = []
for EMPID in wd.groupby(['EMPID', 'Customer']):
    for Policy in wd.groupby(['EMPID', 'Customer']):
        EffDate = pandas.to_datetime(wd['CoverageEffDate'])
        for Policy in wd.groupby(['EMPID', 'Customer']):
            check = wd['LapseDate'].astype(str)
            if check.any() == '?':  # here lies the problem - it's evaluating if ANY of the items == '?'
                print(check)
                continue
            else:
                LapseDate = pandas.to_datetime(wd['LapseDate']) + datetime.timedelta(days=5)
                if EffDate < LapseDate:
                    Resultlist.append(wd['Policy', 'Customer'])
print(Resultlist)
I'm trying to use the pandas .any() function to evaluate whether the current row is a '?' (which means null data, i.e. the policy hasn't lapsed). However, it appears that this statement just evaluates whether there is a '?' row anywhere in the column, not in the current row. I need to determine this because comparing a '?' value against a date raises an error.
Is there a way to reference just the row I'm iterating on for a conditional check? To my knowledge, I can't use the pandas apply function technique because I need each employee's policy data compared against any other policies they hold.
Thank you!
check.str.contains('?', regex=False) would return a boolean array showing which entries contain a '?' (regex=False is needed because ? is a regex metacharacter). Otherwise you might consider just iterating through, i.e.
check = wd['LapseDate'].astype(str)
for row in check:
    if row == '?':
        print(check)
but there's really no practical difference between checking the whole column for a match at once and iterating through all the rows to check each one.
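If you want to avoid the explicit loops altogether, here is a vectorized sketch of the 5-day comparison; the column names are taken from your sample table, and I treat '?' as a policy that has not lapsed:

import pandas as pd

wd = pd.DataFrame({
    'Policy':       [123, 124, 125, 124],
    'Customer':     [1234, 1234, 1234, 5678],
    'Employee':     [1234, 1234, 1234, 5555],
    'CoverageDate': ['2011-06-01', '2016-01-01', '2011-06-01', '2014-01-01'],
    'LapseDate':    ['2015-12-31', '?', '2012-01-01', '?'],
})

# '?' means the policy has not lapsed, so turn it into NaT before parsing dates.
wd['CoverageDate'] = pd.to_datetime(wd['CoverageDate'])
wd['LapseDate'] = pd.to_datetime(wd['LapseDate'].replace('?', pd.NaT))

# Pair every policy with every other policy held by the same customer/employee.
pairs = wd.merge(wd, on=['Customer', 'Employee'], suffixes=('', '_other'))
pairs = pairs[pairs['Policy'] != pairs['Policy_other']]

# Keep policies whose lapse date falls within 5 days of another policy's coverage date.
close = (pairs['CoverageDate_other'] - pairs['LapseDate']).abs() <= pd.Timedelta(days=5)
result = pairs.loc[close, ['Policy', 'Customer', 'Employee']].drop_duplicates()

On the sample data this keeps policy 123, whose lapse date (2015-12-31) falls one day before policy 124's coverage date (2016-01-01). Comparisons against NaT evaluate to False, so unlapsed policies are never flagged.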