Filling the missing rows in pandas dataframe - python

data = {'node1': [1, 1, 1, 2, 2, 5],
        'node2': [8, 16, 22, 5, 25, 10],
        'weight': [1, 1, 1, 1, 1, 1]}
df = pd.DataFrame(data, columns=['node1', 'node2', 'weight'])
df2 = (df.assign(Cu=df.groupby('node1').cumcount())
         .set_index('Cu')
         .groupby('node1')
         .apply(lambda x: x['node2'])
         .unstack('Cu')
         .fillna(np.nan))
Output:
1 8.0 16.0 22.0
2 5.0 25.0 0.0
5 10.0 0.0 0.0
This is the output I am getting, but I require the following output:
1 8 16 22
2 5 25 0
3 0 0 0
4 0 0 0
5 10 0 0
The rows that are missing from the data, like 3 and 4, should have all columns filled with zeros.

Here are a few ways of doing it.
Option 1
In [36]: idx = np.arange(df.node1.min(), df.node1.max()+1)
In [37]: df.groupby('node1')['node2'].apply(list).apply(pd.Series).reindex(idx).fillna(0)
Out[37]:
0 1 2
node1
1 8.0 16.0 22.0
2 5.0 25.0 0.0
3 0.0 0.0 0.0
4 0.0 0.0 0.0
5 10.0 0.0 0.0
Option 2
In [39]: (df.groupby('node1')['node2'].apply(lambda x: pd.Series(x.values))
            .unstack().reindex(idx).fillna(0))
Out[39]:
0 1 2
node1
1 8.0 16.0 22.0
2 5.0 25.0 0.0
3 0.0 0.0 0.0
4 0.0 0.0 0.0
5 10.0 0.0 0.0
Option 3
In [55]: pd.DataFrame.from_dict(
             {i: x.values for i, x in df.groupby('node1')['node2']},
             orient='index').reindex(idx).fillna(0)
Out[55]:
0 1 2
1 8.0 16.0 22.0
2 5.0 25.0 0.0
3 0.0 0.0 0.0
4 0.0 0.0 0.0
5 10.0 0.0 0.0
Measure the efficiency and readability of each based on your use case.
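A minimal timing sketch, using the standard-library timeit; the data here is the toy frame from the question, so absolute numbers mean little, and you would rerun this on your real data with each of the options above substituted in:

import timeit

import numpy as np
import pandas as pd

df = pd.DataFrame({'node1': [1, 1, 1, 2, 2, 5],
                   'node2': [8, 16, 22, 5, 25, 10],
                   'weight': [1, 1, 1, 1, 1, 1]})
idx = np.arange(df.node1.min(), df.node1.max() + 1)

def option1():
    # Option 1 from above: list -> Series expansion, then reindex.
    return (df.groupby('node1')['node2']
              .apply(list).apply(pd.Series)
              .reindex(idx).fillna(0))

print(timeit.timeit(option1, number=100))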

In [15]: idx = np.arange(df.node1.min(), df.node1.max()+1)
In [16]: df.pivot_table(index='node1',
                        columns=df.groupby('node1').cumcount(),
                        values='node2',
                        fill_value=0) \
           .reindex(idx) \
           .fillna(0)
Out[16]:
0 1 2
node1
1 8.0 16.0 22.0
2 5.0 25.0 0.0
3 0.0 0.0 0.0
4 0.0 0.0 0.0
5 10.0 0.0 0.0
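The asker's target output is integers rather than floats; once every NaN has been filled, a final cast restores that. A small sketch, reusing df and idx from above:

# reindex introduces NaN for the missing node1 groups, so fill first,
# then cast; at that point the cast to int is lossless.
out = (df.pivot_table(index='node1',
                      columns=df.groupby('node1').cumcount(),
                      values='node2',
                      fill_value=0)
         .reindex(idx)
         .fillna(0)
         .astype(int))
print(out)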

Related

Conditionally Set Values Greater Than 0 To 1

I have a dataframe that looks like this, with many more date columns
AUTHOR 2022-07-01 2022-10-14 2022-10-15 .....
0 Kathrine 0.0 7.0 0.0
1 Catherine 0.0 13.0 17.0
2 Amanda Jane 0.0 0.0 0.0
3 Jaqueline 0.0 3.0 0.0
4 Christine 0.0 0.0 0.0
I would like to set values in each column after the AUTHOR to 1 when the value is greater than 0, so the resulting table would look like this:
AUTHOR 2022-07-01 2022-10-14 2022-10-15 .....
0 Kathrine 0.0 1.0 0.0
1 Catherine 0.0 1.0 1.0
2 Amanda Jane 0.0 0.0 0.0
3 Jaqueline 0.0 1.0 0.0
4 Christine 0.0 0.0 0.0
I tried the following line of code but got an error, which makes sense: I need to figure out how to apply this only to the date columns while keeping the AUTHOR column in my table.
Counts[Counts != 0] = 1
TypeError: Cannot do inplace boolean setting on mixed-types with a non np.nan value
You can select the date columns first, then mask on those columns:
cols = df.drop(columns='AUTHOR').columns
# or
cols = df.filter(regex=r'\d{4}-\d{2}-\d{2}').columns
# or
cols = df.select_dtypes(include='number').columns
df[cols] = df[cols].mask(df[cols] != 0, 1)
print(df)
AUTHOR 2022-07-01 2022-10-14 2022-10-15
0 Kathrine 0.0 1.0 0.0
1 Catherine 0.0 1.0 1.0
2 Amanda Jane 0.0 0.0 0.0
3 Jaqueline 0.0 1.0 0.0
4 Christine 0.0 0.0 0.0
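An equivalent alternative, sketched under the same assumption that all the date columns are numeric, does the comparison once and casts the boolean result back:

# Numeric date columns only; AUTHOR stays untouched.
cols = df.select_dtypes(include='number').columns
# (values > 0) yields booleans; the cast keeps the 0.0/1.0 layout.
df[cols] = (df[cols] > 0).astype(float)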
Since you would like to exclude only the first column, you can set it as the index, create your booleans, and reset the index at the end.
df = df.set_index('AUTHOR').pipe(lambda g: g.mask(g > 0, 1)).reset_index()
df
AUTHOR 2022-10-14 2022-10-15
0 Kathrine 0.0 1.0
1 Cathrine 1.0 1.0

Convert the data from summary to daily time series data (pandas)

I have a dataset which is a time series. It has several regions at once, here is a small example:
date confirmed deaths recovered region_code
0 2020-03-27 3.0 0.0 0.0 ARK
1 2020-03-27 4.0 0.0 0.0 BA
2 2020-03-27 1.0 0.0 0.0 BEL
..........................................................
71540 2022-07-19 164194.0 2830.0 160758.0 YAR
71541 2022-07-19 19170.0 555.0 18484.0 YEV
71542 2022-07-19 169603.0 2349.0 167075.0 ZAB
For three of these columns, I want to show, in three separate new columns, how many new cases were added:
date confirmed deaths recovered region_code daily_confirmed daily_deaths daily_recovered
0 2020-03-27 3.0 0.0 0.0 ARK 3.0 0.0 0.0
1 2020-03-27 4.0 0.0 0.0 BA 4.0 0.0 0.0
2 2020-03-27 1.0 0.0 0.0 BEL 1.0 0.0 0.0
..........................................................
71540 2022-07-19 164194.0 2830.0 160758.0 YAR 32.0 16.0 8.0
71541 2022-07-19 19170.0 555.0 18484.0 YEV 6.0 1.0 1.0
71542 2022-07-19 169603.0 2349.0 167075.0 ZAB 1.0 8.0 9.0
That is, for each region, you need the difference between the current date and the previous day to know how many new cases occurred.
The problem is that I don't know how to do this correctly. Since there are no missing dates in the data, something like df['daily_cases'] = df['confirmed'] - df['confirmed'].shift(fill_value=0) would work, but there are many different regions here, so first everything needs to be filtered correctly somehow... Any ideas how to do this?
Use DataFrameGroupBy.diff, replace the first missing values in each group with the original columns, add a prefix to the column names, and cast to integers if necessary:
print (df)
date confirmed deaths recovered region_code
0 2020-03-27 3.0 0.0 0.0 ARK
1 2020-03-27 4.0 0.0 0.0 BA
2 2020-03-27 1.0 0.0 0.0 BEL
3 2020-03-28 4.0 0.0 4.0 ARK
4 2020-03-28 6.0 0.0 0.0 BA
5 2020-03-28 1.0 0.0 0.0 BEL
6 2020-03-29 6.0 0.0 10.0 ARK
7 2020-03-29 8.0 0.0 0.0 BA
8 2020-03-29 5.0 0.0 0.0 BEL
cols = ['confirmed', 'deaths', 'recovered']
df1 = (df.groupby('region_code')[cols]
         .diff()
         .fillna(df[cols])
         .add_prefix('daily_')
         .astype(int))
print (df1)
daily_confirmed daily_deaths daily_recovered
0 3 0 0
1 4 0 0
2 1 0 0
3 1 0 4
4 2 0 0
5 0 0 0
6 2 0 6
7 2 0 0
8 4 0 0
Finally, join back to the original:
df = df.join(df1)
print (df)
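One caveat: diff works in row order, so this assumes each region's rows are already chronological. If that isn't guaranteed, a small defensive sketch is to sort before the groupby:

# diff() subtracts the previous row within each group, so rows must be
# in date order per region; sorting first makes that explicit.
df = df.sort_values(['region_code', 'date'], ignore_index=True)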

OneHotEncoder stripping headers

I am trying to build an ML model on the Titanic dataset, and while preparing it I used OneHotEncoder to create Embarked dummies; in doing so I lost my column headers.
Here is how the dataset looked before.
Pclass Sex Age SibSp Parch Fare Cabin Embarked
0 3 1 22.000000 1 0 7.2500 146 2
1 1 0 38.000000 1 0 71.2833 81 0
2 3 0 26.000000 0 0 7.9250 146 2
3 1 0 35.000000 1 0 53.1000 55 2
4 3 1 35.000000 0 0 8.0500 146 2
... ... ... ... ... ... ... ... ...
886 2 1 27.000000 0 0 13.0000 146 2
887 1 0 19.000000 0 0 30.0000 30 2
888 3 0 29.699118 1 2 23.4500 146 2
889 1 1 26.000000 0 0 30.0000 60 0
890 3 1 32.000000 0 0 7.7500 146 1
Here is the code.
ct = ColumnTransformer([('encoder', OneHotEncoder(), [7])], remainder='passthrough')
X = pd.DataFrame(ct.fit_transform(X))
X
Here is how the dataset is looking now.
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
0 1.0 0.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 3.0 1.0 22.000000 1.0 7.2500 146.0
1 0.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 38.000000 1.0 71.2833 81.0
2 1.0 0.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 3.0 0.0 26.000000 0.0 7.9250 146.0
3 1.0 0.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 1.0 0.0 35.000000 1.0 53.1000 55.0
4 1.0 0.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 3.0 1.0 35.000000 0.0 8.0500 146.0
... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ...
886 1.0 0.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 2.0 1.0 27.000000 0.0 13.0000 146.0
887 1.0 0.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 1.0 0.0 19.000000 0.0 30.0000 30.0
888 1.0 0.0 0.0 0.0 1.0 0.0 0.0 0.0 0.0 0.0 1.0 3.0 0.0 29.699118 1.0 23.4500 146.0
889 0.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 1.0 26.000000 0.0 30.0000 60.0
890 1.0 0.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 3.0 1.0 32.000000 0.0 7.7500 146.0
You can use the get_feature_names method of ColumnTransformer, provided all your transformers support that method and you've trained on a dataframe.
ct = ColumnTransformer([('encoder', OneHotEncoder(), [7])], remainder='passthrough')
X = pd.DataFrame(ct.fit_transform(X), columns=ct.get_feature_names())
X
The output of fit_transform is array-like:
X_t : {array-like, sparse matrix} of shape (n_samples, sum_n_components)
(not DataFrame-like), thus no headers. If you want headers, you'll have to name them when rebuilding the DataFrame.
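Note that get_feature_names was deprecated in scikit-learn 1.0 and later removed; on recent versions the equivalent call is get_feature_names_out. A sketch under that assumption:

# On modern scikit-learn, get_feature_names_out replaces the removed
# get_feature_names; the rest of the pattern is unchanged.
ct = ColumnTransformer([('encoder', OneHotEncoder(), [7])], remainder='passthrough')
X = pd.DataFrame(ct.fit_transform(X), columns=ct.get_feature_names_out())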

Separating strings from numerical data in a .txt file by python [duplicate]

I have a .txt file that looks like this:
08/19/93 UW ARCHIVE 100.0 1962 W IEEE 14 Bus Test Case
BUS DATA FOLLOWS 14 ITEMS
1 Bus 1 HV 1 1 3 1.060 0.0 0.0 0.0 232.4 -16.9 0.0 1.060 0.0 0.0 0.0 0.0 0
2 Bus 2 HV 1 1 2 1.045 -4.98 21.7 12.7 40.0 42.4 0.0 1.045 50.0 -40.0 0.0 0.0 0
3 Bus 3 HV 1 1 2 1.010 -12.72 94.2 19.0 0.0 23.4 0.0 1.010 40.0 0.0 0.0 0.0 0
4 Bus 4 HV 1 1 0 1.019 -10.33 47.8 -3.9 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0
5 Bus 5 HV 1 1 0 1.020 -8.78 7.6 1.6 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0
6 Bus 6 LV 1 1 2 1.070 -14.22 11.2 7.5 0.0 12.2 0.0 1.070 24.0 -6.0 0.0 0.0 0
7 Bus 7 ZV 1 1 0 1.062 -13.37 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0
8 Bus 8 TV 1 1 2 1.090 -13.36 0.0 0.0 0.0 17.4 0.0 1.090 24.0 -6.0 0.0 0.0 0
9 Bus 9 LV 1 1 0 1.056 -14.94 29.5 16.6 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.19 0
10 Bus 10 LV 1 1 0 1.051 -15.10 9.0 5.8 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0
11 Bus 11 LV 1 1 0 1.057 -14.79 3.5 1.8 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0
12 Bus 12 LV 1 1 0 1.055 -15.07 6.1 1.6 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0
13 Bus 13 LV 1 1 0 1.050 -15.16 13.5 5.8 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0
14 Bus 14 LV 1 1 0 1.036 -16.04 14.9 5.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
I need to remove the text from this file and keep only the numerical data in matrix form. I am relatively new to Python, so any kind of help will be really appreciated. Thank you.
I would suggest reading the data into a pandas DataFrame and then deleting the column with text, or creating a second frame without the text column.
Try:
data = pd.read_csv('output_list.txt', sep=" ", header=None)
data.columns = ["a", "b", "c", "etc."]
Since it is simple to do this in pandas when the data is well-formed, here is my take:
import pandas as pd
data = '''\
08/19/93 UW ARCHIVE 100.0 1962 W IEEE 14 Bus Test Case
BUS DATA FOLLOWS 14 ITEMS
1 Bus 1 HV 1 1 3 1.060 0.0 0.0 0.0 232.4 -16.9 0.0 1.060 0.0 0.0 0.0 0.0 0
2 Bus 2 HV 1 1 2 1.045 -4.98 21.7 12.7 40.0 42.4 0.0 1.045 50.0 -40.0 0.0 0.0 0
3 Bus 3 HV 1 1 2 1.010 -12.72 94.2 19.0 0.0 23.4 0.0 1.010 40.0 0.0 0.0 0.0 0
4 Bus 4 HV 1 1 0 1.019 -10.33 47.8 -3.9 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0
5 Bus 5 HV 1 1 0 1.020 -8.78 7.6 1.6 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0
6 Bus 6 LV 1 1 2 1.070 -14.22 11.2 7.5 0.0 12.2 0.0 1.070 24.0 -6.0 0.0 0.0 0
7 Bus 7 ZV 1 1 0 1.062 -13.37 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0
8 Bus 8 TV 1 1 2 1.090 -13.36 0.0 0.0 0.0 17.4 0.0 1.090 24.0 -6.0 0.0 0.0 0
9 Bus 9 LV 1 1 0 1.056 -14.94 29.5 16.6 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.19 0
10 Bus 10 LV 1 1 0 1.051 -15.10 9.0 5.8 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0
11 Bus 11 LV 1 1 0 1.057 -14.79 3.5 1.8 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0
12 Bus 12 LV 1 1 0 1.055 -15.07 6.1 1.6 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0
13 Bus 13 LV 1 1 0 1.050 -15.16 13.5 5.8 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0
'''
from io import StringIO  # pd.compat.StringIO was removed in newer pandas

fileobj = StringIO(data)
# change fileobj to a file path and sep to '\t' for a tab-separated file
df = pd.read_csv(fileobj, sep=r'\s+', header=None, skiprows=2)
df = df.loc[:,df.dtypes != 'object']
print(df)
Returns:
0 2 4 5 6 7 8 9 10 11 12 13 14 \
0 1 1 1 1 3 1.060 0.00 0.0 0.0 232.4 -16.9 0.0 1.060
1 2 2 1 1 2 1.045 -4.98 21.7 12.7 40.0 42.4 0.0 1.045
2 3 3 1 1 2 1.010 -12.72 94.2 19.0 0.0 23.4 0.0 1.010
3 4 4 1 1 0 1.019 -10.33 47.8 -3.9 0.0 0.0 0.0 0.000
4 5 5 1 1 0 1.020 -8.78 7.6 1.6 0.0 0.0 0.0 0.000
5 6 6 1 1 2 1.070 -14.22 11.2 7.5 0.0 12.2 0.0 1.070
6 7 7 1 1 0 1.062 -13.37 0.0 0.0 0.0 0.0 0.0 0.000
7 8 8 1 1 2 1.090 -13.36 0.0 0.0 0.0 17.4 0.0 1.090
8 9 9 1 1 0 1.056 -14.94 29.5 16.6 0.0 0.0 0.0 0.000
9 10 10 1 1 0 1.051 -15.10 9.0 5.8 0.0 0.0 0.0 0.000
10 11 11 1 1 0 1.057 -14.79 3.5 1.8 0.0 0.0 0.0 0.000
11 12 12 1 1 0 1.055 -15.07 6.1 1.6 0.0 0.0 0.0 0.000
12 13 13 1 1 0 1.050 -15.16 13.5 5.8 0.0 0.0 0.0 0.000
15 16 17 18 19
0 0.0 0.0 0.0 0.00 0
1 50.0 -40.0 0.0 0.00 0
2 40.0 0.0 0.0 0.00 0
3 0.0 0.0 0.0 0.00 0
4 0.0 0.0 0.0 0.00 0
5 24.0 -6.0 0.0 0.00 0
6 0.0 0.0 0.0 0.00 0
7 24.0 -6.0 0.0 0.00 0
8 0.0 0.0 0.0 0.19 0
9 0.0 0.0 0.0 0.00 0
10 0.0 0.0 0.0 0.00 0
11 0.0 0.0 0.0 0.00 0
12 0.0 0.0 0.0 0.00 0
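For the column filter itself, select_dtypes expresses the same intent a bit more directly; an equivalent sketch:

# Equivalent to df.loc[:, df.dtypes != 'object']: drop the text columns
# and keep everything pandas parsed as numeric.
df = df.select_dtypes(exclude='object')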

Pandas, create new columns based on existing with repeated count

It's a bit complicated to explain, so I'll do my best. I have a pandas DataFrame with two columns: hour (from 1 to 24) and value (corresponding to each hour). The dataset index is huge, but the hour column repeats on a 24-hour basis (from 1 to 24). I am trying to create 24 new columns, value -1, value -2, value -3 ... value -24, which for each row hold the value from 1 hour earlier, 2 hours earlier, and so on (taken from the rows above).
hour | value | value -1 | value -2 | value -3| ... | value - 24
1 10 0 0 0 0
2 11 10 0 0 0
3 12 11 10 0 0
4 13 12 11 10 0
...
24 32 31 30 29 0
1 33 32 31 30 10
2 34 33 32 31 11
and so on...
All value numbers are just for the example. As I said, there are lots of rows: not only the 24 hours of a single day, but the whole following time series from 1 to 24, over and over.
Thanks in advance and may the force be with you!
Is this what you need?
df = pd.DataFrame([[1, 10], [2, 11],
                   [3, 12], [4, 13]], columns=['hour', 'value'])
for i in range(1, 24):
    df['value -' + str(i)] = df['value'].shift(i).fillna(0)
Is this what you are looking for?
import pandas as pd

df = pd.DataFrame({'hour': list(range(24)) * 2,
                   'value': list(range(48))})

shift_cols_n = 10
for shift in range(1, shift_cols_n):
    new_columns_name = 'value - ' + str(shift)
    # Assuming that you don't have any NAs in your dataframe:
    df[new_columns_name] = df['value'].shift(shift).fillna(0)
    # A safer (and less simple) way, in case you have NAs in your dataframe.
    # .loc slicing is inclusive, so stop at shift - 1 to zero exactly the
    # first `shift` rows:
    df[new_columns_name] = df['value'].shift(shift)
    df.loc[:shift - 1, new_columns_name] = 0
print(df.head(9))
hour value value - 1 value - 2 value - 3 value - 4 value - 5 \
0 0 0 0.0 0.0 0.0 0.0 0.0
1 1 1 0.0 0.0 0.0 0.0 0.0
2 2 2 1.0 0.0 0.0 0.0 0.0
3 3 3 2.0 1.0 0.0 0.0 0.0
4 4 4 3.0 2.0 1.0 0.0 0.0
5 5 5 4.0 3.0 2.0 1.0 0.0
6 6 6 5.0 4.0 3.0 2.0 1.0
7 7 7 6.0 5.0 4.0 3.0 2.0
8 8 8 7.0 6.0 5.0 4.0 3.0
value - 6 value - 7 value - 8 value - 9
0 0.0 0.0 0.0 0.0
1 0.0 0.0 0.0 0.0
2 0.0 0.0 0.0 0.0
3 0.0 0.0 0.0 0.0
4 0.0 0.0 0.0 0.0
5 0.0 0.0 0.0 0.0
6 0.0 0.0 0.0 0.0
7 1.0 0.0 0.0 0.0
8 2.0 1.0 0.0 0.0
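Appending 24 columns one at a time works, but it can fragment the frame; a sketch of the same result built with a single concat (column names mirror the loop above):

import pandas as pd

df = pd.DataFrame({'hour': list(range(24)) * 2,
                   'value': list(range(48))})

# Build every shifted column first, then attach them all in one concat;
# the dict keys become the new column labels.
shifted = pd.concat(
    {'value - ' + str(i): df['value'].shift(i).fillna(0) for i in range(1, 25)},
    axis=1)
df = pd.concat([df, shifted], axis=1)
print(df.head())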
