I want to create a dataframe in Python with 24 columns (indicating 24 hours), which looks like this:
column name 0 1 2 3 ... 23
row 1 0 0 0 0 0
row 2 0 0 0 0 0
row 3 0 0 0 0 0
I would like to know how to initialize it. And in the future I may need to add a row 4, also all zeros; how can I do that? Thanks,
There's a trick here: when the DataFrame (or Series) constructor is passed a scalar as the first argument, that value is propagated:
In [11]: pd.DataFrame(0, index=np.arange(1, 4), columns=np.arange(24))
Out[11]:
0 1 2 3 4 5 6 7 8 9 ... 14 15 16 17 18 19 20 21 22 23
1 0 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0
2 0 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0
3 0 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0
[3 rows x 24 columns]
Note: np.arange is numpy's answer to python's range.
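To cover the second part of the question (adding a row 4 with all zeros later), one simple option, sketched here, is label-based assignment with `.loc`:

```python
import numpy as np
import pandas as pd

# Zero-filled frame with rows 1..3 and hour columns 0..23
df = pd.DataFrame(0, index=np.arange(1, 4), columns=np.arange(24))

# Appending a new all-zero row is just a label assignment
df.loc[4] = 0
print(df.shape)  # (4, 24)
```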
You can create a zero-filled NumPy array, convert it to a dataframe, and then add the header names:
import numpy as np
import pandas as pd

a = np.zeros(shape=(3, 24))
df = pd.DataFrame(a, columns=[f'col{i}' for i in range(1, 25)])
To set row names, assign directly to the index:
df.index = ['row 1', 'row 2', 'row 3']
if you must.
Related
I'm having trouble describing exactly what I want to achieve. I've tried looking here on stack to find others with the same problem, but am unable to find any. So I will try to describe exactly what I want and give you a sample setup code.
I would like to have a function that gives me a new column/pd.Series. This new column has boolean TRUE values (or int's) that are based on a certain condition.
The condition is as follows. There are N columns (8 in the example), each with the same name but ending in a different number, e.g. column_1, column_2, etc. The function I need is twofold:
If N is given, check each column's row and see whether it and the rows of the next N columns are also TRUE/1.
If N is NOT given, check each column's row and see whether the rows of all following columns are also TRUE/1, using the numbers as IDs to locate the columns.
import numpy as np
import pandas as pd

def get_df_series(df: pd.DataFrame, columns_ids: list, n: int = 8) -> pd.DataFrame:
    for i in columns_ids:
        # missing code here .. I don't know if this would be the way to go
        pass
    return df
def create_dataframe(numbers: list) -> pd.DataFrame:
    df = pd.DataFrame()  # empty df
    # create a column for each number, with the number as ID and random boolean values as ints
    for i in numbers:
        df[f'column_{i}'] = np.random.randint(2, size=20)
    return df
if __name__ == "__main__":
    numbers = [1, 2, 3, 4, 5, 6, 7, 8]
    df = create_dataframe(numbers=numbers)
    df = get_df_series(df=df, columns_ids=numbers, n=3)
I have some experience with Pandas dataframes and know how to create IF/ELSE things with np.select for example.
(function) select(condlist: Sequence[ArrayLike], choicelist: Sequence[ArrayLike], default: ArrayLike = ...) -> NDArray
The problem I'm running into is that I don't know how to make a conditional statement if I don't know how many columns are ahead. For example, if I want to know for column_5 if the next 3 are also true, I can hardcode this, but I have columns up to id 20 and would love to not have to hardcode everything from column_1 to column_20 if I want to know if all conditions in all those columns are true.
Now the problem is that I don't know if this is even possible. So any feedback would be appreciated. Even just giving me a hint on where to look for a way to do this.
EDIT: What I forgot to mention is that there will be other columns in between that obviously cannot be taken into account. For example, there will be main_column_1, main_column_2, main_column_3, side_column_1, side_column_2, right_column_1, main_column_3, main_column_4, etc.
The answer Corralien gave is correct, but I should have made my question clearer.
I need to be able to, say, look at main_column and for that one look ahead N columns of the same type: main_column.
Try:
n = 3
out = (df.rolling(n, min_periods=1, axis=1).sum()
.shift(-n+1, fill_value=0, axis=1).eq(n).astype(int)
.rename(columns=lambda x: 'result_' + x.split('_')[1]))
Output:
>>> out
result_1 result_2 result_3 result_4 result_5 result_6 result_7 result_8
0 1 1 1 1 1 1 0 0
1 0 0 0 0 0 0 0 0
2 0 0 0 0 0 0 0 0
3 0 0 0 0 0 0 0 0
4 0 0 0 1 0 0 0 0
5 0 0 0 0 0 0 0 0
6 0 0 0 0 0 0 0 0
7 0 0 0 0 0 0 0 0
8 0 1 1 1 0 0 0 0
9 0 0 0 0 0 1 0 0
10 0 0 0 0 0 0 0 0
11 0 0 0 0 1 0 0 0
12 0 0 0 0 0 0 0 0
13 0 0 0 1 1 0 0 0
14 0 0 0 0 0 1 0 0
15 0 0 0 0 0 0 0 0
16 0 0 0 0 0 0 0 0
17 0 0 1 0 0 0 0 0
18 0 0 1 0 0 0 0 0
19 0 0 0 0 0 0 0 0
Input:
>>> df
column_1 column_2 column_3 column_4 column_5 column_6 column_7 column_8
0 1 1 1 1 1 1 1 1
1 0 1 0 0 0 1 1 0
2 1 1 0 1 0 1 1 0
3 1 0 1 0 0 0 0 0
4 1 0 0 1 1 1 0 1
5 1 1 0 1 0 1 1 0
6 1 0 1 0 0 0 0 1
7 0 0 1 0 0 0 0 0
8 0 1 1 1 1 1 0 0
9 1 0 1 1 0 1 1 1
10 0 0 1 1 0 0 1 1
11 1 0 1 0 1 1 1 0
12 0 1 1 0 1 0 1 0
13 0 0 0 1 1 1 1 0
14 0 0 1 1 0 1 1 1
15 1 0 0 1 0 1 0 0
16 1 0 0 0 0 0 0 1
17 0 0 1 1 1 0 0 1
18 0 0 1 1 1 0 0 1
19 0 0 1 0 0 0 1 0
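Following up on the EDIT: to restrict the window to the main_column_* columns only, one sketch (the column names, data, and n below are illustrative assumptions) is to select them with `filter` first and run the same rolling logic on the transpose, which also sidesteps the deprecated `axis=1` argument:

```python
import pandas as pd

n = 3
df = pd.DataFrame({'main_column_1': [1, 1, 0],
                   'side_column_1': [1, 1, 1],
                   'main_column_2': [1, 0, 0],
                   'main_column_3': [1, 1, 0]})

# Keep only the main_column_* columns before applying the rolling logic
main = df.filter(regex=r'^main_column_\d+$')

# Transpose so the window runs across the selected columns
out = (main.T.rolling(n, min_periods=1).sum()
           .shift(-n + 1, fill_value=0).eq(n).astype(int).T
           .rename(columns=lambda c: 'result_' + c.rsplit('_', 1)[1]))
print(out)
```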
I created a dataframe using the pivot_table command. The dataframe has 351 rows and 120 columns, and looks as follows:
RY 2011 ... 2020
Month 1 2 3 4 5 6 7 8 9 10 ... 3 4 5 6 7 8 9 10 11 12
ID
AB10 0 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0
AB1286 0 0 0 0 2 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0
AB1951 0 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0
AB2 0 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0
AB2338 0 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0
Now I want to calculate the 12-month rolling sum for each ID. I wrote the following command to calculate the rolling sum:
df.groupby('ID').rolling(12,on='Month').sum()
However, it gave the following error:
ValueError: invalid on specified as Month, must be a column (of DataFrame), an Index or None
Could anyone help me in fixing the issue?
Try running that code before creating the pivot table, but make sure that you first create a datetime column with something like:
df['Date'] = pd.to_datetime(df['Year'].astype(str) + '-' + df['Month'].astype(str) + '-01')
and then:
df.groupby('ID').rolling(12,on='Date').sum()
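As a runnable sketch of that suggestion (the column names Year, Month, Value and the window size 2 are assumptions for illustration, not from the question):

```python
import pandas as pd

# Hypothetical long-format data, before pivoting: one row per ID per month
df = pd.DataFrame({'ID': ['AB10', 'AB10', 'AB10'],
                   'Year': [2011, 2011, 2011],
                   'Month': [1, 2, 3],
                   'Value': [1, 2, 3]})
df['Date'] = pd.to_datetime(df['Year'].astype(str) + '-' + df['Month'].astype(str) + '-01')

# Rolling window per ID; 'on' tells rolling which column orders the window
rolled = df[['ID', 'Date', 'Value']].groupby('ID').rolling(2, on='Date').sum()
print(rolled)
```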
What does "ID" contain? Have you tried transposing the pivot table using:
df.T.groupby('ID').rolling(12,on='Month').sum()
I'm trying to create two new columns that alternate starts and endings in a dataframe:
for 1 start there is at most 1 ending
the last start can have no corresponding ending
there are no ends before the first start
a succession of two or more starts, or two or more ends, isn't possible
How could I do that without using any loop, i.e. with numpy or pandas functions only?
The code to create the dataframe :
df = pd.DataFrame({ 'start':[0,0,1,0,1,0,1,0,0,0,0,1,0,1,0,0,0,1,0],
'end':[1,0,0,0,0,0,0,1,0,1,0,0,0,0,0,1,0,0,0]})
The rendered dataframe and the result I want:
start end start wanted end wanted
0 0 1 0 0
1 0 0 0 0
2 1 0 1 0
3 0 0 0 0
4 1 0 0 0
5 0 0 0 0
6 1 0 0 0
7 0 1 0 1
8 0 0 0 0
9 0 1 0 0
10 0 0 0 0
11 1 0 1 0
12 0 0 0 0
13 1 0 0 0
14 0 0 0 0
15 0 1 0 1
16 0 0 0 0
17 1 0 1 0
18 0 0 0 0
I don't know how to do this with pure pandas/numpy but here's a simple for loop that gives your expected output. I tested it with a pandas dataframe 50,000 times the size of your example data (so around 1 million rows in total) and it runs in roughly 1 second:
import pandas as pd
df = pd.DataFrame({ 'start':[0,0,1,0,1,0,1,0,0,0,0,1,0,1,0,0,0,1,0],
'end':[1,0,0,0,0,0,0,1,0,1,0,0,0,0,0,1,0,0,0]})
start = False
start_wanted = []
end_wanted = []
for s, e in zip(df['start'], df['end']):
    if start:
        if e == 1:
            start = False
        start_wanted.append(0)
        end_wanted.append(e)
    else:
        if s == 1:
            start = True
        start_wanted.append(s)
        end_wanted.append(0)
df['start_wanted'] = start_wanted
df['end_wanted'] = end_wanted
print(df)
Output:
end start start_wanted end_wanted
0 1 0 0 0
1 0 0 0 0
2 0 1 1 0
3 0 0 0 0
4 0 1 0 0
5 0 0 0 0
6 0 1 0 0
7 1 0 0 1
8 0 0 0 0
9 1 0 0 0
10 0 0 0 0
11 0 1 1 0
12 0 0 0 0
13 0 1 0 0
14 0 0 0 0
15 1 0 0 1
16 0 0 0 0
17 0 1 1 0
18 0 0 0 0
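For reference, here is a loop-free sketch of the same logic using only pandas operations (this is an alternative technique, not from the original answer): encode each row as an event, collapse runs of repeated events, and drop any end occurring before the first start.

```python
import pandas as pd

df = pd.DataFrame({'start': [0,0,1,0,1,0,1,0,0,0,0,1,0,1,0,0,0,1,0],
                   'end':   [1,0,0,0,0,0,0,1,0,1,0,0,0,0,0,1,0,0,0]})

sig = df['start'] - df['end']               # +1 marks a start row, -1 an end row
events = sig[sig != 0]                      # the ordered sequence of events
events = events[events.ne(events.shift())]  # collapse runs of repeated starts/ends
if not events.empty and events.iloc[0] == -1:
    events = events.iloc[1:]                # drop an end occurring before any start

df['start_wanted'] = 0
df['end_wanted'] = 0
df.loc[events[events == 1].index, 'start_wanted'] = 1
df.loc[events[events == -1].index, 'end_wanted'] = 1
print(df[['start_wanted', 'end_wanted']])
```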
Seems like an easy question but I'm running into an odd error. I have a large dataframe with 24+ columns that all contain 1s or 0s. I wish to concatenate each field to create a binary key that'll act as a signature.
However, when the number of columns exceeds 12, the whole process falls apart.
import numpy as np
import pandas as pd

a = np.zeros(shape=(3, 12))
df = pd.DataFrame(a)
df = df.astype(int)  # This converts each 0.0 into just 0
df[2] = 1  # Changes one column to all 1s
#result
0 1 2 3 4 5 6 7 8 9 10 11
0 0 0 1 0 0 0 0 0 0 0 0 0
1 0 0 1 0 0 0 0 0 0 0 0 0
2 0 0 1 0 0 0 0 0 0 0 0 0
Concatenating function...
df['new'] = df.astype(str).sum(1).astype(int).astype(str)  # Concatenate
df['new'] = df['new'].apply('{0:0>12}'.format)  # Pad leading zeros
# result
0 1 2 3 4 5 6 7 8 9 10 11 new
0 0 0 1 0 0 0 0 0 0 0 0 0 001000000000
1 0 0 1 0 0 0 0 0 0 0 0 0 001000000000
2 0 0 1 0 0 0 0 0 0 0 0 0 001000000000
This is good. However, if I increase the number of columns to 13, I get...
a = np.zeros(shape=(3,13))
# ...same intermediate steps as above...
0 1 2 3 4 5 6 7 8 9 10 11 12 new
0 0 0 1 0 0 0 0 0 0 0 0 0 0 00-2147483648
1 0 0 1 0 0 0 0 0 0 0 0 0 0 00-2147483648
2 0 0 1 0 0 0 0 0 0 0 0 0 0 00-2147483648
Why am I getting -2147483648? I was expecting 0010000000000
Any help is appreciated!
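A likely explanation: -2147483648 is the minimum 32-bit integer. On platforms where the default integer is 32-bit (notably Windows builds of NumPy), `astype(int)` casts to int32, and the 13-digit string '0010000000000' parses to 10,000,000,000, which exceeds the int32 maximum of 2,147,483,647 and wraps around; with 12 columns the value 1,000,000,000 still fits. The round-trip through int is also what strips the leading zeros in the first place, so concatenating the string digits directly avoids both problems:

```python
import numpy as np
import pandas as pd

a = np.zeros(shape=(3, 13))
df = pd.DataFrame(a).astype(int)
df[2] = 1

# Concatenate the string digits directly: no round-trip through int,
# so there is nothing to overflow and the leading zeros are preserved
df['new'] = df.astype(str).sum(axis=1)
print(df['new'].iloc[0])  # '0010000000000'
```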
I'm trying to modify my data frame so that, within a set of label-encoded columns, the last 1 occurring in each row is converted to 0. For example, I have this data frame, with the top row being the labels and the first column the index:
df
1 2 3 4 5 6 7 8 9 10
0 0 1 0 0 0 0 0 0 1 1
1 0 0 0 1 0 0 0 0 0 0
2 0 0 0 0 0 0 0 0 1 0
Columns 1-10 are the ones that have been encoded. What I want to convert this data frame to, without changing anything else is:
1 2 3 4 5 6 7 8 9 10
0 0 1 0 0 0 0 0 0 1 0
1 0 0 0 0 0 0 0 0 0 0
2 0 0 0 0 0 0 0 0 0 0
So the last 1 occurring in each row should be converted to 0. I was thinking of using the last_valid_index method, but that would take in the other remaining columns and change those as well, which I don't want. Any help is appreciated.
You can use cumsum to build a boolean mask, and use it to set the last 1 in each row to zero.
v = df.cumsum(axis=1)
df[v.lt(v.max(axis=1), axis=0)].fillna(0, downcast='infer')
1 2 3 4 5 6 7 8 9 10
0 0 1 0 0 0 0 0 0 1 0
1 0 0 0 0 0 0 0 0 0 0
2 0 0 0 0 0 0 0 0 0 0
Another similar option is reversing before calling cumsum; this can be done in a single line.
df[~df.iloc[:, ::-1].cumsum(1).le(1)].fillna(0, downcast='infer')
1 2 3 4 5 6 7 8 9 10
0 0 1 0 0 0 0 0 0 1 0
1 0 0 0 0 0 0 0 0 0 0
2 0 0 0 0 0 0 0 0 0 0
If you have more columns, just apply these operations to the slice, then assign back.
u = df.iloc[:, :10]
df[u.columns] = u[~u.iloc[:, ::-1].cumsum(1).le(1)].fillna(0, downcast='infer')
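For completeness, the reversed-cumsum approach can be run end-to-end like this (a sketch; `astype(int)` stands in for the `downcast='infer'` argument, which newer pandas versions deprecate):

```python
import pandas as pd

df = pd.DataFrame([[0, 1, 0, 0, 0, 0, 0, 0, 1, 1],
                   [0, 0, 0, 1, 0, 0, 0, 0, 0, 0],
                   [0, 0, 0, 0, 0, 0, 0, 0, 1, 0]],
                  columns=range(1, 11))

# Reversed cumsum: for each row, cells at or right of the last 1 have cumsum <= 1
mask = ~df.iloc[:, ::-1].cumsum(axis=1).le(1)

# Masked cells become NaN, then 0; this zeroes exactly the last 1 in each row
out = df[mask].fillna(0).astype(int)
print(out)
```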