Is there any method to append test data with predicted data? (Python)

I have a test dataset like array = [[5, 6, 7, 1], [5, 6, 7, 4], [5, 6, 7, 3]] and an array of predictions like array_pred = [10, 3, 4], both of equal length. I want to append each prediction to its row, giving res_array = [[5, 6, 7, 1, 10], [5, 6, 7, 4, 3], [5, 6, 7, 3, 4]]. I'm not sure what this operation is called, but that is the result I want in Python. I then have to store it in a DataFrame and generate an Excel file from that data. Is it possible?

Use numpy.hstack to join the arrays, convert to a Series and then write to Excel:
a = np.hstack((array, np.array(array_pred)[:, None]))
Or the same with numpy.column_stack (thank you #Ch3steR):
a = np.column_stack([array, array_pred])
print(a)
[[ 5  6  7  1 10]
 [ 5  6  7  4  3]
 [ 5  6  7  3  4]]
s = pd.Series(a.tolist())
print(s)
0 [5, 6, 7, 1, 10]
1 [5, 6, 7, 4, 3]
2 [5, 6, 7, 3, 4]
dtype: object
s.to_excel(file, index=False)
Or, if you need the values flattened into separate columns, convert to a DataFrame and a Series and use concat:
df = pd.concat([pd.DataFrame(array), pd.Series(array_pred)], axis=1, ignore_index=True)
print(df)
0 1 2 3 4
0 5 6 7 1 10
1 5 6 7 4 3
2 5 6 7 3 4
And then:
df.to_excel(file, index=False)
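For completeness, a minimal end-to-end sketch of the column_stack route (the filename result.xlsx is just a placeholder):

```python
import numpy as np
import pandas as pd

array = [[5, 6, 7, 1], [5, 6, 7, 4], [5, 6, 7, 3]]
array_pred = [10, 3, 4]

# Append each prediction as a new last column.
a = np.column_stack([array, array_pred])

# Flat DataFrame, one value per cell, ready for Excel export.
df = pd.DataFrame(a)
print(df.values.tolist())
# df.to_excel("result.xlsx", index=False)  # needs an Excel writer such as openpyxl
```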

Pandas data frame index

If I have a Series
s = pd.Series(1, index=[1,2,3,5,6,9,10])
but I need a standard index [1, 2, 3, 4, 5, 6, 7, 8, 9, 10], with the values at labels 4, 7 and 8 equal to zero.
So I expect the updated series will be
s = pd.Series([1,1,1,0,1,1,0,0,1,1], index=[1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
How should I update the series?
Thank you in advance!
Try this:
s.reindex(range(1,s.index.max() + 1),fill_value=0)
Output:
1 1
2 1
3 1
4 0
5 1
6 1
7 0
8 0
9 1
10 1
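The reindex call above behaves like this (a minimal runnable sketch):

```python
import pandas as pd

s = pd.Series(1, index=[1, 2, 3, 5, 6, 9, 10])

# Reindex onto the full 1..max range; labels that were missing get 0.
full = s.reindex(range(1, s.index.max() + 1), fill_value=0)
print(full.tolist())  # [1, 1, 1, 0, 1, 1, 0, 0, 1, 1]
```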

Pandas cumsum separated by comma

I have a dataframe with a column with data as:
my_column my_column_two
1,2,3 A
5,6,8 A
9,6,8 B
5,5,8 B
if I do:
data = df.astype(str).groupby('my_column_two').agg(','.join).cumsum()
data.iloc[[0]]['my_column'].apply(print)
data.iloc[[1]]['my_column'].apply(print)
I have:
1,2,3,5,6,8
1,2,3,5,6,89,6,8,5,5,8
How can I get 1,2,3,5,6,8,9,6,8,5,5,8, so that the cumulative join adds a comma before appending the previous row's values? (Notice 89 should be 8,9.)
Were you after this?
df['new'] = df.groupby('my_column_two', group_keys=False)['my_column'].apply(lambda x: x.str.split(',').cumsum())
(group_keys=False keeps the result aligned with the original index on recent pandas versions.)
my_column my_column_two new
0 1,2,3 A [1, 2, 3]
1 5,6,8 A [1, 2, 3, 5, 6, 8]
2 9,6,8 B [9, 6, 8]
3 5,5,8 B [9, 6, 8, 5, 5, 8]
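A runnable version, with the lists joined back into comma-separated strings (the new_str column is an addition, not from the original answer):

```python
import pandas as pd

df = pd.DataFrame({'my_column': ['1,2,3', '5,6,8', '9,6,8', '5,5,8'],
                   'my_column_two': ['A', 'A', 'B', 'B']})

# Split each cell into a list; cumsum on a Series of lists concatenates
# them within each group. group_keys=False keeps the original index.
df['new'] = df.groupby('my_column_two', group_keys=False)['my_column'].apply(
    lambda x: x.str.split(',').cumsum())

# Join back into a comma-separated string if that format is needed.
df['new_str'] = df['new'].str.join(',')
print(df['new_str'].tolist())
```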

How do I write code in accordance with the tasks?

A matrix is given:
1 2 3 4 5 6 7 8,
8 7 6 5 4 3 2 1,
2 3 4 5 6 7 8 9,
9 8 7 6 5 4 3 2,
1 3 5 7 9 7 5 3,
3 1 5 3 2 6 5 7,
1 7 5 9 7 3 1 5,
2 6 3 5 1 7 3 2.
Define a structure for storing the matrix.
Write code that swaps the first and last rows of the matrix.
Write the code for creating a matrix of any size, filled with zeros (the size is set via the console).
Write a code that will count how many times the number 3 occurs in the matrix.
I tried solving this, but my teacher says the following code is wrong. Where is my mistake?
matr = [[1, 2, 3, 4, 5, 6, 7, 8],
[8, 7, 6, 5, 4, 3, 2, 1],
[2, 3, 4, 5, 6, 7, 8, 9],
[9, 8, 7, 6, 5, 4, 3, 2],
[1, 3, 5, 7, 9, 7, 5, 3],
[3, 1, 5, 3, 2, 6, 5, 7],
[1, 7, 5, 9, 7, 3, 1, 5],
[2, 6, 3, 5, 1, 7, 3, 2]]
def will_swap_first_and_last_rows(matr):
    matr[len(matr) - 1], matr[0] = matr[0], matr[len(matr) - 1]
    return matr

def will_craete_matrix_of_any_size_filled_with_zeros():
    m = int(input('Enter the number of rows of the matrix '))
    n = int(input('enter the number of columns of the matrix '))
    return [[0] * m for i in range(n)]

def will_count_how_many_times_the_number_3_occurs_in_the_matrix(matr):
    s = 0
    for row in matr:
        for elem in row:
            if elem == 3:
                s += 1
    return s

print(*will_swap_first_and_last_rows(matr), sep='\n')
print(will_craete_matrix_of_any_size_filled_with_zeros())
print(will_count_how_many_times_the_number_3_occurs_in_the_matrix(matr))
Your code has rows (m) and columns (n) swapped. Do it like this:
return [[0] * n for i in range(m)]
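With the fix applied, a sketch with the size passed as parameters so it can be checked without console input (zeros_matrix is a shortened name, not from the original):

```python
def zeros_matrix(m, n):
    # m rows, each a *fresh* list of n zeros. Avoid [[0] * n] * m,
    # which would repeat the same row object m times.
    return [[0] * n for _ in range(m)]

# Reading the size from the console, as the task requires:
# m = int(input('Enter the number of rows of the matrix '))
# n = int(input('Enter the number of columns of the matrix '))
print(zeros_matrix(2, 3))  # [[0, 0, 0], [0, 0, 0]]
```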

Use of index in pandas DataFrame for groupby and aggregation

I want to aggregate a single column DataFrame and count the number of elements. However, I always end up with an empty DataFrame:
pd.DataFrame({"A":[1, 2, 3, 4, 5, 5, 5]}).groupby("A").count()
Out[46]:
Empty DataFrame
Columns: []
Index: [1, 2, 3, 4, 5]
If I add a second column, I get the desired result:
pd.DataFrame({"A":[1, 2, 3, 4, 5, 5, 5], "B":[1, 2, 3, 4, 5, 5, 5]}).groupby("A").count()
Out[45]:
B
A
1 1
2 1
3 1
4 1
5 3
Can you explain the reason for this?
Give this a shot:
import pandas as pd
print(pd.DataFrame({"A":[1, 2, 3, 4, 5, 5, 5]}).groupby("A")["A"].count())
which prints:
A
1 1
2 1
3 1
4 1
5 3
You have to select the grouped column explicitly: after groupby("A"), column A becomes the group index, so there are no remaining data columns left to count, which is why you got an empty DataFrame:
import pandas as pd
pd.DataFrame({"A":[1, 2, 3, 4, 5, 5, 5]}).groupby("A").A.count()
Output:
A
1 1
2 1
3 1
4 1
5 3
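As an aside, GroupBy.size avoids the problem entirely, since it counts rows per group rather than non-null values of a column (a minimal sketch):

```python
import pandas as pd

# size() needs no value column: it counts the rows in each group.
counts = pd.DataFrame({"A": [1, 2, 3, 4, 5, 5, 5]}).groupby("A").size()
print(counts.tolist())  # [1, 1, 1, 1, 3]
```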

Pandas drop duplicated values partially

I have a dataframe as
df=pd.DataFrame({'A':[1, 3, 3, 4, 5, 3, 3],
'B':[0, 2, 3, 4, 5, 6, 7],
'C':[7, 2, 2, 5, 7, 2, 2]})
I would like to drop duplicated values from columns A and C, but only partially: consecutive duplicates should be dropped, while later re-occurrences should stay.
If I use
df.drop_duplicates(subset=['A','C'], keep='first')
It will drop row 2, 5, 6. However, I only want to drop row 2 and 6. The desired results are like:
df=pd.DataFrame({'A':[1, 3, 4, 5, 3],
'B':[0, 2, 4, 5, 6],
'C':[7, 2, 5, 7, 2]})
Here's how you can do this, using shift:
df.loc[(df[["A", "C"]].shift() != df[["A", "C"]]).any(axis=1)].reset_index(drop=True)
Output:
A B C
0 1 0 7
1 3 2 2
2 4 4 5
3 5 5 7
4 3 6 2
This question is a nice reference.
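The same approach with the mask spelled out, as a runnable check:

```python
import pandas as pd

df = pd.DataFrame({'A': [1, 3, 3, 4, 5, 3, 3],
                   'B': [0, 2, 3, 4, 5, 6, 7],
                   'C': [7, 2, 2, 5, 7, 2, 2]})

# Keep a row only if its (A, C) pair differs from the previous row's,
# i.e. drop consecutive duplicates only.
mask = (df[["A", "C"]].shift() != df[["A", "C"]]).any(axis=1)
out = df.loc[mask].reset_index(drop=True)
print(out['B'].tolist())  # [0, 2, 4, 5, 6]
```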
You can just keep every other occurrence (the 1st, 3rd, 5th, ...) of each (A, C) pair:
df=df.loc[df.groupby(["A", "C"]).cumcount()%2==0]
Outputs:
A B C
0 1 0 7
1 3 2 2
3 4 4 5
4 5 5 7
5 3 6 2