I am trying to build an object classification model, but when I try to print the classification report, it returns a ValueError.
ValueError: Classification metrics can't handle a mix of multiclass and continuous-multioutput targets
This is my current code:
train_size = int(len(df) * 0.7)
train_text = df['cleansed_text'][:train_size]
train_cat = df['category'][:train_size]
test_text = df['cleansed_text'][train_size:]
test_cat = df['category'][train_size:]
max_words = 2500
tokenize = text.Tokenizer(num_words=max_words, char_level=False)
tokenize.fit_on_texts(train_text)
x_train = tokenize.texts_to_matrix(train_text)
x_test = tokenize.texts_to_matrix(test_text)
encoder = LabelEncoder()
encoder.fit(train_cat)
y_train = encoder.transform(train_cat)
y_test = encoder.transform(test_cat)
num_classes = np.max(y_train) + 1
y_train = utils.to_categorical(y_train, num_classes)
y_test = utils.to_categorical(y_test, num_classes)
model = Sequential()
model.add(Dense(256, input_shape=(max_words,)))
model.add(Dropout(0.5))
model.add(Dense(256,))
model.add(Dropout(0.5))
model.add(Activation('relu'))
model.add(Dense(num_classes, activation='softmax'))
model.compile(loss='categorical_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])
model.summary()
history = model.fit(x_train, y_train,
                    batch_size=32,
                    epochs=10,
                    verbose=1,
                    validation_split=0.1)
from sklearn.metrics import classification_report
y_test_arg=np.argmax(y_test,axis=1)
Y_pred = np.argmax(model.predict(x_test),axis=1)
print('Confusion Matrix')
print(confusion_matrix(y_test_arg, Y_pred))
print(classification_report(y_test_arg, y_pred, labels=[1,2,3,4,5]))
However, when I attempt to print the classification report, I run into this error:
21/21 [==============================] - 0s 2ms/step
Confusion Matrix
[[138 1 6 0 2]
[ 0 102 3 0 2]
[ 3 2 121 1 2]
[ 1 0 1 157 0]
[ 0 3 0 0 123]]
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Input In [56], in <cell line: 8>()
5 print('Confusion Matrix')
6 print(confusion_matrix(y_test_arg, Y_pred))
----> 8 print(classification_report(y_test_arg, y_pred, labels=[1,2,3,4,5]))
File ~\anaconda3\lib\site-packages\sklearn\metrics\_classification.py:2110, in classification_report(y_true, y_pred, labels, target_names, sample_weight, digits, output_dict, zero_division)
1998 def classification_report(
1999 y_true,
2000 y_pred,
(...)
2007 zero_division="warn",
2008 ):
2009 """Build a text report showing the main classification metrics.
2010
2011 Read more in the :ref:`User Guide <classification_report>`.
(...)
2107 <BLANKLINE>
2108 """
-> 2110 y_type, y_true, y_pred = _check_targets(y_true, y_pred)
2112 if labels is None:
2113 labels = unique_labels(y_true, y_pred)
File ~\anaconda3\lib\site-packages\sklearn\metrics\_classification.py:93, in _check_targets(y_true, y_pred)
90 y_type = {"multiclass"}
92 if len(y_type) > 1:
---> 93 raise ValueError(
94 "Classification metrics can't handle a mix of {0} and {1} targets".format(
95 type_true, type_pred
96 )
97 )
99 # We can't have more than one value on y_type => The set is no more needed
100 y_type = y_type.pop()
ValueError: Classification metrics can't handle a mix of multiclass and continuous-multioutput targets
y_test_arg
array([3, 3, 1, 0, 4, 1, 0, 4, 3, 4, 1, 1, 2, 2, 3, 0, 0, 4, 1, 3, 2, 0,
4, 1, 2, 3, 1, 2, 2, 4, 3, 2, 0, 2, 1, 4, 3, 2, 1, 1, 0, 3, 4, 4,
3, 1, 4, 2, 4, 3, 2, 2, 3, 1, 3, 2, 3, 4, 1, 3, 1, 0, 0, 1, 1, 1,
4, 3, 0, 0, 2, 2, 0, 2, 1, 3, 3, 4, 2, 3, 0, 3, 0, 4, 3, 3, 0, 1,
3, 3, 4, 3, 0, 2, 0, 1, 4, 1, 2, 0, 1, 2, 1, 2, 2, 0, 3, 3, 3, 4,
4, 3, 2, 1, 4, 3, 1, 0, 1, 2, 0, 3, 4, 0, 3, 2, 0, 1, 1, 1, 2, 1,
2, 1, 3, 1, 3, 2, 2, 0, 2, 4, 3, 4, 3, 0, 2, 4, 1, 1, 2, 1, 2, 3,
3, 2, 0, 4, 3, 2, 2, 1, 3, 2, 2, 0, 4, 4, 0, 4, 3, 3, 0, 2, 0, 4,
3, 4, 2, 1, 3, 0, 3, 1, 4, 4, 3, 2, 3, 0, 3, 0, 3, 3, 1, 1, 0, 4,
4, 0, 4, 0, 0, 3, 3, 2, 3, 4, 3, 4, 3, 3, 0, 0, 4, 3, 0, 4, 4, 2,
3, 0, 1, 1, 4, 2, 3, 3, 4, 0, 4, 1, 1, 2, 2, 0, 1, 3, 1, 1, 0, 3,
2, 4, 0, 3, 1, 4, 2, 2, 3, 3, 0, 0, 0, 0, 0, 1, 0, 2, 2, 4, 4, 1,
2, 1, 0, 2, 3, 3, 0, 4, 0, 4, 3, 0, 0, 2, 3, 3, 2, 2, 1, 1, 2, 0,
2, 2, 0, 4, 2, 2, 2, 2, 2, 1, 1, 4, 2, 3, 2, 3, 4, 3, 3, 3, 1, 4,
1, 4, 3, 4, 3, 3, 1, 1, 0, 1, 1, 2, 0, 3, 4, 4, 2, 0, 3, 0, 1, 3,
2, 1, 3, 3, 0, 2, 4, 4, 0, 0, 3, 2, 1, 3, 3, 2, 1, 4, 3, 1, 0, 2,
3, 2, 4, 1, 3, 2, 0, 1, 2, 1, 2, 3, 2, 0, 0, 2, 0, 4, 3, 0, 1, 0,
3, 3, 1, 4, 2, 4, 2, 2, 3, 3, 3, 0, 4, 1, 0, 3, 0, 3, 0, 4, 0, 0,
0, 0, 3, 3, 3, 0, 0, 1, 0, 0, 0, 3, 3, 3, 4, 0, 3, 3, 3, 0, 1, 4,
4, 4, 2, 0, 0, 4, 0, 4, 3, 3, 2, 2, 2, 3, 3, 2, 2, 4, 0, 3, 3, 3,
3, 0, 3, 0, 0, 0, 0, 3, 2, 3, 4, 4, 3, 4, 0, 1, 0, 3, 0, 4, 4, 2,
1, 0, 1, 0, 4, 2, 1, 2, 1, 1, 4, 0, 4, 4, 0, 2, 3, 1, 0, 2, 1, 0,
4, 3, 4, 2, 3, 2, 0, 2, 2, 0, 0, 0, 4, 2, 0, 2, 0, 1, 2, 3, 2, 2,
3, 1, 4, 4, 0, 4, 3, 0, 0, 2, 3, 4, 4, 4, 3, 1, 3, 2, 0, 2, 2, 1,
4, 0, 4, 3, 1, 1, 3, 0, 1, 4, 4, 3, 1, 0, 2, 2, 2, 4, 4, 0, 2, 0,
2, 2, 1, 3, 4, 0, 4, 1, 4, 4, 3, 2, 3, 3, 2, 1, 1, 0, 2, 2, 3, 0,
0, 4, 0, 4, 4, 3, 0, 2, 3, 0, 0, 3, 4, 3, 4, 1, 3, 3, 1, 0, 4, 3,
3, 2, 4, 0, 2, 3, 3, 2, 1, 4, 4, 4, 0, 3, 1, 1, 4, 0, 2, 4, 3, 3,
4, 4, 2, 0, 3, 1, 1, 3, 1, 4, 4, 0, 0, 0, 3, 3, 4, 3, 0, 4, 0, 0,
3, 0, 2, 0, 0, 4, 0, 4, 2, 4, 1, 2, 4, 1, 3, 2, 1, 0, 4, 0, 4, 1,
4, 3, 0, 0, 2, 1, 2, 3], dtype=int64)
y_pred
array([[2.6148611e-05, 1.2884392e-06, 8.0136197e-06, 9.9993646e-01,
2.8027451e-05],
[1.1888630e-08, 1.9621881e-07, 6.0117927e-08, 9.9999917e-01,
4.2087538e-07],
[2.4368815e-06, 9.9999702e-01, 2.0465748e-07, 9.2730332e-08,
2.5044619e-07],
...,
[8.7212893e-04, 9.9891293e-01, 7.5106349e-05, 7.0842376e-05,
6.8954141e-05],
[1.2511186e-02, 5.9731454e-05, 9.8512655e-01, 3.0246837e-04,
2.0000227e-03],
[5.9550672e-07, 7.1766672e-06, 2.0012515e-06, 9.9999011e-01,
1.1376539e-07]], dtype=float32)
Your problem is caused by continuous-multioutput target values being passed to classification_report. I think the error relates to the code below:
y_test_arg = np.argmax(y_test, axis=1)
Y_pred = np.argmax(model.predict(x_test), axis=1)
These argmax calls already produce discrete class labels, but the classification_report call then passes y_pred (the raw softmax probability matrix) rather than the argmax-ed Y_pred, so sklearn sees continuous multi-output values. It would help if you used discrete class labels for your predictions (the argmax-ed Y_pred, or rounded probabilities) when calculating the classification_report.
You can see this related question.
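For reference, here is a minimal sketch of that fix, assuming the model, x_test, and one-hot y_test from the question: collapse both the labels and the predicted probabilities to integer class indices before calling the metric functions.
import numpy as np
from sklearn.metrics import classification_report, confusion_matrix

# Collapse one-hot labels and softmax probabilities to integer class indices
y_test_arg = np.argmax(y_test, axis=1)
y_pred_arg = np.argmax(model.predict(x_test), axis=1)

print('Confusion Matrix')
print(confusion_matrix(y_test_arg, y_pred_arg))
# Pass the argmax-ed predictions, not the raw probability matrix;
# LabelEncoder produced classes 0-4, so those are the label values
print(classification_report(y_test_arg, y_pred_arg, labels=[0, 1, 2, 3, 4]))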
I am using ImageDataGenerator (tensorflow version 2.5.0) to load a number of jpg files that I am using for a classification system. I have specified class_mode='categorical'. My images are originally RGB, but even though I am converting them to greyscale, I don't think that should matter. However, when I call train_set.classes, the data I get is not one-hot encoded; it is sparse numerical data. Here is my ImageDataGenerator call:
def preprocessing_function(image):
    neg = 1 - image
    return neg
#image_path = sys.argv[1]
image_path = ''
train_datagen = ImageDataGenerator(
    rescale=1./255,
    shear_range=0.2,
    zoom_range=0.2,
    vertical_flip=True,
    width_shift_range=0.2,
    height_shift_range=0.2,
    horizontal_flip=True,
    preprocessing_function=preprocessing_function)

train_set = train_datagen.flow_from_directory(
    os.path.join(image_path, 'endo_jpg/endo_256_2021_08_05/Training'),
    target_size=(100,100),
    batch_size=batch,
    class_mode='categorical',
    color_mode='grayscale')
Upon calling the flow_from_directory method, I am returned what I expect:
Found 625 images belonging to 4 classes.
Calling train_set.classes returns a long list of integers, not one-hot encoded data:
array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3])
I can force the labels to be one-hot encoded by using:
train_set.classes = tensorflow.keras.utils.to_categorical(train_set.classes)
but then I can't train with the data generator.
I think there is a problem with my specifying class_mode='categorical', but I have no idea why. I followed the example in the documentation (here), but calling flow_from_directory with class_mode='categorical' still returns sparse labels.
Since you are using class_mode='categorical', you don't have to manually convert the labels to one-hot encoded vectors using to_categorical().
The generator will return the labels as categorical (one-hot) vectors automatically.
Simply calling train_set[0] clearly shows the images and the labels; the printed labels are one-hot encoded.
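As a quick sanity check (a sketch assuming the train_set generator defined in the question), you can pull one batch and inspect its shapes. Note that train_set.classes is only the generator's internal per-file list of integer class indices; the batches it yields are what carry the one-hot labels:
# Fetch one batch from the generator
images, labels = train_set[0]

print(images.shape)   # (batch, 100, 100, 1) -> grayscale 100x100 images
print(labels.shape)   # (batch, 4)           -> one one-hot row per image
print(labels[0])      # e.g. [0. 1. 0. 0.]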
How can I add annotations (in a particular shape) to a PDF?
I want to be able to control:
the link target
the color
the shape of the link annotation
the location of the link annotation
Disclaimer: I am the author of the library being used in this answer
To showcase this behaviour, this example is going to re-create a shape using "pixel art".
This array, together with these colors, defines the shape of Super Mario:
m = [
[0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0],
[0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0],
[0, 0, 0, 2, 2, 2, 3, 3, 2, 3, 0, 0, 0, 0],
[0, 0, 2, 3, 2, 3, 3, 3, 2, 3, 3, 3, 0, 0],
[0, 0, 2, 3, 2, 2, 3, 3, 3, 2, 3, 3, 3, 0],
[0, 0, 2, 2, 3, 3, 3, 3, 2, 2, 2, 2, 0, 0],
[0, 0, 0, 0, 3, 3, 3, 3, 3, 3, 3, 0, 0, 0],
[0, 0, 0, 1, 1, 4, 1, 1, 1, 1, 1, 0, 0, 0],
[0, 0, 1, 1, 1, 4, 1, 1, 4, 1, 1, 1, 0, 0],
[0, 1, 1, 1, 1, 4, 4, 4, 4, 1, 1, 1, 1, 0],
[0, 3, 3, 1, 4, 5, 4, 4, 5, 4, 1, 3, 3, 0],
[0, 3, 3, 3, 4, 4, 4, 4, 4, 4, 3, 3, 3, 0],
[0, 3, 3, 4, 4, 4, 4, 4, 4, 4, 4, 3, 3, 0],
[0, 0, 0, 4, 4, 4, 0, 0, 4, 4, 4, 0, 0, 0],
[0, 0, 2, 2, 2, 0, 0, 0, 0, 2, 2, 2, 0, 0],
[0, 2, 2, 2, 2, 0, 0, 0, 0, 2, 2, 2, 2, 0],
]
c = [
None,
X11Color("Red"),
X11Color("Black"),
X11Color("Tan"),
X11Color("Blue"),
X11Color("White"),
]
To manipulate the PDF, I am going to use pText.
First we are going to read an existing PDF:
# attempt to read PDF
doc = None
with open("boring-input.pdf", "rb") as in_file_handle:
    print("\treading (1) ..")
    doc = PDF.loads(in_file_handle)
Then we are going to add the annotations, using the array indices as references (and keeping in mind that the PDF coordinate system starts at the bottom left):
# add annotation
pixel_size = 2
for i in range(0, len(m)):
    for j in range(0, len(m[i])):
        if m[i][j] == 0:
            continue
        x = pixel_size * j
        y = pixel_size * (len(m) - i)
        doc.get_page(0).append_link_annotation(
            page=Decimal(0),
            color=c[m[i][j]],
            location_on_page="Fit",
            rectangle=(
                Decimal(x),
                Decimal(y),
                Decimal(x + pixel_size),
                Decimal(y + pixel_size),
            ),
        )
Then we store the output PDF:
# attempt to store PDF
with open("its-a-me.pdf", "wb") as out_file_handle:
    PDF.dumps(out_file_handle, doc)
This is a screenshot of Okular opening the PDF:
For example:
data = [[3, 0, 1, 1, 1, 0, 2, 1, 2, 3],
[0, 5, 3, 2, 2, 1, 1, 1, 3, 0],
[1, 3, 5, 3, 2, 1, 1, 1, 2, 1],
[1, 2, 3, 4, 1, 1, 2, 1, 1, 1],
[1, 2, 2, 1, 4, 0, 2, 2, 2, 1],
[0, 1, 1, 1, 0, 1, 0, 0, 0, 0],
[2, 1, 1, 2, 2, 0, 4, 3, 2, 2],
[1, 1, 1, 1, 2, 0, 3, 3, 1, 1],
[2, 3, 2, 1, 2, 0, 2, 1, 5, 2],
[3, 0, 1, 1, 1, 0, 2, 1, 2, 4]]
I want to print the largest number in the nested list [2, 3, 2, 1, 2, 0, 2, 1, 5, 2], which is 5, located at index [8][8].
I also want to print the index of the nested list it was found at.
This should help you:
data = [[3, 0, 1, 1, 1, 0, 2, 1, 2, 3], [0, 5, 3, 2, 2, 1, 1, 1, 3, 0], [1, 3, 5, 3, 2, 1, 1, 1, 2, 1], [1, 2, 3, 4, 1, 1, 2, 1, 1, 1], [1, 2, 2, 1, 4, 0, 2, 2, 2, 1], [0, 1, 1, 1, 0, 1, 0, 0, 0, 0], [2, 1, 1, 2, 2, 0, 4, 3, 2, 2], [1, 1, 1, 1, 2, 0, 3, 3, 1, 1], [2, 3, 2, 1, 2, 0, 2, 1, 5, 2], [3, 0, 1, 1, 1, 0, 2, 1, 2, 4]]
for lst in data:
    maximum = max(lst)
    index = [data.index(lst), lst.index(maximum)]
    print(f"Largest Number = {maximum} , Index = [{index[0]}][{index[1]}]")
Output:
Largest Number = 3 , Index = [0][0]
Largest Number = 5 , Index = [1][1]
Largest Number = 5 , Index = [2][2]
Largest Number = 4 , Index = [3][3]
Largest Number = 4 , Index = [4][4]
Largest Number = 1 , Index = [5][1]
Largest Number = 4 , Index = [6][6]
Largest Number = 3 , Index = [7][6]
Largest Number = 5 , Index = [8][8]
Largest Number = 4 , Index = [9][9]
The above answer is good. However, if you wish to use NumPy, you can use the following snippet, which also generalizes to the case where there are multiple maximum elements.
import numpy as np
data = [[3, 0, 1, 1, 1, 0, 2, 1, 2, 3],
[0, 5, 3, 2, 2, 1, 1, 1, 3, 0],
[1, 3, 5, 3, 2, 1, 1, 1, 2, 1],
[1, 2, 3, 4, 1, 1, 2, 1, 1, 1],
[1, 2, 2, 1, 4, 0, 2, 2, 2, 1],
[0, 1, 1, 1, 0, 1, 0, 0, 0, 0],
[2, 1, 1, 2, 2, 0, 4, 3, 2, 2],
[1, 1, 1, 1, 2, 0, 3, 3, 1, 1],
[2, 3, 2, 1, 2, 0, 2, 1, 5, 2],
[3, 0, 1, 1, 1, 0, 2, 1, 2, 4]]
# Convert the nested list to a NumPy array so the array operations below work
data = np.array(data)

# For max value across the matrix
indices = np.where(data == data.max())
max_indices = list(zip(indices[0], indices[1]))
print(max_indices)
[(1, 1), (2, 2), (8, 8)]
# If you want it for each row
max_args = data.max(axis=1)
result = [(i,j,np.where(data[j]==i)[0][0]) for j,i in enumerate(max_args)]
print(result)
# Format (number,row,col)
[(3, 0, 0),
(5, 1, 1),
(5, 2, 2),
(4, 3, 3),
(4, 4, 4),
(1, 5, 1),
(4, 6, 6),
(3, 7, 6),
(5, 8, 8),
(4, 9, 9)]
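For completeness, if you only need the single largest value in the whole matrix and one position for it, a plain-Python sketch (assuming the data list from the question) could look like the following; note that 5 also occurs at [1][1] and [2][2], so this reports the first occurrence:
# Scan the nested list for the overall maximum and its (row, column) position
best_val, best_pos = None, None
for i, row in enumerate(data):
    for j, val in enumerate(row):
        if best_val is None or val > best_val:
            best_val, best_pos = val, (i, j)

print(f"Largest Number = {best_val} , Index = [{best_pos[0]}][{best_pos[1]}]")
# Largest Number = 5 , Index = [1][1]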
I am trying to use classification_report from sklearn.metrics:
sklearn.metrics.classification_report(y_true, y_pred, labels=None, target_names=None, sample_weight=None, digits=2, output_dict=False)
As input for the predictions and labels, I have one list each, with the following form.
For the predictions:
[array([0, 0, 0, 3, 0, 3, 2, 2, 1, 1, 2, 0, 2, 3, 0, 2, 2, 2, 2, 3, 3, 2,
2, 2, 2, 2, 2, 1, 0, 3, 2, 2, 0, 2, 2, 3, 2, 0, 0, 0, 0, 0, 2, 2,
2, 1, 0, 0, 0, 2, 2, 0, 2, 0, 2, 1, 0, 2, 2, 3, 0, 2, 0, 2])]
For the true labels:
[array([2, 3, 3, 2, 3, 3, 2, 3, 2, 2, 3, 3, 3, 2, 3, 2, 3, 2, 3, 2, 3, 3,
2, 2, 3, 2, 2, 2, 2, 2, 2, 2, 3, 2, 2, 3, 3, 2, 2, 2, 2, 2, 3, 2,
2, 2, 3, 3, 3, 2, 2, 2, 2, 3, 3, 3, 2, 2, 3, 2, 2, 2, 2, 2])]
The sklearn function above needs a simple list; the array produces an error:
ValueError: multiclass-multioutput is not supported
I already tried .tolist(), but it didn't work for me.
I am looking for a way to convert my list of arrays into a simple list.
Thanks for your help.
Each of those objects is already a list that contains a single element, which is an array.
To access the first element and convert it to a list, try something like:
from numpy import array

x = [array([2, 3, 3, 2, 3, 3, 2, 3, 2, 2, 3, 3, 3, 2, 3, 2, 3, 2, 3, 2, 3, 3,
2, 2, 3, 2, 2, 2, 2, 2, 2, 2, 3, 2, 2, 3, 3, 2, 2, 2, 2, 2, 3, 2,
2, 2, 3, 3, 3, 2, 2, 2, 2, 3, 3, 3, 2, 2, 3, 2, 2, 2, 2, 2])]
x_list = list(x[0])
x_list will then contain the array's contents as a plain list.
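For example, a small sketch assuming the label and prediction lists from the question are stored in variables named true and pred (hypothetical names), doing the same unwrapping for both before calling the metric:
from sklearn.metrics import classification_report

y_true_list = list(true[0])   # unwrap the single array and convert to a plain list
y_pred_list = list(pred[0])
print(classification_report(y_true_list, y_pred_list))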
Way 1: Just index the lists e.g. pred[0]
Code:
from numpy import array
from sklearn.metrics import classification_report

pred = [array([0, 0, 0, 3, 0, 3, 2, 2, 1, 1, 2, 0, 2, 3, 0, 2, 2, 2, 2, 3, 3, 2,
2, 2, 2, 2, 2, 1, 0, 3, 2, 2, 0, 2, 2, 3, 2, 0, 0, 0, 0, 0, 2, 2,
2, 1, 0, 0, 0, 2, 2, 0, 2, 0, 2, 1, 0, 2, 2, 3, 0, 2, 0, 2])]
test = [array([0, 0, 0, 3, 0, 3, 2, 2, 1, 1, 2, 0, 2, 3, 0, 2, 2, 2, 2, 3, 3, 2,
2, 2, 2, 2, 2, 1, 0, 3, 2, 2, 0, 2, 2, 3, 2, 0, 0, 0, 0, 0, 2, 2,
2, 1, 0, 0, 0, 2, 2, 0, 2, 0, 2, 1, 0, 2, 2, 3, 0, 2, 0, 2])]
classification_report(test[0], pred[0])
Way 2:
Reformat the data to match sklearn's requirements:
pred = [array([0, 0, 0, 3, 0, 3, 2, 2, 1, 1, 2, 0, 2, 3, 0, 2, 2, 2, 2, 3, 3, 2,
2, 2, 2, 2, 2, 1, 0, 3, 2, 2, 0, 2, 2, 3, 2, 0, 0, 0, 0, 0, 2, 2,
2, 1, 0, 0, 0, 2, 2, 0, 2, 0, 2, 1, 0, 2, 2, 3, 0, 2, 0, 2])]
test = [array([0, 0, 0, 3, 0, 3, 2, 2, 1, 1, 2, 0, 2, 3, 0, 2, 2, 2, 2, 3, 3, 2,
2, 2, 2, 2, 2, 1, 0, 3, 2, 2, 0, 2, 2, 3, 2, 0, 0, 0, 0, 0, 2, 2,
2, 1, 0, 0, 0, 2, 2, 0, 2, 0, 2, 1, 0, 2, 2, 3, 0, 2, 0, 2])]
flat_pred = [item for sublist in pred for item in sublist]
flat_test = [item for sublist in test for item in sublist]
print(classification_report(flat_test, flat_pred))
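As a side note (a sketch assuming the same pred and test lists as above), NumPy can do the flattening in one step with np.concatenate; remember that classification_report expects the true labels first:
import numpy as np
from sklearn.metrics import classification_report

# Concatenate the arrays inside each list into one flat 1-D array
flat_pred = np.concatenate(pred)
flat_test = np.concatenate(test)

# y_true first, then y_pred
print(classification_report(flat_test, flat_pred))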