I'm facing an issue when opening a .tif file with rasterio using the code below.
import rasterio

fp = 'image.tif'
image = rasterio.open(fp)
print(image.read())
When I print the contents of the image, I get this:
[[[0 0 0 ... 0 0 0]
[0 0 0 ... 0 0 0]
[0 0 0 ... 0 0 0]
...
[0 0 0 ... 0 0 0]
[0 0 0 ... 0 0 0]
[0 0 0 ... 0 0 0]]]
I verified all the values and they are all 0. However, when I drag the image into QGIS, I can view it and confirm that it contains values ranging from 101 to 122.
QGIS image
Any idea how to read the image and get these 101 to 122 values as a numpy array?
Here's a link to the image in question
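In case it helps with debugging, one quick check on the same file is to inspect the dataset's metadata and do a masked read, to see whether the values are hiding behind a nodata mask (a minimal diagnostic sketch; 'image.tif' stands in for the linked file):
import rasterio

with rasterio.open('image.tif') as src:
    print(src.profile)               # dtype, nodata value, band count, CRS
    band = src.read(1, masked=True)  # first band with the nodata mask applied
    print(band.min(), band.max())    # should show the 101 to 122 range if it is there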
I have a collection of one-hot vectors (in numpy)
[[0 0 0 ... 0 0 0] [0 1 0 ... 0 0 0] [0 1 0 ... 0 0 0] ... [0 0 0 ... 0 0 0] [0 0 0 ... 1 0 0]]
My goal is to find a path that reaches all of the vectors, starting from the first vector (which is all 0's), while minimizing the number of steps. The path does not need to be continuous (i.e. if each vector has only one 1, then the number of steps can just be the number of non-zero vectors).
Is there any existing method that optimizes this? It's kind of like a shortest path problem.
I have a dataset in the form of a DataFrame and each row has a label ranging from 1-5. I am doing a one-hot encode using pd.get_dummies(). If my dataset has all 5 labels there is no problem. However, not all sets contain all 5 numbers, so the encode just skips the missing value and creates a problem for new datasets coming in. Can I set a range so that the one-hot encode knows there should be 5 labels? Or would I have to append 1,2,3,4,5 to the end of the array before I perform the encode and then delete the last 5 entries?
Correct encode: values 1-5 are encoded
import numpy as np
import pandas as pd

arr = np.array([1, 2, 5, 3, 1, 5, 1, 4])
df = pd.DataFrame(arr, columns=['test'])
hotarr = np.array(pd.get_dummies(df['test']))
>>>[[1 0 0 0 0]
[0 1 0 0 0]
[0 0 0 0 1]
[0 0 1 0 0]
[1 0 0 0 0]
[0 0 0 0 1]
[1 0 0 0 0]
[0 0 0 1 0]]
Missing value encode: this dataset is missing label 4.
arr = np.array([1, 2, 5, 3, 1, 5, 1])
df = pd.DataFrame(arr, columns=['test'])
hotarr = np.array(pd.get_dummies(df['test']))
>>>[[1 0 0 0]
[0 1 0 0]
[0 0 0 1]
[0 0 1 0]
[1 0 0 0]
[0 0 0 1]
[1 0 0 0]]
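For reference, the append-then-trim workaround mentioned above would look roughly like this sketch (the padding rows are only there to force every label to appear):
import numpy as np
import pandas as pd

arr = np.array([1, 2, 5, 3, 1, 5, 1])
padded = np.append(arr, [1, 2, 3, 4, 5])             # temporarily add every possible label
df = pd.DataFrame(padded, columns=['test'])
hotarr = np.array(pd.get_dummies(df['test']))[:-5]   # drop the five padding rows again
print(hotarr)                                        # 7 rows, 5 columns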
Set up the CategoricalDtype before encoding to ensure all categories are represented when getting dummies:
import numpy as np
import pandas as pd
arr = np.array([1, 2, 5, 3, 1, 5, 1])
df = pd.DataFrame(arr, columns=['test'])
# Setup Categorical Dtype
df['test'] = df['test'].astype(pd.CategoricalDtype(categories=[1, 2, 3, 4, 5]))
hotarr = np.array(pd.get_dummies(df['test']))
print(hotarr)
Alternatively, you can reindex after get_dummies with fill_value=0 to add the missing columns:
hotarr = np.array(pd.get_dummies(df['test'])
.reindex(columns=[1, 2, 3, 4, 5], fill_value=0))
Both produce hotarr with 5 columns even though the input does not contain 4:
[[1 0 0 0 0]
[0 1 0 0 0]
[0 0 0 0 1]
[0 0 1 0 0]
[1 0 0 0 0]
[0 0 0 0 1]
[1 0 0 0 0]]
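The same fixed dtype can then be reused for every new batch that comes in, so each batch encodes to the same five columns (a short sketch; the batch values below are made up for illustration):
import numpy as np
import pandas as pd

cat_dtype = pd.CategoricalDtype(categories=[1, 2, 3, 4, 5])
new_batch = pd.DataFrame({'test': [2, 2, 3]})    # hypothetical incoming data
new_hot = np.array(pd.get_dummies(new_batch['test'].astype(cat_dtype)))
print(new_hot)                                   # always 5 columns, whatever labels appear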
I have a model which predicts 5 classes. I want to modify the accuracy metric as in the example below:
def accuracy(y_pred, y_true):
    # our pred tensor
    y_pred = [[0,0,0,0,1], [0,1,0,0,0], [0,0,0,1,0], [1,0,0,0,0], [0,0,1,0,0]]
    # make some manipulations with the tensor y_pred
    # description of the actions:
    for array in y_pred:
        if array[3] == 1:
            array[3] = 0
            array[0] = 1
        if array[4] == 1:
            array[4] = 0
            array[1] = 1
        else:
            continue
    # this works nicely with arrays, but how can I implement it with tensors?
    # result after the manipulations ->
    y_pred = [[0,1,0,0,0], [0,1,0,0,0], [1,0,0,0,0], [1,0,0,0,0], [0,0,1,0,0]]
    # I want to do the same actions with y_true,
    # and then run these preprocessed tensors through a plain tf.keras.metrics.Accuracy metric
I think tf.where can help filter the tensor, but unfortunately I can't get it to work correctly.
How can I implement this preprocessing accuracy metric with tensors?
If you want to shift the ones to the left by 3 indices, you can do this:
import numpy as np
y_pred = [ [0,0,0,0,1], [0,1,0,0,0], [0,0,0,1,0], [1,0,0,0,0], [0,0,1,0,0]]
y_pred = np.array(y_pred)
print(y_pred)
shift = 3
one_pos = np.where(y_pred == 1)[1]  # column indices where y_pred is 1
# updating the new positions with 1 (row indices run over all rows)
y_pred[range(y_pred.shape[0]), one_pos - shift] = np.ones((y_pred.shape[0],))
# making the old positions zero
y_pred[range(y_pred.shape[0]), one_pos] = np.zeros((y_pred.shape[0],))
print(y_pred)
[[0 0 0 0 1]
[0 1 0 0 0]
[0 0 0 1 0]
[1 0 0 0 0]
[0 0 1 0 0]]
[[0 1 0 0 0]
[0 0 0 1 0]
[1 0 0 0 0]
[0 0 1 0 0]
[0 0 0 0 1]]
Update:
If you only want to shift for indices 3 and 4:
import numpy as np
y_pred = [ [0,0,0,0,1], [0,1,0,0,0], [0,0,0,1,0], [1,0,0,0,0], [0,0,1,0,0]]
y_pred = np.array(y_pred)
print(y_pred)
shift = 3
one_pos = np.where(y_pred == 1)[1]  # column indices where y_pred is 1
print(one_pos)
# move the 1 three places to the left only for rows whose 1 is at index 3 or 4
y_pred[range(y_pred.shape[0]), one_pos - shift] = [1 if (i == 3 or i == 4) else 0 for i in one_pos]
y_pred[range(y_pred.shape[0]), one_pos] = [0 if (i == 3 or i == 4) else 1 for i in one_pos]
print(y_pred)
[[0 0 0 0 1]
[0 1 0 0 0]
[0 0 0 1 0]
[1 0 0 0 0]
[0 0 1 0 0]]
[4 1 3 0 2]
[[0 1 0 0 0]
[0 1 0 0 0]
[1 0 0 0 0]
[1 0 0 0 0]
[0 0 1 0 0]]
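For the tensor case the question asks about, the same 3 -> 0 and 4 -> 1 remapping can be done without tf.where by converting the one-hot rows to class indices and back (a minimal sketch; the remap table and the five-class depth are assumptions taken from the question):
import tensorflow as tf

y_pred = tf.constant([[0, 0, 0, 0, 1],
                      [0, 1, 0, 0, 0],
                      [0, 0, 0, 1, 0],
                      [1, 0, 0, 0, 0],
                      [0, 0, 1, 0, 0]])

remap = tf.constant([0, 1, 2, 0, 1])    # class 3 -> 0, class 4 -> 1, others unchanged
labels = tf.argmax(y_pred, axis=-1)     # one-hot rows -> class indices
labels = tf.gather(remap, labels)       # apply the remapping
y_pred_new = tf.one_hot(labels, depth=5, dtype=y_pred.dtype)
print(y_pred_new)
Applying the same transformation to y_true gives tensors that can then be compared with tf.keras.metrics.Accuracy in the usual way, as the question describes.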
I have a binary array of size 64x64x64, where a volume of 40x40x40 is set to "1" and the rest is "0". I have been trying to rotate this cube about its center around the z-axis using skimage.transform.rotate and also OpenCV as:
import cv2
import numpy as np

def rotateImage(image, angle):
    row, col = image.shape
    center = tuple(np.array([row, col]) / 2)
    rot_mat = cv2.getRotationMatrix2D(center, angle, 1.0)
    new_image = cv2.warpAffine(image, rot_mat, (col, row))
    return new_image
In the case of OpenCV, I tried a 2D rotation of each individual slice in the cube (Cube[:,:,n=1,2,3...p]).
After rotating, the total sum of the values in the array changes. This may be caused by interpolation during rotation. How can I rotate a 3D array of this kind without adding anything to the array?
OK, so I understand now what you are asking. The closest I can come up with is scipy.ndimage. There is also a way to interface with ImageJ from Python, which might be easier. But here is what I did with scipy.ndimage:
import numpy as np
from scipy.ndimage import interpolation

angle = 25  # angle should be in degrees
Rotatedim = interpolation.rotate(yourimage, angle, reshape=False, output=np.int32, order=5, prefilter=False)
This worked to preserve the sum for some angles and not others; perhaps by playing around more with the parameters you might be able to get your desired outcome.
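One parameter worth trying in particular is order=0 (nearest-neighbour resampling), which at least keeps the result strictly binary, although the count of ones can still drift a little (a sketch, assuming the 64/40 sizes from the question):
import numpy as np
from scipy import ndimage

cube = np.zeros((64, 64, 64), dtype=np.int32)
cube[12:52, 12:52, 12:52] = 1    # the 40x40x40 block of ones

# nearest-neighbour rotation about the z-axis: the output stays strictly 0/1,
# but the total count of ones can still change slightly at the edges
rotated = ndimage.rotate(cube, 25, axes=(0, 1), reshape=False, order=0)
print(cube.sum(), rotated.sum())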
One option is to convert to a sparse representation, transform the coordinates with a rotation matrix, and then transform back to dense. In 2 dimensions, this looks like:
import numpy as np
import scipy.sparse
import math
N = 10
space = np.zeros((N, N), dtype=np.int8)
space[3:7, 3:7].fill(1)
print(space)
print(np.sum(space))
# coordinates of the non-zero entries, shifted so the rotation origin is at (3, 3)
space_coo = scipy.sparse.coo_matrix(space)
Coords = np.array(space_coo.nonzero()) - 3

# 2D rotation matrix for 30 degrees
theta = 30 * math.pi / 180
R = np.array([[math.cos(theta), math.sin(theta)], [-math.sin(theta), math.cos(theta)]])

# rotate, snap back onto the integer grid, and undo the shift
space2_coords = R.dot(Coords)
space2_coords = np.round(space2_coords).astype(int)
space2_coords += 3

# rebuild a dense matrix with a 1 at each rotated coordinate (duplicates are summed)
space2_sparse = scipy.sparse.coo_matrix(([1] * space2_coords.shape[1], (space2_coords[0], space2_coords[1])), shape=(N, N))
space2 = space2_sparse.todense()
print(space2)
print(np.sum(space2))
Output:
[[0 0 0 0 0 0 0 0 0 0]
[0 0 0 0 0 0 0 0 0 0]
[0 0 0 0 0 0 0 0 0 0]
[0 0 0 1 1 1 1 0 0 0]
[0 0 0 1 1 1 1 0 0 0]
[0 0 0 1 1 1 1 0 0 0]
[0 0 0 1 1 1 1 0 0 0]
[0 0 0 0 0 0 0 0 0 0]
[0 0 0 0 0 0 0 0 0 0]
[0 0 0 0 0 0 0 0 0 0]]
16
[[0 0 0 0 0 0 0 0 0 0]
[0 0 0 0 0 0 0 0 0 0]
[0 0 0 0 0 0 0 0 0 0]
[0 0 0 1 0 0 0 0 0 0]
[0 0 1 1 1 1 0 0 0 0]
[0 0 1 1 1 1 1 0 0 0]
[0 1 1 0 1 1 0 0 0 0]
[0 0 0 1 1 0 0 0 0 0]
[0 0 0 0 0 0 0 0 0 0]
[0 0 0 0 0 0 0 0 0 0]]
16
The advantage is that the total is preserved exactly: the sum is the same before and after the transform. The downsides are that you might get 'holes', as above, and/or duplicate coordinates, giving values of '2' in the final dense matrix.
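Extending the same idea to the 3D case from the question means rotating only the x/y coordinates and leaving z untouched (a rough sketch; the 64/40 sizes and the rotation about the cube's center are assumptions, and duplicate coordinates are simply collapsed back to 1 here, so the count can drop slightly):
import math
import numpy as np

N = 64
space = np.zeros((N, N, N), dtype=np.int8)
space[12:52, 12:52, 12:52] = 1                 # the 40x40x40 block of ones

xs, ys, zs = np.nonzero(space)                 # coordinates of all the ones
center = (N - 1) / 2.0
theta = math.radians(30)
R = np.array([[math.cos(theta), math.sin(theta)],
              [-math.sin(theta), math.cos(theta)]])

xy = np.stack([xs - center, ys - center])      # shift x/y so the rotation is about the center
xy_rot = np.round(R.dot(xy)) + center          # rotate about the z-axis and shift back
xs_new = np.clip(xy_rot[0].astype(int), 0, N - 1)
ys_new = np.clip(xy_rot[1].astype(int), 0, N - 1)

rotated = np.zeros_like(space)
rotated[xs_new, ys_new, zs] = 1                # z stays the same; duplicates collapse to 1
print(space.sum(), rotated.sum())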