I am working on a piece of Python code that takes a greyscale image, scales it, and outputs a 3D model in which the height of each pixel is determined by its greyscale value. I have everything working except the output of the 3D model. I am using numpy-stl to create it from an array of values derived from the image. Using the numpy-stl library I create a box and then copy it as many times as I need for the image, then translate each one to the position and height corresponding to the image. This all works. The problem comes when I try to save it all as one .stl file: I can't figure out how to combine all the individual cube meshes into one.
Here is just the code dealing with the creation of the 3d array. I can plot the created meshes but not save them.
from stl import mesh
import math
import numpy
test = [[1,2],[2,1]]
a = [[1,2,3,4],
     [5,6,7,8],
     [9,10,11,12],
     [13,14,15,16]]
# Create 6 faces of a cube, 2 triangles per face
data = numpy.zeros(12, dtype=mesh.Mesh.dtype)
#cube defined in stl format
# Top of the cube
data['vectors'][0] = numpy.array([[0, 1, 1],
                                  [1, 0, 1],
                                  [0, 0, 1]])
data['vectors'][1] = numpy.array([[1, 0, 1],
                                  [0, 1, 1],
                                  [1, 1, 1]])
# Right face
data['vectors'][2] = numpy.array([[1, 0, 0],
                                  [1, 0, 1],
                                  [1, 1, 0]])
data['vectors'][3] = numpy.array([[1, 1, 1],
                                  [1, 0, 1],
                                  [1, 1, 0]])
# Left face
data['vectors'][4] = numpy.array([[0, 0, 0],
                                  [1, 0, 0],
                                  [1, 0, 1]])
data['vectors'][5] = numpy.array([[0, 0, 0],
                                  [0, 0, 1],
                                  [1, 0, 1]])
# Bottom of the cube
data['vectors'][6] = numpy.array([[0, 1, 0],
                                  [1, 0, 0],
                                  [0, 0, 0]])
data['vectors'][7] = numpy.array([[1, 0, 0],
                                  [0, 1, 0],
                                  [1, 1, 0]])
# Right back
data['vectors'][8] = numpy.array([[0, 0, 0],
                                  [0, 0, 1],
                                  [0, 1, 0]])
data['vectors'][9] = numpy.array([[0, 1, 1],
                                  [0, 0, 1],
                                  [0, 1, 0]])
# Left back
data['vectors'][10] = numpy.array([[0, 1, 0],
                                   [1, 1, 0],
                                   [1, 1, 1]])
data['vectors'][11] = numpy.array([[0, 1, 0],
                                   [0, 1, 1],
                                   [1, 1, 1]])
# Generate 16 copies of the cube mesh, one per cell of the 4x4 array, so each can be translated
meshes = [mesh.Mesh(data.copy()) for _ in range(16)]
# Iterates through the array and translates each cube in the x and y directions according
# to its position in the array, and in the z direction according to the value stored in the array
def ArrayToSTL(array, STLmesh):
    y_count = 0
    count = 0
    for row in array:
        x_count = 0
        for item in row:
            STLmesh[count].x += x_count
            STLmesh[count].y += y_count
            STLmesh[count].z += item
            x_count += 1
            count += 1
        y_count += 1
ArrayToSTL(a, meshes)
# Optionally render the rotated cube faces
from matplotlib import pyplot
from mpl_toolkits import mplot3d
# Create a new plot
figure = pyplot.figure()
axes = mplot3d.Axes3D(figure)
# Render the cube faces
for m in meshes:
    axes.add_collection3d(mplot3d.art3d.Poly3DCollection(m.vectors))
# Auto scale to the mesh size
scale = numpy.concatenate([m.points for m in meshes]).flatten()
axes.auto_scale_xyz(scale, scale, scale)
# Show the plot to the screen
pyplot.show()
This works well:
import numpy as np
import stl
from stl import mesh
import os
def combined_stl(meshes, save_path="./combined.stl"):
    combined = mesh.Mesh(np.concatenate([m.data for m in meshes]))
    combined.save(save_path, mode=stl.Mode.ASCII)
To load stored STL files and combine them, use this:
direc = "path_of_directory"
paths = [os.path.join(direc, i) for i in os.listdir(direc)]
meshes = [mesh.Mesh.from_file(path) for path in paths]
combined_stl(meshes)
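The same helper works just as well on the in-memory cube meshes built in the question, so there is no need to write each cube to its own file first. A minimal sketch (the meshes list and the ArrayToSTL call are the ones defined above; the output filename is arbitrary):
import stl
from stl import mesh
import numpy
# Combine the translated cube meshes into one mesh and save it as a single STL.
combined = mesh.Mesh(numpy.concatenate([m.data for m in meshes]))
combined.save('combined_cubes.stl', mode=stl.Mode.ASCII)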
My input is a list of y_true labels, where the element in position i contains a value in the range of 0..len(classes) and depicts what class that element of the data set truly is. i ranges from 0 to len(data). Example below:
# 5 elements in data, 3 classes, all of which had representation in the data:
y_true = [0,2,1,0,1]
I want my output to be a list of lists that is len(data) by len(classes), where inner list i would have a 1 in the position of y_true[i], and 0 in the other len(classes)-1 slots, example:
#same configuration as the previous example
y_true = [0,2,1,0,1]
result = [[1,0,0],[0,0,1],[0,1,0],[1,0,0],[0,1,0]]
Here's how I'm initializing result:
result = np.zeros((len(y_true), max(y_true)+1))
However, I haven't been able to make any further progress with this issue. I tried using np.add.at(result, y_true, 1), and also with y_true's shape flipped, but neither produced the result I wanted. What function(s) can achieve what I'm trying to do here?
Edit: For better clarity on what I want to achieve, I made it using a for loop:
result = np.zeros((len(y_true), max(y_true)+1))
for x in range(len(y_true)):
    result[x][y_true[x]] = 1
You can use fancy indexing:
result = np.zeros((len(y_true), max(y_true)+1), dtype=int)
result[np.arange(len(y_true)), y_true] = 1
output:
array([[1, 0, 0],
       [0, 0, 1],
       [0, 1, 0],
       [1, 0, 0],
       [0, 1, 0]])
Alternative
An interesting alternative might be to use pandas.get_dummies:
import pandas as pd
result = pd.get_dummies(y_true).to_numpy()
output:
array([[1, 0, 0],
       [0, 0, 1],
       [0, 1, 0],
       [1, 0, 0],
       [0, 1, 0]], dtype=uint8)
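Another NumPy-only variant that is sometimes convenient (my addition, a common idiom rather than part of the answer) is to index into an identity matrix, since row i of np.eye(n_classes) is already the one-hot vector for class i:
import numpy as np

y_true = [0, 2, 1, 0, 1]
n_classes = max(y_true) + 1
# Fancy-indexing the identity matrix with y_true picks the one-hot row for each sample.
result = np.eye(n_classes, dtype=int)[y_true]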
I'm trying to perform a rigid + scale transformation on a 3D volume with pytorch, but I can't seem to understand how the theta required for torch.nn.functional.affine_grid works.
I have a transformation matrix of size (1,4,4) generated by multiplying the matrices Translation * Scale * Rotation. If I use this matrix in, for example, scipy.ndimage.affine_transform, it works with no issues. However, the same matrix (cropped to size (1,3,4)) fails completely with torch.nn.functional.affine_grid.
I have managed to understand how the translation works (range -1 to 1) and I have confirmed that the Translation matrix works by simply normalizing the values to that range. As for the other two, I am lost.
I tried using a basic scaling matrix alone (below) as the most basic comparison, but the results in pytorch are different from those of scipy:
Scaling = [[0.75, 0,    0,    0],
           [0,    0.75, 0,    0],
           [0,    0,    0.75, 0],
           [0,    0,    0,    1]]
How can I convert the (1,4,4) affine matrix to work the same with torch.nn.functional.affine_grid? Alternatively, is there a way to generate the correct matrix based on the transformation parameters (shift, euler angles, scaling)?
To anyone who comes across a similar issue in the future: the problem with scipy vs pytorch affine transforms is that scipy applies the transform around (0, 0, 0) while pytorch applies it around the centre of the image/volume.
For example, let's take the parameters:
euler_angles = [ea0, ea1, ea2]
translation = [tr0, tr1, tr2]
scale = [sc0, sc1, sc2]
and create the following transformation matrices:
# Rotation matrices
R_x(ea0, ea1, ea2) = np.array([[1, 0, 0, 0],
                               [0, math.cos(ea0), -math.sin(ea0), 0],
                               [0, math.sin(ea0), math.cos(ea0), 0],
                               [0, 0, 0, 1]])
R_y(ea0, ea1, ea2) = np.array([[math.cos(ea1), 0, math.sin(ea1), 0],
                               [0, 1, 0, 0],
                               [-math.sin(ea1), 0, math.cos(ea1), 0],
                               [0, 0, 0, 1]])
R_z(ea0, ea1, ea2) = np.array([[math.cos(ea2), -math.sin(ea2), 0, 0],
                               [math.sin(ea2), math.cos(ea2), 0, 0],
                               [0, 0, 1, 0],
                               [0, 0, 0, 1]])
R = R_x.dot(R_y).dot(R_z)
# Translation matrix
T(tr0, tr1, tr2) = np.array([[1, 0, 0, -tr0],
                             [0, 1, 0, -tr1],
                             [0, 0, 1, -tr2],
                             [0, 0, 0, 1]])
# Scaling matrix
S(sc0, sc1, sc2) = np.array([[1/sc0, 0, 0, 0],
                             [0, 1/sc1, 0, 0],
                             [0, 0, 1/sc2, 0],
                             [0, 0, 0, 1]])
If you have a volume of size (100, 100, 100), the scipy transform around the centre of the volume requires moving the centre of the volume to (0, 0, 0) first, and then moving it back to (50, 50, 50) after S, T, and R have been applied. Defining:
T_zero = np.array([[1, 0, 0, 50],
                   [0, 1, 0, 50],
                   [0, 0, 1, 50],
                   [0, 0, 0, 1]])
T_centre = np.array([[1, 0, 0, -50],
                     [0, 1, 0, -50],
                     [0, 0, 1, -50],
                     [0, 0, 0, 1]])
The scipy transform around the centre is then:
transform_scipy_centre = T_zero.dot(T).dot(S).dot(R).dot(T_centre)
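For reference, a minimal sketch of how this matrix is applied on the scipy side (my addition; it assumes volume is the (100, 100, 100) NumPy array): scipy.ndimage.affine_transform accepts the homogeneous (4, 4) matrix directly.
import scipy.ndimage
# The homogeneous matrix maps output coordinates to input coordinates;
# order=1 selects trilinear interpolation.
warped = scipy.ndimage.affine_transform(volume, transform_scipy_centre, order=1)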
In pytorch, there are some slight differences to the parameters.
The translation is defined between -1 and 1. Their order is also different. Using the same (100, 100, 100) volume as an example, the translation parameters in pytorch are given by:
# Note the order difference
translation_pytorch = [tr0_p, tr1_p, tr2_p] = [tr0/50, tr2/50, tr1/50]
T_p = T(tr0_p, tr1_p, tr2_p)
The scale parameters are in a different order:
scale_pytorch = [sc0_p, sc1_p, sc2_p] = [sc2, sc0, sc1]
S_p = S(sc0_p, sc1_p, sc2_p)
The euler angles are the biggest difference. To get the equivalent transform, the parameters must first be negated and put in a different order:
# Note the order difference
euler_angles_pytorch = [ea0_p, ea1_p, ea2_p] = [-ea0, -ea2, -ea1]
R_x_p = R_x(ea0_p, ea1_p, ea2_p)
R_y_p = R_y(ea0_p, ea1_p, ea2_p)
R_z_p = R_z(ea0_p, ea1_p, ea2_p)
The order in which the rotation matrix is calculated is also different:
# Note the order difference
R_p = R_x_p.dot(R_z_p).dot(R_y_p)
With all these considerations, the scipy transform with:
transform_scipy_centre = T_zero.dot(T).dot(S).dot(R).dot(T_centre)
is equivalent to the pytorch transform with:
transform_pytorch = T_p.dot(S_p).dot(R_p)
I hope this helps!
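As a usage sketch (my addition, not part of the original answer), assuming transform_pytorch has been built as above and volume is a (1, 1, 100, 100, 100) float tensor: only the first three rows of the 4x4 matrix are passed as theta to torch.nn.functional.affine_grid, and the resulting grid is fed to grid_sample.
import torch
import torch.nn.functional as F

# theta must have shape (N, 3, 4) for a 5D (N, C, D, H, W) input.
theta = torch.as_tensor(transform_pytorch[:3, :], dtype=torch.float32).unsqueeze(0)
grid = F.affine_grid(theta, size=volume.shape, align_corners=False)  # (1, D, H, W, 3)
warped = F.grid_sample(volume, grid, align_corners=False, mode='bilinear')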
In the code that I am writing, I have three 2D numpy arrays with the same dimensions (m x n). Each 2D array contains info about a specific trait, but each corresponding cell (with a specific row/col value) across all three 2D arrays corresponds to a specific person. The three 2D arrays are trait1, trait2, and trait3. As an example, person (0, 0) will have traits 1 and 2, but not 3, if trait1 and trait2 have a value of 1 at location (0, 0) but trait3 does not.
What would be an efficient method of updating a 2D array at a specific location based on the values of other corresponding 2D arrays of the same dimension at the same location? That is, how can I efficiently update a 2D array at a specific location such that the other 2D arrays at this same location fulfill specific conditions?
I am currently trying to update the values of trait1 and trait2 according to their current values (where the corresponding trait1 value == 1 and the corresponding trait2 value == 0); I am also trying to update the values of trait3 according to the current values of trait1 and trait2 (under the same conditions). However, I am having trouble doing this without using nested for loops, which greatly slow down my program.
Below is my current approach, which works, but is much too slow for my purposes:
for i in range(0, m):
    for j in range(0, n):
        if trait1[i][j] == 1:
            if trait2[i][j] == 0:
                trait1[i][j] = 0
                trait2[i][j] = 1
                new_color(i, j, 1)  # updates the color of the specific person on a grid
                trait3[i][j] = 0
        elif trait1[i][j] == 0:
            if trait2[i][j] <= 0:
                trait1[i][j] = 1
                trait2[i][j] = 0
                new_color(i, j, 0)
NumPy arrays are really slow if you loop over them. If you can use matrix operations / NumPy functions for everything, it will go much faster.
In your case, you could first extract the indices you're interested in, and then update your matrices like this:
import numpy as np
np.random.seed(1)
# Generate some sample data
trait1, trait2, trait3 = ( np.random.randint(0,2, [4,4]) for _ in range(3) )
In [4]: trait1
Out[4]:
array([[1, 1, 0, 0],
       [1, 1, 1, 1],
       [1, 0, 0, 1],
       [0, 1, 1, 0]])
In [5]: trait2
Out[5]:
array([[0, 1, 0, 0],
       [0, 1, 0, 0],
       [1, 0, 0, 0],
       [1, 0, 0, 0]])
In [6]: trait3
Out[6]:
array([[1, 1, 1, 1],
       [1, 0, 0, 0],
       [1, 1, 1, 1],
       [1, 1, 0, 1]])
And then:
cond1_idx = np.where((trait1 == 1) & (trait2==0))
cond2_idx = np.where((trait1 == 0) & (trait2<=0))
trait1[cond1_idx] = 0
trait2[cond1_idx] = 1
trait3[cond1_idx] = 0
[ new_color(i, j, 1) for i,j in zip(*cond1_idx) ]
trait1[cond2_idx] = 1
trait2[cond2_idx] = 0
[ new_color(i, j, 0) for i,j in zip(*cond2_idx) ]
Result:
In [2]: trait1
Out[2]:
array([[0, 1, 1, 1],
       [0, 1, 0, 0],
       [1, 1, 1, 0],
       [0, 0, 0, 1]])
In [3]: trait2
Out[3]:
array([[1, 1, 0, 0],
       [1, 1, 1, 1],
       [1, 0, 0, 1],
       [1, 1, 1, 0]])
In [4]: trait3
Out[4]:
array([[0, 1, 1, 1],
       [0, 0, 0, 0],
       [1, 1, 1, 0],
       [1, 0, 0, 1]])
I cannot really test the new_color calls, though, since I don't have that function.
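Equivalently (a small variant of the above, my addition), the boolean masks can be used directly instead of np.where; the important detail either way is that both conditions are evaluated before any array is modified, so the second mask is not affected by the first set of updates:
# Compute both masks up front, then apply the updates.
cond1 = (trait1 == 1) & (trait2 == 0)
cond2 = (trait1 == 0) & (trait2 <= 0)

trait1[cond1] = 0
trait2[cond1] = 1
trait3[cond1] = 0

trait1[cond2] = 1
trait2[cond2] = 0

for i, j in zip(*np.nonzero(cond1)):
    new_color(i, j, 1)
for i, j in zip(*np.nonzero(cond2)):
    new_color(i, j, 0)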
Suppose I have original_image with shape (451, 521, 3).
And it contains [0,0,0] RGB values at some locations.
I would like to replace all [0,0,0] with [0,255,0]
What I tried was
I created a mask which has True where [0,0,0] is located in original_image.
That mask has shape (451, 521).
I thought I could use following
new_original_image=original_image[mask]
But it turned out new_original_image is just an array of shape (18, 3) whose elements (for example, [[ 97 68 108],[127 99 139],[156 130 170],...]) are the pixels of original_image selected by the True entries of the mask array.
Here is one way
idx=np.all(np.vstack(a)==np.array([0,0,5]),1)
a1=np.vstack(a)
a1[idx]=[0,0,0]
yourary=a1.reshape(2,-1,3)
Out[150]:
array([[[0, 0, 0],
        [0, 0, 1],
        [0, 0, 0],
        [0, 0, 0]],

       [[0, 0, 0],
        [0, 0, 1],
        [0, 0, 0],
        [0, 0, 0]]])
Data input
a
Out[133]:
array([[[0, 0, 0],
        [0, 0, 1],
        [0, 0, 5],
        [0, 0, 5]],

       [[0, 0, 0],
        [0, 0, 1],
        [0, 0, 5],
        [0, 0, 5]]])
I would like to replace all [0,0,0] with [0,255,0]
import cv2
import numpy as np

img = cv2.imread("test.jpg")
rows, cols, channels = img.shape
for r in range(rows):
    for c in range(cols):
        if np.all(img[r, c] == [0, 0, 0]):
            img[r, c] = [0, 255, 0]
Based on the solution from Wen-Ben, here is the detailed code snippet implementing what I wanted:
# original_image which contains [0,0,0] at several location
# in 2 (last) axis from (451, 521, 3) shape image
# Stack original_image; using original_image.reshape((-1,3)) also works
stacked=np.vstack(original_image)
# print(stacked.shape)
# (234971, 3)
# Create mask array which has True where [0,0,0] are located in stacked array
idx=np.all(stacked==[0,0,0],1)
# print(idx.shape)
# (234971,)
# Replace existing values which are filtered by idx with [0,255,0]
stacked[idx]=[0,255,0]
# Back to original image shape
original_image_new=stacked.reshape(original_image.shape[0],original_image.shape[1],3)
# print(original_image_new.shape)
# (451, 521, 3)
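The reshaping can also be skipped entirely (my sketch, not part of the answers above): building the (451, 521) mask with axis=-1, as in the original attempt, and assigning through it works directly on the 3D image.
import numpy as np
# True where a pixel is exactly [0, 0, 0]; mask has shape (451, 521).
mask = np.all(original_image == [0, 0, 0], axis=-1)
# Boolean-mask assignment broadcasts [0, 255, 0] over the selected pixels.
original_image[mask] = [0, 255, 0]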
Ok, so I feel like there should be an easy way to create a 3-dimensional scatter plot using matplotlib. I have a 3D numpy array (dset) with 0's where I don't want a point and 1's where I do. To plot it now I basically have to step through three for loops, as such:
for i in range(30):
    for x in range(60):
        for y in range(60):
            if dset[i, x, y] == 1:
                ax.scatter(x, y, -i, zdir='z', c='red')
Any suggestions on how I could accomplish this more efficiently? Any ideas would be greatly appreciated.
If you have a dset like that, and you want to just get the 1 values, you could use nonzero, which "returns a tuple of arrays, one for each dimension of a, containing the indices of the non-zero elements in that dimension.".
For example, we can make a simple 3d array:
>>> import numpy
>>> numpy.random.seed(29)
>>> d = numpy.random.randint(0, 2, size=(3,3,3))
>>> d
array([[[1, 1, 0],
        [1, 0, 0],
        [0, 1, 1]],

       [[0, 1, 1],
        [1, 0, 0],
        [0, 1, 1]],

       [[1, 1, 0],
        [0, 1, 0],
        [0, 0, 1]]])
and find where the nonzero elements are located:
>>> d.nonzero()
(array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 2, 2, 2, 2]), array([0, 0, 1, 2, 2, 0, 0, 1, 2, 2, 0, 0, 1, 2]), array([0, 1, 0, 1, 2, 1, 2, 0, 1, 2, 0, 1, 1, 2]))
>>> z,x,y = d.nonzero()
If we wanted a more complicated cut, we could have done something like (d > 3.4).nonzero(), as True has an integer value of 1 and counts as nonzero.
Finally, we plot:
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(x, y, -z, zdir='z', c= 'red')
plt.savefig("demo.png")
giving a 3D scatter plot of the nonzero points (saved as demo.png; figure omitted).
If you wanted to avoid using the nonzero option (for example, if you had a 3D numpy array whose values were supposed to be the color values of the data points), you could do what you do, but save some lines of code by using ndenumerate.
Your example might become:
for index, x in np.ndenumerate(dset):
    if x == 1:
        ax.scatter(*index, c='red')
I guess the point is just that you don't need nested for loops to iterate through multidimensional numpy arrays.