I'm fairly new to Python programming and I'm struggling with the Vispy library.
Basically, I have a Raspberry Pi connected to two Arduino accelerometer sensors. The Raspberry Pi sends the X, Y and Z values from both sensors over UDP to my computer. My computer then has to display 9 graphs: 6 for the evolution of X, Y and Z for each sensor and 3 for the differences between them (X1-X2, Y1-Y2 and Z1-Z2), and it all has to happen in real time.
I wanted to use the Vispy library for that last point. After reading the documentation, I came up with the following code:
#!/usr/bin/env python3
import numpy as np
from vispy import app
from vispy import gloo
import socket
from itertools import count

# init x, y arrays
x1_vals = []
time_vals = []

# UDP connection from the Raspberry Pi
UDP_IP = ""
UDP_PORT = 5005
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind((UDP_IP, UDP_PORT))

# Initialize the index and set it to 1
index = count()
next(index)

# Initialize the Canvas c
c = app.Canvas(keys='interactive')

vertex = """
attribute vec2 a_position;
void main (void)
{
    gl_Position = vec4(a_position, 0.0, 1.0);
}
"""

fragment = """
void main()
{
    gl_FragColor = vec4(0.0, 0.0, 15.0, 10.0);  // components are clamped to [0, 1], so this renders as opaque blue
}
"""

program = gloo.Program(vertex, fragment)


@c.connect
def on_resize(event):
    gloo.set_viewport(0, 0, *event.size)


@c.connect
def on_draw(event):
    gloo.clear((1, 1, 1, 1))
    program.draw('line_strip')


def on_timer(event):
    # next index
    cpt = next(index)
    # Get data from UDP
    recv, addr = sock.recvfrom(1024)
    data = recv.decode('UTF-8').split(';')
    # We want to display only 100 samples so the graph stays readable,
    # so we delete the first value of the x array once there are more than 100 values.
    if cpt > 100:
        del x1_vals[0]
        time_vals = np.linspace(-1.0, +1.0, 100)
    else:
        time_vals = np.linspace(-1.0, +1.0, cpt)
    # The values must be bound between -1.0 and 1.0
    tmp = float(data[0]) * 0.5
    if tmp >= 1:
        tmp = float(0.99)
    elif tmp <= -1:
        tmp = float(-0.99)
    x1_vals.append(tmp)
    # Then we concatenate the arrays of x and y
    program['a_position'] = np.c_[time_vals, x1_vals].astype(np.float32)
    c.update()


c.timer = app.Timer('auto', connect=on_timer, start=True)
c.show()
app.run()
So, as the comments describe, it first initializes the UDP connection and the canvas, then for each value received it updates the canvas with the newly added value. If the number of values exceeds 100, the first value of the array is deleted to keep a constant number of samples.
It works well when I only want to display the evolution of the X1 accelerometer. So I picked up the code from the Vispy documentation that demonstrates how to show multiple graphs, but it is a bit too complex for my level.
Basically, in my code I receive all the sensor values in the data array. I pick only the first value [0] (X1), but the complete data looks like this: [x1, y1, z1, dx, dy, dz, x2, y2, z2], where dx = x1 - x2, dy = y1 - y2 and dz = z1 - z2 (the differences have to be calculated directly on the Raspberry Pi).
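For reference, unpacking one datagram into those nine floats could look like this (just a sketch, not part of my current code):

recv, addr = sock.recvfrom(1024)
x1, y1, z1, dx, dy, dz, x2, y2, z2 = (float(v) for v in recv.decode('UTF-8').split(';'))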
So I tried to modify the code from the documentation as follows:
# Number of cols and rows in the table.
nrows = 3
ncols = 3
# Number of signals.
m = nrows*ncols
# Number of samples per signal.
n = 100
Because I want 9 graphs and only 100 samples per graph.
I ignored the index and the color, and deleted the amplitude as it is not required in my case. Basically, I kept almost all of the original code for the setup part, then replaced the def on_timer with mine.
Now I'm trying to feed the a_position attribute of the GLSL program with my own data, but I'm not sure how to prepare the data so that it works properly with this code. I'm struggling to understand what these lines do:
# GLSL C code
VERT_SHADER = """
// Compute the x coordinate from the time index.
float x = -1 + 2*a_index.z / (u_n-1);
vec2 position = vec2(x - (1 - 1 / u_scale.x), a_position);
// Find the affine transformation for the subplots.
vec2 a = vec2(1./ncols, 1./nrows)*.9;
vec2 b = vec2(-1 + 2*(a_index.x+.5) / ncols,
-1 + 2*(a_index.y+.5) / nrows);
// Apply the static subplot transformation + scaling.
gl_Position = vec4(a*u_scale*position+b, 0.0, 1.0);
"""
# Python code
def __init__(self):
    self.program['a_position'] = y.reshape(-1, 1)

def on_timer(self, event):
    k = 10
    y[:, :-k] = y[:, k:]
    y[:, -k:] = amplitudes * np.random.randn(m, k)
    self.program['a_position'].set_data(y.ravel().astype(np.float32))
I deleted the surrounding code that I think I'm understanding.
Note that even though I'm just starting with Python, I'm aware that the example uses a class definition for the Canvas while I'm using the bare object in my code; I understand the use of self and so on.
How can I adapt the code from the realtime_signals documentation to my case ?
Disclaimer: Overall that realtime signals example is, in my opinion, a bit of a hack. It "cheats" to produce as many plots as it does, but in the end the result is fast.
What that bit of shader code is doing is trying to take the series of line vertices and figure out which "sub-plot" they should go in. All vertices of all the lines are going into the shader as one array. The shader code is trying to say "this vertex is 23rd in the array which means it must belong to sub-plot 5 and it is the 3rd point in that plot because we know we have 5 points per plot" (as an example). The shader does this mostly by the information in a_index. For example, this bit:
// Compute the x coordinate from the time index.
float x = -1 + 2*a_index.z / (u_n-1);
vec2 position = vec2(x - (1 - 1 / u_scale.x), a_position);
This is computing the x coordinate of the vertex from its time index within its sub-plot (a_index.z) and pairing it with the y value (a_position), adjusted for the current x scale.
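For context, here is roughly how the example builds a_index on the Python side (paraphrased from memory of the realtime_signals demo, so check the actual script): every vertex gets a (column, row, time-index) triple.

import numpy as np

nrows, ncols, n = 3, 3, 100
m = nrows * ncols

# One (col, row, time) triple per vertex, matching a_index.x/.y/.z in the shader
a_index = np.c_[np.repeat(np.repeat(np.arange(ncols), nrows), n),
                np.repeat(np.tile(np.arange(nrows), ncols), n),
                np.tile(np.arange(n), m)].astype(np.float32)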
The next chunk:
// Find the affine transformation for the subplots.
vec2 a = vec2(1./ncols, 1./nrows)*.9;
vec2 b = vec2(-1 + 2*(a_index.x+.5) / ncols,
-1 + 2*(a_index.y+.5) / nrows);
// Apply the static subplot transformation + scaling.
gl_Position = vec4(a*u_scale*position+b, 0.0, 1.0);
This is determining how big each subplot should be. So the first chunk was "what subplot does this point fall in" and this one is "where in that subplot does the point sit". The code is coming up with a linear affine transformation (y = m*x + b) to scale the line to the appropriate size so that all the subplots are the same size and don't overlap.
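As a small illustration of that transform in plain numpy (my own sketch, mirroring the GLSL above), for a vertex belonging to subplot column col and row row:

import numpy as np

nrows, ncols = 3, 3
col, row = 1, 2                                   # a_index.x and a_index.y for this vertex
a = np.array([1.0 / ncols, 1.0 / nrows]) * 0.9    # shrink each subplot slightly so they don't touch
b = np.array([-1 + 2 * (col + 0.5) / ncols,       # centre of that subplot in clip space
              -1 + 2 * (row + 0.5) / nrows])
position = np.array([0.2, -0.4])                  # (x, y) of the vertex inside its subplot, in [-1, 1]
clip_xy = a * position + b                        # where the vertex lands on screen (before u_scale)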
I'm not sure I can go into more detail without re-walking the whole script and trying to understand exactly what each value in a_index is.
Edit: another suggestion. In the long run you may want to move the UDP recv code to a separate thread (a QThread if you're using a Qt backend) that emits a signal with the new data when it is available. That way the GUI/main thread stays responsive and isn't hung up waiting for data to come in.
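A minimal, backend-agnostic sketch of that idea using a plain Python thread and a queue instead of a QThread (the names here are made up for illustration): the reader thread blocks on the socket, and the Vispy timer only drains whatever has already arrived.

import queue
import socket
import threading

data_queue = queue.Queue()

def udp_reader(sock, q):
    # Runs in a background thread; blocks on recvfrom without freezing the GUI
    while True:
        recv, _addr = sock.recvfrom(1024)
        q.put(recv.decode('UTF-8').split(';'))

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("", 5005))
threading.Thread(target=udp_reader, args=(sock, data_queue), daemon=True).start()

def on_timer(event):
    # Drain everything received since the last tick; never block the main thread
    while not data_queue.empty():
        data = data_queue.get_nowait()
        # ... append the new samples to the plot arrays and update the program ...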
Related
I have a shapefile layer with polygons for the 50 U.S. states ("States"). I also have a point featureclass with potentially many overlapping points at the centroid of each state polygon ("Dots").
I want to use the CreateRandomPoints_management() function to find new placements for all the points in each state, essentially creating a 1:1 dot density map with clickable point features.
Here's the workflow I have envisioned:
- Loop through and select each state, one by one
- Get the count of points contained within each state polygon boundary
- Use CreateRandomPoints_management to generate that count of randomized points within each state boundary
- Either move the original points to the new random locations and then delete the random points layer, or add fields and assign values to the newly created random points layer from the original points (whichever is simpler and more reliable)
I've tried several times to write this code, but my arcpy chops aren't that great yet. Any ideas or examples would be greatly appreciated!
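Here's a rough sketch of that workflow (untested; the paths, layer names and the OBJECTID field are placeholders):

import arcpy

states_fc = r"C:\data\project.gdb\States"
dots_fc = r"C:\data\project.gdb\Dots"
out_gdb = r"C:\data\project.gdb"

arcpy.env.overwriteOutput = True
arcpy.MakeFeatureLayer_management(states_fc, "states_lyr")
arcpy.MakeFeatureLayer_management(dots_fc, "dots_lyr")

with arcpy.da.SearchCursor(states_fc, ["OID@"]) as cursor:
    for (oid,) in cursor:
        # Select one state, then the dots that fall inside it
        arcpy.SelectLayerByAttribute_management("states_lyr", "NEW_SELECTION",
                                                "OBJECTID = {0}".format(oid))
        arcpy.SelectLayerByLocation_management("dots_lyr", "WITHIN", "states_lyr")
        count = int(arcpy.GetCount_management("dots_lyr")[0])
        if count == 0:
            continue
        # Generate that many random points constrained to the selected state
        arcpy.CreateRandomPoints_management(out_gdb, "random_{0}".format(oid),
                                            "states_lyr", "", count)
        # Transferring attributes / moving the original points is not shown here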
Someone shared the following code, which is a more direct solution to my problem:
#-------------------------------------------------------------------------------
# Name:     Disperse3.py
# Purpose:  Disperse points in multiple polygons
# Author:   arcpy Team
#           http://arcpy.wordpress.com/2013/06/07/disperse-overlapping-points/
# Created:  02-dec-2013
#-------------------------------------------------------------------------------
import arcpy
import random

def main():
    fcPoints = r"C:\Project\_Forums\Disperse\test.gdb\points3"
    fcPolygon = r"C:\Project\_Forums\Disperse\test.gdb\Polygons"
    arcpy.env.overwriteOutput = True

    with arcpy.da.SearchCursor(fcPolygon, ("SHAPE@")) as cursor:
        for row in cursor:
            polygon = row[0]
            disperse_points(fcPoints, polygon)
            del row
    print("ready...")

def point_in_poly(poly, x, y):
    pg = arcpy.PointGeometry(arcpy.Point(x, y), poly.spatialReference)
    return poly.contains(pg)

def disperse_points(in_points, polygon):
    lenx = polygon.extent.width
    leny = polygon.extent.height
    with arcpy.da.UpdateCursor(in_points, "SHAPE@XY") as points:
        for p in points:
            if point_in_poly(polygon, p[0][0], p[0][1]):
                # I changed code here!
                x = (random.random() * lenx) + polygon.extent.XMin
                y = (random.random() * leny) + polygon.extent.YMin
                inside = point_in_poly(polygon, x, y)
                while not inside:
                    x = (random.random() * lenx) + polygon.extent.XMin
                    y = (random.random() * leny) + polygon.extent.YMin
                    inside = point_in_poly(polygon, x, y)
                points.updateRow([(x, y)])
            else:
                pass  # don't update the location if the point doesn't originally fall inside the current polygon

if __name__ == '__main__':
    main()
Currently, I'm playing around with a script generating Julia sets and the Mandelbrot set and then using pygame to render the points.
Essentially, the screen is mapped to a smaller coordinate system where it's bounded by -2.5, 2.5 on the x axis and -1, 1 on the y axis. Each pixel in this mapped range is then passed to a function that checks whether its complex-number equivalent is in the given set. This function returns the number of iterations it took to decide whether the number is in the set or not (or the maximum number of iterations).
Then, for each pixel, I know what colour to give it based on this iteration count, and I render the pixels one by one. This part of the process is really intensive and takes ~30 seconds to render, but it can take much longer depending on the complexity of the set.
Here is the code for checking whether a given complex number and complex constant are in the Julia set; this doesn't take long to compute at all when checking 1920 * 1080 pixels:
max_iter = 45

def julia(z, c):
    n = 0
    while abs(z) <= 2 and n < max_iter:
        z = z * z + c
        n += 1
    return n
Here is the code I use for pygame rendering, this is definitely where the problem lies:
size_ = 1920, 1080
re_ = -2.5, 2.5
im_ = -1, 1
surf = pygame.Surface(size_)
colour_gradient1 = [c, c1, c2, c3, ...]  # This is some list of colours generated by a gradient function
for x in range(0, size_[0]):
    for y in range(0, size_[1]):
        z = complex(re_[0] + (x / size_[0]) * (re_[1] - re_[0]),
                    im_[0] + (y / size_[1]) * (im_[1] - im_[0]))
        m = julia(z, c)
        colour = colour_gradient1[m]
        pygame.draw.rect(surf,
                         colour,
                         (x, y, 1, 1))
I think I understand why this is performance-intensive: neither pygame nor Python is really optimised for rendering to the screen like this. I'm currently trying to learn C++ and I understand it's better suited for this kind of work.
I also experimented with a zoom function, where I could select a box with the mouse and the script would render the selected area, but this is where the problem really stood out: as the zoomed-in fractals got more complex, the script took far too long for the feature to be usable.
So my question: is there a better way to render something like this in close to real time using Python and maybe pygame? I'm open to using a different package, but if it's possible through pygame that would be ideal.
Attached below are a couple of pictures of the generated sets:
Fractal-generating algorithms always slow down the further in you zoom, because ever more iterations per pixel are needed the deeper you go (before the bail-out is reached).
This is never going to be particularly fast in an interpreted language. Sure you can tweak it to increase the speed a little, but it will never be "real time" (say < 1 second / image) for all zoom levels.
If you want to continue in Python, you will have to just agree with yourself that it's never going to be fast.
However, you could split the generation of each quadrant into separate processes, each running on its own CPU core. That would give you roughly an N-times speed-up for N cores.
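A minimal sketch of that idea with multiprocessing (splitting by rows rather than quadrants; the julia() function and pixel-to-plane mapping are taken from the question, and the constant c here is just an example value):

import multiprocessing as mp

size_ = 1920, 1080
re_ = -2.5, 2.5
im_ = -1, 1
c = complex(-0.8, 0.156)   # example Julia constant, purely for illustration
max_iter = 45

def julia(z, c):
    n = 0
    while abs(z) <= 2 and n < max_iter:
        z = z * z + c
        n += 1
    return n

def compute_row(y):
    # Iteration counts for one row of pixels
    im = im_[0] + (y / size_[1]) * (im_[1] - im_[0])
    return [julia(complex(re_[0] + (x / size_[0]) * (re_[1] - re_[0]), im), c)
            for x in range(size_[0])]

if __name__ == '__main__':
    with mp.Pool() as pool:          # one worker process per core by default
        rows = pool.map(compute_row, range(size_[1]))
    # rows[y][x] is the iteration count; colour and blit the pixels afterwards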
There are some optimisations that can be performed by detecting symmetry in the image and only calculating, say, half the pixels because the other side is a mirror of it (like the horizontal axis through a zoomed-out Mandelbrot set). You could probably refer to the source of the venerable Fractint program for examples of this.
...
Aside: I wrote one of these (drawing a Mandelbrot set) in C using the nVidia CUDA library which spreads the calculation over the 1200-ish "CPU"s on the video card (using a mid-range 2018 laptop). While it worked quite fast, for sufficiently large images, or deeply "zoomed-in" fractals, it still became slow. There's just so much number crunching involved.
(This question finally made me install PyOpenGL. So thanks!)
As far as I've seen, iterating over each pixel individually will never give good performance (not even in C++/C/assembly). Vectorization (on the CPU) will help. What will really help is using the GPU's ability to apply one operation (kernel) to a whole multi-dimensional array of elements in parallel. Specifically: using a fragment shader to calculate the colour of each pixel. But that means using a graphics API like OpenGL (or Vulkan/Direct3D), or a GPGPU/compute API like OpenCL (or CUDA).
If the resulting image is used within the graphics pipeline, it can stay on the GPU and be displayed directly from there. If the resulting image needs to be used e.g. in a GUI, saved to disk, or similar, it needs to be brought from the GPU to the CPU (maybe render-to-texture, reading the framebuffer, using off-screen buffers, or other options that I don't know).
import numpy as np
from OpenGL.GL import *
from OpenGL.GL import shaders
from OpenGL.GLUT import *

# Vertex shader: Pass through (no model-view-projection).
vsSrc = '''
#version 300 es
layout (location = 0) in vec4 posIn;
void main()
{
    gl_Position = posIn;
}
'''

# Fragment shader: Compute fractal color, per-pixel.
# en.wikipedia.org/wiki/Mandelbrot_set#Computer_drawings
fsSrc = '''
#version 300 es
precision mediump float;
out vec4 colorOut;

vec2 mapLinear(
    vec2 val,
    vec2 srcMin, vec2 srcMax,
    vec2 dstMin, vec2 dstMax
) {
    vec2 valNorm = (val - srcMin) / (srcMax - srcMin);
    return valNorm * (dstMax - dstMin) + dstMin;
}

void main()
{
    // Debugging: Return fixed color; see which pixels get it.
    //colorOut = vec4(0.0, 0.5, 0.0, 1.0);
    //return;

    // Originally, origin is top-left. Convert to Cartesian.
    vec2 pixelMin = vec2(0.0f, 720.0f);
    vec2 pixelMax = vec2(1280.0f, 0.0f);
    vec2 mbMin = vec2(-2.5f, -1.0f);
    vec2 mbMax = vec2(1.0f, 1.0f);
    vec2 mbExtent = mbMax - mbMin;
    vec2 mbCenter = mbMin + (mbExtent / 2.0f);
    vec2 fragMapped = mapLinear(
        gl_FragCoord.xy, pixelMin, pixelMax, mbMin, mbMax
    );

    float real = 0.0f;
    float imag = 0.0f;
    int iter = 0;
    const int maxIter = 500;
    while (
        ((real*real + imag*imag) < 4.0f) &&
        (iter < maxIter)
    ) {
        float realTemp = real*real - imag*imag + fragMapped.x;
        imag = 2.0f*real*imag + fragMapped.y;
        real = realTemp;
        ++iter;
    }

    // Using generated colors, instead of indexing a palette.
    // (Don't remember anymore where this came from,
    // or if it was a heuristic.)
    vec3 chosenColor;
    float iterNorm = float(iter) / float(maxIter);
    if (iterNorm > 0.5f) {
        float iterNormInverse = 1.0f - iterNorm;
        chosenColor = vec3(
            0.0f, iterNormInverse, iterNormInverse
        );
    }
    else {
        chosenColor = vec3(0.0f, iterNorm, iterNorm);
    }
    colorOut = vec4(chosenColor.xyz, 1.0f);
}
'''

def compileFractalShader():
    vs = shaders.compileShader(vsSrc, GL_VERTEX_SHADER)
    fs = shaders.compileShader(fsSrc, GL_FRAGMENT_SHADER)
    return shaders.compileProgram(vs, fs)

# Geometry: Just 2 triangles, covering the display surface.
# (So that the fragment shader runs for all surface pixels.)
def drawTriangles():
    topLeftTriangle = (
        1.0, 1.0, 0.0,
        -1.0, -1.0, 0.0,
        -1.0, 1.0, 0.0
    )
    bottomRightTriangle = (
        1.0, 1.0, 0.0,
        -1.0, -1.0, 0.0,
        1.0, -1.0, 0.0
    )
    verts = np.array(
        topLeftTriangle + bottomRightTriangle,
        dtype=np.float32
    )
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, verts)
    glEnableVertexAttribArray(0)
    glDrawArrays(GL_TRIANGLES, 0, 6)

def printShaderException(e):
    errorMsg, shaderSrc, shaderType = e.args
    print('Shader error message:')
    for line in errorMsg.split('\\n'): print(line)
    print('--')
    #print('Shader source:')
    #for line in shaderSrc[0].split(b'\n'): print(line)
    #print('--')
    print('Shader type:', shaderType)

WIDTH = 1280
HEIGHT = 720

glutInit()
glutInitWindowSize(WIDTH, HEIGHT)
glutCreateWindow('Fractals with fragment shaders.')

# Create shaders, after creating a window / opengl-context:
try: fractalShader = compileFractalShader()
except RuntimeError as e:
    printShaderException(e)
    exit()

glViewport(0, 0, WIDTH, HEIGHT)
glClearColor(0.5, 0.0, 0.5, 1.0)

def display():
    glClear(GL_COLOR_BUFFER_BIT)
    with fractalShader: drawTriangles()
    glutSwapBuffers()

glutDisplayFunc(display)
glutMainLoop()
This is entirely unoptimized. Also, as Kingsley wrote, zooming (not shown here) slowed things down even on the GPU (but, again: unoptimized).
I wanted a very simple spring system written in numpy. The system would be defined as a simple network of knots, linked by links. I'm not interested in evaluating the system over time; instead I want to go from an initial state, change a variable (usually move a knot to a new position) and solve the system until it reaches a stable state (the last applied force is below a given threshold). The knots have no mass, there's no gravity, and the forces are all derived from each link's current length versus its initial length. The only "special" variable is that each knot can be set as "anchored" (it doesn't move).
So I wrote this simple solver below, and included a simple example. Jump to the very end for my question.
import numpy as np
from numpy.core.umath_tests import inner1d

np.set_printoptions(precision=4)
np.set_printoptions(suppress=True)
np.set_printoptions(linewidth=150)
np.set_printoptions(threshold=10)


def solver(kPos, kAnchor, link0, link1, w0, cycles=1000, precision=0.001, dampening=0.1, debug=False):
    """
    kPos      : vector array - knot position
    kAnchor   : float array  - knot's anchor state, 0 = moves freely, 1 = anchored (not moving)
    link0     : int array    - array of links connecting each knot. each index corresponds to a knot
    link1     : int array    - array of links connecting each knot. each index corresponds to a knot
    w0        : float array  - initial link length
    cycles    : int          - eval stops when n cycles reached
    precision : float        - eval stops when highest applied force is below this value
    dampening : float        - keeps system stable during each iteration
    """

    kPos = np.asarray(kPos)
    pos = np.array(kPos)  # copy of kPos
    kAnchor = 1 - np.clip(np.asarray(kAnchor).astype(float), 0, 1)[:, None]
    link0 = np.asarray(link0).astype(int)
    link1 = np.asarray(link1).astype(int)
    w0 = np.asarray(w0).astype(float)
    F = np.zeros(pos.shape)
    i = 0

    for i in xrange(cycles):
        # Init force applied per knot
        F = np.zeros(pos.shape)

        # Calculate forces
        AB = pos[link1] - pos[link0]   # get link vectors between knots
        w1 = np.sqrt(inner1d(AB, AB))  # get link lengths
        AB /= w1[:, None]              # normalize link vectors
        f = (w1 - w0)                  # calculate force vectors
        f = f[:, None] * AB

        # Apply force vectors on each knot
        np.add.at(F, link0, f)
        np.subtract.at(F, link1, f)

        # Update point positions
        pos += F * dampening * kAnchor

        # If the maximum force applied is below our precision criteria, exit
        if np.amax(F) < precision:
            break

    # Debug info
    if debug:
        print 'Iterations: %s' % i
        print 'Max Force: %s' % np.amax(F)

    return pos
Here's some test data to show how it works. In this case I'm using a grid, but in reality this can be any type of network, like a string with many knots or a mess of polygons:
import cProfile
# Create a 5x5 3D knot grid
z = np.linspace(-0.5, 0.5, 5)
x = np.linspace(-0.5, 0.5, 5)[::-1]
x,z = np.meshgrid(x,z)
kPos = np.array([np.array(thing) for thing in zip(x.flatten(), z.flatten())])
kPos = np.insert(kPos, 1, 0, axis=1)
'''
array([[-0.5 , 0. , 0.5 ],
[-0.25, 0. , 0.5 ],
[ 0. , 0. , 0.5 ],
...,
[ 0. , 0. , -0.5 ],
[ 0.25, 0. , -0.5 ],
[ 0.5 , 0. , -0.5 ]])
'''
# Define the links connecting each knots
link0 = [0,1,2,3,5,6,7,8,10,11,12,13,15,16,17,18,20,21,22,23,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19]
link1 = [1,2,3,4,6,7,8,9,11,12,13,14,16,17,18,19,21,22,23,24,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24]
AB = kPos[link0]-kPos[link1]
w0 = np.sqrt(inner1d(AB,AB)) # this is a square grid, each link's initial length will be 0.25
# Set the anchor states
kAnchor = np.zeros(len(kPos)) # All knots will be free floating
kAnchor[12] = 1 # Middle knot will be anchored
This is what the grid looks like:
If we run my code using this data, nothing will happen since the links aren't pushing or stretching:
print np.allclose(kPos,solver(kPos, kAnchor, link0, link1, w0, debug=True))
# Returns True
# Iterations: 0
# Max Force: 0.0
Now let's move that middle anchored knot up a bit and solve the system:
# Move the center knot up a little
kPos[12] = np.array([0,0.3,0])
# eval the system
new = solver(kPos, kAnchor, link0, link1, w0, debug=True) # positions will have moved
#Iterations: 102
#Max Force: 0.000976603249133
# Rerun with cProfile to see how fast it runs
cProfile.run('solver(kPos, kAnchor, link0, link1, w0)')
# 520 function calls in 0.008 seconds
And here's what the grid looks like after being pulled by that single anchored knot:
Question:
My actual use cases are a little more complex than this example and solve a little too slowly for my taste: 100-200 knots in a network of anywhere between 200-300 links take a few seconds to solve.
How can I make my solver function run faster? I'd consider Cython, but I have zero experience with C. Any help would be greatly appreciated.
Your method, at a cursory glance, appears to be an explicit under-relaxation type of method. Calculate the residual force at each knot, apply a factor of that force as a displacement, repeat until convergence. It's the repeating until convergence that takes the time. The more points you have, the longer each iteration takes, but you also need more iterations for the constraints at one end of the mesh to propagate to the other.
Have you considered an implicit method? Write the equation for the residual force at each non-constrained node, assemble them into a large matrix, and solve in one step. Information now propagates across the entire problem in a single step. As an additional benefit, the matrix you construct should be sparse, which scipy has a module for.
Wikipedia: explicit and implicit methods
EDIT: Here is an example of an implicit method matching (roughly) your problem. This solution is linear, so it doesn't take into account the effect of the calculated displacement on the force; you would need to iterate (or use non-linear techniques) to capture that. Hope it helps.
#!/usr/bin/python3

import matplotlib.pyplot as pp
from mpl_toolkits.mplot3d import Axes3D
import numpy as np
import scipy as sp
import scipy.sparse
import scipy.sparse.linalg

#------------------------------------------------------------------------------#

# Generate a grid of knots
nX = 10
nY = 10
x = np.linspace(-0.5, 0.5, nX)
y = np.linspace(-0.5, 0.5, nY)
x, y = np.meshgrid(x, y)
knots = list(zip(x.flatten(), y.flatten()))

# Create links between the knots
links = []
# Horizontal links
for i in range(0, nY):
    for j in range(0, nX - 1):
        links.append((i*nX + j, i*nX + j + 1))
# Vertical links
for i in range(0, nY - 1):
    for j in range(0, nX):
        links.append((i*nX + j, (i + 1)*nX + j))

# Create constraints. This dict takes a knot index as a key and returns the
# fixed z-displacement associated with that knot.
constraints = {
    0          : 0.0,
    nX - 1     : 0.0,
    nX*(nY - 1): 0.0,
    nX*nY - 1  : 1.0,
    2*nX + 4   : 1.0,
    }

#------------------------------------------------------------------------------#

# Matrix i-coordinate, j-coordinate and value
Ai = []
Aj = []
Ax = []

# Right hand side array
B = np.zeros(len(knots))

# Loop over the links
for link in links:

    # Link geometry
    displacement = np.array([ knots[link[1]][i] - knots[link[0]][i] for i in range(2) ])
    distance = np.sqrt(displacement.dot(displacement))

    # For each node
    for i in range(2):

        # If it is not a constraint, add the force associated with the link to
        # the equation of the knot
        if link[i] not in constraints:

            Ai.append(link[i])
            Aj.append(link[i])
            Ax.append(-1/distance)

            Ai.append(link[i])
            Aj.append(link[not i])
            Ax.append(+1/distance)

        # If it is a constraint add a diagonal and a value
        else:

            Ai.append(link[i])
            Aj.append(link[i])
            Ax.append(+1.0)
            B[link[i]] += constraints[link[i]]

# Create the matrix and solve
A = sp.sparse.coo_matrix((Ax, (Ai, Aj))).tocsr()
X = sp.sparse.linalg.lsqr(A, B)[0]

#------------------------------------------------------------------------------#

# Plot the links
fg = pp.figure()
ax = fg.add_subplot(111, projection='3d')
for link in links:
    x = [ knots[i][0] for i in link ]
    y = [ knots[i][1] for i in link ]
    z = [ X[i] for i in link ]
    ax.plot(x, y, z)
pp.show()
I want to create a skeleton based on an existing segmentation, similar to what is done here (from scikit-image):
However, I want to do this on 3D data. Is there code for that somewhere out there? Preferably in Python, but any language helps.
I am aware of this great site; however, I don't think they offer any code.
I am planning on using this on volumes of about 500x500x500 voxels, so it should scale well...
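(Side note: newer scikit-image releases appear to ship a 3D-capable thinning function, skimage.morphology.skeletonize_3d. A minimal sketch, assuming a binary volume:)

import numpy as np
from skimage.morphology import skeletonize_3d

volume = np.zeros((100, 100, 100), dtype=np.uint8)
volume[40:60, 40:60, 10:90] = 1      # some binary segmentation
skeleton = skeletonize_3d(volume)    # same shape, thinned to a roughly 1-voxel-wide centerline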
I am developing the tool at the link below. The function getSkeletonize3D in the file convOptimize.py lets you thin your 3D data. It took about 30 minutes to give the result for the 512-cube I have. Let me know if you have any problems: https://github.com/3Scan/3scan-skeleton. The paper I used for the implementation is referenced in the comments in the code below.
Basically, this 3D skeletonization algorithm works as follows: each pass consists of 12 subiterations that iteratively remove boundary voxels in specific directions, until you are left with a skeleton in the center.
The main Python code needed for skeletonizing your data is below. It requires imports from other modules: rotationalOperators, which in turn imports from another file called Thin3dtemplates. I recommend you download the rotationalOperators, Thin3dtemplates and convOptimize Python scripts, and also lookuparray.npy, a pre-calculated lookup table stored as a numpy array that is used to decide whether a voxel should be marked for deletion or not. You need Python 3 with the scipy, numpy and pyeda modules installed to run this code.
import numpy as np
import time

from scipy import ndimage
from scipy.ndimage.filters import convolve

"""
the following subiteration functions are how each image is rotated to the next direction for removing
boundary voxels in the order described in the reference paper
us, ne, wd,..
"""
from rotationalOperators import firstSubiteration, secondSubiteration, thirdSubiteration, fourthSubiteration, fifthSubiteration, sixthSubiteration, seventhSubiteration, eighthSubiteration, ninthSubiteration, tenthSubiteration, eleventhSubiteration, twelvethSubiteration

"""
reference paper
http://web.inf.u-szeged.hu/ipcg/publications/papers/PalagyiKuba_GMIP1999.pdf
input should be a binary image/ already segmented
"""

"""
array that has calculated the validity of the 14 templates beforehand and stored each index which is
decimal number of the binary string of 26 values (sqrt(3) connectivity) that are around a single voxel
"""
lookUpTablearray = np.load('lookupTablearray.npy')


def _convolveImage(arr, flippedKernel):
    arr = np.ascontiguousarray(arr, dtype=np.uint64)
    result = convolve(arr, flippedKernel, mode='constant', cval=0)
    result[arr == 0] = 0
    return result


"""
each of the 12 iterations corresponds to each of the following
directions - us, ne, wd, es, uw, nd, sw, un, ed, nw, ue, sd
imported from template expressions
evaluated in advance using pyeda
https://pyeda.readthedocs.org/en/latest/expr.html
"""

sElement = ndimage.generate_binary_structure(3, 1)


def _getBouondariesOfimage(image):
    """
    function to find boundaries/border/edges of the array/image
    """
    erode_im = ndimage.morphology.binary_erosion(image, sElement)
    boundaryIm = image - erode_im
    return boundaryIm


"""
each of the 12 iterations corresponds to each of the following
directions - us, ne, wd, es, uw, nd, sw, un, ed, nw, ue, sd
imported from template expressions
evaluated in advance using pyeda
https://pyeda.readthedocs.org/en/latest/expr.html
"""

directionList = [firstSubiteration, secondSubiteration, thirdSubiteration, fourthSubiteration,
                 fifthSubiteration, sixthSubiteration, seventhSubiteration, eighthSubiteration,
                 ninthSubiteration, tenthSubiteration, eleventhSubiteration, twelvethSubiteration]


def _skeletonPass(image):
    """
    each pass consists of 12 serial subiterations and finding the
    boundaries of the padded image/array
    """
    boundaryIm = _getBouondariesOfimage(image)
    numPixelsremovedList = [] * 12
    boundaryIndices = list(set(map(tuple, list(np.transpose(np.nonzero(boundaryIm))))))
    for i in range(0, 12):
        convImage = _convolveImage(image, directionList[i])
        totalPixels, image = _applySubiter(image, boundaryIndices, convImage)
        print("number of pixels removed in the {} direction is {}".format(i, totalPixels))
        numPixelsremovedList.append(totalPixels)
    numPixelsremoved = sum(numPixelsremovedList)
    return numPixelsremoved, image


def _applySubiter(image, boundaryIndices, convImage):
    """
    each subiteration paralleley reduces the border voxels in 12 directions
    going through each voxel and marking if it can be deleted or not in a
    different image named temp_del and finally multiply it with the original
    image to delete the voxels so marked
    """
    temp_del = np.zeros_like(image)
    # boundaryIndicesCopy = copy.deepcopy(boundaryIndices)
    lenB = len(boundaryIndices)
    for k in range(0, lenB):
        temp_del[boundaryIndices[k]] = lookUpTablearray[convImage[boundaryIndices[k]]]
    numpixel_removed = np.einsum('ijk->', image * temp_del, dtype=int)
    image[temp_del == 1] = 0
    return numpixel_removed, image


def getSkeletonize3D(image):
    """
    function to skeletonize a 3D binary image with object in brighter contrast than background.
    In other words, 1 = object, 0 = background
    """
    assert np.max(image) in [0, 1]
    zOrig, yOrig, xOrig = np.shape(image)
    padImage = np.lib.pad(image, 1, 'constant', constant_values=0)
    start_skeleton = time.time()
    pass_no = 0
    numpixel_removed = 0
    while pass_no == 0 or numpixel_removed > 0:
        numpixel_removed, padImage = _skeletonPass(padImage)
        print("number of pixels removed in pass {} is {}".format(pass_no, numpixel_removed))
        pass_no += 1
    print("done %i number of pixels in %f seconds" % (np.sum(image), time.time() - start_skeleton))
    return padImage[1: zOrig + 1, 1: yOrig + 1, 1: xOrig + 1]


if __name__ == '__main__':
    sample = np.ones((5, 5, 5), dtype=np.uint8)
    resultSkel = getSkeletonize3D(sample)
    # gives a single voxel at the center
    print("resultSkel", resultSkel)
I have a set of points (3D) taken from a range scanner. Sample data can be found here: http://pastebin.com/RBfQLm56
I also have the following parameters for the scanner:
camera matrix
[3871.88184, 0, 950.736938;
0, 3871.88184, 976.1383059999999;
0, 0, 1]
distortion coeffs
[0.020208003; -1.41251862; -0.00355229038; -0.00438868301; 6.55825615]
camera to reference point (transform)
[0.0225656671, 0.0194614234, 0.9995559233, 1.2656986283;
-0.9994773883, -0.0227084301, 0.0230060289, 0.5798922567;
0.0231460759, -0.99955269, 0.0189388219, -0.2110195758;
0, 0, 0, 1]
I am trying to render these points properly using OpenGL, but the rendering does not look right. What is the correct way to set the OpenGL projection and modelview matrices? This is what I currently do:
znear = 0.00001
zfar = 100
K = array([[3871.88184, 0, 950.736938],[0, 3871.88184, 976.1383059999999],[0, 0, 1]])
Rt =array([[0.0225656671, 0.0194614234, 0.9995559233, 1.2656986283],[-0.9994773883, -0.0227084301, 0.0230060289, 0.5798922567],[0.0231460759, -0.99955269, 0.0189388219, -0.2110195758]])
ren.set_projection(K,zfar,znear)
ren.set_modelview_from_camera(Rt)
The functions being used are:
def set_projection(self, K, zfar, znear):
    glMatrixMode(GL_PROJECTION)
    glLoadIdentity()
    f_x = K[0,0]
    f_y = K[1,1]
    c_x = K[0,2]
    c_y = K[1,2]
    fovY = 1/(float(f_x)/height * 2)
    aspectRatio = (float(width)/height) * (float(f_y)/f_x)
    near = zfar
    far = znear
    frustum_height = near * fovY
    frustum_width = frustum_height * aspectRatio
    offset_x = (width/2 - c_x)/width * frustum_width * 2
    offset_y = (height/2 - c_y)/height * frustum_height * 2
    glFrustum(-frustum_width - offset_x, frustum_width - offset_x, -frustum_height - offset_y, frustum_height - offset_y, near, far)
def set_modelview_from_camera(self, Rt):
    glMatrixMode(GL_MODELVIEW)
    glLoadIdentity()
    Rx = array([[1,0,0],[0,0,-1],[0,1,0]])
    R = Rt[:,:3]
    U,S,V = linalg.svd(R)
    R = dot(U,V)
    R[0,:] = -R[0,:]
    t = Rt[:,3]
    M = eye(4)
    M[:3,:3] = dot(R,Rx)
    M[:3,3] = t
    M = M.T
    m = M.flatten()
    glLoadMatrixf(m)
Then I just render points (pasting snippet):
def renderLIDAR(self, filename):
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
    glPushMatrix()
    glEnable(GL_DEPTH_TEST)
    glClear(GL_DEPTH_BUFFER_BIT)
    glPointSize(1.0)
    f = open(filename, 'r')
    f.readline()  # Contains number of particles
    for line in f:
        line = line.split(' ')
        glBegin(GL_POINTS)
        glColor3f(0.0, 1.0, 0.0)
        x = float(line[0])
        y = float(line[1])
        z = float(line[2])
        glVertex3f(x, y, z)
        #print x,y,z
        glEnd()
    glPopMatrix()
The matrices you get back, most notably the last one in your question, are what in OpenGL is the composition of projection and modelview, also called the ModelViewProjection matrix, i.e.
MVP = P · M
As long as you're not interested in performing illumination calculations, you can use just that in a vertex shader, i.e.
#version 330
uniform mat4 MVP;
in vec3 position;
void main()
{
    gl_Position = MVP * vec4(position, 1);
}
BTW, OpenGL, and probably the library you're using as well, uses column-major order, i.e. the order of the elements in memory is
0 4 8 c
1 5 9 d
2 6 a e
3 7 b f
so what's written in source code must be thought of as "transposed" (of course it is not). Since the matrix you wrote follows the same scheme, you can just put it into the uniform as it is. The only question that remains is the boundary of the NDC space used by the range scanner, but that could be taken care of with an additional matrix. OpenGL uses the range [-1, 1]^3, so the worst thing that can happen is that, if it's in the other popular NDC range [0, 1]^3, you'll see your geometry squeezed into the upper left corner of your window, and maybe turned "inside out" if the Z axis goes in the other direction. Just try it; I'd say it already matches OpenGL.
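For illustration, here is a small sketch (PyOpenGL; shader_program is a placeholder for your compiled program id) of the two ways to hand a 4x4 numpy matrix to the MVP uniform: pass it as-is with the transpose flag set to GL_FALSE if its memory layout is already column-major, or let OpenGL transpose a row-major ("as written on paper") matrix with GL_TRUE.

import numpy as np
from OpenGL.GL import glGetUniformLocation, glUniformMatrix4fv, GL_FALSE, GL_TRUE

MVP = np.eye(4, dtype=np.float32)                  # stand-in for your 4x4 matrix
loc = glGetUniformLocation(shader_program, "MVP")  # shader_program: your compiled shader program

# If the array's memory layout is already column-major:
glUniformMatrix4fv(loc, 1, GL_FALSE, MVP)

# If instead it is laid out row-major, let OpenGL transpose it:
# glUniformMatrix4fv(loc, 1, GL_TRUE, MVP)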
Anyway, if you want to use it with illumination, you have to decompose it into a projection and a modelview part. That's easier said than done, but a good starting point is to orthonormalize the upper-left 3×3 submatrix, which yields the rotational part of the modelview matrix M. You then have to find a matrix P that, when left-multiplied with M, yields the original matrix. That's an overdetermined set of linear equations, so a Gauss-Jordan scheme can do it. And if I'm not entirely mistaken, what you already have in the form of that camera matrix is either the decomposed M or P (I'd go for M).
Once you have that, you may want to put the translational part (the 4th column) into the modelview matrix as well.
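A rough numpy sketch of that decomposition idea (my own illustration, not a drop-in solution): orthonormalize the upper-left 3x3 block via SVD to get the rotation, build M from it plus the translation column, then recover P as whatever is left over.

import numpy as np

MVP = np.eye(4)                      # stand-in for the full 4x4 matrix you got back

# Orthonormalize the upper-left 3x3 block to get a pure rotation
U, _, Vt = np.linalg.svd(MVP[:3, :3])
R = U @ Vt

# Modelview: rotation plus the translation column
M = np.eye(4)
M[:3, :3] = R
M[:3, 3] = MVP[:3, 3]

# Projection candidate: chosen so that P @ M reproduces the original matrix
P = MVP @ np.linalg.inv(M)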