How to get all connections between pixels in an image - python

While developing the A* algorithm for path planning, I am trying to build a list called edges containing every connection from a pixel to its neighbouring pixels that lies in non-occupied space (where the pixel value is 1).
The image from which I compute these connections is 351x335 pixels.
Pixels P2, P4, P6 and P8 are at distance 1 from the center, while pixels P1, P3, P5 and P7 are at an approximate distance of 1.4 from the center (Pythagorean theorem); see image:
The edges loop never finishes running. Is this taking too much computational time because of the nested loops? The vertices loop finishes in a second or so.
Note: I am initializing the lists as two very big arrays and trimming them at the end, to avoid dynamic allocation.
EDIT: The image (imOut) is the following one:
Link to image used as map
EDIT: The full code is the following:
'''
IMPORTS
'''
import cv2 as cv # Import OpenCV
import numpy as np # Import Numpy
from skimage.color import rgb2gray
import math
import sys
from datetime import datetime
import matplotlib.pyplot as plt
'''
CODE SETTINGS
'''
# Allow printing the full array without truncation
np.set_printoptions(threshold=sys.maxsize) # np.nan is rejected by recent NumPy versions
'''
MAIN PROGRAM
'''
im = rgb2gray(cv.imread('map1.png'))
imOut = im.copy() # Making a true copy of the image to output (plain assignment would only alias it)
# plt.imshow(imOut)
# plt.show()
QGoal = np.array([0, 0]) # goal pixel [row, col]; placeholder here, set by the planner in the full program
vertices = np.zeros((imOut.shape[0]*imOut.shape[1], 3)) # 1st col x, 2nd col y, 3rd col heuristic (euclidean distance to QGoal)
edges = np.zeros((100*imOut.shape[0]*imOut.shape[1], 3)) # 1st col 1st vertex, 2nd col 2nd vertex, 3rd col edge length
# Initialization of vertices with start pos.
# CREATE VERTICES AND EDGES LISTS FROM THE MAP PROVIDED
'''
Vertices List -> Add all obstacle-free configurations to the vertices list.
Edges List -> Go pixel by pixel in the map and, if they are obstacle-free configurations,
add, out of the 8 neighbouring pixels, the ones that are obstacle-free as feasible edges.
'''
# Vertices list creation
indexVertices = 0
for i in range(0, imOut.shape[0]):
    for j in range(0, imOut.shape[1]):
        if imOut[i,j] == 1: # If it's in free space
            # Compute heuristic to goal node (euclidean distance).
            heuristic = math.sqrt(pow(i-QGoal[0],2)
                                  + pow(j-QGoal[1],2))
            vertices[indexVertices,:] = [i, j, heuristic]
            indexVertices = indexVertices + 1
vertices = vertices[0:indexVertices,:]
# Edges list creation
# I loop over the same vertices array, as it only contains the free pixels.
indexEdges = 0
for i in range(0, vertices.shape[0]):
    for k in range(0, vertices.shape[0]):
        # If it is not the same pixel that we are checking
        if i != k:
            # Check if it is a neighbouring pixel and, if so,
            # add it to the list of edges with its distance (path cost).
            pathCost = math.sqrt(pow(vertices[i,0] - vertices[k,0], 2)
                                 + pow(vertices[i,1] - vertices[k,1], 2))
            if pathCost == 1 or round(pathCost,1) == 1.4:
                edges[indexEdges,:] = [i, k, pathCost]
                indexEdges = indexEdges + 1
edges = edges[0:indexEdges,:]
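(Note on scale: the pairwise loop above does vertices.shape[0] squared distance checks; with roughly 10^5 free pixels on a 351x335 map that is on the order of 10^10 iterations, which is why it never seems to finish. Looking up only the 8 neighbours of each free pixel is linear instead. A minimal sketch, reusing the vertices and edges arrays from above; pos2idx and offsets are illustrative names:)

# Map (i, j) pixel positions to their row index in the vertices array
pos2idx = {(int(vertices[n, 0]), int(vertices[n, 1])): n for n in range(vertices.shape[0])}
# The 8 neighbour offsets and their path costs (1 for straight, sqrt(2) for diagonal)
offsets = [(-1, -1, math.sqrt(2)), (-1, 0, 1.0), (-1, 1, math.sqrt(2)), (0, -1, 1.0),
           (0, 1, 1.0), (1, -1, math.sqrt(2)), (1, 0, 1.0), (1, 1, math.sqrt(2))]
indexEdges = 0
for n in range(vertices.shape[0]):
    i, j = int(vertices[n, 0]), int(vertices[n, 1])
    for di, dj, cost in offsets:
        k = pos2idx.get((i + di, j + dj)) # neighbour's vertex index; None if occupied or outside
        if k is not None:
            edges[indexEdges, :] = [n, k, cost]
            indexEdges += 1
edges = edges[0:indexEdges, :]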

Related

How to find branch and intersection points of a binary skeletonized image?

I have the following code which looks for feature points in a binary skeletonized image. I need to find ending points, branch points and intersection points separately and display their coordinates as (x, y, point type). For example, (147, 45, 3), where 3 is the number of adjacent pixels (branch point).
import cv2 as cv
import numpy as np

def extraction(img):
    # Find row and column locations that are non-zero
    (rows, cols) = np.nonzero(img)
    # Initialize empty list of co-ordinates
    skel_coords = []
    # For each non-zero pixel
    for (r, c) in zip(rows, cols):
        # Extract an 8-connected neighbourhood
        (col_neigh, row_neigh) = np.meshgrid(np.array([c - 1, c, c + 1]), np.array([r - 1, r, r + 1]))
        # Cast to int to index into image
        col_neigh = col_neigh.astype('int')
        row_neigh = row_neigh.astype('int')
        # Convert into a single 1D array and check for non-zero locations
        pix_neighbourhood = img[row_neigh, col_neigh].ravel() != 0
        # Depending on the number of non-zero locations (centre pixel included),
        # add this pixel to our list of co-ordinates
        if np.sum(pix_neighbourhood) == 2:
            skel_coords.append((c, r, 1))
        elif np.sum(pix_neighbourhood) == 4:
            skel_coords.append((c, r, 3))
        elif np.sum(pix_neighbourhood) == 5:
            skel_coords.append((c, r, 4))
    return skel_coords

img = cv.imread('abc.png', 0)
coord = extraction(img)
for element in coord:
    print(element)
The code correctly counts the number of neighbouring pixels, but the points it finds are not actually branch and crossing points. You can see it in the picture below (the found point is marked in gray):
An enlarged image of a 3x3 pixel matrix (below, two white pixels are in a row):
I need to find points of the following kind for branch points (so that neighboring pixels alternate):
Does anyone have any ideas how to implement this? I would be very grateful for your help!
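One common way to implement the alternation requirement is the crossing number: walk the 8 neighbours in circular order and count the 0-to-1 transitions. An end point gives 1 transition, a branch point 3, a crossing point 4. A hedged sketch of that test, meant to replace the plain np.sum count in the code above (crossing_number is an illustrative name; it assumes the pixel is not on the image border):

def crossing_number(img, r, c):
    # 8 neighbours in clockwise circular order, starting at the top-left
    ring = [img[r-1, c-1], img[r-1, c], img[r-1, c+1], img[r, c+1],
            img[r+1, c+1], img[r+1, c], img[r+1, c-1], img[r, c-1]]
    ring = [1 if v != 0 else 0 for v in ring]
    # count 0 -> 1 transitions around the ring (wrapping around at the end)
    return sum(ring[k] == 0 and ring[(k + 1) % 8] == 1 for k in range(8))

# crossing number 1 -> end point, 3 -> branch point, 4 -> crossing point

This also handles the failure case shown in the question: two white neighbours sitting side by side contribute a single transition instead of inflating the plain neighbour count.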

image distance transform different xyz voxel sizes

I would like to find the minimum distance of each voxel to a boundary element in a binary image in which the z voxel size differs from the xy voxel size. That is to say, a single voxel represents a 225x110x110 (zyx) nm volume.
Normally, I would do something with scipy.ndimage.morphology.distance_transform_edt (https://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.ndimage.morphology.distance_transform_edt.html), but this assumes isotropic voxel sizes:
dtrans_stack = np.zeros_like(segm_stack) # empty array to add to
### iterate over the t dimension and get distance transform
for t_iter in range(dtrans_stack.shape[0]):
    segm_ = segm_stack[t_iter, ...] # segmented image in single t
    neg_segm = np.ones_like(segm_) - segm_ # negative of the segmented image
    # get a distance transform with isotropic voxel sizes
    dtrans_stack_iso = distance_transform_edt(segm_)
    dtrans_neg_stack_iso = -distance_transform_edt(neg_segm) # make distance in the segmented image negative
    dtrans_stack[t_iter, ...] = dtrans_stack_iso + dtrans_neg_stack_iso
I can do this by brute force using scipy.spatial.distance.cdist (https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.distance.cdist.html), but this takes ages and I'd rather avoid it if I can:
vox_multiplier = np.array([z_voxelsize, xy_voxelsize, xy_voxelsize]) # array of voxel sizes
## get a subset of coordinates so I'm not wasting time in empty space
disk_size = 5 # size of disk for binary dilation
mip_tz = np.max(np.max(decon_stack, axis = 1), axis = 0)
thresh_li = threshold_li(mip_tz) # from skimage.filters
mip_mask = mip_tz >= thresh_li
mip_mask = remove_small_objects(mip_mask) # from skimage.morphology
mip_dilated = binary_dilation(mip_mask, disk(disk_size)) # from skimage.morphology
# get the coordinates of the mask
coords = np.argwhere(mip_dilated == 1)
ycoords = coords[:, 0]
xcoords = coords[:, 1]
# get the lower and upper bounds of the xyz coordinates
ylb = np.min(ycoords)
yub = np.max(ycoords)
xlb = np.min(xcoords)
xub = np.max(xcoords)
zlb = 0
zub = zdims - 1
# make zeros arrays of the proper size
dtrans_stack = np.zeros_like(segm_stack)
dtrans_stack_neg = np.zeros_like(segm_stack) # this will be the distance transform into the low intensity area
for t_iter in range(dtrans_stack.shape[0]):
    segm_ = segm_stack[t_iter, ...]
    neg_segm_ = np.ones_like(segm_) - segm_ # negative of the segmented image
    # get the coordinates of the segmented image and convert to nm
    segm_coords = np.argwhere(segm_ == 1)
    segm_coords_nm = vox_multiplier * segm_coords
    neg_segm_coords = np.argwhere(neg_segm_ == 1)
    neg_segm_coords_nm = vox_multiplier * neg_segm_coords
    # make empty arrays for the xy and z distance transforms
    dtrans_stack_x = np.zeros_like(segm_)
    dtrans_stack_y = np.zeros_like(segm_)
    dtrans_stack_z = np.zeros_like(segm_)
    dtrans_stack_neg_x = np.zeros_like(segm_)
    dtrans_stack_neg_y = np.zeros_like(segm_)
    dtrans_stack_neg_z = np.zeros_like(segm_)
    # iterate over zyx and determine the minimum distance in nm from the segmented image
    for z_iter in range(zlb, zub):
        for y_iter in range(ylb, yub):
            for x_iter in range(xlb, xub):
                coord_nm = vox_multiplier * np.array([z_iter, y_iter, x_iter]) # change coords from pixel to nm
                coord_nm = coord_nm.reshape(1, 3) # reshape for distance calculation
                dists_segm = distance.cdist(coord_nm, segm_coords_nm) # distance from the segmented image
                dists_neg_segm = distance.cdist(coord_nm, neg_segm_coords_nm) # distance from the negative segmented image
                dtrans_stack[t_iter, z_iter, y_iter, x_iter] = np.min(dists_segm) # add minimum distance to distance transform stack
                dtrans_stack_neg[t_iter, z_iter, y_iter, x_iter] = np.min(dists_neg_segm) # (was dtrans_neg_stack; renamed to match the array created above)
Here is an image of a single z-slice of the segmented image, if that helps to clear things up:
single z-slice of segmented image
Normally, I would do something with scipy.ndimage.morphology.distance_transform_edt but this assumes isotropic voxel sizes:
It does no such thing! You are looking for the sampling= parameter. From the latest version of the docs:
Spacing of elements along each dimension. If a sequence, must be of length equal to the input rank; if a single number, this is used for all axes. If not specified, a grid spacing of unity is implied.
The wording "sampling" or "spacing" is probably a bit mysterious if you think of pixels as little squares/cubes, and that is probably why you missed it. In most situations, it is better to think of pixels as point samples on a grid, with fixed spacing between samples. I recommend Alvy Ray Smith's "A Pixel Is Not a Little Square" for a better understanding of this terminology.
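Applied to the stack in the question, the call becomes (a sketch; the (z, y, x) spacings in nm come from the voxel sizes stated above):

from scipy.ndimage import distance_transform_edt

# distances now come back in nm, respecting the anisotropic voxel size
dtrans_stack_iso = distance_transform_edt(segm_, sampling=(225.0, 110.0, 110.0))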

Python NetworkX: How to access edges with a specific data value

For a bond percolation model I want to build a square lattice with NetworkX using grid_2d_graph(l,l). This gives me a square lattice of size lxl with every edge open.
The idea is that I pick an edge of the graph at random and check whether it has already been assigned (1 to leave the edge as it is, 0 to add it to the list of edges to remove from the graph). If it has not been assigned yet (the edge has 'state' = -1), I randomly decide with a specific probability p whether the edge is open (leave it as it is) or closed (put it on the list of edges to remove).
Therefore, I saved all edges with the data attribute 'state' = -1 in a list and then tried to randomly access an entry of this list to change its 'state' attribute. But it seems that this operation is not allowed; when I try to edit the states, I receive the following error:
File "bond-percolation.py", line 39, in <module>
ed[10][2] = 1
TypeError: 'tuple' object does not support item assignment
So my question is, how can I randomly pick an edge and change the value of 'state' efficiently?
Here is my code:
import numpy as np
import networkx as nx
import matplotlib.pyplot as plt
import random

# Width of the quadratic lattice
l = 30
# Create grid
G = nx.grid_2d_graph(l, l)
# Total number of edges in the lattice
n = 2 * l * (l - 1)
m = 0
# Set the probability that an edge is open
p = 0.17
# Create empty lists to collect closed edges later
ed = []
ld = []
for e in G.edges(data = 'state', default = -1):
    ed.append(e)
# Creating the lattice
while (m != n):
    i = np.random.randint(n - 1)
    a = random.random()
    if (ed[i][2] == -1):
        if (a > p):
            ld.append(ed[i])
        else:
            ed[i][2] = 1 # this line raises the TypeError
        m = m + 1
# We need this so that the lattice is drawn vertically to the horizon
pos = dict((l, l) for l in G.nodes())
# Draw the lattice
nx.draw_networkx(G, pos = pos, with_labels = False, node_size = 0)
# Plot it on the screen
plt.axis('off')
plt.show()
I believe you can simply search for it using the edge selector.
There's no built-in selector (afaik) but you can create a helper function that loops through the edges and returns your list.
def filter_edges(value):
    edge_list = []
    for u, v, s in G.edges(data='state'):
        if s == value:
            edge_list.append((u, v))
    return edge_list
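For example, the still-unassigned edges would then be (the name unassigned is just illustrative):

unassigned = filter_edges(-1)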
Re-reading your error, I don't think it is related to randomly picking edges. Instead, you are incorrectly trying to assign the state value.
ed[10] is a tuple (u, v, state) as returned by G.edges(data='state'), and tuples are immutable, so ed[10][2] = 1 raises the TypeError. Even if it worked, it would only change your local list, not the graph.
To change the attribute, assign to the edge data on G itself: G.edges[u, v]['state'] = 1.
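A sketch of the fix inside the while loop, reusing ed and G from the question:

u, v, s = ed[i]                # unpack the (node, node, state) tuple
G.edges[u, v]['state'] = 1     # update the attribute on the graph itself
ed[i] = (u, v, 1)              # keep the local list in sync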

Convert Numpy Array to Monotone Graph (networkx)

I have a simple array of 1s and 0s, and I want to convert this array to a graph using NetworkX with the following conditions:
monotone
Directional
Weighted graph (go/no go areas)
Starts in the lower left hand corner and works right
There is a built-in function called from_numpy_matrix (see this).
The goal is to take this graph and show that I can get from the lower left hand corner of the matrix (think raster dataset) to the upper right hand corner without moving backwards or down.
Example array:
array = [[0,0,1,0,0],
         [1,0,0,1,0],
         [1,0,1,1,0],
         [0,0,1,1,0]]
myarray = np.array(array)
0 means go area, 1 means blocked.
That was fun.
from_numpy_matrix doesn't help as there is no simple transformation from your maze to an adjacency matrix. Instead it is much easier to iterate over allowed positions (i.e. "not wall") and check if there is an allowed position in the allowed directions (up, right, diagonal up-right).
import numpy as np
import matplotlib.pyplot as plt
import networkx as nx

def maze_to_graph(is_wall, allowed_steps):
    """
    Arguments:
    ----------
    is_wall -- 2D boolean array marking the position of walls in the maze
    allowed_steps -- list of allowed steps; e.g. [(0, 1), (1, 1)] signifies that
                     coming from tile (i, j) only tiles (i, j+1) and (i+1, j+1)
                     are reachable (iff there is no wall)

    Returns:
    --------
    g -- networkx.DiGraph() instance
    pos2idx -- dict mapping (i, j) position to node idx (for testing if path exists)
    idx2pos -- dict mapping node idx to (i, j) position (for plotting)
    """
    # map array indices to node indices and vice versa;
    # node_pos must be a list, as the zip iterator is consumed twice below
    node_idx = range(np.sum(~is_wall))
    node_pos = list(zip(*np.where(~is_wall)))
    pos2idx = dict(zip(node_pos, node_idx))

    # create graph
    g = nx.DiGraph()
    for (i, j) in node_pos:
        for (delta_i, delta_j) in allowed_steps: # try to step in all allowed directions
            if (i + delta_i, j + delta_j) in pos2idx: # i.e. target node also exists
                g.add_edge(pos2idx[(i, j)], pos2idx[(i + delta_i, j + delta_j)])

    idx2pos = dict(zip(node_idx, node_pos))

    return g, idx2pos, pos2idx

def test():
    arr = np.array([[0,0,1,0,0],
                    [1,0,0,1,0],
                    [1,0,1,1,0],
                    [0,0,1,1,0]]).astype(bool)
    steps = [(0, 1),  # right
             (-1, 0), # up
             (-1, 1)] # diagonal up-right
    g, idx2pos, pos2idx = maze_to_graph(arr, steps)
    nx.draw(g, pos=idx2pos, node_size=1200, node_color='w', labels=idx2pos)
    start = (3, 0)
    stop = (0, 4)
    print("Has path:", nx.has_path(g, pos2idx[start], pos2idx[stop]))
    return
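If the route itself is wanted rather than a yes/no answer, nx.shortest_path recovers one (a short addition, reusing the names from test() above):

path = nx.shortest_path(g, pos2idx[start], pos2idx[stop]) # node ids along one valid route
route = [idx2pos[n] for n in path]                        # back to (row, col) positions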

Vtk inserts incorrect color between nodes when mapping texture to mesh

Hi, I am trying to map a texture to a 3D mesh using Mayavi and the Python bindings of VTK. I am visualising a Wavefront .obj file. The obj is a 3D photograph of a face. The texture image is a composite of three 2D photographs.
Each node in the mesh has a (u, v) co-ordinate in the image, which defines its colour. Different regions of the mesh draw their colours from different sections of the image. To illustrate this I have replaced the actual texture image with this one:
And mapped this to the mesh instead.
The problem I am having is illustrated around the nose. At the border between red and green there is an outline of blue. Closer inspection of this region in wireframe mode shows that it is not a problem with the uv mapping, but with how vtk is interpolating colour between two nodes. For some reason it is adding a piece of blue in between two nodes where one is red and one is green.
This causes serious problems when visualising using the real texture.
Is there a way to force vtk to choose the colour of one or the other neighbouring nodes for the colour between them? I tried turning "edge-clamping" on, but this did not achieve anything.
The code that I am using is below, and you can access the files in question from here: https://www.dropbox.com/sh/ipel0avsdiokr10/AADmUn1-qmsB3vX7BZObrASPa?dl=0
I hope this is a simple solution.
from numpy import *
from mayavi import mlab
from tvtk.api import tvtk
import os
from vtk.util import numpy_support
def obj2array(f):
    """function for reading a Wavefront obj"""
    if type(f)==str:
        if os.path.isfile(f)==False:
            raise ValueError('obj2array: unable to locate file ' + str(f))
        f = open(f)
    vertices = list()
    connectivity = list()
    uv = list()
    vt = list()
    fcount = 0
    for l in f:
        line = l.rstrip('\n')
        data = line.split()
        if len(data)==0:
            pass
        else:
            if data[0] == 'v':
                vertices.append(atleast_2d(array([float(item) for item in data[1:4]])))
            elif data[0]=='vt':
                uv.append(atleast_2d(array([float(item) for item in data[1:3]])))
            elif data[0]=='f':
                nverts = len(data)-1 # number of vertices comprising each face
                if fcount == 0: # on first face establish face format
                    fcount = fcount + 1
                    if data[1].find('/')==-1: # Case 1: v
                        case = 1
                    elif data[1].find('//')!=-1: # Case 4: v//vn (find returns an index, not a bool; '==True' was a bug)
                        case = 4
                    elif len(data[1].split('/'))==2: # Case 2: v/vt
                        case = 2
                    elif len(data[1].split('/'))==3: # Case 3: v/vt/vn
                        case = 3
                if case == 1:
                    f = atleast_2d([int(item) for item in data[1:len(data)]])
                    connectivity.append(f)
                if case == 2:
                    splitdata = [item.split('/') for item in data[1:len(data)]]
                    f = atleast_2d([int(item[0]) for item in splitdata])
                    connectivity.append(f)
                if case == 3:
                    splitdata = [item.split('/') for item in data[1:len(data)]]
                    f = atleast_2d([int(item[0]) for item in splitdata])
                    connectivity.append(f)
                if case == 4:
                    splitdata = [item.split('//') for item in data[1:len(data)]]
                    f = atleast_2d([int(item[0]) for item in splitdata])
                    connectivity.append(f)
    vertices = concatenate(vertices, axis = 0)
    if len(uv)==0:
        uv = None
    else:
        uv = concatenate(uv, axis = 0)
    if len(connectivity) != 0:
        try:
            conarray = concatenate(connectivity, axis=0)
        except ValueError:
            # triangulate and triangulate_mesh are assumed to be defined elsewhere in the full program
            if triangulate==True:
                conarray = triangulate_mesh(connectivity, vertices)
            else:
                raise ValueError('obj2array: not all faces triangles?')
        if conarray.shape[1]==4:
            if triangulate==True:
                conarray = triangulate_mesh(connectivity, vertices)
    return vertices, conarray, uv
# load texture image
texture_img = tvtk.Texture(interpolate = 1, edge_clamp = 1)
texture_img.input = tvtk.BMPReader(file_name='HM_1_repose.bmp').output
#load obj
verts, triangles, uv = obj2array('HM_1_repose.obj')
# make 0-indexed
triangles = triangles-1
surf = mlab.triangular_mesh(verts[:,0],verts[:,1],verts[:,2],triangles)
tc=numpy_support.numpy_to_vtk(uv)
pd = surf.mlab_source.dataset._vtk_obj.GetPointData()
pd.SetTCoords(tc)
surf.actor.actor.mapper.scalar_visibility=False
surf.actor.enable_texture = True
surf.actor.actor.texture = texture_img
mlab.show(stop=True)
You can turn off all interpolation (change interpolate = 1 to interpolate = 0 in your example), but there is not a way to turn off interpolation at just the places where it would interpolate across sub-images of the texture – at least not without writing your own fragment shader. This will likely look crude.
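In the snippet above that change is just:

texture_img = tvtk.Texture(interpolate = 0, edge_clamp = 1) # nearest-texel lookup, no blending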
Another solution would be to create 3 texture images with transparent texels at each location that is not part of the actor's face. Then render the same geometry with the same texture coordinates but a different image each time (i.e., have 3 actors each with the same polydata but a different texture image).
I just ran into this exact problem as well and found that the reason this happens is that VTK assumes a 1-to-1 relationship between points in the polydata and uv coordinates when rendering the actor and associated vtkTexture. However, in my case and the case of the OP, there are neighbouring triangles that are mapped to different sections of the image, so they have very different uv coordinates. The points shared by these neighbouring faces can only have one uv coordinate (or TCoord) associated with them, but they actually need 2 (or more, depending on your case).
My solution was to loop through and duplicate the points that lie on the seams/borders and create a new vtkCellArray of triangles with these duplicated pointIds. Then I simply replaced the vtkPolyData Polys() list with the new triangles. It would have been much easier to duplicate the points and update the existing pointIds for each of the triangles that needed it, but I couldn't find a way to update the cells properly.
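A minimal numpy sketch of that duplication, assuming per-corner uv indices are available (e.g. from the obj 'f v/vt' records); verts, uv, tri_v and tri_vt are illustrative names, not variables from the code above:

import numpy as np

pair2new = {}  # (vertex id, uv id) -> duplicated point id
new_verts, new_uv, new_tris = [], [], []
for face_v, face_vt in zip(tri_v, tri_vt):      # one row of 3 ids per triangle
    corner_ids = []
    for v_id, vt_id in zip(face_v, face_vt):
        key = (v_id, vt_id)
        if key not in pair2new:                 # duplicate a point once per distinct uv
            pair2new[key] = len(new_verts)
            new_verts.append(verts[v_id])
            new_uv.append(uv[vt_id])
        corner_ids.append(pair2new[key])
    new_tris.append(corner_ids)
new_verts = np.asarray(new_verts)
new_uv = np.asarray(new_uv)
new_tris = np.asarray(new_tris)                 # feed these into the polydata instead

Points interior to a texture region keep a single copy; only seam points get one copy per distinct uv, which is exactly what removes the blue bleed between regions.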
