I am currently using Matplotlib's imshow() function to display the depth image of an IFM 3D camera. I can display a single scene from the camera with no issues. However, I am finding that the displayed image does not change from one scene to the next (i.e. when I change what the camera is looking at), even though the actual data of the depth map does change.
I have been looking into how to dynamically update images with Matplotlib, but I haven't found the right solution.
The depth map is stored under the key "distance" in the result dictionary returned by the readNextFrame() method, but my question concerns the plotting code. In short, the code looks a little something like this:
import o3d3xx
import array
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
imageWidth = 176
imageHeight = 132
#create ImageClient Object
pcic = o3d3xx.ImageClient("Camera IP",50010)
#store distance array as variable 'distance'
result = pcic.readNextFrame()
distance = result["distance"]
#convert to np array and reshape
distance = np.asarray(distance)
distance = distance.reshape(imageHeight,imageWidth)
#plot distance array
plt.figure()
plt.title("Distance Image")
plt.imshow(distance)
plt.show()
After changing the scene, I know that the actual distance array is changing because I have compared the data arrays from one scene to the next. The only way I can get around this issue is by creating a new ImageClient object, but I would like to avoid that.
Any ideas as to how to get around this? Ultimately I would like to call readNextFrame() and use imshow() to display a new depth image once the scene has changed without creating a new ImageClient object.
Easy one:
figure, axis = plt.subplots(figsize=(7.6, 6.1))
im = axis.imshow(***SOME ARRAY***)
If you want to reset the plot data, just call
im.set_data(***SOME OTHER ARRAY***)
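Applied to the camera code above, a minimal sketch could look like the following (assuming the pcic object, imageHeight and imageWidth from the question; the half-second pause is an arbitrary choice):
import numpy as np
import matplotlib.pyplot as plt

fig, axis = plt.subplots()
axis.set_title("Distance Image")

# First frame sets up the image artist
result = pcic.readNextFrame()
distance = np.asarray(result["distance"]).reshape(imageHeight, imageWidth)
im = axis.imshow(distance)

# Subsequent frames only update the existing artist
while True:
    result = pcic.readNextFrame()
    distance = np.asarray(result["distance"]).reshape(imageHeight, imageWidth)
    im.set_data(distance)
    im.autoscale()   # rescale the color mapping to the new data
    plt.pause(0.5)   # redraw the figure and wait half a second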
I'm trying to use imshow to plot a 2D Fourier transform of my data. However, imshow plots the data against its index in the array. I would like to plot the data against a set of arrays I have containing the corresponding frequency values (one array for each dimension), but I can't figure out how.
I have a 2D array of data (a Gaussian pulse signal) that I Fourier transform with np.fft.fft2. This all works fine. I then get the corresponding frequency bins for each dimension with np.fft.fftfreq(len(data))*sampling_rate. I can't figure out how to use imshow to plot the data against these frequencies, though. The 1D equivalent of what I'm trying to do is using plt.plot(x,y) rather than just plt.plot(y).
My first attempt was to use imshow's extent keyword, but as far as I can tell that just changes the axis limits, not the actual bins.
My next solution was to use np.fft.fftshift to arrange the data in numerical order and then simply re-scale the axis using this answer: Change the axis scale of imshow. However, the mapping from index to frequency bin is not a pure scaling factor; there's typically a constant offset as well.
My next attempt was to use hist2d instead of imshow, but that doesn't work, since hist2d plots the number of times an ordered pair occurs, while I want to plot a scalar value corresponding to specific ordered pairs (i.e. the power of the signal at specific frequency combinations).
import numpy as np
import matplotlib.pyplot as plt
from scipy import signal
f = 200
st = 2500
x = np.linspace(-1,1,2*st)
y = signal.gausspulse(x, fc=f, bw=0.05)
data = np.outer(np.ones(len(y)),y) # A simple example with constant y
Fdata = np.abs(np.fft.fft2(data))**2
freqx = np.fft.fftfreq(len(x))*st # What I want to plot my data against
freqy = np.fft.fftfreq(len(y))*st
plt.imshow(Fdata)
I should see a peak at (200, 0) corresponding to the frequency of my signal (with some fall-off around it corresponding to the bandwidth), but instead my maximum occurs at some arbitrary position corresponding to the frequency's index in my data array. If anyone has any ideas, fixes, or other functions to use, I would greatly appreciate it!
I cannot run your code, but I think you are looking for the extent= argument to imshow(). See the page on origin and extent for more information.
Something like this may work?
plt.imshow(Fdata, extent=(freqx[0],freqx[-1],freqy[0],freqy[-1]))
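Since np.fft.fftfreq returns the frequencies in wrap-around order, one way to make the extent values monotonic is to fftshift both the data and the frequency arrays first. A small sketch along those lines (my addition, not part of the original answer):
Fdata_s = np.fft.fftshift(Fdata)
fx = np.fft.fftshift(freqx)
fy = np.fft.fftshift(freqy)

# origin='lower' puts the most negative frequencies at the bottom-left corner
plt.imshow(Fdata_s, origin='lower',
           extent=(fx[0], fx[-1], fy[0], fy[-1]))
plt.xlabel('x frequency')
plt.ylabel('y frequency')
plt.show()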
I have a 3D numpy array that I want to rotate by an angle of my choosing. I have tried using the scipy.ndimage.rotate function and it does the job. However, it does a lot of rounding and interpolation when rotating. This is a problem for me because my 3D array is a representation of an object, and the number in each pixel encodes the material that pixel is filled with (which I store in a different file). Therefore, I need a way to rotate the array without interpolating new values between the existing ones; making the object look less smooth is not a problem.
Here is what I got with the function I used:
The problem you are dealing with is essentially a sampling issue. Your resolution is too low for the data you are dealing with. One possibility to solve this is to increase the resolution of the image you are working with, enforce the color values as you rotate (i.e. no blending of colors at the edges), and create a size/shape template that must be met after the rotation.
Edit: For clarity, it isn't the data that is at too low of a resolution, it's the image in which the data is stored that should be at a high enough resolution. The wikipedia page on multidimensional sampling is good for this topic: https://en.wikipedia.org/wiki/Multidimensional_sampling
I think the way I would approach it, short of someone knowing an actual package that does this, is to start with the indices and rotate them, then, given they may be floating point, round them. This may not be the best approach, but I think it should work.
Most of this example is loading a 3D dataset I found to use as an example.
import matplotlib.pyplot as plt
import os
import numpy as np
from scipy.ndimage import rotate
def load_example_data():
    # Download a 3D dataset found as an example (Stanford MRbrain volume)
    from urllib.request import urlretrieve
    import tarfile
    urlretrieve('http://graphics.stanford.edu/data/voldata/MRbrain.tar.gz',
                'MRbrain.tar.gz')
    tar_file = tarfile.open('MRbrain.tar.gz')
    try:
        os.mkdir('mri_data')
    except OSError:
        pass
    tar_file.extractall('mri_data')
    tar_file.close()
    data = np.array([np.fromfile(os.path.join('mri_data', 'MRbrain.%i' % i),
                                 dtype='>u2') for i in range(1, 110)])
    data.shape = (109, 256, 256)
    return data

def rotate_nn(data, angle, axes):
    """
    Rotate `data` by rotating its coordinate grid and indexing back into
    the original array, so no new (blended) values are introduced.
    """
    # Create a grid of indices
    shape = data.shape
    d1, d2, d3 = np.mgrid[0:shape[0], 0:shape[1], 0:shape[2]]
    # Rotate the index grids
    d1r = rotate(d1, angle=angle, axes=axes)
    d2r = rotate(d2, angle=angle, axes=axes)
    d3r = rotate(d3, angle=angle, axes=axes)
    # Round to integer indices and clip to the valid range
    d1r = np.clip(np.round(d1r).astype(int), 0, shape[0] - 1)
    d2r = np.clip(np.round(d2r).astype(int), 0, shape[1] - 1)
    d3r = np.clip(np.round(d3r).astype(int), 0, shape[2] - 1)
    return data[d1r, d2r, d3r]
data = load_example_data()
# Rotate the coordinates indices
angle = 5
axes = (0, 1)
data_r = rotate_nn(data, angle, axes)
I think the general idea will work. You will have to consider which axes you want to rotate around.
For anyone with this problem stumbling upon this thread: brechmos' comment under the OP put me in the right direction for an actual solution. rotate() by default uses third-order spline interpolation, which gives nice smooth edges. We want sharp edges though, without any in-between values. Setting order=0 does exactly this. No need for extra functions or implementing anything yourself; just change a single argument.
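For reference, a one-line sketch of that call (the angle and axes values are just placeholders):
from scipy.ndimage import rotate

# order=0 selects nearest-neighbour interpolation, so only values already
# present in the array appear in the rotated result
data_r = rotate(data, angle=5, axes=(0, 1), order=0)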
Given 2000 random points in a unit circle (generated with numpy.random.normal(0,1)), I want to normalize them such that the output is a circle. How do I do that?
I was requested to show my efforts. This is part of a larger question: Write a program that samples 2000 points uniformly from the circumference of a unit circle. Plot and show it is indeed picked from the circumference. To generate a point (x,y) from the circumference, sample (x,y) from std normal distribution and normalise them.
I'm almost certain my code isn't correct, but this is where I am up to. Any advice would be helpful.
This is the new updated code, but it still doesn't seem to be working.
import numpy as np
import matplotlib.pyplot as plot
def plot():
    xy = np.random.normal(0, 1, (2000, 2))
    for i in range(2000):
        s = np.linalg.norm(xy[i, ])
        xy[i, ] = xy[i, ] / s
    plot.plot(xy)
    plot.show()
I think the problem is in
plot.plot(xy)
even if I use
plot.plot(xy[:,0],xy[:,1])
it doesn't work.
Connected lines are not a good visualization here. You essentially connect random points on the circle. Since you do this quite often, you will get a filled circle. Try drawing points instead.
Also, avoid namespace conflicts. You import matplotlib.pyplot as plot and also name your function plot. This will lead to name clashes.
import numpy as np
import matplotlib.pyplot as plt
def plot():
    xy = np.random.normal(0, 1, (2000, 2))
    for i in range(2000):
        s = np.linalg.norm(xy[i, ])
        xy[i, ] = xy[i, ] / s
    fig, ax = plt.subplots(figsize=(5, 5))
    # scatter draws dots instead of lines
    ax.scatter(xy[:, 0], xy[:, 1])
If you use dots instead, you will see that your points indeed lie on the unit circle.
Your code has many problems:
Why use np.random.normal (a Gaussian distribution) when the problem text asks for uniform (flat) sampling?
To pick points on a circle you need to correlate x and y; i.e. sampling x and y independently will not give a point on the circle, as x**2 + y**2 must equal 1 (for the unit circle centered at (x=0, y=0)).
A couple of ways to address the second point are to either "project" a random point from [-1...1]x[-1...1] onto the unit circle, or to pick the angle uniformly and compute the corresponding point on the circle (see the sketch below).
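As an illustration of the uniform-angle approach, a minimal sketch (my own addition, not part of the original answer):
import numpy as np
import matplotlib.pyplot as plt

# Sample 2000 angles uniformly in [0, 2*pi) and map them onto the unit circle
theta = np.random.uniform(0, 2 * np.pi, 2000)
x = np.cos(theta)
y = np.sin(theta)

fig, ax = plt.subplots(figsize=(5, 5))
ax.set_aspect('equal')
ax.scatter(x, y, s=2)
plt.show()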
First of all, if you look at the documentation for numpy.random.normal (and, by the way, you could just use numpy.random.randn), it takes an optional size parameter, which lets you create as large of an array as you'd like. You can use this to get a large number of values at once. For example: xy = numpy.random.normal(0,1,(2000,2)) will give you all the values that you need.
At that point, you need to normalize them such that xy[:,0]**2 + xy[:,1]**2 == 1. This should be relatively trivial after computing what xy[:,0]**2 + xy[:,1]**2 is. Simply using norm on each dimension separately isn't going to work.
Usual boilerplate
import numpy as np
import matplotlib.pyplot as plt
generate the random sample with two rows, so that it's more convenient to refer to x's and y's
xy = np.random.normal(0,1,(2,2000))
Normalize the random sample using a library function to compute the norm. axis=0 means that the norm is taken over the subarrays obtained by varying the first array index; the result is a (2000,)-shaped array that can be broadcast in xy /= so that the points have unit norm, hence lying on the unit circle.
xy /= np.linalg.norm(xy, axis=0)
Eventually, the plot... here the key is the add_subplot() method, and in particular the keyword argument aspect='equal', which requires that the scale from data units to output units is the same for both axes
plt.figure().add_subplot(111, aspect='equal').scatter(xy[0], xy[1])
plt.show()
to obtain the resulting plot.
What I am trying to do is to create a 3D triangulated mesh that can be parsed into a .vtk or .stl file for use in a 3D printing application. Right now I am stuck on the creation of the triangle mesh. The geometry I want to create consists basically of three-dimensional sine waves that have a certain thickness and intersect each other. So far I have got one sine wave. Here's an MWE:
import matplotlib.pyplot as plt
import numpy as np
from scipy import ndimage
import scipy.spatial
# create empty 3d array
array = np.zeros((100, 100, 100))
# create 3D sine wave in empty array
strut = np.sin(np.linspace(1, 10, 100))*12
for i, val in enumerate(strut):
    y_shift = int(np.round(val))
    array[i, 50 + y_shift, 50] = 1
pattern = np.ones((4, 4, 4))
# convolve the array with the pattern / apply thickness
conv_array = ndimage.convolve(array, pattern)
# create list with data coordinates from convolved array
data = list()
for j in range(conv_array.shape[0]):
    for k in range(conv_array.shape[1]):
        for l in range(conv_array.shape[2]):
            if conv_array[j, k, l] != 0:
                data.append([j, k, l])
data = np.asarray(data)
tri = scipy.spatial.Delaunay(data)
fig = plt.figure()
ax = fig.add_subplot(111, projection="3d")
ax.plot_trisurf(data[:, 0], data[:, 1], data[:, 2], triangles=tri.simplices)
plt.show()
What it does: I create an empty array which I fill with a sine wave represented by ones. I convolve that array with a rectangular array of a defined size, which gives me a thicker sine wave in space. Then the array gets converted into coordinate form so that it can be triangulated using Delaunay triangulation. What I get is this:
Plot
As you can see, the triangulation kind of worked, but it fills the space between the sine wave amplitudes. Is there a way to remove the filled spaces, or prevent it from creating them in the first place? The sine wave also looks wrong at the ends and I am not sure why. Is this even the best method to achieve what I am trying to do?
The parsing to a .vtk file should not present a problem, but I need a clean structure first. Thanks in advance for any kind of help!
I would not reinvent the wheel and do all of that on my own. Instead, use python-vtk and ParaView (which is a post-processing application for 3D data) to do the triangulation for you. "Just" create the points and do the rest in that application.
I don't know much about 3D printing, but I know my fair share about STL and VTK. It is a pain to do manually, and the VTK library has some nice Python examples and a dedicated STLWriter. You just need to wrap your head around the workflow of VTK and how it manages things internally. This is where ParaView comes in quite handy: it lets you record the actions you perform in the GUI and display them as the equivalent Python code. That is great for learning how it works internally.
Finally I got something very close to what I want. In case someone is interested in the answer:
Instead of going with the point cloud approach I dug myself into VTK (which is a pain to learn, but has a lot of functionality) with python.
My algorithm is basically this (a rough sketch of the pipeline follows the list):
Approximate the sine wave as a simple triangular wave first.
Feed the x, y and z coordinates of the wave into a vtkPoints object
Use vtkParametricSpline to get a smooth wave
vtkSplineFilter to have control over the smoothness of the wave
vtkTubeFilter to create a volume from the line
vtkTriangleFilter for meshing
vtkSTLWriter
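A minimal sketch of that pipeline could look like the following (the point coordinates, tube radius, and file name are placeholders, and the spline is sampled through a vtkParametricFunctionSource; treat this as an illustration of the step list above, not the exact code that was used):
import vtk

# Placeholder coordinates for a coarse triangular-wave approximation of the
# sine wave; the real coordinates come from the geometry described above.
points = vtk.vtkPoints()
for i in range(100):
    points.InsertNextPoint(float(i), 12.0 * abs((i % 20) - 10) / 10.0, 0.0)

# Smooth the coarse polyline with a parametric spline
spline = vtk.vtkParametricSpline()
spline.SetPoints(points)
spline_source = vtk.vtkParametricFunctionSource()
spline_source.SetParametricFunction(spline)

# Control the smoothness / sampling density of the spline
spline_filter = vtk.vtkSplineFilter()
spline_filter.SetInputConnection(spline_source.GetOutputPort())
spline_filter.SetNumberOfSubdivisions(200)

# Give the line a thickness (create a tube volume around it)
tube = vtk.vtkTubeFilter()
tube.SetInputConnection(spline_filter.GetOutputPort())
tube.SetRadius(2.0)
tube.SetNumberOfSides(20)
tube.CappingOn()

# Mesh the tube surface as triangles and write it to an STL file
tri = vtk.vtkTriangleFilter()
tri.SetInputConnection(tube.GetOutputPort())

writer = vtk.vtkSTLWriter()
writer.SetFileName("sine_strut.stl")
writer.SetInputConnection(tri.GetOutputPort())
writer.Write()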
I have a 3D ndarray object containing spectral data (i.e. two spatial x-y dimensions and an energy dimension). I would like to extract and plot the spectrum from each individual pixel as a line plot. At present, I am doing this using np.ndenumerate along the axis I'm interested in, but it's quite slow. I was hoping to try np.apply_along_axis, to see if it was faster, but I keep getting a strange error.
What works:
# Setup environment, and generate sample data (much smaller than real thing!)
import numpy as np
import matplotlib.pyplot as plt
ax = range(0,10) # the scale to use when plotting the axis of interest
ar = np.random.rand(4,4,10) # the 3D data volume
# Plot all lines along axis 2 (i.e. the spectrum contained in each pixel)
# on a single line plot:
for (x, y) in np.ndenumerate(ar[:, :, 1]):
    plt.plot(ax, ar[x[0], x[1], :], alpha=0.5, color='black')
It is my understanding that this is basically a loop, which is less efficient than array-based methods, so I would like to try an approach using np.apply_along_axis, to see if it's faster. This is my first attempt at Python, however, and I am still finding out how it works, so please put me right if this idea is fundamentally flawed!
What I would like to try:
# define a function to pass to apply_along_axis
def pa(y, x):
    if not np.all(np.isnan(y)):  # only do the plot if there is actually data there...
        plt.plot(x, y, alpha=0.15, color='black')
    return
# check that the function actually works...
pa(ar[1,1,:],ax) # should produce a plot - does for me :)
# try to apply it to the whole array, along the axis of interest:
np.apply_along_axis(pa,2,ar,ax) # does not work... booo!
The resulting error:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-109-5192831ba03c> in <module>()
12 # pa(ar[1,1,:],ax)
13
---> 14 np.apply_along_axis(pa,2,ar,ax)
//anaconda/lib/python2.7/site-packages/numpy/lib/shape_base.pyc in apply_along_axis(func1d, axis, arr, *args)
101 holdshape = outshape
102 outshape = list(arr.shape)
--> 103 outshape[axis] = len(res)
104 outarr = zeros(outshape, asarray(res).dtype)
105 outarr[tuple(i.tolist())] = res
TypeError: object of type 'NoneType' has no len()
Any ideas as to what's going wrong here, or advice on how to do this better, would be great.
Thanks!
apply_along_axis creates a new array from the output of your function.
You're returning None (by not returning anything). Thus the error. Numpy checks the length of the returned output to see if it makes sense for the new array.
Because you're not constructing a new array from the results, there's no reason to use apply_along_axis. It's not going to be any faster.
However, your current ndenumerate statement is exactly equivalent to:
import numpy as np
import matplotlib.pyplot as plt
ar = np.random.rand(4,4,10) # the 3D data volume
plt.plot(ar.reshape(-1, 10).T, alpha=0.5, color='black')
In general, you probably want to do something like:
for pixel in ar.reshape(-1, ar.shape[-1]):
    plt.plot(x_values, pixel, ...)
That way you can easily iterate over the spectra at each pixel in your hyperspectral array.
Your bottleneck here probably isn't how you're using the array. Plotting each line separately with identical parameters like this in matplotlib is going to be somewhat inefficient.
It will take slightly longer to construct, but a LineCollection will render much faster. (Basically, using a LineCollection tells matplotlib to not bother checking what the properties of each line are, and just pass them all to the low-level renderer to be drawn in the same way. You bypass a bunch of individual draw calls in favor of a single draw of a large object.)
On the downside, the code will be a bit less readable.
I'll add an example in a bit.
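For completeness, a sketch of what such a LineCollection version could look like (my own illustration, reusing the sample array shape from above):
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.collections import LineCollection

ar = np.random.rand(4, 4, 10)        # the 3D data volume
x_values = np.arange(ar.shape[-1])   # the spectral axis

# Build one (npoints, 2) array of (x, y) vertices per pixel spectrum
segments = [np.column_stack([x_values, spectrum])
            for spectrum in ar.reshape(-1, ar.shape[-1])]

fig, ax = plt.subplots()
lc = LineCollection(segments, colors='black', alpha=0.15)
ax.add_collection(lc)
ax.autoscale()   # LineCollection does not update the data limits automatically
plt.show()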