Good evening,
I have a script that rotates the camera in paraview. It looks like this.
camera.Elevation(45)
camera.Roll(90)
Render()
The problem is that changing the order of the commands changes the final orientation, because the second rotation starts from the already rotated position. Is there a way to make both commands take effect at the same time?
Thank you for any suggestions
Given a vtkCamera object, there is a method ApplyTransform which will allow you to apply a vtkTransform object to your camera.
vtkTransform objects have many more methods for transforms than the simple ones exposed in the vtkCamera interface. You can even use multiple transform objects to build up a transform system. If you have a transformation matrix for the camera already, you can pass it to the vtkTransform object with the SetMatrix method.
https://www.vtk.org/doc/nightly/html/classvtkTransform.html
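For example, here is a minimal (untested) sketch that builds one combined transform and applies it to the camera in a single step; the choice of RotateX/RotateZ as stand-ins for elevation and roll is an assumption, and the vtk import path may differ between ParaView builds:

from paraview.simple import GetActiveCamera, Render
import vtk   # in some ParaView builds this may be "from paraview import vtk"

camera = GetActiveCamera()

t = vtk.vtkTransform()
t.PostMultiply()      # compose rotations about fixed axes rather than the moving frame
t.RotateX(45)         # stand-in for the elevation component (assumption)
t.RotateZ(90)         # stand-in for the roll component (assumption)

camera.ApplyTransform(t)   # recomputes position, focal point and view-up from one matrix
Render()

The combined transform is still a single matrix, so you control exactly how the two rotations are composed (PreMultiply/PostMultiply, or SetMatrix with a matrix you build yourself).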
You cannot apply the two commands at the same time. Moreover, the two operations (Elevation and Roll) are noncommutative:
Indeed, you can see here:
https://www.paraview.org/Wiki/ParaView_and_Python
that Roll(angle) performs a rotation around the axis defined by the view direction and the origin of the dataset.
Since the view direction depends on whether Elevation has already been applied, the final result changes with the order as well.
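For a quick numerical check that the two rotations do not commute, here is a small self-contained sketch with plain rotation matrices (the axes chosen to stand in for elevation and roll are just an illustration):

import numpy as np

def rot_x(deg):
    a = np.radians(deg)
    return np.array([[1, 0, 0],
                     [0, np.cos(a), -np.sin(a)],
                     [0, np.sin(a),  np.cos(a)]])

def rot_z(deg):
    a = np.radians(deg)
    return np.array([[np.cos(a), -np.sin(a), 0],
                     [np.sin(a),  np.cos(a), 0],
                     [0, 0, 1]])

elev, roll = rot_x(45), rot_z(90)
print(np.allclose(elev @ roll, roll @ elev))   # False: the order matters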
I would like to manually rotate and translate instances of my Abaqus model without entering values (see image). I don't have coordinates to translate the instance to; I merely want an estimate of the final shape of my model. I therefore want to 'manually' shift and drag the instances with my computer mouse, but I cannot find the buttons for it. Does anyone know whether such an option exists?
Thank you in advance.
I have a camera in a fixed position looking at a target, and I want to detect whether someone walks in front of the target. The lighting in the scene can change, so subtracting a new frame from the previous frame would detect motion even though none has actually occurred. I have thought of comparing the number of contours between the two frames (obtained by running findContours() on a binary edge image from Canny and taking the size() of the result), since a big change there could indicate movement while being less sensitive to lighting changes. I am quite new to OpenCV and my implementations have not been successful so far. Is there a way I could make this work, or will I have to just subtract the frames? I don't need to track the person, just detect whether they are in the scene.
I am a bit rusty but there are various ways to do this.
SIFT and SURF are very expensive operations, so I don't think you would want to use them.
There are several 'background removal' methods.
1. Average removal: you take the average of N frames and treat it as the background. This is vulnerable to many things: lighting changes, shadows, a moving object staying in one place for a long time, etc.
2. Gaussian mixture model: a bit more advanced than 1, but still vulnerable to many of the same things (a minimal MOG2 sketch follows this answer).
3. IncPCP (incremental principal component pursuit): I can't remember the algorithm in detail, but the basic idea is that each frame is converted to a sparse form and the moving objects are then extracted from the sparse matrix.
4. Optical flow: you find the change across the temporal domain of a video. For example, you compare frame 2 with frame 1 block by block and determine the direction of change.
5. CNN-based methods: I know there are a bunch of them, but I haven't really followed them, so you might have to do some research. As far as I know, they are often better than the methods above.
Note that for 30 fps input your code needs to complete within roughly 33 ms per frame to run in real time. You can find a lot of code available for this task.
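As an illustration of option 2 above, here is a minimal sketch using OpenCV's built-in MOG2 background subtractor; the video source, the threshold values, and the 2% presence criterion are placeholders you would tune:

import cv2

cap = cv2.VideoCapture(0)                      # 0 = default webcam; use a file path instead if needed
subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=True)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)                               # 255 = foreground, 127 = shadow, 0 = background
    mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)[1]   # drop the shadow pixels
    if cv2.countNonZero(mask) / mask.size > 0.02:                # arbitrary "someone is in the scene" threshold
        print("possible person in front of the target")
    cv2.imshow("foreground mask", mask)
    if cv2.waitKey(1) == 27:                                     # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()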
There are a handful of ways you could do this.
The first that comes to mind is doing a 2D FFT on the incoming images. Color shouldn't affect the FFT too much, but an object moving or entering/exiting the frame will.
The second is to use SIFT or SURF to generate a list of features in an image. You can insert these points into a map, sorted however you like, and then do a set_difference between the last image you took and the current one. You could also use the FLANN functionality to compare the generated features.
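A rough sketch of the FFT idea, comparing the log-magnitude spectra of consecutive grayscale frames; the file name "scene.mp4" and the decision threshold are placeholders:

import cv2
import numpy as np

def spectrum(gray):
    # log-magnitude of the 2D FFT of a grayscale frame
    return np.log1p(np.abs(np.fft.fft2(gray.astype(np.float32))))

cap = cv2.VideoCapture("scene.mp4")            # placeholder file name
ok, prev = cap.read()
prev_spec = spectrum(cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    spec = spectrum(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
    if np.mean(np.abs(spec - prev_spec)) > 0.05:   # placeholder threshold; tune on real footage
        print("large spectral change -> possible object entering/leaving the frame")
    prev_spec = spec

cap.release()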
I have written some code in Python which allows 3D objects to be defined in 3D object space and mapped onto a 2D screen. Currently the finished 2D polygons are drawn on the screen using the PyGame library, which works effectively, but I would like to go all the way and write the code myself to complete the drawing operations PyGame does for me. This means I would like to manually control the drawing of each pixel on the screen, with GPU support to accelerate the entire rendering process. From some reading it seems OpenGL is suitable for this sort of thing, but I'm not sure what the complete purpose of OpenGL is and whether I could achieve what I am trying to do in a better way. Do I really need to use OpenGL, or is there another way to directly access my GPU and draw at the pixel-by-pixel level?
It sounds like OpenGL's programmable shaders are what you're looking for (in particular fragment shaders). They run massively parallel on a pixel-by-pixel basis, in the sense that basically you write a function that takes a single pixel location and computes its color. Note that this means that the individual pixels can't exchange information, though there are certain ways around that.
(Technically when I said "pixel" I meant "fragment", which is sort of a generalized version of a pixel.)
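For a feel of what this looks like, here is a tiny GLSL fragment shader held in a Python string. To actually run it you would still need a host program (e.g. with PyOpenGL or moderngl), a vertex shader, and a full-screen quad, none of which is shown here:

# GLSL fragment shader source kept in a Python string; the GPU runs it once per fragment (pixel)
FRAGMENT_SHADER = """
#version 330 core
out vec4 color;
uniform vec2 resolution;   // window size in pixels, supplied by the host program

void main()
{
    // gl_FragCoord.xy is this fragment's pixel position; derive its color from it
    vec2 uv = gl_FragCoord.xy / resolution;
    color = vec4(uv.x, uv.y, 0.5, 1.0);   // simple gradient: each pixel computed independently
}
"""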
I am trying to model spherical aberrations as a function of the tilt angle of a mirror in an optical system. I am using Optics Studio for the model and PyZDDE to communicate with it. I thought this would be easy: I would set up a list of tilt angles and then loop over them, changing the relevant surface parameters and calling zGetZernike():
for i in range(len(angle)):
    ln.zSetSurfaceParameter(n, 54, angle[i])
    ln.zSetSurfaceParameter(n, 64, -angle[i])
    Zern = ln.zGetZernike()
    print(Zern[1])
However, this didn't work.
I am getting the same Zernike coefficients independent of angle. I tried calling ln.zPushLens(1) and ln.zGetUpdate() but neither one worked. It looks like the changes are not getting updated on the server side.
I also tried introducing coordinate breaks before and after the mirror surface and changing the angles for those surfaces but that didn't work either.
What am I missing, and what can be done to make this work?
I would also like to change the wavelength, but that doesn't seem to work either. I call ln.zSetPrimaryWave(N), where N is a wave number, but the server always uses the first wavelength from the settings in Optics Studio.
Is there a way to change the wavelength with which the Zernike coefficients are calculated?
Please use the parameter numbers 3 and 4 (instead of 54 and 64) with the function zSetSurfaceParameter().
The codes SDAT_TILT_X_BEFORE (54) and SDAT_TILT_X_AFTER (64) respectively are meant to be used with the function zSetSurfaceData(), which sets the appropriate fields in the Surface Properties >> Tilt/Decenter Tab for the specified surface in the main Zemax/OpticStudio application.
Please note that you need to use parameters 3 and 4 (with the function zSetSurfaceParameter()) if you are using coordinate breaks. Additionally, you may use parameter numbers 1 and 2 if the surface type is tilted.
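For example, a direct substitution into the loop from the question would look something like this (a sketch, assuming surface n is the coordinate break you are tilting):

for i in range(len(angle)):
    ln.zSetSurfaceParameter(n, 3, angle[i])    # parameter 3: tilt about x
    ln.zSetSurfaceParameter(n, 4, -angle[i])   # parameter 4: tilt about y
    Zern = ln.zGetZernike()
    print(Zern[1])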
I'm not sure why the function zSetPrimaryWave() is not working for you. I just tested the function in OpticStudio, and it works as expected.
Regards,
Indranil.
As it turned out, changing tilts and decenters directly from the surface properties using zSetSurfaceParameter() didn't work. I had to use two coordinate breaks, one in front of the mirror surface and one behind it, and to set the tilts and decenters on those surfaces using zSetSurfaceParameter(). I set a pickup solve on the second coordinate break to restore the geometry behind the mirror and only changed the tilts and decenters on the first one. The parameter numbers for x and y tilts are 3 and 4 respectively, as described in the Optics Studio manual.
For debugging it really helps to push the lens to the lens editor after each change of parameters (zPushLens(1)), and it is worth saving intermediate configurations as Zemax design files. However, for the actual calculation none of this is necessary.
Also, Optics Studio uses the first wavelength in the settings for the Zernike calculation, so I had to change the wavelength using zSetWave().
Thanks to Indranil and ZEMAX technical support for valuable advice and guidance along the way.
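For reference, here is a condensed sketch of the setup described above; the surface number, the angles, and the zSetWave() arguments are placeholders, and the zSetWave() signature shown is an assumption to be checked against the PyZDDE documentation:

cb1 = 3                                  # placeholder: coordinate break in front of the mirror
angles = [0.0, 0.5, 1.0, 1.5]            # example tilt angles in degrees

ln.zSetWave(1, 0.55, 1.0)                # assumed signature: (wave number, wavelength in um, weight)

for a in angles:
    ln.zSetSurfaceParameter(cb1, 3, a)   # parameter 3 = tilt about x on the coordinate break
    # ln.zPushLens(1)                    # optional while debugging: push changes to the lens editor
    Zern = ln.zGetZernike()
    print(a, Zern[1])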
Is there any way to obtain the background from cv2.BackgroundSubtractorMOG2 in Python?
In other words, is there any technique to compute an image based on the last n frames of a video, which can be used as the background?
Such a technique would be pretty complicated, but you might want to look at some keywords: image stitching, gradient-based methods, patch-match, image filling. Matlab, for example, has a function that tries to interpolate missing values from nearby pixels. You could extend this method to work in 3D (it shouldn't be so difficult in the linear case).
More generally, it is sort of an ill-posed problem since there is no way to know what goes in the missing region.
Specifically to address your question, you might first take the difference between the original frame and the extracted image, which should reveal the background. Then use ROI fill-in or a similar method. There are likely some examples you can find on the web, such as this.
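As a rough sketch of that last idea with OpenCV: take the MOG2 foreground mask and fill the masked region from its surroundings with cv2.inpaint(); the video file name, the dilation amount, and the inpainting radius are arbitrary choices:

import cv2

cap = cv2.VideoCapture("video.mp4")                   # placeholder file name
mog2 = cv2.createBackgroundSubtractorMOG2()

while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg_mask = mog2.apply(frame)                                   # moving objects end up in the mask
    fg_mask = cv2.threshold(fg_mask, 200, 255, cv2.THRESH_BINARY)[1]
    fg_mask = cv2.dilate(fg_mask, None, iterations=3)             # grow the mask a little
    # fill the masked (foreground) region from surrounding pixels -> rough background estimate
    background_estimate = cv2.inpaint(frame, fg_mask, 5, cv2.INPAINT_TELEA)

cap.release()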