How can I model a tilting mirror in PyZDDE?

I am trying to model spherical aberrations as a function of the tilt angle of a mirror in an optical system. I am using OpticStudio for the model and PyZDDE to communicate with it. I thought this would be easy: set up a list of tilt angles, then loop over them, changing the relevant surface parameters and calling zGetZernike():
for i in range(len(angle)):
    ln.zSetSurfaceParameter(n, 54, angle[i])
    ln.zSetSurfaceParameter(n, 64, -angle[i])
    Zern = ln.zGetZernike()
    print(Zern[1])
However, this didn't work.
I am getting the same Zernike coefficients independent of angle. I tried calling ln.zPushLens(1) and ln.zGetUpdate() but neither one worked. It looks like the changes are not getting updated on the server side.
I also tried introducing coordinate breaks before and after the mirror surface and changing the angles for those surfaces but that didn't work either.
What am I missing and what can be done to make this work?
I would also like to change the wavelength, but that doesn't seem to work either. I call ln.zSetPrimaryWave(N), where N is a wave number, but the server always uses the first wavelength from the settings in Optics Studio.
Is there a way to change the wavelength with which the Zernike coefficients are calculated?

Please use the parameter numbers 3 and 4 (instead of 54 and 64) with the function zSetSurfaceParameter().
The codes SDAT_TILT_X_BEFORE (54) and SDAT_TILT_X_AFTER (64) respectively are meant to be used with the function zSetSurfaceData(), which sets the appropriate fields in the Surface Properties >> Tilt/Decenter Tab for the specified surface in the main Zemax/OpticStudio application.
Please note that you need to use parameters 3 and 4 (with the function zSetSurfaceParameter()) if you are using coordinate breaks. Additionally, you may use parameter numbers 1 and 2 if the surface type is Tilted.
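To make the distinction concrete, a two-line illustration (ln is the PyZDDE link object from the question; n, cb and angle are placeholders, not values from the original lens file):
ln.zSetSurfaceData(n, 54, angle)        # code 54 (SDAT_TILT_X_BEFORE) belongs to zSetSurfaceData()
ln.zSetSurfaceParameter(cb, 3, angle)   # parameter 3 (tilt about x) belongs to zSetSurfaceParameter() on a coordinate break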
I'm not sure why the function zSetPrimaryWave() is not working for you. I just tested the function in OpticStudio, and it works as expected.
Regards,
Indranil.

As it turned out, changing tilts and decenters directly from the surface properties using zSetSurfaceParameter() didn't work. I had to use two coordinate breaks, one in front of the mirror surface and one behind it, and set the tilts and decenters on those surfaces with zSetSurfaceParameter(). I put a pickup solve on the second coordinate break to restore the geometry behind the mirror, so I only had to change the tilts and decenters on the first one. The parameter numbers for x tilt and y tilt are 3 and 4 respectively, as described in the OpticStudio manual.
For debugging it really helps to push the lens to the lens editor after each parameter change (zPushLens(1)), and to save intermediate configurations as Zemax design files; for the actual calculation, however, none of this is necessary. Also, OpticStudio uses the first wavelength in the settings for the Zernike calculation, so I had to change the wavelength using zSetWave(). Thanks to Indranil and Zemax technical support for valuable advice and guidance along the way.
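For reference, a minimal sketch of the approach described above (not the author's actual script). It assumes the two coordinate breaks already exist in the lens file; the file path, surface numbers cb1/cb2, wavelength and angle list are placeholders, and zGetZernike() is used exactly as in the question. In the real setup a pickup solve on cb2 replaced the explicit second call.
import pyzdde.zdde as pyz

ln = pyz.createLink()                           # open the DDE link to OpticStudio
ln.zLoadFile(r"C:\path\to\mirror_system.zmx")   # placeholder lens file

cb1, cb2 = 4, 6                # placeholder surface numbers of the coordinate breaks
ln.zSetWave(1, 0.6328, 1.0)    # wavelength (um) used for the Zernike calculation
angles = [0.0, 0.5, 1.0, 1.5]  # tilt angles in degrees (placeholders)

for ang in angles:
    ln.zSetSurfaceParameter(cb1, 3, ang)    # parameter 3 = tilt about x on the first coordinate break
    ln.zSetSurfaceParameter(cb2, 3, -ang)   # undo the tilt after the mirror (a pickup solve does this automatically)
    ln.zGetUpdate()                         # recompute the lens on the server side
    zern = ln.zGetZernike()
    print(ang, zern[1])
    # ln.zPushLens(1)                       # optional: push to the lens editor while debugging

ln.close()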

Related

Is there a simple way to prevent GalSim from shooting all the photons from a star into a single pixel?

I'm generating PSF-free images, so no atmosphere and no diffraction, and the images I'm getting out have stars in "quantized" positions. I'm wondering if there is an option in GalSim to prevent this, i.e. to have a more sinc-like distribution of the photons, so the behaviour of photons landing somewhere between pixels is taken into account. If there isn't an option for this, I suppose I would need to create my own sinc-function PSF and implement it around the drawImage() step?
Stars are inherently supposed to look like point sources if you don't have any PSF at all (no atmosphere, no diffraction). They are a delta function in that case, so all of the photons should fall into a single pixel. GalSim is doing exactly what you are asking it to do.
It sounds like you actually do want to have a PSF; I suggest using the galsim.Airy class, representing a diffraction-limited PSF.
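A minimal sketch of that suggestion, assuming GalSim 1.5 or later for DeltaFunction; the flux, wavelength, aperture diameter and pixel scale below are placeholders, not values from the question:
import galsim

star = galsim.DeltaFunction(flux=1.e5)    # a point source
psf = galsim.Airy(lam=700, diam=8.4)      # diffraction-limited PSF: lam in nm, diam in m
obj = galsim.Convolve([star, psf])
image = obj.drawImage(nx=64, ny=64, scale=0.2, method='phot')   # photon shooting now spreads the flux over several pixels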

Color area under graph in manim

I am following this tutorial for manim: https://talkingphysics.wordpress.com/2019/01/08/getting-started-animating-with-manim-and-python-3-7/. Under heading 7.0: Graphing functions, the example shows code for plotting sine and cosine functions.
I was wondering if I could also fill the area between, let's say, the sine function and the x-axis from x_min to x_max. I realized that the PlotFunctions class used there has the following hierarchy: PlotFunctions -> GraphScene -> Scene -> Container -> object (where -> denotes child of). But in this entire chain of the hierarchy, I do not see a config option such as fill_color, which is present in VMobject.
I'm also not readily able to locate any code that does this, although I'm sure some really simple one-liner must exist, since this effect is used in so many 3blue1brown videos. I would really appreciate some help with this!
You can check this GitHub issue; it might be what you're looking for.
It hasn't been merged into the main branch yet, so this could serve as a workaround for now.
After looking more into the code, I still don't see a one-liner per se, but I did find that to color the area under a graph, a set of Riemann rectangles of very small width (~0.01) is used. This makes the area look filled.
I used get_point_from_function() to get the points and passed them in to create a Polygon filled with colour.
You can check it here; look at def get_region().
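Putting the Riemann-rectangle idea above into a self-contained scene; this is a sketch assuming the old 3b1b-style manim with GraphScene, as in the linked tutorial, and the class name and the 0-to-pi interval are just examples:
from manimlib.imports import *   # older checkouts used: from big_ol_pile_of_manim_imports import *
import numpy as np

class FillUnderSine(GraphScene):
    CONFIG = {
        "x_min": 0,
        "x_max": 2 * np.pi,
        "y_min": -1.5,
        "y_max": 1.5,
    }

    def construct(self):
        self.setup_axes(animate=True)
        sine = self.get_graph(np.sin, color=BLUE)
        # many very thin rectangles (dx ~ 0.01) read as a solid filled area
        area = self.get_riemann_rectangles(
            sine, x_min=0, x_max=np.pi, dx=0.01,
            stroke_width=0, fill_opacity=0.6,
        )
        self.play(ShowCreation(sine))
        self.play(FadeIn(area))
        self.wait()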

How can I make camera commands in ParaView take effect simultaneously?

Good evening,
I have a script that rotates the camera in ParaView. It looks like this:
camera.Elevation(45)
camera.Roll(90)
Render()
The thing is, changing the order of the commands changes the final orientation, since the second rotation is applied starting from the already-rotated position. Is there a way to make both commands take effect at the same time?
Thank you for any suggestions
Given a vtkCamera object, there is a method ApplyTransform which will allow you to apply a vtkTransform object to your camera.
vtkTransform objects have many more methods for transforms than the simple ones exposed in the vtkCamera interface. You can even use multiple transform objects to build up a transform system. If you have a transformation matrix for the camera already, you can pass it to the vtkTransform object with the SetMatrix method.
https://www.vtk.org/doc/nightly/html/classvtkTransform.html
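A minimal sketch of that idea in a ParaView Python script; the particular axes and angles are placeholders, and the point is to compose whatever rotations you need into a single transform before applying it:
from paraview.simple import GetActiveCamera, Render
import vtk   # if plain "import vtk" fails in your ParaView build, try "from paraview import vtk"

camera = GetActiveCamera()   # the underlying vtkCamera of the active view

t = vtk.vtkTransform()
t.RotateX(45)                # compose the rotations you want into one transform...
t.RotateZ(90)                # ...or call t.SetMatrix(...) with a ready-made 4x4 matrix

camera.ApplyTransform(t)     # apply the combined transform to the camera in one step
Render()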
You cannot apply the two commands at the same time. Moreover, the two operations (Elevation and Roll) do not commute.
Indeed, you can see here:
https://www.paraview.org/Wiki/ParaView_and_Python
that Roll(angle) performs a rotation around the axis defined by the view direction and the origin of the dataset.
Since the view direction depends on whether Elevation has already been applied, the final result changes as well.

Python script to uncurl or unfold geometry along a curve

I have been all over the internet trying to find the answer to this one and I've reached the end of my patience. What I am trying to achieve is pretty much what the standard bend deformer does, but with one difference: I want it to unfold along a pre-defined curve. There are literally hundreds of tutorials about the bend deformer, all doing the same thing, unfolding along a flat plane, but none on how to do it along a curved surface. I have also tried paint effects with control curves to no avail, and baking the bend deformer into the geometry and then curving it afterwards; this last option didn't work because I no longer had the control I required. It seems from my search that probably the only way to do this is through a MEL or Python script, and I was wondering whether anyone would be able to help?
Something like this?
Constrain -> Motion Paths -> Attach to Motion Path + Flow Path Object
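If you want the scripted equivalent of those menu items, here is a hedged maya.cmds sketch; the object and curve names are placeholders and the flag values are just examples:
import maya.cmds as cmds

obj = 'myGeometry'   # placeholder: the mesh to unfold
crv = 'guideCurve'   # placeholder: the pre-defined curve to follow along

# Constrain > Motion Paths > Attach to Motion Path
cmds.pathAnimation(obj, curve=crv, fractionMode=True,
                   follow=True, followAxis='x', upAxis='y')

# Constrain > Motion Paths > Flow Path Object: a lattice that bends the
# geometry along the curve; more divisions along the curve give a smoother bend
cmds.flow(obj, divisions=(2, 2, 30))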
Lattice deformers themselves are skinnable, so you can run a skeleton or bend deformer through a lattice to manipulate its shape. This also lets you control the twist along the deformation. Animate the object you want to deform into the lattice's area of influence while animating the deformation of the lattice itself at the same time to create the follow effect.
Or, you can just make the lattice follow the path using a spline IK control.

Python OpenCV stereo camera position

I'd like to determine the position and orientation of a stereo camera relative to its previous position in world coordinates. I'm using a bumblebee XB3 camera and the motion between stereo pairs is on the order of a couple feet.
Would this be on the correct track?
Obtain rectified image for each pair
Detect/match feature points in the rectified images
Compute Fundamental Matrix
Compute Essential Matrix
Thanks for any help!
Well, it sounds like you have a fair understanding of what you want to do! Having a pre-calibrated stereo camera (like the Bumblebee) will deliver point-cloud data when you need it - but it also sounds like you basically want to use the same images to perform visual odometry (certainly the correct term) and provide absolute orientation from a last known GPS position when the GPS breaks down.
First things first - I wonder if you've had a look at the literature for some more ideas: As ever, it's often just about knowing what to google for. The whole idea of "sensor fusion" for navigation - especially in built up areas where GPS is lost - has prompted a whole body of research. So perhaps the following (intersecting) areas of research might be helpful to you:
Navigation in 'urban canyons'
Structure-from-motion for navigation
SLAM
Ego-motion
Issues you are going to encounter with all these methods include:
Handling static vs. dynamic scenes (i.e. ones that change purely based on the camera motion - c.f. others that change as a result of independent motion occurring in the scene: trees moving, cars driving past, etc.).
Relating amount of visual motion to real-world motion (the other form of "calibration" I referred to - are objects small or far away? This is where the stereo information could prove extremely handy, as we will see...)
Factorisation/optimisation of the problem - especially with handling accumulated error along the path of the camera over time and with outlier features (all the tricks of the trade: bundle adjustment, ransac, etc.)
So, anyway, pragmatically speaking, you want to do this in python (via the OpenCV bindings)?
If you are using OpenCV 2.4, the (combined C/C++ and Python) new API documentation is here.
As a starting point I would suggest looking at the following sample:
/OpenCV-2.4.2/samples/python2/lk_homography.py
It provides a nice example of basic ego-motion estimation from optical flow using the function cv2.findHomography.
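In the same spirit as that sample (this is a hedged sketch, not the sample code itself; frame0 and frame1 are assumed to be two consecutive grayscale frames):
import cv2
import numpy as np

# detect corners in the first frame and track them into the second
p0 = cv2.goodFeaturesToTrack(frame0, maxCorners=500, qualityLevel=0.01, minDistance=7)
p1, status, err = cv2.calcOpticalFlowPyrLK(frame0, frame1, p0, None)

good0 = p0[status.ravel() == 1]
good1 = p1[status.ravel() == 1]

# robustly fit a homography to the tracked correspondences
H, inliers = cv2.findHomography(good0, good1, cv2.RANSAC, 5.0)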
Of course, this homography H only applies if the points are co-planar (i.e. lying on the same plane under the same projective transform - so it'll work on videos of nice flat roads). BUT - by the same principle we could use the Fundamental matrix F to represent motion in epipolar geometry instead. This can be calculated by the very similar function cv2.findFundamentalMat.
Ultimately, as you correctly specify above in your question, you want the Essential matrix E - since this is the one that operates in actual physical coordinates (not just mapping between pixels along epipoles). I always think of the Fundamental matrix as a generalisation of the Essential matrix in which the (inessential) knowledge of the camera intrinsic calibration (K) is omitted, and vice versa.
Thus, the relationships can be formally expressed as:
E = K'^T F K
So, you'll need to know something of your stereo camera calibration K after all! See the famous Hartley & Zisserman book for more info.
You could then decompose the Essential matrix, for example via its SVD (or, in more recent OpenCV releases, with cv2.recoverPose), to recover your R orientation and t displacement.
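Pulling the pieces together, a hedged sketch of the F -> E -> (R, t) step; pts1 and pts2 are assumed to be matched points from the rectified images, K is the intrinsic matrix from your stereo calibration, and cv2.recoverPose needs OpenCV 3.x or later (in 2.4 you would decompose E via its SVD instead):
import cv2
import numpy as np

# pts1, pts2: Nx2 float arrays of matched feature locations (assumed inputs)
F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC)

# E = K'^T F K; with the same camera for both views this is simply K^T F K
E = K.T @ F @ K

# recoverPose decomposes E and keeps the (R, t) pair that puts the points in
# front of both cameras; note that t is only recovered up to scale
retval, R, t, pose_mask = cv2.recoverPose(E, pts1, pts2, K)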
Hope this helps! One final word of warning: this is by no means a "solved problem" for the complexities of real world data - hence the ongoing research!
