Fixing dihedral angles beyond the 180-degree limit to create smooth curves - python

I have output from a commercial program that contains the dihedral angles of a molecule over time. The problem apparently comes from a known quadrant issue when taking cosines: the angle is only defined on the interval -180 to 180 degrees, which I am not familiar with. If the dihedral goes past 180, this commercial program (SHARC, for molecular dynamics simulations) wraps it around to just above -180, creating jumps in the plots (you can see an example in the figure below).
Is there a correct mathematical way to convert these plots into smooth curves, even if it means going to dihedrals higher than 180?
What I am trying to do is write a Python program that handles each special case: going from 180 to -180 or vice versa, and cases near 90 or 0 degrees, using sines and cosines... But it is becoming extremely complex, with more than 12 nested if statements inside a for loop running over the X axis.
If it was only one figure, I could do it by hand, but I will have dozens of similar plots.
I attach an ASCII file with the data used for plotting this figure.
What I would like it to look like is this:
Thank you very much,
Cayo Gonçalves

Ok, I've found a pretty easy solution.
NumPy has the unwrap function. I just need to feed it a vector of the angles in radians.
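Something like this (a minimal sketch; the file name and two-column layout are just placeholders for the attached data):

import numpy as np

# placeholder file: time in one column, dihedral angle in degrees (wrapped to [-180, 180]) in the other
time, dihedral = np.loadtxt("dihedrals.dat", unpack=True)

# np.unwrap works in radians, so convert, unwrap, then convert back to degrees
smooth = np.degrees(np.unwrap(np.radians(dihedral)))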
Thank you Yves for giving me the name of the problem. This helped me find the solution.

This is called phase unwrapping.
As your curves are smooth and slowly varying, every time you see a large negative (positive) jump, add (subtract) 360. This will restore the original curve. (For the jump threshold, 170 should be good, I guess).
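A rough sketch of that idea in plain Python, assuming angles is the wrapped series in degrees:

def unwrap_degrees(angles, threshold=170.0):
    """Undo the artificial +/-360 degree jumps in a wrapped angle series."""
    out = list(angles)
    offset = 0.0
    for i in range(1, len(angles)):
        delta = angles[i] - angles[i - 1]   # jump in the raw, wrapped data
        if delta > threshold:               # large positive jump: subtract 360
            offset -= 360.0
        elif delta < -threshold:            # large negative jump: add 360
            offset += 360.0
        out[i] = angles[i] + offset
    return out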


How to draw smooth contour/level curves of multivariable functions

G'day programmers and math enthusiasts.
Recently I have been exploring how CAS graphing calculators function; in particular, how they are able to draw level curves and hence contours for multivariable functions.
Just a couple of notes before I ask my question:
I am using Python's Pygame library purely for the window and graphics. Of course there are better options out there but I really wanted to keep my code as primitive as I am comfortable with, in an effort to learn the most.
Yes, yes. I know about matplotlib! God have I seen 100 different suggestions for using other supporting libraries. And while they are definitely stunning and robust tools, I am really trying to build up my knowledge from the foundations here so that one day I may even be able to grow and support libraries such as them.
My ultimate goal is to get plots looking as smooth as this:
Mathematica Contour Plot Circle E.g.
What I currently do is:
Evaluate the function over a grid of 500x500 points and keep the points where it is equal to 0, within some error tolerance (mine is 0.01). This gives me a rough approximation of the level curve at f(x,y)=0.
Then I use a dodgy distance function to find each point's closest neighbour, and draw an anti-aliased line between the two.
The results of both of these steps can be seen here:
First Evaluating Valid Grid Points
Then Drawing Lines to Closest Points
For obvious reasons I've got gaps in the graph where the closest-point connections leave the curve discontinuous. Alas! I thought of another janky workaround: what if, on top of finding the closest point, it actually looks for the next closest point that hasn't already been visited? This idea came close, but still doesn't really seem to be anywhere near efficient. Here are my results after implementing that:
Slightly Smarter Point Connecting
My question is, how is this sort of thing typically implemented in graphing calculators? Have I been going about this all wrong? Any ideas or suggestions would be greatly appreciated :)
(I haven't included any code, mainly because it's not super clear, and also not particularly relevant to the problem).
Also, if anyone has some hardcore math answers, don't be afraid to suggest them; I've got a healthy background in coding and mathematics (especially numerical and computational methods), so here's hoping I should be able to cope with them.
so you are evaluating the equation for every x and y point on your plane. then you check if the result is < 0.01 and if so, you are drawing the point.
a better way to check if the point should be drawn is to check if one of the following is true:
(a) if the point is zero
(b) if the point is positive and has at least one negative neighbor
(c) if the point is negative and has at least one positive neighbor
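a rough sketch of that check, assuming grid is a 2D NumPy array of the function evaluated over your window:

import numpy as np

def curve_mask(grid):
    # mark a point if its value is zero or if any 4-neighbour has the opposite sign
    sign = np.sign(grid)
    mask = sign == 0
    mask[:, 1:]  |= sign[:, 1:]  * sign[:, :-1] < 0   # left neighbour
    mask[:, :-1] |= sign[:, :-1] * sign[:, 1:]  < 0   # right neighbour
    mask[1:, :]  |= sign[1:, :]  * sign[:-1, :] < 0   # upper neighbour
    mask[:-1, :] |= sign[:-1, :] * sign[1:, :]  < 0   # lower neighbour
    return mask   # True where a pixel of the level curve should be drawn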
there are 3 problems with this:
it doesn't support any kind of anti-aliasing, so the result will not look as smooth as you would want
you can't make thicker lines (more than 1 pixel)
if the 0-level line only touches a cell (it's positive on both sides rather than positive on one side and negative on the other), the check misses it
this second solution may fix those problems but it was made by me and not tested so it may or may not work:
you assign the function value to each corner of a pixel and then, for each pixel, estimate the distance to the zero line from its corners. this is the algorithm for finding the distance:
def distance(tl, tr, bl, br):  # the 4 corner values
    avg = abs((tl + tr + bl + br) / 4)     # absolute average of the corners
    m = min(map(abs, (tl, tr, bl, br)))    # absolute minimum of the corners
    if m == 0:  # special case (guards the division below)
        return float('inf')
    return avg / m  # estimated distance to the 0 line, assuming the trend continues
this returns the estimated distance to the 0 line. you can now draw the pixel: e.g. if you want a 5-pixel line, then if the result is < 4 you draw the pixel in full color, elif the result is < 5 you draw the pixel with an opacity of distance - 4 (times 255 if you are using pygame's alpha option)
this solution assumes that the function is somewhat linear.
just try it, in the worst case it doesn't work...
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.30.1319&rep=rep1&type=pdf
This 21-page doc has everything I need to draw the implicit curves accurately and smoothly. It even covers optimisation methods and supports bifurcation points for implicit functions. I would highly recommend it to anyone with questions similar to my own above.
Thanks to everyone who had recommendations and clarifying questions, they all helped lead me to this resource.

3D curve fitting using python

I am trying to reduce the number of data points for a 3D curve. Currently I have 20000 points and I would like to reduce this to around 2000 without losing much information.
I am doing this in Python.
As a simple example, think of a spiral on the surface of a cylinder.
Are there any built-in functions that will do this?
I've tried using the Ramer–Douglas–Peucker algorithm to simplify the line, but due to the nature of the curve, every data point it drops makes the final plot undershoot. See the picture of a 2D example: orange is what rdp produces, green is what I want.
I would like the output of the program to be an array of ~2000 coordinates that still represent the shape of the 3D curve, but they don't necessarily have to be original coordinates; I want some points to overshoot and others to undershoot.
Thank you for your help
UPDATE:
In the end I chose to do something quite involved, but it gave me exactly what I wanted. I started by using the rdp algorithm to reduce the number of points. With this new information I then fit a straight line of best fit to the spread of the original points between each pair of the new, reduced points:
i.e. if the algorithm 'ignored' 13 points, I fit the line from point 0 to point 14, and did the same for the next segment where the algorithm had skipped, for example, 7 points, so I fit from 14 to 22, etc.
Having those lines of best fit, I found the points where the lines intersected or, if the lines did not intersect, the closest point on each line to the other line.
Due to the nature of my problem, I did not need my data to be continuous, so 2000 "discontinuous" segments were not a problem.
Thank you very much for your help!
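For reference, a rough 2D sketch of that procedure (it assumes the rdp package's return_mask option and non-vertical segments; the 3D case needs a 3D line fit and a closest-points step instead of a plain intersection):

import numpy as np
from rdp import rdp   # pip install rdp; any RDP implementation with a keep-mask works

def simplify_with_fitted_lines(points, epsilon):
    points = np.asarray(points, dtype=float)        # shape (N, 2)
    keep = rdp(points, epsilon=epsilon, return_mask=True)
    idx = np.flatnonzero(keep)                      # indices of the breakpoints RDP kept

    # least-squares fit y = m*x + c to the original points spanned by each segment
    fits = []
    for a, b in zip(idx[:-1], idx[1:]):
        m, c = np.polyfit(points[a:b + 1, 0], points[a:b + 1, 1], 1)
        fits.append((m, c))

    # replace each interior breakpoint by the intersection of the two neighbouring fits
    new_pts = [points[0]]
    for (m1, c1), (m2, c2) in zip(fits[:-1], fits[1:]):
        if np.isclose(m1, m2):                      # nearly parallel: keep the original breakpoint
            new_pts.append(points[idx[len(new_pts)]])
        else:
            x = (c2 - c1) / (m1 - m2)
            new_pts.append(np.array([x, m1 * x + c1]))
    new_pts.append(points[-1])
    return np.array(new_pts)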

Computing the similarity between two line drawings

I have a Python program where people can draw simple line drawings using a touch screen. The images are documented in two ways. First, they are saved as actual image files. Second, I record 4 pieces of information at every refresh: the time point, whether contact was being made with the screen at the time (1 or 0), the x coordinate, and the y coordinate.
What I'd like to do is gain some measure of how similar a given drawing is to any other drawing. I've tried a few things, including simple Euclidean distance and per-pixel similarity, and I've looked at the Fréchet distance. None of these can give what I'm looking for.
The issues are that each drawing might have a different number of points, one segment does not always immediately connect to the next, and the order of the points is irrelevant. For instance, if you and I both draw something as simple as an ice cream cone, I might draw ice cream first, and you might draw the cone first. We may get an identical end result, but many of the most intuitive metrics would be totally thrown off.
Any ideas anyone has would be greatly appreciated.
if you care about how similar a drawing is to another, then there's no need to collect data at every refresh. just collect it once the drawer is done drawing
Then you can use Fourier analysis to break the images down into the frequency domain and run cross-correlations on that
or some kind of 2D cross-correlation on the images, I guess
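one possible reading of that, using SciPy's FFT-based convolution (a sketch; it assumes the two saved images have been loaded as same-scale grayscale arrays):

import numpy as np
from scipy.signal import fftconvolve

def xcorr_similarity(img_a, img_b):
    # peak of the normalized 2D cross-correlation: near 1 if one drawing matches the
    # other at some offset (insensitive to translation, not to rotation or scale)
    a = np.asarray(img_a, dtype=float) - np.mean(img_a)
    b = np.asarray(img_b, dtype=float) - np.mean(img_b)
    corr = fftconvolve(a, b[::-1, ::-1], mode="full")   # correlation = convolution with a flipped kernel
    norm = np.sqrt(np.sum(a ** 2) * np.sum(b ** 2))
    return corr.max() / norm if norm > 0 else 0.0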

Spline in VTK looks distorted

I am using the vtk package for Python 2.7 to create some 3-dimensional geometry that I want to export to an .stl file. Part of the geometry consists of sine waves with adjustable amplitudes. Here is my problem: when I generate the splines from point data (basically a point at every maximum, minimum and turning point), they do not look uniform!
This is what the spline looks like:
You can see that the middle amplitude looks kinda okay, while the rest is clearly distorted towards the center
Basically I only want the middle part to look like a perfect sine, because I cut away the remainder anyway.
When I use another program (Autodesk Inventor) to create splines manually from the same point data it creates a uniform sine wave. Is there a way to fix this problem?
Sorry for not providing any code, but I will give you the steps I do:
add points to vtkPoints object
create vtkParametricSpline with vtkPoints as input
use vtkSplineFilter to get a finer resolution of the spline
use vtkTubeFilter to create volume
use vtkClipClosedSurface to cut away what is not needed
In the end, parameterizing the line with a cosine function was the only way to avoid the weird spline behaviour. I tried avoiding it before, because it seemed over-engineered, but it turned out to be the better way.
New algorithm:
cosine function -> vtkPoints -> vtkLineSource -> vtkTubeFilter
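For reference, a minimal sketch of that final pipeline (the sample count, amplitude, wavelength and tube radius are placeholder values):

import math
import vtk

# sample the cosine directly, so the polyline already follows the wave shape
points = vtk.vtkPoints()
n_samples, amplitude, wavelength = 200, 1.0, 2.0
for i in range(n_samples):
    x = i * wavelength / (n_samples - 1)
    points.InsertNextPoint(x, amplitude * math.cos(2.0 * math.pi * x / wavelength), 0.0)

line = vtk.vtkLineSource()
line.SetPoints(points)                        # polyline through the sampled points

tube = vtk.vtkTubeFilter()
tube.SetInputConnection(line.GetOutputPort())
tube.SetRadius(0.05)
tube.SetNumberOfSides(20)
tube.Update()                                 # tube.GetOutput() is the polydata to export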

Efficient 2D edge detection in Python

I know that this problem has been solved before, but I've had great difficulty finding any literature describing the algorithms used to process this sort of data. I'm essentially doing some edge finding on a set of 2D data. I want to be able to find a couple of points on an eye diagram (generally used to qualify high-speed communications systems), and as I have had no experience with image processing I am struggling to write efficient methods.
As you can probably see, these diagrams are so called because they resemble the human eye. They can vary a great deal in the thickness, slope, and noise, depending on the signal and the system under test. The measurements that are normally taken are jitter (the horizontal thickness of the crossing region) and eye height (measured at either some specified percentage of the width or the maximum possible point). I know this can best be done with image processing instead of a more linear approach, as my attempts so far take several seconds just to find the left side of the first crossing. Any ideas of how I should go about this in Python? I'm already using NumPy to do some of the processing.
Here's some example data; it is formatted as a 1D array with associated x-axis data. For this particular example, it should be split up every 666 points (2 * int((1.0 / 2.5e9) / 1.2e-12)), since the signal rate was 2.5 Gb/s and the time between points was 1.2 ps.
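In code terms, splitting it up like that is essentially one reshape (a sketch; the file name is a placeholder for the attached data):

import numpy as np

waveform = np.loadtxt("eye_data.txt")                    # placeholder for the attached 1D data
samples_per_trace = 2 * int((1.0 / 2.5e9) / 1.2e-12)     # 666 points = two unit intervals
n_traces = len(waveform) // samples_per_trace
eye = waveform[:n_traces * samples_per_trace].reshape(n_traces, samples_per_trace)
# each row of eye is one trace; plotting every row against the same x values overlays them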
Thanks!
Have you tried OpenCV (Open Computer Vision)? It's widely used and has a Python binding.
Not to be a PITA, but are you sure you wouldn't be better off with a numerical approach? All the tools I've seen for eye-diagram analysis go the numerical route; I haven't seen a single one that analyzes the image itself.
You say your algorithm is painfully slow on that dataset -- my next question would be why. Are you looking at an oversampled dataset? (I'm guessing you are.) And if so, have you tried decimating the signal first? That would at the very least give you fewer samples for your algorithm to wade through.
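For example (a sketch, assuming an oversampled 1D NumPy array and a 10x reduction; the file name is a placeholder):

import numpy as np
from scipy.signal import decimate

waveform = np.loadtxt("eye_data.txt")    # placeholder for the oversampled capture
# keep every 10th sample, with an anti-aliasing FIR filter applied first
reduced = decimate(waveform, 10, ftype="fir", zero_phase=True)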
just going down your route for a moment: if you read those images into memory as they are, wouldn't it be pretty easy to do two flood fills (starting at the centre and at the middle of the left edge) that include all "white" data? if the fill routine recorded the maximum and minimum height at each column, and the maximum horizontal extent, then you have all you need.
in other words, i think you're over-thinking this. edge detection is used in complex "natural" scenes when the edges are unclear. here the edges are so completely obvious that you don't need to enhance them.
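a sketch of that idea, using scipy's connected-component labelling in place of a hand-rolled flood fill (the seed coordinates are placeholders and must land on a "white" pixel):

import numpy as np
from scipy import ndimage

def region_extents(white, seed):
    # white: 2D boolean array of "white" pixels; seed: (row, col) inside the region of interest
    labels, _ = ndimage.label(white)            # connected components of the white pixels
    region = labels == labels[seed]             # the component containing the seed
    cols = np.flatnonzero(region.any(axis=0))   # columns the region touches (horizontal extent)
    top = region.argmax(axis=0)                 # first white row per column
    bottom = white.shape[0] - 1 - region[::-1].argmax(axis=0)   # last white row per column
    return cols.min(), cols.max(), top, bottom  # top/bottom are only meaningful inside cols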
