Does anyone know how to calculate how far a curve (a hanging cable, i.e. a catenary) should dip down at a certain point, given the two points it's connected to (and maybe the weight, but that doesn't matter as much)?
From what I've been looking up, all the formulas involve tension and a few other things, but I only really need the coordinates and can assume everything else is constant.
This is the formula I have at the moment. It's very basic, so it only works with two points at the same height. One method is from Wikipedia and the other I got from somewhere else; unfortunately they give different results, so I've no idea which is correct:
import math

x = [-2, 2]
a = 1.0
for i in range(x[0], x[1] + 1):
    # method 1: y = a*cosh(x/a) - a, shifted so that y(0) = 0
    y1 = a * math.cosh(i / a) - a
    # method 2 (from wikipedia): (a/2)*(e^(x/a) + e^(-x/a)) = a*cosh(x/a),
    # i.e. the same curve as method 1 but without the downward shift by a
    y2 = (a / 2) * (math.exp(i / a) + math.exp(-i / a))
    print(i, y1, y2)
Oh also, it's for Maya, so it'd be a huge help if it could be done with the normal Python standard library, without having to install scipy.optimize.
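For what it's worth, here is a minimal standard-library sketch of one way to do it, assuming you also know the cable length (the function name, the bisection brackets, and the iteration count are my own choices, not from any reference; endpoints are at equal height, matching the code above):

import math

def catenary(x1, x2, length, iterations=60):
    # Hang a cable of the given length between (x1, y0) and (x2, y0).
    # Requires length > |x2 - x1|. Returns (a, sag): the catenary
    # parameter and the maximum dip below the endpoints.
    h = abs(x2 - x1) / 2.0              # half the horizontal span
    # 2*a*sinh(h/a) is the arc length; it shrinks toward the span as a
    # grows, so plain bisection on a finds where it equals `length`
    lo, hi = h / 700.0, 1e6 * h         # brackets chosen to avoid overflow
    for _ in range(iterations):
        a = (lo + hi) / 2.0
        if 2.0 * a * math.sinh(h / a) > length:
            lo = a                      # arc too long: flatter curve (bigger a)
        else:
            hi = a                      # arc too short: deeper curve (smaller a)
    a = (lo + hi) / 2.0
    sag = a * (math.cosh(h / a) - 1.0)  # dip at the midpoint
    return a, sag

# example: the span from the code above, with a 5-unit cable;
# the dip at offset x from the midpoint is a*(cosh(h/a) - cosh(x/a))
a, sag = catenary(-2.0, 2.0, 5.0)
print(a, sag)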
Thanks :)
First of all, sorry if this is rather basic, but it is certainly not my field of expertise.
So, I'm working with protein surfaces, and I have this cavity:
[Image: protein cavity]
It is part of a larger, watertight, triangular mesh (.ply format) that represents a protein surface.
What I want to do is find whether this particular "sub-mesh" occurs in other proteins. However, I'm not looking for a perfect fit, rather for similar "sub-meshes", since the only place I will find this exact shape is the original protein.
I've been reading the docs for the Python modules trimesh and open3d. Trimesh does have a comparison module, but it doesn't seem to have the functionality I'm looking for. Also, open3d has a "compute point cloud distance" function that is recommended for comparing the difference between two point clouds or meshes.
However, since what I'm actually trying to find is similarity, I would need a way to fit my cavity's "sub-mesh" onto the surface of the protein I'm analyzing, and then score how different or deformed the fitted sub-mesh is. Another way would be to rotate and translate my sub-mesh to match as many vertices and faces as possible on the protein surface, and score that, I guess.
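In case it helps, here is a rough sketch of that second idea using Open3D's ICP registration plus its point-cloud distance as the score. The file names are placeholders, the sampling counts and the correspondence threshold are guesses you would need to tune, and in practice ICP only refines a nearby initial pose, so a coarse global alignment step would be needed first (in older Open3D versions the namespace is o3d.registration rather than o3d.pipelines.registration):

import numpy as np
import open3d as o3d

# placeholder file names for the cavity sub-mesh and the target surface
cavity = o3d.io.read_triangle_mesh("cavity.ply")
target = o3d.io.read_triangle_mesh("other_protein.ply")

# sample point clouds from both meshes
source_pc = cavity.sample_points_uniformly(number_of_points=2000)
target_pc = target.sample_points_uniformly(number_of_points=20000)

# refine the pose with ICP, starting (naively) from the identity
threshold = 2.0  # max correspondence distance, in mesh units
reg = o3d.pipelines.registration.registration_icp(
    source_pc, target_pc, threshold, np.eye(4),
    o3d.pipelines.registration.TransformationEstimationPointToPoint())
source_pc.transform(reg.transformation)

# score: mean distance from each cavity point to the target cloud
dists = np.asarray(source_pc.compute_point_cloud_distance(target_pc))
print("ICP fitness:", reg.fitness, "mean distance:", dists.mean())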
Just a heads-up, I'm a biotechnologist, self-taught in Python and with extremely limited experience in anything 3D. At this point, anything helps, be it a paper, Python module or whatever knowledge you have that you think might be useful.
Thank you very much for any help you can provide with this!
I'm working on a Python script whose goal is to detect whether a point lies off a row of points (GPS readings from an agricultural machine).
The input data are shapefiles, and I use the GeoPandas library for all the geoprocessing.
My first idea was to build a buffer around the two points on either side of the considered point, and then check whether my point falls inside that buffer. But the results aren't good.
So I'm wondering whether there is a smarter mathematical method, maybe with a scikit library... Can somebody help me?
Try ArcGIS: build two new attribute fields holding the X and Y coordinates, then calculate the distance between the points you want.
The question is kinda vague, but my guess would be to fit an approximation/regression line (numpy.polyfit of 2nd degree, I believe) and take the points with the largest distance from the line, probably with a threshold relative to the overall fit loss.
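To illustrate that idea, here is a small self-contained sketch on synthetic data (the 3-standard-deviation threshold is an arbitrary choice you would tune):

import numpy as np

rng = np.random.default_rng(0)
# synthetic row of GPS points along a gentle curve, plus one stray point
x = np.linspace(0, 100, 50)
y = 0.002 * x**2 + 0.1 * x + rng.normal(0, 0.2, x.size)
y[20] += 5.0  # the outlier

# fit a 2nd-degree polynomial to the row and look at the residuals
coeffs = np.polyfit(x, y, 2)
residuals = y - np.polyval(coeffs, x)

# flag points whose residual exceeds k standard deviations
k = 3.0
outliers = np.abs(residuals) > k * residuals.std()
print(np.where(outliers)[0])  # -> [20]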
I would like to represent a bunch of particles (~100k grains), for which I have the position (including rotation) and a discretized level-set function (i.e. the signed distance of every voxel from the surface). Due to the large sample size, I'm searching for efficient solutions to visualize it.
I first went for VTK, using its Python interface, but I'm not really sure it's the best (and simplest) way to do it since, as far as I know, there is no direct implementation for getting an isosurface from a 3D data set. In the beginning I was thinking of using marching cubes, but then I would still have to use a threshold, or interpolate, in order to find the voxels that are on the surface and label them so they can be used by the marching cubes.
Now I have found Mayavi, which has the Python function
mlab.pipeline.iso_surface()
However, I did not find much documentation on it and was wondering how it behaves in terms of performance.
Does someone have experience with these kinds of tools? Which would be the best solution in terms of efficiency and, secondly, simplicity? (I do not know the VTK library, but if there is a huge difference in performance I can dig into it, possibly even without the Python interface.)
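I can't speak to performance at 100k grains, but as a starting point, here is a minimal sketch of how mlab.pipeline.iso_surface() can pull the zero level set straight out of a signed-distance volume; the field here is a synthetic sphere standing in for one grain:

import numpy as np
from mayavi import mlab

# synthetic signed-distance field: a sphere of radius 0.5
x, y, z = np.mgrid[-1:1:64j, -1:1:64j, -1:1:64j]
sdf = np.sqrt(x**2 + y**2 + z**2) - 0.5

# the grain surface is simply the zero contour of the signed distance,
# so no thresholding or relabeling of surface voxels is needed
src = mlab.pipeline.scalar_field(sdf)
mlab.pipeline.iso_surface(src, contours=[0.0])
mlab.show()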
Okay, so I appreciate this will require a bit of patience, but bear with me. I'm analysing some Raman spectra and have written the basis of a program that uses SciPy's curve_fit to fit multiple Lorentzians to the peaks in my data. The trick is that I have so much data that I want the program to identify initial guesses for the Lorentzians automatically, rather than my doing it manually. On the whole, the program gives it a good go (and might be of use to others with a similar use case and simpler data), but I don't know SciPy well enough to optimise curve_fit to make it work on many different examples.
Code repo here: https://github.com/btjones-me/raman_spectroscopy
An example of it working well can be seen in fig 1.
Part of the problem is my peak finding algorithm, which sometimes struggles to find the appropriate starting positions for each lorentzian. You can see this in fig 2.
The next problem is that, for some reason, curve_fit occasionally diverges catastrophically (my best guess is rounding errors). You can see this behaviour in fig 3.
Finally, while I usually make good guesses for the height and x position of each Lorentzian, I haven't found a good way of predicting the width, or FWHM, of the curves. Predicting this would probably help curve_fit.
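One possibility for the width guesses (not what the repo currently does, just a sketch on a synthetic spectrum): scipy.signal.peak_widths measures each peak's width at half its prominence, which can serve as an FWHM estimate for p0; turning those estimates into bounds= for curve_fit may also help rein in the catastrophic divergence:

import numpy as np
from scipy.signal import find_peaks, peak_widths

# synthetic spectrum standing in for real Raman data
wavenumber = np.linspace(100, 2000, 1901)
intensity = (100 / (1 + ((wavenumber - 520) / 5) ** 2)
             + 40 / (1 + ((wavenumber - 950) / 15) ** 2)
             + np.random.default_rng(0).normal(0, 1, wavenumber.size))

# locate peaks, then measure their widths at half prominence
peaks, _ = find_peaks(intensity, prominence=10)
width_samples = peak_widths(intensity, peaks, rel_height=0.5)[0]

# convert widths from samples to x-axis units for the FWHM guess
dx = wavenumber[1] - wavenumber[0]
fwhm_guess = width_samples * dx
print(list(zip(wavenumber[peaks], fwhm_guess)))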
If anybody can help with any of these problems in any way, I would appreciate it greatly. I'm open to other methods or suggestions, including additional third-party libraries, as long as they improve on the current implementation. Many thanks to anybody who attempts this one!
Here it is working exactly as I intend:
Below you can see that the peak-finding method has failed to identify all the peaks. There are many peak-finding algorithms, but the one I use is SciPy's find_peaks_cwt() (it's not usually this bad; this is an extreme case).
Here it's just totally way off. This happens fairly often and I can't really understand why, nor stop it from happening. Possibly it occurs when I tell it to find more or fewer peaks than are present in the spectrum, but that's just a guess.
I've done this in Python 3.5.2. PS: I know I won't be winning any medals for code layout, but, as always, comments on code style and better practices are welcome.
I stumbled across this because I'm attempting to solve the exact same problem; here is my solution. For each peak, I only fit my Lorentzian in the region of the domain within plus or minus half the distance to the next closest peak. Here is the function that breaks up the domain:
import numpy as np

def get_local_indices(peak_indices):
    # Returns, for each peak, the array indices of the points closest
    # to it (within half the distance to the nearest neighbouring peak).
    chunks = []
    for i, peak_index in enumerate(peak_indices):
        if 0 < i < len(peak_indices) - 1:
            distance = min(abs(peak_index - peak_indices[i + 1]),
                           abs(peak_indices[i - 1] - peak_index))
        elif i == 0:
            distance = abs(peak_index - peak_indices[i + 1])
        else:
            distance = abs(peak_indices[i - 1] - peak_index)
        min_index = peak_index - int(distance / 2)
        max_index = peak_index + int(distance / 2)
        chunks.append(np.arange(min_index, max_index))
    return chunks
And here is a picture of the resulting graph, the dashed lines indicate which areas the lorentzians are fit in:
[Image: Lorentzian fitting plot]
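To show how the chunks get used, here is a hedged sketch on a synthetic two-peak spectrum (the Lorentzian parameterisation and initial guesses are my own choices; it relies on get_local_indices from above): each chunk is fitted independently with curve_fit, seeded from the peak at its centre.

import numpy as np
from scipy.optimize import curve_fit

def lorentzian(x, x0, gamma, amp):
    # single Lorentzian with half-width gamma and height amp
    return amp * gamma**2 / ((x - x0) ** 2 + gamma**2)

# synthetic two-peak spectrum standing in for real data
x_data = np.linspace(0, 100, 1001)
y_data = lorentzian(x_data, 30, 2, 10) + lorentzian(x_data, 70, 4, 5)
peak_indices = [300, 700]

for indices in get_local_indices(peak_indices):
    xs, ys = x_data[indices], y_data[indices]
    i0 = indices[len(indices) // 2]      # the peak sits mid-chunk
    p0 = [x_data[i0], 1.0, y_data[i0]]   # centre, width, height guesses
    popt, _ = curve_fit(lorentzian, xs, ys, p0=p0)
    print(popt)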
My math skills are really poor.
I'm trying to handle tri-axis accelerometer data with Python.
I need to compute the norm.
That is easy; I made something like this:
math.sqrt(x_value**2 + y_value**2 + z_value**2)
but now I have to compute the integral of that norm over time, i.e. ∫ ||a(t)|| dt (the formula I was given has no integration limits).
And for that I have no clue...
Can somebody help me with that?
Edit: Adding more info due to the negative votes (??)
I know there are tools for doing integration in Python, but this integral has no bounds (there are no limits in the formula), so I don't understand how to make it work.
Integrating the modulus of the acceleration will lead you nowhere. You must integrate all three components separately to get the components of the velocity (and integrate them a second time to get the position vector).
To perform the numerical integration, apply Simpson's rule incrementally. Since Simpson's rule consumes the samples in pairs of intervals, you will only get every other value of the velocity this way.
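For the component-wise integration, here is a small sketch on made-up samples; it uses the trapezoidal rule rather than Simpson's for simplicity, which also has the advantage of giving a velocity value at every sample:

import numpy as np

# made-up tri-axis accelerometer samples: time (s) and m/s^2 values
t = np.linspace(0.0, 10.0, 1001)
ax, ay, az = np.sin(t), np.cos(t), np.zeros_like(t)

def cumtrapz(values, times):
    # running integral of one component via the trapezoidal rule
    out = np.zeros_like(values)
    out[1:] = np.cumsum(0.5 * (values[1:] + values[:-1]) * np.diff(times))
    return out

# integrate each axis separately to get the velocity components...
vx, vy, vz = (cumtrapz(a, t) for a in (ax, ay, az))
# ...and only then take the norm if you need the speed
speed = np.sqrt(vx**2 + vy**2 + vz**2)
print(speed[-1])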