I've been playing around with an accelerometer with three axes: X, Y, and Z. The supplier's site says it measures gravitational force.
I'm sending this data to the Blender game engine, where I rotate a cube in real time based on the values coming from the accelerometer. However, the values coming through don't seem to match up.
The accelerometer spits out values from -700 to 700 on each axis, and I need to convert these values to something I can use in Blender. My maths knowledge is not up to scratch, so I don't know where to start with this one.
If anybody could shed some light on this, that would be great.
Many thanks
Will
EDIT
Currently I'm using a bit of Python code to convert the rotation values to a matrix:

import math

def reorient(alpha, beta, gamma):
    a = math.cos(alpha)
    b = math.sin(alpha)
    c = math.cos(beta)
    d = math.sin(beta)
    e = math.cos(gamma)
    f = math.sin(gamma)
    ad = a*d
    bd = b*d
    matrix = [[c*e, -a*f + bd*e,  b*f + ad*e],
              [c*f,  a*e + bd*f, -b*e + ad*f],
              [-d,   b*c,         a*c]]
    return matrix
I am then using setOrientation(matrix) to affect the rotation of the cube. However, I am currently feeding the wrong values into the reorient() function.
I guess you are using the measured acceleration to find the direction of gravitational pull (i.e., down). If you are moving the accelerometer, apart from just turning it, there will be some additional force; think of the accelerometer as having a pendulum weight hanging from it: as you move it, the pendulum sways (although in this case it would be a very short, fast-reacting pendulum). You could try doing some sort of movement compensation, but it might be simpler to just try to keep the sensor in a fixed location.
Edit: ok, it looks like I totally misread the question - you want to know how to do the rotation in a script?
It looks like each Blender object has three properties (.RotX, .RotY, .RotZ) which contain the current values (in radians) and a method (.rot(new_rotx, new_roty, new_rotz)) which performs a rotation (see documentation at http://www.blender.org/documentation/249PythonDoc/Object.Object-class.html). I am currently looking at how the rotations are applied; more shortly.
Edit2: it looks like the angles are specified as Euler angles (http://en.wikipedia.org/wiki/Euler_angles); they give some conversion matrices. It also looks like your accelerometer data is underconstrained (you need one more constraint, specifying rotation about the 'down' direction - maybe some sort of inertial 'least distance from previous position' calculation?)
Edit3: there is a sample script which may be helpful; on my machine it is at C:\Users\Me\AppData\Roaming\Blender Foundation\Blender\.blender\scripts\object_random_loc_sz_rot.py. It shows how to get the currently selected object and tweak its rotation. Hope that helps!
Edit4: for sake of discussion, here is some sample code; it may be a bit redundant (I haven't worked in Blender before) and it doesn't solve the problem, but it will at least give us a common basis for further discussion ;-)
#!BPY

"""
Name: 'Set rotation by accelerometer'
Blender: 249
Group: 'Object'
Tooltip: 'Set the selected objects rotation by accelerometer'
"""

__bpydoc__ = '''
This script sets the selected object's rotation by accelerometer.
'''

from Blender import Draw, Scene
import math

def reorient(alpha, beta, gamma):
    a = math.cos(alpha)
    b = math.sin(alpha)
    c = math.cos(beta)
    d = math.sin(beta)
    e = math.cos(gamma)
    f = math.sin(gamma)
    ad = a*d
    bd = b*d
    return [
        [c*e, -a*f + bd*e,  b*f + ad*e],
        [c*f,  a*e + bd*f, -b*e + ad*f],
        [-d,   b*c,         a*c]
    ]

def getAccel():
    # test stub -
    # need to get actual values from the accelerometer here
    dx = -700
    dy = 100
    dz = 250
    return (dx, dy, dz)

def normalize(vec):
    "Return scaled unit vector"
    x, y, z = vec
    mag = (x*x + y*y + z*z)**0.5
    return (x/mag, y/mag, z/mag)

def main():
    scn = Scene.GetCurrent()
    try:
        obj = scn.objects.context
        euler = (obj.RotX, obj.RotY, obj.RotZ)
    except AttributeError:
        return
    down = normalize(getAccel())
    matrix = None
    # do something here to find a new rotation matrix
    # based on euler and down
    # then:
    if matrix:
        obj.setOrientation(matrix)
    else:
        # test value:
        # if reorient() is working properly, the
        # object's rotation should not change!
        obj.setOrientation(reorient(*euler))

if __name__ == "__main__":
    main()
Let's assume that you can use the accelerometers to correctly determine which way is 'up'. We can call that vector N. It seems to me that you want the 'up' direction of your cube to align with N. As already mentioned, that leaves the cube free to spin, but you can still find a rotation matrix that might accomplish what you're going for, but you need to account for two separate cases or else you'll have a singularity in the solution. I'll assume that 'Z' is 'up' on the cube.
If you treat the three accelerometer values as a vector and normalize it (to get N), you've got the new 'Z' axis portion of your rotation matrix so that a vector pointed in the z direction will now align with the 'up' vector.
| a d N.x | |0| |N.x|
| b e N.y | * |0| = |N.y|
| c f N.z | |1| |N.z|
So we need to decide what to do with a-f. One common thing to do is this: if N is pointing mostly along the original 'Z' axis, then make the new 'Y' axis portion of the matrix be M = N cross X:
d = 0
e = N.z
f = -N.y
Normalize M and then find the 'X' axis portion of the matrix: L = M cross N. Normalize L.
If N is not pointing mostly along the 'Z' axis (N.z < .707), then you find the new 'Y' axis portion as M = N cross Z. Normalize M and find L = M cross N and, finally, normalize L.
Edit:
So we have our three accelerometer values: A.x, A.y, A.z. The first step is to normalize them:
a = sqrt(A.x*A.x + A.y*A.y + A.z*A.z); and then
N.x = A.x/a; N.y = A.y/a; N.z = A.z/a;
We assume that if N == [0, 0, 1] then the correct rotation matrix is the identity matrix. If N doesn't point directly along the z-axis, then we want to form a matrix that will rotate the z-axis of the cube so that it lines up with N.
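For concreteness, here is a minimal, untested Python sketch of that construction (the function names are my own; the 0.707 threshold and the cross-product choices are the ones described above, with abs() added so an upside-down sensor also takes the first branch):

import math

def normalize(v):
    mag = math.sqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2])
    return [v[0]/mag, v[1]/mag, v[2]/mag]

def cross(u, v):
    return [u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0]]

def up_matrix(accel):
    N = normalize(accel)  # new 'Z' (up) axis
    if abs(N[2]) > 0.707:
        M = normalize(cross(N, [1.0, 0.0, 0.0]))  # M = N cross X = (0, N.z, -N.y)
    else:
        M = normalize(cross(N, [0.0, 0.0, 1.0]))  # M = N cross Z
    L = normalize(cross(M, N))  # completes the right-handed basis
    # columns are [L M N], matching the matrix layout shown earlier
    return [[L[0], M[0], N[0]],
            [L[1], M[1], N[1]],
            [L[2], M[2], N[2]]]

Feeding up_matrix(getAccel()) into setOrientation() in place of the reorient() test call would be the natural next step.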
I want to implement a photo editor in Python using Flask. So far, I have managed to apply an S-curve to a photo, like this:
import cv2
import numpy as np

image = cv2.imread('apple.jpg')

def sToneCurve(frame):
    look_up_table = np.zeros((256, 1), dtype='uint8')
    for i in range(256):
        look_up_table[i][0] = 255 * (np.sin(np.pi * (i / 255 - 1 / 2)) + 1) / 2
    return cv2.LUT(frame, look_up_table)

image_contrasted = sToneCurve(image)
cv2.imwrite('apple_dark.jpg', image_contrasted)
How could I implement an interactive tone curve, so that the user could choose how to edit the photo (like this: tone curve), rather than having a predefined formula applied to the photo as in the code above? What would be the best approach, and which libraries and visualizations for the curve plots should I use?
You implement this using "standard" polynomial fitting: you have N points that you need a curve through, so you find the (N-1)th-order polynomial that does that, then use that polynomial as your mapping function.
You're already using numpy, so use numpy.polynomial.polynomial.polyfit with:
x: all your points' x coordinates, including your black and white points (which, in a proper tone curve, users should be able to move off of (0,0) and (1,1) respectively),
y: all your points' y coordinates,
deg: if the polynomial has to pass through all points, which it should, this should be equal to len(x) - 1, as two points define a line (a first-degree polynomial), three points define a quadratic curve (a second-degree polynomial), etc. "The" polynomial through N points is an (N-1)th-degree polynomial,
the rest of the args shouldn't particularly matter.
This gives you a numpy array of polynomial coefficients (let's call that array c) that you can then use for mapping: any pixel with lightness/intensity value i should get mapped to:
mapped = f(i) = c[0] * i**0 + c[1] * i**1 + c[2] * i**2 + ...
Which thankfully numpy can do for you by simply using the corresponding polyval function.
And of course, to make that fast, what you really want to do is build a LUT that you can just directly consult, every time the user changes a coordinate in the tone curve UI, so:
from numpy.polynomial.polynomial import polyfit, polyval

# How big of a LUT you actually need depends entirely
# on the bit depth you're working with, of course...
BIT_DEPTH = 2**16
TONE_LUT = list(range(0, BIT_DEPTH))  # identity mapping until the user edits the curve

def update_from_tone_ui(coordinates):
    """
    Called on user value update, with coordinates being
    a list-of-lists a la [[0,0], [0.1,0.1], ...]
    """
    global TONE_LUT  # rebuild the module-level LUT
    x, y = zip(*coordinates)
    coefficients = polyfit(x, y, len(x) - 1)
    f = lambda i: clamp(polyval(i, coefficients), 0, 1)
    # And remember to make sure the input range to f() matches
    # the actual x/y domain that we used for the polyfit,
    # and to scale the result back to [0, BIT_DEPTH - 1]:
    divisor = BIT_DEPTH - 1
    TONE_LUT = [divisor * f(i / divisor) for i in range(0, BIT_DEPTH)]
with clamp coming from "somewhere", but if you don't already have one then it's trivially implemented with some shortcut returns:
def clamp(n, floor, ceiling):
    if n < floor: return floor
    if n > ceiling: return ceiling
    return n
(And of course make sure to adjust your clamping values if you don't want your tone curve x and y coordinates in [0,1])
Now, rather than running the mapping function every time, you just directly look up the mapped value. Note that you get a bit of freedom in terms of precision: you could use a tone curve in which the x and y values run from 0 to 1, or you can have them run from 0 to whatever bit depth you use (2^8, 2^16, what have you), but whatever you use, make sure you scale your actual pixel intensities accordingly when you generate your LUT. Otherwise things will look really interesting.
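As a usage note, applying the finished LUT to an image is then a single numpy indexing operation; a rough sketch, assuming an 8-bit image (and therefore a 256-entry LUT) as in the question's OpenCV example:

import numpy as np

def apply_tone_lut(image, lut):  # hypothetical helper, not part of the answer above
    # image: uint8 array; lut: sequence of 256 mapped values in [0, 255]
    table = np.asarray(lut, dtype=np.uint8)
    return table[image]  # every pixel value indexes into the LUT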
I'm trying to find the angle between two vectors.
Following is the code that I use to evaluate the angle between vectors ba and bc
import numpy as np
import scipy.linalg as la
a = np.array([6,0])
b = np.array([0,0])
c = np.array([1,1])
ba = a - b
bc = c - b
cosine_angle = np.dot(ba, bc) / (la.norm(ba) * la.norm(bc))
angle = np.arccos(cosine_angle)
print (np.degrees(angle))
My question is about this code: for both c = np.array([1,1]) and c = np.array([1,-1]) you get 45 degrees as the answer. I can understand this from a mathematical viewpoint because, with the dot product, you always get the angle in the interval [0, 180].
But geometrically this is misleading, as the point c is in two different locations for [1,1] and [1,-1].
So is there a way I can get the angle in the interval [0, 360] for a general starting point
b = np.array([x,y])
Appreciate your help
Conceptually, obtaining the angle between two vectors using the dot product is perfectly alright. However, since the angle between two vectors is invariant upon translation/rotation of the coordinate system, we can find the angle subtended by each vector to the positive direction of the x-axis and subtract one value from the other.
The advantage is, we'll use np.arctan2 to find the angles, which returns angles in the range [-π, π], and hence you get an idea of the quadrant your vector lies in.
# Syntax: np.arctan2(y, x) - put the y value first!
# Instead of explicitly referring by indices, you can unpack each vector in reverse, like so:
# np.arctan2(*bc[::-1])
angle = np.arctan2(bc[1], bc[0]) - np.arctan2(ba[1], ba[0])
Which you can then appropriately transform to get a value within [0, 2π].
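For example, a simple way to fold the result into [0, 2π) is a modulo, since Python's % (and numpy's mod) returns a non-negative value for a positive divisor; a small sketch using the question's variables:

angle = np.arctan2(bc[1], bc[0]) - np.arctan2(ba[1], ba[0])
angle = angle % (2 * np.pi)  # e.g. -45 degrees becomes 315 degrees
print(np.degrees(angle))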
Objective
I have a soup of triangle polygons. I want to retrieve the largest median as a vector for each triangle.
State of work
Starting point:
Array of points (n, 3), e.g. [x, y, z]
Array of triangle point indices (n, 3) referencing the array of points above, e.g. [[0,1,2],[2,3,4]...]
I combine both into one single matrix containing the real 3D point coordinates. Then I calculate the median vectors and their lengths.
Edit: I updated the code to my current version of it:
import numpy as np

def calcMedians(polygon):
    # C -> AB = C - (A + 0.5(B-A))
    # B -> AC = B - (A + 0.5(C-A))
    # A -> BC = A - (B + 0.5(C-B))
    dim = np.shape(polygon)
    medians = np.zeros((dim[0], 3, 2, dim[1]))
    medians[:,0,0] = polygon[:,2]
    medians[:,0,1] = polygon[:,0] + 0.5*(polygon[:,1]-polygon[:,0])
    medians[:,1,0] = polygon[:,1]
    medians[:,1,1] = polygon[:,0] + 0.5*(polygon[:,2]-polygon[:,0])
    medians[:,2,0] = polygon[:,0]
    medians[:,2,1] = polygon[:,1] + 0.5*(polygon[:,2]-polygon[:,1])
    m1 = np.linalg.norm(medians[:,0,0]-medians[:,0,1], axis=1)
    m2 = np.linalg.norm(medians[:,1,0]-medians[:,1,1], axis=1)
    m3 = np.linalg.norm(medians[:,2,0]-medians[:,2,1], axis=1)
    medianlengths = np.vstack((m1, m2, m3)).T
    maxlengths = np.argmax(medianlengths, axis=1)
    final = np.zeros((dim[0], 2, dim[1]))
    dim = np.shape(medians)
    for i in range(0, dim[0]):
        idx = maxlengths[i]
        final[i] = medians[i, idx]
    return final
Now I am creating the final median vector matrix using an empty matrix first. The lengths are calculated using np.linalg.norm and collected in a matrix. For this matrix, the argmax method is used to identify the target median vector.
Problem
Old: However, I am somehow confused by the dimensionality and am currently not able to get this to work, or to understand whether the result is correct.
Does somebody know how to do this correctly and/or if this approach is efficient?
My target would be a construct of the 3 medians in the form [n_polygons, 3 (medians), 2 (start and end point), 3 (xyz)].
Using the max-length information, I would like to reduce it to [n_polygons, 2 (start and end point), 3 (xyz)].
Using the improvised for loop in the function, I can create the output, but there has to be a cleaner matrix method (see the sketch below). Using medians[:,maxlengths,:,:] leads to a shape of [4, n_polygons, 2, 3] instead of [n_polygons, 2, 3], and I do not understand why.
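For reference, a minimal sketch (my own experiment, assuming medians has shape (n, 3, 2, 3) and maxlengths has shape (n,)) of picking each polygon's own median with a paired index array:

n = medians.shape[0]
final = medians[np.arange(n), maxlengths]  # shape (n, 2, 3)

With medians[:, maxlengths], the slice keeps axis 0 unchanged and the whole index array is applied to axis 1 for every polygon, which is where the extra leading dimension comes from; pairing np.arange(n) with maxlengths selects exactly one median per polygon instead.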
Example image for medians of two triangles:
Unfortunately, I don't have a large exemplary data set, but I guess one can be generated quite quickly. The example data set from the picture shown above is:
polygons = np.array([[0,1,2],[0,3,2]])
points = np.array([[0, 0],
                   [1, 0],
                   [1, 1],
                   [0, 1]])
polygons3d = points[polygons[:,:]]
The longest median corresponds to the shortest triangle side. Look here and rewrite the median length formula as
M[i] = Sqrt(2(a^2+b^2+c^2)-3*side[i]^2) / 2
So you can simplify the calculations a bit using only the side lengths (perhaps you already have them); a sketch follows below.
Concerning 3D coordinates: just use a projection onto any coordinate plane not perpendicular to your point plane, i.e. ignore one dimension (choose the dimension with the lowest value range).
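Here is a rough numpy sketch of that side-length shortcut (my own naming; it assumes polygons3d with shape (n, 3, 3) holding the corners A, B, C of each triangle, as in the question's example data):

import numpy as np

def longest_medians(polygons3d):
    A, B, C = polygons3d[:, 0], polygons3d[:, 1], polygons3d[:, 2]
    # side lengths, each opposite the same-named corner
    sides = np.stack([np.linalg.norm(B - C, axis=1),   # a, opposite A
                      np.linalg.norm(C - A, axis=1),   # b, opposite B
                      np.linalg.norm(A - B, axis=1)],  # c, opposite C
                     axis=1)
    ssq = (sides**2).sum(axis=1, keepdims=True)        # a^2 + b^2 + c^2 per triangle
    medians = np.sqrt(2*ssq - 3*sides**2) / 2          # M[i] from the formula above
    # the longest median is the one to the shortest side
    return np.argmin(sides, axis=1), medians.max(axis=1)

shortest_side, longest_median = longest_medians(points[polygons])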
For example, see the image below, which explains the problem for a simple 2D case. The label (N) and coordinates (x, y) of each point are known. I need to find all the point labels that lie within the red circle.
My actual problem is in 3D and the points are not uniformly distributed.
A sample input file containing the coordinates of 7.25 M points is attached here: point file.
I tried the following piece of code:
import numpy as np
C = [50,50,50]
R = 20
centroid = np.loadtxt('centroid') #chk the file attached
def dist(x,y): return sum([(xi-yi)**2 for xi, yi in zip(x,y)])
elabels=[i+1 for i in range(len(centroid)) if dist(C,centroid[i])<=R**2]
For a single search it takes ~10 minutes. Any suggestions to make it faster?
Thanks,
Prithivi
When using numpy, avoid using list comprehensions on arrays.
Your computation can be done using vectorized expressions like this
centre = np.array((50., 50., 50.))
points = np.loadtxt('data')
distances2 = np.sum((points - centre)**2, axis=1)
points is an N x 3 array, points-centre is also an N x 3 array,
(points-centre)**2 computes the square of each element of the difference, and finally np.sum(..., axis=1) sums the squared differences along axis 1, that is, across columns.
To filter the array of positions, you can use boolean indexing
close = points[distances2<max_dist**2]
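To get back the 1-based labels asked for in the question, one might then write (a small sketch reusing R = 20 from the question):

elabels = np.nonzero(distances2 <= R**2)[0] + 1  # 1-based labels of points inside the sphere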
You are calling the dist function heavily. You could try to optimize it at a low level, and check with the timeit Python module which version is more efficient. On my machine, I tried this one:
def dist(x, y):
    d0 = y[0] - x[0]
    d1 = y[1] - x[1]
    d2 = y[2] - x[2]
    return d0*d0 + d1*d1 + d2*d2
and timeit said it was more than 3 times quicker.
This one was just in the middle:
def dist(x, y):
    s = 0
    for i in range(len(x)):
        d = y[i] - x[i]
        s += d * d
    return s
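For reference, a minimal timeit sketch of the kind of comparison meant here (the setup string is illustrative and assumes dist is defined in the running module):

import timeit

setup = "from __main__ import dist; x = (50., 50., 50.); y = (12., 34., 56.)"
print(timeit.timeit("dist(x, y)", setup=setup, number=1000000))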
Does anyone know a good method to calculate the empirical/sample covariogram, if possible in Python?
This is a screenshot of a book which contains a good definition of a covariogram:
If I understood it correctly, for a given lag/width h, I'm supposed to get all the pairs of points that are separated by h (or less than h), multiply their values, and, for each of these points, calculate the mean, which in this case is defined as m(x_i). However, according to the definition of m(x_i), if I want to compute m(x1), I need to obtain the average of the values located within distance h from x1. This looks like a very intensive computation.
First of all, am I understanding this correctly? If so, what is a good way to compute this, assuming a two-dimensional space? I tried to code this in Python (using numpy and pandas), but it takes a couple of seconds and I'm not even sure it is correct, which is why I will refrain from posting the code here. Here is another attempt at a very naive implementation:
import numpy as np
from scipy.spatial.distance import pdist, squareform

distances = squareform(pdist(np.array(coordinates)))  # coordinates is an n x 2 array
z = np.array(z)  # z are the values
cutoff = np.max(distances)/3.0  # somewhat arbitrary cutoff
width = cutoff/15.0
widths = np.arange(0, cutoff + width, width)

Z = []
Cov = []
for w in np.arange(len(widths)-1):  # for each width
    # for each pairwise distance
    for i in np.arange(distances.shape[0]):
        for j in np.arange(distances.shape[1]):
            if distances[i, j] <= widths[w+1] and distances[i, j] > widths[w]:
                m1 = []
                m2 = []
                # when a distance is within a given width, calculate the means of
                # the points involved
                for x in np.arange(distances.shape[1]):
                    if distances[i, x] <= widths[w+1] and distances[i, x] > widths[w]:
                        m1.append(z[x])
                for y in np.arange(distances.shape[1]):
                    if distances[j, y] <= widths[w+1] and distances[j, y] > widths[w]:
                        m2.append(z[y])
                mean_m1 = np.array(m1).mean()
                mean_m2 = np.array(m2).mean()
                Z.append(z[i]*z[j] - mean_m1*mean_m2)
    Z_mean = np.array(Z).mean()  # calculate covariogram for width w
    Cov.append(Z_mean)  # collect covariances for all widths
However, now I have confirmed that there is an error in my code. I know that because I used the variogram to calculate the covariogram (covariogram(h) = covariogram(0) - variogram(h)) and I get a different plot:
And it is supposed to look like this:
Finally, if you know a Python/R/MATLAB library to calculate empirical covariograms, let me know. At least, that way I can verify what I did.
One could use scipy.cov, but if one does the calculation directly (which is very easy), there are more ways to speed this up.
First, make some fake data that has some spacial correlations. I'll do this by first making the spatial correlations, and then using random data points that are generated using this, where the data is positioned according to the underlying map, and also takes on the values of the underlying map.
Edit 1:
I changed the data point generator so positions are purely random, but z-values are proportional to the spatial map. And, I changed the map so that the left and right sides were shifted relative to each other to create negative correlation at large h.
from numpy import *
import math
import random
import matplotlib.pyplot as plt

S = 1000
N = 900

# first, make some fake data, with correlations on two spatial scales
# density map
x = linspace(0, 2*pi, S)
sx = sin(3*x)*sin(10*x)
density = .8 * abs(outer(sx, sx))
density[:, :S//2] += .2

# make a point cloud motivated by this density
random.seed(10)  # so this can be repeated
points = []
while len(points) < N:
    v, ix, iy = random.random(), random.randint(0, S-1), random.randint(0, S-1)
    if True:  # v < density[ix, iy]:
        points.append([ix, iy, density[ix, iy]])
locations = array(points).transpose()
print(locations.shape)

plt.imshow(density, alpha=.3, origin='lower')
plt.plot(locations[1,:], locations[0,:], '.k')
plt.xlim((0, S))
plt.ylim((0, S))
plt.show()

# build these into the main data: all pairs into distances and z0 z1 values
L = locations
m = array([[math.sqrt((L[0,i]-L[0,j])**2 + (L[1,i]-L[1,j])**2), L[2,i], L[2,j]]
           for i in range(N) for j in range(N) if i > j])
Which gives:
The above is just the simulated data, and I made no attempt to optimize its production, etc. I assume this is where the OP starts, with the task below, since the data already exists in a real situation.
Now calculate the "covariogram" (which is much easier than generating the fake data, btw). The idea here is to sort all the pairs and associated values by h, and then index into these using ihvals. That is, summing up to index ihval is the sum over N(h) in the equation, since this includes all pairs with hs below the desired values.
Edit 2:
As suggested in the comments below, N(h) is now only the pairs that are between h-dh and h, rather than all pairs between 0 and h (where dh is the spacing of the h-values in hvals, i.e., S/1000 was used below).
# now do the real calculations for the covariogram
# sort by h and give clear names
i = argsort(m[:,0])  # h sorting
h = m[i,0]
zh = m[i,1]
zsh = m[i,2]
zz = zh*zsh

hvals = linspace(0, S, 1000)  # the values of h to use
# (S should be in the units of distance; here I just used ints)
ihvals = searchsorted(h, hvals)

result = []
for i, ihval in enumerate(ihvals[1:]):
    start, stop = ihvals[i], ihval  # pairs whose h falls between hvals[i] and hvals[i+1]
    N = stop - start
    if N > 0:
        mnh = sum(zh[start:stop])/N
        mph = sum(zsh[start:stop])/N
        szz = sum(zz[start:stop])/N
        C = szz - mnh*mph
        result.append([h[ihval], C])
result = array(result)

plt.plot(result[:,0], result[:,1])
plt.grid()
plt.show()
which looks reasonable to me, as one can see bumps or troughs at the expected h values, but I haven't done a careful check.
The main speedup here over scipy.cov is that one can precalculate all of the products, zz. Otherwise, one would feed zh and zsh into cov for every new h, and all the products would be recalculated. This calculation could be sped up even more by doing partial sums, i.e., from ihvals[n-1] to ihvals[n] at each timestep n, but I doubt that will be necessary.
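For what it's worth, a small sketch of that binned-sums idea (my own variant using numpy's bincount for the per-bin sums rather than explicit running sums; it assumes h, zh, zsh, zz, and hvals as defined in the code above):

from numpy import digitize, bincount

bins = digitize(h, hvals)  # bin b collects pairs with hvals[b-1] <= h < hvals[b]
counts = bincount(bins, minlength=len(hvals) + 1).astype(float)
valid = counts > 0  # skip empty bins

def binned_mean(values):
    sums = bincount(bins, weights=values, minlength=len(hvals) + 1)
    return sums[valid] / counts[valid]

C = binned_mean(zz) - binned_mean(zh) * binned_mean(zsh)  # one covariance per non-empty bin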