I'm new to ROS and I'm struggling with normals. Here is my situation.
I subscribe to my camera (a RealSense camera), get the point cloud, convert it from ROS to PCL, and finally use the make_NormalEstimation() function of python-pcl to get the normals. So far so good!
Now I want to publish these normals to a ROS topic, and by publishing I also mean visualizing them in RViz. The python-pcl function make_NormalEstimation() returns 4 values in the form of a vector. The first 3 values are normal_x, normal_y, normal_z, and the 4th value is the curvature. I want to publish and visualize the normals in RViz through PoseStamped messages. As far as I know, a PoseStamped message needs a position and a quaternion. For the position field I use the x, y, z of the point in my point cloud at which I want the normal. But when it comes to the quaternion (and here is my main problem and struggle), I don't know what to use! I tried to use the returned values as if they were quaternion_x, quaternion_y, quaternion_z, quaternion_w, but the results were not so good...
So, my questions are:
How can I use the returned values of make_NormalEstimation() to create a PoseStamped message?
Is there a way to transform the returned values into a quaternion?
Am I missing something about the returned values?
Is there another way of finding and using normals in ROS?
How can I generate and publish a normal in ROS? Not only its normal_x, normal_y, normal_z values but also its orientation.
Do I have to publish both its normal_x, normal_y, normal_z values and its orientation, or just the normal_x, normal_y, normal_z values? And if so, how will a robot know the orientation it needs to approach a point of interest?
Sorry for the chaos in my questions! I really hope they make sense!
Thanks in advance!
You can simply set up a publisher and create messages on demand for every normal in your list.
import rospy
from geometry_msgs.msg import PoseStamped

pose_pub = rospy.Publisher('/normals_topic', PoseStamped, queue_size=10)

def calc_and_publish():
    # Calculate your normals
    norms = calculate_norms()
    for n in norms:
        output_pose = PoseStamped()
        # remember to fill output_pose.header.frame_id (and stamp) so RViz
        # knows which frame to display the pose in
        output_pose.pose.position.x = n[0]
        output_pose.pose.position.y = n[1]
        output_pose.pose.position.z = n[2]
        pose_pub.publish(output_pose)
I should note that this can eat up a lot of resources, since you'll have to loop over the point cloud multiple times per message.
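As for the quaternion: the fourth value returned by make_NormalEstimation() is the curvature, not a quaternion component, which is why treating the output as (x, y, z, w) gave poor results. One common approach, shown as a rough sketch below (untested against your setup), is to build a quaternion that rotates a reference axis onto the normal; note that a direction only pins down two rotational degrees of freedom, so the roll around the normal is an arbitrary choice:

import numpy as np
from tf.transformations import quaternion_about_axis

def normal_to_quaternion(normal):
    # Rotate the x-axis onto the normal: axis = x cross n, angle = arccos(x dot n)
    n = np.asarray(normal, dtype=float)
    n /= np.linalg.norm(n)
    x_axis = np.array([1.0, 0.0, 0.0])
    axis = np.cross(x_axis, n)
    if np.linalg.norm(axis) < 1e-9:
        axis = np.array([0.0, 0.0, 1.0])  # normal is (anti)parallel to x; any perpendicular axis works
    angle = np.arccos(np.clip(np.dot(x_axis, n), -1.0, 1.0))
    return quaternion_about_axis(angle, axis)  # (x, y, z, w), the order ROS expects

The four components can then be copied into output_pose.pose.orientation before publishing.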
I'm writing a program for a friend of mine who is currently studying Aeronautical Engineering, and I'm trying to test whether the math I've implemented works. For those who know: I'm trying to calculate the divergence (I think; I'm not an engineer and I'm not going to pretend that I am).
He sent me a Stack Overflow link to how he thinks this should be done. (The thread can be found here.) His version doesn't work for me, as it gives me a NumPy error as seen below:
numpy.core._internal.AxisError: axis 1 is out of bounds for array of dimension 1
Now I've tried a different method that gives me a different error, as seen below:
ValueError: operands could not be broadcast together with shapes (60,58) (60,59)
This method gives me the error above, and I'm not entirely sure how to fix it. I've put the code that produces it below.
velocity = np.diff(c_flow)/np.diff(zex)
ucom = velocity.real
vcom = -(velocity.imag)
deltau = np.divide((np.diff(ucom)),(np.diff(x)))
deltav = np.divide((np.diff(vcom)),np.diff(y))
print(deltau + deltav)
Note: c_flow is defined earlier in the program and is the complex potential. zex is also defined earlier as an early form of the complex variable. x and y are two coordinate matrices from coordinate vectors.
The expected result from the print statement should be zero, or a value very close to zero. (I'm not entirely sure what the value should be, but as I've said, I'm not an engineer.)
Thank you in advance
EDIT:
After following BenT's advice I used np.gradient and np.sum, but this was operating along the wrong axis, so to counteract that I separated the two steps as seen below:
velocity = np.diff(c_flow)/np.diff(z)
grad = (np.gradient(velocity))
divergence = np.sum(grad, axis=0)
print(np.average(divergence))
print(np.average(velocity))
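For reference, the broadcast error comes from np.diff shortening an array by one along the differenced axis: after two diffs the numerator is (60, 58) while np.diff(x) is still (60, 59). np.gradient keeps the original shape, so a shape-safe divergence could look like the sketch below (illustrative only, with placeholder data; it assumes x and y come from np.meshgrid as described in the question):

import numpy as np

# Hypothetical 60x60 grid standing in for the question's x and y
x, y = np.meshgrid(np.linspace(-1.0, 1.0, 60), np.linspace(-1.0, 1.0, 60))

# Example field with known divergence of zero: uniform flow in x
ucom = np.ones_like(x)
vcom = np.zeros_like(y)

dx = x[0, 1] - x[0, 0]  # grid spacing along columns (axis 1)
dy = y[1, 0] - y[0, 0]  # grid spacing along rows (axis 0)

# divergence = du/dx + dv/dy; every term keeps the (60, 60) shape
divergence = np.gradient(ucom, dx, axis=1) + np.gradient(vcom, dy, axis=0)
print(np.average(divergence))  # ~0 for this field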
I need to transfer some Matlab code into Python. I got stuck at a numpy.arange call that I use to set points continuously on the arc of a circle with a given angle (in radians).
I got this far (example for points on x-axis):
import numpy as np

def sensor_data_arc_x():
    # sample points every degree (2*pi/360 radians) up to half the given angle
    theta = np.arange(0, angle/2, 2*np.pi/360)
    return np.multiply(radius, np.cos(np.transpose(theta)))
I know numpy.arange does not include the endpoint, although the Matlab equivalent does; the array is always one element short, which is messing up my further computations.
Is there a way to include the endpoint?
I recommend that you work through a tutorial on for loops -- the information you need is there, as well as other hints on using controlled iteration. To solve your immediate need, simply increase your upper bound by one loop increment:
inc = 2*np.pi/360
theta = np.arange(0, angle/2 + inc, inc)
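Alternatively (just a common idiom, not part of the answer above), np.linspace includes the endpoint by construction, so you can specify the number of steps instead of the step size; 'angle' here is the variable from the surrounding code:

import numpy as np

step = 2*np.pi/360                       # one-degree step, as in the question
n_steps = int(round((angle/2) / step))   # how many whole steps fit in the arc
theta = np.linspace(0, angle/2, n_steps + 1)  # n_steps + 1 samples, endpoint included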
So in a nutshell, I need to export the vertex normals from a character into a text file, or whatever, and then reimport them onto the same character in a different scene.
I have the import/export part working in a way that I think is OK, but actually going through the loop and setting the normal on each vertex takes over twenty minutes, and it usually overloads the RAM on my machine and crashes Maya.
I guess I am looking for a way to make my code more efficient or just run faster; any advice would be appreciated. Thanks.
import re
import maya.cmds as mc

def ImoNorms(self):
    ll = mc.ls('head.vtxFace[*][*]')
    normals_file = open('My desktop.txt', 'r')  # renamed from 'input', which shadows the builtin
    spltOne = ll[:len(ll) // 2]
    spltTwo = ll[len(ll) // 2:]
    i = 0
    for each in spltOne:
        currentLine = normals_file.readline()
        # split on brackets, commas, newlines and spaces (the original '\/n' was a typo for '\n')
        kk = re.split(r'\[|\]|,|\n| ', currentLine)
        aa = round(float(kk[1]), 3)
        bb = round(float(kk[3]), 3)
        cc = round(float(kk[5]), 3)
        mc.select(each)
        mc.polyNormalPerVertex(xyz=(aa, bb, cc))
        i += 1
        if i % 1000 == 0:
            print(i)

init()  # defined elsewhere in the original script
Sorry for the formatting issues, still new to this site.
+1 to using OpenMaya if you want better performance.
Check out MFnMesh.getNormals and MFnMesh.setNormals. I admit I haven't used these methods myself, but if they're anything like MFnMesh.setPoints they should give a significant speed boost, since they set all the normals at once. It seems you don't have to deal with construction history either.
Here's an example of their usage that redirects all of a sphere's vertex normals to point down. (Go to Display > Polygons > Vertex Normals to visualize the normals.)
import maya.OpenMaya as OpenMaya
import maya.cmds as cmds

# Create a sphere to change vert normals with
mesh_obj, _ = cmds.polySphere()

# Wrap sphere as MDagPath object
sel = OpenMaya.MSelectionList()
sel.add(mesh_obj)
dag_path = OpenMaya.MDagPath()
sel.getDagPath(0, dag_path)

# Pass sphere to mesh function set
mesh_fn = OpenMaya.MFnMesh(dag_path)

# Create empty vector array
vec_array = OpenMaya.MFloatVectorArray()

# Get sphere normals and stuff data in our vector array
mesh_fn.getNormals(vec_array, OpenMaya.MSpace.kObject)

for i in range(vec_array.length()):
    # Point all normals downwards
    vec_array[i].assign(OpenMaya.MFloatVector(0, -1, 0))

# Apply normals back to sphere
mesh_fn.setNormals(vec_array, OpenMaya.MSpace.kObject)
You may also want to consider a different way of reading your file instead of reading each line one by one. Maybe use json.dumps to store the data to a file and json.loads to retrieve it. This could potentially speed things up.
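A minimal sketch of that idea (using the file-handle variants json.dump/json.load; the list-of-triples layout is just an assumption about how you would store the normals):

import json

# Write: store all normals as a list of [x, y, z] triples in one shot
normals = [[0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]  # placeholder data
with open('normals.json', 'w') as f:
    json.dump(normals, f)

# Read: load everything back in one call, no line-by-line regex parsing
with open('normals.json') as f:
    normals = json.load(f)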
What you have will be very slow because you are creating a lot of history on your mesh: if you exit the loop after a couple of iterations, you'll see that your mesh is accumulating a polyNormalPerVertex node for every iteration.
Unfortunately there's no way to turn off construction history for this command (which seems like an oversight: most commands have a ch flag for exactly this purpose). So the first thing to try is to add an mc.delete(ch=True) after every polyNormalPerVertex call, as sketched below. That will be much faster, and it might be enough for what you're doing.
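In the question's loop, that looks roughly like this (a sketch reusing the question's variable names):

mc.select(each)
mc.polyNormalPerVertex(xyz=(aa, bb, cc))
mc.delete(ch=True)  # remove the polyNormalPerVertex history node before it piles up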
Otherwise you'll need to use the OpenMaya API, which is a bit harder than cmds but will let you do bulk normal operations faster. I would get the cmds version working first and see if it's good enough; it should be much more performant without the history overhead.
UPDATE
Unless you want to learn the API to do this, the right thing is probably:
1. Save out the mesh with the right normals as an MB or MA file.
2. In the other file, load that mesh and use Transfer Attributes to copy the normals across.
3. Delete history and remove the mesh you loaded in (2).
That gets you off the hook for both making your own file format and for perf issues, plus it gives you pre-made options for cases where the topology does not line up for some reason.
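A rough sketch of steps 2-3 with maya.cmds; the file path and node names are hypothetical, and sampleSpace=5 assumes topology-based sampling is what you want (check the transferAttributes flags for your Maya version):

import maya.cmds as cmds

# Step 2: import the mesh saved in step 1, then copy its normals onto the target
cmds.file('head_with_normals.ma', i=True)              # hypothetical path
cmds.transferAttributes('head_with_normals', 'head',   # hypothetical source/target
                        transferNormals=1, sampleSpace=5)

# Step 3: bake the result, then remove the imported helper mesh
cmds.delete('head', ch=True)
cmds.delete('head_with_normals')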
I need to write a semidefinite program that minimizes the trace of an operator, say R, subject to the constraint that (tr_A R)^{T_B} >> 0, i.e. that it is positive semidefinite. That means that R represents a 3-qubit quantum system, and the trace over the first subsystem gives you an operator that represents the remaining 2-qubit system. Taking the partial transpose with respect to one of those qubits, you get the partially transposed quantum state of the reduced 2-qubit system. It is this state that I want to make positive semidefinite.
I am using PICOS (to write the SDP) and qutip (to do the operations).
import picos as pic
from qutip import Qobj, ptrace, partial_transpose

P = pic.Problem()
Rho = P.add_variable('Rho', (n, n), 'hermitian')
P.add_constraint(pic.trace(Rho) == 1)
P.add_constraint(Rho >> 0)  # Rho must be positive semidefinite

RhoQOBJ = Qobj(Rho)                # this is where it breaks: Rho is a PICOS Variable, not an array
RhoABtr = ptrace(RhoQOBJ, [0, 1])  # keep subsystems 0 and 1
RhoABqbj = partial_transpose(RhoABtr, [0], method='dense')
RhoAB = RhoABqbj.full()
Problem: I need to make Rho a Qobj for qutip to be able to understand it, but Rho above is only an instance of the PICOS Variable class. Does anyone have any idea how to do this?
Also, I looked here, http://picos.zib.de/tuto.html#variables, and it became even more confusing, as this function puts the instance in a dictionary and only gives you back a key.
You need to be able to output a numpy array or sparse matrix to convert to a Qobj. I could not find anything in the picos docs that discusses this option.
I am seeing this post very late, but maybe I can help. I am not sure what the function Qobj() is doing; can you please tell me more about it?
Otherwise, there is now a new partial_transpose() function in PICOS (version released today), which hopefully does what you need.
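A rough, untested sketch of how that might look, keeping everything inside PICOS with no qutip round-trip. The subsystems/dimensions keyword arguments (for partial_transpose, and for partial_trace to do the reduction) are assumptions here; check the signatures in your installed version's documentation:

import picos as pic

n = 8  # three qubits
P = pic.Problem()
Rho = P.add_variable('Rho', (n, n), 'hermitian')
P.add_constraint(pic.trace(Rho) == 1)
P.add_constraint(Rho >> 0)

# Trace out qubit A (subsystem 0 of three dimension-2 subsystems),
# then require the partial transpose over the first remaining qubit to be PSD
RhoBC = pic.partial_trace(Rho, subsystems=0, dimensions=2)
P.add_constraint(pic.partial_transpose(RhoBC, subsystems=0, dimensions=2) >> 0)

P.solve()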
Best,
Guillaume.
I am working with the Baxter robot, and what I am trying to do is get the position of an object using augmented reality markers and move the hand to that position in order to grasp it.
I am using the ar_tools package to get the position/orientation of the object, but those are with respect to the head_camera, which I am using. For the last couple of days I've been experimenting with how to change that reference frame (head_camera) to the base frame, as that frame is used by MoveIt to make its plans. I have tried to set the frame_id of the header of the message I receive from ar_tools manually to 'base':
pose_target.header.frame_id = 'base'
but the position and orientation I am getting are still with respect to the head_camera. I also tried to do:
self.tl.lookupTransform("/base", "/head_camera", self.t)
where self.t = self.tl.getLatestCommonTime("/head_camera", "/base"), but I was getting an extrapolation error. It was something like:
the transformation requires extrapolation into the past
(I can't really remember now, and I'm not in the lab.) Then I thought I might need to run lookupTransform from the head_camera to the head, from the head to the torso, and from the torso to Baxter's base.
Could someone guide me on how to change the frame of the ar_tools marker from head_camera to base?
Also, for the extrapolation error, is there a way to do this in a static way?
There is a slightly more straightforward way to do this, presuming you're receiving a PoseStamped message from ar_tools:
import rospy
import tf

def on_received_pose(self, pose):
    '''
    Handler for your poses from ar_tools
    '''
    try:
        # Wait up to 0.1 seconds for the transform to become valid;
        # this is what prevents your extrapolation error
        self.tl.waitForTransform("/base", pose.header.frame_id,
                                 pose.header.stamp, rospy.Duration(0.1))
        transd_pose = self.tl.transformPose("/base", pose)
        # do whatever with the transformed pose here
    except tf.Exception:
        rospy.logwarn('Couldn\'t transform from "%s" to "/base" before timeout. '
                      'Are you updating the TF tree sufficiently?', pose.header.frame_id)
You're getting that extrapolation error most likely because the transform network wasn't fully formed at the time you got your first message; tf refuses to extrapolate, it will only interpolate, so if it hasn't received at least one transform message for every frame both before and after (or exactly at) the timestamp you're trying to transform, it will throw an exception. The waitForTransform() call blocks for up to the timeout until the transform can actually be performed, and raises a tf.Exception if it never becomes valid, which the handler catches and logs; that way you check the transform is possible before trying it. You could of course just surround the transformPose() call alone in a try/except instead, but being explicit about waiting for the transform makes it clearer what you're trying to do.
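For completeness, self.tl above is assumed to be a tf.TransformListener created once when the node starts, for example:

import rospy
import tf

rospy.init_node('ar_pose_transformer')  # hypothetical node name
tl = tf.TransformListener()  # starts buffering /tf messages as soon as it's constructed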
In general, check out the simple ROS tf Python Tutorial for more examples/modes of operation.