I am working with the Baxter robot, and what I am trying to do is get the position of an object using augmented-reality markers and move the hand to that position in order to grasp it.
I am using the ar_tools package to get the position/orientation of the object, but those are with respect to the head_camera, which I am using. For the last couple of days I've been experimenting with how to change that reference frame (head_camera) to the base frame, since that is the frame MoveIt uses to make its plans. I have tried setting the frame_id of the header of the message I receive from ar_tools manually to 'base':
pose_target.header.frame_id = 'base'
but the position and orientation I am getting are still WRT the head_camera. I also tried:
self.tl.lookupTransform("/base", "/head_camera", self.t)
where self.t = self.tl.getLatestCommonTime("/head_camera", "/base"), but I was getting an extrapolation error. It was something like:
the transformation requires to extrapolate in the past
(I can't really remember the exact wording now, and I'm not in the lab.) Then I thought I might need to run lookupTransform from head_camera to head, from head to torso, and from torso to Baxter's base.
Could someone guide me on how to change the frame of the marker of the ar_tools from head_camera to base?
Also, for the extrapolation error, is there a way to do this in a static way?
There is a slightly more straightforward way to do this, presuming you're receiving a PoseStamped message from ar_tools:
def on_received_pose(self, pose):
    """Handler for PoseStamped messages from ar_tools."""
    try:
        # Wait up to 0.1 seconds for the transform to become valid; this is
        # what prevents your extrapolation error. In the Python API,
        # waitForTransform raises tf.Exception if the timeout is exceeded.
        self.tl.waitForTransform("/base", pose.header.frame_id,
                                 pose.header.stamp, rospy.Duration(0.1))
        transd_pose = self.tl.transformPose("/base", pose)
        # do whatever with the transformed pose here
    except tf.Exception:
        rospy.logwarn('Couldn\'t transform from "{}" to "/base" before timeout. '
                      'Are you updating the TF tree sufficiently?'.format(pose.header.frame_id))
You're likely getting that extrapolation error because the transform network wasn't fully formed at the time you got your first message; tf refuses to extrapolate, it will only interpolate, so if you haven't received at least one transform message for every frame both before and after (or exactly at) the timestamp you're trying to transform at, it will throw an exception. The waitForTransform call blocks until the transform can actually be performed (or the timeout expires) before you attempt it. You could of course also wrap just the transformPose() call in a try/except block, but I feel that with tf, the explicit wait makes it clearer what you're trying to do.
In general, check out the simple ROS tf Python Tutorial for more examples/modes of operation.
I would like to create an object in pydrake that is loaded from a URDF (e.g. just a single-link URDF that has an object mesh), and be able to manually set the pose of the object as I wish after creation. What I've been doing so far is this:
urdf = FindResourceOrThrow(my_urdf_path)
parser.AddModelFromFile(urdf)
frame = plant.GetFrameByName("my_frame")
T = RigidTransform(p=pos, rpy=RollPitchYaw(rpy))
plant.WeldFrames(plant.world_frame(), frame, T)
That lets me set the pose on creation. However, I don't know how to change the pose once the plant has been finalized. I assume there's a way to do this, but I'm struggling to find an example. Thanks in advance for your help.
I believe you can do it as follows:
(1) Attach a FixedOffsetFrame (e.g., "robot_base") to the world. The offset is a parameter of the context, not State dofs.
(2) Weld the robot to robot_base.
(3) Re-pose that frame in a Context.
(1) goes like:
robot_base = plant.AddFrame(frame=FixedOffsetFrame(
    name="robot_base",
    P=plant.world_frame(),
    X_PF=RigidTransform_[float].Identity(),
    model_instance=None))
(3) goes like:
robot_base.SetPoseInBodyFrame(context=context, X_PF=new_pose)
(Yes, Drake's examples should show this better; it's a bit obscure right now.)
I believe what you're looking for is SetFreeBodyPose (C++, Python). Do this instead of welding your frame to the world frame.
It's worth noting that you need to do this with a Context. I.e., something like:
urdf = FindResourceOrThrow(my_urdf_path)
parser.AddModelFromFile(urdf)
plant.Finalize()
body = plant.GetBodyByName("my_body")
T = RigidTransform(p=pos, rpy=RollPitchYaw(rpy))
context = plant.CreateDefaultContext()
plant.SetFreeBodyPose(context=context, body=body, X_WB=T)
(Note the change from grabbing the frame to the body. I assume the frame you grabbed is the body frame of a body.)
I'm trying to plot netCDF data using a cartopy + matplotlib (contourf) call; however, on some occasions the data generates invalid polygon instances through contourf, and the following error is thrown, followed by other exceptions:
TopologyException: side location conflict at -83.68749999996696 36.937499999989356
When running the code using contour (not contourf), the plot works, and I can see the invalid geometry in the contours. I am able to address this issue in the contourf call by NAN-ing the value of the point at the location of the side-location conflict.
aLatName = "PRISM_Latitude"
aLonName = "PRISM_Longitude"
tLat = 36.937499999989356
tLon = -83.68749999996696
# Find the grid point nearest the conflict location (Chebyshev distance)
aLat = np.abs(array[aLatName] - tLat)
aLon = np.abs(array[aLonName] - tLon)
c = np.maximum(aLon, aLat)
([yloc], [xloc]) = np.where(c == np.min(c))  # np.where returns axis-0 (row) indices first
array["data"][yloc, xloc] = np.nan
I'm wondering if there is a way to capture the coordinates of the side location conflict in a try-except block so I can then try the nan approach to fix the issue.
5/30 Edit: I did some rabbit-hole digging through the call stack to find out more, and it seems the error message shown above is not a Python exception but an exception thrown in C++, specifically in the GEOS library (https://github.com/libgeos/geos/blob/70b1662e9b8f5118f9eef26529e9eeb9eb466544/src/geomgraph/EdgeEndStar.cpp#L312), whose message is then passed along through the Python interpreter. From what I know about C++/Python exceptions, I cannot directly "catch" this exception in Python unless it already exists as a Python object, so this looks like a dead end, but perhaps this information will help others who are stuck with a similar problem.
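Since GEOS prints the conflict coordinates to the process-level stderr rather than raising a catchable Python exception, one workaround is to temporarily redirect file descriptor 2 into a temp file around the contourf call and parse the coordinates out of the captured text afterwards. A minimal sketch, where capture_c_stderr and find_conflict_coords are hypothetical helpers and the regex is an assumption based on the message shown above:

```python
import os
import re
import tempfile
from contextlib import contextmanager

@contextmanager
def capture_c_stderr():
    """Redirect OS-level stderr (fd 2) into a temp file for the duration of
    the block, so messages printed by C/C++ libraries (like GEOS) can be
    inspected afterwards via captured['text']."""
    captured = {}
    saved_fd = os.dup(2)
    tmp = tempfile.TemporaryFile(mode='w+b')
    try:
        os.dup2(tmp.fileno(), 2)
        yield captured
    finally:
        os.dup2(saved_fd, 2)   # restore the real stderr
        os.close(saved_fd)
        tmp.seek(0)
        captured['text'] = tmp.read().decode(errors='replace')
        tmp.close()

def find_conflict_coords(text):
    """Extract (lon, lat) from a GEOS 'side location conflict' message."""
    m = re.search(r'side location conflict at (-?[\d.]+) (-?[\d.]+)', text)
    return (float(m.group(1)), float(m.group(2))) if m else None

# Demo: simulate a C library writing directly to fd 2
with capture_c_stderr() as cap:
    os.write(2, b'TopologyException: side location conflict at -83.6875 36.9375\n')
print(find_conflict_coords(cap['text']))  # (-83.6875, 36.9375)
```

With the coordinates in hand, the NaN-ing snippet above could then be applied and the contourf call retried. Note that plain contextlib.redirect_stderr would not work here, since it only redirects Python's sys.stderr, not the underlying file descriptor that C++ writes to.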
The top Embedded Python Block in this GRC application is the stock block. The bottom one is a LPF the code for which is below.
Without the Throttle Block the application works. With the Throttle Block, the following error is generated:
gr::log :ERROR: thread_body_wrapper - ERROR thread[thread-per-block[2]: <block Embedded Python Block 2(3)>]: ValueError: could not broadcast input array from shape (17,) into shape (16,)
There is no such problem when the Throttle Block is in the path of the stock block.
I am learning GRC and would like to understand why introducing the Throttle Block causes this problem.
import numpy as np
from gnuradio import gr


class blk(gr.sync_block):  # other base classes are basic_block, decim_block, interp_block
    """Embedded Python Block example - a simple multiply const"""

    h = np.array([0.0397989, -0.0488053, -0.0345932, 0.0659844, 0.0315417,
                  -0.1074744, -0.0299212, 0.3187633, 0.5294118, 0.3187633,
                  -0.0299212, -0.1074744, 0.0315417, 0.0659844, -0.0345932,
                  -0.0488053, 0.0397989], dtype=np.float32)
    a = True

    def __init__(self, example_param=2.0):  # only default arguments here
        """arguments to this function show up as parameters in GRC"""
        gr.sync_block.__init__(
            self,
            name='Embedded Python Block 2',  # will show up in GRC
            in_sig=[np.float32],
            out_sig=[np.float32]
        )
        # if an attribute with the same name as a parameter is found,
        # a callback is registered (properties work, too).
        self.example_param = example_param

    def work(self, input_items, output_items):
        """Convolution of input with LPF coefficients"""
        output_items[0][:] = np.convolve(input_items[0], self.h, 'same')
        return len(output_items[0])
First of all, this won't work – convolution has memory, and your convolver doesn't save that state. This makes the numbers that come out of your blk depend on the kind of "chunks" your input signal is delivered in. That's not OK, because GNU Radio can (and will) use different chunk lengths at any time.
Your algorithm must be written in a way that always yields the same result, no matter how your (infinite or finite) input stream is being partitioned for individual calls to work.
That's basically it – the problem here is that the result of your np.convolve call doesn't have the same length as output_items[0]. Read the documentation of numpy.convolve; it specifies exactly why and when this happens:
Mode ‘same’ returns output of length max(M, N). Boundary effects are still visible.
So, if the chunk of samples you've gotten is shorter than h, np.convolve returns len(h) items, and you're trying to store them in an array of length len(output_items[0]), which is shorter.
This can always happen – as said, GNU Radio needs to "cut" the signal into pieces and hand them to your work, and the length of these chunks isn't fixed. Your Throttle just makes it more likely to happen.
So you first need to address the conceptual problem: this is not mathematically proper (the convolution of a long signal is not the same as a sequence of truncated convolutions of short sub-segments, simple as that). Most likely the answer is to just use one of the filter blocks GNU Radio comes with anyway.
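If you do want to keep an embedded block, the usual remedy is overlap-save: carry the last len(h)-1 input samples over between calls, so the output no longer depends on the chunking. Here is a plain-NumPy sketch of the idea (deliberately not GNU Radio API; inside an actual sync_block you could get a similar effect with self.set_history(len(self.h)) and mode='valid'):

```python
import numpy as np

class StreamingFIR:
    """Overlap-save FIR filtering: carries the last len(h)-1 input samples
    between calls, so the output does not depend on how the stream is
    chunked into individual work() calls."""

    def __init__(self, h):
        self.h = np.asarray(h, dtype=np.float32)
        self.tail = np.zeros(len(self.h) - 1, dtype=np.float32)  # filter state

    def work(self, chunk):
        x = np.concatenate([self.tail, np.asarray(chunk, dtype=np.float32)])
        self.tail = x[len(x) - (len(self.h) - 1):]  # save state for next call
        # 'valid' yields exactly len(chunk) outputs thanks to the prepended tail
        return np.convolve(x, self.h, mode='valid')

h = np.array([0.25, 0.5, 0.25], dtype=np.float32)
x = np.arange(32, dtype=np.float32)

# Filtering in one big call and in irregular chunks gives identical results
whole = StreamingFIR(h).work(x)
fir = StreamingFIR(h)
parts = np.concatenate([fir.work(x[:5]), fir.work(x[5:6]), fir.work(x[6:])])
assert np.allclose(whole, parts)
```

This is exactly the invariance the chunked work() calls need; the cost is len(h)-1 samples of latency relative to the 'same'-mode output, which is normal for a causal filter.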
I have some code that uses Python to read some data from a file and then, still in Python, generates an object in Maya based on the file. Everything is working well and the object comes out looking correct. The problem, however, is that none of the segments the object is made up of has a correct actual position: the Translate XYZ is set to 0 for all of them, even though they look correct. The problem with this is that when I later import the model into Unity3D, I can't interact with the objects properly, since the position is still at 0 while the mesh is where it should be. Is there some correct way to generate objects so that they have a position?
The code calls multiple functions to make different segments (an example of one of these functions is shown below). Then it uses maya.cmds.polyUnite to make them into one object. This is repeated in a for-loop that runs some number of times (specified in the file). The for-loop calls cmds.duplicate on the object made above and moves the new object along the z-axis using cmds.move(0, 0, z, duped object). Due to some bad coding, the code then calls polyUnite to merge everything into one big object and then calls polySeparate to split it into small segments again. Is it possible that this is causing the problem?
Each segment is generated something like this:
cmds.polyCube(n='leftWall', w=w, h=h, d=d, sw=w, sh=h, sd=d)
cmds.setAttr('leftWall.translateX', -Bt/float(2)-(THICKNESS/float(2)))
cmds.setAttr('leftWall.translateY', a)
cmds.setAttr('leftWall.rotateZ', -90)
cmds.polyNormal('leftWall', nm=0, unm=0)
cmds.polyCube(n='rightWall', w=w, h=h, d=d, sw=w, sh=h, sd=d)
cmds.setAttr('rightWall.translateX', Bt/float(2)+(THICKNESS/float(2)))
cmds.setAttr('rightWall.translateY', a)
cmds.setAttr('rightWall.rotateZ', 90)
cmds.polyNormal('rightWall', nm=0, unm=0)
cmds.polyUnite('leftWall', 'rightWall', n='walls')
addTexture('walls', 'wall.jpg')
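Yes, that is likely the cause: polyUnite/polySeparate leaves the world position baked into the vertices while each piece's transform stays at the origin. One way to give each separated piece a meaningful Translate XYZ is to center its pivot, freeze the mesh at the origin, and then move it back via the transform. A hedged, untested sketch (segments is a hypothetical list of the transforms returned by polySeparate):

```python
import maya.cmds as cmds

# 'segments' is a hypothetical variable: the pieces returned by polySeparate
for seg in segments:
    cmds.xform(seg, centerPivots=True)               # pivot -> bounding-box center
    piv = cmds.xform(seg, q=True, ws=True, rp=True)  # remember world position
    cmds.move(0, 0, 0, seg, rpr=True)                # move piece so pivot sits at origin
    cmds.makeIdentity(seg, apply=True, t=True)       # freeze: mesh centered, translate zeroed
    cmds.xform(seg, ws=True, t=piv)                  # restore position via the transform
```

After this, each piece's transform holds its world position, which is what Unity3D reads on import.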
I am trying to optimize a similar problem as the "Commercial Aircraft Range Maximization by Differential Inclusion" example using Dymos.
Is there a way to see the variation of (for example) the lift coefficient for the optimized trajectory?
It isn't a state or control variable, just some intermediate variable within the problem definition.
I know there is the .add_recorder() method, but I'm not sure how to use it or whether it is the right solution.
So the timeseries object captures time series data regardless of the transcription used. By default, it includes the "problem" variables (states, controls, input and design parameters, and time).
To record other outputs in your ODE, use the add_timeseries_output method on Phase. It is documented here: https://openmdao.github.io/dymos/feature_reference/timeseries.html
For the aircraft example in Dymos, you can add the line:
phase.add_timeseries_output('aero.CL', units=None, shape=(1,))
which will add a new output, traj.phase0.timeseries.CL. There are a few things to note here:
Dotted variable names aren't allowed, so while aero.CL is the path of the lift coefficient relative to the top of the ODE, it will be recorded in the timeseries as CL. If this would cause name collisions, you can override the timeseries name using the output_name argument.
Currently we can't use introspection to automatically determine the units and shape of the variable to be added to the timeseries (we're working on that), so it's good practice to specify the units and shape when adding the timeseries output (and it's mandatory if the units are not None or the shape is not (1,)).
So adding that above line to the commercial aircraft example, and adding the following to our simplified plot maker:
plot_results([('traj.phase0.timeseries.states:range', 'traj.phase0.timeseries.states:alt',
'range (NM)', 'altitude (kft)'),
('traj.phase0.timeseries.time', 'traj.phase0.timeseries.states:mass_fuel',
'time (s)', 'fuel mass (lbm)'),
('traj.phase0.timeseries.time', 'traj.phase0.timeseries.CL',
'time (s)', 'lift coefficient')],
title='Commercial Aircraft Optimization',
p_sol=p, p_sim=exp_out)
produces the plot, now with a third panel showing the lift coefficient versus time.
You can use an OpenMDAO recorder to store timeseries outputs in a recorded database file. For instance, to add a recorder to the optimization driver (which will save at every iteration), you'd do something like this (assuming import openmdao.api as om):
rec = om.SqliteRecorder('dymos_solution.db')
p.driver.add_recorder(rec)
p.driver.recording_options['record_desvars'] = True
p.driver.recording_options['record_responses'] = True
p.driver.recording_options['record_objectives'] = True
p.driver.recording_options['record_constraints'] = True
p.driver.recording_options['includes'] = ['*timeseries*']
That last instruction tells the recorder to record all of the outputs whose names include "timeseries". FYI, you can also add a recorder to the problem instead and record only the final values rather than each iteration. This can save a good bit of file size if you're not interested in the iteration history.
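To get the data back out later, OpenMDAO's CaseReader can load the recorded file. A sketch (the filename is whatever you passed to your SqliteRecorder, and the timeseries path assumes the CL output added above):

```python
import openmdao.api as om

cr = om.CaseReader('dymos_solution.db')          # file given to the SqliteRecorder
case = cr.get_case(cr.list_cases('driver')[-1])  # last recorded driver iteration
CL = case.outputs['traj.phase0.timeseries.CL']   # recorded timeseries output
```

From there, CL is a plain array you can plot against the recorded timeseries time values.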