TF in past message even though echoed times are the same


grrrzebo
I'm running a simulated husky in the gazebo baylands environment.

I was originally having an issue where the map showed up above the robot, but now there is no map at all and I'm seeing the following warning repeatedly:

Detected jump back in time. Resetting RViz.
[rviz2-5] [WARN] [1750451421.780371377] [tf2_buffer]: Detected jump back in time. Clearing TF buffer.

I have checked the timestamps of the topics being synchronized via ros2 topic echo, and they all appear similar. I am using simulation time. Any leads would be much appreciated (since my most recent debugging has appeared to make the problems worse...). Thanks!
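For what it's worth, tf2 prints that warning when an incoming stamp is older than one it has already buffered, which a visual scan of ros2 topic echo output can easily miss. A minimal offline sketch of that check (the helper names and the dict stamp shape are mine, not part of any ROS API):

```python
# Hypothetical sketch: convert builtin_interfaces/Time-like stamps to seconds
# and flag any place where time moves backwards -- the condition that makes
# tf2 clear its buffer with "Detected jump back in time".

def stamp_to_sec(stamp):
    """Convert a {'sec': ..., 'nanosec': ...} stamp to floating-point seconds."""
    return stamp["sec"] + stamp["nanosec"] * 1e-9

def find_time_jumps(stamps):
    """Return (index, delta) pairs where a stamp is older than its predecessor."""
    jumps = []
    secs = [stamp_to_sec(s) for s in stamps]
    for i in range(1, len(secs)):
        delta = secs[i] - secs[i - 1]
        if delta < 0:
            jumps.append((i, round(delta, 6)))
    return jumps

# Example: the third stamp is older than the second -> one backward jump.
stamps = [
    {"sec": 100, "nanosec": 0},
    {"sec": 100, "nanosec": 500_000_000},
    {"sec": 99,  "nanosec": 900_000_000},
]
print(find_time_jumps(stamps))  # [(2, -0.6)]
```

Piping the stamps of each suspect topic through something like this makes a simulation-time reset (e.g. restarting Gazebo while RViz keeps running) stand out immediately.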

Almost everything needed to run is attached. Just missing the baylands world folder, which was too big (http://models.gazebosim.org/baylands/).

baylands_robot.sdf
launch.py
ros_bridge_mappings.yaml
COSTAR_HUSKY_SENSOR_CONFIG_1.zip
Re: TF in past message even though echoed times are the same

matlabbe
Administrator
It seems there is a missing optical rotation for the camera. I slightly edited the SDF file to have this:
    <link name="camera_link">
      <pose>0.438 0 0.272 0 0 0</pose>
      <collision name="base_link_fixed_joint_lump__camera/camera_link_collision_2">
        <geometry>
          <box>
            <size>0.0078 0.13 0.0192</size>
          </box>
        </geometry>
      </collision>
      <visual name="base_link_fixed_joint_lump__camera/camera_link_visual_1">
        <geometry>
          <mesh>
            <uri>meshes/realsense.dae</uri>
          </mesh>
        </geometry>
      </visual>
      <sensor name="camera_front" type="rgbd_camera">
        <camera>
            <horizontal_fov>1.0472</horizontal_fov>
            <lens>
                <intrinsics>
                    <!-- fx = fy = width / ( 2 * tan (hfov / 2 ) ) -->
                    <fx>277.1</fx>
                    <fy>277.1</fy>
                    <!-- cx = ( width + 1 ) / 2 -->
                    <cx>160.5</cx>
                    <!-- cy = ( height + 1 ) / 2 -->
                    <cy>120.5</cy>
                    <s>0</s>
                </intrinsics>
            </lens>
            <distortion>
                <k1>0.0</k1>
                <k2>0.0</k2>
                <k3>0.0</k3>
                <p1>0.0</p1>
                <p2>0.0</p2>
                <center>0.5 0.5</center>
            </distortion>
            <image>
                <width>320</width>
                <height>240</height>
                <format>R8G8B8</format>
            </image>
            <clip>
                <near>0.1</near>
                <far>300</far>
            </clip>
            <depth_camera>
              <clip>
                <near>0.1</near>
                <far>300</far>
              </clip>
            </depth_camera>
            <noise>
                <type>gaussian</type>
                <mean>0</mean>
                <stddev>0.007</stddev>
            </noise>
            <optical_frame_id>costar_husky_sensor_config_1/optical_link</optical_frame_id>
        </camera>
        <always_on>1</always_on>
        <update_rate>20</update_rate> 
        <visualize>0</visualize>
      </sensor>
    </link>
    <link name="optical_link">
      <pose relative_to='camera_link'>0 0 0 -1.57 0 -1.57</pose>
    </link>
    <joint name="base_to_camera" type="fixed">
      <parent>base_link</parent>
      <child>camera_link</child>
    </joint>
    <joint name="camera_to_optical" type="fixed">
      <parent>camera_link</parent>
      <child>optical_link</child>
    </joint>
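As a standalone sanity check on the values above (not something from the thread itself): the RPY pose -1.57 0 -1.57 on optical_link is the standard body-to-optical rotation (x-forward/y-left/z-up to z-forward/x-right/y-down), and the commented formulas in the <camera> block reproduce fx, cx, cy from the image size and horizontal_fov:

```python
import math

def rpy_to_matrix(roll, pitch, yaw):
    """Rotation matrix R = Rz(yaw) @ Ry(pitch) @ Rx(roll), the SDF pose convention."""
    cr, sr = math.cos(roll), math.sin(roll)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cy, sy = math.cos(yaw), math.sin(yaw)
    return [
        [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
        [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
        [-sp,     cp * sr,                cp * cr],
    ]

# The optical_link pose from the SDF above: rpy = (-pi/2, 0, -pi/2).
R = rpy_to_matrix(-math.pi / 2, 0.0, -math.pi / 2)

# The optical z axis (the camera's viewing direction) should map onto the
# camera_link x axis (forward, per REP 103).
z_optical = [R[0][2], R[1][2], R[2][2]]
print([round(v, 3) for v in z_optical])  # [1.0, 0.0, 0.0]

# Intrinsics from the comments in the <camera> block.
width, height, hfov = 320, 240, 1.0472
fx = width / (2 * math.tan(hfov / 2))
cx, cy_ = (width + 1) / 2, (height + 1) / 2
print(round(fx, 1), cx, cy_)  # 277.1 160.5 120.5
```

If the optical_frame_id is left at the default (the sensor link itself), the point cloud comes out rotated, which matches the original "map above the robot" symptom.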
Basically, you can tell the camera which frame it should use, and that should be the optical frame. Note that I don't think it is recommended to put all sensors under the same link (base_link). Refer to this repo for an official example of how to use Link and Joint elements in the SDF. Once you have moved the sensors out of the base_link link and added the corresponding joints, you need to add this to your launch file (taken from the same linked example):
    # Load the SDF file from "description" package
    # (the full launch file also needs: import os; from launch_ros.actions import Node)
    sdf_file = os.path.join('COSTAR_HUSKY_SENSOR_CONFIG_1', 'model.sdf')
    with open(sdf_file, 'r') as infp:
        robot_desc = infp.read()

    # Takes the description and joint angles as inputs and publishes the 3D poses of the robot links
    robot_state_publisher = Node(
        package='robot_state_publisher',
        executable='robot_state_publisher',
        name='robot_state_publisher',
        output='both',
        parameters=[
            {'use_sim_time': True},
            {'robot_description': robot_desc},
            {'frame_prefix': 'costar_husky_sensor_config_1/'},
        ]
    )

Then for the bridge parameters:
- ros_topic_name: "/joint_states"
  gz_topic_name: "/world/baylands/model/costar_husky_sensor_config_1/joint_state"
  ros_type_name: "sensor_msgs/msg/JointState"
  gz_type_name: "gz.msgs.Model"
  direction: GZ_TO_ROS
- ros_topic_name: "/tf_static"
  gz_topic_name: "/model/costar_husky_sensor_config_1/pose_static"
  ros_type_name: "tf2_msgs/msg/TFMessage"
  gz_type_name: "ignition.msgs.Pose_V"
  direction: GZ_TO_ROS  
Note that the poses are published on tf_static instead of tf.

With those changes, I think I also fixed the TF timing issue; I didn't see TF errors afterwards. Note however that some configs are still different from the official example, and I feel there could still be duplicated TF published (I'm not sure whether robot_state_publisher can publish all TF frames on its own, but the official example still forwarded some TF from Gazebo).
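On the duplicated-TF concern: in TF a child frame must have exactly one parent, so one offline way to spot conflicts is to group transforms by child frame and flag any child claimed by more than one parent. A hypothetical sketch (the (parent, child) pair shape is mine, not a ROS API; in practice you would feed it pairs collected from /tf and /tf_static):

```python
from collections import defaultdict

def find_conflicting_children(transforms):
    """Given (parent_frame, child_frame) pairs, return children claimed by
    more than one parent -- a classic symptom of two publishers (e.g.
    robot_state_publisher and the Gazebo bridge) fighting over TF."""
    parents = defaultdict(set)
    for parent, child in transforms:
        parents[child].add(parent)
    return {child: sorted(ps) for child, ps in parents.items() if len(ps) > 1}

# Example: camera_link is published both by robot_state_publisher (from the
# SDF joints) and forwarded from Gazebo with a different parent name.
transforms = [
    ("odom", "base_link"),
    ("base_link", "camera_link"),
    ("costar_husky_sensor_config_1/base_link", "camera_link"),
]
print(find_conflicting_children(transforms))
# {'camera_link': ['base_link', 'costar_husky_sensor_config_1/base_link']}
```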

Last note: I could not get the Velodyne lidar working, so I just tested with the RGB-D topics.

cheers,
Mathieu