Hi,
1) Possible, though it is the most limited approach.
2) If the scans don't overlap, the second scan received will be lost, though if the robot rotates, both lidars may eventually be able to localize in the local scan map. Time synchronization between the lidars may not be important in this approach, but the robot will localize with only one of the two lidars at a time.
3) Do you have wheel odometry at a higher frame rate than 33 Hz that is relatively accurate over 30 ms? That could be used to deskew the scans and assemble them in the same frame (e.g., base_link). This could be done with a combination of
rtabmap_util/lidar_deskewing and
rtabmap_util/point_cloud_aggregator (and maybe a
pointcloud_to_laserscan node to produce a 2D scan instead of a 3D point cloud), then feed the resulting cloud to rtabmap/icp_odometry. Another option to combine the scans is
ira_laser_tools, though I am not sure it accounts for the displacement between the scan timestamps (which may be negligible anyway, since the scan rate is high, as long as the robot doesn't do very fast motions). In all cases, you may end up with a cloud or scan whose frame_id is in the middle of the robot (e.g., base_link), so ray tracing in the map will be done from that point instead of from each lidar's point of view. For the static global map that may not be too bad; however, if you are going to use the navigation stack, I would feed the lidars independently to the local/global costmaps.
cheers,
Mathieu