Target - CAD

Given the CAD target and the RGB image of the scene, the tracker estimates the 6 degree-of-freedom (DoF) pose of the object with respect to the camera. Using this pose estimate, it is possible to superimpose a rendered version of the model onto the image and add details or other augmented reality effects.
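The superimposition step can be sketched as a standard pinhole projection: model points are transformed into the camera frame with the estimated pose (R, t) and then projected with the camera intrinsics K. This is a minimal illustration of the geometry, not VIRNECT Track API code; the function name and matrices are placeholders.

```python
import numpy as np

def project_points(points_3d, R, t, K):
    """Project 3D model points into the image using a 6-DoF pose (R, t)
    and camera intrinsics K (pinhole model, no lens distortion)."""
    cam = points_3d @ R.T + t          # transform into the camera frame
    uv = cam @ K.T                     # apply the intrinsic matrix
    return uv[:, :2] / uv[:, 2:3]      # perspective division

# Example: identity rotation, object origin 1 m in front of the camera
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)
t = np.array([0.0, 0.0, 1.0])
pts = np.array([[0.0, 0.0, 0.0]])      # the model origin
print(project_points(pts, R, t, K))    # projects to the principal point
```

With the pose applied this way, any rendered vertex of the CAD model lands on the pixel where the real object part appears, which is what makes overlay effects line up.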

To find the CAD target in the image, an initial detection phase computes a rough estimate of its 6 DoF pose. This usually takes only a few frames to process and might require slow camera movement until a promising viewpoint of the object has been found. Once detection is successful, the tracker refines and updates the pose in real-time for every frame. If at some point the tracker fails to track the object, or the object goes out of view, the tracker waits for the re-initialized detector to return a rough pose estimate of the object.
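The detect-then-track behavior described above is a two-state loop. The sketch below shows that control flow only; `detector.detect` and `tracker.refine` are hypothetical placeholders, not the actual VIRNECT Track API.

```python
from enum import Enum, auto

class State(Enum):
    DETECTING = auto()   # waiting for a rough 6-DoF pose estimate
    TRACKING = auto()    # refining the pose every frame

def run(frames, detector, tracker):
    """Yield a pose (or None) per frame: detect a rough pose first,
    then refine it frame-to-frame; fall back to detection on loss."""
    state, pose = State.DETECTING, None
    for frame in frames:
        if state is State.DETECTING:
            pose = detector.detect(frame)       # rough estimate, may fail
            if pose is not None:
                state = State.TRACKING
        else:
            pose = tracker.refine(frame, pose)  # real-time refinement
            if pose is None:                    # tracking lost
                state = State.DETECTING
        yield pose
```

The key property is that tracking failure never stalls the pipeline: the loop simply drops back to the detection state until a new rough pose is found.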

To use the CAD tracker, it first needs to be trained onto the objects of interest. You can find a detailed description of the training process at the VIRNECT Track Target Trainer page.

Limitations

For best performance, it is recommended to follow some guidelines related to the object, the scene, and the relationship between the camera and the object.

Objects should

  • Be rigid (non-deformable),
  • Have non-shiny, non-transparent material,
  • Only consist of parts whose surfaces are either uniformly colored or have a simple texture,
  • Not have discrete or continuous symmetries, or repetitive geometric features,
  • Have a dimension of approximately 5–15 cm,
  • Be standing upright on a planar surface.

The environment surrounding the object of interest should

  • Have a simple, monotonous color that differs from the color of the object. Distracting patterns or cluttered environments degrade detection and tracking quality,
  • Not occlude the object target,
  • Not cause the object to cast hard shadows visible in the image.

Also the relation between camera and object is important. For best tracking performance, it is recommended that

  • Training and tracking use the same camera calibration parameters,
  • The camera/object moves smoothly and slowly with respect to the camera's frame rate (no abrupt movements),
  • Only one object is visible at a time,
  • The object target to camera distance is within the detection range defined by the rendering radii, as well as within the tracking range of 0.1 to 1.0m,
  • The elevation angle of the camera with respect to the object target's reference frame is within the defined elevation angle range,
  • The object stands upright such that the upright vector defined during training aligns with the gravity vector of the real world,
  • The camera is held approximately level horizontally (landscape view),
  • Potential occlusions caused by the hand or arm of an operator moving the object are minimized,
  • The object is fully visible in the image and ideally roughly centered in the image.
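The distance and elevation constraints above are simple geometric checks. The sketch below assumes the camera position is expressed in the object target's reference frame with +z as the upright axis; the elevation bounds are example values, since the real range comes from the training configuration.

```python
import math

def within_tracking_range(cam_pos_obj, min_dist=0.1, max_dist=1.0,
                          min_elev=10.0, max_elev=80.0):
    """cam_pos_obj: camera position in the object's reference frame,
    in metres, with +z as the object's upright axis. The elevation
    bounds here are placeholders, not values from the SDK."""
    x, y, z = cam_pos_obj
    dist = math.sqrt(x * x + y * y + z * z)
    # Elevation angle above the object's ground plane, in degrees
    elev = math.degrees(math.asin(z / dist))
    return (min_dist <= dist <= max_dist) and (min_elev <= elev <= max_elev)

print(within_tracking_range((0.0, 0.3, 0.4)))  # 0.5 m away, ~53° elevation: True
print(within_tracking_range((0.0, 0.0, 2.0)))  # 2 m away, beyond 1.0 m: False
```

Staying inside these bounds matters because both the detector (via the rendering radii and elevation range set during training) and the tracker (via its 0.1–1.0 m range) assume them.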

Supported Platforms

CAD detection and tracking is currently supported on the following platforms:

  • Windows
  • Linux
  • Android
  • macOS
  • iOS

Note: The devices running the Trainer and the Tracker applications must explicitly support OpenGL ES (GLES) 3.2.