
Size of the instant tracking point cloud


We are planning a museum AR exhibition and are evaluating Wikitude with Unity as the engine. Please refer to the attached image.

Briefly, the scheme is as follows: we have a room measuring 3 m x 3 m x 2 m (width x length x height). Three of its walls carry feature-rich wall paintings (the purple lines on the walls in the sketch), which cover most of the wall area. The idea is to augment 3D content on the wall paintings (the red circles) and have visitors view it from various positions in the room.


Initially, we were considering using the saving function of instant tracking, so that we can create a point cloud of the room and track against it. However, looking at the sample project, it seems that the point cloud data has some physical boundaries when it is saved via the instant tracking saving routine. Could you please explain how this boundary is calculated?

For example, suppose I save the instant tracking session with the origin of the space matched to the mid-bottom point of the front wall (illustrated as the green arrow in the sketch above). I assume the generated point clouds will then cover different parts of the physical space depending on whether I save the scene at point A or point B (both marked in green in the sketch). In situations like this, how is the coverage of the point cloud determined?

Also, for this exhibition scheme to work, the saved point cloud must cover all three walls, but I am not sure whether this is achievable by saving instant tracking data. Do you think this will be possible via instant tracking? If not, any recommendations on alternative methods would be greatly appreciated.

1 Comment


I'm not sure which boundary you are referring to, but the point cloud should be able to grow in all directions and cover the 3 x 3 x 2 m scene you described. The point cloud doesn't have a predetermined size or boundary; it grows as more features are observed, so it shouldn't matter whether the starting point is A or B. The only thing that matters is the initial origin, since that defines the coordinate space relative to which the augmentations are placed.
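To illustrate that last point, here is a minimal sketch (plain geometry, not Wikitude API): the anchor positions, wall names, and device positions below are illustrative assumptions. Since all augmentations are expressed in the frame of the shared origin (the mid-bottom point of the front wall in your sketch), recovering an anchor from an observation gives the same result whether the session was started at point A or point B.

```python
# Assumed right-handed frame at the tracking origin:
# +x right along the front wall, +y up, +z into the room.
ROOM_W, ROOM_L, ROOM_H = 3.0, 3.0, 2.0  # room size in metres (assumption)

# Hypothetical anchor positions for the three "red circles", one per wall.
anchors = {
    "front": (0.0,          1.2, 0.0),
    "left":  (-ROOM_W / 2,  1.2, ROOM_L / 2),
    "right": ( ROOM_W / 2,  1.2, ROOM_L / 2),
}

def observed_from(device_pos, anchor):
    """Anchor position as seen from the device (vector device -> anchor)."""
    return tuple(a - d for a, d in zip(anchor, device_pos))

def back_to_origin_frame(device_pos, observed):
    """Re-express an observation in the shared origin frame."""
    return tuple(d + o for d, o in zip(device_pos, observed))

point_a = (1.0, 1.6, 2.5)   # assumed device position A
point_b = (-1.0, 1.6, 1.0)  # assumed device position B

# Regardless of the starting point, the anchor lands at the same
# origin-frame coordinates (up to floating-point error).
for start in (point_a, point_b):
    seen = observed_from(start, anchors["front"])
    recovered = back_to_origin_frame(start, seen)
    assert all(abs(r - a) < 1e-9 for r, a in zip(recovered, anchors["front"]))
```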

Regarding your second question, I would say the point cloud should cover all three walls, but I strongly encourage you to try it if you can. Many factors can influence how mapping performs, from the content on the walls to the lighting conditions, so I cannot make any guarantees. You can also visualize the point cloud in real time, which gives you a better sense of how it will behave.

An alternative to instant tracking would be image tracking on the three wall paintings, but this will only work if the paintings are trackable and sufficiently different from one another. With image tracking, there are several ways to go about it:

  • Single image recognition. Visitors can see the augmentations on only one painting at a time. This is the simplest implementation.
  • Multiple image recognition. Multiple paintings can be recognized and augmented at the same time. This is a bit more complex to set up and requires a more powerful device.
  • Extended image recognition. Starting from the front wall, for example, visitors will be able to see the augmentations for the front wall painting, and as they turn around, tracking will continue even if the front wall is no longer in view, allowing them to see the augmentations on the other walls. Internally, this also creates a point cloud, but uses an image as the origin.
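With extended image recognition, the front-wall painting becomes the origin, so augmentations for the other walls must be expressed relative to that painting's frame. A small sketch of the geometry involved (all positions are illustrative assumptions, not values from your scene):

```python
import math

# Assumed centre of the front-wall painting in room coordinates; under
# extended image recognition this painting acts as the tracking origin.
painting = (0.0, 1.0, 0.0)

# Hypothetical anchor on the left wall, in room coordinates.
left_anchor = (-1.5, 1.2, 1.5)

# Offset of the anchor relative to the painting (pure translation; the
# painting is assumed axis-aligned with the room).
offset = tuple(a - p for a, p in zip(left_anchor, painting))

# A left-wall augmentation should also face into the room: rotate the
# painting's forward direction (+z, into the room) by +90 degrees about
# the vertical (y) axis, yielding +x (the left wall's inward normal).
yaw = math.radians(90)
fx, fy, fz = (0.0, 0.0, 1.0)
rotated = (
    fx * math.cos(yaw) + fz * math.sin(yaw),
    fy,
    -fx * math.sin(yaw) + fz * math.cos(yaw),
)
print("offset:", offset, "forward:", rotated)
```

The same translation-plus-rotation reasoning applies to the right wall, with a -90 degree yaw instead.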

I hope this answers your questions.

Best regards,

