I already came across this question here: https://support.wikitude.com/support/discussions/topics/5000083025
"In the latest version of the SDK (7.2) we added ARKit and ARCore support. So if you are running on a device that has support for this, it will be used automatically and the scale should match real world scale. If the device doesn't support ARKit or ARCore, it will fallback to the previous behavior. The configuration step is still there to define the plane on which tracking should work and where the origin should be."
--> Does that mean that if I create a 1x1x1 m cube in wt3 format, load it into the scene with instant tracking and ARCore/ARKit support, and put the cube next to a real 1x1x1 m cube, they will be the same size?
If yes, how does this work?
Yes, one virtual unit should correspond to one meter when using ARKit/ARCore. I'm no expert on computer vision matters, but I believe they derive the real-world scale from the sensors that are used in addition to the camera frame. That's something the Wikitude instant tracking does not provide yet, which is why we have the device height input parameter.
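To illustrate why a device height parameter can stand in for sensor-derived scale (this is only a geometric sketch, not Wikitude's, Apple's, or Google's actual implementation): a single camera can reconstruct the ground plane only up to an unknown scale factor, but once you tell the tracker how high the camera sits above that plane, any camera ray can be intersected with the plane in metric units.

```python
import numpy as np

def ground_intersection(ray_dir, device_height,
                        gravity_up=np.array([0.0, 1.0, 0.0])):
    """Intersect a camera ray with the ground plane, in meters.

    The camera sits at (0, device_height, 0); the ground plane passes
    through the origin with normal `gravity_up` (in practice this
    direction would come from the IMU).  Returns the metric 3D
    intersection point, or None if the ray points away from the ground.
    """
    ray_dir = ray_dir / np.linalg.norm(ray_dir)
    denom = gravity_up @ ray_dir
    if denom >= 0:                   # ray parallel to or away from the ground
        return None
    t = device_height / -denom       # metric distance along the ray
    camera_pos = gravity_up * device_height
    return camera_pos + t * ray_dir

# A ray looking 45 degrees downward from a camera held 1.4 m high
# lands 1.4 m in front of the device, on the ground:
p = ground_intersection(np.array([0.0, -1.0, 1.0]), device_height=1.4)
# p is approximately [0.0, 0.0, 1.4]
```

With the height known, every such intersection is in meters, so a 1 m virtual cube placed on the plane matches a real 1 m cube. ARKit/ARCore get the same effect without the parameter, because fusing IMU measurements with the camera frames lets them recover the metric scale directly.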
Thanks for your quick reply!
I am from the computer vision field, which is also the reason I asked how it works. Do you perhaps have some links/sources from ARCore or ARKit themselves that explain how it works?
Also: if ARCore/ARKit is active, the device height is ignored, right?
I'm afraid I don't, and I'm not sure there is such a thing; probably not from Apple or Google anyway.
The device height is ignored when tracking with ARKit/ARCore, but it is still used to determine the size of any augmentations you render during the initialisation phase. So an incorrectly set device height will result in a discontinuity in augmentation size when switching from the initialisation state to the tracking state.