I came across the question already here: https://support.wikitude.com/support/discussions/topics/5000083025
"In the latest version of the SDK (7.2) we added ARKit and ARCore support. So if you are running on a device that has support for this, it will be used automatically and the scale should match real world scale. If the device doesn't support ARKit or ARCore, it will fallback to the previous behavior. The configuration step is still there to define the plane on which tracking should work and where the origin should be."
--> Does it mean that if I create a 1x1x1m cube in wt3 format, load it into the scene with instant tracking and ARCore/ARKit support, and put the cube next to a real 1x1x1m cube, the two will be the same size?
if yes, how does this work?
I'm afraid there's not much you can do currently other than increasing the detail of your model or adding simple texturing to make it look more realistic. We only offer this kind of basic 3D model rendering at the moment.
If you know the location your AR experience will be used in, you might be able to adjust the direction and colour of your light source to match the predominant light source in this environment, if there is such a thing.
If you need more complex rendering, you could consider switching to our Unity plugin.
Good, thanks! Now it looks a lot better. However, the model still looks a bit unrealistic. It does not react well to room lighting, for example; it's just plain white gloss (see the attachment). Is there anything you could recommend to make it blend into the scene a bit more?
Good morning Mateusz,
the wt3 file you sent contains a point light source which, I'm afraid, is currently not working as intended. If you replace it with a directional or spot light source, the model should have actual shading that depends on parameters such as the specularity.
Thanks a lot Daniel. Will check it now.
Also, could you give us some guidance on the textures/materials accepted by wt3? The attached tray was intended to be high gloss. However, the person who made it told me: "I found out that I cannot embed textures and materials into the .wt3 model using the encoder. How can we deal with this?" Could you help answer this too, please?
How to apply textures: whether it is done in the encoder or somewhere in the SDK; and how to tweak the specularity/reflectivity values of the model.
Good morning Mateusz,
the 0.045 value is purely empirical to make our model have an appropriate size. If you are creating your own model, you don't necessarily have to set this value. If your model's vertices are within the interval of [-0.5, 0.5] in each axis, the size should be very reasonable to begin with.
When exporting your 90cm by 90cm square, which base unit did you have set in your modelling software? If it was centimeters rather than meters, the FBX file would likely have vertex positions spanning about 90 units rather than 0.9, making it appear huge.
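As a rough illustration of the unit conversion (the helper below is my own, not part of the Wikitude API):

```javascript
// Toy helper (not Wikitude API): compute a uniform scale factor that
// maps a model's source unit to meters, the unit used when
// ARKit/ARCore-backed instant tracking is active.
function metersScaleFactor(sourceUnit) {
  const metersPerUnit = { m: 1, cm: 0.01, mm: 0.001, in: 0.0254 };
  const factor = metersPerUnit[sourceUnit];
  if (factor === undefined) {
    throw new Error("Unknown unit: " + sourceUnit);
  }
  return factor;
}

// A model exported with centimeters as the base unit needs a uniform
// scale of 0.01 to render at real-world size.
const scale = metersScaleFactor("cm"); // 0.01
```

In the JS API this factor would then go into the model's `scale` option, e.g. `scale: { x: scale, y: scale, z: scale }`.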
Let me tag along on this thread, as my query is somewhat similar.
I'm playing with the sample apps and I was just trying to replace the standard couch wt3 model with a model downloaded from the web. However, the dimensions seem to be way off (the model floats above my head). What should the dimensions of the FBX file be so that it matches real-world dimensions when rendered in AR? I tried a square of 90cm by 90cm and it was gigantic... Also, I'm failing to understand the scale properties. In the sample apps they are currently set to 0.045; where did this number come from? If I'm building an app with a few different models, should they all have a different scale?
Thanks a lot!
I'm afraid I don't, and I'm not sure there is such a thing; probably not from Apple or Google anyway.
The device height is ignored when tracking with ARKit/ARCore, but it is still used to determine the size of any augmentations you render during the initialisation phase. So an incorrectly set device height will result in a discontinuity in augmentation size when switching from the initialisation state to the tracking state.
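As a toy illustration of that discontinuity (my own simplification, not the SDK's internals; I'm assuming the rendered size during initialisation scales with the configured height):

```javascript
// Toy model of the size jump at the initialisation -> tracking
// transition: during initialisation, augmentation size is derived from
// the configured device height; once ARKit/ARCore tracking starts,
// true metric scale applies. The ratio of the two heights gives the
// magnitude of the apparent size change.
function sizeJumpFactor(configuredHeightM, actualHeightM) {
  if (configuredHeightM <= 0 || actualHeightM <= 0) {
    throw new Error("Heights must be positive");
  }
  return configuredHeightM / actualHeightM;
}

// Device height configured as 1.0m but the phone actually held at
// 1.3m: augmentations change size by the ratio of the two heights
// (~0.77 here) when tracking begins.
const jump = sizeJumpFactor(1.0, 1.3);
```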
Thanks for your quick reply!
I am from the computer vision field, which is also why I asked how it works. Do you happen to have any links/sources from ARCore or ARKit itself that explain how it works?
Also: if ARCore/ARKit is active, the device height is ignored, right?
Yes, one virtual unit should coincide with one meter when using ARKit/ARCore. I'm no expert on computer vision matters, but I believe they derive the real-world scale from sensors that are used in addition to the camera frame. That's something Wikitude's instant tracking does not provide yet, which is why we have the device height input parameter.
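To make that concrete in the JS API, here is a sketch based on the instant tracking sample (the file name and values are illustrative assumptions, not verified sample code):

```javascript
// Sketch (illustrative, based on the instant tracking sample): with
// ARKit/ARCore active, 1 virtual unit = 1 meter, so a wt3 model whose
// vertices span [-0.5, 0.5] on each axis renders as a 1m cube with no
// scale correction.
var tracker = new AR.InstantTracker({
    deviceHeight: 1.0  // fallback only; ignored once ARKit/ARCore tracks
});

var cube = new AR.Model("assets/cube_1m.wt3", {
    scale: { x: 1.0, y: 1.0, z: 1.0 }  // no correction needed
});

var trackable = new AR.InstantTrackable(tracker, {
    drawables: { cam: [cube] }
});
```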