SDK Version 9.0 Professional Edition
I was advised to create a new topic. I am trying to create the best 3D point cloud possible for recognition, following the 'object recognition best practices' guide: https://www.wikitude.com/documentation/latest/unity/objecttargetguide.html
However, I am looking for some clarification on how to approach collecting the images that populate the data points. Going through the steps provided (quoted below):
1. "Ideally take pictures from the same area of the object from different spots" -- I assume this means taking images of one area of the object, but from different angles?
2. "Use pictures with some overlap between each other" -- I am unsure what this means. Am I basically taking multiple pictures of the same area and angle (essentially duplicates)?
3. Should I always start the initial collection with images of the object against a white background, upload those, and then move the object to areas with different background colors, or does it not matter which background I use first?
Regarding taking pictures of objects that are rather large (let's say a statue, as in your example -- https://www.youtube.com/watch?v=IdnA1xXZe2M),
4. How would I capture images of such an object following the best practices, given that it may be too large to fit in a studio background or to light evenly? (I have an object I need to recognize, and it is fairly large.)
5. "Avoid taking pictures from a very close distance (few centimeters)" -- my large object has components within it. I assume it's okay to take pictures of a section of that object after taking pictures of the entire object, to provide more accurate data points, or does every picture need to include the entirety of the object?
Hoping to get any clarification soon!
Sorry for the late response. Here are further details on your points:
1. This means different angles and also different distances (in case the object should be scannable from different distances as well). In the Studio Editor you can now see a visualization of the key frames you took, called image views: open the .wto file in the Editor, and in the Point Cloud properties window it is visible on the right side at the bottom (see screenshot).
2. If you e.g. walk around the object and take images from different angles, make sure that every part of the object is visible in at least 2 images.
3. This depends on the background the object will be scanned against. If in the real scenario the object will be located in front of a dark background, you should take the images accordingly. A single-color background without any noise helps the conversion process, as it does not have to 'filter out' noise in the background.
4. See above -- where possible, it is recommended to take pictures without noise in the background. Where this is not possible, try to fill the camera view with the object to make sure that background points are not taken into consideration when the map is created.
5. Yes, if you wish to create an object tracker that only covers parts of the object, then you only need to cover those parts in your images. You can take images from different distances as well.
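As a rough sanity check for the overlap advice in points 1 and 2, you can do the geometry for a simple orbit around the object. This is just an illustrative sketch, not part of the Wikitude SDK: `fov_deg` is an assumed horizontal field of view for your camera (~60 degrees is typical for a phone), and the coverage model is a crude approximation that treats each image as covering about one field of view of the orbit.

```python
def orbit_plan(num_shots, fov_deg):
    """Plan an evenly spaced orbit around an object.

    Returns the angular step between consecutive shots, the approximate
    overlap fraction of adjacent images (rough model: each image covers
    ~fov_deg of the orbit, so overlap ~= 1 - step/fov), and whether
    every part of the object lands in at least 2 images, which under
    this model requires step <= fov/2 (i.e. >= 50% overlap).
    """
    step_deg = 360.0 / num_shots
    overlap = max(0.0, 1.0 - step_deg / fov_deg)
    seen_twice = step_deg <= fov_deg / 2.0
    return step_deg, overlap, seen_twice

# Example: assumed ~60 degree horizontal FOV.
for n in (6, 12, 24):
    step, ov, ok = orbit_plan(n, fov_deg=60.0)
    print(f"{n:2d} shots: step={step:5.1f} deg, "
          f"overlap={ov:.0%}, every part in >=2 images: {ok}")
```

With these (assumed) numbers, 6 shots around the object leave no overlap at all, while 12 or more give each part of the object at least two views -- so when in doubt, take more pictures with smaller angular steps.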
I hope this helps. Should you need anything further, just let me know anytime.