This video is a 360 rendering of a statue created in 3ds Max.
The attached images show the resulting cloud data in the Wikitude Studio Editor.
Looking at the point cloud data inside your .wto generator/Wikitude Studio Editor, only the front face, not the rear, right, or left sides, seems to have any points. Why is this?
Thanks for the resources. The lack of points on the sides and rear of the object can be traced back to the properties of the object itself. We generally suggest highly textured objects with non-reflective surfaces to get the best experience. The lack of texture and the general shape of the object in your use case could also lead to an unsatisfactory tracking experience.
However, there are a couple of things you could do to improve the experience. Since this is an artificially generated scene anyway, you can decrease the rotation speed per frame to allow the algorithm to extract more information from the video itself. Furthermore, it is recommended to restrict the video resolution, as a higher resolution does not directly lead to a better point cloud and can even be detrimental in some cases. It is also suggested to adjust the lighting so that it resembles your real tracking scenario.
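To make the rotation-speed suggestion concrete, here is a quick back-of-the-envelope sketch. The specific numbers (a 10-second turntable at 30 fps, a 0.5°-per-frame budget) are illustrative assumptions on my part, not values prescribed by Wikitude:

```python
import math

# Illustrative check of rotation speed per frame for a 360° turntable video.
# The frame counts and the 0.5° per-frame budget are assumptions for the
# example, not Wikitude requirements.

def degrees_per_frame(total_degrees: float, frame_count: int) -> float:
    """Rotation covered by each rendered frame."""
    return total_degrees / frame_count

def frames_needed(total_degrees: float, max_deg_per_frame: float) -> int:
    """Minimum frame count to stay within a per-frame rotation budget."""
    return math.ceil(total_degrees / max_deg_per_frame)

# A 10-second turntable at 30 fps gives 300 frames: 1.2° of rotation per frame.
print(degrees_per_frame(360, 300))  # 1.2

# To stay under 0.5° per frame, the same 360° animation needs at least
# 720 frames, i.e. 24 seconds at 30 fps.
print(frames_needed(360, 0.5))  # 720
```

In other words, slowing the rotation does not require changing the camera path, only rendering more frames over the same path (or playing the same frames back at a lower frame rate).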
Hope this helps,
I have the same problem. Can such a model be recognized?
Just to clarify the question, we were wondering why only the front face of the statue target is showing point cloud data.
Why are we only seeing point cloud data from the front side of the statue? We haven't seen any data from the left, right, and rear.
See the attached images.
Can you provide a screenshot of an example of all four sides represented in the point cloud? How do you achieve 360 recognition?
We weren't able to achieve 360 point cloud data from our video tests using Wikitude's object tracking best practices video(s).
Sure. One of the main reasons for these kinds of results is usually fast movement of the user's camera around the object, or a lack of texture on the object itself, so that the algorithm is not able to correctly reconstruct its shape in 3D.
To improve your use case, try the suggested steps above and see whether you get an improved point cloud.
Below I have attached a couple of screenshots of an example object.
Hope that helps,
From your example images, we only see tracking point data from the top and not the sides, particularly in Screen Shot 2017-09-08 at 15.05.12.png.
Why is this? Shouldn't there be points surrounding all sides?
In this specific case, the reason is the way the object video was recorded, which was mostly from a top/side view of the target. So you are right that most of the points are on top of the object, but from that perspective you can see that there are points on both sides as well. Again, the reason is mostly the camera path used to create the video, as well as the speed at which the camera (the user's phone) is moved around the object.
This visualisation is also not intended to give a full reconstruction of the object, but to give an indication of its shape and layout for better placement of the augmentation, and it will definitely be improved over time.
Hope this helps,
Thanks for the input.
We want the statue to be recognized from all angles. The following video is our latest attempt to create a full reconstruction of the object:
Here's the screen recording of the point cloud in the Studio Editor:
Shouldn't we see the points from all angles based on our animation?
I think the animation itself looks good; it's just that the motion per frame could be too high for the algorithm to capture all the details.
That is why one possibility would be to render the same animation with more frames, so that the motion per frame is slower.
Also, the output in the Studio is not an accurate representation of the object itself, but rather a generalization to give an idea of its shape.
It therefore also does not contain all the information the generated map might have.
Hope that helps,