So you wish to recognize an image, place content on it, and then be able to move your device away from the image while continuing with SLAM-based tracking?
Did you already test our Extended Tracking feature? If not, please have a look at this feature which seems to be what you're looking for.
Thx and greetings
We've tested extended tracking, but maybe I'm missing something. Extended tracking only seems to work (and not as well as SLAM) if the user keeps facing the general direction of the image (staying within roughly 180 degrees, rotating very slowly, and not translating at all). But I want room-scale, 360-degree, full-6DoF AR. That means we need SLAM. Or am I missing something?
My ideal would be to use image recognition (and object & scene recognition and instant targets as well) separately from any of the AR technologies. From Wikitude, I just want to know that the image has been recognized, and then let me decide how to use that information. So, more modular. Then image recognition becomes an input, not something we depend on to continue the AR experience. We could even throw away the image once it has been recognized without disrupting the experience, because the room would already have a sufficient point cloud. So the image (or object or scene) is only used to place the world; after that we don't need it. The point is that users don't have to go through all the work of manually placing and orienting the AR world to get it aligned. (I've added two video links at the bottom to help clear this up.)
Here's my use case. I want to have multiple people walk into a room and see everything in the world in the same physical location. It's a shared AR world. One way that we have done this with ARKit and ARCore is to use their image recognition features. Everyone can scan the same image and we can triangulate their positions and align their AR worlds.
The image is just used as the origin (0,0,0) of the AR coordinate system (without the user having to place it manually). Everyone scans the image, which places their origin in the same spot (so we get both location and orientation).
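To make the alignment idea concrete, here is a minimal numpy sketch of the math involved. It assumes (as ARKit and ARCore do in their own APIs) that the SDK reports the detected image's 6DoF pose as a 4x4 rigid transform in each device's local coordinate frame; the function and variable names are illustrative, not from any SDK.

```python
import numpy as np

def pose(rotation_deg, translation):
    """Build a 4x4 rigid transform: rotation about the y axis, then translation."""
    th = np.radians(rotation_deg)
    T = np.eye(4)
    T[:3, :3] = [[np.cos(th), 0.0, np.sin(th)],
                 [0.0,        1.0, 0.0       ],
                 [-np.sin(th), 0.0, np.cos(th)]]
    T[:3, 3] = translation
    return T

def local_to_shared(T_image_in_local):
    """Map a device's local frame into the shared frame whose origin is the image.

    If T_image_in_local is the detected image's pose in the device's frame,
    its inverse carries local coordinates into image-centered coordinates.
    """
    return np.linalg.inv(T_image_in_local)

# Two devices detect the same image from different positions and orientations:
T_a = pose(30.0,  [1.0, 0.0, 2.0])    # image pose in device A's local frame
T_b = pose(-45.0, [-0.5, 0.0, 1.5])   # image pose in device B's local frame

# The image's own origin, expressed in each device's local frame:
img_in_a = T_a @ np.array([0.0, 0.0, 0.0, 1.0])
img_in_b = T_b @ np.array([0.0, 0.0, 0.0, 1.0])

# After remapping, both devices agree: the image sits at (0,0,0) for everyone,
# so content placed in shared coordinates appears in the same physical spot.
shared_a = local_to_shared(T_a) @ img_in_a
shared_b = local_to_shared(T_b) @ img_in_b
```

Once each device applies its own `local_to_shared` transform, all subsequent SLAM tracking can run in the device's local frame; the image was only needed for that one-time handshake.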
However, with Wikitude we have to switch between Image Recognition AR and SLAM, whereas I want to use image recognition and SLAM simultaneously. So this approach doesn't work with Wikitude.
So, in sum, I would like for image recognition, object recognition, and scene recognition to be separated from the AR tech so that we can use them in unique ways with the AR tech.
Here are two videos that might help clear up what we are doing:
Video 01: https://drive.google.com/a/yeticgi.com/file/d/1D8vSguJm4mvV-ZEq-hJag_VxAXyrYrrn/view?usp=sharing
Video 02: https://drive.google.com/a/yeticgi.com/file/d/1r-yqKDb5PTdtWjKJx_Qem5UIUHcOkF7H/view?usp=sharing
We want to be able to use image recognition to place a room-scale AR world that uses SLAM tracking. So instead of every user individually placing the world in the same spot by tapping on a fiducial on their own screen, I want them all to scan the image and have it automatically place the world in the same spot for everyone.
But it seems like we can't use Wikitude's image recognition technology with Wikitude's SLAM tech.
We don't want marker tracking (because extended tracking isn't room scale); we just want it to recognize the image so we can place the AR world.
We can do this with ARKit and ARCore (using their image recognition tech), but we can't with Wikitude.
Any help would be greatly appreciated!