Hi Shen Heng, and thanks for sharing our video!
We experimented a lot with scenarios close to the one you are describing. To make it all work, we had to develop a bunch of techniques and tools:
- first, a tool that lets you import a map of the indoor space and synchronize it with GPS coordinates. We can then place POIs on the map and deduce the GPS coordinates of each indoor POI (see the georeferencing sketch after this list).
- then, a localization system based on iBeacons, which tells us which POI the user is at. Since we know that POI's GPS coordinates, we know where the user is in a GPS reference frame.
- we also realized that the compass integrated in smartphones is not a very reliable instrument, so we constantly correct it with visual recognition (in our case, when a user points their phone at an artwork, we know their orientation for sure and can correct the compass heading if needed).
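To make the first two points a bit more concrete, here is a minimal sketch of the georeferencing step, assuming a locally flat Earth, one reference anchor measured on site, a known map scale and a known rotation to true north. All the names (MapAnchor, mapPointToGps, ...) are illustrative, not our actual code or any Wikitude API:

```typescript
interface LatLon { lat: number; lon: number; }
interface MapPoint { x: number; y: number; } // indoor map pixels, y pointing down

interface MapAnchor {
  mapPoint: MapPoint;      // a point placed on the indoor map...
  gps: LatLon;             // ...whose GPS coordinates were measured on site
  metersPerPixel: number;  // map scale
  rotationDeg: number;     // angle between the map's "up" axis and true north
}

const EARTH_RADIUS_M = 6378137;

// Convert any point placed on the indoor map (e.g. a POI) to GPS coordinates.
function mapPointToGps(p: MapPoint, anchor: MapAnchor): LatLon {
  // Offset from the anchor in meters, in the map frame (flip y so it points "up").
  const dx = (p.x - anchor.mapPoint.x) * anchor.metersPerPixel;
  const dy = -(p.y - anchor.mapPoint.y) * anchor.metersPerPixel;

  // Rotate the offset into an east/north frame.
  const theta = (anchor.rotationDeg * Math.PI) / 180;
  const east  = dx * Math.cos(theta) - dy * Math.sin(theta);
  const north = dx * Math.sin(theta) + dy * Math.cos(theta);

  // Equirectangular approximation: fine over a museum-sized area.
  const dLat = (north / EARTH_RADIUS_M) * (180 / Math.PI);
  const dLon = (east / (EARTH_RADIUS_M * Math.cos((anchor.gps.lat * Math.PI) / 180))) * (180 / Math.PI);
  return { lat: anchor.gps.lat + dLat, lon: anchor.gps.lon + dLon };
}
```

Once every POI has GPS coordinates this way, locating the user is just a lookup: the iBeacon layer tells us which POI the user is at, and that POI's GPS position becomes the user's position in the GPS reference frame.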
Here we have all the data we need to build an indoor navigation system!
We tried a lot of things at MuseoPic, including placing AR GeoObjects (like in the video) and updating the user position by injecting the GPS location deduced from our iBeacon localization system. But that did not work well: the objects to display were often too close together, and our localization was not precise enough. We ended up using a map on which the user and their orientation are shown live, around the POIs placed on the map. We get good results with this system; visitors currently manage to navigate from artwork to artwork in museums with it. The limitations come from the iBeacon localization, which can be slow and inaccurate, especially in crowded places. Also be aware that a lot of Android users do not have a compass in their phones.
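For what it's worth, the math behind that live map is nothing exotic. Here is a rough sketch of it (function names are illustrative only): distance and bearing from the beacon-deduced user position to the next POI, plus the compass-offset trick mentioned in the list above.

```typescript
interface LatLon { lat: number; lon: number; }

const toRad = (d: number) => (d * Math.PI) / 180;
const toDeg = (r: number) => (r * 180) / Math.PI;

// Great-circle distance (haversine); overkill indoors but always valid.
function distanceMeters(a: LatLon, b: LatLon): number {
  const R = 6371000;
  const dLat = toRad(b.lat - a.lat);
  const dLon = toRad(b.lon - a.lon);
  const h = Math.sin(dLat / 2) ** 2 +
            Math.cos(toRad(a.lat)) * Math.cos(toRad(b.lat)) * Math.sin(dLon / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(h));
}

// Initial bearing from a to b, in degrees clockwise from true north.
function bearingDeg(a: LatLon, b: LatLon): number {
  const dLon = toRad(b.lon - a.lon);
  const y = Math.sin(dLon) * Math.cos(toRad(b.lat));
  const x = Math.cos(toRad(a.lat)) * Math.sin(toRad(b.lat)) -
            Math.sin(toRad(a.lat)) * Math.cos(toRad(b.lat)) * Math.cos(dLon);
  return (toDeg(Math.atan2(y, x)) + 360) % 360;
}

// When the user points the phone at a recognized artwork, the true bearing from
// the user's POI to that artwork is known, so the compass error can be measured
// once and reused until the next recognition.
let compassOffsetDeg = 0;
function calibrateCompass(rawHeadingDeg: number, userPos: LatLon, artworkPos: LatLon): void {
  compassOffsetDeg = (bearingDeg(userPos, artworkPos) - rawHeadingDeg + 360) % 360;
}
function correctedHeadingDeg(rawHeadingDeg: number): number {
  return (rawHeadingDeg + compassOffsetDeg) % 360;
}
```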
Finally, a promising solution on its way is to locate the user through visual recognition of objects whose positions we know (like in this recent amazing video from Dent Reality). We could almost do that with Wikitude if we were able to:
- combine instant tracking and image recognition at the same time.
- get the camera pose!! That is a big (and weird) limitation of the Wikitude Cordova SDK for me: when something is recognized, you cannot get the camera (device) position relative to that object. The data is there, screaming to be delivered to developers, but it is not exposed by the API... ;-( (a quick sketch of what this would enable is below)
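To illustrate, here is what we would do if the SDK handed us the recognized target's model-view matrix (the object-to-camera transform). To be clear, this is not something the Wikitude Cordova SDK exposes today, which is exactly the complaint above; the input matrix here is a hypothetical placeholder, and the camera pose in the object frame is just the inverse of that rigid transform:

```typescript
// A 4x4 rigid transform in column-major order, [ R | t ].
type Mat4 = number[]; // length 16

// Inverse of a rigid transform: R' = R^T, t' = -R^T * t.
// Applied to an object->camera matrix, this gives the camera pose
// expressed in the object's coordinate frame.
function invertRigid(m: Mat4): Mat4 {
  const tx = -(m[0] * m[12] + m[1] * m[13] + m[2] * m[14]);
  const ty = -(m[4] * m[12] + m[5] * m[13] + m[6] * m[14]);
  const tz = -(m[8] * m[12] + m[9] * m[13] + m[10] * m[14]);
  return [
    m[0], m[4], m[8],  0,   // column 0 of R^T
    m[1], m[5], m[9],  0,   // column 1 of R^T
    m[2], m[6], m[10], 0,   // column 2 of R^T
    tx,   ty,   tz,    1,   // inverted translation
  ];
}
```

With that camera-in-object pose, and since we already know where the artwork sits on the georeferenced indoor map, the device position in the museum / GPS frame would follow by composing the two transforms, which is the whole point of localizing through recognition.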
Frontdesk