I'm working on a project for a construction company that would like to use augmented reality to record defects (e.g., by photographing them) and later display them on an iPad: when someone points the camera at a defect, it should appear highlighted, with additional interactive (clickable) information next to it.
I've searched a lot over the last couple of days but unfortunately couldn't find concrete answers to the following questions, so I'm hoping you can help:
1. Can we do this with Wikitude? I think this is what Client Recognition or Cloud Recognition is meant for, but I'm not entirely sure.
2. I couldn't find a way to place a custom UIView next to recognized targets on screen, or to draw anything at all. Is there an API for this in the Native SDK?
3. How reliable would it be if we wanted to draw markers for defects outside the camera's field of view, i.e. based on our current location within a building? For example, if there are defects somewhere to the left of our position (e.g. in another room) and they are not captured by the camera, draw some markers on the left edge of the screen to point this out.
4. Is it possible to achieve reasonable accuracy (within 1 meter) without image recognition (again, based on location)? For example, if a defect has no photo, only GPS coordinates.
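To make question 3 more concrete, here is a rough sketch of the kind of logic I have in mind (in Python just for readability; the real app would be Swift or Objective-C). The function names, the ~60° field of view, and the 150° "behind" threshold are all my own assumptions, not anything from Wikitude:

```python
def normalized(deg):
    """Map an angle in degrees to the range [-180, 180)."""
    return (deg + 180) % 360 - 180

def edge_hint(heading, bearing_to_defect, horizontal_fov=60):
    """Decide where an off-screen defect marker should go.

    heading:           direction the camera faces, degrees clockwise from north
    bearing_to_defect: direction from our position to the defect, same convention
    horizontal_fov:    assumed camera field of view (~60 degrees is a guess)

    Returns 'on_screen', 'left', 'right', or 'behind'.
    """
    delta = normalized(bearing_to_defect - heading)
    if abs(delta) <= horizontal_fov / 2:
        return "on_screen"      # defect is visible, no edge marker needed
    if abs(delta) >= 150:
        return "behind"         # roughly behind the user
    return "left" if delta < 0 else "right"

# Defect 30 degrees to the right of where the camera points: still on screen.
print(edge_hint(heading=350, bearing_to_defect=20))   # on_screen
# Defect 90 degrees to the left: show a marker on the left edge.
print(edge_hint(heading=0, bearing_to_defect=-90))    # left
```

My question is essentially whether the heading and position inputs to something like this can be obtained reliably enough indoors.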