
Snapping photo exactly to building (combining geo and image tracking for higher accuracy)

 Hi,

we developed and released an app (iOS and Android) based on the Wikitude geo-location features,
where the user can walk through the streets of a town and see old photos of buildings merged into reality via AR.
This works quite well if the user is facing roughly the same direction as the original photographer did.
But since the sensors, especially the compass, are not very accurate, the augmentation is of course not exact.
In AR, the old photo of the building is often shifted too far to the left or right compared to the real building in the camera view.
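
For illustration, our current geo-only placement is roughly along these lines, shown as a simplified sketch against the Wikitude JavaScript (ARchitect) API; the coordinates, file names and sizes are just placeholders:

```javascript
// Simplified geo-only placement (Wikitude JavaScript API).
// Coordinates, file name and drawable height are placeholders.
var buildingLocation = new AR.GeoLocation(48.2082, 16.3738, AR.CONST.UNKNOWN_ALTITUDE);

var oldPhoto = new AR.ImageResource("assets/old_photo_1910.jpg");
var oldPhotoDrawable = new AR.ImageDrawable(oldPhoto, 5.0, { zOrder: 1 });

// The drawable hangs on a geo coordinate, so its on-screen position depends on
// GPS and compass -- which is exactly where the left/right offset comes from.
var buildingOverlay = new AR.GeoObject(buildingLocation, {
    drawables: { cam: [oldPhotoDrawable] }
});
```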

We would like to improve our app with some kind of additional image recognition, so that the building in the old photo "snaps" exactly onto the real building in the camera stream
once the real building is recognized.

I assume this could be done with your 3D scene and object recognition, couldn't it?
Wikitude would recognize the building as a 3D object and we would place the old photo in front of it.

But I think this 3D approach is not needed as the problem is basically 2D, not 3D:


The user is standing at a precisely known point in front of the building (the original photographer's position). This point is physically marked on the street, so it is exact!


Could we do it the following way?


We take "today" some pictures of the building: one by day, one by night, one with snow, one with rain, one with fog ...
We feed these images as input to a Wikitude *(2D) image tracker*.
We hope that the *image tracker* will recognize the real building in the camera stream as it compares the view against the taken "today" photos (?)
Then we could place the ancient photo -exactly- on the real building!
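
What we have in mind is roughly the following sketch, again against the Wikitude JavaScript API, assuming a target collection "buildings.wtc" generated from the "today" photos (target and file names are placeholders):

```javascript
// Sketch: recognize the building via 2D image targets and snap the old photo onto it.
// Assumes "buildings.wtc" was generated from the "today" photos
// (e.g. targets "facade_day", "facade_night", "facade_snow", ...).
var targetCollection = new AR.TargetCollectionResource("assets/buildings.wtc");

var buildingTracker = new AR.ImageTracker(targetCollection, {
    onError: function(errorMessage) {
        alert("Image tracker error: " + errorMessage);
    }
});

var oldPhoto = new AR.ImageResource("assets/old_photo_1910.jpg");
// Height 1.0 = same height as the tracked target, so the old photo covers the facade.
var oldPhotoOverlay = new AR.ImageDrawable(oldPhoto, 1.0, { zOrder: 1 });

// "*" matches any target in the collection, so the day/night/snow/... variants
// of the facade all trigger the same overlay, aligned to the tracked image.
var buildingTrackable = new AR.ImageTrackable(buildingTracker, "*", {
    drawables: { cam: [oldPhotoOverlay] },
    onImageRecognized: function(target) {
        // Building recognized: the overlay now sticks to the real facade.
    },
    onImageLost: function(target) {
        // Building lost: we could fall back to the geo-based placement here.
    }
});
```

The "snapping" would then come for free, because the overlay is positioned relative to the tracked target instead of relative to GPS and compass.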

Could this work?
Will it also still work in a few months, when some details of the building have changed:
e.g. different lighting conditions, a tree that now has green leaves (summer) or no leaves (winter), snow on the roof, a door that was closed is now open, ...?

I think we would need some kind of threshold in the image tracker that lets us define that SIMILAR, not only exact, matches are still recognized.
I assume you have such a threshold hard-coded internally, e.g. at 90%, some kind of "tracking accuracy threshold for when to fire an event". Can we access this threshold from our code?
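
Independent of such a threshold, we could at least exploit the marked photographer position: the trackable could be switched on only while the user stands near that spot, so matches elsewhere in town are not an issue. A rough sketch, with the same API assumptions as above and an arbitrary 10 m radius:

```javascript
// Sketch: enable the image trackable only near the marked photographer position.
var photographerSpot = new AR.GeoLocation(48.2082, 16.3738, AR.CONST.UNKNOWN_ALTITUDE);

AR.context.onLocationChanged = function(latitude, longitude, altitude, accuracy) {
    var distanceInMeters = photographerSpot.distanceToUser();
    // buildingTrackable is the AR.ImageTrackable from the sketch above;
    // the 10 m radius is arbitrary.
    buildingTrackable.enabled = (distanceInMeters < 10);
};
```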

Or do we have to use your 3D object recognition, because only that can handle such "similarities" instead of the "exact matches" of image recognition?

How can we solve this task?
We will buy a new Wikitude license if this works! ;-)

Thanks and
all the best
Manuel

1 Comment

Dear support,

any hints? :-)

Thanks

Manuel
