I know there are .wto files and object recognition, but that process is too heavy for the users of my app. Is it possible to use photos as targets? Currently I take a photo and then draw something on it, but the place where I left the drawing can't be recognized later from the previously taken photo (which was converted to a .wtc file). I've seen similar functionality in some other apps (not based on Wikitude), so I understand it's possible. So I suppose in my case it's a matter of the required accuracy? Or is the problem in the algorithms or something like that? Is there any workaround for this situation?
Currently I'm using the JS SDK, but this question applies to both the JS and Native SDKs.
I am sorry, but I do not understand what you mean with this sentence: "But place where I've left drawing can't be recognized by previously taken photo (which then was converted to WTC file)".
The .wto file is used for Object Recognition and the .wtc file for Image Recognition. If you wish to generate .wtc files through a script, you need to reach out to firstname.lastname@example.org and make a request.
I meant that I want to place 2D/3D objects on real-world objects, but the target for a real-world object is a .wtc file (a previously taken photo that was converted to a .wtc file). It works fine when the target image is something flat like a magazine cover, but when the target file is a photo of a real-world object, it is very hard to recognize (it takes a few minutes of searching for the right angle).
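For reference, this is roughly how the photo-as-target setup looks in the Wikitude JS (Architect) API. This is a minimal sketch, not the asker's actual code: it only runs inside a Wikitude ArchitectView, and the file path `assets/photo.wtc`, the target name `"photo"`, and the label text are assumptions for illustration.

```javascript
// Load the target collection generated from the user's photo (assumed path).
var targetCollection = new AR.TargetCollectionResource("assets/photo.wtc", {
    onError: function (error) {
        alert("Could not load target collection: " + error);
    }
});

// Image Recognition tracker built on top of the .wtc collection.
var tracker = new AR.ImageTracker(targetCollection, {
    onTargetsLoaded: function () { /* targets are ready to be recognized */ },
    onError: function (error) { alert("Tracker error: " + error); }
});

// The augmentation is drawn relative to the flat target plane, which is why
// it appears to tilt away as soon as the camera leaves the original angle.
var label = new AR.Label("My drawing", 1);

var trackable = new AR.ImageTrackable(tracker, "photo", {
    drawables: { cam: label },
    onImageRecognized: function () { /* target found */ },
    onImageLost: function () { /* target lost */ }
});
```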
To be clearer, I've attached a few screenshots below. Steps (matching the image names):
1) Take a photo of an object.
2) Put text on that object.
3) Recognize the object (from almost the same angle) using the photo.
4, 5) Change the angle; the rotation of the text changes with it.
It's okay that the text doesn't stay perfectly in place and sometimes jumps a lot; this is roughly the behavior we want to achieve.
If you want to track an object, why are you using Image Recognition (and therefore providing a .wtc file) instead of proceeding with Object Recognition and providing a .wto file? When you try to track a 3D object from different angles while what you provide to our algorithm is a flat 2D target, the results will not be the desirable ones.
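For comparison, the Object Recognition path suggested here has the same shape in the JS API, just with an ObjectTracker and a .wto file. Again a hedged sketch that only runs inside a Wikitude ArchitectView; the path `assets/object.wto` and the wildcard target name `"*"` are assumptions.

```javascript
// Target collection built from a .wto file (Object Recognition).
var targetCollection = new AR.TargetCollectionResource("assets/object.wto", {
    onError: function (error) { alert("Could not load .wto: " + error); }
});

var objectTracker = new AR.ObjectTracker(targetCollection, {
    onTargetsLoaded: function () { /* 3D target ready */ },
    onError: function (error) { alert("Tracker error: " + error); }
});

var label = new AR.Label("My drawing", 1);

// Because the tracker knows the object's 3D shape, the augmentation can
// stay anchored when the camera moves around the object.
var objectTrackable = new AR.ObjectTrackable(objectTracker, "*", {
    drawables: { cam: label },
    onObjectRecognized: function () { /* 3D object found */ },
    onObjectLost: function () { /* object lost */ }
});
```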
Because I want to keep this process simple for my users. I want to give them the possibility to just take a photo, make it a target, and put whatever they want on it. So, as I understand it, it isn't possible to achieve such results (like my example in the screenshots above) with Wikitude?
Although I do understand your use case, the problem is that you are providing our algorithm with a 2D image when in fact you are trying to recognize a 3D object. The only way this use case can work is if you try to track the object from exactly the same angle from which you took the picture, and even then the results will not be stable. Also, keep in mind that the environmental conditions need to be the same when you take the picture and when you actually try to recognize the object.