
3D model not showing up after scanning target successfully

Hey everyone,


When using the sample code from the Wikitude API documentation, we get no 3D object shown on the Epson BT-200. We used the provided car object and also tried similar files, such as a simple cube designed in Blender (and exported with the Wikitude 3D Encoder). The scanning itself works fine; we verified it with some logs.


Could you please have a look at the attached engine file to check whether the normal loading process looks fine, or give us feedback on which other approach we should use to show our object on the device?


Best regards

Team KlaviAR

[Attachment: engine JS file (7.43 KB)]

Hi KlaviAR team,


At a quick glance your JS code looks fine. Non-appearing augmentations on the BT-200 usually indicate an incorrect calibration of the device. Please try to verify the calibration or redo it.
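
For cross-checking, the minimal loading pattern from the documentation's 3D-model-on-target sample looks roughly like the sketch below. The asset paths and the scale value are placeholders, and it assumes the AR.ImageTracker API of the newer SDK versions; on older Epson SDKs the corresponding classes are AR.ClientTracker / AR.Trackable2DObject and the callback names differ.

var World = {
    init: function initFn() {
        // Target collection (.wtc) that contains the image target
        this.targetCollectionResource = new AR.TargetCollectionResource("assets/tracker.wtc", {
            onError: World.onError
        });

        this.tracker = new AR.ImageTracker(this.targetCollectionResource, {
            onTargetsLoaded: function () { AR.logger.info("targets loaded"); },
            onError: World.onError
        });

        // 3D model exported with the Wikitude 3D Encoder (.wt3)
        this.model = new AR.Model("assets/cube.wt3", {
            scale: { x: 0.05, y: 0.05, z: 0.05 },
            onLoaded: function () { AR.logger.info("model loaded"); },
            onError: World.onError
        });

        // Attach the model to every target in the collection
        this.trackable = new AR.ImageTrackable(this.tracker, "*", {
            drawables: { cam: [this.model] },
            onImageRecognized: function () { AR.logger.info("target recognized"); },
            onError: World.onError
        });
    },

    onError: function (error) {
        AR.logger.error(error);
    }
};

World.init();

If the model's onLoaded callback fires and "target recognized" is logged but nothing is drawn, an unsuitable scale value or the device calibration are the usual suspects; if onLoaded never fires, the .wt3 file itself is not being loaded.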


Best Regards,

Alex


Thank you for your suggestion. We checked our calibration, did a factory reset of the BT-200, and calibrated it again. It didn't change anything.

We did the calibration process with your sample app. Maybe we are doing something wrong when creating, configuring, and loading the ArchitectView. Could you please have a look at our Activity? If the JS is fine and the calibration doesn't change anything, the error may be in this file.


Best regards and thanks in advance

Team KlaviAR

[Attachment: Java Activity file]

Did you see the orange rectangle for verification of the calibration before you saved it?

Do 3D models work in the sample app?


The only thing I noticed in your Activity is that you disabled 3D rendering of the ArchitectView. This should not cause an issue, but it differs from the example app, so please check whether setting it to true in onResume changes anything.


Best Regards,

Alex

Dear Wikitude Technical Support

I have a problem with object recognition on the BT-350.

My development tool is Android Studio, and the SDK version I am using is Wikitude SDK Epson 7.2.1, which is currently the newest SDK for the Epson glasses.

Here is the problem: I recorded a video of my object, created a .wto file from it, and made it the tracking target. When I put my JavaScript code into my Android mobile project and used my Android phone to track the object, it was recognized. However, with the same JS code running on my glasses, the object was never recognized and nothing happened. I don't know why. I always thought the same JS should work on any Android device. Could you help me with it?


In fact, after a more careful attempt, such as recording the video more carefully, the glasses could recognize the target object.

But I ran into another problem. As you know, I had to use a white paper background and a white rotating platform in order to record a good object video. The glasses could only recognize the object when I placed it in that setup; as soon as I moved the target away from that place, the recognition was lost immediately.

The mobile phone doesn't have this problem. It's weird, isn't it?

Could you please tell me the reason for this problem and help me deal with it?

Hi,


As we have already been in contact directly: did you try to scan the object from a shorter distance? In the image that you sent, the object is very small in the camera view.


As for the Epson performance: the BT-350 has a rather slow CPU, which affects tracking and recognition and may be the reason why you experience the difference between the glasses and a mobile phone.
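
To narrow this down, it can also help to log the tracker callbacks so you can see directly on the glasses whether recognition ever fires and when tracking is lost. A minimal sketch, assuming the AR.ObjectTracker JS API of SDK 7.x and placeholder file names:

// Target collection (.wto) created from the object video
var targetCollectionResource = new AR.TargetCollectionResource("assets/object.wto", {
    onError: function (error) { AR.logger.error(error); }
});

var objectTracker = new AR.ObjectTracker(targetCollectionResource, {
    onTargetsLoaded: function () { AR.logger.info("object targets loaded"); },
    onError: function (error) { AR.logger.error(error); }
});

var objectTrackable = new AR.ObjectTrackable(objectTracker, "*", {
    drawables: { cam: [ /* your AR.Model or other drawables */ ] },
    // Log every recognition / loss event so the difference between
    // phone and glasses shows up in the log output
    onObjectRecognized: function () { AR.logger.info("object recognized"); },
    onObjectLost: function () { AR.logger.info("object lost"); }
});

If "object recognized" never appears on the glasses, recognition itself is failing (distance, lighting, camera quality); if it appears but is immediately followed by "object lost" once you move the object away from the recording setup, tracking is being dropped, which fits the slower hardware.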


Greetings

Nicola
