Hello Wikitude team,
I am trying to figure out how to properly use Wikitude in a VR setup such as Unity's MockHMD or Cardboard VR; essentially I just need to render the same camera twice. I already tried duplicating the WikitudeCamera, but that doesn't work, presumably because each instance expects its own physical camera. Using Unity's MockHMD throws the error "Cannot set field of view on camera with name 'Wikitude' while VR is enabled".
I wanted to know whether something like that is supported or possible. For context: I will not be using a physical camera. I plan to feed YUV frames from another source, so it should be possible somehow to update two different cameras.
Are you trying to get the physical camera's data from the VR device and pass it to Wikitude, or are you trying to use a Wikitude camera to detect something within VR itself? I am not sure about the use case, since that is an unusual environment for the Wikitude SDK.
For the first use case (getting the VR camera data), I would suggest obtaining that data through another API, since I am not sure how the rendering of the background camera would be handled. The camera image can then be passed to the Wikitude SDK via the Plugins API.
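Roughly, feeding an external frame to the SDK could look like the script below. Note that all Wikitude type and member names here (`CameraFrame`, `CameraFramePlane`, `ColorCameraFrameMetadata`, `Plugin.NotifyNewCameraFrame`) are assumptions based on the SDK's CustomCameraController sample and may differ between SDK versions, so treat this as a sketch rather than working code:

```csharp
// Sketch only: pushing an external YUV (NV21) buffer to the Wikitude SDK via
// the Plugins API. All Wikitude type/member names are assumptions taken from
// the CustomCameraController sample; verify them against your SDK version.
using System.Collections.Generic;
using System.Runtime.InteropServices;
using UnityEngine;
using Wikitude;

public class ExternalFrameFeeder : MonoBehaviour
{
    public Plugin InputPlugin;   // the Plugin component registered in the scene
    private int _frameIndex = 0;

    // Called whenever the external source delivers a new NV21 buffer.
    public void OnYuvFrame(byte[] nv21, int width, int height)
    {
        // Pin the managed buffer so the native SDK can read it.
        var handle = GCHandle.Alloc(nv21, GCHandleType.Pinned);
        try {
            var metadata = new ColorCameraFrameMetadata {
                Width = width,
                Height = height,
                HorizontalFieldOfView = 60.0f  // must match the real camera optics
            };
            var plane = new CameraFramePlane {
                Data = handle.AddrOfPinnedObject(),
                DataSize = (uint)nv21.Length
            };
            var frame = new CameraFrame(++_frameIndex, 0, metadata,
                                        new List<CameraFramePlane> { plane });
            InputPlugin.NotifyNewCameraFrame(frame);
        } finally {
            handle.Free();
        }
    }
}
```

The important part is that the metadata (size, color space, field of view) matches the actual frames; a wrong FoV in particular will distort the returned poses.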
My end goal is to use virtual camera input: following the Wikitude custom camera example, I will write a custom driver that parses the YUV frames coming from my own Android library and passes them to Wikitude within Unity.
The final piece is displaying the Wikitude output (camera feed + AR) for both eyes using any VR solution, GoogleVR or whatever else works with Wikitude.
I hope my query is clear now.
Thanks a lot for taking the time and effort to answer my question.
We have never tried to use the Wikitude SDK in that setting with Unity's MockHMD.
But disabling both cameras (the Wikitude camera and the BackgroundCamera) should not break recognition and tracking; the rendering of the camera frame is handled independently anyway.
It seems I am trying to solve a rare use case. I tried disabling both cameras, but that didn't help.
Is there any working setup for rendering Wikitude twice in Unity? It should be simple to duplicate something that already works once, but having two WikitudeCameras also didn't work.
So far nothing has worked with Wikitude: with MockHMD, Unity complains with "cannot set field of view on camera with name WikitudeCamera", and with two WikitudeCameras the left eye shows the camera feed while the right eye renders the AR content.
Is there any guide for multiple WikitudeCameras, or any other way to have a left and a right Wikitude camera within Unity?
Which Wikitude SDK version are you using? Before version 9.3, the BackgroundCamera GameObject was instantiated at runtime and hidden.
If you can't upgrade for some reason, you can query that camera and then disable it:
var backgroundCamera = wikitudeCamera.transform.GetChild(0).GetComponent<Camera>();
backgroundCamera.enabled = false;
Rendering two cameras won't work with the WikitudeCamera, because it relies heavily on singletons internally. Therefore, I would go the route of disabling the camera rendering entirely and handling that part separately.
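To illustrate, a minimal sketch of what "disabling the rendering while keeping tracking alive" could look like, attached to the WikitudeCamera GameObject. The assumption that the BackgroundCamera is the first child comes from this thread and applies to pre-9.3 SDK versions; only standard Unity API calls are used:

```csharp
// Sketch: disable rendering on the WikitudeCamera and its BackgroundCamera
// child so recognition/tracking keeps running while rendering is handled
// separately. The child-index assumption is based on pre-9.3 SDK behavior.
using UnityEngine;

public class DisableWikitudeRendering : MonoBehaviour
{
    void Start()
    {
        // The Camera component on the WikitudeCamera GameObject itself.
        var arCamera = GetComponent<Camera>();
        arCamera.enabled = false;

        // Pre-9.3, the BackgroundCamera is created at runtime as a child.
        var background = transform.GetChild(0).GetComponent<Camera>();
        if (background != null) {
            background.enabled = false;
        }
    }
}
```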
I was not able to find the version written anywhere, but I tried installing Unity 2021 with the latest Wikitude SDK (9.10), and Unity's MockHMD now recognizes the image and renders the target. I will try to figure out why it is not calculating the distances to the image target properly, but that is a start. Thanks a lot.
I am still trying to work out what changes between using MockHMD (or any other Unity VR) and the regular non-VR use case, which works perfectly.
The only customizable scripts I found were CustomCameraController, which only affects the input, and CustomCameraRenderer, which seems to handle any additional rendering on top of the AR provided by the Wikitude camera.
Is there any way to customize the AR part as well? I can see that the Wikitude camera is not tracking the image target's position properly, although rotations work fine. It seems the Wikitude camera is confused about distances when VR is used: there is a gap between the loaded 3D model and the image target, and when I move the image target a certain distance, the 3D model moves only about half that distance.
If there is any workaround, please let me know, because this is a blocker for the project I am working on.
(Tracking works fine in the same project when VR is disabled by selecting None at Target Eye.)
FYI, the goal is to show the exact same image for both eyes, which is why I selected Left for both eyes; the distinction doesn't really matter for my use case.
I see that your FoV is set to 111°, which doesn't seem right. Do you get the same value with VR disabled? I am not quite sure how the VR rendering is done, but the Wikitude SDK needs a proper FoV value for the fullscreen camera frame; if that is not reported correctly, the returned pose might not align, as you are seeing. As a workaround, I think it might work to do a plain Wikitude rendering pass and render both the Wikitude camera and the background camera into render textures; those render textures can then be displayed on a canvas with a split view quite easily.
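A minimal sketch of that split-view workaround, using only standard Unity API. The `RawImage` references, texture size, and component names are assumptions for illustration:

```csharp
// Sketch: render the AR output into a RenderTexture and display it twice
// (left/right) on a screen-space canvas. Field names are illustrative.
using UnityEngine;
using UnityEngine.UI;

public class SplitViewRenderer : MonoBehaviour
{
    public Camera arCamera;    // the camera rendering camera feed + AR
    public RawImage leftEye;   // RawImage on the left half of the canvas
    public RawImage rightEye;  // RawImage on the right half of the canvas

    void Start()
    {
        var rt = new RenderTexture(Screen.width / 2, Screen.height, 24);
        arCamera.targetTexture = rt;

        // Both eyes show the same texture, matching the "same image for
        // both eyes" requirement from this thread.
        leftEye.texture = rt;
        rightEye.texture = rt;
    }
}
```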
I followed pretty much what you suggested, given that there was no out-of-the-box way to do VR, probably due to Unity not allowing other scripts to modify the FoV while VR is in use.
When I tried using a Canvas with the texture, a portion of the video was getting cut off and VR was having issues again; maybe I didn't fully get what you meant.
However, I ended up applying the texture to a plane and duplicating it. I find that easier to set up, and it works perfectly.
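For anyone landing here later, the plane-based setup described above could be sketched like this; the object references and texture size are assumptions, not the exact project setup:

```csharp
// Sketch: one RenderTexture shared by two duplicated planes, one per eye.
// References and resolution are illustrative assumptions.
using UnityEngine;

public class PlaneEyeDisplay : MonoBehaviour
{
    public Camera arCamera;      // camera rendering camera feed + AR
    public Renderer leftPlane;   // plane in front of the left eye
    public Renderer rightPlane;  // duplicated plane in front of the right eye

    void Start()
    {
        var rt = new RenderTexture(1280, 720, 24);
        arCamera.targetTexture = rt;
        leftPlane.material.mainTexture = rt;
        rightPlane.material.mainTexture = rt;
    }
}
```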
Thanks a ton,
Keep up the great work
You're welcome! Glad that it works!