I'm trying to create a WTSCNView that behaves similarly to an ARSCNView(*) (with limited functionality): an SCNView with a camera SCNNode attached, containing a WTInstantTracker that automatically updates the camera position and rotation, and in which I can render SCNNodes with geometry etc.
(*) I don't have an ARKit-capable iPad/iPhone; that's why I'm not using ARKit.
My first try was to just get the pose information from `instantTracker(_:, didChange pose:)` and `instantTracker(_:, didTrack target:)`, extract the modelView and projection transformations, and apply them to the camera like so:
```swift
// Projection from the tracker goes to the camera…
self.cameraNode.camera?.projectionTransform = WTSCNView.toSCNMatrix(wtMatrix: target.projection)
// …and the inverted modelView becomes the camera pose.
self.cameraNode.transform = SCNMatrix4Invert(WTSCNView.toSCNMatrix(wtMatrix: target.modelView))
```
With this approach, however, I was not able to render a 3D geometry SCNNode at a fixed position (0, 0, -z); I could turn around all I wanted but never saw the model. Comparing the 'default' transforms of a plain camera with the transforms derived from the Wikitude pose information showed large differences, and I could not figure out where to place a geometry node in the SCNView so that the camera would render it.
To work around this, I now calculate the 'difference' (transformation) between the previous and current Wikitude pose matrices and apply that same difference to the default camera. That way I can place an object in the scene and look at it. When I move the device, I can see the camera being updated, so my object follows the device movements to some extent, but I still have the impression something is off: the object appears to be 'moving' (as opposed to 'lying still on the table').
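For reference, a minimal sketch of that "pose delta" workaround. The names `previousPose` and `currentPose` stand in for the already-converted Wikitude modelView matrices; they are illustrative, not Wikitude API:

```swift
import SceneKit

// Sketch of the pose-delta approach: compute the transformation between
// two successive tracker poses and accumulate it onto the camera node.
var previousPose = SCNMatrix4Identity
let currentPose = SCNMatrix4MakeTranslation(0, 0, -0.1) // e.g. device moved slightly

let cameraNode = SCNNode()
cameraNode.camera = SCNCamera()

// delta = inverse(previousPose) * currentPose, applied on top of the
// camera's current transform.
let delta = SCNMatrix4Mult(SCNMatrix4Invert(previousPose), currentPose)
cameraNode.transform = SCNMatrix4Mult(delta, cameraNode.transform)
previousPose = currentPose
```

Note that this only tracks relative motion between frames, so small errors accumulate over time, which may explain the "object drifting" impression.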
So I'm wondering whether what I'm doing is correct, and whether there is perhaps a simpler way to achieve what I want.
I've wanted to write a SceneKit guide for a long time but never had the time for it. So let's see if we can find a solution to the problem.
The basic advice would be to apply the projection to the camera node and the modelView to the SCNNodes that you use to position your augmentations.
Are you familiar with OpenGL or Metal rendering or is SceneKit your first rendering API?
Depending on your answer to that question, our Native SDK examples might be helpful as well.
Thanks for the support!
Unfortunately I have no knowledge of OpenGL or Metal, which is why I was hoping to use SceneKit (and truthfully, my SceneKit knowledge is also at a 'beginner' level).
<quote>The basic advice would be to apply the projection to the camera node and the modelView to the SCNNodes that you use to position your augmentations.</quote>
I don't fully understand this. My geometry SCNNodes are already positioned somewhere in the SceneKit scene. I do not want to change their positions in the scene; rather, I want to update the position of the camera (so it moves along with the iPad's movement/orientation, based on WT's instant tracking capabilities), so that all geometries stay where they are and only where they appear on screen changes.
Consider the case where I have multiple augmentations, one 3D model per SCNNode, with each model locally centred at (0, 0, 0) in its node and the node positioned somewhere in 3D space (via SCNNode.transform). The WTInstantTracker gives me a single modelView transform (I assume the 'model' for the instant tracker is an automatically generated model based on feature detection of the camera view?). If I set every SCNNode.transform to the tracker's modelView, all my nodes would move to the same position and orientation; that's not my intention :)
I feel like I'm missing some vital context/understanding, so forgive me if what I'm saying doesn't make a lot of sense.
I found a couple of hours to play around with SceneKit.
I haven't tested it with the current product, but based on the SceneKit API you should assign the Wikitude projection matrix to the SCNCamera `projectionTransform` property and the modelView matrix to the SCNNode `transform` property. `SCNMatrix4FromGLKMatrix4(GLKMatrix4MakeWithArray(*wikitude matrix ptr*))` is your friend for converting Wikitude matrices to SceneKit matrices.
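To address the "all my nodes would move to the same place" concern: you only assign the modelView to a single parent node, and attach your augmentations as children of it. A hedged sketch of that setup (the conversion helper and callback names are illustrative assumptions, not Wikitude API; the array layout matching GLKMatrix4 is also an assumption):

```swift
import SceneKit
import GLKit

// Convert a 16-element float array (assumed GLKMatrix4 layout) to SCNMatrix4.
func toSCNMatrix(_ wikitudeMatrix: [Float]) -> SCNMatrix4 {
    var values = wikitudeMatrix
    return SCNMatrix4FromGLKMatrix4(GLKMatrix4MakeWithArray(&values))
}

let cameraNode = SCNNode()
cameraNode.camera = SCNCamera()

// The modelView goes onto ONE root node…
let augmentationRoot = SCNNode()
// …while each augmentation keeps its own local transform as a child.
let model = SCNNode()
model.position = SCNVector3(0.2, 0, 0)
augmentationRoot.addChildNode(model)

// In the tracker callback you would then do something like:
// cameraNode.camera?.projectionTransform = toSCNMatrix(target.projection)
// augmentationRoot.transform = toSCNMatrix(target.modelView)
```

Because the children are positioned relative to `augmentationRoot`, they keep their arrangement relative to each other while the whole group follows the tracked pose.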