
How to get the camera position?

Hey Wikitude support,

I am making an app that uses instant tracking and needs the camera's position. As I understand it, you first mark your starting point, and from there the camera moves with respect to that starting point.


So the camera does have a position (x, y, z) and a rotation (x, y, z).


How do I access them?


I am developing in Cordova.


Thanks in advance,

Pascal


Hi Yannick,



It seems I was indeed wrong. Thank you for the correction.



- Daniel

Apparently, to get the actual eye coordinates, I had to invert the view matrix.

source: https://gamedev.stackexchange.com/questions/22283/how-to-get-translation-from-view-matrix


Android code snippet:

// Matrix.invertM comes from android.opengl.Matrix.
float[] invertedViewMatrix = new float[16];
Matrix.invertM(invertedViewMatrix, 0, viewMatrix, 0);
// The translation column of the inverted view matrix is the eye (camera) position.
final Vector3<Float> eyeCoords = new Vector3<>(invertedViewMatrix[12], invertedViewMatrix[13], invertedViewMatrix[14]);
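
(Side note: when the view matrix is a pure rigid transform [R | t], i.e. rotation plus translation with no scale, its inverse is [R^T | -R^T t], so the same eye position can also be computed without a full matrix inversion. A minimal sketch, reusing the column-major viewMatrix array from the snippet above:)

// eye = -R^T * t, assuming viewMatrix is rigid (no scale) and stored column-major.
float eyeX = -(viewMatrix[0] * viewMatrix[12] + viewMatrix[1] * viewMatrix[13] + viewMatrix[2]  * viewMatrix[14]);
float eyeY = -(viewMatrix[4] * viewMatrix[12] + viewMatrix[5] * viewMatrix[13] + viewMatrix[6]  * viewMatrix[14]);
float eyeZ = -(viewMatrix[8] * viewMatrix[12] + viewMatrix[9] * viewMatrix[13] + viewMatrix[10] * viewMatrix[14]);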

Hi Yannick,



The translation vector extracted in the code snippet I posted previously is already the camera position in the coordinate system of the target. No need for any further calculations.



- Daniel


Hi,


Could you clarify how one would calculate the camera (or eye) coordinates (x,y,z) using the 3D translation vector and 3x3 rotation matrix from the modelView?


Thanks and regards!

Thanks for this detailed answer. As a developer, I totally understand your point.


If it can help promote this feature on your roadmap: the camera pose is exposed as a standard API entry point by upcoming platform-specific AR SDKs (here and here). Having a technology that provides the same basic features as these SDKs (among other exclusive things!), while allowing the same AR experience to run cross-platform, would be great in the future.

Hi Amaury,



I don't believe there is anything inherently difficult about exposing the pose in the JavaScript API. You have to consider, however, that there are a lot of tasks that follow the implementation of a new feature (testing, documentation, samples, maintenance). Summed up, that amounts to quite a bit of work even for a small feature.


Another aspect is that the JavaScript SDK is intended to allow for quick and convenient development of AR experiences. By design, it will never be able to cover every conceivable use case. For intricate control, we offer the native SDK, which gives you all of the flexibility, but none of the convenience.


From the use cases you presented, I can see that the camera position might be useful in the JavaScript SDK. But I don't see this feature happening anytime soon.



- Daniel

Oops, I can't find a way to edit my message (?).


The URL for the video demonstrating our social application was wrong; this is the correct one:

https://youtu.be/cUm2z3iQmtk

Hi Daniel,


We are working on an app for museum visitors. It relies heavily on an indoor-location system on one side, and on image tracking and augmented reality on the other (thanks to the Wikitude SDK). These two innovative technologies help us build engaging experiences in the museum. For instance, one of our features leads visitors to discover details of the collection and unlock augmented reality content (see examples of such content here and here).


Our indoor-location system relies on a compass and beacons, which are not very accurate. It could be far more precise if we could calculate the user's position from tracked objects. For instance, knowing how the user is oriented relative to an object would help us correct the values from the uncalibrated compass.


With more precise indoor / real-time location of our users, we could also unlock a lot of new scenarios, including multiplayer games or precise guidance (from one detail of an artwork to details of another surrounding artwork).


We also have a social application in mind. We want to show photos taken by other visitors (with associated comments) from where they have been taken in the exhibition space. We already started to experiment with another AR SDK and it was promising. See a demo here: https://www.youtube.com/edit?o=U&video_id=cUm2z3iQmtk
(this was made with After Effects, but it also worked in real time with Metaio).


Finally, AR drawing apps could also be made possible with this feature: https://www.youtube.com/watch?v=fm7ZI0e2yvY


You already expose the distance from the tracked object in the JS API (with distanceToTarget.onDistanceChanged). The position of the camera seems to correspond to an object that is processed under the hood anyway. Would it be complicated to expose this object in the JS API?


Hi Amaury,



I'm afraid we currently do not have plans to extend the Cordova plugin with that functionality.


May I ask what you'd intend to do with the camera position, if it were available?



- Daniel


Any news about exposing the camera position in the Cordova Plugin API? That would be so helpful in our case.

Hi Amaury,



For an ImageTracker it's virtually the same code: it's just the WTImageTrackerDelegate instead of the WTInstantTrackerDelegate, and a WTImageTarget instead of a WTInstantTarget. It will still contain an identically organized matrix, which you can process identically as well.

 

- (void)imageTracker:(nonnull WTImageTracker *)imageTracker didTrackImage:(nonnull WTImageTarget *)trackedTarget

- (void)imageTracker:(nonnull WTImageTracker *)imageTracker didLoseImage:(nonnull WTImageTarget *)lostTarget

 

I'm omitting the Java equivalent.



- Daniel

Hi Pascal & Daniel,


I have the same question as Pascal, but with an ImageTrackable. Is there a way to get the device camera's position and rotation in 3D space relative to a trackable?


Pascal: if you manage to extend the Cordova plugin to add the ability to get the camera position (for instant tracking), I would be very interested in your work!


Have a good Sunday,


Amaury.

Small amendment:


The extracted rotation contains a uniform scale value, which might be undesirable for your use case. You should be able to remove the scale easily by normalising the column vectors and constructing a new rotation matrix from the resulting vectors.


If you are interested in the scale, simply take the length of the column vectors. The column vectors all have the same length, and that length is the scale value.
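
A minimal sketch of that in plain Java (no SDK calls), operating on a 9-element column-major rotation array like the one extracted further down in this thread:

// All three columns carry the same uniform scale, so the length of any column is the scale.
float scale = (float) Math.sqrt(rotation[0] * rotation[0]
                              + rotation[1] * rotation[1]
                              + rotation[2] * rotation[2]);

// Dividing every element by that length normalises the columns and leaves a pure rotation.
float[] pureRotation = new float[9];
for (int i = 0; i < 9; i++) {
    pureRotation[i] = rotation[i] / scale;
}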


- Daniel

Good morning Pascal,



You can get the 3D translation vector and the 3x3 rotation matrix from the column-major model-view matrix we supply.


On iOS the WTInstantTrackerDelegate is what you are looking for:

  

- (void)instantTracker:(nonnull WTInstantTracker *)instantTracker didChangeInitializationPose:(nonnull WTInitializationPose *)pose
{
	float* mv = pose.modelView;

	float rotation[9];

	// x-column
	rotation[0] = mv[0];
	rotation[1] = mv[1];
	rotation[2] = mv[2];

	// y-column
	rotation[3] = mv[4];
	rotation[4] = mv[5];
	rotation[5] = mv[6];

	// z-column
	rotation[6] = mv[8];
	rotation[7] = mv[9];
	rotation[8] = mv[10];

	float translation[3];

	translation[0] = mv[12];
	translation[1] = mv[13];
	translation[2] = mv[14];
}

   

- (void)instantTracker:(nonnull WTInstantTracker *)instantTracker didTrack:(nonnull WTInstantTarget *)target
{
	// identical function body using target.modelView instead of pose.modelView
}

  

On Android, you need to implement the InstantTrackerListener interface and do the exact same thing translated to Java:

  

@Override
public void onInitializationPoseChanged(InstantTracker tracker, InitializationPose pose) {
}

   

@Override
public void onTracked(InstantTracker tracker, InstantTarget target) {
}
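
For illustration, a minimal sketch of the Java side of that extraction. It assumes you already have the 16-element column-major model-view matrix as a float[] called mv (how the pose or target exposes that array is not shown here); the indexing mirrors the iOS snippet above:

float[] rotation = new float[9];
float[] translation = new float[3];

// Copy the x-, y- and z-columns of the upper-left 3x3 rotation.
for (int col = 0; col < 3; col++) {
    rotation[col * 3]     = mv[col * 4];
    rotation[col * 3 + 1] = mv[col * 4 + 1];
    rotation[col * 3 + 2] = mv[col * 4 + 2];
}

// The fourth column holds the translation.
translation[0] = mv[12];
translation[1] = mv[13];
translation[2] = mv[14];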

 

The instant tracking samples for both iOS and Android and the corresponding documentation pages should be your resources of choice if you are having trouble setting this up.



Kind regards

Daniel

