
Separate tracking from rendering

Hi,

My first question (https://support.wikitude.com/support/discussions/topics/5000081332) was maybe not clear, so here is a better one:


I would like to use Wikitude for tracking, but I would like to keep my own rendering engine.

Based on the SimpleClientTracking example, I saw how I can make my own external renderer for the 3D model, but I would like to render the background picture too.


It seems the RenderExtension does the job by itself in the GLRenderer, but is it possible for me to drive the background render (the camera picture) myself, and if so, how? I don't need to get the picture data; just asking the SDK for the background picture's texture binding would be enough.

I really need to drive the render operations myself so I can order them and make them match my custom rendering engine.


Thanks for any help you can provide.


Regards, 

    Vincent


Hi Vincent,

The Wikitude Native SDK would then be the product of choice. If you have a look at the `custom camera` example, all your questions should be answered, as this example does almost exactly what you need (except that it injects its own camera feed into the Wikitude SDK).


Best regards,

Andreas

Hi Andreas, 


Thanks a lot, it gives me a very good starting point for managing my own rendering, but I still have one more question:

If I make my plugin manage image rendering and 3D model rendering, will I need to call RenderExtension.onDrawFrame() to keep the Wikitude loop working (capture, detection)? Will it perform a render that I'll need to ignore in order to make my own?


Thanks.

Regards,

    Vincent


Hi Vincent,

You've encountered an implementation detail of the Android Native SDK: it doesn't clearly separate updating its internal logic from rendering. If you also use the Wikitude iOS SDK, you will see that there you can update logic and rendering separately.

For now there is nothing you can do about this, but I created an internal bug report and we will fix this in our SDK 2.1, which is likely to be released at the end of March. If you are interested in early pre-release builds, simply write a mail to info@wikitude.com and mention this thread.


Best regards,

Andreas



Hi Andreas,


Thanks for the bug report; I think it will be useful to have a way to prevent Wikitude from rendering the background. One idea would be to allow Wikitude to GL-bind the RGB image using a user-supplied texture index; that way an external renderer can manage the render efficiently without having to copy the whole picture data.

We will use iOS SDK too, but in a later stage.

By the way, I sent an email as suggested to get the latest features you are working on.


While waiting for it, I'll do the Wikitude integration without this feature, using a plugin.

Thanks a lot.


Regards,

    Vincent

Hi Vincent,

You can already pass a user-generated OpenGL ES texture to the Wikitude SDK; we then render the camera image into that texture. The Android API is `CameraManager.setTextureId`, or `[WTCaptureDeviceManager setCameraRenderingTargetTexture]` for iOS. Is that what you were referring to in your last reply?
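A minimal sketch of that setup on Android, for illustration: everything below is standard Android GLES20 except the Wikitude call shown in the comment, whose accessor name is an assumption based on this thread; the use of an external OES texture for the camera feed is also an assumption.

```java
import android.opengl.GLES11Ext;
import android.opengl.GLES20;

public final class CameraTextureHelper {

    /**
     * Creates an external OES texture on the GL thread and returns its id.
     * The id would then be handed to the Wikitude SDK, e.g.:
     *   wikitudeSDK.getCameraManager().setTextureId(textureId); // accessor name assumed
     * after which the SDK renders each camera frame into this texture.
     */
    public static int createCameraTexture() {
        int[] ids = new int[1];
        GLES20.glGenTextures(1, ids, 0);
        int textureId = ids[0];

        // Camera feeds on Android are typically sampled as external OES textures.
        GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, textureId);
        GLES20.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES,
                GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
        GLES20.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES,
                GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
        GLES20.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES,
                GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE);
        GLES20.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES,
                GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE);
        return textureId;
    }
}
```

This has to run on the thread that owns the GL context your engine renders with, otherwise the texture won't be visible to either side.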


Best regards,

Andreas



Hi Andreas, 

Yes, it looks like what I'm looking for.

It will take a little more work to integrate Wikitude into our system, but this could be the way to make it work.

Thanks.

Regards,

  Vincent

Hi, I'll continue asking questions here since it's the same topic:


I'm using the External Renderer, but I still have doubts: what is the minimal code I need to keep for Wikitude to work (camera capture and image tracking)?

From the example, I removed the GLSurfaceView and the Driver and only kept the GLRenderer to manage the RenderExtension and call onDrawFrame from my own 3D rendering engine's loop. Is this a good start, or am I missing something?

The API documentation does not help much since it is minimalist, and the general documentation doesn't give many more answers.


Thanks a lot !


Regards, 

    Vincent.

Hi Vincent,

I'm not quite sure what exactly you're asking for. Our native examples only contain the minimal amount of code needed to do what they do.

I guess in your case that's the 'External Rendering' example. You need to call the Wikitude RenderExtension in some kind of update loop so that we can update image tracking. The camera updates itself in the background, but the render extension has to be called in order to update the camera rendering (either to a texture or to the screen).


Best regards,

Andreas

Hi Andreas,

I'm starting from the External Rendering example, yes.

In this example, the RenderExtension is managed via a GLSurfaceView.Renderer, a Driver, and a GLSurfaceView.

Because I'm using my own rendering engine, the GLSurfaceView and the Driver are already handled by my engine.


So here is the question: to have the Wikitude camera start and loop in the background, do I just need to manage the RenderExtension (i.e. the GLRenderer class from the example without the StrokedRectangle part, plus an Activity that implements ExternalRendering to manage the RenderExtension)?


Here is what I'm trying to do : 

My Activity implements ImageTrackerListener and ExternalRendering.

Like in the example, the WikitudeSDK is started, paused, etc.

In the onRenderExtensionCreated() method, I create a new instance of a GLSurfaceView.Renderer that wraps the RenderExtension, once again like in the example.

It calls onDrawFrame and the other methods of the RenderExtension.


Finally, I use my rendering loop to call myGLRenderer.onDrawFrame().


Is that enough to have the Wikitude camera start and run, or am I missing something important?


Thanks.

Regards,

    Vincent

Hi Vincent,

The only two things you need to take care of are calling our SDK's lifecycle methods correctly and calling the render extension's `onDrawFrame` within your engine environment (every frame). The rest is example-specific and you don't need to care about it (surface view, renderer).


Best regards,

Andreas 

Hi, 

So, here is my main Wikitude class; is it enough?

 

package com.innersense.osmose.ar.wikitudeandroid;

import android.app.Activity;
import android.content.Context;
import android.util.Log;

import com.badlogic.gdx.Gdx;
import com.wikitude.NativeStartupConfiguration;
import com.wikitude.WikitudeSDK;
import com.innersense.osmose.ar.wikitude.ARWikitudeEngine;
import com.wikitude.common.camera.CameraSettings;
import com.wikitude.common.camera.CameraSettings.CameraPosition;
import com.wikitude.common.camera.CameraSettings.CameraResolution;
import com.wikitude.common.rendering.RenderExtension;
import com.wikitude.rendering.ExternalRendering;
import com.wikitude.tracker.ImageTarget;
import com.wikitude.tracker.ImageTracker;
import com.wikitude.tracker.ImageTrackerListener;
import com.wikitude.tracker.TargetCollectionResource;
import com.wikitude.tracker.TargetCollectionResourceLoadingCallback;


public class ARWikitudeEngineAndroid extends ARWikitudeEngine implements ImageTrackerListener, ExternalRendering {

    private static final String TAG = "ARWikitudeEngineAndroid";

    private WikitudeSDK mWikitudeSDK;
    /**
     * Android activity instance.
     */
    private Activity activity;
    /**
     * The Wikitude license key to use.
     */
    private final String wikitudeLicenseKey;
    private TargetCollectionResource mTargetCollectionResource;

    private RenderExtension renderExtension;

    public ARWikitudeEngineAndroid(String datasetFileName, boolean isAbsolute, String licenseKey) {
        super(datasetFileName, isAbsolute);
        this.wikitudeLicenseKey = licenseKey;
    }


    @Override
    public void onARStart() {
        super.onARStart();

        mWikitudeSDK = new WikitudeSDK(this);
        NativeStartupConfiguration startupConfiguration = new NativeStartupConfiguration();
        startupConfiguration.setLicenseKey(wikitudeLicenseKey);
        startupConfiguration.setCameraPosition(CameraPosition.BACK);
        startupConfiguration.setCameraResolution(CameraResolution.AUTO);

        mWikitudeSDK.onCreate(activity.getApplicationContext(), activity, startupConfiguration);

        mTargetCollectionResource = mWikitudeSDK.getTrackerManager().createTargetCollectionResource(datasetFileName, new TargetCollectionResourceLoadingCallback() {
            @Override
            public void onError(int errorCode, String errorMessage) {
                Log.e(TAG, "Error : Failed to load target collection resource. Reason: " + errorMessage);
            }

            @Override
            public void onFinish() {
                mWikitudeSDK.getTrackerManager().createImageTracker(mTargetCollectionResource, ARWikitudeEngineAndroid.this, null);
            }
        });
    }

    @Override
    public void onARResume() {
        super.onARResume();
        mWikitudeSDK.onResume();
    }

    @Override
    public void onARDestroy() {
        super.onARDestroy();
        mWikitudeSDK.clearCache();
        mWikitudeSDK.onDestroy();
    }

    @Override
    public void onARPause() {
        super.onARPause();
        mWikitudeSDK.onPause();
    }

    @Override
    public void onARConfigurationChanged() {
        super.onARConfigurationChanged();
    }

    @Override
    public void onRenderExtensionCreated(final RenderExtension renderExtension) {

        this.renderExtension = renderExtension;
        this.renderExtension.onSurfaceCreated(null, null);
    }

    @Override
    public void onTargetsLoaded(ImageTracker imageTracker) {
    }

    @Override
    public void onErrorLoadingTargets(ImageTracker tracker, int errorCode, final String errorMessage) {
    }

    @Override
    public void onImageRecognized(ImageTracker tracker, final String targetName) {
    }

    @Override
    public void onImageTracked(ImageTracker imageTracker, ImageTarget imageTarget) {
    }

    @Override
    public void onImageLost(ImageTracker imageTracker, String s) {
    }

    @Override
    public void onExtendedTrackingQualityChanged(ImageTracker imageTracker, String s, int i, int i1) {

    }

	/*
	 * Called from the 3D engine's main render loop.
	 */
    @Override
    public void onBeginRenderFrame() {
        super.onBeginRenderFrame();

        // Drive Wikitude's capture, tracking and camera update for this frame.
        renderExtension.onDrawFrame(null);
    }
}

 

Thanks a lot !

Regards,

    Vincent

Hi Vincent,

This looks good so far, but if you compare your code with the GLRenderer, you'll notice that the render extension also has three other methods that you need to call: onSurfaceChanged, onPause, and onResume. Maybe you already call them somewhere else, but I couldn't find them in the snippet you sent.
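For illustration, in the class posted above those three calls might be forwarded like this (the `onAR…` hooks are the hypothetical lifecycle methods of Vincent's engine; the RenderExtension methods are the ones from the GLRenderer example):

```java
    @Override
    public void onARPause() {
        super.onARPause();
        // Let the render extension suspend before pausing the SDK.
        if (renderExtension != null) {
            renderExtension.onPause();
        }
        mWikitudeSDK.onPause();
    }

    @Override
    public void onARResume() {
        super.onARResume();
        mWikitudeSDK.onResume();
        if (renderExtension != null) {
            renderExtension.onResume();
        }
    }

    /** Call from the engine whenever the rendering surface is (re)sized. */
    public void onARSurfaceChanged(int width, int height) {
        if (renderExtension != null) {
            // RenderExtension implements GLSurfaceView.Renderer, whose
            // onSurfaceChanged takes a (legacy, unused) GL10 first argument.
            renderExtension.onSurfaceChanged(null, width, height);
        }
    }
```

The ordering here (extension paused before the SDK, SDK resumed before the extension) mirrors the GLRenderer example; whether a different order also works is not stated in this thread.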


Best regards,

Andreas

Hi, 

Yes, I did add the onPause, onResume and onSurfaceChanged calls.

The camera seems to start and detect the picture well :-)


But, now I'm trying to render the camera picture.

My first idea is to avoid using a plugin that would give me a picture I'd need to copy before binding it.

I would prefer to directly render the picture at the specified GL texture index.


Does renderExtension.onDrawFrame(null) render the camera picture? If this method binds the picture, my renderer could perhaps use the bound content to render it; that would be very useful and fast.

So could I get an idea of what is done in this call ?


Thanks.

Regards, 

   Vincent

Hi Vincent,

After onDrawFrame() has been called, the OpenGL texture that you set in the CameraManager should have its content updated, so you can draw it in your rendering engine. It does call glBindTexture to render the current camera image into the given texture.
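As an illustration, drawing that updated texture in a custom engine could look like the full-screen quad sketch below. Everything here is standard GLES20; only the texture id comes from the Wikitude SDK (set earlier via `CameraManager.setTextureId`). The external OES sampler and the UV orientation are assumptions and may need adjusting for the actual camera feed.

```java
import android.opengl.GLES11Ext;
import android.opengl.GLES20;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

public final class CameraQuad {

    private static final String VERTEX =
            "attribute vec2 aPos;\n" +
            "attribute vec2 aUv;\n" +
            "varying vec2 vUv;\n" +
            "void main() { vUv = aUv; gl_Position = vec4(aPos, 0.0, 1.0); }\n";

    private static final String FRAGMENT =
            "#extension GL_OES_EGL_image_external : require\n" +
            "precision mediump float;\n" +
            "uniform samplerExternalOES uTex;\n" +
            "varying vec2 vUv;\n" +
            "void main() { gl_FragColor = texture2D(uTex, vUv); }\n";

    // x, y, u, v per vertex for a full-screen triangle strip.
    private static final float[] QUAD = {
            -1f, -1f, 0f, 1f,
             1f, -1f, 1f, 1f,
            -1f,  1f, 0f, 0f,
             1f,  1f, 1f, 0f,
    };

    private final int program;
    private final FloatBuffer vertices;

    public CameraQuad() {
        program = link(compile(GLES20.GL_VERTEX_SHADER, VERTEX),
                       compile(GLES20.GL_FRAGMENT_SHADER, FRAGMENT));
        vertices = ByteBuffer.allocateDirect(QUAD.length * 4)
                .order(ByteOrder.nativeOrder()).asFloatBuffer().put(QUAD);
        vertices.position(0);
    }

    /** Draw the camera texture; call after renderExtension.onDrawFrame(null). */
    public void draw(int cameraTextureId) {
        GLES20.glUseProgram(program);
        GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
        GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, cameraTextureId);

        int aPos = GLES20.glGetAttribLocation(program, "aPos");
        int aUv = GLES20.glGetAttribLocation(program, "aUv");
        vertices.position(0);  // positions start at float 0, stride 16 bytes
        GLES20.glVertexAttribPointer(aPos, 2, GLES20.GL_FLOAT, false, 16, vertices);
        vertices.position(2);  // UVs start at float 2 within each vertex
        GLES20.glVertexAttribPointer(aUv, 2, GLES20.GL_FLOAT, false, 16, vertices);
        GLES20.glEnableVertexAttribArray(aPos);
        GLES20.glEnableVertexAttribArray(aUv);
        GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4);
    }

    private static int compile(int type, String src) {
        int shader = GLES20.glCreateShader(type);
        GLES20.glShaderSource(shader, src);
        GLES20.glCompileShader(shader);
        return shader;
    }

    private static int link(int vs, int fs) {
        int p = GLES20.glCreateProgram();
        GLES20.glAttachShader(p, vs);
        GLES20.glAttachShader(p, fs);
        GLES20.glLinkProgram(p);
        return p;
    }
}
```

Drawing this quad first each frame, then the 3D content on top, would give the same ordering control Vincent asked for at the start of the thread.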


Best regards,

Andreas
