
Some question



Hi there,

I'm starting a big new project (AR-based) and I'm looking for answers to what I call my pain points.

1) Will the Wikitude PhoneGap Plugin work offline? Starting from the examples, will it always be possible to run them offline?

2) Is there any way to manage 3D model occlusion? I mean, suppose you have two 3D models rendered and one is in front of the other. Will the models overlap? Can I define a rule to get a correctly rendered view?

3) Will it be possible to eliminate the watermark (the trial one)? If so, which product do I need to buy to integrate it into my PhoneGap app?

4) Is there any possibility to directly recognize real 3D objects?


Thanks and please let me know soon :)

1) Yes, you can package the AR content with the PhoneGap app. This way it will be loaded locally and won't require an online connection.
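
For illustration, loading a locally packaged world could look roughly like the sketch below; the path is a placeholder, and the exact plugin call signatures differ between plugin versions:

    // Minimal sketch, assuming the world's HTML/JS/assets are packaged
    // inside the app bundle, so no network connection is needed.
    // The path below is illustrative; adjust it to your project layout.
    var worldPath = "assets/world/index.html";

    WikitudePlugin.isDeviceSupported(
        function () {
            // Start the AR view with the locally packaged world.
            WikitudePlugin.loadARchitectWorld(worldPath);
        },
        function () {
            alert("This device does not support AR.");
        }
    );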

2) 3D models are rendered based on their distance from the viewer, e.g. parts of a 3D model that are further away will be occluded by parts of a 3D model that are nearer (rendering uses a Z-buffer).
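
Purely for illustration (this happens inside the renderer automatically, you don't write this code), the per-pixel Z-buffer rule looks roughly like this:

    // Toy sketch of the Z-buffer rule: at each pixel, the fragment
    // nearest to the viewer wins; everything behind it is occluded.
    function depthTest(zBuffer, x, y, fragmentDepth) {
        if (fragmentDepth < zBuffer[y][x]) {
            zBuffer[y][x] = fragmentDepth; // remember the nearer fragment
            return true;                   // draw this fragment
        }
        return false;                      // occluded by a nearer fragment
    }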

3) Once you have bought a license for the SDK and entered the key, the watermark and logo will be gone. For PhoneGap you will need a license for every platform you want to support (Android and/or iOS). Please see the pricing at http://www.wikitude.com/products/wikitude-sdk/pricing/ for further details on licensing options and their differences.

4) Our SDK can only recognize planar targets. If the 3D object you want to augment has any larger planar parts, you can use one of those as the target (e.g. the front side of a packaging box).

Thank you so much, Wolfgang.

Another couple of questions... sorry, but my boss is afraid of buying the license without full coverage.

5) As far as you know, will every example Wikitude provides (for the PhoneGap plugin) work out of the box in an offline scenario?

6) Is there any quick API to achieve the same result as Wikitude Drive: that is, starting from a map of an area, a "virtual" indicator of the directions one must take to reach the target?

7) As far as you know, will 3D object recognition be supported in the future?


Many thanks!!

No worries, we are here to help.

5) Most of the samples already work offline, as they package the AR content and the needed resources with the app.

6) There is no API to mimic Wikitude Drive's features. However, if you do not depend on lines but use POIs as "breadcrumbs" to visualize a route, this can easily be done if you have the latitude/longitude values of the route points.
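
A minimal sketch of that idea in ARchitect JavaScript; the coordinates and the marker image path are just placeholders:

    // Route points (latitude/longitude) -- replace with your own data.
    var routePoints = [
        { lat: 47.0745, lon: 15.4340 },
        { lat: 47.0752, lon: 15.4361 },
        { lat: 47.0760, lon: 15.4383 }
    ];

    // One shared image resource for all breadcrumb markers.
    var crumbImage = new AR.ImageResource("assets/crumb.png");

    // Create one GeoObject per route point.
    var crumbs = [];
    for (var i = 0; i < routePoints.length; i++) {
        var location = new AR.GeoLocation(routePoints[i].lat, routePoints[i].lon);
        var marker = new AR.ImageDrawable(crumbImage, 2.5); // height in SDUs
        crumbs.push(new AR.GeoObject(location, {
            drawables: { cam: marker }
        }));
    }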

7) Unfortunately, I cannot discuss our plans regarding 3D object recognition in this forum. We are, however, very interested in the possible use case you want to implement and in the features you would like to see in our SDK.

OK, I see your points. Thanks!

6) Nice, I have some ideas for building this functionality. Is there any event associated with reaching the POI?

8) And what about aliasing? Are there any APIs/functions to help me reduce the aliasing of my model?

Thanks again and let me say this is a great product and you are a great supporter!

6) There is AR.ActionRange, which has a trigger that fires when the user enters it.
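
For example (the coordinates, radius and callback body are just placeholders):

    // Fires onEnter once the user comes within 20 meters of the location.
    var poiLocation = new AR.GeoLocation(47.0745, 15.4340);
    var range = new AR.ActionRange(poiLocation, 20, {
        onEnter: function () {
            // The user has reached the POI, e.g. reveal the next breadcrumb.
            AR.logger.debug("POI reached");
        }
    });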

8) Can you give an example of what aliasing you want to reduce? If you are referring to mipmapping, it is currently supported but not correctly set by the encoder. I can share the manual steps needed to get it working if this is what you are referring to.

Perfect!!

Yes, I'm referring to mipmapping! Can you share the manual steps with me, please?


Thanks!!

Hey, I'm ready to hear from you :D

Prerequisites:

- Textures have to be a power of two in order to work with mipmapping, e.g. 16x16, 32x32, 64x64, 128x128, 256x256, 512x512

- Caution: you are modifying internal files that are officially not meant to be modified, so expect very limited support if you run into an error.


1. Rename xyz.wt3 to xyz.zip

2. Unzip the file; this will give you a "model/" folder

3. Open the model/model.material file in a text editor

4. Set the following properties within every "sampler u_diffuseTexture" block that should use mipmapping (an example block follows these steps):

    wrapS = REPEAT

    wrapT = REPEAT

    minFilter = LINEAR_MIPMAP_LINEAR

    magFilter = LINEAR_MIPMAP_LINEAR

    mipmap = true

5. Zip the model folder again and rename it back to .wt3

6. Open it in Wikitude3dEncoder and verify the result
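
For reference, an edited sampler block would then look roughly like this; the path line and the surrounding structure stand for whatever is already in your model.material file:

    sampler u_diffuseTexture
    {
        path = textures/diffuse.png
        wrapS = REPEAT
        wrapT = REPEAT
        minFilter = LINEAR_MIPMAP_LINEAR
        magFilter = LINEAR_MIPMAP_LINEAR
        mipmap = true
    }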

Hey Wolfgang,

I tried it with success (no errors), but the result is not what I had in mind (at least it is not the final result I expected). Do you know the kind of aliasing you see in video games? Are there any additional methods to "eliminate" that bad aliasing effect?

And, sorry, another question: is there an approach to load, or start loading, a model not at startup but based on the location/direction of the user? I mean, from a performance point of view, supposing I need to load 20 models, can you suggest a good approach that maximizes performance by limiting the loading time?

Thanks again!!!

P.S.: Sorry for this avalanche of questions, but it is really necessary to think about all these things... From a licensing point of view, do you expect a license per app, or can I buy a "full developer key" that will allow me to create and publish tons of apps?

Can you post a screenshot of the effect you want to eliminate?

Loading of a 3D model starts when you create the object (new AR.Model...). You have the possibility to delay that until a particular object is inside the field of view. Here is a little pseudo code on how to accomplish this, followed by a JavaScript sketch:

1. Create an AR.GeoObject with no drawables attached, or with just an image that visualizes a loading indicator

2. Add a function as the onEnterFieldOfVision trigger of the GeoObject. It will be called every time the object enters the user's field of view

3. Create the model on the first call of this trigger and include it in the drawables.cam property of the AR.GeoObject.
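
Roughly, in ARchitect JavaScript (the paths, coordinates and the model file are placeholders):

    // 1. Start with only a lightweight loading indicator attached.
    var indicator = new AR.ImageDrawable(
        new AR.ImageResource("assets/loading.png"), 1.0);
    var location = new AR.GeoLocation(47.0745, 15.4340);
    var modelLoaded = false;

    var geoObject = new AR.GeoObject(location, {
        drawables: { cam: indicator },
        // 2. Called every time the object enters the field of view.
        onEnterFieldOfVision: function () {
            if (modelLoaded) { return; }
            modelLoaded = true;
            // 3. Loading starts on first sight instead of at startup.
            var model = new AR.Model("assets/myModel.wt3", {
                onLoaded: function () {
                    // Swap the indicator for the loaded model.
                    geoObject.drawables.cam = [model];
                }
            });
        }
    });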


Regarding licensing: there is an agency package available, which might be what you are looking for. Please contact sales@wikitude.com with your licensing needs.