a. If I track one marker, some ARObjects are loaded.
b. When a new marker is tracked, the previous ARObjects are destroyed and new ARObjects are loaded.
createOverlays: function createOverlaysFn() {
    this.tracker = new AR.ClientTracker("assets/tracker.wtc", {
        onLoaded: function() {
            AR.logger.info("marker loaded");
        }
    });

    var marker0 = new AR.Trackable2DObject(this.tracker, "01", {
        onEnterFieldOfVision: function() {
            AR.logger.info("Marker track: 0");
            load(0);
        }
    });

    var marker1 = new AR.Trackable2DObject(this.tracker, "02", {
        onEnterFieldOfVision: function() {
            AR.logger.info("Marker track: 1");
            load(1);
        }
    });

    var marker2 = new AR.Trackable2DObject(this.tracker, "03", {
        onEnterFieldOfVision: function() {
            AR.logger.info("Marker track: 2");
            load(2);
        }
    });

    // ...
},

// destroy all currently created ARchitect objects, then load the set for the newly tracked marker
load: function loadFn(arg) {
    AR.context.destroyAll();
    loadEachObject(arg);
}
}
It seems to work well for the first one or two tracked markers. However, when I try to track the third marker, the following error occurs and no further tracking works:
CoreAnimation: failed to allocate IOSurface
Andreas Schacherbauer said about 7 years ago
Hi daichi, which type of objects do you create (i.e. the content of the loadEachObject function)? Does it work if you create the third group of objects right at the beginning?
Best regards
Andreas
daichi hayakawa said about 7 years ago
Hi Andreas, thank you for your reply.
>Which type of objects do you try to create
I create drawables from mp4 and png files and place them with AR.GeoLocation.
My object attribute data is stored in an 'objectModel' JSON file.
I read each object's attributes in a for loop and decide whether it is a video or a PNG by its filename (a sketch of that branching is shown after the code below).
loadEachObj(arg) {
    this.model = objectModel;
    objs = new Array(this.model.objPoints.length);
    var overlay = new Array(this.model.objPoints.length);
    for (var i = 0; i < objs.length; i++) {
        // obtain the attributes of each object
        var filename = this.model.objPoints[i].name;
        var pos1 = parseFloat(this.model.objPoints[i].relNorth);
        var pos2 = parseFloat(this.model.objPoints[i].relEast);
        var al = parseFloat(this.model.objPoints[i].relAlt);
        var scale = this.model.objPoints[i].scale;
        var rotation = this.model.objPoints[i].rotation;
        var tilt = this.model.objPoints[i].tilt;
        var roll = this.model.objPoints[i].roll;
        var location = new AR.RelativeLocation(null, pos1, pos2, al);
        // build the drawable for this object
        overlay[i] = new AR.ImageDrawable(new AR.ImageResource(filename), 1, {
            offsetX: 0, offsetY: 0, scale: scale, rotation: rotation, roll: roll, tilt: tilt
        });
        // show the object at its relative location
        objs[i] = new AR.GeoObject(location, {
            drawables: {
                cam: overlay[i]
            }
        });
    }
}
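For illustration, a minimal sketch of the video/PNG branching described above (the helper name makeDrawable is hypothetical, and the use of AR.VideoDrawable for mp4 files is an assumption, not code from the original post):

// hypothetical helper: choose the drawable type by file extension
function makeDrawable(filename, options) {
    if (filename.toLowerCase().indexOf(".mp4") !== -1) {
        // videos are rendered with AR.VideoDrawable
        return new AR.VideoDrawable(filename, 1, options);
    }
    // everything else (e.g. png) is rendered as an image
    return new AR.ImageDrawable(new AR.ImageResource(filename), 1, options);
}

In the loop above, the AR.ImageDrawable call could then be replaced by overlay[i] = makeDrawable(filename, { scale: scale, rotation: rotation });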
>Does it work if you create the third group of objects right at the beginning?
The mysterious point is that it works fine when I show the objects in a way other than tracking a marker, for example when a push button triggers loadEachObject.
If I use this code
function buttonBush(ButtonNum) {
    loadEachObject(ButtonNum);
}
instead of this code
var marker1 = new AR.Trackable2DObject(this.tracker, "02", {
    onEnterFieldOfVision: function() {
        AR.logger.info("Marker track: 1");
        loadEachObject(1);
    }
});
The problem happens only when I use a Trackable2DObject to trigger loadEachObject().
Andreas Schacherbauer said about 7 years ago
Is it your intention to combine Tracker/Trackable2DObjects with GeoObjects/RelativeLocations? In case you just want to place videos/images on target images, geo objects/locations are not needed.
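A minimal sketch of that approach (reusing this.tracker from the createOverlays snippet above; the file name "assets/overlay.png" is only an example):

var overlayImage = new AR.ImageResource("assets/overlay.png");
var overlayDrawable = new AR.ImageDrawable(overlayImage, 1);

// the drawable is attached directly to the image target, no GeoObject/RelativeLocation needed
var marker0 = new AR.Trackable2DObject(this.tracker, "01", {
    drawables: {
        cam: overlayDrawable
    }
});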
I don't see a reason why it shouldn't work when you use onEnterFieldOfVision instead of the button. The error message might point to video/image drawables that are not released properly. You need to call .destroy() on each ARchitect object when it is no longer needed, or use AR.context.destroyAll() to delete all objects currently created.
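A minimal sketch of that cleanup (videoDrawable, imageDrawable and geoObject are placeholders for objects created in loadEachObject):

// release individual objects explicitly once they are no longer needed ...
videoDrawable.destroy();
imageDrawable.destroy();
geoObject.destroy();

// ... or drop everything created so far before loading the next set
AR.context.destroyAll();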
daichi hayakawa