CopyPastor

Detecting plagiarism made easy.

Score: 0.8476612099395978

Possible Plagiarism

Plagiarized on 2018-04-10
by Nani

Original Post

Original - Posted on 2017-06-14
by rickster




This seems to be an area of active research in the iOS developer community — I met several teams trying to figure it out at WWDC last week, and nobody had even begun to crack it yet. So I'm not sure there's a "best way" yet, if even a feasible way at all.
Feature points are positioned relative to the session, and aren't individually identified, so I'd imagine correlating them between multiple users would be tricky.
The session alignment mode [gravityAndHeading][2] might prove helpful: that fixes all the directions to a (presumed/estimated to be) absolute reference frame, but positions are still relative to where the device was when the session started. If you could find a way to relate that position to something absolute — a lat/long, or an iBeacon maybe — and do so reliably, with enough precision... Well, then you'd not only have a reference frame that could be shared by multiple users, you'd also have the main ingredients for location based AR. (You know, like a floating virtual arrow that says turn right there to get to Gate A113 at the airport, or whatever.)
Another avenue I've heard discussed is image analysis. If you could place some real markers — easily machine recognizable things like QR codes — in view of multiple users, you could maybe use some form of object recognition or tracking (an ML model, perhaps?) to precisely identify the markers' positions and orientations relative to each user, and work back from there to calculate a shared frame of reference. Dunno how feasible that might be. (But if you go that route, or similar, note that [ARKit exposes a pixel buffer][3] for each captured camera frame.)

  [2]: https://developer.apple.com/documentation/arkit/arconfiguration.worldalignment/2873776-gravityandheading
  [3]: https://developer.apple.com/documentation/arkit/arframe/2867984-capturedimage
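The lat/long idea above amounts to back-of-envelope geodesy: under gravity-and-heading alignment, +x points east and -z points north, so a single GPS fix for the session origin lets you place any other lat/long in session coordinates. A minimal sketch (the helper name and axis mapping are my assumptions, not ARKit API; it uses an equirectangular approximation that only holds over short distances):

```python
import math

# Hypothetical sketch (not ARKit API): assuming +x = east and -z = north,
# as under gravity-and-heading alignment, one GPS fix for the session
# origin lets us place any other lat/long in session coordinates.
# Equirectangular approximation: fine over tens of metres, not kilometres.
EARTH_RADIUS_M = 6_371_000.0

def session_position(target_lat, target_lon, origin_lat, origin_lon):
    """Session-space (x, y, z) of a target lat/long, relative to a session
    whose origin was measured at (origin_lat, origin_lon). y is left at 0."""
    north = math.radians(target_lat - origin_lat) * EARTH_RADIUS_M
    east = (math.radians(target_lon - origin_lon)
            * EARTH_RADIUS_M * math.cos(math.radians(origin_lat)))
    return (east, 0.0, -north)  # -z is north in this convention

# A point ~111 m due north of the origin lands at z ≈ -111:
x, y, z = session_position(37.001, -122.0, 37.0, -122.0)
```

Whether GPS gives you "enough precision" is exactly the open question in the answer: consumer GPS is good to metres at best, which is why this alone doesn't solve shared-session AR.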
This seems to be an area of active research in the iOS developer community — I met several teams trying to figure it out at WWDC last week, and nobody had even begun to crack it yet. So I'm not sure there's a "best way" yet, if even a feasible way at all.

Feature points are positioned relative to the session, and aren't individually identified, so I'd imagine correlating them between multiple users would be tricky.

The session alignment mode [`gravityAndHeading`][1] might prove helpful: that fixes all the *directions* to a (presumed/estimated to be) absolute reference frame, but *positions* are still relative to where the device was when the session started. If you could find a way to relate that position to something absolute — a lat/long, or an iBeacon maybe — and do so reliably, with enough precision... Well, then you'd not only have a reference frame that could be shared by multiple users, you'd also have the main ingredients for location based AR. (You know, like a floating virtual arrow that says turn *right there* to get to Gate A113 at the airport, or whatever.)

Another avenue I've heard discussed is image analysis. If you could place some real markers — easily machine recognizable things like QR codes — in view of multiple users, you could maybe use some form of object recognition or tracking (an ML model, perhaps?) to precisely identify the markers' positions and orientations relative to each user, and work back from there to calculate a shared frame of reference. Dunno how feasible that might be. (But if you go that route, or similar, note that [ARKit exposes a pixel buffer][2] for each captured camera frame.)

Good luck!

  [1]: https://developer.apple.com/documentation/arkit/arsessionconfiguration.worldalignment/2873776-gravityandheading
  [2]: https://developer.apple.com/documentation/arkit/arframe/2867984-capturedimage
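The marker idea boils down to a frame-alignment calculation: if both users can measure the same physical marker's pose in their own session frames, the transform taking B's frame into A's is "A's view of the marker" composed with the inverse of "B's view of the marker". Since gravity already pins pitch and roll, a yaw-plus-translation sketch captures the algebra (all names here are mine, not ARKit API):

```python
import math

# Hypothetical sketch (names are mine, not ARKit API): each user measures the
# marker's pose in their own session frame as a yaw about gravity plus a
# translation -- gravity already pins pitch and roll, so yaw + position suffice.

def rotate_y(p, yaw):
    """Rotate point p = (x, y, z) about the gravity (y) axis by yaw radians."""
    c, s = math.cos(yaw), math.sin(yaw)
    x, y, z = p
    return (c * x + s * z, y, -s * x + c * z)

def frame_b_to_a(marker_in_a, marker_in_b):
    """Transform (yaw, translation) mapping points in user B's session frame
    into user A's, given the same marker's pose (yaw, position) in each."""
    yaw_a, pos_a = marker_in_a
    yaw_b, pos_b = marker_in_b
    yaw = yaw_a - yaw_b
    rx, ry, rz = rotate_y(pos_b, yaw)
    return (yaw, (pos_a[0] - rx, pos_a[1] - ry, pos_a[2] - rz))

def apply_transform(t, p):
    """Map a point from B's frame into A's frame."""
    yaw, (tx, ty, tz) = t
    x, y, z = rotate_y(p, yaw)
    return (x + tx, y + ty, z + tz)
```

In a real app each measurement would be a full 4×4 anchor transform rather than a scalar yaw, and you'd multiply and invert matrices instead — but the algebra is the same: T(B→A) = T_A(marker) · T_B(marker)⁻¹.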

        