Current augmented reality technology cannot retain information about objects it can no longer see. Perceptus aims to change that by giving AR a memory boost.
Augmented reality and virtual reality are not the same. Virtual reality replaces your surroundings with a computer-projected image. This usually requires a headset that shows you new surroundings while covering your peripheral vision to block out the real ones. Augmented reality often requires glasses as well, but rather than replacing your view with something generated by a computer, it overlays digital elements onto your real surroundings. Current AR technology doesn’t continuously remember real-life objects; it has to rescan items whenever they leave the camera’s view. A new platform aims to change that.
As humans, we are very comfortable with the idea that objects exist even if they aren’t in our direct line of sight. We know that our couch is in the living room and there’s toilet paper in the bathroom because we remember that they are there. AR doesn’t have that capability. At least it didn’t until Perceptus.
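The "object permanence" idea described above can be sketched in a few lines. This is purely illustrative, since Perceptus' actual implementation is not public: the hypothetical `ObjectMemory` class, its method names, and the example objects are all assumptions, not Singulos Research's API.

```python
import time


class ObjectMemory:
    """Illustrative sketch of an AR object-permanence registry.

    Detections update a persistent map of objects, and entries
    survive even after an object leaves the camera frame.
    """

    def __init__(self):
        # object_id -> (label, last known position, last-seen timestamp)
        self._objects = {}

    def observe(self, object_id, label, position):
        """Record an object recognized in the current camera frame."""
        self._objects[object_id] = (label, position, time.monotonic())

    def recall(self, label):
        """Return remembered positions for a label, whether or not
        the object is currently in view."""
        return [pos for (lbl, pos, _) in self._objects.values() if lbl == label]


memory = ObjectMemory()
memory.observe("couch-1", "couch", (2.0, 0.0, 3.5))
memory.observe("tp-1", "toilet paper", (5.1, 0.9, 1.2))

# Later, neither object is in the frame, but the app still "knows"
# where they are:
print(memory.recall("couch"))  # → [(2.0, 0.0, 3.5)]
```

The point is only that the registry outlives the camera frame, which is exactly the capability the article says conventional AR lacks.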
“When we are in an AR space, we don’t look at the whole room all at once, we only look at a part of it,” says Brad Quinton, Singulos Research’s CEO. “As humans, we have no trouble with the idea that there are things that exist that we can’t see at the moment because we saw them before and we remember them. Once you have AR that can understand what’s around you, it can go off and proactively do things for you.”
Well, sort of. Right now, Perceptus acts as a layer above existing AR technologies like ARKit or ARCore. Perceptus already works on Apple devices and on mobile devices that use Qualcomm’s Snapdragon chips or Google’s Tensor processor, but a lot more needs to happen before this technology will work on your smartphone or tablet.
If an app developer provides Singulos Research with 3D models of objects from the app, the models are fed into a machine learning process in which an algorithm studies how each object looks in the real world (under different lighting, on different surfaces, against different backgrounds, and so on). After that, Perceptus can be layered over the developer’s app, allowing it to use the new information. Developers must submit their own computer-aided design models for Perceptus to memorize, and they must also give users something to do with the recognized objects. For instance, if you’re using AR glasses while building with Legos, the app should suggest what to build based on the bricks in front of you.
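The Lego example above boils down to an inventory check: which builds are possible given the bricks the platform has recognized? A minimal sketch, assuming a hypothetical catalog of build ideas (the part names and quantities here are invented for illustration, not from Perceptus):

```python
from collections import Counter

# Hypothetical catalog mapping a build idea to the parts it needs.
BUILD_IDEAS = {
    "small house": Counter({"2x4 brick": 8, "roof tile": 4}),
    "car": Counter({"2x4 brick": 4, "wheel": 4}),
}


def suggest_builds(detected_bricks):
    """Return build ideas whose parts list is fully covered by the
    bricks the AR layer has recognized in front of the user."""
    have = Counter(detected_bricks)
    # Counter subtraction keeps only positive shortfalls; an empty
    # result means every required part is available.
    return [name for name, need in BUILD_IDEAS.items() if not need - have]


bricks = ["2x4 brick"] * 6 + ["wheel"] * 4
print(suggest_builds(bricks))  # → ['car']
```

Because recognized objects persist in memory, a suggestion like this could update continuously as bricks enter and leave the camera’s view.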
Perceptus still relies heavily on manual processes for object scanning and identification, which is why developers must submit their own models. But those models are added to Singulos’ library, and in time developers may be able to dive into those digital stacks to quickly locate models of objects they need rather than building them from scratch. Video game makers have already produced many accurate 3D models, which could help that library grow.
Perhaps the most intriguing piece of this platform is that because Perceptus is trained to identify certain objects before a user even launches an AR app, the image data doesn’t have to be sent to a cloud server for analysis. Perceptus runs locally and comfortably on existing mobile processors, so any delay users experience is minimal.
“The thing that I find the neatest about this is the interaction between virtual and physical worlds,” Quinton says. “We kind of have this metaverse-y thing that’s not real—there aren’t any [chess] pieces here, but we’ve created this new reality. It’s not hard to imagine you could have a chessboard on your side and you could have this app. Then we’ve created an overlapped, physical reality that we’re both in but doesn’t actually exist anywhere.”
Perceptus is a good fit for AR apps built around physical objects, but its applications in general-purpose AR may be limited. That is a constraint of this particular platform and its purpose, though, not of the field: other companies are developing technology that can be applied to general-purpose AR, which is likely the next step in AR innovation.