Edge Computing and AR: A Match Made in the Data Center

VR and AR experiences are undoubtedly more immersive when the computation needed to render content runs close to the device. AR has yet to reach its full potential, but it looks like edge computing could change that. Many devices have shifted to "cloud" processing, meaning the work is carried out at huge, often distant data centers rather than on the device itself.

However, data takes time to travel to those data centers and back, and we experience that delay as latency. For augmented reality devices this is a critical problem, because AR needs near-real-time processing to deliver a convincing experience. Though the term is used freely, true "real time" remains out of reach: the network round trip can never be eliminated entirely. What edge computing does is take us as close to real time as is practically possible.
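To see why distance alone matters, here is a back-of-the-envelope sketch in Python. It estimates only propagation delay (light travels roughly 200 km per millisecond in optical fiber); the distances and the constant are illustrative, and real round-trip times are higher once routing, queuing, and processing are added.

```python
# Rough, illustrative estimate of round-trip time (RTT) from propagation
# delay alone. Real RTTs are larger (routing, queuing, server processing).

SPEED_IN_FIBER_KM_PER_MS = 200.0  # ~200 km per millisecond in optical fiber

def propagation_rtt_ms(distance_km: float) -> float:
    """Round-trip propagation delay in milliseconds for a one-way distance."""
    return 2 * distance_km / SPEED_IN_FIBER_KM_PER_MS

# A distant cloud region vs. a nearby edge "cloudlet" (example distances):
print(propagation_rtt_ms(2000))  # distant data center: 20.0 ms before any work is done
print(propagation_rtt_ms(50))    # nearby cloudlet: 0.5 ms
```

Even this best-case math shows a distant data center eating a large share of the ~20 ms per-frame budget an AR headset has to work with, before any rendering happens.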

“Now you can think about placing low-latency, complex application and computation power closer to the users. By improving the functionality and the user experience, we can actually unleash some of the new business models that we all talk and hear about like AR, VR, self-driving cars, drones, mobile gaming, and others.”

~ Igal Elbaz, AT&T’s Senior VP of Wireless Technology

Edge computing has become a new area of interest for AR developers. Essentially, some information is processed by cloudlets (networks of smaller data centers) in close physical proximity to the source, while the rest is sent on to larger data centers when necessary. This reduces latency, leading to a more immersive augmented reality experience.
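The offload pattern described above can be sketched in a few lines of Python: latency-sensitive work goes to the nearest cloudlet that fits the latency budget, with the central cloud as a fallback. All names, RTT values, and thresholds here are hypothetical assumptions for illustration, not a real edge API.

```python
# Minimal sketch of cloudlet selection for an AR workload.
# Endpoints and numbers are illustrative, not a real service.

from dataclasses import dataclass

@dataclass
class Endpoint:
    name: str
    rtt_ms: float  # measured round-trip time from the AR device

def choose_endpoint(cloudlets: list[Endpoint], cloud: Endpoint,
                    latency_budget_ms: float) -> Endpoint:
    """Pick the lowest-RTT cloudlet within the latency budget;
    otherwise fall back to the central cloud."""
    candidates = [c for c in cloudlets if c.rtt_ms <= latency_budget_ms]
    if candidates:
        return min(candidates, key=lambda c: c.rtt_ms)
    return cloud

cloudlets = [Endpoint("cloudlet-a", 4.0), Endpoint("cloudlet-b", 9.0)]
cloud = Endpoint("central-cloud", 45.0)

# Per-frame rendering with a ~20 ms budget lands on the nearest cloudlet;
# a batch job with a relaxed 60 ms budget could go anywhere.
print(choose_endpoint(cloudlets, cloud, latency_budget_ms=20.0).name)  # cloudlet-a
```

In practice the device would refresh the RTT measurements continuously and re-select as the user moves, but the core decision is this simple comparison.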

So far, a few companies such as AT&T have launched edge computing test zones and plan to keep investing in them to create a better AR experience. In more advanced stages, cloudlets could blanket an entire nation, enabling AR experiences that surpass anything possible today.