The future of data in the metaverse

AI will be the backbone of the metaverse − a high-speed, high-capacity digital connection that will be the axis of this new reality.
By Kashen Verappen, senior data engineer at PBT Group.
Johannesburg, 04 Oct 2022

The metaverse is a popular topic of late. But what is the metaverse, and why does it even “meta”? 

Apart from lending itself to a series of good puns, the metaverse is a sci-fi movie come to life. It is the internet, in 3D, fully engaging and integrated into virtual worlds through an amalgamation of immersive technologies like the internet of things, virtual reality, augmented reality and artificial intelligence (AI).

It is the integration of mind and machine: an environment where users, represented as avatars, can own digital assets and, most importantly, consume.

While the metaverse market is still in its infancy, it is set to grow to $800 billion by 2024. And this is before mainstream adoption. I will let that sink in.

Considering this, as a data professional I find myself asking what it will take to fulfil this vision of the future.

The environment that creates the metaverse will consist of layers upon layers of data, both structured and unstructured, exchanged at every level in real-time, across platforms, using solutions that are yet to be created. This is big data, flowing through both virtual and augmented channels − the type of big data that provides rich, actionable insights into users in a far more spontaneous manner than anything we have today.

This kind of knowledge about the user must translate into understanding and direction − insights that will literally shape the metaverse and the way we exist and move within it.

This level of integration brings a myriad of obstacles, and machine learning algorithms will be crucial to overcoming them. Data engineers and data scientists will be met with an immense task as they drive better shopping experiences, create a flow of movement, manage personal information and market products to the consumer.

The data used to market products will range from a record of an elevated heart rate when the user encounters a product placement − telling the AI this is something that excites and interests the consumer − to a simple record of a previous purchase of the product.
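As a rough illustration of how such signals might be blended into a targeting decision, consider the sketch below. The field names, weights, normalisation and threshold are entirely invented for illustration − no real platform's model is implied:

```python
from dataclasses import dataclass

@dataclass
class UserSignals:
    resting_heart_rate: float   # baseline beats per minute
    observed_heart_rate: float  # bpm while a product placement is in view
    purchased_before: bool      # has the user bought this product previously?

def interest_score(signals: UserSignals) -> float:
    """Blend a biometric spike with purchase history into a 0..1 score."""
    # Relative heart-rate elevation, capped at 50% above baseline,
    # then normalised to the 0..1 range.
    spike = (signals.observed_heart_rate - signals.resting_heart_rate) \
        / signals.resting_heart_rate
    spike = max(0.0, min(spike, 0.5)) / 0.5
    history = 1.0 if signals.purchased_before else 0.0
    # Weighted blend: in this sketch, in-the-moment excitement counts
    # more than a past purchase.
    return 0.7 * spike + 0.3 * history

user = UserSignals(resting_heart_rate=70, observed_heart_rate=91,
                   purchased_before=True)
print(round(interest_score(user), 2))  # 0.72
```

A real system would of course learn such weights from data rather than hard-code them, but the sketch shows the shape of the problem: heterogeneous, intimate signals reduced to a single targeting score.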

It will be tailor-made advertising: campaigns personally targeted by an AI algorithm, in a manner and at a depth we have never experienced before. This means AI will be the backbone of the metaverse − a high-speed, high-capacity digital connection that will be the axis of this new reality.

To cater to this need for information, high-quality training data will be required to build platforms that are built for and driven by AI. These programs will need the ability to identify, store and integrate large volumes of data instantaneously.
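To make "high-quality training data" concrete, a first line of defence is often as simple as a schema and range check on incoming event records before they reach a model. The field names and validity rules below are invented for illustration:

```python
VALID_EVENT_TYPES = {"gaze", "purchase", "movement"}

def is_valid(record: dict) -> bool:
    """Reject records that are incomplete or carry an unknown event type."""
    required = {"user_id", "timestamp", "event_type"}
    if not required <= record.keys():
        return False
    if record["event_type"] not in VALID_EVENT_TYPES:
        return False
    return True

raw = [
    {"user_id": 1, "timestamp": 1664870400, "event_type": "gaze"},
    {"user_id": 2, "timestamp": 1664870401},                            # missing event_type
    {"user_id": 3, "timestamp": 1664870402, "event_type": "teleport"},  # unknown type
]
clean = [r for r in raw if is_valid(r)]
print(len(clean))  # 1
```

At metaverse scale this kind of check would run inside a streaming pipeline rather than over an in-memory list, but the principle − filter before you train − stays the same.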

Needless to say, the tools we have on hand today are not enough − our current AI is too simplistic and one-dimensional. We not only need to overcome the challenge of an unmanageable number of data sources, but also the challenge of siloed data.

How do we create synergies between various platforms to provide the user with a seamless experience as they transition from one platform to another?

To do this, networks will have to be decentralised to allow data to flow freely. Data captured on one platform should be reflected on another, giving the user a seamless experience and blurring the lines between reality and augmented reality, while in the background businesses collect and process both internal and third-party data.
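The idea of data captured on one platform being reflected on another can be sketched as a tiny shared event bus. This is purely illustrative − the platform names, event schema and delivery mechanism are all invented:

```python
class SharedDataBus:
    """Reflect an event published by one platform onto all the others."""

    def __init__(self):
        self._handlers = {}  # platform name -> callback applying the event

    def register(self, platform, handler):
        self._handlers[platform] = handler

    def publish(self, source, event):
        # The source platform has already applied the change locally,
        # so deliver the event to every *other* platform.
        for platform, handler in self._handlers.items():
            if platform != source:
                handler(event)

# Each "platform" keeps its own view of the user's avatar.
views = {"platform_a": {}, "platform_b": {}}
bus = SharedDataBus()
bus.register("platform_a", views["platform_a"].update)
bus.register("platform_b", views["platform_b"].update)

# The user changes their avatar on platform A...
bus.publish("platform_a", {"avatar_skin": "astronaut"})
# ...and platform B sees the change without the user re-entering anything.
print(views["platform_b"])  # {'avatar_skin': 'astronaut'}
```

A production version would need durable delivery, conflict resolution and, crucially, consent and access controls on what each platform is allowed to see − which is exactly where the privacy questions below come in.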

Digital assets need to move from one platform to another, avatars should remain consistent, and users should not be met with the same challenges we face in the real world: queues at the grocery store, missing information from a service provider, traffic slowing us down on the way to our destination.

No, the expectation from the user is that the metaverse should operate perfectly and know us perfectly. All while maintaining our privacy.

Now that we have an idea of the level of data it takes to fuel the metaverse, and how intimately it will know each user, privacy concerns certainly raise an eyebrow. The sheer volume of very personal information that can be collected, the manner in which it is stored and how it is used is a regulator's nightmare.

It is imperative that the data collected is used ethically, so automated systems need to be introduced to protect data integrity in these virtual worlds. But can the current architecture of data privacy laws protect consumers? And furthermore, can it permit the innovation required for the projected growth of the metaverse?

The multilateral data sharing agreements and the housing of personal information needed to create the metaverse and provide a seamless user experience are no-go areas for data protection regulators.

Regulators frown upon such large volumes of data being centrally managed and stored for extended periods of time. In addition, legislation cannot keep pace with innovation while still maintaining electronic privacy.

Whilst these territories still need to be navigated, what we do know is that these platforms need to be developed to handle the large data flows. The data touch points per user will be significant and so will the challenges. Certainly, an interesting and exciting time for all data professionals as this space unfolds.