Spatial & Robotics: Combining for the Future - MetaVisions #30

Spatial Computing and Robotics are both fields of huge potential, not only in terms of the markets they will create over the next decade, but also in the wider impact these technologies will have on both consumer and enterprise markets.

‘Global Robotics Market size was valued at USD 34.06 billion in 2022, poised to grow from USD 39.54 billion in 2023 to USD 134.64 billion by 2031’ - SkyQuest 2024

‘The global spatial computing market was valued at USD 108.9 billion in 2023, growing at a CAGR of 22.3% from 2024 to 2033. The market is expected to reach USD 815.2 billion by 2033.’ - Brainy Insights 2024

We tend to analyse new emerging fields of technology on an individual basis, but when we explore the synergies between them, we open up new markets and a whole array of use cases. For example, I have previously written about Spatial Computing and Generative AI, where we can give users a different way to interact with LLMs, moving from 2D interfaces to avatars and virtual humans that blend seamlessly into users' environments. I still believe those fields will keep colliding to produce really exciting and impactful innovation.

Now, let’s delve into how spatial computing intersects with robotics.

MetaVisions views Spatial Computing as XR (VR, AR, MR), Metaverse experiences, Digital Twins, and real-time 3D.

Nvidia recently highlighted the potential use of Spatial Computing to accelerate humanoid robotics development. They demonstrated how an Apple Vision Pro headset can be used to train robots by emulating human movements.

Currently, Humanoid Robots require large amounts of training data, and in most cases this data is captured through Teleoperation, which, according to Nvidia, is ‘becoming an increasingly expensive and lengthy process’. Nvidia’s solution leverages spatial computing and advanced AI tools to drastically reduce the time and cost of training.

Simply put, instead of capturing large training datasets through Teleoperation, users can wear headsets like the Apple Vision Pro whilst performing the actions they want to train the robots on, capturing small datasets. These are then fed into Nvidia’s AI-powered tools and workflows, which expand them to train the humanoid across hundreds of different scenarios. I have obviously simplified this, but Nvidia did a really good job of explaining the process in this video: Advancing Humanoid Robot Development.
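
To make the shape of that workflow concrete, here is a minimal sketch in Python. Every function name here is an illustrative placeholder of my own, not Nvidia's actual API, and the noise-based augmentation merely stands in for the far more sophisticated AI-powered scene generation Nvidia describes.

```python
# Sketch of the capture -> augment -> train data flow described above.
# All names are hypothetical placeholders, not Nvidia's actual tools.

import random

def capture_demo(task: str) -> dict:
    # Stand-in for headset capture: one human demonstration recorded as a
    # trajectory of (observation, action) pairs.
    return {"task": task,
            "trajectory": [(random.random(), random.random()) for _ in range(50)]}

def synthesize_variants(demo: dict, n: int) -> list[dict]:
    # Stand-in for AI-powered augmentation: perturb each seed demo to
    # emulate new object poses and scene layouts. Real generation tools
    # do far more than this simple jitter.
    variants = []
    for _ in range(n):
        traj = [(o + random.gauss(0, 0.05), a + random.gauss(0, 0.05))
                for o, a in demo["trajectory"]]
        variants.append({"task": demo["task"], "trajectory": traj})
    return variants

# A handful of headset-captured demos instead of thousands of teleoperated ones...
seed_demos = [capture_demo("pick_and_place") for _ in range(10)]

# ...expanded into hundreds of training scenarios for the humanoid.
dataset = [v for d in seed_demos for v in synthesize_variants(d, 50)]
print(f"{len(seed_demos)} seed demos -> {len(dataset)} training scenarios")
```

The key point the sketch shows is the leverage: the expensive human step shrinks to a handful of recordings, and the multiplication into hundreds of scenarios happens in software.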

Exciting? Yes! But what if they could turn this process into a low-code/no-code experience for users? That is when the adoption of Humanoid Robotics could absolutely skyrocket. Right now, you need some really smart folks to train these robots, even with this new solution Nvidia announced. Now, imagine if a manufacturing plant manager or a farmer could simply wear a headset, perform tasks and upload that data into a platform with an easy-to-follow UI and UX that lets them perform checks and make tweaks before launching it into production.
That is the sort of revolutionary experience that I think we will see at some point.

As for non-humanoid robots, we are seeing rapid adoption across the manufacturing world. From automobiles to semiconductors, the most innovative plants in the world are actively pushing to increase the percentage of tasks done by robots. As that percentage grows, the case for Spatial Computing for real-time data visualisation will only become stronger. Why? Because a Digital Twin's usefulness increases with the quality and amount of data feeding it, and if you have a high-fidelity Digital Twin, why would you not use VR to step inside it and collaborate with others?
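
To illustrate the data flow behind that argument, here is a minimal sketch of robot telemetry streaming into a twin. The RobotTwin class and the telemetry format are assumptions for illustration, not any specific product's API; a real plant would feed the twin from protocols like OPC UA, MQTT or ROS topics, and a VR client would render the synced state in a shared 3D scene.

```python
# Sketch of live telemetry keeping a Digital Twin in sync with plant robots.
# Class names and the telemetry schema are illustrative assumptions.

import time
import random

class RobotTwin:
    def __init__(self, robot_id: str):
        self.robot_id = robot_id
        self.state: dict = {}

    def ingest(self, telemetry: dict) -> None:
        # Each update keeps the virtual replica current; this synced state
        # is what a VR client would visualise for remote collaborators.
        self.state.update(telemetry)

def read_telemetry(robot_id: str) -> dict:
    # Stand-in for a real feed from the plant floor.
    return {"joint_angles": [random.uniform(-3.14, 3.14) for _ in range(6)],
            "temperature_c": random.uniform(35, 55),
            "timestamp": time.time()}

twins = {rid: RobotTwin(rid) for rid in ("arm_01", "arm_02", "agv_07")}
for _ in range(3):  # a few update ticks, for illustration
    for rid, twin in twins.items():
        twin.ingest(read_telemetry(rid))
    time.sleep(0.1)
print(twins["arm_01"].state)
```

The more tasks robots take on, the richer this stream becomes, which is exactly why the twin, and the VR layer on top of it, gets more valuable as automation grows.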

This is the beauty of the tech world: it is constantly evolving, with smart people producing incredible innovation that impacts how we work, collaborate and interact with content.

Are there any emerging tech synergies that have caught your attention?

Till next time,
Davi, MetaVisions
