5.3 3D Light-Field Display Metaverse

The 3D light-field display metaverse will become an important direction of development in the field. We plan to start from two core pillars, the spatial computing protocol and the 3D content ecosystem, to build an innovative and engaging 3D light-field display metaverse.

5.3.1 Hardware and content co-evolution

Display technology roadmap

  1. 2026: Optimize the existing microlens-array solution, aiming to cut 3D-mode power consumption by 40% and to adapt more third-party applications.

  2. 2027: Develop a light-field display module that supports dynamic switching among multiple viewing angles.

  3. 2028: Pre-research holographic projection technology, aiming at a consumer-grade product prototype for mid-air suspended imaging.

5.3.2 Decentralized space anchoring system

  • Technology stack

  1. SLAM positioning: fuses LiDAR point clouds with visual odometry; positioning error < 2 cm

  2. Dynamic lighting: adjusts the brightness of virtual objects in real time using the ambient-light sensor

  3. Physics engine: supports cross-device synchronized rigid-body dynamics simulation
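
As a rough illustration of the fusion step in item 1, the sketch below combines a LiDAR position estimate with a visual-odometry estimate by inverse-variance weighting. This is a minimal stand-in for a full SLAM back end; the function name, positions, and variance values are illustrative assumptions, not the project's actual algorithm.

```python
# Minimal sketch: fuse a LiDAR position estimate with a visual-odometry
# estimate by inverse-variance weighting. All numbers are illustrative.

def fuse_position(lidar_pos, lidar_var, vo_pos, vo_var):
    """Inverse-variance weighted fusion of two 1-D position estimates (meters)."""
    w_lidar = 1.0 / lidar_var   # more confident sensors get higher weight
    w_vo = 1.0 / vo_var
    fused = (w_lidar * lidar_pos + w_vo * vo_pos) / (w_lidar + w_vo)
    fused_var = 1.0 / (w_lidar + w_vo)  # fused estimate is tighter than either input
    return fused, fused_var

# Example: LiDAR reports 1.02 m (var 1e-4), VO reports 1.00 m (var 4e-4)
pos, var = fuse_position(1.02, 0.0001, 1.00, 0.0004)
```

Because the fused variance is always smaller than either input variance, combining the two sensors is what makes a sub-2 cm error target plausible where neither sensor alone would suffice.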

  • Economic system

Under the current concept, land NFTs will adopt the mainstream layered rights-confirmation model:

a. Basic layer: ownership of the geographic coordinates (held permanently)

b. Application layer: rights to scenario-development income (daily dividends)

c. Advertising layer: rights to lease digital advertising space (allocated by bidding auction)
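
The three-layer model above can be sketched as a simple data structure. The `LandParcel` type, its field names, and the 10% dividend rate are illustrative assumptions, not a real contract interface.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch of the three-layer land-NFT rights model.
@dataclass
class LandParcel:
    coordinates: tuple                    # basic layer: permanent coordinate ownership
    developer: Optional[str] = None       # application layer: dividend recipient
    ad_leaseholder: Optional[str] = None  # advertising layer: current auction winner

    def daily_dividend(self, scenario_revenue: float, rate: float = 0.1) -> float:
        """Application-layer income right: a share of daily scenario revenue.
        The 10% rate is an assumed placeholder."""
        return scenario_revenue * rate if self.developer else 0.0

parcel = LandParcel(coordinates=(31.23, 121.47), developer="alice")
```

Separating the three layers into independent fields is what lets each right be transferred or auctioned on its own without touching coordinate ownership.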

  • Technology rollout plan

2026 Q4: Release a beta SDK for the basic SLAM positioning algorithm, enabling developers to access mobile-phone sensor data.

2027 Q3: Launch a physical-interaction engine for virtual objects and establish a developer contribution-points system.


5.3.3 3D Content Ecology

  • Building a closed loop for creators

The material workshop is a key tool we are building for creators. We will provide a library of 200+ highly customizable parameterized templates. Creators can use genetic algorithms to automatically generate texture variants, quickly producing diverse texture effects from preset parameters and rules and greatly reducing creation time. Once a work is finished, it can be published to the cross-ecosystem marketplace with one click, bringing it to a wider market.
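
As a toy sketch of how genetic-algorithm texture variants might work, the example below evolves a texture-parameter dictionary toward a target value. The parameter names (`roughness`, `hue`, `scale`), mutation rate, and fitness function are all assumptions for illustration, not the workshop's actual pipeline.

```python
import random

random.seed(42)  # reproducible for the example

def mutate(params, rate=0.2):
    # Perturb each parameter by up to ±rate of its current value
    return {k: v + random.uniform(-rate, rate) * v for k, v in params.items()}

def crossover(a, b):
    # Take each parameter from one of the two parent variants
    return {k: random.choice((a[k], b[k])) for k in a}

def evolve(template, fitness, generations=10, pop_size=8):
    pop = [mutate(template) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]  # keep the fitter half unchanged
        children = [crossover(random.choice(parents), random.choice(parents))
                    for _ in range(pop_size - len(parents))]
        pop = parents + [mutate(c) for c in children]
    return max(pop, key=fitness)

template = {"roughness": 0.5, "hue": 0.3, "scale": 1.0}
# Hypothetical fitness: prefer variants whose roughness is near 0.8
best = evolve(template, fitness=lambda p: -abs(p["roughness"] - 0.8))
```

In practice the fitness function could encode a creator's preset rules or aesthetic constraints, so each run yields a distinct but rule-conforming batch of variants.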

  • Distributed rendering technology

Distributed rendering is our key technology for rendering large-scale 3D scenes. We will use a DePIN network to schedule millions of mobile-phone GPUs, splitting each frame into 1,024 computing units through task sharding. Each unit can be processed in parallel on a different phone's GPU, greatly improving rendering throughput. A dynamic pricing mechanism matches quotes in real time to each device's computing power: the more compute a device contributes, the higher its reward, which incentivizes broader participation. To guarantee quality, rendering correctness will be verified with zero-knowledge proofs, ensuring the final 3D scene meets the expected standard.
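
The sharding and pricing ideas above can be sketched as follows. The 4K frame size, the 32×32 tile grid, and the pricing constants are illustrative assumptions; only the 1,024-shard count comes from the text.

```python
# Illustrative sketch: split one frame into a 32x32 grid of 1,024 tiles and
# price each device's reward by its share of total compute. Numbers are assumed.

FRAME_W, FRAME_H = 3840, 2160  # assumed 4K frame
GRID = 32                      # 32 x 32 = 1,024 shards, per the text

def make_shards(width=FRAME_W, height=FRAME_H, grid=GRID):
    """Return (x, y, w, h) tiles covering the frame, one per computing unit."""
    tile_w, tile_h = width // grid, height // grid
    return [(col * tile_w, row * tile_h, tile_w, tile_h)
            for row in range(grid) for col in range(grid)]

def reward(device_tflops, base_price=0.001, total_tflops=100.0):
    """Toy dynamic pricing: pay proportionally to a device's compute share."""
    return base_price * (device_tflops / total_tflops)

shards = make_shards()
```

Each tile is independent, which is what allows the scheduler to dispatch shards to whichever phones are online and reassemble the frame once all 1,024 results (and their zero-knowledge validity proofs) return.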
