Unity GauGAN
Using a neural network as the final rendering element of the Unity graphics pipeline.
How it works
A Unity scene is first rendered with the regular pipeline, but each object is drawn as a flat-colored semantic mask rather than with its usual materials.
The resulting label-map image is passed to a neural network, which translates it into a photorealistic image.
The generated image is the final render displayed on screen.
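A GauGAN/SPADE-style generator consumes the label map as a one-hot tensor, one channel per semantic class. A minimal sketch of that preprocessing step, assuming a hypothetical three-class palette (the actual class colors used in the project are not specified):

```python
import numpy as np

# Hypothetical palette: each scene object is rendered flat-shaded in one
# of these colors, so the rendered frame doubles as a semantic label map.
PALETTE = {
    (0, 0, 255): 0,  # sky
    (0, 255, 0): 1,  # grass
    (255, 0, 0): 2,  # water
}

def mask_to_one_hot(mask_rgb: np.ndarray, n_classes: int) -> np.ndarray:
    """Convert an HxWx3 label-map render into the HxWxC one-hot tensor
    that a GauGAN-style generator consumes."""
    h, w, _ = mask_rgb.shape
    one_hot = np.zeros((h, w, n_classes), dtype=np.float32)
    for color, cls in PALETTE.items():
        match = np.all(mask_rgb == np.array(color, dtype=mask_rgb.dtype), axis=-1)
        one_hot[match, cls] = 1.0
    return one_hot

# Tiny 2x2 "render": sky on the top row, grass on the bottom row.
frame = np.array([[[0, 0, 255], [0, 0, 255]],
                  [[0, 255, 0], [0, 255, 0]]], dtype=np.uint8)
tensor = mask_to_one_hot(frame, n_classes=3)
print(tensor.shape)  # (2, 2, 3)
```

The one-hot tensor is what the generator's normalization layers condition on; in the real pipeline this runs once per frame before inference.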
Technology stack
The neural network runs on the server
Unity acts as a client application
For the VR experience, an HTC Vive headset with SteamVR was used
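With the network on a server and Unity as the client, each frame's mask must cross the wire and the generated image must come back. A minimal sketch of the server-side handler, assuming a hypothetical wire format (8-byte width/height header plus raw RGB pixels) and a stand-in `fake_generator` where the real neural network would run:

```python
import struct
import numpy as np

# Assumed wire format (not the project's documented protocol):
# 8-byte header (uint32 width, uint32 height, little-endian),
# followed by width * height * 3 bytes of raw RGB pixels.

def pack_frame(image: np.ndarray) -> bytes:
    h, w, _ = image.shape
    return struct.pack("<II", w, h) + image.tobytes()

def fake_generator(mask: np.ndarray) -> np.ndarray:
    # Stand-in for the neural network: a real deployment would run the
    # GauGAN generator here. Inverting channels just proves the round trip.
    return 255 - mask

def handle_frame(payload: bytes) -> bytes:
    """Server-side handler: decode the client's mask frame, run the
    generator, and encode the result in the same format."""
    w, h = struct.unpack_from("<II", payload)
    mask = np.frombuffer(payload[8:], dtype=np.uint8).reshape(h, w, 3)
    return pack_frame(fake_generator(mask))
```

On the Unity side the client would read the rendered mask back from the GPU, send it with the same framing, and blit the returned pixels to screen; network latency per frame is the main cost of this split.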
We see development potential in building a tool for drawing applications in virtual reality that produces photorealistic results.
Examples
Potential for art – creating visuals for music or installations
GauGAN in virtual reality
2021-08-17 17:16