Immersive sound effects bring a totally different sensory experience to games, and in VR immersive sound is even more essential. Yet immersive 3D audio still lacks a common framework: a format that runs on PC may not work on a phone.
On November 7, Google released the Resonance Audio SDK to help developers create immersive audio for AR, VR, and games across Android, iOS, macOS, Linux, and Windows.
Google's 3D sound technology has matured over time. Back in February 2016, Google brought 3D sound to Cardboard, and in May 2016 it added the audio rendering engine to Daydream, its mobile VR platform.
Google has long been laying the groundwork in VR and WebVR, and by opening up the Resonance Audio SDK it makes its powerful VR audio tools able to convert sound into 3D audio in real time on a wide range of devices.
Resonance Audio controls not only the direction of a sound but also how it spreads: the closer you are to an object, the louder it sounds. The technology runs on a variety of engines, including Unreal Engine and FMOD.
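To illustrate the distance cue described above (this is a generic sketch of the principle, not Resonance Audio's actual code), spatializers typically scale a source's gain by the inverse of its distance from the listener, so the sound gets louder as you approach:

```python
def distance_gain(distance, ref_distance=1.0, min_distance=0.1):
    """Inverse-distance attenuation: the gain halves each time the
    listener's distance from the source doubles (a -6 dB drop)."""
    d = max(distance, min_distance)  # clamp to avoid division by zero up close
    return ref_distance / d

# A source 1 m away plays at full gain; at 2 m it is half as loud.
print(distance_gain(1.0))  # 1.0
print(distance_gain(2.0))  # 0.5
```

Real engines let you pick among several rolloff models (linear, logarithmic, custom curves), but the inverse-distance law above is the common default.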
Lei Feng Wang also reported earlier that Oculus added two 3D audio technologies to the Oculus Rift SDK: near-field head-related transfer functions (Near-Field HRTF) and volumetric sound sources (Volumetric Sound Sources). Developers can choose among different 3D audio tools, or build their own. Google's message, however, is that it offers simpler tools for developers to create great VR, AR, and 360-degree panoramic video on Google's own platforms. Seen in that light, Poly, the 3D content sharing platform Google released on November 2, appears intended to encourage more developers to create content for Daydream VR or YouTube's panoramic video platform.