Samsung has taken a bold step as the first partner to announce a Mixed Reality (MR) headset under the new Android XR platform. Their upcoming device, intriguingly referred to as “Project Moohan,” is scheduled for a consumer launch in 2025. I recently had the opportunity to test an early version of this exciting technology.
It’s important to note that Samsung and Google are keeping specific details under wraps for now. Specs like resolution, weight, field of view, and pricing remain undisclosed. Plus, I wasn’t permitted to capture photos or videos during the demo, so we only have an official image to reference.
Picture Project Moohan as a blend between the Quest and Vision Pro, and you start to get a sense of its capabilities. This isn’t just an off-the-cuff comparison. A glance at the headset reveals its strong design inspiration from Vision Pro. Considerations such as color schemes, button placements, and even the calibration process all echo familiar elements from existing market players.
Regarding the software: if someone set out to merge Horizon OS and VisionOS, Android XR is what a successful result would look like. It’s remarkable how closely Project Moohan and Android XR mirror aspects of the two dominant headset platforms.
But let’s be clear—this isn’t an accusation of idea theft. In the tech world, companies frequently borrow and refine each other’s innovations. So long as Android XR and Project Moohan capitalize on the best features and sidestep the pitfalls, it’s a win-win for developers and consumers alike.
Diving into the hardware of Project Moohan, it’s a striking piece of gear. It definitely echoes the ‘goggles’ style of the Vision Pro. However, it features a rigid strap with a tightening dial, unlike the Vision Pro’s soft strap, and embraces a design philosophy akin to Quest Pro, focusing on ergonomics. You get an open-peripheral design, perfect for AR applications, and magnetic snap-on blinders for when you want to block out external distractions and focus on an immersive experience.
Interestingly, despite the visual similarities in buttons and overall design, Moohan lacks Vision Pro’s external display for user eye contact. While some criticize this ‘EyeSight’ feature, I find it quite handy, and I wish Project Moohan had incorporated it. It feels a bit off to talk with someone who can see you when you can’t see their eyes.
As far as tech details go, Samsung is holding onto them tightly, citing the device’s prototype status. What we do know is that the headset boasts a Snapdragon XR2+ Gen 2 processor, a step up from the chips in the Quest 3 and 3S.
During my hands-on experience, I uncovered some useful insights. The headset uses pancake lenses with automatic IPD adjustment via integrated eye-tracking. While its field of view seemed narrower than the Quest 3’s or Vision Pro’s, I’ll reserve judgement until I try different forehead pad setups, as these might bring my eyes closer to the lenses for a wider field of view.
From what I’ve experienced, the field of view, though narrower, was suitably immersive. However, there was some brightness fade toward the edges of the display, possibly owing to lens placement. In terms of lens quality, that puts Meta’s Quest 3 in the lead, followed by Vision Pro, with Moohan trailing slightly behind.
Though Samsung acknowledged that Project Moohan will have dedicated controllers, I didn’t get to see or interact with them. There’s still deliberation on whether these will come bundled with the headset or be sold separately.
For now, all controls were via hand-tracking and eye-tracking, offering a compelling crossover of Horizon OS and VisionOS. The headset supports both raycast cursors, akin to Horizon OS, and VisionOS-like eye+pinch inputs. Plus, the inclusion of downward-facing cameras enables comfortable lap-positioned pinch detections.
Upon donning the headset, the clarity of my hands was strikingly vivid. The passthrough cameras delivered a clearer image than the Quest 3’s, with less motion blur than the Vision Pro’s, though this was under optimal lighting. Closer objects appeared sharper, suggesting the cameras are focused at roughly arm’s length.
Moving on to Android XR, visually, it features a home screen reminiscent of Vision Pro, with app icons floating on a transparent backdrop. Choosing an app spawns floating panels, employing a look-and-pinch gesture that’s similarly used to open the home screen.
System windows resemble those of Horizon OS more than VisionOS due to their opaque backgrounds and the freedom to reposition them via an invisible frame around the window.
Android XR also dabbles in immersive experiences. I had a chance to explore a virtual Google Maps, echoing Google Earth VR, offering glimpses into 3D-modeled cities, Street View imagery, and newly introduced volumetric interior captures.
While these 360 Street Views are monoscopic, the volumetric captures are explorable in real time. Google describes this capturing tech as a gaussian splat solution, but whether it requires new scans or draws from existing photos isn’t clear yet. Despite lacking the precision of photogrammetry scans, it’s fairly competent.
Google Photos has also made its way to Android XR, with automated 3D conversion of the 2D photos and videos in your library. In my brief testing, its conversion quality impressed me, rivaling Vision Pro’s equivalent feature.
YouTube has also been updated for Android XR, adding support for its existing library of 180, 360, and 3D content. The quality isn’t uniformly high, but it’s clear these formats will benefit as more capable headsets roll out.
Moreover, Google demoed a YouTube video originally shot in 2D and converted to 3D for the headset, an impressive feat on par with the 3D conversion in Photos. It’s not clear whether this happens automatically or is creator-opted, but more details should emerge in time.
In terms of what Android XR and Project Moohan offer, much of it looks like Google’s spin on existing tech, but conversational AI is where Google has pulled ahead, a clear cut above the competition.
Google’s AI, Gemini, specifically its ‘Project Astra’ variant, is accessible from the home screen and takes full advantage of its environment. It sees and hears what’s happening both in real and virtual worlds, maintaining a constant perceptive conversation—a step beyond the offerings from rivals.
Vision Pro’s Siri can only hear, sticking mainly to standalone tasks. Meanwhile, Quest’s experimental AI can see and hear, but only within the real world, leaving a disconnect with virtual content. Meta plans to change this, but currently, it feels a bit clunky as you need to pause for a ‘shutter’ sound before it processes the image.
In contrast, Gemini receives what’s akin to a low-frame-rate video feed of both the real and virtual worlds, with no awkward pause to capture a reference image mid-conversation.
A standout trait is Gemini’s memory, which retains conversational context over a rolling 10-minute period. This means it recalls past dialogues and viewed objects, ensuring seamless interaction.
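Gemini’s actual memory implementation is private, but the rolling-window behavior described above is easy to picture: events older than the window (here, 10 minutes) are simply evicted, so anything seen or said within the last 10 minutes stays available for recall. Here’s a minimal, purely illustrative sketch; the `RollingContext` class and its parameters are my own invention, not any real Gemini API:

```python
import time
from collections import deque

class RollingContext:
    """Illustrative rolling-window memory: keeps only events from the
    last `window_s` seconds, discarding anything older."""

    def __init__(self, window_s=600):  # 600 s = the 10-minute window
        self.window_s = window_s
        self.events = deque()  # (timestamp, description) pairs, oldest first

    def add(self, description, now=None):
        now = time.monotonic() if now is None else now
        self.events.append((now, description))
        self._evict(now)

    def recall(self, now=None):
        now = time.monotonic() if now is None else now
        self._evict(now)
        return [desc for _, desc in self.events]

    def _evict(self, now):
        # Drop events that fell outside the rolling window.
        while self.events and now - self.events[0][0] > self.window_s:
            self.events.popleft()
```

Something seen eight minutes ago is still in the buffer and can be referenced; at the eleven-minute mark it has been evicted and is forgotten.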
I observed this memory in action during a typical AI demo involving questions about items around the room. Despite my attempts to stump it with complicated questions, it navigated them smoothly. I also tested it with multilingual signs, where it accurately identified each language and offered flawless translations, spoken in a native accent.
Gemini’s memory excelled further by recalling previously discussed signs upon inquiry, even after time had passed, showcasing a level of contextual grasp often challenging for AI.
Beyond information retrieval, Gemini controls the headset, pulling up 3D Google Maps views of landmarks upon request and aiding in continuous dialogue about them. It can also fetch relevant YouTube videos based on ongoing queries, enhancing the overall intuitive experience.
Aside from conventional tasks like setting reminders or sending messages, Gemini’s real prowess and potential lie in its XR capabilities, promising exciting developments in AI-driven headset utilities.
Gemini’s strength on Project Moohan is noticeable, especially for spatial productivity, but its greater promise might lie in future smartglasses, which I also had the chance to check out. I’ll save that exploration for another discussion.