Apple’s latest Watches have introduced a seemingly innocuous feature that holds the potential to revolutionize how we interact with technology. The double-tap finger gesture, derived from accessibility features, has piqued curiosity and left users, myself included, yearning for more.
For weeks before the double-tap gesture officially arrived, I found myself discreetly tapping my fingers in anticipation. Would this simple action unlock a world of possibilities? Now that it’s here, I’ve spent more than a week testing it, and the results have been mixed: at times truly fantastic, at others frustratingly limited. Even so, it left me wanting more. It’s as if Apple has offered a tiny glimpse of an entirely new interface language, a first bold stride toward a future brimming with potential.
Apple has an even more ambitious product slated for release next year: the Vision Pro. This AR/VR headset blends iOS seamlessly into a mixed-reality interface, relying heavily on eye and hand tracking for control. Interestingly, a similar finger-tap pinch is one of the key gestures the Vision Pro uses to “click” on objects.
Although the current iteration of the Apple Watch’s double-tap feature may not serve as a direct gateway to a new gestural interface future, it sets the stage for broader developments and wider adoption across various wearable devices by other manufacturers.
The resemblance between these gestures and those employed by other tech giants, such as Meta, is no coincidence. Meta envisions a future where wrist trackers and headsets interweave, utilizing neural input technologies like electromyography (EMG) for precise hand motion detection. However, before EMG integration occurs, there will likely be a transitional period of a few years during which gesture recognition evolves to a “good enough” standard. Similar to how smartphones initially incorporated augmented-reality effects using cameras and motion detection before the integration of advanced depth-sensing technologies like LiDAR, we are witnessing the emergence of adequate gesture tracking. Advanced sensors will undoubtedly be added in due course to enhance capabilities further.
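To make that “good enough” stage concrete, here is a minimal sketch of what naive tap detection can look like using nothing more exotic than the motion APIs already on a watch. Everything here, the thresholds, the timing window, the class itself, is invented for illustration; Apple’s actual double-tap feature fuses accelerometer, gyroscope, and optical heart-sensor data through a trained model, not a pair of hand-tuned spikes.

```swift
import CoreMotion
import Foundation

/// A naive double-tap detector: watches for two short spikes in wrist
/// acceleration within a brief window. The thresholds are illustrative
/// guesses, not values Apple uses.
final class NaiveDoubleTapDetector {
    private let motionManager = CMMotionManager()
    private var lastSpike: TimeInterval = 0

    /// Magnitude of user acceleration (in g) treated as a "tap" spike.
    private let spikeThreshold = 0.5
    /// Two spikes spaced within this window (seconds) count as a double tap.
    private let doubleTapWindow = 0.08...0.4

    func start(onDoubleTap: @escaping () -> Void) {
        guard motionManager.isDeviceMotionAvailable else { return }
        motionManager.deviceMotionUpdateInterval = 1.0 / 100.0  // 100 Hz
        motionManager.startDeviceMotionUpdates(to: .main) { [weak self] motion, _ in
            guard let self, let a = motion?.userAcceleration else { return }
            // userAcceleration already has gravity removed.
            let magnitude = sqrt(a.x * a.x + a.y * a.y + a.z * a.z)
            guard magnitude > self.spikeThreshold else { return }
            let now = Date().timeIntervalSinceReferenceDate
            if self.doubleTapWindow.contains(now - self.lastSpike) {
                onDoubleTap()
                self.lastSpike = 0  // reset so a third spike isn't counted
            } else {
                self.lastSpike = now  // first spike, or too close/far apart
            }
        }
    }

    func stop() { motionManager.stopDeviceMotionUpdates() }
}
```

The point isn’t that this is how Apple does it; it’s that a surprising amount of gesture recognition is possible before any new sensor ships.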
Apple already offers a wider selection of gestures within the Apple Watch’s Accessibility settings, under AssistiveTouch. These gestures enable full navigation and activation of any onscreen touch target, serving as an alternative for people who have difficulty touching the screen.
The single double-tap feature introduced in the new Series 9/Ultra 2 Watches represents a refinement: refined algorithms that optimize battery life while remaining constantly available. Although Apple prioritized this particular feature for the current release, the company clearly has the potential to develop additional gestures with new algorithms. Fist clenches, single taps, and motion-controlled pointers already exist within the Accessibility settings, making it likely that tap controls are the next logical step, enabling double-tap to perform more intricate actions. However, third-party apps currently cannot utilize this feature unless it surfaces within a pop-up notification.
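That notification pop-up is, for now, the only hook third-party developers have: double-tap presses the notification’s primary action button. Below is a minimal sketch of how an app surfaces such an action on watchOS; the category and action identifiers are hypothetical, and nothing in this code touches double-tap directly, since no gesture API is exposed.

```swift
import UserNotifications

// Register a notification category whose first action becomes the primary
// button on the watch's notification UI. Per the behavior described above,
// double-tap activates that primary button. All identifiers are made up
// for illustration.
func registerReminderCategory() {
    let snooze = UNNotificationAction(
        identifier: "SNOOZE_ACTION",      // hypothetical identifier
        title: "Snooze 10 Minutes",
        options: []
    )
    let dismiss = UNNotificationAction(
        identifier: "DISMISS_ACTION",     // hypothetical identifier
        title: "Dismiss",
        options: [.destructive]
    )
    let category = UNNotificationCategory(
        identifier: "REMINDER_CATEGORY",  // hypothetical identifier
        actions: [snooze, dismiss],       // first action = primary button
        intentIdentifiers: [],
        options: []
    )
    UNUserNotificationCenter.current().setNotificationCategories([category])
}
```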
The concept of gestural inputs transcends the realm of VR/AR and aligns with the broader notion of ambient computing. It evokes memories of Google’s Soli, a radar-based gesture sensor designed for touch-free interaction at home and away from screens. For gestural inputs to succeed, they must blend into our daily lives without becoming cumbersome. It’s a delicate balance that can only be truly assessed with extensive use. Personally, I tend to default to familiar flows and ignore new shortcuts or features on my phone. Similarly, in VR, I often forget to leverage voice commands, relying instead on muscle memory for common gestures.
The current highlight of double-tap on the Apple Watch undoubtedly lies in its integration with message notifications. Each successive double-tap advances the flow, from opening a message to starting and sending a reply, making quick responses fluid and efficient. Certain limitations remain, however: double-tap can stop a running timer but not start a new one, and while it can scroll through the pop-up widget panes of the Smart Stack on some watch faces, it cannot open them. The integration of Siri with double-tap also needs further refinement.
Considering Apple’s future vision of spatial computing, the question is how much customization and speed users will eventually be able to achieve. Since the Vision Pro will rely primarily on hand and eye movements for control, the Apple Watch, which doesn’t need to stay visible to the headset’s cameras, could become a reliable companion for richer interaction. Perhaps the watch’s display could merge with gestural inputs to unlock new levels of versatility.
As Apple and Meta forge ahead in their pursuit of advanced input mechanisms, it’s worth remembering that Apple’s foray into smartwatch gestures is just the beginning, even if it isn’t connected to the company’s VR and AR efforts yet. Building a gestural language for everyday wearables, and eventually weaving it together with smart glasses, watches, and neural input sensors, is the harder, more important task ahead. As we take these first steps, one tap at a time, we find ourselves on the cusp of an exciting and transformative era.