HandiMote

Empowering the Future of Remote Solutions

A revolutionary remote control solution that uses hand gestures for seamless interaction with computers or hosts. Harnessing the intuitive nature of body gestures, HandiMote simplifies AI interaction, allowing users to control AI systems effortlessly with simple gestures.

Present HMI devices are ill-equipped for the future of AI applications, which demand ubiquitous accessibility beyond the confines of the desktop. Enter HandiMote: a groundbreaking remote control solution harnessing intuitive hand gestures. With HandiMote, AI integrates seamlessly into daily tasks, with effortless control from any location or device.

Remote Control

HandiMote redefines remote control with advanced motion and finger sensing. By wirelessly transmitting motion and finger data to the host, HandiMote enables the host to accurately interpret the user's body gestures and intentions, revolutionizing the user experience.

HandiMote X1
X1 worn on the left hand

Introducing the HMI Revolution

We are planning to develop an innovative human-machine interface system that utilizes the most intuitive human control mechanism—gestures—for communication with computers. Traditional computer systems primarily rely on keyboard inputs and mouse operations to interact with humans, and the corresponding software user interfaces (UI) are also designed based on these tools. The keyboard accurately reflects the text users wish to express, while the mouse offers expanded capabilities for a broader range of operations. Together, these tools have supported decades of technological advancement.

However, the advent of smartphones has introduced a new generation of human-machine interface technology: “touch technology.” This innovative interface provides users with a more intuitive interaction experience. Compared to the traditional mouse, touch offers a more natural mode of operation, supporting gestures such as pinch-to-zoom and swipe selection, all designed around instinctive human movements. These features are so user-friendly that even young children can quickly master them without instruction.

Despite advancements in intuitive operation, touch technology has not significantly revolutionized precise text input. Keyboards, whether physical or virtual, continue to hold an irreplaceable position due to their accuracy in input. On the other hand, voice input technology, although convenient, often fails to adequately understand semantics and can lead to high error rates. As a result, users frequently need to perform complex mouse or touch operations to correct these errors, further solidifying the keyboard’s status as the primary tool for text input.

Thanks to the advancement of AI technology, particularly with large language models like GPT, AI has now become capable of effectively understanding human semantics. Moreover, it can even respond to humans through language, marking a new phase in human-computer interaction interfaces. This evolution allows for more natural and intuitive communications between humans and machines, opening up possibilities for more complex and seamless integrations across various digital platforms.

In the past, human-machine interaction typically involved humans issuing commands and the computer displaying the corresponding raw data, which users then had to read, digest, and utilize to derive outcomes or perform tasks. However, with the advancement of AI technology, this mode of interaction is undergoing rapid transformation. In the future, users will only need to present preliminary ideas, and AI will automatically search and analyze relevant data, providing precise conclusions, thus significantly enhancing efficiency.

Traditionally, large amounts of raw data were presented to users, requiring substantial time for interpretation and necessitating display on large, high-resolution monitors. In the AI era, this demand will decrease significantly. AI’s ability to understand and respond to human semantics allows for more concise methods of information exchange. Future human-machine interfaces will no longer rely on precise text input or traditional input devices such as keyboards, mice, and touch screens.

As technology progresses further, display technology will also undergo a significant revolution, shifting towards lighter, more compact, next-generation displays. This technological revolution will fundamentally change traditional human-machine interfaces based on keyboards and mice. Future interfaces will need to match the natural communication styles of humans.

In this transformation, “gesture interaction” technology will become particularly important. Gestures and body language will become the main means of communicating with machines, making the interaction process more natural and greatly enhancing the efficiency and intuitiveness of communication. Through simple gestures, AI will be able to accurately recognize and execute user commands, similar to how people often communicate with each other not through words but through simple gestures or body movements. This intuitive gesture interaction technology will make future human-machine interfaces not only more in line with human habits but also provide smoother, more natural interactions across various environments.

How HandiMote Works

We aim to create an interface that interacts with computers or AI through human body language, specifically hand gestures. To achieve this, we need to analyze and understand the types of information that can be derived from human gestures. Our analysis has identified two primary types of hand gesture sensing:

1. Finger Movements: Specific actions such as making a fist, pointing, using two fingers, grasping, and releasing. By analyzing these finger states, we can recognize the intended action.
2. Palm Movements and Trajectories: This includes side-to-side waving, static orientation and rotation angles, and dynamic swinging motions.

Finger Postures

Combining Finger and Palm Trajectories: By integrating these two streams, we can capture the gesture being made at any given moment. Once both types of information are captured simultaneously, machines (AI) can intuitively understand the interactive messages or commands that humans wish to convey. A minimal sketch of this combination follows.
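
To make this concrete, here is a minimal, hypothetical sketch of how a host application might fuse the two streams into a single gesture event. The `SensorFrame` fields, the `FingerPosture` labels, and the swing threshold are illustrative assumptions, not part of the HandiMote protocol.

```python
from dataclasses import dataclass
from enum import Enum

class FingerPosture(Enum):
    # Hypothetical labels for recognized finger states
    FIST = "fist"
    POINT = "point"
    TWO_FINGERS = "two_fingers"
    OPEN = "open"

@dataclass
class SensorFrame:
    """One combined sample: palm motion (6DoF) plus the current finger posture."""
    accel: tuple[float, float, float]   # m/s^2, from the accelerometer
    gyro: tuple[float, float, float]    # rad/s, from the gyroscope
    posture: FingerPosture              # decoded from the finger image

SWING_THRESHOLD = 2.0  # hypothetical rad/s rate that counts as a swing

def classify_gesture(frame: SensorFrame) -> str:
    """Toy mapping from (finger posture, palm motion) to a command name."""
    _, gy, _ = frame.gyro
    swinging = abs(gy) > SWING_THRESHOLD
    if frame.posture is FingerPosture.FIST and swinging:
        return "grab_and_drag"
    if frame.posture is FingerPosture.POINT and not swinging:
        return "select"
    if frame.posture is FingerPosture.OPEN and swinging:
        return "swipe"
    return "idle"
```

In a real system, the posture would come from the ScanaVision finger image and the motion values from the 6DoF sensor, with thresholds tuned per application.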

For our human-machine interface solution, we introduce HandiMote. This system uses integrated sensors to capture bodily information: a 6DoF motion sensor combining an accelerometer and a gyroscope, plus a geomagnetic sensor, placed on the user’s palm. This placement allows real-time palm information to be captured intuitively. Additionally, we use an optical image capture system called ScanaVision, an advanced sensing technology that does not require optical lenses. By positioning the ScanaVision sensor on the user’s palm and orienting its view towards the fingers, we can track the exact movements of the fingers.

With sensors mounted directly on the palm, they maintain a fixed relative position to the hand, ensuring precise tracking of palm movements and a stable image of the fingers regardless of the hand’s speed or the angle of movement. This setup not only provides stable images but also reduces the need for extensive data analysis later, effectively decreasing energy consumption and allowing for smaller device sizes. While wearing the device may introduce some inconvenience compared to non-wearable solutions, we can enhance user experience by incorporating tactile (vibration) feedback mechanisms, rounding out the capabilities expected of a comprehensive human-machine interface device.

In addition, incorporating an accelerometer’s impact-sensing capability can significantly enhance the functionality of a gesture-based interface like the one envisioned here. Here’s how it can be applied (a minimal detection sketch follows the list):

  1. Touch Command Recognition: When fingers touch or tap together, the accelerometer detects the specific pattern of movement and force, enabling the system to recognize it as a specific command. This can be particularly useful in environments where traditional touch interactions are not feasible, such as while wearing gloves or in virtual reality settings.
  2. Air Tap Applications: By detecting the acceleration patterns associated with tapping gestures made in the air, the system can interpret these movements as inputs. This allows for the development of “air touch” applications, where users can interact with virtual interfaces by making gestures in space. This could revolutionize user interfaces in augmented reality (AR) and virtual reality (VR) environments, providing a more intuitive way to interact with digital content without physical contact.
  3. Enhanced User Interaction: By combining touch and air gestures, users can have a more fluid and versatile interaction experience. For instance, commands could be initiated by air gestures, while more delicate operations could be controlled by touch gestures, offering a richer and more responsive interface.
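
As a rough illustration of point 1, the sketch below detects a tap as a brief spike in acceleration magnitude following a quiet period. The sample format and both thresholds are assumptions for demonstration, not HandiMote specifications.

```python
import math

SPIKE_THRESHOLD = 25.0  # hypothetical: magnitude (m/s^2) that counts as an impact
QUIET_THRESHOLD = 11.0  # hypothetical: magnitude considered "at rest" (near 1 g)

def detect_taps(samples):
    """Yield sample indices where an impact spike follows a quiet period.

    `samples` is an iterable of (ax, ay, az) accelerometer readings in m/s^2.
    """
    quiet = True
    for i, (ax, ay, az) in enumerate(samples):
        magnitude = math.sqrt(ax * ax + ay * ay + az * az)
        if quiet and magnitude > SPIKE_THRESHOLD:
            yield i          # sharp spike out of rest: treat as a tap
            quiet = False
        elif magnitude < QUIET_THRESHOLD:
            quiet = True     # hand has settled; ready for the next tap
```

The same pattern distinguishes finger-to-finger touches from air taps by combining the spike with the current finger posture.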

What Are the Advantages

Limitless Freedom

The HandiMote EVM X1 boasts unique advantages that set it apart as an unparalleled solution. Because it is worn directly on the hand, users enjoy unrestricted motion, free from spatial constraints. Moreover, the seamless integration of sensors ensures stable image capture, guaranteeing accurate and consistent gesture recognition. This distinctive design advantage positions HandiMote as the ideal choice for a wide range of applications.

Low Power Consumption

HandiMote requires minimal resource allocation, processing data efficiently without the need for high computational power.

Vibrator

The wearable device is equipped with a vibrator, providing users with tactile feedback for an enhanced user experience. In the realm of human-machine interface devices, this feature holds paramount importance.

Low Data Transfer

HandiMote’s low data requirements allow it to be fully supported by the BLE wireless standard alone. As a result, the overall size of the device can be minimized, along with its power consumption.

——

HandiMote Introduction on YouTube

HandiMote remote control solution

HandiMote EVM Kit X1

HandiMote X1 Evaluation Kit (Left Hand)

Introducing the HandiMote EVM X1: a comprehensive development kit featuring wireless BLE, a 6DoF motion sensor, a lensless vision solution, and an integrated vibrator. Designed for effortless wearing on the user’s hand, the X1 simplifies solution evaluation and facilitates seamless development of custom software and applications. With its intuitive architecture, this kit empowers customers to easily capture our advanced solution and integrate it into their own applications. A minimal host-side connection sketch follows the feature list below.

  • Bluetooth Low Energy (BLE)
  • 6DoF Motion Sensor (palm position)
  • Lensless ScanaVision Sensing Solution (MSV9706V)
  • Vibrator
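
For host-side integration, here is a minimal sketch using the Python `bleak` library that subscribes to sensor notifications over BLE and prints the raw packets. The device address, characteristic UUID, and packet layout are hypothetical placeholders, since the X1’s actual GATT profile is not documented here.

```python
import asyncio
from bleak import BleakClient

DEVICE_ADDRESS = "AA:BB:CC:DD:EE:FF"  # hypothetical: your X1's BLE address
SENSOR_CHAR_UUID = "0000fff1-0000-1000-8000-00805f9b34fb"  # hypothetical UUID

def handle_notification(_sender, data: bytearray):
    # Hypothetical packet layout: a real application would unpack the
    # motion and finger fields according to the X1 documentation.
    print(f"received {len(data)} bytes: {data.hex()}")

async def main():
    async with BleakClient(DEVICE_ADDRESS) as client:
        await client.start_notify(SENSOR_CHAR_UUID, handle_notification)
        await asyncio.sleep(30.0)  # stream notifications for 30 seconds
        await client.stop_notify(SENSOR_CHAR_UUID)

asyncio.run(main())
```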

HandiMote Block Diagram