Designing control interfaces with haptics

Ultrahaptics technology can be used to extend control interfaces by providing the end-user with a sense of touch in mid-air. The sense of touch is vital for control in two key ways:

  1. Confirmation
    Conveying that the user's action has been registered.
  2. Presence and Affordance
    Conveying information about the physical requirements of control, i.e. where the control is located (presence) and what actions the user must perform to exert control (affordance).

We can see these concepts play out in all areas of life. A classic light switch, for example, uses all of the principles above: you can find it by touch (presence), its shape suggests how to flick it (affordance), and its click confirms that the action succeeded (confirmation).

An interface should support a user’s internal locus of control – one of Shneiderman’s 8 golden rules of interface design. Mid-air tactile cues address the two requirements for control above and therefore support the user’s internal locus of control by:

  1. Enabling control
  2. Enhancing the feeling of control.

There are four key principles to consider when designing an effective control interface with mid-air haptics; each is expanded on below.

  • Presence of controls
  • System status and state changes
  • Confirmation
  • Timing

Presence of controls

Tactile cues can signal the presence of the control itself.

With Ultrahaptics technology, a button can be rendered in mid-air. A tactile cue can also indicate that the hand is in the correct location for making a particular action (known as feedforward).

When users feel tactile cues, they know they can make an action. This is key to both enabling control and enhancing the feeling of control.

Researchers working with Ultrahaptics have found the technology to be effective at guiding users to where gestures should be performed. For example, in a gesture-based interface where the user must place their hand in a certain location to make a gesture, feeling a haptic cue tells them that their hand is in the correct place and that the computer is ready to recognise the gesture. This is especially important when the use-case requires the user to attend visually to something else, such as the road while driving. Of course, the presence of controls can also be represented with visual and auditory cues.
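
To make this concrete, here is a minimal sketch of how a feedforward cue could be driven from hand-tracking data. The tracker and emitter objects, their palm_position, emit_point and stop methods, and the zone coordinates are assumed placeholders for illustration, not a real Ultrahaptics API:

    import time

    GESTURE_ZONE_CENTRE = (0.0, 0.0, 0.20)   # metres above the array (assumed)
    GESTURE_ZONE_RADIUS = 0.05               # 5 cm tolerance sphere (assumed)

    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    def feedforward_loop(tracker, emitter):
        # Project a steady focal point onto the palm while the hand is in
        # the gesture zone, telling the user the system is ready.
        while True:
            palm = tracker.palm_position()   # hypothetical: (x, y, z) or None
            if palm and distance(palm, GESTURE_ZONE_CENTRE) < GESTURE_ZONE_RADIUS:
                emitter.emit_point(palm)     # hypothetical "ready" cue
            else:
                emitter.stop()               # no hand in the zone, no cue
            time.sleep(1 / 60)               # modest polling rate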

[Image: presenceOfControls.png]

Figure 1. Presence of controls. This image illustrates that a button, a slider and a general presence can be represented in mid-air using Ultrahaptics technology.

[Image: smart-home.gif]

Figure 2. A demonstration of these principles in a smart home context.

  1. When watching TV, the user places their hand over the array and feels the presence of the control box; they also see a visual representation of the controls appear on the screen.
  2. They move their hand over the button and feel it, enabling control over the lights in the room.
  3. They can also move their hand over the slider to control the music volume in the room (see the sketch below).
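
As a rough sketch of step 3, the snippet below maps the palm's horizontal position over a mid-air slider to a volume level. The emit_line and emit_point methods, the set_volume callback and the slider dimensions are all assumptions for illustration, not a real API:

    SLIDER_X_MIN, SLIDER_X_MAX = -0.08, 0.08   # slider span in metres (assumed)

    def update_slider(palm, emitter, set_volume):
        # Render a slider bar under the palm and map its x position to volume.
        x = min(max(palm[0], SLIDER_X_MIN), SLIDER_X_MAX)            # clamp
        value = (x - SLIDER_X_MIN) / (SLIDER_X_MAX - SLIDER_X_MIN)   # 0.0 .. 1.0
        emitter.emit_line((SLIDER_X_MIN, palm[1], palm[2]),
                          (SLIDER_X_MAX, palm[1], palm[2]))   # the slider bar
        emitter.emit_point((x, palm[1], palm[2]))             # the handle
        set_volume(value)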

System status and state changes

Tactile cues can convey the state of a system.

The first of Nielsen’s 10 usability heuristics is visibility of system status. Adhering to this heuristic ensures that the user feels knowledgeable and in control. In interface design this is typically achieved through Shneiderman’s third golden rule – offering informative feedback. This is why we have become accustomed to loading bars: they tell the user that the system is working and that some latency can be expected while it processes information.

Tactile cues can convey the system status to the user through changes in haptic parameters. For example, a loading bar can be conveyed by drawing a line or a circle on the palm of the hand that grows in step with the system’s progress.
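
A minimal sketch of this idea follows, assuming a hypothetical emit_point call that places a focal point at a 3D coordinate; the arc traced on the palm grows with the loading progress:

    import math
    import time

    def show_progress(emitter, palm_centre, progress, radius=0.02):
        # Trace an arc covering `progress` (0..1) of a full circle on the palm.
        steps = max(1, int(progress * 64))
        for i in range(steps + 1):
            angle = 2 * math.pi * progress * i / steps
            x = palm_centre[0] + radius * math.cos(angle)
            y = palm_centre[1] + radius * math.sin(angle)
            emitter.emit_point((x, y, palm_centre[2]))   # hypothetical focal point
            time.sleep(0.002)                            # sweep along the arc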

[Image: systemstatus.png]

Figure 3. A loading state being conveyed through tactile information.

[Images: FF_FutureControls.gif, controlSystemStatus.gif]

Figure 4. Example of conveying the system status and speed of a fast-forward control. The user places their hand over the fast-forward button and feels a point moving around their palm in a clockwise direction, at a speed corresponding to the fast-forward rate.
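
The sketch below illustrates how such a cue could be parameterised, again assuming the hypothetical emit_point call: the focal point circles the palm clockwise, and the rotation speed scales with the fast-forward rate.

    import math
    import time

    def fast_forward_cue(emitter, palm_centre, rate, radius=0.025, duration=1.0):
        # Circle the palm clockwise; a higher `rate` gives faster rotation.
        start = time.monotonic()
        while (t := time.monotonic() - start) < duration:
            angle = -2 * math.pi * rate * t        # negative sweep = clockwise
            x = palm_centre[0] + radius * math.cos(angle)
            y = palm_centre[1] + radius * math.sin(angle)
            emitter.emit_point((x, y, palm_centre[2]))
            time.sleep(0.005)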

Confirmation

Tactile cues can confirm that the system has recognised the user’s action.

Confirmation is the most commonly considered and applied use of tactile cues in interface design. It is well established that to “feel in control”, we must be notified of the success of our actions. Gestures in particular require confirmation that the system has registered them, and researchers have found this especially useful in reducing the amount of time users need to glance at the screen (reducing glance time is particularly important for car infotainment systems). When using gestures, stick to our recommendations for hand positioning (see Designing effective mid-air haptics) and choose gestures that keep the palm open and facing the array, so that the confirmation cue can be projected onto the hand.

Confirmation can also be given when the hand crosses a certain threshold, for example when it moves from the top menu into the bottom menu. A good rule of thumb: choose a confirmation haptic that is distinctive and moves over much of the hand. The example below shows a sensation that starts as a small point in the centre of the hand and expands into a large circle in response to a double-tap gesture.
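
Here is a minimal sketch of such an expanding-circle confirmation, assuming the same hypothetical emit_point focal-point API; it would be triggered once the double tap has been recognised:

    import math
    import time

    def confirm_double_tap(emitter, palm_centre,
                           r_start=0.002, r_end=0.03, duration=0.3):
        # Grow a circle from a small point to a large ring over `duration` s.
        start = time.monotonic()
        while (t := time.monotonic() - start) < duration:
            r = r_start + (r_end - r_start) * (t / duration)   # current radius
            angle = 2 * math.pi * ((t * 100) % 1)              # fast circular sweep
            x = palm_centre[0] + r * math.cos(angle)
            y = palm_centre[1] + r * math.sin(angle)
            emitter.emit_point((x, y, palm_centre[2]))
        emitter.stop()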

[Image: OpenConfirmation.gif]

Figure 5. An example of a large expanding circle of haptic feedback used as a confirmation cue for a double-tap hand gesture.

Timing

Before? During? After? Choose suitable times to apply tactile cues.

Tactile cues must be well timed to facilitate control. Psychological studies show that the perception of time and the sense of control are linked during our interactions with technology. There are two key considerations:

  1. Latency between when the user makes an action and when feedback is provided. The acceptable delay depends on the context of the interaction, but as a general rule in interface design it should not exceed 300ms. Excessive latency prompts the user to attempt the operation again, for example, when we keep pressing the lift button. Of course, in some interaction scenarios (such as those involving gesture recognition) the computer must recognise the action before it can generate a response, which makes a delay of less than 300ms harder to achieve. In these scenarios, it has been found that adding haptic feedback can reduce the perceived latency.
  2. Before? During? After?
    Choose a suitable time to apply the tactile cue. If it is presented at the wrong time during the interaction, it can be confusing and frustrating. The best time to provide the tactile cue therefore depends on the role it plays in the interaction (a sketch of this timing logic follows the list).
    Ask yourself, is the tactile cue:
  • Guiding the user toward where to make an action? If yes, provide the tactile cue prior to their action to indicate that their hand is in the correct location. This is known as feedforward.
  • Representing the physicality of the controls? If yes, the tactile cue represents the control element itself (such as a button or slider), so the user will experience it during their action.
  • Confirmation that the system recognised the action? If yes, the tactile cue is providing feedback and should be applied shortly after the action has been made, and no more than 300ms later.
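
The sketch below summarises this timing logic. The event names and the play_cue callback are assumed placeholders standing in for your application’s haptics layer:

    from enum import Enum, auto

    class CueRole(Enum):
        FEEDFORWARD = auto()    # guide the hand before the action
        PHYSICALITY = auto()    # render the control during the action
        CONFIRMATION = auto()   # acknowledge the action afterwards

    MAX_CONFIRMATION_DELAY = 0.3   # seconds; the 300ms rule of thumb above

    def on_event(event, role, play_cue):
        # Fire the cue at the moment matching its role in the interaction.
        if role is CueRole.FEEDFORWARD and event == "hand_in_zone":
            play_cue("ready")                   # before the action
        elif role is CueRole.PHYSICALITY and event == "hand_on_control":
            play_cue("control_shape")           # during the action
        elif role is CueRole.CONFIRMATION and event == "gesture_recognised":
            play_cue("confirm")                 # within 300ms after the action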