Detailed Explanation of Automotive HMI Interaction Design (Part 2)

Introduction

This article is part of a series on automotive HMI. The other articles are:

Detailed Explanation of Automotive HMI Interaction Design (Part 1)
How to Build an Automotive HMI: Design Specifications (Part 2)

1. Efficient interaction 

Before discussing efficient HMI interaction, we first need to understand multimodal interaction.

1. Multimodal interaction

"Multimodal interaction" covers the senses of vision, hearing, smell, touch, and taste, realized through the eyes, ears, nose, mouth, and fingers. Real-world applications of the technology are also designed around these senses, integrating several sensory interaction technologies into a multimodal form of interaction.

In-car interaction combines voice, touch, smell, vision, gestures, body movement, and other channels in a way that is closer to interaction between people, making the interaction between people and cars more natural and relaxed.

2. A question to ponder

What kind of interaction counts as efficient? What kind is best? Everyone has their own way of defining it. I have discussed this topic with many designers, and I will give you my conclusion at the end of this section.

3. Common interaction methods

Let's first go over the interaction methods found in HMIs currently on the market: hard buttons / touch / voice / gestures.

 

(1) Hard button interaction
The earliest HMIs were operated almost entirely by hard buttons, and it took thousands of repetitions of muscle memory to use them smoothly. This has its advantages. The following shows a car interior with hard buttons.

 

(2) Touch interaction method
With the arrival of large LCD screens, the interaction mode changed as well. The following shows a large in-car screen.

When driving, clicking is the most effective way to interact. Methods such as long press, slide, double-click, and single-/two-finger drag exceed the safe glance window of about 2 seconds, which greatly increases the difficulty of operation and the risk while driving; those methods can still be used when the car is not in motion. If a feature cannot be realized by touch, voice interaction, discussed below, can take over.
 

Now that we have covered these interaction methods, let's get to the substance: which modules and scenarios should each method be used in?

Click interaction: buttons, checkboxes, tabs, icons, the search bar, etc.

Slide interaction: the negative-one screen, home-page feature cards, all progress bars (music, video, online radio, volume, brightness, air-conditioning fan speed, temperature, etc.), lists, air-conditioning wind direction, car-model rotation, etc.

Long-press interaction: select-to-edit, some keys on the virtual keyboard

Double-click interaction: the navigation map and image zooming

Single-/two-finger drag: single-finger drag moves an object; two-finger drag zooms images and the navigation map.
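The gating implied by the mapping above — only fast gestures stay enabled while the car is moving — can be sketched as a simple lookup. The gesture names and the roughly 2-second safe window come from the text; the estimated glance durations are made-up placeholders for the sketch.

```python
# Illustrative sketch: which touch gestures stay enabled while driving.
# The ~2-second safe-glance rule comes from the article; the estimated
# durations per gesture are assumptions made for this sketch.

GESTURES = {
    # gesture: (estimated glance time in seconds, example targets)
    "click":        (1.0, ["button", "checkbox", "tab", "icon", "search bar"]),
    "slide":        (2.5, ["progress bar", "list", "wind direction"]),
    "long_press":   (3.0, ["select to edit", "virtual keyboard key"]),
    "double_click": (2.5, ["navigation map", "image zoom"]),
    "drag":         (3.5, ["move object", "two-finger zoom"]),
}

SAFE_GLANCE_SECONDS = 2.0  # safe interaction window cited in the article

def allowed_while_driving(gesture: str) -> bool:
    """A gesture stays enabled while driving only if it fits the safe window."""
    duration, _targets = GESTURES[gesture]
    return duration <= SAFE_GLANCE_SECONDS

print([g for g in GESTURES if allowed_while_driving(g)])  # → ['click']
```

Everything outside the safe window would be re-enabled when the vehicle is parked, matching the text's rule that the remaining gestures are for the non-driving state.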

(3) Voice interaction method
One of my favorite interaction methods is voice, which minimizes driver distraction and allows for safe driving.

Navigation is the most common use of in-vehicle voice interaction, so let's take navigation as an example.

Step 1: Speak the desired destination. If you don't know the exact address, you can give a fuzzy destination.

User: "Navigate to the nearest parking lot."

Step 2: Based on the user's spoken request, the system matches candidate destinations and lets the user select the desired one.

Voice assistant: "Which one would you like?"
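The two-step flow above — accept a fuzzy request, then offer candidates for selection — can be sketched roughly as follows. `search_nearby` is a hypothetical stand-in for a real POI search service, and the parking-lot results are invented for the sketch.

```python
# Illustrative sketch of the two-step voice navigation flow described above.
# search_nearby() is a hypothetical stub, not a real navigation API.

def search_nearby(query: str):
    # Hypothetical stub returning candidate destinations with distances (km).
    return [("Parking Lot A", 0.3), ("Parking Lot B", 0.6), ("Parking Lot C", 1.1)]

def handle_navigation_utterance(utterance: str):
    """Step 1: take the fuzzy request; Step 2: offer candidates to choose from."""
    if "nearest" in utterance.lower():
        candidates = search_nearby(utterance)
        prompt = "Which one would you like? " + ", ".join(
            f"{i + 1}. {name} ({dist} km)"
            for i, (name, dist) in enumerate(candidates)
        )
        return candidates, prompt
    return [], "Please tell me where to go."

candidates, prompt = handle_navigation_utterance("Navigate to the nearest parking lot")
print(prompt)
```

Keeping the candidate list short matters here; the "Number of options" section below makes the same point for on-screen results.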

Other scenarios: the music, phone, and air-conditioning modules

Music: "I want to listen to XXXXX", "Previous", "Next", "Volume up", "Volume down"

Phone: "Call XXX", "Open Contacts", "Find XXX"

Air conditioning: "Turn the air conditioning on/off", "Raise the temperature", "Lower the temperature", "Turn on internal/external circulation", "Increase the fan speed", "Decrease the fan speed", etc.

After the voice commands above, someone will ask: to reach an exact temperature, does the user have to say "lower temperature" over and over? No. Current speech systems can parse a target value directly from the command, so voice can set the car's air conditioner to a precise temperature in one step. That's the charm of programmers. Anyway, I'm sold.
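The one-step precise setting described above amounts to extracting a number from the utterance and clamping it to a valid range. A minimal sketch, assuming English phrasing like "set the temperature to 22 degrees" and an invented 16-30 °C cabin range:

```python
import re

# Illustrative sketch: parse an exact temperature from a voice command so a
# single utterance sets the A/C directly. The phrasing pattern and the
# 16-30 degree range are assumptions made for this sketch.

MIN_TEMP, MAX_TEMP = 16, 30  # assumed valid cabin-temperature range (deg C)

def parse_target_temperature(utterance: str):
    """Return the requested temperature, clamped to the valid range, or None."""
    match = re.search(r"(\d+(?:\.\d+)?)\s*degrees?", utterance.lower())
    if not match:
        return None
    return min(max(float(match.group(1)), MIN_TEMP), MAX_TEMP)

print(parse_target_temperature("Set the temperature to 22 degrees"))  # → 22.0
print(parse_target_temperature("Make it 50 degrees"))                 # → 30 (clamped)
```

A real system would fall back to the relative commands ("raise/lower the temperature") when no number is found, which is what returning `None` signals here.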

(4) Gesture interaction method

Gesture interaction, as currently used in cars, has one big advantage: the action is relative and does not require precision (a precise action needs not only the hands but also the eyes to search the screen, which is very dangerous). Its big disadvantage is that the vocabulary of actions must stay small: too many actions force the driver to recall how to perform them, and a wandering mind is also dangerous. So, as mentioned above, I prefer voice interaction and am more optimistic about it.

With an HMI gesture-control system, different gesture combinations let the owner perform actions more quickly: switching songs, answering and hanging up calls, adjusting the volume, flipping list pages, zooming the map, and so on. Some of these gestures are borrowed from touch gestures; others draw on everyday habits. For example, a closing gesture such as making a fist can be used to hang up a call.
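A gesture system like the one described is, at its core, a small dispatch table from recognized gestures to actions. The gestures and actions below follow the text; the handler wiring and gesture names are illustrative assumptions.

```python
# Illustrative dispatch table for the gesture combinations mentioned above.
# The actions come from the article; the specific gesture names are assumed.

GESTURE_ACTIONS = {
    "swipe_left":  "next song",
    "swipe_right": "previous song",
    "rotate":      "adjust volume",
    "swipe_up":    "flip list page",
    "pinch":       "zoom map",
    "fist":        "hang up call",  # closing gesture mapped to hanging up
}

def on_gesture(gesture: str) -> str:
    """Look up the action; unknown gestures are ignored rather than guessed,
    since a mis-recognized action is worse than none while driving."""
    return GESTURE_ACTIONS.get(gesture, "no action")

print(on_gesture("fist"))        # → hang up call
print(on_gesture("wave_twice"))  # → no action
```

Keeping the table small is the point: the text's warning about "too many actions" translates directly into keeping this dictionary short.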

4. Final conclusion

Any in-vehicle task other than driving distracts the driver to some degree. To ensure driving safety, feature interaction design should combine touch, voice, image, and other modalities, flexibly mixing input forms for different scenarios, and appropriately retain some physical (hard) buttons to provide the most natural experience for users.

One more point: if we designers can participate in defining the interaction of the whole vehicle, we have a much greater say and can shape the car from prototype to launch. Otherwise, once the car has been finalized, the interaction definition faces many restrictions.

2. Content layout and information presentation in interaction

1. Content layout in interaction

While driving, most of the user's attention is devoted to the driving task; only about 5% of attention and time is available for operating the vehicle's systems. The HMI's information layout must therefore present itself in the best possible way within a very short time.

If the user cannot complete a task within that time, they either give up and start over or spend more time and attention on it, and the driving risk rises sharply.

2. Improve operation efficiency

Feature layout should fully consider the characteristics of the driving environment and arrange interface information for the specific scenario. The driver's position relative to the screen and buttons is essentially fixed, so the most important information should be placed in the area that is easiest for the driver to see and touch.

Based on the hot spots of in-vehicle operation, features and entry points should follow the hot-spot distribution as far as possible: place controls close to the driver's hand to shorten the reach, and put the information display area on the right.

3. General information layout

Across different driving scenarios, the information layout of the same feature should stay consistent to avoid disturbing the driver with layout changes. Between identical or similar features, page layouts should be uniform so that positional association reduces the driver's memory cost.

Following the three-second visual-scanning principle mentioned above, information on a page should be grouped for central viewing, so that the page accurately conveys the content of the current task and the user can absorb it in 1-2 seconds and quickly return to normal driving.

4. Interaction copywriting

Interaction copy must be short, concise, and easy to understand, and the information must stay current and scannable.

(1) Convey the information clearly

Interaction copy must not be vague or ambiguous.

For example: scanning a code in the HMI to log in to Apple Music. If the login fails:

Scheme 1: the pop-up message reads "Login failed".

Scheme 2: the pop-up message reads "Login failed" plus the reason for the failure.

(2) The copy is concise and clear

From the user's perspective, you will find that many users do not read the contents of a pop-up window: as soon as they see a button, they click it. So the shorter the text in the pop-up, the better.

For example, two pop-ups can express the same meaning. One is short and states the situation and the action directly, so the user grasps the key point. The other is too long; the user doesn't want to read it and misses the point.

 

(3) Consistency

To keep the product consistent, similar copy should use consistent expressions.

For example, the navigation buttons "Start navigation" and "Modify default" both follow a verb + noun pattern; they must not switch to noun + verb.

 

(4) Distinction between primary and secondary

The hierarchy of each block of text must be clearly defined; distinguishing primary from secondary information helps users understand the content.

In the phone module's call log, the user generally wants to call someone: they look for "XXX" (the name) first and only then check the phone number. So primary and secondary information must be visually distinguished.

 

(5) Form a closed loop

If the copy is unavoidably long, is there a way to let users act quickly?

Current implementation: our interaction scheme defines the button as "Got it".
What I hope for: if I could define the button, it would become "Settings", taking the user straight to the action (I will propose this scheme in the next version update).

3. Multitasking

Efficient task-flow design should aim to raise the task-completion rate and reduce cognitive and operational cost. Avoid overly complex information architectures and feature paths; every feature needs a fixed, complete entry and exit path.

When does multitasking occur? Imagine: music or radio is playing during navigation; a call comes in during navigation and must be answered; later we also added a group-travel feature inside navigation. Music, radio, phone, and group travel can be switched once the split screen is expanded. For now we cannot support free drag-and-drop (the project schedule was tight), so the experience is not perfect.

Why should I talk about the "drag and drop at will" feature?

In the current version, multitasking appears only in the navigation module. Switching between the features listed above is supported, but our split-screen scheme dates from two years ago and has never been revised. Here is the earlier interaction design.

Because of that panel, the space occupied on the right squeezes the navigation content on the left. Once road signs, traffic conditions, the route overview, map-zoom buttons, and so on appear, the navigation page has to carry even more content.

That scheme may interfere with the driving task. Later, our designer proposed a "widget" scheme that can be dragged freely, and the temporary control is much smaller than before. Here is the prototype of this scheme.

 

In the end the scheme was approved by leadership, but because of the high development cost, it has to wait for the next OTA upgrade package.

Conclusion: in any scenario, the driving task has the highest priority, and any multitasking design must consider its impact on driving.

4. The HMI interaction hierarchy

Avoid hiding features that are used frequently while driving. The hierarchy must not exceed three levels; otherwise driving safety is seriously threatened.

1. Frequency of HMI feature use

Let's take a look at the features that are frequently used and of high importance during driving.

 

Low use frequency: system settings, third-party apps

High use frequency: music, radio, air conditioner, telephone, back camera, navigation
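The three-level rule combined with the frequency lists above suggests an automatic check: every high-frequency feature should sit no deeper than three levels in the menu tree. A sketch, with an entirely made-up example menu:

```python
# Illustrative check of the "no more than three levels" rule for
# high-frequency features. The menu tree below is a made-up example.

MENU = {
    "home": {
        "navigation": {},                       # level 1
        "media": {"music": {}, "radio": {}},    # music/radio at level 2
        "phone": {"contacts": {"detail": {}}},  # contact detail at level 3
        "settings": {"system": {"network": {"advanced": {}}}},  # level 4
    }
}

def depth_of(tree: dict, target: str, level: int = 1):
    """Return the menu level of `target`, or None if absent (children of the
    root are level 1)."""
    for name, children in tree.items():
        if name == target:
            return level
        found = depth_of(children, target, level + 1)
        if found is not None:
            return found
    return None

HIGH_FREQUENCY = ["music", "radio", "phone", "navigation"]
for feature in HIGH_FREQUENCY:
    level = depth_of(MENU["home"], feature)
    print(feature, level, "OK" if level <= 3 else "TOO DEEP")
```

Low-frequency items like the level-4 "advanced" settings page can afford to sit deep; the check only needs to guard the features touched while driving.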

Here's an example of phone:

2. Phone module

A traditional HMI has no voice interaction, so to call Tom:

Step 1: Open the phone feature.

Step 2: Tap the contact list.

Step 3: Scroll the list to the names beginning with "T" and dial.

So although the traditional HMI is only two levels deep here, it still takes three steps to reach the contact and dial.

Finding a contact with voice search:

Step 1: Simply say: "Call Tom".

Step 2: Select the search result and make the call.

5. Number of options

Reduce the number of options as much as possible; flipping through pages while driving is very dangerous.

Take a navigation example:

When navigating, the destination is entered by voice. How many results should be displayed?

 

Another practical example:

In our project, Apple Music returns a great deal of content, so scrolling is unavoidable, and the landscape layout imposes many restrictions. Because of the limited height, only two rows of music can be displayed.


The portrait layout fares much better. Here is the interaction draft for the portrait screen.
 


6. Definition of Feedback

In terms of output, feedback is mainly visual, voice, and tactile. For safety, HMI feedback must make the user clearly understand the importance level of the task and tell them what to do next.

1. Visual feedback

For example, when the car is reversing and about to hit a wall, it sounds a warning tone and shows a red warning mark on the screen, telling the driver to respond immediately.
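The reversing example above is a graded, multimodal warning: the closer the obstacle, the stronger the combined visual and audio feedback. A sketch with invented distance thresholds:

```python
# Illustrative sketch of multimodal reversing feedback. The distance
# thresholds and feedback labels are assumptions made for this sketch.

def reversing_feedback(distance_m: float) -> dict:
    """Map obstacle distance to a feedback level combining screen and sound."""
    if distance_m < 0.3:
        return {"visual": "red warning mark", "audio": "continuous tone"}
    if distance_m < 1.0:
        return {"visual": "orange marker", "audio": "fast beeps"}
    if distance_m < 2.0:
        return {"visual": "yellow marker", "audio": "slow beeps"}
    return {"visual": "none", "audio": "none"}

print(reversing_feedback(0.2))  # → {'visual': 'red warning mark', 'audio': 'continuous tone'}
```

Pairing the visual and audio channels at every level matters: a glance at the screen or a tone alone should each be enough to convey the urgency.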

2. Tactile feedback

The tactile feedback mechanism of the vehicle control interface is, in essence, an exchange of information between the person and the interface.

The user provides input by touching the screen; the HMI processes and stores the input and presents the result on screen. Information travels through the medium of graphics: the human visual system recognizes and reprocesses the visual information and classifies it by space, time, color, shape, and so on.

The visual cortex processes the matched visual information, forming short- and long-term memories that ultimately build a tactile feedback mechanism for the user's operations. This whole process is human-computer interaction: human behavior feeding back into the brain's visual and tactile information.

3. Voice feedback

Voice interaction is a relatively important interaction method. Besides letting users control the car by voice, it lets them receive the vehicle's feedback easily and naturally. The feedback types and application scenarios below are divided by recognition confidence level.
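The confidence-based division above can be sketched as a simple policy: act directly when recognition confidence is high, confirm first when it is middling, and ask the user to repeat when it is low. The thresholds and response wording are assumptions for the sketch.

```python
# Illustrative sketch of confidence-based voice feedback. The 0.9 and 0.6
# thresholds and the response styles are assumptions made for this sketch.

def voice_feedback(intent: str, confidence: float) -> str:
    """Pick a feedback style by recognition confidence."""
    if confidence >= 0.9:
        return f"Executing: {intent}"                    # high: act directly
    if confidence >= 0.6:
        return f"Did you mean: {intent}?"                # medium: confirm first
    return "Sorry, I didn't catch that. Please repeat."  # low: ask again

print(voice_feedback("navigate home", 0.95))  # → Executing: navigate home
print(voice_feedback("call Tom", 0.7))        # → Did you mean: call Tom?
```

The middle tier is the safety-relevant one: confirming before acting keeps a mis-heard command from, say, dialing the wrong contact while the driver's attention is on the road.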

 
