questions
Technological development must be a matter of social debate. The more people know, the more they can act. To start these debates we are working on an Adaptive User Interface with Artificial Intelligence. In the beginning we asked ourselves: How can an AUI contribute to the common good? Who performs the adaptation, the user or the system? How can an artificial intelligence get to know the user? What are the fields of application? Where do we see potential, and what are the characteristics of a potential user?
definition
Adaptive User Interfaces are an evolution of standard static UIs, which present the same interface regardless of the user or context of use. Most interfaces we encounter today are static. Some interfaces are user-adaptable, though mostly through a separate settings menu. Responsive design is an important step towards AUIs, although we would not consider it truly adaptive: responsive design only considers different screen sizes and devices and does not integrate information gathered about the user. AUIs are a stepping stone towards ubiquitous computing (also known as ambient intelligence), a futuristic vision in which every object is equipped with computers, sensors, and networking power. In this vision, the border between the environment and the computing interface is blurred – the entire environment becomes an interface.
An Adaptive User Interface (AUI) is a user interface that dynamically adjusts its layout, elements, functionality, and/or content to a given user's needs, capabilities, and context of use.
adaptation
We explored how Adaptive UIs are particularly useful as a method to bridge the two UX gulfs, and we developed two models for adaptation: one for active and one for passive adaptation.
You get more information under "Bridging the Gulfs" on our website
interaction
An interface is always based on communication between the user and the system, thus creating interaction. The first step for us was to analyze the interaction between different participants and systems. The first questions we answered were: How can an artificial intelligence get to know the user? And by whom is the adaptation made?
We differentiate between Human-to-Human Interaction, where humans communicate with each other directly, and Human-to-Computer Interaction. Being able to interpret each other, draw conclusions about our environment, and develop intelligent reactions is at the core of human existence.
In Human-to-Computer Interaction, much of this communicative richness is lost. As it stands, classical computers are simply incapable of interpreting humans as well as we can interpret each other. In this project, we asked ourselves: What if human-computer interaction were as rich as human-to-human interaction? How could we design computers to intelligently interpret and react to human behavior? Advances in artificial intelligence, particularly computer vision and natural language processing, have endowed computers with remarkable sensory abilities. It is only a matter of time before computers are capable of perceiving information about their environments better than humans can.
Using Artificial Intelligence and Machine Learning to design adaptive interface systems bridges the gap between human experience and machine intelligence.
You get more information under "Adaptive UIs" on our website
framework - magic mirror
We have developed a framework for adaptive user interface systems and an installation that demonstrates the current state of artificial perception. A person can position himself or herself in front of the installation; it uses a camera to detect different characteristics of the person and compares them with its own database to estimate the person's age and gender.
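To give an impression of how such a perception loop could work, here is a minimal sketch using OpenCV with a pretrained age/gender classifier in Caffe format. The model file names, age buckets, and mean values are placeholders for illustration, not the installation's actual implementation.

```python
# Minimal sketch of a perception loop: detect a face, then estimate age and gender.
# Model files and constants below are assumptions, not the project's real models.
import cv2

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
# Hypothetical pretrained networks; any age/gender classifier could be substituted.
age_net = cv2.dnn.readNetFromCaffe("age_deploy.prototxt", "age_net.caffemodel")
gender_net = cv2.dnn.readNetFromCaffe("gender_deploy.prototxt", "gender_net.caffemodel")
AGE_BUCKETS = ["0-2", "4-6", "8-12", "15-20", "25-32", "38-43", "48-53", "60+"]
GENDERS = ["male", "female"]

camera = cv2.VideoCapture(0)
while True:
    ok, frame = camera.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_detector.detectMultiScale(gray, 1.3, 5):
        face = frame[y:y + h, x:x + w]
        blob = cv2.dnn.blobFromImage(face, 1.0, (227, 227),
                                     (78.426, 87.769, 114.896), swapRB=False)
        gender_net.setInput(blob)
        gender = GENDERS[gender_net.forward()[0].argmax()]
        age_net.setInput(blob)
        age = AGE_BUCKETS[age_net.forward()[0].argmax()]
        # At this point the mirror would adapt its interface to the estimate.
        print(f"estimated: {age}, {gender}")
    cv2.imshow("mirror", frame)
    if cv2.waitKey(30) == 27:  # Esc quits
        break
camera.release()
cv2.destroyAllWindows()
```

In the installation, the estimated age and gender would then decide which version of the interface the mirror presents.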
adaptation parameters
We also present several interaction sketches for scenarios where adaptive UIs might be useful. Based on our table of sensors and information processing, we derived four user-model categories:
Mental State, Physical Traits, Habits and Abilities, and Context and Surroundings.
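As a sketch of how these four categories could be represented as data, the following hypothetical structure groups them into one user model. All field names are illustrative assumptions, not the project's actual parameter table.

```python
# Hypothetical user model grouping the four categories; fields are examples only.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class MentalState:
    attention_level: Optional[float] = None   # e.g. estimated from gaze
    stress_level: Optional[float] = None

@dataclass
class PhysicalTraits:
    height_cm: Optional[float] = None
    estimated_age: Optional[int] = None

@dataclass
class HabitsAndAbilities:
    reading_speed_wpm: Optional[float] = None
    preferred_shortcuts: list[str] = field(default_factory=list)

@dataclass
class ContextAndSurroundings:
    viewing_distance_m: Optional[float] = None
    bystanders_present: Optional[bool] = None

@dataclass
class UserModel:
    mental_state: MentalState = field(default_factory=MentalState)
    physical_traits: PhysicalTraits = field(default_factory=PhysicalTraits)
    habits_and_abilities: HabitsAndAbilities = field(default_factory=HabitsAndAbilities)
    context: ContextAndSurroundings = field(default_factory=ContextAndSurroundings)
```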
habits and abilities
A user's navigation habits give insight into their goals, priorities, and individual preferences. Their navigation ability may reflect slight differences in micro-interactions or gross differences in motor ability. How can adaptive interfaces adjust themselves to accelerate common tasks and accommodate individual differences in ability?
use case 1
In this use case, the smartphone detects a very slow reader and suggests that the text be read aloud. A focus point moves along the text to support the voice.
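A possible sketch of this behaviour, with assumed thresholds and hypothetical UI hooks (ui.ask, ui.start_text_to_speech), could look like this:

```python
# Illustrative sketch: estimate reading speed from page-turn events and offer
# text-to-speech when the user reads unusually slowly. Thresholds are assumptions.
AVERAGE_WPM = 200          # assumed typical silent reading speed
SLOW_READER_FACTOR = 0.5   # below half the average, offer read-aloud

def estimate_wpm(words_on_page: int, seconds_on_page: float) -> float:
    return words_on_page / (seconds_on_page / 60.0)

def on_page_finished(words_on_page: int, seconds_on_page: float, ui) -> None:
    wpm = estimate_wpm(words_on_page, seconds_on_page)
    if wpm < AVERAGE_WPM * SLOW_READER_FACTOR:
        # Ask before adapting: the user stays in control of the adaptation.
        if ui.ask("Would you like the text read aloud?"):
            ui.start_text_to_speech(highlight_focus_point=True)
```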
use case 2
In this use case, the system recognizes that the reader finds it difficult to read a very wide column of text. The system adapts the layout, including the text width, with a prompt that the user can confirm or reject.
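A comparable sketch for the column-width adaptation, again with an assumed comfort threshold and hypothetical UI hooks:

```python
# Sketch: propose a narrower text column when the current line length is too wide.
MAX_COMFORTABLE_CPL = 75   # assumed upper bound of characters per line

def on_reading_difficulty_detected(current_cpl: int, ui) -> None:
    if current_cpl > MAX_COMFORTABLE_CPL:
        # Propose the change instead of applying it silently.
        if ui.ask(f"Narrow the text column to about {MAX_COMFORTABLE_CPL} characters per line?"):
            ui.set_characters_per_line(MAX_COMFORTABLE_CPL)
```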
context and surroundings
We are always on the move – surrounded by different people, navigating different social contexts, traveling within our towns, across countries, or around the world. Despite our ever-changing environments, our digital devices always remain the same. How might we incorporate situational and contextual awareness into our devices?
use case 1
Extracting information from an interface requires a certain viewing distance, but contextual and situational conditions do not always guarantee it. An AUI could detect the user's distance and adapt the content accordingly.
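One way this could be sketched: keep the text's angular size roughly constant by scaling the font linearly with the estimated viewing distance. The base values below are assumptions; the distance estimate itself could come from a depth camera or a face-size heuristic.

```python
# Sketch: scale the font size with the user's estimated viewing distance.
BASE_DISTANCE_M = 0.5   # distance at which the default font size is comfortable
BASE_FONT_PT = 16

def font_size_for_distance(distance_m: float) -> float:
    # Angular size stays roughly constant if text grows linearly with distance.
    return BASE_FONT_PT * max(distance_m / BASE_DISTANCE_M, 1.0)

print(font_size_for_distance(1.5))  # -> 48.0 at three times the base distance
```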
use case 2
The system preserves your privacy: as soon as the camera detects a stranger, editing options and changes on your smartphone are blocked.
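A minimal sketch of this privacy behaviour, with the face matching abstracted behind a hypothetical is_owner check and UI lock hooks:

```python
# Sketch: lock editing whenever a face other than the enrolled owner is visible.
def update_privacy_mode(detected_faces, is_owner, ui) -> None:
    stranger_present = any(not is_owner(face) for face in detected_faces)
    if stranger_present:
        ui.lock_editing()    # block edits and changes while a stranger is visible
    else:
        ui.unlock_editing()
```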
physical traits
In a public setting, it is impossible to design a system perfectly tailored to each individual's dimensions. An interface suited for a moderately tall user will prove cumbersome for both very tall and very short users. What if we didn't have to design a single interface for everyone? Adaptive UI systems could conceivably detect and adapt themselves to a user's height, vertically aligning and scaling the interface appropriately.
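As an illustration, a height-based alignment could be sketched like this; the eye-height ratio and display geometry are assumed values, not measurements from our installation.

```python
# Sketch: vertically align a public display's main panel to the detected user height.
EYE_HEIGHT_RATIO = 0.93      # eye level is roughly 93% of body height (assumption)

def panel_offset_cm(user_height_cm: float, display_bottom_cm: float,
                    panel_height_cm: float) -> float:
    """Return how far above the display's bottom edge to place the main panel
    so that its centre sits at the user's estimated eye level."""
    eye_level = user_height_cm * EYE_HEIGHT_RATIO
    return max(eye_level - display_bottom_cm - panel_height_cm / 2, 0.0)

print(panel_offset_cm(180, 80, 40))  # -> 67.4 cm for a 1.80 m user
```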
team partners
Kalle Robinson Reiter and Matthew Jörke
supervised by
Professor Jörg Beck