Category list of algorithmic affordances

We defined four themes to categorize the algorithmic affordances. As this is the start of a growing list of algorithmic affordances, please let us know if you have encountered one or more of them and send us examples. We would really like you, as a designer, to add value to this overview of affordances.

Feeding the algorithm

Algorithmic controls intended to feed the algorithm with information about user preferences. Many social media platforms enable this in the form of a ‘like’, ‘favorite’ or ‘recommend’ control. In the context of social software, such features serve the double function of informing the algorithm and informing other users of the software. For example, users who use the like function in Twitter (illustrated with a little heart shape) are aware that other users are notified of this action, in particular the author of the message (see Figure 2). The latter is important to users [5], while the fact that algorithmic output relies on ‘likes’ data may not be top of mind for them. As a result, the control may not help with building an accurate mental model of the algorithm [9].
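The double function described above can be made concrete in a small sketch. All names here (LikeStore, the notification format) are invented for illustration and do not reflect any platform's actual API; the point is only that one user action feeds two separate channels, and that only one of them is visible to the user.

```python
from collections import defaultdict

class LikeStore:
    """Hypothetical sketch of a 'like' control with a double function."""

    def __init__(self):
        self.notifications = []              # social function: inform other users
        self.preferences = defaultdict(set)  # algorithmic function: feed the model

    def like(self, user, item, author):
        # Function 1: the author is notified; this part is visible to the user.
        self.notifications.append((author, f"{user} liked your post"))
        # Function 2: the preference silently feeds the recommender; this part
        # is often not represented in the user's mental model of the system.
        self.preferences[user].add(item)

store = LikeStore()
store.like("alice", "post42", "bob")
```

Because the second channel is invisible, a user can form an accurate model of the social effect of a like while remaining unaware of its algorithmic effect.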

Navigating the recommendation space

A promising avenue for exploring XAI may be solutions that allow users to navigate the recommendation space. Rather than treating a recommendation as a point solution - a single best outcome - the system could present the user with a ‘landscape’ of recommender outcomes and controls to navigate it. A common solution ‘in the wild’ is the use of ordered lists in music and movie recommenders such as Netflix and Spotify: the user is presented with a set of tiles suggesting multiple, potentially relevant outputs of the recommender and can easily choose between them. E-commerce sites also explain the social context that fed the recommendations (“others who bought this item”). In the academic literature, we find more sophisticated examples of this central idea. Bakalov et al. [1], for example, propose recommendation scapes for controllable personalization. In their approach, recommendations are not just an ordered list but take position in a structured, interactive visualization. This helps the user understand what alternatives the recommender may provide and how they relate to the ‘best option’. It is easy to imagine how this proposal could help the physician in the fictional example above. Medical diagnoses have a structure, and presenting the output of the decision support system in relation to alternative diagnoses, combined with assigning different weights to underlying data, might be an effective way to enable the physician to make a more educated decision on how to interpret the system output. We consider navigation of the recommendation space a potent avenue for XAI, although a custom design will be needed for each context.
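The contrast between a point solution and a landscape can be sketched minimally: instead of returning only the single best item, the recommender exposes ranked alternatives together with their distance from the top recommendation. The scores and item names below are invented, and real systems (such as the recommendation scapes of Bakalov et al.) would use a far richer structure than a score gap.

```python
def recommendation_landscape(scores, top_k=3):
    """Return a small 'landscape' of alternatives instead of a point solution.

    scores: dict mapping item -> relevance score from some recommender.
    """
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    _, best_score = ranked[0]
    # Each tile shows an alternative and how far it sits from the 'best
    # option', giving the user something to navigate rather than a verdict.
    return [
        {"item": item, "score": score, "gap_to_best": best_score - score}
        for item, score in ranked[:top_k]
    ]

tiles = recommendation_landscape({"A": 0.9, "B": 0.7, "C": 0.6, "D": 0.2})
```

Even this flat list already lets a user see that the second-ranked item is a close alternative rather than a distant runner-up, which a single-item recommendation hides.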

Tuning algorithmic parameters

Tuning algorithmic parameters might offer a more direct, and for XAI a more vital, form of control: manipulating parameters within the algorithm itself. The most straightforward solution is to enable users to open or close certain data sources as input to the algorithm. This solution was applied in our design project on recommender systems that adhere to public values, conducted by several student groups; we were, however, unable to locate an example in a commercial system or a proposal in the academic literature. A related idea is to allow users to assign weights to elements of the decision-making algorithm, such as data sources or intermediate variables included in the model. This is implemented in the legal search engine ‘Fastcase’ and has been proposed by academics as well (e.g. [1]). Nascent studies suggest that such controls are appreciated by users. For example, Jin et al. [2] added algorithmic controls to a music recommender, letting users control the weight of six characteristics: mood, location, weather, social aspects, current activity, and time of day. This control increased perceived recommendation quality without increasing cognitive load, and users also liked to play with the system. There are also proposals in the literature to make the full complexity of an algorithm controllable for the user. For example, Gretarsson et al. [3] built a recommender in which users can adjust the decision process in each of its layers. This solution gives users full control over the algorithm and allows them to explore the decision-making process in greater detail. However, it may not be feasible to apply this to all kinds of algorithms, and in many cases the approach might be ‘too direct’: it is often not necessary to completely align users’ mental model with the technical implementation of the algorithm.
Proposals that allow users to tune algorithmic parameters seem to have great potential for achieving explainability of the algorithms involved, because they allow very direct manipulation of the algorithm and users can immediately explore the influence on the output; they are the most model-intrinsic approach [4]. At the same time, the proposals we found were still very explorative and ‘literal’ with regard to the inner workings of the algorithm. Implementing this in a way that fits the task context and the mental model of the user will be a challenge. To us it seems insufficient to simply expose the inner workings of the algorithm; instead, more direct user controls should bridge between those inner workings and the decision of the algorithm in a specific task context.
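The weight-tuning idea can be sketched as a simple weighted scoring function, loosely in the spirit of the six user-controllable characteristics in Jin et al. [2]. The feature values, item names, and the choice of a linear weighted sum are all illustrative assumptions, not the design of any cited system.

```python
def score(item_features, weights):
    """Weighted-sum relevance score over user-tunable characteristics.

    Moving a weight 'slider' immediately changes the ranking, so the user
    can directly observe the influence of each parameter on the output.
    """
    return sum(weights.get(f, 0.0) * v for f, v in item_features.items())

items = {
    "song1": {"mood": 0.8, "activity": 0.2},
    "song2": {"mood": 0.1, "activity": 0.9},
}
weights = {"mood": 1.0, "activity": 0.0}  # user slides 'mood' up, 'activity' down
ranking = sorted(items, key=lambda i: score(items[i], weights), reverse=True)
```

The tight loop between adjusting a weight and seeing the ranking change is exactly what makes this family of controls model-intrinsic: the control manipulates the scoring function itself rather than a proxy for it.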

Activating Recommendation Contexts

A fourth way to give users control over the algorithm is the notion of context specification. Different user contexts may ask for different settings of the algorithm and different data to train it on. There may be settings in which the user does not want the algorithm to learn from their actions, or in which the user needs different recommendations. A well-known example is Netflix’s “who is watching?” function, which allows users to ‘build’ different recommendation profiles, e.g. for their children. Similarly, the ‘Incognito’ function in Google Chrome allows users to avoid some of the personalization that is an integral part of Google’s service. Several student projects also proposed ‘reset’ or ‘chance’ options in their recommenders, indicating a need to escape, from time to time, the profile that a recommender has built. At first sight, these contextual control solutions do little to improve the explainability of algorithms, and they are not the most promising avenue to explore in the context of explainable AI. Still, we should not immediately dismiss recommendation contexts as a way forward: there is a call for context sensitivity of explanations, and comparing system output for different contexts might help the user if these contexts are meaningful and designed with the right granularity.
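The mechanics of context specification can be sketched as a recommender that keeps separate preference data per context and supports an opt-out and a reset. The class and method names are invented for illustration and do not describe the actual designs of Netflix or Chrome.

```python
class ContextualRecommender:
    """Hypothetical sketch: per-context profiles, incognito, and reset."""

    def __init__(self):
        self.profiles = {}  # context name -> list of observed items

    def observe(self, context, item, incognito=False):
        if incognito:
            return  # user opted out of personalization: nothing is learned
        self.profiles.setdefault(context, []).append(item)

    def reset(self, context):
        # A 'reset' control lets the user escape the profile built so far.
        self.profiles[context] = []

rec = ContextualRecommender()
rec.observe("kids", "cartoon")                    # feeds the 'kids' profile
rec.observe("adult", "thriller", incognito=True)  # leaves no trace
```

Partitioning by context does not explain the algorithm itself, but comparing the output of two such profiles side by side could give users an indirect view of what the learned profile contributes.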
