Technologies increasingly mimic human-like social behaviours. Beyond prototypical conversational agents like chatbots, this also applies to basic automated systems, like app notifications or self-checkout machines, that address or 'talk to' users in everyday situations. Whilst early evidence suggests social cues may enhance user experience, we lack a good understanding of when, and why, their use may be inappropriate. Building on a survey of English-speaking smartphone users (n=80), we conducted experience sampling, interview, and workshop studies (n=11) to elicit people's attitudes and preferences regarding how automated systems talk to them. We thematically analysed examples of phrasings and conduct participants disliked, the reasons they gave, and what they would prefer instead. One category of inappropriate behaviour we identified concerns the use of social cues as tools for manipulation. We describe four unwanted tactics interfaces employ: playing on users' emotions (e.g., guilt-tripping or coaxing them), being pushy, 'mothering' users, and being passive-aggressive. Another category concerns pragmatics: personal or situational factors that can make a seemingly friendly or helpful utterance come across as rude, tactless, or invasive. These include failing to account for relevant contextual particulars (e.g., embarrassing users in public); expressing obviously false personalised care; or treating a user in ways that they find inappropriate for the system's role or the nature of their relationship. We discuss these behaviours in terms of an emerging 'social' class of dark and anti-patterns. Drawing on participant recommendations, we offer suggestions for improving how interfaces treat people in interactions, including broader normative reflections on treating users respectfully.