Session: "Algorithms in (Social) Practice"

Towards Algorithmic Experience: Initial Efforts for Social Media Contexts

Paper URL: http://dl.acm.org/citation.cfm?doid=3173574.3173860

Abstract: Algorithms influence most of our daily activities, decisions, and they guide our behaviors. It has been argued that algorithms even have a direct impact on democratic societies. Human-Computer Interaction research needs to develop analytical tools for describing the interaction with, and experience of algorithms. Based on user participatory workshops focused on scrutinizing Facebook's newsfeed, an algorithm-influenced social media, we propose the concept of Algorithmic Experience (AX) as an analytic framing for making the interaction with and experience of algorithms explicit. Connecting it to design, we articulate five functional categories of AX that are particularly important to cater for in social media: profiling transparency and management, algorithmic awareness and control, and selective algorithmic memory.

Summary:

Proposes the concept of Algorithmic Experience (AX) and investigates its influence on users through surveys and workshops concerning social media interfaces. Presents five guidelines, such as algorithmic profiling transparency, as a framework for AX.

Communicating Algorithmic Process in Online Behavioral Advertising

Paper URL: http://dl.acm.org/citation.cfm?doid=3173574.3174006

Abstract: Advertisers develop algorithms to select the most relevant advertisements for users. However, the opacity of these algorithms, along with their potential for violating user privacy, has decreased user trust and preference in behavioral advertising. To mitigate this, advertisers have started to communicate algorithmic processes in behavioral advertising. However, how revealing parts of the algorithmic process affects users' perceptions towards ads and platforms is still an open question. To investigate this, we exposed 32 users to why an ad is shown to them, what advertising algorithms infer about them, and how advertisers use this information. Users preferred interpretable, non-creepy explanations about why an ad is presented, along with a recognizable link to their identity. We further found that exposing users to their algorithmically-derived attributes led to algorithm disillusionment: users found that advertising algorithms they thought were perfect were far from it. We propose design implications to effectively communicate information about advertising algorithms.

Summary:

On the opacity of online advertising algorithms: 32 participants used three online advertising systems and wrote out the explanations of the algorithms they found desirable. Among the findings, once users recognized the fallibility of the algorithms, their sense of unease diminished.

Fairness and Accountability Design Needs for Algorithmic Support in High-Stakes Public Sector Decision-Making

Paper URL: http://dl.acm.org/citation.cfm?doid=3173574.3174014

Abstract: Calls for heightened consideration of fairness and accountability in algorithmically-informed public decisions, like taxation, justice, and child protection, are now commonplace. How might designers support such human values? We interviewed 27 public sector machine learning practitioners across 5 OECD countries regarding challenges understanding and imbuing public values into their work. The results suggest a disconnect between organisational and institutional realities, constraints and needs, and those addressed by current research into usable, transparent and 'discrimination-aware' machine learning: absences likely to undermine practical initiatives unless addressed. We see design opportunities in this disconnect, such as in supporting the tracking of concept drift in secondary data sources, and in building usable transparency tools to identify risks and incorporate domain knowledge, aimed both at managers and at the 'street-level bureaucrats' on the frontlines of public service. We conclude by outlining ethical challenges and future directions for collaboration in these high-stakes applications.

Summary:

Investigates, through interviews with 27 public-sector machine learning practitioners across 5 countries, the perspectives designers should consider for algorithmic support of public-sector decision-making. Raises five ethical problems, such as the purpose of data collection differing from the purpose of modeling, making the model's accountability untraceable.

A Qualitative Exploration of Perceptions of Algorithmic Fairness

Paper URL: http://dl.acm.org/citation.cfm?doid=3173574.3174230

Abstract: Algorithmic systems increasingly shape information people are exposed to as well as influence decisions about employment, finances, and other opportunities. In some cases, algorithmic systems may be more or less favorable to certain groups or individuals, sparking substantial discussion of algorithmic fairness in public policy circles, academia, and the press. We broaden this discussion by exploring how members of potentially affected communities feel about algorithmic fairness. We conducted workshops and interviews with 44 participants from several populations traditionally marginalized by categories of race or class in the United States. While the concept of algorithmic fairness was largely unfamiliar, learning about algorithmic (un)fairness elicited negative feelings that connect to current national discussions about racial injustice and economic inequality. In addition to their concerns about potential harms to themselves and society, participants also indicated that algorithmic fairness (or lack thereof) could substantially affect their trust in a company or product.

Summary:

Surveys impressions of algorithmic fairness among traditionally marginalized groups in the United States (by household income, race, education, etc.). The data showed that algorithmic unfairness evokes broader social problems for users, and that a lapse in a company's algorithmic fairness bears on trust in that company.