Session: "Crowdsourcing/crowdwork"

BSpeak: An Accessible Voice-based Crowdsourcing Marketplace for Low-Income Blind People

Paper URL: http://dl.acm.org/citation.cfm?doid=3173574.3173631

Abstract: BSpeak is an accessible crowdsourcing marketplace that enables blind people in developing regions to earn money by transcribing audio files through speech. We examine accessibility and usability barriers that 15 first-time users, who are low-income and blind, experienced while completing transcription tasks on BSpeak and Mechanical Turk (MTurk). Our mixed-methods analysis revealed severe accessibility barriers in MTurk due to the absence of landmarks, unlabeled UI elements, and improper use of HTML headings. Compared to MTurk, participants found BSpeak significantly more accessible and usable, and completed tasks with higher accuracy in lesser time due to its voice-based implementation. In a two-week field deployment of BSpeak in India, 24 low-income blind users earned ₹7,310 by completing over 16,000 transcription tasks to yield transcriptions with 87% accuracy. Through our analysis of BSpeak's strengths and weaknesses, we provide recommendations for designing crowdsourcing marketplaces for low-income blind people in resource-constrained settings.

Summary:

BSpeak is a voice-based crowdsourcing marketplace. It presents fewer accessibility barriers for blind users and outperforms Amazon Mechanical Turk in both accessibility and usability.

Crowdsourcing Rural Network Maintenance and Repair via Network Messaging

Paper URL: http://dl.acm.org/citation.cfm?doid=3173574.3173641

Abstract: Repair and maintenance requirements limit the successful operation of rural infrastructure. Current best practices are centralized management, which requires travel from urban areas and is prohibitively expensive, or intensively training community members, which limits scaling. We explore an alternative model: crowdsourcing repair from the community. Leveraging a Community Cellular Network in the remote Philippines, we sent SMS to all active network subscribers (n = 63) requesting technical support. From the pool of physical respondents, we explored their ability to repair through mock failures and conducted semi-structured interviews about their experiences with repair. We learned that community members would be eager to practice repair if allowed, would network to recruit more expertise, and seemingly have the collective capacity to resolve some common failures. They are most successful when repairs map directly to their lived experiences. We suggest infrastructure design considerations that could make repairs more tractable and argue for an inclusive approach.

Summary:

Repair and maintenance of rural infrastructure are difficult. In place of the current centralized-management model, the paper proposes crowdsourcing repair knowledge and capacity from the local community.

The Role of Gamification in Participatory Environmental Sensing: A Study In the Wild

Paper URL: http://dl.acm.org/citation.cfm?doid=3173574.3173795

Abstract: Participatory sensing (PS) and citizen science hold promises for a genuinely interactive and inclusive citizen engagement in meaningful and sustained collection of data about social and environmental phenomena. Yet the underlying motivations for public engagement in PS remain still unclear particularly regarding the role of gamification, for which HCI research findings are often inconclusive. This paper reports the findings of an experimental study specifically designed to further understand the effects of gamification on citizen engagement. Our study involved the development and implementation of two versions (gamified and non-gamified) of a mobile application designed to capture lake ice coverage data in the sub-arctic region. Emerging findings indicate a statistically significant effect of gamification on participants' engagement levels in PS. The motivation, approach and results of our study are outlined and implications of the findings for future PS design are reflected.

Summary:

The study investigates the role of gamification in public engagement with participatory sensing. A field comparison of gamified and non-gamified versions of the same application found that the gamified version had a statistically significant effect on participants' engagement levels.

A Data-Driven Analysis of Workers' Earnings on Amazon Mechanical Turk

Paper URL: http://dl.acm.org/citation.cfm?doid=3173574.3174023

Abstract: A growing number of people are working as part of on-line crowd work. Crowd work is often thought to be low wage work. However, we know little about the wage distribution in practice and what causes low/high earnings in this setting. We recorded 2,676 workers performing 3.8 million tasks on Amazon Mechanical Turk. Our task-level analysis revealed that workers earned a median hourly wage of only ~$2/h, and only 4% earned more than $7.25/h. While the average requester pays more than $11/h, lower-paying requesters post much more work. Our wage calculations are influenced by how unpaid work is accounted for, e.g., time spent searching for tasks, working on tasks that are rejected, and working on tasks that are ultimately not submitted. We further explore the characteristics of tasks and working patterns that yield higher hourly wages. Our analysis informs platform design and worker tools to create a more positive future for crowd work.
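The abstract's point that wage estimates depend on how unpaid work is counted can be illustrated with a small sketch. This is not the paper's actual methodology; the function name and the worker figures below are hypothetical, chosen only to show how folding search time and rejected work into the denominator pulls the effective hourly wage down.

```python
def effective_hourly_wage(earnings_usd, paid_task_hours, unpaid_hours=0.0):
    """Hypothetical wage estimator: earnings divided by ALL working time,
    where unpaid_hours covers task search, rejected work, and unsubmitted work."""
    total_hours = paid_task_hours + unpaid_hours
    if total_hours <= 0:
        raise ValueError("working time must be positive")
    return earnings_usd / total_hours

# A made-up worker: $16 earned across 4 hours of submitted, accepted tasks.
naive = effective_hourly_wage(16.0, 4.0)           # ignores unpaid time
# The same worker also spent 2 hours searching and on rejected tasks.
realistic = effective_hourly_wage(16.0, 4.0, 2.0)  # counts unpaid time
print(f"naive: ${naive:.2f}/h, with unpaid work: ${realistic:.2f}/h")
```

Under these made-up numbers, the naive estimate is $4.00/h while the unpaid-work-adjusted estimate drops to about $2.67/h, which is the direction of effect the abstract describes.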

Summary:

An analysis of 2,676 workers on Amazon Mechanical Turk found a median hourly wage of only about $2/h; just 4% of workers earned more than $7.25/h. The analysis informs platform design and worker tools toward a more positive future for crowd work.