Crowdsourcing software evaluation

Authors: Sherief, N., Jiang, N., Hosseini, M., Phalp, K. and Ali, R.

Series: ACM International Conference Proceeding Series

ISBN: 978-1-4503-2476-2

DOI: 10.1145/2601248.2601300

Abstract:

Crowdsourcing is an emerging online paradigm for problem solving that involves a large number of people, often recruited on a voluntary basis and rewarded with tangible or intangible incentives. It harnesses the power of the crowd to minimize costs and to solve problems that inherently require a large, decentralized and diverse crowd. In this paper, we advocate the potential of crowdsourcing for software evaluation, especially for complex and highly variable software systems that operate in diverse, even unpredictable, contexts. Through iterative feedback, the crowd can enrich developers' knowledge about software evaluation and keep it up to date. Although this seems promising, crowdsourcing evaluation introduces a new range of challenges, mainly concerning how to organize the crowd and provide the right platforms for obtaining and processing their input. We focus on the activity of obtaining evaluation feedback from the crowd and conduct two focus groups to understand the various aspects of this activity. Finally, we report a set of challenges that must be addressed to realize correct and efficient crowdsourcing mechanisms for software evaluation.

https://eprints.bournemouth.ac.uk/21895/

Source: Scopus

Crowdsourcing Software Evaluation

Authors: Sherief, N., Jiang, N., Hosseini, M., Phalp, K. and Ali, R.

Conference: The 18th International Conference on Evaluation and Assessment in Software Engineering (EASE 2014).

Dates: 13-14 May 2014

https://eprints.bournemouth.ac.uk/21895/

Source: Manual

Preferred by: Keith Phalp and Nan Jiang

Crowdsourcing software evaluation.

Authors: Sherief, N., Jiang, N., Hosseini, M., Phalp, K. and Ali, R.

Editors: Shepperd, M.J., Hall, T. and Myrtveit, I.

Proceedings: EASE

Pages: 19:1

Publisher: ACM

ISBN: 978-1-4503-2476-2

https://eprints.bournemouth.ac.uk/21895/

http://dl.acm.org/citation.cfm?id=2601248

Source: DBLP

Crowdsourcing Software Evaluation

Authors: Sherief, N., Jiang, N., Hosseini, M., Phalp, K.T. and Ali, R.

Conference: The 18th International Conference on Evaluation and Assessment in Software Engineering (EASE 2014).

https://eprints.bournemouth.ac.uk/21895/

Source: BURO EPrints