METHODOLOGY MATTERS
Is There a Method Choice Bias in Software Engineering?
Courtney Williams, Alexey Zagalsky, Margaret-Anne Storey
NIER ICSE 2018
Reflections on the past:
• Startling results that call current research directions into question;
• Bold arguments on current research directions that may be somehow misguided;
• Results that disregard established results or beliefs, offering evidence that calls for fundamentally new directions.
Visions of the future:
• Bold visions of new directions that may not yet be supported by solid results, but rather by a strong and well-motivated scientific intuition.
RESEARCH QUALITY
“What Makes Good Research in Software Engineering?” (Mary Shaw)
• Focuses on research questions, methods, and evaluation criteria
“Grounded Theory in Software Engineering Research: A Critical Review and Guidelines” (Stol, Ralph, and Fitzgerald)
• Focuses on Grounded Theory studies and aspects of quality in GT work
METHOD CHOICE
[Figure: McGrath’s circumplex of research strategies, arranged along a concrete-to-abstract axis]
FIELD STRATEGIES
FIELD STUDIES
• No manipulation
• Observing participants in their “natural environment”
FIELD EXPERIMENTS
• Introduce a controlled variable to the natural environment
EXPERIMENTAL STRATEGIES
LABORATORY EXPERIMENTS
• Controlled situations
• Outside of the participant’s natural environment
EXPERIMENTAL SIMULATIONS
• Controlled situations
• Simulating the participant’s natural environment in the lab setting
RESPONDENT STRATEGIES
SAMPLE SURVEYS
• Investigate the effects of a phenomenon on a population
• Relies on self-reports of participants
• Questionnaires, surveys, interviews
JUDGMENT STUDIES
• Investigate aspects of a phenomenon using a population
• Relies on self-reports
• Typically used to evaluate a tool or technique’s efficacy
FORMAL METHODS
No active human participation
COMPUTER SIMULATIONS
• Complete and closed system
• Data mining studies
• Computerized analysis of software
• Automatic tool evaluations using repository data
• Prediction and classification models (see the sketch below)
FORMAL THEORY
• No gathering of new empirical evidence
• The creation of models and theories
• Systematic literature reviews, meta-analyses, etc.
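To make the “prediction and classification models” bullet concrete, here is a minimal Python sketch of a repository-mining study that involves no active human participation. The dataset, file name (repo_metrics.csv), and column names are hypothetical illustrations, not from the paper:

# Minimal sketch of a "computer simulation" style study: a defect-prediction
# classifier trained on metrics mined from a version-control repository.
# The CSV file and its columns are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

df = pd.read_csv("repo_metrics.csv")
features = df[["loc", "churn", "num_authors", "past_fixes"]]
labels = df["is_defective"]  # 1 if a post-release defect touched the file

# Cross-validated evaluation; the whole study is a "complete and closed
# system" in McGrath's sense, with no human participants involved.
model = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(model, features, labels, cv=5, scoring="f1")
print(f"Mean F1 across folds: {scores.mean():.2f}")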
HUMAN PARTICIPANTS
ACTIVE
• Self-reports
• Visible observer
• Hidden observer
INACTIVE
• Public archival records
• Private archival records
• Trace measures
THE STUDY
• Applied McGrath’s models to SE
• Descriptions of these methods in the SE domain
• ICSE 2017 and 2016 technical track papers: 68 (2017) + 101 (2016) = 169 papers
• Classified in an Excel spreadsheet by research method and human involvement
• Inter-rater reliability: 72% (see the note below)
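A note on the 72% figure: inter-rater reliability for this kind of classification is commonly reported as raw percent agreement, sometimes alongside a chance-corrected statistic such as Cohen’s kappa (the slides do not say which was used). A minimal Python sketch with invented labels:

# Sketch of inter-rater reliability for two coders classifying papers by
# research strategy. The labels below are invented for illustration.
from sklearn.metrics import cohen_kappa_score

rater_a = ["field study", "lab experiment", "survey", "simulation", "survey"]
rater_b = ["field study", "lab experiment", "survey", "formal theory", "survey"]

# Raw percent agreement: fraction of papers where both coders agree.
agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)
print(f"Percent agreement: {agreement:.0%}")  # 80% on this toy data

# Cohen's kappa corrects the same comparison for chance agreement.
print(f"Cohen's kappa: {cohen_kappa_score(rater_a, rater_b):.2f}")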
FINDINGS
DISCUSSION
BIG DATA
When does it become inappropriate to conduct software engineering research using only big data resources and repositories?
NEW TECHNOLOGIES
In what circumstances is it inappropriate to conduct human research remotely?
Will future technologies make remote research as rigorous as in-person interaction?
How should we approach the study of virtual development environments?
HUMAN INVOLVEMENT
What are the implications of using inactive forms of human participation in the majority of our research? Is this how we want to move forward as a community?
METHOD BALANCE
What are the implications of this method “imbalance”? Is a balance even desired?