Data

Quantifying error in effect size estimates in executive function and implicit learning

The University of Queensland
Dr Kelly Garner (Aggregated by)

Contact Information

[email protected]
School of Psychology

Full description

An accurate quantification of effect sizes for an experimental manipulation has the power to motivate theory and to reduce misinvestment of scientific resources by informing power calculations during study planning. Such a quantification could, in principle, be achieved by meta-analysis. However, a combination of publication bias and small sample sizes (~N = 25) undermines confidence that such an analysis would yield an accurate estimate. We sought to determine the extent to which each of these caveats may produce error in effect size estimates for four commonly used paradigms assessing attention, executive function, and implicit learning: the attentional blink (AB), multitasking (MT), contextual cueing (CC), and the serial response task (SRT). We combined a large dataset with a bootstrapping approach to simulate 1000 experiments across a range of N (13-313). Beyond quantifying the effect size and statistical power that can be anticipated for each study design, we demonstrate that experiments with lower values of N lead to problematic information loss, potentially biasing power calculations. Furthermore, we show that for the CC and SRT paradigms, a meta-analysis of experiments with lower N is unlikely ever to converge on the true effect size, owing to underspecification of the mapping between theory and statistical model. We conclude with practical recommendations for researchers and demonstrate how our simulation approach can yield theoretical insights that are not readily achieved by other methods, such as identifying when qualitative individual differences exist in response to an experimental manipulation.
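The bootstrapping approach described in the abstract can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' code: a simulated set of within-subject difference scores stands in for the real dataset, a one-sample Cohen's d is assumed as the effect size measure, and a normal approximation replaces the t distribution in the significance test.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the full dataset: one within-subject
# difference score per participant (e.g. cued minus uncued RT).
full_sample = rng.normal(loc=0.3, scale=1.0, size=313)

def simulate_experiments(data, n, n_sims=1000, rng=rng):
    """Bootstrap n_sims experiments of size n from `data`.

    Returns the n_sims effect size estimates (one-sample Cohen's d)
    and the proportion of significant results (empirical power).
    """
    d_values = np.empty(n_sims)
    significant = np.empty(n_sims, dtype=bool)
    for i in range(n_sims):
        # Resample n participants with replacement to mimic one experiment.
        sample = rng.choice(data, size=n, replace=True)
        mean, sd = sample.mean(), sample.std(ddof=1)
        d_values[i] = mean / sd                # one-sample Cohen's d
        t = mean / (sd / np.sqrt(n))           # one-sample t statistic
        # Crude two-tailed test at alpha = .05 via a normal approximation.
        significant[i] = abs(t) > 1.96
    return d_values, significant.mean()

for n in (13, 25, 100, 313):
    d, power = simulate_experiments(full_sample, n)
    print(f"N={n:3d}  mean d={d.mean():.2f}  power={power:.2f}")
```

Repeating this over a range of N traces out how both the spread of effect size estimates and the achievable power change with sample size, which is the comparison the abstract describes.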

Issued: 2022

This dataset is part of a larger collection

Subjects

executive function; implicit learning; PSYCHOLOGY AND COGNITIVE SCIENCES; COGNITIVE SCIENCE


Identifiers

DOI: 10.48610/1e6bf9a
RDM ID: c152ad80-8248-11ec-81db-7f0174d2d664