In contrast, when participants actively attempted to retrieve primes from their definitions, no phonological facilitation was observed. Successful retrieval of semantic and both primes facilitated subsequent target retrieval, whereas failure to retrieve semantic and both primes inhibited it. These facilitatory and inhibitory influences of prime retrieval were independent of feedback on retrieval performance (Experiment 4) and of participants' overall knowledge of the primes and targets (Experiment 5), and they did not extend to retrieval from episodic memory (Experiment 6). The findings are consistent with ongoing retrospective processes during target retrieval, which reengage prime retrieval success or failure and consequently produce benefits and costs during repeated retrieval from semantic memory.

Determining the number of factors is one of the most crucial decisions a researcher faces when conducting an exploratory factor analysis. Because no common factor retention criterion can be considered generally superior, a new approach is proposed that combines extensive data simulation with state-of-the-art machine learning algorithms. First, data were simulated under a broad range of realistic conditions, and 3 algorithms were trained using specially designed features based on the correlation matrices of the simulated data sets. Subsequently, the new approach was compared with 4 common factor retention criteria with regard to its accuracy in determining the correct number of factors in a large-scale simulation experiment. Sample size, variables per factor, correlations between factors, primary and cross-loadings, and the correct number of factors were varied to gain comprehensive knowledge of the efficiency of the new method. A gradient boosting model outperformed all other criteria, so in a second step we improved this model by tuning several hyperparameters of the algorithm and by using common retention criteria as additional features. This model reached an out-of-sample accuracy of 99.3% (the pretrained model can be obtained from https://osf.io/mvrau/). A great advantage of this approach is the possibility of continuously extending both the simulated data basis (e.g., with ordinal data) and the feature set, in order to improve predictive performance and increase generalizability.
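As a rough, self-contained illustration of this kind of pipeline (not the authors' actual implementation, which is available pretrained at https://osf.io/mvrau/), the following Python sketch simulates data sets from factor models with a known number of factors, uses the sorted eigenvalues of the sample correlation matrix as features, and trains scikit-learn's GradientBoostingClassifier to recover the factor count. The simulation design, the eigenvalue-only feature set, and the default hyperparameters are all simplifying assumptions.

```python
# Minimal sketch of a simulation-plus-boosting factor retention approach.
# NOT the authors' pipeline: loading structure, feature set, and
# hyperparameters are simplified assumptions for illustration only.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def simulate_features(k, p=12, n=500, loading=0.6):
    """Simulate n observations of p variables from a k-factor model and
    return the descending eigenvalues of the correlation matrix."""
    lam = np.zeros((p, k))
    for j in range(p):                       # assign each variable to one factor
        lam[j, j % k] = loading
    psi = np.diag(1 - (lam ** 2).sum(axis=1))  # unique variances
    sigma = lam @ lam.T + psi                  # implied correlation matrix
    x = rng.multivariate_normal(np.zeros(p), sigma, size=n)
    return np.linalg.eigvalsh(np.corrcoef(x, rowvar=False))[::-1]

# Build a labeled training set: label = true number of factors (1..4).
X, y = [], []
for k in range(1, 5):
    for _ in range(200):
        X.append(simulate_features(k))
        y.append(k)
X, y = np.array(X), np.array(y)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = GradientBoostingClassifier().fit(X_tr, y_tr)
print("out-of-sample accuracy:", clf.score(X_te, y_te))
```

In the full approach described above, the outputs of conventional retention criteria (e.g., parallel analysis) serve as additional features; they are omitted here for brevity.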
Reporting the reliability of the scores obtained from a scale or test is part of the standard repertoire of empirical studies in psychology. With reliability being a key concept in psychometrics, researchers have become increasingly interested in evaluating reliability coefficients across studies and, ultimately, in quantifying and explaining possible between-study variation. This approach, commonly known as "reliability generalization," can be specified within the framework of meta-analysis. The existing procedures of reliability generalization, however, have several methodological issues: (a) unrealistic and often untested assumptions about the measurement model underlying the reliability coefficients (e.g., essential τ-equivalence for Cronbach's α); (b) the use of univariate approaches to synthesizing reliabilities of total and subscale scores; and (c) the lack of comparability across different types of reliability coefficients. However, these issues can be addressed directly through meta-analytic structural equation modeling (MASEM), a method that combines meta-analysis with structural equation modeling by synthesizing either correlation matrices or model parameters across studies. The primary objective of this article is to present the potential of MASEM for the meta-analysis of reliability coefficients. We review the extant body of literature on reliability generalization, discuss and illustrate two MASEM approaches (i.e., correlation-based and parameter-based MASEM), and propose some p
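To make the measurement-model issue in (a) concrete, the notation below follows standard psychometric conventions rather than the article itself: Cronbach's α equals the reliability of a k-item total score only under essential τ-equivalence, whereas a parameter-based MASEM can pool congeneric loadings and unique variances across studies and compute coefficient ω from them.

```latex
% Essential tau-equivalence: every item measures the true score T with a
% unit loading, X_i = \mu_i + T + \varepsilon_i. Under this model,
% Cronbach's alpha equals the reliability of the total score X = \sum X_i:
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma^2_{X_i}}{\sigma^2_X}\right)

% Congeneric model: X_i = \nu_i + \lambda_i\,\eta + \varepsilon_i, with
% item-specific loadings \lambda_i and unique variances \theta_{ii}.
% Assuming Var(\eta) = 1 and uncorrelated errors, coefficient omega can
% be computed from (pooled) factor-model parameters:
\omega = \frac{\bigl(\sum_{i=1}^{k}\lambda_i\bigr)^2}
              {\bigl(\sum_{i=1}^{k}\lambda_i\bigr)^2 + \sum_{i=1}^{k}\theta_{ii}}
```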