When individuals retrieve facts from memory or predict future events, how can we aggregate their judgments to maximize accuracy? We analyze the performance of individuals in a number of ranking tasks, such as reconstructing the order of historic events from memory (e.g., the order of US presidents), as well as forecasting tasks in which individuals judge the likelihood of geopolitical events. Participants either provide judgments independently or share information in iterated learning environments, where each individual in a chain combines their own independent judgment with the judgment passed on by the previous individual. We propose that a successful aggregation approach requires a cognitive modeling framework that considers a number of psychological factors, including individual differences in skill and expertise, systematic distortions in human judgments, and the role of information sharing. We develop Bayesian cognitive models that assume that each individual's judgment is based on random samples from distributions centered on a latent ground truth, and that each individual is associated with a latent level of domain knowledge. The models demonstrate a wisdom of crowds effect, in which the aggregated judgments are closer to the true answer than the majority of individual judgments. The models also demonstrate that the degree of knowledge of each individual can be recovered in the absence of any explicit feedback or access to the ground truth, and they suggest ways in which limited information sharing can improve performance.
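To make the generative assumption concrete, the sketch below simulates a Thurstonian-style ranking model under illustrative assumptions: the item locations, number of judges, and per-judge noise levels (`sigma`) are hypothetical, and a simple Borda-style mean-rank aggregate stands in for the full Bayesian inference described above; it is not the paper's exact model. The simulation illustrates both the wisdom of crowds effect and a feedback-free expertise proxy, namely each judge's distance to the crowd consensus.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 10 items with latent true locations 0..9, and
# 50 judges, each with a latent noise level sigma_j (smaller = more expert).
n_items, n_judges = 10, 50
truth = np.arange(n_items, dtype=float)        # latent ground-truth locations
sigma = rng.uniform(0.5, 4.0, size=n_judges)   # latent per-judge noise levels

# Each judge ranks items by a noisy sample around the latent truth
# (a Thurstonian-style generative assumption).
samples = truth + rng.normal(0.0, sigma[:, None], size=(n_judges, n_items))
rankings = np.argsort(np.argsort(samples, axis=1), axis=1)  # ranks per judge

def kendall_tau_distance(rank_a, rank_b):
    """Count discordant item pairs between two rankings (lower = closer)."""
    n = len(rank_a)
    return sum(
        (rank_a[i] - rank_a[j]) * (rank_b[i] - rank_b[j]) < 0
        for i in range(n) for j in range(i + 1, n)
    )

true_rank = np.argsort(np.argsort(truth))
individual = np.array([kendall_tau_distance(r, true_rank) for r in rankings])

# Crowd aggregate: rank items by their mean rank across judges
# (a Borda-style stand-in for the full Bayesian aggregation).
crowd_rank = np.argsort(np.argsort(rankings.mean(axis=0)))
crowd = kendall_tau_distance(crowd_rank, true_rank)

print(f"crowd distance to truth: {crowd}")
print(f"share of individuals the crowd beats: {(individual > crowd).mean():.0%}")

# Feedback-free expertise proxy: distance from each judge to the crowd
# consensus should correlate with the judge's latent noise level sigma.
to_crowd = np.array([kendall_tau_distance(r, crowd_rank) for r in rankings])
print(f"corr(sigma, distance to crowd): {np.corrcoef(sigma, to_crowd)[0, 1]:.2f}")
```

Under these assumptions the aggregate ranking typically lands closer to the true order than most individual rankings, and the positive correlation between `sigma` and distance to the consensus shows how individual expertise can be estimated without any access to the ground truth.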