This critical review of the integrated assessment modeling (IAM) research underlying the AMPERE study is also relevant to many other IAM-based model comparison papers. One of the main symptoms of the serious methodological problems in these studies is that the results produced by different models for what are portrayed as the “same” scenarios differ substantially from one another. The authors of the AMPERE study correctly raise the important question of whether these differences stem primarily from differences in model structures or from differences in the input assumptions that different research teams use for the “same” scenario, but they never address this question in a logically systematic and credible way. Indeed, they cannot and do not arrive at an answer, since each modeling team generally relies on a single, and different, set of values for most input assumptions within the same scenario. Finally, the research teams involved in the AMPERE project, and in other similar projects, fail to recognize the fundamental impossibility of forecasting net mitigation costs or benefits over the long run, given the practical and deep uncertainties implicit both in the equations comprising these IAMs and in the input assumptions on which they rely.