The climate crisis illustrates the critical need for earth and environmental models that assess the Earth's past and future by translating emissions into climate signals and their subsequent impacts, such as floods, droughts, or heatwaves, as well as future resource availability. As computational models grow in relevance by guiding policies and public discourse, our trust in these models is put to the test. A recent study estimates that 93% of published hydrology and water resources studies cannot be reproduced. In this perspective, we question whether we are amid a reproducibility crisis in the computational earth sciences and peek behind the curtain of everyday research.

Software development has become an integral part of research in most areas, including the earth sciences, where computational models and data processing algorithms grow increasingly sophisticated to solve the challenges of our time. Paradoxically, this development poses a threat to scientific progress: reproducibility, an essential pillar of science, is increasingly difficult to achieve or even to test. This trend is particularly worrisome because scientific results can have controversial implications for stakeholders and policymakers and may influence public opinion and decisions for a long time. In recent years, progress towards Open Science has led more publishers to demand access to data and source code alongside peer-reviewed manuscripts; yet recent studies find that less reproducible research may even be cited more frequently. We argue that we insufficiently understand how the earth science community currently attempts to reproduce computational results and what challenges it faces in this effort. To what do scientists attribute this lack of reproducibility in computational earth sciences, and what are possible solutions?
In this perspective, we survey the community on what it considers necessary and paint a picture of a future that fosters reproducible computational science and thus trust.