Scientific community in the United States calls for safeguarding integrity of science in the age of generative AI
In a groundbreaking move, a panel of leading experts from academia, industry, and government has called on the scientific community to uphold the core values and norms of science amid revolutionary advances in generative artificial intelligence (AI), and has proposed establishing a Strategic Council on the Responsible Use of Artificial Intelligence in Science.
Convened by the National Academy of Sciences, the Annenberg Public Policy Center, and the Annenberg Foundation Trust, the interdisciplinary panel issued a stark warning: although rapid advances in generative AI mark a transformative moment for science and are accelerating discoveries, they are also challenging core scientific norms such as accountability, transparency, and reproducibility in research.
Strategic Council on the Responsible Use of Artificial Intelligence in Science
In an editorial published in PNAS (https://doi.org/10.1073/pnas.2407886121), the panel proposes the establishment of a Strategic Council on the Responsible Use of Artificial Intelligence in Science under the National Academies of Sciences, Engineering, and Medicine. This council would coordinate with the scientific community, provide updated guidance on appropriate AI uses, study emerging ethical and societal concerns, and develop best practices.
Generative AI systems, trained on vast bodies of scientific literature and data, can generate coherent text, imagery, and analyses, pushing the boundaries of automated content creation. This power, however, raises concerns about verifying the accuracy of AI-generated information and attributing its sources, maintaining transparency, enabling replication of studies, and mitigating biases introduced by algorithms and training data.
Five principles of human accountability
To address these challenges, the panel has endorsed five principles of human accountability and responsibility for the use of AI in science:
- Transparent disclosure and attribution: Scientists should clearly disclose the use of generative AI, attribute human and AI contributions, and ensure proper citation of human expertise and prior literature.
- Verification of AI-generated content and analyses: Scientists are accountable for validating the accuracy and reliability of AI-generated inferences, monitoring biases, and providing thorough disclosure of evidence.
- Documentation of AI-generated data: Synthetic data, inferences, and imagery should be marked with provenance information to distinguish them from real-world observations.
- Focus on ethics and equity: Scientists and model creators should ensure ethical and socially beneficial uses of AI, mitigate potential harms, promote equity in access and applications, and solicit meaningful public participation.
- Continuous monitoring, oversight, and public engagement: The scientific community should continuously monitor and evaluate the impact of AI, adapt strategies as necessary, and disseminate findings broadly.
As generative AI reshapes the scientific landscape, the panel urges the scientific community to proactively safeguard the norms and values of science through adherence to these principles, ongoing governance efforts, and public engagement. The panel believes that by embracing these measures, the pursuit of trustworthy science for the benefit of all can be upheld in this transformative era of AI.