Conference on Lifelong Learning Agents
These guidelines are based on the NeurIPS 2021 Ethics Guidelines, which were prepared by Samy Bengio, Kate Crawford, Jeanne Fromer, Iason Gabriel, Amanda Levendowski, Deborah Raji, and Marc'Aurelio Ranzato, with support and feedback from the NeurIPS 2021 Program Chairs Alina Beygelzimer, Yann Dauphin, Percy Liang, and Jenn Wortman Vaughan. The CoLLAs guidelines follow the NeurIPS 2021 guidelines closely, with very minor modifications by Razvan Pascanu. We urge the reader to also consult the original guidelines from the NeurIPS conference!
As ML research and applications have increasing real-world impact, the likelihood of meaningful social benefit increases, as does the attendant risk of harm. Indeed, problems with data privacy, algorithmic bias, automation risk, and potential malicious uses of AI have been well-documented [e.g., 1].
In light of these findings, ML researchers can no longer ‘simply assume that... research will have a net positive impact on the world’ [2]. The research community should consider not only the potential benefits but also the potential negative societal impacts of ML research, and adopt measures that enable positive trajectories to unfold while mitigating risk of harm. We expect authors to discuss such ethical and societal consequences of their work in their papers, while avoiding excessive speculation.
This document is intended for both authors and reviewers (including regular reviewers and ethics reviewers), so that all parties share a common understanding of the CoLLAs ethics principles. The primary goal of the CoLLAs ethics review is to provide critical feedback for the authors to incorporate into the paper. In rare situations, however, CoLLAs reserves the right to reject submissions that violate key ethical principles.
There are two aspects of ethics to consider: potential negative societal impacts and general ethical conduct.
Submissions to CoLLAs are expected to include a discussion about potential negative societal impacts of the proposed research artifact or application, when these are apparent. Whenever these are identified, submissions should also include a discussion about how these risks can be mitigated.
Grappling with ethics is a difficult problem for the field, and thinking about ethics is still relatively new to many authors. Given the contested nature of these questions, we choose to place a strong emphasis on transparency. In certain cases, it will not be possible to draw a bright line between ethical and unethical research. A paper should therefore discuss any potential issues, welcoming a broader discussion that engages the whole community.
We expect authors to be open to discussing such issues during the review process, and to be willing to add a discussion of the potential impact of their work on society if prompted to do so. A common difficulty with assessing ethical impact is its indirectness: most papers focus on general-purpose methodologies, whereas ethical concerns are more apparent when considering deployed applications. Our aim is to do the best we can to identify potential risks and to be transparent about them.
Submissions must adhere to ethical standards for responsible research practice and due diligence in its conduct. Plagiarism in any form is strictly forbidden, as is the unethical use of privileged information by reviewers, such as sharing it or using it for any purpose other than the reviewing process.
If the research uses human-derived data, consider whether that data might:
- contain personally identifiable information or other sensitive personal information;
- contain information that could be deduced about individuals that they have not consented to share;
- encode, contain, or potentially exacerbate bias against people of a certain gender, race, sexuality, or with other protected characteristics;
- involve human subject experimentation that has not been reviewed and approved by a relevant oversight board;
- have been discredited by its creators.
This list is not intended to be exhaustive — it is included here as a prompt for author and reviewer reflection.
In summary, we expect CoLLAs submissions to include a discussion of potential harms, malicious uses, and other ethical concerns arising from the proposed approach or application, together with a discussion of methods to mitigate such risks. Authors should also adhere to best practices in their handling of data. Whenever there are risks associated with the proposed methods, application, or data collection and usage, authors are expected to explain the rationale for their decisions and the potential mitigations.
Additionally, behaviors such as plagiarism or violating the anonymity of the review process are not permitted.
[1] J. Whittlestone, R. Nyrup, A. Alexandrova, K. Dihal, and S. Cave. (2019) Ethical and societal implications of algorithms, data, and artificial intelligence: a roadmap for research. London: Nuffield Foundation.
[2] B. Hecht, L. Wilcox, J. P. Bigham, J. Schöning, E. Hoque, J. Ernst, Y. Bisk, L. De Russis, L. Yarosh, B. Anjum, D. Contractor, and C. Wu. (2018) It’s Time to Do Something: Mitigating the Negative Impacts of Computing Through a Change to the Peer Review Process. ACM Future of Computing Blog.