> **Note:** This is currently provisional, as it has not yet been approved by the community.

At Project Elara, we [deeply value scientific integrity](https://handbook.elaraproject.org/basics/about/open-philosophy.html#scientific-integrity) and want to ensure that the science and research we produce are of the highest possible quality. Thus, as of December 27th, 2025, we have adopted a new **AI use policy** aimed at protecting our work from [lackluster AI-generated research](https://www.nature.com/articles/d41586-025-02616-5).

Our policy boils down to a single requirement: **all of our research must be original and authentic**. While AI may be used for literature/web search or for validating existing work (essentially as an enhanced search engine), the following are prohibited:

- Directly copying or paraphrasing the outputs of an AI model
- Using AI-generated ideas, derivations, etc.
- Using images or videos generated by an AI model

We have several major reasons for implementing this policy:

- [AI hallucination](https://en.wikipedia.org/wiki/Hallucination_(artificial_intelligence)) means that AI models can produce believable content that is false or misleading
- AI does not properly attribute its sources or reveal what they are, potentially leading to copyright infringement that violates the terms of our [public domain license](https://creativecommons.org/public-domain/cc0/)
- [Major publishers](https://www.nature.com/nature-portfolio/editorial-policies/ai) ban or strictly control the use of AI in research papers
- AI companies have a [terrible record of unethical behavior](https://www.forbes.com/sites/tomchavez/2024/01/31/openai--the-new-york-times-a-wake-up-call-for-ethical-data-practices/) and [unchecked profit maximization](https://thezvi.substack.com/p/openai-moves-to-complete-potentially) that is the antithesis of everything we believe in

In addition, we **strongly urge** using our own, self-hosted **ethical AI models**, and putting resources into developing [our own machine learning libraries](https://codeberg.org/elaraproject/elara-ml). This gives us the assurance that these models are fully under our control and represent our values, rather than those of companies with highly exploitative goals.

We leave the ultimate decision to the personal discretion of each of our members. **Any content that violates our policies** is to be immediately flagged and then removed (if possible without breaking links) or archived (if removal would break links). While we are aware that many other researchers do not share our view, we believe in taking the time to work on science with substance and quality rather than publishing bad science quickly.