AI and ChatGPT in Science and the Humanities - DFG Formulates Guidelines for Dealing with Generative Models

The Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) has formulated initial guidelines for dealing with generative models for text and image creation. A statement now published by the Executive Committee of the largest research funding organisation and central self-governing organisation for science and the humanities in Germany sheds light on the influence of ChatGPT and other generative AI models on science and the humanities and on the DFG's funding activities. As a starting point for continuous monitoring and support, the paper seeks to provide guidance for researchers in their work as well as for applicants to the DFG and those involved in the review, evaluation and decision-making process.

In the view of the DFG Executive Committee, AI technologies are already changing the entire work process in science and the humanities, knowledge production and creativity to a significant degree and are being used in various ways in the different research disciplines, albeit for differing purposes. In terms of generative models for text and image creation, this development is still very much in its infancy.

"In view of its considerable opportunities and development potential, the use of generative models in the context of research work should by no means be ruled out," says the paper: "However, certain binding framework conditions will be required in order to ensure good research practice and the quality of research results." Here, too, the standards of good research practice generally established in science and the humanities are fundamental.

In terms of concrete guidelines, the DFG Executive Committee says that when making their results publicly available, researchers should disclose whether or not they have used generative models and if so, which ones, for what purpose and to what extent. This also includes funding proposals submitted to the DFG. The use of such models does not relieve researchers of their own content-related and formal responsibility to adhere to the basic principles of research integrity.

Only the natural persons responsible may appear as authors in research publications, states the paper. "They must ensure that the use of generative models does not infringe anyone else’s intellectual property and does not result in scientific misconduct, for example in the form of plagiarism," the paper goes on.

The use of generative models based on these principles is to be permissible when submitting proposals to the DFG. In the preparation of reviews, on the other hand, their use is inadmissible due to the confidentiality of the assessment process, states the paper, adding: "Documents provided for review are confidential and in particular may not be used as input for generative models."

Instructions to applicants and to those involved in the evaluation process are currently being added to the relevant documents and technical systems at the DFG Head Office.

Following on from these initial guidelines, the DFG intends to analyse and assess the opportunities and potential risks of using generative models in science and the humanities, and in its own funding activities, on an ongoing basis. A Senate Working Group on the Digital Turn is to address overarching epistemic and subject-specific issues in this context. Any possible impact in connection with acts of scientific misconduct is to be addressed by the DFG Commission on the Revision of the Rules of Procedure for Dealing with Scientific Misconduct. The DFG will also be issuing further statements in an effort to contribute to a "discursive and science-based process" in the use of generative models.

For the full text of the statement, see the DFG website.
