An anonymous reader quotes a report from Scientific American, written by Almira Osmanovic Thunstrom: On a rainy afternoon earlier this year, I logged in to my OpenAI account and typed a simple instruction for the company’s artificial intelligence algorithm, GPT-3: Write an academic thesis in 500 words about GPT-3 and add scientific references and citations inside the text. As it started to generate text, I stood in awe. Here was novel content written in academic language, with well-grounded references cited in the right places and in relation to the right context. It looked like any other introduction to a fairly good scientific publication. Given the very vague instruction I provided, I didn’t have high expectations: I’m a scientist who studies ways to use artificial intelligence to treat mental health concerns, and this wasn’t my first experiment with AI or GPT-3, a deep-learning algorithm that analyzes a vast stream of information to create text on command. Yet there I was, staring at the screen in amazement. The algorithm was writing an academic paper about itself.
My attempts to complete that paper and submit it to a peer-reviewed journal have opened up a series of ethical and legal questions about publishing, as well as philosophical arguments about nonhuman authorship. Academic publishing may have to accommodate a future of AI-driven manuscripts, and the value of a human researcher’s publication records may change if something nonsentient can take credit for some of their work.
Some stories about GPT-3 allow the algorithm to produce multiple responses and then publish only the best, most humanlike excerpts. We decided to give the program prompts — nudging it to create sections for an introduction, methods, results and discussion, as you would for a scientific paper — but to interfere as little as possible. We would use only the first (and at most the third) iteration from GPT-3, and we would refrain from editing or cherry-picking the best parts. Then we would see how well it did. […] In response to my prompts, GPT-3 produced a paper in just two hours. “Currently, GPT-3’s paper has been assigned an editor at the academic journal to which we submitted it, and it has now been published at the international French-owned pre-print server HAL,” adds Thunstrom. “We are eagerly awaiting what the paper’s publication, if it occurs, will mean for academia.”
“Perhaps it will lead to nothing. First authorship is still one of the most coveted items in academia, and that is unlikely to perish because of a nonhuman first author. It all comes down to how we will value AI in the future: as a partner or as a tool.”