Asian Scientist Magazine (Nov. 14, 2023) — Writing has taken many shapes and forms throughout history, from picture writing engraved in stone to graphite rubbed on paper, and from the classic typewriter to the modern keyboard. Despite the differences in medium, all these forms of writing had one thing in common: a human brain generated the content.
However, in November 2022, a new form of writing was introduced to the world—ChatGPT, a generative artificial intelligence (AI) that can write like a person. In the few months following its release, hundreds of generative AI tools have flooded the internet—some capable of producing artwork and poetry, others of mimicking real human voices.
Generative AI works similarly to the human brain in that it first needs to be trained on vast amounts of information. In the case of ChatGPT, the internet was its data source. These types of programs—known generally as large language models (LLMs)—work by integrating a prompt or command with the patterns and connections learned during training to generate a response or create new content.
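The train-then-generate loop can be illustrated with a toy sketch. Real LLMs like ChatGPT use neural networks with billions of parameters, but the following minimal Python bigram model (a deliberate simplification, not OpenAI's actual method) captures the same idea: count which word tends to follow which during "training", then extend a prompt one word at a time using those learned patterns.

```python
# Toy illustration of the LLM idea: learn word-to-word patterns from a
# small "training" corpus, then extend a prompt using those patterns.
import random
from collections import defaultdict

corpus = (
    "the model learns patterns from text and the model "
    "generates text from patterns it learned during training"
).split()

# "Training": record which word follows which in the corpus.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(prompt: str, length: int = 8) -> str:
    """Extend the prompt by sampling a plausible next word at each step."""
    words = prompt.split()
    for _ in range(length):
        candidates = follows.get(words[-1])
        if not candidates:  # no learned continuation for this word
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("the model"))
```

Scaled up by many orders of magnitude, with neural networks standing in for the simple lookup table, this is the sense in which an LLM "integrates" a prompt with what it learned during training.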
Apart from the rules coded into it, content generated by AI requires no human intervention. Nevertheless, it is capable of writing at a remarkably high level of proficiency. GPT-4, the latest iteration of OpenAI's GPT series, scored 710 out of 800 on the Evidence-Based Reading and Writing portion of the SAT, the standardized college admissions test in the United States—181 points higher than the 2022 national average.
With such an impressive performance, the temptation to use this technology to assist or replace human writing is irresistible. As a case in point, ChatGPT has already been used to assist in scientific writing and has even been listed as an author on studies—at least four instances were discussed in a recent report published in Nature, one of the world's leading scientific journals.
Where we go from here requires dialogue on a series of pressing questions—both moral and practical—about the use of generative AI in writing, specifically in scientific publication: when can it be used, or should it be used at all?
THE ELEPHANT IN THE ROOM
According to Nature, generative AI can be used in scientific writing and publication under certain circumstances and as long as its use is clearly spelled out. Their editorial policy states, “Use of an LLM should be properly documented in the Methods section (and if a Methods section is not available, in a suitable alternative part) of the manuscript.”
But not everyone is on the same page. For example, Science—another major scientific publication—has a slightly different take on the matter. “Text generated from AI, machine learning, or similar algorithmic tools cannot be used in papers published in Science journals, nor can the accompanying figures, images, or graphics be the products of such tools, without explicit permission from the editors.” Crossing these boundaries is a serious offence and constitutes scientific misconduct, the editorial added.
“The world of generative AI is still in its early stages, making it difficult for publications to firm up rules around use,” Chris Stokel-Walker, a science journalist who has written on this issue for Nature, told Asian Scientist Magazine. “It’s likely that those rules and policies will change over time.”
A REGULATORY NIGHTMARE
As with any new technology that enters our diverse society, there will be a wide range of perspectives on its use, from unrestricted freedom to outright bans. This happened, for instance, when the calculator was introduced in schools. In 1986, teachers and parents filled the streets of Sumter, South Carolina, in protest, fearing their children would never be able to do or understand mathematics without a calculator in their back pocket.
In most such cases, the real question is not whether the technology should be allowed, but how it ought to be used while doing minimal harm.
“You don’t want to over-regulate, which would mean denying your population the benefit of these technologies. But if you under-regulate them, you run the risk of technology going so far ahead that it becomes very difficult to regulate and some harm may take place,” said Simon Chesterman, David Marshall Professor and senior director of AI Governance at the National University of Singapore, in an interview with Asian Scientist Magazine.
The calculator made it possible for humans to perform much more complex mathematics, but at the cost of basic mental arithmetic.
DO THE BENEFITS OUTWEIGH THE RISKS?
Learning how to write and produce scientific literature is a difficult task, further complicated by language barriers given that English is the international language of science. It takes many years of practice and study to write effectively in science. But when new technology can shortcut this learning curve and expedite the societal benefits of new scientific knowledge, there is an argument to be made that the benefits outweigh the risks.
“AI rewriting tools can be helpful for researchers who may have excellent ideas and reading abilities, but struggle to express their thoughts effectively in writing,” said Aw Ai Ti, head of the Aural & Language Intelligence (ALI) department at A*STAR’s Institute for Infocomm Research (I2R), in an interview with Asian Scientist Magazine. Aw develops language processing and machine translation technologies, such as SGTranslate, to facilitate information sharing by overcoming such language barriers.
Aw also argued that tools like ChatGPT can produce more refined papers, giving readers a better understanding of the content. “Generally, it can be used to enhance productivity for scientists by summarizing long paragraphs of information for ease of reading and understanding, or to check for any spelling or grammatical errors,” said Aw, adding that cross-checking would still be needed and appropriate acknowledgement should be given to the technology, which should not be used to replace any form of original writing.
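As a concrete picture of the kind of proofreading workflow Aw describes, here is a minimal sketch using the OpenAI Python client. The model name and prompt wording are illustrative assumptions, not a prescribed setup, and any output would still need the human cross-checking Aw recommends.

```python
# Hedged sketch of an AI-assisted grammar check (pip install openai).
# Assumes an API key in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

draft = "The results shows that the treatment effect were significant."

response = client.chat.completions.create(
    model="gpt-4",  # assumed model name; substitute whichever is available
    messages=[
        {"role": "system",
         "content": "Fix grammar and spelling only. Do not add new claims."},
        {"role": "user", "content": draft},
    ],
)

# The corrected text must still be verified by the author before use.
print(response.choices[0].message.content)
```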
THE ACCOUNTABILITY QUESTION
Generative AI could, in theory, become sophisticated enough to publish a scientific paper given sufficient training and the correct prompts. Despite the differences in guidelines between major scientific journals like Science and Nature, they have one thing in common: no authorship for AI.
The problem is that the underlying mechanics of generative AI do not facilitate understanding in the way that a human understands, said Chesterman. “It is therefore wrong to attribute authorship—in the way we mean authorship—to these entities.”
A good part of the current conversation about generative AI in scientific publishing concerns accountability. Who will be held accountable if an AI generates and references a fake research paper, or confuses a patient’s blood pressure reading with their home address? Such failures would erode trust in the integrity and accountability of the scientific enterprise.
Deepfakes, personalized phishing campaigns, and fake news already plague our society. As AI-generated content gets closer to what a human might produce, it will be ever more prudent for AI companies to uphold the highest standards of transparency, especially as the technology integrates into our daily and professional lives. As for scientific writing, it remains to be seen to what extent generative AI will transform the publishing ecosystem.
—
This article was first published in the print version of Asian Scientist Magazine, July 2023.
—
Copyright: Asian Scientist Magazine. Illustration: Wong Wey Wen/Asian Scientist Magazine