Friend or foe?

Nature, Published online: 16 March 2026; doi: 10.1038/d41586-026-00843-y

Graduate students increasingly use artificial-intelligence tools to draft, code and search — but many fear that doing so could erode the very skills a doctorate is meant to build.


Linda Nordling is a freelance journalist in Cape Town, South Africa.

Illustration: Antonio Rodríguez
Leona Diala, who is studying infectious-disease modelling at the University of Abuja in Nigeria, is concerned that overuse of AI is undermining essential academic skills.

She checks every reference and fact that AI tools give her and rewrites their text in her own words.

But she worries about the next generation of researchers.

When she was an undergraduate, she says, “there was no AI like this.

We sat for hours and read, practised, tried and retried until we got it.

Now people want AI to write everything.” She adds: “AI is a blessing, but it has made students lazy.

People don’t go the extra mile to build the skill.”
Leona Diala checks every reference and fact that AI tools give her.

Credit: Courtesy of Leona Diala
Diala’s ambivalence is typical of PhD students.

When Nature surveyed almost 3,800 PhD students last year, three-quarters thought AI tools could help students to work more efficiently, and 71% felt it was acceptable to use them to support their studies — yet the majority also voiced strong concerns.

Some 81% said they don’t fully trust AI tools and 65% worried that AI weakens thinking, research and writing skills.

Since ChatGPT launched in November 2022, AI use has exploded across higher education.

In a survey of 1,041 UK undergraduates published in February 2025, 88% admitted to using AI for assessments, up from 53% the year before (see go.nature.com/4d37rcc).

The proportion of respondents who had used any AI tool also jumped, from 66% in 2024 to 92% in 2025.

Such a rapid change in behaviour is “almost unheard of”, wrote study author Josh Freeman, policy manager at the Higher Education Policy Institute in Oxford, UK, in a statement accompanying the results.

Doctoral students are now charting paths through territory their supervisors never had to navigate.

Some use AI daily and swear by it; others refuse to touch it, worried about the cost to their development as researchers.

Most fall in between, working out their own rules for when AI helps and when it hinders.

Yinghui He uses a mix of AI tools for tasks such as checking grammar and generating code.

Credit: Courtesy of Yinghui He
That’s a lesson that Richard Ang, a PhD student in soil microbiology at the University of Western Australia in Perth, learnt the hard way.

He once asked ChatGPT to calculate fertilizer doses for an experiment.

When the experiment failed, he asked the tool to show its thinking.

“It totally misunderstood my question,” he says.

“AI will never tell us that our design is uncommon, or wrong,” he adds.

“If we ask AI to carry out a ridiculous or impossible task, it will do it.” Now, he always asks the tool to explain its reasoning step by step, and he cross-checks answers using multiple tools.

Diala learnt the same lesson when she plotted a graph and asked AI to explain it.

“It said a value increased when the graph showed a decrease,” she says.

“Before we use AI, we should have some knowledge of what we’re asking, so we can catch mistakes.”
The confusion many PhD students feel stems partly from AI developing faster than universities' ability to govern it.

According to a report published in January by the European University Association in Brussels, which surveyed universities across Europe about their policies for doctoral education, only 5% of institutions felt that their existing AI guidelines were sufficient (see go.nature.com/4stfddp).

The largest group, 38%, reported that they were still establishing policies for the first time, and 13% had no policies at all.

“There are parts where I’d say ‘use AI’ and parts where I’d say ‘don’t’,” says Amina Yonis, who is based in Dubai, United Arab Emirates.

Tasks such as finding, digesting and organizing research literature fall in the ‘use AI’ column.

“It helps you read 50 papers in a month rather than 50 in a year.” But she urges students to avoid the tools for data analysis, to stay in control of the intellectual work and because of data protection: many AI tools do not keep what you tell them confidential.

Amina Yonis offers tips on ethical AI use.

Credit: Courtesy of Amina Yonis
Yonis also urges caution in using AI to write a thesis.

The technology is useful for coming up with a structure or cleaning up text, she says.

But getting it to compose long sections from scratch can be problematic.

“If you let AI write and then you look, you’ll struggle to break away from the example,” she says.

“It’s better to write it yourself and then use AI to clean up.”
For language help, however, AI can be transformative.

Manikandan Palanichamy, an electrical and computer engineer at Østfold University College in Fredrikstad, Norway, who supervises PhD students, says that it gives those who don’t have English as their first language access to cutting-edge research and helps them to express complex ideas more effectively, which “democratizes PhD training significantly”.

However, although AI can be used to amplify thinking, it should not replace it, he says.

“As a professor, my challenge for PhD or master’s or bachelor’s supervision is teaching students this distinction early.”
Not all AI tools are created equal, Yonis says.

She recommends stepping away from generic AI tools and exploring applications that are purpose-built for academics, such as Paperpal for writing or Consensus for literature searches (see ‘AI joins the lab’).

AI joins the lab

More AI apps are being purpose-built for research.

Most have tiered pricing models, with free limited-use options or free trial periods.

We highlight a selection.

For discovering and understanding literature, ResearchRabbit maps citation networks and helps to find connected papers.

The tool Elicit searches a large body of academic papers and can extract key findings, summarize papers and answer research questions.

Consensus returns evidence-based answers with linked citations.

Scite looks at how other papers talk about a study — whether they support, contradict or simply mention its findings.

Connected Papers, Litmaps and Inciteful (each has a free version) create visual maps of research fields and paper relationships.

For data analysis and coding, Julius analyses data sets through language prompts, and generates code and visualizations.

Bohrium is a cloud platform for running scientific simulations and notebooks and for managing research data, and GitHub Copilot suggests ways to complete your code as you type.

For writing and editing, Paperpal offers writing suggestions and language polishing specifically for research papers.

The Prism workspace from ChatGPT creator OpenAI, based on the model GPT-5.2, lets users refine documents written in LaTeX, a document-preparation system used for some scientific papers.

Remember, whichever tools you use, they should help you to learn — not replace your thinking.

Never upload sensitive, proprietary or identifiable participant data to AI tools without proper data-use agreements and institutional approval, and don’t rely on AI-generated code without understanding and testing it thoroughly.

Don’t use AI to write entire papers or manuscript sections without disclosing what you’ve done — this might violate academic-integrity policies.
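On that point about testing AI-generated code: one lightweight habit is to check the code against values you can verify by hand before using it in real work. The function and numbers below are invented for illustration (they are not from the article); the principle is simply that an assertion on a hand-checked case catches silent misunderstandings early.

```python
# Suppose an AI assistant produced this helper to compute a serial
# dilution series for an experiment (illustrative example only).
def dilution_series(stock_mg_per_ml, factor, steps):
    """Return concentrations (mg/ml) for the stock plus each dilution step."""
    return [stock_mg_per_ml / factor**i for i in range(steps)]

# Before trusting it at the bench, test it on a case you can verify
# by hand: a 10 mg/ml stock diluted 1:2 three times.
concs = dilution_series(10.0, 2, 4)
assert concs == [10.0, 5.0, 2.5, 1.25]
```

If the assertion fails, the tool has misunderstood your question — exactly the failure mode Ang describes — and you find out before the experiment does.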

Finally, if you feel you are becoming too dependent on AI systems, Yonis, who works at academic-support company The Page Doctor, has some simple advice: go back to conventional methods.

“AI has only been around for about three years.

Most people finished PhDs without it.

You can do it.”
Even with practical guidance on how to use AI responsibly, deeper questions remain.

As the technology hurtles forwards, it’s difficult to say precisely what skills PhD students need to hone or protect to navigate the future.

Natalia Bielczyk, a computational neuroscientist who splits her time between California, Poland and the Netherlands and founded Ontology of Value, a company that helps professionals to navigate career transitions, says PhD students might want to think about their value differently in the age of AI.

Rather than competing with AI on speed or recall, doctoral students should lean into areas that machines still struggle with: framing good questions, navigating ambiguity and designing ways to test ideas in the real world.

“The superpower of the PhD is that it gives you a systematic approach to solving problems,” she says.

AI can help with routine tasks such as scanning the literature or polishing drafts, and so leave more time for the kind of slow, conceptual work that underpins real breakthroughs.

“It’s unlikely that AI will replace higher-range thinking any time soon,” Bielczyk suggests.

“If anything, it raises the bar for humans to specialize in the parts of research that only humans can do.”