JSPH Student Council and Jindal Policy Circle organise a workshop on the ethical use of AI in academia
Jindal School of Public Health and Human Development
By Punam Hazari
As university students, almost all of us have experienced the long nights spent staring at a blinking cursor, unable to write, while an AI tab sits quietly open, always ready to help us overcome the challenge by writing on our behalf. But how much of that is help, and how much is a trap?
Is it ethical to ask an AI chatbot to write on my behalf? To think on my behalf? To brainstorm like a classmate, provide feedback like a peer reviewer, or ‘revise’, ‘refine’, ‘polish’, and ‘grammar-correct’ our academic assignments?
Across universities, students are increasingly grappling with this dilemma. Recognising these concerns within the Jindal School of Public Health (JSPH) and beyond, the JSPH Students’ Council, in collaboration with Jindal Policy Circle, organised a workshop on “Ethical Use of AI in Academia” on Friday, 27th March 2026.
The workshop brought together three distinguished scholars from JSPH. Dean Prof. Stephen P Marks, Prof. Abdul Kalam Azad, and Prof. Vikash R Keshri reflected on a question that is becoming central to academic life: how do we engage with new technologies like artificial intelligence ethically and without compromising academic integrity?
In his opening remarks, Prof. Marks, a world-renowned human rights scholar, situated the discussion within a global ethical framework. He referred to UNESCO’s work on AI ethics, especially its human rights approach to AI. His intervention reminded us of the questions of fairness and justice in the development and use of AI tools, and he emphasised accountability, including on the part of AI developers.
Prof. Azad, who teaches Global Health Ethics, approached the question from a philosophical standpoint. Drawing on the German philosopher Immanuel Kant, he reflected on the importance of focusing on the means, not just the ends. Writing is not just producing an output; it is a process through which we think, struggle, and grow. In a world that prioritises efficiency, outsourcing this process, especially creative thinking and writing, to AI may save time, but it also risks eroding the space where our intellectual and ethical capacities are formed.
Prof. Keshri turned attention to the practical risks and opportunities of AI use in academia. He spoke about the dangers of plagiarism, fabricated citations, and AI “hallucinations.” While AI can support research, overreliance without verification can undermine the credibility of academic work.
He delivered a clear message: don’t fall for AI; rather, rise with it. He also offered a SMART framework for using AI: Scrutinise (S), Modify (M), Acknowledge (A), Reflect (R), and Take Responsibility (T).
The discussion became especially engaging during the Q&A session. One student posed a question that resonated widely: “When my own writing sounds like AI, how do I defend my work if someone assumes I used AI?” Prof. Azad responded by emphasising the importance of sustained engagement with academic literature. What we read shapes how and what we write. Reading deeply and writing regularly, he suggested, helps develop a distinctive voice.
The speakers reminded us that AI ethics is not merely a set of rules or a matter of regulatory compliance, but a moral call to shape knowledge, responsibility, and the conditions of fairness. AI is a powerful tool, but it cannot replace the human work of thinking, compassion, empathy, and creativity.
As we navigate this evolving landscape, we cannot reject technology; we must use it responsibly while ensuring academic integrity.
The author is the Cultural Secretary of the JSPH Students’ Council.
AI use statement: ChatGPT was used to copy-edit this blog.
Date of publication: 31 March 2026