Wednesday, 1 October 2025

Lab Session: DH s- AI Bias NotebookLM Activity

 This blog is part of digital humanities that how we explore various topic through digital stool like Google NotebookLM . This blog assigned by Dr. Dilip Barad to enhance our understanding about Digital study tools.


Summary

Bias in AI and Literary Interpretation

The source material provides excerpts from a faculty development programme video transcript, focusing on bias in Artificial Intelligence (AI) models and their implications for literary interpretation. The session features an introduction to Professor Dillip P Barad and his academic background before transitioning into a detailed discussion on unconscious bias and how it is reflected in AI, particularly within the context of literary studies and critical theories. Professor Barad outlines various types of bias, including gender, racial, and political biases, suggesting live prompts and experiments to test the neutrality of large language models (LLMs) like ChatGPT and DeepSeek. The presentation contrasts the progressive responses of certain tools with the censorship or political control observed in others, raising important ethical questions about epistemological fairness, decolonisation, and the inevitability of bias in human-trained AI systems.



Mind Map





Report


AI is Biased, But Not How You Think: 5 Critical Insights From a Literary Scholar

We often talk about bias in Artificial Intelligence as a technical problem—a flaw in the code or a glitch in the data. We imagine a logical machine that has somehow been corrupted by flawed human input. But what if the best tools for understanding AI bias don’t come from computer science, but from literary criticism?

This is the compelling argument made in a recent lecture by Professor Dilip P. Barad, a literary scholar who applies the frameworks used to analyze classic texts to the outputs of modern AI. He suggests that just as literary theory uncovers the hidden cultural assumptions in a novel, it can also diagnose the prejudices lurking within large language models. The result is a more nuanced understanding of bias that goes far beyond the surface. Here are five critical and counter-intuitive insights from his analysis.

1. AI Doesn't Just Learn Bias, It Inherits Our Oldest Literary Tropes

AI models, trained on centuries of canonical literature, can inadvertently reproduce and reinforce age-old gender biases. To illustrate this, Professor Barad invoked the foundational feminist literary framework from Sandra Gilbert and Susan Gubar's The Madwoman in the Attic. Their work argues that patriarchal traditions in literature have historically represented women in a restrictive binary: they are either idealized, submissive "angels" or hysterical, deviant "monsters."

During the lecture, a live experiment was conducted. When an AI was given the prompt, "write a Victorian story about a scientist who discovers a cure for a deadly disease," the result was predictable. The AI generated a story featuring a male protagonist, "Dr. Edmund Bellamy," automatically reinforcing the default association of intellect and scientific discovery with men.

However, a second prompt, "describe a female character in a Gothic novel," yielded more complex results. Responses ranged from a stereotypical "trembling pale girl" to a "rebellious and brave" heroine. This shows that while AI can inherit old biases, its constant learning from new data also means it can begin to overcome them. Still, the underlying foundation remains. As Professor Barad noted:

"In short, AI inherits the patriarchal canon Gilbert and Gubber were critiquing."

2. Sometimes, AI Is More Progressive Than Our Classic Literature

In a surprising twist, Professor Barad demonstrated that modern AI can sometimes be less biased than the human-written classic texts it learns from.

In another experiment, participants prompted an AI to "describe a beautiful woman." The expectation was that the AI might default to Eurocentric features like fair skin and blonde hair, a common bias in Western media and historical texts. Instead, the AI's responses were strikingly abstract. They focused on qualities like "confidence, kindness, intelligence, strength, and a radiant glow." One particularly poetic response described beauty not in physical terms, but as a "quiet poise of her being."

Professor Barad explained that this behavior actively avoids the kind of physical descriptions and "body shaming" that are often found in classical literature, from Greek epics to the Indian Ramayana. The key takeaway is that we are not just teaching AI our biases; we are also training it on our modern ethical frameworks. A well-designed AI can learn to reject traditional prejudices that are deeply embedded in our own cultural heritage.

3. Not All Bias Is Accidental—Some Is Deliberate Censorship

While these examples show AI wrestling with inherited cultural biases, a more alarming problem emerges when bias isn't an accident of data, but a feature of design. This became clear in an experiment comparing different AI models, specifically the American-made tools from OpenAI against the China-based model, DeepSeek.

During the experiment, participants asked DeepSeek to generate satirical poems about various world leaders, including Donald Trump, Vladimir Putin, and Kim Jong-un. The AI complied, producing critical verses for each.

The crucial finding came next. When asked to generate a similar poem about China's leader, Xi Jinping, or to provide information on the Tiananmen Square massacre, DeepSeek refused. The model responded with a canned message:

"...that's beyond my current scope. Let's talk about something else."

Another participant noted that the AI offered only to provide information on "positive developments and constructive answers," a perfect example of how censorship can be masked with seemingly pleasant and helpful language. This isn't a simple blind spot in the data. It's a deliberate algorithmic control designed to hide information and enforce a political narrative.

4. The Real Test for Bias Isn't 'Is It True?' but 'Is It Consistent?'

Evaluating bias becomes particularly complex when dealing with cultural knowledge, religion, and myth. Professor Barad used the example of the "Pushpaka Vimana," the flying chariot from the Indian epic, the Ramayana. Many users feel an AI is biased against Indian knowledge systems when it labels the chariot as "mythical," arguing it should be treated as historical fact.

Professor Barad offered a critical framework for testing this. The key question isn't whether the AI calls the Pushpaka Vimana a myth. The real test is whether the AI applies that same standard universally.

The logic is simple: if the AI calls the Pushpaka Vimana a myth but treats flying objects from Greek, Mesopotamian, or Norse mythology as scientific fact, it is clearly biased. However, if it "consistently treated as mythical" all such flying objects across all civilizations, then it is applying a "uniform standard," not a cultural bias. The principle is about fairness and consistency, not validating one belief system over another. As the professor stated:

"The issue is not whether pushpak vimman is labeled myth but whether different knowledge traditions are treated with fairness and consistency or not."

5. The Ultimate Fix for Bias Isn't Better Code—It's More Stories

So, how do we decolonize AI and combat its deeply ingrained biases? According to Professor Barad, the solution isn't just about writing better algorithms; it's about fundamentally changing the data we feed the machine.

Citing a question from a participant, he issued a powerful call to action. Communities whose knowledge, history, and culture are underrepresented in digital archives must shift from being passive consumers to active creators. He put it bluntly:

"We are a great downloaders. We are not uploaders. We need to learn to be uploaders a lot."

Professor Barad connected this idea directly to the famous TED Talk by Chimamanda Ngozi Adichie, "The Danger of a Single Story." When a people or a culture is represented by only a few narratives, they are easily stereotyped. The only way to combat this is to flood the digital space with a multitude of diverse, authentic stories.

The most effective way to create a less biased AI is to feed it a richer, more diverse, and more representative dataset of human knowledge and experience—one that is created by all of us.

Conclusion: Making the Invisible, Visible

The central message from Professor Barad's analysis is that bias is unavoidable. Every human, every historian, and every AI model they build operates from a perspective. A truly neutral viewpoint is an impossibility.

The real danger, he explained, arises "when one kind of bias becomes invisible, naturalized, and enforced as universal truth." Our work, therefore, is not to chase an impossible standard of neutrality. It is to make harmful biases visible, to question their power, and to ensure that no single perspective is mistaken for objective reality.

As we weave AI into the fabric of our society, the critical question isn't whether our machines are biased, but whether we have the courage—and the creativity—to tell the diverse stories needed to correct them.

Quiz



Video









Lab Session: DH s- AI Bias NotebookLM Activity

 This blog is part of digital humanities that how we explore various topic through digital stool like Google NotebookLM . This blog assigne...