The Unethical and Ethical Use of AI: Preventing Cognitive Atrophy

Centuries ago, education relied on memory, handwritten essays, chalk-and-talk classrooms, and libraries filled with books. Mental effort was central to learning. Scholars and students exercised memory, critical thought, and creativity through traditional methods—long before the age of calculators, computers or any digital assistance.

Then came digital tools like Wikipedia, Google search, and early educational software, which began easing the burden of recall and research. Yet students still needed to synthesise information manually, draft essays by hand or keyboard, and interpret texts for themselves. This era marked the start of cognitive offloading, where some cognitive tasks began shifting to external tools, but independent thinking remained essential.

With the rise of generative AI (LLMs such as ChatGPT), many tasks—writing essays, creating code, drafting emails—can now be done via prompt. At work, professionals may lean on AI for reports or summaries; at school, students increasingly rely on AI to write assignments. While this boosts productivity and efficiency, it can also undermine foundational cognitive skills.

What AI Was Designed to Do

AI was created to assist and augment human intelligence, not replace it. Its primary purposes include:

  • Automating repetitive tasks: processing data, calculations, and scheduling.
  • Supporting decision-making: offering insights and predictions based on patterns.
  • Enhancing creativity and productivity: generating ideas, drafting content, or simulating scenarios for human refinement.
  • Facilitating learning and knowledge acquisition: summarising content, explaining concepts, and providing personalised practice or guidance.
  • Solving complex problems at scale: handling tasks too large or complicated for humans alone, such as climate modelling or fraud detection.

In essence, AI is meant to extend human capability, save time, and support thinking, not replace the thinking process itself.


What AI Was Not Designed to Do

AI was not created to:

  • Fully replace human intelligence or judgement. Humans must still interpret, evaluate, and apply information critically.
  • Do all the thinking for students or professionals. Using AI as a crutch without engaging your own reasoning can stunt learning.
  • Generate content with guaranteed truth or creativity. AI can hallucinate, produce biased information, or create unoriginal ideas.
  • Replace ethical and moral responsibility. Humans remain accountable for how AI is used.
  • Develop cognitive skills independently. AI doesn’t “train” your memory, problem-solving, or critical thinking; it only aids the process if used wisely.

Bottom line: AI is a tool for support, not a substitute for human effort, creativity, or accountability.

Cognitive Atrophy: What It Is

Cognitive atrophy refers to the gradual weakening of mental processes—like memory, problem-solving, writing, and creativity—due to underuse. In psychology, it’s akin to “use it or lose it”. Neurons and connections not regularly exercised may weaken, reducing cognitive reserve and capacity.

The Unethical Use of AI and How It Can Lead to Cognitive Atrophy

When it comes to schoolwork, students might misuse AI by:

  • Copy-pasting complete essays.
  • Over-relying on AI for problem-solving without reflection.
  • Plagiarising or presenting AI-generated work as their own, especially without any input or editing.

These behaviours are unethical: they breach academic integrity and stunt intellectual development.

Supporting Studies:

  • An MIT study found that students using ChatGPT for essay writing showed lower brain engagement and poorer performance than those who wrote essays unaided. Over time, they increasingly resorted to copying and pasting AI output.
  • Other research has found that frequent AI tool use correlates with reduced critical thinking and cognitive engagement, particularly among younger users.
  • In schools, such habits could translate to decreased mastery, weaker critical analysis, and long-term loss of academic rigour.

The Ethical Use of AI: How to Prevent Cognitive Atrophy

Schools can lead the way by promoting ethical, engaged AI use through:

  1. Guided AI Support

Teach students to use AI as a brainstorming tool, not the writer. For example, students should first jot down their initial ideas, pose them to AI, and then critically build on the responses.

  2. Active Learning Strategies

Encourage students to draft their own answers first, independent of AI, then compare with AI suggestions (feedback, improvements, or counterarguments), preserving ownership and deeper engagement.

  3. Critical Evaluation

Train students to recognise and question errors in AI responses—spotting hallucinations, bias, and factual inaccuracies—fostering critical thinking.

  4. Reflective Practice

Require students to reflect on how they used AI, what they learned, and where they took initiative. This builds metacognitive awareness and can give them direction on how to be more proactive.

  5. “Extraheric” AI Models

Adopt extraheric AI tools, such as MIND, that stimulate thinking rather than provide answers: AI that asks questions, offers alternatives, or challenges assumptions.

  6. Balanced Integration

Combine traditional cognitive tasks—essays, problem solving, manual research—with AI-supported tasks. This blend preserves skills while harnessing AI’s strengths.

  7. Education on Long-Term Effects

Share findings with students: over-reliance can weaken memory and reduce creativity, but when used intentionally, AI can enhance learning.

A Step-by-Step Ethical AI Workflow for Students

  1. Check the rules first

Confirm your school/course policy on AI. If AI use must be disclosed or limited, follow that.

  2. Understand the task

Restate the question in your own words. List the marking criteria and key deliverables.

  3. Think before tools

Do quick pre-writing from memory: jot key points, definitions, examples, and a rough outline. This protects you from cognitive atrophy.

  4. Use AI for brainstorming (optional, brief)

Share your outline with AI and ask for missing angles, counterpoints, or example frameworks, not a full draft.

Prompt idea: “Here’s my outline. What counterarguments or examples am I missing? Do not write the essay.”

  5. Draft it yourself

Write the first pass in your own words. Aim for a complete draft, even if it’s rough.

  6. Use AI for targeted refinement
    • Ask for feedback on grammar, structure, clarity, and logic.
    • Ask for alternative topic sentences, transitions, or ways to tighten arguments.
    • Keep control: accept or reject suggestions one by one.

Prompt idea: “Review my paragraph for clarity and cohesion. Suggest improvements but do not rewrite in full.”

  7. Fact-check and source-check (you, not AI)

Verify every claim and statistic using credible sources. Add proper citations. Don’t rely on AI for final facts or references.

  8. Originality & voice check

Read your draft aloud. Does it sound like you? If anything reads as “generic” or unlike your style, rewrite it yourself. Run required originality checks if your school mandates them.

  9. Final polish—by you

Tighten arguments, ensure you’ve answered the exact question, and align with the marking rubric. Format citations correctly.

  10. Reflect (1–3 sentences)

Note what you learned and what you chose to change after AI feedback. This builds metacognition and prevents over-reliance.


Simple Prompt Starters for Ethical Use

  • “Here’s my outline. What counterarguments or examples should I consider? Do not draft the essay.”
  • “Identify logical gaps in this paragraph and suggest how I can strengthen the reasoning.”
  • “Suggest clearer topic sentences for these three paragraphs—keep my meaning.”
  • “List potential primary and secondary sources I should read on this question (no fabricated citations).”

Follow this workflow and you’ll use AI as a co-pilot to enhance your work—without outsourcing your thinking or risking cognitive atrophy.

Conclusion

Over the past century, the tools we use to learn and work have dramatically changed. Today’s AI offers enormous potential for efficiency, accessibility, and learning support. But without ethical use, it may usher in cognitive atrophy, eroding memory, creativity, and critical thinking.

If you genuinely take the time to do the initial thinking, outline your ideas, and then use AI to refine or expand on your own work, that use is both ethical and beneficial. What matters is the process and the level of your own intellectual contribution.

As long as the AI is helping you enhance your own ideas rather than replace them entirely, you’re on the right track!
