<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Tenkai Blog]]></title><description><![CDATA[Tenkai Blog]]></description><link>https://blog.tenkai.id</link><image><url>https://cdn.hashnode.com/res/hashnode/image/upload/v1751860287712/0f436b66-fb38-4866-91e1-b410e80257c8.png</url><title>Tenkai Blog</title><link>https://blog.tenkai.id</link></image><generator>RSS for Node</generator><lastBuildDate>Fri, 17 Apr 2026 09:54:29 GMT</lastBuildDate><atom:link href="https://blog.tenkai.id/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Plagiarism in the Age of Generative AI]]></title><description><![CDATA[Plagiarism has always been a moving target. From handwritten essays copied from textbooks to Ctrl+C and Ctrl+V from Wikipedia, every new medium brings a new method for cutting corners. But now, we’ve entered an entirely new era—one where students can...]]></description><link>https://blog.tenkai.id/plagiarism-in-the-age-of-generative-ai</link><guid isPermaLink="true">https://blog.tenkai.id/plagiarism-in-the-age-of-generative-ai</guid><category><![CDATA[AI]]></category><category><![CDATA[education]]></category><dc:creator><![CDATA[Mustafa Kamal]]></dc:creator><pubDate>Mon, 07 Jul 2025 22:44:37 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1751928226690/928a123f-0813-44c5-b256-c266bfac6eba.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Plagiarism has always been a moving target. From handwritten essays copied from textbooks to Ctrl+C and Ctrl+V from Wikipedia, every new medium brings a new method for cutting corners. But now, we’ve entered an entirely new era—one where students can generate an entire essay in seconds using tools like ChatGPT, Claude, or Gemini. 
The rules of the game have changed. So should our approach to academic integrity.</p>
<h2 id="heading-the-new-shape-of-plagiarism">The New Shape of Plagiarism</h2>
<p>In traditional plagiarism, someone copied someone else’s work, often verbatim. It was relatively easy to spot and easy to prove. With generative AI, however, a student can produce entirely <em>original</em> text that still wasn't written by them. It's not copied from a source—it’s <em>fabricated</em> by a machine. Technically, there's no source to cite. So is it still plagiarism?</p>
<p>This is where things get tricky. AI output isn’t plagiarism in the classic sense, but it <em>can</em> be dishonest if it's passed off as a student's own thinking or effort. It shifts the problem from one of content theft to one of intellectual outsourcing. The issue is no longer "who wrote this?" but "how was this created?"</p>
<h2 id="heading-the-problem-with-current-detection">The Problem with Current Detection</h2>
<p>Some institutions have leaned on AI detectors to catch AI-generated content. But this is a losing battle. AI detectors are notoriously unreliable. They often flag fluent, native-level writing as "suspicious" while missing cleverly prompted AI work entirely. Worse, they penalize students who write well or who use grammar tools to improve their English.</p>
<p>We're trying to fight AI with more AI—using opaque probability scores to police an already blurred boundary. This approach is not only flawed but also risks harming innocent students and creating a culture of fear.</p>
<h2 id="heading-what-needs-to-change">What Needs to Change</h2>
<p>If we want to maintain academic integrity in the age of generative AI, we need to rethink more than just our detection tools. We need to rethink our <strong>entire system of assessment and trust</strong>.</p>
<p>Here’s what needs to change:</p>
<h3 id="heading-1-assessment-design">1. <strong>Assessment Design</strong></h3>
<p>We must stop relying on generic, open-ended essays that are easy for AI to generate. Instead, we should:</p>
<ul>
<li><p>Design prompts that require personal reflection, experience, or local context.</p>
</li>
<li><p>Break assignments into multiple steps with drafts, outlines, and revisions.</p>
</li>
<li><p>Use oral presentations, peer reviews, or live Q&amp;As to verify understanding.</p>
</li>
</ul>
<h3 id="heading-2-clearer-guidelines">2. <strong>Clearer Guidelines</strong></h3>
<p>Schools and universities must define what <em>is</em> and <em>isn't</em> acceptable AI use. Is it OK to brainstorm with AI? Use it to fix grammar? Generate ideas? Write a full draft? Without clear policies, students are left guessing.</p>
<h3 id="heading-3-ai-as-a-learning-tool-not-a-shortcut">3. <strong>AI as a Learning Tool, Not a Shortcut</strong></h3>
<p>Instead of banning AI tools, educators should teach students <em>how to use them well</em>. The goal isn't to stop students from using AI, but to ensure they still learn critical thinking, research, and writing skills along the way.</p>
<h3 id="heading-4-authenticity-over-originality">4. <strong>Authenticity Over Originality</strong></h3>
<p>We should care more about whether a student <em>understands</em> the content than whether every word is original. Authentic engagement—explaining, critiquing, connecting ideas—is much harder to fake than filling a page.</p>
<h2 id="heading-a-new-kind-of-integrity">A New Kind of Integrity</h2>
<p>Generative AI isn’t going away. Students will use it in school, and they’ll use it even more in their careers. Our job isn’t to block access, but to build a culture where tools are used transparently and ethically.</p>
<p>Academic integrity in this new era isn’t about hiding AI use—it’s about being honest, responsible, and engaged in the learning process. Plagiarism may not look the same anymore, but the core principle remains: <strong>do your own thinking.</strong></p>
]]></content:encoded></item><item><title><![CDATA[Why Building an AI Detector Is a Losing Battle]]></title><description><![CDATA[The Inescapable Arms Race of AI Text Detection
The rise of large language models (LLMs) like GPT-4, Claude, and Gemini has sparked a wave of tools promising to detect AI-generated text. Schools, publishers, and employers are eager to adopt them due t...]]></description><link>https://blog.tenkai.id/why-building-an-ai-detector-is-a-losing-battle</link><guid isPermaLink="true">https://blog.tenkai.id/why-building-an-ai-detector-is-a-losing-battle</guid><category><![CDATA[AI]]></category><category><![CDATA[education]]></category><dc:creator><![CDATA[Mustafa Kamal]]></dc:creator><pubDate>Mon, 07 Jul 2025 07:02:46 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1751871709041/c97f5ed4-5caf-47c3-9751-1d3274a45756.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The Inescapable Arms Race of AI Text Detection</p>
<p>The rise of large language models (LLMs) like GPT-4, Claude, and Gemini has sparked a wave of tools promising to detect AI-generated text. Schools, publishers, and employers are eager to adopt them out of concern about plagiarism, misinformation, and loss of authenticity. But here’s the uncomfortable truth: building a reliable LLM detector is a losing battle.</p>
<h3 id="heading-1-the-problem-with-false-positives-and-negatives">1. The Problem With False Positives and Negatives</h3>
<p>No matter how sophisticated the detector, it must answer a binary question: <em>Was this written by a human or an AI?</em> But LLMs are trained on human writing, and their outputs are often indistinguishable from ours, sometimes even more “polished” than human text.</p>
<p>This leads to two fundamental failure modes:</p>
<ul>
<li><p><strong>False positives</strong>: Human-written text flagged as AI-generated. This happens with non-native English speakers, students with rigid or overly formal writing, and even professional authors.</p>
</li>
<li><p><strong>False negatives</strong>: AI-generated content that passes as human-written, especially when lightly edited or prompted skillfully.</p>
</li>
</ul>
<p>In high-stakes situations such as grading, hiring, and publishing, either type of error is damaging. The cost of getting it wrong is often greater than the value of getting it right.</p>
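<p>To see why these two error rates trade off rather than disappear, consider a minimal simulation. The score distributions below are invented for illustration, not measured from any real detector; what matters is that whenever the human and AI score distributions overlap, no threshold can eliminate both kinds of error:</p>

```python
import random

random.seed(0)

# Toy model: a detector assigns each text a "machine-likeness" score.
# The distributions are assumptions for illustration only -- the key
# point is that human and AI scores overlap.
human_scores = [random.gauss(0.40, 0.12) for _ in range(10_000)]
ai_scores = [random.gauss(0.60, 0.12) for _ in range(10_000)]

threshold = 0.50  # flag anything above this as "AI-generated"

false_positive_rate = sum(s > threshold for s in human_scores) / len(human_scores)
false_negative_rate = sum(s <= threshold for s in ai_scores) / len(ai_scores)

print(f"false positives: {false_positive_rate:.1%}")
print(f"false negatives: {false_negative_rate:.1%}")
```

<p>Moving the threshold only shifts the error from one column to the other: a stricter cutoff accuses fewer humans but waves through more AI text, and vice versa.</p>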
<h3 id="heading-2-llms-are-improving-faster-than-detectors">2. LLMs Are Improving Faster Than Detectors</h3>
<p>Every time a detection method is released, LLM users and developers adapt. Prompt engineering alone can dramatically lower detection accuracy. For instance:</p>
<ul>
<li><p>Asking an LLM to mimic a specific human writer</p>
</li>
<li><p>Using chain-of-thought reasoning to inject more variation</p>
</li>
<li><p>Post-editing with another model or a human</p>
</li>
</ul>
<p>Meanwhile, LLMs are trained on increasingly vast and diverse datasets, closing the stylistic gap between AI and humans. Detectors, on the other hand, are trying to infer authorship from surface-level clues — essentially guessing from shadows.</p>
<p>This creates a treadmill where detectors fall behind with every model release. GPT-2 detectors were decent for GPT-2. They failed against GPT-3. They’re hopeless against GPT-4 or Claude 3.</p>
<h3 id="heading-3-watermarking-and-cryptographic-proofs-still-theoretical">3. Watermarking and Cryptographic Proofs? Still Theoretical</h3>
<p>Some suggest solving this with cryptographic watermarking: embedding invisible statistical signals in AI-generated text. But watermarking comes with limitations:</p>
<ul>
<li><p>It’s easy to bypass with paraphrasing</p>
</li>
<li><p>It can’t be applied retroactively</p>
</li>
<li><p>It would require coordination across all LLM providers</p>
</li>
</ul>
<p>Until these approaches are universally adopted, they remain theoretical. And even if adopted, malicious actors or cheaters will find ways around them.</p>
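<p>For intuition, here is a toy sketch of the "green list" idea behind statistical watermarking, loosely in the spirit of published proposals and not any provider's actual implementation. Each word's predecessor seeds a hash that marks roughly half the vocabulary "green"; a watermarking generator prefers green words, and a detector simply measures the green fraction:</p>

```python
import hashlib
import random

random.seed(1)
VOCAB = [f"w{i}" for i in range(1000)]  # stand-in vocabulary, not a real LM's

def is_green(prev: str, word: str) -> bool:
    # The previous word seeds a hash, so the "green" half of the
    # vocabulary shifts at every position yet stays reproducible
    # for anyone who knows the scheme.
    digest = hashlib.sha256(f"{prev}|{word}".encode()).digest()
    return digest[0] < 128

def generate_watermarked(n: int) -> list[str]:
    out = ["start"]
    for _ in range(n):
        candidates = random.sample(VOCAB, 16)  # stand-in for an LM's top-k
        green = [w for w in candidates if is_green(out[-1], w)]
        out.append(random.choice(green or candidates))  # prefer green words
    return out

def green_fraction(words: list[str]) -> float:
    pairs = list(zip(words, words[1:]))
    return sum(is_green(a, b) for a, b in pairs) / len(pairs)

marked = generate_watermarked(400)
paraphrased = [random.choice(VOCAB) for _ in marked]  # crude "paraphrase"

print(round(green_fraction(marked), 2))       # far above the 0.5 baseline
print(round(green_fraction(paraphrased), 2))  # back near chance
```

<p>The detector's statistic collapses back to chance the moment every word is rewritten, which is exactly the paraphrasing weakness listed above.</p>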
<h3 id="heading-4-the-adversarial-nature-of-detection-is-the-problem">4. The Adversarial Nature of Detection Is the Problem</h3>
<p>The core issue is adversarial dynamics. Every time a detector learns a trick to spot AI, LLM users find a way to undo it. This is the same cat-and-mouse game we see in spam detection, ad fraud, or online cheating, but this time with much blurrier lines and much smarter systems.</p>
<p>An AI detector can’t see intention. It doesn’t know whether a paragraph was written to cheat, assist, or inspire. And in an age of collaborative writing between humans and AI, the lines are getting even harder to draw.</p>
<h3 id="heading-5-what-should-we-do-instead">5. What Should We Do Instead?</h3>
<p>Rather than chasing the mirage of perfect detection, we should shift focus:</p>
<ul>
<li><p><strong>Redesign assignments and assessments</strong>: Ask questions that require personal reflection, real-world data, or oral follow-ups. These are much harder to fake convincingly.</p>
</li>
<li><p><strong>Teach critical thinking and AI literacy</strong>: Students and professionals will use AI. Help them use it well and ethically.</p>
</li>
<li><p><strong>Use AI as a teaching tool, not a threat detector</strong>: AI can give feedback, explain mistakes, and guide revision better than many traditional tools.</p>
</li>
</ul>
<p>We’re entering a world where human-AI collaboration will be the norm, not the exception. The goal shouldn’t be to tell them apart; it should be to elevate both.</p>
]]></content:encoded></item><item><title><![CDATA[AI Is the New Calculator]]></title><description><![CDATA[When calculators first entered classrooms, they sparked panic. Would students stop learning how to do math? Would foundational skills disappear? Decades later, we know the answer: calculators didn't kill math — they changed how we teach it. They free...]]></description><link>https://blog.tenkai.id/ai-is-the-new-calculator</link><guid isPermaLink="true">https://blog.tenkai.id/ai-is-the-new-calculator</guid><category><![CDATA[AI]]></category><category><![CDATA[education]]></category><dc:creator><![CDATA[Mustafa Kamal]]></dc:creator><pubDate>Mon, 07 Jul 2025 06:36:38 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1751870149868/832a471d-038e-456f-ae9b-3208e726a0cd.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>When calculators first entered classrooms, they sparked panic. Would students stop learning how to do math? Would foundational skills disappear? Decades later, we know the answer: calculators didn't kill math — they changed how we teach it. They freed students from tedious arithmetic, allowing deeper focus on problem-solving and higher-order thinking.</p>
<p>Today, we face a similar turning point with artificial intelligence. Tools like ChatGPT, Claude, and Gemini are the modern equivalent of calculators — not for arithmetic, but for reading, writing, coding, and critical thinking. And just like before, the response from educators has been a mix of fear and resistance. But fighting AI in education is a losing battle. The smarter path is to embrace it — and learn to teach with it.</p>
<h2 id="heading-the-wrong-fight">The Wrong Fight</h2>
<p>Much of the current effort in education is focused on detecting AI usage and punishing it. Schools are adopting AI detectors, banning tools, or reverting to pen-and-paper assessments. But these efforts are short-sighted.</p>
<p>AI is improving too fast. Detection tools can’t keep up and often make false accusations. Worse, banning AI creates a divide between students who follow the rules and those who quietly use it anyway — creating unfair advantages and lost learning opportunities.</p>
<p>Trying to fight AI is like trying to ban the internet in the 2000s. It’s not just futile — it’s harmful to student development.</p>
<h2 id="heading-what-happens-when-we-teach-with-ai">What Happens When We Teach with AI</h2>
<p>Teaching with AI doesn’t mean handing over the wheel. It means treating AI as a cognitive partner — something students learn <em>how</em> to use, not just <em>whether</em> to use.</p>
<p>Think of it like this:</p>
<ul>
<li><p>Instead of writing essays <em>for</em> students, AI can <em>co-write</em> with them — offering feedback, alternative phrasing, and suggestions they can accept or reject.</p>
</li>
<li><p>Instead of solving math problems <em>for</em> students, AI can <em>explain</em> the process step by step, like a patient tutor.</p>
</li>
<li><p>Instead of replacing original thinking, AI can help students explore ideas faster, test hypotheses, and get unstuck.</p>
</li>
</ul>
<p>In this model, students are still doing the thinking. AI is just the accelerator.</p>
<h2 id="heading-a-better-skillset-for-the-future">A Better Skillset for the Future</h2>
<p>The workplace is already adopting AI. Jobs across industries now require people to collaborate with AI tools. The students who will thrive are not those who avoided AI, but those who mastered how to use it wisely.</p>
<p>That includes:</p>
<ul>
<li><p><strong>Prompt engineering</strong> — how to ask the right questions</p>
</li>
<li><p><strong>Critical judgment</strong> — how to evaluate AI outputs</p>
</li>
<li><p><strong>Digital ethics</strong> — how to use AI responsibly and transparently</p>
</li>
<li><p><strong>Reflection</strong> — when to use AI, and when not to</p>
</li>
</ul>
<p>None of these skills are taught if we pretend AI doesn't exist.</p>
<h2 id="heading-what-needs-to-change">What Needs to Change</h2>
<p>If we agree AI belongs in the classroom, then teaching methods and assessment strategies need to evolve too:</p>
<ul>
<li><p><strong>Assignments</strong> should encourage process over product. Show your drafts, your chat history, your thinking. Tools like <a target="_blank" href="https://scripal.ai">ScriPal.ai</a> can help with that.</p>
</li>
<li><p><strong>Assessment</strong> should shift toward in-person discussions, oral defense of ideas, and collaborative work with AI.</p>
</li>
<li><p><strong>Curriculum</strong> should include lessons on how AI works, its limitations, and its social impact.</p>
</li>
</ul>
<p>This doesn’t require reinventing education — just recalibrating it for the world we now live in.</p>
<h2 id="heading-the-bottom-line">The Bottom Line</h2>
<p>AI, like calculators before it, is here to stay. The question isn’t whether we should allow it in education. The question is whether we’ll use it to help students grow or let fear hold us back.</p>
<p>Teaching with AI doesn’t mean lowering standards. It means raising our expectations for what students can do — when given the right tools and the right guidance.</p>
<p>It’s time we stop treating AI as the enemy and start treating it like what it really is: the next great tool in the learning toolbox.</p>
]]></content:encoded></item></channel></rss>