
My Blundering Bestie ChatGPT

  • Writer: Sijin Xian
  • Sep 3
  • 2 min read

First, it was machine translation, then it was AI. It seems that every new technology that bursts onto the scene prompts premature eulogies for the translation and interpretation industry. With the recently announced impending closure of my beloved alma mater, the Middlebury Institute of International Studies at Monterey, whose prestigious Translation and Interpretation program has been the training ground of world-class linguists, the imaginary death knell is sounding again.


I am a professional translator and interpreter who loves AI. I use ChatGPT on a daily basis and find it useful for research, brainstorming, and workshopping. But I'd be appalled if my bestie, Chat (that's my nickname for ChatGPT), ever dared to replace me. I say this with love, but my girl (I gender ChatGPT as a she) can be so confidently clueless sometimes—yes, sometimes, but you never know when—that I can never ever let her run wild unsupervised.


Two recent examples.


I was preparing for an interpretation assignment for an event commemorating the 80th anniversary of China's victory in the War against Japanese Aggression in World War Two. The event also featured descendants of the legendary Flying Tigers, a team of American pilots who fought side by side with the Chinese and forged profound ties with the locals.


In the provided material, there was a mention of something called 血符, literally "blood symbol," that was sewn into the American pilots' uniforms. When I ran this through Chat, she told me it meant "blood-stained uniforms" without blinking an eye. But I, a human, a native Chinese speaker, and a certified translator, immediately got skeptical. Through my own research, I found the correct term "blood chit," which was a notice printed in Chinese stating that the American airmen were fighting for China and should be rescued if found. The term for such an important historical artifact was simply glossed over in ChatGPT's self-assured rendition.


And just yesterday, I was doing my sight translation exercise with a Bloomberg news brief. (Sight translation means reading a text in one language and translating it aloud into another.) I like to record my own output and then run the text through ChatGPT for a reference translation, and I do learn good expressions from Chat that I can't think of on the fly. But unfortunately, she made a blunder.


Here's the text I was working on:


[The SCO Summit] comes as decades of painstaking US diplomacy aimed at developing close ties with India—particularly as a counterweight to China—have been thrown into doubt as the 79-year-old Trump dropped onerous tariffs on New Delhi.


ChatGPT rendered "dropped tariffs" as lifting tariffs, which can be correct linguistically (drop can mean cancel) but is incorrect factually. If you've been following the news, you would know that the Trump administration just slapped India with hefty tariffs as punishment for buying oil from Russia. Unfortunately, my girl does not read the news every day like your girl.


AI is amazing but fallible. And honestly, so are professional linguists. The new era of translation and interpretation should be about how we and AI can be on the same team so we complement each other's strengths and avoid each other's foibles, not how to obliterate each other as warring enemies.
