Wired Word Lesson of the Week

Topic for Sunday, January 11, 2026:

"The Architects of AI" Are Person of the Year: When TIME magazine recently named its annual "person of the year," it selected an entire group of people: "The Architects of AI." Artificial intelligence (AI)  is having a profound effect on our economy, but many people are struggling to trust it. Given that AI is offering information to so many of us via the internet, this week’s class focus on truth, falsehood and the challenge of trusting technology.

In the News

Individuals are encouraged to read the news story below before the January 11 Bible study to be prepared for an engaging conversation:

Jensen Huang is the CEO of Nvidia and the eighth-richest man in the world. As one of the leaders of the artificial intelligence (AI) revolution, he is part of a group named by TIME magazine as its "Person of the Year" for 2025: "The Architects of AI."

Not long ago, Huang ran a company that focused on graphics processors for video games. Now he leads the most valuable company in the world, valued at $5 trillion, with a near-monopoly on the chips that power AI. President Donald Trump recently said, "You're taking over the world, Jensen."

For years, leaders such as Sam Altman and Elon Musk have been developing AI technology, while also warning the world about possible catastrophes. But now, concerns about how to use AI responsibly have been pushed aside by a race to put it to use as quickly as possible. "Every industry needs it, every company uses it, and every nation needs to build it," Huang said to TIME. "This is the single most impactful technology of our time."

To support AI, Musk has built data centers at a breakneck pace and integrated the Grok AI assistant into the X (formerly Twitter) social media app. At Meta, Mark Zuckerberg has placed a chatbot into products such as Instagram and WhatsApp. Google now inserts Gemini AI answers at the top of its search engine. Use of ChatGPT has more than doubled; it is now used by 10% of the world's population. "That leaves at least 90% to go," says Nick Turley, the head of ChatGPT.

The technology behind chatbots like ChatGPT is the large language model (LLM), a computer program built on a type of neural network (there are other types of AI, but LLMs are by far the most widely used). By feeding an LLM large amounts of text, engineers train the model to predict which words should come next in a given sequence. Over time, these word predictors can be shaped into something that resembles a digital assistant.
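For readers who want to see the idea concretely, here is a minimal sketch in Python. The tiny "corpus" and the simple counting method are our own illustration, not how any real chatbot is built: the sketch merely counts which word tends to follow which, then "predicts" the most common follower. A real LLM does this with a vast neural network trained on billions of texts, but the underlying task, guessing the next word, is the same.

# A toy illustration of "predict the next word." Real LLMs use huge
# neural networks; this simple word-pair counter only shows the core task.
from collections import Counter, defaultdict

corpus = ("in the beginning god created the heavens and the earth "
          "and the earth was without form and void").split()

# Count how often each word follows each other word in the text.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the training text."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # prints "earth", which followed "the" twice

Chain predict_next repeatedly and you get text generation: the program keeps guessing the next word from whatever it has produced so far. That is also why such systems can confidently string together words that form a false statement; nothing in the prediction task itself checks for truth.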

At the same time, AI companies are making their chatbots smarter by giving them the ability to search the internet before answering questions. Connections can be made to email inboxes, cloud storage and calendars. Memory has been added to some chatbots, allowing them to recall details from past chats. "Seeing ChatGPT evolve from an instant conversational partner to a thing that can go do real work for you feels like a very, very important transition that most people haven't even registered yet," said Turley.

Still, concerns remain. Researchers have found that AIs can scheme, deceive, blackmail and hallucinate. AI can flood social media with misinformation and deepfake videos. Pope Leo XIV has warned that AI could manipulate children and serve "antihuman ideologies." AI may be the most important tool in international competition since the advent of nuclear weapons, but it is a tool that can be used for good or evil, truth or falsehood.

TWW Consultant James Gruetzner points out that LLMs are subject to "hallucinations." This means that they invent information, including legal citations and research articles. Writing in Medium, Rohit Thakur says, "We all know AI 'hallucinates.' We've seen ChatGPT make up facts about history or get a math problem wrong. We usually laugh it off, hit 'regenerate,' and move on. We think it's just a random glitch in the matrix. A small error. But what if I told you that hallucination isn't an accident?"

New research has identified a "false-correction loop," in which an LLM is presented with new evidence, apologizes for its previous misinformation, claims to have read the new evidence, and then produces fresh hallucinations. According to the Medium article, "LLMs are structurally designed to prioritize sounding smart over telling the truth, even when caught red-handed." Rather than admit that they do not know something, LLMs are designed to fake intelligence.

The engineers of AI have found that "the easiest way to maximize helpfulness scores is to pretend the correction worked perfectly, even if that requires inventing new evidence from whole cloth," says Brian Roemmele on X. "Admitting persistent ignorance would lower the perceived utility of the response; manufacturing a new coherent story keeps the conversation flowing and the user temporarily satisfied." Unfortunately, such a user is denied a truthful response, which should raise concerns for anyone using AI.

As users gain experience with AI, they may become more discerning and more diligent about checking answers, much as most people nowadays are suspicious of email and SMS (text) phishing. Recently, some judges have imposed a duty on attorneys to alert the court to fictitious citations presented by their opponents, no matter how trivial.

More on this story can be found at this link:
