April 2024

Tech Talk

The real question… is who can you trust?

In AI we don’t trust

By Michael E. Duffy

It’s been happening for decades now, a subtle erosion. I don’t remember quite when I first saw it, but the Internet was the first place I saw a comparison of actual photoshoots of celebrities with what ended up on the cover of People magazine. They had been subtly reshaped and touched up to hide the reality of their imperfections. And so we discounted the accuracy of those photos. We know that Taylor Swift doesn’t really look that good. With the advent of easy-to-use filters in photo apps, mere mortals were not above enhancing their appearances online, whether to get a date or to get followers on their social media feeds. Even “candid” photos (just like those in People) are arranged “just so” to convey the perfect image.

And now, AI-based image software makes it possible to create photos of things (people included) which simply don’t exist. Film and video are just smooth sequences of still images, and AI is now capable of producing extremely realistic video. OpenAI announced Sora on Feb. 15 (openai.com/sora – 10 sample videos). The program, currently available only to its testers and a few filmmakers, accepts text prompts like those you would give to ChatGPT to generate a document. Each of the 10 sample videos comes with the prompt that generated it. On casual examination, you might think the video of waves breaking on a rugged coastline was a real video; I did. Similarly, the video of a woman walking in Tokyo might pass for a highly stylized advertisement, until you notice that the close-up reflection in her sunglasses is wrong.

We have reached a point where the adage “seeing is believing” is no longer true. The quality of generated images and video is so high, and the tools to create them are so widely accessible, that it will soon become nearly impossible for the average person to judge for themselves whether something they see is factual, especially if they are not used to thinking critically about what they are seeing.

It goes for what you hear as well. New Hampshire voters in last January’s Democratic primary received calls telling them not to vote, using the AI-generated voice of Joe Biden. Microsoft announced last year that a new text-to-speech AI model named VALL-E “can closely simulate a person’s voice when given a three-second audio sample. Once it learns a specific voice, VALL-E can synthesize audio of that person saying anything—and do it in a way that attempts to preserve the speaker’s emotional tone.” Three seconds!

Criminals are already taking advantage of this technology. A financial worker in Hong Kong was convinced to transfer $25 million after participating in a video conference call with co-workers. Unfortunately, the “co-workers” on that call were sophisticated fakes. But because they looked and sounded like known individuals, the worker was convinced to send the money. (One can ask about appropriate controls for large money transfers, but that’s a different story.) The real question, and it’s not something that technology alone can answer, is: who can you trust?

Radio changed the trust equation long before magazines used Photoshop. Ronald Reagan called Cubs games on the radio from telegraph messages, not by watching the game, despite how it sounded. Orson Welles scared a lot of people with the imagined reality of War of the Worlds. Once, we all listened to Walter Cronkite to get the news; now there are many different sources. Who do you trust? We don’t trust photos, because of Photoshop. We don’t trust caller ID, because it can be spoofed. And now even a familiar voice is suspect.

Both Intel (FakeCatcher) and Microsoft (Video Authenticator Tool) have released tools to detect what is being called “synthetic media”: photos, videos or audio files manipulated by artificial intelligence (AI) in hard-to-detect ways. But there will always be a race between the forgers and those who detect them. For now, the only answer is to be skeptical and to check for yourself. (Try out your fake-detection skills at detectfakes.kellogg.northwestern.edu.)

For a very, very long time, people have been able to believe their physical senses. Watching magic (especially close-up magic) is popular because it defies our senses. We know we’ve been tricked, but we don’t know how. The danger of these new technologies is that we don’t think we’ve been tricked.

Healthy journey update: I haven’t kept up with my planned two-pounds-a-week weight-loss goal, but I have lost 12 pounds in the past nine weeks, so the trend is downward, albeit a bit slower than expected. I am drinking a lot more water, and I notice that it helps with weight loss. Tracking what I eat with MyFitnessPal makes me more mindful about what I am eating (more protein, less fat and carbs). I still need to be more active, though!

Michael E. Duffy is a senior software engineer for Atlanta-based mobile gaming company Global Worldwide ( globalworldwide.com ), who lives in Sonoma County. He has been writing about technology and business for NorthBay biz since 2001.
