Monteforte Law, P.C. - March 2026

DIGITAL DANGERS: AI’S HIDDEN RISKS TO CONSUMERS

If you’ve been to an airport lately, you have likely posed for a facial recognition camera before passing through a security checkpoint. This technology is just one example of how artificial intelligence (AI) is being used not only to identify who we are but also to learn more about us than we might realize.

In some cases, this reality is not a great thing.

Sure, being tracked online by AI may be considered beneficial by consumers who don’t mind receiving alerts on new purchasing opportunities based on their past shopping habits. However, society’s growing dependency on this level of technology is problematic when it leads to innocent people being incarcerated on false charges. Here’s a look at some of the growing risks surrounding the use of AI … and what you can do to better protect your privacy and rights from the prying eyes of emerging technology.

Amazon’s Data Defect Debacle

Although AI is seemingly everywhere these days, its use in the corporate world has existed for some time … and has created considerable gaffes along the way. Amazon learned about machine learning’s potential missteps the hard way. As far back as 2015, the company discovered that its AI-powered resume-screening tool was biased against female job candidates. The system, designed to assign a rating from one to five stars to each applicant, gave lower scores to women who had applied for technical positions. The reason? The system had been trained to review and recommend candidates based on trends identified in resumes submitted to the company over the previous 10 years, a period when men held the majority of those positions. Instead of advancing future AI technology, Amazon stumbled back into America’s cultural past, creating a PR nightmare and raising serious questions about the potential long-term harm AI could cause in efforts to promote gender equality.

AI’s Misadventures in Faulty Arrests

Facial recognition technology may be all the rage at airports, but the same can’t be said for its use at police stations. According to research conducted by the National Institute of Standards and Technology, Asian and African Americans are up to twice as likely as Caucasians to be misidentified by facial recognition. This discrepancy has real-world consequences, including the 2023 arrest of a pregnant woman in Detroit who was charged with carjacking after AI technology mistook her for someone else. When faulty tech threatens a person’s liberty, it’s clear that AI’s road to perfection still has plenty of potholes.

Consumers’ Best Practices for Data Privacy

Naturally, everyday consumers may also find themselves in sticky situations as a result of AI’s still-imperfect processes. Banking giant JPMorgan Chase offers the following suggestions to help protect your personal information from AI-driven data tracking:

• Use a separate, dedicated email address when engaging with AI chatbots, and avoid using the same email associated with your banking or social media accounts.
• Log off after every AI chat session to help ensure the system is not tracking your subsequent online activity.
• Use only generative AI platforms available through the Google and Apple app stores and other reputable sources.

AI may be a fascinating new chapter in our technological evolution, but it’s not without causes for concern. Whether you’re ordering shoes online or checking your savings account, forewarned is forearmed when it comes to guarding your identity … and even your freedom.


