The Bolton Bulletin | June 2026
Tomball: 990 Village Square, Suite G1100, Tomball, TX 77375 | (281) 351-7897
The Woodlands: 2441 High Timbers Dr., Suite 400, The Woodlands, TX 77380 | BoltonLaw.com
AI IN THE COURTROOM
Helpful Tool or Legal Risk?
Let me take you back for a second.

There was a time in my career when “high-tech” meant a trip to Kinko’s, a stack of poster boards, and a thick black marker. If we wanted to make a point in court, we didn’t click a button; we wrote it out by hand in front of the jury and hoped the ink didn’t run dry at the wrong moment. Fast-forward to today, and everyone is talking about AI as if it’s about to take over the world, including the legal one. So, the natural question I’ve been getting from clients is this: What does all of this actually mean for my case?

I’ve been doing this for 30 years now, and one thing that has remained consistent is that the legal profession doesn’t rush to embrace change, which isn’t necessarily a negative. We are trained to rely on precedent, be cautious, and make sure what we’re doing is grounded in something reliable. But it does mean that when new technology comes along, we tend to approach it carefully, sometimes very carefully.

A great example is how long lawyers held onto WordPerfect. While the rest of the world moved on to Google Docs, the legal field stayed put for years. It’s only in the last several years that I’ve stopped regularly seeing those documents come through.

I saw the same resistance when I started pushing for remote court appearances before COVID-19. The technology existed, it was secure, and it would have saved clients time and money. But it required everyone, especially judges, to agree to something new, and that just didn’t happen. Then COVID-19 forced the issue. Courts shut down, and suddenly, remote hearings became necessary. Almost overnight, we were all using Zoom. It was one of the rare moments when the legal system moved quickly, but only because it had to. Even now, not every court allows remote appearances, and when they do, it’s often limited.

Now, we’re having a similar conversation about AI, and I want to talk to you about it directly, because I know many of you are hearing about it and wondering how it affects your case. There is a lot of excitement about AI, especially in the business world. You hear how it saves time, reduces costs, and does complex work quickly. Some of that is true. But when it comes to practicing law, AI cannot be relied on to get it right.

I’ve tested it myself. I’ve asked AI legal questions where I already knew the answer, and what it gave back was wrong. And not just slightly incorrect, either; it was totally false. It can cite cases that don’t exist or quote things that were never actually said, which could be detrimental if mentioned in a court case.

I also want you to understand something important. When an attorney signs a document filed with the court, we assume responsibility for everything in that document. Judges have made it very clear that blaming AI is unacceptable. If it’s wrong, it’s on the attorney. We’ve even had situations where clients bring in documents generated by AI and ask us to file them, and I understand why. They appear polished and professional, but when we review them, they often don’t meet the legal requirements or even move the case forward in a meaningful way.

That said, I am not opposed to technology. In fact, I am actively investing in it. The key is using it the right way. There are areas where AI is incredibly helpful. It can assist with organizing information, tracking deadlines, reviewing large amounts of data, and ensuring nothing slips through the cracks. It can help us work more efficiently and ultimately reduce costs for you. However, everything it produces must be
Continued on Page 3 ...
BoltonLaw.com | 1
Deepfakes Are Here, and the Law Is Racing to Catch Up

Is it real, or is it AI?

Years ago, the idea of artificial intelligence (AI) creating convincing videos of real people sounded like science fiction. Today, it’s reality. With just a few clicks, sophisticated AI tools can generate images, voices, or videos that look so authentic they can fool even careful viewers. While this technology can be used for harmless fun or creative projects, it also raises serious concerns. Deepfakes can be used to impersonate, spread misinformation, damage reputations, or create explicit images without consent. As the technology advances, lawmakers across the United States are moving quickly to put guardrails in place.

Federal Laws Addressing Deepfakes

Deepfake legislation is evolving rapidly to keep pace with advances in AI. The primary goal is to prevent misuse while still allowing for legitimate innovation. However, there are countless risks involved, including:

• Financial scams
• Defamation and misinformation
• Election interference
• Non-consensual explicit imagery

As lawmakers define when synthetic media crosses the line from creative expression into deception or harm, we get closer to protecting both people and institutions.

The TAKE IT DOWN Act

One of the most significant federal measures to address deepfakes arrived in 2025 with the passage of the TAKE IT DOWN Act. This legislation directly targets non-consensual intimate imagery, including AI-generated images or videos of an individual designed to appear real. Under the law, distributing or threatening to distribute manipulated intimate media without permission is a criminal offense. The law also requires online platforms to respond quickly when victims report this type of content: once notified, a platform must remove the material within 48 hours and take steps to prevent further circulation. Individuals who violate the law may face substantial penalties, including potential prison sentences.

Consumer Fraud and the FTC Act

The Federal Trade Commission Act (FTC Act) prohibits unfair or deceptive business practices. If a company exaggerates what its AI technology can do or uses synthetic media to trick consumers, the Federal Trade Commission may step in. Deepfake scams are already becoming more sophisticated. In some reported cases, criminals have used AI-generated audio or video that imitates a trusted colleague or executive to convince someone to transfer large sums of money. When synthetic media is used to deceive people for financial gain, it may fall within the purview of fraud enforcement.

Why These Laws Matter

The rapid rise of AI has opened the door to remarkable innovation, but it has also created new avenues for harm. Deepfakes can be used to manipulate public opinion, commit fraud, or damage someone’s personal or professional reputation. As lawmakers establish clearer rules around consent, deception, and digital manipulation, they are beginning to define how AI-generated media should be handled in the legal system.

Next Steps

Additional laws at both the federal and state levels are expected to address emerging challenges such as political deepfakes, identity theft, and intellectual property concerns. For individuals and businesses alike, staying informed about these changes is becoming increasingly important. Synthetic media may be new territory, but the legal system is quickly catching up and working to ensure that powerful technology is used responsibly and never at the expense of someone’s well-being.

“With just a few clicks, AI can create videos so realistic they can fool even careful viewers.”
2 | (281) 351-7897
... continued from Cover
checked. We do not rely on it to make legal decisions, and we do not use anything we haven’t thoroughly tested.
In fact, we recently hired a director of technology who will evaluate new tools, test them, and make sure anything we implement actually improves our work without compromising quality.

At the end of the day, if I find a way to use technology to save you time, reduce costs, and keep your case more organized, I will use it. However, it must meet one standard: It has to be accurate, reliable, and appropriate for your case. Because no matter how advanced these tools become, the responsibility still sits with me. Every document, strategy, and decision is reviewed, tested, and backed by real legal judgment.

Technology is absolutely changing the legal field; there’s no question about that. But the part that matters most hasn’t changed at all. You still have an experienced attorney making sure things are done right, and that’s exactly how it should be.
-Ruby Bolton

CAUGHT IN YOUR OWN PARENTING TRICK? When Reverse Psychology Backfires

For years, parents have relied on one classic trick: You say the opposite of what you mean, hoping your child takes the bait. “I bet you can’t eat all your broccoli …” Cue the determined chomping and, ideally, an empty plate. But if that strategy suddenly stopped working and your child is now giving you the “Nice try” stare, you’re not alone.

According to parenting coach and Montessori expert Ankita B. Chandak, there’s a good reason your clever tactics are falling flat. Around age 8, many children begin developing what’s called “theory of mind.” In simple terms, they become skilled at picking up on other people’s intentions, meaning they learn exactly what you’re doing. When you casually suggest they definitely shouldn’t tidy up, they can see the strings attached. And instead of feeling motivated, they may feel underestimated.

So, what works better once your child catches on? Chandak suggests shifting from mind games to meaningful communication. To start, focus on clarity. Instead of hinting, try being direct. Explain the expectation and invite them to think it through: “You have a test next week and need to study. What’s your plan?” That simple question turns a command into a conversation.

Next, invite collaboration. Giving children some ownership, like choosing whether to tackle homework before or after dinner, offers autonomy without sacrificing structure. You’re still guiding the outcome, but they get a say in how it unfolds.

Finally, ask for ideas. When mornings feel chaotic or bedtime drags on, bring them into the problem-solving process. Children are often more cooperative when they feel heard. Asking “How can we make this smoother?” goes much further than a frustrated reminder.

The magic is mutual respect, not manipulation. When parents acknowledge their child’s growing awareness and intelligence, it builds trust. Kids develop stronger decision-making skills, feel valued, and are more likely to follow through. And there’s a bonus here as well: When you stop trying to outwit your child, they stop trying to outmaneuver you. What replaces the power struggle is partnership. It turns out that the smartest move in parenting isn’t being one step ahead; it’s walking alongside your kids.
COCONUT SHRIMP CURRY
Ingredients
• 2 tbsp butter
• 1 1/2 lbs jumbo shrimp, peeled and deveined
• 1 medium onion, diced
• 4 cloves garlic, finely chopped
• 1 tbsp yellow curry powder
• 1 (13.5 oz) can coconut milk
• 2 tbsp honey, plus more to taste
• 1/4 tsp kosher salt, plus more to taste
• Juice of 1 lime
• 12 basil leaves, chopped, plus more for serving
• Hot sauce (optional)
• Cooked basmati rice, for serving

Directions
1. In a large skillet over medium-high heat, melt butter.
2. Cook shrimp 2–3 minutes, turning halfway, then transfer to a plate.
3. Add onion and garlic to the skillet and cook for 2 minutes, then stir in curry powder and cook 2 minutes more.
4. Reduce heat to medium-low, stir in coconut milk, honey, salt, and lime juice, and cook until gently bubbling.
5. Return shrimp to the skillet and simmer 2–3 minutes to thicken slightly. Stir in basil and add hot sauce if desired.
6. Serve over rice with extra basil.
BoltonLaw.com | 3
INSIDE THIS ISSUE

1 | Technology Is Changing Law Without Replacing Judgment
2 | Deepfakes Are Changing the Legal Landscape
3 | Coconut Shrimp Curry
3 | Raising Smart Kids Means Ditching Mind Games
4 | Can Your Pup Be Your Dependent?
CANINE VS. TAX CODE
One Woman’s Quest to Get Her Dog Recognized as a Dependent
Finnegan Mary Reynolds, an 8-year-old golden retriever, might be the first dog to officially qualify for tax breaks. Her owner, Amanda Reynolds of New York City, recently filed a lawsuit against the IRS, arguing that her pup should get the same tax breaks as a human child. Before you scoff, this claim is far more convincing than you might assume.

According to Reynolds, Finnegan is more than just property. She is a fully dependent family member, with annual care costs topping $5,000. From food and grooming to vet visits, daycare, and even transportation, Reynolds handles it all. And under Section 152 of the tax code, she contends, Finnegan already ticks every “dependent” box, including being financially reliant, living in the same home, and earning zero income. The only hiccup? The IRS hasn’t updated the definition to include four-legged furballs.

The lawsuit also leans on heavy-hitting constitutional arguments, citing the Equal Protection Clause and the Fifth Amendment’s Takings Clause. Reynolds argues that excluding pets from tax relief is unfair and essentially penalizes responsible pet owners. She points out the quirky inconsistency: Service animals can qualify for deductions as medical expenses, but beloved companion animals (who can incur similar costs) get nothing.
As a New York state-licensed lawyer, Reynolds is representing herself in the case and not holding back. She argues, “For all intents and purposes, Finnegan is like a daughter, and is definitely a ‘dependent.’” While the IRS hasn’t responded yet, the case is already sparking debates about modern families, legal definitions, and how far our tax code should go to acknowledge furry family members.
Whether Finnegan walks away with a tax deduction or just more belly rubs, one thing’s clear: Americans’ relationships with pets are evolving, and maybe it’s time our laws caught up. After all, someone has to pay for all those vet bills, gourmet treats, and squeaky toys.
4 | (281) 351-7897
Published by Newsletter Pro • newsletterpro.com