How to prevent AI addiction and set boundaries at work

By Caitlin Ner

Image: Immo Wegmann/Unsplash

AI’s promise was that it would liberate us from busywork. Instead, it’s becoming a new dependency. Maybe it’s an email you didn’t feel like writing. A brainstorm you didn’t feel inspired to lead. Code that would take you hours to write. A bio or outline that felt just a bit too hard to begin.

You used to do these things on your own. But now AI makes it so easy to skip the effort that you barely notice you’re outsourcing your thinking. I use AI to research companies for my venture fund, deep dive into new industries and technical topics, design presentations, and record my meetings. I am an advocate for AI use and literacy, but the more my work started to become intertwined with AI, the more I started to think about the looming digital addiction crisis.

So I wasn’t surprised to see that some researchers have begun labeling compulsive overuse of generative AI as a potential behavioral addiction. They found that “over time, excessive reliance on AI can impair cognitive flexibility, diminish problem-solving abilities and erode creative independence.” In other words, AI can enhance human capability, but if used unchecked, it can also start to replace it.

In 2024, over $100 billion was invested into generative AI startups globally. But not enough money is being spent to understand or mitigate AI’s psychological impact. At PsyMed Ventures, we want to change this by investing in a new generation of companies focused on digital wellness, cognitive resilience, and mental health in the AI era. 

However, while investing in research is a long-term fix, in the short term leaders can combat AI addiction by helping their teams implement AI boundaries.

Signs You or Your Team Are Overusing AI

How do you know when your AI use is becoming harmful? One early sign: you can’t start working without it. Maybe you once drafted memos or solved problems on your own, but now you wait until an AI tool gives you a prompt or plan. 

That reliance can weaken your ability to think independently. A study from MIT used EEG to observe people using ChatGPT, Google’s search engine, or nothing at all. Of the 54 test subjects, ChatGPT users showed the weakest brain engagement and consistently underperformed at neural, linguistic, and behavioral levels. This small study is only a first step, but it underscores the need for longitudinal research to assess potential long-term effects on cognition, learning, and critical thinking.

Another clear sign you’re overusing AI tools is when you find yourself zoning out in meetings because you know the tool will capture, summarize, and highlight action items, or even generate real-time responses for you to say live. A 2025 mixed-methods study on cognitive offloading shows that, while delegating comprehension to external aids can boost short-term efficiency, it undermines recall and independent reasoning when the aid is unavailable. Over time, relying on these tools can dull your ability to follow complex discussions in real time and chip away at your confidence in making judgments without algorithmic backup.

Your decreased confidence can show when you hesitate to share an idea in a meeting until you’ve first run it through an AI for validation, even on topics where you have direct expertise. You might also notice yourself redoing projects or emails multiple times based on AI suggestions, even when the original version was solid.

From AI Literacy to AI Boundaries

In the rush to adopt generative AI across workplaces, most leaders are focusing on AI literacy without considering the consequences of overreliance. But AI literacy also requires AI boundaries. Just as with healthy screen time or smartphone use, guidance for ourselves and our employees on when to lean on AI and when to deliberately step back will help us use this tool in a way that benefits rather than harms us.

A good first step is treating AI as a collaborator, not a crutch. AI is immensely helpful for ideation, summarization, and drafting, but it shouldn’t replace human reasoning or judgment. One practical shift is to use AI to support your thinking, not to start it. For example, if you’re drafting a report, write your main argument or outline yourself before prompting a tool like ChatGPT to help you refine, expand, or stress-test what you’ve already written while you still do the core thinking, analysis, and structuring on your own. 

Just as we schedule physical workouts, it’s worth building in “analog workouts” for the brain. These are AI-free moments of problem-solving, brainstorming, or creative writing without any digital help. This could mean gathering at a whiteboard to map out workflows without laptops, drafting meeting agendas or strategy notes by hand, solving a technical bug without a copilot, holding quick debates or design sprints without digital aids, or jotting down meeting takeaways from memory before checking notes. These small acts protect human creativity and maintain our ability to think deeply without an algorithm’s influence. Consider digital wellness check-ins or even AI detox periods, especially for younger employees who may be more prone to skill erosion. 

It’s also valuable for leaders to outline where not to use AI. Every team should establish task boundaries, such as: AI can be used for general research or as a second set of eyes, but never for final output. Look for ways to limit its influence on high-stakes or irreversible decisions, like hiring, strategic pivots, policy changes, or investment selection. AI should serve as a research and analysis assistant, not the ultimate decision-maker. This not only guards against overreliance on AI’s outputs but also preserves accountability, ensuring that critical choices remain the product of deliberate human judgment rather than automated consensus.

Avoiding the Digital Addiction Crisis, Together

Some may argue that enforcing AI boundaries can slow progress or undermine the very operational and financial efficiency gains these tools promise. But ignoring these limits risks a hidden cost: eroding the skills, confidence, and independent thinking that keep a business resilient. Saving time today is meaningless if your team loses the ability to problem-solve tomorrow.

Other common objections include fears that boundaries will make the company less competitive, that employees will ignore them, or that skilled staff don’t need them. In reality, boundaries are about using AI better, not less, by protecting teams from overreliance. 

Leaders can frame boundaries not as top-down restrictions, but as a shared investment in long-term capability by inviting employees into the conversation about where AI should support and where human judgment must lead. This collaborative approach turns guardrails into a cultural norm, rather than a compliance burden, and reassures teams that the goal isn’t to strip away autonomy but to protect it. 

Yes, AI can make work faster and cheaper, but the healthiest workplaces will be those that treat efficiency as a means to strengthen people, not replace them.

ABOUT THE AUTHOR

Caitlin Ner is a director at PsyMed Ventures, a venture fund investing in frontier mental and brain health technologies.

FAST COMPANY