Small businesses across the globe are embracing artificial intelligence at a rapid pace—using it to automate customer service, streamline operations, and boost revenue. But as cutting-edge AI systems inch closer to human-level and even superhuman intelligence, experts are urging caution—and calculation.
In a striking comparison to the Manhattan Project, renowned physicist and AI researcher Max Tegmark of MIT has warned that companies developing artificial superintelligence (ASI) must quantify the risk that such systems could escape human control. Tegmark and his students propose calculating a “Compton constant”—a probability metric named after physicist Arthur Compton, who once estimated the risk of a nuclear bomb igniting Earth’s atmosphere during the first atomic test.
Tegmark estimates a 90% probability that unchecked superintelligent AI could pose an existential threat. “It’s not enough to feel good about what we’re building,” he said. “Companies must calculate the actual odds—and take responsibility.”
This warning, while directed at the world’s largest AI labs like OpenAI and DeepMind, has ripple effects that touch even the smallest Main Street businesses. Here’s why:
The Tools You Use Might Change Overnight
Many small businesses now rely on AI “employees” for tasks like lead capture, voice response, and customer service. But if global AI regulation tightens, or if powerful models are paused over safety concerns, those tools could be restricted or withdrawn entirely. Relying solely on black-box AI without local backups or human contingencies could leave businesses exposed.
New Standards May Require Compliance
Just as data privacy laws like GDPR and CCPA created obligations for even small businesses, future AI governance may require small firms to vet the safety or explainability of the AI systems they use. If you’re using AI to communicate with customers or make decisions that affect them, regulatory requirements may follow.
Ethics and Trust Will Become Business Differentiators
Consumers are growing more aware of the ethical concerns surrounding AI. Small businesses that proactively choose transparent, safe, and human-guided AI solutions may gain a competitive edge. Customers will increasingly prefer brands they trust to use AI responsibly.
Opportunity in Caution
Tegmark’s call isn’t just a warning—it’s also a prompt for innovation. Many small AI providers are now pivoting to create more controllable, auditable, and transparent systems. For entrepreneurs, this is a chance to fill the gap with safer, more practical AI tailored for specific business needs without the risks of uncontrolled general intelligence.
The Singapore Consensus on Global AI Safety Research Priorities—a collaboration between industry giants, government bodies, and researchers—marks a hopeful step toward cooperative safety efforts. But as the debate continues, small business owners should keep one foot in the future and the other grounded in practical resilience.
AI is here to stay, but it’s entering a new phase. As researchers call for the same caution that preceded the first nuclear test, small business owners should remain agile, informed, and ready to adapt.
Want to future-proof your business with ethical AI employees that never go rogue? Book your free AI Growth Strategy Session and discover AI that works for you—not the other way around.