
One topic we’ve been thinking about a lot lately: how do we give AI systems real-world power without letting them run wild?
AI agents become much safer when they operate inside carefully designed boundaries. One example is giving an AI access to a credit card: something that sounds risky today, but could become practical with strict constraints. Imagine an AI that can only spend within certain dollar amounts, only on approved websites, and only through a separate security layer that enforces those rules.
The distinction matters: the safety doesn't come from "trusting" the AI. It comes from engineering systems around the AI that limit what it can do.
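To make that concrete, here's a minimal sketch of what such a constraint layer could look like. Everything in it is hypothetical and invented for illustration (the SpendingPolicy class, the authorize function, the specific limits); it isn't from any real product, just one way the idea could be engineered.

```python
from dataclasses import dataclass

# Hypothetical policy layer: names and limits are illustrative,
# not from any real product or library.

@dataclass
class SpendingPolicy:
    max_per_purchase: float   # dollar cap on any single transaction
    daily_limit: float        # dollar cap across a 24-hour window
    approved_merchants: set   # allow-list of merchant domains
    spent_today: float = 0.0  # running total (reset daily elsewhere)

def authorize(policy: SpendingPolicy, merchant: str, amount: float) -> bool:
    """Approve a purchase request only if it satisfies every constraint.

    The AI never touches the card directly; it submits requests to
    this layer, which enforces the rules no matter what the model asks.
    """
    if merchant not in policy.approved_merchants:
        return False  # not on the allow-list
    if amount > policy.max_per_purchase:
        return False  # single purchase too large
    if policy.spent_today + amount > policy.daily_limit:
        return False  # would blow the daily budget
    policy.spent_today += amount  # record the approved spend
    return True

# Example: the agent asks to spend $40 at an approved store.
policy = SpendingPolicy(
    max_per_purchase=50.0,
    daily_limit=200.0,
    approved_merchants={"store.example.com"},
)
print(authorize(policy, "store.example.com", 40.0))  # True
print(authorize(policy, "unknown-site.com", 10.0))   # False
```

The important property is that the guard runs outside the model: the AI can ask, but the policy decides.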
Right now, companies rely heavily on large general-purpose AI models, because building specialized ones is expensive and difficult. But in the future, businesses may use smaller, highly tuned models for specific jobs (coding, legal review, brand safety, accounting, or marketing compliance), each with carefully defined responsibilities and safety boundaries.
And perhaps most importantly, safety itself may become a competitive advantage. If two AI systems can perform the same task, people will likely choose the one that can safely handle sensitive actions, such as spending money, accessing data, or interacting with customers, without causing damage.
The era of “just let the AI do whatever it wants” may not last very long.
Want to learn more? Check out our podcast: Episode #16: Safety in AI Systems
(art by Becka Rahn)

