I’ll be honest: I’m not a fan of letting any software make my choices for me. Not my bank app, not my calendar tool, and definitely not an AI agent. But that doesn’t mean I ignore what they can do. The trick is knowing where the line is between helping and deciding.
My local Hermes Agent lives on my laptop. It’s not connected to the cloud. It can’t email anyone. It can’t approve a payment or fire an employee. What it can do is sit with me while I think out loud, ask smart questions, and knock up summaries in seconds. And after a few months of using it daily, I’ve realised this is actually the most valuable role an AI can play: getting you to a better decision without ever making the decision itself.
The moment you hand over the final choice to an AI, you stop thinking. You stop weighing trade-offs. You stop trusting your own gut, which – for better or worse – is built on years of experience. I’ve seen people feed a vague prompt into a large language model and then blindly follow the output. That’s not decision-making. That’s delegation to a machine that doesn’t have your context, your culture, or your risk appetite.
Hermes Agent is deliberately constrained. It can’t make calls to external systems or sign off on anything. It only sees the data I give it. That forces me to stay in the driver’s seat. But because it’s fast and thorough, it makes me a better driver.
Last month I needed to pick a new expense-reporting tool. We had shortlisted three vendors. Instead of reading through their feature lists myself, I dropped the PDFs and website content into Hermes and asked it to build a comparison table.
But I never asked it “which one should I pick?”. I only asked for the data arranged clearly. Once I had that, I could bring in my own knowledge: that one company’s customer support had gone downhill based on a mate’s experience, and that another had a strong local partner. The AI helped me see the landscape faster. The final call was mine, and it was better informed.
One of my suppliers sent through a revised service agreement. I’m no lawyer. So I asked Hermes to pull out key clauses: termination penalties, data ownership, notification periods. It gave me a bullet-point summary with page numbers. I could then spot a weird 90-day notice requirement that I’d have missed reading the full 20 pages late at night. That single discovery changed how I negotiated.
Again, the agent didn’t say “reject this clause”. It simply surfaced the information. My decision was based on that surfaced info plus my relationship with the supplier and our commercial priorities.
Every quarter we review active projects and kill the ones that aren’t pulling their weight. I asked Hermes to crunch the numbers from our internal tracking system (which I exported as a CSV). It calculated burn rates, milestones missed, and projected completion dates. It then ranked projects by risk level based on my own criteria from a previous conversation.
But the rank was just a suggestion. I overrode it twice because I knew one team had just hired a new lead and another project had strong strategic value, even if the numbers were rough. The AI gave me a clean dashboard. I made the hard calls.
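The ranking itself is simple arithmetic, which is part of why it’s safe to delegate. Here’s a minimal sketch of the kind of calculation involved – the CSV columns, project names, and figures are all invented for illustration, and the risk criteria (months of runway, missed milestones) are just one plausible choice:

```python
import csv
import io

# Invented export format; these column names are an assumption,
# not a real tracking-system schema.
SAMPLE = """project,monthly_spend,budget_left,milestones_missed
Atlas,20000,40000,3
Beacon,5000,60000,0
Comet,15000,30000,1
"""

def rank_by_risk(csv_text):
    """Rank projects riskiest-first: shortest runway, then most missed milestones."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    for r in rows:
        spend = float(r["monthly_spend"])
        # Runway: how many months until the remaining budget is gone.
        r["runway_months"] = float(r["budget_left"]) / spend if spend else float("inf")
        r["missed"] = int(r["milestones_missed"])
    # Short runway sorts first; among ties, more missed milestones sorts first.
    rows.sort(key=lambda r: (r["runway_months"], -r["missed"]))
    return [r["project"] for r in rows]

print(rank_by_risk(SAMPLE))  # riskiest project first
```

The point of seeing it spelled out: the agent can do this sorting flawlessly, but the sort key encodes *my* criteria, and the override – “this team just hired a new lead” – never appears in the data at all.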
When our finance team asked for a 10% reduction in operating costs, I didn’t know where to start. So I fed Hermes last year’s line-item expenses and asked: “Show me categories where spending rose fastest, and where we’re above industry benchmarks.” It produced a simple table of categories – software, travel, contractors, office supplies – with variance percentages. That made it obvious which areas needed the most scrutiny.
But I didn’t ask it to pick which line to cut. I used the table to have a proper conversation with my team. We ended up trimming contractors and delaying a software renewal. The agent prepped the battlefield; we fought the battle.
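The variance table Hermes produced is, again, straightforward arithmetic. A rough sketch of that step, with made-up figures and category names standing in for our real line items:

```python
# Year-over-year spend by category; all numbers are invented for illustration.
last_year = {"software": 80000, "travel": 30000,
             "contractors": 120000, "office supplies": 10000}
this_year = {"software": 96000, "travel": 33000,
             "contractors": 150000, "office supplies": 9500}

def variance_table(prev, curr):
    """Percentage change per category, biggest rises first."""
    rows = []
    for cat in prev:
        pct = (curr[cat] - prev[cat]) / prev[cat] * 100
        rows.append((cat, round(pct, 1)))
    rows.sort(key=lambda r: r[1], reverse=True)
    return rows

for cat, pct in variance_table(last_year, this_year):
    print(f"{cat:16} {pct:+.1f}%")
```

A table like this tells you where to look, not what to cut – contractors rising fastest doesn’t mean contractors are the wrong spend. That judgement stayed with the team.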
Before a key pricing negotiation with a payments partner, I asked Hermes to build a one-pager: their typical pricing model, industry benchmarks, our current volume, and the points they’d most likely push back on. It even suggested some counter-arguments based on trends in the space. The document came together in about four minutes. I took that into the room with me. I didn’t read from it, but it gave me confidence because I’d seen the data laid out clearly. The negotiation went better than expected – partly because I wasn’t scrambling for numbers mid-conversation.
After all these examples, the pattern is clear: the AI prepares you. It doesn’t replace you.
The moment I feel tempted to say “Hermes, just decide for me”, I stop. I rewrite the prompt as “Give me the options and their trade-offs, ranked by risk, and leave the conclusion unstated.” That forces me to do the final synthesis.
In a fintech environment, where compliance and human oversight matter, this approach isn’t just smart – it’s essential. You can’t blame an AI for a wrong call. You can’t hand over your fiduciary duty to a local model running on your laptop. But you can use it to get smarter faster.
That’s the sweet spot. An AI agent that helps you think more clearly, not one that thinks for you. Use it to prep. Use it to summarise. Use it to challenge your blind spots. Then make the decision yourself. You’ll be surprised how much better those decisions become when you’ve done better prep in half the time.
Need help setting up your own AI assistant? Feel free to contact me at [email protected].