The $0 AI Agent That Runs My Investment Portfolio — What It Taught Me About Enterprise AI ROI


I built an AI agent for $0 a month. Not as a side project. As a deliberate experiment to test something every technology leader should understand before signing another seven-figure AI contract.

It runs on a Raspberry Pi 5 sitting on my desk, powered by Home Assistant OS, connected to Telegram, and backed by a private GitHub repository. Every trading day at 2:30 PM it sends me a live portfolio snapshot — current value, capital P&L, dividend-adjusted total return, and which positions moved more than 3% with their technical signals.

When I want signals on what to consider buying or selling, I type two words. The analysis comes back in under 90 seconds.

I didn't build this because I couldn't afford a commercial alternative. I built it because I wanted to understand something organisations are consistently missing in how they approach AI investment.

$0 monthly cost · 4 LLM providers tested · <90s analysis on demand

The real cost of AI isn't the model

Every enterprise AI conversation I've had in the past 18 months has started in the same place: which model, which vendor, which price per token. The assumption baked into those conversations is that the model is the product — that intelligence is the scarce resource you're paying for.

My Raspberry Pi experiment taught me that assumption is wrong.

Over three months of building, I switched between four different LLM providers. The underlying model turned out to be almost irrelevant to whether the system was useful. What mattered was everything around the model: the data pipelines feeding it, the scripts executing on its behalf, the skill definitions governing its behaviour, and the version control keeping it auditable and recoverable.

The LLM itself does remarkably little in this system. It receives a trigger, runs a script, and forwards the output.

The intelligence isn't in the model. It's in the architecture.

The actual work — fetching live prices, calculating weighted average costs, aggregating dividend income correctly across position changes, generating signals from technical indicators — is all deterministic code that can be tested, versioned, and audited independently of any AI vendor.
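The article doesn't share its code, but the deterministic core it describes might look like the following minimal Python sketch. Everything here — the `Position` class, its method names, the lot-based cost model — is illustrative, not the author's actual implementation; the point is that weighted average cost, capital P&L, and dividend-adjusted return are plain arithmetic that a unit test can pin down, with no model anywhere in sight:

```python
from dataclasses import dataclass, field

@dataclass
class Position:
    """A single holding, built from a list of (quantity, price) buy lots."""
    lots: list[tuple[float, float]] = field(default_factory=list)
    dividends: float = 0.0  # cash dividends received while the position was held

    def buy(self, qty: float, price: float) -> None:
        self.lots.append((qty, price))

    @property
    def quantity(self) -> float:
        return sum(q for q, _ in self.lots)

    @property
    def avg_cost(self) -> float:
        """Weighted average cost across all buy lots."""
        total_qty = self.quantity
        return sum(q * p for q, p in self.lots) / total_qty if total_qty else 0.0

    def capital_pnl(self, price: float) -> float:
        """Unrealised gain/loss at the given market price."""
        return (price - self.avg_cost) * self.quantity

    def total_return(self, price: float) -> float:
        """Dividend-adjusted total return as a fraction of cost basis."""
        cost = self.avg_cost * self.quantity
        return (self.capital_pnl(price) + self.dividends) / cost if cost else 0.0
```

Buying 10 shares at 100 and 10 at 120 gives an average cost of 110; at a price of 130 the capital P&L is 400, and any dividends received lift the total return on the 2,200 cost basis accordingly. Every number is reproducible and auditable, independent of any AI vendor.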

What constrained environments teach you

Running on a free API tier means hitting rate limits. Regularly. Especially during development.

In an enterprise context that would be a blocker. In a home lab it was a forcing function. Every time I hit a limit, I had to ask: does this task actually need an LLM? Almost always the answer was no. Move it to a script. Remove the dependency. The LLM is expensive — in tokens, in latency, in rate limit exposure — so use it only where it adds value that nothing else can provide.
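The "move it to a script" discipline can be sketched as a tiny router. This is a hypothetical shape, not the author's code: known commands resolve to deterministic functions, and the LLM is only the fallback for free-form input — so the expensive, rate-limited path is hit as rarely as possible:

```python
from typing import Callable

def handle(message: str, llm_call: Callable[[str], str],
           scripts: dict[str, Callable[[], str]]) -> str:
    """Route an incoming message: a deterministic script if the command is
    recognised, the LLM only as a fallback for free-form queries."""
    command = message.strip().lower()
    if command in scripts:
        return scripts[command]()   # cheap, testable, auditable path
    return llm_call(message)        # expensive path, used only where it adds value
```

A registry like `{"portfolio snapshot": run_snapshot}` then absorbs the routine traffic, and rate limits stop being a daily event.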

💡
The pattern that scales

This discipline is rare in enterprise AI projects. The model gets used for everything because the model is what everyone is excited about. Data retrieval, formatting, calculation, routing — tasks that should be deterministic code end up as LLM calls. The result is systems that are expensive, slow, inconsistent, and difficult to audit.

Constraints teach you to be precise about where intelligence actually belongs in a workflow. That precision is what separates enterprise AI projects that deliver ROI from those that become expensive technical debt.

The governance problem hiding in plain sight

The hardest problems weren't technical — they were operational.

A scheduled job fired at the wrong time because a container restarted and the system was configured to fire immediately on missed runs. A timezone field silently defaulted to the wrong region after a configuration edit. A dividend calculation produced incorrect totals because historical data from exited positions was being included in current position metrics — invisible to any end user looking only at the final output.
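The dividend bug is worth making concrete, because it is the kind of error no model output will ever flag. In a hypothetical sketch (the field names are illustrative), the difference between the buggy and correct totals is a single filter:

```python
def current_dividend_income(ledger: list[dict], holdings: set[str]) -> float:
    """Sum dividend income only for tickers still held.

    The subtle bug: summing the whole ledger silently folds in dividends
    from positions that were exited long ago, inflating current metrics.
    """
    return sum(row["amount"] for row in ledger if row["ticker"] in holdings)
```

With a ledger containing an exited ticker, the naive whole-ledger sum and the filtered sum disagree — and only a test at this layer, not a glance at the agent's polished output, will catch it.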

None of these are AI problems. They are data integrity problems, operational problems, configuration management problems. The kind that exist in every enterprise system and become significantly harder to detect when an LLM is in the middle of the pipeline — because the model may produce confident, plausible-looking output even when the underlying data is wrong.

The solution wasn't more sophisticated AI. It was better engineering discipline.

Governance isn't a layer you add to AI systems. It's a property of the architecture from day one.

Version control for agent behaviour, explicit configuration management, clear separation between what the LLM does and what deterministic code does, and a private repository that gives every change a timestamp, an author, and a rollback path. These aren't AI innovations — they're software engineering fundamentals applied rigorously to a new context.

What this means for enterprise AI investment

I'm not arguing that enterprise AI should run on Raspberry Pis. I'm arguing that the questions a constrained environment forces you to ask — where does intelligence belong in this workflow, what needs to be deterministic, how do I maintain auditability, what happens when the model changes or the vendor changes their pricing — are exactly the questions that determine whether a significant AI investment delivers ROI or becomes expensive technical debt.

The bottom line for CIOs and architects

The organisations getting the most from AI right now are not the ones with the most sophisticated models. They are the ones who have been most disciplined about what the model is actually for. The model is the last mile. The architecture is the investment.

Three months of building on an $80 computer clarified something that no vendor briefing or analyst report had.

If your organisation's AI strategy is primarily a conversation about which model to buy — what does that tell you about where the real risk in your AI investment actually sits?
