Our AI+ Approach

Most firms talk about “adding AI” to their workflows. We see it differently. For us, AI is not an accessory — it is part of the architecture.

Zenith Grove is being built as an AI-native investment practice. Today that means using the best available tools — large foundation models, internal automation, and structured research workflows — to expand our analytical capacity. Over time, it means developing a proprietary, AI-enabled research platform that supports the way we make decisions: long-horizon, evidence-driven, and deeply curious.

Our focus is not on replacing judgment, but on strengthening it. Models help us surface patterns, test hypotheses, and navigate large bodies of information more effectively. Human reasoning provides calibration, context, and discipline.

The frontier is moving quickly. We are exploring emerging approaches to financial-domain models and the early research around specialised architectures for investment analysis. We’re learning, experimenting, and assembling the foundations so that we can train models of our own when both the technology and our understanding are ready.

AI is not a shortcut. It is infrastructure. And by treating it as such from day one, we hope to build a firm that can compound insight the same way capital compounds: patiently, with accumulated understanding as the core driver of long-term results.

Why AI belongs at the core of our workflow

Across asset management, AI is shifting from experiment to plumbing. Firms are already using large models to:

  • synthesise earnings calls, filings and research faster than human teams can read them,
  • support portfolio construction and risk monitoring with richer pattern-detection, and
  • automate operational checks, documentation and compliance reviews.

At the same time, regulators and industry bodies are stressing governance, explainability and financial-stability risks alongside the efficiency gains.

Taken together, this points to a simple conclusion for a small, long-horizon firm like ours:

AI should sit inside the investment and research process, not on top of it.

That is what we mean by an AI+ framework. AI enhances every stage of the workflow, but it does not replace judgment or discipline. Human stewards remain responsible for decisions; models expand their field of vision and test their thinking.

An AI+ roadmap for Zenith Grove

We do not yet have a proprietary model. Today we rely on advanced third-party systems (such as GPT-class models) and a growing set of internal tools. The roadmap below is our path from “smart user of external AI” to “operator of private, domain-tuned AI”.

Phase 1 — Augmented research (now)

Goal: Use best-in-class general models safely and systematically.

  • Research copilots: Use LLMs as research assistants to summarise transcripts, filings and news; generate variant theses; and highlight inconsistencies across sources. This mirrors what many leading firms already do to accelerate insight generation without handing over final judgment.  
  • Structured prompts & checklists: Encode parts of our process (for example, how we read an annual report or decompose a business model) into prompts and templates, so that AI tools reinforce our method rather than randomise it.
  • Documentation & audit trail: Keep a record of key AI-assisted steps in research and portfolio decisions, so that we can replicate and review them later (a minimal sketch of both patterns follows this list).
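
To make this concrete, here is a minimal sketch of what encoding a checklist into a reusable prompt, and logging each AI-assisted step, might look like. It assumes Python; the template questions, the `log_step` helper, and the file layout are illustrative, not a description of our production tooling.

```python
import json
import datetime as dt
from pathlib import Path

# Illustrative checklist prompt: the questions encode the firm's reading
# method, so a general-purpose model is steered the same way every time.
ANNUAL_REPORT_PROMPT = """\
You are assisting an investment analyst. Using only the excerpt below:
1. Summarise what management says drives growth, in their own words.
2. List risks that appear newly disclosed relative to the prior year.
3. Flag any inconsistency between the narrative and reported figures.

Excerpt:
{excerpt}
"""

AUDIT_LOG = Path("research_audit_log.jsonl")  # hypothetical location

def log_step(task: str, prompt: str, response: str) -> None:
    """Append one AI-assisted step to an append-only JSONL audit trail."""
    record = {
        "timestamp": dt.datetime.now(dt.timezone.utc).isoformat(),
        "task": task,
        "prompt": prompt,
        "response": response,
    }
    with AUDIT_LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Usage: fill the template, call whichever model is in use, log the step.
prompt = ANNUAL_REPORT_PROMPT.format(excerpt="... annual report text ...")
response = "... model output ..."  # stand-in for a real model call
log_step("annual-report-checklist", prompt, response)
```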

This phase is about habits more than hardware: teaching our process to the tools, not the other way around.

Phase 2 — Private research layer

Goal: Build a secure, firm-specific “knowledge engine” around public and internal data.

Here we move towards private AI: models and retrieval systems that sit inside our own infrastructure, with our own access controls, rather than sending sensitive material to public endpoints. 

Key elements:

  • Curated research corpus: Systematically collect our internal memos, models, meeting notes, and annotated filings into a clean, searchable knowledge base.
  • Retrieval-augmented research assistant: Instead of asking a generic model to “know markets”, we let it retrieve from our corpus plus carefully licensed external data. This is the emerging pattern in institutional AI use: keep the base model general, make the context specific (see the sketch after this list).
  • Portfolio-aware views: Build tools that answer questions in the context of our actual holdings and watchlists — for example, “show me all recent regulatory changes affecting our top ten positions” or “summarise risks mentioned in the last four calls for companies with >5% portfolio weight”.  
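
To show the shape of that pattern, here is a minimal retrieval-augmented sketch in Python. The corpus entries are invented, and the naive keyword-overlap retriever stands in for the embedding index and vector store a real deployment would use.

```python
# Toy retrieval-augmented flow: retrieve firm documents first, then have the
# model answer from that context rather than from its general knowledge.
# A real deployment would use embeddings and a vector store; keyword overlap
# is only a stand-in to keep the sketch self-contained.

CORPUS = [
    {"id": "memo-2024-03", "text": "Thesis memo: pricing power in specialty chemicals ..."},
    {"id": "call-q2-notes", "text": "Q2 call notes: management flagged input cost pressure ..."},
]

def retrieve(query: str, k: int = 2) -> list[dict]:
    """Rank corpus entries by word overlap with the query (toy scoring)."""
    words = set(query.lower().split())
    return sorted(
        CORPUS,
        key=lambda doc: len(words & set(doc["text"].lower().split())),
        reverse=True,
    )[:k]

def build_prompt(query: str) -> str:
    """Keep the base model general; make the context specific."""
    context = "\n\n".join(
        f"[{doc['id']}]\n{doc['text']}" for doc in retrieve(query)
    )
    return (
        "Answer using ONLY the internal documents below, citing document ids. "
        "If the answer is not in them, say so.\n\n"
        f"{context}\n\nQuestion: {query}"
    )

print(build_prompt("What cost pressures did management flag?"))
```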

At this stage, we are still standing on foundation models built by others, but the system around them — data, retrieval, prompts, guardrails — is distinctly our own.

Phase 3 — Domain-tuned Zenith models

Goal: Train or adapt models that internalise our domain and philosophy.

The research community is rapidly exploring finance-specific LLMs and multimodal systems that combine text, time-series and alternative data for stock selection and risk analysis.  

For a focused firm like Zenith Grove, a realistic path is:

  • Fine-tuning on our corpus: Start by adapting open-source or partner models on our own labelled data — past decisions, investment memos, thesis changes — so that the system learns what we consider material, risky or attractive (a toy example of the data shape follows this list).
  • Task-specific models: Develop smaller, specialised models for narrow tasks (e.g. earnings-call risk tagging, governance red-flag detection, or portfolio-constraint checking) where interpretability and control matter more than sheer size.
  • Continuous feedback loops: Use real investment outcomes and ex-post reviews as training signals, so that the models evolve with our process rather than freezing today’s views.
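
As an illustration of what such labelled data might look like, the sketch below turns past reviews into a supervised fine-tuning file. The field names follow a common chat fine-tuning layout, and the example review is invented; none of this reflects an actual Zenith Grove dataset.

```python
import json

# Illustrative shape of a supervised fine-tuning set built from past work:
# each example pairs a document with the judgment the firm actually reached,
# so an adapted model learns what we consider material. Field names follow
# a common chat fine-tuning layout and are assumptions, not a fixed schema.

past_reviews = [
    {
        "excerpt": "Management raised guidance while inventories grew 40%...",
        "judgment": "Flagged as a risk: revenue quality, possible channel stuffing.",
    },
]

with open("zenith_finetune.jsonl", "w", encoding="utf-8") as f:
    for case in past_reviews:
        example = {
            "messages": [
                {"role": "system", "content": "Assess materiality and risk as a long-horizon analyst."},
                {"role": "user", "content": case["excerpt"]},
                {"role": "assistant", "content": case["judgment"]},
            ]
        }
        f.write(json.dumps(example) + "\n")
```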

This is the point at which we can reasonably talk about proprietary AI: models whose behaviour reflects Zenith Grove’s accumulated judgment and data, not just generic market text.

Guardrails: responsible, private, human-in-the-loop

Becoming AI-native is not only about capability; it is also about restraint.

Industry frameworks for responsible AI in finance emphasise privacy, robustness, explainability, and clear lines of accountability. For us, that translates into a few simple commitments:

  • Human accountability: Investment decisions remain the responsibility of named humans. Models propose; stewards dispose (a small sketch of this gate follows the list).
  • Private by default: Sensitive client and portfolio data stays inside controlled infrastructure, with encryption, access controls, and region-appropriate compliance as we grow.
  • Bias and error awareness: We treat AI outputs as hypotheses, not facts — especially in areas where recent research has shown models can over-generalise or misinterpret evidence.  
  • Regulatory alignment: We will build our AI systems with reference to evolving guidance from regulators and standard-setters, rather than retrofitting controls after the fact. 
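
A small sketch of how the first commitment might be enforced in tooling: an AI-generated proposal is inert data until a named steward approves it. The `Proposal` class and its fields are hypothetical, not a production schema.

```python
import datetime as dt
from dataclasses import dataclass

# Toy "models propose; stewards dispose" gate: a model-generated proposal
# is inert until a named human steward signs it off.

@dataclass
class Proposal:
    ticker: str
    action: str                          # e.g. "trim", "add", "hold"
    rationale: str                       # model-generated supporting argument
    approved_by: str | None = None
    approved_at: dt.datetime | None = None

    def approve(self, steward: str) -> None:
        """Only a named human can turn a proposal into a decision."""
        self.approved_by = steward
        self.approved_at = dt.datetime.now(dt.timezone.utc)

    @property
    def is_decision(self) -> bool:
        return self.approved_by is not None

p = Proposal("ABC", "trim", "Model flags deteriorating cash conversion.")
assert not p.is_decision   # a model output alone is never a decision
p.approve("Named Steward")
assert p.is_decision
```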

We are, by our own standards, still early in this journey. Today Zenith Grove is an AI-enabled practice built on state-of-the-art external tools, careful prompts, and disciplined human oversight. Over time, as our data, experience, and infrastructure deepen, we intend to become a genuinely AI-native firm — one whose internal models and workflows quietly reflect years of accumulated research, not just the latest technology cycle.