Personal Project / 2024
Navorina Assistant
Designing an AI system for financial reasoning, not reactions
When I started working on Navorina, I wasn't trying to design another financial product. I was trying to understand why people, even when surrounded by financial data, still struggle to make confident decisions.
The issue wasn't access to information. It was the absence of continuity, context, and reasoning. Most financial tools optimize for speed, visibility, and automation. In practice, that optimization fragments thinking: decisions become reactive, short-term, and difficult to explain even to oneself.
I approached Navorina as a response to that gap.
The Core Question
From the beginning, my role went beyond interface design. I acted as the product architect, defining how the system should think, what it is allowed to assume, and where it must remain transparent.
Every design decision was grounded in one question: does this help the user reason better, or does it simply make the system look smarter?
That question eliminated many familiar patterns early on. I deliberately avoided dashboards, alerts, and predictive outputs. Not because they are ineffective, but because they shift attention away from understanding and toward reaction. Instead, I designed Navorina as a conversational system that preserves context over time and makes its assumptions visible. The assistant doesn't replace the user's judgment. It supports it.
Key Decisions
One of the most difficult choices was accepting that the product would appear less impressive at first glance. There are no instant answers or flashy signals. Every conclusion is explained, every recommendation grounded in visible logic. This slows down interaction, but it builds trust.
In a financial context, trust is more valuable than speed.
I didn't start the design process with screens. I started by defining constraints. I decided what the system must always explain, what information should never be hidden, and where uncertainty should remain explicit. Only after those boundaries were clear did the interface take shape. The UI became a direct expression of the system's reasoning rather than a layer of abstraction on top of it.
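To make that concrete, here is a minimal sketch of how such constraints could be encoded as data rather than left as convention. Everything in it, from the interface names to the specific rules, is illustrative; it describes the shape of the idea, not Navorina's actual internals.

```typescript
// Hypothetical sketch: system constraints defined as checkable data,
// before any UI exists. Names and structure are illustrative only.

interface AssistantResponse {
  conclusion: string;
  reasoning: string[];        // visible chain of steps behind the conclusion
  assumptions: string[];      // everything the system took as given
  uncertainty: string | null; // explicit statement of what remains unknown
}

/** A single rule the system must satisfy before presenting output. */
interface Constraint {
  id: string;
  description: string;
  holds(response: AssistantResponse): boolean;
}

const constraints: Constraint[] = [
  {
    id: "always-explain",
    description: "Every conclusion carries at least one reasoning step.",
    holds: (r) => r.reasoning.length > 0,
  },
  {
    id: "assumptions-visible",
    description: "Assumptions are surfaced, never hidden.",
    holds: (r) => r.assumptions.length > 0,
  },
  {
    id: "uncertainty-explicit",
    description: "Uncertainty is stated, not smoothed over.",
    holds: (r) => r.uncertainty !== null,
  },
];

/** Return the ids of any constraints a candidate response violates. */
function validate(response: AssistantResponse): string[] {
  return constraints.filter((c) => !c.holds(response)).map((c) => c.id);
}
```

Treating constraints as checkable objects, rather than design intentions, means the interface can never quietly drift away from them.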
Interaction Model
The interaction model is intentionally calm. It avoids visual noise and real-time distractions. Instead, it focuses on structured dialogue and decision snapshots that are tied to time and context. Users can return to past conclusions and understand not only what decision was made, but why it made sense at that moment.
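A decision snapshot could be modeled roughly like the structure below. The field names are assumptions made for illustration, not the product's real schema; the point is that a conclusion is stored together with the context that made it reasonable.

```typescript
// Hypothetical shape of a decision snapshot: a conclusion frozen
// together with the context that made it reasonable at the time.

interface DecisionSnapshot {
  id: string;
  takenAt: Date;                 // when the decision was made
  question: string;              // what the user was trying to resolve
  conclusion: string;            // what was decided
  rationale: string[];           // the visible reasoning behind it
  contextAtTheTime: {
    facts: string[];             // inputs the system relied on
    assumptions: string[];       // what was taken as given
    openUncertainties: string[]; // what was explicitly unknown
  };
}

// Revisiting a past decision replays its frozen context rather than
// re-evaluating the question against today's data.
function explain(s: DecisionSnapshot): string {
  return [
    `On ${s.takenAt.toDateString()} you concluded: ${s.conclusion}`,
    `Because: ${s.rationale.join("; ")}`,
    `Given: ${s.contextAtTheTime.facts.join("; ")}`,
    `Assuming: ${s.contextAtTheTime.assumptions.join("; ")}`,
  ].join("\n");
}
```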
System Architecture
From a design perspective, I treated Navorina as a system rather than a collection of features. Reasoning is separated from presentation, context from output, memory from interaction. This separation allows the product to evolve without accumulating complexity or breaking its mental model.
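As a sketch of what that separation might look like in code, the layers could map to interfaces like these. The names are hypothetical and assume a simple request flow; they illustrate the boundaries, not the actual codebase.

```typescript
// Hypothetical module boundaries reflecting the separation described
// above: reasoning, memory, and presentation evolve independently.

interface Context { facts: string[]; assumptions: string[]; }
interface Conclusion { text: string; reasoning: string[]; }

/** Reasoning: produces conclusions; knows nothing about rendering. */
interface ReasoningEngine {
  reason(question: string, context: Context): Conclusion;
}

/** Memory: persists context across sessions, separate from interaction. */
interface MemoryStore {
  load(userId: string): Context;
  save(userId: string, context: Context): void;
}

/** Presentation: renders a conclusion; cannot alter it. */
interface Presenter {
  render(conclusion: Conclusion): string;
}

// Each layer depends only on the interfaces around it, so any one
// layer can change without breaking the system's mental model.
function answer(
  userId: string,
  question: string,
  engine: ReasoningEngine,
  memory: MemoryStore,
  presenter: Presenter,
): string {
  const context = memory.load(userId);
  const conclusion = engine.reason(question, context);
  return presenter.render(conclusion);
}
```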
Trade-offs and Outcomes
There were clear trade-offs. I accepted slower interaction, fewer surface-level features, and a lower "wow" factor. In return, the system reduced cognitive load, minimized context switching, and supported long-term thinking. Users spent less time juggling tools and more time understanding their own decisions.
That outcome validated the original hypothesis: clarity scales better than complexity.
What This Represents
For me, Navorina represents more than a single case study. It reflects how I approach complex, AI-driven systems. I work best in ambiguous problem spaces, where constraints are unclear and decisions have long-term consequences. I don't optimize interfaces in isolation. I design systems that help people think more clearly over time.
Ongoing Development
Navorina Assistant is intentionally unfinished. The project continues to evolve as a living system: I refine the reasoning model, expand the context layer, and explore how AI can support increasingly complex financial scenarios without sacrificing clarity or trust. Each iteration reinforces the same principle: the system must remain understandable, auditable, and aligned with how people actually think.
Today, Navorina exists both as a working product and as an ongoing research space where I test ideas around AI-assisted reasoning, long-term decision support, and system design under uncertainty.