// Engineer building intelligent systems — Rust, Python, and fine-tuned LLMs underneath.
Ten-plus years across Rust, Python, TypeScript, and .NET. Currently shipping a multi-locale assistant at Apple, with RAG and small-model fine-tuning on the side. Turning research into products that stay up.
No badges, no percentages. The things I reach for first, grouped lightly. If it's listed here I've shipped with it.
Five entries, oldest at the bottom. Each one is a thing I shipped or am shipping — not a job description.
A context-aware automated assistant serving millions of users across multiple locales and surfaces. Low-latency, transactional, IVR + chat. Rust for node-to-node graph traversal at the core; Vue.js inspector on top; Node.js + MongoDB clusters around it. Fine-tuned LLMs for the harder intents — nuance over novelty.
A short, dense engagement on an internal tool for managing loans across milestones with analytics on top. F# + C# microservices feeding a React frontend; Snowflake as the warehouse. Tight scope, fast delivery.
Broke a legacy Encompass-SDK LOS into micro-frontends and microservices on Kubernetes. GraphQL gateway, Redis for low-latency caching, and an ML-based region pricing model. Angular + React in front of .NET Core. I owned the scaling story.
Led the team that built and ran the customer-facing loan origination system. ReactJS + C# .NET Web API, transactional and fault-tolerant. Region-based pricing and appraisal. Served 2,500 associates and 250k+ customer logins.
Replaced a paper bill-reimbursement workflow with an ASP.NET + WPF system. Wrote a JavaScript SHA-1 path that beat the C# one, designed a ClickOnce WPF doc-scan flow, and migrated the data from PostgreSQL to SQL Server. Security tested with Burp Suite.
Things I've built over the years — production systems, AI tooling, and full-stack platforms. Most started as a personal itch and grew into something real.
Started as a plain Kanban board I built for my own dev workflow. Six years later it's a visual flow builder where LLM agents move tasks through Requirements → Plan → Code → Review → Test automatically. The git log tells the whole story: drag-and-drop in 2019, OpenAI streaming in 2022, real-time WebSocket orchestration in 2023, local Ollama models in 2024, and a React Flow canvas with custom node types in 2025.
A Python MLOps service that trains a scikit-learn classifier on tabular income data, serves predictions via FastAPI, logs every inference to Postgres, detects feature drift against a training baseline, and supports automated retraining. Built commit-by-commit from a lone notebook to a monitored production system — four years of real-world evolution captured in 30 commits.
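The drift check at the heart of it can be sketched in a few lines. This is a hypothetical standardized-mean-shift version with made-up thresholds, not the service's actual statistic:

```python
# Hedged sketch: flag a feature as drifted when its live mean moves more
# than `threshold` baseline standard deviations from the training mean.
# Threshold and sample data are illustrative assumptions only.
from statistics import mean, stdev

def drift_score(baseline: list[float], live: list[float]) -> float:
    """Standardized mean shift: |mean(live) - mean(baseline)| / stdev(baseline)."""
    sd = stdev(baseline)
    return abs(mean(live) - mean(baseline)) / sd if sd else 0.0

def is_drifted(baseline: list[float], live: list[float], threshold: float = 0.5) -> bool:
    return drift_score(baseline, live) > threshold

baseline = [30.0, 35.0, 40.0, 45.0, 50.0]   # e.g. training-time values of one feature
stable   = [33.0, 38.0, 41.0, 44.0, 49.0]   # live window close to baseline
shifted  = [55.0, 60.0, 65.0, 70.0, 75.0]   # live window well above baseline

print(is_drifted(baseline, stable))   # → False
print(is_drifted(baseline, shifted))  # → True
```

In the real service this would run per feature over a rolling window of logged inferences, with the retraining job triggered when enough features trip the threshold.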
A RAG system that ingests a developer team's documents, logs, tickets, and markdown notes into a hybrid vector + keyword index, then answers natural-language questions grounded in that real data. Streams answers token-by-token with clickable citations back to the exact chunk.
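The hybrid scoring idea, sketched with toy two-dimensional embeddings and an assumed 50/50 blend — the real index, embedding model, and weights differ:

```python
# Hedged sketch of hybrid retrieval: blend a keyword-overlap score with a
# vector-similarity score per chunk. Corpus, vectors, and alpha are
# illustrative assumptions, not the system's actual configuration.
import math

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def keyword_score(query: str, text: str) -> float:
    q, t = set(query.lower().split()), set(text.lower().split())
    return len(q & t) / len(q) if q else 0.0

def hybrid_rank(query: str, query_vec: list[float], chunks, alpha: float = 0.5):
    # alpha blends vector similarity against keyword overlap (assumed 50/50).
    scored = [
        (alpha * cosine(query_vec, vec) + (1 - alpha) * keyword_score(query, text), cid)
        for cid, text, vec in chunks
    ]
    return [cid for _, cid in sorted(scored, reverse=True)]

chunks = [
    ("ticket-42", "deploy failed on staging cluster", [0.9, 0.1]),
    ("notes-07",  "lunch menu for friday",            [0.1, 0.9]),
]
print(hybrid_rank("why did the deploy failed", [0.8, 0.2], chunks))  # → ['ticket-42', 'notes-07']
```

The chunk IDs the ranker returns are what the citations link back to when answers stream.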
A Python NLP project that classifies news articles by topic. Started with TF-IDF + logistic regression in 2019, moved to BERT fine-tuning in 2020, simplified with HuggingFace pipelines in 2022, added GPT-3.5 in 2023, survived the OpenAI SDK breaking change in 2024, and finally cut the cloud dependency entirely in 2026 by running Llama 3 locally.
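The 2019 starting point looks roughly like this — a minimal TF-IDF + logistic regression pipeline on invented headlines, not the project's actual data or hyperparameters:

```python
# Hedged sketch of the original baseline: TF-IDF features into a logistic
# regression classifier. Training texts and labels are made up for
# illustration; the real corpus was news articles.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "the team won the match",
    "a thrilling football game",
    "the player scored a goal",
    "new gpu chip released",
    "software update improves security",
    "the startup launched an app",
]
labels = ["sports", "sports", "sports", "tech", "tech", "tech"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)

pred = clf.predict(["the football player scored a goal"])[0]
print(pred)  # → sports
```

Everything after this in the timeline is swapping the model behind the same classify-an-article interface.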
A LangChain/LangGraph agent that pulls daily APMC mandi prices from the Government of India's Agmarknet API, reasons across price differentials, transport cost, and weather forecasts, and recommends optimal sell locations for farmers — delivering advice in Telugu. A WhatsApp bot lets farmers text their crop and quantity and receive a response in Telugu within 30 seconds.
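The core recommendation arithmetic, sketched with invented prices, distances, and a placeholder transport cost — the agent layers weather and forecast reasoning on top of this:

```python
# Hedged sketch: net revenue after transport, across candidate mandis.
# Prices (₹/quintal), distances, and cost_per_km are illustrative
# assumptions, not real Agmarknet data.
def best_mandi(quantity_q: float, mandis, cost_per_km: float = 3.0) -> str:
    # mandis: list of (name, price_per_quintal, distance_km)
    def net(m):
        name, price, km = m
        return quantity_q * price - km * cost_per_km
    return max(mandis, key=net)[0]

mandis = [
    ("Warangal",  2200, 40),   # closer, lower price
    ("Hyderabad", 2350, 120),  # farther, higher price
]
print(best_mandi(10, mandis))  # → Hyderabad
```

For 10 quintals the higher Hyderabad price more than covers the extra transport, so the farther mandi wins; at smaller quantities the nearby one would.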
A lightweight Python framework that classifies incoming queries by complexity, routes them to the right LLM provider and model, optionally injects a RAG context hook, evaluates confidence, and retries via escalation — all without any LLM inside the framework itself. Supports OpenAI, Anthropic, Google Gemini, and local Ollama. Every routing decision is persisted to SQLite and surfaced in a Rich CLI dashboard.
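The route-then-escalate loop, sketched with made-up tier names, a crude heuristic classifier, and a stub provider call standing in for the real clients — none of these are the framework's actual values:

```python
# Hedged sketch: classify a query's complexity, pick a model tier, and
# escalate one tier per retry when confidence is low. Tier names,
# thresholds, and the heuristic are illustrative assumptions.
TIERS = ["ollama/llama3", "openai/gpt-4o-mini", "anthropic/claude-sonnet"]

def classify(query: str) -> int:
    # Cheap heuristic: longer or reasoning-flavored queries score higher.
    score = len(query.split()) // 20
    if any(w in query.lower() for w in ("prove", "analyze", "compare")):
        score += 1
    return min(score, len(TIERS) - 1)

def route(query: str, call, min_confidence: float = 0.7):
    # `call(model, query)` stands in for the real provider client and
    # returns (answer, confidence). Escalate until confident or topped out.
    tier = classify(query)
    while True:
        answer, conf = call(TIERS[tier], query)
        if conf >= min_confidence or tier == len(TIERS) - 1:
            return TIERS[tier], answer
        tier += 1  # low confidence: escalate to a stronger model

# Toy provider: only the top tier is "confident" about this query.
def fake(model, query):
    return ("ok", 0.9 if model == TIERS[-1] else 0.4)

print(route("compare these two designs", fake)[0])  # → anthropic/claude-sonnet
```

In the real framework each iteration of that loop is what gets persisted to SQLite and rendered in the dashboard.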
A local MCP server exposing four Telugu NLP tools — translate, transliterate, simplify, summarize — evolving from a Gemma3-backed prototype to a stack of specialized models. EN→TE uses IT3-llama, TE→EN uses NLLB-200-1.3B, Roman→Telugu uses IndicXlit. Integrates as a sidecar with MandiMind.
A Rust CLI for chatting with LLMs from the terminal. Starts with Ollama for single-turn queries, grows to support TOML prompt templates, SQLite conversation history, named sessions, OpenAI and Anthropic providers, streaming output, and a full ratatui TUI chat interface.
A Rust MCP server that exposes local developer tools to Claude Desktop — file operations, shell execution, git history, and code search. Rust binary starts instantly, uses minimal memory, and gives full control over allowed paths and commands.
A local developer time tracker. tl start, do work, tl stop. Projects and tags for grouping. A ratatui TUI dashboard with a live ticking elapsed counter, project bar chart, and daily timeline view.
A fast, terminal-native Markdown notes manager. File-based storage — notes are plain .md files, metadata in a TOML index. No database. qn create 'my idea' opens in $EDITOR and tracks everything automatically.
Two degrees, plus a stubborn habit of reading documentation cover to cover.
College of Engineering and Computer Science · Dayton, OH
Network security, applied cryptography, secure systems.
Hyderabad, India
Algorithms, OS, distributed systems. First job started before the gown was off.
The longer story, kept short. Three paragraphs and a sidebar.
I've spent ten-plus years writing software for a living — mostly full-stack, mostly under pressure. The work I'm proudest of is rarely the most novel; it's the stuff that stayed up and stayed correct.
Today I'm at Apple on a conversational AI platform — multi-locale, transactional, low-latency. Rust handles the core traversal; Vue.js drives the operator surface; Node.js and MongoDB clusters sit underneath. Before this, a long run at Home Point Financial on the loan-origination platform.
On personal time I fine-tune small language models on intent data, build terminal-native tools in Rust because the shell should feel instant, and read the layers under the application — networking, security, hardware. There's a cyber-security MS in there too, and a stubborn opinion that application engineers should know more about what's under them.
Best by email. I read everything; I reply to most of it within a day or two. No form, on purpose.
Encrypt sensitive communications or verify signatures.
gpg --encrypt -r inbox@purumandlaharin.com file.txt
gpg --verify signature.asc file.txt
echo "message" | gpg -e -r inbox@purumandlaharin.com