Hi there! Like I said in the post, we're actively developing support for JS/TS next, and are building toward a language-extensible project. We started with an open-source Python tool. :)
Thank you! The latter test was exactly what I was looking for. I think the Purpose statement in the original (sans Nuanced) answer was kind of useful. That’s missing from the after scenario, though I suppose you could easily request it in your prompt.
When debugging code, experienced developers don't read every file—they follow execution paths and understand system architecture. But today's AI coding tools try to read all files and get bogged down in unnecessary details.
With context windows limited to 200K tokens, cramming in random files isn't just inefficient; it's impossible for large codebases. If you’re debugging a failing test, you only need to understand the relevant files in the call chain.
It's not about more context, it's about relevant context. That's what Nuanced provides through static analysis and machine learning.
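To make the idea concrete, here is a toy sketch of call-chain context selection (my own illustration, not Nuanced's actual implementation): build a function-level call graph with Python's `ast` module, then walk outward from a failing test to find only the functions worth including as context. The example module and function names are hypothetical.

```python
import ast

# Hypothetical module: two functions in the test's call chain,
# plus one function that is irrelevant to the failing test.
SOURCE = '''
def parse_config(path):
    return load_file(path)

def load_file(path):
    return path.read_text()

def unrelated_helper():
    return 42

def test_parse_config(tmp_path):
    cfg = tmp_path / "app.cfg"
    cfg.write_text("debug=true")
    assert parse_config(cfg) == "debug=true"
'''

def build_call_graph(source):
    """Map each top-level function to the plain names it calls."""
    tree = ast.parse(source)
    graph = {}
    for node in tree.body:
        if isinstance(node, ast.FunctionDef):
            graph[node.name] = {
                n.func.id
                for n in ast.walk(node)
                if isinstance(n, ast.Call) and isinstance(n.func, ast.Name)
            }
    return graph

def relevant_context(graph, entry):
    """Collect every function reachable from `entry` in the call graph."""
    seen, stack = set(), [entry]
    while stack:
        fn = stack.pop()
        if fn in seen or fn not in graph:
            continue
        seen.add(fn)
        stack.extend(graph[fn])
    return seen

graph = build_call_graph(SOURCE)
print(sorted(relevant_context(graph, "test_parse_config")))
# ['load_file', 'parse_config', 'test_parse_config']
```

Even in this toy version, `unrelated_helper` never enters the context set; a real tool would resolve methods, imports, and cross-file calls, but the selection principle is the same.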
At Nuanced, we're building tools that make AI-generated code more reliable.
As AI writes more code, we need better tools to trust it and technologies that ensure our human understanding keeps pace with this rapid development.
While everyone else races to ship new features with AI, we're focused on addressing the gaps in AI coding tools: ensuring the code they produce is reliable and maintainable, not code that works today but becomes a liability tomorrow.
We're starting with an AI-powered Python language server that makes AI-generated code more reliable by understanding your entire system—drawing on a deeper semantic understanding of code than LLMs have today, as well as artifacts outside of code such as commit histories, configs, and team patterns.
We're a team of ex-GitHub engineers and researchers who've scaled some of the world's largest developer platforms. I'm Ayman (https://www.aymannadeem.com/about/), and before founding Nuanced, I spent seven years at GitHub where I helped build Semantic (https://github.com/github/semantic), an open-source library for parsing and analyzing code across languages—and scaled security systems to detect anomalous code patterns across millions of repositories. Our team's deep experience in static analysis and large-scale system design shapes our approach to the AI reliability challenge today.
We've all been on-call at 2 AM, untangling complex service dependencies, and more recently, we've seen firsthand how AI accelerates development—both the wins and the wounds.
If you're building an AI coding tool and any of this sounds interesting to you—we should talk!