Show HN: A principle for building agentic LLM pipelines without spaghetti code (hackanton.com)
by anon | permalink
Every LLM pipeline I've seen fail in production fails the same way: intelligence is spread across the harness (in routing logic, in loop conditions, in ad-hoc prompt assembly) and the prompts themselves are thin wrappers around intent. I've been building agentic pipelines for B2B products, and one principle keeps things from turning into spaghetti: push intelligence up into skills, push execution down into deterministic code, and never let them meet in the middle.

Fat Skill = a callable unit with full judgment embedded. It takes (TARGET, QUESTION, DATASET) and returns structured output. The prompt inside is long, opinionated, and includes scoring rubrics, failure modes, and an output schema. You can test it in isolation.

Thin Harness = loop + IO + context loading + safety rails. It knows nothing about domain logic. It just calls skills in order and passes results forward.

Deterministic Layer = SQL, regex, formatters, calculators. Anything where counting or assignment is involved. Never inside a model call.

The test: "Does this step require judgment or synthesis?" → model. "Does this step require counting, matching, or formatting?" → code.

Side effect: skills become reusable artifacts, and the harness becomes boring infrastructure. That's the goal.

Happy to share the prompt schema I use for skills if there's interest.
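To make the three layers concrete, here is a minimal Python sketch of the shape I mean. Everything in it is illustrative, not the author's actual code: the skill name, the prompt text, and the `call_model` function are hypothetical, and the model call is injected so the fat skill can be tested in isolation against a stub.

```python
import json
import re
from dataclasses import dataclass
from typing import Callable

# Fat skill: all the judgment lives in the prompt (rubric, failure
# modes, output schema). This prompt text is a made-up example.
SCORE_RISK_PROMPT = """\
You are scoring {target} on the question: {question}.
Rubric: 0 = no evidence, 5 = strong evidence. Penalize vague claims.
Known failure mode: do not count boilerplate text as evidence.
Return JSON only: {{"score": <int 0-5>, "rationale": "<one sentence>"}}
DATASET:
{dataset}
"""

@dataclass
class Skill:
    name: str
    prompt_template: str

    def run(self, call_model: Callable[[str], str],
            target: str, question: str, dataset: str) -> dict:
        prompt = self.prompt_template.format(
            target=target, question=question, dataset=dataset)
        out = json.loads(call_model(prompt))   # enforce structured output
        assert {"score", "rationale"} <= out.keys()
        return out

# Deterministic layer: counting and matching stay in plain code,
# never inside a model call.
def count_matches(pattern: str, text: str) -> int:
    return len(re.findall(pattern, text))

# Thin harness: loop + IO only. It knows nothing about the domain;
# it calls skills in order and passes results forward.
def harness(skills, call_model, target, question, dataset):
    results = {}
    for skill in skills:
        results[skill.name] = skill.run(call_model, target, question, dataset)
    return results

# Stub model for isolated testing: a fixed, schema-valid reply.
def stub_model(prompt: str) -> str:
    return '{"score": 3, "rationale": "Some direct evidence found."}'

skill = Skill("score_risk", SCORE_RISK_PROMPT)
out = harness([skill], stub_model, "ACME Corp", "churn risk", "ticket logs...")
print(out["score_risk"]["score"])                          # → 3
print(count_matches(r"refund", "refund refund upgrade"))   # → 2
```

The stub is the point: because the harness takes `call_model` as a parameter, each fat skill can be unit-tested with canned replies, and swapping the real model in is a one-line change.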
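The judgment-vs-counting test above can even be mechanized when pipeline steps follow a verb-first naming convention. The step names and verb lists below are my own illustration, not from the post:

```python
# Illustrative routing of pipeline steps: judgment/synthesis verbs go
# to the model; counting/matching/formatting verbs stay in code.
JUDGMENT = {"summarize", "score", "synthesize", "triage"}
MECHANICAL = {"count", "match", "format", "aggregate", "dedupe"}

def route(step: str) -> str:
    verb = step.split("_")[0]
    if verb in JUDGMENT:
        return "model"
    if verb in MECHANICAL:
        return "code"
    raise ValueError(f"unclassified step: {step}")

print(route("score_churn_risk"))   # → model
print(route("count_refund_rows"))  # → code
```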