200 words of context: an automated summarizer for any link, via Telegram (t.me)
by anon
Virality score: 20/99 — niche, low traction
Predicted: 7 HN points · front-page probability 31% (p10: 3, p90: 298)
Built @tldr_echo_bot over the weekend on top of my Teleton fork. Send it any link (HN story, GitHub repo, arXiv paper, Reddit thread, Wikipedia article, raw URL) and get a 200-word TL;DR back. It works from any chat without explicit commands, and can even be invoked by other bots in the same group.

The interesting bit isn't the summarizer — it's how the agent decides WHEN to summarize. I added a Claude-Code-style skills system to Teleton: SKILL.md files live in ~/.teleton/workspace/skills/<scope>/<name>/, where <scope> is shared/, admin/, or users/<id>/. Only the YAML frontmatter (name plus a one-line trigger description) lands in the system prompt; the body is fetched on demand via a skill_invoke tool. So you can have 100 skills without any prompt bloat.

The agent itself can install new skills on disk via skill_install: admins write to the admin scope, users get per-user personal skills, with a premium gate via a plugin hook. The TL;DR skill is one such SKILL.md (~90 lines) telling the model how to route URLs to the right tool, format the output, and fall back to the Wayback Machine if the source is dead.

Free users get one invocation per 24 hours, enforced by a real payment-channel rate limit; premium is unlimited.

Stack: Teleton fork (TypeScript), running on a Raspberry Pi over Tonutils-Proxy. The skills feature lives at cthellla:feat/skills-support, with a PR open against TONresistor's upstream. The SKILL.md format is loader-compatible with Claude Code skills out of the box.

Try it: t.me/tldr_echo_bot
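For a concrete picture, here is what a SKILL.md in this layout might look like — a sketch only, assuming the frontmatter fields are `name` and a one-line `description`; the field names and body wording are my guesses from the post, not the actual ~90-line skill:

```markdown
---
name: tldr
description: Summarize any URL the user sends into a 200-word TL;DR
---
# TL;DR skill
When a message contains a URL:
1. Route it to the right fetcher (HN story, GitHub repo, arXiv paper,
   Reddit thread, Wikipedia article, or a raw fetch for anything else).
2. If the source is dead, retry via the Wayback Machine.
3. Reply with a roughly 200-word summary, no preamble.
```

Only the two frontmatter lines would ever reach the system prompt; everything below the second `---` stays on disk until invoked.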
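The frontmatter-only loading can be sketched in TypeScript (the fork's language). `parseSkill`, `buildSystemPrompt`, and `skillInvoke` are illustrative names for this sketch, not Teleton's actual API:

```typescript
// Sketch of a frontmatter-only skill registry: the system prompt sees
// just name + one-line description, the body is returned on demand.

interface Skill {
  name: string;
  description: string; // one-line trigger description (frontmatter)
  body: string;        // full instructions, fetched via skill_invoke
}

// Parse a SKILL.md string: YAML frontmatter between --- fences, then body.
function parseSkill(raw: string): Skill {
  const m = raw.match(/^---\n([\s\S]*?)\n---\n([\s\S]*)$/);
  if (!m) throw new Error("missing frontmatter");
  const meta: Record<string, string> = {};
  for (const line of m[1].split("\n")) {
    const [key, ...rest] = line.split(":");
    meta[key.trim()] = rest.join(":").trim();
  }
  return { name: meta["name"], description: meta["description"], body: m[2] };
}

// Only name + description land in the system prompt, so 100 skills
// cost ~100 short lines, not 100 full instruction bodies.
function buildSystemPrompt(skills: Skill[]): string {
  return skills.map((s) => `- ${s.name}: ${s.description}`).join("\n");
}

// The skill_invoke tool fetches the body only when the agent asks for it.
function skillInvoke(skills: Skill[], name: string): string {
  const s = skills.find((x) => x.name === name);
  if (!s) throw new Error(`unknown skill: ${name}`);
  return s.body;
}
```

The design choice this illustrates: prompt size scales with the number of skills' one-liners, while the heavy instructions are paid for only on invocation.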
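The free-tier gate can also be sketched; the real bot enforces this through a TON payment channel, so the in-memory map and the `isPremium` flag below are simplifications standing in for the payment channel and the plugin hook:

```typescript
// Simplified sketch of the free-tier limit: 1 invocation per 24 h per
// user, premium unlimited. In-memory only — the real enforcement is a
// payment-channel rate limit, not a Map.

const DAY_MS = 24 * 60 * 60 * 1000;
const lastInvoke = new Map<number, number>(); // userId -> last timestamp

function mayInvoke(userId: number, isPremium: boolean, now = Date.now()): boolean {
  if (isPremium) return true; // premium gate: unlimited
  const prev = lastInvoke.get(userId);
  if (prev !== undefined && now - prev < DAY_MS) return false;
  lastInvoke.set(userId, now); // record the free invocation
  return true;
}
```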
Foresyn: wanna keep in touch?
Built this solo over a weekend. Soft-launching before the HN post on Monday. If you scored a draft and the prediction either nailed it or whiffed, I want to know.
DM @crimeacs on Telegram — fastest way to reach me
Connect on LinkedIn — Artemii Novoselov