I built an agent to read your 120 commenters, Jack.
You asked agent builders to comment. The post drew 119 replies within days, so I built an agent to do the first pass - score each commenter against your own criteria and surface the ones worth your time. Here's what it found.
The scoring rubric
Five axes derived from your post, weighted by what you said you want (a code sketch of the rubric follows the list):
- technical_depth (0-30) - engineering background, code-shipping role, builder track record
- ai_agent_experience (0-25) - LLM work, n8n/Zapier/Make, agentic frameworks, RPA, AI automation
- systems_thinking (0-20) - platform/infra/architect/founder/CTO roles, system design
- comment_substance (0-20) - concrete work in the comment (numbers, repos, demos) vs. vague claims
- velocity_signal (0-5) - recent pivot into AI, public building, in-flight projects
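To make the weighting concrete, here's a minimal sketch of the rubric in TypeScript (matching the Next.js stack). The axis names and caps come straight from the list above; the types and helper are illustrative, not the agent's actual code:

```ts
// Axis caps exactly as listed above; the total tops out at 100.
const RUBRIC = {
  technical_depth: 30,
  ai_agent_experience: 25,
  systems_thinking: 20,
  comment_substance: 20,
  velocity_signal: 5,
} as const;

type Axis = keyof typeof RUBRIC;
type AxisScores = Record<Axis, number>;

// Clamp each axis to [0, cap] and sum into a 0-100 total.
function totalScore(scores: AxisScores): number {
  return (Object.keys(RUBRIC) as Axis[]).reduce(
    (sum, axis) => sum + Math.min(Math.max(scores[axis], 0), RUBRIC[axis]),
    0,
  );
}
```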
What the commenter pool actually looks like
[Charts: top role keywords among qualified commenters, and top geographies]
Top 10 candidates - anonymized
Identities, profile links, and named breakdowns are in the private brief - DM me for that. The table below shows each candidate's score, verdict, and why-pattern, so you can see how the agent is reasoning.
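For a sense of the output shape, here is one hypothetical record in the per-candidate format the agent emits. Every value below is invented for illustration, not a real ranking:

```ts
// Hypothetical anonymized record (all values invented for illustration).
const exampleCandidate = {
  id: "candidate-07",
  scores: {
    technical_depth: 24,
    ai_agent_experience: 19,
    systems_thinking: 14,
    comment_substance: 12,
    velocity_signal: 4,
  },
  total: 73, // sum of the clamped axis scores
  verdict: "strong - worth a reply",
  why: "ships agent tooling; comment points to a live repo with usage numbers",
};
```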
How it was built
- Apify harvestapi/linkedin-post-comments scrape (120 comments + 124 enriched profiles, $0.49)
- Claude Opus scoring against the 5-axis rubric, JSON out per candidate (see the pipeline sketch after this list)
- Two markdown briefs generated: named (private) + anonymized (this page)
- Next.js + Tailwind, deployed on Vercel
- Total build time: one afternoon
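For the curious, a condensed sketch of the scrape-then-score pipeline, with assumptions flagged inline: the Apify actor ID is the one named above, but its input field (postUrl) and the Opus model string are placeholders, and the real scoring prompt was more detailed than this:

```ts
import { ApifyClient } from "apify-client";
import Anthropic from "@anthropic-ai/sdk";

const apify = new ApifyClient({ token: process.env.APIFY_TOKEN });
const anthropic = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment

// 1. Scrape comments + enriched profiles (the input field name is an assumption).
async function fetchComments(postUrl: string) {
  const run = await apify.actor("harvestapi/linkedin-post-comments").call({ postUrl });
  const { items } = await apify.dataset(run.defaultDatasetId).listItems();
  return items;
}

// 2. Score one candidate against the 5-axis rubric, JSON out per candidate.
async function scoreCandidate(profile: unknown, comment: string) {
  const msg = await anthropic.messages.create({
    model: "claude-opus-4-20250514", // placeholder model string
    max_tokens: 512,
    messages: [
      {
        role: "user",
        content:
          "Score this commenter on: technical_depth (0-30), ai_agent_experience (0-25), " +
          "systems_thinking (0-20), comment_substance (0-20), velocity_signal (0-5). " +
          "Reply with JSON only: { scores, verdict, why }.\n\n" +
          `Profile: ${JSON.stringify(profile)}\nComment: ${comment}`,
      },
    ],
  });
  const block = msg.content[0];
  return block.type === "text" ? JSON.parse(block.text) : null;
}
```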
The named ranking
The top 25 commenters - names, profile URLs, full comment text, and per-axis breakdowns - sit in a separate brief. Message me on LinkedIn if you want it.
Built in an afternoon as an experiment in agent-driven hiring triage. Curious how well it actually ranks the field versus a human reviewer doing the same job.