An experiment by Anuj Kapoor

I built an agent to read your 120 commenters, Jack.

You asked agent builders to comment. The post drew 119 replies within days. So I built an agent to do the first pass: score each commenter against your own criteria and surface the ones worth your time. Here's what it found.

- Commenters analyzed: 119
- Worth interviewing first: 9 (8%)
- Strong fit: 6
- Top score: 62/100

The scoring rubric

Five axes derived from your post, weighted by what you said you want: technical, ai, systems, comment, and velocity. Each candidate card below shows the per-axis breakdown.
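In most of the cards below the per-axis points sum to the candidate's total, so the aggregation can be sketched as a plain sum over the five axes. A minimal illustration: the axis names come from the cards, but the aggregation and the verdict thresholds are my inference from the published numbers, not the actual rubric:

```python
from dataclasses import dataclass, asdict

@dataclass
class AxisScores:
    # The five axes as labeled in the candidate cards.
    technical: int
    ai: int
    systems: int
    comment: int
    velocity: int

    def total(self) -> int:
        # Total out of 100 = simple sum of the per-axis points
        # (holds for most cards; assumed, not stated).
        return sum(asdict(self).values())

def verdict(total: int) -> str:
    # Thresholds read off the cards: 52+ scored "strong fit",
    # the low 40s "worth a look". My reading, not a stated rule.
    return "strong fit" if total >= 52 else "worth a look"

# Candidate #1 from the anonymized list:
c1 = AxisScores(technical=22, ai=18, systems=13, comment=7, velocity=2)
print(c1.total(), verdict(c1.total()))  # → 62 strong fit
```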

What the commenter pool actually looks like

Top role keywords among qualified commenters

- ai: 16
- engineer: 8
- product: 8
- agent: 7
- ml: 3
- founder: 3
- ops: 2
- automation: 2

Top geographies

- India: 4
- Australia: 3
- Singapore: 3
- United States: 2
- Hong Kong SAR: 2
- New Zealand: 1
- New York City Metropolitan Area: 1
- Nigeria: 1

Top 9 candidates - anonymized

Identities, profile links, and named breakdowns are in the private brief - DM me for that. Below are each candidate's score, verdict, and why-pattern, so you can see how the agent reasons.

#1 - 62/100 - strong fit
Why: 14-year full-stack engineer with hands-on LangGraph/LangChain multi-agent RAG systems, CKA, and production SaaS architecture experience.
Comment: "Built a production multi-agent RAG SaaS (SoRag) and a clinical decision AI system. This resonates - would love to connect.…"
Axis scores: technical 22 · ai 18 · systems 13 · comment 7 · velocity 2
#2 - 62/100 - strong fit
Why: Production RAG pipeline builder with strong cloud infra/DevSecOps background, compliance experience in fintech, and a concrete technical article shared.
Comment: "Hi Jack, would love to connect! I've been building RAG pipelines for a SaaS AI Platform here in NZ. You can check a bit of my work here: https://dev.to/lpossama…"
Axis scores: technical 22 · ai 14 · systems 14 · comment 9 · velocity 3
#3 - 58/100 - strong fit
Why: Built a multi-agent system with LLM-as-judge, conditional retries, and an MCP tool layer - demonstrates real agentic architecture thinking.
Comment: "This is exactly the shift happening, systems that evaluate and iterate, not just automate steps. I have recently built a multi agent system (Quant and Research …"
Axis scores: technical 17 · ai 18 · systems 11 · comment 9 · velocity 3
#4 - 58/100 - strong fit
Why: Production ML background at a bank, Google Build With AI Hackathon winner, and co-founded an LLM/agent startup on the back of it.
Comment: "Recently won 2 Hackathons - Google and Veris.ai, I would be great fit for this role - daniyar@udel.edu…"
Axis scores: technical 20 · ai 17 · systems 10 · comment 7 · velocity 4
#5 - 54/100 - strong fit
Why: Claims a hands-on multi-agent orchestration platform with named agents, routing logic, memory, and self-correction running on owned hardware.
Comment: "This is the exact stack I've been building for the last 6 months. I run a multi-agent platform called brill.ct that orchestrates across OpenClaw and Hermes, cu…"
Axis scores: technical 14 · ai 18 · systems 11 · comment 8 · velocity 3
#6 - 52/100 - strong fit
Why: 11+ years of AI experience, architecting production agentic RAG systems at Shell serving 15K+ daily users with a LangChain/LangGraph/CrewAI stack.
Comment: "Happy to help…"
Axis scores: technical 22 · ai 18 · systems 10 · comment 1 · velocity 1
#7 - 44/100 - worth a look
Why: Hands-on with n8n, LangChain, CrewAI, MCP servers, and Python; built his own agent-readiness platform, RankedLM.
Comment: "I am an independent builder with hands-on experience shipping AI agents using n8n and GHL - I have several projects going live on GitHub shortly that highlight …"
Axis scores: technical 14 · ai 14 · systems 9 · comment 5 · velocity 4
#8 - 42/100 - worth a look
Why: Solid Technical Lead at an AI-focused company (Harrison.ai) with multi-language backend depth and team architecture experience.
Comment: "Hi, Jack Zhang I would be interested in those roles.…"
Axis scores: technical 20 · ai 7 · systems 10 · comment 2 · velocity 3
#9 - 42/100 - worth a look
Why: Built and shipped a conversational AI chatbot for agentic payments on WhatsApp and automated a $200M+ OTC treasury workflow - concrete deliverables.
Comment: "Currently Chief of Staff to the CEO at a Tether-backed series A cross-border payments company where I built and shipped a Conversational AI chatbot for agentic …"
Axis scores: technical 8 · ai 13 · systems 10 · comment 8 · velocity 3

How it was built

  1. Apify harvestapi/linkedin-post-comments scrape (120 comments + 124 enriched profiles, $0.49)
  2. Claude Opus scoring against the 5-axis rubric, JSON-out per candidate
  3. Two markdown briefs generated: named (private) + anonymized (this page)
  4. Next.js + Tailwind, deployed on Vercel
  5. Total build time: one afternoon
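Step 2 - scoring with JSON out per candidate - can be sketched roughly as below. This is my reconstruction, not the actual code: the prompt wording, JSON schema, and model id are placeholders, and the Claude call (via the anthropic SDK) is shown only in a comment so the parsing logic stands alone:

```python
import json

# The five rubric axes, as they appear in the candidate cards above.
AXES = ["technical", "ai", "systems", "comment", "velocity"]

def build_prompt(comment: str, profile: str) -> str:
    # Ask the model for strict JSON so each candidate parses mechanically.
    return (
        "Score this LinkedIn commenter against a 5-axis hiring rubric "
        f"({', '.join(AXES)}). Reply with JSON only, shaped as "
        '{"scores": {"<axis>": <int>, ...}, "verdict": "...", "why": "..."}.\n\n'
        f"Comment: {comment}\nProfile: {profile}"
    )

def parse_candidate(raw: str) -> dict:
    # One JSON object per candidate; total is the sum over the five axes.
    data = json.loads(raw)
    data["total"] = sum(data["scores"][axis] for axis in AXES)
    return data

# The actual model call (anthropic SDK) would look roughly like:
#   import anthropic
#   client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY
#   msg = client.messages.create(
#       model="claude-opus-...",  # placeholder model id
#       max_tokens=512,
#       messages=[{"role": "user", "content": build_prompt(comment, profile)}],
#   )
#   candidate = parse_candidate(msg.content[0].text)

# Offline demo with a canned model response (candidate #2's numbers):
raw = ('{"scores": {"technical": 22, "ai": 14, "systems": 14, '
       '"comment": 9, "velocity": 3}, "verdict": "strong fit", '
       '"why": "Production RAG pipelines in NZ"}')
print(parse_candidate(raw)["total"])  # → 62
```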

The named ranking

A ranking of the top 25 commenters - names, profile URLs, full comment text, and per-axis breakdowns - sits in a separate brief. Message me on LinkedIn if you want it.

Built over a weekend as an experiment in agent-driven hiring triage. Curious how well it actually ranks the field versus a human reviewer doing the same job.