I automate 25 engagement comments daily across LinkedIn, X, and Reddit. Here's the exact process, the configs, and the full pipeline. Copy it.
Every morning at 05:30 UTC, a cron job fires up. It hits three platforms with niche-specific queries and pulls back fresh posts from the past week.
The key is specificity. I don't search for "marketing." I search for "AI agents for lead gen" or "solopreneur automation stack." Narrow beats broad every time.
Here's how each platform works. LinkedIn goes through a batched MCP search, X through a cached reader, and Reddit through plain web search with a `site:reddit.com` prefix targeting subreddits like r/Entrepreneur, r/SaaS, and r/artificial. All three are filtered to the past week. The full config:

```json
{
  "niches": [
    "AI agents for marketing",
    "marketing automation solopreneur",
    "AI cold outreach",
    "solopreneur productivity stack",
    "vibecoding tools",
    "GTM engineering"
  ],
  "platforms": {
    "linkedin": {
      "method": "mcp_batch_search",
      "freshness": "past_week",
      "max_results": 15
    },
    "x": {
      "method": "cached_reader",
      "freshness": "past_week",
      "max_results": 10
    },
    "reddit": {
      "method": "web_search",
      "prefix": "site:reddit.com",
      "subreddits": [
        "r/Entrepreneur",
        "r/SaaS",
        "r/artificial",
        "r/smallbusiness"
      ],
      "freshness": "past_week",
      "max_results": 10
    }
  }
}
```
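To turn that config into actual searches, each niche gets expanded once per platform, with Reddit picking up its `site:` prefix. A minimal sketch with a trimmed inline copy of the config (`build_queries` is my own helper, not part of the pipeline):

```python
import json

# Trimmed copy of the config above, inlined for the example
CONFIG = json.loads("""
{
  "niches": ["AI agents for marketing", "vibecoding tools"],
  "platforms": {
    "linkedin": {"method": "mcp_batch_search"},
    "reddit": {"method": "web_search", "prefix": "site:reddit.com"}
  }
}
""")

def build_queries(config):
    """One query string per (niche, platform) pair; prefix applied if present."""
    queries = []
    for niche in config["niches"]:
        for platform in config["platforms"].values():
            prefix = platform.get("prefix", "")
            queries.append(f'{prefix} "{niche}"'.strip())
    return queries
```

Two niches across two platforms yields four queries, e.g. `site:reddit.com "vibecoding tools"`.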
```cron
# Engagement comment pipeline - runs daily at 05:30 UTC
30 5 * * * /home/luca/scripts/engagement-scan.sh >> /var/log/engagement.log 2>&1

# The script does:
# 1. Query all 3 platforms
# 2. Deduplicate results
# 3. Score by relevance (niche match + engagement count)
# 4. Pass top 30 to comment generation
```
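Steps 2 through 4 can be sketched like this. The 10x niche weight and log-scaled engagement are my assumptions; the article only says the score combines niche match and engagement count:

```python
import math

def dedupe_and_rank(posts, niches, top_n=30):
    """Drop duplicate URLs, score the survivors, keep the top N."""
    seen, unique = set(), []
    for p in posts:
        if p["url"] not in seen:
            seen.add(p["url"])
            unique.append(p)

    def score(p):
        text = (p["title"] + " " + p.get("body", "")).lower()
        niche_hits = sum(1 for n in niches if n.lower() in text)
        # Niche match dominates; engagement is a log-scaled tiebreaker
        return niche_hits * 10 + math.log1p(p.get("engagement", 0))

    return sorted(unique, key=score, reverse=True)[:top_n]
```

With this weighting, an on-niche post with 5 upvotes outranks an off-niche post with 500.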
This is the part most people skip. And it's the reason their automated comments read like a bot wrote them (because one did, with zero personality).
I built a voice profile document. It has 6 traits that define how I write online:
Before any comment goes out, it's checked against the comment rules and the AI vocabulary blacklist in that same document:
```markdown
# Voice Profile - Luca

## Core Traits
- Direct, no fluff
- Technical but accessible
- Opinionated with receipts
- Conversational (short sentences, fragments ok)
- First-person stories over abstract advice
- Dry humor, never forced

## Comment Rules
- Max 3 sentences (4 for LinkedIn if adding value)
- Must add new info/perspective
- Never generic agreement
- Always a personal angle or experience
- Question at the end only if genuine

## Blacklist (auto-reject if detected)
pivotal, crucial, delve, showcase, foster,
leverage, streamline, landscape, paradigm,
synergy, holistic, game-changer, empower,
navigate, robust, harness, spearhead,
cutting-edge, best-in-class, thought leader
```
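A minimal sketch of the auto-reject check (my implementation, not the pipeline's): a word-boundary regex per term, so "delve" trips the filter but "delved" does not.

```python
import re

BLACKLIST = [
    "pivotal", "crucial", "delve", "showcase", "foster",
    "leverage", "streamline", "landscape", "paradigm",
    "synergy", "holistic", "game-changer", "empower",
    "navigate", "robust", "harness", "spearhead",
    "cutting-edge", "best-in-class", "thought leader",
]

def blacklist_hits(comment):
    """Return every blacklisted term found as a whole word, case-insensitive."""
    return [t for t in BLACKLIST
            if re.search(rf"\b{re.escape(t)}\b", comment, re.IGNORECASE)]
```

Any non-empty return value rejects the draft and triggers a regeneration.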
For each post, the LLM gets three things: the post content, my voice profile, and the instruction to generate 3 comment variants.
Why 3? Because the first draft is usually too safe. The second is usually better. The third sometimes surprises you. The system auto-picks the best one based on a scoring function (uniqueness, voice match, length).
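The article doesn't show the scoring function, so here is one plausible shape for `score_comment`, weighing uniqueness against the sibling variants, vocabulary overlap with the voice profile, and length. Every weight and heuristic here is a guess:

```python
def score_comment(comment, voice_profile, siblings=()):
    """Higher is better: unique vs. other variants, on-voice, right length."""
    words = set(comment.lower().split())
    # Uniqueness: share of words not used by the other variants
    other = set(w for s in siblings for w in s.lower().split())
    uniqueness = len(words - other) / max(len(words), 1)
    # Voice match: overlap with voice-profile vocabulary
    voice_words = set(voice_profile.lower().split())
    voice_match = len(words & voice_words) / max(len(words), 1)
    # Length: penalize anything past ~3 sentences
    sentences = comment.count(".") + comment.count("!") + comment.count("?")
    length_score = 1.0 if sentences <= 3 else 3 / sentences
    return 0.4 * uniqueness + 0.4 * voice_match + 0.2 * length_score
```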
After picking, it runs a humanization pass:
```python
# llm, score_comment, and HEDGE_PHRASES are defined elsewhere in the pipeline
def generate_comments(post, voice_profile):
    variants = []
    for i in range(3):
        prompt = f"""
Post: {post['content']}
Platform: {post['platform']}
Voice profile: {voice_profile}

Write a comment as Luca. Rules:
- Max 3 sentences
- Add something the post didn't say
- Use a personal angle or experience
- No AI vocabulary (see blacklist)
- Sound like a real person, not a bot
- Variant {i + 1} of 3: {'safe' if i == 0 else 'bold' if i == 1 else 'wild card'}
"""
        variants.append(llm.generate(prompt))

    # Score and pick best
    scored = [(v, score_comment(v, voice_profile)) for v in variants]
    best = max(scored, key=lambda x: x[1])

    # Humanize
    return humanize(best[0])


def humanize(comment):
    # Remove em dashes
    comment = comment.replace('\u2014', '.')
    comment = comment.replace(' - ', ', ')
    # Kill arrows (spaces keep the joined words apart)
    comment = comment.replace('->', ' so ')
    # Strip hedging
    for phrase in HEDGE_PHRASES:
        comment = comment.replace(phrase, '')
    return comment.strip()
```
Every comment runs through an AI detector. The target is "low" risk. If it comes back "medium" or "high," it gets auto-fixed and re-checked.
The auto-fix targets three things: parenthetical density, low specificity, and repetitive sentence structure.
If a comment fails 3 rounds of detection, it gets flagged for manual review instead of going into the sheet automatically. This happens maybe 2-3 times per batch.
```python
def detection_pass(comment, max_retries=3):
    for attempt in range(max_retries):
        result = ai_detector.check(comment)
        if result['risk'] == 'low':
            return {'status': 'pass', 'comment': comment}
        # Auto-fix
        if result['flags'].get('parenthetical_density'):
            comment = reduce_parens(comment)
        if result['flags'].get('low_specificity'):
            comment = add_specificity(comment)
        if result['flags'].get('repetitive_structure'):
            comment = vary_structure(comment)
    # Failed all retries
    return {'status': 'manual_review', 'comment': comment}


def reduce_parens(text):
    """Keep max 1 parenthetical per comment"""
    # Find all parentheticals, keep first, remove rest
    ...


def add_specificity(text):
    """Replace vague claims with specific ones"""
    # "many people" -> "3 founders I talked to"
    # "recently" -> "last Tuesday"
    ...
```
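The body of `reduce_parens` is stubbed above; one way to fill it in is a single regex pass that keeps only the first aside (my implementation, not the author's):

```python
import re

def reduce_parens(text):
    """Keep the first parenthetical, drop every later one whole."""
    count = 0

    def repl(match):
        nonlocal count
        count += 1
        # First aside survives untouched; the rest vanish with their leading space
        return match.group(0) if count == 1 else ""

    return re.sub(r"\s*\([^)]*\)", repl, text)
```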
All approved comments land in a Google Sheet. This is my review dashboard. I open it with my morning coffee, scan through, and approve or edit.
The rows are color-coded by priority so I can scan fast:
| Date | Priority | Author | Post | Link | Comment Draft | Done | Notes |
|---|---|---|---|---|---|---|---|
| 2026-02-14 | HIGH | @sarahcodes | Why I stopped using Zapier... | link | "Switched to n8n last year for the same reason. The webhook reliability alone..." | [ ] | 10k followers, SaaS niche |
| 2026-02-14 | MED | u/startupguy | Best AI tools for solo founders? | link | "Running 3 AI agents that handle my content, outreach, and analytics. Total cost is..." | [ ] | r/SaaS, 45 upvotes |
| 2026-02-14 | LOW | @marketer_mike | Hot take: AI content is killing... | link | "The problem isn't AI content, it's AI content without a voice profile..." | [ ] | Contrarian angle |
Priority is based on: author follower count, post engagement, niche relevance. HIGH means "this person is a potential lead or has a big audience." I always do those first.
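The HIGH/MED/LOW call could be a simple additive score over those three signals. The thresholds below are illustrative guesses, not numbers from the pipeline:

```python
def priority(author_followers, post_engagement, niche_hits):
    """HIGH / MED / LOW from follower count, engagement, niche relevance."""
    score = 0
    score += 2 if author_followers >= 5000 else 1 if author_followers >= 1000 else 0
    score += 2 if post_engagement >= 50 else 1 if post_engagement >= 10 else 0
    score += min(niche_hits, 2)  # cap niche contribution
    if score >= 4:
        return "HIGH"
    if score >= 2:
        return "MED"
    return "LOW"
```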
```python
def push_to_sheet(comments, sheet_id):
    rows = []
    for c in comments:
        rows.append([
            c['date'],
            c['priority'],          # HIGH/MED/LOW
            c['author'],
            c['post_title'][:50],
            c['post_url'],
            c['comment_draft'],
            'FALSE',                # Done checkbox
            c['notes'],
        ])
    sheets_api.append(
        spreadsheet_id=sheet_id,
        range='Comments!A:H',
        values=rows,
    )
    # Apply conditional formatting:
    # RED for HIGH priority, YELLOW for MED, GREEN for LOW
    apply_formatting(sheet_id, rows)
```
This is the human-in-the-loop part. Every morning I spend 5 minutes on the sheet: scan each draft, approve or edit it, tick Done, move on.
When someone replies to my comment, that's a warm signal. The system flags it and adds them to my outreach pipeline. A reply means they already know my name and had a positive interaction. That's worth 10x a cold DM.
I track engagement weekly: reply rate, profile visits from comments, DMs received. The data feeds back into query tuning. If posts from r/SaaS get 3x more replies than r/Entrepreneur, I shift the query weight.
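Shifting query weight can be as simple as normalizing per-source reply rates. A sketch (the function name and shape are mine; the article only describes the feedback loop):

```python
def retune_weights(reply_counts, comment_counts):
    """New per-source query weights, proportional to reply rate."""
    rates = {src: reply_counts.get(src, 0) / max(comment_counts[src], 1)
             for src in comment_counts}
    total = sum(rates.values()) or 1
    return {src: round(r / total, 2) for src, r in rates.items()}
```

A source with a 3x reply rate ends up with 3x the weight in next week's queries.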
But the numbers that actually matter:
Total cost: about $60/month in API calls. Total time: 30 minutes to set up, 5 minutes daily to review. That's it.
This process takes 30 minutes to set up and 5 minutes daily to review. Or you can get it pre-configured, voice-profiled, and running out of the box.
Claw4Growth is an OpenClaw instance pre-loaded with this entire pipeline. Engagement comments, content generation, lead tracking. All of it.
GET CLAW4GROWTH