🦞 25 COMMENTS/DAY x 3 PLATFORMS x $2/DAY = WARM LEAD MACHINE -- FREE STEP-BY-STEP BREAKDOWN -- CLAW4GROWTH.COM

How I Automate 25 Daily
Engagement Comments
Across 3 Platforms

I automate 25 engagement comments daily across LinkedIn, X, and Reddit. Here's the exact process, the configs, and the full pipeline. Copy it.

25 comments/day
3 platforms
$2 daily cost
5m review time
cron-engagement.sh
// STEP 01
Finding the Right Posts

Every morning at 05:30 UTC, a cron job fires up. It hits three platforms with niche-specific queries and pulls back fresh posts from the past week.

The key is specificity. I don't search for "marketing." I search for "AI agents for lead gen" or "solopreneur automation stack." Narrow beats broad every time.

Here's how each platform works:

  • LinkedIn: MCP batch search. Queries like "AI agents" + "marketing automation" + "solopreneur." Filter by past week, sort by engagement.
  • X (Twitter): Cached reader with keyword monitoring. Same niche queries, freshness filter baked in.
  • Reddit: Web search with site:reddit.com prefix. Targets subreddits like r/Entrepreneur, r/SaaS, r/artificial. Past week filter.
example-queries.json
{
  "niches": [
    "AI agents for marketing",
    "marketing automation solopreneur",
    "AI cold outreach",
    "solopreneur productivity stack",
    "vibecoding tools",
    "GTM engineering"
  ],
  "platforms": {
    "linkedin": {
      "method": "mcp_batch_search",
      "freshness": "past_week",
      "max_results": 15
    },
    "x": {
      "method": "cached_reader",
      "freshness": "past_week",
      "max_results": 10
    },
    "reddit": {
      "method": "web_search",
      "prefix": "site:reddit.com",
      "subreddits": [
        "r/Entrepreneur",
        "r/SaaS",
        "r/artificial",
        "r/smallbusiness"
      ],
      "freshness": "past_week",
      "max_results": 10
    }
  }
}
crontab entry
# Engagement comment pipeline - runs daily at 05:30 UTC
30 5 * * * /home/luca/scripts/engagement-scan.sh >> /var/log/engagement.log 2>&1

# The script does:
# 1. Query all 3 platforms
# 2. Deduplicate results
# 3. Score by relevance (niche match + engagement count)
# 4. Pass top 30 to comment generation
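The dedupe-and-score step from the crontab comments can be sketched in a few lines of Python. The weighting here (niche keyword hits plus log-scaled engagement) is an assumption for illustration, not the exact formula the script uses:

```python
import math

# Sketch of steps 2-3 from the crontab comments: dedupe, then score
# by niche match + engagement count. Weights are illustrative.

def score_post(post, niches):
    """Score a post by niche keyword matches plus log-scaled engagement."""
    text = (post["title"] + " " + post.get("body", "")).lower()
    niche_hits = sum(1 for n in niches if n.lower() in text)
    return niche_hits * 10 + math.log1p(post.get("engagement", 0))

def top_posts(posts, niches, limit=30):
    """Deduplicate by URL, score, and return the top candidates."""
    seen, unique = set(), []
    for p in posts:
        if p["url"] not in seen:
            seen.add(p["url"])
            unique.append(p)
    return sorted(unique, key=lambda p: score_post(p, niches), reverse=True)[:limit]
```

The log scaling keeps one viral-but-off-topic post from outranking a dozen on-niche ones.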
// STEP 02
The Voice Profile

This is the part most people skip. And it's the reason their automated comments read like a bot wrote them (because one did, with zero personality).

I built a voice profile document. It has 6 traits that define how I write online:

  • Direct. No fluff, no throat-clearing. Get to the point in the first sentence.
  • Technical but accessible. I can talk about cron jobs and also explain them to a non-dev.
  • Opinionated. I take a stance. "This is wrong because..." not "Some might argue..."
  • Conversational. Like texting a smart friend. Short sentences. Sometimes fragments.
  • First-person stories. "I built this" beats "One could build this" every time.
  • Dry humor. Occasional. Never forced. If it's not funny, cut it.

Before every comment gets generated, it runs through this checklist:

  • Does it add something the original post didn't say?
  • Would I actually say this out loud?
  • Is it under 3 sentences? (LinkedIn can go to 4)
  • Does it avoid generic agreement? ("Great post!" = instant delete)
  • Zero AI vocabulary? (check the blacklist)

Speaking of which, here's the AI vocabulary blacklist:

pivotal, crucial, delve, showcase, foster, leverage, streamline, landscape, paradigm, synergy, holistic, game-changer, empower, navigate, robust
BAD COMMENT: "This is a pivotal insight! AI agents are truly a game-changer for streamlining marketing workflows. Thanks for sharing this robust framework!"
GOOD COMMENT: "I run a similar setup. The part most people miss is the voice profile - without it, your AI comments read like every other bot on LinkedIn. I spent 2 hours on mine and it changed everything."
voice-profile.md (snippet)
# Voice Profile - Luca

## Core Traits
- Direct, no fluff
- Technical but accessible  
- Opinionated with receipts
- Conversational (short sentences, fragments ok)
- First-person stories over abstract advice
- Dry humor, never forced

## Comment Rules
- Max 3 sentences (4 for LinkedIn if adding value)
- Must add new info/perspective
- Never generic agreement
- Always a personal angle or experience
- Question at the end only if genuine

## Blacklist (auto-reject if detected)
pivotal, crucial, delve, showcase, foster,
leverage, streamline, landscape, paradigm,
synergy, holistic, game-changer, empower,
navigate, robust, harness, spearhead,
cutting-edge, best-in-class, thought leader
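The "auto-reject if detected" rule from the profile is easy to enforce in code. A minimal sketch, using whole-word matching so "robust" doesn't flag "robustness" differently than intended:

```python
import re

# Blacklist from voice-profile.md; auto-reject any comment that matches.
BLACKLIST = [
    "pivotal", "crucial", "delve", "showcase", "foster",
    "leverage", "streamline", "landscape", "paradigm",
    "synergy", "holistic", "game-changer", "empower",
    "navigate", "robust", "harness", "spearhead",
    "cutting-edge", "best-in-class", "thought leader",
]

def blacklist_hits(comment):
    """Return blacklisted terms found in a comment (whole-word, case-insensitive)."""
    lower = comment.lower()
    return [w for w in BLACKLIST if re.search(r"\b" + re.escape(w) + r"\b", lower)]
```

Any non-empty result sends the variant back for regeneration instead of to the sheet.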
// STEP 03
Comment Generation

For each post, the LLM gets three things: the post content, my voice profile, and the instruction to generate 3 comment variants.

Why 3? Because the first draft is usually too safe. The second is usually better. The third sometimes surprises you. The system auto-picks the best one based on a scoring function (uniqueness, voice match, length).

After picking, it runs a humanization pass:

  • Remove all em dashes. Replace with periods or commas.
  • Kill arrow symbols (->). Use "so" or "which means" instead.
  • Strip any remaining AI patterns (excessive hedging, "It's worth noting that...")
  • Add typo variation (occasionally leave a minor imperfection)
  • Vary sentence length. Mix short punchy with one longer one.
generate-comments.py (core logic)
def generate_comments(post, voice_profile):
    variants = []
    for i in range(3):
        prompt = f"""
        Post: {post['content']}
        Platform: {post['platform']}
        Voice profile: {voice_profile}
        
        Write a comment as Luca. Rules:
        - Max 3 sentences
        - Add something the post didn't say
        - Use a personal angle or experience
        - No AI vocabulary (see blacklist)
        - Sound like a real person, not a bot
        - Variant {i+1} of 3: {'safe' if i==0 else 'bold' if i==1 else 'wild card'}
        """
        variants.append(llm.generate(prompt))
    
    # Score and pick best
    scored = [(v, score_comment(v, voice_profile)) for v in variants]
    best = max(scored, key=lambda x: x[1])
    
    # Humanize
    return humanize(best[0])

# Hedging phrases stripped during humanization
HEDGE_PHRASES = ["It's worth noting that ", "It is important to note that "]

def humanize(comment):
    # Remove em dashes
    comment = comment.replace('\u2014', '.')
    comment = comment.replace(' - ', ', ')
    # Kill arrows (handle both spaced and unspaced forms)
    comment = comment.replace(' -> ', ' so ').replace('->', ' so ')
    # Strip hedging
    for phrase in HEDGE_PHRASES:
        comment = comment.replace(phrase, '')
    return comment.strip()
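The `score_comment` function referenced above isn't shown, but the text names its three signals: uniqueness, voice match, and length. Here's one plausible sketch; the weights and the word-overlap heuristics are assumptions, not the production scorer:

```python
# Hypothetical score_comment: combines uniqueness, voice match, and length.
# Weights (0.5 / 0.3 / 0.2) and the sweet-spot range are illustrative.

def score_comment(comment, voice_profile, post_text=""):
    words = comment.lower().split()
    if not words:
        return 0.0
    # Uniqueness: fraction of words not lifted from the original post
    post_words = set(post_text.lower().split())
    uniqueness = sum(1 for w in words if w not in post_words) / len(words)
    # Voice match: crude vocabulary overlap with the voice profile
    profile_words = set(voice_profile.lower().split())
    voice = sum(1 for w in words if w in profile_words) / len(words)
    # Length: reward the 15-60 word sweet spot
    length = 1.0 if 15 <= len(words) <= 60 else 0.5
    return uniqueness * 0.5 + voice * 0.3 + length * 0.2
```

Anything smarter (semantic similarity, embedding distance to past comments) slots in behind the same signature.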
// STEP 04
AI Detection Pass

Every comment runs through an AI detector. The target is "low" risk. If it comes back "medium" or "high," it gets auto-fixed and re-checked.

The auto-fix targets three things:

  • Parenthetical density. AI loves parentheses. If there's more than one set of parens, remove the extras.
  • Specificity check. Vague comments trigger detectors. "I tried this" becomes "I tested this on 3 LinkedIn accounts last month."
  • Sentence structure variety. AI tends to write Subject-Verb-Object over and over. Mix it up. Start with a fragment. End with a question sometimes.

If a comment fails 3 rounds of detection, it gets flagged for manual review instead of going into the sheet automatically. This happens maybe 2-3 times per batch.

detection-pass.py
def detection_pass(comment, max_retries=3):
    for attempt in range(max_retries):
        result = ai_detector.check(comment)
        
        if result['risk'] == 'low':
            return {'status': 'pass', 'comment': comment}
        
        # Auto-fix
        if result['flags'].get('parenthetical_density'):
            comment = reduce_parens(comment)
        if result['flags'].get('low_specificity'):
            comment = add_specificity(comment)
        if result['flags'].get('repetitive_structure'):
            comment = vary_structure(comment)
    
    # Failed all retries
    return {'status': 'manual_review', 'comment': comment}

def reduce_parens(text):
    """Keep max 1 parenthetical per comment"""
    import re
    kept = [False]
    def repl(m):
        if kept[0]:
            return ''          # drop every parenthetical after the first
        kept[0] = True
        return m.group(0)
    return re.sub(r'\s*\([^)]*\)', repl, text)

def add_specificity(text):
    """Replace vague claims with specific ones (simple map here; the real pass asks the LLM)"""
    swaps = {
        "many people": "3 founders I talked to",
        "recently": "last Tuesday",
    }
    for vague, specific in swaps.items():
        text = text.replace(vague, specific)
    return text
// STEP 05
Google Sheet Pipeline

All approved comments land in a Google Sheet. This is my review dashboard. I open it with my morning coffee, scan through, and approve or edit.

The columns are color-coded so I can scan fast:

| Date | Priority | Author | Post | Link | Comment Draft | Done | Notes |
| 2026-02-14 | HIGH | @sarahcodes | Why I stopped using Zapier... | link | "Switched to n8n last year for the same reason. The webhook reliability alone..." | [ ] | 10k followers, SaaS niche |
| 2026-02-14 | MED | u/startupguy | Best AI tools for solo founders? | link | "Running 3 AI agents that handle my content, outreach, and analytics. Total cost is..." | [ ] | r/SaaS, 45 upvotes |
| 2026-02-14 | LOW | @marketer_mike | Hot take: AI content is killing... | link | "The problem isn't AI content, it's AI content without a voice profile..." | [ ] | Contrarian angle |

Priority is based on: author follower count, post engagement, niche relevance. HIGH means "this person is a potential lead or has a big audience." I always do those first.
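Mapping those three signals to HIGH/MED/LOW can be as simple as a couple of thresholds. The cutoffs below are illustrative, not the exact ones the pipeline uses:

```python
# Hypothetical priority assignment from the three signals named above:
# author follower count, post engagement, niche relevance score.

def assign_priority(author_followers, post_engagement, niche_score):
    """Return HIGH/MED/LOW; thresholds are illustrative assumptions."""
    if author_followers >= 5000 or niche_score >= 3:
        return "HIGH"
    if post_engagement >= 50 or niche_score >= 1:
        return "MED"
    return "LOW"
```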

push-to-sheet.py
def push_to_sheet(comments, sheet_id):
    rows = []
    for c in comments:
        rows.append([
            c['date'],
            c['priority'],      # HIGH/MED/LOW
            c['author'],
            c['post_title'][:50],
            c['post_url'],
            c['comment_draft'],
            'FALSE',            # Done checkbox
            c['notes']
        ])
    
    sheets_api.append(
        spreadsheet_id=sheet_id,
        range='Comments!A:H',
        values=rows
    )
    
    # Apply conditional formatting
    # RED for HIGH priority
    # YELLOW for MED
    # GREEN for LOW
    apply_formatting(sheet_id, rows)
// STEP 06
Daily Review (5 min)

This is the human-in-the-loop part. Every morning, I spend 5 minutes on the sheet:

  • Scan HIGH priority first. Edit if needed, approve.
  • Quick pass on MED. Most are good to go as-is.
  • LOW priority: approve all unless something looks off.
  • Post the comments. (This part is still manual on purpose. I want the final click to be mine.)
  • Mark as Done. Track which ones get replies or engagement.

When someone replies to my comment, that's a warm signal. The system flags it and adds them to my outreach pipeline. A reply means they already know my name and had a positive interaction. That's worth 10x a cold DM.

I track engagement weekly: reply rate, profile visits from comments, DMs received. The data feeds back into query tuning. If posts from r/SaaS get 3x more replies than r/Entrepreneur, I shift the query weight.
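The weekly query-weight shift can be sketched as normalizing each source's reply rate. The data shape here (comments and replies per source) is an assumption about how the tracking is stored:

```python
# Sketch of the weekly feedback loop: shift query weight toward
# sources with higher reply rates. Field names are assumptions.

def rebalance_weights(stats):
    """stats: {source: {"comments": int, "replies": int}} -> normalized weights."""
    rates = {
        src: (s["replies"] / s["comments"]) if s["comments"] else 0.0
        for src, s in stats.items()
    }
    total = sum(rates.values()) or 1.0
    return {src: round(rate / total, 2) for src, rate in rates.items()}
```

With the 3x reply-rate example from the text, r/SaaS would end up with roughly three times the query weight of r/Entrepreneur.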

// RESULTS
What This Gets Me
25 comments/day
3 platforms
~$2 API cost/day
5m daily effort

But the numbers that actually matter:

  • Warm signals: 8-12 reply-back conversations per week. These people already know me.
  • Profile visits: 3-5x increase since starting this. Comments drive more profile traffic than posts do.
  • DM openers: "Hey, saw your comment on [post]" is the warmest intro that exists.
  • Content ideas: Reading 25+ posts daily in my niche gives me endless content angles.
  • Pipeline feed: Every warm signal goes into my lead pipeline. No cold outreach needed for these.

Total cost: about $60/month in API calls. Total time: 30 minutes to set up, 5 minutes daily to review. That's it.

Try It Yourself

Paste a post URL below and see what kind of comment the system would generate. (This is a static demo, not hitting a real API.)

// GENERATED COMMENT (variant 2 of 3)
"I built something similar last month. The part that took the longest wasn't the automation, it was getting the voice right. Spent 2 hours writing a 'how I actually talk' doc and it made everything 10x better. What model are you using for generation?"
AI Detection: LOW RISK | Priority: MED | Platform: LinkedIn
Want This Running in 60 Seconds?

This process takes 30 minutes to set up and 5 minutes daily to review. Or you can get it pre-configured, voice-profiled, and running out of the box.

Claw4Growth is an OpenClaw instance pre-loaded with this entire pipeline. Engagement comments, content generation, lead tracking. All of it.

GET CLAW4GROWTH
4 SPOTS LEFT