How I Built an AI Employee That Actually Follows Procedures (Instead of Guessing)


Using OpenClaw and agent skills to turn a chatbot into a reliable AI coding assistant

Today I taught my AI assistant to stop guessing and start following procedures. The difference is night and day.

If you’re building AI agents or trying to use AI as a real employee for your business, this post breaks down exactly what I did—and how you can apply the same approach to make your AI workflow automation actually reliable.

The Problem with AI Assistants

Most AI assistants are trained to be helpful. That’s great for answering questions, but terrible for doing real work.

Here’s what typically happens:

  • You ask for SEO analysis
  • The AI picks a tool (maybe the right one, maybe not)
  • Results are inconsistent
  • You spend more time checking its work than doing it yourself

Sound familiar? I was there too. My AI employee—running on OpenClaw, a framework for building AI agents—had access to multiple SEO tools but would often grab the wrong one.

The Solution: Agent Skills with Routing Logic

I discovered that AI assistants fail when they guess. They succeed when you give them explicit procedures.

Following guidance from OpenAI’s recent post on skills, I rewrote my AI’s skill definitions with clear routing logic:

Before (vague):

description: SEO keyword research tool

After (explicit routing):

description: |
  USE WHEN:
  - "Research keywords for [topic]"
  - "What's the search volume for [keyword]?"
  - Planning NEW content and need keyword data

  DON'T USE WHEN:
  - User wants their OWN site's ranking data → use gsc skill
  - User wants traffic/visitor analytics → use ga4-analytics

The key insight: skills aren’t about what the AI can do—they’re about when to do it and when NOT to.

What I Actually Built Today

I overhauled 9 skills in my OpenClaw setup:

| Skill | What it does | When to use |
| --- | --- | --- |
| gsc | Google Search Console | Own site rankings, indexing status |
| ga4-analytics | Google Analytics 4 | Traffic, user behavior, conversions |
| seo-dataforseo | DataForSEO API | Keyword research, competitor analysis |
| seo-optimizer | On-page audits | HTML analysis, meta tags, schema |
| github | GitHub CLI | PRs, issues, CI/Actions |
| notion | Notion API | Tasks, databases, pages |
| gog | Google Workspace | Gmail, Calendar, Sheets |
| slack | Slack actions | Reactions, pins, message management |
| cold-outreach | Email campaigns | Cold email sequences |

Each skill now includes:

  • ✅ “Use when” triggers with example phrases
  • ✅ “Don’t use when” negative examples
  • ✅ Decision tables for quick routing
  • ✅ Relevant context (my database IDs, repo names, etc.)

The Meta Part: This Post Was Made Using These Skills

Here’s where it gets fun. I used my newly-improved skills to write this post:

  1. Keyword research via DataForSEO skill:
    • “ai coding assistant” — 12,100 monthly searches
    • “building ai agents” — 2,400 searches
    • “ai employee” — 2,400 searches
  2. Routing decision: This is research for NEW content → DataForSEO, not GSC
  3. Draft written following SEO best practices from my seo-optimizer skill
  4. Published via WordPress MCP integration

The AI didn’t guess which tool to use. It followed the procedure.

Adding WordPress Agent Skills

I also added the official WordPress agent-skills repository—13 skills that teach AI assistants WordPress development patterns:

  • Block development (Gutenberg, block.json)
  • Theme development (theme.json, patterns)
  • Plugin architecture and security
  • REST API endpoints
  • WP-CLI automation
  • Performance profiling

These skills have the same structure: explicit routing, negative examples, and procedures the AI can follow deterministically.

How to Apply This to Your AI Workflow

Step 1: Audit your current skills/prompts

Look for vague descriptions. “Helps with SEO” tells the AI nothing about when to use it.

Step 2: Add routing logic

For each skill, define:

  • What triggers it (exact phrases users say)
  • What should NOT trigger it (and what to use instead)
  • Expected inputs and outputs
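One way to keep those three pieces together is a small schema per skill. This is a hypothetical sketch (the field names and `SkillSpec` type are mine, not part of any framework) showing how triggers, negative examples, and expected inputs/outputs can be rendered into the routing block that goes in a skill description:

```python
from dataclasses import dataclass, field

# Hypothetical schema for a routable skill; field names are illustrative.
@dataclass
class SkillSpec:
    name: str
    use_when: list[str]            # exact trigger phrases users say
    dont_use_when: dict[str, str]  # phrase -> name of the skill to use instead
    inputs: list[str] = field(default_factory=list)   # what the skill needs
    outputs: list[str] = field(default_factory=list)  # what it produces

    def to_description(self) -> str:
        """Render the spec as the routing block for the skill description."""
        lines = ["USE WHEN:"] + [f'- "{p}"' for p in self.use_when]
        lines.append("DON'T USE WHEN:")
        lines += [f"- {p} → use {alt}" for p, alt in self.dont_use_when.items()]
        return "\n".join(lines)

keyword_research = SkillSpec(
    name="seo-dataforseo",
    use_when=["Research keywords for [topic]",
              "What's the search volume for [keyword]?"],
    dont_use_when={"own site's ranking data": "gsc",
                   "traffic/visitor analytics": "ga4-analytics"},
    inputs=["seed keyword or topic"],
    outputs=["keyword list with monthly search volumes"],
)
```

Generating the description from a spec like this keeps every skill's routing block in the same shape, which makes gaps (a skill with no negative examples, say) easy to spot.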

Step 3: Add negative examples

This was the biggest win. When you have multiple similar skills, explicit “don’t use this for X” statements prevent misfires.

Step 4: Include context

Database IDs, API endpoints, repo names—anything the AI needs to actually execute. Don’t make it search for this.
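In code, "include context" can be as simple as appending a context section when the description is built. A minimal sketch, assuming placeholder values (none of the IDs below are real):

```python
# Sketch: bundling execution context (IDs, endpoints, repo names) directly
# into the skill's prompt so the agent never has to search for it.
# All values are placeholders, not real identifiers.
CONTEXT = {
    "notion_tasks_db": "abc123-placeholder",
    "github_repo": "myorg/myrepo",
    "site_url": "https://example.com",
}

def with_context(description: str, context: dict[str, str]) -> str:
    """Append a CONTEXT section so the skill is executable without lookups."""
    lines = [description, "", "CONTEXT:"]
    lines += [f"- {key}: {value}" for key, value in context.items()]
    return "\n".join(lines)
```

The payoff is that a request like "add this to my tasks database" resolves immediately, instead of the agent burning a turn asking which database you meant.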

Results

After these changes, my AI assistant:

  • Routes to the correct SEO tool ~95% of the time (vs. ~60% before)
  • Follows WordPress best practices instead of generating outdated patterns
  • Executes multi-step workflows without getting confused mid-task
  • Actually feels like an employee, not a chatbot

Tools I Used

  • OpenClaw — AI agent framework (my AI runs on this)
  • DataForSEO API — Keyword research and backlink analysis
  • WordPress agent-skills — Official WP development skills
  • Claude — The underlying model

Building AI agents that work reliably isn’t about better models—it’s about better instructions. Start with routing logic, add negative examples, and watch your AI stop guessing.
