Documentation that lives outside your CI/CD pipeline is documentation that goes stale. A developer updates a function signature, forgets to update the README, and six months later someone spends an afternoon debugging a discrepancy that should never have existed. The fix is not discipline — it is automation.

This guide walks through a complete pattern for building a documentation generation pipeline that triggers on every commit, converts your source files to HTML, and publishes the result automatically. The conversion step is handled by the DocForge API, so there is no Markdown parser to configure, no templating engine to wrangle, and no build-time dependencies to maintain.

Why API-Based Conversion Belongs in Your Pipeline

When you embed a Markdown-to-HTML library directly in your build tooling, you take on a maintenance burden: the library must be installed in your CI environment, kept in sync across dev machines, and upgraded carefully to avoid rendering regressions. An HTTP API sidesteps all of this.

The tradeoff is straightforward: you exchange a local build-time dependency for a network call and an API key, and in return every environment — CI, laptops, containers — renders documents identically.

The Core Pattern: Convert on Commit

The simplest automation pattern converts every Markdown file in your docs/ directory to HTML whenever the main branch is updated. Here is a GitHub Actions workflow that does exactly that:

GitHub Actions — .github/workflows/docs.yml
name: Build Docs

on:
  push:
    branches: [main]
    paths: ['docs/**']

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Convert Markdown docs to HTML
        run: |
          mkdir -p dist/docs
          for file in docs/*.md; do
            name=$(basename "$file" .md)
            content=$(cat "$file")
            curl -s -X POST https://docforge-api.vercel.app/api/md-to-html \
              -H "Content-Type: application/json" \
              -H "X-API-Key: ${{ secrets.DOCFORGE_API_KEY }}" \
              -d "{\"markdown\": $(echo "$content" | jq -Rs .)}" \
              | jq -r .html > "dist/docs/${name}.html"
          done

      - name: Upload dist
        uses: actions/upload-artifact@v4
        with:
          name: docs-html
          path: dist/docs/

This workflow fires only when files under docs/ change, so it does not slow down unrelated commits. The jq -Rs . trick converts the raw file content into a valid JSON string, handling newlines and special characters safely.
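You can see exactly what jq -Rs . produces by running it on a small multi-line input: -R reads the input as raw text instead of JSON, -s slurps everything into a single string, and . emits that string JSON-encoded.

```shell
# Newlines become \n escapes; quotes and backslashes are escaped too
printf 'Hello\n**World**\n' | jq -Rs .
# "Hello\n**World**\n"
```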

Adding a Table of Contents from the API Response

The DocForge API returns more than just HTML. The response includes a meta.headings array extracted from your document. You can use this to build a navigation sidebar automatically, without parsing the HTML output yourself.
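Before wiring this into a script, it helps to see the shape of the response. The values below are illustrative; only the .html and .meta.headings fields are documented above.

```shell
# A sample response, queried the same way the script below does
response='{"html":"<h2 id=\"setup\">Setup</h2>","meta":{"headings":["Setup","Usage"]}}'
echo "$response" | jq -r '.meta.headings[]'
# Setup
# Usage
```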

Shell script
#!/bin/bash
# convert-with-toc.sh

CONTENT=$(cat "$1")
API_KEY="$DOCFORGE_API_KEY"

RESPONSE=$(curl -s -X POST https://docforge-api.vercel.app/api/md-to-html \
  -H "Content-Type: application/json" \
  -H "X-API-Key: $API_KEY" \
  -d "{\"markdown\": $(echo "$CONTENT" | jq -Rs .)}")

HTML=$(echo "$RESPONSE" | jq -r .html)
HEADINGS=$(echo "$RESPONSE" | jq -r '.meta.headings[]')

# Build a simple TOC nav from headings
# (assumes the generated HTML gives each heading a matching id)
TOC="<nav class='toc'>"
while IFS= read -r heading; do
  [ -n "$heading" ] || continue
  slug=$(echo "$heading" | tr '[:upper:]' '[:lower:]' | tr ' ' '-')
  TOC+="<a href='#$slug'>$heading</a>"
done <<< "$HEADINGS"
TOC+="</nav>"

echo "${TOC}${HTML}"

Multi-Format Documentation Pipelines

Real documentation projects rarely consist of Markdown alone. You might have plain-text notes, JSON configuration samples, and CSV data files sitting alongside your Markdown.

The DocForge API handles all of these formats. A single pipeline step can accept different file types and route them to the appropriate endpoint:

Shell script — multi-format converter
#!/bin/bash
# convert-docs.sh — routes files to the right endpoint by extension

convert_file() {
  local file="$1"
  local ext="${file##*.}"
  local name=$(basename "$file" ".$ext")
  local endpoint="" key="" out_field="" out_ext=""

  case "$ext" in
    md)   endpoint="md-to-html"   ; key="markdown" ; out_field="html" ; out_ext="html" ;;
    txt)  endpoint="txt-to-html"  ; key="text"     ; out_field="html" ; out_ext="html" ;;
    json) endpoint="json-to-html" ; key="json"     ; out_field="html" ; out_ext="html" ;;
    # csv-to-json returns JSON, not HTML, so write a .json file
    # (the .json response field name is an assumption; check the API reference)
    csv)  endpoint="csv-to-json"  ; key="csv"      ; out_field="json" ; out_ext="json" ;;
    *)    echo "Skipping $file (unknown extension)" ; return ;;
  esac

  curl -s -X POST "https://docforge-api.vercel.app/api/$endpoint" \
    -H "Content-Type: application/json" \
    -H "X-API-Key: $DOCFORGE_API_KEY" \
    -d "{\"$key\": $(jq -Rs . "$file")}" \
    | jq -r ".$out_field" > "dist/docs/${name}.${out_ext}"

  echo "Converted: $file -> dist/docs/${name}.${out_ext}"
}

for file in docs/*; do
  convert_file "$file"
done
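The routing hinges on two small pieces of shell: the ${file##*.} expansion strips everything up to and including the last dot, and basename with a suffix argument removes the extension from the filename. A quick check:

```shell
file="docs/report.csv"
ext="${file##*.}"                  # everything after the last dot
name=$(basename "$file" ".$ext")   # filename with the extension removed
echo "$ext $name"
# csv report
```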

For more on orchestrating these different format types in a pipeline, see our article on building data pipelines with format conversion APIs.

Handling Rate Limits and Retries

The free tier allows 500 requests per day. For most documentation pipelines this is generous: even a 200-file doc set converted in a single build uses less than half the daily quota. Pipelines that rebuild many times a day, or repositories with thousands of files, can exhaust it; for those, the Pro plan at $9/month provides 50,000 daily requests.
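A back-of-the-envelope check makes it easy to see whether your pipeline fits the quota. The numbers here are hypothetical; substitute your own file count and build frequency.

```shell
# One request per file per build
files_per_build=200
builds_per_day=2
requests=$((files_per_build * builds_per_day))
echo "$requests requests/day against a 500/day free tier"
# 400 requests/day against a 500/day free tier
```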

When writing production pipeline scripts, add a simple retry wrapper around each API call:

Shell — retry wrapper
#!/bin/bash
# Retry up to 3 times with exponential backoff

api_call_with_retry() {
  local max_attempts=3
  local attempt=1
  local delay=2

  while [ $attempt -le $max_attempts ]; do
    response=$(curl -s -w "\n%{http_code}" -X POST \
      "https://docforge-api.vercel.app/api/md-to-html" \
      -H "Content-Type: application/json" \
      -H "X-API-Key: $DOCFORGE_API_KEY" \
      -d "$1")

    # The last line is the status code appended by -w; everything before it is the body
    http_code=$(echo "$response" | tail -n1)
    body=$(echo "$response" | sed '$d')

    if [ "$http_code" -eq 200 ]; then
      echo "$body"
      return 0
    fi

    echo "Attempt $attempt failed (HTTP $http_code). Retrying in ${delay}s..." >&2
    sleep $delay
    delay=$((delay * 2))
    attempt=$((attempt + 1))
  done

  echo "All attempts failed" >&2
  return 1
}
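The backoff schedule above doubles the delay after each failed attempt, so the wrapper waits 2s, then 4s, then gives up after the third try. The sequence can be verified in isolation:

```shell
delay=2
for attempt in 1 2 3; do
  echo "attempt $attempt: sleep ${delay}s on failure"
  delay=$((delay * 2))
done
# attempt 1: sleep 2s on failure
# attempt 2: sleep 4s on failure
# attempt 3: sleep 8s on failure
```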

Storing Your API Key Securely

Never hardcode your DocForge API key in your pipeline scripts. Use your CI platform's secrets mechanism instead: GitHub Actions repository secrets (as the workflow above does with secrets.DOCFORGE_API_KEY), GitLab CI/CD variables, or the equivalent in your system.

You can generate and manage your API keys from the DocForge dashboard. The free tier does not require a key, but using one lets the API attribute your usage correctly and gives you access to higher rate limits.

Publishing the Output

Once your pipeline has produced HTML files, you have a few standard options for publishing them: GitHub Pages, a static host such as Netlify, an S3 bucket behind a CDN, or simply serving the dist/ directory from your existing web server.
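If you publish to GitHub Pages, the upload-artifact step in the workflow above can be followed by a deploy job using the official Pages actions. This is a minimal sketch, assuming the build job uploaded its output as the docs-html artifact; job names and paths should be adapted to your repository.

```yaml
  deploy:
    needs: build
    runs-on: ubuntu-latest
    permissions:
      pages: write      # required by deploy-pages
      id-token: write
    environment:
      name: github-pages
      url: ${{ steps.deployment.outputs.page_url }}
    steps:
      - uses: actions/download-artifact@v4
        with:
          name: docs-html
          path: dist/docs/
      - uses: actions/upload-pages-artifact@v3
        with:
          path: dist/docs/
      - id: deployment
        uses: actions/deploy-pages@v4
```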

Summary

An automated documentation pipeline built around the DocForge API requires only a few lines of shell script and a CI workflow file. You get consistent, sanitized HTML output across all environments, metadata for building navigation, and support for multiple input formats — all without adding a single build-time dependency to your repository.

Check the full API reference for the complete list of supported endpoints, request schemas, and response shapes. The free tier is enough to get started today.

Get Your Free API Key

Start automating your documentation pipeline. 500 requests/day free, no credit card required.

Get API Key