Documentation that lives outside your CI/CD pipeline is documentation that goes stale. A developer updates a function signature, forgets to update the README, and six months later someone spends an afternoon debugging a discrepancy that should never have existed. The fix is not discipline — it is automation.
This guide walks through a complete pattern for building a documentation generation pipeline that triggers on every commit, converts your source files to HTML, and publishes the result automatically. The conversion step is handled by the DocForge API, so there is no Markdown parser to configure, no templating engine to wrangle, and no build-time dependencies to maintain.
Why API-Based Conversion Belongs in Your Pipeline
When you embed a Markdown-to-HTML library directly in your build tooling, you take on a maintenance burden: the library must be installed in your CI environment, kept in sync across dev machines, and upgraded carefully to avoid rendering regressions. An HTTP API sidesteps all of this.
The tradeoffs are straightforward:
- Zero CI dependencies — curl is already on every build runner. You do not need Node.js, Python, or any runtime installed just to convert docs.
- Consistent output — every environment calls the same API and receives the same HTML. No more "works on my machine" rendering differences.
- Sanitized output by default — the API returns clean, XSS-safe HTML without any additional configuration on your part.
- Metadata for free — each conversion response includes a word count and heading list, which you can use to auto-generate a table of contents or a search index.
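To make the metadata point concrete, here is a sketch of extracting it with jq. The exact field names beyond the html and meta.headings mentioned in this guide are assumptions (the wordCount name in particular); check the API reference for the real schema.

```shell
# Illustrative response shape — field names other than .html and
# .meta.headings are assumptions; verify against the API reference.
RESPONSE='{"html":"<h1>Intro</h1><p>Hello.</p>","meta":{"wordCount":2,"headings":["Intro"]}}'
echo "$RESPONSE" | jq -r '.meta.wordCount'      # → 2
echo "$RESPONSE" | jq -r '.meta.headings[]'     # → Intro
```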
The Core Pattern: Convert on Commit
The simplest automation pattern converts every Markdown file in your docs/ directory to HTML whenever the main branch is updated. Here is a GitHub Actions workflow that does exactly that:
```yaml
name: Build Docs

on:
  push:
    branches: [main]
    paths: ['docs/**']

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Convert Markdown docs to HTML
        run: |
          mkdir -p dist/docs
          for file in docs/*.md; do
            name=$(basename "$file" .md)
            content=$(cat "$file")
            curl -s -X POST https://docforge-api.vercel.app/api/md-to-html \
              -H "Content-Type: application/json" \
              -H "X-API-Key: ${{ secrets.DOCFORGE_API_KEY }}" \
              -d "{\"markdown\": $(echo "$content" | jq -Rs .)}" \
              | jq -r .html > "dist/docs/${name}.html"
          done

      - name: Upload dist
        uses: actions/upload-artifact@v4
        with:
          name: docs-html
          path: dist/docs/
```
This workflow fires only when files under docs/ change, so it does not slow down unrelated commits. The jq -Rs . trick converts the raw file content into a valid JSON string, handling newlines and special characters safely.
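You can see exactly what that trick produces by running it on a snippet containing newlines and quotes:

```shell
# jq -R reads raw (non-JSON) input, -s slurps the whole input into one
# string, and `.` prints it back as a JSON-encoded string literal
printf 'line one\nline "two"\n' | jq -Rs .
# → "line one\nline \"two\"\n"
```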
Adding a Table of Contents from the API Response
The DocForge API returns more than just HTML. The response includes a meta.headings array extracted from your document. You can use this to build a navigation sidebar automatically, without parsing the HTML output yourself.
```bash
#!/bin/bash
# convert-with-toc.sh
CONTENT=$(cat "$1")
API_KEY="$DOCFORGE_API_KEY"

RESPONSE=$(curl -s -X POST https://docforge-api.vercel.app/api/md-to-html \
  -H "Content-Type: application/json" \
  -H "X-API-Key: $API_KEY" \
  -d "{\"markdown\": $(echo "$CONTENT" | jq -Rs .)}")

HTML=$(echo "$RESPONSE" | jq -r .html)
HEADINGS=$(echo "$RESPONSE" | jq -r '.meta.headings[]')

# Build a simple TOC nav from headings
TOC="<nav class='toc'>"
while IFS= read -r heading; do
  slug=$(echo "$heading" | tr '[:upper:]' '[:lower:]' | tr ' ' '-')
  TOC+="<a href='#$slug'>$heading</a>"
done <<< "$HEADINGS"
TOC+="</nav>"

echo "${TOC}${HTML}"
```
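Note that the one-liner slug above breaks on headings containing punctuation. A slightly more robust helper might look like the sketch below; how the API actually generates heading anchors in its HTML is an assumption here, so verify the slug format against real output before relying on it.

```shell
# Lowercase, drop characters that are not alphanumerics, spaces, or hyphens,
# squeeze repeated spaces, then turn spaces into hyphens
slugify() {
  echo "$1" | tr '[:upper:]' '[:lower:]' | sed 's/[^a-z0-9 -]//g' | tr -s ' ' | tr ' ' '-'
}
slugify "Handling Rate Limits & Retries"   # → handling-rate-limits-retries
```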
Multi-Format Documentation Pipelines
Real documentation projects rarely consist of Markdown alone. You might have:
- API reference data stored as JSON that needs to be rendered as an HTML page
- Changelog entries maintained as plain text that feed into a formatted release notes page
- CSV exports of configuration options that should be presented as searchable HTML tables
The DocForge API handles all of these formats. A single pipeline step can accept different file types and route them to the appropriate endpoint:
```bash
#!/bin/bash
# convert-docs.sh — routes files to the right endpoint by extension
convert_file() {
  local file="$1"
  local ext="${file##*.}"
  local name
  name=$(basename "$file" ".$ext")
  local endpoint="" key="" out=""

  case "$ext" in
    md)   endpoint="md-to-html"   ; key="markdown" ; out="html" ;;
    txt)  endpoint="txt-to-html"  ; key="text"     ; out="html" ;;
    json) endpoint="json-to-html" ; key="json"     ; out="html" ;;
    csv)  endpoint="csv-to-json"  ; key="csv"      ; out="json" ;;
    *)    echo "Skipping $file (unknown extension)" ; return ;;
  esac

  response=$(curl -s -X POST "https://docforge-api.vercel.app/api/$endpoint" \
    -H "Content-Type: application/json" \
    -H "X-API-Key: $DOCFORGE_API_KEY" \
    -d "{\"$key\": $(jq -Rs . "$file")}")

  if [ "$out" = "html" ]; then
    echo "$response" | jq -r .html > "dist/docs/${name}.html"
  else
    # csv-to-json returns JSON rather than HTML, so keep the full response
    # body; adjust the jq filter once you know the exact response shape
    echo "$response" > "dist/docs/${name}.json"
  fi
  echo "Converted: $file -> dist/docs/${name}.${out}"
}

for file in docs/*; do
  convert_file "$file"
done
```
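The routing leans on two small shell idioms worth knowing: the `${file##*.}` parameter expansion strips everything up to the last dot, and `basename` with a suffix argument removes both the directory and the extension.

```shell
file="docs/config-options.csv"
ext="${file##*.}"            # strip up to the last dot
echo "$ext"                  # → csv
basename "$file" ".$ext"     # → config-options
```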
For more on orchestrating these different format types in a pipeline, see our article on building data pipelines with format conversion APIs.
Handling Rate Limits and Retries
The free tier allows 500 requests per day, which is generous for most documentation pipelines: a 200-file doc set uses 200 requests per full build, so you could run two complete rebuilds in a day before a third one hit the cap. For larger repositories, the Pro plan at $9/month provides 50,000 daily requests.
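If you would rather have the pipeline fail loudly than burn through the quota mid-build, a pre-flight check is cheap to add. The 400-request threshold below is an arbitrary safety margin, not an API value; 500/day is the documented free-tier limit.

```shell
# Pre-flight guard: refuse to start if this run risks the daily cap
check_quota() {
  local count=$1
  if [ "$count" -gt 400 ]; then
    echo "Refusing to run: $count conversions would risk the 500/day quota" >&2
    return 1
  fi
}
pending=$(( $(ls docs/*.md 2>/dev/null | wc -l) ))
check_quota "$pending" && echo "quota check passed"
```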
When writing production pipeline scripts, add a simple retry wrapper around each API call:
```bash
#!/bin/bash
# Retry up to 3 times with exponential backoff
api_call_with_retry() {
  local max_attempts=3
  local attempt=1
  local delay=2

  while [ $attempt -le $max_attempts ]; do
    response=$(curl -s -w "\n%{http_code}" -X POST \
      "https://docforge-api.vercel.app/api/md-to-html" \
      -H "Content-Type: application/json" \
      -H "X-API-Key: $DOCFORGE_API_KEY" \
      -d "$1")

    http_code=$(echo "$response" | tail -n 1)
    # The body can span multiple lines, so drop only the status-code line
    body=$(echo "$response" | sed '$d')

    if [ "$http_code" -eq 200 ]; then
      echo "$body"
      return 0
    fi

    echo "Attempt $attempt failed (HTTP $http_code). Retrying in ${delay}s..." >&2
    sleep $delay
    delay=$((delay * 2))
    attempt=$((attempt + 1))
  done

  echo "All attempts failed" >&2
  return 1
}
```
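Building the JSON payload with jq rather than hand-rolled string interpolation pairs well with that wrapper, since jq handles all escaping for you. The --rawfile option used here requires jq 1.6 or newer.

```shell
# Read a file verbatim into a JSON string and wrap it in the request object
printf '# Hello\n' > /tmp/sample.md
payload=$(jq -n --rawfile md /tmp/sample.md '{markdown: $md}')
echo "$payload" | jq -r .markdown   # → # Hello
```

The resulting payload can then be passed straight to api_call_with_retry as its single argument.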
Storing Your API Key Securely
Never hardcode your DocForge API key in your pipeline scripts. Use your CI platform's secrets mechanism:
- GitHub Actions — store as a repository secret, reference as `${{ secrets.DOCFORGE_API_KEY }}`
- GitLab CI — use CI/CD variables, reference as `$DOCFORGE_API_KEY`
- CircleCI — store in project environment variables, same syntax
- Jenkins — use the Credentials plugin and inject as an environment variable
You can generate and manage your API keys from the DocForge dashboard. The free tier does not require a key, but using one lets the API attribute your usage correctly and gives you access to higher rate limits.
Publishing the Output
Once your pipeline has produced HTML files, you have a few standard options for publishing them:
- GitHub Pages — push the `dist/` directory to a `gh-pages` branch using `peaceiris/actions-gh-pages`
- Vercel — connect your repository and set the output directory to `dist/docs`
- S3 + CloudFront — sync the directory with `aws s3 sync dist/ s3://your-bucket/` and invalidate the CDN cache
- Netlify — drop the `dist/` folder into the Netlify UI for instant publishing, or configure their build CLI
Summary
An automated documentation pipeline built around the DocForge API requires only a few lines of shell script and a CI workflow file. You get consistent, sanitized HTML output across all environments, metadata for building navigation, and support for multiple input formats — all without adding a single build-time dependency to your repository.
Check the full API reference for the complete list of supported endpoints, request schemas, and response shapes. The free tier is enough to get started today.
Get Your Free API Key
Start automating your documentation pipeline. 500 requests/day free, no credit card required.