Chester Chee
Marketing Technology & AI Automation
Chester Chee · Marketing AI Practitioner · PURE Group · Singapore

I have rebuilt how a marketing function operates using AI, one workflow at a time. AI is now the default mode for content, reporting, advertising, and customer intelligence. These are the systems I built, the workflows they permanently changed, and the teams that now use them.

I believe a marketing team of one can outperform an agency if it owns its own automation stack.

Systems featured below have been sanitised to remove company branding and proprietary data.

Portfolio

The transformation model.

SGD195,000
Annual cost savings and prevented vendor spend
SGD540,000
Annual ad spend managed
4
Teams using AI tools built for them
SGD64,800
Projected additional savings

AI should be the path of least resistance for every marketer, not an extra step. Every system in this portfolio was built to remove a manual process and replace it with an AI workflow the team can use without friction. The goal is not automation for its own sake. The goal is permanent workflow transformation: marketers who start every task with an AI tool because doing it any other way feels slower.

Enablement
AI chat in live agency reviews
Marketing and agency teams now use the AI meeting assistant during performance reviews to query data, surface insights, and generate improvement recommendations in real time, without preparing separate briefing documents or exporting data manually. The friction of getting information in front of AI is gone.
Cross-brand adoption
Content pipeline adopted by sibling brand teams
The blog content pipeline built for the primary brand is being adopted by sibling brand teams within the group, each calibrating the brand voice profile for their own positioning. One architecture, multiple brands, configured rather than rebuilt.
Friction removal
From ad hoc queries to structured intelligence
The team was already copying data into LLMs for ad hoc analysis. What changed was consolidation: all performance data now lives in structured dashboards with embedded AI chat. The prep work that prevented consistent AI use is gone. The team now queries their own data through natural language as a default, not an exception.
Vision prototype, presented to C-suite leadership
Showcasing an example layout for Stripe regional marketing intelligence in APAC · trial conversion rates, cost-per-customer, and onboarding nurture flows are illustrative, not real Stripe data.
Case Study 02

RAG Chatbot

AI-powered support, shared between marketing and customer service.

The chatbot replaced a USD5,000/month vendor contract. The more significant change is operational: marketing and technical support now share responsibility for keeping the knowledge base current. Marketing owns product announcements, campaign context, and pricing updates; support owns integration guidance, error resolution, and account queries. Two teams now maintain an AI system together as a default part of their workflow.

SGD81,000/yr saved
Built and validated
01
The situation. A premium services company operating in Singapore was paying USD5,000 per month for a third-party chatbot vendor. The solution handled basic customer queries with limited customisation and no visibility into conversation quality or volume.
02
The prototype. A RAG-based replacement was built in under a day using n8n as the orchestration layer, a Supabase vector store for knowledge retrieval, and the Claude API for response generation. Language detection routed queries to the correct response path for English and Chinese speakers.
03
The result. The prototype matched and exceeded the vendor on response quality, while adding conversation transcript logging and a dashboard API for operations review. The vendor contract was terminated. Annual saving: approximately SGD81,000.
Note
Architecture lineage. Built on the pilot n8n instance. The RAG retrieval pattern and modular webhook design established here informed all subsequent production systems.
SGD81,000/yr vendor contract terminated following prototype validation

Technical highlights

n8n · Supabase pgvector · Claude API · RAG · Webhook
  • RAG architecture with Supabase pgvector for semantic knowledge retrieval
  • Language detection gateway routing queries for English and Chinese
  • Conversation transcript logging with timestamp and session tracking
  • Dashboard API endpoint for operations team transcript review
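The language detection gateway in the first bullet can be sketched in a few lines. This is a minimal illustration of how such a gateway might decide EN/ZH routing inside an n8n Code node; `detectRoute` and the prompt strings are hypothetical, not the production implementation.

```javascript
// Minimal sketch of an EN/ZH language gateway, as it might run in an
// n8n Code node. detectRoute() is a hypothetical helper: it counts CJK
// characters and routes to the Chinese path once they dominate the query.
function detectRoute(query) {
  const cjk = (query.match(/[\u4e00-\u9fff]/g) || []).length;
  const letters = (query.match(/[A-Za-z]/g) || []).length;
  // Route to ZH when CJK characters outnumber Latin letters.
  return cjk > letters ? "ZH" : "EN";
}

// Each route selects its own system prompt before the Claude call.
const prompts = {
  EN: "Answer in English using only the retrieved knowledge base passages.",
  ZH: "請僅根據檢索到的知識庫段落，以中文回答。",
};

const route = detectRoute("退款政策是什麼？");
console.log(route, prompts[route]);
```

In production the same decision would simply pick which branch of the n8n workflow the query continues down.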
Architecture, RAG chatbot request flow
User Input (query) → Language Gateway (EN/ZH routing) → Vector Search (Supabase pgvector) → Claude API (response generation) → Response (customer-facing) · Transcript Log (Supabase store)
n8n workflow canvas · RAG architecture with language gateway, vector search, and transcript logging
n8n chatbot workflow canvas
Customer-facing chat interface · natural language routing across 15+ supported languages
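The vector search stage of the flow above can be illustrated with plain cosine similarity. In production, Supabase pgvector performs this ranking in SQL; the tiny in-memory vectors and document IDs here are made-up examples of the pattern, not real embeddings.

```javascript
// Illustrative top-k retrieval with cosine similarity over in-memory
// vectors. Supabase pgvector does the equivalent ranking in SQL; the
// three-dimensional embeddings below are placeholders.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

function topK(queryVec, docs, k) {
  return docs
    .map((d) => ({ ...d, score: cosine(queryVec, d.vec) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k);
}

const docs = [
  { id: "pricing", vec: [0.9, 0.1, 0.0] },
  { id: "refunds", vec: [0.1, 0.9, 0.1] },
  { id: "onboarding", vec: [0.0, 0.2, 0.9] },
];
const hits = topK([0.85, 0.15, 0.05], docs, 2);
console.log(hits.map((h) => h.id)); // highest-scoring passages first
```

The retrieved passages are then injected into the Claude prompt as grounding context before response generation.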
Case Study 03

Ad Automation System

Autonomous campaign management.

The most technically complex system in the portfolio. Five interconnected workstreams replacing manual agency processes across Meta, Google Ads, and TikTok, built on a self-hosted production n8n instance. The system runs campaign setup, optimisation, and reporting autonomously, providing infrastructure the team would otherwise outsource to agencies.

Workstream 1
Campaign Setup
Accepts an Excel campaign brief via web form upload. Parses parameters across multiple sheet tabs and provisions campaigns, ad sets, and audiences directly through Meta, Google Ads, and TikTok APIs. Eliminates manual setup in Ads Manager entirely.
Staging version active. Final billing configuration with platforms in progress before full deployment.
Workstream 2
Daily Monitor
Runs at 08:00 SGT every morning. Aggregates previous-day spend and performance from Meta, Google Ads, and TikTok. Detects anomalies against configured thresholds and delivers a consolidated HTML summary via Telegram.
Built and validated. Deployment pending alongside WS3.
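The anomaly check at the heart of the Daily Monitor can be sketched as a threshold comparison over the previous day's metrics. The threshold values, metric names, and the sample row below are illustrative placeholders, not the production configuration.

```javascript
// Sketch of the threshold check WS2 might apply to previous-day metrics.
// All values here are illustrative, not the production config.
const thresholds = {
  spendDeltaPct: 25, // flag if day-over-day spend moves more than 25%
  minRoas: 1.5,      // flag if return on ad spend drops below 1.5
  maxCpa: 80,        // flag if cost per acquisition exceeds SGD80
};

function detectAnomalies(row) {
  const flags = [];
  const deltaPct = Math.abs((row.spend - row.prevSpend) / row.prevSpend) * 100;
  if (deltaPct > thresholds.spendDeltaPct) flags.push(`spend moved ${deltaPct.toFixed(0)}%`);
  if (row.roas < thresholds.minRoas) flags.push(`ROAS ${row.roas} below ${thresholds.minRoas}`);
  if (row.cpa > thresholds.maxCpa) flags.push(`CPA ${row.cpa} above ${thresholds.maxCpa}`);
  return flags;
}

const flags = detectAnomalies({
  platform: "Meta", spend: 1300, prevSpend: 1000, roas: 1.2, cpa: 95,
});
console.log(flags.length, "anomalies:", flags.join("; "));
```

Flagged rows are what get surfaced in the consolidated HTML summary delivered via Telegram.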
Workstream 3 + 4
Optimise and Execute
Runs 30 minutes after WS2. Reads 7-day trend data, generates specific bid and budget change recommendations via Azure OpenAI with explicit reasoning, then gates all execution behind a Telegram human-in-the-loop approval flow. No campaign change executes without human confirmation.
Built and validated. Deployment pending billing configuration.
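The human-in-the-loop gate can be sketched as a pending queue where only approved recommendations are ever released for execution. In production the approve/reject signal arrives via Telegram inline buttons; here it is simulated, and all names are hypothetical.

```javascript
// Sketch of the human-in-the-loop gate: recommendations queue as pending
// and only approved ones are released toward the platform APIs.
const pending = new Map();

function propose(rec) {
  pending.set(rec.id, { ...rec, status: "pending" });
  return rec.id;
}

function resolve(id, approved) {
  const rec = pending.get(id);
  if (!rec) throw new Error(`unknown recommendation ${id}`);
  rec.status = approved ? "approved" : "rejected";
  return rec;
}

function executable() {
  // Only approved changes ever reach the execution step.
  return [...pending.values()].filter((r) => r.status === "approved");
}

propose({ id: "r1", action: "raise budget 10%", campaign: "SG-lead-gen" });
propose({ id: "r2", action: "lower bid 5%", campaign: "HK-retargeting" });
resolve("r1", true);
resolve("r2", false);
console.log(executable().map((r) => r.id)); // only the approved change remains
```

This is the structural guarantee behind "no campaign change executes without human confirmation": execution reads exclusively from the approved set.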
Workstream 6
Conversion Events
Weekly batch upload of CRM conversions to Meta CAPI and Google Offline Conversions, giving the platforms first-party conversion signal to improve algorithm targeting.
🔒PDPA by design. PII never enters the n8n environment. A standalone SHA-256 hashing tool sits entirely outside the workflow layer. Only hashed identifiers pass to platform APIs. Plaintext personal data exposure in workflow logs is structurally impossible.
Support Layer
Active Campaigns API
A read-only webhook endpoint surfacing live campaign metrics accumulated by WS2. Consumed by WS3 as its data input. Clean separation between data collection and optimisation logic was a deliberate architectural decision to keep each workstream independently testable and replaceable.
Always active. Serves as the data bridge between monitoring and optimisation.
Workstream data flow
WS1 Campaign Setup → WS2 Daily Monitor → Active Campaigns API → WS3+4 Optimise and Execute → WS6 Conversion Upload
WS1 campaign setup workflow · Excel brief parsing through to Meta, Google Ads, and TikTok campaign provisioning
WS1 n8n workflow canvas
Campaign management interfaces · intake through to approval
Campaign brief intake · Excel upload triggers automated campaign provisioning
SGD64,800/yr projected
Agency management fees eliminated upon full WS1 to WS4 deployment across Meta, Google Ads, and TikTok
Status
WS1 staging active
Active Campaigns API live
WS2, WS3, WS4, WS6 built and validated
Pending final billing configuration with Meta and Google before full deployment
Case Study 04

Agency Performance Reporting

AI-powered performance reviews. Agencies are accountable. Teams arrive better informed.

Built to establish objective performance measurement across APAC, tracking three media agencies that run paid digital across Meta and Google Ads. The system creates a data-normalised accountability layer between the client and its agencies, replacing subjective slide deck reviews with repeatable AI analysis.

The real change is behavioural. Marketing teams now walk into agency review meetings with an AI meeting assistant active on screen. They query it in real time: what drove the performance delta, which campaigns underperformed against forecast, what to push back on. The agency prepares differently because the team arrives better informed. The dynamic in the room has changed.

Workflow A
Report Intake
Each agency submits via a structured webhook endpoint. Data is normalised into a consistent schema covering overview, channels, social, search, creative, and programmatic. Stored in Google Sheets for longitudinal analysis across months.
Agency HK · HK paid digital
Agency SG · SG paid digital
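The normalisation step can be sketched as a field-mapping pass over each agency's submission. The field names and mapping table below are illustrative of the pattern, not the production schema.

```javascript
// Sketch of the intake normalisation: each agency submits its own field
// names, and the webhook maps them into one consistent schema before
// storage. Mappings shown are placeholders.
const fieldMap = {
  total_spend: "spend",
  media_cost: "spend",
  impr: "impressions",
  impressions_total: "impressions",
  conv: "conversions",
  conversions_total: "conversions",
};

function normalise(submission) {
  const row = { agency: submission.agency, month: submission.month };
  for (const [key, value] of Object.entries(submission.metrics)) {
    const canonical = fieldMap[key];
    if (canonical) row[canonical] = Number(value);
  }
  return row;
}

const row = normalise({
  agency: "Agency HK",
  month: "2024-05",
  metrics: { media_cost: "41250.50", impr: "1890000", conv: "312" },
});
console.log(row);
```

Because every agency lands in the same schema, the longitudinal analysis in Workflow B can compare agencies and months directly.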
Workflow B
Dashboard Generator
Triggered manually when a review is due. Reads the full submission history, runs AI analysis to surface performance trends, gaps, and anomalies, and produces a self-contained HTML report. CRM conversion data is merged in for closed-loop attribution.
Companion Tool
Discussion Webhook
Used live on a phone during agency review meetings. Accepts the topic under discussion and returns rapid counter-arguments, strategic challenges, or alternative framings. Three modes: counter, challenge, and strategy.
Active
Long-term arc. The autonomous ads system in Case Study 03 is designed to replace agency dependency entirely across APAC. At that point, this reporting infrastructure becomes the executive performance dashboard shown in Section 1, covering market intelligence and autonomous spend across every market.
Live tool output · data anonymised for portfolio
Status
Agency HK intake active
Agency SG intake active
Discussion Webhook active
Dashboard generators built, triggered on demand
Case Study 05

Blog Content Generator

Content production pipeline for guides, educational explainers, and localised market variants.

This pipeline produces the high-volume, content-team-owned work that sits alongside subject-matter expert authoring: guides, educational explainers, vertical landing pages, localised market variants. It does not replace the SME-authored thought leadership that lives elsewhere in the content function. It handles the templated production work that otherwise gets outsourced to agencies or deprioritised due to capacity.

The same architecture runs trilingual at PURE (EN, TC, SC) and replaced two content agencies. For a global business with regional content needs, the unit of work is not one article: it is one canonical piece plus its market variants, social derivatives, SEO metadata, and structured data schemas. The pipeline produces all of them from a single brief.
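The brief-to-unit-of-work expansion can be sketched as a simple fan-out. The item types mirror the text above; the function name, field names, and the fixed derivative channels are illustrative assumptions, not the production pipeline.

```javascript
// Sketch of the unit-of-work expansion: one brief fans out into a
// canonical article plus market variants, social derivatives, SEO
// metadata, and a structured data schema.
function expandBrief(brief) {
  const items = [{ type: "canonical", market: brief.homeMarket, topic: brief.topic }];
  for (const market of brief.variantMarkets) {
    items.push({ type: "market-variant", market, topic: brief.topic });
  }
  for (const channel of ["linkedin", "instagram"]) {
    items.push({ type: "social-derivative", channel, topic: brief.topic });
  }
  items.push({ type: "seo-metadata", topic: brief.topic });
  items.push({ type: "structured-data-schema", topic: brief.topic });
  return items;
}

const plan = expandBrief({
  topic: "pricing-guide",
  homeMarket: "SG",
  variantMarkets: ["HK", "MY"],
});
console.log(plan.length, "work items from one brief");
```

This is why the unit of work is the brief rather than the article: every downstream asset is derived from the same input.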

Status
Live and replacing agency
TypeScript · tRPC · Drizzle ORM · Claude API

Sample output shown is illustrative of the pipeline producing one canonical article plus its social derivatives, SEO metadata, and structured data schema from a single brief.

Content production pipeline interface, showing one canonical piece plus its social derivatives, SEO metadata, and structured data schema from a single brief
Case Study 06

AI Ad Creative Generator

AI ad creative at speed. The same architecture extends to organic content templates for the wider team.

A standalone application built with React and deployed on Cloudflare Workers and Pages, with Cloudflare R2 for image storage. It accepts a reference image, uses the Claude API to describe the environment, then calls FAL.ai to generate AI images based on that environment description. Four visual types are supported: promotional with customisable offer text, UGC-style, community-based, and editorial ghost, which shows a person standing alongside a ghost-like figure representing an aspirational identity. All generated assets are resized into nine formats covering Google PMAX, Meta, and Programmatic Display specifications. Logo compositing runs in-browser using the Canvas API with three positioning options, and text overlays are fully customisable before export, with font and size preserved across edits.

The underlying architecture, prompt-driven creative generation with brand constraint inputs, is format-agnostic. The same system is being extended to generate organic content templates for the social media team, who currently produce content manually. One build, multiple use cases, expanding adoption.
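The resize fan-out can be sketched as a mapping from one master asset onto each placement size. The dimensions below are common specs for these placements, but the exact production format set is an assumption, as is the `coverScale` helper.

```javascript
// Sketch of the resize fan-out: one master asset mapped onto nine
// placement sizes. Dimensions are typical placement specs, shown for
// illustration.
const formats = [
  { name: "pmax-landscape", w: 1200, h: 628 },
  { name: "pmax-square", w: 1200, h: 1200 },
  { name: "pmax-portrait", w: 960, h: 1200 },
  { name: "meta-feed", w: 1080, h: 1080 },
  { name: "meta-story", w: 1080, h: 1920 },
  { name: "display-mpu", w: 300, h: 250 },
  { name: "display-leaderboard", w: 728, h: 90 },
  { name: "display-skyscraper", w: 160, h: 600 },
  { name: "display-half-page", w: 300, h: 600 },
];

// Centre-crop scale factor: the master is scaled until it covers the
// target box, then cropped, so no format ever letterboxes.
function coverScale(master, target) {
  return Math.max(target.w / master.w, target.h / master.h);
}

const master = { w: 2048, h: 2048 };
const jobs = formats.map((f) => ({ ...f, scale: coverScale(master, f) }));
console.log(jobs.length, "resize jobs from one master asset");
```

Cover-and-crop is the usual choice here because ad platforms reject letterboxed creative in most placements.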

Stack
React · FAL.ai · Claude API · Cloudflare Workers · Cloudflare R2 · Canvas API
Interactive demo · static mock with hardcoded data, no live API calls
Case Study 07

Google Review Handler

Every review answered. AI-drafted, human-approved, sent in minutes.

A market-agnostic review-response architecture, currently in production across two of the client's markets but designed so adding a new market is a configuration file, not a code change. Locations, Telegram approval channel, brand voice, and review-scoring thresholds are all per-market parameters. The same workflow scales to as many markets as a business operates in.

Replaced a manual, inconsistent process where reviews were answered sporadically. Every review now receives a contextual, brand-appropriate response with a human final check before anything is published.

Step 1
Fetch and draft
A manually triggered workflow fetches all unanswered Google Business Profile reviews. Each review is scored by sentiment and escalation risk. A personalised Claude-generated reply is drafted, tailored to the specific review content and tone.
Step 2
Human approval via Telegram
Each pending reply is sent to a Telegram bot displaying the original review and AI draft side by side. Inline keyboard buttons allow one-tap approval (posts directly to Google), request for redraft, or skip. The human always has final say before anything is published.
N+
Markets supportable

Each market is a configuration: location list, Telegram channel, brand voice, escalation thresholds. Adding a new market means a new config file, not a new workflow. The architecture has no hardcoded market assumptions.

2
Markets currently live

In production today across Hong Kong and Singapore, demonstrating the multi-market pattern. Same Claude-drafted, human-approved flow runs end to end in each market against its own Google Business Profile and Telegram channel.
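The market-as-configuration pattern can be sketched as a plain config object per market, with the workflow reading everything market-specific from it. Channel names, location IDs, and threshold values below are placeholders, not production identifiers.

```javascript
// Sketch of the market-as-configuration pattern: adding a market is one
// more entry in this object, not a new workflow. Values are placeholders.
const markets = {
  HK: {
    locations: ["loc-hk-central", "loc-hk-tst"],
    telegramChannel: "@reviews-hk-approvals",
    brandVoice: "warm, bilingual EN/TC, service-led",
    escalationThreshold: 2, // reviews at 2 stars or below escalate first
  },
  SG: {
    locations: ["loc-sg-orchard"],
    telegramChannel: "@reviews-sg-approvals",
    brandVoice: "warm, concise, service-led",
    escalationThreshold: 2,
  },
};

function routeReview(review) {
  const config = markets[review.market];
  if (!config) throw new Error(`no config for market ${review.market}`);
  const escalate = review.rating <= config.escalationThreshold;
  return { channel: config.telegramChannel, escalate };
}

console.log(routeReview({ market: "SG", rating: 1 }));
```

Because the workflow never hardcodes a market, the same n8n canvas serves every entry in the config.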

Status
In production, market-agnostic architecture
Pending structured error handling documentation before handover to operations teams
Workflow demo, from review detection to live publication
Case Study 08

Social Media Insights

From static reports to strategic dialogue. The team now queries their own data through AI.

Regional Instagram and Facebook performance analysis covering both markets. Each region pairs a scheduled analysis workflow with a conversational chat assistant, turning one-directional reports into interactive intelligence tools the social team can query in plain language.

The team was already doing ad hoc AI analysis by manually copying Instagram data into external LLMs. What changed was the setup cost: zero. All data is structured, embedded, and ready. The AI chat is open during weekly performance reviews. The team asks it for content recommendations, underperformance diagnoses, and posting schedule optimisations without leaving the dashboard. The behaviour shifted because the friction disappeared.

Hong Kong
All-brands weekly analysis
Runs every Tuesday across all five brand accounts. Fetches Instagram and Facebook media performance data, generates AI commentary on trends, top posts, and content themes across both platforms, and publishes a formatted report to Microsoft Teams.
Active
Singapore
Priority brands monthly analysis
Runs on the first Tuesday of each month for the priority Singapore brand accounts, covering Instagram and Facebook performance. A separate Stories Tracker polls every four hours to capture ephemeral Instagram story metrics before the 24-hour expiry window closes, so no data is lost to Instagram's story lifecycle.
Built and validated
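The arithmetic behind the Stories Tracker's polling cadence is worth making explicit: with a 24-hour story lifetime and a 4-hour polling interval, every story is captured several times before it disappears. The helper below is a hypothetical illustration of that window calculation.

```javascript
// Sketch of the expiry-window arithmetic behind the Stories Tracker.
// Constants mirror the text: stories live 24 hours, polling runs every 4.
const STORY_LIFETIME_H = 24;
const POLL_INTERVAL_H = 4;

function pollsBeforeExpiry(ageHoursAtFirstPoll) {
  const remaining = STORY_LIFETIME_H - ageHoursAtFirstPoll;
  if (remaining <= 0) return 0; // already expired, metrics lost
  return Math.floor(remaining / POLL_INTERVAL_H) + 1;
}

// A story first seen 3 hours after posting is still captured repeatedly
// before the 24-hour window closes.
console.log(pollsBeforeExpiry(3), "captures before expiry");
```

The worst case under a 4-hour cadence is a story first seen 4 hours old, which still gets multiple captures, so no story's metrics are lost to the lifecycle.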
Chat Assistant, both markets
Conversational intelligence layer
Each region has a webhook-based chat assistant. After reviewing the scheduled report, the social team asks follow-up questions in natural language and receives Azure OpenAI-powered answers grounded in the actual report data, not generic responses. Both assistants are independently deployable and active.
Both chat assistants active
Live tool output · data anonymised for portfolio
09

The stack.

Automation and Orchestration
n8n Webhooks Cron
AI Models
Claude (Anthropic) Azure OpenAI Google Gemini
Vector and Data
Supabase pgvector Pinecone Firebase Cloudflare D1 Google Sheets
Analytics and Attribution
GA4 Google Tag Manager
Ad Platforms
Meta Ads API Google Ads API TikTok Ads API Meta CAPI Google Offline Conv.
Infrastructure
Cloudflare Pages Cloudflare Workers Cloudflare R2 Vercel GitHub Self-hosted n8n
CRM and Email
SendGrid Telegram Bot API
Languages
JavaScript TypeScript HTML CSS React
Frameworks and ORMs
tRPC Drizzle ORM FAL.ai
Platforms and Integrations
Instagram Graph API Google Business Profile Microsoft Teams Google Drive
Framework

Workflow Governance Framework

Production-grade standards for AI and automation systems.

This framework governs how AI and automation workflows are designed, deployed, and maintained in production environments. It is designed to be ecosystem-agnostic: the patterns, tier classifications, and governance requirements apply regardless of whether your organisation runs on Google Workspace, Microsoft 365, or any other stack. The specific tools referenced are illustrative defaults and can be swapped for your equivalents without changing the underlying standard. This framework exists because sustainable AI adoption requires consistency: every tool built to the same standard, every failure handled the same way, every internal customer able to trust that the system will behave predictably. Reliability is the foundation of adoption.

Tier system
Tier 0

Personal Scratchpad

Use when only you run the workflow, no stakeholders are waiting for output, and a failure would only affect you. Suited to ad-hoc analysis or one-off internal pulls.

  • Basic try-catch in code nodes to prevent crashes
  • Version control optional, periodic JSON export recommended
  • No formal documentation, testing, or logging required
  • Use whatever tools work, including spreadsheets you already have
  • Promote to Tier 2 the moment someone else starts depending on the output
Tier 1

Proof of Concept

Use when something is explicitly tagged as a POC, proof of concept, or quick demo. Built to validate an idea, not to run in production.

  • Manual trigger only, must be inactive when not in use
  • Error handling, alerts, logging, and documentation are not required
  • Test or mock data only, no live customer data or records
  • Naming convention: prefix with [POC]
  • Maximum lifespan of two weeks, then promote to Tier 2 or delete
Tier 2 Default

Production-Ready

The standard for every workflow unless explicitly stated otherwise. If a workflow runs on a schedule, feeds another team, or touches real customer data, it must meet Tier 2.

  • Try-catch in every Code node, no exceptions
  • Failure alerts to Telegram or equivalent notification channel on any error
  • Execution logging to Google Sheets or equivalent on every run
  • Input validation at every entry point
  • Sticky-note documentation, workflow description, and inline comments
  • Credentials stored in vault, webhook validation enforced, HTTPS only
  • Data minimisation and 30-day log retention per applicable data protection regulations
  • JSON export backup after every change
  • Maintainable, someone other than the original builder can read and run it
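The first two Tier 2 requirements can be sketched together: a Code node that never throws raw, but returns a structured error item a downstream branch can route to the alert channel. The item shape follows n8n's `[{ json: ... }]` convention; the transform itself (a GST calculation) is a placeholder, not a real workflow.

```javascript
// Sketch of the mandatory try-catch shape inside an n8n Code node: errors
// are returned as structured items, not thrown, so the workflow keeps
// running and the failure-alert branch can notify Telegram.
function runCodeNode(items) {
  try {
    return items.map((item) => {
      if (typeof item.json.amount !== "number") {
        throw new Error(`invalid amount on item ${item.json.id}`);
      }
      // Placeholder transform: apply Singapore's 9% GST.
      return { json: { ...item.json, amountWithGst: item.json.amount * 1.09 } };
    });
  } catch (err) {
    // Returned, not thrown: downstream nodes route this to the alert branch.
    return [{ json: { error: true, message: err.message, at: new Date().toISOString() } }];
  }
}

console.log(runCodeNode([{ json: { id: "a1", amount: "not-a-number" } }]));
```

An IF node checking for `error: true` then splits the flow between the logging sheet and the Telegram alert, satisfying standards 01 to 03 in one pattern.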
Tier 3

Enterprise-Grade

Reference standard, never auto-applied. Triggered only when the operator explicitly asks for it, or when Tier 3 risk is flagged and confirmed. Includes everything in Tier 2 plus a heavier operations layer.

  • Comprehensive who, what, when, why audit trail
  • Separate dev, staging, and production environments
  • Defined uptime SLA with measured availability
  • Detailed step-by-step runbooks for every operational scenario
  • Automated regression testing on each change
  • Real-time monitoring dashboards for live health visibility
When to flag Tier 3 risk
  • Sales or customer service teams will use the output for daily contact with customers or clients
  • The workflow feeds data into another workflow, creating a cascade dependency
  • Failure for 48 hours or more would cause visible business impact
  • The C-suite or executive leadership would notice or ask about failures
  • Multiple departments depend on the output to do their work
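The tier decision itself can be expressed as code. The rules below mirror the framework text (Tier 2 as the default, Tier 0/1 only under narrow conditions, Tier 3 only when explicitly requested or when flagged risk is confirmed); the flag names are illustrative, not a formal schema.

```javascript
// Sketch of the tier decision as a function over workflow properties.
// Flag names are hypothetical; the rules follow the framework text.
function classifyTier(wf) {
  if (wf.tier3Requested || (wf.tier3RiskFlagged && wf.tier3RiskConfirmed)) {
    return 3; // never auto-applied: explicit request or confirmed risk only
  }
  if (wf.isPoc) return 1; // explicitly tagged [POC], two-week lifespan
  const onlyBuilderDependsOnIt =
    !wf.othersDependOnOutput && !wf.scheduled && !wf.touchesCustomerData;
  if (onlyBuilderDependsOnIt) return 0; // personal scratchpad
  return 2; // production-ready default
}

console.log(classifyTier({ isPoc: false, othersDependOnOutput: true }));
```

Note the ordering: the Tier 0 check comes last before the default, which encodes the promotion rule that a scratchpad becomes Tier 2 the moment anyone else depends on its output.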
Mandatory Tier 2 standards
01
Try-catch in every Code node
02
Failure alerts via Telegram or equivalent
03
Execution logging to Google Sheets or equivalent
04
Zero hardcoded credentials
Tool ecosystem
Function | Default tool | Alternatives accepted
Notifications | Telegram | Microsoft Teams, Slack, or equivalent
Logging | Google Sheets | SharePoint Excel, Airtable, or equivalent
Email | SendGrid or equivalent | Any transactional email platform
File storage | Cloudflare R2 | Google Drive, OneDrive, S3, or equivalent
Database | Supabase | Any managed Postgres or equivalent
AI generation | Claude API | Any LLM API depending on task requirements
CRM | CRM platform | HubSpot, Salesforce, or equivalent
Maintenance schedule
Weekly · Fridays · 1 hour

Weekly maintenance

  1. Back up every workflow as JSON to a dated folder, verify each file is non-zero size (15 min)
  2. Review the week's executions, count failures by workflow, look for repeating errors or time-of-day patterns (20 min)
  3. Credential audit, review the credential list, rotate anything expiring soon, test one credential per week (10 min)
  4. Refresh one-page docs and sticky notes for any workflow that changed during the week (15 min)
Monthly · First Monday · 2 hours

Monthly review

  1. For each workflow, check execution time and error rate against baseline, document trends (30 min)
  2. Verify 30-day auto-deletion is running, spot-check for stale rows, check storage usage (30 min)
  3. Workflow audit, list active workflows, flag anything to deprecate, mark anything approaching Tier 3 risk, refresh the dependency map (30 min)
  4. Draft a status update covering active workflow count, reliability stats, issues encountered, new API access needed, compliance status, and next month's planned changes (30 min)
Chester Chee
cheechester1991@gmail.com
Open to senior marketing technology and growth operations roles in Singapore.