Engineering Surface

The build is not the pitch.

But the build is real. Here's the technical surface.

Public-facing engagements lead with the workflow outcome. This page exists for the engineers, architects, and security reviewers who want to see the substrate underneath — stack, patterns, certifications, and how secure AI workflows actually get assembled in production.

The Stack

What's actually used.

Tools chosen because they fit the job, not because they're trendy. Every engagement starts from what the client already has — Microsoft 365 or Google Workspace — before adding anything.

Cloud & Infra
  • AWS: Bedrock, SageMaker, Lambda, IAM, CloudWatch
  • Azure: OpenAI Service, Functions, Entra ID
  • GCP: Vertex AI, Cloud Run, IAM
  • IaC: Terraform, AWS CDK
AI & RAG
  • Models: Claude (Anthropic), GPT-4 class, open-weight as required
  • Frameworks: LangChain, LlamaIndex, custom pipelines
  • Vector Stores: pgvector, Pinecone, OpenSearch
  • Protocol: MCP for tool integration
Integration
  • M365: Graph API, SharePoint, Teams, Outlook
  • Google: Workspace APIs, Drive, Gmail
  • CRM/PSA: Salesforce, HubSpot, ServiceTitan-class
  • Document: OCR, parsing, structured extraction
Delivery & Ops
  • CI/CD: GitHub Actions, Azure DevOps
  • Observability: CloudWatch, Datadog, OpenTelemetry
  • Languages: Python, TypeScript, Go
  • Identity: Zero Trust IAM, SSO, MFA enforcement

Reference Patterns

Four workflow archetypes.

Pilot engagements reduce to a small number of architecture patterns. Each has a known data flow, governance posture, and failure mode. The audit determines which pattern fits.

01

Inbox Triage & Reply Drafting

Incoming email is classified, routed, and a first-draft reply is staged for human approval. No outbound message sends without an in-loop human. Best for client-services teams drowning in repetitive inbound.

data_path: inbox → classifier → RAG over response history → draft → reviewer
controls: human approval gate, audit log, redaction of PII before model call
typical_savings: 5–12 hrs/week per role
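The data path above can be sketched in a few lines. This is a minimal illustration, not production code: `classify()` and `redact_pii()` stand in for real model and redaction calls, and the category keywords are invented. The part that matters is the approval gate — the draft is staged, never sent automatically.

```python
import re
from dataclasses import dataclass

CATEGORIES = ("billing", "scheduling", "support")

def redact_pii(body: str) -> str:
    # Stand-in for the redaction step run before any model call.
    return re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", body)

def classify(body: str) -> str:
    # Stand-in for an LLM classifier; keyword match for illustration.
    lowered = body.lower()
    return next((c for c in CATEGORIES if c in lowered), "other")

@dataclass
class Draft:
    category: str
    text: str
    approved: bool = False   # flipped only by a human reviewer

def triage(body: str) -> Draft:
    safe = redact_pii(body)            # PII scrubbed before the model sees it
    category = classify(safe)
    # The draft is staged for review, never sent automatically.
    return Draft(category=category, text=f"[draft reply: {category}]")

def send(draft: Draft) -> bool:
    # Human approval gate: unapproved drafts do not leave the system.
    return draft.approved
```

Whatever the real stack looks like, `send()` refusing an unapproved draft is the control the audit log hangs off.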

02

Internal SOP / Policy Assistant

RAG over the firm's own documents — SOPs, policies, prior memos, training material — accessed via Slack, Teams, or a simple web UI. Source citations on every answer.

data_path: query → vector retrieval → ranked chunks → answer with citations
controls: retrieval scope per user role, no model fine-tuning on client data
typical_savings: 3–8 hrs/week across team
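The two controls above — per-role retrieval scope and citations on every answer — can be shown in a toy sketch. Keyword overlap stands in for vector similarity here, and the document IDs and roles are invented; the point is that out-of-scope documents are filtered before ranking, so they can never leak into an answer.

```python
# Invented corpus for illustration; real deployments retrieve from
# pgvector/Pinecone/OpenSearch with embedding similarity.
DOCS = [
    {"id": "sop-onboarding", "roles": {"ops", "hr"},
     "text": "new hire onboarding checklist"},
    {"id": "policy-expenses", "roles": {"ops", "finance"},
     "text": "expense reimbursement policy limits"},
]

def retrieve(query: str, role: str, k: int = 3):
    words = set(query.lower().split())
    scored = []
    for doc in DOCS:
        if role not in doc["roles"]:
            continue  # least-privilege: out-of-scope docs never rank
        score = len(words & set(doc["text"].split()))
        if score:
            scored.append((score, doc))
    scored.sort(key=lambda pair: -pair[0])
    return [doc for _, doc in scored[:k]]

def answer(query: str, role: str) -> str:
    chunks = retrieve(query, role)
    if not chunks:
        return "No sources in scope."
    cites = ", ".join(d["id"] for d in chunks)
    # Every answer carries its source IDs.
    return f"[answer grounded in retrieved chunks] (sources: {cites})"
```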

03

Meeting Notes → Tasks

Transcripts in (Zoom, Teams, Fireflies, etc.), structured action items out — owners, due dates, and a routed task in your tracker. Optional summary email to attendees.

data_path: transcript → structured extraction → task tracker write
controls: consent flag check, recording retention rules honored
typical_savings: 2–5 hrs/week per regular meeting attendee
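A compressed sketch of this extraction step, under obvious simplifications: the regex stands in for LLM-based structured extraction, the "ACTION:" convention is invented, and the tracker write is an in-memory list rather than a real Jira/Asana/Planner API call.

```python
import re
from dataclasses import dataclass

@dataclass
class Task:
    owner: str
    description: str
    due: str

# Hypothetical transcript convention, for illustration only.
ACTION = re.compile(r"ACTION:\s*(\w+)\s+will\s+(.+?)\s+by\s+(\S+)", re.I)

def extract_tasks(transcript: str) -> list[Task]:
    # Stand-in for LLM structured extraction over the transcript.
    return [Task(owner=m[0], description=m[1], due=m[2])
            for m in ACTION.findall(transcript)]

TRACKER: list[Task] = []

def route(tasks: list[Task]) -> int:
    # Stand-in for the task-tracker write; returns tasks created.
    TRACKER.extend(tasks)
    return len(tasks)
```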

04

Client Intake / Approval Routing

Forms, documents, or emails come in; structured fields are extracted, validated, and routed to the right person with the right context. Reduces "who owns this?" friction.

data_path: intake → field extraction → validation → routed handoff
controls: validation thresholds, fallback-to-human under confidence floor
typical_savings: 30–50% cycle-time reduction

Security & Compliance Posture

Boring on purpose.

The interesting part is the workflow. The security posture should be unremarkable. Here's how the engagements are structured to stay that way.

Data & Access

  • Client data stays in client tenant where possible — Bedrock/Azure OpenAI prioritized over hosted SaaS
  • Sub-processors disclosed up front (which model provider, what data flows where)
  • No client data used for model training or fine-tuning by default
  • PII redaction at the prompt boundary where applicable
  • Per-role retrieval scope; least-privilege RAG access

Governance & Audit

  • Every model call logged with input/output and reviewer identity
  • Human-in-the-loop gates on any outbound action (email, file write, transaction)
  • HIPAA-ready architectures available; BAAs in place where required
  • SOC 2-aligned controls: change management, access reviews, incident response
  • IP ownership: client owns prompts, configurations, workflows; consultant retains generic methodology

Certifications

Verified, not asserted.

Cloud certifications and vendor credentials are verified at credly.com/users/jason-campbell-cloud.

  • AWS Solutions Architect — Associate (CSA)
  • AWS Machine Learning Engineer — Associate (MLE)
  • AWS SysOps Administrator — Associate (SysOps)
  • Azure Solutions Architect Expert (AZ-305)
  • Google Cloud Associate Cloud Engineer (ACE)

Experience

  • Platform Engineering Lead · 10+ years · Healthcare, Fintech, Retail
Tech Wall

Quick scan.

For the recruiter scan or the "do they know X?" check.

AWS Bedrock · Azure OpenAI · Vertex AI · Anthropic Claude · LangChain · LlamaIndex · MCP Protocol · RAG / pgvector · SageMaker · Lambda · Terraform IaC · AWS CDK · GitHub Actions · CloudWatch · Datadog · OpenTelemetry · Zero Trust IAM · Microsoft Graph · Google Workspace API · TypeScript · Python · Go · HIPAA-aligned · SOC 2-aligned

Engineering Conversation

Talk to the architect.

For technical due diligence, security review, or stack questions before an engagement — direct line, no sales filter.

jason@nycolai.com · LinkedIn

Or book a 30-min call →