ML
MASTERLINK
Chon Buri · Thailand · masterlinkha.com
Local AI · Smart Home · Security · No Cloud

Independent IT & AI infrastructure — built, owned, and operated locally. Security-focused services delivered without cloud dependency, without subscriptions, and without compromise.

Stack Live · March 2026
The Monster
Primary AI Compute Node · Windows 11 · Static IP · LAN Only
CPU Intel Core i9-10900K
RAM 64 GB DDR4
GPU RTX 2080 Ti · 11 GB VRAM
Cores / Threads 10C / 20T · up to 5.3 GHz
Motherboard ASUS ROG Maximus XII Extreme
Docker WSL2 · Auto-start
OS Windows 11 · Static LAN IP
Role LLM Host · Automation Engine
GPU Upgrade Planned — RTX 3090 · 24 GB VRAM
Unlocks full local inference of llama3:70b · 2× throughput · larger context windows
64 GB RAM · 20 CPU Threads · 11 GB VRAM · 7 Docker Services
Companion Node
🍓 RPi5 — MASTERLINK HUB
OS Home Assistant OS
Entities 273 active
Automations 17 running
Zigbee Sonoff Dongle Plus V2
Remote Nabu Casa (encrypted)
The RPi5 is the always-on smart home brain — running 24/7 at minimal power draw. It bridges the physical world (Zigbee sensors, switches, climate) into the digital layer, where The Monster's AI can act on it.

The Monster + RPi5 form a two-node local AI system — one thinks, one senses.
AI Stack — Layer by Layer

Every layer of the stack is self-contained, locally hosted, and interconnected. Data flows between services over a private Docker network. Nothing touches the internet unless explicitly triggered. The stack is designed to be modular — each layer can be upgraded independently.

01
LLM ENGINE
Local Language Model Runtime
OLLAMA
The core inference engine. Runs llama3.1:8b natively. Exposed to all services on the network. GPU support ready — activate one config block when RTX 3090 is installed. Serves as the brain for every AI feature in the stack.
:11434
llama3.1:8b
Active model. 8 billion parameters. Runs fully in VRAM on RTX 2080 Ti. Fast, capable, private. Context: 128K tokens. Suitable for reasoning, summarization, code, and conversation.
active
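A minimal sketch of what that single GPU config block could look like in a docker-compose.yml (the service name `ollama` and the volume name are assumptions; GPU passthrough under WSL2 also requires the NVIDIA Container Toolkit):

```yaml
services:
  ollama:
    image: ollama/ollama
    ports:
      - "11434:11434"        # matches the :11434 endpoint above
    volumes:
      - ollama_data:/root/.ollama
    # Activate this block to hand the NVIDIA GPU to the container:
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
volumes:
  ollama_data:
```

Swapping the 2080 Ti for the planned RTX 3090 needs no compose changes; the same reservation picks up whichever card the host exposes.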
02
USER INTERFACES
Chat & Interaction Layer
OPEN WEBUI
Full-featured chat interface connected directly to Ollama. Memory enabled — conversations persist and inform future responses. The daily driver for interacting with the local LLM.
:3000
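Wiring Open WebUI to Ollama over the private Docker network can be sketched as a compose fragment (the hostname `ollama` assumes both services share one network; port 3000 outside maps to the container's 8080):

```yaml
services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - "3000:8080"          # matches the :3000 endpoint above
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434
    volumes:
      - open-webui:/app/backend/data   # persists chats and memory
volumes:
  open-webui:
```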
HOME ASSISTANT
Smart home platform with Ollama integrated as the Assist engine. Voice and text commands processed locally. 273 entities, 17 automations — all AI-accessible.
:8123
03
MEMORY & KNOWLEDGE
Vector Storage & RAG Pipeline
QDRANT
Local vector database. Stores embeddings from documents, conversations, and sensor data. Powers Retrieval-Augmented Generation — the AI can answer questions from your own data. No cloud vector store needed.
:6333
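At its core, vector retrieval is nearest-neighbour search by cosine similarity. The toy sketch below stands in for Qdrant using only the standard library; the three-dimensional vectors and payload strings are illustrative, not real embeddings:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy "collection": (vector, payload) pairs, as Qdrant would store them.
collection = [
    ([0.9, 0.1, 0.0], "HA config: Zigbee dongle settings"),
    ([0.1, 0.9, 0.0], "Client note: audit scheduled for March"),
    ([0.0, 0.1, 0.9], "n8n workflow: nightly backup at 02:00"),
]

def retrieve(query_vec, top_k=1):
    """Return the top_k payloads ranked by similarity to the query."""
    ranked = sorted(collection, key=lambda item: cosine(query_vec, item[0]),
                    reverse=True)
    return [payload for _, payload in ranked[:top_k]]

print(retrieve([0.8, 0.2, 0.0]))  # ['HA config: Zigbee dongle settings']
```

In the real pipeline, an embedding model turns text into the vectors, Qdrant does the ranking at scale, and the retrieved payloads are stuffed into the LLM prompt as context.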
04
AUTOMATION & ORCHESTRATION
Workflow Engine & IoT Bridge
N8N
The glue between everything. Connects HA, Ollama, Qdrant, files, and external triggers. Builds the memory pipeline, handles alerts, automated reports, and complex multi-step AI workflows.
:5678
MOSQUITTO
MQTT broker. The real-time message bus for all IoT events. Zigbee2MQTT → Mosquitto → Home Assistant. Every sensor update, button press, and device state change flows through here.
:1883
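What that traffic looks like on the wire, using Zigbee2MQTT's default topic base (device friendly names are illustrative):

```
zigbee2mqtt/hallway_motion   →  {"occupancy": true, "battery": 94}
zigbee2mqtt/desk_plug        →  {"state": "ON", "power": 34.2}
zigbee2mqtt/desk_plug/set    ←  {"state": "OFF"}    (command, HA → device)
```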
ZIGBEE2MQTT
Physical world interface. Bridges Zigbee devices via Sonoff Dongle Plus V2. All sensor data captured locally — temperature, motion, power, presence — no manufacturer cloud required.
:8080
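A minimal Zigbee2MQTT configuration.yaml for this setup might look like the following (serial path and exact keys vary by host and Zigbee2MQTT version; treat it as a sketch):

```yaml
homeassistant: true          # enable Home Assistant MQTT discovery
mqtt:
  base_topic: zigbee2mqtt
  server: mqtt://localhost:1883   # the Mosquitto broker above
serial:
  port: /dev/ttyUSB0         # Sonoff Dongle Plus as seen by the container
frontend:
  port: 8080                 # matches the :8080 endpoint above
```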
Data Flow
SENSE Zigbee Sensors ──→ Zigbee2MQTT ──→ Mosquitto ──→ Home Assistant
THINK User / HA Trigger ──→ n8n Workflow ──→ Ollama LLM ──→ Response
REMEMBER Conversation / Doc ──→ n8n Embed ──→ Qdrant Store ──→ RAG Retrieval
INTERACT Open WebUI ──→ Ollama API ──→ Qdrant Context ──→ Answer
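The SENSE chain above can be sketched as a toy in-process pub/sub bus. In the real stack Mosquitto plays this role and Home Assistant is the subscriber; topic and payload here are illustrative:

```python
class Bus:
    """Minimal in-process stand-in for an MQTT broker."""
    def __init__(self):
        self.subscribers = {}

    def subscribe(self, topic, handler):
        self.subscribers.setdefault(topic, []).append(handler)

    def publish(self, topic, payload):
        for handler in self.subscribers.get(topic, []):
            handler(payload)

bus = Bus()
states = {}  # Home Assistant's view of the world

# Home Assistant subscribes to a sensor topic...
bus.subscribe("zigbee2mqtt/hallway_motion",
              lambda p: states.update(hallway_motion=p["occupancy"]))

# ...and Zigbee2MQTT publishes when the radio hears the sensor.
bus.publish("zigbee2mqtt/hallway_motion", {"occupancy": True})

print(states)  # {'hallway_motion': True}
```

The decoupling is the point: the sensor never knows who is listening, so a new consumer (an n8n workflow, a logger) can subscribe to the same topic without touching anything upstream.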
What This Opens Up

A local AI stack of this depth is rare outside enterprise environments. Having LLM inference, vector memory, workflow automation, and smart home integration running in one place — fully owned — creates possibilities that cloud-dependent setups simply can't match.

🔒
Security Audits — AI-Powered
Run reconnaissance, analyse network logs, and generate structured security reports using the local LLM. Sensitive client data never leaves the machine. The AI processes everything on-site — a key differentiator in security work.
masterlinkha.com · Service #1
🧠
Private RAG Knowledge Base
Feed Qdrant with technical docs, client histories, HA configs, and notes. Ask the AI questions and get answers grounded in your own data — not hallucinated from the internet. Your personal expert system.
Qdrant + Ollama + n8n
⚙️
Intelligent Home Automation
Beyond triggers and schedules — the LLM can reason about sensor states, predict patterns, and generate HA automations on the fly. Voice commands processed locally via Assist + Ollama. LAN-only latency, zero cloud dependency.
Home Assistant + Ollama
🤖
Autonomous Workflow Automation
n8n orchestrates multi-step pipelines that combine AI reasoning with real-world actions. Triage emails, generate reports, alert on anomalies, backup systems — all triggered locally, all processed locally.
n8n · Local-only · Zero API cost
💼
Client Deliverables — Automated
Intake form → n8n → Ollama generates a preliminary audit report → PDF delivered to client. The entire service delivery pipeline runs on your hardware. Scalable without recurring AI costs.
Full stack · Client-ready
🎮
RTX 3090 — Unlock Larger Models
Upgrading to 24 GB VRAM opens llama3:70b in full local inference. GPT-4 class capability, running entirely on your machine. No token costs. No rate limits. No terms of service restricting what you analyse.
Planned · v1.3
Core Principles
🔐
Zero Cloud
No data leaves without an explicit decision. Total data sovereignty.
💸
Zero Recurring Cost
No API subscriptions. No token billing. Hardware is the only investment.
🧩
Fully Modular
Each service is independent. Upgrade, swap, or remove any layer without breaking the rest.
🏠
Always On
Docker auto-starts with Windows. The AI is available 24/7, on the LAN, instantly.
Get In Touch
hello@masterlinkha.com
masterlinkha.com COMING SOON
Chon Buri, Thailand · Available for Projects
MICROSOFT TEAMS · PREFERRED CHANNEL

Send a message to hello@masterlinkha.com and we'll connect on Microsoft Teams — for project discussions, demos, and technical consultations.

SERVICES AVAILABLE
Security Audits & IT Consulting Available
Local AI Deployment & Setup Available
Smart Home Automation (HA) Available
Workflow Automation & Integration Available
Network Architecture & Segmentation Coming Soon