Methods

Understanding AI architectures involves looking at two distinct but interconnected layers: the internal model architecture that gives a Large Language Model (LLM) its capabilities, and the external system architecture that integrates the LLM into a functional application.

Methods Sections

  • Key Concepts: Web Model Context Protocol, Declarative API, Imperative API, Agentic Web

    Google's WebMCP turns Chrome into a platform for AI agents by replacing fragile screen scraping with structured, direct communication between agents and websites.

  • Key Concepts: Self-Organizing Memory, Scene Consolidation, Memory Cells, Salience Scoring

    A step-by-step guide to architecting a self-organizing memory system for AI agents. This implementation separates memory management from reasoning, using SQLite to structure interactions into 'cells' and 'scenes' for long-term context retention.
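The cell/scene split described in this entry can be sketched with a minimal SQLite schema; the table and column names below are assumptions for illustration, not the article's actual code.

```python
import sqlite3

# Hypothetical sketch of 'cells' and 'scenes' tables with a salience score.
# Schema and names are illustrative assumptions, not the guide's real code.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE scenes (
    id       INTEGER PRIMARY KEY,
    summary  TEXT,               -- consolidated summary of related cells
    salience REAL DEFAULT 0.0    -- score used to decide what to retain
);
CREATE TABLE cells (
    id       INTEGER PRIMARY KEY,
    scene_id INTEGER REFERENCES scenes(id),
    role     TEXT,               -- 'user' or 'assistant'
    content  TEXT,
    created  TEXT DEFAULT CURRENT_TIMESTAMP
);
""")

# Store one interaction as a cell attached to a scene.
conn.execute("INSERT INTO scenes (summary, salience) VALUES (?, ?)",
             ("Initial conversation about project setup", 0.5))
scene_id = conn.execute("SELECT last_insert_rowid()").fetchone()[0]
conn.execute("INSERT INTO cells (scene_id, role, content) VALUES (?, ?, ?)",
             (scene_id, "user", "How do I configure the build?"))
conn.commit()

# Retrieval favors high-salience scenes for long-term context.
rows = conn.execute(
    "SELECT summary FROM scenes ORDER BY salience DESC LIMIT 3").fetchall()
print(rows[0][0])
```

Separating storage like this keeps memory management out of the reasoning loop: the agent only sees the high-salience scene summaries the retrieval query returns.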

  • Key Concepts: Natural Language SQL, Autonomous Research Agents, Obsidian MCP, Home Assistant MCP, Sandboxed File Management

    Five practical application scenarios for connecting local Large Language Models to external tools via MCP, covering databases, research, knowledge management, smart homes, and file operations.

  • Key Concepts: LLM Memory, Context Engineering, DSPy, QDrant Vector Database, ReAct Agent, Tool Calling, Stateful AI

    Learn how to solve 'agent amnesia' by building your own custom LLM memory layer. This guide covers memory extraction, embedding, retrieval, and maintenance.
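The retrieval step of such a memory layer can be sketched with plain cosine similarity over stored embeddings; the toy vectors and helper below are illustrative stand-ins for a real embedding model and a Qdrant collection.

```python
import math

# Minimal sketch of the retrieve step in a custom memory layer, assuming
# memories were already extracted and embedded. The vectors here are toys;
# a real system would use an embedding model and a vector database.
def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

memories = [
    ("user prefers concise answers",    [0.9, 0.1, 0.0]),
    ("user works in Python and SQL",    [0.1, 0.8, 0.3]),
    ("project deadline is next Friday", [0.0, 0.2, 0.9]),
]

def retrieve(query_vec, k=2):
    # Rank stored memories by similarity to the query embedding.
    ranked = sorted(memories,
                    key=lambda m: cosine_similarity(query_vec, m[1]),
                    reverse=True)
    return [text for text, _ in ranked[:k]]

print(retrieve([0.85, 0.15, 0.05], k=1))  # most similar memory first
```

The same ranking idea carries over unchanged when the in-memory list is swapped for a vector database query.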

  • Key Concepts: Developer Knowledge API, Model Context Protocol (MCP), Google Cloud, Firebase, Android, Agentic Tools, Documentation API

    A guide to Google's Developer Knowledge API and official MCP server, which provide programmatic access to Google's developer documentation for AI agents.

  • Key Concepts: Model Context Protocol (MCP), MCP Connectors, Agentic Integration, Tool Discovery, JSON-RPC, Supabase MCP, Chrome DevTools MCP

    Discover how Model Context Protocol (MCP) connectors work. This guide explains the architecture, integration patterns, and best practices for linking AI agents to real-world data and tools.
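MCP clients and servers exchange JSON-RPC 2.0 messages, with `tools/list` for discovery and `tools/call` for invocation. A minimal sketch of that exchange follows; the `query_database` tool and its schema are hypothetical, not from a real server.

```python
import json

# Sketch of the JSON-RPC 2.0 messages an MCP client exchanges during tool
# discovery and invocation. The tool shown is a made-up example.
tools_list_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",  # MCP's standard tool-discovery method
}

# A server replies with the tools it exposes and their input schemas.
tools_list_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [{
            "name": "query_database",  # hypothetical tool
            "description": "Run a read-only SQL query",
            "inputSchema": {
                "type": "object",
                "properties": {"sql": {"type": "string"}},
                "required": ["sql"],
            },
        }]
    },
}

# The agent then invokes a discovered tool with tools/call.
tools_call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {"name": "query_database",
               "arguments": {"sql": "SELECT count(*) FROM users"}},
}

print(json.dumps(tools_call_request, indent=2))
```

Because discovery returns machine-readable schemas, the agent can validate arguments before calling a tool rather than guessing at an interface.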

  • Explore Agentic Context Engineering (ACE), the practice of designing and managing context windows to enhance reasoning within autonomous AI agents. Learn techniques for structuring, compacting, and validating contexts to maintain truthfulness, efficiency, and alignment.

  • Key Concepts: Pre-training, Fine-Tuning, Retrieval-Augmented Generation (RAG), Parameter-Efficient Fine-Tuning (PEFT), Low-Rank Adaptation (LoRA), Reinforcement Learning from Human Feedback (RLHF), Direct Preference Optimization (DPO), Foundational Model

    Explains the difference between pre-training and fine-tuning for LLMs, compares fine-tuning to RAG, and outlines modern techniques like PEFT and LoRA.
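The parameter savings behind LoRA come from simple arithmetic: a full fine-tune updates every entry of a weight matrix W, while LoRA trains only two low-rank factors B and A and applies W + B·A. A sketch with illustrative dimensions:

```python
# Back-of-envelope arithmetic for why LoRA (Low-Rank Adaptation) is
# parameter-efficient. The dimensions below are illustrative, not taken
# from any particular model.
d_in, d_out, r = 4096, 4096, 8   # hidden sizes and LoRA rank

full_params = d_out * d_in           # updating W directly
lora_params = d_out * r + r * d_in   # updating only B (d_out x r) and A (r x d_in)

print(full_params)                       # 16777216
print(lora_params)                       # 65536
print(full_params // lora_params)        # 256, i.e. ~256x fewer trainable params
```

This is why PEFT methods make fine-tuning feasible on modest hardware: only the small factors need gradients and optimizer state, while the base weights stay frozen.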

Methods Categories