Chapter 7-9: Knowledge-based Agents
Logical Agents
Knowledge-based agents typically store facts using logic. Inference in first-order logic is often implemented in the dedicated logic programming language Prolog. Here are two online examples using SWI-Prolog:
Python provides several modules for logic and symbolic mathematics. Here is a short primer:
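Beyond that primer, the following is a minimal self-contained sketch of propositional inference in Python; the choice of the SymPy package and the example propositions are assumptions made for illustration.

```python
# Propositional-logic sketch using SymPy (module choice is an assumption).
# A knowledge base KB entails a sentence alpha iff KB AND NOT(alpha) is unsatisfiable.
from sympy import symbols
from sympy.logic.boolalg import And, Implies, Not
from sympy.logic.inference import satisfiable

rain, wet_grass = symbols("rain wet_grass")  # hypothetical propositions

# Knowledge base: it is raining, and rain implies that the grass is wet.
kb = And(rain, Implies(rain, wet_grass))

# Entailment check by refutation: is KB AND NOT(wet_grass) unsatisfiable?
entailed = not satisfiable(And(kb, Not(wet_grass)))
print("KB entails wet_grass:", entailed)  # expected output: True
```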
Large Language Models
Large language models (LLMs) are a type of knowledge-based agent that uses natural language rather than logic. They can be used via an API or run locally (see the sketch after the resource list below). Important tasks are prompt engineering and context engineering. Resources:
- OpenAI Python API Library.
- Hugging Face provides a large collection of downloadable pretrained LLMs together with the transformers library for running them locally.
- Prompt engineering guide from OpenAI.
- Textbook: Speech and Language Processing by Dan Jurafsky and James H. Martin. Part I contains an excellent introduction to language models, transformers and large language models.
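To make both access modes concrete, here is a minimal sketch, assuming the OpenAI Python API library (with an `OPENAI_API_KEY` environment variable set) and the Hugging Face `transformers` package; the model names and prompts are placeholder assumptions, not recommendations.

```python
# Sketch only: model names and prompts are placeholder assumptions.

# (a) Use an LLM via an API with the OpenAI Python library (needs OPENAI_API_KEY).
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name
    messages=[{"role": "user", "content": "Define a knowledge-based agent in one sentence."}],
)
print(response.choices[0].message.content)

# (b) Run a small pretrained model locally with Hugging Face transformers.
from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2")  # small demo model
print(generator("A knowledge-based agent is", max_new_tokens=30)[0]["generated_text"])
```

The API route offloads computation to a hosted model, while the local route downloads the model weights once and runs them on your own hardware.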
Agentic AI
Agentic AI is an AI solution that uses a set of specially prompted LLM calls. The solution involves any or all of the following:
- Multiple LLM calls
- LLMs can use tools (browse the web, access files, etc.) to interact with an environment.
- A planner coordinates the activities of the agents. The planner can be
  - a developer-defined workflow that uses “prompt chaining” and LLMs giving each other feedback (see the sketch after this list), or
  - an LLM that plans its own tasks (the LLM acts as an autonomous agent, which gives agentic AI its name).
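As a concrete illustration of the first option, here is a minimal prompt-chaining sketch, assuming the OpenAI Python library; the three-step draft/feedback/revise workflow, the prompts, and the model name are illustrative assumptions.

```python
# Prompt-chaining sketch: the output of one LLM call becomes input to the next.
# Requires OPENAI_API_KEY; model name and prompts are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    """Make a single LLM call and return the text of the reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Step 1: a "writer" call drafts an answer.
draft = ask("Explain in two sentences what an agentic AI system is.")

# Step 2: a "reviewer" call gives feedback on the draft.
feedback = ask(f"Give one concrete suggestion to improve this explanation:\n{draft}")

# Step 3: the writer revises the draft using the feedback.
final = ask(f"Revise the explanation using the feedback.\n\nExplanation:\n{draft}\n\nFeedback:\n{feedback}")
print(final)
```

Here the developer fixes the workflow in code; in the second (autonomous) option, the sequence of calls would instead be decided by a planning LLM at run time.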
Video:
Tools:
- OpenAI Agents SDK: native support for function calling, retrieval, and tool orchestration for the OpenAI ecosystem (a basic function-calling sketch follows this list).
- CrewAI: orchestrate multiple specialized AI agents working collaboratively.
- LangGraph: a low-level LLM orchestration framework for building structured, reproducible agent pipelines.
- Model Context Protocol (MCP): An open protocol that enables seamless integration between LLM applications and external data sources and tools.
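To show the tool-use idea that these frameworks build on, here is a minimal function-calling sketch using only the plain OpenAI Python library (not any of the SDKs above); the `get_weather` tool, its schema, and the model name are hypothetical assumptions.

```python
# Function-calling sketch with the OpenAI Python library.
# The get_weather tool and the model name are hypothetical assumptions.
import json
from openai import OpenAI

client = OpenAI()

def get_weather(city: str) -> str:
    """Hypothetical local tool that the LLM is allowed to call."""
    return f"It is sunny in {city}."

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name
    messages=[{"role": "user", "content": "What is the weather in Dallas?"}],
    tools=tools,
)

# If the model decided to call the tool, run it with the arguments the model generated.
for call in response.choices[0].message.tool_calls or []:
    args = json.loads(call.function.arguments)
    print(get_weather(**args))
```

Frameworks such as the Agents SDK, CrewAI, and LangGraph automate this call-execute-respond loop and let multiple such agents hand results to each other.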
License
All code and documents in this repository are provided under the Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) License.