
Rating: 7.5/10.
A book about the latest developments in AI agents, covering common patterns such as agentic loops, prompt engineering, and no-code tools for building, configuring, and evaluating agents. Most of the book consists of walkthroughs of various agent libraries and platforms, with tutorials on getting started and the basic functionality of each. However, the technical depth is fairly low: it covers the happy path in each case but says little about dealing with real-world messiness and failure modes, comparing the trade-offs of different agentic approaches, or the challenges of designing robust agents and evaluating them. There is also no engagement with the academic literature on the topic, so it is most suitable as an introduction.
Chapter 1: There are many types of agents. The simplest just rewrites the input to and output from another model, while more complex ones plan, evaluate task completion, access external data or tools, and request permission for various actions.
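
A minimal sketch of the agentic-loop pattern behind most of these agents (not code from the book; `call_llm` and the `TOOLS` registry are hypothetical stand-ins):

```python
# Minimal agentic loop: the LLM either answers or asks for a tool call,
# and tool results are fed back until the task is judged complete.
# call_llm() and TOOLS are hypothetical stand-ins for a real client and registry.

TOOLS = {"search_docs": lambda query: f"results for {query!r}"}

def call_llm(messages):
    # Placeholder: a real implementation would call an LLM API here.
    return {"type": "final", "content": "done"}

def run_agent(task, max_steps=5):
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        reply = call_llm(messages)
        if reply["type"] == "final":        # model considers the task complete
            return reply["content"]
        result = TOOLS[reply["tool"]](reply["args"])   # execute the requested tool
        messages.append({"role": "tool", "content": str(result)})
    return "step limit reached"
```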
Chapter 2: Prompting best practices include providing examples, giving step-by-step instructions, using XML-style delimiters, and giving the AI a persona.
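
For illustration, a prompt that combines these practices (the persona, steps, and example are made up, not taken from the book):

```python
# Illustrative prompt assembling the chapter's practices:
# persona, step-by-step instructions, XML-style delimiters, and a worked example.
prompt = """You are a meticulous customer-support analyst.

Follow these steps:
1. Read the ticket between the <ticket> tags.
2. Classify its sentiment as positive, neutral, or negative.
3. Explain your reasoning in one sentence.

Example:
<ticket>The app crashes every time I open it.</ticket>
Sentiment: negative. Reason: the user reports a blocking defect.

<ticket>{ticket_text}</ticket>"""

print(prompt.format(ticket_text="Love the new update, works great!"))
```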
Chapter 3: You can give a custom GPT access to tools via HTTP endpoints (exposed locally through ngrok) that it can query, or upload files for it to search; you can also publish it to the GPT Store.
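
The endpoint side is just an ordinary web service; here is a minimal Flask sketch of the kind of local server ngrok would expose (the `/lookup` route and its payload are hypothetical, and the GPT itself is configured against the endpoint's schema in the ChatGPT UI, which isn't shown):

```python
# A tiny local endpoint that ngrok could expose as a custom-GPT action.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.get("/lookup")
def lookup():
    item = request.args.get("item", "")
    # Stand-in for a real data source the GPT cannot reach on its own.
    return jsonify({"item": item, "stock": 42})

if __name__ == "__main__":
    app.run(port=8000)   # then: ngrok http 8000
```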
Chapter 4: The AutoGen and CrewAI libraries can be used to set up multi-agent systems with different roles, such as one agent that plans and another that executes, or one that creates something while another criticizes it for problems. Sometimes a manager agent coordinates sub-agents while sharing context, or agents may be kept independent without shared context. Costs increase with these kinds of multi-agent setups.
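
Roughly what a CrewAI writer/critic pair looks like (argument names follow recent crewai versions and may differ in yours; treat this as illustrative rather than the book's exact code):

```python
# A rough sketch of a CrewAI-style writer/critic pair.
from crewai import Agent, Task, Crew

writer = Agent(
    role="Writer",
    goal="Draft a short product announcement",
    backstory="A concise technical copywriter.",
)
critic = Agent(
    role="Critic",
    goal="Point out factual and tonal problems in the draft",
    backstory="A sceptical editor.",
)

draft = Task(description="Write a 100-word announcement for feature X.",
             expected_output="A 100-word announcement", agent=writer)
review = Task(description="Critique the announcement and list problems.",
              expected_output="A bullet list of issues", agent=critic)

crew = Crew(agents=[writer, critic], tasks=[draft, review])
print(crew.kickoff())
```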
Chapter 5: Microsoft Semantic Kernel (SK) lets you add functions to LLMs; one example is wrapping an LLM prompt and exposing it as a function that another LLM can call. Native functions can perform tasks such as fetching from APIs, and these are exposed to the LLM with a decorator and a tool description.
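
A sketch of a native-function plugin; the `kernel_function` decorator and import path follow recent semantic-kernel Python releases and may vary by version:

```python
# Sketch of a Semantic Kernel native-function plugin the LLM can invoke as a tool.
from semantic_kernel.functions import kernel_function

class WeatherPlugin:
    @kernel_function(name="get_weather",
                     description="Return the current weather for a city.")
    def get_weather(self, city: str) -> str:
        # Stand-in for a real API call.
        return f"It is 18°C and cloudy in {city}."
```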
Chapter 6: Agentic Behavior Trees (ABTs) sequence a series of nodes consisting of LLM calls and tool use, such as running code and reporting the result back to the parent as success or failure, or retrying on failure. E.g., one agent writes code while another verifies it, repeating until the judge passes. You can control whether context is shared between the agents or kept separate.
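
A library-independent sketch of that write/verify retry loop (`write_code` and `judge` stand in for LLM calls or test runs):

```python
# Write/verify loop with retries, reporting success or failure upward.
def write_code(task, feedback=None):
    return f"# code for {task}" + (f"  # revised after: {feedback}" if feedback else "")

def judge(code):
    # A real judge would run tests or ask an LLM; succeed immediately here.
    return True, "looks fine"

def write_until_accepted(task, max_retries=3):
    feedback = None
    for _ in range(max_retries):
        code = write_code(task, feedback)     # writer node
        ok, feedback = judge(code)            # judge node
        if ok:                                # report success to the parent node
            return code
    raise RuntimeError("judge never passed")  # report failure to the parent node
```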
Chapter 7: Using Nexus and Streamlit, you can build a chat AI interface where you describe a persona and have tools discovered automatically from Python files or specified in JSON. You can make a call to an LLM with a prompt or fetch data from various sources.
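
The UI half in miniature with Streamlit (`respond` is a stand-in for the Nexus agent call, which isn't shown):

```python
# Bare-bones Streamlit chat UI of the kind the chapter builds on top of Nexus.
import streamlit as st

def respond(history):
    # Placeholder for the agent backend.
    return "Echo: " + history[-1]["content"]

if "history" not in st.session_state:
    st.session_state.history = []

for msg in st.session_state.history:
    st.chat_message(msg["role"]).write(msg["content"])

if text := st.chat_input("Ask the agent"):
    st.session_state.history.append({"role": "user", "content": text})
    st.chat_message("user").write(text)
    reply = respond(st.session_state.history)
    st.session_state.history.append({"role": "assistant", "content": reply})
    st.chat_message("assistant").write(reply)
```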
Chapter 8: Document indexing works by breaking documents into overlapping chunks and generating embeddings for them; at query time, the most relevant chunks are retrieved and added to the context for generation. This is supported in the Nexus platform, and memory works similarly by retrieving relevant facts about the user or previous conversations.
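
A toy version of the chunk/embed/retrieve pipeline; word overlap stands in for real embedding similarity, which any vector model would provide:

```python
# Chunk a document with overlap, score chunks against a query, build a RAG prompt.
def chunk(text, size=200, overlap=50):
    return [text[i:i + size] for i in range(0, len(text), size - overlap)]

def score(query, chunk_text):
    q, c = set(query.lower().split()), set(chunk_text.lower().split())
    return len(q & c) / (len(q) or 1)          # stand-in for cosine similarity

def retrieve(query, chunks, k=3):
    return sorted(chunks, key=lambda c: score(query, c), reverse=True)[:k]

doc = "refund policy: items may be returned within 30 days. " * 20
context = "\n".join(retrieve("refund policy", chunk(doc)))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: What is the refund policy?"
```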
Chapter 9: Prompt Flow is a tool that helps you iterate on prompts: set up an agent with a prompt template and tools, configure LLM settings, and define evaluations, including using an LLM to score responses against a rubric and uploading batches of data for runs and evaluation.
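
In sketch form, rubric-based LLM grading over a batch (`grade_with_llm` is a placeholder, not the Prompt Flow API):

```python
# Score a batch of question/answer pairs against a rubric with an LLM judge.
RUBRIC = """Score the answer from 1-5:
5 = fully correct and well grounded in the question
3 = partially correct or missing detail
1 = incorrect or irrelevant
Reply with the number only."""

def grade_with_llm(question, answer):
    # Placeholder: a real judge would send RUBRIC, question, and answer to an LLM.
    return 4

batch = [{"question": "What is 2+2?", "answer": "4"}]
scores = [grade_with_llm(row["question"], row["answer"]) for row in batch]
print(sum(scores) / len(scores))   # average rubric score over the batch
```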
Chapter 10: Prompt engineering methods available in Prompt Flow include few-shot examples, decomposing problems into sequential steps (sometimes with separate calls for each step), running multiple iterations for self-consistency, and exploring multiple solution paths with BFS or DFS (the tree-of-thought method).
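
Self-consistency in miniature, with a placeholder for a temperature-sampled LLM call:

```python
# Sample several answers and keep the majority vote.
from collections import Counter
import random

def sample_answer(question):
    return random.choice(["42", "42", "41"])   # placeholder for a stochastic LLM call

def self_consistent_answer(question, n=7):
    votes = Counter(sample_answer(question) for _ in range(n))
    return votes.most_common(1)[0][0]

print(self_consistent_answer("What is the answer?"))
```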
Chapter 11: Planning capabilities can be added to an agent by having one model design a plan while another executes it with tools, with some reasoning models having this capability built in. Another strategy is giving evaluation feedback to the model, allowing it to analyze its mistakes and adjust its strategy on subsequent examples.
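
A sketch of the planner/executor split with failure feedback (`plan` and `execute_step` are hypothetical LLM and tool calls):

```python
# Planner proposes steps, executor carries them out, failures go back to the planner.
def plan(goal, feedback=None):
    return [f"step 1 for {goal}", f"step 2 for {goal}"]

def execute_step(step):
    return True, f"result of {step}"            # a real executor would call tools

def run(goal, max_rounds=2):
    feedback, results = None, []
    for _ in range(max_rounds):
        results = []
        for step in plan(goal, feedback):        # planner model proposes steps
            ok, out = execute_step(step)         # executor model carries them out
            if not ok:
                feedback = f"failed at: {step}"  # feed the mistake back to the planner
                break
            results.append(out)
        else:
            return results
    return results
```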



