Context Binoculars: Understanding LLM Context Windows
Remember as a kid (or in my case as an adult) when you pulled out binoculars and gazed around the room? It’s a bit disorienting for sure, but also fun. And I guess if you’ve ever wondered about what I do for fun, now you know.
It struck me the other day that this was the ideal representation of how LLM context works. While the demonstration you're going to see below is extreme, it reflects the reality of what we face when we try to get LLMs to take on big projects. The context window is everything, and you're hearing more and more about context engineering in online articles. Ensuring that the LLM has exactly the right context greatly improves the accuracy of the results you get out the other end.
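To make the idea a little more concrete, here's a minimal TypeScript sketch of "giving the LLM exactly the right context": filtering candidate material down to only the most relevant pieces before a request. Everything in it is hypothetical (the chunk shape, the scoring, the character budget), not something from the demonstration in the post.

```typescript
// Hypothetical sketch: assemble only the context the model actually needs
// instead of dumping the whole project into the prompt.

interface ContextChunk {
  source: string;    // where the text came from (file path, doc section, etc.)
  text: string;      // the content itself
  relevance: number; // score from whatever retrieval/ranking step you use
}

const MAX_CONTEXT_CHARS = 12_000; // stand-in for a real token budget

function buildFocusedContext(chunks: ContextChunk[]): string {
  const selected: string[] = [];
  let used = 0;

  // Highest-relevance chunks first; stop adding once the budget is spent.
  for (const chunk of [...chunks].sort((a, b) => b.relevance - a.relevance)) {
    if (used + chunk.text.length > MAX_CONTEXT_CHARS) continue;
    selected.push(`--- ${chunk.source} ---\n${chunk.text}`);
    used += chunk.text.length;
  }

  return selected.join("\n\n");
}
```

The point isn't the particular budget or scoring; it's that everything left out of the window can't distract the model from the task at hand.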
The Ultimate Design Review: Example Action Items
This is an example output from the automated design review system described in The Ultimate Design Review: Orchestrating AI with Task-Based Workflows - Part 6 of 6.
This post shows how the AI transforms the comprehensive analysis into a structured, actionable backlog. The system automatically generates specific tasks with clear implementation steps, acceptance criteria, and testing requirements, turning qualitative feedback into a quantitative project plan.
Task Status Report: Design Review 2025-09-06 Action Items
Generated on: 2025-09-07
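As the description above notes, each generated task carries implementation steps, acceptance criteria, and testing requirements. Here's an illustrative TypeScript sketch of one possible shape for such a task; the field names and the example item are assumptions for the sake of the sketch, not the actual report format (that's what the example post itself shows).

```typescript
// Illustrative only: one possible shape for a generated action item.

interface ActionItem {
  id: string;
  title: string;
  status: "todo" | "in-progress" | "done";
  implementationSteps: string[];
  acceptanceCriteria: string[];
  testingRequirements: string[];
}

const exampleItem: ActionItem = {
  id: "DR-2025-09-06-014", // hypothetical ID
  title: "Extract data-access logic out of the UI layer",
  status: "todo",
  implementationSteps: [
    "Create a repository module that wraps the existing fetch calls",
    "Update the UI layer to consume the repository instead of calling fetch directly",
  ],
  acceptanceCriteria: [
    "No UI module performs network calls directly",
  ],
  testingRequirements: [
    "Unit tests for the repository module with mocked network calls",
  ],
};
```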
The Ultimate Design Review: Example Feedback
This is an example output from the automated design review system described in The Ultimate Design Review: Orchestrating AI with Task-Based Workflows - Part 6 of 6.
This post shows the comprehensive analysis that the AI generates when performing a full-scale design review of a codebase. The AI systematically analyzes each layer of the application, identifies architectural strengths and issues, and provides detailed findings that form the foundation for actionable improvement tasks.
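As a rough illustration of how layer-by-layer findings might be captured so they can later feed the task backlog, here's a short sketch. The layer names, fields, and example finding are assumptions, not the format the review actually produces.

```typescript
// Hypothetical structure for recording findings from a layer-by-layer review.

type Severity = "strength" | "minor" | "major";

interface Finding {
  layer: "ui" | "services" | "data-access" | "tooling"; // illustrative layers
  severity: Severity;
  summary: string;
  evidence: string[]; // files or snippets the reviewer cited
}

const findings: Finding[] = [
  {
    layer: "services",
    severity: "major",
    summary: "Business logic is duplicated across two service modules",
    evidence: ["src/services/orders.ts", "src/services/billing.ts"],
  },
];
```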
The Ultimate Design Review: Orchestrating AI with Task-Based Workflows - Part 6 of 6
Part 6 of 6: The AI-Assisted Development Workflow Series
This is the final installment in our six-part series on building an AI-assisted development workflow. We’ve set up our infrastructure, taught the AI our coding standards, and established a robust “memory bank” of project rules. Now, it’s time to put it all to the test with the ultimate challenge: a comprehensive, end-to-end design review of the entire codebase.
Series Overview:
- Part 1: MCP Servers, Ports, and Sharing - Setting up the foundation
- Part 2: ESLint Configuration Refactoring - Cleaning up tooling with AI
- Part 3: Custom Architectural Rules - Teaching AI to enforce design patterns
- Part 4: Task Orchestration - Managing complex refactoring workflows
- Part 5: Project Rules for AI - Creating effective memory banks and guidelines
- Part 6: The Ultimate Design Review - Putting it all together
The Challenge: From “Codebase” to “Action Plan”
Want to see what we’re building? I’ve created two example posts that show the actual output from this automated design review system: