Trip Bacon

How To - I Broke BMAD (Again) — But This Time the Agents Saved the Project

Tim Dickey
👁️ 12 views · 📅 1 week ago · ⏱️ 38:29
What This Creator Said
Tags: Creator Recommends · Tips & Advice · 🎫 Tourist Creator · Veteran Cruiser

Source: Our analysis of the creator's lived experience, based on what they said in this video.

Creator's Key Takeaways

I learned my lesson with Copilot, and in talking with my colleagues, one of the things that came up was: okay, don't burn your personal Copilot monthly allocation.

BMAD gives you that option of going back and ultimately reassessing and you can do this periodically.

We no longer have to make a trade-off, because all the things on the right existed to address the human tax that was inherent in doing work without this type of capability.

I am so convinced that this bottleneck has shifted away from how quickly developers can build to how quickly the business side of the development team can get the information needed to build.

Creator's Tips & Advice

Follow a disciplined workflow when working in a brownfield environment to avoid making a mess.
Make implicit requirements explicit when working with AI agents to avoid junk code.

Questions This Creator Answers

Q: How to fix a broken BMAD install in a brownfield repository?
Q: How to use BMAD personas effectively in an existing codebase?
Q: What are the benefits of using BMAD for documentation and user story generation?
YouTube Video Description

When a “quick cleanup” turns your BMAD install into a mess, what do you do next? In this episode, I walk through how I broke BMAD in a brownfield repo, fixed it the hard way, and then pushed the experiment forward using Windsurf, Claude Sonnet 4.6, and BMAD’s personas on a multimodal neural network project.

From Copilot chaos to Windsurf sanity
Recapping how Copilot-driven “syntax cleanups” and piecemeal PRs tangled BMAD config files and exhausted my personal Copilot credits.
Moving the work to Windsurf on my work laptop (via RustDesk) to use company-provided AI capacity instead of burning my own allocation.

Cleaning up the BMAD mess
Showing the trail of commits, sub-PRs, and reinstall attempts that actually made the BMAD setup worse before it got better.
Explaining why one-by-one “fix every warning” commits are risky, and how a bulk, focused PR would have been safer.
Getting the codebase and BMAD install back into a healthy state so the brownfield experiment could continue.

Letting BMAD sharpen the PRD
Using BMAD’s Analyst persona to reassess and upgrade the original PRD (initially drafted with Perplexity) for the multimodal neural network.
Strengthening the problem statement, the target user (independent AI devs and hobbyists), and key differentiators like “no cloud required” and the competitive landscape.
Highlighting how BMAD can periodically re-validate assumptions in a long-lived codebase, not just generate docs for a greenfield project.

Revisiting Agile in an AI world
Arguing that generative AI plus BMAD changes the “math” behind the Agile Manifesto trade-offs between documentation and working software.
Suggesting that AI drastically lowers the cost of things on the “right side” (docs, plans, specs) without sacrificing individuals, collaboration, or responding to change.
Making the case that the bottleneck has shifted from “how fast devs can build” to “how fast the business side can feed good context into agents.”

Party mode with agents and new stories
Spinning up BMAD’s party mode with Mary (Analyst), Amelia (Dev), Winston (Architect), Murat (Test Architect), Quinn (Tester), John (PM), and Bob (Scrum Master).
Asking the agents to validate epics and stories, call out gaps, and then auto-generate 26 standalone user story files from an epic so development can proceed.
Positioning the project as research-focused: BF16 training, gradient checkpointing, flash attention, ablation studies, and math reasoning benchmarks aimed at publishable results.

Lessons from screwing up in public
Owning the frustration, confusion, and embarrassment of breaking BMAD, and using it as a real-world example of AI-assisted dev pain.
Emphasizing that agents aren’t at fault when constraints and intent are underspecified; the responsibility still sits with humans.
Encouraging viewers to experiment, build AI fluency, and treat these tools like over-eager junior teammates that still need clear direction.

If you’ve ever wondered what happens when BMAD, AI assistants, and human error collide in a real brownfield repo, and how to recover while still moving a serious research project forward, this episode gives you the full, unsanitized story, plus a glimpse of where agentic development can take us next.
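To make the "26 standalone user story files from an epic" step concrete, here is a minimal sketch of how such a split could be scripted. This is hypothetical and not BMAD's actual implementation: the `## Story:` heading convention, the `split_epic` function, and the file-naming scheme are all assumptions for illustration.

```python
# Hypothetical sketch: split an epic markdown file into standalone
# user-story files, one file per "## Story:" section.
# The heading format and naming scheme are assumptions, not BMAD's format.
import re
from pathlib import Path


def split_epic(epic_path: Path, out_dir: Path) -> list[Path]:
    """Write each '## Story:' section of an epic to its own markdown file."""
    text = epic_path.read_text(encoding="utf-8")
    out_dir.mkdir(parents=True, exist_ok=True)
    written = []
    # Split at each story heading, keeping the heading with its body
    # (lookahead split leaves the delimiter attached to the section).
    sections = re.split(r"(?m)^(?=## Story:)", text)
    stories = [s for s in sections if s.startswith("## Story:")]
    for n, section in enumerate(stories, start=1):
        title = section.splitlines()[0].removeprefix("## Story:").strip()
        slug = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
        path = out_dir / f"story-{n:02d}-{slug}.md"
        path.write_text(section, encoding="utf-8")
        written.append(path)
    return written
```

A workflow like the one in the video would then hand each generated story file to a dev agent as a self-contained unit of work, which is what makes the per-story split useful in the first place.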