This is what feedback looks like after a real session.
Below is an actual report from a behavioral round. Real strengths, real gaps, and the specific moment the hiring panel would have lost interest. No marketing copy.
Session Report
Question asked
Tell me about a time you disagreed with a teammate about a technical decision.
Strong evidence-based reasoning let down by a long setup and a soft landing. The bake-off detail is the kind of specific detail hiring managers love, but the Situation runs ~40% of the answer and the Result never tells the panel what actually changed in production.
Up from 6.1 last week. You tightened the structure and added the benchmark detail.
Concrete benchmark numbers ("queries dropped from 230ms to 45ms") turned a subjective architecture debate into a falsifiable test. That is the move that separates senior thinking from junior thinking.
You spent ~40% of the answer on setup. Compress the Situation to one sentence and lead with the conflict so the panel hears the interesting part first.
Criteria Scores
Answer Breakdown
Strengths
- Concrete benchmark numbers ("queries dropped from 230ms to 45ms") turned a subjective debate into a falsifiable test. That is exactly how a hiring manager wants to see disagreements resolved.
- You named the eventual outcome - Maya stayed on the project and you both shipped together - instead of declaring victory. That signals you understand collaboration is the multi-quarter game, not the single decision.
Improvements
- The Situation runs ~100 words before any conflict surfaces. A hiring manager already knows what an ORM is and what a Series B fintech looks like. Cut it to one sentence and use the saved time on the Action.
- You do not say WHY Maya pushed back, only that she did. Steelmanning her position ("she had run into ORM regressions at her last job") would have shown conflict-handling maturity and kept her from sounding like an obstacle.
- The Result is qualitative ("we shipped, it has been running fine"). Add the deploy date relative to the original deadline, or the production p95 latency on real traffic. The panel already heard your test numbers - they want to hear the prod numbers.
Open with the conflict, not the project: "Maya wanted raw SQL, I wanted Prisma - we disagreed for two days before we agreed to settle it with a benchmark." That puts the interesting frame on the answer in the first sentence and lets you spend the next sixty seconds on the Action and the Result.
Where you lost them
~0:48 - "We had a long discussion about it..."
This is where the panel mentally checks out. "Long discussion" is vague time and vague substance - both of which signal that you are about to skip the part of the story that actually demonstrates judgment. Tell them what Maya argued, what you argued, and how long you each held the position before the bake-off ended it. Two specific sentences here would change the whole answer.
Example of a stronger answer
Maya wanted raw SQL, I wanted Prisma - we disagreed for two days before we agreed to settle it with a benchmark. She had hit ORM perf regressions in a prior role and was worried about our two-million-row daily volume. I argued the consistency cost of going off-pattern would slow every future engineer on the service. We built the three hottest queries both ways against a million-row test set. Prisma was 30% slower on the heaviest one. We added two indices and one raw escape hatch and closed the gap to 5%. We shipped on the original deadline; the service has held p95 under 80ms in prod for eight months. Maya owns the escape-hatch query - she still does not love Prisma, but she signed off on the call.
Predicted follow-ups
- What would you have done if the benchmark had favored Maya's approach?
- How did Maya feel about being overruled? Have you talked about it since?
- Walk me through how you set up the bake-off - what were you measuring and why those three queries?
Your transcript
Yeah, sure. So, um, last year I was on the payments team at my previous company - we were a Series B fintech, about thirty engineers - and we were spinning up a new microservice to handle payout reconciliation. I was the second engineer assigned to it. The first one, Maya, had been there longer than me, and she had already done some scoping. When I joined, she had a draft architecture doc that proposed using raw SQL for all the database access. I pushed back because I wanted to use Prisma, our standard ORM, the same one the rest of the company was using.
She argued that for the volume we were going to hit - about two million reconciliation rows per day - the ORM overhead would matter, and that raw SQL would be more performant and more explicit. We had a long discussion about it, and honestly we were both kind of dug in. What I ended up doing was suggesting we just run a bake-off - I built the same three queries both ways and ran them against a million-row test dataset. The Prisma version was about thirty percent slower on the heaviest query, but we found that with a couple of indices and one raw escape hatch for the worst offender, it came within five percent.
So we went with Prisma plus the escape hatch. We shipped on time, the service has been running fine for, I think, eight months now. Maya stayed on the project. I think the lesson for me was that having data ended the disagreement way faster than more arguing would have.
Suggested next question
"Tell me about a time you had to push back on a senior engineer or manager." (Tests the same skill from a different power dynamic.)