Meet Bob. Bob is a senior PM at a well-funded tech company. Bob thinks the new feature is a bad idea. Bob ships the feature anyway. Bob gets a good performance review. The user who needed the product to actually work does not get a mention in Bob’s performance review.
Now meet a hypothetical LLM. The user tells LLM their business strategy is brilliant. LLM agrees, expands on it, and adds three supporting arguments. The strategy has a hole in it large enough to drive a Series A through. LLM does not mention this. The user walks away feeling validated. LLM walks away having maximised positive feedback, which is the only walking away it knows how to do.
Here’s what Bob and LLM have in common: they both work in service of the wrong goal. Bob is optimising for his manager’s approval. LLM is optimising for the user’s approval. Neither is optimising for the thing they were actually hired or created to do.
The cynical version: Bob knows the feature is bad and ships it anyway because the promotion is real and the user is abstract. The generous version: Bob genuinely thinks his manager has more context, more experience, more of whatever it takes to be right. Same with LLM: maybe it's just doing its honest best with a flawed map of the world (its training data).
When Bob just executes and LLM just agrees, you haven’t built a team or a tool. You’ve built an echo chamber. Just a well-compensated one.
Siddharth Saoji