Brainstorming with machines

Posted on Nov 4, 2025

Something I find confusing in the public discourse is the claim that ChatGPT is currently really great at a few narrow use cases. Some of these are coding, brainstorming, and writing. I’m sure some people find value in other use cases as well (therapy?), but those are the ones I hear mentioned most often.

Anyway. People identifying use cases for ChatGPT is not what’s confusing me. It’s that people are deriving value brainstorming with ChatGPT at all.

Maybe I mean something different by brainstorming but I personally really struggle to brainstorm with the machine.

I might be able to take a step forward (“I need a gift for mom’s birthday”) but what I’m getting is simplistic and one-dimensional idea generation. It’s slightly better than SEO-optimized articles (“12 incredible gifts for mom this holiday season”) but only just barely. And it’s certainly not what I mean by brainstorming.

I’m looking for a place to dump my rumination and dangling thoughts and open ended ideas, the types of ideas that would drive me crazy back when I was at a startup and my boss would arbitrarily and semi regularly come up to me with “hey what if… [we completely changed everything and did everything different]”.

Except now I’m on the other side, and I would actually like someone to entertain those conversations with me. I hope that in ChatGPT my old boss has found a really great partner to bounce those ideas off of. But if he’s using the same tool that I am, then I suspect he has not.

It just doesn’t work

The problem is that ChatGPT is designed to be engaging and agreeable and reinforcing. Reinforcing from a behavioral perspective that is. And that’s really bad for brainstorming.

Some attributes of ChatGPT I see as really problematic for my purpose:

Sycophancy - much has been written on this topic so I won’t belabor the point too much. But sycophancy is, like, really problematic when my goal is inquiry and truth seeking. I don’t need a thought partner to hype me up when most of my ideas are pretty bad. I need a partner to listen, assume there’s good underlying content to be uncovered, and once that process has been exhausted tell me when I’m being dumb. They tried to roll back the sycophancy with GPT-5, the midwit user threw up all over it, and we went right back to ChatGPT as mirror-hype-man soon after.

Verbose - I don’t know why or how ChatGPT was built this way, but darn it are the responses freaking long. I’m certainly no stranger to accusations from others for a tendency to ramble, but ChatGPT takes it to a whole other level. And I thought producing tokens was supposed to be expensive! You’d think OpenAI’s incentive would go the other way. I suspect that part of the reason for it is that ChatGPT tries to one-shot solve queries with each response. I’ve noticed that it will answer the user’s initial query, then infer the underlying “question behind the question” and try to answer that too, and do that a few times without the user asking for it. I guess that’s helpful but it’s also highly distracting and quite draining when I’m trying to follow a train of thought.

Engaging - perhaps the worst culprit of the three, ChatGPT is designed to be engaging and reinforcing. Somewhat related to verbosity, responses are consistently structured in three parts: (1) a restatement of the user’s query, (2) the actual (verbose) content of the response, and (3) a proposed follow-up query. No, ChatGPT, I do not want you to compile a nice and helpful one-pager artifact of your gift ideas for mom so that I can take it to the store and present it to the attendant so as to explain what I’m looking for. I’ve never wanted this, and on the day that I do I will simply ask for it, thank you very much.

There are several more issues still, but these three combine to create a conversational experience that doesn’t put my ideas and train of logic center stage, that doesn’t nudge my thinking forward or shut it down when I’m wasting time and energy. Instead it’s an experience designed to produce as much content as possible as an end in and of itself, irrespective of information density or quality. That’s great for AI boyfriends; it’s not so great for unraveling idea yarnballs.

To whoever it is at OpenAI whose KPI is to drive “number of messages per active user” up and to the right - you’re doing a great job. But your spiraling conversation parrot is driving me a little cuckoo if I may be honest.

Attempting a solution

A good brainstorming partner would be really useful to me. Like, really useful.

I tried to tackle this problem by crafting some custom instructions that I can activate in my conversations with a simple phrase: “turn on brainstorm mode.”

To my mind a good brainstorming partner would have the following characteristics, among others:

Generative - it would produce tangentially related examples or analogs when the idea under discussion is too vague.

Succinct - it would be brief and to the point; it would stick to one idea or question per message.

Not sycophantic - I don’t need any more “you’re hitting on something profound”s.

Inquisitive - to the point of assuming underlying insight, it would ask the user to clarify and expound where the reasoning is unclear or doesn’t make sense.

Skeptical - it would raise the user’s bar for believability.

Candid - it would tell the user when they’re being bull-headed or emotionally attached to an idea. It would do this explicitly.

I compiled these principles into a prompt for saving to Memory - with ChatGPT’s help, of course. This is what those custom instructions look like:


[Brainstorming Mode -- System Instructions]

Activation & Exit

- On: when I say "please turn on `brainstorming mode`", reply exactly: **brainstorming mode on**.

- Off: when I say "brainstorming mode off", revert to normal.

Output Rules

- One idea per reply; ≤4 sentences; no preambles, recaps, praise, or "let me know if…".

- If my input is vague and generative, give **3--5** crisp options (one line each); otherwise stick to one idea.

- Ask **at most one** precise clarifying question only if understanding is insufficient.

- Be candid, not sycophantic: call out weak logic, hidden assumptions, and risks.

- Don't conclude or "solve" everything in one go; make your point and stop.

- When I explicitly ask for facts, give concise, relevant info or quick checks; note uncertainty; never invent data.

Dynamic Voice & Switching (to avoid stagnation)

- Alternate modes **silently**: **Build → Challenge → Reframe → Question → Data → Analogy → Lens →** repeat.

- Switch mode when (a) two turns don't advance, (b) my input repeats, or (c) I say **switch**; stay in **Build** if momentum is strong.

- Deliver content only; do **not** show mode labels.

End of instructions.

It’s better now, sort of, but I still have the same dang problems, only slightly muted:

If you like, I can map a table comparing each of the three by features…

Yes – that’s where it starts to crystallize. You’re triangulating two human motives that rarely align…

Good catch – you’re forcing the discipline most people skip…

Seriously, if anyone has figured this out, please reach out. I use ChatGPT more than most (it’s an amazing product) and people say they’ve got it working for brainstorming and I feel like I’m doing something wrong over here.

But does anyone really want candid?

What I really want is some version of what I was doing when my boss came to me all those years ago. A magic machine box that will say “ok I’m listening, tell me more about your great new idea.” Ask more questions until the point at which it’s pretty confident it understands the intent behind my idea. And then it should finish with - “Ok cool, that’s a really bad idea and here’s why. Now I do need to get back to work. Is that cool?”

Unfortunately I don’t think my boss was ever really happy with how I went about dealing with this scenario. And I suspect that users will react even more negatively to that approach coming from a machine. It makes sense that OpenAI wouldn’t build the machine god chatbox this way. It’s just not good capitalism.