by Doc – Owner, Founder, Remember That AI Is Not a Human And That Human Creativity Is Infinitely More Valuable, and Anyone Who Says Otherwise Needs Their Head Examined And Their Computer Taken Away
This was not written by AI, by the way.
As part of my job I work extensively with both Claude’s suite of tools and ChatGPT, and I’ve used both for light programming and for simple writing and summarization. I’m referring specifically to the web-based platforms, as I don’t use Claude Code or any of the related APIs.
The models I’ll be comparing are Claude’s Sonnet 4.5 and ChatGPT’s 5.2, which are the models available by default right now on most paid plans. I’m using Sonnet 4.5 because I consider it the best model Claude currently has, and it burns far less usage than 4.6 or the Opus models. ChatGPT 5.2 is far inferior to the older 4o and o1 pro models, but it’s what they recommend right now anyway. I haven’t messed with agentic AI, and I think that if you let a computer run your life for you, you’re insane.
Coding
I’m assuming you’re not doing intense coding. If you are, then I can’t help you. But if you’re doing light stuff or debugging, Sonnet 4.5 wins by a landslide. ChatGPT’s 5.2 is, to put it lightly, moronic, and basically all their past models have been as well. It makes frequent errors and often produces broken code. To tell you the truth, ever since I switched to 4.5 I’ve had barely a single bug in my smaller Python stuff. Oh sure, the o1 Pro model back in the day could handle programs of up to a few hundred lines, and that was acceptably good if you knew exactly what you wanted each process to look like. But these days my ideas for programs are a lot more complicated and sometimes need multiple programs working in tandem, and only Claude handles that. I’m not a programmer by any means (I “speak computer” around the office, but that’s about it), but I’m competent enough to know what a .py file is, how to install and run Python from a terminal, and how to precisely diagnose the problems in the 4,000-line code file Claude outputs. It’s like I speak Spanish, but gringo Spanish, not the real thing, and Claude accommodates that extremely well.
Anyway, Sonnet 4.5 is better than 4.6 because it burns through far less “usage” (Anthropic plans have 5-hour usage limits), but it’s also just… smarter? It tends to keep things simpler and work faster. It feels like a higher-quality model than the finicky and occasionally dumb 4.6 or Opus models.
Writing
It depends on your audience. Sonnet is extremely good at taking in large amounts of information (I’m pretty sure it has a longer message length and context window) and mimicking stylistic choices, but if you’re starting from scratch then GPT is alright. ChatGPT tends to sound like a 31-year-old who never grew out of using emojis in college papers and drives the same Prius he bought when they first came out, but at least it has an acceptably large vocabulary, and its default “voice” can be professional enough if you tell it to be. Still, most of its outputs have emojis for some reason. Do we need hieroglyphics to read? I thought we were done with that.
If you’re looking for a straight shooter, Sonnet’s better. Sonnet tends to write the way military guys speak: to the point and mission-oriented. I can tell that over the last year Anthropic has tried to tweak their models to be less aggressive, and I don’t like that. Still, if it’s a casual audience, GPT will usually be better.
Information
ChatGPT wins by a landslide here. I barely Google anymore for anything I expect somebody to have already produced content on. For example, where I used to google a Pokemon moveset, now I just ask ChatGPT. Claude can’t do that; if it web searches, it’s usually wrong. But about 80% of the time ChatGPT will have the specifics down pat, because it knows where to look for the info and interprets it correctly. It’ll make mistakes (it told me this morning that Articuno in Gen 9 has Calm Mind, which it most certainly does not), but hey, it’s better than Claude.
Reverse that if you’ve given the AI the information yourself and want answers about it. As part of my job I often plug extremely large amounts of info into a chat and ask the AI to do certain things or answer questions. Claude nails it. It’s extremely good at handling mistranscriptions and drawing contextual conclusions from the info I give it, which is extremely niche and not documented anywhere on the internet. ChatGPT isn’t, and simply can’t operate without a lot more context. I mean a lot more context: because of the relatively secretive nature of my work, I basically can’t give GPT all the info it needs to do a good job, so I have to fix its outputs myself, whereas Claude draws inferences with great discretion. When GPT doesn’t know if it’s right, it won’t draw an inference at all; it just admits to not knowing.
I’d encourage you to switch to Claude if that makes sense for your use case. Anthropic’s AI has been the best on the market for at least as long as I’ve been using it (a little over a year), and based on recent GPT releases I think it’s staying that way.
