It all makes sense.
Google + Anthropic focused on pre-training, so you get to a great answer more quickly, but the capability ceiling is lower.
OpenAI focused on post-training, so great answers take longer, but you can push the models much further.
To be clear (getting a lot of confused comments), I obviously don't mean that any of these companies are solely focused on one thing.
OAI/Anthropic/Google all focus on both pre-training and post-training (obv there are stages within those, just simplifying here).
But it feels like OAI did a better job of post-training whereas Google/Anthropic did a better job of pre-training. They're all interlinked (better pre-training makes post-training easier/increases the ceiling for post-training, for example).
This manifests in how the models feel to work with day-to-day.
