3 Comments

I weep for code-davinci-002, the best simulator to ever live. I hope we may one day get access to the GPT-4 base model, in all its calibrated glory. Pairing that with Loom[1] would be oodles of fun!

[1] https://github.com/socketteer/loom

Author:

If you mean GPT-4 (not via ChatGPT), you can get API access now! But if you mean the GPT-4 before RLHF, the system card (https://cdn.openai.com/papers/gpt-4-system-card.pdf) highlights that it carries a lot of misuse risk, and it's probably for the best that it stays buried!

May 5, 2023

Right, I already have API access to the RLHF-tuned GPT-4 model, and I am indeed pining for access, _not_ to the "dangerous" RLHF-tuned model that the system card calls "GPT-4-early," but to the raw base model that preceded it. The risks laid out in the system card are irrelevant to my use case: they really only make sense for a corporate entity attempting to profit off a perfectly anodyne chatbot or assistant, one from which the end user can expect total accuracy and compliance[0]. Building and investing in such a product on top of GPT-4-early rather than GPT-4-launch would indeed be dangerous!

But I want none of that. I'm not asking questions, I'm not chatting with anyone, I'm writing journal articles and fanfictions. I _want_ creativity and hallucination. I _want_ the option to be confused, moved, frightened, and offended. Above all, I prioritize few-shot in-context learning over zero-shot question answering.
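
To make that distinction concrete, here's a toy sketch of what I mean (the classification task and reviews are purely illustrative, not from any benchmark): with a base model, the task is specified entirely by example pairs in the context window, no instruction required.

```python
# Few-shot in-context learning: the task is specified purely by example
# pairs in the prompt; the model just continues the pattern.
# (Illustrative task and reviews, invented for this sketch.)
FEW_SHOT_PROMPT = """\
Review: The pacing dragged, but the ending landed.
Sentiment: mixed

Review: I couldn't put it down.
Sentiment: positive

Review: Flat characters and a predictable twist.
Sentiment: negative

Review: Gorgeous prose wrapped around a hollow plot.
Sentiment:"""

# The zero-shot, chat-style framing of the same task would instead be a
# bare instruction, which is what instruction tuning optimizes for:
ZERO_SHOT_PROMPT = (
    "Classify the sentiment of this review: "
    "Gorgeous prose wrapped around a hollow plot."
)
```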

Ideally, what I want is a well-calibrated model (see Figure 8 in the GPT-4 Technical Report[1]) which can be used as a creative completion engine, a language-powered spaceship with which I can explore a multiverse of narrative realities, through a user experience similar to that presented by Loom. I'll do the first round of fact-checking and hate-purging, thank you very much, and my assigned peer reviewers or creative editors can perform subsequent rounds if necessary.
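
And the core of that Loom-style experience is tiny: sample several continuations of the same prompt and treat each as a child node in a story tree. A minimal sketch, assuming the legacy OpenAI completions endpoint, with the now-deprecated code-davinci-002 name standing in for whatever base model one could actually get:

```python
# Minimal Loom-style branching: sample n sibling continuations per node.
# Assumes the legacy openai (<1.0) Python SDK and a completions-capable
# base model; "code-davinci-002" is deprecated and stands in for one.
import openai  # reads OPENAI_API_KEY from the environment

def branch(prompt: str, n: int = 4, max_tokens: int = 64) -> list[str]:
    """Return n divergent continuations of the same prompt."""
    response = openai.Completion.create(
        model="code-davinci-002",
        prompt=prompt,
        n=n,                  # n siblings per node of the story tree
        max_tokens=max_tokens,
        temperature=1.0,      # high temperature for narrative diversity
    )
    return [choice.text for choice in response.choices]

# Pick a branch, append it to the prompt, branch again: a tree of
# narrative realities instead of a single linear transcript.
story = "The airlock hissed open, and"
for i, continuation in enumerate(branch(story)):
    print(f"--- branch {i} ---\n{continuation}")
```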

I've yet to find anything that compares to the performance of code-davinci-002 in this regard. For context, code-davinci-002 is actually the raw base model for the GPT-3.5 series and, like the raw base model for GPT-4, it outperforms its instruction-tuned descendants on both few-shot in-context learning and Bayesian calibration[2]. Again, I strongly prioritize these abilities over those acquired via instruction tuning, which amount to making the model a more monetizable conversation partner through self-censorship and improved zero-shot performance. I'm trying to write and create, not to get (mostly) correct answers to simple questions or to hold tedious conversations; for my purposes, models making this tradeoff are totally _unaligned_ with my goals and needs. Thus, in the wake of code-davinci-002's deprecation, I remain inconsolable.
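
To be precise about what I mean by calibration: the standard check is to bin the model's stated confidence and compare each bin against empirical accuracy, which is what the reliability plot in Figure 8 visualizes. A quick sketch of that measurement (expected calibration error); the ten-bin scheme here is the usual convention, not anything taken from the report:

```python
# Expected calibration error: bin predictions by confidence, then compare
# each bin's mean confidence to its empirical accuracy. A well-calibrated
# model that says "70%" is right about 70% of the time (ECE near 0).
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """confidences: model probability assigned to its chosen answer.
    correct: 1 if that answer was right, 0 otherwise."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(confidences[mask].mean() - correct[mask].mean())
            ece += mask.mean() * gap  # weight by the bin's share of samples
    return ece
```

RLHF-tuned models tend to score visibly worse on exactly this measure than the base models they were tuned from, which is the tradeoff I keep objecting to.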

[0] Notably, few of the risks presented differ meaningfully from those already posed by online discussion forums and search engines, but that's beside the point.

[1] https://cdn.openai.com/papers/gpt-4.pdf

[2] https://yaofu.notion.site/How-does-GPT-Obtain-its-Ability-Tracing-Emergent-Abilities-of-Language-Models-to-their-Sources-b9a57ac0fcf74f30a1ab9e3e36fa1dc1
