

I hadn’t heard about Copilot telling people how to activate Windows 11 without a license, though. So I’ll thank OP for that bit of levity.
Yes; even in late 2001 there was already broad agreement that “the terrorists have won”. The post-9/11 era saw a large expansion of executive power, deepened internal political division, an erosion of individual rights, and an upswing in fear and xenophobia. All of this decreased stability, and that’s exactly what the attacks were aiming for.
In the end the attacks only accelerated an ongoing process of decay, but they did so very effectively.
It’ll be marketed as Skyrim with all LLM text and end up as Oblivion with prefab text chunks.
Even disregarding the fact that current LLMs can’t stop hallucinating and going off track (which seems to be an inherent property of the approach), they need crazy amounts of memory. If you don’t want the game to use a tiny model with a bad quantization, you can probably expect to spend at least 20 GB of VRAM and a fair chunk of the GPU’s compute on the LLM alone.
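Back-of-envelope, the weights alone dominate; the model sizes and bit widths below are illustrative assumptions, not claims about any particular game:

    # Rough VRAM needed just to hold an LLM's weights:
    # (parameter count) x (bytes per parameter). KV cache and
    # runtime overhead come on top of this.
    def weight_vram_gb(params_billions: float, bits_per_param: int) -> float:
        return params_billions * bits_per_param / 8  # 1B params at 8 bits = 1 GB

    for params, bits in [(7, 16), (13, 16), (13, 8), (13, 4)]:
        print(f"{params}B @ {bits}-bit: ~{weight_vram_gb(params, bits):.1f} GB weights")
    # 7B  @ 16-bit: ~14.0 GB
    # 13B @ 16-bit: ~26.0 GB
    # 13B @ 8-bit:  ~13.0 GB
    # 13B @ 4-bit:  ~6.5 GB  <- "tiny model with a bad quantization" territory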
What we might see is a game that uses a small neural net to match freeform player input to a dialogue tree. But that’s nothing like full LLM-driven dialogue.
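A minimal sketch of that idea, assuming the sentence-transformers library for the embeddings (the model choice and dialogue lines are made up for illustration):

    # Match freeform player input to the closest hand-authored dialogue option.
    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("all-MiniLM-L6-v2")  # small; runs fine on CPU

    dialogue_options = [
        "Ask about the stolen amulet",
        "Accuse the merchant of lying",
        "Say goodbye and leave",
    ]
    option_embeddings = model.encode(dialogue_options, convert_to_tensor=True)

    player_input = "hey, where did that necklace go?"
    query = model.encode(player_input, convert_to_tensor=True)

    # Cosine similarity against each option; pick the best match.
    scores = util.cos_sim(query, option_embeddings)[0]
    print(dialogue_options[int(scores.argmax())])  # -> "Ask about the stolen amulet"

The tree stays hand-authored; the net only picks which branch the player meant, so it can’t hallucinate lore.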
That undersells them slightly.
LLMs are powerful tools for generating text that looks like something. Need something rephrased in a different style? They’re good at that. Need something summarized? They can do that, too. Need a question answered? No can do.
LLMs can’t generate answers to questions. They can only generate text that looks like answers to questions. Often enough that answer is even correct, if usually suboptimal. But they’ll just as happily generate complete bullshit, and to the model there’s no difference between that and a real answer.
They’re text transformers marketed as general problem solvers, because a) the market for text transformers isn’t that big and b) general problem solvers are what AI researchers have always been trying to create. They have their use cases, but certainly not ones worth the kind of spending they’re getting.