I'm not a member of the Science Fiction Writers of America. As I'm not American, it doesn't feel appropriate, even though I appreciate they are infinitely saner and more welcoming than the Leader of the Free World at the country's helm, and I possibly have sufficient publishing credits to join. I'm not even much of an SFWA-watcher, but even I've been aware of the hokey-cokey the SFWA have danced over the issue of Large Language Models (LLMs) and Nebula nominations in the last few days.
Quoting Jason Sanford's excellent Genre Grapevine column, on the aftermath of the SFWA allowing a degree of LLM-created content:
"Wow, what a day this was for SFWA and the Nebula Awards. After only a few hours of hearing complaints from members, SFWA undid the rule change and will not NOT allow LLM-created or partially created works to be considered for the Nebulas."
Now I, as an example of carbon-based, wetware-heavy, natural intelligence, share Jason's refusal to use LLMs. But with me, it's less a principled stand and more a cocktail of inertia, unwillingness to pay subscription fees, and a Luddite suspicion of new technology, even when that technology allows me to, say, send out an average of a story submission a day to mainly overseas publications for free, using a rather scattershot strategy and philosophy of 'something will stick' that would cripple me financially if I had to do everything by hard copy and snail mail.
But a bit of me is wondering whether the SFWA's original thinking – that LLMs should be allowed to contribute – didn't have some merit.
What is it we're objecting to, here? Surely it's not so much the act of stringing one word after another (Christ, those act twos go on forever... when will this middle end!?) as the human judgement we lay over the top. Is this a compelling story? Does it work? Are the characters' actions and decisions plausible? When should I reveal the secret at the heart of the tale? What's redundant? How can I tighten it? That's what we, as writers, are frightened of, I think: having a machine beat us at the line edit.
So, what's the issue with allowing LLM-generated content to be incorporated, if it's a human deciding whether and how to work it in? If it's raw material for us to shape and hone? The final words are still my decision.
Let me offer a reductio ad absurdum: what if an LLM creates a new word, a neologism? Does that mean I then cannot adopt that word? Even if it were the bon mot for a particular situation, the best expression for the characters and the scenario I'm the architect of, that single word would be verboten? Surely not.
But, I hear you cry, that would never happen, because LLMs merely take the raw material that's out there on the interweb already. They cannot create, not in its truest, fullest sense. They cannot, by whatever digital alchemy, cook up a completely new word. (Irony warning: I asked Google whether LLMs can create a new word, and the AI overview was: "Yes, a Large Language Model (LLM) can absolutely create new words by combining existing parts, blending concepts, or generating novel combinations of sounds/letters", but I was thinking less of portmanteauing (hey, is that a word?) existing words, and more of creating an ur-word from the ether. I'm still not convinced that's in their wheelhouse.)
It strikes me that if we really believe LLMs can only ever average out their range of inputs, improving on mediocrity but never coming close to the best, then we have little to fear. If we back ourselves as authors, we'll always beat the machines.
But what if the machines can use our best as a jumping-off point, to go places that we can't even dream of? What then?
One of the other hats I wear is as a human resources professional. I've always believed HR people should focus on upskilling the business to the point where the value they add diminishes and the business can run as efficiently and effectively without them. They should continually aim to make themselves redundant, mumbling under their breath, 'My work here is done'.
It's the same with LLMs. If we think there's even a possibility LLMs may produce better outcomes for readers, then we owe it to readers to give the LLMs the best possible chance of doing so, by sharing the best of what we produce for the models to learn from. We don't have weavers in cottages any more, because factories in China do it better. If we're professional, rather than hobby, authors, the logic's the same.
So, my Christmas message to you is: lean into the future. Give the machines the best of your creation so that they can give us the best of theirs. God knows they're going to take it anyway...
Happy Christmas.
My Thoughts are with You. Your Thoughts are with the Authorities for Calibration Against Societal Norms
Meet a man mistaken for a robot, a robot which learns the meaning of irony the hard way, a Frankenstein’s monster with a future in tailoring, a talking cat, a talking car, several time travellers, and a host of other characters.
Award-nominated science fiction and slipstream author Robert Bagnall’s second anthology of twenty-four stories, variously bleak, funny, bleakly funny or – very occasionally – optimistic.

2084. The world remains at war.
In the Eurasian desert, twenty-year-old Adnan emerges from a coma with memories of a strictly ordered city of steel and glass, and a woman he loved.
The city is the Dome, and the woman... is Adnan's secret to keep.
Adnan learns what the Dome is, and what his role really was within it. He learns why everybody fears the Sickness more than the troopers. And he learns why he is the only one who can stop the war.
Persuaded to re-enter the Dome to implant a virus that will bring the war machine to its knees, Adnan lets the resistance believe he is returning to free the many – but really he wants to free the one.
24 0s & a 2
