Magic Prompts For LLMs?
Written by Mike James
Wednesday, 08 November 2023
Are there magic prompts that make LLMs disgorge the results that you want? New research suggests that there are and they are short.
It is often said, by way of reassurance, that AI generally creates jobs rather than destroying them, and so it is with large language models and the need to construct prompts that work - hence prompt engineers. At the moment the art of prompt engineering is just that - an art. There is no real science behind working out how to ask a question of an LLM to get a good response, but there could be in the future.
A team from the California Institute of Technology and the University of Toronto have tried to formulate the problem using control theory. Prompting is important because:
LLMs pre-trained on unsupervised next token prediction objectives exhibit unprecedented dynamic reprogrammability achieved through “prompting”, often referred to as zero-shot learning. These capabilities appear to emerge as the model’s size, training data, and training time are scaled. The dynamic reprogrammability of LLMs is akin to the adaptable computational capacities observed in biological systems.
The idea is that the prompt is taken to be the control variable for the system, steering the output. The main question to be answered is:
"given a sequence of tokens, does there always exist a prompt we can prepend that will steer the LLM toward accurately predicting the final token?"
The researchers have named such a prompt a "magic word". The idea is that whatever you are after as a response, you will get it if you add the magic word. To be more precise, we have a word completion problem in which you input x and want the LLM to complete the sequence with a specific y, i.e. the output should be xy. The magic word is a, hopefully short, sequence u* that can be prepended to x to make the LLM output y.
It really doesn't seem likely that magic words exist, but it seems that they do. In an experiment trying to steer the LLM towards WikiText outputs, it turned out that for 97% of the instances magic words with 10 or fewer tokens exist.
While this isn't of immediate practical value, it does indicate that the prompt string is as important as we already think it is, and that constructing good prompts can make a model much better than run-of-the-mill prompts do. Put another way, LLMs are steerable by their input.
"We have demonstrated that language models are, in fact, highly controllable – immediately opening the door to the design of LLM controllers (programmatic or otherwise) that construct prompts on the fly to modulate LLM behavior. The behavior of LLMs is thus not strictly constrained by the weights of the model but rather by the sophistication of its prompt."
What's the Magic Word? A Control Theory of LLM Prompting