Humans should also be prompted properly


The other day I posted a few lines on Facebook.

My intention was to point out something that could be called perspective: that the same matter can be seen from several angles.

Specifically, it was about “Store Bededag” and the many other more or less official holidays that spring offers in Denmark. And there are a lot of them.

My point was that for a self-employed person who works from home, these could well be perceived as interruptions. As a kind of disturbance.

There were many reactions, as it's called these days, to my post.

Some stated that the many mini-holidays in spring were a welcome break from life as a wage earner in the hamster wheel.

And yes. That was indeed a different perspective from the one I myself had presented. And a perspective I can easily relate to, even though I am not self-employed. I can put myself in other situations if I pay attention to it.

There were also a number of other reactions of a slightly different kind.

They were reactions that reminded me of the way the large language models work.

Because they were reactions that took fragments of what I had posted and worked further with those fragments. And almost started to deduce things other than what I had actually written.

When the language models do that, we say they “hallucinate”.

It doesn't sound nice to say that people hallucinate when they perhaps read a little too much into something you write, or interpret a little too much out of it.

But these are clearly examples of what happens when you do not treat other people’s words in a disciplined manner.

Then it often becomes something close to free association.

You can reduce the “risk” of a conversation being characterized by too much free association, just as you can give it direction and context.

This kind of thing is no guarantee that the conversation won’t slip off the track and end up in a completely different place than it started. But you reduce the risk of that.

But also remember this:

Ordinary everyday conversation depends precisely on the other party contributing. And we all tend to fill a conversation with our own interpretations.

The problem arises when people conclude or infer something about the original text based on their own interpretation of it - without being sufficiently aware that this is happening.

If you, as a human, still want to retain some advantages over artificial intelligence for a while, then I would recommend that you pay attention to how you relate to other people’s words.

Because otherwise you run the risk of becoming a mediocre language model or a slightly advanced, but rather severely biased “auto-complete.”

It is at least as important, however, that you, as the person who starts a debate or posts some text, think a little about your wording.

Just as when talking with ChatGPT, it is important to set the framework for the conversation clearly from the start.
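
For the ChatGPT half of that comparison, the framework is something you can literally write down before the conversation begins. Here is a minimal sketch, assuming the official openai Python package and an API key in the environment; the model name and the wording of the prompts are only my illustration, not something from the post:

```python
# A minimal sketch of "setting the framework from the start" when talking to a model.
# Assumes the official openai Python package and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; use whichever model you have access to
    messages=[
        {
            "role": "system",
            # The framing: direction and context, stated before the conversation starts.
            "content": (
                "You are discussing a short Facebook post about Danish spring holidays. "
                "Stick to what the post actually says; do not infer intentions "
                "that are not stated in the text."
            ),
        },
        {"role": "user", "content": "Summarise the point the post is making."},
    ],
)

print(response.choices[0].message.content)
```

The system message plays the same role as a well-worded opening post: it tells the other party what the conversation is about before any interpretation begins.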

So both the sender of a message and the recipient of it have a responsibility for a successful conversation.

And this applies regardless of whether you are talking to a human or a machine.