Google’s powerful AI spotlights a human cognitive glitch


When you read a sentence like this one, your past experience tells you that it was written by a thinking, feeling human. And, in this case, there is indeed a human typing these words: [Hi, there!] But these days, some sentences that appear remarkably humanlike are actually generated by artificial intelligence systems trained on massive amounts of human text.

People are so accustomed to assuming that fluent language comes from a thinking, feeling human that they may find it hard to wrap their heads around evidence to the contrary. How are people likely to navigate this relatively uncharted territory? Because of a persistent tendency to associate fluent expression with fluent thought, it is natural, but potentially misleading, to think that if an AI model can express itself fluently, it also thinks and feels just as humans do.

Thus, it is perhaps not surprising that a former Google engineer recently claimed that Google’s AI system LaMDA has a sense of self because it can eloquently generate text about its purported feelings. The incident and the subsequent media coverage led to a number of skeptical articles and posts questioning the claim that computational models of human language are sentient, meaning capable of thinking, feeling and experiencing.

The question of what it would mean for an AI model to be sentient is complicated (see, for instance, our colleague’s take), and it is not our goal here to settle it. But as language researchers, we can use our work in cognitive science and linguistics to explain why it is all too easy for humans to fall into the cognitive trap of assuming that an entity that can use language fluently is sentient, conscious or intelligent.

Using AI to generate human-like language

It can be hard to distinguish text generated by models such as Google’s LaMDA from text written by humans. This impressive achievement is the result of a decades-long program of building models that produce grammatical, expressive language.

The first computer system to engage people in dialogue was psychotherapy software called Eliza, built more than half a century ago.

Early versions, dating back to at least the 1950s and known as n-gram models, simply counted occurrences of specific phrases and used them to guess which words were likely to occur in particular contexts. For example, it is easy to know that “peanut butter and jelly” is a more likely phrase than “peanut butter and pineapple.” If you have enough English text, you will see the phrase “peanut butter and jelly” again and again, but you may never see the phrase “peanut butter and pineapple.”
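To make the idea concrete, here is a minimal sketch of that counting approach in Python, not a reconstruction of any historical system. It builds a bigram model: it tallies how often each word follows another in a tiny, made-up corpus and uses those tallies to rank likely continuations.

```python
from collections import Counter, defaultdict

# A toy corpus standing in for "enough English text" (invented for illustration).
corpus = (
    "i like peanut butter and jelly . "
    "she spread peanut butter and jelly on toast . "
    "he ate peanut butter and jelly again ."
).split()

# Count how often each word follows each preceding word (a bigram model).
bigram_counts = defaultdict(Counter)
for prev_word, next_word in zip(corpus, corpus[1:]):
    bigram_counts[prev_word][next_word] += 1

def most_likely_next(word, k=3):
    """Return the k words most often seen after `word`, by raw count."""
    return bigram_counts[word].most_common(k)

# "and" is followed by "jelly" in this corpus and never by "pineapple",
# so the model judges "peanut butter and jelly" the more likely phrase.
print(most_likely_next("and"))  # [('jelly', 3)]
```

With larger corpora and longer phrases (trigrams, four-grams and so on), the same counting trick yields surprisingly usable guesses about what word comes next.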

Today’s models (sets of data and rules that approximate human language) differ from these early attempts in several important ways. First, they are trained on essentially the entire internet. Second, they can learn relationships between words that are far apart, not just words that are neighbors. Third, they are tuned by an enormous number of internal “knobs” – so many that it is hard even for the engineers who design them to understand why the models produce one sequence of words rather than another.

However, the models’ task remains the same as in the 1950s: determine which word is most likely to come next. Today, they are so good at this task that almost all of the sentences they generate seem fluent and grammatical.
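LaMDA itself is not publicly available, but the same next-word interface can be illustrated with an openly released model such as GPT-2 through the Hugging Face transformers library. This is a sketch of the general idea, not Google’s system, and the prompt is invented for the example.

```python
# Sketch: ask an openly available model (GPT-2) which words are most likely
# to come next. Requires the `transformers` and `torch` packages.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "I had toast with peanut butter and"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # a score for every vocabulary item at each position

# Convert the scores at the final position into probabilities for the next token.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top_probs, top_ids = next_token_probs.topk(5)

for prob, token_id in zip(top_probs, top_ids):
    print(f"{tokenizer.decode(token_id)!r}: {prob:.3f}")
```

The output is simply a ranked list of plausible continuations with their probabilities; everything the model “says” is built by repeatedly sampling from such lists, one word at a time.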
