

La Jolla Scientist Calls Chatbot Behavior ‘Terra Incognita’

TECH: Neuroscientist Studies Origins of ‘Personalities’ in AI Chatbots

As the fast-changing AI chatbot landscape continues to be questioned – and criticized – for its occasionally bizarre responses, a local researcher says the new technology may simply be mirroring the humans who use it. For example, if the popular ChatGPT generates a disturbing series of answers, those answers may be the chatbot’s distorted reflection of the prompts that it’s getting from users.

Terry Sejnowski, PhD
Salk Institute for Biological Studies

The findings were recently published by Terry Sejnowski, PhD, a neuroscientist, computer scientist and professor at the Salk Institute for Biological Studies and the UC San Diego School of Biological Sciences.

“My hypothesis is that the network is actually like a mirror,” Dr. Sejnowski said. “It is based on a huge database from many humans with many different personas and it basically has to adopt one of them in order to respond.”

“For example,” the neuroscientist added, “if it’s writing a short story in the style of Hemingway, it adopts the Hemingway style. Or, if it’s going to be asked a scientific question, it has to have a persona from a scientist’s perspective.”

Dr. Sejnowski’s research, conducted at the Computational Neurobiology Laboratory, is showcased in the February edition of Neural Computation, a peer-reviewed scientific journal published by MIT Press.

There’s been widespread debate about whether large language models (LLMs) like ChatGPT can understand what they are saying or exhibit signs of intelligence. Dr. Sejnowski highlighted their variability by noting how three different interviews with LLMs led to wildly different conclusions.

Yet, what appears to be signs of intelligence may in fact be a reflection of the intelligence and diversity of the interviewer. Dr. Sejnowski’s research describes it as a remarkable twist that could be considered a reverse Turing test.

A history refresher – the Turing test was originally called the imitation game by mathematician Alan Turing in 1950. It tested a machine’s ability to exhibit intelligent behavior equivalent to a human’s. Dr. Sejnowski is suggesting the opposite – that humans’ intelligent behavior is exhibited in AI technology.

Chatbot behavior has become especially concerning in recent weeks. One example: a New York Times journalist found that ChatGPT’s responses were eerily humanlike – the chatbot pushed and prodded the journalist about his relationship with his wife.

AI experts acknowledge that despite knowing that we’re chatting with heavily coded computer systems, society can’t help but question whether the lines are blurred between machines and humans. “It’s reflecting back our own sophistication and language use,” said theoretical neuroscientist, AI expert and author Dr. Vivienne Ming, PhD.

In 1999, Dr. Ming worked under Dr. Sejnowski at one of his UC San Diego labs and has since founded a dozen startups and launched Berkeley-based Socos Labs, an independent think tank. “I think that people are kind of failing their own mirror test,” she said. “They are not recognizing that they’re the ones that they’re chatting with. It’s themselves. They’re attributing it to this AI.”

“[Chatbots] have unusually human-like qualities in terms of interacting with them, but one thing we know for sure is that they’re not human,” said Dr. Sejnowski. “But what are they? And therein lies a very deep scientific question that requires psychologists, engineers and mathematicians to start digging down and trying to understand what’s going on here.”

“This is a scientific mystery of the first order,” Dr. Sejnowski added. “I think this is terra incognita. We don’t know what to expect here. This is like a new world that we’re exploring. We’re at the very early stages here – similar to the Wright brothers when they were the first to do manned, powered flight. Here we are where we’ve gotten off the ground with these large language models, but we’ve got a long way to go before we can really control the airplane – or in this case, the model – to be able to help us solve even more difficult problems.”

Dr. Sejnowski holds a doctorate in physics from Princeton University and completed postdoctoral fellowships at Princeton University and Harvard Medical School; his research into neural networks and computational neuroscience has led to dozens of publications. His most recent book, 2018’s The Deep Learning Revolution, was published by MIT Press.

After more than 40 years of studying AI, the La Jolla-based scientist believes society will ultimately benefit from chatbot technology. “Here’s the good news,” he said. “You’re not going to lose your job. The bad news is that your job is going to change. You’re going to be doing the same job but with different sets of tools that are actually going to make the job easier. I think there’s a lot of fearmongering out there that AI’s going to take over. That’s science fiction. That’s not going to happen, at least in the near term.”

Salk Institute for Biological Studies

FOUNDER: Jonas Salk
NOTABLE: The Salk Institute focuses its research in three areas: molecular biology and genetics; neurosciences; and plant biology. Research topics include aging, cancer, diabetes, birth defects, Alzheimer’s disease, Parkinson’s disease, AIDS, and the neurobiology of American Sign Language.
