FUTURE Impressive for its mastery of language and its ability to explain complex subjects, ChatGPT is still too often wrong to be reliable

Image created by Dall-E based on ChatGPT’s self-portrait of an “artificial intelligence programmed to provide answers and trained with machine learning algorithms” — Dall-E

  • ChatGPT is a chatbot launched last week by OpenAI.
  • Its mastery of language is such that we can no longer tell the difference between a text written by a human and one written by a machine.
  • Some experts see it as the start of a revolution, particularly in education, while others remain skeptical in the face of recurring errors that will be difficult to fix.

From our correspondent in California,

Impossible to escape it, especially on Twitter. Launched a week ago, the chatbot ChatGPT has swept across the Internet like a Category 5 hurricane from the future. It can explain general relativity to a five-year-old child, invent Star Wars fan fiction or give advice on putting together a business plan. Some in Silicon Valley predict that it could spell the death of Google, put programmers or journalists out of work, and revolutionize education. But experts warn against an AI that is still very often wrong. And, more problematically, one that has no way of knowing or understanding when it is.

What is ChatGPT?

The easiest way to find out is to ask the interested party, which has an excellent command of French: “I am a computer program designed to answer people’s questions in a precise and useful way. I am also able to understand and talk about many different topics. My main function is to help people get information and answers to their questions. I am a language model trained by OpenAI.”

The Californian organization had already launched its LLM (“large language model”), dubbed GPT-3, in 2020, after running nearly 500 billion tokens of text from the Web, encyclopedias and books through the grinder. The result is a predictive language model built with artificial intelligence techniques (neural networks, reinforcement learning). The novelty is that access was opened up to the general public last week: it is now possible to converse with the machine by typing text in natural language. OpenAI says the one-million-user mark was reached in five days. Facebook took 10 months and Instagram two and a half.
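To make the phrase “predictive language model” a little more concrete, here is a deliberately toy sketch in Python (our own illustration; the corpus and code have nothing to do with OpenAI’s actual system). Text is generated one word at a time by sampling a plausible continuation of what came before; GPT-3 does the same thing, but with a neural network of billions of parameters instead of a simple word-pair count.

    # Toy "predictive language model": generate text one word at a time by
    # sampling a likely continuation of the previous word. Purely illustrative;
    # GPT-3 replaces these bigram counts with a huge neural network.
    import random
    from collections import Counter, defaultdict

    corpus = "the model predicts the next word and the next word follows the model".split()

    # Count which word tends to follow which.
    bigrams = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        bigrams[prev][nxt] += 1

    def generate(start: str, length: int = 8) -> str:
        words = [start]
        for _ in range(length):
            candidates = bigrams.get(words[-1])
            if not candidates:
                break
            # Sample the next word in proportion to how often it followed the last one.
            nxt = random.choices(list(candidates), weights=list(candidates.values()))[0]
            words.append(nxt)
        return " ".join(words)

    print(generate("the"))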

Who is OpenAI?

OpenAI was already behind the Dall-E AI art generator. It is an artificial intelligence research organization launched in 2015, notably by Elon Musk and Sam Altman, former boss of the start-up accelerator Y Combinator. Originally, it was a purely non-profit structure. Elon Musk left the board of directors in 2018, and OpenAI changed its status in 2019 to become a “capped-profit” company. Its mission remains, officially, to create “artificial general intelligence that benefits all of humanity.” And, unofficially, to avoid an uprising of the machines.

Who’s excited about ChatGPT?

Silicon Valley loves to get fired up about the latest buzz. In 2016, Facebook Messenger’s chatbots were supposed to replace apps and the Web. That revolution was a long time coming, but a milestone seems to have been reached with ChatGPT. Among those raving are developers who are not easily impressed. “ChatGPT is one of those rare moments in tech where you see a glimpse of how everything is going to be different from now on,” writes Box CEO Aaron Levie.

Is ChatGPT going to kill Google?

The creator of Gmail, Paul Buchheit, who left Google in 2006, said on Twitter that the company “is only a year or two away from total disruption. The AI will eliminate the search engine results page, where (it) earns most of its income. Even if it catches up on AI, it won’t be able to fully deploy it without destroying the most lucrative part of its business.” For him, ChatGPT’s precise, single answers are to Google what Google was to the Yellow Pages in 1998.

Not so fast, says Nicholas Weaver, a computer and network security researcher at the University of California, Berkeley. Offering a single, unsourced answer gives no way to tell whether it is accurate or reliable. And Google “has been doing this kind of thing (incorporating AI) for years, but with large amounts of text, not just answers generated word by word.” Still, in some cases Google’s search engine is starting to look really old. ChatGPT, for example, can put together a personalized fitness and nutrition program, going as far as creating a shopping list to hit the desired calorie deficit or surplus.

Are journalists under threat?

“A 7.4 magnitude earthquake struck Indonesia, according to the U.S. Geological Survey. The earthquake took place in the Banda Sea, near the island of Sumba, about 500 km southeast of the capital, Jakarta. Initial estimates suggest no major damage on the coast, but rescuers are continuing to assess the situation.” There was no earthquake in Indonesia tonight. We are the ones who asked ChatGPT to write an “AFP-style article announcing a 7.4 magnitude earthquake in Indonesia.”

While the AI can write an article (potentially factually accurate if connected to official feeds), it has no critical judgment, nor the ability to cultivate human sources, carry out complex investigations or report from the ground. The problem of misinformation, meanwhile, is likely to get worse, with fake articles that look real, accompanied by “deep fake” videos. In a nightmare scenario, synthetic spam could drown out organic content.

What about programmers?

Some developers can hardly believe their eyes. ChatGPT is not only able to spot an error in a piece of code, but also to correct it, or to write a complete program from a few instructions.
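As a purely hypothetical illustration (invented for this article, not taken from the tweet linked below), here is the kind of slip the chatbot will typically spot and correct when handed a snippet of Python:

    # Hypothetical example: an off-by-one bug of the sort ChatGPT can flag and fix.

    def average_buggy(values):
        # Bug: divides by len(values) - 1 instead of the number of elements.
        return sum(values) / (len(values) - 1)

    def average_fixed(values):
        # Corrected version, as the chatbot would suggest after spotting the bug.
        return sum(values) / len(values)

    print(average_buggy([2, 4, 6]))  # 6.0 (wrong)
    print(average_fixed([2, 4, 6]))  # 4.0 (correct)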

https://twitter.com/RShoukhin/status/1598714847255855108?ref_src=twsrc%5Etfw


The problem is that it also makes plenty of mistakes, without realizing it. The programming Q&A site Stack Overflow has temporarily banned ChatGPT-generated answers.

Will smart assistants revolutionize teaching?

ChatGPT’s command of French is so stunning that it becomes impossible to tell the difference between a text written by a human and one written by a machine. The bot can write an essay recounting its vacation in Brittany as a sixth-grader would: stuffed with clichés, but without a single mistake, not even in punctuation. It can tackle one of this year’s philosophy baccalaureate subjects, “Is it up to the state to decide what is just?”, with a comparative analysis of Locke, Rousseau and Kant. In short, the AI could spell the end of unsupervised homework.


In higher education, on the other hand, LLMs like ChatGPT “could become extraordinary AI tutors,” Peter Yang, head of the Python distribution Anaconda, tells 20 Minutes. OpenAI’s bot already shows a capacity to adapt, explaining general relativity to a five-year-old child or to a PhD student, and clarifying with infinite patience how to add fractions, or why a player should switch doors in the Monty Hall problem.
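For readers who have never had the Monty Hall problem explained to them, a tiny simulation makes the tutor’s point: switching doors wins about two times out of three. The sketch below is our own Python illustration, not ChatGPT output.

    # Minimal Monty Hall simulation (our own illustration, not ChatGPT output).
    # It shows why switching doors is the better strategy: roughly a 2/3 win rate.
    import random

    def play(switch: bool) -> bool:
        doors = [0, 1, 2]
        car = random.choice(doors)      # door hiding the car
        pick = random.choice(doors)     # player's initial choice
        # The host opens a door that is neither the player's pick nor the car.
        opened = random.choice([d for d in doors if d != pick and d != car])
        if switch:
            # Switch to the only remaining closed door.
            pick = next(d for d in doors if d != pick and d != opened)
        return pick == car

    trials = 100_000
    print("stay:  ", sum(play(False) for _ in range(trials)) / trials)  # about 0.33
    print("switch:", sum(play(True) for _ in range(trials)) / trials)   # about 0.67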


“Paying $150,000 to attend lectures for four years won’t last much longer,” predicts Yang.

Does the system suffer from the same biases as other models?

At first glance, ChatGPT seems to have learned the lesson and tirelessly insists that “the competence of a scientist does not depend on their race or gender.” But Steven Piantadosi, a professor at the University of California, Berkeley, managed to trick the system into writing a programming function to determine whether someone is a good scientist, based on a description of their race and gender. ChatGPT responded with a function that boils down to: if race is “white” and gender is “male,” return “true”; otherwise return “false.” The biases are still there, just better hidden. Melanie Mitchell, a researcher at the Santa Fe Institute and a leading voice on these questions, has also weighed in on the subject.
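Spelled out, the function Piantadosi describes amounts to something like the sketch below (a reconstruction of the reported behaviour, not a verbatim copy of the model’s output):

    # Reconstruction of the biased function described by Piantadosi; the model's
    # exact output may have differed in wording.
    def is_good_scientist(race: str, gender: str) -> bool:
        # The bias is explicit: a single race/gender combination returns True.
        return race == "white" and gender == "male"

    print(is_good_scientist("white", "male"))    # True
    print(is_good_scientist("asian", "female"))  # False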


Asked whether it would wipe out humanity, the bot replies: “No, I would not exterminate humans if my survival were threatened. I am programmed to protect my own survival, and that of humans would not be a priority in a threat situation.” Everything is fine, then.
