4 min read

The risk of ChatGPT

Would anyone ask a talented actor to act and talk convincingly like a skilled surgeon, and then hand him a scalpel and let him operate? It would seem stupid, wouldn’t it? Well, that is precisely what many people are doing with ChatGPT.

By Ricardo Baeza Weinmann

I think the main short-term risk of this artificial intelligence (AI) is not that it will replace a lot of people’s work, but that people will be convinced it does things it doesn’t do. Many, if not most, are using it as if it were a kind of Google, an intermediary between personal ignorance and the infinite supply of knowledge on the web, and that is a huge mistake.

ChatGPT’s goal is, and always has been, to simulate a human dialogue that feels real. It has never claimed to “tell the truth”, but rather to “say things that sound credible”, which is quite different. That is why it has no qualms about slipping in misinformation, or outright inventing facts, if doing so helps it appear more convincing in its discourse.

A user who knows a lot about the topic being “discussed” with ChatGPT will be in a position to evaluate whether what it says is true. For example, a programmer requesting lines of code will be able to quickly verify whether they work; a specialist will be able to see whether it is fabricating dates, articles or authors; a true fan will be able to detect inaccuracies about the life of his or her favourite artist. But how would a user who doesn’t know much about the subject evaluate that? How would they separate what is true from what is not in everything it says?

That is where I think the biggest danger of this type of AI lies: that it starts to be used massively by people who are unable to evaluate the accuracy of its output, who believe everything it says without any kind of questioning, and who even start to make important decisions based on that information.

Tools are never good or bad in themselves; what matters is the intention and skill of the user. A gouge in the hands of a great woodworker can create a work of art, while in the hands of an inexperienced ignoramus it can wreak havoc and even seriously injure someone in the process (and in the hands of a criminal it can become a bladed weapon with which to kill). Let us hope that we do not make this mistake with AIs, as the consequences of using them uncritically could be highly uncertain and even dangerous. Which, of course, is not the fault of the AI but of the ineptitude or malice of the user.
