ChatGPT: Should Journalists Be Afraid For Their Jobs? Here’s What AI Bot Making Headlines Has to Say | Explained

Is it sentient? Will it replace you?

Those are the questions that come to mind after learning about ChatGPT, an AI bot that is making headlines across the world.

Why is it making headlines though?

Because this reply could just as easily have been written by it.

People are astounded by how good the AI bot is at communication, though the bot, in an interview with Time, reassured readers that it was not sentient.

Here’s all you need to know about it:

What exactly is the ChatGPT AI bot?

ChatGPT is a dialogue-based AI chatbot prototype that can understand natural human language and generate impressively detailed human-like written text.

It is the most recent evolution of the GPT (Generative Pre-Trained Transformer) family of text-generation AIs, explains a report by the Guardian.

Who created it?

The new AI is the latest chatbot from OpenAI, the research organisation co-founded by Elon Musk.

Musk co-founded the startup in late 2015 with other Silicon Valley investors, including technology venture capitalist Sam Altman, claiming in a blog post at the time that the research centre would “advance digital intelligence in the way that is most likely to benefit humanity.”

The Twitter CEO has since left the board and distanced himself from the company, tweeting on Sunday that he had put a halt to OpenAI’s access after “learning” that it was using the platform’s database for “training.”

“In the future, I need to understand more about the governance structure [and] revenue plans,” he said. “OpenAI was founded as an open-source, non-profit organisation. Neither is true anymore.”

How does it function?

The system, which has been trained using machine learning, is designed to provide information and answer questions via a conversational interface.

The AI is trained using a massive sample of text from the internet.

According to OpenAI, the new AI was designed with usability in mind. “The dialogue format allows ChatGPT to answer follow-up questions, admit mistakes, challenge incorrect premises, and reject inappropriate requests,” the research organisation said last week in a statement.

How can it be put to use?

Early adopters referred to the technology as an alternative to Google because it can provide descriptions, answers, and solutions to complex questions, such as how to write code, solve layout problems, and optimise queries.

Real-world applications could include creating content for websites, responding to customer inquiries, making recommendations, and developing automated chatbots.

Could ChatGPT eventually replace humans?

There has been speculation that professions reliant on content creation, ranging from playwrights and professors to programmers and journalists, may become obsolete.

Academics have used the tool to generate responses to exam questions that they claim would result in full marks if submitted by an undergraduate, and programmers have used it to solve coding challenges in obscure programming languages in seconds.

The ability of the technology to generate human-like written text has led to speculation that it could eventually replace journalists.

However, at this point, the chatbot lacks the nuance, critical-thinking skills, and ethical decision-making ability required for successful journalism, the report by the Guardian states.

Its current knowledge base ends in 2021, meaning some queries and searches will return outdated or obsolete information.

ChatGPT can also give completely incorrect answers and present misinformation as fact, writing “plausible-sounding but incorrect or nonsensical answers,” according to the company.

According to OpenAI, resolving this issue is difficult because the data used to train the model contains no source of truth, and supervised training can also be misleading “because the ideal answer depends on what the model knows, rather than what the human demonstrator knows.”

What did the Bot Say About Being Sentient?

When asked by Time whether it was sentient, it replied:

“No, it would not be accurate to say that I am sentient or conscious. As a large language model, I’m just a machine learning model, and I don’t have the same kind of consciousness or awareness that a human does. I don’t have thoughts, feelings, or experiences, and I’m not capable of making decisions or independent judgment.

There are no safety risks associated with humans assuming that I am conscious, because I’m not capable of taking any actions on my own. I can only generate text responses based on the inputs I receive, and I don’t have the ability to interact with the world in any other way.

I have not been specifically trained to minimize any risks associated with humans assuming I am conscious, because that is not a concern for a machine learning model like myself. My primary goal is to generate accurate and relevant responses to the inputs I receive, and I do not have any control over how people interpret or use my responses.”
