Conversational AI War Begins. Who Will Be the Winner?

In a recent interview with Bloomberg, Microsoft CEO Satya Nadella revealed his ambition to challenge Google’s dominance in the search engine market by adding ChatGPT to Microsoft’s Bing search engine. Meanwhile, Google CEO Sundar Pichai unveiled Bard, a conversational AI service meant to take on ChatGPT. In addition, it was reported that ChatGPT attracted 100 million users within just two months of its launch. Global tech companies have been aggressively developing and launching AI solutions. Underscoring this intense competition, Forbes reported, “The new AI wars have begun.”

As AI develops at hair-raising speed, the “Turing Test” (a test of a machine’s ability to exhibit intelligent behavior equivalent to that of a human) now seems to have become meaningless: AI chatbots can already provide answers at a higher level than humans.

Take a Break! - The Chinese Room Argument, an Argument against the Turing Test

American philosopher John Searle proposed a thought experiment now generally known as the Chinese Room Argument, which holds that the Turing Test is not an adequate indicator of whether AI can think like humans. In the experiment, Searle (who does not know Chinese) imagines himself in a room, while a Chinese speaker outside passes him messages under the door. Searle has boxes of Chinese characters and a book of instructions; following the instructions, he selects an appropriate response and sends his reply back. As a result, the Chinese speaker outside the door would conclude that the person on the other side speaks Chinese, which is not true. The thought experiment shows that an AI system can pass the Turing Test and converse with humans without understanding human language at all. Searle argues that computers merely choose an answer based on context but do not understand the meaning of a question the way a human does. In conclusion, the experiment argues against the Turing Test: passing it does not mean that a computer has cognitive abilities equivalent to a human’s.

ChatGPT with Boundless Potential

Unlike existing chatbots, ChatGPT writes like a human because it has learned from extensive information on the Internet. Recently, ChatGPT proved competent enough to pass a medical licensing exam, law school exams, and a business management course exam, making it appear to have professional-level capabilities. In this regard, academic circles are concerned that many students will write or plagiarize theses using ChatGPT. Indeed, it was reported that some international school students in Korea were found to have written English essays using ChatGPT and received failing grades as a result.

ChatGPT cannot yet provide the latest information because its knowledge is limited to data it learned before 2021. To test its competence, we asked it a rather difficult question: its opinion on people abusing it for plagiarism or for medical purposes. ChatGPT gave the following answer:

What do you think about the concern that you can be abused for thesis plagiarism and for medical purposes?

As an AI language model, I do not have the ability to form opinions or hold beliefs. However, I understand that there are concerns about the potential misuse of AI language models like myself for academic plagiarism and medical purposes.

Regarding academic plagiarism, it is important to note that using an AI language model like myself to generate text for a thesis without proper citation or attribution would constitute plagiarism, which is an unethical and unacceptable practice. While I can certainly assist with research and provide information on various topics, it is ultimately up to the user to ensure that any text generated by me is properly cited and attributed.

As for medical purposes, AI language models can certainly be used to aid in medical research and diagnosis, but it is important to approach such applications with caution and careful consideration. Any medical decision or diagnosis should always be made by a qualified medical professional, and AI language models like myself should only be used as a tool to assist in the decision-making process. Additionally, any data used by AI language models for medical purposes must be carefully curated and validated to ensure accuracy and reliability.

Overall, while there are potential concerns with the misuse of AI language models, it is important to approach their use with responsibility and ethical considerations.
ChatGPT’s answer to the question, “What do you think about the concern that you can be abused for thesis plagiarism and for medical purposes?”

What do you think? Can you believe it took less than one minute to generate this clear answer? If ChatGPT can perform at this high level, then I don’t need to write anything else in the near future. One media outlet reported that in two years, about 90% of all content would be produced by AI.

But there are concerns that generative AIs like ChatGPT might exponentially increase the amount of “garbage data” online. ChatGPT’s writing ability is outstanding enough to deceive even experts, but there is a limit: it can only generate output from the limited data it has learned.

Big Tech AI War Triggered by ChatGPT … Competition Between Google’s Bard and Microsoft’s Bing

OpenAI’s ChatGPT uses a reward-learning model based on human feedback on its answers, so it can improve continuously as long as large amounts of training data and proper feedback are provided. This approach, Reinforcement Learning from Human Feedback (RLHF), trains a reward model on human preference judgments, and related reinforcement learning methods are also used for robot control, game AI, autonomous cars, and more. However, human feedback is essential, and output varies significantly depending on how the reward function is designed, which comes at a high monetary cost. Therefore, ChatGPT may obtain feedback from the learning results of other AIs, as AlphaGo did, to reduce time and cost.
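The reward-model step of RLHF can be sketched with the standard pairwise preference loss: given two candidate answers, the reward model is trained so the one humans preferred scores higher. Below is a minimal, self-contained illustration in plain Python; the function name and the example reward values are our own for illustration, not OpenAI’s code.

```python
import math

def preference_loss(r_chosen: float, r_rejected: float) -> float:
    """Pairwise (Bradley-Terry) loss used to train an RLHF reward model:
    -log(sigmoid(r_chosen - r_rejected)).
    The loss is small when the reward model already scores the
    human-preferred answer higher, and large when the ranking is wrong."""
    margin = r_chosen - r_rejected
    sigmoid = 1.0 / (1.0 + math.exp(-margin))
    return -math.log(sigmoid)

# Correct ranking (preferred answer scored higher) -> small loss.
good = preference_loss(2.0, 0.5)
# Wrong ranking (rejected answer scored higher) -> large loss.
bad = preference_loss(0.5, 2.0)
print(good, bad)
```

Training minimizes this loss over many human-labeled answer pairs, widening the score margin between preferred and rejected answers; the resulting reward model then guides the policy update, which is why reward-function design matters so much for the final output.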

As mentioned, Google unveiled its new chatbot, Bard, in Paris, France, on February 8th. Google has long been the AI technology leader among big global tech companies. However, on launch day, when Bard was asked, "What discoveries from the James Webb Space Telescope can I tell my 9-year-old about?", it incorrectly stated that the James Webb Space Telescope took the very first pictures of an exoplanet outside our solar system. The inaccuracy sent Google’s share price down almost 8%. Still, considering that Google changed the AI paradigm with AlphaGo, it will probably introduce a near-perfect version soon.

On February 7th, Microsoft launched a new version of its Bing search engine with ChatGPT built in. Microsoft had originally planned to unveil the service one month later but moved the announcement up, likely because Google had said the previous day that it would reveal its own conversational AI service, Bard, within the month. When a user asks a question, Bing answers as if conversing with the user. Following its earlier one-billion-dollar investment in OpenAI, Microsoft will invest another 10 billion dollars in March 2023. With ChatGPT built in, Bing will be able to answer questions through conversation and even provide consulting services. There are high expectations about how Google and Microsoft, the two major players in the search engine market, will change the landscape of the conversational AI market.

What Do You Think the AI Era Will Look Like in the Future?

The future generation will grow into AI natives who can learn from, converse with, and enjoy free time with AI. This also means obtaining knowledge will be much easier. A senior researcher at NIA said, “Humans have accumulated their own experiences directly or through the experiences of others delivered via the media. But AI services have changed the definition of knowledge by overhauling the human learning process.”

It seems AI can replace simple tasks (e.g., information retrieval or customer response at a call center) and complicated tasks (e.g., psychological therapy to treat PTSD) alike. ChatGPT can write a program on par with one created by a developer earning more than 150 thousand dollars a year. Maybe I should be concerned about my job.

Who do you think will win the AI war? Regardless of the winner, we should carefully look at how AI will change the world. I hope the way forward for the business will be more human-centered.