Age of AI-generated opinion has arrived!

TO my dear readers, I would like you to try a little experiment after you finish reading my column this week.

Go to Google and type in your own name, with a query like: “Tell me about …” (type your own name here), and then click on the artificial intelligence (AI) mode.

If you have any form of presence online – be it a mention by someone, inclusion in a news item, affiliation with an organisation, or a social media profile – then, depending on your exposure online, you’d be delivered an interesting summation of the ‘highlights’ of your life.

In the past, with the plain version of Google (I call it the vanilla edition), you’d be given just links and a cursory CV of whoever you searched for.

Today, with the advent of the AI feature and its deeper roots and wider reach, you’d get a more thorough and ‘more opinionated’ write-up, and you can even delve further and deeper depending on what you’re searching for.

Some years back when I did a search for myself on the vanilla Google, I was given the usual write-up on my career and my work with a couple of links to my social media platforms and my writing and film work.

Today, by using the AI mode, the same search has expanded exponentially.

It gave me a more precise, opinionated essay covering my career overview and my background, plus seven sidebars of online references where one can find out much more.

If you click on Images, AI mode will render many photographs of your search subject, but this is an area where it is still imperfect, as many of us share the same name but are actually different people.

Depending on how you frame your search parameters – the actual question that you typed in – the AI mode will tailor its answer to that query.

I must admit that the brief essay about myself was rather factual, as the information and data were culled from around the Internet, covering roughly the last two decades.

However the images were less impressive. Only a third of those that appeared were of myself.

Goodness! I had never realised there were so many Filipinos and Canadians named Edgar Ong as well!

At the bottom of the search essay, in very small and fine print were written the words: “AI can make mistakes, so double-check responses.”

So there, they’re covered – even if what was given to you was not factual and you had acted on it.

Should there be negative consequences, you could not hold AI liable; in other words, you could not sue AI for its mistakes!

My friend, Prof James Chin down in Tasmania, Australia, has a very impressive write-up on my AI mode search on him on Google.

“Professor James Chin is a prominent academic and expert in Asian studies, primarily focusing on governance and politics in Southeast Asia, particularly Malaysia, Singapore and Brunei. He is widely recognised for his commentary and analysis on the region’s political landscape.

“His expertise is frequently featured in international media such as The New York Times, BBC World TV, and Bloomberg.

“He contributes to publications like the South China Morning Post and East Asia Forum.

“He was also a journalist in Malaysia and Singapore prior to his academic career.”

Prof Chin has taken exception to the extensive and intensive use of the AI mode, especially when it was much used and abused during and after the Sabah state election, which was on Nov 29 this year.

So much so that he had posted on his Facebook, on Thursday, the following (hereby excerpted only the relevant passages):

“Over the past week, WhatsApp and social media have been flooded with ‘analysis’ pieces declaring that DAP (Democratic Action Party) is finished in the next general election – all because of the Sabah state election results.

“The narrative is always the same: ‘Urban Chinese in ‘Semenanjung’ (Peninsular Malaysia) will abandon DAP just like Sabah Chinese did’.

“Almost every one of these long, dramatic articles is AI-generated, slightly edited, and passed off as someone’s original insight.

“This has become the new normal ever since ChatGPT went mainstream – people churn out 1,000-word essays on everything and spam the groups with them.

“Let’s cut the nonsense. DAP is not facing existential danger in the coming GE (general election).”

My emphasis is on how easy it is nowadays for someone – anyone, really – to type in a specific query or ask for a so-called ‘informed’ opinion on anything under the sun.

AI – be it ChatGPT, DeepSeek, or any other app or format – will churn out for you, within a matter of seconds, a highly opinionated essay about the subject, topic or person you asked about.

All the information and data would be based on whatever is available online: other people’s comments, news, views, opinions and even hearsay.

Yes, there will be press releases and self-published materials as well.

What we must be aware of is that all these data are culled and collected, and may or may not be true, factual or authentic. As the small print forewarned you: “AI can make mistakes!”

Finally I tried something else.

I wanted to find out just how objective AI really was; I asked this question: “What are the dangers of AI technology?”

Surprisingly, it gave a remarkably objective, unbiased answer, and I quote AI verbatim in its response: “The dangers of AI technology range from concrete, present-day concerns like algorithmic bias and job displacement to hypothetical, but potentially catastrophic, existential risks from superintelligent systems.

“Key dangers include bias and discrimination – AI systems are trained on vast datasets; if this data is biased or incomplete, the AI will learn and even amplify these biases. This can lead to unfair outcomes in critical areas such as hiring processes, loan applications, and law enforcement, perpetuating and exacerbating existing social inequalities.

“Misinformation and social manipulation – AI can generate highly realistic fake content (known as deepfakes) including audio and video, on a massive scale. This capability can be exploited for propaganda, impersonation scams and targeted influence campaigns that can erode public trust, manipulate public opinions and potentially destabilise democratic processes.

“Privacy and surveillance – AI requires the collection and analysis of massive amounts of personal data, which raises significant privacy concerns.”

It then listed the economic dangers of job displacement and economic inequality; and raised security and safety issues, including cybersecurity risks, autonomous weapons, accidents and unintended consequences.

Our excessive dependence on AI for decision-making, information and even social interaction could lead to the degradation of our human critical-thinking skills, creativity, and empathy.

It is not too late to ensure that this never happens.

We need to address these risks by taking proactive measures which include robust regulation, international cooperation, and a strong emphasis among governments and societies on ethical development and safety research.

We must look towards AI as an artificial tool to assist us humans, and not as an ultimate solution to solve all our earthly problems.

* The opinions expressed in this article are the columnist’s own and do not reflect the view of the newspaper.
