What does humanity’s future look like, co-existing with Artificial Intelligence (AI)? A panel discussion at the ongoing Dubai Future Forum led to intense debate on the ethics of AI, specifically its implications for Intellectual Property (IP) protection, as experts spoke about the current state of AI development.
What they did appear to agree on was that we are at a tipping point – the impact AI will have on society could be compared to the first industrial revolution and the innovation of the steam engine, and how it rearranged social structures and job profiles.
William Hurley, an American tech entrepreneur and investor, felt that the negative predictions made by some were a cynical take on how the technology would impact society.
“I think there is a lot of unnecessary fear when it comes to AI and does not consider our ability to predict technological change. Humans are above all else adaptive,” he said.
However, the other two members of the panel discussion took a more cautious approach.
Making the right choices
Professor Pascale Fung, who works closely with AI development and is the director of the multidisciplinary Centre for AI Research at Hong Kong University of Science and Technology (HKUST), felt that much currently depends on how decision-makers, policymakers and developers shape the near future of AI development.
“I will say that AI will have ramifications beyond that of the steam engine. If humans make the right choices, then I am optimistic,” she said.
She noted that while hundreds of guidelines have been developed on the subject of AI ethics, implementing them is not an easy process, as “technology is not designed to include ethical challenges”.
She also contrasted the speed at which algorithms are developing with how quickly, or slowly, the laws and regulations governing them are implemented.
“When we are developing algorithms, we have to implement them so that they are safe and ethical. But we have seen models that have been released without the necessary testing,” she said.
However, she was also optimistic, commenting on how AI was fundamentally dependent on human testing to be successful.
“AI is human centric, it is impossible to release it fully without testing it as we are all testers of this AI,” she said.
Professor Hoda Alkhzaimi - research assistant professor at New York University Abu Dhabi and the founder and director of Emerging Advanced Research Acceleration for Technologies, Security and Cryptology research lab and centre (EMARATSEC) - also said that while AI can have many positive effects on day-to-day life, it also needs to be “nudged” to develop ethical systems.
“We stand to gain a lot economically by employing it in different fields. But at the same time, it worries me a little when I tend to test different algorithms for the use of different users because we are not addressing the other gaps - not the functionality gaps, but the aspect of having algorithms that can be pushed to a mass level when the trust is not very high,” she said.
Intellectual property
A popular ethical debate in the AI world is that of intellectual property (IP), with many artists this year raising the alarm against the use of their artwork for the creation of AI art.
However, Fung, who said that she teaches AI art, did not think it was a debate at the level of her artist community.
“Artists and designers are trained from prior work. I learnt to sketch when I was young from the prior work of artists, by copying the grandmasters. The Generative AI (artificial intelligence capable of generating text, images, or other media, using generative models) do exactly the same – they ingest prior work. There is a significant difference, though – AI won’t generate it without it being provided that work from a human being,” she said.
Alkhzaimi, however, did not see a parallel between artists learning from experts and AI being fed with collections of artists’ works to produce art.
“It is a concern that the original work is being used, we have to care about IP rights. Do we acknowledge all of the input that the Large Language Models (LLMs) use to produce the final result? If we do, then it makes sense, but we can’t use the artist’s work without giving them actual recognition,” she said.
Hurley, on the other hand, spoke about the practical application of IP rights from a legal perspective.
“Let’s look at what Japan did - from a regulatory point of view, they said that LLMs taking from artists’ work is not a copyright violation. There are hundreds of lawsuits that have been filed and there are different stages that an IP lawsuit has to go through in order to help. Many of these cases were lost because it was ruled to not be a copyright violation,” he said.
An AI artist among the attendees, however, spoke up to share her perspective on how she found Generative AI to be limited or biased.
“If I write ‘beautiful woman’ into a model, I get a Caucasian, blonde woman’s face,” she said.
However, the experts felt that this bias has lessened over the years, with more people getting involved at the development stage of AI.
They all agreed that while AI models needed to be made more ‘human-friendly’, the involvement of people from different walks of life – not just developers – would make the key difference in how the technology shapes up.
“The default set up is supposed to be diverse, not biased,” Alkhzaimi said.