Asian Scientist Magazine (Oct. 13, 2022) — Devesh Narayanan was in Israel when he began to feel the first stirrings of frustration. In 2018, in the third year of his engineering degree at a university in Singapore, Narayanan joined an overseas entrepreneurship program that sent him to Israel to work on drone defense technologies.
“These programs tend to be very gung-ho about using technology to save the world,” said Narayanan in an interview with Asian Scientist Magazine. “It always felt a little empty to me.”
As he worked on the drones, Narayanan found himself growing increasingly concerned. The moral implications of the things he was doing seemed to be masked by the technical language of the detached instructions he received from his supervisors.
“You would get technical prompts instructing that the drone do things like ‘engage in these coordinates’,” he recalled. “It sounds like a technical requirement, when it is really about getting the drones to fight in hostile territories without being caught. But at that level of technology design, the moral and political considerations are kind of hidden.”
The experience made Narayanan realize how easy it could be for an engineer, caught up in solving a technical problem, to overlook the moral and political questions of their work.
Upon discovering that the questions he had been asking were not part of any engineering syllabus, Narayanan turned to moral philosophy textbooks and classes for answers. That curiosity has now led Narayanan to fully focus on the ethics of technology. As a research assistant at the National University of Singapore’s Centre on AI Technology for Humankind (AiTH), he investigates the ethics of artificial intelligence (AI) and what it means for AI to be ethical.
AiTH is just one of the many places in Asia where researchers are trying to understand how to make AI responsible and what happens when it is not.
What it means to be ethical
From the Hippocratic Oath to the debates about embryonic stem cells and today’s concerns about data privacy and equity in vaccine delivery, scientific developments and ethics have always gone hand in hand.
But what does it mean for technology to be ethical? According to All Tech is Human, a Manhattan-based non-profit organization that aims to foster a better tech future, responsible technology should “better align the development and deployment of digital technologies with individual and societal values and expectations.” In other words, responsible technology aims to reduce harm and increase benefits to all.
As technology continues to shape human societies, AI is driving much of that change. Often unseen yet ubiquitous, AI algorithms drive e-commerce recommendations and social media feeds. These algorithms are also increasingly being integrated into more serious matters such as the justice and financial systems. In early 2020, courts in Malaysia began testing an AI tool for speedier and more consistent sentencing. Despite concerns voiced by lawyers and Malaysia’s Bar Council about the ethics of deploying such technology without sufficient guidelines or an understanding of how the algorithm worked, the trial went ahead.
The government-developed tool was trialed on two offences, drug possession and rape, and analyzed data from cases between 2014 and 2019 to produce a sentencing recommendation for judges to consider. A report by Malaysian research organization Khazanah Research Institute showed that judges accepted a third of the AI’s recommendations. The report also highlighted the limited five-year dataset used to train the algorithm, and the risk of bias against marginalized or minority groups.
The use of decision-making AI in other contexts, such as approving bank loan applications or making clinical diagnoses, raises a similar set of ethical questions. Which decisions can be made by AI, and which shouldn’t be? Can we trust AI to make those decisions at all? Since researchers argue that machines themselves lack the ability to make moral judgements, the responsibility falls to the human beings who build them.
Making moral machines
The stakes of leaving such decisions up to AI can be monumental. Dr. Reza Shokri, a computer science professor at the National University of Singapore, believes that AI should only be used to make critical decisions if they are built on reliable and clearly explainable machine learning algorithms.
“Auditing the decision-making process is the first step towards ethical AI,” Shokri told Asian Scientist Magazine, adding that AI systems can have grave consequences if they are built on foundations and algorithms that are unfair or biased.
Shokri explained that bias often gets embedded in an algorithm when it is trained. Once supplied with training data, the algorithm extracts patterns from the data, which is then used in making predictions. If, for any reason, certain patterns are more dominant than others at the training stage, the algorithm might weigh the dominant data samples with more importance and ignore the less represented ones.
“Now imagine if these ignored patterns are the ones that apply to minority groups,” Shokri said. “The trained model would function poorly and less accurately on data samples from minority groups, leading to an unintended bias against them.”
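Shokri’s description of how under-representation skews a model can be sketched in a few lines of Python. The data, the groups and the deliberately crude “majority label” model below are all invented for illustration, not drawn from any real system:

```python
from collections import Counter

# Invented toy data: each sample is (group, label).
# Group "A" dominates the training set and mostly carries label 1;
# the minority group "B" mostly carries label 0.
train = [("A", 1)] * 900 + [("A", 0)] * 100 + [("B", 0)] * 45 + [("B", 1)] * 5

# A deliberately crude "model": predict whichever label was most
# common during training, ignoring group membership entirely.
majority_label = Counter(label for _, label in train).most_common(1)[0][0]

def predict(group):
    return majority_label

def accuracy(samples):
    return sum(predict(g) == y for g, y in samples) / len(samples)

# Held-out samples that mirror each group's label distribution.
test_a = [("A", 1)] * 9 + [("A", 0)] * 1
test_b = [("B", 0)] * 9 + [("B", 1)] * 1

print(accuracy(test_a))  # 0.9 -- works well for the dominant group
print(accuracy(test_b))  # 0.1 -- fails for the underrepresented group
```

The aggregate accuracy looks respectable, which is exactly why the per-group auditing Shokri advocates matters: the harm stays invisible until the evaluation is broken down by group.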
For example, Twitter famously drew controversy in 2020 when users discovered that its AI-based image cropping algorithm preferred to highlight the faces of white people over those of people of color in thumbnails, effectively showing more white people on users’ feeds. A 2021 study by Twitter of over 10,000 image pairs later confirmed this bias.
Getting rid of the jargon
Given everything that is at stake with AI, numerous organizations have attempted to come up with guidelines for building fair and responsible AI, such as the World Economic Forum’s AI Ethics Framework. In Singapore, the Model AI Governance Framework, first launched in January 2019 by Singapore’s Personal Data Protection Commission, guides organizations in ethically deploying AI solutions by explaining how AI systems work, building data accountability practices and creating transparent communication.
But for Narayanan, these discussions on AI ethics mean little if they are not grounded in defined terms, or if there isn’t a proper explanation for how they should be implemented in practice.
These frameworks “currently exist at an abstract conceptual level, and often propose terms like fairness and transparency—ideas that sound important but are objectionably underspecified,” said Narayanan.
“If you don’t have a sense of what is meant by fairness or transparency, then you just don’t know what you’re doing,” he continued. “My worry is that people end up building systems they call fair and transparent, but are biased and harmful in all the same ways they always have been.”
Shokri also echoed the need for clear definitions. “In the case of fairness, we need a clear description of the notion of fairness that we want to satisfy. For example, does fairness mean we want the outcome of an algorithm to be similar across different groups? Or do we want to maximize the performance of the algorithm on an underrepresented group?” said Shokri. “When the notion of fairness is clear, then data processing and learning algorithms can be modified to respect such notions.”
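The two notions Shokri contrasts can be made concrete with a toy loan-approval example. Every number below is invented for illustration; “demographic parity” is one standard name for the first notion:

```python
# Invented decisions from a hypothetical loan-approval model.
# Each entry is (group, predicted_approval, truly_creditworthy).
decisions = [
    ("majority", 1, 1), ("majority", 1, 1), ("majority", 1, 0),
    ("majority", 0, 0), ("majority", 1, 1), ("majority", 0, 1),
    ("minority", 0, 1), ("minority", 1, 1),
    ("minority", 0, 0), ("minority", 0, 1),
]

def rows(group):
    return [d for d in decisions if d[0] == group]

# Notion 1 (demographic parity): are approval rates similar across groups?
def approval_rate(group):
    r = rows(group)
    return sum(pred for _, pred, _ in r) / len(r)

# Notion 2: how well does the model actually perform on a given group?
def group_accuracy(group):
    r = rows(group)
    return sum(pred == truth for _, pred, truth in r) / len(r)

parity_gap = approval_rate("majority") - approval_rate("minority")
print(round(parity_gap, 2))        # 0.42 -- gap in approval rates
print(group_accuracy("minority"))  # 0.5 -- performance on the smaller group
```

An algorithm can close the parity gap while still being inaccurate for the minority group, or vice versa, which is why the notion of fairness has to be chosen explicitly before, as Shokri puts it, the learning algorithm can be modified to respect it.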
The problem, Narayanan further posits, is that theoretically grounding principles in this way is challenging, and not something that industry practitioners, such as those working with Singapore’s Model AI Governance Framework, may be able or willing to do.
“Principles, in my opinion, are in this weird no-man’s land: neither theoretically grounded, nor practically implementable. I worry that we’re focusing too much on solving the latter problem, at the expense of the former,” explained Narayanan.
As such, Narayanan’s research at AiTH has been dedicated to interrogating the definitions of terms used when discussing AI ethics. He is currently examining the discourse around transparency to determine what it actually entails in the context of building ethical AI.
“I am asking if transparency is an end in itself or if there are things like accountability and redress that it should help us get,” Narayanan explained.
He is particularly concerned about what he terms performative transparency—providing people with information about how an AI algorithm makes decisions, but without doing anything more than simply making that information available.
“For example, you could tell a job applicant that their resumes were screened by an automated algorithm, but then not provide any explanation for why they may be rejected and mechanisms to contest it or seek redress,” said Narayanan. “When people can be potentially harmed by a system, they would want a channel to fight an unfair decision. Transparency could help with this to some extent.”
A better understanding of transparency and the other terms that dominate AI ethics frameworks may help us design AI that is actually beneficial to all.
Technology that centers humanity
But what exactly goes into designing AI that benefits humanity? Answering that question requires considering the myriad of different and intersecting factors that make us human, said Professor Setsuko Yokoyama of the Singapore University of Technology and Design. Yokoyama specializes in the speculative design of equitable technology, which incorporates the sociopolitical history of a particular digital technology to inform its ongoing design process.
For Yokoyama, who encourages a humanistic inquiry into digital technologies, clear definitions are crucial too.
“When we talk about ‘human-centric’ design, who are the ‘humans’ in question?” asked Yokoyama. “If it refers to a majority group in a society or a handful of elites that happened to be in the room where design decisions are made, that already indicates who is prioritized and who is left out.”
Yokoyama brings up a seemingly innocuous example to illustrate this point: speech-to-text technology. While you may be familiar with the technology through AI-powered automatic captions on YouTube videos, speech-to-text traces its beginnings to the late 19th and early 20th centuries, when it was known as Visible Speech and used as an assistive technology for deaf students to master oral communication.
“But at the same time it served as a corrective and assimilative tool for deaf students to be integrated into a larger society through the mastery of ‘normative’ speech,” said Yokoyama. “Though such design rationale might be characterized as ‘human-centric’, it stems from unchecked ableist assertions.”
Yokoyama uses intersectionality, a critical framework that examines how multiple identity markers such as race, gender, class, disability status and national origin intersect to produce overlapping forms of discrimination. Starting with the premise that bias is multifaceted and intersectional, Yokoyama aims to keep such biases from becoming entrenched in automatic speech recognition systems.
AI technology is no different, warned Yokoyama. “AI systems that are designed with a narrow and limited definition of humans would end up asserting and imposing a particular idea of who the humans are on the rest of us,” she said.
A question of power
The risk of sidelining certain voices or communities in technology design is a concern that Narayanan shares too. While Narayanan believes making ethical AI decisions requires deep critical thinking and moral skills, he is also quick to emphasize that high-stakes decision-making should not be centered on just a few select people.
“I’m skeptical of leaving just a few people in charge,” Narayanan said. “You have people, like AI developers and tech designers, with the most technical expertise who are making the decisions about bias and harm. On the other hand, you have the users who are most affected by these systems. The problem is these people are not the ones who have the most power in shaping the systems.”
To illustrate this point, Narayanan recalled his conversations with Grab taxi drivers and other gig workers for a previous research project. While terms like transparency and fairness didn’t appear to mean much to the workers, this changed when Narayanan approached the topic through practical concepts like wages and ride competition.
“It turns out they had a lot of things to say; they just didn’t have this language of abstract terms about fairness or transparency principles,” said Narayanan. “Because of this, it is important to figure out what material issues people care about, and how that connects to the things that we’re talking about.”
Narayanan and Yokoyama both run the Singaporean node of the Design Justice Network, a community that explores the intersections of design and social justice. The members of the network aim to use design to empower communities and avoid oppression, while centering the voices of those who are directly impacted by the outcomes of the design process.
In the end, Narayanan, Yokoyama and other researchers like them hope that clearer language will help pave the way for more diverse voices in discussions about AI ethics.
The usual challenges that AI presents—like job displacement, data security and privacy risks—are amplified due to unequal power dynamics, and the consequences are more dire for those who may be intentionally or unintentionally sidelined by biased AI algorithms. Discussing the fairness of algorithms behind AI technologies is undoubtedly a crucial step towards a better tech future for all, but what’s even more important is who gets to have a voice in those discussions in the first place.
This article was first published in the print version of Asian Scientist Magazine, July 2022 with the title ‘Fair Tech’.
Copyright: Asian Scientist Magazine. Illustration: Lieu Yipei