AI and Economic Poverty

 

Abstract

 

The author builds on her extensive literature review of AI systems, reflections on work and humanity, and decades of experience using AI in health care and education. This paper discusses how AI affects employability, education equity, and health equity, potentially increasing poverty among people in developing countries and knowledge workers in developed countries. The objective is to prompt earlier interventions and questions about the consequences of AI in our lives, with the aim of leveraging AI to reduce economic poverty. The main contributions include an urgent call to raise awareness of the imbalanced power in the AI system and the limitations of AI foundation models, so that the benefits of AI are shared widely. It is essential to consider the wider social, ethical, and political contexts of AI development and deployment, and to reactivate the core humanistic values that can easily be flattened and denied in the current AI system. The choices made in the present AI system will significantly affect poverty.
AI has shown great promise in many industries but also poses existential threats to our humanity. When corporations disrupt existing markets by seizing the new opportunities AI provides, they change customers' expectations of the outcomes created or facilitated by AI systems. The disruptive threats of AI in our daily lives can be described by the boiling-frog metaphor, which illustrates how failing to act on a worsening situation leads to uncontrollable, catastrophic consequences. AI is shaping people's expectations of efficiency and our understanding of a good life and a good society. It influences economic and political power, the future of warfare, and the distribution of costs and benefits. It is changing our perceptions of being, life, thinking, knowledge, consciousness, emotions, society, good and evil, and the ultimate nature of the universe. It may create more apathy toward those who are unemployed.
When AI exacerbates inequality and inequity in work, education, and health care, more people will become unemployed and unable to access education and health care, especially when the systematic and historical biases in foundation models are not examined. Self-sufficiency is becoming more important than community interests, leading more people to optimize for their short-term interests.
It is necessary to develop a better system for assessing the quality of data in foundation models and to improve the governance of AI at the individual, organizational, and societal levels. With better public education about what AI is, and broad public engagement about its direction, we can rebalance the power between AI technology companies and consumers. We must understand AI and its limitations from a socio-technological system perspective. We need to remain conscious of core human values, such as genuine human contact and the dignity of work, which are gradually being displaced by AI's focus on economies of scale, efficiency, and lower costs. More people should understand how the new AI models work and learn how to interact with AI and augment it for higher productivity and economic gains. Many developing countries need digital infrastructure and institutional support, including regulations, to redesign their work for more efficient and safer integration of AI, and also to potentially receive compensation for licensing their copyrighted data to power machine-learning tools.
The adoption of mature AI technology into familiar workflows needs to be monitored. We need safe AI, not merely an AI-safe culture! More stakeholders need to be invested in the discourse around AI systems and demand higher safety and security standards from technology companies. As AI becomes even more integrated into different industries, professionals need to understand the ethical implications and changing regulatory requirements surrounding AI usage. We must pay attention to the knowledge humans need when they interact with machines to make specific decisions. No autonomous system can be held accountable for the result; humans, and human lives, will always be accountable. More research is needed on human-system interaction and communication before leveraging AI to eradicate poverty.

 


Biographic note

 

Maria Lai-ling Lam, PhD, is currently the Chair of the International Business Administration Department at LCC International University in Klaipėda, Lithuania. She holds a Ph.D. in Marketing and Organizational Behavior from George Washington University, an M.A. in Religious Studies, and both an M.B.A. and B.B.A. from the Chinese University of Hong Kong. Maria has taught various marketing, organizational behavior, and strategy courses to undergraduate and graduate students at four Christian universities in the United States for nineteen years and at several universities in Hong Kong for eight years. She has conducted numerous seminars and workshops at prominent universities in China and Japan. Maria has published one book and over 100 peer-reviewed articles and book chapters. Her intellectual contributions cover areas such as U.S.-China business trust relationships, corporate social responsibility, empathy, human flourishing, management education, cybersecurity, smart cities, diversity, equity, and inclusion.

 


Chair of International Business Administration Department

Professor of Business Administration

LCC International University, Lithuania

mlam@lcc.lt
