Deconstructing the Semantics of Human-Centric AI: A Linguistic Analysis

Authors

  • Srihari Maruthi, University of New Haven, West Haven, CT, United States
  • Sarath Babu Dodda, Central Michigan University, MI, United States
  • Ramswaroop Reddy Yellu, Independent Researcher, USA
  • Praveen Thuniki, Independent Researcher & Program Analyst, Georgia, United States
  • Surendranadha Reddy Byrapu Reddy, Sr. Data Architect at Lincoln Financial Group, Greensboro, NC, United States

Keywords

Human-Centric AI, Linguistic Analysis

Abstract

Every discipline benefits from developing a common vocabulary and from continually expanding and refining it. This process occurs naturally within each community as members collaborate and build on existing terminology. Yet the technical meanings a community attaches to its terms often crowd out the more intuitive, everyday interpretations those terms carry in the wider world. While such narrowing is necessary for brevity and focus in academic settings, we argue that returning to a common, natural reading of certain ambiguous terms can open new and productive lines of inquiry in Artificial Intelligence (AI). In this paper, we therefore undertake a close examination of the term "AI" itself, focusing on the many roles humans play not only in defining and developing AI, but also in its deployment and in the far-reaching consequences that arise from human interaction with AI systems. Adopting a human-centric framing, we center our analysis on how human involvement shapes AI's core enabling concepts, and we show how AI often falls short of the ideals projected onto it. Examining the interplay between AI and human beings reveals the roles and responsibilities that emerge from this relationship, and raises vital questions about ethics, accountability, and the socio-cultural implications of AI's development and application. Within this framework, the alignment of AI with human values becomes essential to explore: AI systems reflect, and can reify, the beliefs, biases, and limitations of their human creators. Only by acknowledging and grappling with these influences can we navigate the complexities inherent in AI development and deployment, and by continually examining and refining our understanding of AI in relation to humans, we pave the way for genuinely beneficial and ethical systems that serve and empower humanity as a whole. We conclude with a call to action, inviting researchers, practitioners, and policymakers to recognize the pivotal role of a human-centric approach in shaping the future of AI. By embracing a holistic understanding of AI that encompasses the diverse perspectives and experiences of humanity, we can work toward a future in which AI operates in harmony with our collective aspirations and lives up to the ideals we envision for it.




Published

24-06-2021

How to Cite

[1] S. Maruthi, S. B. Dodda, R. R. Yellu, P. Thuniki, and S. R. B. Reddy, “Deconstructing the Semantics of Human-Centric AI: A Linguistic Analysis”, J. of Artificial Int. Research and App., vol. 1, no. 1, pp. 11–30, Jun. 2021, Accessed: Nov. 22, 2024. [Online]. Available: https://aimlstudies.co.uk/index.php/jaira/article/view/24
