Program Change

UPDATE: Due to unforeseen circumstances, the workshop has been canceled; thus, there will be no oral or poster presentations. We apologize for any inconvenience caused by this turn of events.

ABOUT

Objective and Purpose

The objective of Trustworthy AI is to encompass a unified framework of fundamental principles: (i) Harm Prevention, (ii) Explicability, and (iii) Fairness. Systems built on these principles display robustness, reliability, and explainability while being fair, safe, and secure, ensuring positive and beneficial impacts on both human society (including future generations) and the environment.

This workshop is intended to inspire emerging researchers in AI: (a) high school, undergraduate, and graduate students, (b) educators, (c) non-university affiliates (independent and industry researchers), and (d) academic faculty to conduct and partake in plenary discussions, particularly with regard to the ecological and sociological concerns of widely deployed AI systems, disclosing their adverse impacts to truly enable humanity-centered Trustworthy AI systems.

Harm Prevention

  • Safety & Security
  • Robustness & Reliability
  • Privacy & Data Governance

Explicability

  • Explainability & Transparency

Fairness

  • Responsibility & Accountability
  • Well-Being
    — Social
    — Environmental

IMPACT

A Path Forward

To holistically address the pressing concerns of AI and its interconnected societal risks and impacts from a multi-disciplinary perspective (computer science, sociology, environmental science, and so on), a socioethical impact assessment across the entirety of AI and its affiliated disciplines is a necessity.

Trustworthy AI is defined as humanity-centered application systems that incorporate socioethical factors, designed to simultaneously benefit both people and society while posing no risk of harm.

PARTICIPATION

Call for Papers

The ASEA (pronounced ay-see) workshop is part of the Diversity and Inclusion Activities at the 38th Annual AAAI Conference on Artificial Intelligence and will be hosted online and in person on Sunday, 10:00am - 3:30pm PST (UTC+0 minus 8 hours), at the Vancouver Convention Centre — West Building, in Vancouver, Canada. The main dates for AAAI 2024 are February 20th - 27th, 2024. Registered participants will receive the event link as the workshop date draws near.

You must submit the full paper by February 6th, 2024, at 11:59pm UTC+0.

Topics

The goal of this workshop is to create collaborative bridges within and beyond AI by highlighting both outcome (e.g., deliberate and unintentional harms, loss of life, air quality, and climate change) and error (e.g., poor system performance and non-interpretable results) disparities of AI systems across six dimensions: (i) Safety & Security, (ii) Robustness & Reliability, (iii) Responsibility & Accountability, (iv) Explainability & Transparency, (v) Privacy & Data Governance, and finally (vi) Well-Being. To emphasize the importance of AI for positive societal (social & environmental) impacts, we posit that it is imperative to accentuate the pitfalls of AI to truly enable humanity-centered Trustworthy AI systems.

Submissions may span a wide range of topics within the realm of computing, including but not limited to:

  • High-Performance Computing
  • Quantum Computing
  • Data Science and Analytics
  • Cloud Computing and Distributed Systems
  • Cybersecurity and Privacy
  • Emerging Technologies in Computing
  • Natural Language Processing and LLMs
  • Artificial Intelligence and Machine Learning
  • Computer Vision and Image Processing
  • Intelligent Educational Systems and e-Learning
  • Ambient Intelligence and IoT

Submission Guidelines

We invite emerging researchers, students, educators, and individuals to submit original extended abstracts (2 pages) or short papers (3-5 pages), written in English, plus unlimited pages of references. Authors can choose to make either an archival or non-archival submission. Submitted papers will be assessed based on their novelty, technical quality, potential impact, insightfulness, depth, clarity, and reproducibility. We invite single-blind submissions that include (but are not limited to):

  • Works-in-progress;
  • Research proposals;
  • Provocations, critical approaches, and position papers;
  • Surveys or meta-analyses accentuating opportunities for new or underexplored research.

Generative AI tools and technologies, such as ChatGPT, may not be listed as authors of an AAAI 2024 published Work. The use of generative AI tools and technologies to create content is permitted but must be fully disclosed in the Work. For example, the authors could include the following statement in the Acknowledgements section of the Work: "ChatGPT was utilized to generate sections of this Work (including text, tables, graphs, code, data, citations, etc.)." If you are uncertain about the need to disclose the use of a particular tool, err on the side of caution, and include a disclosure in the Acknowledgements section of the Work.

Text generated from a large language model (LLM) such as ChatGPT must be clearly marked where such tools are used for purposes beyond editing the author's own text. While we will not be using tools to detect LLM-generated text, we will investigate submissions brought to our attention and will desk reject papers where LLM use is not clearly marked.

Important Dates

Paper submission deadline: February 6th, 2024

Decision Notification: February 9th, 2024

NOTE: All deadlines are 11:59pm UTC+0.

Contact Email

Email: asea-workshop@outlook.com
Organizer Email: jamell.dacon@morgan.edu (jamell dot dacon at morgan dot edu)

PAPERS

Accepted Papers

  • The right to human decision: analyzing policies, ethics, and implementation by Michael Cheng

  • Can You Trust a Generative AI Teacher? Verifying the Imperfections of Generative AI for Educational Purposes Using a Zero-Shot Level Classification Problem by NohMyongSung and Cho Ung Hui

  • Innovating AI for humanitarian action in emergencies by Aradhana R, Rajagopal A, Nirmala V and Immanuel Johnraja Jebadurai

  • KPC-cF: Korean Aspect-Based Sentiment Analysis via Pseudo-Classifier with Corpus Filtering for Low Resource Society by Kibeom Nam

  • Outlier Ranking for Large-Scale Public Health Data by Ananya Joshi, Tina Townes, Nolan Gormley, Luke Neureiter, Ronald Rosenfeld and Bryan Wilder

  • Towards Tractable Formal Explanations for Neural Network Models by Qianru Zhou

PEOPLE

Who are we?

Jamell Dacon

Dr. Jamell Dacon (he/him) is an Assistant Professor in the Department of Computer Science at Morgan State University. His current research spans the areas of Trustworthy AI, Natural Language Processing, Computational Social Science, and Interdisciplinary Data Science.

Dr. Jamell Dacon - Founder, Lead Organizer, Chair
Tianlong Chen

Dr. Tianlong Chen (he/him) is an incoming Assistant Professor in the Department of Computer Science at the University of North Carolina at Chapel Hill, and is currently a Postdoctoral Researcher at MIT CSAIL and Harvard BMI. His research has been published at notable conferences in the areas of Machine Learning, Computer Vision, and AI4Science.

Dr. Tianlong Chen - Guest Keynote Speaker
Han Xu

Han Xu (he/him) is a Ph.D. Candidate in Computer Science at Michigan State University. He has broad research interests in Trustworthy AI, including machine learning robustness, fairness, and privacy, as well as related problems in real-world application domains such as graph, text, and financial data.

Han Xu - Guest Keynote Speaker