NJIT Professor Awarded National Science Foundation Grants to Design Responsible AI Techniques

In the era of AI-driven decision systems, creating responsible and transparent algorithms is crucial for fair outcomes. Meet Aritra Dasgupta, an assistant professor at NJIT's Ying Wu College of Computing, whose research focuses on designing human-in-the-loop techniques and an interpretability-by-design framework named Trapeze. These NSF-funded projects aim to empower data scientists and HR specialists with the tools they need to enhance decision-making processes in critical areas like admissions, hiring, and rankings. Read on to explore Dasgupta's collaborative and innovative projects that strive to bring transparency and accountability to the forefront of AI technologies.

Designing Responsible AI Techniques

Implementing accountable and transparent AI algorithms.

In the age of artificial intelligence, responsible and accountable decision-making is paramount. Aritra Dasgupta's research focuses on designing techniques that ensure transparent and ethical AI algorithms. By incorporating human-in-the-loop approaches, Dasgupta aims to empower data scientists to build trustworthy and fair models for critical contexts like admissions, hiring, and rankings.

How can incorporating human input lead to more responsible AI outcomes? By actively involving experts in the decision-making process, data scientists can gain confidence in the model's effectiveness and mitigate bias. Dasgupta's work highlights the importance of being proactive in addressing complexities, sensitivity, and ethics when designing AI algorithms for real-world applications.

Introducing Trapeze: Helping HR Specialists Delicately Balance Talent Acquisition

Enhancing HR decision-making with an interpretability-by-design framework.

In HR, finding the perfect balance between talent acquisition and factors like cultural fit and diversity is a challenging tightrope act. Aritra Dasgupta's Trapeze offers an interpretability-by-design framework to assist HR specialists in making informed decisions.

How does Trapeze add value to HR decision-making?

Trapeze employs visualization techniques to show HR specialists how sensitive rankings are to minor changes in scoring formulas and candidate information. By visually representing this complex data, Trapeze aids in the delicate balancing act among candidate quality, fairness, and diversity. HR personnel can leverage these insights to avoid the unintended demotion of underrepresented candidates and promote fair hiring practices.
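To make the idea concrete, here is a minimal sketch of the kind of ranking-sensitivity analysis such a tool might visualize: candidates are ranked by a weighted scoring formula, and small changes to the weights can reorder the list. All candidate names, attributes, scores, and weights below are hypothetical illustrations, not part of Trapeze itself.

```python
# Hypothetical example: how sensitive is a weighted ranking to its weights?
# Names, attributes, and numbers are invented for illustration only.

candidates = {
    "A": {"experience": 0.90, "skills": 0.60, "interview": 0.70},
    "B": {"experience": 0.70, "skills": 0.85, "interview": 0.65},
    "C": {"experience": 0.60, "skills": 0.70, "interview": 0.95},
}

def rank(weights):
    """Order candidates by a weighted sum of their attribute scores."""
    def score(attrs):
        return sum(weights[k] * attrs[k] for k in weights)
    return sorted(candidates, key=lambda c: score(candidates[c]), reverse=True)

base = {"experience": 0.5, "skills": 0.3, "interview": 0.2}
print("baseline ranking:", rank(base))

# Perturb each weight and report any change in the ordering -- the kind
# of instability a sensitivity visualization would surface for a reviewer.
for attr in base:
    tweaked = dict(base)
    tweaked[attr] += 0.2
    if rank(tweaked) != rank(base):
        print(f"raising the {attr!r} weight by 0.2 reorders to:", rank(tweaked))
```

With these illustrative numbers, modest changes to the "skills" or "interview" weight are enough to reorder the candidates, which is exactly the kind of fragility a decision-maker would want surfaced before acting on a ranking.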

Collaboration and Expertise: Assembling Diverse Project Teams

The power of collaboration in tackling complex socio-technical challenges.

Aritra Dasgupta's projects thrive on collaboration by bringing together diverse expertise. By collaborating with Professor Julia Stoyanovich from New York University's Center for Responsible AI, Dasgupta ensures his research incorporates a range of perspectives and multidisciplinary knowledge.

Dasgupta's team also draws on expertise in databases, cognitive psychology, and organizational psychology. Partnering with professors at the University of Michigan and Rice University, among others, strengthens the team's ability to develop holistic and inclusive AI solutions.

The Role of Visualization in Responsible AI Design

Utilizing interactive visualization for enhanced decision-making.

A crucial aspect of Aritra Dasgupta's work lies in developing interactive visualization techniques that facilitate responsible AI design. These techniques offer data scientists and decision-makers a clearer understanding of complex structures and enable them to act on the insights gained.

Why is visualization fundamental in implementing responsible AI design?

Visualization helps stakeholders uncover potential biases and sensitivities in rankings, scoring formulas, and data inputs. Understanding these intricacies supports accountable and transparent decision-making and mitigates the risk of skewed outcomes in contexts such as hiring and institutional rankings.

Conclusion

Aritra Dasgupta's research in designing responsible AI techniques is paving the way for transparent and accountable decision-making in critical socio-technical contexts. By incorporating human-in-the-loop approaches, Dasgupta empowers data scientists to create trustworthy algorithms that address bias and promote fairness. His Trapeze framework also aids HR specialists in achieving the delicate balance between talent acquisition, cultural fit, and diversity.

Collaboration and visualization play significant roles in Dasgupta's projects, ensuring diverse expertise and helping stakeholders gain a comprehensive understanding of AI algorithms and their potential impact. By striving for responsible AI design, Dasgupta aims to create a more inclusive and transparent future.
