
A "New Nobel" – Computer Scientist Is Awarded a $1 Million Prize for Artificial Intelligence



Whether it's preventing electrical grid explosions, identifying patterns in past crimes, or optimizing resources for critically ill patients, Cynthia Rudin of Duke University wants artificial intelligence (AI) to demonstrate its worth, especially when it comes to making decisions that have a profound impact on people's lives.


While many researchers in the developing field of machine learning were focused on improving algorithms, Rudin wanted to use the power of AI to benefit society. She pursued opportunities to apply machine learning techniques to significant societal problems, and discovered that AI's true potential is best realized when humans can peer inside and understand what it is doing.


Now, after 15 years of advocating for and developing "interpretable" machine learning algorithms that enable humans to see inside AI, Rudin's contributions to the field have earned her the $1 million Squirrel AI Award for Artificial Intelligence for the Benefit of Humanity from the Association for the Advancement of Artificial Intelligence (AAAI). Founded in 1979, AAAI is the preeminent international scientific society for artificial intelligence researchers, practitioners, and educators.


Rudin, a Duke professor of computer science and engineering, is the second recipient of the new annual award, which is funded by the online education company Squirrel AI and intended to recognize achievements in artificial intelligence with a prize on par with those in more traditional fields.


She is being recognized for "pioneering scientific work on interpretable and transparent AI systems in real-world deployments, advocacy for these features in highly sensitive areas such as social justice and medical diagnosis, and serving as a role model for researchers and practitioners."


"Only world-renowned awards, such as the Nobel Prize and the Association of Computing Machinery's A.M. Turing Award, carry million-dollar monetary rewards," said AAAI awards committee chair and past president Yolanda Gil. "Professor Rudin's research demonstrates the critical nature of transparency for AI systems operating in high-risk domains. Her fortitude in confronting contentious issues demonstrates the critical importance of research in addressing critical issues surrounding the responsible and ethical use of AI."


Rudin's first practical endeavor was a collaboration with Con Edison, the energy company that supplies electricity to New York City. Her assignment required her to use machine learning to predict which manholes were at risk of exploding due to deteriorated and overloaded electrical circuitry. However, she quickly discovered that no matter how many recently published academic bells and whistles she added to her code, it struggled to improve performance meaningfully when confronted with the difficulties inherent in working with handwritten dispatcher notes and accounting records from the time of Thomas Edison.


"As we worked with the data, we gained more accuracy from simple classical statistics techniques and a better understanding of it," Rudin explained. "If we understood the data that the predictive models were using, we could solicit useful feedback from Con Edison engineers, which would help us improve our entire process." It was the interpretability of the process that contributed to the improvement of our predictions, not a larger or more sophisticated machine learning model. That is what I chose to work on, and it serves as the foundation for my laboratory."


Rudin spent the next decade developing techniques for interpretable machine learning: predictive models that humans can understand. While the code used to create these models is complex and sophisticated, the formulas they produce can be small enough to fit on an index card in a few lines.
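
To make the "index card" idea concrete, here is a minimal sketch of what such a point-based model can look like in code. Every feature, point value, and probability below is invented for illustration; this is not one of Rudin's published scoring systems.

```python
# A hypothetical index-card-sized scoring model of the kind interpretable
# machine learning methods produce. All features, points, and risk numbers
# are illustrative inventions, not any published clinical score.

def risk_points(patient):
    """Sum a handful of integer points from simple yes/no features."""
    points = 0
    points += 2 if patient["prior_event"] else 0       # +2 for a prior event
    points += 1 if patient["age"] >= 65 else 0         # +1 for age 65 or older
    points += 1 if patient["abnormal_reading"] else 0  # +1 for an abnormal test
    return points

# A lookup table maps the total score to a predicted probability,
# so the entire model can be audited at a glance.
SCORE_TO_RISK = {0: 0.05, 1: 0.12, 2: 0.27, 3: 0.50, 4: 0.73}

patient = {"prior_event": True, "age": 70, "abnormal_reading": False}
score = risk_points(patient)
print(f"score = {score}, estimated risk = {SCORE_TO_RISK[score]:.0%}")
```

Because the whole model is three point rules and a five-row table, an expert can check any prediction by hand, which is exactly the property Rudin argues for.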


Rudin's brand of interpretable machine learning has been applied to a variety of significant projects. With collaborators Brandon Westover and Aaron Struck at Massachusetts General Hospital, and her former student Berk Ustun, she developed a simple point-based system that can predict which patients are most likely to have destructive seizures following a stroke or other brain injury. And with former MIT student Tong Wang and the Cambridge Police Department, she developed a model that helps identify commonalities between crimes to determine whether they are part of a series committed by the same criminals. That open-source program eventually served as the foundation for the New York Police Department's Patternizr algorithm, a highly sophisticated piece of code that determines whether a new crime committed in the city is connected to previous crimes.
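
The crime-linkage idea can be illustrated with a toy sketch: score how similar two crimes are on a few human-readable features and flag likely pairs. The features and weights below are hypothetical; this is not the Cambridge model or the NYPD's Patternizr.

```python
# A toy illustration of crime-series linkage: each additive term below is a
# readable "points for similarity" rule. Features and weights are invented
# for illustration only.

from math import hypot

def pair_similarity(a, b):
    score = 0.0
    score += 2.0 if a["method_of_entry"] == b["method_of_entry"] else 0.0
    score += 1.0 if a["premise_type"] == b["premise_type"] else 0.0
    # Closer in space and time earns a higher score (simple decaying bonuses).
    dist_km = hypot(a["x"] - b["x"], a["y"] - b["y"])
    score += max(0.0, 1.5 - 0.5 * dist_km)
    days_apart = abs(a["day"] - b["day"])
    score += max(0.0, 1.0 - days_apart / 30.0)
    return score

c1 = {"method_of_entry": "rear window", "premise_type": "apartment",
      "x": 0.0, "y": 0.0, "day": 0}
c2 = {"method_of_entry": "rear window", "premise_type": "apartment",
      "x": 1.0, "y": 0.5, "day": 6}

s = pair_similarity(c1, c2)
print(f"similarity = {s:.2f}:", "possible series" if s > 3.0 else "unlikely")
```

Because each term in the score is a named, additive rule, an analyst can see exactly why two crimes were linked, rather than taking an opaque match on faith.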


"Cynthia's dedication to resolving critical real-world problems, willingness to collaborate closely with domain experts, and ability to distill and explain complex models are unmatched," said Daniel Wagner, deputy superintendent of the Cambridge Police Department. "Her research contributed significantly to the fields of crime analysis and policing. More impressively, she is a vocal critic of potentially unjust 'black box' models in criminal justice and other high-stakes fields, as well as a staunch advocate for transparent interpretable models in fields where accurate, just, and bias-free results are critical."


In contrast to Rudin's transparent models, black box models are opaque. The methods these AI algorithms use make it impossible for humans to understand what factors the models are built on, what data they focus on, and how they use it. While this may not be an issue for routine tasks such as telling a dog from a cat, it can be a significant problem for high-stakes decisions that affect people's lives.


"Cynthia is reshaping the landscape of how artificial intelligence is used in societal applications by refocusing efforts away from black box models and toward interpretable models, demonstrating that the conventional wisdom—that black boxes are typically more accurate—is frequently incorrect," said Jun Yang, chair of Duke's computer science department. "This makes it more difficult to justify the use of black-box models on individuals (such as defendants) in high-stakes situations. Cynthia's models' interpretability was critical to their adoption in practice, as they assist rather than replace human decision-makers."


One significant example is COMPAS, an AI algorithm used in multiple states to make bail and parole decisions, which a ProPublica investigation accused of using race as a factor in its calculations. The accusation is difficult to prove, however, because the details of the algorithm are proprietary, and some critical aspects of ProPublica's analysis are dubious. Rudin's team has demonstrated that a simple interpretable model that discloses exactly which factors it considers is equally effective at predicting whether or not a person will commit another crime. That raises the question, Rudin says, of why black box models are necessary for these types of high-stakes decisions in the first place.
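
Rudin's group has published rule-list models of exactly this flavor (for example, the CORELS system). The sketch below paraphrases that style; treat the specific conditions and thresholds as illustrative rather than as the published model.

```python
# A rule list in the spirit of the interpretable recidivism models from
# Rudin's group (e.g., CORELS). The exact conditions and thresholds here
# are illustrative, not the published rules.

def predict_rearrest(age: int, priors: int, is_male: bool) -> bool:
    if 18 <= age <= 20 and is_male:
        return True
    if 21 <= age <= 23 and 2 <= priors <= 3:
        return True
    if priors > 3:
        return True
    return False

# Every prediction traces to exactly one rule, so the factors the model
# considers are fully disclosed, unlike a proprietary black box.
print(predict_rearrest(age=19, priors=0, is_male=True))   # True: first rule fires
print(predict_rearrest(age=40, priors=1, is_male=False))  # False: no rule fires
```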


"We've been systematically demonstrating that, for high-stakes applications, there is no trade-off between accuracy and interpretability as long as our models are carefully optimized," Rudin said. "This has been demonstrated in criminal justice decisions, numerous healthcare decisions such as medical imaging, power grid maintenance decisions, and financial loan decisions, among others. Knowing this is possible alters our perception of AI as being incapable of self-explanation."


Rudin has spent her career not only building these interpretable models, but also developing and publishing techniques to help others do the same. That has not always been straightforward. When she began publishing her work, the terms "data science" and "interpretable machine learning" did not exist, and there were no neat categories for her research, which meant that editors and reviewers had no idea what to do with it. She discovered that if a paper did not prove theorems and did not claim that its algorithms were more accurate, it was, and frequently still is, more difficult to publish.


As Rudin continues to help people and publish her interpretable designs, and as new concerns about black box code emerge, her influence is finally beginning to turn the ship. There are now entire categories devoted to interpretable and applied work in machine learning journals and conferences. Other researchers in the field and their collaborators have emphasized how critical interpretability is to developing trustworthy AI systems.


"I have admired Cynthia from an early age for her independence, her determination, and her unflinching pursuit of true understanding of anything new she encountered in classes and papers," said Ingrid Daubechies, the James B. Duke Distinguished Professor of Mathematics and Electrical and Computer Engineering at Princeton University, one of the world's preeminent signal processing researchers, and one of Rudin's PhD advisors. "Even as a graduate student, she was an active member of her cohort, advocating for others. She drew me into machine learning, as it was an area in which I lacked any expertise prior to her gentle but persistent prodding. I'm overjoyed for her wonderful and well-deserved recognition!"


"I am overjoyed that Cynthia's work is being recognized in this manner," added Rudin's second PhD advisor, Microsoft Research partner Robert Schapire, whose work on "boosting" helped lay the groundwork for modern machine learning. "For her illuminating and insightful research, her independent thinking that has taken her in unexpected directions, and for her long-standing commitment to issues and problems of practical, societal importance."


Rudin received undergraduate degrees in mathematical physics and music theory from the University at Buffalo and a PhD in applied and computational mathematics from Princeton. She then worked at New York University as a National Science Foundation postdoctoral research fellow and at Columbia University as an associate research scientist, and became an associate professor of statistics at the Massachusetts Institute of Technology before joining the Duke faculty in 2017. She holds appointments in computer science, electrical and computer engineering, biostatistics and bioinformatics, and statistical science.


She is a three-time recipient of the INFORMS Innovative Applications in Analytics Award, which honors novel and creative applications of analytical techniques, and a Fellow of both the American Statistical Association and the Institute of Mathematical Statistics.


"I want to express my gratitude to AAAI and Squirrel AI for creating this award, which I am confident will revolutionize the field," Rudin said. "Having a 'Nobel Prize' for AI that benefits society demonstrates unequivocally that this subject—AI that benefits society—is actually important."
