November 2, 2024

Do Humans and AI Think Alike?

MIT researchers developed a method that helps a user understand a machine-learning model's reasoning, and how that reasoning compares to that of a human. Credit: Christine Daniloff, MIT
A new technique compares the reasoning of a machine-learning model to that of a human, so the user can see patterns in the model's behavior.
In machine learning, understanding why a model makes certain decisions is often just as important as whether those decisions are correct. For instance, a machine-learning model might correctly predict that a skin lesion is cancerous, but it could have done so using an unrelated blip on a clinical image.
While tools exist to help experts make sense of a model's reasoning, these techniques often provide insights on only one decision at a time, and each must be manually evaluated. Models are commonly trained using millions of data inputs, making it almost impossible for a human to evaluate enough decisions to identify patterns.
Now, researchers at MIT and IBM Research have developed a method that enables a user to aggregate, sort, and rank these individual explanations to rapidly analyze a machine-learning model's behavior. Their technique, called Shared Interest, incorporates quantifiable metrics that compare how well a model's reasoning matches that of a human.
Shared Interest could help a user quickly uncover concerning patterns in a model's decision-making; for instance, perhaps the model often becomes confused by distracting, irrelevant features, like background objects in photos. Aggregating these insights could help the user quickly and quantitatively determine whether a model is trustworthy and ready to be deployed in a real-world situation.
"In developing Shared Interest, our goal is to be able to scale up this analysis process so that you can understand on a more global level what your model's behavior is," says lead author Angie Boggust, a graduate student in the Visualization Group of the Computer Science and Artificial Intelligence Laboratory (CSAIL).
Boggust wrote the paper with her advisor, Arvind Satyanarayan, an assistant professor of computer science who leads the Visualization Group, as well as Benjamin Hoover and senior author Hendrik Strobelt, both of IBM Research. The paper will be presented at the Conference on Human Factors in Computing Systems.
Boggust began working on this project during a summer internship at IBM, under the mentorship of Strobelt. After returning to MIT, Boggust and Satyanarayan expanded on the project and continued the collaboration with Strobelt and Hoover, who helped deploy the case studies that demonstrate how the technique could be used in practice.
Human-AI alignment
Shared Interest leverages popular techniques that reveal how a machine-learning model made a specific decision, known as saliency methods. If the model is classifying images, saliency methods highlight the areas of an image that were important to the model when it made its decision. These areas are visualized as a type of heatmap, called a saliency map, that is often overlaid on the original image. If the model classified the image as a dog, and the dog's head is highlighted, that means those pixels were important to the model when it decided the image contains a dog.
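To make the idea concrete, here is a minimal sketch of one common saliency method, plain gradient saliency, written in PyTorch. The article does not prescribe a particular saliency technique, and the model, tensor shapes, and function name here are assumptions for illustration only.

```python
import torch

def gradient_saliency(model: torch.nn.Module, image: torch.Tensor, target_class: int) -> torch.Tensor:
    """Return a per-pixel saliency map for one image.

    Assumes `image` has shape (1, channels, height, width) and that `model`
    returns class scores of shape (1, num_classes). Large values in the
    returned (height, width) map mark pixels that most affect the score.
    """
    model.eval()
    image = image.clone().requires_grad_(True)
    score = model(image)[0, target_class]   # score of the class we want to explain
    score.backward()                        # gradients flow back to the input pixels
    # Take the strongest gradient magnitude across color channels for each pixel.
    return image.grad.abs().max(dim=1).values.squeeze(0)
```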
Shared Interest works by comparing saliency methods to ground-truth data. In an image dataset, ground-truth data are typically human-generated annotations that surround the relevant parts of each image. In the previous example, the box would surround the entire dog in the photo. When evaluating an image classification model, Shared Interest compares the model-generated saliency data and the human-generated ground-truth data for the same image to see how well they align.
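A minimal sketch of how such a comparison could be computed on binary masks is shown below. The metric names and definitions are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def alignment_scores(saliency_mask: np.ndarray, ground_truth_mask: np.ndarray) -> dict:
    """Compare a binary saliency mask with a human-annotated ground-truth mask.

    Both arrays have the same shape; True marks a selected pixel. The metric
    names are illustrative, not the paper's exact definitions.
    """
    saliency = saliency_mask.astype(bool)
    ground_truth = ground_truth_mask.astype(bool)

    intersection = np.logical_and(saliency, ground_truth).sum()
    union = np.logical_or(saliency, ground_truth).sum()

    return {
        # Overlap of the two regions relative to their combined area.
        "iou": float(intersection / union) if union else 0.0,
        # How much of the human annotation the model's salient region covers.
        "ground_truth_coverage": float(intersection / ground_truth.sum()) if ground_truth.sum() else 0.0,
        # How much of the model's salient region falls inside the annotation.
        "saliency_coverage": float(intersection / saliency.sum()) if saliency.sum() else 0.0,
    }
```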
The method uses several metrics to quantify that alignment (or misalignment) and then sorts each decision into one of eight categories. The categories run the gamut from fully human-aligned (the model makes a correct prediction and the highlighted area in the saliency map is identical to the human-generated box) to completely distracted (the model makes an incorrect prediction and does not use any image features found in the human-generated box).
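The paper defines the eight categories precisely; the simplified bucketing below, built on the hypothetical alignment_scores helper above, only sketches the idea, with made-up labels and thresholds.

```python
def categorize_decision(prediction_correct: bool, scores: dict,
                        high: float = 0.9, low: float = 0.1) -> str:
    """Sort one decision into a coarse bucket from correctness plus alignment.

    The real Shared Interest scheme defines eight categories; the labels and
    thresholds below are simplified stand-ins, not the authors' definitions.
    """
    if prediction_correct and scores["iou"] >= high:
        return "human-aligned"                # right answer, same evidence as the human
    if prediction_correct and scores["ground_truth_coverage"] <= low:
        return "right-for-different-reasons"  # right answer, but ignoring the annotated region
    if not prediction_correct and scores["ground_truth_coverage"] <= low:
        return "distracted"                   # wrong answer, focused outside the annotation
    return "partially-aligned"                # everything in between
```

Applied across a whole dataset, buckets like these are what let a reviewer sort and rank decisions instead of inspecting them one at a time.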
"On one end of the spectrum, your model made the decision for the exact same reason a human did, and on the other end of the spectrum, your model and the human are making the decision for totally different reasons. By quantifying that for all the images in your dataset, you can use that quantification to sort through them," Boggust explains.
The technique works similarly with text-based data, where keywords are highlighted instead of image regions.
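For text, the same kind of overlap can be measured on token sets instead of pixel regions. The small sketch below is illustrative only; the paper's exact text formulation may differ.

```python
def keyword_alignment(salient_tokens: set, human_keywords: set) -> float:
    """Overlap between model-highlighted tokens and human-annotated keywords.

    Illustrative only; the paper's exact text formulation may differ.
    """
    union = salient_tokens | human_keywords
    if not union:
        return 0.0
    return len(salient_tokens & human_keywords) / len(union)

# Example: the model highlighted "terrible" and "boring"; the human marked "boring" and "plot".
print(keyword_alignment({"terrible", "boring"}, {"boring", "plot"}))  # 1 shared / 3 total
```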
Researchers developed a method that uses quantifiable metrics to compare how well a machine-learning model's reasoning matches that of a human. This image shows the pixels in each image that the model used to classify it (surrounded by the orange line) and how that compares to the most important pixels as defined by a human (surrounded by the yellow box). Credit: Courtesy of the researchers
Rapid analysis
The researchers used three case studies to show how Shared Interest could be useful to both nonexperts and machine-learning researchers.
In the first case study, they used Shared Interest to help a dermatologist determine whether he should trust a machine-learning model designed to help diagnose cancer from photos of skin lesions. Shared Interest enabled the dermatologist to quickly see examples of the model's correct and incorrect predictions. Ultimately, the dermatologist decided he could not trust the model because it made too many predictions based on image artifacts rather than actual lesions.
"The value here is that using Shared Interest, we are able to see these patterns emerge in our model's behavior. In about half an hour, the dermatologist was able to make a confident decision about whether or not to trust the model and whether or not to deploy it," Boggust says.
In the second case study, they worked with a machine-learning researcher to show how Shared Interest can evaluate a particular saliency method by revealing previously unknown pitfalls in the model. Their technique enabled the researcher to analyze thousands of correct and incorrect decisions in a fraction of the time required by typical manual methods.
In the third case study, they used Shared Interest to dive deeper into a specific image classification example. By manipulating the ground-truth area of the image, they were able to conduct a what-if analysis to see which image features were most important for particular predictions.
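One way to picture this kind of what-if analysis, reusing the hypothetical alignment_scores sketch from above: keep the model's saliency mask fixed, shrink the human annotation to a smaller region, and compare the scores. All masks and coordinates below are made up.

```python
import numpy as np

saliency_mask = np.zeros((128, 128), dtype=bool)
saliency_mask[50:90, 70:110] = True             # pixels the model found salient

ground_truth_mask = np.zeros((128, 128), dtype=bool)
ground_truth_mask[30:100, 50:120] = True        # original human annotation

modified_annotation = np.zeros((128, 128), dtype=bool)
modified_annotation[40:80, 60:100] = True       # hypothetical smaller region of interest

print("original annotation:", alignment_scores(saliency_mask, ground_truth_mask))
print("modified annotation:", alignment_scores(saliency_mask, modified_annotation))
```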
The researchers were impressed by how well Shared Interest performed in these case studies, but Boggust cautions that the technique is only as good as the saliency methods it is built upon. If those techniques contain bias or are inaccurate, then Shared Interest will inherit those limitations.
In the future, the researchers want to apply Shared Interest to different types of data, particularly tabular data used in medical records. They also want to use Shared Interest to help improve current saliency techniques. Boggust hopes this research inspires more work that seeks to quantify machine-learning model behavior in ways that make sense to humans.
Reference: “Shared Interest: Measuring Human-AI Alignment to Identify Recurring Patterns in Model Behavior” by Angie Boggust, Benjamin Hoover, Arvind Satyanarayan and Hendrik Strobelt, 24 March 2022, arXiv. DOI: https://doi.org/10.48550/arXiv.2107.09234.
This work is funded, in part, by the MIT-IBM Watson AI Lab, the United States Air Force Research Laboratory, and the United States Air Force Artificial Intelligence Accelerator.
