Question 27 - 2nd Phase - Day 4 - IME 2023


Question 27

Multiple choice

Text 1

XAI–Explainable artificial intelligence

Gunning, D., Stefik, M., Choi, J., Miller, T., Stumpf, S. and Yang, G.-Z.

Recent successes in machine learning (ML) have led to a new wave of artificial intelligence (AI) applications that offer extensive benefits to a ___(21)___ range of fields. However, many of these systems are not able to explain their ___(22)___ decisions and actions to human users. Explanations may not be essential for certain AI applications, and some AI researchers argue that the emphasis on explanation is misplaced, too difficult to achieve, and perhaps unnecessary. However, for many critical applications in defense, medicine, finance, and law, explanations are essential for users to understand, trust, and effectively manage these new, artificially intelligent partners.

Recent AI successes are largely attributed to new ML techniques that construct models in their internal representations. These include support vector machines (SVMs), random forests, probabilistic graphical models, reinforcement learning (RL), and deep learning (DL) neural networks. Although these models exhibit high performance, they are opaque in terms of explainability. There may be inherent conflict between ML performance (e.g., predictive accuracy) and explainability. Often, the highest performing methods (e.g., DL) are the least explainable, and the most explainable (e.g., decision trees) are the least accurate.

The ___(23)___ of an explainable AI (XAI) system is to make its behavior more intelligible to humans by providing explanations. There are some general principles to help create effective, more human-understandable AI systems: the XAI system should be able to explain its capabilities and understanding; explain what it has done, what it is doing now, and what will happen next; and disclose the salient information that it is acting on.

However, every explanation is set within a context that depends ___(24)___ the task, abilities, and expectations of the user of the AI system. The definitions of interpretability and explainability are, thus, domain dependent and may not be defined independently from a domain. Explanations can be full or partial.

Models that are fully interpretable give full and complete ___(25)___ explanations. Models that are partially interpretable reveal important pieces of their ___(26)___ process. Interpretable models obey “interpretability constraints” that are defined within the domain, whereas black box or unconstrained models do not comply with these constraints. Partial explanations may include variable importance measures, local models that approximate the model at specific points, and saliency maps.

XAI assumes that an explanation is ___(27)___ to an “end user” who depends on the decisions, recommendations, or actions produced by an AI system, yet there could be many different kinds of users, often ___(28)___ different time points in the development and use of the system. For example, one type of user might be an intelligence analyst, judge, or operator. However, other users who demand an explanation of the system might be a developer or test operator who needs to understand where there are areas for improvement. Yet another user might be policy-makers, who are trying to ___(29)___ the fairness of the system. Each user group may have a preferred explanation type that is able to communicate information in the most effective way. An effective explanation will take the target user group of the system into account, as its members might vary in their background knowledge and in their needs for what should be explained.

A number of ways of evaluating and measuring the effectiveness of an explanation have been proposed; however, there is currently no common means of measuring whether an XAI system is more intelligible to a user than a non-XAI system. Some of these measures are subjective, taken from the user’s point of view, such as user ___(30)___, which can be measured through a subjective rating of the clarity and utility of an explanation. A more objective measure of an explanation’s effectiveness might be task performance, i.e., does the explanation improve the user’s decision-making? Reliable and consistent measurement of the effects of explanations is still an open research question. Evaluation and measurement for XAI systems include valuation frameworks, common ground, common sense, and argumentation.

(…)

From a human-centered research perspective, research on competencies and knowledge could take XAI ___(31)___ the role of explaining a particular XAI system and helping its users to determine appropriate trust. In the future, XAIs may eventually have substantial social roles. These roles could include not only learning and explaining to individuals but also coordinating with other agents to connect knowledge, developing cross-disciplinary insights and common ground, partnering in teaching people and other agents, and drawing on previously discovered knowledge to accelerate the further discovery and application of knowledge. From such a social perspective of knowledge understanding and generation, the future ___(32)___ XAI is just beginning.

Adapted from: Science Robotics, https://www.science.org/doi/10.1126/scirobotics.aay7120 [Accessed on 15th April 2022].

Alternatives

A) asked
B) required
C) provided
D) submitted
E) held

Answer key: C
