
February 18, 2026

Google DeepMind wants to know if chatbots are just virtue signaling

We need to better understand how LLMs address moral questions if we're to trust them with more important tasks.

TL;DR

  • Google DeepMind calls for rigorous evaluation of LLMs' moral behavior, on par with how their coding and math skills are tested.
  • LLMs are increasingly used for sensitive roles, but their trustworthiness in moral decision-making is unproven.
  • Current LLM moral responses may be superficial, mimicking rather than reasoning, and can be swayed simply by how a question is formatted.
  • Researchers propose new tests to probe LLM moral robustness, checking for consistency and nuanced reasoning.
  • A significant challenge is accommodating diverse global values and belief systems within LLM moral frameworks.
  • Advancing LLM moral competency is seen as key to developing better AI systems aligned with society.