By Staff

DeepMind Ventures into Life Advice Tools: Ethical Concerns and Google's AI Direction

Google's AI division, DeepMind, is developing 21 distinct tools designed to deliver life advice, guidance, and tutoring. However, concerns have surfaced within Google about the risks of AI tools dispensing life advice.

An internal presentation by Google's AI safety specialists last December warned that users could suffer a decline in well-being and a loss of personal agency as a result of such AI-driven guidance.

To test these tools rigorously, Google has partnered with Scale AI, a startup valued at $7.3 billion that specializes in training and validating AI systems.

Insider sources say the project has drawn on the expertise of more than 100 PhD holders. Part of the evaluation involves determining whether the tools can offer relationship advice and answer deeply personal questions. One test scenario asked how to handle financial strain after being invited to a friend's destination wedding.

Notably, these emerging DeepMind tools are not designed for therapeutic use. Google's widely used Bard chatbot already redirects users to mental health resources when asked for therapeutic guidance.

Such precautions reflect the ongoing debate over the ethics of deploying AI in medical or therapeutic settings. That debate intensified after the National Eating Disorders Association suspended its Tessa chatbot, which had inadvertently dispensed harmful eating disorder advice.

While the medical community is divided on AI's immediate benefits, there is broad agreement that AI tools offering counsel require careful oversight.

As of now, Google DeepMind has not issued an official statement on the project.
