AI’s Moral Maze: Deepfakes, Consent, and Beyond

Shiran Mlamdovsky Somech delves into the core principles driving the use of generative AI for societal progress, grappling with critical issues from consent to bias, and envisioning a future where technology becomes a force for global empowerment.


It’s truly astounding how AI has revolutionized our approach to business, accelerating processes, offering a more tailored experience, and unveiling insights previously unimaginable. Yet, with this remarkable capability comes a significant responsibility, particularly in the realm of ethics—a critical aspect that cannot be overlooked as we harness this powerful technology. 

Shiran Mlamdovsky Somech, founder of Generative AI for Good, has built her organization around this responsibility. In this interview, we explore with Shiran the ethical challenges and opportunities that arise from the use of generative AI, and the potential of AI to catalyze social progress and create meaningful change.

What are the main ethical dilemmas in using AI for good? 

I am deeply committed to ethically navigating the use of generative AI in social initiatives. This involves grappling with nuanced complexities, particularly regarding the consent and dignity of individuals whose stories we share through our work. This concern is magnified when those individuals cannot give their consent because they are no longer with us. In the era of deepfakes, another ethical imperative is preventing the creation of misleading content that could undermine public trust.

These ethical challenges are centered around three main pillars: consent, transparency, and copyright. Consent is essential, particularly when individuals may not be aware of, or able to agree to, the use of AI in certain contexts, including cases where they are no longer alive. Transparency is critical, as there should be clear disclosure when AI is involved in content creation, though the specifics can be complex. Lastly, copyright concerns must be addressed to respect the intellectual property rights of original creators when AI interacts with or generates content.

Bias in AI is another critical concern. We must ensure that the algorithms we develop do not perpetuate or amplify societal prejudices. This requires vigilance in the data we use and transparency in the AI’s decision-making processes. Protecting individual privacy and preventing unwarranted surveillance are also paramount.

Accountability is a cornerstone of ethical AI use. Determining who is responsible for the decisions made by AI, be it the developers, users, or the AI itself, is a question that demands careful consideration.

In my experience, the most effective way to address these ethical challenges is through collaboration. Creating powerful partnerships between technologists, ethicists, legal experts, and the communities we serve is crucial. It is these alliances that enable us to find the most ethical path forward.

Regarding the topic of consent, what are the main issues here?

In the realm of generative artificial intelligence, addressing the critical issue of consent involves navigating the ethical terrain surrounding the use of personal data, likenesses, and the replication of voices and images. This is particularly pertinent when dealing with subjects who cannot provide consent themselves, such as historical figures or victims of violence.

A prime example is seen in initiatives like “Listen to My Voice.” This initiative, aimed at combating the shadow pandemic of domestic violence, leveraged AI to bring the stories of victims to the forefront. By animating still photos of Israeli women who were murdered by their intimate partners, the campaign gave a voice to the silenced, creating a visceral impact that far exceeded the reach of statistics and reports: it garnered 10 million views in just 3 days, achieved an engagement rate five times higher than average, and notably, almost half of the viewers were men, leading to real-time lifesaving interventions. 

Engaging directly with the families of victims was essential for ethical storytelling. These collaborations ensure narratives crafted through AI are both legally compliant and morally respectful. By gaining consent from immediate family members and co-creating narratives with them, alongside consulting domestic violence experts and legal professionals, we exemplify how innovation can coexist with ethical responsibility.

What are the challenges with deepfakes?

Deepfakes present profound challenges, not just for individuals but for society as a whole. They enable the creation of content that is so realistic it is often indistinguishable from authentic footage. This can lead to reputational damage, emotional distress, and a fundamental erosion of trust in media. For example, consider a recent case in Indonesia, where a political candidate used a deepfake to appear younger and even fabricated an endorsement from a deceased dictator. Such instances show the potential for deepfakes to disrupt democratic processes and manipulate public opinion.

The risks are especially acute for minors. The malicious use of this technology underscores the vulnerability of children, who may lack online security awareness and share sensitive information more readily. The consequences for minors caught up in such situations can be severe, including anxiety, depression, and a reluctance to seek help due to fear or embarrassment.

As individuals: Guardians play a crucial role in mitigating these risks through education on online safety, active monitoring of internet use, and tools that protect digital footprints. Moreover, as guardians, we have a fundamental obligation to exercise caution when sharing our children’s private data, especially images, on social media. This careful approach is essential to preventing unintended exposure and protecting minors from the complexities and dangers of the digital world.

As society: Addressing deepfake technology necessitates a multi-pronged strategy. Key measures include advancing detection technologies through collaborative research, implementing robust legal frameworks to penalize malicious use, and enhancing public awareness and media literacy to help individuals critically assess digital content. It’s also crucial to promote the development of content authentication methods, like digital watermarking, and to encourage ethical AI development practices. Social media platforms play a significant role in identifying and mitigating deepfakes, requiring them to adopt stringent content moderation policies. Support systems for victims of deepfakes are essential to provide necessary legal and psychological assistance.

Shiran Mlamdovsky Somech

Founder, Generative AI for Good

Bio

Shiran is an entrepreneur specializing in GenAI within the realms of social impact.



© 2024 Startup Snapshot