Sam Gregory Keynote Speaker
- Human rights technologist and global authority on deepfakes.
- Co-chaired the Partnership on AI's Expert Group on AI and Media.
- TED speaker on "When AI can fake reality, who can you trust?"
Sam Gregory's Biography
Sam Gregory is an influential figure in preparing for the challenges posed by generative AI. As a global authority on deepfakes, he has significantly shaped AI policy and public discourse through Congressional testimony before both the US House and Senate and a compelling TED Talk addressing the threat of deepfakes. In 2018, he launched the groundbreaking “Prepare, Don’t Panic” initiative, which addresses deceptive audiovisual AI and grounds responses to it in real-world needs. His pioneering work has influenced platform policies, the development of trust technologies, and key public discussions on priority actions.
With a distinguished career in video and image verification technology, Gregory is also an award-winning human rights advocate and technologist. His efforts in combating online disinformation alongside frontline reporters and advocates have equipped him to navigate the rapidly evolving landscape of generative AI and to steer policy discussions focused on maintaining authenticity and trust in the media.
Recognized by the Chronicle of Philanthropy as a leader against Big Tech in the AI era, Gregory is a foremost expert on deepfakes and their global impact. He advises governments and communities on countering misinformation and safeguarding trust, highlighting how manipulated media can incite conflict, lead to unjust imprisonment, and justify coups. His expertise has been featured in over 80 media outlets and has reached influential forums including the White House, Congress, and Davos, and his trainings have empowered hundreds of practitioners. He has co-chaired the Partnership on AI’s Expert Group on AI and Media and served on the Tech Advisory Council of the International Criminal Court.
As the Executive Director of WITNESS, a global human rights organisation that received the inaugural Peabody Global Impact Award in 2024, Gregory leverages over two decades of experience in video, technology, and media to advance human rights and civic engagement.
Gregory’s academic journey includes being a Thomas White Scholar at St John’s College, University of Oxford, a Kennedy Memorial Scholar at the Harvard Kennedy School, and earning a PhD from the University of Westminster, where he focused on participatory witnessing, deepfakes, and trust.
Sam Gregory's Speaking Topics
- Keeping it real. What’s actually happening with deepfakes, AI and elections in 2024?
Discover how synthetic media and deepfakes are being used in the mega-election year 2024, and what is being done about it.
- When AI can fake reality, whom do you trust?
Discover strategies to combat deepfakes and defend truth in a world of manipulated media. A big-picture, clear-eyed look at the realities of deceptive AI, and the personal, practical and policy solutions needed if we are to continue to trust what we see and hear.
- Don't Panic, Prepare: How to Counter the Lies in Our AI-Powered World
Deepfakes are just the tip of the iceberg. In a world increasingly reliant on artificial intelligence, the ability to discern truth from fiction has become more critical than ever. This talk will equip you with the essential practical skills to navigate the age of AI-powered misinformation.
- Bridging the Gap Between AI and Authenticity: Policy and Tech Solutions for a Trusted Future
AI has immense potential, but trust is crucial, particularly when reality can be seamlessly mimicked. Explore policy and tech solutions that address the growing need to explain how humans and AI mix in the media and communications we make and consume.
- The power of video for human rights: Exposing lies, empowering justice
Video has long been a powerful tool for documenting human rights abuses and raising awareness of injustice. This talk delves into that transformative power: what does it take to witness on the frontlines, and what are the challenges in an era when we cannot necessarily believe what we see and hear?