Ahead of the Global AI Safety Summit, the Royal Society and Humane Intelligence will host a red teaming challenge focused on AI-generated scientific disinformation. Graduate students will be challenged to test the guardrails of the open-source large language model LLaMA 2 with regard to disinformation about climate change and COVID-19.
Building on the Royal Society’s report, The online information environment, the exercise aims to explore potential near-term AI safety risks and to provide insights into the efficacy of guardrails against AI-generated scientific disinformation. Data and insights from the exercise will be analysed, published, and shared with the scientific community, AI developers, and policymakers.
The red teaming challenges and post-event analysis will be guided by an expert advisory group which includes Professor Christl Donnelly FRS; Professor Julia Gog OBE; Dr Gwenetta Curry; Professor Marian Scott OBE FRSE; Professor Nathalie Pettorelli; and Dr Kris De Meyer.
Humane Intelligence is a non-profit organisation founded and led by industry veterans Dr Rumman Chowdhury and Jutta Williams.
This event is being organised independently of the UK Government and is not part of the Road to Summit activities.
Attending this event
- This event is limited to graduate students with a background in infectious diseases and/or climate sciences (no computer science background or experience required).
- Participants are required to bring a laptop with the Chrome browser installed.
- In-person attendance is by invitation or application only.
- To apply to attend, please sign up using the 'Book event' link on this page. We will then be in touch.
- Refreshments and afternoon tea will be provided. There are many places to eat and drink nearby if you would prefer to purchase food offsite. Participants are also welcome to bring their own food and drink to the event.
- Attendees are expected to stay for the full duration of the event to ensure a meaningful exercise.
- For any queries or accessibility requirements, please email the Science Policy team.