Takeaways from the Oxford Generative AI Summit 2023

In December, our Founder Jonathan Tanner attended the Oxford Generative AI Summit, hosted each year by Jesus College, Oxford. Here are his crib notes, first shared on LinkedIn.

This year’s OxGen was an impressive feat of convening by Cassidy Bereskin et al., and it’s thrown up some things to think about.

Firstly: T&Cs.

One speaker mentioned that nobody reads social media platforms’ T&Cs, so we created this table (with Claude) setting out the different types of personal data each platform may collect and what it may be used for.

Because AI isn’t perfect, I should say this isn’t necessarily gospel truth, but it’s very helpful. I wonder on what basis companies would resist being compelled to produce more accessible T&C summaries for the vast majority of their users, who are not experts in legal language?
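If you want to try something similar yourself, here is a minimal sketch of the idea using Anthropic’s Python SDK. We worked with Claude conversationally, so treat this as an illustration rather than a record of exactly what we did: the file name, prompt and model name below are all placeholders.

```python
# A minimal sketch: asking Claude to summarise a platform's T&Cs into a
# plain-language table. Assumes the `anthropic` Python SDK is installed
# and ANTHROPIC_API_KEY is set in the environment.
import anthropic

client = anthropic.Anthropic()

# Hypothetical local copy of the platform's terms and conditions.
terms_text = open("platform_terms.txt").read()

prompt = (
    "Summarise the following terms and conditions as a two-column table: "
    "one column for each type of personal data the platform may collect, "
    "one for what that data may be used for. Use plain, non-legal language.\n\n"
    + terms_text
)

response = client.messages.create(
    model="claude-3-5-sonnet-20241022",  # assumed model name; substitute a current one
    max_tokens=1024,
    messages=[{"role": "user", "content": prompt}],
)

# The generated summary table (Claude will typically return Markdown).
print(response.content[0].text)
```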

Other incomplete thoughts I’ve had are:

  1. We know democracy is on the back foot globally. AI has strong, multidimensional, anti-democratic potential. We all know it is risky, but we need more proactive thinking about how AI might be used to address contemporary democratic frailty and rebuild citizen confidence in democratic institutions (a separate but related multidimensional issue).

  2. There’s been quite a lot of focus on the technological infrastructure of AI as it informs competing visions of what may or may not be possible in different timeframes. The AI infrastructure arms race between the US and China over computing power is important, but to the average citizen it matters just as much who has access to the social power that AI technology will generate. Some regulatory approaches being pushed by tech leaders may well lock in power for those companies at the expense of competition: do the track records of leading US tech companies on social responsibility suggest that we should grant them long-term privileged market access?

  3. Getting ahead of the curve means acknowledging the things we think are highly likely and considering the social impacts. Instant translation and the widespread acquisition of coding capabilities seem like two sure bets that we should be thinking about now. The first will have consequences for human social relationships, political movements and possibly the nation state. The second will see a step change in how we individually and collectively engage with computer technologies and explore the potential of doing so.

  4. Journalists are worried about the impact of AI on journalism, and rightly so. Yet this angst is often expressed through conversations about content generation and business models. Who really wants more content right now? And who isn’t already deriving a lot of insight from social sources outside legacy media? There should be more emphasis on reskilling journalists: investing in their ability to report on AI by getting under the skin of algorithms, thinking beyond human sources to learn from open-source material, rethinking content distribution approaches, and maintaining their relationship with the truth when the truth is increasingly impossible to ascertain.

Were you there? I’d love to know what your takeaways were. Drop me a line and let’s share notes.
