
Exploring the potential for progressive AI

5 challenges, 5 experiments

In most debate-style panel events, it is a cliché that someone will say ‘our view of such a complex issue shouldn’t be binary’.

After all, no transformative technological developments in human history can be said to be completely bad or completely good. It is always more complicated than that. 

The same is true for artificial intelligence.

Over the last year or so there’s been no shortage of doom-laden prognoses or Kool-Aid panaceas. At Rootcause, we have tried to walk the line of unpredictable complexity - acknowledging that generative AI technology is highly problematic while accepting that it is almost certainly here to stay, whether we like it or not.

This matters because AI will intensify a wide range of challenges that progressive professionals and movements are already experiencing within modern digital information environments. We explore exactly how in this essay, which traces the thread between how we, as progressives, use emerging AI technology, the collective strength of civil society and the well-being of our democracy.

The purpose of this piece is to introduce a series of experiments that Rootcause will enact over the next few months. 

If you don’t already know, Rootcause exists to help progressive organisations understand and influence the AI age. 

We do this in three ways:

  1. We think deeply about how our digital information environments are evolving and the challenges and opportunities AI presents. 

  2. We put that thinking into practice by carrying out real-world experiments with AI technology to see how it might be used to support progressive professionals and organisations. 

  3. We also advocate for a healthier digital information environment, acknowledging the danger to democracy posed by the enormous power technology companies hold over the future of information.

Our plan for the next few months is to conduct a series of experiments to examine whether AI might help address some of the problems progressive organisations are facing in modern digital information environments.

For each problem we will share our analysis of the current situation, the methodology by which we conduct our experiment and an assessment of the findings and their implications for progressive organisations. All of this will be shared via this newsletter so if you’ve not already signed up, take a second to do so here. 

The experiments will draw on the work we have already done in partnership with organisations like the European AI and Society Fund, the Global Strategic Communications Council and Campaign Lab. Through these projects we’ve analysed almost a quarter of a million pieces of social media content and begun to develop our own unique ways of measuring what works (and what doesn’t) in different digital spaces.  

Five challenges, five experiments

We have identified five challenges facing progressive professionals and organisations. For each, we’ve tried to encapsulate the challenge in a single sentence, but behind each sits a considered assessment of the disruption AI will bring to our digital information environments.

If you are interested in a specific challenge and want to get involved in our experiments, please get in touch. We are especially keen to link up with organisations that have real-life examples of these problems that they are currently trying to solve.

  1. None of us have time to consume all of the information we subscribe to (Information Overload)

If you’re anything like me, you probably don’t open many of the newsletters or WhatsApp groups you’re subscribed to. Information overload is nothing new but once AI-generated content gets up to full speed it’s going to be harder than ever to keep on top of everything. 

We are going to explore whether we can use AI to curate all of our subscriptions and information sources about a topic and produce a daily summary which offers us personalised takeaways that are relevant to our work. 
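A minimal sketch of the kind of pipeline we have in mind, in Python. Everything here is illustrative: the feeds, interests and `summarise` stub are placeholders, and in a real build the summary would come from an LLM API call rather than the stub below.

```python
# Sketch: collect new items from subscribed sources, then ask an LLM for a
# personalised daily digest. The LLM call is stubbed out; swap in a real
# API client to generate the digest for real.

def fetch_items(sources):
    """Placeholder: in practice, pull new posts from RSS feeds,
    newsletter inboxes, messaging-app exports, etc."""
    return [item for feed in sources.values() for item in feed]

def build_digest_prompt(items, interests):
    """Combine all collected items into a single summarisation prompt."""
    bullet_list = "\n".join(f"- {title}: {body}" for title, body in items)
    return (
        f"My interests: {', '.join(interests)}.\n"
        "Summarise the items below into a short daily digest, keeping only "
        "takeaways relevant to my work:\n" + bullet_list
    )

def summarise(prompt):
    """Stub standing in for an LLM API call."""
    return "(daily digest would be generated here)"

# Hypothetical sources and interests, purely for illustration.
sources = {
    "newsletter-a": [("AI policy update", "New EU rules proposed...")],
    "newsletter-b": [("Platform news", "Threads adds new feed options...")],
}
prompt = build_digest_prompt(fetch_items(sources), ["AI policy", "social media"])
digest = summarise(prompt)
```

The key design choice is that curation happens once, centrally: every source feeds one prompt, so the reader gets a single daily summary instead of a dozen unread inboxes.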

  2. It’s hard to understand why people hold different views to our own (Filter Bubbles)

Filter bubbles are still the subject of academic debate but the fragmentation of the information environment isn’t up for discussion. The personalisation of information, already driven by data and algorithmic decision making, is likely to step up a gear in the AI age. This will make it increasingly difficult to understand why other people hold the views they do, because we won’t be exposed to as much of the same information. 

We are going to use LLM technology to build a small number of personas that explore different views and perspectives. These can test arguments but also identify ways to build common ground. We will then use these personas to develop and test messages designed to challenge and persuade people who hold those views.

  3. We don’t know enough about what our audiences think and why (Limited insight)

There are a limited number of tried and tested ways to find out what people think: surveys, opinion polls and focus groups are the top three. Yet each has its drawbacks: surveys lack depth and nuance; polls are more flexible but expensive; and whilst focus groups offer more depth, they can be subject to unpredictable group dynamics and are difficult to scale reliably.

We want to explore whether LLM-powered chatbots can offer progressive organisations a new route to audience insight by letting people respond in natural language, in their own voice. To test this we will create a chatbot that talks to people directly about a specific issue and captures their views in detail over multiple questions, before transcribing the audio to text and analysing the responses in the round to understand what they think and why.
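The interview loop itself is simple; a sketch under the assumption of a fixed question list, with transcription stubbed out (a real version would use speech-to-text on recorded voice answers and an LLM to generate follow-up questions).

```python
# Sketch: a multi-question interview loop for an insight chatbot. The
# questions are illustrative, and transcription is stubbed; the point is
# the shape of the data captured for later analysis.

QUESTIONS = [
    "What do you think about this issue?",
    "Why do you feel that way?",
    "What would change your mind?",
]

def transcribe(answer):
    """Stub: in practice, speech-to-text on a recorded voice answer."""
    return answer

def run_interview(answer_fn):
    """Ask each question in turn, capturing a transcribed answer for each."""
    return [
        {"question": q, "answer": transcribe(answer_fn(q))}
        for q in QUESTIONS
    ]

# Simulated respondent, purely for illustration.
transcript = run_interview(lambda q: f"(respondent answer to: {q})")
```

Each completed interview yields a structured transcript, so hundreds of conversations can be analysed together rather than one focus group at a time.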

  4. We are spread too thin trying to communicate across so many channels (Fragmentation)

The range of social media platforms that organisations need to consider is always shifting as new platforms see short-term popularity, old platforms fall out of favour and no trend lasts forever. In the last year or so social media managers have had to contend with the arrival of Threads, the degradation of Twitter, the continued strength of TikTok and shifting preferences in messaging apps.

We want to examine how well AI can help us to devise new types of content and successfully execute a messaging campaign over multiple platforms. To test this we will work with an LLM and possibly trial emerging ‘AI agents’ to create multi-channel content plans. We will evaluate the content and keep a human in the loop to assess the quality of what can be achieved and avoid the risk of contributing to AI ‘slop’. By doing this we hope to see how AI might help simplify the workload of managing several social media channels at once.
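One way to structure such a plan, sketched below. The channels, length limits and `draft_for_channel` stub are all assumptions for illustration; a real version would have an LLM rewrite the core message in each channel's idiom, with every draft held for human review.

```python
# Sketch: fan one core message out into per-channel drafts, with each
# draft held for human sign-off before publishing. draft_for_channel is
# a stub for an LLM call; channel limits here are illustrative only.

# Hypothetical channels and character limits, purely for illustration.
CHANNELS = {"twitter": 280, "threads": 500, "tiktok-caption": 150}

def draft_for_channel(message, channel, max_len):
    """Stub: an LLM would rewrite the message in the channel's idiom.
    Here we just tag and truncate it to the channel's length limit."""
    return f"[{channel}] {message}"[:max_len]

def build_plan(message):
    """Produce one unapproved draft per channel from a single core message."""
    return [
        {
            "channel": ch,
            "draft": draft_for_channel(message, ch, limit),
            "approved": False,  # a human reviewer flips this after checking
        }
        for ch, limit in CHANNELS.items()
    ]

plan = build_plan("Our campaign launches next week. Here is how to get involved.")
```

Keeping `approved` as an explicit field makes the human-in-the-loop step part of the data model: nothing ships until a person has looked at it.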

  5. Content algorithms are opaque and unaccountable (Social Media Accountability)

Many social media platforms rely on advertising for their income, and that reliance means they will do whatever they can to maximise the time we spend on their platforms. To do this they have developed algorithms which power the ‘recommender systems’ that promote content into our feeds. These systems tend to promote content that provokes strong emotional reactions and drives engagement and sharing. Whilst in many ways the algorithms leverage human nature (and human nature is not the fault of social media platforms), the net effect of promoting only the most provocative content is a rise in mis- and disinformation, political polarisation and increasingly serious offline consequences as more extreme content is encouraged and promoted.

In order to understand the types of content which perform best on different platforms, we are examining the ability of LLMs to categorise and analyse very large quantities of social media data. Done properly, this will enable us to draw well-informed conclusions about the types of content and topics organisations should focus on producing for particular platforms if they want to ‘work with the grain’ of the algorithms that can make the difference between the success and failure of a communications campaign.
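At its core this is a classify-then-aggregate pipeline, which can be sketched as follows. The label set, keyword rules and example posts are all invented for illustration; in practice `classify` would be an LLM call choosing one label per post, run in batches over the full dataset.

```python
# Sketch: categorise a set of posts against a fixed label set, then
# aggregate engagement per category to see which types of content
# "work with the grain" of a platform's algorithm. classify() is a
# crude keyword stub standing in for an LLM classification call.

from collections import defaultdict

LABELS = ["policy explainer", "call to action", "personal story", "other"]

def classify(text):
    """Stub: in practice, prompt an LLM to pick one label from LABELS."""
    text = text.lower()
    if "sign" in text or "join" in text:
        return "call to action"
    if "my " in text:
        return "personal story"
    if "policy" in text or "bill" in text:
        return "policy explainer"
    return "other"

def engagement_by_category(posts):
    """Average engagement per category across (text, engagement) pairs."""
    totals, counts = defaultdict(int), defaultdict(int)
    for text, engagement in posts:
        label = classify(text)
        totals[label] += engagement
        counts[label] += 1
    return {label: totals[label] / counts[label] for label in totals}

# Hypothetical posts with engagement scores, purely for illustration.
posts = [
    ("Sign our petition today", 120),
    ("How the new housing bill works", 40),
    ("My story of switching to a heat pump", 200),
]
averages = engagement_by_category(posts)
```

Once posts carry category labels, per-platform comparisons reduce to simple aggregation, which is what makes the approach viable at the scale of hundreds of thousands of posts.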

Over the coming weeks we will share what we learn from these experiments and hopefully identify the most fruitful angles of approach for further work. We are keen to collaborate with people who are thinking about similar questions and organisations who can contribute ideas, test bed environments and challenges for the work we are undertaking.

Please do get in touch if you want to bounce any ideas around, otherwise watch this space! 
