
Artificial Intelligence

A Final Round-Up of Publications and Other Updates from 2024

I disappeared on summer holidays pretty much immediately after my keynote on practice mapping at the ACSPRI conference in Sydney in late November, so I haven’t yet had a chance to round up the last few publications from me and my team for the year (as well as a handful of early arrivals from 2025). And what a year it’s been: although it’s felt as if I’ve taken a more supportive than leading role these past few months, there have still been quite a few new developments, and a good deal more is yet to come. I’ll group these thematically here:

 

Polarisation, Destructive or Otherwise

Central to the work of my current Australian Laureate Fellowship has been the development of our concept of destructive polarisation, and exploration of the five key symptoms we’ve identified for it: (a) breakdown of communication; (b) discrediting and dismissing of information; (c) erasure of complexities; (d) exacerbated attention to and space for extreme voices; and (e) exclusion through emotions. The point here is to distinguish such clearly problematic dynamics from other forms of polarisation that are more quotidian and benign, and may even be beneficial as they enable different sides of an argument to better define what they stand for. Where polarisation becomes destructive, on the other hand, mainstream political and societal cohesion declines and fails (and aren’t we seeing a lot of that at the moment…). I’ve got to pay tribute here to my Laureate Fellowship team, and especially the four Postdoctoral Research Associates Katharina Esau, Tariq dos Santos Choucair, Sebastian Svegaard, and Samantha Vilkins – Katharina in particular drove the development of this concept from its first presentation at the 2023 ICA conference in Toronto to the comprehensive journal article which has now been published in Information, Communication & Society:

Katharina Esau, Tariq Choucair, Samantha Vilkins, Sebastian F.K. Svegaard, Axel Bruns, Kate O'Connor-Farfan, and Carly Lubicz-Zaorski. “Destructive Polarization in Digital Communication Contexts: A Critical Review and Conceptual Framework.” Information, Communication & Society, 2024. DOI: 10.1080/1369118X.2024.2413127.

Meanwhile, I’ve led the writing on a second article that also outlines this concept and provides some further examples for its symptoms. This has now been published in the new Routledge Handbook of Political Campaigning, and counts as our first publication in 2025:

Human vs. LLM Coding of Australian Charities’ Civic Activities

The final speaker in this ACSPRI 2024 conference session is Aaron Willcox, presenting work with the Scanlon Research Institute to explore local government-level civic opportunities. For organisations, such opportunities include hosting events, offering memberships, involving individuals through volunteering, and taking action through advocacy and campaigns.

Exploring Effective Persuasion Using LLMs

The next speaker in this ACSPRI 2024 conference session is Gia Bao Hoang, whose interest is in the use of LLMs for detecting effective persuasion in online discourse. Such an understanding of effective persuasion could then be used for productive and prosocial purposes, or alternatively to identify problematic uses of persuasion by bad actors.

Using LLMs to Assess Bullying in the Australian Parliament?

The next speaker in this ACSPRI 2024 conference session is Sair Buckle, whose interest is in the use of Large Language Models to detect bullying language in organisational contexts. Bullying is of course a major societal problem, including in companies, and presents a psychosocial hazard. There are several proposed approaches to address it: surveys, interviews, and manual linguistic classification (e.g. in federal parliament), which are subjective and labour-intensive; and pulse surveys and self-labelling questionnaires (e.g. …

Using Large Language Models to Code Policy Feedback Submissions

The first session at the ACSPRI 2024 conference is on generative AI, and starts with Lachlan Watson. He is interested in the use of AI assistance to analyse public policy submissions, here in the context of Animal Welfare Victoria’s draft cat management strategy. Feedback could be in the form of written submissions, surveys, or both, and needed to be analysed using quantitative approaches given the substantial volume of submissions.
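As a rough illustration only, a minimal Python sketch of this kind of LLM-assisted coding might look as follows; the model choice, category list, and prompt wording here are my own placeholder assumptions, not details of Watson’s actual pipeline:

```python
# Hedged sketch of LLM-assisted coding of a policy feedback submission.
# Model name, categories, and prompt are illustrative assumptions only.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

CATEGORIES = ["cat containment", "desexing", "registration", "other"]  # hypothetical

def code_submission(text: str) -> str:
    """Ask the model to assign exactly one coding category to a submission."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {"role": "system",
             "content": ("You are coding public feedback on a draft cat "
                         "management strategy. Reply with exactly one of: "
                         + ", ".join(CATEGORIES) + ".")},
            {"role": "user", "content": text},
        ],
        temperature=0,  # keep outputs as deterministic as possible
    )
    return response.choices[0].message.content.strip()
```

At this volume of submissions, the appeal of such a setup is obvious: a single prompt template can be applied consistently across thousands of texts, with human coders validating only a sample.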

LLMs in Content Coding: The 'Expertise Paradox' and Other Challenges

And the final speaker in this final AoIR 2024 conference session is the excellent Fabio Giglietto, whose focus is on coding Italian news data using Large Language Models. The project worked with some 85,000 news article URLs shared on Facebook during the 2018 and 2022 Italian elections, and first classified these URLs as political or non-political; it then produced and clustered text embeddings for the articles, and used GPT-4-turbo to classify the dominant topics in these clusters.
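To make that pipeline concrete, here is a hedged sketch of the embed-cluster-label steps; the libraries (sentence-transformers, scikit-learn), model names, and cluster count are my assumptions, and Giglietto’s implementation (which used GPT-4-turbo for the labelling step) may differ in every detail:

```python
# Hedged sketch of an embed-cluster-label pipeline for news articles.
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

articles = ["... article text 1 ...", "... article text 2 ..."]  # placeholder corpus

# 1. Produce a text embedding for each article.
embedder = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder model
embeddings = embedder.encode(articles)

# 2. Cluster the embeddings; the number of clusters is a hypothetical choice.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=42)
cluster_labels = kmeans.fit_predict(embeddings)

# 3. The dominant topic of each cluster would then be classified by sending
#    a sample of its articles to an LLM (GPT-4-turbo in the original study).
```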

LLMs and Transformer Models in News Content Coding

The next speaker in this final AoIR 2024 conference session is the great Hendrik Meyer, whose interest is in detecting stances in climate change coverage. This focusses especially on climate change debates in German news media, covering climate protests, discussions about speed limits, and discussions about heating and heat pump regulations.

Towards an LLM-Enhanced Pipeline for Better Stance Detection in News Content

The next speaker in this session at the AoIR 2024 conference is my QUT colleague Tariq Choucair, whose focus is especially on the use of LLMs in stance detection in news content. A stance is a public act by a social actor, achieved dialogically through communication, which evaluates objects, positions the self and other subjects, and aligns with other subjects within a sociocultural field.
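In prompt-based form, LLM stance detection can be sketched roughly as below; the target, label set, and prompt wording are illustrative assumptions on my part, not Choucair’s actual design:

```python
# Hedged sketch of prompt-based stance detection with a chat-style LLM.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def detect_stance(text: str, target: str) -> str:
    """Classify the stance a text takes towards a given target."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {"role": "system",
             "content": ("Classify the stance of the text towards the target. "
                         "Answer with exactly one word: favour, against, or neutral.")},
            {"role": "user", "content": f"Target: {target}\nText: {text}"},
        ],
        temperature=0,
    )
    return response.choices[0].message.content.strip().lower()

# e.g. detect_stance("This policy will bankrupt ordinary households.",
#                    "the proposed regulation")  # might return "against"
```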

Using LLMs to Code Problematic Content in the Brazilian Manosphere

The second speaker in this final session at the AoIR 2024 conference is Bruna Silveira de Oliveira, whose focus is on using LLMs to study content in the Brazilian manosphere. Extremist groups in this space seek legitimisation, and the question here is whether LLMs can be used productively to analyse their posts.

Paying Attention to Marginalised Groups in Human and Computational Content Coding

The final (!) session at this wonderful AoIR 2024 conference is on content analysis, and starts with Ahrabhi Kathirgamalingam. Her interest is especially in questions of agreement and disagreement between content codings; the gold standard here has long been intercoder reliability, but this tends to presume a single ground truth which may not exist in all coding contexts.
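For context, the conventional check she is questioning looks something like the following: a chance-corrected agreement coefficient such as Cohen’s kappa (Krippendorff’s alpha is the other common choice), computed over two coders’ labels for the same items. The toy data here are invented purely for illustration:

```python
# Toy illustration of intercoder reliability via Cohen's kappa, which
# treats coder disagreement as error against a single ground truth.
from sklearn.metrics import cohen_kappa_score

# Hypothetical nominal codings of the same ten items by two coders.
coder_a = ["hate", "none", "none", "hate", "none", "hate", "none", "none", "hate", "none"]
coder_b = ["hate", "none", "hate", "hate", "none", "none", "none", "none", "hate", "none"]

kappa = cohen_kappa_score(coder_a, coder_b)
print(f"Cohen's kappa: {kappa:.2f}")  # chance-corrected agreement in [-1, 1]
```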

