Social Artificial Intelligence
October 31, 2024

We humans spend our time on earth building up, maintaining and restructuring social networks with other humans. Or as Nicholas A. Christakis, MD, PhD, MPH, sociologist and physician, so nicely puts it during his recent presentation as part of ETH’s Global Lecture Series: “We form long-term, volitional, non-reproductive unions with unrelated conspecifics.” Or for liberal arts majors like me: we cultivate formal and informal relationships with friends, acquaintances, colleagues – and people we may not really like, or family members we may not even get along with. And throughout our lives, we use these connections to solve problems, especially while we work. Whether we are successful depends on how well we perform together.
To this end, Christakis’ research looks at what happens when you take social systems and add “dumb AI” bots – which aim to assist instead of completely replace human intelligence – to them. He summarises: “Human-human interaction has been modified by the presence of the machine and that’s what my laboratory is focused on: how human behaviours and interactions in groups can be changed by the introduction of forms of AI into our midst and by the specific programming of that AI, which might affect our ability to work together to solve a range of dilemmas. So the question is, can we use an understanding of social network structure and function to assess the uses and impact of social AI within and upon human groups?”
How can the study of these bots help us create more efficient human-machine hybrid systems for things we already have – driverless cars, checkout machines, digital assistants – and develop fresh opportunities for a new world of social artificial intelligence?
Christakis stresses that it’s not about replacing humans with super-smart AI, but about using dumb AI to supplement smart humans. He notes that carbon can exist as graphite or diamond, depending on how its atoms are connected – and that in a similar way, how we are connected to AI determines what it can do for us.
After his presentation, Christakis sits down with ETH Professor of Bioethics Effy Vayena and Chris Luebkeman, Leader of the Strategic Foresight Hub at ETH and the host of the event.
One word that arose during the presentation was “nudge”. Within the context of Christakis’ studies, the bots would “nudge” human participants to work in a more efficient way.
Vayena takes this up: “How can we make decisions or how can we guarantee that we’re going to nudge towards […] a particular direction […] that we collectively believe is a good thing to go towards? That’s not a new problem, clearly an older problem, but it makes me think […] how can we control for those possibilities?”
“I don’t think of my work explicitly as nudges […] but you’re absolutely right,” Christakis answers. “Evil actors or even actors that aren’t evil necessarily but are maximizing profit […] could employ these techniques to achieve other objectives. We had branches of the experiment where people knew the bot was labelled and what we generally found is that the labelling of the bot as a bot didn’t affect the outcomes of the experiments very much. If they believed the bot was […] acting to help the humans [participants] were happy to have the bot in their midst [but] not so if they thought the bot was malevolent and trying to harm them, then they got really upset.”
Christakis also explains that technologies will increasingly interact with us as if they were human, which will accelerate the rise of hybrid systems of humans and machines. The aim of his research is to see whether this can improve our coordination, cooperation, communication, creativity, navigation, sharing – and evacuation systems in cases of disaster.
Towards the end of the discussion, a question from the audience looks toward the future: “Are we going to see an evolution of social skills?”
Christakis began his presentation with how human-machine interaction works with digital assistants, so we can find the background for his answer there. We give commands like – “Alexa: weather.” We don’t speak to them politely like we would with people – “Hey Alexa: Could you please tell me what the weather will be like today?” As kids grow up giving abrupt orders to these kinds of machines, they might take that attitude with them to the playground and be rude to other children.
But while rudeness can be trained out of kids, Christakis is looking at the evolutionary big picture. “Over lifetime horizons there will be a degrading of our social interactions,” he confirms. He gives the example of our domestication of milk-producing animals some 9,000 years ago: by changing our environment during the Neolithic Age, we eventually gained the ability to digest lactose beyond infancy. Christakis believes that in a similar way, AI will affect how we think over the next 10,000 years – that we will evolve due to this technology.
Which opens up a range of questions. Will we really have intelligent machines for the next 10,000 years? Or will only a percentage of the world’s population have them? And is there long-term capacity for that sort of thing, given the massive effect our machines are already having on the planet? As usual, the Global Lecture Series raises many questions, stimuli for enthusiastic discussions and inspiration for new ideas.
Watch the full recording:
Find out more about our ETH Global Lecture Series here.