"AI can not only improve short-term productivity of organizations but can also ... increase the organization’s collective intelligence." https://lnkd.in/edDzaGFV

At first glance, this article looks like it will tick all my boxes:

👍 AI for Collective Intelligence? Yes please! After all, we have billions of *actually* intelligent people around the world; we just need help organising them.

👍 AI for automation is a really bad idea? Absolutely! Let's avoid "downsides such short-term thinking, loss of flexibility due to lock-in, and loss of human intuition and skill", something I wrote about last month (1).

But... while I like Christoph Riedl's exploration of how AI can boost the three aspects of collective intelligence, there are contradictions, e.g.:

* Improving collective attention involves AI guiding "how groups allocate and align their focus on key tasks and priorities". Yet later the fact that "AI can significantly affect what teams pay attention to" is presented as a risk: "joined by an AI voice assistant, [teams] started to align their attention [to the assistant's] ... even adopted the [assistant's] specific terminology ... which further shaped where groups directed their attention".

* AI should be used to "supercharge experimentation", yet "it also caused a decrease in intellectual diversity ... Through a form of algorithmic monoculture, receiving feedback from the same, centralized AI system, individuals tended to specialize in similar ways". (Please Lord let me be the first person to make #algorithmicmonoculture a hashtag ... Damn, 3rd place!)

* One way to supercharge experimentation is to "create multiple first drafts from which the best one is selected for further development". Yet the author also argues that using AI to automate writing first drafts "replaces human intuition, expertise, experience, and reasoning... the downsides of automation [are] simply pushed down to a lower level."
I'd also object to the multiple-first-drafts strategy on "blurry jpeg" grounds (2). But don't get me wrong: by being nuanced about the risks and benefits of #AI, the article is better than 95% of the content I see, which generally falls into one of two simplistic "AI is terrible / great" camps. We need to get beyond that. Yet, as these contradictions show, the terrible seems intimately entangled with the great. Untangling this will be interesting work.
As I'm sure you recall, concerns around the algorithmic monoculture concept (if not the term itself) were doing the rounds a few years back, pre-GPT-3.5, mostly to do with cultural bias in early versions of DALL-E and Midjourney. Interesting that someone's found conceptually similar organisational effects via apparently semi-subliminal machine-to-human prompt engineering. AI as groupthink accelerator?
Reference (1): AI may kill our ability to innovate by mid-century: https://www.linkedin.com/feed/update/urn:li:activity:7233773857080250368/