Optimistically sceptical and anti-hype: where I’ve landed on AI

I’m gradually tidying up this blog, tweaking formats, setting up redirects and updating meta descriptions.

Reading old posts about developments I once supported – where colleagues now live – feels a bit like going through photos from yesteryear. My enthusiasm for social media in older posts is especially striking. We did some great things on Twitter back in the day. And didn’t I love it.

Things change. The enshittification of the internet is real and multi-layered.

And as AI advances into so many aspects of our lives, I hold more nuanced views about tech today. On balance, I’d say I’m optimistically sceptical. Or maybe sceptically optimistic.

Either way, I strongly support good tech, while pushing back against the bad bits. There is plenty to think about every day.

Making AI work for us

We’re building a tech stack at Distinctive, as part of an open discussion with each other and our clients about how we use AI. I’m interested in where it can help us do better work, without wanting to boil the ocean. Analysing feedback on consultation and engagement and streamlining admin are two areas of positive progress.

I’m much less interested in creating chatbots and apps to support our work.

Our AI policy has been one of the most visited pages on our website since we published it in January. People responded positively, and it suggests an appetite to see what others are doing.

Progress for us is steady, incremental and encouraging, rather than exponential.

I’m proud that we’ve made time to build this capability at the same time as growing the business and expanding the team.

So why do I also feel sceptical – and, at times, a sense of dread – when I think about what’s happening with AI elsewhere?

Reasons to be sceptical

First, I hate the hype around large language models like ChatGPT and their supposed ability to transform everything all at once. I don’t mind admitting this is out of step with my day-to-day experience of using them. I’ll come back to this point shortly.

I’ve started blocking accounts that push ‘do this or your business will die’ posts on LinkedIn. Sometimes, I’ll challenge the poster. I’m struck by how many impressions my responses get (lots), against how little engagement they generate (hardly any). I don’t know why that is, but it feels like the loudest, most confident voices dominate the conversation. Often, these posts and many of the responses to them are AI-generated. This feels like an unhealthy place for conversations about AI to be.

Second, I’m very anti badly implemented tech that doesn’t factor in the needs of the people who have to use it. Dealing with an organisation should be straightforward. But digitising services has too often turned it into an endurance test. I’d love to see ministers set hard expectations for customer service. This would force teams to focus on outcomes over things like reducing call times.

And then there’s the day-to-day reality of using LLMs. The little things that burn time and drain my energy and sense of humour. Inventing ‘solutions’ that don’t exist. Dashing off in ways that aren’t helpful. Head-scratching mistakes that lead you to question your own sanity – what the f*** is happening here?

I guess this is what people mean when they talk about AI’s ‘jagged edge’, when you find out what it’s bad at alongside what it can do. It stands in stark contrast to the hype, and it’s useful learning in its own right.

Feed your curiosity, not the hype

The pace of change is staggering, and standing still is not an option for us. But it’s important that we keep what matters firmly in view.

I believe there’s a big opportunity for good businesses who put people at the heart of what they do.

Those who can use tech to better connect with their customers and stakeholders are the ones who can thrive. Those who use it as a barrier may find it harder, because people are sick to death of bad tech. And if you’re just using AI to do the same shit faster, you’re quite literally involved in a race to the bottom.

Next time someone working for a tech provider breathlessly urges you to ‘read this, or else…’ hold that thought.  

While regulators struggle to keep up with the pace of change, it’s on us to make sense of this challenge. No one has all the answers. I’m more trusting of those who acknowledge this than the silver bullet merchants who confidently speak like they’ve worked it all out.

We need to stay curious and put the work in. It’s also our responsibility not to feed the hype.

Whether you’re optimistic, sceptical or a mix of both, there are great thinkers out there who help me see through the fog more clearly.

  • Andrew Bruce Smith, offering an expert comms perspective.
  • One Useful Thing, ‘a research-based view on the implications of AI’ by Professor Ethan Mollick.
  • Marcus on AI, a more sceptical outlook on the challenges of large language models, written by Professor Gary Marcus.

And if you want to hear the team’s perspectives, we sometimes write about this in the company Substack which comes out on the first Friday of the month. Catch previous editions here and sign up below.

Photo by Solen Feyissa on Unsplash.