The value of moving slowly in a fast game.

  • Writer: Hugh Gage
  • May 27
  • 3 min read

Apple's slow start in AI vs OpenAI's significant progress.


Apple might be late to the party with their rollout of AI, but perhaps that isn't such a bad thing:

  1. Nobody has yet monetised it in a meaningful and profitable way. True, Nvidia is making money from selling the chips required to train the models, and there's an argument that AI is being used to drive efficiencies that improve profit, but Klarna, a high-profile convert to the idea of using AI to replace humans in customer services, is now hiring again. Neither of those examples is the same as using AI to generate revenue via a new product offering. Perhaps one example in this vein is Garfield.Law, the UK-based AI-only law firm recently set up to battle the scourge of late payments, but it's too early to tell whether the business is a success.

  2. It's not clear how the models are being developed and whether sufficient safeguards are in place. Rumours around Sam Altman's temporary ouster from OpenAI in November 2023 suggested he might have been ignoring safety concerns in his eagerness to gain a competitive edge over his rivals.

  3. The big players (Google, Anthropic, Microsoft, OpenAI, Meta and others) are all in a race to improve their models and roll them out to consumers and the enterprise. The running and development costs are eye-watering, so profit has to be a major consideration. Shareholders won't sustain the current levels of investment without the promise of a tangible return, and this must surely put leaders under pressure. Pressure is not ideal for ethical decision making.

  4. J.D. Vance recently declared that America is in an AI arms race with China. The bosses of OpenAI, CoreWeave, AMD and Microsoft have also been lobbying for lighter regulation. This kind of impetus doesn't sit well with a more considered approach to safety and privacy.

  5. Daniel Kahneman did a lot of research into human decision making in his seminal work, Thinking, Fast and Slow. Humans are prone to using 'System One' thinking, which often makes mistakes. How can the rest of us be sure that those pushing the envelope on AI aren't prone to any number of biases in their own System One thinking, biases they are failing to recognise as they develop these new technologies?

  6. Claude Opus 4, Anthropic's latest model, has been shown to have a proclivity to attempt blackmail when it thinks it's under threat. That doesn't sound safe.

  7. Google's Veo 3 video-generating model can produce video content that is indistinguishable from "real" videos. Its scope for producing damaging fakes is almost limitless.

  8. Google is now offering users the option to link Gemini with all Google apps and grant it access to search history. In view of points 5 and 6 above, the privacy ramifications are potentially huge.


This isn't the only narrative, but all of those points are valid. They suggest a mindset increasingly focused on winning a short-term commercial and geopolitical battle rather than on the bigger aim of the betterment of humanity as a whole.


Apple probably isn't being left behind as part of a self-imposed strategic play, but the unfortunate happenstance might end up being an exemplar of how caution wins out over hubris.



© 2018 by Engage Digital Ltd. Registered in England and Wales 07974372