That’s AI for now
Someone who knows, who wanted to remain anonymous, told me:
“Modern AI has a distinct interpretability problem. It is often impossible for a human to explain why a given model produces a given output. As a result it may take months of effort to re-engineer a complex algorithmic system when it is discovered to be making systematic mistakes…
AI has reached a place where innovation is slowing down significantly. The computational costs and latency implications of automatically processing streaming video and audio with full fidelity mean there will almost certainly be no significant progress in handling them. In general, I don’t see any meaningful change in the status quo, barring some path-breaking innovations in artificial intelligence or general computing.
This is compounded by the fact that AI models are taking longer and longer to train as they grow in size to handle more complex problems, such that, even assuming a constant rate of innovation, there is going to be an ever-slower turnaround in converting those innovations into material improvements.”
I’m pleased if this stagnation is indeed the case. We all think we need a moment to get to grips with the ethics and regulation of AI, don’t we?
I’m reminded of Arthur Koestler’s brilliant novel Darkness at Noon, and its theory of history in which technical advances leap, yet it takes time for society to adjust to the new circumstances (the metaphor is a barge’s rise on a filling lock). During the period of relative ‘immaturity’ democracy and personal freedoms suffer, and ‘only demagogues invoke the “higher judgement of the people”’. That certainly feels like where we are now. Once the lock is filled, according to this theory, democracy and personal freedom must inevitably conquer, and then ‘it is the duty and the function of the opposition to appeal to the masses’.