Wake up.
If a single bug can take down airlines, banks, retailers, media outlets, and more, what on earth makes you think we are ready for AGI?
The world needs to up its software game massively. We need to invest in improving software reliability and engineering methodology, not rush out half-baked chatbots.
Twenty years ago, Alan Kay said “Most software today is very much like an Egyptian pyramid with millions of bricks piled on top of each other, with no structural integrity, but just done by brute force and thousands of slaves”. As Ernie Davis and I pointed out five years ago in Rebooting AI, part of the reason we are struggling to build reliable AI systems is that we still lack adequate techniques for engineering complex systems in general. That hasn’t really changed.
Chasing black-box AI that is difficult to interpret and difficult to debug is not the answer. Nor is leaving more and more code writing to generative AI, which, as a new benchmark from Stephen Wolfram shows, grasps syntax but not meaning.
As tech CEO Thorsten Linz just said to me on X, “tech giants need serious commitment to software robustness”. Another tech CEO, Vincent Valentine, added, “Rushing innovative tech without robust foundations” (which is exactly what we are doing) “seems shortsighted.”
An unregulated AI industry is a recipe for disaster.
Gary Marcus is deeply distressed that certain tech leaders and investors are putting massive support behind the presidential candidate least likely to regulate software.
Thank you, I have been waiting for the coverage to point this out. And whether a GenAI code generator was involved in this seems a reasonable question, given that it reduces attention to detail….
Excellent post. The risks lying in wait for us at the intersection of AI and cybersecurity are much bigger than people think. You may find my article on how to approach this interesting:
https://open.substack.com/pub/aipdp/p/openatom-5-national-ai-and-cybersafety?utm_campaign=post&utm_medium=web