62 Comments

In related news: https://www.bloodinthemachine.com/p/how-a-bill-meant-to-save-journalism

Here, big tech (in this case Google) managed to completely turn around a California bill's proposed checks and balances on its somewhat predatory business model. With some 'AI' thrown in for good measure.

Money is power. Power corrupts. Hence money corrupts.

Aug 23 · Liked by Gary Marcus

While I applaud the efforts to corral AI before the beasts escape, I am resigned to the fact that it isn't going to happen. As is virtually always the case with regulating industry, bad stuff needs to happen before any preventative regulations can be passed. There are several reasons for this:

- No one is really sure what the bad things look like or how the scenarios will play out. This makes it hard to write effective regulations and there's nothing worse than ineffective regulations.

- Regulators are deathly afraid of restricting a possible economic powerhouse. After all, no one gives out awards for bad stuff avoided.

- When there are, say, 10 potential bad things predicted, it is hard to take the predictors seriously. They are hard to distinguish from people who simply want to thwart the technology. Gary Marcus constantly gets accused of this. The accusations aren't justified but it's still a problem.

- There's the feeling that even if US companies play by some new set of rules, other countries or rogue agents will not and the bad stuff will happen anyway.


There is the precautionary principle, of course, which suggests that powerful and high-impact tech needs a careful look, before a catastrophe happens.

We are not there with AI, however. There's too much hype, too little functionality, and too little danger. It is better to wait and see.


I mostly agree, but there are bad things that have already happened that remain unaddressed. Copyright issues with training content are a biggie.


Copyright issues will be settled in court. Publishers will get paid, and AI companies will get data in bulk that is often hidden behind paywalls or is hard to scrape. Publishers can also offer logs, usage, and other metadata AI companies will find helpful.


The courts are a terrible way to do it, because justice goes to those with the money to obtain it, and to the unscrupulous with the money to corrupt it. This means that individuals and small creators (most artists) have no voice and no recourse.


The big established media companies are now going after the AI companies who rip them off. Sony, Universal, and Time Warner have real money. They don't have the valuations of Google and Microsoft, but they also have decades of experience protecting their IP, better relationships with politicians, and more public goodwill than the tech companies.

Right now they're going after smaller players. But it's a warning shot to the big ones.


There are fat cats on both sides. AI companies also dislike being ripped off, btw, by folks using chatbots to train their own cheaper clones. I think they will settle on some profit sharing; it is to everybody's benefit.


Those are two unrelated groups so not a "both sides" situation.

Aug 23 · Liked by Gary Marcus

At this point it's fair to say OpenAI has really, truly lived up to its name. It is openly unabashed about its desire for power and economic domination. I fully and completely expect them to do everything in their power to attain it, the world be damned. The question then becomes, do we let them do it?


OpenAI : Open(ly) Autocratic Instincts


🧵* Dear IPI**,

Thank you a lot. I’ve heard you a lot via Harry Shearer’s Le Show, so I’m reading you here. I don’t know if you have a paid membership, but I would pay.

Thank you.

——

* I use the bobbin of thread in the general sense, of all conversations being part of one thread & one garment in humanity.

Also, the simple blue color & blocky look make it a good identifier for me, and for those whom I’m writing to.

——

** “Indispensable Public Intellectual”

author

You can upgrade to paid if you like. I will be back on Le Show soon!


I'll put a plug in for Le Show as well. I've been a listener since the 1980s. Shearer has always been entertaining and on the cutting edge of important issues.


I'll speak as a Biopharma founder who did a stint evaluating deals for an investment group, has had the experience of pitching VCs, and has published on how to renovate VC. I've got a paper (maybe a book) I intend to write on how to operate an incubator in the Biopharma space. I also worked with early AI (which was a lot more than just neural nets and big data), implementing factory automation systems for the auto industry when the hype was the "lights-out factory". (That latter was a total crock of you know what.) So, brass tacks here on regulation and AI.

The primary reason that big tech in general opposes all regulation and law is the interests of venture capital and founders' need to keep that capital coming in to survive. VC wants profit. VC needs to either sell the firm in an acquisition or sell it in an IPO. To do that, VC investments need to move, and move fast. "Move fast and break things." Why? Because if you have something real and you slow down, you'll get caught and disappear into time. Think VisiCalc, Borland, Myspace. Founders know this, and as long as they have to justify what they do to VC, they are like a mouse on a hotplate.

Force a founder to simply consult his/her board to make decisions, and you will increase the time it takes to make those decisions by 2 to 10 days (at minimum). A board that can meet, think it over, and return a good decision in 2 days is running at lightning speed. Outside boards are deadly because they are not beholden to the founder or the firm.

Any regulation slows decisions. Regulators can have little care for speed. The FAA does quite well, but there isn't any political force that has captured the FAA to try to kill aviation. Contrast this with the NRC, which notoriously slow-walks approvals and abuses the process to force tear-downs and rebuilds over small, unapproved differences from the plans. The Vogtle plant in Georgia is a showcase for this. For many years the NRC has been headed by a succession of anti-nuclear clowns. (Yes, I have an opinion.)

In Biopharma we have a special situation that started (in the current period) with Nuremberg and has been codified into law and regulation for the protection of human subjects. (Note that AI applications in medicine should be under FDA purview, but AFAIK the FDA hasn't enforced its authority there yet.) To protect human subjects, the first step is an Institutional Review Board (IRB). This board has different names in different countries. Two documents are required for the IRB: a complete protocol with full disclosure of how you are going to deal with problems that arise, and a consent document that tells the subject the risks of this thing you propose to do. And your subjects are supposed to receive benefits.

Now, there are people who "don't want to freak out the reviewers" and try to hide risk. In Biopharma, this is a terrible idea. Why? Because in practice those documents are your get-out-of-jail-free card. As long as you diligently disclose all of your risks and follow the rules, you cannot be prosecuted criminally when someone dies or is otherwise harmed. You are also protected against most civil liability. Be clear: if you have a large enough trial, someone is going to die, and it may well have nothing to do with your product(s). The FDA has regulatory authority over IRBs. They can and do audit IRBs. If something big happens under your protocol, your IRB is likely to get audited. Your IRB may not appreciate this. If you are a brilliant dumdum and stuff turns up, it is not your IRB's responsibility to fix it for you after the fact.

If you are one of those who has written emails about how "we have to make this sound better," or who edited the IRB documents to remove a risk, and that turns up in an investigation? You don't want to find out. People get upset when somebody close to them dies, and they want to lash out.

Biopharma founders (especially from big-name universities) can come out of grad school having been so coddled that they had no contact with the regulatory matters their university took care of for them. I've seen these sorts do mind-numbingly stupid things. MDs can also be quite arrogant about it and simply ignore regulations. MDs regularly get nailed for running something they called a clinical trial that had all the elements of one except an IRB and an IND. Those are hand-slaps.

The far end is when the case is turned over for prosecution as homicide. In one instance, without even an IRB submission, a drug was tested and the subject died within half an hour. The charge was homicide 2; the plea bargain was manslaughter 2, 5 years' probation, and surrendering the license for life.

Once you have your IRB documents approved, you can submit them to the FDA to get an investigational new drug/device (IND) approval. With that, you are legally allowed to inject, infect, poke, prod, and otherwise do your thing with people. There was a time when the FDA could take its time and you might not get your approval for a year or two. Then the FDA was put under a timeline of 30 days from receipt of the protocol until decision. In practice this means the default is a no, but there have to be reasons why.

I have spent as much as 5 years just getting to the point where the FDA finally assigned my IND application to a division. The FDA is not my enemy; I'm not one of those. But this was about as dangerous a protocol (on its face) as could be conceived by a Hollywood scriptwriter. It took many contacts, and finally a chance meeting at a conference, to find someone at the FDA who really liked the idea. That project was not VC funded. It moved glacially, and unfortunately, time is money.

VC likes to plan on turning its money in 5 years. That means invest, then sell in 5 years. Now, this idea of the 5-year turn is a total crock. VCs don't do that as a rule. (Look up Mulcahy's really great paper done with Kauffman's data: no 5-year turn, no J curve.) I know one VC who did turn most investments in 5 years, but he is very unusual. And I digress.

The point of that digression is to illustrate why founders in big tech oppose all regulation. In the case of so-called AI, it is obvious there is a contingent that would kill what we call AI if it could. In my view, AI should be called FI, for "Fake Intelligence," because that's what LLMs are. But we are stuck with the term AI for now.

Imagine a process where every AI rollout had to start with an application to an AI review board (AIRB). This would require identification of all foreseeable risks and a consent form signed by every user of the beta until the Artificial Intelligence Administration (AIA) approved the final rollout. No changes could then be made to the AI product without going through a procedure, and all adverse user events would have to be reported to the AIA for the life of the product. If an adverse event were bad enough, the AIA could freeze your product, and nobody could use it until the investigation was over.

In the nice-world version of this, a saintly and brilliant regulator would return decisions on permission to start within 30 days and all would be well. Companies would dig around to disclose everything, and regulators wouldn't have to treat them like 16 year olds denying they are going to have a party while the parents are gone.

In the real world, that regulator would be a political target. I would expect problems like those GMOs have had getting approvals. Some would want a head like those who have all but killed nuclear at the NRC.

What to do? I don't have a nice answer gift-wrapped with ribbon.


Makes perfect sense, unlike the “We welcome regulation as long as it’s Federal” BS we hear from these AI companies.


Thanks for sharing all of this, I just learned a lot. "16 year olds denying they are going to have a party while the parents are gone" is a fantastic analogy for the AI arms of the big tech companies. They're a bunch of irresponsible children. My only comfort is that they also have child-like imaginations regarding their technology, and the great majority of their fantastical predictions are not going to pan out. Lotta make-believe going on in this industry.


Luckily, human-level AGI is *hard* - really, really hard. What we have today is easily sufficient to cause societal harm at global scale, but way too dumb to cause catastrophic or existential harm. Meanwhile, the idea that anyone motivated by short-term self interest will "self-regulate" as they race (necessarily via a sequence of low-hanging fruit) towards what they perceive to be infinite money, fame, and power is utterly ridiculous. It will be a very long time (decades) before those following the low-hanging-fruit path to human-level AGI will get anywhere close, and along the way there's quite likely to be some kind of high-profile "AI event" - such as a global AI cyberattack, or maybe even a large number of civilians killed by a swarm of rogue autonomous weapons - that forces people to realise the scale of harm that can occur when powerful AI is developed in an insufficiently-regulated way. With any luck, such events will finally persuade governments (including the US, UK, EU, and China) to enact and enforce appropriately strong AI regulation, legislation, and international treaties classifying powerful AI systems as safety-critical systems, requiring comprehensive evidence-based safety cases before being licensed for deployment.


This is a great point regarding what a path to AGI would look like. I'm skeptical that "human-level AGI" is even possible, but if it is, a lot of intermediate-level iterations will have to come first. It's not like tech companies are going to keep their technology under wraps until they invent AGI and then unleash it. Quite the opposite: they're so eager to get every new model to market that they often do so prematurely. I don't know what AI that's halfway between AGI and what exists today would look like, but it would definitely freak people the hell out.


I agree with everything here on the issues of governance, unaccountable decision making by CEOs, the futility of self-governance, and overly broad non-disclosure agreements.

But the scene-setter at the beginning makes this about existential risks, explicitly invoking the comparison to global nuclear war. And that is the usual problem: focusing on highly implausible 'extinction' risk to a degree that leaves too little room in public discourse for actual risks, like important decisions being based on confidently wrong answers, discrimination and bias being baked into models through biased training data, social disruption through the undermining of intellectual property, the political impact of deepfakes, and the drowning of information and communications in a firehose of AI-generated spam.

And this is a widespread phenomenon. I recently sat incredulously in a talk where a self-proclaimed AI safety expert nonchalantly pronounced that we were all agreed that there would be super-human AI by 2040, and if we don't support his work, it may kill us all. No, we aren't agreed. In fact, not only is there no evidence yet that that kind of AI is even possible in principle, and not only is there no evidence yet that if it could be built, it could be done without using up 800% of the global electricity supply, but there is good reason to believe that the kind of scenarios these people cook up in their heads are physically impossible.

In the end, if your AI does something scary, how about you just press the off button or pull the plug? The answer always amounts to: if it is smart enough, a mind can do magic. But sorry, magic is impossible. The AI won't copy itself onto my smartphone before somebody pulls the plug, because my smartphone can't store and run a model of that complexity, and also: data limits, firewall, etc. The AI won't create a super-virus that kills us all without the humans in the lab noticing what they are doing, because biology doesn't work like in a Lego movie. Likewise, a benevolent super-AI won't solve cancer in five minutes no matter how super-human it is, because even if it has a great idea, it will then have to request funding for a five-year experimental study to see if the idea works in real life and has no major side-effects.

This is all magical thinking, and a better analogy here is his 1940s counterpart worrying that a single nuclear bomb will burn up the entire atmosphere while dismissing the impact it could have on Nagasaki as negligible, or his 19th-century counterpart worrying that riding a train at 80 km/h will kill the passengers while ignoring the dangerous working conditions of rail construction workers as some kind of unavoidable background noise that just has to be accepted. AFAIK, some people did think like that, because most people don't understand physics and have zero sense of plausibility. Hysteria repeats itself, one might say.


"Current AI us not all that scary."

It isn't in the AGI apocalypse sense, but since 1995 the resources put into the arguably losing battle of preventing, detecting, addressing, and recovering from efforts of bad actors and unintended consequences of good actors have become extraordinary. Today's limited capabilities are expanding this problem, threatening social cohesion, democratic processes, and mental health. Will SB-1047 help at this level?


Do we really need an AI company “insider” (current or former) to tell us things like “one or just a few people at AI companies shouldn’t be making decisions for humanity” and “there should be external governance”?

Those hardly seem like profound conclusions.


But maybe I just think these things are obvious because I am ignorant and they actually ARE profound.


The last person we need on an AI world-governance board is Sam Altman. I'm all for Wiener's SB-1047 as a good starting point. "Regulation stifles innovation" has got to be the biggest bull-crap IT commandment. The implication that undefined innovation is always good, and that anything (vendors are selling) that enables or encourages innovation is similarly good, is nonsense.

Of course, this doesn’t mean you never make changes in non-differentiating areas, just that it’s about finding the right balance between standards and discipline on the one hand, and the freedom to explore and experiment on the other.


Agreed. This is especially rich coming from a company that has, time and again, displayed a penchant for recklessness. Between the ChatGPT roll-out, the shameless hyping and dishonesty about GPT-4's abilities, "hey y'all, we invented a way to spoof someone's voice using only 15 seconds of audio, do you think we should release it?", desperately courting Scarlett Johansson's permission for something they'd already done without her permission, and the Altman firing/re-hiring shitshow, these guys deserve zero benefit of the doubt.


AIs will never be dangerous as instigators because they don't have free will.

Depending on AIs will of course be dangerous just like depending on everything from friends to lovers to the stability of the rock ledge you're clinging to when rock climbing.

AI in the next two years is very likely to finally convince humans to be more human, because AI is so prolific at showing humans how icky pseudo-human behavior is.


For some time I thought that the current hype cycle in AI would at least have the benefit of prepping us for when the real thing arrives, basically something like a dress rehearsal. I am not so sure anymore. Due to the ridiculous claims and end-of-the-world scaremongering from the likes of Altman, the world has become numb to the danger and possibility of a true AI and may not react at all when it arrives. It's like the story of the shepherd who cried "wolf" to scare his mates; when a real wolf came, they thought he was joking again and didn't answer his cries for help.


“What former [and current] OpenAI employees are worried about”

Losing their vested equity in the company if they spill the beans


AI experts don't seem capable of grasping that making AI safe simply isn't possible.

In order to have control over the future of AI, we would need to have control over all the humans capable of developing AI. Given that most of the major powers on the planet have nuclear weapons, there is no way to force any of them to take whatever path we think is the right course of action, if we even knew what that was, which we don't.

AI experts writing in the English language seem determined to assume that America and Europe will determine the future of AI, even though America and Europe represent only about 10% of the world's population. Could someone please remind them that China is four times bigger than the United States?

But, for the sake of a thought experiment, we might imagine that AI could be made safe, or that it vanished altogether. That doesn't really matter. An accelerating knowledge explosion will continue to bring forth ever more, ever greater powers, which will almost certainly come online faster than we can figure out how to safely manage them.

More and more powers, of larger and larger scale, coming online faster and faster.

That's the threat.

Forget about AI. Think bigger.


I see no dishonesty in OpenAI testifying in front of Congress that AI regulation is necessary while opposing a proposed, state-specific regulation. Letting 50 states and the District of Columbia regulate AI in 51 different ways will slow innovation. This should be a federal matter.


As much as I distrust OpenAI, I agree with you here. I suspect OpenAI's apparent support for regulation is cynical and insincere, but that doesn't mean they're contradicting themselves when they oppose some specific piece of legislation.


If we believe the risks are real - and I do - then inevitably something will get through whatever regulatory scheme is established. It only takes one for Pandora’s box to be opened. And regulation, while it should happen, is a very blunt instrument. Murder is illegal, with very severe sanctions and a whole enforcement infrastructure behind them. It still happens! Even in medicine, with what is mostly a fantastically successful code of ethics, bad things happen. And so it will be with AI.

Even so, press on, because that diminishes the risk, but realise it’s not eradicated. Work out what to do when the goblins escape the box.

I don’t know, and I suspect no one does, but I wonder if AI might itself be part of a solution.

Love your substack, Gary, and hope you’ll maintain your expert Cassandra prognostications. It must get lonely.
