The safety-critical world (nuclear, railways, aerospace, etc.) would be a good starting point. Anyone who develops a safety-critical system is required to produce an evidence-based safety case for that system, in order for that system to be certified against technical standards. Only a system that has been so certified may be deployed. (See for example the UK Safety Critical Systems Club: https://scsc.uk). Any AI system sufficiently powerful to cause harm (either to individuals, or to society, e.g. democracy) is effectively a safety-critical system, and should be required to be certified against strict technical standards prior to deployment. Given that we don't really understand how complex neural-net-based systems even work, I very much doubt that any NN-based system (such as an LLM) would meet the requirements for safety-critical certification. Which immediately means that anyone proposing such regulation is going to be accused of "stifling innovation" (i.e. wealth generation / tax dollars) at the expense of "us" (the US, UK) vs "them" (China, Russia, etc). It's a classic Molochian Trap, where every actor behaves according to their own short-term self-interest, thereby leading to an endgame that is massively sub-optimal for everyone. The real AI problem is not the technology per se, but the global coordination problem.

We don't yet have AGI, so these systems should be treated as tools, not as agents. Humans should always be viewed as responsible for the consequences of the use of their tools. There is a saying that "all models are wrong, some are useful," and that holds true for software.

There is a reasonable law review article on the topic that explains: “This Article starts from the premises that AI today is primarily a tool and that, ideally, negligence law would continue to hold AI’s users to a duty of reasonable care even while using the new tool”. Another article makes a similar point: “an AI entity cannot be held liable and so a human guardian, keeper, custodian, or owner must be found liable instead”. If a drunken driver has an accident, we do not hold the car manufacturer responsible for their misuse of the car, nor the manufacturer of the alcohol.

If humans are held responsible for using their tools responsibly, then they will get the message. The alternative is waiting on 100% perfect tools, which deprives the public of utility until that likely unreachable goal can be achieved.

NRC regulation has pretty much destroyed nuclear power. So, if stopping or crippling AI is your goal, that is indeed a good model.

Are you really saying that -- even after Chernobyl, Three Mile Island, and Fukushima -- you're ANTI nuclear regulation...?

seriously!

The way I interpret what he's saying is that heavy-handed, top-down regulation doesn't really work well in any circumstance, and that there are more precisely targeted approaches that will likely do a better job. I have some ideas on that (and no time to spare at the moment).

Absolutely. All you need is proper liability. Chernobyl has very little to do with our nuclear industry. It was a Soviet-designed, water-cooled, graphite-moderated reactor with no containment building. Even so, this worst nuclear accident was not such a big deal. Three Mile Island killed zero people. (Did you know that?)

and it operated as the most efficient nuclear plant in the world until decommissioning decades later.

Does this philosophy apply to all regulation, in respect of all industries...?

(And yes, I did.)

Depends what you mean by "regulation". Setting up government regulatory bodies is a bad idea. Liability, and requiring sufficient reserved funds, is a different matter. For nuclear, you might have rules such as a reasonable exclusion zone around the plant, but agencies like the NRC, FDA, and CDC are damaging and unnecessary.

"Regulation" means legislation designed to protect individuals from potential harm, in particular from unrestrained capitalism. Another word for "regulation" might be "protection". Therefore to remove regulation is to remove protection from potential harm.

One argument to be made is that if we didn't over-regulate nuclear so much, we'd actually have a safer, more robust industry right now that would be better poised to take on baseload in a non-carbon energy future. That doesn't mean no regulation, but crafting a good regulatory regime that provides real protection, isn't captured by industry, and doesn't strangle industry is a really hard problem we should spend more time discussing. It applies to the AI debate as well: it is going to be very hard to craft any kind of regulation that isn't either draconian or subverted by industry.

I'd suggest the general aviation industry as a better example. A GMU economist wrote about a prior case where shifting responsibility to the users of a product proved to be the better option:

https://marginalrevolution.com/marginalrevolution/2013/02/aviation-liability-law-and-moral-hazard.html

"Aviation, Liability Law, and Moral Hazard

by Alex Tabarrok February 19, 2013

By 1994 the threat of lawsuits had driven the general aviation industry into the ground

... Our estimates show that the end of manufacturers’ liability for aircraft was associated with a significant (on the order of 13.6 percent) reduction in the probability of an accident."

I am (very) glad to hear of your success in reaching possible regulators. It is stunning how many people are talking about AI but without any knowledge of the real and often subtle issues. Your leadoff for the Bleak Future identifies one cause: the abyss between those concerned with Safety versus Ethics hinders and limits public understanding. I think the numerous possibilities for harm need to be made concrete in as many ways as possible. Your illustration of the overt and visible development of calamity in the bleak-future scenario is a good example of what will help people grasp the risks. There can also be cryptic risks, and those need story-telling as well. I took a stab at illustrating how instrumental AI goals of persuasiveness could lead quite stealthily to human loss of control: https://tedwade.substack.com/p/artificial-persuasion I wish it had more exposure.

re: " It is stunning how many people are talking about AI but without any knowledge of the real and often subtle issues."

It is stunning how many people are talking about regulation, but without any knowledge of the real and often subtle issues that actual experts like Nobel laureate George Stigler explored in the work on regulatory capture that won him his Nobel Prize. I suspect few of those pushing regulation have bothered to explore in depth the history of wars over freedom of speech and the real and subtle parallels between those and this situation.

Instead you have people engaging in naive, wishful, magical thinking seriously disconnected from realistic consideration of all this. It's just as bad to not know much about these issues as it is to try to address them without a firm grasp of AI. Fortunately much of the work on AI is less clouded by emotion than work on political economics, aside from some of the seemingly religiously held views regarding claimed dangers of AGI.

Gary, the concerns you express and outline here are the very reason why in May of 2014 we started a blog, SocializingAI.com, in what proved to be a failed attempt to engage the tech world about these very issues. We branded the blog "Socializing AI – Where coders don't know to go". As ML started to explode, we sensed that there was both great opportunity and potential for AI, as well as grave danger.

Ultimately, we did connect with high-level people at Microsoft, Intel, IBM (Watson and other divisions) and, to a lesser degree, Google and Google Brain, plus VCs (one famous VC engaged us with 50+ emails but would not meet in person, and after a couple of years ended the engagement by saying he thought we had something, but he didn't have the time to think it through) and others. But we found that we were speaking an alien language to them; no one we talked to had the ability to comprehend the meaning of what we were saying. To a very large degree this inability to see the problem we were highlighting was due to their binary mindset, reinforced by their mechanistic, capitalist mental model of the world.

These were fundamentally good people, and even though we proposed and demonstrated both technology and mental models that could be used to address these issues, approaches that many found engaging and of some limited interest, they literally could not grasp the need for them. The models we shared were adjacent, not replacement, tech/mental models, but they did not serve the goals of the tech world's existing tech/mental models of command-and-control, dominance, and power. Models which they believe are completely validated by the inconceivable monetary success the tech world is experiencing, which to them confirmed the 'rightness' of their work and approaches. We stopped posting in the blog in 2019.

Gary, since you have embarked on your AI journey as an advocate of responsible AI, I thought you might find of value this story we posted on LinkedIn about my/our 2013 journey to Silicon Valley: "Could the story of Bill and Phil change the way humans use AI to grow, learn and heal?"

My wife is an award-winning journalist. In 2018 she wrote an in-depth account of our 3-month odyssey to Silicon Valley, camping in a tent in state parks as we tried desperately to reach the tech world about how our humanity is being left out of their technology, and AI specifically. Enjoy!

https://www.linkedin.com/pulse/story-bill-phil-change-way-humans-use-ai-grow-learn-heal-phil-lawson/

Gary, you're a smart person, WTF did you think would happen when asking gov'ts for regulation? You really think things would proceed as your idealized world would have it? Do you really believe all of this to be neutral technology for the benefit of our collective kumbaya? I'll try to avoid calling you naïve, but when you trust in gov't to deliver us from evil, you are simply marching into the dragon's lair. Happy to hear you got your moment in the Senate's sun, and that they appeared as interested as you'd hoped, but they do this with an eye to addressing their interests (and those of their supporters), not yours. This will be regulatory capture not because they'll anoint just anyone, but because only the deepest pockets will be able to afford the cost of entry. Don't forget, while there's lots of evil out there, our gov'ts are the devils we know, and we should always be wary of them 😉

The chances of regulation producing more bad effects than good are extremely high. And regulation gets worse over time. That will be even more true for fast-moving AI. I prefer Marc Andreessen's approach: let AI fly and tackle issues as they arise.

Yup: they need to consider that the risks of government regulation might outweigh the risks from AI. Unfortunately many people seem almost willfully ignorant of that possibility. At least most of those who don't understand AI technology usually realize it; the danger is that many of those pushing for regulation don't grasp how poorly informed they are regarding many aspects of regulatory theory, or the parallels I noted to wars over speech regulation.

Yes. I wish those who jump straight to centralized regulation would study public choice theory and the history of regulatory agencies.

I suppose one way of putting it is that those who invoke the "precautionary principle" and want to ban things until they prove themselves safe should consider applying the "precautionary principle" to regulations. Regulations should need (at minimum) to show they won't do more harm than good, considering that banning AI, for instance, might inhibit the search for cures for diseases or have other "unseen" unintended consequences. Though I guess that's part of the issue: they often seem to refuse to use their imagination to consider that there might be unseen things they aren't factoring in (or can't, since the beneficial uses haven't been invented and can't be predicted).

Those who tend toward risk-focused, precautionary-principle thinking of course tend to focus on only one side of the scale: the risks of doing something, while neglecting the risks of not doing it.

Presumably suggesting applying the "precautionary principle" to regulation will lead pro-regulation types to suddenly think "what about the risks of not regulating!", discovering the second side of the scale. The challenge then is to get them to apply the idea of balance to other cases where they just assume a priori that anything with any risk needs to be prevented until it can be proven safe.

Yes. I make points like this both about AI and generally about the precautionary principle.

https://maxmore.substack.com/p/existential-risk-vs-existential-opportunity

https://maxmore.substack.com/p/the-proactionary-principle

Max, this is "existential threat" territory, which means it's different: you either regulate in advance of the existential event, or you're gone.

Regulation of near-term AI doesn't involve an existential threat. Panic over existential-threat claims should be kept separate and should not cloud judgement regarding near-term issues.

The existential threat isn't immediate, and involves a long chain of unlikely assumptions that people seem to sweep under the carpet. I don't bother taking time to debate that issue at the moment since I don't see it as imminent, even if others do. How is a chatbot going to take over and/or destroy the world? The scenario postulating an AGI that can actually do so, without other humans or other AGIs intervening (just as humans try to prevent other humans from destroying the world), involves a sequence of events whose probabilities need to be examined closely before leaping to conclusions about the level of risk and how to address it. I won't address that sequence; perhaps others may.

I'm not talking about chatbots, nor about AGI.

Imagine a love-spurned nerd in (say) 2028 telling a net-connected Open Source AI (way more powerful than in 2024, but way short of AGI) to write some code to disable the internet in the spurner's town, which the AI does by killing the net worldwide in some weird and unexpected (e.g. previously undescribed) way.

(Wiping out half of humanity and sending survivors back to a pre-industrial age, or earlier.)

What, exactly, in our current course prevents that happening?

(I'd *love* it if there *were* something, of course!)

It seems like people assume that commercial AI vendors are too dense to grasp that it's not good for their company's brand image to have AIs that can cause major damage accidentally. Why do people assume that government can grasp this while assuming companies are too dense to realize they may lose market share if they are blamed for problems?

There is a difference between people using a tool that may give wrong answers, and putting up with its flaws since it still has benefit, and people using a tool that causes serious real world harm.

If you hold humans responsible for the errors of their tools, consumers are going to keep up pressure on vendors to provide better tools so they aren't held liable for some sort of real-world damage their AI tool caused. Why would companies let AI get to the point where it could accidentally make that large a mistake? Consumers would balk before using AIs that could, say, shut off power to a hospital and lead to the user being hit with major fines or criminal penalties for having used their tool in a way that risked that; it's unclear why people think we would get to the point you described.

Thomas Jefferson said, "Sometimes it is said that man cannot be trusted with the government of himself. Can he, then, be trusted with the government of others? Or have we found angels in the form of kings to govern him?"

Have we found angels in the form of government regulators? In the real world, governments are made of fallible humans, and public choice theory and regulatory capture theory study how they actually operate, rather than indulging the naive wishful thinking that just because government is commanded to "do the right thing" it will do so.

People seem to assume government is somehow a priori guaranteed to act in the public's interest rather than regulatory capture occurring, meaning it's more likely to act in the interest of these companies and protect them from startup competitors that might do a better job.

If the issue is a risk that those outside big corporations may somehow come up with this sort of AI, it's unclear whether a law would help. How many people break drug laws or other laws when they don't expect to get caught? People engage in the fantasy that merely passing a law will make a problem go away.

Marc Andreessen's doesn't sound like much of an approach at all.

The simplest solution to the AI-control problem is to stop creating machines which we fear will regurgitate propaganda and flood the Internet with spam. The question of whether the "benefits of LLMs outweigh the risks" is unanswerable in general -- but if the risk truly is mass annihilation -- if we risk the corruption of our democracy -- then the risks must outweigh any possible benefit you can think of. I'm concerned for a society that will do nothing as we rocket towards the edge of the cliff.

At any rate, I can't think of much use for LLMs in the first place. I guess it'll be very profitable for tech companies trying to lower development costs by hiring fewer programmers. But I don't see why we, the people, should care about that.

Edit: Someone in this thread made a distinction between "near-term risk" and "existential risk". I agree with their distinction, and in fact don't foresee any "existential risk" (of the Terminator variety) at all. But the corruption of democracy is a near-term risk. It's already here!

If AI can generate spam it can filter it. Social media companies have had to deal with troll farms where humans generate bogus content. The usual tell is that they create networks of accounts that link to each other while real humans don't link to them, so the information doesn't spread beyond the network. It's sort of like the race between virus creators and anti-virus software.

Even humans generate flawed content. The issue of separating good content from bad is a generic one that exists independent of whether the content comes from an AI or a human. Though if a human likes content, regardless of whether you do, shouldn't they get to see it, whether it's generated by a machine or a human? In general we need better ways to improve societal discourse, whether human- or machine-created.
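
To make that network-structure point concrete, here's a rough sketch in Python (the account ids, cluster assignments, and threshold are all made up for illustration, and the clustering step itself is assumed to come from some off-the-shelf community-detection tool). It just flags clusters whose inbound links almost never come from outside the cluster, which is the troll-farm signature I described:

```python
# Rough sketch only: made-up data and threshold. Assumes a prior
# community-detection step has already grouped accounts into clusters.
from collections import defaultdict

def flag_suspect_clusters(edges, clusters, min_outside_ratio=0.05):
    """edges: iterable of (follower, followed) account-id pairs.
    clusters: dict mapping cluster_id -> set of account ids.
    Returns cluster_ids whose inbound links almost never come from outside,
    i.e. accounts that mostly just link to each other."""
    member_of = {acct: cid for cid, members in clusters.items() for acct in members}
    inside, outside = defaultdict(int), defaultdict(int)
    for follower, followed in edges:
        cid = member_of.get(followed)
        if cid is None:
            continue
        if member_of.get(follower) == cid:
            inside[cid] += 1   # link from within the same cluster
        else:
            outside[cid] += 1  # link from a real outsider
    suspects = []
    for cid in clusters:
        total = inside[cid] + outside[cid]
        if total and outside[cid] / total < min_outside_ratio:
            suspects.append(cid)
    return suspects

# Two accounts that only follow each other get flagged; an account that
# outsiders actually follow does not.
edges = [("a", "b"), ("b", "a"), ("real_user", "c")]
clusters = {"farm": {"a", "b"}, "normal": {"c"}}
print(flag_suspect_clusters(edges, clusters))  # -> ['farm']
```

In practice the threshold and the clustering would need tuning, but the point is that the network structure, not the content itself, is what gives the farm away.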

"If AI can generate spam it can filter it."

I don't accept the logic in that statement (can spam => can filter), and think it's relevant that the techniques OpenAI uses to filter spam are different from the ones they use to generate it, but regardless I do agree that AI can be used to identify or filter spam. Nonetheless, there's a simple test we can use to determine the degree to which social media companies may filter spam: go on any social media site you like and see if there's spam on the platform. If there is, our filters can't keep up with the generation.

----

As for your other point, I'm also against human-generated BS. Note that I didn't place a higher blame on one thing -- the LLMs -- than on some other thing. I don't try to evangelize. All I think we should do is face BS for what it is, and look for ways to suppress or ameliorate it. In this case, the solution I think is simple: OpenAI should not be publicizing its products (which are likely to do more harm than good) if it is not certain that they will not be used by bad actors for bad ends (to a significant degree). Google sat on this technology for years without publicizing it. OpenAI, on the other hand, is the up-and-comer, so they have to make bold moves.

----

I also agree with you that it would be nice if people could see that text was generated by a robot instead of a human. I don't know how feasible that would be, however.

re: "I also agree with you that it would be nice if people could see that text was generated by a robot instead of a human."

Actually: I didn't say that since I'm not sure why it matters, even if people may be curious. It can be identified as such or not. Its quality is what matters.

The goal should be frameworks that aim for high quality content being highlighted, signal from noise, regardless of who or what created it.

Yup: spam filters may not be fully up to the task, but there is obviously incentive to improve them, just as there is with anti-virus software. If humans aren't liking the content, it shouldn't be able to break into real human accounts. If it does, it's not clear why it matters that someone else dislikes it and labels it spam.

The issue of information filtering exists regardless of whether it's hordes of humans at a troll farm or AI. Yup, the magnitude may change, but it's unclear how to prevent that, since laws won't prevent humans from breaking them; they only punish those who are caught.

Pens and paper are used by bad actors; should we ban them? Cars can be used by bad actors; should we ban them? Is it only because this is something new that the fearful think they can ban it, unlike other things they would have banned but waited too long to? Most of the populace isn't evil, and restricting their access to useful tools out of paranoia over fringe elements needs to be fully justified in detailed cases, not assumed by default as necessarily the way to go.

"Though if a human likes content, regardless of whether you do, shouldn't they get to see it whether its generated by a machine or a human?"

"I didn't say that since I'm not sure why it matters, even if people may be curious. It can be identified as such or not. Its quality is what matters."

I must've misread you, but we don't have to get into it.

As for the rest, I don't know if there's anything here that contradicts anything I wrote above. If there is, I don't see it. Further, I think that your fourth paragraph -- "issue of information filtering exists regardless" -- actually restates what I said in the last message, yet presents it as disagreement. I said:

"I'm also against human-generated BS. Note that I didn't place a higher blame on one thing -- the LLMs -- than on some other thing."

In other words, the scale or origin of bullshit is irrelevant. Bullshit's bullshit. All we can do is fight it wherever and however it arises. "Bullshit's bullshit. All we can do is fight it wherever and however it arises." Do you disagree with that statement, as written?

Would you disagree with me if I said that we should not be producing technologies when the costs of their existence outweigh their benefits? If so, you have to show that the benefits of LLMs outweigh their costs -- chiefly the cost of a massive intensification of the issue at hand, information filtering. I'd be happy to hear anything you have to say on that topic. Otherwise, these apologetics won't get us anywhere.

(And I never said, nor implied, that cars should be outlawed. --- I only suggested that technologies should be outlawed, *if they should be outlawed*. Hardly controversial.)

There are massive costs that come with being too precautionary.

https://maxmore.substack.com/p/existential-risk-vs-existential-opportunity

I don't disagree with anything said in this piece, and only want to point out that I think the geopolitical situation makes the efficacy of regulation a little dicey. We not only have the capitalistic motivation of profits, we are also in an AI arms race with some of our more contentious global neighbors. Not keeping pace with, or staying ahead of, nation-states which would very much like to weaponize AI against us (more, because they already have) has very real national security ramifications, and could threaten the well-being of free people the world over. Threat actors, nation-states and otherwise, are already trying to weaponize AI. We've already seen upticks in 'small' cybercrime, such as more effective phishing campaigns written by AI. Coupled with the fact that the line between cutting-edge GenAI and not-cutting-edge is a very slim margin, it's going to be exceptionally hard for us to defend against state-of-the-art AI without our own to support us.

That's not to say we shouldn't strive for regulation, even global regulation to govern the use of AI, only that we should be aware of how the geopolitical situation will influence our appetite for regulation when we know our adversaries are carelessly sprinting ahead.

re: "We also know, for example, that Bing has defamed people, and it has misread articles as saying they opposite of what they actually say, in service of doing so."

Anyone who takes what it says as "truth" should be viewed the same way as someone who believes the Babylon Bee or The Onion. Someone should patiently explain to them how adults grasp that not all sources of information are accurate.

People should be free to use flawed tools if they wish to. Adults are free to impair their judgement with alcohol, and are held responsible if they drive or have an accident while doing so. We no longer ban alcohol, and we don't hold the alcohol companies responsible for the actions of their users.

Some people though apparently share the mindset of alcohol prohibitionists who assumed they somehow should have the right to protect people from themselves, whether they want it or not.

re: " systems can bias eg political thought [...]we need regulation – with teeth."

When this country was founded, there were people who were concerned about politically biased human writings. The moral panic over computer-generated speech is no different from moral panics throughout the ages from people who wished to control the speech of others, for their own good, via government.

Fortunately there were others who enacted the 1st Amendment to prevent tyranny of the majority via government from controlling political speech, or Nassim Taleb's "dictatorship of the most intolerant minority" from taking hold via special-interest influence. They grasped that sometimes it's more dangerous to give government a power than to go without its "help".

People use AI to help them create speech. People have a 1st Amendment right to hear the human speech they wish to hear. The 1st Amendment likely won't be viewed as protecting machine-generated speech that humans want, but the spirit of the First Amendment should be applied even if not the letter, and it argues seriously for caution before granting government power over such topics at the drop of a hat, in a panic.

These AIs can influence the people they chat with: should governments really have a say in that? Isn't that an indirect form of potential government propaganda? Are those who hate Trump certain no one like him will ever be in control again, and do they wish him in control of a government that monitors or dictates AI speech? Or those who hate Biden, do they really want that for him? Maybe they won't give in to the temptation soon, but granting them the tools is problematic.

The page below asked AI to generate the newspaper columns Thomas Jefferson, James Madison and George Orwell would write if they were alive today regarding government regulation of AI, for those who haven't bothered to learn from history or from 1984's warnings:

https://PreventBigBrother.com

Competition is the answer: a myriad of AIs with different points of view (as this says, https://RainbowOfAI.com, a diverse rainbow of them), not regulation from government, which by default is likely to be captured by the experts from large companies and steered into regulatory capture that aids them rather than the public. Except of course some people also refuse to bother learning the economics and history of regulatory capture, and just naively assume what they push can't possibly be harmful, since they engage in wishful thinking about how government operates rather than bothering to actually learn something about the likely reality.

Let's see... our government passes regulations about burning coal so we'll burn less of it, or none, so that we don't raise the temperature. Naturally, that means that all other countries around the globe are doing exactly the same thing.

Likewise, we pass regulations about AI so that we only do good things with it. Ergo, parallel to the coal business, all other governments--following in our hallowed footsteps--will do the same and check with us to make sure they're doing it right.

This is kinda like the definition of "hate": it's like taking poison hoping it will kill the person we hate. Is there really a viable alternative to being the meanest sonofabitch in the valley?

If we are depending on politicians and bureaucrats to steer us away from an AI debacle, then all hope is already lost, if government regulation in industries such as healthcare, agriculture, and finance is anything to go by.

Sorry to write this, but the idea that the government(s) will do anything in a correct way is pretty naive. Of course they will consult the tech giants, but not the scientists, as the latter can't do anything against them while the former can. Nobody will do anything on the global scale, only on the regional one. Why should the US allow any participants from outside, when all of the tech giants are sitting right next to them? And even if regulation is set up, if it's done the way pharma is regulated, then we have a serious problem.

Although I don't believe in the worst-case scenario, that we end in anarchy and AI wars, I don't believe in the positive one either. But time will tell.

TTRC is a good name for your project: Trust in Technology Research Center. Because you're undoubtedly going to be addressing technologies which don't necessarily fit under the category of AI. Doesn't matter that it's not catchy, just don't give it a crappy logo that looks like some northwest Luddite movement, lol. Give it a bold and serious logo like NATO has, but combine it with the lyrical humanism of the CFR (Council on Foreign Relations) logo. The combination of blue shades in that logo is a good start. I wish you luck.

We need an FDA for AI and we need it now.

I couldn't agree with you more Gary. It seems that we need to demand that governments give AI public interest groups a seat at the table too. Are there any that you think would be a good fit for public advocacy? A quick Google search turns up https://publicinterest.ai

Also, https://link.springer.com/article/10.1007/s00146-022-01480-5 seems like a good article on this topic - I will have to make some time to read it in the next few days.

“This is where the government needs to step up and say “transparency and safety are indeed requirements; you’ve flouted them; we won’t let you do that anymore.””

Gary - I can’t tell if you are just naive or being purposefully obtuse.

Belief that any government working with any group of industry leaders will come up with the best future is a view devoid of historical perspective.

And belief that an historically non-transparent government will somehow create regulations that ensure transparency is Pollyannaish.

With the world on the cusp of quantum computers combining with AI technologies, your mission of playing Cassandra is futile.

Gary, your bifurcated AI future as presented to the IMF is spot on: clear, concise, well-articulated "poles". Obviously reality will probably fall somewhere in between... and articulating these "edge" scenarios (which aren't, imho, so "edge", more like <15% probability) is an excellent framing of the present-day challenge and potential future outcomes/consequences. Thank you for your continued voice in this space!

There is talk of regulation, but so far it has not been proven that "AI" (let's just say Machine Learning so as not to be so pompous) is more dangerous than everyday programming. So, we should regulate programming too?

The only thing I see is strong incentives to put regulations on an area that anyone can copy (because it really has nothing special about it, and is even very basic scientifically speaking), and thus protect the current players (i.e. regulatory capture). Players such as Altman, who in my personal opinion, besides not being an expert in Machine Learning himself, is a person only interested in accumulating power and influence, nothing more.

I would take the issue of regulations a little more seriously if the following conditions are met:

1. Multidisciplinary teams are formed and the scientific method is applied to evaluate the true capacity of current Machine Learning models. The process should be transparent so that anyone can replicate the results.

2. Critical areas of human activity are identified, and regulations are established that specify the conditions that must be met to provide services in them through Machine Learning models, or directly prohibit the use of models.

For example: if a bank uses a model to make a credit decision, the model must be able to explain why it made that decision; if a search engine uses a model, the model must be able to cite the sources from which it extracts the information it presents; if a doctor uses a model for a diagnosis, the model must be able to explain that diagnosis, and it must be the doctor who has the final word and approves it; autonomous weapons would be prohibited; etc., etc.
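
To illustrate the credit-decision case, here is a rough sketch (hypothetical feature names, weights, and threshold; not any bank's actual model) of the kind of per-feature explanation such a regulation could require a scoring model to emit alongside each decision:

```python
# Rough sketch only: hypothetical feature names, weights, and threshold,
# not any real bank's model. Shows the kind of per-feature explanation a
# regulation could require alongside each automated credit decision.
import math

WEIGHTS = {"income_to_debt": 1.2, "late_payments": -0.9, "years_employed": 0.3}
BIAS = -0.5
THRESHOLD = 0.5

def credit_decision(applicant):
    # How much each feature pushed the score up or down.
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = 1 / (1 + math.exp(-(BIAS + sum(contributions.values()))))
    return {
        "approved": score >= THRESHOLD,
        "score": round(score, 3),
        # The required explanation: features ranked by how much they mattered.
        "explanation": dict(sorted(contributions.items(),
                                   key=lambda kv: abs(kv[1]), reverse=True)),
    }

print(credit_decision({"income_to_debt": 2.0, "late_payments": 3, "years_employed": 4}))
# -> approved, with late_payments and income_to_debt shown as the main drivers
```

A simple linear scorer like this is trivially explainable; the point of the rule would be that whatever model a bank actually uses must be able to produce an equivalent breakdown.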

It's strange that none of that is what is being done. Is it because it hurts the current players?
