I really appreciate where you are taking the conversation these days, Gary. The OpenAI meltdown should have us all reevaluating what we thought was true about AI safety, legislation, and policy. Self-regulation is a joke. Period. But how to do regulation well? Thanks for walking us through the initial steps.
Johnny's in the basement mixin' up the medicine, Gary's on the pavement, thinkin' 'bout the government...
You don't need to be a weatherman to know which way the wind blows
Between issues like this and increasing cultural polarization in the western world, I think the concept of a physical, sovereign "nation" will become obsolete. Conspiracy theorists worry about a formal world government being imposed by the UN or some other body. They never think it could be an extra-national technocracy, or even something driven by an AI.
🔥
This is so very scary when you really think about the implications.
Interesting that I'm right in the middle of reading Gnomon by Nick Harkaway.
As the public-private partnerships continue to discredit themselves, the people gain power.
Pff. The rise of corporations deciding what happens in society is a worrying trend. The way Chinese corporations like Tencent (and probably many others) buy large stakes in organisations (e.g. Discord) also seems to be part of this worrying development.
Always good to hear your perspectives, especially around AI that really isn’t all it’s been hyped to be. But when you get into the “needs to be regulated” side of things, these thoughts make me shudder. The idea that regulators and legislators of the ilk we have today around the world (did you catch the one about the UK military tracking anyone who disagreed with the state’s position on health issues?) would be making up the rules and overseeing them makes me waaayyyy more nervous. Not saying that Big Tech isn’t dangerous or willing to put its own interests over those of nations’ citizens, just that if what we have experienced the past five years with gov’t abuses, lies, ignorance and deceptions is where we want to hinge our future in this area, well, we may as well pack it up and call it a day ‘cause we’re totally screwed 😉
https://en.wikipedia.org/wiki/Linus%27s_law — "In software development, Linus's law is the assertion that 'given enough eyeballs, all bugs are shallow'. The law was formulated by Eric S. Raymond in his essay and book The Cathedral and the Bazaar (1999), and was named in honor of Linus Torvalds."
AI gain of function might one day be viewed like bugs in open source.
The ship has sailed on objecting to technoligarchy -- the PATRIOT Act, Five Eyes, etc. It is, if anything, a mercy that government panjandrums will rely on inherently limited AI models to try to immanentize the eschaton.
I agree, it's disturbing that so many aspects of our daily lives are being shaped by the decisions of people who are not good at, or interested in governance.
I'm talking, of course, about Congress. Who did you think I was talking about?
At least they were nominally elected, and in principle we can vote them out.
Weird that he put Starlink in Ukraine because cell service was cut... he stopped an American company from actively attacking Russia. Weird.
Not as weird as the drones Russia was using having American chips, bought through proxies via China to Russia. Ukraine wanted us to stop selling products.
Our laws covering mail cover email.
Thorns are thorny ’cause dominion over this atmosphere was ascertained via thorns.
Weird.
Where were you during Microsoft in the 1990s?