Time to move *Your Face Belongs to Us* to the top of my reading queue?
Just like Google hyping Google Glass and similar efforts. They want to know what's in your house so they can sell you better (well, more) stuff. What Altman's doing isn't a novel idea, but the scope is staggering.
Get a sock and a padlock.
This is completely insane, and is really awful!
I think what's happening in AI is a prime example of Hanlon's Razor.
I don't think this privacy-infringing, scandalous, terrifying thing that's happening right now has anything to do with control, surveillance or whatever. I think it's a simple paperclip maximization problem. They need more data, and more compute, because they're drunk on the idea that more compute will help them create God.
I don't think that's a viable route, but they are likely to end up creating a surveillance monster in the process, as a byproduct. Which is even more terrifying.
Ironically, the Guy Who Broke Democracy might have a few things to add to this with his open source tech.
Ah, where did I put my popcorn...
In the end, the attribution of intent doesn't really matter all that much, IMO. Whether malicious or stupid in intent and ambition, their actions display malice and complete disregard for the rights and well-being of others. That this violation of others might be done with a cool amorality without consideration of harm doesn't make it any better—worse, I'd say.
Privacy is a right that should be considered fundamental and inviolable. If it's violable, it's not a right, but a privilege. If Sam Altman wanted to lean over our shoulders looking at our private documents, we'd think he was a creep, but since OpenAI wants to do it electronically and invisibly, somehow it's seen as up for debate whether or not it's even a problem.
This is not to mention the violation of other rights, such as people's right to their property; violation done en masse, on industrial and impersonal scales like modern warfare. Companies like OpenAI seem unaware of the role they are playing in ripping up the Magna Carta of 1217 and the very idea of inviolable human rights, an idea we take for granted and that was hard fought for. That they are ignorant, however, makes them no less guilty, and they bear no less responsibility.
Given that they scraped the web to train their models on often copyrighted data without permission, this is no surprise.
I'd like to know the mechanism of obtaining access to private data beyond the cam.
How about millions of people mindlessly shovelling sensitive information into the prompt window? They don't even need the cam, people are doing it for them!
The Minority Report? In Phil Dick’s world, even your future is under surveillance.
Microsoft owns a huge stake in OpenAI and also pushes OneDrive heavily, the configuration of which is set to allow Microsoft access to your files to presumably make money somehow, advertising, idfk, and to allow "authorities" access to your files to make sure you're not doing a terrorism.
This time next year I look forward to inviting everyone going "Oh, Gary, you conspiracy theorist!" to a dinner party - crow for dinner and humble pie for dessert. Remember all the times Facebook *couldn't possibly* be doing all the scummy shit it was doing? Remember all the times the government totally didn't have access to all your data?
Probably scouring Microsoft cloud and OneDrive material. I suspect Google is doing the same. Amazon? Who knows. I just wish OpenAI would license their text-to-speech software.
Which is it? The other day you predicted the demise of AI, and now it will fuel our panopticon? I’m confused…
It seems to me that both can be true. Consider it just a matter of swapping out the customer base. It may never become a handy tool for the average consumer, but hyping it so much, grabbing so much personal data while they could, and keeping their intentions unclear has likely helped make it more desirable for surveillance and military applications. Whether that outcome was the intention all along is anyone's guess - for now.
PanOpenAIticon
Something does not need to be effective to be destructive. When Trump comes back into power, the minions plan to deport 12 million illegals. I'm sure those same minions will be diligent in discerning actual illegals from the perceived illegal-adjacent (generally brown people). But they probably will not be bothered if they're less than perfect (see Minions 4).
Altman's avenues to monetize OpenAI technology are either being constrained by Big Tech (note Meta's giving it away) or by the inherent limitations of LLMs (read G. Marcus).
This is obviously one of his many remaining strategies. We all know that Altman first sells and then worries about how the tech works.
Ah, so GPT spitting out my passwords, bad creative writing, and disconnected thoughts, however statistically unlikely, shall have a small chance of being given to some random on the other side of the planet.
Good. I've long said that the general treatment of privacy as a worthwhile thing strikes me as nonsensical.
Sorry you misunderstood. In the antebellum period in the U.S., confidence scams were widely practiced throughout the country. For instance, there were “diploma mills” where people bought medical and law degrees and set up practices. History is fact.
Frightening and scary - yet the future potential for good is there.
Someone should write an “anti-AI” tool that generates massive, incredible amounts of data with one purpose: to cause a complete nervous breakdown of LLMs when they touch it. They could call it “JabberwockAI” and you could let ordinary users contribute to it from “home servers” like Bitcoin mining. Rather than hoping for the best, we must fight AI with AI.
Yep. And, if not OpenAI, then somebody else? Highly probable, given the dictates of a society that still puts money-making first.
Kind of like the antebellum period in the U.S. after the Civil War, where con men were admired. And The Confidence Man was the norm, exploiting vulnerability. I think post-COVID it’s a new low.
How do you imagine con men were especially rampant after the Civil War? That smacks of neo-Confederate propaganda about Reconstruction.
Read about the antebellum period?
It is a period in American history, after the war. Melville even wrote a famous story about The Confidence Man. I am not a person prone to propaganda. Sorry you did not like my reference. Have a good day.
Saying Con Men, who are rife all across American history right up to now, are somehow native to the "antebellum period" is, I would suggest to you, neo-Confederate dogma. You might look into that, if you find the possibility interesting. Best wishes.
P.S. In a heavily propagandized society and world, we're all vulnerable to propaganda. Thinking you're immune seems a bit dangerous, not to mention wrong.
Read what?