Time to move Your Face Belongs to Us to the top of my reading queue?

Just like Google hyping Google Glass or similar instances. They want to know what's in your house so they can sell you better (well, more) stuff. What Altman's doing isn't a novel idea, but the scope is staggering.

Get a sock and a padlock.

This is completely insane, and is really awful!

I think what's happening in AI is a prime example of Hanlon's Razor.

I don't think the privacy-infringing, scandalous, terrifying thing that's happening right now has anything to do with control, surveillance, or whatever. I think it's a simple paperclip-maximization problem. They need more data, and more compute, because they're drunk on the idea that more compute will help them create God.

I don't think that's a viable route but they are likely to end up creating a surveillance monster in the process, as a byproduct. Which is even more terrifying.

Ironically, the Guy Who Broke Democracy might have a few things to add to this with his open source tech.

Ah, where did I put my popcorn...

In the end, the attribution of intent doesn't really matter all that much, IMO. Whether malicious or stupid in intent and ambition, their actions display malice and complete disregard for the rights and well-being of others. That this violation of others might be done with a cool amorality without consideration of harm doesn't make it any better—worse, I'd say.

Privacy is a right that should be considered fundamental and inviolable. If it's violable, it's not a right, but a privilege. If Sam Altman wanted to lean over our shoulders looking at our private documents, we'd think he was a creep, but since OpenAI wants to do it electronically and invisibly, somehow it's seen as up for debate whether or not it's even a problem.

This is not to mention the violation of other rights, such as the right to one's property; violation done en masse, on industrial and impersonal scales like modern warfare. Companies like OpenAI seem unaware of the role they are playing in ripping up the Magna Carta of 1217 and the very idea of inviolable human rights, an idea we take for granted and that was hard fought for. That they are ignorant, however, makes them no less guilty and leaves them bearing no less responsibility.

Given that they scraped the web to train their models on often copyrighted data without permission, this is no surprise.

I'd like to know the mechanism of obtaining access to private data beyond the cam.

How about millions of people mindlessly shovelling sensitive information into the prompt window? They don't even need the cam, people are doing it for them!

The Minority Report? In Phil Dick’s world, even your future is under surveillance.

Microsoft owns OpenAI and also pushes OneDrive heavily, the configuration of which is set to allow Microsoft access to your files to presumably make money somehow, advertising, idfk, and to allow "authorities" access to your files to make sure you're not doing a terrorism.

This time next year I look forward to inviting everyone going "Oh, Gary, you conspiracy theorist!" to a dinner party - crow for dinner and humble pie for dessert. Remember all the times Facebook *couldn't possibly* be doing all the scummy shit it was doing? Remember all the times the government totally didn't have access to all your data?

Probably scouring Microsoft cloud and OneDrive material. I suspect Google is doing the same. Amazon? Who knows. I just wish OpenAI would license their text-to-speech software.

Which is it? The other day you predicted the demise of AI, and now it will fuel our panopticon? I’m confused…

It seems to me that both can be true. Consider it just a matter of swapping out the customer base. It may never become a handy tool for the average consumer, but by hyping it so much, grabbing so much personal data while it could, and keeping its intentions unclear, the company has likely made it more desirable for surveillance and military applications. Whether the intention for this outcome was there all along is anyone's guess - for now.

PanOpenAIticon

Something does not need to be effective to be destructive. When Trump comes back into power, the minions plan to deport 12 million illegals. I'm sure those same minions will be diligent in discerning actual illegals from the perceived illegal-adjacent (generally brown people). But they probably will not be bothered if they're less than perfect (See Minions 4).

Altman's avenues to monetize OpenAI technology are being constrained either by Big Tech (note Meta's giving it away) or by the inherent limitations of LLMs (read G. Marcus).

This is obviously one of his many remaining strategies. We all know that Altman first sells and then worries about how the tech works.

Ah, so GPT spitting out my passwords, bad creative writing, and disconnected thoughts, however statistically unlikely, shall have a small chance of being given to some random on the other side of the planet.

Good. I've long said that the general approach to privacy as a worthwhile thing strikes me as nonsensical.

Sorry you misunderstood. In the antebellum period in the U.S., confidence scams were widely practiced throughout the country. For instance, there were “diploma mills” where people bought medical and law degrees and set up practices. History is fact.

Frightening and scary - yet the future potential for good is there

Someone should write an “anti-AI” tool that generates massive, incredible amounts of data with one purpose: to cause a complete nervous breakdown of LLMs when they touch it. They could call it “JabberwockAI”, and ordinary users could contribute to it from “home servers”, like Bitcoin mining. Rather than hoping for the best, we must fight AI with AI.
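
Purely as a back-of-the-napkin sketch (none of this exists; “JabberwockAI” and the names make_word and babble here are invented for illustration), a few lines of Python could already spew arbitrary volumes of Jabberwocky-grade nonsense for scrapers to swallow:

```python
# Hypothetical sketch of the "JabberwockAI" idea: endlessly generate
# Jabberwocky-style gibberish, on the theory that flooding scrapers
# with nonsense degrades whatever gets trained on it.

import random

ONSETS = ["br", "gr", "sl", "fr", "tw", "gl", "v", "m", "b", "wh"]
VOWELS = ["a", "e", "i", "o", "u", "au", "ou", "ith"]
CODAS = ["llig", "msy", "rgle", "ve", "be", "x", "nd", "mble", ""]


def make_word(rng):
    """Assemble a nonsense word from random syllable parts."""
    syllables = rng.randint(1, 3)
    return "".join(
        rng.choice(ONSETS) + rng.choice(VOWELS) + rng.choice(CODAS)
        for _ in range(syllables)
    )


def babble(n_sentences, seed=None):
    """Produce n_sentences of grammar-free gibberish."""
    rng = random.Random(seed)
    sentences = []
    for _ in range(n_sentences):
        words = [make_word(rng) for _ in range(rng.randint(5, 12))]
        sentences.append(" ".join(words).capitalize() + rng.choice(".!?"))
    return " ".join(sentences)


if __name__ == "__main__":
    # Each run (or each "home server") could dump megabytes of this.
    print(babble(5, seed=42))
```

Whether un-curated noise like this would actually faze a training pipeline that filters its data is another question, of course.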

Yep. And if not OpenAI, then somebody else? Highly probable, given the dictates of a society that still puts money-making first.

Kind of like the antebellum period in the U.S. after the Civil War, when con men were admired. And The Confidence Man was the norm, exploiting vulnerability. I think post-COVID it’s a new low.

How do you imagine con men were especially rampant after the Civil War? That smacks of neo-Confederate propaganda about Reconstruction.

Read about the antebellum period?

It is a period in American history. After the war. Melville even wrote a famous story about The Confidence Man. I am not a person prone to propaganda. Sorry you did not like my reference. Have a good day.

Saying Con Men, who are rife all across American history right up to now, are somehow native to the "antebellum period" is, I would suggest to you, neo-Confederate dogma. You might look into that, if you find the possibility interesting. Best wishes.

P.S. In a heavily propagandized society and world, we're all vulnerable to propaganda. Thinking you're immune seems a bit dangerous, not to mention wrong.

Read what?
