54 Comments
Diego Pineda:

In 2010, the best engineers in the world were focused on getting more likes and getting kids addicted to social media.

Today, AI engineers are focused on replacing human creativity and setting the foundation for surveillance.

🤮

Shon Pan:

AI has become very much the anti-life equation.

The automation of life is the destruction of life.

[Comment deleted, May 21, 2024 (edited)]

Diego Pineda:

Don’t be sorry. It’s okay to disagree. Maybe the choice of words was not the best, as surveillance is already here and the foundation has been there for a while. But Gary’s point is that AI could be focused on other areas that could solve larger problems. And I agree with that. Tech leaders can decide where to steer AI, and we can only hope they make good decisions.

As for creativity, well, just take a look at their creations and tell me how good they are.

RMC:

"the people who laid the groundwork for surveillance were your old "smart guys" and "good troublemakers" physicists at Los Alamos and early computer scientists at IBM (and others). This is a well-known fact"

Can you be specific about who and what you mean here? I think it matters what people's specific intentions in developing a piece of technology were, at the time they did it. Right now behavioural surveillance and applying enormous prediction engines to make money is the explicit aim of a large section of our economy. That's probably not really what they had in mind at xerox or even DARPA or wherever 60 years ago.

Shon Pan:

" not replacing singers or artists at all,"

Except that it literally is.

https://www.youtube.com/watch?v=U2vq9LUbDGs

AI boosters apparently take the same morals and attitude toward truth as Sam Altman.

Simon Au-Yong:

Thank you Dr Daniel.

I’ll ask the people who simplistically reduce it to “just prompting”: have you ever composed a song, published a piece or edited a video, pouring in serious personal money (mainly savings, perhaps a loan from family) and thousands of hours of sweat, tears, soul-searching and sacrifice, only to find that someone has digitised _your_ work and is now building derivative pieces and making serious money off it, with _zero_ attribution and no share of the revenue? How would you feel?

I would be a little more than upset.

An entry point into creative endeavour via prompting is:

a) given the tremendously blatant theft of IP in most open LLMs (you know, those with trillions of tokens), in its current state, a huge slap in the face of the creator economy

b) something that should be built on models trained *solely* on public domain, Creative Commons Attribution-based or similarly licensed material

To reduce it to “c’mon, it’s _just_ prompting” is also a huge insult to the data science community at large. I have worked closely with that community, which, prior to ChatGPT arriving on the scene, treated data with incredible respect and in compliance with the law.

I don’t think non-creators have _any_ idea of the level of personal sacrifice that has now been rubbished or dismissed… because, “it’s _just_ prompting, chillax…”

🤬

Simon Au-Yong:

If a doctor misdiagnosed a loved one leading to her being paralysed or comatose for the rest of her life, with the misdiagnosis being the direct result of heavy but misplaced reliance on hallucinatory output due to pattern matching triggered by an attitude of “it’s _just_ a prompt, chillax, gosh dude,”…

The disrespect in my creative and medical illustrations is one and the same.

🤬

Eric Cort Platt:

I like the double entendre of "nonconsensually". :)

Yes, they want "more, more, more"... until they own it all. The old dynamic of the Buddhist "hungry ghost" figure, with the tiny mouth and big belly. It's never enough.

Shon Pan:

This is why we need DC to do something that isn't "innovate!!!"

We’re just sitting here being the boiling frog.

Matthew Ferrara:

The absolutely least helpful thing here would be for “DC to do something.”

Shon Pan:

Then what? Sit here and boil?

Matthew Ferrara:

No; but it’s not a binary choice. We can exercise our own options. The situation might even encourage some of us to develop alternatives. I much prefer an open market approach to the heavy hand of government.

Shon Pan:

Open market is almost certainly going to result in a race to the bottom where we die. This is not a place where market coordination can work.

#PauseAI

Matthew Ferrara:

I do not share your lack of optimism.

Shon Pan:

Unfortunately, this is time to look up and see the incoming asteroid. And likewise, market forces will not deflect it.

Shon Pan:

Likewise, we cannot market-coordinate nuclear weapons.

Malcolm Muckle:

Public interest and money have never been the best of friends. For most corporations, profit = survival. Public interest is way down the list, though it can eventually become a thorn in the side of profit generators, e.g. the tobacco industry, though it’s still finding ways: vaping etc. Yes, AI could, can, should be, and in some instances *is* amazingly beneficial, e.g. meta-analyses in medical research, or research into protein folding and drug development. Let’s hope the downfall of “Sauron” is more than mythic.

RMC:

I lost interest in AI as a researcher about 15 years ago as I grew up and realised how little it had to do with either the brain, or cognition. I must say I feel even more like that now. I think the recent successes of the gargantuan connectionist systems we are calling "AI" at the moment throw the distinction into ever sharper relief. It has less and less to do with science each day. Perhaps I might have made money if I'd stuck with it, but actually slowly gathering knowledge about the brain by doing science is much more interesting and may yet be both profitable for me and helpful to others.

LLMs are impressive but I just don't use them for anything at all. I should spend more time seeing if they're helpful for coding. It's so transparent that going on and on about "AI" is just marketing to the credulous. In a million years Microsoft cannot take screenshots of everything I do.

I predict there will shortly be a pretty big crash. I could be wrong, of course I could, but a fugaze is a fugaze. And this is a fugaze. With that said, it will surely succeed in enriching the unscrupulous. I realise I'm just repeating what everyone here thinks, but there's some value in that.

Scott Burson:

A few years ago, on hearing about AI doomers' fear that runaway AIs will focus on a single goal (producing paperclips being the canonical example) to the detriment of everything else including life on Earth, I thought, "we already have systems that act like that; they're called corporations." Alas, I didn't foresee that AI companies themselves would soon become some of the best exemplars of the phenomenon.

It's a perfect storm. We have a situation in which massive amounts of money have been invested; although the novel capabilities of the new technology are certainly fascinating, the precise path to a level of general usefulness that would reward the massive investment is not yet clear; and it seems likely that there will be a strong winner-take-all effect. Under such circumstances, it is sadly unsurprising that societal impacts are quickly forgotten about, as the demands of the competition drive all other considerations out.

Ben P:

This is obviously being done for the sake of targeted advertising. Microsoft's attempts at describing possible use cases are hilarious. From their website:

"Maybe you wanted to make that pizza recipe you saw earlier today but you don’t remember where you saw it. Typing goat cheese pizza into the search box would easily find the recipe again. You could also search for pizza or cheese if you didn’t remember the specific type of pizza or cheese. Less specific searches are likely to bring up more matches though. If you prefer to search using your voice, you can select the microphone then speak your search query."

Does anyone actually do this? Sit around wondering how they can get back to something they were looking at on their computers earlier in the day? Just google "pizza recipe" again, Jesus. Or look at your browsing history. "I'm not able to retrace my entire recent history of computer use" is a fake, non-problem. Which tracks with everything else AI-related that Microsoft is pumping out.

It seems most "innovations" in digital technology now are motivated by advertising. Google ruined its search platform for the sake of advertising. Amazon has cluttered up our search results for the sake of advertising. The internet is being flooded with SEO trash for the sake of advertising.

This new dumb crap is being thrust upon us for the sake of advertising. That it's of incredible value to scam artists and criminals and the surveillance state is of no concern to Microsoft.

Lotus Rose:

>> I pray that we can return to AI that is genuinely in the public interest.

Can we talk a little about what that might look like? My feeling is that we need to find a way to involve end users and engineers in the conversation about how to make AI better. Right now, devs using the APIs for commercial LLMs don't really have any input or control in how these models will evolve -- and they have very real flaws and limitations! Key stakeholders are not talking to each other. All that Mistral has for dev support, for instance, is a Discord channel.

Simon Au-Yong:

Thanks Lotus.

We don’t have to use APIs. There are genuinely open source LLMs (_not_ the misleadingly proprietary blobs branded as open LLMs) available to download from Hugging Face now, courtesy of the generosity of the likes of Apple and IBM. And yes Mistral 7b is available too (Apache License 2.0).

Lotus Rose:

The problem is not so much which LLM to use -- it's the fact that they all still have flaws. I have not yet seen any ticketing or issue tracking system where devs or end users can report unexpected behavior and get these flagged for further investigation.

Certainly not from Mistral!

Simon Au-Yong:

Sorry that you encountered what is so fundamental to LLMs… they’re just stochastic non-deterministic blobs. Just like pickles on a burger are inherently sour, unexpected behaviour is to be expected and fully embraced with these billion parameter models. And HTTP or even gRPC/Websocket based API calls have the usual networking challenges (SLAs and SLOs should be referred to).

Lotus Rose:

That's the part that is irritating. It is possible to build checks and confirmation mechanisms within a system, and do so much more with this technology... but most of upper management believes that AIs are magic, flawless, and infallible. Most end users are afraid of AIs or indifferent. Devs and PMs can address this gap, if they so choose. But are they willing to rock the boat?

Simon Au-Yong:

Given layoffs dominating the news cycle… definitely not! 🤫

Lotus Rose:

I have an interesting perspective on all of this, being self-employed and coming at this discipline from a UX perspective. Honestly, I am kind of tempted to jump in the fray.

It could be as simple as setting up an online survey plus reCAPTCHA...

Costa:

I wonder for how long macs are "safe" to use... there are already cracks in the privacy: https://sneak.berlin/20220409/apple-is-still-tracking-you-without-consent/

Chaos Goblin:

The Recall feature apparently can be turned off. I trust that about as much as I trust that the Recall contents will be stored "locally".

The Internet as the glorious decentralized mess it was in the 90s to the early 10s was quite empowering. Naturally the big levers of capital felt threatened by that and took it away.

Lotus Rose:

We have an incredibly versatile and valuable set of tools that are mostly just being used to produce garbage that pisses people off. Backlash is almost inevitable...

Simon Au-Yong:

Lotus, the aggression and velocity with which major GenAI vendors operate and try to silence criticism are concerning. They are betting the entire farm.

I won’t be at all surprised to read news of the body of a whistleblower or equivalent floating somewhere, found by a passerby, given the huge amount of money involved.

Lotus Rose:

No kidding.

Except the news report would be buried...

Tom Gottsche:

Could be corporate IT's and law enforcement's wet dream (among others). Next step: secure (of course) upload to the cloud solving the "we're running out of LLM training data" problem, 5 cents per MB.

Paul Backhouse:

I guess if you skipped or switched off Recall, you'd just never be 100% sure...😬

Art:

Ring? 🤔 AI has become Sauron itself. 😈

Dog:

Thank you Gary - spot on and to the point. My wife is hesitant to tell anyone I work in AI because of what it's become (becoming). I always explain that AI isn't bad, it's what humans have done to it.

I know how to design an AI surveillance app, but never would. Too bad so many can't echo my position and will take the $$$ at the expense of the greater good.

Gnug315:

Capitalism corrupts everything; it’s inherent in the system. This is not a moral judgement, but a logical and empirical fact.
