58 Comments

In 2010, the best engineers in the world focused on getting more likes and getting kids addicted to social media.

Today, AI engineers are focused on replacing human creativity and setting the foundation for surveillance.

🤮

AI has become very much the anti-life equation.

The automation of life is the destruction of life.

May 21 · edited May 21

Sorry to disagree with you, but the people who laid the groundwork for surveillance were your old "smart guys" and "good troublemakers" physicists at Los Alamos and early computer scientists at IBM (and others). This is a well-known fact.

Taking screenshots of everything you do locally is a really precise way to develop powerful agent-based workflows that get the job done better. Much the same is already done today through analytics and metadata collection. I mean, 'Recall' does sound nightmarish, but you have inaccurately described this as laying the groundwork for surveillance, when that groundwork was laid many years ago by a bunch of mild-mannered white men (all but wearing lab coats).

Giving the average person an entry point into music or art production or composition through prompting is not replacing singers or artists at all, so I also disagree with your point about "replacing human creativity".

Don’t be sorry. It’s okay to disagree. Maybe the word choice wasn’t the best, as surveillance is already here and its foundation has been in place for a while. But Gary’s point is that AI could be focused on other areas that can solve larger issues, and I agree with that. Tech leaders can decide where to steer AI, and we can only hope they make good decisions.

As for creativity, well, just take a look at their creations and tell me how good they are.

Yes, I hope to bring science into the mix wherever I end up working (OpenAI, Anthropic, Alphabet, Microsoft, etc.). The problem is that CS and AI geeks tend to conflate the ability to code or program well (classical competitive programming and classical data science) with the ability to solve all kinds of problems. If you want to do general science too, you have to give up the high expectations placed on newcomers when it comes to coding and programming (I mean, a good sense of algorithms, data structures, operating systems, architectures, and machine learning is enough, but they may not have time to LeetCode or Kaggle or open-source their way into ML and AI).

As a clear example, Isomorphic Labs (Alphabet) was spun off from DeepMind (Alphabet). If you look at their website and job descriptions (benefits, etc.), there is no talk of equity or salary considerations. They want to treat their prospects as "lab coat scientists", which I am not willing to accept. I mean, I am moving away from "pure chemistry" precisely because of the shitty work and shitty salaries, toward coveted positions in AI and CS that have grander goals (AGI) and pay better.

However, as I said, solving larger problems requires interaction between computer scientists and other kinds of scientists, and as far as I can see, computer scientists still have very high expectations of newcomers to the field, rather than seeing them as assets with transferable knowledge and specialized skills who can learn quickly. That said, some of us are good at programming and machine learning while also holding PhDs and specializations in other areas, but the learning curve is steep and doing it alone is quite boring and demotivating (and not many people are as privileged as I am to be able to take extra years to change careers). We've created an AGI industry based on the tech industry, which is itself based purely on product and profit, so it's hard to judge the AGI companies for prioritizing revenue streams over doing science.

May 21 · edited May 21

"the people who laid the groundwork for surveillance were your old "smart guys" and "good troublemakers" physicists at Los Alamos and early computer scientists at IBM (and others). This is a well-known fact"

Can you be specific about who and what you mean here? I think it matters what people's specific intentions were, at the time, in developing a piece of technology. Right now, behavioural surveillance and applying enormous prediction engines to make money are the explicit aim of a large section of our economy. That's probably not really what they had in mind at Xerox or even DARPA or wherever 60 years ago.

" not replacing singers or artists at all,"

Except that it literally is.

https://www.youtube.com/watch?v=U2vq9LUbDGs

AI boosters apparently share the same morals and attitude toward truth as Sam Altman.

What's an AI booster? You mean the booster the video-game character in the video is using to jump high?

May 22 · edited May 22

Thank you, Dr Daniel.

I’ll ask the people who simplistically reduce it to “just prompting”: have you ever composed a song, published a piece, or edited a video, with serious personal money (mainly savings, perhaps a loan from family) and thousands of hours of sweat, tears, soul-searching, and sacrifice put into the endeavour, only to find that someone has digitised _your_ work and is now building derivative pieces and making serious money off the hard work _you_ put in, with _zero_ attribution and no share of revenue? How would you feel?

I would be a little more than upset.

An entry point into creative endeavour via prompting is:

a) in its current state, given the tremendously blatant theft of IP behind most open LLMs (you know, the ones trained on trillions of tokens), a huge slap in the face of the creator economy

b) something that should be built on models trained *solely* on public-domain material or works under Creative Commons Attribution or similarly permissive licences (see the sketch below)
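
Point (b) is entirely doable. A minimal sketch of the kind of licence gate I mean, assuming a corpus where each record carries a `license` metadata field (the field name and licence identifiers here are illustrative, not taken from any particular dataset):

```python
# Minimal sketch: gate a training corpus on permissive licences.
# The `license` field and its identifier strings are assumptions
# about the corpus schema, not any specific dataset's layout.
ALLOWED_LICENSES = {"public-domain", "cc0-1.0", "cc-by-4.0"}

def permissively_licensed(records):
    """Yield only the text of records whose licence is on the allow-list."""
    for record in records:
        if record.get("license", "").strip().lower() in ALLOWED_LICENSES:
            yield record["text"]
```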

To reduce it to “c’mon, it’s _just_ prompting” is also a huge insult to the data science community at large. And I have worked closely with that community, which, before ChatGPT arrived on the scene, treated data with incredible respect and in compliance with the law.

I don’t think non-creators have _any_ idea of the level of personal sacrifice that has now been rubbished or dismissed… because, “it’s _just_ prompting, chillax…”

🤬

If a doctor misdiagnosed a loved one, leaving her paralysed or comatose for the rest of her life, with the misdiagnosis the direct result of heavy but misplaced reliance on hallucinatory, pattern-matched output, encouraged by an attitude of “it’s _just_ a prompt, chillax, gosh dude”…

The disrespect in both my creative and medical illustrations is one and the same.

🤬

I like the double entendre of "nonconsensually". :)

Yes, they want "more, more, more"... until they own it all. The old dynamic of the Buddhist "hungry ghost" figure, with the tiny mouth and big belly. It's never enough.

May 21 · Liked by Gary Marcus

This is why we need DC to do something that isn't "innovate!!!"

We're just sitting here being the boiling frog.

The absolutely least helpful thing here would be for “DC to do something.”

Then what? Sit here and boil?

No; but it’s not a binary choice. We can exercise our own options. The situation might even encourage some of us to develop alternatives. I much prefer an open market approach to the heavy hand of government.

Open market is almost certainly going to result in a race to the bottom where we die. This is not a place where market coordination can work.

#PauseAI

I do not share your lack of optimism.

Unfortunately, it is time to look up and see the incoming asteroid. And likewise, market forces will not deflect it.

Likewise, we cannot market-coordinate nuclear weapons.

May 21 · Liked by Gary Marcus

Public interest and money have never been the best of friends. For most corporations, profit = survival. Public interest is way down the list, though it can eventually become a thorn in the side of profit generators… e.g. the tobacco industry, though it’s still finding ways: vaping, etc. Yes, AI could, can, should be, and in some instances *is* amazingly beneficial: e.g. meta-analyses in medical research, or research into protein folding and drug development. Let’s hope the downfall of “Sauron” is more than mythic.

I lost interest in AI as a researcher about 15 years ago, as I grew up and realised how little it had to do with either the brain or cognition. I must say I feel even more like that now. The recent successes of the gargantuan connectionist systems we are currently calling "AI" throw the distinction into ever sharper relief. It has less and less to do with science each day. I might have made money if I'd stuck with it, but slowly gathering knowledge about the brain by doing science is much more interesting, and may yet be both profitable for me and helpful to others.

LLMs are impressive, but I just don't use them for anything at all. I should spend more time seeing if they're helpful for coding. It's so transparent that going on and on about "AI" is just marketing to the credulous. Not in a million years will Microsoft take screenshots of everything I do.

I predict there will shortly be a pretty big crash. I could be wrong, of course I could, but a fugaze is a fugaze. And this is a fugaze. With that said, it will surely succeed in enriching the unscrupulous. I realise I'm just repeating what everyone here thinks, but there's some value in that.

A few years ago, on hearing about AI doomers' fear that runaway AIs will focus on a single goal (producing paperclips being the canonical example) to the detriment of everything else including life on Earth, I thought, "we already have systems that act like that; they're called corporations." Alas, I didn't foresee that AI companies themselves would soon become some of the best exemplars of the phenomenon.

It's a perfect storm. We have a situation in which massive amounts of money have been invested; although the novel capabilities of the new technology are certainly fascinating, the precise path to a level of general usefulness that would reward the massive investment is not yet clear; and it seems likely that there will be a strong winner-take-all effect. Under such circumstances, it is sadly unsurprising that societal impacts are quickly forgotten about, as the demands of the competition drive all other considerations out.

This is obviously being done for the sake of targeted advertising. Microsoft's attempts at describing possible use cases are hilarious. From their website:

"Maybe you wanted to make that pizza recipe you saw earlier today but you don’t remember where you saw it. Typing goat cheese pizza into the search box would easily find the recipe again. You could also search for pizza or cheese if you didn’t remember the specific type of pizza or cheese. Less specific searches are likely to bring up more matches though. If you prefer to search using your voice, you can select the microphone then speak your search query."

Does anyone actually do this? Sit around wondering how they can get back to something they were looking at on their computers earlier in the day? Just google "pizza recipe" again, Jesus. Or look at your browsing history. "I'm not able to retrace my entire recent history of computer use" is a fake non-problem. Which tracks with everything else AI-related that Microsoft is pumping out.

It seems most "innovations" in digital technology now are motivated by advertising. Google ruined its search platform for the sake of advertising. Amazon has cluttered up our search results for the sake of advertising. The internet is being flooded with SEO trash for the sake of advertising.

This new dumb crap is being thrust upon us for the sake of advertising. That it's of incredible value to scam artists and criminals and the surveillance state is of no concern to Microsoft.

>> I pray that we can return to AI that is genuinely in the public interest.

Can we talk a little about what that might look like? My feeling is that we need to find a way to involve end users and engineers in the conversation about how to make AI better. Right now, devs using the APIs for commercial LLMs don't really have any input into or control over how these models will evolve -- and the models have very real flaws and limitations! Key stakeholders are not talking to each other. All that Mistral offers for dev support, for instance, is a Discord channel.

Thanks, Lotus.

We don’t have to use APIs. There are genuinely open-source LLMs (_not_ the misleadingly proprietary blobs branded as open LLMs) available to download from Hugging Face now, thanks to the generosity of the likes of Apple and IBM. And yes, Mistral 7B is available too (Apache License 2.0).
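
For anyone curious, a minimal sketch of what downloading and running one of these looks like with the Hugging Face `transformers` library (the exact model ID is my assumption; check the licence on the model card yourself):

```python
# Minimal sketch: run an open-weights model locally with Hugging Face
# transformers. Requires `pip install transformers accelerate` and
# enough memory for a 7B model; the model ID below is an assumption,
# so verify the licence on its model card before use.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-Instruct-v0.2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "In one sentence, what does 'open weights' mean?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=80)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```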

The problem is not so much which LLM to use; it's the fact that they all still have flaws. I have not yet seen any ticketing or issue-tracking system where devs or end users can report unexpected behaviour and get it flagged for further investigation.

Certainly not from Mistral!

Sorry that you encountered what is so fundamental to LLMs… they’re just stochastic, non-deterministic blobs. Just as pickles on a burger are inherently sour, unexpected behaviour is to be expected and fully embraced with these billion-parameter models. And HTTP or even gRPC/WebSocket-based API calls have the usual networking challenges (the SLAs and SLOs should be consulted).
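
The networking side, at least, can be handled defensively. A minimal sketch, with a hypothetical endpoint and response shape (the timeout-plus-backoff pattern is the point, not the specific API):

```python
# Minimal sketch: call a hosted LLM with hard timeouts and exponential
# backoff. The endpoint URL, payload shape, and response shape are
# hypothetical placeholders, not any vendor's real API.
import time

import requests

def call_llm(prompt: str, retries: int = 3, timeout: float = 30.0) -> str:
    for attempt in range(retries):
        try:
            resp = requests.post(
                "https://api.example.com/v1/generate",      # hypothetical endpoint
                json={"prompt": prompt, "temperature": 0},  # temperature 0 reduces (but does not remove) nondeterminism
                timeout=timeout,                            # never wait forever
            )
            resp.raise_for_status()
            return resp.json()["text"]                      # hypothetical response shape
        except (requests.Timeout, requests.ConnectionError, requests.HTTPError):
            if attempt == retries - 1:
                raise                 # out of retries: surface the failure
            time.sleep(2 ** attempt)  # back off: 1s, 2s, ...
```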

That's the part that is irritating. It is possible to build checks and confirmation mechanisms within a system, and do so much more with this technology... but most of upper management believes that AIs are magic, flawless, and infallible. Most end users are afraid of AIs or indifferent. Devs and PMs can address this gap, if they so choose. But are they willing to rock the boat?
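
To make that concrete, here is a minimal sketch of one such check, reusing the hypothetical `call_llm` client from the sketch above: parse and sanity-check the model's output before trusting it, and route anything suspect to a human instead of acting on it.

```python
# Minimal sketch: verify structured model output before using it.
# Assumes the hypothetical `call_llm` client sketched earlier; the
# task (invoice totals) and the bounds are illustrative.
import json

def extract_invoice_total(document_text: str) -> float | None:
    """Return a verified invoice total, or None to trigger human review."""
    raw = call_llm(
        'Return only JSON like {"total": 123.45} for this invoice:\n'
        + document_text
    )
    try:
        total = float(json.loads(raw)["total"])
    except (json.JSONDecodeError, KeyError, TypeError, ValueError):
        return None  # malformed output -> human review, not a guess
    if not (0 <= total <= 1_000_000):
        return None  # implausible value -> human review
    return total
```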

Given layoffs dominating the news cycle… definitely not! 🤫

I have an interesting vantage point on all of this, being self-employed and coming at the discipline from a UX perspective. Honestly, I am kind of tempted to jump into the fray.

It could be as simple as setting up an online survey plus reCAPTCHA...

May 21 · edited May 21

I wonder how long Macs will be "safe" to use... there are already cracks in the privacy: https://sneak.berlin/20220409/apple-is-still-tracking-you-without-consent/

The Recall feature apparently can be turned off. I trust that about as much as I trust that the Recall contents will be stored "locally".

The Internet as the glorious decentralized mess it was in the 90s to the early 10s was quite empowering. Naturally the big levers of capital felt threatened by that and took it away.

We have an incredibly versatile and valuable set of tools that are mostly just being used to produce garbage that pisses people off. Backlash is almost inevitable...

May 22 · edited May 22

Lotus, the aggression and velocity with which major GenAI vendors operate and move to silence criticism are concerning. The entire farm is being bet.

I won’t be at all surprised to read news of the body of a whistleblower or equivalent floating somewhere, found by a passerby, given the huge amount of money involved.

No kidding.

Except the news report would be buried...

Could be corporate IT's and law enforcement's wet dream (among others). Next step: secure (of course) upload to the cloud, solving the "we're running out of LLM training data" problem at 5 cents per MB.

I guess if you skipped or switched off Recall, you'd just never be 100% sure...😬

I am sympathetic, but tough luck.

I doubt Apple is any more ethical. It is in talks to license ChatGPT.

Chatbots are here to stay, and will become ever more helpful (and intrusive).

Ring? 🤔 AI has become Sauron itself. 😈

Thank you Gary - spot on and to the point. My wife is hesitant to tell anyone that I work in AI because of what it's become (or is becoming). I always explain that AI isn't bad; it's what humans have done to it.

I know how to design an AI surveillance app, but never would. Too bad so many can't echo my position and will take the $$$ at the expense of the greater good.
