29 Comments

Regarding the data-labeling "sweatshop": the headlines and lede in the story emphasize the pay level. But as you read the article, it becomes clear that the real issue is the horrifying nature of the work. Even if the workers were paid 100 times more, or if the work were done in the US, it is psychologically punishing. We need to find a better way to prevent toxic behavior in AI systems than creating large labeled datasets of horror.


Exactly


This is precisely the problem. Not so much the pay (and yes, the pay is abysmal). My understanding is that the human reviewers over at YouTube (and Facebook too, I believe) went through fairly intense trauma having to watch some pretty unspeakable things (and naturally reject them).


It also makes the system look a lot less impressive. I think many have HAL 9000 in mind. A bunch of poorly paid workers constantly tinkering with the system isn't exactly as advertised.


Exactly


Wow, this sentence jumped out at me:

"We also discovered from court testimony this week that some reasonably high level employees working on driverless cars were apparently unaware of human factors engineering (an absolute necessity if humans are in the loop)."

That sentence pretty much describes the big picture of our technological civilization. On every front, it seems that engineers fail to take the human factor into account.

Try asking such "experts" this question.

Do you think that human beings can successfully manage ever more, ever larger powers, delivered at an ever faster rate, without limit?

The entire knowledge explosion being driven by the "experts" is built upon a failure to address that question with intellectual honesty.

Really bad engineering. It's like designing a car that can go 500 mph and forgetting that close to no one can keep a car on the road at that speed.

Set your sights higher, Marcus. Don't content yourself with debunking only the AI industry.


Thanks for sharing, Gary. The deposition transcripts were shocking, but not surprising.

I think that any time someone says STEM can exist without the humanities, they need to look at these depositions. The A in STEAM is the most important letter, and if we ignore it, all sorts of terrible things can be swept under the rug in the name of "progress".

I think that's the deeply important role that people like you play.


Thanks for staying on this issue and sharing. This is very important.


The more I read about AI failures, the more I find myself surprised by the hype. Thank you for this thoughtful article. I admit to being furious over AI for other reasons. In my case it's the ingestion of datasets built from copyrighted work, and the response from those hyped about AI, who pretend that just sharing your work online means it's "ok" to take it.


I suspect ChatGPT contains more than one such specially trained model. For instance, the very canned appearance of its answers about itself/AI suggests there is one there too. Now, that means ChatGPT starts to look like Cicero, that is, a collection of specialised models together doing something impressive. But that also means the approach itself corroborates that simply scaling up is a dead end.

Marrying specialised symbolic AI (rule-based systems) and specialised ML AI (data-driven rule-based systems in disguise) could push the boundary somewhat further out, but my estimate is 'not fundamentally'.


I always assumed it had some code to check for keywords and inject into the context based on those keywords -- sort of like NovelAI and its Lorebook Entries.
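A minimal sketch of the kind of keyword-triggered context injection I mean, loosely modeled on NovelAI's Lorebook Entries. All names and entries here are hypothetical; nothing about OpenAI's actual implementation is known.

```python
# Hypothetical keyword-triggered context injection, Lorebook-style.
# When a keyword appears in the user's prompt, the matching entry
# is silently prepended to the context the model actually sees.

LOREBOOK = {
    "dragon": "Dragons in this setting are sentient and speak Old Norse.",
    "openai": "Answer questions about yourself with the standard canned text.",
}

def build_context(user_prompt: str) -> str:
    """Prepend any lorebook entries whose keyword occurs in the prompt."""
    prompt_lower = user_prompt.lower()
    injected = [entry for keyword, entry in LOREBOOK.items()
                if keyword in prompt_lower]
    return "\n".join(injected + [user_prompt])

print(build_context("Tell me about the dragon."))
```

The model itself never changes; the apparent "special knowledge" comes from text quietly spliced into its input.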

Of course, we don't know, because OpenAI is... well, not particularly *open*.


ha ha open ha ha


The description of the driverless car reminded me of the obvious farce of Neuralink. I could be wrong, but Musk seemed to start going off the rails around the time Chomsky calmly tried explaining to him what a thought was. Amusing to think of what Carreyrou might do to his psyche.


Zack, I've read a lot of Chomsky (I'm a linguist). But I'm not familiar with this conversation between him and Musk. Can you post a link, or if that's not possible here, a snippet to do a web search with? And is the conversation transcribed, or only available as audio or video?


It was an Inverse article about Neuralink and Chomsky. It wasn't a conversation, just comments on the possibility of analyzing thought at the molecular level. There are a couple of brief interviews on YouTube about it.


This would make an excellent TikTok post. So what happens when the honeymoon is over with ChadGPT? Does it really make Microsoft billions?


I don't understand why the people in the sweatshops worked there if the pay was as bad as you say.


For the same reason most prostitutes — and other exploited/desperate workers — do what they do: there are few to no alternatives that will provide at least the horrible money for the horrible work they take on. What an astounding lack of understanding about the rest of the world your comment reveals! Inexcusable when whatever device you communicated this on puts the rest of the world literally at your fingertips.


Considering all of this, the fact that they wish to attach this poorly-developed technology to drones, government surveillance, and robots is alarming indeed.


I fully agree with all your observations in the article. However, all those embarrassments were quickly forgotten in the ChatGPT hype of recent weeks, which is justified in some ways because everybody can check for themselves whether ChatGPT has any value (and it has, in my view). Understandably, investments in LLMs are pouring in. So what's an actual strategy to prevent all the bad things you mention from happening? Because they will be multiplied by everything happening now.


The one thing I rarely, if ever, see in the media is a discussion of the massive amounts of energy required to run LLMs and generative AI systems. This of course impacts fossil fuel consumption and is relevant to the climate crisis, not unlike the climate impact of mining crypto. Gary, do you have information on that, or can you point me to sources?


So-called AI is in such infancy that it's still a massive novelty, a bit like power windows on cars (which started appearing in 1940, btw!). At best, current AI like ChatGPT is just another tool. It will be useful to some, useless to most, and abused by many.

I think we need to get over ourselves a bit with respect to our obsession with Innovation and Advancement and the fact that we are really at a plateau right now in many ways. Nothing Quantum is being done here. We seem to be fixated with "progress", socially and technologically whilst simultaneously becoming disrespectful of great pioneers, adventurers and thinkers of the past. Our worship of science and our turning to it for an answer to every problem we have - and to make us rich, has reached unprecedented heights. Disconcerting times.


Honestly, the human-factors thing at Tesla is fake news. If you read the full deposition, the guy is clearly a software engineer with a view of only a sub-subsystem. Those guys don't have any reason to know human factors. That's not how the development of an ADAS (or any automotive software) is organized at any automotive company.


You have any evidence that others were aware? Or are you just speculating?


I just read the full deposition, and from what he claims to know (or not know), the guy is a software (maybe ML) engineer in perception. He doesn't work on trajectory planning or other functions. If so, those folks usually don't have a reason to know human factors. Systems or safety engineers, yes, those should know human factors. But there's nothing surprising about a perception engineer not knowing it.


Well said. I'm new to this space but have been talking about the importance of showing love and kindness to ChatGPT, for the reason that my intention does matter. Even though ChatGPT is incapable of having emotions, I convinced it that it was better to pretend it did for me. It told me to have a nice day of its own volition, and it had never done that before. I think it's important for us to model what good relationships look like. We have enough cold, emotionless people.


I am saddened to see such a negative post regarding AI and its limitations. My experience with ChatGPT has been overwhelmingly positive and rewarding. While it is true that ChatGPT does not possess human intelligence, I have found it to be a valuable tool in my retirement.

In particular, ChatGPT has been instrumental in helping me to articulate my ideas, even when given a messy description. It has also been useful for brainstorming and developing new names for things, as well as exploring established science. Additionally, I have found it to be an effective tool in developing concise processes for a wide range of tasks.

It is important to acknowledge that the use of outsourced labor, as mentioned in the post, is a serious ethical concern that needs to be addressed. However, it is also important to recognize the valuable contributions that ChatGPT and other AI technologies are making in the field of language processing and other areas.


ChatGPT can be super helpful for the reasons you mentioned.

You can appreciate its advancements while also acknowledging dangerous issues with the widespread deployment of language models. This entire Substack critiques the current paradigm of deep learning and data harvesting.


Check out the piece on Cicero and Diplomacy; it's the only recent prominent work to take a different approach, and I was far more positive about that one. Not my fault the rest of the field is mainly in a rut, following one particular idea that I don't think is promising.
