I just don't understand how these people just don't understand. In the vast majority of use cases, LLM-based chatbots deliver zero-to-minimal ROI, yet they somehow expect LLM-based agents (which are just simple software wrappers built around LLM-based chatbots) to magically generate ROI when the source of their "cognition" usually cannot.
“I just don't understand how these people just don't understand”
Maybe they are not really people but just chatbots in disguise.
Ohhh! Sam AItman. Don't I just wish for the days of serif letters!
Collective psychosis + absurd amounts of money, my what beautiful clothes you have emperor.
"beyond a few areas such as coding and customer service..."
I can't imagine how it has much value in customer service. Customer service is mostly about understanding what the customer is asking. Not about spewing answers. Quite often the agent has to solicit and clarify what it is that the customer actually needs. Especially with services that the customer does not understand to begin with. AI is useless for this.
Almost all of the AI customer service interactions I have end with my problem either not being understood or unsolved, and with me saying or yelling for a human, usually with a lot of fucks thrown in at the end.
In fact, these things are actually anathema to customer service.
Actual customer service disappeared about 15-20 years ago in most quarters. Now it's check and bag your own groceries at the supermarket and drugstore, and voice answering systems that put you on hold while you're forced to listen to long menus—and adverts—before you even get to a menu option (forget about talking to an actual human), etc., etc.
This trend towards automation and "efficiency" is, in fact, dehumanizing our world.
Unfortunately, Natalia, this is simply a sign of how little the majority of large businesses actually value their customers (and, more unfortunately, mid-sized businesses are following).
We have experienced this for years with banking and financial services, mobile phone and broadband... the list goes on. The actual quality of customer service has been gradually degraded over the last couple of decades because this is about efficiency - squeezing the last possible dollar out of your wallet at the lowest possible cost - not about the customer (we are walking wallets, not customers) and much less about service.
You can test the guiding philosophy of so many of these businesses by not paying a few bills. Then you will get a real person on the line making sure you understand all the unpleasant things that will happen to you if you don't pay asap.
True! I've encountered these chatbots and they create immense frustration. The customer has to phrase a question in just the right way to get the correct answer. I dealt with one of those yesterday: after 5 tries in rephrasing my question to suss out the chatbot's logic, I gave up. Now, if I had a person on the phone I could pose my query in any particular vernacular and they would respond, "oh yeah, do this." Dealing with chatbots is more frustrating than dealing with foreign-based customer service agents trained in textbook English. At least with people you can make a joke to relieve the frustration.
“after 5 tries in rephrasing my question to suss out the chatbot's logic, I gave up”
“Chatbot logic” is an AI-xymoron
😄 Too true!
I just posted something similar before I saw your comment. They are HORRIBLE at customer service and a complete waste of time.
In my experience, some WhatsApp (Meta AI) commerce business chats were very useful. You asked the bot about some product, delivery options, etc., and you could have all the info in less than a minute instead of waiting God knows how long for the seller to answer. When it came to very specific questions, they delegated you to a human, but the waiting time for answers was still reduced by a lot.
I can't understand saying that customer service is "driving returns, or proving value for money," given the abysmal results almost every experience with a bot agent has produced for me. I accept that this is anecdotal, but it has been consistent, and very frustrating. Some companies will probably lose my business entirely and yet have no idea why. It's hard to make realistic business decisions under such opaque conditions.
Some do give a survey at the end, but many constrain what you can answer by forcing you to check a box next to predetermined choices, with no option for "other," which means they get no meaningful responses to evaluate what is going on, or even to learn that they are at risk of losing a customer.
All they care about is "large signals."
The core lack of reliability of LLMs is simply multiplied in agentic systems. If HR, Legal, and Finance can't hand off a workflow to an agent orchestrator without having to check and double-check every single thing it does, then there is no point to it. This will be even more significant when companies have to pay full freight for inference. Where will the savings be?
I was reading a NY Times article yesterday on Claude Cowork, the agentic solution. I thought it was hilarious. Put data in and package up your prompts as Skillz and Plugins, and voila, you are an agent. Anyone else smelling what I am?
It's hallucinating chatbots all the way down!
When this turgid bubble bursts, the mess is going to be epic.
That proof sounds pretty anecdotal. Why is it in the best interest of a firm to share profit-maximizing strategies? Also remember: in competitive markets, profits move to zero anyway as everyone copies the tech. We need more causal evidence, which anecdote is not.
It is indeed anecdotal, which is exactly why I said “looks to be” rather than anything stronger. But the reporter is a journalist at a major paper and just could not get legit examples, as described.
But there's so much hype and so much being spent on it, I think if any company had legit managed to make a huge profit thanks to agents, they would be shouting it from the rooftops
Why? That would only encourage competitors to do the exact same thing. It’s rarely in the best interest of anyone to share information like that. Besides, it’s not as if firms know their own counterfactual of performance without agents. It just seems too early to say anything at all either way.
With gen AI generally, there's a lot of 10x hype but not a lot of proof around it. I think if someone could demonstrate proof then the AI companies would be biting their hands off to make a big case study out of it to make number go up. And the company that shares their results will also get on the hype train and see some gains
Their SaaS vendors would certainly be shouting it from the rooftops.
The reporter asked about P&L. My experience of prior automation is not that profits increase; rather, the streamlining of work, especially with software like spreadsheets and analysis tools, increases the quality and extent of the work that gets done. Spreadsheet-itis was a common trap for business school graduates that extended to their employers. It allowed a lot of work delving into details, scenarios, etc., that probably improved the quality of the work, but because everyone was doing the same, there were no extra profits to be made, and no loss of personnel. Just [probably] higher quality work. The scientists at a biotech company I worked for spent a lot of time trying to understand gene expression results using analysis software. It was cutting-edge work, but the industry was doing the same, so it became a time sink to do this higher quality analysis.
So while CEOs focus on earnings and stock market returns and their impact on options, the real value of software that speeds up analysis work is improving quality. When all companies must improve quality, and software tools and effort are needed to achieve this, a company must do this work to stay competitive.
I would hazard a guess that AI is likely similar. Yes, some purely human work will disappear or be reduced (remember secretarial typing pools?), but other work will increase: less "seat of the pants" analysis and more rigorous analysis. Just as manufacturing improved quality (think automobile quality today vs the 1960s/70s/80s), so more tools will increase quality. If LLMs are good at finding, collating, and summarizing huge quantities of data, then rather than reducing the time to do the work, personnel will do more analyses, asking more questions, in order to get better, certainly more data-supported, answers. If the BS/hallucinations can be reduced to a minimum, and the cost to do so is sensible (I like the idea of local, "good enough" AI models to keep costs contained), AI will become a routine part of the software suite that professionals use, just as WP and spreadsheets are in just about any company, from one-person companies to transnational corps.
but see also Workslop
Indeed, if you allow the AI to produce all your output. But you don't need to. For example, when summarizing work from a number of journal papers, I have the AI provide the source document and the place in the document where the text the statement is based on can be found. That means I can rapidly check that the AI statement is correct, catching any hallucinations.
I will use the AI to create the first outline of a report. Then I will modify it, and I will write the content based on what the AI has provided. It could be wrong, but I will also have checked its output before using it as my input for a report or article.
Because the AI does the heavy lifting, I can use more documents as sources, or find more documents on the web to source from, and ask more questions of the documents. That reduces the research time for each question, but increases the research done for any given report. I can ask for arguments that support an idea, and also ones that oppose it. (Very useful for hypothesis testing before writing an essay or report.) I may be wrong, but I don't consider that "workslop" which is an AI writing the report. I can see agentic workflows created to do all the tasks and, with minimal input, generate all the research and final output at the push of a button, George Jetson fashion.
So that is how I see AI usage, as an aid to thinking and producing. With access to some powerful hardware, all this can be done safely offline. It may take longer, but there should be substantial cost savings over token subscriptions, plus there will be no options for the sellers of tokens to restrict use or raise prices to remove the subsidies. The only issue is the availability of good, open-source models, like DeepSeek, or specialized models trained on the domains you work in, e.g., molecular biology or mathematics.
Should AGI come along and be trainable, then it might be increasingly trusted to do more of the work, until it is "pushbutton".
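The cite-then-verify workflow described above can be sketched in a few lines. This is a minimal illustration under my own assumptions: it presumes the AI's answers have already been parsed into statement/source/quote records, and `verify_claims` and the toy data are hypothetical names for the sake of the sketch, not any real tool's API.

```python
# Hypothetical sketch of the "cite then verify" workflow: the AI is asked
# to return each claim together with its source document and a short
# verbatim quote, and we confirm the quote really appears in that source.

def verify_claims(claims, sources):
    """Split claims into (verified, flagged) lists.

    claims:  list of dicts with keys "statement", "source", "quote"
    sources: dict mapping source name -> full document text
    """
    verified, flagged = [], []
    for claim in claims:
        text = sources.get(claim["source"], "")
        # A claim passes only if its quoted evidence occurs verbatim
        # in the named source document.
        if claim["quote"] and claim["quote"] in text:
            verified.append(claim)
        else:
            flagged.append(claim)  # possible hallucination: check by hand
    return verified, flagged

# Toy data standing in for real journal papers and AI output:
sources = {"smith2021.txt": "We observed a 12% increase in expression."}
claims = [
    {"statement": "Expression rose by 12%.",
     "source": "smith2021.txt",
     "quote": "a 12% increase in expression"},
    {"statement": "Expression doubled.",
     "source": "smith2021.txt",
     "quote": "expression doubled"},
]
verified, flagged = verify_claims(claims, sources)
print(len(verified), len(flagged))  # → 1 1
```

Anything that lands in `flagged` gets read by hand, which is exactly the division of labor described above: the AI does the heavy lifting, and the human keeps the veto.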
You can't outsource your own cognition. Citing sources you didn't read does nothing but reduce the quality of your own work, not to mention cheating yourself of what insights you could have gained by reading those papers.
This is why I am not worried about losing my job to either LLMs or to the people who build their entire work on them.
At all.
Everybody is getting so damn lazy that anybody willing to put in the time and effort to do their own, real thinking will be both rare and able to command a very substantial pay premium.
In competitive industries, profits can be expected to trend toward the cost of capital, meaning that, when considering the enterprise as a whole, there ultimately are no excess profits due to technological improvements. However, consumers benefit, and everyone is a consumer, which is a way of saying that living standards rise as a consequence of technological advances. That should be a "no kidding" insight.
At this point, from an investment point of view, AI poses dreadful hazards. Barriers to entry are relatively low, so profitability is ultimately likely to prove unremarkable, as noted above. The far bigger issue, however, is that AI has become the market darling during the current financial mania, and this mania is without a doubt the worst in our lifetimes; possibly the most extreme in US history, or perhaps even human history.
Having driven short nominal rates to zero, central banks are now cornered, which means the now-inevitable financial collapse is likely to be a secular collapse (a multi-decade cycle) rather than an interim collapse like 2000 and 2008. If history is any guide, those who hold through the secular top won't recover, after adjusting for inflation and tax liabilities on phantom capital gains, for something like 30 years. AI as an "investment" is going to prove an historic disaster for those exposed now. Later, after the collapse, those who buy surviving entities and infrastructure cheap may enjoy remarkable gains.
Time will tell what AI is actually worth operationally and financially.
Far too much anecdotal evidence from both sides.
Why should the situation for Agents be any different? They are based on LLMs which we know cannot generate a useful ROI, therefore, they will show the same problems. They are just thin wrappers around LLMs. QED.
Maxwell Smart was also an “agent.”
As was Hank Kimball, the county agent on Green Acres.
https://m.youtube.com/watch?v=pVcXhaGOZyg
Actually, the actual answer is 'no'.
Having a chatbot for 3 Mountains Plumbing in Portland schedule an appointment for me, which never made it onto their actual schedule, I no longer do business with them. They had a previously satisfied customer and lost me. I will avoid chatbots if at all possible.
AI agents also SUCK at customer service, and they are being used now with no option to talk to a human being if they 1) Don't understand what you say, 2) Have an unusual or complex problem to discuss, or 3) Can't figure out a relevant solution.
When there IS a human being available, the AI agent often tries not to transfer you anyway. The one way I've managed to penetrate this AI wall is to start talking in long, complex sentences peppered with multi-syllabic vocabulary until they finally say "Let me get someone to help you."
The "customer service" chatbot is often an electronic moat to keep customers at bay (who frequently give up in frustration), thereby reducing the number of company representatives who would actually provide helpful service.
"Beyond a few areas such as coding and customer service..."
So...if you just ignore the areas where agents do well...then they suck? You could literally say this about anything.
As I'm quoted in that article and was at the conference, I can corroborate Isabelle's observation. I would like to add, though, that I think this situation is about to change. I saw plenty of examples of new companies taking an approach to agent building that, imho, has real potential for success: focusing on a narrow, well-defined problem, and solving it with a well-engineered system that uses LLMs as a component, but with other non-LLM and non-AI components as well.
A sensible approach is to resist confirmation bias and explore LLM-based success cases as well as failure cases, to understand them and how successes and failures might spread. Instead of rehashing the coding example, consider customer service. It was a major hope for the task-focused chatbot tsunami of 2016-2019; some businesses still torture customers with those bots, providing no human alternative, but they failed. With LLM-based AI, they can succeed. When the predecessors of Gemini and Copilot appeared, I knew that Google and Microsoft product users would go to them with product problems, and if the AI failed to help with THAT, it would be severely discredited. The companies poured their customer service databases into the training models, and the AI is very good. Not perfect; it may access outdated information. But it knows a lot. To debug an obscure "print to PDF" glitch yesterday, it told me to bring up a menu with Control Shift P when the print menu was showing. Who knew such a menu existed? A subsequent suggested step was not there, but from that point I was home.
The greatest enemy of scientific research is confirmation bias. We all have biases, but if you realize that something disconfirms your bias -- profitable uses in coding or customer service -- you can start developing a better theory for where failures occur and successes occur, and build out from that to see where other successes and failures are likely and whether it will lead to ROI for you or the tech developers. My view so far is that AI use is profitable for me, who doesn't pay for it, but will not be for the tech companies, because I wouldn't pay enough for it, and because I think that not even previously failed cognitive models would enable it to succeed widely enough -- it would need deep social and organizational models along the lines Catherine Blanche King posted here two hours ago.
Rod Ast: I DID drift away from ROI to the claims of some that AI can "do anything that a human can do."
But still, and though there is probably much more to it, as you suggest, the "AI agent value gap" does get on the table, so to speak, as "an intelligence problem" (per my recent notes)--a term (intelligence problem), however, which still stands in need of further definition/explanation of its meaning in this or any other context where it is used. Thank you for replying.
If AI is so good, why are there still so many spam and phishing scams? After 30 years in the business, with so much change, I have never seen hype like that generated by AI. Is it just the echo chamber of social media? And all those previous changes, like big data, were supposed to solve the very same problems that AI proposes to solve. AI may well be the next computing business model: get people in the door cheap for a few years, then ratchet up the subscription costs once enterprises and their former coders have forgotten how to do any of it.
There is so much spam and phishing because scammers and phishers are using AI to stay ahead of those deploying AI to block it. Scammers and phishers have a huge incentive. It is not that enjoyable to work on outwitting scammers and phishers, and engineers burn out or shift to something more fun. Wouldn't you? Maybe we could avoid directing frustration into thinking the worst of other people. This hype is pretty normal for AI, although the stage is larger and a few zeroes have been added.