As they love to say: You can’t put the genie back in the bottle!
Or in this case, you can’t put the GenAI back in the bot-tle
I joked (darkly) with my friend that one day we will say, "Remember when we used to write whole paragraphs de novo?"
It's not just ironic; it exposes their hypocrisy: touting these tools as the way of the future and resisting any regulation of their use...until it comes to their own employee search process.
AI Bot time! Create fictional applications and slam them a million times a minute.
It would be awesome to do this to all AI companies. Freaking hilarious.
I think this says more about the dismal state of the recruiting process at this time than about anything related to AI.
Just for starters, asking candidates early on to write an essay about what excites them about an organisation they haven't even talked to reeks of desperation rather than asking the right questions at the right time.
You’d be surprised how many places ask for that in the application process, even the lesser known firms.
Oh, I am not surprised at all, since I currently have a front-row seat to application processes. My point is that the system is fundamentally broken: recruiters follow a certain process because it gives them "something" to base a decision on, not necessarily because the step is a valid indicator of anything. Given how many companies I have seen that have barely set up their ATS at all, I am not sure how they would use AI in even its most basic ways. I can't tell you how often companies don't even set up automated emails.
This all reminds me of the Seinfeld episode which ends with the phrase "And you want to be my latex salesman?"
Now, in the case of Anthropic, I had originally given them the benefit of the doubt that they have a decent handle on how to use ML for analyzing applications, which to me is a vastly different topic from using GenAI to put an application together.
But even then, with what, around 500 employees - and I assume not all went through such a screening process (referrals, etc.) - I question whether they have enough training data to build a proper model.
I continue to have no desire to read things written with AI, and I feel like people are wasting my time when they send me AI content. One exception would be as a help in editing (for constructive feedback, brainstorming, or non-native speakers). But a whole email written with AI? My human eyes glaze over.
I don’t have any desire to read anything written or produced by AI either, but unfortunately it’s getting more difficult by the day not to.
Most of the stuff appearing at the top of pretty much every search engine's results (Google, Bing, DuckDuckGo, etc.) these days is crap generated by AI.
AIs are not only destroying search; botshit is flooding the World Wide Web and burying legitimate human-produced content under a Mount Everest-sized pile of manure.
In short, AI companies steal the collective high-quality creative output of humanity, feed it into their AIs, which churn out sausage that they actually expect us to PAY them for!
And the most amazing part of it all is that millions of folks are actually forking over the dough for their crap.
This is yet another classic case of Silicon Valley saying one thing to the press and something entirely different behind closed doors.
A well-known example is AI labs eagerly hiring developers while simultaneously claiming that AI has surpassed human-level performance on coding platforms like LeetCode.
But in the end, Silicon Valley’s hype can only take them so far. No matter how much they talk, they still need real people to get meaningful work done.
As a physicist once said, “We talk to God to uncover the secrets of the universe. Nevertheless, I am an atheist.” That sums up AI perfectly—nothing but hype with little real substance.
“We talk to AI to uncover the secrets of the universe because we are AI-theists”
AI-theist : someone who believes in GOD (Generative Overconfident Dissemblers)
The unHoly Trinity: The Fawner, the Son and the unHoly (Rivercrossing) Goats
So much for their faith in the AI detection tools.
AI detection doesn’t work very well, but if it did, it would sure be a great way to weed out the cheaters that you wouldn’t want working for your organization.
My point is that they put the detection products forward as reassurance that AI is not going to ruin higher ed, etc., but they clearly don't have faith in these products.
Sure, I get it.
But even if any given AI company had a way of reliably detecting output from their own bot, they wouldn’t implement it because it would put them at a disadvantage relative to companies that didn’t offer such detection mechanisms.
Who is going to use a bot that can get them “caught” when they can use one for which there is no detection?
OpenAI actually developed a cryptographic method for detecting output from ChatGPT but never implemented it.
And not because it somehow degraded the output; supposedly it was completely undetectable to users.
I think each company has by now acknowledged that those have no hope of working, at least for text.
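For what it's worth, the watermarking approach that has been described publicly in the academic literature (the "green-list" scheme of Kirchenbauer et al.) is statistical rather than strictly cryptographic: the sampler is nudged toward a pseudo-random "green" half of the vocabulary, seeded by the preceding token, and a detector then checks whether a suspiciously high fraction of tokens landed in their green lists. A toy sketch of the detector side, purely illustrative (this is not OpenAI's actual method, and the function names are my own):

```python
import hashlib


def green_list(prev_token: str, vocab: list[str]) -> set[str]:
    """Pseudo-randomly select half the vocabulary as 'green' tokens,
    seeded by a hash of the previous token (toy version of the
    green-list watermark idea)."""
    scored = sorted(
        vocab,
        key=lambda t: hashlib.sha256((prev_token + t).encode()).hexdigest(),
    )
    return set(scored[: len(scored) // 2])


def green_fraction(tokens: list[str], vocab: list[str]) -> float:
    """Fraction of tokens that fall in the green list of their
    predecessor. Unwatermarked text should hover around 0.5;
    text from a sampler biased toward green tokens scores higher."""
    hits = sum(
        1 for prev, tok in zip(tokens, tokens[1:])
        if tok in green_list(prev, vocab)
    )
    return hits / max(len(tokens) - 1, 1)
```

In a real system the detector would turn that fraction into a z-score over thousands of tokens, which is also why the scheme weakens on short texts and after paraphrasing.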
Those who can, do; those who can’t, use AI.
Let us DeepThink about that. 🤣
That old saying comes to mind, "DeepSeek and ye shall find."
DeepSeek and ye shall find deep BS
Certainly, I'd be happy to tell you why I would be excited to work for your company, without the use of AI tools. Here are some reasons why I would be a perfect candidate:
- *Personal responsibility*: ...
That applicants for AI positions must not use AI in applications makes perfect sense.
At my work, I grill candidates on the whiteboard, to see how they think on their feet. Same idea. Later, if hired, sure, use whatever makes you productive.
I grill candidates on the BBQ to see how they run on their feet
By company BBQ time they better be hired already, bring food to share, and be good at flipping burgers. :)
These hallucinations are all my own.
I have recently reviewed tons of job applications, including for high-end positions, and I have seen a disappointing number with AI-generated pablum answers. Fortunately, they've usually been easy to spot.
Ahh, the old: “Do as I say, not as I do,” bit. Yeah, fat chance there, pal.
especially if using their Claude, whose writing and content seem more human than their competitors' .... :P
FFS 🙄