The things we want it to do, it doesn't do well enough to trust. Meanwhile, the things it is starting to do really well, generating photos and videos, start to erode trust in reality itself.
Do people want to live in a world where they can't trust anything they didn't personally see with their own eyes?
>> Do people want to live in a world where they can't trust anything they didn't personally see with their own eyes?
Maybe it will all turn out for the better, in an unexpected way. Before the advent of the Internet and the flood of cheap "content" that followed, there were newspapers and TV news. These had real professionals behind them, some level of integrity, and actual fact-checking, so at least for reputable names you could reasonably trust what they presented.
When garbage content completely takes over the landscape, like litter in India let's say, we'll be left with no choice but to turn back to the old news channels: real journalism. And I think it's almost inevitable, as there's no stopping the people doing the littering. Funnily enough, much of this content originates in litter-filled Asian countries, where the promise of a few bucks on the "content platforms" attracts huge crowds with no scruples whatsoever; if AI attracts views and likes, the attitude is: let's drown them in AI.
I personally feel visceral hatred when I'm tricked by a video that turns out to be AI, and judging by the comments, I'm far from alone.
> we'll be left with no choice but to turn back to the old news channels: real journalism
This sounds naively optimistic IMO. It's not as if people were immune to false information before social media and LLMs took off. Technological advancements and free access to information promised us a better future of a well-informed society. Instead, it's increasingly turbo-charging the worst instincts of humanity, putting our very freedoms at risk. Grim as it sounds, I'm not seeing a way out of this.
AI content has no value. As soon as you find out something was created by AI, you assume no effort, art or emotion went into its creation and lose interest in consuming it. It's fine and fun for "inside joke" vibes between friends, but outside that context no one cares, because they too can just type in a few words and get the same or better result. It's empty-calorie content.
I've encountered maybe 5 people who are happy and/or optimistic about AI, and for 3 of those I'm just guessing. For everyone else, their opinions land somewhere between wary and weary and resentful.
Of the negative responses I've heard, maybe 5% are due to the actual performance of the LLM: people running into substandard, unpredictable or unhelpful responses.
The other 95% is squarely due to deployment. It's the heavy-handed, pushy, obnoxious, deceitful, non-consensual, creepy coercion that platforms use to subvert you into their AI glue traps.¹
In short, Big Tech has turned AI of every quality into unwanted foistware.
From the article:
We’re angry that we don’t have choices to use AI. Companies are shoving it into our video calls, email software, digital assistants, shopping websites and our Google search results. Some corporate bosses demand their workers use AI or else.

With other new waves of technology, such as smartphones and social media, “you had to opt in,” said Yam. “Now there’s a lot of ambient exposure to AI that I don’t necessarily choose.”

Even Harbath, who uses AI fairly enthusiastically, felt angry when a publishing company ran her book manuscript through AI software to identify repetition in her writing and to help identify effective marketing strategies.

The feedback was helpful, but it took Harbath time to realize why she was mad: She wasn’t told AI was going to be used in this way, and she had no information about it.

And she said “both things can be true” — you can want AI to help you and resent when it’s used in ways that you don’t want or expect.
¹ Gmail became a minefield of Gemini elements to dodge, especially when they appeared where needed buttons used to be. Google, Brave and DDG all hijacked search-result space to launch into officious diatribes, like your drunk but suddenly expert Thanksgiving uncle.
Elsewhere, Copilot appears around every third corner, ceaselessly trying to insert itself between you and your family pics or work docs. I think this is why Microsoft Copilot has such a low adoption rate.
At some point you might notice that AI pushers and predator boyfriends are driven by the same compulsions - domination and control.
Massive money moving around the globe to spit out blerg content? It seems overhyped to me. I get it, I can generate some text on my screen, but it rarely makes me anything other than skeptical.
Even the common folk see their own demise in AI. Today it's AI slopware that nobody asked for. Tomorrow it's AI-managed employees squeezed for maximum output. We are lucky the scientists haven't yet figured out how to make AI actually intelligent.
On the one hand, Sam Altman stealing GPUs is funny... on the other, what happens to a "nobody" involved in a lawsuit when the mom brings a video of said nobody picking up and shaking her baby? At this point, I can see chain-of-custody algorithms being needed to verify everything.
There already is chain of custody and rules of evidence in court. Faking evidence is a problem as old as time.
The challenge we’re seeing is that none of this gets a chance to make it to court, because the video gets plastered all over Fox News, YouTube, TikTok, Truth Social, etc. first, and no one cares if it turns out to be fake later.
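What a "chain-of-custody algorithm" could mean in practice is just cryptographic signing at capture time: the device binds a tag to the exact bytes it recorded, and any later edit breaks verification. A minimal sketch (the key handling and names here are illustrative; real provenance systems such as C2PA use public-key signatures and embed signed metadata in the file itself, rather than a shared secret):

```python
import hashlib
import hmac

# Illustrative only: a real device would hold a private signing key,
# not a shared secret known to verifiers.
DEVICE_KEY = b"secret-key-held-by-capture-device"

def sign_capture(media_bytes: bytes) -> str:
    """Device-side: produce a tag bound to these exact bytes."""
    return hmac.new(DEVICE_KEY, media_bytes, hashlib.sha256).hexdigest()

def verify_capture(media_bytes: bytes, tag: str) -> bool:
    """Verifier-side: recompute the tag and compare in constant time."""
    expected = hmac.new(DEVICE_KEY, media_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

original = b"...video bytes..."
tag = sign_capture(original)
print(verify_capture(original, tag))         # untouched file verifies
print(verify_capture(original + b"x", tag))  # any edit breaks the chain
```

The point of the sketch is only that tamper-evidence is cheap once signing happens at capture; the hard parts are key management and, as the thread notes, getting anyone to check before the video goes viral.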
This is why Fox news has such a hold on viewers - it tells them what an angry part of them hopes to hear, true or not.
It isn't just Fox News. CNN and all the others are just as bad.
https://archive.fo/8wuSh
It will be used by the people in power to produce fake content to win elections.
With AI and mass surveillance, they won't need elections.