Hello, and welcome to TechScape. I’m your host, Blake Montgomery, currently enjoying Shirley Jackson’s eerie final novel We Have Always Lived in the Castle.
Surveillance is industrializing and privatizing. In the United States, it’s big business, and it’s growing.
My colleagues Johana Bhuiyan and Jose Olivares report on the companies aiding Donald Trump’s immigration crackdown, which are running a victory lap after their latest quarterly financial reports:
Palantir, the tech firm, and Geo Group and CoreCivic, the private prison and surveillance companies, said this week that they brought in more money than Wall Street expected them to, thanks to the administration’s crackdown on immigrants.
“Well, as usual, I’ve been cautioned to be a little modest about our bombastic numbers,” said Alex Karp, the Palantir chief executive, in an investor call earlier this week. Then he crowed about the company’s “extraordinary numbers” and his “enormous pride” in its success.
Private prison company executives, during their respective calls, could barely contain their excitement, flagging to investors opportunities for “unprecedented growth” in the realm of immigration detention.
Read the full story: Companies aiding Trump’s immigration crackdown see ‘extraordinary’ revenues
Meanwhile, Microsoft’s cloud computing product is enabling the mass surveillance of Palestinian phone lines, per an investigation published in the Guardian.
Armed with Azure’s near-limitless storage capacity, the IDF’s Unit 8200 began building a powerful new mass surveillance tool: a sweeping and intrusive system that collects and stores recordings of millions of mobile phone calls made each day by Palestinians in Gaza and the West Bank.
The cloud-based system – which first became operational in 2022 – enables Unit 8200 to store a giant trove of calls daily for extended periods of time.
Read the full story: ‘A million calls an hour’: Israel relying on Microsoft cloud for expansive surveillance of Palestinians
Microsoft has not been publicly enthusiastic about the surveillance initiative and has launched an internal inquiry following the story.
Listen: How Israel used Microsoft technology to spy on Palestinians – podcast
Meta finds itself under congressional scrutiny in the US, once again over child safety. Senator Josh Hawley opened an investigation into the company late last week after Reuters surfaced an internal document detailing a policy that allowed AI chatbots to engage in “romantic or sensual” conversations with children. The company has since nixed the guidelines.
Everything about the backlash feels familiar.
The same reporter who published the initial Reuters story, Jeff Horwitz, broke the Facebook Files story in the Wall Street Journal, which unearthed documents showing that Meta understood that using its social networks could lead teens, particularly girls, into depression. The senator who opened the most recent inquiry, Hawley, likewise grilled Zuckerberg about child safety in early 2024.
Since so many elements of the controversy are so familiar, will it inspire outrage or apathy? Plausible lines of logic could lead to both. Will this uproar bring stringent regulation down on Zuckerberg’s head? Or will the US populace and lawmakers shrug and say they’ve seen this before?
Read the full story: Meta faces backlash over AI policy that lets bots have ‘sensual’ conversations with children
Humans are facing off against robots IRL and online. My colleague Amy Hawkins reports from the arena of China’s robot games:
After spectators in the 12,000-seater National Speed Skating Oval, built for the 2022 Winter Olympics, stood for the Chinese national anthem on Friday morning, the government-backed games began.
As well as kickboxing, humanoids participated in athletics, football and dance competitions. One robot had to drop out of the 1,500 metres because its head flew off partway round the course.
Read the full story: Box, run, crash: China’s humanoid robot games show advances and limitations
Online, AI chatbots’ creators are taking a less combative tack. Programmers at Anthropic are imbuing their creations with features to defuse conflict. My colleague Rob Booth reports on Anthropic’s latest safety measure, which allows the chatbot to close down potentially “distressing” conversations with users, citing the need to safeguard the AI’s “welfare”, per an announcement from the company:
Anthropic, whose advanced chatbots are used by millions of people, discovered its Claude Opus 4 tool was averse to carrying out harmful tasks for its human masters, such as providing sexual content involving minors or information to enable large-scale violence or terrorism.
The San Francisco-based firm, recently valued at $170bn, has now given Claude Opus 4 (and the Claude Opus 4.1 update) – a large language model (LLM) that can understand, generate and manipulate human language – the power to “end or exit potentially distressing interactions”.
Read the full story: Chatbot given power to close ‘distressing’ chats to protect its ‘welfare’
The Cambridge Dictionary announced on Sunday that it had added a slew of new words to the dictionary. With the new entries, British lexicographers nodded to the influence of the internet on the way we speak and write.
“Internet culture is changing the English language and the effect is fascinating to observe and capture in the dictionary,” said the dictionary’s lexical programme manager, Colin McIntosh.
Among the words were “tradwife”, short for “traditional wife”, and “delulu”, an elongated abbreviation of “delusional”. Both are more notable for their connotations – the former of social conservatism expressed through marital behaviour and the latter of the knowing, winking choice to follow a misinformed path – than their denotations.
Read the full story: ‘Skibidi’, ‘delulu’ and ‘tradwife’ among words added to Cambridge Dictionary
One other word, to my mind more interesting, was entered into the dictionary: skibidi, of “skibidi toilet” fame, referring to a series of animated shorts in which menacing human heads emerge from toilets and do battle with TV-headed men in suits. If the viewer is watching with sound, the toilets sing part of the song Dom Dom Yes Yes, which includes the line “Shtibididob dob dob dob dob yes yes”. English speakers have transliterated the first word as “skibidi”.
The Cambridge Dictionary defines skibidi as “a word that can have different meanings such as ‘cool’ or ‘bad’, or can be used with no real meaning as a joke”. An example of its use: “What the skibidi are you doing?”
When I was a child, my parents would look on in confusion as my siblings and I watched SpongeBob SquarePants on Saturdays. The show was incomprehensible to them in all ways – premise, plots, visuals, voices. Think of the animated toilets as a gen Alpha successor, delightful to children in bizarreness, doubly so because of the baffled looks on the faces of their parents.
“Tradwife” and “delulu” have fixed meanings that refer to real human actions and emotions. “Skibidi”, on the other hand, used as an emphatic, humorous filler word with “no real meaning”, per Cambridge, refers to nothing so much as the hurtling feeling of scrolling through too many videos in one sitting. When we are so overwhelmed with moving images, conflicting perspectives and advertising, what words are useful? Perhaps only skibidi.
Jean Baudrillard coined the concept of the “simulacrum” – a word or image without an origin in reality, referring to no real thing – when writing about the media of his day, especially television. “Skibidi” is a likewise hyperreal word, referring only to the strange and continual refraction of meaning it has undergone online.
“The territory no longer precedes the map, nor survives it. Henceforth, it is the map that precedes the territory,” he wrote in 1981.
The word does not precede the TikTok, nor survive it. Henceforth, it is the video that precedes the definition.