Hello, and welcome to TechScape. Johana Bhuiyan and Dara Kerr here, filling in for Blake Montgomery, who’s enjoying the beach but likely getting sunburned.
Tech companies are fighting for the title of world’s most advanced AI. The goal is to supercharge their bottom lines and keep investors and Wall Street happy. But developing the world’s most advanced AI means spending billions on data centers and other physical infrastructure to house and power the supercomputers AI requires. It also means a drain on natural resources and on local power grids in the areas surrounding data centers worldwide.
Still, last week’s earnings reports made clear that tech firms are forging ahead. Google announced it was planning to spend $85bn on building out its AI and cloud infrastructure in 2025 alone – $10bn more than it initially predicted – and the company expects that spending to increase again in 2026. For context, Google reported $94bn in revenue in the second quarter of this year. Chief executive Sundar Pichai said Google is in a “tight supply environment” when it comes to the infrastructure needed to support AI processing and compute, adding that the results of the increased spending would take years to be realized.
Google isn’t alone. Amazon has said it plans to spend $100bn in 2025 – the “vast majority” of which will go to powering the AI capabilities of its cloud division. As a point of comparison, Amazon spent just under $80bn in 2024.
“Sometimes people make the assumption that if you’re able to decrease the cost of any type of technology component … that somehow it leads to less total spend in technology,” said Amazon’s CEO Andy Jassy during an earnings call in February. “We’ve never seen that to be the case.”
Meta, too, has upped the amount it plans to spend on AI infrastructure. In June, Mark Zuckerberg said the company planned to spend “hundreds of billions” of dollars on building out a network of massive data centers across the US, including one the firm expects to be up and running in 2026. Executives originally projected the firm would spend $65bn in 2025 but have since adjusted that to anywhere between $64bn and $72bn.
Meta and Amazon report earnings this week.
Artificial intelligence companies have come under fire for cannibalizing creative industries. Artists have seen their work used without their permission as companies train their algorithms, and creative teams have shrunk or been laid off as parts of their work are taken over by AI.
“It will mean that 95% of what marketers use agencies, strategists, and creative professionals for today will easily, nearly instantly, and at almost no cost be handled by AI,” Sam Altman, the CEO of OpenAI, has said. “No problem.”
In response, coalitions of artists have launched several copyright lawsuits against the top AI companies, including OpenAI, Meta, Microsoft, Google and Anthropic. The companies say that under the “fair use” doctrine they should be able to use copyrighted material for free and without consent. Artists, including names such as Sarah Silverman and Ta-Nehisi Coates, say the companies shouldn’t be able to profit off their work. So far, the AI companies are winning.
Adobe, the software company best known for making creative tools such as Photoshop, says it’s trying to walk the line between developing useful AI programs and making sure artists aren’t getting the short end of the stick.
The company has introduced two “creator-safe” tools that aim to tackle issues around copyright and intellectual property. One is its Firefly AI model, which Adobe says is trained only on licensed or public-domain content. The other is the Adobe Content Authenticity web app, which lets photographers and other visual artists indicate when they don’t want their work used to train AI and also lets them attach credentials to their digital creations.
Artists can “apply a signature to it in the same way that a photographer might sign a photo or a sculptor would etch their initials into a sculpture”, said Andy Parsons, a senior director at Adobe who oversees the company’s work on content authenticity. We spoke with Parsons about the burgeoning world of AI and what it means for creators.
***
Q: What do you see as the biggest issues that creators and artists are facing with the advent of AI, and generative AI?
I think there’s one prevailing issue, which is the concern that various AI techniques will compete with human ingenuity and with artists of all kinds. And that goes for agencies, publishers, individual creators.
***
Q: Is Adobe Firefly one of the ways that Adobe is trying to address these problems and make sure that creators’ work is not ripped off?
Yeah, absolutely. From the beginning of Adobe Firefly, we followed two guiding principles. One is to make sure that Adobe Firefly is not trained on publicly available content. It’s only trained on things that Adobe and the Firefly team have exclusive rights to use. That means that it can’t do certain things. It cannot make a photo of a celebrity, because that celebrity’s likeness we would consider guarded and potentially protected.
The second thing we built in from the beginning is transparency, so knowing that something that comes out of Firefly was generated by AI. This is what we call content provenance, or content authenticity. It’s making clear something is a photograph or made by an individual artist as opposed to made by AI.
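For readers curious about the mechanics: Adobe’s content credentials are built on the open C2PA standard, which binds signed assertions about an asset (who made it, whether generative AI was involved, whether the creator permits AI training) to a cryptographic hash of the file itself. The Python sketch below is a loose illustration of that idea, not Adobe’s implementation: the field names are simplified stand-ins for the real C2PA manifest schema, and actual credentials are created and signed with dedicated tooling rather than assembled by hand.

```python
import hashlib
import json
from datetime import datetime, timezone

def build_credential(asset_bytes: bytes, creator: str, ai_generated: bool,
                     allow_ai_training: bool) -> dict:
    """Sketch of a C2PA-style content credential (field names simplified).

    A real Content Credential is a cryptographically signed manifest embedded
    in, or stored alongside, the file; this only shows the kind of information
    such a manifest binds to the asset via a hash.
    """
    return {
        "asset_sha256": hashlib.sha256(asset_bytes).hexdigest(),  # ties the claim to these exact bytes
        "creator": creator,                        # the "signature" Parsons describes
        "created_at": datetime.now(timezone.utc).isoformat(),
        "generative_ai_used": ai_generated,        # transparency: was this made by AI?
        "ai_training_allowed": allow_ai_training,  # the "do not train" preference
    }

if __name__ == "__main__":
    fake_photo = b"\xff\xd8\xff...raw image bytes..."  # stand-in for a real JPEG
    credential = build_credential(fake_photo, creator="Jane Doe",
                                  ai_generated=False, allow_ai_training=False)
    print(json.dumps(credential, indent=2))  # a real manifest would be signed, not printed
```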
***
Q: What is the Adobe Firefly trained on?
It’s a combination of Adobe Stock and some licensed datasets. It’s trained on things that Adobe has clear rights to use in this manner.
***
Q: How do tech companies like Adobe avoid copyrighted materials sneaking into the datasets?
We have licensed and clear rights to all of the data that goes into that dataset. There’s an entire team devoted to trust, safety and assurances that the material is available to be used. We don’t crawl the open web, because as soon as you do that, you do risk potentially infringing on someone’s intellectual property. Our feeling is it’s not always the case that more training data is better.
***
Q: What does the future of human creativity look like now that we’re living in this new world with generative AI?
When it comes to content authenticity, there’s that “nutrition label” idea we sometimes talk about. If you walk into a food store, you have a fundamental right, one fulfilled in most democratic societies, to know what’s in the food that you’re going to serve your family. And we think the same is true of digital content. We have a fundamental right to know what it is.
Last week, the internet in the UK underwent a seismic change. As of Friday, social media and other internet platforms are required to implement safety measures protecting children or face large fines.
It is a significant test for the Online Safety Act, a landmark piece of legislation that covers the likes of Facebook, Instagram, TikTok, YouTube and Google.
Read the Guardian’s guide to the new rules.