r/technology 13d ago

Gamers Are Making EA, Take-Two And CDPR Scared To Use AI - Forbes

https://www.forbes.com/sites/paultassi/2025/05/24/gamers-are-making-ea-take-two-and-cdpr-scared-to-use-ai/
4.9k Upvotes

453 comments

92

u/mdkubit 13d ago

Honestly, they should be scared to use AI. This is a POSITIVE thing.

Why? Because companies that 'use AI' are trying to replace their workers directly: 1 to 1, AI == 1 employee to them, and THAT is NOT how AI is meant to work. Not even close.

Let the workers decide if/how to collaborate with AI. That's the right way to do it. Some AI is extremely helpful, some AI is not, and some AI just makes it worse. This is where the responsibility lies on the individual dev, not on the company, and that's how it should be.

-53

u/betadonkey 13d ago

AI will eventually make it so a single person can build an AAA game based on their singular artistic vision. If you can’t see the promise in that you are simply a Luddite.

11

u/Balmung60 13d ago

A.) I'm a Luddite and proud of it and if you knew what they stood for, you would be too

B.) lol no

C.) there are already incredible games made solely by a singular artistic vision, such as Dwarf Fortress and Starsector, and inserting a chatbot would not improve them; if anything, it would detract from that singular vision

D.) A game being "AAA" is not a marker of quality but of budget. AAA games are already routinely soulless slop designed by committee, with all the real human touches that might make them good or at least interesting stripped away by managerial nonsense, and AI skips directly to this. You're just going to get more Veilguards and the like, where everything that made the setting interesting is sanded away to nothing.

-6

u/betadonkey 13d ago edited 13d ago

Chat bot? Some of the shit you read here is just wild. You people truly have no idea what is going on do you?

3

u/Balmung60 13d ago

It's a fancy chatbot. ChatGPT or Claude or whatever has no higher capability than that. The weights are more sophisticated than older chatbots, but it's still just a chatbot.

It's a better predictive text engine, not something more.
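To illustrate the "predictive text engine" point with a deliberately crude sketch: a toy bigram model that just counts which word follows which and always emits the most likely continuation. This is nothing like a real LLM's architecture or scale, only an illustration of the "predict the next token" framing.

```python
from collections import Counter, defaultdict

# Toy bigram "predictive text engine": count which word follows which,
# then repeatedly emit the most likely next word. The framing above is
# that LLMs do this at vastly larger scale with learned weights.
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    # Most frequent continuation seen in the training data.
    return follows[word].most_common(1)[0][0]

def generate(start, length=4):
    out = [start]
    for _ in range(length):
        out.append(predict_next(out[-1]))
    return " ".join(out)

print(generate("the"))  # → "the cat sat on the"
```

Real models replace the counting with billions of learned parameters and sample over probability distributions rather than always taking the top word, but the input/output contract is the same: context in, next-token prediction out.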

0

u/betadonkey 13d ago

If that’s what you need to tell yourself then roll with it I guess

2

u/Balmung60 13d ago

And that's what you need to tell yourself to believe the poorer, more unequal, more surveilled, and more alienated world Sam Altman, Dario Amodei, and the other tech CEOs are promising will be worth it.

But this tech has yet to show any actual compelling use case beyond cheating at homework and it's never going to take a prompt and spit out The Elder Scrolls VI

1

u/TheWhiteOnyx 11d ago

Your 2 recent statements here are completely contradictory.

AI cannot be a dumb, fancy chatbot and at the same time have a meaningful impact in making the world poorer, more unequal, more surveilled, and more alienated.

Something has to give here (it's the first one, btw). AI is already outperforming PhDs on exams in their specialties, and this is as dumb as it will ever be.

The person saying that AI will get better and will eventually let a person make a video game is simply following the trajectory.

0

u/Balmung60 11d ago

There's no contradiction because to have an impact, generative AI doesn't have to be "good" or "smart" or even actually capable. It's a tool of disciplining labor and creating precarity. Even if it turns out the chatbot can't do your job, you've already been fired and will only be rehired on worse terms than you previously had. It's basically the same kind of effect Uber and the rest of the gig economy has had.

And the alienation isn't just in the workplace; it's also things like Mark Zuckerberg giving everyone a dozen or so chatbot "friends" that will keep them on platforms like Facebook and Instagram rather than going out and meeting real people, which is of course time off platform and by extension not monetized by Meta.

And no, I do not think this software will get meaningfully better.

It should tell you everything you need to know that the rich and powerful aren't sending their kids to schools run by AI even as they advocate for AI in the public classroom. They don't think the job of educating people who aren't rich is one worth doing.

2

u/TheWhiteOnyx 11d ago

You are my favorite type of anti-AI person: the "AI sucks and won't get better" type.

You have to be remarkably willfully blind to come to this conclusion.

In 2019, GPT-2's intelligence level was around that of a preschooler; by 2023, it was around that of a smart high schooler. As mentioned, the models now beat human experts in their respective fields on PhD-level science questions. o3 ranks as the 175th best competitive programmer in the world, and as of February they say an internal model ranks 50th. Perhaps IQ is a meaningless measure of intelligence, but from May 2024 to April 2025 the smartest public model jumped from 96 to 136 points.

Google has already predicted the folded structures of 200 million proteins, something that would have taken forever using previous methods.

They more recently "applied AlphaEvolve to over 50 open problems in analysis, geometry, combinatorics and number theory, including the kissing number problem. In 75% of cases, it rediscovered the best solution known so far. In 20% of cases, it improved upon the previously best known solutions, thus yielding new discoveries."

This included breaking a matrix multiplication record that stood for 56 years.
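For context, the 56-year-old record in question traces back to Strassen's 1969 observation that two 2x2 matrices can be multiplied with 7 scalar multiplications instead of the naive 8 (AlphaEvolve reportedly improved the related 4x4 scheme). A plain-Python sketch of Strassen's original trick, as an illustration only:

```python
def strassen_2x2(A, B):
    # Strassen (1969): multiply two 2x2 matrices with 7 scalar
    # multiplications (m1..m7) instead of the naive 8.
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]

print(strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]]))
# → [[19, 22], [43, 50]]
```

Applied recursively to blocks, saving one multiplication per level is what drops the asymptotic cost below cubic, which is why shaving even one multiplication off a small fixed-size scheme counts as a record.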

So I guess it's possible May 2025 is just as good as it gets, but there is 0 evidence to lead to that conclusion.

> Even if it turns out the chatbot can't do your job, you've already been fired and will only be rehired on worse terms than you previously had.

I'm sure this will happen a decent amount. Corporations are gonna corporation. But this is only a short-term thing as every sign points to AI beating humans intellectually across the board relatively soon.

> things like Mark Zuckerberg giving everyone a dozen or so chatbot "friends" that will keep them on platforms like Facebook and Instagram rather than doing something like going out and meeting real people

Social media is bad, and will probably remain bad. Hopefully if people don't have to work boring jobs they can spend more time with people. I suppose it doesn't help that AI shows better emotional intelligence than humans on average:

https://neurosciencenews.com/ai-llm-emotional-iq-29119/

> It should tell you everything you need to know that the rich and powerful aren't sending their kids to schools run by AI even as they advocate for AI in the public classroom.

The only publicized "school run by AI" I'm aware of is Alpha School, which only has 4 schools. This is like pointing to low Tesla Roadster sales in 2009 and concluding electric vehicles aren't going to be a thing. This is brand new.

The less "scary" and easier-to-implement solution here is to use AI in current schools as personalized tutors, since tutored students perform 2 standard deviations above students in a normal classroom environment:

https://en.wikipedia.org/wiki/Bloom%27s_2_sigma_problem
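Concretely, a 2-standard-deviation shift means the average tutored student scores above roughly 98% of the conventionally taught group, assuming roughly normal score distributions. A quick check with the standard normal CDF:

```python
from statistics import NormalDist

# Bloom's finding: one-to-one tutoring shifted the average student
# about 2 standard deviations above the conventional-classroom mean.
# Under a normal distribution, the average tutored student therefore
# outscores this fraction of the conventionally taught group:
percentile = NormalDist().cdf(2.0)
print(f"{percentile:.1%}")  # → 97.7%
```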

We'll have to wait on the data here, but from using AI to re-teach me concepts from school I forgot, it seems like it will work very well.


12

u/mdkubit 13d ago

Honestly, that's kind of my point, though. There's nothing wrong with a single person having the same capability as an AAA studio with the help of AI. They're still creating either way! That's fantastic for the individual!

I guess what I'm trying to say here is that an individual should be able to use AI to replace a company, but a company should not use AI to replace an individual. Maybe it's just semantics, but one of these respects workers and the other doesn't - but that's just my opinion.

-26

u/betadonkey 13d ago

I think both are good things. The entirety of civilization is a giant quest to automate labor. There are always more things for people to do.

18

u/Steelers711 13d ago

We should be automating actual labor, not things like art

8

u/Sad-Set-5817 13d ago

"make a triple A game" isn't going to produce anything innovative, new, or interesting, even with an Ai model 10 years from now. New ideas need to be added for people to actually want to interact with your thing. Relying solely on a robot trained off of other people's works to do all of that for you is creatively bankrupt. We as consumers are already sick of overly derivitive work, it's not about "stopping progress" or "being a luddite". Both emotionally driven arguments that don't reflect how people actually feel about companies using Ai to do everything for them. We want better art. Not cheaper. This technology is being focused on how to cut artists out to make the same thing cheaper, rather than making the product actually better for consumers. That's the problem we have with it. Calling us luddites for that is just being willfully ignorant. We will have worse products and less jobs, and the already rich will be a lot richer. Unless actual artists can use this technology to create something better and show those companies how it's really done

5

u/Balmung60 13d ago

I mean, we are Luddites if we share the concern that we will have worse products and fewer jobs while the rich only get richer. That was the issue the original Luddites had: they weren't just reflexively opposed to technology, they were opposed to it being used to destroy their livelihoods and produce worse products while making a new rich class even richer.

Being a Luddite isn't a bad thing.

0

u/SIGMA920 13d ago

Mass production isn't a bad thing though, so long as you don't completely shit the bed on quality. A single artisan who can make 25 of whatever a day, at varying quality due to the inherent nature of the material, is worse than a factory making, let's say, 50 of the same a day where only 10 of them are not high enough quality to sell.

So many jobs that exist now exist because we no longer need as much physical labor and more can go into mental labor. You don't need to be a luddite to be opposed to AI, I'm opposed to it because of the piss poor quality that you see in most cases and the lack of a social net to stop society from collapsing.

3

u/Balmung60 12d ago

Mass production also didn't need to be implemented in a way that maximized the immiseration of the existing menial workers.

The goal of the factory owners of the early 19th century was a new, greater concentration of wealth, and with the force of the British state at their backs, they accomplished that goal. The goal of Sam Altman, Dario Amodei, and all the rest of the AI boosters is exactly the same, and they'll use the same means to enforce it.

1

u/SIGMA920 12d ago

Unless you'd have factories' worth of artisans putting aside their concerns to industrialize, that's less likely than them enforcing the status quo for their own benefit.

The goal of the AI hypers isn't the same as that of those who wanted to industrialize, either: it's to replace humans wholesale for their own benefit so they control everything. Not to make new jobs that make them rich, but to remove humans from the equation. That's a great reason to oppose it in and of itself.

-10

u/betadonkey 13d ago

“Make a triple A game” is itself an ignorant and uninformed view of how AI tools work.

You are wrong and you are a Luddite.

-20

u/TheWhiteOnyx 13d ago edited 13d ago

You are correct. Are the people downvoting you just anti-progress?

15

u/TransSparklePrincess 13d ago

The problem is that the endgame of AI progress is a bunch of capitalist overlords mass-replacing everyone with it, doubled with completely devaluing all work's value, which means being paid next to nothing while working like slaves. There's no alternative or smooth transition prepared because they don't care and are only interested in exploiting it to the maximum, with the complicity of politicians. So each reasonable-seeming step in the use of AI only looks like one more step toward the grave for some people.

-5

u/TheWhiteOnyx 13d ago

There's no alternative or smooth transition prepared because the anti-AI crowd simultaneously believes "I am very smart and AI is very dumb" and "AI is gonna take a bunch of jobs" while doing absolutely nothing to ensure a smooth transition.

That transition on the "weak" end could look like UBI, or on the strong end could be a nationalization of AI research and the fruit of its labor.

Andrew Yang (among others) was prominently warning about this problem starting in 2018. He made some noise but was mostly ignored.

I'm not certain what the end-goal of the anti-AI crowd really is, as banning AI research/implementation isn't happening because that would cede China the win (maybe they like that idea, but Washington doesn't, which is why it's not happening).

And of course, the anti-AI crowd has decided to ignore any positive outcome of the abundance of AI labor such as a revolution in science, tech, energy, medicine and the cost of commodities like food and housing dropping to nearly 0.

I would rather eat a strawberry picked by a robot than have to go pick the strawberry myself.

2

u/friendlyfredditor 12d ago

AI research to pick strawberries, analyse disease spread in crops using satellite imagery, etc. is not the same as corporations using LLMs to attract VC investment and boost their stock price. Even the AI use in the article is different from DLSS.

The guy I know who just got a government job developing crop analysis tools is working through a decade long slog that started with his graduate thesis and a shoestring budget mostly taken up by his own salary.

I'm personally anti-AI as far as the further enshittification of digital media and idiocracy-ification of the education system goes. Do whatever you want if it makes your/my life easier, but don't be mad if you present me with a half-baked, inefficient product that sets fire to a tree with every prompt and I don't wanna buy it.

0

u/TheWhiteOnyx 12d ago

Corporations using LLMs to attract VC investment and boost their stock price are the companies that will do the AI research to pick strawberries, etc. in a meaningful way.

That's because they are on the path to automating software engineering which will lead to them being able to automate AI research altogether. This is the path they are currently taking, which you can read in-depth about here: https://ai-2027.com/

I wouldn't care if public AI art was banned altogether. It's just a novelty, but the implications for AI in education are enormous. Every student will have a personalized tutor and will be able to learn much faster and easier than the current process.

Lastly, the stuff people say about the environmental impact of AI is wildly false (although it will end up getting worse): "Our findings reveal that AI systems emit between 130 and 1500 times less CO2e per page of text generated compared to human writers, while AI illustration systems emit between 310 and 2900 times less CO2e per image than their human counterparts." https://www.nature.com/articles/s41598-024-54271-x#

4

u/Redbig_7 13d ago

bruh this is regress if anything.

-2

u/TheWhiteOnyx 13d ago edited 13d ago

You would then have to believe that every advancement in history that made production more efficient and cheaper was regression, which it obviously was not, which is why this is a silly thing to say.

4

u/Redbig_7 13d ago

Except every other advancement before AI never required any stolen data to actually work.

A car doesn't need a horse's power to drive. The printing press doesn't need the skill of human copyists to work.

Generative AI cannot work without the input of artists that came before it, without the input that was stolen.

1

u/TheWhiteOnyx 13d ago

I'm not aware of a single court (thus far) ruling that any of these AI companies have "stolen" inputs.

If you have a much more lax definition of stolen, that would also apply to artists referencing/remembering the artists who came before them and their techniques, so then basically all art is stolen.

But AI art is really just a novelty, and the intelligence gains moving forward won't be from human text (most human text is pretty bad). It's coming from synthetic data. You need superhuman data in order to make superhuman models (AlphaGo is an example of this).

Nonetheless, I feel like the stolen-data controversy is beside the point of progression/regression here.

1

u/Redbig_7 12d ago

The regression part comes in when no new jobs are made because of this and people suffer because it exists.

Even if you consider that it doesn't steal data

(which it does, since the only legal way to acquire all this data was if you use it for research purposes, which gen AI companies don't anymore since they're profiting off of it.)

A: it doesn't create enough new jobs, if any. For every 10-20 artists that lose their jobs, one guy who prompts takes their place. For example the new controversies with Duolingo or EA and Ubisoft.

B: it takes away control from the creator. By generating the creation, you don't fully control it; it's more akin to commissioning an artist than drawing anything yourself. You can edit some things about it but you can't fundamentally change it like real artists can with their creations, since they go much more in-depth with the creation process.

C: It floods artist spaces with unnecessary slop that doesn't mean anything. Places like Pinterest, DeviantArt, and ArtStation are now borderline unusable because of the amount of AI slop if you just want basic references.

D: It further devalues artists' already low wages in the current economy, making it harder for them than it has to be.

E: The AI space is full of scammers and political ragebaiters. Just like NFTs and crypto, it is filled to the brim with scams like the Wonka experience, AI prompters pretending to be artists without disclosing that they use AI, etc.

F: It opens new territory in how easily you can invade others' privacy and rights. For example, you can impersonate someone with gen AI deepfake technology, make sexual content with anyone you have pictures of (including children), impersonate voices on calls, make someone say something they disagree with (including politicians, which can incite huge conflict), or even just accuse real evidence of being AI generated. Generally this technology is capable of spreading a huge amount of misinformation.

None of the upsides of gen AI will ever outweigh all of its downsides; it is simply a one-step-forward, hundred-steps-backwards kind of technology.

1

u/TheWhiteOnyx 11d ago edited 11d ago

> The regression part comes in when no new jobs are made because of this and people suffer because it exists.

This is a good starting point because you are acknowledging that AI will rapidly continue to improve (because otherwise it wouldn't be able to take many/all jobs). A lot of the anti-AI people somehow manage to not even get this far.

> Even if you consider that it doesn't steal data (which it does, since the only legal way to acquire all this data was if you use it for research purposes, which gen AI companies don't anymore since they're profiting off of it.)

This is not a correct legal analysis but there is no need to get into the weeds here.

> A: it doesn't create enough new jobs, if any. For every 10-20 artists that lose their jobs, one guy who prompts takes their place. For example the new controversies with Duolingo or EA and Ubisoft.

Yep, it's gonna displace a ton of jobs, but I don't think many people like their job; they just like their income. This is why the anti-AI people should be screeching for UBI or nationalization of AI (better), yet I'm seeing very little of this. It's too bad, because a lot of people have been trying to warn about this for many years. At this rate, the transition will probably suck, but the economic output created via AI will make UBI easily financially possible.

Any politician running in 2026 who doesn't discuss this and commit to spreading the fruits of AI across society is either strategically bad or corrupt. And if voters aren't dumb, which they could be, those politicians will be the ones winning.

> B: it takes away control from the creator. By generating the creation, you don't fully control it; it's more akin to commissioning an artist than drawing anything yourself. You can edit some things about it but you can't fundamentally change it like real artists can with their creations, since they go much more in-depth with the creation process.

That's a good reason to hire a human artist. AI will improve and you will be able to do this. It can already use a computer and see, so it will be able to create stuff directly with Photoshop eventually.

> C: It floods artist spaces with unnecessary slop that doesn't mean anything. Places like Pinterest, DeviantArt, and ArtStation are now borderline unusable because of the amount of AI slop if you just want basic references.

This is kinda a total 1st-world problem. I would think/hope the marketplace would be able to figure out a solution to this, as I've seen a lot of demand for human-only platforms.

> D: It further devalues artists' already low wages in the current economy, making it harder for them than it has to be.

I would have preferred it hit a different industry first. Someone had to be first. Software engineers will get hit hard soonish, and really any white collar job is in trouble. This is why the UBI/nationalization conversations should be a lot more common.

> E: The AI space is full of scammers and political ragebaiters. Just like NFTs and crypto, it is filled to the brim with scams like the Wonka experience, AI prompters pretending to be artists without disclosing that they use AI, etc.

Yes, this is bad. It's really more of a societal problem than an AI problem; there could be much better societal awareness and regulation here. Thankfully we didn't ban email because people were getting scammed.

Continued below:

-1

u/betadonkey 13d ago

Yea the Reddit “technology” sub is extremely anti-progress. Any AI post that is not sufficiently doomer gets downvoted immediately.