libreplanet-discuss

Re: Is AI-generated code changing free software?


From: Aaron Wolf
Subject: Re: Is AI-generated code changing free software?
Date: Thu, 17 Jul 2025 11:05:35 -0700

On 7/17/25 5:22, Jean Louis wrote:

> * Aaron Wolf <wolftune@riseup.net> [2025-07-16 17:03]:
>
>> Jean, zero of this responds to the article I linked, which I imagine
>> you didn't read. Yes, my summary was exaggerated and simplistic
>> because I was just trying to make a point in one direction and to
>> emphasize the linked article. Read the article.
>
> I appreciate some of Doctorow’s concerns about AI potentially
> undermining human autonomy and the sustainability of AI investments,
> but honestly, I’m not a fan of sweeping generalizations, especially
> when no concrete examples are provided. From what I see in actual
> studies and within the developer community, the opposite is true: AI
> tools are actively helping people work smarter and fostering
> innovation. The impact of AI is complex and nuanced, and if we engage
> with it thoughtfully, we can overcome the challenges and really
> harness its benefits without throwing the whole thing under the bus.

I don't know what you mean by "no concrete examples". He specifically
describes cases like the single writer tasked with producing 30 "summer
guides" in a short time and thus forced to rely on AI to do it.

Cory never once suggested that AI isn't productive or can't be
productive. He explicitly says that it *can* be, just as improvements
to mechanized loom technology were productive for textile work.

Neither Cory nor I deny anything about the productive *capacity* of
AI. The issue is how it actually gets used. Cory uses specific
historical examples to support the assertion that technology is often
first used to reduce the leverage that workers have, and only later,
as the technology advances, does it go more and more toward actually
improved products.

And neither he nor I was saying that AI is bad or should be rejected.
The points are entirely about how it gets used in practice. Obviously
a cooperative tech company (where the programmers are the owners) is
only going to use AI in productive ways, i.e. the "centaur" approach.
It's the exploitative companies that take the unhealthy reverse-centaur
approach.

And the reason I brought this up was in response to concerns about how
AI is getting pushed into things and funded so heavily. The reason for
that is the leverage and profit it can bring to the investors and
owners of big corporations. Yes, at the same time there are productive
things happening in AI that are just interesting, but those on their
own would not be seeing the level of investment and energy use that we
see now.

If workers themselves find ways to use a technology to enhance their
work, that's great. And if their work goes better and they get to do
less of the tedious stuff and more of the most meaningful work, that's
superb. That's the centaur scenario. Again, nobody is denying that
scenario. The question is how AI gets used. It's not inevitably
healthy or unhealthy. But if you believe that the investors and bosses
in our corporate system are generally trustworthy to make choices that
are healthy for the world, then you and I have a very different view
of things.

I'd be willing to discuss the actual points Cory is making if you
actually want to deal with them.

