Thanks for the article, Jamie. I think caution is the appropriate approach to AI, especially given the range of moral questions involved in its use and its potential for unintended/unforeseen consequences. More knowledge workers ought to balance their bemusement at AI with the implications for their livelihoods.
Something I wonder about is model collapse with these AIs. I've seen articles talking about AI degrading when trained on AI-generated samples, and I imagine developers are implementing guardrails to prevent this. That said, will AI be or remain immune to the "enshittification" of everything? Will it destabilize the environment to the extent that its own viability becomes compromised? I think a lesson from your article would be that we shouldn't count on it, but I'm curious about your perspective.
I can definitely envision an Amazon.com-like scenario where, after years of relentless improvement that drives their competitors out of business (decimating large swaths of human work, in this case), major AI firms dial back their funding, customer service, etc., knowing that they're no longer in a market with the same degree of competition, and thus with less incentive to spend as much to produce the very best product. But by that point, even crappy freeware AI may be akin to the paid pro versions of currently available AIs. Happy New Year!