I hope this is true. I would like to have a job again.
It’s true, although the smart companies aren’t laying off workers in the first place, because they’re treating AI as a tool to enhance their productivity rather than a tool to replace them.
I don’t know if it even helps with productivity that much. A lot of bosses think developers’ entire job is just churning out code when it’s actually like 50% coding and 50% listening to stakeholders, planning, collaborating with designers, etc. I mean, it’s fine for a quick Python script or whatever but that might save an experienced developer 20 minutes max.
And if you “write” me an email using ChatGPT and I just read a summary, what is the fucking point? All the nuance is lost. Specialized AI is great! I’m all for it combing through giant astronomy data sets or protein folding and stuff like that. But I don’t know that I’ve seen generative AI without a specific focus increase productivity very much.
As a senior developer, my most productive days are genuinely when I remove a lot of code. This might seem like negative productivity to a naive beancounter, but in fact this is my peak contribution to the software and the organization. Simplifying, optimizing, identifying what code is no longer needed, removing technical debt, improving maintainability, this is what requires most of my experience and skill and contextual knowledge to do safely and correctly. AI has no ability to do this in any meaningful way, and code bases filled with mostly AI generated code are bound to become an unmaintainable nightmare (which I will eventually be paid handsomely to fix, I suspect)
That’s what I suspect. ChatGPT acts like it’s never wrong; even when it doesn’t know, it pretends it knows and answers something anyway. I guess it’s no different for source code: always add, never delete.
Yesterday it tried to tell me Duration.TotalYears() and Number.IsNaN() were M functions in the first few interactions. I immediately called it out and, for the first time ever, it doubled down.
I think I’m at a level where, in most cases, what I ask of LLMs for coding is too advanced, or else I just do it myself. This results in a very high count of bullshit. But even for the most basic stuff, I have to take the time to read all of it and fix or optimise the mistakes.
My company has a policy to try to make use of LLMs for work. I’m not a fan. Most of the time I spend explaining the infrastructure and whatnot would be better spent just working, because half the time the model suggests something that flies in the face of what’s needed, or outright suggests changes that we can’t implement.
It’s such a waste of time and resources.
Getting to deprecate legacy support… Yes please, let me get my eraser.
I find most tech debt resolution adds code though.
It helps with translating. My job is basically double-checking the translation quality and people are essentially paying me for my assurance. Of course, I take responsibility for any mistakes.
A lot of leadership is incompetent. In a reasonable, just world, they would not be in these decision-making positions.
Verbose blogger Ed Zitron wrote about this. He called them “Business Idiots”: https://www.wheresyoured.at/the-era-of-the-business-idiot/
I just watched an interview of Karen Hao and she mentioned something along the lines of executives being oversold AI as something to replace everyone instead of something that should exist alongside people to help them, and they believe it.
Fuuuck, this infuriates me. I wrote that shit for a reason. People already don’t read shit before replying to it and this is making it so much worse.
I was a frontend developer and UI/UX designer who specialized in JavaScript and TypeScript, with an emphasis on React. I’m learning Python for Flask. I’m skipping meals so I can afford Udemy courses, then AWS certifications. I don’t enjoy any of this and I’m falling apart.
Hey there. Of course, I am in no position to say “do this, and it will be all right”, but I will say that if there is any other way to live that won’t put this kind of load on you - do it. You being happier is way way more needed in this world than you getting those certificates
I can’t think of any other options that don’t end in the best case scenario of myself being elderly and destitute.
Fuck. Sorry to hear. Though that means all this AI bullshit won’t drown you, since you are after actual knowledge and skill. And if this makes any difference, I for one hope life goes as easy on you as it possibly can
So some places have started forcing developers to use AI, with a quota, and monitor the usage. Of course the devs don’t go checking each AI-generated line for correctness. That’s bad for the quota. It’s guaranteed to add more slop to the codebase.
Productivity will go up, wages will remain the same, and no additional time off will be given to employees. They’ll merely be required to produce 4x as much and compensation will not increase to match.
It seems the point of all these machines and automation isn’t to make our individual lives easier and more prosperous, but instead to increase and maximize shareholder value.
Idk about enhancing productivity.
If your job is just doing a lot of trivial code that just gets used once, yeah I can see it improving productivity.
If your job is more tackling the least trivial challenges and constantly needing to understand the edge cases or uncharted waters of the framework/tool/language, it’s completely useless.
This is why you get a lot of newbies loving AI and a lot of seniors saying it’s counter productive.
Does anyone have numbers on that? Microsoft just announced they’re laying off around 10k.
Microsoft did the June layoffs we knew were coming since January and pinned it on “AI cost savings” so that doing so would raise their stock price instead of lower it.
They also admitted that their AI isn’t generating profit, either.
Doesn’t that have more to do with Gamepass eating game studios’ lunch though? And a lot less with AI? Just regular ol’ dumbass management decisions.
It’s Microsoft, so that would make the most sense of its management decisions, considering they’ve recently pulled out all the stops to guarantee their software can’t get any shittier. They’ve even turned all their software into spyware now.
Fewer workers are required when their productivity is enhanced.
So conversely, we’ll need more workers now that generative AI is hindering productivity.
jobs are for suckers, be a consultant and charge triple
I’m absolutely not charismatic enough to pull that off.
youre in luck, i offer consultation for consultancing, now give me money
This person sounds confident! You’d be stupid not to take them up on it.
It’s technically closer to Schrödinger’s truth. It goes both ways depending on “when” you look at it. Publicly traded companies are more or less expected to adopt AI, as it is the next “cheap” labor… so long as it is the cheapest of any option. See the very related: slave labor and its variants, child labor, and “outsourcing” to “less developed” countries.
The problem is they need to dance between this experimental technology and… having a publicly “functional” company. The line demands you cut costs but also increase service. So basically overcorrection hell. Mass hirings into mass firings. Every quarter or two, depending on the company… until one of two things becomes true: AI works, or AI is no longer the cheapest solution. I imagine that will rubberband for quite some time. (SaaS shit like Oracle, etc.)
In short - I’d not expect this to be more than a brief reprieve from a rapidly drying well. Take advantage of it for now - but I’d recommend not expecting it to remain.
The line demands it go up. It doesn’t care how you get there. In many cases, decreasing service while also cutting costs is the way to do it so long as line goes up.
See: enshittification
Absolutely. I should have used the term productivity rather than service. Lack of caffeine had blunted my vocabulary. In essence: more output for less work. Output in this case is profit.
Enshittification is, in essence, the push beyond diminishing returns into the “lossy” space… sacrificing one thing for another. The end result is an increasingly shitty experience.
I think what makes enshittification is “give users less and charge more”. It’s about returning shareholder value instead of customer value.
Netflix is a great example. They have pulled back on content, made password sharing more challenging, and increased cost. They still report increases in paying users.
They’ve done the math. They know they can take a loss in users because they know they’ll make up for it. That’s the sad part in all of this.
They really haven’t taken massive hits because we are creatures of habit: it’s more convenient to hang around even if we know we’re getting ripped off. There is a churn rate - but it’s low enough that they clearly believe the market will bear more abuse.