PR by Xuan-Son Nguyen for `llama.cpp`:

> This PR provides a big jump in speed for WASM by leveraging SIMD instructions for `qX_K_q8_K` and `qX_0_q8_0` dot product functions.
>
> …
What you’re afraid of is precisely what was tried with outsourcing dev jobs. That proved to work in some areas where you have very boring CRUD apps, but was a complete failure in others. I expect LLMs are going to work out in a very similar fashion.
Meanwhile, the most enjoyable coding I’ve done was never done for money. If anything, I can see AI taking over the work and turning programming away from being a career and into a way for people to express themselves artistically, the way you see with the demoscene, live coding, generative art, and so on. I don’t see that as a bad thing.
> What you’re afraid of is precisely what was tried with outsourcing dev jobs. That proved to work in some areas where you have very boring crud apps, but was a complete failure in others. I expect LLMs are just going to work out in a very similar fashion.
Okay, but again: I’m not afraid of losing my job. I’m afraid that we’re going to lose real capability as a society. It’s like how our oligarchs are practically morons compared to past oligarchs who built hundreds of libraries, or how we no longer have the real capacity in the US to build rail.
I’m currently working as a platform architect coordinating 5 teams over multiple products, building a platform for authoring, publishing, and managing rich educational courses across multiple grade levels. I still do most of the greenfield development, and I personally manage a DSL and its tooling while figuring out platform requirements and timelines for other teams, including my own. I used to work on a real-time EEG system doing architecture and signal processing. I’ve architected and implemented medical logistics platforms. I’ve been the first engineer at a couple of startups. I’ve literally written purpose-built ORMs, schedulers, middleware frameworks, and query frameworks from scratch. I’ve worked in almost every major common role at a principal level except security (which is mostly fake) and embedded: front end, back end, database optimization/integration, infrastructure, machine code on the JVM and x86, and distributed computing. I haven’t worked in niches like networking, industrial, ML, or quantum; realistically, I’d only want to explore quantum or networking. But quantum is something you typically need a PhD for, otherwise it’s going to be a bit grunty. OSS may bring up engineers for some of these roles, but in practice the majority of OSS projects don’t reach the level of complexity that I’ve worked at – and the ones that do aren’t community projects, they’re corporate ones.
Very few people can step into my shoes; most principal engineers I’ve met average out at one large project where they implemented a strangler pattern once or twice. The system currently has a hard time reproducing me, and if the bottom falls out, it’s going to be good game. I’m happy that LLMs are helping you rediscover your passion, but the kind of stuff you’re talking about is toys. Personally, they’re not fun; they’re mostly boring. I enjoy building large technical systems in complex problem spaces in a high-level, reproducible way. Everything else gets stale quickly. I’ve built out systems where, if you so much as blow on the code, the tests turn red, without test maintenance and creation being a burden; the goal in that system was a high-value test every 5 minutes. The future I see is one where everything is just shittier, because the skill that is hard to find, and is dying, is understanding the essential complexity at the 10,000 ft view, the 100 ft view, the 1 ft view, and the 1 micrometer view. I can barely find developers who innately understand essential complexity at even one of those viewpoints. I’ve met about 20 who can do all 4, out of maybe 400-ish devs I’ve met in my life.
The only passion project I wanted to start I basically decided to call off, because if it succeeded it would be bad for the world: high-level persona-management software that could run swarms in the tens of thousands without being discovered.
If LLMs remove programming as a job, that might be nice in theory, but in practice it’s just going to mean more people on the struggle bus.
That’s just a straw man, because there’s no reason why you wouldn’t be looking through your code. What an LLM does is help you find areas of the code that are worth looking at.
It’s not a straw man, because classifying unperformant code is a different task from generating a performant replacement. An LLM can only generate code via its internal weights plus its input; it doesn’t guarantee that the code is compilable, performant, readable, understandable, self-documenting, or much of anything.
The performance gain here is coincidental, simply because the generated code uses functions that invoke processor features directly rather than relying on a compiler to optimize the code into those features. LLM classifiers are also statistically analyzing the AST for performance; they aren’t performing real static analysis of the AST or of its compiled version. The model doesn’t calculate big-O or really know how to reason through this problem; it’s just primed so that when you write a for loop to sum, that’s “slower” than using `_mm_add_ps`. It doesn’t even know which cases of the for loop compile down to an `_mm_add_ps` instruction on which compilers at which optimization levels.
Lastly, you injected this line of reasoning when you basically said, “why would I do this boring stuff as a programmer when I can get the LLM to do it?” It’s nice that there’s a tool you can politely ask to parse your garbage and replace it with other garbage that happens to call a more performant function. But not only is this not Software Engineering; a performant dot product is a solved problem at EVERY level of abstraction. This is the programming equivalent of tech bros reinventing the train every 5 years.
The fact that this is needed at all is a problem in and of itself with how people are building this software. This is machine-spirit communion with technojargon. Instead of learning how to vectorize algorithms, you’re feeding your garbage code through an LLM to produce garbage code with SIMD instructions in it. That is quite literally stunting your growth as a Software Engineer. You are choosing not to learn how things actually work because it’s too hard to parse through the existing garbage. A SIMD dot product algorithm is literally a 2-week homework assignment for a college junior.
Understanding what good uses for it are and the limitations of the tech is far more productive than simply rejecting it entirely.
I quite literally pointed out several limitations, from a Software Engineering perspective, in the post you replied to and in this one.
Hey, so I read your comments and found them insightful. As a Software Engineer who just started his first job, what would your advice be on the right approach to growing and learning as a software engineer? Both in general and with respect to using LLMs while learning/coding.
> I’m afraid that we’re going to lose real capability as a society.
The US has far bigger problems than LLMs to worry about in the near future. Personally, I’d be far more worried about the rate of deindustrialization in the States, and the lack of people who know trades, engineers, farmers, and so on. All of that is far more crucial than the software industry. Meanwhile, even if people started losing this expertise, it’s not like it can’t be learned again if needed. The whole software industry has only existed for a handful of decades, and society got on just fine before it appeared. This is just complete hyperbole, I’m afraid.
What I think is most likely to happen with serious engineering going forward is that the human side of the work will shift towards writing formal specifications that encode the desired constraints for the system, including things like memory usage and runtime complexity, and then having LLMs figure out how to generate code that passes the spec.
Incidentally, you can do this stuff without LLMs as well: Barliman, for example, is a program synthesis engine that can take a signature for a function and figure out an implementation, and it can even compose functions it already wrote to solve more complex problems. Combining something like that with LLMs could be very effective.
I see this as an advancement similar to the creation of high-level languages. Plenty of people moaned that nobody learned assembly anymore when C showed up, making arguments very similar to the ones you’re making. Then people started moaning about garbage collection, and how you weren’t a real programmer if you weren’t managing memory by hand. Every time a new technology comes around that makes programming easier and more accessible, there are inevitably people screaming that the sky is falling. LLMs are just the latest iteration of this phenomenon.
And more people ending up on the struggle bus because we have more automation is a result of capitalist relations. That’s where the ire should be directed.