Money quote:
Excel requires some skill to use (to the point where high-level Excel is a competitive sport), and AI is mostly an exercise in deskilling its users and humanity at large.
How do you know those formulas are correct?
I’m talking about using it when you’re “not great at Excel”, not when “you can’t do basic math”.
Always verify the results given to you by LLMs.
By verifying that they’re correct…? 🤔
I think the concern is that you can come up with a number of formulas that will get correct answers for some combinations of values and not others.
If you do not understand the logic of the formula, and what each function does, how do you verify they are correct and will always give you the results you think they will? Double check every result in its entirety?
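A hypothetical illustration of exactly that failure mode (the lookup value and ranges here are made up): `VLOOKUP` with its fourth argument omitted defaults to approximate match, so it can look correct on sorted test data and then silently return values from the wrong rows once the data isn’t sorted. Forcing an exact match at least fails loudly:

```
=VLOOKUP("Widget-42", A2:B100, 2)         ← 4th argument omitted: approximate match by default
=VLOOKUP("Widget-42", A2:B100, 2, FALSE)  ← exact match: returns #N/A instead of a plausible wrong value
```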
I think you’re completely missing the point here.
I’m not great at Excel. That doesn’t mean I can’t do basic math; it means I struggle designing an `xlookup` or `hlookup`. If AI does that for me, I’ll be a happy bunny. And then I’ll run a dozen different iterations of data to verify that the results I’m getting are correct.
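That check is cheap to set up, by the way. Something like this (ranges and cell references made up for the sake of example): the LLM-suggested lookup goes in one column, a handful of hand-checked values in the next, and a third column flags any disagreement:

```
=XLOOKUP(E2, A2:A100, C2:C100, "not found")   ← the LLM-suggested lookup, filled down column F
=IF(F2=G2, "OK", "MISMATCH")                  ← column G holds the hand-checked values
```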
This is what this integration is for - it’s not a replacement for a human brain, it’s an assistant. As are all LLMs.
This is what I think AI and automation are generally good at and should be used for: taking over unpleasant or repetitive work so that the user can focus on productivity/creativity.
The context is something we disagree on wholeheartedly. Those funding and fundraising for AI, and an enormous subset of those using it, are not looking to use AI in the way we are talking about. The former are hoping to use AI to extract value at the expense of people who would otherwise need to be paid, and they claim it can do anything and everything. Many of those using it do not have sufficient understanding to comprehend the solution; they are basically “vibe coding”: tell the LLM to do something they aren’t knowledgeable about, then keep telling it to fix the problems until they don’t see problems anymore. Yes, spreadsheet formulas are likely simpler than an app, but I know people who use AI for Google Sheets and they rarely test any results, let alone rigorously.
Anecdotal, sure, but I don’t have enough faith in humanity to presume everyone else is doing something wildly different.
Edit: To expand: LLMs specifically are what I consider to be the worst side of “AI”. You can use ML and neural networks to create “AI” (self-altering, alien black-box algorithms) that becomes proficient at analyzing information and solving problems. LLMs create a situation where the model appears intelligent because it knows how to mimic language… and so now we pretend it can do whatever people can do.
Well… Yeah, I get what you mean, and - in general - I agree.
However, to me it’s also a bit like criticising the use of hammers because a lot of idiots hit themselves on the heads with them. Or, even worse, hit others on the heads.
AI/LLMs are a tool, and just like any other tool, they can be misused. That doesn’t mean the tool is bad, or immoral, or whatever, to use.
That’s why I hate today’s discourse of “anything that has AI is shite by default” that so many people online have.
Let’s laugh at the obviously bullshit attempts at shoving AI down consumers’ throats, but when it comes to an actual, proper implementation - like baking Copilot into Excel - it becomes yet another optional tool at users’ disposal.
That’s my thinking.
If you know what you’re doing, it’s significantly easier to do it yourself.
You at least have some reassurance it’s correct (or at least thought through).
Verification is important, but I think you’re overlooking a real and large category of people who have a basic familiarity with spreadsheets and computers, and so are able to understand a potential solution and see whether it makes sense, but who can’t quickly come up with it themselves.
In language it’s the difference between receptive and productive vocabulary: there are words which you understand but which you would never say or write because they’re part of your receptive, but not productive knowledge.
There are times when this will go wrong, because the LLM can produce something plausible but incorrect and such a person will fail to spot it. And of course, if you blindly trust it with something you’re not actually capable of checking (or not willing to check), then you will also get bad results.