

Maybe you’re right. Maybe it’s Markov chains all the way down.
The only way I can think of to test this would be to “poison” the training data with faulty arithmetic and see whether it is just recalling precedent or actually implementing an algorithm.
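Something like this, as a rough Python sketch (the dataset shape, the 10% poison rate, and the way answers get corrupted are all made up for illustration):

    import random

    def make_poisoned_dataset(n=10_000, poison_rate=0.1, seed=0):
        # Generate addition problems, deliberately corrupting some answers.
        # If a model trained on this data reproduces the corrupted answers,
        # it is probably recalling precedent; if it still adds correctly,
        # it has learned something closer to the algorithm itself.
        rng = random.Random(seed)
        examples = []
        for _ in range(n):
            a, b = rng.randint(0, 999), rng.randint(0, 999)
            answer = a + b
            poisoned = rng.random() < poison_rate
            if poisoned:
                answer += rng.choice([-3, -1, 1, 2, 7])  # plausible-looking wrong sum
            examples.append({"prompt": f"{a} + {b} = ",
                             "completion": str(answer),
                             "poisoned": poisoned})
        return examples

    if __name__ == "__main__":
        print(make_poisoned_dataset(n=3))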
This reminds me of learning a shortcut in math class while knowing that the lesson didn’t cover that particular method. So I use the shortcut to get the answer on a multiple-choice question, but I use the method from the lesson when asked to show my work (e.g. Pascal’s Pyramid vs. binomial expansion).
It might not seem like a shortcut to us, but something about this LLM’s training makes heuristics the easier path. It’s actually a pretty big deal for a machine to choose fuzzy logic over an algorithm when it knows the teacher wants it to use the algorithm.
Yeah, but this reminds me of a line from Game of Thrones:
“If you’re a famous smuggler, you’re doing it wrong.”
XOR cleartext with a key and you get ciphertext; XOR the ciphertext with the same key and you get the original cleartext back. That self-inverting property is what the Feistel rounds at the core of the old DES cipher rely on.
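Here’s that round trip as a minimal Python sketch (the repeating-key XOR and the toy key are just for illustration, not how DES actually schedules its keys):

    from itertools import cycle

    def xor_bytes(data: bytes, key: bytes) -> bytes:
        # XOR each byte of the input against the (repeating) key.
        return bytes(d ^ k for d, k in zip(data, cycle(key)))

    cleartext = b"attack at dawn"
    key = b"\x13\x37\x42"

    ciphertext = xor_bytes(cleartext, key)
    recovered = xor_bytes(ciphertext, key)  # same operation, same key

    assert recovered == cleartext
    print(ciphertext.hex())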
A bit of useful trivia: if you XOR any number with itself, you get all zeros. You can see this in practice when an assembly programmer XORs a register with itself to clear it.
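In Python it’s a one-liner (the value is picked at random):

    x = 0xDEADBEEF
    print(hex(x ^ x))  # prints 0x0, the same idea as "xor eax, eax" zeroing a register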