That would give politicians another reason to raise the retirement age, in order to stay in power.
I’m not sure I’d trust modern CA to do Med3 justice. The new style of Total War is just a different beast from the sublime RTW/Med2 era.
Lots of little things changed, and it just ‘hits different’. Probably the biggest difference is just that every single fight after the first 20 turns will be a 20 stack vs a 20 stack, and every single battle is life or death for that army. It makes the campaign much faster paced - declare war, wipe stack, capture cities for 3 turns until the AI magics up another 20 stack.
In the original Med2, since there wasn’t automatic replenishment, there were often battles between smaller stacks, even in late game, as they were sent from the backline to reinforce the large armies on the front. Led to some of my greatest memories trying to keep some random crossbowmen and cavalry alive against some ambushing enemy infantry they wandered into. The need for manual reinforcement led to natural pauses in wars and gave the losing side a chance to regroup without relying on the insane AI bonuses of the modern TW games - and I do mean insane; they’ll have multiple full stacks supplied from a single settlement.
Most OLEDs today ship with logo detection and will dampen the brightness on static elements automatically.
While it isn’t a silver bullet, it does help reduce burn-in, since burn-in is strongly linked to heat and therefore to pixel brightness. New blue PHOLEDs are expected to cut burn-in risk further. Remember that LCDs also used to have burn-in issues, as did CRTs.
I’ve been using Nvidia under Linux for the last 3 years and it has been a massive PITA.
Getting CUDA to work consistently is a feat, and one that must be repeated for most driver updates.
Wayland support is still shoddy.
Hardware acceleration on the web (at least with Firefox) is very inconsistent.
It is very much a second-class experience compared to Windows, and it shouldn’t be.
Linux and Nvidia really need to sort out their shit so I can fully dump Windows.
Luckily the AI hype is good for something in this regard, since running GPUs on Linux servers is suddenly much more important.
One nitpick: Jesus was almost certainly a real figure. There are many records indicating someone with that name was in the area at the time, and that they were executed by crucifixion.
The religious stuff, obviously no way to prove. But as a person, the historical consensus is they existed.
As someone who spent years as a ‘big company engineer’, the reason I don’t write code until the bosses have clear requirements is because I don’t want to do it twice.
That, and it isn’t just me - there are 5 other teams who have to coordinate, and they have other things on their roadmap that are more important than a project without a spec.
No, that’s not a real problem either. Model search techniques are very mature; the first automated tools for this were released in the 90s, and they’ve only gotten better.
AI can’t ‘train itself’; there is no training required for an optimization problem. A system that queries the value of the objective function - “how good is this solution?” - then tweaks the parameters - the traffic light timings - according to the optimization algorithm and queries the objective function again isn’t training itself and isn’t learning, it is centuries-old mathematics.
There’s a lot of intentional and unintentional misinformation around what “AI” is, what it can do, and what it can do that is actually novel. Beyond Generative AI - the new craze - most of what is packaged as AI is mature algorithms applied to old problems in a stagnant field, repackaged as a corporate press release.
Take drug discovery. No, “AI” didn’t just make 50 new antibiotics; they just hired a chemist who graduated in the last decade, who understands commercial retrosynthetic search tools, and who asked the biopharma guy what functional groups they think would work.
“AI” isn’t needed to solve optimization problems, that’s what we have optimization algorithms for.
Define an objective and parameters and give the problem to any one of the dozens of general solvers and you’ll get approximate answers. Large cities already use models like these for traffic flow, there’s a whole field of literature on it.
The one closest to what you mentioned is a genetic algorithm - again, a decades-old technique that has very little in common with Generative “AI”.
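To make that concrete, here’s a minimal sketch in Python of the loop I’m describing - a made-up objective function that scores a set of traffic light timings, and a plain genetic algorithm that repeatedly queries it. The numbers and the objective are stand-ins, not a real traffic model.

    import random

    # Stand-in objective: score a list of green-light durations (seconds).
    # A real deployment would run a traffic simulation here.
    def objective(timings):
        ideal = [30, 45, 25, 60]  # made-up "good" durations for 4 intersections
        return -sum((t - i) ** 2 for t, i in zip(timings, ideal))

    def mutate(timings, scale=5.0):
        return [max(5.0, t + random.uniform(-scale, scale)) for t in timings]

    def crossover(a, b):
        return [random.choice(pair) for pair in zip(a, b)]

    # Plain genetic algorithm: evaluate, keep the best, recombine, mutate, repeat.
    population = [[random.uniform(10, 90) for _ in range(4)] for _ in range(50)]
    for _ in range(200):
        population.sort(key=objective, reverse=True)
        parents = population[:10]
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(40)]
        population = parents + children

    print(objective(population[0]), population[0])

All of the “intelligence” is in whoever wrote the objective function; the loop itself is just bookkeeping, and it never trains or learns anything.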
Humans are intelligent animals, but humans are not only intelligent animals. We do not make decisions and choose which beliefs to hold based solely on sober analysis of facts.
That doesn’t change the general point that a model given the vast corpus of human knowledge will prefer the most oft-repeated bits to the true bits, whereas we humans have muddled our way through to some modicum of understanding of the world around us by not doing that.
But the most current information is not necessarily the most correct information.
I could publish 100 papers on Arxiv claiming the Earth is, in fact, a cube - but that doesn’t make it true even though it is more recent than the sphere claims.
Some mechanism must decide what is true and send that information to train the model - that act of deciding is where the actual intelligence in this process lives. Today that decision is made by humans, who curate the datasets used to train the model.
There’s no intelligence in these current models.
Victoria 3 was just boring - I say this as a huge fan of Victoria 2.
I played a few weeks after launch, and for every one of the 4 countries I tried (Russia, Japan, Denmark, Spain), simply building all the things everywhere and ignoring money made everything trivial.
The economic simulation was super barebones, the entire thing could be bootstrapped just by building. An entire population of illiterate farmers would become master architects overnight and send GDP to the double digit billions in a few decades.
A token is not a concept. A token is a word or word fragment that occurred often in free text and was assigned a number. Common words, prefixes, and suffixes are the vast majority of tokens, and the rest are uncommon pairs of letters.
The algorithm to generate tokens is essentially compression, there is no semantic meaning embedded in them.
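If it helps to see how mechanical it is, here’s a toy sketch in Python of byte-pair-style token building (heavily simplified relative to real tokenizers): count the most frequent adjacent pair, merge it into a new token, repeat. It is a frequency/compression procedure; nothing about meaning is involved.

    from collections import Counter

    def build_merges(text, num_merges=10):
        # Start from single characters; repeatedly merge the most frequent adjacent pair.
        tokens = list(text)
        merges = []
        for _ in range(num_merges):
            pairs = Counter(zip(tokens, tokens[1:]))
            if not pairs:
                break
            (a, b), _count = pairs.most_common(1)[0]
            merges.append(a + b)  # the new "token" is just the concatenated pair
            merged, i = [], 0
            while i < len(tokens):
                if i + 1 < len(tokens) and tokens[i] == a and tokens[i + 1] == b:
                    merged.append(a + b)
                    i += 2
                else:
                    merged.append(tokens[i])
                    i += 1
            tokens = merged
        return merges, tokens

    merges, tokens = build_merges("low lower lowest slow slowest")
    print(merges)  # frequent fragments like 'lo' and 'low' - purely statistical
    print(tokens)

Real tokenizers work on bytes and learn tens of thousands of merges from huge corpora, but the procedure is the same shape: frequency counting, not semantics.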
Copilot is GPT under the hood; it just starts with a search step that finds (hopefully) relevant content and then passes that to GPT for summarization.
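Roughly, the pipeline looks like the sketch below. The function names are made up for illustration - they are not Microsoft’s or OpenAI’s actual APIs.

    # Hypothetical retrieve-then-generate pipeline.
    # search() and generate() are stand-ins, not real Copilot/OpenAI endpoints.

    def search(query: str) -> list[str]:
        """Stand-in for a web index lookup that returns text snippets."""
        raise NotImplementedError

    def generate(prompt: str) -> str:
        """Stand-in for a call to a GPT-style completion model."""
        raise NotImplementedError

    def answer(query: str) -> str:
        snippets = search(query)              # 1. search step
        context = "\n\n".join(snippets[:5])   # 2. keep the top hits
        prompt = ("Using only the sources below, answer the question.\n\n"
                  f"Sources:\n{context}\n\nQuestion: {query}")
        return generate(prompt)               # 3. GPT summarizes / answers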
The Dark Souls 2 DLCs are some of the best content in all of Souls. While the original game has some level design issues, the DLCs are sublime.
Loved the first one for fucking around with friends. I’ll maybe pick it up after they add vehicles and we see a bit more of their long-term monetization strategy.
Every billion parameters needs about 2 GB of VRAM, if using bfloat16 representation: 16 bits per parameter / 8 bits per byte = 2 bytes per parameter.
1 billion parameters ~ 2 billion bytes ~ 2 GB.
From the name, this model has 72 billion parameters, so ~144 GB of VRAM.
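Same arithmetic as a quick helper, if you want to plug in other precisions (weights only - a real deployment also needs headroom for the KV cache and activations):

    def weight_vram_gb(params_billions, bytes_per_param=2):
        # bfloat16/fp16 = 2 bytes per parameter; int8 = 1; 4-bit quantization = 0.5
        return params_billions * bytes_per_param

    print(weight_vram_gb(72))       # 144 GB in bf16
    print(weight_vram_gb(72, 0.5))  # 36 GB at 4-bit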
The US tax system is not at all ‘heavy’ on the wealthy. The largest burden, proportionally, falls on those with high earned incomes - doctors, lawyers, etc. These are the people who will be paying the higher marginal tax rates on substantial portions of their income.
The truly wealthy do not have high earned incomes, they acquire large assets and borrow against their value to pay for living expenses while avoiding taxes. This is the “buy, borrow, die” strategy, specifically designed to limit tax liability.
Rule of thumb is an employee costs roughly twice their base salary, as the employer still needs to cover insurance, taxes, sick time, and other benefits.
That leaves an average salary of 190K for the 50 employees. That isn’t much for tech.
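For anyone checking the numbers, the implied total works out like this (the 2x multiplier is the rule of thumb above, not an exact figure):

    employees = 50
    avg_salary = 190_000      # implied average base salary
    loaded_multiplier = 2     # rule-of-thumb fully loaded cost

    total_annual_cost = employees * avg_salary * loaded_multiplier
    print(total_annual_cost)  # 19,000,000 per year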
There are really only 3 search providers: Google, Bing, and Yandex.
All others will pay one of these three to use their indexes, since creating and maintaining that index is incredibly expensive.