New robots are also using LLMs in two ways: to understand their environment through cameras, rather than through complicated sensors that don't perceive the world as we do, and to control movement, by taking in the robot's own data together with what other LLMs understand of the environment and predicting the inputs needed to move correctly or carry out a task.
As LLMs get better, they can also come up with better strategies. This is already used to some extent to have them generate, test, and fix code based on program output and error messages, and it should soon enable fully autonomous robots that can think for themselves and interact with the world, leading to many advances, such as broad automation of work and scientific discovery.
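The generate-test-fix loop described above can be sketched in a few lines. This is a minimal illustration, not any particular system's implementation: `fix_code` is a hypothetical stand-in for the LLM call (a real system would send the failing code and error message back to the model); here it just patches a known typo so the example runs on its own.

```python
import subprocess
import sys
import tempfile

def run_candidate(code: str) -> tuple[bool, str]:
    """Execute a candidate script and capture whether it ran and its stderr."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    result = subprocess.run([sys.executable, path],
                            capture_output=True, text=True, timeout=10)
    return result.returncode == 0, result.stderr

def fix_code(code: str, error: str) -> str:
    """Hypothetical stand-in for an LLM call: a real system would prompt the
    model with the failing code plus the error message and return its revision.
    Here we just correct a deliberately planted typo to stay self-contained."""
    return code.replace("pritn", "print")

def generate_test_fix_loop(code: str, max_rounds: int = 3) -> str:
    """Run the code, and on failure feed the error back for a fix, up to a limit."""
    for _ in range(max_rounds):
        ok, error = run_candidate(code)
        if ok:
            return code
        code = fix_code(code, error)  # feed the error message back to the "model"
    raise RuntimeError("could not repair the code within the round limit")

buggy = 'pritn("hello")\n'  # first draft with a deliberate bug
fixed = generate_test_fix_loop(buggy)
print(fixed.strip())  # → print("hello")
```

The loop structure (execute, capture the error, ask for a revision, retry) is the whole idea; real agents add test suites and stopping criteria on top of it.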
Even if AI can't get much better than what has already been demonstrated (which I don't think is the case, but let's consider it), there are already quite a few jobs that can be at least partially automated, and that alone could change the world enormously: permanent unemployment above 10-20% in every country, or the bourgeoisie agreeing to cut working hours to just a few so the system doesn't collapse.