• 0 Posts
  • 8 Comments
Joined 1 year ago
Cake day: June 12th, 2023

  • This pivot against FOSS in the West will only end up hurting the West itself. The strength of FOSS is its huge potential reach, while its drawback is that it’s harder for FOSS devs to make a living.

    In the West, FOSS devs mostly live on donations. While a few FOSS projects become big enough to sustain themselves, most fail. Western governments have deliberate policies of not funding much FOSS development, which exacerbates the issue.

    Socialist countries’ (e.g. China’s) policies of government investment in development invariably lead to more investment in FOSS, giving FOSS developers more stable incomes. For example, China has multiple government-funded Linux distributions (UOS, Kylin, Deepin), OpenHarmony OS, etc.

    This will ultimately snowball into more and more FOSS programs, creating a vibrant socialist-developed software ecosystem that the developing world can quickly plug into at low cost.


  • This is technically true. China currently only has DUV machines, which don’t have enough resolution to print 5nm-class features directly. Only the more advanced EUV machines can do that directly or in one or two patterning steps.

    However, China can still produce 5nm chips via multiple patterning. Basically, by overlaying lower-resolution patterns offset from one another, you can form higher-resolution structures in between the lower-res features. By repeating the multi-patterning process, you can “technically” reach arbitrarily fine resolutions.
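
    Here is a minimal illustration of that idea (the pitch figures are rough, commonly cited ballpark numbers for 193 nm immersion DUV, not taken from this comment or any specific fab):

```python
# Illustrative sketch: how self-aligned multi-patterning tightens pitch.
# Numbers are rough ballpark figures for 193 nm immersion DUV, not exact specs.

def effective_pitch(single_exposure_pitch_nm: float, doubling_passes: int) -> float:
    """Each self-aligned patterning pass roughly halves the achievable pitch."""
    return single_exposure_pitch_nm / (2 ** doubling_passes)

base = 80.0  # ~minimum single-exposure pitch for a 193 nm immersion scanner (approx.)
for passes, name in [(0, "single exposure"), (1, "double patterning"), (2, "quadruple patterning")]:
    print(f"{name:22s} -> ~{effective_pitch(base, passes):.0f} nm pitch")
# single exposure        -> ~80 nm pitch
# double patterning      -> ~40 nm pitch
# quadruple patterning   -> ~20 nm pitch
```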

    The drawback is that you need several more process steps. Each step takes more time, reducing throughput, and is another opportunity for defects. As a result, multi-patterned yields are lower than directly patterned yields, and the chips are more expensive.
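
    To put rough numbers on the yield point (the per-step figures below are illustrative assumptions, not industry data):

```python
# Illustrative only: assumed yields, not real fab data.
def compound_yield(base_yield: float, per_step_yield: float, extra_steps: int) -> float:
    """Every extra patterning step is another multiplicative chance to lose a die."""
    return base_yield * (per_step_yield ** extra_steps)

base = 0.90       # assumed yield of the single-exposure flow
per_step = 0.98   # assumed yield of each additional patterning step
for extra in (0, 2, 6):
    print(f"{extra} extra steps -> ~{compound_yield(base, per_step, extra):.0%} yield")
# 0 extra steps -> ~90% yield
# 2 extra steps -> ~86% yield
# 6 extra steps -> ~80% yield
```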

    Thus, the real benefit of going from DUV to EUV is reduced cost and increased throughput. Until China gets EUV, multi-patterned DUV is sufficient, since the state is willing to accept higher production costs and provide subsidies to chipmakers in exchange for self-reliance.

    The stumbling block right now is that the tech jump from DUV to EUV is large. DUV (deep ultraviolet) light, used in lithography at wavelengths of 248 nm (KrF) and 193 nm (ArF), can be produced directly by excimer lasers.

    On the other hand, EUV (extreme ultraviolet) light, which has a wavelength of 13.5nm and is almost an X-ray, cannot currently be produced directly. Current machines make it by using a giant laser to zap droplets of molten tin, turning them into a plasma that emits the desired EUV light. In the process, only ~6% of the original laser power is converted to EUV, resulting in an output of around 350 watts.
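
    Back-of-the-envelope with the figures above (the ~6% conversion and ~350 W output come from this comment; real systems lose further light in the collector and optics, so actual drive lasers are larger still):

```python
# Rough arithmetic using the figures quoted above (~6% conversion, ~350 W EUV out).
euv_out_w = 350.0            # usable EUV power (figure from the comment)
conversion_efficiency = 0.06

laser_in_w = euv_out_w / conversion_efficiency
print(f"Drive laser power implied: ~{laser_in_w / 1000:.1f} kW")
# Drive laser power implied: ~5.8 kW
# (In practice the CO2 drive lasers are bigger still, since collector and
#  optics losses come on top of the plasma conversion loss.)
```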

    All in all, it took ~20 years to master this convoluted process. It will take China time to develop its own EUV production tech, probably 5-10 years at most.

    In the process, China might be able to leapfrog some lingering issues with the tech. For example, the dogshit conversion efficiency means a ridiculously large drive laser is needed to produce a paltry amount of EUV light, with process output capping out around 500 watts.

    Instead, China is starting work on steady-state micro-bunching (SSMB), which oscillates electrons in a particle accelerator to generate EUV light directly at much higher power output. Using SSMB, China could use one accelerator to power dozens of EUV machines, and could scale output beyond what current EUV machines deliver. China is beginning work on a facility dedicated to SSMB research in Xiong’an this year.
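
    A purely illustrative sketch of the “one accelerator, dozens of machines” idea (every number below is an assumption for illustration, not a published SSMB specification):

```python
# Purely illustrative: assumed numbers, not published SSMB specs.
ssmb_total_euv_w = 10_000.0   # assume an SSMB ring delivering ~10 kW of usable EUV
per_scanner_need_w = 400.0    # assume each litho tool needs a few hundred watts

scanners_served = int(ssmb_total_euv_w // per_scanner_need_w)
print(f"One such accelerator could in principle feed ~{scanners_served} scanners")
# One such accelerator could in principle feed ~25 scanners
```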

    For more info on this stuff, see:




  • This isn’t really a significant player in the space yet. This field of local LLM frontends is extremely active right now, with many new projects popping up and user counts shifting around. Here’s a good article that reviews the most prominent ones right now, with rankings for different uses: https://matilabs.ai/2024/02/07/run-llms-locally/

    TLDR from the article (a minimal Ollama example follows the quoted list below):

    Having tried all of these tools, I find they are trying to solve for a few different problems. So, depending on what you are looking to do, here are my conclusions:

    • If you are looking to develop an AI application, and you have a Mac or Linux machine, Ollama is great because it’s very easy to set up, easy to work with, and fast.
    • If you are looking to chat locally with documents, GPT4All is the best out of the box solution that is also easy to set up
    • If you are looking for advanced control and insight into neural networks and machine learning, as well as the widest range of model support, you should try transformers
    • In terms of speed, I think Ollama or llama.cpp are both very fast
    • If you are looking to work with a CLI tool, llm is clean and easy to set up
    • If you want to use Google Cloud, you should look into localllm
    • For native support for roleplay and gaming (adding characters, persistent stories), the best choices are going to be textgen-webui by Oobabooga, and koboldcpp. Alternatively, you can use ollama with custom UIs such as ollama-webui
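
    To make the Ollama option concrete, here is a minimal sketch of calling a locally running Ollama server over its HTTP API. It assumes Ollama is installed, a model has been pulled (e.g. `ollama pull llama2`), and the server is listening on its default port 11434:

```python
# Minimal sketch: ask a local Ollama server for a completion over its HTTP API.
# Assumes `ollama pull llama2` has been run and the server is on the default port.
import json
import urllib.request

payload = {
    "model": "llama2",
    "prompt": "Summarize the trade-offs of running LLMs locally in two sentences.",
    "stream": False,  # request a single JSON response instead of a token stream
}

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```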


  • This tech is just a reskinned dehumidifier. The scientist YouTuber Thunderf00t has made a number of videos debunking these kinds of claims. The issues can be summed up as follows:

    • they extract very little water while using a shitton of energy. The heat released when water vapor condenses into liquid equals the energy required to boil that same water, which is a lot, and the machine has to pump all of it away. This is dictated by the laws of physics (rough numbers in the sketch after this list).
    • they work best in humid conditions. Those conditions only exist in places where you can ALREADY find water, or right before it rains (as the natural humidity condenses to form rain clouds), in which case you can just catch and drink that.
    • the water they produce is not clean. Do you want to drink the moldy water out of your dehumidifier? Mold and bacteria grow on the cold, wet metal fins where the water condenses.
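
    Rough numbers behind the energy point (the latent-heat value is standard physics; everything else is back-of-the-envelope):

```python
# Condensing water vapor releases roughly the same heat it takes to boil that
# water, and the machine has to move all of that heat to make liquid water.
latent_heat_mj_per_kg = 2.45   # ~latent heat of vaporization near room temperature

heat_per_liter_kwh = latent_heat_mj_per_kg / 3.6   # 1 liter ~ 1 kg; 3.6 MJ = 1 kWh
print(f"Heat to remove: ~{heat_per_liter_kwh:.2f} kWh per liter of water")
# Heat to remove: ~0.68 kWh per liter of water
#
# On top of this, the unit also has to chill all the air carrying that vapor
# below its dew point, which is why real machines use several times more
# electricity per liter, especially in dry air.
```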