  • It is half okay, but only if they are not getting paid to screw up your results. Paid manipulation is a coup against democracy: freedom of information underpins freedom of the press, which is a fundamental pillar of democracy. Google’s entire business model has always been neo-feudalism. A web crawler and search engine is like a library; it must be neutral, objective, and publicly funded as a nonprofit. Much the same goes for YouTube: it is our digital public commons and the most efficient medium for sharing information in the primary form of human communication.


  • It is not super common to get pregnant on the first offense, especially if you were her first child. You can count the days backward from your birthday to see roughly when it happened. If you were the first child, you may also have arrived a day or a few late.

    Growing up, I found it funny how many of my friends happened to be born in the first week of September… Happy New Year’s. There is often, not always, but often, some correlated reason why they were free to screw around too much.
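
    A quick sketch of that back-counting (the birthday here and the typical ~38-week, 266-day conception-to-birth figure are assumptions for illustration):

    ```python
    # Back-count from a birthday to an estimated conception date.
    # The birthday and the 266-day figure are illustrative assumptions.
    from datetime import date, timedelta

    birthday = date(1990, 9, 3)                  # hypothetical early-September birthday
    conception = birthday - timedelta(days=266)  # ~38 weeks of gestation
    print(conception)                            # -> 1989-12-11
    ```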


  • Multithreading is parallelism and is poised to scale by a similar factor; the primary issue is simply getting tensors in and out of the ALU. Good enough is the engineering game. Having massive chunks of silicon lying around unused is a much more serious problem. At present, the choke point is not the parallelism of the math but the L2-to-L1 bus width and cycle timing. The ALU can handle the work. The AVX-512 instruction set can load a 512-bit wide word in a single instruction; the problem is just getting those words in and out in larger volume.
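
    Here is a crude way to see that choke point from Python (the array sizes are arbitrary and the numbers vary by machine): the same elementwise math gets far less throughput once the operands must stream through the memory hierarchy instead of sitting in cache, which shows the math units outrunning the memory path.

    ```python
    # Rough illustration: identical elementwise work, very different
    # throughput once the working set no longer fits in cache.
    # Array sizes are arbitrary assumptions; results vary by machine.
    import time
    import numpy as np

    def gflops(n, reps):
        a = np.random.rand(n).astype(np.float32)
        b = np.random.rand(n).astype(np.float32)
        out = np.empty_like(a)
        t0 = time.perf_counter()
        for _ in range(reps):
            np.add(a, b, out=out)  # 1 FLOP per element, 12 bytes of traffic
        return n * reps / (time.perf_counter() - t0) / 1e9

    print(f"in cache: {gflops(32_000, 20_000):.1f} GFLOP/s")   # ~384 KiB working set
    print(f"from RAM: {gflops(32_000_000, 20):.1f} GFLOP/s")   # ~384 MiB working set
    ```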

    I speculate that the only reason this has not been done already is the marketability of single-thread speeds. Present thread speeds are insane, well into the radio-frequency realm of black-magic wizardry. I don’t think it is possible to make these bus widths wider and still maintain the thread speeds, because there are too many LCR (inductance-capacitance-resistance) consequences. I mean, at around 5 GHz, the idea of wires as connections and gaps as insulators becomes a fallacy, since capacitive coupling can make a connection across any small gap.

    Personally, I think this is a problem that will take a whole new architectural solution. It is anyone’s game, unlike any other time since the late 1970s. It will likely be the beginning of the real RISC-V age and the death of x86. We are presently in the age of the 20+ thread CPU. If a redesign can produce a 50-500 logical-core CPU that is slower in single-thread speed but capable of all workloads, I think it will dominate easily. Choosing the appropriate CPU model will become much more relevant.


  • Mainstream is about to collapse. The exploitation nonsense is faltering. Open source is emerging as the only legitimate player.

    Nvidia is just playing it conservative because it was massively overvalued by the market. Using GPUs for AI is a stopgap hack until hardware can be developed from scratch. The real life cycle of hardware is ten years from initial idea to first consumer availability. The CPU’s problem with AI is quite simple: it will be solved in a future iteration, and that means the GPU gets relegated back to graphics, or it might even become redundant entirely. Once upon a time, the CPU needed a math coprocessor to handle floating-point math. That experiment ended in integration; it proved that a general monolithic solution is far more successful. No data-center operator wants two types of processors for dedicated workloads when one type can accomplish nearly the same task. The CPU must be restructured around a wider-bandwidth memory cache. That will likely require slower thread speeds overall, but it is the most likely long-term solution. Solving this issue will likely bring more threading parallelism with it, and therefore has the potential to render the GPU redundant in favor of a broader range of CPU scaling.

    Human persistence of vision is not capable of registering the ever-higher speeds that are ultimately only marketing. The hardware will likely never support this stuff, because no billionaire is putting up the funding to back the marketing with tangible hardware investment. … IMO.

    Neo-feudalism is well worth abandoning. Most of us are entirely uninterested in this business model. I have zero faith in the present market. I have AAA-capable hardware for AI. I play and mod open source games. I could easily be a customer in this space, but there are no game manufacturers. I do not make compromises on ownership: if I buy a product, my terms of purchase are full ownership with no strings attached whatsoever. I don’t care what everyone else does. I am not for sale, and I will not sell myself for anyone’s legalese nonsense or pay ownership prices to rent from some neo-feudal overlord.


  • Yeah, this has been my experience too. LLMs don’t handle project-specific code styles too well either, or situations where there are several valid ways of doing things.

    Actually, earlier today I was asking a Mixtral 8×7B about some bash ideas. I kept getting suggestions to use find and sed commands, which I find unreadable and inflexible for my evolving scripts. They are fine for a specific one-off need, but I’ll move to Python before I fuss with either.

    Anyways, I changed the starting prompt to something like ‘Common sense questions and answers with Richard Stallman’s AI assistant.’ The results were remarkable and interesting on many levels: the answers always terminated cleanly instead of running on with another question/answer pair, a short footnote appeared about the static nature of LLM learning and capabilities, and the responses in general were much better quality. In this specific context, the model knew how to respond on a much higher level than normal. I think it is the combination of Stallman’s AI background and his bash scripting that builds powerful momentum here. I tried it on a whim, but it paid dividends and is a keeper of a prompting strategy.
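
    For reference, here is a minimal sketch of that prompt swap using the llama-cpp-python bindings (the model filename and the sample question are placeholders):

    ```python
    # Minimal sketch of the prompt swap with llama-cpp-python.
    # The model path and example question are placeholders.
    from llama_cpp import Llama

    llm = Llama(model_path="mixtral-8x7b-instruct.Q4_K_M.gguf", n_ctx=4096)

    prompt = (
        "Common sense questions and answers with Richard Stallman's AI assistant.\n\n"
        "Question: What is a readable alternative to find/sed in a bash script?\n"
        "Answer:"
    )

    out = llm(prompt, max_tokens=512, stop=["Question:"])
    print(out["choices"][0]["text"])
    ```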

    Overall, the way my scripts collect relationships in the source code would probably make for a productive chunking strategy for a RAG agent. I don’t think an AI would be good at what I’m doing at this stage, but it could use that info. It might even be possible to integrate the scripts as a pseudo-database in the LLM model-loader code for further prompting.
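
    As a purely hypothetical illustration (not my actual scripts), chunking a source file by top-level definitions and tagging each chunk with the other names it mentions might look like this:

    ```python
    # Hypothetical sketch: split Python source into def/class chunks and
    # record which other chunk names each one mentions, as crude
    # relationship metadata a RAG agent could index.
    import re

    def chunk_source(text):
        chunks, name, lines = {}, "_header", []
        for line in text.splitlines():
            m = re.match(r"(?:def|class)\s+(\w+)", line)
            if m:                                 # a new top-level definition starts
                chunks[name] = "\n".join(lines)
                name, lines = m.group(1), []
            lines.append(line)
        chunks[name] = "\n".join(lines)           # flush the final chunk
        # naive relationships: which chunks mention which other chunk names
        return {n: {"text": c, "refs": [o for o in chunks if o != n and o in c]}
                for n, c in chunks.items()}
    ```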


  • Re: Question about rim tape glue (bike wrench@lemmy.world)

    I wouldn’t use superglue; I would worry about it rubbing on the tube over time. Keep in mind, everything in there moves around a lot more than it seems at first. That was the selling point of old latex tubes: they cause less friction inside as the tire deforms at the rolling surface, in addition to the weight savings over butyl rubber tubes. The purpose of rim tape is to prevent the tube from contacting the inner edges of the spoke-nipple holes as it flexes. Once the tube and tire are mounted and pressurized, that pressure will hold everything in place, so long as the tape is the right width. Most factory wheels come with a plastic band and no adhesive, because anything that collects junk inside the rim is bad. I use these bands and a bit of talcum powder between the tube, tire, and rim; this reduces the chance of pinch flats. Some tubes actually come packed in a pouch of talcum powder for this reason as well. Generally, cloth rim tape will stick to itself better than to the rim.