

Googling around suggests it’s a global setting. Having recently used an Xfce version that didn’t want to do super+arrow, maximize-vertical is an okay tool, but outside of super-duper-widescreen, it’s not what I’d ever want by default.


Microsoft marketing hasn’t gotten any better about song choices. A few years ago their ads had soft bleep-bloop tunes and “go baby, go baby, yeah we’re right behind you.”
The song is “Cherry Lips,” by Garbage. It’s the twink anthem.
And it’s still not as tone-deaf as whichever Bill Hicks target picked out “hey ho let’s go” from the god-damned “Blitzkrieg Bop.”


Mom’s gonna fix it all soon.


What’s the behavior when you double-click the title bar?


There’s only so many corners.


Thoroughly familiar with it; don’t care. The global menu has always been goofy because of the invisible relation to some open window. Usually a small window floating out in the middle of the desktop, because Mac OS took forever to adopt any concept of “maximize.” I’m still not sure they do it right.


This has made a lot of people very angry and been widely regarded as a bad move.
Seriously though, this is the first properly good UI for a desktop computer. Mac OS (or I guess Macintosh OS at the time) was okay, but reliant on the global menu and weird drop-downs. Windows kept everything self-contained. Even multi-window programs tended to use the “multiple document interface,” i.e., windows inside windows. Tabs weren’t really a thing yet.
It also crashed if you looked at it funny and had the antivirus capabilities of warm cheese. But there’s damn good reasons Windows 7 was the same experience, extended, rather than replaced. It’s more-or-less what I style Linux to look like. And in light of that I’m kinda pissed off any OS ever struggles to remain responsive, when this relic ran smoothly on one stick of RAM that’s smaller than my CPU’s cache.


PC Gamer’s Coconut Monkey era.
Then why is it one hour?
Why, did they add a week-long quarantine in baggage check? It’s an airport. The whole point is to show up and leave. Even if the wait lasts longer than the flight.
If your ass is in there longer than 24 hours, the wifi should be considered an apology.
Why’s it need to be temporary, anyway? It’s an airport. Nobody’s sticking around.


The magnetosphere below the equator drags vowels toward the front of the mouth. That’s why South Africa and New Zealand can only do E’s.


Tacky is an understatement. The striping is superglue.


Hilarious, after the tortuously long road Xenia took to get there.


Jesus Christ, the usability nightmare of this website is worse than the goofy animated GIF they think is an exaggeration.
www.wired.com to get rid of the autoplaying video go fuck yourselves, www.wired.com to get rid of the assorted gigantic flyover bullshit.


Gene Amdahl himself was arguing hardware. It was never about writing better software - that’s the lesson we’ve clawed out of it, after generations of reinforcing harmful biases against parallelism.
Telling people a billion cores won’t solve their problem is bad, actually.
Human beings by default think going faster means making each step faster. How you explain that it’s wrong matters far more than merely stating that it’s wrong. The usual approach inevitably leads to saying ‘see, parallelism is a bottleneck.’ If all they hear is that another ten slow cores won’t help but one faster core would - they’re lost.
That’s how we got needless decades of doggedly linear hardware and software. Operating systems that struggled to count to two whole cores. Games that monopolized one core, did audio on another, and left your other six untouched. We still lionize cycle-juggling maniacs like John Carmack and every Atari programmer. The trap people fall into is seeing a modern GPU and wondering how they can sort their flat-shaded triangles sooner.
What you need to teach them, what they need to learn, is that the purpose of having a billion cores isn’t to do one thing faster, it’s to do everything at once. Talking about the linear speed of the whole program is the whole problem.


The PS3 had a 128-bit CPU. Sort of. “AltiVec” vector processing could split each 128-bit word into several values and operate on them simultaneously. So for example if you wanted to do 3D transformations using 32-bit numbers, you could do four of them at once, as easily as one. It doesn’t make doing one any faster.
Vector processing is present in nearly every modern CPU, though. Intel’s had it since the late 90s with MMX and SSE. Those just had to load registers 32 bits at a time before performing each same-instruction-multiple-data operation.
The benefit of increasing bit depth is that you can move that data in parallel.
The downside of increasing bit depth is that you have to move that data in parallel.
To move a 32-bit number between places in a single clock cycle, you need 32 wires between two places. And you need them between any two places that will directly move a number. Routing all those wires takes up precious space inside a microchip. Indirect movement can simplify that diagram, but then each step requires a separate clock cycle. Which is fine - this is a tradeoff every CPU has made for thirty-plus years, as “pipelining.” Instead of doing a whole operation all-at-once, or holding back the program while each instruction is being cranked out over several cycles, instructions get broken down into stages according to which internal components they need. The processor becomes a chain of steps: decode instruction, fetch data, do math, write result. CPUs can often “retire” one instruction per cycle, even if instructions take many cycles from beginning to end.
To move a 128-bit number between places in a single clock cycle, you need an obscene amount of space. Each lane is four times as wide and still has to go between all the same places. This is why 1990s consoles and graphics cards might advertise 256-bit interconnects between specific components, even for mundane 32-bit machines. They were speeding up one particular spot where a whole bunch of data went a very short distance between a few specific places.
Modern video cards no doubt have similar shortcuts, but that’s no longer the primary way they perform ridiculous quantities of work. Mostly they wait.
CPUs are linear. CPU design has sunk eleventeen hojillion dollars into getting instructions into and out of the processor, as soon as possible. They’ll pre-emptively read from slow memory into layers of progressively faster memory deeper inside the microchip. Having to fetch some random address means delaying things for agonizing microseconds with nothing to do. That focus on straight-line speed was synonymous with performance, long after clock rates hit the gigahertz barrier. There’s this Computer Science 101 concept called Amdahl’s Law that was taught wrong as a result of this - people insisted ‘more processors won’t work faster,’ when what it said was, ‘more processors do more work.’
Video cards wait better. They have wide lanes where they can afford to, especially in one fat pipe to the processor, but to my knowledge they’re fairly conservative on the inside. They don’t have hideously-complex processors with layers of exotic cache memory. If they need something that’ll take an entire millionth of a second to go fetch, they’ll start that, and then do something else. When another task stalls, they’ll get back to the other one, and hey look the fetch completed. 3D rendering is fast because it barely matters what order things happen in. Each pixel tends to be independent, at least within groups of a couple hundred to a couple million, for any part of a scene. So instead of one ultra-wide high-speed data-shredder, ready to handle one continuous thread of whatever the hell a program needs next, there’s a bunch of mundane grinders being fed by hoppers full of largely-similar tasks. It’ll all get done eventually. Adding more hardware won’t do any single thing faster, but it’ll distribute the workload.
Video cards have recently been pushing the ability to go back to 16-bit operations. It lets them do more things per second. Parallelism has finally won, and increased bit depth is mostly an obstacle to that.
So what 128-bit computing would look like is probably one core on a many-core chip. Like how Intel does mobile designs, with one fat full-featured dual-thread linear shredder, and a whole bunch of dinky little power-efficient task-grinders. Or… like a Sony console with a boring PowerPC chip glued to some wild multi-phase vector processor. A CPU that they advertised as a private supercomputer. A machine I wrote code for during a college course on machine vision. And it also plays Uncharted.
The PS3 was originally intended to ship without a GPU. That’s part of its infamous launch price. They wanted a software-rendering beast, built on the AltiVec unit’s impressive-sounding parallelism. This would have been a great idea back when TVs were all 480p and games came out on one platform. As HDTVs and middleware engines took off… it probably would have killed the PlayStation brand. But in context, it was a goofy path toward exactly what we’re doing now - with video cards you can program to work however you like. They’re just parallel devices pretending to act linear, rather than the other way around.


Hewlett-Packard is just an unhinged ad campaign for Brother.
Windows 95 legitimately had better UI than that “Material” bullshit, via relief shading conveyed through four fucking colors. The hierarchy of elements is instantly visible. Buttons even popped in and out when clicked. There’s just no excuse for how minimalism fetishists have taken over user experience.