Bistable multivibrator
Non-state actor
Tabs for AI indentation, spaces for AI alignment
410,757,864,530 DEAD COMPUTERS

  • 16 Posts
  • 614 Comments
Joined 2 years ago
Cake day: July 6th, 2023



  • Dear acausal robot God, that was cathartic. Refreshing to watch a mainstream journalist see through techbro weirdo uwu smol bean antics for what they are, especially after so many credulous puff pieces.

    This includes the Guardian (twice), the Wall Street Journal, the Philadelphia Inquirer, CBC News, Business Insider, Bloomberg, and Dallas Magazine, among many, many others. My industry peers very clearly want me to know about these people—a lot about them!

    I knew that a couple of outlets had done profiles of them lately, but I didn’t realize they were attention whoring this hard. Maybe their thing isn’t a breeding kink after all, but exhibitionism.

    I also didn’t know about the child abuse, though I could have seen it coming without subjecting myself to two Grauniad bits on these fuckers[1].

    And then there’s the slap. The most notable aspect of the Guardian’s May 2024 profile—which, again, profiled them twice in the same year—was a moment when Malcolm slaps his son in the face, in public, after the then-2-year-old accidentally bumped into a table, leaving the boy “whimpering.” To her credit, reporter Jenny Kleeman didn’t let this go, forcing the couple to defend this punishment.

    [1]: Don’t even know if “fucker” is appropriate here given these bougie failchildren are apparently opting for IVF for the actual baby-making part.







  • Oh no, it’s a very serious (in the context of a psychological-tragedy sci-fi anime with bionic mecha fighting lovecraftian kaiju) paramilitary national (or maybe supranational) government body affiliated with a shadowy cabal of conspirators.

    Its logo is also quite heavily featured on the unscalable mountains of promotional merchandise for the franchise, so it’s an easy thing to name-drop if you don’t know or remember much from the show but want to feel like you’re making a deep-cut reference because you remember the name from a coffee mug you have or something.






  • I distinctly recall a lot of people a few years ago parroting some variation of “well I don’t know about Bitcoin specifically, but blockchain itself is probably going to be important and even revolutionary as a technology”, and sometimes I wish I’d collected receipts so I could say “I told you it’s not”.

    Here we are, year of Nakamoto 17, and the full list of use cases for blockchains is:

    • Speculative trading of toy currencies made up by private nobodies
    • Paying through the nose to execute arbitrary code on SETI@Home’s evil cousin
    • Speculative trading of arbitrary blobs of bytes made up by private nobodies

    And no, Git is not a fucking blockchain. Much like the New York City Subway is not the fucking Loop.






  • GPU programs (specifically CUDA, although other vendors’ stacks are similar) combine code for the host system, written in a conventional programming language (typically C++), with code for the GPU written in the CUDA language. Even if the host-side C++ can be optimized with hand-written assembly, that’s not going to lead to significant gains when the performance bottleneck is on the GPU side.
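    A minimal sketch of that split, with a toy vector-add kernel (all names here are illustrative, not taken from any real project):

    ```cuda
    #include <cstdio>

    // Device code: nvcc compiles this to PTX, which the driver then lowers
    // to the GPU's native instruction set.
    __global__ void add(const float* a, const float* b, float* c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per element
        if (i < n) c[i] = a[i] + b[i];
    }

    // Host code: ordinary C++ running on the CPU.
    int main() {
        const int n = 1 << 20;
        float *a, *b, *c;
        // Unified memory keeps the sketch short; real code often copies explicitly.
        cudaMallocManaged(&a, n * sizeof(float));
        cudaMallocManaged(&b, n * sizeof(float));
        cudaMallocManaged(&c, n * sizeof(float));
        for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

        add<<<(n + 255) / 256, 256>>>(a, b, c, n);  // kernel launch from the host
        cudaDeviceSynchronize();
        printf("c[0] = %f\n", c[0]);  // expect 3.0

        cudaFree(a); cudaFree(b); cudaFree(c);
        return 0;
    }
    ```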

    The CUDA compiler translates the high-level CUDA code into something called PTX: machine code for a “virtual ISA”, which the GPU driver then translates into native machine language for the GPU’s proprietary instruction set. PTX is roughly comparable to a compiler intermediate representation, such as LLVM IR. It’s plausible that hand-written PTX assembly/IR could have been used to optimize parts of the program, but that would be somewhat unusual.
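    For flavor, CUDA does expose inline PTX through an asm() statement, so hand-tuned PTX can live inside otherwise normal kernel code. A toy example (mine, not anyone’s actual optimization) that just adds two integers:

    ```cuda
    // Equivalent to "return a + b;", but spelled out as a PTX instruction.
    // The constraints ("=r", "r") bind the C++ variables to PTX registers.
    __device__ int add_ptx(int a, int b) {
        int result;
        asm("add.s32 %0, %1, %2;" : "=r"(result) : "r"(a), "r"(b));
        return result;
    }
    ```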

    For another layer of assembly/machine languages, they could technically have reverse-engineered the actual native ISA of the GPU core and written machine code for it, bypassing the compiler in the driver. This is also quite unlikely, as it would practically mean writing their own driver for latest-gen Nvidia cards that vastly outperforms the official one, and that would be at least as big a news story as Yet Another Slightly Better Chatbot.

    While JITs and runtimes do have an overhead compared to direct native machine code, that overhead is relatively small, approximately constant, and easily amortized if the JIT is able to optimize a tight loop. For car analogy enjoyers: imagine a racecar that takes ten seconds to start moving from the starting line in exchange for completing each lap one second faster. If the race is more than ten laps long, the tradeoff is worth it, and even more so the longer the race. Ahead-of-time optimizations can do the same thing at the cost of portability, but unless you’re running Gentoo, most of the C programs on your computer are likely compiled for the lowest common denominator of whatever x86/AMD64/ARM instruction sets your OS happens to support.
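    The same break-even arithmetic as the analogy, using its made-up numbers (ten seconds of startup overhead, one second saved per lap):

    ```cuda
    #include <cstdio>

    // A fixed startup cost is amortized once laps * gain_per_lap exceeds it.
    int main() {
        const double startup_overhead = 10.0;  // seconds lost before the first lap
        const double gain_per_lap     = 1.0;   // seconds saved on every lap
        printf("JIT pays off after %.0f laps\n", startup_overhead / gain_per_lap);
        return 0;
    }
    ```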

    If the overhead of a JIT and runtime is significant in the overall performance of the program, it’s probably a small program to begin with. No shame in small programs, but unless you’re running one very frequently, it’s unlikely to matter whether execution takes five or fifty milliseconds.