A turret toss does seem like it would be effective as reactive armor. Is this how tanks evolve to survive drone predation? Main turret autotomy to give the smaller turret buds a chance to flee?
Some ships do have emergency antimatter generators per the TNG Technical Manual, but they’re hideously energy-intensive to run: something like a 10:1 ratio of deuterium consumed for each unit of antimatter produced. They only make sense to run in the rare situation where you absolutely need to warp to safety and you somehow have deuterium and a warp core but no antimatter.
But holodecks apparently have their own infinite power supply incompatible with any other Starfleet technology, so perhaps Voyager used the holodeck replicators to generate deuterium to run their antimatter generator whenever the Doctor isn’t practicing his sermons.
Efficiency would be abysmal even by the normal standards of this process, but it beats walking back to the Alpha Quadrant.
The microwave thing? I couldn’t even guess, though I personally wouldn’t want to stand next to it even if it works. A big microwave emitter on the battlefield is just asking to catch a HARM.
It really doesn’t seem like anyone knows for sure what to do about drones right now.
Large, non-nuclear EMPs mostly use explosives. Covering a large battlefield means you’re essentially bringing a massive, single-use explosive charge to the battlefield, staying uncomfortably close to it so you actually benefit from it, and trying to set it off at exactly the right time, because they’re not reloadable. And your enemy is probably thrilled you’re doing this, because it saves them from hauling their own explosives there. (On that note, why are you sitting on this thing instead of dropping it on the enemy?)
This is in addition to whatever shielding you brought, which is likely bulky and conspicuous. And you’re probably not doing combined arms, because shielding infantry and light vehicles from massive explosions is, it is fair to say, something of an unsolved problem.
But wait, you might be thinking. I know there are non-explosive ways to generate EMPs. Yes, there are, but you need a power source for those, and if you have a really good, portable one of those and a consistent supply of fuel to run it, you probably have better uses for it, like powering a modest laser. Oh, also, you’re 100% sure your shielding works perfectly, right? You’ll find out quick if you don’t.
Environmentalists are fond of saying that “There is no second Earth.” They are wrong! Here’s why:
There is an entire second Earth right here on Earth.
Second Earth is a waterworld. It’s the vast Pacific Ocean that covers half the planet.
Well, he’s a little fuzzy on the concepts of halves and wholes, but let’s hear him out on colossal geoengineering projects.
Nobody:
Absolutely nobody:
The ghost of Sam Hughes: Okay but have you considered
[The AI]’s going to fall in love with you
Fortunately for everyone, they went out of business before a mandatory reporter had to make the weirdest call ever to CPS.
Ah, right, I guess that’s why other vending machines never caught on. Why spend $2 on a Snickers at work when a quick trip to the grocery store can get you candy for way less?
What you’re overlooking this time is that vending machines sell convenience, not just single-serving portions. The fact that very few customers really need ammo without leaving the store/mall is indeed why this is a questionable business model and not just a sketchy one.
I’m puzzled, though, by the belief that hunters are more likely to make overpriced impulse purchases of ammo than mass shooters. I’m even less inclined to buy that than ammo from a vending machine.
You’re forgetting mass shooters, i.e., the people who don’t care if they’re identified or if they’re getting a good price. Safe to say they’re not worried about their credit rating if the plan is to take on a SWAT team in 20 minutes.
American Rounds
What, was the Circus of Values brand too expensive to license?
Oh, hey, I’ve run into this in the wild: the Kalendar AI people keep ineptly trying to start a conversation to sell some kind of kiosk software by referencing factoids they scraped from our latest press release. They’ve clearly spent more effort on evading spam filters and rotating domains than they have on anything else, but they helpfully use “human” names ending in “Kai,” so creating a wildcard filter wasn’t too hard.
Credit where it’s due: I’d never heard of Kalendar or the software company that hired them, but this experience has told me everything I need to know about both of them. If you don’t sweat the details and rate sentiment change using absolute value, that’s kind of impressive.
Addressing the “in hell” response that made headlines at Sundance, Rohrer said the statement came after 85 back-and-forth exchanges in which Angel and the AI discussed long hours working in the “treatment center,” working with “mostly addicts.”
We know 85 is the upper bound, but I wonder what Rohrer would consider the minimum number of “exchanges” acceptable for telling someone their loved one is in hell? Like, is 20 in “Hey, not cool” territory, but it’s all good once you get to 50? 40?
Rohrer says that when Angel asked if Cameroun was working or haunting the treatment center in heaven, the AI responded, “Nope, in hell.”
“They had already fully established that he wasn’t in heaven,” Rohrer said.
Always a good sign when your best defense of the horrible thing your chatbot says is that it’s in context.
I’m just going to pretend that’s one of the researchers from Where Oaken Hearts Do Gather.
I conclude that scheming is a disturbingly plausible outcome of using baseline machine learning methods to train goal-directed AIs sophisticated enough to scheme (my subjective probability on such an outcome, given these conditions, is ~25%).
Out: vibes and guesswork
In: “subjective probability”
“We’re all in grave danger! What? Well no, we can’t give specifics unless we risk not getting paid. Signed, Anonymous”
I mean, I wasn’t exactly expecting the Einstein-Szilard letter 2.0 when I clicked that link, but this is pathetic.
lmao, Zoom is cooked. Their CEO has no idea how LLMs work or why they aren’t fit for purpose, but he’s 100% certain someone else will somehow solve this problem:
So is the AI model hallucination problem down there in the stack, or are you investing in making sure that the rate of hallucinations goes down?
I think solving the AI hallucination problem — I think that’ll be fixed.
But I guess my question is by who? Is it by you, or is it somewhere down the stack?
It’s someone down the stack.
Okay.
I think either from the chip level or from the LLM itself.
OpenAI: “Our AI is so powerful it’s an existential threat to humanity if we don’t solve the alignment issue!”
Also OpenAI: “We can devote maybe 20% of our resources to solving this, tops. We need the rest for parlor tricks and cluttering search results.”
If they’re really lucky, they’ll end up working for the Laundry only once. Residual Human Resources is a bad way to go out.
Charles Stross’ Laundry series is basically this concept set in the present day: magic is a branch of mathematics, which means it can be computed and programmed.
It is perhaps worth noting at this point the series genre is cosmic horror.
Biblically-accurate Ace Combat aircraft.