• 5 Posts
  • 278 Comments
Joined 1 year ago
Cake day: July 2nd, 2023

  • From an urban planning perspective, there are some caveats to your points:

    A new downtown would make a subway very easy and cheap to build, you could cut and cover instead of tunnelling

    Cut-and-cover makes shallow underground tunnels cheaper to construct in almost all cases, whether under an old city center or as part of building a new one from scratch. In fact, older pre-WW2 cities are almost ideal for cut-and-cover because the tunnels can follow the street grid, yielding a tunnel close to already-built destinations while minimizing costly curves.

    Probably the worst scenario for cut-and-cover is when the surface street has unnecessary curves and detours (eg American suburban arterials). Either the tunnel follows the curves and ends up oddly far from major destinations, or it’s built in segments, using cut-and-cover where possible and boring the rest.

    Cheeeaaap land for huge offices, roads, and even houses

    At least in America, where agricultural land at the edges of metropolitan areas is still cheap, the last 70 years do not suggest that huge roads, huge offices, and huge houses lead to a utopia. Instead, we just get car-dependency and sprawl, as well as dead shopping malls. The benefits of this accrued to prior generations, who wheeled-and-dealed in speculative suburban house flipping, and saddled cities with sprawling infrastructure that the existing tax base cannot afford.

    Green field is just so cheap.

    It is, until it isn’t. Greenfield development “would be short term appealing but still expensive when it comes to building everything”. It is rare in America for post-WW2 greenfield housing or commercial developments to pay sufficient tax to maintain the municipal services those developments require.

    Look at any one municipal utility and it becomes apparent that the costs scale by length or area, but the revenue scales by businesses/households. The math doesn’t suggest we need Singapore-levels of density, but constant sprawling expansion will put American cities on the brink of bankruptcy. As it stands, regressive property tax policies result in dense neighborhoods subsidizing sprawling neighborhoods, with nothing in return except more traffic and wastewater.
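    As a toy illustration of that cost-versus-revenue mismatch, here’s a rough sketch; the lot-frontage figures are made up for the sake of the comparison, not taken from any real city:

```python
# Compare rough water-main length per household for a compact block vs large-lot sprawl.
# Both serve the same number of rate-payers; only the (made-up) lot frontage differs.
layouts = {
    "compact": {"households": 100, "frontage_ft": 40},
    "sprawl":  {"households": 100, "frontage_ft": 120},
}

for name, lot in layouts.items():
    main_ft = lot["households"] * lot["frontage_ft"]   # pipe roughly tracks total street frontage
    per_home = main_ft / lot["households"]
    print(f"{name}: {main_ft} ft of main, {per_home:.0f} ft per household")

# Same revenue base, but the sprawling layout has ~3x the pipe (and road, and wire) to maintain.
```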

    Either these cities must be permitted to somehow break away from their failed and costly suburban experiments, or the costs must be internalized upon greenfield development, which might not make it cheap anymore.


  • commercial appliances didn’t take any stand-by measures to avoid “keeping the wires warm”

    Generally speaking, the amount of standby current attributable to the capacitors has historically paled in comparison to the much higher standby current of the active electronics therein. The One Watt Initiative is one such program that shed light on “vampire draw” and posed a tangible target for what standby power draw for an appliance should look like: 1 Watt.

    A rather infamous example of profligate standby power was TV set-top boxes, rented from the satellite or cable TV company, at some 35 Watts. Because these weren’t owned by customers, so-called free-market principles couldn’t apply and consumers couldn’t “vote with their feet” for less power-hungry set-top boxes. And the satellite/cable TV companies didn’t care, since they weren’t the ones paying for the electricity to keep those boxes powered. Hence, a perverse scenario where power was being actively wasted.
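    To put numbers on that, here’s a quick back-of-the-envelope calculation; the $0.15/kWh electricity price is just an illustrative assumption:

```python
# Annual energy and cost of an always-on 35 W set-top box vs the 1 W standby target.
HOURS_PER_YEAR = 24 * 365
PRICE_PER_KWH = 0.15  # assumed residential rate, purely illustrative

for watts in (35, 1):
    kwh = watts * HOURS_PER_YEAR / 1000
    print(f"{watts} W standby ~ {kwh:.0f} kWh/year ~ ${kwh * PRICE_PER_KWH:.0f}/year")
# 35 W works out to roughly 307 kWh (~$46) per year, per box, before the TV is ever turned on.
```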

    It took both carrots (eg Energy Star labels) and sticks (eg EU and California legislation) to change this sordid situation. But to answer your question in the modern day, where standby draw is now mostly kept around 1 Watt or lower, it all boils down to design tradeoffs.

    For most consumer products, a physical power switch has gone the way of the dodo. The demand is for products which can turn “off” but can start up again at a moment’s notice. Excellent electronics design could achieve standby consumption in the milliwatts, but this often entails an entirely separate circuit and supply whose only job is to wake up the main circuit of the appliance. That’s extra parts, and thus more that can go wrong and cause warranty claims. This is really only pursued when power consumption is paramount, such as for battery-powered devices. And even with all that effort, the power draw will never be zero.

    So instead, the more common approach is to reuse the existing supply and circuitry, but try to optimize it when not in active operation. That means accepting that the power supply circuitry will have some amount of always-on draw, and that the total appliance will have a standby power draw which is deemed acceptable.

    I would also be remiss if I didn’t mention the EU Directives since 2013 which mandate particular power-factor targets, which for most non-motor appliances can only be achieved with active components, ie Active Power Factor Correction (Active PFC). While not strictly addressing standby power, this would be an example of a measure undertaken to avoid the heating caused by apparent power, both locally and through the grid.


  • How were you measuring the current in the power cable? Is this with a Kill A Watt device, or perhaps with a clamp meter and a line splitter?

    As for why there is a capacitor across the mains input: a switching DC power supply like an ATX PSU draws current in a fairly jagged fashion. So to stabilize the input voltage, and to prevent the switching noise from propagating through the mains and radiating everywhere, some capacitors are placed across the AC lines. This is a large oversimplification, though, as the type and values of these capacitors are the subject of careful design.

    Since a capacitor charges and discharges based on the voltage across it, and because AC power changes voltage “polarity” at 50 or 60 Hz, the flow of charge into and out of the capacitor will be measurable as a small current.
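    For a sense of scale, here’s a rough calculation of the current such an across-the-line (“X”) capacitor draws; the 0.47 µF value and 230 V / 50 Hz mains are assumptions for illustration:

```python
# Reactive current drawn by an X capacitor across the mains: I = V * 2*pi*f*C.
import math

V = 230        # mains voltage (assumed)
F = 50         # mains frequency in Hz (assumed)
C = 0.47e-6    # capacitance in farads, a common X2 value (assumed)

current = V * 2 * math.pi * F * C
apparent_power = V * current
print(f"{current*1000:.1f} mA, {apparent_power:.1f} VA of apparent power, near-zero real power")
```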

    Your choice of measuring instrument will affect how precisely you can measure this apparent power, which will in turn affect how your instrument reports the power factor. It may also be that the current in question includes some of the standby current for keeping the PSU’s logic ICs in a ready state, for when the computer starts up. That would also explain why the power factor isn’t exactly zero.


  • litchralee@sh.itjust.works to Crappy Design@sh.itjust.works · RNOP ADLH

    Agreed, it’s a very bad design. If your school speed limit covers most of the daylight hours on weekdays, is the implicit suggestion that it’s fine to drive faster on weekends and at night? The street should be rebuilt so that its design enforces the desired speed limit, rather than relying on paint or signs.

    Oh, we’re talking about the letters on the glass. My bad lol



  • A few months ago, my library gained a copy of Cybersecurity for Small Networks by Seth Enoka, published by No Starch Press in 2022. So I figured I’d have a look and see if it included modern best practices for networks.

    It was alright, in that it’s a decent how-to guide for a novice to set up sensible, minimum network fortifications. But it only includes an overview of how those fortifications work, without going into the additional depth needed to fine-tune or optimize them for specific environments. So if the reader has zero experience with network security, it’s a worthwhile read. But if you’ve already been operating a network with defenses for a while, there’s not much to gain from this particular text.

    Also, the author suggests that IPv6 should be disabled, which is a terrible idea. Modern best practice is not to pretend IPv6 doesn’t exist, but to ensure that firewalls and other defenses are configured to handle this traffic. There’s a vast difference between “administratively reject IPv6 traffic in/out of the WAN” and “disable IPv6 on all devices and pray no one ever connects an IPv6-enabled device”.

    You might have a look at other books available from No Starch Press, though.



  • The thing to keep in mind is that there exist things which have “circumstantial value”, meaning that the usefulness of something depends on the beholder’s circumstances at some point in time. Such an object can actually have multiple valuations, as compared to goods (which have a single, calculable market value) or sentimental objects (“priceless” to their owner).

    To use an easy example, consider a sportsball ticket. Presenting it at the ballfield entitles the holder to a seat to watch the game at the time and place written on the ticket. And it can be transferred – despite Ticketmaster’s best efforts – so someone else could enjoy the same. But if the ticket is unused and the game is over, then the ticket is now worthless. Or if the ticket holder doesn’t enjoy watching sportsball, their valuation of the ticket is near nil.

    So to start, the coupon book is arguably “worth” $30, $0, or somewhere in between. Not everyone will use every coupon in the book. But if using just one coupon results in a savings of at least $1, then perhaps the holder would see net value from that deal. In no circumstance is KFC marking down $30 on their books because they issued coupons that somehow total to $30.

    That said, I’m of the opinion that if a donation directly results in me receiving something in return… that’s not a donation. It’s a sale or transaction dressed in the clothes of charity. Plus, KFC sends coupons in the mail for free anyway.


  • Notwithstanding the possible typo in the title, I think the question is why USA employers would prefer to offer a pension over a 401k, or vice-versa.

    For reference, a pension is also known as a defined benefit plan, since an individual who has paid into the plan for the minimum amount will be entitled to some known amount of benefit, usually in the form of a fixed stipend for the remainder of their life, and sometimes also health insurance coverage. USA’s Social Security system is also sometimes called the public pension, because it does in fact pay a stipend in old age and requires a certain amount of payments into the fund during one’s working years.

    Whereas a 401k is uncreatively named after the tax code section which authorized its existence, initially being a deferred compensation mechanism – aka a way to spread one’s income over more time, to reduce the personal taxes owed in a given year – and then grew into the tax-advantaged defined contribution plan that it is today. That is, it is a vessel for saving money, encouraged by tax advantages and by employer contributions, if any.

    The superficial view is that 401k plans overtook pensions because companies wouldn’t have to contribute much (or anything at all), shifting retirement costs entirely onto workers. But this is ahistorical, since initial 401k plans offered extremely generous employer contribution rates, some approaching 15% matching. Of course, the reasoning then was that the tax savings for the company would exceed that, and so it was a way to increase compensation for top talent. In the 80s and 90s, the 401k was only just taking hold as a fringe benefit, so you had to have a fairly cushy job to have access to a 401k plan.

    Another popular viewpoint is that workers prefer 401k plans because they’re more easily inspectable than a massive pension fund, and history has shown how pension funds can be mismanaged into non-existence. This is somewhat true, if US states’ teacher pension funds are any indication, although the Ontario Teachers’ Pension Plan would be the counterpoint. Also, the 401k plan participants at Enron would have something to complain about, as most of the workers’ funds were invested in the company itself, delivering a double whammy: no job, and no retirement fund.

    So to answer the question directly, it is my opinion that the explosion of 401k plans and participants in such plans – to the point that some US states are enacting automatic 401k plans for workers whose employers don’t offer one – is due to 1) momentum, since more and more employers keep offering them, 2) but more importantly, because brokers and exchanges love managing them.

    This is the crux: only employers can legally operate a 401k plan for their employees to participate in. But unless the employer is already a stock trading platform, they are usually ill-equipped to set up an integrated platform that allows workers to choose from a menu of investments which meet the guidelines from the US DOL, plus all other manner of regulatory requirements. Instead, even the largest employers will partner with a financial services company that has expertise in offering 401k plans, such as Vanguard, Fidelity, Merrill Edge, etc.

    Naturally, they’ll take a cut on every trade or somehow get compensated, but because of the volume of 401k investments – most people auto-invest every paycheck – even small percentages add up quickly. And so, just like the explosion of retail investment where ordinary people could try their hand at day-trading, it’s no surprise that brokerages would want to extend their hand to the high volume business of operating 401k plans.
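    As a sketch of how “small percentages add up,” compare a hypothetical account growing with and without a 0.5% annual fee drag; all figures here are assumptions for illustration, not any real plan’s numbers:

```python
# Grow $500/month for 30 years at a 7% nominal return, with and without a 0.5% annual fee drag.
def grow(monthly, years, annual_return):
    balance = 0.0
    for _ in range(years * 12):
        balance = balance * (1 + annual_return / 12) + monthly
    return balance

no_fee = grow(500, 30, 0.07)
with_fee = grow(500, 30, 0.065)   # same return minus a 0.5% fee, as a rough approximation
print(f"without fee: ${no_fee:,.0f}; with fee: ${with_fee:,.0f}; difference: ${no_fee - with_fee:,.0f}")
```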

    Whereas, how would they make money off a pension fund? Pension funds are multi-billion dollar funds, so they can afford their own brokers to directly buy a whole company in one shot, with no repeat business.


  • Although copyright and patents (and trademarks) are lumped together as “intellectual property”, there’s almost nothing which is broadly applicable to them all, and they might as well be considered separately. The only things I can think of – and I’m not a lawyer of any kind – are that: 1) IP protection is mentioned vaguely in the US Constitution, and 2) they all behave as property, in that they can be traded/reassigned. That’s it.

    With that out of the way, it’s important to keep in mind that patent rights are probably the strongest in the family of IP, since there’s no equivalent “fair use” (US) or “fair dealing” (UK) allowance that copyright has. A patent is almost like owning an idea, whereas copyright is akin to owning a certain rendition plus a derivative right.

    Disney has leaned on copyright to carve for themselves an exclusive market of Disney characters, while also occasionally renewing their older characters (aka derivatives), so that’s why they lobby for longer copyright terms.

    Whereas there isn’t really a singular behemoth company whose bread-and-butter business is to churn out patents. Inventing stuff is hard, and so the lack of such a major player means a lack of lobbying to extend patent terms.

    To be clear, there are companies who rely almost entirely on patent law for their existence, just like Disney relies on copyright law. But type foundries (companies that make fonts) are just plainly different than Disney. Typefaces (aka fonts) as a design can be granted patents, and then the font files can be granted copyright. But this is a special case, I think.

    The point is: no one’s really clamoring for longer patents, and most people would regard a longer exclusive term on “ideas” as very problematic. Especially if it meant pharmaceutical companies could engage in even more price-gouging, for example.



  • If you hold a patent, then you have an exclusive right to that invention for a fixed period, which would be 20 years from the filing date in the USA. That would mean Ford could not claim the same or a derivative invention, at least not for the parts which overlap with your patent. So yes, you could sit on your patent and do nothing until it expires, with some caveats.

    But as a practical matter, the necessary background research, the application itself, and the defense of a patent just to sit on it would be very expensive, with no apparent revenue stream to pay for it. I haven’t looked up what sort of patent Ford obtained (or maybe they’ve merely started the application) but patents are very long and technical, requiring whole teams of lawyers to draft properly.

    For their patent to be valid, it must not overlap with an existing claim, and it must be novel and non-obvious, among other requirements. They would only file a patent: 1) to protect themselves from competition in the future, 2) because they expect the patent can be monetized by directly implementing it, licensing it out to others, or becoming a patent troll and extracting nuisance-value settlements, or 3) because they’re already so deep in the Intellectual Property land-grab that they must continue to participate by obtaining outlandish patents. The latter is a form of “publish or perish” and lets them appear to be on the cutting edge of innovation.

    A patent can become invalidated if it is not sufficiently defended. This means that if no one even attempts to infringe, then your patent would be fine. But if someone does, then you must file suit or negotiate a license with them, or else they can challenge the validity of your patent. If they win, you’ll lose your exclusive rights and they can implement the invention after all. This is not cheap, and Ford has deep pockets.


  • I’ll address your question in two parts: 1) is it redundant to store both the IP subnet and its subnet mask, and 2) why doesn’t the router store only the bits necessary to make the routing decision.

    Prior to the introduction of CIDR – which came with the “slash” notation, like /8 for the 10.0.0.0 RFC1918 private IPv4 subnet range – subnets would genuinely be any bit arrangement imaginable. The most sensible would be to have contiguous MSBit-justified subnet masks, such as 255.0.0.0. But the standard did not preclude using something unconventional like 255.0.0.1.

    For those confused what a 255.0.0.1 subnet mask would do – and to be clear, a lot of software might prove unable to handle this – this is describing a subnet with 2^23 addresses, where the LSBit must match the IP subnet. So if your IP subnet was 10.0.0.0, then only even numbered addresses are part of that subnet. And if the IP subnet is 10.0.0.1, then that only covers odd numbered addresses.

    Yes, that means two machines with addresses 10.69.3.3 and 10.69.3.4 aren’t on the same subnet. This would not be allowed when using CIDR, as contiguous set bits are required with CIDR.
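    A small Python sketch of how such a non-contiguous mask behaves, using the addresses above; this is just to illustrate the pre-CIDR matching rule, not something any sane network should deploy:

```python
# Membership test under an arbitrary (possibly non-contiguous) subnet mask: compare the masked bits.
import ipaddress

def in_subnet(addr: str, subnet: str, mask: str) -> bool:
    a = int(ipaddress.IPv4Address(addr))
    s = int(ipaddress.IPv4Address(subnet))
    m = int(ipaddress.IPv4Address(mask))
    return (a & m) == (s & m)

# Mask 255.0.0.1 on subnet 10.0.0.1: the first octet and the very last bit must both match.
print(in_subnet("10.69.3.3", "10.0.0.1", "255.0.0.1"))  # True  (odd address)
print(in_subnet("10.69.3.4", "10.0.0.1", "255.0.0.1"))  # False (even address)
```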

    So in answer to the first question: CIDR imposed a stricter (and sensible) limit on valid IP subnet/mask combinations, so if CIDR cannot be assumed, then it would be required to store both the IP subnet and the subnet mask, since the mask bits might not be contiguous.

    For all modern hardware in the last 15-20 years, CIDR subnets are basically assumed. So this is really a non-issue.

    For the second question, the router does in fact store only the necessary bits to match a routing table entry, at least for hardware appliances. Routers use what’s known as a TCAM (ternary content-addressable memory) for routing tables, where the bitwise AND operation can be performed, but with a twist.

    Suppose we’re storing a route for 10.0.42.0/24. The subnet size indicates that the first 24 bits must match a prospective destination IP address. And the remaining 8 bits don’t matter. TCAMs can store 1’s and 0’s, but also X’s (aka “don’t cares”) which means those bits don’t have to match. So in this case, the TCAM entry will mirror the route’s first 24 bits, then populate the rest with X’s. And this will precisely match the intended route.
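    Here’s a rough software model of that behavior; real TCAMs are dedicated hardware that perform this comparison across all entries in parallel, so treat this purely as an illustration of the matching rule:

```python
# Model a TCAM entry as (value, care), where 0 bits in the care mask act like "X" (don't-care) cells.
import ipaddress

def tcam_entry(cidr: str):
    net = ipaddress.ip_network(cidr)
    return int(net.network_address), int(net.netmask)   # value bits, care mask

def tcam_match(addr: str, entry) -> bool:
    value, care = entry
    return (int(ipaddress.IPv4Address(addr)) & care) == value

route = tcam_entry("10.0.42.0/24")
print(tcam_match("10.0.42.17", route))  # True: first 24 bits match, last 8 are don't-care
print(tcam_match("10.0.43.17", route))  # False: a bit inside the first 24 differs
```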

    As a practical matter then, the TCAM must still be as wide as the longest possible route, which is 32 bits for IPv4 and 128 bits for IPv6. Yes, I suppose some savings could be made if a CIDR-only TCAM could conserve the X bits, but this makes little difference in practice and it’s generally easier to design the TCAM for max width anyway, even though non-CIDR isn’t supported on most routing hardware anymore.


  • To start off, I’m sorry to hear that you’re not receiving the healthcare you need. I recognize that these words on a screen aren’t going to solve any concrete problems, but in the interest of a fuller comprehension of the USA healthcare system, I will try to offer an answer/opinion to your question that goes into further depth than simply “capitalism” or “money and profit” or “greed”.

    What are my qualifications? Absolutely none, whatsoever. Although I did previously write a well-received answer in this community about the USA health insurance system, which may provide some background for what follows.

    In short, the USA healthcare system is a hodge-podge of disparate insurers and government entities (collectively “payers”), and doctors, hospitals, clinics, ambulances, and more government entities (collectively “providers”) overseen by separate authorities in each of the 50 US States, territories, tribes, and certain federal departments (collectively “regulators”). There is virtually no national-scale vertical integration in any sense, meaning that no single or large entity has the viewpoint necessary to thoroughly review the systemic issues in this “system”, nor is there the visionary leadership from within the system to even begin addressing its problems.

    It is my opinion that by bolting on short-term solutions without a solid long-term basis, the nation was slowly led to the present dysfunction, akin to boiling a frog. And this need not be through malice or incompetence, since it can be shown that even the most well-intentioned entities in this sordid and intricate pantomime cannot overcome the pressures which this system creates. Even when there are apparent winners, like filthy-rich plastic surgeons or research hospitals brimming with talented specialists, know that the toll they paid was heavy and worse than it had to be.

    That’s not to say you should have pity on all such players in this machine. Rather, I wish to point to what I’ll call “procedural ossification”, as my field of computer science has a term known as “protocol ossification”, which itself borrowed the term from orthopedics, the study of bone deformities. How very fitting for this discussion.

    I define procedural ossification as the loss of flexibility in some existing process, such that rather than performing the process in pursuit of a larger goal, the process itself becomes the goal: a mindless, rote machine where the crank is turned and the results come out, even though this wasn’t what was idealized. To some, this will harken to bureaucracy in government, where pushing papers and forms may seem more important than actually solving real, pressing issues.

    I posit to you that the USA healthcare system suffers from procedural ossification, as many/most of the players have no choice but to participate as cogs in the machine, and that we’ve now entirely missed the intended goal of providing for the health of people. To be an altruistic player is to be penalized by the crushing weight of practicalities.

    What do I base this on? If we look at a simple doctor’s office, maybe somewhere in middle America, we might find the staff composed of a lead doctor – it’s her private practice, after all – some Registered Nurses, administrative staff, a technician, and an office manager. Each of these people has particular tasks to make just this single doctor’s office work. Whether it’s supervising the medical operations (the doctor), operating and maintaining the X-ray machine (the technician), or cutting the checks to pay the building rent (the office manager), you do need all these roles to make a functioning, small doctor’s office.

    How is this organization funded? In my prior comment about USA health insurance, there was a slide which showed the convoluted money flows from payers to providers, which I’ve included below. What’s missing from this picture is how even with huge injections of money, bad process will lead to bad outcomes.

    [Image: financial flows in the US healthcare system (source linked in original)]

    In an ideal doctor’s office, every patient that walks in would be treated so that their health issues are managed properly, whether that’s fully curing the condition or controlling it to not get any worse. Payment would be conditioned upon the treatment being successful and within standard variances for the cost of such treatment, such as covering all tests to rule out contributing factors, repeat visits to reassess the patient’s condition, and outside collaboration with other doctors to devise a thorough plan.

    That’s the ideal, and what we have in the USA is an ossified version of that, horribly contorted and in need of help. Everything done in a doctor’s office is tracked with a “CPT/HCPCS code”, which identifies the type of service rendered. That, in and of itself, could be compatible with the ideal doctor’s office, but the reality is that the codes control payment as hard rules, not considering “reasonable variances” that may have arisen. When you have whole professions dedicated to properly “coding” procedures so an insurer or Medicare will pay reimbursement, that’s when we’ve entirely lost the point and grossly departed from the ideal. The payment tail wags the doctor dog.

    To be clear, the coding system is well intentioned. It’s just that its use has been institutionalized into only ever paying out if and only if a specific service was rendered, with zero consideration for whether this actually advanced the patient’s treatment. The coding system provides a wealth of directly-comparable statistical data, if we wanted to use that data to help reform the system. But that hasn’t substantially happened, and when you have fee-for-service (FFS) as the base assumption, of course patient care drops down the priority list. Truly, the acronym is very fitting.

    Even if the lead doctor at this hypothetical office wanted to place patient health at the absolute forefront of her practice, she will be without the necessary tools to properly diagnose and treat the patient, if she cannot immediately or later obtain reimbursement for the necessary services rendered. She and her practice would have to absorb costs that a “conforming” doctor’s office would not have, and that puts her at a further disadvantage. She may even run out of money and have to close.

    The only major profession that I’m immediately aware of which undertakes unknown costs with regularity, in the hopes of a later full-and-worthwhile reimbursement, is the legal profession. There, it is the norm for personal injury lawyers to take cases on contingency, meaning that the lawyer will eat all the costs if the lawsuit does not ultimately prevail. But if the lawyer succeeds, then they earn a fixed percentage of the settlement or court judgement, typically 15-22%, to compensate for the risk of taking the case on contingency.

    What’s particularly notable is that lawyers must have a good eye to only accept cases they can reasonably win, and to decline cases which are marginal or unlikely to cover costs. This heuristic takes time to hone, but a lawyer could start by being conservative with cases accepted. The reason I mention this is because a doctor-patient relationship is not at all as transactional as a lawyer-client relationship. A doctor should not drop a patient because their health issues won’t allow the doctor to recoup costs.

    The notion that an altruistic doctor’s office can exist sustainably under the FFS model would require said doctor to discard the final shred of decency that we still have in this dysfunctional system. This is wrong in a laissez-faire viewpoint, wrong in a moral viewpoint, and wrong in a healthcare viewpoint. Everything about this is wrong.

    But the most insidious problems are those that perpetuate themselves. And because all those aforementioned payers, providers, and regulators are merely existing and cannot themselves take the initiative to unwind this mess, it’s going to take more than a nudge from outside to make actual changes.

    As I concluded my prior answer on USA health insurance, I noted that Congressional or state-level legislation would be necessary to deal with spiraling costs for healthcare. I believe the same would be required to refocus the nation’s healthcare procedures to put patient care back as the primary objective. This could come in the form of a single-payer model. Or by eschewing insurance pools outright by extending a government obligation to the health of the citizenry, commonly in the form of a universal healthcare system. Costs of the system would become a budgetary line-item so that the health department can focus its energy on care.

    To be clear, the costs still have to be borne, but rather than fighting for reimbursement, it could be made into a form of mandatory spending, meaning that they are already authorized to be paid from the Treasury on an ongoing basis. For reference, the federal Medicare health insurance system (for people over 65) is already a mandatory spending obligation. So upgrading Medicare to universal old-people healthcare is not that far of a stretch.



  • Thank you for that detailed description. I see two things which are of concern: the first is the IPv6 “network unreachable” error. The second is the lost IPv4 connection, as opposed to a rejection.

    So starting in order: the machine on the external network that you’re running curl on, does it have a working IPv6 stack? As in, if you opened a web browser to https://test-ipv6.com/ , does it pass all or most tests? An immediate “network is unreachable” suggests that the external machine doesn’t have IPv6 connectivity, which doesn’t help debug what’s going on with the services.

    Also, you said that all services that aren’t on port 80 or 443 are working when viewed externally, but do you know if that was with IPv4 or IPv6? I use a browser extension called IPvFoo to display which protocol the page has loaded with, available for Chrome and Firefox. I would check that your services are working over IPv6 equally well as IPv4.

    Now for the second issue. Since you said all services except those on port 80, 443 are reachable externally, that would mean the IP address – v4 or v6, whichever one worked – is reachable but specifically ports 80 and 443 did not.

    On a local network, the norm (for properly administered networks) is for OS firewalls to REJECT unwanted traffic – I’m using all-caps simply because that’s what I learned from Linux iptables. A REJECT means that the packet was discarded by the firewall, and then an ICMP notification is sent back to the original sender, indicating that the firewall didn’t want it and the sender can stop waiting for a reply.

    For WANs, though, the norm is for an external-facing firewall to DROP unwanted traffic. The distinction is that DROPping is silent, whereas REJECT sends the notification. For port forwarding to work, both the firewall on your router and the firewall on your server must permit ports 80 and 443 through. It is a very rare network that blocks outbound ICMP messages from a LAN device to the Internet.
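    One way to see the difference from the outside, as a quick sketch (the hostname and port are placeholders, swap in your own): a REJECTed connection fails fast with “connection refused,” while a DROPped one just times out.

```python
# Probe a TCP port and report whether it looks open, REJECTed, or silently DROPped.
import socket

def probe(host: str, port: int, timeout: float = 5.0) -> str:
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect((host, port))
        return "open"
    except ConnectionRefusedError:
        return "closed: something answered with a rejection (RST or ICMP came back)"
    except socket.timeout:
        return "filtered: no reply at all before the timeout (looks like a DROP)"
    finally:
        s.close()

print(probe("example.com", 443))  # replace with your own hostname and port
```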

    With all that said, I’m led to believe that your router’s firewall is not honoring your port-forward setting. Because if it did, and your server’s firewall discarded the packet, it probably would have been a REJECT, not a silent drop. But curl showed your connection timed out, which usually means no notification was received.

    This is merely circumstantial, since there are some OS’s that will DROP even on the LAN, based on misguided and improper threat modeling. But you will want to focus on the router’s firewall, as one thing routers often do is intercept ports 80 and 443 for the router’s own web UI. Thus, you have to make sure there aren’t such hidden rules that preempt the port-forwarding table.


  • I’m still trying to understand exactly what you do have working. You have other services exposed by port numbers, and they’re accessible in the form <user>.duckdns.org:<port> with no problems there. And then you have Jellyfin, which you’re able to access at home using https://jellyfin.<user>.duckdns.org without problems.

    But the moment you try accessing that same URL from an external network, it doesn’t work. Even if you use HTTP with no S, it still doesn’t connect. Do I understand that correctly?


  • I know this is c/programmerhumor but I’ll take a stab at the question. If I may broaden the question to include collectively the set of software engineers, programmers, and (from a mainframe era) operators – but will still use “programmers” for brevity – then we can find examples of all sorts of other roles being taken over by computers or subsumed as part of a different worker’s job description. So it shouldn’t really be surprising that the job of programmer would also be partially offloaded.

    The classic example of computer-induced obsolescence is the job of typist, where a large organization would employ staff to operate typewriters to convert hand-written memos into typed documents. Helped by the availability of word processors – no, not the software but a standalone appliance – and then the personal computer, the expectation moved to where knowledge workers have to type their own documents.

    If we look to some of the earliest analog computers, built to compute differential equations such as for weather and flow analysis, a small team of people would be needed to operate and interpret the results for the research staff. But nowadays, researchers are expected to crunch their own numbers, possibly aided by a statistics or data analyst expert, but they’re still working in R or Python, as opposed to a dedicated person or team that sets up the analysis program.

    In that sense, the job of setting up tasks to run on a computer – that is, the old definition of “programming” the machine – has moved to the users. But alleviating the burden on programmers isn’t always going to be viewed as obsolescence. Otherwise, we’d say that tab-complete is making human-typing obsolete lol



  • It’s also worth noting that switching from ANSI to ISO 216 paper would not be a substantial physical undertaking, as the short-side of even-numbered ISO 216 paper (eg A2, A4, A6, etc) is narrower than for ANSI equivalents. And for the odd-numbered sizes, I’ve seen Tabloid-size printers in America which generously accommodate A3.

    For comparison, the standard “Letter” paper size (aka ANSI A) is 8.5 inches by 11 inches. (note: I’m sticking with American units because I hope Americans read this). Whereas the similar A4 paper size is 8.3 inches by 11.7 inches. Unless you have the rare, oddball printer which takes paper long-edge first, this means all domestic and small-business printers could start printing A4 today.

    In fact, for businesses with an excess stock of company-labeled #10 envelopes – a common size of envelope, measuring 4.125 inches by 9.5 inches – a sheet of A4 folded into thirds will still (just barely) fit. Although this would require precision folding, that’s no problem for automated letter mailing systems. Note that the common #9 envelope (3.875 inches by 8.875 inches) used for return envelopes will not fit an A4 sheet folded in thirds. It would be advisable to switch entirely to A series paper and C series envelopes at the same time.
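    If you want to double-check the fit, here’s the arithmetic spelled out (dimensions in inches, with A4 taken as 8.27 × 11.69):

```python
# Does an A4 sheet folded in thirds fit a #10 envelope? A #9?
a4_w, a4_h = 8.27, 11.69
folded = (a4_w, a4_h / 3)        # tri-fold: about 8.27 x 3.90
no10 = (9.5, 4.125)              # standard #10 envelope
no9  = (8.875, 3.875)            # standard #9 (return) envelope

def fits(sheet, envelope):
    return sheet[0] <= envelope[0] and sheet[1] <= envelope[1]

print(fits(folded, no10))  # True: just under a quarter inch of height to spare
print(fits(folded, no9))   # False: 3.90 > 3.875, the tri-folded A4 is a hair too tall
```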

    Confusingly, North America has an A-series of envelopes, which bear no relation to the ISO 216 paper series. Fortunately, the overlap is only for the less-common A2, A6, and A7.

    TL;DR: bring reams of A4 to the USA and we can use it. And Tabloid-size printers often accept A3.