• Steve@communick.news · 11 months ago

    You can’t legally control whether or not someone recognizes you on the street. There is no right to privacy in public spaces. It’s the principle that protects people filming the police, or any other important public event.

    • Amerikan Pharaoh@lemmygrad.ml · 11 months ago

      Counterpoint: You don’t get to scrape my face to feed it into a settler-written algorithm that can’t be trusted not to misidentify the faces of my race; and you especially don’t get to expect I won’t thwart those attempts every time I have to leave my house. My consent to that is not given, and unfortunately, I don’t have a choice in whether or not I have to leave my house sometimes.

      • Steve@communick.news · 11 months ago

        Those are both fantastic points, but separate issues.

        The effectiveness of any individual AI is a separate question from the ethics of the concept in general. Any specific implementation might not be reliable enough to depend on, but that has to be judged implementation by implementation. Perhaps the real discussion is about the reliability threshold any implementation would need to meet in order to be used.

        Also, another person’s right to photograph and identify you while in public is entirely separate from, and doesn’t affect, your right to try to conceal your identity.

        Even both of those don’t touch the issue of keeping your image and using your likeness to improve the AI product. (Which is something literally all of them ignore, along with every other copyright issue.) Since you didn’t give them the rights to use your likeness for that (or any) purpose, it would be unethical, and already illegal, for them to do so.

    • CrypticCoffee@lemmy.world · 11 months ago

      I don’t see why not. There is a difference between the off chance of someone noticing you and cameras recognising your face with high accuracy, able to track your location, the places you visited, and who you were with, for every minute of every day.

      • Steve@communick.news · 11 months ago

        the places you visited, and who you were with, for every minute of every day

        Well now that’s a different proposition.

        Now you aren’t simply in a public place being photographed and identified by AI. Now you’re actively being monitored and tracked. That’s more like a person stalking you. That may be unethical, depending on who’s tracking you and why. Basically, unless it’s law enforcement of some kind with a specific warrant to track your location, it wouldn’t be ethical.

        • CrypticCoffee@lemmy.world · 11 months ago

          The thing is, most Western governments are pushing towards facial recognition and monitoring without the need for a warrant. Most countries are already stacked to the eyeballs with CCTV (the UK, for example), and hooking that into facial recognition is dangerous. First they start off with it being for terrorists, then paedophiles, then other criminals, but ultimately it’s monitoring everyone to track down a few. When you have that infrastructure in place and you don’t have sufficient oversight, it can soon be tweaked towards activist groups, then opposition groups, etc.

          You have to challenge it before the infrastructure goes in, because after it’s in, it’s already too late.

          • Steve@communick.news · 11 months ago

            While that is all true, it’s effectively saying nothing more than, “The misuse of a technology is unethical,” which I think we can all agree on. Yet so many people are pointing out obvious examples of abuse as arguments against the tech itself.

            The original question was only about the technology itself, which is only an interesting ethical question if we assume it’s being used appropriately.