Police in England installed an AI camera system along a major road. It caught almost 300 drivers in its first 3 days: 180 seat belt offenses and 117 mobile phone offenses.

  • Max_Power · 1 year ago

    Photos flagged by the AI are then sent to a person for review.

    If an offense was correctly identified, the driver is then sent either a notice of warning or intended prosecution, depending on the severity of the offense.

    The AI just “identifying” offenses is the easy part. It would be interesting to know whether the AI indeed correctly identified 300 offenses or if the person reviewing the AI’s images acted on 300 offenses. That’s potentially a huge difference and would have been the relevant part of the news.

      • ZephrC · 1 year ago

        Nobody cares about false negatives. As long as the number isn’t so massive that the system is completely useless, false negatives in an automated system are not a problem.

        What are the false positives? Every single false positive is a gross injustice. If you can’t come up with a number for that, then you haven’t even evaluated your system.

        • @tmRgwnM9b87eJUPq@lemmy.world · 1 year ago

          The system works by having the AI flag phone usage while driving.

          Then a human will verify the photo.

          The AI is used to respect people’s privacy: only the photos it flags are ever seen by a human.

          The combination of AI detection and human review leads to a 5% false negative rate and, most probably, a 0% false positive rate.

          This means the AI missed at most 5% of actual offenses, and probably fewer, since the human reviewer will dismiss a flagged photo whenever they cannot be completely sure there was an offense.

          • ZephrC · 1 year ago

            Look, I’m not saying it’s a bad system. Maybe it’s great. “Most probably 0%” is meaningless though. If all you’ve got is gut feelings about it, then you don’t know anything about it. Humans make mistakes in the best of circumstances, and they get way, way worse when you’re telling them that they’re evaluating something that’s already pretty reliable. You need to know it’s not giving false positives, not have a warm fuzzy feeling about it.

            Again, I don’t know if someone else has already done that. Maybe they have. I don’t live in the Netherlands. I don’t trust it until I see the numbers that matter though, and the more numbers that don’t matter I see without the ones that do, the less I trust it.

            • @tmRgwnM9b87eJUPq@lemmy.world · 1 year ago

              The fine contains a letter, a picture and payment information. If the person really wasn’t using their phone, they can file a complaint and the fine will be dismissed. Seems pretty simple to me.

              However, I have not heard any complaints about it in the news, and an embarrassing number of fines have been issued for this offense.

              • ZephrC · 1 year ago

                For a post on a site like this that kind of anecdote is plenty to add to a conversation, and it does actually make me feel a tiny bit better about the whole thing, but when you lead with statistics you’re implying a level of research and knowledge that goes beyond just anecdotal. It’s not really fair to you or any of us, but using the numbers that sound good to avoid using the ones that reveal flaws is one of the most popular ways for marketing teams and governments to deceive people. You should always be skeptical of that kind of thing.

              • @CalvinCopyright@lemmy.world · 1 year ago

                Heh. Heh heh. You think that you can… file a complaint, and get a fine dismissed just like that. Heh heh heh. God, you’re naive. Or stupid. Or a paid propagandist. Or just plain rich enough for your reaction to a fine to be ‘meh’.

                Criminality is predicated on convenience. If it’s easy for an authority to hand out fines and hard for the populace to get those fines dismissed, guess what’s going to happen? There’s going to be fines applied that shouldn’t have been, but that the people who are getting fined literally can’t put in the effort to get dismissed. And that’s not justice in the slightest. ‘Innocent until proven guilty’, you troll. Heard that phrase before??

                • @tmRgwnM9b87eJUPq@lemmy.world · 1 year ago

                  Just wow.

                  I bet you do not live in The Netherlands. We have a standardized process to complain against a fine.

                  If the picture doesn’t prove with certainty that you were holding a phone, complain to the address in the letter or just don’t pay the €359 fine and talk to a judge about it.

      • Tywèle [she|her] · 1 year ago

        How do they know that they caught 95% of all offenders if they didn’t catch the remaining 5%? Wouldn’t that be unknowable?

        • @lasagna@programming.dev · 1 year ago

          Welcome to the world of training datasets.

          There are many ways to go about it, but for a limited sample they’d probably use human analysts.

          But in general, they’d put a lot more effort into a chunk of data and use that as the truth. It’s not a perfect method but it’s good enough.

        • @Hamartiogonic@sopuli.xyz · 1 year ago

          The article didn’t really clarify that part, so it’s impossible to tell. My guess is, they tested the system by intentionally driving under it with a phone in hand 100 times. If the camera caught 95 of those, that’s how you would get the 95% catch rate. That setup has a priori information about the true state of the driver, but testing takes a while.

          However, that’s not the only way to test a system like this. They could have tested it with normal drivers instead. To borrow a medical term, you could say that this is an “in vivo” test. If they did that, there was no a priori information about the true state of each driver. They could still report a 95% value, though it would mean something different: 95% of the positives were human-verified to be true positives and the remaining 5% were false positives. In a setup like that we have no information about true or false negatives, so this kind of test has some limitations. I guess you could count the number of cars labeled negative, but we just can’t know how many of them were true negatives unless you get a bunch of humans to review an inordinate amount of footage. Even then you still wouldn’t know for sure, because humans make mistakes too.

          In practical terms, it would still be a really good test, because you can easily have thousands of people drive under the camera within a very short period of time. You don’t know anything about the negatives, but do you really need to? This isn’t a diagnostic test where you need to calculate sensitivity, specificity, positive predictive value and negative predictive value. I mean, it would be really nice if you did, but do you really have to?
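          To make the distinction concrete, here is a tiny sketch with made-up numbers (the article gives no breakdown) showing how both test setups can report “95%” while measuring different things:

```python
# Made-up numbers for illustration only; the article gives no breakdown.

# Setup 1: controlled test (staged drive-bys, ground truth known).
staged_offenses = 100        # passes made with a phone in hand
ai_caught = 95               # of those, flagged by the camera
sensitivity = ai_caught / staged_offenses   # "caught 95% of offenders"

# Setup 2: field test (only AI-flagged photos get human review).
ai_flags = 300               # photos the AI sent to a reviewer
confirmed = 285              # reviewer agreed an offense occurred
precision = confirmed / ai_flags            # "95% of flags were real"

# Both come out to 0.95, but sensitivity needs ground truth for every pass,
# while precision says nothing about the offenders the AI never flagged.
print(sensitivity, precision)
```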

          • @tmRgwnM9b87eJUPq@lemmy.world · 1 year ago

            Just to clarify the result: the article states that AI plus human review leads to 95%.

            It could also be that the human reviewer is rejecting actual positives, found by the AI, as false positives.

          • Echo Dot · 1 year ago

            You wouldn’t need people to actually drive past the camera; you could do that in testing while the AI was still in development, entirely in software, without the physical hardware.

            You could just get CCTV footage from traffic cameras and feed that into the AI system. Then you could have humans go through it independently of the AI and tag any infraction they saw. If the AI system gets 95% of the human-spotted infractions, then the system is 95% accurate. Of course this ignores the possibility that both the human and the AI miss something, but that would be impossible to calculate for.
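            A sketch of that scoring step, with invented frame IDs standing in for where each pass spotted an infraction:

```python
# Invented frame IDs for illustration: where each pass saw an infraction.
human_tagged = {101, 205, 233, 310, 442, 518, 611, 699, 720, 804}
ai_flagged = {101, 205, 233, 310, 442, 518, 611, 699, 720, 999}

caught = human_tagged & ai_flagged                   # both agreed on these
recall_vs_humans = len(caught) / len(human_tagged)   # 9/10 = 0.9 here
missed = human_tagged - ai_flagged                   # only the humans saw these
spurious = ai_flagged - human_tagged                 # flags humans didn't confirm

# The caveat above still applies: anything both the humans and the AI
# missed never appears in either set, so it can't be counted from this data.
print(recall_vs_humans, sorted(missed), sorted(spurious))
```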

            • @Hamartiogonic@sopuli.xyz · 1 year ago

              That’s the sensible way to do it in the early stages of development. Once you’re reasonably happy with the trained model, you need to test the entire system to see if the parts actually work together. At that point, it could be sensible to run the two types of experiments I outlined. Different tests for different stages.

        • @jopepa@lemmy.world · 1 year ago

          I think they mean that 95% of the reports were correct. There could be a massive population of other offenders that continue sexting and driving, or worse. One monocam won’t ever be enough; we need many monocams. Polymonocams.

        • @tmRgwnM9b87eJUPq@lemmy.world · 1 year ago

          I suspect they sent through a controlled set of cars where they tested all kinds of scenarios.

          The other option would be to do a human review after running it for a day.

    • @MotoAsh@lemmy.world · 1 year ago

      but digging out that info would involve journalism and possibly reporting something the cops wouldn’t like! We all know how that goes.