• 7 Posts
  • 1.71K Comments
Joined 1 year ago
Cake day: August 24th, 2023

  • even if you CAN rely solely on vision, why hamstring yourself?

    Their stance is that by using lidar, OEMs hamstring themselves on solving vision because they become so reliant on lidar. They spend less time and resources perfecting vision, so they never truly solve the problem. From their perspective, you’ve got it backwards.

    and there’s no good reason… just add extra sensors

    The more sensors you deal with, the more your attention gets divided. You aren’t laser focused on one thing.

    The extra sensors also cost a lot of money; you can’t put Waymo’s sensor package onto millions of consumer cars when the suite costs tens of thousands of dollars (and originally well over $100k).

    By focusing on vision, where the system can be put onto millions of cars, you can get massive amounts of extra training data, and training data is going to be a huge part of solving this problem.

    You might not like the reasons, or their stance, but it’s not such an unreasonable position to take. Mobileye even cancelled their next-gen lidar project after seeing improvements in vision and radar. What happens when they keep seeing improvements in vision and eventually radar isn’t needed either?

    I don’t know if you’ve ever used AP, but all the crazy headlines you see about it are idiots in cars being idiots. As an L2 vision-only system it works very well. If people wanna blame Elon for convincing people to be idiots, sure, you can do that, but that has nothing to do with the actual technological approach they’re taking. They’re two different things.


  • Texas killing this child for losing a pregnancy is akin to them having you roll a 5 sided dice and shooting anyone who lands on a “4” between the eyes.

    Akin to

    very similar to something

    Texas killing this child for losing a pregnancy is very similar to having you roll a 5 sided dice and shooting anyone who lands on a “4” between the eyes.

    You’re equating the 1 in 5 pregnancies that end in miscarriage with a 1 in 5 chance of death, but a miscarriage does not carry that chance of death, so it is not very similar to being shot for rolling a “4” on a 5-sided die, which is a 1 in 5 chance of death.

    Edit: Just cleaning this up as what I wrote got confusing…

    You’re saying that 1 in 5 pregnancies end in a miscarriage (20%) and equating a miscarriage that happens 1/5 of the time with being shot 1/5 of the time, which would be death. But an (edit: untreated) miscarriage doesn’t mean death. So it is not very similar to having a 1/5 chance of death by being shot.

    Maybe you don’t know what you wrote?


  • NotMyOldRedditName@lemmy.world to Technology@lemmy.zip · *Permanently Deleted*
    2 days ago

    The point is that when vision is the only fail-safe, reliable sensor, vision MUST work for the vehicle to be truly autonomous.

    You can’t rely on radar without vision or lidar because it can’t see stopped vehicles at high speed. This is a deadly serious problem.

    You can’t rely on lidar in rain/fog/snow/dust because the light bounces off of the particles and gives bad data, plus it can’t tell you anything about what the object is or might intend to do, only that it’s there.

    Only vision can do all of those; it’s just a matter of the number of cameras, camera quality, and AI processing capability.

    If vision can do all those things perfectly, maybe you don’t need those other sensors after all?

    If vision can’t do it, then we won’t have a truly autonomous future.

    The other sensors are a crutch because the vision problem is so hard.
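
    To make the trade-off above concrete, here’s a toy sketch in Python. The capability flags just mirror the claims in this thread (radar missing stopped vehicles at high speed, lidar struggling in rain/fog/snow/dust and giving no object semantics); they’re illustrative assumptions, not measured spec data.

    ```python
    # Toy encoding of the sensor trade-offs argued above.
    # The flags mirror the comment's claims, not real sensor specs.
    SENSORS = {
        "radar":  {"stopped_vehicles_at_speed": False, "bad_weather": True,  "object_semantics": False},
        "lidar":  {"stopped_vehicles_at_speed": True,  "bad_weather": False, "object_semantics": False},
        "vision": {"stopped_vehicles_at_speed": True,  "bad_weather": True,  "object_semantics": True},
    }

    def covers_everything(sensor: str) -> bool:
        """True if a single sensor covers every capability listed above."""
        return all(SENSORS[sensor].values())

    for name in SENSORS:
        print(f"{name:7s} covers everything on its own: {covers_everything(name)}")
    # Only "vision" prints True, which is the commenter's point: if vision has
    # to work anyway, it's the one sensor you can't skip.
    ```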


  • Tesla uses lidar to help calibrate and validate the vision as well; it’s just not needed on the consumer vehicles to do that part.

    You’ll occasionally see people post photos of them though.

    Edit: just to clarify, it’s helpful to ensure things match, but once you’re confident they match, it’s not something every vehicle needs; it’s just something you need to keep an eye on with some test vehicles. Waymo is completely reliant on lidar, though, as it’s used as a primary sensor, but yes, it can also help validate their future vision.


  • Tesla and Waymo are trying to solve the same problem but two different ways.

    Waymo chose the more expensive but easier option, but it also limits their scope and scalability.

    Tesla chose the cheaper option, but it’s much harder to solve, if they can even solve it with today’s technology.

    Ignoring that, even if Tesla had a viable solution, they would still be years behind Waymo, as even in a best-case scenario there are going to be a lot of hoops to jump through before they can be operating like Waymo is today.

    The difference is that IF Tesla can solve the problem they are trying to solve, they’ll be able to operate anywhere in North America, and it’ll happen at the flip of a switch, whether for all their cars with HW3/HW4, only cars with HW4, or, in the next few years, only cars with HW5; but all future cars from that point forward would have the capability.

    Tesla also makes their own cars, so they can do this at cost, while Waymo has to purchase them from a partner, which means there’s a markup, and their sensor suite is very expensive.

    Someone on another thread mentioned how they wished their car had radar since it can detect everything, to which I replied that it can’t detect stationary objects at high speed. If he ever replies, I imagine he’s going to say that’s what lidar is for, except lidar doesn’t work well in rain, fog, snow, or dust. That leaves vision.

    Waymo’s current tech/fleet won’t ever be able to operate as an L5 vehicle (everywhere, all the time) unless they solve vision the same way Tesla has to solve vision, or we have some breakthroughs in radar/lidar tech to bypass their weaknesses, or we create a new sensor technology entirely that solves the problems of the others.

    Waymo can operate as an L4 today, though, thanks to all these extra sensors.

    Tesla may never solve the problem, but they are supposedly the furthest ahead in the vision game, which is crucial.




  • The cameras have overlapping fields of view, which can be used to measure depth and distance.

    There are multiple front cameras.

    The side pillar cameras overlap with the side rear-facing cameras.

    The two side rear-facing cameras each overlap with the rear camera.

    Edit: I imagine their weakest depth/distance perception with the current setup would be the side pillar cameras, but they could also probably do some calculations based on how fast an object passes from the front cameras’ view to the rear’s. (A rough sketch of how overlap gives depth is below.)
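
    For illustration, here’s a minimal depth-from-disparity sketch of the overlap idea: two cameras with overlapping views see the same feature shifted by some number of pixels, and that shift plus the spacing between the cameras gives the distance. The focal length, baseline, and pixel shift below are made-up numbers, not Tesla’s actual camera parameters.

    ```python
    # Classic pinhole-stereo relation for two overlapping cameras:
    #   depth Z = f * B / d
    # f = focal length in pixels, B = baseline between cameras in metres,
    # d = disparity, i.e. how many pixels the same feature shifts between views.
    # All numbers here are illustrative, not real camera specs.

    def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
        if disparity_px <= 0:
            raise ValueError("feature must appear shifted between the two views")
        return focal_px * baseline_m / disparity_px

    # Example: 1000 px focal length, cameras 0.3 m apart, feature shifted 15 px
    print(depth_from_disparity(1000.0, 0.3, 15.0))  # -> 20.0 metres away
    ```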



  • NotMyOldRedditName@lemmy.world to Technology@lemmy.zip · *Permanently Deleted*
    3 days ago

    Nothing you said there can’t be done by cameras, other than sound, and the car has a microphone inside. We just might not have the capabilities yet and need to keep improving them.

    All it really means is maybe the car needs more cameras and more microphones.

    Determining distance from images taken at multiple angles can be accurate; it’s basically triangulation (a rough sketch is below).
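
    A minimal triangulation sketch, assuming two cameras at known positions that each report a bearing to the same object; intersecting the two rays gives the object’s position and therefore its distance. The positions and angles are made up for the example.

    ```python
    import math

    def triangulate(cam1, angle1, cam2, angle2):
        """Intersect two 2D rays (origin + bearing in radians) to locate a point."""
        x1, y1 = cam1
        x2, y2 = cam2
        d1 = (math.cos(angle1), math.sin(angle1))
        d2 = (math.cos(angle2), math.sin(angle2))
        denom = d1[0] * d2[1] - d1[1] * d2[0]
        if abs(denom) < 1e-9:
            raise ValueError("rays are parallel, no unique intersection")
        # Solve cam1 + t*d1 == cam2 + s*d2 for t (Cramer's rule)
        t = ((x2 - x1) * d2[1] - (y2 - y1) * d2[0]) / denom
        return (x1 + t * d1[0], y1 + t * d1[1])

    # Two cameras 2 m apart, both sighting the same object at 45° and 135°
    point = triangulate((0.0, 0.0), math.radians(45), (2.0, 0.0), math.radians(135))
    print(point)  # -> approximately (1.0, 1.0), i.e. about 1.4 m from either camera
    ```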