New research shows driverless car software is significantly more accurate with adults and light skinned people than children and dark-skinned people.
Wow. That’s all kinds of incorrect.
It is absolutely training-data bias. Whether it is the data the ML was trained on or the data the programmers were trained on is irrelevant. This is a problem of the computer not recognizing that a human is a human.
It is not. See below:
No, not if the scale of your hardware is correct. A 3’ tall human may be smaller than a 6’ one, but it is larger than a 10” traffic light lens or a 30” stop sign. The systems do not have quite as much trouble recognizing those smaller objects. This is a problem of the computer not recognizing that the human is a human.
Also no. If that were the case, then we would have problems with collision bias against darker vehicles, or with recognizing the black asphalt of the road. Optical systems do not rely on the absolute signal strength of an object; they rely on contrast. A darker skin tone would only have low contrast against a background of a similar shade, and that doesn’t even account for clothing, which usually covers most of a person’s body. Again, this is a problem of the computer not recognizing that the human is a human.
No, they have different signals. Each signal needs to be compared to the background to determine whether it exists and where it is, and then compared to the dataset to determine what it is. This is still a problem of the computer not recognizing that the human is a human.
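That detect-then-classify split can be sketched as a toy illustration (the thresholds, feature values, and function names here are made up for the sake of the argument, not any real vision stack):

```python
import numpy as np

def detect_by_contrast(patch_mean, background_mean, threshold=0.1):
    """Detection keys on contrast (relative difference from the background),
    not on absolute brightness, so a dark object on light asphalt is just as
    detectable as a light object on dark asphalt."""
    contrast = abs(patch_mean - background_mean) / max(background_mean, 1e-6)
    return contrast > threshold

def classify(feature, labeled_examples):
    """Classification compares the detected object against labeled training
    examples. If the 'human' examples skew toward one appearance, other
    humans match poorly -- this is where the dataset bias lives."""
    best = min(labeled_examples, key=lambda ex: abs(ex[0] - feature))
    return best[1]

# Dark object, light background -- and the reverse -- both detect fine:
print(detect_by_contrast(0.2, 0.6))   # True
print(detect_by_contrast(0.6, 0.2))   # True
print(detect_by_contrast(0.55, 0.6))  # False: low contrast either way

# Toy "dataset" of (feature, label) pairs:
examples = [(0.25, "human"), (0.9, "stop sign")]
print(classify(0.3, examples))        # "human"
```

The point of the sketch: the first step depends only on contrast with the surroundings, while the second step depends entirely on what the training set contains, which is why the failure described here lands in the second step.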
Close, but not quite.
This is a problem of the computer not recognizing that the human is a human.