• GuinnessChocolates@sh.itjust.works
    1 year ago

    This does seem to be a big revelation to ML/AI types. I was at an NLP conference last year, and one of the data scientists was reporting back on some other conference they’d been to, where they had worked out that you don’t get better performance by running your data through a dozen different models, but by running better data through a single model.

    I guess @[email protected] is right that you need to confirm these things but it definitely spoke to quite a naive mindset.

    • garyyo@lemmy.world
      1 year ago

      This does seem to be a big revelation to ML/AI types

      The big revelation is not that, but how to go about doing it for the very specific set of circumstances we are currently dealing with. Just a cursory search on Google Scholar shows that even 20 years ago it was known that good data leads to better predictive models, but with more advanced models come more complications as to what constitutes better-quality data. The paper I linked is concerned with random errors, which is important for simple regression models (which were state of the art back then) but is now pretty obvious, and no one is really concerned about it anymore. The TinyStories paper I mentioned in a previous comment is not so much concerned with better-quality data as with what changes we need to make to the data used to train a model (like generating synthetic data consisting of short stories that only contain words a typical 3- to 4-year-old usually understands) to make it better suited to getting that model to exhibit certain features (producing fluent, consistent stories several paragraphs long that are diverse, have almost perfect grammar, and demonstrate reasoning capabilities), along with how tweaking certain parameters affects this. We have known that better-quality teaching leads to better-quality learning in humans, yet we are still actively researching how to better teach students.
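      The vocabulary-restriction idea described above can be sketched roughly like this (a toy illustration only; the word list and function names are made up for the example, not taken from the TinyStories paper):

      ```python
      # Toy sketch: keep only synthetic stories whose words all fall within a
      # small "young child" vocabulary, in the spirit of the TinyStories
      # data-construction idea. This tiny word set is a stand-in, not the
      # actual vocabulary used in the paper.
      SIMPLE_VOCAB = {
          "the", "a", "dog", "cat", "ran", "sat", "happy", "big",
          "little", "and", "was", "saw", "to", "play", "home",
      }

      def is_simple_story(story: str, vocab: set = SIMPLE_VOCAB) -> bool:
          """Return True if every word in the story is in the vocabulary."""
          words = story.lower().replace(".", "").replace(",", "").split()
          return all(w in vocab for w in words)

      stories = [
          "The little dog ran home.",
          "The quantum annealer converged asymptotically.",
      ]
      # Only the first story survives the vocabulary filter.
      filtered = [s for s in stories if is_simple_story(s)]
      ```

      The point is that the filter changes *what* the model sees, not *how many* models the data is run through.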

      I get that this is a complicated topic, but to represent it as “this is obvious, idk why the scientists think this is new” is just being willfully ignorant of what is actually happening. It is never as simple as “running better data through a single model”, especially considering that ensemble learning is a pretty well-established means of getting better outputs from the same exact data (which matters because it is very difficult to get better-quality data). If you want to actually know what is going on, rather than listening to a reinterpretation of (what is probably) a single work from a single conference by a participant of that conference (and not the author of that work), just read the paper. For context, the TinyStories paper is 27 pages long; a one-sentence summary is not going to give you the full detail.
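      The ensemble-learning point above can be illustrated with a minimal majority-vote sketch (the “models” here are trivial threshold rules, invented only to show how combining several predictors on the exact same input can work):

      ```python
      # Minimal sketch of ensemble learning: combine several weak predictors
      # on the *same* data by majority vote. Real ensembles (bagging,
      # boosting, voting classifiers) are far more sophisticated; this only
      # shows the combination step.
      from collections import Counter

      def make_threshold_model(threshold: float):
          """A toy 'model': predicts 1 if the input exceeds the threshold."""
          return lambda x: 1 if x > threshold else 0

      def ensemble_predict(models, x):
          """Majority vote across the individual model predictions."""
          votes = Counter(m(x) for m in models)
          return votes.most_common(1)[0][0]

      models = [make_threshold_model(t) for t in (0.3, 0.5, 0.7)]
      # For x = 0.6, two of the three models vote 1, so the ensemble says 1.
      prediction = ensemble_predict(models, 0.6)
      ```

      The ensemble can outvote any single member’s mistake, which is why it is a standard way to squeeze better outputs out of data you cannot improve.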