Hey everyone, I’ve been searching for a bit on how to run local LLM inference to process legal paperwork (I am not a lawyer, I just have trouble working through large documents to figure out my rights). This would help me have conversations with my landlord and various other people who withhold crucial information, such as your rights during a unit inspection, or accuse you of things you did not do, etc.

Given that there are thousands of pre-trained models, would it be better to train a small model myself on an RTX 4090 or a daisy chain of other GPUs? Is there a legal archive somewhere that I’m just not seeing, or where should I direct my energy? I think lots of us could benefit from a pocket law reference that can serve as an aid for figuring out what to do next.

  • inspxtr@lemmy.world · 1 year ago

    thanks for your answer! Is this the same as or different from indexing to provide context? I saw some people ingesting a large corpus of documents/structured data, e.g. with LlamaIndex. Is that an alternative way to provide context, or is it similar?

    • dartos@reddthat.com · 1 year ago

      Indexing and tools like LlamaIndex use LLM-generated embeddings to “intelligently” search for documents similar to a search query.

      Those documents are usually fed into an LLM as part of the prompt (e.g. as context).
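
      Roughly, the retrieve-and-prompt flow looks like this. A minimal sketch, not how LlamaIndex does it internally: the `embed()` toy and the hand-written chunks are stand-ins for a real embedding model and a real document splitter.

      ```python
      import numpy as np

      def embed(text: str) -> np.ndarray:
          # Toy stand-in: hashed bag-of-words. Swap in a real embedding model
          # (e.g. a sentence-transformers model) for anything beyond a demo.
          vec = np.zeros(256)
          for word in text.lower().split():
              vec[hash(word) % 256] += 1.0
          return vec

      def cosine(a: np.ndarray, b: np.ndarray) -> float:
          return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

      # 1. Index: embed each document chunk once and keep the vectors around.
      chunks = [
          "Tenants must receive 24 hours written notice before a unit inspection.",
          "The landlord may not withhold the security deposit without an itemized list.",
      ]
      index = [(chunk, embed(chunk)) for chunk in chunks]

      # 2. Retrieve: embed the question and rank stored chunks by similarity.
      question = "How much notice do I get before a unit inspection?"
      q_vec = embed(question)
      top = sorted(index, key=lambda pair: cosine(q_vec, pair[1]), reverse=True)[:2]

      # 3. Prompt: paste the retrieved chunks in as context for the local LLM.
      prompt = "Answer using only the context below.\n\nContext:\n"
      prompt += "\n".join(chunk for chunk, _ in top)
      prompt += f"\n\nQuestion: {question}\nAnswer:"
      print(prompt)  # this string is what gets sent to the model
      ```

      The point is that the embeddings only do the searching; the model never “learns” your documents, it just reads whatever retrieved text you paste into the prompt.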