Can’t say that I struggled to understand pointers, but if GPT helped you conceptualize them, that’s good. I really don’t see much utility in even the current iterations of these LLMs. Take Copilot, for example: ultimately all it actually helps with is boilerplate, and if you’re writing enough boilerplate for it to be meaningfully helpful, you could just use a fancy IDE live template or a plain old snippet.
There are a lot of interesting things it could be doing, like checking whether my documentation is correct, but all it actually does is shit I could do myself with less hassle.
There’s also the whole issue of LLMs having no concept of anything. You aren’t having a conversation; it just spits out the words it thinks are most likely to occur in the given context. That can be helpful for extremely generic questions it’s been trained on thanks to Stack Overflow, but GPT doesn’t actually know the right answer. It’s like really fancy autocorrect based on the current context. What this means is you absolutely cannot trust anything it says unless you already know enough about the topic to judge whether what it outputs is accurate.
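To make that concrete, here’s a toy sketch in Python. This is nothing like GPT’s actual architecture or scale (the word counts here are completely made up), it’s just the “pick a likely next word for the current context” idea stripped to the bone:

```python
import random

# Toy "language model": for each context it only knows how often each next
# word followed that context in some training text. No meaning anywhere,
# just counts turned into a preference. (Counts are invented for illustration.)
counts = {
    ("the", "null"): {"pointer": 8, "hypothesis": 2},
    ("null", "pointer"): {"dereference": 5, "exception": 4, "is": 1},
}

def next_word(context, greedy=True):
    options = counts[context]
    if greedy:
        # "fancy autocorrect": just take whichever word was most frequent
        return max(options, key=options.get)
    # or sample in proportion to the counts
    return random.choices(list(options), weights=list(options.values()))[0]

print(next_word(("the", "null")))      # -> "pointer"
print(next_word(("null", "pointer")))  # -> "dereference"
```

Swap the lookup table for a few billion learned parameters and you get something that sounds fluent, but the basic move is the same: most-likely-next-word, not knowing the right answer.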
To draw a comparison to written language (hopefully you don’t know Japanese): is “I” 私 or 僕? Can you confidently rely on autocorrect to pick the right one? Probably not, because the first one, わたし (watashi), is “I” and the second, ぼく (boku), is also “I” (just more boyish). Trusting an LLM’s output without being able to verify its accuracy is like trusting autocorrect to use the right word in a language you don’t know. Sure, it’ll generally work out fine, but when it fails you don’t have the knowledge to even notice.
Because of these failings I don’t see much utility in LLMs, especially since the current obsession is chat apps aimed at the general public to fool around with.
Fucking love your example dude.
I’ve found ChatGPT3 OK for low-level stuff, but I stopped using it pretty quickly once I tried to get it to help build intermediate stuff.
If it’s making errors in simple script design, it can’t handle more.
It is fab for the basics, but I wouldn’t trust it for learning anything more complex, for exactly the reasons you said.
I’d be liable to write my own backdoors that way hahah
Thanks! A lot of people don’t seem to realize that GPT doesn’t actually have any idea what words mean. It just outputs stuff based on how likely it is to show up after the previous stuff. This results in very interesting behavior, but there’s nothing conceptually “there”: no thinking, and so no conversation to be had.