DeepSeek launched a free, open-source large language model in late December, claiming it was developed in just two months at a cost of under $6 million.
For any other AI company, most of that would be legitimate criticism of the overhype used to generate more funding. But how does any of it apply to DeepSeek, and the code & paper they released?
Yeah, it'll be exciting to see where this goes, i.e. whether it really develops into a useful tool. Though I'm slightly cautious nonetheless. It's not doing anything fundamentally different (it's still an LLM); it's just a lot cheaper and more efficient to train, and open to everyone (which is great).
Even o1 (which AFAIK is roughly on par with R1-671B) wasn't really helpful for me. I need correct answers to complex problems often (actually, all the time), and LLMs just aren't capable of delivering that.
I still need to try out whether it's possible to train it on my/our codebase, so that it's at least usable as something like GitHub Copilot (which I also don't use, because it just isn't reliable enough and too often generates bugs). Also, I'm a fast typist: by the time the answer arrives and I've parsed/read/understood the code, I've already written a better version.
What's this "if" nonsense? I loaded up a light model of it and have already put it to work.
Have you actually read my text wall?