
Well said. OpenAI's biggest contribution wasn't serving GPT-4, it was proving that big LLMs are AGIs. Rushing to gold is easy; discovering it ain't. Since we've found a correct path, low-hanging fruit is plentiful. Throw more compute at it. Work smarter not harder (optimize). Graft on simple ideas for big gainz (e.g. RAG, MoE, chain-of-thought, etc). We moved from "GPT Who?" to all of this in a year. I now run an AGI at a bookish 13-year-old's level on my MacBook. The pace of hardware/software advances is astonishing. And this is just the beginning.


AGI? Thanks for the lmao. Either that or you're trolling.


I figure AGI will need a lot of parts to it, the same way our brain does, but having the language part figured out is, I would guess, about 1/4 of what we need to solve to get there. So it's not AGI, but I would say it's a big chunk of what we need to get to AGI.


I have to disagree. I think language will be emergent from AGI, not the other way around.

LLMs are neat, but likely won't have a place in an AGI.



