Posted April 13

Researchers from top US universities warn that extending pre-training can be detrimental to performance

- Too much pre-training can deliver worse performance, due to something akin to the butterfly effect
- The more models are pre-trained, the more sensitive they become to small changes that can disrupt the end result

Researchers from Carnegie Mellon, Stanford, Harvard, and Princeton are challenging one of AI development's accepted core beliefs - that more pre-training data means better performance…