Microsoft, in collaboration with OpenAI, has been actively working on integrating AI capabilities into its wide range of products and services. As part of this effort, Microsoft Research has introduced a new AI model called Orca. Orca is designed to address the limitations of smaller models by emulating the reasoning processes of larger foundation models like GPT-4.
Orca, along with similar models, can be optimized for specific tasks and trained with the help of large language models such as GPT-4. Because of its smaller size, Orca requires fewer computing resources to operate effectively.
According to the research paper, Orca can imitate and learn from larger language models like GPT-4. It is built on a 13-billion-parameter architecture based on Vicuna. With the assistance of GPT-4, Orca can acquire step-by-step reasoning abilities, comprehend complex instructions, and provide explanations.
Microsoft leverages extensive imitation data to facilitate progressive learning in Orca. The new model has already surpassed Vicuna on zero-shot reasoning benchmarks such as BBH (Big-Bench Hard), and it outperforms conventional instruction-tuned models on AGIEval by 42%.
Despite being a relatively small model, Orca is claimed to match ChatGPT in reasoning skills, specifically on the BBH benchmark. Additionally, Orca demonstrates competence on academic exams such as the LSAT, GMAT, GRE, and SAT, although it falls short of GPT-4 in some respects.
According to the Microsoft research team, Orca can learn from advanced language models by utilizing step-by-step explanations, whether written by humans or generated by more capable AI models. The model is expected to continue improving its capabilities and skills over time.
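To make the idea concrete, the training signal described above pairs each query with a teacher model's step-by-step explanation rather than just a short answer. The sketch below shows one way such an imitation-learning record could be assembled; the function and field names are illustrative assumptions, not the paper's actual data schema.

```python
# Hypothetical sketch of an "explanation tuning" training record: a system
# instruction elicits step-by-step reasoning from a teacher model (e.g. GPT-4),
# and the student model is trained to imitate that full explanation.
# All names here are assumptions for illustration.

def build_imitation_example(system_instruction: str,
                            user_query: str,
                            teacher_response: str) -> dict:
    """Package one (instruction, query, teacher explanation) triple."""
    return {
        "system": system_instruction,       # asks the teacher to reason step by step
        "query": user_query,                # the task the student must learn to solve
        "response": teacher_response,       # the teacher's worked explanation
    }

example = build_imitation_example(
    "You are a helpful assistant. Think step by step and justify your answer.",
    "If a train travels 60 miles in 1.5 hours, what is its average speed?",
    "Speed = distance / time = 60 / 1.5 = 40. The average speed is 40 mph.",
)
print(example["response"])
```

A fine-tuning corpus of many such records is what lets a smaller student model pick up the teacher's reasoning style, not just its final answers.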