DeepSeek open-sourced DeepSeek-R1, an LLM fine-tuned with reinforcement learning (RL) to improve reasoning capability. DeepSeek-R1 achieves results on par with OpenAI's o1 model on several benchmarks, including MATH-500 and SWE-bench.
DeepSeek-R1 is based on DeepSeek-V3, a mixture-of-experts (MoE) model recently open-sourced by DeepSeek. The base model is fine-tuned using Group Relative Policy Optimization (GRPO), a reasoning-oriented variant of RL. The research team also performed knowledge distillation from DeepSeek-R1 to open-source Qwen and Llama models and released several versions of each.
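
As a rough illustration, a distilled checkpoint could be loaded and queried with the Hugging Face Transformers library. This is a minimal sketch: the model ID and generation settings below are assumptions for illustration, not names confirmed by the announcement.

```python
# Minimal sketch of running a distilled DeepSeek-R1 model with Hugging Face
# Transformers. The checkpoint name is an assumption and may differ from the
# actual released identifiers.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B"  # assumed checkpoint name

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # pick an appropriate dtype for the hardware
    device_map="auto",    # place layers on available GPU(s)/CPU automatically
)

# A simple reasoning-style prompt; the distilled models are intended for
# math/code/reasoning tasks similar to the benchmarks cited above.
prompt = "What is the sum of the first 10 positive integers? Show your reasoning."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```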