diff --git a/DeepSeek-Open-Sources-DeepSeek-R1-LLM-with-Performance-Comparable-To-OpenAI%27s-O1-Model.md b/DeepSeek-Open-Sources-DeepSeek-R1-LLM-with-Performance-Comparable-To-OpenAI%27s-O1-Model.md
new file mode 100644
index 0000000..dce3ab3
--- /dev/null
+++ b/DeepSeek-Open-Sources-DeepSeek-R1-LLM-with-Performance-Comparable-To-OpenAI%27s-O1-Model.md
@@ -0,0 +1,20 @@
+DeepSeek open-sourced DeepSeek-R1, an LLM fine-tuned with reinforcement learning (RL) to improve its reasoning capability. DeepSeek-R1 achieves results on par with OpenAI's o1 model on several benchmarks, including MATH-500 and SWE-bench.
+
+DeepSeek-R1 is based on DeepSeek-V3, a mixture-of-experts (MoE) model recently open-sourced by DeepSeek. The base model is fine-tuned using Group Relative Policy Optimization (GRPO), a reasoning-oriented variant of RL. The research team also performed knowledge distillation from DeepSeek-R1 to open-source Qwen and Llama models and released several versions of each.
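+
+GRPO estimates advantages without a learned value model: each sampled completion is scored relative to the other completions drawn for the same prompt. The sketch below illustrates that group-relative advantage computation under simple assumptions; the binary verifier-style rewards are hypothetical, and this is not DeepSeek's implementation.
+
+```python
+import torch
+
+def group_relative_advantages(rewards: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
+    # GRPO scores each completion relative to the others sampled for
+    # the same prompt: the advantage is the reward's z-score within
+    # the group, so no learned value network is required.
+    return (rewards - rewards.mean()) / (rewards.std() + eps)
+
+# Example: scalar rewards for four completions of one prompt
+# (e.g., 1.0 if a math answer verifies, 0.0 otherwise).
+rewards = torch.tensor([1.0, 0.0, 0.0, 1.0])
+print(group_relative_advantages(rewards))  # tensor([ 0.8660, -0.8660, -0.8660,  0.8660])
+```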
\ No newline at end of file