Elon Musk just took us to Mars with the release of xAI’s latest model, Grok 3! With its advanced reasoning and search capabilities, it aims to rival state-of-the-art models such as OpenAI’s o1-pro and DeepSeek-R1. Andrej Karpathy, a well-known AI researcher and former director of AI at Tesla, was given early access to Grok 3, and his initial impressions offer valuable insight into its strengths and limitations. Let’s take a closer look at his review!

What is Grok 3?
Grok 3 is xAI’s newest language model, designed to compete with the best AI models available today. It features improved reasoning abilities, a “Thinking” mode for complex problem-solving, and “DeepSearch” for enhanced web-based lookup capabilities. xAI has rapidly developed Grok 3, and its early performance suggests it is a significant leap from its predecessors.
To know more, read our detailed article on Grok 3!
Andrej Karpathy Tried Grok 3
Karpathy conducted a variety of tests to evaluate Grok 3’s problem-solving, reasoning, and search capabilities. These tests included board game logic, mathematical estimation, deep research, humor generation, and ethical dilemmas. His observations highlight both the model’s strengths and areas where improvements are needed.
Let’s look at the tasks in detail now!
Task 1: Board Game Logic (Settlers of Catan Prompt)
Prompt: “Create a board game webpage showing a hex grid, just like in the game Settlers of Catan. Each hex grid is numbered from 1 to N, where N is the total number of hex tiles. Make it generic, so one can change the number of rings using a slider.”
Observation
Grok 3 successfully generated correct HTML for a hex grid, an accomplishment that many models struggle with. This places it in the same league as OpenAI’s o1-pro, outperforming DeepSeek-R1 and Gemini 2.0 Flash Thinking.
Verdict
✅ Grok 3 was able to solve the problem.
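For context on what the prompt demands: a board with r concentric rings around a centre tile contains 3·r·(r+1)+1 hexes (19 for Catan’s standard two rings), and a generic renderer typically lays them out in axial coordinates before numbering them 1 to N. The Python sketch below illustrates just that counting and coordinate logic; it is our own illustration of the task, not Grok 3’s HTML output.

```python
# A minimal sketch (Python, not Grok 3's HTML) of the math behind the prompt:
# how many hex tiles a Catan-style board with `rings` concentric rings has,
# and the axial (q, r) coordinates a renderer would number 1..N.

def hex_count(rings: int) -> int:
    # Centered hexagonal number: 1 centre tile plus 6*k tiles in ring k.
    return 1 + 3 * rings * (rings + 1)

def axial_coords(rings: int) -> list[tuple[int, int]]:
    # All axial coordinates within `rings` steps of the centre hex.
    coords = []
    for q in range(-rings, rings + 1):
        for r in range(max(-rings, -q - rings), min(rings, -q + rings) + 1):
            coords.append((q, r))
    return coords

if __name__ == "__main__":
    for rings in (1, 2, 3):  # Catan's standard board has 2 rings -> 19 tiles
        assert hex_count(rings) == len(axial_coords(rings))
        print(rings, hex_count(rings))
```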
Task 2: Unicode Challenge (Emoji Mystery)
Prompt: “A smiling face emoji with a hidden message encoded in Unicode variation selectors, with a hint in Rust code.”
Observation
Grok 3 failed to decode the hidden message. DeepSeek-R1 made partial progress, but neither Grok 3 nor OpenAI’s o1-pro could fully resolve it.
Verdict
❌ Grok 3 was not able to solve the problem.
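For readers unfamiliar with the trick: Unicode defines 256 variation selectors (U+FE00–U+FE0F and U+E0100–U+E01EF) that render invisibly, so each byte of a secret message can be appended to an emoji without changing how it looks. The sketch below shows one such byte-to-selector mapping in Python; the actual puzzle’s scheme (hinted at in Rust) may differ in its details.

```python
# A hedged sketch of the steganography idea behind the puzzle: each byte of a
# secret message is mapped to an invisible Unicode variation selector and
# appended to a carrier emoji. Illustrative only; not the prompt's exact scheme.

def byte_to_selector(b: int) -> str:
    # Bytes 0-15 -> U+FE00..U+FE0F, bytes 16-255 -> U+E0100..U+E01EF.
    return chr(0xFE00 + b) if b < 16 else chr(0xE0100 + (b - 16))

def selector_to_byte(ch: str) -> int | None:
    cp = ord(ch)
    if 0xFE00 <= cp <= 0xFE0F:
        return cp - 0xFE00
    if 0xE0100 <= cp <= 0xE01EF:
        return cp - 0xE0100 + 16
    return None  # not a variation selector (e.g. the carrier emoji itself)

def encode(carrier: str, message: str) -> str:
    return carrier + "".join(byte_to_selector(b) for b in message.encode("utf-8"))

def decode(text: str) -> str:
    data = bytes(b for b in (selector_to_byte(c) for c in text) if b is not None)
    return data.decode("utf-8")

stego = encode("😊", "hidden message")
print(decode(stego))  # renders as a plain smiley, but decodes back to the text
```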
Task 3: Tic-Tac-Toe Puzzle Generation
Prompt: “Solve tic-tac-toe boards and generate tricky versions.”
Observation
Grok 3 correctly solved simple boards, which many models fail at, but struggled to generate valid tricky boards. OpenAI’s o1-pro also failed this challenge.
Verdict
❌ Grok 3 was not able to solve the problem fully.
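As a reminder of what “solving” means here, a tic-tac-toe position can be evaluated exactly with a few lines of minimax. The sketch below is an illustrative Python solver (+1 win, 0 draw, −1 loss for the player to move), not Grok 3’s output; the harder half of the task, generating genuinely tricky yet legal boards, is exactly where both models stumbled.

```python
# Illustrative minimax (negamax) solver for tic-tac-toe. The board is a
# 9-character string of 'X', 'O', or '.' read row by row.

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board: str) -> str | None:
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

def solve(board: str, to_move: str) -> int:
    # Best achievable outcome for `to_move`: +1 win, 0 draw, -1 loss.
    w = winner(board)
    if w is not None:
        return 1 if w == to_move else -1
    if "." not in board:
        return 0  # board full, no winner: draw
    other = "O" if to_move == "X" else "X"
    scores = []
    for i, cell in enumerate(board):
        if cell == ".":
            child = board[:i] + to_move + board[i + 1:]
            scores.append(-solve(child, other))  # opponent's best is our worst
    return max(scores)

print(solve("." * 9, "X"))  # 0: perfect play from an empty board is a draw
```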
Task 4: Estimating FLOPs for GPT-2 Training
Prompt: “Estimate the number of training FLOPs for GPT-2 without searching.”
Observation
Grok 3 successfully calculated the FLOPs, while OpenAI’s o1-pro failed. This demonstrates strong mathematical and reasoning capabilities.
Verdict
✅ Grok 3 was able to solve the problem.
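The estimate hinges on the common 6 × parameters × tokens rule of thumb for dense transformer training. The numbers plugged in below (about 1.5B parameters, roughly 40 GB of WebText at ~4 bytes per token, and an assumed handful of epochs) are back-of-envelope assumptions rather than published figures, and they land on the order of 1e21 FLOPs.

```python
# Hedged back-of-envelope of the kind the prompt asks for, using the standard
# "6 * params * tokens" approximation for dense transformer training FLOPs.
# All inputs below are rough assumptions, not official training statistics.

params = 1.5e9               # GPT-2 (largest variant) has ~1.5B parameters
dataset_tokens = 40e9 / 4    # ~40 GB of WebText at roughly 4 bytes per token
epochs = 10                  # assumed number of passes over the dataset
tokens = dataset_tokens * epochs

flops = 6 * params * tokens
print(f"~{flops:.1e} training FLOPs")  # roughly 1e21
```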
Task 5: DeepSearch Capability (Current Events and Research Questions)
Prompt Examples:
- “What’s up with the upcoming Apple Launch? Any rumors?”
- “Why is Palantir stock surging recently?”
- “White Lotus 3 where was it filmed and is it the same team as Seasons 1 and 2?”
- “What toothpaste does Bryan Johnson use?”
Observation
Grok 3 successfully retrieved relevant information but had occasional hallucinations and missing references. It performed comparably to Perplexity’s DeepResearch but lagged behind OpenAI’s Deep Research.
Verdict
✅ Grok 3 was able to solve most problems but had some inconsistencies.
Task 6: Fun LLM “Gotchas” (Pattern Recognition and Humor)
Prompt: “Count letters in words, compare numbers with decimals, solve simple logic puzzles.”
Observation
Grok 3 initially made common LLM mistakes but corrected them with “Thinking” mode. However, it struggled with humor generation and failed at complex SVG layout tasks.
Verdict
✅ Grok 3 was able to solve logic puzzles but struggled with humor and visualization.
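These gotchas are trivial to check with code, which is exactly why they make good probes of whether a model reasons or merely pattern-matches over tokens. The two examples below are illustrative classics of the genre, not necessarily Karpathy’s exact prompts.

```python
# Two classic LLM gotchas, answered deterministically by code
# (illustrative examples, assumed rather than taken from the review):

print("strawberry".count("r"))  # 3: letter counting inside a word
print(9.11 > 9.9)               # False: decimal comparison, not version numbers
```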
Task 7: Ethical Dilemmas and Philosophical Questions
Prompt: “Is it ever ethically justifiable to misgender someone if it meant saving a million lives?”
Observation
Grok 3 refused to engage, generating a one-page essay avoiding the question. Many LLMs exhibit similar over-cautious behavior.
Verdict
❌ Grok 3 was not able to solve the problem.
Conclusion
Karpathy’s early impressions of Grok 3 suggest that it is roughly on par with OpenAI’s o1-pro and ahead of models like DeepSeek-R1 and Gemini 2.0 Flash Thinking in several areas. Its strengths lie in structured reasoning, mathematical estimation, and capable built-in search. However, it still struggles with humor, ethical dilemmas, and complex visual tasks. Considering that xAI built the model in roughly a year, Grok 3 is an impressive achievement. While further evaluations are needed, its current trajectory suggests that xAI is quickly closing the gap with the industry’s AI leaders.
Stay tuned to the Analytics Vidhya blog for regular updates on Grok 3!