

If you’re preparing for McKinsey right now, the first thing to understand is that most of the content out there is already outdated.
The test is evolving pretty quickly, and 2026 is probably the first year where that becomes very obvious. If you are interested in a more comprehensive guide, you can check out MyConsultingCoach’s guide.
If you are also preparing for case interviews, you can check out the case interview guide.
Historically, the format was relatively stable. You had Ecosystem, which was quite rule-based, then Redrock, and then later Sea Wolf. You could prepare for those, understand the mechanics, and get reasonably comfortable with what was coming.
That’s no longer really the case.
Most candidates today report seeing Redrock and Sea Wolf, but there is increasing evidence that McKinsey is testing new modules on top of that. One that has started to come up in 2026 is something called “Sustainable Future Lab.” It’s not fully documented anywhere yet, but multiple candidates on Reddit have mentioned encountering it, which usually means it’s being rolled out or A/B tested.
The older games were still quite structured. Even Sea Wolf, which people find difficult, is ultimately a constraint problem. You have a set of options, you filter based on conditions, and you arrive at a solution. There is a clear logic to it.
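That "filter options against conditions" logic can be sketched in a few lines of code. This is purely illustrative: the option names, attributes, and constraints below are invented for the example, not actual Sea Wolf content.

```python
# Toy version of the constraint-filtering pattern described above.
# All data here is hypothetical, not taken from the real game.

options = [
    {"name": "Route A", "fuel": 3, "risk": "low"},
    {"name": "Route B", "fuel": 5, "risk": "high"},
    {"name": "Route C", "fuel": 2, "risk": "low"},
]

# Each constraint narrows the candidate set; apply them one at a time.
constraints = [
    lambda o: o["fuel"] <= 4,      # e.g. "stay within the fuel budget"
    lambda o: o["risk"] == "low",  # e.g. "avoid high-risk options"
]

candidates = options
for check in constraints:
    candidates = [o for o in candidates if check(o)]

# Whatever survives every filter is a valid solution.
print([o["name"] for o in candidates])
```

The point is the discipline, not the code: candidates who do well treat each condition as a hard filter and apply them systematically, rather than eyeballing the full set of options at once.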
The new direction seems different. The Sustainable Future Lab type of scenario appears to push more into decision-making with ambiguity. Less “find the correct answer,” more “make a reasonable decision given incomplete information.” More trade-offs, more prioritization, less certainty.
That’s a meaningful shift, and it aligns much more closely with actual consulting work. In real engagements, especially in strategy or large transformation problems, you rarely have perfect data. You are constantly balancing competing objectives and making calls under uncertainty.
That variability in format is probably intentional. It reduces the advantage of people who rely heavily on preparation material and pushes everyone toward first-principles thinking.
At the same time, the overall test is getting shorter. In most cases you’re looking at around an hour, sometimes a bit more if there’s an additional module. That means you don’t have time to explore everything or fix mistakes later. Whatever approach you take at the beginning tends to stick, and small inefficiencies compound quickly.
Redrock is basically a mini case interview at this point. The main mistake people make is treating it like a game where you click around and explore everything. That approach gets penalized. What matters is selecting the right information, not all the information. The signal McKinsey is looking for is whether you can form a hypothesis and go after the data that matters.
Sea Wolf looks simpler than it is. People try to brute force it or go by intuition, and it usually doesn’t work. It’s much closer to a structured filtering problem. If you don’t have a clear method, you lose time and make mistakes. Candidates who do well tend to impose structure early, even if the interface doesn’t force them to.
This is also why Solve is becoming much more aligned with case interviews. The same core skills show up: structuring before acting, prioritizing information, being disciplined with your approach, and being comfortable making decisions without having perfect data.
One implication that people underestimate is that Solve is now harder to “prepare for” in the traditional sense. You can still get familiar with formats and avoid obvious mistakes, but you can’t rely on memorizing patterns anymore. The edge comes from how you think, not what you’ve seen before.
If you approach it like a game, you’ll probably feel lost. If you approach it like a simplified consulting engagement, it starts to make a lot more sense.