Canary Wharfian - Online Investment Banking & Finance Community.

McKinsey Solve Games – what’s new in 2026

by Canary Wharfian, April 8th 2026

If you’re preparing for McKinsey right now, the first thing to understand is that most of the content out there is already outdated.

The test is evolving pretty quickly, and 2026 is probably the first year where that becomes very obvious. If you are interested in a more comprehensive guide, you can check out MyConsultingCoach’s guide.

If you are also preparing for case interviews, you can check out the case interview guide.

The Solve Game is changing fast

What’s happening now is that Solve is moving away from being a “game” in any meaningful sense. It’s still called a game, but structurally it’s becoming much closer to a simulation of actual consulting work.

Historically, the format was relatively stable. You had Ecosystem, which was quite rule-based, then Redrock, and then later Sea Wolf. You could prepare for those, understand the mechanics, and get reasonably comfortable with what was coming.

That’s no longer really the case.

Most candidates today report seeing Redrock and Sea Wolf, but there is increasing evidence that McKinsey is testing new modules on top of that. One that has started to come up in 2026 is something called “Sustainable Future Lab.” It’s not fully documented anywhere yet, but multiple candidates on Reddit have mentioned encountering it, which usually means it’s being rolled out or A/B tested.

A new type of problem: Sustainable Future Lab

The interesting part is not just that there’s a new game, but what type of thinking it requires.

The older games were still quite structured. Even Sea Wolf, which people find difficult, is ultimately a constraint problem. You have a set of options, you filter based on conditions, and you arrive at a solution. There is a clear logic to it.

The new direction seems different. The Sustainable Future Lab type of scenario appears to push more into decision-making with ambiguity. Less “find the correct answer,” more “make a reasonable decision given incomplete information.” More trade-offs, more prioritization, less certainty.

That’s a meaningful shift, and it aligns much more closely with actual consulting work. In real engagements, especially in strategy or large transformation problems, you rarely have perfect data. You are constantly balancing competing objectives and making calls under uncertainty.

Less standardization, more variability

Another thing that’s becoming clear is that the test is less standardized than before. Two candidates applying to the same office can have noticeably different experiences: some get two games, some get three, and some encounter new modules.

That variability is probably intentional. It reduces the advantage of people who rely heavily on preparation material and pushes everyone toward first-principles thinking.

At the same time, the overall test is getting shorter. In most cases you’re looking at around an hour, sometimes a bit more if there’s an additional module. That means you don’t have time to explore everything or fix mistakes later. Whatever approach you take at the beginning tends to stick, and small inefficiencies compound quickly.

Redrock and Sea Wolf still matter

Redrock and Sea Wolf are still central, and they haven’t fundamentally changed, but how you’re expected to approach them has.

Redrock is basically a mini case interview at this point. The main mistake people make is treating it like a game where you click around and explore everything. That approach gets penalized. What matters is selecting the right information, not all the information. The signal McKinsey is looking for is whether you can form a hypothesis and go after the data that matters.

Sea Wolf looks simpler than it is. People try to brute force it or go by intuition, and it usually doesn’t work. It’s much closer to a structured filtering problem. If you don’t have a clear method, you lose time and make mistakes. Candidates who do well tend to impose structure early, even if the interface doesn’t force them to.
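To make the "structured filtering" idea concrete, here is a minimal sketch in Python. The actual Sea Wolf interface and rules are not public, so the options and constraints below are invented for illustration; the point is the method of applying one condition at a time to narrow the option set, rather than eyeballing all combinations at once.

```python
# Illustrative sketch only: the real Sea Wolf rules are not public, so the
# option attributes and constraints here are invented. The technique shown is
# sequential constraint filtering: apply each condition in turn and keep only
# the options that survive all of them.

def filter_candidates(options, constraints):
    """Return only the options that satisfy every constraint."""
    survivors = list(options)
    for constraint in constraints:
        survivors = [opt for opt in survivors if constraint(opt)]
    return survivors

# Hypothetical option set: each candidate is a dict of attributes.
options = [
    {"name": "A", "depth": 10, "temp": 5},
    {"name": "B", "depth": 40, "temp": 5},
    {"name": "C", "depth": 10, "temp": 20},
]

# Hypothetical conditions, checked one at a time.
constraints = [
    lambda o: o["depth"] <= 30,   # must live in shallow water
    lambda o: o["temp"] <= 10,    # must tolerate cold water
]

print([o["name"] for o in filter_candidates(options, constraints)])  # -> ['A']
```

The discipline this models is exactly what strong candidates report doing: writing down the constraints explicitly and eliminating options in passes, so that each decision is checked against one condition instead of all of them at once.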

What McKinsey is actually testing now

Across both, and even more with the new modules, the underlying signal McKinsey is trying to extract is pretty consistent. They are not testing whether you can “figure out the game.” They are testing whether you think in a structured way under pressure.

This is also why Solve is becoming much more aligned with case interviews. The same core skills show up: structuring before acting, prioritizing information, being disciplined with your approach, and being comfortable making decisions without having perfect data.

One implication that people underestimate is that Solve is now harder to “prepare for” in the traditional sense. You can still get familiar with formats and avoid obvious mistakes, but you can’t rely on memorizing patterns anymore. The edge comes from how you think, not what you’ve seen before.

Final takeaway

The direction is pretty clear. McKinsey is moving away from standardized, repeatable games and toward something that looks more like real work. More variability, more ambiguity, less opportunity to rely on patterns.

If you approach it like a game, you’ll probably feel lost. If you approach it like a simplified consulting engagement, it starts to make a lot more sense.
