Sir,
I know it's tangential to your point. But the Microsoft study on AI and cognitive offloading is a significant issue we have to deal with if we want to use AI. It should alarm us that we seemingly become less curious the more we trust in and use effective AI. I understand your point is that the leaders in your piece haven't read that study, but it should be mandatory reading in our line of work.
DJ,
I presume you're referencing this one?
https://www.microsoft.com/en-us/research/wp-content/uploads/2025/01/lee_2025_ai_critical_thinking_survey.pdf?msockid=010b67adae226abc36317275af106b14
BL: While I don't advocate blind use of AI/ML, I also caution people against painting with too wide a brush. What's critical is to understand what the technology can and cannot do. This is what Emelia Probasco is pleading for our leaders to do:
https://downrangedata.substack.com/i/151365236/just-log-on
In the Microsoft paper, they identify how the task being conducted had a large impact on whether knowledge workers engaged in critical thinking or not.
'This suggests that knowledge workers who already engage in critical thinking in their work are likely to continue doing so even when using GenAI tools. However, in contrast to knowledge workers’ confidence in AI doing the task at hand (i.e., Confidence in AI, above), which negatively correlated with their perceived enaction of critical thinking, we did not find a significant correlation between knowledge workers’ overall trust in GenAI and their perceived enaction of critical thinking. A possible explanation is that users’ reliance and confidence on AI, as well as their perceived enaction of critical thinking, might vary across tasks; accordingly, the variance that would have been explained by the general user-level factor may already be well captured by the task-level confidence factors.'
'A lack of critical thinking may also manifest through accepting a solution that merely meets a baseline aspirational threshold —in such cases, the AI solution is correct (albeit potentially of poor quality) and therefore not overreliance, strictly speaking.'
However, not all uses of AI/ML are the same. From another source:
'Narrow AI indeed has a role in tightly bounded pattern-recognition tasks, such as imagery filtering and analysis, and automated administrative tasks...'
https://www.tandfonline.com/doi/epdf/10.1080/01402390.2023.2241648?needAccess=true
Meanwhile, taking a quick glance at X/TikTok/Facebook, I'm not quick to blame AI for killing our critical thinking.
I can’t help but attribute the development of this culture to our personnel and promotion policies.
Most Soldiers remain in one location for less than three years and in a specific role for under two. This short timeline limits both the opportunity and incentive to learn deeply or experiment and innovate in a meaningful way. Many say that by the time you understand your job, you’re already moving on. Long-term stewardship and innovating to improve an organization over time are not incentivized when you only need to do well these next 24 months, and you'll never see the fruits or bear the consequences of your service at that assignment again.
I don't say this to imply COLs are self-centered or don't care about innovating, but I think it at least subconsciously affects the way senior leaders who have operated along this paradigm for two decades think.
I certainly can't argue that the PCS cycle isn't a contributing factor. But.
Anecdote, not data, but I'm a few weeks away from my 13th PCS in 22 years (once every 20.3 months on average). I've only held three jobs in my career for more than a year: detachment command, assignment officer, and battalion command.
I think, in the end, you shouldn't rely on external stimuli to drive innovation or curiosity. I didn't innovate to get recognition or to get promoted. That drive, that curiosity, comes best when it comes from within. And I don't think only certain people have it.
Kids are born curious. So are most lieutenants. It's typically their leaders who beat it out of them.