The news about MSS is heavily focused on its use of ML tools like LLMs.1 This is in part because we’re still very much in the AI hype era. Some of the hype won’t pan out, but my expectation is that, much like Web 2.0, some interesting things are going to rise from the ashes of the bubble.
The AI/ML part of MSS isn’t as interesting to me as the way it was designed. For whatever my two pence is worth, incorporating AI/ML was the inevitable by-product of the way MSS was made. I just wish we’d adopted this process at least a decade ago.
‘Building MSS… was perhaps more of an organizational feat than a technical one’, according to Emelia Probasco. Making MSS required the DoD to collapse the ‘waterfall’ model Jennifer Pahlka lays out in her book Recoding America.2 I’m not going to steal her thunder, but Pahlka’s description of how the government acquisition process prevents us from coding good software is spot on. With MSS, Probasco lists several key ingredients, to include ‘direct access by developers, designers, testers, and program/project managers to the operational environment’. This was exactly what I was referring to when I wrote, ‘If you’re not the user, then damn well find the person who is’.
The operators and the coders iterated alongside one another:
For example, soldiers recounted an experience where they noticed a degradation in algorithm performance due to shifting weather conditions. While the algorithm still worked, the soldiers knew it could be improved and were able to consult and rapidly iterate with the developers to improve the algorithm’s performance across different weather conditions.
Users can provide immediate feedback on friction points and help identify, by feel, when things aren’t working as well as they could. The coders can make those changes because they’re operating under set goals and timelines instead of traditional, hard-to-change requirements. This required the third literacy that Probasco identified in her report: acquisitions.
Having only dipped a toe into the water during my tenure as a G8, I can attest to the volume of education you need to understand our acquisition process.3 As much as the current process is the problem, you can’t circumvent it until you understand how it works. But if you leave this to your acquisitions personnel, then you as a leader are making a critical mistake.
In the podcast, Mikolic-Torreira highlights the need for top cover and leader prioritization. That comes from commanders. Hundreds of articles have been published about the need to change the way we acquire software, but as Mikolic-Torreira puts it, ‘The long pole in the tent to actually using the full range of authorities we have in acquisitions is the trilingual leader’: a senior officer who understands operations, tech, and acquisitions.
Our code is often shit because, as Pahlka puts it, ‘It’s not the work that important people do, and important people don’t do it’. In contrast, the reason MSS could be pivoted from field artillery to FEMA was that the focus on how data was gathered and moved came from the leadership down. Probasco cites three key ‘trilingual leaders’ who worked on MSS: Lieutenant General Chris Donahue, Colonel Joe O’Callaghan, and USMC Colonel Drew Cukor. She also gives credit to General Michael ‘Erik’ Kurilla.
Listening to Probasco’s response had me in violent agreement, much to the confusion of the motorists passing me. She is clear that we need leaders who understand data to ‘…design the exercises and change the way the unit operates’. She is also spot on in designating this as a leadership task, not something you delegate to your Chief Data or Tech Officer: ‘You don’t want to delegate how the system works’.
‘Commanders need to be able to ask, “How do I want my software to support my intent?”’
I owe Probasco a valentine for that quote alone.
Dubbing these kinds of commanders ‘unicorns’, John Amble asks Mikolic-Torreira, ‘How are we producing trilingual leaders? It seems we’re just crossing our fingers and hoping that the right people are put in the right places at the right time to make this happen.’
We can get this triumvirate of knowledge into our formations in one of three ways:
We can recruit the people who know it. But given the money being thrown around by the major AI firms, it’s unlikely the Department of Defense can compete well for this talent. The DoD has also been very reluctant to use the authorities Congress gave us to directly recruit specially trained talent with these skill sets.
We can also teach it to ourselves in PME, which is going to be a much slower process because it waits on educated junior leaders aging into leadership roles.4
Or we can demand it from our leaders, incentivizing them to grow and challenge themselves.
If this year’s Command Assessment Program is any indication, the military is opting for the slow, education-and-replacement approach. While this is not my preferred approach, I can’t say I’m surprised. In the original draft of my War on the Rocks article, I suggested leaders needed to do self-study. But one of the WoTR editors (rightly) identified this as something people would attack in the article:
'But his response to that counterargument could be strengthened - it comes across as "so what, too bad."… "when is a major in the US Army supposed to find additional time to train themselves on data literacy?… his argument is probably better served with a solution that could actually be implemented over time. Like it or not, DoD and the Army are large bureaucracies, and it takes time to change training and educational curriculum to produce the new type of leader the author wants.’
Mikolic-Torreira thinks it’s going to take ‘a generation or two’ to grow these leaders. I’m not certain whether a military generation is shorter than a normal one, but that pace relies on the US military not needing to fight any conflicts for the next four to twenty years while we wait. Call me a Debbie Downer, but I don’t think we have that long. Mikolic-Torreira suggests we can create incentives, just like we did with Jointness, though he sees a lower demand for trilingual leaders than we had for joint-qualified ones. I hope he’s right, because right now we’re not even demanding bilingual ones.
We don’t need to find mythical ‘unicorns’; we just need curious leaders. Leaders whose first instinct, when they see a new tool, is to ask questions and fuck with it.
‘Just log on’
Probasco nails this as well in the podcast. She cites the need for leaders to understand what AI/ML can do and what it can’t. We have too many leaders adopting the ‘sprinkle some magic AI dust’ approach that Amble decries in the podcast. Probasco argues the underlying question with AI is ‘How do I facilitate a decision process?’ She suggests we need to ground people amid the AI hype, contrasting the two extremes: those who think AI is going to change everything and solve all our problems, and those who think all of it is terrible.
Critically, she provides a way to get more people to join the growing group in the middle:
‘Have you ever actually touched it before? Have you played with LLMs? My advice is “just log on.” Play with it for 30 days, and start to understand what it can and cannot do for you.’
Both guests call for an AI-literate population, both inside and outside the NatSec community. Probasco even gives West Point a hat tip for its AI center. Unfortunately, today’s West Point cadets are about two decades away from command.
Which brings me all the way back to the picture at the top of this post. In the spring of 2023 I was toying around with Stable Diffusion. This was partly because I needed a way to break up a FICINT narrative that was, at the time, a 10,000-word wall of text. I needed pictures. But it was also because I didn’t understand what I was reading and listening to about machine learning. So I started fucking with it.
A peer saw my early results and gave me a project: could I redo Iron Maiden’s iconic ‘The Trooper’ cover, swapping Eddie the British redcoat for a Special Forces Green Beret? It took about a month of clumsily stumbling around to get where I did, with a lot of dead ends along the way. But I also learned a ton about just how big ‘big data’ really is. I learned to appreciate the scale of compute going into machine learning, how bias is inherent in every model, and how to use things like LoRAs to control and shape a model’s output to your intent.
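For anyone wondering what ‘using a LoRA’ actually looks like in practice, here’s a minimal sketch using the Hugging Face diffusers library. This is not my actual workflow; the model ID, LoRA file, and prompt below are placeholders, and the point is only how little code sits between you and touching the technology.

```python
# Minimal sketch (placeholders throughout, not my actual setup): load a
# Stable Diffusion checkpoint, attach a LoRA that shapes style/subject,
# and generate an image with the Hugging Face diffusers library.
import torch
from diffusers import StableDiffusionPipeline

# Load a base Stable Diffusion checkpoint (placeholder model ID).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# Attach LoRA weights: a small add-on trained to pull the model toward a
# particular style or subject without retraining the whole thing.
pipe.load_lora_weights("path/to/lora_dir", weight_name="my_style_lora.safetensors")

# The prompt does the steering; the LoRA does the shaping.
image = pipe(
    "1980s album-cover illustration of a soldier on a smoke-filled battlefield",
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]
image.save("poster_draft.png")
```

Even a toy run like this makes the lessons above concrete: the checkpoint is gigabytes of learned bias, the GPU is doing the heavy lifting, and the LoRA is you putting your thumb on the scale.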
In the end I got a lot more than a cool new poster (which I got as well, thanks Ron). I learned a little more, and was able to share that information with my peers so we all got collectively a little smarter. But there’s nothing exceptional about me. Anyone can log on and play with LLMs, and there are dozens of models to engage with, in at least a dozen different ways.
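If ‘just log on’ through a web chat window still feels too abstract, the programmatic route is barely harder. Here’s a minimal sketch using the OpenAI Python client; the model name and prompts are placeholders, and any chat-style LLM API follows roughly the same shape.

```python
# Minimal sketch of poking at an LLM from code. The model name and prompts
# are placeholders; this assumes an OPENAI_API_KEY in your environment.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; swap in whatever model you have access to
    messages=[
        {"role": "system", "content": "You are a blunt, plain-spoken staff officer."},
        {"role": "user", "content": (
            "Give me three questions a battalion commander should ask "
            "before trusting a machine learning tool in an exercise."
        )},
    ],
)
print(response.choices[0].message.content)
```

Thirty days of prompts like that, good and bad, will teach you more about what these models can and cannot do than any slide deck.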
My two pence? Find the time now to learn. Learn about acquisitions and ‘How the Army Runs’ too. You won’t have the time later, and you’ll wish you had. Come join team unicorn.
Maven Smart System. If you didn’t do the assigned reading from last week, give a listen to John Amble’s podcast with Igor Mikolic-Torreira, the director of analysis at Georgetown University’s Center for Security and Emerging Technology (CSET), and Emelia Probasco, a CSET senior fellow. There are plenty of spoilers coming in this post.
Machine learning is using computers to execute tasks we used to think you needed people to do, usually by training algorithms on data rather than explicitly programming them.
Large language models are a type of machine learning model that can take text or voice input and execute a variety of tasks without needing to be directly programmed for those tasks. LLMs are trained on huge sets of data, some on nearly all of the internet, hence the name ‘large’.
I’ve recommended this one a few times over the course of Downrange Data’s posts. If you haven’t picked up a copy yet, I gotta question your life choices.
In the Army, the G8 is responsible for funding, fielding, and equipping actions, a.k.a. acquisitions.
Professional Military Education. It might surprise many civilians to learn that the military places a high priority on education and professional development, with most officers spending between two and three years across their careers going to school while still in uniform.