[…]
“Right now we’re at a point where generative AI is coming along and it’s a really exciting time. We’re experimenting with ways to use new tools across the entire test process, from test planning to test execution, from test analysis to test reporting. With investments from the Chief Digital and Artificial Intelligence Office [CDAO] we have approved, under Controlled Unclassified Information [CUI], a large language model that resides in the cloud, on a government system, where we can input a test description for an item under test and it will provide us with a Test Hazard Analysis [THA]. It will initially provide 10 points, and we can request another 10, and another 10, and so on, in the format that we already use. It’s not a finished product, but it’s about 90% there.”
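To make that workflow concrete, here is a minimal sketch of how a test description could be sent to a hosted large language model and come back as batched THA points in a fixed format. The endpoint, model name, and prompt wording are illustrative assumptions, not details of the CDAO-approved system.

```python
# Hedged sketch of the THA-generation workflow described above.
# The URL, model identifier, and prompt are assumptions for illustration only.
import json
from urllib import request

API_URL = "https://llm.example.mil/v1/chat"  # hypothetical CUI-cloud endpoint

THA_PROMPT = (
    "You are a flight-test safety engineer. Given the test description below, "
    "list {n} Test Hazard Analysis entries in the standard format: "
    "Hazard | Cause | Effect | Minimizing Procedure | Corrective Action.\n\n{desc}"
)

def request_tha_points(test_description: str, n: int = 10) -> str:
    """Ask the hosted LLM for a batch of n THA points; call again for the next 10."""
    payload = json.dumps({
        "model": "cui-approved-llm",  # assumed model name
        "messages": [{"role": "user",
                      "content": THA_PROMPT.format(n=n, desc=test_description)}],
    }).encode()
    req = request.Request(API_URL, data=payload,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:  # returns the model's formatted THA entries
        return json.load(resp)["choices"][0]["message"]["content"]
```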
“When we do our initial test brainstorming, it’s typically a very creative process, but that can take humans a long time to achieve. It’s often about coming up with things that people hadn’t considered. Now, instead of engineers spending hours working on this and creating the administrative forms, the AI program creates all of the points in the correct format, freeing up the engineers to do what humans are really good at – thinking critically about what it all means.”
“So we have an AI tool for THA, and now we’ve expanded it to generate test cards from our test plans that we use in the cockpit and in the mission control rooms. It uses the same large language model, but trained on the test card format. So we input the detailed test plan, which includes the test method and measures of effectiveness, and we ask it to generate test cards. Rather than spending a week generating these cards, it takes about two minutes!”
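A sketch of that test-card step, under the same assumptions as above: feed the detailed test plan to the model and ask for structured cards. The card fields and the `ask_llm` callable are hypothetical stand-ins, not the actual test card format.

```python
# Sketch of turning a detailed test plan into cockpit/control-room test cards,
# reusing the same LLM call pattern. Field names are illustrative assumptions.
import json
from dataclasses import dataclass

@dataclass
class TestCard:
    title: str
    setup: str
    maneuver: str
    measures_of_effectiveness: list[str]
    tolerances: str

def cards_from_plan(plan_text: str, ask_llm) -> list[TestCard]:
    """ask_llm is any callable that sends a prompt to the model and returns JSON text."""
    raw = ask_llm(
        "Extract every test point from this plan and return a JSON list of cards "
        "with fields: title, setup, maneuver, measures_of_effectiveness, tolerances.\n\n"
        + plan_text
    )
    return [TestCard(**card) for card in json.loads(raw)]
```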
Wickert says the Air Force Test Center is also blending its AI tooling into test reporting to enable rapid analysis and “quick look” reports. For example, audio recordings of debriefs can now be turned into written reports. “That’s old school debriefs being coupled with the AI tooling to produce a report that includes everything we talked about in the audio, and it produces it in the format that we use,” explained Wickert.
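The debrief-to-report flow Wickert describes can be approximated as transcription followed by LLM reformatting. The sketch below uses openai-whisper purely as a stand-in speech-to-text model; the actual AFTC tooling and report template are not public.

```python
# Hedged sketch: transcribe the debrief audio, then have an LLM rewrite the
# transcript into a quick-look report template. Section names are assumptions.
import whisper  # pip install openai-whisper

def draft_quick_look(audio_path: str, ask_llm) -> str:
    """Turn a recorded debrief into a formatted quick-look report draft."""
    model = whisper.load_model("base")
    transcript = model.transcribe(audio_path)["text"]
    return ask_llm(
        "Rewrite this flight-test debrief transcript as a quick-look report with "
        "sections: Objectives, Results, Deviations, Recommendations.\n\n" + transcript
    )
```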
“There’s also the AI that’s under test, when the system under test is the AI, such as the X-62A VISTA [Variable-stability In-flight Simulator Test Aircraft]. VISTA is a sandbox for testing out different AI agents. In fact, I just flew it and we did a BVR [Beyond Visual Range] simulated cruise missile intercept under AI control, and it was amazing. We were 20 miles away from the target, I simply pushed a button to engage the AI agent, and then we continued hands off as it flew the entire intercept and saddled up behind the target. That’s an example of AI under test, and our normal test procedures, safety planning, and risk management all apply to it.”
“There’s also AI assistance to test. In our flight-test control rooms, if we’re doing envelope expansion, flutter, loads, or handling qualities – in fact we’re about to start high angle-of-attack testing on the Boeing T-7, for example – we have engineers sitting there watching and monitoring from the control room. The broad task in this case is to compare the actual handling against predictions from the models to determine if the model is accurate. We do this as incremental step-ups in envelope expansion, and when reality and the model start to diverge, that’s when we hit pause, because either we don’t understand the system or the model is wrong. An AI assistant in the control room could really help with real-time monitoring of tests, and we are looking at this right now. It has a huge impact with respect to digital engineering and digital materiel management.”
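One way such a control-room assistant could work is a simple divergence monitor: stream telemetry, compare each sample against the model’s prediction, and flag a pause when the two disagree beyond a tolerance. The parameter names and the 15% threshold below are invented for illustration, not drawn from any actual test program.

```python
# Illustrative divergence monitor for incremental envelope expansion:
# compare measured response against the model's prediction sample by sample.
from typing import Callable, Iterable

def monitor_envelope_expansion(
    telemetry: Iterable[dict],            # e.g. {"alpha": ..., "pitch_rate": ...}
    predict: Callable[[dict], float],     # model prediction for the same sample
    measured_key: str = "pitch_rate",     # assumed parameter of interest
    tolerance: float = 0.15,              # 15% divergence threshold (assumption)
):
    for sample in telemetry:
        predicted = predict(sample)
        measured = sample[measured_key]
        divergence = abs(measured - predicted) / max(abs(predicted), 1e-6)
        # Pause the test point when reality and the model no longer agree.
        status = "PAUSE" if divergence > tolerance else "CONTINUE"
        yield status, sample, divergence
```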
“I was the project test pilot on the Greek Peace Xenia F-16 program. One example of that work was that we had to test a configuration with 600-gallon wing tanks and conformal tanks, which equated to 22,000 pounds of gas on a 20,000-pound airplane, so a highly overloaded F-16. We were diving at Mach 1.2, and we spent four hours trying to hit a specific test point. We never actually managed to hit it. That’s incredibly low test efficiency, but you’re doing it in a very traditional way – here’s a test point, go out and fly the test point, with very tight tolerances. Then you get the results and compare them to the model. Sometimes we do that in real time, linked up with the control room, and it can typically take five or 10 minutes for each one. So, there’s typically a long time between test points before the engineer can say that the predictions are still good and you’re cleared to the next test point.”
“AI in the control room can now do that comparison work in real time, with predictive analysis and digital modeling. Instead of having a test card that says you need to fly at six Gs plus or minus 1/10th of a G, at 20,000 feet plus or minus 400 feet pressure altitude, at Mach 0.8 plus or minus 0.05, now you can just fly a representative maneuver somewhere around 20,000 feet, make sure you get through Mach 0.8, and just do some rollercoaster stuff and a turn. In real time in the control room you’re projecting the continuous data that you’re getting via the aircraft’s telemetry onto a reduced-order model, and that’s the product.”
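“Projecting telemetry onto a reduced-order model” can be read as fitting the model’s few coefficients to the continuous data from a representative maneuver and then checking how well the fitted model explains the measurements. The basis terms below (dynamic pressure, angle of attack, and its square) are an assumed illustrative choice, not the actual model used at Edwards.

```python
# Sketch of fitting a reduced-order model (ROM) to continuous telemetry from a
# representative maneuver, using ordinary least squares. Basis terms are assumed.
import numpy as np

def fit_reduced_order_model(qbar, alpha, measured_load):
    """Return coefficients c so that load ~ c0*qbar + c1*alpha + c2*alpha**2."""
    basis = np.column_stack([qbar, alpha, np.square(alpha)])
    coeffs, *_ = np.linalg.lstsq(basis, measured_load, rcond=None)
    return coeffs

def model_agreement(coeffs, qbar, alpha, measured_load):
    """Fraction of variance explained by the fitted ROM (1.0 = perfect match)."""
    basis = np.column_stack([qbar, alpha, np.square(alpha)])
    residual = measured_load - basis @ coeffs
    return 1.0 - residual.var() / measured_load.var()
```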
“When Dr Will Roper started trumpeting digital engineering, he was very clear that in the old days we graduated from a model to test. In the new era of digital engineering, we graduate from tests to a validated model. That’s with AI as an assistant, being smarter about how we do tests, with the whole purpose of being able to accelerate because the warfighter is urgently asking for the capability that we are developing.”
[…]
Source: Flight Test Boss Details How China Threat Is Rapidly Changing Operations At Edwards AFB