Braden Eichmeier's Portfolio

Follow me on GitHub

309 Software Engineering Group (SWEG) - Hill Air Force Base

As a preface: much of the detail of my work occurred in classified spaces. Accordingly, I must write with a degree of obfuscation and generalization. Despite this, all of the information herein is accurately portrayed.

Technical Program Manager (EDDGE/SABER): June 2023 - June 2025

My final chapter with SWEG was as a technical program manager on the Software for Analytics, Big data Exploitation, and Research (SABER) team. SABER successfully spun out of the Extreme Digital Development Group Enterprise (EDDGE), an internal research and development team focused on solving critical Air Force problems with emerging technologies and modern software practices. I was selected for this team based on my strong performance in the Phantom Fellowship described in the next section. At the time of my departure, the team's work focused on cloud-centric data engineering to ingest, store, and serve mass quantities of data.

Treehouse, an onboarding and assessment team, was my first stop on SABER. It was dedicated to developing talent, assessing best team fit, and producing supporting tools for the rest of SABER. Weave, the first tool I worked on, adds structure to existing internal datasets: it quickly catalogs available data using SQL, tracks the lineage and provenance of derived data, and rapidly searches metadata using MongoDB. After Weave, I authored a white paper exploring the state of the art for predictive maintenance within the defense community. Finally, I pioneered the development of a Data Product, within the concept of a Data Mesh, for MIL-STD-1553 data. By the end of my time on Treehouse, I was the team lead, responsible for 12 engineers.
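Weave itself is internal, but its core ideas of cataloging datasets, tracking provenance, and searching metadata can be sketched with an in-memory stand-in. All names below are hypothetical; the real tool backed this with SQL tables and MongoDB documents.

```python
# Minimal sketch of a metadata catalog in the spirit of Weave (illustrative
# names only; the real tool used SQL for cataloging and MongoDB for search).

class Catalog:
    """Registers datasets with metadata and tracks lineage of derived data."""

    def __init__(self):
        self._entries = {}  # dataset name -> metadata dict

    def register(self, name, tags, parents=()):
        # 'parents' records provenance: which datasets this one was derived from.
        self._entries[name] = {"tags": set(tags), "parents": list(parents)}

    def search(self, tag):
        # Analogous to a MongoDB metadata query, e.g. find({"tags": tag}).
        return sorted(n for n, m in self._entries.items() if tag in m["tags"])

    def lineage(self, name):
        # Walk parent links back to the original source datasets.
        chain, stack = [], [name]
        while stack:
            current = stack.pop()
            chain.append(current)
            stack.extend(self._entries[current]["parents"])
        return chain


catalog = Catalog()
catalog.register("flight_raw", tags=["1553", "raw"])
catalog.register("flight_clean", tags=["1553", "derived"], parents=["flight_raw"])

print(catalog.search("1553"))           # ['flight_clean', 'flight_raw']
print(catalog.lineage("flight_clean"))  # ['flight_clean', 'flight_raw']
```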

Embedded development was the focus of my second stop in SABER. The team was working to analyze and record RF data under demanding performance requirements. This was a fun collaboration with the Air Force Software Factory SkiCAMP, whose expertise was crucial to the project's success and whom I was very fortunate to learn from. I spent about four months on this team before being promoted to Technical Program Manager (TPM).

My final eight months with the Air Force were as a TPM. The role was fun because I was given a large degree of freedom to lead and execute my assigned project: a year-long initiative to extract data from a system, move it to a cloud environment via automated pipelines, process analytical insights using hyperscale cloud compute, and return those insights to the original system. I was given about twenty engineers to accomplish this task. Executing on this problem required a systems engineering approach: I broke the initiative down into subsystems within a full system architecture, defined the performance and interface requirements for each subsystem, and estimated the effort and complexity of each piece. I then formed teams to balance talent while aligning skills and interests. My primary role throughout the project was to ensure everybody was on track to deliver a high-quality product on time. I regularly participated in review and design sessions while also embedding myself as a developer whenever a team or individual began lagging behind. This period of mentorship was a joy, as I helped my team improve their skills as professionals.

The TPM role also carried general team leadership tasks beyond delivering my assigned initiative. My favorite was helping with recruitment and leading the implementation and execution of our onboarding efforts. Our primary customer tasked us to grow from 25 engineers to 40 within a year. That scale of growth is daunting because we did not want to dilute our talent density and culture by bringing on underqualified people. I helped by developing standardized technical evaluation criteria for our interview process and performed more than 20 interviews. I also garnered interest in our growing team by hosting a SWEG-wide brown bag about our work for roughly 1,000 fellow software engineers. Once a new teammate joined, I ran a training program I developed to introduce our technical stack while also performing an extended hands-on evaluation of technical capability. I really enjoyed working closely with new hires, helping them learn new skills while providing a friendly mentor relationship to ease the potential stress of a new situation.

Machine Learning Research Fellow (DAF-MIT AI-Accelerator): November 2022 - May 2023

The Department of the Air Force (DAF) has partnered with MIT to accelerate the development and fielding of key artificial intelligence (AI) capabilities in a partnership called the DAF-MIT AI Accelerator (AIA). As part of this program, the Air Force cycles airmen through a 5-month fellowship, called the Phantom Fellowship, to spread cutting-edge AI knowledge throughout the force. I was selected for this highly competitive program alongside 12 other Phantoms in my cohort.

Half of my assignment during the Phantom Fellowship was working with MIT and Lincoln Laboratory researchers. I was assigned to the Responsible AI team, which researched robustness and explainability topics in partnership with Dr. Aleksander Mądry. My specific tasks explored improving the robustness of a model to domain shift using an adversarial training regimen.
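The details of that research stay with the lab, but the general shape of an adversarial training regimen (perturb each input against the current model, then train on the perturbed example) can be sketched on a toy model. Everything below is illustrative: a 1-D logistic model with an FGSM-style perturbation, not the networks or data from the actual work.

```python
import math

# Toy sketch of FGSM-style adversarial training on a 1-D logistic model.
# This only illustrates the shape of the regimen; all numbers are made up.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    # Gradient of the logistic loss w.r.t. the input x is (p - y) * w.
    # FGSM nudges the input by eps in the sign of that gradient.
    p = sigmoid(w * x + b)
    grad_x = (p - y) * w
    if grad_x == 0.0:
        return x
    return x + eps * math.copysign(1.0, grad_x)

def adversarial_train(data, eps=0.3, lr=0.1, epochs=200):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in data:
            x_adv = fgsm_perturb(x, y, w, b, eps)  # attack the current model,
            p = sigmoid(w * x_adv + b)             # then train on the attack
            w -= lr * (p - y) * x_adv
            b -= lr * (p - y)
    return w, b

# Separable toy data: negatives near -1, positives near +1.
data = [(-1.2, 0), (-0.8, 0), (0.9, 1), (1.1, 1)]
w, b = adversarial_train(data)
print(sigmoid(w * 1.0 + b) > 0.5)  # trained model still classifies a clean positive
```

Training on the perturbed points forces the decision boundary to keep a margin of at least `eps` from the data, which is the intuition behind using adversarial training to harden a model against shifted inputs.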

In addition to the work with the Responsible AI team, I produced a novel research paper on machine learning applications at the maintenance repair depot at Hill Air Force Base. I identified several promising opportunities for improving existing workflows with AI/ML by interviewing a wide swath of practitioners and subject matter experts. My findings were received with high praise and presented to several senior leaders back at Hill AFB. After my time at the AIA concluded, I was asked to condense my report into a two-page extended abstract for submission to HPEC 2023. This was my first publication at an academic conference, of which I am quite proud. Many details were distilled or removed to prepare the document for public release, but I'm happy to be able to share it here:

Download PDF.

I was awarded the Director's Impact Award at the end of the fellowship. This award is given to the highest-performing Phantom in each cohort, as determined by both peers and leadership. The cohort voted for me unanimously because I quickly became the group's go-to expert for peer technical support and advice. I extended my impact further by participating in, and by many metrics winning, the Bravo 10 Hackathon, where I developed and implemented the key novelty on Team Yodacorn and earned "Best Use of AI/ML" and "Most Inventive". I was also selected to participate in a public affairs event where we conducted an "AMA" on Reddit. This high level of performance in my expected role, and my impact beyond it, led leadership to select me for the honor.

After the fellowship, I had the pleasure of being featured in three news articles. The first was published by Hill Air Force Base Public Affairs to highlight my experience. During that process, a reporter with KSL contacted the same public affairs office asking for an interview on artificial intelligence in defense, in response to the rise in interest in the topic following the recent release of ChatGPT. I interviewed with the KSL reporter and discussed my thoughts on the AI field at large. Finally, the AIA itself released a statement on my capstone project being published at HPEC 2023.

F-16 Simulation Engineer (VOID): June 2021 - November 2022

The Viper OFP Integration and Development (VOID) simulator is a rapid feedback tool to support Operational Flight Program (OFP) development. There are four key components of the simulator: development builds of the OFP, simulated interfaces to external systems (such as physics or radios), an environment simulator for visualization, and several networked computers of different architectures and operating systems.

My efforts on this team focused on simulating external systems; I developed three new models and maintained several others. The general workflow for developing a new model involved consulting interface documentation, flight recordings, and subject matter experts to determine the message flows for specified behaviors, then emulating the proper 1553 message traffic in response to both environmental stimuli and message commands from the OFP. An interesting balance in this role was finding the true operational behavior of a system, to provide an accurate validation tool, instead of simply trusting what the OFP expects.
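The real models and interface documents are not shareable, but the pattern of a modeled remote terminal answering MIL-STD-1553 transmit commands with data words derived from environmental state can be sketched. Every field name, address, and encoding below is illustrative, not taken from any real interface document.

```python
# Hedged sketch of the simulation pattern described above: a modeled remote
# terminal (RT) responds to 1553-style commands with data words derived from
# environmental stimuli. All addresses and encodings here are made up.

class RemoteTerminalModel:
    def __init__(self, rt_address):
        self.rt_address = rt_address
        self.state = {}       # environmental stimuli, e.g. {"altitude_ft": 30000}
        self._encoders = {}   # subaddress -> function producing data words

    def on_subaddress(self, subaddress, encoder):
        # Register how this model encodes its state for a given subaddress.
        self._encoders[subaddress] = encoder

    def handle_command(self, rt_address, subaddress, word_count):
        # Ignore traffic addressed to other RTs on the bus.
        if rt_address != self.rt_address:
            return None
        words = self._encoders[subaddress](self.state)
        return words[:word_count]  # respond with the commanded number of words


rt = RemoteTerminalModel(rt_address=5)
rt.state["altitude_ft"] = 30000
# Illustrative encoding: altitude in units of 10 ft, split into 16-bit words.
rt.on_subaddress(3, lambda s: [(s["altitude_ft"] // 10) & 0xFFFF, 0x0000])

print(rt.handle_command(rt_address=5, subaddress=3, word_count=2))  # [3000, 0]
```

The interesting work described above lives in the encoder functions: getting them to reflect the system's true operational behavior, rather than whatever the OFP happens to expect, is what makes the model a useful validation tool.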

I would like to highlight three things from my time on this team that make me proud. First was implementing unit testing. The VOID simulator is a mature project that has been in development for decades; despite that maturity, and pressure to do so, no previous developer had successfully deployed a single unit test. I began writing unit tests using GoogleTest, along with integration tests, for the new simulation models I developed.

Second, I noticed that multiple simulation teams were duplicating significant effort by developing the same simulation models independently. Seeing this problem, and the opportunity for inter-team collaboration, I led an effort to create an implementation-agnostic framework that makes core simulation modules valuable to any team. In this framework, I standardized an interface that abstracts simulator-specific behaviors, such as 1553 traffic, the pilot vehicle interface (PVI), or external physical stimuli.
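The actual framework is internal and written against simulator-specific stacks, but the abstraction idea is simple enough to sketch: a shared core model talks only to an abstract adapter, and each simulator (or test harness) supplies its own adapter. All class and method names below are hypothetical.

```python
# Sketch of the implementation-agnostic idea: a shared model depends only on
# an abstract adapter interface, never on any one simulator's internals.

from abc import ABC, abstractmethod

class SimulatorAdapter(ABC):
    """What a host simulator must provide for a shared model to run on it."""

    @abstractmethod
    def send_bus_message(self, subaddress, words):
        """Emit 1553-style traffic in whatever form this simulator uses."""

    @abstractmethod
    def read_stimulus(self, name):
        """Fetch an external physical stimulus, e.g. altitude."""


class AltitudeModel:
    """A shared core model that only ever talks to the abstract adapter."""

    def __init__(self, adapter: SimulatorAdapter):
        self.adapter = adapter

    def step(self):
        alt = self.adapter.read_stimulus("altitude_ft")
        self.adapter.send_bus_message(subaddress=3, words=[alt // 10])


class InMemoryAdapter(SimulatorAdapter):
    """A trivial adapter any team (or a test harness) could substitute."""

    def __init__(self, stimuli):
        self.stimuli = stimuli
        self.sent = []  # record emitted traffic for inspection

    def send_bus_message(self, subaddress, words):
        self.sent.append((subaddress, words))

    def read_stimulus(self, name):
        return self.stimuli[name]


adapter = InMemoryAdapter({"altitude_ft": 25000})
AltitudeModel(adapter).step()
print(adapter.sent)  # [(3, [2500])]
```

Because the model never touches simulator internals, each team writes one adapter and inherits every shared model for free, which was the point of the framework.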

A final highlight was improving several modules in the VOID simulation core. The core simulator ran on multiple low-resource compute boards running a real-time operating system (RTOS), similar in architecture to the aircraft's Line Replaceable Units (LRUs). These boards were overloaded with all simulation activity and beginning to perform poorly, when only the OFP code actually needed to run on them. I developed a tool that synchronizes the mux state between multiple machines using Redis, so that a majority of the simulation modules could be offloaded from the boards. I validated this tool with a suite of stress tests and used it in two external models.
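The offloading pattern can be sketched in a few lines: the boards publish the latest bus (mux) state into a shared key-value store, and offloaded models read it from ordinary machines. Redis served as that store in practice; here a plain dict stands in for it, and all key names are illustrative.

```python
# Sketch of the offloading idea: boards mirror mux state into a key-value
# store so models elsewhere can read it without bus access. A dict stands in
# for Redis here; the key scheme "mux:<rt>:<subaddress>" is made up.

class MuxStateStore:
    def __init__(self, backend=None):
        self.backend = backend if backend is not None else {}

    def publish(self, rt_address, subaddress, words):
        # Board side: mirror the latest data words seen on the bus.
        self.backend[f"mux:{rt_address}:{subaddress}"] = list(words)

    def read(self, rt_address, subaddress):
        # Offloaded side: fetch the mirrored state; returns None if unseen.
        return self.backend.get(f"mux:{rt_address}:{subaddress}")


store = MuxStateStore()
store.publish(rt_address=5, subaddress=3, words=[3000, 0])  # on the board
print(store.read(rt_address=5, subaddress=3))               # on an offloaded machine
```

With real Redis behind the same interface, the `publish`/`read` calls would map onto ordinary key set/get operations, which is what let most simulation modules move off the overloaded boards while the boards kept running only the OFP code.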