Insights

Perspectives on AI, technology, and compliance transformation to help you move faster, smarter, and with more clarity.

featured insights

May 1, 2026

The AI Futures Program: A 13-Week Program for High School Seniors and College Students

The thing that keeps me up at night is the future for our kids, and for the young adults trying to get jobs today. Entry-level job postings in the U.S. have declined by 35% since January 2023. Since the widespread adoption of generative AI, early-career workers (ages 22-25) in the most AI-exposed occupations have experienced a 16% relative decline in employment. This AI-everything world is like quicksand, shifting as we move. Where will the opportunity be?

Our internship program kicks off at the tail end of May, and we have the most interns we've ever had - a whopping 16. We dedicated ourselves to creating a program that will prepare these emerging professionals for whatever comes next, maybe a job with us, maybe their next year in college, who knows. Whatever it is, we want to help them be ready, while also adding value to PhoenixTeam company goals.

We hope to learn from this first group and roll out a broader set of (free) programs for employers as part of our AI Futures initiative. Feel free to take this program framework and use it in your organization. We'd also love to hear from you if you are doing something innovative in your work or have an idea to share.

The AI Futures Program

A program for preparing people - kids through early-career professionals - to think, work, and lead in a world being rewritten by AI.

The PhoenixTeam Internship Program

The goal of the PhoenixTeam summer internship program is to prepare young adults for whatever their next move toward productive employment is. It is designed for high school seniors, college-aged youth, and people with zero to two years of work experience.

The program runs thirteen weeks. The interns are broken into groups of between four and six people, and each group is a team that works together for the duration of the program. The team has a team leader, and this leader rotates every two weeks so that each member of the team has at least one opportunity to gain leadership experience. There are seven core areas to the program.

  1. AI education – every Tuesday, one of our AI educators provides one hour of hands-on instruction on artificial intelligence.
  2. Running a business – Wednesdays, a member of our team at any level (from associate to managing partner) provides real-world guidance on what it means to run a business, along with professional stories, lessons on failure, and an opportunity for questions and answers. We also invite speakers and leaders from other companies to join us on selected topics.
  3. Presentation and demonstration – Thursdays, on a rotating basis, three interns will present and showcase their individual projects and accomplishments. They will get feedback. This gives every individual their chance to learn presentation and demonstration skills. There is no competition at the individual project level.
  4. Teamwork and competition – Fridays, three teams present and demonstrate their team project, based on the leader’s guidance and leadership style. They will get feedback and compete as a team for a prize. At the end of the internship program, we will have a showcase showdown “shark tank” style where all teams demonstrate their final project and compete for prizes and awards.
  5. Portfolio building – each intern is part of a team, and that team has a group project to complete that is relevant to the team's business focus area. The focus area is assigned at the beginning of the program. Each intern also has an individual project that they select based on their interests. Both projects involve building something, which the interns will vibe code and package in their portfolio. When the interns leave, they will have a portfolio with three things in it – an individual project, a team project, and a case study written by them.
  6. Adding value to a business – each intern has a PhoenixTeam manager with whom they work to add value to the business. What the intern works on will be set by that manager, and the manager will meet with the intern regularly on their assignments. Their assignments will contribute to the goals of PhoenixTeam and add value to the organization.
  7. Leadership – interns alternate being the leaders of their team and learn what leadership is and how to lead. They will practice setting weekly and program level goals and guiding a team to achieve those goals.

The program culminates in a showcase showdown, where teams compete in front of a “shark tank” style panel. They present their project in seven minutes to the managing partners and demonstrate their product.

Week-by-Week AI Education


Week-by-Week Running a Business


We are really excited about making a real difference for this demographic; we talk about it all the time in our classes. It's time to do something real about it, and this is a first step.

Follow us for more updates as the AI Futures Internship Program gets underway.

By Tela Mathias, CTO and Chief Nerd and Mad Scientist at PhoenixTeam


April 28, 2026

PhoenixTeam Achieves SOC 2 Type II Compliance for Phoenix Burst

ARLINGTON, VA, UNITED STATES, April 28, 2026 -- PhoenixTeam today announced that both PhoenixTeam and Phoenix Burst, its generative AI platform, have achieved SOC 2 Type II compliance, marking an important step in the company’s continued focus on security, trust, and enterprise readiness. This milestone reflects PhoenixTeam’s commitment to building AI solutions that organizations can adopt with confidence.

For mortgage lenders, servicers, financial institutions, and federal housing stakeholders, trust is not optional. As AI becomes more embedded in compliance, operations, and technology workflows, organizations need solutions that help teams move faster without compromising the standards required in regulated environments. Phoenix Burst was built with that reality in mind.

“We built Phoenix Burst for environments where accuracy, accountability, and trust cannot be treated as afterthoughts,” said Tela Mathias, CTO of PhoenixTeam and CEO of Phoenix Burst. “SOC 2 Type II reflects the discipline behind our platform and the seriousness we bring to helping clients adopt AI in workflows that carry real operational and regulatory consequences.”

Established by the American Institute of Certified Public Accountants (AICPA), SOC 2 is a widely recognized standard for controls related to security, availability, processing integrity, confidentiality, and privacy. With support from Johanson Group LLP, a premier certification body specializing in SOC 2 audits across industries, the assessment evaluated how effectively PhoenixTeam’s controls operated over time against the AICPA’s Trust Services Criteria.

This achievement reflects PhoenixTeam’s broader commitment to building technology and AI solutions that are ready for real-world use in high-stakes environments. As organizations move beyond experimentation and toward operational adoption, PhoenixTeam remains focused on helping clients implement AI in ways that are practical, responsible, and built to hold up in production. SOC 2 Type II adds another layer of confidence for teams using Phoenix Burst to streamline execution, reduce friction, and modernize with greater trust.

About PhoenixTeam

PhoenixTeam is a woman-owned technology services firm headquartered in Arlington, Virginia, specializing in AI-powered mortgage operations and technology services for the mortgage and financial services industries and federal housing agencies. Our mission is to enable affordable and accessible homeownership through innovative, customer-centric technology. With a strong focus on generative AI, we tackle complex industry challenges, equipping businesses with cutting-edge tools that enhance innovation, efficiency, and compliance. By bridging the gap between technology and business teams, we strive to bring joy and purpose back to software development, making a meaningful impact in the lives of our clients and homeowners everywhere. For more information, please visit www.phoenixoutcomes.com.


April 27, 2026

A Continuous Mortgage Innovation Pipeline

What if we've been looking at technical debt all wrong? There is a page in The Thinking Machine about how the engineers from a company Jensen acquired got under the hood of the Nvidia code base. They were horrified. It was an obvious compromise, filled with workarounds, quick fixes, and hacky solutions. It was a code base laden with technical debt. Why? Because technical debt is a spoil earned by winners.

While this acquired company had been perfecting its code base, focusing on engineering perfection, Nvidia had been, well, winning. And their code base suffered. I think we see this today as well with some of the really successful companies - look at the Anthropic Claude Code code leak. All the criticism. For what? We should all be so lucky as to have a team that pushes as many valuable products and features as Anthropic does.

So what if our goal should be to incur as much technical debt as possible? Consider the following continuous innovation flow:

  1. Take the people with the actual problems - mortgage operators - and provide them with a deep experiential understanding of generative artificial intelligence. This includes all the foundations, how it all works, the risks, the rewards, and everything in between. They will come out of this process forever changed.
  2. Give them the tools they need to fix their own problems. This includes big-ass developer boxes, the Claude suite of products, and the hands-on education necessary to build great solutions. They know the problems they need to solve - this is the huge time saver. You don't have to teach them the mortgage domain; they already know it! You do, however, have to teach them to generate (or give them access to) a high quality synthetic test data set for their use cases. Great news here as well: they know the data, and the tools you gave them are excellent at generating synthetic test data, another win.
  3. Provision role-based, controlled access to the data they need for those solutions as MCP servers. The organization partners with the operators to understand what data they need, and to provision that data as MCP servers, removing the technical implementation specifics from the operators. Obviously, this is the long pole in getting the innovation pipeline up and running. The mortgage operators with the problems will have a "need to know" the data in question, and you will control the access, so there should not be a problem giving them controlled access to production data. Also keep in mind there will be a ton of use cases for which borrower data is simply not required. Yes, we have to be cautious here, but we don't have to be ridiculous about it.
  4. Provide an automated commit process controlled by the operators that moves the locally developed solutions to the organizational GitHub repository. This step kicks off the automated safety review.
  5. Perform a thorough, automated safety review. Yes, you can have a human in the loop here if you want. You can insert a chief software engineer here, but wouldn't it be better to just run an automated review? All the security scans and safety protocols you need can likely be automated. So automate them. Keep in mind - you have given them authorized tools and authorized datasets, and they are going to deploy to an internal innovation site. You have already systematically controlled for safety.
  6. For solutions that need access to highly sensitive data, successful completion of the safety review moves the solution to an integrated pre-production environment, where the inventors test it with production data. Again, this will not be every solution, but some solutions will have to go through pre-prod review as well.
  7. Once the safety review and pre-production review are complete, we automatically (or with a human in the loop) move these solutions to a role-based, access controlled internal innovation site. The inventors know who needs access, and those individuals can submit a simple request for access. That access review can be automated, or you can keep a human in the loop. You already provisioned the data here, and you tested the applications for safety. You know who can have access to what data, this can be automated.
  8. Now we can harness the value and the learnings and showcase the solutions. The onus is on the inventors to capture the value and document the learnings. We create an easy way for this to be captured, a simple wiki perhaps, and a leaderboard. Who found the most value? What is the best learning? These inventors also make up your innovation working group, so they are already meeting and sharing learnings. They will operationalize learnings, and they will find new problems to solve based on them. They will create exuberance among their teams and leaders through the (weekly? monthly?) innovation showcase.
  9. And finally, there will be solutions that we want to move to our enterprise solution base. Identify those solutions and put them into the regular (but accelerated with AI-assisted software engineering) process for enterprise release. Move as quickly as responsibly possible to enhance the core systems, but with less pressure. The inventors have already solved the problem, so you've bought yourself a little time.
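As a rough illustration of the gate in steps 4 through 7, a minimal sketch of an automated promotion decision might look like this. The manifest fields and the tool and dataset names are hypothetical assumptions for illustration, not an actual PhoenixTeam implementation:

```python
# Hypothetical sketch: decide where a committed solution goes next, based on
# the controls already in place (authorized tools, authorized datasets, scans).
# All names here are illustrative assumptions.

AUTHORIZED_TOOLS = {"claude-code", "internal-ide"}
AUTHORIZED_DATASETS = {"synthetic-loans", "servicing-prod-masked", "borrower-prod"}
SENSITIVE_DATASETS = {"borrower-prod"}  # requires pre-prod review (step 6)

def review(manifest: dict) -> str:
    """Return the next environment for a committed solution, or 'rejected'.

    manifest = {
        "tools": [...],        # tools used to build the solution
        "datasets": [...],     # datasets the solution reads
        "scans_passed": bool,  # automated security/safety scan results
    }
    """
    if not manifest["scans_passed"]:
        return "rejected"
    if not set(manifest["tools"]) <= AUTHORIZED_TOOLS:
        return "rejected"
    if not set(manifest["datasets"]) <= AUTHORIZED_DATASETS:
        return "rejected"
    # Solutions touching highly sensitive data go to pre-prod first (step 6);
    # everything else moves straight to the internal innovation site (step 7).
    if set(manifest["datasets"]) & SENSITIVE_DATASETS:
        return "pre-prod"
    return "innovation-site"
```

The point of the sketch is that because the tools and datasets were authorized up front, the promotion decision reduces to a few mechanical checks that are easy to automate, with or without a human in the loop.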

And the cycle continues. More data is safely provisioned, more learnings are learned, more value is created, more wins are showcased. Yes, more technical debt will have been created and will have to be supported and maintained. To the victors go the spoils.

Yes, I know this is an uncomfortable idea. Yes, we are heavily regulated and we do have to consider access to production data. Yes, this process will have to be battle-tested and tailored for your organization. But, again, these are mortgage operators, they already have access to this information, they know the problems because they are living with them every day. They are already working around these problems every day. Give them the power to solve those problems.

This continuous mortgage innovation pipeline turns the whole process on its head. It skips numerous steps in the product development lifecycle, not the least of which is teaching software engineers about mortgage. Let's face it, this is one of the most delay-laden steps in our current ways of working. So eliminate that step.

By Tela Gallagher Mathias, CTO and Chief Nerd and Mad Scientist at PhoenixTeam


April 14, 2026

Agentic AI in Financial Services

Some of you will remember last year's amazing insights from the team at Capital One and the great Prem Natarajan, PhD. Unfortunately I missed this year's Cap One presentation, but I did hear from the senior technology executives from RBC, Wells Fargo, and BNY. This panel talked through four key questions for financial services companies on agentic AI at scale.

  1. How are financial services companies moving agentic use cases to production?
  2. What special considerations are there for agentic AI in highly regulated spaces?
  3. What metrics are being used to measure return on investment?
  4. What predictions do you have about the next year in agentic AI?

How Are Financial Services Companies Moving Agentic Use Cases to Production?

At Royal Bank of Canada, they have "elevated" AI into a top enterprise strategic function while embedding its use cases into business lines. I'm sure there are tradeoffs here and I wonder how they handle the likely proliferation of business units bringing their own AI (BYOAI).

RBC Went Deep

The "showcase" AI use cases here were in the capital markets research space. Their core AI platform is called AIDEN, which started ten years ago as an AI-powered electronic trading system that used reinforcement learning (RL) to continuously learn from market conditions and optimize trades in real time. Yeah, that sounds hard. They have extended it to become a broader AI platform used across capital markets for trading algorithms, research automation, and insight generation for clients. To do agentic AI in the cap markets space at scale is a pretty impressive feat, even though the specific use cases implemented feel to me like fairly classic deep research agents, which are pretty mature.

BNY Went Wide

Bank of New York described their AI strategy as "AI for everyone, everywhere, for everything", meaning AI should be used by every employee for every interaction across every process, which is a pretty bold statement. I have questions about the actual viability of this as a practical strategy, but the point was well taken. I continue to think we will have many use cases and applications where the accelerated delivery of classic deterministic solutions is the right way to go.

They had some use cases that made my ears perk up, noting main agentic use cases in dynamic asset pricing, lock box processing, data insight generation, and, of course, data extraction. I have questions about the lock box one; I'm guessing it's exception research and illogical condition mining, not actually payment processing, but they did not get into this. They take a platform mindset, with use cases deployed across a range of AI platforms. It seemed to me that RBC went really deep in one area, where BNY went wide. Both are valid strategies, just different.

Wells Fargo Went Customer

Wells Fargo pointed out their biggest hurdle as the "verifiability of agent actions at scale" in an automated way, which is one I have heard from at least one enterprise leader in mortgage. They started their agentic journey with "Fargo", which began as a virtual assistant. They extended it into intent extraction and classification, then took baby steps toward specific agent capabilities like making a payment and initiating a claim. Now they have moved into personalized capabilities with an emphasis on the verifiability of results. WF noted that if we can verify the result, then we can automate the action.

What Special Considerations Are There for Agentic AI in Highly Regulated Spaces?

We really heard two main themes here - a verifiable risk management focus and culture.

Risk and Model Management

We heard multiple times about the need for verifiable risk management frameworks, evidence that the framework is in place, and actually being able to prove that a particular implementation followed the framework and the risk was mitigated. That sounds really hard. This tells me that it's not enough to have guardrails implemented at a use case level. Yes, that's important, but guardrails need to be part of a connected framework.

Think about providing evidence on some subset of 40,000,000 logins a day (yes, that's a real number): being able to prove that the risk management framework was followed, identifying which of these interactions were agentic, what use case each implemented, and the specific controls and guardrails triggered.
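One way that kind of evidence could become tractable is a structured audit record per interaction, so agentic traffic can be filtered out and tied back to the framework. A hypothetical sketch, with field names that are my assumptions rather than any bank's actual schema:

```python
# Illustrative sketch only: one possible shape for an evidenceable
# per-interaction audit record. Field names are assumptions.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class InteractionAuditRecord:
    interaction_id: str
    agentic: bool                      # was an agent involved at all?
    use_case_id: Optional[str]         # which registered use case it implemented
    controls_triggered: List[str] = field(default_factory=list)
    framework_version: str = "risk-framework-v1"  # governing framework version

def agentic_evidence(records):
    """Filter a day's interactions down to the agentic ones for evidence."""
    return [r for r in records if r.agentic]
```

With records like this, "prove the framework was followed" becomes a query over the log rather than a forensic exercise.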

So guardrails need to be part of a larger, evidenceable strategy - which is much harder than implementing risk management in a single use case. We heard from one leader that they have had the same standing weekly meeting with their legal and risk partners for 2.5 years. Imagine the discipline and forethought that takes. Wow. Every week for years.

We also heard about model management but, again, with a focus on the scale consideration of how model management fits into a broader systems-based approach. Evaluating, for each model choice, the degree to which the model increases the risk in the overall system or decreases it. Looking at the materiality of the risk, the complexity of the model, its degree of openness and how explainable it is (or is not - black box AI). Finally, looking at the degree of reliance upon AI within market infrastructure.

Culture

Of course we heard about process reimagination and the need to rethink the whole problem. Culture here means a deeply ingrained attitude toward rethinking a process as a whole. This was described as a move from the trend of "I have AI magic, where can I use it?" to "what does AI-native actually mean?". For example, one of the panelists described having 1,700 use cases defined in 2024, with AI "all over the place" and the need to aggregate.

I loved this statement, and I've heard it from the OCC as well - there is a big cost to playing it safe. The new comptroller of the currency expressed this sentiment last year as "not innovating is a big risk". I wonder if these attitudes are related. You need one perspective to allow the other. This statement came on the heels of another gem: "if you need to get really big things done, you need a small team with the right thinker/doer ratio and the right leader". And this one: "keep going - keep breaking glass".

We heard culture as the long pole in the tent, with one panelist noting that "99% of employees have gone through AI training". I'm not sure that's the greatest success measure (attendance is not the same as application), but I do have to give him credit for measuring something.

What Metrics are Being Used to Measure Return on Investment?

Bit of a mixed bag here, and what is impressive is that agentic AI was implemented at scale at all. Again, my $0.02 is that the use cases themselves are not profound. What is profound is how hard it must have been to get the use cases implemented across millions of interactions under intense regulatory scrutiny. Here is a list of all the metrics I heard about from these leaders:

  1. # people trained
  2. # agentic uses out of total interactions
  3. % reduction in time spent producing analysis
  4. Revenue per managing director
  5. Clients covered (out of 1500 total)
  6. Reduction in time spent producing credit memos (from 3-4 weeks to 1-2 days)
  7. Call "deflection," meaning inquiries contained that did not become calls - Wells Fargo noted that in the 1.5 years they have had Fargo, they have seen 500M Fargo interactions, with between two and three million calls deflected.
  8. Complex account opening from 18 days to 15 minutes
  9. 25% increase in developer "burn rate"
  10. # platform capabilities
  11. Training breadth
  12. Degree of organizational enablement
  13. How fast a use case is implemented from concept to cash
  14. Risk capabilities implemented in the automated pipeline

What Predictions Do You Have About the Next Year in Agentic AI?

Themes here were continuous agentic loops, process reimagination, and pace of change.

  1. Agentic loops. Certainly this is an extension of the OpenClaw mania that we heard so much about throughout the whole conference. The idea that we will see more and more agents that loop continuously, and based on self-reflection, improve themselves.
  2. Process reimagination. The statement here was "banks that leverage and redesign with agents will pull away from the pack in the next year".
  3. Pace of change. As training and inference converge, the pace of change will be even faster. That we will see the "bank as a system that learns every day and becomes even more personalized".

What this Means to Us in Mortgage

This was a really cool panel because it was real people talking about real, regulated AI at real scale. Without a ton of time to reflect this morning, I think my main takeaway for us is that the use cases people (are willing to) talk about at scale are not super profound, which is not to say they are uninteresting or trivial to implement. What I heard was a lot of useful, customer or operator enhancing solutions to problems we have today. With all the agent mania, I think the message here is, again, to scope the problem tight and iterate. I will also say again that this is not where the differentiation will ultimately come from.

We really have two paths to take simultaneously. The first is about being faster. Whatever we do today, do it faster. For a time, there will be advantage here. At the same time, we have to look at redesigning a mortgage ecosystem that is specifically designed to be accelerated. Just like the innovation of accelerated computing changed everything, so too will accelerated mortgage change everything. Mortgage is designed to be slow. It's a big, hairy decision where a lot of money changes hands and dreams are made. It's heavily regulated. The processes and technology are outdated. These are all natural throttles to respect and overcome, but this acceleration - native acceleration - is where the real edge will come from.

By Tela Mathias, CTO and Chief Nerd at PhoenixTeam

Artificial Intelligence

The AI Futures Program: A 13-Week Program for High School Seniors and College Students

May 1, 2026

Artificial Intelligence
Our Company

PhoenixTeam Achieves SOC 2 Type II Compliance for Phoenix Burst

April 28, 2026

Artificial Intelligence

A Continuous Mortgage Innovation Pipeline

April 27, 2026

Artificial Intelligence

Agentic AI in Financial Services

April 14, 2026

Artificial Intelligence

From vision to value: How AI is taking shape in mortgage

April 2, 2026

Artificial Intelligence

Introducing Large Language Models to Traditional Machine Learning Operations

March 31, 2026

Artificial Intelligence

GTC Insights | Visual AI Agents for Real Time Video Understanding

March 25, 2026

Artificial Intelligence

GTC Insights | Open Source and Agentic Development

March 23, 2026

Artificial Intelligence

GTC Depression - Jensen's Keynote

March 22, 2026

Artificial Intelligence

Process and Technical Hurdles to Achieve the Mortgage AI Future

March 10, 2026

Artificial Intelligence

The Homogenization of Everything

February 13, 2026

Artificial Intelligence

The Scary and Very Odd Thing That Happened Last Week

February 3, 2026

Our Company

PhoenixTeam Names Laura MacIntyre as Partner, Expanding Leadership for the Next Chapter of Growth

January 29, 2026

Artificial Intelligence

Getting Started with Vibe Coding in Four Steps

January 28, 2026

Artificial Intelligence

Why AI Adoption Is a Human Opportunity in Addition to a Technical One

January 22, 2026

Artificial Intelligence

Leading When Digital is Cheap and AI Slop is Everywhere

January 20, 2026

Artificial Intelligence

How to Get Agents with Agency

January 6, 2026

Artificial Intelligence

Why 2026 could be the year mortgage AI delivers

January 1, 2026

Artificial Intelligence

Freddie Mac Bulletin and Executive Order Implications on Mortgage AI

December 13, 2025

Artificial Intelligence

Eleven Reasons Why the Mortgage Industry Isn't Further Along with GenAI Adoption

December 8, 2025

Artificial Intelligence

Looking for ROI in All the Wrong Places

November 17, 2025

Artificial Intelligence

What does the infamous "MIT study" really mean to us in mortgage?

October 28, 2025

Our Company

Blue Phoenix Awarded $215 Million VA Loan Guaranty DevSecOps Contract | A PhoenixTeam and Blue Bay Mentor-Protégé Joint Venture

October 3, 2025

Artificial Intelligence
Our Thoughts

Ten Not Very Easy Steps to Achieve AI Workforce Transformation

September 29, 2025

Our Thoughts

MISMO Fall Summit Recap: Our Take on the Summit, AI, and the Road Ahead

September 25, 2025


Towards Determinism in Generative AI-Based Mortgage Use Cases

September 22, 2025

Our Company

PhoenixTeam Achieves SOC 2 Compliance, Strengthening Security and Trust in Its Phoenix Burst GenAI Platform

September 9, 2025

Artificial Intelligence

My Journey with Claude Code and Running Llama 70b on My Mac Pro

September 4, 2025

Our Company

Tela Mathias recognized as a 2025 HousingWire Vanguard

September 2, 2025

Our Thoughts
Artificial Intelligence

Top Ten Insights on GenAI in Mortgage

August 25, 2025

Our Company

PhoenixTeam Awarded $49M Contract to Modernize USDA’s Guaranteed Underwriting System, Expanding Rural Homeownership

August 18, 2025


Case Study: The Messy and Arduous Reality of Workforce Upskilling for the AI Future

July 28, 2025

Artificial Intelligence

What is uniquely human? AI impacts on the workforce.

July 7, 2025

Artificial Intelligence

251-Page Compliance Change in Hours, Not Months

June 20, 2025

Our Thoughts
Artificial Intelligence

The Medley of Misfits – Reflections from Day 2 at the AI Engineer World’s Fair

June 5, 2025

Our Thoughts
Artificial Intelligence

AI Engineer World’s Fair: What We’re Seeing

June 4, 2025

Artificial Intelligence

Departing from Determinism and into the Stochastic Mindset

May 30, 2025

Artificial Intelligence

The Agents Are Here and They Are Coming for our Kids

May 6, 2025

Our Company

Built for What’s Next: Welcome to the New PhoenixTeam Website

April 29, 2025

Artificial Intelligence
Our Company

Phoenix Burst Honored with MortgagePoint Tech Excellence Award for GenAI Compliance Innovation

April 7, 2025

Artificial Intelligence
Our Thoughts

From Trolling to Subscribing – An Alternative to Compliance Insanity

March 10, 2025

Artificial Intelligence
Our Thoughts

Supercharge LLM Performance with Prompt Chaining

February 24, 2025

Artificial Intelligence
Our Thoughts

From Program Management to Program Efficiency and Innovation

February 20, 2025

Artificial Intelligence
Our Thoughts

The Evolution of Service Level Agreements: Why AI Evaluations Matter in Mortgage

February 12, 2025

Artificial Intelligence
Our Thoughts

An Impassioned Plea for AI-Ready Mortgage Policy Data

February 3, 2025

Artificial Intelligence

The Role of Mortgage Regulators in Generative AI

January 28, 2025


PhoenixTeam Announces Partnership with Mortgage Bankers Association to Offer GenAI Education for Mortgage Professionals

January 13, 2025

Artificial Intelligence
Our Thoughts

Calculating AI ROI in Mortgage: Strategies for Success

December 16, 2024

New Contract
Our Company

PhoenixTeam Awarded $5 Million Contract for HUD Section 3 Reporting System Modernization

November 25, 2024

Artificial Intelligence
Our Thoughts

The History of Artificial Intelligence in Mortgage

November 21, 2024

Artificial Intelligence
Our Thoughts

Adoption of GenAI is Outpacing the Internet and the Personal Computer

November 8, 2024

Artificial Intelligence
Our Company
Phoenix Burst

PhoenixTeam Launches Phoenix Burst — A Generative AI Platform to Accelerate the Product and Change Management Lifecycle

October 22, 2024

Artificial Intelligence
Our Thoughts

Application of Large Language Model (LLM) Guardrails in Mortgage

October 17, 2024

Artificial Intelligence

Where is GenAI Going in Mortgage?

September 9, 2024

Recognitions
Our Company

Tela Mathias Wins HousingWire's Vanguard Award

September 4, 2024

Accessible AI Talks
Artificial Intelligence
Our Thoughts
Phoenix Burst

Accessible AI Talks | Episode 9 | The Lindsay Bennett Test: A Live Assessment of Phoenix Burst with a Product Leader

July 19, 2024

Artificial Intelligence
Our Thoughts

A Practical Approach to AI in Mortgage

July 18, 2024

Accessible AI Talks
Artificial Intelligence
Our Thoughts
Phoenix Burst

Accessible AI Talks | Episode 8 | Inside Phoenix Burst: Transforming Software Development with AI

July 11, 2024

Accessible AI Talks
Our Thoughts
Phoenix Burst

Accessible AI Talks | Episode 7 | Leading a Gen AI Team to Production

July 10, 2024

Artificial Intelligence
Our Thoughts
Phoenix Burst

AI Reflections After Getting Lots of Feedback

June 21, 2024

Our Company

PhoenixTeam Ranks Among Highest-Scoring Businesses on Inc.’s Annual List of Best Workplaces for 2024

June 18, 2024

Our Thoughts

MISMO Spring Summit 2024: Key Insights and Takeaways

June 18, 2024

Our Community

Join Us in Making a Difference: Hope Starts with a Home Charity Drive

June 3, 2024

Accessible AI Talks
Artificial Intelligence
Our Thoughts

Freeing the American People from the Bondage of Joyless Mortgage Technology

June 3, 2024

Accessible AI Talks
Artificial Intelligence
Our Thoughts
Phoenix Burst

Accessible AI Talks | Part 6 | The Role of AI in Solution Design

May 25, 2024

Accessible AI Talks
Artificial Intelligence
Our Thoughts
Phoenix Burst

Accessible AI Talks | Part 5 | The Problem of Product Design

May 22, 2024

Accessible AI Talks
Artificial Intelligence
Our Thoughts
Phoenix Burst

Accessible AI Talks | Part 4 | The Problem of Requirements

May 8, 2024

Artificial Intelligence
Phoenix Burst

What Is a Value Engineer?

May 7, 2024

Accessible AI Talks
Artificial Intelligence
Our Thoughts
Phoenix Burst

Accessible AI Talks | Part 3 | Problem of Shared Understanding

May 7, 2024

Accessible AI Talks
Artificial Intelligence
Our Thoughts
Phoenix Burst

Accessible AI Talks | Part 2 | The Imagine Space and More

May 6, 2024

Accessible AI Talks
Artificial Intelligence
Our Thoughts
Phoenix Burst

Accessible AI Talks | Part 1 | Introduction with Guest: Brian Woodring

May 6, 2024

Artificial Intelligence
Our Thoughts
Phoenix Burst

Storyboards Matter: Three Insights for Using AI to Accelerate Product Design and Delivery

May 3, 2024

Our Company

PhoenixTeam Designated as One of MISMO's First Certified Consultants, Shaping the Future of Mortgage Industry Standards

April 24, 2024

Artificial Intelligence
Our Thoughts
Phoenix Burst

The Two Rules of Gen AI

April 18, 2024

Artificial Intelligence
Our Thoughts
Phoenix Burst

Three Simple Steps to Kickstart your AI Journey Today

April 16, 2024

Artificial Intelligence
Our Thoughts
Phoenix Burst

The Peanut Butter and Jelly Sandwich AI Experiment

April 16, 2024

Artificial Intelligence
Our Thoughts
Phoenix Burst

Starting Our AI Journey

April 10, 2024

Our Company

PhoenixTeam CEO and COO Make Inc.’s 2024 Female Founders List

April 10, 2024

Our Company

PhoenixTeam has earned its spot on Inc. 5000 Regionals: Mid-Atlantic

March 4, 2024

Our Company
Our Thoughts

Key Insights and Takeaways from MISMO Winter Summit 2024

January 29, 2024

Our Company
Our Thoughts

Leader of the Year Interview with Jacki Frazer

January 18, 2024

Our Company
Our Thoughts

Why Phoenix — Shawn Burke

December 28, 2023

Agile
Our Thoughts

PhoenixTeam at Agile + DevOps East 2023: Key Insights and Takeaways

November 17, 2023

Spotlights
Our Company
Our Thoughts

Why Phoenix — Vicki Withrow

November 3, 2023

Our Thoughts

Breaking Barriers: The Extraordinary Woman Who Redefined Workplace Equality

October 20, 2023

Our Company

PhoenixTeam: In the Arena

October 12, 2023

Agile
Our Company

Defining the Word “Done” The Phoenix Way

September 14, 2023

Product
Our Company

PhoenixTeam Approved as DOD SkillBridge Partner to Help Active-Duty Military Service Members Re-Enter Civilian Workforce

August 18, 2023

Recognitions
Our Company

PhoenixTeam featured on Inc. 5000 List of America’s Fastest-Growing Private Companies for the 4th Consecutive Year!

August 15, 2023

Agile
Our Company

Military Veteran’s Transition from Active Duty to Civilian Life as Lean-Agile Methodologist and Coach

August 1, 2023

Product
Recognitions
Our Company
Our Work

Veteran Founded Technology Venture Blue Phoenix Expands Reach with GSA IT-70 Award

July 24, 2023

New Contract
Our Company
Our Work

PhoenixTeam Begins New Partnership with HUD for FHA Catalyst

June 9, 2023

Our Company

PhoenixTeam is Excited to Announce Becky Griswold as its Newest Partner

June 1, 2023

Product
Our Work

PhoenixTeam proves value realization begins with product discovery

March 30, 2023

New Contract
Our Company
Our Work

PhoenixTeam Strengthens Partnership with U.S. Department of Agriculture

March 29, 2023

Recognitions
Our Company

PhoenixTeam Featured on 2023 Inc. Regionals Mid-Atlantic for Third Consecutive Year

February 28, 2023

Recognitions
Our Company

PhoenixTeam Announces 2022 Annual Company Award Winners!

January 30, 2023

Recognitions
Our Community

PhoenixTeam Ranks #15 on Washington Business Journal’s 2022 Fastest Growing Companies

October 21, 2022

Recognitions
Our Community

Fortune and Great Place to Work® Rank PhoenixTeam #29 2022 Best Workplaces in Technology™

September 7, 2022


© 2026 PhoenixTeam. All rights reserved.   |   Privacy Policy   |   Terms of Use