Meet LuminosAI: AI risk management built by lawyers, for lawyers
LuminosAI builds AI governance and risk management software to help enterprises understand, test and control the collective risks of their AI systems. As every company races to become an “AI company,” tools like Luminos determine whether products ship quickly and safely or expose the business to discrimination, regulatory and reputational landmines.
TLDR
- LuminosAI is an AI governance and risk management platform built by lawyers for lawyers.
- The company targets the growing, regulation-driven need for AI oversight.
- Its flagship product, Lighthouse, provides governance for generative and agentic AI, translating complex, evolving laws into automated workflows that help in-house legal and risk teams ship AI products faster and more safely.
Why M13 invested in LuminosAI: AI governance infrastructure for enterprise
Every company now wants to be an AI company, but very few are prepared to manage the legal, regulatory and brand risk of putting AI into production. That gap between ambition and accountability is exactly where LuminosAI operates.
LuminosAI is connecting those who build AI and those responsible for ensuring it doesn’t harm users or blow up a company’s risk profile.
M13 partner Win Chevapravatdumrong said, “LuminosAI gives legal teams the infrastructure to automate AI review at scale so they can enable innovation instead of slowing it down. When compliance becomes embedded in the product lifecycle, speed and governance are no longer tradeoffs. Luminos helps legal and privacy teams contribute a competitive advantage.”
The topic is all too real for Chevapravatdumrong, who joined M13 after serving as General Counsel at MasterClass and Senior Counsel at Hulu. What stood out about Luminos co-founders Andrew Burt and Mike Schiller is their deep AI legal expertise combined with proven enterprise software experience. Burt co-authored the US government's AI rulebook, NIST's AI Risk Management Framework, and is now building tools to operationalize it.
AI risk management is broken — and getting more complex
Despite the hype around generative AI, much of what's running in production today is classic predictive AI. These systems rank, score and recommend, determining who gets a loan, which job applicants move forward, which patients are flagged for care and what prices customers see. The impact is real, and so is the risk.
“The market for AI governance is full of smoke and mirrors,” Burt said. “There are a lot of companies that do AI governance but few can define what that means.”
Legal, privacy, and risk teams are being asked to do much more, much faster, with tools that don’t match the stakes or complexity. The result is growing exposure to:
- Algorithmic bias and discriminatory outcomes
- Violations of existing laws that long predate AI, from anti‑discrimination to consumer protection
- Brand and reputational damage when AI systems behave unpredictably
These risks are not theoretical. Burt's team has worked on everything from facial recognition systems that risk reinforcing racial bias to healthcare, employment screening and insurance pricing algorithms that influence millions of decisions daily. AI governance failures can have wide-reaching consequences.
The regulatory landscape: EU AI Act, US state laws, fragmented compliance
At the same time, the regulatory environment is becoming more complex and fragmented.
The European Union AI Act is moving ahead. In the US, the federal government has struggled to pass comprehensive AI laws. States like California, Colorado, Utah and Texas are stepping in, mirroring the path seen with privacy and GDPR. The result is a patchwork of obligations that most companies — and even most tools calling themselves “AI governance” — are not equipped to manage.
“Texas is getting very involved, which suggests this is less partisan than privacy,” Burt said.
AI governance infrastructure for legal and engineering teams
LuminosAI is building AI governance infrastructure for the people accountable for AI decisions — general counsels, privacy teams, compliance officers — and for the data and engineering teams that need to ship AI products.
At the center of the platform is Lighthouse, LuminosAI's flagship generative and agentic AI solution. Lighthouse reviews and governs AI outputs across text, audio, video, images and other multimodal content.
Rather than relying on static checklists or policy documentation, LuminosAI automates AI risk detection. It goes further by testing models and data for real risk factors — bias, safety issues, policy violations — and tying those findings back into legal and compliance workflows.
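To make that concrete, here is a minimal sketch of the kind of automated bias test such a platform might run against a model's decisions. The metric itself (demographic parity difference) is a standard fairness measure, but the function, the `Finding` type and the threshold are hypothetical illustrations, not LuminosAI's actual API:

```python
# Hypothetical sketch of an automated bias check, not LuminosAI's API.
# Demographic parity difference: the gap in positive-decision rates
# between demographic groups, a common fairness metric in lending and hiring.
from dataclasses import dataclass

@dataclass
class Finding:
    metric: str
    value: float
    threshold: float
    passed: bool

def demographic_parity(decisions: list[int], groups: list[str],
                       threshold: float = 0.10) -> Finding:
    """Flag the model if positive-decision rates across groups differ by
    more than `threshold` (an illustrative policy limit, not a legal standard)."""
    rates = {}
    for g in set(groups):
        outcomes = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(outcomes) / len(outcomes)
    gap = max(rates.values()) - min(rates.values())
    return Finding("demographic_parity_difference", gap, threshold, gap <= threshold)

# A failing Finding would be routed into the legal/compliance workflow
# (a ticket or review queue) rather than silently logged.
finding = demographic_parity(
    decisions=[1, 0, 1, 1, 0, 1, 0, 0],
    groups=["a", "a", "a", "a", "b", "b", "b", "b"],
)
print(finding)  # gap of 0.50 exceeds 0.10, so passed=False
```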
Many AI governance tools stop at process. LuminosAI focuses on enforcement.
LuminosAI unites builders and overseers in one place. Historically, engineers build AI and lawyers review it in separate systems, connecting only in the occasional meeting about ethics. Luminos brings these functions together so the people building AI and the people overseeing risk share one source of truth and a set of automated controls.
“It does things that no other solution does,” Burt said. “We've seen customer interest at the ‘let's not get sued’ level and ‘let's focus on legal risk.’”
Lighthouse: continuous AI risk monitoring embedded in the development lifecycle
Lighthouse offers continuous AI risk monitoring infrastructure.
Early customers use Lighthouse to move beyond reactive compliance and into proactive governance. For example, it ensures that GenAI chatbots maintain consistent tone and don’t drift into off‑brand or harmful responses. “We're able to review any type of output, which means the use cases for Lighthouse continue to expand,” Burt said.
Critically, Lighthouse is embedded directly into the AI development lifecycle. One enterprise customer runs daily automated reviews of GenAI systems with LuminosAI integrated into their CI/CD pipeline. Governance becomes part of the core infrastructure of AI deployment.
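As a concrete illustration of that pattern, below is a minimal, hypothetical sketch of a daily governance gate wired into a CI/CD pipeline. The keyword rules and script are stand-ins (a real deployment like the one described would use trained review models and LuminosAI's own integration, which isn't shown here); the point is that a failing check exits nonzero and blocks the deploy:

```python
# Hypothetical sketch of a daily governance gate in CI, not LuminosAI's
# actual integration: it scans a batch of recent chatbot outputs for
# off-brand or disallowed content and fails the pipeline on violations.
import sys

# Illustrative brand/policy rules; a real system would use trained
# classifiers rather than a keyword list.
BANNED_PHRASES = {"guaranteed returns", "medical diagnosis", "legal advice"}

def check_outputs(outputs: list[str]) -> list[str]:
    """Return human-readable violations found in the batch."""
    violations = []
    for i, text in enumerate(outputs):
        lowered = text.lower()
        for phrase in BANNED_PHRASES:
            if phrase in lowered:
                violations.append(f"output {i}: contains banned phrase {phrase!r}")
    return violations

if __name__ == "__main__":
    # In a real pipeline these samples would be pulled from production logs.
    samples = [
        "Happy to help you reset your password.",
        "This plan offers guaranteed returns!",  # should be flagged
    ]
    problems = check_outputs(samples)
    if problems:
        print("Governance check failed:")
        print("\n".join(problems))
        sys.exit(1)  # nonzero exit blocks the CI/CD stage from deploying
    print("Governance check passed")
```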
While the current focus is automating governance for customers, Burt and his team are working to make Lighthouse easier to use and increasingly self-service.
From AI law to AI infrastructure: the origin story of LuminosAI
LuminosAI emerged from a career that blends law, data science and hands-on risk management, and from a conviction that AI risk demands dedicated tools rather than generic tech solutions.
Burt came out of Yale Law School with a passion for what lay at the intersection of law and technology. He went to work in the FBI’s cyber division, often as “the only lawyer in a room full of data scientists.”
“I was the connective tissue between the people building and using AI and using cool data,” Burt said. “And then between the lawyers and the risk people who needed to understand what the restrictions were and how to make sure that bad things don't happen with that data.”
After that, he was an early employee at Immuta, the billion-dollar data governance company, where he helped build a platform connecting siloed data sources. Burt called this “a big, big issue in the intelligence community and a super innovative way of connecting the data.”
Burt left Immuta in 2019 and started Luminos.Law, a law firm focused solely on AI risk. While in that role, he helped develop the National Institute of Standards and Technology's AI Risk Management Framework.
The pain that prompted the next step was practical and financial: clients were happy with the legal work, but they couldn't pay for all of the advisory tasks it required, so the team built software to scale governance.
This pivot from services to software let the company build central governance components for large enterprises and address broad, systemic AI risk challenges, not just point problems. LuminosAI spun off as a standalone company, co-founded with former colleague and Immuta co-founder Mike Schiller, to translate that governance expertise into scalable software; Burt sold Luminos.Law to ZwillGen.
The future of AI governance: from workflow to infrastructure
LuminosAI’s mission is to make AI safer, more trustworthy and more widely usable by giving organizations the tools to manage AI risk at scale. Burt believes that Lighthouse will become a foundational product that customers aren’t just using but embedding into their systems and processes.
For Chevapravatdumrong, this is not a surprise. He said, “I love being able to deploy on the right side of safe and correct usage of AI.
“Businesses need to understand that it's not optional anymore, in the same way that privacy wasn't optional the second you collected any information about a person. LuminosAI is helping you do that and helping you ship products faster than your competitors. I appreciate when legal, compliance and risk can accelerate the growth of a business rather than become bottlenecks. That's exciting to us as investors.”
In turn, Burt and his team are excited to work with M13, saying that “unlike other VCs, Win is a lawyer and understands that there's a huge market, giving him and M13 a very unique window into the company and the problem set that we're building.”
What this means for founders, technologists and investors
- AI governance is no longer optional: Any company deploying AI, especially customer-facing or high-stakes systems, needs structured risk management to avoid legal, regulatory and reputational blowback.
- In-house legal teams can become product accelerators, not bottlenecks: With tools like LuminosAI, lawyers can automate much of AI review, helping ship compliant products faster instead of slowing launches down.
- Regulation will only get more complex from here: With the European Union's AI Act and a growing patchwork of US state laws, organizations that don't operationalize AI risk now will struggle to adapt later.
- Generative AI requires oversight beyond “just compliance”: Guardrails now have to cover tone, brand alignment and user impact, areas that demand continuous and automated monitoring, not one-off reviews.
Read more about LuminosAI
- Inside the Incident, a series where Andrew Burt breaks down real-life AI incidents in the news
- Why Legal Hurdles Are the Biggest Barrier to AI Adoption
- A Data Scientist and a Lawyer Walk Into… Two Different Bars
- D.C. startup Luminos raises $7.75M to tackle AI regulatory compliance challenges
- New AI legal risk company spins off from DC law firm
- Understanding the “Big Beautiful Bill's” Impact on AI Liabilities
- The Privacy Advisor Podcast: Alex LaCasse talks AI governance with Brenda Leong and Andrew Burt
Follow LuminosAI
- Learn more at www.luminos.ai or www.luminos.ai/resource-center.
- Get a demo at www.luminos.ai/demo or attend Luminos Live.
- Follow the company on LinkedIn at www.linkedin.com/company/luminos-ai.
- Follow Andrew Burt at www.linkedin.com/in/andrew-burt.
- Check out Inside the Incident, Andrew Burt’s video series discussing real-life AI incidents in the news.
The views expressed here are those of the individual M13 personnel quoted and are not the views of M13 Holdings Company, LLC (“M13”) or its affiliates. This content is for general informational purposes only and does not and is not intended to constitute legal, business, investment, tax or other advice. You should consult your own advisers as to those matters and should not act or refrain from acting on the basis of this content. This content is not directed to any investors or potential investors, is not an offer or solicitation and may not be used or relied upon in connection with any offer or solicitation with respect to any current or future M13 investment partnership. Past performance is not indicative of future results. Unless otherwise noted, this content is intended to be current only as of the date indicated. Any projections, estimates, forecasts, targets, prospects, and/or opinions expressed in these materials are subject to change without notice and may differ or be contrary to opinions expressed by others. Any investments or portfolio companies mentioned, referred to, or described are not representative of all investments in funds managed by M13, and there can be no assurance that the investments will be profitable or that other investments made in the future will have similar characteristics or results. A list of investments made by funds managed by M13 is available at m13.co/portfolio.