Our Commitment to the Responsible Use of AI

At Mursion, we use AI to help power parts of our simulation experience, feedback, and internal workflows. We believe AI enables us to offer innovative solutions for our customers, and commit to managing it transparently and with care. 

We hold ourselves accountable for the responsible use of AI. Humans are responsible for our systems, including AI-enabled systems. We use cross-functional oversight, a risk-based review process, high-quality evaluation, strong data protection standards, and continuous improvement. We base our practices on these principles:

  1. Be clear to everyone.
    We are clear in the product experience about where and how AI is used. We are clear in documentation about how we manage AI responsibly and the details of its usage.
  2. Protect user data.
    No simulation data are used for model fine-tuning or training. One customer’s data is never used in another customer’s simulations or evaluations.
  3. Start with human oversight.
    Design begins with humans. We assume AI outputs are not perfect, and we continuously run evaluations against human-designed intent to monitor effectiveness.
  4. Evaluate, monitor, and improve.
    We use monitoring, testing, and review processes to understand how our systems are performing and where they may need adjustment, and we act quickly. We evaluate and monitor third-party systems carefully to ensure they comply with our own standards. We treat the use and management of AI as an ongoing practice that evolves alongside AI and how we apply it.

AI governance

We maintain a detailed internal governance framework that guides how AI-enabled capabilities are reviewed, launched, and managed across the company. Our approach includes:

  • Cross-functional oversight of AI-enabled systems
  • A risk-based review process for new or expanded use
  • Human oversight for high-impact applications
  • Monitoring and testing for quality and reliability
  • Strong protections around customer data
  • Careful evaluation of third-party AI providers
  • Continuous improvement as technology and expectations evolve

We do not use customer data to train AI models.

Working with third-party providers

Some parts of our AI stack rely on third-party models or infrastructure. We aim to make thoughtful choices about the providers we use, to avoid surprises in production, and to ensure we can change providers when necessary to improve quality for our customers. We evaluate those providers carefully and review them over time, considering factors such as:

  • Privacy and data handling commitments
  • Quality and fit for the use case
  • Reliability and operational performance
  • Security and compliance practices
  • How changes are introduced and managed