California’s “Physicians Make Decisions Act” Regulates the Use of AI in Insurance Decisions

The medical profession is using artificial intelligence (AI) in many ways. Many of the advances in AI can help tackle some of medicine’s more complicated problems. We’ve written about how AI is being used to help doctors communicate with their patients, diagnose diseases, develop quality treatments, and provide other benefits.

Pharmaceutical companies need to understand how the FDA regulates new drugs, medical devices, and other medical products that use AI. Doctors need to understand that a patient’s privacy and medical needs must come first. There are many new compliance issues that medical developers and practitioners need to understand.

At its core, AI is software that continues to learn and improve as it acquires more data, enabling it to analyze information on a large scale.

California’s “Physicians Make Decisions Act”

While AI has many potential medical advantages, there are legitimate concerns about how AI makes decisions. One such concern was addressed in a new law, Senate Bill 1120 (SB 1120) – known as the Physicians Make Decisions Act. The law requires that humans oversee the use of AI in medical care.

Senate Bill 1120 requires that health care service plans or disability insurers that use AI, algorithms, or other software for utilization review (reviewing claims and prior authorization requests) ensure those determinations are “fairly and equitably applied,” as specified in the new law. Mistakes or inequities in insurers’ use of AI can result in negative health outcomes and even death.

According to California State Senator Josh Becker, who authored the bill, the Physicians Make Decisions Act requires that licensed health care providers – not artificial intelligence programs – make decisions about medical care. Senator Becker states that “Artificial intelligence has immense potential to enhance healthcare delivery, but it should never replace the expertise and judgment of physicians.”

SB 1120, sponsored by the California Medical Association (CMA), sets a national precedent for using AI responsibly.

The Physicians Make Decisions Act was signed by Governor Gavin Newsom and is effective as of January 1, 2025.

The specific requirements of the Physicians Make Decisions Act

The new law provides that a licensed physician or a qualified health care provider “with expertise in the specific clinical issues at hand” must review any delay, modification, or denial of care based on “medical necessity.” The law “establishes fair and equitable standards for companies using AI in their utilization review processes, preventing improper or unethical practices.” The medical professional’s decisions “shall be communicated to the provider and the enrollee pursuant…”

Some of the key provisions of the law include the following:

The artificial intelligence, algorithm, or other software tool that health care service plans use:

  1. Must base its determination on the following information, as applicable:
    (i) An enrollee’s medical or other clinical history.
    (ii) Individual clinical circumstances as presented by the requesting provider.
    (iii) Other relevant clinical information contained in the enrollee’s medical or other clinical record.
  2. Should not “base its determination solely on a group dataset.”
  3. Should not “supplant health care provider decision-making.”
  4. Should not “discriminate, directly or indirectly, against enrollees in violation of state or federal law.”
  5. Should be reviewed periodically “to maximize accuracy and reliability.”
  6. Should not directly or indirectly harm the enrollee.
  7. Should not be used beyond its intended and stated purpose – and must be consistent with the Confidentiality of Medical Information Act and the federal Health Insurance Portability and Accountability Act of 1996.
  8. Should be open to “inspection for audit or compliance reviews” pursuant to applicable federal and California law.
  9. Should comply with other conditions set forth in the statute.

The law defines artificial intelligence as:

“An engineered or machine-based system that varies in its level of autonomy and that can, for explicit or implicit objectives, infer from the input it receives how to generate outputs that can influence physical or virtual environments.”

SB 1120 establishes that coverage decisions based on “medical necessity” must be supervised and determined by a licensed physician or a qualified healthcare professional. While AI algorithms can be used to assist in decision-making, they cannot be the sole authority on approving or denying claims. This law ensures that clinical judgment rooted in individual patient needs remains central to the process.

“An algorithm cannot fully understand a patient’s unique medical history or needs, and its misuse can lead to devastating consequences,” Becker said. “This law ensures that human oversight remains at the heart of health care decisions, safeguarding Californians’ access to the quality care they deserve.”

Additional features of the Physicians Make Decisions Act

According to Live Insurance News, the new law provides new timelines for various medical requests:

  • Standard authorization requests must be processed within five business days.
  • Urgent or critical cases require attention within 72 hours.
  • Retrospective reviews, applicable to already-delivered medical services, must be completed within 30 days.

The aim of the law is “to promote transparency and maintain public trust in health insurance systems.”

The law involves various California departments, including the California Department of Managed Health Care, the California Department of Health Care Services, and the California Department of Insurance. For example, the California Department of Managed Health Care will monitor “compliance, conduct audits, and hold health insurers accountable.” Violations of the law can result in fines.

The law was enacted, in part, because “class-action lawsuits against major insurers like UnitedHealthcare and Cigna have brought to light cases where algorithms were used to deny claims improperly or unfairly prioritize cost savings over patient care.” According to Live Insurance News, there is research to support the conclusion that reliance on AI can “disproportionately impact specific demographics or fail to account for unique patient conditions.”

According to State Senator Josh Becker, 19 other states are reviewing similar legislation. There is also the potential for a federal AI insurance law.

California’s new Physicians Make Decisions Act limits insurance companies’ ability to rely on AI to make medical coverage decisions. The law requires that medical professionals oversee insurance decisions that use AI, and it sets specific requirements for when and how AI can be used in making those decisions. Medical providers need to understand this law to ensure their patients receive the medical care they need. Medical providers should also review how AI affects every aspect of their medical practice.

Medical providers should contact Cohen Healthcare Law Group, PC, to discuss their compliance requirements regarding California’s new Physicians Make Decisions Act and the use of artificial intelligence in their medical practice. Our experienced healthcare attorneys advise medical practices and other healthcare businesses about healthcare compliance laws and regulations.


Contact our healthcare law and FDA attorneys for legal advice relevant to your healthcare venture.

