Every quarter, our management liability team provides noteworthy trends and emerging issues to help US-based companies make decisions to manage their risks. We will cover topics related to directors and officers (D&O) liability, employment practices/wage & hour liability, and fiduciary liability risks and share insights on building an effective, customized insurance program that is fit for an evolving risk landscape.
The Biden Administration, and in particular the Securities and Exchange Commission (SEC), continued to broaden its regulatory reach through rulemaking and enforcement activity in the second quarter of 2023. Companies should take close note of the SEC’s actions, both to protect against the risk of regulatory action and to improve their underwriting profiles.
On May 3, the SEC voted in favor of new rules requiring companies to disclose information relating to share repurchase activity. In general, company boards and executives highlight share repurchase activity as a legitimate tool to maintain price stability and protect shareholders. However, there are those who believe there is often a more nefarious motive for repurchases, including to conceal company underperformance and unjustifiably prop up the company’s valuation.
Under the new rules, companies will be required to, among other things, disclose more detailed information about their share repurchase activity.
The United States Chamber of Commerce has filed a lawsuit challenging the SEC rules for failure to follow proper rulemaking procedures. The suit also claims a violation of the First Amendment for compelling a narrative around repurchases.
Additionally, the New York Stock Exchange and the NASDAQ have set a December 1, 2023 deadline for companies to comply with the SEC’s Dodd-Frank compensation claw-back rules, adopted by the agency in October 2022. The rule requires companies to recover incentive-based compensation from executives if the metrics upon which that compensation was based turn out to be inaccurate. Marsh has worked with various carriers to develop solutions for this exposure.
On the enforcement front, the SEC’s ESG Task Force, established in March 2021 soon after Chairman Gary Gensler was appointed, obtained a settlement on the first enforcement action it brought. The enforcement action, against Brazilian mining company Vale S.A., was based on alleged misrepresentations the company made about dam safety on ESG webinars and in other sources leading up to a deadly dam collapse. Vale ultimately settled with the SEC for $55.9 million, comprising disgorgement and civil money penalties.
Finally, the SEC paid the largest-ever award to a whistleblower, nearly $279 million, for information leading to various successful enforcement actions. This award more than doubles the largest prior award of $114 million. The SEC considers these awards to be an incentive to provide information about wrongdoing, with whistleblowers receiving between 10% and 30% of the money collected when sanctions exceed $1 million.
The United States Supreme Court, which does not often hear securities law cases, recently ruled that an investor that purchases shares in connection with a company’s direct public listing must “trace” those shares to a registration statement in order to pursue a claim under Section 11 of the Securities Act of 1933.
The case revolved around Slack Technologies, which went public via a direct listing in 2019, after the NYSE adopted rules permitting direct listings as an alternative to a traditional initial public offering (IPO). In a traditional IPO, the traceability requirement is not typically an issue, since it is customary for companies to impose a “lock-up” period on owners of pre-IPO private company shares. This means that the shares available in the marketplace can be traced to a specific registration statement for purposes of Section 11 lawsuits.
In a direct listing, by contrast, private company shareholders may sell unregistered shares at the same time registered shares hit the open market, making tracing effectively impossible. The Ninth Circuit Court of Appeals held that the plaintiff was excused from the tracing requirement so that Slack could not take advantage of what the court called a “loophole.” However, the Supreme Court, in a unanimous opinion, reversed the Ninth Circuit’s decision and held that the statute clearly requires plaintiffs to trace their shares. The court left open the possibility that Congress could amend the securities laws as it sees fit.
This decision removes an important protection for investors in newly public companies that go public via a direct listing — namely, the ability to rely on the strict liability standard of Section 11 rather than the more onerous requirements of a standard securities fraud action, which requires a showing of intent to deceive. This may make investing in a direct listing significantly riskier, causing companies to rethink whether this avenue to the public markets remains a viable option.
From generative artificial intelligence (AI) programs to more complex machine learning algorithms used by companies for everything from core service offerings to human resource management, the technology is everywhere. While AI is a current hot topic in the business world — especially following the launch of ChatGPT and similar programs — new frontiers bring new risks. Boards and executives should be mindful of the new AI-related exposures their companies may face from shareholders and regulators.
First, the use of AI in hiring, layoff, and promotion decisions could lead to litigation. (Read more about the potential risks of AI in this newsletter’s Employment Practices Liability section.) Government agencies, including the Equal Employment Opportunity Commission, have already issued warnings and guidance. Companies facing AI-related employment class actions have also experienced follow-on shareholder actions against management for alleged misrepresentations and breaches of fiduciary duty.
Second, companies that incorporate AI as part of lending or investment decisions, or any other aspects of their professional services, may face claims from customers accusing them of discrimination or errors in their services. Boards must monitor the risk of AI malfunction or misuse that could cause a company’s products or services to suffer and appropriately communicate to investors about the safeguards in place.
Third, as AI becomes increasingly complex, boards must be mindful that the products or services they currently offer clients could be displaced by technology advances. Investors will likely deem it material when a competitor incorporates AI into its offering, or when the technology renders a company’s professional services less distinctive and valuable, underscoring the importance of making appropriate disclosures.
Finally, generative AI presents the risk of lawsuits based on intellectual property (IP). AI programs that generate text, imagery, or video may produce content by being “trained” on copyrighted works that are “scraped” from online sources. Producing or using this AI-generated content could result in claims of infringement by artists, authors, and other producers of copyrighted works accessed by an AI program. Several large companies have already faced such lawsuits. If the impact of such a suit were material to a company’s bottom line, it could result in a decline in the share price and shareholder litigation. Relatedly, it is important for companies to consider emerging insurance and risk management solutions, such as Marsh’s IP Protect product, to protect against IP risk.
AI is a significant tool with many uses and benefits that companies should consider. It is nevertheless important to remain aware of the risks the technology presents.
There are many considerations when designing a D&O program, such as the limits, carrier partners, and scope of coverage. The most important aspect of the program, however, may be the personal asset protection afforded to board members and executives in the event that the company cannot or will not indemnify them against claims. While personal asset protection is provided throughout a D&O tower, it is critically important to have a dedicated Side A portion of the program.
Two recent decisions highlight the significance of Side A coverage in the context of bankruptcy. First, FTX founder Sam Bankman-Fried and his organization face a series of investigations and lawsuits in the fallout of the company’s collapse. Because the company is in bankruptcy, Mr. Bankman-Fried needed to seek the court’s permission to access the company’s D&O insurance. It is important to note that bankruptcy courts generally consider D&O insurance to be property of the bankruptcy estate, which must be preserved so that all of the company’s assets remain available to pay creditors. Ultimately, the court rejected Mr. Bankman-Fried’s request to access the Side A portion of the company’s D&O policies. A dedicated Side A policy, however, would not be within the purview of a bankruptcy court and thus should reliably provide coverage in a bankruptcy setting.
A second case involved directors of Silicon Valley Bank seeking similar relief from the bankruptcy court to access D&O coverage. In that decision, the court sided with the individuals because the D&O policy contained a “priority of payments” provision requiring the policy to pay individuals’ losses ahead of those of the company as a corporate entity. While the FTX policy contained a similar provision, it prioritized individuals only after competing claims reached an amount that would exhaust the full policy limits.
These cases are a reminder that dedicated Side A insurance serves a critical purpose and can help minimize the uncertainty of coverage litigation at the time when individual directors and officers need insurance most.
Employers have access to a variety of algorithmic decision-making tools to assist them in their employment decisions. Artificial intelligence (AI) can be used to create job descriptions and screen applicants. And algorithmic tools can also help with decisions across the employment lifecycle, including promotion, discipline, monitoring, and firing.
AI, which can be described as the development of computer systems that can simulate human intelligence, has a promising role to play within the employment context. But it also comes with several potential risks, including the potential of violating Title VII — which provides protection from discrimination based on race, color, religion, sex, and national origin to both employees and job applicants.
The risk of potential Title VII violations has led to increased concern from both lawmakers and regulatory bodies about the use of AI. Even before the launch of ChatGPT, they contemplated ways to effectively provide transparency and protections for workers amidst rapid advances in AI technology. In 2021, for example, the Equal Employment Opportunity Commission (EEOC) launched its Artificial Intelligence and Algorithmic Fairness Initiative.
Earlier this year, the EEOC doubled down on its focus on AI in its draft 2023-2027 Strategic Enforcement Plan (SEP), indicating the agency’s intent to focus on the use of AI tools across the employment lifecycle.
More recently, in May 2023, the EEOC published new guidance — Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964 — in support of its AI initiative. This guidance is intended to assist employers and developers as they design and adopt new AI-enabled technologies by focusing on how the use of AI in an employer’s decisions on hiring, promoting, and firing may have a “disproportionately large negative effect” on the basis of race, color, religion, sex, or national origin. It comes on the heels of a technical assistance document — The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees — released in May 2022 to address compliance with ADA requirements when using AI and other software to hire and assess employees. The EEOC has made clear that employers cannot shift the blame to third-party vendors.
There has already been activity by the EEOC. This March, the commission announced that it had entered into a conciliation agreement with a job search website operator after it found “reasonable cause” to believe that job ads posted by the operator’s customers were violating Title VII by discouraging workers of American national origin from applying.
Beyond the EEOC and other federal regulators, employers must be mindful of rapidly expanding state regulation of AI. Maryland and Illinois, for example, have already enacted legislation: Maryland restricts the use of facial recognition tools, while Illinois regulates the use of AI video analyses. Numerous other jurisdictions, including California, Massachusetts, New Jersey, New York, Pennsylvania, Vermont, and the District of Columbia, have proposed AI-related regulations.
At the local level, on July 5, 2023, New York City will begin enforcing Local Law 144 of 2021, which regulates the use of automated decision tools. The law requires anyone using automated decision tools to first conduct a bias audit and notify job candidates.
Underwriters are paying close attention to the use of AI in the context of employment decisions. Employers should be mindful of how their organizations may or may not be utilizing AI in the employment decision cycle and have a clear understanding of the methodology being used by any third-party vendors.
Seen as a powerful tool to advance pay equity, pay transparency requirements have become a hot legislative item. Pay transparency laws have typically mandated disclosure of salary and/or benefits available to a position upon request by a candidate, though newer laws require disclosure of compensation in job postings.
The Salary Transparency Act, which was introduced in the US House of Representatives earlier this year, proposes requiring all employers to include the salary range in job postings, provide wage ranges to applicants, and provide wage ranges to existing employees for their positions.
States have been very active in rolling out pay transparency legislation. Currently, California, Colorado, New York, and Washington, as well as localities including Jersey City, New Jersey; Ithaca, New York; New York City; and Westchester County, New York, have laws that require salary range disclosures in job postings.
Most recently, in May 2023, the Illinois legislature passed House Bill 3129, which will require employers in the state to include pay scale and benefits information in job postings and to post or announce internally to employees all known opportunities for promotion. If signed into law, House Bill 3129 would take effect on January 1, 2025.
Since various states are implementing their own pay transparency laws, federal legislation may come as a relief for some multi-state employers. Pay transparency legislation will continue to challenge employers as states continue to roll out their own iteration of the law.
Employee Retirement Income Security Act (ERISA) cases with similar fact patterns and allegations will not necessarily lead to the same court decisions. In fact, different judges across federal circuits have reached different conclusions in cases that appear to be very similar.
One example involves differing outcomes in two excessive fee cases: in one, the court permitted the defense to introduce evidence at the motion to dismiss stage showing that the plaintiff’s representation of the fees was incorrect; in the other, the court declined to consider similar evidence at that stage.
In another example, a court denied motions to dismiss in three excessive fee cases, even though courts in other circuits had granted defense motions in similar cases, finding that plaintiffs could not support a claim that fiduciaries were imprudent based simply on the high fees charged, without context on the services provided for those fees.
These different outcomes pose challenges to the fiduciary insurance market, especially since underwriters cannot reasonably expect that lawsuits against plans with certain controls in place will be dismissed. As a result, it seems unlikely we will see a lowering of retention levels. Underwriters will likely seek to maintain higher retentions, as they assume that even plans with the right controls will end up settling or facing a costly trial.
In one of the most anticipated ERISA decisions in the last decade, in March 2023 the Seventh Circuit sent an excessive fee case back to the district court. The case had previously been remanded to the Seventh Circuit by the US Supreme Court, which vacated the district court’s dismissal.
In the underlying case, the plaintiffs alleged that the defendant had paid excessive fees and not pursued the lowest cost on all of the available plan offerings. The defendant maintained that it had several offerings, many of which were the lowest cost option, even if not all of them were. The Supreme Court held that offering a variety of investment offerings — some of which are low cost — does not provide sufficient basis for dismissal. The Court did not take a hard position on what would be a basis for dismissal, giving rise to different interpretations of how the decision should be applied.
In its March 2023 decision, the Seventh Circuit panel noted the fiduciary duty of prudence requires “systematically reviewing” funds at inclusion and at regular intervals. In the underlying case, plaintiffs had argued that the plan sponsors should have provided other alternatives in investments and fees, but did not outline the availability of these other options. The defendant argued that the plaintiffs failed to show that these alternatives were actually plausible.
If the standard becomes that plaintiffs need only show that there were available alternatives to the decisions made by fiduciaries, without having to prove that those alternatives were actually obtainable, it would likely put a greater burden on the defense. This could lead to higher insurance costs: underwriters are likely to set higher retention levels, assuming that even cases with strong defenses may survive the motion to dismiss stage, potentially resulting in high settlements or trial expenses.
Fiduciary Liability Product Leader
Employment Practices Liability/Wage & Hour Product Leader
D&O Product Leader
US FINPRO Product and Industry Leader