OpenAI expands lobbyist team to influence regulation

OpenAI is building an international team of lobbyists working to influence politicians and regulators who are increasing their control over powerful artificial intelligence.

The San Francisco-based start-up told the Financial Times that it has expanded its global affairs team from three employees at the start of 2023 to 35. The company aims to increase that number to 50 by the end of 2024.

The push comes as governments examine and debate AI safety legislation that risks limiting the start-up’s growth and the development of the cutting-edge models that underpin products like ChatGPT.

“We’re not approaching it from the perspective that we just have to get out there and quash regulations . . . because we do not aim to maximise profit; our goal is to ensure that AGI benefits all of humanity,” said Anna Makanju, vice-president of government affairs at OpenAI, referring to artificial general intelligence, the idea of machines with cognitive abilities equivalent to humans.

While the global affairs department makes up only a small fraction of OpenAI’s 1,200 employees, it is the company’s most international unit, strategically located in places where AI legislation has advanced. This includes staff in Belgium, the UK, Ireland, France, Singapore, India, Brazil and the US.

However, OpenAI lags behind its Big Tech competitors in scale. According to public filings in the US, Meta spent a record $7.6 million lobbying the US government in the first quarter of this year, while Google spent $3.1 million and OpenAI $340,000. On AI-specific advocacy, Meta has named 15 lobbyists, Google has five and OpenAI only two.

“[When I came] in the door, [ChatGPT had] 100 million users [but the company had] three people to do public policy,” said David Robinson, head of policy planning at OpenAI, who joined the company last May after a career in academia and advising the White House on its AI policy.

“It was literally to the point where there was someone at a high level who wanted an interview and there was no one to pick up the phone,” he added.

However, OpenAI’s global affairs unit does not deal with some of the most difficult regulatory cases. That task falls to its legal team, which is dealing with issues related to British and US regulatory scrutiny of its $18 billion alliance with Microsoft; the US Securities and Exchange Commission’s investigation into whether CEO Sam Altman misled investors during his brief ouster by the board in November; and the US Federal Trade Commission’s probe into the company.

Instead, OpenAI’s lobbyists are focused on the proliferation of AI legislation. The UK, US and Singapore are among the many countries working out how to govern AI, consulting closely with OpenAI and other tech companies on proposed regulations.

The company was involved in discussions on the EU’s AI Act, passed this year, which is one of the most advanced pieces of legislation in the effort to regulate powerful AI models.

OpenAI was among the artificial intelligence companies that argued, in response to early drafts of the act, that some of its models should not be designated “high risk” and therefore subject to tougher rules, according to three people involved in the negotiations. Despite this pressure, the company’s most capable models will fall under the law.

OpenAI has also argued against EU efforts to vet all data provided to its foundation models, according to people familiar with the proceedings.

The company told the FT that pre-training data – the datasets used to give large language models a broad understanding of language and patterns – should be outside the scope of regulation, because examining it would be an ineffective way of understanding an AI system’s outputs. Instead, it suggested that the focus should be on post-training data, which is used to fine-tune models for a specific task.

The EU has decided that for high-risk AI systems, regulators can still request access to training data to ensure it is free of errors and bias.

Since the EU law was passed, OpenAI has hired Chris Lehane – who worked for President Bill Clinton and on Al Gore’s presidential campaign, and was Airbnb’s policy chief – as vice-president of public works. Lehane will work closely with Makanju and her team.

OpenAI also recently poached Jakob Kucharczyk, the former head of competition at Meta. Sandro Gianella, head of European policy and partnerships, joined in June last year after working at Google and Stripe, while James Hairston, head of international policy and partnerships, joined from Meta in May last year.

The company recently engaged in a series of discussions with policymakers in the US and other markets about OpenAI’s Voice Engine, which can clone and create synthetic voices. The company narrowed the tool’s release plans after concerns about the risks of how it could be misused in connection with this year’s global elections.

The team has held workshops in countries facing elections this year, such as Mexico and India, and is publishing guidelines on disinformation. In autocratic countries, OpenAI provides individual access to its models to “trusted individuals” in areas where it believes it is not safe to release products.

One government official who has worked closely with OpenAI said another concern for the company is making sure any rules remain flexible in the future and do not become obsolete with new scientific or technological advances.

OpenAI hopes to address some of the hangovers from the social media age, which Makanju says has led to a “general mistrust of Silicon Valley companies.”

“Unfortunately, people often see AI through the same lens,” she added. “We’re spending a lot of time making sure people understand that this technology is very different, and the regulatory interventions that make sense for it will be very different.”

However, some industry figures criticize the expansion of OpenAI’s lobbying.

“Initially, OpenAI recruited people deeply involved in AI policy and specialists, whereas now it just hires run-of-the-mill tech lobbyists, which is a very different strategy,” said one person who was directly involved with OpenAI in the drafting of the legislation.

“They just want to influence lawmakers in a way that Big Tech has been doing for over a decade.”

Robinson, OpenAI’s head of policy planning, said the global affairs team has more ambitious goals. “The mission is safe and beneficial, so what does that mean? It means creating laws that not only allow us to innovate and bring technology that benefits people, but also end up in a world where that technology is safe.”

More news from Madhumita Murgia in London
