The conversation about what has happened at OpenAI continues. A Bloomberg opinion piece (below) goes further, discussing the composition of the "next board" and in part arguing that the current board simply wasn't the right one - and at least from a composition perspective, that is true (note: all white males - they don't even acknowledge what they don't know). But a different question remains unanswered, particularly as the co-founder who was fired and then rehired returns next week: ownership. Who "owns" OpenAI is being ignored entirely while pundits decry the board's lapses and celebrate the triumphs of the returning co-founder exec.
It is important to remember that a nonprofit board is indeed the surrogate owner on behalf of the taxpayer - licensed to act, and responsible for everything that happens within the organization and for its impact. There is nothing wrong with creating a for-profit subsidiary, especially when it generates enough revenue to offset the nonprofit's costs. And it is even better when that subsidiary furthers the nonprofit's mission - or does it? Does a board have a right and responsibility to fire the exec? Absolutely! Does the exec have a right to fire the board? Absolutely not! And yet here we are.
Humanity needs an OpenAI board that will still say ‘No’
OpenAI’s leadership must think carefully about the remaining board members they add, and not just to look progressive. They need women, people of colour and other diverse voices for whom biased language models are most likely to cause harm, and who will speak up about those risks.
When OpenAI’s leaders return to work on Monday they’ll have one thing at the top of their to-do list: figure out what to do about the nonprofit board that nearly killed them.
They’ve already begun setting up a governance structure that will guide them in a more commercial direction, and though that’s great news for OpenAI’s investors, it flies in the face of the company’s founding principle of prioritizing humanity while building super-intelligent machines. OpenAI’s leadership can do something about that. They must think carefully about the remaining board members they add, and not just to look progressive. They need women, people of color and other diverse voices for whom biased language models are most likely to cause harm, and who will speak up about those risks.
Inscrutable machine-learning systems have denied women job opportunities, and they are poised to reinforce stereotypes in a flood of AI-generated content hitting the web. It doesn’t help that women make up about a third of the people building AI systems today, and just 12% at OpenAI, according to a 2023 study of LinkedIn profiles by Glass.ai. Little wonder women are among AI’s most vocal critics.
But they are also more likely to be silenced. One of the most influential research papers about the dangers of large language models — the 2021 Stochastic Parrots paper — was written by female academics and AI scientists, and Google fired two of them from its ranks, Timnit Gebru and Margaret Mitchell.
And of the four OpenAI board members who voted to oust Sam Altman as chief executive officer last week, two who ended up being booted by the company were academic Helen Toner and robotics entrepreneur Tasha McCauley. The resulting social media blowback has largely focused on both women, while their fellow male mutineers — OpenAI co-founder and Chief Scientist Ilya Sutskever and Quora Inc. CEO Adam D’Angelo — emerged with their reputations and positions largely intact. (Sutskever is also off the board now.)
Weeks before she voted to fire Altman, Toner’s name appeared on a research paper that accused OpenAI of “frantic corner-cutting” as it rushed to launch ChatGPT last year. A rankled Altman reportedly tried to remove Toner from the board. Toner and the other directors then did exactly what the board was designed to do - a capability Altman had even lauded. “The board can fire me,” he told Bloomberg’s Tech Summit earlier this year. “I think that’s important.”
The board was admittedly sloppy in its execution, and its members probably fell prey to groupthink as they griped about Altman inside a tight bubble. They should have communicated their concerns more clearly; those concerns are still shrouded in mystery.
Even so, OpenAI must stay true to its founding principles and appoint board members who don’t just tick a box but understand AI’s side effects well enough to push back when necessary. People like Gebru and Mitchell would be a good start, as would Joy Buolamwini, Fei-Fei Li, Melanie Mitchell, Sasha Luccioni, Kate Crawford, Latanya Sweeney, Safiya Umoja Noble and Meredith Whittaker, all of whom are distinguished researchers and advocates in the AI field.
OpenAI has already considered several women as possible interim directors, though some of the choices appear a little self-serving. According to Bloomberg News, Laurene Powell Jobs, the billionaire philanthropist and widow of Steve Jobs, and former Yahoo CEO Marissa Mayer were floated as possibilities but deemed too close to Altman. Former US Secretary of State Condoleezza Rice was also considered, but her name was dismissed.
I say all this knowing full well that OpenAI will find a way to neuter its nonprofit board so that the events of last week don’t ever happen again. It will probably look a lot more like other tech boards that lack teeth, such as Facebook’s extravagantly clever Oversight Board, which has no real power over its algorithms, or Google’s AI review bodies, which are stuffed with its own staff and executives.
OpenAI’s board was unique in its ability to fire Sam Altman. But even if those days are over, the people building some of the most transformative software in history must not surround themselves with yes-men.

This week, Elon Musk tweeted that with human civilization at stake, OpenAI’s new board needed directors “who deeply understand AI and will stand up to Sam.” Two women did just that and they were shown the door. The new voices who join the board should do so not to be the next fall guy, but to bring proper oversight to its work. OpenAI must choose wisely.