The Glaring Oversight in the OpenAI Board Drama
Let's dive into this. It's a long one, so strap in for a tech-contemplative ride with me.
I want to talk about the OpenAI nonprofit for a minute (a long minute, really - so feel free to grab a cup of matcha and settle in for what turned out to be over four thousand words of techno-contemplative analysis). Particularly since the for-profit company has gotten, and will likely continue to get, more lip-service, support, and analysis. But first, let’s get a few things resurfaced for the reader’s discerning eye.
👉🏽 NVIDIA’s CEO has stated boldly: “don’t learn to code.” Aw, and just when I was getting ready to learn to code in order to maybe afford to buy my first home / duplex / investment property. Damn!
👉🏽 Larry Summers recently stated that AI “could replace almost all forms of labor.”
💡 Already, companies are shifting almost overnight from an “AI harms our business model” posture to “let’s offer a section for AI-generated content” - as is true for Upwork, which in late 2023 said in legal filings to the SEC that artificial intelligence was a threat to its model, and then, not one year later, created a section for those pushing, and those interested in paying medium-dollar for, AI-generated content (on which the author places no value-judgment, on the whole enjoying the idea of toying with AI to help streamline some of the less-fun parts of the writing-for-money industry, making time for writers to focus on more deeply exploring their creative and expressive potential).
Alright, now that the above, perhaps disarming, context is introduced…
Silicon Valley, along with pretty much everywhere else, has an unethical nonprofit problem. It’s pretty hard to argue otherwise. And OpenAI’s board drama in March has perhaps been the most important and recent - and yet wholly predictable - culmination of a too-often-unchallenged status quo possibly tainting something with amazing potential.
Nonprofit strangeness has been an ordinary, even essential, part of the nonprofit world almost since its inception in 1867, about 80 years after the creation of the first private company in the US. Yet somehow, only a century and a half later, it’s no longer news that presidents of nonprofits can rake in a quarter of a million dollars for a few hours a week, while entry-level employees struggle to find volunteers and raise campaign awareness for $20/hr. Not exactly a structure to incentivize “putting the mission first”, many would say. Having looked into the IRS public filings of several large funding nonprofits - the nonprofits other nonprofits go to with applications for grant money - it was wild to see just how many organizations are headed up by the same individual. Sometimes two, sometimes three or more. One would think folks helming multiple organizations aimed at humanitarian and world-improving ends would be immediately lauded as elevated examples, with featured profiles in Time’s list of Most Influential People, or even secondary lists and local news. But they aren’t. And when you take a closer look, it’s pretty obvious why.
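For the curious: those filings are public, and you don’t need to be a journalist to pull them. Here’s a minimal sketch using ProPublica’s Nonprofit Explorer API - the EIN is a placeholder, and the field names follow my reading of the API’s published schema, so treat both as assumptions and check the JSON you actually get back.

```python
# Minimal sketch: look up a nonprofit's public Form 990 summaries via
# ProPublica's Nonprofit Explorer API (projects.propublica.org/nonprofits/api).
import requests

EIN = "000000000"  # placeholder - substitute the EIN of the org you're curious about

url = f"https://projects.propublica.org/nonprofits/api/v2/organizations/{EIN}.json"
resp = requests.get(url, timeout=30)
resp.raise_for_status()
data = resp.json()

print(data["organization"]["name"])

# Each entry in "filings_with_data" summarizes one year's filing; the field
# names (tax_prd_yr, totrevenue) come from the API schema and may drift -
# verify them against the response you receive.
for filing in data.get("filings_with_data", []):
    print(filing.get("tax_prd_yr"), "total revenue:", filing.get("totrevenue"))
```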
So, before we get to what I think is at the heart of the OpenAI board drama (the juicy juice) - as distinct from all of the other drama downstream of studies on the sentiment that Sora will harm workers in the film industry, CEOs replacing 90% of staff with artificial intelligence, and writers and illustrators witnessing the drying-up of gig work - we need to first grapple with the realities facing nonprofit boards and, with them, what the public’s expectations of a nonprofit are at least codified as being.
In my research for this essay - since I’m no expert in nonprofits, however much I enjoy and learn from my time volunteering and working for them - it still shocked me to see the level of wealth and enrichment that takes place in the name of a good cause. Dare I say, in a less cynical world, it’s almost inspiring that one can achieve a little good in the world and actually get paid more than pennies to do it! I must admit to still thinking that is cool. It isn’t made to seem, or credited as being, “cool” by the majority of people and the standards of our evolving common sense (read: growing skepticism around anything that ties in with the government for funding or legitimacy purposes), but with the right practice, it objectively is - if you think helping people is cool.
One of the things I noticed when digging into different Bay Area nonprofits - their structures, and their relationships to one another, the law, and the public - is that conflicts of interest seem to abound. As I’m sure they do in other major hubs of innovation and, at least historically, institutions of social progress.
In one case, the organization requesting - and receiving - funds and the funding organization disbursing the grant money were both headed up by the same person. I strongly doubt this was a one-off instance. They share the same tax preparation firm - probably business as usual, if not everywhere then definitely in the uncanny valley of uncanny valleys (yes, I am using a phrase that connotes a population of people who routinely fantasize about merging with machines - usually arousing revulsion in human beings the world over, outside of said uncanny valley, who by and large prefer not to act like machines when and where possible - and who at least appear to categorically dismiss any critique unless it comes from their inner circle, relying on their own minority-position reasoning and critical thinking at the expense of the Othered majority when it comes time to grab the mic and address said majority). Many of these folks are also of the AI-alignment-is-everything school of thought, which most people agree with on principle, at least in theory, though the praxis remains shrouded in mystery behind NDAs and key-card-locked doors.
So, yes, there are multiple organizations giving funds out to organizations whose key positions are held by an individual who also heads up the funding organization - which could easily be seen as a conflict of interest. But that only matters IF the conflict poses a threat to the ability of either organization to adhere to its mission. Whether or not such a dual role IS considered a conflict of interest is usually decided by the board according to a conflict-of-interest policy. This sort of possible conflict happens all the time and nobody blinks an eye - unless, of course, it touches a more obvious and public phenomenon such as homelessness and drug abuse. (A toy sketch of how one might flag such overlaps from public filings follows below.)
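To make the point concrete: once you have officer lists in hand (Form 990, Part VII, lists officers, directors, and key employees), flagging dual roles is the kind of check anyone can script. A toy sketch with invented data - every org and officer name here is hypothetical:

```python
# Toy sketch: flag individuals who appear as officers of more than one org.
# Officer lists would come from Form 990 filings; the data below is invented.
from collections import defaultdict

officers_by_org = {
    "Example Funding Foundation": ["A. Person", "B. Trustee"],
    "Example Grantee Org": ["A. Person", "C. Director"],
}

orgs_by_officer = defaultdict(list)
for org, officers in officers_by_org.items():
    for name in officers:
        orgs_by_officer[name].append(org)

# A dual role is a *potential* conflict, not an automatic one - whether it
# counts depends on each board's conflict-of-interest policy, if it has one.
for name, orgs in orgs_by_officer.items():
    if len(orgs) > 1:
        print(f"{name} holds positions at: {', '.join(orgs)}")
```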
Usually they use the same tax preparation firm, but I’m not sure it would matter if they didn’t. There are systems in place to monitor and provide transparency to the public, holding organizations at least somewhat accountable. And many of these nonprofits are providing educational scholarships to underprivileged youth, funding job training for the homeless, supporting environmental conservation and a helluva lot of charter schools / universities / graduate schools, and - increasingly, and perhaps most often - path-breaking scientific discovery. Nonprofits, in reality, run everything substantial. They keep colleges like Princeton University flush with enough capital to pay professors and foster new generations of scientists, and give large capital sums to Johns Hopkins University. The technological innovation being funded by nonprofits is quite staggering (and perhaps worth listing in another essay). Pretty much everything important and disruptive, across the political and social spectrum, comes back to tax-exempt donations between, and to, organizations of this kind. Worth noting: the average donor of major sums to these nonprofits is over 60 years of age.
So, then, given the …unorthodox… machinations of the nonprofit world, why is it we are hearing OpenAI’s CEO argue that a nonprofit structure probably wasn’t the ideal way to set out to create human-friendly AGI that benefits humanity? Was he completely unaware of the ease with which a poorly-run nonprofit can grow distant from its professed aim? Somehow, I just don’t think people are that naïve. Yet I also sincerely want to believe that they are.
So it was really no surprise when I saw that the youthful, optimistic (bordering on anti-realistic) board of the OpenAI nonprofit at least seemed to flout the guidelines around preventing conflicts of interest (and yes, unfortunately, having a conflict-of-interest policy IS a guideline, merely “strongly encouraged” by the IRS and not legally required - a policy flaw that, combined with the earlier OpenAI board members’ lack of nonprofit experience, or even study and interest, appears to be putting large numbers of people’s livelihoods at risk every week [but that’s a subject for another essay, requiring some extensive research to substantiate with evidence - though I know you know it’s out there, trending on multiple platforms]). That being not a future probability but a well-evidenced and present reality, it is safe to suggest that history may appreciate a summary of events that hews closely to the paperwork. And that the questions least likely to be answered simply are often the ones most in need of being asked.
Is it not bizarre that half of the OpenAI board, in 2022, was made up of employees of the for-profit OpenAI, taking a hefty sum of compensation for their expertise from the company ostensibly controlled by the “nonprofit”, as well as, for some, a small bonus from the nonprofit itself (under a hundred thousand dollars - small in the initial years, and increasing YoY)?
This is where the conflict of interest, if unchecked, may arise. Typically, at least on the surface of things, nonprofit boards are advised by legal representation, as well as widely published IRS guidelines, to proactively identify, acknowledge, and take steps to address and prevent conflicts of interest from influencing the workings and operation of the organization.
Not just because one can assume this is the morally correct thing to do when you can write off your donations and expenses as nontaxable and in service of the public, but because it prevents outcry and lowers legal risk. This means, ideally though not always in practice, that board members have NO professional relationship to one another outside of their board positions, allowing them to remain uninfluenced by those working relationships in the course of their decisions and processes as board members. This ensures that while undertaking business directly on behalf of the nonprofit (a public interest of some kind), they can remain as objective as possible, so that those decisions serve the nonprofit mission, unsullied by the private interests, office politics, confirmation biases, incentive structures, pay differentials and hierarchies, and other workplace dynamics that demand a cooperative spirit of collaboration and non-judgment. A nonprofit board, unlike most other organizational structures, demands good judgment - particularly, one would think, when the mission is to make sure that AGI is achieved in alignment with the public welfare and humanity writ large.
According to the public IRS Form 990 filings of the OpenAI nonprofit, most of the board members claimed to have put in 10 hours a week at the nonprofit. Whether this was to justify their increasing compensation AS board members and nonprofit employees, or an accurate reflection of the time spent exercising judgment in service of the public good, is unclear - and, in any case, not really of interest. What’s done is done, and the board makeup was altered.
Besides the appearance of conflicts of interest in there being working professional relationships between board members outside of the board, there were other red flags. The payments being accepted by board members from both the for-profit and nonprofit entities, for example. I’m not a lawyer, but the case currently under consideration, in which the major and initial private donor claims to have been misled as to the purpose of his donation, would seem to suggest possible negligence - of what and how, though, is unclear. When I learned that several nonprofit board members and employees were claiming ten hours a week invested in the nonprofit mission, my first reaction was - really?! You put TEN hours a week into the nonprofit on top of your full-time commitment to the for-profit company (for which most were paid significantly more)? How does that work? Did you work for ten hours on a Saturday? What exactly does ten hours at a nonprofit even entail? Did you meet for ten hours a week to discuss the ethical ramifications of the developments happening at the nonprofit-controlled company? Did board members each spend two hours a weekday, or fifteen minutes every hour of an 8-hour workday, asking probing questions about the consequences of what society’s great “builders” were building (SF people really love the words “builders” and “building”, so I use them in the hopes they are capable of laughing at themselves in the midst of this potentially disastrous, and at minimum concerning, series of events)? After all, as a governing body, the board’s primary concern was oversight of the for-profit company and its adherence to the mission of safe artificial intelligence - including safe AGI - for the betterment and benefit of humanity.
Recall that the mission of OpenAI was and remains “to ensure AGI benefits all of humanity, which means both building safe and beneficial AGI and helping create broadly distributed benefits.” Not some of humanity, or the portions that can afford a safe AGI or bounce back from major industry disruptions that deprive them of gig work as writers and illustrators, and of salaried jobs with benefits as programmers, customer service agents, sales reps, data analysts, and graphic and visual artists. Humanity. Full stop. Writ large.
“But wait, wait,” I reminded myself. “The board was already functioning with obvious conflicts of interest. Of course they are going to demarcate a sizeable yet seemingly random number of hours as work done for the nonprofit, because they are assuming they can do that work - faithful to the nonprofit mission - while raking in five to six thousand dollars a week working for OpenAI the company. Probably they believed they could fulfill the nonprofit obligations during the same hours they were getting paid by the for-profit company - and a sizeable sum more by the company at that.”
If I had to do mental gymnastics just to imagine life on the OpenAI board, imagine the kind of mental gymnastics being done by people paid for their board position as well as their for-profit position. And by those paid by the company and not by the nonprofit.
It’s enough to engender equal amounts of cynicism and ennui - appearing to confirm the worst criticisms of charity and nonprofit culture: yet another nonprofit board that fails to put the mission truly first and foremost. How many times have we heard or read that story? And is it enough to become totally numb to it? And, most importantly, will this particular nonprofit act differently now, without those specific forms of conflicts of interest? Only time can tell. However, the cards don’t look super great, at least presently and in connection to recent events, for OpenAI’s activities benefiting “all of humanity” - one of the new board members, former US Secretary of the Treasury Larry Summers, recently stated that AI “could replace almost all forms of labor.” Which admittedly sounds not entirely terrible to many people who don’t especially enjoy their work or desire to feel needed by society through labor. But the bills will still come regardless, and they need to be paid. The fact that Larry’s comment is more likely an observation than a prescriptive statement does little for the uneasiness across “the culture”.
💡 Tech-Contemplative Reading Break Material Here
In any case, now is as good a time as any - and perhaps one of the few times when the subject of even a single nonprofit is (or was) still in the mainstream spotlight - to reiterate and shed light on the obligations of a not-for-profit entity ostensibly serving the common good and public interest, particularly if it boasts being on the lookout for ALL OF HUMANITY (somebody get these folks a drink. One has to admit that does not seem like a simple or easy task). And it may be all the more important to do this, given that even the CEO of OpenAI himself recently stated in an interview with Lex Fridman that:
👉🏽 “[…] the world, people, institutions, whatever you want to call it, need time to adapt and think about these things. And I think one of the best things that OpenAI has done is this strategy and we get the world to pay attention to the progress to take AGI seriously to think about what systems, and structures, and governance we want in place before, we’re like under the gun and have to make a rush decision. I think that’s really good.
But the fact that people like you and others say you still feel like there are these leaps makes me think that maybe we should be doing our releasing even more iteratively. I don’t know what that would mean. I don’t have an answer ready to go. But our goal is not to have shock updates to the world.”
Fridman: The opposite - yeah, for sure. More iterative would be amazing. I think that’s just beautiful for everybody.
Altman: But that’s what we’re trying to do. That’s like our state of the strategy and I think we’re somehow missing the mark.
So. When Sam says “get the world to pay attention to the progress to take AGI seriously to think about what systems, and structures, and governance we want in place before, we’re like under the gun and have to make a rush decision”, that is ostensibly the responsibility of the nonprofit, given its stated mission. And yet, Sam admits that what OpenAI has effectively done is include and involve everyone, all of humanity, in that responsibility - “get the world to pay attention”, “to think about”. So is it now a bicameral congress?
Kidding. But truly - where is the seat at the nonprofit table for regular people, non-experts, any representation of the skeptical majority? In the mid-aughts you couldn’t release an app without going through extensive iterations of user testing and gathering user feedback. Even today, market research is being done in cities everywhere, getting people to give their unfiltered - presumably - opinion on the effect of a product on their lives, if not more simply their initial value judgment. Did OpenAI EVER do something like this? Does releasing GPT-3.5, and only a few months later GPT-4, for profit count as proper user testing? Perhaps this is the kind of thing Altman is alluding to when he admits that OpenAI is “somehow missing the mark.” He sees it so much that he says it again just a moment later:
It’s like fun to say, declare victory on this one and go start the next thing. But, yeah, I feel like we’re somehow getting this a little bit wrong.
It’s not extremely difficult to deduce what leaps he is likely referring to, and that they aren’t all machine-learning and technological in nature. Layoffs due to AI were ramping up before OpenAI’s release of GPT-3.5. Since the launch of GPT-4 and Turbo, people are excitedly posting about AIs doing the legwork on legal cases and educating children, automating tax preparation and business administration tasks, and doing away with all movies in favor of interactive video games. AI - not specifically AGI, but AI - has already displaced web and app developers. ChatGPT specifically - and its GPT competitors - has deprived writers of paid opportunities normally secured through time-rewarding platforms like Fiverr and Upwork, work that can now be done in ten minutes with a few inputs, at comparable - though noticeably inferior… for now… - quality.
Coincidentally, this may qualify in some circles as a proper dunking on the majority of the years of OpenAI’s (the nonprofit’s) work and any legitimacy it pretends (pretended?) to have when it comes to “humanity’s welfare”. And yes, there are new board members, and my hope is that they read this and take more pause over the long-term consequences for the economy - and not as measured by GDP. The consequences playing out before us in real time are ultimately the best judge of the situation, but in service of transparency - something the OpenAI nonprofit has been sorely and very publicly lacking - let us inform ourselves, for a minute, of the boring-yet-important nuances of nonprofit rules.
And let’s start with something Sam himself said in his most recent long-form interview with Lex:
I continue to think that no company should be making these decisions and that we really need governments to put rules of the road in place.
Let’s dive into that, and examine what some of those existing, and not so existent, rules of the road actually are.
There are three important legal obligations that the conflict-of-interest-having nonprofit board members of OpenAI should have been familiar with - the Duty of Care, the Duty of Loyalty, and the Duty of Obedience. We can hope that any new members who aren’t familiar with them are becoming acquainted.
Duty of Care means, basically, that you take care to progress the mission of the organization. This means asking prudent questions at board meetings, reviewing financial documents, and showing up to those board meetings consistently. Since I don’t identify as a journalist, and don’t have a large enough following to have much hope of getting a response from the former board members, I don’t want to speculate as to whether or not this was a duty ever breached. I can say, however, that it’s unfortunate that the duty of care does not include such things as Definitely Reviewing the Conflict of Interest Policy and Making Sure Your Organization Has One. Again - and it’s worth repeating - technically, operating on a board with obvious conflicts of interest is legal. That doesn’t make it morally right, just legal.
If the mission of the organization is for AI to benefit “all of humanity”, and arguments that GPT-4 fulfills the category requirements to be defined as “AGI” are made more frequently every day, why is it, then, that GPT-3, the only free version, is a model Sam Altman describes as “unimaginably horrible” compared to GPT-4? And this only a few short minutes after Sam, opposite Lex Fridman, says that the fulfillment of OpenAI’s nonprofit mission is evident in the offering of a “free version”.
So, a third-tier, “horrible” product for those who can’t afford newer versions fulfills making AI safe for “humanity”?
A question for the new board, I suppose - a board still composed of a majority of folks with for-profit corporate board experience - and one that probably won’t be answered in a direct way.
Duty of Loyalty is a bit more complicated. The duty of loyalty, everywhere you look into it, does appear to include a duty to identify, disclose, and avoid potential conflicts of interest. Ahh, well, there we are - in direct contradiction to the way the law functions to allow for the omission of any conflict-of-interest policy. The primary component of the duty of loyalty is to make sure that board members are looking out for advancing the mission of the organization at all times, in every activity and transaction.
Now, let me pause here to make sure I’m not misunderstood - I know that the nonprofit board, even and perhaps especially with the conflicts of interest evident among its former members, did in fact take notable steps towards advancing its mission of safe artificial intelligence that considers (though perhaps the current iteration doesn’t prioritize) humanity’s best interests. It hosted a conference meeting with global leaders in tech and government (which seems to have transmitted some genuine concern about protection against… what, exactly, I admit I’m not sure. Runaway AI? Unregulated massive models and AI, I guess?). I don’t suggest that there was anything intentionally nefarious at play, but rather that the weaknesses in nonprofit board education, training, and management may have created a situation where board members could “serve” a mission with obvious for-profit conflicts of interest for years and years, without arousing even a little push-back, questioning, or suspicion.
Now, back to this conflict-of-interest thing. Another “node” of responsibility in how OpenAI’s ostensibly humanitarian mission was, or seems to have been, overshadowed by the for-profit company’s mission of phat bottom lines could also be the initial private funder’s lack of familiarity with the ethics of nonprofit creation and management (yes, this is none other than the controversial Elon Musk). The trouble with getting a sizable amount of funding from a private donor simply following an intellectual interest into what would seem to be a humanist and logical aim is that NON-private funding sources often DO require disclosure of a conflict-of-interest policy upfront, as part of the grant application process. Maybe that’s not something people who aren’t deep into nonprofit funding would be aware of, but it’s an important piece in the scheme of all this.
However, even that being said, almost every law firm representing nonprofits has a page dedicated to informing potential clients of the importance of having a conflict-of-interest policy. But again, the IRS merely “strongly suggests” one. So the other nodes of responsibility here, should AGI displace more people than it helps (a possibility that ultimately lies with those interacting with it), would be the government and lawyers. Yippee.
The fact of these other nodes being part of the equation in the OpenAI board drama does not, I believe, absolve previous board members of any betrayal or misstep that may have taken place. The duty of loyalty also typically requires that, with regard to funding and major decisions, conflicted board members recuse themselves from the vote. Do we know if they did that regarding the changes to their compensation, and the somewhat-nebulous oversight decisions made by the board essentially governing the company? We don’t. Will we ever? Hard to say. (For what the mechanics of recusal amount to, see the toy sketch below.)
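For what it’s worth, the mechanics of recusal are simple, even if the judgment behind them isn’t. A purely illustrative sketch - the members, interests, and vote topic here are all invented, and a real conflict-of-interest policy is a legal document interpreted by boards and counsel, not code:

```python
# Toy sketch of a recusal check: members with a declared interest touching
# the vote topic sit out. All names and interests below are hypothetical.
declared_interests = {
    "Member A": {"for-profit compensation"},
    "Member B": set(),
    "Member C": {"equity in the governed company"},
}

# Interests implicated by a hypothetical vote on board compensation changes
vote_touches = {"for-profit compensation"}

for member, interests in declared_interests.items():
    # Any overlap between a member's interests and the vote's subject matter
    # means that member should recuse.
    status = "RECUSE" if interests & vote_touches else "may vote"
    print(f"{member}: {status}")
```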
If AGI - which a number of people would argue is already here, and already costing many portions of the population their livelihoods, if not their material and/or moral copyrights - isn’t living up to our hopes, there is no one individual to blame, but rather several nodes in broken systems (and as Mr. Altman so gently reminded us, we ourselves are now implicated and included in these systems). Many of these are the kinds of systems that re-define not just what was done yesterday and what is done today, but what will be done tomorrow and for generations of the future - humanity and its future. Systems beholden in some part to the expressed will of the people and, maybe even more so, to well-connected people in institutions and governance structures, like OpenAI’s nonprofit and corporate boards, and corporate boards everywhere.
The last nonprofit duty is the Duty of Obedience (also called the Duty of Responsibility).
The duty of obedience requires that a nonprofit board member work to ensure that the organization complies with applicable laws and regulations, acts in accordance with its own policies, and carries out its mission appropriately. This is where, when board members are experienced in serving on nonprofit boards, decisions are made to steer the organization’s activities and processes in line with legal requirements - perhaps even “strong suggestions”. Yet the former OpenAI board very much seems to have been inexperienced with this sort of consideration at all. Time will tell if the new board is as cautious as the original promised to be.
CONCLUSION?
I didn’t want to write a conclusion to this. The reason being, we are basically in open water, unexplored territory, at the edge of the proverbial “map”. My best at a description of events, strung through with a thread of my opinion, isn’t much of a recipe for anything like an adequate or confident conclusion. There isn’t a simple way to sum up the point of what I’m trying to express, which is mostly a concern for my neighbors, a bit of (I think warranted) disdain for a world where compute may be valued above human life, and skepticism about OpenAI going forward. As a writer, I know I can find utility and derive great benefit from the products OpenAI has released so far, but that doesn’t mean that everyone who would be able to, can. People have different variables to consider - levels of toleration for job instability, amounts of autonomy over their circumstances. Just contemplating the cost of using GPT-4 at $20/month - that’s $240 a year, or roughly 2.4% of a $10,000 annual income - juxtaposed with the global average income being under $10,000 USD, it’s in me, and I believe many others, to be concerned about levels of access, and about the kinds of information and feedback OpenAI is going to learn about its own efficacy with respect to its nonprofit mission, given the barrier to entry that exists for so many people. I also realize that my exposure to this topic is relatively limited, and there are other competing endeavors toward AGI development underway. But it’s true that OpenAI is the only one governed by a nonprofit, which I think could be its greatest secret weapon - if valued appropriately by the right people in the right moments.
Whether or not that has happened, or will, is unclear. It seems some days that the moment has passed, and a future where AI does almost everything - while people still must pay their ballooning bills somehow - is fast approaching. Do people really want their children taught by instinct-and-emotion-free robots and algorithms? Are our admittedly flawed but exceptionally human animal natures, which have helped us survive as a species as long as we have, really so horrible? Maybe now would be a good time to ask Sam Altman to gas up his own individual nonprofit interest in studying UBI’s benefits (link below). But the question remains to be asked - even if the benefits of AI are distributed more broadly than they were pre-GPT-3.5 (when they weren’t distributed at all), will they be distributed the way wealth has been said to trickle down from the top to the bottom - but, for the most part, save a few lucky coincidences, doesn’t?
Other Interesting Reads (from off Substack)
https://techcrunch.com/2023/02/21/the-non-profits-accelerating-sam-altmans-ai-vision/
https://techcrunch.com/2024/03/26/worldcoin-portugal-ban/
https://arstechnica.com/ai/2024/03/openai-shows-off-sora-ai-video-generator-to-hollywood-execs/
And a little inside joke shared amongst us all HERE
And some reminiscing for those of us slow to move on from the past: