I Thought Climate Change Was the Biggest Threat – Until AI
What I Discovered After Diving Deep into the Risks of Artificial Intelligence
Did you know artificial intelligence is considered a greater threat to humanity than climate change?
Surprised?
So was I when I read a June 2023 report by a group of superforecasters (people who are exceptionally good at making predictions). They estimated a higher likelihood of a civilization-ending event from AI than from climate change this century.
If you are not familiar with artificial intelligence, also known as AI, this is how Cambridge Dictionary defines AI:
the use or study of computer systems or machines that have some of the qualities that the human brain has, such as the ability to interpret and produce language in a way that seems human, recognize or create images, solve problems, and learn from data supplied to them.
I have thought of climate change as the most threatening problem to humanity as well as ecosystems across the globe for most of my life.
This superforecaster report caught me by surprise.
I'm not sure about you, but when people who out-predict the experts of a field say, "bad things seem likely to happen," my curiosity is piqued.
The current reality is that many companies, militaries, governments, and investors are pouring billions of dollars into being the first to build the most capable AI, yet few people are working to understand and manage the risks of powerful AI systems.
I went down a rabbit hole: working all day at my day job, then spending my nights reading about civilization-ending events, AI, and the various ways AI systems could cause global catastrophes.
How can I do something about this?
What is the most effective way to bring about change?
Could anything I do prevent this potential negative outcome?
For the second half of 2023, I investigated these questions.
This led me to the field of AI safety, where people seek to understand and minimize the potential downsides of bringing more powerful AI systems online.
Hungry to learn more and contribute to the field, I set myself up by the start of 2024 to join AI Safety Camp, a 3.5-month research program where experienced AI safety researchers lead teams of volunteers.
Unfortunately, my ambition outpaced my work capacity.
I was drowning in dissatisfaction with my job and searching for a new job (without truly wanting one).
Because I knew I would not be able to deliver, I pulled out of AI Safety Camp in January 2024 before it began.
Two months later, two days after telling my partner I was ready to hang up the phone on my current role, I received a surprise via a Zoom meeting notification.
That day at 9 am, my regular team standup was canceled.
"Weird," I thought.
These standups were never canceled, and no one had communicated why.
"Whatever. More time to program."
Grateful for the free time, I dropped deep into a flow state.
Meanwhile, my next meeting with my engineering team disappeared from my calendar.
At 10 minutes to 11, a surprise notification popped up on my computer.
A 1:1 with my CEO and my manager?
"I am getting fired," I thought.
My heart began to race.
As I hopped into the call, I reminded myself to keep an open mind.
When they told me that I was being laid off, I remained somber.
Firing people is hard.
It felt inappropriate to let my true emotions show, because on the inside I wanted to grin.
Earlier that morning, I had prayed, not to anyone or anything in particular, but more out of a sense of desperation:
"Please let this be over," I wished.
I was beyond done with the startup and my role there.
Was I afraid of the financial uncertainty of losing a regular paycheck?
Oh yes.
But I am not one to pray. And when the ask is answered so quickly, the feeling of freedom is immense.
What did I do with my freedom?
I did not rest; my ambition drove me to kick off many new projects right away.
However, after a few weeks of self-imposed grinding, I realized I should rest, at least briefly, as I was in Switzerland meeting my girlfriend's friends and family for the first time.
After resting, I forgot about AI safety and did not think about it for many months as I pursued other paths.
As fall 2024 arrived, the AI safety newsletters I had previously subscribed to began filtering back into my daily reality.
"Oh yeah. That massive problem I was obsessed with." I thought.
I dove back in: applying to programs, reading research papers, and pursuing relevant roles.
Without the cushion of a full-time role, research fellowships that would cut my salary to a trivial fraction now looked much more feasible.
This shift in perspective made the world seem more open to me.
Because the frame is no longer:
I would have to give up my financial security to pursue meaningful work on AI Safety.
Instead, it is:
This is the perfect time to take a swing and break into research on what may turn out to be one of the most important problem spaces of this century.
Because if not now, when?