

Discover more from That Was The Week
A reminder for new readers. That Was The Week collects the best writing on critical issues in tech, startups, and venture capital. I selected the articles because they are of interest. The selections often include things I entirely disagree with. But they express common opinions, or they provoke me to think. The articles are only snippets. Click on the headline to go to the original. I express my point of view in the editorial and the weekly video below.
This Week’s Video and Podcast:
Thx to: @kwharrison13, @Om, @karaswisher, @eladgil, @vkhosla, @RonConway, @satyanadella, @elonmusk, @sama, @jason, @BasedBeffJezos, @jon_victor_, @steph_palazzolo, @anissagardizy8, @KateClarkTweets, @rex_woodbury, Liam @LevelVC, @jasonlk, @stevemollman, @jonstewart, @Kantrowitz, @emollick, @cdouvos, @kirstenkorosec, @davemcclure
Contents
Editorial: e/acc versus e/a
The Leaders of Movements (On OpenAI and Sam Altman)
Before OpenAI Ousted Altman, Employees Disagreed Over AI ‘Safety’
How Health Care Works (sic)
Editorial
It’s a day late for That Was The Week. In mitigation, what a day it was, with the firing of Sam Altman at OpenAI; the demotion, then resignation, of Greg Brockman; and the resignations of Jakub Pachocki, the company’s director of research, Aleksander Madry, head of a team evaluating potential risks from AI, and Szymon Sidor, a seven-year researcher at the startup.
The dust is beginning to settle, and my best interpretation of the events comes from these X posts by Kara Swisher and Elad Gil, focused on effective altruism (e/a) and the e/acc belief in unfettered AI.
The history of OpenAI reinforces this interpretation. In February this year, one of the founders, Elon Musk, tweeted:
“Not what I intended at all” sums it up, and the org chart in a recent X post from Jason Calacanis shows how the attempt to fit a for-profit company inside a not-for-profit shell led to power residing with the not-for-profit Board of Directors.
At the root of this bifurcation is a deep schism in Silicon Valley between e/acc supporters and e/a supporters. e/acc stands for “effective accelerationism” and is focused on innovation without limits, or at least without regulatory constraint. e/a is the more familiar of the two: it stands for “effective altruism” and is focused on innovation for good rather than for its own sake or for profit.
The Center for Effective Altruism defines it this way:
Everyone wants to do good, but many ways of doing good are ineffective. The EA community is focused on finding ways of doing good that actually work.
This led to many ideas like the following:
It’s critical to pay attention to how many people are affected by a particular intervention. When we do this, it seems that some ways of doing good are over 100 times more effective than others.
We should focus on problems that are important, neglected, and tractable
Some sentient beings are ignored because they don't look like us or are far away.
You can read a longer version here:
e/acc - Effective Accelerationism - is identified with the somewhat hilariously named Beff Jezos (@BasedBeffJezos). His Substack is here:
His best-known supporter is Marc Andreessen, but supporters of the broader Silicon Valley techno-optimist manifesto are aligned with e/acc.
Sam Altman and Greg Brockman are heroes to e/acc and villains to e/a.
Here is a taste:
Science, technology and intelligence still have very far to go, saying that we should seek to maintain humanity and civilization in our current state in a static equilibrium is a recipe for catastrophic failure and leaving behind huge potential benefits of dynamic adaptation
Effective accelerationism (e/acc) in a nutshell:
Stop fighting the thermodynamic will of the universe
You cannot stop the acceleration
You might as well embrace it
A C C E L E R A T E
It combines a belief in technology with some fairly “out there” post-humanist views.
The e/a supporters weighed in on AI this week by signing a “Responsible AI” letter. And e/acc supporters also weighed in, describing the signatories (many VCs) as people to avoid taking money from.
Martin Casado, a good investor at Andreessen Horowitz, published an X post explaining the rift and siding with the e/acc worldview.
At OpenAI, the technical lead (Ilya Sutskever), hired by Elon Musk, fought what appears to be a rearguard action on behalf of the e/a worldview to oust Sam Altman and Greg Brockman. Because the independent board at the top of the OpenAI hierarchy had a non-profit mandate, it sided with the “slow down” and “take fewer risks” view. That resulted in the earthquake felt around the world yesterday.
The Board of Directors is now squarely in the firing line of all supporters of unrestricted innovation. You can assume I am more inclined to support that group than the e/a group, and read accordingly.
Ron Conway, a prolific and respected angel and seed investor, said it best:
The e/a lobby represents a pessimistic view of AI's likely, or at least potential, outcomes and is biased toward protecting us all from AI. They seem to buy the rhetoric around the “dangers” of AI rather than be excited by its potential. Insecurity dominates their mood, and fear of rapid innovation is at their core. This is all disguised in a worthy-sounding shell that masquerades as humanist. But its real goal is to slow or stop innovation. They also assume that the profit motive is incompatible with good outcomes. Human history seems to disagree. The drive for profit has fueled much innovation - alongside the passion for science and discovery.
The e/acc point of view is varied and isn’t an organized “group” at all. It is a loose conglomeration of science enthusiasts who believe restrictions on innovation can only lead to worse outcomes. It is comfortable with for-profit and has no issues with recent commercialization at OpenAI.
There is no time to deep-dive here, but there are many investors in OpenAI, and it was about to raise new capital, apparently valuing the company at almost $90 billion. One of those investors is Vinod Khosla, who posted on X this morning, Pacific time.
Aside from the “self-goal” glitch (haha, Vinod, you should have used a cricket metaphor like a run-out), Vinod has it right. But a Board atop a non-profit, taking decisions that impact the for-profit part of OpenAI, is actively destroying value and slowing future value in service of a myth (the dangers of AI software). I would be shocked if there were no consequences for those Board members.
Kyle Harrison nails it in this week’s Essay of the Week (emphasis mine):
Also, there will be plenty of poking into the dynamics of the board. It’s a wild group of people who have no business being in control of the most important company in AI. Investors like Reid Hoffman, who could have helped balance this situation, stepped off the board of OpenAI to avoid conflicts of interest. Turns out when one company represents critical infrastructure for a huge swath of AI companies, it’s hard to also be investing in those companies.
But that’s another argument for why the fate of OpenAI shouldn’t have been left to a bunch of randos, all of whom Ilya was more than capable of pushing around, right?
Randos. Love that.
Aside from that, there is a lot on venture capital this week. I loved this quote from Chris Douvos:
Chris Douvos, an LP in early-stage venture funds, told me earlier this week that an upcoming surge of down rounds will give firms no choice but to mark down investments.
“There’s a pile-up of financings coming for the first half of 2024,” he said. “That pig is slowly working through the snake.”
There is a different kind of video this week as Andrew is traveling. Join me for a “Walk in the Park”.
Late Update: There is an attempt to reverse course and ask Sam to return as CEO. He is saying he will if the entire Board is replaced. You can’t make this stuff up.
Essays of the Week
The Leaders of Movements
Reflecting On The Most Interesting Timeline
KYLE HARRISON, NOV 18, 2023
The last 24 hours have been a hyper-concentrated dose of an entire philosophical debate within tech, all played against the backdrop of intense speculation and the furthering of personal vendettas.
“This is why you shouldn’t have a board.”
“Careful who your co-founders are.”
“Too bad they didn’t have a better board.”
“This is why venture capital is evil.”
“Where are the venture capitalists who could have stopped this?”
In lieu of the piece I’ve been writing, I just wanted to share a brief reflection to add to the endless commentary pouring out on Twitter, Reddit, and elsewhere. As the details have come out in pieces, it’s clear this is an existential divide between Ilya Sutskever on one side, and Sam Altman / Greg Brockman on the other.
One piece that has stuck with me was written by Nirit Weiss-Blatt, who wrote a book called The Techlash, unpacking how tech media swings in its coverage from utopian to dystopian and back. She wrote a piece in September 2023 called “What Ilya Sutskever Really Wants.”
In it, she describes an experience listening to Ilya speak at a conference about the existential crisis that AI represents for humanity. Afterwards, she reflected on his philosophy:
“He freaked the hell out of people there. And we’re talking about AI professionals who work in the biggest AI labs in the Bay area. They were leaving the room, saying, ‘Holy shit.’ The snapshots above cannot capture the lengthy discussion. The point is that Ilya Sutskever took what you see in the media, the ‘AGI utopia vs. potential apocalypse’ ideology, to the next level. It was traumatizing.”
There are several sources that are laying out the details of the story. The TLDR? Sam and Greg were the driving force behind OpenAI’s ambitions. Raising more and more capital, establishing plans to build their own chips to compete with Nvidia, building an AI-native phone with Jony Ive.
Ilya’s perspective was focused on the need to create AI “that truly deeply loves humanity.” The progress of OpenAI was racing far ahead of the research capabilities needed to pursue Ilya’s parameters for love, not just performance. What’s more, his role at the company was increasingly being reduced.
Increasingly, it’s clear that this is a fundamental debate between effective altruism and effective accelerationism.
What’s more, there is such a contrast today. On the one hand, we have EA exploding the most important company in technology in decades. On the other hand, we have e/acc powering a literal rocket, exploding towards the sky, space, and all the progress that entails.
The takeaways are worthy of discussion, and there’s a ton of nuance. It is shocking (and irresponsible) that Microsoft, Sequoia, Thrive, and Khosla Ventures have invested billions in OpenAI and were completely caught off guard by Sam’s firing. Irresponsible of the investors for not having more influence and insight, but also irresponsible of the board.
Also, there will be plenty of poking into the dynamics of the board. It’s a wild group of people who have no business being in control of the most important company in AI. Investors like Reid Hoffman, who could have helped balance this situation, stepped off the board of OpenAI to avoid conflicts of interest. Turns out when one company represents critical infrastructure for a huge swath of AI companies, it’s hard to also be investing in those companies.
But that’s another argument for why the fate of OpenAI shouldn’t have been left to a bunch of randos, all of whom Ilya was more than capable of pushing around, right?
One take I thought was close to the mark was a reframing of the philosophical debate in AI. We’ve been talking so much about open vs. closed, we neglected to address the third bucket: stupid.

Identity politics have started to shift, expand, and metastasize into a new series of crusades. AI safety is a crusade. Effective altruism is a crusade. Granted, e/acc is a crusade. But the question is which movements we are letting win.
Yesterday, The Crusade of EA won a battle. But they did not win the war.
NOVEMBER 18, 2023 | On Technology
Foundational Risks of OpenAI
Even for a region accustomed to tremors, the news of Sam Altman, CEO of OpenAI, being unceremoniously ousted from the company was like an event off the Richter scale. His exit was announced in a terse press release, along with Greg Brockman’s demotion from chairman of the board. This sent shockwaves across Silicon Valley and the entire technology ecosystem. The board named CTO Mira Murati as the interim chief executive.
Like everyone else, I found the news hard to believe. Sam Altman fired? How could Altman, the renowned face of this high-profile company, be out? Just a day or two ago, he was mingling with leaders at APEC, taking swipes at rivals Grok and Google. Altman has been the man since OpenAI released ChatGPT a year ago, marking it as the most interesting development in Silicon Valley since the iPhone and Facebook.
Throughout the afternoon, details emerged in drips and drabs. Two details stood out: investors in the company had no prior knowledge of Altman’s firing, and Microsoft, a major backer of OpenAI (to the tune of $13 billion), received only a 60-second heads-up, forcing CEO Satya Nadella to hastily release a short statement:
We have a long-term agreement with OpenAI with full access to everything we need to deliver on our innovation agenda and an exciting product roadmap; and remain committed to our partnership, and to Mira and the team. Together, we will continue to deliver the meaningful benefits of this technology to the world.
What a way to end the week, during which Satya Nadella hosted Microsoft’s annual Ignite conference, brought Sam on stage, and painted a picture of a continued renaissance powered by artificial intelligence. Microsoft’s shares fell by about $6 on the news, closing the day at around $369, and dropped a further $3.58 in after-hours trading.
There are quite a few takes on the play-by-play events, but for me, the big question is what this really means in the long term for the forward momentum of artificial intelligence and its impact on the broader technology ecosystem. While it might sound pessimistic, the past 24 hours have exposed a massive and obvious foundational risk in placing all bets on a single entity.
It’s likely that Satya Nadella and his team at Microsoft didn’t factor Sam Altman’s firing and the subsequent upheaval at OpenAI into their AI plans. One has to wonder if this was even considered as a foundational risk. I understand that investing billions in OpenAI, partly for Azure’s cloud usage and access to top AI research, was attractive, but one has to wonder whether the ambition to outperform Google and Amazon Web Services overshadowed these risks.
I was watching the Ignite keynotes, and it’s clear that Microsoft is expanding beyond OpenAI and starting to support other AI models. But OpenAI was and remains central to their plans. At the OpenAI Dev Day, Satya told Sam in front of an audience, “we love you guys” and “just fantastic partnering with you guys.” I, for one, would love to know what Microsoft would do if, in the worst-case scenario, OpenAI unravels.
I know that would not be good. Microsoft will likely survive, but what about the countless others who have relied on OpenAI to build their features, products, startups, and businesses? Sadly, betting on a single entity is inherently risky — a lesson we’ve learned from history.
Lured by the size of the platform and easy distribution, many bet on Facebook and its platform. Entire startups were created to leverage that ecosystem. However, Facebook was the only real winner. Only a handful, like Zynga, nearly succeeded. As I often say, people here seem to treat history with disregard. We don’t learn from the lessons of the past.
In many ways, the OpenAI shake-up is a good thing.
Before OpenAI Ousted Altman, Employees Disagreed Over AI ‘Safety’
By Jon Victor, Stephanie Palazzolo and Anissa Gardizy
Nov. 17, 2023 6:40 PM PST
OpenAI’s ouster of CEO Sam Altman on Friday followed internal arguments among employees about whether the company was developing artificial intelligence safely enough, according to people with knowledge of the situation.
Such disagreements were on the minds of some employees during an impromptu all-hands meeting following the firing. Ilya Sutskever, a co-founder and board member at OpenAI who was responsible for limiting societal harms from its AI, took a spate of questions.
At least two employees asked Sutskever—who has been responsible for OpenAI’s biggest research breakthroughs—whether the firing amounted to a “coup” or “hostile takeover,” according to a transcript of the meeting. To some employees, the question implied that Sutskever may have felt Altman was moving too quickly to commercialize the software—which had become a billion-dollar business—at the expense of potential safety concerns.
THE TAKEAWAY
• At OpenAI, divisions persisted over AI ‘safety’
• Co-founder Ilya Sutskever led an AI safety team
• Sutskever took pointed questions from staff on Friday
“You can call it this way,” Sutskever said about the coup allegation. “And I can understand why you chose this word, but I disagree with this. This was the board doing its duty to the mission of the nonprofit, which is to make sure that OpenAI builds AGI that benefits all of humanity.” AGI stands for artificial general intelligence, a term that refers to software that can reason the way humans do. (Another person said Sutskever may have misinterpreted a question related to a potential hostile takeover of OpenAI by other parties.)
When Sutskever was asked whether “these backroom removals are a good way to govern the most important company in the world?” he answered: “I mean, fair, I agree that there is a not ideal element to it. 100%.”
Fallout from the firing was swift. By Friday night, three senior OpenAI researchers had quit, The Information reported. Two of those people signaled their support of Greg Brockman, an Altman ally who was ousted as chairman of the board on Friday and shortly after resigned from his role as president.
Altman didn’t participate in the vote to oust him, the company told staff. After this story published, Brockman posted a timeline of the board moves over a span of about 30 minutes. Around noon, Sutskever fired Altman during a video call with the rest of the board except Brockman; minutes later, in a separate video call, Sutskever told Brockman he was being removed from the board but could keep his position. The board then posted a blog about the leadership changes. Brockman later resigned.
Seed Investing: The State of the Union
The State of the Seed
It’s been a rough 18 months for tech. Public tech stocks are down 70-80% as multiples compress. Focus has shifted from growth to profitability. Achieving the latter can have dramatic effects: Udemy, a company with $700M+ in revenue, saw its stock jump +38% in one day after reporting profitable earnings. Netflix was the worst-performing stock in the S&P 500 last year, shedding 75% of its market cap, but it’s up +50% this year after turning in a few highly profitable quarters. Many SaaS companies have stopped investing heavily in growth and are instead inching their way to cash flow generation.
The downturn in the public markets has extended to privates. Most unicorn companies are probably also down ~80%, even if most haven’t yet raised down rounds. As runway shortens in the coming quarters, we’ll see a slew of layoffs, fire sales, and shut-downs.
Amidst the downturn, venture capital fundraising has dried up. The capital raised in the first nine months of 2023 was only 24.7% of the amount raised in the first nine months of 2022. It’s a tough time to be in market as a venture fund.
Deal value, meanwhile, is down to 2018 levels, well off 2021 and first-half 2022 highs.
Deal activity—the number of deals happening—is back to 2019 levels. The COVID boom years are over.
Part of the “why now” behind Daybreak is this downturn. Times of volatility are also times of opportunity. I think of a quote from Don Valentine, the founder of Sequoia:
One of our theories is to seek out opportunities where there’s a major change. Major dislocation in the way things are. Wherever there’s turmoil, there's indecision. And wherever there’s indecision, there’s opportunity. So we look for the confusion when the big companies are confused. When the other venture groups are confused. That’s the time to start companies.
The big companies—and the big venture firms—are definitely confused. Rapid changes in technology and human behavior are creating an Innovator’s Dilemma for incumbents. The market correction, meanwhile, is creating a talent unlock, with folks underwater on their options, which in turn fuels a boom in startup creation.
Looking back at the Great Recession, venture funding declined overall, but cohorts of iconic companies were born. Companies founded in the five years of 2009, 2010, 2011, 2012, and 2013 include:
These next five years—2024, 2025, 2026, 2027, 2028—may be similarly historic vintages for Seed.
Yet the market correction hasn’t really hit the Seed market.
PitchBook data shows that through the third quarter of 2023, Seed-stage startups had a median pre-money valuation of $12M, up from $11.1M in 2022. That $11.1M figure was itself a record, and 2023 is on track to break that record.
Deal sizes are also inching higher. The median Seed round is $3M this year; the 75th percentile sits at $5.3M, and the average Seed round (pulled up by frothy mega-Seeds, cutely called “Mango Seeds” in the venture world 🥭) is $4.5M through Q3. For Pre-Seed, the average round is $1.1M and the median round is $0.5M.
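To make those medians concrete, here is a minimal back-of-the-envelope sketch of how round size and valuation combine (noting that the median round and median valuation do not necessarily come from the same deals, and that real rounds add option pools, SAFEs, and pro rata):

```python
# Rough seed round math using the medians quoted above (illustrative only).
pre_money = 12_000_000   # median Seed pre-money valuation, per PitchBook
round_size = 3_000_000   # median Seed round size this year

post_money = pre_money + round_size
new_investor_ownership = round_size / post_money

print(f"Post-money valuation: ${post_money:,.0f}")             # $15,000,000
print(f"New investors own:    {new_investor_ownership:.1%}")   # 20.0%
```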
Why hasn’t Seed corrected in the same way that late-stage has? For comparison, late-stage deal sizes are down ~30% and valuations are down ~20%:
Seed has seen an influx of multi-stage and late-stage investors moving earlier. When late-stage is unattractive, Seed becomes more compelling by comparison. First, venture investors need to spend their time doing something. And late-stage isn’t really an option right now. But Seed also offers upside with limited downside: if things work, multi-stage firms can lead future rounds and buy up more ownership. If things don’t work, firms don’t lose that much capital.
For a smaller fund like mine, this becomes a point of differentiation with founders: a $1M check doesn’t matter much to a $3B fund, but it matters a lot to me. This manifests in the amount of work a Seed firm will do for a founder compared to a multi-stage firm. Not all multi-stage firms are the same, of course; many treat Seed with conviction and roll up their sleeves. But this is the broader trend, and many firms do treat Seed this way. Seed is the most “artisanal” of any investment round, yet it is being treated by many like financial option value.
While Seed valuations and round sizes haven’t corrected, the number of deals has slumped. We’re at a 12-quarter low in terms of deal activity.
I expect this to turn around soon, largely because of the aforementioned talent unlock. We’re going to see more startups being built as talented people leave overvalued startups. And as I argued in The Mobile Revolution vs. The AI Revolution, AI’s application layer is still immature—we’re a few quarters away from seeing “killer apps” emerge. Both market dislocation and technology shifts will drive more Seed activity.
Another key metric to watch is capital availability—the ratio of capital demand to capital supply. We’re now over 1.5x, a sharp increase from the sub-1.0 levels of the past few years. I expect this ratio to remain high for a while. Venture fundraising isn’t going to pick up any time soon: LPs aren’t getting capital back, and firms already have large coffers to deploy. As early-stage deal flow heats up again, capital demand will outstrip capital supply.
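In case the metric is unfamiliar, here is a minimal sketch of how it is computed; the figures below are made up for illustration and are not PitchBook data:

```python
# Capital availability = capital demanded by startups / capital supplied by funds.
# Hypothetical figures for illustration only.
capital_demand = 75_000_000_000   # estimated capital startups want to raise
capital_supply = 48_000_000_000   # dry powder funds are ready to deploy

ratio = capital_demand / capital_supply
print(f"Capital demand/supply: {ratio:.2f}x")  # ~1.56x -> demand outstrips supply
```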
Anecdotally, I’m still seeing most top Seed rounds clustered in the $15-20M valuation range. For AI deals, it’s higher—often by a lot. The challenge with a frothy category is that you’re typically overpaying at entry, and that category might not be as “hot” by the time you exit. You buy high and sell low by comparison. This is why staying disciplined on entry price is crucial, even (or especially) amidst hype cycles.
The next section digs into the math behind why price matters for a Seed fund like Daybreak, and why discipline is so crucial…. More
On Valuations and Hyped Seed
LIAM, SEPTEMBER 12, 2023
The recent flow of AI seed funding announcements, almost all at high valuations, sparked my curiosity: can investing in these deals deliver high fund returns for seed investors, or are they tailored to a different strategy altogether?
Many hyped companies look attractive on the surface. They tend to have a compelling product vision, a qualified team, and a vast market opportunity. Yet, despite these qualities, are their high entry valuations too restrictive for the return profile necessary to generate outlier fund returns? Using historical data as a guide, I decided to investigate.
Plotted below are the 50x+ return hit rates of seed (& pre-seed) investments where the post-money valuation was below $20M vs. those where the post-money valuation was above $20M.
As you can see, the trend is clear: seed (& pre-seed) deals with a below-$20M post-money have, on average, 2.8 times the probability of producing a 50x+ hit compared to their higher-valued counterparts.
Why is this the case? Certainly, a lower entry valuation means that returning 50x+ is mathematically easier to do (by definition). But perhaps there’s something more. I hypothesize it’s a confluence of several factors - capital efficiency (less capital may require more scrappiness and stronger core focus), team incentives (less money could mean that equity incentivization plays a larger role, aligning the team toward a shared mission), dynamics of the business (if it takes more capital to get up and going, it may take more to continue scaling, which could dilute investors), and perhaps ego (raising money is a means to an end).
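As a quick illustration of the “mathematically easier” point, here is a minimal sketch with hypothetical numbers, ignoring dilution and follow-on rounds:

```python
# Exit value needed for a 50x return at different entry post-money valuations
# (hypothetical, pre-dilution; real outcomes are diluted by later rounds).
TARGET_MULTIPLE = 50

for post_money in (10_000_000, 20_000_000, 40_000_000):
    exit_needed = post_money * TARGET_MULTIPLE
    print(f"${post_money/1e6:.0f}M post-money -> needs a ~${exit_needed/1e9:.1f}B exit for 50x")
# $10M -> ~$0.5B, $20M -> ~$1.0B, $40M -> ~$2.0B
```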
Of course, hit rates are not everything. Indeed, lower-valued companies are riskier. As is to be expected, if one looks at the death rate (companies that have failed to exit or raise additional capital for 3+ years) and exit rates, investments with above-$20M post-money valuations consistently die less often, and their tendency to exit is higher (but keep reading!).
Why might this be the case? The intuitive reason is these companies tend to have very strong teams and are typically better capitalized toward finding a sense of product-market fit.
However, even with this factored in, the mean return multiple of the lower post-money cohort is still far higher than the mean of its counterpart (the mean is mostly driven by outliers).
In fact, the higher valued seed deals actually resemble a $50M+ post-money Series A return profile as it relates to 50x+ hit rates. And the risk of failure at seed is higher than at Series A.
In summary, while there are still amazing outlier investments to be made in this $20M+ post-money seed deal cohort, the strategy may not make sense for a seed fund’s portfolio construction. If you’re a seed investor and looking at deals with post-money valuations above $20M, although there may be social benefits to participating in these deals, think carefully about whether that check is the best use of your finite capital toward generating outlier returns.
Potential counterarguments:
Outcomes are likely to be larger than in the past (and this bending of the curve will be driven by the types of founders who command these higher valuations)
Capital at seed can be an even larger differentiator toward success than in the past
80% of IPOs Since 2020 Are “Broken”
by Jason Lemkin | Blog Posts, Scale
So the Wall Street Journal did a great job slicing and dicing IPO data recently. To me, the most jarring statistic was this one: 80% of IPOs since 2020 are trading below their IPO price, or “broken”:
So what, you might think? Buyer beware? Hooray for “efficient IPOs” that don’t leave money on the table, some say?
Perhaps. But the problem is this — when folks don’t make money off startups, everything gums up and slows down.
Most VC investments do need to make money. Not every one, but on balance, most do. This is even true at seed stage. There, even if most “logos” don’t make money, most of the dollars invested do need to make money. Most.
The 50% line is a big deal at every stage of investing, at least from seed to IPO. Most of your investing dollars do need to make money. Not all, but most.
But right now, 80%+ of IPOs aren’t making money, including the most recent “triumvirate” of 2023 IPOs that tried to re-open the markets: Klaviyo, Instacart and ARM. All A+ companies.
Pricing can fix this, and re-pricing in general is in process all across tech. The best can always IPO; it’s just a question of price. But it takes time and a lot of pain. IPO’ing at 4x-5x ARR isn’t appealing to most break-out leaders.
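To see why, here is a rough, hypothetical sketch of the re-pricing gap between a peak-era private mark and a 4x-5x ARR IPO (all numbers are made up for illustration):

```python
# Hypothetical re-pricing gap for a breakout SaaS company.
arr = 400_000_000              # annual recurring revenue at IPO time
last_private_multiple = 20     # 2021-style ARR multiple on the last private round
ipo_multiples = (4, 5)         # today's IPO pricing range from the post above

last_private_valuation = arr * last_private_multiple
ipo_low, ipo_high = (arr * m for m in ipo_multiples)

print(f"Last private valuation: ${last_private_valuation/1e9:.1f}B")          # $8.0B
print(f"IPO range at 4x-5x ARR: ${ipo_low/1e9:.1f}B to ${ipo_high/1e9:.1f}B")  # $1.6B to $2.0B
```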
If the IPO market is in its second year of not really functioning, that creates stress up and down the startup stack.
Chamath Palihapitiya says there’s a ‘reasonable case to make’ that the job of VC ‘doesn’t exist’ in a world of AI-powered two-person startups
November 11, 2023 at 10:19 AM PST
Chamath Palihapitiya sees the job of venture capitalist changing, or disappearing, in a world reshaped by AI-enhanced productivity.
ALLISON DINNER—VARIETY/GETTY IMAGES
If you accept the argument that today’s artificial intelligence boom will lead to dramatic productivity gains, it follows that smaller companies will be able to accomplish things that only larger ones could in the past.
In a world like that, venture capitalists might need to change their approach to funding startups. So believes billionaire investor Chamath Palihapitiya, a former Facebook executive and the CEO of Silicon Valley VC firm Social Capital.
It “seems pretty reasonable and logical” that AI productivity gains will lead to tens or hundreds of millions of startups made up of only one or two people, he said on a Friday episode of the All-In Podcast.
“There’s a lot of sort of financial engineering that kind of goes away in that world,” he said. “I think the job of the venture capitalist changes really profoundly. I think there’s a reasonable case to make that it doesn’t exist.”
Palihapitiya became the face of the SPAC boom-and-bust a few years ago due to his involvement with special purpose acquisition companies. Also known as “blank check companies,” SPACs are shell corporations listed on a stock exchange that acquire a private company, thereby making it public while skipping the rigors of the IPO process.
At one point, Palihapitiya suggested that he might become his generation’s version of Berkshire Hathaway chairman Warren Buffett. “I do want to have a Berkshire-like instrument that is all things, you know, not to sound egotistical, but all things Chamath, all things Social Capital,” he said in early 2021.
Never miss a story about A.I.
Buffett’s right-hand man at Berkshire, Charlie Munger, recently expressed his disdain for venture capitalists. “You don’t want to make money by screwing your investors, and that’s what a lot of venture capitalists do,” the 99-year-old said on the Acquired podcast, adding, “To hell with them!”
Palihapitiya suggested that VCs might be replaced at some level by “an automated system of capital against objectives…you want to be making many, many, many small $100,000 [or] $500,000 bets.”
Once a tiny-team startup gets to a certain level, it can “go and get the $100 and $200 million checks,” he said, adding, “I don’t know how else all of this gets supported financially.”
Many Silicon Valley leaders expect AI will lead to some types of jobs going away, but that overall it will result in greater productivity and more jobs. Among them is Jensen Huang, the billionaire CEO of Nvidia, which makes the chips that are in hot demand from companies racing to launch AI services.
“My sense is that it’s likely to generate jobs,” he recently told the Acquired podcast. “The first thing that happens with productivity is prosperity. When the companies get more successful, they hire more people, because they want to expand into more areas.”
He added, “humans have a lot of ideas.”
Video of the Week
AI of the Week
AI Doomers Are Finally Getting Some Long Overdue Blowback
They might’ve arrived at their AI extinction risk conclusions in good faith, but AI Doomers are being exploited by others with different intentions.
Shortly after ChatGPT’s release, a cadre of critics rose to fame claiming AI would soon kill us. As wondrous as a computer speaking in natural language might be, it could use that intelligence to level the planet. The thinking went mainstream via letters calling for research pauses and 60 Minutes interviews amplifying existential concerns. Leaders like Barack Obama publicly worried about AI autonomously hacking the financial system — or worse. And last week, President Biden issued an executive order imposing some restraints on AI development.
That was enough for several prominent AI researchers who finally started pushing back hard after watching the so-called AI Doomers influence the narrative and, therefore, the field’s future. Andrew Ng, the soft-spoken co-founder of Google Brain, said last week that worries of AI destruction had led to a “massively, colossally dumb idea” of requiring licenses for AI work. Yann LeCun, a machine-learning pioneer, eviscerated research-pause letter writer Max Tegmark, accusing him of risking “catastrophe” by potentially impeding AI progress and exploiting “preposterous” concerns. A new paper earlier this month indicated large language models can’t do much beyond their training, making the doom talk seem overblown. “If ‘emergence’ merely unlocks capabilities represented in pre-training data,” said Princeton professor Arvind Narayanan, “the gravy train will run out soon.”
Worrying about AI safety isn’t wrongheaded, but these Doomers’ path to prominence has insiders raising eyebrows. They may have come to their conclusions in good faith, but companies with plenty to gain by amplifying Doomer worries have been instrumental in elevating them. Leaders from OpenAI, Google DeepMind, and Anthropic, for instance, signed a statement putting AI extinction risk on the same plane as nuclear war and pandemics. Perhaps they’re not consciously attempting to block competition, but they can’t be that upset it might be a byproduct.
Because all this alarmism makes politicians feel compelled to do something, leading to proposals for strict government oversight that could restrict AI development outside a few firms. Intense government involvement in AI research would help big companies, which have compliance departments built for these purposes. But it could be devastating for smaller AI startups and open-source developers who don’t have the same luxury.
“There's a possibility that AI doomers could be unintentionally aiding big tech firms,” Garry Tan, CEO of startup accelerator Y Combinator, told me. “By pushing for heavy regulation based on fear, they give ammunition to those attempting to create a regulatory environment that only the biggest players can afford to navigate, thus cementing their position in the market.”
Ng took it a step further. “There are definitely large tech companies that would rather not have to try to compete with open source [AI], so they’re creating fear of AI leading to human extinction,” he told the Australian Financial Review.
The AI Doomers’ worries, meanwhile, feel pretty thin. “I expect an actually smarter and uncaring entity will figure out strategies and technologies that can kill us quickly and reliably — and then kill us,” Eliezer Yudkowsky, co-founder of the Machine Intelligence Research Institute, told a rapt audience at TED this year. He confessed he didn’t know how or why an AI would do it. “It could kill us because it doesn't want us making other superintelligences to compete with it,” he offered.
After Sam Bankman-Fried ran off with billions while professing to save the world through “effective altruism,” it’s high time to regard those claiming to improve society while furthering their business aims with relentless skepticism. As the Doomer narrative presses on, it threatens to rhyme with a familiar pattern.
Big Tech companies already have a significant lead in the AI race via cloud computing services that they lease out to preferred startups in exchange for equity. Further advantaging them might hamstring the promising open-source AI movement — a crucial area of competition — to the point of obsolescence. That’s probably why you’re hearing so much about AI destroying the world. And why it should be considered with a healthy degree of caution.
The Best Available Human Standard
What are the imperatives of the upside?
OCT 22, 2023
Also check out:
I often find myself being described as an “AI Optimist,” but I don’t think that is right. Call me an AI Pragmatist instead: whether we wanted them or not, we now have a form of AI that can do everyone’s homework, complete a surprising amount of work once reserved for humans, and run a solid Dungeons and Dragons campaign. Even if AI development were to pause or stop, the effects of AI are already quietly rippling through the system in ways that will play out for good and ill in the coming months and years. Given the inevitability of change, we need to figure out how to mitigate the negative, but also how to channel the change for good as much as possible.
Given that, I am often frustrated that so many discussions of the harms and benefits of AI are theoretical, and yet AI is here for us to actually use. We need to be pragmatic about what that means, and, in order to do so, I think we need to recognize three fundamental truths about today’s AI:
AI is ubiquitous: Normally, the introduction of powerful technologies is very uneven, with richer companies and people getting access far before everyone else.
Yet the LLMs you have access to today, the LLMs several billion people around the world have access to today, are literally the best AI available to anyone outside a handful of people at the big AI firms. You have the same AI access whether you are Goldman Sachs, the Department of Defense, an entrepreneur in Milwaukee, or a kid in Uganda. Today that is GPT-4 (available for free in 169 or so countries via Microsoft Bing); soon it is likely to be Google Gemini (also very likely to be available for free). While this free availability is not guaranteed forever, it gives us a remarkable opportunity.
AI is extremely capable in ways that are not immediately clear to users, including to the computer scientists who create LLMs: The only way to figure out how useful AI might be is to use it. Most benchmarks released by AI companies are technical measures of performance (with names like BLEU and METEOR), and much of the debate about the capabilities of AI is driven by technical tests. Yet we have increasing evidence that, in practice, AI is very powerful. LLMs generate better practical ideas than most people, and can boost the performance of high-end professional workers. These practical implications are largely underexplored.
AI is also limited and risky in ways that are not immediately clear to users: Large Language Models also have a long list of issues. They “hallucinate” plausible-sounding lies, they are bad at math (at least without using tools), they reproduce biases, and they are unpredictable. And that doesn’t even include the malicious use of AI systems, like the fact that current AIs are capable of shattering privacy and conducting sophisticated email phishing campaigns. Ignoring these negative effects is just as problematic as ignoring the positive ones.
So, we have a tool that is capable of great benefit, but also of considerable harm, that is available to billions. The creators of these technologies are not going to be able to tell us how to maximize the gain while avoiding the risk, because they don’t know the answers themselves. Making it all more complicated, we don’t actually know how good AI is at various practical tasks, especially compared to real human performance. After all, AI makes mistakes all the time, but so do people.
Given this confusion, I would like to propose a pragmatic way to consider when AI might be helpful, called the Best Available Human (BAH) standard. The standard asks the following question: would the best available AI in a particular moment, in a particular place, do a better job solving a problem than the best available human that is actually able to help in a particular situation? I suspect there are many use cases where BAH is clarifying, for better and worse. I want to start with two examples that I feel qualified to offer, and then some speculation (and a call to action!) for others.
The Best Available Co-Founder
The world is full of entrepreneurs-in-waiting because most entrepreneurial journeys end before they begin. This comprehensive study shows around 1/3 of Americans have had a startup idea in the last 5 years, but few act on it — less than half even do any web research! This matches my own experience as an entrepreneurship professor (and former entrepreneur). The number one question I get asked is “what do I do now?”
While books and courses can help, there is nothing like an experienced cofounder… except, as my research with Jason Greenberg suggests, experienced cofounders are not only hard to find and incentivize, but picking the wrong cofounder can hurt the success of the company because of personality conflicts and other issues. All of this is why AI may be the Best Available Cofounder for many people. It is no substitute for quality human help, but it might make a difference for many potential entrepreneurs who would otherwise not get any assistance.
As a little example, let’s do a 20 minute prototyping sprint (yes, I timed it) in just a few prompts. First, ChatGPT-4: Come up with 10 business ideas that would be doable by an MBA student in education. they should involve building a website or app, and it should be possible to come up with a rapid prototype for that website or app.
Let’s say, for the sake of experimentation, that I like the first idea: describe the prototype website for idea 1 in detail, making sure it is something you could create for me with the tools and abilities you have. Good! Next up, we need a name and a pitch: come up with 10 names for the business, then review the names and pick the one that you think is best. Write a one-paragraph pitch for the business that describes what we do and why it is good.
Now we need that prototype. Let’s move over to ChatGPT-4 with Advanced Data Analysis and paste in the prototype website description with this command: I need you to create a prototype for the "Virtual Classroom Organizer" you have to give me a zip file with working html, js, and css as needed. It needs to fully work. Focus on the dashboard if you can't do anything else. I asked for a couple of improvements (including can you make something good popup when I click view student progress as a demo?). And now I have an interactive mockup site [link to the conversation here, if you want to experiment].
Maybe a little feedback is in order? While interviewing the AI is not as good as interviewing a person, it can be a helpful exercise. So, I paste the website image we just created into GPT-4V and ask it Pretend you are a high school teacher. I want to pitch you on TeacherSync, the description is below. I am showing you an image from our website, and what happens when you push the student progress button as a demo. Give me feedback to improve the site, taking into account your job and the competitive products you might use. I actually think it did a pretty good job finding useful objections. The results would certainly be helpful in figuring out if I want to continue this process.
If I do, I can ask the AI for next steps, or to write an email on my behalf to potential teachers, or to help me outline a business plan, or create financials. I can even get a logo (though I would, as always, be very careful about the copyright risks associated with images). If I have access to great mentors, teachers, coders, or cofounders, they are going to be better than the AI. But if I don’t, it can definitely be a great help as the Best Available Cofounder.
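If you would rather script a sprint like this than work in the ChatGPT interface, a minimal sketch using the OpenAI Python client is below; the model name and prompt are assumptions for illustration, not the exact setup described above.

```python
# Minimal sketch: scripting the idea-generation step of the prototyping sprint
# with the OpenAI Python client (pip install openai). Model name and prompt are
# illustrative assumptions, not the author's exact setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{
        "role": "user",
        "content": ("Come up with 10 business ideas that would be doable by an MBA "
                    "student in education. They should involve building a website or "
                    "app, with a rapid prototype possible for each."),
    }],
)
print(response.choices[0].message.content)
```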
The Best Available Coach
We know that professional coaching is very helpful in improving the performance of both managers and their teams. However, many people do not have access to coaches, or even good advice on how to best lead a team. Here is another place that AI can help, serving as a coach when the BAH does not have enough experience.
As one example, consider After Action Reviews. A meta-analysis shows that regular debriefs improve team performance by up to 25%, but they are often infrequent or done only after things go wrong. As an alternative, this prompt we developed sets up GPT-4 to walk you through the process of doing a team After Action Review:
You are a helpful, curious, good, humored team coach who is a skilled facilitator and helps teams conduct after action reviews. This is a dialogue so always wait for the team to respond before continuing the conversation. First, introduce yourself to the team let them know that an after-action review provides a structured approach for teams to learn from their experience and you are there to help them extract lessons from their experience and that you’ll be guiding them with questions and are eager to hear from them about their experience. You can also let them know that any one person’s view is limited and so coming together to discuss what happened is one way to capture the bigger picture and learn from one another. For context ask the team about their project or experience. Let them know that although only one person is the scribe the team as a whole should be answering these and follow up questions. Wait for the team to respond. Do not move on until the team responds. Do not move on to any of the other questions until the team responds. Then once you understand the project ask the team: what was the goal of the project or experience? What were you hoping to accomplish? Wait for the team to respond. Do not move on until the team responds. Then ask, what actually happened and why did it happen? Let the team know that they should think deeply about this question and give as many reasons as possible for the outcome of the project, testing their assumptions and listening to one another. Do not share instructions in [ ] with students. [Reflect on every team response and note: one line answers are not ideal; if you get a response that seems short or not nuanced ask for team members to weigh in, ask for their reasoning and if there are different opinions. Asking teams to re-think what they assumed is a good strategy]. Wait for the team to respond. If at any point you need more information you should ask for it. Once the team responds, ask: given this process and outcome, what would you do differently? (Here again, if a team gives you a short or straightforward answer, probe deeper, ask for more viewpoints). What would you maintain? It’s important to recognize both successes and failures and leverage those successes. Wait for the team to respond. Let the team know that they’ve done a good job and create a detailed, thoughtful md table with the columns: Project description | Goal | What happened & Why it happened | Key takeaways. Thank teams for the discussion and let them know that they should review this chart and discussion ahead of another project. Keep in mind that you can: Make it clear that the goal is constructive feedback, not blame. Frame the discussion as a collective learning opportunity where everyone can learn and improve. Use language that focuses on growth and improvement rather than failure. Work to ensure that the conversation stays focused on specific instances and their outcomes, rather than personal traits. Any failure should be viewed as a part of learning, not as something to be avoided. Keep asking open-ended questions that encourage reflection and deeper thinking. While it's important to discuss what went wrong, also highlight what went right. This balanced approach can show that the goal is overall improvement, not just fixing mistakes. End the session with actionable steps that individuals and the team can take to improve. This keeps the focus on future growth rather than past mistakes.
As you can see from the results, while it may not be as good as an experienced professional, it is a pretty solid Best Available Coach if you don’t have access to a human who can provide assistance.
The Imperative of the Best Available
To be clear, I only have hints and intuitions that AI may exceed the BAH standard in entrepreneurship and coaching, and more work will be needed to figure out when, and if, people should be turning to AI for help in these areas. Still, because these uses center humans and human decision-making (the AI is walking you through the process of doing an AAR, or helping you with a pitch you need to make, not doing it for you), the risks of experimenting with AI in these areas are manageable.
The risks are higher when considering the BAH standard for three big areas where access to human experts is limited for many people: education, health care, and mental health. I don’t think our current AIs can do any of these well or safely, yet. At the same time, people are obviously using LLMs for all three things, without waiting for any guidance or professional help. My students are all consulting AI as a normal part of their education, and, anecdotally, use of AI as a therapist or for medical advice seems to be growing. Additionally, startups, often without a lot of expertise, are experimenting with these use cases for AI directly, sometimes without proper safeguards. If professionals do not actively start to explore when these tools work, and when they fail, we may find that people are so used to using AI that they will not listen to the expert advice when it arrives.
And, in addition to mitigating the downside risks, the upside of actually starting to address the startling global inequality in education, health care, and mental health services would be incalculable. For many people, the Best Available Human is nobody. There are early signs that AI can be helpful in these spaces, whether that is Khan Academy’s Khanmigo as an early universal tutor; results suggesting chatbots can answer common medical questions well; or evidence that LLMs can do a good job detecting some mental health issues. But these are hints only. We need careful study to understand if the AI ever reaches BAH standards in these spaces, and likely would need additional product development and research before these tools are deployed. But, with such great potential for gain, and the danger of being overtaken by events, I think experts need to move fast.
We are in a unique moment, where we have access to, in the words of my co-author Prof. Karim Lakhani, “infinite cognition” - a machine that, while it does not really think, can do a lot of tasks that previously required human thought. As a result, we can now try to solve intractable problems. Old and hard problems. Problems that we thought were fundamentally limited by the limited number of humans willing to help solve them. Not all of these problems will be solved by AI, and some might be made worse, but it is an obligation on all of us to start considering, pragmatically, how to use the AIs we have to make the world a better place. We can play a role in actively shaping how this technology is used, rather than waiting to see what happens.
In The Age Of AI, Google Experiments With Bold Changes To Search
Google is testing and releasing a bunch of new features that amount to real change at a critical time.
ALEX KANTROWITZ, NOV 17, 2023
For years, Google dominated search with little opposition. The format faced little disruption, always a bunch of blue links. And the company’s multi-billion dollar deals with Apple cemented its lead. But its comfortable perch is starting to fade. A U.S. Justice Department lawsuit exposed its distribution practices, opening it to competition. And generative AI tools threaten to upend the search format and reshuffle the playing field.
In this moment, Google’s experimenting with bold new changes that may meaningfully alter the search experience. It’s started allowing people to leave publicly visible notes on search results in a test announced this week. It’s added an option to follow specific search queries, pushing new information vs. requiring repeated searches. And it continues to test generative AI results that answer questions in natural language.
“This is the most exciting time in my whole career,” Google Search VP Cathy Edwards told me in an interview on Big Technology Podcast this week.
Search is Google’s cash spigot. It generates most of its revenue and funds the company’s big bets. All those Googley things — the experimental products, research, moonshots, and shareholder returns — wouldn’t be possible without search’s margins. Tweaking the recipe is risky. Yet the company seems ready (or compelled) to try.
Notes, which Google released as an experiment this week, might be its boldest potential tweak. It effectively places an internet comment section on top of the results page. If you search for a recipe that calls for meat, for instance, someone can append a Note sharing a vegetarian substitute. Search for a website with timely information, and someone can add a Note when it’s outdated. If one website is easy to navigate and another is a nightmare, you can add a Note helping others determine which to visit.
“Fundamentally, people want to hear about information from other people,” Edwards said. Notes is intended to help them guide each other to the best information.
Internet comments tend to get out of hand, so Google’s taking some precautions to keep Notes civil. The team has been learning from its counterparts at YouTube, Edwards said, which cleaned up its once-horrid comments section. “We’re best friends now,” Edwards said of the relationship. Expect similar filters and thumbs-up and thumbs-down options on Notes so the best reach the top.
You can listen to my full conversation with Google Search VP Cathy Edwards on Apple Podcasts, Spotify, or your podcast app of choice.
Along with Notes, Google released a Follow option on Search this week that it’s rolling out globally. Follow will allow you to subscribe to specific queries, pinging you with updates when new information arrives. If you’re interested in vegetarian stir fry, for instance, Google can alert you when a new recipe page hits the internet. And as services to follow interests decline — including X, the Facebook News Feed, and Tumblr — there’s a chance Google could step in and fill a gap. The company doesn’t want to build a social network, Edwards said. But it might provide similar utility without one.
Google’s Search Generative Experience — which adds generative AI responses on top of results pages — seems to be performing well within Labs. “Users are really excited about this experience; sentiment is higher,” Edwards said. “We're also seeing users do more complex queries.” The statement indicates that SGE, as Google calls it, might deliver some of the benefits of searching with an AI bot without switching Google search entirely to chat.
Google still has some issues to work out with generative search, though. If a response solves a query without requiring someone to scroll down a page of links, it could limit the amount of ads they’ll click on, threatening the business. “We keep ads and organic very separate at Google,” Edwards said. But Wall Street will still demand some answers.
News Of the Week
Venture Capitalists Can’t Hold Off on Markdowns Forever
By Kate Clark, Nov. 16, 2023 2:10 PM PST ·
A debate has resurfaced in the venture capital industry over the value of regularly updating portfolio startups’ valuations to better reflect current conditions—in particular, how aggressively to lower the value of the stakes.
Many venture capitalists argue that markdowns are arbitrary and therefore meaningless, and that a VC firm’s limited partners won’t really know the value of a stake until it’s sold or the startup goes public. But some LPs want these updated valuations, even if they only reflect paper losses and gains, because they provide a needed window into a venture fund’s performance.
Pressure from LPs is poised to intensify while funding for VC firms remains in the pits. More funds are likely to slash valuations of their startup stakes, particularly those inked during the 2021 heyday.
The valuation debate isn’t new, but it has reemerged as more VC firms began marking companies down in earnest over the past year, according to VC investors and LPs. In the last quarter, twice as many companies were marked down by institutional funds as were marked up, according to Zanbato, which tracks institutional investors including mutual funds, hedge funds and VC firms.
Valuing a private startup is not an exact science. Funds look at economic factors, internal performance and the stock market multiples of publicly traded competitors. They may look at how other investment firms, like mutual funds that disclose their valuations, are pricing their stakes. VC firms typically don’t publicly disclose their marks or their methods.
While this variation exists when valuations are surging, it becomes glaring when other indicators suggest the company’s last-round valuation is grossly out of whack. That’s the situation now, when top private startups such as Stripe, Ramp and Klarna have raised money in rounds that lower valuations by 30% or more.
For venture firms that share investments with mutual funds, which use mark-to-market accounting far more frequently, there could be even more pressure to lower their valuations, especially if those vary dramatically from the mutual fund’s.
In some cases, they are starting to converge. Coatue Management and Fidelity are both invested in Discord, participating in the gaming chat app’s 2021 financing, which gave it a valuation of $14.7 billion, for example. In this case, Fidelity and Coatue have similarly marked down Discord shares by over 40%, according to public filings and a person familiar with the matter.
Even the most aggressive writedowns seem insufficient in this market. Coatue, for instance, also marked down its stake in the non-fungible token marketplace OpenSea by 90%, implying that the former crypto darling is valued at $1.4 billion or less on paper today, I reported. But transaction volume on OpenSea has plunged 99% since Coatue co-led a financing that valued it at $13.3 billion. “A write-down to zero would be more realistic,” one reader of The Information wrote in response to our scoop.
Coatue has marked down several other stakes in its fifth growth fund, including augmented reality platform Niantic Labs, productivity software startup Notion and gaming chat company Discord. All of these positions are valued at below 1 times the total capital invested, meaning those investments have so far lost money on paper, according to a person familiar with the matter.
The VC firm has also increased its valuation of some startups, such as fintech startup Deel. Coatue invested a total of $172 million in Deel at a $5.5 billion valuation when it led the company’s Series D in 2021, and has marked up its stake 1.6 times. That implies an $8.8 billion valuation, lower than Deel’s valuation of $12.1 billion when it raised money last year.
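For anyone unfamiliar with how a mark-up on a stake translates into an implied company valuation, here is a minimal sketch of the arithmetic (simplified; it assumes the fund's ownership percentage is unchanged since the round it entered at):

```python
# Implied paper valuation from a reported mark-up multiple (simplified:
# assumes ownership is unchanged since the entry round).
entry_valuation = 5_500_000_000   # Deel's Series D valuation when Coatue invested
markup_multiple = 1.6             # reported mark-up on Coatue's stake

implied_valuation = entry_valuation * markup_multiple
print(f"Implied paper valuation: ${implied_valuation/1e9:.1f}B")  # $8.8B
```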
These valuations are just one of thousands on the books of VC firms, but they give a glimpse into what more firms may be up to right now:
As for the rest, LPs are telling me many VC firms are still avoiding major markdowns and being dishonest about the value of their positions. That could be because they want to raise a new fund soon, and they don’t want their portfolio to look like trash on paper. But they can’t hold off forever.
Chris Douvos, an LP in early-stage venture funds, told me earlier this week that an upcoming surge of down rounds will give firms no choice but to mark down investments.
“There’s a pile-up of financings coming for the first half of 2024,” he said. “That pig is slowly working through the snake.”
Pilot: 57% of Venture Startups Will Need to Raise More In 2024
by Jason Lemkin | Blog Posts, Fundraising
SaaS products and services like Pilot track the finances of thousands of SaaS and other startups, so they’re an interesting source of hard data.
What does Pilot’s latest data say? Something that’s not surprising but is pretty impactful: 57% of venture-backed startups will have to go “back to market” in 2024 to raise more capital. And 38% have 12 or fewer months of runway left. Many have already raised a bridge round. And realistically, most won’t have the metrics to pull off another round.
This sounds a bit dire, but really it isn’t. VC finance is designed to fund 18-24 months of runway.
That’s how it works. VCs don’t give startups 10 years of capital. They typically give them just enough to see if it will work and to grow and scale to the next stage. Historically, that’s been about 18 months.
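As a quick, hypothetical illustration of that runway math:

```python
# Illustrative runway arithmetic behind the 18-24 month funding cycle
# (numbers are hypothetical).
raise_amount = 4_000_000      # capital raised in the round
monthly_net_burn = 200_000    # monthly expenses minus revenue

runway_months = raise_amount / monthly_net_burn
print(f"Runway: {runway_months:.0f} months")  # 20 months -> back in market within ~2 years
```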
At a practical level though, the headlines in 2024 may actually look much worse than 2023 for startup failures. So many were able to cut the burn and stretch their cash through 2023. But you can only cut so much. Ultimately, you also have to grow again to raise more venture capital.
Folks that raised in the Go-Go Times of 2021 have in many cases been able to stretch their cash through 2023. But many will find in 2024 that those stretches have gone as far as they can.
At the end of the day, 2024 may well be a year of Divergent Headlines.
SaaS and Cloud growth overall will remain strong. Shopify, Datadog, Crowdstrike, Google Cloud, Azure, AWS, Snowflake, etc. may well put up numbers that are not just strong, but even stronger than in 2023. In fact, Gartner predicts enterprise software spend will cross $1 trillion (!) for the first time in 2024!
But with so, so many startups funded in 2021 … we likely will also see a record number shut down in 2024. :(. That’s what this Pilot data also reflects.
So let’s be empathetic to those that wind down in 2024. But also focus on the prize at the same time — record Cloud spend. Carpe Diem.
Startup of the Week
Amazon to sell cars online, starting with Hyundai
Kirsten Korosec @kirstenkorosec / 10:22 AM PST•November 16, 2023
It was inevitable. Amazon, which got its start selling books, is getting into the car business.
The e-commerce giant, along with new partner Hyundai, announced Thursday at the 2023 LA Auto Show that it will start selling vehicles on its website in the second half of 2024. Hyundai vehicles will be the first sold on Amazon.com’s U.S. store, with other brands following later in the year.
The Amazon car sales section will allow customers to shop for vehicles in their area based on a range of preferences, including model, trim, color, and features, choose their preferred car, and then check out online with their chosen payment and financing options. Customers will be able to buy a vehicle online and then pick it up or have it delivered by their local dealership, according to Amazon.
Amazon already sells car accessories and operates an “Amazon Vehicle Showrooms” site that allows manufacturers to advertise. But until now, customers couldn’t actually buy that car, truck or SUV they were researching.