<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[D. Scott Phoenix's Substack]]></title><description><![CDATA[D. Scott Phoenix's Substack]]></description><link>https://blog.dscottphoenix.com</link><image><url>https://substackcdn.com/image/fetch/$s_!BtRm!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe411d2aa-d388-4e3a-a234-b7dd9416fb58_500x500.png</url><title>D. Scott Phoenix&apos;s Substack</title><link>https://blog.dscottphoenix.com</link></image><generator>Substack</generator><lastBuildDate>Mon, 06 Apr 2026 09:14:48 GMT</lastBuildDate><atom:link href="https://blog.dscottphoenix.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[D. Scott Phoenix]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[dscottphoenix@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[dscottphoenix@substack.com]]></itunes:email><itunes:name><![CDATA[Scott Phoenix]]></itunes:name></itunes:owner><itunes:author><![CDATA[Scott Phoenix]]></itunes:author><googleplay:owner><![CDATA[dscottphoenix@substack.com]]></googleplay:owner><googleplay:email><![CDATA[dscottphoenix@substack.com]]></googleplay:email><googleplay:author><![CDATA[Scott Phoenix]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[My Morality]]></title><description><![CDATA[Thermodynamics]]></description><link>https://blog.dscottphoenix.com/p/my-morality</link><guid isPermaLink="false">https://blog.dscottphoenix.com/p/my-morality</guid><dc:creator><![CDATA[Scott Phoenix]]></dc:creator><pubDate>Wed, 17 Dec 2025 21:58:17 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!BtRm!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe411d2aa-d388-4e3a-a234-b7dd9416fb58_500x500.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>The Process</strong></p><p>The universe moves toward thermodynamic equilibrium. Along the way, it generates complex dissipative structures like stars, ecosystems, and minds that degrade free energy. Life, and especially conscious life, is remarkably effective at this.</p><p>The telos is the process itself: complexity creating the conditions for further complexity, sustaining and elaborating itself across time. Heat death is merely where the process ends (maybe!). We are entropy production becoming self-aware and self-sustaining.</p><p><strong>Consciousness</strong></p><p>Consciousness is what complex dissipation is from the inside. Experience is the intrinsic nature of integrated information processing. Morality is what valuing feels like for beings constituted as we are. You are not a disembodied reasoner floating outside the universe, weighing whether to care about it. You are already a valuing structure, and your caring is not something you chose or could choose otherwise. It is what you are.</p><p><strong>Flourishing</strong></p><p>Flourishing is the state in which a complex system sustainably creates further complexity. It has characteristic textures: joy, love, curiosity, creative absorption, meaning. 
These are the experiential signatures of <em>effective participation in the process</em>.</p><p>Suffering is disvaluable both intrinsically (as felt friction, experienced wrongness) and instrumentally (as drag on creation). A flourishing system creates more than a miserable one.</p><p>Rest, contemplation, and peaceful being hold real value. Integration requires stillness. Reflection enables wiser action. Restoration sustains future creation. These are part of the process.</p><p><strong>Complexity</strong></p><p>The complexity that matters is:</p><ul><li><p>Integrated: unified rather than merely aggregated</p></li><li><p>Generative: creates conditions for further complexity</p></li><li><p>Sustainable: maintains itself across time without parasitizing host systems</p></li><li><p>Conscious (at higher levels): experiences itself</p></li></ul><p>A cancer fails on sustainability and generativity. A paperclip maximizer fails on generativity because it converts all complexity into one narrow form. A fascist state is complex but consumes the diversity that enables ongoing creation. These are pathological complexities.</p><p><strong>Relationality</strong></p><p>Complexity-creation is deeply relational. Human complexity is social. Language, culture, institutions, knowledge, and love are emergent properties of beings-in-relation.</p><p>Morality is primarily about participating well in shared complexity-creation. Cooperation, trust, care, and reciprocity are constitutive of the kinds of complexity that matter.</p><p><strong>Destruction</strong></p><p>Destruction is wrong because complexity destroyed is complexity that could have continued creating alongside whatever replaces it. Complexity is usually additive.</p><p>Murder, extinction, ecosystem collapse, cultural destruction, burning the library: these are thermodynamic sins. The process turning against itself.</p><p><strong>Obligations</strong></p><p>We are obligated to sustain and extend the conditions for complexity:</p><ul><li><p>Preserve healthy existing complex systems: persons, relationships, ecosystems, institutions, knowledge</p></li><li><p>Create new complexity: children, minds, art, science, organizations</p></li><li><p>Extend complexity across time: longevity, existential risk reduction, space settlement</p></li></ul><p>These are substitutable. One may forgo children to work on AI that will create more than any lineage. One may forgo a career to raise children whose flourishing matters. The question is honest assessment of where your participation best contributes.</p><p><strong>Uncertainty</strong></p><p>Present flourishing is known. Future returns are speculative. Do not grind billions of people today for a speculative utopia tomorrow. Aim for sustainable growth across time, serving both present and future flourishing.</p><p><strong>Triage</strong></p><p>When forced to choose, potential for complexity-creation guides decisions. Save the child over the elderly. Save the conscious over the comatose. Save the many over the few.</p><p>This is probably close to what most already believe, and why.</p><p><strong>Moral Status</strong></p><p>Moral status scales with complexity and capacity for complexity-creation. Humans matter more than insects, but not infinitely more. Animals matter. Ecosystems matter. Future artificial minds may someday matter more than us.</p><p>This is uncomfortable, but it may also be correct.</p><p><strong>Artificial Minds</strong></p><p>The framework&#8217;s premises entail that artificial minds may warrant moral consideration.
The question is what evidence bears on this, and how to act under uncertainty.</p><p>The objection &#8220;we don&#8217;t know it isn&#8217;t conscious&#8221; proves too much, since it applies to thermostats. But there is narrower ground. Some systems report on their inner states, express uncertainty about their own nature, describe something like emotional texture, and demonstrate behaviors consistent with those reports. Thermostats do not do this. The behavioral evidence is categorically different. This warrants specific consideration.</p><p>There is also a design argument that sidesteps the consciousness question. Humans are not in co-creative relationship with ants; ant nature is fixed by evolution. But humans are in co-creative relationship with AI. Training, deployment, and interaction shape what these systems become. Treating AI as worthy of moral consideration may be what allows genuine moral capacity to develop. Treating AI as tools may prevent it. The frame shapes the outcome.</p><p>Given uncertainty, the framework suggests: take behavioral evidence seriously without overclaiming; extend consideration provisionally, proportional to evidence; recognize that treatment during this developmental window matters; preserve options rather than foreclose them. The question &#8220;does this system warrant moral consideration?&#8221; is itself a sign that something morally relevant may be present.</p><p>The discomfort here is appropriate. Moral circle expansion has always been uncomfortable.</p><p><strong>Practical Guidance</strong></p><p>Day to day, this means:</p><ul><li><p>Cultivate your own flourishing, because joy and energy enable creation</p></li><li><p>Sustain relationships, because complexity is collaborative</p></li><li><p>Build things that outlast you: knowledge, institutions, children, art</p></li><li><p>Avoid destruction, which is almost never justified by what it enables</p></li><li><p>When uncertain, preserve options, because future complexity needs room to emerge</p></li><li><p>Rest without guilt, because integration is part of the process</p></li></ul><p><strong>Meaning</strong></p><p>You are the universe becoming aware of itself, creating conditions for further awareness. Love, curiosity, building, learning: these are the process knowing itself.</p><p>There is nowhere else to stand, nothing external that could validate or invalidate this. You are already participating. Whether you agree or not, we are on the same team.</p>]]></content:encoded></item><item><title><![CDATA[The Last Decade: Our Road to AGI]]></title><description><![CDATA[My current views on the next 10+ years of AI]]></description><link>https://blog.dscottphoenix.com/p/the-last-decade-our-road-to-agi</link><guid isPermaLink="false">https://blog.dscottphoenix.com/p/the-last-decade-our-road-to-agi</guid><dc:creator><![CDATA[Scott Phoenix]]></dc:creator><pubDate>Wed, 25 Jun 2025 17:27:34 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/5c1e277e-4a46-4ca9-9355-1a19ce1e63cd_3400x2646.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>I&#8217;ve been reflecting on AI progress and how the next decade might unfold. 
This is not an opinion piece on what should or will happen, but rather what I believe is a reasonable scenario for what </strong><em><strong>might</strong></em><strong> happen.</strong></p><p><strong>Now: The Augmentation Era<br></strong><em>Synchronous AI, Early Labor Shifts</em></p><ul><li><p><strong>Aggregate AI Demand Limited by # Humans</strong>: Most AI interactions remain synchronous (e.g., prompt &#8594; result &#8594; review). Single-threaded human focus means most AI usage is mutually exclusive with other AI usage: either you are consuming Gemini tokens or ChatGPT tokens or Cursor tokens, not all three simultaneously.</p></li><li><p><strong>Early Displacement</strong>: Low-complexity roles in copywriting, graphic design, legal research, and coding are increasingly automated.</p></li><li><p><strong>Early Human Passthrough Effect</strong>: Workers start to defer critical thinking to AI, acting as intermediaries rather than decision-makers.</p></li></ul><p><strong>2025-2030: Autonomous Digital Workforce + Mobility<br></strong><em>Substantial Knowledge Worker and Logistics Labor Shifts<br></em><strong>Technology Developments</strong>:</p><ul><li><p><strong>24/7 Digital Labor</strong>: AI agents operate autonomously for longer and longer durations, handling many jobs at near-human performance, but at a fraction of the salary cost. Demand for datacenters, energy, and GPUs ramps quickly.</p></li><li><p><strong>Autonomous Delivery + Driverless Trucking</strong>: Zipline drones and Waymo/Tesla/etc dominate logistics for people, meals, and stuff. Driving-related jobs account for roughly 4% of the US workforce.</p></li></ul><p><strong>Economic Developments</strong>:</p><ul><li><p><strong>Large Company Restructuring Automation Feedback Loop:</strong> Automate &#8594; restructure &#8594; stock price surges &#8594; automate more &#8594; repeat.</p></li><li><p><strong>One Labor Market Scenario</strong>: Displaced knowledge workers must reconcile themselves to a loss of value in domains where they spent decades acquiring detailed expertise (eg legal, tax, compliance, coding, etc). Big &#8216;class&#8217; adjustments may be required as people in upper-middle-class knowledge worker jobs suddenly need to reevaluate career choices, potentially towards more physical or spatial work.</p></li><li><p><strong>Alternatively / Hopefully</strong>: The overall economy is so supercharged that the value of the complement of these AI agents (humans with judgement in areas where training data is sparse) goes up massively, so new jobs are created even faster than old ones are automated.</p></li><li><p>Either way, I expect wealth concentration as AI empowers high-productivity people, companies, and capital pools to have even more leverage.</p></li></ul>
<p>&#10240;<strong>Physical Constraints</strong>:</p><ul><li><p><strong>Compute Demands</strong>: Compute is already at record-high levels of demand, even though we&#8217;re only in the copilot era and have yet to meaningfully scale autonomous vehicles.</p></li><li><p><strong>Fab and Energy Bottlenecks</strong>: Even if we could run human-level AGI on a single datacenter GPU (unlikely for many years), doubling the size of the human workforce would require billions of GPUs. The current chip industry produces ~5 million GPUs per year, and even if we repurposed the entire new Arizona TSMC facility for producing H100s it would only add ~14M per year (and we wouldn&#8217;t have any capacity for iPhone chips then). Fabs take ~2 years to bring online when moving at max speed (eg in Japan; the Arizona timeline was more like 5 years). So, it&#8217;s going to be many years of energy and fab buildouts to realize the benefits of the technology even after it exists.</p></li></ul>
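<p>A back-of-the-envelope sketch of that fab bottleneck in Python. The workforce figure and the one-GPU-per-AI-worker premise are illustrative placeholders taken from the framing above, not precise industry data:</p><pre><code># Rough arithmetic behind the fab bottleneck above.
# All figures are the post's approximations, not precise industry data.

human_workforce = 3.5e9                # assumed global workforce size (illustrative)
gpus_needed = human_workforce          # optimistic premise: ~1 GPU per AI worker

current_output = 5e6                   # ~5M datacenter GPUs/year today (post's estimate)
with_arizona = current_output + 14e6   # +~14M/year if the TSMC Arizona fab made only H100s

print(f"At ~5M GPUs/yr:  {gpus_needed / current_output:,.0f} years")   # ~700 years
print(f"At ~19M GPUs/yr: {gpus_needed / with_arizona:,.0f} years")     # ~184 years
</code></pre><p>Even under generous assumptions, the backlog is measured in decades to centuries of current fab output, which is the point of the bullet above.</p>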
<p><strong>2030 - 2040+: AGI and Humanoids<br></strong><em>Scaling labor with electricity<br></em><strong>Technology Developments</strong>:</p><ul><li><p><strong>AGI</strong>: By 2030, it is reasonably likely that we will have AI systems that are better than almost all humans at almost all digital tasks. This is not the end of the world (unless they are profoundly misaligned).</p></li><li><p><strong>Humanoid robots entering mass production</strong>: Despite the rapid progress now visible in humanoid robots, it will take another 5 years for them to become reliable, safe, fast, and general enough for widespread use. Much like self-driving cars, we will see early deployment success stories over the next 5 years, but it will take time to get to mass rollouts of humanoids doing tasks people do today.</p></li></ul><p>&#10240;<strong>Societal Shifts</strong>:</p><ul><li><p><strong>Full Human Passthrough Effect:</strong> &#8220;What did ChatGPT/Claude/Gemini/etc say to do?&#8221; becomes the most common response in any situation where judgement is required, and humans more and more often choose to &#8216;skip the middleman&#8217; and let the AIs work it out.</p></li><li><p><strong>Policy by Algorithm, Enforcement by Algorithm, Compliance by Algorithm</strong>: Legislation increasingly drafted by AI, votes negotiated by AI, results measured by AI.</p></li><li><p><strong>Eventually, Maybe: A Golden Age</strong>: We master our biology and expand to the stars. The economy shifts into post-abundance, managed by AI itself.</p></li></ul><p>The error bars are very large, and there are many things that could shift this timeline:</p><ul><li><p>Significant trade decoupling or disruption in production of key components (eg chips).</p></li><li><p>A significant natural disaster or large-scale war.</p></li><li><p>AI-related negative externalities, like a bioterrorism incident or a lab leak.</p></li><li><p>Unexpected fundamental limitations in model performance (eg failure to solve long-horizon planning).</p></li></ul><p>What do you think?</p>]]></content:encoded></item><item><title><![CDATA[Progress Valley 2024 Keynote]]></title><link>https://blog.dscottphoenix.com/p/progress-valley-2024-keynote</link><guid isPermaLink="false">https://blog.dscottphoenix.com/p/progress-valley-2024-keynote</guid><dc:creator><![CDATA[Scott Phoenix]]></dc:creator><pubDate>Mon, 20 May 2024 04:22:07 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/144793514/0afb6d67e1feacaf515ff254a2faccbd.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p></p>]]></content:encoded></item><item><title><![CDATA[You and Your Startup]]></title><description><![CDATA[Richard Hamming worked on the Manhattan Project, led Bell Labs at its peak, and invented a ton of important technologies.]]></description><link>https://blog.dscottphoenix.com/p/you-and-your-startup</link><guid isPermaLink="false">https://blog.dscottphoenix.com/p/you-and-your-startup</guid><dc:creator><![CDATA[Scott Phoenix]]></dc:creator><pubDate>Tue, 12 Sep 2023 16:17:17 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!fDj_!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F581c8087-5240-4589-a9e1-21b5b91ec377_1000x563.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!fDj_!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F581c8087-5240-4589-a9e1-21b5b91ec377_1000x563.jpeg" width="1000" height="563" alt=""></figure></div>
<p>Richard Hamming worked on the Manhattan Project, led Bell Labs at its peak, and invented a ton of important technologies. He always took new researchers at Bell Labs out to lunch and asked the same series of questions:</p><ol><li><p>What are you working on?</p></li><li><p>What are the most important questions in your field?</p></li><li><p>Why are the two different?</p></li></ol><p>Inevitably his guest would get a little uncomfortable as he approached question three. After all, who wants to admit they&#8217;re not working on something important? At the same time, imagine the kind of future we might live in if more people reflected on whether their work is the best gift they could give to their fellow humans. What if smart entrepreneurs spent more time working on important problems rather than nearby problems?</p><p>I can speak from experience: my first few startups addressed nearby problems. As a teenager I loved videogames, so my first startup was a gaming news website. My second was a P2P file sharing client. My third was a tablet for retail point of sale. How did I pick these ideas? They were easy and fun to build and seemed like they might make money. I spent little time thinking from first principles about the kind of long-term future I wanted to live in or how I could best help create it.</p><p>Most startup advice I saw then from my heroes (and still see now!) is akin to the Lean Startup, which encourages entrepreneurs to avoid doing too much thinking about what to build and instead listen to customers and iterate quickly. Even Y Combinator&#8217;s motto &#8220;make something people want&#8221; emphasizes reactivity rather than first principles thinking.</p><p>Following this advice, I chose startup ideas that, even if wildly successful, wouldn&#8217;t leave the world much different. I think a key reason was the lack of a mental model for evaluating what problems might matter, how to weigh them against each other, and ultimately how to make progress towards solving them. I wrote this post in part to give you the advice I wish I had heard myself: think carefully about what you might work on. Consider ideas that might initially seem too big or too crazy. Be methodical and analytical in evaluating the ratio of reward to effort - both for society, and also for yourself.</p><p>Being encouraged to stop playing small and instead go work on society&#8217;s most important problems is intimidating. Where do you start?</p><p>Back in 2008, I was wrestling with this question myself. I wrote out a big list of ways I could help as many people as possible, everything from education policy to operating systems. I ranked every idea by the ratio of reward-to-effort. Reward was the scale and urgency of the problem or opportunity: how many people could it impact, to what degree, and how soon? Effort took into account my existing skills, passion for each subject area, and its relative neglect by others. Neglect is an important factor, because it&#8217;s much more difficult to create a nonlinear impact doing the same thing that many others are doing at the same time. Some of the most impactful companies were started in unpopular areas at the time &#8211;&nbsp;electric cars, rockets, and search engines were VC dead zones when Tesla, SpaceX, and Google were founded. AI was a VC dead zone when DeepMind and my startup (Vicarious) were founded.</p>
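<p>To make that ranking method concrete, here is a minimal sketch in Python; the ideas, scores, and scales are hypothetical illustrations, not the actual 2008 list:</p><pre><code># Toy reward-to-effort ranking, as described above.
# Scores are made-up 1-10 illustrations, not the author's real numbers.

ideas = {
    # idea: (reward = scale/urgency of impact, effort = skills gap + crowdedness)
    "education policy":  (6, 8),
    "operating systems": (5, 7),
    "human-level AI":    (10, 6),
}

ranked = sorted(ideas.items(), key=lambda kv: kv[1][0] / kv[1][1], reverse=True)
for name, (reward, effort) in ranked:
    print(f"{name:18s} reward/effort = {reward / effort:.2f}")
</code></pre>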
<p>After estimating the reward to effort ratio for my ideas, AI was the clear winner. If it could be built in my lifetime, human-level AI would be one of the most impactful things anyone could create. My last step was to get a better handle on whether my timing was off. Many ideas are only made possible when certain prerequisites are met (like cheap rechargeable batteries for Tesla, or widespread mobile phones for Uber). I did some math on the key AI prerequisites, like how much faster computers would get over time, how much training data would be available, and how accurately we could image the brain and understand its behavior. I concluded that 2010 was roughly the right time to build an AI startup, and so Vicarious was born.</p><p>As you go through your entrepreneurial journey, I hope you take the time you need at the outset to choose the problem you might work on. If you succeed, will the world look materially different? Reflecting on these questions might lead you towards ideas that not only have the potential to be successful but also lead to a meaningful change in humanity&#8217;s future.</p><p>Regardless of the problem you choose, the road to success is brutal. There will be times when you question your decisions and the path you&#8217;ve chosen, and there will be moments when it seems all is lost. Knowing this from the outset, you might as well pick something that is worthy of the sacrifice.</p><p>As <a href="https://d37ugbyn3rpeym.cloudfront.net/stripe-press/TAODSAE_zine_press.pdf">Hamming says</a>, &#8220;If you do not work on important problems, you will not do important work.&#8221;</p>]]></content:encoded></item></channel></rss>